
Here is an example of what the brief answer could be.

Let's say I am applying for a Software Tester job with an IT company that designs software and hardware for GPS navigation devices. Prior to joining the software testing profession I was a deep sea captain for 4 years. Then I graduated from Portnov Computer School in California and became a software tester. Ironically, we actually had a few students with that same background.

As a teenager I dreamed about becoming a sailor, which brought me to the Marine Academy, and after graduation I worked for 4 years as a deep sea captain. I learned many things that later helped me to prosper in the software testing profession: discipline, compliance, documentation skills, and dealing with people, including really difficult ones. I also gained a lot of user experience with various software applications and electronic devices.

After a while I realized that changing my profession would improve my family life. I became involved in software testing training at Portnov School by accident: my neighbor gave me the idea to become a software tester. I finished my training, I completed the internship, and the more involved I became, the more I enjoyed the profession. As of today, I know for sure that software testing is my long-term career commitment, and it is my goal to become someone who makes a difference in software testing. Therefore, I am looking forward to becoming a member of a professionally strong team working on technically challenging projects. This position and your company are of special interest to me because I used navigation devices professionally before. Bringing together my passion for software quality and my previous experience as a deep sea captain is beyond any dreams. To me, it is a unique lifetime opportunity.
What is your weakness?
Tell them that there are two types of weaknesses:
• The ones we know about
• The ones we are not aware of
1. What are some recent major computer system failures caused by software bugs?
• A major U.S. retailer was reportedly hit with a large government fine in October of
2003 due to web site errors that enabled customers to view one another's online
orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all
their transportation products in order to fix a software problem causing instability in
certain circumstances. The company found and reported the bug itself and initiated
the recall procedure in which a software upgrade fixed the problems.
What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by
the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software.
It is geared to large organizations such as large U.S. Defense Department
contractors. However, many of the QA processes involved are appropriate to any
organization, and if reasonably applied can be helpful. Organizations can receive
CMM ratings by undergoing assessments by qualified auditors.
What are the different types of s/w testing? Explain briefly.
* COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with
different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually
or can be driven by an automated functional or regression test suite.

* CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

* FUNCTIONAL TESTING. Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.

* LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress
Testing.

* PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third-party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

* REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing (see the sketch after this list).

* SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work
without bothering with finer details. Originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it does not catch on fire.

* STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits
of its specified requirements to determine the load under which it fails and how. A graceful
degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing
is performed using the same process as Performance Testing but employing a very high level of
simulated load.

* UNIT TESTING. Functional and reliability testing in an engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
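
As a minimal illustration of the automated regression suite mentioned above, here is a Python sketch that re-runs an existing test directory against each new build using the standard unittest discovery mechanism; the "tests" directory name is an assumption.

    # Minimal regression-run sketch (assumes tests live in a hypothetical "tests/" directory).
    import sys
    import unittest

    def run_regression_suite(start_dir="tests"):
        # Discover every test module in the directory and run it as one suite.
        suite = unittest.TestLoader().discover(start_dir)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        # A non-zero exit code lets a build server flag the release as failed.
        return 0 if result.wasSuccessful() else 1

    if __name__ == "__main__":
        sys.exit(run_regression_suite())
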
2. What is capability testing? Explain.
If what is meant is scalability testing, then scalability testing can be performed as a series of load tests with different hardware or software configurations, keeping the other settings of the testing environment unchanged. The purpose of scalability testing is to determine whether your application scales with workload growth. Suppose your company expects a six-fold load increase on your server in the next two months. You may need to increase the server performance and shorten the request processing time to better serve visitors. If your application is scalable, you can shorten this time by upgrading the server hardware: for example, you can increase the CPU frequency and add more RAM. You can also increase the request performance by changing the server software, for example by replacing text-file data storage with SQL Server databases. To find the better solution, first test the hardware changes, then the software changes, and after that compare the results of the tests, as in the sketch below.
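
A minimal sketch of that comparison, assuming a hypothetical local endpoint http://localhost:8080/ and that the same script is run once per hardware or software configuration:

    # Hypothetical scalability check: time the same workload under each configuration
    # and compare the averages. The URL and request count are assumptions.
    import time
    import urllib.request

    def average_response_time(url="http://localhost:8080/", requests=100):
        timings = []
        for _ in range(requests):
            start = time.perf_counter()
            urllib.request.urlopen(url).read()
            timings.append(time.perf_counter() - start)
        return sum(timings) / len(timings)

    if __name__ == "__main__":
        # Run once per configuration and record the configuration label by hand.
        print(f"average response time: {average_response_time():.4f} s")
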
3. Testing methodologies?
Testing techniques are:
1. Black box testing 2. White box testing
Testing methodologies are:
1. Smoke testing 2. Sanity testing 3. Integration testing 4. System testing 5. Regression testing 6. Acceptance testing
4. How many types of testing are there?
There are a number of types of testing: Stand Alone Testing, Unit Testing, Static Testing, Proof of Concept Testing (POC Testing), System Testing, Functional Testing / Functionality Testing, User Interface Testing, Error Exit Testing, Help Information Testing, Integration Testing, Dynamic Testing, Black Box Testing, White Box Testing, Performance Testing, Stress/Load Testing, Volume Testing, Limit Testing, Disaster Recovery Testing, User Acceptance Testing (UAT), Free Fall Testing, Equivalence Class Partitioning, Boundary Value Analysis, Compatibility Testing / Data Migration, Security Testing.
The types of testing and their definitions are as follows.

Difference between stress and performance testing?

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly.
Application Scalability
Is the application scalable in terms of software as well? If yes, give an example of scalability.
Application scalability: the characteristics of the product under test related to the number of users the product can support. The aim is to determine whether the product scales with the workload as the network grows in number and complexity. These characteristics of a product under test may be related to user load, network, data capacity, and/or other failure modes related to the product's inability to scale beyond a particular level.
How can you do the following: 1) Usability testing 2) Scalability testing?
Usability testing (UT): testing the ease with which users can learn and use a product.
Scalability testing (ST): as a web testing definition, it allows web site capability to be improved.
What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
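
A minimal cross-browser sketch of this idea, assuming the Selenium WebDriver package and browser drivers are installed, and using a hypothetical URL and page title:

    # Hypothetical compatibility check: run the same simple verification in two browsers.
    # Assumes the selenium package and the Chrome/Firefox drivers are installed.
    from selenium import webdriver

    URL = "http://localhost:8080/login"     # assumed application under test
    EXPECTED_TITLE = "Login"                # assumed expected page title

    def check_browser(make_driver, name):
        driver = make_driver()
        try:
            driver.get(URL)
            ok = EXPECTED_TITLE in driver.title
            print(f"{name}: {'PASS' if ok else 'FAIL'} (title={driver.title!r})")
        finally:
            driver.quit()

    if __name__ == "__main__":
        check_browser(webdriver.Chrome, "Chrome")
        check_browser(webdriver.Firefox, "Firefox")
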
What is the purpose of testing?
The purpose of testing is to catch defects in the application in order to deliver a reliable product to the customer.
1. Testing improves the quality of the product. 2. We release the product without defects.
3. By testing, the product can be made trustworthy, reliable, and secure. 4. Without testing we cannot release the product.
Unit testing
Unit testing is the process of testing a singular item of software. An example would be a
window/form which allows a user to choose two ways of launching the application. Option A
will launch exe A where Option B will launch exe B. The single form can be launched on its
own ( normally by the developer ) and the function of launching each option can be confirmed
before adding the code to the main application.
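
A minimal unit-test sketch of that launcher example, with the function and executable names being purely hypothetical:

    # Hypothetical unit under test: maps a launcher option to the executable it should start.
    import unittest

    def executable_for_option(option):
        mapping = {"A": "appA.exe", "B": "appB.exe"}   # assumed executable names
        if option not in mapping:
            raise ValueError(f"unknown option: {option!r}")
        return mapping[option]

    class LauncherOptionTests(unittest.TestCase):
        def test_option_a_launches_exe_a(self):
            self.assertEqual(executable_for_option("A"), "appA.exe")

        def test_option_b_launches_exe_b(self):
            self.assertEqual(executable_for_option("B"), "appB.exe")

        def test_unknown_option_is_rejected(self):
            with self.assertRaises(ValueError):
                executable_for_option("C")

    if __name__ == "__main__":
        unittest.main()
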
Static Testing
Static tests are those that do not involve the execution of anything, be it code or executable specification. Static testing comprises comparing specifications.

The Requirement Expression states what the system is expected to achieve from the end user's point of view.

The System Specification lists the functions and attributes of the actual system in detail.

The System Design Specification states how the system is to be put together and, at a high level, what the overall software design of the system is.

A Module Specification states what an item of code is expected to do.

Proof of Concept Testing (POC Testing)
POC testing in many cases is the first opportunity to use the software and confirm that the program is capable of providing the desired end solution. In many cases the design requirements may have changed from the initial order, and this is the first opportunity to confirm the software is capable of adapting to meet the end requirement.

System Testing
System testing is the first time at which the entire system can be tested against the system specification. The specifications are defined within the business analysis documentation defining the program's purpose. System testing is in effect testing that the entire system is working together and that all the functionality of the system is performing as expected. System testing ONLY proves the system and does not prove the software or the data/work flow.
Below are some of the stages of system testing.

Functional Testing / Functionality Testing
Functional testing is the process of confirming the functionality of the application. Generally this form of testing can be scripted directly from the menu options of the application.

User Interface Testing
From a system testing point of view, User Interface Testing confirms that the forms/windows or GUIs which appear perform as specified and are sized and viewed as expected. Items such as menus and minimise and maximise options are checked.

Error exit testing
This form of testing confirms that the application and all its separate forms will close once open, and that any forms have cancel options in case the user has selected them accidentally.

Help Information Testing
The process of launching all the Help links within an application and confirming they launch the appropriate help item if required.
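
A minimal sketch of automating that check for web-based help links, with the list of help topics and URLs being purely hypothetical:

    # Hypothetical help-link check: confirm each help URL responds successfully.
    import urllib.error
    import urllib.request

    HELP_LINKS = {                                   # assumed help topics and URLs
        "Getting started": "http://localhost:8080/help/start",
        "Keyboard shortcuts": "http://localhost:8080/help/shortcuts",
    }

    def check_help_links(links):
        for topic, url in links.items():
            try:
                status = urllib.request.urlopen(url).status
                result = "OK" if status == 200 else f"unexpected status {status}"
            except urllib.error.URLError as exc:
                result = f"FAILED ({exc})"
            print(f"{topic}: {result}")

    if __name__ == "__main__":
        check_help_links(HELP_LINKS)
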
Integration Testing
Integration testing is often set up with its own testing team who only perform integration testing. The main purpose of this type of testing is to check whether the new software interferes with any other functionality of any other software which is running on the company's machines. Many companies may have 'loadsets' for each department (i.e. the accounts department's PCs will have different software from the art department's PCs; one would be the Accounts loadset where the other would be the Art department's loadset). Personally I would look to automate a large proportion of integration testing, along with developing a DLL/OCX database which would highlight immediate concerns just by looking at the installation files of any new software.

Dynamic Testing
Dynamic testing confirms that a deliverable, typically some software, functions according to its specifications. Test scripts and recorded results should be agreed within an acceptance plan.

Dynamic testing can be based on two different aspects: black box and white box testing.

Black Box Testing
Black box testing is the process of testing a function (such as a program which converts the format of an interface file) without having access to the code which is converting the data. The testing stages would consist of specifying the file before the conversion takes place and then confirming the changes which occur after the program has been run and converted the file. The name 'black box' comes from not being able to see how the function works.

White Box Testing
White box testing, on the other hand, allows the tester to see the code which is converting the data. Consequently the tester can write tests to include data which will 'trip up' the code.

Performance Testing
Performance testing is the most effective way to gauge an application's or an environment's capacity and scalability. This type of testing must be automated and must record the system's response times to a simulation of users logging onto the system. The expected ratio of users to response times will be identified before the tests are carried out. With good planning the performance tool can be used for ongoing analysis of the system and the behaviour of the users. Data can be assessed to identify the most popular times users log on and consequently the key time when the system will be under the greatest loads.
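
A minimal sketch of that kind of measurement, simulating a handful of concurrent users against a hypothetical login URL and recording their response times:

    # Hypothetical performance probe: simulate concurrent users and record response times.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/login"   # assumed endpoint under test
    USERS = 20                            # assumed number of simulated users

    def timed_request(_):
        start = time.perf_counter()
        urllib.request.urlopen(URL).read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            timings = sorted(pool.map(timed_request, range(USERS)))
        print(f"min {timings[0]:.3f}s  median {timings[len(timings) // 2]:.3f}s  max {timings[-1]:.3f}s")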

Stress/Load Testing
Such testing involves running the system under heavy loading by simulating users and functionality up to the point where the maximum loads anticipated in the design specification documentation are reached.

Volume Testing
Such tests submit the system to large volumes of data. Normally this is automated and consists of multiple processes being run simultaneously, increasing the size of the transaction files being processed. The volume which has been specified within the business requirements documentation can then be confirmed. As well as multiple files of increasing size, the system should be analysed and tested for single files too.

For instance, attached to a financial application was an audit log which detailed every transaction entered by the 120 users across the UK (not a high volume of users). The audit file was never refreshed, so the file just kept growing. The area which highlighted the potential issue was that the file was being used to create an interface file of the transactions which had occurred that week. The point at which this file broke the integrity of the system was during the backup procedure run overnight. I identified that within six months this file would grow to a surprising 1.2 gigabytes, which the system could still handle; however, during the backup procedure the system would require 2.4 gigabytes of space, which the Unix partition didn't have available.
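
A back-of-the-envelope sketch of that kind of projection, with the per-record size and daily transaction count as purely hypothetical assumptions chosen only to illustrate the arithmetic:

    # Hypothetical growth projection for an audit log that is never purged.
    RECORD_BYTES = 2_000           # assumed size of one audit record
    TRANSACTIONS_PER_DAY = 3_300   # assumed volume across the user base
    DAYS = 182                     # roughly six months
    BACKUP_COPIES = 2              # original file plus its backup copy

    log_bytes = RECORD_BYTES * TRANSACTIONS_PER_DAY * DAYS
    print(f"projected log size: {log_bytes / 1e9:.1f} GB")
    print(f"space needed during backup: {BACKUP_COPIES * log_bytes / 1e9:.1f} GB")
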

Limit Testing
At least one test should be developed for each of the documented system limits. Such tests are designed to investigate how the system will react to data which is maximal or minimal in the sense of attaining some limit either specified within the system specification or the user guide.

During system testing the system should also be tested beyond the limits specified for it. The purpose here is to find any situation where insufficient safety margins have been built in.

Disaster Recovery Testing
This is clearly a vital area of testing for safety-critical and similar systems. The system's reactions to failures of all sorts might need to be tested. During this testing we can identify any corruption and potential downtime during system failure.

User Acceptance Testing (UAT)
User acceptance testing is probably the best-known testing term among non-testers. Consequently, if the testing structure and stages have not been performed correctly, users will tend to lump all aspects of testing into the User Acceptance Testing stage. This is often due to defects from previous testing stages being fixed and regression tested. Some companies do not have the facility to have multiple testing environments (one for System Testing and one for UAT) as well as a development environment, so there is a high possibility that regression system testing will also happen during the UAT stage.

In truth the User Acceptance Testing stage should not include any of its previous testing stages (but time constraints and budget often intervene), and the explanation of UAT is described within its name. In short, User Acceptance Testing includes the processes and functionality performed by the users who will be using the system on a day to day basis. The tests will follow the processes from end to end with a fully functional and complete system. Additionally, and more difficult to identify, this phase will also include all the strange and wonderful things the users will attempt to do with the software, even though the software was never designed to do these things.

To identify some of the wonderful things the users will attempt, the tester must analyse the current system and identify the differences between the old system and the new. Less obvious scenarios can be obtained through testing methods such as Boundary Value Analysis and Equivalence Class Partitioning.

Maybe the best way to explain UAT is to break down each word within its name.

User

Users are the real business users who will have to operate the system on a day to day basis

Acceptance

The users' acceptance that the system completes all the requirements which are needed for day to day usage of the software as a business tool which gives benefit to the business. If this is an upgrade from a previous system, then the goal should be that the user can complete all the previous functionality of the old system and any new functionality which has been identified as beneficial.

Testing

This area can be broken into two halves

(1) Testing the system to prove that it behaves and produces the results expected by the users.
As you would expect these tested functions give the user confidence that the new software and
system will do everything they expected it to do. The users will confirm what needs to be
tested and will naturally sign off documentation which concludes that the tests performed
cover everything they need for acceptance. They will be happy that business will continue
with the new system.

(2) Testing the system to prove that it behaves and produces the results expected by the users even when they do the most obscure things which the software was never designed for. Sometimes 'only' users can perform these actions, as experienced software users would never do some of the things an inexperienced user would attempt.

Free Fall Testing


This form of testing is normally done just before release to the users and uses a system which
has already been tested. The general goal in this testing is to round up some of the key users of
the software and allow them to free fall their way through completing the normal daily tasks
they will be performing when the system goes live. Here some of the obscure things a user will
attempt to do will be highlighted and a final lockdown of certain functionality can be
identified.

Equivalence class partitioning


A software testing technique which identifies a small set of input values that invoke as many different input conditions as possible (see the sketch below).
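
A minimal sketch of partitioning, assuming a hypothetical validator that accepts ages from 18 to 65:

    # Hypothetical equivalence partitioning for an age field that accepts 18-65.
    def is_valid_age(age):
        return 18 <= age <= 65

    # One representative value per equivalence class stands in for the whole class.
    PARTITIONS = {
        "below range (invalid)": 10,
        "within range (valid)": 40,
        "above range (invalid)": 80,
    }

    for name, representative in PARTITIONS.items():
        print(f"{name}: is_valid_age({representative}) -> {is_valid_age(representative)}")
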

Boundary Value Analysis

A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The theory behind boundary value analysis is that if the system performs correctly for these special values then it is likely to work correctly for all the numbers in between.

An example would be: if a data field was set to accept amounts of money from 0 to 10 pounds, the boundaries would be £0.00, £0.01, £9.99 and £10.00.
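
A minimal boundary-value test of that example; validate_amount is a hypothetical stand-in for the real validation logic, and the just-outside values are added for illustration:

    # Hypothetical boundary-value tests for a field accepting £0.00 to £10.00.
    import unittest

    def validate_amount(pounds):
        # Stand-in for the real validation logic under test.
        return 0.00 <= pounds <= 10.00

    class AmountBoundaryTests(unittest.TestCase):
        def test_values_on_and_just_inside_the_boundaries_are_accepted(self):
            for value in (0.00, 0.01, 9.99, 10.00):
                with self.subTest(value=value):
                    self.assertTrue(validate_amount(value))

        def test_values_just_outside_the_boundaries_are_rejected(self):
            for value in (-0.01, 10.01):
                with self.subTest(value=value):
                    self.assertFalse(validate_amount(value))

    if __name__ == "__main__":
        unittest.main()
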

Compatibility Testing / Data Migration


Tests are made to probe where the new system does not subsume the facilities and modes of use of the old system where it was intended to do so. In these cases, and dependent on the system being tested, a parallel run should be considered so that data from the old system can be directly compared with the new system.

Security Testing
Tests are performed to attempt to compromise the system's security. This includes, as an example, accessing an Oracle database containing the data using multiple logins or unauthorised IDs. Additionally, hacking tools could be considered if the system can be accessed externally, such as over the internet.
What is Silk testing? Is it automated testing or a type of manual testing?
Silk is an automated testing tool, not a type of manual testing: SilkTest is used for functional test automation, and SilkPerformer is the related performance testing tool.
Advantages of automation over manual testing?
Automation vs. manual testing:
- Automation saves time and resources; it is the latest trend and a one-time effort, and it reduces the testing budget.
- Manual testing is driven manually by testers, i.e. by executing the whole testing flow.
- Manual testing takes more time and cost as it requires comparatively more resources.
- Automation gives very good coverage.
- Automation is preferable for huge applications that have many settings or configurations.
What is the difference between a Bug and a Defect?
Bug: a deviation from the expected result. Defect: a problem in the algorithm that leads to failure.

Severity and Priority for bugs

Give an example of a BUG that has HIGH SEVERITY and LOW PRIORITY, and the SAME BUG in the other case being LOW SEVERITY and HIGH PRIORITY.
Note: it should be the SAME BUG, not two different bugs.
Low priority, high severity: if the application crashes after using it 100 times, it has high severity but low priority.
High priority, low severity: if there is a spelling mistake on the home page, it is high priority but low severity.
What are the different components available in the Software Testing Life Cycle?
The test development life cycle contains the following components:
Requirements, Use Case Document, Test Plan, Test Case, Test Case Execution, Report Analysis, Bug Analysis, Bug Reporting.
How many processes are there in the Testing Life Cycle?
The Software Testing Life Cycle consists of:
1. Test Planning, 2. Test Analysis, 3. Test Design, 4. Construction and Verification, 5. Testing Cycles, 6. Final Testing and Implementation, and 7. Post Implementation.

Describe the Software Testing life cycle. What are the steps involved in the STLC (Software Testing Life Cycle)?
The steps in the SDLC are:
1. Proposal 2. Request for proposal 3. Negotiation 4. LOI (letter of intent) 5. Contract 6. URS (user specification) 7. SRS 8. HLD 9. LLD 10. Coding 11. Unit testing 12. Integration testing 13. System testing 14. User acceptance test (UAT) 15. Release with 90 days warranty 16. Maintenance: fix bugs -> upgrades -> enhancements.
What are the main benefits of test automation?
The main benefits of automation testing are speed, accuracy, and the fact that tests can be repeated.

What are the different types of Bugs we normally see in any of the Project? Include the
severity as well.
1. User Interface Defects -------------------------------- Low
2. Boundary Related Defects ------------------------------- Medium
3. Error Handling Defects --------------------------------- Medium
4. Calculation Defects ------------------------------------ High
5. Improper Service Levels (Control flow defects) --------- High
6. Interpreting Data Defects ------------------------------ High
7. Race Conditions (Compatibility and Intersystem defects)- High
8. Load Conditions (Memory Leakages under load) ----------- High
9. Hardware Failures:-------------------------------------- High
Do all testing methods have the same life cycle?
No, not necessarily. The process may change based on the type of testing carried out. It depends on the primary objective of the testing, viz. regression testing, SIT, or parallel testing. As you proceed with one of these types of testing, there might be some inclusion and removal of certain STLC tasks in between.
What is tree view in automation testing?
Tree view is a view in QTP in which you can see the test statements arranged as keywords in a tree structure. This view existed up to QTP 6.5; from QTP 8.2 onwards it was renamed 'Keyword View'.
Does automation replace manual testing?
No, manual testing cannot be replaced by automation. Automation can cover at most around 90-99 percent, not 100 percent, since the tool itself has certain limitations regarding memory management, resources, and the platform on which the product needs to be tested.

Software testing is the process used to measure the quality of developed computer software.
Usually, quality is constrained to such topics as correctness, completeness, security, but can also
include more technical requirements as described under the ISO standard ISO 9126, such as
capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
How would you test a fast laser printer?
- Test the power supply
- Test the PC connection
- Print a sample page
- Test the buffer
- Test the alignment
- Test print clarity
- Test printing speed
- Test overall performance
Stress testing: for all types of applications. Deny the application the resources it needs. For example, if the application is developed for 256 MB RAM or higher, test it on 64 MB RAM and see that it fails, and fails safely (see the sketch below).
Load testing: for client/server applications (2-tier or higher).
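
A minimal resource-denial sketch of that idea, assuming a Unix system where the process address space can be capped with the standard Python resource module; the limit and allocation sizes are assumptions to adjust for your environment:

    # Hypothetical stress test (Unix only): cap this process's address space and
    # confirm the code under test fails safely with MemoryError rather than crashing.
    import resource

    LIMIT_BYTES = 1 * 1024 ** 3             # assumed cap; adjust for your environment
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

    def workload():
        # Stand-in for the application under test: tries to allocate far more than the cap.
        return bytearray(4 * 1024 ** 3)

    try:
        workload()
        print("FAIL: workload succeeded despite the denied resources")
    except MemoryError:
        print("PASS: workload failed safely with MemoryError")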
