Index
1) Fundamentals of Testing (07 marks)
2) Testing throughout the software life cycle (06 marks)
3) Static Techniques (03 marks)
4) Test Design Techniques (12 marks)
5) Test management (08 marks)
6) Tool support for testing (04 marks)
7) Model Questions
8) Model Test - 1
9) Model Test - 2
10) Standards
11) Skill Test
12) Configuration management notes
13) Examination Guidelines
Prepared by
G.Chandra Mohan Reddy
www.GCREDDY.COM
1. Fundamentals of Testing
Important Terms:
1.1 Why is testing necessary?
bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
1.2 What is testing?
code, debugging, requirement, test basis, test case, test objective
1.3 Testing principles
1.4 Fundamental test process
confirmation testing, exit criteria, incident, regression testing, test condition, test
coverage, test data, test execution, test log, test plan, test strategy, test summary report
and testware.
1.5 The psychology of testing
independence.
I) General testing principles
A number of testing principles have been suggested over the past 40 years and offer
general guidelines common for all testing.
Test planning is the activity of verifying the mission of testing, defining the objectives of
testing and the specification of test activities in order to meet the objectives and mission.
It involves taking actions necessary to meet the mission and objectives of the project. In
order to control testing, it should be monitored throughout the project. Test planning
takes into account the feedback from monitoring and control activities.
Test analysis and design is the activity where general testing objectives are transformed
into tangible test conditions and test cases.
o Identifying and prioritizing test conditions based on analysis of test items, the
specification, behaviour and structure.
o Designing and prioritizing test cases.
o Identifying necessary test data to support the test conditions and test cases.
o Designing the test environment set-up and identifying any required infrastructure
and tools.
o Developing and prioritizing test procedures, creating test data and, optionally,
preparing test harnesses and writing automated test scripts.
o Creating test suites from the test procedures for efficient test execution.
o Logging the outcome of test execution and recording the identities and versions of
the software under test, test tools and testware.
o Repeating test activities as a result of action taken for each discrepancy. For
example, re-execution of a test that previously failed in order to confirm a fix
(confirmation testing), execution of a corrected test and/or execution of tests in
order to ensure that defects have not been introduced in unchanged areas of the
software or that defect fixing did not uncover other defects (regression testing).
o Checking test logs against the exit criteria specified in test planning.
o Assessing if more tests are needed or if the exit criteria specified should be
changed.
o Writing a test summary report for stakeholders.
o Checking which planned deliverables have been delivered, the closure of incident
reports or raising of change records for any that remain open, and the
documentation of the acceptance of the system.
o Finalizing and archiving testware, the test environment and the test infrastructure
for later reuse.
o Handing over testware to the maintenance organization.
o Analyzing lessons learned for future releases and projects, and the improvement of
test maturity.
Options for independence include:
o Tests designed by the person(s) who wrote the software under test (low level of
independence).
o Tests designed by another person(s) (e.g. from the development team).
o Tests designed by a person(s) from a different organizational group (e.g. an
independent test team) or by test specialists (e.g. usability or performance test
specialists).
o Tests designed by a person(s) from a different organization or company (i.e.
outsourcing or certification by an external body).
Questions:
1) When what is visible to end-users is a deviation from the specified or expected
behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
c) Independent Testing.
d) Destructive Testing.
50) What is the main reason for testing software before releasing it?
a) To show the system will work after release
b) To decide when software is of sufficient quality to release
c) To find as many bugs as possible before release
d) To give information for a risk based decision about release
51) Select a reason that does not agree with the fact that complete testing is
impossible:
a) The domain of possible inputs is too large to test.
b) Limited financial resources.
c) There are too many possible paths through the program to test.
d) The user interface issues (and thus the design issues) are too complex to completely
test.
Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to the four development levels.
The analysis and design of tests for a given test level should begin during the
corresponding development activity.
a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g.
modules, programs, objects, classes, etc.) that are separately testable.
One approach to component testing is to prepare and automate test cases before coding.
This is called a test-first approach or test-driven development.
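As an illustrative sketch of the test-first idea (the `add` function and its test are hypothetical, not taken from any specific project), the test case is written and automated before the component it exercises:

```python
import unittest

# Step 1 (test-first): the test is written before the component exists,
# so it initially fails because add() is not yet implemented.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2: the component is then implemented just far enough to pass the test.
def add(a: int, b: int) -> int:
    return a + b
```

Running the suite (e.g. with `python -m unittest`) after each small change keeps the component continuously verified against its tests.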
b) Integration testing
Integration testing tests interfaces between components, interactions with different parts
of a system, such as the operating system, file system, hardware, or interfaces between
systems.
Component integration testing tests the interactions between software components and is
done after component testing;
System integration testing tests the interactions between different systems and may be
done after system testing.
c) System testing
In system testing, the test environment should correspond to the final target or
production environment as much as possible in order to minimize the risk of environment-
specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications,
business processes, use cases, or other high level descriptions of system behaviour,
interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the
system.
d) Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other
stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the
system or specific non-functional characteristics of the system
The functions that a system, subsystem or component are to perform may be described in
work products such as a requirements specification, use cases, or a functional
specification, or they may be undocumented. The functions are “what” the system does.
A type of functional testing, security testing, investigates the functions (e.g. a firewall)
relating to detection of threats, such as viruses, from malicious outsiders. Another type of
functional testing, interoperability testing, evaluates the capability of the software product
to interact with one or more specified components or systems.
For QTP Information visit: www.gcreddy.com
For Software Testing Information visit: www.gcreddy.net
Non-functional testing includes, but is not limited to, performance testing, load testing,
stress testing, usability testing, maintainability testing, reliability testing and portability
testing. It is the testing of “how” the system works.
Structural (white-box) testing may be performed at all test levels. Structural techniques
are best used after specification-based techniques, in order to help measure the
thoroughness of testing through assessment of coverage of a type of structure.
After a defect is detected and fixed, the software should be retested to confirm that the
original defect has been successfully removed. This is called confirmation testing
(re-testing). Debugging (defect fixing) is a development activity, not a testing activity.
Regression testing may be performed at all test levels, and applies to functional, non-
functional and structural testing.
Once deployed, a software system is often in service for years or decades. During this
time the system and its environment are often corrected, changed or extended.
Maintenance testing for migration (e.g. from one platform to another) should include
operational tests of the new environment, as well as of the changed software.
Maintenance testing for the retirement of a system may include the testing of data
migration or archiving if long data-retention periods are required.
Maintenance testing may be done at any or all test levels and for any or all test types.
Questions:
1) What are the good practices for testing within the software development life
cycle?
a) Early test analysis and design
b) Different test levels are defined with specific objectives
c) Testers will start to get involved as soon as coding is done.
d) A and B above
2) Which option best describes objectives for test levels with a life cycle model?
a) Objectives should be generic for any test level
b) Objectives are the same for each test level.
c) The objectives of a test level don’t need to be defined in advance
d) Each level has objectives specific to that level.
3) Which of the following is a test type?
a) Component testing
b) Functional testing
c) System testing
d) Acceptance testing
4) Non-functional system testing includes:
a) Testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing for functions that should not exist
5) Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer’s site
c) Performed by an independent test team
d) Performed as early as possible in the lifecycle
6) Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
7) Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.
8) Integration testing in the small:
a) Tests the individual components that have been developed.
b) Tests interactions between modules or subsystems.
c) Only uses components that form part of the live system.
d) Tests interfaces to other systems.
9) Alpha testing is:
a) Post-release testing by end user representatives at the developer’s site.
b) The first testing that is performed.
a) I and III
b) I and IV
c) II and III
d) II and IV
c) requirements-based testing
d) top-down integration testing
15) To test a function, the programmer has to write a _________, which calls
the function to be tested and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above
17) Which one of the following describes the major benefit of verification early
in the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
18) The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.
a. Design based
b. Big-bang
c. Bottom-up
d. Top-down
a) Regression testing
23) “Which life cycle model is basically driven by schedule and budget risks” This
statement is best suited for
a) Water fall model
b) Spiral model
c) Incremental model
d) V-Model
A. Performance testing
B. Unit testing
C. Business scenarios
D. Static testing
A. Unit testing
B. Regression testing
C. Beta testing
D. Integration testing
a)component testing
b)non-functional system testing
c)user acceptance testing
d)maintenance testing
35) Match every stage of the software Development Life cycle with the Testing
Life cycle:
i. Global design
ii. System Requirements
iii. Detailed design
iv. User Requirements
a) Unit tests
b) Acceptance tests
c) System tests
d) Integration tests
3) Static Techniques
Important Terms:
3.1 Static techniques and the test process
dynamic testing, static testing, static technique
3.2 Review process
entry criteria, formal review, informal review, inspection, metric, moderator/inspection
leader, peer review, reviewer, scribe, technical review, walkthrough.
3.3 Static analysis by tools
Compiler, complexity, control flow, data flow, static analysis
1) Planning: selecting the personnel, allocating roles, defining entry and exit
criteria for more formal reviews, etc.
2) Kick-off: distributing documents, explaining the objectives, checking
entry criteria, etc.
3) Individual preparation: work done by each of the participants on their own
before the review meeting; noting questions and comments, etc.
4) Review meeting: discussion or logging, making recommendations for handling the
defects, or making decisions about the defects, etc.
5) Rework: fixing defects found, typically done by the author.
6) Follow-up: checking that the defects have been addressed, gathering metrics
and checking on exit criteria.
Note: walkthroughs, technical reviews and inspections can be performed within a peer
group - colleagues at the same organizational level. This type of review is called a “peer
review”.
Success factors for reviews include:
o People issues and psychological aspects are dealt with (e.g. making it a positive
experience for the author).
o Review techniques are applied that are suitable to the type and level of software
work products and reviewers.
o Checklists or roles are used if appropriate to increase effectiveness of defect
identification.
o Training is given in review techniques, especially the more formal techniques, such
as inspection.
V) Cyclomatic Complexity
Alternatively, one may calculate cyclomatic complexity using the decision-point rule:
number of decision points + 1.
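A minimal sketch of the decision-point rule (the keyword list and sample fragment are illustrative; real static-analysis tools parse the code properly rather than matching keywords in text):

```python
import re

def cyclomatic_complexity(source: str) -> int:
    """Decision-point rule: cyclomatic complexity = decision points + 1.
    Here, branching keywords are counted as decision points - a rough
    approximation that works for simple, well-formed fragments."""
    decisions = len(re.findall(r"\b(?:if|elif|for|while)\b", source))
    return decisions + 1

fragment = """
if x > 0:
    y = 1
elif x < 0:
    y = -1
while y > 0:
    y -= 1
"""
# Three decision points (if, elif, while) give a complexity of 4.
```

A straight-line fragment with no decisions has complexity 1, the minimum value.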
Questions
1. led by author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
8) Who is responsible for documenting all the issues, problems and open points that
were identified during the review meeting?
A. Moderator
B. Scribe
C. Reviewers
D. Author
d) Informal review
14) Which of the following statements about early test design are true and which
are false?
1. Defects found during early test design are more expensive to fix
2. Early test design can find defects
3. Early test design can cause changes to the requirements
4. Early test design can take more effort
15) Static code analysis typically identifies all but one of the following problems.
Which is it?
a) Unreachable code
b) Faults in requirements
c) Undeclared variables
d) Too few comments
17) What is the most important factor for successful performance of reviews?
a) A separate scribe during the logging meeting
b) Trained participants and review leaders
c) The availability of tools to support the review process
d) A reviewed test plan
c. Reviewer
d. Recorder
22) The person who leads the review of the document(s), planning the review,
running the meeting and following up after the meeting is the:
a. Reviewer
b. Author
c. Moderator
d. Auditor
24) The Kick Off phase of a formal review includes the following
a) Explaining the objective
b) Fixing defects found typically done by author
c) Follow up
d) Individual Meeting preparations
29) The phases of the formal review process are listed below; arrange them in the
correct order.
i. Planning ii. Review Meeting
iii. Rework iv. Individual Preparations
v. Kick Off vi. Follow up
a) i,ii,iii,iv,v,vi
b) vi,i,ii,iii,iv,v
c) i,v,iv,ii,iii,vi
d) i,ii,iii,v,iv,vi
o Experience-based techniques
I) Specification-based/Black-box techniques
Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Use case testing
Equivalence partitioning
o Inputs to the software or system are divided into groups that are expected to
exhibit similar behavior
o Equivalence partitions or classes can be found for both valid data and invalid data
o Partitions can also be identified for outputs, internal values, time related values
and for interface values.
o Equivalence partitioning is applicable at all levels of testing
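A small sketch of equivalence partitioning, using a hypothetical input field that accepts order numbers from 10000 to 99999 (the range matches question 1 below; the function name is illustrative):

```python
def classify_order_number(n: int) -> str:
    """Three equivalence partitions for a field accepting 10000-99999:
    below the range (invalid), inside the range (valid), above the
    range (invalid)."""
    if n < 10000:
        return "invalid-low"
    if n > 99999:
        return "invalid-high"
    return "valid"

# One representative value per partition is enough to cover all three classes:
for value in (9999, 50000, 100000):
    print(value, classify_order_number(value))
```

Any other representative from the same partition (e.g. 12345 instead of 50000) is expected to behave the same way, which is what lets one value stand in for the whole class.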
Statement coverage
The percentage of executable statements that have been exercised by a test suite
Statement testing
A white box test design technique in which test cases are designed to execute statements
Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite
100% decision coverage implies both 100% branch coverage and 100% statement
coverage
Decision testing
A white box test design technique in which test cases are designed to execute decision
outcomes.
Condition coverage
The percentage of condition outcomes that have been exercised by a test suite
Condition testing
A white box test design technique in which test cases are designed to execute condition
outcomes
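A minimal sketch (the function and its inputs are hypothetical) of why 100% statement coverage does not guarantee 100% decision coverage:

```python
def apply_discount(total: float) -> float:
    """Apply a 10% discount to large orders (illustrative business rule)."""
    if total > 100:      # one decision with two outcomes: True and False
        total *= 0.9     # the only statement inside the branch
    return total

# The single test apply_discount(200) executes every statement, giving 100%
# statement coverage, but exercises only the True outcome of the decision
# (50% decision coverage). Adding apply_discount(50) covers the False outcome
# and brings decision coverage to 100%.
```

The reverse does hold: since every decision outcome leads through some statements, a suite achieving 100% decision coverage also achieves 100% statement coverage.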
Error guessing
o Error guessing is a commonly used experience-based technique
o Generally, testers anticipate defects based on experience; a list of such defects
can be built from experience, available defect data, and common knowledge about
why software fails.
Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and
learning, based on a test charter containing test objectives, and carried out within
time boxes
o It is an approach that is most useful where there are few or inadequate
specifications and severe time pressure.
Questions
1) Order numbers on a stock control system can range between 10000 and
99999 inclusive. Which of the following inputs might be a result of designing
tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000
Which of the following input values cover all of the equivalence partitions?
a. 10, 11, 21
b. 3, 20, 21
c. 3, 10, 22
d. 10, 21, 22
10) Using the same specifications as question 9, which of the following covers
the MOST boundary values?
a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21
16) An input field takes the year of birth between 1900 and 2004
The boundary values for testing this field are
a. 0,1900,2004,2005
b. 1900, 2004
c. 1899,1900,2004,2005
d. 1899, 1900, 1901,2003,2004,2005
18) When testing a grade calculation system, a tester determines that all scores
from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis
is known as:
a) Equivalence partitioning
b) Boundary value analysis
c) Decision table
d) Hybrid analysis
19) Which technique can be used to achieve input and output coverage? It can
be applied to human input, input via interfaces to a system, or interface
parameters in integration testing.
a) Error Guessing
b) Boundary Value Analysis
c) Decision Table testing
d) Equivalence partitioning
20) Features to be tested, approach, item pass/fail criteria and test deliverables
should be specified in which document?
a) Test case specification
b) Test procedure specification
c) Test plan
d) Test design specification
Which test inputs (in grams) would be selected using boundary value analysis?
23) If the temperature falls below 18 degrees, the heating system is switched
on. When the temperature reaches 21 degrees, the heating system is switched
off. What is the minimum set of test input values to cover all valid equivalence
partitions?
a) 15, 19 and 25 degrees
b) 17, 18, 20 and 21 degrees
c) 18, 20 and 22 degrees
d) 16 and 26 degrees
27) Find the equivalence classes for the following test case:
Enter a number to test validity, accepting the numbers between 1 and
99
a) All numbers < 1
b) All numbers > 99
c) Number = 0
d) All numbers between 1 and 99
30) The following defines the statement of what the tester is expected to
accomplish or validate during a testing activity:
a) Test scope
b) Test objective
c) Test environment
d) None of the above
c) 1 to 10
d) None of the above
33) Deliverables of test design phase include all the following except
a) Test data
b) Test data plan
c) Test summary report
d) Test procedure plan
d) none of these
41) Find the invalid equivalence class for the following test case
Draw a line up to the length of 4 inches
a) Line with 1 dot-width
b) Curve
c) line with 4 inches
d) line with 1 inch.
43) Which of the following best describes the difference between clear
box and opaque box?
1. Clear box is structural testing, opaque box is Ad-hoc testing
2. Clear box is done by tester, and opaque box is done by developer
3. Opaque box is functional testing, clear box is exploratory testing
a) 1
b) 1 and 3
c) 2
d) 3
44) What is the concept of introducing a small change to the program and having
the effects of that change show up in some test?
a) Desk checking
b) Debugging a program
c) A mutation error
d) Introducing mutation
45) How many test cases are necessary to cover all the possible sequences of
statements (paths) for the following program fragment? Assume that the two
conditions are independent of each other : - …………
if (Condition 1)
then statement 1
else statement 2
fi
if (Condition 2)
then statement 3
fi
…………
a. 1 test case
b. 3 Test Cases
c. 4 Test Cases
d. Not achievable
46) Given the following code, which is true about the minimum number of test
cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print “Large”
ENDIF
If P > 50 THEN
Print “P Large”
ENDIF
52) If the pseudo code below were a programming language, how many tests
are required to achieve 100% statement coverage?
1. If x=3 then
2. Display_messageX;
3. If y=2 then
4. Display_messageY;
5. Else
6. Display_messageZ;
a. 1
b. 2
c. 3
d. 4
53) Using the same code example as the previous question, how many tests are
required to achieve 100% branch/decision coverage?
a. 1
b. 2
c. 3
d. 4
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing
IF A>B THEN
C=A–B
ELSE
C=A+B
ENDIF
Read D
IF C = D THEN
Print “Error”
ENDIF
a) SC = 1 and DC = 3
b) SC = 1 and DC = 2
c) SC = 2 and DC = 2
d) SC = 2 and DC = 3
57) The specification: an integer field shall contain values from and including 1
to and including 12 (number of the month)
Now decide the minimum number of tests that are needed to ensure that all the questions
have been asked, all combinations have occurred and all replies given.
a) 3
b) 4
c) 5
d) 6
5) Test management
Important terms:
5.1 Test organization
Tester, test leader, test manager
5.2 Test planning and estimation
Test approach
5.3 Test progress monitoring and control
Defect density, failure rate, test control, test monitoring, test report.
5.4 Configuration management
Configuration management, version control
5.5 Risk and testing
Risk, product risk, project risk, risk-based testing
5.6 Incident Management
Incident logging, incident management
1) Test organization
a) Test organization and independence
The effectiveness of finding defects by testing and reviews can be improved by using
independent testers. Options for independence are:
Independent test specialists for specific test targets such as usability testers,
security testers or certification testers (who certify a software product against
standards and regulations).
Independent testers see other and different defects, and are unbiased.
An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:
Coordinate the test strategy and plan with project managers and others.
Write or review a test strategy for the project, and test policy for the organization.
Contribute the testing perspective to other project activities, such as integration
planning.
Plan the tests – considering the context and understanding the test objectives and
risks –including selecting test approaches, estimating the time, effort and cost of
testing, acquiring resources, defining test levels, cycles, and planning incident
management.
Initiate the specification, preparation, implementation and execution of tests,
monitor the test results and check the exit criteria.
Adapt planning based on test results and progress (sometimes documented in
status reports) and take any action necessary to compensate for problems.
Set up adequate configuration management of testware for traceability.
Introduce suitable metrics for measuring test progress and evaluating the quality
of the testing and the product.
Decide what should be automated, to what degree, and how.
Select tools to support testing and organize any training in tool use for testers.
Decide about the implementation of the test environment.
Write test summary reports based on the information gathered during testing.
Note: People who work on test analysis, test design, specific test types or test automation
may be specialists in these roles. Depending on the test level and the risks related to the
product and the project, different people may take over the role of tester, keeping some
degree of independence. Typically testers at the component and integration level would be
developers; testers at the acceptance test level would be business experts and users, and
testers for operational acceptance testing would be operators.
c) Defining skills test staff need
Nowadays a testing professional must have ‘application’ or ‘business domain’ knowledge
and ‘technology’ expertise apart from testing skills.
Determining the scope and risks, and identifying the objectives of testing.
Defining the overall approach of testing (the test strategy), including the definition
of the test levels and entry and exit criteria.
Integrating and coordinating the testing activities into the software life cycle
activities: acquisition, supply, development, operation and maintenance.
Making decisions about what to test, what roles will perform the test activities, how
the test activities should be done, and how the test results will be evaluated.
Scheduling test analysis and design activities.
Scheduling test implementation, execution and evaluation.
Assigning resources for the different activities defined.
Defining the amount, level of detail, structure and templates for the test
documentation.
Selecting metrics for monitoring and controlling test preparation and execution,
defect resolution and risk issues.
Setting the level of detail for test procedures in order to provide enough
information to support reproducible test preparation and execution.
b) Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test
level or when a set of tests has achieved a specific goal.
Two approaches for the estimation of test effort are covered in this syllabus:
The expert-based approach: estimating the tasks by the owner of these tasks or by
experts.
Once the test effort is estimated, resources can be identified and a schedule can be drawn
up.
The testing effort may depend on a number of factors, including:
Characteristics of the product: the quality of the specification and other information
used for test models (i.e. the test basis), the size of the product, the complexity of
the problem domain, the requirements for reliability and security, and the
requirements for documentation.
The outcome of testing: the number of defects and the amount of rework required.
One way to classify test approaches or strategies is based on the point in time at which
the bulk of the test design work is begun:
Reactive approaches, where test design comes after the software or system has
been produced.
Risk of failure of the project, hazards to the product and risks of product failure to
humans, the environment and the company.
Skills and experience of the people in the proposed techniques, tools and methods.
The objective of the testing endeavour and the mission of the testing team.
Regulatory aspects, such as external and internal regulations for the development
process.
The nature of the product and the business.
What happened during a period of testing, such as dates when exit criteria were
met.
Metrics should be collected during and at the end of a test level in order to assess:
c) Test control
Test control describes any guiding or corrective actions taken as a result of information
and metrics gathered and reported. Actions may cover any test activity and may affect
any other software life cycle activity or task.
4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the
products (components, data and documentation) of the software or system through the
project and product life cycle.
All items of testware are identified, version controlled, tracked for changes, related
to each other and related to development items (test objects) so that traceability
can be maintained throughout the test process.
All identified documents and software items are referenced unambiguously in test
documentation
For the tester, configuration management helps to uniquely identify (and to reproduce)
the tested item, test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools)
should be chosen, documented and implemented.
improper attitude toward or expectations of testing (e.g. not appreciating the value
of finding defects during testing).
Technical issues:
problems in defining the right requirements;
the extent that requirements can be met given existing constraints;
the quality of the design, code and tests.
Supplier issues:
failure of a third party;
contractual issues.
b) Product risks
Potential failure areas (adverse future events or hazards) in the software or system are
known as product risks, as they are a risk to the quality of the product, such as:
Risks are used to decide where to start testing and where to test more; testing is used to
reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse
effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control
activity provides feedback about the residual risk by measuring the effectiveness of critical
defect removal and of contingency plans.
Risk-based testing draws on the collective knowledge and insight of the project
stakeholders to determine the risks and the levels of testing required to address those
risks.
To ensure that the chance of a product failure is minimized, risk management activities
provide a disciplined approach to:
In addition, testing may support the identification of new risks, may help to determine
what risks should be reduced, and may lower uncertainty about risks.
6) Incident management
Since one of the objectives of testing is to find defects, the discrepancies between actual
and expected outcomes need to be logged as incidents. Incidents should be tracked from
discovery and classification to correction and confirmation of the solution. In order to
manage all incidents to completion, an organization should establish a process and rules
for classification.
Incidents may be raised during development, review, testing or use of a software product.
They may be raised for issues in code or the working system, or in any type of
documentation including requirements, development documents, test documents, and
user information such as “Help” or installation guides.
Incident reports have the following objectives:
Provide developers and other parties with feedback about the problem to enable
identification, isolation and correction as necessary.
Provide test leaders a means of tracking the quality of the system under test and
the progress of the testing.
Provide ideas for test process improvement.
Details of an incident report may include:
Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed
awaiting retest, closed).
Conclusions, recommendations and approvals.
Global issues, such as other areas that may be affected by a change resulting from
the incident.
Change history, such as the sequence of actions taken by project team members
with respect to the incident to isolate, repair, and confirm it as fixed.
References, including the identity of the test case specification that revealed the
problem.
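The status values and change history described above can be pictured as a small state machine, which is also why state transition testing fits a defect management system (see question 3 below). The following is an illustrative sketch only; the status names are taken from the notes, but the allowed transitions are an assumption, not part of any standard.

```python
# Hypothetical defect-status tracker that enforces allowed state
# transitions; the transition table is an illustrative assumption.
ALLOWED = {
    "open": {"deferred", "duplicate", "waiting to be fixed"},
    "waiting to be fixed": {"fixed awaiting retest"},
    "fixed awaiting retest": {"closed", "open"},  # reopen if the retest fails
    "deferred": {"open"},
    "duplicate": set(),
    "closed": set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.status = "open"
        self.history = ["open"]  # change history, as the notes describe

    def move_to(self, new_status):
        # Enforce the classification rules: reject any transition
        # that is not explicitly allowed from the current status.
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status!r} -> {new_status!r}")
        self.status = new_status
        self.history.append(new_status)

inc = Incident("Report total is wrong")
inc.move_to("waiting to be fixed")
inc.move_to("fixed awaiting retest")
inc.move_to("closed")
```

Testing such a tracker means exercising both the valid transitions and the forbidden ones, which is exactly what a state transition technique systematizes.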
Questions
1) The following list contains risks that have been identified for a software
product to be developed. Which of these risks is an example of a product risk?
2) Which set of metrics can be used for monitoring of the test execution?
3) A defect management system shall keep track of the status of every defect
registered and enforce the rules about changing these states. If your task is to
test the status tracking, which method would be best?
a) Logic-based testing
b) Use-case-based testing
c) State transition testing
d) Systematic testing according to the V-model
a) Because configuration management assures that we know the exact version of the
testware and the test object
b) Because test execution is not allowed to proceed without the consent of the change
control board
c) Because changes in the test object are always subject to configuration management
d) Because configuration management assures the right configuration of the test tools
10) Which of the following items need not be given in an incident report?
a) The version number of the test object
b) Test data and used environment
c) Identification of the test case that failed
d) The location and instructions on how to correct the fault
12) IEEE 829 test plan documentation standard contains all of the following
except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification
17) Which of the following is NOT included in the Test Plan document of the Test
Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines
19) Which of the following would NOT normally form part of a test plan?
a) Features to be tested
b) Incident reports
c) Risks
d) Schedule
a) The number of defects identified in a component or system divided by the size of the
component or the system
b) The number of defects found by a test phase divided by the number found by that test
phase and by any other means afterwards
c) The number of defects identified in the component or system divided by the number of
defects found by a test phase
d) The number of defects found by a test phase divided by the number found by the size
of the system
24) During the testing of a module, tester ‘X’ finds a bug and assigns it to a
developer, but the developer rejects it, saying that it is not a bug. What should
‘X’ do?
a) Report the issue to the test manager and try to settle it with the developer.
b) Retest the module and confirm the bug
c) Assign the same bug to another developer
d) Send the detailed information of the bug encountered and check its reproducibility
25) The primary goal of comparing a user manual with the actual behavior of the
running program during system testing is to
26) You are the test manager and you are about to start system testing. The
development team says that, due to a change in requirements, they will be able to
deliver the system to you for testing 5 working days after the due date. You
cannot change the resources (work hours, test tools, etc.). What steps will you
take to finish the testing in time?
a) Tell the development team to deliver the system on time so that the testing activity
will finish on time.
b) Extend the testing plan, so that you can accommodate the slip going to occur
c) Rank the functionality as per risk and concentrate more on critical functionality testing
d) Add more resources so that the slippage should be avoided
a) Incident report
b) Release note
c) Review report
d) Audit report
28) The bug tracking system will need to capture these phases for each bug.
I. Phase injected
II. Phase detected
III. Phase fixed
IV. Phase removed
a) I, II and III
b) I, II and IV
c) II, III and IV
d) I, III and IV
a) Supplier issues
b) Organization factors
c) Technical issues
d) Error-prone software delivered
33) A project that is in the implementation phase is six weeks behind schedule.
The delivery date for the product is four months away. The project is not allowed
to slip the delivery date or compromise on the quality standards established for
this product. Which of the following actions would bring this project back on
schedule?
a) Eliminate some of the requirements that have not yet been implemented.
b) Add more engineers to the project to make up for lost work.
c) Ask the current developers to work overtime until the lost work is recovered.
d) Hire more software quality assurance personnel.
39) Which of the following items would not come under Configuration
Management?
a) Operating systems
b) Test documentation
c) Live data
42) In a REACTIVE approach to testing, when would you expect the bulk of the
test design work to begin?
a) After the software or system has been produced.
b) During development.
c) As early as possible.
d) During requirements analysis.
44) What is the difference between a project risk and a product risk?
a) Project risks are potential failure areas in the software or system; product risks
are risks that surround the project’s capability to deliver its objectives.
b) Project risks are the risks that surround the project’s capability to deliver its
objectives; product risks are potential failure areas in the software or system.
c) Project risks are typically related to supplier issues, organizational factors and
technical issues; product risks are typically related to skill and staff shortages.
d) Project risks are risks that delivered software will not work; product risks are
typically related to supplier issues, organizational factors and technical issues.
47) For testing, which of the options below best represents the main concerns
of Configuration Management?
For QTP Information visit: www.gcreddy.com 53
For Software Testing Information visit: www.gcreddy.net
a) i, iv, vi.
b) ii, iii, v.
c) i, iii, iv.
d) iv, v, vi.
50) What needs to be done when there is insufficient time for testing?
1) Do Ad-hoc testing
2) Do usability testing
3) Do sanity testing
4) Do a risk based analysis to prioritize
a) 1 and 2
b) 3 & 4
c) All of the above
d) None of the above
Questions:
2) Given the following types of tool, which tools would typically be used by
developers and which by an independent test team:
i. static analysis
ii. Performance testing
iii. Test management
iv. Dynamic analysis
v. test running
vi. test data preparation
a) developers would typically use i, iv and vi; test team ii, iii and v
b) developers would typically use i and iv; test team ii, iii, v and vi
c) developers would typically use i, ii, iii and iv; test team v and vi
d) developers would typically use ii, iv and vi; test team i, ii and v
e) developers would typically use i, iii, iv and v; test team ii and vi
3) A typical commercial test execution tool would be able to perform all of the
following EXCEPT:
a) generating expected outputs
b) replaying inputs according to a programmed script
c) comparison of expected outcomes with actual outcomes
d) recording test inputs
e) reading test values from a data file
4) Which of the following tools would you use to detect a memory leak?
a. State analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis
6) Which tool stores information about versions and builds of software and
testware?
a. Test Management tool
b. Requirements management tool
c. Configuration management tool
d. Static analysis tool
11). When a new testing tool is purchased, it should be used first by:
a. A small team to establish the best way to use the tool
b. Everyone who may eventually have some use for the tool
c. The independent testing team
d. The vendor contractor to write the initial scripts
12) Which one of the following statements, about capture-replay tools, is NOT
correct?
a) They are used to support multi-user testing.
b) They are used to capture and animate user requirements.
c) They are the most frequently purchased types of CAST tool.
d) They capture aspects of user behavior.
14) Which test activities are supported by test harness or unit test framework
tools?
a) Test management and control
b) Test specification and design
c) Test execution and logging
d) Performance and monitoring
15) Which of the following are advanced scripting techniques for test execution
tools?
a) Data-driven and keyword-driven
b) Data-driven and capture-driven
c) Capture driven and keyhole-driven
d) playback-driven and keyword-driven
17) Which test activities are supported by test data preparation tools?
a) Test management and control
b) Test specification and design
c) Test execution and logging
18) Which of the following are benefits and which are risks of using tools to
support testing?
1 over reliance on the tools
2 greater consistency and repeatability
3 objective assessment
4 unrealistic expectations
5 underestimating the effort required to maintain the test assets generated by the tool
6 ease of access to information about tests or testing
7 repetitive work is reduced
19) Which of the following is a goal for a proof-of-concept or pilot phase for tool
evaluation?
20) Which success factors are required for good tool support within an
organization?
a) Acquiring the best tool and ensuring that all testers use it
b) Adopting process to fit with the use of the tool and monitoring tool use and benefits
c) Setting ambitious objectives for tool benefits and aggressive deadlines for achieving
them.
d) Adopting practices from other successful organizations and ensuring that initial ways of
using the tool are maintained.
Model Questions -1
1. Which of the following is true?
a. Testing is the same as quality assurance
b. Testing is a part of quality assurance
c. Testing is not a part of quality assurance
d. Testing is the same as debugging
4. A number of critical bugs are fixed in software. All the bugs are in one module,
related to reports. The test manager decides to do regression testing only on the
reports module.
a. The test manager should do only automated regression testing.
b. The test manager is justified in her decision because no bug has been fixed in other
modules
c. The test manager should only do confirmation testing. There is no need to do regression
testing
d. Regression testing should be done on other modules as well because fixing one module
may affect other modules
8. In the foundation level syllabus you will find the main basic principles of testing.
Which of the following sentences describes one of these basic principles?
a. Complete testing of software is attainable if you have enough resources and test tools
b. With automated testing you can make statements with more confidence about the
quality of a product than with manual testing
c. For a software system, it is not possible, under normal conditions, to test all input and
preconditions.
d. A goal of testing is to show that the software is defect free.
9. Which of the following statements contains a valid goal for a functional test
set?
a. A goal is that no more failures will result from the remaining defects
b. A goal is to find as many failures as possible so that the cause of the failures can be
identified and fixed
c. A goal is to eliminate as much as possible the causes of defects
d. A goal is to fulfill all requirements for testing that are defined in the project plan.
12. Why does the boundary value analysis provide good test cases?
a. Because it is an industry standard
b. Because errors are frequently made during programming of the different cases near the
‘edges’ of the range of values
c. Because only equivalence classes that are equal from a functional point of view are
considered in the test cases
d. Because the test object is tested under maximal load up to its performance limits
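Option (b) is the point of the technique: off-by-one mistakes cluster at the edges of a value range, so tests target the values just inside and just outside each boundary. A minimal sketch, using a hypothetical helper and an illustrative 1..99 range (neither is from the syllabus):

```python
def boundary_values(low, high):
    """Return the classic boundary-value set for an inclusive
    integer range: the values just outside and just inside each edge."""
    return [low - 1, low, high, high + 1]

# For a field accepting 1..99, BVA selects these test inputs:
print(boundary_values(1, 99))  # [0, 1, 99, 100]
```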
14. The following list contains risks that have been identified for a software
product to be developed. Which of these risks is an example of a product risk?
a. Not enough qualified testers to complete the planned tests
b. Software delivery is behind schedule
c. Threat to a patient’s life
d. 3rd party supplier does not supply as stipulated
15. Which set of metrics can be used for monitoring of the test execution?
a. Number of detected defects, testing cost;
b. Number of residual defects in the test object.
c. Percentage of completed tasks in the preparation of test environment; test cases
prepared
d. Number of test cases run / not run; test cases passed / failed
18. Which of the following is a valid collection of equivalence classes for the
following problem: An integer field shall contain values from and including 1 to
and including 15
a. Less than 1, 1 through 15, more than 15
b. Negative numbers, 1 through 15, above 15
c. Less than 1, 1 through 14, more than 15
d. Less than 0, 1 through 14, 15 and more
19. Which of the following is a valid collection of equivalence classes for the
following problem: Paying with credit cards shall be possible with Visa, Master
and Amex cards only.
a. Visa, Master, Amex;
b. Visa, Master, Amex, Diners, Keycards, and other option
c. Visa, Master, Amex, any other card, no card
d. No card, other cards, any of Visa – Master – Amex
21. A defect management system shall keep track of the status of every defect
registered and enforce the rules about changing these states. If your task is to
test the status tracking, which method would be best?
a. Logic-based testing
b. Use-case-based testing
c. State transition testing
d. Systematic testing according to the V-model
25. Which of the following can be root cause of a bug in a software product?
(I) The project had incomplete procedures for configuration management.
(II) The time schedule to develop a certain component was cut.
(III) the specification was unclear
(IV) Use of the code standard was not followed up
(V) The testers were not certified
a. (I) and (II) are correct
b. (I) through (IV) are correct
c. (III) through (V) are correct
d. (I), (II) and (IV) are correct
b. The system is difficult to use due to a too complicated terminal input structure
c. The messages for user input errors are misleading and not helpful for understanding the
input error cause
d. Under high load, the system does not provide enough open ports to connect to
28. What is the purpose of test exit criteria in the test plan?
a. To specify when to stop the testing activity
b. To set the criteria used in generating test inputs
c. To ensure that the test case specification is complete
d. To know when a specific test has finished its execution
29. Which of the following items need not to be given in an incident report?
a. The version number of the test object
b. Test data and used environment
c. Identification of the test case that failed
d. The instructions on how to correct the fault
c. A strategy is needed to inform the project management how the test team will schedule
the test-cycles
d. Software failure may cause loss of money, time, business reputation, and in extreme
cases injury and death. It is therefore critical to have a proper test strategy in place.
Model Questions -2
1 When what is visible to end-users is a deviation from the specific or expected
behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
e) a mistake
3 IEEE 829 test plan documentation standard contains all of the following
except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification
5 Order numbers on a stock control system can range between 10000 and 99999
inclusive. Which of the following inputs might be a result of designing tests for
only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 10000
9 Which of the following is the main purpose of the integration strategy for
integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when and how many at once
d) to ensure that the integration testing can be performed by a small team
e) to specify how the software should be divided into modules
12 Given the following code, which is true about the minimum number of test
cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print “Large”
ENDIF
If P > 50 THEN
Print “P Large”
ENDIF
If there is a program that you are interested in watching then switch the the television on
and watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword
a) SC = 1 and DC = 1
b) SC = 1 and DC = 2
c) SC = 1 and DC = 3
d) SC = 2 and DC = 2
e) SC = 2 and DC = 3
21 Given the following types of tool, which tools would typically be used by
developers and which by an independent test team:
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
v. test running
a) developers would typically use i, iv and vi; test team ii, iii and v
b) developers would typically use i and iv; test team ii, iii, v and vi
c) developers would typically use i, ii, iii and iv; test team v and vi
d) developers would typically use ii, iv and vi; test team I, ii and v
e) developers would typically use i, iii, iv and v; test team ii and vi
25 A typical commercial test execution tool would be able to perform all of the
following EXCEPT:
a) generating expected outputs
b) replaying inputs according to a programmed script
c) comparison of expected outcomes with actual outcomes
d) recording test inputs
e) reading test values from a data file
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
e) s = 4 and 5, t = 1, u = 2, v = 3
37 Which of the following is NOT included in the Test Plan document of the Test
Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines
Model Questions -3
1. An input field takes the year of birth between 1900 and 2004
The boundary values for testing this field are
a. 0,1900,2004,2005
b. 1900, 2004
c. 1899,1900,2004,2005
d. 1899, 1900, 1901,2003,2004,2005
6. To test a function, the programmer has to write a _________, which calls the
function to be tested and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above
9. Fault Masking is
a. Error condition hiding another error condition
For QTP Information visit: www.gcreddy.com 73
For Software Testing Information visit: www.gcreddy.net
10. One Key reason why developers have difficulty testing their own work is:
a. Lack of technical documentation
b. Lack of test tools on the market for developers
c. Lack of training
d. Lack of Objectivity
11. During the software development process, at what point can the test process
start?
a. When the code is complete.
b. When the design is complete.
c. When the software requirements have been approved.
d. When the first code module is ready for unit testing
Model Questions -4
1 We split testing into distinct stages primarily because:
a) Each test stage has a different purpose.
b) It is easier to manage testing in stages.
c) We can run different tests in different environments.
d) The more stages we have, the better the testing.
2 Which of the following is likely to benefit most from the use of test tools
providing test capture and replay facilities?
a) Regression testing
b) Integration testing
c) System testing
d) User acceptance testing
6 Error guessing:
a) supplements formal test design techniques.
9 Given the following sets of test management terms (v-z), and activity
descriptions (1-5), which one of the following best pairs the two sets?
v – test control
w – test monitoring
x - test estimation
y - incident management
z - configuration control
a) v-3,w-2,x-1,y-5,z-4
b) v-2,w-5,x-1,y-4,z-3
c) v-3,w-4,x-1,y-5,z-2
d) v-2,w-1,x-4,y-3,z-5
10 Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.
a) Incident resolution is the responsibility of the author of the software under test.
b) Incidents may be raised against user requirements.
c) Incidents require investigation and/or correction.
d) Incidents are raised when expected and actual results differ.
23 Which of the following would NOT normally form part of a test plan?
a) Features to be tested
b) Incident reports
c) Risks
d) Schedule
24 Which of these activities provides the biggest potential cost saving from the
use of CAST?
a) Test management
b) Test design
c) Test execution
d) Test planning
29 Which of the following is the best source of Expected Outcomes for User
Acceptance Test scripts?
a) Actual results
b) Program specification
c) User requirements
d) System specification
31 Which one of the following describes the major benefit of verification early in
the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
35 A failure is:
a) found in the software; the result of an error.
b) departure from specified behaviour.
c) an incorrect step, process or data definition in a computer program.
d) a human action that produces an incorrect result.
37 The most important thing about early test design is that it:
For QTP Information visit: www.gcreddy.com 79
For Software Testing Information visit: www.gcreddy.net
Model Questions -5
1. Software testing activities should start
a. as soon as the code is written
b. during the design stage
c. when the requirements have been formally documented
d. as soon as possible in the development life cycle
3.What is the main reason for testing software before releasing it?
a. to show that system will work after release
b. to decide when the software is of sufficient quality to release
c. to find as many bugs as possible before release
d. to give information for a risk based decision about release
7. The later in the development life cycle a fault is discovered, the more
expensive it is to fix. why?
a. the documentation is poor, so it takes longer to find out what the software is doing.
b. wages are rising
c. the fault has been built into more documentation, code, tests, etc
d. none of the above
12. Increasing the quality of the software, by better development methods, will
affect the time needed for testing (the test phases) by:
a. reducing test time
b. no change
c. increasing test time
d. can’t say
16. What is the important criterion in deciding what testing technique to use?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique
17. If the pseudocode below were a programming language ,how many tests are
required to achieve 100% statement coverage?
1. If x=3 then
2. Display_messageX;
3. If y=2 then
4. Display_messageY;
5. Else
6. Display_messageZ;
7. Else
8. Display_messageZ;
a. 1
b. 2
c. 3
d. 4
18. Using the same code example as question 17,how many tests are required to
achieve 100% branch/decision coverage?
a. 1
b. 2
c. 3
d. 4
20. Which of the following tools would you use to detect a memory leak?
a. State analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis
which of the following input values cover all of the equivalence partitions?
a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22
30. Using the same specifications as question 29, which of the following covers
the MOST boundary values?
a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21
Model Questions -6
1. COTS is known as
A. Commercial off the shelf software
B. Compliance of the software
C. Change control of the software
D. Capable off the shelf software
2. From the below given choices, which one is the ‘Confidence testing’
A. Performance Testing
B. System testing
C. Smoke testing
D. Regression testing
6. When testing a grade calculation system, a tester determines that all scores
from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis
is known as:
A. Equivalence partitioning
B. Boundary value analysis
C. Decision table
D. Hybrid analysis
A. Desk check
B. Manual support testing
C. Walkthrough
D. Compiler based testing
10. Which of the following statements is true about a software verification and
validation program?
A. I, II&III
B.II, III&IV
C.I, II&IV
D.I, III&IV
I. Ease of use
II. Capacity for incremental implementation
III. Capability of evolving with the needs of a project
IV. Inclusion of advanced tools
A.I, II &III
B.I, II &IV
C.II, III&IV
For QTP Information visit: www.gcreddy.com 85
For Software Testing Information visit: www.gcreddy.net
D.I, III&IV
A.I, II&III
B.II, III &IV
C.I, III &IV
D.I, II & IV
14. During the testing of a module tester ‘X’ finds a bug and assigned it to
developer. But developer rejects the same, saying that it’s not a bug. What ‘X’
should do?
A. Report the issue to the test manager and try to settle with the developer.
B. Retest the module and confirm the bug
C. Assign the same bug to another developer
D. Send to the detailed information of the bug encountered and check the reproducibility
15. The primary goal of comparing a user manual with the actual behavior of the
running program during system testing is to
17. Which technique can be used to achieve input and output coverage? It can be
applied to human input, input via interfaces to a system, or interface parameters
in integration testing.
A. Error Guessing
18. There is one application, which runs on a single terminal. There is another
application that works on multiple terminals. What are the test techniques you
will use on the second application that you would not do on the first application?
19. You are the test manager and you are about the start the system testing. The
developer team says that due to change in requirements they will be able to
deliver the system to you for testing 5 working days after the due date. You can
not change the resources(work hours, test tools, etc.) What steps you will take
to be able to finish the testing in time. (
A. Tell to the development team to deliver the system in time so that testing activity will
be finish in time.
B. Extend the testing plan, so that you can accommodate the slip going to occur
C. Rank the functionality as per risk and concentrate more on critical functionality testing
D. Add more resources so that the slippage should be avoided
21. Testing of software used to convert data from existing systems for use in
replacement systems
A. Data driven testing
B. Migration testing
C. Configuration testing
D. Back to back testing
23. “The tracing of requirements for a test level through the layers of a test
documentation” done by
A. Horizontal traceability
B. Depth traceability
C. Vertical traceability
D. Horizontal & Vertical traceability
I. Are the necessary documentation, design and requirements information available that
will allow testers to operate the system and judge correct behavior.
II. Is the test environment-lab, hardware, software and system administration support
ready?
III. Those conditions and situations that must prevail in the testing process to allow
testing to continue effectively and efficiently.
IV. Are the supporting utilities, accessories and prerequisites available in forms that
testers can use
A. I, II and IV
B. I, II and III
C. I, II, III and IV
D. II, III and IV.
26. “This life cycle model is basically driven by schedule and budget risks” This
statement is best suited for
A. Water fall model
B. Spiral model
C. Incremental model
D. V-Model
Model Questions -7
1. ___________ Testing will be performed by the people at client own locations
A. Alpha testing
B. Field testing
C. Performance testing
D. System testing
A. Performance testing
B. Unit testing
C. Regression testing
D. Sanity testing
4. Who is responsible for document all the issues, problems and open point that
were identified during the review meeting
A. Moderator
B. Scribe
C. Reviewers
D. Author
A. Performance testing
B. Unit testing
C. Business scenarios
D. Static testing
A. Unit testing
B. Regression testing
C. Alpha testing
D. Integration testing
A. Supplier issues
B. Organization factors
C. Technical issues
D. Error-prone software delivered
12. ________ and ________ are used within individual workbenches to produce
the right output products.
14. A _____ is the step-by-step method followed to ensure that standards are
met
A. SDLC
B. Project Plan
C. Policy
D. Procedure
15. Which of the following is the standard for the Software product quality?
A. ISO 9126
B. ISO 829
C. ISO 1012
D. ISO 1028
A. Finding defects
B. Gaining confidence about the level of quality and providing information
C. Preventing defects.
D. Debugging defects
A. Reliability
B. Usability
C. Scalability
D. Maintainability
A. Error Seeding
B. Defect clustering
C. Pesticide paradox
D. Exhaustive testing
20. ‘X’ has given a data on a person age, which should be between 1 and 99.
Using BVA which is the appropriate one
A. 0, 1,2,99
B. 1, 99, 100, 98
C. 0, 1, 99, 100
D. –1, 0, 1, 99
A. System testing
B. Acceptance testing
C. Integration testing
D. Smoke testing
A. Equivalence partition
B. Decision tables
C. Transaction diagrams
D. Decision testing
A. Branch testing
B. Agile testing
C. Beta testing
D. Ad-hoc testing
A. L-N +2P
B. N-L +2P
C. N-L +P
D. N-L +P
Model Test -1
1) Designing the test environment set-up and identifying any required
infrastructure and tools are a part of which phase
a) Test Implementation and execution
b) Test Analysis and Design
c) Evaluating the Exit Criteria and reporting
d) Test Closure Activities
2) Test Implementation and execution has which of the following major tasks?
i. Developing and prioritizing test cases, creating test data, writing test
procedures and optionally preparing the test harnesses and writing automated
test scripts.
ii. Creating the test suite from the test cases for efficient test execution.
iii. Verifying that the test environment has been set up correctly.
iv. Determining the exit criteria.
4) One of the fields on a form contains a text box which accepts numeric values
in the range of 18 to 25. Identify the invalid Equivalence class
a) 17
b) 19
c) 24
d) 21
10) What is the expected result for each of the following test cases?
12) Which tool store information about versions and builds of software and
testware
a. Test Management tool
b. Requirements management tool
c. Configuration management tool
d. Static analysis tool
15) Given the following types of tool, which tools would typically be used by
developers, and which by an independent system test team?
i. Static analysis
ii. Performance testing
iii. Test management
iv. Dynamic analysis
a) developers would typically use i and iv; test team ii and iii
b) developers would typically use i and iii; test team ii and iv
c) developers would typically use ii and iv; test team i and iii
d) developers would typically use i, iii and iv; test team ii
19) Which of these activities provides the biggest potential cost saving from
the use of CAST?
a) Test management
b) Test design
c) Test execution
d) Test planning
i. Interaction with the Test Tool Vendor to identify best ways to leverage test tool on
the project.
ii. Prepare and acquire Test Data
iii. Implement Tests on all test levels, execute and log the tests.
iv. Create the Test Specifications
a) i, ii, iii are true and iv is false
b) ii, iii, iv are true and i is false
c) i is true and ii, iii, iv are false
d) iii and iv are correct and i and ii are incorrect
24) The Planning phase of a formal review includes the following:-
a) Explaining the objectives
b) Selecting the personnel, allocating roles.
c) Follow up
d) Individual Meeting preparations
25) A Person who documents all the issues, problems and open points that
were identified during a formal review.
a) Moderator.
b) Scribe
c) Author
d) Manager
26) Who are the persons involved in a Formal Review:-
i. Manager
ii. Moderator
iii. Scribe / Recorder
iv. Assistant Manager
a) i, ii, iii, iv are true
b) i, ii, iii are true and iv is false.
c) ii, iii, iv are true and i is false.
d) i, iv are true and ii, iii are false.
27) ____ is the activity where general testing objectives are transformed into
tangible test conditions and test designs.
a. Testing planning
b. Test Control
c. Test analysis and design
d. Test implementation and execution
33) For testing, which of the options below best represents the main concerns
of Configuration Management?
i. All items of testware are identified and version controlled;
ii. All items of testware are used in the final acceptance test;
iii. All items of testware are stored in a common repository;
iv. All items of testware are tracked for change;
v. All items of testware are assigned to a responsible owner;
vi. All items of testware are related to each other and to development items.
a) i, iv, vi.
b) ii, iii, v.
c) i, iii, iv.
d) iv, v, vi.
34) Which of the following is a characteristic of good testing in any life cycle
model?
a) All document reviews involve the development team.
b) Some, but not all, development activities have corresponding test activities.
c) Each test level has test objectives specific to that level.
d) Analysis and design of tests begins as soon as development is complete.
35) One of the fields on a form contains a text box which accepts alphanumeric
values. Identify the Valid Equivalence class
a) BOOK
b) Book
c) Boo01k
d) book
36) Reviewing the test basis is a part of which phase?
a) Test Implementation and execution
b) Test Closure Activities
c) Evaluating exit criteria and reporting
d) Test Analysis and Design
[State transition table: events A-F; states SS, S1, S2, S3, ES; transitions
per row: SS: S1; S1: S2; S2: S3, S1; S3: ES, S3; ES: none]
38) Which of the following items would not come under Configuration
Management?
a) Operating systems
b) Test documentation
c) Live data
d) User requirement document
From the pseudo code below, calculate the MINIMUM number of test cases for
statement coverage, and the MINIMUM number of test cases for decision coverage
respectively.
READ A
READ B
READ C
IF C>A THEN
IF C>B THEN
PRINT "C must be greater than at least one number"
ELSE
PRINT "Proceed to next stage"
ENDIF
ELSE
PRINT "B can be smaller than C"
ENDIF
a) SC=3, DC=3.
b) SC=2, DC=3.
c) SC=2, DC=4.
d) SC=3, DC=2.
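To see how such answers are derived, the pseudo code above can be mirrored in Python and exercised with a candidate test set (a sketch; the test inputs are illustrative, not part of the exam material):

```python
def check(a, b, c):
    """Mirror of the pseudo code above; returns the message printed."""
    if c > a:
        if c > b:
            return "C must be greater than at least one number"
        return "Proceed to next stage"
    return "B can be smaller than C"

# The three print statements are mutually exclusive, so each needs its own
# test: (C>A, C>B), (C>A, not C>B), and (not C>A).
tests = [(1, 1, 2), (1, 3, 2), (3, 1, 2)]
covered = {check(a, b, c) for a, b, c in tests}
```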
Model Test -2
1) The concept ‘Finding and fixing defects does not help if the system built is
unusable’ belongs to which testing principle?
a) Early Testing
b) Testing shows presence of defects
c) Absence-of-errors fallacy
d) Pesticide paradox
2) Analyzing lessons learned for future releases and projects, and the
improvement of test maturity, are a part of which phase?
a) Test Implementation and execution
b) Test Analysis and Design
c) Evaluating the Exit Criteria and reporting
d) Test Closure Activities
4) Test Approach is
a) A high level description of the test levels to be performed and the testing with in those
levels for an organization.
b) The implementation of the test strategy for a specific project.
c) A document describing the scope, approach, resources and schedule of intended test
activities.
d) A high level document describing the principles, approach and major objectives of the
organization regarding testing.
a. Decision testing
b. Error guessing
c. Statement testing
d. Exploratory testing
8) Which of the statements below is the best assessment of how the test
principles apply across the test life cycle?
a) Test principles only affect the preparation for testing.
b) Test principles only affect test execution activities.
c) Test principles affect the early test activities such as reviews.
d) Test principles affect activities throughout the test life cycle.
c) Usability testing
d) Portability testing
10) In a Preventative approach to testing, when would you expect the bulk of the
test design work to begin?
a) After the software or system has been produced.
b) During development.
c) As early as possible.
d) During requirements analysis.
11) Which standard covers Software Verification and Validation?
a) ISO/IEC 12207
b) BS 7925-1
c) IEEE 1012-1998
d) ANSI/IEEE 729
16) From the pseudo code below, calculate the MINIMUM number of test cases
for statement coverage, and the MINIMUM number of test cases for decision
coverage respectively.
READ X
READ Y
IF X>Y THEN
Print ‘X is a big number’
ELSE
Print ‘Y is a big number’
ENDIF
a) SC=3, DC=3.
b) SC=2, DC=2.
c) SC=1, DC=2.
d) SC=2, DC=1.
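The reasoning can again be checked with a small sketch (illustrative inputs, not part of the exam material): the two print statements sit on opposite branches of the single decision, so one test driving the decision true and one driving it false exercise every statement and every decision outcome.

```python
def bigger(x, y):
    """Mirror of the pseudo code above; returns the printed message."""
    if x > y:
        return "X is a big number"
    return "Y is a big number"

# Two tests: one makes the decision true, the other makes it false.
outcomes = {bigger(5, 3), bigger(3, 5)}
```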
18) Which of the following are benefits and which are risks of using tools to
support testing?
19) Which of the following tools would you use to identify time dependencies
and pointer arithmetic errors?
a. Static analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis
21) Which of the following test activities is supported by Static analysis tools?
a) The enforcement of coding standards
b) Measuring the percentage of specific types of code structure
c) Test specification and design
d) Manipulating the tests using a scripting language
26) Which of the following test activities comes under Test Monitoring?
a) Determining the scope and risks, and identifying the objective of testing
b) Setting the level of detail for test procedures in order to provide enough information to
support reproducible test preparation and execution.
c) Residual risks, such as defects not fixed or lack of test coverage in certain areas
d) Test coverage of requirements, risks or code
Now decide the minimum number of tests required to cover all statements in the
procedure
a) 2
b) 3
c) 4
d) 5
34) Which test design technique is useful if our system requirement contains
logical conditions?
a) Equivalence partitioning
b) Decision table testing
c) Boundary value analysis
d) State transition testing
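As a brief illustration of decision table testing (the discount rule here is hypothetical, not from the question), every combination of logical condition outcomes becomes one column of the table:

```python
from itertools import product

def discount(is_member, order_over_100):
    """Hypothetical business rule with two logical conditions."""
    if is_member and order_over_100:
        return 15
    if is_member or order_over_100:
        return 5
    return 0

# A full decision table enumerates every combination of condition outcomes:
# 2 conditions -> 4 columns, each an entry mapping conditions to the action.
table = {combo: discount(*combo) for combo in product([True, False], repeat=2)}
```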
37) What is the most important reason to use risk to drive testing efforts?
a) Because risk-based testing is the most efficient approach to find defects
b) Because risk-based testing is the most efficient way to show value
c) Because testing everything is not feasible
d) Because software is inherently risky
Standards
2) IEEE 829-1998
IEEE Standard for Software Test Documentation
The Types of Document:
There are eight document types in the IEEE 829 standard, which can be used in three
distinct phases of software testing:
Preparation of Tests
Test Plan: Plan how the testing will proceed.
Test Design Specification: Decide what needs to be tested.
Test Case Specification: Create the tests to be run.
Test Procedure: Describe how the tests are run.
Test Item Transmittal Report: Specify the items released for testing.
Running the Tests
Test Log: Record the details of tests in time order.
Test Incident Report: Record details of events that need to be investigated.
Completion of Testing
Test Summary Report: Summarise and evaluate tests.
5) IEEE 1008
IEEE Standard for Software Unit Testing
6) IEEE 1044-1993
IEEE Standard Classification for Software Anomalies
7) IEEE 1219-1998
Standard for Software Maintenance
For QTP Information visit: www.gcreddy.com
For Software Testing Information visit: www.gcreddy.net
8) ISO/IEC 9126-1:2001
(Software engineering – Software product quality- part 1)
Support for review, verification and validation, and a framework for quantitative quality
evaluation, in the support process;
Support for setting organisational quality goals in the management process.
9) ISO/IEC 12207:2008
Systems and software engineering -- Software life cycle processes
13) BS 7925-2:1998
Software testing, Software component testing
(This standard defines the process for software component testing using specified test
case design and measurement techniques. This will enable users of the standard to
directly improve the quality of their software testing, and improve the quality of their
software products)
14) DO-178B:1992
– Software Considerations in Airborne Systems and Equipment
15) BS7925-1
The British software testing standard governing testing terminology
Skill Test
1) In Microsoft Windows, how many types of operating systems are there, what are
they, and what are examples of each?
9) What are the differences between General Programming Languages (C, C++, Java,
VC++, etc.) and Scripting Languages (VBScript, JavaScript, Perl, etc.)?
17) What is the major difference between ISO/IEEE standards and CMM/CMMI levels?
18) Write definitions for Master Data, Metadata and Runtime Data.
20) What are the major advantages of Web Applications over Client/Server
Applications?
Configuration management
Configuration management – Why
Starts during the early phases of the project.
All products of the software process may have to be managed:
Specifications
Code
Designs
Programs
Test data
User manuals
Thousands of separate documents may be generated for a large software system
The CM plan
Change management
Derivation history
Versions/variants/releases
Version identification
Release management
System releases
Release problems
Release creation
System building
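Version identification, listed above, is commonly handled with a numbered scheme; a minimal sketch (the major.minor.patch scheme and the `bump` function are illustrative, not part of the notes):

```python
def bump(version, kind):
    """Increment a major.minor.patch version string according to the
    kind of release: 'major' (incompatible change), 'minor'
    (backwards-compatible feature) or anything else (bug fix)."""
    major, minor, patch = (int(p) for p in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

Recording which version each testware item belongs to is what makes a release reproducible later.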
Key points
Examination Guidelines
(ISTQB Foundation Level)
Within the multiple-choice format, questions can be presented in different ways. For
example, the amount of information presented in a question’s stem can be limited or
extensive. A question writer can also include written code within the stem, for
example when writing questions to test knowledge of white-box techniques.
Following are examples of the types of multiple-choice items to be used in any ISTQB
qualification. Correct answers should always be written as the first option.
The basic multiple-choice question has a short stem and a single correct response. A
limited amount of information is presented in the stem, and a single set of response
options is presented to the candidates. The following example of a basic multiple-choice
question is targeted to assess knowledge of static testing at K1 cognitive level of
application.
Example:
d) Runs the same tests multiple times, and checks that the results are statistically
meaningful
Another variation of the basic multiple-choice question is the Roman type. In this format,
the candidate is presented with several statements, each preceded by either a Roman
numeral or a letter of the alphabet. This differs from the multiple-choice questions already
discussed in that the response options may require the candidate to know or derive
several pieces of related information. The task for the candidate is to select the option
that represents the correct combination of statements, as shown in the following example:
Which of the following answers reflect when Regression testing should normally
be performed?
A. Every week
B. After the software has changed
C. On the same day each year
D. When the environment has changed
E. Before the code has been written