• Test plans for software projects are very complex and detailed
documents. The planner usually includes the following essential
high-level items.
– Overall test objectives. As testers, why are we testing, what is to be achieved
by the tests, and what are the risks associated with testing this product?
– What to test (scope of the tests). What items, features, procedures, functions,
objects, clusters, and subsystems will be tested?
– Who will test. Who are the personnel responsible for the tests?
– How to test. What strategies, methods, hardware, software tools, and
techniques are going to be applied? What test documents and deliverables
should be produced?
– When to test. What are the schedules for tests? What items need to be
available?
– When to stop testing. It is not economically feasible or practical to plan to test
until all defects have been revealed. This is a goal that testers can never be
sure they have reached. Because of budgets, scheduling, and customer
deadlines, specific conditions must be outlined in the test plan that allow
testers/managers to decide when testing is considered to be complete.
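The high-level items above can be captured as a simple structured outline. A minimal sketch in Python, where every field name and value is an illustrative assumption rather than part of any standard template:

```python
# Sketch of a test plan's essential high-level items as a data structure.
# All names and values below are invented examples.
test_plan = {
    "objectives": ["Verify core editing features", "Assess release risk"],
    "scope": {"in": ["file open/save", "text rendering"],
              "out": ["third-party plug-ins"]},
    "who": {"test_lead": "A. Tester", "execution": ["QA team"]},
    "how": {"strategies": ["user-scenario testing"], "tools": ["test harness"]},
    "when": {"start": "2024-05-01", "end": "2024-06-15"},
    "stop_criteria": ["all planned tests run", "no open critical defects"],
}

def is_complete(plan):
    """Check that every essential high-level item is present and non-empty."""
    required = {"objectives", "scope", "who", "how", "when", "stop_criteria"}
    return required <= plan.keys() and all(plan[k] for k in required)

print(is_complete(test_plan))  # True
```

A check like `is_complete` mirrors a reviewer's walkthrough of the plan: each of the six items must be addressed before the plan is considered ready.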
Test Preparation Activities
• Definition of the overall test approach, of the test levels, and their
entry and exit criteria;
• Defining the level of detail for test cases and test procedures, so
as to provide enough detail to allow the creation and execution of
reusable tests. This greatly depends on the knowledge of the
testers in charge of the actual test execution.
Test Planning Activities
• Risk analysis
Estimation in the Agile World
• Two common approaches: experience-based estimation and
metrics-based estimation.
Metrics-based Estimation
• In this section, you set boundaries for the test plan by discussing
what you will and will not test, by defining important terms and
acronyms related to the testing you plan to perform, and by
determining where and in what context the test efforts associated
with this test subproject will take place.
– Scope. Webster's Dictionary defines scope, in the context of a project or an
operation, as the "extent of treatment, activity, or influence; [the] range of
operation".
– Definitions. Including a table of definitions in your test plans can help to
clarify terminology for those who are not experienced in the field of testing,
and can also help to ensure that everyone on the test team is operating from
the same set of definitions.
– Setting. This section of the test plan describes where you intend to perform
the testing and how the organizations doing the testing relate to the rest of
the company. The description might be as simple as "our test lab".
Quality Risks
Entry, Continuation, and Exit
Criteria
• For each test phase, the system under test must satisfy a minimal
set of qualifications before the test organization can run tests
effectively and efficiently.
– For example, it makes little sense to start extensive user-scenario testing of
SpeedyWriter if the application cannot open or save a file or display text on
the screen.
– Likewise, the DataRocket server can't undergo environmental testing
(especially thermal testing) if you don't have even a prototype case.
• This section of the test plan should specify the criteria essential
for beginning and completing various test phases (and for
continuing an effective and efficient test process). These are
usually referred to as entry, continuation, and exit criteria,
respectively, but some test professionals use the terms entry,
suspension/resumption, and exit criteria, or entry, stopping, and
exit criteria.
Test Development
• In this section you'll describe how your test team will create each of
the various test objects, such as test cases, test tools, test
procedures, test suites, automated test scripts, and so forth.
Test Configurations and
Environments
• This section of the test plan is where you document which
hardware, software, networks, and lab space you will use to
perform the testing. For these various test systems, you’ll
describe whatever important configuration details bear
mentioning, as well. For a PC application or utility, this task can
be as simple as listing the half-dozen or so test PCs, the two or
three test networks (assuming that networking is even an issue),
and the printers, external drives, and other accessories you might
require from time to time.
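Such a configuration listing can itself be kept as structured data rather than free text, which makes it easy to summarize and keep current. A sketch, in which the machine names, network names, and counts are all invented examples:

```python
# Sketch of documenting a small test environment as structured data.
# Every identifier below is an illustrative example, not a real inventory.
test_environment = {
    "pcs": [f"TESTPC-{i:02d}" for i in range(1, 7)],   # the half-dozen test PCs
    "networks": ["lab-lan", "lab-wifi"],               # two test networks
    "peripherals": ["laser printer", "external drive"],
}

def inventory_summary(env):
    """Count the items of each kind, e.g. for the plan's summary table."""
    return {kind: len(items) for kind, items in env.items()}

print(inventory_summary(test_environment))
# {'pcs': 6, 'networks': 2, 'peripherals': 2}
```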
Test Execution
Change History
• This part of the document records the changes and revisions that
have been made to the test plan itself to this point. Specifically,
you can assign a revision number and record who made the
changes, what those changes were, and when the revision was
released.
Referenced Documents
Frequently Asked Questions
• That said, if you don’t need this section, don’t use it. As with the
Test Hours section, using it inappropriately can waste your time
and create problems. It can become a catch-all for any question
anyone ever asked about testing, bloating your test plans into
huge, unnavigable, and unmanageable documents.
Entry and Exit Criteria for Test Activities
Entry Criteria
for Test Activities
• The entry criteria define when the activities can start, whether for
a test level or for activities within a single test level. Generally
these criteria define the availability of:
– Adequate documentation (requirements, design, operations manual, etc.)
allowing testers to determine the expected behavior of the component to be
tested;
– Test object (component, software, system) of an appropriate level of quality.
This means that the previous phases (test or design phases) were
successfully finished (the exit criteria for these previous phases have been
successfully reached);
– Test environment, test harness, drivers, and stubs necessary to execute the
component to be tested, in a format usable by the testers;
– Test resources (testers, hardware and software resources, etc.);
– Test tools and test scripts;
– Test data required for the test to be executed.
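The availability checks above can be expressed as a simple gate that lists what is still missing before a test level may start. A minimal sketch; the criterion names are paraphrases of the bullets above, not terms from any standard:

```python
# Hedged sketch: entry criteria as named checks gating the start of a
# test level. The criterion names paraphrase the list above.
ENTRY_CRITERIA = [
    "documentation_available",  # requirements/design defining expected behavior
    "test_object_delivered",    # prior phase's exit criteria successfully met
    "environment_ready",        # harness, drivers, and stubs in usable form
    "resources_assigned",       # testers, hardware, and software resources
    "tools_and_scripts_ready",
    "test_data_loaded",
]

def can_start(status):
    """Return the unmet entry criteria; an empty list means testing may start."""
    return [c for c in ENTRY_CRITERIA if not status.get(c, False)]

status = {c: True for c in ENTRY_CRITERIA}
status["test_data_loaded"] = False
print(can_start(status))  # ['test_data_loaded']
```

Reporting the unmet criteria, rather than a bare yes/no, tells the project team exactly what still blocks the test level.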
Examples of Entry Criteria
• All planned fixes for this version have been implemented by the
development team;
Exit Criteria
for Test Activities
• Exit criteria are defined per test level, per activity, or for the whole
software. If the exit criteria are sufficiently detailed, they will help
in designing a strategy for unit testing and integration testing.
Examples of Exit Criteria
• The testing team has run all tests planned on the delivery
candidate version of the software;
• The development team has solved all the “to be fixed” defects, as
planned by sales, marketing or customer services;
Examples of Exit Criteria
(cont.)
• The testing team has checked that all issues identified in the
defect management tool have been either closed or postponed to
a subsequent version and – where applicable – have been
verified by adequate regression and confirmation tests;
• Test metrics indicate a stable and reliable product, the end of all
the planned tests, and adequate coverage of all the critical quality
risks identified;
• The project management team held a meeting for the end of the
system testing phase and accepts that system testing can be
considered as finished.
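The exit criteria listed above lend themselves to an automated check over simple project metrics. A sketch under stated assumptions: the metric names and thresholds are illustrative, and a real project would tailor both:

```python
# Hedged sketch: evaluating the exit criteria above from project metrics.
# Field names and thresholds are illustrative assumptions.
def exit_criteria_met(m):
    all_tests_run = m["tests_run"] == m["tests_planned"]
    fixes_done = m["must_fix_open"] == 0          # planned "to be fixed" defects
    issues_resolved = m["issues_open"] == 0        # closed or postponed, verified
    coverage_ok = m["critical_risk_coverage"] >= 1.0
    signoff = m["management_signoff"]              # end-of-phase meeting held
    return all([all_tests_run, fixes_done, issues_resolved,
                coverage_ok, signoff])

metrics = {
    "tests_planned": 120, "tests_run": 120,
    "must_fix_open": 0, "issues_open": 0,
    "critical_risk_coverage": 1.0, "management_signoff": True,
}
print(exit_criteria_met(metrics))  # True
```

Note that every criterion must hold at once; a single open must-fix defect or a missing sign-off keeps the phase open.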
Exit Criteria for SpeedyWriter
Considerations
for Exit Criteria
• Stopping testing when the planned test effort has been reached. This
criterion says nothing about the final level of quality of the software or
system, and depends on the initial quality of the software. If the
planned workload is too small, the criterion will be reached quickly,
regardless of the quality level of the software;
• Stopping when all the test cases have been executed without finding
new defects. This criterion is both counterproductive (it encourages
designing test cases that have little chance of identifying defects) and
dependent on the quality of the tests.
• If the exit criteria specified in the test planning and organizational
phase are ignored, the following phases – including production –
receive a system that is not sufficiently mature and will potentially
have significant defects. These subsequent phases will be less
efficient (an increase in the number of defects identified that have to
be corrected) and more expensive. If the exit criteria apply to
system tests or to acceptance tests, poor-quality, defective
software will be delivered to the customers and end users,
leading to unhappy users, a loss of perceived quality of the
products, and a potential loss of sales for the company.