DWIGHT THOMAS
SOFTWARE TESTING
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
Testing also demonstrates performance and gives an indication of quality.
Conduct effective formal technical reviews.
Begin testing at the component level and work outward toward integration of the entire system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and by independent test groups.
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
developer - understands the system but will test "gently" and is driven by "delivery"
independent tester - must learn about the system, will attempt to break it, and is driven by "quality"
Developer
The developer is responsible for testing the individual units of the program. The developer also conducts integration testing.
TESTING STRATEGY
We begin by "testing-in-the-small" and move toward "testing-in-the-large."
For conventional software, testing in the small focuses on an individual module.
For OO software, the focus shifts from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration.
A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented as well as high-level tests that validate major system functions against customer requirements.
A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.
High-order testing (to be discussed later): validation criteria are evaluated. System testing exercises the software together with people, hardware, databases, and other systems.
f(t) = (1/p) ln(λ0 p t + 1)    (18-1)
where
f(t) = the cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time t,
λ0 = the initial software failure intensity (failures per time unit) at the beginning of testing,
p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.
The instantaneous failure intensity λ(t) can be derived by taking the derivative of f(t):
λ(t) = λ0 / (λ0 p t + 1)
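The two formulas above can be evaluated directly. The sketch below assumes illustrative parameter values (λ0 = 0.5 failures per unit of execution time, p = 0.1); these numbers are not from the text, only the formulas are.

```python
import math

def cumulative_failures(t, lam0, p):
    """f(t) = (1/p) * ln(lam0 * p * t + 1): expected cumulative failures
    after t units of execution time (logarithmic Poisson model)."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

def failure_intensity(t, lam0, p):
    """lam(t) = lam0 / (lam0 * p * t + 1): instantaneous failure
    intensity, the derivative of f(t)."""
    return lam0 / (lam0 * p * t + 1.0)

# Assumed parameters, for illustration only.
lam0, p = 0.5, 0.1
for t in (0, 10, 100, 1000):
    print(t, round(cumulative_failures(t, lam0, p), 2),
          round(failure_intensity(t, lam0, p), 4))
```

Note that the failure intensity falls as execution time grows, which matches the model's premise that errors are uncovered and repaired as testing proceeds.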
TESTING STRATEGY
unit test integration test
system test
validation test
UNIT TESTING
[Figure: unit-test environment — the software engineer designs test cases, applies them to the module to be tested, and evaluates the results.]
UNIT TESTING
The module interface is tested to ensure that information properly flows into and out of the program unit under test.
The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
Finally, all error-handling paths are tested.
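These concerns can be sketched as a small unit test. The module under test (`clamp`) is hypothetical, invented for illustration; the test cases map onto the categories above: interface flow, boundary conditions, independent paths, and error handling.

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampUnitTest(unittest.TestCase):
    def test_boundaries(self):
        # Boundary conditions: behavior exactly at the limits.
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_paths(self):
        # Independent paths: below range, inside range, above range.
        self.assertEqual(clamp(-5, 0, 10), 0)
        self.assertEqual(clamp(5, 0, 10), 5)
        self.assertEqual(clamp(99, 0, 10), 10)

    def test_error_handling(self):
        # Error-handling path: invalid interface input.
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main(argv=["clamp_unit_test"], exit=False)
```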
Error description is unintelligible.
Error noted does not correspond to error encountered.
Error condition causes system intervention prior to error handling.
Exception-condition processing is incorrect.
Error description does not provide enough information to assist in locating the cause of the error.
[Figure: unit-test environment — test cases are applied to the module under test; stubs replace its subordinate modules; results are collected.]
Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results.
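A minimal sketch of such a driver follows. The component under test (`simple_interest`) and the test-case data are invented for illustration; the driver itself is just the "main program" described above: it accepts test-case data, passes it to the component, and prints relevant results.

```python
# Hypothetical component under test: simple-interest calculation.
def simple_interest(principal, rate, years):
    return principal * rate * years

# The driver: accepts test-case data, passes it to the component,
# and prints relevant results.
def driver(test_cases):
    for principal, rate, years, expected in test_cases:
        actual = simple_interest(principal, rate, years)
        status = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
        print(f"{status}: simple_interest({principal}, {rate}, {years}) "
              f"= {actual} (expected {expected})")

if __name__ == "__main__":
    driver([
        (1000.0, 0.05, 1, 50.0),
        (1000.0, 0.05, 2, 100.0),
        (0.0, 0.05, 10, 0.0),   # boundary case: zero principal
    ])
```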
Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
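A stub can be sketched in a few lines. The component under test (`report_header`) and its missing subordinate (`fetch_user_name`) are hypothetical; the stub honors the subordinate's interface, prints verification of entry, and returns control immediately.

```python
def fetch_user_name_stub(user_id):
    """Stub: honors the subordinate module's interface, prints
    verification of entry, and returns a canned value."""
    print(f"stub fetch_user_name entered with user_id={user_id}")
    return "TEST USER"

# Component under test (hypothetical). In production it would call the
# real fetch_user_name, which is not yet available, so the stub is
# injected in its place.
def report_header(user_id, fetch_user_name=fetch_user_name_stub):
    # Control returns immediately to the module under test.
    return f"Report for {fetch_user_name(user_id)}"

print(report_header(42))
```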
[Figure: top-down integration — stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run.]
These courseware materials are to be used in conjunction with Software Engineering: A Practitioner's Approach, 6/e and are provided with permission by R.S. Pressman & Associates, Inc., copyright 1996, 2001, 2005.
TOP-DOWN INTEGRATION
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the entire program structure is built.
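The five steps above can be sketched in miniature. All module names here are invented: the main control module reaches its subordinates through a table, so each stub can be swapped for the real component one at a time, with tests re-run after every swap.

```python
# Step 1: stubs stand in for all components subordinate to main control.
def stub_parse(data):      return "stub-parsed"
def stub_compute(parsed):  return "stub-computed"

# The real components that will replace the stubs.
def real_parse(data):      return data.strip().lower()
def real_compute(parsed):  return f"computed({parsed})"

subordinates = {"parse": stub_parse, "compute": stub_compute}

def main_control(data):
    """Main control module, used as the test driver's entry point."""
    return subordinates["compute"](subordinates["parse"](data))

def run_tests():
    result = main_control("  INPUT  ")
    print("integration test result:", result)
    return result

run_tests()                                   # all stubs in place
for name, real in [("parse", real_parse),     # step 2: replace depth first
                   ("compute", real_compute)]:
    subordinates[name] = real                 # step 4: swap in real component
    run_tests()                               # steps 3 & 5: re-run tests
```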
BOTTOM-UP INTEGRATION
[Figure: bottom-up integration — worker modules are grouped into builds (clusters) and integrated; drivers are replaced one at a time, "depth first."]
BOTTOM-UP INTEGRATION
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
SANDWICH TESTING
[Figure: sandwich testing — top modules are tested with stubs while worker modules are grouped into clusters and integrated bottom-up.]
SANDWICH TESTING
Sandwich testing combines the two approaches so as to capitalize on their strengths and minimize their weaknesses. When neither top-down nor bottom-up integration is suitable for all of the modules, the solution is to partition them.
REGRESSION TESTING
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
REGRESSION TESTING
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
A representative sample of tests that will exercise all software functions.
Additional tests that focus on software functions that are likely to be affected by the change.
Tests that focus on the software components that have been changed.
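Selecting that subset can be sketched as follows. The test names, their classifications, and the change set are all invented for illustration; the selection rule simply keeps the representative sample plus every test that touches a changed component.

```python
# Hypothetical test inventory: each test lists the components it
# exercises and which of the three classes it belongs to.
tests = {
    "test_login":     {"exercises": {"auth"},    "class": "representative"},
    "test_checkout":  {"exercises": {"billing"}, "class": "representative"},
    "test_discounts": {"exercises": {"billing"}, "class": "affected"},
    "test_tax_rules": {"exercises": {"billing"}, "class": "changed"},
    "test_profile":   {"exercises": {"profile"}, "class": "other"},
}

changed_components = {"billing"}  # assumed change, for illustration

def regression_suite(tests, changed):
    suite = []
    for name, info in tests.items():
        representative = info["class"] == "representative"
        touches_change = bool(info["exercises"] & changed)
        if representative or touches_change:
            suite.append(name)
    return sorted(suite)

print(regression_suite(tests, changed_components))
```

Here `test_profile` is excluded: it is neither part of the representative sample nor related to the changed component.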
SMOKE TESTING
Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover show-stopper errors that have the highest likelihood of throwing the software project behind schedule.
The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top-down or bottom-up.
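A daily smoke test can be as simple as a short script that exercises the build's critical functions and fails loudly on any show-stopper. The checks below are stand-ins invented for illustration; in practice they would start the real product and exercise its key functions.

```python
def start_service():        # stand-in for launching the real build
    return True

def store_and_load(value):  # stand-in for a data round-trip check
    return value

def smoke_test():
    """Run the show-stopper checks and report pass/fail for each."""
    checks = {
        "service starts":   start_service() is True,
        "data round-trips": store_and_load("x") == "x",
    }
    for name, ok in checks.items():
        print(("PASS " if ok else "FAIL ") + name)
    return all(checks.values())

# Run daily against the current build; any failure blocks integration.
print("build OK" if smoke_test() else "build BROKEN")
```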
SMOKE TESTING
Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
The quality of the end product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.
Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are likely to be associated with new software increments; that is, the software that has just been added to the build(s) is a probable cause of a newly discovered error.
Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work. This improves team morale and gives managers a good indication that progress is being made.
Test specification
Test plan - describes the overall strategy for integration.
Test procedure - describes the detailed test procedures required to accompany the test plan.
Test report - the history of test results, problems, and peculiarities is recorded in the test report.
Integration of classes into an OO architecture calls for regression testing, because of the communication and collaboration between classes.
OBJECT-ORIENTED TESTING
OO testing begins by evaluating the correctness and consistency of the OOA and OOD models. The testing strategy changes:
the concept of the "unit" broadens due to encapsulation;
integration focuses on classes and their execution across a thread or in the context of a usage scenario;
validation uses conventional black-box methods.
Test case design draws on conventional methods but also encompasses special features.
Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).
4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
5. Determine whether widely requested responsibilities might be combined into a single responsibility.
6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.
Validation testing
System testing
Recovery testing - forces the software to fail in a variety of ways and verifies that recovery is properly performed.
Security testing - verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration.
Stress testing - executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
Performance testing - tests the run-time performance of software within the context of an integrated system.
VALIDATION TESTING
Validation succeeds when software functions in a manner that can be reasonably expected by the customer. Who or what is the arbiter of reasonable expectations? The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a validation testing approach.
CONFIGURATION REVIEW
The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the software life cycle.
Beta Test
SYSTEM TESTING
RECOVERY TESTING
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
SECURITY TESTING
Security testing attempts to verify that protection mechanisms built into the system will, in fact, protect it from improper penetration. During this phase, the tester plays the role of an individual trying to penetrate the system: he or she may try to acquire passwords that will break into the system, browse through data files, and so on. The role of the system designer is to make the penetration cost greater than the value of the information that would be obtained.
STRESS TESTING
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
1. Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
2. Input data rates may be increased by an order of magnitude to determine how input functions will respond.
3. Test cases that require maximum memory or other resources are executed.
4. Test cases that may cause thrashing in a virtual operating system are designed.
5. Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program.
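The idea can be sketched in miniature. The bounded queue, its capacity, and the input rates below are all invented for illustration: the "stress" drives the component at ten times its average input rate and observes how it degrades rather than whether it crashes.

```python
from collections import deque

class BoundedQueue:
    """Hypothetical component with a fixed capacity."""
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.dropped = 0

    def push(self, item):
        if len(self.items) >= self.capacity:
            self.dropped += 1   # graceful degradation, not a crash
        else:
            self.items.append(item)

normal_rate, stress_factor = 10, 10   # drive at 10x the average rate
q = BoundedQueue(capacity=50)
for i in range(normal_rate * stress_factor):
    q.push(i)

print(f"queued={len(q.items)} dropped={q.dropped}")
```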
PERFORMANCE TESTING
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
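Software instrumentation of this kind can be sketched with a monotonic timer. The workload below is invented for illustration; the pattern is the general one: bracket the run with timer reads and sample a simple resource measure during execution.

```python
import time

def workload(n):
    """Hypothetical workload: build a list and track its peak size,
    a stand-in for sampling resource utilization during the run."""
    data = []
    peak = 0
    for i in range(n):
        data.append(i * i)
        peak = max(peak, len(data))
    return peak

start = time.perf_counter()          # monotonic, high-resolution timer
peak = workload(100_000)
elapsed = time.perf_counter() - start

print(f"elapsed={elapsed:.4f}s peak_items={peak}")
```

`time.perf_counter` is preferred over wall-clock time here because it is monotonic and measures execution intervals rather than time of day.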
DEBUGGING
Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.
[Figure: the debugging process — test cases produce results; debugging of unexpected results leads to suspected causes and corrections, which are verified by regression tests.]
DEBUGGING EFFORT
time required to diagnose the symptom and determine the cause
Symptom vs. cause: the cause may be due to assumptions that everyone believes; the symptom may be intermittent.
Is the cause of the bug reproduced in another part of the program? In many situations, a program defect is caused by an erroneous pattern of logic that may be reproduced elsewhere. Explicit consideration of the logical pattern may result in the discovery of other errors.
What could we have done to prevent this bug in the first place? This question is the first step toward establishing a statistical software quality assurance approach. If we correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs.
BASED ON EVERYTHING THAT WE HAVE DONE FROM THE START OF THE SEMESTER
HAPPY STUDYING
BORLAND SILKTEST
SilkCentral Test Manager - a powerful software test management tool
SilkTest - automated functional and regression testing
SilkPerformer - automated load and performance testing
SilkMonitor - 24x7 monitoring and reporting of Web, application, and database servers
SILKTEST
HP
QuickTest Professional™ - e-business functional testing
LoadRunner - enterprise load testing
TestDirector™ - integrated test management
WinRunner - test automation for the enterprise
HP QUICKTEST
HP LOADRUNNER
CONSEQUENCES OF BUGS
infectious
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
DEBUGGING TECHNIQUES
brute force / testing
backtracking
induction
deduction
STRATEGIC ISSUES
State testing objectives explicitly.
Understand the users of the software and develop a profile for each user category.
Develop a testing plan that emphasizes rapid-cycle testing.
Build robust software that is designed to test itself.
Use effective formal technical reviews as a filter prior to testing.
Conduct formal technical reviews to assess the test strategy and the test cases themselves.
Develop a continuous improvement approach for the testing process.