
SOFTWARE IMPLEMENTATION & TESTING

FACILITATOR: A. PENROSE WHITTAKER

DWIGHT THOMAS

Software Testing Strategies



SOFTWARE TESTING

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.


WHAT TESTING SHOWS


Testing can show:
- errors
- requirements conformance
- performance
- an indication of quality


GENERIC CHARACTERISTICS OF TESTING


- Conduct formal technical reviews.
- Begin testing at the component level and work outward toward integration of the entire system.
- Different testing techniques are appropriate at different points in time.
- Testing is conducted by the developer of the software and by independent test groups.
- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

WHO TESTS THE SOFTWARE?

developer
Understands the system but will test "gently" and is driven by "delivery."

independent tester
Must learn about the system, but will attempt to break it, and is driven by quality.

WHO TESTS THE SOFTWARE?

Developer
The developer is responsible for testing the individual units of the program. The developer also conducts integration testing.

Independent Test Group (ITG)
Removes the conflict of interest that may otherwise be present. The developer and the ITG work closely together.

TESTING STRATEGY
We begin by testing-in-the-small and move toward testing-in-the-large.

For conventional software:
- The module (component) is our initial focus.
- Integration of modules follows.

For OO software:
- Our focus when "testing in the small" changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration.


SOFTWARE TESTING STRATEGY

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented as well as high-level tests that validate major system functions against customer requirements.

SOFTWARE TESTING STRATEGY (CONT'D)


A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.


TESTING STRATEGY: CONVENTIONAL ARCHITECTURE


CONVENTIONAL TESTING STRATEGY


CONVENTIONAL TESTING STRATEGY


Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code. Testing progresses by moving outward along the spiral to:
- Integration testing, where the focus is on design and the construction of the software architecture.
- Validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed.
- System testing, where the software and other system elements are tested as a whole.

CONVENTIONAL TESTING STRATEGY


CONVENTIONAL TESTING STRATEGY

From a procedural point of view, testing is a series of four steps:
1. Unit testing - focuses on path testing.
2. Integration testing - assembling components; focuses on inputs and outputs.
3. High-order testing (to be discussed later) - validation criteria are evaluated.
4. System testing - people, hardware, database, and other systems.


CRITERIA FOR COMPLETION OF TESTING


f(t) = (1/p) ln(λ0 p t + 1)     (18-1)

where
f(t) = the cumulative number of failures expected to occur once the software has been tested for a certain amount of execution time t,
λ0 = the initial software failure intensity (failures per time unit) at the beginning of testing,
p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.

The instantaneous failure intensity λ(t) can be derived by taking the derivative of f(t):

λ(t) = λ0 / (λ0 p t + 1)
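As a worked example, the model can be evaluated directly (a minimal C# sketch; the values of λ0 and p are illustrative, not taken from any real project):

using System;

class ReliabilityModel
{
    static void Main()
    {
        double lambda0 = 10.0; // assumed initial failure intensity (failures per unit of execution time)
        double p = 0.05;       // assumed exponential reduction in failure intensity

        for (double t = 0; t <= 100; t += 20)
        {
            double f = (1.0 / p) * Math.Log(lambda0 * p * t + 1.0); // cumulative expected failures, Eq. (18-1)
            double lambda = lambda0 / (lambda0 * p * t + 1.0);      // instantaneous failure intensity
            Console.WriteLine($"t={t,5:F0}  f(t)={f,6:F1}  lambda(t)={lambda:F2}");
        }
    }
}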

TESTING STRATEGY
unit test → integration test → validation test → system test

UNIT TESTING

[Figure: the software engineer designs test cases for the module to be tested and evaluates the results.]

UNIT TESTING
Test cases exercise the module to be tested against its:
- interface
- local data structures
- boundary conditions
- independent paths
- error handling paths

UNIT TESTING

- The module interface is tested to ensure that information properly flows into and out of the program unit under test.
- The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
- Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
- All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
- Finally, all error handling paths are tested.
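For example, a boundary-condition check in NUnit might look like the following minimal sketch (the FixedBuffer class is hypothetical, defined here only so the test is self-contained):

using System;
using NUnit.Framework;

// Hypothetical fixed-capacity buffer used to illustrate a boundary test
class FixedBuffer
{
    private readonly int capacity;
    private int count;
    public FixedBuffer(int capacity) { this.capacity = capacity; }
    public void Add(int item)
    {
        if (count == capacity) throw new InvalidOperationException("buffer full");
        count++;
    }
}

[TestFixture]
public class BoundaryTest
{
    [Test]
    public void RejectsItemJustPastCapacity()
    {
        var buffer = new FixedBuffer(3);
        buffer.Add(1); buffer.Add(2); buffer.Add(3);                   // exactly at the boundary: accepted
        Assert.Throws<InvalidOperationException>(() => buffer.Add(4)); // one past the boundary: rejected
    }
}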

WHAT ERRORS ARE COMMONLY FOUND DURING UNIT TESTING?


Among the more common errors in computation are:
- misunderstood or incorrect arithmetic precedence
- mixed-mode operations
- incorrect initialization
- precision inaccuracy
- incorrect symbolic representation of an expression

Comparison and control flow are closely coupled to one another (i.e., a change of flow frequently occurs after a comparison).

WHAT ERRORS ARE COMMONLY FOUND DURING UNIT TESTING?


Test cases should uncover errors such as:
- comparison of different data types
- incorrect logical operators or precedence
- expectation of equality when precision error makes equality unlikely
- incorrect comparison of variables
- improper or nonexistent loop termination
- failure to exit when divergent iteration is encountered
- improperly modified loop variables
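The precision pitfall in particular is easy to demonstrate (a minimal C# sketch):

using System;

class PrecisionDemo
{
    static void Main()
    {
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);                 // False: precision error makes exact equality unlikely
        Console.WriteLine(Math.Abs(sum - 0.3) < 1e-9); // True: compare against a tolerance instead
    }
}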


POTENTIAL ERRORS WHEN ERROR HANDLING IS EVALUATED

- Error description is unintelligible.
- Error noted does not correspond to error encountered.
- Error condition causes system intervention prior to error handling.
- Exception-condition processing is incorrect.
- Error description does not provide enough information to assist in locating the cause of the error.

UNIT TEST ENVIRONMENT


[Figure: a driver supplies test cases to the module under test; stubs replace its subordinate modules. The test cases exercise the module's interface, local data structures, boundary conditions, independent paths, and error handling paths, and the results are collected.]

UNIT TEST PROCEDURES

Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results.


UNIT TEST PROCEDURES


Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
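To make the driver/stub idea concrete, here is a minimal self-contained C# sketch (IBillingService, BillingStub, UsageReport, and Driver are all hypothetical names invented for illustration):

using System;

// Hypothetical interface of the subordinate (called) module
interface IBillingService { decimal CalculateCharge(int units); }

// Stub: replaces the real subordinate module during the unit test
class BillingStub : IBillingService
{
    public decimal CalculateCharge(int units)
    {
        Console.WriteLine("BillingStub.CalculateCharge entered"); // print verification of entry
        return 1.0m * units;                                      // minimal data manipulation
    }
}

// Hypothetical component under test
class UsageReport
{
    private readonly IBillingService billing;
    public UsageReport(IBillingService billing) { this.billing = billing; }
    public decimal TotalFor(int units) { return billing.CalculateCharge(units); }
}

// Driver: a "main program" that passes test case data to the component and prints the result
class Driver
{
    static void Main()
    {
        var component = new UsageReport(new BillingStub());
        Console.WriteLine(component.TotalFor(42)); // prints 42
    }
}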


INTEGRATION TESTING STRATEGIES


Options:
- the big bang approach
- an incremental construction strategy

TOP DOWN INTEGRATION

- the top module is tested with stubs
- stubs are replaced one at a time, "depth first"
- as new modules are integrated, some subset of tests is re-run

TOP-DOWN INTEGRATION
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the entire program structure is built.

BOTTOM-UP INTEGRATION
- drivers are replaced one at a time, "depth first"
- worker modules are grouped into builds (clusters) and integrated

BOTTOM-UP INTEGRATION
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.

SANDWICH TESTING
- top modules are tested with stubs
- worker modules are grouped into builds (clusters) and integrated

SANDWICH TESTING
Sandwich testing combines top-down and bottom-up integration so as to capitalize on their strengths and minimize their weaknesses. Since neither top-down nor bottom-up implementation/integration is suitable for all modules, the solution is to partition them.


REGRESSION TESTING
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.


REGRESSION TESTING
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
- A representative sample of tests that will exercise all software functions.
- Additional tests that focus on software functions that are likely to be affected by the change.
- Tests that focus on the software components that have been changed.
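One common way to carve out such a subset is to tag tests and filter on the tag at run time. The sketch below uses NUnit's real Category attribute; the fixture, test names, and assertions are hypothetical placeholders:

using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    [Test, Category("Regression")]       // part of the representative sample re-run after every change
    public void OrderTotalIsSumOfLines()
    {
        Assert.AreEqual(5, 2 + 3);       // placeholder assertion
    }

    [Test, Category("ChangedComponent")] // focused on the components that were just modified
    public void DiscountHandlesZeroTotal()
    {
        Assert.AreEqual(0, 0 * 10);      // placeholder assertion
    }
}

With the classic NUnit 2.x console runner, the subset can then be selected with, e.g., nunit-console /include:Regression Orders.dll.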


SMOKE TESTING

- Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
- A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show-stopper" errors that have the highest likelihood of throwing the software project behind schedule.
- The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.

SMOKE TESTING

- Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
- The quality of the end product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.
- Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are likely to be associated with new software increments; that is, the software that has just been added to the build(s) is a probable cause of a newly discovered error.
- Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work. This improves team morale and gives managers a good indication that progress is being made.

COMMENTS ON INTEGRATION TESTING


As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) addresses several software requirements, (2) has a high level of control (resides relatively high in the program structure), (3) is complex or error prone (cyclomatic complexity may be used as an indicator), or (4) has definite performance requirements. Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module function.

INTEGRATION TESTING DOCUMENT

Test specification
- Test plan - describes the overall strategy for integration.
- Test procedure - describes the detailed test procedures required to accompany the test plan.

Test report
- A history of test results, problems, and peculiarities is recorded in the test report.


TESTING STRATEGY: OBJECT-ORIENTED APPROACH


TESTING STRATEGY: OO APPROACH


- Must include error discovery techniques (e.g., formal reviews) that are applied to analysis and design models.
- Completeness and consistency of OO representations must be assessed as they are built.
- When testing units, we test classes (their attributes and operations).


TESTING STRATEGY: OO APPROACH

Integration of classes into an OO architecture calls for regression testing, due to the communication and collaboration between classes.


OBJECT-ORIENTED TESTING
OO testing begins by evaluating the correctness and consistency of the OOA and OOD models. The testing strategy changes:
- the concept of the "unit" broadens due to encapsulation
- integration focuses on classes and their execution across a thread or in the context of a usage scenario
- validation uses conventional black-box methods

Test case design draws on conventional methods, but also encompasses special features.

BROADENING THE VIEW OF TESTING


It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level.

Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).

TESTING THE CRC MODEL


1. Revisit the CRC model and the object-relationship model.
2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator's definition.
3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.
4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
5. Determine whether widely requested responsibilities might be combined into a single responsibility.
6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.


OBJECT-ORIENTED TESTING STRATEGY

Class testing is the equivalent of unit testing:
- operations within the class are tested
- the state behavior of the class is examined

Integration applies three different strategies:
- thread-based testing - integrates the set of classes required to respond to one input or event
- use-based testing - integrates the set of classes required to respond to one use case
- cluster testing - integrates the set of classes required to demonstrate one collaboration
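A cluster test in C# might integrate two collaborating classes and exercise one collaboration end to end (Account, Teller, and the test are hypothetical, invented to illustrate the idea):

using NUnit.Framework;

// Two hypothetical collaborating classes forming one small cluster
class Account
{
    public decimal Balance { get; private set; }
    public void Credit(decimal amount) { Balance += amount; }
}

class Teller
{
    public void Transfer(Account to, decimal amount) { to.Credit(amount); }
}

[TestFixture]
public class ClusterTest
{
    [Test]
    public void TellerAndAccountCollaborate() // integrates exactly the classes needed for one collaboration
    {
        var account = new Account();
        new Teller().Transfer(account, 50m);
        Assert.AreEqual(50m, account.Balance);
    }
}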


HIGH ORDER TESTING


- Validation testing - focus is on software requirements.
- System testing - focus is on system integration.
- Alpha/Beta testing - focus is on customer usage.
- Recovery testing - forces the software to fail in a variety of ways and verifies that recovery is properly performed.
- Security testing - verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration.
- Stress testing - executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
- Performance testing - tests the run-time performance of software within the context of an integrated system.

VALIDATION TESTING

Validation succeeds when the software functions in a manner that can be reasonably expected by the customer. Who or what is the arbiter of reasonable expectations? The specification contains a section called "Validation Criteria." Information contained in that section forms the basis for a validation testing approach.


VALIDATION TESTING CRITERIA


After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristics conform to specification and are accepted, or (2) a deviation from specification is uncovered and a deficiency list is created. Deviations or errors discovered at this stage in a project can rarely be corrected prior to scheduled delivery. It is often necessary to negotiate with the customer to establish a method for resolving deficiencies.

CONFIGURATION REVIEW

The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the software life cycle.


ALPHA AND BETA TESTING


Alpha and beta testing are used to uncover errors that only the end user seems able to find.

Alpha test - conducted at a site determined by the developer.
Beta test - conducted at the user/customer site.

SYSTEM TESTING

- Recovery testing
- Security testing
- Stress testing
- Performance testing


RECOVERY TESTING

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.

SECURITY TESTING
Security testing attempts to verify that protection mechanisms built into the system will, in fact, protect it from improper penetration. During this phase, the tester plays the role of an individual trying to penetrate the system: acquiring passwords that will break the system, browsing through data files, and so on. The role of the system designer is to make the cost of penetration greater than the value of the information that will be obtained.


STRESS TESTING
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
1. Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
2. Input data rates may be increased by an order of magnitude to determine how input functions will respond.
3. Test cases that require maximum memory or other resources are executed.
4. Test cases that may cause thrashing in a virtual operating system are designed.
5. Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program.
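A driver that multiplies the nominal input rate is straightforward to sketch in C# (the request handler and the burst size are assumptions made for illustration):

using System;
using System.Threading.Tasks;

class StressDriver
{
    // Stand-in for the operation under stress; replace with the real entry point
    static void HandleRequest(int id) { /* ... */ }

    static async Task Main()
    {
        var tasks = new Task[1000];                       // an order of magnitude above the nominal load
        for (int i = 0; i < tasks.Length; i++)
        {
            int id = i;
            tasks[i] = Task.Run(() => HandleRequest(id)); // fire requests concurrently
        }
        await Task.WhenAll(tasks);                        // the tester is, in effect, trying to break the program
        Console.WriteLine("Survived a burst of {0} requests", tasks.Length);
    }
}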

PERFORMANCE TESTING

Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
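In .NET, for instance, basic software instrumentation can be as simple as a Stopwatch around the operation of interest (a minimal sketch; the loop is a stand-in workload):

using System;
using System.Diagnostics;

class PerformanceProbe
{
    static void Main()
    {
        var watch = Stopwatch.StartNew();    // measure an execution interval
        long sum = 0;
        for (int i = 0; i < 10000000; i++)   // stand-in for the operation under test
            sum += i;
        watch.Stop();
        Console.WriteLine("Elapsed: {0} ms (result {1})", watch.ElapsedMilliseconds, sum);
    }
}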

DEBUGGING: A DIAGNOSTIC PROCESS


DEBUGGING

Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.


THE DEBUGGING PROCESS


[Figure: execution of test cases yields results; when actual results deviate from expected results, debugging produces suspected causes, then identified causes, then corrections, which are verified by regression tests and additional new test cases.]

DEBUGGING EFFORT
Debugging effort divides into two parts:
- the time required to diagnose the symptom and determine the cause
- the time required to correct the error and conduct regression tests

SYMPTOMS & CAUSES


- the symptom and the cause may be geographically separated
- the symptom may disappear when another problem is fixed
- the cause may be due to a combination of non-errors
- the cause may be due to a system or compiler error
- the cause may be due to assumptions that everyone believes
- the symptom may be intermittent

WHY IS DEBUGGING SO DIFFICULT?


1. The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by nonerrors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.

WHY IS DEBUGGING SO DIFFICULT?


6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks running on different processors [CHE90].

WHEN I CORRECT AN ERROR, WHAT QUESTIONS SHOULD I ASK MYSELF?

Is the cause of the bug reproduced in another part of the program? In many situations, a program defect is caused by an erroneous pattern of logic that may be reproduced elsewhere. Explicit consideration of the logical pattern may result in the discovery of other errors.


WHEN I CORRECT AN ERROR, WHAT QUESTIONS SHOULD I ASK MYSELF?


What "next bug" might be introduced by the fix I'm about to make? Before the correction is made, the source code (or, better, the design) should be evaluated to assess coupling of logic and data structures. If the correction is to be made in a highly coupled section of the program, special care must be taken when any change is made.


WHEN I CORRECT AN ERROR, WHAT QUESTIONS SHOULD I ASK MYSELF?

What could we have done to prevent this bug in the first place? This question is the first step toward establishing a statistical software quality assurance approach. If we correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs.


WHEN IS THE TEST?

OCTOBER 19, 2010

MULTIPLE CHOICE AND STRUCTURED QUESTIONS

BASED ON EVERYTHING THAT WE HAVE DONE FROM THE START OF THE SEMESTER
HAPPY STUDYING


SOFTWARE TESTING TOOLS


- Functional testing
- Performance testing
- Test management
- Bug databases
- Link checkers
- Security
- Unit testing tools (Ada | C/C++ | HTML | Java | JavaScript | .NET | Perl | PHP | Python | Ruby | SQL | Tcl | XML)



BORLAND SILKTEST

- SilkCentral Test Manager - a powerful software test management tool
- SilkTest - automated functional and regression testing
- SilkPerformer - automated load and performance testing
- SilkMonitor - 24x7 monitoring and reporting of Web, application, and database servers


SILKTEST


SAMPLE SCRIPT FOR SILKTEST


Browser.LoadPage ("mail.yahoo.com")
SignInYahooMail.SetActive ()
SignInYahooMail.objSignInYahooMail.SignUpNow.Click ()
Sleep (3)
WelcomeToYahoo.SetActive ()
WelcomeToYahoo.objWelcomeToYahoo.LastName.SetText ("lastname")
WelcomeToYahoo.objWelcomeToYahoo.LanguageContent.Select (5)
WelcomeToYahoo.objWelcomeToYahoo.ContactMeOccasionallyAbout.Click ()
WelcomeToYahoo.objWelcomeToYahoo.SubmitThisForm.Click ()
if (RegistrationSuccess.Exists ())
    Print ("Test Pass")
else
    LogError ("Test Fail")

HP
- QuickTest Professional - e-business functional testing
- LoadRunner - enterprise load testing
- TestDirector - integrated test management
- WinRunner - test automation for the enterprise


HP QUICKTEST


HP LOADRUNNER


UNIT TESTING C# NUNIT


namespace bank
{
    public class Account
    {
        private float balance;

        public void Deposit(float amount) { balance += amount; }
        public void Withdraw(float amount) { balance -= amount; }

        public void TransferFunds(Account destination, float amount)
        {
            // move the funds: deposit into the destination, withdraw from this account
            destination.Deposit(amount);
            Withdraw(amount);
        }

        public float Balance
        {
            get { return balance; }
        }
    }
}

namespace bank
{
    using NUnit.Framework;

    [TestFixture]
    public class AccountTest
    {
        [Test]
        public void TransferFunds()
        {
            Account source = new Account();
            source.Deposit(200.00F);

            Account destination = new Account();
            destination.Deposit(150.00F);

            source.TransferFunds(destination, 100.00F);

            Assert.AreEqual(250.00F, destination.Balance);
            Assert.AreEqual(100.00F, source.Balance);
        }
    }
}
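With the classic NUnit 2.x toolchain (an assumption about the environment), the fixture above would be compiled into an assembly and handed to a runner, e.g. nunit-console bank.dll. Note that TransferFunds must actually move the funds, as shown, for both assertions to pass; the original slide left its body empty.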


CONSEQUENCES OF BUGS
Damage caused by a bug ranges from mild, annoying, and disturbing through serious and extreme to catastrophic and infectious.

Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

DEBUGGING TECHNIQUES
- brute force / testing
- backtracking
- induction
- deduction

DEBUGGING: FINAL THOUGHTS


1. Don't run off half-cocked; think about the symptom you're seeing.
2. Use tools (e.g., a dynamic debugger) to gain more insight.
3. If at an impasse, get help from someone else.
4. Be absolutely sure to conduct regression tests when you do "fix" the bug.


STRATEGIC ISSUES

- State testing objectives explicitly.
- Understand the users of the software and develop a profile for each user category.
- Develop a testing plan that emphasizes rapid-cycle testing.
- Build robust software that is designed to test itself.
- Use effective formal technical reviews as a filter prior to testing.
- Conduct formal technical reviews to assess the test strategy and test cases themselves.
- Develop a continuous improvement approach for the testing process.

