
Software Testing

Overview

By

Hyung Jae (Chris) Chang


Troy University

Some slides are courtesy of Dr. Paul Ammann and Dr. Jeff Offutt.

Contents
- Testing Concepts
- Unit Testing
- Integration Testing
- Object-Oriented Testing
- Black-box Testing
- White-box Testing
- Model-Driven Test Design
- Criteria-Based Test Design

Testing Concepts

Objective of Testing
- Testing is a process of executing a program with the specific intent of finding errors.
- A good test case is one that has a high probability of finding an as yet undiscovered error.
- A successful test is one that uncovers an as yet undiscovered error.
- Testing cannot show the absence of defects; it can only show that software defects are present.

Software Faults, Errors & Failures
- Software Fault: a static defect in the software; a manifestation of an Error in the software
- Software Failure: external, incorrect behavior with respect to the requirements or other description of the expected behavior (deviation of the software from its intended purpose)
- Software Error: an incorrect internal state resulting from a mistake made by a programmer

Fault and Failure Example
- Failures: a patient gives a doctor a list of symptoms
- Fault: the doctor tries to diagnose the root cause, the ailment
- Errors: the doctor may look for anomalous internal conditions (high blood pressure, irregular heartbeat, bacteria in the blood stream)
- Most medical problems result from external attacks (bacteria, viruses) or physical degradation as we age; software faults, in contrast, were there at the beginning and do not appear when a part wears out.

A Concrete Example

public static int numZero (int [] arr)
{  // Effects: if arr is null, throw NullPointerException;
   // else return the number of occurrences of 0 in arr
   int count = 0;
   for (int i = 1; i < arr.length; i++)  // Fault: should start searching at 0, not 1
   {
      if (arr[i] == 0)
      {
         count++;
      }
   }
   return count;
}

Test 1: [ 2, 7, 0 ]   Expected: 1   Actual: 1
- Error: i is 1, not 0, on the first iteration
- Failure: none

Test 2: [ 0, 2, 7 ]   Expected: 1   Actual: 0
- Error: i is 1, not 0; the error propagates to the variable count
- Failure: count is 0 at the return statement
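The fault/failure behavior above can be checked by actually running the faulty method. A minimal self-contained sketch (the class name is ours, not from the slides):

```java
public class NumZeroDemo {
    // Faulty version from the slide: the loop starts at 1 instead of 0.
    public static int numZero(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) {
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Test 1: the fault is reached and the state is infected (i starts
        // at 1), but the skipped element is not 0, so no failure occurs.
        System.out.println(numZero(new int[]{2, 7, 0})); // prints 1 (expected 1)
        // Test 2: the skipped element is 0, so the error propagates to
        // count and the failure is observable at the return value.
        System.out.println(numZero(new int[]{0, 2, 7})); // prints 0 (expected 1)
    }
}
```

Note how Test 1 executes the fault without producing a failure, while Test 2 turns the same fault into a visible failure.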

Spectacular Software Failures
- NASA's Mars Polar Lander: crashed in September 1999 due to a units integration fault
- THERAC-25 radiation machine: poor testing of safety-critical software can cost lives; 3 patients were killed
- Ariane 5 explosion: an exception-handling bug (a 64-bit to 16-bit conversion) forced self-destruct on its maiden flight; very expensive (about $370 million lost)
- Intel's Pentium FDIV fault: a public relations nightmare
- We need our software to be dependable; testing is one way to assess dependability

Northeast Blackout of 2003
- 508 generating units and 256 power plants shut down
- Affected 10 million people in Ontario, Canada
- Affected 45 million people in 8 US states
- Financial losses of $6 billion USD
- The alarm system in the energy management system failed due to a software error, and operators were not informed of the power overload in the system

What Testing Shows
- Errors
- Conformance to requirements
- Performance
- An indication of quality

Who Should Test the Software?
- Developer: understands the system, but will test "gently," and testing is driven by "delivery"
- Independent Tester: must learn about the system, but will attempt to break it, and testing is driven by quality

Verification and Validation (V&V)
- Software Verification: the process of determining whether the products of a given phase of the software development process fulfill the requirements established during the previous phase
  - "Are we building the product right?"
  - Checks that the program conforms to its specification.
- Software Validation: the process of evaluating software at the end of software development to ensure compliance with its intended usage
  - "Are we building the right product?"
  - Checks that the program as implemented meets the expectations of the clients.

Cost of Not Testing
- Poor program managers might say: "Testing is too expensive."
- Testing is the most time consuming and expensive part of software development.
- But not testing is even more expensive:
  - If we put in too little testing effort early, the cost of testing increases later.
  - Planning for testing after development is prohibitively expensive.

Cost of Late Testing

[Chart: assuming a $1000 unit cost per fault and 100 faults, the chart plots fault origin (%), fault detection (%), and unit cost (X) across the phases Requirements/Design, Prog/Unit Test, Integration Test, System Test, and Post-Deployment; the cost of fixing a fault rises steeply the later it is detected.]

Source: Software Engineering Institute, Carnegie Mellon University; Handbook CMU/SEI-96-HB-002

V Model

System Engineering ↔ System test
Analysis ↔ Validation test
Design ↔ Integration test
Coding ↔ Unit test

Unit Testing

Concept
- Unit testing focuses verification effort on the smallest unit of software design: the module.
- Using the detailed design as a guide, control paths are tested within the boundary of the module.
- After code has been developed and reviewed, unit test case design begins.
- White-box oriented.

Unit Testing

What to test
- Module interface
- Local data structures
- Boundary conditions
- Independent paths
- Error-handling paths

Unit Testing
- Test Driver: a main program that accepts test case data, passes the data to the module, and prints the results.
- Stub: a stub serves to replace a module that is subordinate to (called by) the module to be tested. A stub uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns.
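As a sketch of how a driver and a stub fit together (the module, method, and class names here are hypothetical, not from the slides): the driver feeds test case data to the module under test, while the stub stands in for a subordinate module the unit would normally call.

```java
public class DriverStubDemo {
    // Stub: replaces a subordinate "tax rate lookup" module. It prints
    // verification of entry and returns a fixed, canned value.
    static double taxRateStub(String region) {
        System.out.println("stub entered: taxRateStub(" + region + ")");
        return 0.10; // canned answer instead of a real lookup
    }

    // Module under test: computes a gross price via the subordinate module.
    static double grossPrice(double net, String region) {
        return net + net * taxRateStub(region);
    }

    // Test driver: accepts test case data, passes it to the module,
    // and prints the results.
    public static void main(String[] args) {
        double[] cases = { 100.0, 0.0, 19.99 };
        for (double net : cases) {
            System.out.println("grossPrice(" + net + ") = " + grossPrice(net, "US"));
        }
    }
}
```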

Unit Testing

Unit Testing Environment
- A test driver supplies test cases to the module under test; stubs replace the modules it calls; the driver collects the test results.
- Aspects tested: module interface, local data structures, boundary conditions, independent paths, error-handling paths.

Unit Testing
- Unit testing is simplified when a module has high cohesion: when only one function is addressed by a module, the number of test cases is reduced and errors can be more easily predicted.
- Drivers and stubs represent overhead.

Integration Testing

Concept
- Once all modules have been unit-tested, why test more? There might be problems in putting them together: interfacing.
- Integration testing is a systematic technique for constructing the whole program structure while at the same time testing the interfaces.

Integration Testing
- The big bang approach
- Incremental integration: the program is constructed and tested in small segments, where errors are easier to isolate and correct.
  - Top-down integration
  - Bottom-up integration

Integration Testing
- Top-Down Testing
  - Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.
  - Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
- Bottom-Up Testing
  - Begins construction and testing with atomic modules; modules are integrated from the bottom up.

Integration Testing

For an example hierarchy with M1 at the top, M2/M3/M4 below it, M5/M6/M7 below those, and M8 at the bottom:
- Top-down testing with depth-first integration: M1, M2, M5, M8, M6, M3, M7, M4
- Top-down testing with breadth-first integration: M1; M2, M3, M4; M5, M6, M7; M8

Top-Down Integration
- The top module is tested with stubs.
- Stubs are replaced one at a time, depth first.
- As new modules are integrated, some subset of tests is re-run.

Bottom-Up Integration
- Worker modules are grouped into builds (clusters) and integrated.
- Drivers are replaced one at a time, "depth first."

High Order Testing
- Validation testing: focus is on software requirements
- System testing: focus is on system integration
- Alpha/Beta testing: focus is on customer usage
- Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed
- Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
- Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
- Performance testing: tests the run-time performance of software within the context of an integrated system

Alpha and Beta Testing
- Alpha Testing: conducted at the developer's site by a customer.
- Beta Testing: conducted at one or more customer sites by the end-users; developers generally are not present.

Exhaustive Testing
- Exhaustive testing is impossible: testing by executing every statement and every possible path is impossible in practice.
- Therefore, testing must be based on a subset of possible test cases.
- Example: an application with 10 input fields, where each field can take six possible values, has 6^10 (about 60 million) input combinations. If it takes 1 second to run each test case, it will take about 2 years to complete testing this application.

Selective Testing

[Figure: a control-flow graph with one selected path highlighted]

Black-Box Testing

[Figure: the system as a black box; tests are derived from requirements, exercised through inputs and events, and judged by outputs]

Black-Box Testing

Concept
- Black-box testing relies on the specification of the system or component being tested.
- The system is a black box whose behavior can only be determined by studying its inputs and the related outputs.
- It is also called functional testing.

Black-Box Testing

Key Problem
- To select inputs that have a high probability of being members of the set Ie, the inputs causing anomalous behavior.
- In many cases, use previous experience and domain knowledge to identify test cases.

[Figure: within the input domain, the subset Ie of input test data causes anomalous behavior; within the output range, the corresponding subset Oe of output test results reveals the presence of defects]

Black-Box Testing

Equivalence Partitioning
- Input data to a program usually fall into a number of different classes or partitions with common characteristics.
  - Positive numbers, negative numbers, strings without blanks, ...
- Identify a set of these equivalence partitions which must be handled by the program.

Black-Box Testing

[Figure: the inputs and outputs of the system are each divided into partitions]

Black-Box Testing

Test Cases for Each Partition
- Choose particular test cases from each partition: test cases on the boundaries of the partitions and test cases close to the mid-point of the partition.
- Boundary values are often atypical and so are easily overlooked.
- Designers and programmers tend to consider typical values of inputs; these are tested by the mid-point cases.

Black-Box Testing: Example
- A program accepting 4 to 10 input values, each a 5-digit integer greater than 10000.
- Partitions on the number of input values: less than 4; between 4 and 10; more than 10. Boundary test cases: 3, 4, 10, 11.
- Partitions on each input value: less than 10000; between 10000 and 99999; more than 99999. Boundary test cases: 9999, 10000, 99999, 100000; mid-point: 50000.
- Test each partition with all instances of the partitions in the other classes.
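The partition and boundary values above can be written down directly as test inputs. This sketch checks a hypothetical validator for the "4 to 10 values, each in the 10000 to 99999 range" specification against the boundary cases (class and method names are ours):

```java
public class PartitionDemo {
    // Hypothetical implementation of the specification: accept 4 to 10
    // values, each between 10000 and 99999 inclusive.
    static boolean isValid(int[] values) {
        if (values.length < 4 || values.length > 10) return false;
        for (int v : values) {
            if (v < 10000 || v > 99999) return false;
        }
        return true;
    }

    // Helper to build an input of `count` copies of `value`.
    static int[] repeat(int value, int count) {
        int[] a = new int[count];
        java.util.Arrays.fill(a, value);
        return a;
    }

    public static void main(String[] args) {
        // Boundaries on the number of inputs: 3, 4, 10, 11
        System.out.println(isValid(repeat(50000, 3)));   // false (too few)
        System.out.println(isValid(repeat(50000, 4)));   // true
        System.out.println(isValid(repeat(50000, 10)));  // true
        System.out.println(isValid(repeat(50000, 11)));  // false (too many)
        // Boundaries on the values: 9999, 10000, 99999, 100000
        System.out.println(isValid(repeat(9999, 5)));    // false (below range)
        System.out.println(isValid(repeat(10000, 5)));   // true
        System.out.println(isValid(repeat(99999, 5)));   // true
        System.out.println(isValid(repeat(100000, 5)));  // false (above range)
    }
}
```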

White-box Testing

Concept
- A complementary approach to black-box testing, also called structural or glass-box testing.
- Analyze the code and use knowledge of the program structure to derive test data.

White-box Testing

Path Testing
- A white-box testing strategy whose objective is to exercise every independent execution path through the component.
- Uses the program flow graph, which is a skeletal model of all paths through the program.

White-box Testing

Flow Graph Representations
- if-then-else
- loop-while
- case-of

White-box Testing

void Binary_Search (elem key, elem *T, int size, boolean &found, int &L) {
  int bott, top, mid;
  bott = 0;
  top = size - 1;
  L = (top + bott) / 2;
  if (T[L] == key)
    found = true;
  else
    found = false;
  while (bott <= top && !found) {
    mid = (top + bott) / 2;
    if (T[mid] == key) {
      found = true;
      L = mid;
    } else
      if (T[mid] < key)
        bott = mid + 1;
      else
        top = mid - 1;
  } // while
}

White-box Testing

Independent Program Path
- A path which traverses at least one new edge in the flow graph:
  - 2, 3, 4, 8
  - 2, 3, 5, 6, 8
  - 2, 3, 5, 7, 8
- Executing all these paths means every statement in the routine has been executed at least once and every branch has been exercised for true and false conditions.
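To make the independent paths concrete, here is a Java translation of the Binary_Search routine above (our own sketch; the slide's version uses C-style reference parameters) with one test input per kind of path: key found immediately, key found after moving right, key found after moving left, and key absent.

```java
public class BinarySearchPaths {
    // Returns the index of key in the sorted array t, or -1 if absent.
    static int binarySearch(int[] t, int key) {
        int bott = 0, top = t.length - 1;
        while (bott <= top) {
            int mid = (top + bott) / 2;
            if (t[mid] == key) return mid;          // found
            else if (t[mid] < key) bott = mid + 1;  // search right half
            else top = mid - 1;                     // search left half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] t = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(t, 5));  // 2: found on the first probe
        System.out.println(binarySearch(t, 9));  // 4: exercises bott = mid + 1
        System.out.println(binarySearch(t, 1));  // 0: exercises top = mid - 1
        System.out.println(binarySearch(t, 4));  // -1: loop exits without a match
    }
}
```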

Model-Driven Test Design


Complexity of Testing Software
- No other engineering field builds products as complicated as software.
- Like other engineers, we must use abstraction to manage complexity.
- This is the purpose of the model-driven test design process: the model is an abstract structure.

Software Testing Foundations
- Testing can only show the presence of failures, not their absence.

Testing & Debugging
- Testing: evaluating software by observing its execution
- Test Failure: execution of a test that results in a software failure
- Debugging: the process of finding a fault given a failure
- Not all inputs will trigger a fault into causing a failure.

Fault & Failure Model (RIPR)
Four conditions are necessary for a failure to be observed:
1. Reachability: the location or locations in the program that contain the fault must be reached
2. Infection: the state of the program must be incorrect
3. Propagation: the infected state must cause some output or final state of the program to be incorrect
4. Revealability: the tester must observe part of the incorrect portion of the program state

RIPR Model

[Diagram: a test reaches the fault; the fault infects the program state, producing an incorrect program state; the incorrect state propagates to an incorrect final program state; a test oracle observes part of the final state and reveals the failure.]

Traditional Testing Levels
(illustrated on a program with a main class P containing classes A and B, with methods mA1(), mA2(), mB1(), mB2())
- Acceptance testing: is the software acceptable to the user?
- System testing: test the overall functionality of the system
- Integration testing: test how modules interact with each other
- Module testing (developer testing): test each class, file, module, or component
- Unit testing (developer testing): test each unit (method)
- This view obscures underlying similarities.

Object-Oriented Testing Levels
(on the same classes A and B)
- Inter-class testing: test multiple classes together
- Intra-class testing: test an entire class as sequences of calls
- Inter-method testing: test pairs of methods in the same class
- Intra-method testing: test each method individually

Coverage Criteria
- Even small programs have too many inputs to fully test them all.
  - private static double computeAverage (int A, int B, int C)
  - On a 32-bit machine, each variable has over 4 billion possible values; the input space might as well be infinite.
- Testers search a huge input space, trying to find the fewest inputs that will find the most problems.
- Coverage criteria give structured, practical ways to search the input space: thoroughly, with not much overlap among the tests.

Advantages of Coverage Criteria
- Maximize the "bang for the buck"
- Provide traceability from software artifacts (source, requirements, design models, ...) to tests
- Make regression testing easier
- Give testers a stopping rule for when testing is finished

Test Requirements and Criteria
- Test Criterion: a collection of rules and a process that define test requirements
  - Cover every statement
  - Cover every functional requirement
- Test Requirements: specific things that must be satisfied or covered during testing
  - Each statement might be a test requirement
  - Each functional requirement might be a test requirement
- Testing researchers have defined dozens of criteria, but they are all really just a few criteria on four types of structures:
  1. Input domains
  2. Graphs
  3. Logic expressions
  4. Syntax descriptions

Old View: Colored Boxes
- Black-box testing: derive tests from external descriptions of the software, including specifications, requirements, and design
- White-box testing: derive tests from the source code internals of the software, specifically including branches, individual conditions, and statements
- Model-based testing: derive tests from a model of the software (such as a UML diagram)
- Model-Driven Test Design (MDTD) makes these distinctions less important. The more general question is: from what abstraction level do we derive tests?

Model-Driven Test Design
- Test design is the process of designing input values that will effectively test software.
- Test design is one of several activities for testing software: the most mathematical and the most technically challenging.

Types of Test Activities
- Testing can be broken up into four general types of activities:
  1. Test Design: (a) criteria-based, (b) human-based
  2. Test Automation
  3. Test Execution
  4. Test Evaluation
- Each type of activity requires different skills, background knowledge, education and training.
- No reasonable software development organization uses the same people for requirements, design, implementation, integration and configuration control.

1. Test Design (a) Criteria-Based
- Design test cases to satisfy coverage criteria or another engineering goal.
- This is the most technical job in software testing.
- Requires knowledge of discrete math, programming, and testing: much of a traditional CS degree.
- This is intellectually stimulating, rewarding, and challenging.
- Test design is analogous to software architecture on the development side.
- Using people who are not qualified to design tests is a sure way to get ineffective tests.

1. Test Design (b) Human-Based
- Design test cases based on domain knowledge of the program and human knowledge of testing.
- This is much harder than it may seem to developers.
- Criteria-based approaches can be blind to special situations.
- Requires knowledge of the domain, testing, and user interfaces.
- Requires almost no traditional CS: a background in the domain of the software is essential; an empirical background is very helpful (biology, psychology, ...); a logic background is very helpful (law, philosophy, math, ...).
- This is intellectually stimulating, rewarding, and challenging.

2. Test Automation
- Embed test cases into executable scripts.
- Requires knowledge of programming, and very little theory.
- Programming is out of reach for many domain experts.

3. Test Execution
- Run tests on the software and record the results.
- This is easy and trivial if the tests are well automated; it requires only basic computer skills (interns, employees with no technical background).
- If, for example, GUI tests are not well automated, this requires a lot of manual labor.
- Test executors have to be very careful and meticulous with bookkeeping (most of the tools have this functionality).

4. Test Evaluation
- Evaluate the results of testing and report to developers.
- Requires knowledge of the domain, testing, user interfaces and psychology.
- Usually requires almost no traditional CS: a background in the domain of the software is essential; an empirical background is very helpful (biology, psychology, ...); a logic background is very helpful (law, philosophy, math, ...).

Applying Test Activities
To use our people effectively and to test efficiently, we need a process that lets test designers raise their level of abstraction.

Model-Driven Test Design

[Diagram: at the design abstraction level, a software artifact is abstracted into a model/structure, from which test requirements and then refined requirements/test specs are derived; at the implementation abstraction level, these become input values, test cases, test scripts, test results, and finally a pass/fail verdict.]

Small Illustrative Example

Software Artifact: Java method

/**
 * Return the index of node n at the
 * first position it appears,
 * -1 if it is not present
 */
public int indexOf (Node n)
{
  for (int i=0; i < path.size(); i++)
    if (path.get(i).equals(n))
      return i;
  return -1;
}

Control Flow Graph (nodes): 1: i = 0; 2: i < path.size(); 3: the if test; 4: return i; 5: return -1

Example (cont.)

Graph (abstract version): nodes 1, 2, 3, 4, 5; edges 1→2, 2→3, 3→2, 3→4, 2→5; initial node: 1; final nodes: 4, 5.

Six requirements for Edge-Pair Coverage:
1. [1, 2, 3]
2. [1, 2, 5]
3. [2, 3, 4]
4. [2, 3, 2]
5. [3, 2, 3]
6. [3, 2, 5]

Test paths:
- [1, 2, 5]
- [1, 2, 3, 2, 5]
- [1, 2, 3, 2, 3, 4]

Find values to execute these test paths.
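Values for the three test paths can be read directly off the method: an empty list, a one-element list without n, and a two-element list with n in the second position. A runnable sketch (this class uses Integer elements instead of the slide's Node type, purely for illustration):

```java
import java.util.List;

public class PathSearch {
    private final List<Integer> path;

    PathSearch(List<Integer> path) { this.path = path; }

    // The indexOf method from the slide, specialized to Integer.
    public int indexOf(Integer n) {
        for (int i = 0; i < path.size(); i++)
            if (path.get(i).equals(n))
                return i;
        return -1;
    }

    public static void main(String[] args) {
        // Test path [1, 2, 5]: empty list, the loop body never runs
        System.out.println(new PathSearch(List.of()).indexOf(7));      // -1
        // Test path [1, 2, 3, 2, 5]: one non-matching element
        System.out.println(new PathSearch(List.of(3)).indexOf(7));     // -1
        // Test path [1, 2, 3, 2, 3, 4]: found on the second iteration
        System.out.println(new PathSearch(List.of(3, 7)).indexOf(7));  // 1
    }
}
```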

Criteria-Based Test Design


Changing Notions of Testing
- The old view focused on testing at each software development phase as being very different from the other phases: unit, module, integration, system.
- The new view is in terms of structures and criteria: input space, graphs, logical expressions, syntax.

New: Test Coverage Criteria
- A tester's job is simple: define a model of the software, then find ways to cover it.
- Test Requirement: a specific element of a software artifact that a test case must satisfy or cover.
- Coverage Criterion: a rule or collection of rules that impose test requirements on a test set.
- Testing researchers have defined dozens of criteria, but they are all really just a few criteria on four types of structures.

Source of Structures
- These structures can be extracted from lots of software artifacts.
  - Graphs can be extracted from UML use cases, finite state machines, source code, ...
  - Logical expressions can be extracted from decisions in program source, guards on transitions, conditionals in use cases, ...
- This is not the same as model-based testing, which derives tests from a model that describes some aspects of the system under test.
  - The model usually describes part of the behavior.
  - The source is explicitly not considered a model.

Criteria Based on Structures

Structures: four ways to model software
1. Input domain characterization (sets), e.g. A: {0, 1, >1}; B: {600, 700, 800}; C: {swe, cs, isa, infs}
2. Graphs
3. Logical expressions, e.g. (not X or not Y) and A and B
4. Syntactic structures (grammars), e.g. if (x > y) z = x - y; else z = 2 * x;

Example: Jelly Bean Coverage
- Flavors: 1. Lemon 2. Pistachio 3. Cantaloupe 4. Pear 5. Tangerine 6. Apricot
- Colors: 1. Yellow (Lemon, Apricot) 2. Green (Pistachio) 3. Orange (Cantaloupe, Tangerine) 4. White (Pear)
- Possible coverage criteria:
  1. Taste one jelly bean of each flavor
  2. Taste one jelly bean of each color
- Deciding whether a yellow jelly bean is Lemon or Apricot is a controllability problem.

Coverage
- Given a set of test requirements (TR) for a coverage criterion C, a test set T satisfies the criterion if and only if for every test requirement tr in TR, there is at least one test t in T such that t satisfies tr.
- Infeasible test requirements: test requirements that cannot be satisfied.
  - No test case values exist that meet the test requirements (example: dead code).
  - Detection of infeasible test requirements is formally undecidable for most test criteria.
- As a result, 100% coverage is impossible in practice most of the time.
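The dead-code example can be made concrete: in this small hypothetical method (ours, not from the slides), the statement-coverage requirement for the last line is infeasible, because no input can reach it.

```java
public class InfeasibleDemo {
    static String classify(int x) {
        if (x > 0 || x <= 0)      // true for every int value
            return "reachable";
        return "unreachable";     // dead code: no test input can cover this
                                  // statement, so its coverage requirement
                                  // is infeasible
    }

    public static void main(String[] args) {
        System.out.println(classify(5));   // reachable
        System.out.println(classify(0));   // reachable
        System.out.println(classify(-3));  // reachable
    }
}
```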

More Jelly Beans
- T1 = {three Lemons, one Pistachio, two Cantaloupes, one Pear, one Tangerine, four Apricots}
  - Does test set T1 satisfy the flavor criterion?
- T2 = {one Lemon, two Pistachios, one Pear, three Tangerines}
  - Does test set T2 satisfy the flavor criterion?
  - Does test set T2 satisfy the color criterion?

Coverage Level
- The ratio of the number of test requirements satisfied by T to the size of TR.
- T2 on the previous slide satisfies 4 of the 6 flavor test requirements: a coverage level of 4/6, about 67%.

Comparing Criteria with Subsumption
- Subsumption: a test criterion C1 subsumes C2 if and only if every set of test cases that satisfies criterion C1 also satisfies C2.
  - Must be true for every set of test cases.
- Examples:
  - The flavor criterion on jelly beans subsumes the color criterion: if we taste every flavor, we taste one of every color.
  - If a test set has covered every branch in a program (satisfied the branch criterion), then the test set is guaranteed to also have covered every statement.

Advantages of Criteria-Based Test Design
- Criteria maximize the "bang for the buck": fewer tests that are more effective at finding faults; a comprehensive test set with minimal overlap.
- Traceability from software artifacts to tests: the "why" for each test is answered.
- Built-in support for regression testing.
- A stopping rule for testing: advance knowledge of how many tests are needed.

Characteristics of a Good Coverage Criterion
1. It should be fairly easy to compute test requirements automatically.
2. It should be efficient to generate test cases.
3. The resulting tests should reveal as many faults as possible.

Test Coverage Criteria
- Traditional software testing is expensive and labor-intensive.
- Formal coverage criteria are used to decide which test inputs to use. They provide:
  - a greater likelihood that the tester will find problems
  - greater assurance that the software is of high quality and reliability
  - a goal or stopping rule for testing
- Criteria make testing more efficient and effective.

Structures for Criteria-Based Testing

Four structures for modeling software:
1. Input Space
2. Graphs: applied to source, specs, design, use cases
3. Logic: applied to source, specs, FSMs, DNF
4. Syntax: applied to source, models, integration, input
