
Software Testing

Naresh Chauhan, Assistant Professor in the Department of Computer Engineering at the YMCA University of Science and Technology, Faridabad

© Oxford University Press 2011. All rights reserved.
Chapter 12
Efficient Test Suite Management

Objectives

• Why the test suite grows as the software evolves.

• Why the test suite becomes too large to execute in full, creating the need to
minimize the test cases.

• Factors to be considered while minimizing the test cases.

• Definition of the test suite minimization problem.

• Test suite prioritization as a method to reduce the test cases.

• Prioritization Techniques
Minimizing the Test Suite and its Benefits
Why?
• The release date of the product is near.
• Limited staff to execute all the test cases.
• Limited test equipment or unavailable testing tools.

Benefits
• Redundant test cases are eliminated.
• Costs are lowered by reducing the test suite to a minimal subset.
• A reduction in the size of the test suite decreases both the overhead
of maintaining the test suite and the number of test cases that
must be rerun after changes are made to the software, thereby
reducing the cost of regression testing.

Defining the Test Suite Minimization Problem
• Given: A test suite TS; a set of test-case requirements r1,
r2, …, rn that must be satisfied to provide the desired
testing coverage of the program; and subsets of TS, T1,
T2, …, Tn, one associated with each of the ri's, such
that any one of the test cases tj belonging to Ti can be
used to test ri.

• Problem: Find a representative set of test cases from TS
that satisfies all of the ri's.
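
Finding the smallest such representative set is an instance of the minimum set-cover problem, which is NP-complete, so greedy heuristics are commonly used in practice. Below is a minimal sketch in Python, assuming the mapping from each test case to the requirements it satisfies is already available (names and data are illustrative, not from the text):

# Greedy heuristic for test suite minimization.
def minimize_suite(coverage):
    """coverage: dict mapping each test case to the set of
    requirements it satisfies. Returns a representative subset."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

suite = {"t1": {"r1", "r2"},
         "t2": {"r2", "r3", "r4"},
         "t3": {"r1", "r4"}}
print(minimize_suite(suite))  # ['t2', 't1']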

Test Suite Prioritization

• Priority 1: The test case must be executed; otherwise, there may be
serious consequences for the release of the product.

• Priority 2: The test case may be executed, if time permits.

• Priority 3: The test case is not important prior to the current release.
It may be tested shortly after the release of the current version of the
software.

• Priority 4: The test case is never important, as its impact is nearly
negligible.

Types of Test Case Prioritization

General Test Case Prioritization

Version-Specific Test Case Prioritization

Prioritization for Regression Test Suite

Prioritization for System Test Suite

Prioritization Techniques

Coverage-based Test Case Prioritization

Total Statement Coverage Prioritization

Additional Statement Coverage Prioritization

Total Branch Coverage Prioritization


Additional Branch Coverage Prioritization
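
In total statement coverage prioritization, test cases are simply sorted in decreasing order of the number of statements they cover; additional statement coverage prioritization instead picks, at each step, the test case that covers the most statements not yet covered by the tests selected so far. A minimal sketch of the additional-coverage variant follows (the coverage data and names are illustrative); the branch coverage versions are analogous, with branches in place of statements:

# Additional statement coverage prioritization (greedy).
def additional_coverage_order(stmt_coverage):
    """stmt_coverage: dict mapping test case -> set of covered statements."""
    remaining = dict(stmt_coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No remaining test adds new coverage: reset, as is
            # conventional, and fall back to total coverage.
            covered = set()
            best = max(remaining, key=lambda t: len(remaining[t]))
        order.append(best)
        covered |= remaining.pop(best)
    return order

cov = {"t1": {1, 2, 3}, "t2": {1, 4}, "t3": {4, 5, 6, 7}}
print(additional_coverage_order(cov))  # ['t3', 't1', 't2']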

• Total Fault-Exposing-Potential (FEP) Prioritization
• Given program P and test suite T:
• First, create a set of mutants N = {n1, n2, …, nm} for P, noting which
statement sj in P contains each mutant.
• Next, for each test case ti ∈ T, execute each mutant version nk of P on
ti, noting whether ti kills that mutant.
• Calculate FEP(sj, ti) as the ratio:
mutants of sj killed by ti / total number of mutants of sj.
• To perform total FEP prioritization, given these FEP(sj, ti) values,
calculate an award value for each test case ti ∈ T by summing the
FEP(sj, ti) values for all statements sj in P. Given these award
values, prioritize the test cases by sorting them in order of
descending award value.
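
A minimal sketch of this award-value computation, assuming the mutation results are already available as (mutants killed, total mutants) counts per statement for each test case (the data structure and values are illustrative):

# Total FEP prioritization: sort tests by descending award value.
def total_fep_order(kill_counts):
    """kill_counts: dict mapping test case -> dict mapping statement
    -> (mutants of that statement killed by the test, total mutants)."""
    def award(t):
        # Award value of t: sum of FEP(s, t) over all statements s.
        return sum(killed / total
                   for killed, total in kill_counts[t].values() if total)
    return sorted(kill_counts, key=award, reverse=True)

kills = {"t1": {"s1": (1, 4), "s2": (1, 2)},   # award = 0.25 + 0.5
         "t2": {"s1": (4, 4), "s2": (1, 2)}}   # award = 1.0 + 0.5
print(total_fep_order(kills))  # ['t2', 't1']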

Risk-based Prioritization

Probability of occurrence / Fault Likelihood


Severity of Impact / Failure Impact
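
Risk exposure is typically computed as the product of these two factors, and test cases are then executed in decreasing order of the exposure of the features they cover. A minimal sketch, with purely illustrative test names and values:

# Risk-based prioritization: exposure = fault likelihood x failure impact.
test_risks = {"t_payment": (0.6, 10),   # (fault likelihood, failure impact)
              "t_login":   (0.8, 9),
              "t_report":  (0.3, 4)}

by_exposure = sorted(test_risks,
                     key=lambda t: test_risks[t][0] * test_risks[t][1],
                     reverse=True)
print(by_exposure)  # ['t_login', 't_payment', 't_report']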

Prioritization using Relevant Slices

• Execution Slice
• The set of statements executed under a test case is
called the execution slice of the program.
• Dynamic Slice
• The set of statements that are executed under a test case
and have an effect on the program output under that test
case is called the dynamic slice of the program with respect
to the output variables.
• Relevant Slice
• The set of statements that were executed under a test
case and did not affect the output, but have the potential to
affect the output produced by the test case, is known as
the relevant slice of the program.
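
To make the three notions concrete, consider the small hypothetical fragment below, run with the test input x = 2; the slice memberships are worked out in the comments:

def f(x):
    a = x + 1      # S1: executed; its value reaches the output
    b = 0          # S2: executed, but overwritten before the return
    if x > 0:      # S3: executed predicate; controls S4
        b = a * 2  # S4: executed; defines the returned value
    return b       # S5: the output statement

# For the test case x = 2:
# Execution slice: {S1, S2, S3, S4, S5} -- every executed statement.
# Dynamic slice:   {S1, S3, S4, S5}     -- S2 did not affect the output.
# The relevant slice additionally covers S2: had the predicate at S3
# evaluated the other way, S2's value would have reached the output,
# so S2 can potentially affect the output of this test case.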

Prioritization using Relevant Slices

Jeffrey and Gupta enhanced the approach of relevant slicing
and stated: "If a modification in the program has to affect the
output of a test case in the regression test suite, it must affect
some computation in the relevant slice of the output for that test
case." Thus, they applied a heuristic for prioritizing test cases
such that a test case with a larger number of statements in its
relevant slice gets a higher weight and, therefore, a higher
priority for execution.
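
A minimal sketch of this heuristic, assuming the relevant slice of each test case has already been computed (the statement sets here are illustrative):

# Prioritize tests by the size of their relevant slices, largest first.
relevant_slices = {"t1": {1, 2, 5, 7, 8},   # test -> statements in slice
                   "t2": {1, 3},
                   "t3": {2, 4, 6, 9}}

prioritized = sorted(relevant_slices,
                     key=lambda t: len(relevant_slices[t]),
                     reverse=True)
print(prioritized)  # ['t1', 't3', 't2']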

Prioritization based on Requirements

Hema Srikanth et al. [136] have applied a requirements
engineering approach for prioritizing system test cases, known
as PORT (Prioritization of Requirements for Test). They
considered four factors for analyzing and measuring the
criticality of requirements:

• Customer-assigned priority of requirements
• Requirement volatility
• Developer-perceived implementation complexity
• Fault proneness of requirements

PFVi = ∑j (FVij × FWj)

where FVij (factor value) is the value of factor j corresponding to
requirement i, and FWj (factor weight) is the weight given to factor j.
Measuring Effectiveness of Prioritized Test Suite

Elbaum et al. [133, 134] developed the APFD (Average Percentage of
Faults Detected) metric, which measures the weighted average of the
percentage of faults detected during the execution of the test suite. Its
value ranges from 0 to 100, where a higher value means a faster
fault-detection rate.

APFD is calculated as given below:

APFD = 1 − (TF1 + TF2 + … + TFm)/(n × m) + 1/(2n)

where TFi is the position of the first test in test suite T that exposes
fault i, m is the total number of faults exposed in the system or module
under T, and n is the total number of test cases in T.
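
A minimal sketch of the APFD computation (the formula yields a fraction; multiply by 100 to express it as a percentage):

# APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n)
def apfd(first_positions, n):
    """first_positions: list of TF_i values, the 1-based position of the
    first test in the prioritized suite that exposes fault i;
    n: total number of test cases in the suite."""
    m = len(first_positions)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

# Example: 10 tests; 4 faults first exposed at positions 1, 3, 3, and 7.
print(apfd([1, 3, 3, 7], n=10))  # 0.7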

