
ST. ANN'S COLLEGE OF ENGINEERING & TECHNOLOGY, CHIRALA (CSE)

SOFTWARE TESTING METHODOLOGIES


UNIT -4
Syllabus: Validation activities: Unit testing, Integration testing, Function testing, System testing,
Acceptance testing.
Regression testing: Progressive vs regressive testing, Regression testability, Objectives of
regression testing, When is regression testing done?, Regression testing types, Regression testing
techniques.
Validation Activities:
4.1. UNIT VALIDATION TESTING:
A unit is the smallest testable part of the software system. Unit testing is done to verify that the
lowest independent entities in the software work correctly. The smallest testable part is
isolated from the remainder of the code and tested to determine whether it works correctly.
Why is Unit Testing important?
Suppose you have two units and, to save time, you do not test them individually but only as an
integrated system. If an error is then found in the integrated system, it becomes difficult to
determine in which unit the error occurred. Hence, unit testing is mandatory before integrating
the units.

When a developer is coding the software, the dependent modules may not yet be complete, so
they are not available for testing. In such cases, developers use stubs and drivers to simulate the
called (stub) and calling (driver) units. Unit testing thus requires stubs and drivers: a stub
simulates the called unit and a driver simulates the calling unit.
Drivers:

Drivers are dummy modules that simulate high-level modules. They allow testing of
the lower levels of the code when the upper levels of the code are not yet developed. For
example, in the following figure Module B is to be tested, but Module A, which calls it, is not
yet completed; to test B, we use a driver module in place of A.
A test driver may take inputs in the following form and call the unit to be
tested:
• It may hardcode the inputs as parameters of the calling unit.
• It may take the inputs from the user.
• It may read the inputs from a file.
Stubs: A stub can be defined as a piece of software that works
similarly to a unit which is referenced by the unit being tested,
but it is much simpler than the actual unit.
The module under test may also call some other modules
which are not ready at the time of testing. Therefore, these
modules need to be simulated for testing. In most cases,
dummy modules are prepared in place of the actual modules,
which are not ready, for these subordinate modules. For
example, if Module B needs to be tested but Module D and
Module E are not ready, we use a stub for Module D and a stub for Module E.
Stub characteristics:
• Stub is a place holder for the actual module to be called.
• It does not perform any action of its own and returns to the calling unit.
• We may include a display instruction as a trace message in the body of stub.


• A constant or null must be returned from the body of stub to the calling module.
• Stub may simulate exceptions or abnormal conditions, if required.
Benefits of designing stubs and drivers:
• Stubs allow the programmer to call a method in the code being developed, even if the
method does not have the desired behaviour yet.
• By using stubs and drivers effectively, we can cut down total debugging time by testing
small parts of a program individually, helping us to narrow down problems before they
expand.
• Stubs and drivers can also be an effective tool for demonstrating progress in a business
environment.
Example
main()
{
    int a, b, sum, diff, mul;
    scanf("%d%d", &a, &b);
    sum = calsum(a, b);
    diff = caldiff(a, b);
    mul = calmul(a, b);
    printf("The sum is %d", sum);
}

calsum(int x, int y)
{
    int d;
    d = x + y;
    return (d);
}

Suppose the main() module is not ready when the calsum() module is to be tested: design a
driver module for main(). Modules caldiff() and calmul() are not ready when called in main():
design stubs for those two modules.
Driver for main() Module:
driver_main()
{
    int a, b, sum;
    scanf("%d%d", &a, &b);
    sum = calsum(a, b);
    printf("The sum is %d", sum);
}
Stub for caldiff() module:
caldiff(int x, int y)
{
    printf("Difference calculating module");
    return 0;
}
Stub for calmul() module:
calmul(int x, int y)
{
    printf("Multiplication calculating module");
    return 0;
}
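For reference, the pieces above can be assembled into one self-contained, compilable sketch. The driver here hardcodes its inputs as parameters (one of the driver input options listed earlier) instead of calling scanf(), and explicit int return types are added so the file compiles cleanly:

```c
#include <stdio.h>

int calsum(int x, int y) {        /* the real unit under test */
    return x + y;
}

int caldiff(int x, int y) {       /* stub: trace message + constant return */
    printf("Difference calculating module\n");
    return 0;
}

int calmul(int x, int y) {        /* stub: trace message + constant return */
    printf("Multiplication calculating module\n");
    return 0;
}

int driver_main(int a, int b) {   /* driver simulating the unfinished main() */
    int sum = calsum(a, b);       /* exercise the unit under test */
    caldiff(a, b);                /* these calls land in the stubs */
    calmul(a, b);
    printf("The sum is %d\n", sum);
    return sum;
}
```

Because the driver takes its inputs as parameters, the same unit can be exercised with many input pairs without any user interaction.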


4.2 INTEGRATION TESTING:


The main goal of integration testing is to test the interfaces between the
units/modules. The individual modules are first tested in isolation. Once the modules are unit
tested, they are integrated one by one, till all the modules are integrated, to check the
combined behaviour and to validate whether the requirements are implemented correctly or
not.
Integration Testing is necessary for the following reasons:
• It exposes inconsistency between the modules such as improper call or return
sequences.
• Data can be lost across an interface.
• One module when combined with another module may not give the desired result.
• Data types and their valid ranges may mismatch between the modules.
There are three approaches to integration testing:

4.2.1. DECOMPOSITION BASED TESTING:
The idea for this type of integration is based on the decomposition of the design into functional
components or modules. Decomposition-based integration assumes that all the modules have
been unit tested in isolation.
Integration methods in decomposition-based integration depend on the method on which
the activity of integration is based.
• One method is to integrate all the modules together and then test the whole.
• Another method is to integrate the modules one by one and test them
incrementally.
Based on these methods, integration testing methods are classified into two categories: (a)
non-incremental and (b) incremental.
Non-Incremental integration testing:
• It is also known as big bang integration testing.
• In Big-Bang testing all the modules of the software are integrated simultaneously thus
creating a big collection of modules and all the modules are tested together, therefore
the name Big-Bang is given.
• Big Bang Integration Testing is an integration testing strategy wherein all units are linked
at once, resulting in a complete system. When this type of testing strategy is adopted, it
is difficult to isolate any errors found, because attention is not paid to verifying the
interfaces across individual units.


Consider the above given diagram of a system. There are 6 modules named Module 1, Module
2, Module 3, Module 4, Module 5, and Module 6. These six modules are integrated
simultaneously and all six modules are tested together. As these modules have not been
tested individually, the chances of failure increase.
Advantages of big bang integration testing:
• This type of testing requires very little or no planning.
• All the modules are completed before the integration testing starts.
Disadvantages of big bang integration testing:
• Since all the modules are tested together chances of failure increases.
• Difficult to find the root cause of the failure if failure occurs.
• Difficult to detach the modules if any error occurs.
• Time consuming process.
Incremental integration testing:
In incremental integration testing, the developers integrate the modules one by one using
stubs or drivers to uncover the defects. In contrast, big bang is the integration testing
technique in which all the modules are integrated in one shot.
Benefits
• Does not require many drivers and stubs.
• Interfacing errors are uncovered earlier.
• It is easy to localize the errors since modules are combined one by one. The first suspect
is the recently added module. Thus, debugging becomes easy.
Types of incremental integration testing:
Top-down Integration - This type of integration testing takes place from top to bottom.
Unavailable components or systems are substituted by stubs.
Bottom-up Integration - This type of integration testing takes place from bottom to top.
Unavailable components or systems are substituted by drivers.
Top-down integration testing:
• The strategy in top-down integration is to look at the design hierarchy from top to
bottom.
• There are two types of approaches:
– Depth-first integration
– Breadth-first integration
Depth First Integration:
In this type, all modules on a major control path of the design hierarchy are integrated first.
Modules 1, 2, 6, 7/8 will be integrated first. Next, modules 1, 3, 4/5 will be integrated.
Breadth First Integration:
In this type, all modules directly subordinate at each level, moving across the design hierarchy
horizontally, are integrated first. Modules 2 and 3 are integrated first; next 6, 4, 5; then
modules 7 and 8.
Top down integration procedure:
The procedure for top-down integration process is:
1. Start with the top or initial module in the software. Substitute the stubs for all the
subordinate modules of top module. Test the top module.
2. After testing the top module, stubs are replaced one at a time with the actual modules
for integration.
3. Perform testing on this recently integrated environment.
4. Regression testing may be conducted to ensure that new errors have not appeared.
5. Repeat steps 2-4 for the whole design hierarchy.
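Step 2 of the procedure (replacing a stub with the actual module) can be sketched by binding the subordinate module through a function pointer, so the top module's code never changes; all names here are hypothetical:

```c
#include <stdio.h>

static int stub_subordinate(int x) {
    (void)x;                                  /* stub ignores its input */
    printf("stub for subordinate called\n");  /* trace message */
    return 0;                                 /* stubs return a constant */
}

static int real_subordinate(int x) {
    return x * 2;                             /* the actual module, once developed */
}

/* the top module initially calls the stub */
static int (*subordinate)(int) = stub_subordinate;

int top_module(int x) {
    return subordinate(x) + 1;                /* module under test */
}

/* step 2: swap the stub for the actual module, then retest (steps 3-4) */
void integrate_subordinate(void) {
    subordinate = real_subordinate;
}
```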


Drawbacks of Top down integration:
• Stubs must be prepared as required for testing one module.
• Stubs are often more complicated than they first appear.
• Before the I/O functions are added, the representation of test cases in stubs can be
difficult.
Bottom-up integration testing:
It starts with the terminal or modules at the lowest level in the software structure. After testing
these modules, they are integrated and tested moving from bottom to top level.
Steps in Bottom-up integration:
1. Start with the lowest-level modules in the design hierarchy; these are the modules from
which no other module is called.
2. Look for the super-ordinate module which calls the module selected in step 1. Design a
driver module for the super-ordinate module.
3. Test the module selected in step 1 with the driver designed in step 2.
4. The next module to be tested is any module whose subordinate modules have all been
tested.
5. Repeat steps 2 to 4 and move up in the design hierarchy.
6. Whenever the actual modules are available, replace the drivers with the actual modules
and test again.


Comparison between top-down and bottom-up testing:

Practical Approach for Integration Testing:


There is no single strategy adopted in industry practice. For integrating the modules, one cannot
rely on a single strategy. There are situations, depending upon the project in hand, which will
force us to integrate the modules by combining top-down and bottom-up techniques. This
combined approach is sometimes known as Sandwich Integration testing. The practical approach
for adopting sandwich testing is driven by the following factors:
 Priority
 Availability
Advantages:
• Top and bottom layers can be tested in parallel
• Fewer stubs and drivers needed
• Easy to construct test cases
• Better coverage control
• Integration is done as soon as a component is implemented
Disadvantages:
• Still requires throw-away code (stubs and drivers)
• Partial big bang integration
• Hard to isolate problems
4.2.2.CALL GRAPH BASED INTEGRATION:
A call graph is a directed graph wherein nodes are modules or units and a directed edge from one
node to another node means one module has called another module. The call graph can be
captured in a matrix form which is known as adjacency matrix.

Figure.Example of call graph


Figure. Adjacency matrix


There are two types of integration testing based on call graph:
• Pair wise Integration
• Neighbourhood Integration.
Pair-wise Integration:
• In pair-wise integration, we eliminate the need for stubs and drivers by using the real
code instead. Testing all modules at once, as in big bang, creates a problem-isolation
problem due to the large number of modules; pair-wise integration instead tests one
pair of units per session.
• By pairing up the modules using the edges, we will have a number of test sessions equal
to the number of edges that exist in the call graph.
• Since the edges correspond to functions or procedures invoked, in a typical system this
implies many test sessions.
• For pair-wise integration, the number of integration test sessions is the number of edges.
Neighbourhood Integration: While pair-wise integration eliminates the need for stubs and
drivers, it still requires many test sessions. As an improvement over pair-wise integration,
neighbourhood integration requires fewer test sessions. In neighbourhood integration, we
create a subsystem for a test session by taking a target node and grouping all the nodes near
it, where "near" means the nodes linked to the target node as an immediate predecessor or
successor. By doing this we considerably reduce the number of test sessions required.
The total number of test sessions in neighbourhood integration can be calculated as:
Neighbourhoods = nodes - sink nodes = 20 - 10 = 10 (for the example call graph)
where a sink node is an instruction in a module at which execution terminates.
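Both session counts can be computed directly from the adjacency matrix of the call graph. The 5-module graph below is a hypothetical example (A calls B and C; B calls D; C calls D and E):

```c
#define N 5  /* modules A..E of a hypothetical call graph */

/* adjacency[i][j] == 1 means module i calls module j */
static const int adjacency[N][N] = {
    /* A */ {0, 1, 1, 0, 0},
    /* B */ {0, 0, 0, 1, 0},
    /* C */ {0, 0, 0, 1, 1},
    /* D */ {0, 0, 0, 0, 0},   /* leaf: calls nothing */
    /* E */ {0, 0, 0, 0, 0},   /* leaf: calls nothing */
};

/* pair-wise integration: one test session per edge of the call graph */
int pairwise_sessions(void) {
    int edges = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            edges += adjacency[i][j];
    return edges;
}

/* neighbourhood integration: sessions = nodes - sink nodes,
   where a sink node has no outgoing edges (it calls no other module) */
int neighbourhood_sessions(void) {
    int sinks = 0;
    for (int i = 0; i < N; i++) {
        int out = 0;
        for (int j = 0; j < N; j++)
            out += adjacency[i][j];
        if (out == 0)
            sinks++;
    }
    return N - sinks;
}
```

For this graph there are 5 edges (5 pair-wise sessions) but only 3 neighbourhoods, illustrating why neighbourhood integration needs fewer sessions.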


4.2.3. PATH BASED INTEGRATION:


We need to understand the following definitions for path-based integration:
Source Node: it is an instruction in the module at which the execution starts or resumes. The
nodes where the control is being transferred after calling the module are also source nodes.
Sink Node: it is an instruction in a module at which the execution terminates. The nodes from
which the control is transferred are also sink nodes.
Module Execution Path (MEP): it is a path consisting of a set of executable statements
within a module.
Message: when the control from one unit is transferred to another unit then the programming
language mechanism used to do this is known as message.
MM-Path: it is a path consisting of MEPs and messages. The path shows the sequence of
executable statements.
MM-Path Graph: it can be defined as an extended flow graph where nodes are MEPs and edges
are messages.


Figure. MEP graph
4.3. FUNCTION TESTING:
The process of attempting to detect discrepancies between the functional specifications of a
software and its actual behavior.
The function test must determine if each component or business event:
– performs in accordance to the specifications,
– responds correctly to all conditions that may be presented by incoming events /
data,
– moves data correctly from one business event to the next (including data stores),
and


– business events are initiated in the order required to meet the business objectives
of the system.
An effective function test cycle must have a defined set of processes and deliverables. The
primary processes / deliverables for requirements based function test are:
Test Planning: During planning, the test leader with assistance from the test team defines the
scope, schedule, and deliverables for the function test cycle.
Partitioning / Functional Decomposition: Functional decomposition of a system is the breakdown
of a system into functional components or functional areas.
Requirement Definition: The testing organization needs specified requirements in the form of
proper documents to proceed with the function test.
Test case design: A tester designs and implements a test case to validate that the product
performs in accordance with the requirements.
Traceability matrix formation: Test cases need to be traced / mapped back to the appropriate
requirement. A function coverage matrix is prepared. This matrix is a table, listing specific
functions to be tested, the priority for testing each function, and test cases that contain tests for
each function.
Functions/Features   Priority   Test Cases
F1                   3          T2, T4, T6
F2                   1          T1, T3, T5
Table: Function coverage matrix
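As a sketch, the function coverage matrix above can also be held as a plain table in code so that test cases can be traced back to a function programmatically; the rows mirror the example table, and all names are hypothetical:

```c
#include <string.h>
#include <stddef.h>

struct coverage_row {
    const char *function;    /* specific function to be tested */
    int priority;            /* 1 = highest testing priority */
    const char *test_cases;  /* test cases that exercise this function */
};

static const struct coverage_row matrix[] = {
    {"F1", 3, "T2,T4,T6"},
    {"F2", 1, "T1,T3,T5"},
};

/* trace a function back to the test cases that cover it */
const char *tests_for(const char *function) {
    for (size_t i = 0; i < sizeof matrix / sizeof matrix[0]; i++)
        if (strcmp(matrix[i].function, function) == 0)
            return matrix[i].test_cases;
    return NULL;  /* no entry: a traceability gap for this function */
}
```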
Test case execution: As in all the phases of testing, an appropriate set of test cases need to be
executed and the results of those test cases recorded.

4.4. SYSTEM TESTING:
It is the process of attempting to demonstrate that a program or system does not meet its
original requirements and objectives, as stated in the requirement specification. System testing
is actually a series of different tests to test the whole system on various grounds where bugs
have the probability to occur. The integrated system is passed through various tests based on
these grounds, depending on the environment and type of project. After passing through these
tests, the resulting system is ready for acceptance testing, which involves the user as shown in
the figure. But it is difficult to test the system on various grounds, since there is no single
methodology for system testing. One solution to this problem is to think from the perspective
of the user and the problem the user is trying to solve.
4.4.1. CATEGORIES OF SYSTEM TESTING:
Recovery Testing:
Recovery is just like the exception handling feature in a programming language. Recovery is the
ability of a system to restart operations after the integrity of the application has been lost. It
reverts to a point where the system was functioning correctly and then reprocesses the
transactions up until the point of failure. It is the activity of testing how well the software is able
to recover from crashes, hardware failures, and other similar problems.
Some examples of Recovery testing are:
• When an application is receiving data from a network, unplug the connecting cable. After
some time, plug the cable back in and analyze the application’s ability to continue
receiving data from the point at which the network connection was broken.
• Restart the system while a browser has a definite number of sessions and check whether
the browser is able to recover all of them or not.
Beizer proposes that testers should work on the following areas during recovery testing:
• Restart: Testers must ensure that all transactions have been reconstructed correctly and
that all devices are in proper states.


• Switchover: Recovery can also be done if there are standby components and in case of
failure of one component, the standby takes over the control.
Security Testing:
Security is a protection that is needed to assure the customers that their data will be protected.
Security may include controlling access to data, encrypting data in communication, ensuring
secrecy of stored data etc.
Types of Security Requirements:
• Security Requirements should be associated with each functional requirement.
• In addition to security concerns that are directly related to particular requirements, a
software project has security issues that are global in nature.
How to perform security testing:
• Testers must use a risk-based approach, grounded in both the system's architectural
reality and the attacker's mindset, to gauge software security adequately. By identifying risks
and potential loss associated with those risks in the system and creating tests driven by
those risks, the tester can properly focus on areas of code in which an attack is likely to
succeed.
Elements of Security Testing:
• Confidentiality: A security measure which protects against the disclosure of information
to parties other than the intended recipient.
• Integrity: It is intended to allow the receiver to determine that the information which it
receives has not been altered in transit or by anyone other than the originator of the
information.
• Authentication: Designed to establish the validity of a transmission, message or
originator.

• Authorization: It is the process of determining that a requester is allowed to receive a
service or perform an operation.
• Availability: It assures that the information and communication services will be ready for
use when expected.
• Non-repudiation: A measure intended to prevent the later denial that an action
happened or a communication took place.
Performance Testing:
Performance testing is a non-functional testing technique performed to determine the system
parameters in terms of responsiveness and stability under various workloads. Performance
testing measures the quality attributes of the system, such as scalability, reliability and resource
usage.
The following tasks must be done for performance testing:
-- Develop a high-level plan including requirements, resources, timelines, and milestones.
-- Develop a detailed performance test plan.
-- Specify the test data needed.
-- Execute tests, possibly repeatedly, in order to see whether any unaccounted-for factor might
affect the results.
Performance Testing Techniques:
• Load testing - It is the simplest form of testing conducted to understand the behaviour of
the system under a specific load. Load testing will result in measuring important business
critical transactions and load on the database, application server, etc., are also monitored.
• Stress testing - It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the expected
maximum. The areas that may be stressed in a system are: Input Transactions, Disk Space,
Output, Communications, Interaction with users.

Usability Testing:
Usability testing identifies discrepancies between the user interfaces of a product and the human
engineering requirements of its potential users. What the user wants or expects from the system
can be determined in several ways, such as:
• Area experts,


• Group meetings
• Surveys
• Analyze similar products
Usability characteristics against which testing is conducted are:

• Ease of Use
• Interface steps
• Response Time
• Help System
• Error Messages
Compatibility/Conversion/Configuration Testing:
Compatibility testing is non-functional testing conducted on the application to evaluate the
application's compatibility with different environments.
• Operating system compatibility testing - Linux, Mac OS, Windows
• Database compatibility testing - Oracle, SQL Server
• Browser compatibility testing - IE, Chrome, Firefox
• Other system software - Web server, networking/messaging tools, etc.
Conversion testing verifies that one data format can be converted into another so that the
converted data can be used seamlessly by the application under test.
Examples:
• Programming language Conversion
• Database file conversion
• Media conversion (audio, video, image, documents)
4.5. ACCEPTANCE TESTING:
Acceptance testing is the formal testing conducted to determine whether a software system
satisfies its acceptance criteria and to enable the buyer to determine whether to accept the
system or not. Its goals are to determine whether the software is fit for the user to use, to make
users confident about the product, to determine whether the software satisfies its acceptance
criteria, and to enable the buyer to decide whether to accept the system.
Types of acceptance testing:
• Alpha Testing: Tests are conducted at the development site by the end users.
• Beta Testing: Tests are conducted at the customer site and the development team does
not have any control over the test environment.
4.5.1. Alpha testing:
Alpha testing is one of the most common software testing strategies used in software
development. It is especially used by product development organizations. This test takes place
at the developer's site. Developers observe the users and note problems. Alpha testing is
testing of an application when development is about to complete. Minor design changes can
still be made as a result of alpha testing. Alpha testing is typically performed by a group that
is independent of the design team, but still within the company, e.g. in-house software test
engineers or software QA engineers.
Entry Criteria for Alpha:
• All features are complete / testable
• High bugs on primary platform are fixed / verified.
• 50% of medium bugs on primary platforms are fixed / verified.
• All features are tested on the primary platforms.
• Performance has been measured / compared with previous releases.
• Alpha sites are ready for installation.
Exit criteria from Alpha:
After alpha testing, we must:
• Get response / feedbacks from the customers.
• Prepare a report of any serious bugs being noticed.
• Notify bug – fixing issues to developers.


4.5.2. BETA TESTING


Beta Testing is also known as field testing. It takes place at customer’s site. It sends the
system/software to users who install it and use it under real-world working conditions. A beta
test is the second phase of software testing in which a sampling of the intended audience
tries the product out. Beta testing can be considered “pre-release testing”.
The goal of beta testing is to place your application in the hands of real users outside of your
own engineering team to discover any flaws or issues from the user’s perspective that you
would not want to have in your final, released version of the application.
Entry Criteria for Beta:
• Positive responses from alpha site.
• Customer bugs in alpha testing have been addressed.
• There are no fatal errors which can affect the functionality of the software.
• Beta sites are ready for installation.
Exit criteria from Beta:
After beta testing, we must:
• Get response / feedbacks from the beta testers.
• Prepare a report of all serious bugs.
• Notify bug–fixing issues to developers.

4.6. PROGRESSIVE VS REGRESSIVE TESTING:
All the test case design methods and testing techniques discussed till now are referred to as
progressive testing or development testing.
• The purpose of regression testing is to confirm that a recent program or code change has
not adversely affected existing features.
• Regression testing is nothing but full or partial selection of already executed test cases
which are re-executed to ensure existing functionalities work fine.
• This testing is done to make sure that new code changes do not have side effects on
the existing functionalities.
• It ensures that old code still works once the new code changes are made.
Need of Regression Testing:

S
• Requirements change and code is modified according to the new requirements.
• A new feature is added to the software.
• Defect fixing.
• Performance issue fixes.

Regression Testing:
Regression testing is the process of testing changes to computer programs to make sure that the
older programming still works with the new changes. It is a selective retesting of a system or
component to verify that modifications have not caused unintended effects and that the system
or component still complies with its specified requirements. It is a software maintenance task
performed on a modified program to instill confidence that changes are correct and have not
adversely affected the unchanged portions of the program.
4.7. REGRESSION TESTING PRODUCES QUALITY SOFTWARE:
The importance of regression testing
• It validates the parts of the software where changes occur.
• It validates the parts of the software which may be affected by some changes, but
otherwise unrelated.
• It ensures proper functioning of the software, as it was before changes occurred.
• It enhances the quality of software, as it reduces the risk of high- risk bugs


Figure: Regression Testing produces Quality Software


4.8. REGRESSION TESTABILITY:
Regression testability refers to the property of a program, modification or test suite that lets it
be effectively and efficiently regression-tested. The regression number is the average number of
test cases in the test suite that are affected by any modification to a single instruction.

4.9. OBJECTIVES OF REGRESSION TESTING:


It tests to check that the bug has been addressed: The first objective in bug fix testing is to check
whether the bug-fixing has worked or not.
It finds other related bugs: Regression tests are necessary to validate that the system does not
have any related bugs.
It tests to check the effect on other parts of the program: It may be possible that bug-fixing has
unwanted consequences on other parts of a program. Therefore, it is necessary to check the
influence of changes in one part on the other parts of the program.

4.10. WHEN TO DO REGRESSION TESTING?

• Software Maintenance:
 Corrective Maintenance: Changes made to correct a system after a failure has
been observed.
 Adaptive Maintenance: Changes made to achieve continuing compatibility with
the target environment or other systems.
 Perfective Maintenance: Changes made to improve or add capabilities.
 Preventive Maintenance: Changes made to increase robustness, maintainability,
portability, and other features.
• Rapid Iterative Development: The extreme programming approach requires that a test
be developed for each class and that this test be re-run every time the class changes.
• Compatibility Assessment and Benchmarking: Some test suites are designed to be run
on a wide range of platforms and applications to establish conformance with a standard
or to evaluate time and space performance.
4.11. REGRESSION TESTING TYPES:
Bug-fix Regression: This testing is performed after a bug has been reported and fixed.
Side-Effect Regression / Stability Regression: It involves retesting a substantial part of the
product. The goal is to prove that the changes have no detrimental effect on functionality that
was working earlier.
4.12. REGRESSION TESTING TECHNIQUES:
Regression Test Selection Technique: This technique attempts to reduce the time required to
retest a modified program by selecting some subset of the existing test suite.
Test Case Prioritization Technique: Regression test prioritization attempts to reorder a
regression test suite so that tests with the highest priority, according to some established
criteria, are executed earlier in the regression testing process than those with lower priority.
Test Suite Reduction Technique: It reduces testing costs by permanently eliminating redundant
test cases from test suites, in terms of the code or functionality exercised.
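Test suite reduction is often done with a greedy cover: repeatedly keep the test that covers the most still-uncovered functionality until nothing new is gained. A minimal sketch over a hypothetical coverage matrix:

```c
#define TESTS 4
#define FUNCS 3

/* covers[t][f] == 1 when test t exercises functionality f (hypothetical data) */
static const int covers[TESTS][FUNCS] = {
    {1, 1, 0},   /* T0 */
    {1, 0, 0},   /* T1: redundant once T0 is kept */
    {0, 1, 1},   /* T2 */
    {0, 0, 1},   /* T3: redundant once T2 is kept */
};

/* Greedy reduction: mark kept tests in kept[], return the reduced suite size. */
int reduce_suite(int kept[TESTS]) {
    int covered[FUNCS] = {0};
    int kept_count = 0;
    for (int t = 0; t < TESTS; t++)
        kept[t] = 0;
    for (;;) {
        int best = -1, best_gain = 0;
        for (int t = 0; t < TESTS; t++) {
            if (kept[t])
                continue;
            int gain = 0;  /* still-uncovered functions this test would add */
            for (int f = 0; f < FUNCS; f++)
                if (covers[t][f] && !covered[f])
                    gain++;
            if (gain > best_gain) {
                best_gain = gain;
                best = t;
            }
        }
        if (best < 0)
            break;  /* no remaining test adds coverage: done */
        kept[best] = 1;
        kept_count++;
        for (int f = 0; f < FUNCS; f++)
            if (covers[best][f])
                covered[f] = 1;
    }
    return kept_count;
}
```

Here T1 and T3 are permanently eliminated: T0 and T2 alone cover all three functionalities.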
4.12.1. SELECTIVE RETEST TECHNIQUE:
Selective retest technique attempts to reduce the cost of testing by identifying the portions of P’
(modified version of Program) that must be exercised.


Characteristic features of the selective retest technique:
• It minimizes the resources required to regression test a new version.
• It is achieved by minimizing the number of test cases applied to the new version.
• It analyses the relationship between the test cases and the software elements they cover.
• It uses the information about changes to select test cases.
Steps in Selective retest technique:
1. Select T' subset of T, a set of test cases to execute on P'.
2. Test P' with T', establishing correctness of P' with respect to T'.
3. If necessary, create T'', a set of new functional test cases for P'.
4. Test P' with T'', establishing correctness of P' with respect to T''.
5. Create T''', a new test suite and test execution profile for P', from T, T' and T''.

Figure: Selective Retest Technique
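Step 1 of the procedure (choosing T' from T) can be sketched as follows: given which modules of P' were modified and which modules each test exercises, select every test that touches a modified module. The coverage data here is hypothetical:

```c
#define TESTS 3
#define MODULES 4

/* exercises[t][m] == 1 when test t exercises module m (hypothetical data) */
static const int exercises[TESTS][MODULES] = {
    {1, 1, 0, 0},   /* T0 */
    {0, 0, 1, 0},   /* T1 */
    {0, 1, 0, 1},   /* T2 */
};

/* Select T': the tests covering any modified module of P'; returns |T'|. */
int select_retests(const int changed[MODULES], int selected[TESTS]) {
    int count = 0;
    for (int t = 0; t < TESTS; t++) {
        selected[t] = 0;
        for (int m = 0; m < MODULES; m++) {
            if (changed[m] && exercises[t][m]) {
                selected[t] = 1;  /* this test reaches changed code */
                break;
            }
        }
        count += selected[t];
    }
    return count;
}
```

If only module 1 changed, tests T0 and T2 are selected and T1 is safely skipped.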

Regression Test Selection Techniques:
Minimization Techniques: Minimization-based regression test selection techniques attempt to
select minimal sets of test cases from T that yield coverage of modified or affected portions of P.
Dataflow Techniques: Dataflow-coverage-based regression test selection techniques select test
cases that exercise data interactions that have been affected by modifications.
Safe Techniques: When an explicit set of safety conditions can be satisfied, safe regression test
selection techniques guarantee that the selected subset, T', contains all test cases in the original
test suite T that can reveal faults in P'.
Adhoc / Random Techniques: A simple approach is to randomly select a predetermined number
of test cases from T.
Retest-all Technique: The retest-all technique simply reuses all existing test cases. To test P', the
technique effectively selects all test cases in T.

Regression Test Prioritization:


Regression test prioritization attempts to reorder a regression test suite so that those tests with
the highest priority, according to some established criterion, are executed earlier in the
regression testing process than those with a lower priority.
The steps for this approach are:
1. Select T' from T, a set of test cases to execute on P'.
2. Produce T'P, a permutation of T', such that T'P will have a better rate of fault detection
than T'.
3. Test P' with T'P in order to establish the correctness of P' with respect to T'P.
4. If necessary, create T'', a set of new functional or structural tests for P'.
5. Test P' with T'' in order to establish the correctness of P' with respect to T''.
6. Create T''', a new test suite for P', from T, T'P and T''.
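Step 2 above (producing the permutation T'P) can be sketched with a sort on an estimated fault-detection rate per test case; the names and rates below are hypothetical historical values:

```c
#include <stdlib.h>

struct test_case {
    const char *name;
    double fault_rate;   /* estimated faults detected per run (from past cycles) */
};

/* comparator: order test cases by descending fault-detection rate */
static int by_rate_desc(const void *a, const void *b) {
    const struct test_case *ta = a, *tb = b;
    if (ta->fault_rate < tb->fault_rate) return 1;
    if (ta->fault_rate > tb->fault_rate) return -1;
    return 0;
}

/* produce T'P in place: highest-priority tests run earliest */
void prioritize(struct test_case *suite, size_t n) {
    qsort(suite, n, sizeof *suite, by_rate_desc);
}
```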
