
Linköping Studies in Science and Technology

Dissertation No. 1073

Handling Combinatorial Explosion in Software Testing

by

Mats Grindal

Department of Computer and Information Science


Linköpings universitet
SE-581 83 Linköping, Sweden

Linköping 2007
ISBN: 978-91-85715-74-9
ISSN: 0345-7524
© Mats Grindal

Printed by LiU-Tryck, Linköping 2007



To the memory of Thomas Vesterlund, who
encouraged and motivated me to pursue this
project but never got to see it finished.

Abstract
In this thesis, the overall conclusion is that combination strategies (i.e., test case selection methods that manage the combinatorial explosion of possible things to test) can improve software testing in most organizations.
The research underlying this thesis emphasizes relevance by working in close
relationship with industry.
Input parameter models of test objects play a crucial role for combi-
nation strategies. These models consist of parameters with corresponding
parameter values and represent the input space and possibly other proper-
ties, such as state, of the test object. Test case selection is then defined as
the selection of combinations of parameter values from these models.
This research describes a complete test process, adapted to combination strategies. Guidelines and step-by-step descriptions of the activities in the process are included in the presentation. In particular, selection of suitable
combination strategies, input parameter modeling and handling of conflicts
in the input parameter models are addressed. It is also shown that several
of the steps in the test process can be automated.
The test process is validated through a set of experiments and case stud-
ies involving industrial testers as well as actual test problems as they occur
in industry. In conjunction with the validation of the test process, aspects of
applicability of the combination strategy test process (e.g., usability, scalability, and performance) are studied. Identification and discussion of barriers to the introduction of the combination strategy test process in industrial
projects are also included.
This research also presents a comprehensive survey of existing combina-
tion strategies, complete with classifications and descriptions of their differ-
ent properties. Further, this thesis contains a survey of the testing maturity
of twelve software-producing organizations. The data indicate low test ma-
turity in most of the investigated organizations. Test managers are often
aware of this but have trouble improving. Combination strategies are suit-
able improvement enablers, due to their low introduction costs.

Keywords: Combination Strategies, Software Testing, State-of-Practice,
Equivalence Partitioning, Test Process.

Acknowledgements
A project of this magnitude is not possible to complete without the support
and encouragement from many different people. I am forever grateful for all
the help I have received during this time.
A number of people have been of crucial importance for the completion
of this project and as such they all deserve my special gratitude.
First and foremost, I would like to thank my supervisor Prof. Sten F.
Andler, and my co-supervisors Prof. Jeff Offutt, Dr. Jonas Mellin, and
Prof. Mariam Kamkar for their excellent coaching, feedback and support.
Towards the end of this thesis project, Prof. Yu Lei, Prof. Per Runeson,
Prof. Kristian Sandahl, Prof. Benkt Wangler, and Ruth Morrison Svensson
all played significant roles in providing important feedback.
Then I am also very much indebted to my co-authors Prof. Jeff Offutt,
Dr. Jonas Mellin, Prof. Sten F. Andler, Birgitta Lindström, and Åsa G.
Dahlstedt. Several anonymous reviewers of our papers have also contributed
significantly. Thank you very much for all the valuable feedback.
Past and present colleagues at University of Skövde, and in particular
the members of the DRTS group: Alexander, AnnMarie, Bengt, Birgitta,
Gunnar, Joakim, Jonas, Jörgen, Marcus, Robert, Ronnie, Sanny, and Sten,
have been great sources of inspiration and have provided constructive advice.
Thanks also to Anna, Camilla, and Maria at University of Skövde and Lillemor
and Britt-Inger at Linköping University for helping me out with all the
administrative tasks.
Past and present colleagues at Enea: Anders, Bogdan, Gunnel, Johan
M., Johan P., Joakim, Krister, Maria, Niclas, Per, Sture, Thomas T. and
Thomas V. have contributed with enthusiasm and lots of patience. Special
thanks to Marie for helping out with the cover.
Last but not least, I couldn’t have done this without the support of my
family and my close friends: Lars Johan, Birgitta, Åsa and Claes, Joakim,
Arne and Timo. I love you all.
It is impossible to list everyone that deserves my appreciation so to all
those persons who have contributed knowingly or unknowingly to the com-
pletion of this project I bow my head.
The financial sponsors of this project are Enea, the Knowledge Founda-
tion, the University of Skövde, and the Information Fusion Programme at
the University of Skövde, directly sponsored by Atlas Copco Tools, Enea,
Ericsson, the Knowledge Foundation and the University of Skövde.
Contents

I Introduction 1

1 Background 3

2 Theoretical Framework 7
2.1 What is testing . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Challenges of Testing Software Systems . . . . . . . . . . . . 9
2.3 State-of-Practice in Testing Software Systems . . . . . . . . . 10

3 Combination Strategies 13

4 Problem - Can Combination Strategies Help? 19


4.1 Test Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Combination Strategy Selection . . . . . . . . . . . . . . . . . 21
4.3 Input Parameter Modeling . . . . . . . . . . . . . . . . . . . . 22
4.4 Handling Conflicts in the IPM . . . . . . . . . . . . . . . . . 22
4.5 Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

5 Research Methodology 25
5.1 Introduction to Research Methodology . . . . . . . . . . . . . 25
5.2 Research Methodologies Applied . . . . . . . . . . . . . . . . 26

6 Method - A Combination Strategy Test Process 33


6.1 A Combination Strategy Test Process . . . . . . . . . . . . . 33
6.2 Combination Strategy Selection . . . . . . . . . . . . . . . . . 36
6.3 A Method for Input Parameter Modeling . . . . . . . . . . . 38
6.4 Handling Conflicts in the IPM . . . . . . . . . . . . . . . . . 41
6.5 Validation of the Combination Strategy Test Process . . . . . 42
6.6 Overview of Articles . . . . . . . . . . . . . . . . . . . . . . . 43


7 Results 47
7.1 Combination Strategy Selection . . . . . . . . . . . . . . . . . 47
7.2 Input Parameter Modeling . . . . . . . . . . . . . . . . . . . . 51
7.3 Conflict Handling in the IPM . . . . . . . . . . . . . . . . . . 52
7.4 Applying the Combination Strategy Test Process . . . . . . . 55
7.4.1 Handling Large and Complex Test Problems . . . . . 55
7.4.2 Alternative Test Case Generation Methods . . . . . . 57
7.4.3 Validation of Thesis Goals . . . . . . . . . . . . . . . . 58
7.5 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
7.6.1 Formulation and Evaluation of Research Goals . . . . 63
7.6.2 Software Process Improvements . . . . . . . . . . . . . 63
7.6.3 Properties of Methods . . . . . . . . . . . . . . . . . . 66

8 Conclusions 69
8.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

References 77
Part I

Introduction

Chapter 1

Background

Testing consumes significant amounts of resources in development projects
[Mye79, Bei90, GOM06]. Hence, it is of general interest to assess the effec-
tiveness and efficiency of current test methods and compare these with new
or refinements of existing test methods to find possible ways of improving
the testing activity [Mye78, BS87, Rei97, WRBM97, SCSK02].
Research on testing has been conducted for at least thirty years [Het76].
Despite numerous advances in the state-of-art, it is still possible to find
examples where only limited testing knowledge is used in practice [NMR+04, GSM04, GOM06]. This research is an attempt to bridge the gap between
the state-of-art and the state-of-practice.
Combinatorial explosion is a frequently occurring problem in testing.
One instance of combinatorial explosion in testing is when systems under test
have several parameters, each with so many possible values that testing every
possible combination of parameter values is infeasible. Another instance
of combinatorial explosion in testing may occur for configurable systems.
When systems under test have many configuration parameters, each with
several possible values, testing each configuration is infeasible. Examples
of configuration parameters are versions of a specific software or hardware
module, different types of software or hardware modules, and number of
logical or physical entities included in a computer system.
Combination strategies are a family of methods that target combinatorial explosion. The fundamental property of combination strategies is their
ability to select a subset of all possible combinations such that a coverage
criterion is satisfied. In their general form, combination strategies can be applied to any testing problem that can be described in terms of parameters with values. For instance, consider testing a date field (YYMMDD). This
testing problem can be described by the three parameters “YY”, “MM”,
and “DD”, where each of the three parameters has a set of associated values. Any combination of values of the three parameters represents a possible test case.
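The date-field example can be made concrete in a few lines. The value sets below are hypothetical placeholders (the thesis does not prescribe them); they only show how a handful of values per parameter multiplies into many candidate test cases:

```python
from itertools import product

# Hypothetical representative values for the date field YYMMDD; in
# practice a tester would derive these, e.g., by equivalence partitioning.
ipm = {
    "YY": ["00", "70", "99"],
    "MM": ["01", "06", "12"],
    "DD": ["01", "15", "31"],
}

# Every element of the Cartesian product is a possible test case.
all_combinations = list(product(*ipm.values()))
print(len(all_combinations))  # 3 * 3 * 3 = 27
```

With the full domains (100 years, 12 months, up to 31 days), the product is already 100 × 12 × 31 = 37 200 combinations, which is exactly the explosion that combination strategies are meant to manage.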
The seminal work on combination strategies applied to testing was per-
formed in the mid-eighties [Man85]. The early research investigated how
combination strategies could be used to identify system configurations that
should be used during testing [Man85, WP96]. In more recent research,
combination strategies are also used for test case selection [CDKP94, LT98,
CGMC03].
Despite twenty years of research on combination strategies, little is known
about the feasibility of combination strategies in industrial settings. Hence,
the goal of this research is to investigate if combination strategies are feasible
alternatives to test case selection methods used in practice.
Within the context of using combination strategies in industrial settings
several issues are to a great extent unexplored. These issues are (i) how to
integrate combination strategies within existing test processes, (ii) given a
specific test situation, how to select an appropriate combination strategy,
(iii) how to represent the system under test as a set of parameters with
corresponding values, and (iv) how to manage conflicts among the values in
the input space of the system under test.
Combination strategies need to be integrated into existing test processes (issue (i)) since these serve as a general means of structuring the planning, monitoring, and control of testing activities.
When new techniques, such as combination strategies, are considered, it is
vital that these fit within established ways of working.
The need to select an appropriate combination strategy for a specific
test problem (issue (ii)) arises from the fact that there exist more than 15
combination strategies with different properties [GOA05, GLOA06].
Combination strategies require the system under test to be represented as
an input parameter model (IPM). As an example, consider testing function
int index(element, vector) which returns the index of the element in
the vector. An IPM of the index function consists of two parameters, one
representing different elements and the other representing different vectors.

An alternative IPM could use one parameter to represent possible results,
for instance, element found first in vector, element found last in vector,
element not found at all, etc. A second parameter could be used to represent
the size of the vector, for instance, zero elements, one element, and several
elements. This example illustrates that the tester has several alternatives
when deciding on a suitable IPM. Therefore, it is vital to provide support
to the tester during input parameter modeling (issue (iii)).
In some cases, the IPM contains values of two or more parameters that
cannot be combined. For instance, in the second alternative of the index
example, it is impossible to return an element in the first position in an
empty list. Hence, there is a need to handle such conflicts, which illustrates
issue (iv).
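What such a conflict looks like can be sketched with the infeasibility constraint written as a predicate over the IPM values. The names below paraphrase the index example; this illustrates the problem itself, not the thesis's conflict-handling methods:

```python
from itertools import product

# IPM alternative (ii) of the index example, paraphrased.
result = ["found_first", "found_last", "not_found"]
size = ["zero", "one", "several"]

def feasible(r, s):
    # Conflict: an element cannot be found in a vector with zero elements.
    return not (s == "zero" and r != "not_found")

combos = list(product(result, size))
conflict_free = [c for c in combos if feasible(*c)]
print(len(combos), len(conflict_free))  # 9 combinations, 7 feasible
```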
Within this thesis all of these issues (i)-(iv) are addressed. A test pro-
cess custom-designed for combination strategies is formulated and partly
explored. Further, methods are developed for selection of a suitable combi-
nation strategy, formulation of IPMs, and handling of conflicts in the input
space of the test object. To validate the process, with its methods, experi-
ments in real industrial settings have been conducted.
The combination strategy selection method is based on an assessment of
the project priorities and the importance of the test object. The project pri-
orities allow the tester to determine which properties of combination strate-
gies are important. The importance of the system under test gives advice
to the tester about suitable levels for the important properties. Analysis
and experimentation, within the scope of this research, have resulted in de-
scriptions, and in some cases quantification, of properties of several existing
combination strategies [GOA05, GLOA06].
An eight-step input parameter modeling method is formulated within
this research project. The method allows requirements on the test object to
be expressed in any way, ranging from informal to formal. These require-
ments are translated step-by-step into a semi-formal model, which satisfies
the requirements imposed by combination strategies. The input parameter
modeling method has been evaluated in an experiment involving a number
of professional testers. Results from this experiment indicate that the input
parameter modeling method can be successfully employed after less than an
hour of teaching the method. The results also confirm the observation by
Grochtmann and Grimm [GG93] that input parameter modeling is a creative
process that can never be fully automated.

Cohen, Dalal, Fredman, and Patton [CDFP97] show examples of conflicts
in the input space of the test object, that is, when the combination of two or
more IPM parameter values is infeasible. New methods for conflict handling
are proposed within this research. These and existing methods have been
evaluated with respect to the size of the final conflict-free test suite in an
experiment. The main conclusion from this is that conflicts handled in the
test generation step result in smaller test suites than if the conflicts are
handled in the input parameter modeling step [GOM07].
This thesis contains two parts. The first part is organized as follows.
Chapter 2 provides a general background on testing, describing the chal-
lenges of testing a software based system. Chapter 3 gives a thorough de-
scription of combination strategies. Chapter 4 describes and motivates the
research problem. The problem section also explains how the problem is
divided into a set of goals for the research. The goals are described in
separate sections. In Chapter 5, the research methods used within this research project are discussed. For each goal section in Chapter 4, there is a corresponding methods section in Chapter 6. Chapter 7 contains the results achieved, and Chapter 8 concludes the first part of this thesis with a summary,
some conclusions, and directions for future work.
The second part of this thesis contains six papers [GOM06, GOA05,
GLOA06, GOM07, GO07, GDOM06]. Section 6.6 describes the framework
for this research and how these papers relate to each other and to this frame-
work.
Chapter 2

Theoretical Framework

This chapter contains all necessary background on testing for this thesis.
There are three parts to this background. The first part (section 2.1) pro-
vides a general overview on testing and defines a number of central con-
cepts. The second part (section 2.2) describes the main challenges of testing
when applied in industrial projects. The third part (section 2.3) contains an
overview of the state-of-practice in a number of software producing organi-
zations.

2.1 What is testing


Testing is the activity in which test cases are identified, prepared and exe-
cuted. A test case contains, at least, some input and expected results. The
software unit (e.g., module, program or system) under test is called the test
object. During the execution of a test case, the test object is stimulated by
the input of the test case and reacts to this by producing actual results.
The expected results of the test case is compared with the actual results
produced by the test object and a test result, that is, either pass or fail, is
produced.
A failure [AAC+94] is a deviation of the delivered service from fulfilling the system function. A failure is normally detected by a difference between expected and actual results. An error is the part of the system state that is liable to lead to a subsequent failure [AAC+94]. Finally, a fault in the general sense is the adjudged or hypothesized cause of an error [AAC+94].

Identification of test cases is the task of deciding what to test. Methods
that aid the tester in test case identification are called test case selection
methods. A set of test cases is a test suite. Test suites may be joined to form
larger test suites.

[Figure 2.1 shows a generic test process as a flow: Start → (1) Test Planning → (2) Test Preparation → (3) Test Execution → (4) Test Stop? A “Yes” ends the process; a “No” loops back to plan, prepare, or execute more test cases. Feedback loops also allow “correct and retest” (from execution) and “replan, correct and retest”.]

Figure 2.1: A generic test process.

Figure 2.1 shows a generic test process [BS 98b]. Step 1 of any test pro-
cess is to plan the forthcoming activities. The planning includes, at least,
identifying the tasks to be performed, estimating the amount of resources
needed to perform the tasks, and making financial and time budgets. Step 2
is to make any preparations needed for the upcoming test execution. Impor-
tant tasks during the preparation step are to select and document the test
cases. In step 3, the test cases are executed and test results are collected.
These results are then analyzed in step 4 in order to determine whether
or not more testing is needed. If more testing is needed, feedback loops allow for returning to any of the previous steps, depending on the amount
of work needed. Also, feedback loops from step 3 allow for correction and
re-execution of failed test cases.

During testing it is important to use test cases with both valid and invalid
values. Valid values are values within the normal operating ranges of the
test object and correspondingly invalid values are values outside the normal
operating ranges. Testing using invalid values in the test cases is called
negative testing. Negative testing is used to test error or exception handling
mechanisms.

2.2 Challenges of Testing Software Systems

The software testing organization as well as the individual testers face a
number of challenges in producing good-quality testing. Some of these are
quality of requirements, test planning, efficiency of testing, and test case
selection.
The quality of the requirements has a large impact on both the quality of
the testing and the quality of the final product. Incorrect requirements may
both lead to faults in the implementation of the product and to incorrect test
cases. Studies show that incorrect requirements may be the source of as much
as 51% of the faults [VL00]. It can safely be stated that the infrastructure,
that is, tools, processes, and organization, for handling requirements will
impact the testing. Further it is important for testers to be able to influence
the quality of the requirements [Dah05].
Related to incorrect requirements are requirement changes that are al-
lowed to occur late in the software development project. Changes may occur,
either triggered by new needs or by the realization that the current set of
requirements contains faults. The later a change is allowed to occur the
higher demands on the development and test processes to be able to handle
the change. For the tester, an ideal situation, for handling changes, would be
if all test cases affected by the changes could be both identified and changed
automatically.
Another area of challenge for a testing organization is planning, in par-
ticular, estimation of resource consumption. It is known in advance that
failures will be detected, but it is very difficult to predict the number of failures. The correction times, the severity, and when and where the failures and
their corresponding faults will be found are other aspects of faults that are
difficult to predict. Test planning is a large and important area for research.
Increasing the efficiency of the testing is a major challenge for many
test organizations. The automation of suitable testing tasks has interested
software organizations for several years [KP99].
Test case selection is in itself a difficult problem. Many test objects have
large input spaces. Testing all possible inputs is generally infeasible. Instead
the input space needs to be sampled. Often, there are many aspects of a
test object that need to be considered in the sampling process.
The focus of this research is on handling the large input space but also
touches on handling late requirement changes and automation.

2.3 State-of-Practice in Testing Software Systems


Reports from the 1970s and 1980s show that testing in industry consumes a
large amount of resources, sometimes more than 50% [Boe, Bro75, Deu87,
You75]. Corresponding figures for the years 2004-2005 are reported in a
study of twelve software producing organizations. The reported test time
consumption ranges from 10% up to 65% with a mean of 35% (paper I
[GOM06]). It is safe to say that testing has been and still is a big cost for
software developing organizations.
The study also revealed that despite the incentives to cut costs in test-
ing and the large body of knowledge of testing generated during the last 30
years, the test maturity of software producing organizations can be surpris-
ingly low. Even organizations developing safety-critical applications exhibit
a great variance in applying structured test case selection methods and in
their collection and use of metrics.
A lack of structured test case selection methods is seen in organizations developing all types of software products, from web-based systems to safety-critical applications. A similar observation is made by Ng et
al. [NMR+04].
Further, the results of the study indicate that the testers’ knowledge is at
least fair. The testers are also allowed to start working early in the projects, so
the reasons for the relative immaturity have to be sought elsewhere. The test
managers expressed concern over the situation, which shows an awareness of
the problem.
The study identifies three major obstacles for improving test maturity.
First, there may be a lack of understanding among upper management of the
contributions of testing. From their perspective, as long as their products
generate profit, they are good enough and improvement is not prioritized.
Second, the relative immaturity of testing manifests itself not only in a lack of structured test case selection methods but also in a lack of structured metrics usage. With few or even no collected metrics, it is very
difficult to assess the current situation and to estimate the potential gains
of an improvement. Without the ability to express the potential benefits in economic terms, it is hard to get approval for a change. Third, a change
in the way of working, or even the introduction of a tool, almost always requires an initial investment that should pay off later. Most development
projects are conducted under high time pressure. It is therefore difficult to find a project willing to absorb the initial investment costs needed to make future projects more effective and efficient.

The most important conclusion from this study is that organizations
need to focus more on structured metrics programs to be able to establish
the necessary foundation for change, not only in testing. A second conclusion
is that improvements should be made in many small steps rather than one
large step, which is also appealing from a risk perspective.
Chapter 3

Combination Strategies

This chapter introduces the necessary background and concepts relating to
combination strategies.
Most test objects have too many possible test cases to be exhaustively
tested. For instance, consider the input space of the test problem. It can be
described by the parameters of the system under test. Usually, the number
of parameters and the possible values of each parameter result in too many
combinations for testing to be feasible. Consider for instance testing the
addition functionality of a pocket calculator. Restricting the input space
to only positive integers still yields a large number of possible test cases,
(1+1, 1+2, 1+3, ..., 1+N, 2+1, 2+2, ..., N+N), where N is the largest integer
that the calculator can represent. This is one example of combinatorial
explosion in testing.
Combinatorial explosion can be handled by combination strategies. Combination strategies are a class of test case selection methods that use combinatorial techniques to select test suites of tractable size. Combination strategies are used in experimentation in other disciplines, for instance
physics and medicine, to identify combinations of values of the controlled
variables. Mandl [Man85] is the seminal work on application of combination
strategies in testing. Since then, a large number of combination strategies
have been proposed for testing. Many of these are surveyed in paper II
[GOA05].


Figure 3.1 shows an overview of a classification scheme for combination
strategies. Non-deterministic combination strategies rely to some degree
on randomness. Hence, these combination strategies may produce different
solutions to the same problem at different times. The deterministic combi-
nation strategies always produce the same solution for a given problem. The
instant combination strategies produce all combinations in an atomic step
while the iterative combination strategies build the solution one combination
at a time.
Combination Strategies
  |-- Non-deterministic
  `-- Deterministic
        |-- Instant
        `-- Iterative

Figure 3.1: Classification scheme for combination strategies.

A common property of combination strategies is that they require an
input parameter model. The input parameter model (IPM) is a represen-
tation of properties of the test object, for instance input space, state, and
functionality.
The IPM contains a set of parameters, IPM parameters. The IPM param-
eters are given unique names, for instance A, B, C, etc. Each IPM parameter
has a set of associated values, IPM parameter values. These values are also
given unique names, for example, 1, 2, 3, 4, and so on. Figure 3.2 shows a
small example of an IPM with 4 IPM parameters, with 3, 4, 2, and 4 values, respectively.

Parameter A: 1, 2, 3
Parameter B: 1, 2, 3, 4
Parameter C: 1, 2
Parameter D: 1, 2, 3, 4

Figure 3.2: Simple input parameter model.

The IPM can be used to represent the input space of the test object. A
simple approach is to assign one IPM parameter to each parameter of the
test object. The domains of each test object parameter are then partitioned
into groups of values using for instance equivalence partitioning [Mye79] or
boundary value analysis [Mye79]. Each partition of a test object parameter
is then represented by a separate value of the corresponding IPM parameter.
Almost all software components have some input parameters, which could
be partitioned into suitable partitions and used directly. This is also the
approach taken in many of the papers on combination strategies [KKS98].
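As an illustrative sketch of this simple approach (the parameter and its partitions are invented for the example, not taken from the cited papers), each partition of a test object parameter's domain becomes one abstract value of the corresponding IPM parameter:

```python
# Hypothetical test object parameter: an integer 'age' valid in [0, 120].
# Equivalence partitioning of its domain yields the IPM parameter values;
# each partition is represented by one abstract value.
partitions = {
    "invalid_low": range(-1000, 0),    # below the valid range
    "valid": range(0, 121),            # the normal operating range
    "invalid_high": range(121, 2000),  # above the valid range
}

def ipm_value(age):
    # Map a concrete input to its IPM parameter value (its partition).
    for name, values in partitions.items():
        if age in values:
            return name
    return None

print(ipm_value(42), ipm_value(-5), ipm_value(200))
```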
However, Yin, Lebne-Dengel, and Malaiya [YLDM97] point out that in
choosing a set of IPM parameters the problem space should be divided into
sub-domains that conceptually can be seen as consisting of orthogonal di-
mensions. These dimensions do not necessarily map one-to-one onto the
actual input parameters of the system under test. Along the same lines of
thinking, Cohen, Dalal, Parelius, and Patton [CDPP96] state that in choos-
ing the IPM parameters, one should model the system’s functionality, not its
interface. Dunietz, Ehrlich, Szablak, Mallows, and Iannino [DES+97] show
that the same test problem may result in several different IPMs depending
on how the input space is represented and partitioned.
The following example illustrates that the same test problem may result
in more than one IPM. Consider testing the function step(start, end,
step), which prints a sequence of integers starting with start, in steps of
step up to end.

A (start): 1: negative, 2: zero, 3: positive, 4: non-integer
B (end): 1: negative, 2: zero, 3: positive, 4: non-integer
C (step): 1: negative, 2: zero, 3: one, 4: > 1

Figure 3.3: IPM, alternative (i), for the step function.

There are two alternative examples of IPMs for the step function: alter-
native (i) depicted in Figure 3.3 and alternative (ii) depicted in Figure 3.4.
In alternative (i), the parameters of the function are mapped one-to-one onto
IPM parameters. This results in 64 (4 × 4 × 4) possible test cases. In alter-
native (ii), properties of the printed sequence are represented as parameters
in the IPM. This IPM results in 18 (3 × 3 × 2) possible test cases.
Based on the IPM, combination strategies generate abstract test cases.
Abstract test cases are combinations of IPM parameter values consisting of
one value from each IPM parameter. Abstract test cases may be translated,
in a post-processing step, into actual inputs of the test object.

A (start): 1: negative, 2: zero, 3: positive
B (length of sequence): 1: zero, 2: one, 3: > 1
C (direction of sequence): 1: negative, 2: positive

Figure 3.4: IPM, alternative (ii), for the step function.

An abstract test suite is a set of abstract test cases. Abstract test suites
can be described as sets of N -tuples, where N is the number of IPM param-
eters in the IPM.
Coverage is a key factor when deciding which combination strategy to
use. Different combination strategies support different levels of coverage
with respect to the IPM. The level of coverage affects the size of the test
suite and the ability to detect certain types of faults.
1-wise (also known as each-used) coverage is the simplest coverage cri-
terion. 1-wise coverage requires that every value of every IPM parameter
is included in at least one test case in the test suite. Table 3.1 shows an
abstract test suite with three test cases, which satisfies 1-wise coverage with respect to IPM (ii) depicted in Figure 3.4.

Test Case A B C
TC1 1 1 1
TC2 2 2 2
TC3 3 3 1

Table 3.1: 1-wise coverage of IPM alternative (ii).
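The 1-wise criterion is easy to check mechanically. The sketch below (an illustration, not tooling from the thesis) verifies that the suite of Table 3.1 uses every value of every parameter of IPM alternative (ii):

```python
# IPM alternative (ii): parameters A and B have three values each, C has two.
ipm = [{1, 2, 3}, {1, 2, 3}, {1, 2}]
suite = [(1, 1, 1), (2, 2, 2), (3, 3, 1)]  # Table 3.1

def satisfies_1wise(ipm, suite):
    # Every value of every parameter must appear in at least one test case.
    return all(values <= {tc[pos] for tc in suite}
               for pos, values in enumerate(ipm))

print(satisfies_1wise(ipm, suite))  # True
```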

2-wise (also known as pair-wise) coverage requires that every possible
pair of values of any two IPM parameters is included in some test case.
Note that the same test case can often cover more than one unique pair of
values. Table 3.2 shows an abstract test suite with nine test cases, which
satisfies 2-wise coverage with respect to IPM (ii) depicted in Figure 3.4.
A natural extension of 2-wise coverage is t-wise coverage, which requires
every possible combination of values of t IPM parameters to be included in
some test case in the test suite.
The most thorough coverage criterion, N-wise coverage, requires a test
suite to contain every possible combination of the IPM parameter values in
the IPM. The resulting test suite is often too large to be practical. N-wise
coverage of IPM (ii), depicted in Figure 3.4, requires all possible combina-
tions of the values of the three IPM parameters (3×3×2 = 18 combinations).
Test Case   A   B   C
TC1         1   1   1
TC2         1   2   2
TC3         1   3   1
TC4         2   1   1
TC5         2   2   1
TC6         2   3   2
TC7         3   1   2
TC8         3   2   2
TC9         3   3   1

Table 3.2: 2-wise coverage of IPM alternative (ii).
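The same mechanical check generalizes from pairs to arbitrary t, corresponding to the t-wise criterion described above. A sketch with illustrative names, confirming that the nine test cases of Table 3.2 cover all pairs:

```python
from itertools import combinations, product

ipm = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2]}

def satisfies_twise(ipm, suite, t=2):
    """True if every combination of values of every t IPM parameters
    is covered by some test case in the suite."""
    names = list(ipm)
    for idxs in combinations(range(len(names)), t):
        required = set(product(*(ipm[names[i]] for i in idxs)))
        covered = {tuple(tc[i] for i in idxs) for tc in suite}
        if not required <= covered:
            return False
    return True

# The abstract test suite of Table 3.2
suite = [(1, 1, 1), (1, 2, 2), (1, 3, 1),
         (2, 1, 1), (2, 2, 1), (2, 3, 2),
         (3, 1, 2), (3, 2, 2), (3, 3, 1)]
print(satisfies_twise(ipm, suite, t=2))  # True
```

With t equal to the number of IPM parameters, the same check corresponds to N-wise coverage, which this nine-case suite deliberately does not satisfy.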

Base choice coverage [AO94] is an alternative coverage criterion partly
based on semantic information. One base value is selected from each
parameter. The base value may, for instance, be selected based on the most
frequently used value of each parameter. The combination of base values is
called the base test case. Base choice coverage requires every value of each
IPM parameter to be included in a test case in which the rest of the values
are base values. Further, the test suite must also contain the base test case.
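A base choice suite can be constructed directly from the IPM once the base values are fixed. In this sketch the base values are an assumption made for illustration (one could, for instance, pick the most frequently used value of each parameter, as suggested above):

```python
ipm = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2]}
base = {"A": 1, "B": 1, "C": 1}  # assumed base values for illustration

def base_choice_suite(ipm, base):
    """The base test case plus one test case per non-base value,
    obtained by varying one parameter at a time."""
    names = list(ipm)
    base_tc = tuple(base[p] for p in names)
    suite = [base_tc]
    for i, param in enumerate(names):
        for value in ipm[param]:
            if value != base[param]:
                tc = list(base_tc)
                tc[i] = value
                suite.append(tuple(tc))
    return suite

suite = base_choice_suite(ipm, base)
print(len(suite))  # 6: the base test case plus 2 + 2 + 1 variations
```

For IPM (ii) this yields six test cases: the base test case plus one variation for each of the five non-base values.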
Recent years show a growing interest from academia and industry alike
in applying combination strategies to testing. From an academic perspec-
tive the interest has manifested itself in an increased production of research
focusing on combination strategies. One sign of the increased interest from
industry is the growing number of combination strategy tools. The website
http://www.pairwise.org/ (visited Jan 2007) contains a collection of about
20 commercial and free combination strategy tools.
Although test cases selected by combination strategies often focus on
the input space of the test object, any property of test objects that can be
expressed in IPMs can be tested. Combination strategies have this wide ap-
plicability in common with other test case selection methods such as Equiv-
alence Partitioning [Mye79] and Boundary Value Analysis [Mye79]. In con-
trast, state testing [Bei90] focuses primarily on testing the functionality of
the test object. Another, related, difference is that test cases resulting from
state testing often contain sequences of relatively simple inputs, whereas com-
bination strategies usually select test cases with a single but more complex
input.

A common property of all the above mentioned test case selection meth-
ods, including combination strategies is that they are based on models of the
test object. In all these cases the models are generated from information
in the specifications and rely, at least to some extent, on the ingenuity of the
tester. This means that different testers may come up with different mod-
els and hence different test cases. As soon as specifications are available,
modeling with its subsequent test case selection can be initiated.
Two other classes of model-based test case selection methods are control-
flow based [Bei90] and data-flow based [Bei90] test case selection. In both
these cases, models (graphs) are derived from the source code. This has
two important implications for testing. First, modeling (and hence testing)
cannot be started before the source code is finished. Second, the models can
be generated automatically.
From a performance perspective it is still unclear how combination strate-
gies relate to other test case selection methods, such as those mentioned
above. There is room for much more research on this topic.
Chapter 4

Problem - Can Combination Strategies Help?

The overall aim of this research is to investigate if combination strategies
are feasible alternatives to test case selection methods used in practical
testing.
The motivation behind this aim is that combination strategies are ap-
pealing since they offer potential solutions to several challenges in testing.
First and foremost, combination strategies can handle combinatorial ex-
plosion, which was illustrated in section 3. As described in section 2.2, many
test problems have very large input spaces. It is often the case that large
input spaces are a result of combinatorial explosion. Hence, combination
strategies target the challenge of large input spaces.
Other possible advantages with combination strategies relate to the prob-
lems identified by the test managers in the state-of-practice investigation
(see section 2.3). The application of combination strategies should be time-
efficient and hence not jeopardize test projects under time pressure. Further,
combination strategies naturally provide a way of assessing the test quality
through the use of coverage with respect to the IPM as discussed in section 3.
A potential gain with combination strategies is the possibility of postpon-
ing some of the costly test activities until it is certain that these activities
should be executed. The same IPM can be input to several combination
strategies supporting different coverage levels. If the combination strategies
are automated the cost of generating several abstract test suites is small.
Each test suite can be evaluated, for instance with respect to size, and the
most suitable abstract test suite is then selected. Identification of expected
results for a test case is often costly since, in the general case, it has to be
done manually. With combination strategies it is possible to delay this step
until it is decided which test cases should be used.
Finally, in some circumstances, combination strategies can provide a ba-
sis for automation, in part, of the test generation step. In cases where the
IPM parameters map one-to-one onto the parameters of the test object it
is possible to automatically transform abstract test cases generated by the
combination strategies to real test case inputs. A consequence of this is that
changes in the requirements of the test object can be handled efficiently by
changing the IPM and generating a new test suite.
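As an illustration, when the IPM parameters of the step-function example map one-to-one onto the inputs of the test object, the transformation can be a simple table lookup. The concrete representative values below are invented for the example:

```python
# Hypothetical mapping from abstract value identifiers to concrete inputs
# for the step-function example; the representative values are assumptions.
concrete = {
    "A": {1: -5, 2: 0, 3: 7},    # start: negative, zero, positive
    "B": {1: 0, 2: 1, 3: 4},     # length of sequence: zero, one, > 1
    "C": {1: -1, 2: 1},          # direction of sequence: negative, positive
}

def to_concrete(abstract_tc, names=("A", "B", "C")):
    """Translate one abstract test case into concrete input values."""
    return tuple(concrete[p][v] for p, v in zip(names, abstract_tc))

print(to_concrete((3, 3, 1)))  # (7, 4, -1)
```

A change in the requirements then only touches the IPM and this mapping; the abstract test suite can simply be regenerated.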
On a number of occasions, combination strategies have been applied to
test problems in industry settings. In most cases combination strategies
have been used to select test cases for functional testing [DJK+ 99, BY98,
Hul00, DHS02]. Although the examples of using combination strategies for
functional testing dominate, there are several examples of the applicability
of combination strategies in other areas of testing. Combination strategies
can also be used to select system configurations that should be used dur-
ing testing [WP96, YCP04]. Robustness testing is another area in which
combination strategies have been used [KKS98].
Despite these experience reports, there is little documented knowledge
about the actual effects of applying combination strategies to test problems.
Further, these reports do not contain much substantial information about
requirements and guidelines for the use of combination strategies in industry
settings. Hence, it is not possible to judge from previous knowledge if combi-
nation strategies are feasible alternatives to the test case selection methods
used in practical testing, which is the aim of this research.
To reach this aim five goals have been identified. The following list
summarizes these goals and sections 4.1 - 4.5 discuss them in more detail.

G1 A test process tailored for combination strategies should be defined.


G2 Means for comparing and selecting suitable combination strategies for
test problems should be explicitly described.
G3 A method for input parameter modeling should be formulated.
G4 A method for handling conflicts in the input space should be identified.
G5 The combination strategy test process should be validated and com-
pared to currently used methods in real industrial settings.

4.1 Test Process


The first goal, (G1), states that a test process tailored for combination strate-
gies should be defined.
Most organizations describe the set of activities that are followed to build
software in development processes. The purposes of a process approach are
to ensure that all activities are performed in the right order and to allow
for more predictable quality in the performed activities. The use of a
process also makes the work less dependent on key persons. Since testing
is part of development, the testing activities can be part of the development
process. In practice, it is often more convenient to define a separate test
process, which should be possible to plug into an arbitrary development
process.
The generic test process, depicted in Figure 2.1, contains the necessary
activities in a testing project. In sequential development processes, based on
for instance, the Waterfall or the V-model, the test process can be applied
directly. In iterative development processes, a slightly adjusted version of the
test process is used. First, test planning is conducted for the entire project,
then the four steps of the test process are repeated for each iteration. Finally,
an extra test execution activity together with a test stop decision is used on
the final product.
A test process tailored for combination strategies should include these
activities and support both sequential and iterative project models. Hence,
a preferable approach is to refine the generic test process. Further, it should
define the necessary activities to enable use of combination strategies, for
instance combination strategy selection.

4.2 Combination Strategy Selection


The second goal, (G2), expresses the need to explicitly describe means for
comparing and selecting a suitable combination strategy for a specific test
problem.
Our survey of combination strategies presented in paper II [GOA05] iden-
tifies more than 15 combination strategies. Moreover, the results from the
study presented in paper III [GLOA06] indicate that the combined usage
of several combination strategies may be beneficial, further increasing the
choices available to the tester. The sheer number of combination strategies
to choose from makes it difficult to select a suitable combination strategy
for a given test problem.
Which combination strategy to apply may depend on several properties,
for instance, associated coverage metric, size of generated test suite, and
types of faults that the combination strategy targets. Hence, it is neces-
sary to provide the tester with means to compare combination strategies.
These means include descriptions of the properties of existing combination
strategies as well as descriptions of methods for exploring these properties
of future combination strategies.

4.3 Input Parameter Modeling


The applied combination strategy is a factor that obviously has a large im-
pact on the contents of the final test suite. The IPM with its contents is
another factor that influences the contents of the final test suite greatly.
Hence the third goal, (G3), calls for the definition of an input parameter
modeling method.
Recall from section 3 that the IPM is a representation of the input space
of the test object and that the same test problem may result in several
different IPMs. At first glance, creating an IPM for a test problem seems
an easy task. However, with different modeling alternatives available to the
tester the task becomes less obvious.
It is an open question how the effectiveness and efficiency of the testing
is affected by the choice of IPM parameters and their values. Hence, it is
important to investigate different alternatives to input parameter modeling.
At some point, the contents of the IPM depend on the experience and
creativity of the tester. To decrease the alternatives and guide the tester
towards an IPM of acceptable quality there is a need for an input parameter
modeling method or a set of guidelines for this task.

4.4 Handling Conflicts in the IPM


The fourth goal, (G4), calls for the identification of a method for handling
conflicts in the input space of the test object.
Cohen, Dalal, Fredman, and Patton [CDFP97] show examples in which
a specific value of one of the IPM parameters is in conflict with one or more
values of another IPM parameter. In other words, some [sub-]combination of

IPM parameter values is not feasible. A test case that contains an infeasible
sub-combination cannot be executed. Hence, there must be a mechanism to
handle infeasible sub-combinations. Infeasible sub-combinations should not
be confused with negative testing (see section 2.1).
An important principle of conflict handling is that the coverage criterion
must be preserved. That is, if a test suite satisfies 1-wise coverage with
conflicts, it must still satisfy 1-wise coverage after the conflicts are removed.
An important goal is to minimize the growth of the number of test cases in
the test suite.
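A naive way to honor this principle can be sketched as follows (the conflict and the repair strategy are invented for illustration; this is not one of the conflict handling methods investigated in this thesis): drop infeasible test cases, then greedily add feasible replacements for every value whose coverage was lost.

```python
from itertools import product

ipm = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2]}
names = list(ipm)

def feasible(tc):
    # Assumed conflict for illustration: A=1 cannot occur together with B=1.
    return not (tc[0] == 1 and tc[1] == 1)

def repair_1wise(ipm, suite):
    """Drop infeasible test cases, then re-cover every lost IPM parameter
    value with a new feasible test case, preserving 1-wise coverage."""
    kept = [tc for tc in suite if feasible(tc)]
    for i, param in enumerate(names):
        for value in ipm[param]:
            if all(tc[i] != value for tc in kept):
                for cand in product(*(ipm[p] for p in names)):
                    if cand[i] == value and feasible(cand):
                        kept.append(cand)
                        break
    return kept

# The 1-wise suite of Table 3.1; (1, 1, 1) contains the conflict
repaired = repair_1wise(ipm, [(1, 1, 1), (2, 2, 2), (3, 3, 1)])
print(repaired)
```

The repaired suite contains only feasible test cases and still satisfies 1-wise coverage, though this greedy scheme makes no attempt to keep the growth of the suite minimal.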

4.5 Validation
The fifth goal, (G5), requires the combination strategy test process to be
validated in a real setting.
This is a key goal of this research, that is, to determine if combination
strategies are feasible alternatives to the test case selection methods used
in practical testing. Only the use of the combination strategy process in a
real setting can demonstrate if the theory works in practice. Further, it is
of great interest to assess the performance of combination strategies under
realistic conditions.
Chapter 5

Research Methodology

Within the scope of this research six empirical studies have been conducted.
Empirical studies can be conducted in different ways, for instance depend-
ing on the goal of the study. This chapter presents a brief introduction to
research methodology, in section 5.1. Given this introduction, section 5.2
describes the six studies from a research methodology point of view and
highlights important methodological issues in each study.

5.1 Introduction to Research Methodology


The researcher can choose between three types of research strategies when
designing an empirical study according to Robson [Rob93]:
• A survey is a collection of information in standardized form from groups
of people.
• A case study is the development of detailed, intensive knowledge about
a single case or of a small number of related cases.
• An experiment measures the effect of manipulating one variable on
another variable.
An important difference between a case study and an experiment is
the level of control, which is lower in a case study than in an experi-
ment [WRH+ 00].


Which research strategy to choose for a given study depends on several
factors, such as the purpose of the evaluation, the available resources, the
desired level of control, and the ease of replication [WRH+ 00].
Another aspect of empirical studies is whether to employ a quantitative
or a qualitative approach [WRH+ 00]. Studies with qualitative approaches
are aimed at discovering and describing causes of phenomena based on de-
scriptions given by the subjects of the study [WRH+ 00]. Qualitative studies
are often used when the information cannot be quantified in a sufficiently
meaningful way. In contrast, studies with quantitative approaches seek to
quantify relationships or behaviors of the studied objects, often in the form
of controlled experiments [WRH+ 00].
Both surveys and case studies can be conducted using either qualitative
or quantitative approaches. Experiments are, in general, conducted with
quantitative approaches [WRH+ 00].
The three main means for data collection are through observation, by
interviews and questionnaires, and by unobtrusive measures [Rob93]. Two
classes of measurements exist. Objective measures contain no judgement
[WRH+ 00]. In contrast, subjective measures require the person measuring
to contribute with some kind of judgement [WRH+ 00].
The validity of the results of a study is an important aspect of the
research methodology. Cook and Campbell [CC79] identify four different
types of validity. Conclusion validity concerns on what grounds conclusions
are made, for instance the knowledge of the respondents and the statistical
methods used. Construct validity concerns whether or not what is believed
to be measured is actually what is being measured. Internal validity concerns
matters that may affect the causality of an independent variable, without the
knowledge of the researcher. External validity concerns the generalization
of the findings to other contexts and environments. The representativity
of the studied sample, with respect to the goal population, has a large im-
pact on the external validity since it determines how well the results can be
generalized [Rob93].

5.2 Research Methodologies Applied


This research project contains five studies S1-S5. Study S5 has three major
parts S5a, S5b and S5c. These five studies have resulted in six papers (I-VI),
which can be found in the second part of this thesis. Papers I-IV document
studies S1-S4, one-to-one. Paper V documents parts of study S5c and paper
VI documents studies S5a, S5b and the rest of study S5c.
Table 5.1 categorizes the five studies with respect to the employed re-
search strategy and whether the chosen approach is quantitative or qualita-
tive. The different parts of study S5 have different research methodologies
and are thus indicated separately in the table.

Study   Research Strategy   Approach
S1      survey              qualitative
S2      survey              qualitative
S3      experiment          quantitative
S4      experiment          quantitative
S5a     case study          qualitative
S5b     case study          qualitative
S5c     experiment          quantitative

Table 5.1: Overview of research methodologies used in the studies.

The following paragraphs discuss key methodological issues with respect
to each of the conducted studies. Further details can be found in the
corresponding papers.

Study S1
Prior informal observations of testing maturity in several organizations
indicate generally low testing maturity. If formal studies were to confirm
these informal observations, that would motivate further research on
how this situation could be improved. With an explicit focus on combination
strategies in this research project, a natural consequence was to focus on test
case selection methods in the state-of-practice investigation.
Hence, the aim of study S1 was to describe the state-of-practice with
respect to testing maturity, focusing on the use of test case selection methods.
Low usage of test case selection methods would increase the relevance of
further studies of combination strategies.
Testing maturity can be defined in many ways and may consist of sev-
eral components, some of which are quantifiable and others not. The first
key methodological issue for study S1 is whether to use a qualitative or a
quantitative approach.

The governing factor when selecting a qualitative approach was that the
most important questions in the study are more qualitative than quantita-
tive.
Although study S1 is classified as a survey, it also has some properties
in common with case studies, for instance the difficulty of generalizing the
results. The second key methodological issue for study S1 was the selection
of subjects, that is, organizations to study. The total population of orga-
nizations producing and testing software is not known, which means that
judging the representativity of a sample is impossible. The effect of this is
that regardless of which and how many subjects are studied, results are not
generalizable [Yin94]. Based on the decision to use a qualitative approach,
that is to identify and explore potential cause-effect relationships, it was
deemed worthwhile to have a heterogenous set of study objects. This is the
motivation behind the decision to deliberately select the subjects instead of
sampling them.
A third key methodological issue for study S1 was how to collect the
data. Using a self-completed questionnaire would in many respects have
been adequate but to avoid problems with interpretation and terminology
fully structured interviews were used.
Qualitative studies use different techniques, such as analysis, triangula-
tion, and explanation-building to analyze the data [Rob93]. In several of
these techniques sorting, re-sorting, and playing with the data are impor-
tant tools for qualitative analysis [Rob93]. Further, qualitative data may be
converted into numbers and statistical analysis can be applied to these data
as long as this is done overtly [Rob93].
In study S1, much of the initial aim was reached by just examining the
raw data, that is most organizations did not use test case selection methods
in a structured way. Since much of the aim with the study was already
reached, only limited analysis of the data was conducted. From a qualita-
tive analysis perspective, this study may be considered to be prematurely
terminated. However, reaching the original study aims, that is identification
of several organizations where test case selection methods are not used in a
structured way, motivated the decision to terminate the study.

Study S2
The objective of study S2 is to devise a comprehensive compilation of the pre-
vious work on combination strategies. The main challenges from a method-
ological perspective are how to find all relevant sources and when to stop.

The approach adopted for these questions was to query the major arti-
cle databases and follow all relevant references recursively until all relevant
articles have been found. Peer review was used to validate the completeness
of the survey.

Study S3
The aim of study S3 was to investigate and compare the performance of a
number of test case selection methods. An experimental approach suited this
purpose best. The Goal Question Metric (GQM) approach [BR88] was used
to identify suitable metrics and to describe the experiment in a structured
manner.

A general methodological challenge of experiments with test case selection
methods is that there need to be test objects with known faults. It is
impossible to judge the representativity of a set of test objects and equally
impossible to judge the representativity of a set of faults. This makes gener-
alization difficult, which is a threat to external validity. The approach taken
in this experiment is to look not only at the number of faults found but
also at the types of faults found. Through subsumption [RW85] it was then
possible to make some general claims from the results.

Study S4
Just like study S3, study S4 is conducted as a controlled experiment. Again
the aim is to investigate and compare a number of alternative methods for
handling conflicts in IPMs. The design of the experiment allowed for the
investigated methods to be compared with a reference method. This allowed
for a hypothesis testing approach [WRH+ 00] in the analysis of the results.
The controlled variables of this experiment are different properties of
IPMs. As described in chapter 3, IPMs contain representations of test prob-
lems and are used as input to combination strategies. The conflict handling
methods do not require the IPMs to include semantic information, that is
the meanings of the IPMs. This allowed IPMs with desired properties to
be created artificially, that is without basing them on actual test problems.
From a methodological perspective, this experiment can be seen as a simula-
tion experiment [Rob93] since the test problems are simulated. However, it
should be noted that although the IPMs in this experiment are artificial, it
is perfectly feasible to identify actual test problems which will result in IPMs
that are exactly the same as those in the experiment. Further, the studied
conflict handling methods cannot discriminate between an artificially created
IPM and one that is based on an existing test problem. The reason is
that the conflict resolution algorithms do not utilize semantic information.
Also in this study representativity was an issue. As has already been
stated, it is possible to identify test problems that will yield exactly those
IPMs used in the experiment. However, it is impossible to know the general
distributions of properties of test problems, which means that the experi-
mental results alone are not sufficient for general claims. To strengthen the
results, the experiment was complemented with studies of the algorithms of
the investigated methods. Specifically, the relation between the size of the
IPM and the size of the generated test suite was studied. These studies con-
firm the experimental results and point in the direction of the results being
general.

Study S5
The aim of study S5 was to validate the combination strategy test process
in a real setting. Validation of a suggested solution in a real setting presents
several challenges. First, there may be specific requirements on the object
of study limiting the number of potential candidates. In the context of
this research, an ideal study object requires a well specified test problem,
existing test cases derived by some structured test case selection method,
documented found faults, and time available for generation and execution
of an alternative test suite by combination strategies. Second, if a suitable
study object is found, it may involve a great economic risk for a company
to allow the researcher to experiment with a previously untried solution or
concept within its own production. Third, if the researcher is allowed into
the company, there are often restrictions on how the study can be performed
and on how much information can be published. These problems manifested
themselves in this research project. The ideal study object was not available,
with the result that the validation study was divided into three different
studies, S5a, S5b and S5c.
Studies S5a and S5b were both conducted as case studies and study S5c
was conducted as an experiment. Both case studies (S5a and S5b) were
single case designs [Yin94]. This approach is motivated, in part, by the
decision to perform a feasibility study. It is also motivated, in part, by the
difficulty of finding study objects for untried solutions. From an industry
perspective, a single case feasibility study has the properties of a pilot study,
which, if successful, is a strong case for further larger studies.

As described in section 5.1, the results of a case study cannot be generalized
based on statistical analysis. Instead, analytical generalization has to
be used [Yin94]. An important part of analytical generalization is replication
of the study under similar conditions. Hence, a detailed description of the
context and conditions of the study is required. Both studies S5a and S5b
were performed in commercial companies under confidentiality agreements.
As far as possible, the descriptions of these studies reveal the details but to
some extent the completeness of the descriptions is limited. The two most
severe limitations are details of the actual faults found in study S5a and
details of the already existing test cases in study S5b.
To enhance the reliability of the two case studies, detailed protocols were
used during the studies [Yin94]. Due to the confidentiality agreement, these
protocols cannot be released. However, a general description of the major
steps is included in the appendix of the technical report [GDOM06]. Further,
parts of study S5a are currently being repeated by the company that owns
the test problem as part of their normal testing.
The aim of study S5c was a proof-of-concept with respect to input parameter modeling. The
object of investigation is a set of guidelines for transforming a test problem
into an IPM. Prior to this study, these guidelines had not been validated
under realistic conditions, which is why this study focuses mainly on the
feasibility of the guidelines. Within this context, the guidelines are consid-
ered feasible if they can be used to produce IPMs of sufficient quality within
reasonable time frames.
This study was conducted as an experiment in which subjects (testers)
were asked to follow the guidelines and the resulting IPMs were evaluated
with respect to a number of desired properties.
A research methodological issue in study S5c was how to choose response
variables. Input parameter modeling may contain an element of creativity
from the tester. Further, some desired properties of an IPM, such as com-
pleteness, can to some extent be subjective. To decrease the potential risk of
bias, response variables were selected to support objective measures. Where
subjective measures could not be avoided, they were formulated as Boolean
conditions to magnify the differences in the values as much as possible.
A small pilot study was executed as part of study S5c. This pilot study
provided valuable feedback for the execution of the main experiment.
Due to the decision to perform a feasibility study, further studies will
be necessary to investigate the actual performance of the input parameter
modeling guidelines.
Chapter 6

Method - A Combination
Strategy Test Process

A major part of this work is focused on refining the generic test process,
depicted in Figure 2.1, to support the practical use of combination strategies.
This includes the development of techniques, methods, and advice for the
combination specific parts of the test process and a validation of the complete
process.
This chapter provides a top-down description of the test process tailored
for combination strategies in section 6.1. It is followed by detailed descrip-
tions of some of the specific activities in sections 6.2 - 6.4.
In addition to the descriptions of these activities, this research also in-
cludes a series of experiments and case studies to investigate and evaluate
these methods and techniques. In particular, a proof-of-concept experiment
has been conducted. A description of this experiment is presented in sec-
tion 6.5.
Further details of these methods, techniques and advice can be found in
the six papers that form the basis of this thesis. Section 6.6 provides an
overview of how the papers are related to each other and to the process of
using combination strategies in testing.

6.1 A Combination Strategy Test Process


Figure 6.1 shows a test process specifically designed for the use of combina-
tion strategies [Gri04]. This test process is an adaptation of the generic test
process described in section 2.1.

[The figure shows seven steps: (1) Combination Strategy Selection and
(2) Input Parameter Modeling feed (3) Abstract Test Case Generation,
followed by (4) Test Suite Evaluation, (5) Test Case Generation, (6) Test
Case Execution, and (7) Test Result Evaluation. Feedback loops ("test suite
inadequate", "testing incomplete") lead back to a change of combination
strategy and/or IPM.]

Figure 6.1: A combination strategy test process [Gri04].

The planning step of the generic test process is omitted in the combina-
tion strategy test process. The reason is that the planning step is general
in the sense that the planning decisions made govern the rest of the testing
activities, for instance which test case selection methods to use. One impli-
cation of this is that instructions on how to use a specific test case selection
method do not need to take planning into account. Actually, it is beneficial
to keep planning and test case selection independent. Planning may be per-
formed in a large variety of ways and the desired test case selection method
should not impose any unnecessary restrictions on the planning.
Apart from the absence of a planning activity, the main difference be-
tween the combination strategy test process and the generic test process
is in the test preparation activity of the generic test process. In the com-
bination strategy test process, this activity has been refined to satisfy the
requirements from combination strategies. Steps (1)-(5) in the combination
strategy test process are all specific to combination strategies.
Step (1) is to select a combination strategy to use. This step is covered
in more detail in section 6.2. Step (2) is to construct an IPM. This step is
presented in more detail in section 6.3. There is a bidirectional dependence
between these steps, that is, the results of one step may affect the other step.
For instance, if the combination strategy base choice is selected, one value
for each IPM parameter in the IPM should be marked as the base choice. In
a similar fashion, if the result of input parameter modeling is two or more
IPMs, it may be favorable to use different combination strategies for the
different IPMs. Hence, the combination strategy test process should support
multiple iterations between the two steps, choice of combination strategies

and creation of an IPM. The arrows between the two steps provide this
possibility.
Step (3) is the generation of abstract test cases. In this step, the selected
combination strategies are applied to the created IPM. The result of this step
is an abstract test suite. Most combination strategies can be expressed as
algorithms. Hence, this step can be automated, which makes it inexpensive
to perform.
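To make the automation concrete, the following sketch (hypothetical code, not from the thesis) generates an abstract test suite satisfying 1-wise (each-used) coverage from an IPM given as a mapping from parameter names to value lists:

```python
def each_choice(ipm):
    """Generate abstract test cases giving 1-wise (each-used) coverage.

    ipm: dict mapping IPM parameter names to lists of abstract values.
    Every value of every parameter appears in at least one test case.
    """
    params = list(ipm)
    width = max(len(values) for values in ipm.values())
    suite = []
    for i in range(width):
        # Reuse the last value of a parameter once its values are exhausted.
        suite.append(tuple(ipm[p][min(i, len(ipm[p]) - 1)] for p in params))
    return suite

# Hypothetical IPM in the style of the find element example of section 6.3.
ipm = {"length": ["empty", "one", "several"],
       "sorting": ["unsorted", "sorted", "all equal"],
       "match": ["yes", "no"]}
print(each_choice(ipm))  # three abstract test cases
```

With three, three, and two values, the largest parameter dictates the suite size: three abstract test cases suffice to cover every value at least once.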
In practice, the selection of test cases is often influenced by the time
available for test case execution. In step (4) the abstract test suite is eval-
uated. The evaluation may, for instance, focus on the size of the test suite
and indirectly consider the testing time.
If the abstract test suite is too large, the tester may return to steps (1)
and (2) to try to reduce the size of the test suite. The advantage of this
approach is that the costly parts of test case development, that is,
identification of expected results and documentation of test cases, are
postponed until it is certain that the test cases will actually be used.
In step (5), “test case generation”, the abstract test cases are transformed
into executable test cases. This step consists of at least three tasks. The
first task is the identification of actual test case inputs to the test object.
The abstract test cases are converted into real test case inputs through some
mapping function that is established during the input parameter modeling.
The second task is to identify the expected result for the specific input and
the third task is to document the test case in a suitable way. If the intention
is to automate test execution, this task involves writing test programs. For
manual test execution, test case instructions should be documented.
All three test generation tasks are difficult to automate. Identification
of actual test case inputs can be automated, but it requires that the
function mapping IPM parameter values to actual inputs be formalized.
Identification of the expected results is possibly the most difficult task to
automate, since the specification then needs to be semantically exact and
machine readable, that is, expressed in some formal language. Finally,
automatic documentation of test cases requires code generation, which also
requires semantically exact specifications. Unless these requirements are
satisfied, the test generation step is likely to be relatively expensive due
to extensive manual intervention.
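To make the first task concrete, a translation table can be represented directly as a mapping from (IPM parameter, abstract value) pairs to concrete inputs. The sketch below is hypothetical and assumes a find element(list, element) style interface:

```python
# Hypothetical translation table: (IPM parameter, abstract value) -> input.
translation = {
    ("list", "empty"): [],
    ("list", "one element"): [42],
    ("list", "several elements"): [3, 1, 4],
    ("element", "integer"): 4,
    ("element", "non-integer"): "x",
}

def concretize(abstract_test):
    """Map an abstract test case onto actual test case inputs."""
    return tuple(translation[(param, value)] for param, value in abstract_test)

args = concretize([("list", "several elements"), ("element", "integer")])
print(args)  # ([3, 1, 4], 4)
```

The remaining two tasks, identifying the expected result and documenting the test case, have no such mechanical mapping, which is why they tend to dominate the cost of this step.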
Step (6) of the combination strategy test process is test case execution.
As the name implies, the test cases are executed and the results recorded.
There are no differences in this step compared to the corresponding step

in the generic test process. The final step (7) is the test stop decision.
Again, it is a copy of the corresponding test stop decision step in the generic
test process. Steps (6) and (7) are included in the combination strategy test
process to indicate the opportunity for combination strategy re-selection and
input parameter re-modeling should the test results be unsatisfactory.

6.2 Combination Strategy Selection


Recall from section 4.2 that combination strategy selection for a specific
testing problem is not trivial. There are many combination strategies with
different properties. Combination strategies may also be combined,
increasing the options for the tester [GLOA06]. The combination strategies
used have a significant impact on both the effectiveness and the efficiency
of the whole test activity.

1) Determine the project priority (time, quality, cost)
2) Determine the test object priority
3) Select suitable combination strategies

Figure 6.2: Three-step combination strategy selection method.

This research advocates a three-step method, shown in Figure 6.2, for
the selection of combination strategies. In the first step, the overall
project priorities are determined. A well-known and often used model of
project priorities involves the three properties time, quality, and cost
[Arc92]. The fundamental idea is that a project cannot focus on all three
of these properties at the same time. Instead, one or two of the properties
should take precedence over the rest. As a basis for selection of
combination strategies for a test object, this research suggests a total
ordering among these three properties. This ordering forms the basis for
which properties of combination strategies should be considered during the
selection process. Further, a description of the relations between
properties of combination strategies and the three project properties is
used. Table 6.1 exemplifies such relations. These relations were identified
through reasoning. The column "static analysis possible" denotes the
properties that can be assessed statically. Strong relations indicate which
properties should be considered for each of the three project properties.
The objective of the second step is to determine the importance of the
specific test object. Any type of importance metric may be used. This

Combination Strategy              Static Analysis       Project Properties
Properties                        Possible           Time    Quality   Cost
1) Supported coverage criteria         X                     Strong
2) Size of generated test suite                    Strong
3) Algorithmic time complexity         X           Strong
4) Types of targeted faults            X                     Strong
5) Number of found faults                                    Strong
6) Tool support                        X           Strong             Strong
7) Conflict handling support           X                              Strong
8) Predefined test case support        X                              Strong

Table 6.1: Combination strategy properties and project priorities.

information is used to determine the level of the considered properties from
the first step.
In the third and final step, one or more combination strategies are se-
lected based on the project priority and the importance of the test object.
In particular, when the main priority of the project is quality it may be
desirable to use more than one combination strategy with properties that
complement each other.
Two of the eight combination strategy properties, "size of generated test
suite" (2) and "number of found faults" (5), differ between test problems
and can thus only be evaluated dynamically. The remaining six properties can
be assessed statically.
As shown in section 3, combination strategies can be classified into the
two broad categories deterministic and non-deterministic. In the former
case, it is possible to calculate exactly the size of the test suite based on the
number of IPM parameters and IPM parameter values. In the latter case,
randomness plays a role in the combination strategy algorithms. Hence, the
exact size of the test suite cannot be stated in advance for all combination
strategies. In these cases, the approximate size of the test suite may be
estimated or the actual test case generation may be performed.
The number of faults found depends not only on the combination strategy
used. The number and types of existing faults in the test object also have
a large impact on the number of faults found. This makes it very difficult
to predict how many faults a combination strategy will actually find.

To assess the performance of combination strategies with respect to some
of these properties, an experiment was staged. In the experiment, five test
problems were used. IPMs were prepared from the specifications of each
of these test objects. Different combination strategies were then applied to
the IPMs to generate test suites. Implementations of the five test problems,
seeded with faults, were then tested using the generated test cases and the
results were recorded, analyzed and presented. The properties used in the
comparison are number of test cases in the generated test suite, the number
and types of the faults found and the supported coverage criteria. The details
of the experiment can be found in paper III [GLOA06].
The data presented in this thesis represent a snapshot of knowledge
about combination strategies. It is likely that new combination strategies
will emerge and that further properties will be deemed interesting. Hence,
there is a need to extend combination strategy property data. Several of
the properties, such as "support of specific coverage criteria" (1), "types
of targeted faults” (4), “tool support” (6), “conflict handling support” (7),
and “support for predefined test cases” (8), should be possible to assess by
reviewing the available documentation and the combination strategy algo-
rithm. The “algorithmic time complexity” (3) requires a careful analysis of
the algorithm.
Assessment of the “test suite size” (2) and “number of found faults”
(5) requires the investigated combination strategy to be executed in a de-
fined context. The combination strategy test process in Figure 6.1 supports
efficient evaluation of the test suite size through automated generation of
abstract test cases and the feedback loop to the combination strategy se-
lection. The number of faults found is difficult to assess since it depends
on the number, types and distribution of faults in the test object, which is
impossible to know without actually running the test cases.

6.3 A Method for Input Parameter Modeling


Section 4.3 identifies input parameter modeling as a challenge for the tester.
A representation of the test object in an IPM is a pre-requisite for the use
of combination strategies.
Figure 6.3 shows an overview of the eight steps of the proposed input
parameter modeling method [GO07].

[Figure: flowchart. Steps (1) determine modeling approach, (2) identify
parameters, (3) identify values, and (4) complete IPM? form the main path,
with a loop back to refine parameters and values. These are followed by (5)
document constraints, (6) establish translation table, (7) add preselected
test cases, and (8) more IPMs?, with a loop back to add a new IPM covering
more properties of the SUT.]

Figure 6.3: An input parameter modeling method.

In step (1), the tester decides whether to use a structural or functional
approach to input parameter modeling. In the structural approach, each
physical parameter in the interface is modeled as a separate IPM parameter.
The strength of the structural approach is that it is usually easy to identify
the IPM parameters and to translate the abstract test cases into real test
case inputs. The weakness of the structural approach is that the model may
not reflect the functionality of the test object. To illustrate this, consider
testing a function: boolean find element(list, element), which returns
true if the integer element is found in the list of integers. Figure 6.4 shows
an IPM generated with a structural approach. In this example there is
no guarantee that the IPM will result in test cases that exercise the two
situations when the element is found and when it is not found.

A (list): 1: empty, 2: one element, 3: several elements
B (element): 1: integer, 2: non-integer

Figure 6.4: Structural based IPM for the find element function.

The basis for the functional approach is the observation by Cohen et
al. [CDPP96] that, in choosing the IPM parameters, one should base the
model on the system's functionality, not on its interface. This means that
the IPM parameters do not necessarily map one-to-one onto the physical
parameters in the interface. The state of the test object may also be
represented by IPM parameters. Higher test quality is the strength of the

functional approach. A downside is the difficulty of identifying the IPM
parameters. Figure 6.5 shows an IPM for the find element function, generated
with a functional approach.

A (length of list): 1: empty, 2: one element, 3: several elements
B (sorting): 1: unsorted, 2: sorted, 3: all elements equal
C (match): 1: yes, 2: no

Figure 6.5: Functional based IPM for the find element function.

In steps (2) and (3), the IPM parameters and IPM parameter values are
identified. An IPM parameter represents some characteristic of the test ob-
ject. Some examples are the physical parameters in the interface, properties
of the physical parameters, parts of the functionality, and parts of the state
of the test object. IPM parameters may be identified from any source of in-
formation about the test object, for instance, the interface descriptions or the
functional specifications. The experience of the tester may also play a role
during the IPM parameter identification. The value domains of each IPM
parameter are then partitioned into non-empty groups of values. The un-
derlying idea of this step is the same as in equivalence partitioning [Mye79],
that is, all elements in a partition are expected to result in the same behav-
ior of the test object. The set of partitions for an IPM parameter should be
complete and disjoint.
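When the partitions are written as predicates, the completeness and disjointness requirements can be checked mechanically over a sampled domain. A minimal sketch with hypothetical partitions for a list-length parameter:

```python
def check_partitioning(domain, partitions):
    """Assert that every domain value belongs to exactly one partition,
    i.e., that the partitioning is complete and disjoint."""
    for value in domain:
        matches = [name for name, pred in partitions.items() if pred(value)]
        assert len(matches) == 1, f"{value!r} falls in {matches}"

# Hypothetical partitions of a 'length of list' IPM parameter.
length_partitions = {
    "empty":   lambda n: n == 0,
    "one":     lambda n: n == 1,
    "several": lambda n: n >= 2,
}
check_partitioning(range(100), length_partitions)  # passes silently
```

An overlapping or incomplete set of predicates makes the assertion fail, pointing out exactly which value is mis-partitioned.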
Step (4) allows the tester to statically evaluate the IPM, for instance
with respect to completeness. When the tester is satisfied with the IPM in
step four, additional information is added to the IPM in steps (5), (6), and
(7). Step (5) allows the tester to document constraints on how values of
different IPM parameters may be combined. This topic is further explained
in section 6.4. In step (6), a translation table is established. The translation
table contains mappings between IPM parameter values and values of the
actual test object interface. This information is used when automatic gen-
eration of actual test cases is desired. Step (7) allows the tester to include
already existing test cases into the test suite.
Finally, in step (8) the tester decides whether to use one or several IPMs
to represent the test object. If a test object consists of several independent
parts or aspects it may make sense to create several smaller models rather
than one large model. More than one model also allows for varying coverage
of the different models.

6.4 Handling Conflicts in the IPM


As was mentioned in section 4.4, Cohen et al. [CDFP97] observed that some-
times one value of an IPM parameter cannot be combined with one or more
values of another IPM parameter. Ignoring such conflicts may lead to the
selection of test cases that are impossible to execute. Hence, a general mech-
anism for handling conflicts is needed. The purpose of conflict handling is
to give the tester the power to exclude impossible sub-combinations, which
is not the same as eliminating negative test cases, as was described in sec-
tion 4.4.
Two methods to handle conflicts have previously been proposed. In
the sub-models method the IPM is rewritten into several conflict-free sub-
IPMs [CDFP97, WP96, DHS02]. The avoid method is to avoid selecting
infeasible combinations, used for example in the AETG system [CDFP97].
In paper IV [GOM07] two new methods are suggested, the abstract pa-
rameter method and the replace method.
The abstract parameter method allows the IPM parameters involved
in a conflict to be replaced by an abstract IPM parameter. The abstract
IPM parameter contains valid combinations of the replaced IPM parameters,
such that the desired coverage is maintained. The replace method makes it
possible to handle conflicts after the abstract test cases have been selected.
The idea is to replace an abstract test case containing a conflict with two or
more conflict-free test cases such that the coverage is maintained.
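In essence, the abstract parameter method enumerates the valid combinations of the conflicting parameters and promotes them to values of a single new parameter. A minimal sketch, with a hypothetical conflict predicate:

```python
from itertools import product

def abstract_parameter(values_a, values_b, in_conflict):
    """Replace two conflicting IPM parameters with one abstract parameter
    whose values are exactly the valid (a, b) combinations."""
    return [(a, b) for a, b in product(values_a, values_b)
            if not in_conflict(a, b)]

# Hypothetical conflict: an empty list cannot contain a matching element.
combos = abstract_parameter(["empty", "one", "several"], ["yes", "no"],
                            lambda a, b: a == "empty" and b == "yes")
print(combos)  # five valid pairs; ('empty', 'yes') is excluded
```

The resulting abstract parameter is then combined with the remaining IPM parameters by the chosen combination strategy as usual.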
The previously proposed sub-models method, as well as the new abstract
parameter and replace methods, may all result in unnecessarily large test
suites due to over-representation of some IPM parameter values. This can
be compensated for by post analysis and reduction of the abstract test suite.
A simple reduction algorithm is to process test cases one-by-one and discard
any test case that does not increase the coverage.
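That reduction pass, instantiated with pair-wise (2-wise) coverage as the criterion, can be sketched as follows (hypothetical code):

```python
from itertools import combinations

def pairs(test):
    """The set of (position, value) pairs covered by one test case."""
    return set(combinations(enumerate(test), 2))

def reduce_suite(suite):
    """Greedily discard test cases that add no new pair-wise coverage."""
    covered, reduced = set(), []
    for test in suite:
        new = pairs(test) - covered
        if new:
            reduced.append(test)
            covered |= new
    return reduced

suite = [("a1", "b1"), ("a1", "b1"), ("a2", "b1")]
print(reduce_suite(suite))  # the duplicate ('a1', 'b1') is dropped
```

Because a test case is only dropped when it contributes no uncovered pair, the reduced suite retains exactly the pair-wise coverage of the original.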
With this reduction algorithm it is possible to form seven distinct meth-
ods for conflict handling. These seven methods are evaluated and compared
in an experiment described in paper IV [GOM07]. In the experiment, the
number and types of conflicts, as well as the size of the IPM and the
coverage criterion used, are varied. All in all, 2568 test suites with a
total of 634,263
test cases were generated. The conflict handling methods are evaluated on
the number of test cases in the final conflict-free test suites.
In addition to the size of the test suite, other properties of the conflict
handling methods could also influence the tester’s choice. Four of these

properties are the possibility to automate, time consumption, ease of use,
and applicability of the conflict handling methods. Because these properties
heavily depend on potential tools used, experimental evaluation using one
specific tool is essentially meaningless. Instead these properties are discussed
from an analytical perspective in paper IV [GOM07].

6.5 Validation of the Combination Strategy Test Process
Sections 6.1 to 6.4 describe the key parts of the combination strategy test
methodology. The main aim of this research is to determine if combination
strategies are feasible alternatives to test case selection methods used in
practical testing. Hence, an important part of this project is to validate
the use of combination strategies in a practical setting and to compare the
results with the results of using alternative methods.
Section 5 describes the challenges of using real settings to evaluate a the-
ory. In this research project, these challenges caused the validation study to
be split into three separate but related studies with partially different
focuses [GDOM06]. These are: (a) application of the combination strategy test
process on a large test problem, (b) a performance comparison between com-
bination strategies and the currently employed test case selection methods,
and (c) evaluation of the input parameter modeling step of the combination
strategy test process.
All three studies are staged in existing industrial settings. Most notably,
this includes using real testers and solving test problems as they occur in
real life.
In study (a) combination strategies were used to test a large and complex
test problem involving parts of the functionality of an embedded system. The
combination strategy test process described in section 6.1 was used as a basis
for the experiment. Selection of combination strategies and input parameter
modeling were performed by the author of this thesis. The purpose of this
study was to investigate if the combination strategy test process works for
real, large-size problems.
Study (b) also involved an embedded system. In this study a more lim-
ited part of the functionality was used. The purpose of this study was to
compare the results of applying the combination strategy process with results
of currently used testing practices.

Study (c) focused on input parameter modeling. An experiment was staged
with professional software testers, in which they were asked to apply the
input parameter modeling method to a specification. The resulting
IPMs were collected and analyzed to determine the feasibility of the input
parameter modeling method. The motivation for the third study is that it is
important that input parameter modeling can be conducted by non-experts.

6.6 Overview of Articles


The following six papers are included in this thesis. Figure 6.6 shows how
the papers relate to each other and to the combination strategy test
process.

I On the Maturity of Software Producing Organizations - M. Grindal,
J. Offutt, J. Mellin - in Proceedings of the 1st International Conference
Testing: Academia & Industry Conference Practice And Research Techniques
(TAIC PART 2006), Cumberland Lodge, Windsor, UK, 29th-31st Aug., 2006
[GOM06]
This paper describes state-of-practice in twelve software testing orga-
nizations. It provides motivation for this research.

II Combination Testing Strategies: A survey - M. Grindal, J. Offutt,
S. F. Andler - Software Testing, Verification, and Reliability, 2005; 15:
Sept. [GOA05]
This paper surveys the previous research on combination strategies.
In particular it contains descriptions of a large number of combination
strategies and their static properties, for example, support for coverage
criteria and algorithmic complexity.

III An Evaluation of Combination Strategies for Test Case Selection -
M. Grindal, B. Lindström, J. Offutt, S. F. Andler - Empirical Software
Engineering, vol 11, no. 4, Dec 2006, pp 583-611 [GLOA06]
This paper compares dynamic properties, for example, found faults and
size of test suite, of a number of combination strategies in a controlled
experiment.

IV Managing Conflicts when Using Combination Strategies to Test Software -
M. Grindal, J. Offutt, J. Mellin - in Proceedings of the 18th Australian
Conference on Software Engineering (ASWEC2007), Melbourne, Australia,
10-13 April 2007 [GOM07]
This paper describes and evaluates a number of methods to handle con-
flicts in the input space of the test object. The evaluation is conducted
through a controlled experiment.

V Input Parameter Modeling for Combination Strategies in Software Testing -
M. Grindal, J. Offutt - in Proceedings of the IASTED International
Conference on Software Engineering (SE2007), Innsbruck, Austria, 13-15 Feb
2007 [GO07]
This paper describes a method for input parameter modeling and con-
tains results of an experiment in which the method is used by a number
of non-experts to construct input parameter models from a specifica-
tion.

VI Using Combination Strategies for Software Testing in Practice - A
proof-of-concept - M. Grindal, Å. Dahlstedt, J. Offutt, J. Mellin -
Technical Report HS-IKI-TR-06-001, School of Humanities and Informatics,
University of Skövde [GDOM06]
This paper contains three related studies, which together form a proof-
of-concept of the combination strategy test process and indirectly indi-
cate that combination strategies are feasible alternatives to currently
used test case selection methods.

Papers V and VI have some overlap since they report details of the same
study (S5c in section 5.2). Paper V contains details on the method and
summaries of the experiment and its results, whereas paper VI focuses more
on describing the details of the experiment and the results thereof.

[Figure: the combination strategy test process of Figure 6.1, with Papers
I-III (On the Maturity of Software Producing Organizations; Combination
Testing Strategies - A Survey; An Evaluation of Combination Strategies for
Test Case Selection) shown above the process, and Papers V, IV, and VI
(Input Parameter Modeling for Combination Strategies in Software Testing;
Managing Conflicts when Using Combination Strategies to Test Software;
Using Combination Strategies for Software Testing in Practice) shown below
it.]

Figure 6.6: Papers included in this research.


Chapter 7

Results

As was described in section 5.2, this research is based on five studies. This
chapter contains the main results from the studies. The results are presented
in the same order as in chapter 6, with the exception of the first section on
the combination strategy test process, which has not been explored in isola-
tion. After the results have been presented, this research is contrasted with
related work in section 7.5. Finally, the methods and results are discussed
in section 7.6.

7.1 Combination Strategy Selection


Section 6 presents a test process custom designed for combination strate-
gies. The first activity in this process focuses on selection of one or more
combination strategies to use when faced with a testing problem. Further,
section 6.2 lists a number of important properties to consider. As a result
of this research a number of combination strategies have been investigated
with respect to these properties. Detailed data from these investigations are
contained in papers II [GOA05] and III [GLOA06].
Tables 7.1 and 7.2 show detailed data on these properties for six combi-
nation strategies. These are Each Choice (EC) [AO94], Base Choice (BC)
[AO94], Orthogonal Arrays (OA) [Man85, WP96], In-Parameter-Order (IPO)
[LT98], Automatic Efficient Test Generator (AETG) [BJE94, CDKP94], and
All Combinations (AC). The AETG combination strategy supports several


levels of coverage, which explains the two AETG entries in the table. In the
formulae, t is the level of coverage, N denotes the number of IPM
parameters, Vi is the number of values of the i:th IPM parameter, and
Vmax = MAX_{i=1..N} Vi. For example, Vmax = 3 for the IPM given in
Figure 6.4 since parameter A contains three elements.

Combination Strategy Properties

Comb.          Coverage      Test Suite                 Algorithm
Strategy       Criteria      Size                       Complexity
EC             1-wise        Vmax                       O(Vmax)
BC             base choice   1 + Σ_{i=1}^{N} (Vi − 1)   O(N × Vmax)
OA             2-wise        ∼ Vmax²                    O(1)
IPO            2-wise        ∼ Vmax²                    O(Vmax³ × N² log(N))
AETG (2-wise)  2-wise        ∼ Vmax²                    O(Vmax⁴ × N² log(N))
AETG (t-wise)  t-wise        -                          Ω((Vmax)^t)
AC             N-wise        Π_{i=1}^{N} Vi             O((Vmax)^N)

Table 7.1: Properties of some combination strategies (I).

Combination Strategy Properties

Comb.          Fault     #        Tool      Conflict   Predefined
Strategy       Type      Faults   Support   Handling   Test Cases
EC             1-factor  107      Yes       Yes        Yes
BC             1-factor  119      Yes       No         No
OA             2-factor  117      No        No         No
IPO            2-factor  -        Yes       Yes        Yes
AETG (2-wise)  2-factor  117      Yes       Yes        Yes
AETG (t-wise)  t-factor  -        No
AC             N-factor  120      Yes       Yes        -

Table 7.2: Properties of some combination strategies (II).

The supported coverage criteria are a fundamental property of combination
strategies. The underlying algorithms of each combination strategy are
defined to support one or more specified coverage criteria, which is illus-
trated in Table 7.1. The relations between different coverage criteria can

be expressed in a subsumption hierarchy. Rapps and Weyuker [RW85] define
subsumption: coverage criterion X subsumes coverage criterion Y if and
only if 100% X coverage implies 100% Y coverage. (Rapps and Weyuker's
original paper used the term inclusion instead of subsumption.) Figure 7.1
shows a subsumption hierarchy for coverage criteria supported by
combination strategies. The coverage criteria in the left, boxed-in column
do not rely on semantic information (i.e., information about what the model
represents).
Detailed information about the coverage criteria and the subsumption hier-
archy can be found in paper II [GOA05]. It explains why t-wise coverage
does not subsume t-wise valid coverage.

[Figure: subsumption hierarchy relating the coverage criteria N-wise,
N-wise valid, t-wise, t-wise valid, variable strength, base choice, 2-wise
(pair-wise), pair-wise valid, single error, 1-wise (each-used), and each
valid. The left, boxed-in column contains the criteria that do not rely on
semantic information.]
Figure 7.1: Subsumption for combination strategy coverage criteria.

The size of the generated test suite is heavily dependent on the specific
coverage criterion and the algorithm used to satisfy that coverage criterion.
As mentioned in section 6.2, some algorithms are deterministic while oth-
ers rely to some degree on randomness. For the deterministic combination

strategies, for example, EC, BC, and AC, it is possible to precisely calculate
the size of the test suite based on the number of IPM parameters and IPM
parameter values.
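Using the formulas of Table 7.1, these sizes can be computed directly from the shape of the IPM; a small sketch:

```python
from math import prod

def suite_sizes(value_counts):
    """Test suite sizes for the deterministic strategies of Table 7.1.

    value_counts: the number of values of each IPM parameter.
    """
    return {
        "EC": max(value_counts),                     # 1-wise
        "BC": 1 + sum(v - 1 for v in value_counts),  # base choice
        "AC": prod(value_counts),                    # N-wise
    }

# An IPM with parameters of 3, 3, and 2 values (cf. Figure 6.5).
print(suite_sizes([3, 3, 2]))  # {'EC': 3, 'BC': 6, 'AC': 18}
```

Note how quickly AC grows compared with EC and BC, which is the practical argument for the weaker coverage criteria.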
The other three combination strategies in Table 7.1, that is, OA, IPO,
and AETG all contain an element of randomness. These combination strate-
gies may generate test suites with varying contents and sizes from the same
IPM at different times. Hence, the exact size of the test suite cannot be
stated in advance. However, in most cases the variation in test suite size
seems to be small, which makes it possible to state an approximate size.
For more details on test suite sizes, see paper III [GLOA06].
The algorithms of some of the more complex combination strategies, for
instance IPO [LT98] and AETG [CDPP96], are time consuming. The algorithmic
complexity is a static property and can thus be assessed analytically. OA
does not rely on an algorithm in the same sense as the others. Based on
the assumption that precalculated Orthogonal Latin Squares [Man85] can be
used, the algorithmic complexity of OA is O(1). For t-wise AETG, Ω denotes
a lower bound on the algorithmic complexity; the exact complexity depends
on the value of t.
The types of targeted faults depend on the coverage level and the amount
of semantic information available in the IPM. Dalal and Mallows [DM98]
suggest a model for software faults in which faults are classified according to
how many IPM parameters (factors) need distinct values to cause the fault
to result in a failure. A t-factor fault is triggered whenever the values of
some t IPM parameters are involved in triggering a failure. Each possible
fault can be specified by giving the combination(s) of values relevant for
triggering that fault.
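Under this model, checking whether a test case triggers a given fault reduces to matching the fault's (parameter, value) pairs against the test case; a hypothetical sketch:

```python
def triggers(test_case, fault):
    """True if the test case contains every (parameter, value) pair that
    specifies the fault; a t-factor fault names t such pairs."""
    return all(test_case.get(p) == v for p, v in fault.items())

# Hypothetical 2-factor fault: fails for a one-element list with no match.
fault = {"length": "one", "match": "no"}
suite = [{"length": "empty", "sorting": "unsorted", "match": "yes"},
         {"length": "one", "sorting": "sorted", "match": "no"}]
print(any(triggers(t, fault) for t in suite))  # True
```

Any test suite satisfying 2-wise coverage is guaranteed to contain some test case combining these two values, and hence to trigger this particular 2-factor fault.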
Recall from section 6.2 that it is impossible to predict how many faults
a combination strategy will find. Within the scope of this research an ex-
periment was conducted, which involved five test objects with a total of 128
faults. The numbers reported in Table 7.2 refer to this experiment.
Although no general conclusions can be drawn based on these figures due to
representativeness issues, it is surprising that the simpler combination
strategies detect as many faults as they do. The main reason for this
behavior is that many of the faults in the study were simple in the sense
that specific values of only one or two parameters are needed to trigger
the fault. A more detailed explanation of this behavior can be found in
paper III [GLOA06].
The tool support described in Table 7.2 reflects the current situation and
is subject to change over time. Hence, tool support for AETG t-wise is likely

to emerge if there is a need for it. As previously stated, the OA
combination strategy does not rely on an algorithm in the strict sense.
Hence, tool support
for OA is less likely to appear.
The conflict handling property describes the possibility of adding conflict
handling support to the combination strategy algorithm. In the case of BC,
its associated coverage criterion makes conflict handling unclear. For the OA
combination strategy the same reason as in the previous cases, that is, lack
of algorithm description, limits the possibility of adding conflict handling
support.
Support for predefined test cases means that the combination strategy
takes a set of test cases supplied in advance by the tester and selects
further test cases such that the coverage criterion is satisfied. This feature
permits legacy test cases to be used. It also allows a combination strategy
to be combined with other test case selection methods in an efficient way.

7.2 Input Parameter Modeling


Input parameter modeling plays an important role when using combination
strategies. The constructed IPM affects both the effectiveness and the ef-
ficiency of the test process [DHS02]. As there is an element of creativity
involved in the creation of IPMs it is vital to provide support to the tester
in the modeling task. An input parameter modeling method is presented in
section 6.3. In the context of this research, this input parameter modeling
method has been investigated in an experiment. This section summarizes
the results from this experiment. More details on the experiment and its
results can be found in paper VI [GDOM06].
In the experiment, nineteen professional testers were given a one-hour
tutorial on combination strategies with a focus on the input parameter
modeling method. After the tutorial, the subjects of the experiment were asked
to independently create IPMs from a specification of a priority queue. The
subjects’ solutions were then graded with respect to a number of desired
properties. The results of the input parameter modeling experiment are shown
in Table 7.3.
Fifteen of the nineteen subjects managed to create an IPM from which
test cases can be generated. Three of the four subjects whose IPMs could
not be used for test case generation reported no experience in applying other
structured test case selection methods, such as equivalence partitioning and
boundary value analysis.

 Property                                               Results
 A. Overall                                             -
 A1. Model can be used for test case generation         15
 A2. Semantic information indicated in IPM              3
 A3. Conflicts indicated                                5
 B. IPM Parameters                                      -
 B1. Concept correctly used                             14
 B2. All aspects of test object covered                 6
 B3. No overlap of IPM parameters                       17
 C. IPM Parameter Values                                -
 C1. Concept correctly used                             16
 C2. No overlap of IPM Parameter Values                 18
 C3. Valid and invalid IPM Parameter Values included    17

Table 7.3: Number of solutions (of 19) that satisfy different properties.


Except for the three properties “semantic information indicated” (A2),
“conflicts indicated” (A3), and “all aspects covered” (B2), the number of
solutions satisfying the other properties varies around 15.
The low results in inclusion of semantic and conflict information in the
subjects’ solutions can be explained by the fact that the tutorial did not
stress how test cases are generated by combination strategies. This is not
a problem since adding this information afterwards is inexpensive and does
not affect the contents of the IPM. Covering all aspects is difficult, since
input parameter modeling relies to some extent on experience and creativity.
Further training, checklists, and reviews are different methods to remedy this
problem.
The results from this experiment show that it is possible to tutor per-
sons with proper testing background to perform input parameter modeling,
according to the suggested input parameter modeling method, such that the
resulting models are useful.
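To illustrate what such a model can look like, the sketch below shows a hypothetical IPM for a priority queue. The parameters and values are invented for illustration; they are neither the specification used in the experiment nor any subject's solution.

```python
# Hypothetical IPM for a priority queue -- the parameters and values
# below are invented for illustration, not taken from the experiment.
ipm = {
    "operation":        ["insert", "remove_max", "peek"],
    "queue_state":      ["empty", "one_element", "several_elements", "full"],
    "element_priority": ["min", "mid", "max", "duplicate_of_existing"],
}

# 1-wise coverage needs at least as many test cases as the largest
# value list (4 here); full N-wise coverage needs the cross product.
n_full = 1
for values in ipm.values():
    n_full *= len(values)   # 3 * 4 * 4 = 48
```

Even this small model shows the combinatorial growth that combination strategies are designed to manage: full coverage already requires 48 test cases.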

7.3 Conflict Handling in the IPM


In section 6.4 seven methods for handling conflicts in the input space of the
test object are mentioned. The point in the test process when these methods
should be applied varies with the method. Further, some of the methods also
affect the input parameter modeling process. Hence it is important to decide
from the start which conflict handling method to use.
The abstract parameter and sub-models methods are both applied dur-
ing input parameter modeling. The avoid method is an extension of the
combination strategy algorithm that prohibits conflicting combinations to
be selected. The replace method removes conflicts from the final test suite.
The abstract parameter, sub-models, and replace methods can be combined
with a reduction mechanism. This reduction mechanism filters test suites
and removes any test case not increasing the coverage.
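The reduction mechanism can be sketched as a simple filter (a hypothetical illustration for t-wise coverage, not the implementation used in the experiments): a test case is kept only if it contributes at least one t-wise sub-combination not covered by the test cases kept before it.

```python
from itertools import combinations

def tuples_of(test, t):
    """All t-wise (position, value) sub-combinations of one test case."""
    return {frozenset(zip(pos, (test[p] for p in pos)))
            for pos in combinations(range(len(test)), t)}

def reduce_suite(suite, t=2):
    """Keep a test case only if it adds at least one uncovered t-tuple."""
    covered, kept = set(), []
    for test in suite:
        new = tuples_of(test, t) - covered
        if new:                 # the test case increases t-wise coverage
            kept.append(test)
            covered |= new
    return kept
```

Applied after the abstract parameter, sub-models, or replace methods, a filter of this kind yields the "reduced" variants compared in the experiment.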
To determine the efficiency of these conflict handling methods, they were
compared in an experiment. In the experiment, the number and types of
conflicts, as well as the size of the IPM and the coverage criterion used, are
varied. The response variable in the experiment is the number of test cases
in the final conflict-free test suite. Details of this experiment can be found
in paper IV [GOM07].
Table 7.4 shows the sum of sizes of the test suites in two experiments
for the seven investigated methods. In the small experiment, one or two
conflicting pairs were used and test suites for 1-wise and 2-wise coverage
were generated. In the large experiment, conflicts with up to ten conflicting
pairs were used and coverage was extended to also include 3-wise coverage.
In the large experiment the abstract parameter method was omitted due
to large execution times of the experiment. Long execution times are indi-
rectly visible in the small experiment where the abstract parameter method
generated about four times as many test cases as the rest of the methods.
Further, analysis of the results for 2-wise coverage indicates that the
abstract parameter method will perform even worse for 3-wise coverage than
it does for 2-wise coverage.
Statistical analysis of all the results shows that the abstract parameter
and sub-models methods (even with reductions) perform worse than the
replace and avoid methods. The avoid method seems to be the best but this
cannot be verified with statistical significance.
 Conflict Handling Method      Small    Large

 Avoid                        13,630  109,865
 Replace                      14,521  119,020
 Reduced Replace              14,141  115,770
 Abstract Parameter           64,469        -
 Reduced Abstract Parameter   17,493        -
 Sub-Models                   28,819  315,601
 Reduced Sub-Models           17,025  158,671

Table 7.4: Sum of test suite sizes in two experiment runs.

In addition to the size of the test suite, other properties of the conflict
handling methods may also influence the tester’s choice of method. Four
such properties are: possibilities to automate, time consumption, ease of
use, and applicability of the conflict handling methods.
The avoid and replace methods can be fully automated, although
the avoid method requires a different implementation for each combination
strategy, since it needs to be built into the actual algorithm of the
combination strategy. The abstract parameter and sub-model methods are
more difficult to automate since they require transformations of the IPM.
When test suites are generated, with the exception of the abstract pa-
rameter method, time consumption is dominated by test case generation,
not conflict handling.
To operate the combination strategies, the avoid and replace methods
only require a command to be invoked. The abstract parameter and sub-
model methods require the IPM to be adjusted, a step that probably requires
human intervention.
The abstract parameter, sub-models, and replace methods are all com-
pletely general with respect to the combination strategy used. The avoid
method has to be integrated into the combination strategy, which may not
be possible for some combination strategies. For instance, OA is one com-
bination strategy that cannot be used with the avoid method. More im-
portantly, if third-party combination strategy tools are used, it may not be
possible to integrate the avoid method into the tool since the source code
may not be available.
The overall conclusion of the experimental results and the following dis-
cussion is that the avoid or reduced replace methods are superior to the
other methods. Which of these two methods should be used is impossible to
say based on the results of the experiment. In most cases the avoid method
results in fewer test cases, but the differences are small. So the specific
situation will govern the choice of conflict handling method.
An important consequence of these results is that the tester does not
need to worry about introducing conflicts when creating the IPM. As long
as the tester keeps track of the infeasible sub-combinations, these can be
resolved automatically and efficiently at a later stage in the test process.

7.4 Applying the Combination Strategy Test Process
Section 7.2 describes the results of validating the input parameter
modeling method (study S5c in section 5.2), which is part of the
study to demonstrate the practical use of the combination strategy process.
The results of the two remaining parts of this study (applying combination
strategies to a large testing problem, study S5a in section 5.2, and compar-
ing the combination strategy generated test cases with test cases derived by
other means, study S5b in section 5.2) are presented in this section. Further
details on all three studies can be found in paper VI [GDOM06].
In the concluding subsection 7.4.3 validation of the goals of this thesis is
discussed.

7.4.1 Handling Large and Complex Test Problems


The complexity study (study S5a) was conducted in a case study at a com-
pany with a complex testing problem. This testing problem consists of a
programmable event handler based on Boolean expressions. One hundred
different Boolean input signals are available to the user. These can be negated
using the unary NOT operator and combined arbitrarily using any of the six
standard binary Boolean operators (AND, OR, NAND, NOR, XOR, and
XNOR) or the two ternary flip-flop operators.
In addition, two ternary and one four-input counter functions can also
be used in the expressions. The Boolean input signals are continuous over
time so there also exist five unary timing operators, for instance, delay all
events, delay up flanks, delay down flanks and so on. The output from an
expression can be connected to some physical output, for instance a sound
alarm or a lamp.
The event handler, viewed as a test problem, has an enormous input
space. For instance, it is possible to vary the number of Boolean input
signals and the types of operators included in an expression. Further, it is
also possible to vary the bit vectors and their timing on the input signals.
The vast number of possibilities makes it impossible, in practice, to test
everything. Combination strategies are therefore an interesting alternative
for this test problem.
The final solution is based on four different IPMs with different foci.
These are (1) Boolean input signals, (2) Boolean operators including flip-
flops, (3) counter functions, and (4) timing operators. Table 7.5 shows an
overview of the number of test cases required to satisfy different levels of
coverage for the different IPMs. Several of the IPMs have so few parameters
that N-wise (full) coverage can be reached within the limits of the table.
Higher coverage of these IPMs is thus not applicable. These situations are
indicated with N in the table. Already for low coverage levels, the input
signal and timer IPMs yield so many test cases that test case generation for
higher levels of coverage is meaningless. This is represented by (-) in the
table.

 IPM    contents         no.  1-wise  2-wise  3-wise  4-wise

 IPM1   input signals     -      101     816       -       -
 IPM2   unary             1        2       N       N       N
        binary            6        2       4       N       N
        ternary           2        2       4       8       N
 IPM3   small counters    2        4       8      16       N
        large counters    1        4       8      16      32
 IPM4   timers            5        4      23      86       -

Table 7.5: Test cases for varying coverage of the event handler IPMs.

In the case of the input signals, a naive solution would be to test each
signal in isolation, thus requiring 2×100 test cases. An alternative solution is
to use OR operators to create expressions with input signals. An expression
can then be tested using an MCDC approach [CM94] by setting one signal
at a time to true. This approach also allows for varying coverage by considering
the inclusion of individual signals, pairs of signals, triplets of signals, etc., in
the expressions. The results in the table are derived using this approach.
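The one-signal-at-a-time idea can be sketched as follows (a hypothetical illustration, not the tooling used in the case study): for an OR expression over n input signals, one vector per signal sets exactly that signal to true, and one all-false vector is added, giving n + 1 test vectors per expression.

```python
def or_mcdc_vectors(n):
    """Test vectors for an n-input OR expression: one vector per signal
    with exactly that signal true, plus the all-false vector. Each
    one-signal vector shows that signal's independent effect on the result."""
    vectors = [[i == j for j in range(n)] for i in range(n)]
    vectors.append([False] * n)
    return vectors

# An OR over 4 signals needs 5 vectors; the expression evaluates to True
# for each one-signal vector and to False for the all-false vector.
vectors = or_mcdc_vectors(4)
results = [any(v) for v in vectors]
```

Each one-signal vector changes the outcome relative to the all-false vector, which is what demonstrates the independent effect of that signal.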
In the table, the values in bold indicate the level of coverage used during
execution. All in all, 1352 test cases were executed. These test cases found
four major faults. Additional experiences from this experiment are that (i)
constructing the IPMs took less than a day, (ii) once the IPMs were con-
structed, changes in the specification were easy to handle, and (iii) several
alternative abstract test suites with different levels of coverage were cheap to
generate. This proved beneficial since it allowed a quantified discussion with
the owner of the test problem concerning test quality vs. cost and risk.

7.4.2 Alternative Test Case Generation Methods


The study comparing combination strategies with currently used test case
selection methods (study S5b) was also conducted in the form of an industrial
case study.
The test problem in this experiment is a low-level resource manager. It
administers different types of communications, such as point-to-point links or
packet-based services between processes, which may be distributed across a
set of computation nodes. Within the resource manager a set-up() function
was selected for this study. A structural modeling approach was used to
create an IPM covering the input space defined by the 11 actual parameters
of the set-up function. A twelfth parameter was added to handle the device
state. The specification made it easy to identify both valid and invalid
values for each of the 12 IPM parameters. The large number of invalid
values in the IPM created an interest in using coverage criteria based on
semantic information (see Figure 7.1 on page 49). Table 7.6 classifies the
IPM parameter values into (N)ormal, (B)ase choice, and (I)nvalid values.

                       Parameter
 Value #    A  B  C  D  E  F  G  H  I  J  K  L
 1          I  N  I  I  I  B  I  I  I  B  I  B
 2          I  B  I  I  B  I  I  I  B  I  I  I
 3          I  I  B  B  -  -  B  B  -  -  B  -
 4          B  -  -  -  -  -  -  -  -  -  -  -
 5          N  -  -  -  -  -  -  -  -  -  -  -
 6          N  -  -  -  -  -  -  -  -  -  -  -

Table 7.6: IPM value classification for the low level resource handler.

Table 7.7 shows the total number of test cases and the number of valid
test cases for varying levels of coverage. A valid test case contains only
normal or base choice values. The large number of invalid values in the IPM
is the reason for the low number of valid test cases.

 Type of Test Cases   1-wise  2-wise  BC

 All                       6      25  24
 Valid                     0       0   4

Table 7.7: Test cases for varying coverage of the resource handler IPM.
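The BC column follows directly from how the base choice strategy constructs its suite; a minimal sketch (hypothetical code with invented parameter values, not the tool used in the study):

```python
def base_choice(ipm, base):
    """Base choice strategy: one base test case combining the base choice
    of every parameter, then one test case per remaining value, varying
    a single parameter at a time. 'ipm' maps parameter -> value list,
    'base' maps parameter -> its base choice value."""
    suite = [dict(base)]
    for param, values in ipm.items():
        for value in values:
            if value != base[param]:
                test = dict(base)
                test[param] = value
                suite.append(test)
    return suite

# With invented parameters of 3, 2, and 4 values, the suite size is
# 1 + (3 - 1) + (2 - 1) + (4 - 1) = 7 test cases.
ipm = {"a": [1, 2, 3], "b": [1, 2], "c": [1, 2, 3, 4]}
suite = base_choice(ipm, {"a": 1, "b": 1, "c": 1})
```

This construction also explains the figures in Table 7.7: since most non-base values in the IPM are invalid, only the few variations that stay within normal and base choice values yield valid test cases.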

The set-up function was originally tested by a requirements-based approach.
Given explicitly stated requirements on the functionality, test cases
were designed to verify these. This resulted in five test cases. Four of these
are valid and the fifth test case is invalid, that is, it tests error handling.
Experiences from comparing the existing test cases with the test cases
generated in the combination strategy approach are: (i) the existing test
cases over-emphasize valid test cases, (ii) the structural approach used to
generate the IPM results in test cases missed by the requirement approach,
and (iii) the requirements based approach makes coverage assessments more
difficult since it is not obvious how to test a requirement.

7.4.3 Validation of Thesis Goals


The aim of this research is to investigate whether combination strategies are
feasible alternatives to test case selection methods used in practical testing. To
achieve this aim, a test process custom-designed for combination strategies
has been refined from a generic test process. A large part of this research
project has focused on defining those activities in the test process that are
combination strategy specific. The formulation of the test process with its
associated activities is a necessary step towards reaching the aim. This is
also reflected in goals G1-G4 on page 20.
The work in formulating the test process with its activities is based on a
combination of literature studies and experimental evaluations of alternative
solutions. Experience in practical testing has also played a role in this work.
The final step of this research project, reflected in goal G5, is a series
of validation experiments. The joint objective of these experiments is to
show that the test process is tractable and that it performs at least as well
as currently used test case selection methods. A common feature of all
these experiments is that they are all conducted in actual industrial settings.
The key results of these experiments are that (i) testers can learn to apply
the input parameter modeling method with only a moderate investment in
training, (ii) combination strategies can be used to handle large and complex
testing problems. The use of combination strategies provides means for
(iii) measuring the test quality, and (iv) determining a suitable quality of
testing with respect to available resources. Together, these results show that
combination strategies are tractable.
When it comes to the performance of the combination strategy test pro-
cess compared to the performance of other, currently used, test case selec-
tion methods, it is impossible to make a general claim. The main reasons
are that the state-of-practice is very diverse and that it is impossible to
judge the representativeness of a test problem or a set of faults. However,
there is nothing in the results from this research that contradicts the idea
of combination strategies performing at least as well as currently used test
case selection methods. First, as the state-of-practice investigation indicates
(see paper I [GOM06]), much of the test case selection today is performed
in an ad hoc manner. Thus, the use of any structured test case selection
method, including combination strategies, results in a more predictable test
quality. Second, the combination strategy test process already includes tool
support and more importantly allows cheap evaluation of test suites prior to
the costly work of detailing each test case. Hence, the use of combination
strategies contributes to more exact test planning, monitoring, and control.
Third, the overhead caused by input parameter modeling is small compared
to the time it takes to create the final test cases. Based on this, the conclu-
sion is that combination strategies should be interesting to many software
testing organizations.

7.5 Related Work


In the attempt to bridge the gap between academia and industry with re-
spect to use of combination strategies, this thesis combines three themes:
selection of combination strategy, input parameter modeling and handling
conflicts in the IPM. Selection of combination strategies has previously been
partly explored through evaluation of selected properties. However, there is
no previous work on how to use this information to select a suitable com-
bination strategy for a given test problem. Input parameter modeling has
also previously been partly targeted but not with an explicit focus on com-
bination strategies. Conflict handling has been recognized as an important
aspect of combination strategy usage but no explicit exploration of alternative
conflict handling methods exists.
Tables 7.8 and 7.9 give overviews of the most important related work
with respect to these themes. Much of the previous work has been centered
around suggesting new combination strategies, which is why this has been
included as a fourth theme in the tables. More details on these related works
can be found in paper II [GOA05].

 Work                          CS Evaluate  IPM  Confl.  CS Suggest
 Cohen et al. [CDKP94]         No           No   Partly  Yes
 Williams & Probert [WP96]     No           No   Partly  Yes
 Mandl [Man85]                 No           No   No      Yes
 Ammann & Offutt [AO94]        Yes          No   No      Yes
 Lei & Tai [LT98]              Yes          No   No      Yes
 Williams [Wil00]              Yes          No   No      Yes
 Cohen et al. [CGMC03]         Yes          No   No      Yes
 Shiba et al. [STK04]          Yes          No   No      Yes
 Lei & Tai [LT01]              Yes          No   No      No

Table 7.8: Classification of related work on combination strategies.

Most of the previous work on combination strategies has emphasized
new combination strategies and compared these with previously suggested
combination strategies. In the early papers, there is usually no or little
comparison, which is indicated with “No” in column “CS Evaluate” in ta-
bles 7.8 and 7.9. Examples of this are the papers by Cohen, Dalal, Kajla,
and Patton [CDKP94], Williams and Probert [WP96], and Mandl [Man85].
In later papers, in addition to describing new combination strategies,
more emphasis is put on comparing different combination strategies. Am-
mann and Offutt [AO94], Lei and Tai [LT98], Williams [Wil00], Cohen,
Gibbons, Mugridge, and Colburn [CGMC03], and Shiba, Tsuchiya, and
Kikuno [STK04] are all examples of this. In addition, Lei and Tai [LT01] pro-
vide further information on the performance of different combination strate-
gies without suggesting any new combination strategies. These comparisons
are the basis for combination strategy selection.
Similar to Lei and Tai [LT01], this work also compares the performance of
different combination strategies without suggesting any new, but in contrast
to their work, this project suggests how to use the obtained performance
information to select an appropriate combination strategy for a given test
problem.

 Work                               CS Evaluate  IPM  Confl.  CS Suggest
 Ostrand & Balcer [OB88]            No           Yes  Partly  Partly
 Grochtmann & Grimm [GG93]          No           Yes  Partly  No
 Mayrhauser et al. [vMSOM96]        No           Yes  Partly  No
 Chen et al. [CTPT05]               No           Yes  Partly  No
 Piwowarski et al. [POC93]          No           Yes  No      No

Table 7.9: Classification of related work on input parameter modeling.

The Category Partition Method (CPM) by Ostrand and Balcer [OB88]
is likely to be the seminal work with respect to input parameter modeling.
It is a step-by-step method to transform a specification into abstract test
cases (test frames in their terminology). In a sense, the Category Partition
Method can be viewed as a combination strategy with the ability to han-
dle conflicts since the method requires all valid test frames to be included.
However, selecting all valid combinations may be infeasible for larger test
problems. The input parameter modeling method suggested in this work
is in many ways an extension of the CPM. The main difference is in the
generality. Where the CPM is custom-designed for a specific combination
strategy and a certain way of handling conflicts, the method suggested in
this work is completely general with respect to combination strategies and
conflict handling methods.
Grochtmann and Grimm [GG93] suggest Classification Trees as a solu-
tion to the input parameter modeling problem. Classification Trees subdi-
vides the input space based on properties of the objects in the input space.
Classification Trees contains a built-in ability to handle conflicts but does
not give any advice on test case selection. The input parameter modeling
suggested for Classification Trees has aspects in common with the input pa-
rameter modeling method suggested in this work, for instance, the explicit
goal to cover all aspects of the test object and for each aspect to cover the
entire input domain. The main difference is again that this work explicitly
targets combination strategies.
62 Results

Sleuth, which was developed by von Mayrhauser, Shumway, Ocken, and


Mraz [vMSOM96], is an alternative solution to the input parameter modeling
problem. It has similarities with the Classification Tree approach in the
sense that both methods result in hierarchical models of the test object
domains. Sleuth contains a built-in, rule based, ability to handle conflicts.
Rules can be specified to describe different types of constraints, such as
inheritance rules and parameter binding. Combination strategies support
different levels of coverage of an IPM. Sleuth takes another approach. The
tool always generates all possible test cases that satisfy the constraints, but
with a mechanism to turn certain constraints on and off, the size of the final
test suite can be manipulated.
Chen, Tang, Poon, and Tse [CTPT05] propose an input parameter mod-
eling method starting from UML activity diagrams. Like Classification
Trees, this method has a built-in ability to handle conflicts but gives no
advice on test case selection. The UML input parameter modeling method
has the same similarities and differences with this input parameter modeling
method as Classification Trees.
Piwowarski, Ohba, and Caruso [POC93] showed the applicability of refin-
ing the IPM by monitoring the code coverage achieved by the generated test
suite. This is a completely different approach compared to the one employed
in this work, since combination strategies generally are black-box methods
and thus do not rely on access to the source code.
As far as the author knows, there is no previous work specifically tar-
geting conflict handling in IPMs even though some briefly touch upon the
subject. Distantly related to this area is constraint satisfaction program-
ming [Tsa93].
Only fragmentary information exists on the performance of combination
strategies in real industrial settings, which is why this aspect is not included
in this comparison.

7.6 Discussion

This section contains a discussion of some aspects of the methods and ex-
periments conducted in this research. These aspects fall into the three cate-
gories (i) formulation and evaluation of research goals, (ii) software process
improvement, and (iii) properties of the proposed methods.

7.6.1 Formulation and Evaluation of Research Goals


Recall from section 4 on page 19 that the aim of this research project was
to investigate whether combination strategies are feasible alternatives to test case
selection methods used in practical testing. The following issues were consid-
ered during the formulation of this aim and the derived research questions.

• There should be a focus on practical aspects of combination strategies.

• Goals should be measurable.

• Goals should be challenging.

• A negative result with respect to the performance of combination
  strategies should not jeopardize the research project.

The use of the words “feasible alternatives” in the aim was a direct result
of these considerations. One implication of these words is that parts of the
research had to be staged in real industrial settings. Another implication is
that it is not assumed in the formulation of the goals that combination strate-
gies will lead to success. In other words, if the performance of combination
strategies had turned out to be distinctly worse than contemporarily used
test case selection methods, this result would not invalidate the research.
The diversity of the state-of-practice in testing, see section 2.3, presented
a challenge in the evaluation of the goals. What would be an improvement
in one organization does not necessarily improve another. To avoid making
claims that are not general, words like “feasible” and “tractable” have been
used in the analysis and evaluation of goals. Although imprecise, the use
of these words is deliberate. These words allow the readers to form their own
opinions based on what they consider reasonable. These words also allow
for future investigations based on defined performance goals.

7.6.2 Software Process Improvements


The low test maturity reported in paper I [GOM06] motivates a discussion
around introducing combination strategies in industrial testing projects. Input
parameter modeling, being to some extent a creative and intellectual activity,
also deserves further discussion. Finally, reuse with respect
to combination strategies and their artifacts is an interesting area to discuss.

Implementation of Combination Strategies


This research indicates that many organizations could benefit from using
combination strategies in their test processes. However, the relatively low
testing maturity of several software producing organizations, reported in pa-
per I [GOM06], suggests that other test process improvements may be at
least as important as improving the test case selection mechanism. For in-
stance, several test managers in the test maturity investigation point in the
direction of improving their metrics collection mechanisms. The underly-
ing reason is that without the means for objectively assessing the current
situation it is very difficult to state a valid case for any improvement.
Test Process Improvement (TPI) [KP99] is a widely known and recognized
test process improvement method in industry. Among other things, it
contains a dependency matrix, which advises the user on the order in
which improvements should be undertaken. It suggests two preconditions for
implementation of structured test case selection methods. These are training
of testers and test managers and infrastructure for testware management,
for example, configuration management tools and structured storage of test
cases and test specifications.
In the light of these observations, it can be suspected that there exist
organizations that are not ready for combination strategies. This is proba-
bly true for full-scale implementations of combination strategies, but small
pilot studies are cheap and do not require more than a few hours training,
hence the conclusion presented in section 7.4.3, that combination strategies
are likely to be interesting to most software testing organizations. A further
advantage with combination strategies, observed in the event handler case
study (section 7.4.1), is the ability to handle changed requirements. During
the course of the study the requirements on the event handler changed, mak-
ing the original IPMs obsolete. The amount of additional work to handle
these changes was very small since it was enough to change the IPMs and
then automatically regenerate the test cases.

Input Parameter Modeling


As noted previously, input parameter modeling is likely to always contain an
element of creativity. Hence, it is very difficult to develop an exact method
for this task. The subjects of the input parameter modeling experiment,
described in paper VI [GDOM06], created several different solutions to the
input parameter modeling task. This can be seen as an indication of the
creativity element. The results are also likely to be explained, in part, by
the fact that the tutorial was the first contact with input parameter modeling
and that the tutorial only contained theory. It is likely that the quality of
the achieved solutions will improve with practice.
Another aspect of input parameter modeling, also worth mentioning,
is the opportunity to exploit constraint handling for purposes other
than excluding apparently infeasible sub-combinations. One can imagine
that some sub-combinations required for full coverage are deemed so unlikely
in operation that a decision is taken to exclude them in the generation step.
An alternative way of handling this situation would be to include these un-
likely sub-combinations but refrain from translation of these into actual test
cases.

Reuse
Reuse of tools and artifacts within an organization is a potential cost and
time saver. This research has not explicitly targeted reuse. However, the
ability to use pre-selected test cases in the input to some combination strate-
gies can be seen as a kind of support for reuse of test cases. This is referred
to by von Mayrhauser et al. [vMSOM96] as test case reuse. A further
development of this thought would be to include a ninth row in Table 6.1 on
page 37, with a strong relationship between test case reuse and time.
Reuse of tools is another topic relevant to this project. As described
earlier, several implementations of combination strategies exist. A limitation
in the reuse of tools is that the avoid conflict handling method requires access
to the source code, which is usually not the case with commercial tools.
Other parts of the combination strategy test process may also be auto-
mated. One example is the translation from abstract inputs into actual test
case inputs. This activity is likely to require customization to the specific
test object, which is why general tools are less likely to appear. Instead, it is probable
that such tools will be developed in-house. For instance, this is the case in
the experiment reported in paper III [GLOA06]. In-house developed tools
simplify reuse but often require more time and effort to develop and maintain
than commercial tools.
Finally, it is worth discussing reuse with respect to IPMs, which is called
domain reuse by von Mayrhauser et al. [vMSOM96]. As described previously,
IPMs are representations of the input space and possibly other properties of
a system under test. Within the combination strategy test process, reuse of
IPMs is exploited in the feedback loop from the evaluation of abstract test
suites in step 4. In a larger context it is also possible to reuse IPMs from one
project to another. For instance, an IPM developed for a new function in one
project may be used with a less thorough coverage criterion for regression
testing purposes in a following project.

7.6.3 Properties of Methods


The introduction of new processes, methods, and tools draws attention to
usage aspects of the products to be launched. Among other things,
usability, scalability, and performance are important aspects to consider.

Usability
The British standard BS7925-1 [BS 98a] defines usability testing as testing
the ease with which users can learn and use a product. This definition
focuses attention on both learning and using a product.
This research has resulted in a refined test process with several activities
that need to be taught. In particular, choosing combination strategies and
developing IPMs for a given test problem are activities that probably work
best if taught. One important reason for this is that these activities both
rely to some extent on the knowledge and experience of the individual tester.
For most test problems there are likely to be several possible solutions for
both activities, with similar performance. Another contributing reason for
the need for training is the lack of detail presented in the existing research
(including this work). A research document is generally not the right place for
detailed usage instructions, which would be required when properly using
these products. The conclusion is that for a roll-out of combination strate-
gies to succeed in industry there is a need for the development of training
material.
This research has only partly focused on the ease of use of the suggested
methods and tools. Among the methods and processes, there is certainly
more to do to increase usability. At this stage, apart from the combination
strategy test process, the methods and guidelines should be seen as research
prototypes rather than finished products.
As far as the available tools (http://www.pairwise.org/) are concerned, it
is impossible to make any general claims about ease of use. There is great
diversity among the tools, and much development is currently taking place.
Scalability
The major benefit of combination strategies is the ability to handle large
test problems. The combination strategy test process contains three mechanisms for
the tester to control the size of the final test suite. First, the choice of
combination strategy will have a great impact on the test suite size. Sec-
ond, the tester can to a large extent influence the size of the IPM, which
will also impact the final test suite size. Third, the process itself contains a
checkpoint where the size of a suggested test suite can be checked and the
previous process steps revisited. Together, these three mechanisms provide
an opportunity for the tester to iteratively find an acceptable compromise
between test quality and time consumption. This means that the
combination strategy test process scales well with the size of the test
problem.
Although the test process itself scales well, the combination strategy
algorithms for higher coverage (2-wise and higher) have scalability issues.
Among the studied combination strategies, IPO [LT98] has the lowest
algorithmic time complexity for 2-wise coverage: O(Vmax^3 * N^2 * log(N)),
where N is the number of parameters in the IPM and Vmax is the number of
values of the parameter with the most values. With this time complexity, at
some point,
the size of the IPM will have a major impact on the test suite generation
time. However, as shown in section 7.4.1, even a prototype implementation
in Perl can handle large IPMs (100 parameters, each with two values). It
is our belief that a majority of test problems in industry can already be
managed in a satisfactory manner with existing algorithms and tools.
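To make the generation step concrete, the following is a deliberately simple
greedy pairwise generator written in Python. It is a sketch, not the IPO
algorithm discussed above, and the IPM used in the usage example (browser,
operating system, database) is invented for illustration.

```python
from itertools import combinations, product

def pairwise_suite(ipm):
    """Greedily build a test suite with 2-wise (pairwise) coverage.

    ipm maps parameter names to lists of values; each returned dict
    is one abstract test case.
    """
    params = list(ipm)
    uncovered = set()                      # all value pairs still to cover
    for p, q in combinations(params, 2):
        for v, w in product(ipm[p], ipm[q]):
            uncovered.add(((p, v), (q, w)))

    suite = []
    while uncovered:
        (p, v), (q, w) = next(iter(uncovered))   # seed from an uncovered pair
        case = {p: v, q: w}
        for r in params:                   # fill the remaining parameters
            if r in case:
                continue
            # Pick the value that covers the most still-uncovered pairs.
            case[r] = max(ipm[r], key=lambda val: sum(
                ((r, val), (s, case[s])) in uncovered or
                ((s, case[s]), (r, val)) in uncovered
                for s in case))
        for a, b in combinations(params, 2):
            uncovered.discard(((a, case[a]), (b, case[b])))
        suite.append(case)
    return suite

# Example IPM (all names invented): 2 x 3 x 2 = 12 full combinations,
# while pairwise coverage needs at least 3 * 2 = 6 test cases.
ipm = {"browser": ["firefox", "ie"],
       "os": ["windows", "mac", "linux"],
       "db": ["postgres", "mysql"]}
suite = pairwise_suite(ipm)
```

Every generated case is a complete, distinct combination, so the suite can
never exceed the full cross product; in practice the greedy fill keeps it
close to the lower bound given by the two largest parameter domains.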

Performance
Performance of combination strategies can be assessed in several ways. The
size of the generated test suite is obviously an important performance indi-
cator since it directly affects the efficiency of the testing. Equally important
are the types and number of faults found, which together with the achieved
coverage are measures of the effectiveness of the process. Coverage and types
of faults found for a given combination strategy are relatively simple to as-
sess. In both cases this enables comparisons between different combination
strategies but since both coverage and types of faults are defined based on
properties of combination strategies it is difficult to compare these metrics
with similar metrics from other test case selection methods. The number of
faults found makes comparisons easier but due to representativity issues of
the existing faults no general conclusions can be drawn from these results. In
summary, comparing the performance of different test case selection methods
is difficult.
Chapter 8

Conclusions

This chapter provides a summary of this thesis and describes its contribu-
tions to this area of research. This chapter also outlines some directions of
future work.

8.1 Summary
The aim of this thesis is to investigate whether combination strategies are feasible
alternatives to test case selection methods used in practical testing. Com-
bination strategies are test case selection methods designed to handle com-
binatorial explosion. Combinatorial explosion can occur in testing when a
test object can be described by a number of parameters, each with many
possible values. Every complete combination of parameter values is a po-
tential test case. Due to its generality, combinatorial explosion is a common
problem in practical testing. The effect of combinatorial explosion is that it
is infeasible to test every possible combination of parameter values. Com-
bination strategies handle the combinatorial explosion by identification of
a tractable subset of all combinations. This selection is based on coverage,
which enables objective evaluation of the selected combinations.
To investigate the tractability of combination strategies in practical test-
ing, this research formulates a test process custom-designed for combination
strategies. The combination strategy test process is a refinement of a generic
test process containing four activities: (i) planning, (ii) specification, (iii)
execution, and (iv) test stop decision. Most of the tailoring of the generic
test process is focused on the specification activity. Within the scope of test
specification, the custom-designed test process contains activities for (a) se-
lection of suitable combination strategies, (b) input parameter modeling, and
(c) handling conflicting values in the input space of the test object. This
research suggests guidelines or methods for each of these three activities.

The custom-designed test process, including its guidelines and methods,
is the result of a series of studies and experiments. Existing combination
strategies are surveyed in paper II [GOA05] and a subset of these are com-
pared in an experiment presented in paper III [GLOA06]. These two papers
form the basis for the activity of selecting a suitable combination strategy
for a given test problem.

A method for input parameter modeling is presented and partly validated
in paper V [GO07]. Input parameter modeling is the process of converting
requirements on the test object into a model suitable for combination strate-
gies.

Paper IV [GOM07] describes an experiment that evaluates seven
alternative ways of handling conflicts in the IPM. The results
of the experiment show that the two best ways of handling conflicts are
(i) to avoid the selection of combinations that contain conflicts and (ii) to
post-process the set of selected combinations. In this post processing step,
combinations containing conflicts are replaced by alternative conflict-free
combinations such that the original coverage is maintained.
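As an illustration of the post-processing idea (a hedged sketch, not the
exact algorithm evaluated in paper IV), the following Python fragment drops
conflicting test cases and then restores the pairwise coverage that was
lost. The conflict predicate and all parameter names are invented, and the
sketch assumes every feasible pair can be completed into at least one
conflict-free test case.

```python
from itertools import combinations

def pairs_of(case, params):
    """All (parameter, value) pairs covered by one test case."""
    return {((p, case[p]), (q, case[q])) for p, q in combinations(params, 2)}

def post_process(suite, ipm, conflicts):
    """Remove conflicting test cases, then restore lost pairwise coverage.

    conflicts is a predicate over (possibly partial) cases; True means
    the combination is infeasible.
    """
    params = list(ipm)
    keep = [case for case in suite if not conflicts(case)]
    covered = set()
    for case in keep:
        covered |= pairs_of(case, params)
    lost = set()
    for case in suite:
        lost |= pairs_of(case, params)
    lost -= covered
    # Pairs that are conflicting by themselves can never be covered.
    lost = {pair for pair in lost if not conflicts(dict(pair))}
    for (p, v), (q, w) in lost:
        if ((p, v), (q, w)) in covered:
            continue                   # already repaired by an earlier case
        case = {p: v, q: w}
        for r in params:               # complete with conflict-free values
            if r not in case:
                case[r] = next(val for val in ipm[r]
                               if not conflicts({**case, r: val}))
        keep.append(case)
        covered |= pairs_of(case, params)
    return keep
```

A replacement case is built around each lost pair and completed greedily,
so the repaired suite covers every feasible pair the original suite covered.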

Finally, this research project contains a proof-of-concept study, which
validates the combination strategy test process in three industrial settings.
This study, documented in paper VI [GDOM06], shows that combination
strategies can be used to identify tractable test suites for large and complex
test problems. Further, the study shows that combination strategies can
be more effective than other, currently used test case selection methods.
Finally, the study indicates that testers with proper testing background can
be trained to use combination strategies without significant effort.

The overall conclusion from this research project is that combination
strategies are feasible alternatives to test case selection methods used in
practical testing.
8.2 Contributions
Several aspects of using combination strategies in software testing were rel-
atively unexplored prior to this research. In the light of the related work,
presented in section 7.5, the following paragraphs describe the most impor-
tant contributions of this work.
An important contribution of this work is the set of guidelines for the selection
of suitable combination strategies for a given test problem. These are based
on previous explorations of the properties of various combination strategies.
As described earlier, none of the previous work has elaborated on how to
select combination strategies given the special circumstances of specific de-
velopment projects.
Another equally important contribution of this research is the formu-
lation of an input parameter modeling method custom-designed for com-
bination strategies. To the author’s knowledge, no other input parameter
modeling method specific to combination strategies exists. Previously sug-
gested methods for identification of parameters and parameter values are
either completely general or specific to other methods.
A third contribution, related to the two previous ones, is the advice on how
to handle conflicts in the IPM effectively and efficiently. No previous
evaluation of conflict handling methods exists.
The three above-mentioned contributions are combined with previous
work into the combination strategy test process, which is a fourth contribution
of this research project.
The state-of-practice investigation included in this research is another
type of contribution. The results, which for instance show surprisingly low
usage of structured test case selection methods, are important as such. The
most important conclusion from the state-of-practice study is that organi-
zations need to focus more on structured metrics programs to be able to
establish the necessary foundation for change, not only in testing.
Further, the results from this study confirm results from several previ-
ously performed studies [GSM04, NMR+04, RAH03]. Confirming previously
achieved results is in itself a valuable contribution.
Just like the state-of-practice study, the case studies conducted within
this research project have resulted in several contributions. First, the results
themselves: combination strategies have been shown to be feasible in
industrial contexts. Second, the descriptions of these case studies
enable future replication of these studies. From an analytical generaliza-
tion perspective, this is an important contribution since the representativity
problem makes it difficult to base theory-building on statistical generaliza-
tion. Third, combination strategies are no longer unexplored territory in
industry. The successful results from applying combination strategies in in-
dustrial projects may be used as an argument for staging more extensive
case studies, which previously may have been considered too risky.
The knowledge transfer occurring naturally during research at industry
sites also deserves to be mentioned explicitly as a contribution of this research
project. For instance, one of the organizations in which a case study was
conducted now uses combination strategies as part of its testing strategy.
Knowledge transfer to industry has also occurred in other ways, for instance
by conducting seminars and training sessions.

8.3 Future Work


The old truism “the more you know, the more you realize you don’t know”
is certainly true in this case. It is easy to identify a large number of poten-
tial research directions. In the paragraphs to follow, a number of interesting
questions and research directions are described. These are grouped into four
main themes: (i) combination strategies, (ii) input parameter modeling, (iii)
automation, and (iv) general research on test case selection methods.

Combination Strategies
This work shows that combination strategies can be applied to a large num-
ber of test problems in industry. Although this research contains some com-
parisons with other test case selection methods, little is still known about
the actual performance of combination strategies applied to different testing
problems in real industrial settings. Further studies of the efficiency and
effectiveness of combination strategies are desirable to determine in which
cases combination strategies can make a difference. This is especially true
given that the validation study of this research project was divided into
three different parts.
The explicit industrial perspective of this research project has resulted
in a shift in focus towards the less complex (and cheaper) coverage criteria,
such as 1-wise, 2-wise, and base choice. In many industrial settings, time
to market is more important than quality or cost. This drives organizations
towards finding fast solutions to their problems, which means that the less
complex coverage criteria are of special interest. Moving up in the
subsumption hierarchy (see Figure 7.1) will certainly increase the test quality
but at the same time increase the cost of testing. At some point, there are
likely to be diminishing returns, that is, the increase in test time is higher
than the increase in quality. The study in paper III can be used as an example
of this. Of the 120 detected faults, 95 are guaranteed to be discovered by
around 180 test cases if 2-wise testing is used. Moving to 3-wise coverage
would detect all 120 faults but would require at least 1022 test cases. (The
minimum number of test cases required for 3-wise coverage is the product
of the sizes of the three largest parameters.) Further research on t-wise
coverage, where t is larger than two, is certainly needed.
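The lower-bound argument in the parenthesis above generalizes to any t:
every combination of values over the t parameters with the largest domains
must appear in some test case, and a single test case can contribute at most
one such combination. A minimal sketch, using invented domain sizes rather
than the actual parameters of paper III:

```python
from math import prod

def t_wise_lower_bound(domain_sizes, t):
    """Minimum number of test cases needed for t-wise coverage:
    the product of the t largest parameter domain sizes."""
    return prod(sorted(domain_sizes, reverse=True)[:t])

# Hypothetical IPM with parameters of 2, 3, 4, and 5 values:
t_wise_lower_bound([2, 3, 4, 5], 2)  # -> 5 * 4 = 20
t_wise_lower_bound([2, 3, 4, 5], 3)  # -> 5 * 4 * 3 = 60
```

The gap between these bounds illustrates how quickly the cost grows when
moving up one level in the subsumption hierarchy.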
There is also room for more combination strategies, in particular
combination strategies that support variable-strength coverage, that is,
different levels of coverage for different IPM parameters and IPM parameter
values within the same IPM. A related area of interest is combination strategies
that build their selection strategies more on semantic information. In both
cases, these properties can be used to improve the quality of test cases, for
example by more closely mimicking actual usage.
Combination strategies may help in other areas of testing, for instance
in testing real-time systems. One important property of a real-time system
is timeliness. Timeliness is the ability of the system to deliver a result at or
within a specific time. In timeliness testing different execution orders play
a crucial part [Nil06]. It is possible that an IPM can be used to express
different execution order patterns. With a similar approach, IPMs may be
used to express orderings between feature invocations and hence be useful
for feature interaction testing.

Input Parameter Modeling


The IPM plays a crucial role in determining both efficiency and effectiveness
of the testing. This research presents a general method for input parameter
modeling. This method is based, to a large extent, on the current knowledge
of the behavior of combination strategies. As this knowledge is likely to
increase with an increased use of combination strategies, it is also likely that
there is room for new and refined methods for input parameter modeling.

Automation
One of the key features of combination strategies is that semi-formal models
of the test object input, that is, IPMs, are created. Automatic generation
of the abstract test cases is already implemented. One of the strengths of
combination strategies is that it should be possible to increase the level of
automation.
First, for some applications it is possible to automatically translate ab-
stract test cases into actual test case inputs. An example of this was used
in the experiment presented in paper III [GLOA06]. Translation into actual
test cases requires a mapping of information from IPM parameter values to
values in the input space of the test object. What this mapping looks like
and when it is applicable are largely unexplored.
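To illustrate what such a mapping might look like, here is a minimal sketch.
The lookup table, parameter names, and values are all hypothetical; in
practice, as noted above, the mapping is specific to the test object.

```python
# Hypothetical lookup table from abstract IPM values to concrete inputs;
# every name and value here is invented for illustration.
CONCRETE_VALUES = {
    ("file_size", "empty"): b"",
    ("file_size", "small"): b"x" * 100,
    ("file_size", "large"): b"x" * 100_000,
    ("encoding", "ascii"): "ascii",
    ("encoding", "utf8"): "utf-8",
}

def concretize(abstract_case):
    """Translate one abstract test case (parameter -> IPM value)
    into concrete test inputs via the lookup table."""
    return {param: CONCRETE_VALUES[(param, value)]
            for param, value in abstract_case.items()}

# One abstract test case produced by a combination strategy:
concrete = concretize({"file_size": "small", "encoding": "utf8"})
```

A table like this keeps the IPM small and abstract while still letting the
translation into executable inputs be automated.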
A second topic for potential automation is the generation of expected test case results. For this
to work it is necessary to include behavioral information in the IPM. In the
general case this may be too difficult or expensive but in some cases it may
be possible. For instance, it may be possible to add high-level behavioral
information in terms of normal or abnormal termination. Such a scheme can
be useful for robustness testing.
A third topic for potential automation is to amend the combination strat-
egy test process with a third evaluation point with corresponding feedback
loops. The idea is to divide step 5 (test case generation) into two steps,
5a and 5b. In step 5a, abstract test cases are transformed into real test
case inputs. These are then automatically executed while code coverage is
monitored. If coverage is sufficient, then step 5b is invoked, where expected
results are added to the test cases and the ordinary progress through the
process continues. Otherwise, feedback loops return the tester to the first
two steps of the process in order to increase the number of test cases and
thus raise the code coverage.

Research on Test Case Selection Methods


All of these described directions for future work on combination strategies
will at one point or another require evaluation of suggested enhancements
and solutions. Both the research community and representatives from indus-
try want, as far as possible, results from such evaluations to be generalizable.
Hence, there is a great need for studies with respect to representativity. This
need spans from general areas such as types of test objects and distribution
of types of faults to combination strategy specific areas such as types of
conflicts in IPMs.
Generalization of results may not be realistic in every situation. In cases
where generalization is difficult to reach, much is gained if the research is
conducted and documented in such a way that experiments can be repeated
with controlled changes. This enables researchers to develop confidence in
the comparison of results. A repository of described test problems to be
used for research on test case selection methods would greatly help in this
area.

All in all, it can be concluded that the research on applying combination
strategies to testing has only just begun.
References

[AAC+94] T. Anderson, A. Avizienis, W. C. Carter, A. Costes, F. Christian,
Y. Koga, H. Kopetz, J. H. Lala, J. C. Laprie, J. F. Meyer, B. Randell,
and A. S. Robinson. Dependability: Basic Concepts and Terminology.
Technical report, IFIP WG 10.4, 1994.
[AO94] P. E. Ammann and A. J. Offutt. Using formal methods to derive test
frames in category-partition testing. In Proceedings of Ninth Annual
Conference on Computer Assurance (COMPASS’94), Gaithersburg, MD, pages
69–80. IEEE Computer Society Press, Jun. 1994.
[Arc92] R. D. Archibald. Managing High-Technology Programs and Projects.
John Wiley and Sons, Inc, 1992.
[Bei90] B. Beizer. Software Testing Techniques. Van Nostrand Reinhold,
1990.
[BJE94] K. Burroughs, A. Jain, and R. L. Erickson. Improved quality of
protocol testing through techniques of experimental design. In Proceedings
of the IEEE International Conference on Communications (Supercomm/ICC’94),
May 1-5, New Orleans, Louisiana, USA, pages 745–752. IEEE, May 1994.
[Boe] B. W. Boehm. Some Information Processing Implications of Air Force
Space Missions: 1970-1980. Memorandum RM-6213-PR, The Rand Corp., Santa
Monica, California.
[BR88] V. R. Basili and H. D. Rombach. The TAME project: Towards
improvement-oriented software environments. IEEE Transactions on Software
Engineering, SE-14(6):758–773, Jun. 1988.
[Bro75] F. P. Brooks. The Mythical Man-Month. Addison-Wesley, Reading,
MA, 1975.
[BS 98a] British Computer Society BS 7925-1. Glossary of Terms used
in Software Testing, 1998.
[BS 98b] British Computer Society BS 7925-2. Standard for Component
Testing, 1998.
[BS87] V. R. Basili and R. W. Selby. Comparing the Effectiveness of
Software Testing Strategies. IEEE Transactions on Software
Engineering, SE-13(12):1278–1296, Dec. 1987.
[BY98] K. Burr and W. Young. Combinatorial test techniques: Table-
based automation, test generation and code coverage. In Pro-
ceedings of the International Conference on Software Testing,
Analysis, and Review (STAR’98), San Diego, CA, USA, Oct.
26-28, 1998, pages 26–28. Software Quality Engineering, 1998.
[CC79] T. D. Cook and D. T. Campbell. Quasi-Experimentation De-
sign and Analysis Issues for Field Settings. Houghton Mifflin
Company, 1979.
[CDFP97] D. M. Cohen, S. R. Dalal, M. L. Fredman, and G. C. Patton.
The AETG System: An Approach to Testing Based on Combi-
natorial Design. IEEE Transactions on Software Engineering,
23(7):437–444, Jul. 1997.
[CDKP94] D. M. Cohen, S. R. Dalal, A. Kajla, and G. C. Patton. The
automatic efficient test generator (AETG) system. In Proceed-
ings of Fifth International Symposium on Software Reliability
Engineering (ISSRE’94), Los Alamitos, California, USA, Nov.
6-9, 1994, pages 303–309. IEEE Computer Society, 1994.
[CDPP96] D. M. Cohen, S. R. Dalal, J. Parelius, and G. C. Patton. The
Combinatorial Design Approach to Automatic Test Genera-
tion. IEEE Software, 13(5):83–89, Sep. 1996.
[CGMC03] M. B. Cohen, P. B. Gibbons, W. B. Mugridge, and C. J. Colbourn.
Constructing test cases for interaction testing. In Pro-
ceedings of 25th International Conference on Software Engi-
neering, (ICSE’03), Portland, Oregon, USA, May 3-10, 2003,
pages 38–48. IEEE Computer Society, May 2003.
[CM94] J. J. Chilenski and S. P. Miller. Applicability of modified
condition/decision coverage to software testing. Software Engineering
Journal, 9(5):193–200, Sep. 1994.
[CTPT05] T. Y. Chen, S. F. Tang, P. L. Poon, and T. H. Tse. Identification
of Categories and Choices in Activity Diagrams. In Proceedings of Fifth
International Conference on Quality Software (QSIC 2005), Sep. 19-20,
2005, Melbourne, Australia, pages 55–63. IEEE Computer Society, Sep. 2005.
[Dah05] Å. G. Dahlstedt. Guidelines regarding requirements engineering
practices in order to facilitate system testing. In Proceedings of
Eleventh International Workshop on Requirements Engineering: Foundation
for Software Quality, Porto, Portugal, Jun. 13-14, 2005, pages 179–191.
Essener Informatik Beitraege, 2005.
[DES+97] I. S. Dunietz, W. K. Ehrlich, B. D. Szablak, C. L. Mallows, and
A. Iannino. Applying design of experiments to software testing. In
Proceedings of 19th International Conference on Software Engineering
(ICSE’97), Boston, MA, USA, May 1997, pages 205–215. ACM, May 1997.
[Deu87] M. S. Deutsch. Verification and validation. In Jensen and Tonies,
editors, Software Engineering. Prentice Hall, 1987.
[DHS02] N. Daley, D. Hoffman, and P. Strooper. A framework for table
driven testing of Java classes. Software - Practice and Experience
(SP&E), 32(5):465–493, Apr. 2002.
[DJK+99] S. R. Dalal, A. Jain, N. Karunanithi, J. M. Leaton, C. M. Lott,
G. C. Patton, and B. M. Horowitz. Model-based testing in practice. In
Proceedings of 21st International Conference on Software Engineering
(ICSE’99), Los Angeles, CA, USA, May 16-22, 1999, pages 285–294. ACM
Press, 1999.
[DM98] S. Dalal and C. L. Mallows. Factor-Covering Designs for Testing
Software. Technometrics, 50(3):234–243, Aug. 1998.
[GDOM06] M. Grindal, Å. G. Dahlstedt, A. J. Offutt, and J. Mellin. Using
Combination Strategies for Software Testing in Practice - A
Proof-of-Concept. Technical Report HS-IKI-TR-06-010, School of Humanities
and Informatics, University of Skövde, 2006.

[GG93] M. Grochtmann and K. Grimm. Classification Trees for Partition
Testing. Journal of Software Testing, Verification, and Reliability,
3(2):63–82, 1993.
[GLOA06] M. Grindal, B. Lindström, A. J. Offutt, and S. F. Andler. An
evaluation of combination strategies for test case selection. Empirical
Software Engineering, 11(4):583–611, Dec. 2006.
[GO07] M. Grindal and A. J. Offutt. Input parameter modeling for
combination strategies. In Proceedings of the IASTED International
Conference on Software Engineering (SE2007), Innsbruck, Austria, Feb.
13-15, 2007.
[GOA05] M. Grindal, A. J. Offutt, and S. F. Andler. Combination testing
strategies: A survey. Software Testing, Verification, and Reliability,
15(3):167–199, Sep. 2005.
[GOM06] M. Grindal, A. J. Offutt, and J. Mellin. On the testing maturity
of software producing organizations. In Proceedings of First International
Conference Testing: Academia & Industry Conference Practice And Research
Techniques (TAIC PART 2006), Cumberland Lodge, Windsor, UK, Aug. 29-31,
2006, pages 171–180, Aug. 2006.
[GOM07] M. Grindal, A. J. Offutt, and J. Mellin. Managing conflicts when
using combination strategies to test software. In Proceedings of 18th
Australian Conference on Software Engineering (ASWEC2007), Melbourne,
Australia, Apr. 10-13, 2007.
[Gri04] M. Grindal. Thesis Proposal: Evaluation of Combination Strategies
for Practical Testing. Technical Report HS-IKI-TR-04-003, School of
Humanities and Informatics, University of Skövde, 2004.
[GSM04] A. M. Geras, M. R. Smith, and J. Miller. A survey of software
testing practices in Alberta. Canadian Journal of Electrical and Computer
Engineering, 29(3):183–191, 2004.
[Het76] W. C. Hetzel. An Experimental Analysis of Program Verification
Methods. PhD thesis, University of North Carolina, Chapel Hill, 1976.
[Hul00] J. Huller. Reducing time to market with combinatorial design
method testing. In Proceedings of Tenth Annual International Council on
Systems Engineering (INCOSE’00), Minneapolis, MN, USA, Jul. 16-20, 2000.
[KKS98] N. P. Kropp, P. J. Koopman, and D. P. Siewiorek. Automated
robustness testing of off-the-shelf software components. In Proceedings of
FTCS’98: Fault Tolerant Computing Symposium, Jun. 23-25, 1998, Munich,
Germany, pages 230–239. IEEE, 1998.
[KP99] T. Koomen and M. Pol. Test Process Improvement - A practical
step-by-step guide to structured testing. ACM Press, 1999.
[LT98] Y. Lei and K. C. Tai. In-parameter-order: A test generation
strategy for pair-wise testing. In Proceedings of Third IEEE High
Assurance Systems Engineering Symposium, pages 254–261. IEEE, Nov. 1998.
[LT01] Y. Lei and K. C. Tai. A Test Generation Strategy for Pairwise
Testing. Technical Report TR-2001-03, Department of Computer Science,
North Carolina State University, Raleigh, 2001.
[Man85] R. Mandl. Orthogonal Latin Squares: An application of experiment
design to compiler testing. Communications of the ACM, 28(10):1054–1058,
Oct. 1985.
[Mye78] G. J. Myers. A Controlled Experiment in Program Testing and Code
Walkthroughs/Inspection. Communications of the ACM, 21(9):760–768, Sep.
1978.
[Mye79] G. J. Myers. The Art of Software Testing. John Wiley and Sons,
1979.
[Nil06] R. Nilsson. A Mutation-based Framework for Automated Testing of
Timeliness. PhD thesis, Department of Computer and Information Science,
Linköping University, 2006.
[NMR+04] S. P. Ng, T. Murnane, K. Reed, D. Grant, and T. Y. Chen. A
preliminary survey on software testing practices in Australia. In
Proceedings of the 2004 Australian Software Engineering Conference
(ASWEC04), Apr. 13-16, Melbourne, Australia, pages 116–125. IEEE, Apr.
2004.
[OB88] T. J. Ostrand and M. J. Balcer. The Category-Partition Method for
Specifying and Generating Functional Tests. Communications of the ACM,
31(6):676–686, Jun. 1988.
[POC93] P. Piwowarski, M. Ohba, and J. Caruso. Coverage measure
experience during function test. In Proceedings of 14th International
Conference on Software Engineering (ICSE’93), Los Alamitos, CA, USA,
1993, pages 287–301. ACM, May 1993.
[RAH03] P. Runeson, C. Andersson, and M. Höst. Test Processes in software
product evolution - a qualitative survey on the state of practice.
Journal of Software Maintenance and Evolution: Research and Practice,
15(1):41–59, Jan.-Feb. 2003.
[Rei97] S. Reid. An empirical analysis of equivalence partitioning,
boundary value analysis and random testing. In Proceedings of Fourth
International Software Metrics Symposium (METRICS’97), Albuquerque, New
Mexico, USA, Nov. 5-7, 1997, pages 64–73. IEEE, 1997.
[Rob93] C. Robson. Real World Research. Blackwell, Oxford, UK, 1993.
[RW85] S. Rapps and E. J. Weyuker. Selecting Software Test Data Using
Data Flow Information. IEEE Transactions on Software Engineering,
SE-11(4):367–375, Apr. 1985.
[SCSK02] S. S. So, S. D. Cha, T. J. Shimeall, and Y. R. Kwon. An
empirical evaluation of six methods to detect faults in software.
Software Testing, Verification and Reliability, 12(3):155–171, Sep. 2002.
[STK04] T. Shiba, T. Tsuchiya, and T. Kikuno. Using artificial life
techniques to generate test cases for combinatorial testing. In
Proceedings of 28th Annual International Computer Software and
Applications Conference (COMPSAC’04), Hong Kong, China, Sep. 28-30, 2004,
pages 72–77. IEEE Computer Society, 2004.

[Tsa93] E. P. K. Tsang. Foundations of Constraint Satisfaction. Aca-


demic Press, London and San Diego, 1993.

[VL00] O. Vinter and S. Lauesen. Analyzing Requirements Bugs.


STQE Magazine, 2(6):14–18, Nov./Dec. 2000.

[vMSOM96] A. v. Mayrhauser, M. Shumway, P. Ocken, and R. Mraz. On


Domain Models for System Testing. Proceedings of the Fourth
International Conference on Software Reuse (ICSR 96), Or-
lando Florida, Apr. 23-26, 1996, pages 114–123, Apr. 1996.

[Wil00] A. W. Williams. Determination of test configurations for pair-


wise interaction coverage. In Proceedings of 13th International
Conference on the Testing of Communicating Systems (Test-
Com 2000), Ottawa, Canada, Aug., 2000, pages 59–74, Aug.
2000.

[WP96] A. W. Williams and R. L. Probert. A practical strategy for test-


ing pair-wise coverage of network interfaces. In Proceedings of
Seventh International Symposium on Software Reliability En-
gineering (ISSRE96), White Plains, New York, USA, Oct. 30
- Nov. 2, 1996, pages 246–254. IEEE Computer Society Press,
Nov. 1996.

[WRBM97] M. Wood, M. Roper, A. Brooks, and J. Miller. Comparing and combining software defect detection techniques: A replicated study. In Proceedings of Sixth European Software Engineering Conference (ESEC/FSE 97), pages 262–277. Springer Verlag, Lecture Notes in Computer Science Nr. 1013, Sep. 1997.

[WRH+00] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén. Experimentation in Software Engineering: An Introduction. Kluwer Academic Press, 2000.

[YCP04] C. Yilmaz, M. B. Cohen, and A. Porter. Covering arrays for efficient fault characterization in complex configuration spaces. In Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2004), Boston, pages 45–54. ACM Software Engineering Notes, Jul. 2004.

[Yin94] R. K. Yin. Case Study Research: Design and Methods. Sage Publications, Thousand Oaks, London, New Delhi, 1994.

[YLDM97] H. Yin, Z. Lebne-Dengel, and Y. K. Malaiya. Automatic Test Generation using Checkpoint Encoding and Antirandom Testing. Technical Report CS-97-116, Colorado State University, 1997.

[You75] E. Yourdon. Techniques of Program Structure and Design. Prentice Hall Inc., 1975.
Department of Computer and Information Science
Linköpings universitet
Dissertations
Linköping Studies in Science and Technology

No 1073 Mats Grindal: Handling Combinatorial Explosion in Software Testing, 2007, ISBN 978-91-85715-74-9.
