
Software testing

Mika Katara, Matti Vuori and Antti Jääskeläinen


Tampere University of Technology, Department of Pervasive Computing
18.8.2015

Software testing, 2015 1(504)


Ohjelmistotekniikka
Contents 1/10
Foreword 12
1. Goals of the course 13
1.1 The approach to testing on this course 16
1.2 What is not included in this course? 17
1.3 Web pages on testing 18
2. Introduction 19
2.1 What is testing? Different points of view 20
2.2 Always a reference: Requirements 24
2.3 Sources of errors 31
2.4 In what activities are errors caused? 34
2.5 The concept of test type 35
2.6 Supporting business 36
2.7 Traditional testing process 40
2.8 Everything cannot be tested 41

Ohjelmistotekniikka
Contents 2/10
2.9 Schools of testing 44
2.10 Testing folklore 50
2.11 Testing is demanding work 53
2.12 Testing is team work 54
2.13 What can be required from testing? 55
2.14 Important terms 56
3. Testing based on test cases 62
3.1 Test case in a nutshell 63
3.2 Test case structure and content 71
3.3 Other notes about test cases 74
4. Testing as a part of a software engineering process 75
4.1 Levels of testing – V-model of testing 79
4.2 Unit testing 84
4.3 Test-Driven Development 104

Ohjelmistotekniikka
Contents 3/10
4.4 Techniques for designing test cases 108
4.5 Equivalence partitioning method 109
4.6 Boundary value analysis 114
4.7 Testing combinations 119
4.8 Fuzz testing 126
4.9 Low level integration testing 129
4.10 Continuous Delivery 144
4.11 System testing 147
4.12 System integration testing 156
4.14 Testing in agile development 170
4.15 Acceptance testing 179
4.16 Agile acceptance testing: ATDD 187
4.17 Alpha and Beta tests 190
4.18 Are all test levels and phases needed? 191

Ohjelmistotekniikka
Contents 4/10
4.19 When to move to the next level? 195
5. Exploratory testing 203
5.1 Exploratory testing in nutshell 204
5.2 Exploratory testing: Examples 209
5.3 Exploratory testing is based on strategies and knowledge 211
5.4 Starting point for a session 215
5.5 Documenting the testing? 219
5.6 Exploratory testing in practice 221
5.7 Fast test planning for new features 223
5.8 Preparation 224
5.9 Agile testing in non-agile project 225
6. Risk analysis and prioritization of tests 237
6.1 Basis in understanding usage 239
6.2 Flow of risk analysis 240

Ohjelmistotekniikka
Contents 5/10
7. On documentation 253
7.1 High level test plan documentation 254
7.2 Test planning process 255
7.3 IEEE 829 guides planning 260
7.4 Low level: documentation of test cases 263
7.5 Lightly in small projects 264
7.6 Test reports 265
7.7 Good test report 266
7.8 IEEE 829-2008 – In agile development, less documentation is required 271
8. Monitoring testing 272
8.1 Test management software in a nutshell 273
8.2 Monitoring error situation 274
8.3 Test report 275
9. More on methods and techniques of testing 276

Ohjelmistotekniikka
Contents 6/10
9.1 Is the source code used in testing or not? 280
9.2 Who does the testing? 284
9.5 What kinds of things are tested? 287
9.8 What kinds of problems are looked for? 290
9.11 How does the program need to be used? 293
9.12 Installation testing 294
9.13 Performance and load testing 295
9.14 Robustness testing 302
9.15 Regression testing 303
9.16 Smoke testing 306
9.17 How do we know whether a test run was successful or not? 308
9.18 Heuristic consistency 309
10. Error reporting 311
10.1 An error report shares information 312

Ohjelmistotekniikka
Contents 7/10
10.2 Can the error be repeated? 321
10.3 Recipe for a good error report 322
10.4 Error databases 327
11. Measuring software 329
11.1 Metrics 330
11.2 Non-functional testing 344
11.3 You get what you measure! – How metrics are fooled 345
11.4 System level coverage metrics 346
11.5 Code coverage metrics 348
11.6 Complexity metrics 359
11.8 Error seeding 361
12. Automation and tools 364
12.1 Test automation as a whole 366
12.2 What is test automation 367

Ohjelmistotekniikka
Contents 8/10
Different ways to control the SUT (simplified) 368
12.3 Promises of test automation 369
12.4 Common problems of test automation 370
12.5 Limits of automation 371
12.6 Testability of software 374
12.7 Test automation – different kind of software engineering 379
12.8 Approaches to automation 383
12.9 Planning of automation 393
12.10 Well-known tools 397
12.11 Automation project 398
12.12 Choosing a tool 400
12.13 Model-based testing 405
13. Testing of information security 426
13.1 What is included in information security? 428

Ohjelmistotekniikka
Contents 9/10
13.2 Testing of information security is important 430
13.3 Targets of information security testing 432
13.4 Nature of testing information security 433
13.5 Based on a risk analysis 434
13.6 Lots of guidelines 435
13.7 OWASP – Security of web pages 436
13.8 OWASP – Mobile security 439
13.9 Threats to PC applications 440
14. Techniques of static testing 458
14.1 Inspection 459
14.2 Review 468
14.3 Walkthrough 471
14.4 Static analysis of code 473
15. Improving testing 478

Ohjelmistotekniikka
Contents 10/10
15.1 Continuous improvement 480
15.2 Improvement project 481
15.3 Key areas of testing process – an example with TPI 487
15.4 Improvement to meet the requirements of standards 494
15.5 Tester certification – ISTQB 496
16. Closing words of the course 498
Literature 502

Ohjelmistotekniikka
Foreword
These slides have been created during the years 2003 – 2015, over which
the software testing course has been arranged at TUT in its current
form. Sources used include the books listed at the end of the slide set,
training materials by Maaret Pyhäjärvi and Erkki Pöyhönen, and related
course material from the University of Helsinki. Other sources are cited
in the slides as needed.

The slide set has been updated every year to keep it up to date, taking
into account the competencies required of MSc’s in the near future.
The OTE mark in some of the slides indicates that the subject has
been covered in the prerequisite course Ohjelmoinnin tekniikat
(programming techniques) and will be dealt with more briefly.

Software testing, 2015 12(504)


Ohjelmistotekniikka
1. Goals of the course 1/3
• The goal of the course is to learn the basics of software
testing and to form a good basis for learning more
– More advanced topics will be dealt with as necessary
• The lectures seek to convey a wide view of testing, including
much more than designing and running tests
• The point of the course project is to learn by doing
– Usually students have considered it a good experience
• The breadth of the point of view on this course also shows in
considering the needs of different interest groups, including
e.g. company management and end users

Software testing, 2015 13(504)


Ohjelmistotekniikka
Goals of the course 2/3
• Even if an MSc doesn’t perform testing themselves, they will encounter
the subject in many different tasks related to software
development, as a buyer and a seller, and at all levels of organizations
• In some cases the MSc may be the only person in the organization
with some knowledge of the subject
• Testing skills are nowadays needed in many business processes
• Good organizations have people who are experts in their own fields
but also understand what is going on elsewhere
– People performing unit testing are usually more technologically oriented
and those performing system testing are closer to the end users
– Testing know-how can also be applied to many business processes

Software testing, 2015 14(504)


Ohjelmistotekniikka
Goals of the course 3/3
• We are not in a vocational school.
– We do not practise using the latest fashionable tools, but seek
to learn general principles.
– An MSc should be able to apply their skills in different
situations and, if necessary, develop testing methods best
suited for their organization.

• Different jobs often have different views of testing. Life in
organizations is cooperation. That’s why these points of view
need to be understood.

Software testing, 2015 15(504)


Ohjelmistotekniikka
1.1 The approach to testing on this
course
• We will approach testing objectively and highlight different
methods, tools, and requirements against which the software
is tested.
• We will examine testing as part of the software development
process that defines what is tested at which phase
– For simplicity some of the rich details of reality have been
blurred out, to make it easier to see the overview

Software testing, 2015 16(504)


Ohjelmistotekniikka
1.2 What is not included in this course?
• The world of testing is vast and a single course cannot cover
all of it
• The special characteristics of specific domains
• Embedded systems, safety-critical systems, SOA, etc.
• Testing management in large and distributed projects
• Organization, roles and teamwork
• Specific models and practices of project work
• Some types of testing such as usability testing, localization
testing, A/B testing etc.
• The entire range of testing tools

Software testing, 2015 17(504)


Ohjelmistotekniikka
1.3 Web pages on testing
• There is a lot of information on testing on the internet.
• Links:
– http://mattivuori.net/extra/www_testing_links.htm
– The link list also includes references to free net publications on
testing.
• Wikipedia is a good source if you need to find out what some
specialized testing term means.
– It has good presentations of many subjects covered only briefly
in these slides.
• And of course Google.

Software testing, 2015 18(504)


Ohjelmistotekniikka
2. Introduction

Let’s begin the course by defining what we’re studying:
what testing is and what challenges are related to it.

Software testing, 2015 19(504)


Ohjelmistotekniikka
OTE
2.1 What is testing? Different points of view

• Some views on what testing is:


– Testing is a process where the program is executed for the
purpose of finding errors in it (Myers)
– Testing is measuring the quality of the software (Hetzel)
– An essential part of testing is the use and maintenance of the
documentation, tools etc. related to it (testware) (Craig&Jaskiel)
– Testing is a technical investigation performed to reveal quality-
related information on the product under test (Kaner)
– Testing is breaking programs (Whittaker)
– Testing is experimental work that produces information for
decision making (several)

Software testing, 2015 20(504)


Ohjelmistotekniikka
The goal is to find errors 1/3

• Unfortunately testing cannot show that the software is free of


errors
• Testing also doesn’t improve the quality of the software by
itself; it only measures quality and produces information about it
– For staying up to date with the situation and making decisions
• Testing is not primarily making sure that the program works
as it should
– Confirming functionality is not a good starting point for designing
test cases, because people often see what they want to see
• A better starting point: a successful test run is one that causes
a failure in the behaviour of the program

Software testing, 2015 21(504)


Ohjelmistotekniikka
The goal is to find errors 2/3
• This kind of test run enables the location
of an error in the software, and only by
removing that error is quality improved
• Tester’s assumption: the program always
contains errors waiting to be found
• We test to show that
– The program does what it shouldn’t
– The program doesn’t do what it should
– The program works in a way not mentioned in the
specification
• Maybe it should be mentioned?
– The program is hard to understand, difficult to use,
slow, or works wrong in its users’ opinion
Software testing, 2015 22(504)
Ohjelmistotekniikka
OTE

The goal is to find errors 3/3


• Fixing the errors found in testing is the first step in improving
quality.
– It’s a direct improvement in the product.
• We can also find out what caused these errors and handle
their ”root causes” – was it because of a problem in
requirements, specification or implementation? Is something
being done badly there?
– This review can improve the process and reduce the number of
errors in the future.
• Every piece of information testing produces on an error
is a chance to learn.
• An error found is a happy event!

Software testing, 2015 23(504)


Ohjelmistotekniikka
2.2 Always a reference: Requirements 1/3
• In testing we always compare observations to what we know
about the expectations directed to the system under test
– Described requirements, how systems like this usually work,
what do we know users will expect, etc.

(Diagram: a requirement leads to a designed test; executing the program
produces observations, which are compared against the requirement.)

Software testing, 2015 24(504)


Ohjelmistotekniikka
Always a reference: Requirements 2/3
• We therefore need to find out what
is expected of the system
– Functionality?
– What failures must it tolerate?
– Performance?
– Usability?
– Information security?
– Etc…
• Some requirements may have been
considered for this product, but not
all – we need general knowledge of
how products and systems work
and what is expected of them
Software testing, 2015 25(504)
Ohjelmistotekniikka
Always a reference: Requirements 3/3
• Users have to be understood:
– What are the users like?
– What do they do, what are they trying to accomplish?
– How do they work, in what environment?
– What kind of hardware do they use?
– What data do they handle?
– What must work above all else?
• Information can be found in specifications, user stories,
comparable products, product owners, clients… but a tester
needs to look for that information and interpret it – and clarify
the issue for themselves

Software testing, 2015 26(504)


Ohjelmistotekniikka
Quality model standard – ISO 25010 –
as a check list 1/3
ISO/IEC 25010:2011. Systems and software engineering – Systems and software Quality
Requirements and Evaluation (SQuaRE) – System and software quality models. 34 p.

Characteristic – Sub-characteristics
Functional suitability – Functional completeness, Functional correctness, Functional appropriateness
Performance efficiency – Time behaviour, Resource utilization, Capacity
Compatibility – Co-existence, Interoperability

Software testing, 2015 27(504)


Ohjelmistotekniikka
Quality model standard – ISO 25010 –
as a check list 2/3
Characteristic – Sub-characteristics
Usability – Appropriateness recognisability, Learnability, Operability, User error protection, User interface aesthetics, Accessibility
Reliability – Maturity, Availability, Fault tolerance, Recoverability
Security – Confidentiality, Integrity, Non-repudiation, Accountability, Authenticity
Software testing, 2015 28(504)
Ohjelmistotekniikka
Quality model standard – ISO 25010 –
as a check list 3/3
Characteristic – Sub-characteristics
Maintainability – Modularity, Reusability, Analysability, Modifiability, Testability
Portability – Adaptability, Installability, Replaceability

• Many characteristics can be tested.
• Many cannot.
• Some characteristics have an effect on how the system can
be tested.

Software testing, 2015 29(504)


Ohjelmistotekniikka
Functional vs. non-functional testing

• The requirements can be divided roughly into functional and


non-functional requirements
– Examples of non-functional requirements: performance, security,
usability, etc.
• Usually the techniques used for testing depend heavily on
which kind of requirements are to be tested
• Functional testing is ”ordinary” technical testing – there is
pressure to increase the testing of non-functional features
– Usability testing and security testing are performed comparatively rarely

Software testing, 2015 30(504)


Ohjelmistotekniikka
2.3 Sources of errors 1/3
(Mind map: "Sources of errors", with seven clusters of causes)
1. Wrong ideas of the desired functionality
• Wrong source of information (PO instead of client)
• Information changes along the way
• Different context
• Obsolete information
• Documents: missing, wrong, erroneous, ambiguous, unclear, obsolete
2. Environmental factors
• Different hardware environments
• Operating system
• User access rights
• Method of installation
• Other software / "DLL hell" (browser breaks the plugins)
• Updates
3. Changes in other software
• Changes in parent program
• Change management
• Other modules
4. Implementation errors
• Unfitting algorithms
• Coding errors
• Configuration errors
• Errors in resources
• Changes made by others
5. Low quality modules
• Hiding errors or incompatibilities in third party modules
6. Wrong compilation unit
• Version management
• Configuration management
7. Errors in compilers etc.
• Source code is ok, but the resulting machine code is erroneous

Software testing, 2015 31(504)


Ohjelmistotekniikka
Sources of errors 2/3
(Mind map: "Unpreparedness", with six clusters of causes)
1. Failures
• Network connection is lost
• Hard drive becomes full
• File isn’t found
2. Misuse of software
• Wrong purpose
3. Habits of use
• Different users
4. Use environment
5. Sabotage
• Internal attacks
• External attacks
• Hacking
• Denial of service
6. Erroneous inputs
• Typos
• Misunderstandings
• Missing data
• Errors in files

Software testing, 2015 32(504)


Ohjelmistotekniikka
Sources of errors 3/3
• New things
– New technology
– New environment
– New developers
– New team
• Complexity
– Complex technology
– Complex systems
• Pressure
– Time pressure
– Pressure to implement new things
• Done by people – people make errors!

Software testing, 2015 33(504)


Ohjelmistotekniikka
OTE
2.4 In what activities are errors caused?

Figure: Timo Malm, VTT. Data origin: Capers Jones. Software quality in 2008: A
survey of the state of the art.
Software testing, 2015 34(504)
Ohjelmistotekniikka
2.5 The concept of test type
• The concept of test type is closely related to system features.
• It describes testing performed to measure the quality of a specific
feature.
– Functional testing tests functional features – does everything work as it
should. It can be done in low level unit testing or high level system
testing for example through the user interface
– Usability testing tests usability.
– Performance testing tests performance etc…
• The idea is to focus testing on a specific thing and then make use of
methods and tools suitable for it – and also of professionals who are experts in it.
• Functional testing is the most common and therefore will be
examined most on this course.

Software testing, 2015 35(504)


Ohjelmistotekniikka
2.6 Supporting business
• Testing is not performed in a vacuum, but is used to support,
among other things, business goals. For example:
• E-commerce:
– Good interaction with customers on the net: good user experience,
security, the ability to handle load. The system can be upgraded easily – a
reliable platform.
• Product development:
– Product development speed, new
value to client in a rapid cycle.
– The right thing implemented well.
– Management of product risks.

Software testing, 2015 36(504)


Ohjelmistotekniikka
Information needs served by testing 1/2
• Software implementation
– Does the program work correctly and well?
– Does it have problems or dangers?
– Does it fulfil the requirements set in the project?
– Does it interoperate with other systems?
– How do users like it?
– What should be improved?

Software testing, 2015 37(504)


Ohjelmistotekniikka
Information needs served by testing 2/2
• Product business
– What kinds of solutions do clients prefer?
– Is the system mature enough to release?
– What problems does it have?
– What risks are there in releasing the software?
– Does it fulfil the client’s requirements?
– Is it as good as the competitors?
– Does it fulfil the requirements set in standards?
– Do the testing methods fulfil the requirements set in standards?
• Etc…

Software testing, 2015 38(504)


Ohjelmistotekniikka
Technical debt
• Velocity and agility require the platform to be in good shape.
• There mustn’t be too much ”technical debt".
• That doesn’t mean excessive perfectionism, but e.g. that the
product is as free of bugs as possible and doesn’t break down
when it’s modified.
• Testing and rapid fixing of errors help here.

Software testing, 2015 39(504)


Ohjelmistotekniikka
2.7 Traditional testing process
• (Traditional) testing process can be considered to include at
least the following phases:
– Planning of testing
– Developing test cases
– Executing test cases
– Evaluating the results of test runs and reporting
• Since time is always short, the process needs to be designed
so that it won’t be circumvented in order to reach the goals
• Often the time requirements of testing are underestimated
even more than those of other phases
– And when in a hurry, which will give way: coding or testing?

Software testing, 2015 40(504)


Ohjelmistotekniikka
OTE
2.8 Everything cannot be tested 1/3
• A practical example showing why everything cannot be
tested:
long add(int i, int j) {
    return (long)i + (long)j;   // some implementation; the point here is the size of its input space
}
• If the type int corresponds to a 16-bit integer and specific
inputs always result in the same output, there are 2^16 x 2^16 =
4 294 967 296 test cases
• If each test takes 5 seconds to execute, it would take about 680
years to test the function above (recomputed in the sketch below)
– 680 testers could execute the tests within one year if they
worked around the clock in parallel
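As a quick sanity check of the arithmetic above, the following small Java sketch recomputes the number of test cases and the testing time; the figure of 5 seconds per test is the assumption used on this slide.

public class ExhaustiveTestingEstimate {
    public static void main(String[] args) {
        long valuesPerParameter = 1L << 16;                        // 65 536 values of a 16-bit int
        long testCases = valuesPerParameter * valuesPerParameter;  // 2^16 x 2^16 = 4 294 967 296
        double seconds = testCases * 5.0;                          // 5 seconds per manually run test
        double years = seconds / (60.0 * 60 * 24 * 365);
        System.out.printf("%d test cases, about %.0f years for one tester%n", testCases, years);
        // Prints roughly 681 years, matching the estimate of about 680 years above.
    }
}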

Software testing, 2015 41(504)


Ohjelmistotekniikka
Everything cannot be tested 2/3
• So would automation help? If we could drop the execution
time of one test to 1/100000 of the original, it would take only
a few days to test the function without using parallel methods
• Unfortunately in real life test targets are much more
complicated than the function above, so automation would not
do the trick
– For example adding another int type parameter to the function
increases the number of test cases by a factor of 65536
– The number of test cases for a real program increases very
quickly and becomes impossible to handle
• If someone claims to have tested everything, you should
probably question that claim

Software testing, 2015 42(504)


Ohjelmistotekniikka
OTE

Everything cannot be tested 3/3


• If, for example, 10% of the requirements for an airplane built in
the sixties were related to software, today the figure can be
80%
• Moreover, the number of requirements is increasing all the
time
• In addition, the complexity of software is growing even faster
than its size
• Even if historical data on the required testing workload were
available, it would still be very hard to estimate how much the
growth in the size of the software affects the testing work to
be done

Software testing, 2015 43(504)


Ohjelmistotekniikka
2.9 Schools of testing

• There are such different views on the basics of testing that it


is appropriate to talk about ”schools of thought".
• It is good to recognize and understand them – otherwise it’s
difficult to work in different environments.
• Earlier they have been defined by e.g. Cem Kaner and Bret
Pettichord:
– Kaner, Cem. 2006a. Schools of software testing. Available at:
http://kaner.com/?p=15. [Checked 23.3.2012]
– Pettichord, Bret. 2007. Schools of Software Testing. Slide set. 33 p.
• The following is Vuori’s understanding of the schools in 2015.
• They are something of caricatures and in the real world merge
into each other.

Software testing, 2015 44(504)


Ohjelmistotekniikka
Schools of testing 2015 1/5

• Standardization school.
– Testing should be based on standards, such as ISO/IEC 29119 or the
descriptions of ISTQB. The compatibility of testing methods with these
is considered a show of professionalism.
– Testing is the same in all contexts and so should be the practices.
• Quality management and assurance school.
– Testing is part of quality management and assurance.
– Testing is a method of verification and validation (V&V) and used in
well-defined, repeatable ways.
– Testing is well defined in software production processes.
– Testing management and measurement have a large role.
– Close to standardization school.

Software testing, 2015 45(504)


Ohjelmistotekniikka
Schools of testing 2015 2/5

• Automation school.
– All testing should be automated, no testing should be performed
manually.
– The nature of bugs is such that automation can detect them.
– Test coverage should be near perfect.
– Testing is by its nature a logistic process.
• Developer-centric school.
– Unit and integration testing performed by developers is usually enough.
– Testing must be integrated into software production processes.
– Test automation must be practical, fast, simple and easy.

Software testing, 2015 46(504)


Ohjelmistotekniikka
Schools of testing 2015 3/5

• QA centric test automation school.


– Testing requires challenging test automation.
– No complexity is bad if the challenges are complex.
– Testing must be backed by good science in order to optimize the test
systems and results.
– Good design of test system and an analytical approach are important.
– This is found in industry in testing complex systems.
• Context-driven school.
– The testing that should be performed depends on the situation.
– Evaluating the context is important.
– There are no ”best practices".

Software testing, 2015 47(504)


Ohjelmistotekniikka
Schools of testing 2015 4/5

• Intelligent testing school.


– Testing is an intelligent activity that requires abilities characteristic
of humans.
– Exploration is essential in finding errors.
– There’s more to everything to be tested than can be seen at first
glance.
– Tools can help in testing, but they don’t ”do” testing.
– There’s a big difference between simple checking and real
testing that reveals new information.

Software testing, 2015 48(504)


Ohjelmistotekniikka
Schools of testing 2015 5/5

• Routine school.
– Testing is simple work that doesn’t require special skills.
– Pretty much everyone in the organization can test.
– Discipline, precision and following the plan are the most
important things in testing work.
• Holistic school.
– There is no one way to do testing.
– Good testing applies a context-dependent mixture of several
paradigms.
– All approaches complement each other.
– Even contradictory paradigms are good for testing.

Software testing, 2015 49(504)


Ohjelmistotekniikka
2.10 Testing folklore 1/3
• Errors are made in all the phases of software development
• If you want to avoid errors, you have to avoid human beings
• The sooner you find the errors, the better

Software testing, 2015 50(504)


Ohjelmistotekniikka
Testing folklore 2/3
• In the worst case the faults are discovered only after the
system has been delivered:
– The Therac-25 radiation therapy machine gave massive overdoses of
radiation in six known accidents, several of them fatal
– Unsuccessful launch of ESA Ariane 5 rocket caused costs of
seven billion US dollars
– Replacement of faulty Pentium processors cost Intel over 400
million US dollars
– 1/3 of the phone network in the US has gone silent twice (AT&T and
DSC)
• The costs of defective software in the USA are estimated to reach
0.6% of GDP, i.e., about $60 billion (The Economic Impacts of
Inadequate Infrastructure for Software Testing, 2002)
– It is unlikely that the situation is any better in Europe
Software testing, 2015 51(504)
Ohjelmistotekniikka
Testing folklore 3/3
• It has been estimated that in a carefully implemented program
there are about five errors for every thousand lines of code
• Windows XP has about 45 million lines of code, so it can be
estimated to carry around 225 000 errors
– Vista has about 60 million lines of code
• Fortunately Windows can update itself automatically from the
Internet!
• Notice! Microsoft has developers and testers in a ratio of 1:1
• Be honest: who would have been able to do it better than
Microsoft in this case?

Software testing, 2015 52(504)


Ohjelmistotekniikka
2.11 Testing is demanding work
• In practice more is required of testers than coders
• One needs an understanding of the domain and system
architecture as a whole
• Detective skills are often needed in analysing test results
• How to test a program without any documents or manuals?
– This situation isn’t as rare as you might think…
• Test code is often as valuable to the organization as the
actual product code
– (Automated) regression tests enable maintaining and extending
the program as well as agile reaction to changing requirements

Software testing, 2015 53(504)


Ohjelmistotekniikka
2.12 Testing is team work
• Drawing too sharp a border between testers and developers
is in no one’s interests
– Software development is team work in the literal meaning of the
word
• Even if all testers do not need to know how to program,
usually programming skills are a real benefit
– It’s good to know what kinds of errors developers make
– Developing test automation is software development – including
creating and maintaining all kinds of small scripts
• On the other hand, less technologically oriented testers may
better understand the needs of the end users
• It’s important to have people who think in different ways in
teams
Software testing, 2015 54(504)
Ohjelmistotekniikka
2.13 What can be required from
testing?
• You cannot require
– Finding all the errors
– Quality assurance
– Deciding the moment of release
• You can require
– Providing new information about the product
• You can use testers for other tasks than just finding errors
– Conception of the entire system that is being developed and its
quality
– Working communication about problems
• The existence and repairing of bugs may have to be ”sold” to
the developers

Software testing, 2015 55(504)


Ohjelmistotekniikka
OTE
2.14 Important terms 1/6
• Error is an anomaly in the program that deviates from the
specification
• Fault or a defect can be caused when a line with an error is
executed or when something should be executed that has not
been implemented
• Failure is an event that is externally observed and is caused
by a fault
– Fault does not always manifest in the operation of the system,
i.e. not all faults lead to a failure
• Bug can mean any of the above

Note! These terms are used inconsistently

Software testing, 2015 56(504)


Ohjelmistotekniikka
Important terms 2/6
• In dynamic testing a program is executed with an input that
is usually related to a test case
• In static testing the program is not executed; instead, errors are
looked for by reading through the source code or the
documentation
– Documentation is “software” so it has to be tested as well
– Some think that this is not testing in the actual meaning of the
word
• Specification describes how something should or should not
work
– For example, requirement specification tells what requirements
the system should fulfil; specification of a class tells how a class
should work
Software testing, 2015 57(504)
Ohjelmistotekniikka
OTE

Important terms 3/6


• Test condition describes the issue we are testing
• Test case describes the inputs that are used to get the system
under test to exhibit a failure
• Test procedure describes the steps taken to execute the test case.
In practice described as part of the test case
• Test suite is a group of tests that are logically related to each other
• Test environment means the hardware and software environment
in which the system under test is executed, including the interfaces,
stubs and drivers
– Defining the test environment is important for avoiding situations
like ”It worked yesterday on my PC” when debugging
– Important to match the production environment
– Can be many different environments

Software testing, 2015 58(504)


Ohjelmistotekniikka
Important terms 4/6
• Testware includes all documents and products created for
testing, such as test cases and test plans [Craig&Jaskiel 02]
• Validation tries to make sure that we are making the right
product
– Does it really do what is expected
• Verification tries to make sure that we are making the
product right
– How it meets the stated requirements
– Some formalists consider only so called "formal verification" to
be ”real” verification, since testing cannot prove the absence of
errors

Software testing, 2015 59(504)


Ohjelmistotekniikka
Important terms 5/6
• Testing strategy is a “military plan” that defines the scale and
depth of testing, and the risks to be covered during testing
• Some test phases may have multiple names: for example,
unit, module, and component testing can mean the same
thing for different testers
• SUT, system under test
– IUT: implementation under test, DUT: device under test, etc.
• The software terms ”function” and ”method” are used
interchangeably in this course to describe the same thing
– If the purpose is to emphasize the special properties of object-
oriented testing, we try to use the term ”method”

Software testing, 2015 60(504)


Ohjelmistotekniikka
OTE
Important terms 6/6
• Positive testing tries to ensure that the system does what it
is supposed to do by executing ”happy case” type of test
cases derived directly from requirements
• Negative testing tests ”unhappy cases” that are not usually
specified by the requirements (at least completely), such as
error conditions, in order to ”break” the system under test
• ISTQB has a good glossary, see
http://www.fistb.fi/sites/fistb.ttlry.mearra.com/files/istqb_sanasto.pdf

Software testing, 2015 61(504)


Ohjelmistotekniikka
3. Testing based on test cases

Dynamic testing is designing and executing tests and analysing
the results. Because tests dictate which errors are found, it is
wise to concentrate on their quality. What does a well-defined
test case consist of?

Software testing, 2015 62(504)


Ohjelmistotekniikka
OTE
3.1 Test case in a nutshell
• A small test for program behaviour
– An idea of something to test: does a calculator’s division work?
– An idea of a good input: try division by zero
– An idea of a result: an error message, no crash, something else
• Performed either in a predefined way or left up to the tester
• May be designed systematically in a set before execution
• Or created ”on the fly"
• Or they can be generated automatically based on the type of
software components or a model describing software
behaviour

Software testing, 2015 63(504)


Ohjelmistotekniikka
Examples
• Input an ordinary sum into a calculator and check whether it
gives the correct result.
• Input a non-existent date into an information system and see
if it can handle the situation.
• Leave a required field in a form empty.
• Try to read a huge file into a program.
• Try different character sets in a text file and see if they can be
opened and saved correctly.
• Try to open a corrupted file into an image processing
program.
• Close the internet connection when the program is reading a
file through it.
Software testing, 2015 64(504)
Ohjelmistotekniikka
What do we want to know?
• Before designing test cases we need a reason for them
– It’s not enough that the boss told us to make sure that the system
works
• Recognize the situations and things about which quality
information is needed
• Requirement documentation, user stories etc. are very useful
here
– Although the viewpoint of negative testing must not be forgotten
• Sometimes the tester can only rely on their own competence,
if there’s no firm information available

Software testing, 2015 65(504)


Ohjelmistotekniikka
OTE

Test cases must be challenging


• Coming up with good test cases is usually difficult
• Bad test cases can give a falsely bright image of software
quality
– That the program appears to work as it should may only be
because the test cases are ill chosen
• For the business, good test cases may be more valuable than the
actual product code
• Typically 20% of test cases test ordinary functionality (positive
testing) and 80% ”unusual” functionality such as error
conditions etc. (negative testing)

Software testing, 2015 66(504)


Ohjelmistotekniikka
Sources of test cases 1/2
• Inputs
– What kinds of inputs must the program handle? What kinds of
faulty inputs must it tolerate?
– At low level in functions and at UI level
– What kind of data must the program be able to read? Does it
work with corrupted data?
– Parameter combinations
• States
– What should be possible in a specific state? What should be
prevented?

Software testing, 2015 67(504)


Ohjelmistotekniikka
Sources of test cases 2/2
• Use cases
– Things that must work. Failures that must be managed.
– Things users normally do
– Mistakes users make
– Intentional abuse
– Hazards
• Interaction with other components and programs
– Ordinary interactions
– Failures in the system
• Things defined in standards etc.
• Etc… overall, all kinds of behaviour at different levels of the
system
Software testing, 2015 68(504)
Ohjelmistotekniikka
Minimizing time used 1/2
• Since running tests can take a considerable time, their
number is minimized where possible
• Minimizing can be done by designing test cases that cover as
many things to be examined as possible
– So one test case can cover multiple situations, though that may
make analysing the test results more difficult
– For example, testing a long web form: fill the fields in specific ways
each time – and when an error occurs, try to figure out what
caused it
• Equivalence partition methodology considers input value
ranges where the system behaves in the same way – then a
single test case may be enough

Software testing, 2015 69(504)


Ohjelmistotekniikka
Minimizing time used 2/2
• Depending on the situation the time needed for test runs may
or may not become the project bottleneck
• Testing must be done continuously as the product is
developed
• Focus on test case quality, not quantity
• Speed-up can be sought by automating at least some of the
test runs
– Even if automation is expensive, it is usually cost efficient in
situations where the same tests are run multiple times such as in
regression testing (presented later)
– Automation may also leave more time for designing better test
cases

Software testing, 2015 70(504)


Ohjelmistotekniikka
3.2 Test case structure and content 1/3
OTE
• Test case execution can usually be divided into four phases:
– Setup: the system under test is made ready for the test case
• Allocate resources, initialize databases etc.
– Execution: execute one test case in the system
– Evaluating results: compare the outputs of the system to the
ones expected in the test case and give a verdict
– Cleanup: release the resources reserved in setup

Software testing, 2015 71(504)


Ohjelmistotekniikka
Test case structure and content 2/3
• Test case content – not everything is written down explicitly:
– Unique identifier (code – not always used: in test code function name is
enough)
– Descriptive name / header – this is used to identify the case in long
listings and in test automation code
– Type – functional or other; positive, negative etc…
– Purpose, e.g. a requirement or use case related to the case
– A formal trace to a requirement etc. is sometimes needed
– Preconditions
– Inputs
– Expected results
– Postconditions

Software testing, 2015 72(504)


Ohjelmistotekniikka
Test case structure and content 3/3
• Example of a test for an automatic teller machine (ATM); a sketch of the same case as automated test code follows the list:
– Identity: TT0001
– Name: Balance query, basic case
– Type: Functional test
– Purpose: Test balance query in a basic situation, requirement VT0203
– Preconditions: Card placed into the ATM, 100 € of money in the balance
– Inputs: Press balance query button
– Expected results: The ATM displays the balance 100 €
– Postconditions: The ATM returns to the main view after 4.5 – 5.5 s
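As an illustration only (the slides do not define any ATM programming interface), the same case could look roughly like this as automated test code in JUnit 4 style. The Atm class and its methods are hypothetical; a tiny fake implementation is included just to make the sketch self-contained. Note how the fields of the test case map onto the execution phases of section 3.2: setup, execution, evaluating results, and cleanup.

import static org.junit.Assert.assertEquals;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class BalanceQueryTest {

    // Minimal fake ATM so that the sketch compiles on its own; a real test would drive the real SUT.
    static class Atm {
        private double balance;
        private String display = "";
        private boolean cardInserted;
        void insertCard()                { cardInserted = true; }
        void setAccountBalance(double b) { balance = b; }
        void pressBalanceQueryButton()   { if (cardInserted) display = String.format("%.0f €", balance); }
        String displayedBalance()        { return display; }
        void ejectCard()                 { cardInserted = false; }
    }

    private Atm atm;

    @Before
    public void setUp() {                      // Setup: preconditions of test case TT0001
        atm = new Atm();
        atm.insertCard();                      // card placed into the ATM
        atm.setAccountBalance(100);            // 100 € in the balance
    }

    @Test
    public void balanceQueryBasicCase() {      // Execution and evaluating the result
        atm.pressBalanceQueryButton();         // input: press the balance query button
        assertEquals("100 €", atm.displayedBalance());  // expected result: the balance is displayed
    }

    @After
    public void tearDown() {                   // Cleanup: release the resources reserved in setup
        atm.ejectCard();
        // The timing postcondition (return to the main view in 4.5 – 5.5 s) would need its own check.
    }
}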

Software testing, 2015 73(504)


Ohjelmistotekniikka
3.3 Other notes about test cases
• Test cases are used in almost all kinds of testing, excluding e.g.
usability testing, which uses different terminology
• Even if the test case isn’t written down, its logic guides the tester in
e.g. exploratory testing
• Sometimes test cases are designed in large numbers based on
program requirements, but there is the danger that they are no
longer valid when implementation begins
– In agile development test cases are designed near implementation
• Sometimes testing relies more on varying data than actual defined
test cases
• In model-based testing (presented later) there aren’t necessarily any
”test cases”, only guidance for program behaviour

Software testing, 2015 74(504)


Ohjelmistotekniikka
4. Testing as a part of a software
engineering process
Software engineering is obviously much more
than just testing. But how is testing included in
the process? In this section we will consider four
important levels if testing: unit, integration,
system and acceptance testing.

Software testing, 2015 75(504)


Ohjelmistotekniikka
Testing is an essential part of the
engineering process
• As we have stated before, testing is an essential part of
software development
• Testing cannot and should not be isolated from the software
engineering process
• A rule of thumb: 20% of the code contains 80% of the errors,
so it is wise to carefully consider how to allocate the
resources
– The location of errors is not random; they are often hiding in the new
and changed parts and in the most complex components
• Every self-respecting software process has a view on when
testing is done (and what should be tested)

Software testing, 2015 76(504)


Ohjelmistotekniikka
Testing must be started quickly 1/2
• Because even over half of a software project’s resources can be
spent on testing, locating errors and fixing them, improving
the process can result in great cost savings
– We can also improve testing methods, tools, etc.
• Motto: the sooner the errors are fixed, the better
• You should test immediately, as soon as there is something to test
– Testing the implementation of a feature as soon as some kind of a
version of it is available
– Starting testing when some part of the system can be executed
– For example by writing a rough system test plan, when the requirements
are mature enough
• The details should be specified only when they are certain enough
in order to avoid unnecessary rework

Software testing, 2015 77(504)


Ohjelmistotekniikka
Testing must be started quickly 2/2
• By testing early a better grasp will be obtained of the direction where
the project is headed
– It is vital information for the project management
• On the other hand, designing the test cases early increases the risk
that they will have to be redesigned before they can be executed
– If the target is moving, aiming has to be redone constantly
– Test specifications must be done at a right time and at the right detail
– In agile development testing is also built in phases
• Of course, it would be even better to not to make errors in the first
place
– Improve the software process to reduce the number of errors made

Software testing, 2015 78(504)


Ohjelmistotekniikka
4.1 Levels of testing –
V-model of testing
(Diagram: the V-model of testing. Test design flows down the left-hand side of the V,
result verification up the right-hand side.)
Requirements specification – Acceptance testing
Functional specification – System testing
Architecture design – Integration testing
Module design – Unit testing
Implementation (at the bottom of the V)
Software testing, 2015 79(504)
Ohjelmistotekniikka
Traditional, still essential
• Testing is attached to traditional software engineering process
that follows the waterfall model according to the V-model
• Describes the ”levels” of testing that are still relevant in all
development models
• Has a strong role in ”regulated” development, in standards
regarding safety-critical systems
• Part of an all-round understanding of testing
• In practice often too rigid to be applied literally; often more of
a pedagogical abstraction

Software testing, 2015 80(504)


Ohjelmistotekniikka
The logic of V-model 1/2
• The left hand side of the V-model describes the waterfall
model, where you move from top down, starting from the
requirements specification, moving on to functional
specification and from there to architectural design and finally
to module design and to implementation
• In every phase testing plans are made for corresponding
testing phases (the right hand side)
• When implementation starts to exist, testing it is begun
– We move along the right-hand side of the V-model bottom-up,
acting according to the previously mentioned test plans
– Testing verifies on each level whether the implementation
corresponds to the functional and technical specifications or not

Software testing, 2015 81(504)


Ohjelmistotekniikka
The logic of V-model 2/2
• The techniques used vary depending on the phase of the
process
– For example, on lower levels there is more white box than black
box testing and on the higher levels vice versa
• Traceability between different phases eases the tracking of
the origin of errors
• When an error has been located, the testing process could be
improved so that the errors of that type can be detected
earlier
• ”Verification and validation" are typical to the vocabulary of
developing safety-critical systems. On the next slide is an
example of a typical diagram in that culture.

Software testing, 2015 82(504)


Ohjelmistotekniikka
Verification and validation in the V-model
[based on Pezzè&Young 07]
(Figure: on the validation side, actual needs are related to the delivered package
through acceptance testing by users and a review with users. On the verification side,
the requirements specification is checked against system integration through system
testing, subsystem designs/specs against subsystems through integration testing, and
component specs against units/components through unit testing, with analysis/review
between the specification levels.)

Software testing, 2015 83(504)


Ohjelmistotekniikka
OTE
4.2 Unit testing
• Unit testing, module testing – many names
• Each unit of the program is tested separately
– A unit can be a module, class, process, etc.
• Unit testing is usually a part of the implementation phase of
the unit
– A developer of the unit tests his/her own implementation
• Usually the knowledge of errors is left only to the developer
– Because errors tend to pile up, could this information be
used to target testing better?
– On the other hand, developers can pinpoint to the testers
the most complex parts of the system
– The sooner the unit’s implementation is tested the better

Software testing, 2015 84(504)


Ohjelmistotekniikka
Focus areas
• Focus areas of unit testing:
– Interfaces – important for all units
– Boundary values – logic, variables
– Handling of error conditions
– Data structures (consider an API that implements a tree
structure)
– Execution paths and loops – are all paths visited correctly, does
the loop end correctly
• Testing is usually done at source code level – developers
want to ”make sure” that everything works
• Interfaces are usually favoured as target, since they are
relatively stable and tests are a safety net through refactoring.

Software testing, 2015 85(504)


Ohjelmistotekniikka
Testing of interfaces 1/2
• An interface usually consists of functions, whose parameters and
return values are used for passing information
• The functionality of interfaces should usually be tested first
– They define the external behaviour of the unit
– If interfaces do not work, it might be difficult or even impossible to
execute other tests
– When the code is developed, interfaces may stay the same, so the tests
need not be changed
• Problems related to these:
– The number of parameters and their order
– The return values and types of parameters
– Is the value of a supposedly constant parameter changed?
– Is global data defined consistently everywhere in the code?

Software testing, 2015 86(504)


Ohjelmistotekniikka
Testing of interfaces 2/2
• Depending on the programming language used a good
compiler notices most of the previous errors
– Unfortunately, popular scripting languages do not support early
error detection
• However, a compiler cannot detect whether the caller and the
callee interpret the parameter/return value in the same way
– For example, one interprets a value in centimetres and the other
in inches (NASA has lost a probe because of an error like this)

“NASA lost a $125 million Mars orbiter because a Lockheed Martin


engineering team used English units of measurement while
the agency's team used the more conventional metric system
for a key spacecraft operation, according to a review finding
released Thursday.”
– “Metric mishap caused loss of NASA orbiter”, CNN, September 30, 1999

Software testing, 2015 87(504)


Ohjelmistotekniikka
Testing of boundary values
• Checklist:
– The boundary values of parameters and return values
– The boundary values of loop variables
– The boundary values related to data structures
• For example, does a dynamic data structure grow and shrink
correctly when memory is actually allocated and de-allocated
• Boundary value analysis will be covered in more detail later

Software testing, 2015 88(504)


Ohjelmistotekniikka
OTE
Testing of error conditions 1/2
• When software is designed, little effort is often put into error
handling
• Unfortunately, the testing of error handling is also often done poorly
– Even if the software is otherwise designed to be easily testable, error
handling might not be
• As a result
– Software doesn't tolerate the smallest disturbances
– Error messages given by the program might be useless or misleading
for the user
• Poorly implemented error handling can also jeopardize information security

Software testing, 2015 89(504)


Ohjelmistotekniikka
Testing of error conditions 2/2
• A checklist for testing of failure handling (a small exception-testing sketch follows the list):
– Has error handling been implemented correctly?
• Is error recovery even possible?
– If not, does the program crash in a user friendly way?
– Has error handling been implemented so that the handler is
actually reached or does the program crash before that?
– Is the right exception thrown?
– Is the error message comprehensible?
– Does the error message correspond to the error that occurred?
– Does the error message help the user to locate the reason for
the error and to move on?
– Are error messages consistent in all the units of the software?
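A minimal JUnit 4 sketch of testing error handling is shown below. The AgeParser class is hypothetical and stands for any unit whose error behaviour we want to check: is the right exception thrown, and is the error message comprehensible?

import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import org.junit.Test;

public class AgeParserErrorHandlingTest {

    // Hypothetical unit under test: parses an age given as text.
    static class AgeParser {
        int parse(String text) {
            int age = Integer.parseInt(text.trim());   // throws NumberFormatException for non-numeric input
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("Age must be between 0 and 150, got: " + age);
            }
            return age;
        }
    }

    @Test(expected = NumberFormatException.class)
    public void nonNumericInputThrowsTheRightException() {
        new AgeParser().parse("abc");                  // is the right exception thrown?
    }

    @Test
    public void outOfRangeInputGivesAComprehensibleMessage() {
        try {
            new AgeParser().parse("200");
            fail("Expected an IllegalArgumentException for an out-of-range age");
        } catch (IllegalArgumentException e) {
            // Does the error message correspond to the error and help to locate it?
            assertTrue(e.getMessage().contains("between 0 and 150"));
        }
    }
}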

Software testing, 2015 90(504)


Ohjelmistotekniikka
Testing of data structures 1/2
• The local data structures implemented within the unit are
prone to errors
• Also data structures used by the unit but implemented
elsewhere should be tested during the testing of the unit
– Type errors
– Initializations of default values
– Spelling of variable names
– Consistent use of data types
– Overflow, underflow and exceptions
• A good compiler can detect some of these errors
• One should pay attention to the data structures during code
inspections
Software testing, 2015 91(504)
Ohjelmistotekniikka
Testing of data structures 2/2
• Tested often as a ”by-product” of interface testing (see the sketch below):
– Test the function with an input. Does it work correctly?
– Then: have the data structures been updated or destroyed
correctly?

• Whenever possible, use ready-made, well tested data


structures
– Standard libraries provided with languages and platforms are the
safest
– For example Standard Template Library (STL) in C++, see
http://en.wikipedia.org/wiki/Standard_Template_Library
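A small sketch of the ”by-product” idea mentioned above: exercise the unit through its interface and check at the same time that its internal data structure grows and shrinks correctly. The IntStack class is hypothetical.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class IntStackTest {

    // Hypothetical unit under test: a tiny growable stack of ints.
    static class IntStack {
        private int[] data = new int[2];
        private int size;
        void push(int value) {
            if (size == data.length) {                    // grow when full
                int[] bigger = new int[data.length * 2];
                System.arraycopy(data, 0, bigger, 0, size);
                data = bigger;
            }
            data[size++] = value;
        }
        int pop()        { return data[--size]; }
        int size()       { return size; }
        boolean empty()  { return size == 0; }
    }

    @Test
    public void pushAndPopKeepTheDataStructureConsistent() {
        IntStack stack = new IntStack();
        for (int i = 1; i <= 5; i++) {
            stack.push(i);                  // exercise the interface...
            assertEquals(i, stack.size());  // ...and check that the structure grew correctly
        }
        for (int i = 5; i >= 1; i--) {
            assertEquals(i, stack.pop());   // values come back in the right order
        }
        assertTrue(stack.empty());          // the structure shrank back correctly
    }
}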

Software testing, 2015 92(504)


Ohjelmistotekniikka
OTE

Execution path and loop testing 1/3


• The places in the code where the execution branches are error-
prone
– If statements, loops, jumps
• Test cases should be selected so that as many critical paths through
the unit as possible will be tested
– For example choose function inputs so that loops will be executed in different
ways
• Tested execution paths should especially include those that can
cause error conditions (depending on the value of input, for
instance)
• When testing loops, simple, nested and consecutive loops should be
distinguished
• Techniques for testing simple loops are presented later
– For example, test cases can concentrate on the boundary values of a
loop variable

Software testing, 2015 93(504)


Ohjelmistotekniikka
Execution path and loop testing 2/3
• These techniques can be generalized for nested loops
– The problem is rapid growth of the required test cases when
nesting increases
– There are strategies developed for testing nested loops that
focus on keeping the number of test cases reasonable
• For example the innermost loop is tested first and the loop
variables in other loops are kept in minimum values. After
that, the same is repeated for the second innermost loop, etc.
• Consecutive loops can either be independent or dependent
on each other
– A dependency is created when a loop variable of the latter
loop is initialized using the value of the first loop’s loop variable
(often also a problem in code…)
Software testing, 2015 94(504)
Ohjelmistotekniikka
Execution path and loop testing 3/3
• For the testing of loops that depend on each other one can
use heuristics for nested loops
• The testing of independent loops is the same as testing
simple loops; a small sketch of covering zero, one and many loop iterations follows below
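A minimal sketch of the ”zero, one, many iterations” idea for a simple loop; the sumOf method is hypothetical.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SimpleLoopTest {

    // Hypothetical unit under test: sums an array with a simple loop.
    static int sumOf(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    @Test
    public void loopExecutedZeroTimes() {       // empty input: the loop body never runs
        assertEquals(0, sumOf(new int[] {}));
    }

    @Test
    public void loopExecutedExactlyOnce() {     // boundary: a single iteration
        assertEquals(7, sumOf(new int[] {7}));
    }

    @Test
    public void loopExecutedManyTimes() {       // typical case: several iterations
        assertEquals(6, sumOf(new int[] {1, 2, 3}));
    }
}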

Software testing, 2015 95(504)


Ohjelmistotekniikka
Implementing unit testing 1/4
• Short, clear test functions and simple checks within them
– ”Smart” assertions defined in the test framework (xUnit-tools
have many), used for reporting. Language-specific assertions
should not be used in testing – often they interrupt test
execution, which isn’t sensible.
– Assertions are not placed in production code but in test code (in
production code they do not exist for testing but for interrupting a
program in an erroneous state)
– … below in pseudo code
TEST_TYPE test_square_root() {
    double x = 2.0;                       // hypothetical test input
    double result = my_sqrt(x);
    ASSERT_TRUE((result * result) == x);  // Note: minor bug left in this test code (what?) for simplicity…
}

Software testing, 2015 96(504)


Ohjelmistotekniikka
Implementing unit testing 2/4
• Because units under test cannot usually work by themselves
testing them requires writing code specific to testing
– Such as routines that return a file supposedly downloaded from
the internet
• Test code is code as usual
– It has to be developed and documented as carefully as the code
to be tested
– Also test code has to be tested and maintained
– The term ”scaffolding” describes very well the relationship
between the code under test and the test code
• Test code surrounds the production code, mirroring its
structure (AddBalance(amount) <> testAddBalance())

Software testing, 2015 97(504)


Ohjelmistotekniikka
Implementing unit testing 3/4
• Two important concepts related to unit testing (along test
framework) are driver and stub
• Driver (test bed)
– Is a program that takes as input data related to the test case and
inputs this data to the unit under test
– Takes care of the results that the unit under test produces and
passes them on for further analysis
– Should be designed in such a way that it can be used for testing
different units
– Should be designed in parallel with the unit it is to test
– The problems with the design of the driver can reveal errors in
the design of the unit to be tested

Software testing, 2015 98(504)


Ohjelmistotekniikka
Implementing unit testing 4/4
• Stub
– Replaces another unit that is called by the unit under test
• A stub is needed for every unit called by the unit under test
– The tasks of a stub:
• Implements the interfaces needed
• Returns control to the unit under test
• Handles the inputs it receives as little as possible
• Just returns a simulated value or throws an exception
– Should be designed to be as easy as possible to implement
– Importance is emphasized especially when error conditions are tested
• Generating error conditions is hard work and systematic search of
error conditions is difficult
• A stub designed for this purpose can easily create the error
condition needed (a small sketch follows below)
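A minimal sketch of a stub used to create an error condition that would be hard to produce with the real component. The FileSource interface, the Report class and the stub are hypothetical.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ReportErrorHandlingTest {

    // Interface that the unit under test depends on; normally implemented by real file or network code.
    interface FileSource {
        String read() throws java.io.IOException;
    }

    // Hypothetical unit under test: builds a report from a file source.
    static class Report {
        private final FileSource source;
        Report(FileSource source) { this.source = source; }
        String build() {
            try {
                return "REPORT: " + source.read();
            } catch (java.io.IOException e) {
                return "ERROR: data could not be read";   // the error handling we want to test
            }
        }
    }

    // Stub: implements the interface, does as little as possible, and just throws the wanted exception.
    static class FailingFileSource implements FileSource {
        public String read() throws java.io.IOException {
            throw new java.io.IOException("simulated disk failure");
        }
    }

    @Test
    public void reportHandlesReadFailureGracefully() {
        Report report = new Report(new FailingFileSource());
        assertEquals("ERROR: data could not be read", report.build());
    }
}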
Software testing, 2015 99(504)
Ohjelmistotekniikka
Notes on unit testing of object-oriented
programs
• Almost all programs these days are object-oriented.
• There are many things that need to be considered when testing them.
These include:
– The behaviour of the object depends on its state
– Objects hide data, and the hidden data needs to be revealed in testing
– Private methods may be difficult to test depending on the language – that just
has to be accepted
– One should consider the future use of the classes and how to test their base and
child classes
– And what kinds of test implementations are created for abstract classes
– Testing constructors and their potential problems is especially important (in
particular because it may not be possible to throw exceptions in them)
• On this course these matters cannot be covered thoroughly, but we
recommend taking a look at this slide set:
– http://www.cs.tut.fi/~testaus/s2015/luennot/Testing_object-oriented_programs.pdf

Software testing, 2015 100(504)


Ohjelmistotekniikka
OTE

Top 8 points for unit testing


• Think what is most important to the method
– How it is used (by whom, under what circumstances), what parameters
it is called with
• Focus on the interface – it’s stable, but implementation may change
• Test the most common cases
• Make the code robust
– Test the most common errors and exceptions
– Test all boundary values
– Test combinations of parameters
• Be creative – so are the method callers
• Follow test coverage, but remember that it often does not tell much
about actual coverage
• Make your test cases as simple as possible
• Use a framework like JUnit or QTest and follow its conventions
Software testing, 2015 101(504)
Ohjelmistotekniikka
xUnit test frameworks 1/2
• In unit testing it’s a good idea to use pre-existing test
frameworks, because:
– They’re based on commonly used (and thus familiar to many)
good practices of test design.
– They’re available for many languages, often as free open-source
programs.
– The "xUnit family" includes many slightly different frameworks for
different languages etc.
– More on xUnit programs: http://en.wikipedia.org/wiki/XUnit
– One of the first and most popular is JUnit, a unit test framework
for Java programs
– QTest of Qt also follows these principles

Software testing, 2015 102(504)


Ohjelmistotekniikka
xUnit test frameworks 2/2
• Principle:
– Several design patterns have been used to create a test
framework where tests or test suites are described with objects
– Structured test cases and tools for executing them
– Test cases with specific structure – setUp, execution, tearDown
– Test cases are combined into suites for execution
• Advantages:
– Ready functionality for test case execution, checks, and
reporting results
– Test cases in specific format help in maintaining the tests
– Can also be used in integration testing
– Many tools produce e.g. tests compatible with JUnit or reports in
JUnit format
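A minimal JUnit 4 sketch of the setUp – execution – tearDown structure; the Database class used as a test resource is a hypothetical example.

  import java.util.HashMap;
  import java.util.Map;
  import org.junit.After;
  import org.junit.Before;
  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  public class DatabaseTest {
      private Database db;               // hypothetical resource

      @Before
      public void setUp() {              // run before every test method
          db = new Database();
          db.connect();
      }

      @Test
      public void testInsert() {         // the actual test execution
          db.insert("key", "value");
          assertTrue(db.contains("key"));
      }

      @After
      public void tearDown() {           // run after every test method
          db.disconnect();
      }
  }

  class Database {                       // minimal hypothetical implementation
      private final Map<String, String> rows = new HashMap<>();
      void connect() { }
      void disconnect() { }
      void insert(String key, String value) { rows.put(key, value); }
      boolean contains(String key) { return rows.containsKey(key); }
  }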
Software testing, 2015 103(504)
Ohjelmistotekniikka
4.3 Test-Driven Development
• The traditional way of unit testing is problematic
– Unit Tests are made and executed occasionally, if there is time
– So in practice they are only done for some classes or methods
• TDD's way
– Unit testing is a routine
– When a new method is designed, unit tests are created
for it before the implementation of the method
– Passing of the tests shows that the coding is progressing
– The result is a uniform set of unit tests, which are a safety net for
the code when developing it

Software testing, 2015 104(504)


Ohjelmistotekniikka
TDD in nutshell
• Write a test for a method based on its interface and
parameters
• Run it and see that it fails – as the implementation is just a
stub
• Code the implementation
• Run the test(s) and see that they pass
• Tests are integrated to the development environment and are
run always during compilation
• Clean implementation (re-factoring)
• Automated tests are now a safety net for any changes
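A minimal TDD sketch, assuming a hypothetical PriceCalculator with a discountedPrice() method: the test is written first against the planned interface, fails while the implementation is only a stub, and passes once the real implementation is written.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class PriceCalculatorTest {
      @Test
      public void discountIsTenPercentForLargeOrders() {
          PriceCalculator calc = new PriceCalculator();
          assertEquals(90.0, calc.discountedPrice(100.0, 15), 0.001);
      }
  }

  class PriceCalculator {
      // First version: a stub returning 0 so the test fails.
      // Shown here: the implementation that makes the test pass.
      double discountedPrice(double price, int quantity) {
          return quantity >= 10 ? price * 0.9 : price;
      }
  }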

Software testing, 2015 105(504)


Ohjelmistotekniikka
Requirements for using TDD
• Willingness to perform unit testing
• Development environment where making stubs for unit tests
is as easy as possible (i.e. a modern IDE, where the tools are
ready or into which they can be integrated)
• Running of tests integrated to local build system
• Fast development environment where tests are run
immediately
• New development: ”continuous compiling” and continuous
running of unit tests – fewer delays, immediate feedback

Software testing, 2015 106(504)


Ohjelmistotekniikka
The effects of TDD on quality
• Plus
– Forces to create unit tests
• Not quite plus
– Is viewed more as a design than a testing method (focuses on
defining behaviour) – testing requires a different mindset
– There is no evidence that the produced unit testing is better than
with other methodologies
– Excessive systematization may hurt overall

Software testing, 2015 107(504)


Ohjelmistotekniikka
4.4 Techniques for designing test
cases
• Techniques have been developed specifically for designing
test cases. They are used at all levels of testing, but most in
unit testing
• We therefore take a look at the most important techniques.
• The methods provide a suitable systematization to test
design.
• In practice the methods are used together and they support
the thoughts of the test designer.
• In fact they describe models of thought.

Software testing, 2015 108(504)


Ohjelmistotekniikka

4.5 Equivalence partitioning method 1/5


• Minimize the number of needed test cases by classifying the
inputs of the program into equivalence classes that have
the following properties:
– When an instance of an equivalence class causes a failure, any
other instance of the class causes the same failure, too
– When an instance of a class does not cause a failure, neither do
the other instances of the class
• So: we can assume that the program behaves the same for all
inputs from the same input domain
• Instead of testing the entire class, one instance of the class
can be chosen for testing

Software testing, 2015 109(504)


Ohjelmistotekniikka
Equivalence partitioning method 2/5
• Identification of equivalence classes
– A condition on the input value space is selected and the inputs are
divided into two or more classes based on it
– For starters, a division into legal and illegal values
– For example, if a field asks for a positive integer, one big class
is positive integers and another non-positive integers
– The positives can then be further divided into smaller classes

Software testing, 2015 110(504)


Ohjelmistotekniikka
Equivalence partitioning method 3/5
• A few guidelines for choosing equivalence classes:
– If a condition of the input value space defines a range of legal
values like “number is between 1-999”, three equivalence
classes are created: (1 ≤ number ≤ 999), (number < 1) and (999
< number)
– If a condition defines an exact number of possible values like ”a
vehicle can have one to six owners”, one class is created to
correspond to the legal values and two classes to correspond to
the illegal values, ”no owner” and ”more than six owners”
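A minimal sketch of the ”number is between 1-999” example above, testing one representative instance from each of the three equivalence classes; the Validator class is a hypothetical example.

  import org.junit.Test;
  import static org.junit.Assert.assertFalse;
  import static org.junit.Assert.assertTrue;

  public class NumberRangeTest {
      private final Validator validator = new Validator();

      @Test
      public void legalClass() { assertTrue(validator.isValid(500)); }      // 1 <= number <= 999

      @Test
      public void tooSmallClass() { assertFalse(validator.isValid(0)); }    // number < 1

      @Test
      public void tooLargeClass() { assertFalse(validator.isValid(1000)); } // 999 < number
  }

  class Validator { boolean isValid(int n) { return n >= 1 && n <= 999; } }  // hypothetical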

Software testing, 2015 111(504)


Ohjelmistotekniikka
Equivalence partitioning method 4/5
– If a condition defines a set of values, each of which requires a
different kind of processing, for each value an equivalence class
is created and one class for illegal values
• For example ”a vehicle can be a bus, truck, car or a
motorcycle” creates four classes that correspond to the legal
values and a class that corresponds to the illegal values for
example ”trailer” or ”other vehicles”
– If there is an absolute requirement like ”the first letter has to be a
capital letter”, two classes are created, one corresponding to the
legal value ”capital letter” and the other to the illegal value ”non-
capital letter”
– When there is reason to suspect that all instances of an
equivalence class are not treated the same way in the program,
the class must be divided into still smaller equivalence classes
Software testing, 2015 112(504)
Ohjelmistotekniikka
Equivalence partitioning method 5/5
• When the classification has been done, test cases are created from
the instances of the classes
• Illegal test cases should test only one illegal equivalence class at a
time
• When testing multiple variables, one option is to create test cases
that correspond to all the possible combinations of the equivalence
classes, both legal and illegal
– As a result, the tests have more coverage, but the price is much larger
number of test cases that might take too much time to execute
– Such test cases are best generated programmatically (think of an array
of combinations from which tests are generated)
• The usefulness of the equivalence classes is not restricted to inputs
but the technique can also be used based on the value ranges of the
outputs
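A minimal sketch of generating test inputs programmatically from arrays of equivalence-class representatives, as suggested above. The values follow the vehicle example on the earlier slides; only legal classes are combined here, since illegal classes would be tested one at a time.

  import java.util.ArrayList;
  import java.util.List;

  public class CombinationGenerator {
      public static void main(String[] args) {
          String[] vehicleTypes = { "bus", "truck", "car", "motorcycle" };  // legal classes
          int[] ownerCounts = { 1, 3, 6 };        // representatives of the legal range 1..6

          List<String> testInputs = new ArrayList<>();
          for (String type : vehicleTypes) {
              for (int owners : ownerCounts) {
                  testInputs.add(type + " with " + owners + " owner(s)");
              }
          }
          System.out.println(testInputs.size() + " test inputs generated");  // 12
      }
  }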
Software testing, 2015 113(504)
Ohjelmistotekniikka
4.6 Boundary value analysis 1/3
• Experience has shown that programmers make more errors
at the boundaries of the value ranges of parameters and
variables (including the boundaries of the equivalence
classes)
– For example the operator ≤ is used instead of < or the initial
value of the loop variable is ”off by one”
• The loop code might be executed one time too few
• From a set of parameters typically one is chosen at a time,
and its boundary values are tested while keeping all the other
parameters in their ”nominal” values, i.e. strictly inside the
boundaries (for example, inside an equivalence class)

Software testing, 2015 114(504)


Ohjelmistotekniikka
Boundary value analysis 2/3
• Select:
– Values corresponding to the minimum (min) and maximum (max) of the
legal range
– Slightly smaller than minimum (min-) and slightly greater than maximum
(max+)
– If the parameters have several legal value ranges (equivalence
classes), choose values from all of their boundaries
• Boundary value analysis works best when testing a group of
parameters that have no dependencies with each other and which
describe for example amounts or physical quantities such as
temperature, pressure, speed, weight, etc.
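A minimal boundary value sketch for a legal range 1..999, choosing min-, min, a nominal value, max and max+; the Validator class is the same kind of hypothetical example as in the equivalence partitioning sketch.

  import org.junit.Test;
  import static org.junit.Assert.assertFalse;
  import static org.junit.Assert.assertTrue;

  public class BoundaryValueTest {
      private final Validator validator = new Validator();

      @Test public void belowMinimumRejected() { assertFalse(validator.isValid(0)); }     // min-
      @Test public void minimumAccepted()      { assertTrue(validator.isValid(1)); }      // min
      @Test public void nominalAccepted()      { assertTrue(validator.isValid(500)); }    // nominal
      @Test public void maximumAccepted()      { assertTrue(validator.isValid(999)); }    // max
      @Test public void aboveMaximumRejected() { assertFalse(validator.isValid(1000)); }  // max+
  }

  class Validator { boolean isValid(int n) { return n >= 1 && n <= 999; } }  // hypothetical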

Software testing, 2015 115(504)


Ohjelmistotekniikka

Boundary value analysis 3/3


• For example, for Boolean or logical values it is probably not
worthwhile or even possible to do a boundary value analysis
• An example of the importance of physical quantities:
– In June 1992 Phoenix International Airport was closed because
the pilots could not perform some adjustments; their instruments
accepted the highest possible temperature to be 120 °F while
temperature was 122 °F (≈ 50 °C)
• Also the assumption of independence is important; if it cannot
be made, the results may be poor
• Testing the boundaries of a variable type’s entire value range
(e.g. the greatest value possible for an integer variable) is
usually not sensible

Software testing, 2015 116(504)


Ohjelmistotekniikka
Taking advantage of function results and
value ranges
• The use of these methods shouldn’t be restricted to inputs
• Errors can also be found by applying them to outputs or
internal variables
• For example:
– If we know that the result should be in range 1..100, test with
inputs that should produce the results 1 and 100
– If a loop can be executed from zero to ten times, test it with
inputs that should cause it to be executed zero and ten times
– Test with inputs that should cause error messages

Software testing, 2015 117(504)


Ohjelmistotekniikka
Worst case testing
• Maybe we should test all the possible combinations of
boundary values?
• When a Cartesian product is taken from the boundary values
of all parameters (min-, min, nominal, max, max+), we get 5^n
test cases
• The test cases formed by basic boundary value analysis are
of course a subset of this
• Due to the large number of test cases worst case testing is
only worthwhile when high reliability is required
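A small sketch of how the number of worst case test cases grows as 5^n when each of n parameters gets the five boundary values:

  public class WorstCaseCount {
      public static void main(String[] args) {
          int valuesPerParameter = 5;               // min-, min, nominal, max, max+
          for (int n = 1; n <= 6; n++) {
              long testCases = (long) Math.pow(valuesPerParameter, n);
              System.out.println(n + " parameters -> " + testCases + " test cases");
          }
      }
  }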

Software testing, 2015 118(504)


Ohjelmistotekniikka
4.7 Testing combinations 1/7
• If all the possible combinations of equivalence classes are to
be tested, the number of test cases can easily grow too large
• Another option would be to test so that each class will be
tested only once
– Unfortunately this is not very effective for finding errors
• Between these two extremes is a practical alternative: instead
of trying to cover all possible combinations, pairs or triplets
formed from all the classes are to be covered

Software testing, 2015 119(504)


Ohjelmistotekniikka
Testing combinations 2/7
• An example of testing an imaginary WWW page in different
environments:
– Languages: Finnish, English, French, Spanish
– Browsers: IE, Firefox, Chrome, Opera
– Font sizes: small, medium, huge
– Display size: small_laptop, family_desktop, huge
• (Details have been abstracted here, since they change every
year)

Software testing, 2015 120(504)


Ohjelmistotekniikka
Testing combinations 3/7
• If we want to test all possible combinations, we need 144 (4 *
4 * 3 * 3) test cases
– If for example the readability of the text on the screen has to be
evaluated by a human, this is way too much
• If each choice is to be tried once, we need only four test
cases
– Even with carefully selected test cases, testing would be
superficial
• If new parameters were added to the system, e.g. connection
bandwidth, the number of combinations would grow
exponentially.

Software testing, 2015 121(504)


Ohjelmistotekniikka
Testing combinations 4/7
• The golden mean: cover all possible pairs (pairwise testing)
– When testing all possible combinations, a test case is needed for
each combination
– Now, for example, test case {Finnish, IE, small_font, small_laptop}
covers more than one pair: (Finnish, IE), (IE, small_laptop) etc.
– In the example all pairs can be covered with 19 test cases listed
on the next slide
– The listing has been created with the AllPairs program
(http://www.satisfice.com/tools/pairs.zip)
• Character ”~” means that the choice has no effect on the
pairwise coverage
• The smallest solution possible would contain 16 test cases,
so AllPairs gets fairly close

Software testing, 2015 122(504)


Ohjelmistotekniikka
Testing combinations 5/7

Software testing, 2015 123(504)


Ohjelmistotekniikka
Testing combinations 6/7
• Instead of covering pairs one can cover triplets, in this case
the overall coverage will be improved and the number of test
cases usually remains at a rational level
• Because the manual generation of all pairs or triplets can be
very hard work, several tools and algorithms have been
developed
• The Pairwise Testing web page focusing on the topic has a
Tools page listing many tools besides the AllPairs used
above.
– http://pairwise.org/tools.asp

Software testing, 2015 124(504)


Ohjelmistotekniikka
Testing combinations 7/7
• When an important combination is missing from a set of test
cases that is 100% pair-wise covered, it should be added
there even if it does not increase the pair-wise coverage
• On the other hand, when there are limitations for some
combinations, like selection of colour ”monochrome” is legal
only if the size of the display is ”PDA”, all pairs can be
generated first and illegal ones discarded afterwards
– Because a discarded test case might also have been covering
legal pairs, new test cases may have to be added to cover these
pairs

Software testing, 2015 125(504)


Ohjelmistotekniikka
4.8 Fuzz testing 1/3
• Create errors in files used by the program and see how it
handles them.
– Errors must be handled in a controlled manner, no crashes.
– Errors can be created in different ways – random corruption
("dumb fuzzing") or e.g. by generating different XML files.
• Used e.g. in Microsoft (See MS book The Security
Development Lifecycle)
• A well-known Finnish fuzzing tool is Radamsa
(https://www.ee.oulu.fi/research/ouspg/Radamsa)
– Open source.

Software testing, 2015 126(504)


Ohjelmistotekniikka
Fuzz testing 2/3
• According to MS dumb fuzzing techniques include:
– Making files shorter than usual.
– Filling the entire file with random data.
– Filling parts of the file with random data.
– Finding 0 value bytes and replacing them with non-zeros.
– Switching numbers to negatives. Setting numbers to zero.
Setting numbers to value 2^N +/- 1.
– Swapping consecutive bytes.
– Setting or clearing the top bits of numbers. Doing OR/XOR
changes to bits.
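A minimal ”dumb fuzzing” sketch along the lines above: read a file, corrupt a few random bytes and write the result out so it can be fed to the program under test. The file names are purely illustrative.

  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.util.Random;

  public class SimpleFuzzer {
      public static void main(String[] args) throws Exception {
          byte[] data = Files.readAllBytes(Paths.get("input.png"));   // illustrative path
          Random random = new Random();
          for (int i = 0; i < 10 && data.length > 0; i++) {
              int position = random.nextInt(data.length);
              data[position] = (byte) random.nextInt(256);            // fill with random data
          }
          Files.write(Paths.get("fuzzed_input.png"), data);
      }
  }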

Software testing, 2015 127(504)


Ohjelmistotekniikka
Fuzz testing 3/3
• For image files e.g.:
– Faulty colour depth information, changes in header information,
setting width and height to zero etc…
• For web traffic:
– Faulty packets. Faulty HTTP headers. Etc…
– Changing packets on the fly with a proxy.

Software testing, 2015 128(504)


Ohjelmistotekniikka
4.9 Low level integration testing
• Integration testing can be performed on
many levels. We will first examine
”ordinary” low level integration testing
and later system integration testing.
• After unit testing units are integrated into
larger wholes.
• Integration testing tests the interfaces
and interactions of the units.
• Test automation is used where possible.

Software testing, 2015 129(504)


Ohjelmistotekniikka
Most of integration happens in the
developer’s workstation
• In modern software development the developers have the
entire program available through version control.
– In the state in which others have submitted their work into the
whole.
• Own code is developed within this whole.
• The testing performed by the developer is focused on their
own work, but at the same time they check that the entire
program compiles and at least can be launched.
• So the developer personally integrates their own code into the
whole and uses unit tests to check that it works.

Software testing, 2015 130(504)


Ohjelmistotekniikka
Why is a special integration testing
needed 1/3
• Combine the separately working code everyone has produced
into a common build.
• Running unit tests and other tests gives an idea of the state of
testing of the whole and the maturity of individual components.
– The information can be shared in intranet, web or on the wall in a
”radiator”.
• Developers must be allowed to focus on their own work and
run quick tests on it.
• A special commitment can be made to test the whole and to
design tests creatively.

Software testing, 2015 131(504)


Ohjelmistotekniikka
Why is a special integration testing
needed 2/3
• Many kinds of tests can be executed
– Unit tests again in the integration environment
– Replace stubs and libraries with others.
– Run longer tests, including through the UI.
– Run tests required by the process used (smoke tests before
system testing, tests required by safety standards)
– Tests required to be executed by someone other than the
developer.
• Other things can also be done
– Time-consuming code analysis (static analysis, architecture
inspection, check for use of forbidden APIs, complexity
measurements).

Software testing, 2015 132(504)


Ohjelmistotekniikka
Why is a special integration testing
needed 3/3
• Integration testing measures project progress and product
maturity
• Based on test results, a build can be declared stable enough
for wider use
– In other projects, client tests, etc…

Software testing, 2015 133(504)


Ohjelmistotekniikka
Integration rhythm
• Historically integration has been done for example once a
week. Developers send in their work and someone tries to put
it all together.
– Even before the internet… in the 1990s.
• A daily rhythm (”nightly build”) was reached in the early 2000s.
• At that time the possibility to perform integration all the time
was considered.
– It’s easiest in small pieces.
– Problems are caught immediately.
– Tests can be kept running continuously.
– Thus began the time of continuous integration – CI.

Software testing, 2015 134(504)


Ohjelmistotekniikka
Continuous integration (CI) 1/2
• When a developer produces a small working piece of code (unit
tested), it is ”immediately” integrated into the whole.
• That is done on a server which notices new code in version control,
makes the build and runs integration tests – and whatever other
tests are desired.
• Each developer has access to all code through version control, so
”pre-integration” is commonly used and makes integration on the
server easy.
• Benefits to quality:
– Changes made in small pieces are clear: if something doesn’t work, the
problem is easy to fix.
– The program is always mostly stable and working.
– Developers get fast feedback from integration testing.
– Work done by different people doesn’t get out of synch.

Software testing, 2015 135(504)


Ohjelmistotekniikka
Continuous integration (CI) 2/2
• Benefits to testing:
– Continuous testing. Fast feedback.
– All kinds of automated tests can be executed.
– The development of the whole can be monitored – which
components are growing, which ones have errors, what is test
coverage.
• Note:
– Since testing must be fast, different test suites are needed.
– The slowest tests may be run once a day or even more rarely.
– Of course multiple servers can be used for running tests.
– CI is no magic: integration itself tells nothing about quality. The tests
must also be good. Repeating unit tests on the server is not
enough.

Software testing, 2015 136(504)


Ohjelmistotekniikka
Other methods of integration and
integration testing
• In the following a few integration mechanisms are covered.
• They are simply about how the program is assembled, and
that usually happens based on ”common sense” and specific
principles.
• In agile development the integration style must quickly
produce a working program that can be tried out.
• Even if continuous integration is favoured during program
development, there are still situations where larger systems
must be combined.

Software testing, 2015 137(504)


Ohjelmistotekniikka
Big bang integration
• “In the beginning there was nothing and suddenly everything
exploded”
• Idea: First all units are unit tested separately and then they
are integrated into a working whole
– Common in small programs
– Problem 1: during unit testing stubs and drivers have to be
implemented for all units
– Problem 2: When a failure is found during testing, it is very hard
to find the error causing it from so many units
– Problem 3: After the error has been fixed, tests have to be re-
executed to make sure that the fix has not broken anything else

Software testing, 2015 138(504)


Ohjelmistotekniikka
Top-down incremental integration

• Also a strategy for developing the program


• This is a very typical method: Consider a program whose
structure is based on some framework. First we create the
basic skeleton (”control unit”) and then start adding pieces to
it.
– At first much is missing. For example the print or search features
are empty routines for a long time.
• Advantages to testing
– An executable program is available the entire time
– The program is seen from a user’s point of view
– User stories and use cases can be simulated
– The whole can be evaluated – are we making the right program

Software testing, 2015 139(504)


Ohjelmistotekniikka
Bottom-up incremental integration 1/2

• Unit testing is started from the lowest level units and drivers
are implemented for these units
• In the next phase, these drivers are replaced with the
corresponding units as they become available
• These units are tested next
– Next, implement the necessary drivers again
– The lowest level units remove the need for stubs
• Emphasizes low-level functionality
• No prototype of the system is available until the end of the
integration testing

Software testing, 2015 140(504)


Ohjelmistotekniikka
Bottom-up incremental integration 2/2
• The greatest benefit: no need for stubs
– Stubs are usually harder to make than drivers
• Additional advantages:
– In bottom-up integration the testing of clusters can be easily
divided among many teams
– The errors made in the design and implementation of low level
units can be found quickly
• For example, the performance problems hidden in these
units can be detected more easily
• The big downside: the whole program won’t be usable in a
long time…

Software testing, 2015 141(504)


Ohjelmistotekniikka
On integration order
• The order of testing may depend on the order of integration
– What can be tested!
• It’s best to first integrate those units that have the greatest
risks or otherwise happen to be on the critical path
– The greater the risk, the earlier the testing must be done
– The critical path may be related to a feature that must work as
soon as possible or technology whose usefulness must be
evaluated as soon as possible
• Testing of new things should (also) be done as separate proof
of concept testing – try out the new (e.g.) database paradigm
separately from the code.

Software testing, 2015 142(504)


Ohjelmistotekniikka
Rules of thumb for integration testing
• The amount of scaffolding code should be minimized
• Only a small number of units should be integrated at a time
– To show what is causing the problems…
– That’s why continuous integration is great
• Integration is not the same thing as integration testing
– Test quality must be assured
• Testing has to be fast to facilitate quick feedback to
developers
• There can be different test suites – small and fast (immediate
feedback to developers), big and slow running through the
night

Software testing, 2015 143(504)


Ohjelmistotekniikka
4.10 Continuous Delivery 1/3

• A method for delivering the program through a complete
workflow to the client even after minor changes:
– Into production environment, user’s workstation or mobile
device.
• Although the word is ”continuous”, the rhythm can be chosen
according to the client’s needs and what is being updated:
bug fixes or new functionality that affects basic use.
• What counts is the ability to deliver at any time.
• Favours Kanban style development process.
• An excellent book: Humble & Farley: Continuous Delivery.
• Here we will only consider a few things from the testing point
of view.
Software testing, 2015 144(504)
Ohjelmistotekniikka
Continuous Delivery 2/3

• Success is based on good logistics (including continuous
integration), configuration management and good testing.
• Logistics and test environments for different platforms must
be reliable to avert nasty surprises.
• Test automation is usually emphasized, but also possible with
manual testing.
• Manual testing is important in any case.
– Testing of new functionality, acceptance testing by users (if
needed), and suitably broad, often exploratory, regression testing.
– Nothing should ever be delivered to the users ”blind”.

Software testing, 2015 145(504)


Ohjelmistotekniikka
Continuous Delivery 3/3

• Can be used tactically to reduce testing because bad changes
can be pulled back quickly.
• Helps do A/B testing, because the mechanisms allow
configurations available in different environments to be
changed easily.
– In A/B testing different users are provided different versions of
the system and their use is compared. Used for example in
Google and all large social media sites.
• In services with a large user base some functionality can first
be ”market tested” on some servers and then spread
elsewhere.

Software testing, 2015 146(504)


Ohjelmistotekniikka
4.11 System testing
• Test the functionality of the whole system
• Generally a task for test engineer
• Could be a task of a separate testing team
• Errors found are expensive to fix
• System testing takes much more time compared to the other testing
phases
• Executing all the test cases can consume a great amount of
resources
– It is natural to start with a smoke test
• If system testing is done only at the end of the project, problems can
be expected
– Especially in an agile process, testing is done continuously also at
system level

Software testing, 2015 147(504)


Ohjelmistotekniikka
Diverse testing at system level 1/2
• In the system test phase, the whole system is available, so it
makes sense to test the following things:
– Client requirements
– Usability
– Security
– Performance
– Load tolerance
– Special properties of the design
– The states of the system
– Capacity
– Concurrency issues

Software testing, 2015 148(504)


Ohjelmistotekniikka
Diverse testing at system level 2/2
– Configurations of software and hardware
– Interfaces and the interoperability with other systems (cf. system
integration testing) – end to end use cases
– Installation
– Localization
– Error recovery
– Reliability
– Usage of resources
– Scalability
• System testing requires much more creativity than lower level
testing
– It’s impossible to define general methods with any precision

Software testing, 2015 149(504)


Ohjelmistotekniikka
The phases of systematic system
testing
• The phases of system testing (according to Kit: Software Testing in
the Real World, 1995):
– Note! Here mostly in the context of functional testing
– Requirements analysis and division into manageable parts
– For every component a listing of detailed requirements
– For each significant requirement the definition of corresponding inputs
and outputs
– The creation of a requirements coverage matrix
• Maps requirements and tests – is there always a correspondence
• There are applications to create one automatically
– The execution of test cases and measuring of requirements coverage
– The definition of new test cases in order to reach the coverage required

Software testing, 2015 150(504)


Ohjelmistotekniikka
Requirements coverage
• Requirements coverage matrix
– A table that describes all relations between (different types of)
test cases and requirements
– Whenever possible, it is worthwhile to test many requirements
with one (legal) test case
– From the matrix it is easy to see what is not (yet) tested
– Example: test cases TC1–TC4 against requirements R1–R4, with
cells marked OK, FAIL or TBD; an empty cell means the test has
not been executed yet for that requirement

Software testing, 2015 151(504)


Ohjelmistotekniikka
Importance of quick error reporting
• In the system testing phase it should be ensured that
developers get error reports quickly
– It is not as obvious as in the earlier phases
• Organizational limits may slow down communication
– Going through project management is not necessarily a good
idea
– Quick reporting might stop the error from spreading to other
places
– In agile development, testers often work in development teams –
immediate feedback

Software testing, 2015 152(504)


Ohjelmistotekniikka
On change management in large
systems 1/2
• Because system testers do not fix the errors found, it is very
important to take care of change management
– If every error is fixed right away, so that the version to be tested
changes all the time, the system testing cannot go forward
– On the other hand, the quick fixing of some errors can be
important for finding more errors
• Usually the prioritizing and scheduling of defect fixing is
decided by a separate party (Change/Configuration Control
Board)
– Might include parties other than testers and developers: product
manager, project manager, quality manager, database
maintainer, clients, end users, client support personnel and
marketing personnel
Software testing, 2015 153(504)
Ohjelmistotekniikka
On change management in large
systems 2/2
• This all applies to large systems where it’s important for the
test configuration to remain the same through the entire round
of testing
• An important case here is acceptance testing, where a round
of testing is used to decide whether something can be
deployed or moved into the next phase in the testing process

Software testing, 2015 154(504)


Ohjelmistotekniikka
Top 7 points for system testing
• Find out how the software is used and what is most important in its
use
– Who uses it, for what purpose, what is the objective of its use
• Identify all the essential characteristics of the software
– Functionality, usability, security
– If you don’t know how to test them yourself, get someone else to do it
• Make the order of importance of features and functionality clear
– How often used, risks associated with functions
• Start testing with the most important things
• Use a variety of creative styles of testing
– Plan-based and agility support each other
– Automation has its place
• Create documents that help others
• Error reports should "sell" the bugs to developers and get them fixed
Software testing, 2015 155(504)
Ohjelmistotekniikka
4.12 System integration testing 1/2
• In large information system
projects compatibility with other
systems becomes an issue
• Since the parts of the entire
system are often delivered by
different parties, it’s important to
ensure that the component
systems ”talk” to each other
seamlessly
• Thus it makes sense to invest in
integration and related testing

Software testing, 2015 156(504)


Ohjelmistotekniikka
4.12 System integration testing 2/2
• System integration testing is by its nature technical;
functionality and performance are only ensured once
everything works ”under the hood”
• Should be started as early as possible, a late start leads to
trouble at the end of the project
– Drivers and stubs have to be used to replace incomplete and
missing parts

Software testing, 2015 157(504)


Ohjelmistotekniikka
General description

• Purpose
– Ensure that the combined system works well and all component
systems interoperate.
• Based on a test plan.
• Test environment matches the end use environment of the system.
• Test environment combines systems produced by different parties.
• One of the most critical parts of testing in information system
projects.
– Many large projects are in trouble because the systems made by
different parties don’t work together.
• Requires cooperation between all parties.
• Done by a separate team.
Software testing, 2015 158(504)
Ohjelmistotekniikka
Approach

• Test the interfaces between the systems.


• Ensure the functionality of the most important features.
– Business processes through the entire system.
• Test types:
– Functional testing.
– Performance testing.
• Includes inspection.
– Databases, files used to transfer data between systems etc.
should be inspected manually in addition to testing.
• May be part of client’s acceptance testing.
• Tests can be automated if the interfaces are stable.

Software testing, 2015 159(504)


Ohjelmistotekniikka
Actors

• Separate integration team.


• May be a party that doesn’t implement any component
systems.
• May be a party performing acceptance testing for client.
• Supported by a change board and error board, which handle
questions and problems related to integration.
– Whose fault is it when something doesn’t work… sometimes
everyone does everything right, yet the combined system still
doesn’t work!

Software testing, 2015 160(504)


Ohjelmistotekniikka
Planning

• Integration plan at the beginning of the project


– Linked to the release plans of the component systems
– Review of the testability of the systems
– Timing essential – at what pace are new versions of systems
produced
– Automated continuous integration point of view is also possible
• Agreement on end-to-end scenarios to be tested at each phase
– For example handling an order from beginning to end – test how
it proceeds through different systems
• Agreement on responsibilities
• Reservation of resources important
– Environments
– Testing resources
Software testing, 2015 161(504)
Ohjelmistotekniikka
Implementation

• Setting up the environment


• Acquisition of test data / databases
• Configuring the system for different component systems
• Designing stubs / simulators / mock objects for missing systems so
that testing can begin before everything has been implemented
• Start testing as soon as there is something to integrate
• Automate tests for regression testing
• Continue testing through the entire project

Software testing, 2015 162(504)


Ohjelmistotekniikka
Handling the whole

• Integration point of view required in the project


• When errors are detected, a responsible party needs to be found
– Each system may work ”correctly” on its own
– May be difficult…
• An error board to interpret the situation is desirable
– Includes representatives of different parties
• Test environment requires a maintainer
• Performer of system integration
– May be a party that doesn’t implement systems to be integrated

Software testing, 2015 163(504)


Ohjelmistotekniikka
Measurement

• The goal is to follow the progression of the integration


– Business case test coverage %
– Passed tests – what works
• Error count
• Trends in error count
• Errors in different pairs of component systems
– Which ones don’t work together

Software testing, 2015 164(504)


Ohjelmistotekniikka
Commonly found problems 1/2

• Incorrectly implemented interfaces


– Program interfaces
– Wrong order of parameters
• Incompletely implemented protocols
• Wrong order of events in asynchronous systems
• Different data types
– Representation of numbers
– Character sets of text data
• Faulty data types in the implementation

Software testing, 2015 165(504)


Ohjelmistotekniikka
Commonly found problems 2/2

• Errors in documentation
• Erroneous assumptions made in absence of documentation
• Timeouts
– One system doesn’t wait until another finishes a task

Software testing, 2015 166(504)


Ohjelmistotekniikka
Factors in successful system
integration testing 1/2
• All parties have a common goal
• Stable, realistic test environment
– Manageability and configurability of the environment – opening
ports mustn’t take a month
• Parties aim for integration from the beginning
• Openness in implementing interfaces (this isn’t obvious either)
• Use of open standards
• Open source programs that allow other parties to familiarize
themselves with the solutions used by others

Software testing, 2015 167(504)


Ohjelmistotekniikka
Factors in successful system
integration testing 2/2
• Integration is measured in a way that follows the progress of
the project
• Investment in communication
• Good change management
• Integration begins early – it requires practice; won’t work
immediately

Software testing, 2015 168(504)


Ohjelmistotekniikka
Pitfalls of system integration

• Left to the end of the project


– Won’t work immediately
• No resources reserved for integration
• Interface documentation is not delivered to other parties
• Problems with environments
– Configuration
– Opening ports
– Access rights

Software testing, 2015 169(504)


Ohjelmistotekniikka
4.14 Testing in agile development

Agile development uses similar levels of testing as the V-model,
but with a slightly different rhythm and style. How is it done in
sprint-based processes?

Software testing, 2015 170(504)


Ohjelmistotekniikka
Rhythm of the sprints
• Agile development processes are based on short sprints
– They produce a set of new functionality / features
– During the sprint, the features are tested to be in deliverable
condition
– There are few detailed plans
– Testing always focuses first on one feature even at system level
– Testers are usually part of the team and start testing new
features as soon as developers have them ready for testing
• Once they have been unit tested and integrated

Software testing, 2015 171(504)


Ohjelmistotekniikka
Agile testing process in principle
• The diagram spans the phases Pre-game, Sprint 1, Sprint 2…N and
Post-game, with three testing activities running across them:
– Unit and integration testing of new features
– Feature testing (system level), done in the development team
– Overall system testing, done in the team or distributed, depending
on the project

Software testing, 2015 172(504)


Ohjelmistotekniikka
Testing of one new feature 1/2
• The features to be implemented are listed in sprint backlog
• Testing tasks listed too – as part of the feature or separately
• Testers can prepare testing, even if only based on a light user
story – communication within the team is key
• When implementation takes shape, the implementation of
testing is refined
• Testing is at least at first exploratory, but the systematic
techniques (with agile planning) should be used too

Software testing, 2015 173(504)


Ohjelmistotekniikka
Testing of one new feature 2/2
• Simple tests are often automated so they can easily be
repeated later
• Client's representative may participate in rough acceptance
testing of the feature
• Feature / story is "done" only after it has passed testing!

Software testing, 2015 174(504)


Ohjelmistotekniikka
Testing of one feature in more detail
• The workflow for one feature: the sprint backlog gives the features
(Feature 1, Feature 2…N) and their schedule (order); the developer
designs and implements the feature and does unit and integration
testing; in parallel the tester does test planning, testing and
verification; errors found are fixed, and the result is a new program
version.

Software testing, 2015 175(504)


Ohjelmistotekniikka
Agile Testing Quadrants 1/2

• All testing should have a purpose.


• Different test types are required in agile development, and they
have been analysed by Brian Marick and thoroughly
documented by Crispin & Gregory in their book Agile Testing
[Crispin&Gregory 09].
• Agile Testing Quadrants reminds that specific testing may:
– Primarily support the development team or be focused on critique
of the program.
– May be oriented towards technology or business – the purpose of
the product.
• The diagram has become popular. It shouldn’t be accepted
without question, but suitably applied to each situation.

Software testing, 2015 176(504)


Ohjelmistotekniikka
Agile Testing Quadrants 2/2

• Q1 – business-oriented, supports the team (automated and manual):
functional tests, examples, story tests, prototypes, simulations
• Q2 – business-oriented, critiques the product (manual):
exploratory testing, scenarios, usability testing, acceptance
testing by users, alpha/beta testing
• Q3 – technology-oriented, supports the team (automated):
unit tests, component tests
• Q4 – technology-oriented, critiques the product (tools):
performance and load tests, security testing, other
non-functional testing

Software testing, 2015 177(504)


Ohjelmistotekniikka
Do sprints have enough time for
testing?
• The idea is to include into sprint just the amount of new
functionality that can be tested well
• Supported by good unit and integration testing
• Testing is always started as soon as there is something to test
– for features, not entire release
• If there are long period tests, they can in practice be spread
into the next sprint
– For example time-consuming reliability and compatibility tests
– Validation effort required for release onto the market (that doesn’t need to be
done in every sprint)

Software testing, 2015 178(504)


Ohjelmistotekniikka
4.15 Acceptance testing
• Here we look into acceptance testing in the context of
information system acquisition
• Acceptance testing can be used to determine whether the
product conforms to the agreements and can be deployed
– Based on client’s requirements
• The client is responsible for the testing
• According to V-model it is the first phase in test design and
the last phase in test execution
• Note! The ATDD style used in agile development is nowhere
near sufficient for real acceptance testing.
– It is mostly just a "smoke test" of functionality with client's
representative

Software testing, 2015 179(504)


Ohjelmistotekniikka
Comprehensive approach
• In acceptance testing the entire finished product is tested
– End users of the product should be used as testers
– The test environment should be as close as possible to the real
end user environment
– Large companies can have many dedicated testing
environments: functionality testing, performance testing, system
integration testing

Software testing, 2015 180(504)


Ohjelmistotekniikka
Things to be tested
• All features: functionality, usability, performance, security
• What has been specified, what has been agreed
• Reclamations (complaints)
• Things that have not been specified but that can be expected
in the product on the basis of the industry's practices

Software testing, 2015 181(504)


Ohjelmistotekniikka
A hurry to find errors while warranty
lasts
• It is important to find deficiencies during the warranty period
so that the supplier fixes them at the same price
• Otherwise repairs must be contracted at a high price
• Without testing there is a risk that the system is not mature
enough, but full of bugs that people need to fight all the time
• It is therefore important to test properly

Software testing, 2015 182(504)


Ohjelmistotekniikka
Doesn't the supplier test it?
• No, they promise more than they deliver
• And they lack the end user's practical viewpoint
• The supplier has reasons to favour weak acceptance testing: thorough
testing will find plenty of things to fix, billing will be delayed, and there
will be fewer problems to fix after the warranty period…

Software testing, 2015 183(504)


Ohjelmistotekniikka
Test environments
• Differences between the test environment and the end user
environment are issues that are not tested
– Are real databases, or some other related systems, used or are
they simulated somehow?
– Is test data generated or real?
– Is there other software in the end user environment?
– Is the hardware configuration realistic?
• There can be many different kinds of end user environments
– Creation of profiles to correspond to the most common
environments can be beneficial

Software testing, 2015 184(504)


Ohjelmistotekniikka
Organization
• The client's own project
– Own organization, own point of view, own responsibility
• Someone is responsible for testing and leads it
– Possibly a hired consultant
• Entire organization involved – business, ICT, administration
• Users must be involved in testing
• A testing service house can be subcontracted for the purpose
– The whole project or just e.g. performance testing
• Duration varies
– For small systems it can be a short test that the system is
suitable for production
– For large systems it can take months and be a large project
Software testing, 2015 185(504)
Ohjelmistotekniikka
Users’ participation important
• The users understand
– Requirements
– The risks related to their own business
• The users can
– Supply realistic test data
– Supply use cases
– Plan tests and do testing
– Inspect and review
• Test reports
• Other documents created in the project
• But they cannot do good testing without help

Software testing, 2015 186(504)


Ohjelmistotekniikka
4.16 Agile acceptance testing: ATDD
• ATDD = Acceptance Test Driven Development
• In TDD, low level test cases drive the coding phase
• In ATDD, high level test cases drive the whole process
• How to define the ”Definition of done” from the perspective of
the client or end-user?
• ATDD is based on defining executable system level tests
before the corresponding implementation is even begun
• Ideally, the tests are created by the client or end-user
• When a test passes, the corresponding requirement is
considered to be ”Done”
• ATDD tools enable easy test creation in a form understood by
the client or the end-user
Software testing, 2015 187(504)
Ohjelmistotekniikka
ATDD process
• Test cases for a feature agreed upon with the client.
• Designed by tester or developer and usually lightly
automated.
• Tests added to backlog or as phases for features in Kanban
process (passing is required for the feature to be Done).
– Automated tests can be added to continuous integration and
thus won’t pass at first.
– Manual testing is naturally not done before implementation…
• As implementation proceeds and errors are fixed the tests
start to pass.
• Things can be preliminarily considered to work in this
respect…
Software testing, 2015 188(504)
Ohjelmistotekniikka
Features of ATDD
• Benefits
– Less ambiguity in requirements, since the client or end-user has
approved the tests
– When a test passes we can move on to implement the next
requirement
• The tests define the scope of the project: if no unpassed tests
exist, there is nothing more to implement (as approved by the
client or end-user)
• Progress is visible for the management and the client
• However, the tools available for ATDD do not necessarily
support every domain
• …and not all clients are willing to do this!

Software testing, 2015 189(504)


Ohjelmistotekniikka
4.17 Alpha and Beta tests
• Done by users
• Alpha tests are done by the user at the supplier’s premises in
an environment that is as realistic as possible
– Development or test environment is not good enough
• Beta tests are done by the user at the user’s premises and
environment
• In both alpha and beta testing it must be remembered that ad-
hoc testing does not replace systematic testing using a
predefined test strategy and test cases
– Distributing an unfinished version to selected users is not
enough by itself
– Ad-hoc testing can also be useful, but only after systematic
acceptance testing
Software testing, 2015 190(504)
Ohjelmistotekniikka
4.18 Are all test levels and phases needed?

• Systematic division of testing into levels and phases has clear
benefits compared to an unplanned approach
• Yet in small projects going through all the test phases might
cause extra work
• When planning the levels and phases the following should be
taken into consideration
– Complexity of the system being developed
– Budget of the project
– Test organization

Software testing, 2015 191(504)


Ohjelmistotekniikka
Use of common sense is allowed
• The goal is not to maximize the number of phases but to
choose the right phases for each project
• The phases should not unnecessarily slow down deliveries or
slow down the finding of bugs
• Note! The same phase can be called differently
– Unit testing can also be called module testing, component
testing, developer testing, etc.
• Don’t let the terms fool you

Software testing, 2015 192(504)


Ohjelmistotekniikka
System tests can be executed during
integration
• Previously, when presenting integration testing, its relation to
unit testing was considered: does it need its own phase or
should it be a part of unit testing
• On the other hand, could system testing be combined with
integration testing?
– Craig and Jaskiel [Craig&Jaskiel 02] list attributes that are
common among their customer organizations in which combining
these two phases has been successful:
• Good product management
• Relatively large number of automated tests
• Interaction between testers and designers works well
– Automated system tests can be run during integration

Software testing, 2015 193(504)


Ohjelmistotekniikka
Acceptance testing along system testing
• What about combining acceptance testing with system
testing?
– Might be reasonable when end users participate in system
testing
– Tests should then be executed in an environment that is as close
as possible to the production environment, preferably in it

Software testing, 2015 194(504)


Ohjelmistotekniikka
4.19 When to move to the next level?
• Two essential questions when considering the different
phases
– How do the phases relate to each other?
• Clear separation helps when moving from one phase to
another
– How does one know when to move from one phase to another?
• Clear entry and exit criteria
• Different phases should not overlap, and there should not be
great gaps between them
– There’s no point in testing the same thing multiple times from the
same point of view. And everything needs to be tested
– Note! Most essential with manual testing.

Software testing, 2015 195(504)


Ohjelmistotekniikka
Problems caused by overlap
– Errors that were found (and fixed) earlier can be found and
handled again
– Errors found in unit and interface testing are much cheaper to fix
than the bugs found later on
• Try to find as much as possible at low levels
– When testing is transferred away from developers to a separate
testing team the price of errors gets higher
• Try to find as much as possible at low levels
– However:
• At different levels the environment often changes and new
layers are added to the program. So care must be taken in
considering what not to test.

Software testing, 2015 196(504)


Ohjelmistotekniikka
Entry and exit criteria 1/2
• Well defined and carefully applied entry and exit criteria for different
phases create common rules for advancing the project
• Some of the entry and exit criteria depend on the project, some can
be standardized within the organization
• Those working in the previous phase are interested in the next
phase as their efforts affect not only the reaching of the exit criteria
of their own phase but also reaching of the entry criteria of the next
• The significance of the criteria depends on the life cycle model
used in development
– Linear or iterated, agile?
• More generally this is matter of accepting a phase, which is reached
by iterating – fixing errors until the situation is good enough

Software testing, 2015 197(504)


Ohjelmistotekniikka
Entry and exit criteria 2/2
• The entry criteria of a phase should contain at least some of
the exit criteria of the previous phase
– Additionally, they may include criteria regarding for example the
creation of the test environment, test tools and even getting the
work force for testing
• When there is a danger that the version to be tested is
untestable, the entry criteria should include a requirement that
a smoke test has to be passed

Software testing, 2015 198(504)


Ohjelmistotekniikka
When can software be advanced in the
process? 1/4
• From unit testing to integration testing:
– Unit tests are passed
– Or errors don’t break anything that already works (the new code
may not make the program worse or cause trouble to others)
– Sometimes: sufficient test coverage, passed inspections, checks
with tools, documentation
• From integration testing to system testing:
– The build works reliably enough – can be tested
– The pass rate of tests is sufficiently high
– Sufficient test coverage
– Existing errors are documented (lists of known errors)
– Smoke tests defined by system testers are passed

Software testing, 2015 199(504)


Ohjelmistotekniikka
When can software be advanced in the
process? 2/4
• Exit criteria for system testing – when may the software be
transferred to client for acceptance testing or packaged as a
product
– Sufficient requirements coverage – all parts have been covered
in testing
– Sufficiently high pass rate of tests (note the danger involved in
these kinds of metrics…)
– No critical errors
– Memory leak, load and security tests are passed with the chosen
criteria
– The testing team feels that the build can be considered
acceptable!

Software testing, 2015 200(504)


Ohjelmistotekniikka
When can software be advanced in the
process? 3/4
• An example of exit criteria for acceptance testing (according
to [Craig&Jaskiel 02]) – when is the program good enough for
production:
– There must not be medium or more serious errors open
– No feature may have more than two open errors and there may
not be more than 50 open errors altogether
– At least one test case for each requirement must have been
passed (a dangerous type of requirement…)
– Test cases TT23, TT25 and TT38-52 must have been passed
– Eight out of ten skilled banking clerks can open an account
within ten minutes using the on-line instructions (usability testing)
– The system can open 1000 accounts in an hour (performance
testing)
Software testing, 2015 201(504)
Ohjelmistotekniikka
When can software be advanced in the
process? 4/4
– The transition from one view to another shall take no more than
one second on average when there are 100 users using the
system (performance testing)
– The users must accept the results of the tests with their
signatures
• Note:
– Although the acceptance testing has been "passed" the system
still has errors which the supplier must correct
– The system may however be taken into use – a fixed version
will be delivered later
– Just looking at numbers is not sufficient for the decision
about taking the system into use – the users must accept
that it is good enough

Software testing, 2015 202(504)


Ohjelmistotekniikka
5. Exploratory testing

Exploratory testing is a method of testing that proceeds, according to
the tester’s observations, to the things that are expected to have the
most problems. Testing isn’t based on precise pre-made plans, but is
open to everything implemented in the product.

Software testing, 2015 203(504)


Ohjelmistotekniikka
5.1 Exploratory testing in nutshell
• Exploratory testing, ET
• This part is based on the slide set "Agile testing and testing in agile
software development" by Matti Vuori
http://www.mattivuori.net/julkaisuluettelo/liitteet/agile_testing.pdf
• Usually done at system testing level
• Explore the software by using it and observe it, try to learn and
understand it
• Recognize risky areas that need to be tested
• Test them immediately, without formal test cases
• Or design more precise tests for them and execute those
• Examine the results and continue iterating

Software testing, 2015 204(504)


Ohjelmistotekniikka
Warning
• There are no standards for exploratory testing and even
experts may have different approaches for it
– The paradigm is still developing
• In applying exploratory testing the requirements for testing
need to be analysed and many things considered:
– The entire testing process – what other kinds of testing is done,
what complements ET
– Organizational culture, including terminology
– Documentation requirements
– Tester competence – skilled testers required

Software testing, 2015 205(504)


Ohjelmistotekniikka
Exploratory testing: Why? 1/2
• It’s good to base testing on what is learned from the implementation
– Be perfectly realistic. Don’t trust documents but what has actually been
done. What features are actually included?
– Specifications are always incomplete and incorrect – don’t get trapped
by missing specifications
– (Of course one needs to know how the software should work)
• No time is spent on planning testing, so results are obtained faster.
– Quick start to testing
– Quick reaction to changes
– Quick feedback to developers
• Exploratory testing is suited for situations where requirements change
continually
– Plans would be always obsolete

Software testing, 2015 206(504)


Ohjelmistotekniikka
Exploratory testing: Why? 2/2
• Keeping one’s eyes open is better for finding errors than only considering
predesigned test cases
– Excessive planning creates expectations and closes eyes to observations
• One can recognize characteristics in the software that weren’t considered
during specification
• Programs are so complex that testing based on test cases is not enough
– Not all test cases can be defined; not enough time
• The level of technical risk in features is only revealed by examining the
program once implementation begins
• Enables learning about the system
• Fits users’ point of view
• Testing is different each time, so different errors can be found

Software testing, 2015 207(504)


Ohjelmistotekniikka
Exploratory testing: Problems
• Confirming the quality of testing is difficult
• Requires skills and guidance – of little use otherwise.
– If badly done, can be very bad…
• Formally verifying the testing is difficult
– Missing specs and reports
• The techniques used are poorly documented
• Often it is good not to restrict testing to one style –
exploratory testing is just one part of the whole

Software testing, 2015 208(504)


Ohjelmistotekniikka
5.2 Exploratory testing: Examples
• Examining the behaviour of the product
– How does the functionality just implemented work?
• Agile smoke testing
• Thorough testing of features
– In system testing, acceptance testing
• Agile regression testing (to complement systematic methods)
• Analysis of the causes behind the symptoms with more careful
exploration
– For example figuring out the reasons for load test results with
more precise tests and metrics
• The first phase of usability testing, where the main purpose is to
learn the new software concept
Software testing, 2015 209(504)
Ohjelmistotekniikka
Exploratory testing: Areas of
application
• In agile development
• In systematic development
• In client’s acceptance testing
– Some Finnish national bureaus have moved to mainly exploratory
testing

Software testing, 2015 210(504)


Ohjelmistotekniikka
5.3 Exploratory testing is based on
strategies and knowledge
• Understanding the software
• Observations – recognize symptoms of problems and parts of the
program that may contain errors
• Strategies – what mentality is used in searching for errors?
– E.g. ”breaking” the software
• Experience – what kinds of errors have there been before and
where?
• Testing is intellectual, challenging work
• Tester is a detective! Not a robot inputting test cases

Software testing, 2015 211(504)


Ohjelmistotekniikka
Understanding the software with
experimentation and observation
• Eyes open to everything
• Usage scenarios as framework
• Recognizing different elements of the software
• Recognizing events
– How does the program behave, how does it react
• Recognizing states
• Changes
– State changes
– Changes in data

Software testing, 2015 212(504)


Ohjelmistotekniikka
Making observations about behaviour
• What is familiar
• What is new or strange?
– What logic does it follow?
• Reactions of the program
– Speed of use
– Different sequences
– Different data
– The first time and the times after
• Traditional problems in comparable software and situations

Software testing, 2015 213(504)


Ohjelmistotekniikka
Testing mentality
• Everything is allowed
• We know that there are errors, they just need to be found
• Try to ”break” the program
– Be rough
– There’s always more bits available – let the program crash
• Take advantage of experience
• Follow your hunches

Software testing, 2015 214(504)


Ohjelmistotekniikka
5.4 Starting point for a session 1/2
• Purpose
– Learning, breaking the software, or something else?
• Understanding the software
– The purpose and use of the software
– The purpose and use of the new features
– Which things provide the most value to the client in this situation? How
do they work?
– So: Where are the greatest risks of something not working or
working wrong?

Software testing, 2015 215(504)


Ohjelmistotekniikka
Starting point for a session 2/2
• Understanding the cooperation between the software and the
user
– What usage scenarios are there or could there be?
– What is known about typical usage?
– What exceptions are imaginable?
– What about deliberate misuse?

Software testing, 2015 216(504)


Ohjelmistotekniikka
Frame of mind
• Exploratory testing is brain work and requires suitable
conditions
• No time pressure (even if only a specific period of time has
been allocated for testing – that’s just a frame)
• Realistic idea of bugs and willingness to see them anywhere
• Orientation to what is essential
• Willingness to change your mind with new observations

Software testing, 2015 217(504)


Ohjelmistotekniikka
The progress of a session
• Usage scenarios and use cases are a good starting point
• Models of activity for different users
• No precise guidance from descriptions – just a framework
– Strictly following them would be systematic testing…
• Follow observations, let experiences guide the testing

Software testing, 2015 218(504)


Ohjelmistotekniikka
5.5 Documenting the testing?
• Test logs are always needed
• Externalizing your thoughts even just by writing them down
improves thinking – and improves testing
• Automated logging application in the background supports
making notes
• On the next page is an example of a test log
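• As a complement to the example on the next page, a minimal sketch of the kind of
  lightweight logging helper meant above; the log file name and the notes are
  hypothetical examples:

    # A minimal sketch of a lightweight exploratory-testing session log:
    # timestamped free-form notes appended to a text file.
    # The file name and the example notes are hypothetical.
    from datetime import datetime

    LOG_FILE = "session_2015-08-18_order_entry.log"   # hypothetical session log

    def note(text: str) -> None:
        """Append a timestamped observation to the session log."""
        line = f"{datetime.now():%H:%M:%S}  {text}"
        with open(LOG_FILE, "a", encoding="utf-8") as log:
            log.write(line + "\n")

    note("Started session: order entry feature, charter = 'break the discount logic'")
    note("Discount field accepts 150 % without warning -> possible bug, report later")
    note("Saving an order with an empty customer name is slow (~8 s)")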

Software testing, 2015 219(504)


Ohjelmistotekniikka
Example test log

Software testing, 2015 220(504)


Ohjelmistotekniikka
5.6 Exploratory testing in practice 1/2
• The conventions of exploratory testing are definitely important
principles
• In the real software industry it's not possible to follow just a single set of
principles
• Exploratory testing has long been seen as an important part of even
a well planned testing project
• Overall good testing conventions include changing the conventions
when new information about the product is obtained and old
methods no longer reveal errors.

Software testing, 2015 221(504)


Ohjelmistotekniikka
Exploratory testing in practice 2/2
• It’s essential that exploration also results in new test cases that are
included in updated test specs
– New knowledge spread to all testers
– Reduces risks
• Terms are important
– ”Exploration” is easier to accept in systematic software development
than ”ad-hoc” activity
• To get the most out of this style of testing it needs to be accepted as
an important testing method and skilled testers need to be given
time to use it
• But good testing consists of many different methods and styles and
it’s not good to let one dominate – at least until it’s known to work
excellently
Software testing, 2015 222(504)
Ohjelmistotekniikka
5.7 Fast test planning for new features

• Level of interaction:
– User story or use case as a starting point
– Works directly as a starting point for exploratory testing
• Level of logic:
– Traditional testing techniques – equivalence partitioning,
boundary value analysis, decision trees, state machine based
testing
• Physical level:
– Monitoring events programmatically as part of exploratory testing

Software testing, 2015 223(504)


Ohjelmistotekniikka
5.8 Preparation
• In exploratory testing it is good to have an advance idea of what is
to be tested
– That way an immediate idea of what should be tested in new
functionality can be formed
– Lists of usual things to test in specific kinds of applications and
features (e.g. ”List of some usual things to test in an application”
available on course web page)
– All new systems are still to some extent repetition of the old ones
– Lists of typical errors in specific kinds of applications are useful

Software testing, 2015 224(504)


Ohjelmistotekniikka
5.9 Agile testing in non-agile project 1/4

• Every project has some agile characteristics


• It’s understood that:
– A project never goes exactly as was planned
– Testing produces new information that needs to be reacted to
• Even if testing is done in a carefully planned manner, it’s
important to use more agile methods in some parts

Software testing, 2015 225(504)


Ohjelmistotekniikka
Agile testing in non-agile project 2/4

• Test planning
– The W-model of testing, where the basic lines of testing are
planned at the beginning of the project but details only when the
product is getting ready for testing
• Dynamic guidance
– Although the product has a build plan, testing is fitted into the
completion of components in an agile manner
– The emphases of testing on different sections are changed
dynamically according to how many errors are found and the risk
levels of the sections
– The test set for each round is in the end a situational choice
– Test suites are updated according to observations received from
clients and stakeholders

Software testing, 2015 226(504)


Ohjelmistotekniikka
Agile testing in non-agile project 3/4

• Exploratory testing
– Testing always includes a more free-form part
– Familiarizing oneself with a new version is done with exploring; it
gives an idea of targets for systematic testing and is therefore
part of test planning
– When systematic testing no longer finds errors effectively,
emphasis is moved more to exploratory testing
– Systematic tests are also updated based on observations

Software testing, 2015 227(504)


Ohjelmistotekniikka
Agile testing in non-agile project 4/4

• The whole is therefore a combination of preplanning and agile reacting

[Diagram: a master test plan guides the whole; exploratory testing produces knowledge that feeds systematic testing and drives dynamic changes to test cases and emphases; further exploratory testing continues and complements the systematic testing]
Software testing, 2015 228(504)


Ohjelmistotekniikka
Quick verification of risky parts

• A central theme in agility is the ability to quickly focus on risky things


• If for example the project involves implementing a new protocol, its
functionality is not only verified along regular testing, but the goal is
also to perform the verification immediately by separating the
protocol implementation and testing it as soon as possible
– Thus it can be shown that the protocol is usable
– The risk related to it can be closed

Software testing, 2015 229(504)


Ohjelmistotekniikka
One model for exploratory testing in two
phases
1. Testing performed for new features on the first rounds of
testing, to learn about the behaviour of the product
2. Agile testing on later rounds, continuing the search for errors
once systematic testing no longer finds errors effectively
– Switch to a more ”dirty” test environment at the same time
• More systematic testing between these.

Software testing, 2015 230(504)


Ohjelmistotekniikka
a) Testing new functionality in the first
rounds of testing 1/2
• Application situation:
– New build has not been tested systematically; it’s not known how
it works and behaves
– It may have passed automated smoke tests
• Goals
– Learn the new functionality
– Explore and understand the application
– Observe its behaviour
– Observe potential / familiar problem spots
– Find errors

Software testing, 2015 231(504)


Ohjelmistotekniikka
a) Testing new functionality in the first
rounds of testing 2/2
• Tactics and methods
– Execute full use cases in ”explorative state of mind”
– Try everything at least once
– Work in clean test environment
– Make notes on failures, slowness, state of the system etc...
– Try out areas of the application that have had errors before
– Recognize and report errors
– Check later based on notes (paper and mental) that the test specs
cover all suspicious areas

Software testing, 2015 232(504)


Ohjelmistotekniikka
b) Exploratory testing on late rounds 1/4

• Application situation
– New build has been tested systematically
• Basic idea:
– Change in testing methods helps find errors
– Use exploratory testing to find new errors
– Apply techniques that have yet to be used in the project

Software testing, 2015 233(504)


Ohjelmistotekniikka
b) Exploratory testing on late rounds 2/4

• Tactics and methods


– Use exploratory techniques to find errors
– Cover areas that have had lots of errors before
– Test ”around” official test cases with the goal of ”breaking” the
software
– Execute full use cases
– Perform testing in a ”dirty” test environment – this helps
encounter problems
– Change testing according to behaviour; focus on areas that are
slow, have unexpected phenomena etc.

Software testing, 2015 234(504)


Ohjelmistotekniikka
b) Exploratory testing on late rounds 3/4

– Use imagination in complex parts of the software


– Set yourself into the state of mind of a novice (who doesn’t understand
anything) and an expert (who tries everything because they ”have the
right to do so”), or act like a child
– Do things at random. Do things simultaneously. Interrupt things (event
on mobile device, event on PC, interruption by a colleague) etc.
– Make mistakes!
– Report all errors
– Check the test specs and suggest new test cases

Software testing, 2015 235(504)


Ohjelmistotekniikka
b) Exploratory testing on late rounds 4/4

1. Choose a role in which you act (simulate some user group’s choice of
features to use)
2. Choose the test environment based on that
3. Choose use cases and prioritize them
4. Recognize priorities
– New features, changes
– Which areas have had errors before
– Feature priorities (for the product and chosen user demographic)
5. Perform an experimental round of testing
6. Perform a round of testing making mistakes
7. Perform a round of testing that stresses the system
8. Fit your actions to your observations and improvise!
Software testing, 2015 236(504)
Ohjelmistotekniikka
6. Risk analysis and prioritization of
tests
• Risk analysis is performed
– Unknowingly: If I only study for the exam on the previous night,
what are the odds that I’ll fail, and what are the consequences?
– Knowingly: When the key persons of the organization have to
travel on the same day to the same place, is it a good idea to put
them on the same flight?
• In developing safety-critical systems, risk analysis is
mandatory, elsewhere recommended
– There are no one-size-fits-all practices for risk analysis
– Every organisation should have a person who can plan efficient
and simple practices for it
• The methods of risk analysis vary by situation. This section
only covers a few methods suitable for directing testing. The risk
analysis for e.g. a project would be different.

Software testing, 2015 237(504)


Ohjelmistotekniikka
Different risks
• There are two main types of risk analyses (and risks)
– Project risks. What could go wrong in a project? Schedules,
availability of people, technical problems…
– Product risks. What in the product could cause problems to the
user?
• When we plan testing, the latter is most important
– Unfortunately, out of all possible tests, only a small fraction can
actually be documented and executed
– By using risk analysis we can prioritise tests: start with the ones
that test the most important things and test other things later if
there is time
– The results of the risk analysis can also be used in
communication with the upper management

Software testing, 2015 238(504)


Ohjelmistotekniikka
6.1 Basis in understanding usage
• Important to understand the use of the product
– How is the product used?
– What is the work process, what is important for it, in it?
– What is done the most often – what needs to be in good
condition
– What are the most critical things – what must never cause
problems
• For example, information system users have basic tasks that
are used all the time – they must work reliably
• And information security causes strict requirements for
reliability
• Risk identification in user actions & business level is important

Software testing, 2015 239(504)


Ohjelmistotekniikka
6.2 Flow of risk analysis 1/5
• Outline of a simple user-centric risk analysis process:
  1. Assemble a risk analysis session
  2. List the features and characteristics of the system
  3. Estimate their rate of use
  4. Estimate their importance to users
  5. Calculate risk-based priority
  6. Sort features by risk
  7. Evaluate results and change if needed
  8. Use ordering to prioritize tests
Software testing, 2015 240(504)


Ohjelmistotekniikka
Flow of risk analysis 2/5
• Risk analysis should be performed as
early as possible in the project
• Gathering of a brainstorming team:
including for example users, developers,
testers and people from marketing,
customer service and technical support
– The objective is to gather as much
information as possible about the
product and the related business
– A session of a few hours, with a script
and a leader

Software testing, 2015 241(504)


Ohjelmistotekniikka
Flow of risk analysis 3/5
• List the features and characteristics of the product
– In other words, everything to be tested
– Consider how often the most important user groups use the
features and how important the features are to them
– As rough numerical estimates 1…5
• Frequency of use e.g. 1-5 (rarely, occasionally, daily, several
times a day, continuously)
• Scale of importance, e.g. 1-5 (cosmetic flaw, slowdown, extra
work, serious damage, death)
– The product of these two numbers denotes the risk and priority
of testing
– Estimates are best rounded up

Software testing, 2015 242(504)


Ohjelmistotekniikka
Flow of risk analysis 4/5
• Analysis is usually listed in a table. An example from a
telephony application
Feature               Frequency of use (1-5)   Criticality of success (1-5)   Priority (multiply)
Making a call         5                        3                              15
Emergency call        1                        5                              5 (yet critical…)
Display caller name   5                        1                              5
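• A minimal sketch of the calculation; the feature names and estimates are the
  illustrative values from the table above:

    # Risk-based priority = frequency of use * criticality of success.
    # The features and estimates are the illustrative values from the table above.
    features = [
        ("Making a call", 5, 3),
        ("Emergency call", 1, 5),
        ("Display caller name", 5, 1),
    ]

    # Compute the priority and sort so that the riskiest features are tested first.
    prioritized = sorted(
        ((name, freq * crit) for name, freq, crit in features),
        key=lambda item: item[1],
        reverse=True,
    )

    for name, priority in prioritized:
        print(f"{name}: priority {priority}")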

Software testing, 2015 243(504)


Ohjelmistotekniikka
Flow of risk analysis 5/5
• The table and discussion are just a method for arriving at a
common understanding
• Based on results we can see which parts of the product need
an emphasis in testing and elsewhere
– When analysis is done before design, design can also be
affected
• Knowledge about users required as background knowledge –
testing isn’t possible without that either

Software testing, 2015 244(504)


Ohjelmistotekniikka
Risk analysis from implementation
point of view
• Above we examined features from the user point of view
• When we know the software development process, we can
consider its features and estimate how likely bugs are in the
project
• Consider for example for each main component
– The amount of new code – new is risky
– The experience of the implementing team – new team requires
more oversight
– New and still maturing technologies – more testing needed
• Often risk analysis focuses too much on this point of view,
without considering actual use of the features

Software testing, 2015 245(504)


Ohjelmistotekniikka
Risk analysis must be updated
• Just like other documents, the results of risk analysis require
maintenance
– E.g. when requirements change
• When a new version of the software is made, the results of
the previous risk analysis may be used
– Risks usually increase when a part is modified
– The probabilities of errors are more likely to change than their
effects

Software testing, 2015 246(504)


Ohjelmistotekniikka
Risk analyses for safety critical
systems
• Above we considered ”regular” software
• For safety critical systems risk analyses must be made with
greater precision
• The starting point is a systematic safety analysis or danger
estimate, which examines e.g. the flow of usage tasks or parts
of the system and estimates the risk involved in them
• In-depth examination of these practices isn’t possible here
• The basic principle is: The greater the risks, the more
systematically the risk analysis must be made and the
more official or ”cultural” requirements it must fulfil

Software testing, 2015 247(504)


Ohjelmistotekniikka
MoSCoW-method for prioritization
• Another prioritizing method is the MoSCoW method which
defines the following four priorities (in a descending priority
order):
– Must test
– Should test
– Could test
– Won’t test

• See “Successful Test Management: An Integral Approach”,


Iris Pinkster & Bob van de Burgt & Dennis Janssen & Erik van
Veenendaal, Springer, 2004. and
https://en.wikipedia.org/wiki/MoSCoW_Method

Software testing, 2015 248(504)


Ohjelmistotekniikka
An example of MoSCoW-analysis
If this requirement is not met...
  People die:
    All clients suffer    -> Must test
    One client suffers    -> Must test
  Clients lose money:
    All clients suffer    -> Must test
    One client suffers    -> Should test
  We lose money:
    No workaround         -> Should test
    A workaround exists   -> Could test
  No one loses money:
    No workaround         -> Could test
    A workaround exists   -> Won't test

Software testing, 2015 249(504)


Ohjelmistotekniikka
Measuring progress, one example
[Chart: the number of tested requirements over time, divided into Must test, Should test and Could test bands, comparing the estimated progress to the actual progress up to the delivery point]

Software testing, 2015 250(504)


Ohjelmistotekniikka
Fine-grained MoSCoW…
• In some projects more detailed prioritization may be needed
– Test suites are collections of test cases that inherit the MoSCoW
priority of the test suite
• For example, a test suite may contain the test cases for a
single requirement
– Inside a test suite test cases are prioritized with the scale: High,
Medium and Low
– Now a decision might be made that in the ”Should test” test suite
only those test cases that have a High priority are executed
– Note! The priorities of test cases between different test suites are
not necessarily comparable
• Is Should test Low more important than Could test High?
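• A minimal sketch of such a selection rule; the suite contents are hypothetical and
  the cut-off (all of "Must test", only High-priority cases from "Should test") is just
  the example decision described above:

    # A minimal sketch of selecting test cases by suite (MoSCoW) and case priority.
    # Suite contents, case names and the cut-off rule are illustrative assumptions.
    suites = {
        "Must test":   [("login works", "High"), ("logout works", "Low")],
        "Should test": [("export report", "High"), ("change theme", "Medium")],
        "Could test":  [("print preview", "High")],
    }

    def select_cases(suites):
        """Run everything in 'Must test'; from 'Should test' only High-priority cases."""
        selected = [name for name, _ in suites.get("Must test", [])]
        selected += [name for name, prio in suites.get("Should test", []) if prio == "High"]
        return selected

    print(select_cases(suites))   # ['login works', 'logout works', 'export report']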

Software testing, 2015 251(504)


Ohjelmistotekniikka
Matching the requirements and risks of
the system under test
• If there is a risk but no corresponding requirement, either the
risk is unnecessary or the corresponding requirement has to
be added
• If there is a requirement but no corresponding risk, the risk
should be added or the requirement removed as unnecessary
• Risk analysis can thereby improve the quality of the
requirements specification
• Analysis can be done also from a financial point of view
– Evaluation doesn’t consider the effects of the errors on the users
but the amount of money that the software will produce
– For example, if a feature is extremely important for some high
paying customer, this can be taken into consideration when the
effect of a possible failure is evaluated

Software testing, 2015 252(504)


Ohjelmistotekniikka
7. On documentation
• As usual in software engineering, documentation has a
significant role in testing
– Usually a lot of documentation is created
– A considerable amount of time may be spent on writing it
– Synchronization of the documents and the actual tests can
become a problem
• The purpose of documentation is to get the information from
the minds of the designers/testers into a form that others can
also use
– When things are written down, misunderstandings and flaws are
usually found
– Things need to be remembered later too
– Dilemma: how to document everything that is necessary but
nothing useless
Software testing, 2015 253(504)
Ohjelmistotekniikka
7.1 High level test plan documentation
• The purpose of test planning is to make the execution of tests
and reporting of the results as easy as possible
– Project plan:
• What documents are produced
• Who produces
• When
• A general plan of testing (Master Test Plan) is useful:
– General approach to testing – what, how, when
– Schedules for testing
– Entry and exit criteria for phases of testing
– Who is responsible and for what (for example the setup of test
environment, etc.)

Software testing, 2015 254(504)


Ohjelmistotekniikka
7.2 Test planning process
• Test planning, adapted from [Haikala&Märijärvi 06]:

[Diagram: requirements, experience and checklists, company directives, the software specification, earlier projects, the project life cycle model and the followed standards (e.g. safety critical) feed into the plans for testing and tests; these are refined into more detailed plans for tests (at the right time), then into the design and implementation of test cases, scripts etc., followed by test execution and finally test reports]

Software testing, 2015 255(504)


Ohjelmistotekniikka
On designing plans 1/2
• In making plans, keep in mind their purpose and requirements
– Good documentation helps concentrate on essential issues
when changes in “what” or “how” occur
– What plans are required
– Who should use them and understand them
• For example, acceptance testing plan should be understood
by people who do not know much about testing
– Think of the habits and culture of the domain and the managers
– Large projects need more plans than small ones
– Plans share information and create trust
– If there are problems in testing, the accepted plans and
compromises documented in them are an important cover for the
testers
Software testing, 2015 256(504)
Ohjelmistotekniikka
On designing plans 2/2
• Plans must be maintained
• Planning process is more important than the plans
– People share views and information, learn
• Test tools might help with documentation
– (Partial) generation of some types of documents

Software testing, 2015 257(504)


Ohjelmistotekniikka
Overall documentation 1/2
• Similarly to phasing of testing, documentation has to be
designed based on the size of the project, its risk level and
the size of the system under test
• Some kind of plan is always needed
• For small projects, a generic test plan that covers integration
and system testing is sufficient
– Made in conjunction with specification and updated later

Software testing, 2015 258(504)


Ohjelmistotekniikka
Overall documentation 2/2
• Larger projects have a master test plan that describes all
testing activities, and separate plans for test levels and types
• Master test plan is important. It distributes information on
what testing is planned to be done.

Master test plan
  – System test plan
  – Integration test plan
  – Unit test plan

Software testing, 2015 259(504)


Ohjelmistotekniikka
7.3 IEEE 829 guides planning
• IEEE 829-2008, Standard for Software
and System Test Documentation, is
one of the few standards in testing
• Many test plans and reports have been
made according to it
• Good news: the latest version of the
standard understands that situations
and needs vary, and so may the size
and content of the documentation

Software testing, 2015 260(504)


Ohjelmistotekniikka
• IEEE 829 example outline for master
test plan
1. Introduction
   1.1 Document identifier
   1.2 Scope
   1.3 References
   1.4 System overview and key features
   1.5 Test Overview
       1.5.1 Organization
       1.5.2 Master test schedule
       1.5.3 Integrity level schema
       1.5.4 Resources summary
       1.5.5 Responsibilities
       1.5.6 Tools, techniques, methods, and metrics
2. Details of the master test plan
   2.1 Test processes including definition of test levels
       2.1.1 Process: management
       2.1.2 Process: acquisition
       2.1.3 Process: supply
       2.1.4 Process: development
       2.1.5 Process: operation
       2.1.6 Process: maintenance
   2.2 Test documentation requirements
   2.3 Test administration requirements
   2.4 Test reporting requirements
3. General
   3.1 Glossary
   3.2 Document change procedures and history

Software testing, 2015 261(504)


Ohjelmistotekniikka
• IEEE 829 example outline for one test
level plan
1. Introduction
   1.1 Document identifier
   1.2 Scope
   1.3 References
   1.4 Level in the overall sequence
   1.5 Test classes and overall test conditions
2. Details for this level of test plan
   2.1 Test items and their identifiers
   2.2 Test traceability matrix
   2.3 Features to be tested
   2.4 Features not to be tested
   2.5 Approach
   2.6 Item pass/fail criteria
   2.7 Suspension criteria and resumption requirements
   2.8 Test deliverables
3. Test management
   3.1 Planned activities and tasks; test progression
   3.2 Environment / infrastructure
   3.3 Responsibilities and authority
   3.4 Interfaces among the parties involved
   3.5 Resources and their allocation
   3.6 Training
   3.7 Schedules, estimates, and costs
   3.8 Risk(s) and contingency(s)
4. General
   4.1 Quality assurance procedures
   4.2 Metrics
   4.3 Test coverage
   4.4 Glossary
   4.5 Document change procedures and history
Software testing, 2015 262(504)
Ohjelmistotekniikka
7.4 Low level: documentation of test
cases
• Test cases are usually not recommended to be listed in plans,
but
– In a test management system
– in a separate document, like an Excel sheet
– Or somewhere else where their maintenance is easy
– Unit test cases are documented in their code (which can be used
to produce lists)
• Test cases shouldn’t be something to be frozen but
continually developing ”raw material” for testing, so their
handling must be easy and concentrated in one place

Software testing, 2015 263(504)


Ohjelmistotekniikka
7.5 Lightly in small projects
• In small projects testing plans can be integrated into other
plans
– Goal: minimise the number of documents and their pages
– System test plan can be included in the project plan
– Integration test plan can be included in design documentation
– In each module’s design document one can attach the definitions
of the corresponding test environments and the test cases

Software testing, 2015 264(504)


Ohjelmistotekniikka
7.6 Test reports

• Specific test reports are created usually at system testing


level, for the testing of a specific build
– They are complemented with real-time views to testing in test
management systems
• Test report outline example:
– Introduction
– Conflicts and deviation
– Coverage evaluation
– Results
– Evaluation
– Actions to be taken
– Acceptance

Software testing, 2015 265(504)


Ohjelmistotekniikka
7.7 Good test report
• A good report offers exactly the information needed at the
moment of reporting.
• It’s written to be read by the people who need that
information (presentation style, details, order of contents).
• Brevity and compactness are good.
• Writing reports or waiting for them must not delay other activities.
– Reported data should be produced automatically, not entered
manually.
• Reporting supports the documentation of testing e.g. to meet
the requirements of standards (e.g. for safety critical
systems) and helps improve the process.

Software testing, 2015 266(504)


Ohjelmistotekniikka
IEEE 829-2008 documentation structure 1/2

[Diagram: the Master test plan (MTP) is refined into an Acceptance test plan, a System test plan, a Component integration test plan and a Component test plan (note: the levels of testing may be other than those shown here); the System test plan leads to a System test design, System test cases and System test procedures, which feed test execution]

Software testing, 2015 267(504)


Ohjelmistotekniikka
IEEE 829-2008 documentation structure 2/2

[Diagram: test execution produces level interim test status report(s), test level logs and test level anomaly reports; these are summarized into an Acceptance test report, a System test report, a Component integration test report and a Component test report, which together feed the Master test report]

Software testing, 2015 268(504)


Ohjelmistotekniikka
IEEE 829-2008 recommended documents by risk
level

• Comparison between critical and trivial projects (example, not


precise)

Software testing, 2015 269(504)


Ohjelmistotekniikka
Examples of IEEE 829-2008 document templates

• On the course project page you will find document templates


based on the standard's examples.

Software testing, 2015 270(504)


Ohjelmistotekniikka
7.8 IEEE 829-2008 – In agile development, less
documentation is required

"6.2 Eliminate content topics covered by the process


Some organizations are using agile methods and exploratory
testing techniques that may de-emphasize written
documentation. For testing software with agile methods, they
can choose as little as a general approach test plan, and then
no detailed documentation of anything except the Anomaly
Reports. Some organizations use a blended approach combining
agile testing with parts of the test documentation detailed in this
standard. It is up to the individual organization to ensure that
there is adequate documentation to meet the identified integrity
level."

Software testing, 2015 271(504)


Ohjelmistotekniikka
8. Monitoring testing
• Monitoring progress in unit testing and integration testing
– Automated listings of tests executed, passed and failed are
preferred
– Test tools produce these naturally
– Test coverage is monitored for all product areas
• Monitoring progress in system testing
– The tests carried out are reported in test management system
(Quality Center, TestLink, Excel file), or a task management
system (such as JIRA)
– Test logs are kept, especially in exploratory testing
– Progress in testing of features and requirement coverage are
monitored
– Results are compared to desired progress of testing
Software testing, 2015 272(504)
Ohjelmistotekniikka
8.1 Test management software in
a nutshell
• Create and manage test cases
• Organize test cases into test plans
• Execute test cases and track their passing by build
• Track results
• Generate reports
• Test cases can be linked to requirements
– Monitor requirement coverage
• Links to error databases, such as Bugzilla, Mantis, JIRA
• Example: open source TestLink http://www.teamst.org/
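• A minimal sketch of the requirement-coverage idea; the requirement IDs, test case
  names and results are hypothetical examples:

    # A minimal sketch: each test case is linked to the requirements it covers.
    # A requirement counts as covered when at least one linked test case has passed.
    # Requirement IDs, case names and results are illustrative assumptions.
    test_cases = [
        {"name": "TC-1 login",  "requirements": ["REQ-1"],          "passed": True},
        {"name": "TC-2 logout", "requirements": ["REQ-1", "REQ-2"], "passed": False},
    ]
    all_requirements = ["REQ-1", "REQ-2", "REQ-3"]

    covered = {req for case in test_cases if case["passed"] for req in case["requirements"]}
    coverage = len(covered & set(all_requirements)) / len(all_requirements)

    print(f"Requirement coverage: {coverage:.0%}")            # Requirement coverage: 33%
    print("Not yet covered:", sorted(set(all_requirements) - covered))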

Software testing, 2015 273(504)


Ohjelmistotekniikka
8.2 Monitoring error situation
• An important point of view on project progress
• Found errors are recorded in error reports (one report per
error) – more on this later
• Errors are categorized (e.g. mild, moderate, serious,
catastrophic)
– Too many levels make categorization difficult
• At unit and integration levels, errors are not usually listed
• At system level, the errors are brought into an error database
(Bugzilla, Mantis, etc.) – more on reporting errors later
• Test coverage metrics are followed
• The situation is monitored for different areas of the product
and testing methods tuned as needed
Software testing, 2015 274(504)
Ohjelmistotekniikka
8.3 Test report
• After specific testing phases a test report is often written
– For example system testing for a specific build
– Special testing such as usability or performance testing

• Exploratory testing logs are part of the material for these


reports

• Keeping a test diary can also be useful


– Testers often keep a personal diary

Software testing, 2015 275(504)


Ohjelmistotekniikka
9. More on methods and techniques of
testing
• In this section we take a look at methods related
to dynamic testing and add to our vision of the
rich whole of testing.

Software testing, 2015 276(504)


Ohjelmistotekniikka
Great many methods available
• A huge number of different techniques exist
– The techniques to be used have to be selected case by case
– Generally speaking, it’s really hard to find Best Practices™ –
everything depends on the situation
• There are some rules of thumb for choosing the right
technique in each situation, though
• The techniques usually complement each other and are used
flexibly together
• Techniques can also be classified in many ways

Software testing, 2015 277(504)


Ohjelmistotekniikka
Basic questions in making the choice
• There are questions to think about when selecting techniques:
– Is the source code used in testing or not?
– Who does the testing?
– What kinds of things are tested?
– What kinds of problems are looked for?
– How does the program need to be used?
– How do we know whether a test run was successful or not?
– In what phase of the project are we? (Are we creating code or
estimating suitability for use)
• We'll look into many of these next.

Software testing, 2015 278(504)


Ohjelmistotekniikka
Description of methods by type
• The following classification divides the techniques based on
which of the previous questions they try to answer
– Classification and descriptions are mostly based on [Kaner et al.
02]
• Different techniques have to be combined when necessary
• The techniques form a kind of a toolbox
– Screwdriver and hammer will get you far, but sometimes you
need a saw

Software testing, 2015 279(504)


Ohjelmistotekniikka
9.1 Is the source code used in testing or
not? 1/4
• In white box testing test cases are chosen based on
information about the internals of the system, such as the
source code (glass/clear box testing)
• White box testing concentrates on testing functionality that
has been implemented
– Unit testing is like this
– In TDD tests are created for things yet to be implemented. They
will fail until the implementation is done (and works). This keeps
track of progress.

Software testing, 2015 280(504)


Ohjelmistotekniikka
Is the source code used in testing or not?
2/4
• In black box testing the program under test is seen as a black
box, meaning its implementation is ignored
• When units are combined and the abstraction level rises,
there is a transition from white box testing to black box testing
– Unit testing is often white box testing, system testing often black
box testing
• But at the bottom level white-box testing goes on

Software testing, 2015 281(504)


Ohjelmistotekniikka
Is the source code used in testing or not?
3/4
• Test cases are designed using the specifications or the
implementation
• Tests can be designed even if the implementation does not
exist yet
– Based on specification, use cases or user stories
• Test cases retain their value when the implementation
changes, if the specification remains the same
• The method also works for the other parts of the system than
just code, so the ideas generalize

Software testing, 2015 282(504)


Ohjelmistotekniikka
Is the source code used in testing or not?
4/4
• The techniques complement each other: white box tests find
low level errors (closer to code) and black box tests find
higher level errors (closer to requirements and users)
– Specifications are at a higher level of abstraction than the
implementation, where the details may obscure the whole
• If some knowledge of the program implementation is used in
testing, but pure white or black box testing isn’t done, this can
be called grey box testing

Software testing, 2015 283(504)


Ohjelmistotekniikka
9.2 Who does the testing? 1/3

• Developer:
– Unit testing, integration testing
• Test engineer:
– Integration testing, system testing
– On the customer's side: acceptance testing
• User tests:
– Software is tested by its user, sometimes with a member of the
supplier’s testing team; acceptance testing
• Alpha testing:
– A user test at supplier's premises, test releases not public
• Beta testing:
– A user test at customer's premises, public test releases
Software testing, 2015 284(504)
Ohjelmistotekniikka
9.3 Who does the testing? 2/3
• Crowd testing:
– Large crowds perform goal-oriented testing
• Subject-matter expert testing:
– The software is given for testing to a person (not necessarily an
end user) who knows (some part of) the application area very
well
– This results in errors, criticism and sometimes even praise
• Pair testing:
– Two testers test together with one computer and switch roles
from time to time

Software testing, 2015 285(504)


Ohjelmistotekniikka
9.4 Who does the testing? 3/3
• Bug bashes:
– Just before the release, an event that lasts for example half a
day, where all employees (including programmers, salesmen,
secretaries, etc.) can participate in testing at the supplier’s
premises
– Success depends a lot on the organization
• Eat your own dog food:
– The supplier takes a preliminary version of the software into use in its
own organization
– Only when the program has been found reliable enough in internal use
is it delivered to the customer

Software testing, 2015 286(504)


Ohjelmistotekniikka
9.5 What kinds of things are tested? 1/3

• Function testing: test one function at a time


– Test every function so well that you can say that it works as it is
supposed to work
– It is worthwhile to make more complex tests, where test cases
cover more than one function
• Feature testing: test features that are typically implemented
as a combination of several functions
• Menu tour: all menus and dialogs of a graphical user interface
(GUI) are gone through and all items tested

Software testing, 2015 287(504)


Ohjelmistotekniikka
9.6 What kinds of things are tested? 2/3
• Logic testing: the dependencies between the variables of the
program are tested
– For example, if the variable age is greater than 50 and the value of the
variable Smoker is True, the variable OfferLifeInsurancePolicy must be
False (a sketch of such a check follows after this list)
– The goal is to test each logical relationship between each of the
variables of the program (which is unfortunately often impossible)
• State-based testing: a great number of possible states of the
program are gone through and it is checked that in each of those
states the program accepts only legal inputs and as their result
moves to a correct state
• Configuration coverage testing: different configurations of the
program are tested (for example, hardware, some other
environment) and it is measured how many of all the possible
combinations are covered
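• A minimal sketch of testing the dependency described above; the function and variable
  names are hypothetical, not a real API:

    # A minimal sketch of logic testing for the rule described above.
    # offer_life_insurance_policy() is a hypothetical example function.
    def offer_life_insurance_policy(age: int, smoker: bool) -> bool:
        """Illustrative business rule: no offer for smokers over 50."""
        return not (age > 50 and smoker)

    def test_no_offer_for_smokers_over_50():
        assert offer_life_insurance_policy(age=51, smoker=True) is False

    def test_offer_allowed_otherwise():
        assert offer_life_insurance_policy(age=51, smoker=False) is True
        assert offer_life_insurance_policy(age=30, smoker=True) is True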

Software testing, 2015 288(504)


Ohjelmistotekniikka
9.7 What kinds of things are tested? 3/3
• Requirements testing: the requirements specified in the
requirements specification are tested with the goal of showing
whether they are fulfilled or not
– Requirements coverage matrix can be used to measure
coverage against the requirements
• Specification-based testing: go through the specification and
test the requirements whose fulfilment status is either yes or
no
– Specifications can include manuals, sales brochures, etc.
• Testing of inputs
– Classic methods are equivalence partitioning and boundary
value analysis

Software testing, 2015 289(504)


Ohjelmistotekniikka
9.8 What kinds of problems are looked for?
1/3
• Risk-based testing: A risk analysis is used to find out what to
test next
– Testing is prioritized based on the probability of a feature not
working correctly and the possible consequences of this
– The greater the risk is for some feature, the earlier and better it
has to be tested

Software testing, 2015 290(504)


Ohjelmistotekniikka
9.9 What kinds of problems are looked for?
2/3
• In addition to areas pointed out by the risk analysis, areas that
are at low risk according to the analysis should also be tested
– One can never be sure how well the analysis has been done
• There is a risk that the risk analysis is wrong
– If a risk has not been noticed in the risk analysis, the
corresponding implementation gets tested anyway at least a little
• For example, features depending on timing and concurrent
execution should be tested even if the risk analysis has paid
no attention to these
– For example, if an event A occurs usually before event B, a
situation is looked for where B occurs before A

Software testing, 2015 291(504)


Ohjelmistotekniikka
9.10 What kinds of problems are looked for?
3/3
• Based on the characteristics of the features:
– Usability testing looks for usability problems.
– Performance testing looks for performance problems.
– Security testing looks for information security problems.
– Etc…

Software testing, 2015 292(504)


Ohjelmistotekniikka
9.11 How does the program need to be
used?
• Scenario testing: test with test cases that have been derived
from use cases or user stories
• Business process testing: test the business process
preferably end to end
• Usability testing can include actual functional scenarios
(larger than use cases)
• It is essential to test the entire life cycle of the program
– Installation
– Usage: ordinary use, maintenance, backups
– Updating
– Taking out of use

Software testing, 2015 293(504)


Ohjelmistotekniikka
9.12 Installation testing
• Installation has traditionally been a problem area
– Today it is also encountered in delivering patches and in
continuous deployment
• Install the software in different ways, in different environments
– Check which files are added and which change on the disk (see the sketch after this list)
– Does the installed program work?
– What happens if the installation is removed?
• Common problems
– Success with different access rights – many programs
needlessly require admin privileges to install
– Some Linux-based software doesn’t like whitespace in
installation path
– Sometimes the drive on which you install matters
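• A minimal sketch of the "which files are added or changed" check; the installation
  directory is a hypothetical example:

    # A minimal sketch: snapshot the target directory before and after installation
    # and report added and changed files. The directory path is a hypothetical example.
    from pathlib import Path

    def snapshot(root: Path) -> dict:
        """Map every file under root to its size and modification time."""
        return {p: (p.stat().st_size, p.stat().st_mtime)
                for p in root.rglob("*") if p.is_file()}

    root = Path("C:/Program Files/ExampleApp")   # hypothetical installation target
    before = snapshot(root)
    # ... run the installer here ...
    after = snapshot(root)

    added = sorted(set(after) - set(before))
    changed = sorted(p for p in before if p in after and before[p] != after[p])
    print("Added files:", added)
    print("Changed files:", changed)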
Software testing, 2015 294(504)
Ohjelmistotekniikka
9.13 Performance and load testing 1/8
• Performance testing
– Test whether the system fulfils its performance requirements
– See how fast it runs, so that it can be optimized if needed
– Problems can be found by comparing the time consumed by the
same test run at different times
– If performance suddenly drops, an error may be at fault
• The error might be found in third-party code
• E.g. in a Java program performance drop might be related to
a new version of the virtual machine
• For web services this is usually done along with load testing.

Software testing, 2015 295(504)


Ohjelmistotekniikka
Performance and load testing 2/8
• Load testing: load is imposed so that the software needs a lot of
resources (lots of work, little time)
– This probably leads to a failure; the reasons for the failure might
lead to the identification of some weaknesses that are also
relevant in normal usage
• Stress testing: load is increased gradually, until failure appears
(response times become longer than allowed etc.) – when does the
system stop working?
[Diagram: users and clients connect over a network connection to
application servers 1 and 2, which use database server 1]
Software testing, 2015 296(504)
Ohjelmistotekniikka
Performance and load testing 3/8
• It is important to begin testing early so that performance can be
improved with system design – or prepare to buy servers
• The testing of web services is done with load testing applications
such as JMeter, which are used to create a set of virtual clients
doing typical things on one or more computers (a plain-Python sketch of the idea follows below)
• Functional errors are not looked for and they may not interrupt this
testing
– Use simple scripts that have been found to work and will not be broken
all the time as the system develops
• In Finnish: More information on slide set by Vuori at
http://www.mattivuori.net/julkaisuluettelo/liitteet/suorituskykytestaus.pdf
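• A minimal sketch of the virtual-client idea in plain Python rather than a dedicated
  tool; the target URL, client count and request count are hypothetical example values:

    # A minimal load testing sketch: N concurrent virtual clients repeatedly request
    # a URL and response times are collected for analysis.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://test-environment.example.com/frontpage"   # hypothetical target
    CLIENTS = 50
    REQUESTS_PER_CLIENT = 20

    def virtual_client(_):
        """One virtual user: issue requests and record response times in seconds."""
        times = []
        for _ in range(REQUESTS_PER_CLIENT):
            start = time.monotonic()
            try:
                urllib.request.urlopen(URL, timeout=10).read()
            except OSError:
                pass   # functional errors are not the focus of load testing
            times.append(time.monotonic() - start)
        return times

    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        all_times = sorted(t for result in pool.map(virtual_client, range(CLIENTS))
                           for t in result)

    print(f"median {all_times[len(all_times) // 2]:.3f} s, "
          f"95th percentile {all_times[int(len(all_times) * 0.95)]:.3f} s")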

Software testing, 2015 297(504)


Ohjelmistotekniikka
Performance and load testing 4/8
• Testing process of a web system in nutshell:
• Prepare, model and plan
– Make a small team to think how the testing is done.
– Hire an expert to do the first tests - and learn how they are done.
– Find out performance requirements: how many concurrent users should
be supported, response times, etc... Analyse how realistic the
requirements are. What would be the worst case?
– Model how the users behave (usage profile). Identify times when there
are plenty of users (last working hour of Friday for some business
systems).
– Identify critical / most representative use cases – some whole task, like
making an order.
– Make the components used in tests reliable – don't let testing be
interrupted by functional errors.

Software testing, 2015 298(504)


Ohjelmistotekniikka
Performance and load testing 5/8
• Build test system
– Build a test environment that mirrors production "exactly".
– If you can't have an exact replica of the production system, use mathematics to estimate
how it corresponds to production (e.g. the number and speed of servers). Also think of other
differences and what they mean for performance. You can alter the system
configuration to produce a "performance model" of the system based on some
measurements.
– Select a good performance testing tool that you use to produce a large enough
number of virtual users and run the tests from many workstations. (Even one
workstation can sometimes launch a couple of hundred users.)
– If you need to bypass the UI in tests, analyse and test what effect that may
have.
– If you can arrange an environment, automate as much as possible so you can
just press a button to run the tests every now and then -- for new builds, if the
servers change etc.
– Run the first tests as early as possible so that you can change the architecture if
needed.
Software testing, 2015 299(504)
Ohjelmistotekniikka
Performance and load testing 7/8
• Set up the test system:
– Set up monitoring on servers to see what is going on under load.
– Use realistic test data - but no production data.
– Double check that there are no interfaces to production systems and their data
that could be endangered.
– Run the same other processes and other loads on the servers as the production
system.
• Run tests
– Launch the tests and add volume gradually and see how the system behaves.
– Follow the metrics: for example, processor usage should not usually rise above
60 % (a monitoring sketch follows after this list).
– Use the systems manually during testing too. Validate the performance
requirements.
– Add stress to see how the system works under very heavy load – it must not
crash.
– Simulate a denial of service attack.
– Repeat the tests a couple of times.
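• A minimal sketch of following one such metric (processor usage) on a server during a
  test run; it uses the third-party psutil package, and the sampling interval, run length
  and threshold are illustrative values:

    # A minimal sketch: sample server CPU usage during a load test run and flag
    # samples above the ~60 % rule of thumb mentioned above.
    # Requires the third-party psutil package; the values are illustrative.
    import time
    import psutil

    SAMPLES = 60        # a one-minute run with one sample per second
    THRESHOLD = 60.0    # percent

    for _ in range(SAMPLES):
        usage = psutil.cpu_percent(interval=1.0)   # blocks for 1 s and measures
        marker = "  <-- above threshold" if usage > THRESHOLD else ""
        print(f"{time.strftime('%H:%M:%S')}  CPU {usage:5.1f} %{marker}")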
Software testing, 2015 300(504)
Ohjelmistotekniikka
Performance and load testing 8/8
• Analyse
– Analyse the test information with the team and think of what the bottlenecks
really are and how to work with them.
– If the data shows funny things, run special tests to isolate the issues and find out
what is happening

Software testing, 2015 301(504)


Ohjelmistotekniikka
9.14 Robustness testing
• Robustness testing: a long-period test to uncover errors that
would go unnoticed in quick tests
– For example errors in pointer handling, memory leaks, stack
overflows, etc.
– The test may be run even for several days
• Usually done late in development as the system matures
• Important for platforms with a wide user base
– E.g. smartphone operating system and basic applications
• Model-based testing is suited for this kind of testing since it
can bring continuous variation to the test run (more on model-
based testing later…)

Software testing, 2015 302(504)


Ohjelmistotekniikka
9.15 Regression testing 1/3
• Reuse test cases to test things again after changes
– When something is changed, all kinds of things may break
– ”When one thing is fixed, two others break"
• In practice one of the most important and used methods
• Done at all levels of testing
• Can be done manually, with test automation or both
• The most traditional application of test automation

Software testing, 2015 303(504)


Ohjelmistotekniikka
Regression testing 2/3
• What is tested?
– What can be affected by the change? Analyse the code (a selection sketch
follows after this list). Be realistic: a change may have an effect anywhere…
– Sometimes there’s a constant regression test that is
complemented as needed, including with exploratory testing
– Three main strategies: 1) always the same way (with test cases
that have passed many times before), 2) with somewhat new test
cases each time, to catch different bugs, or 3) a compromise: some
things that ”have to be checked” are tested with the basic set, but
the rest of the testing is designed based on the situation
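• A minimal sketch of change-based selection; the module names and the module-to-test
  mapping are hypothetical examples:

    # A minimal sketch of selecting regression tests based on what was changed.
    # The modules, test case names and the mapping are illustrative assumptions.
    tests_by_module = {
        "billing": ["test_invoice_totals", "test_vat_calculation"],
        "login":   ["test_login_ok", "test_login_lockout"],
        "reports": ["test_monthly_report"],
    }
    always_run = ["test_smoke_suite"]   # the "have to be checked" basic set

    def select_regression_tests(changed_modules):
        """The basic set plus every test mapped to a changed module."""
        selected = list(always_run)
        for module in changed_modules:
            selected.extend(tests_by_module.get(module, []))
        return selected

    print(select_regression_tests(["billing"]))
    # ['test_smoke_suite', 'test_invoice_totals', 'test_vat_calculation']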

Software testing, 2015 304(504)


Ohjelmistotekniikka
Regression testing 3/3
• Some methods
– In integration testing automated tests check that the other
components work OK after one has been changed. Tests have
been created in testing done during implementation.
– After changes to an information system automated user interface
tests are executed. The tests have been created specifically for
this purpose once the interface has stabilized.
– After changes regression testing is done in manual exploratory
manner.
• Problems
– May be boring work when done manually
– Automated tests always have maintenance problems

Software testing, 2015 305(504)


Ohjelmistotekniikka
9.16 Smoke testing 1/2
• The purpose is to test whether a new version of the software
(for example a new build) works well enough to be tested
properly
– If it crashes all the time, testing it will fail
• Often an automated and standardized test for the basic
functionality of the program that can be assumed to work
– Focus on width instead of depth
– For example, if a wrong file has been linked to the build, the
error can be found quickly with a smoke test
• The actual purpose is not to find errors
• Should be designed together with the developers

Software testing, 2015 306(504)


Ohjelmistotekniikka
Smoke testing 2/2
• Test cases are usually a subset of the test cases used in
regression testing
• A smoke test can be done by either a developer or a tester
• It is important to test in an environment similar to where the
tests will be executed afterwards
• Whenever possible, smoke tests should be automated,
because they probably have to be repeated often
– It is realistic, since they are simple basic tests
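• A minimal sketch of an automated smoke test for a web application; the base URL and
  the checked pages are hypothetical examples:

    # A minimal smoke test sketch: broad, shallow checks that the new build can be
    # tested at all. The base URL and pages are hypothetical examples.
    import urllib.request

    BASE = "http://test-environment.example.com"   # hypothetical test environment

    def page_ok(path: str) -> bool:
        """The page responds with HTTP 200 and returns some content."""
        with urllib.request.urlopen(BASE + path, timeout=10) as response:
            return response.status == 200 and len(response.read()) > 0

    def test_front_page_loads():
        assert page_ok("/")

    def test_login_page_loads():
        assert page_ok("/login")

    def test_search_page_loads():
        assert page_ok("/search")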

Software testing, 2015 307(504)


Ohjelmistotekniikka
9.17 How do we know whether a test
run was successful or not?
• Evaluation-based
• Comparison to the specification or another authority
– If there are differences, there probably is an error (somewhere)
• Comparison with the stored results
– For example, in a regression test the results are compared with
the results of the previous test run
– If the previous results were correct and they are different now,
there might be an error

Software testing, 2015 308(504)


Ohjelmistotekniikka
9.18 Heuristic consistency
• The program should work…
– As before
– As other programs in the same situation
– As people say it should work (the program should work as we
think the users want it to work)
– Internally consistently, for example, error handling has been
implemented in a same way in similar functions
– As its obvious purpose requires
• If inconsistencies are detected in any of the previous aspects,
there might be an error – or an intentional design decision

Software testing, 2015 309(504)


Ohjelmistotekniikka
Concept of oracle
• Oracle is a term for a mechanism for telling whether a test
passes or not – in other words a way to tell whether the
observed behaviour is correct or not
– The term comes from the Delphic Oracle
https://en.wikipedia.org/wiki/Delphic_Oracle
• In practice the oracle might be e.g.
– An earlier, reliable version of the software
– Another algorithm (see the sketch below)
– Another program or cloud service
– An expert user (who is familiar with the business logic)
• No oracle is perfect…
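• A minimal sketch of an oracle based on another algorithm: the output of the
  implementation under test is compared against a simple, trusted reference
  implementation; both functions are hypothetical examples:

    # A minimal oracle sketch: the result of the implementation under test is
    # compared against a slow but simple reference algorithm.
    def sort_under_test(values):
        """The (possibly buggy) implementation being tested."""
        result = list(values)
        result.sort()
        return result

    def reference_sort(values):
        """Trusted but slow oracle: selection sort."""
        remaining = list(values)
        ordered = []
        while remaining:
            smallest = min(remaining)
            remaining.remove(smallest)
            ordered.append(smallest)
        return ordered

    def test_against_oracle():
        data = [5, 3, 3, -1, 42, 0]
        assert sort_under_test(data) == reference_sort(data)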

Software testing, 2015 310(504)


Ohjelmistotekniikka
10. Error reporting

Error reporting (or defect


reporting) is in practice the most
important form of
communication between testers
and developers. Although the
goals are usually clear, good
reporting is not easy to
accomplish.

Software testing, 2015 311(504)


Ohjelmistotekniikka
10.1 An error report shares information
• Based on ”Bug Advocacy” by Cem Kaner, Quality Week 2002.
• If the software is targeted to mass markets, most of the errors
found by the users have already been found in actual testing
– Why have they not been fixed?
• Defect report is a tool whose purpose is to convince the
organization that money and time should be spent on fixing
the defect

Software testing, 2015 312(504)


Ohjelmistotekniikka
The problem must be ”sold” to developers
• If the purpose of testing is to find defects, then defect reports
are the primary product of the testers
– They are results that are displayed outside of the testing team
and for which the testers are remembered
– The best tester is not the one who finds the most errors, but the
one who gets the most errors fixed
• Because there is never enough time, the tester has to ”sell”
the bug to the developer, so that the developer uses their time
to fix it
• The selling is based on two goals
– The developer has to be motivated to fix the defect
– The developer’s arguments and explanations for why the error
does not need fixing have to be countered by the tester

Software testing, 2015 313(504)


Ohjelmistotekniikka
What makes the developer want to fix
the error?
• The failure caused by the error looks really bad
• It looks like an interesting problem
• It affects many people
• It is trivial to repeat
• It makes us look bad, or a corresponding bug has made our
competitors look bad
• The management wants the error fixed
• The programmer wants to do a favour to the tester and other
personal reasons
• Appeal to professionalism etc.

Software testing, 2015 314(504)


Ohjelmistotekniikka
What makes the developer resist fixing
the error? 1/2
• The developer cannot repeat the error
• Repetition of the error is not straightforward
• Repetition is not possible with this information and getting
more information requires a lot of work
• The developer does not understand the error report
• Unrealistic bug
• Fixing requires a lot of work
• Fixing is risky
• There is no foreseeable effect for the user
• Not important; nobody cares if this does not work

Software testing, 2015 315(504)


Ohjelmistotekniikka
What makes the developer resist fixing
the error? 2/2
• It’s not a bug, it’s a feature
• The management does not care about problems like this
• The developer does not like the tester or does not trust them,
or other personal reasons

Software testing, 2015 316(504)


Ohjelmistotekniikka
How to motivate to fixing the error?
• When a tester notices a failure, it is just a symptom; the error
itself has yet to be revealed
• Perhaps this failure is not the best possible example to show
the effects of this error
• Thus, more work needs to be done before writing an error
report to point out that the error is more severe and common
than it looks at first glance

Software testing, 2015 317(504)


Ohjelmistotekniikka
Finding out the severity
• When a tester has found a failure, the software has been
driven to a vulnerable state
• When continuing testing from this state, the true severity of
the error can be found out by
– Changing control (first A, then B or the other way around)
– Changing the values of options and settings
– Changing software and hardware configuration

Software testing, 2015 318(504)


Ohjelmistotekniikka
Is it a new error or an old one?
• If it is an old error, there is little motivation to fix it, unless it
has caused complaints from the customers
• People take new errors more seriously
• If it is an old error, you might find new ways to repeat the error
in a new version of the software
– Then it becomes a new problem
• These points are emphasized during the maintenance of the
program, when the purpose of a new version is only to fix the
most critical errors
• If the error database is cleaned out periodically, important
information may be lost

Software testing, 2015 319(504)


Ohjelmistotekniikka
Finding out how common the error is
• Find dependencies from the configuration
– If the error cannot be repeated in the developer's environment, it
is hard to motivate them to fix it
– The bug report must describe the environments where the error
appears
• Try to generalize the extreme cases
– Boundary value analysis helps to search for the problem
– Negative testing, erroneous inputs
– When the bug is found you don’t have to settle just for the
boundary values anymore

Software testing, 2015 320(504)


Ohjelmistotekniikka
10.2 Can the error be repeated?
• The tester has to report also the errors that cannot be
repeated
– If the report is precise enough, the programmer may be able to
find out the source of the error
– When you notice that you are unable to repeat an error, write
down everything that you remember about it immediately
• If you are unsure what you did, say so in the error report
– It is also worthwhile to write down what you did before you
noticed the failure
– Check the error database, there might be something similar
• Try to change the timing of the program to slower and faster
• Discuss with the developer and/or read the code
• An unrepeatable error is a failure on the tester's part; learn
your lesson from it
Software testing, 2015 321(504)
Ohjelmistotekniikka
10.3 Recipe for a good error report
• Good, descriptive title helps to find the report from the database
• Describe how you were able to repeat the failure, include all the
steps
• Describe which steps are essential and which are not important for
repeating the error, and how the test results varied in different test runs
• Analyse the problem and tell how it can be repeated with the least
number of steps
• The report has to be easy to understand
• The tone has to be neutral
• Only one error per report
• If a file is needed for the repetition, attach it to the report or tell
where it can be acquired
– Screenshots are favoured these days since disk space is cheap
Software testing, 2015 322(504)
Ohjelmistotekniikka
Template for error report 1/4
Typical things to be found in report forms – but it varies
• ID, a number identifying the report
• Title
– Describes the error in one line
– The most important field in an error report
• The management reads this when they want to know which defects
are yet to be fixed
• Description
– Longer text (images as attachments)
• Who reports
• Date of creation of the report
• Name of the software or component
• Release and version number
• Test configuration(s)

Software testing, 2015 323(504)


Ohjelmistotekniikka
Template for error report 2/4
• Incident type
– Programming error, design error, document inconsistency,
proposal, question
• Is it repeatable
– Yes, no, sometimes, I don’t know; if a tester claims that the error
can be repeated and the developer disagrees, the tester has to
repeat the error in the presence of the developer
• Severity
– How much does it hinder use
• Priority (for fixing)
– Project manager / error manager etc. fills this
• Impact on customers
– E.g. technical support fills this
Software testing, 2015 324(504)
Ohjelmistotekniikka
Template for error report 3/4
• Keywords
• Description of the problem and how to repeat it
– Step by step
• Recommended fix
• Who is in charge
– Filled by the project manager
• Status
– Tester fills this (at least initially): open, closed, dumped
• Decision
– Project manager owns this field
– Waiting, fixed, cannot be repeated, fixing delayed, works as
designed, more information needed, duplicate, withdrawn, etc.
Software testing, 2015 325(504)
Ohjelmistotekniikka
Template for error report 4/4
• The version number of a build that corresponds to the
decision
• The maker of the decision
– Programmer, project manager, error manager, tester
• Tester of the repair
– If a tester finds an error, they also test the fix
• The version history of the error report
• Free form comments
– Remember neutral tone
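To make the template above concrete, the following sketch shows what a filled-in report might contain; all field names and values are invented for illustration and do not come from any particular tool.

# A hypothetical, filled-in error report following the template above.
error_report = {
    "id": 1234,
    "title": "Saving a file with an empty name crashes the editor",
    "reporter": "T. Tester",
    "created": "2015-08-18",
    "component": "Editor / file handling",
    "version": "2.1.0 build 57",
    "configuration": "Windows 8.1, 64-bit",
    "incident_type": "programming error",
    "repeatable": "yes",
    "severity": "major",        # how much it hinders use
    "priority": None,           # filled in by the project / error manager
    "steps_to_repeat": [
        "Open the editor",
        "Choose File -> Save As",
        "Leave the name field empty and press OK",
    ],
    "status": "open",
}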

Software testing, 2015 326(504)


Ohjelmistotekniikka
10.4 Error databases
• Errors are usually reported to a real time error database
• From there they can be assigned to a developer or team
• The situation is updated as the error is being handled and
finally the error can be ”closed”
• Well-known systems include
– Bugzilla
– Mantis
– Jira
• The situation of the error is followed and statistics updated
continuously
– These programs have good reporting features

Software testing, 2015 327(504)


Ohjelmistotekniikka
Example of Mantis's form

Software testing, 2015 328(504)


Ohjelmistotekniikka
11. Measuring software

Nothing is permanent in software development except change.
With measurements, educated guesses can be made about the
current state of the project, what should be changed, and in
which direction. On the other hand, product metrics can help
in targeting testing to the most error-prone parts.

Software testing, 2015 329(504)


Ohjelmistotekniikka
11.1 Metrics 1/2
• Based mainly on [Haikala&Märijärvi 06]
• What do you want to measure, what information is needed?
• Do you need product metrics, project metrics or both? Or do
you want to measure something else?
– The more the better?
• Some quantities cannot be measured precisely, such as the
number of defects left, but estimates can be given
– If the number of the defects left could be calculated, they would
probably be easy to remove too
– Some estimates are better than others

Software testing, 2015 330(504)


Ohjelmistotekniikka
Metrics 2/2
• Attitude problems are often encountered in measurement
– In software projects there is always a danger that measurement
is directed at a person and their performance
– Everybody should know how the results will be used
• Traditionally metrics have been used in testing quite widely
– Test coverage
– Targeting testing to the most error-prone parts of the software
• Code that is complex is often also prone to errors
– Parts with high reliability requirements

Software testing, 2015 331(504)


Ohjelmistotekniikka
Start with simple metrics
• Start measuring testing with simple metrics that seem
trustworthy and assemble them into a metrics suite
– If you have means to measure requirements coverage, it should
be a good starting point
– A mix of complementary metrics based on requirements,
specifications and code-level coverage may be useful
– Instead of focusing on small changes in the measured values,
more interesting are large changes and trends (are we getting
better or worse)
– There is no silver bullet in metrics either
– Change the metrics suite in small steps, try to keep track of the
effects

Software testing, 2015 332(504)


Ohjelmistotekniikka
Metrics vs. risks
• Is there some relation between your metrics suite and the
product or project risks?
– Test your metrics suite by inventing scenarios where some
important risk is realized in the product or in the project to see if
your metrics clearly reveal it
– If not, could you improve the suite to better detect such
problems?
• Question metrics that have often been wrong: what does
the rate of passed and failed tests really tell?
• Could you complement this information with some
requirements / specifications / code-level coverage metrics?

Software testing, 2015 333(504)


Ohjelmistotekniikka
Careful planning of measurements
• A measurement plan is often needed in large
projects/organizations:
– Why measure
– What to measure
– Who participates in measuring
– Which parts of the system are measured
– When to measure
– How to collect and analyse the information

Software testing, 2015 334(504)


Ohjelmistotekniikka
Requirements for metrics 1/2
• The relation of measurement results to business
– The basis for measurement
– Common understanding of why the metric is collected and how it
may and will be used
• Ease of deployment
– Is a ramp up project needed?
• Understandability
– What are the reasons for changes in the values
• Selectivity
– There must be a relation between the results and the target of
the measurement
• If the value of the metric changes, it is known what has changed in
the target

Software testing, 2015 335(504)


Ohjelmistotekniikka
Requirements for metrics 2/2
• Objectivity
– The results do not depend on the measurer or the measuring situation
• Reliability
– Accuracy, repeatability, the metric is not misused by accident
• Cost-effectiveness
– The costs of measurement compared to the gain
• Reusability

Software testing, 2015 336(504)


Ohjelmistotekniikka
Quality metrics – short list of examples
for starters
• Number of open defects
– All defects
– Number of serious defects – progress blockers or especially
hindering
• Happiness of test users
– Qualitative metric
– Improvement to previous version
– Comparison to competitor
• Happiness of development team
– Qualitative metric
• Happiness of testers
– Qualitative metric

Software testing, 2015 337(504)


Ohjelmistotekniikka
Number of test cases as a metric
• The number of test cases doesn’t really tell anything.
• And therefore neither does passing them – "5000 test cases
passed"
• Reason: what counts is the quality of the test cases, and
when quantity is overemphasized there is a danger that most
of them will be trivial.
• Better to examine the number and nature of the errors in the
software.
– At different severities.
• The number of test runs is more related to work process: what
has been agreed upon and how does testing progress
• Exploratory testing does not even have ”test cases”
Software testing, 2015 338(504)
Ohjelmistotekniikka
Progress in traditional testing

• In large projects with a long system testing phase [Rothman 07]
(Figure: progress chart, not reproduced in this text version)
Software testing, 2015 339(504)
Ohjelmistotekniikka
S curve tells about maturity
• Tracking errors is important from the start
• An example of the cumulative error curve of system testing
[Haikala&Märijärvi 06] (figure not reproduced here)
• Consider publishing when the error curve evens out
• Unfortunately, the resources needed for this are very hard to
estimate beforehand
Software testing, 2015 340(504)
Ohjelmistotekniikka
Another curve on the same thing – which
one communicates more positively?

• Another way to describe the issue – possibly with a clearer message [Rothman 07]
Software testing, 2015 341(504)
Ohjelmistotekniikka
More on maturity metrics
• We saw the S curve as an example of measuring maturity
• The curve gives an implicit hint that there should be fewer
errors found
• Why follow the absolute numbers of errors found and fixed?
• Why not follow the number of open errors?
• How about rate of errors found (how many per day)?

• There are many alternative metrics, and the first one offered
should not be accepted immediately
• The spirit among the testers is an important and sometimes
the best metric: how does the software feel, has it matured
and is it getting ready for deployment
Software testing, 2015 342(504)
Ohjelmistotekniikka
Direct and indirect metrics
• Basic metrics: tests run, errors found, coverage with regard to
requirements / specifications, code coverage, size,
complexity, amount of work done, etc.
• Derived metrics: error density, rate of successful and
unsuccessful error fixes, rate of errors found to errors that
could have been found, etc.
• Process-based metrics: execution time of tests, degree of
automation, life time of errors in days, amount of work spent
in test development, quality debt in an agile project, etc.
• Generally the problem is not in coming up with metrics, as
there are enough already; the problem is in picking the right
ones
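As a rough sketch of how derived metrics are computed from basic ones; the figures below are invented, and KLOC means a thousand lines of code.

# Invented figures, only to show how derived metrics are computed.
errors_found   = 120      # basic metric
size_kloc      = 40       # basic metric: size in thousands of lines of code
fixes_total    = 100
fixes_reopened = 8

error_density    = errors_found / size_kloc                       # 3.0 errors / KLOC
fix_success_rate = (fixes_total - fixes_reopened) / fixes_total   # 0.92
print(error_density, fix_success_rate)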

Software testing, 2015 343(504)


Ohjelmistotekniikka
11.2 Non-functional testing
• How to measure non-functional testing?
• In performance testing measurement may be on/off – if
requirements are fulfilled, the situation is ok and won’t change
unless the software slows down as it develops.
• Things are different in usability testing – increased maturity
means looking at how the improvements in the backlog get
done and whether there are still defects that prevent
publishing.
– Minor cosmetic errors don’t matter.
– Instead of defects might measure the happiness of test
personnel on different rounds of testing.
• Etc…

Software testing, 2015 344(504)


Ohjelmistotekniikka
11.3 You get what you measure! – How
metrics are fooled
• The management gets what it measures
– 100% code coverage is easier to achieve when you ignore test
results
– Degree of automation can be raised by getting rid of manual
testing
– One can become an effective tester by making changes in the
source code and next day finding the errors you introduced
– If there is pressure to find errors, there is no motivation to
prevent them
– If there is pressure to not find errors, fewer will be reported
• If metrics are combined with rewards, the temptation for
gaming them can become a problem

Software testing, 2015 345(504)


Ohjelmistotekniikka
11.4 System level coverage metrics 1/2
• Traditional metrics:
– Requirements coverage – how well have different requirements
been covered
• What percentage of requirements have associated tests?
• Coverage by importance of requirements
– Functional coverage
• Map the tests to program functions, even user interface
functions
– Coverage by different units
• Components, modules, etc.

Software testing, 2015 346(504)


Ohjelmistotekniikka
System level coverage metrics 2/2
• Other possibilities
– Risk coverage
• How product risks have been covered in testing, especially
the major ones
– Coverage by risks associated with requirements/functions
– Coverage by what is essential to user segments
• 70 % of requirements important to business users have been
tested, 83 % of those important to home users
– Usage scenario / use case / user story coverage
• Coverage metrics used at this level depend on the
development process – V-model or agile
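A minimal sketch of computing requirements coverage from a mapping between requirements and tests; the requirement IDs and test names are invented.

# Map each requirement to the tests that exercise it and report the
# share of requirements that have at least one test.
requirement_tests = {
    "REQ-1": ["test_login_ok", "test_login_wrong_password"],
    "REQ-2": ["test_save_document"],
    "REQ-3": [],          # no tests yet
}

covered = sum(1 for tests in requirement_tests.values() if tests)
coverage = 100 * covered / len(requirement_tests)
print(f"requirements coverage: {coverage:.0f} %")   # 67 %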

Software testing, 2015 347(504)


Ohjelmistotekniikka
11.5 Code coverage metrics
• Code coverage metrics can show that not enough testing has
been done
– To determine that there has been enough testing is another
matter entirely
– 100% code coverage doesn’t usually guarantee anything
• Code coverage is followed in unit testing
• Coverage measurement requires instrumenting the code, which
slows it down and can mask errors
• Although system level testers do not need to worry about
code coverage, it is good to know the strengths and
weaknesses of the different metrics used by developers

Software testing, 2015 348(504)


Ohjelmistotekniikka
On code coverage in general
• Good unit testing must be required from programming teams
and subcontractors
– Quality of tests is a better strategy than just looking at the
metrics
• And tests are just one method of quality assurance: code
reviews, static analysis etc. are also important
• Unit level metrics are produced automatically by tools
integrated into unit testing and/or low level integration testing
– Very easy to use as requirements for moving to the next phase
(quality gates)

Software testing, 2015 349(504)


Ohjelmistotekniikka
Why excessive focus on code coverage
is dangerous 1/2
• When code is developed sensibly, its amount is
minimized and most of the work is done in libraries.
• Then the coverage metrics for own code tell nothing:
sometype MyFunction(optional String myString) {
    return LibraryFunction(myString);
}
• When a test visits the return line, everything has been tested.
Or has it? How does LibraryFunction() behave with all
possible inputs?

Software testing, 2015 350(504)


Ohjelmistotekniikka
Why excessive focus on code coverage
is dangerous 2/2
• Instead of coverage numbers the focus has to be on
developing good tests and only use the numbers as auxiliary
information.
• Beware especially of using coverage metrics as major testing
requirements or criteria for evaluating subcontracted work.
• Remember also that some things are better tested at system
level
– That’s what counts!
– At system level code coverage is measured only occasionally.

Software testing, 2015 351(504)


Ohjelmistotekniikka
Example setup
• We demonstrate different coverage metrics by presenting the
program to be tested with this flowchart
• In testing, we are interested in the behaviour of the program
corresponding to different execution paths
• An execution path describes the control flow of the program
from its start to the end
(Flowchart of the example program: after initialisations, while students
are left, get the next student's data, check whether the student
participated in the exam, if so define the grade, and update the
student's data; when no students are left, calculate the results and the
grade distribution, then end. Example from [Haikala&Märijärvi 06])
Software testing, 2015 352(504)
Ohjelmistotekniikka
Statement coverage
• Which percentage of the statements is covered by tests?
• If every statement is executed at least once in some test run,
statement coverage is 100 %
• If the program contains 100 statements, 63 of which are executed
by test cases, statement coverage is 63 %
(The slide shows the same flowchart as above.)

Software testing, 2015 353(504)
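As a small sketch of the idea, the function and test below are invented; a tool such as coverage.py can compute the numbers automatically.

def classify(value):
    result = "unknown"           # statement 1, executed
    if value >= 0:               # statement 2, executed
        result = "non-negative"  # statement 3, executed
    else:
        result = "negative"      # statement 4, never executed by the test below
    return result                # statement 5, executed

# This single test executes 4 of the 5 statements: statement coverage 80 %.
assert classify(5) == "non-negative"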


Ohjelmistotekniikka
Decision coverage
• Also called branch coverage
• Branches of control determine test coverage
• Decision coverage requires that every decision gets both values
in testing
• So check e.g. if statements and see that test cases cover both
branches
(Figure: a decision node with the condition a==0 && b>0 and yes/no branches)

Software testing, 2015 354(504)


Ohjelmistotekniikka
Condition coverage
• Continuation of decision coverage, but now looking at how the
possible values of ”partial conditions” are handled in testing
• Do we test situations where
– a == 0
– a != 0
– b > 0
– b <= 0
(Figure: the same decision node a==0 && b>0 with yes/no branches)

Software testing, 2015 355(504)


Ohjelmistotekniikka
Multiple condition coverage
• Multiple condition coverage requires that all combinations of
all conditions in a decision are covered
• So it's not enough to test all conditions separately, but also
their combinations, as follows:
– (a=0, b=0),
– (a=1, b=0),
– (a=0, b=1) and
– (a=1, b=1)
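The criteria above can be illustrated with a small sketch; the function below is invented and uses the same condition a == 0 && b > 0, written in Python.

def check(a, b):
    if a == 0 and b > 0:
        return "accepted"
    return "rejected"

# Decision (branch) coverage: the whole decision must be both True and False.
assert check(0, 1) == "accepted"   # decision True
assert check(1, 1) == "rejected"   # decision False

# Condition coverage: each partial condition takes both values
# (a == 0, a != 0, b > 0 and b <= 0 are all exercised by these calls).
assert check(0, 0) == "rejected"   # a == 0, b <= 0

# Multiple condition coverage: all four combinations of the partial
# conditions; the remaining combination (a != 0 with b <= 0) is added here.
assert check(1, 0) == "rejected"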

Software testing, 2015 356(504)


Ohjelmistotekniikka
Path coverage 1/2
• How well are all possible execution paths covered?
• Less used coverage metric
• Full path coverage is in practice impossible to achieve
– Problem I: loops make the number of paths too great
• Many programs are intended to be executed in an eternal
loop, ready to accept external inputs => an infinite number of
paths
• Only achievable within e.g. individual functions
– Problem II: all paths found in code may not be possible in actual
execution
• E.g. taking a specific path requires a parameter to have a
value that the function is never called with

Software testing, 2015 357(504)


Ohjelmistotekniikka
Path coverage 2/2
• How many execution paths does this program have?
– In other words, how many different ways to get from the initial
node to the end?
• If there are n students, and each has either attended the exam
or not: 2^n
• For a 100 student course the number of execution paths is 2^100
• If the loops could be broken away from in the middle, the number
of paths would be even greater: 2^n + 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
(The slide shows the same student grading flowchart, here with Finnish labels.)

Software testing, 2015 358(504)


Ohjelmistotekniikka
11.6 Complexity metrics 1/2
• Tells about the quality of the code.
– Maintainability. Good to check when accepting maintenance
responsibility for code produced by another company
– Tendency to break when changed, almost on its own… or when specific
tools or their combinations are used.
– The probability of containing errors.
– A strong ”smell”, so to say…
– Usually based on static analysis on code structure with tools.
• No significant role in everyday testing, but only in code quality
assurance, and not all that widely even there – except for
safety-critical systems.

Software testing, 2015 359(504)


Ohjelmistotekniikka
11.7 Complexity metrics 2/2
• The idea in testing: focus tests on the most complex parts of the
code and complex components – especially when they first become
available for testing.
• In principle the metrics could be used to figure out the number of
test cases needed.
• Typical metrics: Halstead’s metrics, McCabe’s cyclomatic number.
Number of lines of code tells nothing.
• Development environments need to have a measurement tool ready
or one has to be easy to install.
• More information:
– http://en.wikipedia.org/wiki/Cyclomatic_complexity
– http://en.wikipedia.org/wiki/Halstead_complexity_measures
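A small invented example of what the cyclomatic number means in practice; for Python code a tool such as radon can report these values automatically.

def grade(score):
    # Three decision points (if, if, elif), so McCabe's cyclomatic
    # complexity of this function is 3 + 1 = 4.
    if score > 100:
        raise ValueError("score out of range")
    if score >= 90:
        return 5
    elif score >= 50:
        return 1
    return 0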

Software testing, 2015 360(504)


Ohjelmistotekniikka
11.8 Error seeding 1/2
• Consciously adding errors to the source code in order to
– Try to estimate the number of ”real” errors in the code based on
the number of seeded errors found
– Evaluate the effectiveness of testing
• Let us denote the total number of real errors with Vo and the
total number of seeded errors with Vk, and the numbers of
real and seeded errors found with Vol and Vkl
• Furthermore let us assume that Vo/Vol = Vk/Vkl
• An estimate for the number of real errors not found is
(Vol x (Vk/Vkl)) - Vol
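A small worked example of the estimate, with invented numbers:

Vk  = 20    # seeded errors in total
Vkl = 10    # seeded errors found in testing
Vol = 30    # real errors found in testing

# Assuming Vo/Vol = Vk/Vkl, the estimated total number of real errors is
Vo_estimate = Vol * (Vk / Vkl)       # 30 * 2 = 60
not_found   = Vo_estimate - Vol      # an estimated 30 real errors remain
print(Vo_estimate, not_found)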

Software testing, 2015 361(504)


Ohjelmistotekniikka
11.9 Error seeding 2/2
• Problems of seeding:
– What kinds of errors should be seeded?
– Have all the seeded errors been removed from the deployed
version?
– How much extra work is caused by the method?
– Removing of the seeds changes the program and it needs to be
tested completely again – more time and money
– Ethics: it is demoralising to add defects to the program on
purpose
• One form is to seed errors into other parts of the system (how
does this component manage with those?), but they are better
called test cases.

Software testing, 2015 362(504)


Ohjelmistotekniikka
Mutation testing
• A special case of error seeding
• Different versions of the program are produced, each with a
different error seeded
• Can be used to
– Assess the effectiveness of testing (how many mutants were
detected)
– Test how well the program is able to recover from errors
• Earlier, the generation of mutants has also been used for
generating project assignments on the TUT testing course!
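A minimal sketch of the idea; the function and the mutant below are invented.

def is_adult(age):
    return age >= 18

# A mutant: the comparison operator has been changed from >= to >
def is_adult_mutant(age):
    return age > 18

# A test using only age 20 passes for both versions and does not "kill"
# the mutant; the boundary value 18 detects it.
print(is_adult(20), is_adult_mutant(20))   # True True  -> mutant survives
print(is_adult(18), is_adult_mutant(18))   # True False -> mutant killed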

Software testing, 2015 363(504)


Ohjelmistotekniikka
12. Automation and tools

In this part we introduce test


automation and tools to help in
testing. Due to the great number
of tools available, the purpose is
just to give an overview. One tool
is good for one job and another
tool for another.

Software testing, 2015 364(504)


Ohjelmistotekniikka
Orientation
• The early slides are based on material collected by Mika
Maunumaa
• Different views
– Software design describes how things should be
– Testing questions if things are as they seem
• Testing has traditionally been handwork
• The promise of test automation is to remove handwork and
solve testing problems
– When a program is testing a program, money and time are saved
– A program can perform testing without errors, faster and for a
longer time
– Everything cannot be tested by hand
• stress and performance testing etc.

Software testing, 2015 365(504)


Ohjelmistotekniikka
12.1 Test automation as a whole
• Test automation complements manual testing; it is not a
replacement
– Tests have to be designed beforehand
– Only the best tests are automated
– The role of manual work changes, but is not removed
• A program whose purpose is to ”run” another program
– Both are created (partially) from the same requirements
– There is a big difference in the point of view and the goals
• Manual testing and test automation require different skills
• Test automation is not the silver bullet of testing

Software testing, 2015 366(504)


Ohjelmistotekniikka
12.2 What is test automation
• In a basic case a testing software executes tests on the
system under test.
– Test cases are not executed manually.
• Tests are still typically designed and implemented manually.
– Small scripts for test cases.
• Used on all levels of testing
– Running unit tests from JUnit or QTest is test automation.
– Running tests on an integration server.
– Automating user interface tests.
– Running load tests.
– Etc…
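For example, a minimal automated unit test in Python's standard unittest framework might look like the sketch below; the add function is invented.

import unittest

def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    def test_add_small_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()   # run e.g. with "python -m unittest"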

Software testing, 2015 367(504)


Ohjelmistotekniikka
Different ways to control the SUT
(simplified)
• Test bench:
– Test driver calls functions or other interfaces in the SUT. Unit
testing works like this.
• Instrumented:
– The SUT or the operating system has a driver that the test
program asks to e.g. find a button on the screen and click it. May
work by calling the API of the button or by emulating a mouse
click.
• Test robot:
– A physical robot identifies objects on the screen and uses the
program with a finger.
…Each of these has its applications and pros and cons

Software testing, 2015 368(504)


Ohjelmistotekniikka
12.3 Promises of test automation
• Regression testing becomes easier; more tests, more often
• Possible to run tests that would be impossible manually
– For example stress testing of web sites
• The errors made during testing are reduced, machine is more
precise than man
– On the other hand new kinds of errors become possible
• More efficient use of resources
• Repeatability and integrity of the tests
• Tests can be executed on different hardware or software platforms
• Reusability of tests when the test target remains the same
• Confidence of correctness increases, shortening the time to market

Software testing, 2015 369(504)


Ohjelmistotekniikka
12.4 Common problems of test
automation
• Unrealistic expectations
– Tools solve all the problems
• Bad ways of testing
– ”Automating chaos just gives faster chaos”
• Automation is expected to find a lot of new errors
• False sense of security
– Since automation did not find errors, there are none
• Maintenance of automated tests
• Technical problems
– Buggy testing software, compatibility
• Organizational problems
– Lack of support from the management, the culture of the organization,
training
Software testing, 2015 370(504)
Ohjelmistotekniikka
12.5 Limits of automation 1/3
• Does not remove the need for manual testing
– Not everything should be automated
• Tests executed rarely
• The software to be tested is a ”moving target”
• The results of a test are easily interpreted by a human but
very difficult for a machine (for example the quality of sound
or picture)
• Tests that require physical activity
– There is no need to automate everything
• Only the best and most often repeated tests
• Testability of the SUT needs special emphasis, the lack of it
may result in failure of automation

Software testing, 2015 371(504)


Ohjelmistotekniikka
Limits of automation 2/3
• Manual tests find more errors
– An error is found first with a manual test that is then automated
for regression testing
– James Bach reports his experiences from Borland: automation
only found less than 20% of all errors that were discovered
during the project even though there had been investments to
automation for many years (James Bach, Test Automation
Snake Oil, 1999)
• The new features contain more errors than the old ones
• A manual tester can find the new errors while automation is
still being adjusted to test new areas

Software testing, 2015 372(504)


Ohjelmistotekniikka
Limits of automation 3/3
• Test automation might limit the software development
– Tests are sensitive to slight changes in the software
• Initializing an automated test may be a heavier operation
than a manual test
• Maintenance requires work
• Tools have no imagination
– Tools only do what they are programmed to do – a human can
vary test execution and act as an intelligent observer
• Automation does not increase effectiveness
– The price of a test run and test execution time vs. the price of the
design and maintenance

Software testing, 2015 373(504)


Ohjelmistotekniikka
12.6 Testability of software
• Software needs to be designed so that test automation can
– Control it, perform tasks
– Find out the current state of the program
– Collect data from the user interface and data storages to find out what
happened during the tests
• This challenge has been known for a long time
– Yet e.g. in new operating systems this is usually forgotten
• Many features that improve testability also improve maintainability

Software testing, 2015 374(504)


Ohjelmistotekniikka
Important features for testability 1/4
• General simplicity and clarity
– Test automation will not work with a complex and confusing
program
• Architecture:
– Architecture which allows bypassing the user interface for logic
testing
– Modularity, so that components can be tested separately
• APIs
– API, that can be accessed from an external program (in system
tests in addition to unit tests)
– Standardized, well-known communication protocols and data
formats that are easy to use
– Testing interface (must not form a back door for hackers)
Software testing, 2015 375(504)
Ohjelmistotekniikka
Important features for testability 2/4
• Databases
– Standard databases from which data is easily retrieved
– Test data can be generated without the application itself
• User interfaces
– User interface technology that allows the elements of the user
interface to be identified programmatically
– Possible to enter data and use a suitable API to simulate
keyboard and gestures
– Reading data from the elements with the API. If that’s not
possible, one has to take screenshots and use OCR to find texts
in the fields, or compare bitmaps
– Pitfalls: dynamically generated window classes, same identifier
on many objects, custom controls, varying placement of controls.

Software testing, 2015 376(504)


Ohjelmistotekniikka
Important features for testability 3/4
• Logs
– Logging the execution is important in long test runs and to find
out what happened under the hood
– Possibility to save temporary files to be examined with a text
editor
• Configuring the execution environment
– Configuration with simple configuration files that can be
generated for tests
• Coding practices
– Clear, structured, modular code that is easy to understand and
where parts to be tested can be identified
– Clear naming conventions
– No hardcoded resource descriptors but descriptions in files,
where they can be replaced with test versions
Software testing, 2015 377(504)
Ohjelmistotekniikka
Important features for testability 4/4
• Non-intrusive testing
– In testing e.g. safety-critical software it is not good to
instrument the software for testing. The software must be
testable in a specific configuration just by ”using” it, without
touching its components
– Here it’s important to be able to test the user interface without an
API, just by simulating a user, e.g. with a physical robot.
– And the program state must be verifiable without extra logs etc.

Software testing, 2015 378(504)


Ohjelmistotekniikka
12.7 Test automation – different kind of
software engineering
• The implementation of test automation is a software
engineering project; scripts require
– Programming and design skills
– Testing skills
– Skills in documentation
– Maintenance skills
• Who tests the test program?

Software testing, 2015 379(504)


Ohjelmistotekniikka
Test automation in different phases of the
software engineering process 1/3
• Unit testing:
– The developers should take care of the unit testing
– The developers are often not interested in testing their own code
manually
• Manual testing is seen as a boring way to waste a lot of time
– Tools are needed to make automated unit testing as easy as
possible
• Preferably these tools should be integrated into the
developers' development tools as seamlessly as possible
– Editors, compilers, IDEs (Integrated Development
Environment)

Software testing, 2015 380(504)


Ohjelmistotekniikka
Test automation in different phases of the
software engineering process 2/3
• Several commercial and free tools are available
• One example is the family of xUnit frameworks
– The idea behind them is to move the responsibility to design,
code, run, and analyse tests to the developers
– Automated regression tests are gained for free along the way

Software testing, 2015 381(504)


Ohjelmistotekniikka
Test automation in different phases of the
software engineering process 3/3
• Integration testing:
– Unit testing tools can also be used in integration testing
– Automatic generation of drivers and/or stubs
– Before running actual integration tests, it is best to run a smoke
test on the system under test in order to find out if it is fit to
enter integration testing
– In continuous integration, developers do a pre-integration on their
own workstations during the normal course of coding and after
that they pass the code to real integration on a server
• Probability of successful integration is raised
• As important as the testing tools, is the engine that runs the
integration
• System testing – more on that later
Software testing, 2015 382(504)
Ohjelmistotekniikka
12.8 Approaches to automation
• Especially in GUI test automation, different approaches can
be identified
– Ease of use, maintainability and the ability to find errors in
different situations vary
• In the following test automation is divided into five approaches
– Capture and replay
– Structured scripts
– Data-driven
– Keywords and action words
– Model-based

Software testing, 2015 383(504)


Ohjelmistotekniikka
Capture and replay 1/2
• The most primitive (and notorious) form
– Capture a certain sequence of inputs and replay the sequence
the same way
– Change in the target often requires a new captured recording
– Usually the error is found during the recording
• Repetition of the captured sequence does not find new errors
• The goal of the test is just to execute a certain sequence and
check the result (if checking has been implemented)

Software testing, 2015 384(504)


Ohjelmistotekniikka
Capture and replay 2/2
• The scripts created are generally
– Badly structured
– Poorly documented
– Almost impossible to maintain
– Linear
• Doesn’t necessarily require programming skills
– Easy to try out
• Suited for presenting the features of a program
– A finalized test target, whose interface does not change
• However, capturing may be used to create a framework for
building better general-purpose tests
– E.g. by modifying recorded Python code
Software testing, 2015 385(504)
Ohjelmistotekniikka
Scripts 1/2
• Code a script to execute a certain sequence
– Each test case is a small program or function
– A change to the sequence requires a small change to part of the
script
– Input and output information are embedded in the script
– Scripts are well structured and documented
– Typical in unit testing
• Preparing scripts to handle errors is well managed
• Requires skills in software engineering
– The same kind of discipline
– Same skills

Software testing, 2015 386(504)


Ohjelmistotekniikka
Scripts 2/2
• Usage may also require programming skills
• There are two kinds of scripts
– Linear
• Script only inputs data (etc.) and checks what happened
– Structured
• Like linear scripts but the control etc. is coded manually

Software testing, 2015 387(504)


Ohjelmistotekniikka
Data-driven testing
• Separate test case information and execution code
• Skill requirements, roles
– Coding the execution script requires programming skills
– Creating test suites requires testing skills
1) Describing tests with a mass of test data (the most common meaning)
– For example carefully design a large number of entries for a database to
handle
– Executed with a script which inputs each entry and checks the results
– What vs. how
2) Test cases are written into data
– Typically a sequence of low level commands
– Available commands depend on the script
– The script executes the given test case or suite
• Contains implementation for simple navigation in the style of ”press
button X”
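A minimal sketch of case 1) above: the test data is kept apart from the execution script. The parse_price function and the data are invented.

# Test data: kept separate from the execution code.
test_data = [
    # (input string, expected result)
    ("10.00", 10.00),
    ("0",      0.0),
    ("-5",     None),     # negative prices are rejected
    ("abc",    None),     # non-numeric input is rejected
]

def parse_price(text):
    try:
        value = float(text)
    except ValueError:
        return None
    return value if value >= 0 else None

# The execution script loops over the data and checks each result.
for text, expected in test_data:
    result = parse_price(text)
    status = "OK" if result == expected else "FAIL"
    print(f"{status}: parse_price({text!r}) = {result!r}, expected {expected!r}")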
Software testing, 2015 388(504)
Ohjelmistotekniikka
Keywords, action words 1/2

• “Almost all problems in information technology can be solved
by adding a new interface or raising the level of abstraction”
• A keyword describes some action in the interface
– kwSelectMenu ”File”, kwPressButton btOK
– Also comparisons: kwCompareText ”foo.txt”
– Low level operations
• An action word describes a user activity at a higher level
– awSendEmail is the same action on different platforms
– Can be composed of one or many keywords
– Action words represent the vocabulary of a problem domain

Software testing, 2015 389(504)


Ohjelmistotekniikka
Keywords, action words 2/2
• Test cases are defined in terms of action words
– The mapping from action words to keywords is defined in
separate data
– Keywords are implemented in the execution script (e.g. a small
Python function)
• With action words higher portability is reached
– If the user interface implementation for the activity described by
an action word changes, only the mapping to keywords has to be
updated – tests don’t need to be modified
• The same tests can be used for different members of the
same product family
– Each member can have its own implementation of keywords and
possibly action words
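A minimal sketch of the layering: keywords are small functions close to the user interface, and an action word is composed of them. All names here are invented, and the keyword bodies only print what a real implementation would drive through a GUI automation library.

def kw_select_menu(name):
    print(f"selecting menu {name!r}")

def kw_type_text(field, text):
    print(f"typing {text!r} into field {field!r}")

def kw_press_button(button):
    print(f"pressing button {button!r}")

# An action word describes a user-level activity and is composed of keywords.
def aw_send_email(recipient, message):
    kw_select_menu("New mail")
    kw_type_text("To", recipient)
    kw_type_text("Body", message)
    kw_press_button("Send")

# A test case can then be written purely in terms of action words.
aw_send_email("user@example.com", "Hello!")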
Software testing, 2015 390(504)
Ohjelmistotekniikka
Model-based 1/2
• Model-based testing is based on a model that describes the
behaviour of the system
– Represented with e.g. state machines
– Originally used in protocol and API testing
– Nowadays used also with embedded systems and in GUI testing
• In its simplest form, the model describes a set of sequences, for
example through the user interface
• Strengths
– Variation of usage
– The variation of inputs
– The control of concurrency
– The models can also contain the expected results
• Smart monkey vs. dumb monkey testing

Software testing, 2015 391(504)


Ohjelmistotekniikka
Model-based 2/2
• Weaknesses?
– Using the tools requires new skills
– May cause organizational changes, if there is no more need to design
test cases
– Requires a new way of thinking about testing
• The test model does not describe a test case but it is closer to a test
suite
– Right level for the model
• Do we model the user interface elements and their inputs
– Quick monkey testing
• … or are the user actions modelled
– The names of the state transitions are action words and
keywords
– Modelling requires competence (quality of models vs. quality of tests)

Software testing, 2015 392(504)


Ohjelmistotekniikka
12.9 Planning of automation 1/2
• Identify the test conditions
– What can be tested
– Prioritize
– Boundary value analysis, equivalence classes
• Design the test cases
– How can the selected subject be tested
– The purpose of the test, the expected result

Software testing, 2015 393(504)


Ohjelmistotekniikka
Planning of automation 2/2
• Create test cases (in model-based testing test models)
– Scripts, data, results, etc.
– Setup and cleanup operations
– Preconditions and postconditions
• Execute tests
• Compare the execution results to the expected results
– Assumption: if both are the same the test is OK
– Notice varying things like time and date
• Regular expressions

Software testing, 2015 394(504)


Ohjelmistotekniikka
Configuration management
• Test material up to date
– Scripts
– Test data
• Software version
– Testware is essentially related to a specific version of the
software

Software testing, 2015 395(504)


Ohjelmistotekniikka
Pre and postprocessing
• The amount of data required is usually very large
– The thorough manual processing of data is time consuming and
error prone
• Preprocessing prepares data
– From text to binary form
– Tables of the database
– Combination of data elements
• Postprocessing checks and formats the result
– Comparisons against the log file (regular expressions)
– Changes in the database
– From binary to (human readable) text form
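For example, postprocessing can check a log line against an expected pattern while ignoring varying parts such as the timestamp; the log format below is invented.

import re

# Check a log line against an expected pattern, ignoring the timestamp.
log_line = "2015-08-18 14:02:31 INFO order 1234 accepted"
expected = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} INFO order \d+ accepted$")

if expected.match(log_line):
    print("log line matches the expected pattern")
else:
    print("unexpected log line:", log_line)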

Software testing, 2015 396(504)


Ohjelmistotekniikka
12.10 Well-known tools
• For functional testing
– HP QuickTestPro, IBM Rational Functional Tester, Robot Framework, etc.
• For performance testing
– HP LoadRunner, IBM Rational Performance Tester, JMeter, etc.
• Test management
– HP Quality Center, TestLink, etc.
• Scripting languages for testing
– Perl, Python, Ruby, Visual Basic, Java, C++
• Test frameworks
– xUnit, Fit(Nesse), QTest
• Continuous integration engines
– Jenkins, Hudson
• Comparison, search and processing tools
– diff, grep, awk
Software testing, 2015 397(504)
Ohjelmistotekniikka
12.11 Automation project 1/2
• Often begins with great promises
– The tool will solve all problems in testing
– Automated tests are cheap
– When an automated test is running through a GUI, it’s easy to
euphorically feel like the quality of the product is improving
before your eyes – unless you understand what’s going on
• Often crashes when the illusion breaks
– Automating bad practices did not improve things
• Automated wrong, bad or erroneous tests
– Chosen tool is wrong or incompatible (and expensive)
– Unpreparedness for maintenance
• Changing scripts is a lot of work
• The dependency of tests on the version of software wasn’t
considered
Software testing, 2015 398(504)
Ohjelmistotekniikka
Automation project 2/2
– ”Let us automate all tests” and other unrealistic goals
– Automators are often more expensive than manual testers, with
the same resources much would have been achieved in manual
testing
– Automation doesn’t recover from errors
• Best start light
– Try out multiple tools
– Pilot project
– Start e.g. by automating smoke tests
– Find out if some other project or organization has used a similar
tool in a similar project
– Testing strategy and commitment by people
• Automation doesn’t reduce need for test planning

Software testing, 2015 399(504)


Ohjelmistotekniikka
12.12 Choosing a tool
(Figure: the tool selection process as a flowchart – Goals, Acquisition plan,
Interviews, First trials with several choices, comparison, Eliminating
alternatives down to a single choice, Acquisition and trial, Pilot project,
Deployment, Improving automation)
Source: Jussi Niutanen. Symbian-sovellusten testauksen automatisointityökalut
[Automation tools for testing Symbian applications]. Master’s thesis, TUT,
Institute of Electrical Engineering, 2005, in Finnish.

Software testing, 2015 400(504)


Ohjelmistotekniikka
Checklist for purchasing test tools 1/3
• Adapted from: Dustin, Rashka, Paul: Automated Software
Testing, Addison Wesley, 1999.
• Estimate how the tool will improve the current situation
• How to choose the right tool
• How much money we are willing to invest in a tool
• How much additional time is required for deployment
• How much expertise is needed to use the tool
– Is new workforce needed?
• How much does it cost to train the users

Software testing, 2015 401(504)


Ohjelmistotekniikka
Checklist for purchasing test tools 2/3
• How to evaluate the usefulness of the tool before the decision
to acquire it
– It is important to evaluate based on your own needs
– Pilot project
• How to deploy the tool
• Some rules of thumb
– Even good test automation does not replace professional testers
and good testing strategy
• It can complement them, though
– It is unwise to assume that test tools are higher quality than other
tools in software development
– If a graphical user interface is tested, it is wise to assume that it
is volatile

Software testing, 2015 402(504)


Ohjelmistotekniikka
Checklist for purchasing test tools 3/3
– Test automation script is dumb, test cases require good planning
• When planning a manual test, one can always assume that
the tester is a thinking person
– Whenever possible, test scripts should be written with a general
purpose scripting language
• Learning curve, the lack of ready made libraries, the
cooperation of the testers and developers
• The requirement to use a tool-specific language might be a
good reason to abandon a tool
• Beware of vendor lock-in

Software testing, 2015 403(504)


Ohjelmistotekniikka
Summary
• Test automation does not remove manual testing but
complements it
• Creating automation is more expensive than manual testing
– The repetition of tests is cheaper
• Automation requires skills in software engineering
– The planning and implementation of testware
– Configuration management
• New errors are found with better tests – not by automating the
execution of old ones

Software testing, 2015 404(504)


Ohjelmistotekniikka
12.13 Model-based testing
• Based on Ibrahim K. El-Far, James A. Whittaker:
Model-based Software Testing and [Broekman&Notenboom
02]
• The testers often have in their mind some kind of implicit
model about the behaviour of the system under test
– Such as: if I perform action A, I can then perform either action B
or C
• The idea behind model-based testing is simply to write down
this model and use it as an explicit help for testing
• If the model is accurate enough, it can be
– Shared between several testers
– Reused

Software testing, 2015 405(504)


Ohjelmistotekniikka
Modelling 1/2
• Ideally the model is created by the person who best understands the
desired behaviour of the system under test
• The model should be created already in the development phase, but
if it has not been, creating it only for testing might be beneficial in
some cases
– The profitability of creating a model depends on, for example, how long
the testing is expected to last
– If an implementation has been generated from a model created in the
development phase, testing should not be based only on this model
because then only the correctness of code generation is tested
• Should a model grow too large and complicated, its size can
be limited by concentrating on the most interesting (riskiest) parts of
the system

Software testing, 2015 406(504)


Ohjelmistotekniikka
Modelling 2/2
• Often the most beneficial way for testing is to model the
behaviour of the test target as a state machine
• Formally a finite State Machine (FSM) can be presented as a
quintuple (A, S, I, T, F), where
– A is the set of possible inputs (alphabet)
– S is the set of the states of the system
– I is the initial state of the system
– T is a state transition function A x S → S
• Determines the new state when a new input is received in the
current state
– F is the set of final states of the system
• A state machine is always in one state at one time
Software testing, 2015 407(504)
Ohjelmistotekniikka
Example model 1/2
• An example: a state machine model that describes the functionality
of a some media player, where
– A = {Play, Rec, FF, Rwd, Stop}
– S = {Ready, Play, Record, Fast forward, Rewind}
– I = Ready, F = {Ready}
– T is defined with a table with the name of the button pressed at the top,
the current state at left and the state where the state machine moves to
at the crossing:

Current state | Play | Rec | FF | Rwd | Stop
Ready | Play | Record | Fast forward | Rewind | Ready
Play | Play | Play | Play | Play | Ready
Record | Record | Record | Record | Record | Ready
Fast forward | Play | Fast forward | Fast forward | Fast forward | Ready
Rewind | Play | Rewind | Rewind | Rewind | Ready

Software testing, 2015 408(504)
Ohjelmistotekniikka
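The same transition table can be written down directly as a data structure; the following is a minimal Python sketch, not taken from any particular tool.

# Transition table of the media player: state -> {button -> next state}.
transitions = {
    "Ready":        {"Play": "Play", "Rec": "Record", "FF": "Fast forward",
                     "Rwd": "Rewind", "Stop": "Ready"},
    "Play":         {"Play": "Play", "Rec": "Play", "FF": "Play",
                     "Rwd": "Play", "Stop": "Ready"},
    "Record":       {"Play": "Record", "Rec": "Record", "FF": "Record",
                     "Rwd": "Record", "Stop": "Ready"},
    "Fast forward": {"Play": "Play", "Rec": "Fast forward", "FF": "Fast forward",
                     "Rwd": "Fast forward", "Stop": "Ready"},
    "Rewind":       {"Play": "Play", "Rec": "Rewind", "FF": "Rewind",
                     "Rwd": "Rewind", "Stop": "Ready"},
}

state = "Ready"
for button in ["Rec", "Stop", "Rwd", "Stop", "Play", "Stop"]:
    state = transitions[state][button]
    print(button, "->", state)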
Example model 2/2
• A finite state machine can be drawn as a directed graph:
(Figure: the media player as a directed graph, with the states Ready, Play,
Record, Fast forward and Rewind as nodes and the buttons Play, Rec, FF, Rwd
and Stop as arc labels, corresponding to the table above)
Software testing, 2015 409(504)


Ohjelmistotekniikka
Different types of models 1/2
• UML state machines are an extension of finite state machines
and also allow for example:
– Sub-states of states, a state can contain another state machine
– Concurrent states, a state machine can be in several states at
the same time
• Markov chains, which can be used to model stochastic
systems, can also be seen as extensions to state machines
– Useful, for example, in collecting information about the
probability of errors when estimating the reliability of the system
– So called operational profiles that attempt to model the actions of
the user are one way to use statistical information in testing
• For example if a video tape is at its end, it is more likely that
the user rewinds than uses fast forward

Software testing, 2015 410(504)


Ohjelmistotekniikka
Different types of models 2/2
• Action models are based on variables and actions
– The state of the system is stored in variables
– Each action has a precondition (guard), which determines the
states in which it can be executed, and a postcondition, which
determines the effects of its execution
• For example an action Decrement, which can be executed
when x > 0, and whose execution causes x ← x – 1
– An action model shows the logic of the actions more clearly than
a state machine, but the exact state and its changes are more
difficult to see
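A minimal sketch of the Decrement action above as executable code; the representation is invented and not from any specific tool.

# The system state is kept in variables; each action has a guard
# (precondition) and an effect (postcondition).
state = {"x": 3}

def decrement_guard(s):
    return s["x"] > 0          # the action is enabled only when x > 0

def decrement(s):
    s["x"] -= 1                # postcondition: x <- x - 1

while decrement_guard(state):
    decrement(state)
    print("x is now", state["x"])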

Software testing, 2015 411(504)


Ohjelmistotekniikka
Generation of tests 1/3
• Once the behaviour has been modelled using a finite state
machine, the design of test cases is easy
– Test cases can be generated automatically
• Each test case corresponds to a path in the directed graph
– In order for the test cases to be independent from each other,
they should always start from the initial state (or some other
known state)

Software testing, 2015 412(504)


Ohjelmistotekniikka
Generation of tests 2/3
• An example of a test case where the player is first set to
record, then rewind and then play:

(Figure: the path of this test case highlighted in the directed graph, with
its arcs numbered 1–5)

Software testing, 2015 413(504)


Ohjelmistotekniikka
Generation of tests 3/3
• A test case can be executed manually by following the path in
the state machine
• Alternatively the path can be coded into a test script, for
instance, that calls the corresponding functionality in the order
that is described by the arcs of the path
• The best thing is that it is easy to automate the production of
tests
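As a minimal sketch of automated test production, a random walk over a cut-down version of the media player model generates test sequences; the code below is illustrative only.

import random

# A reduced transition table (Ready and Play states only) so the example
# runs on its own.
transitions = {
    "Ready": {"Play": "Play", "Stop": "Ready"},
    "Play":  {"Play": "Play", "Stop": "Ready"},
}

def generate_test(length=5, initial="Ready"):
    state = initial
    steps = []
    for _ in range(length):
        button = random.choice(list(transitions[state]))
        state = transitions[state][button]
        steps.append((button, state))
    return steps

print(generate_test())   # e.g. [('Play', 'Play'), ('Stop', 'Ready'), ...]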

Software testing, 2015 414(504)


Ohjelmistotekniikka
Following the execution in the model
1/2
• Generally speaking, a model does not help with collecting the
test results
• However, the test target can be instrumented so that its
behaviour can be examined from the state machine
– Illegal behaviour can be detected easily for example in a
situation where the test target attempts to move the state
machine from a state to another with an input that should keep
the state the same

Software testing, 2015 415(504)


Ohjelmistotekniikka
Following the execution in the model
2/2
• From the state machine one can easily see not only the test
cases that have been executed but also those that have not
yet been executed
• Test coverage in a state machine can be defined with inputs
given, states reached or the arcs passed
– For example, one may require that all possible inputs have been
given, all states must have been reached and half of the arcs
have been passed in the tests
– Note! The coverage in terms of the state machine elements may
in practice mean something very different from the coverage in
the test code

Software testing, 2015 416(504)


Ohjelmistotekniikka
Nondeterminism and real time systems
• Nondeterminism
– If it is possible to move from one state to any of two or more
states with the same input, the state machine is considered
nondeterministic
– This is one way to model events happening in the surrounding
environment, such as failures in communication networks
• For modelling real time, there are different extensions for
state machines
– Features such as: the program cannot remain in state A more
than 3 ms

Software testing, 2015 417(504)


Ohjelmistotekniikka
State space explosion 1/2
• Unfortunately state machines suffer from so called state
space explosion problem
• State space explosion means that the behavioural model of a
non-trivial system grows so large (lots of states and
transitions) that it cannot be handled efficiently with current
algorithms
• One solution: the state machine is reduced in size by either
– Not modelling something
– Abstracting behaviour
• For example a complex input to the test target is modelled
only with two options: legal input and illegal input

Software testing, 2015 418(504)


Ohjelmistotekniikka
State space explosion 2/2
– In general reducing the size of a state machine is a difficult task
that requires precision
• It is easy to create a smaller state machine that does not actually
correspond to the assumed behaviour of the system under test
– Another commonly used technique is the generation of state
space on-the-fly only to the extent that is needed in the current
situation

Software testing, 2015 419(504)


Ohjelmistotekniikka
Errors detected by state machine
testing 1/2
• Some errors detected by state machine testing (the
information needed to notice such errors is not always
available):
– States of state machine with no incoming transitions
• The model is incomplete, maybe the implementation too
• For example in the case of the VCR a Pause state which cannot be
reached with any input from the other states
– A state is missing from the state machine
• Maybe the implementation contains something that has not been
specified
• A pause state has been implemented in the VCR but has not been
included in the state machine

Software testing, 2015 420(504)


Ohjelmistotekniikka
Errors detected by state machine
testing 2/2
– An extra state in the state machine
• Something might not have been implemented
• A pause state has been included in the state machine but it has not
been implemented
– The state transition leads to a wrong state
• There is a flaw in the model, maybe also in the implementation
• With the input Play the state machine moves from the Play state to
Ready state
– A missing state transition
• A model is flawed, maybe also the implementation
• From the Play state one cannot go to Ready state with the Stop
button

Software testing, 2015 421(504)


Ohjelmistotekniikka
Off-line vs. online testing 1/2
(Figure: in off-line testing, the test behaviour model is used to Generate
Test Suite → Execute Test Suite → Evaluate Test Results → Report Results;
in online testing, the test objectives drive a loop of Select Next Test
Step → Execute Step on Model & SUT → Evaluate Result → Objectives Achieved?)

Adapted from: Alan Hartman, Mika Katara, and Sergey Olvovsky. Choosing a Test Modeling Language:
a Survey. In Proceedings of the Haifa Verification Conference 2006, IBM Haifa Labs, Haifa, Israel,
October 2006. Number 4383 in Lecture Notes in Computer Science, pages 204-218. Springer 2007.

Software testing, 2015 422(504)


Ohjelmistotekniikka
Off-line vs. online testing 2/2
• In off-line testing, the test cases/suites are generated first and
then they are executed as in conventional automation
• In online testing, the test generation and execution advance in
lockstep
– There are no test cases as such, only long test runs
• Off-line is more compatible with existing testing processes,
thus easier to deploy
• Online enables long duration testing and is better suited for
testing nondeterministic systems
– A nondeterministic test target can provide many correct reactions
to the given input

Software testing, 2015 423(504)


Ohjelmistotekniikka
Tools? 1/2
• Commercial model-based testing tools include
– Conformiq Designer
http://www.conformiq.com/products/conformiq-designer/
• UML state machines extended with Java as test models
• Test generator for producing tests in e.g. TTCN-3 language
– Smartesting CertifyIt
http://www.smartesting.com/index.php/cms/en/product/certify-it
– ALL4TEC MaTeLo http://www.all4tec.net/index.php/en/model-
based-testing/20-markov-test-logic-matelo

Software testing, 2015 424(504)


Ohjelmistotekniikka
Tools? 2/2
• Free open source programs
– TEMA toolset http://tema.cs.tut.fi/ developed at TUT
– OSMO Tester http://code.google.com/p/osmo/ developed at VTT
– fMBT https://01.org/projects/fmbt developed at Intel
– ModelJUnit http://www.cs.waikato.ac.nz/~marku/mbt/modeljunit/
– Etc… the range of tools is continually growing
• Probably a bigger problem than finding the right tools is to get
the testers to create test models

Software testing, 2015 425(504)


Ohjelmistotekniikka
13. Testing of information security

Just a couple of years ago


information security was of
interest to only a small group
of people. Unfortunately
nowadays everyone must be
interested in it. So how should
things related to it be tested?

Software testing, 2015 426(504)


Ohjelmistotekniikka
The great challenge of security testing
• Testing must find all security flaws in the system, a
single one is enough for the opponent
• Security failures are usually much less noticeable than e.g.
failures in functionality or performance
• Functional problems may be tolerable, but a single security
problem may be too much

Software testing, 2015 427(504)


Ohjelmistotekniikka
13.1 What is included in information
security? 1/2
• See Wikipedia article
https://en.wikipedia.org/wiki/Information_security
• Basics:
– Availability: the information is available when it’s needed.
– Confidentiality: the information can only be accessed by those
who have the right to do so
– Integrity: the information may not be changed by accident or
attack, or at least the change must be noticeable; sometimes
integrity is defined as logical consistency (internal integrity) and
correctness (external integrity)

Software testing, 2015 428(504)


Ohjelmistotekniikka
What is included in information
security? 2/2
• In addition:
– Non-repudiation: a person cannot successfully deny having
performed a specific action. In the end this depends on what is
accepted as evidence in court.
– Authentication: a person (user of the information system) can be
connected to a user ID (which may be anonymous) and can
possibly be shown to be a specific natural or legal person.
• Areas of information security include security of workstations,
security of communications network, environmental security
and application security.
– Here we are mostly interested in application security

Software testing, 2015 429(504)


Ohjelmistotekniikka
13.2 Testing of information security is
important
• Computers (PC, tablet, smartphone etc.) are routinely used to
handle confidential and secret information
• Information is moved to cloud and back
• Information systems are full of information critical to people
and business
• All digital devices will soon be networked
• If you neglect security testing, hackers will do it for you!

Software testing, 2015 430(504)


Ohjelmistotekniikka
… but often forgotten
• Software can work otherwise perfectly, but may still contain
errors that endanger information security
• The requirements related to information security are not yet
widely understood
– The internet is full of software made with no regard for
information security
– For example operating systems have to be updated constantly
with security patches

Software testing, 2015 431(504)


Ohjelmistotekniikka
13.3 Targets of information security
testing
• Network traffic.
• Server environments.
• User’s digital environments.
• Mobile devices.
• Digital devices.
• Applications.

• These slides mostly focus on application level.

Software testing, 2015 432(504)


Ohjelmistotekniikka
13.4 Nature of testing information
security
• Testing is a broad whole in which
– The information risks of the system are estimated – what information
is important, whom does it attract, what must be protected above
all.
– Plans and implementation are verified – have appropriate protections
and designs been used, is the implementation up to spec.
– The working of all protection mechanisms is tested.
• Especially functional testing.
• Specific attacks are tried out.
– Estimation and testing of usability is also included: human errors
are an essential means of gaining access to information.
– Likewise load testing is used to check that the system can
handle denial of service attacks.
Software testing, 2015 433(504)
Ohjelmistotekniikka
13.5 Based on a risk analysis
• The needs of information security testing
can be considered with a risk analysis in
order to find the most important things to
do to protect information and business
• What information is handled?
• What risks are related to the information?
• Which information needs the most
protection?
• Risk analysis shows which things need to
be tested above all

Software testing, 2015 434(504)


Ohjelmistotekniikka
13.6 Lots of guidelines
• Information security can and must be ensured on many levels,
e.g. physical location of server vs. firewalls vs. user
authentication
• The Finnish government has directed the users and providers
of information systems in this matter (in Finnish):
http://www.2014.vm.fi/vm/fi/16_ict_toiminta/009_Tietoturvallisuus
/02_tietoturvaohjeet_ja_maaraykset/index.jsp
• When developing a new system, the presence of the correct
protection mechanisms must first be reviewed, and then it must be
tested whether everything has been properly implemented and really
works.

Software testing, 2015 435(504)


Ohjelmistotekniikka
13.7 OWASP – Security of web pages
• https://www.owasp.org, The Open
Web Application Security Project
• An organization for improving the
security of web applications
• Famous for its Top 10 list of threats
that many organizations use as the
basis of security testing
• Many tools are also focused on
testing these threats
• In Finland: OWASP Helsinki
– https://www.owasp.org/index.php/Helsinki
– Lots of useful material available on the page
Software testing, 2015 436(504)
Ohjelmistotekniikka
OWASP – Vulnerabilities of web pages
• As an example OWASP Top 10 – 2015
(http://owasptop10.googlecode.com/files/OWASP%20Top%2
010%20-%202015.pdf)
– A1 Injection
– A2 Broken Authentication and Session Management
– A3 Cross-Site Scripting (XSS)
– A4 Insecure Direct Object References
– A5 Security Misconfiguration
– A6 Sensitive Data Exposure
– A7 Missing Function Level Access Control
– A8 Cross-Site Request Forgery (CSRF)
– A9 Using Components with Known Vulnerabilities
– A10 Unvalidated Redirects and Forwards
• These cannot be covered in more detail here, but those
interested should learn more.
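• As a small illustration of A1 (Injection) only: a test can probe a login function with classic injection payloads. The sketch below is only an illustration; authenticate() stands for a hypothetical login function of the system under test.

    # Typical injection probes; with a vulnerable string-concatenated SQL
    # query the first payload alone can bypass authentication.
    INJECTION_PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --", "admin'--"]

    def test_login_rejects_injection(authenticate):
        """authenticate(username, password) is the hypothetical function
        under test; none of the payloads may result in a successful login."""
        for payload in INJECTION_PAYLOADS:
            assert authenticate(payload, "whatever") is False, payload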
Software testing, 2015 437(504)
Ohjelmistotekniikka
OWASP Testing guide
• A thorough guide to testing of information security:
• https://www.owasp.org/index.php/OWASP_Testing_Guide_v4
_Table_of_Contents
• Note that the guide only covers ”technical testing” of
information security and vulnerabilities – the testing process
should begin with an information risk analysis.

Software testing, 2015 438(504)


Ohjelmistotekniikka
13.8 OWASP – Mobile security
• Mobile security testing project page:
• https://www.owasp.org/index.php/OWASP_Mobile_Security_
Project#tab=Home
• Still (in spring 2015) ”work in progress”.

Software testing, 2015 439(504)


Ohjelmistotekniikka
13.9 Threats to PC applications
• Unlike in traditional testing, security testing benefits less from
specifications
• Instead of checking whether the software complies with the
specification one should examine
– The side effects allowed by the implementation
• Buffer overflow may enable the execution of foreign code
– How software interacts with its environment
• Operating system calls, network traffic etc.

Software testing, 2015 440(504)


Ohjelmistotekniikka
Threats through the user interface 1/2
• The protection of the user interface from unauthorized users
for example with a password
– Does the user authentication work properly?
– Are weak passwords accepted?
• Is e.g. the username accepted as a password?
– If a user may access only some of the information (for example a
home directory in a multiuser operating system) do the
restrictions work correctly?
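• A minimal sketch of such checks; set_password(user, password) is a hypothetical SUT call that should refuse weak passwords:

    WEAK_PASSWORDS = ["", "123456", "password", "alice"]   # "alice" is also the username

    def test_weak_passwords_are_rejected(set_password):
        """The username itself, empty strings and trivial passwords
        must never be accepted as a password."""
        for candidate in WEAK_PASSWORDS:
            assert set_password("alice", candidate) is False, candidate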

Software testing, 2015 441(504)


Ohjelmistotekniikka
Threats through the user interface 2/2
• Is it possible to input illegal inputs such as too long strings?
– A result can be a buffer overflow, which can lead to execution of
foreign code
– The system may also crash, which can mean a denial of service,
that is the services offered by the system become unavailable
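• A sketch of such a negative test; handle_input(text) is a hypothetical SUT entry point, and the point is that oversized input must lead to a controlled rejection, never a crash:

    def test_overlong_input_does_not_crash(handle_input):
        """A crash here could mean denial of service; a buffer overflow
        could even allow execution of foreign code."""
        for size in (1_000, 100_000, 10_000_000):
            try:
                handle_input("A" * size)   # far beyond any sensible field length
            except ValueError:
                pass                       # an explicit, controlled rejection is fine
            # any uncontrolled crash fails the test run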

Software testing, 2015 442(504)


Ohjelmistotekniikka
Threats through the file system
• The connections of the software to the file system are often
poorly tested
• Yet secrets such as passwords, license keys, etc. are saved
in files
• In the Windows world, the registry is an especially bad place
to save secrets
– In the Unix world the same naturally applies to configuration files
of programs (.emacs, etc.)
• Fuzz testing based on file formats can be used to test the
robustness of the implementation of the file operations
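• A very small file-format fuzzer sketch; parse_file(path) stands for the hypothetical file-reading operation of the SUT. Real fuzzing would use dedicated tools, but the principle is the same: mutate a valid file and check that parsing fails in a controlled way.

    import os, random, tempfile

    def fuzz_file(valid_path, parse_file, rounds=100):
        """Flip random bytes of a known-good file and feed each mutant to
        parse_file; a crash or hang indicates a robustness problem."""
        with open(valid_path, "rb") as f:
            data = bytearray(f.read())
        for _ in range(rounds):
            mutant = bytearray(data)
            for _ in range(random.randint(1, 16)):
                mutant[random.randrange(len(mutant))] = random.randrange(256)
            fd, path = tempfile.mkstemp()
            os.write(fd, bytes(mutant))
            os.close(fd)
            try:
                parse_file(path)       # may reject the file, but must not crash
            except ValueError:
                pass                   # a controlled rejection is acceptable
            finally:
                os.remove(path)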

Software testing, 2015 443(504)


Ohjelmistotekniikka
Threats through the operating system
• If the program contains within its memory encrypted secret
information, it is often safe; problems begin when information
is handled unencrypted
– For example password management programs decrypt only as
much information as necessary
• If the resources of the operating system run low, such as the
amount of available memory, the result may be for example
– Denial of service
– Uncontrolled crashing of the program
• Can secret information be dumped to a hard drive in an
unencrypted form?
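• One simple check along these lines; the file path and the secret below are placeholders for whatever the application under test writes and handles:

    def assert_no_plaintext_secret(file_path, secret):
        """Fail if the file (e.g. a crash dump, swap or 'temporary' save)
        contains the secret in unencrypted form."""
        with open(file_path, "rb") as f:
            contents = f.read()
        assert secret.encode() not in contents, f"plaintext secret in {file_path}"

    # e.g. assert_no_plaintext_secret("/tmp/myapp_session.dump", "hunter2")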

Software testing, 2015 444(504)


Ohjelmistotekniikka
Threats through other applications
• Programs make more and more use of other programs
through different interfaces
• What happens if another program becomes a victim of an
attack or crashes?
• The tester has to find out the connections to other programs
and identify situations where these might cause threats
• For example, when communicating through a network one
should make sure the program functions correctly with illegal
packets, too short and too long packet frames etc.
– Fuzz testing is again needed here
– Similar worries are related to regular interfaces

Software testing, 2015 445(504)


Ohjelmistotekniikka
Protecting the software itself
• Sometimes the program contains algorithms, etc. that give a
competitive edge
• In such a case it might be sensible to protect against reverse
engineering trying to figure out the contents of the code using
e.g. a disassembler

Software testing, 2015 446(504)


Ohjelmistotekniikka
Other threats
• Often the attack may come from many fronts at the same time
– For example blocking access to some library and sending an
illegal input through the user interface at the same time
• Sometimes built-in ”features” of the program might offer holes
in security
– Instrumented test code may have been forgotten in the binary
code, offering ready hooks for running test cases
• A developer of a single component might have been ignorant
of the whole to which the component belongs
– If for example the data relayed as a parameter is secret, it must
not be written to disk unencrypted even temporarily

Software testing, 2015 447(504)


Ohjelmistotekniikka
The challenge of openness
• Testing interfaces may also offer routes into the application
past the protections.
• Test scripts in software repository may reveal routes past the
protections.
– A special challenge for open source software – the code is
available to hackers – the community must ensure that there are
no security holes.
– On the other hand openness makes any holes much more likely
to be discovered and fixed; security experts usually favour
openness
• Sometimes the code may hint at authentication used in testing
that may be used by the attacker.

Software testing, 2015 448(504)


Ohjelmistotekniikka
Example attacks on software 1/7
• Block the calls from the application to the (dynamic) libraries
– A tool is used to observe which libraries the application calls in
which situation and the calls to a certain library are blocked
→ Test that the application does not endanger information security
even if the library is not available
– A call to a blocked library usually results in an exception
• Often exception handling hasn’t been tested as well as other
code
• The result may be a crash and dumping of secret information
to the screen or disk
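• At unit level, this attack can be simulated with mocking. The sketch below is a pytest-style test; myapp, its crypto.encrypt call and the SessionError exception are hypothetical names for the application under test.

    from unittest import mock
    import myapp   # hypothetical application package under test

    def test_blocked_library_does_not_leak_secrets(capsys):
        """Make the library call fail as if it were blocked; the application
        must fail cleanly and must not print the secret anywhere."""
        with mock.patch("myapp.crypto.encrypt", side_effect=OSError("blocked")):
            try:
                myapp.save_session(secret="hunter2")
            except myapp.SessionError:
                pass                    # a controlled, expected failure is fine
        assert "hunter2" not in capsys.readouterr().out   # not dumped to screen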

Software testing, 2015 449(504)


Ohjelmistotekniikka
Example attacks on software 2/7
– The application may also appear to keep working normally, even
though the service provided by the library function is not
available
• This can be a sign of unchecked return values, for example
• The result may even be the granting of incorrect privileges to
the user

Software testing, 2015 450(504)


Ohjelmistotekniikka
Example attacks on software 3/7
• Look for suspicious options and their combinations
– Testing the options that can be given through both the command
line and the graphical user interface is difficult because of the
large number of combinations
– If testing only focuses on the options most essential for the user,
it is possible that some options may cause the application to run
untested code
– One should look for combinations of options whose synergy may
compromise information security
– If the software in question is a new version of an old program,
one should examine the previous version and its documentation

Software testing, 2015 451(504)


Ohjelmistotekniikka
Example attacks on software 4/7
• If an option has disappeared from the documentation of the
new version, test if the implementation still supports it and if it
does, does it work safely
– User input might only be verified for some of the options
• If an application asks for an address, and country can be
picked from a preselected set of countries using a menu, the
verification of the postal code might depend on which country
is selected
– A program might accept an overly long postal code
containing control symbols…
» Would it be possible to input SQL queries and
execute them?
– Is the postal code verification algorithm even implemented
for foreign addresses? (See the sketch below.)
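• A sketch of such negative tests; validate_postal_code(code, country) is a hypothetical validation function of the SUT:

    BAD_POSTAL_CODES = [
        "1" * 500,                        # overly long
        "33720\x00\x1b",                  # control characters
        "33720'; DROP TABLE orders;--",   # SQL metacharacters
    ]

    def test_postal_code_is_validated_for_every_country(validate_postal_code):
        """Validation must not be skipped just because a foreign country
        was picked from the menu."""
        for country in ("FI", "SE", "DE", "US", "JP"):
            for code in BAD_POSTAL_CODES:
                assert validate_postal_code(code, country) is False, (country, code)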

Software testing, 2015 452(504)


Ohjelmistotekniikka
Example attacks on software 5/7
• Port scanning
– Applications that use the network must protect themselves
against port scanning
– A port scanner is used as a help for testing
– In particular the use of application specific ports should be tested
(numbers 1024-65535)
– If an application is trying to hide an open port by returning an
error message, the message must be of the standard format so
as not to raise suspicion
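• Dedicated scanners such as nmap are normally used, but the basic idea fits in a few lines; the host name below is a placeholder, and scanning must only be done against systems you are authorized to test:

    import socket

    def scan(host, ports):
        """Return the ports on which the host accepts TCP connections."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:   # 0 means connected
                    open_ports.append(port)
        return open_ports

    # e.g. scan("test-server.example.com", range(1024, 2049))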

Software testing, 2015 453(504)


Ohjelmistotekniikka
Example attacks on software 6/7
• Find alternative ways to do the same thing
– How many different ways are there to open a Word document in
Windows?
– Have all the ways been tested and shown to be safe?
– The authors of the book ”How to break software security” found a
security hole in Windows XP in the following way:

Software testing, 2015 454(504)


Ohjelmistotekniikka
Example attacks on software 7/7
• Because Windows Explorer can be used for many of the same
purposes as Internet Explorer, let's read emails with IE using MS
Exchange Server’s Outlook Web Access interface so that first IE is
started through WE.
• After the session we close the WE and IE windows
• Next we contact the email server with WE and what do you know,
we get in without a password query!
• When we try to read individual messages with a separate window
the password is asked, but when using the preview of Outlook, the
password is not asked
• The lesson: Information security was breached when an alternative
way to do the same thing was used
– Replaced Internet Explorer with Windows Explorer
– Replaced an Outlook window with a preview
• Note! This defect has since been fixed
Software testing, 2015 455(504)
Ohjelmistotekniikka
Top 8 practices for testing information security
1/2
1. Have an expert consult or perform it. This is a difficult subject.
– Advice early in the project, challenging testing later on.
2. Find out the risks – what must be protected reliably.
– Investment in security is chosen based on the level of risk.
– Remember to be suitably paranoid.
– Risk analysis is teamwork.
3. Analyse the technology used in the product – what means do
attackers have to access the information.
– Find out the pitfalls of the technologies.
– Libraries, protocols, programming languages, operating system
weaknesses…
4. Explore and test how the design has prepared for those.
– Functional testing techniques.

Software testing, 2015 456(504)


Ohjelmistotekniikka
Top 8 practices for testing information security
2/2
5. Review and inspect architecture, design, implementation and
security mechanisms.
– Databases, components, communications, use of encryption.
6. Test all protection mechanisms well (passwords, privileges,
encryption…).
7. Test that the system is robust and will not crash on e.g. faulty files
to enable denial of service or hacking.
– Fuzz testing is one means for that.
– Strong negative testing.
8. Examine user activity – human mistakes, realism in the handling of
usernames and passwords.

Software testing, 2015 457(504)


Ohjelmistotekniikka
14. Techniques of static testing

The tool chests of a tester and a quality


assurance professional contain techniques that
complement each other. In the following we
present some group work techniques, as well as
discuss automated code analysis.

Software testing, 2015 458(504)


Ohjelmistotekniikka
14.1 Inspection
• Composed and adapted from the source: Ahtee, Haikala,
Märijärvi: Tarkastukset (training material, in Finnish)
• Inspection – IEEE 610.12-1990 (Standard Glossary of
Software Engineering Terminology):
– A static analysis technique that relies on visual examination of
development products to detect errors, violations of development
standards, and other problems. Types include code inspection;
design inspection.
• Internal inside a project, 3-6 participants
• Very formal compared to other group work techniques
• Flexible scheduling, multiple inspections during the project
• Only small artefacts are inspected at a time

Software testing, 2015 459(504)


Ohjelmistotekniikka
Background and purpose
• Fagan developed the technique at IBM in the 1970’s
• Inspections are considered the most effective technique
known for quality assurance
• Inspections support the process and project in the following
ways
– By seeking to remove the errors at the earliest possible stage
– By making the progress in the project visible
• An accepted inspection corresponds to the reaching of a
milestone
– By sharing information with other people

Software testing, 2015 460(504)


Ohjelmistotekniikka
Inspection process
• An inspection takes around two hours, during which at most
– 50 pages of documentation or
– 500 lines of code
can be inspected
• 50-80% of errors are found
• Inspections consume about 5-15% of the work time
• Very cost effective!
• Although the method does not add very much to the workload,
commitment from the management is needed

Software testing, 2015 461(504)


Ohjelmistotekniikka
Inspections in software engineering
process
[Figure: Inspections in the process ("Do it right the first time?"):
Phase product n-1 (e.g. a specification) → Do → Draft → Inspect →
if OK, Phase product n; otherwise Rework and inspect again.]
Software testing, 2015 462(504)


Ohjelmistotekniikka
What can be inspected?
• Inspection can be applied to e.g.
– An offer, an agreement
– Requirements specification
– Project plan
– Technical specification
– Test plan
– Code
– End user documentation
– Study material

Software testing, 2015 463(504)


Ohjelmistotekniikka
Roles in an inspection meeting
• A chairperson
• The author of the phase product
• Secretary = often the author
• Inspector = everyone
• Presenter
– Can be the author
– Only in the code inspections
• An application domain expert, user experience specialist,
proof-reader, testing expert, etc.
– Someone can focus on checking the memory management,
somebody else can check loops, algorithms, interfaces, etc.

Software testing, 2015 464(504)


Ohjelmistotekniikka
Rules of thumb
• If everything cannot be inspected, focus on the most
important parts
• Careful preparation is very important
– The material to be inspected has to be given out to the
inspectors at least 3 days before the inspection
– The inspection meeting should be cancelled if there are no
prerequisites for its success
• For example phase product is too incomplete
• What is evaluated is the product, not its author
• The goal is to find problems, not solve them
• Fixes for cosmetic errors can be delivered to the secretary
after the inspection meeting
Software testing, 2015 465(504)
Ohjelmistotekniikka
Other notes
• One should consider in advance rules to facilitate the
inspection
– For example “does the requirement really consider the real
needs of the client”
• The rules should be considered carefully according to the
artefact, organization, process, risks, etc.
– With just a few rules one can reach good results (Marko Komssi,
EuroSTAR 2004)
– The precision of the rules can be increased over time
• A version of inspections with no preparation required has also
been developed at IBM:
E. Farchi, S. Ur: “Selective Homeworkless Reviews”, Proceedings of the 2008
International Conference on Software Testing, Verification, and Validation IEEE
Computer Society Washington, DC, USA
Software testing, 2015 466(504)
Ohjelmistotekniikka
Maturity levels of inspections
• Inspections are held
• Some benefit is gained from inspections
• Inspections are effective
(these levels make inspections useful to the project)
• Statistics are collected from inspections
• Defect analyses and checklists are produced according to the experiences
• The collected information is used for improving the inspection and the
software processes
(these levels make inspections useful to the whole organization)

Software testing, 2015 467(504)


Ohjelmistotekniikka
14.2 Review
• Also known as technical review
• IEEE 610.12-1990:
– A process or meeting during which a work product, or set of work
products, is presented to project personnel, managers, users,
clients, or other interested parties for comment or approval.
Types include code review, design review, formal qualification
review, requirements review, test readiness review.
• Is used for ending a phase
– For example functional or technical specification phase
• A formal check to see if all the exit criteria of the phase have
been met
• The goal is to make the advancement of the project visible
– The goal is consensus, not so much finding defects

Software testing, 2015 468(504)


Ohjelmistotekniikka
Reviews and inspections in a project
• [Haikala&Märijärvi 06]:
[Figure: phases of a project on a time axis – pre-study & contract,
requirements specification, design, programming and module testing,
integration, system testing – with inspections held during the phases,
and internal reviews and reviews with the client ending the major phases.]
Software testing, 2015 469(504)


Ohjelmistotekniikka
What is being talked about?
• Note! The terminology is not consistent: in some cases
inspections are called reviews, or vice versa
• Because these are fundamentally different techniques, it is
worthwhile to find out beforehand whether the goal is to
inspect or review
• General principle: To avoid misunderstandings, find out what
the organisation means with any terms

Software testing, 2015 470(504)


Ohjelmistotekniikka
14.3 Walkthrough
• IEEE 610.12-1990:
– A static analysis technique in which a designer or programmer
leads members of the development team and other interested
parties through a segment of documentation or code, and the
participants ask questions and make comments about possible
errors, violation of development standards, and other problems.
• Usually only done with code, with the purpose of finding errors
• The developer explains what they think the program does

Software testing, 2015 471(504)


Ohjelmistotekniikka
Walkthrough vs. inspection
• Compared to an inspection, a walkthrough
– Emphasizes the role of the developer in the event
– Is less formal
– Requires less training
– Finds fewer errors
– Is usually less cost effective

Software testing, 2015 472(504)


Ohjelmistotekniikka
14.4 Static analysis of code
• Idea: analyse the source code automatically without executing
it
• The purpose is
– To find errors from the code
– To notice deviations from the normal coding conventions (style
guides)
– To generate documentation for the code
– To calculate values for code metrics such as length, complexity
etc.
• The techniques used are usually based on information and
control flow analysis, constraint solving, etc.

Software testing, 2015 473(504)


Ohjelmistotekniikka
Lint
• The first static code analysis tool meant for a wide audience
was the Lint tool delivered with Unix
• The purpose of Lint was to find certain errors typical of the
C language that went unnoticed by the compilers of the
time
• A well-known commercial product for C++ and C is PC-Lint /
Flexelint http://www.gimpel.com/html/products.htm
– Still finds many problems missed by compilers

Software testing, 2015 474(504)


Ohjelmistotekniikka

What kind of errors can be found?


• For example
– Syntax errors
– Same code in multiple places, dead code (never executed)
– Maintainability and portability problems
– Uninitialized variables
– Unused return values
– Misuse of pointers
– Buffer overflows, other information security issues
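• A small Python fragment with the kinds of findings such a tool (e.g. pylint or flake8, run without executing the code) would typically report; the issues are marked in the comments:

    import os                      # flagged: 'os' imported but never used

    def withdraw(balance, amount):
        fee = 2                    # flagged: local variable assigned but never used
        if amount > balance:
            return None
            print("overdraft")     # flagged: unreachable (dead) code
        return balance - amount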

Software testing, 2015 475(504)


Ohjelmistotekniikka
Some tools
• PC-Lint, Coverity, PolySpace, KlocWork, FindBugs
• Commercial tools tend to be expensive, but some open
source tools are also available
• Scalability may become an issue
• The number of false alarms may also be high
• Can pay back for themselves very quickly by finding errors in
safety critical code

Software testing, 2015 476(504)


Ohjelmistotekniikka
Where to find help for documentation?
• For example:
– Javadoc style API-documentation
– Doxygen style graphical model (UML) of the software
• www.doxygen.org
– Call hierarchy, e.g. foo() → bar1(), bar2()
Software testing, 2015 477(504)


Ohjelmistotekniikka
15. Improving testing

Next we go over methods for improving testing.


Continuous improvement of all activity is
important to stave off decay. And sometimes a
need is noticed for a comprehensive evaluation
and improvement of the process with a one-time
improvement project.

Software testing, 2015 478(504)


Ohjelmistotekniikka
General
• Testing activity in an organization must be actively improved.
– Every company will slowly learn to do things better.
– As business, product size and company size grow, testing needs to be
performed better and more efficiently.
– There’s always the danger of getting stuck in old practices.
– Therefore improvement is needed.
• Three kinds of improvement:
1. Continuous improvement – when someone notices that something
doesn’t work well, improve it. For example test execution is the
bottleneck in projects -> do something about it.
2. Improvement project, where all testing activity is evaluated and
ways to improve it considered.
3. Improvement to meet compulsory requirements, e.g. standards of the
domain.

Software testing, 2015 479(504)


Ohjelmistotekniikka
15.1 Continuous improvement
• Potential improvements are identified in project meetings,
reviews and everyday work.
• Talk about them with colleagues and managers and make
plans for improving things.
• Then spread the improvement to other projects and teams.

Software testing, 2015 480(504)


Ohjelmistotekniikka
15.2 Improvement project
• Perform an evaluation of operations, identify strengths and
weaknesses, choose points to be improved and launch
improvements.
• Either:
– A consultant performs the evaluation and improvements are allocated to
personnel. A committee is assembled and e.g. the quality manager
observes their implementation.
– An internal committee is assembled to evaluate the situation and launch
the improvements.
• Methods usually include interviewing representatives of different
occupations and common discussions.

Software testing, 2015 481(504)


Ohjelmistotekniikka
The motive for improvement
• An organization has a reason to begin an improvement project:
– The organization grows.
– Product business expands.
– Product quality is found to be insufficient.
– Testing is a bottleneck.
– Quality of testing feels lacking.
– Management demands improvement.
– An audit of the quality system finds problems in testing.
– Etc…
• The essential thing is a shared feeling that the operations need
improvement and support from the management. Then changes can
be achieved.

Software testing, 2015 482(504)


Ohjelmistotekniikka
Revealing the most important problem
spots as the starting point
• There are no ”best practices”, so it’s good to review the
current situation of the company and consider which
improvements are truly helpful. See an example of this
kind of process (the document also includes much supplementary
information):
• http://mattivuori.net/julkaisuluettelo/liitteet/holistic_assess
ment_of_testing.pdf
• Sometimes the analysis is based on a structure given by
a maturity model etc.
– TPI and Tommi are well-known examples.
– A listing of key areas within TPI is given later as an example.

Software testing, 2015 483(504)


Ohjelmistotekniikka
Professionals know the problems
• The professionals within the organizations should usually
be trusted in improvement:
– They know the real problems.
– They understand what can be improved.
– They must be listened to and supported in order to implement
the improvements.
• Why don’t they improve things if they know the solutions?
– Everyday work in everyday life – part of the culture that ”binds
the hands”.
– Flaws accumulate slowly (a frog in a kettle…).
– Someone who understands improvement is needed as a catalyst
and expert the management listens to.

Software testing, 2015 484(504)


Ohjelmistotekniikka
The steps of improving the test process
1/2
• Find out the current level of the testing process (the whole
process or some part of it)
• Set goals
• Find out the requirements for reaching the goals
– The requirements should be realistic, exact and measurable
– Prioritize the requirements
• Start the process improvement project in the same way as
any other software development project
– Ensure sufficient resources
• Make a plan that describes the steps for reaching the goal
– Include a timetable, budget, risks, etc.

Software testing, 2015 485(504)


Ohjelmistotekniikka
The steps of improving the test process
2/2
• Implement the changes gradually
– So that not as many resources are needed at a time
– The analysis of the results is easier
– Use a pilot project
• Measure the results
– Compare the results to the plan
• If necessary start again from the first step
• If a process is to be improved, it should be ensured that all the
parties involved approve the changes needed
– The acceptance can be gained for example with metrics,
collecting feedback and using it, training and with ”sponsors”
internal to the organization

Software testing, 2015 486(504)


Ohjelmistotekniikka
15.3 Key areas of testing process – an
example with TPI
• TPI, Test Process Improvement is a model for improving
testing.
• We will not cover it as a method, but present its structuring of
testing activity as an example.
– It can work as a framework and checklist for the key areas to be
considered.
– Are the conventions good and suitably established, are people
skilled enough, is the equipment good enough?

Software testing, 2015 487(504)


Ohjelmistotekniikka
List of key areas 1/6
• Key areas (20)
– Testing strategy
• The strategy must concentrate on finding the most important
errors at the earliest possible time and as cheaply as
possible
• Defines which tests cover the requirements and quality risks
• The quality of the master strategy is affected by the quality
and compatibility of the strategies at different levels of testing
– The life cycle model of the testing process
• Planning, preparation, specification, execution and
finalization
• Improves predictability
• Enables adjusting the testing process

Software testing, 2015 488(504)


Ohjelmistotekniikka
List of key areas 2/6
– The early introduction of testing to software development
• Even if the tests are executed in late phases of development,
the testing process must begin a lot earlier
– Calculation and planning
• What to do, when and with what resources (personnel)
• A basis for resource allocation
– The specification techniques for tests
• The evaluation of the quality and ”depth” of test cases
• The reusability of the test cases
– The techniques of static testing
• For example usage of checklists

Software testing, 2015 489(504)


Ohjelmistotekniikka
List of key areas 3/6
– Metrics
• For the testing process the most important metrics describe
the advancement of the process and the quality of the
system under test
• When the process is improved, the metrics are used to
evaluate the effects of the actions taken
– Testing tools
• For example better motivation for testers vs. manual testing
– Test environment
– The office environment of testers
• Motivation, communication, the effectiveness of work
– Commitment and motivation
• Both management and workers (resource allocation etc.)

Software testing, 2015 490(504)


Ohjelmistotekniikka
List of key areas 4/6
– Knowledge and training
• The testing team should consist of people whose knowledge
and skills complement each other, for example knowledge of
the domain and organization, programming and social skills
• Training helps with deficiencies
– Comprehensiveness of methods
• The methods used should be comprehensive enough to
cover all needs and on the other hand be detailed enough, so
that the same things need not be considered again every
time the method is applied
– Communication
• Both inside the test group and external interest groups, such
as developers, customers, users
• Informing about progress and quality

Software testing, 2015 491(504)


Ohjelmistotekniikka
List of key areas 5/6
– Reporting
• Testing is about measuring quality and this information needs
to be disseminated
– Error management
• Means have to be offered for the management to find out the
life cycle of an error
• Finding out the quality trends and their analysis that can help
in providing justified advice to improve quality
– Testware management
• Ensuring maintainability and reusability requires
management
• Version control of testware
– Test process management

Software testing, 2015 492(504)


Ohjelmistotekniikka
List of key areas 6/6
– Evaluation (reviews etc.)
• All phase products, such as requirements and design are
evaluated
• The purpose is to find errors before the actual testing begins
– Low level testing (unit and integration testing)
• The purpose is to find errors as soon as possible
• The error is made, found and fixed usually by the same
person
– Efficient, as not much communication is needed

Software testing, 2015 493(504)


Ohjelmistotekniikka
15.4 Improvement to meet the requirements
of standards
• Following specific safety standards can be a prerequisite for
business especially in safety-critical application development.
• Standards vary by field and country.
• Sometimes there is some choice in which standard to apply,
so it’s best to consider carefully what to commit to.
• One good example of such a standard is IEC 61508-3
• Functional safety of electrical/electronic/programmable
electronic safety-related systems – Part 3: Software
requirements

Software testing, 2015 494(504)


Ohjelmistotekniikka
Requirements of IEC 61508-3 for testing
• It has requirements on the following areas among others:
– Modelling appearance of defects.
– Testing techniques.
– Test coverage.
– Use of model-based testing in the most critical systems.
– Performance testing.
– Static analysis, reviews.
• See ”Testing of safety-critical software – some principles”
• https://noppa.oulu.fi/noppa/kurssi/811601s/luennot/811601S_l
ecture_11__vuori.pdf

Software testing, 2015 495(504)


Ohjelmistotekniikka
15.5 Tester certification – ISTQB

• See www.istqb.org, in Finland http://www.fistb.fi/


• A personal certificate to ”prove” a certain competence
– Three levels: foundation, advanced, expert
• Replaces earlier ISEB certification
• The foundation level certificate is often earned with a multiple
choice exam after a three day course
• The syllabi are public and aim to describe global consensus of good
testing
• ISTQB has also published a large testing glossary

Software testing, 2015 496(504)


Ohjelmistotekniikka
Has been criticized
• Does it really advance best competence?
• Does not address the needs of different contexts
• Does it suit the needs of agile development?
• While supposedly ”non-profit”, is it really just a money-making
tool for training companies?
• What does passing a multiple choice exam prove?

Software testing, 2015 497(504)


Ohjelmistotekniikka
16. Closing words of the course 1/3
• The world of testing is vast and diverse.
• This course can only cover a part of it.
• This course is just the beginning and enables more in-depth
study of testing and at least understanding why and how it’s
done and what things are related to it.
• For many testing becomes a profession where they can fulfil
their goals and motivations
• See the mindmap on next slide.

Software testing, 2015 498(504)


Ohjelmistotekniikka
Closing words of the course 2/3
[Mindmap: "Testing as passion" – motivations that testers often name]

1. Understanding technology, devices and systems
• Curiosity about how things work and what they can tolerate
• Understanding quality in an experimental way, by own hands and brain
• Getting familiar with all kinds of products
• Challenging – and winning – the designers

2. Making a better world
• Better products for people
• Occupational safety and product safety (for certain types of products)
• (Things that depend on the targets of testing)

3. Will to progress
• Safety of new technology (nuclear power)
• Success of mission (space missions, ruling the airspace)
• Managing complex technology
• Advancing technology

4. Professional identity
• Sense of responsibility
• Doing work that has positive effects
• Helping software developers
• Helping customers
• Speaking for the end users
• Success as a team, together

5. Aesthetic of quality
• Aesthetic of being error-free
• Making technology perfect
• Perfectionism (in technology)

6. Will to succeed and be successful
• To be good in something, to be best, to differentiate from others
• Making money
• Making business better
• Success of products
• Joining a successful product development
• Leaving a positive mark (even if it would not be noticed)
• Will to make "the world work"
• Will to control own world

Software testing, 2015 499(504)


Ohjelmistotekniikka
Closing words of the course 3/3
• Could we only use ”best practices”?
• In reality there are no ”best practices”, only common
practices. And at some point the best practice of the field
becomes obsolete.
• An MSc must be able to estimate what methods could be
used in specific environments and situations, so that testing
would fulfil its purpose: produce at the right time and as well
as possible the kind of information required in business,
product and systems development, acquisitions etc...

Software testing, 2015 500(504)


Ohjelmistotekniikka
Top 10 points
1. You need to take testing seriously, but also find joy in it
2. Testing is an ordinary everyday part of mature software
development
3. Begin testing early and do it continuously
4. Testing needs time – only tested is ”done”
5. Understand usage and product risks
6. Prioritize testing and invest in testing the most important
things
7. Developers and clients always think differently – acceptance
testing is a job and challenge for the client
8. There are no silver bullets in testing
9. Good testing is diverse
10. Testing methods must be fit to the context, criticality of the
project, and product requirements
Software testing, 2015 501(504)
Ohjelmistotekniikka
Literature 1/3
• There are huge numbers of books on testing. This is a list of a
couple of generally useful foundational works referred to in the
slides. No book available covers everything.
• [Broekman&Notenboom 02] B. Broekman, E. Notenboom: Testing
Embedded Software (2002) – a view on testing embedded systems,
Multiple V-model
• [Craig&Jaskiel 02] R. Craig, S. Jaskiel: Systematic Software Testing
(2002) – nicely written book on systematic testing
• [Crispin&Gregory 09] Crispin, Lisa & Gregory, Janet. 2009. Agile
Testing. A Practical Guide For Testers and Agile Teams. Addison-
Wesley. 554 p. – a well-regarded work on agile testing
• [Fewster&Graham 99] M. Fewster, D. Graham: Software Test
Automation (1999) – foundational book on test automation

Software testing, 2015 502(504)


Ohjelmistotekniikka
Literature 2/3
• [Jorgensen 02] P.C. Jorgensen: Software Testing: A Craftsman’s
Approach (second edition, 2002) – the view of analytic school on
testing
• [Kaner et al. 02] C. Kaner, J. Bach, B. Pettichord: Lessons Learned
in Software Testing: A Context-Driven Approach (2002) – 293 small
lectures on everything related to testing from the context-driven
school, emphasizes exploratory testing, not suitable as the first book
on testing
• [Myers et al. 04] G.J. Myers, T. Badgett , T.M. Thomas, C. Sandler:
The Art of Software Testing (2004) – a new edition of the classic

Software testing, 2015 503(504)


Ohjelmistotekniikka
Literature 3/3
• [Pezzè&Young 07] M. Pezzè, M. Young: Software Testing and
Analysis: Process, Principles, and Techniques – combines
traditional testing and more formal approaches nicely
• [Rothman 07] J. Rothman. 2007. Manage It! Your Guide to Modern,
Pragmatic Project Management – a great book on software
engineering project management
• [Utting&Legeard 07] M. Utting, B. Legeard: Practical Model-Based
Testing – A Tools Approach (2007) – the first book with a practical
approach to model-based testing
• [Whittaker&Thompson 03] J. Whittaker, H. Thompson: How to Break
Software Security (2003) – ”Hands on” approach for testing
information security

Software testing, 2015 504(504)


Ohjelmistotekniikka
