
TESTING CONCEPTS

Beginner's Guide to Software Testing

By
M.Saravanan
(Software Testing Engineer)


FOREWORD

Beginner's Guide to Software Testing introduces a practical approach to testing software. It bridges the gap between theoretical knowledge and real-world implementation. This article helps you gain an insight into Software Testing - understand the technical aspects and the processes followed in a real working environment.

Who will benefit?

Beginners: those of you who wish to mould your theoretical software engineering knowledge into a practical approach to working in the real world, and those who wish to take up Software Testing as a profession.

Developers! This is an era where you need to be an “all-rounder”. It is advantageous for developers to possess testing capabilities and to test the application beforehand. This will help reduce the overhead on the testing team.

Already a Tester! You can refresh all your testing basics and techniques and gear up for certifications in Software Testing.

An earnest suggestion: no matter which profession you choose, it is advisable that you possess the following skills:
● Good communication skills – oratory and writing
● Fluency in English
● Good Typing skills
By the time you finish reading this article, you will be aware of all the techniques and processes that improve your efficiency, skills and confidence to jump-start into the field of Software Testing.

Beginner's Guide to Software Testing is our sincere effort to educate people and create awareness of the growing importance of software quality. With the advent of globalization and the increase in market demand for software of good

quality, we see the need for all Software Engineers to know more about Software
Testing.
We believe that this article helps serve our motto – “Adding Values to Lives Around
Us”.

A brief history of Software Engineering

As a Software Test Professional, you should know a brief history of Software Engineering, since Software Testing comes into the picture in every phase of Software Engineering.

The software industry has evolved through four eras: the 1950s–60s, the mid 60s–late 70s, the mid 70s–mid 80s, and the mid 80s to the present. Each era has its own distinctive characteristics, but over the years software has increased in size and complexity. Several problems are common to almost all of the eras and are discussed below.

The Software Crisis dates back to the 1960s, when the primary reason for this situation was less-than-acceptable software engineering practice. In the early stages of software there was a lot of interest in computers and a lot of code written, but no established standards. Then in the early 70s many computer programs started failing, people lost confidence, and an industry crisis was declared. Various reasons leading to the crisis included:

- Hardware advances outpacing the ability to build software for this hardware.
- The inability to build software at a pace that kept up with the demands.
- Increasing dependency on software.
- The struggle to build reliable, high-quality software.
- Poor design and inadequate resources.

This crisis, though identified in the early years, exists to date, and we have examples of software failures around the world. Software is basically considered a failure if the project is terminated because of costs or overrun schedules, if the project has experienced overruns in excess of 50% of the original estimate, or if the software results in client lawsuits. Some examples of failures include failures of air traffic control systems, medical software, and telecommunication

software. The primary reason for these failures, other than those mentioned above, is the bad software engineering practices adopted. Some of the worst software practices include:

● No historical software-measurement data.


● Rejection of accurate cost estimates.
● Failure to use automated estimating and planning tools.
● Excessive, irrational schedule pressure and creep in user requirements.
● Failure to monitor progress and to perform risk management.
● Failure to use design reviews and code inspections.

To avoid these failures and thus improve the record, what is needed is a better understanding of the process and better estimation techniques for cost, time and quality measures. But the question is, what is a process? A process transforms inputs into outputs, i.e. a product. A software process is a set of activities, methods and practices involving transformations that people use to develop and maintain software.

At present a large number of problems exist due to chaotic software processes, and occasional successes depend on individual efforts. Therefore, to be able to deliver successful software projects, a focus on the process is essential, since a focus on the product alone is likely to miss scalability issues and improvements to the existing system. This focus helps in the predictability of outcomes, project trends, and project characteristics.

The process that has been defined and adopted needs to be managed well, and thus process management comes into play. Process management is concerned with the knowledge and management of the software process and its technical aspects, and it also ensures that the processes are being followed as expected and that improvements are shown.

From this we conclude that a set of defined processes can possibly save us
from software project failures. But it is nonetheless important to note that the
process alone cannot help us avoid all the problems, because with varying


circumstances the need varies and the process has to adapt to these varying needs. Importance needs to be given to the human aspect of software development, since that alone can have a large impact on the results, and effective cost and time estimates may go totally to waste if the human resources are not planned and managed effectively. Secondly, the problems related to software engineering principles may be resolved when the needs are correctly identified. Correct identification then makes it easier to identify the best practices that can be applied, because a process that is suitable for one organization may not be the most suitable for another.

Therefore, to make a successful product, a combination of process and technical skill is required under the umbrella of a well-defined process.

Having talked about the software process overall, it is important to identify and relate the role software testing plays, not only in producing quality software but also in steering the overall process.

The computer society defines testing as follows: “Testing -- A verification


method that applies a controlled set of conditions and stimuli for the purpose of
finding errors. This is the most desirable method of verifying the functional and
performance requirements. Test results are documented proof that requirements
were met and can be repeated. The resulting data can be reviewed by all concerned
for confirmation of capabilities.”

There may be many definitions of software testing, and many appeal to us from time to time, but it's best to start by defining testing and then move on depending on the requirements or needs.


Contents:

1. TESTING
2. TESTING TYPES
   2.1 Static Testing
   2.2 Dynamic Testing
3. TESTING TECHNIQUES
   3.1 White Box Testing
       White Box Techniques
       3.1.1 Method Coverage
       3.1.2 Statement Coverage
       3.1.3 Branch Coverage
       3.1.4 Condition Coverage
       Types of White Box Testing
       Advantages
       Disadvantages
   3.2 Black Box Testing
       Black Box Techniques
       3.2.1 Boundary Value Analysis
       3.2.2 Equivalence Partitioning
       3.2.3 Decision Table Testing
   3.3 Gray Box Testing
4. TESTING LEVELS
   a. Unit Testing
   b. Integration Testing
      Types of Integration Testing
      1. Top Down Integration Testing
      2. Bottom Up Integration Testing
      3. System Testing
      4. User Acceptance Testing
         Types of User Acceptance Testing
         1. Alpha Testing
         2. Beta Testing
         3. Gamma Testing
   Smoke Testing
   Sanity Testing
   Performance Testing
      a. Load Testing
      b. Stress Testing
   Functional Testing
   Exploratory Testing
   Ad-hoc Testing
   Regression Testing
   Retesting
   Usability Testing
   Install / Uninstall Testing
   Security Testing
   Compatibility Testing
   Comparison Testing
   Incremental Integration Testing
   End-to-End Testing
   Recovery Testing
5. TESTING METHODOLOGY
   5.1 V-Model
   5.2 Waterfall Model
   5.3 The Spiral Model
   5.4 Iterative Model
6. STLC
7. TESTING DOCUMENTS
   Test Plan
   Software Requirement Specification
   Test Case
8. BUG TRACKING
9. BUG LIFE CYCLE
10. TEST SUMMARY RESULT
What is a good test case
Severity
Priority
Client server testing
Web testing
Configuration management
Test deliverables
Why do we go for automation testing
GUI TESTING
Windows Compliance Testing
   Application
   Text Boxes
   Option (Radio Buttons)
   Check Boxes
   Command Buttons
   Drop Down List Boxes
   Combo Boxes
   List Boxes
Screen Validation Checklist
   Aesthetic Conditions
   Validation Conditions
   Navigation Conditions
   Usability Conditions
   Data Integrity Conditions
   Modes (Editable / Read-only) Conditions
   General Conditions
   Specific Field Tests
Testing Interview Questions
What is Regression Testing
CLIENT / SERVER TESTING
WEB TESTING
Black Box Testing: types and techniques of BBT
BVA (Boundary Value Analysis) techniques
Types of Risks in Software Projects
Categories of risks
How domain knowledge is important for testers
How to get all your bugs resolved without any 'Invalid bug' label
What troubleshooting you need to perform before reporting any bug
How to write a good bug report: tips and tricks
How to report a bug
Some bonus tips to write a good bug report
How to write a software testing weekly status report
How to hire the right candidates for software testing positions
Website cookie testing: test cases for testing web application cookies
   Applications where cookies can be used
   Drawbacks of cookies
   Some major test cases for web application cookie testing
Software installation/uninstallation testing
What are the quality attributes
Developers are not good testers. What do you say?
Living life as a Software Tester
How to be a good tester
Need of skilled testers
Steps to effective software testing
Unit Testing: Why? What? & How
Integration Testing: Why? What? And how
Metrics used in testing
Metrics for evaluating application system testing
Positive and negative testing
Table of Content
   Foreword
   1. Introduction
   2. Scope
   3. Arrangement
   4. Normative references
   5. Definitions (A–W)
   Annex A (Informative)

TESTING:

Testing is the process of executing a program with the intent of finding errors.

TESTING TYPES

There are two types of testing:
● Static Testing
● Dynamic Testing

Static Testing:


Verifying documents and code without executing the program is called static testing.

Dynamic Testing:
Testing the functionality by executing the application is called dynamic testing.

Difference between static testing and dynamic testing:


There are many approaches to software testing. Reviews, walkthroughs or
inspections are considered as static testing, whereas actually executing programmed
code with a given set of test cases is referred to as dynamic testing. The former can
be, and unfortunately in practice often is, omitted, whereas the latter takes place
when programs begin to be used for the first time - which is normally considered the
beginning of the testing stage. This may actually begin before the program is 100%
complete in order to test particular sections of code (modules or discrete functions).

For example, Spreadsheet programs are, by their very nature, tested to a large
extent "on the fly" during the build process as the result of some calculation or text
manipulation is shown interactively immediately after each formula is entered.

TESTING TECHNIQUES:
White box testing (also known as clear box testing, glass box testing, transparent box testing, translucent box testing or structural testing)
The process of checking the program code or source code of the application is called white box testing.

WHITE BOX TESTING TECHNIQUES:

Testing the application with knowledge of the code in order to examine its output is called white box testing.

1. Method Coverage:
Method coverage is a measure of the percentage of methods that have been
executed by test cases. Undoubtedly, your tests should call 100% of your methods.
It seems irresponsible to deliver methods in your product when your testing never


used these methods. As a result, you need to ensure you have 100% method
coverage.

2. Statement Coverage:
Statement coverage is a measure of the percentage of statements that have
been executed by test cases. Your objective should be to achieve 100% statement
coverage through your testing. Identifying your cyclomatic number and executing
this minimum set of test cases will make this statement coverage achievable.

3. Branch Coverage:

Branch coverage is a measure of the percentage of the decision points (Boolean expressions) of the program that have been evaluated as both true and false in test cases.

4. Condition Coverage:

We will go one step deeper and examine condition coverage. Condition coverage is a measure of the percentage of Boolean sub-expressions of the program that have been evaluated to both a true and a false outcome (this applies to compound predicates) in test cases.
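
As a small illustration of the difference between these coverage measures, consider the sketch below; the function, names and values are invented for this example. A single test gives full method and statement coverage, but branch and condition coverage need the compound predicate to be exercised in more combinations.

def approve_loan(income, credit_score):
    # Hypothetical function used only to illustrate coverage measures.
    approved = False
    if income > 50000 and credit_score > 700:   # compound predicate
        approved = True
    return approved

# Method and statement coverage: one test that enters the 'if' body calls the
# method and executes every statement at least once.
assert approve_loan(60000, 750) is True

# Branch coverage additionally needs the decision to evaluate to False.
assert approve_loan(40000, 750) is False

# Condition coverage needs each sub-expression to take both outcomes:
#   income > 50000     -> True (60000), False (40000)
#   credit_score > 700 -> True (750),   False (650, evaluated because income is high)
assert approve_loan(60000, 650) is False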

Types of white box testing


The following types of white box testing exist:

1. Code coverage:

Creating tests to satisfy some criteria of code coverage. For example, the test
designer can create tests to cause all statements in the program to be executed at
least once.


Code coverage is a measure used in software testing. It describes the


degree to which the source code of a program has been tested. It is a form of testing
that inspects the code directly and is therefore a form of white box testing.
Currently, the use of code coverage is extended to the field of digital hardware, the
contemporary design methodology of which relies on Hardware description languages
(HDLs).

Code coverage techniques were amongst the first techniques invented for
systematic software testing. The first published reference was by Miller and Maloney
in Communications of the ACM in 1963.

Code coverage is one consideration in the safety certification of avionics equipment. The standard by which avionics gear is certified by the Federal Aviation Administration (FAA) is documented in DO-178B.

2. Mutation testing methods:

Mutation testing (or mutation analysis) is a method of software testing which involves modifying a program's source code in small ways [1]. These so-called mutations are based on well-defined mutation operators that either mimic typical
programming errors (such as using the wrong operator or variable name) or force
the creation of valuable tests (such as driving each expression to zero). The purpose
is to help the tester develop effective tests or locate weaknesses in the test data
used for the program or in sections of the code that are seldom or never accessed
during execution.
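
A tiny hand-made illustration of the idea (the function and the mutant below are invented for this sketch): one mutation operator swaps '>' for '>=', and the test data is strong enough only if at least one test case "kills" the mutant, i.e. passes against the original code but fails against the mutant.

def is_adult(age):
    return age > 17                  # original code

def is_adult_mutant(age):
    return age >= 17                 # mutant: '>' replaced by '>=' by a mutation operator

def run_tests(func):
    # Returns True if every test case passes for the given implementation.
    cases = [(16, False), (17, False), (18, True)]
    return all(func(age) == expected for age, expected in cases)

assert run_tests(is_adult) is True          # the tests pass on the original code
assert run_tests(is_adult_mutant) is False  # the boundary case (17) kills the mutant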

3. Fault injection methods:

In software testing, fault injection is a technique for improving the coverage


of a test by introducing faults in order to test code paths, in particular error handling
code paths, which might otherwise rarely be followed. It is often used with stress
testing and is widely considered to be an important part of developing robust
software.


The propagation of a fault through to an observable failure follows a well-defined cycle. When executed, a fault may cause an error, which is an invalid state
within a system boundary. An error may cause further errors within the system
boundary, therefore each new error acts as a fault, or it may propagate to the
system boundary and be observable. When error states are observed at the system
boundary they are termed failures. This mechanism is termed the fault-error-failure cycle [2] and is a key mechanism in dependability.
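
One common way to inject faults at the software level is to replace a dependency with a test double that raises an error, so that the rarely-exercised error-handling path is forced to run. The sketch below uses Python's standard unittest.mock module; the function and file name are invented for the illustration.

from unittest import mock

def read_config(opener=open):
    # Hypothetical function: falls back to defaults if the file cannot be read.
    try:
        with opener("settings.cfg") as f:
            return f.read()
    except OSError:
        return "defaults"

# Inject a fault: the injected 'opener' raises OSError, forcing the
# error-handling path that a normal run would rarely follow.
faulty_open = mock.Mock(side_effect=OSError("simulated disk failure"))
assert read_config(opener=faulty_open) == "defaults"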

4. Static testing: White box testing includes all static testing.

Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

From the black box testing point of view, static testing involves review of
requirements or specifications. This is done with an eye toward completeness or
appropriateness for the task at hand. This is the verification portion of Verification
and Validation.

Even static testing can be automated. A static testing test suite consists of programs to be analyzed by an interpreter or a compiler that asserts the programs' syntactic validity.

Bugs discovered at this stage of development are less expensive to fix than
later in the development cycle.
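
As a minimal illustration of automating such a static check, Python's built-in compile() can assert a source file's syntactic validity without ever executing it (the file name below is a placeholder):

def check_syntax(path):
    # Static check: parse and compile the source without executing it.
    with open(path) as f:
        source = f.read()
    try:
        compile(source, path, "exec")
        return True
    except SyntaxError as err:
        print(f"{path}:{err.lineno}: {err.msg}")
        return False

# Usage (placeholder file name):
# check_syntax("payroll.py")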

Advantages:
● As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
● The other advantage of white box testing is that it helps in optimizing the code.
● It helps in removing the extra lines of code, which can bring in hidden defects.
● It forces the test developer to reason carefully about the implementation.
Disadvantages:


● As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
● It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
● It does not look at the code in a runtime environment. That is important for a number of reasons: exploitability of a vulnerability depends on all aspects of the platform being targeted, and source code is just one of those components. The underlying operating system, the backend database being used, third-party security tools, dependent libraries, etc. must all be taken into account when determining exploitability. A source code review is not able to take these factors into account.
● Very few white-box tests can be done without modifying the program, changing values to force different execution paths, or generating a full range of inputs to test a particular function.
● It misses cases omitted in the code.
a. Unit Testing
Unit testing is defined as testing an individual module. The tools used in unit testing are debuggers and tracers, and it is done by programmers.
b. Static and Dynamic Analysis
Static analysis involves going through the code in order to find out any
possible defect in the code.
Dynamic analysis involves executing the code and analyzing the output.

c. Statement coverage
In this type of testing the code is executed in such a manner that every
statement of the application is executed at least once. It helps in assuring that all
the statements execute without any side effect.
d. Branch coverage
No software application can be written in a continuous mode of coding; at some point we need to branch out the code in order to perform a particular piece of functionality. Branch coverage testing helps in validating all the branches in the


code and making sure that no branching leads to abnormal behavior of the
application.
e. Security Testing
Security Testing is carried out in order to find out how well the system can
protect itself from unauthorized access, hacking – cracking, any code damage etc.
which deals with the code of application. This type of testing needs sophisticated
testing techniques.
f. Mutation Testing
A kind of testing in which the application is tested for the code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which coding strategy can help in developing the functionality effectively.

BLACK BOX TESTING:


(Testing the application without knowledge of the code)
Black box testing treats the software as a black box without any knowledge of
internal implementation. Black box testing methods include equivalence partitioning,
boundary value analysis, all-pairs testing, fuzz testing, model-based testing,
traceability matrix, exploratory testing and specification-based testing.
a. Boundary value Analysis
A boundary value is defined as a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component (a small sketch combining this with equivalence partitioning follows this list).
b. Equivalence Partitioning
Equivalence partitioning is a strategy that can be used to reduce the number
of test cases that need to be developed.
Equivalence partitioning divides the input domain of a program into classes.
For each of these equivalence classes, the set of data should be treated the same by
the module under test and should produce the same answer. Test cases should be
designed so the inputs lie within these equivalence classes.
c. Decision Table Testing
Decision tables are used to record complex business rules that must be
implemented in the program, and therefore tested.
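
For instance, if a field accepts ages from 18 to 60 (an invented requirement for this sketch), equivalence partitioning gives three classes - below the range, inside it, and above it - and boundary value analysis adds the values at and around each edge:

def is_valid_age(age):
    # Hypothetical rule: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per class is enough.
assert is_valid_age(10) is False   # invalid class: below the range
assert is_valid_age(35) is True    # valid class
assert is_valid_age(70) is False   # invalid class: above the range

# Boundary value analysis: test at and around each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected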
GRAY BOX TESTING:
Gray box testing is a software testing technique that uses a combination of
black box testing and white box testing. Gray box testing is not black box testing,
because the tester does know some of the internal workings of the software under


test. In gray box testing, the tester applies a limited number of test cases to the
internal workings of the software under test. In the remaining part of the gray box
testing, one takes a black box approach in applying inputs to the software under test
and observing the outputs.

Gray box testing is a powerful idea. The concept is simple; if one knows
something about how the product works on the inside, one can test it better, even
from the outside. Gray box testing is not to be confused with white box testing; i.e. a
testing approach that attempts to cover the internals of the product in detail. Gray
box testing is a test strategy based partly on internals. The testing approach is
known as gray box testing, when one does have some knowledge, but not the full
knowledge of the internals of the product one is testing.

In gray box testing, just as in black box testing, you test from the outside of the product, but you make better-informed testing choices because you know how the underlying software components operate and interact.

TESTING LEVELS
There are four testing levels:
Unit Testing
Integration Testing
System Testing
User acceptance Testing

Unit Testing:
The testing done on a unit, the smallest testable piece of software, to verify whether it satisfies its functional specification or its intended design structure. A minimal example follows.
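
A minimal unit test written with Python's built-in unittest framework (the function under test is invented for the sketch):

import unittest

def add(a, b):
    # The "unit" under test: a single small function.
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()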
Integration Testing:
Testing which takes place as sub elements are combined (i.e., integrated) to
form higher-level elements
Or


Testing related modules, i.e. two or more modules together, is called integration testing.
Types of Integration Testing:
There are two types of integration testing:
1. Top down Integration Testing.
2. Bottom up Integration Testing.
1. Top down Integration Testing:
Testing proceeds from the high-level modules down to the low-level modules of the application. Low-level modules that are not yet available are normally simulated by stubs (a small sketch follows below).
2. Bottom up Integration Testing:
Testing proceeds from the low-level modules up to the high-level modules of the application. The missing high-level module is normally simulated by a driver (a test "main").
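
A sketch of what such a stub can look like in code (all names here are invented): the higher-level module under test is wired to a stub that stands in for a lower-level module that has not been integrated yet.

def get_exchange_rate_stub(currency):
    # Stub for a lower-level module that is not integrated yet:
    # it returns a fixed, predictable value instead of calling the real service.
    return 1.25

def convert_to_usd(amount, currency, rate_lookup=get_exchange_rate_stub):
    # Higher-level module under test, currently wired to the stub.
    return amount * rate_lookup(currency)

assert convert_to_usd(100, "EUR") == 125.0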
3. System Testing:
Testing the software against the required specifications on the intended hardware.
OR
Several modules constitute a project; once all these modules are integrated, several errors may occur, and finding them at this level is done in system testing.
4. User Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its
acceptance criteria, which enables a customer to determine whether to accept the
system or not.
OR
It is the testing with intent of confirming readiness of the product and customer
acceptance.

Types of User Acceptance Testing


There are three types of user acceptance testing:
1. Alpha testing
2. Beta testing
3. Gamma Testing
1. Alpha testing:
Alpha testing is a part of user acceptance testing. Before releasing the application, the customer tests the application at the developer's site.


2. Beta testing
Beta testing is also part of user acceptance testing. Before releasing the application, testers and customers together check the application at the customer's site.
3. Gamma Testing:
Gamma testing is testing of software that has all the required features,
but it did not go through all the in-house quality checks.
Some other types of testing are mentioned below:
Smoke testing
Smoke testing is a quick, shallow test. It is a set of test cases which we execute when we get a new build, in order to verify whether the build is testable or not based on the smoke test cases; if it is not, we can reject the build. It broadly covers all the main functionality.
Sanity testing
Sanity testing tests the critical and major functionality of the application; it is done once on the application, for example checking cursor navigation.
Performance testing
The application is put under heavy load and its behaviour, such as multi-tasking performance, is checked. There are two types of performance testing (a minimal sketch follows the two definitions below):
Load testing
Stress testing
a. Load testing
The application is put under heavy load within its specified limit; the success criterion is that the application keeps working correctly within that limit.
b. Stress testing
The application is put under load beyond its specified limit; the aim is to find where and how the application fails beyond that limit.
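
A very rough sketch of the idea, using only the Python standard library (the workload function is a placeholder for the real operation, e.g. an HTTP request to the application): raising the worker count within the stated limit is load testing; pushing it far beyond the limit is stress testing.

import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Placeholder for the real operation under test.
    time.sleep(0.01)
    return True

def run_load(users, requests_per_user):
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(),
                                range(users * requests_per_user)))
    elapsed = time.time() - start
    print(f"{users} users, {len(results)} requests, {elapsed:.2f} s")

run_load(users=10, requests_per_user=5)    # within the expected limit (load test)
run_load(users=200, requests_per_user=5)   # far beyond the limit (stress test)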

Functional Testing:
Testing the application, whether the application is functioning as per the
requirements is called Functional testing.


Exploratory testing
Checking the application without knowledge of the code; testing is done by giving varied, unscripted inputs and exploring the application's behaviour.
Ad-hoc testing
It is a part of exploratory testing. It is random testing, meaning the application is tested without test plans. It is usually carried out at the end of the project, after all the planned test cases have been executed.
OR
Testing without a formal test plan, or outside of a test plan.

Regression testing
Testing the application to find whether a change in the code affects any other part of the application. Existing test cases are re-executed after the modification to make sure the change has not broken the old build.

Retesting:
We check for the particular bug and its dependencies after it is said to be fixed.

OR
Testing that runs test cases that failed the last time they were run, in order to
verify the success of corrective actions.

Usability testing - User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Compatibility testing - Testing how well the software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

Comparison testing - Comparison of product strengths and weaknesses with


previous versions or other similar products.


Incremental integration testing - A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. It is done by programmers or by testers.

End-to-end testing - Similar to system testing, involves testing of a complete


application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

Recovery testing - Testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.

TESTING METHODOLOGY
Testing methodology means selecting the kind of development and testing model that suits our application, for example the V-model, the waterfall model, the spiral model or the iterative model.
5.1 V – model
The V-model is a software development model which can be considered an extension of the waterfall model. Instead of moving down in a linear way, the
process steps are bent upwards after the coding phase, to form the typical V shape.
The V-Model demonstrates the relationships between each phase of the development
life cycle and its associated phase of testing.

[V-model diagram: verification phases on the left arm map to validation phases on the right arm]

Verification phase      Test design activity                  Validation phase
Requirement             user acceptance testing technique     User acceptance testing
Design                  system testing technique              System testing
Coding                  integration testing technique         Integration testing
Unit testing sits at the point of the V.

5.2 Waterfall model:

The waterfall approach was the first process model to be introduced and followed widely in software engineering to ensure the success of a project. In the waterfall approach, the whole process of software development is divided into separate process phases.

The phases in the waterfall model are: Requirement Specification, Software Design, Implementation, and Testing & Maintenance. All these phases are cascaded to each other, so that the second phase starts as and when a defined set of goals is achieved for the first phase and it is signed off; hence the name "Waterfall Model". All the methods and processes undertaken in the waterfall model are more visible.

[Waterfall model diagram]

Requirement → Design → Coding → Unit testing → Integration testing → System testing → User acceptance testing

5.3 The Spiral Model

The spiral model, also known as the spiral lifecycle model, is a systems
development method (SDM) used in information technology (IT). This model of
development combines the features of the prototyping model and the waterfall
model. The spiral model is intended for large, expensive, and complicated projects.

The steps in the spiral model can be generalized as follows:


1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.

2. A preliminary design is created for the new system.

3. A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.

4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first
prototype in terms of its strengths, weaknesses, and risks; (2) defining the
requirements of the second prototype; (3) planning and designing the second
prototype; (4) constructing and testing the second prototype.

5. At the customer's option, the entire project can be aborted if the risk is deemed
too great. Risk factors might involve development cost overruns, operating-cost
miscalculation, or any other factor that could, in the customer's judgment, result in a
less-than-satisfactory final product.

6. The existing prototype is evaluated in the same manner as was the previous
prototype, and, if necessary, another prototype is developed from it according to the
fourfold procedure outlined above.

7. The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.

8. The final system is constructed, based on the refined prototype.

9. The final system is thoroughly evaluated and tested. Routine maintenance is


carried out on a continuing basis to prevent large-scale failures and to minimize
downtime.

Applications

For a typical shrink-wrap application, the spiral model might mean that you
have a rough-cut of user elements (without the polished / pretty graphics) as an
operable application, add features in phases, and, at some point, add the final


graphics. The spiral model is used most often in large projects. For smaller projects,
the concept of agile software development is becoming a viable alternative. The US
military has adopted the spiral model for its Future Combat Systems program.

Advantages

1. Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.

2. It is more able to cope with the (nearly inevitable) changes that software development generally entails.

3. Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.

Disadvantages

1. Highly customized, limiting re-usability.

2. Applied differently for each application.

3. Risk of not meeting budget or schedule.

5.4 Iterative Model

An iterative lifecycle model does not attempt to start with a full specification
of requirements. Instead, development begins by specifying and implementing just
part of the software, which can then be reviewed in order to identify further
requirements. This process is then repeated, producing a new version of the software
for each cycle of the model. Consider an iterative lifecycle model which consists of
repeating the following four phases in sequence:


- A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.
- A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.
- An Implementation and Test phase, when the software is coded, integrated and tested.
- A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to the requirements are proposed.
For each cycle of the model, a decision has to be made as to whether the
software produced by the cycle will be discarded, or kept as a starting point for the
next cycle (sometimes referred to as incremental prototyping). Eventually a point will
be reached where the requirements are complete and the software can be delivered,
or it becomes impossible to enhance the software as required, and a fresh start has
to be made.
The iterative lifecycle model can be likened to producing software by
successive approximation. Drawing an analogy with mathematical methods that use
successive approximation to arrive at a final solution, the benefit of such methods
depends on how rapidly they converge on a solution.


The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software that requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.
6. STLC:

The Software Testing Life Cycle (STLC) flows through the following stages:

Requirement → Test plan → Test case design → Test case execution (manual or automation) → Defect tracking

7. TESTING DOCUMENTS


TEST PLAN

A test plan is a document that describes the set of testing activities for the application: when to test, who will test, what to test and how to test. A test plan involves the following details.
Objective
Aim of the project. Introduction, overview of the project.

Scope
1. Features to be tested
2. Features not to be tested

Approach
1. Write a high level scenario.
2. Write a flow graph
Testing functionality
The functionality to be tested, i.e. which functionality of the application we will test.
Assumption
Risk analysis
Analyzing risks
Backup plan
Effort estimation
Estimate of the effort, e.g. time and cost.
Roles and responsibilities
Roles and responsibilities of the testers.
Entry and exit criteria
Entry and exit criteria for testing.
Templates

Test automation
Using automation tools.
Environment
Software and hardware requirements.
Defect tracking
Deliverables
Approvals


Purpose of test plan :

● Preparing it helps us to think through the efforts needed to validate the


acceptability of a software product.
● It can and will help people outside the test group to understand the why and how
of the product validation.
● In regulated environments, we have to have a written test plan.
● We want a document that describes the objectives, scope, approach and focus of
the software testing effort.
● It includes test cases, conditions, the test environment, a list of related tasks,
pass/fail criteria and risk assessment.
● One of the outputs for creating a test strategy is an approved and signed off test
plan.
● Test plans should be documented, so that they are repeatable.
SOFTWARE REQUIREMENT SPECIFICATION (SRS) :
A Software Requirements Specification (SRS) is a complete description of the
behavior of the system to be developed. It includes a set of use cases that describe
all the interactions the users will have with the software. Use cases are also known
as functional requirements. In addition to use cases, the SRS also contains
nonfunctional (or supplementary) requirements. Non-functional requirements are
requirements which impose constraints on the design or implementation.
TEST CASE
A test case is a document that specifies a set of test activities for the application. Here we compare the expected result to the actual result.
Test case design
Test case id
Description
Procedure
Input
Expected result
Actual result
Status.
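
For illustration, a filled-in test case might look like this (all values are hypothetical):

Test case id: TC_LOGIN_001
Description: Verify login with a valid user name and password
Procedure: Open the login page, enter the credentials, click Login
Input: user name "demo_user", password "Demo@123"
Expected result: the home page is displayed
Actual result: the home page is displayed
Status: Pass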
Error – a programming mistake that leads to an error.
Bug – a deviation from the expected result.
Defect – a problem in the algorithm that leads to failure.


Failure – the result of any of the above.

8. BUG TRACKING:

A bug tracking system is a software application that is designed to help


quality assurance and programmers keep track of reported software bugs in their
work. It may be regarded as a sort of issue tracking system.

Many bug-tracking systems, such as those used by most open source software
projects, allow users to enter bug reports directly. Other systems are used only
internally in a company or organization doing software development. Typically bug
tracking systems are integrated with other software project management
applications

Bug reporting:
A bug report is typically designed with the following fields:
S. no
Link
Bug id
Description
Priority
Severity
Status

9. BUG LIFE CYCLE:

In the software development process, a bug has a life cycle. The bug should go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.


The different states of a bug can be summarized as follows:

1. New: the bug is posted for the first time.

2. Open: after a tester has posted a bug, the lead checks whether it is genuine; if so, the bug is marked open.

3. Assign: once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.
4. Test: once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. It specifies that the bug has been fixed and is released to the testing team.
5. Deferred: a bug changed to the deferred state is expected to be fixed in a later release. There can be many reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug.
Then the state of the bug is changed to “REJECTED”.

7. Duplicate: if the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.


8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester
tests the bug. If the bug is not present in the software, he approves that the bug is
fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the
tester changes the status to “REOPENED”.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“CLOSED”.

10. TEST SUMMARY RESULTS:

A management report providing any important information uncovered


by the tests accomplished, and including assessments of the quality of the testing
effort, the quality of the software system under test, and statistics derived from
Incident Reports. The report also records what testing was done and how long it
took, in order to improve any future test planning. This final document is used to
indicate whether the software system under test is fit for purpose according to
whether or not it has met acceptance criteria defined by project stakeholders.
Format:
Objective
Scope
Testing methodology
Expected date of testing
Actual date of testing
Release Date
Number of days tested
Release to Production
Roles and Responsibilities
Number of Modules Tested
Number of Test cases executed
Number of test cases passed
Number of test cases failed
Pass percentage
Fail percentage


Bug status
New
Fixed
Reopen
Closed
Priority bugs
Severity bugs
Conclusion

What is a good test case?


Designing good test cases is a complex art. The complexity comes from three sources:
Test cases help us discover information. Different types of tests are more effective for different classes of information.
Test cases can be “good” in a variety of ways. No test case will be good in all of them.
People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing.
Severity:
This is assigned by the tester. The severity of a defect is set based on the issue's seriousness. It can be one of:
Major
Medium
Minor
Priority:

This will be set by the team lead or the project lead. The priority is set based on the severity and the time constraints of the module. It can be one of:

High

Medium

Low


Client server testing:

This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we have a front end and a back end.

The application launched on the front end will have forms and reports which monitor and manipulate data.

Web testing:

This is done for 3-tier applications (developed for the Internet / an intranet / an extranet). Here we have a browser, a web server and a database server.

Configuration management:

Configuration management (CM) is the detailed recording and updating of


information that describes an enterprise's computer systems and networks, including
all hardware and software components. Such information typically includes the
versions and updates that have been applied to installed software packages and the
locations and network addresses of hardware devices. Special configuration
management software is available. When a system needs a hardware or software upgrade, a computer technician can access the configuration management program and database to see what is currently installed.

An advantage of a configuration management application is that the entire collection


of systems can be reviewed to make sure any changes made to one system do not
adversely affect any of the other systems

Configuration management is also used in software development, where it is called


Unified Configuration Management (UCM). Using UCM, developers can keep track of
the source code, documentation, problems, changes requested, and changes made.


Test deliverables:

The documents delivered during testing are collectively called test deliverables. They include:

the test plan, test cases, bug reports, the RTM (requirements traceability matrix), the SRS, and the test summary report.

Why do we go for automation testing?

In automation testing, user actions are simulated using a testing tool. The actions a
manual tester performs are recorded, and then played back to execute the same test
case.
The benefits of automating software testing are many (a minimal script sketch follows the list below):

● Providing more coverage of regression testing.

● Reducing the elapsed time for testing, getting your product to market faster.

● Improving productivity of human testing.

● Improving the re-usability of tests.

● Providing a detailed test log.
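
As a sketch of what such recorded-and-played-back actions boil down to in script form, here is a minimal Selenium WebDriver test in Python (assuming Selenium 4 and a locally installed Chrome driver; the URL and element names are placeholders, not taken from this article):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # assumes chromedriver is available
try:
    driver.get("https://example.com/login")      # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("demo_user")
    driver.find_element(By.NAME, "password").send_keys("Demo@123")
    driver.find_element(By.ID, "login").click()  # placeholder element id
    assert "Welcome" in driver.page_source       # simple automated check
finally:
    driver.quit()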


GUI Testing

What is GUI Testing?

GUI is the abbreviation for Graphical User Interface. It is absolutely essential


that any application has to be user-friendly. The end user should be comfortable
while using all the components on screen and the components should also perform
their functionality with utmost clarity. Hence it becomes very essential to test the
GUI components of any application. GUI Testing can refer to just ensuring that the
look-and-feel of the application is acceptable to the user, or it can refer to testing the
functionality of each and every component involved.

The following is a set of guidelines to ensure effective GUI Testing and can be used
even as a checklist while testing a product / application.

Section 1 - Windows Compliance Testing

Application

Start the application by double-clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon. No login is necessary. The main window of the application should have the same caption as the caption of the icon in Program Manager. Closing the application should result in an "Are you sure?" message box. Attempt to start the application twice: this should not be allowed - you should be returned to the main window. Try to start the application twice as it is loading. On each window, if the application is busy, then the hour glass should be displayed; if there is no hour glass, then some "enquiry in progress" message should be displayed. All screens should have a Help button, i.e. the F1 key should work the same way.

If Window has a Minimize Button, click it. Window should return to an icon on
the bottom of the screen. This icon should correspond to the Original Icon under
Program Manager. Double Click the Icon to return the Window to its original size.
The window caption for every application should have the name of the application
and the window name - especially the error messages. These should be checked for


spelling, English and clarity, especially at the top of the screen. Check whether the title of the window makes sense. If the screen has a Control menu, then use all un-grayed options.

Check all text on window for Spelling/Tense and Grammar.


Use TAB to move focus around the Window. Use SHIFT+TAB to move focus
backwards. Tab order should be left to right, and Up to Down within a group box on
the screen. All controls should get focus - indicated by dotted box, or cursor. Tabbing
to an entry field with text in it should highlight the entire text in the field. The text in
the Micro Help line should change - Check for spelling, clarity and non-updateable
etc. If a field is disabled (grayed) then it should not get focus. It should not be
possible to select them with either the mouse or by using TAB. Try this for every
grayed control.

Never updateable fields should be displayed with black text on a gray background
with a black label. All text should be left justified, followed by a colon tight to it. In a
field that may or may not be updateable, the label text and contents changes from
black to gray depending on the current status. List boxes are always white
background with black text whether they are disabled or not. All others are gray.

In general, double-clicking is not essential. In general, everything can be done using


both the mouse and the keyboard. All tab buttons should have a distinct letter.

Text Boxes

Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar; if it doesn't, then the text in the box should be gray or non-updateable (refer to the previous page). Enter text into the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws. Enter invalid characters - letters in amount fields, strange characters like +, -, * etc. in all fields. SHIFT and arrow keys should select characters; selection should also be possible with the mouse. Double-click should select all text in the box.

Option (Radio Buttons)

Left and Right arrows should move 'ON' Selection. So should Up and Down.
Select with mouse by clicking.


Check Boxes

Clicking with the mouse on the box, or on the text should SET/UNSET the
box. SPACE should do the same.

Command Buttons

If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots. All buttons except OK and Cancel should have a letter access to them; this is indicated by a letter underlined in the button text, and pressing ALT+letter should activate the button. Make sure there is no duplication. Click each button once with the mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and press RETURN - this should activate it. The above are VERY IMPORTANT, and should be done for EVERY command button. Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border); pressing Return in ANY non-command-button control should activate it.
If there is a Cancel button on the screen, then pressing <Esc> should activate it. If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

Drop Down List Boxes

Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing ‘Ctrl - F4’ should open/drop down the list box. Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. A drop-down with an item selected should display the list with the selected item on the top. Make sure only one space appears and there isn't a blank line at the bottom.

Combo Boxes

Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.


List Boxes

Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear and make sure all the data can be seen in the box.

Section 2 - Screen Validation Checklist

Aesthetic Conditions:

1. Is the general screen background the correct color?


2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless
otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise
specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null) then this should be
defaulted into fields. The user must either enter an alternative valid value or
leave the default value intact.


20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

Validation Conditions:

1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries, which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being
applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not
TAB off the field) is the invalid entry identified and highlighted correctly with
an error message?
5. Is validation consistently applied at screen level unless specifically required at
field level?
6. For all numeric fields check whether negative numbers can and should be able
to be entered.
7. For all numeric fields check the minimum and maximum values and also some
mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is a
character limit specified and that this limit is exactly correct for the specified
database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding
screen fields must be mandatory. (If any field, which initially was mandatory,
has become optional then check whether null values are allowed in this field.)

Navigation Conditions:

1. Can the screen be accessed correctly from the menu?


2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on
the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?


5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e.) Is the user prevented from accessing other
functions when this screen is active and is this correct?
7. Can a number of instances of this screen be opened at the same time and is
this correct?

Usability Conditions:

1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is
the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated and
should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to
bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the micro help text box by clicking on the text
box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the
mouse?
11. Is the cursor positioned in the first input field or control when the screen is
opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error
when the user cancels it?
15. When the user Alt+Tab's to another application does this have any impact on
the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be noticeably longer on screen than a 10 character field.


Data Integrity Conditions:

1. Is the data saved when the window is closed by double clicking on the close
box?
2. Check the maximum field lengths to ensure that there are no truncated
characters?
3. Where the database requires a value (other than null) then this should be
defaulted into fields. The user must either enter an alternative valid value or
leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the
database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C
then what happens if a blank value is retrieved from the database? (In some
situations rows can be created on the database by other functions, which are
not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets
saved fully to the database. (i.e.) Beware of truncation (of strings) and
rounding of numeric values.

Modes (Editable Read-only) Conditions:

1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-
only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

General Conditions:

1. Assure the existence of the "Help" menu.


2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have a corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence,
which will invoke it where appropriate.


5. In drop down list boxes, ensure that the names are not abbreviations / cut
short
6. In drop down list boxes, assure that the list and each entry in the list can be
accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no"
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates, as a Close button when changes have
been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window or a particular dialog box are present – (i.e.) make sure they don't operate on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other
command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are
names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font
& font size.
17. Assure that each command button can be accessed via a hot key
combination.
18. Assure that command buttons in the same window/dialog box do not have
duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value
(command button, or other object) which is invoked when the Enter key is
pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button, which makes sense according to
the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not
abbreviations.


22. Assure that option button names are not technical labels, but rather are
names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys
do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically
grouped together in clearly demarcated areas "Group Box"
26. Assure that the Tab key sequence, which traverses the screens, does so in a
logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many
individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general
color and highlighting (the application should not dictate the desktop
background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within
tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should
be no need to scroll
38. Errors on continue will cause user to be returned to the tab and the focus
should be on the field causing the error. (i.e the tab is opened, highlighting
the field with the error on it)
39. Pressing continue while on the first tab of a tabbed window (assuming all
fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or
previous screen (as appropriate), generating "changes will be lost" message if
necessary.


43. Micro help text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open

Specific Field Tests

Date Field Checks

1. Assure that leap years are validated correctly & do not cause
errors/miscalculations.
2. Assure that month code 00 and 13 are validated correctly & do not cause
errors/miscalculations.
3. Assure that 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly & do not cause
errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/
miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly & does not cause errors/
miscalculations.
8. Assure that out of cycle dates are validated correctly & do not cause
errors/miscalculations.
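
The date checks above can be turned into a small automated script. The sketch below is illustrative only: it assumes dates arrive as MM/DD/YYYY strings, and the validate_date() helper is a stand-in for whatever routine the application actually uses.

    # Sketch of automated date-field checks (illustrative; validate_date is a stand-in
    # for the application's own MM/DD/YYYY validation routine).
    from datetime import datetime

    def validate_date(text):
        """Return True if text is a valid MM/DD/YYYY date, else False."""
        try:
            datetime.strptime(text, "%m/%d/%Y")
            return True
        except ValueError:
            return False

    # Leap years
    assert validate_date("02/29/2000")          # leap year (divisible by 400)
    assert not validate_date("02/29/1900")      # not a leap year (divisible by 100 only)
    # Month codes 00 and 13 must be reported as errors
    assert not validate_date("00/15/2005")
    assert not validate_date("13/15/2005")
    # Day values 00 and 32 must be reported as errors
    assert not validate_date("01/00/2005")
    assert not validate_date("01/32/2005")
    # February boundaries
    assert validate_date("02/28/2005")
    assert not validate_date("02/30/2005")      # Feb 30 must be reported as an error
    # Century change
    assert validate_date("12/31/1999") and validate_date("01/01/2000")
    print("All date-field checks passed")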

Numeric Fields

1. Assure that lowest and highest values are handled correctly.


2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported
as an error.
5. Assure that fields with a blank in the last position are processed or reported
as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.
8. Include value zero in all calculations.


9. Include at least one in-range value.


10. Include maximum and minimum range values.
11. Include out of range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.
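
A minimal sketch of the numeric-field checks above. The allowed range (-999 to 999) and the accept_amount() helper are assumptions standing in for the real field under test.

    # Boundary-value checks for a numeric field (illustrative; the range and
    # accept_amount() are assumed stand-ins for the real validation code).
    LOW, HIGH = -999, 999

    def accept_amount(value):
        """Return True if value is within the allowed range for the field."""
        return LOW <= value <= HIGH

    assert accept_amount(LOW) and accept_amount(HIGH)      # lowest and highest values
    assert not accept_amount(LOW - 1)                      # below the minimum
    assert not accept_amount(HIGH + 1)                     # above the maximum
    assert accept_amount(0)                                # include value zero
    assert accept_amount(-5) and accept_amount(+5)         # both + and - values
    assert accept_amount(500)                              # at least one in-range value
    # Division by zero must not occur in any derived calculation
    total, count = 0, 0
    average = total / count if count else 0
    assert average == 0
    print("All numeric-field checks passed")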

Alpha Field Checks

1. Use blank and non-blank data.


2. Include lowest and highest values.
3. Include invalid characters & symbols.
4. Include valid characters.
5. Include data items with first position blank.
6. Include data items with last position blank.
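
The alpha-field checks above lend themselves to a small table-driven test. The accept_name() rule below (letters and embedded spaces only) is an assumption made purely for illustration.

    # Table-driven alpha-field checks (illustrative; accept_name() is an assumed
    # rule allowing letters and embedded spaces only).
    def accept_name(text):
        return bool(text) and not text.startswith(" ") and not text.endswith(" ") \
            and all(ch.isalpha() or ch == " " for ch in text)

    cases = [
        ("", False),            # blank data
        ("Smith", True),        # non-blank, valid characters
        ("a", True),            # lowest value
        ("Z" * 30, True),       # highest value / maximum length
        ("Sm!th", False),       # invalid characters & symbols
        (" Smith", False),      # first position blank
        ("Smith ", False),      # last position blank
    ]
    for text, expected in cases:
        assert accept_name(text) == expected, repr(text)
    print("All alpha-field checks passed")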


Interview Questions

What's Ad Hoc Testing?

Testing where the tester tries to break the software by randomly exercising its functionality.

What's the Accessibility Testing?

Testing that determines if software will be usable by people with disabilities.

What's the Alpha Testing?

Alpha Testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

What's the Beta Testing?

Testing the application after installation at the client's site.

What is Component Testing?

Testing of individual software components (Unit Testing).

What's Compatibility Testing?

Compatibility testing checks that the software is compatible with the other elements of the system.

What is Concurrency Testing?

Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. Identifies and measures the level of
locking, deadlocking and use of single-threaded code and locking semaphores.
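
As an illustration only, the sketch below has several threads update the same in-memory "record". The record, the lock and the update logic are stand-ins for real application code or database rows; removing the lock shows the kind of lost-update problem concurrency testing is meant to expose.

    # Minimal concurrency-test sketch: several "users" update the same record.
    # The record, lock and update logic are illustrative stand-ins only.
    import threading

    record = {"balance": 0}
    lock = threading.Lock()

    def deposit(times):
        for _ in range(times):
            with lock:                      # remove the lock to observe lost updates
                record["balance"] += 1

    threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With proper locking the result is deterministic; without it, it usually is not.
    assert record["balance"] == 5 * 10000, record["balance"]
    print("balance =", record["balance"])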


What is Conformance Testing?

The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing?

The context-driven school of software testing is flavor of Agile Testing that advocates
continuous and creative evaluation of testing opportunities in light of the potential
information revealed and the value of that information to the organization right now.

What is Data Driven Testing?

Testing in which the action of a test case is parameterized by externally defined data
values, maintained as a file or spreadsheet. A common technique in Automated
Testing.
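
A minimal data-driven sketch: the test data lives in an external file and the same test logic runs once per row. The file name, its columns and the add() function under test are all assumptions for illustration.

    # Data-driven test sketch: test actions are parameterized by externally
    # maintained data (here a CSV file with columns a,b,expected).
    import csv

    # In practice the data file is maintained externally; it is written here
    # only so the sketch is self-contained.
    with open("add_cases.csv", "w", newline="") as handle:
        handle.write("a,b,expected\n1,2,3\n-1,1,0\n10,5,15\n")

    def add(a, b):                 # stand-in for the real function under test
        return a + b

    with open("add_cases.csv", newline="") as handle:
        for row in csv.DictReader(handle):
            assert add(int(row["a"]), int(row["b"])) == int(row["expected"]), row
    print("All data-driven cases passed")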

What is Conversion Testing?

Testing of programs or procedures used to convert data from existing systems for
use in replacement systems.

What is Dependency Testing?

Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing?

A test that exercises a feature of a product in full detail.

What is Dynamic Testing?

Testing software through executing it. See also Static Testing.


What is Endurance Testing?

Checks for memory leaks or other problems that may occur with prolonged
execution.

What is End-to-End testing?

Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing?

Testing which covers all combinations of input values and preconditions for an
element of the software under test.

What is Gorilla Testing?

Testing one particular module or functionality heavily.

What is Installation Testing?

Confirms that the application under test installs correctly on the supported hardware and software configurations, following the documented installation procedures, and that it can be uninstalled or upgraded cleanly. Installation testing is often run under abnormal conditions as well, such as insufficient disk space.

What is Localization Testing?

Testing of software that has been adapted ("localized") for a specific locality, language, or culture.

What is Loop Testing?

A white box testing technique that exercises program loops.
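
A small white-box sketch of loop testing: the loop inside sum_first() (a made-up routine) is exercised at zero passes, one pass, a typical number of passes and the maximum.

    # Loop-testing sketch: exercise the program loop at its boundaries
    # (0 passes, 1 pass, a typical number of passes, the maximum number of passes).
    def sum_first(values, n):
        """Sum the first n items of values using an explicit loop."""
        total = 0
        for i in range(min(n, len(values))):
            total += values[i]
        return total

    data = list(range(1, 11))            # 1..10
    assert sum_first(data, 0) == 0       # loop skipped entirely
    assert sum_first(data, 1) == 1       # exactly one pass
    assert sum_first(data, 5) == 15      # typical number of passes
    assert sum_first(data, 10) == 55     # maximum number of passes
    assert sum_first(data, 11) == 55     # one more than the maximum
    print("All loop checks passed")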


What is Mutation Testing?

Mutation testing is a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting with
the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources

What is Monkey Testing?

Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

What is Positive Testing?

Testing aimed at showing software works. Also known as "test to pass". See also
Negative Testing.

What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail". See
also Positive Testing.

What is Path Testing?

Testing in which all paths in the program source code are tested at least once.

What is Performance Testing?

Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

What is Ramp Testing?

Continuously raising an input signal until the system breaks down.


What is Recovery Testing?

Confirms that the program recovers from expected or unexpected events without
loss of data or functionality. Events can include shortage of disk space, unexpected
loss of communication, or power out conditions.

What is Re-testing?

Re-testing - testing the functionality of the application again, typically after a defect has been fixed, to confirm that it now works as expected.

What is Regression testing?

Regression testing - checking that changes in the code have not affected the existing working functionality.

What is Sanity Testing?

Brief test of major functional elements of a piece of software to determine if it is basically operational.

What is Scalability Testing?

Performance testing focused on ensuring the application under test gracefully handles increases in work load.

What is Security Testing?

Testing which confirms that the program can restrict access to authorized personnel
and that the authorized personnel can access the functions available to their security
level.

What is Stress Testing?

Stress testing is a form of testing that is used to determine the stability of a given
system or entity. It involves testing beyond normal operational capacity, often to a
breaking point, in order to observe the results.

What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's the Usability testing?

Usability testing checks the application for user-friendliness and ease of use.

What's the User acceptance testing?

User acceptance testing is determining if software is satisfactory to an end-user or customer.

What's the Volume Testing?

In Volume testing the system is subjected to a large volume of data.

What is the difference between severity and priority?

When the application has a critical problem but it need only be solved after a month, then we can say it is high severity and low priority.


When the application has a trivial problem (low impact) but it has to be solved within a day, then we can say it is low severity with high priority.

When should testing be ended?

Testing is a never ending process.

Because of certain factors, testing may be terminated.

The factors may be: most of the tests have been executed, the project deadline has been reached, the test budget is depleted, or the bug rate has fallen below the agreed criteria.

Is regression testing performed manually?


It depends on the initial testing approach. If the initial testing approach was
manual testing, then the regression testing is normally performed manually.
Conversely, if the initial testing approach was automated testing, then the regression
testing is normally performed by automated testing.

Give me others' FAQs on testing.


Here is what you can do. You can visit my web site, and on
http://robdavispe.com/free and
http://robdavispe.com/free2,
you can find answers to the vast majority of others' questions on testing, from a
tester's point of view. As to questions and answers that are not on my web site now,
please be patient, as I am going to add more FAQs and answers, as soon as time
permits.

Can you share with me your knowledge of software testing?


Surely I can. Visit my web site,
http://robdavispe.com/free and
http://robdavispe.com/free2,
and you will find my knowledge on software testing, from a tester's point of view. As
to knowledge that is not on my web site now, please be patient, as I am going to
add more answers, as soon as time permits.


How can I learn software testing?


I suggest you visit my web site, especially http://robdavispe.com/free and
http://robdavispe.com/free2, and you will find answers to most questions on
software testing. As to questions and answers that are not on my web site at the
moment, please be patient. I will add more questions and answers, as soon as time
permits. I also suggest you get a job in software testing. Why? Because you can get
additional, free education, on the job, by an employer, while you are being paid to
do software testing. On the job, you will be able to use some of the more popular
software tools, including Win Runner, Load Runner, Lab View, and the Rational
Toolset. The tools you use will depend on the end client, their needs and
preferences. I also suggest you sign up for courses at nearby educational
institutions. Classroom education, especially non degree courses in local community
colleges, tends to be inexpensive.

What is your view of software QA/testing?


Software QA/testing is easy, if requirements are solid, clear, complete,
detailed, cohesive, attainable and testable, and if schedules are realistic, and if there
is good communication. Software QA/testing is a piece of cake, if project schedules
are realistic, if adequate time is allowed for planning, design, testing, bug fixing, re-
testing, changes, and documentation. Software QA/testing is relatively easy, if
testing is started early on, and if fixes or changes are re-tested, and if sufficient time
is planned for both testing and bug fixing. Software QA/testing is easy, if new
features are avoided, and if one sticks to initial requirements as much as possible.

How can I be a good tester?


We, good testers, take the customers' point of view. We are tactful and
diplomatic. We have a "test to break" attitude, a strong desire for quality, an
attention to detail, and good communication skills, both oral and written. Previous
software development experience is also helpful as it provides a deeper
understanding of the software development process.

What is the difference between software bug and software defect?


A 'software bug' is a nonspecific term that means an inexplicable defect,
error, flaw, mistake, failure, fault, or unwanted behavior of a computer program.


Other terms, e.g. software defect and software failure, are more specific. While there
are many who believe the term 'bug' is a reference to insects that caused
malfunctions in early electromechanical computers (1950-1970), the term 'bug' had
been a part of engineering jargon for many decades before the 1950s; even the
great inventor, Thomas Edison (1847-1931), wrote about a 'bug' in one of his letters.

How can I improve my career in software QA/testing?


Invest in your skills! Learn all you can! Visit my web site, and on
http://robdavispe.com/free and http://robdavispe.com/free2, you will find answers
to the vast majority of questions on testing, from software QA/testers' point of view.
Get additional education, on the job. Free education is often provided by employers,
while you are paid to do the job of a tester. On the job, often you can use many
software tools, including WinRunner, LoadRunner, LabView, and Rational Toolset.
Find an employer whose needs and preferences are similar to yours. Get an
education! Sign up for courses at nearby educational institutes. Take classes!
Classroom education, especially non-degree courses in local community colleges,
tends to be inexpensive. Improve your attitude! Become the best software
QA/tester! Always strive to exceed the expectations of your customers!

How do you compare two files?


Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a
competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a
UNIX utility that compares the difference between two text files.
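
As a quick illustration, Python's standard difflib module produces the same kind of line-by-line comparison as the UNIX "diff" utility; in practice the two lists of lines would come from readlines() on the two files being compared.

    # Comparing two sets of lines, in the spirit of the UNIX "diff" utility.
    # The file names and line contents are placeholders for illustration.
    import difflib

    expected = ["total=100\n", "status=OK\n"]
    actual = ["total=100\n", "status=FAILED\n"]

    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile="expected.txt", tofile="actual.txt"))
    print("".join(diff) if diff else "Files are identical")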

What do we use for comparison?


Generally speaking, when we write a software program to compare files, we
compare two files, bit by bit. For example, when we use "diff", a UNIX utility, we
compare two text files.

What is the reason we compare files?


We compare files because of configuration management, revision control,
requirement version control, or document version control. Examples are Rational
ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often
distant, developers to work together on the same source code.


When is a process repeatable?


If we use detailed and well-written processes and procedures, we ensure the
correct steps are being executed. This facilitates a successful completion of a task.
This is a way we also ensure a process is repeatable.

What is test methodology?


One test methodology is a three-step process. Creating a test strategy,
creating a test plan/design, and executing tests. This methodology can be used and
molded to your organization's needs. Rob Davis believes that using this methodology
is important in the development and ongoing maintenance of his customers'
applications.

What does a Test Strategy Document contain?


The test strategy document is a formal description of how a software product
will be tested. A test strategy is developed for all levels of testing, as required.
The test team analyzes the requirements, writes the test strategy and reviews the
plan with the project team.
The test plan may include test cases, conditions, the test environment, and a list of
related tasks, pass/fail criteria and risk assessment. Additional sections in the test
strategy document include:
A description of the required hardware and software components, including test
tools. This information comes from the test environment, including test tool data.
A description of roles and responsibilities of the resources required for the test and
schedule constraints. This information comes from man-hours and schedules.
Testing methodology. This is based on known standards. Functional and technical
requirements of the application. This information comes from requirements, change
request, technical, and functional design documents. Requirements that the system
cannot provide, e.g. system limitations.

What is monkey testing?


"Monkey testing" is random testing performed by automated testing tools.
These automated testing tools are considered "monkeys", if they work at random.
We call them "monkeys" because it is widely believed, if we allow six monkeys to
pound on six typewriters at random, for a million years, they will recreate all the
works of Isaac Asimov. There are "smart monkeys" and "dumb monkeys". "Smart
monkeys" are valuable for load and stress testing, and will find a significant number
of bugs, but they're also very expensive to develop. "Dumb monkeys", on the other
hand, are inexpensive to develop, are able to do some basic testing, but they will
find few bugs. However, the bugs "dumb monkeys" do find will be hangs and
crashes, i.e. the bugs you least want to have in your software product. "Monkey testing" can be valuable, but it should not be your only testing.

What is stochastic testing?


Stochastic testing is the same as "monkey testing", but stochastic testing is a
more technical sounding name for the same testing process. Stochastic testing is
black box testing, random testing, performed by automated testing tools. Stochastic
testing is a series of random tests over time. The software under test typically
passes the individual tests, but our goal is to see if it can pass a large series of the
individual tests.

What is mutation testing?


In mutation testing, we create mutant software and try to make the mutant software fail, thus demonstrating the adequacy of our test case. When we create a set of
mutant software, each mutant software differs from the original software by one
mutation, i.e. one single syntax change made to one of its program statements, i.e.
each mutant software contains only one single fault.
When we apply test cases to the original software and to the mutant
software, we evaluate if our test case is adequate. Our test case is inadequate, if
both the original software and all mutant software generate the same output. Our
test case is adequate, if our test case detects faults, or, if, at least one mutant
software generates a different output than does the original software for our test
case.
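
A minimal illustration of the idea, using made-up code: the mutant differs from the original by one single syntax change, and a test case is adequate only if it produces different results for the two.

    # Mutation-testing sketch: one single syntax change separates the mutant
    # from the original. is_adult() and its mutant are illustrative only.
    def is_adult(age):
        return age >= 18            # original program statement

    def is_adult_mutant(age):
        return age > 18             # mutant: ">=" changed to ">"

    weak_case = 30                  # both versions return True: the mutant survives,
                                    # so this test case is inadequate
    assert is_adult(weak_case) == is_adult_mutant(weak_case)

    strong_case = 18                # boundary value: original True, mutant False,
                                    # so the mutant is "killed" and the case is adequate
    assert is_adult(strong_case) != is_adult_mutant(strong_case)
    print("Boundary test case kills the mutant")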

What is PDR?
PDR is an acronym. In the world of software QA/testing, it stands for "peer
design review", or "peer review".

What is good about PDRs?


PDRs are informal meetings, and I do like all informal meetings. PDRs make
perfect sense, because they're for the mutual benefit of you and your end client.


Your end client requires a PDR, because they work on a product, and want to come
up with the very best possible design and documentation. Your end client requires
you to have a PDR, because when you organize a PDR, you invite and assemble the
end client's best experts and encourage them to voice their concerns as to what
should or should not go into the design and documentation, and why. When you're a
developer, designer, author, or writer, it's also to your advantage to come up with
the best possible design and documentation. Therefore you want to embrace the idea
of the PDR, because holding a PDR gives you a significant opportunity to invite and
assemble the end client's best experts and make them work for you for one hour, for
your own benefit. To come up with the best possible design and documentation, you
want to encourage your end client's experts to speak up and voice their concerns as
to what should or should not go into your design and documentation, and why.

Why is it that my company requires a PDR?


Your company requires a PDR, because your company wants to be the owner
of the very best possible design and documentation. Your company requires a PDR,
because when you organize a PDR, you invite, assemble and encourage the
company's best experts to voice their concerns as to what should or should not go
into your design and documentation, and why.

Remember, PDRs are not about you, but about design and documentation. Please
don't be negative; please do not assume your company is finding fault with your
work, or distrusting you in any way. There is a 90+ per cent probability your
company wants you, likes you and trusts you, because you're a specialist, and
because your company hired you after a long and careful selection process.

Your company requires a PDR, because PDRs are useful and constructive. Just about
everyone - even corporate chief executive officers (CEOs) - attend PDRs from time to
time. When a corporate CEO attends a PDR, he has to listen for "feedback" from
shareholders. When a CEO attends a PDR, the meeting is called the "annual
shareholders' meeting".


Give me a list of ten good things about PDRs!


1. PDRs are easy, because all your meeting attendees are your coworkers
and friends.
2. PDRs do produce results. With the help of your meeting attendees, PDRs
help you produce better designs and better documents than the ones you
could come up with, without the help of your meeting attendees.
3. Preparation for PDRs helps a lot, but, in the worst case, if you had no time
to read every page of every document, it's still OK for you to show up at
the PDR.
4. It's technical expertise that counts the most, but many times you can
influence your group just as much, or even more so, if you're dominant or
have good acting skills.
5. PDRs are easy, because, even at the best and biggest companies, you can
dominate the meeting by being either very negative, or very bright and
wise.
6. It is easy to deliver gentle suggestions and constructive criticism. The
brightest and wisest meeting attendees are usually gentle on you; they
deliver gentle suggestions that are constructive, not destructive.
7. You get many-many chances to express your ideas, every time a meeting
attendee asks you to justify why you wrote what you wrote.
8. PDRs are effective, because there is no need to wait for anything or
anyone; because the attendees make decisions quickly (as to what errors
are in your document). There is no confusion either, because all the
group's recommendations are clearly written down for you by the PDR's
facilitator.
9. Your work goes faster, because the group itself is an independent decision
making authority. Your work gets done faster, because the group's
decisions are subject to neither oversight nor supervision.
10. At PDRs, your meeting attendees are the very best experts anyone can
find, and they work for you, for FREE!

What is the Exit criteria?


"Exit criteria" is a checklist, sometimes known as the "PDR sign-off sheet",
i.e. a list of peer design review related tasks that have to be done by the facilitator
or other attendees of the PDR, during or near the conclusion of the PDR. By having a
checklist, and by going through a checklist, the facilitator can...
1. Verify that the attendees have inspected all the relevant documents and reports,
and
2. Verify that all suggestions and recommendations for each issue have been
recorded, and
3. Verify that all relevant facts of the meeting have been recorded.
The facilitator's checklist includes the following questions:
1. "Have we inspected all the relevant documents, code blocks, or products?"
2. "Have we completed all the required checklists?"
3. "Have I recorded all the facts relevant to this peer review?"
4. "Does anyone have any additional suggestions, recommendations, or comments?"
5. "What is the outcome of this peer review?" At the end of the peer review, the
facilitator asks the attendees of the peer review to make a decision as to the
outcome of the peer review. I.e., "What is our consensus?" "Are we accepting the
design (or document or code)?" Or, "Are we accepting it with minor modifications?
"Or, "Are we accepting it, after it is modified, and approved
through e-mails to the participants?" Or, "Do we want another peer review?" This is
a phase, during which the attendees of the PDR work as a committee, and the
committee's decision is final.

What is the Entry criteria?


The entry criteria is a checklist, or a combination of checklists that includes
the "developer's checklist", "testing checklist", and the "PDR checklist". Checklists
are list of tasks that have to be done by developers, testers, or the facilitator, at or
before the start of the peer review.
Using these checklists, before the start of the peer review, the developer,
tester and facilitator can determine if all the documents, reports, code blocks or
software products are ready to be reviewed, and if the peer review's attendees are
prepared to inspect them. The facilitator can ask the peer review's attendees if they
have been able to prepare for the peer review, and if they're not well prepared, the
facilitator can send them back to their desks, and even ask the task lead to
reschedule the peer review.
The facilitator's script for the entry criteria includes the following questions:
1. Are all the required attendees present at the peer review?


2. Have all the attendees received all the relevant documents and reports?
3. Are all the attendees well prepared for this peer review?
4. Have all the preceding life cycle activities been concluded?
5. Are there any changes to the baseline?

What are the parameters of peer reviews?


By definition, parameters are values on which something else depends. Peer
reviews depend on the attendance and active participation of several key people;
usually the facilitator, task lead, test lead, and at least one additional reviewer. The
attendance of these four people is usually required for the approval of the PDR.
According to company policy, depending on your company, other participants are
often invited, but generally not required for approval.
Peer reviews depend on the facilitator, sometimes known as the moderator,
who controls the meeting, keeps the meeting on schedule, and records all
suggestions from all attendees.
Peer reviews greatly depend on the developer, also known as the designer,
author, or task lead -- usually a software engineer -- who is most familiar with the
project, and most likely able to answer any questions or address any concerns that
may come up during the peer review.
Peer reviews greatly depend on the tester, also known as test lead, or bench
test person -- usually another software engineer -- who is also familiar with the
project, and most likely able to answer any questions or address any concerns that
may come up during the peer review.
Peer reviews greatly depend on the participation of additional reviewers and
additional attendees who often make specific suggestions and recommendations, and
ask the largest number of questions.

Have you attended any review meetings?


Yes, in the last 10+ years I have attended many review meetings; mostly
peer reviews. In today's corporate world, the vast majority of review meetings are
peer review meetings.
In my experience, the most useful peer reviews are the ones where you're
the author of something. Why? Because when you're the author, then it's you who
decides what to do and how, and it's you who receives all the free help. In my
experience, in the long run, the inputs of your additional reviewers and additional
attendees can be the most valuable to you and your company. But, in your own best
interest, in order to expedite things, before every peer review it is a good idea to get
together with the additional reviewer and additional attendee, and talk with them
about issues, because if you don't, they will be the ones with the largest number
of questions and usually negative feedback.
When a PDR is done right, it is useful, beneficial, pleasant, and friendly.
Generally speaking, the fewer people show up at the PDR, the easier it tends to be,
and the earlier it can be adjourned. When you're an author, developer, or task lead,
many times you can relax, because during your peer review your facilitator and test
lead are unlikely to ask any tough questions from you. Why? Because, the facilitator
is too busy taking notes, and the test lead is kind of bored (because he had already
asked his toughest questions before the PDR).
When you're a facilitator, every PDR tends to be a pleasant experience. In my
experience, one of the easiest review meetings is a PDR where you're the facilitator
(whose only job is to call the shots and make notes).

What types of review meetings can you tell me about?


Of review meetings, peer design reviews are the most common. Peer design
reviews are so common that they tend to replace both inspections and walk-throughs.
Peer design reviews can be classified according to the 'subject' of the review. I.e., "Is
this a document review, design review, or code review?" Peer design reviews can be
classified according to the 'role' you play at the meeting. I.e., "Are you the task lead,
test lead, facilitator, moderator, or additional reviewer?" Peer design reviews can be
classified according to the 'job title of attendees. I.e., "Is this a meeting of peers,
managers, systems engineers, or system integration testers?" Peer design reviews
can be classified according to what is being reviewed at the meeting. I.e., "Are we
reviewing the work of a developer, tester, engineer, or technical document writer?"
Peer design reviews can be classified according to the 'objective' of the review. I.e.,
"Is this document for the file cabinets of our company, or that of the government
(e.g. the FAA or FDA)?" PDRs of government documents tend to attract the attention
of managers, and the meeting quickly becomes a meeting of managers.

How can I shift my focus and area of work from QC to QA?


Number one, focus on your strengths, skills, and abilities! Realize that there
are MANY similarities between Quality Control and Quality Assurance! Realize that
you have MANY transferable skills! Number two, make a plan! Develop a belief that
getting a job in QA is easy!
HR professionals cannot tell the difference between quality control and quality
assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without
knowing the exact meaning of those keywords! Number three, make it a reality!
Invest your time! Get some hands-on experience! Do some QA work! Do any QA
work, even if, for a few months, you get paid a little less than usual! Your goals,
beliefs, enthusiasm, and action will make a huge difference in your life! Number four,
I suggest you read all you can, and that includes reading product pamphlets,
manuals, books, information on the Internet, and whatever information you can lay
your hands on! If there is a will, there is a way! You CAN do it, if you put your mind
to it! You CAN learn to do QA work, with little or no outside help! Click on a link!

What techniques and tools can enable me to migrate from QC to QA?

• Technique number one is mental preparation. Understand and believe what


you want is not unusual at all! Develop a belief in yourself! Start believing
what you want is attainable! You can change your career! Every year, millions
of men and women change their careers successfully!

• Number two, make a plan! Develop a belief that getting a job in QA is easy!
HR professionals cannot tell the difference between quality control and quality
assurance! HR professionals tend to respond to keywords (i.e. QC and QA),
without knowing the exact meaning of those keywords!

• Number three, make it a reality! Invest your time! Get some hands-on
experience! Do some QA work! Do any QA work, even if, for a few months,
you get paid a little less than usual! Your goals, beliefs, enthusiasm, and
action will make a huge difference in your life!

• Number four, I suggest you read all you can, and that includes reading
product pamphlets, manuals, books, information on the Internet, and
whatever information you can lay your hands on! If there is a will, there is a
way! You CAN do it, if you put your mind to it! You CAN learn to do QA work,
with little or no outside help! Click on a link!


What is the difference between build and release?


A: Builds and releases are similar, because both builds and releases are end products
of software development processes. Builds and releases are similar, because both
builds and releases help developers and QA teams to deliver reliable software. Build
means a version of a software, typically one that is still in testing. Usually a version
number is given to a released product, but, sometimes, a build number is used
instead. Difference number one: Builds refer to software that is still in testing,
release refers to software that is usually no longer in testing.
Difference number two: Builds occur more frequently; releases occur less
frequently. Difference number three: Versions are based on builds, and not vice
versa. Builds, or usually a series of builds, are generated first, as often as one build
per every morning, depending on the company, and then every release is based on a
build, or several builds, i.e. the accumulated code of several builds.

What is CMM?
A: CMM is an acronym that stands for Capability Maturity Model. The idea of CMM is,
as to future efforts in developing and testing software, concepts and experiences do
not always point us in the right direction, therefore we should develop processes,
and then refine those processes.
There are five CMM levels, of which Level 5 is the highest...
CMM Level 1 is called "Initial".
CMM Level 2 is called "Repeatable".
CMM Level 3 is called "Defined".
CMM Level 4 is called "Managed".
CMM Level 5 is called "Optimized".
There are not many Level 5 companies;
most hardly need to be. Within the United States, fewer than 8% of software
companies are rated CMM Level 4 or higher. The U.S. government requires all companies with federal government contracts to maintain a minimum of a CMM Level 3 assessment.
CMM assessments take two weeks. They're conducted by a nine-member team led
by a SEI-certified lead assessor.


What are CMM levels and their definitions?


There are five CMM levels of which level 5 is the highest.

• CMM level 1 is called "initial". The software process is at CMM level 1, if it is


an ad hoc process. At CMM level 1, few processes are defined, and success, in
general, depends on individual effort and heroism.

• CMM level 2 is called "repeatable". The software process is at CMM level 2, if


the subject company has some basic project management processes, in order
to track cost, schedule, and functionality. Software processes are at CMM
level 2, if necessary processes are in place, in order to repeat earlier
successes on projects with similar applications. Software processes are at
CMM level 2, if there are requirements management, project planning, project
tracking, subcontract management, QA, and configuration management.

• CMM level 3 is called "defined". The software process is at CMM level 3, if


the software process is documented, standardized, and integrated into a
standard software process for the subject company. The software process is
at CMM level 3, if all projects use approved, tailored versions of the
company's standard software process for developing and maintaining
software. Software processes are at CMM level 3, if there are process
definition, training programs, process focus, integrated software
management, software product engineering, inter group coordination, and
peer reviews.

• CMM level 4 is called "managed". The software process is at CMM level 4, if


the subject company collects detailed data on the software process and
product quality, and if both the software process and the software products
are quantitatively understood and controlled. Software processes are at CMM
level 4, if there are software quality management (SQM) and quantitative
process management.

• CMM level 5 is called "optimized". The software process is at CMM level 5, if


there is continuous process improvement, if there is quantitative feedback
from the process and from piloting innovative ideas and technologies.
Software processes are at CMM level 5, if there are process change
management, and defect prevention technology change management.


What is the difference between bug and defect in software testing?


In software testing, the difference between bug and defect is small, and
depends on your company. For some companies, bug and defect are synonymous,
while others believe bug is a subset of defect. Generally speaking, we, software test
engineers, discover BOTH bugs and defects, before bugs and defects damage the
reputation of our company. We, QA engineers, use the software much like real users
would, to find BOTH bugs and defects, to find ways to replicate BOTH bugs and
defects, to submit bug reports to the developers, and to provide feedback to the
developers, i.e. tell them if they've achieved the desired level of quality. Therefore,
we, software QA engineers, do not differentiate between bugs and defects. In our
bug reports, we include BOTH bugs and defects, and any differences between them
are minor. Difference number one: In bug reports, the defects are usually easier to
describe. Difference number two: In bug reports, it is usually easier to write the
descriptions on how to replicate the defects. Defects tend to require brief
explanations only.

What is grey box testing?


Grey box testing is a software testing technique that uses a combination of
black box testing and white box testing. Gray box testing is not black box testing,
because the tester does know some of the internal workings of the software under
test.
In grey box testing, the tester applies a limited number of test cases to the
internal workings of the software under test. In the remaining part of the grey box
testing, one takes a black box approach in applying inputs to the software under test
and observing the outputs.
Gray box testing is a powerful idea. The concept is simple; if one knows
something about how the product works on the inside, one can test it better, even
from the outside.
Grey box testing is not to be confused with white box testing; i.e. a testing
approach that attempts to cover the internals of the product in detail. Grey box
testing is a test strategy based partly on internals. The testing approach is known as
gray box testing, when one does have some knowledge, but not the full knowledge
of the internals of the product one is testing. In gray box testing, just as in black box
testing, you test from the outside of a product, but
you make better-informed testing choices because you're better informed; because
you know how the underlying software components operate and interact.

What is the difference between version and release?


Both version and release indicate a particular point in the software
development life cycle, or in the lifecycle of a document. The two terms, version and
release, are similar (i.e. mean pretty much the same thing), but there are minor
differences between them. Version means a VARIATION of an earlier, or original,
type; for example, "I've downloaded the latest version of the software from the
Internet. The latest version number is 3.3." Release, on the other hand, is the ACT
OR INSTANCE of issuing something for publication, use, or distribution. Release is
something thus released. For example, "A new release of a software program."

What is data integrity?


Data integrity is one of the six fundamental components of information
security. Data integrity is the completeness, soundness, and wholeness of the data
that also complies with the intention of the creators of the data. In databases,
important data – including customer information, order database, and pricing tables
-- may be stored. In databases, data integrity is achieved by preventing accidental,
or deliberate, or unauthorized insertion, or modification, or destruction of data.

How do you test data integrity?


Data integrity testing should verify the completeness, soundness, and
wholeness of the stored data. Testing should be performed on a regular basis,
because important data can and will change over time. Data integrity tests include
the following:
1. Verify that you can create, modify, and delete any data in tables.
2. Verify that sets of radio buttons represent fixed sets of values.
3. Verify that a blank value can be retrieved from the database.
4. Verify that, when a particular set of data is saved to the database, each value gets
saved fully, and the truncation of strings and rounding of numeric values do not
occur.


5. Verify that the default values are saved in the database, if the user input is not
specified.
6. Verify compatibility with old data, old hardware, versions of operating systems,
and interfaces with other software.
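
As a sketch of check 4 above, the round-trip test below saves a row and reads it back, verifying that no truncation or rounding occurred. It uses an in-memory SQLite table purely as a stand-in for the application's real database.

    # Data-integrity sketch: save a row, read it back, verify nothing was
    # truncated or rounded. The table and columns are illustrative stand-ins.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (name TEXT, credit_limit REAL)")

    name = "A" * 255                      # maximum-length string
    credit_limit = 12345.67
    conn.execute("INSERT INTO customer VALUES (?, ?)", (name, credit_limit))

    stored_name, stored_limit = conn.execute(
        "SELECT name, credit_limit FROM customer").fetchone()
    assert stored_name == name            # no truncation of strings
    assert stored_limit == credit_limit   # no rounding of numeric values
    print("Round-trip check passed")
    conn.close()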

What is data validity?


Data validity is the correctness and reasonableness of data. Reasonableness
of data means, for example, account numbers falling within a range, numeric data
being all digits, dates having a valid month, day and year, spelling of proper names.
Data validity errors are probably the most common, and the most difficult to detect,
data-related errors. What causes data validity errors? Data validity errors are usually
caused by incorrect data entries, when a large volume of data is entered in a short
period of time. For example, 12/25/2005 is entered as 13/25/2005 by mistake. This
date is therefore invalid. How can you reduce data validity errors? Use simple field
validation rules. Technique 1: If the date field in a database uses the MM/DD/YYYY
format, then use a program with the following two data validation rules: "MM should
not exceed 12, and DD should not exceed 31". Technique 2: If the original figures
do not seem to match the ones in the database, then use a program to validate data
fields. Compare the sum of the numbers in the database data field to the original
sum of numbers from the source. If there is a difference between the figures, it is an
indication of an error in at least one data element.
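
Technique 1 above can be sketched as a pair of simple field-validation rules; the function name and the MM/DD/YYYY format handling are assumptions made for illustration.

    # Simple field-validation rules for an MM/DD/YYYY date field (Technique 1):
    # MM should not exceed 12, and DD should not exceed 31.
    def field_rules_ok(text):
        try:
            mm, dd, yyyy = text.split("/")
            return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31 and len(yyyy) == 4
        except ValueError:
            return False

    assert field_rules_ok("12/25/2005")       # valid entry
    assert not field_rules_ok("13/25/2005")   # data-entry mistake caught by the rule
    print("Field-validation rules behave as expected")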

What is the difference between data validity and data integrity?

• Difference number one: Data validity is about the correctness and


reasonableness of data, while data integrity is about the completeness,
soundness, and wholeness of the data that also complies with the intention of
the creators of the data.

• Difference number two: Data validity errors are more common, while data
integrity errors are less common. Difference number three: Errors in data
validity are caused by HUMANS -- usually data entry personnel – who enter,
for example, 13/25/2005, by mistake, while errors in data integrity are
caused by BUGS in computer programs that, for example, cause the
overwriting of some of the data in the database, when one attempts to
retrieve a blank value from the database.


What is Test Director?


Test Director, also known as Mercury Test Director, is a software tool made
for software QA professionals. Mercury Test Director, as the name implies, is the
product of Mercury Interactive Corporation, located at 379 North Whisman Road,
Mountain View, California 94043 USA.
Mercury's products include the Mercury TestDirector®, Mercury QuickTest
Professional™, Mercury WinRunner™, and Mercury Business Process Testing™.

Tell me about 'TestDirector'.


Made by Mercury Interactive, 'Test Director' is a single browser-based
application that streamlines the software QA process. It is a software tool that helps
software QA professionals to gather requirements, to plan, schedule and run tests,
and to manage and track defects/issues/bugs. Test Director's Requirements Manager
links test cases to requirements, ensures traceability, and calculates what
percentage of the requirements are covered by tests, how many of these tests have
been run, and how many have passed or failed. As to planning, test plans can be
created, or imported, for both manual and automated tests. The test plans can then
be reused, shared, and preserved. As to running tests, the Test Director’s Test Lab
Manager allows you to schedule tests to run unattended, or even run overnight. The
Test Director's Defect Manager supports the entire bug life cycle, from initial problem
detection through fixing the defect, and verifying the fix. Additionally, the Test
Director can create customizable graphs and reports, including test execution reports
and release status assessments.

What is structural testing?


Structural testing is also known as clear box testing, glass box testing.
Structural testing is a way to test software with knowledge of the internal workings
of the code being tested. Structural testing is white box testing, not black box
testing, since black boxes are considered opaque and do not permit visibility into the
code.

What is the difference between static and dynamic testing?


The differences between static and dynamic testing are as follows:

• Difference number 1: Static testing is about prevention, while dynamic testing
is about cure.

• Difference number 2: Static tools offer greater marginal benefits: if you use
neither static nor dynamic test tools today, adopt the static tools first.

• Difference number 3: Static testing is many times more cost-effective than
dynamic testing.

• Difference number 4: Static testing beats dynamic testing by a wide margin
in the number of defects found per hour of effort.

• Difference number 5: Static testing is more effective, because it examines
every statement you have written, not just the paths that happen to be executed.

• Difference number 6: Static testing gives you comprehensive diagnostics
for your code.

• Difference number 7: Static testing achieves 100% statement coverage in a
relatively short time, while dynamic testing often achieves less
than 50% statement coverage, because dynamic testing finds bugs only in
parts of the code that are actually executed.

• Difference number 8: Dynamic testing usually takes longer than static
testing. Dynamic testing may involve running several test cases, each
of which may take longer than compilation.

• Difference number 9: Dynamic testing finds fewer bugs than static testing.

• Difference number 10: Static testing can be done before compilation, while
dynamic testing can take place only after compilation and linking.

• Difference number 11: Static testing can find all of the following that
dynamic testing cannot find: syntax errors, code that is hard to
maintain, code that is hard to test, code that does not conform to coding
standards, and ANSI violations.

What testing tools should I use?


Ideally, you should use both static and dynamic testing tools. To maximize
software reliability, you should use both static and dynamic techniques, supported by
appropriate static and dynamic testing tools. Static and dynamic testing are
complementary. Static and dynamic testing find different classes of bugs. Some bugs
are detectable only by static testing, some only by dynamic.
Dynamic testing does detect some errors that static testing misses. To eliminate as
many errors as possible, both static and dynamic testing should be used. All this
static testing (i.e. testing for syntax errors, testing for code that is hard to maintain,
testing for code that is hard to test, testing for code that does not conform to coding
standards, and testing for ANSI violations) takes place before compilation. Static


testing takes roughly as long as compilation and checks every statement you have
written.

Why should I use static testing techniques?


You should use static testing techniques because static testing is a bargain,
compared to dynamic testing. Static testing is up to 100 times more effective. Even
in selective testing, static testing may be up to 10 times more effective. The most
pessimistic estimates suggest a factor of 4. Since static testing is faster and achieves
100% coverage, the unit cost of detecting these bugs by static testing is many times
lower than that of dynamic testing. About half of the bugs detectable by dynamic
testing can be detected earlier by static testing. If you use neither static nor
dynamic test tools, the static tools offer greater marginal benefits. If urgent
deadlines loom on the horizon, the use of dynamic testing tools can be omitted, but
tool-supported static testing should never be omitted.

How can I get registered and licensed as a professional engineer?


To get registered and licensed as a professional engineer, generally you have
to be a legal resident of the jurisdiction where you submit your application. You also
have to be at least 18 years of age, trustworthy, with no criminal record. You also
have to have a minimum of a
bachelor's degree in engineering, from an established, recognized, and approved
university.
Usually you have to provide two references, from licensed and professional
engineers, and work for a few years as an engineer, as an "engineer in training",
under the supervision of a registered and licensed professional engineer. You have to
pass a test of competence in your engineering discipline as well as in professional
ethics. For many candidates, the biggest two hurdles of getting a license seem to be
the lack of a university degree in engineering, or the lack of an acceptable, verifiable
work experience, under the supervision of a licensed, professional engineer.

What is the definition of top down design?


Top down design progresses from simple design to detailed design. Top down
design solves problems by breaking them down into smaller, easier to solve sub
problems. Top down design creates solutions to these smaller problems, and then
tests them using test drivers.


In other words, top down design starts the design process with the main
module or system, and then progresses down to lower level modules and
subsystems. To put it differently, top down design looks at the whole system, and
then explodes it into subsystems, or smaller parts. A systems engineer or systems
analyst determines what the top level objectives are, and how they can be met. He
then divides the system into subsystems, i.e. breaks the whole system into logical,
manageable-size modules, and deals with them individually.

What are the future prospects of software QA/testing?


In many IT-related occupations, employers want to see an increasingly
broader range of skills; often non technical skills. In software QA/testing, for
example, employers want us to have a combination of technical, business, and
personal skills. Technical skills mean skills in IT, quantitative analysis, data
modeling, and technical writing. Business skills mean skills in strategy and business
writing. Personal skills mean personal communication, leadership, teamwork, and
problem-solving skills. We, employees, on the other hand, want increasingly more
autonomy, better lifestyle, increasingly more employee oriented company culture,
and better geographic location. We will continue to enjoy relatively good job security
and, depending on the business cycle, many job opportunities as well. We realize our
skills are important, and have strong incentives to upgrade our skills, although
sometimes lack the information on how to do so. Educational institutions are
increasingly more likely to ensure that we are exposed to real-life situations and
problems, but high turnover rates and a rapid pace of change in the IT industry will
often act as strong disincentives for employers to invest in our skills, especially non-
company specific skills. Employers will continue to establish closer links with
educational institutions, both through in-house education programs and human
resources. The share of IT workers with IT degrees will keep increasing. Certification
will continue to help employers quickly identify those of us with the latest skills.
During boom times, smaller and younger companies will continue to be the most
attractive to us, especially those companies that offer stock options and performance
bonuses in order to retain and attract those of us who are most skilled. High
turnover rates will continue to be the norm, especially during boom. Software
QA/Testing will continue to be outsourced to offshore locations. Software QA/testing
will continue to be performed by a disproportionate share of men, but the share of
women will increase.


How can I be effective and efficient, when I do black box testing of
ecommerce web sites?

When you're doing black box testing of e-commerce web sites, you're most
efficient and effective when you're testing the sites' Visual Appeal, Contents, and
Home Pages. When you want to be effective and efficient, you need to verify that the
site is well planned. Verify that the site is customer-friendly. Verify that the choices
of colors are attractive. Verify that the choices of fonts are attractive. Verify that the
site's audio is customer friendly. Verify that the site's video is attractive. Verify that
the choice of graphics is attractive. Verify that every page of the site is displayed
properly on all the popular browsers. Verify the authenticity of facts. Ensure the site
provides reliable and consistent information. Test the site for appearance. Test the
site for grammatical and spelling errors. Test the site for visual appeal, choice of
browsers, consistency of font size, download time, broken links, missing links,
incorrect links, and browser compatibility. Test each toolbar, each menu item, every
window, every field prompt, every pop-up text, and every error message. Test every
page of the site for left and right justifications, every shortcut key, each control,
each push button, every radio button, and each item on every drop-down menu. Test
each list box, and each help menu item. Also check, if the command buttons are
grayed out when they're not in use.
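
Several of the checks above, such as looking for broken links, can be automated.
The following is a minimal sketch using only the Python standard library; the URL is
a placeholder, and real e-commerce pages may additionally require cookies,
authentication, or JavaScript rendering that this sketch does not handle.

from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    # Collects the href targets of all anchor tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(page_url):
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for link in collector.links:
        full_url = urljoin(page_url, link)
        if not full_url.startswith("http"):
            continue                      # skip mailto:, javascript:, etc.
        try:
            urlopen(full_url)             # raises HTTPError on 4xx/5xx responses
        except (HTTPError, URLError):
            broken.append(full_url)
    return broken

print(find_broken_links("http://www.example.com/"))   # placeholder URL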

What is the difference between top down and bottom up design?


Top down design proceeds from the abstract (entity) to get to the concrete
(design). Bottom up design proceeds from the concrete (design) to get to the
abstract (entity). Top down design is most often used in designing brand new
systems, while bottom up design is sometimes used when one is reverse engineering
a design; i.e. when one is trying to figure out what somebody else designed in an
existing system. Bottom up design begins the design with the lowest level modules
or subsystems, and progresses upward to the main program, module, or subsystem.


With bottom up design, a structure chart is necessary to determine the order of


execution, and the development of drivers is necessary to complete the bottom up
approach.
Top down design, on the other hand, begins the design with the main or
toplevel module, and progresses downward to the lowest level modules or
subsystems. Real life sometimes is a combination of top down design and bottom up
design. For instance, data modeling sessions
tend to be iterative, bouncing back and forth between top down and bottom up
modes, as the need arises.

What is the definition of bottom up design?


Bottom up design begins the design at the lowest level modules or
subsystems, and progresses upward to the design of the main program, main
module, or main subsystem.
To determine the order of execution, a structure chart is needed, and, to
complete the bottom up design, the development of drivers is needed. In software
design - assuming that the data you start with is a pretty good model of what you're
trying to do - bottom up design generally starts with the known data (e.g. customer
lists, order forms), then the data is broken into chunks (i.e. entities) appropriate
for planning a relational database. This process reveals what relationships the
entities have, and what the entities' attributes are. In software design, bottom up
design doesn't only mean writing the program in a different order, but there is more
to it. When you design bottom up, you often end up with a different program.
Instead of a single, monolithic program, you get a larger language, with more
abstract operators, and a smaller program written in it. Once you abstract out the
parts which are merely utilities, what is left is a much shorter program. The higher you
build up the language, the less distance you will have to travel down to it, from the
top. Bottom up design makes it easy to reuse code blocks. For example, many of the
utilities you write for one program are also useful for programs you have to write
later. Bottom up design also makes programs easier to read.

What is smoke testing?


Smoke testing is a relatively simple check to see whether the product
"smokes" when it runs. Smoke testing is sometimes performed in an ad hoc fashion,
i.e. without a formal test plan. With many projects, smoke testing is carried out in


addition to formal testing. If smoke testing is carried out by a skilled tester, it can
often find problems that are not caught during regular testing. Sometimes, if testing
occurs very early or very late in the software development cycle, this can be the only
kind of testing that can be performed. Smoke tests are, by definition, not exhaustive,
but, over time, you can increase your coverage of smoke testing. A common practice
at Microsoft, and some other software companies, is the daily build and smoke test
process. This means, every file is compiled, linked, and combined into an executable
file every single day, and then the software is smoke tested. Smoke testing
minimizes integration risk, reduces the risk of low quality, supports easier defect
diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but
should expose any major problems. Smoke testing should be thorough enough that,
if it passes, the tester can assume the product is stable enough to be tested more
thoroughly. Without smoke testing, the daily build is just a time wasting exercise.
Smoke testing is the sentry that guards against any errors in development and
future problems during integration. At first, smoke testing might be the testing of
something that is easy to test. Then, as the system grows, smoke testing should
expand and grow, from a few seconds to 30 minutes or more.

What is the difference between monkey testing and smoke testing?

• Difference number 1: Monkey testing is random testing, and smoke testing


is a nonrandom check to see whether the product "smokes" when it
runs. Smoke testing is nonrandom testing that deliberately exercises the
entire system from end to end, with the goal of exposing any major
problems.

• Difference number 2: Monkey testing is performed by automated testing


tools. On the other hand, smoke testing, more often than not, is a
manual check to see whether the product "smokes" when it runs.

• Difference number 3: Monkey testing is performed by "monkeys", while


smoke testing is performed by skilled testers (to see whether the
product "smokes" when it runs).

• Difference number 4: "Smart monkeys" are valuable for load and stress
testing, but not very valuable for smoke testing, because they are too
expensive for smoke testing.


• Difference number 5: "Dumb monkeys" are inexpensive to develop, are


able to do some basic testing, but, if we use them for smoke testing,
they find few bugs during smoke testing.

• Difference number 6: Monkey testing is not a thorough testing, but smoke


testing is thorough enough that, if the build passes, one can assume
that the program is stable enough to be tested more thoroughly.

• Difference number 7: Monkey testing does not evolve. Smoke testing, on


the other hand, evolves as the system evolves from something simple
to something more thorough.

• Difference number 8: Monkey testing takes "six monkeys" and a "million


years" to run. Smoke testing, on the other hand, takes much less time
to run, i.e. anywhere from a few seconds to a couple of hours.

Tell me about the process of daily builds and smoke tests.


The idea behind the process of daily builds and smoke tests is to build the
product every day, and test it every day. The software development process at
Microsoft and many other software companies requires daily builds and smoke tests.
According to their process, every day, every single file has to be compiled, linked,
and combined into an executable program. And, then, the program has to be "smoke
tested". Smoke testing is a relatively simple check to see whether the product
"smokes" when it runs. You should add revisions to the build only when it makes
sense to do so. You should to establish a Build Group, and build *daily*; set your
*own standard* for what constitutes "breaking the build", and create a penalty for
breaking the build, and check for broken builds *every day*. In addition to the daily
builds, you should smoke test the builds, and smoke test them Daily. You should
make the smoke test Evolve, as the system evolves. You should build and smoke
test Daily, even when the project is under pressure. Think about the many benefits
of this process! The process of daily builds and smoke tests minimizes the integration
risk, reduces the risk of low quality, supports easier defect diagnosis, improves
morale, enforces discipline, and keeps pressure-cooker projects on track. If you build
and smoke test *daily*, success will come, even when you're working on large
projects!
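
As a rough illustration, the whole process can be driven by a small script that the
Build Group schedules once a day. The sketch below is in Python; the build command
("make all") and the location of the smoke suite (smoke_tests/) are assumptions, so
substitute whatever your project actually uses.

import subprocess
import sys

def run(description, command):
    # Run one step of the process and report whether it succeeded.
    print("== " + description + ": " + " ".join(command))
    return subprocess.run(command).returncode == 0

def daily_build_and_smoke_test():
    # Step 1: compile, link, and combine every file into an executable program.
    if not run("Build", ["make", "all"]):
        print("BUILD BROKEN -- apply the penalty and notify the Build Group")
        return 1
    # Step 2: smoke test the build; this suite should evolve as the system evolves.
    if not run("Smoke test", ["pytest", "smoke_tests"]):
        print("SMOKE TEST FAILED -- build rejected")
        return 1
    print("Build is stable enough to be tested more thoroughly")
    return 0

if __name__ == "__main__":
    sys.exit(daily_build_and_smoke_test())
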
What is the purpose of test strategy?

• Reason number 1: The number one reason for writing a test strategy
document is to "have" a signed, sealed, and delivered, FDA (or FAA) approved
document, where the document includes a written testing methodology, test
plan, and test cases.

• Reason number 2: Having a test strategy does satisfy one important step in
the software testing process.

• Reason number 3: The test strategy document tells us how the software
product will be tested.

• Reason number 4: The creation of a test strategy document presents an
opportunity to review the test plan with the project team.

• Reason number 5: The test strategy document describes the roles,
responsibilities, and the resources required for the test, and the schedule
constraints.

• Reason number 6: When we create a test strategy document, we have to
put into writing any testing issues requiring resolution (and usually this
means additional negotiation at the project management level).

• Reason number 7: The test strategy is decided first, before lower level
decisions are made on the test plan, test design, and other testing issues.

What do you mean by 'the process is repeatable'?


A process is repeatable, whenever we have the necessary processes in place,
in order to repeat earlier successes on projects with similar applications. A process is
repeatable, if we use detailed and well-written processes and procedures. A process
is repeatable, if we ensure that the correct steps are executed. When the correct
steps are executed, we facilitate a successful completion of the task. Documentation
is critical. A software process is repeatable, if there are requirements management,
project planning, project tracking, subcontract management, QA, and configuration
management. Both QA processes and practices should be documented, so that they
are repeatable. Specifications, designs, business rules, inspection reports,
configurations, code changes, test plans, test cases, bug reports, user manuals
should all be documented, so that they are repeatable. Document files should be
well organized. There should be a system for easily finding and obtaining documents,
and determining what document has a particular piece of information. We should use
documentation change management, if possible. Once a test engineer has learned and
reviewed a customer's business processes and procedures, he will follow them. He
will also recommend improvements and/or additions.


What is the purpose of a test plan?

• Reason number 1: We create a test plan because preparing it helps us to


think through the efforts needed to validate the acceptability of a software
product.

• Reason number 2: We create a test plan because it can and will help people
outside the test group to understand the why and how of product validation.

• Reason number 3: We create a test plan because, in regulated


environments, we have to have a written test plan.

• Reason number 4: We create a test plan because the general testing


process includes the creation of a test plan.

• Reason number 5: We create a test plan because we want a document that


describes the objectives, scope, approach and focus of the software testing
effort.

• Reason number 6: We create a test plan because it includes test cases,


conditions, the test environment, a list of related tasks, pass/fail criteria, and
risk assessment.

• Reason number 7: We create test plan because one of the outputs for
creating a test strategy is an approved and signed off test plan document.

• Reason number 8: We create a test plan because the software testing
methodology is a three step process, and one of the steps is the creation of a
test plan.

• Reason number 9: We create a test plan because we want an opportunity to


review the test plan with the project team.

• Reason number 10: We create a test plan document because test plans
should be documented, so that they are repeatable.

Give me one test case that catches all the bugs!


If there is a "magic bullet", i.e. the one test case that has a good possibility to
catch ALL the bugs, or at least the most important bugs, it is a challenge to find it,
because test cases depend on requirements; requirements depend on what
customers need; and customers can have great many different needs. As software
systems are getting increasingly complex, it is increasingly more challenging to write
test cases. It is true that there are ways to create "minimal test cases" which can


greatly simplify the test steps to be executed. But, writing such test cases is time
consuming, and project deadlines often prevent us from going that route.
Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the "most important bugs", bugs still
surface with amazing spontaneity. The challenge is, developers do not seem to know
how to avoid providing the many opportunities for bugs to hide, and testers do not
seem to know where the bugs are hiding.

What is the difference between a test plan and a test scenario?


Difference number 1: A test plan is a document that describes the scope,
approach, resources, and schedule of intended testing activities, while a test scenario
is a document that describes both typical and atypical situations that may occur in
the use of an application. Difference number 2: Test plans define the scope,
approach, resources, and schedule of the intended testing activities, while test
procedures define test conditions, data to be used for
testing, and expected results, including database updates, file outputs, and report
results.
Difference number 3: A test plan is a description of the scope, approach, resources,
and schedule of intended testing activities, while a test scenario is a description of
test cases that ensure that a business process flow, applicable to the customer, is
tested from end to end.

What is a test scenario?


The terms "test scenario" and "test case" are often used synonymously. Test
scenarios are test cases, or test scripts, and the sequence in which they are to be
executed. Test scenarios are test cases that ensure that business process flows are
tested from end to end. Test scenarios are either independent tests, or a series of
tests that follow each other, where each of them is dependent upon the output of the
previous one. Test scenarios are prepared by reviewing functional requirements, and
preparing logical groups of functions that can be further broken into test procedures.
Test scenarios are designed to represent both typical and unusual situations that
may occur in the application. Test engineers define unit test requirements and unit
test scenarios. Test engineers also execute unit test scenarios. It is the test team
that, with assistance of developers and clients, develops test scenarios for
integration and system testing. Test scenarios are executed through the use of test


procedures or scripts. Test procedures or scripts define a series of steps necessary to


perform one or more test scenarios. Test procedures or scripts may cover multiple
test scenarios.

Give me some sample test cases you would write!


For instance, if one of the requirements is, "Brake lights shall be on, when the
brake pedal is depressed", then, based on this one simple requirement, for starters, I
would write all of the following test cases: Test case number 101: "Inputs: The
headlights are on. The brake pedal is depressed. Expected result: The brake lights
are on. Verify that the brake lights are on, when the brake pedal is depressed." Test
case number 102: "Inputs: The left turn lights are on. The brake pedal is depressed.
Expected result: The brake lights are on. Verify that the brake lights are on, when
the brake pedal is depressed."
Test case number 103: "Inputs: The right turn lights are on. The brake pedal is
depressed. Expected result: The brake lights are on. Verify that the brake lights are
on, when the brake pedal is depressed." As you might have guessed, in the work
place, in real life, requirements are more complex than this one; and, just to verify
this one, simple requirement, there is a need for many more test cases.
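
To show how such test cases might look when automated, here is a minimal sketch
using Python and pytest. The Car class and its methods (headlights_on, left_turn_on,
right_turn_on, press_brake_pedal, brake_lights_on) are hypothetical names used only
for illustration; they are not part of any real library.

import pytest
from car import Car   # hypothetical system under test

@pytest.mark.parametrize("test_id, precondition", [
    (101, "headlights_on"),
    (102, "left_turn_on"),
    (103, "right_turn_on"),
])
def test_brake_lights_on_when_pedal_depressed(test_id, precondition):
    car = Car()
    getattr(car, precondition)()    # input: the other light is already on
    car.press_brake_pedal()         # input: the brake pedal is depressed
    assert car.brake_lights_on()    # expected result: the brake lights are on
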
How do you write test cases?
When I write test cases, I concentrate on one requirement at a time. Then,
based on that one requirement, I come up with several real life scenarios that are
likely to occur in the use of the application by end users. When I write test cases, I
describe the inputs, action, or event, and their expected results, in order to
determine if a feature of an application is working correctly. To make the test case
complete, I also add particulars e.g. test case identifiers, test case names,
objectives, test conditions (or setups), input data requirements (or steps), and
expected results. If I have a choice, I prefer writing test cases as early as possible in
the development life cycle. Why? Because, as a side benefit of writing test cases,
many times I am able to find problems in the requirements or design of an
application. And, because the process of developing test cases makes me completely
think through the operation of the application. You can learn to write test cases,
with little or no outside help, if you put your mind to it.

What is a parameter?


A parameter is an item of information - such as a name, a number, or a


selected option - that is passed to a program by a user or another program. By
definition, a parameter is a value on which something else depends. Any desired
numerical value may be given as a parameter. We use parameters when we want to
allow a specified range of variables. We use parameters when we want to
differentiate behavior or pass input data to computer programs or their
subprograms. Thus, when we are testing, the parameters of the test can be varied to
produce different results, because parameters do affect the operation of the program
receiving them. Example 1: We use a parameter, such as temperature, that defines
a system. In this definition, it is temperature that defines the system and determines
its behavior. Example 2: In the definition of function f(x) = x + 10, x is a parameter.
In this definition, x defines the f(x) function and determines its behavior. Thus, when
we are testing, x can be varied to make f(x) produce different values, because the
value of x does affect the value of f(x). When parameters are passed to a function
subroutine, they are called arguments.

What is a constant?
In software or software testing, a constant is a meaningful name that
represents a number, or string, that does not change. Constants are variables whose
value remain the same, i.e. constant, throughout the execution of a program. Why
do developers use constants? Because if we have code that contains constant values
that keep reappearing, or, if we have code that depends on certain numbers that are
difficult to remember, we can improve both the readability and maintainability of our
code, by using constants. To give you an example, let's suppose we declare a
constant and we call it Pi. We set it to 3.14159265 and use it throughout our code.
Constants, such as Pi, as the name implies, store values that remain constant
throughout the execution of our program. Keep in mind that, unlike variables which
can be read from and written to, constants are read-only variables. Although
constants resemble variables, we cannot modify or assign new values to them, as we
can to variables. But we can make constants public, or private. We can also specify
what data type they are.
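
A small sketch of declaring and using a constant follows, in Python. Note that
Python enforces constants only by convention (upper-case names) and, optionally,
through type checkers via typing.Final; the strict read-only behavior described above
is enforced by the compiler in languages such as C++ or Java.

from typing import Final

PI: Final[float] = 3.14159265   # read-only by convention, reused throughout the code

def circle_area(radius):
    return PI * radius * radius

print(circle_area(2.0))         # 12.5663706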

What is a requirements test matrix?


The requirements test matrix is a project management tool for tracking and
managing testing efforts, based on requirements, throughout the project's life cycle.


The requirements test matrix is a table, where requirement descriptions are put in
the rows of the table, and the descriptions of testing efforts are put in the column
headers of the same table. The requirements test matrix is similar to the
requirements traceability matrix, which is a representation of user requirements
aligned against system functionality. The requirements traceability matrix ensures
that all user requirements are addressed by the system integration team and
implemented in the system integration effort. The requirements test matrix is a
representation of user requirements aligned against system testing. Similarly to the
requirements traceability matrix, the requirements test matrix ensures that all user
requirements are addressed by the system test team and implemented in the system
testing effort.

Give me a requirements test matrix template!


For a simple requirements test matrix template, you want a basic table that
you would like to use for cross referencing purposes. How do you create one? You
can create a requirements test matrix template in the following six steps:
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. Let's suppose you have a list
of 90 requirements and 360 test cases. Based on these numbers, you want to create
a table of 91 rows and 361 columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90
requirement numbers, and paste them into rows 2 through 91 of your table.
Step 5: Focus on the first row of your table. One by one, copy all your 360 test case
numbers, and paste them into columns 2 through 361 of your table.
Step 6: Examine each of your 360 test cases, and, one by one, determine which of
the 90 requirements they satisfy. If, for the sake of this example, test case 64
satisfies requirement 12, then put a large "X" into cell 13-65 of your table... and
then you have it; you have just created a requirements test matrix template that
you can use for cross-referencing purposes.
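
The six steps above are easy to script. Here is a minimal sketch in Python that
writes the 91 row by 361 column table to a CSV file; the requirement and test case
names, and the single coverage entry (test case 64 satisfying requirement 12), are
simply the example's own numbers.

import csv

requirements = ["REQ-%d" % i for i in range(1, 91)]    # 90 requirements
test_cases = ["TC-%d" % j for j in range(1, 361)]      # 360 test cases
coverage = {("REQ-12", "TC-64")}                       # which test satisfies which requirement

with open("requirements_test_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([""] + test_cases)                 # header row: test case numbers
    for req in requirements:                           # one row per requirement
        row = [req] + ["X" if (req, tc) in coverage else "" for tc in test_cases]
        writer.writerow(row)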

What is reliability testing?


Reliability testing is designing reliability test cases, using accelerated
reliability techniques (e.g. step-stress, test/analyze/fix, and continuously increasing
stress testing techniques), AND testing units or systems to failure, in order to obtain


raw failure time data for product life analysis. The purpose of reliability testing is to
determine product reliability, and to determine whether the software meets the
customer's reliability requirements. In the system test phase, or after the software is
fully developed, one reliability testing technique we use is a test/analyze/fix
technique, where we couple reliability testing with the removal of faults. When we
identify a failure, we send the software back to the developers, for repair. The
developers build a new version of the software, and then we do another test
iteration. We track failure intensity (e.g. failures per transaction, or failures per hour)
in order to guide our test process, and to determine the feasibility of the software
release, and to determine whether the software meets the customer's reliability
requirements.

Give me an example on reliability testing.


For example, our products are defibrillators. From direct contact with
customers during the requirements gathering phase, our sales team learns that a
large hospital wants to purchase defibrillators with the assurance that 99 out of
every 100 shocks will be delivered properly.
In this example, the fact that our defibrillator is able to run for 250 hours without
any failure, in order to demonstrate the reliability, is irrelevant to these customers.
In order to test for reliability we need to translate terminology that is meaningful to
the customers into equivalent delivery units, such as the number of shocks. We
describe the customer needs in a quantifiable manner, using the customer’s
terminology. For example, our quantified reliability testing goal becomes as
follows: Our defibrillator will be considered sufficiently reliable if 10 (or fewer)
failures occur from 1,000 shocks. Then, for example, we use a test/analyze/fix
technique, and couple reliability testing with the removal of errors. When we identify
a failed delivery of a shock, we send the software back to the developers, for repair.
The developers build a new version of the software, and then we deliver another
1,000 shocks into dummy resistor loads. We track failure intensity (i.e. number of
failures per 1,000 shocks) in order to guide our reliability testing, and to determine
the feasibility of the software release, and to determine whether the software meets
our customers' reliability requirements.
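
The failure-intensity bookkeeping in this example is simple arithmetic. A minimal
sketch in Python, using the example's own goal of 10 or fewer failures per 1,000
shocks:

def failure_intensity(failures, shocks_delivered):
    # Failures per 1,000 shocks.
    return failures / shocks_delivered * 1000

def meets_reliability_goal(failures, shocks_delivered, allowed_per_1000=10):
    return failure_intensity(failures, shocks_delivered) <= allowed_per_1000

# After one test iteration of 1,000 shocks into dummy resistor loads:
print(failure_intensity(7, 1000))         # 7.0 failures per 1,000 shocks
print(meets_reliability_goal(7, 1000))    # True  -- feasible to release
print(meets_reliability_goal(15, 1000))   # False -- back to the developers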

What is verification?


Verification ensures the product is designed to deliver all functionality to the


customer; it typically involves reviews and meetings to evaluate documents, plans,
code, requirements and specifications; this can be done with checklists, issues lists,
walk-through and inspection meetings.

What is validation?
Validation ensures that functionality, as defined in requirements, is the
intended behavior of the product; validation typically involves actual testing and
takes place after verifications are completed.

What is a walk-through?
A walk-through is an informal meeting for evaluation or informational
purposes. A walk-through is also a process at an abstract level. It's the process of
inspecting software code by following paths through the code (as determined by
input conditions and choices made along the way). The purpose of code walk-
through is to ensure the code fits the purpose. Walk-throughs also offer opportunities
to assess an individual's or team's competency.
What is an inspection?
An inspection is a formal meeting, more formalized than a walk-through and
typically consists of 3-10 people including a moderator, reader (the author of
whatever is being reviewed) and a recorder (to make notes in the document). The
subject of the inspection is typically a document, such as a requirements document
or a test plan. The purpose of an inspection is to find problems and see what is
missing, not to fix anything. The result of the meeting should be documented in a
written report. Attendees should prepare for this type of meeting by reading through
the document, before the meeting starts; most problems are found during this
preparation. Preparation for inspections is difficult, but is one of the most cost
effective methods of ensuring quality, since bug prevention is more cost effective
than bug detection.

What is quality?


Quality software is software that is reasonably bug-free, delivered on time


and within budget, meets requirements and expectations and is maintainable.
However, quality is a subjective term. Quality depends on who the customer is and
their overall influence in the scheme of things. Customers of a software development
project include end-users, customer acceptance test engineers, testers, customer
contract officers, customer management, the development organization's
management, test engineers, testers, salespeople, software engineers, stockholders
and accountants. Each type of customer will have his or her own slant on quality.
The accounting department might define quality in terms of profits, while an end-
user might define quality as user friendly and bug free.

What is good code?


Good code is code that works, is free of bugs, and is readable and
maintainable. Organizations usually have coding standards all developers should
adhere to, but every programmer and software engineer has different ideas about
what is best and what are too many or too few rules. We need to keep in mind that
excessive use of rules can stifle both productivity and creativity. Peer reviews and
code analysis tools can be used to check for problems and enforce standards.

What is good design?


Design could mean many things, but often refers to functional design or
internal design. Good functional design is indicated by software functionality that can
be traced back to customer and end-user requirements. Good internal design is
indicated by software code whose overall structure is clear, understandable, easily
modifiable and maintainable; is robust with sufficient error handling and status
logging capability; and works correctly when implemented.

What is the software life cycle?


Software life cycle begins when a software product is first conceived and ends
when it is no longer in use. It includes phases like initial concept, requirements
analysis, functional design, internal design, documentation planning, test planning,
coding, document preparation, integration, testing, maintenance, updates, retesting
and phase-out.


How do you introduce a new software QA process?


It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, a serious management buy-in is required and a
formalized QA process is necessary. For medium size organizations with lower risk
projects, management and organizational buy-in and a slower, step-by-step process
is required. Generally speaking, QA processes should be balanced with productivity,
in order to keep any bureaucracy from getting out of hand. For smaller groups or
projects, an ad-hoc process is more appropriate. A lot depends on team leads and
managers; feedback to developers and good communication are essential among
customers, managers, developers, test engineers and testers. Regardless of the size of
the company, the greatest value for effort is in managing requirement processes,
where the goal is requirements that are clear, complete and testable.

What is the role of documentation in QA?


Documentation plays a critical role in QA. QA practices should be
documented, so that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports,
user manuals should all be documented. Ideally, there should be a system for easily
finding and obtaining of documents and determining what document will have a
particular piece of information. Use documentation change management, if possible.

Why are there so many software bugs?


Generally speaking, there are bugs in software because of unclear
requirements, software complexity, programming errors, changes in requirements,
errors made in bug tracking, time pressure, poorly documented code and/or bugs in
tools used in software development.

• There are unclear software requirements because there is miscommunication as to
what the software should or shouldn't do.

• Software complexity: all of the following contribute to the exponential growth in
software and system complexity: Windows interfaces, client-server and distributed
applications, data communications, enormous relational databases and the sheer size
of applications.

• Programming errors occur because programmers and software engineers, like
everyone else, can make mistakes.

• As to changing requirements, in some fast-changing business environments,
continuously modified requirements are a fact of life. Sometimes customers do not
understand the effects of changes, or understand them but request them anyway.
The changes require redesign of the software and rescheduling of resources, some of
the work already completed may have to be redone or discarded, and hardware
requirements can be affected, too.

• Bug tracking can result in errors because the complexity of keeping track of
changes can itself introduce errors.

• Time pressures can cause problems, because scheduling of software projects is not
easy, it often requires a lot of guesswork, and when deadlines loom and the crunch
comes, mistakes will be made.

• Code documentation is tough to maintain, and it is also tough to modify code that
is poorly documented. The result is bugs. Sometimes there is no incentive for
programmers and software engineers to document their code and write clearly
documented, understandable code. Sometimes developers get kudos for quickly
turning out code, or programmers and software engineers feel they cannot have job
security if everyone can understand the code they write, or they believe that if the
code was hard to write, it should be hard to read.

• Software development tools, including visual tools, class libraries, compilers and
scripting tools, can introduce their own bugs. Other times the tools are poorly
documented, which can create additional bugs.

Give me five common problems that occur during software development.


Poorly written requirements, unrealistic schedules, inadequate testing, adding
new features after development is underway, and poor communication.
1. Requirements are poorly written when requirements are unclear, incomplete, too
general, or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any
good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is
underway.
5. Miscommunication either means the developers don't know what is needed, or
customers have unrealistic expectations, and therefore problems are guaranteed.

What is a backward compatible design?


The design is backward compatible, if the design continues to work with
earlier versions of a language, program, code, or software. When the design is
backward compatible, the signals or data that had to be changed, did not break the
existing code. For instance, our mythical web designer decides that the fun of using


Java script and Flash is more important than backward compatible design, or, he
decides that he doesn't have the resources to maintain multiple styles of backward
compatible web design. This decision of his will inconvenience some users, because
some of the earlier versions of Internet Explorer and Netscape will not display his
web pages properly, as there are some serious improvements in the newer versions
of Internet Explorer and Netscape that make the older versions of these browsers
incompatible with, for example, DHTML. This is when we say, "This design doesn't
continue to work with earlier versions of browser software. Therefore our mythical
designer's web design is not backward compatible". On the other hand, if the same
mythical web designer decides that backward compatibility is more important than
fun, or, if he decides that he has the resources to maintain multiple styles of
backward compatible code, then no user will be inconvenienced. No one will be
inconvenienced, even when Microsoft and Netscape make some serious
improvements in their web browsers. This is when we can say, "Our mythical web
designer's design is backward compatible".

What is Regression Testing?

Introduction:

This article attempts to take a close look at the process and techniques in Regression
Testing.

What is Regression Testing?

If a piece of Software is modified for any reason testing needs to be done to ensure
that it works as specified and that it has not negatively impacted any functionality
that it offered previously. This is known as Regression Testing.

Regression Testing attempts to verify:

● That the application works as specified even after the
changes/additions/modifications were made to it

● The original functionality continues to work as specified even after
changes/additions/modifications to the software application

● The changes/additions/modifications to the software application have not
introduced any new bugs

When is Regression Testing necessary?

Regression Testing plays an important role in any Scenario where a change has been
made to a previously tested software code. Regression Testing is hence an important
aspect in various Software Methodologies where software changes and enhancements
occur frequently.

Any Software Development Project is invariably faced with requests for changing
Design, code, features or all of them.

Some Development Methodologies embrace change.

For example ‘Extreme Programming’ Methodology advocates applying small


incremental changes to the system based on the end user feedback.

Each change implies more Regression Testing needs to be done to ensure that the
System meets the Project Goals.

Why is Regression Testing important?

Any Software change can cause existing functionality to break.


Changes to a Software component could impact dependent Components.

It is commonly observed that a Software fix could cause other bugs.

All this affects the quality and reliability of the system. Hence Regression Testing,
since it aims to verify all this, is very important.

Making Regression Testing Cost Effective:

Every time a change occurs one or more of the following scenarios may occur:
● More Functionality may be added to the system
● More complexity may be added to the system
● New bugs may be introduced


● New vulnerabilities may be introduced in the system


● System may tend to become more and more fragile with each change

After the change the new functionality may have to be tested along with all the
original functionality.

With each change Regression Testing could become more and more costly.

To make the Regression Testing Cost Effective and yet ensure good coverage one or
more of the following techniques may be applied:

● Test Automation: If the Test cases are automated, the test cases may be
executed using scripts after each change is introduced in the system. The execution
of test cases in this way helps eliminate oversight and human errors. It may also result
in faster and cheaper execution of Test cases. However, there is a cost involved in
building the scripts.

● Selective Testing: Some Teams choose to execute the test cases selectively. They
do not execute all the Test Cases during the Regression Testing. They test only what
they decide is relevant. This helps reduce the Testing Time and Effort (a small sketch
of this idea follows).
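
A minimal sketch of the selective testing idea, in Python: only the test cases mapped
to the changed components are selected for the regression run. The component names
and the mapping are assumptions for illustration; in practice the map would come from
your own traceability data.

TEST_CASE_MAP = {
    "login":    ["TC-001", "TC-002"],
    "checkout": ["TC-010", "TC-011", "TC-012"],
    "reports":  ["TC-020"],
}

def select_regression_tests(changed_components):
    # Return the test cases impacted by the changed components.
    selected = []
    for component in changed_components:
        selected.extend(TEST_CASE_MAP.get(component, []))
    return sorted(set(selected))

print(select_regression_tests(["checkout"]))
# ['TC-010', 'TC-011', 'TC-012'] -- only what is relevant to the change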

Regression Testing – What to Test?

Since Regression Testing tends to verify the software application after a change has
been made everything that may be impacted by the change should be tested during
Regression Testing. Generally the following areas are covered during Regression
Testing:

● Any functionality that was addressed by the change

● Original Functionality of the system

● Performance of the System after the change was introduced


Regression Testing – How to Test?

Like any other testing, Regression Testing needs proper planning.


For an Effective Regression Testing to be done the following ingredients are
necessary:

● Create a Regression Test Plan: The Test Plan identifies Focus Areas, Strategy, Test
Entry and Exit Criteria. It can also outline Testing Prerequisites, Responsibilities, etc.

● Create Test Cases: Test Cases that cover all the necessary areas are important.
They describe what to Test, Steps needed to test, Inputs and Expected Outputs. Test
Cases used for Regression Testing should specifically cover the functionality
addressed by the change and all components affected by the change. The Regression
Test case may also include the testing of the performance of the components and the
application after the change(s) were done.

● Defect Tracking: As in all other Testing Levels and Types, it is important that Defects
are tracked systematically, otherwise it undermines the Testing Effort.

Summary:

In this article we studied the importance of ‘Regression Testing’, its role and how it is
done.

What is client-server and web based testing and how to test these
applications

What is the difference between client-server testing and web based testing
and what are things that we need to test in such applications?

Ans:
Projects are broadly divided into two types:

• 2 tier applications
• 3 tier applications


CLIENT / SERVER TESTING


This type of testing is usually done for 2 tier applications (usually developed for LAN).
Here we will be having front-end and backend.

The application launched on front-end will be having forms and reports which will be
monitoring and manipulating data

E.g: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder
etc.,
The backend for these applications would be MS Access, SQL Server, Oracle, Sybase,
Mysql, Quadbase

The tests performed on these types of applications would be


● User interface testing
● Manual support testing
● Functionality testing
● Compatibility testing & configuration testing
● Intersystem testing

WEB TESTING
This is done for 3 tier applications (developed for Internet / intranet / extranet)
Here we will be having Browser, web server and DB server.

The applications accessible in browser would be developed in HTML, DHTML, XML,


JavaScript etc. (We can monitor through these applications)

Applications for the web server would be developed in Java, ASP, JSP, VBScript,
JavaScript, Perl, Cold Fusion, PHP etc. (All the manipulations are done on the web
server with the help of these programs developed)

The DB server would be having Oracle, SQL Server, Sybase, MySQL etc. (All data is
stored in the database available on the DB server)

The tests performed on these types of applications would be


● User interface testing
● Functionality testing
● Security testing


● Browser compatibility testing


● Load / stress testing
● Interoperability testing/intersystem testing

● Storage and data volume testing

A web-application is a three-tier application.


This has a browser (monitors data) [monitoring is done using html, dhtml, xml,
javascript]-> webserver (manipulates data) [manipulations are done using
programming languages or scripts like adv java, asp, jsp, vbscript, javascript, perl,
coldfusion, php] -> database server (stores data) [data storage and retrieval is done
using databases like oracle, sql server, sybase, mysql].

The types of tests, which can be applied on this type of applications, are:
1. User interface testing for validation & user friendliness
2. Functionality testing to validate behaviors, i/p, error handling, o/p, manipulations,
service levels, order of functionality, links, content of web page & backend
coverage
3. Security testing
4. Browser compatibility
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing

A client-server application is a two tier application.


This has forms & reporting at front-end (monitoring & manipulations are done)
[using vb, vc++, core java, c, c++, d2k, power builder etc.,] -> database server at
the backend [data storage & retrieval) [using ms access, sql server, oracle, sybase,
mysql, quadbase etc.,]

The tests performed on these applications would be


1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystem testing


Some more points to clear the difference between client server, web and
desktop applications:

Desktop application:
1. Application runs in single memory (Front end and Back end in one place)
2. Single user only

Client/Server application:
1. Application runs in two or more machines
2. Application is menu-driven
3. Connected mode (connection exists always until logout)
4. Limited number of users
5. Fewer network issues when compared to a web app.

Web application:
1. Application runs in two or more machines
2. URL-driven
3. Disconnected mode (state less)
4. Unlimited number of users
5. Many issues like hardware compatibility, browser compatibility, version
compatibility, security issues, performance issues etc.

The main difference between the two types of applications lies in where and how the
resources are accessed. In client-server, once a connection is made it stays in a
connected state, whereas in web applications the HTTP protocol is stateless; this is
where the logic of cookies comes in, which does not exist in client-server applications.

For a client-server application the users are well known, whereas for a web application
any user can log in and access the content, and will use it as per his or her intentions.

So, there are always issues of security and compatibility for web applications.

Black Box Testing: Types and techniques of BBT

I have covered what White box Testing is in a previous article. Here I will
concentrate on Black box testing: BBT advantages, disadvantages, and how
Black box testing is performed, i.e. the black box testing techniques.


Black box testing treats the system as a "black-box", so it doesn't explicitly use
knowledge of the internal structure or code. In other words, the Test engineer
need not know the internal working of the “Black box” or application.

Main focus in black box testing is on functionality of the system as a whole.


The term ‘behavioral testing’ is also used for black box testing and white box
testing is also sometimes called ’structural testing’. Behavioral test design is
slightly different from black-box test design because the use of internal knowledge
isn’t strictly forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some
bugs that cannot be found using only black box or only white box testing. The majority
of applications are tested by the black box testing method. We need to cover the
majority of test cases so that most of the bugs will get discovered by black box testing.

Black box testing occurs throughout the software development and Testing life cycle
i.e. in Unit, Integration, System, Acceptance and regression testing stages.

Tools used for Black Box testing:


Black box testing tools are mainly record and playback tools. These tools are used
for regression testing, to check whether a new build has created any bugs in
previously working application functionality. These record and playback tools record
test cases in the form of scripts like TSL, VB script, Java script, Perl.

Advantages of Black Box Testing


- Tester can be non-technical.
- Used to verify contradictions in actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete

Disadvantages of Black Box Testing


- The test inputs need to be selected from a large sample space.
- It is difficult to identify all possible inputs in limited testing time. So writing test
cases is slow and difficult.
- There are chances of having unidentified paths during this testing.

Methods of Black box Testing:


Graph Based Testing Methods:


Each and every application is built up of some objects. All such objects are identified
and a graph is prepared. From this object graph each object relationship is identified
and test cases are written accordingly to discover the errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error
Guessing is the art of guessing where errors can be hidden. There are no specific tools
for this technique; the tester writes the test cases that cover the application paths
where errors are likely to be hidden.

Boundary Value Analysis:


Many systems have a tendency to fail on boundaries. So testing boundary values of
the application is important. Boundary Value Analysis (BVA) is a functional testing
technique where the extreme boundary values are chosen. Boundary values include
maximum, minimum, just inside/outside boundaries, typical values, and error
values.

● Extends equivalence partitioning
● Test both sides of each boundary
● Look at output boundaries for test cases too
● Test min, min-1, max, max+1, and typical values

BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness Testing - Boundary Value Analysis plus values that go beyond the
limits
2. Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling

Limitations of Boundary Value Analysis


Boundary value testing is efficient only for variables that have fixed boundary values.
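
A minimal Python sketch of boundary value selection for one variable with a fixed
range, following the min-1, min, min+1, nominal, max-1, max, max+1 pattern described
above (the range 1 to 100 is an assumption for illustration):

def boundary_values(minimum, maximum):
    nominal = (minimum + maximum) // 2
    return [minimum - 1, minimum, minimum + 1, nominal,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(1, 100))
# [0, 1, 2, 50, 99, 100, 101] -- values just outside, on, and just inside each boundary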


Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain
of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:


1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
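To make the idea concrete, here is a minimal Python sketch (the range 1 to 100 is a made-up example) that maps representative input values to their equivalence classes, one valid and two invalid, as described in rule 1 above:

def classify(value, low=1, high=100):
    """Return the equivalence class that a value falls into."""
    if not isinstance(value, int):
        return "invalid: not an integer"
    if value < low:
        return "invalid: below range"
    if value > high:
        return "invalid: above range"
    return "valid: within range"

# Pick one representative test value from each class.
for representative in (-5, 50, 150):
    print(representative, "->", classify(representative))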

Comparison Testing:
In this method, different independent versions of the same software are compared
against each other for testing.

What you need to know about BVT (Build Verification Testing)

What is BVT?

A Build Verification Test is a set of tests run on every new build to verify that the build is
testable before it is released to the test team for further testing. These test cases are
core functionality test cases that ensure the application is stable and can be tested
thoroughly. Typically the BVT process is automated. If the BVT fails, the build is
assigned back to the developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT)

New Build is checked mainly for two things:

• Build validation
• Build acceptance

Some BVT basics:

• It is a subset of tests that verify main functionalities.


• The BVT’s are typically run on daily builds and if the BVT fails the build is
rejected and a new build is released after the fixes are done.
• The advantage of BVT is that it saves the effort of the test team in setting up and testing a
build when major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT should not take more than 30 minutes to run.
• BVT is a type of regression testing, done on each and every new build.

BVT primarily checks project integrity and whether all the modules are
integrated properly or not. Module integration testing is very important when
different teams develop project modules. I have heard of many cases of application failure
due to improper module integration. In the worst cases the complete project gets
scrapped due to failure in module integration.

What is the main task in build release?

Obviously file ‘check in’, i.e. to include all the new and modified project files
associated with the respective build. BVT was primarily introduced to check initial build
health, i.e. to check whether all the new and modified files are included in the release,
all file formats are correct, and every file version, language and flag associated with
each file is correct.
These basic checks are worthwhile before the build is released to the test team for testing.
You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?

This is a very tricky decision to take before automating the BVT task. Keep in mind
that the success of BVT depends on which test cases you include in it.

Here are some simple tips to include test cases in your BVT automation
suite:


• Include only critical test cases in BVT.


• All test cases included in BVT should be stable.
• All the test cases should have known expected result.
• Make sure all included critical functionality test cases are sufficient for
application test coverage.

Also, do not include modules in BVT that are not yet stable. For some under-
development features you can't predict the expected behavior, as these modules are
unstable and you might already be aware of some known failures in these
incomplete modules. There is no point including such modules or test cases in BVT.

You can make this task of selecting critical functionality test cases simple by
communicating with everyone involved in the project development and testing life cycle.
Such a process should negotiate the BVT test cases, which ultimately ensures BVT success.
Set some BVT quality standards; these standards can be met only by analyzing
major project features and scenarios.

Example: Test cases to be included in BVT for Text editor application (Some
sample tests only):
1) Test case for creating text file.
2) Test cases for writing something into text editor
3) Test case for copy, cut, paste functionality of text editor
4) Test case for opening, saving, deleting text file.

These are some sample test cases which can be marked as ‘critical’, and for every
minor or major change in the application these basic critical test cases should be
executed. This task can be easily accomplished by BVT.
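To illustrate how such a suite can be automated, here is a minimal, self-contained Python sketch of a BVT runner. The two tests below are simple stand-ins for the critical text editor checks listed above; a real suite would drive the actual application instead.

import os
import tempfile

def test_create_text_file():
    # Placeholder check: creating and deleting a text file should succeed.
    fd, path = tempfile.mkstemp(suffix=".txt")
    os.close(fd)
    os.remove(path)

def test_write_and_read_back():
    # Placeholder check: text written to a file should be read back unchanged.
    with tempfile.NamedTemporaryFile("w+", suffix=".txt") as handle:
        handle.write("hello")
        handle.seek(0)
        assert handle.read() == "hello"

BVT_SUITE = [test_create_text_file, test_write_and_read_back]

def run_bvt():
    """Run every critical test; reject the build on the first failure."""
    for test in BVT_SUITE:
        try:
            test()
        except Exception as exc:
            print("BVT FAILED in %s: %s - build rejected" % (test.__name__, exc))
            return False
    print("BVT PASSED - build accepted for detailed testing")
    return True

if __name__ == "__main__":
    run_bvt()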

The BVT automation suite needs to be maintained and modified from time to time, e.g. add new
test cases to the BVT when new stable project modules become available.

What happens when the BVT suite runs:


Say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that
project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the
result of the BVT.


3) If the BVT fails, the BVT owner diagnoses the cause of failure.


4) If the failure cause is a defect in the build, all the relevant information along with the failure logs
is sent to the respective developers.
5) On his initial diagnosis the developer replies to the team about the failure cause:
whether it is really a bug, and if it is a bug, what his bug-fixing plan
will be.
6) Once the bug is fixed, the BVT suite is executed again, and if the build passes the BVT, the build
is passed to the test team for further detailed functionality, performance and other tests.

This process gets repeated for every new build.
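As a small illustration of step 1 above, here is a minimal Python sketch (the addresses and SMTP host are hypothetical placeholders) that mails the BVT result to everyone associated with the project:

import smtplib
from email.message import EmailMessage

def send_bvt_result(passed, log_excerpt):
    """Email the BVT outcome and a log excerpt to the project mailing list."""
    msg = EmailMessage()
    msg["Subject"] = "BVT %s for latest build" % ("PASSED" if passed else "FAILED")
    msg["From"] = "bvt-owner@example.com"        # hypothetical sender
    msg["To"] = "project-team@example.com"       # hypothetical project mailing list
    msg.set_content(log_excerpt)
    with smtplib.SMTP("mail.example.com") as server:   # hypothetical SMTP host
        server.send_message(msg)

# Example call after the suite finishes:
# send_bvt_result(passed=False, log_excerpt="test_login failed: timeout ...")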

Why does the BVT or build fail?


The BVT breaks sometimes. This doesn't mean that there is always a bug in the build.
There are other reasons for a build to fail, such as a test case coding error, an automation
suite error, an infrastructure error, hardware failures etc.
You need to troubleshoot the cause of the BVT break and take proper action
after diagnosis.

Tips for BVT success:


1) Spend considerable time writing the BVT test case scripts.
2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This
will help the developer team to debug and quickly find the failure cause.
3) Select stable test cases to include in the BVT. For new features, if a new critical test
case passes consistently on different configurations then promote this test case into
your BVT suite. This will reduce the probability of frequent build failures due to new
unstable modules and test cases.
4) Automate the BVT process as much as possible. Right from the build release process to
communicating the BVT result - automate everything.
5) Have some penalties for breaking the build. Some chocolates or a team coffee party
from the developer who breaks the build will do.

Types of Risks in Software Projects

Are you developing any Test plan or test strategy for your project? Have
you addressed all risks properly in your test plan or test strategy?


As testing is the last part of the project, it's always under pressure and time
constraints. To save time and money you should be able to prioritize your testing
work. How will you prioritize testing work? For this you should be able to judge more
important and less important testing work. How will you decide which work is more
or less important? Here comes the need for risk-based testing.

What is Risk?
“Risks are future uncertain events with a probability of occurrence and a potential for
loss.” Risk identification and management are the main concerns in every software
project. Effective analysis of software risks will help in effective planning and
assignment of work.

In this article I will cover the types of risks; in subsequent articles I will
try to focus on risk identification, risk management and mitigation.

Risks are identified, classified and managed before actual execution of the program.
These risks are classified into different categories.

Categories of risks:

Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not
addressed properly.
Schedule risks mainly affect the project and finally the company economy, and may
lead to project failure.
Schedules often slip due to the following reasons:

• Wrong time estimation


• Resources are not tracked properly. All resources like staff, systems, skills of
individuals etc.
• Failure to identify complex functionalities and time required to develop those
functionalities.
• Unexpected project scope expansions.


Budget Risk:

• Wrong budget estimation.


• Cost overruns
• Project scope expansion

Operational Risks:
Risks of loss due to improper process implementation, failed systems or some
external events.
Causes of operational risks:

• Failure to address priority conflicts


• Failure to resolve the responsibilities
• Insufficient resources
• No proper subject training
• No resource planning
• No communication in team.

Technical risks:
Technical risks generally lead to failure of functionality and performance.
The causes of technical risks are:

• Continuous changing requirements


• No advanced technology available or the existing technology is in initial
stages.
• Product is complex to implement.
• Difficult project modules integration.

Programmatic Risks:
These are external risks beyond the operational limits. They are all uncertain
risks that are outside the control of the program.
These external events can be:

• Running out of funds.


• Market development
• Changing customer product strategy and priority
• Government rule changes.


How is domain knowledge important for testers?

“Looking at the current industry scenario, it is seen that testers are expected to have
technical testing skills and also either come from a domain background or have
gathered domain knowledge, most commonly for BFSI.
I would like to know why, and at what point this domain knowledge is imparted to the tester
during the testing cycle?”

First of all I would like to introduce the three-dimensional testing career described
by Danny R. Faught. There are three categories of skill that need to be judged before
hiring any software tester. What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.

No doubt any tester should have the basic testing skills of manual testing and
automation testing. A tester with common sense can find most of the
obvious bugs in the software. But would you say that this much testing is
sufficient? Would you release the product on the basis of this much testing?
Certainly not. You will certainly have the product reviewed by a domain expert
before it goes to market.

While testing any application you should think like an end user. But every human
being has limitations, and one can't be an expert in all three of the dimensions
mentioned above. (If you are an expert in all of the above skills then please let me
know ;-)) So you can't be sure that you can think 100% like the end user is going
to use your application. The user of your application may have a
good understanding of the domain he is working in. You need to balance all these
skill activities so that all product aspects get addressed.

Nowadays you can see that the professionals being hired by companies are more often
domain experts than people with purely technical skills. The current software industry is also
seeing a good trend: many professional developers and domain experts are moving into
software testing.

We can observe one more reason why domain experts are in demand. When you
hire fresh engineers who are just out of college, you cannot expect them to compete
with experienced professionals. Why? Because experienced professionals certainly
have the advantage of domain and testing experience; they have a better
understanding of different issues and can deliver the application better and faster.

Here are some of the examples where you can see the distinct edge of
domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain?
Are you going to test BFSI applications (Banking, Financial Services and
Insurance) just for UI or functionality or security or load or stress? You should know
the user requirements in banking, the working procedures, the commerce
background, exposure to brokerage etc., and should test the application accordingly; only
then can you say that your testing is enough. Here comes the need for subject-
matter experts.

Let’s take the example of my current project: I am currently working on a search
engine application, where I need to know the basics of search engine terminology
and concepts. Many times I see testers from other teams asking me questions like
what ‘publishers’ and ‘advertisers’ are, what the difference is and what they do. Do
you think they can test the application without understanding current online advertising
and SEO? Certainly not - not unless and until they become familiar with these terms and
functionalities.

When I know the functional domain better, I can write and execute more test
cases and can effectively simulate end user actions, which is distinctly a big
advantage.


Here is the big list of the required testing knowledge:

• Testing skill
• Bug hunting skill
• Technical skill
• Domain knowledge
• Communication skill
• Automation skill
• Some programming skill
• Quick grasping
• Ability to Work under pressure …

That is going to be a huge list, so you will certainly ask: do I need to have all of these
skills? It depends on you. You can stick to one skill, or be an expert in one
skill and have a good understanding of the others, or take a balanced approach to all of the
skills. This is a competitive market and you should definitely take advantage of it.
Make sure you are an expert in at least one domain before making any move.

What if you don’t have enough domain knowledge?


You can be posted to any project, and the company can assign any work to you. So
what if you don't have enough domain knowledge of that project? You need to
quickly grasp as many concepts as you can. Try to understand the product as if you
were the customer, and think about what the customer will do with the application. Visit the
customer site if possible to see how they work with the product, read online resources about
the domain of the application you want to test, participate in events addressing that
domain, and meet the domain experts. Alternatively, the company may provide all this in-house
training before assigning any domain-specific task to testers.

There is no specific stage where you need this domain knowledge. You need to apply
your domain knowledge in each and every phase of the software testing life cycle.

How to get all your bugs resolved without any ‘Invalid bug’ label?

I hate the “Invalid bug” label from developers for the bugs reported by
me, don’t you? I think every tester should try to get 100% of his/her bugs resolved. This
requires bug reporting skill. See the section “How to write a good bug
report? Tips and Tricks” to report bugs professionally and without any ambiguity.


The main reason for a bug being marked as invalid is “insufficient
troubleshooting” by the tester before reporting the bug. Here I will focus only
on the troubleshooting needed to find the main cause of the bug. Troubleshooting will help you
decide whether the ambiguity you found in your application under test is really a bug
or just a test setup mistake.

Yes, 50% of bugs get marked as “invalid” only due to the tester’s incomplete
test setup. Let’s say you found an ambiguity in the application under test. You are
now preparing the steps to report this ambiguity as a bug. But wait! Have you done
enough troubleshooting before reporting this bug? Have you confirmed whether it is
really a bug?

What troubleshooting do you need to perform before reporting any bug?

Troubleshooting of:

• What is not working?


• Why is it not working?
• How can you make it work?
• What are the possible reasons for the failure?

The answer to the first question, “what’s not working?”, is sufficient for you to report the
bug steps in the bug tracking system. Then why answer the remaining three questions?
Think beyond your responsibilities. Act smarter; don’t be someone who
only follows routine steps and never thinks outside of them. You should be able
to suggest all possible solutions to resolve the bug, along with the efficiency and
drawbacks of each solution. This will increase your respect in your team and will also
reduce the possibility of getting your bugs rejected - not because of this respect but because
of your troubleshooting skill.


Before reporting any bug, make sure it isn’t your mistake while testing, that you
haven’t missed setting any important flag, and that you haven’t misconfigured your test
setup.

Troubleshoot the reasons for the failure in the application, and only after proper troubleshooting
report the bug. I have compiled a troubleshooting list; check out what the
different reasons for failure can be.

Reasons for failure:
1) If you are using any configuration file for testing your application, make
sure this file is up to date as per the application requirements: many times a
global configuration file is used to pick or set application flags. Failure to
maintain this file as per your software requirements will lead to malfunctioning of
the application under test. You can’t report that as a bug.

2) Check whether your database is proper: a missing table is a common reason why an
application will not work properly.
I have a classic example of this: one of my projects queried many monthly
user database tables to show user reports. First the table’s existence was checked
in a master table (this table maintained only the monthly table names) and then data
was queried from the individual monthly tables. Many testers selected a big
date range to see the user reports, but this often crashed the application because
those tables were not present in the database of the test machine server, giving an SQL query
error; they reported it as a bug, which subsequently was marked as
invalid by the developers.

3) If you are working on an automation testing project, debug your script
twice before coming to the conclusion that the application failure is a bug.

4) Check that you are not using invalid access credentials for authentication.

5) Check if software versions are compatible.

6) Check if there is any other hardware issue that is not related to your application.

7) Make sure your application hardware and software prerequisites are correct.


8) Check whether all software components are installed properly on your test machine.
Check whether the registry entries are valid.

9) For any failure, look into the ‘system event viewer’ for details. You can trace many
failure reasons from the system event log file.

10) Before starting to test make sure you have uploaded all latest version files to
your test environment.

How to write a good bug report? Tips and Tricks

Why good Bug report?


If your bug report is effective, chances are higher that it will get fixed. So fixing a
bug depends on how effectively you report it. Reporting a bug is nothing but a skill
and I will tell you how to achieve this skill.

“The point of writing a problem report (bug report) is to get bugs fixed” - Cem Kaner.
If the tester does not report a bug correctly, the programmer will most likely
reject it as irreproducible. This can hurt the tester’s morale and sometimes
ego as well. (I suggest not keeping any type of ego - egos like “I have reported the bug
correctly”, “I can reproduce it”, “Why has he/she rejected the bug?”, “It’s not my
fault”, etc.)

What are the qualities of a good software bug report?


Anyone can write a bug report, but not everyone can write an effective bug report.
You should be able to distinguish between an average bug report and a good bug
report. How do you distinguish a good bug report from a bad one? It’s simple: apply the
following characteristics and techniques when reporting a bug.

1) Having clearly specified bug number:


Always assign a unique number to each bug report. This will help to identify the bug
record. If you are using any automated bug-reporting tool then this unique number
will be generated automatically each time you report the bug. Note the number and
brief description of each bug you reported.

2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the
steps to reproduce the bug. Do not assume or skip any reproduction step. A bug
described step by step is easy to reproduce and fix.

3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to
summarize the problem in as few words as possible yet in an effective way. Do not combine
multiple problems even if they seem to be similar; write a separate report for each
problem.

How to Report a Bug?

Use the following simple bug report template:


This is a simple bug report format. It may vary depending on the bug reporting tool you are using.
If you are writing the bug report manually then some fields, such as the bug number, need
to be mentioned and assigned manually.

Reporter: Your name and email address.

Product: In which product you found this bug.

Version: The product version if any.

Component: These are the major sub modules of the product.

Platform: Mention the hardware platform where you found this bug. The various
platforms like ‘PC’, ‘MAC’, ‘HP’, ‘Sun’ etc.

Operating system: Mention all operating systems where you found the bug.
Operating systems like Windows, Linux, Unix, SunOS, Mac OS. Mention the different
OS versions also if applicable like Windows NT, Windows 2000, Windows XP etc.

Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 as “fix the bug
with highest priority” and P5 as “fix when time permits”.

Severity:
This describes the impact of the bug.
Types of Severity:


• Blocker: No further testing work can be done.


• Critical: Application crash, Loss of data.
• Major: Major loss of function.
• Minor: minor loss of function.
• Trivial: Some UI enhancements.
• Enhancement: Request for new feature or some enhancement in existing
one.

Status:
When you log the bug in any bug tracking system then by default the bug
status is ‘New’.
Later on the bug goes through various stages like Fixed, Verified, Reopened, Won’t Fix etc.

Assign To:
If you know which developer is responsible for the particular module in which the bug
occurred, then you can specify the email address of that developer. Otherwise keep it blank;
this will assign the bug to the module owner, or the manager will assign the bug to a developer.
Possibly add the manager's email address to the CC list.

URL:
The page URL on which the bug occurred.

Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary
reflects what the problem is and where it is.

Description:
A detailed description of the bug. Use the following fields in the description:

• Reproduce steps: Clearly mention the steps to reproduce the bug.


• Expected result: How application should behave on above mentioned steps.
• Actual result: What is the actual result on running above steps i.e. the bug
behavior.

These are the important fields in a bug report. You can also add a “Report type” as
one more field to describe the bug type.


The report types are typically:


1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem

Some Bonus tips to write a good bug report:

1) Report the problem immediately: If you find any bug while testing, do not
wait to write a detailed bug report later. Write the bug report immediately. This
will ensure a good and reproducible bug report. If you decide to write the bug report
later on, chances are high that you will miss important steps in your report.

2) Reproduce the bug three times before writing the bug report: Your bug should
be reproducible. Make sure your steps are robust enough to reproduce the bug
without any ambiguity. If your bug is not reproducible every time, you can still file a
bug mentioning its intermittent nature.

3) Test for the same bug occurrence in other similar modules:


Sometimes developers use the same code for different, similar modules. So chances are
high that a bug in one module can occur in other similar modules as well. You
can even try to find a more severe version of the bug you found.

4) Write a good bug summary:


The bug summary helps developers to quickly analyze the nature of the bug. A poor quality
report will unnecessarily increase development and testing time. Communicate
well through your bug report summary. Keep in mind that the bug summary is used as a
reference to search for the bug in the bug inventory.

5) Read bug report before hitting Submit button:


Read all sentences, wording, steps used in bug report. See if any sentence is
creating ambiguity that can lead to misinterpretation. Misleading words or sentences
should be avoided in order to have a clear bug report.


6) Do not use Abusive language:


It’s nice that you did good work and found a bug, but do not use this credit for
criticizing the developer or attacking any individual.

Conclusion:
No doubt your bug report should be a high quality document. Focus on writing
good bug reports and spend some time on this task, because this is the main communication
point between tester, developer and manager. Managers should make their
team aware that writing a good bug report is a primary responsibility of any tester. Your
efforts towards writing good bug reports will not only save company resources but
also create a good relationship between you and the developers.

How to write software Testing Weekly Status Report

Writing an effective status report is as important as the actual work you did!
How do you write an effective status report on your weekly work at the end of each
week?

Here I am going to give some tips. The weekly report is important for tracking
important project issues, project accomplishments, pending work
and milestone analysis. Using these reports you can even track the team's
performance to some extent. From this report, prepare future actionable items
according to the priorities and make the list of next week's actionables.

So how do you write a weekly status report?

Follow the template below:


Prepared By:
Project:
Date of preparation:
Status:
A) Issues:
Issues holding the QA team from delivering on schedule:
Project:
Issue description:


Possible solution:
Issue resolution date:

You can mark these issues in red colour. These are the issues that require
management's help in resolving them.

Issues that management should be aware of:

These are the issues that do not prevent the QA team from delivering on time, but that
management should be aware of. Mark these issues in yellow colour. You can
use the same template above to report them.

Project accomplishments:
Mark them in green colour. Use the template below.
Project:
Accomplishment:
Accomplishment date:

B) Next week's priorities:


List next week's actionable items in two categories:

1) Pending deliverables: Mark them in blue colour. These are the previous week's
deliverables which should be released as soon as possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:

2) New tasks:
List all of next week's new tasks here. You can use black colour for this.
Project:
Scheduled Task:
Date of release:

C) Defect status:

Active defects:
List all active defects here with Reporter, Module, Severity, Priority and Assigned to.


Closed Defects:
List all closed defects with Reporter, Module, Severity, Priority and Assigned to.

Test cases:
List the total number of test cases written, test cases passed, test cases failed, and test cases
yet to be executed.

This template should give you an overall idea of the status report. Don't ignore the
status report. Even if your managers are not forcing you to write these reports, they
are very important for your future work assessment.

How to hire the right candidates for software testing positions?

Do companies really judge a candidate's testing ability in interviews? Do they


ask the questions that really judge the candidate's skill? What questions should be
asked to judge a candidate for the software testing field? What is the key process for
hiring good candidates for software testing positions?

OK, I am asking too many questions without giving an answer to any of them. Each
question mentioned above would require a separate article to address fairly.
Here we will address in short how to hire the right candidates for
software testing positions.

Companies or interviewers who are not serious about hiring the right candidates often
end up hiring poor performers.

What do I mean by “not serious” here?


- They don't know why, and for what post, they are hiring a candidate.
- They either fake or fail to post the exact job opening details.
- Or they don't want to hire skilled performers at all. Hmm, jealousy might be the
key here!

Whichever the reason, there is definitely a loss to the organization, in terms of both
revenue and growth.


If you need answers to these questions, there is an informative video from Pradeep
Soundararajan, consulting tester of Satisfice Inc. in India. He explains the current
situation of the software testing interview process in India and how interviewers
go wrong in selecting the questions to ask candidates. It is a nice start to spreading
awareness of the importance of software testing interviews.

Website Cookie Testing, Test cases for testing web application cookies?

We will first focus on what exactly cookies are and how they work. It
will be easier to understand the test cases for testing cookies when you
have a clear understanding of how cookies work, how cookies are stored on the hard drive,
and how cookie settings can be edited.

What is a Cookie?
A cookie is a small piece of information stored in a text file on the user's hard drive by the web
server. This information is later used by the web browser to retrieve information from that
machine. Generally a cookie contains personalized user data or information that is
used to communicate between different web pages.

Why Cookies are used?


Cookies are nothing but a user's identity and are used to track where the user
navigated throughout the web site pages. The communication between the web browser
and the web server is stateless.

For example, if you are accessing the domain http://www.example.com/1.html, the web
browser will simply query the example.com web server for the page 1.html. The next time
you request the page http://www.example.com/2.html, a new request is sent to the
example.com web server for the 2.html page, and the web server doesn't know
anything about whom the previous page 1.html was served to.

What if you want the previous history of this user's communication with the web
server? You need to maintain the user state and the interaction between web browser
and web server somewhere. This is where cookies come into the picture. Cookies serve
the purpose of maintaining the user's interactions with the web server.

How cookies work?


The HTTP protocol used to exchange information files on the web is used to maintain
the cookies. There are two types of HTTP protocol: stateless HTTP and stateful HTTP.
The stateless HTTP protocol does not keep any record of previously accessed
web page history, while the stateful HTTP protocol does keep some history of previous
web browser and web server interactions, and this is what cookies use to
maintain the user interactions.

Whenever a user visits a site or page that uses a cookie, a small piece of code inside that
HTML page (generally a call to some scripting language to write the cookie, such as
JavaScript, PHP or Perl) writes a text file on the user's machine called a cookie.
Here is an example of the Set-Cookie header that is used to write a cookie:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk
and used to identify the second visit of the same user on that domain. The expiration
time is set while writing the cookie, and is decided by the application that is
going to use the cookie.
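As a quick illustration, Python's standard library can parse such a Set-Cookie header; a minimal sketch like the one below (the cookie value is a placeholder) is handy for automated checks that a response sets the expected cookie name, path, domain and expiry:

from http.cookies import SimpleCookie

header = "RMID=placeholder_value; expires=Thu, 31 Dec 2020 23:59:59 GMT; path=/; domain=.rediff.com"
cookie = SimpleCookie()
cookie.load(header)

for name, morsel in cookie.items():
    print(name, "=", morsel.value)
    print("  domain :", morsel["domain"])
    print("  path   :", morsel["path"])
    print("  expires:", morsel["expires"])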

Generally two types of cookies are written on the user's machine.

1) Session cookies: This cookie is active only while the browser that invoked the cookie is
open. When we close the browser, the session cookie gets deleted. Sometimes a
session of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: These cookies are written permanently on the user's machine
and last for months or years.

Where cookies are stored?


When any web application writes a cookie, it gets saved in a text file on the user's hard
disk drive. The path where cookies get stored depends on the browser; different
browsers store cookies in different paths. E.g. Internet Explorer stores cookies in the path
“C:\Documents and Settings\Default User\Cookies”.
Here “Default User” can be replaced by the current user you logged in as, like
“Administrator”, or a user name like “Vijay” etc.
The cookie path can be easily found by navigating through the browser options. In the
Mozilla Firefox browser you can even see the cookies in the browser options itself: open
the Mozilla browser, click on Tools->Options->Privacy and then the “Show cookies”
button.

How cookies are stored?


Let's take the example of a cookie written by rediff.com on the Mozilla Firefox browser.
When you open the page rediff.com or log in to your
rediffmail account on Firefox, a cookie will get written to your hard disk. To view this cookie,
simply click on the “Show cookies” button mentioned in the path above. Click on the Rediff.com
site in the cookie list. You can see the different cookies written by the rediff domain with
different names.

Site: Rediff.com Cookie name: RMID


Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM

Applications where cookies can be used:

1) To implement shopping cart:


Cookies are used for maintaining online ordering systems. Cookies remember what
the user wants to buy. What if a user adds some products to their shopping cart and then, due
to some reason, doesn't want to buy those products this time and closes the
browser window? The next time the same user visits the purchase page, they can see all
the products they added to the shopping cart on their last visit.

2) Personalized sites:
When a user visits certain pages, they are asked which pages they don't want to visit or
display. The user's options get stored in a cookie and, while the user is online, those pages
are not shown to them.

3) User tracking:
To track the number of unique visitors online at a particular time.


4) Marketing:
Some companies use cookies to display advertisements on user machines. Cookies
control these advertisements. When and which advertisement should be shown?
What is the interest of the user? Which keywords he searches on the site? All these
things can be maintained using cookies.

5) User sessions:
Cookies can track user sessions to particular domain using user ID and password.

Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set
browser options to warn before writing any cookie, or has disabled cookies completely,
then a site relying on cookies will be effectively disabled and cannot perform any
cookie-dependent operation, resulting in loss of site traffic.

2) Too many cookies:


If you are writing too many cookies on every page navigation, and the user has turned
on the option to warn before writing cookies, this could turn the user away from your site.

3) Security issues:
Sometimes a user's personal information is stored in cookies, and if someone hacks the
cookie then the hacker can get access to that personal information. Even corrupted
cookies can be read by different domains and lead to security issues.

4) Sensitive information:
Some sites may write and store your sensitive information in cookies, which should
not be allowed due to privacy concerns.

This should be enough to know what cookies are. If you want more cookie info see
Cookie Central page.

Some Major Test cases for web application cookie testing:

The first obvious test case is to check whether your application is writing cookies properly to
disk. You can also use a cookie tester application if you don't have any web
application to test but want to understand the cookie concept for testing.


Test cases:

1) As a cookie privacy policy, make sure from your design documents that no
personal or sensitive data is stored in the cookie.

2) If you have no option other than saving sensitive data in a cookie, make sure the data
stored in the cookie is stored in an encrypted format.

3) Make sure that there is no overuse of cookies on your site under test. Overuse
of cookies will annoy users if the browser prompts for cookies often, and this
could result in loss of site traffic and eventually loss of business.

4) Disable the cookies from your browser settings: If you are using cookies on
your site, your site's major functionality will not work when cookies are disabled. Then
try to access the web site under test. Navigate through the site. See if appropriate
messages are displayed to the user, such as “For smooth functioning of this site make sure
that cookies are enabled on your browser”. There should not be any page crash due
to disabling the cookies. (Please make sure that you close all browsers and delete all
previously written cookies before performing this test.)

5) Accept/Reject some cookies: The best way to check web site functionality is
not to accept all cookies. If you are writing 10 cookies in your web application then
randomly accept some cookies, say accept 5 and reject 5 cookies. To execute this
test case you can set browser options to prompt whenever a cookie is being written to
disk. In this prompt window you can either accept or reject the cookie. Try to access the
major functionality of the web site. See if pages crash or data gets
corrupted.

6) Delete cookies: Allow the site to write the cookies, then close all browsers and
manually delete all cookies for the web site under test. Access the web pages and check
their behavior.

7) Corrupt the cookies: Corrupting a cookie is easy. You know where cookies are
stored. Manually edit the cookie in Notepad and change the parameters to some
vague values, e.g. alter the cookie content, the name of the cookie or the expiry date of the
cookie, and observe the site functionality. In some cases corrupted cookies allow
the data inside them to be read by other domains. This should not happen with your web
site's cookies. Note that cookies written by one domain, say rediff.com, can't be
accessed by another domain, say yahoo.com, unless and until the cookies are corrupted
and someone is trying to hack the cookie data.

8) Checking the deletion of cookies from your web application page: Sometimes
a cookie written by a domain, say rediff.com, may be deleted by the same domain but
by a different page under that domain. This is the general case if you are testing an
‘action tracking’ web portal. An action tracking or purchase tracking pixel is placed on
the action web page, and when any action or purchase is performed by the user, the cookie
written on disk gets deleted to avoid logging multiple actions from the same cookie. Check
whether reaching your action or purchase page deletes the cookie properly and no more
invalid actions or purchases get logged from the same user.

9) Cookie testing on multiple browsers: This is an important case to check whether
your web application page is writing the cookies properly on different browsers as
intended and whether the site works properly using these cookies. You can test your web
application on the major browsers like Internet Explorer (various versions), Mozilla
Firefox, Netscape, Opera etc.

10) If your web application is using cookies to maintain the login state of a
user, then log in to your web application using some username and password. In
many cases you can see the logged-in user ID parameter directly in the browser address
bar. Change this parameter to a different value - say, if the previous user ID is 100 then
make it 101 - and press enter. A proper access-denied message should be displayed to the user,
and the user should not be able to see another user's account.
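This kind of check can also be automated. Here is a minimal Python sketch (it assumes the third-party ‘requests’ package; the URL, endpoints and parameter names are hypothetical placeholders) that logs in as one user and then tampers with the user ID, expecting an access-denied response:

import requests

BASE_URL = "http://app.example.com"        # hypothetical application URL

session = requests.Session()
session.post(BASE_URL + "/login", data={"username": "user100", "password": "secret"})

# Tamper with the user ID that normally appears in the address bar.
response = session.get(BASE_URL + "/account", params={"user_id": 101})

# Expect an access-denied response rather than another user's data.
assert response.status_code in (401, 403) or "Access denied" in response.text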

Software Installation/Uninstallation Testing

Have you performed software installation testing? How was the experience?
Well, installation testing (implementation testing) is quite an interesting part of
the software testing life cycle.

Installation testing is like introducing a guest into your home. The new guest should be
properly introduced to all the family members in order to make him feel comfortable.
Installing new software is quite like the above example.


If your installation is successful on the new system then the customer will
definitely be happy, but what if things are completely the opposite?

If installation fails then our program will not work on that system; not only that,
it can leave the user's system badly damaged. The user might need to reinstall the full
operating system.

In the above case, will you make any impression on the user? Definitely not! Your first
impression for making a loyal customer is ruined due to incomplete installation testing.
What do you need to do for a good first impression? Test the installer
appropriately, with a combination of both manual and automated processes, on
different machines with different configurations. The major concern in installation
testing is time! It requires a lot of time to execute even a single test case. If you are
going to test a big application installer, think about the time required to perform
so many test cases on different configurations.

We will see different methods to perform manual installer testing and some
basic guidelines for automating the installation process.

To start installation testing, first decide on how many different system configurations
you want to test the installation. Prepare one basic hard disk drive: format this HDD
with the most common or default file system and install the most common operating system
(Windows) on it. Install some basic required components on this HDD. Each
time, create an image of this base HDD, and you can create other configurations on this
base drive. Make one set of each configuration, i.e. operating system and file format,
to be used for further testing.

How can we use automation in this process? Make some systems dedicated to
creating basic images of the base configuration (use software like Norton Ghost for
creating exact images of an operating system quickly). This will save you tremendous time
on each test case. For example, if the time to install one OS with the basic configuration is,
say, 1 hour, then each test case on a fresh OS will require 1+ hour. But creating an
image of the OS will hardly require 5 to 10 minutes, and you will save approximately 40
to 50 minutes per test case!

You can use one operating system for multiple installation attempts of the installer,
each time uninstalling the application and preparing the base state for the next test
case. Be careful here: your uninstallation program should have been tested beforehand
and should be working fine.

Installation testing tips with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify
our task. See the example flow diagram for a basic installation testing test case.

Add some more test cases to this basic flow chart, such as: if our application is not the first
release, then try to add different logical installation paths.

2) If you have previously installed a compact basic version of the application, then in the next
test case install the full application version on the same path as used for the compact version.

3) If you are using a flow diagram to test the different files to be written on disk during
installation, then use the same flow diagram in reverse order to test uninstallation of
all the installed files on disk.

4) Use flow diagrams to automate the testing efforts. It will be very easy to
convert diagrams into automated scripts.

5) Test the installer scripts used for checking the required disk space. If the installer
reports a required disk space of 1 MB, then make sure exactly 1 MB is used, or check whether
more disk space is utilized during installation. If yes, flag this as an error.

6) Test the disk space requirement on different file system formats, e.g. FAT16 will
require more space than the more efficient NTFS or FAT32 file systems.

7) If possible set a dedicated system for only creating disk images. As said above
this will save your testing time.

8) Use a distributed testing environment in order to carry out installation testing.


A distributed environment simply saves your time and lets you effectively manage all
the different test cases from a single machine. The good approach for this is to
create a master machine which drives different slave machines on the network. You
can start installations simultaneously on different machines from the master system.

9) Try to automate the routine of verifying the files to be written on disk. You
can maintain the list of files to be written on disk in an Excel sheet and give this
list as an input to an automated script that will check each and every path to verify the
correct installation (a small sketch of such a check appears after this list).

10) Use tools freely available in the market to verify registry changes after
successful installation. Verify the registry changes against your expected change list
after installation.

11) Forcefully break the installation process in between. See the behavior of the
system and whether the system recovers to its original state without any issues. You can
test this “break of installation” at every installation step.

12) Disk space checking: This is a crucial check in the installation testing
scenario. You can choose different manual and automated methods to do this
checking. In the manual method, you can check the free disk space available on the drive before
installation and the disk space reported by the installer script, to check whether the installer is
calculating and reporting disk space accurately. Check the disk space after the
installation to verify accurate usage of installation disk space. Run various
combinations of disk space availability by using some tools to automatically fill the
disk space during installation. Check system behavior in low disk space conditions
during installation (a small sketch of the before/after check appears after this list).

13) As you check installation, you can test uninstallation as well. Before each new
iteration of installation, make sure that all the files written to disk are removed after
uninstallation. Sometimes the uninstallation routine removes files only from the last
upgraded installation, keeping the old version files untouched. Also check the
rebooting option after uninstallation, both manually and by forcing it not to reboot.
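Referring back to tip 9, here is a minimal Python sketch (the CSV file name is a hypothetical placeholder; the list would be exported from the Excel sheet with one expected path per row) that verifies every expected file exists on disk after installation:

import csv
import os

def verify_installed_files(manifest_csv="expected_files.csv"):
    """Report any expected installation path that is missing from the disk."""
    missing = []
    with open(manifest_csv, newline="") as handle:
        for row in csv.reader(handle):
            if not row:
                continue
            path = row[0].strip()
            if not os.path.exists(path):
                missing.append(path)
    if missing:
        print("Installation verification FAILED, missing files:")
        for path in missing:
            print("  " + path)
    else:
        print("All expected files are present on disk.")
    return not missing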

I have addressed many areas of the manual as well as automated installation
testing procedure. Still, there are many areas you need to focus on depending on
the complexity of the software under installation. These important tasks not addressed
here include installation over the network, online installation, patch
installation, database checking on installation, shared DLL installation and
uninstallation, etc.
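And referring back to tip 12, here is a minimal Python sketch (the drive letter and the reported figure are placeholders) that records the free disk space before and after installation and compares the difference with what the installer claimed it would use:

import shutil

def free_space(drive="C:\\"):
    """Free bytes available on the given drive (Windows-style path as an example)."""
    return shutil.disk_usage(drive).free

before = free_space()
# ... run the installer here ...
after = free_space()

actual_used_mb = (before - after) / (1024 * 1024)
reported_mb = 1.0     # space the installer claimed it needs (example value)

if actual_used_mb > reported_mb:
    print("Flag as error: installer used %.1f MB but reported %.1f MB"
          % (actual_used_mb, reported_mb))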

What are the Quality attributes?

First, in brief, what is quality? Quality can be defined in
different ways, and the definition of quality may differ from person to person. But finally
there should be some standards. So quality can be defined as:

• Degree of excellence - Oxford dictionary


• Fitness for purpose - Edward Deming
• Best for the customer’s use and selling price - Feigenbaum
• The totality of characteristics of an entity that bear on its ability to satisfy
stated or implied needs - ISO

How a product developer will define quality - a product which meets the
customer requirements.
How a customer will define quality - the required functionality is provided in a user
friendly manner.


These are some quality definitions from different perspectives. Now let's see how
one can measure some quality attributes of a product or application.
The following factors are used to measure software development quality. Each
attribute can be used to measure the product's performance. These attributes can be
used for quality assurance as well as quality control. Quality assurance activities
are oriented towards preventing the introduction of defects, and quality control
activities are aimed at detecting defects in products and services.

Reliability
Measures whether the product is reliable enough to sustain any condition. It should give
consistently correct results.
Product reliability is measured in terms of the product working under different
environments and different conditions.

Maintainability
Different versions of the product should be easy to maintain. For development it
should be easy to add code to the existing system and easy to upgrade for new
features and new technologies from time to time. Maintenance should be cost effective
and easy: the system should be easy to maintain when correcting defects or making a change
in the software.

Usability
This can be measured in terms of ease of use. Application should be user friendly.
Should be easy to learn. Navigation should be simple.
The system must be:

• Easy to use for input preparation, operation, and interpretation of output.


• Provide consistent user interface standards or conventions with our other
frequently used systems.
• Easy for new or infrequent users to learn to use the system.

Portability
This can be measured in terms of the costing issues, technical issues and behavioral
issues related to porting.

Correctness
The application should be correct in terms of its functionality, the calculations used internally
and its navigation. This means the application should adhere to the
functional requirements.

Efficiency
A major system quality attribute, measured in terms of the time required to complete
any task given to the system. For example, the system should utilize processor capacity,
disk space and memory efficiently. If the system uses up all the available resources then the
user will get degraded performance, failing the system on efficiency. If the system is not
efficient then it cannot be used in real-time applications.

Integrity or security
Integrity comes with security. System integrity or security should be sufficient to
prevent unauthorized access to system functions, prevent information loss,
ensure that the software is protected from virus infection, and protect the privacy
of data entered into the system.

Testability
The system should be easy to test and to find defects in. If required, it should be easy to divide
into different modules for testing.

Flexibility
Should be flexible enough to modify. Adaptable to other products with which it needs
interaction. Should be easy to interface with other standard 3rd party components.

Reusability
Software reuse is a cost efficient and time saving way of developing. Different
code libraries and classes should be generic enough to be used easily in different application
modules. Dividing the application into different modules allows the modules to be reused
across the application.

Interoperability
It should be easy for the product to exchange data or services with other
systems. Different system modules should work on different operating system
platforms, different databases and under different protocol conditions.

Applying the above quality attribute standards, we can determine whether a system meets
the requirements of quality or not. As specified above, all these attributes are
applied to the QA and QC processes so that the tester or customer can also assess the
quality of the application or system.

Developers are not good testers. What do you say?

This can be a big debate. Developers testing their own code - what will
be the testing output? All happy endings! Yes, the person who develops the code
generally sees only the happy paths of the product and doesn't want to go into much
detail.

The main concern with developer testing is misunderstanding of requirements. If
requirements are misunderstood by the developer then, no matter at what depth the
developer tests the application, he will never find the error. The first place where the
bug gets introduced will remain till the end, as the developer will see it as functionality.

Optimistic developers - Yes, I wrote the code and I am confident it’s working
properly. No need to test this path, no need to test that path, as I know it’s working
properly. And right here developers skip the bugs.

Developer vs. Tester: A developer always wants to see his code working properly, so
he will test it to check that it works correctly. But do you know why a tester tests the
application? To make it fail in any way possible, and the tester will surely test how the
application does not work correctly. This is the main difference between developer testing
and tester testing.

Should developers test their own work?

I personally don't mind developers testing
their own code. After all, it's their baby. They
know their code very well. They know what the
traps in their code are, where it can fail, where
to concentrate more, and which are the important paths of
the application. A developer can do unit testing
very well and can effectively identify boundary
cases.


This is all applicable to a developer who is a good tester! But most
developers consider testing a painful job; even though they know the system well, due to
their negligence they tend to skip many testing paths, as it is a very painful
experience for them. If developers find any errors in their code in unit testing then
these are comparatively easier to fix, as the code is fresh to them, rather than getting the
bug from testers after two or three days. But this is only possible if the developer is
interested in doing that much testing.

It is the tester's responsibility to make sure each and every path is tested.
Testers should ideally give importance to every small possible detail to verify that the
application is not breaking anywhere.

Developers, please don’t review your own code. Generally you will overlook the
issues in your code. So give it to others for review.

Everyone has a specialization in a particular subject. Developers generally think about
how to develop the application; on the other hand, testers think about how the end user is
going to use the application.

Conclusion

So in short, there is no problem if developers do the basic unit testing
and basic verification testing. Developers can test the few exceptional conditions they
know are critical and should not be missed. But there are some great testers out
there. Throw the build to the test team; don't waste their time or yours. For the success of
any project there should be an independent testing team validating your applications.
After all, it's our (the testers') responsibility to make the ‘baby’ smarter!!

Living life as a Software Tester!

Here I will pick out only the points related to software testing. As a software tester, keep in
mind these simple points:

Share everything:
If you are an experienced tester on a project, then help the new developers on your
project. Some testers have a habit of keeping known bugs hidden until they get
implemented in code, and then they write a big defect report on them. Don't try to only
pump up your bug count; share everything with the developers.

Build trust:
Let the developers know about any bug you find in the design phase. Do not log the bug
repeatedly with small variations just to pump up the bug count. Build trust in the developer
and tester relationship.

Don't blame others:


As a tester you should not always blame developers for the bugs. Concentrate on the
bug, not on pointing out the bug in front of everyone. Hit the bug and its cause,
not the developer!

Clean up your own mess:


When you finish any test scenario, reconfigure the machine back to its original
configuration. The same applies to the bug report: write a clean, effective bug
report and let the developer find it easy to reproduce and fix.

Give credit to others for their work:


Do not take others' credit. If you have referred to anyone else's work, immediately give
credit to that person. Do not get frustrated if you did not find a bug that was later
reported by the client. Do work hard and use your skill.

Remember to flush:
Like toilets, all software needs flushing at some point. While doing performance
testing, remember to flush the system cache.

Take a nap every day:


We need time to think, get refreshed and regenerate our energy.
Sometimes it's important to take one step back in order to get fresh insight and to
find a different working approach.

How to be a good tester?

It is every tester's question: how to be a good tester? Apart from technical knowledge and testing skills, a tester should have certain personal skills which help build a good rapport within the testing team.

What are the abilities and skills that make someone a good tester? Well, I was reading Dave Whalen's article "Ugly Baby Syndrome!" and found it very interesting. Dave compares software developers with parents who deliver a baby (the software) with countless efforts. Naturally, the product managers, architects and developers spend countless hours developing the application for the customer. Then they show it to us (the testers) and ask: "How is the baby (application)?" And testers often have to tell them that they have an ugly baby (an application with bugs!).

Testers don't want to tell them that they have an ugly baby, but unfortunately it's our job. So the tester has to convey the message to the developers effectively, without hurting them. How can this be done? Well, that is the skill of a good tester!

Here are the tips stated by Dave for handling such a delicate situation:

Be honest and responsive:
Tell developers what your plans are for attacking their application.

Be open and available:
If a developer asks you to have a look at the application he has developed before the release, politely give feedback on it and report any extra effort needed. Don't log bugs for these informal notes.

Let them review your tests:
If you have designed or written test cases from the requirement specifications, show them those test cases. Let them know your work, since you are going to critique the developers' work!

Use the Bug tracker:
Some testers have a habit of reporting each and every issue publicly. This attitude hurts the developers. If you have logged a bug, let the bug tracking system report it to the respective developers and managers. Also, don't rely on the bug tracker alone every time; talk to the developers personally about what you logged and why.

Finally some good personal points:

Don't take it personally:
You are doing the job of a messenger, and the messenger can always be a close target. So build a thick skin!

Be prepared:
A good message to end with: be prepared for everything! The worst may not have happened so far, but it can happen at any moment in your career, so be ready to face it.

Need of Skilled Testers

Some years ago many companies preferred not to have separate test engineers in the project team. I have not seen this in the past 2-3 years of my career, as all companies now have a clear idea of the need for QA and test engineers. The QA and tester roles are now concrete and there is no confusion.

Unfortunately, I still find the perception of testing as an inferior role in some developers' minds. This "anyone can do it" attitude should be removed from those people's minds. Many companies hire "any" skilled personnel to do this job and eventually suffer a loss of money and time. Instead of hiring a bunch of mediocre testers, they should hire some gifted testers who can do their job beyond the developers' limitations.

If managers and management remove this thinking of inferiority from their minds, then they can hire these gifted testers into their organization. Such testers can do complex jobs well, can find complex bugs, and furthermore can add procedures to the way routine jobs are done in order to make them more structured.

Effective Software Testing

In this tutorial you will learn about Effective Software Testing: how do we measure the 'effectiveness' of Software Testing, the steps to effective Software Testing, coverage, and test planning and process.

A 1994 study in the US revealed that only about "9% of software projects were successful".

A large number of projects upon completion do not have all the promised features or
they do not meet all the requirements that were defined when the project was kicked
off.

It is an understatement to say that an increasing number of businesses depend on software for their day-to-day operations. Billions of dollars change hands every day with the help of commercial software. Many lives depend on the reliability of software, for example software running critical medical systems, controlling power plants, flying airplanes and so on.

Whether you are part of a team that is building a bookkeeping application or software that runs a power plant, you cannot afford to have less than reliable software. Unreliable software can severely hurt businesses and endanger lives, depending on the criticality of the application. The simplest application, poorly written, can deteriorate the performance of your environment, such as the servers and the network, thereby causing an unwanted mess.

To ensure software application reliability and project success Software Testing plays
a very crucial role.
Everything can and should be tested –

• Test if all defined requirements are met
• Test the performance of the application
• Test each component
• Test the components integrated with each other
• Test the application end to end
• Test the application in various environments
• Test all the application paths
• Test all the scenarios and then test some more

What is Effective Software Testing?

How do we measure ‘Effectiveness’ of Software Testing?


The effectiveness of Testing can be measured if the goal and purpose of the testing
effort is clearly defined. Some of the typical Testing goals are:

• Testing in each phase of the Development cycle to ensure that the "bugs" (defects) are eliminated at the earliest
• Testing to ensure no “bugs” creep through in the final product
• Testing to ensure the reliability of the software
• Above all testing to ensure that the user expectations are met

The effectiveness of testing can be measured with the degree of success in achieving
the above goals.

Steps to Effective Software Testing:

Several factors influence the effectiveness of the Software Testing effort, which ultimately determines the success of the project.

A) Coverage:

The testing process and the test cases should cover

• All the scenarios that can occur when using the software application
• Each business requirement that was defined for the project
• Specific levels of testing should cover every line of code written for the
application

There are various levels of testing which focus on different aspects of the software
application. The often-quoted V model best explains this:

The various levels of testing in the V model are:

• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing

The goal of each testing level is slightly different thereby ensuring the overall project
reliability.

Each Level of testing should provide adequate test coverage.

• Unit testing should ensure each and every line of code is tested.

• Integration Testing should ensure the components can be integrated and all the interfaces of each component are working correctly.

• System Testing should cover all the "paths"/scenarios possible when using the system.

The system testing is done in an environment that is similar to the production environment, i.e. the environment where the product will be finally deployed.

There are various types of System Testing possible which test the various aspects of
the software application.

B) Test Planning and Process:

To ensure effective Testing, proper Test Planning is important.

An effective Testing Process will comprise the following steps:

• Test Strategy and Planning
• Review Test Strategy to ensure it is aligned with the Project Goals
• Design/Write Test Cases
• Review Test Cases to ensure proper Test Coverage
• Execute Test Cases
• Capture Test Results
• Track Defects
• Capture Relevant Metrics
• Analyze

Having followed the above steps for the various levels of testing, the product is rolled out.

It is not uncommon to see various “bugs”/Defects even after the product is released
to production. An effective Testing Strategy and Process helps to minimize or
eliminate these defects. The extent to which it eliminates these post-production
defects (Design Defects/Coding Defects/etc) is a good measure of the effectiveness
of the Testing Strategy and Process.
As the saying goes - 'the proof of the pudding is in the eating'

Summary:

The success of the project and the reliability of the software application depend a lot
on the effectiveness of the testing effort. This article discusses “What is effective
Software Testing?”

A link to the 1994 study called The Chaos Report

http://www.standishgroup.com/sample_research/chaos_1994_1.php

Unit Testing: Why? What? & How?

In this tutorial you will learn about unit testing, various levels of testing, various
types of testing based upon the intent of testing, How does Unit Testing fit into the
Software Development Life Cycle? Unit Testing Tasks and Steps, What is a Unit Test
Plan? What is a Test Case? and Test Case Sample, Steps to Effective Unit Testing.

There are various levels of testing:

• Unit Testing
• Integration Testing
• System Testing

There are various types of testing based upon the intent of testing such as:

• Acceptance Testing
• Performance Testing
• Load Testing
• Regression Testing

Based on the testing technique used, testing can be classified as:

• Black box Testing
• White box Testing

How does Unit Testing fit into the Software Development Life Cycle?

This is the first and the most important level of testing. As soon as the programmer
develops a unit of code the unit is tested for various scenarios. As the application is
built it is much more economical to find and eliminate the bugs early on. Hence Unit
Testing is the most important of all the testing levels. As the software project
progresses ahead it becomes more and more costly to find and fix the bugs.

In most cases it is the developer’s responsibility to deliver Unit Tested Code.

Unit Testing Tasks and Steps:


Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the code is ready execute the test cases
Step 5: Fix the bugs if any and re test the code
Step 6: Repeat the test cycle until the “unit” is free of all bugs

What is a Unit Test Plan?

This document describes the Test Plan, in other words how the tests will be carried out. This will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary and so on.

What is a Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.
For example the test case may describe a test as follows:
Step 1: Type 10 characters in the Name Field
Step 2: Click on Submit

Test Cases clubbed together form a Test Suite.
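To make this concrete, here is a minimal sketch of how such a test case might look when written as an automated unit test with Python's unittest framework. The submit_name function and its return values are hypothetical, used only to illustrate the shape of a test case and a test suite.

```python
import unittest

# Hypothetical function under test: accepts the text typed into the Name
# field and returns "accepted" or "rejected" when Submit is clicked.
def submit_name(name):
    if 1 <= len(name) <= 25:
        return "accepted"
    return "rejected"

class NameFieldTestCase(unittest.TestCase):
    def test_ten_character_name_is_accepted(self):
        # Step 1: Type 10 characters in the Name field
        name = "a" * 10
        # Step 2: Click on Submit (simulated here by calling the function)
        result = submit_name(name)
        # Expected result: the name is accepted
        self.assertEqual(result, "accepted")

# Test cases clubbed together form a test suite
suite = unittest.TestLoader().loadTestsFromTestCase(NameFieldTestCase)

if __name__ == "__main__":
    unittest.TextTestRunner().run(suite)
```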

Test Case Sample

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:


a) Unit Name and Version Being tested
b) Tested By
c) Date
d) Test Iteration (One or more iterations of unit testing may be performed)

Steps to Effective Unit Testing:

1) Documentation: Early on document all the Test Cases needed to test your code.
A lot of times this task is not given due importance. Document the Test Cases, actual
Results when executing the Test Cases, Response Time of the code for each test
case. There are several important advantages if the test cases and the actual
execution of test cases are well documented.

a. Documenting Test Cases prevents oversight.


b. Documentation clearly indicates the quality of test cases
c. If the code needs to be retested we can be sure that we did not miss anything
d. It provides a level of transparency of what was really tested during unit testing.
This is one of the most important aspects.
e. It helps in knowledge transfer in case of employee attrition
f. Sometimes Unit Test Cases can be used to develop test cases for other levels of
testing

2) What should be tested when Unit Testing: A lot depends on the type of
program or unit that is being created. It could be a screen or a component or a web
service. Broadly the following aspects should be considered:

a. For a UI screen include test cases to verify all the screen elements that need to
appear on the screens
b. For a UI screen include Test cases to verify the spelling/font/size of all the “labels”
or text that appears on the screen
c. Create Test Cases such that every line of code in the unit is tested at least once in
a test cycle
d. Create Test Cases such that every condition in case of “conditional statements” is
tested once
e. Create Test Cases to test the minimum/maximum range of data that can be
entered. For example, what is the maximum "amount" that can be entered, or the maximum length of string that can be entered or passed in as a parameter.


f. Create Test Cases to verify how various errors are handled
g. Create Test Cases to verify if all the validations are being performed
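As an illustration of points (c) to (g) above, the sketch below shows boundary and validation test cases for a hypothetical validate_amount function that is assumed to accept amounts between 1 and 10,000. The function and its limits are invented for the example; in practice the boundaries come from your requirements.

```python
import unittest

# Hypothetical unit under test: validates an "amount" field.
def validate_amount(amount):
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    return 1 <= amount <= 10000

class AmountValidationTests(unittest.TestCase):
    def test_minimum_boundary(self):
        self.assertTrue(validate_amount(1))       # at the minimum
        self.assertFalse(validate_amount(0))      # just below the minimum

    def test_maximum_boundary(self):
        self.assertTrue(validate_amount(10000))   # at the maximum
        self.assertFalse(validate_amount(10001))  # just above the maximum

    def test_error_handling_for_invalid_type(self):
        # Verifies how errors are handled for invalid input (point f)
        with self.assertRaises(TypeError):
            validate_amount("one hundred")

if __name__ == "__main__":
    unittest.main()
```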

3) Automate where Necessary: Time pressures/Pressure to get the job done may
result in developers cutting corners in unit testing. Sometimes it helps to write
scripts, which automate a part of unit testing. This may help ensure that the
necessary tests were done and may result in saving time required to perform the
tests.
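For example, a small script can discover and run all unit tests in a folder so that the same checks are repeated on every build. The sketch below uses Python's standard unittest discovery; the "tests" directory name is an assumption.

```python
import sys
import unittest

# Discover every file matching test_*.py under the (assumed) "tests" folder
# and run the whole suite, returning a non-zero exit code if any test fails.
def run_all_unit_tests():
    suite = unittest.TestLoader().discover(start_dir="tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    sys.exit(0 if run_all_unit_tests() else 1)
```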

Summary:

“Unit Testing” is the first level of testing and the most important one. Detecting and
fixing bugs early on in the Software Lifecycle helps reduce costly fixes later on. An
Effective Unit Testing Process can and should be developed to increase the Software
Reliability and credibility of the developer. The above article explains how Unit
Testing should be done and the important points that should be considered when
doing Unit Testing.

Many new developers take the unit testing tasks lightly and realize the importance of
Unit Testing further down the road if they are still part of the project. This article
serves as a starting point for laying out an effective (Unit) Testing Strategy.

Integration Testing: Why? What? & How?

Introduction:

As we covered in various articles in the Testing series there are various levels of
testing:

Unit Testing, Integration Testing, System Testing

Each level of testing builds on the previous level.

“Unit testing” focuses on testing a unit of the code.


“Integration testing” is the next level of testing. This ‘level of testing’ focuses on
testing the integration of “units of code” or components.

How does Integration Testing fit into the Software Development Life Cycle?

Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.

Once unit tested components are delivered we then integrate them together.
These “integrated” components are tested to weed out errors and bugs caused due
to the integration. This is a very important step in the Software Development Life
Cycle.

It is possible that different programmers developed different components.

A lot of bugs emerge during the integration step.

In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:

Before we begin Integration Testing it is important that all the components have
been successfully unit tested.

Integration Testing Steps:

Integration Testing typically involves the following Steps:


Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated execute the test cases
Step 5: Fix the bugs if any and re test the code
Step 6: Repeat the test cycle until the components have been successfully integrated

What is an ‘Integration Test Plan’?

As you may have read in the other articles in the series, this document typically
describes one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if test fails
- Glossary

How to write an Integration Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control
from one component to the other.

So the Integration Test cases should typically focus on scenarios where one
component is being called from another. Also the overall application functionality
should be tested to make sure the app works when the different components are
brought together.
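A minimal sketch of such an integration test is shown below. The OrderService and InMemoryOrderRepository components are hypothetical; the point is that the test exercises the call from one component into the other, rather than either component in isolation.

```python
import unittest

# Hypothetical component A: a simple repository.
class InMemoryOrderRepository:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, total):
        self._orders[order_id] = total

    def get(self, order_id):
        return self._orders[order_id]

# Hypothetical component B: a service that calls into the repository.
class OrderService:
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, items):
        total = sum(price for _, price in items)
        self._repository.save(order_id, total)
        return total

class OrderIntegrationTest(unittest.TestCase):
    def test_order_total_flows_from_service_to_repository(self):
        repository = InMemoryOrderRepository()
        service = OrderService(repository)
        service.place_order("A-100", [("book", 12.50), ("pen", 2.50)])
        # The integration test checks the data that crossed the interface.
        self.assertEqual(repository.get("A-100"), 15.0)

if __name__ == "__main__":
    unittest.main()
```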

The various Integration Test Cases clubbed together form an Integration Test Suite. Each suite may have a particular focus. In other words, different Test Suites may be created to focus on different areas of the application.

As mentioned before, a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.

Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:


a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (One or more iterations of Integration testing may be performed)

Working towards Effective Integration Testing:

There are various factors that affect Software Integration and hence Integration
Testing:

1) Software Configuration Management: Since Integration Testing focuses on integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of the components are tested. This may sound very basic, but the biggest problem faced in n-tier
development is integrating the right version of components. Integration testing may
run through several iterations and to fix bugs components may undergo changes.
Hence it is important that a good Software Configuration Management (SCM) policy
is in place. We should be able to track the components and their versions. So each
time we integrate the application components we know exactly what versions go into
the build process.

2) Automate the Build Process where Necessary: A lot of errors occur because the wrong versions of components were sent for the build, or there are missing components. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
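The sketch below shows one way such a script might look, using Python to run the build and deployment steps in a fixed order and stop at the first failure. The commands, file names and component versions are purely illustrative assumptions, not a real tool chain.

```python
import subprocess
import sys

# Hypothetical, pinned component versions taken from configuration management.
COMPONENTS = ["orders-1.4.2", "billing-2.0.1", "ui-3.1.0"]

# Hypothetical build and deploy commands; replace with your real tool chain.
STEPS = [
    ["./fetch_components.sh"] + COMPONENTS,
    ["./build_application.sh"],
    ["./deploy_to_test_env.sh"],
]

def run_build():
    for step in STEPS:
        print("Running:", " ".join(step))
        # check=True aborts the script if any step returns a non-zero exit code.
        subprocess.run(step, check=True)

if __name__ == "__main__":
    try:
        run_build()
    except subprocess.CalledProcessError as error:
        print("Build failed at:", error.cmd)
        sys.exit(1)
```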

3) Document: Document the Integration process/build process to help eliminate the errors of omission or oversight. It is possible that the person responsible for
integrating the components forgets to run a required script and the Integration
Testing will not yield correct results.

4) Defect Tracking: Integration Testing will lose its edge if the defects are not
tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It
can help in future integration and deployment processes.

Metrics Used In Testing

In this tutorial you will learn about metrics used in testing, The Product Quality
Measures - 1. Customer satisfaction index, 2. Delivered defect quantities, 3.
Responsiveness (turnaround time) to users, 4. Product volatility, 5. Defect ratios, 6.
Defect removal efficiency, 7. Complexity of delivered product, 8. Test coverage, 9.
Cost of defects, 10. Costs of quality activities, 11. Re-work, 12. Reliability and
Metrics for Evaluating Application System Testing.

The Product Quality Measures:

1. Customer satisfaction index

This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:

• Number of system enhancement requests per year


• Number of maintenance fix requests per year
• User friendliness: call volume to customer service hotline
• User friendliness: training time per new user
• Number of product recalls or fix releases (software vendors)
• Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

They are normalized per function point (or per LOC) at product delivery (first 3
months or first year of operation) or Ongoing (per year of operation) by level of
severity, by category or cause, e.g.: requirements defect, design defect, code defect,
documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

• Turnaround time for defect fixes, by level of severity


• Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

• Ratio of maintenance fixes (to repair the system & bring it into compliance
with specifications), vs. enhancement requests (requests by users to enhance
or change functionality)

5. Defect ratios

• Defects found after product delivery per function point.


• Defects found after product delivery per LOC
• Pre-delivery defects: annual post-delivery defects
• Defects per function point of the system modifications

6. Defect removal efficiency

• Number of post-release defects (found by clients in field operation),


categorized by level of severity
• Ratio of defects found internally prior to release (via inspections and testing),
as a percentage of all defects
• All defects include defects found internally plus externally (by customers) in
the first year after product delivery

7. Complexity of delivered product

• McCabe's cyclomatic complexity counts across the system


• Halstead’s measure
• Card's design complexity measures
• Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

• Breadth of functional coverage


• Percentage of paths, branches or conditions that were actually tested

• Percentage by criticality level: perceived level of risk of paths


• The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects

• Business losses per defect that occurs during operation


• Business interruption costs; costs of work-arounds
• Lost sales and lost goodwill
• Litigation costs resulting from defects
• Annual maintenance cost (per function point)
• Annual operating cost (per function point)
• Measurable damage to your boss's career

10. Costs of quality activities

• Costs of reviews, inspections and preventive measures


• Costs of test planning and preparation
• Costs of test execution, defect tracking, version and change control
• Costs of diagnostics, debugging and fixing
• Costs of tools and tool support
• Costs of test case library maintenance
• Costs of testing & QA education associated with the product
• Costs of monitoring and oversight by the QA organization (if separate from
the development and test organizations)

11. Re-work

• Re-work effort (hours, as a percentage of the original coding hours)


• Re-worked LOC (source lines of code, as a percentage of the total delivered
LOC)
• Re-worked software components (as a percentage of the total delivered
components)

12. Reliability

• Availability (percentage of time a system is available, versus the time the


system is needed to be available)

• Mean time between failure (MTBF).


• Mean time to repair (MTTR)
• Reliability ratio (MTBF / MTTR)
• Number of product recalls or fix releases
• Number of production re-runs as a ratio of production runs
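As a small worked example with invented figures: if a system ran for 1,000 hours, failed 5 times, and repairs took 10 hours in total, the reliability measures above work out as follows. The availability formula MTBF / (MTBF + MTTR) is a common convention assumed here, not something defined in this list.

```python
# Invented operating data, for illustration only.
operating_hours = 1000.0
number_of_failures = 5
total_repair_hours = 10.0

mtbf = operating_hours / number_of_failures      # 200.0 hours between failures
mttr = total_repair_hours / number_of_failures   # 2.0 hours to repair
reliability_ratio = mtbf / mttr                  # 100.0
# Common convention (an assumption, not from the list above):
availability = mtbf / (mtbf + mttr) * 100        # about 99.0 percent

print(mtbf, mttr, reliability_ratio, round(availability, 1))
```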

Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system. (LOC
represents Lines of Code)

Number of tests per unit size = Number of test cases per KLOC/FP (LOC
represents Lines of Code).

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing / (No of defects found during testing + No of acceptance defects found after delivery) * 100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity = Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
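A few of these formulas are shown below as a small worked example in Python, using invented numbers purely to illustrate the arithmetic.

```python
# Illustrative figures only; substitute your project's real counts.
kloc_tested = 40.0           # thousands of lines of code covered by tests
total_kloc = 50.0            # total system size in KLOC
defects_in_testing = 180     # defects found by the test team
defects_after_delivery = 20  # acceptance defects found after delivery
cost_of_testing = 60000.0    # currency units
total_project_cost = 300000.0

test_coverage = kloc_tested / total_kloc                        # 0.8
defects_per_size = defects_in_testing / total_kloc              # 3.6 defects per KLOC
test_cost_percent = cost_of_testing / total_project_cost * 100  # 20.0 %
cost_to_locate_defect = cost_of_testing / defects_in_testing    # about 333.3 per defect
quality_of_testing = defects_in_testing / (
    defects_in_testing + defects_after_delivery) * 100          # 90.0 %

print(test_coverage, defects_per_size, test_cost_percent,
      cost_to_locate_defect, quality_of_testing)
```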

Life Cycle of Testing Process

This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output.

In the whole development process, testing consumes the highest amount of time, but most developers overlook that and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.

The various phases involved in testing, with regard to the software development life
cycle are:

1. Requirements stage
2. Test Plan
3. Test Design
4. Design Reviews
5. Code Reviews
6. Test Cases preparation
7. Test Execution
8. Test Reports
9. Bugs Reporting
10. Reworking on patches
11. Release to production

Requirements Stage

Normally, in many companies, developers themselves take part in the requirements stage. Especially for product-based companies, a tester should also be involved in this stage, since a tester thinks from the user's side whereas a developer can't. A separate panel should be formed for each module, comprising a developer, a tester and a user. Panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use, and this document is called the "Software Requirements Specification".

Test Plan

Without a good plan, no work is a success; successful work always starts with a good plan. The software testing process also requires a good plan. The test plan document is the most important document for bringing in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed. The test plan document must consist of the following information:

• Total number of features to be tested.


• Testing approaches to be followed.
• The testing methodologies
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc

Test Design

Test Design is done based on the requirements of the project. Tests have to be designed based on whether manual or automated testing will be done. For automation testing, the different paths for testing are to be identified first. An end-to-end checklist has to be prepared covering all the features of the project.

The test design is represented pictographically. The test design involves various
stages. These stages can be summarized as follows:

• The different modules of the software are identified first.


• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical step, as it decides the test case preparation. So the test design determines the quality of the testing process.

Test Cases Preparation

Test cases should be prepared based on the following scenarios:

• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
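As a brief illustration of these scenarios, the sketch below outlines one test of each kind for a hypothetical age field that is assumed to accept values from 18 to 60.

```python
import unittest

# Hypothetical unit under test: accepts ages between 18 and 60 inclusive.
def is_valid_age(age):
    return isinstance(age, int) and 18 <= age <= 60

class AgeFieldScenarios(unittest.TestCase):
    def test_positive_scenario(self):
        self.assertTrue(is_valid_age(30))    # typical valid value

    def test_negative_scenario(self):
        self.assertFalse(is_valid_age(-5))   # clearly invalid value

    def test_boundary_conditions(self):
        self.assertTrue(is_valid_age(18))    # lower boundary
        self.assertTrue(is_valid_age(60))    # upper boundary
        self.assertFalse(is_valid_age(17))   # just below
        self.assertFalse(is_valid_age(61))   # just above

    def test_real_world_scenario(self):
        # A user pasting text instead of a number.
        self.assertFalse(is_valid_age("thirty"))

if __name__ == "__main__":
    unittest.main()
```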

Design Reviews

The software design is done in a systematic manner or using the UML language. The tester can review the design and suggest ideas and any modifications needed.

Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on the code. He must be ready with his own unit test cases. Though a developer does the unit testing, a tester must also do it. The developers may overlook some of the minute mistakes in the code, which a tester may find.

Test Execution and Bugs Reporting

Once the unit testing is completed and the code is released to QA, the functional
testing is done. A top-level testing is done at the beginning of the testing to find out the top-level failures. If any top-level failures occur, the bugs should be reported to
the developer immediately to get the required workaround.

The test reports should be documented properly and the bugs have to be reported to
the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to the QA with the modified
changes. Regression testing is executed. Once the QA assures the software, the
software is released to production. Before releasing to production, another round of
top-level testing is done.

The testing process is an iterative process. Once the bugs are fixed, the testing has
to be done repeatedly. Thus the testing process is an unending process.

Technical Terms Used in Testing World

In this tutorial you will learn about technical terms used in the testing world, from Audit and Acceptance Testing to Validation, Verification and Testing.

Audit: An independent examination of a work product or set of work products to


assess compliance with specifications, standards, contractual agreements, or other
criteria.

Acceptance testing: Testing conducted to determine whether or not a system


satisfies its acceptance criteria and to enable the customer to determine whether or
not to accept the system.

Alpha Testing: Acceptance testing performed by the customer in a controlled


environment at the developer's site. The software is used by the customer in a
setting approximating the target environment with the developer observing and
recording errors and usage problems.

Assertion Testing: A dynamic analysis technique which inserts assertions about the
relationship between program variables into the program code. The truth of the
assertions is determined as the program executes.
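For instance, an assertion about the relationship between two program variables can be inserted directly into the code; the example below is a trivial, hypothetical sketch of the idea.

```python
def transfer(balance, amount):
    # Assertion inserted into the code: the relationship between the
    # variables must hold, and its truth is checked as the program executes.
    assert 0 <= amount <= balance, "amount must not exceed the balance"
    return balance - amount

print(transfer(100, 30))   # passes the assertion, prints 70
# transfer(100, 150)       # would raise AssertionError while executing
```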

Boundary Value: (1) A data value that corresponds to a minimum or maximum


input, internal, or output value specified for a system or component. (2) A value
which lies at, or just inside or just outside a specified range of valid input and output
values.

Boundary Value Analysis: A selection technique in which test data are chosen to
lie along "boundaries" of the input domain [or output range] classes, data structures,
procedure parameters, etc. Choices often include maximum, minimum, and trivial
values or parameters.

Branch Coverage: A test coverage criteria which requires that for each decision
point each possible branch be executed at least once.

Bug: A fault in a program which causes the program to perform in an unintended or


unanticipated manner.

Beta Testing: Acceptance testing performed by the customer in a live application of


the software, at one or more end user sites, in an environment not controlled by the
developer.

Boundary Value Testing: A testing technique using input values at, just below, and
just above, the defined limits of an input domain; and with input values causing
outputs to be at, just below, and just above, the defined limits of an output domain.

Branch Testing: Testing technique to satisfy coverage criteria which require that for
each decision point, each possible branch [outcome] be executed at least once.
Contrast with testing, path; testing, statement. See: branch coverage.

Compatibility Testing: The process of determining the ability of two or more


systems to exchange information. In a situation where the developed software
replaces an already working program, an investigation should be conducted to assess
possible compatibility problems between the new software and other programs or
systems.

Cause Effect Graph: A Boolean graph linking causes and effects. The graph is
actually a digital-logic circuit (a combinatorial logic network) using a simpler notation
than standard electronics notation.

Cause Effect Graphing: This is a Test data selection technique. The input and
output domains are partitioned into classes and analysis is performed to determine
which input classes cause which effect. A minimal set of inputs is chosen which will
cover the entire effect set. It is a systematic method of generating test cases
representing combinations of conditions.

Code Inspection: A manual [formal] testing [error detection] technique where the
programmer reads source code, statement by statement, to a group who ask
questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding
standards.

Code Review: A meeting at which software code is presented to project personnel,


managers, users, customers, or other interested parties for comment or approval.

Code Walkthrough: A manual testing [error detection] technique where program


[source code] logic [structure] is traced manually [mentally] by a group with a small
set of test cases, while the state of program variables is manually monitored, to
analyze the programmer's logic and assumptions.

Coverage Analysis: Determining and assessing measures associated with the


invocation of program structural elements to determine the adequacy of a test run.
Coverage analysis is useful when attempting to execute each statement, branch,
path, or iterative structure in a program.

Crash: The sudden and complete failure of a computer system or component.

Criticality: The degree of impact that a requirement, module, error, fault, failure, or
other item has on the development or operation of a system.

Cyclomatic Complexity: The number of independent paths through a program. The


cyclomatic complexity of a program is equivalent to the number of decision
statements plus 1.
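For example, under the simplified "decision statements plus 1" rule quoted above, the small function below contains two decision statements and therefore has a cyclomatic complexity of 3, corresponding to three independent paths through the code.

```python
def classify_temperature(celsius):
    # Decision statement 1
    if celsius < 0:
        return "freezing"
    # Decision statement 2
    if celsius > 30:
        return "hot"
    return "moderate"

# Cyclomatic complexity = 2 decision statements + 1 = 3,
# i.e. three independent paths: freezing, hot, moderate.
```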

Error: A discrepancy between a computed, observed, or measured value or condition


and the true, specified, or theoretically correct value or condition.

Error Guessing: This is a Test data selection technique. The selection criterion is to
pick values that seem likely to cause errors.

Error Seeding: The process of intentionally adding known faults to those already in
a computer program for the purpose of monitoring the rate of detection and removal,
and estimating the number of faults remaining in the program. Contrast with
mutation analysis.

Exception: An event that causes suspension of normal program execution. Types


include addressing exception, data exception, operation exception, overflow
exception, protection exception, and underflow exception.

Exhaustive Testing: Executing the program with all possible combinations of values
for program variables. This type of testing is feasible only for small, simple
programs.

Failure: The inability of a system or component to perform its required functions


within specified performance requirements.

Fault: An incorrect step, process, or data definition in a computer program which


causes the program to perform in an unintended or unanticipated manner.

Functional Testing: Testing that ignores the internal mechanism or structure of a


system or component and focuses on the outputs generated in response to selected
inputs and execution conditions. (2) Testing conducted to evaluate the compliance of
a system or component with specified functional requirements and corresponding
predicted results.

Integration Testing: An orderly progression of testing in which software elements,


hardware elements, or both are combined and tested, to evaluate their interactions,
until the entire system has been integrated.

Interface Testing: Testing conducted to evaluate whether systems or components


pass data and control correctly to one another.

Mutation Testing: A testing methodology in which two or more program mutations


are executed using the same test cases to evaluate the ability of the test cases to
detect differences in the mutations.

Operational Testing: Testing conducted to evaluate a system or component in its


operational environment.

Parallel Testing: Testing a new or an altered data processing system with the same
source data that is used in another system. The other system is considered as the
standard of comparison.

Path Testing: Testing to satisfy coverage criteria that each logical path through the
program be tested. Often paths through the program are grouped into a finite set of
classes. One path from each class is then tested.

Performance Testing: Functional testing conducted to evaluate the compliance of a


system or component with specified performance requirements.

Qualification Testing: Formal testing, usually conducted by the developer for the
consumer, to demonstrate that the software meets its specified requirements.

Quality Assurance: (1) The planned systematic activities necessary to ensure that
a component, module, or system conforms to established technical requirements. (2)
All actions that are taken to ensure that a development organization delivers
products that meet performance requirements and adhere to standards and
procedures. (3) The policy, procedures, and systematic actions established in an
enterprise for the purpose of providing and maintaining some degree of confidence in
data integrity and accuracy throughout the life cycle of the data, which includes
input, update, manipulation, and output. (4) The actions, planned and performed, to
provide confidence that all systems and components that influence the quality of the
product are working as expected individually and collectively.

Quality Control: The operational techniques and procedures used to achieve quality
requirements.

Regression Testing: Rerunning test cases which a program has previously


executed correctly in order to detect errors spawned by changes or corrections made
during software development and maintenance.

Review: A process or meeting during which a work product or set of work products,
is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include code review, design review, formal
qualification review, requirements review, test readiness review.

Risk: A measure of the probability and severity of undesired effects.

Risk Assessment: A comprehensive evaluation of the risk and its associated


impact.

Software Review: An evaluation of software elements to ascertain discrepancies


from planned results and to recommend improvement. This evaluation follows a
formal process. Syn: software audit. See: code audit, code inspection, code review,
code walkthrough, design review, specification analysis, static analysis

Static Analysis: Analysis of a program that is performed without executing the


program. The process of evaluating a system or component based on its form,
structure, content, documentation is also called as Static Analysis.

Statement Testing: Testing to satisfy the criterion that each statement in a


program be executed at least once during program testing.

Storage Testing: This is a determination of whether or not certain processing


conditions use more storage [memory] than estimated.

Stress Testing: Testing conducted to evaluate a system or component at or beyond


the limits of its specified requirements.

Structural Testing: Testing that takes into account the internal mechanism
[structure] of a system or component. Types include branch testing, path testing,
statement testing. (2) Testing to ensure each program statement is made to execute
during testing and that each program statement performs its intended function.

System Testing: The process of testing an integrated hardware and software


system to verify that the system meets its specified requirements. Such testing may
be conducted in both the development environment and the target environment.

Test: An activity in which a system or component is executed under specified


conditions, the results are observed or recorded and an evaluation is made of some
aspect of the system or component.

Testability: The degree to which a system or component facilitates the


establishment of test criteria and the performance of tests to determine whether
those criteria have been met.

Test case: Documentation specifying inputs, predicted results, and a set of


execution conditions for a test item.

Test case Generator: A software tool that accepts as input source code, test
criteria, specifications, or data structure definitions; uses these inputs to generate
test input data; and, sometimes, determines expected results.

Test Design: Documentation specifying the details of the test approach for a
software feature or combination of software features and identifying the associated
tests.

Test Documentation: Documentation describing plans for, or results of, the testing
of a system or component. Types include test case specification, test incident report,
test log, test plan, test procedure, test report.

Test Driver: A software module used to invoke a module under test and, often,
provide test inputs, control and monitor execution, and report test results.

Test Incident Report: A document reporting on any event that occurs during
testing that requires further investigation.

Test Item: A software item which is the object of testing.

Test Log: A chronological record of all relevant details about the execution of a test.

Test Phase: The period of time in the software life cycle in which the components of
a software product are evaluated and integrated, and the software product is
evaluated to determine whether or not requirements have been satisfied.

Test Plan: Documentation specifying the scope, approach, resources, and schedule
of intended testing activities. It identifies test items, the features to be tested, the
testing tasks, responsibilities, required resources, and any risks requiring
contingency planning. See: test design, validation protocol.

Test Procedure: A formal document developed from a test plan that presents
detailed instructions for the setup, operation, and evaluation of the results for each
defined test.

Test Report: A document describing the conduct and results of the testing carried
out for a system or system component.

Test Result Analyzer: A software tool used to test output data reduction,
formatting, and printing.

Testing: (1) The process of operating a system or component under specified


conditions, observing or recording the results, and making an evaluation of some
aspect of the system or component. (2) The process of analyzing a software item to
detect the differences between existing and required conditions, i.e. bugs, and to
evaluate the features of the software items.

Traceability Matrix: A matrix that records the relationship between two or more
products; e.g., a matrix that records the relationship between the requirements and
the design of a given software component. See: traceability, traceability analysis.

Unit Testing: Testing of a module for typographic, syntactic, and logical errors, for
correct implementation of its design, and for satisfaction of its requirements (or)
Testing conducted to verify the implementation of the design for one software
element; e.g., a unit or module; or a collection of software elements.

Usability: The ease with which a user can learn to operate, prepare inputs for, and
interpret outputs of a system or component.

Usability Testing: Tests designed to evaluate the machine/user interface.

Validation: Establishing documented evidence which provides a high degree of


assurance that a specific process will consistently produce a product meeting its
predetermined specifications and quality attributes.

Validation, Verification and Testing: Used as an entity to define a procedure of


review, analysis, and testing throughout the software life cycle to discover errors,
determine functionality, and ensure the production of quality software.

Volume Testing: Testing designed to challenge a system's ability to manage the


maximum amount of data over a period of time. This type of testing also evaluates a
system's ability to handle overload situations in an orderly fashion.

Positive and Negative Testing

The notion of something like "Integration Testing" or "System Testing" can (and should) be defined so that everyone knows what is meant by that activity within
the same organization, but terms like "negative test" and "positive test" are more of
a concept than a strict activity. In both instances you are dealing with an input, an
action, and an output. The action acts upon the input to derive a certain output. So a
test case (and thus a good test) is just one that deals with those three things. Both
test cases can produce errors and, in fact, some say that the success of a test case is
based upon the probability of it finding new errors in an application.

What I want to do here, however, is state clearly one viewpoint of what the
distinction between positive and negative testing is. Then I want to play Devil's
Advocate and try to undermine that viewpoint by presenting an argument that others
have put forth - an alternative viewpoint. The real point of this will be to show that
sometimes trying to adhere too rigidly to conceptual terms like this can lead to a lot
of stagnating action. Read this section as a sort of extended argument that I am
having with myself as I come to grips with these terms.

So let us first state a simple hypothetical definition: positive testing is that testing which attempts to show that a given module of an application does what it is
supposed to do. Negative testing is that testing which attempts to show that the
module does not do anything that it is not supposed to do. So, by that logic, and to
make a concrete example, an application delivering an error when it should is
actually an example of a positive test. A negative test would be the program not
delivering an error when it should or delivering an error when it should not. But this
sounds like it is more based on what the application does during testing rather than
how the tester is actually going about testing it. Well, sort of. The idea here is that
neither test necessarily has to force an error condition, per se, at least by strict
definition. But both concepts (negative and positive) are looking for different types of
error conditions. Consider that one part of negative testing is often considered to be
boundary analysis. In this case, you are not so much "forcing an error" because, of course, the application should handle boundary problems. But what you are doing is
seeing if the boundary problem is not, in fact, handled. So if the program is
supposed to give an error when the person types in "101" on a field that should be
between "1" and "100", then that is valid if an error shows up. If, however, the
application does not give an error when the user typed "101" then you have a
problem. So really negative testing and positive testing are the same kinds of things
when you really boil it right down.
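To pin the boundary example down, here is a minimal sketch in Python. The check_value function is a hypothetical stand-in for the field that should only accept values between 1 and 100; by the definitions above, the application raising an error for "101" is a positive result, while silently accepting "101" would be the defect the tester is probing for.

```python
# Hypothetical field validator: only values from 1 to 100 are allowed.
def check_value(text):
    value = int(text)
    if value < 1 or value > 100:
        raise ValueError("value must be between 1 and 100")
    return value

# Expect an error for "101": getting one is a positive result.
try:
    check_value("101")
    print("No error shown for 101 - a negative result (a defect)")
except ValueError:
    print("Error shown for 101 - a positive result")

# No error expected for "100": accepting it is also a positive result.
print("100 accepted:", check_value("100"))
```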

Now, some make a distinguishing remark from what I said. I said the following:

Positive testing is that testing which attempts to show that a given module of
an application does what it is supposed to do.
Negative testing is that testing which attempts to show that the module does
not do anything that it is not supposed to do.
Playing the Devil's Advocate, others would change this around and say the following
is a better distinction:

Positive testing is that testing which attempts to show that a given module of an
application does not do what it is supposed to do.
Negative testing is that testing which attempts to show that the module does
something that it is not supposed to do.
Let us look at this slightly shifted point of view. By this logic, we would say
that most syntax/input validation tests are positive tests. Even if you give an invalid
input, you are expecting a positive result (e.g., an error message) in the hope of
finding a situation where the module either gives the wrong error message or
actually allows the invalid input. A negative test is, by this logic, more trying to get
the module to do something differently than it was designed to do. For example, if
you are testing a state transition machine and the state transition sequence is: State
1 -> State 2 -> State 3 -> State 4, then trying to get the module to go from State 2
to State 4, skipping State 3, is a negative test. So, negative testing, in this case, is
about thinking of how to disrupt the module and, by extension, positive testing is
examining how well/badly the module does its task.

Now, in response to this, I would agree that most can see looking at it this
way from what the tester hopes to find. Testing pundits often tell testers to look for error because if you look for success, you will often find success - even when there is
error. By proxy, if you do not find an error and you have reliable test cases (that
latter point is crucial), then a positive test case will show that the application did not,
in fact, manifest that error. However, showing an error when it should have done so
is an example of a "positive test" by the strict definition of that term. So in other
words:

Positive Testing = (Not showing error when not supposed to) + (Showing error
when supposed to)
So if either of the situations in parentheses happens you have a positive test in
terms of its result - not what the test was hoping to find. The application did what it
was supposed to do. By that logic:

Negative Testing = (Showing error when not supposed to) + (Not showing error
when supposed to)
(Usually these situations crop up during boundary testing or cause-effect testing.)
Here if either of the situations in parentheses happens you have a negative test in
terms of its result - again, not what the test was hoping to find. The application did
what it was not supposed to do.

However, in both cases, these were good results because they showed you
what the application was doing and you were able to determine if it was working
correctly or not. So, by my original definitions, the testing is all about errors and
finding them. It is just how you are looking for those errors that make the
distinction. (Granted, how you are looking will often dictate what you are hoping to
find but since that is the case, it hardly makes sense to make a grand distinction
between them.) Now, regarding the point I made above, as a Devil's Advocate: "A
negative test is more trying to get the module to do something differently than it
was designed to do." We have to realize, I think, that what we call "negative testing"
is often about exercising boundary conditions - and those boundaries exist within the
context of design. Granted, that can be trying to get a value in to a field that it
should not accept. However, a good application should have, during the
requirements stage, had provisions for invalid input. Thus really what you are testing
here is (a) whether the provisions for invalid input exist and (b) whether they are
working correctly. And, again, that is why this distinction, for me (between positive and negative), is somewhat banal.

Your negative test can turn into a positive test just by shifting the emphasis
of what you are looking for. To get the application to do something it is not designed
to do could be looked at as accepting invalid input. However, if you find that the
application does accept invalid input and does not, in fact, give a warning, I would
agree that is a negative test if it was specified in requirements that the application
should respond to invalid input. In this case the application did not, but it was not
also specified that it should. So, here, by strict requirements did the application do
what it was supposed to do? Technically, yes. If requirements did not specify
differently, design was not put in place to handle the issue. Thus you are not testing
something outside the scope of design. Rather, you are testing something that was
not designed in the first place.

So, going back to one of the previous points, one thing we can probably all
agree on: it entirely depends on how you view a test. But are we saying the result of
the test determines whether it was a positive or negative test? If so, many would
disagree with that, indicating that it is the thinking behind the test that should be
positive or negative. In actuality, most experienced testers do not think in terms of
positive or negative, they think in terms of "what can I do to establish the level of
risk?" However, to this point, I would argue that if that is truly how the tester thinks
of things then all concepts of positive/negative go right out of the window (as I think
they mostly should anyway). Obviously you could classify the test design in terms of
negative or positive, but to some extent that is irrelevant. However, without getting
into that, I am not sure we are saying that the result of the test determines positivity
or negativity. What I said earlier, relative to my example, was that "in both cases,
these were good results because they showed you what the application was doing
and you were able to determine if it was working correctly or not." If the application
was behaving correctly or incorrectly, you still determined what the application was
actually doing and, as such, those are good results. Thus the result tells you about
the application and that is good (without recourse to terms like positive and
negative). If the result tells you nothing about how the application is functioning that
is, obviously, bad (and, again, this is without recourse to positive or negative).

We can apply the term "effective" to these types of test cases and we can say that all test cases, positive or negative, should be effective. But what about the idea
of relying on the thinking behind the test? This kind of concept is just a little too
vague for me because people's thinking can be more or less different, even on this
issue, which can often depend on what people have been taught regarding these
concepts. As I showed, you can transform a positive test mentality into a negative
test mentality just by thinking about the results of the test differently. And if
negative testing is just about "disrupting a module" (the Devil's Advocate position),
even a positive test can do that if there is a fault. However I am being a little flip
because with the notion of the thinking behind the test, obviously someone here
would be talking about intent. The intent is to disrupt the module so as to cause a
fault, and that would constitute a negative test (by the Devil's Advocate
position) while a positive test would not be trying to disrupt the module - even
though disruption might occur (again, by the Devil's Advocate position). The key
differentiator is the intent. I could sort of buy that but, then again, boundary testing
is an attempt to disrupt modules because you are seeing if the system can handle
the boundary violation. This can also happen with results. As I said: "Your negative
test can turn into a positive test just by shifting the emphasis of what you are
looking for." That sort of speaks to the intention of what you are hoping to find but
also how you view the problem. If the disruption you tried to cause in the module is,
in fact, handled by the code then you will get a positive test result - an error
message of some sort.

Now I want to keep on this point because, again, some people state that
negative testing is about exercising boundary conditions. Some were taught that this
is not negative testing; rather that this is testing invalid inputs, which are positive
tests - so it depends on how you were taught. And figure that a boundary condition, if
not handled by the code logic, will potentially severely disrupt the module - which is
the point of negative testing according to some views of it. However, that is not the
intent here according to some. And yet while that was not the intent, that might be
the result. That is why the distinction, for me, blurs. But here is where the crux of
the point is for me: you can generally forget all about the intent of test case design for
the moment and look at the distinction of what the result is in terms of a "positive
result" (the application showed me an error when it should have) and a "negative
result" (the application did not show me an error when it should have). The latter is
definitely a more negative connotation than the former, regardless of the intent of
the tester during design of the test case and that is important to realize because
sometimes our intentions for tests are changed by the reality of what exists and
what happens as a result of running the tests. So, in the case of intent for the
situation of the application not showing an error when it was supposed to, this is
simply a matter of writing "negative test cases" (if we stick with the term for a
moment) that will generate conditions that should, in turn, generate error messages.

But the point is that the intent of the test case is to see if the application does
not, in fact, generate that error message. In other words, you are looking for a
negative result. But, then again, we can say: "Okay, now I will check that the
application does generate the error message that it should." Well, in that case, we
are really just running the negative test case! Either way the result is that the error
either will or will not show up and thus the result is, at least to some extent,
determining the nature of the test case (in terms of negative or positive
connotation). If the error does not show up, the invalid input might break the
module. So is the breakdown this:

P: Not showing error when not supposed to
N: Not showing error when supposed to
P: Showing error when supposed to
N: Showing error when not supposed to
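
To make that breakdown concrete, here is a minimal sketch in Python. It is purely
illustrative: the validate_age() function and the classify() helper are hypothetical
(not something from this document); validate_age() is assumed to reject ages outside
18-99, and classify() simply maps an observed result onto the four categories above.

def validate_age(age):
    """Hypothetical function under test: ages outside 18-99 are invalid."""
    if not (18 <= age <= 99):
        return "ERROR: age out of range"
    return "OK"

def classify(input_value, error_expected):
    """Map an observed result onto the four P/N outcome categories above."""
    error_shown = validate_age(input_value).startswith("ERROR")
    if error_shown and error_expected:
        return "P: showing error when supposed to"
    if error_shown and not error_expected:
        return "N: showing error when not supposed to"
    if not error_shown and error_expected:
        return "N: not showing error when supposed to"
    return "P: not showing error when not supposed to"

print(classify(150, error_expected=True))   # invalid input, error expected
print(classify(42, error_expected=False))   # valid input, no error expected

Notice that the same test input lands in a "P" or "N" bucket only after the result is
known, which is exactly the point being argued here.
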
I think the one thing we have to consider is the viewpoint: it hinges on the idea of
"negative testing" being looked at as forcing the module to do something it was not
designed to do. However, if the module was never designed to do the thing you are
trying, then your testing is of an interesting sort because, after all, you know nothing
exists to handle it. So the real question should not be: "What happens when I do
this?" but rather "Why have we not designed this to handle this situation?" Let us
say that something is designed to handle the "module disruption" you are proposing
to test. In that case, you are actually positively testing the code that handles that
situation. To a strict degree, forcing a module to do something it was not designed to
do suggests that this is something your average user can do. In other words, your
average user could potentially use the application in such a fashion that the negative
test case you are putting forth could be emulated by the user. However, if that is the
case, design should be in place to mitigate that problem. And, again, you are then
positively testing.

Now, one can argue, "Well, it is possible that the user can try something that there
simply is no way to design around." Okay. But then I ask: "Like what?" If there is no
way you can design around it or even design something to watch for the event, or
have the system account for it, how do you write a valid test case for that? I mean,
you can write a test case that breaks the application by disrupting the module but --
you already knew that was going to happen. However, this is not as cut and dried as
that, as I am sure anyone reading this could point out. After all, in some cases maybe you are
not sure that what you are writing as a test case will be disruptive. Ah, but that is
the rub. We just defined "negative testing" as trying to disrupt the module. Whether
we succeed or not is a different issue (and speaks to the result), but that was the
intent. We are trying to do something that is outside the bounds of design and thus
it is not so much a matter of testing for disruption as it is testing for the effects of
that disruption. If the effects could be mitigated, that must be some sort of design
that is mitigating them and then you are positively testing that mitigating influence.

As an example, a good test case for a word processor might be: "Turn off the
computer to simulate a power failure when an unsaved document is present in the
application." Now, the idea here is that you might have some document saving
feature that automatically kicks in when the application suddenly terminates, say via
a General Protection Fault (GPF). However, strictly speaking, powering down the
computer is different than a GPF. So here you are testing to see what happens if the
application is shut down via a power-off of the PC, which, let us say, the application
was not strictly designed to really handle. So my intent is to disrupt the module.
However, in this case, since I can state the negative condition, I can state a possible
design that could account for it. After all: we already know that the document will
not be saved because nothing was designed to account for that. But the crucial point
is that if nothing was designed into the system to account for the power-off of the
PC, then what are you really testing? You are testing that the application does what
the application does when a power-off occurs. But if nothing is designed to happen
one way or the other, then testing for disruption really does you no good. After all,
you know it is going to be disrupted. That is not in question. What is (or should be)
in question is how you can handle that disruption and then test how that handling
works. So let us take those alternate (Devil's Advocate) definitions:

Positive testing is that testing which attempts to show that a given module
of an application does not do what it is supposed to do.
In the case of the power-down test case, we are not positive testing because we did
not test that the application did not do what it was supposed to do. The application
was not "supposed" to do anything because nothing was designed to handle the
power-down.

Negative testing is that testing which attempts to show that the module
does something that it is not supposed to do.
In the case of the power-down test case, we are also not negative testing by this
definition because the application, in not saving the document or doing anything
(since it was not designed to do anything in the first place), is not doing something
that it is not supposed to do. Again, the application is not "supposed" to do anything
since it was not designed to do anything in this situation.

Now consider my quasi-definition/equation for positive testing that I gave earlier:

Positive Testing = (Not showing error when not supposed to) + (Showing error when
supposed to)
I would have to loosen my language a little but, basically, the application was not
supposed to show an error and, in fact, did not do so in this case. But what if the
application was, in fact, supposed to handle that situation of a power-down? Let us
say the developers hooked into the API so that if a shut-down event was fired off,
the application automatically issues an error/warning and then saves the document
in a recovery mode format. Now let us say I test that and find that the application
did not, in fact, save the file. Consider again my quasi-definition/equation for
negative testing:
Negative Testing = (Showing error when not supposed to) + (Not showing error
when supposed to)
In this case I have done negative testing because the application was supposed to
issue an error/warning but did not. However, notice that the test case is the same
exact test case. The intent of my testing was simply to test this aspect of the
application. The result of the test relative to the stated design is what determines if
the test was negative or positive by my definitions. Now, because I want to be
challenged on this stuff, you could also say: "Yes, but forget the document in the
word processor. What if the application gets corrupted because of the power-off?"
Let us say that the corruption is just part of the Windows environment and there is
nothing that can be done about it. Is this negative testing? By the Devil's Advocate
definition, strictly it is not, because remember by that definition: "Negative testing is
that testing which attempts to show that the module does something that it is not
supposed to do." But, in this case, the module did not do something (become
corrupted) that it was not supposed to do. This simply happened as a by-product of a
Windows event that cannot be handled. But we did, after all, try to disrupt the
module, right? So is it a negative test or not by the definition of disruption?
Incidentally, by my definition, it is not a negative test either. However, what is
common in all of what I have said is that the power-down test case is an effective
test case and this is the case regardless of whether you choose to connote it with a
"positive" or "negative" qualifier. Since that can be the case, then, for me, the use of
the qualifier is irrelevant.
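
To make the power-down example a bit more tangible, here is a minimal, purely
illustrative sketch in Python. The WordProcessor class and its on_shutdown() hook are
hypothetical stand-ins for a real application that is designed to write unsaved work
to a recovery file when a shutdown event fires; the test simply checks the stated
design against what actually happens, whichever label you attach to it.

import os
import tempfile

class WordProcessor:
    """Hypothetical application under test."""
    def __init__(self, recovery_dir):
        self.recovery_dir = recovery_dir
        self.unsaved_text = ""

    def type_text(self, text):
        self.unsaved_text += text

    def on_shutdown(self):
        # Designed behaviour: dump unsaved work to a recovery file.
        path = os.path.join(self.recovery_dir, "recovery.txt")
        with open(path, "w") as f:
            f.write(self.unsaved_text)
        return path

def test_recovery_file_written_on_shutdown():
    recovery_dir = tempfile.mkdtemp()
    app = WordProcessor(recovery_dir)
    app.type_text("unsaved changes")
    app.on_shutdown()            # simulate the shutdown event firing
    recovery_file = os.path.join(recovery_dir, "recovery.txt")
    assert os.path.exists(recovery_file), "no recovery file was written"
    with open(recovery_file) as f:
        assert f.read() == "unsaved changes"

test_recovery_file_written_on_shutdown()
print("recovery test passed")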

But now let us consider another viewpoint from the Devil's Advocate and one
that I think is pretty good. Consider this example: An application takes mouse clicks
as input. The requirement is for one mouse click to be processed at a time, the user
hitting multiple mouse clicks will cause the application to discard anything but the
first. Any tester will do the obvious and design a test to hit multiple mouse clicks.
Now the application is designed to discard anything but the first, so the test could be
classified (by my definition) a negative one as the application is designed not to
process multiple mouse clicks. The negative test is to try to force the application to
process more than the first. BUT, I hear you say, this is an input validation test that
tests that the application does discard multiple mouse clicks, therefore it is a positive
test (again, by my definition), and I would then agree, it is a positive test. However,
the tester might also design a test that overflows the input buffer with mouse clicks -
is that a negative test? Note, this situation is not covered explicitly in the
requirements - and that is crucial to what I would call negative testing, that very
often it is the tester's "what if" analysis that designs negative tests - so, yes, it is a
negative test as you are forcing the application into a situation it may not have been
designed and/or coded for - you may not know whether it had or not. The actual
result of the test may be that the application stops accepting any more clicks on its
input buffer and causes an error message or it may crash.
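
As a rough sketch of the mouse-click example, the following Python code assumes a
hypothetical ClickHandler with a fixed-size input buffer (none of these names come
from a real requirement). The first test exercises the stated requirement - discard
everything but the first click - while the second is the "what if" test that floods
the buffer, a situation the requirement does not explicitly cover.

class ClickHandler:
    """Hypothetical component under test."""
    BUFFER_SIZE = 100

    def __init__(self):
        self.buffer = []
        self.processed = []

    def click(self):
        if len(self.buffer) >= self.BUFFER_SIZE:
            raise OverflowError("input buffer full")
        self.buffer.append("click")

    def process(self):
        if self.buffer:
            self.processed.append(self.buffer[0])  # only the first click counts
            self.buffer.clear()

def test_discards_all_but_first_click():
    handler = ClickHandler()
    for _ in range(5):
        handler.click()
    handler.process()
    assert len(handler.processed) == 1  # requirement: process one click only

def test_buffer_flooded_with_clicks():
    handler = ClickHandler()
    try:
        for _ in range(10_000):   # far beyond anything the requirement covers
            handler.click()
    except OverflowError:
        pass                      # an error is one acceptable outcome
    handler.process()
    assert len(handler.processed) <= 1  # it must still not process extra clicks

test_discards_all_but_first_click()
test_buffer_flooded_with_clicks()
print("mouse-click tests passed")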

Now, having said all this, it makes me realize how my point starts to coalesce
with the Devil's Advocate. One way that might happen is via the use of the term
"error" that has gotten tossed around a lot. My language seemed too restrictive in
the sense that when I used the word "error" (as in "showing error when not
supposed to") I did not make it clear enough that I was not necessarily talking about
an error screen of some sort, but rather an error condition or a failure. With this, my
negative testing definition really starts to coalesce with the Devil's Advocate's
definition ("does something that it is not supposed to do"). I had said: "Negative
Testing = (Showing error when not supposed to) + (Not showing error when
supposed to)" and broadening my language more, really what I am saying is that the
application is showing (doing) something it is not supposed to (which matches
the Devil's Advocate thought), but I was also saying that the application is not
showing (doing) something that it was supposed to. And to the Devil's Advocate that
latter is positive testing. Let me restate the two viewpoints somewhat:

Positive Testing (Jeff):
- Not doing something it was not supposed to do.
- Doing something it was supposed to do.

Positive Testing (Devil's Advocate):
- Not doing what it is supposed to do.

Negative Testing (Jeff):
- Doing something it was not supposed to do.
- Not doing something it was supposed to do.

Negative Testing (Devil's Advocate):
- Doing something that it is not supposed to do.
I think I was essentially saying the same thing in terms of negative testing as my
hypothetical opponent, just not in terms of positive testing. If you notice, both of our
"negative testings" really contain the same point. On the other hand, relative to the

169
TESTING CONCEPTS

common Devil's Advocate position, I am having a hard time seeing the major
distinction between positive and negative. The Devil's Advocate's original conception:

"Positive testing is that testing which attempts to show that a given module of an
application does NOT do what it is supposed to do. Negative testing is that testing
which attempts to show that the module does something that it is not supposed to
do."

To me, "doing something you are not supposed to" (the Devil's Advocate
negative test) and "not doing something you are supposed to" (the Devil's Advocate
positive test) are really two sides of the same coin or maybe just two ways of saying
the same thing. So let us say that our requirement is "do not process multiple mouse
clicks". In that case, "not doing something you are supposed to" (Devil's Advocate
positive test) means, in this case, "processing multiple mouse clicks". In other
words, the application should not process multiple mouse clicks. If it does, it is doing
something it is not supposed to. Likewise, "doing something you are not supposed to
do" (Devil's Advocate negative test) means, in this case, "processing multiple mouse
clicks". In other words, the application should not process multiple mouse clicks.
Either way, it is saying the same thing. So what we are testing for is "application not
processing multiple mouse clicks". It would seem that if the application does process
multiple mouse clicks it is both not doing what it is supposed to do (not processing
them) and doing something it is not supposed to do (processing them). The same
statement, just made different ways. Now, let me see if that works with my
definitions.
Again, the lemma is "do not process multiple mouse clicks". If the application
does this then it falls under lemma 1 of my negative test ("Doing something it was
not supposed to do.") If the application does not do this, it falls under lemma 1 of
my positive test ("not doing something it was not supposed to do"). Even with the
mouse click example we have two aspects:
- Application designed not to process multiple mouse clicks
- Application designed to process only one mouse click
Saying the same thing, and yet a subtle shift in emphasis if you want to go by the
positive and negative distinctions. The difference, however, is also whether you are
dealing with active design or passive design. In other words, does the application
actively make sure that only one mouse click is handled (by closing the buffer) or
does it simply only process one click, but allow the buffer to fill up anyway. I like the
idea of tying this whole thing in with "mitigating design factors". I think that we can
encapsulate "intent" and "result" (both of which are important to test casing) by
looking more at the efficient and effective demarcations. We have to consider the
result; that is part of how you do test case effectiveness metrics as well as proactive
defect detection metrics. If a test case is a tautology test then it is not really efficient
or effective - but that is based solely on the result, not the intent or anything else.

Conclusion:
BVT is essentially a set of regression test cases that are executed for each new
build; it is also called a smoke test. A build is not assigned to the test team unless
and until the BVT passes. The BVT can be run by a developer or a tester, its result is
communicated throughout the team, and immediate action is taken to fix the bug if it
fails. The BVT process is typically automated by writing scripts for the test cases,
and only critical test cases are included in it. These test cases should ensure
application test coverage. BVT is very effective for daily as well as long-term builds:
it saves significant time, cost and resources, and spares the test team the frustration
of working with an incomplete build.
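
As an illustration of the kind of script mentioned above, a BVT can be as simple as
running a fixed list of critical smoke checks and failing the build if any of them
fails. The sketch below is a minimal, hypothetical Python example; the three checks
are placeholders for whatever is critical in your own application.

import sys

def check_application_starts():
    return True   # e.g. launch the executable and verify the process is alive

def check_login_works():
    return True   # e.g. log in with a known test account

def check_home_page_loads():
    return True   # e.g. request the home page and verify the HTTP status

CRITICAL_CHECKS = [check_application_starts, check_login_works, check_home_page_loads]

def run_bvt():
    failures = []
    for check in CRITICAL_CHECKS:
        try:
            if not check():
                failures.append(check.__name__)
        except Exception as exc:          # a crash also fails the BVT
            failures.append(f"{check.__name__} ({exc})")
    return failures

if __name__ == "__main__":
    failed = run_bvt()
    if failed:
        print("BVT FAILED:", ", ".join(failed))
        sys.exit(1)                        # build is not handed to the test team
    print("BVT PASSED - build can be released to the test team")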

Definitions
A
abstract test case: See high level test case.
acceptance: See acceptance testing.
acceptance criteria: The exit criteria that a component or system must satisfy in
order to be accepted by a user, customer, or other authorized entity. [IEEE 610]
acceptance testing: Formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system. [After IEEE 610]
accessibility testing: Testing to determine the ease by which users with disabilities
can use a component or system. [Gerrard]
accuracy: The capability of the software product to provide the right or agreed
results or effects with the needed degree of precision. [ISO 9126] See also
functionality testing.
actual outcome: See actual result.
actual result: The behavior produced/observed when a component or system is
tested.
ad hoc review: See informal review.
ad hoc testing: Testing carried out informally; no formal test preparation takes
place, no recognized test design technique is used, there are no expectations for
results and arbitrariness guides the test execution activity.
adaptability: The capability of the software product to be adapted for different
specified environments without applying actions or means other than those provided
for this purpose for the software considered. [ISO 9126] See also portability.
agile testing: Testing practice for a project using agile methodologies, such as
extreme programming (XP), treating development as the customer of testing and
emphasizing the test-first design paradigm. See also test driven development.
algorithm test [TMap]: See branch testing.
alpha testing: Simulated or actual operational testing by potential users/customers
or an independent test team at the developers’ site, but outside the development
organization.
Alpha testing is often employed for off-the-shelf software as a form of internal
acceptance testing.
analyzability: The capability of the software product to be diagnosed for
deficiencies or causes of failures in the software, or for the parts to be modified to be
identified.
analyzer: See static analyzer.
anomaly: Any condition that deviates from expectation based on requirements
specifications, design documents, user documents, standards, etc. or from
someone’s perception or experience. Anomalies may be found during, but not limited
to, reviewing, testing, analysis, compilation, or use of software products or
applicable documentation. [IEEE 1044] See also defect, deviation, error, fault,
failure, incident, problem.
arc testing: See branch testing.
attractiveness: The capability of the software product to be attractive to the user.
[ISO 9126] See also usability.
audit: An independent evaluation of software products or processes to ascertain
compliance to standards, guidelines, specifications, and/or procedures based on
objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
audit trail: A path by which the original input to a process (e.g. data) can be traced
back through the process, taking the process output as a starting point. This
facilitates defect analysis and allows a process audit to be carried out. [After TMap]
automated testware: Testware used in automated testing, such as tool scripts.
availability: The degree to which a component or system is operational and
accessible when required for use. Often expressed as a percentage. [IEEE 610]
B
back-to-back testing: Testing in which two or more variants of a component or
system are executed with the same inputs, the outputs compared, and analyzed in
cases of discrepancies. [IEEE 610]
baseline: A specification or software product that has been formally reviewed or
agreed upon, that thereafter serves as the basis for further development, and that
can be changed only through a formal change control process. [After IEEE 610]
basic block: A sequence of one or more consecutive executable statements
containing no branches.
basis test set: A set of test cases derived from the internal structure of a
component or specification to ensure that 100% of a specified coverage criterion will
be achieved.
bebugging: See error seeding. [Abbott]
behavior: The response of a component or system to a set of input values and
preconditions.
benchmark test: (1) A standard against which measurements or comparisons can
be made.
(2) A test that is used to compare components or systems to each other or to a
standard as in (1). [After IEEE 610]
bespoke software: Software developed specifically for a set of users or customers.
The opposite is off-the-shelf software.
best practice: A superior method or innovative practice that contributes to the
improved performance of an organization under given context, usually recognized as
‘best’ by other peer organizations.
beta testing: Operational testing by potential and/or existing users/customers at an
external site not otherwise involved with the developers, to determine whether or
not a component or system satisfies the user/customer needs and fits within the
business processes. Beta testing is often employed as a form of external acceptance
testing for off-the-shelf software in order to acquire feedback from the market.
big-bang testing: A type of integration testing in which software elements,
hardware elements, or both are combined all at once into a component or an overall
system, rather than in stages. [After IEEE 610] See also integration testing.
black-box technique: See black box test design technique.
black-box testing: Testing, either functional or non-functional, without reference to
the internal structure of the component or system.
black-box test design technique: Procedure to derive and/or select test cases
based on an analysis of the specification, either functional or non-functional, of a
component or system without reference to its internal structure.
blocked test case: A test case that cannot be executed because the preconditions
for its execution are not fulfilled.
bottom-up testing: An incremental approach to integration testing where the
lowest level components are tested first, and then used to facilitate the testing of
higher level components. This process is repeated until the component at the top of
the hierarchy is tested. See also integration testing.
boundary value: An input value or output value which is on the edge of an
equivalence partition or at the smallest incremental distance on either side of an
edge, for example the minimum or maximum value of a range.
boundary value analysis: A black box test design technique in which test cases are
designed based on boundary values.
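
As a brief illustration (not part of the formal definition), suppose a field accepts
ages from 18 to 99. Boundary value analysis picks test values at each edge of the
valid range and one step outside it, as in this hypothetical Python sketch:

# Illustrative sketch only: boundary value analysis for a hypothetical age field
# that accepts values from 18 to 99, using a made-up is_valid_age() function.

def is_valid_age(age):
    return 18 <= age <= 99

# Values at the edges of the valid partition and one step outside them.
boundary_cases = {17: False, 18: True, 19: True, 98: True, 99: True, 100: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age} handled incorrectly"
print("all boundary value checks passed")
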
boundary value coverage: The percentage of boundary values that have been
exercised by a test suite.
boundary value testing: See boundary value analysis.
branch: A basic block that can be selected for execution based on a program
construct in which one of two or more alternative program paths are available, e.g.
case, jump, go to, if-then-else.
branch condition: See condition.
branch condition combination coverage: See multiple condition coverage.
branch condition combination testing: See multiple condition testing.
branch condition coverage: See condition coverage.
branch coverage: The percentage of branches that have been exercised by a test
suite. 100% branch coverage implies both 100% decision coverage and 100%
statement coverage.
branch testing: A white box test design technique in which test cases are designed
to execute branches.
bug: See defect.
bug report: See defect report.
business process-based testing: An approach to testing in which test cases are
designed based on descriptions and/or knowledge of business processes.
C
Capability Maturity Model (CMM): A five level staged framework that describes
the key elements of an effective software process. The Capability Maturity Model
covers best practices for planning, engineering and managing software development
and maintenance. [CMM]
Capability Maturity Model Integration (CMMI): A framework that describes the
key elements of an effective product development and maintenance process. The
Capability Maturity Model Integration covers best-practices for planning, engineering
and managing product development and maintenance. CMMI is the designated
successor of the CMM. [CMMI]
capture/playback tool: A type of test execution tool where inputs are recorded
during manual testing in order to generate automated test scripts that can be
executed later (i.e. replayed). These tools are often used to support automated
regression testing.
capture/replay tool: See capture/playback tool.
CASE: Acronym for Computer Aided Software Engineering.
CAST: Acronym for Computer Aided Software Testing. See also test automation.
cause-effect graph: A graphical representation of inputs and/or stimuli (causes)
with their associated outputs (effects), which can be used to design test cases.
cause-effect graphing: A black box test design technique in which test cases are
designed from cause-effect graphs. [BS 7925/2]
cause-effect analysis: See cause-effect graphing.
cause-effect decision table: See decision table.
certification: The process of confirming that a component, system or person
complies with its specified requirements, e.g. by passing an exam.
changeability: The capability of the software product to enable specified
modifications to be implemented. [ISO 9126] See also maintainability.
change control: See configuration control.
change control board: See configuration control board.
checker: See reviewer.
Chow's coverage metrics: See N-switch coverage. [Chow]
classification tree method: A black box test design technique in which test cases,
described by means of a classification tree, are designed to execute combinations of
representatives of input and/or output domains. [Grochtmann]
code: Computer instructions and data definitions expressed in a programming
language or in a form output by an assembler, compiler or other translator. [IEEE
610]
code analyzer: See static code analyzer.
code coverage: An analysis method that determines which parts of the software
have been executed (covered) by the test suite and which parts have not been
executed, e.g. statement coverage, decision coverage or condition coverage.
code-based testing: See white box testing.
co-existence: The capability of the software product to co-exist with other
independent software in a common environment sharing common resources. [ISO
9126] See also portability.
commercial off-the-shelf software: See off-the-shelf software.
comparator: See test comparator.
compatibility testing: See interoperability testing.
compiler: A software tool that translates programs expressed in a high order
language into their machine language equivalents. [IEEE 610]
complete testing: See exhaustive testing.
completion criteria: See exit criteria.
complexity: The degree to which a component or system has a design and/or
internal structure that is difficult to understand, maintain and verify. See also
cyclomatic complexity.
compliance: The capability of the software product to adhere to standards,
conventions or regulations in laws and similar prescriptions. [ISO 9126]
compliance testing: The process of testing to determine the compliance of the
component or system.
component: A minimal software item that can be tested in isolation.
component integration testing: Testing performed to expose defects in the
interfaces and interaction between integrated components.
component specification: A description of a component’s function in terms of its
output values for specified input values under specified conditions, and required non-
functional behavior (e.g. resource-utilization).
component testing: The testing of individual software components. [After IEEE
610]
compound condition: Two or more single conditions joined by means of a logical
operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.
concrete test case: See low level test case.
concurrency testing: Testing to determine how the occurrence of two or more
activities within the same interval of time, achieved either by interleaving the
activities or by simultaneous execution, is handled by the component or system.
[After IEEE 610]
condition: A logical expression that can be evaluated as True or False, e.g. A>B.
See also test condition.
condition combination coverage: See multiple condition coverage.
condition combination testing: See multiple condition testing.
condition coverage: The percentage of condition outcomes that have been
exercised by a test suite. 100% condition coverage requires each single condition in
every decision statement to be tested as True and False.
condition determination coverage: The percentage of all single condition
outcomes that independently affect a decision outcome that have been exercised by
a test case suite. 100% condition determination coverage implies 100% decision
condition coverage.
condition determination testing: A white box test design technique in which test
cases are designed to execute single condition outcomes that independently affect a
decision outcome.
condition testing: A white box test design technique in which test cases are
designed to execute condition outcomes.
condition outcome: The evaluation of a condition to True or False.
confidence test: See smoke test.
configuration: The composition of a component or system as defined by the
number, nature, and interconnections of its constituent parts.
configuration auditing: The function to check on the contents of libraries of
configuration items, e.g. for standards compliance. [IEEE 610]
configuration control: An element of configuration management, consisting of the
evaluation, co-ordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification.
[IEEE 610]
configuration control board (CCB): A group of people responsible for evaluating
and approving or disapproving proposed changes to configuration items, and for
ensuring implementation of approved changes. [IEEE 610]
configuration identification: An element of configuration management, consisting
of selecting the configuration items for a system and recording their functional and
physical characteristics in technical documentation. [IEEE 610]
configuration item: An aggregation of hardware, software or both, that is
designated for configuration management and treated as a single entity in the
configuration management process. [IEEE 610]
configuration management: A discipline applying technical and administrative
direction and surveillance to: identify and document the functional and physical
characteristics of a configuration item, control changes to those characteristics,
record and report change processing and implementation status, and verify
compliance with specified requirements. [IEEE 610]
configuration management tool: A tool that provides support for the identification
and control of configuration items, their status over changes and versions, and the
release of baselines consisting of configuration items.
configuration testing: See portability testing.
confirmation testing: See re-testing.
conformance testing: See compliance testing.
consistency: The degree of uniformity, standardization, and freedom from
contradiction among the documents or parts of a component or system. [IEEE 610]
control flow: A sequence of events (paths) in the execution through a component
or system.
control flow graph: An abstract representation of all possible sequences of events
(paths) in the execution through a component or system.
control flow path: See path.
conversion testing: Testing of software used to convert data from existing systems
for use in replacement systems.
COTS: Acronym for Commercial Off-The-Shelf software. See off-the-shelf software.
coverage: The degree, expressed as a percentage, to which a specified coverage
item has been exercised by a test suite.
coverage analysis: Measurement of achieved coverage to a specified coverage item
during test execution referring to predetermined criteria to determine whether
additional testing is required and if so, which test cases are needed.
coverage item: An entity or property used as a basis for test coverage, e.g.
equivalence partitions or code statements.
coverage tool: A tool that provides objective measures of what structural elements,
e.g. statements, branches have been exercised by a test suite.
custom software: See bespoke software.
cyclomatic complexity: The number of independent paths through a program.
Cyclomatic complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a
subroutine)
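
As a quick illustrative calculation (not part of the definition): a function containing
a single if/else has a control flow graph with 4 edges, 4 nodes and 1 connected part,
so its cyclomatic complexity is 4 - 4 + 2(1) = 2, i.e. two independent paths.

# Illustrative only: cyclomatic complexity of a single if/else construct.
L, N, P = 4, 4, 1          # edges, nodes, connected parts of the graph
print(L - N + 2 * P)       # 2 -> two independent paths (then-branch, else-branch)
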
cyclomatic number: See cyclomatic complexity.
D
daily build: a development activity where a complete system is compiled and linked
every day (usually overnight), so that a consistent system is available at any time
including all latest changes.
data definition: An executable statement where a variable is assigned a value.
data driven testing: A scripting technique that stores test input and expected
results in a table or spreadsheet, so that a single control script can execute all of the
tests in the table. Data driven testing is often used to support the application of test
execution tools such as capture/playback tools. [Fewster and Graham] See also
keyword driven testing.
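
A minimal, purely illustrative sketch of data driven testing in Python: one control
loop reads (input, expected) rows from tabular data and runs the same check for each.
The login() function and the table contents here are hypothetical.

import csv
import io

def login(username, password):
    """Hypothetical function under test."""
    return username == "admin" and password == "secret"

# In practice this table usually lives in a spreadsheet or CSV file.
TABLE = """username,password,expected
admin,secret,True
admin,wrong,False
guest,secret,False
"""

for row in csv.DictReader(io.StringIO(TABLE)):
    expected = row["expected"] == "True"
    actual = login(row["username"], row["password"])
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: login({row['username']!r}, {row['password']!r}) -> {actual}")
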
data flow: An abstract representation of the sequence and possible changes of the
state of data objects, where the state of an object is any of: creation, usage, or
destruction. [Beizer]
data flow analysis: A form of static analysis based on the definition and usage of
variables.
data flow coverage: The percentage of definition-use pairs that have been
exercised by a test suite.
data flow testing: A white box test design technique in which test cases are
designed to execute definition and use pairs of variables.
data integrity testing: See database integrity testing.
database integrity testing: Testing the methods and processes used to access and
manage the data(base), to ensure access methods, processes and data rules
function as expected and that during access to the database, data is not corrupted or
unexpectedly deleted, updated or created.
dead code: See unreachable code.
debugger: See debugging tool.
debugging: The process of finding, analyzing and removing the causes of failures in
software.
debugging tool: A tool used by programmers to reproduce failures, investigate the
state of programs and find the corresponding defect. Debuggers enable
programmers to execute programs step by step, to halt a program at any program
statement and to set and examine program variables.
decision: A program point at which the control flow has two or more alternative
routes. A node with two or more links to separate branches.
decision condition coverage: The percentage of all condition outcomes and
decision outcomes that have been exercised by a test suite. 100% decision condition
coverage implies both 100% condition coverage and 100% decision coverage.
decision condition testing: A white box test design technique in which test cases
are designed to execute condition outcomes and decision outcomes.
decision coverage: The percentage of decision outcomes that have been exercised
by a test suite. 100% decision coverage implies both 100% branch coverage and
100% statement coverage.
decision table: A table showing combinations of inputs and/or stimuli (causes) with
their associated outputs and/or actions (effects), which can be used to design test
cases.
decision table testing: A black box test design technique in which test cases are
designed to execute the combinations of inputs and/or stimuli (causes) shown in a
decision table. [Veenendaal]
decision testing: A white box test design technique in which test cases are
designed to execute decision outcomes.
decision outcome: The result of a decision (which therefore determines the
branches to be taken).
defect: A flaw in a component or system that can cause the component or system to
fail to perform its required function, e.g. an incorrect statement or data definition. A
defect, if encountered during execution, may cause a failure of the component or
system.
defect density: The number of defects identified in a component or system divided
by the size of the component or system (expressed in standard measurement terms,
e.g. lines-of-code, number of classes or function points).
Defect Detection Percentage (DDP): the number of defects found by a test
phase, divided by the number found by that test phase and any other means
afterwards.
defect management: The process of recognizing, investigating, taking action and
disposing of defects. It involves recording defects, classifying them and identifying
the impact. [After IEEE 1044]
defect management tool: A tool that facilitates the recording and status tracking
of defects. They often have workflow-oriented facilities to track and control the
allocation, correction and re-testing of defects and provide reporting facilities. See
also incident management
tool.
defect masking: An occurrence in which one defect prevents the detection of
another. [After IEEE 610]
defect report: A document reporting on any flaw in a component or system that can
cause the component or system to fail to perform its required function. [After IEEE
829]
defect tracking tool: See defect management tool.
definition-use pair: The association of the definition of a variable with the use of
that variable. Variable uses include computational (e.g. multiplication) or to direct
the execution of a path (“predicate” use).
deliverable: Any (work) product that must be delivered to someone other than the
(work)product’s author.
design-based testing: An approach to testing in which test cases are designed
based on the architecture and/or detailed design of a component or system (e.g.
tests of interfaces between components or systems).
desk checking: Testing of software or specification by manual simulation of its
execution. See also static analysis.
development testing: Formal or informal testing conducted during the
implementation of a component or system, usually in the development environment
by developers. [After IEEE 610]
deviation: See incident.
deviation report: See incident report.
dirty testing: See negative testing.
documentation testing: Testing the quality of the documentation, e.g. user guide
or installation guide.
domain: The set from which valid input and/or output values can be selected.
driver: A software component or test tool that replaces a component that takes care
of the control and/or the calling of a component or system. [After TMap]
dynamic analysis: The process of evaluating behavior, e.g. memory performance,
CPU usage, of a system or component during execution. [After IEEE 610]
dynamic analysis tool: A tool that provides run-time information on the state of
the software code. These tools are most commonly used to identify unassigned
pointers, check pointer arithmetic and to monitor the allocation, use and de-
allocation of memory and to flag memory leaks.
dynamic comparison: Comparison of actual and expected results, performed while
the software is being executed, for example by a test execution tool.
dynamic testing: Testing that involves the execution of the software of a
component or system.
E
efficiency: The capability of the software product to provide appropriate
performance, relative to the amount of resources used under stated conditions. [ISO
9126]
efficiency testing: The process of testing to determine the efficiency of a software
product.
elementary comparison testing: A black box test design technique in which test
cases are designed to execute combinations of inputs using the concept of condition
determination coverage. [TMap]
emulator: A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system. [IEEE 610] See also simulator.
entry criteria: the set of generic and specific conditions for permitting a process to
go forward with a defined task, e.g. test phase. The purpose of entry criteria is to
prevent a task from starting which would entail more (wasted) effort compared to
the effort needed to remove the failed entry criteria. [Gilb and Graham]
entry point: The first executable statement within a component.
equivalence class: See equivalence partition.
equivalence partition: A portion of an input or output domain for which the
behavior of a component or system is assumed to be the same, based on the
specification.
equivalence partition coverage: The percentage of equivalence partitions that
have been exercised by a test suite.
equivalence partitioning: A black box test design technique in which test cases are
designed to execute representatives from equivalence partitions. In principle test
cases are designed to cover each partition at least once.
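
Continuing the hypothetical age-field example used above for boundary value analysis
(valid from 18 to 99), an equivalence partitioning sketch picks one representative
value per partition rather than the edge values:

# Illustrative sketch only: one representative per partition for a made-up
# is_valid_age() function (below range, in range, above range).

def is_valid_age(age):
    return 18 <= age <= 99

representatives = {5: False, 40: True, 120: False}

for age, expected in representatives.items():
    assert is_valid_age(age) == expected, f"partition containing {age} failed"
print("one representative from each equivalence partition checked")
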
error: A human action that produces an incorrect result. [After IEEE 610]
error guessing: A test design technique where the experience of the tester is used
to anticipate what defects might be present in the component or system under test
as a result of errors made, and to design tests specifically to expose them.
error seeding: The process of intentionally adding known defects to those already
in the component or system for the purpose of monitoring the rate of detection and
removal, and estimating the number of remaining defects. [IEEE 610]
error tolerance: The ability of a system or component to continue normal operation
despite the presence of erroneous inputs. [After IEEE 610].
evaluation: See testing.
exception handling: Behavior of a component or system in response to erroneous
input, from either a human user or from another component or system, or to an
internal failure.
executable statement: A statement which, when compiled, is translated into object
code, and which will be executed procedurally when the program is running and may
perform an action on data.
exercised: A program element is said to be exercised by a test case when the input
value causes the execution of that element, such as a statement, decision, or other
structural element.
exhaustive testing: A test approach in which the test suite comprises all
combinations of input values and preconditions.
exit criteria: The set of generic and specific conditions, agreed upon with the
stakeholders, for permitting a process to be officially completed. The purpose of exit
criteria is to prevent a task from being considered completed when there are still
outstanding parts of the task which have not been finished. Exit criteria are used to
report against and to plan when to stop testing. [After Gilb and Graham]
exit point: The last executable statement within a component.
expected outcome: See expected result.
expected result: The behavior predicted by the specification, or another source, of
the component or system under specified conditions.
experience-based test design technique: Procedure to derive and/or select test
cases based on the tester’s experience, knowledge and intuition.
exploratory testing: An informal test design technique where the tester actively
controls the design of the tests as those tests are performed and uses information
gained while testing to design new and better tests. [After Bach]
F
fail: A test is deemed to fail if its actual result does not match its expected result.
failure: Deviation of the component or system from its expected delivery, service or
result. [After Fenton]
failure mode: The physical or functional manifestation of a failure. For example, a
system in failure mode may be characterized by slow operation, incorrect outputs, or
complete termination of execution. [IEEE 610]
Failure Mode and Effect Analysis (FMEA): A systematic approach to risk
identification and analysis of identifying possible modes of failure and attempting to
prevent their occurrence.
failure rate: The ratio of the number of failures of a given category to a given unit
of measure, e.g. failures per unit of time, failures per number of transactions,
failures per number of computer runs. [IEEE 610]
fault: See defect.
fault density: See defect density.
Fault Detection Percentage (FDP): See Defect Detection Percentage (DDP).
fault masking: See defect masking.
fault tolerance: The capability of the software product to maintain a specified level
of performance in cases of software faults (defects) or of infringement of its specified
interface. [ISO 9126] See also reliability.
fault tree analysis: A method used to analyze the causes of faults (defects).
feasible path: A path for which a set of input values and preconditions exists which
causes it to be executed.
feature: An attribute of a component or system specified or implied by requirements
documentation (for example reliability, usability or design constraints). [After IEEE
1008]
field testing: See beta testing.
finite state machine: A computational model consisting of a finite number of states
and transitions between those states, possibly with accompanying actions. [IEEE
610]
finite state testing: See state transition testing.
formal review: A review characterized by documented procedures and
requirements, e.g. inspection.
frozen test basis: A test basis document that can only be amended by a formal
change control process. See also baseline.
Function Point Analysis (FPA): Method aiming to measure the size of the
functionality of an information system. The measurement is independent of the
technology. This measurement may be used as a basis for the measurement of
productivity, the estimation of the needed resources, and project control.
functional integration: An integration approach that combines the components or
systems for the purpose of getting a basic functionality working early. See also
integration testing.
functional requirement: A requirement that specifies a function that a component
or system must perform. [IEEE 610]
functional test design technique: Procedure to derive and/or select test cases
based on an analysis of the specification of the functionality of a component or
system without reference to its internal structure. See also black box test design
technique.
functional testing: Testing based on an analysis of the specification of the
functionality of a component or system. See also black box testing.
functionality: The capability of the software product to provide functions which
meet stated and implied needs when the software is used under specified conditions.
[ISO 9126]
functionality testing: The process of testing to determine the functionality of a
software product.
G
glass box testing: See white box testing.
H
heuristic evaluation: A static usability test technique to determine the compliance
of a user interface with recognized usability principles (the so-called “heuristics”).
high level test case: A test case without concrete (implementation level) values for
input data and expected results. Logical operators are used; instances of the actual
values are not yet defined and/or available. See also low level test case.
horizontal traceability: The tracing of requirements for a test level through the
layers of test documentation (e.g. test plan, test design specification, test case
specification and test procedure specification or test script).
I
impact analysis: The assessment of change to the layers of development
documentation, test documentation and components, in order to implement a given
change to specified requirements.
incident: Any event occurring that requires investigation. [After IEEE 1008]
incident logging: Recording the details of any incident that occurred, e.g. during
testing.
incident management: The process of recognizing, investigating, taking action and
disposing of incidents. It involves logging incidents, classifying them and identifying
the impact. [After IEEE 1044]
incident management tool: A tool that facilitates the recording and status tracking
of incidents. They often have workflow-oriented facilities to track and control the
allocation, correction and re-testing of incidents and provide reporting facilities. See
also defect management tool.
incident report: A document reporting on any event that occurred, e.g. during the
testing, which requires investigation. [After IEEE 829]
incremental development model: A development life cycle where a project is
broken into a series of increments, each of which delivers a portion of the
functionality in the overall project requirements. The requirements are prioritized
and delivered in priority order in the appropriate increment. In some (but not all)
versions of this life cycle model, each subproject follows a ‘mini V-model’ with its
own design, coding and testing phases.
incremental testing: Testing where components or systems are integrated and
tested one or some at a time, until all the components or systems are integrated and
tested.
independence: Separation of responsibilities, which encourages the accomplishment
of objective testing. [After DO-178b]
infeasible path: A path that cannot be exercised by any set of possible input
values.
informal review: A review not based on a formal (documented) procedure.
input: A variable (whether stored within a component or outside) that is read by a
component.
input domain: The set from which valid input values can be selected. See also
domain.
input value: An instance of an input. See also input.
inspection: A type of peer review that relies on visual examination of documents to
detect defects, e.g. violations of development standards and non-conformance to
higher level documentation. The most formal review technique and therefore always
based on a documented procedure. [After IEEE 610, IEEE 1028] See also peer
review.
inspection leader: See moderator.
inspector: See reviewer.
installability: The capability of the software product to be installed in a specified
environment [ISO 9126]. See also portability.
installability testing: The process of testing the installability of a software product.
See also portability testing.
installation guide: Supplied instructions on any suitable media, which guides the
installer through the installation process. This may be a manual guide, step-by-step
procedure, installation wizard, or any other similar process description.
installation wizard: Supplied software on any suitable media, which leads the
installer through the installation process. It normally runs the installation process,
provides feedback on installation results, and prompts for options.
instrumentation: The insertion of additional code into the program in order to
collect information about program behavior during execution, e.g. for measuring
code coverage.
instrumenter: A software tool used to carry out instrumentation.
intake test: A special instance of a smoke test to decide if the component or system
is ready for detailed and further testing. An intake test is typically carried out at the
start of the test execution phase. See also smoke test.
integration: The process of combining components or systems into larger
assemblies.
integration testing: Testing performed to expose defects in the interfaces and in
the interactions between integrated components or systems. See also component
integration testing, system integration testing.
integration testing in the large: See system integration testing.
integration testing in the small: See component integration testing.
interface testing: An integration test type that is concerned with testing the
interfaces between components or systems.
interoperability: The capability of the software product to interact with one or more
specified components or systems. [After ISO 9126] See also functionality.
interoperability testing: The process of testing to determine the interoperability of
a software product. See also functionality testing.
invalid testing: Testing using input values that should be rejected by the
component or system. See also error tolerance.
isolation testing: Testing of individual components in isolation from surrounding
components, with surrounding components being simulated by stubs and drivers, if
needed.
item transmittal report: See release note.
iterative development model: A development life cycle where a project is broken
into a usually large number of iterations. An iteration is a complete development loop
resulting in a release (internal or external) of an executable product, a subset of the
final product under development, which grows from iteration to iteration to become
the final product.
K
key performance indicator: See performance indicator.
keyword driven testing: A scripting technique that uses data files to contain not
only test data and expected results, but also keywords related to the application
being tested. The keywords are interpreted by special supporting scripts that are
called by the control script for the test. See also data driven testing.
L
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items
(conventionally identified by line numbers in a source code listing): the start of the
linear sequence of executable statements, the end of the linear sequence, and the
target line to which control flow is transferred at the end of the linear sequence.
LCSAJ coverage: The percentage of LCSAJs of a component that have been
exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.
LCSAJ testing: A white box test design technique in which test cases are designed
to execute LCSAJs.
learnability: The capability of the software product to enable the user to learn its
application. [ISO 9126] See also usability.
level test plan: A test plan that typically addresses one test level. See also test
plan.
link testing: See component integration testing.
load testing: A test type concerned with measuring the behavior of a component or
system with increasing load, e.g. number of parallel users and/or numbers of
transactions to determine what load can be handled by the component or system.
See also stress testing.
logic-coverage testing: See white box testing. [Myers]
logic-driven testing: See white box testing.
logical test case: See high level test case.
low level test case: A test case with concrete (implementation level) values for
input data and expected results. Logical operators from high level test cases are
replaced by actual values that correspond to the objectives of the logical operators.
See also high level test case.
M
maintenance: Modification of a software product after delivery to correct defects, to
improve performance or other attributes, or to adapt the product to a modified
environment. [IEEE 1219]
maintenance testing: Testing the changes to an operational system or the impact
of a changed environment to an operational system.
maintainability: The ease with which a software product can be modified to correct
defects, modified to meet new requirements, modified to make future maintenance
easier, or adapted to a changed environment. [ISO 9126]
maintainability testing: The process of testing to determine the maintainability of
a software product.
management review: A systematic evaluation of software acquisition, supply,
development, operation, or maintenance process, performed by or on behalf of
management that monitors progress, determines the status of plans and schedules,
confirms requirements and their system allocation, or evaluates the effectiveness of
management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]
master test plan: A test plan that typically addresses multiple test levels. See also
test plan.
maturity: (1) The capability of an organization with respect to the effectiveness and
efficiency of its processes and work practices. See also Capability Maturity Model,
Test Maturity Model. (2) The capability of the software product to avoid failure as a
result of defects in the software. [ISO 9126] See also reliability.
measure: The number or category assigned to an attribute of an entity by making a
measurement. [ISO 14598]
measurement: The process of assigning a number or category to an entity to
describe an attribute of that entity. [ISO 14598]
measurement scale: A scale that constrains the type of data analysis that can be
performed on it. [ISO 14598]
memory leak: A defect in a program's dynamic store allocation logic that causes it
to fail to reclaim memory after it has finished using it, eventually causing the
program to fail due to lack of memory.
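As a rough illustration (in Python, where a leak usually takes the form of unbounded references rather than unreclaimed allocations), the invented sketch below keeps every processed item alive in a module-level cache, so memory use grows for as long as the program runs.

# Illustrative only: the cache is never pruned, so processed results are never
# released by the garbage collector and memory use grows without bound.
_cache = []

def process(record):
    result = record.upper()
    _cache.append(result)   # the "leak": results are retained forever
    return result

for i in range(5):
    process("record-%d" % i)
print(len(_cache), "items still referenced")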
metric: A measurement scale and the method used for measurement. [ISO 14598]
migration testing: See conversion testing.
milestone: A point in time in a project at which defined (intermediate) deliverables
and results should be ready.
mistake: See error.
moderator: The leader and main person responsible for an inspection or other
review process.
modified condition decision coverage: See condition determination coverage.
modified condition decision testing: See condition determination coverage
testing.
modified multiple condition coverage: See condition determination coverage.
modified multiple condition testing: See condition determination coverage
testing.
module: See component.
module testing: See component testing.
monitor: A software tool or hardware device that runs concurrently with the
component or system under test and supervises, records and/or analyses the
behavior of the component or system. [After IEEE 610]
monitoring tool: See monitor.
multiple condition: See compound condition.
multiple condition coverage: The percentage of combinations of all single
condition outcomes within one statement that have been exercised by a test suite.
100% multiple condition coverage implies 100% condition determination coverage.
multiple condition testing: A white box test design technique in which test cases
are designed to execute combinations of single condition outcomes (within one
statement).
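A minimal sketch: for the single statement below, which combines two conditions, 100% multiple condition coverage needs all four combinations of the individual condition outcomes. The function and values are invented for the example.

# Illustrative component containing one statement with a compound condition.
def can_withdraw(balance, amount):
    # Note: Python short-circuits the second condition when the first is False.
    return balance >= amount and amount > 0

# The four combinations of the single-condition outcomes
# (balance >= amount, amount > 0) needed for 100% multiple condition coverage.
cases = [
    ((100, 50), True),    # True,  True
    ((100, -5), False),   # True,  False
    ((10, 50),  False),   # False, True
    ((-10, -5), False),   # False, False
]

for (balance, amount), expected in cases:
    assert can_withdraw(balance, amount) == expected
print("all four condition combinations exercised")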
mutation analysis: A method to determine test suite thoroughness by measuring
the extent to which a test suite can discriminate the program from slight variants
(mutants) of the program.
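A hand-worked sketch of the idea: the "mutant" below differs from the original program by a single operator. A thorough test suite contains at least one case whose outcome distinguishes the two; everything here is invented for illustration.

# Original program under test.
def is_adult(age):
    return age >= 18

# A slight variant (mutant) with >= replaced by >.
def is_adult_mutant(age):
    return age > 18

# A test suite that includes the boundary value 18 "kills" the mutant,
# because the original and the mutant disagree there; a suite without
# that value could not tell them apart.
for age in (17, 18, 19):
    print(age, is_adult(age), is_adult_mutant(age))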
mutation testing: See back-to-back testing.
N
N-switch coverage: The percentage of sequences of N+1 transitions that have
been exercised by a test suite. [Chow]
N-switch testing: A form of state transition testing in which test cases are designed
to execute all valid sequences of N+1 transitions. [Chow] See also state transition
testing.
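For example, with N = 1 (1-switch testing) each test case exercises a sequence of N + 1 = 2 consecutive transitions. The toy state machine and the sequences below are purely illustrative.

# Illustrative state machine: valid transitions of a simple document workflow.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def run(start, events):
    state = start
    for event in events:
        state = TRANSITIONS[(state, event)]   # raises KeyError on an invalid pair
    return state

# 1-switch (N = 1) test cases: sequences of two consecutive transitions.
assert run("draft", ["submit", "approve"]) == "published"
assert run("draft", ["submit", "reject"]) == "draft"
assert run("review", ["reject", "submit"]) == "review"
print("three 1-switch sequences exercised")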
negative testing: Tests aimed at showing that a component or system does not
work. Negative testing is related to the testers’ attitude rather than a specific test
approach or test design technique, e.g. testing with invalid input values or
exceptions. [After Beizer].
non-conformity: Non fulfillment of a specified requirement. [ISO 9000]
non-functional requirement: A requirement that does not relate to functionality,
but to attributes such as reliability, efficiency, usability, maintainability and
portability.
non-functional testing: Testing the attributes of a component or system that do
not relate to functionality, e.g. reliability, efficiency, usability, maintainability and
portability.
non-functional test design techniques: Procedure to derive and/or select test
cases for non-functional testing based on an analysis of the specification of a
component or system without reference to its internal structure. See also black box
test design technique.
O
off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
operability: The capability of the software product to enable the user to operate and
control it. [ISO 9126] See also usability.
operational environment: Hardware and software products installed at users’ or
customers’ sites where the component or system under test will be used. The
software may include operating systems, database management systems, and other
applications.
operational profile testing: Statistical testing using a model of system operations
(short duration tasks) and their probability of typical use. [Musa]
operational testing: Testing conducted to evaluate a component or system in its
operational environment. [IEEE 610]
oracle: See test oracle.
outcome: See result.
output: A variable (whether stored within a component or outside) that is written by
a component.
output domain: The set from which valid output values can be selected. See also
domain.
output value: An instance of an output. See also output.
P
pair programming: A software development approach whereby lines of code
(production and/or test) of a component are written by two programmers sitting at a
single computer.
This implicitly means ongoing real-time code reviews are performed.
pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-
user and a tester, working together to find defects. Typically, they share one
computer and trade control of it while testing.
partition testing: See equivalence partitioning. [Beizer]
pass: A test is deemed to pass if its actual result matches its expected result.
pass/fail criteria: Decision rules used to determine whether a test item (function)
or feature has passed or failed a test. [IEEE 829]
path: A sequence of events, e.g. executable statements, of a component or system
from an entry point to an exit point.
path coverage: The percentage of paths that have been exercised by a test suite.
100% path coverage implies 100% LCSAJ coverage.
path sensitizing: Choosing a set of input values to force the execution of a given
path.
path testing: A white box test design technique in which test cases are designed to
execute paths.
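A small illustration of path sensitizing: the input values below are chosen to force each of the two paths through the function. The function name and values are invented for the example.

# Component with two paths: the "discount" branch and the fall-through branch.
def final_price(price, is_member):
    if is_member:
        price = price * 0.9   # path A: discount branch taken
    return round(price, 2)    # path B reaches here directly when not a member

# Path-sensitizing inputs: one set of values per path.
assert final_price(100.0, True) == 90.0    # forces path A
assert final_price(100.0, False) == 100.0  # forces path B
print("both paths exercised")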
peer review: A review of a software work product by colleagues of the producer of
the product for the purpose of identifying defects and improvements. Examples are
inspection, technical review and walkthrough.
performance: The degree to which a system or component accomplishes its
designated functions within given constraints regarding processing time and
throughput rate. [After IEEE 610] See also efficiency.
performance indicator: A high level metric of effectiveness and/or efficiency used
to guide and control progressive development, e.g. lead-time slip for software
development. [CMMI]
performance testing: The process of testing to determine the performance of a
software product. See also efficiency testing.
performance testing tool: A tool to support performance testing and that usually
has two main facilities: load generation and test transaction measurement. Load
generation can simulate either multiple users or high volumes of input data. During
execution, response time measurements are taken from selected transactions and
these are logged. Performance testing tools normally provide reports based on test
logs and graphs of load against response times.
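The sketch below is not a real performance testing tool, only a toy illustration of its two facilities: a few threads generate load while response times of the measured "transaction" are recorded and summarized. The transaction function is a stand-in for a real operation such as an HTTP request.

import threading
import time

def transaction():
    # Stand-in for the operation being measured.
    time.sleep(0.01)

response_times = []
lock = threading.Lock()

def virtual_user(iterations):
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:
            response_times.append(elapsed)

# Load generation: simulate 5 parallel users, 10 transactions each.
users = [threading.Thread(target=virtual_user, args=(10,)) for _ in range(5)]
for u in users:
    u.start()
for u in users:
    u.join()

# Simple report derived from the "test log".
print("transactions:", len(response_times))
print("average response time: %.4f s" % (sum(response_times) / len(response_times)))
print("worst response time:   %.4f s" % max(response_times))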
phase test plan: A test plan that typically addresses one test phase. See also test
plan.
portability: The ease with which the software product can be transferred from one
hardware or software environment to another. [ISO 9126]
portability testing: The process of testing to determine the portability of a software
product.
postcondition: Environmental and state conditions that must be fulfilled after the
execution of a test or test procedure.
post-execution comparison: Comparison of actual and expected results,
performed after the software has finished running.
precondition: Environmental and state conditions that must be fulfilled before the
component or system can be executed with a particular test or test procedure.
predicted outcome: See expected result.
pretest: See intake test.
priority: The level of (business) importance assigned to an item, e.g. defect.
probe effect: The effect on the component or system by the measurement
instrument when the component or system is being measured, e.g. by a
performance testing tool or monitor.
For example, performance may be slightly worse when performance testing tools are
being used.
problem: See defect.
problem management: See defect management.
problem report: See defect report.
process: A set of interrelated activities, which transform inputs into outputs. [ISO
12207]
process cycle test: A black box test design technique in which test cases are
designed to execute business procedures and processes. [TMap]
product risk: A risk directly related to the test object. See also risk.
project: A project is a unique set of coordinated and controlled activities with start
and finish dates undertaken to achieve an objective conforming to specific
requirements, including the constraints of time, cost and resources. [ISO 9000]
project risk: A risk related to management and control of the (test) project. See
also risk.
program instrumenter: See instrumenter.
program testing: See component testing.
project test plan: See master test plan.
pseudo-random: A series which appears to be random but is in fact generated
according to some prearranged sequence.
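For example, seeding a pseudo-random generator makes the apparently random series repeatable, which is useful when a randomly generated test needs to be re-run; the values below are arbitrary.

import random

random.seed(42)                       # fix the prearranged sequence
first = [random.randint(0, 9) for _ in range(5)]

random.seed(42)                       # same seed, same "random" series
second = [random.randint(0, 9) for _ in range(5)]

print(first == second)                # True: the series is reproducible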
Q
quality: The degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations. [After IEEE 610]
quality assurance: Part of quality management focused on providing confidence
that quality requirements will be fulfilled. [ISO 9000]
quality attribute: A feature or characteristic that affects an item’s quality. [IEEE
610]
quality characteristic: See quality attribute.
quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning,
quality control, quality assurance and quality improvement. [ISO 9000]
R
random testing: A black box test design technique where test cases are selected,
possibly using a pseudo-random generation algorithm, to match an operational
profile. This technique can be used for testing non-functional attributes such as
reliability and performance.
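A minimal sketch of the idea: test inputs are drawn pseudo-randomly, here weighted to roughly match an assumed operational profile. The operations, weights and the execute stand-in are all invented for the example.

import random

random.seed(1)  # reproducible pseudo-random selection

# Assumed operational profile: relative frequency of operations in typical use.
operations = ["view_item", "search", "add_to_cart", "checkout"]
weights = [0.6, 0.25, 0.10, 0.05]

def execute(operation):
    # Stand-in for driving the system under test and checking the outcome.
    return True

for _ in range(20):
    operation = random.choices(operations, weights=weights, k=1)[0]
    assert execute(operation), "failure while executing %s" % operation
print("20 randomly selected operations executed")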
recorder: See scribe.
record/playback tool: See capture/playback tool.
recoverability: The capability of the software product to re-establish a specified
level of performance and recover the data directly affected in case of failure. [ISO
9126] See also reliability.
recoverability testing: The process of testing to determine the recoverability of a
software product. See also reliability testing.
recovery testing: See recoverability testing.
regression testing: Testing of a previously tested program following modification to
ensure that defects have not been introduced or uncovered in unchanged areas of
the software, as a result of the changes made. It is performed when the software or
its environment is changed.
regulation testing: See compliance testing.
release note: A document identifying test items, their configuration, current status
and other delivery information delivered by development to testing, and possibly
other stakeholders, at the start of a test execution phase. [After IEEE 829]
reliability: The ability of the software product to perform its required functions
under stated conditions for a specified period of time, or for a specified number of
operations. [ISO 9126]
reliability testing: The process of testing to determine the reliability of a software
product.
replaceability: The capability of the software product to be used in place of another
specified software product for the same purpose in the same environment. [ISO
9126] See also portability.
requirement: A condition or capability needed by a user to solve a problem or
achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]
requirements-based testing: An approach to testing in which test cases are
designed based on test objectives and test conditions derived from requirements,
e.g. tests that exercise specific functions or probe non-functional attributes such as
reliability or usability.
requirements management tool: A tool that supports the recording of
requirements, requirements attributes (e.g. priority, knowledge responsible) and
annotation, and facilitates traceability through layers of requirements and
requirements change management. Some requirements management tools also
provide facilities for static analysis, such as consistency checking and violations to
pre-defined requirements rules.
requirements phase: The period of time in the software life cycle during which the
requirements for a software product are defined and documented. [IEEE 610]
resource utilization: The capability of the software product to use appropriate
amounts and types of resources, for example the amounts of main and secondary
memory used by the program and the sizes of required temporary or overflow files,
when the software performs its function under stated conditions. [After ISO 9126]
See also efficiency.
resource utilization testing: The process of testing to determine the resource-
utilization of a software product. See also efficiency testing.
result: The consequence/outcome of the execution of a test. It includes outputs to
screens, changes to data, reports, and communication messages sent out. See also
actual result, expected result.
resumption criteria: The testing activities that must be repeated when testing is
re-started after a suspension. [After IEEE 829]
re-testing: Testing that runs test cases that failed the last time they were run, in
order to verify the success of corrective actions.
review: An evaluation of a product or project status to ascertain discrepancies from
planned results and to recommend improvements. Examples include management
review, informal review, technical review, inspection, and walkthrough. [After IEEE
1028]
reviewer: The person involved in the review that identifies and describes anomalies
in the product or project under review. Reviewers can be chosen to represent
different viewpoints and roles in the review process.
review tool: A tool that provides support to the review process. Typical features
include review planning and tracking support, communication support, collaborative
reviews and a repository for collecting and reporting of metrics.
risk: A factor that could result in future negative consequences; usually expressed
as impact and likelihood.
risk analysis: The process of assessing identified risks to estimate their impact and
probability of occurrence (likelihood).
risk-based testing: Testing oriented towards exploring and providing information
about product risks. [After Gerrard]
risk control: The process through which decisions are reached and protective
measures are implemented for reducing risks to, or maintaining risks within,
specified levels.
risk identification: The process of identifying risks using techniques such as
brainstorming, checklists and failure history.
risk management: Systematic application of procedures and practices to the tasks
of identifying, analyzing, prioritizing, and controlling risk.
risk mitigation: See risk control.
robustness: The degree to which a component or system can function correctly in
the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See
also error-tolerance, fault-tolerance.
robustness testing: Testing to determine the robustness of the software product.
root cause: An underlying factor that caused a non-conformance and possibly
should be permanently eliminated through process improvement.
S
safety: The capability of the software product to achieve acceptable levels of risk of
harm to people, business, software, property or the environment in a specified
context of use. [ISO 9126]
safety testing: Testing to determine the safety of a software product.
sanity test: See smoke test.
scalability: The capability of the software product to be upgraded to accommodate
increased loads. [After Gerrard]
scalability testing: Testing to determine the scalability of the software product.
scenario testing: See use case testing.
scribe: The person who records each defect mentioned and any suggestions for
process improvement during a review meeting, on a logging form. The scribe has to
ensure that the logging form is readable and understandable.
scripting language: A programming language in which executable test scripts are
written, used by a test execution tool (e.g. a capture/playback tool).
security: Attributes of a software product that bear on its ability to prevent
unauthorized access, whether accidental or deliberate, to programs and data. [ISO
9126] See also functionality.
security testing: Testing to determine the security of the software product. See
also functionality testing.
security testing tool: A tool that provides support for testing security
characteristics and vulnerabilities.
security tool: A tool that supports operational security.
serviceability testing: See maintainability testing.
severity: The degree of impact that a defect has on the development or operation of
a component or system. [After IEEE 610]
simulation: The representation of selected behavioral characteristics of one physical
or abstract system by another system. [ISO 2382/1]
simulator: A device, computer program or system used during testing, which
behaves or operates like a given system when provided with a set of controlled
inputs. [After IEEE 610, DO178b] See also emulator.
site acceptance testing: Acceptance testing by users/customers at their site, to
determine whether or not a component or system satisfies the user/customer needs
and fits within the business processes, normally including hardware as well as
software.
smoke test: A subset of all defined/planned test cases that cover the main
functionality of a component or system, to ascertain that the most crucial
functions of a program work, but not bothering with finer details. A daily build and
smoke test is among industry best practices. See also intake test.
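As a sketch, a smoke suite might only confirm that the application starts and that its most crucial functions respond at all. The checks below are invented placeholders standing in for calls to the real application, not a prescribed set.

# Illustrative smoke checks; each would exercise the real application in practice.
def application_starts():
    return True

def login_page_loads():
    return True

def database_reachable():
    return True

SMOKE_CHECKS = [application_starts, login_page_loads, database_reachable]

def run_smoke_suite():
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        raise SystemExit("smoke test failed: " + ", ".join(failures))
    print("smoke test passed - build accepted for further testing")

run_smoke_suite()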
software: Computer programs, procedures, and possibly associated documentation
and data pertaining to the operation of a computer system. [IEEE 610]
software feature: See feature.
software quality: The totality of functionality and features of a software product
that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
software quality characteristic: See quality attribute.
software test incident: See incident.
software test incident report: See incident report.
Software Usability Measurement Inventory (SUMI): A questionnaire based
usability test technique to evaluate the usability, e.g. user-satisfaction, of a
component or system. [Veenendaal]
source statement: See statement.
specification: A document that specifies, ideally in a complete, precise and
verifiable manner, the requirements, design, behavior, or other characteristics of a
component or system, and, often, the procedures for determining whether these
provisions have been satisfied. [After IEEE 610]
specification-based testing: See black box testing.
specification-based test design technique: See black box test design technique.
specified input: An input for which the specification predicts a result.
stability: The capability of the software product to avoid unexpected effects from
modifications in the software. [ISO 9126] See also maintainability.
standard software: See off-the-shelf software.
standards testing: See compliance testing.
state diagram: A diagram that depicts the states that a component or system can
assume, and shows the events or circumstances that cause and/or result from a
change from one state to another. [IEEE 610]
state table: A grid showing the resulting transitions for each state combined with
each possible event, showing both valid and invalid transitions.
state transition: A transition between two states of a component or system.
state transition testing: A black box test design technique in which test cases are
designed to execute valid and invalid state transitions. See also N-switch testing.
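A small sketch: the dictionary below encodes the state table of a toy order-handling system, and the two test cases exercise one valid and one invalid transition. States, events and behavior are invented for the example.

# Illustrative state table; event/state pairs missing from it are invalid transitions.
STATE_TABLE = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

class Order:
    def __init__(self):
        self.state = "created"

    def handle(self, event):
        next_state = STATE_TABLE.get((self.state, event))
        if next_state is None:
            raise ValueError("invalid transition: %s on %s" % (event, self.state))
        self.state = next_state

# Valid transition test case.
order = Order()
order.handle("pay")
assert order.state == "paid"

# Invalid transition test case: shipping an unpaid order must be rejected.
try:
    Order().handle("ship")
    raise AssertionError("invalid transition was accepted")
except ValueError:
    print("invalid transition correctly rejected")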
statement: An entity in a programming language, which is typically the smallest
indivisible unit of execution.
statement coverage: The percentage of executable statements that have been
exercised by a test suite.
statement testing: A white box test design technique in which test cases are
designed to execute statements.
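For illustration, the two test cases below together execute every statement of the small (invented) function, giving 100% statement coverage; a single test with a positive amount would leave the refund line unexercised.

# Illustrative component.
def classify(amount):
    if amount < 0:
        return "refund"        # executed only when amount is negative
    return "charge"            # executed otherwise

# Two cases are enough for 100% statement coverage of classify().
assert classify(-5) == "refund"
assert classify(10) == "charge"
print("every statement executed at least once")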
static analysis: Analysis of software artifacts, e.g. requirements or code, carried
out without execution of these software artifacts.
static analysis tool: See static analyzer.
static analyzer: A tool that carries out static analysis.
static code analysis: Analysis of source code carried out without execution of that software.
static code analyzer: A tool that carries out static code analysis. The tool checks
source code for certain properties such as conformance to coding standards, quality
metrics or data flow anomalies.
static testing: Testing of a component or system at specification or implementation
level without execution of that software, e.g. reviews or static code analysis.
statistical testing: A test design technique in which a model of the statistical
distribution of the input is used to construct representative test cases. See also
operational profile testing.
status accounting: An element of configuration management, consisting of the
recording and reporting of information needed to manage a configuration effectively.
This information includes a listing of the approved configuration identification, the
status of proposed changes to the configuration, and the implementation status of
the approved changes. [IEEE 610]
storage: See resource utilization.
storage testing: See resource utilization testing.
stress testing: Testing conducted to evaluate a system or component at or beyond
the limits of its specified requirements. [IEEE 610] See also load testing.
structure-based techniques: See white box test design technique.
structural coverage: Coverage measures based on the internal structure of a
component or system.
structural test design technique: See white box test design technique.
structural testing: See white box testing.
structured walkthrough: See walkthrough.
stub: A skeletal or special-purpose implementation of a software component, used
to develop or test a component that calls or is otherwise dependent on it. It replaces
a called component. [After IEEE 610]
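A minimal sketch: the component under test calls a payment service, which is replaced here by a hand-written stub so the component can be tested in isolation. All names and behavior are invented for the example.

# Component under test: depends on some payment service object.
def place_order(payment_service, amount):
    if payment_service.charge(amount):
        return "order confirmed"
    return "payment declined"

# Stub: a special-purpose replacement for the real, called component.
class PaymentServiceStub:
    def __init__(self, accept):
        self.accept = accept

    def charge(self, amount):
        return self.accept      # canned answer instead of a real transaction

assert place_order(PaymentServiceStub(accept=True), 25) == "order confirmed"
assert place_order(PaymentServiceStub(accept=False), 25) == "payment declined"
print("component tested in isolation using a stub")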
subpath: A sequence of executable statements within a component.
suitability: The capability of the software product to provide an appropriate set of
functions for specified tasks and user objectives. [ISO 9126] See also functionality.
suspension criteria: The criteria used to (temporarily) stop all or a portion of the
testing activities on the test items. [After IEEE 829]
syntax testing: A black box test design technique in which test cases are designed
based upon the definition of the input domain and/or output domain.
system: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]
system integration testing: Testing the integration of systems and packages;
testing interfaces to external organizations (e.g. Electronic Data Interchange,
Internet).
system testing: The process of testing an integrated system to verify that it meets
specified requirements. [Hetzel]
T
technical review: A peer group discussion activity that focuses on achieving
consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See
also peer review.
test: A set of one or more test cases. [IEEE 829]
test approach: The implementation of the test strategy for a specific project. It
typically includes the decisions made that follow based on the (test) project’s goal
and the risk assessment carried out, starting points regarding the test process, the
test design techniques to be applied, exit criteria and test types to be performed.
test automation: The use of software to perform or support test activities, e.g. test
management, test design, test execution and results checking.
test basis: All documents from which the requirements of a component or system
can be inferred. The documentation on which the test cases are based. If a
document can be amended only by way of formal amendment procedure, then the
test basis is called a frozen test basis. [After TMap]
test bed: See test environment.
test case: A set of input values, execution preconditions, expected results and
execution postconditions, developed for a particular objective or test condition, such
as to exercise a particular program path or to verify compliance with a specific
requirement. [After IEEE 610]
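For example, a single concrete test case could be written down as the following record of precondition, input values, expected result and postcondition; the fields and the withdraw function are illustrative, not a mandated layout.

# Illustrative test case for a hypothetical withdrawal function.
test_case = {
    "id": "TC-001",
    "objective": "verify withdrawal within available balance",
    "precondition": "account exists with balance 100",
    "input": {"balance": 100, "amount": 40},
    "expected_result": 60,
    "postcondition": "account balance reduced by the withdrawn amount",
}

def withdraw(balance, amount):
    return balance - amount

actual = withdraw(**test_case["input"])
print("pass" if actual == test_case["expected_result"] else "fail")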
test case design technique: See test design technique.
test case specification: A document specifying a set of test cases (objective,
inputs, test actions, expected results, and execution preconditions) for a test item.
[After IEEE 829]
test case suite: See test suite.
test charter: A statement of test objectives, and possibly test ideas on how to test.
Test charters are for example often used in exploratory testing. See also exploratory
testing.
test closure: During the test closure phase of a test process data is collected from
completed activities to consolidate experience, testware, facts and numbers. The test
closure phase consists of finalizing and archiving the testware and evaluating the test
process, including preparation of a test evaluation report. See also test process.
test comparator: A test tool to perform automated test comparison.
test comparison: The process of identifying differences between the actual results
produced by the component or system under test and the expected results for a test.
Test comparison can be performed during test execution (dynamic comparison) or
after test execution.
test completion criteria: See exit criteria.
test condition: An item or event of a component or system that could be verified by
one or more test cases, e.g. a function, transaction, feature, quality attribute, or
structural element.
test control: A test management task that deals with developing and applying a set
of corrective actions to get a test project on track when monitoring shows a
deviation from what was planned. See also test management.
test coverage: See coverage.
test cycle: Execution of the test process against a single identifiable release of the
test object.
test data: Data that exists (for example, in a database) before a test is executed,
and that affects or is affected by the component or system under test.
test data preparation tool: A type of test tool that enables data to be selected
from existing databases or created, generated, manipulated and edited for use in
testing.
test design: See test design specification.
test design specification: A document specifying the test conditions (coverage
items) for a test item, the detailed test approach and identifying the associated high
level test cases. [After IEEE 829]
test design technique: Procedure used to derive and/or select test cases.
test design tool: A tool that supports the test design activity by generating test
inputs from a specification that may be held in a CASE tool repository, e.g.
requirements management tool, from specified test conditions held in the tool itself,
or from code.
test driver: See driver.
test driven development: A way of developing software where the test cases are
developed, and often automated, before the software is developed to run those test
cases.
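A toy illustration of the rhythm: the test is written first (and would fail against an empty implementation), then just enough code is added to make it pass. The slugify function and its behavior are invented for the example.

import unittest

# Step 2: the implementation, written only after the test below existed
# and had been seen to fail (the "red" stage of red/green/refactor).
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 1: the test case, written before the implementation.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()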
test environment: An environment containing hardware, instrumentation,
simulators, software tools, and other support elements needed to conduct a test.
[After IEEE 610]
test evaluation report: A document produced at the end of the test process
summarizing all testing activities and results. It also contains an evaluation of the
test process and lessons learned.
test execution: The process of running a test on the component or system under
test, producing actual result(s).
test execution automation: The use of software, e.g. capture/playback tools, to
control the execution of tests, the comparison of actual results to expected results,
the setting up of test preconditions, and other test control and reporting functions.
test execution phase: The period of time in a software development life cycle
during which the components of a software product are executed, and the software
product is evaluated to determine whether or not requirements have been satisfied.
[IEEE 610]
test execution schedule: A scheme for the execution of test procedures. The test
procedures are included in the test execution schedule in their context and in the
order in which they are to be executed.
test execution technique: The method used to perform the actual test execution,
either manually or automated.
test execution tool: A type of test tool that is able to execute other software using
an automated test script, e.g. capture/playback. [Fewster and Graham]
test fail: See fail.
test generator: See test data preparation tool.
test harness: A test environment comprised of stubs and drivers needed to execute
a test.
test incident: See incident.
test incident report: See incident report.
test infrastructure: The organizational artifacts needed to perform testing,
consisting of test environments, test tools, office environment and procedures.
test input: The data received from an external source by the test object during test execution. The external source can be hardware, software or human.
test item: The individual element to be tested. There usually is one test object and
many test items. See also test object.
test item transmittal report: See release note.
test leader: See test manager.
test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]
test log: A chronological record of relevant details about the execution of tests.
[IEEE 829]
test logging: The process of recording information about tests executed into a test
log.
test manager: The person responsible for project management of testing activities
and resources, and evaluation of a test object. The individual who directs, controls,
administers, plans and regulates the evaluation of a test object.
test management: The planning, estimating, monitoring and control of test
activities, typically carried out by a test manager.
test management tool: A tool that provides support to the test management and
control part of a test process. It often has several capabilities, such as testware
management, scheduling of tests, the logging of results, progress tracking, incident
management and test reporting.
Test Maturity Model (TMM): A five level staged framework for test process
improvement, related to the Capability Maturity Model (CMM) that describes the key
elements of an effective test process.
test monitoring: A test management task that deals with the activities related to
periodically checking the status of a test project. Reports are prepared that compare
the actuals to that which was planned. See also test management.
test object: The component or system to be tested. See also test item.
test objective: A reason or purpose for designing and executing a test.
test oracle: A source to determine expected results to compare with the actual
result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code. [After Adrion]
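A small sketch of one kind of oracle: an existing, trusted implementation supplies the expected result for each input given to the new implementation. Both functions here are invented stand-ins.

# Existing system used as the oracle (e.g. a legacy implementation).
def legacy_tax(amount):
    return round(amount * 0.2, 2)

# New implementation under test.
def new_tax(amount):
    return round(amount * 20 / 100, 2)

for amount in (0, 9.99, 150, 1234.5):
    expected = legacy_tax(amount)   # the oracle provides the expected result
    actual = new_tax(amount)
    assert actual == expected, "mismatch for %s: %s != %s" % (amount, actual, expected)
print("new implementation agrees with the oracle on all sampled inputs")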
test outcome: See result.
test pass: See pass.
test performance indicator: A high level metric of effectiveness and/or efficiency
used to guide and control progressive test development, e.g. Defect Detection
Percentage (DDP).
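For instance, Defect Detection Percentage is commonly computed as the defects found by a test level divided by the total of those plus the defects that escaped to later levels or the field; the figures below are made up.

# Made-up figures for one release.
found_in_system_test = 90
found_after_release = 10

ddp = 100.0 * found_in_system_test / (found_in_system_test + found_after_release)
print("Defect Detection Percentage: %.1f%%" % ddp)   # 90.0%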
test phase: A distinct set of test activities collected into a manageable phase of a
project, e.g. the execution activities of a test level. [After Gerrard]
test plan: A document describing the scope, approach, resources and schedule of
intended test activities. It identifies amongst others test items, the features to be
tested, the testing tasks, who will do each task, degree of tester independence, the
test environment, the test design techniques and entry and exit criteria to be used,
and the rationale for their choice, and any risks requiring contingency planning. It is
a record of the test planning process. [After IEEE 829]
test planning: The activity of establishing or updating a test plan.
test policy: A high level document describing the principles, approach and major
objectives of the organization regarding testing.
Test Point Analysis (TPA): A formula based test estimation method based on
function point analysis. [TMap]
test procedure: See test procedure specification.
test procedure specification: A document specifying a sequence of actions for the
execution of a test. Also known as test script or manual test script. [After IEEE 829]
test process: The fundamental test process comprises planning, specification,
execution, recording, checking for completion and test closure activities. [After BS
7925/2]
Test Process Improvement (TPI): A continuous framework for test process
improvement that describes the key elements of an effective test process, especially
targeted at system testing and acceptance testing.
test record: See test log.
test recording: See test logging.
test reproducibility: An attribute of a test indicating whether the same results
are produced each time the test is executed.
test report: See test summary report.
test requirement: See test condition.
test run: Execution of a test on a specific version of the test object.
test run log: See test log.
test result: See result.
test scenario: See test procedure specification.
test script: Commonly used to refer to a test procedure specification, especially an
automated one.
test set: See test suite.
test situation: See test condition.
test specification: A document that consists of a test design specification, test case
specification and/or test procedure specification.
test specification technique: See test design technique.
test stage: See test level.
test strategy: A high-level description of the test levels to be performed and the
testing within those levels for an organization or programme (one or more projects).
test suite: A set of several test cases for a component or system under test, where
the post condition of one test is often used as the precondition for the next one.
test summary report: A document summarizing testing activities and results. It
also contains an evaluation of the corresponding test items against exit criteria.
[After IEEE 829]
test target: A set of exit criteria.
test technique: See test design technique.
test tool: A software product that supports one or more test activities, such as
planning and control, specification, building initial files and data, test execution and
test analysis. [TMap] See also CAST.
test type: A group of test activities aimed at testing a component or system focused
on a specific test objective, e.g. functional test, usability test, regression test, etc. A
test type may take place on one or more test levels or test phases. [After TMap]
testability: The capability of the software product to enable modified software to be
tested. [ISO 9126] See also maintainability.
testability review: A detailed check of the test basis to determine whether the test
basis is at an adequate quality level to act as an input document for the test process.
[After TMap]
testable requirements: The degree to which a requirement is stated in terms that
permit establishment of test designs (and subsequently test cases) and execution of
tests to determine whether the requirements have been met. [After IEEE 610]
tester: A skilled professional who is involved in the testing of a component or system.
testing: The process consisting of all life cycle activities, both static and dynamic,
concerned with planning, preparation and evaluation of software products and
related work products to determine that they satisfy specified requirements, to
demonstrate that they are fit for purpose and to detect defects.
testware: Artifacts produced during the test process required to plan, design, and
execute tests, such as documentation, scripts, inputs, expected results, set-up and
clear-up procedures, files, databases, environment, and any additional software or
utilities used in testing. [After Fewster and Graham]
thread testing: A version of component integration testing where the progressive
integration of components follows the implementation of subsets of the
requirements, as opposed to the integration of components by levels of a hierarchy.
time behavior: See performance.
top-down testing: An incremental approach to integration testing where the
component at the top of the component hierarchy is tested first, with lower level
components being simulated by stubs. Tested components are then used to test
lower level components. The process is repeated until the lowest level components
have been tested. See also integration testing.
traceability: The ability to identify related items in documentation and software,
such as requirements with associated tests. See also horizontal traceability, vertical
traceability.
U
understandability: The capability of the software product to enable the user to
understand whether the software is suitable, and how it can be used for particular
tasks and conditions of use. [ISO 9126] See also usability.
unit: See component.
unit testing: See component testing.
unreachable code: Code that cannot be reached and therefore is impossible to
execute.
usability: The capability of the software to be understood, learned, used and
attractive to the user when used under specified conditions. [ISO 9126]
usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]
use case: A sequence of transactions in a dialogue between a user and the system
with a tangible result.
use case testing: A black box test design technique in which test cases are
designed to execute user scenarios.
user acceptance testing: See acceptance testing.
user scenario testing: See use case testing.
user test: A test whereby real-life users are involved to evaluate the usability of a
component or system.
V
V-model: A framework to describe the software development life cycle activities
from requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development life cycle.
validation: Confirmation by examination and through provision of objective
evidence that the requirements for a specific intended use or application have been
fulfilled. [ISO 9000]
variable: An element of storage in a computer that is accessible by a software
program by referring to it by a name.
verification: Confirmation by examination and through provision of objective
evidence that specified requirements have been fulfilled. [ISO 9000]
vertical traceability: The tracing of requirements through the layers of
development documentation to components.
version control: See configuration control.
volume testing: Testing where the system is subjected to large volumes of data.
See also resource-utilization testing.
W
walkthrough: A step-by-step presentation by the author of a document in order to
gather information and to establish a common understanding of its content.
[Freedman and Weinberg, IEEE 1028] See also peer review.
white-box test design technique: Procedure to derive and/or select test cases
based on an analysis of the internal structure of a component or system.
white-box testing: Testing based on an analysis of the internal structure of the
component or system.
Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.