
Types of testing and differences between them

Saturday, July 25, 2009

There are different types of software testing; which type applies depends on many aspects, such as the time of testing, the type of product, and who is testing. Below are the main types of testing with a brief description of each.

Black box testing: Internal system design is not considered in this type of testing. Tests are based on requirements and functionality. Technical skills or programming knowledge are not essential.
White box testing: This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing: Testing of individual software components or modules. It is typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code, and it may require developing test driver modules or test harnesses. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy.
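As an illustrative sketch (the `discount_price` function and its contract are hypothetical, not from any real project), a unit test in Python's built-in unittest module isolates one unit and pins down its expected behavior:

```python
import unittest

# Hypothetical unit under test: the smallest piece of code we can isolate.
def discount_price(price, percent):
    """Apply a percentage discount; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountPriceTest(unittest.TestCase):
    """The strict, written contract that discount_price must satisfy."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

# Run the contract against the unit in isolation.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because the unit is exercised in isolation, a failure here points directly at the module, not at its neighbors.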

Incremental integration testing: A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to be tested separately. Done by programmers or by testers.

Integration testing: Testing of integrated modules to verify their combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems. Types of integration testing are:

1. Big Bang: In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
2. Bottom Up: All the bottom or low-level modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.
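The bottom-up approach can be sketched in a few lines. In this hypothetical example, two low-level modules are tested first; once they pass, the next-level module that integrates them is tested as a combination:

```python
# Bottom-up integration sketch (module names are hypothetical).
# Level 1: low-level modules are tested first, in isolation.

def parse_record(line):
    """Low-level module: parse 'name,amount' into a tuple."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def total_amount(records):
    """Low-level module: sum the amounts of parsed records."""
    return sum(amount for _, amount in records)

# Level 2: once the level-1 modules pass, they are integrated into the
# next-level module, and the combination is tested as a unit.

def report(lines):
    """Next-level module built on top of the tested level-1 modules."""
    records = [parse_record(line) for line in lines]
    return {"count": len(records), "total": total_amount(records)}

# Level-1 tests pass first...
assert parse_record("alice, 10.5") == ("alice", 10.5)
assert total_amount([("a", 1.0), ("b", 2.0)]) == 3.0

# ...then the integrated level-2 behavior is verified.
assert report(["alice, 10.5", "bob, 4.5"]) == {"count": 2, "total": 15.0}
```

If a level-2 test fails here, the level-1 tests that already passed narrow the fault down to the integration itself.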

Functional testing: This type of testing ignores the internal parts and focuses only on whether the output is as per requirements. It is black-box testing geared to the functional requirements of an application.

System testing: The entire system is tested against the requirements. It is black-box testing based on the overall requirements specification, covering all combined parts of the system.
End-to-end testing: Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Smoke testing: Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going into too much depth.
- A smoke test is scripted, using either a written set of tests or an automated test.
- A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
- Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (such as build verification).
- Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
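A scripted smoke test can be sketched as follows; the application object and its checks are hypothetical stand-ins for a real build under test:

```python
# A scripted smoke test: shallow and wide. Each check touches one major
# area of a (hypothetical) application just deeply enough to prove the
# build is stable enough for deeper testing.

def smoke_test(app):
    checks = {
        "starts": lambda: app["started"],
        "login screen loads": lambda: "login" in app["screens"],
        "database reachable": lambda: app["db_ping"](),
        "main menu renders": lambda: len(app["menu_items"]) > 0,
    }
    failures = [name for name, check in checks.items() if not check()]
    # Any failure rejects the whole build: no point in testing deeper.
    return ("build accepted", []) if not failures else ("build rejected", failures)

# A stub build standing in for a real application under test.
good_build = {
    "started": True,
    "screens": ["login", "dashboard"],
    "db_ping": lambda: True,
    "menu_items": ["File", "Edit"],
}
print(smoke_test(good_build))  # ('build accepted', [])
```

Note how every area gets one cursory check and none gets a deep one: that is the "shallow and wide" shape of a smoke test.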

Sanity testing:

- A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
- A sanity test is usually unscripted.
- A sanity test is used to determine whether a small section of the application is still working after a minor change.
- Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
- Sanity testing verifies whether the requirements are met or not, checking all features breadth-first.

Regression testing: Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.
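Automated regression testing can be sketched with a recorded set of input/expected-output pairs that are replayed after every modification; the `slugify` function and its cases here are hypothetical:

```python
# A tiny automated regression suite: recorded input/expected-output pairs
# are replayed after every modification, so earlier behavior is re-verified
# without manual effort. The function under test is hypothetical.

def slugify(title):
    """Turn a title into a URL slug (the module being modified)."""
    return "-".join(title.lower().split())

# Cases recorded in earlier test cycles; these must keep passing
# after each change anywhere in the application.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Types of Testing  ", "types-of-testing"),
    ("ONE", "one"),
]

def run_regression():
    failures = [(inp, expected, slugify(inp))
                for inp, expected in REGRESSION_CASES
                if slugify(inp) != expected]
    return failures  # an empty list means no regressions detected

print(run_regression())  # []
```

The value is in rerunning the same recorded cases after every change; this is exactly the repetitive work that automation tools take over on a real system.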

Acceptance testing: Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.

Load testing: Performance testing that checks system behavior under load. It involves testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing: The system is stressed beyond its specifications to check how and when it fails. It is performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing: A term often used interchangeably with 'stress' and 'load' testing; it checks whether the system meets performance requirements. Different performance and load tools are used for this.

Usability testing: A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing: Tests full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.

Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing: Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and checking whether the system database is safe from external attacks.

Compatibility testing: Testing how well software performs in a particular hardware, software, operating system, or network environment, and in different combinations of the above.

Comparison testing: Comparison of a product's strengths and weaknesses with previous versions or other similar products.

Alpha testing: An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing: Testing typically done by end users or others as a final test before releasing the application for commercial purposes.

Mutation testing: A method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.
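The idea can be sketched in a few lines: plant a deliberate bug (a 'mutant') and check that the original test cases detect it. The functions and test data here are hypothetical:

```python
# Mutation testing sketch: deliberately plant a bug ('mutant') and rerun
# the original test cases. If every mutant makes some test fail, the test
# data is doing its job.

def original_max(a, b):
    return a if a >= b else b

def mutant_max(a, b):        # deliberate bug: comparison flipped
    return a if a <= b else b

TEST_CASES = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def kills_mutant(candidate, cases):
    """True if at least one test case detects (kills) the mutant."""
    return any(candidate(*args) != expected for args, expected in cases)

# The original implementation passes every case...
assert all(original_max(*args) == expected for args, expected in TEST_CASES)
# ...and the test data is strong enough to detect the planted bug.
print(kills_mutant(mutant_max, TEST_CASES))  # True
```

A real mutation tool generates hundreds of such mutants automatically and reruns the whole suite for each one, which is why the technique is computationally expensive.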

Smoke Testing: Software testing done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the software bugs or issues were rectified and no other software bug was introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. If we are moving a build from a staging/testing server to a production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server.

Difference between Smoke & Sanity Software Testing:

Smoke testing is a wide approach in which all areas of the software application are tested without going too deep. A sanity test, however, is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
The test cases for smoke testing of the software can be either manual or automated. A sanity test, however, is generally run without test scripts or test cases.
Smoke testing is done to ensure that the main functions of the software application are working; during smoke testing, we do not go into finer details. Sanity testing, however, is a cursory software testing type; it is done whenever a quick round of software testing can prove that the software application is functioning according to business/functional requirements.
Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to ensure that the requirements are met.


Smoke Testing Vs Sanity Testing - Key Differences

Smoke Testing | Sanity Testing
Performed to ascertain that the critical functionalities of the program are working fine | Done to check that the new functionality works and the bugs have been fixed
The objective is to verify the "stability" of the system in order to proceed with more rigorous testing | The objective is to verify the "rationality" of the system in order to proceed with more rigorous testing
Performed by the developers or testers | Usually performed by testers
Usually documented or scripted | Usually not documented and unscripted
A subset of acceptance testing | A subset of regression testing
Exercises the entire system from end to end | Exercises only a particular component of the entire system
Like a general health check-up | Like a specialized health check-up

Load Testing is the process of subjecting the system to load while measuring something such as performance or reliability. The term is fairly vague; most often it means measuring the performance of a system while subjecting it to an increasing load, up to the expected maximum the system should handle.

In other words, load testing is done by loading the system with 'n' users, usually by using automated tools to introduce virtual users and validate the system under load. The load can be introduced all at once, applied at regular intervals, or applied randomly.
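A minimal sketch of this idea, using Python threads as virtual users (the `handle_request` function is a hypothetical stand-in for a real call to the system under test):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Load-test sketch: 'n' virtual users hit the system concurrently and we
# measure response times as the load grows.

def handle_request(user_id):
    """Hypothetical call to the system under test, timed per request."""
    start = time.perf_counter()
    time.sleep(0.01)              # stand-in for real server work
    return time.perf_counter() - start

def run_load(virtual_users):
    """Apply load with the given number of concurrent virtual users."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        times = list(pool.map(handle_request, range(virtual_users)))
    return max(times)             # worst response time under this load

# Increase the load step by step, watching where response time degrades.
for users in (1, 10, 50):
    print(f"{users:3d} virtual users -> worst response {run_load(users):.4f}s")
```

Real tools (e.g. load-testing frameworks) do the same thing at far larger scale, distributing virtual users across machines and recording the full response-time distribution rather than just the worst case.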

Stress Testing is where a system is tested to the breaking point to ensure that it fails in a graceful way, also known as graceful degradation. The system is stressed by subjecting it to conditions outside its specified limits while denying it the resources required to process that load; for example, memory or disk space could be removed from the system. The purpose of the test is to find bugs in the way the system degrades, for example, does it place an order without taking payment?

Alpha Test | Beta Test
For a software application | For a software product
By customer-site people | By customer-site-like people
At the development site | In the customer-side environment
Virtual environment | Real-time environment

Retesting is a general term used for verifying a product again; it might not have any specific objective. Verifying an issue again is called retesting. Regression testing is a methodology: it is the retesting of a previously tested program that has undergone modifications, so as to ensure that no new bugs have been introduced by those modifications.

Software Testing - Requirements Traceability Matrix

What is the need for a Requirements Traceability Matrix in Software Testing?

An organization's need for automation leads it to commission custom-built software. The client who ordered the product specifies his requirements to the development team, and the process of software development gets started.

In addition to the requirements specified by the client, the development team may also propose various value-added suggestions that could be added to the software. But keeping track of all the requirements specified in the requirement document, and checking whether all of them have been met by the end product, is a cumbersome and laborious process.

The remedy for this problem is the Requirements Traceability Matrix.

The Traceability Matrix is used in all software development life cycle phases:

1. Risk Analysis phase
2. Requirements Analysis and Specification phase
3. Design Analysis and Specification phase
4. Source Code Analysis, Unit Testing & Integration Testing phase
5. Validation - System Testing, Functional Testing phase

In this topic we will discuss:

- What is a Traceability Matrix from a software testing perspective? (Point 5)
- Types of Traceability Matrix
- Disadvantages of not using a Traceability Matrix
- Benefits of using a Traceability Matrix in testing
- Step-by-step process of creating an effective Traceability Matrix from requirements, with sample formats from a basic version to an advanced version

In simple words - a requirements traceability matrix is a document that traces and maps user requirements (requirement IDs from the requirement specification document) to test case IDs. Its purpose is to make sure that all the requirements are covered in test cases, so that no functionality is missed while testing.

This document is prepared to satisfy the client that the coverage is complete end to end. It consists of the Requirement/Baseline doc reference number, the Test case/Condition, and the Defect/Bug ID. Using this document, one can track a requirement based on its defect ID.

Note – We can turn it into a "Test case coverage checklist" document by adding a few more columns; we will discuss this in later posts.

Types of Traceability Matrix:

- Forward Traceability - mapping of requirements to test cases
- Backward Traceability - mapping of test cases to requirements
- Bi-Directional Traceability - a good traceability matrix has references from test cases to the base documentation and vice versa
Why is Bi-Directional Traceability required?

Bi-directional traceability contains both forward and backward traceability. Through the backward traceability matrix, we can see which requirements each test case maps to.

This helps us identify test cases that do not trace to any coverage item, in which case the test case is not required and should be removed (or perhaps a specification, such as a requirement or two, should be added!). This backward traceability is also very helpful if you want to identify how many requirements a particular test case covers.

Through forward traceability, we can check which test cases cover each requirement, and whether every requirement is covered by the test cases.

The Forward Traceability Matrix ensures we are building the right product.

The Backward Traceability Matrix ensures we are building the product right.

The traceability matrix answers the following questions for any software project:

- How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer's needs?
- How can I certify that the final software product meets the customer's needs?

Only with a traceability matrix can we make sure the requirements are captured in the test cases.

Disadvantages of not using a Traceability Matrix (some possible, observed impacts):

No traceability, or incomplete traceability, results in:

1. Poor or unknown test coverage, and more defects found in production.

2. Bugs missed in earlier test cycles that surface in later test cycles, followed by a lot of discussions and arguments with other teams and managers before release.

3. Difficult project planning and tracking, and misunderstandings between different teams over project dependencies, delays, etc.

Benefits of using a Traceability Matrix

- Makes it obvious to the client that the software is being developed as per the requirements.
- Ensures that all requirements are included in the test cases.
- Ensures that developers are not creating features that no one has requested.
- Makes it easy to identify missing functionality.
- If there is a change request for a requirement, we can easily find out which test cases need to be updated.
- Helps avoid "extra" functionality that was never specified in the design specification, which would waste manpower, time, and effort.

Steps to create a Traceability Matrix:

1. Use Excel to create the Traceability Matrix.

2. Define the following columns: Base Specification/Requirement ID (if any), Requirement ID, Requirement description, TC 001, TC 002, TC 003, and so on.

3. Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are: use cases (all the flows are captured), error messages, business rules, functional rules, SRS, FRS, and so on.

4. Identify all the test scenarios and test flows.

5. Map Requirement IDs to the test cases. Assume (as per the table below) that test case "TC 001" is one flow/scenario. In this scenario, requirements SR-1.1 and SR-1.2 are covered, so mark "x" for these requirements.

From the table below you can conclude:

Requirement SR-1.1 is covered in TC 001.

Requirement SR-1.2 is covered in TC 001.

Requirement SR-1.5 is covered in TC 001 and TC 003 (now it is easy to identify which test cases need to be updated if there is any change request).

TC 001 covers SR-1.1 and SR-1.2 (we can easily identify which requirements a test case covers).

TC 002 covers SR-1.3, and so on.

Requirement ID | Requirement description | TC 001 | TC 002 | TC 003
SR-1.1 | User should be able to do this | x | |
SR-1.2 | User should be able to do that | x | |
SR-1.3 | On clicking this, following message should appear | | x |
SR-1.4 | | | x |
SR-1.5 | | x | | x
SR-1.6 | | | | x
SR-1.7 | | | | x
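The mapping in the sample table can also be kept in code and checked automatically. This sketch uses the requirement and test-case IDs from the example above (the exact test cases covering SR-1.4, SR-1.6, and SR-1.7 are illustrative assumptions):

```python
# The sample traceability matrix as a simple mapping: each test case
# lists the requirement IDs it covers (backward traceability).
matrix = {
    "TC 001": ["SR-1.1", "SR-1.2", "SR-1.5"],
    "TC 002": ["SR-1.3", "SR-1.4"],
    "TC 003": ["SR-1.5", "SR-1.6", "SR-1.7"],
}
requirements = ["SR-1.1", "SR-1.2", "SR-1.3", "SR-1.4",
                "SR-1.5", "SR-1.6", "SR-1.7"]

# Forward traceability: which test cases cover each requirement?
forward = {req: [tc for tc, reqs in matrix.items() if req in reqs]
           for req in requirements}

# Any requirement with no covering test case is a coverage gap.
uncovered = [req for req, tcs in forward.items() if not tcs]

print(forward["SR-1.5"])  # ['TC 001', 'TC 003']
print(uncovered)          # []
```

This is exactly the change-request lookup described above: if SR-1.5 changes, `forward["SR-1.5"]` immediately lists the test cases that need updating.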
This is a very basic traceability matrix format. You can make it more effective by adding the following columns:

ID, Assoc ID, Technical Assumption(s) and/or Customer Need(s), Functional Requirement, Status, Architectural/Design Document, Technical Specification, System Component(s), Software Module(s), Test Case Number, Tested In, Implemented In, Verification, Additional Comments.
