
Differences between the different levels of tests
What differences are there between the different levels of tests? The focus
shifts from early component testing to late acceptance testing. It is important
that everybody understands this.
There are generally four recognized levels of tests: unit/component testing,
integration testing, system testing, and acceptance testing. Tests are frequently
grouped by where they are added in the software development process, or by
the level of specificity of the test.

Unit/component testing
The most basic type of testing is unit, or component, testing.

Unit testing aims to verify each part of the software by isolating it and then
performing tests to demonstrate that each individual component fulfils its
requirements and delivers the desired functionality.
This type of testing is performed at the earliest stages of the development
process, and in many cases it is executed by the developers themselves before
handing the software over to the testing team.

The advantage of detecting errors in the software early is that the team
minimises software development risks, as well as the time and money wasted in
having to go back and undo fundamental problems in the program once it is
nearly completed.
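For illustration, here is a minimal unit-test sketch in Python. The `discount_price` function and its expected values are hypothetical, invented for this example; the point is the pattern of isolating one unit and asserting on each individual behaviour.

```python
# Hypothetical unit under test: applies a percentage discount to a price.
def discount_price(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: each function isolates and checks a single behaviour.
def test_applies_discount():
    assert discount_price(100.0, 25) == 75.0

def test_zero_discount_returns_original_price():
    assert discount_price(80.0, 0) == 80.0

def test_rejects_invalid_percent():
    try:
        discount_price(50.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the unit correctly rejects an out-of-range percent
```

Run with a test runner such as pytest; a failing assertion pinpoints the exact behaviour that is broken, before the unit is ever combined with others.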

Integration testing
Integration testing aims to test different parts of the system in combination in
order to assess whether they work correctly together. By testing the units in
groups, any faults in the way they interact can be identified.

There are many ways to test how different components of the system function
at their interfaces; testers can adopt either a bottom-up or a top-down
integration method.

In bottom-up integration testing, testing builds on the results of unit testing by
testing higher-level combinations of units (called modules) in successively more
complex scenarios.

It is recommended that testers start with this approach before applying the
top-down approach, which tests higher-level modules first and the simpler
ones later.
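A minimal sketch of the bottom-up idea, in Python: two hypothetical low-level units (invented for this example) are combined into a higher-level module, and the integration test exercises them together through the module's interface.

```python
# Low-level unit 1 (assumed already covered by its own unit tests):
# parse a decimal price string such as "19.99" into integer cents.
def parse_amount(text):
    whole, _, frac = text.partition(".")
    return int(whole) * 100 + int(frac.ljust(2, "0")[:2])

# Low-level unit 2: add tax, rounding to the nearest cent.
def apply_tax(cents, rate):
    return round(cents * (1 + rate))

# Higher-level module built on top of the two units.
def total_with_tax(text, rate):
    return apply_tax(parse_amount(text), rate)

# Bottom-up integration test: faults in how the units interact
# (e.g. cents vs. dollars confusion) surface here, not in the unit tests.
def test_total_with_tax():
    assert total_with_tax("19.99", 0.10) == 2199
```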

System testing
The next level of testing is system testing. As the name implies, all the
components of the software are tested as a whole in order to ensure that the
overall product meets the requirements specified.

System testing is a very important step as the software is almost ready to ship
and it can be tested in an environment which is very close to that which the
user will experience once it is deployed.

System testing enables testers to ensure that the product meets business
requirements, as well as to determine that it runs smoothly within its operating
environment. This type of testing is typically performed by a specialized testing
team.

Acceptance testing
Finally, acceptance testing is the level in the software testing process where a
product is given the green light or not. The aim of this type of testing is to
evaluate whether the system complies with the end-user requirements and
whether it is ready for deployment.

The testing team will utilise a variety of methods, such as pre-written scenarios
and test cases, to test the software, and use the results obtained from these tools
to find ways in which the system can be improved.

The scope of acceptance testing ranges from simply finding spelling mistakes
and cosmetic errors, to uncovering bugs that could cause a major error in the
application.

By performing acceptance tests, the testing team can find out how the product
will perform when it is installed on the user’s system. There are also various
legal and contractual reasons why acceptance testing has to be carried out.
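As an illustration, a pre-written scenario can be scripted as an acceptance-style test that follows the user's steps rather than the code's internal structure. The `Cart` class below is a hypothetical stand-in for the real system under test.

```python
# Hypothetical system under test: a minimal shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Acceptance test as a pre-written scenario: Given / When / Then steps
# mirror what an end user actually does with the product.
def test_customer_buys_two_items():
    cart = Cart()                 # Given an empty cart
    cart.add("book", 12.50)       # When the customer adds two items
    cart.add("pen", 1.50)
    assert cart.total() == 14.00  # Then the total matches the prices
```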

The testing sequence

These four testing types cannot be applied haphazardly during development.


There is a logical sequence that should be adhered to in order to minimise the
risk of bugs cropping up just before the launch date.

Any testing team should know that testing is important at every phase of the
development cycle.

By progressively testing the simpler components of the system and moving on
to the bigger, more complex groupings, the testers can rest assured they are
thoroughly examining the software in the most efficient way possible.

The four levels of tests shouldn’t only be seen as a hierarchy that extends from
simple to complex, but also as a sequence that spans the whole development
process from the early to the later stages. Note however that later does not
imply that acceptance testing is done only after say 6 months of development
work. In a more agile approach, acceptance testing can be carried out as often
as every 2-3 weeks, as part of the sprint demo. In an organization working
more traditionally it is quite typical to have 3-4 releases per year, each
following the cycle described here.

Conclusion
Testing early and testing frequently is well worth the effort.

By adopting an attitude of constant alertness and scrutiny in all your projects,
as well as a systematic approach to testing, the tester can pinpoint any faults in
the system sooner, which translates into less time and money wasted later on.

Detecting software errors early is important because more effort is needed to fix
bugs when the system is nearing launch, and, due to the interconnected nature of
the components in the system, one small bug in a particular component hidden
deep within layers of code can have an effect that is magnified several times
over at the system level.

A list of 100 Software Testing Types along with definitions. A must-read
for any QA professional.

1. Acceptance Testing: Formal testing conducted to determine whether or not


a system satisfies its acceptance criteria and to enable the customer to
determine whether or not to accept the system. It is usually performed by
the customer.
2. Accessibility Testing: Type of testing which determines the usability of a
product to the people having disabilities (deaf, blind, mentally disabled
etc). The evaluation process is conducted by persons having disabilities.
3. Active Testing: Type of testing that consists of introducing test data and
analyzing the execution results. It is usually conducted by the testing team.
4. Agile Testing: Software testing practice that follows the principles of the
agile manifesto, emphasizing testing from the perspective of customers
who will utilize the system. It is usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform
in the future. The evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation -
the tester tries to 'break' the system by randomly trying the system's
functionality. It is performed by the testing team.
7. Alpha Testing: Type of testing a software product or system conducted at
the developer's site. Usually it is performed by the end users.
8. Assertion Testing: Type of testing that consists of verifying whether the
conditions confirm the product requirements. It is performed by the testing team.
9. API Testing: Testing technique similar to Unit Testing in that it targets the
code level. API Testing differs from Unit Testing in that it is typically a QA
task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible
discrete combinations of input parameters. It is performed by the testing
teams.
11. Automated Testing: Testing technique that uses Automation Testing tools
to control the environment set-up, test execution and results reporting. It
is performed by a computer and is used inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity
measure of a procedural design and uses this as a guide for defining a basic
set of execution paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the
behavior of the developed software with older versions of the test
environment. It is performed by the testing team.
14. Beta Testing: Final testing before releasing the application for commercial
purposes. It is typically done by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of
programs and data designed to evaluate the performance of computer
hardware and software in a given configuration. It is performed by testing
teams.
16. Big Bang Integration Testing: Testing technique which integrates individual
program modules only when everything is ready. It is performed by the
testing teams.
17. Binary Portability Testing: Technique that tests an executable application
for portability across system platforms and environments, usually for
conformance to an ABI specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are
designed to include representatives of boundary values. It is performed by
the QA testing teams.
19. Bottom Up Integration Testing: In bottom-up integration testing, modules
at the lowest level are developed first, and other modules which go
towards the 'main' program are integrated and tested one at a time. It is
usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program
source code are tested at least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a
product but does not test features in detail. It is performed by testing
teams.
22. Black box Testing: A method of software testing that verifies the
functionality of an application without having specific knowledge of the
application's code/internal structure. Tests are based on requirements and
functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such
as xUnit) that allow the execution of unit tests to determine whether
various sections of the code are acting as expected under various
circumstances. It is performed by the development teams.
24. Compatibility Testing: Testing technique that validates how well software
performs in a particular hardware/software/operating system/network
environment. It is performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product's
strengths and weaknesses with previous versions or other similar
products. Can be performed by testers, developers, product managers or
product owners.
26. Component Testing: Testing technique similar to unit testing but with a
higher level of integration - testing is done in the context of the application
instead of just directly testing a specific method. Can be performed by
testing or development teams.
27. Configuration Testing: Testing technique which determines the minimal and
optimal configuration of hardware and software, and the effect of adding
or modifying resources such as memory, disk drives and CPU. Usually it is
performed by Performance Testing engineers.
28. Condition Coverage Testing: Type of software testing where each condition
is executed by making it true and false, each way at least once. It
is typically done by the Automation Testing teams.
29. Compliance Testing: Type of testing which checks whether the system was
developed in accordance with standards, procedures and guidelines. It is
usually performed by external companies which offer the "Certified OGC
Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the
effects of accessing the same application code, module or database records.
It is usually done by performance engineers.
31. Conformance Testing: The process of testing that an implementation
conforms to the specification on which it is based. It is usually performed
by testing teams.
32. Context Driven Testing: An Agile Testing technique that advocates
continuous and creative evaluation of testing opportunities in light of the
potential information revealed and the value of that information to the
organization at a specific moment. It is usually performed by Agile testing
teams.
33. Conversion Testing: Testing of programs or procedures used to convert
data from existing systems for use in replacement systems. It is usually
performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each
condition/decision is executed by setting it to true/false. It is typically
done by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the
specimen's failure, in order to understand a specimen's structural
performance or material behavior under different loads. It is usually
performed by QA teams.
36. Dependency Testing: Testing type which examines an application's
requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality. It is usually performed by testing
teams.

37. Dynamic Testing: Term used in software engineering to describe the
testing of the dynamic behavior of code. It is typically performed by testing
teams.
38. Domain Testing: White box testing technique which checks that the
program accepts only valid input. It is usually done by software
development teams and occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability
of the system to properly process erroneous transactions. It is usually
performed by the testing teams.
40. End-to-end Testing: Similar to system testing; involves testing of a
complete application environment in a situation that mimics real-world
use, such as interacting with a database, using network communications,
or interacting with other hardware, applications, or systems if appropriate.
It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or
other problems that may occur with prolonged execution. It is usually
performed by performance engineers.
42. Exploratory Testing: Black box testing technique performed without
planning and documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides
the input data of a software unit into partitions of data from which test
cases can be derived. It is usually performed by the QA teams.
44. Fault injection Testing: Element of a comprehensive test strategy that
enables the tester to concentrate on the manner in which the application
under test is able to handle exceptions. It is performed by QA teams.
45. Formal verification Testing: The act of proving or disproving the
correctness of the intended algorithms underlying a system with respect to
a certain formal specification or property, using formal methods of
mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the
specifications of the software component under test. It is performed by
testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected,
or random data to the inputs of a program - a special area of mutation
testing. Fuzz testing is performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavy
testing of one particular module. It is performed by quality assurance
teams, usually when running full testing.
49. Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but
using some knowledge of its internal workings. It can be performed by
either development or testing teams.
50. Glass box Testing: Similar to white box testing, based on knowledge of the
internal logic of an application's code. It is performed by development
teams.
51. GUI software Testing: The process of testing a product that uses a graphical
user interface, to ensure it meets its written specifications. This is
normally done by the testing teams.
52. Globalization Testing: Testing method that checks proper functionality of
the product with any of the culture/locale settings using every type of
international input possible. It is performed by the testing team.
53. Hybrid Integration Testing: Testing technique which combines top-down
and bottom-up integration techniques in order to leverage the benefits of
both kinds of testing. It is usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual
software modules are combined and tested as a group. It is usually
conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or
components pass data and control correctly to one another. It is usually
performed by both testing and development teams.
56. Install/uninstall Testing: Quality assurance work that focuses on what
customers will need to do to install and set up the new software
successfully. It may involve full, partial or upgrade install/uninstall
processes and is typically done by the software testing engineer in
conjunction with the configuration manager.
57. Internationalization Testing: The process which ensures that the product's
functionality is not broken and all the messages are properly externalized
when it is used in different languages and locales. It is usually performed
by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the
application to ensure that the interconnections between applications
function correctly. It is usually done by the testing teams.
59. Keyword-driven Testing: Also known as table-driven testing or action-word
testing; a software testing methodology for automated testing that
separates the test creation process into two distinct stages: a Planning
Stage and an Implementation Stage. It can be used by either manual or
automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device
and measures its response. It is usually conducted by the performance
engineers.

61. Localization Testing: Part of the software testing process focused on
adapting a globalized application to a particular culture/locale. It is
normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops.
It is performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are
designed and reviewed by the team before executing them. It is done
by Manual Testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the
functions performed by the people while preparing the data and using
the data from the automated system. It is conducted by testing teams.
65. Model-Based Testing: The application of Model based design for designing
and executing the necessary artifacts to perform software testing. It is
usually performed by testing teams.
66. Mutation Testing: Method of software testing which involves modifying a
program's source code or byte code in small ways in order to test sections
of the code that are seldom or never accessed during normal test
execution. It is normally conducted by testers.
67. Modularity-driven Testing: Software testing technique which requires the
creation of small, independent scripts that represent modules, sections,
and functions of the application under test. It is usually performed by the
testing team.
68. Non-functional Testing: Testing technique which focuses on testing a
software application for its non-functional requirements. Can be
conducted by the performance engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail" - a testing method where the
tests' aim is to show that a component or system does not work. It is
performed by manual or automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or
component in its operational environment. Usually it is performed by
testing teams.
71. Orthogonal array Testing: Systematic, statistical way of testing which can
be applied in user interface testing, system testing, Regression Testing,
configuration testing and Performance Testing. It is performed by the
testing team.
72. Pair Testing: Software development technique in which two team
members work together at one keyboard to test the software application.
One does the testing and the other analyzes or reviews the testing. This
can be done between a tester and a developer or business analyst, or
between two testers, with both participants taking turns at driving the
keyboard.
73. Passive Testing: Testing technique that consists of monitoring the results of
a running system without introducing any special test data. It is performed
by the testing team.
74. Parallel Testing: Testing technique whose purpose is to ensure that a
new application which has replaced its older version has been installed
and is running correctly. It is conducted by the testing team.
75. Path Testing: Typical white box testing with the goal of satisfying
coverage criteria for each logical path through the program. It is usually
performed by the development team.
76. Penetration Testing: Testing method which evaluates the security of a
computer system or network by simulating an attack from a malicious
source. Usually these tests are conducted by specialized penetration testing
companies.
77. Performance Testing: Testing conducted to evaluate the compliance of a
system or component with specified performance requirements. It is
usually conducted by the performance engineer.
78. Qualification Testing: Testing against the specifications of the previous
release, usually conducted by the developer for the consumer, to
demonstrate that the software meets its specified requirements.
79. Ramp Testing: Type of testing that consists of raising an input signal
continuously until the system breaks down. It may be conducted by the
testing team or the performance engineer.
80. Regression Testing: Type of software testing that seeks to uncover
software errors after changes to the program (e.g. bug fixes or new
functionality) have been made, by retesting the program. It is performed
by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system
recovers from crashes, hardware failures, or other catastrophic problems.
It is performed by the testing teams.
82. Requirements Testing: Testing technique which validates that the
requirements are correct, complete, unambiguous, and logically consistent,
and allows designing a necessary and sufficient set of test cases from those
requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system
protects data and maintains functionality as intended. It can be performed
by testing teams or by specialized security-testing companies.

84. Sanity Testing: Testing technique which determines if a new software
version is performing well enough to accept it for a major testing effort. It
is performed by the testing teams.
85. Scenario Testing: Testing activity that uses scenarios based on a
hypothetical story to help a person think through a complex problem or
system for a testing environment. It is performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests which tests a
software application for its capability to scale up - be it the user load
supported, the number of transactions, the data volume etc. It is
conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each
statement in a program is executed at least once during program testing. It
is usually performed by the development team.
88. Static Testing: A form of software testing where the software isn't actually
executed; it checks mainly for the sanity of the code, algorithm, or
documents. It is used by the developer who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an
application will crash. It is usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic
components of a software system to ensure that they work properly.
Typically, smoke testing is conducted by the testing team, immediately
after a software build is made.
91. Storage Testing: Testing type that verifies that the program under test
stores data files in the correct directories and that it reserves sufficient
space to prevent unexpected termination resulting from lack of space. It is
usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component
at or beyond the limits of its specified requirements. It is usually
conducted by the performance engineer.
93. Structural Testing: White box testing technique which takes into account
the internal structure of a system or component and ensures that each
program statement performs its intended function. It is usually performed
by the software developers.
94. System Testing: The process of testing an integrated hardware and
software system to verify that the system meets its specified requirements.
It is conducted by the testing teams in both the development and target
environments.
95. System integration Testing: Testing process that exercises a software
system's coexistence with others. It is usually performed by the testing
teams.

96. Top Down Integration Testing: Testing technique that involves starting at
the top of a system hierarchy at the user interface and using stubs to test
from the top down until the entire system has been implemented. It is
conducted by the testing teams.
97. Thread Testing: A variation of the top-down testing technique where the
progressive integration of components follows the implementation of
subsets of the requirements. It is usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies that assets created with
older versions can be used properly and that users' learning is not
challenged. It is performed by the testing teams.
99. Unit Testing: Software verification and validation method in which a
programmer tests whether individual units of source code are fit for use. It
is usually conducted by the development team.
100. User Interface Testing: Type of testing which is performed to check
how user-friendly the application is. It is performed by testing teams.
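A sketch of Boundary Value Testing (item 18 above) in Python. The eligibility rule and its 18-65 range are hypothetical, invented for the example; the pattern is to place test values at, just below, and just above each boundary, where off-by-one defects tend to hide.

```python
# Hypothetical unit under test: an age field that accepts 18..65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary value tests: representatives on each side of both boundaries.
def test_lower_boundary():
    assert is_eligible(17) is False  # just below the boundary
    assert is_eligible(18) is True   # on the boundary

def test_upper_boundary():
    assert is_eligible(65) is True   # on the boundary
    assert is_eligible(66) is False  # just above the boundary
```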
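Condition Coverage Testing (item 28 above) drives each atomic condition both true and false at least once. A minimal sketch, with a hypothetical free-shipping rule invented for the example:

```python
# Unit with a compound condition: free shipping for members OR orders >= 50.
def free_shipping(is_member, order_total):
    return is_member or order_total >= 50

# Each atomic condition is made true and false at least once across
# three cases, even though there are four possible input combinations.
cases = [
    (True,  10, True),   # is_member True,  order_total >= 50 False
    (False, 60, True),   # is_member False, order_total >= 50 True
    (False, 10, False),  # both conditions False
]
for is_member, order_total, expected in cases:
    assert free_shipping(is_member, order_total) == expected
```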
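Equivalence Partitioning Testing (item 43 above) can be sketched like this; the grading rule and its cut-offs are hypothetical. The input domain splits into three partitions, and one representative value stands in for each whole class:

```python
# Hypothetical unit under test: classifies an exam score.
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 70 else "fail"

# One representative per partition: invalid, fail (0..69), pass (70..100).
def test_one_value_per_partition():
    assert grade(85) == "pass"   # representative of 70..100
    assert grade(40) == "fail"   # representative of 0..69
    try:
        grade(120)               # representative of the invalid partition
        assert False, "expected ValueError"
    except ValueError:
        pass
```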
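Fuzz Testing (item 47 above) can be sketched as a loop feeding random, often malformed, strings into a unit. The `parse_pair` parser below is hypothetical; the check is that it either succeeds or fails with its one documented exception, and any other crash would be a defect worth investigating.

```python
import random
import string

# Hypothetical unit under test: parses a single "key=value" pair.
def parse_pair(text):
    key, sep, value = text.partition("=")
    if not sep or not key:
        raise ValueError("malformed pair")
    return key, value

# Fuzz loop: random inputs, fixed seed so any failure is reproducible.
def fuzz(iterations=1000, seed=42):
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "=& \t"
    for _ in range(iterations):
        length = rng.randint(0, 20)
        text = "".join(rng.choice(alphabet) for _ in range(length))
        try:
            parse_pair(text)
        except ValueError:
            pass  # documented failure mode: acceptable
        # any other exception type would propagate and flag a bug

fuzz()
```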
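A minimal sketch of Keyword-driven Testing (item 59 above): the planning stage produces a table of keyword rows, and the implementation stage maps each keyword onto code. All names and the URL here are hypothetical, invented for the example.

```python
# Implementation stage: each keyword is backed by a small action function.
def open_page(state, url):
    state["page"] = url

def type_text(state, field, text):
    state[field] = text

def check_value(state, field, expected):
    assert state[field] == expected

KEYWORDS = {"open": open_page, "type": type_text, "check": check_value}

# Planning stage: the test case is pure data, editable without
# touching the action code above.
table = [
    ("open", "https://example.test/login"),  # hypothetical URL
    ("type", "username", "alice"),
    ("check", "username", "alice"),
]

def run(table):
    state = {}
    for keyword, *args in table:
        KEYWORDS[keyword](state, *args)
    return state

result = run(table)
```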
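Negative Testing (item 69 above), the "test to fail" approach, can be sketched as follows; the `withdraw` function and its rules are hypothetical. Each test deliberately supplies invalid input and passes only if the unit rejects it:

```python
# Hypothetical unit under test: withdraws an amount from a balance.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Negative tests: the aim is to show the unit refuses bad input
# rather than silently returning a wrong result.
def test_rejects_overdraft():
    try:
        withdraw(100, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_rejects_non_positive_amount():
    try:
        withdraw(100, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```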

Difference between defect, error, bug, failure and fault: “A mistake in coding is
called an error; an error found by a tester is called a defect; a defect accepted by
the development team is called a bug; a build that does not meet the requirements
is a failure.”

Bug:
An error found in the development environment, before the product is shipped to
the customer. Simply put, a bug is an error found BEFORE the application goes
into production: a programming error that causes a program to work poorly,
produce incorrect results, or crash; an error in software or hardware that causes
a program to malfunction.

Defect:
The difference between the expected and the actual result in the context of
testing; a deviation from the customer requirement. It is an error found in the
product itself after it is shipped to the customer, i.e. AFTER the application goes
into production.

Error: A mistake made in the code, arising for example from wrong logic, a faulty
loop, or a syntax mistake, which changes the intended functionality of the
program.

Fault: A wrong or mistaken step, process or data definition in a computer program
which causes the program to perform in an unintended or unanticipated manner.

Difference between a defect and a failure -
When a defect reaches the end customer it is called a failure; if the defect is
detected internally and resolved, it is called a defect.

Software Development Life Cycle (SDLC) | Software Testing Material
Last Updated on August 9, 2018 by Rajkumar

Software Development Life Cycle


Software Development Life Cycle (SDLC) aims to produce a
high-quality system that meets or exceeds customer expectations,
works effectively and efficiently in the current and planned
information technology infrastructure, and is inexpensive to
maintain and cost-effective to enhance.
Detailed Explanation:
SDLC is the process followed in software projects. Each phase of
the SDLC produces deliverables required by the next phase in the life
cycle. Requirements are translated into design, and code is produced
according to the design. The developed product is tested against the
requirements, and deployment follows once testing is completed. The
goal throughout is a high-quality system that meets or exceeds
customer expectations, works effectively and efficiently in the
current and planned information technology infrastructure, and is
inexpensive to maintain and cost-effective to enhance.

A typical Software Development Life Cycle (SDLC) consists of the
following phases: Requirement, Analysis, Design, Development,
Testing, and Deployment & Maintenance.

Requirement Phase:
Requirement gathering and analysis is the most important phase in
the software development life cycle. The Business Analyst collects the
requirements from the customer/client as per the client's business
needs and documents them in a Business Requirement Specification
(the document name varies depending on the organization; examples
are Customer Requirement Specification (CRS), Business Specification
(BS), etc.) and provides the document to the development team.

Analysis Phase:
Once requirement gathering and analysis is done, the next step is to
define and document the product requirements and get them approved
by the customer. This is done through the SRS (Software Requirement
Specification) document. The SRS consists of all the product
requirements to be designed and developed during the project life
cycle. Key people involved in this phase are the Project Manager,
Business Analyst and senior members of the team. The outcome of
this phase is the Software Requirement Specification.

Design Phase:
It has two steps:
HLD - High Level Design: it gives the architecture of the software
product to be developed and is done by architects and senior
developers.
LLD - Low Level Design: it is done by senior developers. It describes
how each and every feature in the product should work and how
every component should work. Here, only the design is produced,
not the code.
The outcome of this phase is the High Level Document and the Low
Level Document, which work as inputs to the next phase.

Development Phase:
Developers of all levels (senior, junior, fresher) are involved in this
phase. This is the phase where we start building the software and
writing the code for the product. The outcome of this phase is the
Source Code Document (SCD) and the developed product.

Testing Phase:
When the software is ready, it is sent to the testing department, where
the test team tests it thoroughly for different defects. They test the
software either manually or using automated testing tools, depending
on the process defined in the STLC (Software Testing Life Cycle), and
ensure that each and every component of the software works fine.
Once QA makes sure that the software is error-free, it goes to the next
stage, which is implementation. The outcome of this phase is the
quality product and the testing artifacts.

Deployment & Maintenance Phase:
After successful testing, the product is delivered/deployed to the
customer for their use. Deployment is done by the deployment/
implementation engineers. Once customers start using the developed
system, the actual problems come up and need to be solved from time
to time. Fixing the issues found by the customer comes under the
maintenance phase. 100% testing is not possible, because the way
testers test the product is different from the way customers use it.
Maintenance should be done as per the SLA (Service Level Agreement).

Types of Software Development Life Cycle Models:

Some of the SDLC models are as follows:

1. Waterfall Model
2. Spiral Model
3. V Model
4. Prototype Model
5. Agile Model

Other related models are the Rapid Application Development (RAD)
Model, the Rational Unified Process (RUP), the Hybrid Model, etc.

What is Software Testing Life Cycle (STLC)
Last Updated on January 21, 2018 by Rajkumar

Software Testing Life Cycle:

The Software Testing Life Cycle (STLC) identifies what test activities
to carry out and when to accomplish them. Even though testing
differs between organizations, there is a common testing life cycle.


The different phases of the Software Testing Life Cycle are:

1. Requirement Analysis
2. Test Planning
3. Test Design
4. Test Environment Setup
5. Test Execution
6. Test Closure

Every phase of the STLC has definite Entry and Exit Criteria.

Requirement Analysis:
The entry criterion for this phase is the BRS (Business Requirement
Specification) document. During this phase, the test team studies
and analyzes the requirements from a testing perspective to
identify whether they are testable. If any requirement is not
testable, the test team can communicate with the various
stakeholders (Client, Business Analyst, Technical Leads, System
Architects, etc.) during this phase so that a mitigation strategy can
be planned.

Entry Criteria: BRS (Business Requirement Specification)
Deliverables: List of all testable requirements, Automation
feasibility report (if applicable)

Test Planning:
Test planning is the first step of the testing process. In this phase,
the Test Manager/Test Lead determines the effort and cost
estimates for the entire project. The Test Plan is prepared based on
the requirement analysis. Activities like resource planning,
determining roles and responsibilities, tool selection (if automation
is used), and training requirements are carried out in this phase.
The deliverables of this phase are the Test Plan and Effort
Estimation documents.



Entry Criteria: Requirements Documents
Deliverables: Test Strategy, Test Plan, and Test Effort estimation
document.



Test Design:
The test team starts the test case development activity in this
phase. The team prepares test cases, test scripts (if automation is
used), and test data. Once the test cases are ready, they are
reviewed by peer members or the team lead. The test team also
prepares the Requirement Traceability Matrix (RTM), which traces
the requirements to the test cases that are needed to verify
whether the requirements are fulfilled. The deliverables of this
phase are Test Cases, Test Scripts, Test Data, and the
Requirements Traceability Matrix.

Entry Criteria: Requirements Documents (updated version for
unclear or missing requirements)
Deliverables: Test Cases, Test Scripts (if automation), Test Data.



Test Environment Setup:
This phase can start in parallel with the Test Design phase. The
test environment is set up based on the hardware and software
requirement list. In some cases, the test team may not be involved
in this phase; the development team or the customer provides the
test environment. Meanwhile, the test team should prepare smoke
test cases to check the readiness of the given test environment.

Entry Criteria: Test Plan, Smoke Test Cases, Test Data
Deliverables: Test Environment, Smoke Test Results.

Test Execution:
The test team starts executing the test cases based on the planned
test cases. Pass/Fail results should be updated in the test cases. A
defect report should be prepared for failed test cases and reported
to the development team through a bug tracking tool (e.g., Quality
Center) for fixing. Retesting is performed once a defect is fixed.

Entry Criteria: Test Plan document, Test Cases, Test Data, Test
Environment.
Deliverables: Test case execution report, Defect report, RTM



Test Closure:
This is the final stage, where we prepare the Test Closure Report
and Test Metrics. The testing team meets to evaluate the cycle-
completion criteria based on test coverage, quality, time, cost,
software, and business objectives. The test team analyzes the test
artifacts (such as test cases and defect reports) to identify
strategies that should be implemented in the future, which will help
remove process bottlenecks in upcoming projects. The test metrics
and the Test Closure Report are prepared based on the above
criteria.

Entry Criteria: Test Case Execution report (make sure there are no
open high-severity defects), Defect report
Deliverables: Test Closure report, Test metrics

What Is Bug Life Cycle or Defect Life Cycle In Software Testing
Last Updated on January 13, 2018 by Rajkumar

Bug Life Cycle or Defect Life Cycle:

The bug life cycle is also known as the defect life cycle. In the
software development process, a bug has a life cycle that it must go
through before it is closed. The bug life cycle varies depending on
the tools used (QC, JIRA, etc.) and the process followed in the
organization.

Before going further, it is strongly recommended to go through
both software life cycles, the SDLC and the STLC.

What is a Software Bug?

A software bug can be defined as abnormal behavior of the
software. A bug's life starts when the defect is found and ends
when the defect is closed, after ensuring it is not reproducible.

The different states of a bug in the bug life cycle are as follows:

New: When a tester finds a new defect, he should provide a proper
defect document to the development team so they can reproduce
and fix it. In this state, the status of the defect posted by the tester
is “New”.

Assigned: Defects with the status “New” are approved (if valid) and
assigned to the development team by the Test Lead/Project
Lead/Project Manager. Once the defect is assigned, the status of
the bug changes to “Assigned”.

Open: The development team starts analyzing the defect and works
on the fix.

Fixed: When a developer makes the necessary code change and
verifies it, the status of the bug is changed to “Fixed” and the bug
is passed to the testing team.

Test: If the status is “Test”, it means the defect has been fixed and
is ready for the tester to verify the fix.

Verified: The tester re-tests the bug after the developer fixes it. If
no bug is detected in the software, the fix is confirmed and the
status assigned is “Verified”.

Closed: After the fix is verified, if the bug no longer exists, the
status of the bug is set to “Closed”.

Reopen: If the defect remains after the retest, the tester posts the
defect using the defect retesting document and changes the status
to “Reopen”. The bug then goes through the life cycle again.

Duplicate: If the defect is reported twice or corresponds to the
same underlying bug, the development team changes the status to
“Duplicate”.

Deferred: In some cases, the Project Manager/Lead may set the
bug status to “Deferred”, for example when:
- the bug is found at the end of a release and is minor or not
important to fix immediately
- the bug is not related to the current build
- the bug is expected to be fixed in the next release
- the customer is thinking of changing the requirement
In such cases the status is changed to “Deferred” and the bug will
be fixed in the next release.

Rejected: If the system works according to the specifications and
the bug is due to misinterpretation (such as referring to old
requirements or extra features), the Team Lead or developers can
mark such bugs as “Rejected”.

Some other statuses are:

Cannot be fixed: The technology does not support the fix, the issue
lies at the root of the product, or the cost of fixing the bug is too
high.

Not Reproducible: Caused by a platform mismatch, an improper
defect document, a data mismatch, a build mismatch, or
inconsistent defects.

Need more information: If a developer is unable to reproduce the
bug from the steps provided by the tester, the developer can
change the status to “Need more information”. In this case, the
tester needs to add detailed reproduction steps and assign the bug
back to the development team for a fix. This won’t happen if the
tester writes a good defect document.

This is all about the Bug Life Cycle / Defect Life Cycle. Some
companies use these bug IDs in the RTM to map them to test
cases.
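As a rough sketch, the states above can be modeled as a state machine. The transition table below is illustrative only: real trackers (JIRA, QC, etc.) let each organization customize the workflow, and `change_status` is a hypothetical helper, not part of any tool's API.

```python
# Illustrative bug life cycle: which status moves the workflow allows.
# The state names follow this section; the exact transitions are an
# assumption, since each organization configures its own workflow.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Duplicate", "Deferred",
             "Cannot be fixed", "Not Reproducible", "Need more information"},
    "Fixed": {"Test"},
    "Test": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Reopen": {"Assigned"},             # the bug goes through the cycle again
    "Need more information": {"Open"},  # tester adds steps, work resumes
}

def change_status(current: str, new: str) -> str:
    """Move a bug to a new status, rejecting transitions the workflow forbids."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move a bug from {current!r} to {new!r}")
    return new

# The happy path: New -> Assigned -> Open -> Fixed -> Test -> Verified -> Closed
status = "New"
for step in ("Assigned", "Open", "Fixed", "Test", "Verified", "Closed"):
    status = change_status(status, step)
print(status)  # Closed
```

Encoding the workflow as data rather than code makes it easy to adapt to whatever status set a given tracker uses.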

Software Test Metrics – Product Metrics & Process Metrics
Last Updated on November 22, 2018 by Rajkumar

Software Test Metrics:
Before covering what software test metrics are and their types,
let's start with a famous quote about metrics:

"You can’t control what you can’t measure" – Tom DeMarco (an
American software engineer, author, and consultant on software
engineering topics).

Software test metrics are used to monitor and control the process
and the product. They help drive the project towards its planned
goals without deviation.

Metrics answer different questions. It’s important to decide which
questions you want answered.

Software test metrics are classified into two types:

1. Process metrics
2. Product metrics

Process Metrics:
Process metrics are software test metrics used in the test
preparation and test execution phases of the STLC.

The following metrics are generated during the Test Preparation
phase of the STLC:

Test Case Preparation Productivity:
It is used to calculate the number of test cases prepared per unit of
effort spent on test case preparation.

Formula:

Test Case Preparation Productivity = (No. of test cases) / (Effort spent for test case preparation)

E.g.:

No. of test cases = 240
Effort spent for test case preparation (in hours) = 10
Test Case Preparation Productivity = 240/10 = 24 test cases/hour
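The calculation above can be sketched in code; `prep_productivity` is a hypothetical helper written for illustration:

```python
def prep_productivity(num_test_cases: int, effort_hours: float) -> float:
    """Test cases prepared per hour of preparation effort."""
    if effort_hours <= 0:
        raise ValueError("Effort must be positive")
    return num_test_cases / effort_hours

# Using the numbers from the example: 240 test cases in 10 hours
print(prep_productivity(240, 10))  # 24.0
```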


Test Design Coverage:
It measures the percentage of requirements covered by test cases.

Formula:

Test Design Coverage = (Total number of requirements mapped to test cases / Total number of requirements) * 100

E.g.:

Total number of requirements = 100
Total number of requirements mapped to test cases = 98
Test Design Coverage = (98/100) * 100 = 98%
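The coverage formula above can be sketched as a small helper (hypothetical, for illustration; multiplying by 100 before dividing keeps the arithmetic exact for whole-number inputs):

```python
def design_coverage(mapped_requirements: int, total_requirements: int) -> float:
    """Percentage of requirements mapped to at least one test case."""
    if total_requirements <= 0:
        raise ValueError("Total requirements must be positive")
    return mapped_requirements * 100 / total_requirements

# Numbers from the example: 98 of 100 requirements mapped
print(design_coverage(98, 100))  # 98.0
```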

The following metrics are generated during the Test Execution
phase of the STLC:

Test Execution Productivity:
It determines the number of test cases that can be executed per
hour.

Formula:

Test Execution Productivity = (No. of test cases executed) / (Effort spent for execution of test cases)

E.g.:

No. of test cases executed = 180
Effort spent for execution of test cases (in hours) = 10
Test Execution Productivity = 180/10 = 18 test cases/hour

Test Execution Coverage:
It measures the number of test cases executed against the number
of test cases planned.

Formula:

Test Execution Coverage = (Total no. of test cases executed / Total no. of test cases planned to execute) * 100

E.g.:

Total no. of test cases planned to execute = 240
Total no. of test cases executed = 180
Test Execution Coverage = (180/240) * 100 = 75%

Test Cases Passed:
It measures the percentage of executed test cases that passed.

Formula:

Test Cases Passed = (Total no. of test cases passed / Total no. of test cases executed) * 100

E.g.:

Total no. of test cases executed = 90
Total no. of test cases passed = 80
Test Cases Passed = (80/90) * 100 = 88.8% ≈ 89%

Test Cases Failed:
It measures the percentage of executed test cases that failed.

Formula:

Test Cases Failed = (Total no. of test cases failed / Total no. of test cases executed) * 100

E.g.:

Test Cases Failed = (10/90) * 100 = 11.1% ≈ 11%

Test Cases Blocked:
It measures the percentage of executed test cases that were
blocked.

Formula:

Test Cases Blocked = (Total no. of test cases blocked / Total no. of test cases executed) * 100

E.g.:

Test Cases Blocked = (5/90) * 100 = 5.5% ≈ 6%
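The three execution-result metrics above share one shape: an outcome count divided by the number of executed test cases. A single hypothetical helper covers all of them; the numbers follow the examples in this section:

```python
def pct(count: int, executed: int) -> float:
    """Percentage of executed test cases with a given outcome."""
    if executed <= 0:
        raise ValueError("Executed count must be positive")
    return count * 100 / executed

executed = 90
print(round(pct(80, executed)))  # 89 -> test cases passed
print(round(pct(10, executed)))  # 11 -> test cases failed
print(round(pct(5, executed)))   # 6  -> test cases blocked
```

Note that the rounding here matches the examples, which round to the nearest whole percent.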


Product Metrics:
Product metrics are software test metrics used in the defect
analysis phase of the STLC.

Error Discovery Rate:
It determines the effectiveness of the test cases.

Formula:

Error Discovery Rate = (Total number of defects found / Total no. of test cases executed) * 100

E.g.:

Total no. of test cases executed = 240
Total number of defects found = 60
Error Discovery Rate = (60/240) * 100 = 25%

Defect Fix Rate:
It helps to assess the quality of a build in terms of defect fixing.

Formula:

Defect Fix Rate = ((Total no. of defects reported as fixed - Total no. of defects reopened) / (Total no. of defects reported as fixed + Total no. of new bugs due to the fix)) * 100

E.g.:

Total no. of defects reported as fixed = 10
Total no. of defects reopened = 2
Total no. of new bugs due to the fix = 1
Defect Fix Rate = ((10 - 2) / (10 + 1)) * 100 = (8/11) * 100 = 72.7% ≈ 73%
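The fix-rate calculation can be sketched as a hypothetical helper; `round` matches the rounding used in the example:

```python
def defect_fix_rate(reported_fixed: int, reopened: int,
                    new_bugs_from_fix: int) -> float:
    """Quality of a build's defect fixing, as a percentage.
    Reopened defects count against the fixes; new bugs introduced by
    the fixes enlarge the denominator."""
    denominator = reported_fixed + new_bugs_from_fix
    if denominator <= 0:
        raise ValueError("No fixes reported")
    return (reported_fixed - reopened) * 100 / denominator

# Numbers from the example: 10 fixed, 2 reopened, 1 new bug from the fix
print(round(defect_fix_rate(10, 2, 1)))  # 73
```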

Defect Density:
It is defined as the ratio of defects to requirements and indicates
the stability of the application.

Formula:

Defect Density = Total no. of defects identified / Actual size (number of requirements)

E.g.:

Total no. of defects identified = 80
Actual size = 10
Defect Density = 80/10 = 8

Defect Leakage:
It is used to review the efficiency of the testing process before UAT.

Formula:

Defect Leakage = (Total no. of defects found in UAT / Total no. of defects found before UAT) * 100

E.g.:

No. of defects found in UAT = 20
No. of defects found before UAT = 120
Defect Leakage = (20/120) * 100 = 16.6% ≈ 17%

Defect Removal Efficiency:
It measures the overall efficiency of defect removal by comparing
defects found pre- and post-delivery.

Formula:

Defect Removal Efficiency = (Total no. of defects found pre-delivery / (Total no. of defects found pre-delivery + Total no. of defects found post-delivery)) * 100

E.g.:

Total no. of defects found pre-delivery = 80
Total no. of defects found post-delivery = 10
Defect Removal Efficiency = (80 / (80 + 10)) * 100 = (80/90) * 100 = 88.8% ≈ 89%
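The two pre-/post-UAT quality metrics, Defect Leakage and Defect Removal Efficiency, can be sketched together (hypothetical helpers, using the numbers from the examples):

```python
def defect_leakage(found_in_uat: int, found_before_uat: int) -> float:
    """Defects that escaped into UAT, relative to defects caught before UAT."""
    if found_before_uat <= 0:
        raise ValueError("No defects found before UAT")
    return found_in_uat * 100 / found_before_uat

def defect_removal_efficiency(pre_delivery: int, post_delivery: int) -> float:
    """Share of all defects that were caught before delivery."""
    total = pre_delivery + post_delivery
    if total <= 0:
        raise ValueError("No defects found")
    return pre_delivery * 100 / total

print(round(defect_leakage(20, 120)))            # 17
print(round(defect_removal_efficiency(80, 10)))  # 89
```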


Requirements Traceability Matrix (RTM) | SoftwareTestingMaterial
Last Updated on September 9, 2018 by Rajkumar

The Requirements Traceability Matrix (RTM) is used to trace the
requirements to the tests that are needed to verify whether the
requirements are fulfilled.

The Requirement Traceability Matrix is also known as the
Traceability Matrix or Cross Reference Matrix.

Like all other test artifacts, the RTM varies between organizations.
Most organizations use just the Requirement IDs and Test Case IDs
in the RTM, but other fields can be added, such as Requirement
Description, Test Phase, Test Case Result, and Document Owner. It
is necessary to update the RTM whenever there is a change in a
requirement.

The following illustration gives a basic idea of a Requirement
Traceability Matrix (RTM).

Assume we have 5 requirements and a total of 10 identified test
cases. Whenever we write new test cases, the RTM needs to be
updated accordingly; for example, a new test case with ID TID011
can be added and mapped to the requirement ID BID005.
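The illustration above can be sketched as a forward-traceability mapping from requirement IDs to test case IDs. The exact distribution of the 10 test cases across the 5 requirements is assumed for illustration; only the TID011 → BID005 mapping comes from the text. Any RTM spreadsheet or tool holds essentially this structure:

```python
# Forward traceability: requirement ID -> test case IDs that verify it.
# The per-requirement split below is an assumed example.
rtm = {
    "BID001": ["TID001", "TID002"],
    "BID002": ["TID003", "TID004"],
    "BID003": ["TID005", "TID006", "TID007"],
    "BID004": ["TID008", "TID009"],
    "BID005": ["TID010"],
}

# Whenever a new test case is written, the RTM must be updated:
rtm["BID005"].append("TID011")

# Backward traceability is derived by inverting the forward mapping.
reverse_rtm = {tid: rid for rid, tids in rtm.items() for tid in tids}

# Requirements with no mapped test case are gaps in coverage.
untested = [rid for rid, tids in rtm.items() if not tids]

print(reverse_rtm["TID011"])  # BID005
print(untested)               # []
```

Keeping both directions derivable from one table gives bi-directional traceability without maintaining two documents.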


Types of Requirements Traceability Matrix (RTM):

Let’s look at the different types of Traceability Matrix:

- Forward Traceability: Mapping requirements to test cases is called
a Forward Traceability Matrix. It is used to ensure that the project
progresses in the desired direction and that each requirement is
tested thoroughly.
- Backward or Reverse Traceability: Mapping test cases to
requirements is called a Backward Traceability Matrix. It is used to
ensure that the current product remains on the right track and that
we are not expanding the scope of the project by adding
functionality that is not specified in the requirements.
- Bi-directional Traceability (Forward + Backward): Mapping
requirements to test cases (forward traceability) and test cases to
requirements (backward traceability) is called a Bi-directional
Traceability Matrix. It is used to ensure that all the specified
requirements have appropriate test cases and vice versa.

Advantages of a Requirements Traceability Matrix (RTM):

1. It helps achieve 100% test coverage.
2. It allows missing functionality to be identified easily.
3. It identifies the test cases that need to be updated when a
requirement changes.
4. It makes it easy to track the overall test execution status.

Test Deliverables in Software Testing – Detailed Explanation
Last Updated on October 3, 2018 by Rajkumar

Test Deliverables are the test artifacts that are given to the
stakeholders of a software project during the SDLC (Software
Development Life Cycle). A software project that follows the SDLC
goes through different phases before being delivered to the
customer, and each phase produces some deliverables. Some are
provided before the testing phase commences, some during the
testing phase, and the rest after the testing phase is completed.

Every software application goes through the different phases of the
SDLC and STLC. In the process of software application
development, test teams prepare various documents to improve
communication among team members and other stakeholders.
These documents are also known as Test Deliverables, as they are
delivered to the client along with the final software product.

Interview Question: What are test deliverables? List the test
deliverables you have come across in the process of the STLC. This
is one of the most important QA interview questions for freshers.

The following sections discuss the important test deliverables in
detail.

The following test deliverables are prepared during the process of
software testing:

1. Test Strategy: The Test Strategy is a high-level (static)
document, usually developed by the project manager. It captures
the approach for testing the product and achieving the goals, and is
normally derived from the Business Requirement Specification
(BRS). Documents like the Test Plan are prepared with this
document as a base.

2. Test Plan: The Test Plan is a document that contains the plan for
all the testing activities needed to deliver a quality product. It is
derived from the Product Description, SRS, or Use Case documents
and covers all future activities of the project. It is usually prepared
by the Test Lead or Test Manager.

3. Effort Estimation Report: In this report, the test team records
the effort required to complete the testing process.

4. Test Scenarios: A test scenario gives an idea of what has to be
tested; it is like a high-level test case.

5. Test Cases/Scripts: Test cases are sets of positive and negative
executable steps for a test scenario, with preconditions, test data,
expected results, postconditions, and actual results.

6. Test Data: Test data is the data used by testers to run the test
cases. While running the test cases, testers need to enter input
data, so they prepare test data in advance, either manually or
using tools.

For example, to test a basic login functionality with user ID and
password fields, we need data to enter into those fields, so we
collect some test data.

7. Requirement Traceability Matrix (RTM): The RTM is used to trace
the requirements to the tests needed to verify whether the
requirements are fulfilled. It is also known as the Traceability
Matrix or Cross Reference Matrix.

8. Defect Report/Bug Report: The purpose of a defect report or bug
report is to convey detailed information (such as environment
details and steps to reproduce) about a bug to the developers,
allowing them to replicate the bug easily.

9. Test Execution Report: It contains the test results and a
summary of the test execution activities.

10. Graphs and Metrics: Software test metrics are used to monitor
and control the process and the product, helping drive the project
towards its planned goals without deviation. Metrics answer
different questions, so it is important to decide which questions you
want answered.

11. Test Summary Report: It contains the summary of test
activities and the final test results.

12. Test Incident Report: It contains all incidents, resolved or
unresolved, found while testing the software.

13. Test Closure Report: It gives a detailed analysis of the bugs
found, bugs removed, and discrepancies found in the software.

14. Release Note: Release notes are sent to the client, customer,
or stakeholders along with the build. They contain the list of new
releases and bug fixes.

15. Installation/Configuration Guide: This guide helps install or
configure the components that make up the system, along with its
hardware and software requirements.

16. User Guide: This guide assists the end user in using the
software application.

17. Test Status Report: It tracks the testing status and is prepared
on a periodic or weekly basis. It contains the work done to date
and the work remaining.

18. Weekly Status Report (Project Manager to client): It is similar
to the Test Status Report but is generated weekly.

The Complete Guide To Writing Test Strategy [Sample Test
Strategy Document]
Last Updated on September 9, 2018 by Rajkumar

The Test Strategy is a high-level (static) document, usually
developed by the project manager. It captures the approach for
how we go about testing the product and achieving the goals. It is
normally derived from the Business Requirement Specification
(BRS), and documents like the Test Plan are prepared with it as a
base.

Even though testing differs between organizations, almost all
software development organizations maintain a Test Strategy
document to achieve their goals and follow best practices.

Usually, the test team starts writing the detailed Test Plan and
continues with further phases of testing once the test strategy is
ready. In the Agile world, some companies do not spend time on
test plan preparation because of the minimal time available for
each release, but they still maintain a test strategy document.
Maintaining this document for the entire project helps mitigate
unforeseen risks.

This is one of the important documents among the test
deliverables. Like other test deliverables, the test team shares it
with the stakeholders for a better understanding of the scope of the
project, the test approaches, and other important aspects.


If you are a beginner, you may not get an opportunity to create a
test strategy document, but it’s good to know how; it will be helpful
when you are handling a QA team. Once you become a Project
Lead or Project Manager, you will have to develop the test strategy
document, and creating an effective one is a skill you must acquire.
By writing a test strategy, you define the testing approach of your
project. The test strategy document should be circulated to all the
team members so that every team member is consistent with the
testing approach. Remember, there is no rule to maintain all these
sections in your test strategy document; it varies from company to
company. This list gives a fair idea of how to write a good test
strategy.

Sections of Test Strategy Document:

The following are the sections of a test strategy document:

1. Scope and overview
2. Test approach
3. Testing tools
4. Industry standards to follow
5. Test deliverables
6. Testing metrics
7. Requirement Traceability Matrix
8. Risk and mitigation
9. Reporting tool
10. Test summary

We have seen what a test strategy document is and what it
contains. Let’s briefly discuss each section.

Scope and overview:
In this section, we mention the scope of the testing activities (what
to test and why) and give an overview of the AUT (Application
Under Test).

Example: creating a new application (say, a mail service like Google
Mail) that offers email services. Test the email functionality and
make sure it gives value to the customer.

Test Approach:
In this section, we usually define the following

 Test levels
 Test types
 Roles and responsibilities
 Environment requirements

Test Levels:
This section lists the levels of testing that will be performed during
QA testing, such as unit testing, integration testing, system testing,
and user acceptance testing. Testers are responsible for integration
testing, system testing, and user acceptance testing.
Test Types:
This section lists out the testing types that will be performed during
QA Testing.

Roles and responsibilities:
This section describes the roles and responsibilities of the Project
Manager, Project Lead, and individual testers.

Environment requirements:
This section lists the hardware and software required for the test
environment in order to commence the testing activities.

Testing tools:
This section describes the testing tools necessary to conduct the
tests.

Example: the name of the test management tool, the bug tracking
tool, and the automation tool.

Industry standards to follow:
This section describes the industry standards to follow in order to
produce a high-quality system that meets or exceeds customer
expectations. Usually, the project manager decides the testing
models and procedures that need to be followed to achieve the
goals of the project.

Test deliverables:
This section lists the deliverables that need to be produced before,
during, and at the end of testing.

Testing metrics:
This section describes the metrics that will be used in the project to
analyze the project status.

Requirement Traceability Matrix:
The requirement traceability matrix is used to trace the
requirements to the tests needed to verify whether the
requirements are fulfilled.

Risk and mitigation:
Identify all the testing risks that could affect the testing process
and specify a plan to mitigate each risk.

Reporting tool:
This section outlines how defects and issues will be tracked using a
reporting tool.



Test Summary:
This section lists the kinds of test summary reports that will be
produced, along with their frequency. Test summary reports may
be generated on a daily, weekly, or monthly basis, depending on
how critical the project is.


Conclusion:

The test strategy document gives a clear vision of what the test
team will do for the whole project. It is a static document, meaning
it won’t change throughout the project life cycle. The person who
prepares this document must have good experience in the product
domain, as it is the document that will drive the entire team. The
test strategy document should be circulated to the entire testing
team before the testing activities begin. Writing a good test
strategy improves the complete testing process and leads to a
high-quality system.

