
http://www.technostall.com
By: Chankey Pathak
chankey007@gmail.com
Agenda:
• Introduction
• Importance of testing in SDLC
• Testing life cycle
• Test planning
• Types of testing
• Verification & Validation
• Quality Assurance & Control
• Bug reporting
Software Testing
Software testing is a process used to identify the
correctness, completeness and quality of developed
computer software.

It is the process of executing a program / application
under positive and negative conditions, by manual or
automated means. It checks for:
 Specification
 Functionality
 Performance
Why Software Testing?
Software testing is important because defects in
inadequately tested software may cause mission failure
and degrade operational performance and reliability.

Effective software testing delivers quality software
products that satisfy the user's requirements, needs
and expectations.
What...
...is an "Error"?
...is a "Bug"?
...is a "Fault"? A "Failure"?
Bug, Fault & Failure
A person makes an Error
That creates a fault in software
That can cause a failure in operation

Error : An error is a human action that produces an incorrect result,
which results in a fault.
Bug : The manifestation of an error during execution of the software.
Fault : The state of the software caused by an error.
Failure : The deviation of the software from its expected result; it is an event.
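
A tiny hypothetical sketch in Python (the function is invented for
illustration): one human error introduces a fault, which only becomes a
failure when the faulty path is executed.

    def average(values):
        # fault: the human error of typing len(values) - 1
        # instead of len(values)
        return sum(values) / (len(values) - 1)

    print(average([2, 4]))   # failure: prints 6.0 instead of the expected 3.0
    # average([5]) would also fail, with ZeroDivisionError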
Who is a Software Tester?
A software tester is one who performs testing and
finds bugs, if they exist, in the tested application.
The Testing Team
 Program Manager
• Plans and executes the project to ensure its success,
minimizing risk throughout the lifetime of the project.
• Responsible for writing the product specification, managing the
schedule and making the critical decisions and trade-offs.

 QA Lead
• Coaches and mentors other team members to help improve QA effectiveness
• Works with other department representatives to collaborate on
joint projects and initiatives
• Implements industry best practices related to test automation
and streamlines the QA department
 Test Analyst/Lead
• Responsible for planning, developing and executing automated test
systems, manual test plans and regression test plans
• Identifies the target test items to be evaluated by the test effort
• Defines the appropriate tests required and any associated test data
• Gathers and manages the test data
• Evaluates the outcome of each test cycle

 Test Engineer
• Writes and executes test cases, and reports defects
• Also responsible for determining the best way a test can be
performed in order to achieve 100% test coverage of all components
TESTABILITY

Operability
Observability
Controllability
Decomposability
Stability
Understandability

Characteristics of Testable Software
• Operable
– The better it works (i.e., better quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically
detected
• Controllable
– The states and variables of the software can be controlled directly
by the tester
• Decomposable
– The software is built from independent modules that can be tested
independently

(more on next slide)


Characteristics of Testable Software (continued)
• Simple
– The program should exhibit functional, structural, and code
simplicity
• Stable
– Changes to the software during testing are infrequent and do not
invalidate existing tests
• Understandable
– The architectural design is well understood; documentation is
available and organized

Test Characteristics

• A good test has a high probability of finding an error
– The tester must understand the software and how it might fail
• A good test is not redundant
– Testing time is limited; one test should not serve the same purpose
as another test
• A good test should be “best of breed”
– Tests that have the highest likelihood of uncovering a whole class
of errors should be used
• A good test should be neither too simple nor too complex
– Each test should be executed separately; combining a series of
tests could cause side effects and mask certain errors

When to Start Testing in SDLC
• Requirement
• Analysis
• Design
• Coding
• Testing
• Implementation
• Maintenance

 Testing starts from the Requirement phase
Testing Life Cycle
Project Initiation → System Study → Test Plan →
Design Test Cases → Execute Test Cases (manual / automated) →
Report Defects → Regression Test → Analysis → Summary Reports
BASIS PATH TESTING
• The basis path method enables the test case designer
to derive a logical complexity measure of a procedural
design and use this measure as a guide for defining a
basis set of execution paths.

• Test cases derived to exercise the basis set are
guaranteed to execute every statement in the program
at least one time during testing.
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence
of one or more procedural statements
• A node containing a simple conditional expression is referred to as
a predicate node
– Each compound condition in a conditional expression containing one
or more Boolean operators (e.g., and, or) is represented by a separate
predicate node
– A predicate node has two edges leading out from it (True and False)
• An edge, or a link, is an arrow representing flow of control in a
specific direction
– An edge must start and terminate at a node
– An edge does not intersect or cross over another edge
• Areas bounded by a set of edges and nodes are called regions
• When counting regions, include the area outside the graph as a
region, too
FLOW GRAPH NOTATION
[Figures: flow graph structures for the SEQUENCE, IF, WHILE,
UNTIL and CASE constructs]
CYCLOMATIC COMPLEXITY
Three equivalent ways to compute the cyclomatic complexity V(G)
of a flow graph (for the example graph: E = 11 edges, N = 9 nodes,
P = 3 predicate nodes, 4 regions):

V(G) = E - N + 2 = 11 - 9 + 2 = 4
V(G) = P + 1 = 3 + 1 = 4
V(G) = number of regions = 4
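
A minimal Python sketch of the first two formulas, using a hypothetical
edge list chosen to match the numbers above (9 nodes, 11 edges, 3
predicate nodes):

    from collections import Counter

    # Hypothetical flow graph matching the slide's example.
    edges = [(1, 2), (1, 9), (2, 3), (2, 8), (3, 4), (3, 5),
             (4, 6), (5, 6), (6, 7), (7, 1), (8, 7)]
    nodes = {n for edge in edges for n in edge}

    # Predicate nodes are those with two outgoing edges.
    out_degree = Counter(src for src, _ in edges)
    predicates = [n for n, d in out_degree.items() if d == 2]

    print(len(edges) - len(nodes) + 2)   # V(G) = E - N + 2  -> 4
    print(len(predicates) + 1)           # V(G) = P + 1      -> 4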
INDEPENDENT PATH
An independent path is any path through the program
that introduces at least one new set of processing
statements or a new condition.

In the flow graph, an independent path must move along
at least one edge that has not been traversed before
the path is defined.
INDEPENDENT PATHS
[Flow graph figure: nodes 1-9]
1-9
1-2-8-7-1-9
1-2-3-4-6-7-1-9
1-2-3-5-6-7-1-9
BASIS SET

PATH 1: 1-9
PATH 2: 1-2-8-7-1-9
PATH 3: 1-2-3-4-6-7-1-9
PATH 4: 1-2-3-5-6-7-1-9
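
To make the idea concrete, here is a hypothetical function with
cyclomatic complexity 4 (three decisions), and one pytest-style test per
basis path; the function and its grading rules are invented for
illustration:

    def grade(score, bonus=False):
        if bonus:            # predicate 1
            score += 5
        if score >= 90:      # predicate 2
            return "A"
        if score >= 60:      # predicate 3
            return "B"
        return "C"

    # One test per basis path; each flips exactly one new decision.
    def test_path_1():
        assert grade(95) == "A"

    def test_path_2():
        assert grade(70) == "B"

    def test_path_3():
        assert grade(30) == "C"

    def test_path_4():
        assert grade(88, bonus=True) == "A"   # 88 + 5 = 93 >= 90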
TESTING METHODOLOGIES AND TYPES

 Black box testing
 White box testing
 Incremental testing
 Thread testing
• Black box testing
  • No knowledge of internal design or code required
  • Tests are based on requirements and functionality

• White box testing
  • Knowledge of the internal program design and code required
  • Tests are based on coverage of code statements, branches,
    paths and conditions
Black Box - testing technique
Finds the following kinds of errors:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Performance errors
• Initialization and termination errors
Black box / Functional testing

• Based on requirements and functionality

• Not based on any knowledge of internal design or code

• Covers all combined parts of a system

• Tests are data driven
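
A small sketch of the idea, assuming a hypothetical requirement
("orders of 100 units or more get a 10% discount"): the tests are
derived from the requirement alone and never look inside the
discounted_price implementation.

    def discounted_price(unit_price, qty):
        total = unit_price * qty
        return total * 0.9 if qty >= 100 else total

    # Tests derived from the stated requirement, not from the code.
    def test_no_discount_below_threshold():
        assert discounted_price(2.0, 99) == 198.0

    def test_discount_at_threshold():
        assert discounted_price(2.0, 100) == 180.0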
White Box - testing technique
• All independent paths within a module have been
exercised at least once

• Exercise all logical decisions on their true and false sides

• Execute all loops at their boundaries and within their
operational bounds

• Exercise internal data structures to ensure their validity
Loop Testing
This white box technique focuses on the validity
of loop constructs.

Four different classes of loops can be defined:
• simple loops
• nested loops
• concatenated loops
• unstructured loops
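
For simple loops, a common heuristic is to exercise 0, 1, 2, a typical
number, and the maximum number of iterations. A sketch, where the
summing loop and the MAX_ITEMS cap are both invented for illustration:

    import pytest

    MAX_ITEMS = 100   # assumed upper bound for this example

    def total(items):
        s = 0
        for x in items:   # the loop under test
            s += x
        return s

    @pytest.mark.parametrize("n", [0, 1, 2, 50, MAX_ITEMS - 1, MAX_ITEMS])
    def test_simple_loop_boundaries(n):
        assert total(range(n)) == n * (n - 1) // 2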
Other White Box Techniques
Statement Coverage - execute all statements at least once

Decision Coverage - execute each decision direction (true and
false) at least once

Condition Coverage - execute each individual condition with all
possible outcomes at least once

Decision / Condition Coverage - execute each decision and each
condition with all possible outcomes at least once

Multiple Condition Coverage - execute all possible combinations
of condition outcomes in each decision, invoking each point of
entry at least once
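
As a sketch of the difference, take a single decision with two
conditions, a and b (the eligibility check is hypothetical). Decision
coverage needs only one true and one false evaluation of the whole
decision; multiple condition coverage needs all four combinations
enumerated below:

    import itertools

    def eligible(age_ok, id_ok):
        return age_ok and id_ok   # one decision, two conditions

    # Multiple condition coverage: all combinations of condition outcomes.
    for a, b in itertools.product([True, False], repeat=2):
        print(a, b, "->", eligible(a, b))   # only True/True -> True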
Testing Levels

• Unit testing
• Integration testing
• System testing
• Acceptance testing
Unit testing

• The most 'micro' scale of testing.
• Tests done on particular functions or code modules.
• Requires knowledge of the internal program design and code.
• Done by programmers (not by testers).
Unit testing
Objectives:  To test the function of a program or unit of code
             such as a program or module
             To test internal logic
             To verify internal design
             To test path & condition coverage
             To test exception conditions & error handling
When:        After modules are coded
Input:       Internal Application Design
             Master Test Plan
             Unit Test Plan
Output:      Unit Test Report
Who:         Developer
Methods:     White Box testing techniques
             Test Coverage techniques
Tools:       Debug
             Re-structure
             Code Analyzers
             Path/statement coverage tools
Education:   Testing Methodology
             Effective use of tools
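
A minimal sketch of a developer-written unit test using the standard
library's unittest; the add function is a hypothetical stand-in for the
unit under test:

    import unittest

    def add(a, b):            # hypothetical unit under test
        return a + b

    class AddTests(unittest.TestCase):
        def test_positive(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()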
Incremental integration testing

 Continuous testing of an application as and when new
functionality is added.

 The application's functionality aspects are required to be
independent enough to work separately before completion of
development.

 Done by programmers or testers.
Integration Testing

– Testing of combined parts of an application to determine
their functional correctness.

– 'Parts' can be
• code modules
• individual applications
• client/server applications on a network
Types of Integration Testing

» Big Bang testing
» Top Down Integration testing
» Bottom Up Integration testing
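
In top-down integration, for example, a higher-level module is tested
against a stub standing in for a lower-level module that is not yet
integrated. A sketch with invented module names, using the standard
library's unittest.mock for the stub:

    from unittest import mock

    # Higher-level module under test; the pricing module below it is
    # not yet integrated, so a stub stands in (top-down integration).
    def invoice_total(pricing, qty):
        return pricing.fetch_price() * qty

    def test_invoice_total_with_stub():
        pricing_stub = mock.Mock()
        pricing_stub.fetch_price.return_value = 10
        assert invoice_total(pricing_stub, 3) == 30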
Integration testing
Objectives:  To technically verify proper interfacing between
             modules, and within sub-systems
When:        After modules are unit tested
Input:       Internal & External Application Design
             Master Test Plan
             Integration Test Plan
Output:      Integration Test Report
Who:         Developers
Methods:     White and Black Box techniques
             Problem / Configuration Management
Tools:       Debug
             Re-structure
             Code Analyzers
Education:   Testing Methodology
             Effective use of tools
System Testing
Objectives:  To verify that the system components perform
             control functions
             To perform inter-system tests
             To demonstrate that the system performs both
             functionally and operationally as specified
             To perform appropriate types of tests relating to
             Transaction Flow, Installation, Reliability,
             Regression etc.
When:        After Integration Testing
Input:       Detailed Requirements & External Application Design
             Master Test Plan
             System Test Plan
Output:      System Test Report
Who:         Development Team and Users
Methods:     Problem / Configuration Management
Tools:       Recommended set of tools
Education:   Testing Methodology
             Effective use of tools
Systems Integration Testing
Objectives:  To test the co-existence of products and applications
             that are required to perform together in the
             production-like operational environment (hardware,
             software, network)
             To ensure that the system functions together with
             all the components of its environment as a total
             system
             To ensure that the system releases can be deployed
             in the current environment
When:        After system testing
             Often performed outside of the project life-cycle
Input:       Test Strategy
             Master Test Plan
             Systems Integration Test Plan
Output:      Systems Integration Test Report
Unit Testing
1. Test each module individually.
2. Follows white box testing (the logic of the program).
3. Done by developers.
Integration Testing
After completing unit testing and the development of dependent
modules, programmers connect the modules with respect to the HLD
(high-level design) for integration testing, using the approaches
listed earlier (Big Bang, Top Down, Bottom Up).
System Testing
After completing unit and integration testing through white box
techniques, the development team releases a build (all integrated
modules, e.g. an .exe) on which the testing team performs black
box testing:

• Usability Testing
• Functional Testing
• Performance Testing
• Security Testing
Usability Testing
During this test, the testing team concentrates on the
user-friendliness of the build's interface. It consists of the
following sub-tests:

• User Interface Test: ease of use (screens should be
understandable and operable by the end user)

• Look & Feel: the interface is attractive

• Speed of interface: fewer steps needed to complete a task

• Manual Support Test: context sensitivity of the user manual
Functional Testing
• The process of checking the behavior of the application.

• It is geared to the functional requirements of an application.

• Checks the correctness of outputs.

• Data validation and integration, i.e. whether the inputs are
correct or not.
Performance Testing
• LOAD TESTING - Also known as scalability testing. During this
test, test engineers execute the application build under the
customer's expected configuration and load, to estimate
performance.

• STRESS TESTING - During this test, test engineers estimate the
peak load: the maximum number of users under which the
application still runs on the customer's expected configuration.
PEAK LOAD > CUSTOMER (EXPECTED) LOAD

• DATA VOLUME TESTING - The testing team conducts this test to
find the maximum volume of data your application can handle.
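
A rough load-testing sketch using only the standard library;
handle_request is a stand-in for the system under test, and the worker
and request counts are arbitrary assumptions about the expected load:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():        # stand-in for the system under test
        time.sleep(0.01)

    def timed_call(_):
        t0 = time.perf_counter()
        handle_request()
        return time.perf_counter() - t0

    # Simulate the expected load: 50 concurrent users, 500 requests.
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_call, range(500)))

    print(f"avg {sum(latencies) / len(latencies):.4f}s, "
          f"max {max(latencies):.4f}s")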
Security Testing

Testing how well the system protects against unauthorized
internal or external access, willful damage, etc.; it may
require sophisticated testing techniques.
Smoke Testing
Smoke testing is non-exhaustive software testing that
ascertains that the most crucial functions of a program
work, without bothering with finer details.
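
A smoke-test sketch: a few fast checks that the most crucial functions
respond at all. The app object and its methods are hypothetical
stand-ins for the build under test.

    def smoke_test(app):
        assert app.start()                  # the application launches
        assert app.login("demo", "demo")    # a user can log in
        assert app.home_page() != ""        # the main screen renders
        app.stop()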
Alpha Testing
1. The application is tested by users who do not already know
the application.
2. Done at the developer's site under controlled conditions.
3. Under the supervision of the developers.
Acceptance Testing
A formal test conducted to determine whether or not a system
satisfies its acceptance criteria and to enable the customer to
determine whether or not to accept the system.
It is the final test action before deploying the software. The
goal of acceptance testing is to verify that the software is ready
and can be used by the end user to perform the functions for
which the software was built.
Beta Testing
1. This testing is done before the final release of the
software to end users.
2. The software is released to users for testing under no
controlled conditions; the user is free to do whatever they
want on the system in order to find errors.
Regression Testing
Testing with the intent of determining whether bug fixes
have been successful and have not created any new problems.
This type of testing is also done to ensure that no
degradation of baseline functionality has occurred.
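
A regression-test sketch: after fixing a bug, pin the previously broken
input with a test so the defect cannot silently reappear. The parser
and the bug are invented for illustration.

    def parse_amount(s):
        return float(s.lstrip("+"))   # fix: tolerate a leading "+"

    def test_regression_leading_plus():   # the input that used to fail
        assert parse_amount("+3.50") == 3.50

    def test_baseline_unchanged():        # old behavior still intact
        assert parse_amount("2.00") == 2.00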
Monkey Testing

Testing the application randomly, e.g. hitting keys
irregularly, trying to break the system. There are no
specific test cases or scenarios for monkey testing.
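
A monkey-test sketch: hammer a hypothetical parse function with random
input and assert only that it never crashes with an unexpected
exception.

    import random
    import string

    def run_monkey(parse, rounds=1000):
        for _ in range(rounds):
            junk = "".join(random.choices(string.printable,
                                          k=random.randint(0, 50)))
            try:
                parse(junk)
            except ValueError:
                pass   # rejecting bad input is acceptable behavior
            # any other exception propagates and fails the run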
Verification
 Verification is the process of confirming that the
software meets its specification; it is done through
inspections and walkthroughs.

Use - to identify defects in the product early in the
life cycle.
Validation
 Validation is the process of confirming that the
software meets the user's requirements. It is the
actual testing.

Verification: Is the product right?
Validation : Is it the right product?
What is Quality?
Quality is defined as meeting the customer's requirements
in accordance with the standards.
A widely used measure of quality is FURPS:
 Functionality
 Usability
 Reliability
 Performance
 Supportability
Why Quality?

 Quality is an important factor affecting an
organization's long-term performance.

 Quality improves productivity and competitiveness
in any organization.
Quality Assurance
Quality Assurance is a planned and systematic set of
activities necessary to provide adequate confidence that
products and services will conform to specified
requirements and meet user needs.

• It is process oriented.
• Defect-prevention based.
• Applies throughout the life cycle.
• It is a management process.
Quality Control
Quality Control is the process by which product quality
is compared with applicable standards, and action is
taken when non-conformance is detected.

• It is product oriented.
• Defect-detection based.
QA vs. QC
• Quality Assurance makes sure that we are doing the right
things, the right way. Quality Control makes sure the results
of what we have done are what we expected.

• QA focuses on building in quality, and hence preventing
defects. QC focuses on testing for quality, and hence
detecting defects.

• QA deals with process. QC deals with product.

• QA covers the entire life cycle. QC covers the testing part
of the SDLC.

• QA is a preventive process. QC is a corrective process.
When to Stop Testing
[Graph: cost and number of bugs plotted against the amount of
testing; the "Stop Testing" point is where the cost incurred
overtakes the ratio of bugs found.]
