
Testing, Testing & Testing

QA doesn't make software but makes it better

- By Vishal Gupta

Myths Vs Facts

Myths:

Developers require more skills in comparison to QA.
Development needs more effort than testing.

Facts:

A tester needs to think one step ahead of developers to
break their code.
Testing is more creative than development because you
need to be creative to be destructive :)

Software Testing

Software Testing is a process of evaluating a system by
manual or automated means to verify that it satisfies
specified requirements, or to identify differences between
expected and actual results. (From an ALU slide)

How to do Testing and its life cycle


Unit Testing

Sanity Testing

Functional Testing

Integration Testing

Regression Testing

Stress Testing

Load Testing

Performance Testing

Solution Testing

How to do testing?

1st Cycle - Unit and Sanity Testing

Unit Testing: covers testing of a specific part of the code,
performed by developers (a sketch follows below).
Sanity Testing: a very basic level of testing done by QA after
bug fixes.
Basic Cleaning of Bugs
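
As an illustration of cycle 1, a minimal sketch of a developer-written
unit test using Python's standard unittest module; the withdraw
function and its rules are hypothetical, not from the original slides.

    import unittest

    def withdraw(balance, amount):
        # Hypothetical function under test: debit an account balance.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawUnitTests(unittest.TestCase):
        def test_normal_withdrawal(self):
            self.assertEqual(withdraw(100, 40), 60)

        def test_rejects_overdraft(self):
            with self.assertRaises(ValueError):
                withdraw(100, 150)

    if __name__ == "__main__":
        unittest.main()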

2nd Cycle - Functional, Integration and Regression Testing

Functional Testing: focuses mainly on the specific functionality of a
component.
Integration Testing: checks how that functionality works after
integrating with some other functionality.
Regression Testing: checks older functionality after integrating new
functionality (see the sketch below).
Complete Mix Testing
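
As a hedged sketch of regression testing: once a bug is fixed, a test
pinning the corrected behaviour stays in the suite so later changes
cannot silently re-break it. The parse_port helper and its old
whitespace bug are hypothetical.

    import unittest

    def parse_port(value):
        # Hypothetical fixed function: earlier versions crashed on
        # surrounding whitespace, so a regression test pins the fix.
        port = int(value.strip())
        if not 0 < port < 65536:
            raise ValueError("port out of range")
        return port

    class PortRegressionTests(unittest.TestCase):
        def test_whitespace_regression(self):
            # Guards the old bug: " 8080 " used to raise ValueError.
            self.assertEqual(parse_port(" 8080 "), 8080)

    if __name__ == "__main__":
        unittest.main()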

3rd Cycle - Stress, Load and Performance Testing

Performance Testing: measures how well something performs
against a given benchmark.
Load Testing: also performance testing, but under
various loads (see the sketch below).
Stress Testing: performance testing under stress conditions.
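
For illustration only, a minimal load-measurement sketch in Python;
the handle_request stand-in and the chosen load levels are
assumptions, not a real load-testing tool.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        # Hypothetical unit of work standing in for the system under test.
        time.sleep(0.01)

    def measure(load):
        # Drive `load` concurrent requests and report the elapsed time.
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=load) as pool:
            for _ in range(load):
                pool.submit(handle_request)
        return time.perf_counter() - start

    for load in (1, 10, 50, 100):  # increasing load levels
        print(f"load={load:4d}  elapsed={measure(load):.3f}s")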

4th Cycle - Solution or End-to-End Testing

It's done in a completely emulated customer setup,
involving multiple products and, most of the time,
products from multiple vendors.
If it's done at the customer site, then it's also called
pre-production testing.

How to Plan and Organize Test Cases

Project -> Test plan -> Functionality -> Test cases

FAQ in Testing

What is the use of automation in testing?
It reduces the time spent in the testing cycle.
Automation finds regression issues quickly (see the sketch below).
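
A sketch of why automation pays off for regression: one command
re-runs every saved case; the slugify function and its cases are
hypothetical.

    import unittest

    def slugify(text):
        # Hypothetical function under regression: lowercase, hyphen-joined.
        return "-".join(text.lower().split())

    CASES = [
        ("Hello World", "hello-world"),
        ("  spaced  out  ", "spaced-out"),
        ("already-slugged", "already-slugged"),
    ]

    class SlugifyRegressionSuite(unittest.TestCase):
        def test_saved_cases(self):
            # One automated run covers every saved case; a regression
            # shows up immediately as a failing subtest.
            for text, expected in CASES:
                with self.subTest(text=text):
                    self.assertEqual(slugify(text), expected)

    if __name__ == "__main__":
        unittest.main()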

Will you allow bugs in the product to be released to the customer?
It depends on the PM and the market scenario.

What is the cost of poor quality?
It can prevent further business with the customer.
It is expensive to fix bugs after release, as the whole life cycle needs to be repeated.

Difference Between Verification and Validation

Verification:
Are we building the product right?
The software should conform to its specification.

Validation:
Are we building the right product?
The software should do what the user really requires.

Common tools for Testing

Siebel: bug tracking tool from Oracle.
Rational Robot: functional test automation tool from IBM.
Mercury WinRunner: functional/GUI testing tool from Mercury (later HP).
SIPp: for SIP protocol testing.
Ethereal (now Wireshark): for sniffing packets.
TestComplete: for GUI automation.

How then to proceed?

Exhaustive testing is most often not feasible.
Random statistical testing does not work either, if
you want to find errors.
Therefore, we look for systematic ways to
proceed during testing.
(Source: SE, Testing, Hans van Vliet, 2008)

Classification of testing techniques

Classification based on the criterion used to measure
the adequacy of a set of test cases:
coverage-based testing
fault-based testing
error-based testing

Classification based on the source of information
used to derive test cases:
black-box testing (functional, specification-based)
white-box testing (structural, program-based)

When exactly is a failure a failure?


Failure is a relative notion: e.g. a failure w.r.t. the
specification document
Verification: evaluate a product to see whether it
satisfies the conditions specified at the start:
Have we built the system right?
Validation: evaluate a product to see whether it does
what we think it should do:
Have we built the right system?


Testing process

[Diagram: a test strategy selects a subset of the input for program P;
the real output of P is compared with the expected output from the
oracle, yielding the test results.]
What is our goal during testing?

Objective 1: find as many faults as possible


Objective 2: make you feel confident that the
software works OK


Testing and the life cycle

requirements engineering
criteria: completeness, consistency, feasibility, and testability
typical errors: missing, wrong, and extra information
determine testing strategy
generate functional test cases
test the specification, through reviews and the like

design
functional and structural tests can be devised on the basis of the
decomposition
the design itself can be tested (against the requirements)
formal verification techniques
the architecture can be evaluated

Test documentation (IEEE 829)

Test plan
Test design specification
Test case specification
Test procedure specification
Test item transmittal report
Test log
Test incident report
Test summary report

Example checklist

Wrong use of data: variable not initialized, dangling
pointer, array index out of bounds, ...
Faults in declarations: undeclared variable, variable
declared twice, ...
Faults in computation: division by zero, mixed-type
expressions, wrong operator priorities, ...
Faults in relational expressions: incorrect Boolean
operator, wrong operator priorities, ...
Faults in control flow: infinite loops, loops that execute
n-1 or n+1 times instead of n, ... (see the sketch below)
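
To make the last checklist item concrete, a minimal sketch (in Python,
with hypothetical function names) of an off-by-one control-flow fault
and its fix:

    def sum_first_n_buggy(values, n):
        # Off-by-one fault: range(n - 1) executes the loop n-1 times.
        total = 0
        for i in range(n - 1):
            total += values[i]
        return total

    def sum_first_n_fixed(values, n):
        # Correct: range(n) executes the loop exactly n times.
        total = 0
        for i in range(n):
            total += values[i]
        return total

    data = [10, 20, 30, 40]
    assert sum_first_n_buggy(data, 3) == 30   # misses the third element
    assert sum_first_n_fixed(data, 3) == 60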

Observations about Testing

"Testing is the process of executing a program
with the intention of finding errors." - Myers
"Testing can show the presence of bugs but
never their absence." - Dijkstra

Good Testing Practices


A good test case is one that has a high
probability of detecting an undiscovered defect,
not one that shows that the program works
correctly
It is impossible to test your own program
A necessary part of every test case is a
description of the expected result

Good Testing Practices (contd)


Avoid nonreproducible or on-the-fly testing
Write test cases for valid as well as invalid input
conditions.
Thoroughly inspect the results of each test
As the number of detected defects in a piece of
software increases, the probability of the
existence of more undetected defects also
increases

Good Testing Practices (contd)


Assign your best people to testing
Ensure that testability is a key objective in your
software design
Never alter the program to make testing easier
Testing, like almost every other activity, must
start with objectives

Levels of Testing
Unit Testing
Integration Testing
Validation Testing
Regression Testing
Alpha Testing
Beta Testing

Acceptance Testing

Unit Testing
Algorithms and logic
Data structures (global and local)
Interfaces
Independent paths
Boundary conditions
Error handling

Why Integration Testing Is Necessary

One module can have an adverse effect on
another
Subfunctions, when combined, may not produce
the desired major function
Individually acceptable imprecision in
calculations may be magnified to unacceptable
levels

Why Integration Testing Is Necessary (contd)

Interfacing errors not detected in unit testing may
appear
Timing problems (in real-time systems) are not
detectable by unit testing
Resource contention problems are not detectable
by unit testing

Top-Down Integration
1. The main control module is used as a driver,
and stubs are substituted for all modules
directly subordinate to the main module.
2. Depending on the integration approach
selected (depth or breadth first), subordinate
stubs are replaced by modules one at a time.

Top-Down Integration (contd)


3. Tests are run as each individual module is
integrated.
4. On the successful completion of a set of
tests, another stub is replaced with a real
module.
5. Regression testing is performed to ensure
that errors have not developed as a result of
integrating new modules.

Problems with Top-Down Integration


Many times, calculations are performed in the
modules at the bottom of the hierarchy
Stubs typically do not pass data up to the higher
modules
Delaying testing until lower-level modules are
ready usually results in integrating many
modules at the same time rather than one at a
time
Developing stubs that can pass data up is almost
as much work as developing the actual module

Bottom-Up Integration
Integration begins with the lowest-level modules, which
are combined into clusters, or builds, that perform a
specific software subfunction
Drivers (control programs developed for testing) are written
to coordinate test case input and output
The cluster is tested
Drivers are removed and clusters are combined moving
upward in the program structure

Problems with Bottom-Up Integration

The whole program does not exist until the
last module is integrated
Timing and resource contention problems
are not found until late in the process

Validation Testing
Determine if the software meets all of the requirements
defined in the SRS
Having written requirements is essential
Regression testing is performed to determine if the
software still meets all of its requirements in light of
changes and modifications to the software
Regression testing involves selectively repeating existing
validation tests, not developing new tests
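
As a hedged sketch of that selective repetition, a unittest suite can
be rebuilt from just the existing cases affected by a change; the test
names here are hypothetical placeholders.

    import unittest

    class ValidationTests(unittest.TestCase):
        def test_login(self):
            self.assertTrue(True)   # placeholder for a real validation test

        def test_report_export(self):
            self.assertTrue(True)   # placeholder for a real validation test

        def test_password_reset(self):
            self.assertTrue(True)   # placeholder for a real validation test

    # Only the existing tests touched by the latest change are repeated;
    # no new tests are written for the regression run.
    regression_suite = unittest.TestSuite(
        [ValidationTests("test_login"), ValidationTests("test_password_reset")]
    )

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(regression_suite)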

Alpha and Beta Testing

It's best to provide customers with an outline of
the things that you would like them to focus on
and specific test scenarios for them to execute.
Provide customers who are actively involved
with a commitment to fix the defects that they
discover.

Acceptance Testing
Similar to validation testing except that
customers are present or directly
involved.
Usually the tests are developed by the
customer

Test Methods
White box or glass box testing
Black box testing
Top-down and bottom-up for performing
incremental integration
ALAC (Act-like-a-customer)

Boundary Value Testing

A type of black-box functional testing:
The program takes inputs and maps them to outputs.
The internals of the program itself are not considered.

A technique to generate test cases by considering (mostly)
the inputs to the program.
The rationale for this focus is that experience
indicates that errors tend to occur at the
extreme points:
Input data (legal versus illegal)
Loop iteration (beginning and ending of loops)
Output fields (legal versus illegal)
A sketch of boundary-value case generation follows.
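
A minimal sketch of generating boundary-value cases for a numeric
input; the legal range 0..120 and the on/inside/outside picks are
illustrative assumptions.

    def boundary_cases(lo, hi):
        # Classic boundary-value picks: on, just inside, and just
        # outside each end of the legal range [lo, hi].
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    def accepts_age(age):
        # Hypothetical system under test: ages 0..120 are legal.
        return 0 <= age <= 120

    for case in boundary_cases(0, 120):
        print(f"age={case:4d}  accepted={accepts_age(case)}")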

A simple example

Consider a program that reads the age of each
person here and computes the average age of the
people.
input(s) -> Program -> output: average age
How would you test this?
How many test cases would you generate?
What type of test data would you input to test this
program?

Input(s) to the Average Program

First question: how many input data?
The answer is some number, 1 or more --- not too sure, yet.

Second question: what value should each input
age be?
Try some typical age such as 23 or 45.
Try some atypical age such as 125 or 700.
How about trying a wrong age of -5 or 0 or @?

When we try the atypical or wrong ages, we may
discover that the program does not handle or process
them properly, possibly resulting in a failure. Failure in this
case may include a strange answer, but not necessarily
program termination. A sketch of such tests follows.
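
A hedged sketch of the exercise in Python: a hypothetical average-age
program plus tests for the typical, atypical, and wrong inputs
discussed above.

    import unittest

    def average_age(ages):
        # Hypothetical program under test, with the validation the
        # exercise suggests it may be missing.
        if not ages:
            raise ValueError("need at least one age")
        for age in ages:
            if not isinstance(age, int) or not 0 < age <= 120:
                raise ValueError(f"illegal age: {age!r}")
        return sum(ages) / len(ages)

    class AverageAgeTests(unittest.TestCase):
        def test_typical_ages(self):
            self.assertEqual(average_age([23, 45]), 34.0)

        def test_atypical_age_rejected(self):
            with self.assertRaises(ValueError):
                average_age([125])

        def test_wrong_input_rejected(self):
            with self.assertRaises(ValueError):
                average_age([-5, 0, "@"])

    if __name__ == "__main__":
        unittest.main()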

Most Common Software Problems

Incorrect calculations
Incorrect data edits & ineffective data edits
Incorrect matching and merging of data
Data searches that yield incorrect results
Incorrect processing of data relationships
Incorrect coding / implementation of
business rules
Inadequate software performance
Confusing or misleading data
Software usability by end users
Obsolete software
Inconsistent processing
Unreliable results or performance
Inadequate support of business needs
Incorrect or inadequate interfaces
with other systems
Inadequate performance and security
controls
Incorrect file handling

Objectives of testing

Executing a program with the intent of finding an
error.
To check if the system meets the requirements and
can be executed successfully in the intended
environment.
To check if the system is fit for purpose.
To check if the system does what it is expected to
do.

Objectives of testing

A good test case is one that has a probability of
finding an as-yet-undiscovered error.
A successful test is one that uncovers an as-yet-
undiscovered error.
A good test is not redundant.
A good test should be best of breed.
A good test should be neither too simple nor too
complex.

Objective of a Software Tester

Find bugs as early as possible and make sure they
get fixed.
Understand the application well.
Study the functionality in detail to find where the
bugs are likely to occur.
Study the code to ensure that each and every line of
code is tested.
Create test cases in such a way that testing uncovers
the hidden bugs and also ensures that the software
is usable and reliable.

VERIFICATION & VALIDATION


Verification - typically involves reviews and meeting to
evaluate documents, plans, code, requirements, and
specifications. This can be done with checklists, issues
lists, walkthroughs, and inspection meeting.
Validation - typically involves actual testing and takes
place after verifications are completed.
Validation and Verification process continue in a cycle
till the software becomes defects free.
Srihari Techsoft

Software Development Process Cycle

[Diagram: the PDCA cycle - Plan -> Do -> Check -> Action, repeating.]

PLAN (P): Devise a plan. Define your objective and
determine the strategy and supporting methods
required to achieve that objective.
DO (D): Execute the plan. Create the conditions and
perform the necessary training to execute the plan.
CHECK (C): Check the results. Check to determine
whether work is progressing according to the plan and
whether the expected results are obtained.
ACTION (A): Take the necessary and appropriate action
if the check reveals that the work is not being performed
according to plan or not as anticipated.

Quality Assurance vs Quality Control

Quality Assurance:
A planned and systematic set of activities necessary to
provide adequate confidence that requirements are properly
established and products or services conform to specified
requirements.
An activity that establishes and evaluates the processes
to produce the products.

Quality Control:
The process by which product quality is compared with
applicable standards, and the action taken when
non-conformance is detected.
An activity which verifies if the product meets
pre-defined standards.

Quality Assurance vs Quality Control (contd)

QA helps establish processes; QC implements the process.
QA sets up measurement programs to evaluate processes;
QC verifies if specific attributes are in a specific product or service.
QA identifies weaknesses in processes and improves them;
QC identifies defects for the primary purpose of correcting them.

Responsibilities of QA and QC

QA is the responsibility of the entire team;
QC is the responsibility of the tester.
QA prevents the introduction of issues or defects;
QC detects, reports and corrects defects.
QA evaluates whether or not quality control is working, for
the primary purpose of determining whether there is a
weakness in the process;
QC evaluates if the application is working, for the primary
purpose of determining if there is a flaw / defect in the
functionalities.

Responsibilities of QA and QC (contd)

QA improves the process, which applies to every
product that will ever be produced by that process;
QC improves the development of a specific product or
service.
QA personnel should not perform quality control, unless
doing so to validate that quality control is working;
QC personnel may perform quality assurance tasks if
and when required.

Differences between Alpha and Beta Testing (Field Testing)

1. Alpha: It is always performed by the developers at the software
development site.
   Beta: It is always performed by the customers at their own site.
2. Alpha: Sometimes it is also performed by an independent testing team.
   Beta: It is not performed by an independent testing team.
3. Alpha: Alpha testing is not open to the market and public.
   Beta: Beta testing is always open to the market and public.
4. Alpha: It is conducted for the software application and project.
   Beta: It is usually conducted for a software product.
5. Alpha: It is always performed in a virtual environment.
   Beta: It is performed in a real-time environment.
6. Alpha: It is always performed within the organization.
   Beta: It is always performed outside the organization.
7. Alpha: It is a form of acceptance testing.
   Beta: It is also a form of acceptance testing.
8. Alpha: Alpha testing is performed and carried out at the developing
organization's location with the involvement of developers.
   Beta: Beta testing (field testing) is performed and carried out by
users at their own locations and sites, using customer data.
9. Alpha: It comes under the category of both white-box testing and
black-box testing.
   Beta: It is only a kind of black-box testing.
10. Alpha: Alpha testing is performed at the time of acceptance testing,
when developers test the product and project to check whether they meet
the user requirements.
    Beta: Beta testing is performed at the time when the software product
and project are marketed.
11. Alpha: It is always performed at the developer's premises in the
absence of the users.
    Beta: It is always performed at the user's premises in the absence
of the development team.
12. Alpha: Alpha testing is not known by any other name.
    Beta: Beta testing is also known as field testing.
13. Alpha: It is considered User Acceptance Testing (UAT) done at the
developer's site.
    Beta: It is also considered User Acceptance Testing (UAT), done at
the customer's or user's site.

Integration testing

Integration testing can be of two types:
1. Bottom-up integration testing
2. Top-down integration testing

Bottom-Up Integration Testing Example

Ex: ATM software is being developed which involves limited
modules in the current development cycle.
Total scope: User interaction module [calling module], Balance
calculation module, Money withdrawal module, Print receipt
module.
Current scope: Balance calculation module, Money withdrawal
module, Print receipt module.
As per the current scope, developers develop the Balance calculation,
Money withdrawal, and Print receipt modules.

Bottom-Up Integration Testing

Note 1: All these modules are sub-modules [function definition modules]
of the parent module, the User interaction module [parent module ->
with main function].
Note 2: The development team has taken the bottom-up approach of
development. That means they developed the sub-modules first, without
their parent module [which calls all these modules into action, i.e.
the User interaction module].
During the testing cycle:
To test the working of these sub-modules in integration, we need a
calling module -> the User interaction module [calling module], which
is not in scope and is undeveloped.
So, as an alternative, we use a Driver [which calls all the sub-modules]
in the absence of the parent module (see the sketch below).
This is the bottom-up approach of testing.
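
A hedged Python sketch of such a driver; the three ATM sub-module
functions are hypothetical stand-ins for the real modules.

    # Sub-modules developed in the current scope (hypothetical stand-ins).
    def calculate_balance(account):
        return account["balance"]

    def withdraw_money(account, amount):
        account["balance"] -= amount
        return amount

    def print_receipt(account, amount):
        return f"Dispensed {amount}; balance now {account['balance']}"

    def driver():
        # The driver replaces the undeveloped User interaction module:
        # it calls the sub-modules in the order the parent module would.
        account = {"balance": 500}
        assert calculate_balance(account) == 500
        assert withdraw_money(account, 200) == 200
        receipt = print_receipt(account, 200)
        assert receipt == "Dispensed 200; balance now 300"
        print("bottom-up cluster test passed:", receipt)

    if __name__ == "__main__":
        driver()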

Top-Down Integration Testing Example

Ex: ATM software is being developed which involves limited
modules in the current development cycle.
Total scope: User interaction module [calling module], Balance
calculation module, Money withdrawal module, Print receipt
module.
Current scope: User interaction module [calling module].
Note 1: No sub-modules are being developed.
Note 2: The top-down approach of development is involved.

Top-Down Integration Testing

The testing team is supposed to test the main module,
the User interaction module [calling module], in
integration with sub-modules which are not yet
available.
So we employ stubs, which act as artificial,
temporary sub-modules and handle calls from the
User interaction module [calling module] (see the sketch below).
This is how the top-down integration approach is
taken.
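
And a hedged Python sketch of the matching stubs; the canned return
values are made-up placeholders.

    # Stubs standing in for the three undeveloped sub-modules: each one
    # just answers a call from the parent with a canned value.
    def calculate_balance_stub(account):
        return 500  # canned balance

    def withdraw_money_stub(account, amount):
        return amount  # pretend the cash was dispensed

    def print_receipt_stub(account, amount):
        return "stub receipt"

    def user_interaction_module(account, amount):
        # The real parent module under test: it drives the calls that
        # the stubs absorb, so its control flow can be exercised today.
        if amount > calculate_balance_stub(account):
            return "insufficient funds"
        withdraw_money_stub(account, amount)
        return print_receipt_stub(account, amount)

    if __name__ == "__main__":
        print(user_interaction_module({"id": 1}, 200))   # -> stub receipt
        print(user_interaction_module({"id": 1}, 900))   # -> insufficient funds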

Thank You
