
Application Testing 2.1

Testing Competency Team

© 2010 Wipro Ltd – Internal & Restricted


Course Objectives

After completing this course, you should be able to:

• Understand the fundamental concepts of quality
• Explain how testing fits into the SDLC models
• Understand the concepts and techniques used in test case design
• Explain the following types of testing in detail: unit, regression, integration, usability, system, acceptance, ad hoc, static, cross-browser, internationalization and web testing
• Understand the software project environment basics
• Understand the basics of metrics capture


Application Testing 2.1 - Agenda

1 Quality Fundamentals
2 Testing in SDLC
3 Defect Tracking
4 Test Case Design
5 Black Box Techniques
6 White Box Techniques
7 Unit Level Testing
8 Integration Testing
9 System Testing
10 Acceptance Testing
11 Regression Testing
12 Static Testing
13 Ad hoc Testing
14 Software Project Environment
15 Web Based Testing
16 Test Metrics Capture
17 Cross Browser Testing
18 Usability Testing
19 Configuration Testing
20 Internationalization Testing
Quality Fundamentals

Quality Fundamentals

Topics Covered in this lesson are:

– Quality Definition
– Quality – Different gurus’ views
– PDCA or Deming’s Wheel
– Quality Assurance and Control
– ISO 8402 Definitions
– Verification and Validation



Quality - Definition
• Quality is “degree of excellence” – Oxford English Dictionary definition
• The word quality is used synonymously with words such as luxurious, upmarket, exclusive and prestigious
• Subjective and difficult to measure!


Quality : Crosby’s view
• Definition: ‘Conformance with requirements’
• System: prevention, not cure
• Measure: cost of quality
• Target: zero defects - right the first time
• The only way to increase productivity and lower cost is to increase quality.
Quality: Feigenbaum’s View
 Setting quality standards

 Appraising conformance to those standards

 Acting when standards are not met

 Planning for improvements to those standards


Quality: Deming’s View
 Separating special causes from common causes

 Management is responsible for common causes

 SPC /SQC Control Charts used for cause removal

 Deming Cycle – Repeatedly (Plan-Do-Check-Act)


Quality: Juran’s View
 Quality does not happen by accident

 Quality is a result of intention and actions

 Indicates the need for ‘Requirements Specs and process’


Quality: Ishikawa’s View
• Quality is beyond the quality of the product
• Quality of after-sales service, quality of management, quality of the company and quality of people - all matter
• EVERYONE contributes to the quality of output


Quality: Garvin’s View
“Quality is a complex multifaceted concept” that can be described from five perspectives:

• Transcendental view - quality is recognizable but not definable
• User view - quality means fitness for purpose
• Manufacturing view - quality means conformance to specification
• Product view - quality is tied to inherent characteristics of the product
• Value-based view - quality depends on how much the customer is willing to pay
Quality is ...

“Quality is Nothing else …”


Quality Perspective
Phil Crosby defines quality from the producer’s perspective, while Deming and Juran define quality from the end customer’s perspective.
Quality Can Be Managed
 Both these perspectives maintain that given a specific
product/service context, it is possible to define, understand,
plan, communicate, apply, measure and improve quality. In
short, Quality can be managed.
Process Approach
Focuses upon defining, implementing, reviewing and
continually improving the processes in an organization
towards the aim of achieving higher customer satisfaction
and productivity.
Continual Improvement
• Focus on continual improvement is a fundamental principle of quality management
• The approach to continuous improvement is best illustrated by the PDCA cycle, which was developed in the 1930s by Dr. Shewhart of the Bell System
Deming’s Wheel or PDCA

PLAN → DO → CHECK → ACT (and around again)
Quality : Assurance & Control
• QC is an activity that verifies whether or not the product produced meets requirements and standards
• QA is an activity that establishes and evaluates the processes that produce the products

ISO 8402 Definitions

Quality
The totality of characteristics of an entity (product or service)
that bear on its ability to satisfy stated or implied needs.
Quality Assurance
All those planned and systematic actions necessary to
provide adequate confidence that a product or service will
satisfy given requirements for quality.
Quality Control
The operational techniques and activities that are used to
fulfil requirements for quality.
Verification and Validation

• Quality control activities are of two types: verification and validation
• These include technical review, inspection, walk-through, code review and program testing
V&V
• Verification activities focus upon ensuring that the product is being built right
• Validation activities focus upon ensuring that the right product is built
• Testing includes verification tests and validation tests

Managing Quality

Quality Policy:
 The overall intention and direction of an organization
regarding quality as formally expressed by top
management.
Quality Management:
 That aspect of the overall management function that
determines and implements quality policy.
Quality System:
 The organizational structure, responsibilities,
procedures, processes and resources for implementing
quality management.
Testing in SDLC

Testing in SDLC

Topics Covered in this lesson are:


– SDLC Phases
– Testing in V-Model
– Preparations for testing
– Variants of SDLC models
– Waterfall Model
– Modified Waterfall Model
– Prototype model
– Iterative Enhancement Model
– Spiral Model
SDLC

• The software development life cycle (SDLC) represents the overall development process by which a software system is defined and realized
• The principal engineering processes in the SDLC are requirements, design, coding and testing
Requirements

• User requirements are understood and translated into system requirements
• Enables designers to understand the problem domain
• Enables testers to understand the expected behaviour of the system
• The output document is well understood by both the customer and the supplier
Architecture Design

• Approach/top view of the technical solution
• Defines structural layers and communication processes
• Alternative architectures are considered
• Enables developers and testers to understand the big picture of a software solution
High-Level Design

• Includes application design, database design, user-interface design, background process design, menu design, library design and security design
• Enables testers to understand the interactions between various system functions
Low-level Design

• Components, units or programs are specified for implementation
• Enables testers to understand the individual behaviour of programs
Coding

• Implements the low-level design using the chosen tools
• Results in source code
• Also referred to as construction
Testing

• Testing exercises the system and its components to find errors
• Reduces the risk of product failure
• Last stage before delivery to the customer
V-Model

• Variants of SDLC models are used in practice:
  • Waterfall model
  • Modified waterfall model
  • Prototype model
  • Iterative enhancement
  • The spiral model
• The V-model depicts the relationship between the overall SDLC cycle and the testing process
V & V in SDLC

Requirements Specification ↔ Acceptance Testing
Architecture ↔ System Testing
High Level Design ↔ Integration Testing
Detailed Design ↔ Unit Testing
Construction (base of the V)
Testing in V-model

• Even though test execution can start only after the code is built, testing-related activities can start much earlier in the life cycle
• Test plans are prepared along with project plans
• Test cases are prepared along with the requirements and design stages
• This approach helps the tester start testing immediately once the code is ready
Testing in V-model

• Sufficient effort is directed towards the planning and design of the tests
• Depicts the importance of specifying requirements and designing the solution for testability
• Depicts the engineering inputs provided to different levels of testing
Preparations for testing -1

• Understand the application domain – typical application domains are financials, sales, telecom, insurance and manufacturing
• Understand the technology and platform – software may be developed based on client-server, multi-tier, e-commerce/web technology, RDBMS, OS
• Learn how to use the application – act as an end user and run the application to get a feel for how the software works
Preparations for testing-2
• Study the Software Requirements Specification (SRS) document – the software has to be tested only against the SRS, so the tester must have sound knowledge of the software requirements
• Study the design – go through the architecture and design documents in order to prepare test cases
• Study the source code – if the test engineer needs to carry out white-box testing, study the source code to generate test cases
Variants of SDLC models

 Waterfall Model
 Modified Waterfall Model
 Prototype Model
 Iterative Enhancement Model
 Spiral Model
Waterfall Model

Simple waterfall model:

Requirements → Design → Construction → Testing
Waterfall Model

This is the simplest model, wherein the phases are organised in a linear order. In this model, the sequence of activities is:

• Feasibility analysis
• Requirements analysis and project planning
• Design
• Coding
• Testing
Waterfall Model

• The project begins with feasibility analysis. On successful demonstration of the feasibility of a project, the requirements analysis and project plan are prepared.
• Design starts only after completion of the requirements phase.
• Coding begins only after design is complete. Then testing is carried out.
• On successful completion of testing, the system is installed. After this, operation and maintenance of the system begin.
Waterfall Model

Application:
• Used where the requirements are fully known and there is no change in requirements
• Conversion of an existing project into a new project
• Short-duration projects
• Since there is no rework in each phase, it results in a high-quality product
Waterfall Model

Limitations:
• Not suitable for change-driven projects
• Not suitable for large projects
• Assumes a uniform and orderly sequence of steps
• For change-driven projects, the modified waterfall model is used; one important consideration is that the changes should be manageable
Modified Waterfall Model

Modified waterfall model: Requirements → Design → Testing, with feedback paths allowing change between phases
Prototype Model

 This is used for systems which are not clear to both the
customer and the developer.
 Initially a GUI is prepared with the basic idea in mind. This
forms the requirements phase.
 Then design, coding and testing are carried out by using
any appropriate models.
Prototype Model

Prototype model: Requirements → Design → Code → Test, iterated over the prototype
Prototype Model

• The goal of prototype-based development is to counter the limitations of the waterfall model
• The basic idea is that instead of freezing the requirements before design or coding can proceed, a prototype is built to understand the requirements; the prototype is developed from the currently known requirements
Prototype Model

• This is an alternative for systems for which there is no manual or existing system to help determine the requirements
• After the prototype is developed, the end user and the client are permitted to use and play with the application
• They provide feedback, and based on this the prototype is modified
• Disadvantage: development cost will be high
Iterative Enhancement Model

• The core of the requirements should be known
• The basic idea is that the software should be developed in increments, each increment adding some functional capability to the system, until the full system is implemented
Iterative Enhancement Model
Iterative enhancement model: starting from the known core of the requirements, Design → Code → Test is repeated for Release 1, Release 2, and so on
Iterative Enhancement Model

Advantages:
• The customer always gets software that works
• Useful for product development
• Cost benefit
Spiral Model

Proposed by Barry Boehm:

• Activities are organised in a spiral that has many cycles
• Each cycle begins with determination of objectives, alternatives and constraints
• Different alternatives are evaluated against the objectives, and risks are identified and resolved
Spiral Model

• This step may involve activities such as benchmarking, simulation and prototyping
• Design verification activities and testing activities follow
• Finally, plan for the next phase
Spiral Model

Advantages:
• Each cycle involves review
• Each cycle plans the next cycle
• Useful for development as well as enhancement projects
• Preferred for high-risk projects
Exercises

1. Consider a commercial project with a clear SRS which should be completed within one month. Which development life cycle model would you prefer? Why?
2. Consider a project where the user does not know exactly what the software is supposed to do. You have to develop the software as per the given SRS, and there might be enhancements in the future. Which life cycle model would you prefer? Why?
Exercises

3. Consider a project where the core of the requirements is known. You have to develop the software within time and cost constraints. Which life cycle would you prefer? Why?
4. Consider a project with high development risk. Which life cycle would you prefer for this kind of project? Why?
Defect Tracking

Defect Tracking

Topics Covered in this lesson are:


– Bug tracking systems
– Features of bug tracking systems
– Defect tracking workflow
– Defect metrics
– Template of defect report



Bug Tracking Systems

Why are there bugs?

• Bugs exist because humans aren't perfect.
• A bug tracking system can be used to manage this situation.
Bug Tracking Systems

A bug tracking system helps manage software development projects by tracking software bugs, action items, and change requests with problem reports.
Features of Bug Tracking System

• These software testing tools help manage development projects more efficiently by tracking bugs.
• They enable a better response to the user, because they identify how many bugs are in process and how many bugs are closed.
• They record bug reports in a database, allow simultaneous access by different users, and offer bug classification.
Features of Bug Tracking System

• A defect tracking system allows you to keep track of defect information during software testing.
• It provides general information such as:
  • Origin of defect
  • Status
  • Symptoms
  • Repair priority
  • Severity
  • Other details
Specifying the Defect Information

General - Allows you to specify the basic information of the defect: priority, severity, number of times the problem occurred, and symptoms.

Priority - Specifies the repair priority of the defect. The default priority settings are:
1 - Resolve immediately
2 - Give high attention
3 - Normal queue
4 - Low priority
Specifying the Defect Information

Severity - Allows you to specify the effect that the defect has on the application under test. The default severity settings are:
1 - Critical
2 - Major
3 - Average
4 - Minor
5 - Enhancement
Specifying the Defect Information

Severity can be customized using the admin option.

• Occurrences - Specifies how many times the defect has occurred during testing.
• Symptoms - Allows you to specify the symptoms of the defect.

Why do we need two parameters, severity and priority?

• Severity tells us how bad the defect is.
• Priority tells us how soon it is desired to fix the problem.
Specifying the Defect Information

The default symptom settings are:

• Cosmetic flaw
• Data corruption
• Data loss
• Documentation issue
• Incorrect operation
• Installation problem
• Missing feature
• Slow performance
• System crash
• Unexpected behaviour
Specifying the Defect Information

Origin - Allows you to specify information about the test environment, test data, and the reporting user or company.

Other - Allows you to enter additional information related to the defect.
Specifying the Defect Information

Status - Allows you to specify the current status of the defect and track the history of the defect from discovery to final repair.

Action - Allows you to specify the action to be performed on the defect and to update the defect status. The default actions are:

Enter - Creates a defect with the status of “New”.
Open - Changes the defect status to “Open”.
Specifying the Defect Information

Re-Assign - Allows you to assign the defect to a different person without changing the status.
Re-Open - Changes the status from “Closed” to “Open”.
Reject - Sets the default status as “Open”.
Specifying the Defect Information

Close - Sets the default status as “Closed”. Actions can be customized using the admin option.

Notes - Allows you to enter a description of the defect.

Resolution - Allows you to specify information about the repair of the defect and which software components were modified.
General flow of the overall testing and defect tracking sequence

• Run an automated test case from the test procedure.
• Test failure? If yes: generate a defect from the test log using the test log viewer, then review its status in the defect form. If no: exit the viewer.
• For manual testing: enter the defect directly into the defect management tool.
Defect tracking workflow

1. Bug found by the QA engineer.
2. Submitted to the team leader for review.
3. Bug submitted to the review team, which assigns owner, severity, priority, status, etc.
4. Development team fixes the bug.
5. QA team verifies that the bug is fixed.
6. Bug gets closed; the system maintains a complete audit trail of all activity.
Defect Metrics

Defect data provides a wealth of information for analysis. Some of the frequently used metrics are:

• Number of defects
• Defect density (defects / size - KLOC, FP, COSMIC FFP)
• Defects per test level
• Defects per unit / module
• Defects per cause
• Defects per status
• Defects per priority
Template of Defect Report

Defect template includes following fields.


1) Defect ID
2) Defect Description
3) Severity
4) Priority
5) Status
6) Tested By
7) Tested Date
8) Fixed By
9) Fixed Date
10) Comments
Test Case Design

Topics

 What is a Test Case ?

 Designing effective test cases

 Black-box or Functional Testing

 White-box or Structural Testing

 What is a good strategy?


Test Case Design

• Beyond the psychological and economic aspects of testing, the most important aspect in software testing is the design of effective test cases
• Complete testing is impossible, and a test of any program must necessarily be incomplete
• The obvious strategy is to reduce this incompleteness as much as possible
Design Strategy

• Given the constraints on time, cost, etc., the key question is:
• What subset of all possible test cases has the highest probability of detecting the most errors?
What is a test case?

• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Main Approaches
• The two main approaches to designing test cases are black-box testing and white-box testing.
Black-Box Testing

• The tester views the program as a black box.
• The tester is completely unconcerned about the internal behaviour or structure of the program.
• The tester is only interested in finding circumstances in which the program does not behave according to its specifications.
• Test data are derived solely from the specifications.
Black Box Testing
Why black box testing?
• The purpose of black box testing is to cause failures in order to make the faults visible.
• Another purpose is to assess the overall quality level of the code.

Why the name “black box”?
• The software tester does not have access to the source code.
• The code is considered a “black box” to the tester, who cannot see inside the box.
• Based on the specifications, the tester knows what outcomes to expect from the black box.
Black Box Testing
• Black box testing is also called functional testing, data-driven testing or closed box testing.
• It tests a system or component whose inputs, outputs and general function are known, but whose contents or implementation are unknown.
Black Box Testing
• The tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs.
• The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.
Testing Strategies / Techniques
• Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester as to the methods of the function.
• Data outside of the specified input range should be tested to check the robustness of the program.
• Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output.
Testing Strategies / Techniques

• The number zero should be tested when numerical data is to be input.
• Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real-time systems.
• Crash testing should be performed to see what it takes to bring the system down.
Advantages

• More effective on larger units of code than glass box testing
• The tester needs no knowledge of the implementation, including specific programming languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Test cases can be designed as soon as the specifications are complete
Disadvantages
• Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
• Without clear and concise specifications, test cases are hard to design.
• Could result in unnecessary repetition of test inputs if the tester is not informed of test cases already tried.
• May leave many program paths untested.
Black-Box Methods

 Equivalence partitioning

 Boundary-Value Analysis

 Cause-effect graphing

 Error-Guessing
White-Box Testing

• The tester is permitted to examine the internal structure of the program.
• The tester derives test data from an examination of the program logic.
• Common methods are statement coverage, decision coverage and condition coverage.
White Box testing is…

… an approach that examines the program structure and derives test data from the program logic.

White box testing is also known as:
• Logic-driven testing
• Structured testing
• Glass box testing
White Box Testing is to

Derive test cases:
• Based on the program structure
• To guarantee that all the independent paths within a program module have been tested
White-Box Methods
 Statement Coverage.
 Decision/Condition-Coverage.
 Path Coverage.
What is a good strategy?

• A good mix of both approaches:
• Develop a rigorous test by using black-box-oriented test case design methodologies
• Supplement these test cases by examining the logic of the program
• Try to use most of the methods, if not all, since each method has distinct strengths and weaknesses

Testing spectrum: black box testing - grey box testing - white box testing
Black Box Design Techniques

Black Box Design Techniques

Topics Covered in this lesson are:

– Equivalence Partitioning
– Boundary Value Analysis
– Cause Effect Graphing
– Decision Table
– Error Guessing



Equivalence Partitioning

• The idea is to partition the input space into a small number of equivalence classes such that, according to the specification, every element of a given class is “handled” (i.e., mapped to an output) “in the same manner.” Equivalence partitioning strives to define test cases that uncover classes of errors and thereby reduces the number of test cases needed.
• Two types of classes are identified:
  • Valid (V) - corresponding to inputs deemed valid by the specification
  • Invalid (I) - corresponding to inputs deemed erroneous by the specification
• The technique is also known as input space partitioning.
Equivalence Partitioning

• A group of tests forms an equivalence class if:
  • They all test the same thing
  • If one test finds a defect, the others will
  • If one test does not find a defect, the others will not
• Tests are grouped into one equivalence class when:
  • They involve the same input variables
  • They result in similar operations in the program
  • They affect the same output variables
Finding equivalence classes

1. Identify all Inputs

2. Identify all Outputs

3. Identify equivalence classes for each inputs

4. Identify equivalence classes for each output

5. Ensure that test cases test each input / output equivalence


class at least once.
Boundary Value Analysis

• Boundary value analysis leads to the selection of test cases that exercise bounding values.
• BVA is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the “edges” of the class.
Number of Values

• A file will contain 1-25 records
• Identify the minimum, the maximum, and values just below the minimum and above the maximum
• Boundary values:
  • Empty file (invalid)
  • File with 1 record (valid)
  • 25 records (valid)
  • 26 records (invalid)
Cause Effect Graphing

• Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions.
• It is a testing technique that employs black box testing theory.
• “Cause” refers to the input and “effect” to the output.
• Selective sets of data can be extracted from a wide spectrum, eliminating redundancy and resulting in a significant reduction in the number of test cases.
Cause Effect Graphing

1. Divide the specification into small workable pieces.
2. List:
   a. Causes - input equivalence classes
   b. Effects - output equivalence classes
3. The semantic content of the specification is analyzed and transformed into a Boolean graph linking the causes to the effects.
4. By methodically tracing state conditions in the graph, the graph is converted to a limited-entry decision table.
5. The columns in the decision table are converted into test cases.
Boolean Graph - Notation

• Notation 1: Identity - if the input condition is true, then the output condition is true.
• Example: salary calculation - if the employee is a salesperson (TRUE), then commission is applicable (TRUE).
• IF A = 1 THEN B = 1 (graph: A → B)
• “A” represents an INPUT condition (CAUSE), while “B” represents an OUTPUT condition (EFFECT).
Notation 2: NOT

• If the input condition is true, then the output condition is false.
• Example: salary calculation
  • If the employee is a salesperson (TRUE), then the bonus is not applicable (FALSE).
  • If the employee is not a salesperson (FALSE), then the bonus is applicable (TRUE).
• IF A = 1 THEN B = 0, ELSE B = 1 (graph: A —NOT→ B)
• “A” represents an INPUT condition (CAUSE), while “B” represents an OUTPUT condition (EFFECT).
Notation 3: OR

• Notation 3: OR - if one or more distinct input conditions are true, then a distinct output condition is true.
• Even if only one of the input conditions is true, the output condition is true.
• Example: salary calculation - if the employee is a salesperson or sales manager or VP of sales, then commission is applicable.
• IF A = 1 OR B = 1 OR C = 1 THEN D = 1 (graph: A, B, C —OR→ D)
• A, B and C represent INPUT conditions (CAUSE), while D represents the OUTPUT condition (EFFECT).
Notation 4 : AND

• If and only if all input conditions are true, then the output condition is true.
• If even one of the input conditions is false, then the output condition is false.
• Example: salary calculation - if the employee is a salesperson and sales_made > $30,000 and grade > 3, then commission is applicable.
• IF A = 1 AND B = 1 THEN C = 1 (graph: A, B —AND→ C)
• A and B represent INPUT conditions (CAUSE), while C represents the OUTPUT condition (EFFECT).
Decision Tables

• A decision table is an effective tool in designing test cases.
• A decision table has two parts:
  • a condition part
  • an action part
• The two together specify under what conditions an action will be performed.
Decision Table-Nomenclature

• C: denotes a condition (Cause)


• A: denotes an action (Effect)
• Y: denotes true
• N: denotes false
• X: denotes action to be taken.
• Blank in condition: denotes “don’t care”
• Blank in action: denotes “do not take the action”
Decision Table - Analysis

• Are all possible combinations of conditions covered?
• No! Which ones are not covered?
• We need a default action for the uncovered combinations. A default action could be an error report or a reset.
Error Guessing

• Testers surmise, both by intuition and experience, certain probable types of errors and then write test cases to explore these errors.
• It is difficult to give a procedure for the error-guessing technique, since it is largely an intuitive and ad hoc process.
Error Guessing

• Examples:
  • The presence of the value “0” in a program's input or output is an error-prone situation.
  • Another idea is to identify test cases associated with assumptions that the programmer might have made when reading the specifications.
White Box Design Techniques

White Box Design Techniques

Topics Covered in this lesson are:

– Statement Coverage
– Decision Coverage
– Path Coverage
– McCabe Cyclomatic Complexity



Statement Coverage

• This coverage is also known as:
  • Line coverage
  • Segment coverage
  • Basic block coverage
• 100% statement coverage means that the test data ensure that every statement is executed at least once during testing.
Statement Coverage

A disadvantage of statement coverage is that it is insensitive to some control structures.

For example:

int a = 1000, b = 500;
if (a > b)
    a = 100;
printf("%d", a);

This single test executes every statement (100% statement coverage), yet the case where the condition is false (a <= b) is never exercised.
Decision coverage

• Decision coverage is also known as:
  • Branch coverage
  • All-edges coverage
• 100% branch coverage means that the test data ensure that every branch is executed at least once during testing.
• Branch coverage subsumes statement coverage.
Decision coverage

if (a >= b) then
    a = 100
    print a
else
    b = 100
    print b
end if
stop
Path Coverage

• Studies indicate that there are many errors whose presence is not detected by branch testing, because some errors are related to combinations of branches; their presence is revealed only by an execution that follows a path including those branches.
Path Coverage

• We need a general coverage criterion that requires all possible paths to be executed during testing.
• This is called the path coverage criterion, and the testing is called path testing.
• Since path coverage leads to a potentially infinite number of paths, a criterion between branch coverage and path coverage is required.
Path Coverage

• The basic aim of such approaches is to select a set of paths that ensures the branch coverage criterion, and to try some other paths that may reveal errors.
• Another approach is based on cyclomatic complexity.
• A module with cyclomatic complexity V has at least V distinct paths.
Path Coverage

1. The path testing mechanism was proposed by McCabe.
2. The aim is to derive a logical complexity measure of a procedural design, and to use this as a guide for defining a basis set of execution paths.
McCabe Complexity Metric

• Cyclomatic complexity was introduced by Tom McCabe in 1976.
• It is based on the flow graph of the program.
• It measures the number of linearly independent paths through a function.
• It is also referred to as complexity or McCabe’s cyclomatic complexity.
McCabe’s Cyclomatic Complexity

Three equivalent ways to compute it:

1. Number of edges - number of nodes + 2
2. Number of bounded regions + 1
3. The graph matrix method
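The first formula can be sketched as a one-line Python helper (illustrative code, applied to this lesson's example flow graph of 6 edges and 6 nodes):

```python
def cyclomatic_complexity(edges, nodes):
    # V(G) = E - N + 2 for a single connected flow graph.
    return edges - nodes + 2

# The lesson's example flow graph has 6 edges and 6 nodes:
assert cyclomatic_complexity(6, 6) == 2
```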
Example Program

main()
{
    int a, b;
    scanf("%d %d", &a, &b);
    if (a > b)
        printf("a is greater");
    else
        printf("b is greater");
    getch();
}
Flow Graph Notation

(Figure: flow-graph symbols for the sequence, if, while, until, and case constructs.)
In the flow graphs:

• Arrows, called edges, represent flow of control, or the connection between two nodes.
• Circles, called nodes, represent one or more procedural statements and actions.
• Areas bounded by edges and nodes are called regions.
• A predicate node is a node containing a condition.
Flow Graph

(Figure: flow graph of the example program, nodes 1 to 6 connected by six edges; predicate node 3 branches to nodes 4 and 5, which rejoin at node 6.)
McCabe Number

Number of edges - number of nodes + 2
6 - 6 + 2 = 2

The independent paths are:

Path 1: 1, 2, 3, 5, 6
Path 2: 1, 2, 3, 4, 6
Bounded Region

(Figure: the single region enclosed by nodes 3, 4, 5, and 6.)

The total number of bounded regions is 1 (i.e., the region 3, 4, 5, 6). The McCabe number is:

Number of bounded regions + 1 = 1 + 1 = 2
Graph Matrix

• A graph matrix is a square matrix whose number of rows and columns corresponds to the number of nodes in a flow graph.
• Each row and column corresponds to a node.
• Matrix entries correspond to connections between nodes.
Matrix Method

      1   2   3   4   5   6   Connections - 1
1     -   1   -   -   -   -   1 - 1 = 0
2     -   -   1   -   -   -   1 - 1 = 0
3     -   -   -   1   1   -   2 - 1 = 1
4     -   -   -   -   -   1   1 - 1 = 0
5     -   -   -   -   -   1   1 - 1 = 0
6     -   -   -   -   -   -   Total = 1

Number of independent paths = 1 + 1 = 2
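The matrix method can be sketched in Python (an illustrative helper; the adjacency matrix mirrors the lesson's flow graph):

```python
def complexity_from_matrix(matrix):
    # Graph-matrix method: each node with at least one outgoing
    # connection contributes (connections - 1); the total plus 1
    # gives the cyclomatic complexity V(G).
    total = 0
    for row in matrix:
        out = sum(row)
        if out > 0:
            total += out - 1
    return total + 1

# Adjacency matrix of the lesson's flow graph (1 = edge present).
m = [
    [0, 1, 0, 0, 0, 0],  # 1 -> 2
    [0, 0, 1, 0, 0, 0],  # 2 -> 3
    [0, 0, 0, 1, 1, 0],  # 3 -> 4 and 3 -> 5 (predicate node)
    [0, 0, 0, 0, 0, 1],  # 4 -> 6
    [0, 0, 0, 0, 0, 1],  # 5 -> 6
    [0, 0, 0, 0, 0, 0],  # 6 (exit node)
]
assert complexity_from_matrix(m) == 2
```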


To Write Test Cases

1. Using the design or code, draw the corresponding flow


graph.
2. Determine the Cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path
in the basis set.
Checklist

Sr. No.   Independent Path
1.        1, 2, 3, 5, 6
2.        1, 2, 3, 4, 6
Sample Test Case

Sr. No.   Test Case ID   Test Data               Expected Result                                        Actual Result
1         UTC 001        Enter a = 1 and b = 2   To accept and display “b is greater”
2         UTC 002        Enter a = 2 and b = 1   To accept and display “a is greater”
3         UTC 003        Enter a = 1 and b = 1   To accept and display “b is greater” (a > b is false)
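The basis-set test cases can be automated with a Python sketch of the example program (note that for a = b the program as written takes the else branch):

```python
def compare(a, b):
    # The lesson's example program as a testable function.
    if a > b:
        return "a is greater"
    return "b is greater"

# One test case per independent path in the basis set (V(G) = 2):
assert compare(2, 1) == "a is greater"   # path through the true branch
assert compare(1, 2) == "b is greater"   # path through the false branch

# Boundary case: a == b also takes the false branch as coded.
assert compare(1, 1) == "b is greater"
```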
Unit Level Testing

Unit Level Testing

Topics Covered in this lesson are:

– Levels of Testing
– Why Testing Levels?
– Unit Testing Activities
– Examples
– Unit Test Plan
– Unit Test Case



Unit Testing

 Why is it important?
• It is inefficient and ineffective to test the system solely as a ‘Big Black Box’.
• The viable approach is to perform a hierarchy of tests.
• Ensure reasonable and consistent behavior at the lowest level of the product.
• Experience has shown that unit testing is cost-effective.
Unit Testing

• Focuses verification effort on the smallest unit of software design.
• The main characteristic that distinguishes a unit is that it is small enough to test thoroughly, if not exhaustively. The small size of units allows a high level of code/functionality coverage.
• It is also easier to locate and remove bugs at this level of testing.
Unit Testing

• The lowest level of testing.

• Individual units of the software are tested in isolation from other parts of the program.
Unit Testing

• What can be a UNIT?

  • Screen / front-end program
  • Back-end related to a screen
  • Screen + back-end
  • Library / DB stored units
  • Batch programs
  • Report programs
Unit Testing Activities - 1

Field Level Checks

• Null / not null checks
• Uniqueness checks
• Length checks
• Date field checks
• Numeric checks
• Alphanumeric checks
• Negative checks
• Default display

Contd ...
Unit Testing Activities - 2

Field Level Validation

• Test all validations for an input field
• Date range checks
• Date check validation against the system date

Functionality Checks
• Screen functionality
• Referential integrity checks
• Field dependencies

Contd ...
Unit Testing Activities - 3

User Interface Checks

• Readability of the controls
• Tool tip validations
• Ease of use of the interface across screens
• Consistency of the user interface across the product
• User interface dialogs
• Tab-related checks for screen controls
Unit Testing

• The module interface is tested to ensure that information properly flows into and out of the program unit under test.

• Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.
Example

Login Screen
Example

• FLC for the login screen:
  • Username & Password:
    • Length
    • Alphanumeric
    • Default
    • Null / not null

• FC for the login screen:
  • OK button:
    i. For a valid username and password, should log in to the application.
    ii. For an invalid username and password, should prompt error messages.
Example

FC for the login screen:

• CANCEL button:
  • Should close the form.

User interface checks:
• Tab-related checks
• Tool tip validations
• Readability of the controls
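A minimal Python sketch of unit tests for such field-level checks (the validation rules and messages are assumed for illustration, not taken from an actual login screen):

```python
def validate_username(name):
    # Hypothetical field-level checks (FLC) for the username field:
    # not null, length <= 30, alphanumeric only.
    if not name:
        return "Username is required"
    if len(name) > 30:
        return "Username too long"
    if not name.isalnum():
        return "Username must be alphanumeric"
    return "OK"

# One unit test case per field-level check:
assert validate_username("") == "Username is required"       # null check
assert validate_username("x" * 31) == "Username too long"    # length check
assert validate_username("user@1") == "Username must be alphanumeric"
assert validate_username("user1") == "OK"                    # valid value
```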
Unit Test Plan

• Sample:
  • Unit Test Plan ID: UTP-YahooMail
  • Unit Test Plan Reference: SRS
  • Module/Screen: Registration Screen
  • Author: “Name of the person who prepares the plan”
  • Dated: 12/12/03
Unit Test Plan

Sr. No.   Test Plan ID   Field Name   Type of Check   Null   Unique   Length   Alphanumeric   Numeric   Date   Negative   Default   Functionality
1         UTP1           First Name   FLC             Yes    No       Yes      Yes            No        No     No         Yes       No
Unit Test Case

• Sample:
  • Unit Test Plan Reference: UTP-YahooMail
  • Module/Screen: Registration Screen
  • Author: “Name of the person who prepares the test design”, e.g. Remi Jullian
  • Dated: 12/12/08
Unit Test Case

Sr. No.   Test Plan ID   Test Case ID   Test Case Sequence                                 Test Data                              Expected Result
1         UTP1           UTC1           Do not enter any value in the field and proceed.   Null                                   Should prompt the user to enter a value.
2         UTP1           UTC2           Enter a value with more than 30 characters.        “asdfasdfasdfasdfasdfasdfasdfasdf”     Should prompt a proper error message.
Defect Reporting and Tracking

• Defect ID, test case reference, defect description
• Steps to simulate the defect, tester name, test date/time
• Defect fixer name, date/time
• Defect fix verification
• Additional test case reference
• Regression test details
• Defect closure
Defect Template

Sr. No.   Defect ID   Test Case Reference   Defect Description   Status Details   Tested By   Tested Date/Time   Severity   Priority   Fixed By   Fixed Date   Fix Verified By
Integration Testing

Integration Testing

Topics Covered in this lesson are:

– Integration Testing
– Types of Integration
– Types of Checks in Integration
– Examples
– Integration Test Plan
– Integration Test Case



Integration Testing

• Focuses on verifying the component interfaces when the units are combined to function as a single sub-system.

• The goal is to see whether the modules can be integrated properly.

• This test activity can be considered as testing the design.
Integration Testing

• An intermediate level of testing.

• One of the most difficult aspects of software development is the integration and testing of large, untested sub-systems. The integrated system frequently fails in significant and mysterious ways, and it is difficult to fix.

• Progressively, unit-tested software components are integrated and tested until the software works as a whole.

• Tests evaluate the interaction and consistency of interacting components.

Contd ...
Integration Testing

• Integration testing refers to the testing in which software units of an application are combined and tested to evaluate the interaction between them.

• Types of integration testing:
   Big Bang testing
   Bottom-up testing
   Top-down testing
   Sandwich testing
Types of Integration

Big Bang Testing

• In Big Bang testing, the software components of an application are combined into an overall system.

• According to this approach, every module is first unit tested in isolation. After each module is tested, all of the modules are integrated at once (see Fig 2).

Contd…
Types of Integration

Big Bang Integration Testing

Module 1

Module 6 Module 2

System

Module 5 Module 3

Module 4

Fig 2
Types of Integration

Bottom-Up Integration
• An integration testing technique that tests the low-level components first, using test drivers in place of the not-yet-developed higher-level components that call the low-level components.
• Begins construction and testing with atomic modules (i.e., modules at the lowest levels in the program structure).
• The program is merged and tested from the bottom to the top.

Contd…
Types of Integration Testing

 Bottom-Up Integration

• The terminal modules are tested in isolation first; then the next set of higher-level modules are tested with the previously tested lower modules (see Fig 3).

• Low-level modules can be combined into clusters (sometimes called builds) that perform a specific software sub-function. Each cluster is tested.

• Drivers (control programs for testing) are removed, and clusters are combined moving upward in the program structure.

Contd…
Types of Integration Testing

 Bottom-Up Integration
• Major emphasis is on module functionality and performance.

 Types of Drivers
• Driver A - invokes the subordinate module
• Driver B - sends parameters from a table (or external file)
• Driver C - displays parameters
• Driver D - a combination of Drivers B and C
Contd…
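A minimal Python sketch of a table-fed driver (in the spirit of Driver B above; the tax routine is an assumed low-level unit, not from the course material):

```python
# Low-level unit already coded and under test:
def compute_tax(amount):
    # Illustrative rule: flat 18% tax, rounded to 2 decimals.
    return round(amount * 0.18, 2)

def driver():
    # Throwaway driver standing in for the not-yet-written higher
    # module: it feeds parameters from a table and records whether
    # the unit under test returned the expected value.
    table = [(100.0, 18.0), (250.0, 45.0), (0.0, 0.0)]
    results = []
    for amount, expected in table:
        results.append(compute_tax(amount) == expected)
    return results

assert all(driver())
```

Once the real higher-level module exists, the driver is discarded and the modules are combined moving upward.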
Types of Integration

Bottom-Up Integration

(Fig 3: unit-tested modules combined into Cluster 1 and Cluster 2, integrated upward.)

Contd…
Types of Integration

Advantages of Bottom-Up Testing:

• Many programming and testing operations can be carried out simultaneously, yielding an apparent improvement in software development effectiveness.

• Unit testing of each module can be done very thoroughly.

• Errors in critical modules are found early.
Contd…
Types of Integration

Disadvantages of Bottom-Up Integration Testing:

• Test drivers have to be generated for modules at all levels except the top controlling one.
• We cannot test the program in the actual environment in which it will be run.
• Many modules must be integrated before a working program is available.
• Interface errors are discovered late.
Types of Integration

Top-Down Integration
• The program is merged and tested from the top to the bottom.
• Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (see Fig 4).
• Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
• Stubs are substituted for all components directly subordinate to the main control module.
• The control program is tested first.

Contd…
Types of Integration Testing

• Top-Down Integration
  • Subordinate stubs are replaced one at a time with actual components.
  • Regression testing may be conducted to ensure that new errors have not been introduced.
  • Major emphasis is on interface testing.

 Types of Stubs
• Stub A - displays a trace message
• Stub B - displays the passed parameter
• Stub C - returns a value from a table or external file
• Stub D - does a table search for the input parameter and returns the associated output parameter

Contd…
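A minimal Python sketch of a stub standing in for an unbuilt subordinate module (the canned value is an assumption, closest in spirit to a table-backed Stub C):

```python
def tax_stub(amount):
    # Stub: the real tax module is not written yet, so a canned
    # value is returned to let the top-level logic be tested.
    return 18.0

def invoice_total(amount, tax_fn):
    # Top-level control logic under top-down integration test;
    # the subordinate is injected so a stub can replace it.
    return amount + tax_fn(amount)

assert invoice_total(100.0, tax_stub) == 118.0
```

When the real tax module is ready, it replaces `tax_stub` one component at a time, and the same test is re-run as a regression check.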
Types of Integration

Top-Down Integration Testing

(Fig 4: the top-down integration hierarchy, with stubs below the main control module.)

Contd…
Types of Integration

Advantages of Top-Down Integration Testing:

• Integration testing is done in an environment that closely resembles reality, so the tested product is more reliable.

• Stubs are functionally simpler than drivers, and can therefore be written with less time and labor.

• Interface errors are discovered early.

Contd…
Types of Integration

Disadvantages of Top-Down Integration Testing:

• Unit testing of lower modules can be complicated by the complexity of upper modules.

• In the initial phase of testing, it is difficult to do coding and testing simultaneously.

• Errors in critical modules at low levels are found late.
Types of Integration Testing

• Sandwich Integration
  • A combination of bottom-up and top-down testing.
  • A top-down approach is used for the top layer.
  • A bottom-up approach is used for the bottom layer.
  • Allows integration to begin early in the testing phase.
  • Does not test individual components thoroughly before integration.
Types of Checks in Integration - 1 of 2

• Data Dependency Check

Example: In a typical ERP application, when the purchase module is used, it will use the “inventory levels” information stored in the inventory module. Similarly, when materials are issued to different departments from stores, the inventory module will check against the purchase requisition details in the purchase module. When integration testing is performed, this data dependency should be seamless to the end-user.
Types of Checks in Integration - 2 of 2

• Data Transfer Check

Example: When materials are purchased or products are sold, the financial entries relevant to these transactions must be “transferred/posted” to the financial accounting module. This relates to the flow of data from one module to another using the designed interfaces.
Example

• A mailing application (Outlook)


• Data Dependency Check (DDC)

Address Book (New Entry) Contd…


Example

• Address List
• The Address List depends on the entries in the Address Book to display the list of addresses.
Example

• Data Transfer Check

• Compose Page Address List

Contd…
Example

• The addresses selected in the TO, CC and BCC fields of the address list have to be transferred to the corresponding TO, CC and BCC fields in the compose page.

Data Transfer
Integration Test Plan

• Sample:

  • Integration Test Plan ID: ITP-Course Registration
  • Integration Test Plan Reference: SRS
  • Project Name: Course Registration
  • Project Version: 1.0

Contd….
Integration Test Plan

• Dependency Diagram:

  Student Module → Admin Module

• The Student module depends on the Admin module for the Date of Commencement (DOC).
• Author: “Name of the person who prepares the plan”
• Dated: 12/12/08

Contd…
Integration Test Plan

Sr. No.   Integration Test Plan ID   Module Name   Type of Check (DDC/DTC)   Depends On   For                    Transfers   Transfers Data To Module
1         ITP1                       Student       DDC                       Admin        Date of Commencement   -           -
Integration Test Case

• Sample:

  • Integration Test Plan Reference: ITP-Course Registration
  • Project Name/Version: Course Registration/1.0
  • Author: “Name of the person who prepares the test case”
  • Dated: 12/12/08

Contd…
Integration Test Case

Sr. No.   Integration Test Plan ID   Integration Test Case ID   Test Case Sequence   Expected Result

1   ITP1   ITC1   Go to the student module and try registering a student for a course and module for which no batch is available.   On selecting the date of commencement, the application should prompt proper error messages and should not allow the user to create their own date of commencement.

2   ITP1   ITC2   Log in to the admin module. Add a new batch starting on a particular date for some course and module. Go to the student module and register a student for that course and the corresponding module.   The date of commencement for this course and module should be listed, and the user should be allowed to register for that batch.
System Testing

System Testing
Topics Covered in this lesson are:
– System Testing - Functional
– System Testing – Non Functional
– Functional System Testing
– Business Process Testing
– Performance Testing
– Performance Test reports
– Load and Stress Testing
– Security Testing
– Compatibility Testing
– Scalability Testing
– Usability Testing
System Testing

• The complete, integrated system is tested to evaluate its ability to satisfy the specified system requirements.
• Broadly, the system is evaluated for functional and non-functional requirements.
• Non-functional tests include performance, security, compatibility, volume, load, and stress.
System Testing - Functional

• Evaluate the implementation of business processes that span one or more system modules.

• Configure the system to test various business processes, their variants, and the resulting scenarios.
System Testing – Non Functional

 Simulate the system environment to evaluate performance capability.
 Develop test scripts and generate test data to evaluate system thresholds and responses under different load, stress, and volume conditions.
 Perform special-purpose tests such as security, compatibility, and usability.
Functional System Testing

• Business Process Testing:


 Identify implementation of all business processes.
 Verify the possible options / paths in execution of business
processes.
Business Process Testing

• Example – Purchase Cycle


 Purchase Request (PR) - Approval - Purchase Order (PO) -
Approval - Incoming Goods Inspection - Materials Update
-Accounts Payable - Fixed Assets update
Business Process Testing

Business Sequence:

Sequence Id   Sequence Description   Expected Result   Actual Result
Performance Testing

• Determine which performance factors are most critical to this application, and to what degree:
  • Throughput
  • Response time
  • Handling peak loads
• Determine a good strategy to evaluate the performance capability.
Performance Testing

Performance Testing Process

Step 1 - Planning:
  • Understand requirements
  • Test planning
  • Tool identification

Step 2 - Execution:
  • Set up the environment
  • Generate scripts
  • Conduct the test
  • Problem investigation

Step 3 - Reporting:
  • Generate custom reports based on: simulated load, round time, response time, CPU utilization, memory utilization, disk utilization, etc.

Load, Stress and Scalability Testing


Performance Testing

• Can be accomplished in parallel with Load and Stress testing


because you want to assess performance under all conditions.
• System performance is generally assessed in terms of response
times and throughput rates under differing processing and
configuration conditions.
Performance Reports

• Develop custom reports for each client based on the pre-defined


aspects of evaluation
• Custom reports summarize transaction processing and system
throughput
• Reports also provide detail transaction analysis

Contd…
Performance Reports

Transactions/sec (passed) - the number of completed, successful transactions performed per second.
Transactions/sec (failed) - the number of incomplete, failed transactions per second.
Performance under load - transaction times relative to the number of virtual users running at any given point during the scenario.
Load and Stress Testing

Load Testing

• Evaluate the system’s capacity to handle large amounts of data


during short time periods.

Stress Testing

• Evaluate system’s capacity to handle large numbers of processing


transactions during peak periods
Load and Stress Testing

• Eliminates surprises and ensures that a system will be able to perform under load.
• Without testing, it is impossible to know what might happen when a site has to function under a heavy load.
• Will response time degrade slowly or drop off precipitously?
• At what point might a Web site crash completely?
Load and Stress Testing

To conduct load testing on a web site:

• Apply stress to the web site by simulating real users and real activity.
• Monitor response time as the load is increased.
• Perform capacity testing to determine the maximum load a web site can handle before failing.
• Capacity testing reveals a system's ultimate limit.
Security Testing


Security Testing:

Systems are protected at different levels, based on the security requirements of the organization.

Security testing attempts to verify that the protection mechanisms built into a system will protect it from improper penetration.
Security Testing

• The tester plays the role of an individual who desires to penetrate the system:
  • Acquire passwords through external clerical means.
  • Overwhelm the system, thereby denying service to others.
  • Purposely cause system errors, hoping to penetrate the system.
  • Browse through insecure data, hoping to find the key to system entry.
• Given enough time and resources, good security testing will ultimately penetrate a system.
Recovery Testing

• Recovery Testing:

  Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.

  • Automatic - data recovery, checkpointing mechanisms, and restart are evaluated for correctness.
  • Human intervention - the mean time to repair is evaluated to determine whether it is within acceptable limits.
Compatibility Testing

 Testing a Web site, or Web-delivered application for


compatibility with a range of the leading browsers and
desktop hardware platforms
 Testing an application on a range of different operating
systems, and in combination with other applications
Compatibility Testing

 Verifying the compatibility and interoperability of a set of


enterprise-class applications in a complex hardware and
software environment designed to meet your specific
business needs

 Testing a peripheral with a wide range of PCs

 Testing a server with different add-in cards, applications, and


operating systems
Scalability Testing

• Scalability testing differs from simple load testing in that it


focuses on the performance of your Web sites, hardware and
software products, and internal applications at all the stages
from minimum to maximum load.
Usability Testing

• “Usability is a quality attribute that assesses how easy user


interfaces are to use. The word 'usability' also refers to
methods for improving ease-of-use during the design
process”
• “Usability is the measure of the quality of a user's
experience when interacting with a product or system”
Acceptance Testing

Acceptance Testing

Topics Covered in this lesson are:


– Acceptance Testing
– What is an Acceptance Test?
– Creating Acceptance Test
– Exercises



Acceptance Testing

• Acceptance testing is the process of comparing a program to its requirements.

• It tests the system with the intent of confirming readiness of the product and customer acceptance.

• The performance and reliability of the system are tested and confirmed.
Acceptance Testing

• Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.
• An acceptance test is a test that the user defines, to tell whether the system as a whole works the way the user expects.
What is an Acceptance Test

An acceptance test must:

• be written by the customer (directly or indirectly)
• be black-box oriented
• not involve writing any code, only execution
• not necessarily involve running the system exactly as the customer runs it, but should be as close to that as possible
Creating Acceptance Test

• Acceptance tests are created from user stories.


• The user stories selected during the meeting will be translated
into acceptance tests.
• Acceptance tests are black box system tests.
• Customers are responsible for verifying the correctness of the
acceptance tests and reviewing test scores to decide which failed
tests are of highest priority.
• Acceptance tests should be automated so that they can be run
often.
Exercise

1. List the steps for acceptance testing of Microsoft Notepad.
Exercise
Regression Testing

Topics

• What is Regression Testing ?


• Why Regression Testing ?
• When to do Regression Testing ?
• Regression Testing automation?
• Regression Testing tools?
Regression Testing -What

• What is Regression Testing ?


• Regression Testing refers to the selective re-testing of a
system or component to verify that modifications have not
caused unintended effects and the system component still
conforms to the specified requirements.
Regression Testing -What

• Regression testing tests for changes.

• Typically a regression test compares new output from a


program against old output run using the same data.
Example - Original

• A purchase application generates a report that lists the set of orders booked between two dates. The input is a set of two dates.

• Original Version
  • Dates input: 12-5-2002 and 12-3-2004. It lists all the orders correctly.

• Change Made
  • A condition that the dates must be in the same financial year was introduced in the program.
Example – New Version
• New Version
  • Dates input: 12-5-2003 and 12-2-2004. It lists all the orders correctly for the period.
  • Dates input: 12-5-2002 and 12-3-2004. It gives an appropriate error message.
  • Dates input: 12-5-2003 and 12-8-2003. It gives an unexpected error message. (Original functionality is lost!)
Example - Side Effect

• The last set of dates should have worked without any problem.

• The “change” introduced has produced an unwanted “side effect”.

• Identify a set of test cases which must be executed whenever “date logic changes” are made to the program.
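The side effect can be sketched in Python (the dates, the April-to-March financial year, and the injected defect are illustrative, not the slide's exact report):

```python
from datetime import date

def financial_year(d):
    # Assumed April-to-March financial year.
    return d.year if d.month >= 4 else d.year - 1

def list_orders(start, end):
    # New version: reject ranges spanning financial years.
    # Illustrative defect introduced with the change: the check
    # compares calendar years instead of financial years.
    if start.year != end.year:   # should compare financial_year()
        return "error: range spans financial years"
    return "order list for %s..%s" % (start, end)

# Regression suite: previously passing inputs are re-run unchanged.
ok = list_orders(date(2003, 5, 12), date(2003, 8, 12))
assert not ok.startswith("error")   # still works

# Same financial year (2003-04) but different calendar years:
assert financial_year(date(2003, 12, 1)) == financial_year(date(2004, 2, 1)) == 2003
side_effect = list_orders(date(2003, 12, 1), date(2004, 2, 1))
assert side_effect.startswith("error")   # regression detected!
```

A comprehensive regression suite of old, known-good inputs flags the lost functionality the moment the date logic changes.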
Regression Testing - Why

The main purpose of regression testing is to have a standard and comprehensive set of tests which can be used to compare the results of one version of the program with those of an updated or corrected version.
Regression Test - When

• Anytime you add new features.


• Anytime you tweak the system to improve
performance.
• Whenever program maintenance is done to an existing
system.
• At each new phase of a new system.
Regression Test - When

• It can be used to see if a bug in an old program has been


cleared up.

• Anytime you add a new component or third party control


to the system. The same holds true for updating an old
control.

• Regression testing can be used to easily see if different


operating systems or different versions of operating
systems impact the way the system computes the
results.
Regression Testing - Automation

• Use a regression testing utility that can run from a script


or batch file.

• Create a script or batch file that automatically runs the


new version of the program and the regression testing
utility for each test scenario.
Regression Testing - Automation

• Once the upfront work has been done to create the


scripts or batch file, the regression testing can be
scheduled to run at night, lunch or coffee breaks.

• Check to see if an error log has been created.
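A hedged Python sketch of such a batch-style regression harness, comparing the new build's output against golden output captured from the last good build (all names and data are illustrative):

```python
def run_regression(new_version, golden_suite):
    # golden_suite: (input, golden_output) pairs captured from the
    # last accepted build; any mismatch is recorded as a failure.
    failures = []
    for data, golden in golden_suite:
        actual = new_version(data)
        if actual != golden:
            failures.append((data, golden, actual))
    return failures

# Old behaviour: double the input. The new build "optimised" it
# and broke the negative-input case (an illustrative defect).
new_build = lambda x: x * 2 if x >= 0 else 0
golden_suite = [(1, 2), (5, 10), (-3, -6)]

failures = run_regression(new_build, golden_suite)
assert failures == [(-3, -6, 0)]   # the regression is flagged
```

Wrapped in a scheduled script, this comparison can run unattended overnight, with the failure list serving as the error log to check the next morning.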


Regression Testing Tools

Regression testing can be performed through tools like:

• WinRunner
• QuickTest Professional
• SilkTest
Static Testing

Static Testing

Topics Covered in this lesson are:


– Reviews
– Types of Reviews
– Benefits of Reviews
– Work products that undergo reviews
– Desk Checking
– Desk Checking – Advantages and
Disadvantages
– Code Inspection
– Objective of Code Inspection
– Composition of Code Inspection Team
Static Testing

Topics Covered in this lesson are: (Contd….)

– Rules for Code Inspection


– Inspection Process
– Classification of anomaly
– Severity of anomaly
– Benefits of Code inspection
– Guidelines for Code inspection
– Code Walkthrough
– Difference between inspection and
walkthroughs
Static Testing

Topics Covered in this lesson are: (Contd….)

– Code Walkthrough team


– Code Walkthrough process
– Output of Code Walkthrough
– Technical review of code
– Technical review process
– Output of Technical review process



Static Testing

1. Static testing is a type of testing that requires only the source code of the product, not the binaries or executables.
2. Static testing does not involve executing the programs on computers.
3. It involves selected people going through the code to find out whether:
   1. the code works according to the functional requirements
   2. the code has been written in accordance with the design developed earlier in the project life cycle
   3. the code for any functionality has been missed out
   4. the code handles errors properly
Static Testing - Advantages

This process has several advantages:

1. Sometimes humans can find errors that computers cannot. For example, when there are two variables with similar names and the programmer uses the “wrong” variable by mistake in an expression, the computer will not detect the error but will execute the statement and produce incorrect results, whereas a human being can spot such an error.
Static Testing - Advantages

2. By making multiple humans read and evaluate the program, we get multiple perspectives, and therefore more problems are identified upfront than a computer could find.
3. A human evaluation of the code can compare it against the specifications or design, and thus ensure that it does what it is intended to do. This may not always be possible when a computer runs a test.
4. A human evaluation can detect many problems in one go, and can even try to identify the root causes of the problems, which can then be corrected with much ease.
Static Testing - Advantages

5. Having humans test the code before execution saves computer resources.
6. A proactive method like static testing minimizes the delay in identification of problems. The sooner a defect is identified and corrected, the lesser the cost of fixing it.
7. From a psychological point of view, finding defects later in the cycle (for example, after the code is compiled and the system is being put together) creates immense pressure on programmers:
8. they have to fix defects with less time to spare, and
9. with this kind of pressure, there are higher chances of other defects creeping in.
Methods of static testing

 Multiple methods of static testing are done by humans; in increasing order of formalism:

1. Desk checking of the code
2. Code walkthrough
3. Code review
4. Code inspection
Static Testing

 Static testing is the process of evaluating a system


or component based on its form, structure, content
or documentation (without computer program
execution).

 Reviews form an important activity in static


testing.
Reviews

 Reviews are "filters" applied to uncover errors from products at the end of each phase.

 A review process can be defined as a critical evaluation of an object.

 Reviews involve a group meeting to assess a work product in certain phases, such as the requirements phase, the prototyping phase, and the final delivery phase.
Types of Reviews

1. Inspections
2. Walkthroughs
3. Technical Reviews
4. Audits
Benefits of Reviews

1. Identification of anomalies at an earlier stage of the life cycle
2. Identifying needed improvements
3. Certifying correctness
4. Encouraging uniformity
5. Enforcing subjective rules
Work-products that undergo reviews

1. Software Requirement Specification
2. Software Design Description
3. Source Code
4. Software Test Documentation
5. Software User Documentation
6. System Build
7. Release Notes

Let us discuss inspections, walkthroughs, and technical reviews with respect to code.
Desk Checking

 Normally done manually by the author of the code.
 Desk checking is a method to verify portions of the code for correctness.
 Such verification is done by comparing the code with the design or specifications, to make sure that the code does what it is supposed to do, and does it effectively.
 This is the desk checking that most programmers do before compiling and executing the code.
Desk checking

1. Done manually by the author.
2. Done by comparing the code with the design or specifications.
3. For the errors found, the author applies corrections on the spot.
4. This method is characterized by:
   1. no structured method or formalism to ensure completeness, and
   2. no maintaining of a log or checklist.
Desk checking - Advantages

1. A programmer who knows the code and the programming language very well is well equipped to read and understand his or her own code
2. Since it is done by one individual, there is less scheduling and logistics overhead
3. The defect is detected and corrected with minimum time delay
Desk checking - Disadvantages

 A developer is not the best person to detect problems in his or her own code
 Developers generally prefer to write new code rather than do
any form of testing
 This method is essentially person dependent and informal and
thus may not work consistently across all developers
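As a minimal illustration of desk checking, suppose the specification says a function must return the sum 1 + 2 + ... + n (the function and spec are hypothetical). Reading the loop against the spec, without running it, reveals an off-by-one error:

```python
# Hypothetical spec: sum_upto(n) must return 1 + 2 + ... + n.
# Desk check: trace the loop by hand against the spec.

def sum_upto(n):
    total = 0
    for i in range(1, n):   # Desk check finds the bug: range(1, n)
        total += i          # stops at n - 1, so n itself is never added.
    return total

def sum_upto_fixed(n):
    total = 0
    for i in range(1, n + 1):  # Corrected after the desk check.
        total += i
    return total

print(sum_upto(5))        # 10 -- off by one
print(sum_upto_fixed(5))  # 15 -- matches the spec
```

The correction is applied on the spot by the author, which is exactly how the method is characterized above.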
Code Inspection

1. Code inspection is a visual examination of a software product to


detect and identify software anomalies including errors and
deviations from standards and specifications.
2. Inspections are conducted by peers led by impartial facilitators.
3. Inspectors are trained in Inspection techniques.
4. Determination of remedial or investigative action for an anomaly
is mandatory element of software inspection
5. Attempt to discover the solution for the fault is not part of the
inspection meeting.
Objectives of code Inspection

 Cost of detecting and fixing defects is less during early


stages.

 Gives management an insight into the development


process – through metrics.

 Inspectors learn from the inspection process.

 Allows easy transfer of ownership, should staff leave or


change responsibility.

 Build team strength at emotional level.


Composition of Code Inspection Team

1. Author
2. Reader
3. Moderator
4. Inspector
5. Recorder
Rules for Code Inspection

1. The inspection team should have 3 to 6 participants.
2. Author shall not act as Inspection leader, reader or
recorder.
3. Management member shall not participate in the
inspection.
4. The reader is responsible for leading the inspection team
through the program, interpreting sections of the work line by
line.
5. The code is related back to higher-level work products like
the design and requirements.
Inspection Process

1. Planning
2. Overview
3. Preparation
4. Inspection
5. Rework
6. Follow up
Classification of anomaly

1. Missing
2. Superfluous (additional)
3. Ambiguous
4. Inconsistent
5. Improvement desirable
6. Non-conformance to standards
7. Risk-prone (safer alternative methods are
available)
8. Factually incorrect
9. Non-implementable (due to system or time
constraints)
Benefits of Code Inspection

 Synergy – 3-6 active people work together, focused on a


common goal.
 Work product is detached from the individual.
 Identification of the anomalies at the earlier stage of the
life cycle.
 Uniformity is maintained.
Guidelines for Code Inspection

1. Adequate preparation time must be provided to participants.


2. The inspection time must be limited to 2-hour sessions,
with a maximum of 2 sessions a day.
3. The inspection meeting must be focused only on identifying
anomalies, not on the resolution of the anomalies.
4. The author must be dissociated from his work.
5. The management must not participate in the inspections.
6. Selecting the right participants for the inspection.
Code Walkthrough

 Walkthrough is a static analysis technique in which a


designer or programmer leads members of the
development team and other interested parties through a
software program.

 Participants ask questions on the program and make


comments about possible errors, violation of standards,
guidelines etc.
Objectives of Code Walkthrough

 To evaluate a software program, check conformance to


standards, guidelines and specifications
 Educating / Training participants
 Find anomalies
 Improve software program
 Consider alternative implementation if required (not
done in inspections)
Difference between Inspections and Walkthroughs

Participants
  Inspection: A group of relevant persons from different departments participates in the inspection.
  Walkthrough: Usually members of the same project team participate in the walkthrough; the author himself acts as the walkthrough leader.

Checklist
  Inspection: A checklist is used to find faults.
  Walkthrough: No checklist is used in walkthroughs.

Process
  Inspection: The process includes overview, preparation, inspection, rework and follow-up.
  Walkthrough: The process includes overview, little or no preparation, examination (the actual walkthrough meeting), rework and follow-up.
Difference between Inspections and Walkthroughs
Contd.

Formality
  Inspection: A formalized procedure is followed in each step.
  Walkthrough: No formalized procedure in the steps.

Duration
  Inspection: Takes longer, as the list of items in the checklist is tracked to completion.
  Walkthrough: Shorter, as there is no formal checklist used to evaluate the program.
Code Walkthrough Team

1. Author

2. Walkthrough Leader

3. Recorder

4. Team member
Code Walkthrough Process

 Overview

 Preparation

 Examination

 Rework / Follow-up
Outputs of Code Walkthrough

1. Walkthrough team members

2. Software program examined

3. Walkthrough objectives and whether they were met.

4. Recommendations regarding each anomaly.

5. List of actions, due dates and responsible people.


Technical Review of Code

 A technical review is a formal team evaluation of a product.

 It identifies any discrepancies from specifications and


standards or provides recommendations after the
examination of alternatives or both.
 The technical review is less formal than the formal inspection.

 The technical review participants include the author and


participants knowledgeable of the technical content of the
product being reviewed.
Technical review process

Step 1: Planning the Technical Review Meeting


Step 2: Reviewing the Product
Step 3: Conducting the Technical Review
Step 4: Resolving Defects
Step 5: Reworking the Product
Adhoc Testing

Topics

 Overview of Adhoc Testing


 Difference between planned testing and adhoc testing
 Plan for adhoc testing
 Documentation needed ?
 Advantages
 Types of adhoc testing
Overview of Ad-hoc Testing

 Ad-hoc Testing is testing carried out in an unplanned manner


 Testing done without any formal testing technique is called Ad-
hoc Testing

Some of the issues faced with planned testing are as follows:


1.Lack of clarity in requirements and other
specifications
2.Lack of skills for doing testing formally
3.Lack of time for test design
Overview of Ad-hoc Testing

• Planned testing enables catching certain types of defects.


• Though planned testing helps in boosting the tester's
confidence, it is the tester's “intuition” that often finds critical
defects
• An example is domain testing: having a domain expert test the
product without the specifications can bring in new perspectives
and help identify new types of defects. These perspectives evolve
dynamically when testers think “out of the box”
• It is possible that some of the most critical perspectives may
be missed in planned testing, which gets identified late by the
testers
• Also those critical perspectives are not reflected in the test
cases that have already been designed.
Overview of Ad-hoc Testing

 Ad-hoc testing attempts to bridge the above two gaps


 It is done to explore the undiscovered areas in the product
by using intuition, previous experience in working with the
product,
expert knowledge of the platform or technology, and
experience of testing a similar product
 It is generally done to uncover defects that are not covered by
planned testing
Difference between planned testing and
ad-hoc testing

One of the most fundamental differences between planned testing and ad-hoc testing is that,
in ad-hoc testing, test execution and test report generation take place before test case
design.
Plan for ad-hoc testing

 Ad-hoc testing can be planned in one of the two ways


 After a certain number of planned test cases are executed.
In this case, the product is likely to be in a better shape and
thus newer perspectives and defects can be uncovered.
Since ad-hoc testing doesn’t require all the test cases to be
documented immediately, this provides an opportunity to
catch multiple missing perspectives with minimal time delay.
 Prior to planned testing. This will enable gaining better clarity

on requirements and assessing the quality of the product


Documentation needed ?

 It has been mentioned that ad-hoc testing does not require the
test cases to be documented
 This is applicable only for the test execution phase
 After test execution, ad-hoc testing requires all the perspectives
that were tested to be documented as a set of test cases
 These test cases will become part of the planned test execution
for the next cycle
Advantages

 First, the perspectives gained in one round of ad-hoc testing


are formally captured and not lost
 Second, the subsequent rounds of ad-hoc testing bring in new
perspectives, without repeating the same thing
 This ensures that ad-hoc testing is intuitive every time
Types of ad-hoc testing

1. Buddy testing
2. Pair testing
3. Exploratory testing
4. Iterative testing
5. Agile and extreme testing
6. Defect seeding
Buddy testing

 This type of testing uses the “buddy system” practice, wherein
two team members are identified as buddies. The buddies
mutually help each other, with a common goal of identifying
defects early and correcting them.
 A developer and tester usually become buddies
 Buddy testing uses both white box and black box testing
approaches
Pair testing

 Pair testing is done by two testers working simultaneously on


the same machine to find defects in the work product
 When one person is executing the tests, the other person
takes notes
 The other person suggests an idea or helps in providing
additional perspectives
 Pair testing can be done during any phase of testing
Pair testing - Advantages

 Pair testing can be done during any phase of testing


 The presence of one senior member can also help in planning
 Pair testing is usually a focused session for about an hour or
two.
During this session, a pair is given a specific area to focus
and test. It is up to the pair to decide on different ways of
testing this functionality.
Exploratory testing

 Another technique to find defects in ad-hoc testing is to keep


exploring the product, covering more depth and breadth
 Exploratory testing can be done during any phase of testing
 Exploratory testers may execute their tests based on their past
experience of testing a similar product, a product in a similar
domain, or a product in the same technology area
Exploratory testing

There are different exploratory test techniques. They are:


1.Guesses
2.Architecture diagram, use cases
3.Past defects
4.Error handling
5.Discussions
6.Questions and Checklists
Iterative Testing

1. The iterative model is where the requirements keep coming


and the product is developed iteratively for each requirement.
2. The testing associated for this process is called iterative
testing
3. Iterative testing aims at testing the product for all
requirements, irrespective of the phase they belong to in the
spiral model
Agile and Extreme Testing

 Agile and extreme models take the processes to the extreme


to ensure that customer requirements are met in a timely
manner
 In this model, customers partner with the project team to go
step by step in bringing the project to completion in a phased
manner
 The customer becomes part of the project team so as to clarify
any doubts/questions
Defect Seeding

 Defect seeding is a method of intentionally introducing defects


into a product to check the rate of its detection and residual
defects
 Defects that can be seeded may vary from severe or critical
defects to cosmetic errors.
 Defect seeding may act as a guide to check the efficiency of
the inspection or testing process
Defect Seeding

 For example, assume that 20 defects are seeded in a
product. Suppose that when the test team completes testing,
it has found 12 of the seeded defects and 25 original defects.
The total number of defects that may be latent in the
product is calculated as follows:

                            Defects seeded
 Total latent defects = ----------------------- * Original defects found
                         Seeded defects found

 So, the number of estimated defects = (20/12) * 25 = 41.67

 Based on the above calculation, the estimated total number of
defects in the product is about 42; since 25 original defects have
already been found, roughly 17 remain to be found.
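The seeding estimate can be expressed as a small helper (a sketch; the function name is ours):

```python
def estimate_total_defects(seeded, seeded_found, original_found):
    """Seeding estimate: total defects ~ (seeded / seeded_found) * original_found."""
    return (seeded / seeded_found) * original_found

# The slide's example: 20 seeded, 12 seeded defects found, 25 original defects found.
estimate = estimate_total_defects(seeded=20, seeded_found=12, original_found=25)
print(round(estimate, 2))  # 41.67, i.e. about 42 defects estimated in total
```

The ratio of seeded defects found acts as a proxy for the detection rate of the test process as a whole.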
Software Project Environment

Software Project Environment

Topics Covered in this lesson are:


– Software Project Environment
– Software Project Management
– Life Cycle
– Project Deliverables
– Project Estimates
– Project Organization
– Project Schedules
– Project Risks
– Monitoring and Control
– Change Management
Software Project Environment

Topics Covered in this lesson are: (Contd…)

– Test Plan
– Software Quality Management
– Software Configuration Management
– SCM Plan
– Baselining
– Bug Fixing – Test Cycle
– Change Documentation
– Naming Conventions
– SCM Tools
Software Project Environment

Topics Covered in this lesson are: (Contd…)


– Test and Release Staging
– Test Organization – Typical Structures
– Unit Testing, Integration Testing, System
Testing by developers
– Risk Management



Software Project Environment

 Software testing takes place in a dynamic SDLC context.

 Managed by critical processes like software project


management, configuration management, quality
management.

 Need to align test plans and efforts with project objectives,


arrangements and activities.
Software Project Environment

 Test managers/leads need to understand overall project


goals/priorities and plans.

 Testers need to understand configuration management


plan and arrangements.

 Test activities need to be done in sync with the overall


project activities.
Software Project Management

 Typical software project begins with signing of a customer


contract.

 Detailed project plan is prepared to direct and guide all


the activities to be undertaken within the time-frame.
Life Cycle

 Suitable SDLC model is chosen as a vehicle to carry out all


the SDLC activities in the right sequence.

 Reflects the project milestones and expected deliverables


at each stage.

 Test team needs to derive its test inputs, activities and


test outputs from the model.
Project Deliverables

 Project plan lists project deliverables.


 Includes software, supporting documents, manuals
 Project deliverables would be customer deliverables and
internal deliverables.
 Test team needs to derive its deliverables and
associated milestone.
 Typical test deliverables include test strategy, test plan,
test design, test reports.
Project Estimates

 Project size and effort estimates are detailed in the project


plan.

 Test team to ensure that estimates for testing are in line


with the product quality expectations and associated risks.
Project Organization

 Project organization structure, roles, responsibilities, team


members, customers, suppliers are defined.

 Test team needs to understand its role and communication


responsibilities.
Project Schedule -1

 Detailed Schedule of activities like requirements, design,


coding, testing

 Test plan should sync its deliverables and activities with


the project milestones
Project Schedule - 2

Project plan allots an overall duration and person-effort to
testing.
 As the project progresses- requirements analysis, design
and construction activities tend to eat into the time
allotted for testing, which leads to a compromise of the
test efforts.
 Test team needs to proactively monitor this trend and
have strategies to prevent and/or handle the risks.
Project Risks

 Project uncertainties and related impacts are analyzed


to draw up a risk management plan.

 Test team needs to understand how these risks will


impact the test efforts.

 Test strategy and plan need to take this impact into


account.
Monitoring & control

 Include communication mechanisms, stakeholder


involvement, progress review, technical review,
management review, customer review.
 Test team needs to plan its involvement in monitoring
and control activities.
 Test team being an important stakeholder needs to
proactively get involved in these reviews and take
corrective or preventive actions related to test efforts.
Change Management

 Software requirements are subjected to changes.

 Change management plan would be detailed in a software


configuration management plan.

 Test team needs to understand the impact of changes and


reflect it in the test strategy and plan.
Test Plan

‘Test Plan’ document, as we discussed in an earlier chapter,
describes the sorts of tests to be carried out.
 Planning aspects such as schedules and resources for
preparing and executing test cases are often mentioned in
the project plan.
 In some cases, resource and schedule details are also
included in the Test Plan document, in which case the
project plan simply refers to the overall Test Plan as an
activity, with one lead/manager/tester made responsible
for that activity.
Software Quality Management

 SQM efforts in a project are driven by organization level


processes and project characteristics.

 Quality assurance is prevention oriented.

 Quality control is correction oriented.

 Testing is a correction-oriented activity.


Software Quality Management

 Test team needs to understand the overall quality


control efforts and devise test strategy.
 Test manager needs to apprise the Senior management
of the test project challenges and risks.
 Test team needs to be aware of the status of other
control activities like technical reviews, inspections on
an on-going basis and alter its strategy and plan with
management support.
Software Configuration Management

 Software requirements undergo many changes.

 One of the reasons for change is bug fixing carried out by


developer.

 SCM is the process of ensuring that the integrity of software
artifacts is retained as these changes are applied.
SCM

 Tester has to understand the configuration management


plan and base-lining strategy of the project in order to
ensure only tested, bug fixed and regression done
software gets released.
 Configuration Manager responsible for the project will
create a controlled environment for the testers and
developers to work.
SCM

 Tester need not do anything specific for configuration


management except to abide by the plan and control
mechanism; but an appreciation of the need for proper
configuration management is required for tester to
effectively contribute to the project.
SCM by definition

 A disciplined approach to managing the evolution of

software development and maintenance practices, and

their software products.


SCM Plan

 Based on the project requirements and SDLC model


chosen, a software configuration management plan is
prepared to direct and guide all configuration and change
control activities.
SCM Plan

 Identification of software artifacts like documents,


source code, tools.

 Referred as configuration items (CI’s).

 Understand the testing related CI’s and what type of


changes impact them.

 Configuration items include test plan, test cases.


Baseline

1. Baselining takes place when a CI reaches a stable state (minimal
changes).
2. Ready to be shared as a base reference.
3. Corresponding version is called baseline.
4. Formal approval required for changes.
5. Test team would typically test using a stable set of
components, that is, from a baseline.
Bug Fixing – Test Cycle

1. Bugs are reported from the field/customer.


2. Development team fixes the bug and performs a re-baseline
before it is tested.
3. Test team takes new baseline components and performs the
relevant tests.
4. Upon finding defects, the development has to rework on the
components and provide a new baseline to the test team.
5. Cycle continues till the bug fix is verified to be successful.
Bug Fixing – Test Cycle

 Product Cycle
CI -> Baseline -> Test -> Release

 Bug Fixing Cycle

Bug Report -> Fixing -> Baseline -> Test --(Success)--> Release
                 ^                    |
                 +------(Failure)-----+


Change Documentation

 Test team needs to make corresponding changes to its


test plans and test cases.

 When major changes are applied to the software, the test


team may need to revise its plan and strategy.

 Demonstrate that test plans and designs have taken latest


changes into account.
Naming conventions

 Test teams need to use appropriate naming and


versioning conventions while preparing test documents
like test plan, test case design, defect report.
Document Control

 Test Teams needs to follow the document change control


process while making changes to a test artifact.

 Proof of changes applied should be reflected in the version


history of the document being changed and the product
traceability matrix.
SCM Tools

 Tools such as CVS, Source Safe aid configuration


management by helping to retrieve earlier versions,
controlled check-in and check-out and release of software
from the configuration repository.
Test & Release Staging

 Maintain different levels of staging servers for controlling
the software during the various stages of testing: unit,
integration, alpha, beta, production
 Separate server with access control provided only to
relevant members working in that stage
 Gate-way control mechanism to move the software from
one stage to next stage server are defined
 Defect leakage to production is minimized
Test Organization

 Testing is the final stage before the software is shipped to


the customer.

 Important to organize this function for high-level objective


verification, independent organization reporting and good
visibility through out the project.
Typical Structures

 Independent test group in the organization with


reporting line to Sr. Management.

 Sub team of the project team, where all testers are


reporting to the project manager.

 Sub team of the project team, where the test team


reports to Sr. Management.
Typical Structures

 Test activity outsourced but testing is carried out inside


the company and test team reports to the Quality Head or
Sr. Management.

 Test activity is completely outsourced and testing is done


at the third-party location, the test team reports to the Sr.
Management.
Unit Testing by Developers

 Unit testing is sometimes carried out by the developer


without a peer/independent person performing the testing.

 Practice is prevalent in the industry even though it is not


the best practice.

 Unit testing aided by various standard checklists can


ensure consistency even in this case.
Unit Test Cases by Developer

 Unit level test cases prepared by the same developer and


unit tests executed by independent tester.

 Test cases review by another person can bring in


independence to larger extent.
Integration test by Developer

 Integration testing is also carried out by some of the


developers from the project team wearing the cap of
testers.

 Review of integration test cases and review of integration


test progress and defect status to be carefully done in
such situations to ensure proper testing.
System Test by Developer

 System testing is done by the selected members from


same development team.
 Can bring in bias to testing.
 System test cases can be prepared by independent
person with domain knowledge.
 System testing carried out under strict supervision to
ensure defect tracking and closure.
 Independent team can do a much better job of testing
without bias and delivery pressure.
Test Planning

 Often, lack of domain knowledge and product knowledge
has been cited as a reason not to involve others in
testing.

 Proper planning from the beginning of the project can
ensure testers are involved in project meetings.

 Testers can start learning and contribute to the project


from the early stages by preparing test cases and
reviewing test case prior to code completion.
Risk Management

• In an SDLC, the loss could be:
• In the form of diminished quality of the end product
• Increased costs
• A delayed product
• Failure
• For a risk to be understandable, it must be expressed
unambiguously and clearly.
• The statement must include a description of the current
conditions that may lead to the loss and a description of the
loss.
Risk Management

• What is Risk?

•Uncertainty -- An event may occur or


may not occur
• Why is Risk arises?

•Uncertainty

•Inexperience

•Inability to accurately forecast


Risk Management

STEPS
• Assess what can go wrong (Identification).
• Determine what risks are important to deal with
(Analysis).
• Implementing strategies to deal with those risk (Action
plan).
Risk Management

• Identification of all risks - by brainstorming
• Analysis of risks - magnitude of risk
• Magnitude of Risk = Probability of occurrence * Impact
• Impact in terms of cost, schedule, performance
• Classification of risks - ranking
• Implementation of response
Risk Management

• Magnitude of risk = Probability of Occurrence*


Impact

• Scale= 1 to 5 for both

• Occurrence - V. low, Low, Medium, High, V. high

• Impact -- Negligible, Marginal, Critical, Catastrophic
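The magnitude calculation and ranking described above can be sketched as follows (the risk names and 1-5 scores are made up for illustration):

```python
# Each risk is scored 1-5 for probability of occurrence and for impact.
risks = {
    "Requirements churn":      (4, 3),  # (probability, impact) -- hypothetical scores
    "Key tester attrition":    (2, 5),
    "Test environment delay":  (3, 2),
}

# Magnitude of risk = probability of occurrence * impact.
# Ranking: sort risks by magnitude, highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (prob, impact) in ranked:
    print(f"{name}: magnitude = {prob * impact}")
```

The resulting ranking tells the team which risks to deal with first when drawing up the response plan.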


Risk Management -- Response

• Risk Avoidance : Refrain from undertaking the work

• Risk Mitigation : Change in methodology / work


practices to reduce the impact

• Risk Transfer : Transfer the risk to someone else
(insurance policy, subcontracts)

• Risk Acceptance : Accept the risks -- Bear the loss


Risk Management Plan
• What action to be taken for each risk?

• When it must be undertaken

• Who is responsible for the action?

• Review of risk management actions

• Reporting on results to show that the plan is working


Web Based Testing

Web Based Testing

Topics Covered in this lesson are:


– Why is Web Based Testing Different?
– Web--Based Architecture with Real Users
– Special Techniques for Web Testing
– Page Flow Testing
– Database Testing
– Connectivity Testing
– Links Testing
– Link Checkers
– HTML Validators
Web Based Testing

Topics Covered in this lesson are: (Contd…)


– Java Compatibility
– Session Testing
– Content Testing
– Cookies Testing
• What is a cookie
• Why cookies
• Types of cookies
• Where cookies are stored
• Examples –– where cookies are necessary
• Drawbacks of cookies
• Test Cases for Cookies Testing
• Session Testing
• Content Testing
Page Flow Testing

 Page Flow testing deals with ensuring that the application


does not get confused by jumping to random pages.
 Each page should typically check to ensure that it can only
be viewed via specific previous pages, and if the referring
page was not one of that set, then an error page should be
displayed.
 A page flow diagram is a very useful aid for the tester to
use when checking for correct page flow within the
application.
Page Flow Testing

How to conduct?

 Manual Execution
 Use Bookmarks
 Try to Navigate in an unnatural path
 Establish a session – jump into any page in any order
 Try to use faked session information
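The referring-page check described above can be sketched as a plain function (the page names and flow table are hypothetical; note that in a real application the HTTP Referer header can be absent or faked, so server-side session state is the more reliable basis for this check):

```python
# Allowed page flow: each restricted page lists the pages it may be reached from.
ALLOWED_PREVIOUS = {
    "/checkout": {"/cart"},
    "/confirm":  {"/checkout"},
}

def check_page_flow(page, referrer):
    """Return the requested page if the flow is legal, else an error page."""
    allowed = ALLOWED_PREVIOUS.get(page)
    if allowed is not None and referrer not in allowed:
        return "/error"   # reached out of order, e.g. via a bookmark
    return page

print(check_page_flow("/confirm", "/checkout"))  # /confirm -- natural path
print(check_page_flow("/confirm", "/cart"))      # /error -- unnatural jump
```

A tester working from a page flow diagram would try exactly the second kind of call: jumping into a page from a path the diagram does not allow.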
Connectivity Testing

 Connectivity testing involves determining if the servers


and clients behave appropriately under varying
circumstances
 Difficult to accomplish from a server perspective, since it is
expected that the servers will be operating with standby
power supplies as well as being in a highly available
configuration
 Two aspects of connectivity testing:
 Voluntary - where a user actively interacts with the
system in an unexpected way
 Involuntary - where the system acts in an unpredictable
manner
Connectivity Testing - Voluntary

 Quit from session without the user saving state


 Quit from session with the user saving state
 Server-forced quit from session due to inactivity
 Server-forced quit from session due to server problem
 Client forced quit from session due to visiting another
 site in the middle of a session for a brief period of time
 Client-forced quit from session due to visiting another
site/application for an extended period of time
 Client-forced quit from session due to client PC crashing
 Client-forced quit due to browser crashing
Connectivity Testing - Involuntary

 Forcing the browser to prematurely terminate during a
page load, using a task manager to kill the browser
 Hitting the ESC key and reloading or revisiting the same
page via a bookmark
 Simulation of hub failure between the PC and the web server
 While browsing, remove the network cable from the PC,
attempt to visit a page, abort the visit, then reconnect
the cable
Connectivity Testing - Involuntary

 Web Server On/Off Test: shut down the web server, then
restart the server.
 The user should be able to connect back to the application
without being redirected to the login page; this will prove
the statelessness of individual pages.
 Note the shutdown is only for the web server, not for an
application server
 Database Server On/Off Test: shut down the database
server and restart it.
 The user should be able to connect back to the application
without being redirected to the login page.
Connectivity Testing - Involuntary

 Application Server On/Off Test: shut down the application
server and restart it.
 There are two possible outcomes, depending on how
session management is implemented:
– The first outcome is that the application redirects
to an error page indicating loss of connectivity, and
the user is requested to log in and retry.
– The second outcome is that the application continues
normally, since no session information was lost
because it was held in a persistent state that
transcends application server restarts.
Database Testing

 The original query must be checked to uncover errors in


translating the user’s request to SQL
 Problems in communicating between the Web Application
server and Database server need to be tested.
 Need to demonstrate the validity of the raw data sent
from the database to the Web Application and the validity
of the transformations applied to the raw data.
 Need to test validity of dynamic content object formats
transmitted to the user and the validity of the
transformations to make the data visible to the user
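One way to exercise the query-to-raw-data step described above is against an in-memory database. This sketch uses Python's standard sqlite3 module, with a made-up `orders` table standing in for the real schema:

```python
import sqlite3

# Set up an in-memory database standing in for the real back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, "open")])

# The query under test: does the SQL translated from the user's request
# return the raw data the user actually asked for?
rows = conn.execute(
    "SELECT id FROM orders WHERE status = ? ORDER BY id", ("open",)
).fetchall()

assert [r[0] for r in rows] == [1, 3]  # raw data matches the expected result
print("query returned expected rows")
```

The same pattern extends to checking the transformations applied to the raw data before it is made visible to the user: compare the transformed output against independently computed expected values.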
Links Testing

Objectives:
 Can the page be downloaded and displayed?
 Do all the objects on a page load?
 Do all the objects on a page load
in an acceptable time?
 If the user turns images off, uses a non-graphical or no
frames browser, does it still work?
 Do all the text and graphical links work?
Link Checkers

While doing link testing we should test:


 Linked pages (other pages to be navigated to by clicking on
hyperlinks).
 Frame pages (where a page is partitioned into frames and
each frame has its own HTML page to create
the image displayed in the browser window)
 Images used for graphical appearance or as buttons
to navigate (e.g. GIFs and JPEGs)
 Form handlers, where these are CGI scripts, Active Server Pages etc.
 ActiveX, Java applets and other objects that are downloaded and
executed within the Browser
 Other content files, such as video (AVI, MPEG), and audio (WAV, AU,
MIDI, MPEG) files
 Other Internet protocols such as email links, FTP, Newsgroups links
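The first step of such a link checker (collecting the hyperlinks, images and frame sources to verify) can be sketched with Python's standard-library HTML parser. The sample page is made up; a real checker would then fetch each collected URL and report broken ones:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect hyperlinks, images, frames and scripts referenced by a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag in ("img", "frame", "script") and "src" in attrs:
            self.links.append(attrs["src"])

page = '<a href="/home">Home</a><img src="logo.gif"><frame src="menu.html">'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', 'logo.gif', 'menu.html']
```

Each collected URL would then be requested, and any response indicating an error (or a timeout) reported as a broken link.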
HTML Validators

 Validating your HTML will help ensure that it displays
properly on all browsers
 The best way to validate is by running your documents
through one or more HTML validators.
– http://validator.w3.org
This validator checks the markup validity of Web
documents in HTML, XHTML, SMIL, MathML etc.
– http://valet.webthing.com/page/
– http://www.htmlhelp.com/tools/validator/
– And Many more…
 For link checking like broken links
– http://validator.w3.org/checklink
– Checks anchors (hyperlinks) in a HTML/XHTML
document. Useful to find broken links, etc.
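Full validation needs a service like those above, but even a simple tag-balance check catches common markup breakage. Here is a minimal sketch using Python's standard-library parser (the void-element list is abbreviated, and a real validator does far more than this):

```python
from html.parser import HTMLParser

VOID = {"br", "img", "hr", "input", "meta", "link"}  # no closing tag required

class TagBalanceChecker(HTMLParser):
    """Report tags that are opened but never closed."""
    def __init__(self):
        super().__init__()
        self.stack = []      # currently open tags
        self.unclosed = []   # tags skipped over by a later close tag

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # Pop back to the matching open tag; anything popped on the
            # way was opened but never explicitly closed.
            while self.stack:
                top = self.stack.pop()
                if top == tag:
                    break
                self.unclosed.append(top)

checker = TagBalanceChecker()
checker.feed("<html><body><p>hello<b>bold</p></body></html>")
print(checker.unclosed)  # ['b'] -- the <b> tag was never closed
```

Browsers tend to recover silently from such mistakes, each in its own way, which is exactly why validated markup displays more consistently across browsers.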
Java Compatibility

 Java-compatible Web browser


– Is a web browser that can display and execute any Java
applets and JavaScript
– Has a Java interpreter
– Kinds of Java-compatible Web browser:
• Hot-Java browser
• Microsoft Internet Explorer
• Netscape Navigator
 Java And JavaScript:-
– Some browsers may not have either of these languages.
Do not rely on them working.
– Internet Explorer 2 does not handle JavaScript, and
IE3 handles JavaScript in slightly different ways
– JavaScript 1.1 works only with NS3.
– JAVA will not work if any of the images on that page do not
have WIDTH= and HEIGHT=
Java Compatibility
 Testing to see whether Java works.
– Here is a simple test to see if your browser is Java compatible.
– Open http://library.thinkquest.org/12255/java/tester.html in
your browser
– Then see which of the following messages appeared on your
screen.
– If you saw a button that says, "Java Enabled (Click to Go
Back)," Java works on
your computer.
– If you saw a message, "Sorry, your browser is not Java
compatible", then Java doesn't work on your computer.
 Incompatible Browser
– Think of your web site that makes extensive use of the
scripting language, Java Script or has a Java applet
– But your browser either has this feature turned off, or is too
old to support it.
– New browsers support W3C standards, which enables Java
compatibility.
– Old browsers don't support W3C standards.
Java Compatibility
 Getting Java to work on your computer
Windows 95
– Your Browser has Java disabled. Go to an options menu and enable
Java.
– Your Browser is not 32-bit. You can download the 32-bit versions of
your browsers by clicking on the logo near the top right corner of your
browser. This will bring you to the browser creator's web page where
the latest versions will be posted.
– If all else fails, close all applications and try rebooting your system after
you save all important data.
 Macintosh
– Make sure your computer can handle 32-bit operations.
– Your Browser is not 32-bit. You can download the 32-bit versions of your
browsers by clicking on the logo near the top right corner of your
browser. This will bring you to the browser creator's web page where
the latest versions will be posted.
– Your Browser has Java disabled. Go to an options menu and enable
Java.
Unix / Linux
– Your Browser has Java disabled. Go to an options menu and enable Java
Java Compatibility
 Turning on Java Compatibility in Updated Browser
Mozilla 1.x
• Open Mozilla.
• Select Preferences from the Edit menu.
• Click the arrow next to Advanced.
• Click Scripts & Plug-ins.
• Check Navigator beneath "Enable JavaScript for".
• Click OK.
• Click Reload.
Opera 5.X and Opera 7.X
• Open Opera.
• Select Quick Preferences from the File menu.
• Make sure Enable JavaScript is checked.
• Click Reload.
Safari for MAC OSX
• Open Safari.
• Select Preferences from the Safari menu.
• Click Security.
• Check both Enable Java and Enable JavaScript.
• Close the window.
Java Compatibility
 Turning on Java Compatibility in Updated Browser
Internet Explorer 5.X or better
• Open Internet Explorer.
• Select Internet Options from the Tools menu.
• In Internet Options dialog box select the Security tab.
• Click the Custom level button at the bottom. The Security Settings dialog box will pop up
• Under the Scripting category enable Active Scripting, Allow paste operations via script,
and Scripting of Java applets
• Click OK twice to close out.
• Hit Refresh.
Internet Explorer 5.X for MAC OSX or MAC OS9
• Open Internet Explorer.
• Select Preferences from the Explorer menu.
• Click the arrow next to Web Browser.
• Click Web Content.
• Under Active Content check Enable Scripting.
• Click OK.
• Click Refresh.
Java Compatibility
 Turning on Java Compatibility in Updated Browser
Netscape 7.X
• Open Netscape.
• Select Preferences from the Edit menu.
• Click the arrow next to Advanced.
• Click Scripts & Plug-ins.
• Check Navigator beneath "Enable JavaScript for".
• Click OK.
• Click Reload.

AOL 4.0 and 5.0


• Click My AOL.
• Click Preferences.
• Click WWW.
• Click the Security tab.
• Click Custom.
• Click Settings.
• Scroll down to locate Scripting.
• Click Enable for Active Scripting.
• Click OK, then click the Reload button.
Cookies Testing
What is a Cookie
 A cookie is a small piece of information stored in a text file on the user's hard
drive by the web server. This information is later used by the web browser to
retrieve information from that machine.
 Generally a cookie contains personalized user data or information that is
used to communicate between different web pages.
 Cookies are small bits of information that get stored on your hard drive
(persistent cookies) or in memory (non-persistent cookies) of your
computer. They are placed on your computer by the websites you are
visiting.
Why cookies are used?
 Cookies are nothing but the user's identity, used to track where the
user navigated throughout the web site pages.
 The communication between web browser and web server is stateless.
 When you request one page after another of the same web site,
the web server doesn't know to whom it served the
previous page, because each request/response pair is
independent
Cookies Testing

How Cookies Work

 The HTTP protocol used to exchange information files on the web is


used to maintain the cookies. There are two types of HTTP protocol:
Stateless HTTP and Stateful HTTP.
The Stateless HTTP protocol does not keep any record of previously
accessed web pages, while the Stateful HTTP protocol does keep some
history of previous web browser and web server interactions; this
protocol is used by cookies to maintain the user
interactions.

 Whenever a user visits a site or page that uses a cookie, small code
inside that HTML page (generally a call to some language script to write
the cookie, such as in JavaScript, PHP or Perl) writes a text file on the
user's machine called a cookie. When the user
visits the same page or domain at a later time, this cookie is read from disk
and used to identify the repeat
visit of the same user on that domain. The expiration time is set when the cookie is written.
Cookies Testing

Types of Cookies

Session Cookies
 These are temporary and are erased when you close your browser at
the end of your surfing session.
 The next time you visit that particular site it will not recognize you and
will treat you as a completely new
visitor, as there is nothing in your browser to let the site know
that you have visited before

Persistent cookies
 These remain on your hard drive until you erase them or they
expire.
 How long a cookie remains on your browser depends on how long the
visited website has programmed the cookie to last.
Cookies Testing

What are Session Cookies used for?


 Web sites typically use session cookies to ensure that you are
recognized when you move from page to page within one site and that
any information you have entered is remembered.
Example:-
 If an e-commerce site did not use session cookies then items placed in
a shopping basket would disappear by the time
you reach the checkout.

What are Persistent Cookies used for?


 A persistent cookie enables a website to remember you on subsequent
visits, speeding up or enhancing your experience of services or
functions offered.
Example:-
 A website may offer its contents in different languages. On your first
visit, you may choose to have the content delivered in French, and the
site may record that preference in a persistent cookie set on your
machine.
Where are cookies stored?
 When any web application writes a cookie, it gets saved in a text file on the
user's hard disk drive.
 The path where the cookies get stored depends on the browser.
 Different browsers store cookie in different paths.
 E.g. Internet explorer store cookies on path “C:\Documents and
Settings\Default User\Cookies”
 In Mozilla Firefox browser you can even see the cookies in browser options
itself. Open the Mozilla browser, click on
Tools->Options->Privacy and then “Show cookies” button.
Applications where cookies can be used
 Online shopping carts, Personalized sites, User Tracking , Marketing , User
sessions
Drawbacks of Cookies
 Too many cookies (Occupies more disk space)
 Cookies are disabled on the browser
 Security issues
 Sensitive information
 Loss of privacy
Cookies Testing
Major Test cases for web application cookie testing
 Privacy policy
 Make sure no sensitive data or personal data is stored in the cookie
 If sensitive data is used inside the cookie, to be tested for encryption
 No overuse of cookies
 Disable the cookies in the browser settings
 Accept / Reject some cookies
 Delete a cookie
 Corrupt the cookie
 Deletion of cookies from your web application page
 Cookies testing on multiple browsers
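Several of these cases can be scripted. For example, Python's standard library can parse a `Set-Cookie` header value, which makes it easy to assert properties such as "no sensitive field names in the cookie" (a sketch only; the header value and the forbidden-name list below are hypothetical):

```python
from http.cookies import SimpleCookie

def parse_set_cookie(header_value):
    """Parse a Set-Cookie header value into a SimpleCookie object."""
    cookie = SimpleCookie()
    cookie.load(header_value)
    return cookie

def has_sensitive_name(cookie, forbidden=("password", "ssn", "cardno")):
    """Flag cookies whose names suggest sensitive data stored directly."""
    return any(name.lower() in forbidden for name in cookie)

# Hypothetical header a test might capture from a server response:
cookie = parse_set_cookie("sessionid=abc123; Path=/")
```

The same parsed object can be used to check attributes such as `path` or expiry that the test cases above call for.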
Session Testing
 In user-session based testing, data is collected from users of a web
application by the web server.
 Each user session is a collection of user requests in the form of URLs and
name-value pairs.
 More specifically, a user session is defined as beginning when a request
from a new IP address reaches the server and ending when the user
leaves the web site or the session times out.
 The application session should expire after a predefined period of
inactivity
 Basically for the Security
 Back – Forward Buttons
 Multiple Logins – from the same M/c
 Using Multiple Browser Sessions
 Same Browser
 Different Browser (IE & NN)
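The inactivity-timeout rule above is straightforward to model. An illustrative sketch (class and parameter names invented for the example) with an injectable clock, so expiry can be tested without real waiting:

```python
import time

class Session:
    """Toy session object with an inactivity timeout (default 30 minutes)."""
    def __init__(self, timeout_seconds=1800, clock=time.time):
        self.timeout = timeout_seconds
        self.clock = clock
        self.last_activity = clock()

    def touch(self):
        """Record user activity, resetting the inactivity timer."""
        self.last_activity = self.clock()

    def is_expired(self):
        """True once the configured period of inactivity has elapsed."""
        return self.clock() - self.last_activity > self.timeout
```

A session test would drive the clock past the timeout and assert that the application rejects further requests.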
Content Testing
Two types of Contents – Static & Dynamic
Static Content
 Verify for correctness
 Verify for accuracy
 Verify organization of content
Dynamic Content
 Test by feeding new content
 Try all possible combinations
 Wrong data
 Huge amount of data
 Not matching the expected type of content
 With and without graphics
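Feeding wrong data, huge amounts of data, and type mismatches can be driven from a small table of expected content types. A hypothetical sketch (field names, rules, and limits invented for illustration):

```python
import re

# Expected type/format per dynamic content field (illustrative rules only).
RULES = {
    "age": lambda v: v.isdigit() and 0 < int(v) < 130,
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "comment": lambda v: len(v) <= 1000,  # guard against huge amounts of data
}

def validate_content(field, value):
    """Check that a dynamic content value matches its expected type."""
    rule = RULES.get(field)
    return bool(rule(value)) if rule else False
```

A content test would loop such checks over every combination of field and malformed input.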
Web Browser Error Messages
400 Bad Request:-
 The site was found, but the page could not be found. Check the spelling of the
URL carefully, including the upper and lower case
of the letters. If you still can't find the page,
see Finding A Missing Link
401 Unauthorized:-
 Page exists but is only accessible to specified users (not you). Possibly the site
or page has been temporarily taken off-line for
maintenance or other activity.
403 Forbidden Page:-
 Same as "401 Unauthorized" above.
404 File Not Found:-
 Same as "400 Bad Request" above. Error returned by the destination web
server when the URL is misspelled or the page no longer exists.
503 Service Unavailable:-
 The site is busy, or service has been temporarily suspended. Try again later.
Bad file :-
 Either your browser doesn't support a feature in the accessed page, or there is
an HTML error in the page. Try upgrading your browser to the latest release,
or seeing if anyone else can access the page.
Web Browser Error Messages
Connection refused:-
 Either the site is serving the maximum number of users, or is temporarily
closed to public access. Try again later.
Connection reset by peer
 For some reason the remote side terminated the connection. Try again. If
this repeats on one site, there is something wrong with the site. If it
repeats on more than one site, you probably have a bad line connection.
Document contains no data:-
 The page was found, but is empty. It may be being updated. Try again
later.
Fatal Error. System call 'fopen' failed:-
 No such file or directory. Page has moved, or the URL is wrong.
Failed DNS lookup:-
 The site cannot be found. Check your spelling. If your spelling is correct,
the site has been removed (temporarily or permanently) from the Internet.
Forbidden:-
 Your client is not allowed to access the requested object. Same as "401
Unauthorized" above.
Web Browser Error Messages
 Helper application not found:-
 You tried to access a file for which your browser needs a special helper
application. Check the file type referenced in the message, and add the
appropriate helper application.
Host unavailable:-
 The site exists, but is not presently accessible. If you can't access any other
site either, then your ISP is down. Otherwise
the site has been taken off the Internet, possibly only temporarily for
maintenance. Try again later.
Network connection was refused by the server:-
 The site is probably just too busy, or down for maintenance. Try again later.
Too Many Connections-Try Again Later:-
 The site is serving the maximum number of users. Every few minutes try
pressing Reload or Refresh until you gain access.
You can't log on as an anonymous user:-
 Usually caused when trying to access an FTP site. The site may already have
the maximum number of anonymous users logged in, in which case you
should try later. If you still can't get in, it may be that the
site does not allow anonymous logins, in which case you will have to use an
FTP program to log in with a valid user name.
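A test harness usually reduces the table above to machine-checkable categories — whether a failure means "fix the URL", "access problem", or "retry later". A minimal sketch (the grouping mirrors the descriptions above; category labels are invented for the example):

```python
def classify_status(status):
    """Map an HTTP status code to the remedial action described above."""
    if status in (400, 404):
        return "check the URL"      # misspelled URL or missing page
    if status in (401, 403):
        return "access denied"      # page exists but not for this user
    if status == 503:
        return "retry later"        # busy or temporarily suspended
    if 200 <= status < 300:
        return "ok"
    return "other"
```

Automated link checks can log this category alongside each URL so failures are triaged without reading raw codes.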
Test Metrics Capture

Test Metrics and Reporting
• Test Process Metrics
– Focus area: Reporting the current status of testing activities
• Test Coverage
• Test Case Design and Execution Productivity – Day/week wise and tester
wise
• Test Case Pass Rate
• Test Case Effectiveness (in finding defects)
• Forecasting Test Case Productivity and Test Case Effectiveness based on
Statistical methods
– Intended Audience – QA Managers/Directors
 Test Product Metrics
– Focus Area – Reporting the state/health of the application under test
• Defect Trends (Day/week wise defect data, cumulative defect data)
• Defect Ratio: Defect found v/s application size (function points, Lines of
Code)
• Defect Removal Efficiency: Ratio of defects found internally v/s the
defects found in production
• Forecasting Defect Trends and number of Defects based on Statistical
methods
• Defect Severity Distribution
– Intended Audience – Application Owners, Business Users
Cross Browser Testing

Cross Browser Testing

Topics Covered in this lesson are:


– Basics of Cross Browser Testing
– Need of Cross Browser Testing
– Online Services and Tools
– Statistics on Cross Browser Testing
– Guidelines for Cross Browser Testing
Cross Browser Testing
 Cross browser testing:-
 Testing the Cross Browser Compatibility of Web sites across various browsers
and operating systems.
 The goal is that a website can be used by the largest possible audience, with
minimal variation in the user-experience
 A webpage should ideally look and work the same in all web browsers. The
unique challenge of achieving this goal lies in the nature of the medium itself.
 Some applications are very dependent on browsers. Different browsers have
different configurations and settings that your web page should be compatible
with. Your web site coding should be cross browser/
platform compatible.
 Why Browser Tests are necessary?
 The browser-environment is still very quirky and the risk of inconsistent
presentation is simply too high to ignore it.
 Different browsers and operating systems use different techniques for
rendering fonts (Win vs. Mac on handling fonts).
 The font size isn't identical on different platforms and some fonts might not be
installed on the user's system.
 Ex:- Firefox on Linux doesn't display web-sites as Firefox on Windows does.
Cross Browser Testing
5. HTML Features that may fail to work if visitors are using a different browser
 • Ex:-
 Sound and movies, Images, Java and JavaScript
 Forms submit using mailto
 Frames, Tables, Lists, Blinking and Marquee
 <P> and </P> Pairs

Browsers Tests: What Can You Do?


 • To ensure the (more or less) identical presentation in browsers you
need to verify its consistency in a number of browsers.
Either install a number of web browsers or use web-based
browser test services; these provide instant remote access to the
browsers (via Virtual Private Network (VPN)) or instantly deliver the
screenshots of your site
 in different browsers (Mozilla family, Internet Explorer, Opera, Safari, mobile
browsers),
 in different screen resolutions (usually 640×480, 800×600, 1024×768, 1200×8
 and on different operating systems (Mac OS, Linux, Win).
The effect of your changes can be observed instantly
Cross Browser Testing
Browser Tests: Online-Services & Tools :-
 IE Browser Render
 Browser shots
 Litmus
 Browser Photo
 Browser cam
 Browser Pool etc..
 For more information refer
 http://www.smashingmagazine.com/2007/10/02/browser-tests-services-and-compatibility-test-suites/
General info
 Cross browser testing and debugging can be the most frustrating exercise
 Debugging is a headache, but good practices can help you deal with the
issue
 The first practice is to discuss the issue of cross browser compatibility with the
client as early as possible.
 Get an agreement on the browsers for which you guarantee that the site will
match the approved layouts, and make it clear that in the rest of the browsers it
may not look perfect.
Cross Browser Testing
 Browser Statistics – Most common Browsers?
 Study the browser statistics to make a conscious decision as to what
are your priorities.
 From the statistics below, you can see that Internet Explorer is the most
common browser. However, Firefox has become quite popular as well.

 Source: http://www.w3schools.com/browsers/browsers_stats.asp
Cross Browser Testing
 Browser Display Statistics - Most common display resolutions
 From statistics, most users are using a display with 1024x768 pixels or
more.
• Source:
http://www.w3schools.com/browsers/browsers_display.asp

 Be aware: Many users still have only 800x600 display screens.


 This fact indicates that the figures above might not be 100% realistic
 The average user might have display screens with a lower resolution
Cross Browser Testing
 Guidelines:-
 Make your site as simple as possible.
 The more complicated your HTML and CSS, the more difficult debugging
will be.
 Use HTML Validator and CSS Validator to check HTML and CSS errors
 Use Browser Compatibility Testing Tools
– http://www.browsershots.org
– http://www.browsercam.com
 Browser Compatibility
– http://www.netmechanic.com/products/Browser-Tutorial.shtml
– Crossbrowsertesting.com
Usability Testing

Usability Testing

Topics Covered in this lesson are:


– Guidelines for Usability Testing
– Some Usability Tests
– Navigation
Testing Types – Non Functional Testing
(contd…)
– Usability testing
• Checks for human factor problems :
- Are outputs meaningful?
- Are error diagnostics straightforward?
- Does GUI have conformity of Syntax, conventions, format, style
abbreviations?
- Is it easy to use?
- Is there an exit option in all choices?
• It should not
- Annoy intended user in function or speed
- Take control from the user without indicating when it will be
returned.
• It should
- Provide on-line help or user manual
- Consistent in its function and overall design
Usability Testing

• “Usability is a quality attribute that assesses how easy user


interfaces are to use. The word 'usability' also refers to
methods for improving ease-of-use during the design
process”
• “Usability is the measure of the quality of a user's
experience when interacting with a product or system”
Guidelines during design process

Guidelines
– Provide Useful content
– Establish user requirements
– Understand and meet user requirements
– Involve users in establishing requirements
– Set and state goals
– Focus on performance before preference
– Consider user interface issues
– Be easily found in search engines
– Set usability goals
– Use parallel design
Provide useful content

1. Provide Useful Content


1.Content is the information provided on a
website
2.Critical component in a website
3.More important than
1. Navigation
2. Visual design
3. Interactivity
Establish user requirements

2. Establish user Requirements


1.Use all available resources to better
understand requirements
2.Greater no. of exchanges between
developers and users,
higher the probability of having a
successful product
3.Exchanges may include
1. Customer surveys, Bulletin Boards, Customer support lines,
User groups, focus groups etc

4.The information gathered can be used to


build use cases
Understand and meet user’s expectations

3. Understand and Meet user’s Expectations


1.Ensure that website meets user’s
expectations specially w.r.t.
1. Content
2. Navigation
3. Organization

2.Make use of familiar formatting and


navigation schemes
to ensure users learn easily
3.Users act on their own expectations even if
there are indications to counter those
expectations
A sample website meeting user expectations
Involve users in establishing requirements

4. User involvement
1.Involve users to improve completeness
and accuracy of requirements
2.User involvement has become widely
accepted principle in
software development (Agile
methodology recommends)
3.Helps to avoid unused or little used
system features
4.Helps to improve the level of user
acceptance
Involve users in establishing requirements

Caution
– Users are not good at helping make design
decisions
– Research results do not show more effective and
efficient
systems when users are involved

Conclusion
1.Users are valuable in helping designers what a
system must do
2.But, not in helping designers in determining
how
best the system must do it
Set and State Goals

5. Set and State Goals


1.Identify and clearly articulate the
primary goals of the
website before proceeding to the design
2.Primary goal examples
1. Education, Selling, Information, Entertainment etc

3.Goals determine
1. Content, Audience, Functionalities, Site’s unique look & feel
Focus on performance before preference

6. Focus on performance
1.Focus on achieving higher rate of
performance before
dealing with aesthetics
2.Graphics issues tend to have little impact
on
user’s speed of performance
3.If the primary objective is to achieve
performance,
focus on content, format, interaction and
navigation before
deciding on colors and decorative graphics
Consider user interface issues

7. Ensure you consider all user interface issues during design


1.Context within which users will be
visiting a website
2.Experience level of the users
3.Types of tasks users will perform on the
site
4.Types of computer and connection
speeds
5.Evaluation of prototypes
6.Usability Test Results
Be easily found on search engines

8. Be Easily found on Search Engines


Considerations
1. Ensure the website is in the top 30 list
presented by a major search engine
2. You have high probability of being accessed when
you are in top 30
3. Research indicates users generally do not look at
websites that are not in the top 30…
4. Requires
1. Appropriate meta content, page titles, number of links
to the website, updated registration with major search
engines
Set Usability Goals

9. Set Usability Goals


1.Set preference goals that addresses
1. Satisfaction and acceptance by users

2.Set performance goals that addresses


1. Success rates
2. Time it takes users to find specific information

3.Setting performance/preference goals helps


developers build better
web sites
4.Makes usability testing more effective
5.Goal example
1. The information will be found eighty percent of
the times and in less than one minute
Use parallel design

10. Use Parallel Design


Guidelines
• Do not rely on ideas of a single designer
• Group discussions also do not lead to best solutions
• Allow different designers to propose their design and
select best items from each design
• The more varied and independent ideas considered results
in a better product
Accessibility

Guidelines
– Websites must be developed to ensure that
everyone
including who have difficulty in hearing, seeing
and
making precise movements, can use the
website
– Web sites must facilitate use of common
assistive technologies
– All US Federal Govt. websites must comply with
section 508 Federal Accessibility Standards
– Visit www.section508.gov for specific details
Comply with section 508

1. Compliance with section 508 is mandatory for any


website to be used US federal govt.
2. Section 508 ensures procurement of IT takes into
account all users including users with disabilities
3. End users with disabilities – Approximately 8%
1. 4% - Vision related disabilities
2. 2% - Movement related disabilities
3. 1% - Hearing related disabilities
4. 1% - Learning related disabilities
5. Visit www.section508.gov for specific
details
Do not use color alone to convey information

Guidelines
– Never use color as the only indicator for activities
– Ensure that all information conveyed with color is
also conveyed without color
– Most users with color deficiencies have difficulty in
seeing green portions
Do not use color alone to convey information

To accommodate users with color deficiency


1.Select color combinations that can be
differentiated by
the users with color deficiency
2.Use tools to see how web pages look
like when used by color deficient individuals
3.Ensure that the lightness contrast between
foreground and
background color is high
4.Increase the lightness contrast between
colors on either
end of the spectrum (ex.. blues and reds)
Enable to skip repetitive navigational links

Guidelines
1.A series of routine navigational links are
provided
at different locations in the site
2.Difficult and time consuming task for
disabled users
to wait for all repeated links to be read
3.Users must have the option to avoid
these
repeated navigational links
Text equivalents for non text elements

Guideline
1.Provide a text equivalent for all non text
elements that conveys information
2.Test equivalents must be used for
1. Images
2. Graphical representations of text
3. Image map regions
4. Animations
5. Graphical buttons
6. Sounds, Audio files
In addition to Audio, text is displayed
Test Plug ins and applets

Test Plug ins and Applets for accessibility


Test any applets, plug ins and other
software
as they can create problems for people
with disabilities
Scripts must allow accessibility

Ensure that scripts allow accessibility


– Whenever the script changes the
content of a page,
the change must be indicated in a way
that can be detected
– If mouseover is used, it should be
possible
to activate the same through keyboard
Provide equivalent pages

Guidelines
1.Provide text only pages with equivalent
information and
functionality if compliance with accessibility
provisions can not
be accomplished in any way
2.Ensure the text only pages are also updated
along with non text counter parts
3.Also inform users that text only pages have
equivalent information and as current as
non text counterparts
Synchronize multimedia elements

Guidelines
1.Provide equivalent alternatives for
multimedia elements
2.For multimedia presentations (movie or
animation)
1. Synchronize captions or auditory descriptions of the visual
track with the presentation
Provide frame titles

1. Guidelines
1. Frames are used to divide the browser screen
into different areas with each area presenting different
but related information
2. Provide frame titles that facilitates frame identification
and navigation
2. Example
1. A designer may put all navigational links in one
frame on the left side and all information
in a larger frame right side
2. This facilitates the user to scroll through the
information without disturbing the navigation section
3. Clear and concise frame titles help users with
disabilities to orient themselves when frames are used
First frame has a title, but second
frame does not
Avoid screen flicker

Guidelines
1.Design web pages that do not cause the
screen to flicker
2.Certain users are photosensitive and
can cause problems
with certain screen flicker frequencies
Introduction to Hardware & Software

General considerations
1. Designers must consider their users ’ needs for specific
information and any constraints imposed on them by
their users ’ hardware, software, and speed of connection
to the Internet
2. Today, a single operating system (Microsoft's XP)
dominates the personal computer market
3. Similarly, only two Web site browsers are favored
by the vast majority of users
4. More than ninety percent of users have their
monitors set to 1024x768,800x600 or 1280x1024 pixel
resolution
5. And while most users at work have high-speed
Internet access, many home users still connect using
dial-up
Introduction to Hardware & Software

General considerations
6.Within the constraints of available time,
money, and
resources, it is usually impossible to
design for all users
7.Therefore,identify the hardware and
software used by
your primary and secondary audiences
8.Design to maximize the effectiveness of
your Web site
Design for most common browsers

Guideline
1.Design, develop and test for most
common browsers
2.Designers should attempt to
accommodate ninety-five percent
of all users
3.Ensure that all testing of a web site is
done using the most popular browsers
4.Visit http://www.thecounter.com/stats/
for the information
on commonly used browsers
Website on Macintosh
Account for browser differences

Guideline
1.Consider features of different browsers, and
default settings
2.Users with visual problems tend to increase the
font size
3.Some users may turn off backgrounds, user
fewer
colors, and overrides font
4.Designer needs to find out the most common
settings
5.Need to specify on the website what assumptions
are made about the browser settings
Across different browsers
Design for popular operating systems

Guideline
1. Design the Web site so it will work
well with the most popular operating systems
2. Designers should attempt to accommodate ninety-five
percent
of all users
3. Ensure that all testing of a web site is
done using the most common operating systems
4. Currently, the most popular operating system is
Microsoft’s
Windows XP (over 80% of market share)
5. Designers should consult one of the several sources
that maintain current figures to help ensure that they
are designing to accommodate as many users as
possible
Design for popular operating systems

 Source: http://www.thecounter.com/stats
Design for User’s Typical Connection Speed

Guideline
1.Design for the connection speed of most
users
2.At US, 89% of users have high speed
access
3.11% are using 56K modems
4.These figures are continuously changing
5.Must find a way to get latest data
Design for commonly used screen resolutions

Guideline
– Design for monitors with the screen resolution set at
1024x768 pixels
– Designers should attempt to accommodate ninety-five
percent of all users
– As of June 2006, 56% of users have their screen
resolution set at 1024x768
– By designing for 1024x768, the site will accommodate
this most common resolution, as well as those at
any higher resolution
– Ensure that all testing of web sites is done
using the most common screen resolutions
Source: http://thecounter.com
1. 1024x768 (50%)
2. 1280x1024 (26%)
3. 800x600 (10%)
4. Unknown (7%)
Scrolling

Guideline
1.Horizontal scrolling is slow and tedious way
to
view an entire screen
2.Use an appropriate page layout to eliminate
the
need for users to scroll horizontally
3.Some page layouts may force users to scroll
horizontally if their monitor resolution is
smaller than
that used by the designers
640*480 resolution Vs 800*600 resolution
Facilitate rapid scrolling while reading

Guidelines

1.Facilitate fast scrolling by highlighting major


items on
the web page
2.Web pages move quickly or slowly depending on
how user elects to scroll
3.Some users click arrows at the end of the
scroll bar (slow movement)
4.Users can read to some extent in this case
5.Some users click the scroll box (quicker
movement)
6.Users can read only major headings through the
scroll box movement
7.Highlight major headings
Observe major headings allowing faster
scrolling
Use scrolling pages for reading comprehension

Guidelines
1. Use longer, scrolling pages when users are reading for comprehension
2. Paging introduces a delay that can interrupt the user’s thought process
3. Scrolling allows readers to go through the text without losing the context of the message
4. Interruptions may still occur when users are required to follow links
Use paging rather than scrolling

Guidelines
1. Use paging rather than scrolling only if there is a specific target audience and end users’ system response times are reasonably fast
2. Users should be able to move from page to page by selecting links quickly, without having to scroll to find important information
Configuration Testing

Configuration Testing

Topics Covered in this lesson are:


– Definition and Goals
– Client Side Configuration Testing
– Server Side Configuration Testing
– Examples



Configuration Testing

Definition:
 Configuration testing is the process of checking the operation of the software you are testing with all the various types of hardware and software it may run on.
 Configuration testing is the system testing of different variations of an integrated, black-box application against its configurability requirements.
Goals
 Cause the application to fail to meet its configurability requirements so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.
Configuration Testing

Objectives
 Partially validate the application (i.e., to determine if it
fulfills its configurability requirements).
 Cause failures concerning the configurability requirements
that help identify defects that are not efficiently found
during unit and integration testing:
 Functional Variants.
 Internationalization (e.g., multiple languages, currencies,
taxes and tariffs, time zones, etc.).
 Personalization
Configuration Testing

Objectives (contd.):
 Report these failures to the development teams so that the
associated defects can be fixed.
 Determine the effect of adding or modifying hardware
resources such as:
 Memory
 Disk and tape resources
 Processors
 Load balancers
 Determine an optimal system configuration
Configuration Testing

Configuration Testing: Client Side


 Operating systems
 Browser software
 User interface components
 Plug-ins
 Connectivity
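A straightforward way to enumerate client-side configurations from dimensions like these is a cross-product. The sketch below is illustrative: the specific OS and browser names are assumptions, not a supported-platform list from this course.

```python
from itertools import product

# Illustrative configuration dimensions; a real project would pull these
# from the application's supported-platform matrix.
operating_systems = ["Windows", "macOS", "Linux"]
browsers = ["Chrome", "Firefox", "Edge"]
connectivity = ["broadband", "56K modem"]

# Full cross-product: every OS x browser x connection combination.
configurations = list(product(operating_systems, browsers, connectivity))

print(len(configurations))  # 3 * 3 * 2 = 18 combinations
print(configurations[0])    # ('Windows', 'Chrome', 'broadband')
```

The full product grows quickly as dimensions are added, so teams often prune it with pairwise (all-pairs) selection rather than testing every combination.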
Configuration Testing

Configuration Testing: Server Side


 Compatibility of the Web Application with the server OS
 Correct file and directory creation by the Web Application
 System security measures do not degrade user service to the Web Application
 Testing the Web Application with a distributed server configuration
 Web Application properly integrated with database software
 Correct execution of Web Application scripts
 Examine system administration errors for their impact on the Web Application
 On-site testing of proxy servers
Configuration Testing

Typical examples of configuration testing include an application that must:
 Have multiple functional variants.
 Support internationalization.
 Support personalization.
Internationalization and
Localization Testing

Internationalization and Localization
Testing
Topics Covered in this lesson are:

– Concepts of Internationalization and


Localization
– Internationalization Testing
– Localization Testing



Internationalization
Definition:-
 Internationalization (i18n) is the process of designing software so
that it can be adapted (localized) to various languages and
regions easily, cost-effectively, and in particular without
engineering changes to the software.
 The term internationalization is abbreviated as i18n,
because there are 18 letters between the first "i"
and the last "n."
Characteristics:
 With the addition of localization data, the same
executable can run worldwide.
 Textual elements, such as status messages and the GUI
component labels, are not hard coded in the program. Instead
they are stored outside the source code and retrieved dynamically.
 Support for new languages does not require recompilation.
 Culturally-dependent data, such as dates and currencies,
appear in formats that conform to the end user's
region and language.
 It can be localized quickly.
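The characteristic that textual elements are "stored outside the source code and retrieved dynamically" can be sketched with a toy message catalog — a stand-in for real mechanisms such as Java resource bundles or gettext. The locale codes and strings here are illustrative.

```python
# Toy message catalog: UI strings live outside the program logic, keyed by
# locale. Supporting a new language means adding data, not recompiling code.
MESSAGES = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr": {"greeting": "Bonjour", "farewell": "Au revoir"},
}

def translate(locale: str, key: str) -> str:
    """Look up a UI string for a locale, falling back to English."""
    return MESSAGES.get(locale, MESSAGES["en"]).get(key, key)

print(translate("fr", "greeting"))  # Bonjour
print(translate("de", "greeting"))  # Hello (fallback: no German catalog yet)
```

The fallback path is itself worth testing: a missing catalog or missing key should degrade gracefully, never crash or show a raw identifier to the user unintentionally.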
Localization
 Localization refers to the adaptation of a product, application or
document content to meet the language, cultural and other
requirements of a specific target market (a "locale").
 Localization is sometimes written as "l10n", where 10 is the number of
letters between 'l' and 'n'.
 A synonym for translation of the user interface and documentation
 Substantially more complex issue
 Localization can entail customization related to:
– Numeric, date and time formats
– Use of currency
– Keyboard usage
– Collation and sorting
– Symbols, icons and colors
– Text and graphics containing references to objects, actions or ideas
which, in a given culture, may be subject to misinterpretation or
viewed as insensitive.
– Varying legal requirements
– and many more things
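Several of these customizations reduce to per-locale format rules. The sketch below illustrates date order with hand-written patterns for three locales; a real application would take these from the platform's locale data rather than hard-coding them.

```python
from datetime import date

# Hand-written date conventions for a few locales (illustrative only).
DATE_FORMATS = {
    "en_US": "%m/%d/%Y",  # month first
    "en_GB": "%d/%m/%Y",  # day first
    "ja_JP": "%Y/%m/%d",  # year first
}

def format_date(d: date, locale: str) -> str:
    return d.strftime(DATE_FORMATS[locale])

d = date(2010, 3, 4)
print(format_date(d, "en_US"))  # 03/04/2010 -- ambiguous across locales!
print(format_date(d, "en_GB"))  # 04/03/2010
print(format_date(d, "ja_JP"))  # 2010/03/04
```

Dates like 03/04/2010 read differently in different locales, which is why localization test data should include days greater than 12 to expose day/month ordering defects.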
Internationalization Testing
Test suites – Configurations
 Internationalization Taxonomy Matrix
 Locales: system, client, program
 Combinations: single-locale set-up, varied-locale set-up
 Representative sampling: Western Europe, Eastern Europe, Asia, Middle East
 Time zones: server differing from client
 Character sets: different data passed through the system
Test suites – Data
 Text – characters, punctuation, symbols, word wrap, sorting, search
 Numeric – date/time, currency, telephone numbers, measure, search
 Graphics – icons, banners, colors, buttons
 Layout – screens, pop-up windows, frames
 Language range – Western European, Eastern European (non-Latin), Far Eastern, Right-to-left
 Edge testing – min-max values, min-max length, incorrect chars & formats, mismatched dates/times
Internationalization Testing
What to look for – Text
 Truncation – characters cut off, or a partial multi-byte character
 Rendering – proper characters, display configuration, cut-and-paste
 Line wrap – logical place for the language, line lengths (client vs. server)
 Search – exact vs. fuzzy matches, false matching, per-language / multilingual
 Sort – logical order for the language, multilingual
 Indexing – layout makes sense for the language, order correct
What to look for – Numeric
 Number groupings – amount per group, separator, good for locale
 Decimal point – correct character and number of digits after it for the locale
 Date/Time – order of day, month, year and separator; short and long forms of day and month names
 Currency – symbol, 3-letter id, positioning, number of digits after the decimal, expansion room
 Measure – proper units, expansion room
 Telephone numbers – unrestricted format, allowance of certain nonnumeric characters
 Address – general address lines, state/province optional, postal code, country
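The "partial multi-byte character" defect listed under Truncation is easy to reproduce: truncating UTF-8 text by bytes can cut a character in half. A small sketch of the failure and two safer alternatives:

```python
text = "Grüße"              # 'ü' and 'ß' are each 2 bytes in UTF-8
raw = text.encode("utf-8")  # 7 bytes for 5 characters

# Defect: a naive byte-level truncation to 3 bytes splits 'ü' in half.
try:
    raw[:3].decode("utf-8")
except UnicodeDecodeError:
    print("byte truncation broke a character")

# Safer: drop the partial sequence on decode, or truncate by characters.
print(raw[:3].decode("utf-8", errors="ignore"))  # Gr
print(text[:3])                                  # Grü
```

This is why edge-testing text fields with multi-byte data matters: a length limit that counts bytes instead of characters passes with ASCII input and fails only with international input.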
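The number-grouping and decimal-point checks can be made concrete with a tiny formatter. The separator conventions below are hand-coded for illustration; production code should use the platform's locale services rather than a mapping like this.

```python
# Hand-coded separator conventions for a few locales (illustrative only).
SEPARATORS = {
    "en_US": (",", "."),  # 1,234,567.89
    "de_DE": (".", ","),  # 1.234.567,89
    "fr_FR": (" ", ","),  # 1 234 567,89
}

def format_amount(value: float, locale: str) -> str:
    group, decimal = SEPARATORS[locale]
    whole, frac = f"{value:,.2f}".split(".")  # group with ',' then swap
    return whole.replace(",", group) + decimal + frac

print(format_amount(1234567.89, "en_US"))  # 1,234,567.89
print(format_amount(1234567.89, "de_DE"))  # 1.234.567,89
print(format_amount(1234567.89, "fr_FR"))  # 1 234 567,89
```

Note that the German and US conventions swap the roles of "." and "," — exactly the kind of defect the Decimal point check above is meant to catch.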
Internationalization Testing
What to look for – Graphics and Layout
 Images - internationally appropriate, no imbedded text, sizing
 Colors - no implied meaning by color, consistency in use
 Windows - expansion room, word order dependency
 Screen - positioning, resolution, window layering
 Sound - non-culture specific, switch off capability

References
 http://www.w3.org/International/getting-started/
 http://www.w3.org/International/
 http://java.sun.com/j2se/1.4.2/docs/guide/intl/
 http://www.w3.org/International/quicktips/
