Manual Testing

Software
Computer software has become a driving force. It is the engine that drives business
decision making. It serves as the basis for modern scientific investigation and engineering
problem solving. It is a key factor that differentiates modern products and services. It is
embedded in systems of all kinds: transportation, medical, telecommunications, military,
industrial processes, entertainment, office products; the list is almost endless.
Software is virtually inescapable in the modern world, and as we move into the twenty-first
century it will become the driver for new advances in everything from elementary
education to genetic engineering.
It affects nearly every aspect of our lives and has become pervasive in our commerce, our
culture, and our everyday activities.
Why does software have bugs?
Software has bugs because of miscommunication or misinterpretation of requirements,
software complexity, programming errors, changing requirements, time pressure, egos of
people, poorly documented code, and the software development tools used.
5 common problems in the Software Development Process
1. Poor requirements: if requirements are unclear, incomplete, too general, or not
testable, there will be problems.
2. Unrealistic schedule: if too much work is crammed into too little time, problems
are inevitable.
3. Inadequate testing: no one will know whether or not the program is any good
until the customer complains or systems crash.
4. Featuritis: requests to pile on new features after development is underway are
extremely common.
5. Miscommunication: if developers don't know what's needed or customers
have erroneous expectations, problems are guaranteed.
5 Common Solutions to Software Development Problems:
1. Solid requirements
2. Realistic schedule
3. Adequate Testing
4. Stick to initial requirements as much as possible
5. Communication

Software Engineering:The application of a systematic, disciplined, quantifiable approach to the development,


operation, and maintenance of software; that is, the application of engineering to
software.
Software Quality:
Quality software is reasonably bug-free, delivered on time, within budget, meets
requirements, and is maintainable. However, quality is obviously a subjective term; it
will depend on who the customer is and their overall influence in the scheme of things.
Software Quality Assurance (SQA):
Involves the entire software development process: monitoring and improving the process,
making sure that any agreed-upon standards and procedures are followed, and ensuring
that problems are found and dealt with. It is oriented to the prevention of bugs.
Software Quality Control (QC):
The process in which actual testing of the software happens, following the process
defined by the QA team; it is mainly driven by defect detection. The quality of the
software is measured and monitored by conducting reviews, through review meetings,
after the completion of every development phase.
Testing Definitions:
Testing is the process of executing a program with the intent of finding errors.
Or
Verifying and validating the application with respect to customer requirements.
Or
Finding the differences between expected and actual values.
Testing should also ensure that a quality product is delivered to the customer.

Tester Responsibilities:
Identifying the most appropriate implementation approach for a given test
Implementing individual tests.
Setting up and executing the tests
Logging outcomes and verifying test execution
Analyzing and recovering from execution errors.
SDLC (Software Development Life Cycle)
Any software development has to go through the five stages below:
Feasibility & Analysis


Design
Coding
Testing
Release & Maintenance

1. Software Development Life Cycle (SDLC)

All of the stages from start to finish that take place when developing new software
are known as the SDLC.
The software life cycle is a description of the events that occur between the
birth and death of a software project, inclusively.
It defines the concrete strategy to engineer software artifacts.
The SDLC is separated into phases (steps, stages).
The SDLC also determines the order of the phases, and the criteria for
transitioning from phase to phase.

Feasibility Study and Problem Analysis
- What exactly is this system supposed to do?
- Determine and spell out the details of the problem.

Design
- How will the system solve the problem?
- The logical implementation of the software happens here.

Coding
- Translating the design into the actual system.
- The physical construction of the software product.

Testing
- Does the system completely solve the problem?
- Have the requirements been satisfied?
- Does the system work properly in all situations?

Maintenance
- Small enhancements to the software happen, and support is provided to solve the
real-time problems that the system faces when it goes live.

Feasibility Study:
The analyst conducts an initial study of the problem and asks whether the solution is:
- Technologically possible?
- Economically possible?
- Legally possible?
- Operationally possible?

The Feasibility Report
The feasibility report contains:
- Application areas to be considered, e.g. stock control, purchasing, accounts, etc.
- System investigations for each application
- Cost estimates
- System requirements
- Timescale for implementation
- Expected benefits
System Analysis:
System analysis and design is the process of investigating a business with a view to
determining how best to manage the various procedures and information processing
tasks that it involves.

System Analysis Report
The System Analysis Report consists of:
- BRS (Business Requirement Document)
- FRS (Functional Requirement Document), or functional specifications
- Use Cases (user action and system response)
[These three are the base documents for writing test cases.]
Documenting the results:
- System flow charts
- Data flow diagrams
- Organization charts
Note: The FRS contains input, output, and process, but no fixed format. A use case
contains user action and system response in a fixed format.
Systems Design:
Planning the structure of the information system to be implemented.
Systems analysis determines what the system should do, and design determines how it
should be done.

User interface design covers:
- Design of output reports
- Input screens
- Data storage, i.e. files, database tables
- System security: backups, validation, passwords
- Test plan

System Design Report

The design document consists of the architectural design, the database design, and the
interface design.
Coding:
- Program development
- Drafting of user guides

Testing
Testing is executing a program with the intent of finding errors, faults, and failures. A
fault is a condition that causes the software to fail to perform its required function. An
error refers to the difference between actual output and expected output. A failure is the
inability of a system or component to perform a required function according to its
specification. A failure is an event; a fault is a state of the software, caused by an error.


Why Software Testing?

- To discover defects.
- To learn about the reliability of the software.
- To ensure that the product works as the user expects.
- To avoid being sued by customers.
- To detect defects early, which helps in reducing the cost of defect fixing.

Cost of Defect Repair

Phase           % Cost
Requirements    0
Design          10
Coding          20
Testing         50
Customer Site   100


Installation & Maintenance:

Installation:
File conversion
System changeover
New system becomes operational
Staff training

Maintenance:
Corrective maintenance
Perfective maintenance
Adaptive maintenance

Software Development Models

Waterfall Model
This model is the same as the SDLC. It is a step-by-step model: after one phase is
completed, the next phase is started. It is also known as the Linear Sequential Model or
the Classical Model.

Analysis -> Design -> Coding -> Testing -> Maintenance

Waterfall Strengths
- Easy to understand, easy to use
- Provides structure to inexperienced staff
- Milestones are well understood
- Sets requirements stability
- Good for management control (plan, staff, track)
- Works well when quality is more important than cost or schedule

Disadvantages
The waterfall model is the oldest and the most widely used paradigm. However, projects
rarely follow its sequential flow in practice. This is due to the inherent problems
associated with its rigid format, namely:

- It only incorporates iteration indirectly; thus changes may cause considerable
confusion as the project progresses.
- As the client usually has only a vague idea of what is required from the software
product, the Waterfall Model has difficulty accommodating the natural uncertainty
that exists at the beginning of the project.
- The customer only sees a working version of the product after it has been coded.
This may result in disaster if any undetected problems are precipitated to this
stage.

When to use the Waterfall Model


Requirements are very well known
Product definition is stable
Technology is understood
New version of an existing product
Porting an existing product to a new platform.

Prototyping Model
Developers build a prototype during the requirements phase
Prototype is evaluated by end users
Users give corrective feedback
Developers further refine the prototype
When the user is satisfied, the prototype code is brought up to the standards needed for
a final product.
Prototyping Steps
- A preliminary project plan is developed.
- A partial high-level paper model is created.
- The model is the source for a partial requirements specification.
- A prototype is built with basic and critical attributes.
- The designer builds the database, the user interface, and the algorithmic functions.
- The designer demonstrates the prototype; the user evaluates it for problems and
suggests improvements.
- This loop continues until the user is satisfied.

Prototyping Strengths
Customers can see the system requirements as they are being gathered
Developers learn from customers
A more accurate end product
Unexpected requirements accommodated
Allows for flexible design and development
Steady, visible signs of progress produced
Interaction with the prototype stimulates awareness of additional needed functionality
Prototyping Weaknesses
Tendency to abandon structured program development for code-and-fix development
Bad reputation for quick-and-dirty methods
Overall maintainability may be overlooked
The customer may want the prototype delivered.
Process may continue forever (scope creep)
When to use Prototyping Model
Requirements are unstable or have to be clarified
As the requirements clarification stage of a waterfall model
Develop user interfaces
Short-lived demonstrations
New, original development
With the analysis and design portions of object-oriented development.
Prototype Model
This model is suitable when the client is not clear about the requirements. This is
a cyclic version of the linear model. In this model, once the requirement analysis is done
and the design for a prototype is made, the development process gets started. Once the
prototype is created, it is given to the customer for evaluation. The customer tests the
package and gives his/her feedback to the developer, who refines the product according
to the customer's exact expectations. After a finite number of iterations, the final software
package is given to the customer. In this methodology, the software is evolved as a result
of periodic shuttling of information between the customer and developer. This is the most
popular development model in the contemporary IT industry. Most of the successful
software products have been developed using this model - as it is very difficult (even for
a whiz kid!) to comprehend all the requirements of a customer in one shot. There are
many variations of this model skewed with respect to the project management styles of
the companies. New versions of a software product evolve as a result of prototyping.


Rapid Application Development Model


The RAD model is a linear sequential software development process that emphasizes an
extremely short development cycle. The RAD model is a "high speed" adaptation of the
linear sequential model, in which rapid development is achieved by using a component-based
construction approach. Used primarily for information systems applications, the
RAD approach encompasses the following phases:
1. Business modeling
The information flow among business functions is modeled in a way that answers the
following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
2. Data modeling
The information flow defined as part of the business modeling phase is refined into a set
of data objects that are needed to support the business. The characteristics (called
attributes) of each object are identified, and the relationships between these objects are
defined.

3. Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions
are created for adding, modifying, deleting, or retrieving a data object.
4. Application generation
The RAD model assumes the use of RAD tools such as VB, VC++, Delphi, etc., rather
than creating software using conventional third-generation programming languages. The
RAD model works to reuse existing program components (when possible) or create
reusable components (when necessary). In all cases, automated tools are used to facilitate
construction of the software.
5. Testing and turnover
Since the RAD process emphasizes reuse, many of the program components have already
been tested. This minimizes the testing and development time.

Spiral Model or Iterative Model or Evolutionary Model

It is the most generic of the models; most life cycle models can be derived as special
cases of the spiral model. The spiral model uses a risk-management approach to software
development. Some advantages of the spiral model are:

- Defers elaboration of low-risk software elements
- Incorporates prototyping as a risk-reduction strategy
- Gives an early focus to reusable software
- Accommodates life-cycle evolution, growth, and requirement changes
- Incorporates software quality objectives into the product
- Focuses on early detection of errors and design flaws
- Sets completion criteria for each project activity to answer the question: how
much is enough?
- Uses identical approaches for development and maintenance
- Can be used for hardware-software system development

Drawbacks: Even though there is no technical drawback, the maintenance cost is very high.


V Model

Business Requirements   <->   User Acceptance Testing
System Requirements     <->   System Testing
High-Level Design       <->   Integration Testing
Low-Level Design        <->   Unit Testing
                   Coding

The V model stands for the Verification & Validation model, which has the above stages
of software development. The left side is all development and involves mostly
verification, whereas the right side involves mostly validation and a little verification. It
is a suitable model for large-scale companies to maintain a testing process. This model
defines a co-existence relation between the development process and the testing process.
There are different levels of testing involved in the V-model:

- Unit Testing
- Integration Testing
- System Testing
- User Acceptance Testing

After the completion of every development phase, the corresponding testing activities
should be initiated.
Drawbacks: The cost of maintaining an independent testing team is very high.


Agile SDLCs
Speed up or bypass one or more life cycle phases
Usually less formal and reduced scope
Used for time-critical applications
Used in organizations that employ disciplined methods
Some Agile Methods
Adaptive Software Development (ASD)
Feature Driven Development (FDD)
Crystal Clear
Dynamic Software Development Method (DSDM)
Rapid Application Development (RAD)
Scrum
Extreme Programming (XP)
Rational Unified Process (RUP)

Extreme Programming (XP)
- For small-to-medium-sized teams developing software with vague or rapidly
changing requirements
- Coding is the key activity throughout a software project
- Communication among teammates is done with code
- The life cycle and behavior of complex objects are defined in test cases, again in code
XP Practices
1. Planning game: determine the scope of the next release by combining business
priorities and technical estimates.
2. Small releases: put a simple system into production, then release new versions in
very short cycles.
3. Metaphor: all development is guided by a simple shared story of how the whole
system works.


4. Simple design: the system is designed as simply as possible (extra complexity is
removed as soon as it is found).
5. Testing: programmers continuously write unit tests; customers write tests for features.
6. Refactoring: programmers continuously restructure the system, without changing its
behavior, to remove duplication and simplify.
7. Pair programming: all production code is written with two programmers at one
machine.
8. Collective ownership: anyone can change any code anywhere in the system at any
time.
9. Continuous integration: integrate and build the system many times a day, every time
a task is completed.
10. 40-hour week: work no more than 40 hours a week as a rule.
11. On-site customer: a user is on the team and available full-time to answer questions.
12. Coding standards: programmers write all code in accordance with rules emphasizing
communication through the code.
XP is extreme because
Commonsense practices are taken to extreme levels:
- If code reviews are good, review code all the time (pair programming).
- If testing is good, everybody will test all the time.
- If simplicity is good, keep the system in the simplest design that supports its current
functionality (the simplest thing that works).
- If design is good, everybody will design daily (refactoring).
- If architecture is important, everybody will work at defining and refining the
architecture (metaphor).
- If integration testing is important, build and integration-test several times a day
(continuous integration).
- If short iterations are good, make iterations really, really short (hours rather than
weeks).

Testing Types:

Black Box Testing:
Black box testing is also called functionality testing. In this testing, the user is asked to
test the correctness of the functionality with the help of inputs and outputs; the user
doesn't require knowledge of the software code.
BBT methods focus on the functional requirements of the software/product and attempt
to find errors in the following categories:
- Incorrect or missing functions
- Interface errors
- Errors in database structures / external database access
- Performance errors
- Initialization and termination errors

Approach:
Equivalence Class Partitioning
- For each piece of the specification, generate one or more equivalence classes and
give them equivalent treatment.
- Label the classes as Valid or Invalid.
- Generate one test case for each Invalid equivalence class.
- Generate test cases that cover as many Valid equivalence classes as possible.
Boundary Value Analysis
- Generate test cases for the boundary values:
  Minimum value, minimum value + 1, minimum value - 1
  Maximum value, maximum value + 1, maximum value - 1
A minimal sketch of both techniques follows.
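
As an illustration of the two techniques above, here is a minimal sketch in Python,
assuming a hypothetical input field that accepts integers from 1 to 100 (the range and
all names are illustrative, not taken from any specification in this text):

    # Sketch: deriving test inputs for a hypothetical field accepting 1..100.

    def boundary_values(minimum, maximum):
        """Classic boundary-value inputs: min-1, min, min+1, max-1, max, max+1."""
        return [minimum - 1, minimum, minimum + 1,
                maximum - 1, maximum, maximum + 1]

    def equivalence_classes(minimum, maximum):
        """One representative per class: one valid value inside the range and
        one invalid value on each side of it."""
        return {
            "valid":        (minimum + maximum) // 2,
            "invalid_low":  minimum - 10,
            "invalid_high": maximum + 10,
        }

    if __name__ == "__main__":
        print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
        print(equivalence_classes(1, 100))  # {'valid': 50, 'invalid_low': -9, 'invalid_high': 110}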

Error Guessing
- Generating test cases against the specification, based on experience.
- It is a typical checklist-driven testing method.
White Box Testing:White box testing is also called as Structural testing. User does require the knowledge of
software code. Using WBT methods, the S/W engineer can derive test cases that do the
following:

Guarantee that all independent paths within a module have been exercised at least
once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries and within their operation bounds
Exercise internal data structures to ensure their validity

Control Structure Testing:

Has the following test case design methods.
1. Condition Testing is a method that exercises the logical conditions contained in a
program module. The possible types of components in a condition include a Boolean
operator, a Boolean variable, a pair of Boolean parentheses, a relational operator, and an
arithmetic expression. If a condition is incorrect, then at least one component of the
condition is incorrect. Thus the types of errors that may occur are:

- Boolean operator error
- Boolean variable error
- Boolean parenthesis error
- Relational operator error
- Arithmetic expression error

Structure = 1 Entry + 1 Exit, with certain constraints, conditions, and loops.

Approach:
Basis Path Testing: Cyclomatic Complexity and the McCabe method
Structure Testing: Condition Testing, Data Flow Testing, and Loop Testing
Grey Box Testing :Grey Box Testing is a new term, which evolved due to the different behaviors of the
system.This is just a combination of both Black Box & White Box Testing.Tester should
have the knowledge of both the internals and externals of the function.
Even though you probably dont have full knowledge of the internals of the product you
test, a test strategy based partly on internals is a powerful idea. We call this Grey Box
Testing. The concept is simple: If you know something about how the product works on
the inside, you can test it better from the outside. This is not to be confused with White
Box Testing, which attempts to cover the internals of the product in detail. In Gray Box
mode, you are testing from the outside of the product, just as you do with Black Box, but
your testing choices are informed by your knowledge of how the underlying components
operate and interact.
Gray Box Testing is especially important with Web and Internet applications,
because the Internet is built around loosely integrated components that connect via
relatively well-defined interfaces. Unless you understand the architecture of the Net,
your testing will be skin deep
The Test Development Life Cycle (TDLC)
Usually, testing is considered a part of the System Development Life Cycle. With our
practical experience, we framed this Test Development Life Cycle.
The diagram does not depict where and when you write your Test Plan and Strategy
documents, but it is understood that these documents should be ready before you begin
your testing activities. Ideally, the Test Plan and Test Strategy documents are made at
the same time as the Project Plan and Project Strategy.
Testing at Each Stage of Development

Each development-stage document serves as the base for the corresponding test
documents:

- Requirement Study -> Requirement Checklist
- Software Requirement Specification -> Functional Specification Checklist
- Functional Specification Document -> Unit Test Case Documents
- Architecture Design / Detailed Design Document + Functional Specification
  Document -> Integration Test Case Document
- Functional Specification Document + Performance Criteria -> System Test Case
  Document, Performance Test Cases and Scenarios
- Unit/Integration/System Test Case Documents + Software Requirement
  Specification -> Regression Test Case Document
- Software Requirement Specification -> User Acceptance Test Case
  Documents/Scenarios


Verification:Verification is the process of evaluating a system or component to determine whether the


products of a given development phase satisfy the conditions imposed at the start of that
phase.
Importance of the Verification Phase:Verification process helps in detecting defects early, and preventing their leakage
downstream. Thus, the higher cost of later detection and rework is eliminated.

Reviews:A process or meeting during which a work product, or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment
or approval.
The main goal of reviews is to find defects. Reviews are a good compliment to testing to
help assure quality. A few purposes of SQA reviews can be as follows:
Assure the quality of deliverable before the project moves to the next stage.
Once a deliverable has been reviewed, revised as required, and approved, it can be
used as a basis for the next stage in the life cycle.
Types of reviews:Types of reviews include Management Reviews, Technical Reviews, Inspections,
Walkthroughs and Audits.

Management Reviews:

Management reviews are performed by those directly responsible for the system in order
to monitor progress, determine the status of plans and schedules, and confirm
requirements and their system allocation.
Therefore the main objectives of Management Reviews can be categorized as follows:
- Validate from a management perspective that the project is making progress
according to the project plan.
- Ensure that deliverables are ready for management approval.
- Resolve issues that require management's attention.
- Identify any project bottlenecks.
- Keep the project in control.
Decisions made during such reviews include corrective actions, changes in the
allocation of resources, or changes to the scope of the project.
In management reviews the following software products are reviewed:

- Audit Reports
- Software Configuration Management Plan
- Contingency plans
- Installation plans
- Risk management plans
- Software Q/A plans

The participants of the review play the roles of Decision-Maker, Review Leader,
Recorder, Management Staff, and Technical Staff.

Technical Reviews:
Technical reviews confirm that the product conforms to specifications, adheres to
regulations, standards, guidelines, and plans, that changes are properly implemented, and
that changes affect only those system areas identified by the change specification.
The main objectives of Technical Reviews can be categorized as follows:
- Ensure that the software conforms to the organization's standards.
- Ensure that any changes in the development procedures (design, coding, testing)
are implemented per the organization's pre-defined standards.

In technical reviews, the following software products are reviewed:

- Software requirements specification
- Software design description
- Software test documentation
- Software user documentation
- Installation procedure
- Release notes

The participants of the review play the roles of Decision-Maker, Review Leader,
Recorder, and Technical Staff.

Requirement Review:A process or meeting during which the requirements for a system, hardware item, or
software item are presented to project personnel, managers, users, customers, or other
interested parties for comment or approval. Types include system requirements review,
software requirements review.
Who is involved in Requirement Review?
Product management leads Requirement Review. Members from every affected
department participate in the review which includes functional consultants from
customer end.
Input Criteria
Software requirement specification is the essential document for the review. A
checklist can be used for the review.

Exit Criteria
Exit criteria include the filled & completed checklist with the reviewers
comments & suggestions and the re-verification whether they are incorporated in the
documents.

Design Review:A process or meeting during which a system, hardware, or software design is presented to
project personnel, managers, users, customers, or other interested parties for comment or
approval. Types include critical design review, preliminary design review, and system
design review.

Who is involved in a Design Review?

A QA team member leads the design review. Members from the development team and
the QA team participate in the review.
Input Criteria:
The design document is the essential document for the review. A checklist can be used
for the review.
Exit Criteria:
Exit criteria include the filled and completed checklist with the reviewers'
comments and suggestions, and the re-verification of whether they have been
incorporated in the documents.

Code Review:A meeting at which software code is presented to project personnel, managers, users,
customers, or other interested parties for comment or approval.
Who is involved in Code Review?
QA team member (In case the QA Team is only involved in Black Box Testing, then
the Development team lead chairs the review team) leads code review. Members from
development team and QA team participate in the review.
Input Criteria:The Coding Standards Document and the Source file are the essential documents
for the review. A checklist can be used for the review.
Exit Criteria:Exit criteria include the filled & completed checklist with the reviewers
comments & suggestions and the re-verification whether they are incorporated in the
documents.

Walkthroughs:A static analysis technique in which a designer or programmer leads members of the
development team and other interested parties through a segment of documentation or
code, and the participants ask questions and make comments about possible errors,
violation of development standards, and other problems.
The objectives of Walkthrough can be summarized as follows:
Detect errors early.

- Ensure that (re)established standards are followed.
- Train and exchange technical information among the project teams which participate
in the walkthrough.
- Increase the quality of the project, thereby improving the morale of the team members.

The participants in Walkthroughs assume one or more of the following roles:


a) Walk-through leader
b) Recorder
c) Author
d) Team member
To consider a review as a systematic walk-through, a team of at least two members shall
be assembled. Roles may be shared among the team members. The walk-through leader
or the author may serve as the recorder. The walk-through leader may be the author.
Individuals holding management positions over any member of the walk-through team
shall not participate in the walk-through.
Input to the walk-through shall include the following:
a) A statement of objectives for the walk-through
b) The software product being examined
c) Standards that are in effect for the acquisition, supply, development, operation, and/or
maintenance of the software product
Input to the walk-through may also include the following:
d) Any regulations, standards, guidelines, plans, and procedures against which the
software product is to be inspected
e) Anomaly categories
The walk-through shall be considered complete when
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walk-through output has been completed

Inspection:A static analysis technique that relies on visual examination of development products to
detect errors, violations of development standards, and other problems. Types include
code inspection; design inspection, Architectural inspections, Test ware inspections etc.
The participants in Inspections assume one or more of the following roles:
a) Inspection leader

b) Recorder
c) Reader
d) Author
e) Inspector
All participants in the review are inspectors. The author shall not act as inspection leader
and should not act as reader or recorder. Other roles may be shared among the team
members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team shall
not participate in the inspection.
Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list
Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the
software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories
The individuals responsible for the software product may make additional reference
material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection
meeting. The exit decision shall determine if the software product meets the inspection
exit criteria and shall prescribe any appropriate rework and verification. Specifically, the
inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only
minor rework (for example, rework that requires no further verification).
b) Accept with rework verification. The software product is to be accepted after the
inspection leader or
a designated member of the inspection team (other than the author) verifies rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection
shall examine the software product areas changed to resolve anomalies identified in the
last inspection, as well as side effects of those changes.


White Box Testing:White box testing involves looking at the structure of the code. When you know the
internal structure of a product, tests can be conducted to ensure that the internal
operations performed according to the specification. And all internal components have
been adequately exercised. In other word WBT tends to involve the coverage of the
specification in the code.
Code coverage is defined in six types as listed below.

Segment coverage Each segment of code b/w control structure is executed at


least once.
Branch Coverage or Node Testing Each branch in the code is taken in each
possible direction at least once.
Compound Condition Coverage When there are multiple conditions, you must
test not only each direction but also each possible combinations of conditions,
which is usually done by using a Truth Table
Basis Path Testing Each independent path through the code is taken in a predetermined order. This point will further be discussed in other section.
Data Flow Testing (DFT) In this approach you track the specific variables
through each possible calculation, thus defining the set of intermediate paths
through the code i.e., those based on each piece of code chosen to be tracked.
Even though the paths are considered independent, dependencies across multiple
paths are not really tested for by this approach. DFT tends to reflect dependencies
but it is mainly through sequences of data manipulation. This approach tends to
uncover bugs like variables used but not initialize, or declared but not used, and
so on.
Path Testing Path testing is where all possible paths through the code are
defined and covered. This testing is extremely laborious and time consuming.
Loop Testing In addition top above measures, there are testing strategies based
on loop testing. These strategies relate to testing single loops, concatenated loops,
and nested loops. Loops are fairly simple to test unless dependencies exist among
the loop or b/w a loop and the code it contains.
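
To make compound condition coverage concrete, here is a minimal sketch, assuming a
hypothetical guard "a > 0 and b > 0" (the function and the input values are illustrative):

    # Sketch: compound-condition coverage driven by a truth table.
    from itertools import product

    def guarded(a, b):
        return "taken" if a > 0 and b > 0 else "not taken"

    # One representative input pair per truth-table row (assumed values).
    inputs = {(True, True): (1, 1), (True, False): (1, -1),
              (False, True): (-1, 1), (False, False): (-1, -1)}

    for row in product([True, False], repeat=2):   # the truth table
        a, b = inputs[row]
        expected = "taken" if all(row) else "not taken"
        assert guarded(a, b) == expected, row
    print("all four condition combinations covered")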

What do we do in WBT?
In WBT, we use the control structure of the procedural design to derive test cases.
Using WBT methods a tester can derive the test cases that


Guarantee that all independent paths within a module have been exercised at
least once.
Exercise all logical decisions on their true and false values.
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity.

White box testing (WBT) is also called Structural or Glass box testing.
Why WBT?
We do WBT because Black box testing is unlikely to uncover numerous sorts of defects
in the program. These defects can be of the following nature:

Logic errors and incorrect assumptions are inversely proportional to the


probability that a program path will be executed. Error tend to creep into our
work when we design and implement functions, conditions or controls that are
out of the program
The logical flow of the program is sometimes counterintuitive, meaning that
our unconscious assumptions about flow of control and data may lead to
design errors that are uncovered only when path testing starts.
Typographical errors are random, some of which will be uncovered by syntax
checking mechanisms but others will go undetected until testing begins.

Limitations:Unfortunately in WBT, exhaustive testing of a code presents certain logistical problems.


Even for small programs, the number of possible logical paths can be very large. For
instance, a 100 line C Language program that contains two nested loops executing 1 to 20
times depending upon some initial input after some basic data declaration. Inside the
interior loop four if-then-else constructs are required. Then there are approximately 10 14
logical paths that are to be exercised to test the program exhaustively. Which means that a
magic test processor developing a single test case, execute it and evaluate results in one
millisecond would require 3170 years working continuously for this exhaustive testing
which is certainly impractical. Exhaustive WBT is impossible for large software systems.
But that doesnt mean WBT should be considered as impractical. Limited WBT in which
a limited no. of important logical paths are selected and exercised and important data
structures are probed for validity, is both practical and WBT. It is suggested that white
and black box testing techniques can be coupled to provide an approach that that
validates the software interface selectively ensuring the correction of internal working of
the software.

Basis Path Testing:Basis path testing is a white box testing technique first proposed by Tom McCabe. The
Basis path method enables to derive a logical complexity measure of a procedural design
and use this measure as a guide for defining a basis set of execution paths. Test Cases
derived to exercise the basis set are guaranteed to execute every statement in the program
at least one time during testing.
The flow graph depicts logical control flow using a diagrammatic notation. Each
structured construct has a corresponding flow graph symbol.
Cyclomatic Complexity:Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program. When used in the context of a basis path testing method,
the value computed for Cyclomatic complexity defines the number for independent paths
in the basis set of a program and provides us an upper bound for the number of tests that
must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at least one new set
of processing statements or a new condition.
Computing Cyclomatic Complexity:
Cyclomatic complexity has a foundation in graph theory and provides an extremely
useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
   V(G) = E - N + 2
   where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as
   V(G) = P + 1
   where P is the number of predicate nodes contained in the flow graph G.
Graph Matrices:
The procedure for deriving the flow graph, and even determining a set of basis paths, is
amenable to mechanization. To develop a software tool that assists in basis path testing,
a data structure called a graph matrix can be quite useful.
A graph matrix is a square matrix whose size is equal to the number of nodes in the flow
graph. Each row and column corresponds to an identified node, and matrix entries
correspond to connections between nodes. A small sketch follows.
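
Here is a minimal sketch of a graph matrix for a small assumed flow graph (a single
if-else), with V(G) = E - N + 2 computed from it; the graph itself is illustrative:

    # Sketch: graph matrix of an assumed 4-node flow graph (an if-else).
    # matrix[i][j] == 1 means there is an edge from node i to node j.
    matrix = [
        [0, 1, 1, 0],   # node 0 is a decision node branching to nodes 1 and 2
        [0, 0, 0, 1],   # node 1 -> node 3
        [0, 0, 0, 1],   # node 2 -> node 3
        [0, 0, 0, 0],   # node 3 is the exit node
    ]

    N = len(matrix)                          # number of nodes
    E = sum(sum(row) for row in matrix)      # number of edges
    print("V(G) =", E - N + 2)               # 4 - 4 + 2 = 2 independent paths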


Control Structure Testing:Described below are some of the variations of Control Structure Testing.
Condition Testing:Condition testing is a test case design method that exercises the logical conditions
contained in a program module.
Data Flow Testing:The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program.
Loop Testing:Loop Testing is a white box testing technique that focuses exclusively on the validity of
loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops,
nested loops, and unstructured loops.
Simple Loops:The following sets of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m<n.
5. n-1, n, n+1 passes through the loop.
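
Here is a minimal sketch of these pass counts, assuming a hypothetical unit whose loop
runs once per input item, with n = 10 as an assumed maximum number of passes:

    # Sketch: exercising a simple loop at the pass counts listed above.
    N_MAX = 10  # assumed maximum number of allowable passes

    def process(items):
        total = 0
        for item in items[:N_MAX]:   # the loop under test
            total += item
        return total

    # Pass counts: skip, 1, 2, m < n, n-1, n, and an attempted n+1.
    for passes in (0, 1, 2, 5, N_MAX - 1, N_MAX, N_MAX + 1):
        result = process([1] * passes)
        assert result == min(passes, N_MAX), (passes, result)
    print("loop exercised at all boundary pass counts")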
Nested Loops:If we extend the test approach from simple loops to nested loops, the number of possible
tests would grow geometrically as the level of nesting increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values. Add other tests for out-of-range or exclude values.

3. Work outward, conducting tests for the next loop, but keep all other outer loops at
minimum values and other nested loops to typical values.
4. Continue until all loops have been tested.
Concatenated Loops:Concatenated loops can be tested using the approach defined for simple loops, if each of
the loops is independent of the other. However, if two loops are concatenated and the
loop counter for loop 1 is used as the initial value for loop 2, then the loops are not
independent.
Unstructured Loops:Whenever possible, this class of loops should be redesigned to reflect the use of the
structured programming constructs.

Black Box Testing:

Black box is a test design method. Black box testing treats the system as a "black box",
so it doesn't explicitly use knowledge of the internal structure; in other words, the test
engineer need not know the internal workings of the black box.
It focuses on the functionality of the module.
Some people like to call black box testing behavioral, functional, opaque-box, or
closed-box testing. While the term black box is the most popularly used, many people
prefer the terms "behavioral" and "structural" for black box and white box respectively.
Behavioral test design is slightly different from black-box test design, because the use of
internal knowledge isn't strictly forbidden, but it's still discouraged.
Personally, we feel that there is a trade-off between the approaches used to test a product
using white box and black box types.
There are some bugs that cannot be found using only black box or only white box
testing. If the test cases are extensive and the test inputs are also from a large sample
space, then it is always possible to find the majority of the bugs through black box
testing.
Advantages:
- The tester can be non-technical.
- This testing is most likely to find those bugs the user would find.

- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages:
- Chances of repeating tests that have already been done by the programmer.
- The test inputs need to be from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases
is slow and difficult.
- Chances of having unidentified paths during this testing.
Validation Phase:The Validation Phase falls into picture after the software is ready or when the code is
being written. There are various techniques and testing types that can be appropriately
used while performing the testing activities. Let us examine a few of them.

Unit Testing:This is a typical scenario of Manual Unit Testing activityA Unit is allocated to a Programmer for programming. Programmer has to use
Functional Specifications document as input for his work. Programmer prepares
Program Specifications for his Unit from the Functional Specifications. Program
Specifications describe the programming approach, coding tips for the Units coding.
The programmer implements some functionality for the system to be developed. The
same is tested by referring the unit test cases. While testing that functionality if
any defects have been found, they are recorded using the defect logging tool
whichever is applicable. The programmer fixes the bugs found and tests the
same for any errors.
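
Here is a minimal sketch of what such a unit test might look like in code, assuming a
hypothetical discount() unit and Python's built-in unittest module (the unit, its rule, and
the cases are illustrative, not taken from any specification in this text):

    # Sketch: unit test cases for a hypothetical unit.
    import unittest

    def discount(order_total):
        """Hypothetical unit: 10% discount on orders of 100 or more."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total >= 100 else order_total

    class DiscountUnitTests(unittest.TestCase):
        def test_below_threshold_gets_no_discount(self):
            self.assertEqual(discount(99), 99)

        def test_at_threshold_gets_discount(self):
            self.assertEqual(discount(100), 90.0)

        def test_negative_total_is_rejected(self):
            with self.assertRaises(ValueError):
                discount(-1)

    if __name__ == "__main__":
        unittest.main()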
Stubs and Drivers:A software application is made up of a number of Units, where output of one Unit
goes as an Input of another Unit. e.g. A Sales Order Printing program takes a Sales
Order as an input, which is actually an output of Sales Order Creation program.
Due to such interfaces, independent testing of a Unit becomes impossible. But that is
what we want to do; we want to test a Unit in isolation! So here we use Stub and
Driver.
A Driver is a piece of software that drives (invokes) the Unit being tested. A driver
creates necessary Inputs required for the Unit and then invokes the Unit.
31

Manual Testing
Q

Testing Tools

Mind

A Unit may reference another Unit in its logic. A Stub takes place of such subordinate
unit during the Unit Testing. A Stub is a piece of software that works similar to a unit
which is referenced by the Unit being tested, but it is much simpler that the actual unit. A
Stub works as a Stand-in for the subordinate unit and provides the minimum required
behavior for that unit.
Programmer needs to create such Drivers and Stubs for carrying out Unit Testing.
Both the Driver and the Stub are kept at a minimum level of complexity, so that they do
not induce any errors while testing the Unit in question.
Example - For Unit Testing of Sales Order Printing program, a Driver program will
have the code which will create Sales Order records using hardcoded data and then call
Sales Order Printing program. Suppose this printing program uses another unit which
calculates Sales discounts by some complex calculations. Then call to this unit will be
replaced by a Stub, which will simply return fix discount data.
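
Here is a minimal sketch of the Sales Order example, assuming illustrative names and
hardcoded data (none of them come from a real system):

    # Sketch: a driver and a stub around a unit under test.

    def discount_stub(order):
        """Stub: stands in for the complex discount unit; returns fixed data."""
        return 5.0

    def print_sales_order(order, discount_fn):
        """The unit under test: formats an order using a discount unit."""
        total = sum(price for _, price in order["lines"]) - discount_fn(order)
        return f"Order {order['id']}: total {total:.2f}"

    def driver():
        """Driver: creates the hardcoded input, then invokes the unit."""
        order = {"id": "SO-001",
                 "lines": [("widget", 30.0), ("gadget", 20.0)]}
        output = print_sales_order(order, discount_fn=discount_stub)
        assert output == "Order SO-001: total 45.00"
        print(output)

    if __name__ == "__main__":
        driver()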

Integration Testing:Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing. The
objective is to take unit tested components and build a program structure that has been
dictated by design.
Usually, the following methods of Integration testing are followed:
1. Top-down Integration approach.
2. Bottom-up Integration approach.
Top-Down Integration:Top-down integration testing is an incremental approach to construction of program
structure. Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module. Modules subordinate to the main control
module are incorporated into the structure in either a depth-first or breadth-first manner.
1. The Integration process is performed in a series of five steps:
2. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
3. Depending on the integration approach selected subordinate stubs are replaced
one at a time with actual components.
4. Tests are conducted as each component is integrated.
5. On completion of each set of tests, stub is replaced with the real component.
6. Regression testing may be conducted to ensure that new errors have not been
introduced.
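
Here is a minimal sketch of those steps, assuming a hypothetical main control module
with two subordinate components (all names and behavior are illustrative):

    # Sketch: top-down integration, replacing stubs one at a time.

    def validate_stub(data):    # stub for the validation subordinate
        return True

    def save_stub(data):        # stub for the persistence subordinate
        return "stub-id"

    def validate_real(data):    # real component integrated in round 2
        return bool(data)

    def main_module(data, validate, save):
        """Hypothetical main control module under test."""
        if not validate(data):
            return None
        return save(data)

    # Round 1: the main control module is exercised with both stubs.
    assert main_module({"x": 1}, validate_stub, save_stub) == "stub-id"
    # Round 2: one stub is replaced by the real validator; tests are rerun.
    assert main_module({}, validate_real, save_stub) is None
    print("integration rounds passed")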

Bottom-Up Integration:Bottom-up integration testing begins construction and testing with atomic modules (i.e.
components at the lowest levels in the program structure). Because components are
integrated from the bottom up, processing required for components subordinate to a given
level is always available and the need for stubs is eliminated.
1. A Bottom-up integration strategy may be implemented with the following steps:
2. Low level components are combined into clusters that perform a specific software
sub function.
3. A driver is written to coordinate test case input and output.
4. The cluster is tested.
Drivers are removed and clusters are combined moving upward in the program structure.

System Testing:System testing concentrates on testing the complete system with a variety of techniques
and methods. System Testing comes into picture after the Unit and Integration Tests.

Compatibility Testing:Compatibility Testing concentrates on testing whether the given application goes well
with third party tools, software or hardware platform.
For example, you have developed a web application. The major compatibility issue is, the
web site should work well in various browsers. Similarly when you develop applications
on one platform, you need to check if the application works on other operating systems as
well. This is the main goal of Compatibility Testing.
Before you begin compatibility tests, our sincere suggestion is that you should have a
cross reference matrix between various softwares, hardware based on the application
requirements. For example, let us suppose you are testing a web application. A sample list
can be as follows:
Hardware
Software
Operating System
Pentium II, 128 MB RAM IE 4.x, Opera, Netscape
Windows 95
Pentium III, 256 MB IE 5.x, Netscape
Windows XP
RAM
Pentium IV, 512 MB Mozilla
Linux
RAM
Compatibility tests are also performed for various client/server based applications where
the hardware changes from client to client.


Compatibility testing is very crucial for organizations developing their own products.
The products have to be checked for compliance with the competitors of the third-party
tools, hardware, or software platforms. E.g. a call center product has been built for a
solution with product X, but there is a client interested in using it with product Y; then
the issue of compatibility arises. It is important that the product is compatible with
varying platforms. Within the same platform, the organization has to be watchful that
with each new release the product is tested for compatibility.
A good way to keep up with this would be to have a few resources assigned, along with
their routine tasks, to keep updated about such compatibility issues and to plan for
testing when and if the need arises.
The above example does not imply that companies which are not developing products do
not have to cater for this type of testing. Their case is equally relevant: if an application
uses standard software, would it be able to run successfully with the newer versions too?
Or if a website runs on IE or Netscape, what will happen when it is opened through
Opera or Mozilla? Here again, it is best to keep these issues in mind and plan for
compatibility testing in parallel, to avoid any catastrophic failures and delays.

Recovery Testing:Recovery testing is a system test that focuses the software to fall in a variety of ways and
verifies that recovery is properly performed. If it is automatic recovery then reinitialization, check pointing mechanisms, data recovery and restart should be evaluated
for correctness. If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable limits.

Usability Testing:Usability is the degree to which a user can easily learn and use a product to achieve a
goal. Usability testing is the system testing which attempts to find any human-factor
problems. A simpler description is testing the software from a users point of view.
Essentially it means testing software to prove/ensure that it is user-friendly, as distinct
from testing the functionality of the software. In practical terms it includes ergonomic
considerations, screen design, standardization etc.

Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in
fact, protect it from improper penetration. During security testing, password cracking,
unauthorized entry into the software, and network security are all taken into consideration.

Stress Testing:
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume. The following types of tests may be conducted during
stress testing:
- Special tests may be designed that generate ten interrupts per second, when
one or two is the average rate.
- Input data rates may be increased by an order of magnitude to determine how
input functions will respond.
- Test cases that require maximum memory or other resources.
- Test cases that may cause excessive hunting for disk-resident data.
- Test cases that may cause thrashing in a virtual operating system.

Performance Testing
Performance testing of a Web site is basically the process of understanding how the Web
application and its operating environment respond at various user load levels. In general,
we want to measure the Response Time, Throughput, and Utilization of the Web site
while simulating attempts by virtual users to simultaneously access the site. One of the
main objectives of performance testing is to maintain a Web site with low response time,
high throughput, and low utilization.
The effort of performance testing is addressed in two ways:
Load testing
Stress testing

Load testing:Load testing is a much used industry term for the effort of performance testing. Here load
means the number of users or the traffic for the system. Load testing is defined as the
testing to determine whether the system is capable of handling anticipated number of
users or not.
In Load Testing, the virtual users are simulated to exhibit the real user behavior as much
as possible. Even the user think time such as how users will take time to think before
inputting data will also be emulated. It is carried out to justify whether the system is
performing well for the specified limit of load.
For example, Let us say an online-shopping application is anticipating 1000 concurrent
user hits at peak period. In addition, the peak period is expected to stay for 12 hrs. Then
the system is load tested with 1000 virtual users for 12 hrs. These kinds of tests are
carried out in levels: first 1 user, 50 users, and 100 users, 250 users, 500 users and so on
till the anticipated limit are reached. The testing effort is closed exactly for 1000
concurrent users.
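
Here is a minimal, scaled-down sketch of such a ramped load test (5 to 50 virtual users
instead of 1 to 1000); the transaction, its timings, and the think time are all assumed
stand-ins, not a real client for any system:

    # Sketch: ramping up simulated virtual users and measuring response time.
    import time
    import random
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        time.sleep(random.uniform(0.1, 0.3))   # emulated user think time
        start = time.perf_counter()
        time.sleep(0.05)                        # stand-in for the real request
        return time.perf_counter() - start      # response time of this hit

    for users in (5, 10, 25, 50):               # ramp-up levels
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(lambda _: transaction(), range(users)))
        print(f"{users:3d} users: avg response {sum(times) / len(times):.3f}s")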

The objective of load testing is to check whether the system can perform well for the
specified load. The system may be capable of accommodating more than 1000
concurrent users, but validating that is not within the scope of load testing. No attempt is
made to determine how many more concurrent users the system is capable of servicing.
Table 1 illustrates the example specified.

Stress testing:Stress testing is another industry term of performance testing. Though load testing &
Stress testing are used synonymously for performancerelated efforts, their goal is
different.
Unlike load testing where testing is conducted for specified number of users, stress
testing is conducted for the number of concurrent users beyond the specified limit. The
objective is to identify the maximum number of users the system can handle before
breaking down or degrading drastically. Since the aim is to put more stress on system,
think time of the user is ignored and the system is exposed to excess load. The goals of
load and stress testing are listed in Table 2. Refer to table 3 for the inference drawn
through the Performance Testing Efforts.
Let us take the same example of the online shopping application to illustrate the objective
of stress testing. It determines the maximum number of concurrent users an online system
can service, which can be beyond 1000 users (the specified limit). However, there is a
possibility that the maximum load that can be handled by the system may be found to be
the same as the anticipated limit.
Stress testing also determines the behavior of the system as the user base increases. It
checks whether the system is going to degrade gracefully or crash at a shot when the load
goes beyond the specified limit.
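A minimal sketch of this stress loop follows; error_rate_at is a hypothetical probe
(simulated here so the sketch runs), and the 5% degradation threshold is an assumed
value, not a standard.

# Sketch: push past the specified limit until the system degrades drastically.
SPECIFIED_LIMIT = 1000
STEP = 250
MAX_ERROR_RATE = 0.05        # assumed threshold for "degrading drastically"

def error_rate_at(users):
    # Hypothetical probe: a real test would drive `users` virtual users
    # (think time ignored) and measure failures. Simulated for the sketch.
    return 0.0 if users <= 1400 else 0.20

users = SPECIFIED_LIMIT
while error_rate_at(users) <= MAX_ERROR_RATE:
    users += STEP            # keep adding users beyond the anticipated limit
print(f"Maximum concurrent users before drastic degradation: about {users - STEP}")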
Table 1: Load and stress testing of the illustrative example

Type of Testing | Number of Concurrent Users                       | Duration
Load Testing    | 1, 50, 100, 250, 500 ... 1000 users              | 12 hours
Stress Testing  | 1, 50, 100, 250, 500 ... 1000 users and beyond,  | 12 hours
                | up to the maximum the system can handle          |
Table 2: Goals of load and stress testing

Load testing   | Testing for the anticipated user base. Validates whether the
               | system is capable of handling the load under the specified limit.
Stress testing | Testing beyond the anticipated user base. Identifies the maximum
               | load a system can handle. Checks whether the system degrades
               | gracefully or crashes at a shot.

Regression Testing:-
Regression testing, as the name suggests, is used to check the effect of changes made
in the code. Most of the time the testing team is asked to check last-minute changes in the
code just before a release to the client; in this situation the testing team needs to
check only the affected areas. In short, for regression testing the testing team
should get input from the development team about the nature and amount of change in
the fix, so that the testing team can first check the fix and then the side effects of the fix.
In fact, regression testing is the type of testing in which the maximum automation can be
done, because the same set of test cases will be run on different builds multiple times.
But the extent of automation depends on whether the test cases will remain
applicable over time; if the automated test cases do not remain applicable for
long enough, test engineers will end up wasting time on automation without
getting enough value out of it.
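As an illustration of such automation, the following sketch shows a small regression
suite (using Python's standard unittest module) that can be re-run unchanged on every
build; apply_discount and the defect it guards against are hypothetical.

# Sketch of an automatable regression suite: the same cases re-run, unchanged,
# on each new build. `apply_discount` is a hypothetical unit under test.
import unittest

def apply_discount(price, percent):
    """Hypothetical fixed unit: the defect report said 100% gave a wrong price."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_fix_for_reported_defect(self):
        self.assertEqual(apply_discount(80.0, 100), 0.0)   # re-test the fix

    def test_unchanged_behaviour_still_works(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)   # side-effect check

if __name__ == "__main__":
    unittest.main()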

Regression testing is retesting unchanged segments of the application. It involves rerunning
tests that have been previously executed to ensure that the same results can be achieved
currently as were achieved when the segment was last tested.
The selective retesting of a software system that has been modified ensures that any
bugs have been fixed, that no other previously working functions have failed as a
result of the repairs, and that newly added features have not created problems with
previous versions of the software. Also referred to as verification testing, regression
testing is initiated after a programmer has attempted to fix a recognized problem or has
added source code to a program that may have inadvertently introduced errors. It is a
quality control measure to ensure that the newly modified code still complies with its
specified requirements and that unmodified code has not been affected by the
maintenance activity.
What do you do during Regression testing?
o Rerunning of previously conducted tests
o Reviewing previously prepared manual procedures
o Comparing the current test results with the previously executed test results

What are the end goals of Regression testing?


o To ensure that the unchanged system segments function properly
o To ensure that the previously prepared manual procedures remain correct
after the changes have been made to the application system
o To verify that the data dictionary of data elements that have been changed
is correct

Software Testing Life Cycle (STLC):

Test Strategy
Test Planning
Test Design
Test Execution
Defect Report

Test Strategy:-
Before starting any testing activities, the test lead has to think through and arrive at a
strategy. This will describe the approach to be adopted for carrying out test
activities, including the planning activities. This is a formal document, the very first
document regarding the testing area, and is prepared at a very early stage in the SDLC. This
document must provide a generic test approach as well as specific details regarding the
project. The following areas are addressed in the test strategy document.
For example, we should have a master Test Strategy document at the project level and a
detailed Test Plan for every release. The strategy document should give the overall
scope of the project at a high level.
1 Test Levels
The test strategy must talk about what are the test levels that will be carried out for that
particular project. Unit, Integration & System testing will be carried out in all projects.
But many times, the integration & system testing may be combined. Details like this may
be addressed in this section.
2 Roles and Responsibilities
The roles and responsibilities of the test leader, individual testers, and project manager
are to be clearly defined at the project level in this section. This may not have names
associated, but each role has to be very clearly defined. The review and approval
mechanism for test plans and other test documents must be stated here. Also, we have to
state who reviews the test cases and test records and who approves them. The documents
may go through a series of reviews or multiple approvals, and these have to be mentioned
here.
3 Testing Tools
Any testing tools which are to be used at different test levels must be clearly identified.
This includes justifications for the tools being used at that particular level.
4 Risks and Mitigation
Any risks that will affect the testing process must be listed along with their mitigation. By
documenting the risks in this document, we can anticipate their occurrence well ahead
of time and then proactively prevent them from occurring. Sample risks are
dependency on completion of coding done by sub-contractors, capability of
testing tools, etc.
5 Regression Test Approach
When a particular problem is identified, the programs will be debugged and the fix will
be done to the program. To make sure that the fix works, the program will be tested
again. Regression test will make sure that one fix does not create some other problems in
that program or in any other interface. So, a set of related test cases may have to be
repeated again, to make sure that nothing else is affected by a particular fix. How this is
going to be carried out must be elaborated in this section. In some companies, whenever
there is a fix in one unit, all unit test cases for that unit will be repeated, to achieve a
higher level of quality.
6 Test Groups
From the list of requirements, we can identify related areas, whose functionality is
similar. These areas are the test groups. For example, in a railway reservation system,
anything related to ticket booking is a functional group; anything related with report
generation is a functional group. Same way, we have to identify the test groups based on
the functionality aspect.
7 Test Priorities
Among test cases, we need to establish priorities. While testing software projects, certain
test cases will be treated as the most important ones, and if they fail, the product cannot
be released. Other test cases may be treated as cosmetic; if they fail, we can
release the product without much compromise on the functionality. These priority levels
must be clearly stated, and they may be mapped to the test groups as well.
8 Test Status Collections and Reporting
When test cases are executed, the test leader and the project manager must know where
exactly the team stands in terms of testing activities. To know where we stand, the inputs
from the individual testers must come to the test leader. This will include which test cases
were executed, how long it took, how many test cases passed, how many failed, etc. Also,
how often the status is collected must be clearly mentioned. Some companies have a
practice of collecting the status on a daily or weekly basis. This has to be mentioned
clearly.
9 Test Records Maintenance
When the test cases are executed, we need to keep track of the execution details like
when it is executed, who did it, how long it took, what is the result etc. This data must be
available to the test leader and the project manager, along with all the team members, in a
central location. This may be stored in a specific directory in a central server and the
document must say clearly about the locations and the directories. The naming
convention for the documents and files must also be mentioned.
10 Requirements Traceability Matrix
Ideally, each piece of software developed must satisfy the set of requirements completely.
So, right from design, each requirement must be addressed in every single document in
the software process. The documents include the HLD, LLD, source code, unit test cases,
integration test cases and the system test cases. Refer to the following description of the
Requirements Traceability Matrix process. In this matrix, the rows will have the
requirements, and for every document {HLD, LLD, etc.} there will be a separate column.
So, in every cell, we need to state what section in the HLD addresses a particular
requirement. Ideally, if every requirement is addressed in every single document, all the
individual
cells must have valid section ids or names filled in. Then we know that every requirement
is addressed. In case any requirement is missed, we need to go back to the document
and correct it so that it addresses the requirement. For testing at each level, we may have
to address the requirements; one integration or system test case may address
multiple requirements.
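A minimal sketch of how such a matrix can be kept and checked mechanically follows;
the requirement IDs and section names are illustrative assumptions.

# Sketch of an RTM check: rows are requirements, columns are documents;
# empty cells reveal requirements not yet addressed. All IDs are illustrative.
rtm = {
    "REQ-001": {"HLD": "3.1", "LLD": "4.2", "Unit TC": "UTC-07", "System TC": "STC-12"},
    "REQ-002": {"HLD": "3.2", "LLD": "",    "Unit TC": "UTC-09", "System TC": "STC-12"},
}

for req, cells in rtm.items():
    missing = [doc for doc, section in cells.items() if not section]
    if missing:
        print(f"{req} is not addressed in: {', '.join(missing)}")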
11 Test Summary
The senior management may like to have test summary on a weekly or monthly basis. If
the project is very critical, they may need it on a daily basis also. This section must
address what kind of test summary reports will be produced for the senior management
along with the frequency.
The test strategy must give a clear vision of what the testing team will do for the whole
project for the entire duration. This document may be presented to the client as well, if
needed. The person who prepares this document must be functionally strong in the
product domain, with very good experience, as this is the document that is going to
drive the entire team's testing activities. The test strategy must be clearly explained to
the testing team members right at the beginning of the project.

Test Plan:-
The test strategy identifies multiple test levels, which are going to be performed for the
project. Activities at each level must be planned well in advance and formally
documented. The individual test levels are carried out based on the individual plans only.
The plans are to be prepared by experienced people only. In all test plans, the ETVX
{Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the entry criteria
for that phase; for example, for unit testing, coding must be complete before unit testing
can start. Task is the activity that is performed. Validation is the way in which the
progress, correctness and compliance are verified for that phase. Exit tells the
completion criteria of that phase, after the validation is done; for example, the exit
criterion for unit testing is that all unit test cases must pass.
ETVX is a modeling technique for developing worldly-level and atomic-level models. It
stands for Entry, Task, Verification and Exit. It is a task-based model where the details of
each task are explicitly defined in a specification table against each element, i.e. Entry,
Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. Implementation
cells are basically unit cells containing further tasks.
For example, if there is a task of size estimation, then there will be a unit cell of size
estimation. Since this task has further tasks, namely define measures and estimate size,
the unit cell containing these further tasks will be referred to as the implementation cell,
and a separate table will be constructed for it.
A purpose is also stated and the viewer of the model may also be defined e.g. top
management or customer.
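As an illustration, the ETVX entries for the unit-testing example above could be captured
as data; the structure below is an assumption, while the criteria are taken from this
section.

# Sketch: ETVX criteria captured as data, using the unit-testing example above.
etvx_unit_testing = {
    "Entry":      "Coding is complete",
    "Task":       "Execute all unit test cases",
    "Validation": "Verify progress, correctness and compliance of the phase",
    "Exit":       "All unit test cases pass",
}

for element, criterion in etvx_unit_testing.items():
    print(f"{element:10s}: {criterion}")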

Unit Test Plan:-
The unit test plan is the overall plan to carry out the unit test activities. The lead tester
prepares it and it will be distributed to the individual testers. It contains the following
sections.
1 What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic
input/output of the units along with their basic functionality will be tested. In this case
mostly the input units will be tested for the format, alignment, accuracy and the totals.
The UTP will clearly give the rules of what data types are present in the system, their
format and their boundary conditions. This list may not be exhaustive; but it is better to
have a complete list of these details.
2 Sequence of Testing
The sequences of test activities that are to be carried out in this phase are to be listed in
this section. This includes whether to execute positive test cases first or negative test
cases first, to execute test cases based on the priority, to execute test cases based on test
groups etc. Positive test cases prove that the system performs what is supposed to do;
negative test cases prove that the system does not perform what is not supposed to do.
Testing the screens, files, database etc., are to be given in proper sequence.
3 Basic Functionality of Units
How the independent functionalities of the units are tested which excludes any
communication between the unit and other units. The interface part is out of scope of this
test level. Apart from the above sections, the following sections are addressed, very
specific to unit testing.
Unit Testing Tools
Priority of Program units
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX criteria

2 Integration Test Plan


The integration test plan is the overall plan for carrying out the activities in the
integration test level, which contains the following sections.
2.1 What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing:
internal and external interfaces, with their requests and responses, are to be explained.
This need not go deep into technical details, but the general approach of how the
interfaces are triggered is explained.
2.2 Sequence of Integration
When there are multiple modules present in an application, the sequence in which they
are to be integrated will be specified in this section. In this, the dependencies between the
modules play a vital role. If a unit B has to be executed, it may need the data that is fed
by unit A and unit X. In this case, the units A and X have to be integrated and then using
that data, the unit B has to be tested. This has to be stated to the whole set of units in the
program. Given this correctly, the testing activities will lead to the product, slowly
building the product, unit by unit and then integrating them.
2.3 List of Modules and Interface Functions
There may be N number of units in the application, but the units that are going to
communicate with each other, alone are tested in this phase. If the units are designed in
such a way that they are mutually independent, then the interfaces do not come into
picture. This is almost impossible in any system, as the units have to communicate to
other units, in order to get different types of functionalities executed. In this section, we
need to list the units and mention for what purpose each talks to the others. This
will not go into technical aspects, but at a higher level, this has to be explained in plain
English.

3 System Test Plan {STP}


The system test plan is the overall plan carrying out the system test level activities. In the
system test, apart from testing the functional aspects of the system, there are some special
testing activities carried out, such as stress testing etc. The following are the sections
normally present in system test plan.
3.1 What is to be tested? (should define both the in-scope and out-of-scope of testing)
This section defines the scope of system testing, very specific to the project. Normally,
the system testing is based on the requirements. All requirements are to be verified in the
scope of system testing. This covers the functionality of the product. Apart from this, what
special testing is performed is also stated here.
3.2 Functional Groups and the Sequence
The requirements can be grouped in terms of the functionality. Based on this, there may
be priorities also among the functional groups. For example, in a banking application,
anything related to customer accounts can be grouped into one area, anything related to
inter-branch transactions may be grouped into one area etc. Same way for the product
being tested, these areas are to be mentioned here and the suggested sequences of testing
of these areas, based on the priorities are to be described.
3.3 Special Testing Methods
This covers the different special tests like load/volume testing, stress testing,
interoperability testing etc. These tests are to be done based on the nature of the
product, and it is not mandatory that every one of these special tests be performed
for every product.
Apart from the above sections, the following sections are addressed, very specific to
system testing.

System Testing Tools


Priority of functional groups
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX criteria
Build/Refresh criteria

3.4 Acceptance Test Plan {ATP}


The client at their place performs the acceptance testing. It will be very similar to the
system test performed by the Software Development Unit. Since the client is the one who
decides the format and testing methods as part of acceptance testing, there is no specific
clue on the way they will carry out the testing. But it will not differ much from the
system testing. Assume that all the rules which are applicable to the system test can be
applied to acceptance testing also.
Since this is just one level of testing done by the client for the overall product, it may
include test cases including the unit and integration test level details.
A sample Test Plan Outline along with their description is as shown below:
Test Plan Outline
1. BACKGROUND - This item summarizes the functions of the application system
and the tests to be performed.
2. INTRODUCTION

3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made
while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or
requirements), which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or
requirement which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy.
Simulation or Live execution, Etc. This section also mentions all the approaches,
which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement or itemized list of expected
output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to
completion?
Under what circumstances it may be resumed in the middle?
Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered?
We have to identify all possible deliverables to the customer and should be clearly
documented, the typical deliverables could be
Test Requirements document
Test Strategy
Test Plan
Test case design docs and RTMs
Test execution Results and Reporting
Metrics Report
Test summary report
11. TESTING TASKS - Functional tasks (e.g., equipment set up)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11?
What does the user do?
14. STAFFING & TRAINING


15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test pass such as Unit tests, Integration tests, System
Tests should be clearly mentioned along with the estimated efforts.
Test Plan Review:
After completion of the test plan preparation, the test lead conducts a review meeting for
completeness and correctness. In most companies, review meetings are conducted
through coverage analysis.
Coverage analysis is driven by a checklist covering the following:
SRS Based Coverage
BRS Based Coverage

Test Case Design:-
Designing good test cases is a complex art. The complexity comes from three sources:
Test cases help us discover information. Different types of tests are more effective
for different classes of information.
Test cases can be good in a variety of ways. No test case will be good in all of them.
People tend to create test cases according to certain testing styles, such as domain
testing or risk-based testing. Good domain tests are different from good risk-based
tests.
What's a test case?
A test case specifies the pretest state of the IUT (implementation under test) and its
environment, the test inputs or
conditions, and the expected result. The expected result specifies what the IUT should
produce from the test inputs. This specification includes messages generated by the IUT,
exceptions, returned values, and resultant state of the IUT and its environment. Test cases
may also specify initial and resulting conditions for other objects that constitute the IUT
and its environment.
Or
A test case is a description of what is to be tested, what data is to be given, and what
actions are to be done to check the actual result against the expected result.
Or
The process of designing test cases, including executing them as thought experiments,
will often identify bugs before the software has even been built. It is not uncommon to
find more bugs when designing tests than when executing tests.
Let us now see how to design test cases in a generic manner:
Understand the requirements document.
Break the requirements into smaller requirements (if it improves your testability).
For each Requirement, decide what technique you should use to derive the test
cases. For example, if you are testing a Login page, you need to write test cases
based on error guessing and also negative cases for handling failures.
Have a Traceability Matrix as follows:

Requirement No (in RD) | Requirement | Test Case No

What this Traceability Matrix provides you is the coverage of testing. Keep filling in the
Traceability Matrix as you complete writing test cases for each requirement.
What's a scenario?
A scenario is a hypothetical story, used to help a person think through a complex problem
or system.
Characteristics of a good test case:
TC should start with what you are testing.
TC should be independent.
TC should not contain If / Or statements.
TC should be uniform.
Every TC designed should be traced back to at least one requirement.
A TC should have a high probability of finding errors.
Issues to consider during test case design:
All test cases should be traceable.
There should not be many duplicate test cases.
Outdated test cases should be cleared off.
All test cases should be executable.

Test Case Techniques:-
Error Guessing
Boundary Value Analysis
Equivalence Class Partitioning

Error Guessing:-
Error guessing is the art of guessing where errors can be hidden. There are no specific
tools and techniques for this, but you can write test cases depending on the situation:
either when reading the functional documents or when you are testing and find an error
that you have not documented.
Error guessing is based mostly upon experience, with some assistance from other
techniques such as boundary value analysis. Based on experience, the test designer
guesses the types of errors that could occur in a particular type of software and designs
test cases to uncover them. For example, if any type of resource is allocated dynamically,
a good place to look for errors is in the de-allocation of resources. Are all resources
correctly de-allocated, or are some lost as the software executes?
Error guessing by an experienced engineer is probably the single most effective method
of designing tests, which uncover bugs. A well-placed error guess can show a bug, which
could easily be missed by many of the other test case design techniques presented in this
paper.
Conversely, in the wrong hands error guessing can be a waste of time. To make the
maximum use of available experience and to add some structure to this test case design
technique, it is a good idea to build a checklist of types of errors. This checklist can then
be used to help guess where errors may occur within a unit.The checklist should be
maintained with the benefit of experience gained in earlier unit tests, helping to improve
the overall effectiveness of error guessing.
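A minimal sketch of such a checklist turned into candidate test ideas follows; the
checklist entries and the unit name are illustrative assumptions.

# Sketch: an error-guessing checklist, grown with experience from earlier
# unit tests, turned into candidate test ideas. Entries are illustrative.
error_checklist = [
    "Dynamically allocated resources not de-allocated",
    "Empty or null input where a value is expected",
    "Division by zero in calculations",
    "Off-by-one mistakes around loop limits",
]

def guess_tests(unit_name):
    """Turn each checklist entry into a candidate test idea for a unit."""
    return [f"{unit_name}: probe for '{err}'" for err in error_checklist]

for idea in guess_tests("de_allocate_buffer"):
    print(idea)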
Boundary Value Analysis:-
Boundary Value Analysis (BVA) is a test data selection technique (a functional testing
technique) where the extreme values are chosen. Boundary values include maximum,
minimum, just inside/outside boundaries, typical values, and error values. The hope is
that, if a system works correctly for these special values, then it will work correctly for
all values in between.

Extends equivalence partitioning.
Test both sides of each boundary.
Look at output boundaries for test cases too.
Test min, min-1, max, max+1, and typical values.
BVA focuses on the boundary of the input space to identify test cases.
The rationale is that errors tend to occur near the extreme values of an input
variable.
There are two ways to generalize the BVA technique:
1. By the number of variables
   o For n variables, BVA yields 4n + 1 test cases.
2. By the kinds of ranges
   o Generalizing ranges depends on the nature or type of the variables.
     NextDate has a variable Month, and the range could be defined as
     {Jan, Feb, ..., Dec}: Min = Jan, Min + 1 = Feb, etc.
     Triangle had a declared range of {1, 20000}.
     Boolean variables have extreme values True and False, but there is no
     clear choice for the remaining three values.
Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the
limits (Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1).
2. Forces attention to exception handling.
3. For strongly typed languages, robustness testing results in run-time errors that
abort normal execution.
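A minimal sketch of the robustness-variant value selection described above follows; the
range is taken from the Triangle example in this section.

# Sketch: robustness BVA values for one bounded variable:
# min-1, min, min+1, nominal, max-1, max, max+1.
def bva_values(min_value, max_value):
    nominal = (min_value + max_value) // 2
    return [min_value - 1, min_value, min_value + 1,
            nominal,
            max_value - 1, max_value, max_value + 1]

# Example: the Triangle problem's declared range of {1, 20000}
print(bva_values(1, 20000))
# -> [0, 1, 2, 10000, 19999, 20000, 20001]

For n variables, taking the boundary values of one variable while holding the others at
their nominal values is what yields the 4n + 1 cases mentioned above.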

Limitations of Boundary Value Analysis:-
BVA works best when the program is a function of several independent variables that
represent bounded physical quantities.
Independent variables:
o NextDate test cases derived from BVA would be inadequate: focusing on the
boundary would not place emphasis on February or leap years.
o Dependencies exist between NextDate's Day, Month and Year.
o Test cases are derived without consideration of the function.
Physical quantities:
o An example of physical variables being tested is telephone numbers: what
faults might be revealed by numbers 000-0000, 000-0001, 555-5555,
999-9998, 999-9999?

Equivalence Partitioning:-
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
EP can be applied according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
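A minimal sketch of guideline 1 applied to a hypothetical range condition (say, an age
that must be 18-60) follows; the partition values and the unit under test are illustrative
assumptions.

# Sketch: equivalence partitioning for a range condition. One valid and two
# invalid classes, with one representative value drawn from each.
partitions = {
    "valid (18-60)":      39,   # any value inside the range
    "invalid (below 18)": 17,
    "invalid (above 60)": 61,
}

def accepts_age(age):
    """Hypothetical unit under test."""
    return 18 <= age <= 60

for name, representative in partitions.items():
    print(f"{name:22s} -> input {representative}, accepted: {accepts_age(representative)}")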
Comparison Testing:-
There are situations where independent versions of software are developed for critical
applications, even when only a single version will be used in the delivered computer-based
system. It is these independent versions which form the basis of a black box testing
technique called comparison testing or back-to-back testing.

Orthogonal Array Testing:-
The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing
pair-wise interactions by deriving a suitable small set of test cases (from a large number
of possibilities).

Characteristics of Good Scenarios
A scenario has five key characteristics. It is (a) a story that is (b) motivating, (c) credible,
(d) complex, and (e) easy to evaluate.
The primary objective of test case design is to derive a set of tests that have the highest
likelihood of discovering defects in the software. Test cases are designed based on the
analysis of requirements, use cases, and technical specifications, and they should be
developed in parallel with the software development effort.
A test case describes a set of actions to be performed and the results that are expected. A
test case should target specific functionality or aim to exercise a valid path through a use
case. This should include invalid user actions and illegal inputs that are not necessarily
listed in the use case. How a test case is described depends on several factors, e.g. the
number of test cases, the frequency with which they change, the level of automation
employed, the skill of the testers, the selected testing methodology, staff turnover, and
risk.
The test cases will have a generic format as below.
Test case ID - The test case id must be unique across the application
Test case description - The test case description must be very brief.
Test prerequisite - The test pre-requisite clearly describes what should be present
in the system before the test can be executed.
Test Inputs - The test input is nothing but the test data that is prepared to be fed to
the system.
Test steps - The test steps are the step-by-step instructions on how to carry out the
test.
Expected Results - The expected results are the ones that say what the system
must give as output or how the system must react based on the test steps.
Actual Results The actual results are the ones that say outputs of the action for
the given inputs or how the system reacts for the given inputs.
Pass/Fail - If the expected and actual results are the same, then the test is Pass; otherwise
Fail.
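As an illustration, one test case written in this generic format for the online-shopping
login example used in this section might look as follows; all values are illustrative
assumptions.

# Sketch: one test case record in the generic format above (values illustrative).
test_case = {
    "Test case ID":    "TC_LOGIN_001",
    "Description":     "Verify product details shown for valid Email id and Username",
    "Prerequisite":    "User is registered; application is at the login page",
    "Test inputs":     {"Email id": "user@example.com", "Username": "testuser"},
    "Test steps":      ["Enter Email id", "Enter Username", "Click Submit"],
    "Expected result": "Product details page is displayed",
    "Actual result":   "Product details page is displayed",
}
test_case["Status"] = ("Pass" if test_case["Expected result"] == test_case["Actual result"]
                       else "Fail")
print(test_case["Test case ID"], "->", test_case["Status"])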
The test cases are classified into positive and negative test cases. Positive test cases are
designed to prove that the system accepts the valid inputs and then process them
correctly. Suitable techniques to design the positive test cases are Specification derived
tests, Equivalence partitioning and State-transition testing. The negative test cases are
designed to prove that the system rejects invalid inputs and does not process them.
Suitable techniques to design the negative test cases are Error guessing, Boundary value
analysis, internal boundary value testing and state-transition testing. The test case details
must be very clearly specified, so that a new person can go through the test cases step by
step and is able to execute them. The test cases will be explained with specific examples in
the following section.
For example, consider an online shopping application. At the user interface level the
client requests the web server to display the product details by giving an Email id and
Username. The web server processes the request and will give the response. For this
application we will design the unit, integration and system test cases.

Web based application

Test Engineers can write testcases based on Requiremnets or use cases, the use cases
are described as below

Use Case
Each use case focuses on describing how to achieve a goal or task. For most software
projects this means that multiple, perhaps dozens, of use cases are needed to embrace the
scope of the new system. The degree of formality of a particular software project and the
stage of the project will influence the level of detail required in each use case.

Use cases should not be confused with the features of the system under consideration. A
use case may be related to one or more features, and a feature may be related to one or
more use cases.
A use case defines the interactions between external actors and the system under
consideration to accomplish a goal. An actor is a role that a person or thing plays when
interacting with the system. The same person using the system may be represented as two
different actors because they are playing different roles. For example, "Joe" could be
playing the role of a Customer when using an Automated Teller Machine to Withdraw
Cash, or playing the role of a Bank Teller when using the system to Restock the Cash
Drawer.
Use cases treat the system as a black box, and the interactions with the system, including
system responses, are perceived as from outside the system. This is a deliberate policy,
because it forces the author to focus on what the system must do, not how it is to be done,
and avoids the trap of making assumptions about how this functionality will be
accomplished.
Use cases may be described at the abstract level (business use case, sometimes called
essential use case), or at the system level (system use case). The difference between these
is the scope.
The business use case is described in technology free terminology which treats the
business process as a black box and describes the business process that is used by its
business actors (people or systems) to achieve their goals (e.g., manual payment
processing, expense report approval, manage corporate real estate.) The business use case
will describe a process that provides value to the business actor, and it describes what the
process does.
The system use cases are normally described at the sub process level (for example, create
voucher) and specify the data input and the expected data response. The system use case
will describe how the actor and the system interact. For this reason it is recommended
that a system use case specification begin with a verb (e.g., create voucher, select
payments, exclude payment, cancel voucher.)
A use case should:
Describe how the system shall be used by an actor to achieve a particular goal.
Have no implementation-specific language.
Be at the appropriate level of detail.
Not include detail regarding user interfaces and screens (this is done in user-interface
design).
Sample Use Case Diagrams
A use case is a set of scenarios that describe an interaction between a user and a
system. A use case diagram displays the relationship among actors and use cases. The
two main components of a use case diagram are use cases and actors.

An actor represents a user or another system that will interact with the system you are
modeling. A use case is an external view of the system that represents some action the
user might perform in order to complete a task.
When to Use: Use Cases Diagrams
Use cases are used in almost every project. They are helpful in exposing requirements
and planning the project. During the initial stage of a project most use cases should be
defined, but as the project continues more might become visible.
How to Draw: Use Cases Diagrams
Use cases are a relatively easy UML diagram to draw, but this is a very simplified
example. This example is only meant as an introduction to the UML and use cases.
Start by listing a sequence of steps a user might take in order to complete an action. For
example a user placing an order with a sales company might follow these steps.
1. Browse catalog and select items.
2. Call sales representative.
3. Supply shipping information.
4. Supply payment information.
5. Receive confirmation number from salesperson.

These steps would generate this simple use case diagram:

This example shows the customer as an actor because the customer is using the ordering
system. The diagram takes the simple steps listed above and shows them as actions the
customer might perform. The salesperson could also be included in this use case diagram
because the salesperson is also interacting with the ordering system.
From this simple diagram the requirements of the ordering system can easily be derived.
The system will need to be able to perform actions for all of the use cases listed. As the
project progresses other use cases might appear. The customer might have a need to add
an item to an order that has already been placed. This diagram can easily be expanded
until a complete description of the ordering system is derived capturing all of the
requirements that the system will need to perform.

Types of Test Cases:-
Unit Test Cases (UTC):-
Specifying the test cases for testing of individual units of software. These may form
sections of the Detailed Design Specifications.
These are very specific to a particular unit. The basic functionality of the unit is to be
understood based on the requirements and the design documents. Generally, Design
document will provide a lot of information about the functionality of a unit. The Design
document has to be referred before UTC is written, because it provides the actual
functionality of how the system must behave, for given inputs.
For example, in the online shopping application, if the user enters valid Email id and
Username values, let us assume that the design document says the system must display
the product details and should insert the Email id and Username into a database table. If
the user enters invalid values, the system will display an appropriate error message and
will not store them in the database.
Integration Test Cases:-
Before designing the integration test cases, the testers should go through the integration
test plan. It will give a complete idea of how to write integration test cases. The main aim
of integration test cases is to test multiple modules together. By executing these
test cases the user can find the errors in the interfaces between the modules.
For example, in online shopping, there will be Catalog and Administration module. In
catalog section the customer can track the list of products and can buy the products
online. In administration module the admin can enter the product name and information
related to it.

System Test Cases:-
The system test cases are meant to test the system as per the requirements, end-to-end.
This is basically to make sure that the application works as per SRS. In system test cases,
(generally in system testing itself), the testers are supposed to act as an end user. So,
system test cases normally do concentrate on the functionality of the system, inputs are
fed through the system and each and every check is performed using the system itself.
Normally, the verifications done by checking the database tables directly or running
programs manually are not encouraged in the system test.
The system test must focus on functional groups, rather than identifying the program
units. When it comes to system testing, it is assumed that the interfaces between the
modules are working fine (integration passed).
Ideally the system test cases are nothing but a union of the functionalities tested in unit
testing and integration testing. Instead of testing the system inputs and outputs through
the database or external programs, everything is tested through the system itself. For
example, in an online shopping application, the catalog and administration screens
(program units) would have been independently unit tested and the test results would be
verified through the database. In system testing, the tester will act as an end user and
hence checks the application through its output.
There are occasions where some or many of the integration and unit test cases are
repeated in system testing also, especially when the units were tested with test stubs and
not with other real modules; during system testing those cases will be performed again
with real modules and data.
Once the test plan for a level of testing has been written, the next stage of test design is to
specify a set of test cases or test paths for each item to be tested at that level. A
number of test cases will be identified for each item to be tested at each level of
testing. Each test case will specify how the implementation of a particular
requirement or design decision is to be tested and the criteria for success of the test.
Review Test Cases:
After preparation of test cases, the testing team reviews the test cases for
completeness and correctness.
During the review meeting they cover the below factors through coverage analysis:
SRS Based Coverage
BRS Based Coverage
Note: In most of the companies all three types of test cases will be prepared together.

Test Case Preparation Checklist:
This is used to ensure test cases have been prepared as per specifications. For any adverse
responses in the test case preparation review checklist, the test manager will assess the
impact and document it as an issue to the concerned parties for resolution. This can be
assessed using weekly status reports or emails.
Each item in the checklist is answered Yes or No, with comments:
Is the approved Test Plan available?
Are the resources identified to implement the test plan?
Are the baseline docs available?
Is domain knowledge being imparted to the team members who are working on the
application?
Have test cases been developed considering all requirements?
Have all the +ve as well as -ve test cases been identified?
Have all boundary test cases been covered?
Have test cases been written for GUI/Hyperlink testing for the Web application?
Have test cases been written to check Data Integrity?
Test Case Reviews:-
Peer to Peer Reviews
Team Lead Review
Team Manager Review
Test Data Set Up:
This is a very important activity, which should be in place before we start test
execution. Test data is the data with which the functionality will be tested; it is used as an
input with which we can exercise the functionality. For example, to test User ID edit box
functionality we need a valid User Id, or in order to access an ATM machine we need a
valid PIN; hence we need a set of system inputs which we can use for our testing.
At the same time the system processes the provided input and gives the output; we may
use a part of this output data as an actual value to validate a functionality by comparing
it with its expected value during test execution.
REVIEW PROCESS:
Take demo of the functionality

Go through the use case / function specification.
Review the test cases and find the gaps between
the test cases and the use cases.
Submit the review report.

Test Execution:-
Phases of Execution:-

Development team: SRS -> HLD + LLD -> Coding -> Unit Testing -> Integration
Testing -> Initial Build -> Bug fixing & resolving -> New build.

Testing team: Information gathering (study SRS/BRS) -> Prepare test plan (PM/TL) ->
Prepare test cases -> Review of test cases -> Sanity/Smoke testing on the initial build ->
Prepare test batches / test suites / test sets -> Select a test batch and start execution ->
If any serious defects are found, suspend that batch and send the defects to the
development team for bug fixing and resolving; regression-test the new build ->
Otherwise: test execution closure (TL) -> Pre-acceptance testing -> Acceptance testing ->
Sign off & release.
Sanity / Smoke Testing:-
It is the first testing technique applied by the testing team. After getting the initial build
from the development team, the testing team verifies whether all screens are opening or
not, whether all objects are responding or not, etc. This is called sanity testing.
The main objective of sanity testing is to ensure that the build is suitable for conducting
the next level of testing or not.
After conducting sanity testing, the team reviews whether the build is acceptable or not. If
they are satisfied with the build they will accept it; otherwise they reject the build back to
the development team.
Note: Sanity testing will be repeated a number of times until the testing team gets a stable
or suitable build.
Preparing Test Batches or Test Suites:-
After completion of sanity testing, i.e. after getting a stable build from the development
team, the testing team identifies all dependencies between test cases and groups them into
test batches, which are then executed as batches.
Test Environment Preparation:
After preparation of test batches, the testing team prepares the necessary environment for
conducting testing as follows:

Understanding the end user's application environment.

Selecting all browser types, if it is a web application.
Selection of the operating system.
Selecting all necessary documents to execute test cases, e.g. the DDD (Detailed
Design Document), etc.
Automated tool selection (for main functionalities or modules).
Test data gathering and setup.

Test Bed Creation:

Partition of the hard disk.
Whether our application runs on all customer-expected platforms or not. Platforms
means the required system software to run our application, such as the operating
system, compilers, interpreters, browsers, etc.
Test Case Execution:- After preparing the test environment and test bed, the testing team
executes all test cases, either manually or through automated tools.
During the execution of test cases the possible outcomes are:
Passed: if the expected and actual values are the same.
Failed: if the expected and actual values are different.
Blocked: if the parent test case or module has failed, we block the execution of the child
test cases.
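A minimal sketch of how these three outcomes can be determined follows; the example
strings are illustrative.

# Sketch: determining the execution outcome of a test case.
def outcome(expected, actual, parent_failed=False):
    if parent_failed:
        return "Blocked"   # parent test case or module failed
    return "Passed" if expected == actual else "Failed"

print(outcome("Welcome page shown", "Welcome page shown"))       # Passed
print(outcome("Welcome page shown", "Error 500"))                # Failed
print(outcome("Order placed", "not run", parent_failed=True))    # Blocked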
Regression Testing:- After finding differences between expected values and actual
values, the testing team reports all these differences as defects to the development team.
The development team works on resolving the defects, and the modified build is then
released to the testing team.
On the modified build the testing team verifies whether all defects identified in earlier
builds are resolved or not. This testing is called regression testing.
The main objectives of regression testing are:
To check whether new functionalities have been incorporated correctly without impacting
the existing functionalities. The defects need to be communicated and assigned to
developers who can resolve them. Once the defects are resolved, fixes should be re-tested,
and a determination made to check that the fixes did not create problems elsewhere.
Types of Regression Testing:-
Regression testing is of 2 types:
1. Selective regression testing
2. Final regression testing


TEST METRICS:-
A metric is a mathematical number that shows a relationship between two variables.
Software metrics are measures that are used to quantify software development resources
and/or the software development process. This includes items that are directly
measurable, such as lines of code, as well as items that are calculated from
measurements, such as earned value.
Metrics specific to testing include data regarding testing, defect tracking, and software
performance.
Metric:- A quantitative measure of the degree to which a system, component, or process
possesses a given attribute.
Process metric:- A metric used to measure characteristics of the methods, techniques,
and tools employed in developing, implementing, and maintaining the software system.
Product Metric:- A metric used to measure the characteristics of the documentation and
code.
Software quality metric:- A function whose inputs are software data and whose output
is a single numerical value that can be interpreted as the degree to which software
possesses a given attribute that affects its quality (IEEE Std 1061-1992).
Testing Data Used for Metrics:-
Testers are typically responsible for reporting their test status at regular intervals. The
following measurements generated during testing are applicable.

Total number of tests


Number of tests executed to date
Number of tests executed successfully to date

Data concerning software defects include:

Total number of defects corrected in each activity


Total number of defects detected in each activity
Average duration between defect detection and defect correction
Average effort to correct a defect
Total number of defects remaining at delivery

Software performance data is usually generated during system testing, once the software
has been integrated and functional testing is complete.

Average CPU utilization


Average memory utilization
Measured I/O transaction rate
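As a small worked illustration of two of the metrics listed below, the following sketch
computes them from sample figures; all numbers are illustrative assumptions.

# Sketch: computing two metrics from the list below (figures assumed).
test_cost = 40000.0                 # cost of the testing effort
defects_found_in_testing = 160
defects_found_in_production = 8

# Cost to locate defect = cost of testing / defects located in testing
cost_per_defect = test_cost / defects_found_in_testing

# Defects uncovered in testing = defects found by testing / total defects
defect_removal = defects_found_in_testing / (defects_found_in_testing
                                             + defects_found_in_production)

print(f"Cost to locate a defect     : ${cost_per_defect:.2f}")
print(f"Defects uncovered in testing: {defect_removal:.1%}")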

Examples of metrics and their uses are as follows:


3. Number of Tests (extent of testing): number of tests versus size of the system
tested. Identifies the number of tests required to evaluate a unit of information
technology work.
4. Paths Tested (extent of testing): number of paths tested versus total number of
logical paths executed during the test process.
5. Acceptance Criteria Tested (extent of testing): acceptance criteria verified versus
total acceptance criteria. Identifies the number of user-identified criteria that were
evaluated during the test process.
6. Test Cost (resources consumed in testing): test cost versus total system cost.
Identifies the amount of resources used in the development or maintenance process
that is allocated to testing.
7. Cost to Locate Defect (resources consumed in testing): cost of testing versus the
number of defects located in testing. Shows the cost to locate a defect.
8. Achieving Budget (resources consumed in testing): anticipated cost of testing
versus the actual cost of testing. Determines the effectiveness of using test dollars.
9. Detected Production Errors (effectiveness of testing): number of errors detected in
production versus application system size. Determines the effectiveness of system
testing in detecting errors in the application prior to it being placed into production.
10. Defects Uncovered in Testing (effectiveness of testing): defects located by testing
versus total system defects. Shows the percentage of defects that were detected as a
result of testing.
12. System Complaints (effectiveness of testing): system complaints versus number of
transactions processed. Shows the effectiveness of testing in reducing third-party
complaints.
13. Test Automation (effectiveness of testing): cost of manual test effort versus total
test cost. Shows the percentage of testing performed manually and that performed
automatically.
14. Requirements Phase Testing Effectiveness (effectiveness of testing): requirements
test cost versus number of errors detected during the requirements phase. Shows the
value returned for testing during the requirements phase.
15. Design Phase Testing Effectiveness (effectiveness of testing): design test cost
versus number of errors detected during the design phase. Shows the value returned
for testing during the design phase.
16. Program Phase Testing Effectiveness (effectiveness of testing): program test cost
versus number of errors detected during the program phase. Shows the value
returned for testing during the program phase.
17. Test Phase Testing Effectiveness (effectiveness of testing): test cost versus number
of errors detected during the test phase. Shows the value returned for testing during
the test phase.
18. Installation Phase Testing Effectiveness (effectiveness of testing): installation test
cost versus number of errors detected during installation. Shows the value returned
for testing during installation.
19. Maintenance Phase Testing Effectiveness (effectiveness of testing): maintenance
test cost versus number of errors detected during the maintenance phase. Shows the
value returned for testing during the maintenance phase.
20. Defects Uncovered in Test (effectiveness of testing): defects uncovered versus size
of the system. Shows the number of defects uncovered through testing based on a
unit of work.
21. Untested Change Problems (effectiveness of testing): number of untested changes
versus problems attributable to those changes. Shows the effect of not testing
system changes.
22. Tested Change Problems (effectiveness of testing): number of tested changes
versus problems attributable to those changes. Shows the effect of testing system
changes.
23. Scale of Ten / Customer Satisfaction (assessment of testing): assessment of testing
rated on a scale of ten, on which 1 is poor and 10 is outstanding. Shows people's
assessment of the effectiveness of testing.
24. Code Coverage (assessment of testing): identifies which program branches or
statements are executed by a set of test cases.
25. Requirement Coverage (assessment of testing): monitoring and reporting on the
number of requirements exercised and/or tested.

Defect Management:-
Defects determine the effectiveness of the testing we do; if there were no defects, we
would not have a job. There are two points worth considering here: either the developer is
so strong that no defects arise, or the test engineer is weak. In many situations, the second
proves correct, which implies that we lack the knack. In this section, let us understand
defects.
What is a Defect?
For a test engineer, a defect is the following:
Anything that causes user dissatisfaction
Incorrect output
Software does not do what it intended to do.
Bug / Defect / Error:

Software is said to have a bug if its features deviate from the specifications.
Software is said to have a defect if it has unwanted side effects.
Software is said to have an error if it gives incorrect output.

But for a test engineer all are the same, as the above distinction is only for the purpose of
documentation or is indicative.
Categories of Defects:-
All software defects can be broadly categorized into the below-mentioned types:
Errors of commission: something wrong is done

Errors of omission: something left out by accident


Errors of clarity and ambiguity: different interpretations
Errors of speed and capacity
However, the above is a broad categorization; below we have for you a host of varied
types of defects that can be identified in different software applications:
1. Conceptual bugs / Design bugs
2. Coding bugs
3. Integration bugs
4. User Interface Errors
5. Functionality
6. Communication
7. Command Structure
8. Missing Commands
9. Performance
10. Output
11. Error Handling Errors
12. Boundary-Related Errors
13. Calculation Errors
14. Initial and Later States
15. Control Flow Errors
16. Errors in Handling Data
17. Race Conditions Errors
18. Load Conditions Errors
19. Hardware Errors
20. Source and Version Control Errors
21. Documentation Errors
22. Testing Errors
Life Cycle of a Defect:-
The life cycle of a defect flows as follows:
Submit Defect -> Review, Verify and Qualify -> Assign -> Fix/Change -> Validate -> Close
At the review stage a defect may instead be Deferred, Cancelled, or marked as Duplicate,
Rejected, or needing More Info; a defect that fails validation is updated and re-assigned.

Defect Discovery vs. Finding a Defect

If technology cannot guarantee that defects will not be created, and this is certainly the
case in software development today, then the next best thing is to find defects quickly,
before the cost to fix them is great. A defect is considered to have been discovered when
the defect has been formally brought to the attention of the developers, and the
developers acknowledge that the defect is valid. Since it is important to minimize the
time between defect origination and defect discovery, strategies that not only uncover the
defect but also facilitate the reporting and developer acknowledgment of the defect can
be very important.
Defect Discovery
The steps involved in defect discovery are as follows:
Find defect.
Report defect.
Acknowledge defect.
These steps are discussed in more detail below.
Find Defect:Defects are found either by preplanned activities specifically intended to uncover defects
(e.g., quality control activities such as inspections; testing etc) or, in effect, by accident
(e.g., users in production).
Techniques to find defects can be divided into three categories.

68

Manual Testing
Q

Testing Tools

Mind

Static techniques. A deliverable is examined (manually or by a tool) for


defects. Reviews, walkthroughs, and inspections, are examples of static
techniques.
Dynamic techniques. A deliverable is used to discover defects. Testing is an
example of a dynamic technique.
Operational techniques. An operational system produces a deliverable
containing a defect found by users, customers or control personnel, i.e., the
defect is found as a result of a failure.

Report Defect:Once found, defects must be brought to the attention of the developers. When the defect
is found by a technique specially designed to find defects, such as those mentioned
above, this is a relative straight forward process and is almost as simple as writing a
problem report. Techniques that facilitate the reporting of the defect may significantly
shorten the defect discovery time. As software becomes more complex and more widely
used, these techniques become more valuable. Those techniques include computer
forums electronic mail, help desks, etc.
It should also be noted that there are some human factors / cultural involved with the
defect discovery process. When a defect is initially uncovered. May be very unclear
whether it is a defect, a change, user error, or misunderstanding. Developers may resist
calling something a defect because that implies bad work and may not reflect well on
the development team. Users may resist calling something a change because that
implies that the developers can charge them more money. Some organizations have
skirted this issue initially labeling everything by a different name e.g., incidents or
issues from a defect management perspective, what they are called is not an important
issue. What is important is that the defect be quickly brought to the developers attention
and formally controlled.
Defect Naming:It is important that defects be named early in the defect management process. This will
enable individuals to better articulate the problem they are encountering. This will
eliminate vocabulary such as defect, bug, and problem, and articulating more specifically
what the defect is.
A three-level framework for naming defects is recommended as follows:

Level 1 Name of the defect. The naming of specific defects should be


performed as follows:

1. Gather a representative sample of defects that have been identified and documented in the organization. This list of defects can come from areas such as the help desk, quality assurance, problem management, and project teams.
2. Identify the major developmental phases and activities. Initially these should be as broad as possible, with a goal of no more than 20 phases/activities for an organization.
3. The defects identified should then be sorted by these phases/activities (sorting can be by phase found, by phase in which the defect was created, or both).
Level 2 - Developmental phase or activity in which the defect was found.
Level 3 - The category of defect. The following defect categories are suggested for each phase:
1. Missing
2. Inaccurate
3. Incomplete
4. Inconsistent
Defect Naming Example:
If an incorrect requirement was found, it would be named as follows:
Level 1 - Incorrect requirement
Level 2 - Requirement phase
Level 3 - Inaccurate
Note that levels 2 and 3 are qualifiers to the level 1 name.
Defect Resolution:Once the developers have acknowledged that a reported defect is a valid defect, the
defect resolution process begins. The steps involved in defect resolution (see figure 9.5)
are as follows:
Prioritize fix
Schedule fix
Fix defect
Report resolution
Defect Resolution
Prioritize Fix

Schedule Fix

Fix Defect

Report
Resolution
Defect Reporting:Defects are recorded for four major purposes


To correct the defect
To report status of the application
To gather statistics used to develop defect expectations in future applications
To improve the software development process.
Most project teams utilize some type of tool to support the defect tracking process.
This tool could be as simple as a white board or a table created and maintained in a
word processor, or one of the more robust tools available today on the market, such as
Mercurys Test Director, Tools marketed for this purpose usually come with some
number of customizable fields for tracking project specific data in addition to the
basics. They also provide advanced features such as standard and ad-hoc reporting, email notification to developers and/or testers when a problem is assigned to them, and
graphing capabilities.
At a minimum, the tool selected should support the recording and communication of all
significant information about a defect. For example, a defect log could include:

Defect ID number
Descriptive defect name and type
Source of defect - test case or other source
Defect severity
Defect priority
Defect status (e.g. open, fixed, closed, user error, design, and so on) - more robust tools provide a status history for the defect
Date and time tracking for either the most recent status change, or for each change in the status history
Detailed description, including the steps necessary to reproduce the defect
Component or program where the defect was found
Screen prints, logs, etc. that will aid the developer in the resolution process
Stage of origination
Person assigned to research and/or correct the defect
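As an illustration only, here is a minimal sketch of what such a defect-log record might look like as a data structure; the field names are illustrative and are not taken from any particular tracking tool:

    # A minimal sketch of a defect-log record; field names are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Tuple

    @dataclass
    class Defect:
        defect_id: int
        name: str                     # descriptive defect name and type
        source: str                   # test case or other source
        severity: str                 # e.g. "critical", "major", "minor"
        priority: str                 # e.g. "high", "medium", "low"
        status: str = "open"          # open, fixed, closed, user error, design, ...
        component: str = ""           # component or program where defect was found
        description: str = ""         # steps necessary to reproduce the defect
        stage_of_origin: str = ""     # e.g. "requirements", "design", "coding"
        assigned_to: str = ""         # person researching / correcting the defect
        history: List[Tuple[datetime, str]] = field(default_factory=list)

        def set_status(self, new_status: str) -> None:
            """Record each status change so the defect keeps a status history."""
            self.status = new_status
            self.history.append((datetime.now(), new_status))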

Fault is a condition that causes the software to fail to perform its required function. Error refers to the difference between actual output and expected output. Failure is the inability of a system or component to perform a required function according to its specification.
Failure is an event; fault is a state of the software, caused by an error.


Defect Metrics:Defect Density: (No. Of Defects Reported by SQA + No. Defects Reported By Peer
Review)/Actual Size.
The Size can be in KLOC, SLOC, or Function Points. The method used in the
Organization to measure the size of the Software Product.
The SQA is considered to be the part of the Software testing team.
Test effectiveness:
t / (t+Uat) where t=total no. of defects reported during testing
UAT = total no. of defects reported during User acceptance testing
User Acceptance Testing is generally carried out using the Acceptance Test Criteria
according to the Acceptance Test Plan.
Defect Removal Efficiency:(Total No Of Defects Removed /Total No. Of Defects Injected)*100 at various stages of
SDLC
Description:This metric will indicate the effectiveness of the defect identification and removal in
stages for a given project Formula

Requirements: DRE = [(Requirement defects corrected during Requirements


phase) / (Requirement defects injected during Requirements phase)] * 100
Design: DRE = [(Design defects corrected during Design phase) / (Defects
identified during Requirements phase + Defects injected during Design
phase)] * 100
Code: DRE = [(Code defects corrected during Coding phase) / (Defects
identified during Requirements phase + Defects identified during Design
phase + Defects injected during coding phase)] * 100
Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total
defects detected at all phases before and after delivery)] * 100
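As an illustration, the three metrics above can be expressed as small functions; Python is used here purely for illustration and the sample numbers are invented:

    # Sketch: the three defect metrics expressed as functions.
    def defect_density(defects_sqa: int, defects_peer_review: int, size_kloc: float) -> float:
        """(Defects reported by SQA + defects from peer review) / actual size."""
        return (defects_sqa + defects_peer_review) / size_kloc

    def test_effectiveness(t: int, uat: int) -> float:
        """t / (t + UAT): defects found in testing vs. user acceptance testing."""
        return t / (t + uat)

    def dre(removed: int, injected: int) -> float:
        """Defect Removal Efficiency, as a percentage."""
        return removed / injected * 100

    print(defect_density(30, 10, 8.0))   # 5.0 defects per KLOC
    print(test_effectiveness(90, 10))    # 0.9
    print(dre(45, 50))                   # 90.0 (45 of 50 injected defects removed)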

When to stop Testing?
"When to stop testing" is one of the most difficult questions for a test engineer. The following are a few of the common test stop criteria:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small (for example, the defect density has fallen to a very low value).
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under an acceptable limit.
6. Deadlines (release deadlines, testing deadlines etc.) are reached.
7. Test case execution is completed with a certain percentage passed.
8. The bug rate falls below a certain level.
9. The beta or alpha testing period ends.
10. The maximum number of test cases has been successfully executed.
11. A minimum number of defects has been uncovered (e.g. 16 per 1000 statements).
12. Statement coverage is achieved.
13. Testing becomes uneconomical.
14. A reliability model indicates sufficient reliability.
Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X% of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, risk can be deduced by simply:
Measuring test coverage.
Counting the number of test cycles.
Counting the number of high priority bugs.

Testing limitations?
We can only test against system requirements.
o May not detect errors in the requirements.
o Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
Exhaustive (total) testing is impossible in the present scenario.
Time and budget constraints normally require very careful planning of the testing effort.
A compromise must be made between thoroughness and budget.
Test results are used to make business decisions for release dates.

Alpha Testing:A software prototype stage when the software is first available for run. Here the software
has the core functionalities in it but complete functionality is not aimed at. It would be
able to accept inputs and give outputs. Usually the most used functionalities (parts of
code) are developed more. The test is conducted at the developers site only.
73

Manual Testing
Q

Testing Tools

Mind

In a software development cycle, depending on the functionalities the number of alpha


phases required is laid down in the project plan itself.
During this, the testing is not a through one, since only the prototype of the software is
available. Basic installation un-installation tests, the completed core functionalities are
tested. The functionality complete area of the Alpha stage is got from the project plan
document.
Aim: is to identify any serious errors
to judge if the indented functionalities are implemented
to provide to the customer the feel of the software
A through understanding of the product is done now. During this phase, the test plan and
test cases for the beta phase (the next stage) is created. The errors reported are
documented internally for the testers and developers reference. No issues are usually
reported and recorded in any of the defect management/bug trackers
What is Agile testing?
Agile testers treat the developers as their customer and follow the agile manifesto. The context-driven testing principles act as a set of principles for the agile tester. It can be treated as the testing methodology followed by the testing team when the entire project follows agile methodologies.
Acceptance Testing:
Make up user experiences or user stories, which are short descriptions of the features to be coded.
Acceptance tests verify the completion of user stories.
Ideally they are written before coding.
With all these features and processes included, we can define a practice for Agile testing encompassing the following features:
Conversational Test Creation
Coaching Tests
Providing Test Interfaces
Exploratory Learning
Conversational Test Creation
Test case writing should be a collaborative activity including the majority of the entire team. As the customers will be busy, we should have someone representing the customer.
Defining tests is a key activity that should include programmers and customer representatives.
Don't do it alone.

Client Server Testing - Tests to examine the network communication and the interplay between software that resides on the client and on the server. Checks are run on the client, on the server, and on both:
Application function tests
Server tests
Database tests
Transaction tests
Network communication tests
Website Testing - Testing which goes beyond the basic functional and system testing of the client/server world to include tests for availability, performance/load, scalability, usability, compatibility and links.
Optimize testing with a thorough risk analysis of the site to identify and prioritize key areas and testing tasks.
Consider interactions between HTML pages, TCP/IP communications, internet connections, firewalls, and applications that run on the server side.
E-application Testing:
Testing ensures the reliability, accuracy and performance of web-based applications, including web services.
Simulate a live e-application environment if required.
Conduct tests across heterogeneous environments and across all the application tiers.
Alpha and Beta Testing:
Alpha Testing: Testing an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by the end-user or others at the developer's site.
Beta Testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by the end-user or others at the client's place.
Product Testing
Checks for all requirements across all stages of product development.
Encompasses additional tests like compatibility, user acceptance, maintainability, installation, serviceability, usability etc.
Comparison / Back-to-Back Testing - There are some situations in which the reliability of the software is absolutely critical, e.g. aircraft avionics, automobile braking systems, nuclear power plant control etc. In such applications redundant hardware and software are often used to minimize the possibility of errors.
Test Engineer vs. Quality Assurance Engineer:
Test Engineer: Has a "test to break" attitude and the ability to take the point of view of the customer; a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a co-operative relationship with developers, and the ability to communicate with both technical and non-technical people is useful.
QA Engineer: Should have the same qualities as a good test engineer. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. The ability to find problems as well as to see what is missing is important for inspections and reviews.

Web Application Testing
The recent evolution of the Internet
The Internet has grown, and is growing, rapidly. Between 1980 and 1994 the growth rate was close to one hundred percent per year and sometimes even over that limit. The Internet is still growing fast, and today the growth rate is about sixty percent per year. Many think of the World Wide Web as the Internet. Today when you are on the net you are likely to be visiting a WWW site, but WWW was not released until 1991.
Together with the fact that the web now reaches almost all possible target groups, this makes it inevitable that new businesses enter the web. New businesses demand new features. All this adds to the growing complexity of the sites on the web. Not long ago most sites did not offer much interactivity. Today the possibilities for interactivity are endless. The development of new techniques for the Internet makes it easier every year to turn your ideas for your site into reality. Special features are performed in your browser, and live events are broadcast over the world through the web.
The possibility of real-time publishing in many cases sets the pace for web site development. With the growing complexity and demands for rapid deployment, web site development tends to lack testing effort even when the need for it, in fact, increases.
Classification of web sites
When publishing a web site, the construction and design are of course based upon what you hope to achieve with the site. Depending on this, the site may be classified as a certain type of site. There are a number of different types of sites published on the web. These sites have been categorized by a number of authors. We have chosen two different classifications that we believe show clearly the different angles from which to view the sites.
The first classification is based on the different business purposes of a commercial web site. These purposes fall into three categories:
Promotion of products and services
Provision of data and information
Processing of business transactions
Promotion is information about products and services that are part of the company's business, whereas provision is information about, for instance, an environmental care program the company may sponsor. Processing refers to regular business transactions. Although this classification is meant to show the purposes of one commercial web site, we believe it can also be used to categorize the main purpose of a web site. For instance, a company's on-line catalogue would be a promotional site, a private person's homepage may be considered a provisional site and, of course, a web site for banking services may be considered a site for processing.
Another classification is based on the degree of interactivity the web site offers.
Static Web Sites
The most basic web site: presentation of HTML documents. The only interactivity offered is the choice of page by clicking links.
Static Sites with Form-Based Interactivity
Forms are used to collect information from the user, including comments or requests for information. The main purpose is document delivery, with limited emphasis on data collection mechanisms.
Sites with Dynamic Data Access
The web site is used as a front-end for accessing a database. Via the web page users can search a catalogue or perform queries on the content of a database. The results of the search are displayed as HTML documents.
Dynamically Generated Sites
The wish to provide customized pages for every user creates a need to take a step away from the static web site. The page presented may be static, providing no interactivity, but the way it was created has similarities with the way screens are created in software applications.
Web-Based Software Applications
Web sites that are part of the business process often have more in common with other client/server applications than with static web sites. This could be an inventory tracking system or a sales force automation tool.

This classification is derived from the need for methodology during the development of web sites. The classification is useful also for the testing process, not only for the need for methodology but also for how extensive the testing must be. For instance, for a static web site the demands may be, besides that the information is correct and up-to-date, that the source code is correct and that the load capacity of the server is great enough, i.e. the server can handle a large enough number of visitors at the same time. There is no need to go much deeper in this case. For the other extreme, web-based software applications, the requirements are much greater, where, for instance, security is of great importance.
These two classifications are two major ways of showing distinctions between web sites. Together they provide information about interactivity and purpose, which gives us an idea of the site's complexity.
Web applications
The title of this paper, Web Application Testing, creates a need to define what we mean by web applications. Are we talking only about high-complexity e-commerce web sites? Above we introduced two different authors' classifications of web sites. We find it interesting that Powell (1998) in his definitions does not use the word application until a higher degree of interactivity is offered; instead he uses the word site for the first, simpler, categories. Regardless of whether this is indeed intended, we choose to define a web application as any web-based site or application available on the Internet or on an intranet, whether it be a static promotion site or a highly interactive site for banking services.
User Issues
When we, ordinary web surfers, use the Internet, what is it that we experience as problems? Which sites make us leave and move on to another? What characteristics shall a site have in order to make users want to stay? It is hard, if not impossible, to give an answer of general character to these questions. What makes it so difficult is the diversity of users. Since visitors to a site may come from all corners of the world, they differ greatly in how they experience a site as satisfying. But regardless of which culture they are from or what kind of site is visited, some things are never appreciated. For example, when a page takes too long to load, many users get impatient and move on to another site or page. The same applies if a site is too difficult to navigate. Overall, users tend not to tolerate certain problems when out surfing the web. If we have trouble understanding the layout or if it takes too much effort to find the information we are seeking, the site is experienced as complex and we will start looking elsewhere for what we seek. Many sites today present animations or other graphical effects, which many users experience as positive. But if you are a visitor searching for specific information, you seldom appreciate waiting in order to obtain what you seek. Today, though, there is almost always an option to skip the feature, which is positive.
Another problem that always irritates when on the web is broken links. We don't think that there is anyone with some web-browsing experience who hasn't encountered this. It is an ever-returning error that will continue to haunt the web for as long as pages are moved or taken off the Internet. These relatively small errors shouldn't be too difficult to remove, and there is therefore no excuse to have broken links on a site for more than a short period of time.
Below is a presentation of the main areas to test when developing and publishing a web
site. It is a checklist that presents the most important features to test under each area and
how to perform them.
Functionality testing
1. Links
Links are perhaps the main feature of web sites. They constitute the means of transport between pages and guide the user to certain addresses without the user knowing the actual address itself. Linkage testing is divided into three sub-areas. First, check that the link takes you to the page it said it would. Second, check that the link isn't broken, i.e. that the page you're linking to exists. Third, ensure that you have no orphan pages on your site. An orphan page is a page that has no links to it, and may therefore only be reached if you know the correct URL. Remember that to reduce redundant testing, there is no need to test a link more than once to a specific page; if it appears on several pages, it needs only to be tested once.
This kind of test can preferably be automated, and several tools provide solutions for this; a minimal automated check is sketched after the summary below. Link testing should be done during integration testing, when connections between pages subsist.
Summary:
Verify that you end up at the designated page
Verify that the link isn't broken
Locate orphan pages if present
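The following is a minimal link-check sketch using only the Python standard library; the start URL is a placeholder, and a real project would normally use a dedicated link-checking tool:

    # Minimal link-check sketch; the start URL is a placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url: str) -> None:
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        for link in set(collector.links):      # test each distinct link only once
            target = urljoin(page_url, link)
            try:
                urlopen(target)
            except (HTTPError, URLError) as err:
                print(f"Broken link: {target} ({err})")

    check_links("http://www.example.com/")     # placeholder URL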
2. Forms
Forms are used to submit information from the user to the host, which in turn gets processed and acted upon in some way. Testing the integrity of the submitting operation should be done in order to verify that the information hits the server in correct form. If default values are used, verify the correctness of the value. If the forms are designed to only accept certain values, this should also be tested; for example, if only certain characters should be accepted, try to override this when testing. These controls can be done both on the client side and the server side, depending on how the application is designed, for example using scripting languages such as JScript, JavaScript or VBScript. Check that invalid inputs are detected and handled; a validation sketch follows the summary below.
Summary:
Information hits the server in correct form
Acceptance of invalid input
Handling of wrong input (both client and server side)
Optional versus mandatory fields
Input longer than field allows
Radio buttons
Default values
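A hedged sketch of such server-side validation follows; the field names and rules are illustrative and not taken from any particular application:

    # Server-side input validation sketch; field rules are illustrative.
    import re

    def validate_form(data: dict) -> list:
        """Return a list of validation errors; an empty list means valid input."""
        errors = []
        # Mandatory field check
        if not data.get("name"):
            errors.append("name is mandatory")
        # Only certain characters accepted
        if data.get("name") and not re.fullmatch(r"[A-Za-z .'-]+", data["name"]):
            errors.append("name contains invalid characters")
        # Input longer than the field allows
        if len(data.get("comment", "")) > 500:
            errors.append("comment exceeds maximum length")
        # Default value applied when the optional field is omitted
        data.setdefault("country", "US")
        return errors

    print(validate_form({"name": "Ada Lovelace"}))                 # []
    print(validate_form({"name": "<script>alert(1)</script>"}))    # invalid characters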
3. Cookies
Cookies are often used to store information about the user and his actions on a particular site. When a user accesses a site that uses cookies, the web server sends information about the user and stores it on the client computer in the form of a cookie. Cookies can be used to create more dynamic and custom-made pages, or to store, for example, login info. If you have designed your site to use cookies, they need to be checked. Verify that the information that is to be retrieved is there. If login information is stored in cookies, check for correct encryption of it. If your application requires cookies, how does it respond to users that have disabled their use? Does it still function, or will the user get notified of the current situation? How will temporary cookies be handled? What will happen when cookies expire? Depending on what cookies are used for, one should examine the possibilities for other solutions. A small sketch follows the summary below.
Summary:
Encryption of e.g. login info
Users denying or accepting cookies
Temporary and expired cookies
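A minimal sketch of a cookie check with the Python standard library follows; the URL is a placeholder standing in for a page that sets cookies, and the assertion about login info is illustrative:

    # Sketch of a cookie check; the URL is a placeholder.
    import http.cookiejar
    import urllib.request

    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.open("http://www.example.com/")     # placeholder for a cookie-setting page

    for cookie in jar:
        # Verify the information that is supposed to be stored is actually there,
        # and that sensitive values (e.g. login info) are not stored in plain text.
        print(cookie.name, cookie.value, "expires:", cookie.expires)
        assert "password" not in cookie.name.lower(), "login info stored unencrypted?"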
4. Web Indexing
There are a number of different techniques and algorithms used by different search engines to search the Internet. Depending on how the site is designed, using meta tags, frames, HTML syntax, dynamically created pages, passwords or different languages, your site will be searchable in different ways. A meta-tag check is sketched after the summary below.
Summary:
Meta tags
Frames
HTML syntax
Passwords
Dynamically created pages
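As a sketch, the presence of indexing-related meta tags can be verified with a small parser; the required tag names and the sample page are illustrative:

    # Sketch: verify that indexing-related meta tags are present on a page.
    from html.parser import HTMLParser

    class MetaChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.meta = {}
        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                d = dict(attrs)
                if "name" in d:
                    self.meta[d["name"].lower()] = d.get("content", "")

    page = '<html><head><meta name="description" content="..."></head></html>'
    checker = MetaChecker()
    checker.feed(page)
    for required in ("description", "keywords"):   # illustrative tag names
        if required not in checker.meta:
            print(f"Missing meta tag: {required}")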

5. Programming Language
Differences in web programming language versions or specifications can cause serious problems on both the client and the server side. For example, which HTML specification will be used (for example 3.2 or 4.0), and how strictly? When HTML is generated dynamically it is important to know how it is generated. When development is done in a distributed environment where developers, for instance, are geographically separated, this area becomes increasingly important. Make sure that specifications are well spread throughout the development organization to avoid future problems.
Besides HTML, specifications on e.g. Java, JavaScript, ActiveX, VBScript or Perl need to be verified.
There are several tools on the market for validating different programming languages. For languages that need compiling, e.g. C++, this kind of check is often done by the compiler. Since this kind of testing is done by static analysis tools and needs no actual running of the code, these tests can be done as early as possible in the development process. Language validation tools can be found in compilers, online as well as for download, free or by payment.
Summary:
Language specifications
Language syntax (HTML, C++, Java, scripting languages, SQL etc.)
6. Dynamic Interface Components
Web pages are not just presented in static HTML anymore. Demands for more dynamic features, custom-made sites and high interactivity have made the Internet a more vivid place than before. Dynamic interface components reside and operate both on the server and the client side of the web, depending on the application. The most important include Java applets, Java servlets, ActiveX controls, JavaScript, VBScript, CGI, ASP, CSS and third-party plug-ins (QuickTime, ShockWave or RealPlayer). The issue here is to test and verify the function of the components, not compatibility issues. An example of what to test can be a Java applet constructing and displaying a chart of company statistics, where the information first has to be retrieved and then interpreted and displayed on the screen. Since server-side components don't have a user interface, event logging (logfiles) can be used to record events by applications on the server side in order to determine functionality; a sketch of this follows the summary below.
Summary:
Do client side components (applets, ActiveX controls, JavaScript, CSS etc.) function as intended (i.e. do the components perform the right tasks in a correct way)
User disabling features (Java applets, ActiveX, scripts etc.)
Do server side components (ASP, Java servlets, server-side scripting etc.) function as intended (i.e. do the components perform the right tasks in a correct way)
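Purely as a sketch, a server-side component can be exercised and its logfile checked afterwards; the URL, log path and log message below are all placeholders, so the function is shown but not invoked:

    # Sketch: verify a server-side component via its event log.
    from urllib.request import urlopen

    def check_report_component():
        # Exercise the component (placeholder URL)
        urlopen("http://www.example.com/report?year=2001")
        # Then assert on its logfile (placeholder path and message)
        with open("/var/log/webapp/events.log") as log:
            assert "report generated" in log.read(), "expected event not logged"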
7. Databases
Databases play an important role in web application technology, housing the content that the web application manages, running queries and fulfilling user requests for data storage. The most commonly used type of database in web applications is the relational database, and it is managed via SQL for writing, retrieving and editing information. In general, there are two types of errors that may occur: data integrity errors and output errors. Data integrity errors refer to missing or wrong data in tables, and output errors are errors in writing, editing or reading operations on the tables. The issue is to test the functionality of the database, not the content, and the focus here is therefore on output errors. Verify that queries, writing, retrieving and editing in the database are performed in a correct way; a round-trip sketch follows the list below.
Issues to test are:
Creation of tables
Indexing of data
Writing and editing in tables (for example valid numbers or characters, input longer than field etc.)
Reading from tables
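Here is a minimal round-trip sketch of such output-error checks, using an in-memory SQLite database purely for illustration; a real web application would run the same kind of queries against its own database type, and the schema is invented:

    # Output-error check sketch: write, read back, edit, read back again.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

    # Writing: insert a row and verify it hits the table in correct form
    conn.execute("INSERT INTO orders (id, amount) VALUES (?, ?)", (1, 99.50))
    (amount,) = conn.execute("SELECT amount FROM orders WHERE id = 1").fetchone()
    assert amount == 99.50

    # Editing: update and re-check the stored value
    conn.execute("UPDATE orders SET amount = ? WHERE id = ?", (120.00, 1))
    (amount,) = conn.execute("SELECT amount FROM orders WHERE id = 1").fetchone()
    assert amount == 120.00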
Usability
1. Navigation
Navigation describes the way users navigate within a page, between different user interface controls (buttons, boxes, lists, windows etc.), or between pages via e.g. links. To determine whether or not your page is easy to navigate through, consider the following. Is the application's navigation intuitive? Are the main features of the site accessible from the main page? Does the site need a site map, search engine, or other navigational help? Be careful, though, that you don't overdo your site. Too much information often has the opposite effect to what was intended. Users of the web tend to be very goal driven and scan a site very quickly to see if it meets their expectations. If not, they quickly move on. They rarely take the time to learn about the site's structure, and it is therefore important to keep the navigational help as concise as possible.
Another important aspect of navigation is whether the site is consistent in its conventions regarding page layout, navigation bars, menus, links etc. Make sure that users intuitively know that they are still within the site by keeping the page design uniform throughout the site.
As soon as the hierarchy of the site is determined, testing of how users navigate can commence. Have real users try to navigate through ordinary paper drafts describing how the layout is done.
Summary:
Intuitive navigation
Main features accessible from main page
Site map or other navigational help
Consistent conventions (navigation bars, menus, links etc.)
2. Graphics
The graphics of a web site include images, animations, borders, colours, movie clips, fonts, backgrounds, buttons etc. Issues to check are:
Make sure that the graphics serve a definite purpose and that images or animations don't just clutter up the visual design and waste bandwidth
Verify that fonts are consistent in style
Suitable background colours combined with font and foreground colours; remember that a computer display presents contrasts exceptionally well compared to printed paper
Three-dimensional effects on buttons often give useful cues
When displaying a large number of images, consider using thumbnails; check that the original picture appears when a thumbnail is clicked
Size and quality of pictures, usage of compressed formats (JPG or GIF)
Mouse-over effects
3. Content
Content testing is done to verify the correctness, accuracy and relevancy of information presented on the site, or in a database, in the form of text, images or animations.
Correctness is whether the information is truthful or contains misinformation. For example, wrong prices in a price list may cause financial problems or even induce legal issues.
The accuracy of the information is whether it is without grammatical or spelling errors. These kinds of verifications are often done in e.g. Word or other word processors.
Remove irrelevant information from your site. This may otherwise cause misunderstandings or confusion. Content testing should be done as early as possible, i.e. when the information is posted.
Summary:
Correctness
Accuracy
Relevancy

4. General Appearance
Does the site feel right when using it? Do you intuitively know where to look for information? Is the design consistent throughout the site? Make sure that the design and aim go hand in hand. Too much design can easily turn a conservative corporate site into a publicity stunt. Important to all kinds of usability tests is to involve external personnel that have little or no connection to the development of the site. It's easy to get fond of one's own solution, so having actual users evaluate the site may be critical.
Summary:
Intuitive design
Consistent design
If using frames, make sure that the main area is large enough
Consider size of pages: several screens on the same page, or links between them
Do features on the site need help systems or will they be intuitive
Server Side Interface
1. Server Interface
Due to the complex architecture of web systems, interface and compatibility issues may occur in several areas. The core components are web servers, application servers and database servers (and possibly mail servers). Web servers normally host HTML pages and other web services. Application servers typically contain objects such as programs, scripts, DLLs or third-party products that provide and extend functionality and effects for the web application. Test the communication between the different servers by making transactions and viewing logfiles to verify the result. Depending on the configuration of the server side, compatibility issues may occur depending on, for example, server hardware, server software or network connections. Database compatibility issues may occur depending on different database types (SQL, Oracle, Sybase etc.).
Issues to test:
Verify that communication is done correctly: web server to application server, application server to database server, and vice versa
Compatibility of server software, hardware, network connections
Database compatibility (SQL, Oracle, Sybase etc.)
2. External Interface
Several web pages have external interfaces, such as merchants verifying credit card numbers to allow transactions to be made, or a site like http://www.pris.nu/ that compares prices and delivery times of different merchants on the web. Verify that data is sent and retrieved in correct form.
Client Side Compatibility
1. Platform
There are several different operating systems being used on the market today, and depending on the configuration of the user's system, compatibility issues may occur. Different applications may work fine under certain operating systems but fail under another. The following are the most commonly used:
Windows (95, 98, 2000, NT)
Unix (different sets)
Macintosh
Linux
2. Browsers
The browser is the most central component on the client side of the web. Browsers come in different brands and versions and have different support for Java, JavaScript, ActiveX, plug-ins or different HTML specifications. ActiveX, for example, is a Microsoft product and therefore designed for Internet Explorer, while JavaScript is produced by Netscape and Java by Sun. This substantiates the fact that compatibility problems commonly occur. Frames and cascading style sheets may display differently on different browsers, or not at all. Different browsers also have different settings for e.g. security or Java support.
A good way to test browser compatibility is to create a compatibility matrix where different brands and versions of browsers are tested against a certain number of components and settings, for example applets, scripting, ActiveX controls or cookies.
Summary:
Internet Explorer (3.X, 4.X, 5.X)
Netscape Navigator (3.X, 4.X, 6.X)
AOL
Browser settings (security settings, graphics, Java etc.)
Frames and cascading style sheets
Applets, ActiveX controls, DHTML, client side scripting
HTML specifications
Graphics
3. Settings, Preferences
Depending on the settings and preferences of the client machine, web applications may behave differently. Try to vary the following:
Screen resolution (check that text and graphic alignment still work, fonts are readable etc.)
Colour depth (256, 16-bit, 32-bit)
4. Printing
Despite the paperless society the web was to introduce, printing is done more than ever. Verify that pages are printable with consideration of:
Text and image alignment
Colours of text, foreground and background
Scalability to fit paper size
Tables and borders
Performance
1. Connection speed
Users may differ greatly in connection speed. They may be on a 28.8 modem or on a T3 connection. Users expect longer download times when retrieving demos or programs, but not when requesting a homepage. If the transaction response time is too long, users will leave the site. Another issue to consider is time-out on a page that requires login: if load time is too long, users may be thrown out due to time-out. Database problems may occur if the connection speed is too low, causing data loss.
Summary:
Connection speeds: 14.4, 28.8, 33.6, 56.6, ISDN, cable, DSL, T1, T3
Time-out
2. Load
What is the estimated number of users per time period, and how will it be divided over the period? Will there be peak loads, and how will the system react? Can your site handle a large number of users requesting a certain page? Load testing is done to measure the performance at a given load level to assure that the site works within the requirements for performance. The load level may be a certain number of users using your site at the same time, or a large number of data transactions from users, such as online ordering. A minimal sketch follows the summary below.
Summary:
Many users requesting a certain page at the same time or using the site simultaneously
Large amounts of data from users
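A minimal load sketch follows; real load testing would use a dedicated tool, and the URL and the simulated user count are placeholders:

    # Load sketch: many simultaneous users requesting the same page.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    def fetch(url: str) -> float:
        start = time.time()
        urlopen(url).read()
        return time.time() - start                      # response time in seconds

    with ThreadPoolExecutor(max_workers=50) as pool:    # 50 simultaneous users
        times = list(pool.map(fetch, ["http://www.example.com/"] * 50))

    print(f"max response time: {max(times):.2f}s, mean: {sum(times)/len(times):.2f}s")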

3. Stress
Stress testing is done in order to actually break a site or a certain feature, to determine how the system reacts. Stress tests are designed to push and test system limitations and determine whether the system recovers gracefully from crashes. Hackers often stress systems by providing loads of wrong in-data until they crash, and then gain access to them during start-up. Typical areas to test are forms, logins or other information transaction components.
Summary:
Performance of memory, CPU, file handling etc.
Errors in software, hardware, memory (leakage, overwrite or pointers)
4. Continuous use
Is the application or certain features going to be used only during certain periods of time, or will it be used continuously 24 hours a day, 7 days a week? Test that the application is able to perform under those conditions. Will downtime be allowed, or is that out of the question? Verify that the application is able to meet the requirements and does not run out of memory or disk space.
Security
Security is an area of immense extent and would need extensive writing to be fairly covered. We will do no more than point out the most central elements to test. First make sure that you have a correct directory setup. You don't want users to be able to browse through directories on your server.
Logins are very common on today's web sites, and they must be error free. Make sure to test both valid and invalid login names and passwords. Are they case sensitive? Is there a limit to how many tries are allowed? Can the login be bypassed by typing the URL of an inside page directly into the browser? A login-check sketch follows the summary below.
Is there a time-out limit on your site? What happens when it's exceeded? Are users still able to navigate through the site?
Logfiles are very important in order to maintain security at the site. Verify that relevant information is written to the logfiles and that the information is traceable.
When secure socket layers are used, verify that the encryption is done correctly and check the integrity of the information.
Scripting on the server often constitutes security holes and is often used by hackers. Test that it isn't possible to plant or edit scripts on the server without authorisation.
Summary:
Directory setup
Logins
Time-out
Logfiles
SSL
Scripting languages
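The login checks above can be sketched as follows; login_attempt() is a hypothetical stand-in for code that would submit credentials to the real site, and here it merely simulates a server that locks the account after three failed tries:

    # Login-check sketch; login_attempt() is a hypothetical, simulated helper.
    failed_tries = 0

    def login_attempt(username: str, password: str) -> bool:
        global failed_tries
        if failed_tries >= 3:
            return False                        # account locked
        if password == "correct-password":      # placeholder credential check
            return True
        failed_tries += 1
        return False

    assert login_attempt("testuser", "correct-password")       # valid login works
    assert not login_attempt("testuser", "wrong-password")     # invalid rejected
    assert not login_attempt("testuser", "CORRECT-PASSWORD")   # case sensitive
    for _ in range(3):
        login_attempt("testuser", "wrong-password")            # exhaust retry limit
    assert not login_attempt("testuser", "correct-password")   # now locked out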

Database Testing
There are several reasons why you need to develop a comprehensive testing strategy for an RDBMS:
Data is an important corporate asset.
Mission-critical business functionality is implemented in RDBMSs.
Current approaches aren't sufficient.
Testing provides the concrete feedback required to identify defects.
Support for evolutionary development.
Here are a few interesting questions to ask someone who isn't convinced that you need to test the DB:
If you're implementing code in the DB in the form of stored procedures, triggers, etc., shouldn't you test that code to the same level that you test your app code?
Think of all the data quality problems you've run into over the years. Wouldn't it have been nice if someone had originally tested and discovered those problems before you did?
Wouldn't it be nice to have a test suite to run so that you could determine how (and if) the DB actually works?
What Should We Test?
Figure 1 indicates what you should consider testing when it comes to relational databases. The diagram is drawn from the point of view of a single database; the dashed lines indicate threat boundaries, indicating that you need to consider threats both within the database (clear-box testing) and at the interface to the database (black-box testing). Table 1 lists the issues which you should consider testing both internally within the database and at the interface to it.
Figure 1. What to test.

Table 1. What to test in an RDBMS.
Black-Box Testing at the Interface:
O/R mappings (including the meta data)
Incoming data values
Outgoing data values (from queries, stored functions, views ...)
White/Clear-Box Testing Internally Within the Database:
Scaffolding code (e.g. triggers or updateable views) which supports refactorings
Typical unit tests for your stored procedures, functions, and triggers
Existence tests for database schema elements (tables, procedures, ...)
View definitions
Referential integrity (RI) rules
Default values for a column
Data invariants for a single column
Data invariants involving several columns

How to Test
Although you want to keep your database testing efforts as simple as possible, at first you will discover that you have a fair bit of both learning and setup to do. In this section we discuss the need for various database sandboxes in which people will test: in short, if you want to do database testing then you're going to need test databases (sandboxes) to work in. We then overview how to write a database test and, more importantly, describe setup strategies for database tests. Finally, we overview several database testing tools which you may want to consider.
Database Sandboxes
A sandbox is basically a technical environment whose scope is well defined and respected. In each sandbox you'll have a copy of the database. In the development sandbox you experiment, implement new functionality, refactor existing functionality, and validate your changes through testing; eventually, once you're happy with your work, you promote it to the project integration sandbox.
Writing Database Tests
There's no magic when it comes to writing a database test; you write them just like you would any other type of test. Database tests are typically a three-step process:
Set up the test. You need to put your database into a known state before running tests against it. There are several strategies for doing so.
Run the test. Using a database regression testing tool, run your database tests just like you would run your application tests.
Check the results. You'll need to be able to do "table dumps" to obtain the current values in the database so that you can compare them against the results which you expected.
The article What To Test in an RDBMS goes into greater detail. A minimal sketch of this three-step process appears below.
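The sketch below uses unittest and an in-memory SQLite database purely for illustration; the schema and data are invented, and a real project would run against its own database and regression testing tool:

    # Three-step database test sketch: set up, run, check.
    import sqlite3
    import unittest

    class CustomerTableTest(unittest.TestCase):
        def setUp(self):
            # Step 1: put the database into a known state before every test
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
            self.conn.execute("INSERT INTO customers VALUES (1, 'Initial Customer')")

        def test_insert_customer(self):
            # Step 2: run the operation under test
            self.conn.execute("INSERT INTO customers VALUES (2, 'New Customer')")
            # Step 3: "table dump" the current values and compare with expectations
            rows = self.conn.execute(
                "SELECT id, name FROM customers ORDER BY id").fetchall()
            self.assertEqual(rows, [(1, 'Initial Customer'), (2, 'New Customer')])

        def tearDown(self):
            self.conn.close()

    if __name__ == "__main__":
        unittest.main()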

Setting up Database Tests

To successfully test your database you must first know the exact state of the database, and the best way to do that is to simply put the database in a known state before running your test suite. There are two common strategies for doing this:
Fresh start. A common practice is to rebuild the database, including both creation of the schema as well as loading of initial test data, for every major test run (e.g. testing that you do in your project integration or pre-production test sandboxes).
Data reinitialization. For testing in developer sandboxes, something that you should do every time you rebuild the system, you may want to forgo dropping and rebuilding the database in favor of simply reinitializing the source data. You can do this either by erasing all existing data and then inserting the initial data values back into the database, or you can simply run updates to reset the data values. The first approach is less risky and may even be faster for large amounts of data.
An important part of writing database tests is the creation of test data. You have several strategies for doing so:
Have source test data. You can maintain an external definition of the test data, perhaps in flat files, XML files, or a secondary set of tables. This data would be loaded in from the external source as needed.
Test data creation scripts. You develop and maintain scripts, perhaps using data manipulation language (DML) SQL code or simply application source code (e.g. Java or C#), which do the necessary deletions, insertions, and/or updates required to create the test data.
Self-contained test cases. Each individual test case puts the database into a known state required for the test.
These approaches to creating test data can be used alone or in combination. A significant advantage of writing creation scripts and self-contained test cases is that it is much more likely that the developers of that code will place it under configuration management (CM) control. Although it is possible to put test data itself under CM control (worst case, you generate an export file that you check in), this isn't a common practice and therefore may not occur as frequently as required. Choose an approach that reflects the culture of your organization.
Where does test data come from? For unit testing, we should prefer to create sample data with known values. This way we can predict the actual results for the tests that we write and know that we have the appropriate data values for those tests. For other forms of testing -- particularly load/stress, system integration, and function testing -- live data should be used so as to better simulate real-world conditions.

What Testing Tools Are Available?
There are several critical features which you need to successfully test RDBMSs. First, as Figure 1 implies, you need two categories of database testing tools, one for interface tests and one for internal database tests. Second, these testing tools should support the language that you're developing in. For example, for internal database testing, if you're a Microsoft SQL Server developer your T-SQL procedures should likely be tested using some form of T-SQL framework; similarly, Oracle DBAs should have a PL/SQL-based unit testing framework. Third, you need tools which help you to put your database into a known state, which implies the need not only for test data generation but also for managing that data (like other critical development assets, test data should be under configuration management control).
To make a long story short, although we're starting to see a glimmer of hope when it comes to database testing tools, as you can see in Table 2, we still have a long way to go. Luckily there are some good tools being developed by the open source software (OSS) community and there are some commercial tools available as well.
Table 2. Some database testing tools.
Unit testing tools - Tools which enable you to regression test your database. Examples: DBFit, DBUnit, NDbUnit, OUnit for Oracle (being replaced soon by Qute), SQLUnit, TSQLUnit (for testing T-SQL in MS SQL Server), Visual Studio Team Edition for Database Professionals (includes testing capabilities), XTUnit.
Testing tools for load testing - Tools which simulate high usage loads on your database, enabling you to determine whether your system's architecture will stand up to your true production needs. Examples: Empirix, Mercury Interactive, RadView, Rational Suite Test Studio, Web Performance.
Test data generators - Developers need test data against which to validate their systems. Test data generators can be particularly useful when you need large amounts of data, perhaps for stress and load testing. Examples: Data Factory, Datatect, DTM Data Generator, Turbo Data.
Who Should Test?
During development cycles, the primary people responsible for doing database testing are application developers and agile DBAs. They will typically pair together, and because they are hopefully taking a Test-Driven Development approach to development, the implication is that they'll be doing database unit testing on a continuous basis. During the release cycle your testers, if you have any, will be responsible for the final system testing efforts, and therefore they will also be doing database testing.
The role of your data management (DM) group, or IT management if your organization has no DM group, should be to support your database testing efforts. They should promote the concept that database testing is important, should help people get the requisite training that they require, and should help obtain database testing tools for your organization. As you have seen, database testing is something that is done continuously by the people on development teams; it isn't something that is done by another group (except of course for system testing efforts). In short, the DM group needs to support database testing efforts and then get out of the way of the people who are actually doing the work.
Introducing Database Regression Testing into Your Organization
Database testing is new to many people, and as a result you are likely to face several
challenges:
Insufficient testing skills.
Insufficient unit tests for existing databases.
Insufficient database testing tools.
Reticent DM groups.
Database Testing and Data Inspection
A common quality technique is to use data inspection tools to examine existing data within a database. You might use something as simple as a SQL-based query tool such as DB Inspect to select a subset of the data within a database and visually inspect the results. For example, you may choose to view the unique values in a column to determine what values are stored in it, or compare the row count of a table with the count of the resulting rows from joining the table with another one. If the two counts are the same, then you don't have an RI problem across the join.
As Richard Dallaway points out, the problem with data inspection is that it is often done
manually and on an irregular basis. When you make changes later, sometimes months or
years later, you need to redo your inspection efforts. This is costly, time consuming, and
error prone.
Data inspection is more of a debugging technique than it is a testing technique. It is
clearly an important technique, but it's not something that will greatly contribute to your
efforts to ensure data quality within your organization.
Best Practices
Use an in-memory database for regression testing. You can dramatically speed up your database tests by running them, or at least portions of them, against an in-memory database such as HSQLDB. The challenge with this approach is that, because database methods are implemented differently across database vendors, any method tests will still need to run against the actual database server.
Start fresh each major test run. To ensure a clean database, a common strategy is that at the beginning of each test run you drop the database, then rebuild it from scratch taking into account all database refactorings and transformations to that point, then reload the test data, and then run your tests. Of course, you wouldn't do this to your production database.
Take a continuous approach to regression testing. A TDD approach to development is an incredibly effective way to work.
Train people in testing. Many developers and DBAs have not been trained in testing skills, and they almost certainly haven't been trained in database testing skills. Invest in your people, and give them the training and education they need to do their jobs.
Pair novices with people that have database testing experience. One of the easiest ways to gain database testing skills is to pair program with someone who already has them.

Software QA and Testing-related Organizations and Certifications
SEI - Software Engineering Institute web site; info about SEI technical programs, publications, bibliographies, some online documents, SEI courses and training, and links to related sites.
American Society for Quality - American Society for Quality (formerly the American Society for Quality Control) web site; geared to quality issues in general, not just software QA. ASQ is the largest quality organization in the world, with more than 100,000 members. Provides a wide variety of general quality-related certifications, as well as the CSQE (Certified Software Quality Engineer).
SPIN - Software Process Improvement Network, for those interested in improving software engineering practices. Organized into regional groups called SPINs that meet and share their experiences initiating and sustaining software process improvement programs. Annual meeting at the Software Engineering Process Group (SEPG) Conference, which is co-sponsored by the SEI and a regional SPIN. The web site lists links to regional SPINs worldwide.
IEEE Standards - IEEE web site; has Software Engineering Standards titles and prices; the topical areas for publications of interest would include listings under the Computer Engineering section in the categories of Software Design/Development and Software Quality and Management.
Society for Software Quality - Has chapters in San Diego, Delaware, and the Washington DC area, each with monthly meetings.
QAI - Quality Assurance Institute.
EOQ-SG - European Organization for Quality Software Group, an independent not-for-profit organization founded in 1983. It is comprised of more than 30 national quality organizations and other institutions, enterprises and specialists.
Certification Information for Software QA and Test Engineers:
CSQE - ASQ (American Society for Quality) CSQE (Certified Software Quality Engineer) program; information on requirements, outline of the required Body of Knowledge, listing of study references and more.
CSQA/CSTE - QAI (Quality Assurance Institute)'s programs for the CSQA (Certified Software Quality Analyst), CSTE (Certified Software Test Engineer), and Certified Software Project Manager (CSPM) certifications.
ISTQB Software Testing Certifications - The British Computer Society maintains a program of two levels of certification: the ISEB Foundation Certificate and the Practitioner Certificate.
SEI, CMM, ISO, ANSI

SEI = Software Engineering Institute at Carnegie Mellon University; initiated by
the U.S. Defense Department to help improve software development processes.
CMM = Capability Maturity Model, developed by the SEI. It is a model of 5 levels
of organizational maturity that determine effectiveness in delivering quality
software. It is geared to large organizations such as large U.S. Defense
Department contractors. However, many of the QA processes involved are
appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMM ratings by undergoing assessments by qualified
auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes are in place;
successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be
repeated.
Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to
oversee software processes, and training programs are used to ensure understanding
and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.


Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed.
Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For
ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4,
and 0.4% at 5.) The median size of organizations was 100 software engineering/
maintenance personnel; 32% of organizations were U.S. federal contractors or
agencies. For those rated at Level 1, the most problematical key process area was
Software Quality Assurance.

ISO = International Organization for Standardization - The ISO 9001:2000
standard (which replaces the previous standard of 1994) concerns quality systems
that are assessed by outside auditors, and it applies to many kinds of production
and manufacturing organizations, not just software. It covers documentation,
design, development, production, testing, installation, servicing, and other
processes. The full set of standards consists of: (a) Q9001-2000 Quality
Management Systems: Requirements; (b) Q9000-2000 Quality Management
Systems: Fundamentals and Vocabulary; (c) Q9004-2000 Quality Management
Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a
third-party auditor assesses an organization, and certification is typically good for about
3 years, after which a complete reassessment is required. Note that ISO
certification does not necessarily indicate quality products; it indicates only that
documented processes are followed. Also see http://www.iso.ch/ for the latest
information. In the U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/.

IEEE = Institute of Electrical and Electronics Engineers - among other things,
creates standards such as the IEEE Standard for Software Test Documentation
(IEEE/ANSI Standard 829), the IEEE Standard for Software Unit Testing (IEEE/ANSI
Standard 1008), the IEEE Standard for Software Quality Assurance Plans
(IEEE/ANSI Standard 730), and others.
ANSI = American National Standards Institute, the primary industrial standards
body in the U.S.; publishes some software-related standards in conjunction with the
IEEE and ASQ (American Society for Quality).
Other software development process assessment methods besides CMM and ISO 9000
include SPICE, Trillium, TickIT, and Bootstrap.


Capability Maturity Model (SW-CMM) for Software


The Capability Maturity Model for Software describes the principles and practices
underlying software process maturity and is intended to help software organizations
improve the maturity of their software processes in terms of an evolutionary path from
ad hoc, chaotic processes to mature, disciplined software processes. The CMM is
organized into five maturity levels:
1) Initial: The software process is characterized as ad hoc, and occasionally even
chaotic. Few processes are defined, and success depends on individual effort and
heroics.
2) Repeatable: Basic project management processes are established to track cost,
schedule, and functionality. The necessary process discipline is in place to repeat earlier
successes on projects with similar applications.
3) Defined: The software process for both management and engineering activities is
documented, standardized, and integrated into a standard software process for the
organization. All projects use an approved, tailored version of the organization's standard
software process for developing and maintaining software.
4) Managed: Detailed measures of the software process and product quality are
collected. Both the software process and products are quantitatively understood and
controlled.
5) Optimizing: Continuous process improvement is enabled by quantitative feedback
from the process and from piloting innovative ideas and technologies.
Predictability, effectiveness, and control of an organization's software processes are
believed to improve as the organization moves up these five levels. While not rigorous,
the empirical evidence to date supports this belief.
Except for Level 1, each maturity level is decomposed into several key process areas that
indicate the areas an organization should focus on to improve its software process.
The key process areas at Level 2 focus on the software project's concerns related to
establishing basic project management controls. They are Requirements Management,
Software Project Planning, Software Project Tracking and Oversight, Software
Subcontract Management, Software Quality Assurance, and Software Configuration
Management.

The key process areas at Level 3 address both project and organization issues, as the
organization establishes an infrastructure that institutionalizes effective software
engineering and management processes across all projects. They are Organization Process
Focus, Organization Process Definition, Training Program, Integrated Software
Management, Software Product Engineering, Intergroup Coordination, and Peer
Reviews.
The key process areas at Level 4 focus on establishing a quantitative understanding of
both the software process and the software work products being built. They are
Quantitative Process Management and Software Quality Management.
The key process areas at Level 5 cover the issues that both the organization and the
projects must address to implement continual, measurable software process improvement.
They are Defect Prevention, Technology Change Management, and Process Change
Management.
Each key process area is described in terms of the key practices that contribute to
satisfying its goals. The key practices describe the infrastructure and activities that
contribute most to the effective implementation and institutionalization of the key process
area.

CMM (Capability Maturity Model)

Level 1 - Initial
No KPAs

Level 2 - Repeatable
a. Software Requirement Management
b. Software Project Planning
c. Software Project Tracking & Oversight
d. Software Subcontract Management
e. Software Quality Assurance
f. Software Configuration Management

Level 3 - Defined
a. Organizational Process Focus
b. Organizational Process Definition
c. Training Program
d. Software Product Engineering
e. Integrated Software Management

f. Inter-Group Coordination
g. Peer Review

Level 4 - Managed
a. Quantitative Process Management
b. Software Quality Management

Level 5 - Optimizing
a. Defect Prevention
b. Technology Change Management
c. Process Change Management

Difference between CMM & CMMI


CMM is a reference model of matured practices in a specified discipline, e.g. Systems
Engineering CMM, Software CMM, People CMM, and Software Acquisition CMM.
These models were difficult to integrate as and when needed, so CMMI evolved as a
more mature set of guidelines, built by combining the best components of the individual
CMM disciplines (Software CMM, People CMM, etc.). It can be applied to product
manufacturing, people management, software development, and so on.


Appendix A: System Requirement Document


Signature Page
1.1 Purpose .......... 101
1.2 Intended Audience .......... 101
1.3 Document Organization .......... 101
1.4 Project Scope .......... 101
1.5 References .......... 101
1.6 Revision History .......... 101
1.7 Points of Contact .......... 101
1.8 Risks .......... 101
2. Overall Description of the System .......... 101
2.1 Product Perspective .......... 101
2.2 Product Functions .......... 101
2.3 Roles and Groups .......... 101
2.4 Design and Implementation Constraints .......... 102
3. External Interfaces .......... 102
3.1 User Interfaces .......... 102
3.2 Hardware Interfaces .......... 102
3.3 Software Interfaces .......... 102
3.4 Communication Interfaces .......... 102
4. Functional Requirements / Design .......... 102
4.1 XYZ INC Business Process .......... 102
4.2 Electronic Proposal Management System Components .......... 102
4.3 Legacy Data .......... 102
4.4 Search Capabilities .......... 102
4.5 Archive .......... 102
4.6 Basic Portlet Functions .......... 102
4.7 Electronic Proposal Management System Home Page .......... 102
4.8 Proposal Library
4.9 Metadata Maintenance Page .......... 102
Appendix A - Metadata Lookups .......... 102
Appendix B - Metadata Data Types and Comparison Operators .......... 102
Appendix C - Permissions for different XYZ INC Roles Across EPMS Document Libraries .......... 102
Appendix D - Project Folders .......... 102


Introduction
Purpose
The purpose of this System Requirements Document (SRD) is to establish the functional requirements for
XYZ INC's new Electronic Proposal Management System (EPMS).
Intended Audience
First, the departments within XYZ INC that are stakeholders in the project will be able to read this
document, understand the functionality that it describes, and provide clarifications, corrections, or
modifications to more clearly define how the system can best be organized to meet their customers' needs.
Also, upon reading this document, the clients should have a complete understanding of the functionality of
the envisioned system.
Document Organization
The initial chapters of this System Requirements Document provide an overview of the document and the
scope of the project (Chapter 1), a high-level description of the functional and business objectives of the
project (Chapter 2), and the interfaces that are expected between the system to be developed under this
project and other systems (Chapter 3).
Project Scope
References
Revision History
Points of Contact
Risks
The risks and mitigation steps defined in the following matrix are the risks identified at the outset of
the project with the XYZ INC Project Manager. There is no reason to expect that any particular risk will
come to pass, but the matrix serves to ensure that risk analysis and planning are incorporated as a normal
part of the project lifecycle and that appropriate steps are taken to offset those risks where it is
deemed appropriate.
Overall Description of the System
Product Perspective
The XYZ INC Electronic Proposal Management System will provide new functionality to XYZ INC staff
involved in the development of new business proposals. This system is expected to be available to
Business Development staff, Operations staff, Estimation staff, Contracts staff, and the Proposal Team
staff in Arlington and Houston.
Product Functions
The product functions for the Electronic Proposal Management System fall under four main areas:
Roles and Groups
The users interact with the system based on their role in the Proposal lifecycle.


Design and Implementation Constraints


The Electronic Proposal Management System shall be developed to run on any desktop computer running a
Microsoft Operating System and the Microsoft Internet Explorer Version 5.5 browser or later versions of
the browser. The resulting application may work on other desktop computers (e.g., Apple or Linux-based),
but will not be tested or guaranteed to work on such systems.
External Interfaces
User Interfaces
Hardware Interfaces
Software Interfaces
Communication Interfaces
Functional Requirements / Design
XYZ INC Business Process
Electronic Proposal Management System Components
Legacy Data
Search Capabilities
Archive
Basic Portlet Functions
Electronic Proposal Management System Home Page
Metadata Maintenance Page
Appendix A Metadata Lookups
Appendix B Metadata Data Types and Comparison Operators
Appendix C Permissions for different XYZ INC Roles Across EPMS Document Libraries
Appendix D - Project Folders


Appendix B: Test Strategy Document

Test Strategy and Approach
For Xyz Inc

Version 0.4 (Draft)
SSTMTT06
Commercial in Confidence

Author(s): Ananth
Reviewed By: James, Aron
Approved By: Michales
Distribution:


Table of Contents
1. EXECUTIVE SUMMARY .......... 106
2. STATEMENT OF REQUIREMENTS .......... 107
2.1 Test Objectives .......... 107
2.2 Test Requirements .......... 107
2.3 Measurable Success Criteria .......... 107
2.4 Assumptions .......... 107
2.5 Scope .......... 107
3. SYSTEM UNDER TEST (SUT) .......... 109
3.1 Functionality .......... 109
3.2 Software Architecture .......... 109
3.3 Hardware Architecture .......... 109
3.4 Data .......... 109
4. ROLE OF AUTOMATED TEST TOOLS .......... 110
5. TESTING FRAMEWORK .......... 111
5.1 The Testing Process .......... 111
5.2 Test Management Infrastructure .......... 111
5.3 Testing Guidelines .......... 111
6. TEST DATA PLAN
7. TESTING STRATEGY .......... 113
7.1 Types of Testing .......... 113
7.2 Use-Cases .......... 113
7.3 Test Cycles .......... 113
8. RESOURCES & FACILITIES .......... 114
8.1 Roles .......... 114
8.2 Responsibilities .......... 114
8.3 Resources Assessment .......... 114
8.4 Facilities & Hardware Requirements .......... 114
9. RISK ASSESSMENT .......... 115
9.1 Project Risks .......... 115
9.2 Other Factors .......... 115
9.3 Statement of Risk .......... 115
9.4 Risk Mitigation Strategy .......... 115
10. LEVEL OF EFFORT ESTIMATES .......... 116
10.1 Work Packages .......... 116
10.2 Delivery Dates .......... 116
10.3 Proposed Project Schedule .......... 116
10.4 Summary of Resource Requirements .......... 116
11. COMPONENT ASSESSMENTS SUMMARIES .......... 117
11.1 Legacy Systems and Feeds .......... 117
11.2 Vendor Systems and Feeds .......... 117
A. KEY MAN INTERVIEWS
B.
C. TERMS & CONDITIONS


Executive Summary
The objective of the CMS project is "to deliver a state-of-the-art Medical Management system
with the objective of improving members' health, while controlling care costs. This system should
differentiate XYZ Inc from competing health plans to retain and grow market share" (state source).
The expectation of this exercise is to provide the following:
(1) To ensure that the test regimen applied to the ongoing activities is robust and complete with
respect to the types of testing, ensuring the delivery of a quality application.
(2) To reduce the time taken for execution, resulting in an earlier delivery date.
(3) To add visibility to all aspects of the testing process, allowing better control by management.
(4) To develop an acceptable Basis of Estimation (BOE) to aid this and similar efforts in the
future where time and resource estimations are required, aiding cost expectations.
This endeavour presents several risks, including:
Schedule risk: Timelines are aggressive given the number of new or changing systems
and the inherent complexity of such an activity.
Project risk: The likelihood that the project cannot be completed against its acceptance
criteria. Firstly, these criteria did not exist at the time this document was written; further,
Key Performance Indicators (KPIs) are currently undefined, allowing no evaluation against
the acceptance criteria should such criteria actually exist.
Business risk: There is an identified window of opportunity that EPA intends to exploit.
Should project delivery extend past this window, the ROI may not be met.
To support estimation, this document also provides:
- the estimated size of the team required
- the estimated duration and timing of the project


Statement of Requirements
Test Objectives
Primary
The primary objectives are:
To ensure that the functionality of the system will meet the expectations of the internal and external users
To reduce where possible the time estimated to conduct and complete testing

Secondary
The secondary objectives are:
To explore the use of automation tools to improve test coverage and reduce time to test.
To investigate and recommend measures that will result in more structured and reusable testing processes
across the department.
To indicate the level of effort required to conduct testing, now and in the future.
To present a staffing model necessary to conduct testing

Test Requirements
The requirements for this project are:

To determine whether the system will perform with acceptable response times at loads of up to 2000
concurrent users running a launch date scenario of typical customer transactions.

Measurable Success Criteria


Test Class   Definition                                                       Target Pass Rate
Critical     Essential functionality.                                         100%
Important    Necessary functionality, but for which a workaround exists.      90%
Desirable    User-preferred but non-essential behaviour of the application.   70%
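A small sketch of how these targets could be checked mechanically at the end of a test cycle; the test classes mirror the table above, while the result counts are invented sample data.

    TARGETS = {"Critical": 100.0, "Important": 90.0, "Desirable": 70.0}

    results = {                    # (tests passed, tests executed) per class
        "Critical":  (120, 120),
        "Important": (46, 50),
        "Desirable": (15, 20),
    }

    for test_class, (passed, executed) in results.items():
        rate = 100.0 * passed / executed
        verdict = "met" if rate >= TARGETS[test_class] else "NOT met"
        print(f"{test_class}: {rate:.1f}% pass rate (target {TARGETS[test_class]:.0f}%) - {verdict}")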

Assumptions
Scope
In scope components and related feeds include:

Care Management System (CMS)

MedStat Advantage Suite

Health A-to-Z (Web Front End)

DxCG RiskSmart Core Module


McKesson CCMS and Disease Monitor Modules

EPA Applications as denoted on the EPA Care Management Data Architecture document (CareMgmtDataArch
(2005-11-03).xls)

Other related EPA Applications:
o SmartMail
o CCMS-Exchange eMail/Zix Interface
o CCMS-Fax Interface

Out of Scope
o Execution of any proposed or designed tests as detailed by test planning. Execution will be the
activity of subsequent efforts.
o Unit Testing of 3rd-party applications/systems
o Test environment configuration and implementation
o Data sourcing, massaging, and loading
o Systems or components specifically related to Significa, Erin Group, Nurse Coaching, and UM Full/Lite
systems


System Under Test (SUT)


Functionality
Software Architecture
Hardware Architecture
Data
Database Name                          Database Type   Contents
Case Tracker Case Management           MS Access
DM Access Database                     MS Access
ODS/Data Repository                    DB2
CMBS Eligibility/Membership            DB2
Claims System (USB92, SAMM, ACII,      DB2
  Facets, Comp 1, Rx)
Health AtoZ                            MS SQL Server   Externally hosted by the vendor; provides the interface for EPA clients
Medstat Advantage Suite                DB2/AIX         Modelling system; provides model results to DxCG
DxCG RiskSmart                         MS SQL Server   Calculates scores from MedStat results and passes them to McKesson/CCMS
McKesson CCMS                                          Membership enrolment details, Provider, Wellness database, Capitated Lab, Teleform Assessments, HRAs; provides the Nurses' interface (currently 40 users; will be 800 in a few years)


Role of Automated Test Tools


The use of automation has been identified as an area of interest. As such, we have undertaken to investigate
what tools are already in use, what new tools (if any) can be employed to help meet the project goals, and
to make recommendations on appropriate tools.

Use of Tools Considerations


Tools Assessments
In house Tools Assessment and Search

WorkSoft and Compuware have been short-listed and are scheduled to deliver Proofs of Concept
(POCs) in January 2006. Mercury Int., the market leader in the test automation tools space, has been
excluded on the basis of financial instability.


Testing Framework
The Testing Process
Test Management Infrastructure
Testing Guidelines
Environment Management
Test Management
Defect Management
Reviews
Reviews of documents and deliverables should be conducted at the point where control of quality is
required, as discussed in the Quality Control section. In general, this is at any point where you have a
planned task that requires a review, approval, or sign-off. Specific review points, established by the
Project Plan, could be as follows:

Test Strategy approval

Test Environment sign-off

Test Plan sign-off

User approval of test data

Detailed Test Script Definition (per Test Plan) sign-off

Environment readiness review

Post Test Iteration review

Final Test Review and report

Progress Reporting
Requirements Management
Traceability and Test Coverage
Prioritization
Risk Assessment
1. Impact: the impact that the failure of this requirement would have on the business;
2. Probability: the probability that a failure might occur if the requirement is not covered by a test;
3. Complexity: allowing tests to concentrate on the most complex functionality;
4. Source of failure: identifying the areas of testing that are most likely to cause failures, and
concentrating upon the requirements and tests covering these areas.
A sketch of how these factors might be combined into a single numeric test priority is shown below.
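The sketch below shows one hypothetical way to fold the four factors into a score for ordering tests; the 1-5 scales, the weighting, and the requirement names are all assumptions for illustration, not part of the strategy document.

    def test_priority(impact, probability, complexity, failure_source):
        """Each factor rated 1 (low) to 5 (high); higher score = test earlier."""
        return impact * probability + complexity + failure_source

    requirements = {   # hypothetical requirements and ratings
        "Claims feed load": test_priority(5, 4, 3, 4),   # score 27
        "Help page styling": test_priority(1, 2, 1, 1),  # score 4
    }
    for name, score in sorted(requirements.items(), key=lambda kv: -kv[1]):
        print(name, score)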

Requirements Management Tool


Test Documentation
Issue Management
1. Issues raised during testing are to be entered into an issue management system, either an
existing/incumbent one or an issue management system developed in lieu.
2. The issues arising will be formally raised and recorded in the issues register, where they will be
reviewed, escalated, and monitored to completion.
3. An issue management system should be designed that allows the storage of issues, assigned to
owners whose responsibility it is to manage the resolution of their issues. This must be held under
change control and can be updated regularly, with meetings to discuss the progress of issue
resolution. The monitoring of issues can also provide a means of tracking project progress. To do
this, issues can be assigned priorities indicating whether they are (for example) Showstopper, High,
Medium, or Low priority.


Testing Strategy
Types of Testing
Functional Testing
Unit Testing
Data Validation Testing
Data Referential Integrity Testing
Functionality Testing
Privacy Testing
Security Testing
Regression Testing
End-to-End Testing
Non Functional Testing

Use-Cases
Test Cycles
Cycle 0
Cycle 1
Cycle 2 etc.
Investigative Testing


Resources & Facilities


Roles
The following table lists names and contact numbers for key personnel on the project.
Role                                    Name    Contact No.
Project Owner/Sponsor                   James
Project Manager                         Brett
Database Administrator
Data Specialist                         Kate
Business Specialist                     Nina
Application Specialist - MedStat        Laura
Application Specialist - DxCG           Kathy
Application Specialist - McKesson       Kathy
Application Specialist - Health AtoZ    Kathy
Network Specialist

Responsibilities
Resources Assessment
Facilities & Hardware Requirements


Risk Assessment
Project Risks
The following standard Project Risks have been identified as being applicable to this project.

The system is critical to the Clients business.

System requirements have not been gathered and finalised.

Health A to Z has not performed equivalent or similar tests to those of interest to the client.

The proposed hardware configuration of the system has not been proven at the client site
before.

The application / system software configuration has not been proven at the client site

Other systems will be passing information to or taking information from this system

Other systems rely upon this system for their functionality or performance.

Other Factors
Other risks and considerations that apply to this project are:

Existing feeds and subroutines will be changed to accommodate the new
systems and processes

Timelines are aggressive given the number of applications being deployed

Current development and testing is very siloed and communication is not efficient

Systems are disparate in nature

Test practices are inconsistent across the organization

No standard test metrics are used across the organization

Statement of Risk
Risk Mitigation Strategy


Level of Effort Estimates


Work Packages
Create Test Plan
Create Test Data
Specify Matrix of Transactions and Data
Document Detailed Script Definitions
Develop Test Scripts
Conduct Test Environment Readiness Check
Execute Preliminary Tests (Cycle 0)
Execute Main Tests (Cycles 1 to N)
Pre-requisite: Successful completion of test/production environment readiness checks, successful
completion of Preliminary Tests.
Effort: Controlling tests, analysing results, possible refinement of scenarios (and consequently scripts),
executing tests and gathering results.
Completion criteria: All test cases for the test cycle have been executed. This will be signed-off by the
Testing Consultant.
Timing: 5 working days per cycle.

Review Stage
Complete Final Report
Delivery Dates
Proposed Project Schedule
Summary of Resource Requirements


Component Assessments Summaries


Legacy Systems and Feeds
Claims Systems and Feeds
Provider Systems and Feeds
Wellness / Teleform Systems and Feeds
Vendor Systems and Feeds
Health A to Z (Web Front End)

Skills Matrix


Role: QA Manager

Skills:
- Client qualification expertise, including business driver/technical capabilities alignment
- Project Management
- Solution engineering capabilities, including Center of Excellence modelling
- Customer engagement and cultural awareness
- Risk management
- Expertise with QA/Testing Lifecycle strategy and concepts
- Knowledge of leading QA/Testing practices, as well as industry standards in project/service governance infrastructures
- Training capabilities in QA management
- Knowledge of the Software Engineering Development Process
- 12+ years IT experience

Responsibilities:
- Establish a Quality Management Framework and define Testing processes & procedures for clients
- Negotiate and audit ongoing objectives and deliverables of multiple test efforts through continuous improvement in QA management, utilizing test results and analysis reports
- Leverage client service governance infrastructures to ensure appropriate planning and management of QA and test execution resources
- Assess progress and effectiveness of the test effort through QA management
- Advocate the appropriate level of quality by the resolution of important defects
- Advocate the appropriate level of testability focus in the software development process

Training:
- QA/Testing Framework - EWTS and SSTM, as well as QMS processes
- QA management tools from leading test automation vendors, including (in order of preference) Mercury Interactive (TestDirector/Quality Center), Compuware (TrackRecord), Rational ClearCase, RadView, etc.
- CMMI & TMM
- Quality and audit controls for various industries, preferably including Healthcare and Financial sector regulations

Appendix C: System Test Plan

Sales & Inventory


Management System

Software Test Plan
V1.0

Approved By:
Name            Role            Date
Ashok Reddy J   Sr. Test Lead   25/12/06

Change Record (A-Added, M-Modified, D-Deleted):
Sno   Date   Version No   Page No   Change Mode (A/M/D)   Brief Description of Change


Contents
1. Introduction......................................................................................................................120
Objectives........................................................................................................................120
Test Strategy....................................................................................................................120
Scope................................................................................................................................120
Referential Material.......................................................................................................121
2. Test Items..........................................................................................................................121
Program Modules...........................................................................................................121
User Procedures..............................................................................................................121
3. Features To Be Tested.......................................................121
4. Features Not To Be Tested................................................................................................121
5. Approach...........................................................................................................................121
Sanity Testing..................................................................................................................121
Interface Testing.............................................................................................................121
Functional Testing..........................................................................................................121
Regression Testing..........................................................................................................121
Integration Testing..........................................................................................................122
System Testing.................................................................................................................122
Automation Testing........................................................................................................122
6. Pass/Fail Criteria...............................................................................................................122
Suspension Criteria........................................................................................................122
Resumption Criteria.......................................................................................................122
Approval Criteria...........................................................................................................122
7. Testing Process.................................................................................................................122
Test Deliverables.............................................................................................................122
Testing Tasks...................................................................................................................122
Responsibilities...............................................................................................................123
Resources.........................................................................................................................123
Schedule..........................................................................................................................123
8. Environmental Requirements
Software
Tools
Publications
9. Risks and Contingencies
Schedule
Personnel
Requirements
10. Change Management Procedures
11. Plan Approvals


1. Introduction
The Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and
schedule of all testing activities. The plan must identify the items to be tested, the features
not to be tested, the types of testing to be performed, the personnel responsible for testing,
the resources and schedule required to complete testing, and the risks associated with the
plan.

Objectives
The objectives of the Test Plan are:
a) To identify the components or modules to be tested.
b) To identify and determine the resources required to perform the testing process.
c) To identify and estimate the task schedules for each level of the testing process.
d) To define the test deliverables.

Test Strategy
Test Strategy is a management plan that involves ingenious methods to achieve its ends.
In the context of testing, a strategy can be defined as a high-level management method
that confirms adequate confidence in the software product being tested, while ensuring
that the cost, effort, and timelines are all within acceptable limits.
The Test Strategy for SIMS is classified into the following sections of the Test Plan
document.

Scope
Testing will be performed at several points in the life cycle as the product is constructed.
Testing is a very dependent activity. As a result, test planning is a continuing activity
performed throughout the system development life cycle. Test plans must be developed
for each level of testing.
The scope of this Test Plan document is the testing process for the entire SIMS.


Referential Material
a) FRS Documents and Use Case Documents

2. Test Items
Program Modules
This section outlines the testing to be performed by the developer for each module being
built.

User Procedures
This section describes the testing to be performed on all
user documentation to ensure that it is correct, complete, and
comprehensive.

3.Features To Be Tested
The features to be tested within SIMS are classified under the
following modules as:
a) Admin

4. Features Not To Be Tested


5. Approach
Sanity Testing
Interface Testing
Functional Testing
Regression Testing


Integration Testing
System Testing
Automation Testing
6. Pass/Fail Criteria
Suspension Criteria
a) When the AUT fails Build Acceptance Testing.
b) Whenever there is a Change Request.
c) Delay in publishing the input documents.

Resumption Criteria
a) Testing can be resumed after the patch is released for the
rejected build.
b) When specification documents are refined and base-lined
based on CR acceptance or rejection.
c) After the input documents are published.

Approval Criteria
When the status of all bugs in the Defect Profile is "Closed" and the result column in the
TCD is "Pass". This ensures that the proposed functionalities are satisfied in the system.

7. Testing Process
Test Deliverables
a) Defect Profile Documents.
b) Test Summary Reports.
c) Test Execution Reports.

Testing Tasks

a) Review of Functional specification document and preparation


of Review Report.
b) Preparation of Test Case documents.
c) Execution of the TCDs and result analysis based on actual behavior and expected
behavior.
d) Defect Tracking.

e) Bug Reporting.
f) Ensuring bug-fixing process.


Responsibilities
Resources
Task Schedule from 27-Jan-2007 to 08-Feb-2007.

Sno   Task                                                     Schedule (in Days)   Start Date   End Date

Master Entities - Sub Category Profile
      a) FRS Review and Review Report preparation              1.5 Days             13/02/2007   14/02/2007
2     a) RR with clarification release  b) Review Meeting      .5 Day               14/02/2007   14/02/2007
3     a) Test Case Workshop  b) Test Design                    2.5 Days             15/02/2007   17/02/2007
4     a) Lead Reviews  b) Refinement and Baseline of TCD       .5 Days              07/02/2007   17/02/2007

Product Profile
5     FRS Review and Review Report preparation                 1 Day                19/02/2007   19/02/2007
      a) RR with clarification release  b) Review Meeting      .5 Day               20/02/2007   20/02/2007
      b) TCD Preparation                                       3 Days               20/02/2007   23/02/2007
      a) Peer Reviews  b) Refinement of TCD based on Peer
      Reviews  c) Lead Reviews  d) Refinement and Baseline
      of TCD                                                   1 Day                24/02/2007   24/02/2007

Appendix D
Testing Dictionary
Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria; enables an end user to determine whether or not to
accept the system.
Affinity Diagram: A group process that takes large amounts of language data, such as a
list developed by brainstorming, and divides it into categories.
Alpha Testing: Testing of a software product or system conducted at the developer's site
by the end user.
Audit: An inspection/assessment activity that verifies compliance with plans, policies,
and procedures, and ensures that resources are conserved. Audit is a staff function; it
serves as the eyes and ears of management.
Automated Testing: That part of software testing that is assisted with software tool(s)
that does not require operator input, analysis, or evaluation.
Beta Testing: Testing conducted at one or more end user sites by the end user of a
delivered software product or system.

Black-box Testing: Functional testing based on requirements with no knowledge of the
internal program structure or data. Also known as closed-box testing. Black-box testing
indicates whether or not a program meets required specifications by spotting faults of
omission: places where the specification is not fulfilled.
Bottom-up Testing: An integration testing technique that tests the low-level components
first using test drivers for those components that have not yet been developed to call the
low-level components for test.
Boundary Value Analysis: A test data selection technique in which values are chosen to
lie along data extremes. Boundary values include maximum, minimum, just
inside/outside boundaries, typical values, and error values.
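For example, for a hypothetical input field that accepts ages from 18 to 65 inclusive, a boundary-value test set might look like this sketch:

    def is_valid_age(age):            # hypothetical validator under test
        return 18 <= age <= 65

    # Just outside, on, and just inside each boundary, plus a typical
    # value and an error value.
    cases = [(17, False), (18, True), (19, True),    # lower boundary
             (40, True),                             # typical value
             (64, True), (65, True), (66, False),    # upper boundary
             (-1, False)]                            # error value
    for value, expected in cases:
        assert is_valid_age(value) == expected, f"failed at {value}"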
Brainstorming: A group process for generating creative and diverse ideas.
Branch Coverage Testing: A test method satisfying coverage criteria that requires each
decision point at each possible branch to be executed at least once.
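A minimal illustration: the invented function below has one decision point, so branch coverage requires at least one test that takes the true branch and one that takes the false branch.

    def classify(amount):
        if amount >= 1000:            # the decision point
            return "large"
        return "small"

    assert classify(1000) == "large"  # exercises the true branch
    assert classify(999) == "small"   # exercises the false branch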
Bug: A design flaw that will result in symptoms exhibited by some object (the object
under test or some other object) when an object is subjected to an appropriate test.
Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a
high-yield set of test cases that logically relates causes to effects to produce test cases. It
has a beneficial side effect in pointing out incompleteness and ambiguities in
specifications.
Check sheet: A form used to record data as it is gathered.
Clear-box Testing: Another term for white-box testing. Structural testing is sometimes
referred to as clear-box testing; since white boxes are considered opaque and do not
really permit visibility into the code. This is also known as glass-box or open-box testing.
Client: The end user that pays for the product received, and receives the benefit from the
use of the product.
Control Chart: A statistical method for distinguishing between common and special
cause variation exhibited by processes.
Customer (end user): The individual or organization, internal or external to the
producing organization that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through
a program module.
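As a worked illustration (the function is invented): for a single-entry, single-exit module, cyclomatic complexity can be computed as the number of decision points plus one.

    def shipping_cost(weight, express):
        if weight > 20:        # decision 1
            cost = 50
        elif weight > 5:       # decision 2
            cost = 20
        else:
            cost = 10
        if express:            # decision 3
            cost *= 2
        return cost

    # 3 decision points -> cyclomatic complexity of 4, so at least four
    # test cases are needed to cover all linearly independent paths.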
Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data
definitions and reference patterns to determine constraints that can be placed on data
values at various points of executing the source program.
Debugging: The act of attempting to determine the cause of the symptoms of
malfunctions detected by testing or by frenzied user complaints.
Defect: NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer's viewpoint: a product requirement that has not been met, or a
product attribute possessed by a product or a function performed by a product that is not
in the statement of requirements that defines the product.
2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether
in the statement of requirements or not.
Defect Analysis: Using defects as data for continuous quality improvement. Defect
analysis generally seeks to classify defects into categories and identify possible causes in
order to direct process improvement efforts.
Defect Density: Ratio of the number of defects to program length (a relative number).
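Defect density is commonly normalized per KLOC (thousand lines of code); a quick worked example with invented figures:

    defects = 46
    kloc = 11.5                        # i.e., 11,500 lines of code
    density = defects / kloc           # = 4.0 defects per KLOC
    print(f"{density:.1f} defects/KLOC")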
Desk Checking: A form of manual static analysis usually performed by the originator.
Source code documentation, etc., is visually checked against requirements and standards.
Dynamic Analysis: The process of evaluating a program based on execution of that
program. Dynamic analysis approaches rely on executing a piece of software with
selected test data.
Dynamic Testing: Verification or validation performed by executing the system's
code.
Error: 1) A discrepancy between a computed, observed, or measured value or condition
and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.
Error-based Testing: Testing where information about programming style, error-prone
language constructs, and other programming knowledge is applied to select test data
capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the
extent to which specified properties are present.
Execution: The process of a computer carrying out an instruction or instructions of a
computer program.
Exhaustive Testing: Executing the program with all possible combinations of values for
program variables.
Failure: The inability of a system or system component to perform a required function
within specified limits. A failure may be produced when a fault is encountered.
Failure-directed Testing: Testing based on the knowledge of the types of errors made in
the past that are likely for the system under test.
Fault: A manifestation of an error in software. A fault, if encountered, may cause a
failure.
Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide
failure statistics and sensitivity analyses that indicate the possible effect of critical
failures.
Fault-based Testing: Testing that employs a test data selection strategy designed to
generate test data capable of demonstrating the absence of a set of pre-specified faults,
typically, frequently occurring faults.
Flowchart: A diagram showing the sequential steps of a process or of a workflow around
a product or service.
Formal Review: A technical review conducted with the end user, including the types of
reviews called for in the standards.
Function Points: A consistent measure of software size based on user requirements. Data
components include inputs, outputs, etc. Environment characteristics include data
communications, performance, reusability, operational ease, etc. Weight scale: 0 = not
present; 1 = minor influence, 5 = strong influence.
Functional Testing: Application of test data derived from the specified functional
requirements without regard to the final program structure. Also known as black-box
testing.


Heuristics Testing: Another term for failure-directed testing.


Histogram: A graphical description of individual measured values in a data set that is
organized according to the frequency or relative frequency of occurrence. A histogram
illustrates the shape of the distribution of individual values in a data set along with
information regarding the average and variation.
Hybrid Testing: A combination of top-down testing combined with bottom-up testing of
prioritized or available components.
Incremental Analysis: Incremental analysis occurs when (partial) analysis may be
performed on an incomplete product to allow early feedback on the development of that
product.
Infeasible Path: Program statement sequence that can never be executed.
Inputs: Products, services, or information needed from suppliers to make a process work.
Inspection: 1) A formal evaluation technique in which software requirements, design, or
code are examined in detail by a person or group other than the author to detect faults,
violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant
components: product (document) improvement and process improvement (document
production and inspection).
Instrument: To install or insert devices or instructions into hardware or software to
monitor the operation of a system or component.
Integration: The process of combining software components or hardware components, or
both, into an overall system.
Integration Testing: An orderly progression of testing in which software components or
hardware components, or both, are combined and tested until the entire system has been
integrated.
Interface: A shared boundary. An interface might be a hardware component to link two
devices, or it might be a portion of storage or registers accessed by two or more computer
programs.


Interface Analysis: Checks the interfaces between program elements for consistency and
adherence to predefined rules or axioms.
Intrusive Testing: Testing that collects timing and processing information during
program execution that may change the behavior of the software from its behavior in a
real environment. Usually involves additional code embedded in the software being
tested or additional processes running concurrently with software being tested on the
same platform.
IV&V: Independent Verification and Validation is the verification and validation of a
software product by an organization that is both technically and managerially separate
from the organization responsible for developing the product.
Life Cycle: The period that starts when a software product is conceived and ends when
the product is no longer available for use. The software life cycle typically includes a
requirements phase, design phase, implementation (code) phase, test phase, installation
and checkout phase, operation and maintenance phase, and a retirement phase.
Manual Testing: That part of software testing that requires operator input, analysis, or
evaluation.
Mean: A value derived by adding several quantities and dividing the sum by the number of
those quantities.
Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained
by measuring.
Metric: A measure of the extent or degree to which a product possesses and exhibits a
certain quality, property, or attribute.
Mutation Testing: A method to determine test set thoroughness by measuring the extent
to which a test set can discriminate the program from slight variants of the program.
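A tiny sketch of the idea, using an invented function: the mutant changes one operator, and only a test set that includes the boundary case can distinguish (kill) it.

    def is_adult(age):            # original program
        return age >= 18

    def is_adult_mutant(age):     # mutant: ">=" changed to ">"
        return age > 18

    weak_tests = [10, 30]         # mutant survives: both versions agree
    assert all(is_adult(a) == is_adult_mutant(a) for a in weak_tests)

    strong_tests = weak_tests + [18]   # boundary case kills the mutant
    assert any(is_adult(a) != is_adult_mutant(a) for a in strong_tests)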
Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing
that does not change the timing or processing characteristics of the software under test
from its behavior in a real environment. Usually involves additional hardware that
collects timing or processing information and processes that information on another
platform.


Operational Requirements: Qualitative and quantitative parameters that specify the
desired operational capabilities of a system and serve as a basis for determining the
operational effectiveness and suitability of a system prior to deployment.
Operational Testing: Testing performed by the end user on software in its normal
operating environment.
Outputs: Products, services, or information supplied to meet end user needs.
Path Analysis: Program analysis performed to identify all possible paths through a
program, to detect incomplete paths, or to discover portions of the program that are not
on any path.
Path Coverage Testing: A test method satisfying coverage criteria that each logical path
through the program is tested. Paths through the program often are grouped into a finite
set of classes; one path from each class is tested.
Peer Reviews: A methodical examination of software work products by the producer's
peers to identify defects and areas where changes are needed.
Policy: Managerial desires and intents concerning either process (intended objectives) or
products (desired attributes).
Problem: Any deviation from defined standards. Same as defect.
Procedure: The step-by-step method followed to ensure that standards are met.
Process: The work effort that produces a product. This includes efforts of people and
equipment guided by policies, standards, and procedures.
Process Improvement: To change a process to make the process produce a given product
faster, more economically, or of higher quality. Such changes may require the product to
be changed. The defect rate must be maintained or reduced.
Product: The output of a process; the work product. There are three useful classes of
products: manufactured products (standard and custom), administrative/ information
products (invoices, letters, etc.), and service products (physical, intellectual,
physiological, and psychological). Products are defined by a statement of requirements;
they are produced by one or more people working in a process.


Product Improvement: To change the statement of requirements that defines a product
to make the product more satisfying and attractive to the end user (more competitive).
Such changes may add to or delete from the list of attributes and/or the list of functions
defining a product. Such changes frequently require the process to be changed. NOTE:
This process could result in a totally new product.
Productivity: The ratio of the output of a process to the input, usually measured in the
same units. It is frequently useful to compare the value added to a product by a process to
the value of the input resources required (using fair market values for both input and
output).
Proof Checker: A program that checks formal proofs of program properties for logical
correctness.
Prototyping: Evaluating requirements or designs at the conceptualization phase, the
requirements analysis phase, or design phase by quickly building scaled-down
components of the intended system to obtain rapid feedback of analysis and design
decisions.
Qualification Testing: Formal testing, usually conducted by the developer for the end
user, to demonstrate that the software meets its specified requirements.
Quality: A product is a quality product if it is defect free. To the producer, a product is a
quality product if it meets or conforms to the statement of requirements that defines the
product. This statement is usually shortened to: quality means "meets requirements."
NOTE: Operationally, the word quality refers to products.
Quality Assurance (QA): The set of support activities (including facilitation, training,
measurement, and analysis) needed to provide adequate confidence that processes are
established and continuously improved in order to produce products that meet
specifications and are fit for use.
Quality Control (QC): The process by which product quality is compared with applicable
standards, and the action taken when nonconformance is detected. Its focus is defect
detection and removal. This is a line function; that is, the performance of these tasks is
the responsibility of the people working within the process.
Quality Improvement: To change a production process so that the rate at which
defective products (defects) are produced is reduced. Some process changes may require
the product to be changed.
Random Testing: An essentially black-box testing approach in which a program is tested
by randomly choosing a subset of all possible input values. The distribution may be
arbitrary or may attempt to accurately reflect the distribution of inputs in the application
environment.
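As a hedged illustration in Python, the sketch below tests the built-in abs() by randomly choosing a subset of all possible integer inputs and checking a defining property; the fixed seed and uniform distribution are arbitrary choices made for repeatability, not requirements of the technique.

import random

random.seed(42)  # fixed seed so the random sample is reproducible
for _ in range(1000):
    x = random.randint(-10**6, 10**6)   # arbitrary input distribution
    # Defining property of absolute value:
    assert abs(x) >= 0 and abs(x) in (x, -x)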
Regression Testing: Selective retesting to detect faults introduced during modification of
a system or system component, to verify that modifications have not caused unintended
adverse effects, or to verify that a modified system or system component still meets its
specified requirements.
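A minimal regression sketch in Python (the function discount and its case table are hypothetical): a table of previously verified input/output pairs is re-run after every modification so that unintended side effects show up immediately.

def discount(total):
    # Component under test; imagine it was just modified.
    return total * 0.9 if total >= 100 else total

# Previously verified (input, expected output) pairs:
REGRESSION_CASES = [
    (50, 50),      # below threshold: no discount
    (100, 90.0),   # at threshold: 10% off
    (200, 180.0),  # above threshold: 10% off
]

for given, expected in REGRESSION_CASES:
    actual = discount(given)
    assert abs(actual - expected) < 1e-9, (
        f"regression: discount({given}) = {actual}, expected {expected}")
print("all regression cases still pass")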
Reliability: The probability of failure-free operation for a specified period.
Requirement: A formal statement of: 1) an attribute to be possessed by the product or a
function to be performed by the product; 2) the performance standard for the attribute or
function; or 3) the measuring process to be used in verifying that the standard has been
met.
Review: A way to use the diversity and power of a group of people to point out needed
improvements in a product or confirm those parts of a product in which improvement is
either not desired or not needed. A review is a general work product evaluation technique
that includes desk checking, walkthroughs, technical reviews, peer reviews, formal
reviews, and inspections.
Run Chart: A graph of data points in chronological order used to illustrate trends or
cycles of the characteristic being measured for the purpose of suggesting an assignable
cause rather than random variation.
Scatter Plot (correlation diagram): A graph designed to show whether there is a
relationship between two changing factors.
Semantics: 1) The relationship of characters or a group of characters to their meanings,
independent of the manner of their interpretation and use. 2) The relationships between
symbols and their meanings.
Software Characteristic: An inherent, possibly accidental, trait, quality, or property of
software (for example, functionality, performance, attributes, design constraints, number
of states, lines of branches).
Software Feature: A software characteristic specified or implied by requirements
documentation (for example, functionality, performance attributes, or design constraints).
Software Tool: A computer program used to help develop, test, analyze, or maintain
another computer program or its documentation; e.g., automated design tools, compilers,
test tools, and maintenance tools.
Standards: The measure used to evaluate products and identify nonconformance. The
basis upon which adherence to policies is measured.
Standardize: Procedures are implemented to ensure that the output of a process is
maintained at a desired level.
Statement Coverage Testing: A test method satisfying coverage criteria that requires
each statement be executed at least once.
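For example, in this Python sketch (safe_div is illustrative), two test cases are enough to execute every statement of the function at least once:

def safe_div(a, b):
    if b == 0:
        return None    # statement exercised by the first case
    return a / b       # statement exercised by the second case

assert safe_div(1, 0) is None    # drives execution through the guard
assert safe_div(6, 3) == 2.0     # drives execution through the division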
Statement of Requirements: The exhaustive list of requirements that define a product.
NOTE: The statement of requirements should document requirements proposed and
rejected (including the reason for the rejection) during the requirements determination
process.
Static Testing: Verification performed without executing the system's code. Also called
static analysis.
Statistical Process Control: The use of statistical techniques and tools to measure an
ongoing process for change or stability.
Structural Coverage: This requires that each pair of module invocations be executed at
least once.
Structural Testing: A testing method where the test data is derived solely from the
program structure.
Stub: A software component that usually minimally simulates the actions of called
components that have not yet been integrated during top-down testing.
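A hedged Python sketch of a stub in use (the names TaxServiceStub and checkout_total are illustrative): the real tax component is not integrated yet, so a stub returns a canned value that is just enough to let the higher-level logic run.

class TaxServiceStub:
    # Minimally simulates the not-yet-integrated tax component.
    def rate_for(self, region):
        return 0.05  # canned response, not real behavior

def checkout_total(subtotal, tax_service, region="US"):
    return subtotal * (1 + tax_service.rate_for(region))

# Top-down testing of the caller can proceed before the real service exists:
total = checkout_total(100.0, TaxServiceStub())
assert abs(total - 105.0) < 1e-9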
Supplier: An individual or organization that supplies inputs needed to generate a product,
service, or information to an end user.
Syntax: 1) The relationship among characters or groups of characters independent of
their meanings or the manner of their interpretation and use; 2) the structure of
expressions in a language; 3) the rules governing the structure of the language.
System: A collection of people, machines, and methods organized to accomplish a set of
specified functions.
System Simulation: Another name for prototyping.
System Testing: The process of testing an integrated hardware and software system to
verify that the system meets its specified requirements.
Technical Review: A review that focuses on the content of the technical material being
reviewed.
Test Bed: 1) An environment that contains the integral hardware, instrumentation,
simulators, software tools, and other support elements needed to conduct a test of a
logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Test Case: The definition of test case differs from company to company, engineer to
engineer, and even project to project. A test case usually includes an identified set of
information about observable states, conditions, events, and data, including inputs and
expected outputs.
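Whatever the local convention, a test case is often captured as structured data. A hedged Python sketch (all field names and values are illustrative):

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str        # identification
    precondition: str   # observable state/conditions before the test
    inputs: tuple       # test inputs
    expected: str       # expected output/behavior

tc = TestCase(
    case_id="TC-LOGIN-001",
    precondition="user account exists and is active",
    inputs=("alice", "correct-password"),
    expected="login succeeds and the dashboard is shown",
)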
Test Development: The development of anything required to conduct testing. This may
include test requirements (objectives), strategies, processes, plans, software, procedures,
cases, documentation, etc.
Test Executive: Another term for test harness.
Test Harness: A software tool that enables the testing of software components; it links
test capabilities to perform specific tests, accepts program inputs, simulates missing
components, compares actual outputs with expected outputs to determine correctness, and
reports discrepancies.
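A minimal harness sketch in Python (run_harness and its case format are assumptions made for this illustration): it feeds inputs to the component under test, compares actual with expected outputs, and reports discrepancies.

def run_harness(component, cases):
    failures = []
    for args, expected in cases:
        actual = component(*args)          # accept program inputs
        if actual != expected:             # compare actual vs. expected
            failures.append((args, expected, actual))
    for args, expected, actual in failures:  # report discrepancies
        print(f"FAIL {component.__name__}{args}: expected {expected}, got {actual}")
    print(f"{len(cases) - len(failures)}/{len(cases)} passed")

run_harness(max, [((2, 3), 3), ((5, 1), 5), ((4, 4), 4)])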
Test Objective: An identified set of software features to be measured under specified
conditions by comparing actual behavior with the required behavior described in the
software documentation.
Test Plan: A formal or informal plan to be followed to assure the controlled testing of the
product under test.
Test Procedure: The formal or informal procedure that will be followed to execute a test.
This is usually a written document that allows others to execute the test with a minimum
of training.
Testing: Any activity aimed at evaluating an attribute or capability of a program or
system to determine that it meets its required results. The process of exercising or
evaluating a system or system component by manual or automated means to verify that it
satisfies specified requirements or to identify differences between expected and actual
results.
Top-down Testing: An integration testing technique that tests the high-level components
first, using stubs for lower-level called components that have not yet been integrated and
that simulate the required actions of those components.
Unit Testing: The testing done to show whether a unit (the smallest piece of software that
can be independently compiled or assembled, loaded, and tested) satisfies its functional
specification, or whether its implemented structure matches the intended design structure.
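As a sketch, the unit below is a single Python function checked against its functional specification using the standard unittest module (the function leap_year is illustrative):

import unittest

def leap_year(y):
    # Functional specification: divisible by 4, except centuries
    # not divisible by 400.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()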
User: The end user who actually uses the product received.
V-Diagram (model): A diagram that visualizes the order of testing activities and their
corresponding phases of development.
Validation: The process of evaluating software to determine compliance with specified
requirements.
Verification: The process of evaluating the products of a given software development
activity to determine correctness and consistency with respect to the products and
standards provided as input to that activity.
Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as
when walking through code, line by line, with an imagined set of inputs. The term has
been extended to the review of material that is not procedural, such as data descriptions,
reference manuals, specifications, etc.
White-box Testing: Testing approaches that examine the program structure and derive
test data from the program logic. Also known as clear-box, glass-box, or open-box
testing. White-box testing determines whether program-code structure and logic are
faulty. The test is accurate only if the tester knows what the program is supposed to do;
he or she can then see whether the program diverges from its intended goal. White-box
testing does not account for errors caused by omission, and all visible code must also be
readable.