
Software Testing

Chapter 1

Basics of Software Testing


1.1 Some of the definitions of Software testing
1.2 Terminologies
Errors
Fault
An accidental condition that causes a functional unit to fail to perform its required
function.
Defect or Bug
1.3 Software quality
1.4 Quality Assurance
Quality Control
Comparison of Quality assurance vs Quality control
1.5 Quality Control (Seven QC tools)
Pareto chart
Stratification
1.6 The Scope of the Test Effort
1.7 The Challenge of Effective and Efficient Testing
1.8 The Limits of Testing
1.9 Prioritizing Tests
1.10 Cost of quality
1.11 Software quality factors
1.12 The Fundamental Test Process (11 steps)
1.13 Review process (Inspection, Walkthrough, Peer, Test coverage)
Chapter 2
Testing in Software development life cycle
2.1 Software Development life cycle
2.2 The V Concept of testing
2.3 Verification and Validation
2.4 Testing levels and stages
Unit testing
Integration testing
System testing
User Acceptance testing
Regression testing
2.5 Static testing
2.6 Dynamic testing
Chapter 3
Test Management
Chapter 4
Test planning
Chapter 5
Test case design techniques
Structure
Example

Chapter 6
Test case development
Chapter 7
Test Execution strategies
Chapter 8
Test Reporting
DEFECTS DATA SUMMARY
Chapter 9
Regression Testing
Chapter 10
Types of Software Testing
Chapter 11
Testing Metrics
Chapter 12
Quality standards & methodologies
Levels of the CMM
Level 1 - Initial
Level 2 - Repeatable
Level 3 - Defined
Level 4 - Managed
Level 5 - Optimizing
Process areas
The name
Methodology
DMAIC
DMADV
Chapter 13
Testing Automation

Chapter 1
Basics of Software Testing
Objective of this chapter
What is testing?
What is a defect?
Errors, Faults, and Failures
Quality Assurance
Quality Control (Seven QC tools)
The Scope of the Test Effort
The Challenge of Effective and Efficient Testing
The Limits of Testing
Prioritizing Tests
Cost of quality
Software quality factors
The Fundamental Test Process
Review process (Inspection, Walkthrough, Peer, Test coverage)

Introduction
The objective of this chapter is to familiarize the reader with the basics of software testing.

1.1 Some of the definitions of Software testing


Software testing is the process used to help identify the correctness, completeness,
security, and quality of developed computer software.

Software testing is a process of technical investigation, performed on behalf of
stakeholders, that is intended to reveal quality-related information about the product
with respect to the context in which it is intended to operate.

Software testing furnishes a criticism or comparison that compares the state and
behavior of the product against a specification.

Software testing can prove the presence of errors, but never their absence.

Software testing is a constructive destruction process.

Software testing involves operating a system or application under controlled
conditions and evaluating the results.
Objective of Testing

Finding defects
Gaining confidence about the level of quality and providing information
Preventing defects

History of Software Testing


D. Gelperin and W.C. Hetzel classified the phases and goals of software testing in 1988 as
follows:
Until 1956 was the debugging-oriented period, when testing was often associated with
debugging: there was no clear difference between testing and debugging. From 1957 to
1978 was the demonstration-oriented period, when debugging and testing were now
distinguished; in this period the goal was to show that software satisfies the requirements.
The time between 1979 and 1982 is known as the destruction-oriented period, when the
goal was to find errors. 1983 to 1987 is classified as the evaluation-oriented period: the
intention here is that during the software lifecycle the product is evaluated and its quality
measured. From 1988 on it was seen as the prevention-oriented period, where tests were
to demonstrate that software satisfies its specification, to detect faults, and to prevent
faults.

Some recent major computer system failures caused by software bugs

In August of 2006 a U.S. government student loan service erroneously made
public the personal data of as many as 21,000 borrowers on its web site, due to a
software error. The bug was fixed and the government department subsequently
offered to arrange for free credit monitoring services for those affected.

A September 2006 news report indicated problems with software utilized in a
state government's primary election, resulting in periodic unexpected rebooting of
voter check-in machines, which were separate from the electronic voting
machines, and resulted in confusion and delays at voting sites. The problem was
reportedly due to insufficient testing.

Reasons for bugs


Miscommunication or no communication
Software complexity
Programming errors
Changing requirements
Time pressures
Egos
Poorly documented code
Software development tools

1.2 Terminologies
Errors
It refers to an incorrect action or calculation performed by software

Fault
An accidental condition that causes a functional unit to fail to perform its required
function.

Failures
It refers to the inability of the system to produce the intended result

Defect or Bug
Non-conformance of software to its requirements is commonly called a defect

1.3 Software quality


Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development project
might include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization's management/accountants/
testers/salespeople, future software maintenance engineers, stockholders, magazine
columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the
accounting department might define quality in terms of profits while an end-user might
define quality as user-friendly and bug-free.

1.4 Quality Assurance


Quality assurance is a planned and systematic set of activities necessary to provide
adequate confidence that products and services will conform to specified requirements
and meet user needs.
Quality assurance is a staff function, responsible for implementing the quality policy
defined through the development and continuous improvement of software development
processes.
Quality assurance is an activity that establishes and evaluates the processes that produce
products. If there is no need for process, there is no role for quality assurance. For
example, quality assurance activities in an IT environment would determine the need for,
acquire, or help install:

System development methodologies
Estimation processes
System maintenance processes
Requirements definition processes
Testing processes and standards

Once installed, quality assurance would measure these processes to identify weaknesses
and then correct those weaknesses to continually improve the process.
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'

Quality Control
Quality control is the process by which the product quality is compared with applicable
standards, and the action taken when nonconformance is detected. Quality control is a
line function, and the work is done within a process to ensure that the work product
conforms to standards and requirements.

Quality control activities focus on identifying defects in the actual products produced.
These activities begin at the start of the software development process with reviews of
requirements, and continue until all application testing is complete
It is possible to have quality control without quality assurance. For example, a test team
may be in place to conduct system testing at the end of the development, regardless of
whether that system is produced using a software development methodology.
Both quality assurance and quality control are separate and distinct from internal
audit function
Internal auditing is an independent appraisal activity within an organization for the
review of operations, and is a service to management. It is a managerial control that
functions by measuring and evaluating the effectiveness of other controls

Comparison of Quality assurance vs Quality control


Quality Assurance                            Quality Control
It helps establish processes                 It helps execute the process
It is a management responsibility            It is the producer's responsibility
It identifies weaknesses in the process      It identifies weaknesses in the product

1.5 Quality Control (Seven QC tools)


Production environments that utilize modern quality control methods are dependent upon
statistical literacy. The tools used therein are called the seven quality control tools. These
include:

1. Check sheet
2. Pareto chart
3. Stratification
4. Cause and effect diagram
5. Histogram
6. Scatter diagram
7. Control chart

Check sheet

Description
A check sheet is a structured, prepared form for collecting and analyzing data. This is a
generic tool that can be adapted for a wide variety of purposes.
When to Use

When data can be observed and collected repeatedly by the same person or at the
same location.

When collecting data on the frequency or patterns of events, problems, defects,


defect location, defect causes, etc.

When collecting data from a production process.

Procedure
1. Decide what event or problem will be observed. Develop operational definitions.
2. Decide when data will be collected and for how long.
3. Design the form. Set it up so that data can be recorded simply by making check
   marks or Xs or similar symbols and so that data do not have to be recopied for
   analysis.
4. Label all spaces on the form.
5. Test the check sheet for a short trial period to be sure it collects the appropriate
   data and is easy to use.
6. Each time the targeted event or problem occurs, record data on the check sheet.
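The tallying in step 6 can be sketched in code: each tick mark on the form corresponds to incrementing a count for one category. The interruption categories below are hypothetical.

```python
from collections import Counter

# Each observation records which event occurred; making a tick mark on a
# check sheet corresponds to incrementing the count for that category.
# The telephone-interruption categories here are hypothetical.
observations = [
    "wrong number", "info request", "wrong number",
    "boss", "info request", "wrong number",
]

check_sheet = Counter(observations)
for event, count in check_sheet.items():
    print(event, count)
```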

Example
The figure below shows a check sheet used to collect data on telephone interruptions. The
tick marks were added as data was collected over several weeks.

Pareto chart

Shows on a bar graph which factors are more significant


Also called: Pareto diagram, Pareto analysis
Variations: weighted Pareto chart, comparative Pareto charts

Description
A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or
money), and are arranged with longest bars on the left and the shortest to the right. In this
way the chart visually depicts which situations are more significant.
When to Use

When analyzing data about the frequency of problems or causes in a process.

When there are many problems or causes and you want to focus on the most
significant.

When analyzing broad causes by looking at their specific components.

When communicating with others about your data

Procedure
1. Decide what categories you will use to group items.
2. Decide what measurement is appropriate. Common measurements are frequency,
   quantity, cost and time.
3. Decide what period of time the chart will cover: One work cycle? One full day? A
   week?
4. Collect the data, recording the category each time. (Or assemble data that already
   exist.)
5. Subtotal the measurements for each category.
6. Determine the appropriate scale for the measurements you have collected. The
   maximum value will be the largest subtotal from step 5. (If you will do optional
   steps 8 and 9 below, the maximum value will be the sum of all subtotals from step
   5.) Mark the scale on the left side of the chart.
7. Construct and label bars for each category. Place the tallest at the far left, then the
   next tallest to its right and so on. If there are many categories with small
   measurements, they can be grouped as "other."
   Steps 8 and 9 are optional but are useful for analysis and communication.
8. Calculate the percentage for each category: the subtotal for that category divided
   by the total for all categories. Draw a right vertical axis and label it with
   percentages. Be sure the two scales match: For example, the left measurement that
   corresponds to one-half should be exactly opposite 50% on the right scale.
9. Calculate and draw cumulative sums: Add the subtotals for the first and second
   categories, and place a dot above the second bar indicating that sum. To that sum
   add the subtotal for the third category, and place a dot above the third bar for that
   new sum. Continue the process for all the bars. Connect the dots, starting at the
   top of the first bar. The last dot should reach 100 percent on the right scale.

Examples
Figure 1 shows how many customer complaints were received in each of five categories.
Figure 2 takes the largest category, documents, from Figure 1, breaks it down into six
categories of document-related complaints, and shows cumulative values.
If all complaints cause equal distress to the customer, working on eliminating
document-related complaints would have the most impact, and of those, working on
quality certificates should be most fruitful.

Figure 1

Figure 2
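The subtotal, sort, and cumulate steps of the procedure can be sketched as follows; the complaint subtotals below are hypothetical, not the figures' data.

```python
# Hypothetical complaint subtotals per category (step 5 of the procedure).
subtotals = {"documents": 90, "delivery": 40, "packaging": 20,
             "billing": 30, "product": 20}

# Step 7: arrange categories with the largest subtotal first.
ordered = sorted(subtotals.items(), key=lambda kv: kv[1], reverse=True)

# Steps 8-9: percentages and cumulative sums for the right-hand scale.
total = sum(subtotals.values())
cumulative, running = [], 0
for category, count in ordered:
    running += count
    cumulative.append((category, count, round(100 * running / total)))

for row in cumulative:
    print(row)
# The last cumulative value always reaches 100 percent.
```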

Stratification
Sometimes replaced in lists of the seven tools by: flowchart or run chart

Description
Stratification is a technique used in combination with other data analysis tools. When
data from a variety of sources or categories have been lumped together, the meaning of
the data can be impossible to see. This technique separates the data so that patterns can be
seen.
When to Use

Before collecting data.

When data come from several sources or conditions, such as shifts, days of the
week, suppliers or population groups.

When data analysis may require separating different sources or conditions.

Procedure
1. Before collecting data, consider which information about the sources of the data
   might have an effect on the results. Set up the data collection so that you collect
   that information as well.
2. When plotting or graphing the collected data on a scatter diagram, control chart,
   histogram or other analysis tool, use different marks or colors to distinguish data
   from various sources. Data that are distinguished in this way are said to be
   stratified.
3. Analyze the subsets of stratified data separately. For example, on a scatter
   diagram where data are stratified into data from source 1 and data from source 2,
   draw quadrants, count points and determine the critical value only for the data
   from source 1, then only for the data from source 2.

Example
The ZZ-400 manufacturing team drew a scatter diagram to test whether product purity
and iron contamination were related, but the plot did not demonstrate a relationship. Then
a team member realized that the data came from three different reactors. The team
member redrew the diagram, using a different symbol for each reactor's data:
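A minimal sketch of the stratification step: lumped measurements are split by their source so each subset can be plotted and analyzed separately. The reactor readings below are invented, not the ZZ-400 team's data.

```python
from collections import defaultdict

# Hypothetical (purity %, iron ppm, source) readings lumped together.
data = [
    (99.2, 12, "reactor 1"), (98.7, 18, "reactor 2"),
    (99.5, 9,  "reactor 1"), (98.1, 22, "reactor 3"),
    (98.9, 15, "reactor 2"), (98.3, 20, "reactor 3"),
]

# Stratify: separate the data by source so each subset can be plotted
# with its own symbol and analyzed on its own.
strata = defaultdict(list)
for purity, iron, reactor in data:
    strata[reactor].append((purity, iron))

for reactor in sorted(strata):
    print(reactor, strata[reactor])
```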

Cause and Effect diagram


Fishbone Diagram
Also Called: Cause-and-Effect Diagram, Ishikawa Diagram
Variations: cause enumeration diagram, process fishbone, time-delay fishbone, CEDAC
(cause-and-effect diagram with the addition of cards), desired-result fishbone, reverse
fishbone diagram
Description

The fishbone diagram identifies many possible causes for an effect or problem. It can be
used to structure a brainstorming session. It immediately sorts ideas into useful
categories.
When to Use

When identifying possible causes for a problem.

Especially when a team's thinking tends to fall into ruts.

Procedure
Materials needed: flipchart or whiteboard, marking pens.
1. Agree on a problem statement (effect). Write it at the center right of the flipchart
   or whiteboard. Draw a box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult, use
   generic headings:
   o Methods
   o Machines (equipment)
   o People (manpower)
   o Materials
   o Measurement
   o Environment
3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: "Why does this happen?"
   As each idea is given, the facilitator writes it as a branch from the appropriate
   category. Causes can be written in several places if they relate to several
   categories.
5. Again ask "Why does this happen?" about each cause. Write sub-causes branching
   off the causes. Continue to ask "Why?" and generate deeper levels of causes.
   Layers of branches indicate causal relationships.
6. When the group runs out of ideas, focus attention to places on the chart where
   ideas are few.
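A fishbone diagram is essentially a tree, which a short sketch can model with nested dictionaries; the categories, causes, and sub-causes below are hypothetical.

```python
# The effect sits at the head of the fish; top-level keys are category
# branches, and nested dicts are causes and sub-causes ("Why?" layers).
# All names here are hypothetical.
fishbone = {
    "Methods": {"infrequent flushing": {"no written schedule": {}}},
    "Machines": {"old piping": {"corrosion": {}}},
    "Materials": {"raw material impurities": {}},
}

def count_causes(branches):
    """Count every cause and sub-cause below the given branches."""
    return sum(1 + count_causes(sub) for sub in branches.values())

# Total causes and sub-causes across the three category branches.
total = sum(count_causes(causes) for causes in fishbone.values())
print(total)
```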

Example
This fishbone diagram was drawn by a manufacturing team to try to understand the
source of periodic iron contamination. The team used the six generic headings to prompt
ideas. Layers of branches show thorough thinking about the causes of the problem.

Histogram
Description
A frequency distribution shows how often each different value in a set of data occurs. A
histogram is the most commonly used graph to show frequency distributions. It looks
very much like a bar chart, but there are important differences between them.
When to Use

When the data are numerical.

When you want to see the shape of the data's distribution, especially when
determining whether the output of a process is distributed approximately
normally.

When analyzing whether a process can meet the customer's requirements.

When analyzing what the output from a supplier's process looks like.

When seeing whether a process change has occurred from one time period to
another.

When determining whether the outputs of two or more processes are different.

When you wish to communicate the distribution of data quickly and easily to
others.

Construction
1. Collect at least 50 consecutive data points from a process.
2. Use the histogram worksheet to set up the histogram. It will help you determine
   the number of bars, the range of numbers that go into each bar and the labels for
   the bar edges. After calculating W in step 2 of the worksheet, use your judgment
   to adjust it to a convenient number. For example, you might decide to round 0.9 to
   an even 1.0. The value for W must not have more decimal places than the numbers
   you will be graphing.
3. Draw x- and y-axes on graph paper. Mark and label the y-axis for counting data
   values. Mark and label the x-axis with the L values from the worksheet. The
   spaces between these numbers will be the bars of the histogram. Do not allow for
   spaces between bars.
4. For each data point, mark off one count above the appropriate bar with an X or by
   shading that portion of the bar.
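The worksheet steps can be sketched as follows. The square-root rule for choosing the number of bars is one common convention (not necessarily the worksheet's exact rule), and the data points are hypothetical; a real histogram would use at least 50.

```python
import math

# Hypothetical measurements; a real histogram should use at least 50
# consecutive data points from the process.
data = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 9.7, 10.3, 10.1, 10.0]

# One common rule: number of bars is about the square root of the
# number of data points.
bars = round(math.sqrt(len(data)))

# Bar width W = range / number of bars; in practice you would then
# adjust W by judgment to a convenient round number.
low, high = min(data), max(data)
w = (high - low) / bars

# Count how many points fall into each bar.
counts = [0] * bars
for x in data:
    i = min(int((x - low) / w), bars - 1)
    counts[i] += 1

print(bars, counts)
```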

Analysis

Before drawing any conclusions from your histogram, satisfy yourself that the
process was operating normally during the time period being studied. If any
unusual events affected the process during the time period of the histogram, your
analysis of the histogram shape probably cannot be generalized to all time
periods.

Analyze the meaning of your histogram's shape.

Normal. A common pattern is the bell-shaped curve known as the "normal distribution."
In a normal distribution, points are as likely to occur on one side of the average as on the
other. Be aware, however, that other distributions look similar to the normal distribution.
Statistical calculations must be used to prove a normal distribution.
Don't let the name "normal" confuse you. The outputs of many processes (perhaps even
a majority of them) do not form normal distributions, but that does not mean anything is
wrong with those processes. For example, many processes have a natural limit on one
side and will produce skewed distributions. This is normal (meaning typical) for
those processes, even if the distribution isn't called "normal"!

Skewed. The skewed distribution is asymmetrical because a natural limit prevents
outcomes on one side. The distribution's peak is off center toward the limit and a tail
stretches away from it. For example, a distribution of analyses of a very pure product
would be skewed, because the product cannot be more than 100 percent pure. Other
examples of natural limits are holes that cannot be smaller than the diameter of the drill
bit or call-handling times that cannot be less than zero. These distributions are called
right- or left-skewed according to the direction of the tail.

Double-peaked or bimodal. The bimodal distribution looks like the back of a
two-humped camel. The outcomes of two processes with different distributions are
combined in one set of data. For example, a distribution of production data from a
two-shift operation might be bimodal, if each shift produces a different distribution of
results. Stratification often reveals this problem.

Plateau. The plateau might be called a multimodal distribution. Several processes with
normal distributions are combined. Because there are many peaks close together, the top
of the distribution resembles a plateau.

Edge peak. The edge peak distribution looks like the normal distribution except that it
has a large peak at one tail. Usually this is caused by faulty construction of the histogram,
with data lumped together into a group labeled "greater than."

Truncated or heart-cut. The truncated distribution looks like a normal distribution with
the tails cut off. The supplier might be producing a normal distribution of material and
then relying on inspection to separate what is within specification limits from what is out
of spec. The resulting shipments to the customer from inside the specifications are the
heart cut.

Dog food. The dog food distribution is missing something: results near the average. If a
customer receives this kind of distribution, someone else is receiving a heart cut, and the
customer is left with the "dog food," the odds and ends left over after the master's meal.
Even though what the customer receives is within specifications, the product falls into
two clusters: one near the upper specification limit and one near the lower specification
limit. This variation often causes problems in the customer's process.

Scatter diagram
Also called: scatter plot, XY graph
Description
The scatter diagram graphs pairs of numerical data, with one variable on each axis, to
look for a relationship between them. If the variables are correlated, the points will fall
along a line or curve. The better the correlation, the tighter the points will hug the line.
When to Use

When you have paired numerical data.

When your dependent variable may have multiple values for each value of your
independent variable.

When trying to determine whether the two variables are related, such as when
trying to identify potential root causes of problems.

After brainstorming causes and effects using a fishbone diagram, to determine
objectively whether a particular cause and effect are related.

When determining whether two effects that appear to be related both occur with
the same cause.

When testing for autocorrelation before constructing a control chart.

Procedure
1. Collect pairs of data where a relationship is suspected.
2. Draw a graph with the independent variable on the horizontal axis and the
   dependent variable on the vertical axis. For each pair of data, put a dot or a
   symbol where the x-axis value intersects the y-axis value. (If two dots fall
   together, put them side by side, touching, so that you can see both.)
3. Look at the pattern of points to see if a relationship is obvious. If the data clearly
   form a line or a curve, you may stop. The variables are correlated. You may wish
   to use regression or correlation analysis now. Otherwise, complete steps 4 through
   7.
4. Divide points on the graph into four quadrants. If there are X points on the graph:
   Count X/2 points from top to bottom and draw a horizontal line.
   Count X/2 points from left to right and draw a vertical line.
   If the number of points is odd, draw the line through the middle point.
5. Count the points in each quadrant. Do not count points on a line.
6. Add the diagonally opposite quadrants. Find the smaller sum and the total of
   points in all quadrants:
   A = points in upper left + points in lower right
   B = points in upper right + points in lower left
   Q = the smaller of A and B
   N = A + B
7. Look up the limit for N on the trend test table.
   If Q is less than the limit, the two variables are related.
   If Q is greater than or equal to the limit, the pattern could have occurred from
   random chance.

Example
The ZZ-400 manufacturing team suspects a relationship between product purity (percent
purity) and the amount of iron (measured in parts per million or ppm). Purity and iron are
plotted against each other as a scatter diagram, as shown in the figure below.

There are 24 data points. Median lines are drawn so that 12 points fall on each side for
both percent purity and ppm iron.
To test for a relationship, they calculate:
A = points in upper left + points in lower right = 9 + 9 = 18
B = points in upper right + points in lower left = 3 + 3 = 6
Q = the smaller of A and B = the smaller of 18 and 6 = 6
N = A + B = 18 + 6 = 24
Then they look up the limit for N on the trend test table. For N = 24, the limit is 6.
Q is equal to the limit. Therefore, the pattern could have occurred from random chance,
and no relationship is demonstrated.
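The quadrant counts from the ZZ-400 example plug into the trend test like this:

```python
# Quadrant counts from the ZZ-400 scatter diagram above.
upper_left, lower_right = 9, 9
upper_right, lower_left = 3, 3

a = upper_left + lower_right   # A = 18
b = upper_right + lower_left   # B = 6
q = min(a, b)                  # Q = 6
n = a + b                      # N = 24

limit = 6                      # trend test table limit for N = 24

# Q must be strictly less than the limit to show a relationship;
# here Q equals the limit, so no relationship is demonstrated.
related = q < limit
print(related)
```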

Considerations
Here are some examples of situations in which you might use a scatter diagram:

Variable A is the temperature of a reaction after 15 minutes. Variable B measures
the color of the product. You suspect higher temperature makes the product
darker. Plot temperature and color on a scatter diagram.

Variable A is the number of employees trained on new software, and variable B is
the number of calls to the computer help line. You suspect that more training
reduces the number of calls. Plot number of people trained versus number of calls.

To test for autocorrelation of a measurement being monitored on a control chart,
plot this pair of variables: Variable A is the measurement at a given time. Variable
B is the same measurement, but at the previous time. If the scatter diagram shows
correlation, do another diagram where variable B is the measurement two times
previously. Keep increasing the separation between the two times until the scatter
diagram shows no correlation.

Even if the scatter diagram shows a relationship, do not assume that one variable
caused the other. Both may be influenced by a third variable.

When the data are plotted, the more the diagram resembles a straight line, the
stronger the relationship.

If a line is not clear, statistics (N and Q) determine whether there is reasonable
certainty that a relationship exists. If the statistics say that no relationship exists,
the pattern could have occurred by random chance.

If the scatter diagram shows no relationship between the variables, consider
whether the data might be stratified.

If the diagram shows no relationship, consider whether the independent (x-axis)
variable has been varied widely. Sometimes a relationship is not apparent because
the data don't cover a wide enough range.

Think creatively about how to use scatter diagrams to discover a root cause.

Drawing a scatter diagram is the first step in looking for a relationship between
variables.

Control chart
Also called: statistical process control
Variations:
Different types of control charts can be used, depending upon the type of data. The two
broadest groupings are for variable data and attribute data.

Variable data are measured on a continuous scale. For example: time, weight,
distance or temperature can be measured in fractions or decimals. The possibility
of measuring to greater precision defines variable data.

Attribute data are counted and cannot have fractions or decimals. Attribute data
arise when you are determining only the presence or absence of something:
success or failure, accept or reject, correct or not correct. For example, a report
can have four errors or five errors, but it cannot have four and a half errors.

Variables charts
   o X-bar and R chart (also called averages and range chart)
   o X-bar and s chart
   o chart of individuals (also called X chart, X-R chart, IX-MR chart, XmR
     chart, moving range chart)
   o moving average-moving range chart (also called MA-MR chart)
   o target charts (also called difference charts, deviation charts and nominal
     charts)
   o CUSUM (also called cumulative sum chart)
   o EWMA (also called exponentially weighted moving average chart)
   o multivariate chart (also called Hotelling T2)

Attributes charts
   o p chart (also called proportion chart)
   o np chart
   o c chart (also called count chart)
   o u chart

Charts for either kind of data
   o short run charts (also called stabilized charts or Z charts)
   o group charts (also called multiple characteristic charts)

Description
The control chart is a graph used to study how a process changes over time. Data are
plotted in time order. A control chart always has a central line for the average, an upper
line for the upper control limit and a lower line for the lower control limit. These lines are
determined from historical data. By comparing current data to these lines, you can draw
conclusions about whether the process variation is consistent (in control) or is
unpredictable (out of control, affected by special causes of variation).
Control charts for variable data are used in pairs. The top chart monitors the average, or
the centering of the distribution of data from the process. The bottom chart monitors the
range, or the width of the distribution. If your data were shots in target practice, the
average is where the shots are clustering, and the range is how tightly they are clustered.
Control charts for attribute data are used singly.
When to Use

When controlling ongoing processes by finding and correcting problems as they
occur.

When predicting the expected range of outcomes from a process.

When determining whether a process is stable (in statistical control).

When analyzing patterns of process variation from special causes (non-routine
events) or common causes (built into the process).

When determining whether your quality improvement project should aim to
prevent specific problems or to make fundamental changes to the process.

Basic Procedure
1. Choose the appropriate control chart for your data.
2. Determine the appropriate time period for collecting and plotting data.
3. Collect data, construct your chart and analyze the data.
4. Look for out-of-control signals on the control chart. When one is identified,
   mark it on the chart and investigate the cause. Document how you investigated,
   what you learned, the cause and how it was corrected.
   Out-of-control signals:
   o A single point outside the control limits. In Figure 1, point sixteen is above
     the UCL (upper control limit).
   o Two out of three successive points are on the same side of the centerline
     and farther than 2σ from it. In Figure 1, point 4 sends that signal.
   o Four out of five successive points are on the same side of the centerline
     and farther than 1σ from it. In Figure 1, point 11 sends that signal.
   o A run of eight in a row are on the same side of the centerline. Or 10 out of
     11, 12 out of 14 or 16 out of 20. In Figure 1, point 21 is eighth in a row
     above the centerline.
   o Obvious consistent or persistent patterns that suggest something unusual
     about your data and your process.
5. Continue to plot data as they are generated. As each new data point is plotted,
   check for new out-of-control signals.
6. When you start a new control chart, the process may be out of control. If so, the
   control limits calculated from the first 20 points are conditional limits. When you
   have at least 20 sequential points from a period when the process is operating in
   control, recalculate control limits.
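A simplified sketch of setting limits and checking the first out-of-control signal: the measurements are hypothetical, and plain 3-sigma limits stand in for the tabulated control-chart constants (A2, D3, D4, ...) that published charts use.

```python
import statistics

# Hypothetical historical measurements used to set the control limits.
history = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0, 9.9, 10.0]

center = statistics.mean(history)   # the central line
sigma = statistics.pstdev(history)

# Simplified 3-sigma limits; real charts derive limits from tabulated
# constants appropriate to the chart type.
ucl = center + 3 * sigma            # upper control limit
lcl = center - 3 * sigma            # lower control limit

# The first out-of-control signal: a single point outside the limits.
current = [10.1, 9.9, 10.7, 10.0]
signals = [x for x in current if x > ucl or x < lcl]
print(signals)
```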

1.6 The Scope of the Test Effort


During the initial stages of development, the testing team should prepare a detailed
test plan covering the following:

Scope of testing
What items are to be tested
What items are not to be tested
How many hours/resources are required to perform this task
When we are going to perform ad-hoc testing
What the deliverables are in each phase

1.7 The Challenge of Effective and Efficient Testing


Time pressure: This plays a major role in performing effective and efficient testing. If
the testers are not given enough time for testing (including preparation of test cases and
execution), testing may not produce the intended result.
Software complexity: Due to technological advancements, products are expected
to support a wide variety of environments.
Choosing the right approach: If the testing team does not follow the right approach, it
leads to inconsistent output.

Wrong process: If the testing team does not follow the right process, it will not produce
good results.

1.8 The Limits of Testing


Testing does not guarantee a 100% defect-free product
Testing proves the presence of errors, but not their absence
Testing will not improve the development process

1.9 Prioritizing Tests


Test case prioritization techniques schedule test cases in an order that increases their
effectiveness in meeting some performance goal. One performance goal, rate of fault
detection, is a measure of how quickly faults are detected within the testing process; an
improved rate of fault detection can provide faster feedback on the system under test, and
let software engineers begin locating and correcting faults earlier than might otherwise be
possible.
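One common heuristic for the rate-of-fault-detection goal is greedy "additional coverage" ordering: repeatedly schedule the test that covers the most items not yet covered by the tests already scheduled. The sketch below illustrates the idea; the test names and coverage sets are invented for illustration.

```python
def prioritize(tests):
    """Greedy 'additional' prioritization: repeatedly pick the test that
    covers the most not-yet-covered items (e.g. statements or branches)."""
    remaining = dict(tests)          # test name -> set of covered items
    covered, order = set(), []
    while remaining:
        # Choose the test adding the most new coverage (ties broken by name).
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {"a", "b"},
    "t2": {"b", "c", "d"},
    "t3": {"d"},
}
print(prioritize(tests))   # t2 first: it adds the most coverage up front
```

Running the highest-coverage tests first tends to expose faults earlier in the test run, which is exactly the faster-feedback benefit described above.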

1.10 Cost of quality


Cost of quality is a term used to quantify the total of the prevention, appraisal, and
failure costs associated with producing software.

Categories of cost
Prevention cost
Money required to prevent errors and to do the job right the first time.
Ex. Establishing methods and procedures, Training workers, acquiring tools
Appraisal cost
Money spent to review completed products against requirements.
Ex. Cost of inspections, testing, reviews
Failure cost
All costs associated with defective products that have been delivered to the user or moved
into production.
Ex. Repair costs, cost of operating faulty products, damage incurred by using them
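The three categories can be illustrated with a small calculation. All figures below are hypothetical, invented purely to show how a cost-of-quality summary is tallied per category.

```python
# Hypothetical cost figures (in person-hours or currency units).
costs = {
    "prevention": {"training": 40, "tooling": 25, "process definition": 15},
    "appraisal":  {"inspections": 30, "testing": 120, "reviews": 20},
    "failure":    {"rework": 90, "field support": 60},
}

def cost_of_quality(costs):
    """Total cost of quality = prevention + appraisal + failure costs."""
    return {category: sum(items.values()) for category, items in costs.items()}

by_category = cost_of_quality(costs)
print(by_category, "total:", sum(by_category.values()))
```

A summary like this makes the usual trade-off visible: money spent on prevention and appraisal is intended to shrink the (typically much larger) failure column.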

1.11 Software quality factors


Correctness       Extent to which a program satisfies its specifications and fulfills the
                  user's mission objectives.
Reliability       Extent to which a program can be expected to perform its intended
                  function with required precision.
Efficiency        The amount of computing resources and code required by a program
                  to perform a function.
Integrity         Extent to which access to software or data by unauthorized persons
                  can be controlled.
Usability         Effort required to learn, operate, prepare input for, and interpret the
                  output of a program.
Maintainability   Effort required to locate and fix an error in an operational program.
Testability       Effort required to test a program to ensure that it performs its
                  intended function.
Flexibility       Effort required to modify an operational program.
Portability       Effort required to transfer software from one configuration to another.
Reusability       Extent to which a program can be used in other applications; related
                  to the packaging and scope of the functions that programs perform.
Interoperability  Effort required to couple one system with another.

1.12 The Fundamental Test Process (11 steps)


1. Assess development plan and status
2. Develop the test plan
3. Test software requirements
4. Test software design
5. Program (build) phase testing
6. Execute and record results
7. Acceptance test
8. Report test results
9. Test software installation
10. Test software changes
11. Evaluate test effectiveness

1.13 Review process (Inspection, Walkthrough, Peer,


Test coverage)
Inspection
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees should prepare for
this type of meeting by reading through the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost-effective methods of ensuring quality. Employees who are most skilled at inspections
are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get
serious about quality assurance?'. Their skill may have low visibility but they are
extremely valuable to any software development organization, since bug prevention is far
more cost-effective than bug detection.

A formal assessment of a work product conducted by one or more qualified independent


reviewers to detect defects, violations of development standards, and other problems.
Inspections involve authors only when specific questions concerning deliverables exist.
An inspection identifies defects, but does not attempt to correct them. Authors take
corrective actions and arrange follow up reviews as needed.
Walkthrough
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.
During a walkthrough, the producer of a product walks through or paraphrases the
product's content, while a team of other individuals follows along. The team's job is to ask
questions and raise issues about the product that may lead to defect identification.
Points to remember

Software testing proves the presence of errors, but never their absence.

Software testing is a constructive destruction process

Bugs are present in the product for various reasons, such as miscommunication and
software complexity

QA and QC are important activities for software development

Seven QC tools are used for making statistical analysis for any problem

Inspection, Walkthrough are used for identifying defects from early stage of
development process

Exercises
Fill in the blanks
1) ___________ is the variation from the customer requirements
2) Testing is an __________ activity
3) ________ is an activity which defines processes
4) QC tools are used for performing __________ analysis
5) Regression testing is also called as ____________
True or False
1) Testing is done to prove the product is defect free
2) All the defects are failures
3) System testing can be done only after building all the modules
4) Walkthrough is a review process
5) Testing is the last checkpoint before the product reaches the customer
Match the following

1. QA            |  a) Testing the independent module
2. QC            |  b) Defining process
3. Bug           |  c) Testing
4. Failure       |  d) Variations from requirements
5. Unit testing  |  e) Run time errors

Lab exercises
1) Read the following requirement given by the X customer and find the defects.
Product XYZ is a web site which can be opened from any browser, and its response time
should be faster. It should support multilingual support.
2) Identify the defects from www.google.co.in using the following checksheet

Feature to be tested    No of defects    Total
Functionality
User friendliness
Interoperability
Total

3) ABC is a leading software organization in Hyderabad. Very recently, some of their
UAT (user acceptance testing) cycles failed. They want to identify the weaknesses in the
existing system and improve. You have been identified for this. Prepare a cause-effect
diagram, identify the various weaknesses, and report to the management.

Chapter 2
Testing in Software development life cycle
Objectives
Testing in Different Development Lifecycles
The V concept of testing
Verification and Validation
Testing Levels/Stages
Component Test Issues
Integration Testing
System and Acceptance Testing
Static Vs Dynamic testing
Maintenance and Regression Testing

Introduction
This chapter familiarizes the reader with the various SDLC models and different types of
testing

2.1 Software Development life cycle


A software development process is a structure imposed on the development of a
software product. Synonyms include software life cycle and software process. There are
several models for such processes, each describing approaches to a variety of tasks or
activities that take place during the process.
As in any other engineering disciplines, software engineering also has some structured
models for software development.
Process activities/steps

Software Elements Analysis: Extracting the requirements of a desired software


product is the first task in creating it. While customers probably believe they
know what the software is to do, it may require skill and experience in software
engineering to recognize incomplete, ambiguous or contradictory requirements.

Specification: Specification is the task of precisely describing the software to be


written, possibly in a rigorous way. In practice, most successful specifications are
written to understand and fine-tune applications that were already well-developed,
although safety-critical software systems are often carefully specified prior to
application development. Specifications are most important for external
interfaces that must remain stable.

Software architecture: The architecture of a software system refers to an abstract


representation of that system. Architecture is concerned with making sure the
software system will meet the requirements of the product, as well as ensuring
that future requirements can be addressed. The architecture step also addresses
interfaces between the software system and other software products, as well as the
underlying hardware or the host operating system.

Implementation (or coding): Reducing a design to code may be the most


obvious part of the software engineering job, but it is not necessarily the largest
portion.

Testing: Testing of parts of software, especially where code by two different


engineers must work together, falls to the software engineer.

Documentation: An important (and often overlooked) task is documenting the


internal design of software for the purpose of future maintenance and
enhancement. Documentation is most important for external interfaces.

Software Training and Support: A large percentage of software projects fail


because the developers fail to realize that it doesn't matter how much time and
planning a development team puts into creating software if nobody in an
organization ends up using it. People are occasionally resistant to change and
avoid venturing into an unfamiliar area, so as part of the deployment phase, it is
very important to have training classes for the most enthusiastic software users
(build excitement and confidence), then shift the training towards the neutral users
intermixed with the avid supporters, and finally incorporate the rest of the
organization into adopting the new software. Users will have many questions and
software problems, which leads to the next phase of software.

Maintenance: Maintaining and enhancing software to cope with newly


discovered problems or new requirements can take far more time than the initial
development of the software. Not only may it be necessary to add code that does
not fit the original design but just determining how software works at some point
after it is completed may require significant effort by a software engineer. About
two-thirds of all software engineering work is maintenance, but this statistic can be
misleading. A small part of that is fixing bugs. Most maintenance is extending
systems to do new things, which in many ways can be considered new work. In
comparison, about two-thirds of all civil engineering, architecture, and construction work
is maintenance in a similar way.
Process models
A decades-long goal has been to find repeatable, predictable processes or methodologies
that improve productivity and quality. Some try to systematize or formalize the seemingly
unruly task of writing software. Others apply project management techniques to writing
software. Without project management, software projects can easily be delivered late or
over budget. With large numbers of software projects not meeting their expectations in
terms of functionality, cost, or delivery schedule, effective project management is proving
difficult. The following process models are used for development of software.
Waterfall model
Modified waterfall model
Prototype model
Spiral model

Waterfall processes
The best-known and oldest process is the waterfall model, where developers (roughly)
follow these steps in order. They state requirements, analyze them, design a solution
approach, architect a software framework for that solution, develop code, test (perhaps
unit tests then system tests), deploy, and maintain. After each step is finished, the process
proceeds to the next step, just as builders don't revise the foundation of a house after the
framing has been erected.
There is a misconception that the process has no provision for correcting errors in early
steps (for example, in the requirements). In fact this is where the domain of requirements
management comes in which includes change control.
This approach is used in high risk projects, particularly large defence contracts. The
problems in waterfall arise out of immature engineering practices, particularly in
requirements analysis and requirements management. Moreover, where poor engineering
process and rigor exists and where developers move into coding without understanding
the problem at hand.
Often, too, the supposed stages are part of a joint review between customer and
supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off
the design at a key milestone called the Critical Design Review.

Most criticisms of the approach come from the developer community, not the software
engineering community, where they subscribe to the WISCY ("Why Isn't Someone
Coding Yet?") approach to software development.
The waterfall model is a sequential software development model (a process for the
creation of software) in which development is seen as flowing steadily downwards (like a
waterfall) through the phases of requirements analysis, design, implementation, testing
(validation), integration, and maintenance. The origin of the term "waterfall" is often
cited to be an article published in 1970 by W. W. Royce; ironically, Royce himself
advocated an iterative approach to software development and did not even use the term
"waterfall". Royce originally described what is now known as the waterfall model as an
example of a method that he argued "is risky and invites failure".

Waterfall model

Advantages

Testing is inherent to every phase of the waterfall model

It is an enforced disciplined approach

It is documentation driven, that is, documentation is produced at every stage

Disadvantages
The waterfall model is the oldest and the most widely used paradigm.
However, projects rarely follow its sequential flow. This is due to the inherent
problems associated with its rigid format. Namely:

It only incorporates iteration indirectly, thus changes may cause considerable


confusion as the project progresses.

As the client usually has only a vague idea of exactly what is required from
the software product, the waterfall model has difficulty accommodating the natural
uncertainty that exists at the beginning of the project.

The customer only sees a working version of the product after it has been
coded. This may result in disaster if any undetected problems are precipitated
to this stage.

Modified waterfall models


In response to the perceived problems with the "pure" waterfall model, many modified
waterfall models have been introduced. These models may address some or all of the
criticisms of the "pure" waterfall model. Many different models are covered by Steve
McConnell in the "lifecycle planning" chapter of his book Rapid Development: Taming
Wild Software Schedules.
While all software development models will bear at least some similarity to the waterfall
model, as all software development models will incorporate at least some phases similar
to those used within the waterfall model, this section will deal with those closest to the
waterfall model. For models which apply further differences to the waterfall model, or for
radically different models seek general information on the software development process.
Royce's final model
Royce's final model, his intended improvement upon his initial "waterfall model",
illustrated that feedback could (should, and often would) lead from code testing to design
(as testing of code uncovered flaws in the design) and from design back to requirements
specification (as design problems may necessitate the removal of conflicting or otherwise
unsatisfiable / undesignable requirements). In the same paper Royce also advocated large
quantities of documentation, doing the job "twice if possible" (a sentiment similar to that
of Fred Brooks, famous for writing The Mythical Man-Month, an influential book on
software project management, who advocated planning to "throw one away"), and
involving the customer as much as possible, now the basis of participatory design and of
User Centered Design, a central tenet of Extreme Programming.

The "sashimi" model


The sashimi model (so called because it features overlapping phases, like the overlapping
fish of Japanese sashimi) was originated by Peter DeGrace. It is sometimes simply
referred to as the "waterfall model with overlapping phases" or "the waterfall model with
feedback". Since phases in the sashimi model overlap, information of problem spots can
be acted upon during phases of the waterfall model that would typically "precede" others
in the pure waterfall model. For example, since the design and implementation phases

will overlap in the sashimi model, implementation problems may be discovered during
the "design and implementation" phase of the development process. This helps alleviate
many of the problems associated with the Big Design Up Front philosophy of the
waterfall model.
Software prototyping
The prototyping model is a software development process that begins with requirements
collection, followed by prototyping and user evaluation. Often the end users may not be
able to provide a complete set of application objectives, detailed input, processing, or
output requirements in the initial stage. After the user evaluation, another prototype will
be built based on feedback from users, and again the cycle returns to customer evaluation.
The cycle starts by listening to the user, followed by building or revising a mock-up, and
letting the user test the mock-up, then back.
In the mid-1980s, prototyping became seen as the solution to the problem of requirements
analysis within software engineering. Prototypes are mock-ups of the screens of an
application which allow users to visualize the application that is not yet constructed.
Prototypes help users get an idea of what the system will look like, and make it easier for
users to make design decisions without waiting for the system to be built. When they
were first introduced the initial results were considered amazing. Major improvements in
communication between users and developers were often seen with the introduction of
prototypes. Early views of the screens led to fewer changes later and hence reduced
overall costs considerably.
However, over the next decade, while proving a useful technique, it did not solve the
requirements problem:

Managers, once they see the prototype, often have a hard time understanding that
the finished design will not be produced for some time.

Designers often feel compelled to use the patched-together prototype code in the
real system, because they are afraid to 'waste time' starting again.

Prototypes principally help with design decisions and user interface design.
However, they can not tell what the requirements were originally.

Designers and end users can focus too much on user interface design and too little
on producing a system that serves the business process.

Advantages of prototyping

Prototypes can be easily changed

May provide the proof of concept necessary to attract funding

Early visibility of the prototype gives users an idea of what the final system looks
like

High user satisfaction

Encourages active participation among users and producer

Enables a higher output for user

Cost effective (development costs reduced)

Increases system development speed

Make changes quickly and easily

Disadvantages of prototyping

Users' expectations of the prototype may exceed its actual performance

Possibility of causing systems to be left unfinished or implemented before they


are ready.

Producer might produce a system inadequate for overall organization needs

Producer might get too attached to it (might cause legal involvement)

Often lack flexibility

Not suitable for large applications

Project management difficulties

Spiral model
The spiral model is a software development process combining elements of both design
and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up
concepts.
History
The spiral model was defined by Barry Boehm in his article A Spiral Model of Software
Development and Enhancement from 1985. This model was not the first model to discuss
iterative development, but it was the first model to explain why the iteration matters. As
originally envisioned, the iterations were typically 6 months to 2 years long. Each phase
starts with a design goal and ends with the client (who may be internal) reviewing the

progress thus far. Analysis and engineering efforts are applied at each phase of the
project, with an eye toward the end goal of the project.
Applications
For a typical shrink-wrap application, the spiral model might mean that you have a
rough-cut of user elements (without the polished / pretty graphics) as an operable
application, add features in phases, and, at some point, add the final graphics.
The spiral model is used most often in large projects (by companies such as IBM,
Microsoft, Patni Computer Systems and Tata Consultancy Services ) and needs constant
review to stay on target. For smaller projects, the concept of agile software development
is becoming a viable alternative. The US military has adopted the spiral model for its
Future Combat Systems program.
Advantages

Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.

It is more able to cope with the (nearly inevitable) changes that software
development generally entails.

Software engineers (who can get restless with protracted design processes) can
get their hands in and start working on a project earlier.

2.2 The V Concept of testing


The V-Model defines a uniform procedure for IT product development. It is the standard
for German federal administration and defense projects. As it is publicly available many
companies also use it. It is a project management method comparable to PRINCE2 and
describes methods for project management as well as methods for system development.
The current version of the V-Model is the V-Model XT (http://www.v-modell-xt.de)
which was finalized February 2005. It is not really comparable to CMMI. While CMMI
only describes "What" has to be done, the V-Model also describes "How" and "When" it
has to be done and "Who" is responsible for doing it.
The V-model was developed to regulate the software development process within the
German federal administration. It describes the activities and results that have to be
produced during software development.
The V-model is a graphical representation of the system development lifecycle. It
summarizes the main steps to be taken in conjunction with the corresponding deliverables
within computerized system validation framework.

The left tail of the V represents the specification stream where the system specifications
are defined. The right tail of the V represents the testing stream where the systems are
being tested (against the specifications defined on the left-tail). The bottom of the V
where the tails meet, represents the development stream.
The specification stream mainly consists of:

User Requirement Specifications

Functional Specifications

Design Specifications

The testing stream generally consists of:

Installation Qualification

Operational Qualification

Performance Qualification

The development stream can consist (depending on the system type and the development
scope) in customization, configuration or coding

2.3 Verification and Validation


Verification ensures that the system (software, hardware, documentation, and personnel)
complies with an organization's standards and processes, relying on reviews or other
non-executable methods. Validation physically ensures that the system operates according to
plan by executing the system functions through a series of tests that can be observed and
evaluated. Verification answers the question, "Did we build the system right?", while
validation addresses, "Did we build the right system?"
Examples
Verification requires several types of reviews, including requirements reviews, design
reviews, code walkthroughs, code inspections and test reviews. The system user should
be involved in these reviews to find defects before they are built into the system. In the
case of purchased systems, user input is needed to assure that the supplier makes the
appropriate tests to eliminate defects.

Verification examples

Requirement reviews
Performed by: Developers, users
Explanation: The study and discussion of the computer system requirements to ensure
they meet stated user needs and are feasible.
Deliverable: Reviewed statement of requirements, ready to be translated into system
design.

Design reviews
Performed by: Developers
Explanation: The study and discussion of the computer system design to ensure it will
support the system requirements.
Deliverable: System design, ready to be translated into computer programs, hardware
configurations, documentation and training.

Code walkthroughs
Performed by: Developers
Explanation: An informal analysis of the program source code to find defects and verify
coding techniques.
Deliverable: Computer software ready for testing or more detailed inspections by the
developer.

Code inspections
Performed by: Developers
Explanation: A formal analysis of the program source code to find defects as defined by
meeting computer system design specifications. Usually performed by a team composed
of developers and subject matter experts.
Deliverable: Computer software ready for testing by the developer.

Validation is accomplished simply by executing a real-life function (if you wanted to
check to see if your mechanic had fixed the starter on your car, you'd try to start the car).

Validation examples

Unit testing
Performed by: Developers
Explanation: The testing of a single program, module, or unit of code. Usually performed
by the developer of the unit. Validates that the software performs as designed.
Deliverable: Software unit ready for testing with other system components such as other
software units, hardware, documentation, or users.

Integration testing
Performed by: Developers
Explanation: The testing of related programs, modules or units of code. Validates that
multiple parts of the system interact according to the system design.
Deliverable: Portions of the system ready for testing with other portions of the system.

System testing
Performed by: Developers, users
Explanation: The testing of an entire computer system. This kind of testing can include
functional and structural testing such as stress testing. Validates the system requirements.
Deliverable: A tested computer system, based on what was specified to be developed or
purchased.

User acceptance testing
Performed by: Users
Explanation: The testing of a computer system or parts of a computer system to make
sure it will work in the system regardless of what the system requirements indicate.
Deliverable: A tested computer system, based on user needs.

2.4 Testing levels and stages

Unit testing
Integration testing
System testing
User Acceptance testing

Unit testing
Objective
To test a single program, module, or unit of code
Who does?
Usually performed by the developer of the unit
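A minimal unit test might look like the following sketch, written with Python's standard unittest framework. The apply_discount function is a hypothetical unit under test, invented for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: a hypothetical pricing function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (a test script would normally rely on
# unittest.main() or a test runner instead).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures), "errors:", len(result.errors))
```

Note the pattern: each test method checks one behavior of the single unit, including an error path, without involving any other module.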

Integration testing
Objective
To test that related programs, modules, or units of code are interfaced properly; validates
that multiple parts of the system interact according to the system design.

Who does?
Usually performed by the developer
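In contrast to a unit test, an integration test exercises two or more units through their real interfaces rather than testing each in isolation with stubs. A minimal sketch, using two invented components:

```python
# Two hypothetical units that must work together: an in-memory user store
# and a greeting function that depends on it.
class UserStore:
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def name_of(self, user_id):
        return self._users[user_id]

def greeting(store, user_id):
    return "Hello, " + store.name_of(user_id) + "!"

# Integration check: drive both units through their real interface, so
# mismatches between them (not just within them) are exposed.
store = UserStore()
store.add(1, "Asha")
print(greeting(store, 1))
```

Each unit could pass its own unit tests and still fail here, for example if greeting expected a different method name on the store; that interface mismatch is exactly what integration testing is meant to catch.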

System testing
Objective
To test an entire computer system with respect to functionality and performance. This
kind of testing can include functional and structural testing such as stress testing.
Validates the system requirements
Who does?
Testers

User Acceptance testing


Objective
To test a computer system or parts of a computer system to make sure it will work in
the system regardless of what the system requirements indicate
Who does?
Customer or customer representative

Regression testing
Testing done to ensure that a modification does not cause any side effects is called
regression testing.
It is also known as selective retesting.
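Selective retesting can be sketched as follows, assuming a hypothetical traceability matrix that maps each test to the modules it exercises (real regression-selection tools usually derive this mapping from coverage data or version-control history):

```python
def select_regression_tests(test_map, changed_modules):
    """Selective retesting: run only the tests that touch a changed module."""
    changed = set(changed_modules)
    return sorted(t for t, modules in test_map.items() if changed & set(modules))

# Invented traceability matrix: test name -> modules it exercises.
test_map = {
    "test_login":   ["auth"],
    "test_report":  ["reporting", "auth"],
    "test_billing": ["billing"],
}
print(select_regression_tests(test_map, ["auth"]))
```

After a change to the auth module, only the two tests that touch it need to be re-run, which is the "selective" part of selective retesting.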

2.5 Static testing


Static testing is performed using the software documentation; the code is not executed
during static testing. Dynamic testing requires the code to be in an executable state to
perform the tests.
Most verification techniques are static tests
Feasibility reviews - Tests for this structural element would verify the logic flow of a
unit of software

Requirements reviews - These reviews verify software relationships; for example, in


any particular system, the structural limits of how much load (e.g. transactions or
number of concurrent users) a system can handle
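The static/dynamic distinction is easy to demonstrate in code: a static check examines the source without ever running it. The toy checker below parses a code fragment and reports local variables that are assigned but never read (an illustrative sketch, not a production lint tool):

```python
import ast

SOURCE = """
def divide(a, b):
    result = a / b
    unused = 42
    return result
"""

# Static analysis: inspect the syntax tree; the fragment is never executed,
# so even a divide-by-zero bug in it could not crash this checker.
tree = ast.parse(SOURCE)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # variable being written
        else:
            used.add(node.id)       # variable being read
print("assigned but never used:", sorted(assigned - used))
```

A dynamic test of the same fragment would instead call divide() with chosen inputs and check the returned values, which requires the code to actually run.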

2.6 Dynamic testing


Most validation tests are dynamic tests. Examples include:
Unit testing: These tests verify that the system functions properly; for example,
pressing a function key to complete an action
Integration testing: The system runs tasks that involve more than one application or
database to verify that it performed the tasks accurately
System testing: The tests simulate operation of an entire system and verify that it runs
correctly
User acceptance testing: This real-world test means the most to your business, and
unfortunately there's no way to conduct it in isolation. Once your organization's staff,
customers, or vendors begin to interact with your system, they'll verify that it functions
properly for you.
Points to remember

The waterfall model does not allow modification after moving to the next phase
Prototyping is an iterative model
Regression testing is also called selective re-testing
All reviews are classed as verification
Unit, integration, system, and acceptance testing are the various levels of testing

Exercises
Fill in the blanks
1) In ________ model of software development, the customer can evaluate the
product and give feedback to the developer
2) _________ testing is done to ensure that modification does not cause any side
effects
3) Code walkthrough is an _______ activity
4) UAT is done by ________
5) Testing the performance of the application is done in _______ level of testing
True or False

1) One of the disadvantages of the prototype model is that it takes a lot of time to
release the product
2) In the spiral model, a go or no-go decision is made in every cycle
3) The objective of performing integration testing is to identify the errors in the
interfaces
4) The waterfall model is suitable for change-driven projects
5) UAT is done to ensure that the product is defective

Match the following


1) Prototype model   |  a) Testing done in cycles
2) Spiral model      |  b) Changes are not allowed
3) Waterfall model   |  c) Performance testing
4) Document testing  |  d) Iterative model
5) System testing    |  e) Static testing

Exercises
1) Which SDLC model would be most suitable for each of the following scenarios?
a) The product is a bespoke product for a specific customer, who is always available
to give feedback
b) A product that is made up of a number of features that become available sequentially
and incrementally
2) Write the test cases for the following screen and find the defects

Chapter 3
Test Management

Objective

To understand the concept behind test organization
How to prepare test planning and estimation
How to perform test progress monitoring and control
Configuration management
Introduction

This chapter familiarizes the reader with test planning and software configuration
management.
Test organization and independence
The effectiveness of finding defects by testing and reviews can be improved by using
independent testers. Options for independence are:
Independent testers within the development teams
Independent test team or group within the organization, reporting to project
management or executive management
Independent testers from the business organization, user community and IT
Independent test specialists for specific test targets such as usability testers,
security testers or certification testers (who certify a software product against
standards and regulations)
Independent testers outsourced or external to the organization
For large, complex or safety critical projects, it is usually best to have multiple levels of
testing, with some or all of the levels done by independent testers. Development staff
may participate in testing, especially at the lower levels, but their lack of objectivity often
limits their effectiveness. The independent testers may have the authority to require and
define test processes and rules, but testers should take on such process-related roles only
in the presence of a clear management mandate to do so.
The benefits of independence include:

Independent testers see different defects, and are unbiased.
An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:

Isolation from the development team (if treated as totally independent).
Independent testers may become a bottleneck as the last checkpoint.
Developers may lose a sense of responsibility for quality.
Test planning and estimation

Testing, like any project, should be driven by a plan. The test plan acts as the anchor
for the execution, tracking, and reporting of the entire testing project and covers:
1) What needs to be tested? The scope of testing, including clear identification of what
will be tested and what will not be tested
2) How is the testing going to be performed? Breaking down the testing into small and
manageable tasks and identifying the strategies to be used for carrying out the tasks

3) What resources are needed for testing? Computer as well as human resources.
4) The time lines by which the testing activities will be performed
5) Risks that may be faced in all of the above, with appropriate mitigation and
contingency plans.
Test plan template
Introduction
Scope
References
Test methodology and strategy / approach
Test criteria
Entry criteria
Exit criteria
Suspension criteria
Resumption criteria
Assumptions, dependencies, and Risks
Assumptions
Dependencies
Risks and Risks management plans
Estimations
Size
Effort
Schedule
Test deliverables and milestones
Responsibilities
Resource requirements
Hardware resources
Software resources
People resources (number of people, skills, duration etc)
Other resources
Training requirements
Details of training required
Possible attendees
Any constraints
Defect logging and tracking progress
Metrics plan
Product release criteria
Test Estimation
Estimation happens broadly in three phases.
1 Size estimation
2 Effort estimation
3 Schedule estimation

Size estimation
Size estimate quantifies the actual amount of testing that needs to be done. Several
factors contribute to the size estimation of the testing project. They are
1 Lines of code
2 Function point
3 Number of screens/reports/transactions
4 Extent of automation required
5 Number of platforms and inter-operability environments to be tested
Size estimate is expressed in terms of the following.
1 Number of test cases
2 Number of test scenarios
3 Number of configurations to be tested
Effort estimation
Size estimation is the primary input for estimating effort. The other factors that drive the
effort estimate are as follows.
Productivity data: Productivity refers to the speed at which the various activities of
testing can be carried out. It is based on historical data available in the organization,
and can be further classified into the number of test cases developed per day and the
number of test cases that can be run per day.
Reuse opportunities: If the test architecture has been designed keeping reuse in mind,
then the effort required to cover a given size of testing can come down.
Robustness of processes: The existence of well-defined processes will go a long way in
reducing the effort involved in any activity.
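To make the arithmetic concrete, the relationship between size, productivity, and effort can be sketched as follows; the function name and the sample numbers are illustrative assumptions, not from the text:

```python
def estimate_effort(test_cases, design_rate, execution_rate, reuse_factor=1.0):
    """Estimate testing effort in person-days from size and productivity data.

    design_rate / execution_rate: test cases designed / executed per person-day
    (historical productivity data). reuse_factor < 1.0 models the reduction
    when the test architecture allows reuse.
    """
    design_days = test_cases / design_rate
    execution_days = test_cases / execution_rate
    return (design_days + execution_days) * reuse_factor

# 200 test cases, 10 designed per day, 25 executed per day, 20% saved by reuse
print(round(estimate_effort(200, 10, 25, reuse_factor=0.8), 1))  # 22.4
```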
Schedule estimation
Activity breakdown and schedule estimation entail translating the effort required into
specific time frames. The following steps make up this translation:
1 Identifying external and internal dependencies among the activities
2 Sequencing the activities, based on the expected duration as well as on the
dependencies
3 Identifying the time required for each of the WBS activities, taking into account
the above two factors
4 Monitoring the progress in terms of time and effort
5 Rebalancing schedules and resources as necessary
Some of the external dependencies:
1. Availability of the product from the developer
2. Hiring
3. Training
4. Acquisition of hardware/software required for training
5. Availability of translated message files for testing
Some of the internal dependencies:
1. Completing the test specification
2. Coding/scripting the tests
3. Executing the tests
Example of Gantt chart

Test progress monitoring and controlling

The purpose of test monitoring is to give feedback and visibility about test activities.
Information to be monitored may be collected manually or automatically and may be
used to measure exit criteria, such as coverage. Metrics may also be used to assess
progress against the planned schedule and budget.
Common test metrics include:
Percentage of work done in test case preparation (or percentage of planned test cases
prepared)
Percentage of work done in test environment preparation
Test case execution (e.g. number of test cases run/not run, and test cases passed/failed)
Defect information (e.g. defect density, defects found and fixed, failure rate, and retest
results)
Test coverage of requirements, risks or code
Subjective confidence of testers in the product
Dates of test milestones
Testing costs, including the cost compared to the benefit of finding the next defect or
running the next test
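Several of these metrics can be computed directly from raw counts. This is a minimal sketch with invented field names and sample numbers:

```python
def progress_metrics(planned, prepared, executed, passed, found, fixed):
    """Compute a few of the common test-monitoring metrics listed above."""
    return {
        "preparation_pct": 100.0 * prepared / planned,   # test case preparation
        "execution_pct": 100.0 * executed / planned,     # execution progress
        "pass_rate_pct": 100.0 * passed / executed if executed else 0.0,
        "open_defects": found - fixed,                   # defect information
    }

m = progress_metrics(planned=200, prepared=150, executed=120, passed=102,
                     found=30, fixed=22)
print(m["preparation_pct"], m["pass_rate_pct"], m["open_defects"])  # 75.0 85.0 8
```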

Test Reporting
Test reporting is concerned with summarizing information about the testing endeavour,
including:
What happened during a period of testing, such as dates when exit criteria were met
Analyzed information and metrics to support recommendations and decisions about
future actions, such as an assessment of defects remaining, the economic benefit of
continued testing, outstanding risks, and the level of confidence in the tested software
The outline of a test summary report is given in the Standard for Software Test
Documentation (IEEE 829).
Metrics should be collected during and at the end of a test level in order to assess:
The adequacy of the test objectives for that test level.
The adequacy of the test approaches taken.
The effectiveness of the testing with respect to its objectives.

Test control
Test control describes any guiding or corrective actions taken as a result of information and
metrics gathered and reported. Actions may cover any test activity and may affect any other
software life cycle activity or task.
Examples of test control actions are:
Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested by a developer before accepting
them into a build.

Configuration Management
Configuration management is a key component of the infrastructure for any software
development organization. The ability to maintain control over the changes made to all
project artifacts is critical to the success of a project. The more complex an application is,
the more important it is to change both the application and its supporting artifacts in a
controlled manner.
The list below illustrates the types of project artifacts that must be managed and
controlled in the CM environment:

Source code
Requirements
Analysis models
Design models
Test cases and procedures
Automated test scripts

User documentation, including manuals and online Help


Hardware and software configuration settings
Other artifacts as needed

Definition
Set of activities designed to control change by identifying the work products that are
likely to change, establishing relationships among them, defining mechanisms for
managing different versions of these work products, controlling the changes imposed, and
auditing and reporting on the changes made.
Purposes
The goals of SCM are generally:

Configuration Identification - What code are we working with?
Configuration Control - Controlling the release of a product and its changes.
Status Accounting - Recording and reporting the status of components.
Review - Ensuring completeness and consistency among components.
Build Management - Managing the process and tools used for builds.
Process Management - Ensuring adherence to the organization's development process.
Environment Management - Managing the software and hardware that host our system.
Teamwork - Facilitating team interactions related to the process.
Defect Tracking - Making sure every defect has traceability back to the source.

Change management
Managing changes is a process. The process is the primary responsibility of the software
development staff. They must ensure that change requests are documented, that they
are tracked through approval or rejection, and that they are then incorporated into the
development process.
The testers need to know two aspects of change:
1 The characteristics of the change, so that the test plan and test data can be
modified to ensure the right functionality and structure are tested
2 The version in which the change will be implemented
Without effective communication between the development team and the test team regarding
changes, test effort may be wasted testing the wrong functionality and structure.

Version Control
Once dynamic testing begins, the project team must ensure that the appropriate versions
of the software components are being tested. The time and effort devoted to testing are
wasted if either the incorrect components or the wrong version of the correct components
have been migrated to the test environment. The configuration manager must develop
both migration and back out procedures to support this process.
Points to remember

Independent testers see different defects, and are unbiased.
The test plan acts as the anchor for the execution, tracking, and reporting of the
entire testing project.
Estimation happens broadly in three phases: size estimation, effort estimation, and
schedule estimation.
The ability to maintain control over the changes made to all project artifacts is
critical to the success of a project.

Exercises
1) ____________ management is managing the changes
2) The _________ acts as the anchor for the execution, tracking, and reporting of the
entire testing project
3) __________ describes any guiding or corrective actions taken as a result of
information and metrics gathered and reported
4) Number of test cases is one measure of size estimation (T/F)

Chapter 4
Test planning
Objective of this chapter
Basic test planning
What is a test plan
Steps in preparing the test plan
Build the test plan
Write the test plan
What is a test strategy?
The components and formats of a test strategy
Risk concepts
Risk analysis

Introduction
The objective of this chapter is to familiarize the reader with test planning.
Test plan
It describes how testing will be accomplished. Its creation is essential to effective testing
and should take about one third of the total test effort. If the plan is developed carefully,
test execution, analysis, and reporting will flow smoothly.
The test plan is a contract between the testers, the project team, and the users, describing
the role of testing in the project.
Points to remember

The test plan should be an evolving document. As the development effort
changes in scope, the test plan must change accordingly.
The test plan should give information on the software being tested, test objectives
and risks, and the specific tests to be performed.
Test planning can start as soon as requirements definition starts.

Steps in preparing the test plan

1 Understand the characteristics of the software being developed
2 Build the test plan
3 Write the test plan

Tips for understanding the characteristics of the software being developed
Define what it means to meet project objectives
Understand core business areas and processes
Assess the severity of potential failures
Identify the components for the system
Assure requirements are testable
Address implementation schedule issues
Address interface and data exchange issues
Evaluate contingency plans for this system and activities
Identify vulnerable parts of the system and processes operating outside the
information resource management area
Steps in building the test plan
1 Set test objectives
2 Develop the test matrix
3 Define test administration

Steps in setting test objectives

Define each objective so that you can reference it by a number
Write the test objectives as measurable statements
Assign a priority to each objective, such as High, Medium, or Low
Define the acceptance criteria for each objective
Developing the test matrix
It is the key component of the test plan. It lists which software functions must be tested
and the available tests.

Software function        | Test used to verify the function
                         | Code inspection | Back-end verification
Login                    |                 |
Adding to shopping cart  |                 |
Viewing products         |                 |
Screen navigation        |                 |

Guidelines for writing the test plan
Start early
Keep the test plan flexible
Review the test plan frequently
Keep the test plan concise and readable
Calculate the planning effort
Spend the time to do a complete test plan
Test plan template
1 Test scope
2 Test objectives
3 Assumptions
4 Risk analysis
5 Test design
6 Roles and responsibilities
7 Test schedule and resources
8 Test data management
9 Test environment
10 Communication approach
11 Test tools

Test Strategy
The test strategy is the plan for how you are going to approach testing.
The test strategy document helps answer the following questions:
Should you develop a formal plan for your tests?
Should you test the entire program as a whole or run tests only on a small part of
it?
Should you rerun tests you've already conducted as you add new components to a
large system?
When should you involve the customer?
Template of test strategy
Name of the Project :
XXXXX Project
Brief Description :
Mention the Scope of the Project
Type of Project :
Application Testing or Product Testing
Type of Software :
List all the software used in the project
Critical Success Factors:
Delivery on time
Exceptional user friendliness
Attractive user interface
Reasonable response time for the online part
Risk Factors:
Schedule - Timeframe is short
Tools/Technology - Limited automated test tools
Resource - Non-availability of test team members / test team members are
new to the environment
Support - Not enough support from business people
Test Objectives:
Ensure not more than 1 bug per 10 function points
Maximum 3-second response time
User friendliness of screens / menus / documentation
Trade-offs:
Objectives stated under 7 are to be achieved at any cost.
Delivery on time takes precedence over other aspects.


System Test (Plan, Test Preparation, Execution)
Responsibility - PM/PL
Resources budgeted - Depending on the project
Planned start date
Stop criteria - Review, approval, and sign-off
Integration Test (Test Plan, Test Execution)
Responsibility - PL/ML
Resources budgeted - Depending on the project
Planned start date
Stop criteria - Review, approval, required features covered
Unit Test (Plan, Test Preparation, Execution)
Responsibility - ML/TM
Resources budgeted - Depending on the project
Planned start date
Stop criteria - Review, approval, 100%
Risk
Risk is the potential loss to an organization.
Risk analysis
It is an analysis of an organizations information resources, its existing controls, and its
remaining organization and computer system vulnerabilities. It combines the loss
potential for each resource or combination of resources with an estimated rate of
occurrence to establish a potential level of damage in dollars or other assets
Threat
A threat is something capable of exploiting vulnerability in the security of a computer
system or application. Threats include both hazards and events that can trigger flaws
Vulnerability
Vulnerability is a design, implementation, or operations flaw that may be exploited by a
threat; the flaw causes the computer system or application to operate in a fashion
different from its published specifications and to result in destruction or misuse of
equipment or data
Steps involved in Risks analysis and management
1 Identify the risk
2 Estimate the severity of the risk
3 Develop tests to substantiate the impact of the risk on the application
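Step 2, estimating severity, is often done by combining probability and impact. The convention criticality = probability x impact used below is a common one, not prescribed by the text, and the risk names are illustrative:

```python
def risk_criticality(probability, impact):
    """A common convention (an assumption here): criticality = probability x impact."""
    return probability * impact

def prioritize(risks):
    """Order risks so the most critical are mitigated and tested first."""
    return sorted(risks,
                  key=lambda r: risk_criticality(r["probability"], r["impact"]),
                  reverse=True)

risks = [
    {"name": "Change in requirement", "probability": 2, "impact": 2},
    {"name": "Test environment unavailable", "probability": 3, "impact": 4},
]
print([r["name"] for r in prioritize(risks)])
```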
Risks in software testing

Not enough training / lack of competency
"Us versus them" mentality
Lack of test tools
Lack of management understanding and support of testing
Lack of customer and user involvement
Not enough schedule or budget for testing
Over-reliance on independent testers
Rapid change
Testers are in a lose-lose situation
Having to say no
Test environment
New technology
New development processes
Example of risk analysis

S.No. | Risk category      | Risk description      | Cause                    | Impact on the project | Probability of occurrence | Impact | Risk criticality | Risk handling plan
1     | Project management | Change in requirement | Lack of domain knowledge | Schedule slippage     | 2                         | 2      |                  | Evaluate the change/work request; if it leads to schedule slippage, get approval from the customer

Points to remember

If the test plan is developed carefully, test execution, analysis, and reporting will
flow smoothly.
Risk is the potential loss to an organization.

Exercises
Prepare a risk analysis for the functional testing of google.co.in and submit it to your
team lead

Chapter 5
Test case design techniques
Objective of this chapter
Types of testing
Black box testing
White box testing
Preparation of test cases
Execution of test cases
Mapping requirements to test cases
Traceability matrix (use cases / test requirements to test cases)
Boundary value analysis
Equivalence partitioning
Decision table

Introduction
The objective of this chapter is to familiarize the reader with the test case design
techniques
Black box testing
Also known as functional testing. A software testing technique whereby the internal
workings of the item being tested are not known by the tester. For example, in a black
box test on a software design the tester only knows the inputs and what the expected
outcomes should be and not how the program arrives at those outputs. The tester does not
ever examine the programming code and does not need any further knowledge of the
program other than its specifications.
The advantages of this type of testing include:

The test is unbiased because the designer and the tester are independent of each
other.

The tester does not need knowledge of any specific programming languages.

The test is done from the point of view of the user, not the designer.

Test cases can be designed as soon as the specifications are complete.

The disadvantages of this type of testing include:

The test can be redundant if the software designer has already run a test case.
The test cases are difficult to design.
Testing every possible input stream is unrealistic because it would take an
inordinate amount of time; therefore, many program paths will go untested.

White box testing

Also known as glass box, structural, clear box and open box testing. A software testing
technique whereby explicit knowledge of the internal workings of the item being tested
is used to select the test data. Unlike black box testing, white box testing uses specific
knowledge of the programming code to examine outputs. The test is accurate only if the
tester knows what the program is supposed to do. He or she can then see if the program
diverges from its intended goal. White box testing does not account for errors caused by
omission, and all visible code must also be readable.
For a complete software examination, both white box and black box tests are required.

Preparation of test cases

While preparing test cases consider the following points:
1 Have you planned for an overall testing schedule and the personnel required, and
associated training requirements?
2 Have the test team members been given assignments?
3 Have you established test plans and test procedures for module testing,
integration testing, system testing, and acceptance testing?
4 Have you designed at least one black-box test case for each system function?
5 Have you designed test cases for verifying quality objectives/factors (e.g.
reliability, maintainability, etc.)?
6 Have you designed test cases for verifying resource objectives?
7 Have you defined test cases for performance tests, boundary tests, and usability
tests?
8 Have you designed test cases for stress tests (intentional attempts to break the
system)?
9 Have you designed test cases with special input values (e.g. empty files)?
10 Have you designed test cases with default input values?
11 Have you described how traceability of testing to requirements is to be
demonstrated (e.g. references to the specified functions and requirements)?
12 Do all test cases agree with the specification of the function or requirement to be
tested?
13 Have you sufficiently considered error cases? Have you designed test cases for
invalid and unexpected input conditions as well as valid conditions?
14 Have you defined test cases for white-box testing (structural tests)?
15 Have you stated the level of coverage to be achieved by structural tests?
16 Have you unambiguously provided test input data and expected test results or
expected messages for each test case?
17 Have you documented the purpose of and the capability demonstrated by each test
case?
18 Is it possible to meet and to measure all test objectives defined (e.g. test
coverage)?
19 Have you defined the test environment and tools needed for executing the
software test?
20 Have you described the hardware configuration and resources needed to implement
the designed test cases?
21 Have you described the software configuration needed to implement the designed
test cases?
22 Have you described the way in which tests are to be recorded?
23 Have you defined criteria for evaluating the test results?
24 Have you determined the criteria on which the completion of the test will be
judged?
25 Have you considered requirements for regression testing?

Execution of test cases


Traceability Matrix
In a software development process, a traceability matrix is a table that correlates any
two baselined documents that require a many-to-many relationship, to determine the
completeness of the relationship. It is often used with high-level requirements (sometimes
known as marketing requirements) and detailed requirements of the software product,
matching them to parts of the high-level design, detailed design, test plan, and test cases.
Sample template for a traceability matrix:

S.No. | Use Case ID/Requirement ID | Test case ID
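A minimal sketch of how such a matrix can be checked for completeness, assuming a simple mapping of test case IDs to requirement IDs (the IDs are invented):

```python
def uncovered_requirements(trace_matrix, requirements):
    """Return the requirement IDs that no test case traces back to.

    trace_matrix maps a test case ID to the requirement IDs it covers.
    """
    covered = set()
    for req_ids in trace_matrix.values():
        covered.update(req_ids)
    return sorted(set(requirements) - covered)

matrix = {"TC-01": ["REQ-1"], "TC-02": ["REQ-1", "REQ-2"]}
print(uncovered_requirements(matrix, ["REQ-1", "REQ-2", "REQ-3"]))  # ['REQ-3']
```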

Test case design techniques
Boundary value analysis
Boundary value analysis is a software testing related technique to determine test cases
covering known areas of frequent problems at the boundaries of software component
input ranges.
Testing experience has shown that especially the boundaries of input ranges to a software
component are liable to defects. A programmer who has to implement e.g. the range 1 to
12 at an input, which e.g. stands for the month January to December in a date, has in his
code a line checking for this range. This may look like:
if (month > 0 && month < 13)

But a common programming error may check a wrong range e.g. starting the range at 0
by writing:
if (month >= 0 && month < 13)

For more complex range checks in a program this may be a problem which is not so
easily spotted as in the above simple example.
Applying boundary value analysis you have to select now a test case at each side of the
boundary between two partitions. In the above example this would be 0 and 1 for the
lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists
of a "clean" and a "dirty" test case. A "clean" test case should give you a valid operation
result of your program. A "dirty" test case should lead to a correct and specified input
error treatment such as the limiting of values, the usage of a substitute value, or in case of
a program with a user interface, it has to lead to warning and request to enter correct data.
Boundary value analysis can thus yield six test cases: n-1, n, and n+1 at the lower limit, and n-1, n, and n+1 at the upper limit.
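The six boundary cases can be generated mechanically. This sketch assumes an inclusive integer range like the month example; the function name is illustrative:

```python
def boundary_values(lower, upper):
    """Test inputs for an inclusive [lower, upper] range: n-1, n, and n+1
    at each boundary, giving the six cases described above."""
    return sorted({lower - 1, lower, lower + 1, upper - 1, upper, upper + 1})

# The month example above: valid range 1..12
print(boundary_values(1, 12))  # [0, 1, 2, 11, 12, 13]
```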
Equivalence partitioning

Equivalence partitioning is a software testing technique with two goals:
1 To reduce the number of test cases to a necessary minimum.
2 To select the right test cases to cover all possible scenarios.
Although in rare cases equivalence partitioning is also applied to the outputs of a software
component, typically it is applied to the inputs of a tested component. The equivalence
partitions are usually derived from the specification of the component's behaviour. An
input has certain ranges which are valid and other ranges which are invalid. This may be
best explained at the following example of a function which has the pass parameter
"month" of a date. The valid range for the month is 1 to 12, standing for January to
December. This valid range is called a partition. In this example there are two further
partitions of invalid ranges. The first invalid partition would be <= 0 and the second
invalid partition would be >= 13.
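Using the month example above, one representative test value per partition is enough to cover all three classes. The classifier function and its messages are illustrative:

```python
def month_partition(month):
    """Classify an input into one of the three partitions from the example."""
    if month <= 0:
        return "invalid (<= 0)"
    if month >= 13:
        return "invalid (>= 13)"
    return "valid (1..12)"

# One representative value per partition
print([month_partition(m) for m in (-5, 6, 20)])
```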
Decision table
Decision tables are a precise yet compact way to model complicated logic. Decision
tables, like if-then-else and switch-case statements, associate conditions with actions to
perform. But, unlike the control structures found in traditional programming languages,
decision tables can associate many independent conditions with several actions in an
elegant way.

Structure
Decision tables are typically divided into four quadrants, as shown below.
The four quadrants:

Conditions | Condition alternatives
Actions    | Action entries

Each decision corresponds to a variable, relation or predicate whose possible values are
listed among the condition alternatives. Each action is a procedure or operation to
perform, and the entries specify whether (or in what order) the action is to be performed
for the set of condition alternatives the entry corresponds to. Many decision tables
include in their condition alternatives the don't care symbol, a hyphen. Using don't cares
can simplify decision tables, especially when a given condition has little influence on the
actions to be performed. In some cases, entire conditions thought to be important initially
are found to be irrelevant when none of the conditions influence which actions are
performed.
Aside from the basic four quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented. Some decision tables use simple
true/false values to represent the alternatives to a condition (akin to if-then-else), other
tables may use numbered alternatives (akin to switch-case), and some tables even use
fuzzy logic or probabilistic representations for condition alternatives. In a similar way,
action entries can simply represent whether an action is to be performed (check the
actions to perform), or in more advanced decision tables, the sequencing of actions to
perform (number the actions to perform).

Example
The limited-entry decision table is the simplest to describe. The condition alternatives are
simple boolean values, and the action entries are check-marks, representing which of the
actions in a given column are to be performed.
A technical support company writes a decision table to diagnose printer problems based
upon symptoms described to them over the phone from their clients.
Printer troubleshooter

                                       Rules
Conditions                             1 2 3 4 5 6 7 8
Printer does not print                 Y Y Y Y N N N N
A red light is flashing                Y Y N N Y Y N N
Printer is unrecognized                Y N Y N Y N Y N
Actions
Check the power cable                  - - X - - - - -
Check the printer-computer cable       X - X - - - - -
Ensure printer software is installed   X - X - X - X -
Check/replace ink                      X X - - X X - -
Check for paper jam                    - X - X - - - -

Of course, this is just a simple example (and it does not necessarily correspond to the
reality of printer troubleshooting), but even so, it is possible to see how decision tables
can scale to several conditions with many possibilities.
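The troubleshooter can be encoded as a data-driven lookup from condition alternatives to actions. The action entries below follow the commonly published version of this example and should be treated as an assumption where the original table is ambiguous:

```python
# Rules keyed by (printer_does_not_print, red_light_flashing, printer_unrecognized).
# NOTE: the action entries are assumed from the widely published version of
# this example, not guaranteed by the text above.
RULES = {
    (True,  True,  True):  ["Check the printer-computer cable",
                            "Ensure printer software is installed",
                            "Check/replace ink"],
    (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
    (True,  False, True):  ["Check the power cable",
                            "Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True):  ["Ensure printer software is installed",
                            "Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True):  ["Ensure printer software is installed"],
    (False, False, False): [],
}

def diagnose(does_not_print, red_light, unrecognized):
    """Look up the actions for one combination of condition alternatives."""
    return RULES[(does_not_print, red_light, unrecognized)]

print(diagnose(True, False, False))  # ['Check for paper jam']
```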
Points to remember

In the black box testing technique, the internal workings of the item being tested
are not known by the tester.
In the white box testing technique, explicit knowledge of the internal workings of
the item being tested is used to select the test data.
Boundary value analysis and equivalence partitioning are techniques used for
reducing the number of test cases.

Exercises
1) In white box testing, the tester will be aware of the inner workings of the program
(T/F)
2) In black box testing, the tester will not be aware of the inner workings of the program
(T/F)
3) Equivalence partitioning checks the defects in the application with respect to its
boundaries (T/F)
4) Prepare the test cases for the following scenario.
For an insurance application, the customer's date of birth is one of the input parameters.
According to the business rule, the application should not accept a customer who is less
than 18 years of age or more than 58 years.
5) Prepare the test cases for the following scenario
For a banking application, the transaction is performed on the following situation.
If the customer has entered valid customer no
If the customer has entered valid password
If the customer has entered less than or equal to current balance

Chapter 6
Test case development

Objectives

Test cases and test scripts
Test case best practices
Sample test cases

Introduction
The objective of this chapter is to familiarize the reader with the test case development
Test cases
A test case is a set of conditions or variables under which a tester will determine if a
requirement upon an application is partially or fully satisfied
Test script
A test script is a short program written in a programming language used to test part of
the functionality of a software system. A written set of steps that should be performed
automatically can also be called a test script; however this is more correctly called a test
case.
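As a hedged illustration of the distinction, a tiny test script might look like this; the add function merely stands in for the application under test (a real script would drive the actual software):

```python
def add(a, b):
    """Stand-in for the unit under test."""
    return a + b

def test_addition():
    """One test case, scripted: inputs, expected output, and a verdict."""
    actual = add(1, 1)
    expected = 2
    assert actual == expected, f"expected {expected}, got {actual}"
    return "Pass"

print(test_addition())  # Pass
```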
Test case best practices

Avoid unnecessary duplication of test cases
Map all the test cases to their requirements
Provide sufficient information in the test cases, with appropriate naming
conventions

Sample test cases

Functional test cases (for testing the calculator "+" button)

S.No. | Use case ID | Test case ID | Pre-condition   | Sample data            | Steps                                                          | Expected output               | Actual output | Test result
1     | ADD         | CALC1        | Open calculator | <input1>=1, <input2>=1 | Click the button with the caption "1" and click the "+" button | "1" displayed in the text box | Appears       | Pass
Points to remember

A test case is a set of conditions or variables under which a tester will determine
if a requirement upon an application is partially or fully satisfied.
A test script is a short program written in a programming language used to test
part of the functionality of a software system.
All test cases should be mapped to their requirements.

Exercises
1 Re-testing and regression testing are one and the same (T/F)
2 Checking for spelling errors comes under functionality testing (T/F)
3 For the following screen, the business rules are as follows.
The application should open only when given a user name of more than 4 characters,
with the password "mercury" (both upper and lower cases allowed). Prepare the test
case for the same.

Chapter 7
Test Execution strategies

Objectives

Automated Testing
Risks of Not Automating Testing
Risks of Automating Testing
Where Do Tools Fit In?
The Major Issues
Top 10 Test Tools
Critical Success Factors
Test Execution - Manual Methods
Building the Test Environment
How to Create and Maintain Test Data
Pitfalls to Avoid
Test stop criteria

Introduction
The objective of this chapter is to familiarize the reader with the test execution strategies.
Automated Testing

Test automation is the use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions. Commonly, test automation involves automating a
manual process already in place that uses a formalized testing process.
Risks of not automating testing

Regression testing may take more time
Performance testing cannot be carried out manually
Preparing the test environment for every test case may take more time
Manual testing will increase the cost involved in test execution

Risks of automating testing

One tool may not fit every requirement
The effort required to prepare the automation may take more time
If the testing team follows a wrong approach, it may lead to more repetition of work
Maintenance of scripts should be done in a controlled manner
The testing process cannot be started until the product is ready for testing

Where Do Tools Fit In?

Testing the functionality


Testing the performance
Managing the testing process
Performing regression testing
Checking the source code

The Major Issues

Requirements may change frequently, which causes slippage in the test schedule
The order of test execution will also affect the test result, so proper planning
should be done before testing the product
Review of the test script is very important; if the script is not prepared properly,
the test result will not be accepted. So, test the test first.
Identifying critical testing areas should be done effectively by the testing team
An automation tool may generate run-time errors that are not relevant to the
testing

Testing Tools
AutoTester
AutoTester is a company involved in tool-based software automation testing. Founded in
1985, it is based in Dallas, Texas.
Products developed by AutoTester include:

AutoTester

AutoTester ONE

AutoController

WET
WET is an open source web automation testing tool which uses Watir as the library to
drive web pages.
WinRunner
Mercury Interactive's WinRunner is an automated functional GUI testing tool that
allows a user to record and play back UI interactions as test scripts. The software
implements a proprietary Test Script Language (TSL) that allows customization and
parameterization of user input.
QA Load
Compuware's QALoad helps you achieve loads that mimic realistic business usage as
well as validate that the system can meet acceptable service levels.
QA Run by Compuware
QARun is a functional testing tool that provides the automation capabilities needed to
quickly and productively create and execute test scripts.
TestPartner by Compuware
TestPartner is an automated functional and regression testing tool that has been specially
designed for testing complex applications based on Microsoft, Java and web based

technologies. Unique features of TestPartner allow both testers and developers to create
repeatable tests through visual scripting and automatic wizards. Users also have access to
the full capabilities of Microsoft's Visual Basic for Applications (VBA), allowing tests to
be as high level or detailed as needed.
SilkTest
SilkTest is the leading tool for automating the functional software testing process of
GUI applications. Its powerful test automation capabilities make it the perfect solution
for cross-platform, localization, and regression testing across a broad set of application
technologies, including Web, Java or .NET and client/server, within the confines of
today's short testing cycles. Designed for realizing automation benefits even when
applied to complex test cases, SilkTest provides a host of productivity-boosting features
that lets you easily cope with changes in the application under test.
TestPro
TestPro has a significant depth of knowledge in the area of application Load and Stress
testing.
Rational Robot
Rational Robot automates graphical functional, regression, and smoke testing of
applications created with any development tool
Rational Functional Tester by IBM
An advanced, automated functional and regression testing tool for testing applications based on
Java, Microsoft Visual Studio .NET, and Web technologies.

iMacros Web Browser Automation by iOpus


A form filler is a software program that automatically fills forms in a UI. Form fillers
can be part of a larger program, like a password manager or an enterprise single sign-on
(E-SSO) solution.
A form filler is the opposite of a screen scraper, which extracts data from a form.
Rational Performance Tester by IBM
Automated load and performance testing tool for any team validating the scalability and
reliability of complex e-business applications before deployment.
Jitbit Macro Recorder by Jitbit Software
JitBit Macro Recorder is not only a keyboard recorder, mouse recorder and player, but
also a powerful automation script editor. All recorded keystrokes and mouse activity can
be saved to disk as a macro (script) for later use, bound to hotkeys, extended with custom
commands or even compiled to an EXE file (a standalone Windows application). This
macro recording program can save time on repetitive tasks by automating activity in any
Windows application and by recording on-screen presentations and tutorials.

Macro Scheduler by MJTNet


Automate business processes and save wasted resources with Macro Scheduler.
With this scripting tool enterprises can integrate disparate systems, automate data
transfer, script web applications, automate ERP systems, perform automated software
testing and a whole lot more.
Advanced users will benefit from the powerful, professional code editor and debugger;
Microsoft VBScript; ability to use DLLs and the Windows API; Script Compiler, Dialog
Designer and script event handlers.

AutoIntern by Graphical Dynamics


With AutoIntern you will be able to employ groundbreaking features:

Send keystrokes to windows - even special keys, even to DOS boxes

Schedule your event to run at one or more times per day

Schedule your event to run on one or more specific dates

Schedule your event to run on specific days of the week

Schedule your event to run on specific days of the month

Schedule your event to run when specific files are created, modified, or deleted
Opalis Integration Server by Opalis

Incident Process Automation
Problem Process Automation
Configuration Process Automation
Change Process Automation
Release Process Automation

Industrial IT and automation by Cowex

Cowex a/s is an industrial IT and automation company specialised in solutions
for process-technical tasks, in particular computer-based control, regulation and
supervision within the areas of aquaculture, water supply and industrial automation.
JUnit
JUnit is a regression testing framework written by Erich Gamma and Kent Beck. It is
used by developers who implement unit tests in Java. JUnit is Open Source Software,
released under the Common Public License Version 1.0 and hosted on SourceForge.

Critical Success Factors

Proper planning
Correct approach towards automation
Skilled human resources
Analyzing all the risks associated with automation and preparing contingency
planning

Test Execution - Manual Methods

Collect the requirements from the User Requirement document
Analyze the requirements
Identify the areas to be tested
Prepare the detailed test plan and test cases
Prepare the environment for testing
Execute the test cases in the planned manner
Observe the behaviour
Record any abnormality in the defect log
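The execution steps above can be sketched as a simple loop: run each planned case, observe the behaviour, and log a defect on any abnormality. The test case records and `run` callables here are illustrative.

```python
def execute_test_cases(test_cases, defect_log):
    """Run each planned test case; log a defect whenever actual != expected."""
    for case in test_cases:
        actual = case["run"]()            # execute the step under test
        if actual != case["expected"]:    # observe the behaviour
            defect_log.append({           # report abnormality in the defect log
                "test_case": case["id"],
                "expected": case["expected"],
                "actual": actual,
            })

defect_log = []
cases = [
    {"id": "TC-1", "run": lambda: 2 + 2, "expected": 4},
    {"id": "TC-2", "run": lambda: "admin".upper(), "expected": "Admin"},  # fails
]
execute_test_cases(cases, defect_log)
print(len(defect_log))  # → 1 (only TC-2 is logged as a defect)
```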

Building the Test Environment


The test environment comprises all the conditions, circumstances, and influences
surrounding and affecting the testing of software.
How to Create and Maintain Test Data
Steps

Study the requirement
Analyze the requirement
Prioritize the test cases and rank them
Prepare the data
Prepare the process for data maintenance and implement it
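The prioritize-then-prepare steps above can be sketched as follows; the case records and data generators are illustrative assumptions, not part of any real tool.

```python
def prepare_test_data(test_cases):
    """Rank test cases by priority (0 = highest), then generate a data row each."""
    ranked = sorted(test_cases, key=lambda tc: tc["priority"])
    return [{"id": tc["id"], "data": tc["make_data"]()} for tc in ranked]

# Hypothetical cases: each carries a priority rank and a data generator
cases = [
    {"id": "TC-2", "priority": 1, "make_data": lambda: {"balance": 500}},
    {"id": "TC-1", "priority": 0, "make_data": lambda: {"balance": 0}},
]
rows = prepare_test_data(cases)
print([r["id"] for r in rows])  # → ['TC-1', 'TC-2']
```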

Test stop criteria

No major defects are present in the application
All the requirements are met
Code coverage targets are reached
Functionality coverage targets are reached
The budget is exhausted
The time allotted is exhausted
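A sketch of how these stop criteria might be evaluated together; the field names and the coverage threshold are assumptions for illustration only.

```python
def can_stop_testing(status):
    """Stop when quality and coverage criteria hold, or budget/time runs out."""
    quality_ok = status["major_defects_open"] == 0 and status["requirements_met"]
    coverage_ok = (status["code_coverage"] >= status["coverage_target"]
                   and status["functionality_coverage"] >= status["coverage_target"])
    out_of_runway = status["budget_exhausted"] or status["time_exhausted"]
    return (quality_ok and coverage_ok) or out_of_runway

status = {
    "major_defects_open": 0, "requirements_met": True,
    "code_coverage": 92, "functionality_coverage": 100,
    "coverage_target": 90,
    "budget_exhausted": False, "time_exhausted": False,
}
print(can_stop_testing(status))  # → True
```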

Points to remember

Test automation is the use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions.
Requirements may change frequently, which causes slippage in the test schedule.
Automation tools can be used for testing the functionality, testing the
performance, managing the testing process, performing regression testing, and
checking the source code.
Testing can be stopped in various situations: when no major defects are present
in the application, all the requirements are met, code and functionality coverage
targets are reached, or budget or time limits are reached.

Exercise
1) Automation will reduce the test execution time (T/F)
2) Find the tool which is used for testing Java programs
a) QTP b) WinRunner c) JUnit d) None
3) Find the odd one out
a) QTP b) WinRunner c) Test Director d) QA Run
4) QC is the leading tool for testing the functionality (T/F)

Chapter 8
Test Reporting
Objective of this chapter

Test Reporting
Defect Management system

Introduction
The objective of this chapter is to familiarize the reader with test reporting and defect
management system
Test Reporting
It is a process to collect data, analyze the data, and supplement the data with metrics,
graphs, charts and other pictorial representations that help the developers and users
interpret that data.
Prerequisites to test reporting

1. Define the test status data to be collected
2. Define the test metrics to be used in reporting test results
3. Define effective test metrics

Define and collect test status data

Processes need to be put into place to collect the data on the status of testing that will
be used in reporting test results. The four categories of data that testers collect most
often are:
Test result data
Test case results and test verification results
Defects
Efficiency
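A minimal sketch of collecting test result data into a status summary of the kind used in reports below; the input is simply a list of Pass/Fail outcomes.

```python
def summarize_results(results):
    """Collect test status data: totals, pass/fail counts, and pass rate."""
    executed = len(results)
    passed = sum(1 for r in results if r == "Pass")
    failed = executed - passed
    pass_rate = passed / executed * 100 if executed else 0.0
    return {"executed": executed, "passed": passed,
            "failed": failed, "pass_rate": pass_rate}

print(summarize_results(["Pass", "Pass", "Fail", "Pass"]))
# → {'executed': 4, 'passed': 3, 'failed': 1, 'pass_rate': 75.0}
```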
Define test metrics used in reporting

Steps in creating test metrics:
1. Establish a test metrics team
2. Inventory the existing IT measures
3. Develop a consistent set of metrics
4. Define the desired test metrics
5. Develop and implement the process for collecting measurement data
6. Monitor the process

Define effective test metrics

A metric is a mathematical number that shows a relationship between two variables.
Software metrics are used to quantify status or results, such as lines of code or defects
tracked.
Types
Process metrics
Product metrics
Software quality metrics
Test Reports - Samples
TEST EXECUTION SUMMARY

Cycle Number | Date          | No. of test cases planned | No. of test cases executed | No. of test cases failed
-            | 12th Dec 2005 | 20                        | 18                         | -

DEFECTS DATA SUMMARY

Severity: Show Stopper - 1

Defect status | Total | High | Medium | Low
Open          | 3     | 2    | 1      | -
Closed        | 3     | 1    | 1      | 1
Postponed     | 2     | 1    | -      | -

Defect Management system


A major test objective is to identify defects. Once identified, defects need to be
recorded and tracked until appropriate action is taken. This section explains a philosophy
and a process to find defects as quickly as possible and minimize their impact.
General principles

The primary goal is to prevent defects. Where this is not possible or practical,
the goals are to find the defect as quickly as possible and to minimize its impact.
The defect management process, like the entire software development process,
should be risk driven.
Defect measurement should be integrated into the development process and be
used by the project team to improve the development process.
As much as possible, the capture and analysis of the information should be
automated.
Defect information should be used to improve the process, and the process must
be altered accordingly.

Defect Naming
Level 1 - Name of the defect
Level 2 - Developmental phase or activity in which the defect occurred
Level 3 - The category of the defect (Missing, Inaccurate, Incomplete, Inconsistent)
Defect Management process

Defect prevention
Deliverable baseline
Defect discovery
Defect resolution
Process improvements
Management reporting
Sample defect tracking system
S.No.

Defect ID

Defect status
(Open/Close
)

Test case
ID

Priority How soon the defect should be fixed


Severity How bad the defect is

Run/Build
no

Priority
(1,2)

Severity
(1-5)
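The tracking fields above can be sketched as a record type; the field names and value ranges follow the sample table, while the `close` helper is an illustrative addition.

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One row of the defect tracking system sketched above."""
    defect_id: str
    test_case_id: str
    build_no: str
    priority: int          # how soon the defect should be fixed (1-2)
    severity: int          # how bad the defect is (1-5)
    status: str = "Open"   # Open/Close

    def close(self):
        """Mark the defect as resolved."""
        self.status = "Close"

d = DefectRecord("DEF-001", "TC-7", "build-42", priority=1, severity=2)
d.close()
print(d.status)  # → Close
```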

Chapter 9
Regression Testing
Objective of this chapter
What is Regression Testing?
Regression testing Vs Re-testing
When to do regression testing?
How to do regression testing?
Best practices

Introduction
The objective of this chapter is to familiarize the user with Regression testing
Regression testing
Regression testing is performed to ensure that a modification does not cause any side
effects.
It is also called selective re-testing.
Regression testing Vs Re-testing
Re-testing is done to verify whether defects have been closed.
Regression testing is done to ensure that fixing those defects has not introduced new
defects into the system.
When to do regression testing?
A reasonable amount of initial testing is already carried out
A good number of defects have been fixed
Defect fixes that can produce side effects are taken care of
Steps

Performing initial test


Understanding the criteria for selecting test cases
Classifying the test cases into different priorities
A methodology for selecting test cases
Resetting the test cases for test execution
Concluding the results of a regression cycle

Steps in performing an initial smoke test or sanity test

1. Identify the basic functionality that the product must satisfy
2. Design test cases to ensure that this basic functionality works, and package
them into a smoke test suite
3. Ensure that every time the product is built, this suite is run successfully before
anything else is run
4. If this suite fails, escalate to the developers to identify the changes and perhaps
change or roll back the changes to a state where the smoke test suite succeeds
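The four steps above can be sketched as a small harness; the build object and the basic-functionality checks are hypothetical.

```python
def run_smoke_suite(build, smoke_tests):
    """Run the smoke suite against a build; return True only if every check passes."""
    for name, check in smoke_tests:
        if not check(build):
            print(f"Smoke test '{name}' failed - escalate to the developers")
            return False          # block further testing on this build
    return True                   # build accepted for deeper testing

# Hypothetical build status and basic-functionality checks
build = {"starts": True, "login_works": True}
smoke_tests = [
    ("application starts", lambda b: b["starts"]),
    ("user can log in",    lambda b: b["login_works"]),
]
print(run_smoke_suite(build, smoke_tests))  # → True
```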

Understanding the criteria for selecting test cases

1. The defect fixes and changes made in the current build
2. The ways to test the current changes
3. The impact that the current changes may have on other parts of the system
4. The ways of testing the other impacted parts

Classifying the test cases into different priorities

Priority 0 - Sanity tests that check basic functionality; they are run to accept the build
for further testing, and also when a product goes through a major change.
Priority 1 - Test cases that use the basic and normal setup; they deliver high project
value to both development and customers.
Priority 2 - These test cases deliver moderate project value. They are executed as part of
the testing cycles and selected for regression testing on a need basis.
A methodology for selecting test cases
Case 1 - If the criticality and impact of the defect fixes are low, it is enough
that a test engineer selects a few test cases from the test case database.
Case 2 - If the criticality and impact of the defect fixes are medium, we need
to execute all priority 0 and priority 1 test cases. If the defect fixes need additional
test cases from priority 2, those test cases can also be selected for regression
testing.
Case 3 - If the criticality and impact of the defect fixes are high, we need to
execute all priority 0, priority 1, and a carefully selected subset of priority 2 test
cases.
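The three cases above can be sketched as a selection function; the suite layout (a mapping from priority 0/1/2 to lists of test case names) and the subset sizes are illustrative.

```python
def select_regression_cases(criticality, suite):
    """Apply the three cases above; `suite` maps priority (0/1/2) to test lists."""
    if criticality == "low":
        # Case 1: a few test cases from the test case database are enough
        return suite[0][:2]
    if criticality == "medium":
        # Case 2: all priority 0 and priority 1 cases (priority 2 only on need)
        return suite[0] + suite[1]
    # Case 3 (high): priority 0, priority 1, and a subset of priority 2
    return suite[0] + suite[1] + suite[2][:3]

suite = {0: ["sanity-1"], 1: ["core-1", "core-2"], 2: ["edge-1", "edge-2"]}
print(len(select_regression_cases("medium", suite)))  # → 3
```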
Resetting the test cases for test execution
Resetting of test cases is not expected to be done often, and it needs to be done with the
following considerations in mind:

When there is a major change in the product
When there is a change in the build procedure which affects the product
In a large release cycle where some test cases were not executed for a long time
When the product is in the final regression test cycle with a few selected test cases
When there is a situation where the expected results of the test cases could be quite
different from the previous cycles
The test cases related to defect fixes and production problems need to be
evaluated release after release
Whenever existing functionality is removed
Test cases that consistently give positive results
Negative test cases which do not produce any defects

Concluding the results of Regression testing

Current result | Previous result | Conclusion from regression | Remarks
Fail           | Pass            | Fail                       | Need to improve the regression process and code reviews

Best practices
1. Regression testing can be used for all types of releases
2. Mapping defect identifiers with test cases improves regression quality
3. Create and execute a regression test bed daily
4. Ask your best test engineers to select the test cases
5. Detect defects, and protect your product from defects and defect fixes
Points to remember

Regression testing is performed to ensure that a modification does not cause any
side effects
Re-testing and regression testing are different
Whenever there is a change in a module, we need to perform regression testing

Exercise

1. Regression testing is also called ____
2. Selecting test cases for regression testing is done by ______
3. Priority 0 test cases are given low priority for testing (T/F)
4. A product for automating payroll is slated to be released in March. The
following are some of its features. Identify which of them you would put
through smoke tests in the three months:
Screens for entering employee information
Statutory year-end reports
Calculating the pay slip details for a month
Screens for maintaining the address of the employee

Chapter 10
Types of Software Testing
Objective of this chapter
Types of testing
Smoke testing
Sanity testing
Gorilla testing
Ad-hoc testing
Client/server testing
Web based testing
Stress testing
Volume testing
Load testing
Security Testing
Database Testing
Compatibility Testing
Internationalization
Data migration

Introduction
The objective of this chapter is to familiarize the reader with various types of software
testing.
Smoke testing
Testing done to ensure that the product is testable
Smoke test is a collection of written tests that are performed on a system prior to being
accepted for further testing. This is also known as a build verification test.
Sanity testing
A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or
calculation, specifically a very brief run-through of the functionality of a computer
program, system, calculation, or other analysis, to assure that the system or methodology
works as expected, often prior to a more exhaustive round of testing.
Gorilla testing
It is an intense round of testing, quite often redirecting all available resources to the
activity. The idea is to test as much of the application in as short a period of time as
possible.
Ad-hoc testing
Testing phase where the tester tries to 'break' the system by randomly trying the system's
functionality. It can include negative testing as well
Client/server testing
Testing the product in the client/server environment with respect to functionality and
performance

Web based testing

Testing applications that are accessed through a web browser.
Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its
specified requirements to determine the load under which it fails and how. Often this is
performance testing using a very high level of simulated load.
Volume testing
Testing which confirms that any values that may become large over time (such as
accumulated counts, logs, and data files), can be accommodated by the program and will
not cause the program to stop working or degrade its operation in any manner.

Load testing
It is the process of creating demand on a system or device and measuring its response.
Load testing generally refers to the practice of modeling the expected usage of a software
program by simulating multiple users accessing the program's services concurrently. As
such, this testing is most relevant for multi-user systems, often one built using a
client/server model, such as web servers.
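A minimal sketch of the idea behind load testing, simulating concurrent users with threads; the `request` callable is a stand-in for the real service call, and average response time stands in for a full set of measurements.

```python
import threading
import time

def simulate_users(n_users, request):
    """Create demand with n_users concurrent requests and measure response times."""
    results = []

    def one_user():
        start = time.perf_counter()
        request()                                 # the service call under load
        results.append(time.perf_counter() - start)

    threads = [threading.Thread(target=one_user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results) / len(results)            # average response time (seconds)

avg = simulate_users(10, lambda: time.sleep(0.01))
print(f"average response time: {avg:.4f}s")
```

A real load test would drive an actual server over the network and track percentiles and throughput as well, but the shape (concurrent simulated users plus measured responses) is the same.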
Security Testing
Techniques used to confirm the design and/or operational effectiveness of security
controls implemented within a system. Examples: attack and penetration studies, to
determine whether adequate controls have been implemented to prevent breaches of
system controls and processes; and password strength testing using tools (password
crackers).
Database Testing
Testing the database with respect to security, query execution
Compatibility Testing
Testing the application in various environments, such as different browsers and
operating systems, is called compatibility testing.
Internationalization
Internationalization is the process of designing and coding a product so it can perform
properly when it is modified for use in different languages and locales.
Localization (also known as L10N) refers to the process, on a properly internationalized
base product, of translating messages and documentation as well as modifying other
locale specific files.

Data migration
Testing the data migration ensures that the change of data from one form to another
does not have any adverse effect on the system.
Comparison between smoke testing and sanity testing
Smoke tests get their name from the electronics industry. The circuits are laid out on a
breadboard and power is applied; if anything starts smoking, there is a problem. In the
software industry, smoke testing is a shallow and wide approach to the application: you
test all areas of the application without getting too deep. This is also known as a Build
Verification Test or BVT.
In comparison, sanity testing is usually narrow and deep; it looks at only a few areas,
but at all aspects of that part of the application. A smoke test is scripted, using either a
written set of tests or an automated test, whereas a sanity test is usually unscripted.

Chapter 11
Testing Metrics
Objective of this chapter
What are metrics?
Why metrics?
Types of metrics
Examples of metrics

Introduction
The objective of this chapter is to familiarize the user with the testing metrics and
measurements
What are metrics?
A metric is a quantitative measure of the degree to which a system, component, or
process possesses a given attribute.
Metrics derive information from raw data with a view to helping in decision making.
Why metrics in testing?
Metrics in testing help in identifying:
When to make the release
What to release
Whether the product is being released with known quality
Examples of testing metrics

Total number of tests
Number of tests executed to date
Number of tests executed successfully to date

Examples of defect metrics

Total number of defects corrected in each activity
Total number of defects detected in each activity
Average duration between defect detection and defect correction
Average effort to correct a defect
Total number of defects remaining at delivery

Test metric categories

Metrics unique to test
Complexity measurements
Project metrics
Size measurements
Defect metrics
Product measures
Satisfaction measures
Productivity metrics

Metrics unique to test

Defect removal efficiency - The percentage of total defects occurring in a phase or
activity that are removed by the end of that activity
Defect density - The number of defects in a particular product
Mean time to failure - The average operational time it takes before a software
system fails
Mean time to last failure - An estimate of the time it will take to remove the last
defect from the software
Coverage metrics - The percentage of instructions or paths executed during tests
Test cycles - The number of testing cycles required to complete testing
Requirements tested - The percentage of requirements tested during testing
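A few of these metrics can be computed directly from raw counts; a sketch, with illustrative numbers:

```python
def defect_removal_efficiency(found_in_phase, found_later):
    """Percentage of the total defects from a phase removed by the end of it."""
    total = found_in_phase + found_later
    return found_in_phase / total * 100 if total else 0.0

def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

def requirements_tested(tested, total):
    """Percentage of requirements exercised during testing."""
    return tested / total * 100

print(defect_removal_efficiency(45, 5))   # → 90.0
print(defect_density(30, 15))             # → 2.0
print(requirements_tested(90, 100))       # → 90.0
```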

Complexity measurements

Size of a unit/module - Larger modules/units are considered more complex
Logic complexity - The number of opportunities to branch/transfer within a
single module
Documentation complexity - The difficulty level in reading documentation, usually
expressed as an academic grade level
Project metrics

Percent of budget utilized - Budget used / total budget
Days behind or ahead of schedule - Days planned vs. actual date
Percent of change of project scope - No. of requirements on day 1 vs. no. of
requirements on the last day
Percent of project completed - No. of functionalities tested / total no. of
functionalities * 100

Size measurements

KLOC - Kilo lines of code (1,000 lines, excluding comments)
Function points - A defined unit of size for software
Pages or words of documentation

Defect metrics

Defects related to the size of the software
Severity of defects - Such as high, medium, low
Priority of defects - Importance of correcting defects
Age of defects - Number of days the defect has been uncovered but not
corrected
Defects uncovered in testing
Cost to locate a defect

Product metrics

Defect density - The expected number of defects that will occur in a product during
development

Satisfaction metrics

Ease of use - The amount of effort required to use the software
Customer complaints - No. of transactions vs. no. of complaints
Customer subjective assessment - Feedback from the customer
Acceptance criteria met - Percentage of acceptance criteria met

Productivity metrics

Cost of testing in relation to overall project costs - Testing cost vs. project cost
Under budget / ahead of schedule - Planned vs. actual
Software defects uncovered after the software is placed into operational status -
No. of defects
Amount of testing using automated tools - Automated test cases vs. manual test
cases

Exercise
Prepare the metrics to identify the following.
Effort variance
Schedule variance

Chapter 12
Quality standards & methodologies
Objectives of this chapter

CMMI
TMM
Six Sigma
ISO standards

Introduction
The objective of this chapter is to familiarize the reader with the quality standards and
methodologies
CMM Capability Maturity Model
Capability Maturity Model (CMM) broadly refers to a process improvement approach
that is based on a process model. CMM also refers specifically to the first such model,
developed by the Software Engineering Institute (SEI) in the mid-1980s, as well as the
family of process models that followed. A process model is a structured collection of
practices that describe the characteristics of effective processes; the practices included are
those proven by experience to be effective.
The Capability Maturity Model can be used to assess an organization against a scale of
five process maturity levels. Each level ranks the organization according to its
standardization of processes in the subject area being assessed. The subject areas can be
as diverse as software engineering, systems engineering, project management, risk
management, system acquisition, information technology (IT) services and personnel
management.
CMM was developed by the SEI at Carnegie Mellon University in Pittsburgh. It has been
used extensively for avionics software and government projects in North America,
Europe, Asia, Australia, South America, and Africa. Currently, some government
departments require software development contract organizations to achieve and operate
at a level 3 standard.

Levels of the CMM


There are five levels of the CMM. According to the SEI,
"Predictability, effectiveness, and control of an organization's software processes
are believed to improve as the organization moves up these five levels. While not
rigorous, the empirical evidence to date supports this belief."

Level 1 - Initial
At maturity level 1, processes are usually ad hoc and the organization usually does not
provide a stable environment. Success in these organizations depends on the competence

and heroics of the people in the organization and not on the use of proven processes. In
spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce
products and services that work; however, they frequently exceed the budget and
schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, abandon
processes in times of crisis, and be unable to repeat their past successes.

Level 1 software project success depends on having high quality people.

Level 2 - Repeatable
At maturity level 2, software development successes are repeatable. The processes may
not repeat for all the projects in the organization. The organization may use some basic
project management to track cost and schedule.
Process discipline helps ensure that existing practices are retained during times of stress.
When these practices are in place, projects are performed and managed according to their
documented plans.
Project status and the delivery of services are visible to management at defined points
(for example, at major milestones and at the completion of major tasks).
Basic project management processes are established to track cost, schedule, and
functionality. The minimum process discipline is in place to repeat earlier successes on
projects with similar applications and scope. There is still a significant risk of exceeding
cost and time estimates.

Level 3 - Defined
The organization's set of standard processes, which is the basis for level 3, is established
and improved over time. These standard processes are used to establish consistency
across the organization. Projects establish their defined processes from the organization's
set of standard processes according to tailoring guidelines.
The organization's management establishes process objectives based on the
organization's set of standard processes and ensures that these objectives are
appropriately addressed.
A critical distinction between level 2 and level 3 is the scope of standards, process
descriptions, and procedures. At level 2, the standards, process descriptions, and
procedures may be quite different in each specific instance of the process (for example,
on a particular project). At level 3, the standards, process descriptions, and procedures
for a project are tailored from the organization's set of standard processes to suit a
particular project or organizational unit.

Level 4 - Managed
Using precise measurements, management can effectively control the software
development effort. In particular, management can identify ways to adjust and adapt the
process to particular projects without measurable losses of quality or deviations from
specifications. Organizations at this level set quantitative quality goals for both software
process and software maintenance.
Subprocesses are selected that significantly contribute to overall process performance.
These selected subprocesses are controlled using statistical and other quantitative
techniques.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of
process performance. At maturity level 4, the performance of processes is controlled
using statistical and other quantitative techniques, and is quantitatively predictable. At
maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing
Maturity level 5 focuses on continually improving process performance through both
incremental and innovative technological improvements. Quantitative process-improvement
objectives for the organization are established, continually revised to reflect
changing business objectives, and used as criteria in managing process improvement. The
effects of deployed process improvements are measured and evaluated against the
quantitative process-improvement objectives. Both the defined processes and the
organization's set of standard processes are targets of measurable improvement activities.
Process improvements to address common causes of process variation and measurably
improve the organization's processes are identified, evaluated, and deployed.
Optimizing processes that are nimble, adaptable and innovative depends on the
participation of an empowered workforce aligned with the business values and objectives
of the organization. The organization's ability to rapidly respond to changes and
opportunities is enhanced by finding ways to accelerate and share learning.
A critical distinction between maturity level 4 and maturity level 5 is the type of process
variation addressed. At maturity level 4, processes are concerned with addressing special
causes of process variation and providing statistical predictability of the results. Though
processes may produce predictable results, the results may be insufficient to achieve the
established objectives. At maturity level 5, processes are concerned with addressing
common causes of process variation and changing the process (that is, shifting the mean
of the process performance) to improve process performance (while maintaining
statistical predictability) to achieve the established quantitative process-improvement
objectives.

Process areas
The CMMI contains several key process areas indicating the aspects of product
development that are to be covered by company processes.
Key Process Areas of the Capability Maturity Model Integration (CMMI)
Abbreviation   Name                                          Area                  Maturity Level
CAR            Causal Analysis and Resolution                Support               5
CM             Configuration Management                      Support               2
DAR            Decision Analysis and Resolution              Support               3
IPM            Integrated Project Management                 Project Management    3
ISM            Integrated Supplier Management                Project Management    3
IT             Integrated Teaming                            Project Management    3
MA             Measurement and Analysis                      Support               2
OEI            Organizational Environment for Integration    Support               3
OID            Organizational Innovation and Deployment      Process Management    5
OPD            Organizational Process Definition             Process Management    3
OPF            Organizational Process Focus                  Process Management    3
OPP            Organizational Process Performance            Process Management    4
OT             Organizational Training                       Process Management    3
PI             Product Integration                           Engineering           3
PMC            Project Monitoring and Control                Project Management    2
PP             Project Planning                              Project Management    2
PPQA           Process and Product Quality Assurance         Support               2
QPM            Quantitative Project Management               Project Management    4
RD             Requirements Development                      Engineering           3
REQM           Requirements Management                       Engineering           2
RSKM           Risk Management                               Project Management    3
SAM            Supplier Agreement Management                 Project Management    2
TS             Technical Solution                            Engineering           3
VAL            Validation                                    Engineering           3
VER            Verification                                  Engineering           3

ISO - International Organization for Standardization


The International Organization for Standardization (ISO) is an international
standard-setting body composed of representatives from national standards bodies.
Founded on February 23, 1947, the organization produces world-wide industrial and
commercial standards, the so-called ISO standards.
While the ISO defines itself as a non-governmental organization (NGO), its ability to set
standards which often become law through treaties or national standards makes it more
powerful than most NGOs, and in practice it acts as a consortium with strong links to
governments. Participants include several major corporations and at least one standards
body from each member country.
ISO cooperates closely with the International Electrotechnical Commission (IEC), which
is responsible for standardization of electrical equipment.

The name
The organization is usually referred to simply as "ISO". It is a common misconception
that ISO stands for "International Standards Organization", or something similar. ISO is
not an acronym; it comes from the Greek word isos, meaning "equal". In English, the
organization's long-form name is "International Organization for Standardization",
while in French it is called "Organisation internationale de normalisation". These names
would result in different acronyms in ISO's two official languages, English (IOS) and
French (OIN), thus the founders of the organization chose "ISO" as the universal short
form of its name.
The ISO 9001 standard sets out its requirements in the following clauses.
4 - Quality Management System
Your Quality Management System must contain various elements. This clause describes
the requirements for your Quality Manual, control of documents, control of records, etc.
5 - Management Responsibility

Your "Top Management" must fulfil their part of this process. The standard states what it
requires from them.
6- Resource Management
You must supply sufficient resources so that your system can work. This includes
facilities, people, training, equipment, etc.

7 - Product Realization
You must control the process from quotation/receipt of order through design of product or
service, procurement of parts, manufacture of goods or provision of your service, through
to delivery and subsequent servicing.
8 - Measurement, Analysis & Improvement
You must ensure that what you provide to your Customers is correct. Whilst making
measurements or checking the goods or services, you must analyze the information and
use it to improve your system.
Six Sigma
Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects
(driving towards six standard deviations between the mean and the nearest specification
limit) in any process -- from manufacturing to transactional and from product to service.
Six Sigma at many organizations simply means a measure of quality that strives for near
perfection.
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per
million opportunities. A Six Sigma defect is defined as anything outside of customer
specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.
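The DPMO arithmetic described above can be sketched in a few lines. This is a minimal illustration, not part of any Six Sigma toolset; the defect and unit counts used in the example call are hypothetical.

```python
# A minimal sketch of the DPMO (Defects Per Million Opportunities) calculation.
# An "opportunity" is any chance for a defect, so the total number of
# opportunities is the number of units inspected times the opportunities
# per unit. The figures below are hypothetical, for illustration only.

def dpmo(defects, units, opportunities_per_unit):
    """Return Defects Per Million Opportunities."""
    total_opportunities = units * opportunities_per_unit
    return defects / total_opportunities * 1_000_000

# e.g. 17 defects found in 5,000 units, each unit having 4 defect opportunities
result = dpmo(defects=17, units=5000, opportunities_per_unit=4)
print(round(result))  # 850 DPMO -- far above the Six Sigma target of 3.4
```

A process operating at the Six Sigma level would yield at most 3.4 from this calculation, i.e. 3.4 defects for every million opportunities.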
Six Sigma is a business improvement methodology, originally developed by Motorola to
systematically improve processes by eliminating defects. Defects are defined as
unacceptable deviation from the mean or target. The objective of Six Sigma is to deliver
high performance, reliability, and value to the end customer. Since it was originally
developed, Six Sigma has enjoyed wide popularity as an important element of many Total
Quality Management (TQM) initiatives.
The process was pioneered by Bill Smith at Motorola in 1986 and was originally
defined as a metric for measuring defects and improving quality, and a methodology to
reduce defect levels below 3.4 Defects Per (one) Million Opportunities (DPMO), or put
another way, a methodology of controlling a process to the point of plus or minus six
sigma (standard deviations) from a centerline (for a total span of twelve sigma). Six
Sigma has now grown beyond defect control.
Six Sigma is a registered service mark and trademark of Motorola, Inc. Motorola has
reported over US$17 billion in savings from Six Sigma to date.
In addition to Motorola, companies which also adopted Six Sigma methodologies early on
and continue to practice it today include Honeywell International (previously known as
AlliedSignal), Raytheon and General Electric (introduced by Jack Welch). These
companies have reportedly saved billions of dollars thanks to the aggressive
implementation and daily practice of Six Sigma methodologies.
A recent Six Sigma trend lies in advancing the methodology by integrating it with
TRIZ for inventive problem solving and product design.

Methodology
Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to
improve an existing business process. DMADV is used to create new product designs or
process designs in such a way that the result is more predictable, mature and defect-free
performance.
Also see DFSS (Design for Six Sigma) quality. Sometimes a DMAIC project may turn
into a DFSS project because the process in question requires complete redesign to bring
about the desired degree of improvement.


DMAIC
Basic methodology consists of the following five steps:
Define the process improvement goals that are consistent with customer demands
and enterprise strategy.
Measure the current process and collect relevant data for future comparison.
Analyze to verify relationship and causality of factors. Determine what the
relationship is, and attempt to ensure that all factors have been considered.
Improve or optimize the process based upon the analysis using techniques like
Design of Experiments.
Control to ensure that any variances are corrected before they result in defects.
Set up pilot runs to establish process capability, transition to production and
thereafter continuously measure the process and institute control mechanisms.

DMADV
Basic methodology consists of the following five steps:
Define the goals of the design activity that are consistent with customer demands
and enterprise strategy.
Measure and identify CTQs (critical-to-quality characteristics), product capabilities,
production process capability, and risk assessments.
Analyze to develop and design alternatives, create high-level design and evaluate
design capability to select the best design.
Design details, optimize the design, and plan for design verification. This phase
may require simulations.
Verify the design, set up pilot runs, implement the production process and hand over
to process owners.

Some people have used DMAICR (adding Realize). Others contend that focusing on the
financial gains realized through Six Sigma is counter-productive and that such financial
gains are simply byproducts of a good process improvement.
Another flavor of Design for Six Sigma is the DMEDI method. This process is
almost exactly like the DMADV process, utilizing the same toolkit, but with a different
acronym. DMEDI stands for Define, Measure, Explore, Develop, Implement.

Chapter 13
Testing Automation
Test automation
What is test automation?
Benefits of automation
Terms used
Types of automation tools

Introduction
The objective of this chapter is to familiarize the reader with automation testing.
What is test automation?
Developing software to test software is called test automation.
Benefits of automation
Fast
Reliable
Repeatable
Programmable
Comprehensive
Reusable
Types of automation tools
Functionality / Regression testing tools
Performance testing tools
Test Management tools
Functionality/Regression testing tools
These help the user to create/edit/run scripts for testing functionality by using the
following features:
Checkpoints - To verify the actual output against the expected output
Data-driven testing - To test the application with multiple sets of data
Object repository files - To store information on various objects
Exception handling - To handle run-time errors
Examples: QuickTest Professional, WinRunner
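The checkpoint and data-driven ideas above can be illustrated in a tool-agnostic way with plain Python. This is a minimal sketch, not QuickTest Professional or WinRunner syntax; the add() function stands in for a hypothetical application under test.

```python
# A minimal sketch of two features described above: a "checkpoint"
# (comparing actual output with expected output) and data-driven testing
# (running the same check against multiple sets of data).
import unittest

def add(a, b):
    """Stand-in for the application under test (hypothetical)."""
    return a + b

class DataDrivenCheckpointTest(unittest.TestCase):
    # Each row is one data set: (input_a, input_b, expected_output)
    test_data = [(1, 2, 3), (0, 0, 0), (-5, 5, 0)]

    def test_add(self):
        for a, b, expected in self.test_data:
            with self.subTest(a=a, b=b):
                # Checkpoint: the actual output must match the expected output
                self.assertEqual(add(a, b), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DataDrivenCheckpointTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Commercial tools wrap the same pattern in recording and scripting front-ends: the checkpoint becomes a GUI verification point, and the data sets come from an external data table.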
Performance testing tools
These help the user to create/edit/run scripts for testing performance by using the
following features:
Virtual users - Creating dummy users in the system
Response time - Measuring the time gap between the server and the client
Load time - The amount of time taken for loading the page
Example: LoadRunner
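The "virtual users" and "response time" ideas above can be sketched with only the standard library. This is an illustration, not LoadRunner script syntax; fake_server_call() is a hypothetical stand-in for a real request to a server.

```python
# A minimal sketch of performance-testing concepts: several threads act as
# virtual users, and each one measures the response time of a (simulated)
# server call. fake_server_call() is a hypothetical stand-in for a request.
import threading
import time

def fake_server_call():
    time.sleep(0.01)  # simulate 10 ms of server processing

def virtual_user(results, user_id):
    start = time.perf_counter()
    fake_server_call()
    results[user_id] = time.perf_counter() - start  # response time in seconds

results = {}
threads = [threading.Thread(target=virtual_user, args=(results, i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

avg = sum(results.values()) / len(results)
print(f"Average response time across {len(results)} virtual users: {avg:.3f}s")
```

A real performance tool scales this to thousands of virtual users, correlates the timings with server-side resource usage, and reports percentiles rather than a single average.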
Test Management tools
These help the user to manage the entire testing activity by using the following features:
Creating Requirements
Creating Test plans
Creating Manual/Automation test cases
Execution of test cases
Defect logging
Example: Quality Center
Points to remember
Developing software to test software is called test automation.
Functionality/Regression testing tools, Performance testing tools and Test
Management tools are the types of tools.
Checkpoints are used for verifying the actual output against the expected output.
