Chapter 6
Test case development
Chapter 7
Test Execution strategies
Chapter 8
Test Reporting
DEFECTS DATA SUMMARY
Chapter 9
Regression Testing
Chapter 10
Types of Software Testing
Chapter 11
Testing Metrics
Chapter 12
Quality standards & methodologies
Levels of the CMM
Level 1 - Initial
Level 2 - Repeatable
Level 3 - Defined
Level 4 - Managed
Level 5 - Optimizing
Process areas
The name
Methodology
DMAIC
DMADV
Chapter 13
Testing Automation
Chapter 1
Basics of Software Testing
Objective of this chapter
What is testing?
What is a defect
Errors, Faults, and Failures
Quality Assurance
Quality Control (Seven QC tools)
The Scope of the Test Effort
The Challenge of Effective and Efficient Testing
The Limits of Testing
Prioritizing Tests
Cost of quality
Software quality factors
The Fundamental Test Process
Review process (Inspection, Walkthrough, Peer, Test coverage)
Introduction
The objective of this chapter is to familiarize the reader with the basics of software testing. Testing serves several objectives:
Finding defects
Gaining confidence about the level of quality and providing information
Preventing defects
1.2 Terminologies
Errors
It refers to an incorrect action or calculation performed by software
Fault
An accidental condition that causes a functional unit to fail to perform its required
function.
Failures
It refers to the inability of the system to produce the intended result
Defect or Bug
Non-conformance of software to its requirements is commonly called a defect
Once installed, quality assurance would measure these processes to identify weaknesses
and then correct those weaknesses to continually improve the process.
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'
Quality Control
Quality control is the process by which the product quality is compared with applicable
standards, and the action taken when nonconformance is detected. Quality control is a
line function, and the work is done within a process to ensure that the work product
conforms to standards and requirements.
Quality control activities focus on identifying defects in the actual products produced.
These activities begin at the start of the software development process with reviews of
requirements, and continue until all application testing is complete
It is possible to have quality control without quality assurance. For example, a test team
may be in place to conduct system testing at the end of the development, regardless of
whether that system is produced using a software development methodology.
Both quality assurance and quality control are separate and distinct from internal
audit function
Internal auditing is an independent appraisal activity within an organization for the
review of operations, and is a service to management. It is a managerial control that
functions by measuring and evaluating the effectiveness of other controls
Quality control
It helps in the execution of the process
It is the producer's responsibility
It identifies weaknesses in the product
Check sheet
Pareto Chart
Stratification
Cause and Effect Diagram
Histogram
Scatter Diagram
Control Chart
Check sheet
A structured, prepared form for collecting and analyzing data; a generic tool that can be
adapted for a wide variety of purposes.
Description
A check sheet is a structured, prepared form for collecting and analyzing data. This is a
generic tool that can be adapted for a wide variety of purposes.
When to Use
When data can be observed and collected repeatedly by the same person or at the
same location.
Procedure
1. Design the form. Set it up so that data can be recorded simply by making check marks or Xs or similar symbols, and so that data do not have to be recopied for analysis.
2. Test the check sheet for a short trial period to be sure it collects the appropriate data and is easy to use.
3. Each time the targeted event or problem occurs, record data on the check sheet.
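The procedure above can also be kept as a simple tally in code. A minimal sketch in Python; the interruption categories and events are invented for illustration:

```python
from collections import Counter

# Invented interruption log: (reason, day) recorded as each event occurs
events = [
    ("wrong number", "Mon"), ("info request", "Mon"),
    ("wrong number", "Tue"), ("boss", "Tue"),
    ("wrong number", "Tue"), ("info request", "Wed"),
]

# One check mark per event, keyed by reason (the rows of the check sheet)
tally = Counter(reason for reason, _ in events)

for reason, count in tally.most_common():
    print(f"{reason:15s} {'X' * count}")
```

Each row prints one X per occurrence, mirroring the tick marks on a paper check sheet.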
Example
The figure below shows a check sheet used to collect data on telephone interruptions. The
tick marks were added as data was collected over several weeks.
Pareto chart
Description
A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or
money), and are arranged with longest bars on the left and the shortest to the right. In this
way the chart visually depicts which situations are more significant.
When to Use
When there are many problems or causes and you want to focus on the most
significant.
Procedure
- Decide what period of time the chart will cover: one work cycle? One full day? A week?
- Collect the data, recording the category each time. (Or assemble data that already exist.)
- Determine the appropriate scale for the measurements you have collected. The maximum value will be the largest category subtotal (or, if you will draw the optional cumulative line described below, the sum of all subtotals). Mark the scale on the left side of the chart.
- Construct and label bars for each category. Place the tallest at the far left, then the next tallest to its right, and so on. If there are many categories with small measurements, they can be grouped as "other".

The last two steps are optional but are useful for analysis and communication.

- Calculate the percentage for each category: the subtotal for that category divided by the total for all categories. Draw a right vertical axis and label it with percentages. Be sure the two scales match: for example, the left measurement that corresponds to one-half should be exactly opposite 50% on the right scale.
- Calculate and draw cumulative sums: add the subtotals for the first and second categories, and place a dot above the second bar indicating that sum. To that sum add the subtotal for the third category, and place a dot above the third bar for that new sum. Continue the process for all the bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100 percent on the right scale.
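The sorting and cumulative-percentage arithmetic behind a Pareto chart can be sketched in a few lines of Python; the complaint counts here are invented for illustration, not the data behind the figures:

```python
# Hypothetical complaint counts per category (not the data from the figures)
counts = {"documents": 42, "delivery": 18, "packaging": 9,
          "billing": 6, "other": 5}

# Arrange the longest bars first, as on the chart
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(counts.values())

# Cumulative sums give the dots for the optional right-hand percentage scale
cumulative = 0
for category, n in ordered:
    cumulative += n
    print(f"{category:10s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

The last line always reaches 100%, matching the requirement that the final dot hit 100 percent on the right scale.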
Examples
Figure 1 shows how many customer complaints were received in each of five categories.
Figure 2 takes the largest category, documents, from Figure 1, breaks it down into six
categories of document-related complaints, and shows cumulative values.
If all complaints cause equal distress to the customer, working on eliminating document-related complaints would have the most impact, and of those, working on quality certificates should be most fruitful.
Figure 1
Figure 2
Stratification
Also called: flowchart or run chart
Description
Stratification is a technique used in combination with other data analysis tools. When
data from a variety of sources or categories have been lumped together, the meaning of
the data can be impossible to see. This technique separates the data so that patterns can be
seen.
When to Use
When data come from several sources or conditions, such as shifts, days of the
week, suppliers or population groups.
Procedure
1. Before collecting data, consider which information about the sources of the data might have an effect on the results. Set up the data collection so that you collect that information as well.
2. When plotting or graphing the collected data on a scatter diagram, control chart, histogram or other analysis tool, use different marks or colors to distinguish data from various sources. Data that are distinguished in this way are said to be stratified.
Example
The ZZ-400 manufacturing team drew a scatter diagram to test whether product purity
and iron contamination were related, but the plot did not demonstrate a relationship. Then
a team member realized that the data came from three different reactors. The team member redrew the diagram, using a different symbol for each reactor's data:
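The same check can be made numerically by computing the purity-iron correlation separately for each reactor rather than over the lumped data. A sketch with invented numbers (the real ZZ-400 data are not reproduced here):

```python
from statistics import mean

def pearson(xs, ys):
    # Plain Pearson correlation coefficient
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented (purity %, iron ppm) readings keyed by reactor
data = {
    "reactor 1": ([99.1, 99.3, 99.5], [12, 10, 8]),
    "reactor 2": ([98.2, 98.6, 99.0], [20, 16, 12]),
}

# Stratified view: analyze each reactor separately, as with separate symbols
for reactor, (purity, iron) in data.items():
    print(reactor, round(pearson(purity, iron), 2))
```

With these invented numbers each reactor shows a strong per-source relationship that would be blurred if the readings were pooled.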
Cause and Effect Diagram
Also called: fishbone diagram, Ishikawa diagram
Description
The fishbone diagram identifies many possible causes for an effect or problem. It can be used to structure a brainstorming session. It immediately sorts ideas into useful categories.
When to Use
Procedure
Materials needed: flipchart or whiteboard, marking pens.
1. Agree on a problem statement (effect). Write it at the center right of the flipchart or whiteboard. Draw a box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult, use generic headings:
   - Methods
   - Machines (equipment)
   - People (manpower)
   - Materials
   - Measurement
   - Environment
3. Brainstorm all the possible causes of the problem. Ask: "Why does this happen?" As each idea is given, the facilitator writes it as a branch from the appropriate category. Causes can be written in several places if they relate to several categories.
4. Again ask "Why does this happen?" about each cause. Write sub-causes branching off the causes. Continue to ask "Why?" and generate deeper levels of causes. Layers of branches indicate causal relationships.
5. When the group runs out of ideas, focus attention on places on the chart where ideas are few.
Example
This fishbone diagram was drawn by a manufacturing team to try to understand the
source of periodic iron contamination. The team used the six generic headings to prompt
ideas. Layers of branches show thorough thinking about the causes of the problem.
Histogram
Description
A frequency distribution shows how often each different value in a set of data occurs. A
histogram is the most commonly used graph to show frequency distributions. It looks
very much like a bar chart, but there are important differences between them.
When to Use
When you want to see the shape of the data's distribution, especially when determining whether the output of a process is distributed approximately normally.
When analyzing what the output from a supplier's process looks like.
When seeing whether a process change has occurred from one time period to
another.
When determining whether the outputs of two or more processes are different.
When you wish to communicate the distribution of data quickly and easily to
others.
Construction
1. Use the histogram worksheet to set up the histogram. It will help you determine the number of bars, the range of numbers that go into each bar, and the labels for the bar edges. After calculating W in step 2 of the worksheet, use your judgment to adjust it to a convenient number. For example, you might decide to round 0.9 to an even 1.0. The value for W must not have more decimal places than the numbers you will be graphing.
2. Draw x- and y-axes on graph paper. Mark and label the y-axis for counting data values. Mark and label the x-axis with the L values from the worksheet. The spaces between these numbers will be the bars of the histogram. Do not allow for spaces between bars.
3. For each data point, mark off one count above the appropriate bar with an X or by shading that portion of the bar.
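The worksheet arithmetic (number of bars, the width W, and the L values for the bar edges) can be sketched as follows. The square-root rule for the number of bars and the data are illustrative assumptions standing in for the worksheet, which is not reproduced here:

```python
import math

# Illustrative process data
data = [9.9, 9.3, 10.2, 9.4, 10.1, 9.6, 9.9, 10.1, 9.8, 10.7]

n_bars = int(round(math.sqrt(len(data))))   # common rule of thumb
lo, hi = min(data), max(data)
W = round((hi - lo) / n_bars, 1)            # bar width, adjusted to a convenient number

# L values: the labels for the bar edges along the x-axis
edges = [round(lo + i * W, 1) for i in range(n_bars + 1)]

# One count above the appropriate bar for each data point
counts = [sum(1 for x in data if e <= x < e + W) for e in edges[:-1]]
print(edges, counts)
```

Note how W is rounded to one decimal place, matching the worksheet's advice that W must not have more decimal places than the data being graphed.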
Analysis
Before drawing any conclusions from your histogram, satisfy yourself that the
process was operating normally during the time period being studied. If any
unusual events affected the process during the time period of the histogram, your
analysis of the histogram shape probably cannot be generalized to all time
periods.
Normal. A common pattern is the bell-shaped curve known as the normal distribution.
In a normal distribution, points are as likely to occur on one side of the average as on the
other. Be aware, however, that other distributions look similar to the normal distribution.
Statistical calculations must be used to prove a normal distribution.
Don't let the name "normal" confuse you. The outputs of many processes (perhaps even a majority of them) do not form normal distributions, but that does not mean anything is wrong with those processes. For example, many processes have a natural limit on one side and will produce skewed distributions. This is normal, meaning typical, for those processes, even if the distribution isn't called "normal"!
Double-peaked or bimodal. The bimodal distribution looks like the back of a two-humped camel. The outcomes of two processes with different distributions are combined in one set of data. For example, a distribution of production data from a two-shift operation might be bimodal if each shift produces a different distribution of results. Stratification often reveals this problem.
Plateau. The plateau might be called a multimodal distribution. Several processes with
normal distributions are combined. Because there are many peaks close together, the top
of the distribution resembles a plateau.
Edge peak. The edge peak distribution looks like the normal distribution except that it has a large peak at one tail. Usually this is caused by faulty construction of the histogram, with data lumped together into a group labeled "greater than".
Truncated or heart-cut. The truncated distribution looks like a normal distribution with
the tails cut off. The supplier might be producing a normal distribution of material and
then relying on inspection to separate what is within specification limits from what is out
of spec. The resulting shipments to the customer from inside the specifications are the
heart cut.
Dog food. The dog food distribution is missing something: results near the average. If a customer receives this kind of distribution, someone else is receiving a heart cut, and the customer is left with the "dog food", the odds and ends left over after the master's meal. Even though what the customer receives is within specifications, the product falls into two clusters: one near the upper specification limit and one near the lower specification limit. This variation often causes problems in the customer's process.
Scatter diagram
Also called: scatter plot, XY graph
Description
The scatter diagram graphs pairs of numerical data, with one variable on each axis, to
look for a relationship between them. If the variables are correlated, the points will fall
along a line or curve. The better the correlation, the tighter the points will hug the line.
When to Use
When your dependent variable may have multiple values for each value of your
independent variable.
When trying to determine whether the two variables are related, such as when
trying to identify potential root causes of problems.
When determining whether two effects that appear to be related both occur with
the same cause.
Procedure
- Draw a graph with the independent variable on the horizontal axis and the dependent variable on the vertical axis. For each pair of data, put a dot or a symbol where the x-axis value intersects the y-axis value. (If two dots fall together, put them side by side, touching, so that you can see both.)
- Look at the pattern of points to see if a relationship is obvious. If the data clearly form a line or a curve, you may stop: the variables are correlated. You may wish to use regression or correlation analysis now. Otherwise, complete the remaining steps.
- Divide points on the graph into four quadrants. If there are X points on the graph:
  - Count X/2 points from top to bottom and draw a horizontal line.
  - Count X/2 points from left to right and draw a vertical line.
  - If the number of points is odd, draw the line through the middle point.
- Add the diagonally opposite quadrants. Find the smaller sum and the total of points in all quadrants.
Example
The ZZ-400 manufacturing team suspects a relationship between product purity (percent
purity) and the amount of iron (measured in parts per million or ppm). Purity and iron are
plotted against each other as a scatter diagram, as shown in the figure below.
There are 24 data points. Median lines are drawn so that 12 points fall on each side for
both percent purity and ppm iron.
To test for a relationship, they calculate:
A = points in upper left + points in lower right = 9 + 9 = 18
B = points in upper right + points in lower left = 3 + 3 = 6
Q = the smaller of A and B = the smaller of 18 and 6 = 6
N = A + B = 18 + 6 = 24
Then they look up the limit for N on the trend test table. For N = 24, the limit is 6.
Q is equal to the limit. Therefore, the pattern could have occurred from random chance,
and no relationship is demonstrated.
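The quadrant calculation above is mechanical enough to script. A sketch using the counts from the example; the trend-test limit itself still has to come from the published table:

```python
# Quadrant counts from the example above (after drawing the median lines)
upper_left, upper_right = 9, 3
lower_left, lower_right = 3, 9

A = upper_left + lower_right    # one diagonal pair
B = upper_right + lower_left    # the other diagonal pair
Q = min(A, B)                   # the smaller sum
N = A + B                       # total points counted

# The limit for N = 24 from the trend test table is 6; a relationship is
# demonstrated only when Q falls below the limit
limit = 6
print(A, B, Q, N)               # 18 6 6 24
print("relationship demonstrated:", Q < limit)
```

Because Q equals (rather than falls below) the limit, the script reports no demonstrated relationship, in agreement with the text.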
Considerations
Here are some examples of situations in which you might use a scatter diagram:
previously. Keep increasing the separation between the two times until the scatter
diagram shows no correlation.
Even if the scatter diagram shows a relationship, do not assume that one variable
caused the other. Both may be influenced by a third variable.
When the data are plotted, the more the diagram resembles a straight line, the
stronger the relationship.
Think creatively about how to use scatter diagrams to discover a root cause.
Drawing a scatter diagram is the first step in looking for a relationship between
variables.
Control chart
Also called: statistical process control
Variations:
Different types of control charts can be used, depending upon the type of data. The two
broadest groupings are for variable data and attribute data.
Variable data are measured on a continuous scale. For example: time, weight,
distance or temperature can be measured in fractions or decimals. The possibility
of measuring to greater precision defines variable data.
Attribute data are counted and cannot have fractions or decimals. Attribute data
arise when you are determining only the presence or absence of something:
success or failure, accept or reject, correct or not correct. For example, a report
can have four errors or five errors, but it cannot have four and a half errors.
Variables charts:
- X-bar and s chart
- target charts (also called difference charts, deviation charts and nominal charts)
Attributes charts:
- np chart
- u chart
Description
The control chart is a graph used to study how a process changes over time. Data are
plotted in time order. A control chart always has a central line for the average, an upper
line for the upper control limit and a lower line for the lower control limit. These lines are
determined from historical data. By comparing current data to these lines, you can draw
conclusions about whether the process variation is consistent (in control) or is
unpredictable (out of control, affected by special causes of variation).
Control charts for variable data are used in pairs. The top chart monitors the average, or
the centering of the distribution of data from the process. The bottom chart monitors the
range, or the width of the distribution. If your data were shots in target practice, the
average is where the shots are clustering, and the range is how tightly they are clustered.
Control charts for attribute data are used singly.
When to Use
Basic Procedure
1. Determine the appropriate time period for collecting and plotting data.
2. Look for out-of-control signals on the control chart. When one is identified, mark it on the chart and investigate the cause. Document how you investigated, what you learned, the cause and how it was corrected.

Out-of-control signals:
- A single point outside the control limits. In Figure 1, point sixteen is above the UCL (upper control limit).
- Two out of three successive points are on the same side of the centerline and farther than 2σ from it. In Figure 1, point 4 sends that signal.
- Four out of five successive points are on the same side of the centerline and farther than 1σ from it. In Figure 1, point 11 sends that signal.
- A run of eight points in a row on the same side of the centerline. Or 10 out of 11, 12 out of 14 or 16 out of 20. In Figure 1, point 21 is the eighth in a row above the centerline.

3. Continue to plot data as they are generated. As each new data point is plotted, check for new out-of-control signals.
4. When you start a new control chart, the process may be out of control. If so, the control limits calculated from the first 20 points are conditional limits. When you have at least 20 sequential points from a period when the process is operating in control, recalculate control limits.
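Two of the out-of-control signals are easy to automate. A minimal Python sketch with invented measurements and hypothetical control limits (assumed already calculated from historical data):

```python
# Hypothetical centerline and control limits from historical data
CL, UCL, LCL = 10.0, 13.0, 7.0

data = [9.8, 10.4, 11.1, 9.5, 13.6, 10.2, 10.1, 10.3,
        10.6, 10.9, 10.2, 10.4, 10.8, 10.1]

# Signal: a single point outside the control limits
beyond_limits = [i for i, x in enumerate(data) if not LCL <= x <= UCL]

# Signal: a run of eight in a row on the same side of the centerline
side = [1 if x > CL else -1 for x in data]          # +1 above, -1 below
runs = [i for i in range(7, len(data))
        if len(set(side[i - 7:i + 1])) == 1]

print("points beyond the limits:", beyond_limits)   # index 4: 13.6 > UCL
print("run-of-eight ends at points:", runs)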
Scope of testing
What items are to be tested
What items are not to be tested
How many hours/resources are required to perform this task
When are we going to perform ad-hoc testing
What are the deliverables in each phase
Wrong process: if the testing team does not follow the right process, it will not produce good results.
Categories of cost
Prevention cost
Money required to prevent errors and to do the job right the first time.
Ex. Establishing methods and procedures, Training workers, acquiring tools
Appraisal cost
Money spent to review completed products against requirements.
Ex. Cost of inspections, testing, reviews
Failure cost
All costs associated with defective products that have been delivered to the user or moved into production.
Ex. Repair costs, the cost of operating faulty products, damage incurred by using them
Integrity
Usability
Maintainability
Testability
Flexibility
Portability
Reusability
Interoperability
Description
Assess development plan and status
Develop the test plan
Test Software requirements
Test software design
Program (Build) phase testing
Execute and record results
Acceptance test
Report test results
The software installation
Test software changes
Evaluate test effectiveness
The bug is present in the product due to various reasons, such as miscommunication and software complexity
The seven QC tools are used for performing statistical analysis of a problem
Inspections and walkthroughs are used to identify defects at an early stage of the development process
Exercises
Fill in the blanks
1) ___________ is the variation from the customer requirements
2) Testing is an __________ activity
3) ________ is an activity which defines processes
4) QC tools are used for performing __________ analysis
5) Regression testing is also called ____________
True or False
1) Testing is done to prove the product is defect free
2) All the defects are failures
3) System testing can be done only after building all the modules
4) Walkthrough is a review process
5) Testing is the last checkpoint before the product reaches the customer
Match the following
1. QA           |
2. QC           |
3. Bug          |
4. Failure      |
5. Unit testing |
Lab exercises
1) Read the following requirement given by the X customer and find the defects.
Product XYZ is a web site which can be opened from any browser, and its response time
should be faster. It should support multilingual support.
2)
Feature to be tested | No of defects
Functionality        |
User friendliness    |
Interoperability     |
Total                |
Chapter 2
Testing in Software development life cycle
Objectives
Testing in Different Development Lifecycles
The V concept of testing
Verification and Validation
Testing Levels/Stages
Component Test Issues
Integration Testing
System and Acceptance Testing
Static Vs Dynamic testing
Maintenance and Regression Testing
Introduction
This chapter familiarizes the reader with the various SDLC models and different types of
testing
Waterfall processes
The best-known and oldest process is the waterfall model, where developers (roughly)
follow these steps in order. They state requirements, analyze them, design a solution
approach, architect a software framework for that solution, develop code, test (perhaps
unit tests then system tests), deploy, and maintain. After each step is finished, the process
proceeds to the next step, just as builders don't revise the foundation of a house after the
framing has been erected.
There is a misconception that the process has no provision for correcting errors in early
steps (for example, in the requirements). In fact this is where the domain of requirements
management comes in which includes change control.
This approach is used in high-risk projects, particularly large defence contracts. The problems in waterfall arise out of immature engineering practices, particularly in requirements analysis and requirements management; where poor engineering process and rigor exist, developers move into coding without understanding the problem at hand.
More often, the supposed stages are part of a joint review between customer and supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off the design at a key milestone called the Critical Design Review.
Most criticisms of the approach come from the developer community, not the software engineering community, where some subscribe to the WISCY ("Why Isn't Someone Coding Yet?") approach to software development.
The waterfall model is a sequential software development model (a process for the
creation of software) in which development is seen as flowing steadily downwards (like a
waterfall) through the phases of requirements analysis, design, implementation, testing
(validation), integration, and maintenance. The origin of the term "waterfall" is often
cited to be an article published in 1970 by W. W. Royce; ironically, Royce himself
advocated an iterative approach to software development and did not even use the term
"waterfall". Royce originally described what is now known as the waterfall model as an
example of a method that he argued "is risky and invites failure".
Disadvantages
The waterfall model is the oldest and the most widely used paradigm.
However, many projects rarely follow its sequential flow. This is due to the inherent
problems associated with its rigid format. Namely:
As the client usually has only a vague idea of exactly what is required from the software product, the waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems have been carried through to this stage.
Because phases overlap in the sashimi model, implementation problems may be discovered during the "design and implementation" phase of the development process. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.
Software prototyping
The prototyping model is a software development process that begins with requirements
collection, followed by prototyping and user evaluation. Often the end users may not be
able to provide a complete set of application objectives, detailed input, processing, or
output requirements in the initial stage. After the user evaluation, another prototype will
be built based on feedback from users, and again the cycle returns to customer evaluation.
The cycle starts by listening to the user, followed by building or revising a mock-up, and
letting the user test the mock-up, then back.
In the mid-1980s, prototyping became seen as the solution to the problem of requirements
analysis within software engineering. Prototypes are mock-ups of the screens of an
application which allow users to visualize the application that is not yet constructed.
Prototypes help users get an idea of what the system will look like, and make it easier for
users to make design decisions without waiting for the system to be built. When they
were first introduced the initial results were considered amazing. Major improvements in
communication between users and developers were often seen with the introduction of
prototypes. Early views of the screens led to fewer changes later and hence reduced
overall costs considerably.
However, over the next decade, while proving a useful technique, it did not solve the
requirements problem:
Managers, once they see the prototype, often have a hard time understanding that
the finished design will not be produced for some time.
Designers often feel compelled to use the patched-together prototype code in the
real system, because they are afraid to 'waste time' starting again.
Prototypes principally help with design decisions and user interface design.
However, they can not tell what the requirements were originally.
Designers and end users can focus too much on user interface design and too little
on producing a system that serves the business process.
Advantages of prototyping
Early visibility of the prototype gives users an idea of what the final system looks
like
Disadvantages of prototyping
Spiral model
The spiral model is a software development process combining elements of both design
and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up
concepts.
History
The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement". This model was not the first model to discuss
iterative development, but it was the first model to explain why the iteration matters. As
originally envisioned, the iterations were typically 6 months to 2 years long. Each phase
starts with a design goal and ends with the client (who may be internal) reviewing the
progress thus far. Analysis and engineering efforts are applied at each phase of the
project, with an eye toward the end goal of the project.
Applications
For a typical shrink-wrap application, the spiral model might mean that you have a
rough-cut of user elements (without the polished / pretty graphics) as an operable
application, add features in phases, and, at some point, add the final graphics.
The spiral model is used most often in large projects (by companies such as IBM,
Microsoft, Patni Computer Systems and Tata Consultancy Services ) and needs constant
review to stay on target. For smaller projects, the concept of agile software development
is becoming a viable alternative. The US military has adopted the spiral model for its
Future Combat Systems program.
Advantages
Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.
It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
Software engineers (who can get restless with protracted design processes) can
get their hands in and start working on a project earlier.
The left tail of the V represents the specification stream where the system specifications
are defined. The right tail of the V represents the testing stream where the systems are
being tested (against the specifications defined on the left-tail). The bottom of the V
where the tails meet, represents the development stream.
The specification stream mainly consists of:
Functional Specifications
Design Specifications
Installation Qualification
Operational Qualification
Performance Qualification
The development stream can consist (depending on the system type and the development
scope) in customization, configuration or coding
Verification examples

Requirement reviews
  Performed by: developers, users
  Deliverable: reviewed statement of requirements, ready to be translated into system design

Design reviews
  Performed by: developers
  Deliverable: system design, ready to be translated into computer programs, hardware configurations, documentation and training

Code walkthroughs
  Performed by: developers
  Explanation: an informal analysis of the program source code to find defects and verify coding techniques
  Deliverable: computer software ready for testing or more detailed inspections by the developer

Code inspections
  Performed by: developers
  Explanation: a formal analysis of the program source code to find defects as defined by meeting computer system design specifications; usually performed by a team composed of developers and subject matter experts
  Deliverable: computer software ready for testing by the developer

Validation is accomplished simply by executing a real-life function (if you wanted to check to see if your mechanic had fixed the starter on your car, you'd try to start the car).

Validation examples

Unit testing
  Performed by: developers
  Explanation: the testing of a single program, module, or unit of code, usually performed by the developer of the unit. Validates that the software performs as designed.
  Deliverable: software unit ready for testing with other system components such as other software units, hardware, documentation, or users

Integration testing
  Performed by: developers
  Explanation: the testing of related programs, modules or units of code. Validates that multiple parts of the system interact according to the system design.
  Deliverable: portions of the system ready for testing with other portions of the system

System testing
  Performed by: developers, users
  Explanation: the testing of an entire computer system. This kind of testing can include functional and structural testing, such as stress testing. Validates the system requirements.
  Deliverable: a tested computer system, based on what was specified to be developed or purchased

User acceptance testing
  Performed by: users
  Explanation: the testing of a computer system or parts of a computer system to make sure it will work in the system regardless of what the system requirements indicate
  Deliverable: a tested computer system, based on user needs
Unit testing
Integration testing
System testing
User Acceptance testing
Unit testing
Objective
To test a single program, module, or unit of code
Who does?
Usually performed by the developer of the unit
Integration testing
Objective
To test whether related programs, modules, or units of code are interfaced properly, and to validate that multiple parts of the system interact according to the system design.
Who does?
Usually performed by the developer
System testing
Objective
To test an entire computer system with respect to functionality and performance. This
kind of testing can include functional and structural testing such as stress testing.
Validates the system requirements
Who does?
Testers
Regression testing
Testing done to ensure that a modification does not cause any side effects is called regression testing.
It is also called selective re-testing.
The waterfall model does not allow modifications after development has moved to the next phase
Prototyping is an iterative model
Regression testing is also called selective re-testing
All reviews are considered verification activities
Unit, integration, system, and acceptance testing are the various levels of testing
Exercises
Fill in the blanks
1
In ________ model of software development, the customer can evaluate the
product and give feedback to the developer
2
_________ testing is done to ensure that modification does not cause any side
effects
3
Code walkthrough is an _______ activity
4
UAT is done by ________
5
Testing the performance of the application is done in _______ level of testing
True or False
1 One of the disadvantages of the prototype model is that it takes a lot of time to release the product
2 In the spiral model, a go/no-go decision is made in every cycle
3 The objective of performing integration testing is to identify errors in the interfaces
4 The waterfall model is suitable for change-driven projects
5 UAT is done to ensure that the product is defective
Prototype model
Spiral model
Waterfall model
Document testing
System testing
Exercises
1) Which SDLC model would be most suitable for each of the following scenarios?
   1 The product is a bespoke product for a specific customer, who is always available to give feedback
   2 A product that is made of a number of features that become available sequentially and incrementally
2) Write the test cases for the following screen and find the defects
Chapter 3
Test Management
Objective
Introduction
This chapter familiarizes the reader with the test planning and software configuration
management
Test organization and independence
The effectiveness of finding defects by testing and reviews can be improved by using
independent testers. Options for independence are:
Independent testers within the development teams
Independent test team or group within the organization, reporting to project
management or executive management
Independent testers from the business organization, user community and IT
Independent test specialists for specific test targets such as usability testers,
security testers or certification testers (who certify a software product against
standards and regulations)
Independent testers outsourced or external to the organization
For large, complex or safety critical projects, it is usually best to have multiple levels of
testing, with some or all of the levels done by independent testers. Development staff
may participate in testing, especially at the lower levels, but their lack of objectivity often
limits their effectiveness. The independent testers may have the authority to require and
define test processes and rules, but testers should take on such process-related roles only
in the presence of a clear management mandate to do so.
The benefits of independence include:
Independent testers see other and different defects, and are unbiased.
An independent tester can verify assumptions people made during specification
and
implementation of the system
Drawbacks include:
3) What resources are needed for testing? Computer as well as human resources.
4) The time lines by which the testing activities will be performed
5) Risks that may be faced in all of the above, with appropriate mitigation and
contingency plans.
Test plan template
Introduction
Scope
References
Test methodology and strategy / approach
Test criteria
Entry criteria
Exit criteria
Suspension criteria
Resumption criteria
Assumptions, dependencies, and Risks
Assumptions
Dependencies
Risks and Risks management plans
Estimations
Size
Effort
Schedule
Test deliverables and milestones
Responsibilities
Resource requirements
Hardware resources
Software resources
People resources (number of people, skills, duration etc)
Other resources
Training requirements
Details of training required
Possible attendees
Any constraints
Defect logging and tracking progress
Metrics plan
Product release criteria
Test Estimation
Estimation happens broadly in three phases.
1 Size estimation
2 Effort estimation
3 Schedule estimation
Size estimation
Size estimate quantifies the actual amount of testing that needs to be done. Several
factors contribute to the size estimation of the testing project. They are
1 Lines of code
2 Function point
3 Number of screens/reports/transactions
4 Extent of automation required
5 Number of platforms and inter-operability environments to be tested
Size estimate is expressed in terms of the following.
1 Number of test cases
2 Number of test scenarios
3 Number of configurations to be tested
Effort estimation
Size estimation is primary input for estimating effort. The other factors that drive the
effort estimate are as follows.
Productivity data: Productivity refers to the speed at which the various activities of
testing can be carried out. This is based on historical data available in the organization.
It can be further classified into the number of test cases that can be developed per day
and the number of test cases that can be run per day.
Reuse opportunities: If the test architecture has been designed keeping reuse in mind,
then the effort required to cover a given size of testing can come down.
Robustness of processes: Existence of well-defined process will go a long way in
reducing the effort involved in any activity
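The relationship between size, productivity, and effort described above can be sketched as a simple calculation. The split into test-design and test-execution activities and all the figures below are illustrative assumptions, not data from any real project.

```python
# Hypothetical effort estimate: size (number of test cases) divided by
# productivity (test cases per person-day), summed over the test-design
# and test-execution activities.
def estimate_effort(num_test_cases, cases_designed_per_day, cases_executed_per_day):
    """Return total person-days to design and execute the given test cases."""
    design_days = num_test_cases / cases_designed_per_day
    execution_days = num_test_cases / cases_executed_per_day
    return design_days + execution_days

# e.g. 120 test cases, 10 designed per day, 30 executed per day
# -> 12 + 4 = 16 person-days
print(estimate_effort(120, 10, 30))  # 16.0
```

In practice, the productivity figures would come from the organization's historical data, and reuse opportunities would reduce the design-days term.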
Schedule estimation
Activity breakdown and schedule estimation entail translating the effort required into
specific time frames. This translation includes identifying the time required for each of
the WBS activities, taking into account the factors above.
Test Reporting
Test reporting is concerned with summarizing information about the testing endeavor,
including:
What happened during a period of testing, such as dates when exit criteria were met
Analyzed information and metrics to support recommendations and decisions about
future actions, such as an assessment of defects remaining, the economic benefit of
continued testing, outstanding risks, and the level of confidence in tested software.
The outline of a test summary report is given in Standard for Software Test Documentation
(IEEE829).
Metrics should be collected during and at the end of a test level in order to assess:
The adequacy of the test objectives for that test level.
The adequacy of the test approaches taken.
The effectiveness of the testing with respect to its objectives.
Test control
Test control describes any guiding or corrective actions taken as a result of information and
metrics gathered and reported. Actions may cover any test activity and may affect any other
software life cycle activity or task.
Examples of test control actions are:
Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested by a developer before accepting
them into a build
Configuration Management
Configuration management is a key component of the infrastructure for any software
development organizations. The ability to maintain control over the changes made to all
project artifacts is critical to the success of a project. The more complex an application is,
the more important it is that changes to both the application and its supporting
artifacts are made in a controlled manner.
The list below illustrates the types of project artifacts that must be managed and
controlled in the CM environment
Source code
Requirements
Analysis models
Design models
Test cases and procedures
Automated test scripts
Definition
Set of activities designed to control change by identifying the work products that are
likely to change, establishing relationships among them, defining mechanisms for
managing different versions of these work products, controlling the changes imposed, and
auditing and reporting on the changes made.
Purposes
The goals of SCM are generally:
Build Management- Managing the process and tools used for builds.
Environment Management- Managing the software and hardware that host our
system.
Defect Tracking- making sure every defect has traceability back to the source
Change management
Managing changes is a process. The process is the primary responsibility of the software
development staff. They must assure that change requests are documented, that they
are tracked through approval or rejection, and that they are then incorporated into the
development process.
The testers need to know two aspects of change:
1 The characteristics of the change, so that the test plan and test data can be modified to ensure the right functionality and structure are tested
2 The version in which that change will be implemented
Without effective communication between the development team and test team regarding
changes, test effort may be wasted testing the wrong functionality and structure.
Version Control
Once dynamic testing begins, the project team must ensure that the appropriate versions
of the software components are being tested. The time and effort devoted to testing are
wasted if either the incorrect components or the wrong version of the correct components
have been migrated to the test environment. The configuration manager must develop
both migration and back out procedures to support this process.
Points to remember
Independent testers see other and different defects, and are unbiased.
The test plan acts as the anchor for the execution, tracking, and reporting of the
entire testing project.
The ability to maintain control over the changes made to all project artifacts is
critical to the success of a project.
Exercises
1) ____________ Management is managing the changes
2) _________ acts as the anchor for the execution, tracking, and reporting of the entire
testing project
3) __________ describes any guiding or corrective actions taken as a result of
information and metrics gathered and reported
4) Number of test cases is the one measurement of size estimation (T/F)
Chapter 4
Test planning
Objective of this chapter
Basic Test planning
Introduction
The objective of this chapter is to familiarize the user with the test planning
Test plan
It describes how testing will be accomplished. Its creation is essential to effective testing
and should take about one third of the total test effort. If the plan is developed carefully,
test execution, analysis and reporting will flow smoothly.
Test plan is a contract between the testers and the project team and user describing the
role of testing in the project
Points to remember
The test plan should give information on software being tested, test objectives and
risks and specific test to be performed
Test planning can be started as soon as the requirements definition starts
The test plan typically covers:
1 Test scope
2 Test objectives
3 Assumptions
4 Risk analysis
5 Test design
6 Roles and responsibilities
7 Test schedule and resources
8 Test data management
9 Test environment
10 Communication approach
11 Test tools
Test Strategy
The Test Strategy is the plan for how you are going to approach testing
The test strategy document helps us answer the following questions:
Should you develop a formal plan for your tests?
Should you test the entire program as a whole or run tests only on a small part of
it?
Should you rerun tests you've already conducted as you add new components to a
large system?
When should you involve the customer?
Template of test strategy
Name of the Project :
XXXXX Project
Brief Description :
Mention the Scope of the Project
Type of Project :
Application Testing or Product Testing
Type of Software :
List all the software used in the project
Critical Success Factors
Delivery on time
Exceptional user friendliness
Attractive user interface
Reasonable response time for online part
Risk Factors:
Schedule - Timeframe is short
Tools/Technology - Limited automated test tools
Resource - Non-availability of test team members / test team members are new to the environment
Support - Not enough support from business people
Test Objectives:
Ensure not more than 1 bug per 10 function points
Maximum 3 seconds response time
User friendliness of screens / menus / documentation
Trade-offs:
Objectives stated under item 7 are to be achieved at any cost.
Risk category: Project management
Risk description: Change in requirement
Cause: Lack of domain knowledge
Impact on the project: Schedule slippage
Probability of occurrence: 2
Impact: 2
Risk criticality:
Risk handling plan: Evaluate the change/work request; if it leads to schedule slippage, get the approval from the customer
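Risk criticality in a register like the one above is commonly computed as probability of occurrence multiplied by impact. The chapter does not state this convention explicitly, so the sketch below is an assumption based on common practice.

```python
# Assumed convention: criticality = probability x impact, each rated on a
# small ordinal scale (e.g. 1-5). Higher values demand earlier mitigation.
def risk_criticality(probability, impact):
    """Return the criticality score of a risk entry."""
    return probability * impact

# For the sample register entry above (probability 2, impact 2):
print(risk_criticality(2, 2))  # 4
```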
Points to remember
If the test plan is developed carefully, test execution, analysis and reporting will
flow smoothly.
Exercises
Prepare the risk analysis for functional testing of google.co.in and submit to your team
lead
Chapter 5
Test case design techniques
Objective of this chapter
Types of testing
Black box testing
White box testing
Preparation of test cases
Execution of test cases
Mapping requirements with test cases
Traceability Matrix (Use Cases/ Test Requirements to Test Cases)
Boundary value
analysis
Equivalence
partitioning
Decision table
Introduction
The objective of this chapter is to familiarize the reader with the test case design
techniques
Black box testing
Also known as functional testing. A software testing technique whereby the internal
workings of the item being tested are not known by the tester. For example, in a black
box test on a software design the tester only knows the inputs and what the expected
outcomes should be and not how the program arrives at those outputs. The tester does not
ever examine the programming code and does not need any further knowledge of the
program other than its specifications.
The advantages of this type of testing include:
The test is unbiased because the designer and the tester are independent of each
other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
A disadvantage is that the test can be redundant if the software designer has already run a test case.

White box testing
Also known as structural testing. A software testing technique in which explicit
knowledge of the internal workings of the item being tested is used to select the test
data. Unlike black box testing, white box testing uses specific knowledge of the
programming code to examine outputs. The test is accurate only if the tester knows what
the program is supposed to do. He or she can then see if the program diverges from its
intended goal. White box testing does not account for errors caused by omission, and all
visible code must also be readable.
For a complete software examination, both white box and black box tests are required
Have you planned for an overall testing schedule and the personnel required, and
associated training requirements?
Have you planned separate test phases for module testing, integration testing, and
acceptance testing?
Have you designed at least one black-box test case for each system function?
Have you designed test cases for verifying quality objectives/factors (e.g.
reliability, maintainability, etc.)?
Have you defined test cases for performance tests, boundary tests, and usability
tests?
Have you designed test cases for stress tests (intentional attempts to break
system)?
Have you designed test cases with special input values (e.g. empty files)?
But a common programming error may check a wrong range e.g. starting the range at 0
by writing:
if (month >= 0 && month < 13)
For more complex range checks in a program this may be a problem which is not so
easily spotted as in the above simple example.
Applying boundary value analysis, you select a test case at each side of the boundary
between two partitions. In the above example this would be 0 and 1 for the lower
boundary, and 12 and 13 for the upper boundary. Each of these pairs consists of a
"clean" and a "dirty" test case. A "clean" test case should give a valid operation result
from your program. A "dirty" test case should lead to the correct, specified input-error
treatment, such as limiting the value, substituting a default value, or, in the case of a
program with a user interface, a warning and a request to enter correct data.
In general, boundary value analysis yields six test cases: n-1, n, and n+1 for the lower
limit, and n-1, n, and n+1 for the upper limit.
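The month-range example above can be sketched directly: a corrected range check, exercised with the "clean" and "dirty" boundary values that boundary value analysis selects. The function name is illustrative.

```python
# Corrected range check: valid months run from 1 to 12 inclusive
# (the buggy version in the text started the range at 0).
def is_valid_month(month):
    return 1 <= month <= 12

# Boundary value analysis picks a test case on each side of every boundary:
# 0 ("dirty") and 1 ("clean") at the lower boundary,
# 12 ("clean") and 13 ("dirty") at the upper boundary.
boundary_cases = [(0, False), (1, True), (12, True), (13, False)]
for month, expected in boundary_cases:
    assert is_valid_month(month) == expected
print("all boundary cases behave as specified")
```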
Equivalence partitioning
Structure
Decision tables are typically divided into four quadrants, as shown below.
The four quadrants
Conditions
Condition alternatives
Actions
Action entries
Each decision corresponds to a variable, relation or predicate whose possible values are
listed among the condition alternatives. Each action is a procedure or operation to
perform, and the entries specify whether (or in what order) the action is to be performed
for the set of condition alternatives the entry corresponds to. Many decision tables
include in their condition alternatives the don't care symbol, a hyphen. Using don't cares
can simplify decision tables, especially when a given condition has little influence on the
actions to be performed. In some cases, entire conditions thought to be important initially
are found to be irrelevant when none of the conditions influence which actions are
performed.
Aside from the basic four-quadrant structure, decision tables vary widely in the way the
condition alternatives and action entries are represented. Some decision tables use simple
true/false values to represent the alternatives to a condition, while others may use
numbered alternatives.
Example
The limited-entry decision table is the simplest to describe. The condition alternatives are
simple boolean values, and the action entries are check-marks, representing which of the
actions in a given column are to be performed.
A technical support company writes a decision table to diagnose printer problems based
upon symptoms described to them over the phone from their clients.
Printer troubleshooter

Conditions:
  Printer does not print    | Y Y Y Y N N N N
  A red light is flashing   | Y Y N N Y Y N N
  Printer is unrecognized   | Y N Y N Y N Y N
Actions:
  Check/replace ink         | X X . . X X . .
Of course, this is just a simple example (and it does not necessarily correspond to the
reality of printer troubleshooting), but even so, it is possible to see how decision tables
can scale to several conditions with many possibilities.
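One way to make such a limited-entry decision table executable is to encode each rule as a pattern over the conditions (with None as the "don't care" hyphen) mapped to an action. The sketch below keeps the "Check/replace ink" rule from the example; the other two rules are hypothetical additions in the spirit of the troubleshooter, not taken verbatim from the text.

```python
# Each rule: a pattern over the three conditions
# (does_not_print, red_light_flashing, unrecognized); None means "don't care".
RULES = [
    ((None, True, None), "Check/replace ink"),
    ((True, None, True), "Check the printer-computer cable"),      # hypothetical
    ((None, None, True), "Ensure printer software is installed"),  # hypothetical
]

def diagnose(conditions):
    """Return every action whose rule pattern matches the observed conditions."""
    matched = []
    for pattern, action in RULES:
        if all(p is None or p == c for p, c in zip(pattern, conditions)):
            matched.append(action)
    return matched

# A red light is flashing on a recognized printer that still prints:
print(diagnose((False, True, False)))  # ['Check/replace ink']
```

The don't-care entries keep the rule list far shorter than the full eight-column table, which is exactly the simplification the text describes.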
Points to remember
In Black box testing technique the internal workings of the item being tested are
not known by the tester.
In white box testing technique the explicit knowledge of the internal workings of
the item being tested are used to select the test data.
Boundary value analysis, Equivalence partitioning are the techniques used for
reducing the no of test cases
Exercises
1) In white box testing, the tester will be aware of the inner workings of the program
(T/F)
2) In black box testing, the tester will not be aware of the inner workings of the program
(T/F)
3) Equivalence partitioning checks the defects of an application with respect to its
boundaries (T/F)
4) Prepare the test cases for the following scenario.
For an insurance application, the customer's date of birth is one of the input parameters.
According to the business rule, the application should not accept a customer who is less
than 18 years of age or more than 58 years of age.
5) Prepare the test cases for the following scenario
For a banking application, the transaction is performed on the following situation.
If the customer has entered valid customer no
If the customer has entered valid password
If the customer has entered less than or equal to current balance
Chapter 6
Test case
development
Objectives
Introduction
The objective of this chapter is to familiarize the reader with the test case development
Test cases
A test case is a set of conditions or variables under which a tester will determine if a
requirement upon an application is partially or fully satisfied
Test script
A test script is a short program written in a programming language used to test part of
the functionality of a software system. A written set of steps that should be performed
automatically can also be called a test script; however this is more correctly called a test
case.
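To make the distinction concrete, here is a minimal test script in Python. The add() function stands in for the unit under test and is purely illustrative.

```python
# Hypothetical unit under test: the addition feature of a calculator.
def add(a, b):
    return a + b

# A test script: a short program that exercises part of the functionality
# with known inputs and compares actual output to expected output.
def test_add():
    cases = [((1, 1), 2), ((0, 5), 5), ((-2, 2), 0)]
    for (a, b), expected in cases:
        actual = add(a, b)
        assert actual == expected, f"add({a}, {b}) = {actual}, expected {expected}"
    print("all cases passed")

test_add()  # prints "all cases passed"
```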
Test case best practices
Test case ID: CALC1 ADD
Pre-condition: Open calculator
Sample data: <input1> = 1, <input2> = 1
Steps: Click the button with the caption "1" and click the "+" button
Expected output: 1 displayed in the text box
Actual output: Appears
Test Result: Pass
Points to remember
A test case is a set of conditions or variables under which a tester will determine
if a requirement upon an application is partially or fully satisfied
Exercises
1 Re-testing and regression testing are one and the same (T/F)
2 Checking for spelling errors comes under functionality testing (T/F)
3. For the following screen, the business rules are as follows.
Chapter 7
Test Execution strategies
Objectives
Automated Testing
Risks of Not Automating Testing
Risks of Automating Testing
Where Do Tools Fit In?
The Major Issues
Top 10 Test Tools
Critical Success Factors
Test Execution - Manual Methods
Building the Test Environment
How to Create and Maintain Test Data
Pitfalls to Avoid
Test stop criteria
Introduction
The objective of this chapter is to familiarize the reader with the test execution strategies.
Automated Testing
Test automation is the use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions. Commonly, test automation involves automating a
manual process already in place that uses a formalized testing process.
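The definition above (executing tests, comparing actual outcomes to predicted outcomes, and reporting) can be sketched as a tiny harness. The suite contents below are invented for illustration.

```python
# Minimal automation harness: run each test, compare actual to predicted
# outcome, and report a pass/fail summary.
def run_suite(tests):
    """tests: list of (name, callable, expected_result). Returns (passed, failed)."""
    passed, failed = [], []
    for name, func, expected in tests:
        (passed if func() == expected else failed).append(name)
    return passed, failed

# Hypothetical system under test: two string operations.
suite = [
    ("upper", lambda: "abc".upper(), "ABC"),
    ("strip", lambda: "  hi ".strip(), "hi"),
]
passed, failed = run_suite(suite)
print(f"{len(passed)} passed, {len(failed)} failed")  # 2 passed, 0 failed
```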
Risks of not automating testing
Requirements may change frequently, which can cause slippage in the test schedule
The order of test execution will also affect the test results, so proper planning should be done before testing the product
Review of test scripts is very important; if a script is not prepared properly, its results will not be accepted. So, test the test first
Identifying critical testing areas should be done effectively by the testing team
Automation tools may generate run-time errors that are not relevant to the testing
Testing Tools
AutoTester
AutoTester is a company involved in tool based software automation testing. Founded in
1985, it is based in Dallas, Texas.
Products developed by AutoTester include:
AutoTester
AutoTester ONE
AutoController
WET
WET is an open source web automation testing tool which uses Watir as the library to
drive web pages.
WinRunner
Mercury Interactive's WinRunner is an automated functional GUI testing tool that
allows a user to record and play back UI interactions as test scripts. The software
implements a proprietary Test Script Language (TSL) that allows customization and
parameterization of user input.
QALoad by Compuware
Compuware's QALoad helps you achieve loads that mimic realistic business usage as
well as validate that the system can meet acceptable service levels.
QA Run by Compuware
QARun is a functional testing tool that provides the automation capabilities needed to
quickly and productively create and execute test scripts.
TestPartner by Compuware
TestPartner is an automated functional and regression testing tool that has been specially
designed for testing complex applications based on Microsoft, Java and web based
technologies. Unique features of TestPartner allow both testers and developers to create
repeatable tests through visual scripting and automatic wizards. Users also have access to
the full capabilities of Microsoft's Visual Basic for Applications (VBA), allowing tests to
be as high level or detailed as needed.
SilkTest
SilkTest is the leading tool for automating the functional software testing process of
GUI applications. Its powerful test automation capabilities make it the perfect solution
for cross-platform, localization, and regression testing across a broad set of application
technologies, including Web, Java or .NET and client/server, within the confines of
today's short testing cycles. Designed for realizing automation benefits even when
applied to complex test cases, SilkTest provides a host of productivity-boosting features
that lets you easily cope with changes in the application under test.
TestPro
TestPro has a significant depth of knowledge in the area of application Load and Stress
testing.
Rational Robot
Rational Robot automates graphical functional, regression, and smoke testing of
applications created with any development tool
Rational Functional Tester by IBM
An advanced, automated functional and regression testing tool for testing applications based on
Java, Microsoft Visual Studio .NET, and Web technologies.
Opalis Integration Server by Opalis
Schedule your events to run when specific files are created, modified, or deleted.
Critical success factors for test automation:
Proper planning
Correct approach towards automation
Skilled human resources
Analyzing all the risks associated with automation and preparing contingency plans
Points to remember
Test automation is the use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions.
Requirements may change frequently, which can cause slippage in the test schedule
Automation tools can be used for testing functionality, testing performance, managing the testing process, performing regression testing, and checking the source code
Testing can be stopped in various situations: when no major defects remain in the application, when all the requirements are met, or based on code coverage, functionality coverage, budget, or time
Exercise
1) Automation will reduce the test execution time (T/F)
2) Find the tool which is used for testing the java programs
a) QTP b) Winrunner C) JUnit d) None
3) Find the odd man out
a) QTP b) Winrunner C) Test Director d) QA Run
4) QC is the leading tool for testing the functionality (T/F)
Chapter 8
Test Reporting
Objective of this chapter
Test Reporting
Defect Management system
Introduction
The objective of this chapter is to familiarize the reader with test reporting and defect
management system
Test Reporting
It is a process to collect data, analyze the data, and supplement the data with metrics,
graphs, charts, and other pictorial representations that help the developers and users
interpret that data.
Prerequisites to test reporting
Defects data summary

Date: 12th Dec 2005
Defects reported: 20
Defects closed: 18
Show Stopper: 1

Defect Status | Total | High | Medium | Low
Open          |   3   |  2   |   1    |
Closed        |   3   |  1   |   1    |  1
Postponed     |   2   |  1   |        |
The primary goal is to prevent defects. Where this is not possible or practical,
the goals are to find defects as quickly as possible and to minimize their impact.
The defect management process, like the entire software development process,
should be risk driven
Defect Naming
Level 1 - Name of the defect
Level 2 - Developmental phase or activity in which the defect occurred
Level 3 - The category of the defect (Missing, Inaccurate, Incomplete, Inconsistent)
Defect Management process
Defect prevention
Deliverable baseline
Defect discovery
Defect resolution
Process improvements
Management reporting
Sample defect tracking system
S.No. | Defect ID | Defect status (Open/Close) | Test case ID | Run/Build no | Priority (1,2) | Severity (1-5)
Chapter 9
Regression Testing
Objective of this chapter
What is Regression Testing?
Regression testing Vs Re-testing
When to do regression testing?
How to do regression testing?
Best practices
Introduction
The objective of this chapter is to familiarize the user with Regression testing
Regression testing
Testing done to ensure that a modification does not cause any side effects is called
regression testing.
It is also called selective re-testing.
Regression testing vs. re-testing
Re-testing is done to verify whether defects have been closed.
Regression testing is done to ensure that, in closing those bugs, no new bugs have been
introduced into the system.
When to do regression testing?
A reasonable amount of initial testing is already carried out
A good number of defects have been fixed
Defect fixes that can produce side effects are taken care of
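A minimal sketch of selective re-testing: after a defect fix, rerun the test cases recorded in a baseline run and flag any whose results now differ. The pass/fail-strings-keyed-by-test-case-id representation is an illustrative assumption, not a prescribed format.

```python
# Compare a baseline run against the current run and report regressions:
# test cases that no longer produce their previously recorded result.
def regression_check(baseline, current):
    """Both dicts map test-case id -> result; return ids that regressed."""
    return [tc for tc, result in baseline.items() if current.get(tc) != result]

baseline = {"TC1": "pass", "TC2": "pass", "TC3": "pass"}
current  = {"TC1": "pass", "TC2": "fail", "TC3": "pass"}
print(regression_check(baseline, current))  # ['TC2']
```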
Steps

Remarks: Need to improve the regression process and code reviews
Best practices
1 Regression can be used for all types of releases
2 Mapping defect identifiers with test cases improve regression quality
3 Create and execute regression test bed daily
4 Ask your best test engineer to select test cases
5 Detect defects, and protect your product from defects and defect fixes
Points to remember
Testing done to ensure that a modification does not cause any side effects is called
regression testing
Re-testing and regression testing are different
Whenever there is a change in a module, we need to perform regression testing
Exercise
Chapter 10
Types of Software Testing
Objective of this chapter
Types of testing
Smoke testing
Sanity testing
Gorilla testing
Ad-hoc testing
Client/server testing
Web based testing
Stress testing
Volume testing
Load testing
Security Testing
Database Testing
Compatibility Testing
Internationalization
Data migration
Introduction
The objective of this chapter is to familiarize the reader with various types of software
testing.
Smoke testing
Testing done to ensure that the product is testable
Smoke test is a collection of written tests that are performed on a system prior to being
accepted for further testing. This is also known as a build verification test.
Sanity testing
A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or
calculation, specifically a very brief run-through of the functionality of a computer
program, system, calculation, or other analysis, to assure that the system or methodology
works as expected, often prior to a more exhaustive round of testing.
Gorilla testing
It is an intense round of testing, quite often redirecting all available resources to the
activity. The idea here is to test as much of the application in as short a period of
time as possible.
Ad-hoc testing
Testing phase where the tester tries to 'break' the system by randomly trying the system's
functionality. It can include negative testing as well
Client/server testing
Testing the product in the client/server environment with respect to functionality and
performance
Load testing
It is the process of creating demand on a system or device and measuring its response.
Load testing generally refers to the practice of modeling the expected usage of a software
program by simulating multiple users accessing the program's services concurrently. As
such, this testing is most relevant for multi-user systems, often ones built using a
client/server model, such as web servers.
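A minimal sketch of load generation with concurrently simulated users, using Python threads. serve_request() is a stand-in for a call to the real system under test, and all workload numbers are invented.

```python
import threading
import time

def serve_request():
    time.sleep(0.01)  # stand-in for the system under test handling one request

def user_session(requests_per_user):
    # One simulated user issuing a fixed number of requests in sequence.
    for _ in range(requests_per_user):
        serve_request()

def load_test(num_users, requests_per_user):
    """Run num_users concurrent sessions and return the total elapsed time."""
    threads = [threading.Thread(target=user_session, args=(requests_per_user,))
               for _ in range(num_users)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

elapsed = load_test(num_users=5, requests_per_user=3)
print(f"5 users x 3 requests finished in {elapsed:.2f}s")
```

A real load test would replace serve_request() with actual client calls and also record per-request response times, not just the total.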
Security Testing
Techniques used to confirm the design and/or operational effectiveness of security
controls implemented within a system. Examples: attack and penetration studies to
determine whether adequate controls have been implemented to prevent breach of system
controls and processes, and password strength testing using tools (password crackers).
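Password strength testing is normally done with dedicated cracking tools; the sketch below only illustrates the kind of rules such tools and policies apply. The scoring scheme is an assumption for illustration, not any standard.

```python
import re

def password_strength(password):
    """Rough strength score from 0 (very weak) to 4 (strong).
    Illustrative only; real testing uses password-cracking tools."""
    score = 0
    if len(password) >= 8:                                        # minimum length
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1                                                # mixed case
    if re.search(r"\d", password):                                # contains a digit
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):                      # special character
        score += 1
    return score

print(password_strength("abc"))        # a weak password scores low
print(password_strength("Passw0rd!"))  # meets all four rules
```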
Database Testing
Testing the database with respect to security and query execution
Compatibility Testing
Testing the application across various environments, such as different browsers and
operating systems, is called compatibility testing
Internationalization
Internationalization is the process of designing and coding a product so it can perform
properly when it is modified for use in different languages and locales.
Localization (also known as L10N) refers to the process, on a properly internationalized
base product, of translating messages and documentation as well as modifying other
locale specific files.
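A minimal sketch of the idea, assuming message catalogs keyed by locale; in a properly internationalized product these would be external, locale-specific resource files produced during localization, not an in-code dictionary.

```python
# Hypothetical message catalogs; localization (L10N) would supply these
# as translated resource files for each supported locale.
MESSAGES = {
    "en_US": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr_FR": {"greeting": "Bonjour", "farewell": "Au revoir"},
}

def translate(key, locale, default_locale="en_US"):
    """Look up a message for the requested locale, falling back to the
    default locale when no localized catalog exists."""
    catalog = MESSAGES.get(locale, MESSAGES[default_locale])
    return catalog.get(key, MESSAGES[default_locale][key])

print(translate("greeting", "fr_FR"))  # localized message
print(translate("greeting", "de_DE"))  # falls back to the default locale
```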
Data migration
Testing the data migration ensures that the change of data from one form to another does
not have any adverse effect on the system
Comparison between smoke testing and sanity testing
Smoke tests get their name from the electronics industry. The circuits are laid out on a
breadboard and power is applied. If anything starts smoking, there is a problem. In the
software industry, smoke testing is a shallow and wide approach to the application: you
test all areas of the application without getting too deep. This is also known as a Build
Verification Test or BVT.
In comparison, sanity testing is usually narrow and deep. That is, it looks
at only a few areas but covers all aspects of that part of the application. A smoke
test is scripted--either using a written set of tests or an automated
test--whereas a sanity test is usually unscripted.
Chapter 11
Testing Metrics
Objective of this chapter
What are metrics?
Why metrics?
Types of metrics
Examples of metrics
Introduction
The objective of this chapter is to familiarize the reader with testing metrics and
measurements
What are metrics?
A metric is a quantitative measure of the degree to which a system, component, or
process possesses a given attribute.
Metrics derive information from raw data with a view to helping in decision making
Why metrics in testing?
Metrics in testing help in identifying
When to make the release
What to release
Whether the product is being released with known quality
Examples of testing metrics
Complexity measurements
Project metrics
Size measurement
Defect metrics
Product measures
Satisfaction measures
Productivity metrics
Defect removal efficiency: The percentage of total defects occurring in a phase or activity removed by the end of that activity
Defect count: The number of defects in the particular product
Mean time to failure: The average operational time it takes before a software system fails
Mean time to last failure: An estimate of the time it will take to remove the last defect from the software
Test coverage: The percentage of instructions or paths executed during tests
Test cycles: The number of testing cycles required to complete testing
Requirements coverage: The percentage of requirements tested during testing
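Most of these metrics are simple ratios. A sketch of two of them follows; the figures in the usage lines are invented for illustration, not taken from the text.

```python
def defect_removal_efficiency(removed_in_phase, total_in_phase):
    """Percentage of the defects present in a phase or activity that
    were removed by the end of that activity."""
    return 100.0 * removed_in_phase / total_in_phase

def requirements_coverage(tested, total):
    """Percentage of requirements exercised during testing."""
    return 100.0 * tested / total

# Illustrative figures: 45 of 50 defects removed, 180 of 200 requirements tested.
print(defect_removal_efficiency(45, 50))   # 90.0
print(requirements_coverage(180, 200))     # 90.0
```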
Complexity measurements
Size of a unit/module: Larger modules/units are considered more complex
Logic complexity: The number of opportunities to branch/transfer within a single module
Documentation complexity: The difficulty level in reading documentation, usually expressed as an academic grade level
Project metrics
Percent of budget utilized: Total budget used so far
Days behind or ahead of schedule: Days planned versus the actual date
Percent of change of project scope: Number of requirements on day 1 versus number of requirements on the last day
Percent of project completed: Number of functionalities tested / total number of functionalities * 100
Size measurements
KLOC: Kilo lines of code (1,000 lines of code, excluding comments)
Function points: A defined unit of size for software
Pages or words of documentation: The amount of documentation produced, measured in pages or words
Defect metrics
Defects related to size of software: Number of defects relative to the size of the software (for example, defects per KLOC)
Severity of defects: Such as high, medium, low
Priority of defects: Importance of correcting defects
Age of defects: Number of days the defect has been uncovered but not corrected
Defects uncovered in testing: Number of defects found during testing
Cost to locate a defect: Cost of testing divided by the number of defects located
Product metrics
Defect density: The expected number of defects that will occur in a product during development
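Defect density is commonly reported relative to product size, for example as defects per KLOC. A minimal sketch; the sample figures are assumptions for illustration.

```python
def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

# Illustrative figures: 30 defects found in a 15 KLOC product.
print(defect_density(30, 15))  # 2.0 defects per KLOC
```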
Satisfaction metric
Ease of use: The amount of effort required to use the software
Customer complaints: Number of complaints relative to the number of transactions
Customer subjective assessment: Feedback from the customer
Acceptance criteria met: Percentage of acceptance criteria met
Productivity metrics
Cost of testing in relation to overall project costs: Testing cost compared with the total project cost
Under budget/ahead of schedule: Planned versus actual budget and schedule
Software defects uncovered after the software is placed into operational status: Number of defects found in production
Amount of testing using automated tools: Number of automated test cases versus total (manual) test cases
Exercise
Prepare the metrics to identify the following.
Effort variance
Schedule variance
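As a starting point for the exercise, both variances can be expressed as the percentage deviation of actual from planned values. This is one common formulation, not the only one; organizations define these metrics differently.

```python
def effort_variance(actual_effort, planned_effort):
    """Percentage deviation of actual effort from planned effort."""
    return 100.0 * (actual_effort - planned_effort) / planned_effort

def schedule_variance(actual_days, planned_days):
    """Percentage deviation of actual duration from planned duration."""
    return 100.0 * (actual_days - planned_days) / planned_days

# Illustrative figures: 120 person-days spent against 100 planned,
# and 55 calendar days taken against 50 planned.
print(effort_variance(120, 100))   # 20.0 (percent over planned effort)
print(schedule_variance(55, 50))   # 10.0 (percent behind schedule)
```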
Chapter 12
Quality standards & methodologies
Objectives of this chapter
CMMI
TMM
Six Sigma
ISO standards
Introduction
The objective of this chapter is to familiarize the reader with the quality standards and
methodologies
CMM Capability Maturity Model
Capability Maturity Model (CMM) broadly refers to a process improvement approach
that is based on a process model. CMM also refers specifically to the first such model,
developed by the Software Engineering Institute (SEI) in the mid-1980s, as well as the
family of process models that followed. A process model is a structured collection of
practices that describe the characteristics of effective processes; the practices included are
those proven by experience to be effective.
The Capability Maturity Model can be used to assess an organization against a scale of
five process maturity levels. Each level ranks the organization according to its
standardization of processes in the subject area being assessed. The subject areas can be
as diverse as software engineering, systems engineering, project management, risk
management, system acquisition, information technology (IT) services and personnel
management.
CMM was developed by the SEI at Carnegie Mellon University in Pittsburgh. It has been
used extensively for avionics software and government projects, in North America,
Europe, Asia, Australia, South America, and Africa. Currently, some government
departments require software development contractor organizations to achieve and
operate at level 3.
Level 1 - Initial
At maturity level 1, processes are usually ad hoc and the organization usually does not
provide a stable environment. Success in these organizations depends on the competence
and heroics of the people in the organization and not on the use of proven processes. In
spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce
products and services that work; however, they frequently exceed the budget and
schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, abandon
processes in times of crisis, and an inability to repeat their past successes.
Level 2 - Repeatable
At maturity level 2, software development successes are repeatable. The processes may
not repeat for all the projects in the organization. The organization may use some basic
project management to track cost and schedule.
Process discipline helps ensure that existing practices are retained during times of stress.
When these practices are in place, projects are performed and managed according to their
documented plans.
Project status and the delivery of services are visible to management at defined points
(for example, at major milestones and at the completion of major tasks).
Basic project management processes are established to track cost, schedule, and
functionality. The minimum process discipline is in place to repeat earlier successes on
projects with similar applications and scope. There is still a significant risk of exceeding
cost and time estimates.
Level 3 - Defined
The organization's set of standard processes, which is the basis for level 3, is established
and improved over time. These standard processes are used to establish consistency
across the organization. Projects establish their defined processes by tailoring the
organization's set of standard processes according to tailoring guidelines.
The organization's management establishes process objectives based on the
organization's set of standard processes and ensures that these objectives are
appropriately addressed.
A critical distinction between level 2 and level 3 is the scope of standards, process
descriptions, and procedures. At level 2, the standards, process descriptions, and
procedures may be quite different in each specific instance of the process (for example,
on a particular project). At level 3, the standards, process descriptions, and procedures for
a project are tailored from the organization's set of standard processes to suit a particular
project or organizational unit.
Level 4 - Managed
Using precise measurements, management can effectively control the software
development effort. In particular, management can identify ways to adjust and adapt the
process to particular projects without measurable losses of quality or deviations from
specifications. Organizations at this level set quantitative quality goals for both software
process and software maintenance.
Subprocesses are selected that significantly contribute to overall process performance.
These selected subprocesses are controlled using statistical and other quantitative
techniques.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of
process performance. At maturity level 4, the performance of processes is controlled
using statistical and other quantitative techniques, and is quantitatively predictable. At
maturity level 3, processes are only qualitatively predictable.
Level 5 - Optimizing
Maturity level 5 focuses on continually improving process performance through both
incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect
changing business objectives, and used as criteria in managing process improvement. The
effects of deployed process improvements are measured and evaluated against the
quantitative process-improvement objectives. Both the defined processes and the
organization's set of standard processes are targets of measurable improvement activities.
Process improvements to address common causes of process variation and measurably
improve the organization's processes are identified, evaluated, and deployed.
Optimizing processes that are nimble, adaptable, and innovative depends on the
participation of an empowered workforce aligned with the business values and objectives
of the organization. The organization's ability to rapidly respond to changes and
opportunities is enhanced by finding ways to accelerate and share learning.
A critical distinction between maturity level 4 and maturity level 5 is the type of process
variation addressed. At maturity level 4, processes are concerned with addressing special
causes of process variation and providing statistical predictability of the results. Though
processes may produce predictable results, the results may be insufficient to achieve the
established objectives. At maturity level 5, processes are concerned with addressing
common causes of process variation and changing the process (that is, shifting the mean
of the process performance) to improve process performance while maintaining
statistical predictability.
Process areas
The CMMI contains several key process areas indicating the aspects of product
development that are to be covered by company processes.
Key Process Areas of the Capability Maturity Model Integration (CMMI)
Abbreviation  Name                                        Area                 Maturity Level
CAR           Causal Analysis and Resolution              Support              5
CM            Configuration Management                    Support              2
DAR           Decision Analysis and Resolution            Support              3
IPM           Integrated Project Management               Project Management   3
ISM           Integrated Supplier Management              Project Management   3
IT            Integrated Teaming                          Project Management   3
MA            Measurement and Analysis                    Support              2
OEI           Organizational Environment for Integration  Support              3
OID           Organizational Innovation and Deployment    Process Management   5
OPD           Organizational Process Definition           Process Management   3
OPF           Organizational Process Focus                Process Management   3
OPP           Organizational Process Performance          Process Management   4
OT            Organizational Training                     Process Management   3
PI            Product Integration                         Engineering          3
PMC           Project Monitoring and Control              Project Management   2
PP            Project Planning                            Project Management   2
PPQA          Process and Product Quality Assurance       Support              2
QPM           Quantitative Project Management             Project Management   4
RD            Requirements Development                    Engineering          3
REQM          Requirements Management                     Engineering          2
RSKM          Risk Management                             Project Management   3
SAM           Supplier Agreement Management               Project Management   2
TS            Technical Solution                          Engineering          3
VAL           Validation                                  Engineering          3
VER           Verification                                Engineering          3
The name
The organization is usually referred to simply as "ISO". It is a common misconception
that ISO stands for "International Standards Organization", or something similar. ISO is
not an acronym; it comes from the Greek word isos, meaning "equal". In English,
the organization's long-form name is "International Organization for Standardization",
while in French it is called "Organisation internationale de normalisation". These names
would result in different acronyms in ISO's two official languages, English (IOS) and
French (OIN), so the founders of the organization chose "ISO" as the universal short
form of its name.
4 - Quality Management System.
Your Quality Management System must contain various elements. This clause describes
the requirements for your Quality Manual, control of documents, control of records, etc.
5 - Management Responsibility
Your "Top Management" must fulfil their part of this process. The standard states what it
requires from them.
6 - Resource Management
You must supply sufficient resources so that your system can work. This includes
facilities, people, training, equipment, etc.
7 - Product Realization
You must control the process from quotation/receipt of order through design of product or
service, procurement of parts, manufacture of goods or provision of your service, through
to delivery and subsequent servicing.
8 - Measurement, Analysis & Improvement
You must ensure that what you provide to your Customers is correct. Whilst making
measurements or checking the goods or services, you must analyze the information and
use it to improve your system.
Six Sigma
Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects
(driving towards six standard deviations between the mean and the nearest specification
limit) in any process -- from manufacturing to transactional and from product to service.
Six Sigma at many organizations simply means a measure of quality that strives for near
perfection
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per
million opportunities. A Six Sigma defect is defined as anything outside of customer
specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.
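The DPMO figure follows directly from these definitions: defects divided by total opportunities, scaled to one million. A sketch with invented sample counts:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities (DPMO)."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Illustrative figures: 17 defects found across 1,000 units, each unit
# having 5 opportunities for a defect.
print(dpmo(17, 1000, 5))  # 3400.0 DPMO
```

A process operating at Six Sigma would show no more than 3.4 DPMO, so the illustrative process above falls far short of that level.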
Six Sigma is a business improvement methodology, originally developed by Motorola to
systematically improve processes by eliminating defects. Defects are defined as
unacceptable deviation from the mean or target. The objective of Six Sigma is to deliver
high performance, reliability, and value to the end customer. Since it was originally
developed, Six Sigma has enjoyed wide popularity as an important element of many Total
Quality Management (TQM) initiatives.
The process was pioneered by Bill Smith at Motorola in 1986 and was originally
defined as a metric for measuring defects and improving quality, and a methodology to
reduce defect levels below 3.4 Defects Per (one) Million Opportunities (DPMO), or, put
another way, a methodology of controlling a process to the point of plus or minus six
sigma (standard deviations) from a centerline (for a total span of twelve sigma). Six
Sigma has now grown beyond defect control.
Six Sigma is a registered service mark and trademark of Motorola, Inc. Motorola has
reported over US$17 billion in savings from Six Sigma to date.
In addition to Motorola, companies that adopted Six Sigma methodologies early on
and continue to practice them today include Honeywell International (previously known
as AlliedSignal), Raytheon, and General Electric (where it was introduced by Jack
Welch). These three companies have reportedly saved billions of dollars through the
aggressive implementation and daily practice of Six Sigma methodologies.
A recent Six Sigma trend is the advancement of the methodology by integrating it with
TRIZ for inventive problem solving and product design.
Methodology
Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to
improve an existing business process. DMADV is used to create new product designs or
process designs in such a way that the result is more predictable, mature, and defect-free
performance.
Also see DFSS (Design for Six Sigma) quality. Sometimes a DMAIC project may turn
into a DFSS project because the process in question requires complete redesign to bring
about the desired degree of improvement.
DMAIC
The basic methodology consists of the following five steps:
Define the process improvement goals that are consistent with customer demands
and enterprise strategy.
Measure the current process and collect relevant data for future comparison.
Analyze the data to verify cause-and-effect relationships and to identify the root
causes of the defects under investigation.
Improve or optimize the process based upon the analysis, using techniques like
Design of Experiments.
Control to ensure that any variances are corrected before they result in defects.
Set up pilot runs to establish process capability, transition to production, and
thereafter continuously measure the process and institute control mechanisms.
DMADV
The basic methodology consists of the following five steps:
Define the goals of the design activity that are consistent with customer demands
and enterprise strategy.
Measure and identify CTQs (characteristics that are Critical To Quality), product
capabilities, production process capability, and risks.
Analyze to develop and design alternatives, create a high-level design, and evaluate
design capability to select the best design.
Design details, optimize the design, and plan for design verification. This phase
may require simulations.
Verify the design, set up pilot runs, implement the production process, and hand
over to the process owners.
Some people have added an R for Realize, giving DMAICR. Others contend that focusing
on the financial gains realized through Six Sigma is counterproductive and that such
financial gains are simply byproducts of good process improvement.
Another additional flavor of Design for Six Sigma is the DMEDI method. This process is
almost exactly like the DMADV process, utilizing the same toolkit, but with a different
acronym. DMEDI stands for Define, Measure, Explore, Develop, Implement.
Chapter 13
Testing Automation
Test automation
What is test automation?
Benefits of automation
Terms used
Types of automation tools
Introduction
The objective of this chapter is to familiarize the reader with test automation
What is test automation?
Developing software to test software is called test automation
Benefits of automation
Fast
Reliable
Repeatable
Programmable
Comprehensive
Reusable
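A minimal sketch of what "software to test the software" looks like, using Python's unittest module. The apply_discount function is a hypothetical stand-in for the application under test; a real automated suite would drive the actual product.

```python
import unittest

# Hypothetical function standing in for the application under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTest(unittest.TestCase):
    """Scripted checks that can be re-run unattended after every build:
    fast, repeatable, and reusable, as listed above."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

# Run the suite and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTest)
result = unittest.TextTestRunner().run(suite)
print("All regression checks passed:", result.wasSuccessful())
```

Because the suite is programmable, the same checks can be extended or rerun against every new build at no extra manual effort.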
Types of automation tools
Functionality / Regression testing tools
Performance testing tools
Test Management tools
Functionality/Regression testing tools
These tools help the user create, edit, and run scripts for testing functionality, using the
following features