
Software Testing

Index

Software Testing
Introduction
Testing Start Process
Testing Stop Process
Testing Strategy
Testing Plan
Risk Analysis
Software Testing Life Cycle
Software Testing Types
Static Testing
Dynamic Testing
Blackbox Testing
Whitebox Testing
Unit Testing
Requirements Testing
Regression Testing
Error Handling Testing
Manual Support Testing
Intersystem Testing
Control Testing
Parallel Testing
Volume Testing
Stress Testing
Performance Testing

Testing Tools
WinRunner
LoadRunner
TestDirector
SilkTest
TestPartner

Interview Questions
WinRunner
LoadRunner
SilkTest
TestDirector
General Testing Questions


Testing Introduction
Testing is a process used to help identify the correctness, completeness and quality of
developed computer software. With that in mind, testing can never completely establish the
correctness of computer software. In other words, testing is nothing but criticism or
comparison, where comparison means comparing the actual value with the expected one.

There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following rote
procedure. One definition of testing is "the process of questioning a product in order to
evaluate it", where the "questions" are things the tester tries to do with the product, and
the product answers with its behavior in reaction to the probing of the tester. Although most
of the intellectual processes of testing are nearly identical to those of review or inspection,
the word testing connotes the dynamic analysis of the product: putting the product through
its paces.

The quality of the application can and normally does vary widely from system to system but
some of the common quality attributes include reliability, stability, portability, maintainability
and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and
criteria.

Testing helps in verifying and validating that the software is working as it is intended to
work. This involves using static and dynamic methodologies to test the application.

Because of the fallibility of its human designers and its own abstract, complex nature, software
development must be accompanied by quality assurance activities. It is not unusual for
developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight
control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities
combined. The destructive nature of testing requires that the developer discard preconceived
notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But testing cannot
show the absence of defects; it can only show that software defects are present.


When Testing should start:

Testing early in the life cycle reduces the errors. Test deliverables are associated with every
phase of development. The goal of the software tester is to find bugs, find them as early as
possible, and make sure they are fixed.

The number one cause of Software bugs is the Specification. There are several reasons
specifications are the largest bug producer.

In many instances a spec simply isn't written. Other reasons may be that the spec isn't
thorough enough, it's constantly changing, or it's not communicated well to the entire team.
Planning software is vitally important; if it's not done correctly, bugs will be created.

The next largest source of bugs is the design. That's where the programmers lay the plan
for their software. Compare it to an architect creating the blueprint for a building. Bugs
occur here for the same reasons they occur in the specification: the design is rushed,
changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be
traced to software complexity, poor documentation, schedule pressure or just plain
dumb mistakes. It's important to note that many bugs that appear on the surface to be
programming errors can really be traced to the specification. It's quite common to hear a
programmer say, "Oh, so that's what it's supposed to do. If someone had told me that, I
wouldn't have written the code that way."

The other category is the catch-all for what is left. Some bugs can be blamed on false
positives, conditions that were thought to be bugs but really weren't. There may be
duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can be
traced to testing errors.

Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug
found and fixed during the early stages, when the specification is being written, might cost
next to nothing, or 10 cents in our example. The same bug, if not found until the software is
coded and tested, might cost $1 to $10. If a customer finds it, the cost could easily top
$100.


When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and
run in such an interdependent environment, that complete testing can never be done.
"When to stop testing" is one of the most difficult questions for a test engineer. Common
factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which bugs are being found drops below an agreed threshold
• Beta or alpha testing period ends
• The risk in the project is under the acceptable limit

Practically, the decision to stop testing is based on the level of risk acceptable to
management. As testing is a never-ending process we can never assume that 100% testing
has been done; we can only minimize the risk of shipping the product to the client with X
amount of testing done. The risk can be measured by risk analysis, but for a small-duration /
low-budget / low-resources project, risk can be deduced simply from the following (a
minimal check along these lines is sketched below):

• Measuring test coverage
• Number of test cycles
• Number of high-priority bugs
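
For a small project, these three measures can be combined into a simple ship/no-ship check. The sketch below is a hypothetical illustration, assuming thresholds (80% coverage, three full test cycles, zero open high-priority bugs) that every team would set for itself:

```python
# A minimal sketch of a stop-testing check for a small project.
# The thresholds are illustrative assumptions, not fixed rules.

def ok_to_stop(coverage_pct, test_cycles, open_high_priority_bugs):
    """Return True if the residual risk looks acceptable."""
    return (coverage_pct >= 80.0                 # measured test coverage
            and test_cycles >= 3                 # completed test cycles
            and open_high_priority_bugs == 0)    # high-priority bugs still open

print(ok_to_stop(85.0, 4, 0))  # True  -> risk acceptable, testing can stop
print(ok_to_stop(85.0, 4, 2))  # False -> high-priority bugs still open
```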

Test Strategy:

How we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

• Specific
• Practical
• Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test
project.

Test Approach and Test Architecture are other terms commonly used to describe what I’m
calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy:

"We will use black box testing, cause-effect graphing, boundary testing, and white box
testing to test this product against its specification."


A test strategy addresses: the type of project, the type of software, when testing will occur,
critical success factors, and tradeoffs.

Test Plan - Why

• Identify risks and assumptions up front to reduce surprises later.
• Communicate objectives to all team members.
• Provide a foundation for the test spec, test cases, and ultimately the bugs we find.

Failing to plan = planning to fail.

Test Plan - What

• Derived from the Test Approach, Requirements, Project Plan, Functional Spec. and
Design Spec.
• Details out the project-specific test approach.
• Lists general (high-level) test case areas.
• Includes a testing risk assessment.
• Includes a preliminary test schedule.
• Lists resource requirements.

Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the
project. Activities at each level must be planned well in advance and formally documented.
The individual test levels are carried out based on these individual plans.

Entry means the entry point to that phase. For example, for unit testing, the coding must be
complete before unit testing can start. Task is the activity that is performed. Validation is
the way in which the progress, correctness and compliance are verified for that phase. Exit
tells the completion criteria of that phase, after the validation is done. For example, the exit
criterion for unit testing is that all unit test cases must pass.

Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester
prepares it and distributes it to the individual testers. It contains the following sections.

What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally the basic
input/output of the units along with their basic functionality is tested. In this case mostly
the input units are tested for format, alignment, accuracy and totals. The UTP clearly gives
the rules of what data types are present in the system, their format and their boundary
conditions. This list may not be exhaustive, but it is better to have a complete list of these
details.


Sequence of Testing

The sequence of test activities to be carried out in this phase is listed in this section. This
includes whether to execute positive test cases first or negative test cases first, whether to
execute test cases based on priority or based on test groups, etc. Positive test cases prove
that the system performs what it is supposed to do; negative test cases prove that the
system does not perform what it is not supposed to do (a small illustration follows below).
Testing of the screens, files, database etc. is to be given in proper sequence.
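
To make the distinction concrete, here is a short hedged sketch in Python's unittest framework; the `add` function and its validation rule are invented for illustration. The positive case confirms the unit does what it should; the negative case confirms it rejects what it should not accept:

```python
import unittest

def add(a, b):
    """Toy unit under test: adds two numbers, rejects non-numeric input."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() accepts numbers only")
    return a + b

class AddUnitTest(unittest.TestCase):
    def test_positive_valid_sum(self):
        # Positive test case: the system performs what it is supposed to do.
        self.assertEqual(add(2, 3), 5)

    def test_negative_rejects_bad_input(self):
        # Negative test case: the system refuses what it is not supposed to do.
        with self.assertRaises(TypeError):
            add("2", 3)

if __name__ == "__main__":
    unittest.main()
```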

Basic Functionality of Units

This section describes how the independent functionality of the units is tested, excluding
any communication between the unit and other units. The interface part is out of the scope
of this test level. Apart from the above sections, the following sections are addressed, very
specific to unit testing:

• Unit testing tools
• Priority of program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria

Integration Test Plan

The integration test plan is the overall plan for carrying out the activities in the integration
test level, which contains the following sections.

What is to be tested?

This section clearly specifies the kinds of interfaces that fall under the scope of testing:
internal and external interfaces, with their requests and responses. This need not go deep
into technical details, but the general approach to how the interfaces are triggered is
explained.

Sequence of Integration

When there are multiple modules present in an application, the sequence in which they are
to be integrated is specified in this section. In this, the dependencies between the modules
play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A
and unit X. In this case, the units A and X have to be integrated first, and then, using that
data, the unit B has to be tested. This has to be stated for the whole set of units in the
program. Given this correctly, the testing activities will slowly build the product, unit by
unit, integrating them as they go (a small sketch of deriving such a sequence follows below).
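
The dependency ordering described above can be derived mechanically. The sketch below reuses the unit names A, B and X from the example; the use of a topological sort is our illustration, not a prescribed method:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each unit maps to the set of units whose data it depends on.
# B needs data fed by A and X, as in the example above.
dependencies = {
    "A": set(),
    "X": set(),
    "B": {"A", "X"},
}

# static_order() yields units in a valid integration sequence:
# every unit appears only after the units it depends on.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['A', 'X', 'B']
```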

System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In
the system test, apart from testing the functional aspects of the system, some special
testing activities are carried out, such as stress testing. The following are the sections
normally present in a system test plan.


What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally,
system testing is based on the requirements. All requirements are to be verified in the
scope of system testing. This covers the functionality of the product. Apart from this, any
special testing performed is also stated here.

Functional Groups and the Sequence

The requirements can be grouped in terms of functionality. Based on this, there may also be
priorities among the functional groups. For example, in a banking application, anything
related to customer accounts can be grouped into one area, anything related to inter-branch
transactions may be grouped into another area, etc. In the same way, for the product being
tested, these areas are to be mentioned here, and the suggested sequence of testing these
areas, based on the priorities, is to be described.

Acceptance Test Plan {ATP}

The client performs the acceptance testing at their site. It will be very similar to the
system test performed by the software development unit. Since the client is the one who
decides the format and testing methods as part of acceptance testing, there is no way to
know in advance exactly how they will carry out the testing, but it will not differ much from
the system testing. Assume that all the rules which are applicable to the system test can be
applied to acceptance testing as well.

Since this is just one level of testing done by the client for the overall product, it may
include test cases covering the unit and integration test level details.

A sample Test Plan Outline along with their description is as shown below:

Test Plan Outline

1. BACKGROUND - Summarizes the functions of the application system and the tests to be
performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing
the application.
4. TEST ITEMS - Lists each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - Lists each of the features (functions or requirements) which
will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement
which won't be tested and why not.
7. APPROACH - Describes the data flows and test philosophy (simulation or live execution,
etc.). This section also mentions all the approaches which will be followed at the various
stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement plus an itemized list of expected outputs
and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion?
Under what circumstances may it be resumed in the middle? Establish checkpoints in long
tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report, test
software.
11. TESTING TASKS - Functional tasks (e.g., equipment setup) and administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance, office space and equipment,
hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 11? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests and
system tests, should be clearly mentioned along with the estimated efforts.

Risk Analysis:

A risk is the potential for loss or damage to an organization from materialized threats. Risk
analysis attempts to identify all the risks and then quantify their severity. A threat, as we
have seen, is a possible damaging event. If it occurs, it exploits vulnerability in the security
of a computer-based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software
development, and the platform you are working on.

2. Business Risks: Most common risks associated with the business using the Software

3. Testing Risks: Knowledge of the most common risks associated with Software Testing
for the platform you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing
unsatisfactory or untested software products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated
with implementing and operating information technology, products and processes;
assessing their likelihood; and initiating strategies to test those risks.

Traceability Matrix:

Traceability means that you would like to be able to trace back and forth how and where
any work product fulfills the directions of the preceding (source) product. The matrix deals
with the where; the how you have to work out yourself, once you know the where.

Take e.g. the Requirement of User Friendliness (UF). Since UF is a complex concept, it is not
solved by just one design-solution and it is not solved by one line of code. Many partial
design-solutions may contribute to this Requirement and many groups of lines of code may
contribute to it.


A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-
requirements that together are supposed to solve the UF requirement, along with other
(sub-)requirements. On the other side (e.g. the top) you specify all design solutions. Now
you can mark, at the cross points of the matrix, which design solutions solve (more or less)
each requirement. If a design solution does not solve any requirement, it should be deleted,
as it is of no value.

Having this matrix, you can check whether any requirement has at least one design solution
and by checking the solution(s) you may see whether the requirement is sufficiently solved
by this (or the set of) connected design(s).

If you have to change any requirement, you can see which designs are affected. And if you
change any design, you can check which requirements may be affected and see what the
impact is.

In a Design-Code Traceability Matrix you can do the same to keep trace of how and which
code solves a particular design and how changes in design or code affect each other.

A traceability matrix:

• Demonstrates that the implemented system meets the user requirements.
• Serves as a single source for tracking purposes.
• Identifies gaps in the design and testing.
• Prevents delays in the project timeline which can be brought about by having to backtrack
to fill the gaps.

Software Testing Life Cycle:

The test development life cycle contains the following components:

Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting

A use case is a typical interaction scenario from a user's perspective, used for system
requirements studies or testing; in other words, "an actual or realistic example scenario". A
use case describes the use of a system from start to finish. Use cases focus attention on
aspects of a system useful to people outside of the system itself.

• Users of a program are called users or clients.
• Users of an enterprise are called customers, suppliers, etc.

Use Case:

A collection of possible scenarios between the system under discussion and external actors,
characterized by the goal the primary actor has toward the system's declared
responsibilities, showing how the primary actor's goal might be delivered or might fail.


Use cases are goals (use cases and goals are used interchangeably) that are made up of
scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a
scenario is a sub (or mini) goal of the use case. As such, each sub-goal represents either
another use case (a subordinate use case) or an autonomous action that is at the lowest
level desired by our use case decomposition.

This hierarchical relationship is needed to properly model the requirements of a system
being developed. A complete use case analysis requires several levels. In addition to the
level at which the use case is operating, it is important to understand the scope it is
addressing. The level and scope are important to assure that the language and granularity
of scenario steps remain consistent within the use case.

There are two scopes that use cases are written from: Strategic and System. There are also
three levels: Summary, User and Sub-function.

Scopes: Strategic and System

Strategic Scope:

The goal (use case) is a strategic goal with respect to the system. These goals are goals of
value to the organization. The use case shows how the system is used to benefit the
organization. These strategic use cases will eventually use some of the same lower-level
(subordinate) use cases.

System Scope:

Use cases at system scope are bounded by the system under development. The goals
represent specific functionality required of the system. The majority of use cases are at
system scope. These use cases are often steps in strategic-level use cases.

Levels: Summary Goal , User Goal and Sub-function.

Sub-function Level Use Case:

A sub goal or step is below the main level of interest to the user. Examples are "logging in"
and "locate a device in a DB". Always at System Scope.

User Level Use Case:

This is the level of greatest interest. It represents a user task or elementary business
process. A user-level goal addresses the question "Does your job performance depend on
how many of these you do in a day?" For example, "Create Site View" or "Create New
Device" would be user-level goals, but "Log In to System" would not. Always at System
Scope.

Summary Level Use Case:

Written for either strategic or system scope. They represent collections of user-level goals.
For example, the summary goal "Configure Data Base" might include as a step the user-level
goal "Add Device to database". Either at System or Strategic Scope.


Test Documentation

Test documentation is a required tool for managing and maintaining the testing process.
Documents produced by testers should answer the following questions:

• What to test? - Test Plan
• How to test? - Test Specification
• What are the results? - Test Results Analysis Report

Software testing life cycle

Bug Life cycle:

In entomology (the study of real, living Bugs), the term life cycle refers to the various
stages that an insect assumes over its life. If you think back to your high school biology
class, you will remember that the life cycle stages for most insects are the egg, larvae,
pupae and adult. It seems appropriate, given that software problems are also called bugs,
that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an
example of the simplest, and most optimal, software bug life cycle.

This example shows that when a bug is found by a software tester, it is logged and assigned
to a programmer to be fixed. This state is called the open state. Once the programmer fixes
the code, he assigns it back to the tester and the bug enters the resolved state. The tester
then performs a regression test to confirm that the bug is indeed fixed and, if so, closes it
out. The bug then enters its final state, the closed state.


In some situations though, the life cycle gets a bit more complicated.

In this case the life cycle starts out the same, with the tester opening the bug and assigning
it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to
fix and assigns it to the project manager to decide. The project manager agrees with the
programmer and places the bug in the resolved state as a "won't-fix" bug. The tester
disagrees, looks for and finds a more obvious and general case that demonstrates the bug,
reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves
it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.

You can see that a bug might undergo numerous changes and iterations over its life,
sometimes looping back and starting the life cycle all over again. The figure below takes the
simple model above and adds to it the possible decisions, approvals, and looping that can
occur in most projects. Of course every software company and project will have its own
system, but this figure is fairly generic and should cover almost any bug life cycle that you'll
encounter.


The generic life cycle has two additional states and extra connecting lines. The review state
is where the project manager or a committee, sometimes called a Change Control Board,
decides whether the bug should be fixed. In some projects all bugs go through the review
state before they're assigned to the programmer for fixing. In other projects, this may not
occur until near the end of the project, or not at all. Notice that the review state can also go
directly to the closed state. This happens if the review decides that the bug shouldn't be
fixed: it could be too minor, really not a problem, or a testing error. The other additional
state is deferred. The review may determine that the bug should be considered for fixing at
some time in the future, but not for this release of the software.

The additional line from the resolved state back to the open state covers the situation where
the tester finds that the bug hasn't been fixed. It gets reopened and the bug's life cycle
repeats.

The two dotted lines that loop from the closed and the deferred state back to the open state
rarely occur but are important enough to mention. Since a tester never gives up, it's
possible that a bug that was thought to be fixed, tested, and closed could reappear. Such
bugs are often called regressions. It's possible that a deferred bug could later be proven
serious enough to fix immediately. If either of these occurs, the bug is reopened and started
through the process again. Most project teams adopt rules for who can change the state of
a bug or assign it to someone else. For example, maybe only the project manager can
decide to defer a bug, or only a tester is permitted to close a bug. What's important is that
once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the
necessary information to drive it to being fixed and closed.
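
Most defect-tracking tools enforce such rules with a simple state machine. The following is a minimal illustration of the generic life cycle just described; the state names and allowed transitions follow the text, while the class and its API are hypothetical:

```python
# Allowed transitions in the generic bug life cycle described above.
TRANSITIONS = {
    "open":     {"review", "resolved"},          # assigned, or sent for review
    "review":   {"open", "resolved", "closed", "deferred"},
    "resolved": {"open", "closed"},              # reopen if not really fixed
    "closed":   {"open"},                        # regression: bug reappears
    "deferred": {"open"},                        # deferred bug becomes urgent
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

bug = Bug("Crash when saving an empty file")
bug.move_to("resolved")   # programmer fixes the code
bug.move_to("closed")     # tester confirms the fix
bug.move_to("open")       # regression: the bug reappears later
```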

Bug Report - Why

• Communicate bug for reproducibility, resolution, and regression.
• Track bug status (open, resolved, closed).
• Ensure bug is not forgotten, lost or ignored.
• Used to back-create a test case where none existed before.


Testing Types
Static testing

The verification activities fall into the category of static testing. During static testing, you
have a checklist to check whether the work you are doing is going as per the set standards
of the organization. These standards can be for coding, integrating and deployment.
Reviews, inspections and walkthroughs are static testing methodologies.

Dynamic testing

Dynamic testing involves working with the software, giving input values and checking if the
output is as expected. These are the validation activities. Unit tests, integration tests,
system tests and acceptance tests are a few of the dynamic testing methodologies. As we
go further, let us understand the various test life cycles and get to know the testing
terminologies. To understand more of software testing, various methodologies, tools and
techniques, you can refer to the Software Testing Guide Book.

Difference Between Static and Dynamic Testing: Please refer to the definitions of static and
dynamic testing above to observe the difference between them.

Black box testing

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:

1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing
tends to be applied during later stages. Test cases should be derived which

1. Reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.


Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test
cases can be derived. Equivalence partitioning strives to define a test case that uncovers
classes of errors and thereby reduces the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class represents a
set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are
defined.
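
As a worked example (ours, not from the original text), take an input field that accepts ages from 18 to 60. Guideline 1 gives one valid and two invalid classes, and one representative value per class is enough:

```python
# Equivalence partitioning for a hypothetical input range 18..60.
# Guideline 1: a range yields one valid and two invalid equivalence classes.

def is_valid_age(age):
    return 18 <= age <= 60

partitions = {
    "invalid, below range": 10,   # any value < 18 represents this class
    "valid, inside range":  35,   # any value in 18..60 represents this class
    "invalid, above range": 75,   # any value > 60 represents this class
}

for name, representative in partitions.items():
    print(f"{name}: is_valid_age({representative}) = {is_valid_age(representative)}")
```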

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing on input conditions solely, BVA derives test cases from the output
domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include values a and b and just
above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these
limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
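
Continuing the hypothetical age field from the previous example (valid range 18 to 60), guideline 1 selects the boundary values themselves plus values just above and just below them:

```python
# Boundary value analysis for the range a=18, b=60 (hypothetical field).
a, b = 18, 60
bva_values = [a - 1, a, a + 1, b - 1, b, b + 1]  # just below/at/above a and b

def is_valid_age(age):
    return a <= age <= b

for v in bva_values:
    print(f"age={v}: valid={is_valid_age(v)}")
# Expected: 17 invalid; 18, 19, 59, 60 valid; 61 invalid
```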

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical
conditions and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
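
A tiny worked example of steps 3 and 4, with causes and effects invented for illustration: two causes, "valid user id" (C1) and "valid password" (C2), and one effect, "grant login" (E1 = C1 AND C2). Each column of the resulting decision table becomes one test case:

```python
from itertools import product

# Step 1: causes (inputs) C1, C2 and effect (action) E1 are identified.
# Step 3: the graph "E1 = C1 AND C2" is converted to a decision table.
# Step 4: each decision-table rule becomes a test case.
for c1_valid_user, c2_valid_password in product([True, False], repeat=2):
    e1_grant_login = c1_valid_user and c2_valid_password
    print(f"C1={c1_valid_user!s:5} C2={c2_valid_password!s:5} "
          f"-> E1 (grant login) = {e1_grant_login}")
```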


White box testing

White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that

1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects


Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. General processing tends to be well understood while special
case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on
a regular basis. Our unconscious assumptions about control flow and data lead to design
errors that can only be detected by path testing.

Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural
design and use it as a guide for defining a basis set of execution paths. Test cases that
exercise the basis set are guaranteed to execute every statement in the program at least
once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the
derivation of the basis set. Each flow graph node represents one or more procedural
statements. The edges between nodes represent flow of control. An edge must terminate at
a node, even if the node does not represent any useful procedural statements. A region in a
flow graph is an area bounded by edges and nodes. Each node that contains a condition is
called a predicate node. Cyclomatic complexity is a metric that provides a quantitative
measure of the logical complexity of a program. It defines the number of independent paths
in the basis set and thus provides an upper bound for the number of tests that must be
performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set
is not unique. Any number of different basis sets can be derived for a given procedural
design. Cyclomatic complexity, V(G), for a flow graph G is equal to

1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
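
A short sketch, using an invented flow graph, showing that these formulas agree. The graph below models a single if/else: node 1 is the predicate, nodes 2 and 3 are the branches, node 4 is the join point:

```python
# Flow graph of a single if/else (graph invented for illustration).
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {n for edge in edges for n in edge}
predicate_nodes = [1]

E, N, P = len(edges), len(nodes), len(predicate_nodes)
print("V(G) = E - N + 2 =", E - N + 2)   # 4 - 4 + 2 = 2
print("V(G) = P + 1     =", P + 1)       # 1 + 1     = 2
# So the basis set contains 2 independent paths: 1-2-4 and 1-3-4.
```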


Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G)
can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for
determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case
is executed and compared to the expected results.

Automating Basis Set Derivation


The derivation of the flow graph and the set of basis paths is amenable to automation. A
software tool to do this can be developed using a data structure called a graph matrix. A
graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow
graph. Each row and column correspond to a particular node and the matrix corresponds to
the connections (edges) between nodes. By adding a link weight to each matrix entry, more
information about the control flow can be captured. In its simplest form, the link weight is 1
if an edge exists and 0 if it does not. But other types of link weights can be represented:

• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis
necessary to produce the basis set.
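
A minimal sketch of a graph matrix for the same four-node if/else flow graph used above, with the simplest link weight: 1 where an edge exists, 0 where it does not. As a hedged illustration of the analysis such a tool can perform, the cyclomatic complexity can be recovered from the matrix by summing, over each row, the number of connections minus one, and adding 1:

```python
# Graph matrix: entry (i, j) holds the link weight, 1 if an edge runs
# from node i to node j, else 0 (flow graph invented for illustration).
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
size = 4
matrix = [[0] * size for _ in range(size)]
for i, j in edges:
    matrix[i - 1][j - 1] = 1  # simplest link weight: edge exists

for row in matrix:
    print(row)

# Cyclomatic complexity from the connection matrix:
# sum of (connections - 1) over each row, plus 1.
complexity = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print("V(G) =", complexity)  # 2, matching the formulas above
```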

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different
classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops

The following tests should be applied to simple loops, where n is the maximum number of
allowable passes through the loop:

1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
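
For a concrete (invented) unit, a function whose loop is allowed at most n passes, the four rules translate directly into loop-count test values:

```python
# Simple-loop test values for a loop with at most n allowable passes.
n = 10
m = 4                      # some interior value with m < n
loop_pass_counts = [0,     # skip the loop entirely
                    1,     # only one pass through the loop
                    m,     # m passes, m < n
                    n - 1, n, n + 1]  # around the maximum

def process(items):
    """Toy loop under test: counts the items it touches."""
    touched = 0
    for _ in items:
        touched += 1
    return touched

for count in loop_pass_counts:
    print(f"{count} passes -> processed {process(range(count))}")
```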


Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this
would result in a geometrically increasing number of test cases. One approach for nested
loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at
minimums and other nested loops to typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others.
If they are not independent (e.g. the loop counter for one is the loop counter for the other),
then the nested approach can be used.

Unstructured Loops

This type of loop should be redesigned, not tested!


Other White Box Techniques

Other white box testing techniques include:

1. Condition testing: exercises the logical conditions in a program.
2. Data flow testing: selects test paths according to the locations of definitions and uses of
variables in the program.

Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular
module of source code.

The idea is to write test cases for every non-trivial function or method in the module so that
each test case is separate from the others if possible. This type of testing is mostly done by
the developers.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. It provides a written contract that the piece must satisfy. This isolated
testing provides four main benefits:

Encourages change

Unit testing allows the programmer to refactor code at a later date, and make sure the
module still works correctly (regression testing). This provides the benefit of encouraging
programmers to make changes to the code since it is easy for the programmer to check if
the piece is still working properly.


Simplifies Integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a
bottom-up testing style approach. Testing the parts of a program first and then testing the
sum of its parts makes integration testing easier.

Documents the code

Unit testing provides a sort of "living document" for the class being tested. Clients looking to
learn how to use the class can look at the unit tests to determine how to use the class to fit
their needs.

Separation of Interface from Implementation

Because some classes may have references to other classes, testing a class can frequently
spill over into testing another class. A common example of this is classes that depend on a
database; in order to test the class, the tester finds herself writing code that interacts with
the database. This is a mistake, because a unit test should never go outside of its own class
boundary. As a result, the software developer abstracts an interface around the database
connection, and then implements that interface with their own Mock Object. This results in
loosely coupled code, thus minimizing dependencies in the system.
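
A short sketch of the pattern just described; the class and interface names are invented. The class under test depends on an abstract store rather than on a concrete database, so the unit test substitutes a mock object and never crosses the class boundary:

```python
from abc import ABC, abstractmethod

class UserStore(ABC):
    """Interface abstracted around the database connection."""
    @abstractmethod
    def find_name(self, user_id): ...

class Greeter:
    """Class under test: depends on the interface, not on a real database."""
    def __init__(self, store: UserStore):
        self.store = store

    def greet(self, user_id):
        return f"Hello, {self.store.find_name(user_id)}!"

class MockUserStore(UserStore):
    """Mock object implementing the interface with canned data."""
    def find_name(self, user_id):
        return {1: "Ada"}[user_id]

# The unit test stays inside the Greeter class boundary: no database needed.
assert Greeter(MockUserStore()).greet(1) == "Hello, Ada!"
print("greeter unit test passed")
```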

Limitations

It is important to realize that unit-testing will not catch every error in the program. By
definition, it only tests the functionality of the units themselves. Therefore, it will not catch
integration errors, performance problems and any other system-wide issues. In addition, it
may not be trivial to anticipate all special cases of input the program unit under study may
receive in reality. Unit testing is only effective if it is used in conjunction with other software
testing activities.


Requirement testing

Usage:

• To ensure that the system performs correctly.
• To ensure that correctness can be sustained for a considerable period of time.
• The system can be tested for correctness through all phases of the SDLC, but in the case
of reliability the programs should be in place to make the system operational.

Objective:

• Successful implementation of user requirements.
• Correctness maintained over a considerable period of time.
• Processing of the application complies with the organization's policies and procedures.


Secondary users' needs are fulfilled:

• Security officer
• DBA
• Internal auditors
• Record retention
• Comptroller

How to Use

Test conditions are created.

• These test conditions are generalized ones, which become test cases as the SDLC
progresses, until the system is fully operational.
• Test conditions are more effective when created from the user's requirements.
• If test conditions are created from documents, then any errors in those documents will
get incorporated into the test conditions, and testing will not be able to find those errors.
• If test conditions are created from other sources (other than documents), error trapping
is effective.
• A functional checklist is created.

When to Use

• Every application should be requirement tested.
• Testing should start at the requirements phase and should progress till the operations
and maintenance phase.
• The method used to carry out requirement testing and the extent of it are important.

Example

• Creating a test matrix to prove that the documented system requirements are the
requirements desired by the user.
• Creating a checklist to verify that the application complies with the organizational
policies and procedures.

Regression testing

Usage:

• To ensure all aspects of the system remain functional after changes.
• To ensure a change in one segment does not change the functionality of another segment.

Objective:

• Determine that system documents remain current.
• Determine that system test data and test conditions remain current.
• Determine that previously tested system functions perform properly without being
affected even though changes are made in some other segment of the application system.


How to Use

• Test cases that were used previously for the already-tested segment are re-run to
ensure that the results of the segment tested currently and the results of the same
segment tested earlier are the same.
• Test automation is needed to carry out the test transactions (test condition execution);
otherwise the process is very time consuming and tedious.
• In this kind of testing, cost/benefit should be carefully evaluated; otherwise the effort
spent on testing would be high and the payback minimal.

When to Use

• When there is a high risk that new changes may affect unchanged areas of the
application system.
• In the development process: regression testing should be carried out after the pre-
determined changes are incorporated in the application system.
• In the maintenance phase: regression testing should be carried out if there is a high risk
that loss may occur when the changes are made to the system.

Example

• Re-running previously conducted tests to ensure that the unchanged portion of the
system functions properly.
• Reviewing previously prepared system documents (manuals) to ensure that they are not
affected after changes are made to the application system.

Disadvantage

• Time consuming and tedious if test automation is not done.


Error handling testing

Usage:

• It determines the ability of the application system to process incorrect transactions
properly.
• Errors encompass all unexpected conditions.
• In some systems approximately 50% of the programming effort will be devoted to
handling error conditions.

Objective:

• Determine that the application system recognizes all expected error conditions.
• Determine that accountability for processing errors has been assigned and that
procedures provide a high probability that errors will be properly corrected.
• Determine that reasonable control is maintained over errors during the correction
process.

How to Use

• A group of knowledgeable people is required to anticipate what can go wrong in the
application system.
• All the application-knowledgeable people need to assemble to integrate their knowledge
of the user area, auditing and error tracking.
• Then logical test error conditions should be created based on this assimilated
information.

When to Use

• Throughout SDLC.
• Impact from errors should be identified and should be corrected to reduce the errors
to acceptable level.
• Used to assist in error management process of system development and
maintenance.

Example

• Create a set of erroneous transactions and enter them into the application system, then
find out whether the system is able to identify the problems.
• Using iterative testing, enter transactions and trap errors; correct them, then enter
transactions with errors which were not present in the system earlier.

Manual support testing

Usage:

• It involves testing of all the functions performed by the people while preparing the
data and using this data from the automated system.

Objective:

• Verify that manual support documents and procedures are correct.
• Determine that manual support responsibility is correctly assigned.
• Determine that manual support people are adequately trained.
• Determine that manual support and the automated segment are properly interfaced.

How to Use

• The process is evaluated in all segments of the SDLC.
• Execution of the test can be done in conjunction with normal system testing.
• Instead of preparing, executing and entering actual test transactions, the clerical and
supervisory personnel can use the results of processing from the application system.
• To test people it requires testing the interface between the people and the application
system.

When to Use

• Verification that manual systems function properly should be conducted throughout
the SDLC.
• Should not be done only at later stages of the SDLC.
• Best done at the installation stage, so that the clerical people do not get used to the
actual system just before the system goes to production.

Example

• Provide input personnel with the type of information they would normally receive
from their customers and then have them transcribe that information and enter it in
the computer.
• Users can be provided a series of test conditions and then asked to respond to those
conditions. Conducted in this manner, manual support testing is like an examination
in which the users are asked to obtain the answer from the procedures and manuals
available to them.

Inter system testing

Usage:

• To ensure that the interconnection between applications functions correctly.

Objective:

• Determine that proper parameters and data are correctly passed between the
applications.
• Ensure that documentation for the involved systems is correct and accurate.
• Ensure that proper timing and coordination of functions exists between the application
systems.

How to Use

• Operations of multiple systems are tested.
• Multiple systems are run from one another to check that the data passed between them
is acceptable and processed properly.

When to Use

• When there is a change in parameters in an application system.
• The risk associated with erroneous parameters decides the extent and type of testing.
• Intersystem parameters should be checked / verified after the change or new
application is placed in production.

Example

• Develop a test transaction set in one application and pass it to another system to
verify the processing.
• Enter test transactions in a live production environment and then use an integrated
test facility to check the processing from one system to another.
• Verify that new changes to the parameters in the system being tested are corrected in
the documentation.

Disadvantage

• Time consuming and tedious if test automation is not done.
• Cost may be expensive if the system is run several times iteratively.

Control testing

Usage:

• Control is a management tool to ensure that processing is performed in accordance
with what management desires or intends.

Objective:

• Accurate and complete data.
• Authorized transactions.
• Maintenance of an adequate audit trail of information.
• Efficient, effective and economical process.
• Process meeting the needs of the user.

How to Use

• To test controls, risks must be identified.
• Testers should take a negative approach, i.e. should determine or anticipate what can
go wrong in the application system.
• Develop a risk matrix, which identifies the risks, the controls, and the segment within
the application system in which each control resides.

When to Use

• Should be tested with other system tests.

Example

• Verify that file reconciliation procedures work.
• Verify that manual controls are in place.

Parallel testing

Usage:

• To ensure that the processing of the new application (new version) is consistent with
the processing of the previous application version.

Objective:

• Conducting redundant processing to ensure that the new version or application
performs correctly.
• Demonstrating consistency or inconsistency between two versions of the application.

How to Use

• The same input data should be run through the two versions of the same application
system.
• Parallel testing can be done with the whole system or part of the system (a segment).

When to Use

• When there is uncertainty regarding the correctness of processing of the new
application, and the new and old versions are similar.
• In financial applications like banking, where there are many similar applications, the
processing can be verified for the old and new versions through parallel testing.

Example

• Operating the new and old versions of a payroll system to determine that the paychecks
from both systems are reconcilable (a small sketch follows below).
• Running the old version of the application to ensure that the functions of the old system
work correctly with respect to the problems encountered in the new system.
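
A hedged sketch of the payroll reconciliation idea; the function names, payroll rules and figures are invented. The same input is run through the old and the new implementation and the outputs are compared:

```python
# Parallel test sketch: run identical input through two versions of the
# same calculation and reconcile the results (payroll rules invented).

def payroll_v1(hours, rate):
    """Old version."""
    return round(hours * rate, 2)

def payroll_v2(hours, rate):
    """New version under test."""
    gross = hours * rate
    return round(gross, 2)

test_inputs = [(40, 15.50), (37.5, 22.00), (0, 30.00)]

for hours, rate in test_inputs:
    old, new = payroll_v1(hours, rate), payroll_v2(hours, rate)
    status = "OK" if old == new else "MISMATCH"
    print(f"hours={hours:5} rate={rate:6.2f} old={old:8.2f} new={new:8.2f} {status}")
```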

Volume testing

Whichever title you choose (for us, volume test), here we are talking about realistically
exercising an application in order to measure the service delivered to users at different
levels of usage. We are particularly interested in its behavior when the maximum number of
users are concurrently active and when the database contains the greatest data volume.

The creation of a volume test environment requires considerable effort. It is essential that
the correct level of complexity exists in terms of the data within the database and the range
of transactions and data used by the scripted users, if the tests are to reliably reflect the
to-be production environment. Once the test environment is built, it must be fully utilised.
Volume tests offer much more than simple service delivery measurement. The exercise
should seek to answer the following questions:

What service level can be guaranteed? How can it be specified and monitored?

Are changes in user behaviour likely? What impact will such changes have on resource
consumption and service delivery?

Which transactions/processes are resource hungry in relation to their tasks?

What are the resource bottlenecks? Can they be addressed?

How much spare capacity is there?

The purpose of volume testing is to find weaknesses in the system with respect to its
handling of large amounts of data during extended time periods.

Stress testing

The purpose of stress testing is to find defects in the system's capacity to handle large
numbers of transactions during peak periods. For example, a script might require users to
log in and proceed with their daily activities while, at the same time, a series of
workstations emulating a large number of other systems run recorded scripts that add,
update, or delete from the database.

Performance testing

System performance is generally assessed in terms of response time and throughput rates
under differing processing and configuration conditions. To attack the performance
problems, several questions should be asked first:

• How much application logic should be remotely executed?
• How much updating should be done to the database server over the network from the
client workstation?
• How much data should be sent to each in each transaction?

According to Hamilton [10], performance problems are most often the result of the client or
server being configured inappropriately.

The best strategy for improving client-server performance is a three-step process [11].
First, execute controlled performance tests that collect data about volume, stress, and
loading. Second, analyze the collected data. Third, examine and tune the database queries
and, if necessary, provide temporary data storage on the client while the application is
executing.
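
The first step, collecting controlled measurements, can be as simple as timing a transaction repeatedly and recording response-time statistics. A minimal sketch, where the transaction body is a stand-in for a real client/server call:

```python
import statistics
import time

def transaction():
    """Stand-in for one client/server transaction under test."""
    time.sleep(0.01)  # replace with a real request in a real test

# Step 1: controlled performance test, collect response times.
samples = []
for _ in range(50):
    start = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - start)

# Step 2: analyze the collected data before any tuning (step 3).
print(f"mean   : {statistics.mean(samples) * 1000:.1f} ms")
print(f"median : {statistics.median(samples) * 1000:.1f} ms")
print(f"95th   : {sorted(samples)[int(len(samples) * 0.95)] * 1000:.1f} ms")
print(f"throughput: {len(samples) / sum(samples):.1f} tx/sec")
```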


Testing tools
WinRunner
Introduction

WinRunner is Mercury Interactive's enterprise functional testing tool, used to quickly create
and run sophisticated automated tests on your application. WinRunner helps you automate
the testing process, from test development to execution. You create adaptable and reusable
test scripts that challenge the functionality of your application. Prior to a software release,
you can run these tests in a single overnight run, enabling you to detect defects and ensure
superior software quality.

What's New in WinRunner 7.5?

Automatic Recovery

The Recovery Manager provides an easy-to-use wizard that guides you through the process
of defining a recovery scenario. You can specify one or more operations that enable the test
run to continue after an exception event occurs. This functionality is especially useful during
unattended test runs, when errors or crashes could interrupt the testing process until
manual intervention occurs.

Silent Installation

Now you can install WinRunner in an unattended mode using previously recorded installation
preferences. This feature is especially beneficial for those who use enterprise software
management products or any automated software distribution mechanisms.

Enhanced Integration with TestDirector

WinRunner works with both TestDirector 6.0, which is client/server-based, and TestDirector
7.x, which is Web-based. When reporting defects from WinRunner’s test results window,
basic information about the test and any checkpoints can be automatically populated in
TestDirector’s defect form. WinRunner now supports version control, which enables updating
and revising test scripts while maintaining old versions of each test.


Support for Terminal Servers

Support for Citrix and Microsoft Terminal Servers makes it possible to open several window
clients and run WinRunner on each client as a single user. Also, this can be used with
LoadRunner to run multiple WinRunner Vusers.

Support for More Environments

WinRunner 7.5 includes support for Internet Explorer 6.x and Netscape 6.x, Windows XP
and Sybase's PowerBuilder 8, in addition to 30+ environments already supported by
WinRunner 7.

WinRunner provides the most powerful, productive and cost-effective solution for verifying
enterprise application functionality. For more information on WinRunner, contact a Mercury
Interactive local representative for pricing, evaluation, and distribution information.

WinRunner(Features & Benefits)

Test functionality using multiple data combinations in a single test

WinRunner's DataDriver Wizard eliminates programming to automate testing for large
volumes of data. This saves testers significant amounts of time preparing scripts and allows
for more thorough testing.

Significantly increase power and flexibility of tests without any programming

The Function Generator presents a quick and error-free way to design tests and enhance
scripts without any programming knowledge. Testers can simply point at a GUI object, and
WinRunner will examine it, determine its class and suggest an appropriate function to be
used.

Use multiple verification types to ensure sound functionality

WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database,
allowing testers to compare expected and actual outcomes and identify potential problems
with numerous GUI objects and their functionality.

Verify data integrity in your back-end database

Built-in Database Verification confirms values stored in the database and ensures
transaction accuracy and the data integrity of records that have been updated, deleted and
added.


View, store and verify at a glance every attribute of tested objects

WinRunner’s GUI Spy automatically identifies, records and displays the properties of
standard GUI objects, ActiveX controls, as well as Java objects and methods. This ensures
that every object in the user interface is recognized by the script and can be tested.

Maintain tests and build reusable scripts

The GUI map provides a centralized object repository, allowing testers to verify and modify
any tested object. These changes are then automatically propagated to all appropriate
scripts, eliminating the need to build new scripts each time the application is modified.

Test multiple environments with a single application

WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In
addition, it provides targeted solutions for such leading ERP/CRM applications as SAP,
Siebel, PeopleSoft and a number of others.

NAVIGATIONAL STEPS FOR WINRUNNER LAB-EXERCISES

Using Rapid Test Script wizard

• Start->Program Files->WinRunner->WinRunner
• Select the Rapid Test Script Wizard (or) Create->Rapid Test Script Wizard
• Click the Next button on the Welcome to Script Wizard screen
• Select the hand icon, click on the application window and click the Next button
• Select the tests and click the Next button
• Select the navigation controls and click the Next button
• Set the learning flow (Express or Comprehensive) and click the Learn button
• Select start application Yes or No, then click the Next button
• Save the startup script and GUI map files, click the Next button
• Save the selected tests, click the Next button
• Click the OK button
• The scripts will be generated; then run them: Run->Run from Top
• Find the results of each script and select Tools->Text Report in the WinRunner Test Results window.

Using GUI-Map Configuration Tool:

• Open an application.
• Select Tools->GUI Map Configuration; a window pops up.
• Click the Add button; click on the hand icon.
• Click on the object which is to be configured. A user-defined class for that object is
added to the list.
• Select the user-defined class you added and press the Configure button.
• Mapped to Class: select a corresponding standard class from the combo box.


• Move the required properties from Available Properties to Learned Properties by
selecting the Insert button.
• Select the selector and recording methods.
• Click the OK button.
• Now you will observe WinRunner identifying the configured objects.

Using Record-ContextSensitive mode:

• Create->Record Context Sensitive
• Select Start->Program Files->Accessories->Calculator
• Do some actions on the application.
• Stop recording.
• Run->Run from Top; press 'OK'.

Using Record-Analog Mode:

• Create->Insert Function->From Function Generator
• Function name: select 'invoke_application' from the combo box.
• Click the Args button; File: mspaint.
• Click on the 'Paste' button; click on the 'Execute' button to open the application; finally
click on 'Close'.
• Create->Record Analog.
• Draw some picture in the Paint file.
• Stop recording.
• Run->Run from Top; press 'OK'.

GUI CHECK POINTS-Single Property Check:

• Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight1a)
• Click on 'Paste' and click on 'Execute' & close the window.
• Create->Record Context Sensitive.
• Do some operations & stop recording.
• Create->GUI Check Point->For Single Property.
• Click on some button whose property is to be checked.
• Click on Paste.
• Now close the Flight1a application; Run->Run from Top.
• Press 'OK'; it displays the results window.
• Double click on the result statement. It shows the expected value & actual value
window.
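When pasted, the checkpoint appears in the script as a property-check statement. A minimal
hand-written sketch of the same idea, assuming the standard TSL property-check functions
and illustrative object names from the Flight1a sample:

    # single-property GUI checkpoints (TSL)
    set_window ("Flight Reservation", 5);              # make the window active
    button_check_info ("Insert Order", "enabled", 1);  # pass if the button is enabled
    obj_check_info ("Order No:", "focused", 0);        # same idea for any GUI object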

GUI CHECK POINTS-For Object/Window Property:

• Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight1a)


• Click on 'Paste' and click on 'Execute' & close the window.
• Create->Record Context Sensitive.
• Do some operations & stop recording.
• Create->GUI Check Point->For Object/Window Property.
• Click on some button whose property is to be checked.
• Click on Paste.
• Now close the Flight1a application; Run->Run from Top.
• Press 'OK'; it displays the results window.
• Double click on the result statement. It shows the expected value & actual value
window.

GUI CHECK POINTS-For Multiple Objects:

• Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight1a)
• Click on 'Paste' and click on 'Execute' & close the window.
• Create->Record Context Sensitive.
• Do some operations & stop recording.
• Create->GUI Check Point->For Multiple Objects.
• Click on some button whose property is to be checked.
• Click on the Add button.
• Click on a few objects & right click to quit.
• Select each object & select the corresponding properties to be checked for that object;
click 'OK'.
• Run->Run from Top. It displays the results.
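In the script, a multiple-object checkpoint is stored as a single win_check_gui statement. A
minimal sketch, assuming checklist and expected-results file names ('list1.ckl', 'gui1') of the
kind the wizard generates:

    # multiple-object GUI checkpoint (TSL)
    set_window ("Flight Reservation", 5);
    # compare every object listed in list1.ckl against the saved results gui1,
    # allowing up to 1 second for the window to stabilize
    win_check_gui ("Flight Reservation", "list1.ckl", "gui1", 1);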

BITMAP CHECK POINT:

For object/window.

• Create->Insert Function->Function Generator (Function name: invoke_application; File: Flight1a)
• Click on 'Paste' and click on 'Execute' & close the window.
• Create->Record Context Sensitive.
• Enter the username and password & click the 'OK' button.
• Open an order in the Flight Reservation application.
• Select File->Fax Order & enter the fax number and signature.
• Press the 'Cancel' button.
• Create->Stop Recording.
• Then open Fax Order in the Flight Reservation application.
• Create->Bitmap Check Point->For Object/Window.
• Run->Run from Top.
• The test fails and you can see the difference.

For Screen Area:


• Open a new Paint Brush file.
• Create->Bitmap Check Point->From Screen Area.
• The Paint file pops up; select an image with the cross-hair pointer.
• Make a slight modification in the Paint file (you can also run on the same Paint file).
• Run->Run from Top.
• The test fails and you can see the difference between the images.
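Both variants are stored in the script as bitmap-checkpoint statements. A minimal sketch,
assuming image names ('Img1', 'Img2') of the kind WinRunner generates:

    # bitmap checkpoints (TSL)
    set_window ("Fax Order No. 4", 5);
    obj_check_bitmap ("Agent Signature:", "Img1", 1);   # whole object
    # screen-area variant: x, y, width, height of the captured rectangle
    win_check_bitmap ("Fax Order No. 4", "Img2", 1, 10, 10, 100, 50);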

DATABASE CHECK POINTS

Using Default check(for MS-Access only)

• Create->Database Check Point->Default Check
• Select the Specify SQL Statement check box
• Click the Next button
• Click the Create button
• Type a new DSN name and click the New button
• Then select a driver for which you want to set up a database & double click that
driver
• Then select the Browse button, retype the same DSN name and click the Save button
• Click the Next button & click the Finish button
• Select the Database button & set the path of your database
• Click the 'OK' button & then click the 'OK' button of your DSN window
• Type the SQL query in the SQL box
• Then click the Finish button. Note: the same process applies for a Custom Check Point.
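The default check is stored in the script as a db_check statement. A minimal sketch,
assuming checklist and expected-results names ('list1.cdl', 'dbvf1') of the kind the wizard
generates:

    # default database checkpoint (TSL)
    # compares the current result set of the saved query with the expected results
    db_check ("list1.cdl", "dbvf1");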

Runtime Record Check Point.

• Repeat the above 10 steps.
• Type a query over two related tables in the SQL box, e.g.: select Orders.Order_Number,
Flights.Flight_Number from Orders, Flights where
Flights.Flight_Number = Orders.Flight_Number
• Select the Finish button
• Select the hand icon button & select Order No in your application
• Click the Next button
• Select the hand icon button & select Flight No in your application
• Click the Next button
• Select any one of the following check boxes: 1. One match record 2. One or more match
records 3. No match record
• Select the Finish button; the script will be generated.
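The wizard pastes a runtime record checkpoint into the script. A rough sketch of the
generated statement, assuming the TSL function db_record_check and a checklist name
'list1.cvr' (both names are illustrative):

    # runtime record checkpoint (TSL) - verify the order/flight numbers shown
    # on screen match exactly one record in the database
    db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num);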

Synchronization Point

For Obj/Win Properties:

• Open Start->Programs->WinRunner->Sample Applications->Flight1A.
• Open the WinRunner window.
• Create->Record Context Sensitive.
• Insert information for a new order & click on the "Insert Order" button.
• After inserting, click on the "Delete" button.


• Stop recording & save the file.
• Run->Run from Top: gives your results.

Without Synchronization:

• Settings->General Options->click on the "Run" tab. Set the "Timeout for checkpoints
& CS statements" value to 10000. Follow steps 1 to 7; the test displays an error
message that the "Delete" button is disabled.

With Synchronization:

• Keep the timeout value at 1000 only.
• Go to the test script file and place the insertion point after the "Insert Order"
button-press statement.
• Create->Synchronization Point->For Object/Window Property.
• Click on the "Delete Order" button & select the enabled property; click on "Paste".
• It inserts the synchronization statement.
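The pasted synchronization statement is an obj_wait_info call. A minimal sketch, assuming
the button's logical name is "Delete Order":

    # wait up to 10 seconds for the Delete Order button to become enabled
    obj_wait_info ("Delete Order", "enabled", 1, 10);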

For Obj/Win Bitmap:

• Create->Record Context Sensitive.
• Insert information for a new order & click on the "Insert Order" button.
• Stop recording & save the file.
• Go to the TSL script and place the insertion point just before the data is inserted
into "Date of Flight".
• Create->Synchronization Point->For Object/Window Bitmap.
• (Make sure the Flight Reservation form is empty.) Click on the "Date of Flight" text box.
• Run->Run from Top; the results are displayed. Note: keep the timeout value at 1000.

Get Text: From Screen Area:

(Note: checking whether the order number increases whenever an order is created)

• Open Flight1A; Analysis->Graphs (keep it open).
• Create->Get Text->From Screen Area.
• Capture the number of tickets sold; right click & close the graph.
• Now insert a new order and open the graph (Analysis->Graphs).
• Go to the WinRunner window, Create->Get Text->From Screen Area.
• Capture the number of tickets sold and right click; close the graph.
• Save the script file.
• Add the following script: if (text2 == text1) tl_step ("text comparison", 0, "updated");
else tl_step ("text comparison", 1, "not updated");
• Run->Run from Top to see the results.

Get Text: For Object/Window:

• Open a "Calc" application in two windows (Assuming two are two versions)
• Create->get text->for Obj/Window
• Click on some button in one window
• Stop recording
• Repeat 1 to 4 for Capture the text of same object from another "Calc" application.


• Add the following TSL (note: change "text" to text1 & text2 in each captured statement):
if (text1 == text2) report_msg ("correct " & text1); else report_msg ("incorrect " & text2);
• Run & see the results.

Using GUI-Spy:

Using the GUI Spy, you can view and verify the properties of any GUI object in the selected
application.

• Tools->GUI Spy…
• Select Spy On (select Object or Window).
• Select the hand icon button.
• Point to the object or window & press Ctrl_L + F3.
• You can view and verify the properties.

Using Virtual Object Wizard:

Using the Virtual Object Wizard, you can assign a bitmap to a standard object class, define
the coordinates of that object, and assign it a logical name.

• Tools->Virtual Object Wizard.
• Click the Next button.
• Select a standard class for the virtual object, e.g. class: push_button.
• Click the Next button.
• Click the Mark Object button.
• Drag the cursor to mark the area of the virtual object.
• Click the Next button.
• Assign the logical name; this name will appear in the test script when you record on
the object.
• Select the Yes or No check box.
• Click the Finish button.
• Go to the WinRunner window & Create->Start Recording.
• Do some operations.
• Stop recording.

Using Gui Map Editor:

Using the GUI Map Editor, you can view and modify the properties of any GUI object in the
selected application. To modify an object's logical name in a GUI map file:

• Tools->GUI Map Editor
• Select the Learn button.
• Select the application. A WinRunner message box asks "do you want to learn all
objects within the window"; select the 'Yes' button.
• Select a particular object and select the Modify button.
• Change the logical name & click the 'OK' button.


• Save the File

To find an object in a GUI map file:

• Choose Tools > GUI Map Editor.


• Choose View > GUI Files.
• Choose File > Open to load the GUI map file.
• Click Find. The mouse pointer turns into a pointing hand.
• Click the object in the application being tested. The object is highlighted in the GUI
map file.

To highlight an object in a Application:

• Choose Tools > GUI Map Editor.


• Choose View > GUI Files.
• Choose File > Open to load the GUI map file.
• Select the object in the GUI map file
• Click Show. The object is highlighted in the Application.

Data Driver Wizard

• Start->Programs->WinRunner->Sample Applications->Flight 1A
• Open the Flight Reservation application
• Go to the WinRunner window
• Create->Start Recording
• Select File->New Order, insert the fields; click the Insert Order button
• Tools->Data Table; enter different customer names in one column and numbers of
tickets in another column.
• By default the two column names are Noname1 and Noname2.
• Tools->Data Driver Wizard
• Click the Next button & select the data table
• Select 'Parameterize the test'; select the 'Line by line' check box
• Click the Next button
• Parameterize each specific value with the column names of the table; repeat for all
• Finally click the Finish button.
• Run->Run from Top
• View the results.
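Behind the wizard, the recorded actions are wrapped in a loop over the data table. A minimal
sketch of the generated structure, assuming a table 'default.xls' with the columns Noname1
(customer name) and Noname2 (tickets):

    # data-driven test loop (TSL)
    table = "default.xls";
    rc = ddt_open (table, DDT_MODE_READ);       # open the data table
    if (rc != E_OK && rc != E_FILE_OPEN)
        pause ("Cannot open the data table.");
    ddt_get_row_count (table, row_count);       # number of data rows
    for (i = 1; i <= row_count; i++)
    {
        ddt_set_row (table, i);                 # move to the next data row
        set_window ("Flight Reservation", 5);
        edit_set ("Name:", ddt_val (table, "Noname1"));    # parameterized values
        edit_set ("Tickets:", ddt_val (table, "Noname2"));
        button_press ("Insert Order");
    }
    ddt_close (table);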

Merge the GUI Files:

Manual Merge

• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open
GUI maps will be closed and all unsaved changes will be discarded; click the 'OK'
button.
• Select Manual Merge. Manual merge enables you to manually add GUI objects
from the source files to the target file.

• To specify the target GUI map file, click the Browse button & select the GUI map file.
• To specify the source GUI map file, click the Add button & select the source GUI map file.
• Click the 'OK' button.
• The GUI Map File Manual Merge Tool opens. Select objects and move them from the
source file to the target file.
• Close the GUI Map File Manual Merge Tool.

Auto Merge

• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open
GUI maps will be closed and all unsaved changes will be discarded; click the 'OK'
button.
• Select Auto Merge as the merge type. If the source GUI map files merge successfully
without conflicts, no further action is needed.
• To specify the target GUI map file, click the Browse button & select the GUI map file.
• To specify the source GUI map file, click the Add button & select the source GUI map file.
• Click the 'OK' button. A message confirms the merge.
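In a test script, a GUI map file (merged or not) is loaded explicitly before the objects it
describes are used. A minimal sketch, assuming an illustrative file path:

    # load a GUI map file so its logical names become available (TSL)
    rc = GUI_load ("c:\\maps\\flight.gui");
    if (rc != E_OK)
        report_msg ("GUI map failed to load, code " & rc);
    # ... run tests that use the map ...
    GUI_close ("c:\\maps\\flight.gui");   # unload when finished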

Manually Retrieve the Records from the Database

• db_connect ("query1", "DSN=Flight32");                      # open a database session
• db_execute_query ("query1", "select * from Orders", rec);   # rec returns the record count
• db_get_field_value ("query1", "#0", "#0");                  # value of the first row, first column
• db_get_headers ("query1", field_num, headers);              # column count and names
• db_get_row ("query1", 5, row_con);                          # contents of row 5
• db_write_records ("query1", "c:\\str.txt", TRUE, 10);       # write up to 10 records to a file

TSL SCRIPTS FOR WEB TESTING

1. web_browser_invoke ( browser, site );

// invokes the browser and opens a specified site. browser The name of browser (IE or
NETSCAPE). site The address of the site.

2. web_cursor_to_image ( image, x, y );

// moves the cursor to an image on a page. image The logical name of the image. x,y The
x- and y-coordinates of the mouse pointer when moved to an image

3. web_cursor_to_label ( label, x, y );

// moves the cursor to a label on a page. label The name of the label. x,y The x- and y-
coordinates of the mouse pointer when moved to a label.

4.web_cursor_to_link ( link, x, y );


// moves the cursor to a link on a page. link The name of the link. x,y The x- and y-
coordinates of the mouse pointer when moved to a link.

5.web_cursor_to_obj ( object, x, y );

// moves the cursor to an object on a page. object The name of the object. x,y The x- and
y-coordinates of the mouse pointer when moved to an object.

6.web_event ( object, event_name [, x , y ] );

// runs an event on a specified object. object The logical name of the recorded object.
event_name The name of an event handler. x,y The x- and y-coordinates of the mouse
pointer when moved to an object

7.web_file_browse ( object );

// clicks a browse button. object A file-type object.

8.web_file_set ( object, value );

// sets the text value in a file-type object. object A file-type object. Value A text string.

9. web_find_text ( frame, text_to_find, result_array [, text_before, text_after, index, show ] );

// returns the location of text within a frame.

10. web_frame_get_text ( frame, out_text [, text_before, text_after, index ] );

// retrieves the text content of a frame.

11. web_frame_get_text_count ( frame, regex_text_to_find , count );

// returns the number of occurrences of a regular expression in a frame.

12. web_frame_text_exists ( frame, text_to_find [, text_before, text_after ] );

// returns a text value if it is found in a frame.

13.web_get_run_event_mode ( out_mode );

// returns the current run mode out_mode The run mode in use. If the mode is FALSE, the
default parameter, the test runs by mouse operations. If TRUE, is specified, the test runs by
events.

14. web_get_timeout ( out_timeout );

// returns the maximum time that WinRunner waits for response from the web. out_timeout
The maximum interval in seconds

15.web_image_click ( image, x, y );


// clicks a hypergraphic link or an image. image The logical name of the image. x,y The x-
and y-coordinates of the mouse pointer when clicked on a hypergraphic link or an image.

16. web_label_click ( label );

// clicks the specified label. label The name of the label.


17. web_link_click ( link );

// clicks a hypertext link. link The name of link.

18. web_link_valid ( name, valid );

// checks whether a URL name of a link is valid (not broken). name The logical name of a
link. valid The status of the link may be valid (TRUE) or invalid (FALSE)

19. web_obj_click ( object, x, y );

// clicks an object. object The logical name of an object. x,y The x- and y-coordinates of the
mouse pointer when clicked on an object.

20. web_obj_get_child_item ( object, table_row, table_column, object_type, index, out_object );

// returns the description of the children in an object.

21. web_obj_get_child_item_count ( object, table_row, table_column, object_type, object_count );

// returns the count of the children in an object.

22. web_obj_get_info ( object, property_name, property_value );

// returns the value of an object property.

23. web_obj_get_text ( object, table_row, table_column, out_text [, text_before, text_after, index] );

// returns a text string from an object.

24. web_obj_get_text_count ( object, table_row, table_column, regex_text_to_find, count );

// returns the number of occurrences of a regular expression in an object.

25. web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

// returns a text value if it is found in an object.

26. web_restore_event_default ( );

// resets all events to their default settings.

27. web_set_event ( class, event_name, event_type, event_status );


// sets the event status.

28. web_set_run_event_mode ( mode );

// sets the event run mode.

29. web_set_timeout ( timeout );

// sets the maximum time WinRunner waits for a response from the web.

30. web_set_tooltip_color ( fg_color, bg_color );

// sets the colors of the WebTest ToolTip.

31. web_sync ( timeout );

// waits for the navigation of a frame to be completed.

32. web_url_valid ( URL, valid );

// checks whether a URL is valid.
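A short sketch tying several of these functions together; the site URL and link name are
illustrative:

    # simple Web test (TSL)
    web_browser_invoke (IE, "http://www.mercuryinteractive.com");  # open the site
    web_sync (60);                          # wait for the frame navigation to finish
    web_link_valid ("Products", valid);     # valid is TRUE if the link is not broken
    if (valid)
        web_link_click ("Products");        # follow the link
    else
        report_msg ("Broken link: Products");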

Load Runner
Load Runner - Introduction

Load Runner is divided up into 3 smaller applications:

The Virtual User Generator allows us to determine what actions we would like our Vusers, or
virtual users, to perform within the application. We create scripts that generate a series of
actions, such as logging on, navigating through the application, and exiting the program.

The Controller takes the scripts that we have made and runs them through a schedule that
we set up. We tell the Controller how many users to activate, when to activate them, and
how to group the users and keep track of them.

The Results and Analysis program gives us all the results of the load test in various forms. It
allows us to see summaries of data, as well as the details of the load test for pinpointing
problems or bottlenecks.

LoadRunner 7 Features & Benefits

New Tuning Module Add-In


The LoadRunner Tuning Module allows customers to isolate and resolve system performance
bottlenecks. Once the application has been stress tested using LoadRunner, the Tuning
Module provides component test libraries and a knowledgebase that help users isolate and
resolve performance bottlenecks.

WAN Emulation Support


This powerful feature set enables LoadRunner to quickly point out the effect of the wide area
network (WAN) on application reliability, performance, and response time. Provided through
technology from Shunra Software, this WAN emulation capability introduces testing for
bandwidth limits, latency, network errors, and more to LoadRunner.

Sun ONE Studio4 IDE Add-in


Mercury Interactive and Sun Microsystems have worked together to integrate LoadRunner
with the Sun ONE Studio4 add-in.

JBuilder for Java IDE Add-in


LoadRunner now works with Borland's JBuilder integrated development environment (IDE)
to create powerful support for J2EE applications. This add-in enables LoadRunner users who
create J2EE applications and services with JBuilder to create virtual users based on source
code within a JBuilder project.

Native ICA Support for Citrix MetaFrame


LoadRunner now supports Citrix's Independent Computing Architecture (ICA) for the testing
of applications being deployed with Citrix MetaFrame. This support is the first native ICA
load testing solution of its kind, jointly developed with Citrix.

Web Transaction Breakdown Monitor for Isolating Performance Problems


Now you can more efficiently isolate performance problems within your architecture.
LoadRunner's Web Transaction Breakdown Monitor splits end-to-end transaction response
times for the client, network, and server and provides powerful drill-down capabilities.

Data Wizard for Data-Driven Testing


LoadRunner's Data Wizard enables you to quickly create data-driven tests and eliminate
manual data manipulation. It connects directly to back-end database servers and imports
desired data into test scripts.

Goal-Oriented Testing with AutoLoad


The new AutoLoad technology allows you to pre-define your testing objectives beyond the
number of concurrent users to streamline the testing process.

Enterprise Java Bean Testing


By testing EJB components with LoadRunner, you can identify and solve problems during the
early stages of application development. As a result, you can optimize performance before
clients have been developed and thereby save time and resources.

XML Support

With LoadRunner's XML support, you can quickly and easily view and manipulate XML data
within the test scripts.

Hosted Virtual Users

How it Works

LoadRunner Hosted Virtual Users complements in-house load testing tools and allows
companies to load test their Web-based applications from outside the firewall using Mercury
Interactive's infrastructure. Customers begin by using LoadRunner Hosted Virtual Users'
simple Web interface to schedule tests and reserve machines on Mercury Interactive's load
farm. At the scheduled time, they select the recorded scripts to be uploaded and start


running the tests on the host machines*. These scripts will emulate the behavior of real
users on the application and generate load on the system.

Through LoadRunner Hosted Virtual Users’ Web interface, testers can view real-time
performance metrics, such as hits per second, throughput, transaction response times and
hardware resource usage (e.g., CPU and memory levels). They also can view performance
metrics gathered by Mercury Interactive’s server monitors and correlate this with end-user
performance data to diagnose bottlenecks on the back end.

The interface to LoadRunner Hosted Virtual Users enables test teams to control the load test
and view tests in progress, no matter their locations. When the test is complete, testers can
analyze results online, as well as download data for further analysis.

*Customers who do not own LoadRunner can download the VUGen component for free to
record their scripts. Likewise, the LoadRunner analysis pack can be downloaded for free.

LoadRunner Hosted Virtual Users gives testers complete control of the testing process while
providing critical real-time performance information, as well as views of the individual
machines generating the load.

Features and Benefits

Provides pre- and post-deployment testing.

At any time in the application lifecycle, organizations can use LoadRunner Hosted Virtual
Users to verify performance and fine-tune systems for greater efficiency, scalability and
availability. The application under test only needs to be accessible via the Web.

Complements in-house solutions to provide comprehensive load testing.

By combining LoadRunner Hosted Virtual Users with Mercury Interactive's LoadRunner or
another in-house load testing tool, operations groups can thoroughly load test their Web
applications and Internet infrastructures from inside and outside the firewall.

Gives customers complete control over all load testing.

Testing groups create the scripts, run the tests and perform their own analyses. They can
perform testing at their convenience and easily access all performance data to quickly
diagnose performance problems.

Provides access to Mercury Interactive's extensive load testing infrastructure.

With LoadRunner Hosted Virtual Users, organizations do not need to invest in additional
hardware, software or bandwidth to increase their testing coverage. Mercury Interactive’s
load testing infrastructure is available 24x7 and consists of load farms located worldwide. As
a result, organizations can generate real-user loads over the Internet to stress their Web-
based applications at any time, from anywhere.


How the Monitors Work

To minimize the impact of the monitoring on the system under test, LoadRunner enables IT
groups to extract data without having to install intrusive capture agents on the monitored
servers. As a result, LoadRunner can be used to monitor the performance of the servers
regardless of the hardware and operating system on which they run. Setup and installation
of the monitors therefore is trivial. Since all the monitoring information is sampled at a low
frequency (typically 1 to 5 seconds) there is only a negligible effect on the servers.

Supported Monitors

Astra LoadTest and LoadRunner support monitors for the following components:

Client-side Monitors

End-to-end transaction monitors - Provide end-user response times, hits per second,
transactions per second

Hits per Second and Throughput

Hits per Second

The Hits per Second graph shows the number of hits on the Web server (y-axis) as a
function of the elapsed time in the scenario (x-axis). This graph can display the whole
scenario, or the last 60, 180, 600 or 3600 seconds. You can compare this graph to the
Transaction Response Time graph to see how the number of hits affects transaction
performance.

Throughput

The Throughput graph shows the amount of throughput on the Web server (y-axis) during
each second of the scenario run (x-axis). Throughput is measured in kilobytes and
represents the amount of data that the Vusers received from the server at any given
second. You can compare this graph to the Transaction Response Time graph to see how the
throughput affects transaction performance.

HTTP Responses

The HTTP Responses per Second graph shows the number of HTTP status codes, which
indicate the status of an HTTP request (for example, the request was successful or the page
was not found), returned from the Web server during each second of the scenario run
(x-axis), grouped by status code.


Load Testing Monitors

Pages Downloaded per Second

• Pages Downloaded per Second - The Pages Downloaded per Second graph shows the
number of Web pages downloaded from the server during each second of the
scenario run. This graph helps you evaluate the amount of load Vusers generate, in
terms of the number of pages downloaded. Like throughput, downloaded pages per
second is a representation of the amount of data that the Vusers received from the
server at any given second.

User-defined Data Point

User Defined Data Points graph allows you to add your own measurements by defining a
data point function in your Vuser script. Data point information is gathered each time the
script executes the function or step. The User-Defined Data Point graph shows the average
value of the data points during the scenario run. The x-axis represents the number of
seconds elapsed since the start time of the run. The y-axis displays the average values of
the recorded data point statements.
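For GUI Vusers, such a data point can be emitted from the script itself. A rough sketch,
assuming the TSL function user_data_point is available in the Vuser environment (the
measurement name is illustrative):

    # record a custom measurement once per script iteration
    user_data_point ("orders_in_db", order_count);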

Transaction Monitors

• Transaction Response Time - The Transaction Response Time graph shows the
response time of transactions in seconds (y-axis) as a function of the elapsed time in
the scenario (x-axis).
• Transactions per Second (Passed) - The Transactions per Second (Passed) graph
shows the number of successful transactions performed per second (y-axis) as a
function of the elapsed time in the scenario (x-axis).
• Transactions per Second (Failed) - The Transactions per Second (Failed) graph shows
the number of failed transactions per second (y-axis) as a function of the elapsed
time in the scenario (x-axis).

Virtual User Status

The monitor's Runtime graph provides information about the status of the Vusers running in
the current scenario on all host machines. The graph shows the number of running Vusers,
while the information in the legend indicates the number of Vusers in each state.

The Status field of each Vuser displays the current status of the Vuser. The following
table describes each Vuser status.

• Running - The total number of Vusers currently running on all load generators.


• Ready - The number of Vusers that completed the initialization section of the script
and are ready to run.
• Finished - The number of Vusers that have finished running. This includes both
Vusers that passed and failed.
• Error - The number of Vusers whose execution generated an error.

Web Transaction Breakdown Graphs

• DNS Resolution - Displays the amount of time needed to resolve the DNS name to
an IP address, using the closest DNS server. The DNS Lookup measurement is a
good indicator of problems in DNS resolution, or problems with the DNS server.
• Connection Time - Displays the amount of time needed to establish an initial
connection with the Web server hosting the specified URL. The connection
measurement is a good indicator of problems along the network. It also indicates
whether the server is responsive to requests.
• Time To First Buffer - Displays the amount of time that passes from the initial HTTP
request (usually GET) until the first buffer is successfully received back from the Web
server. The first buffer measurement is a good indicator of Web server delay as well
as network latency.
• Server and Network Time - The Time to First Buffer Breakdown graph also displays
each Web page component's relative server and network time (in seconds) for the
period of time until the first buffer is successfully received back from the Web server.
If the download time for a component is high, you can use this graph to determine
whether the problem is server- or network-related.
• Receive Time - Displays the amount of time that passes until the last byte arrives
from the server and the downloading is complete. The Receive measurement is a
good indicator of network quality (look at the time/size ratio to calculate the receive
rate).
• Client Time - Displays the average amount of time that passes while a request is
delayed on the client machine due to browser think time or other client-related
delays.
• Error Time - Displays the average amount of time that passes from the moment an
HTTP request is sent until the moment an error message (HTTP errors only) is
returned.
• SSL Handshaking Time - Displays the amount of time taken to establish an SSL
connection (includes the client hello, server hello, client public key transfer, server
certificate transfer, and other stages). The SSL Handshaking measurement is only
applicable for HTTPS communications.
• FTP Authentication - Displays the time taken to authenticate the client. With FTP, a
server must authenticate a client before it starts processing the client's commands.
The FTP Authentication measurement is only applicable for FTP protocol
communications.

Server Monitors

NT/UNIX/Linux monitors - Provide hardware, network and operating system performance
metrics, such as CPU, memory and network throughput.

The following list describes the recommended objects to be monitored during a load test:


ASP Server
Cache
HTTP Content Index
Internet Information Service Global
Logical Disk
Memory
Physical Disk
Processor
Server

Server

• Debugging Requests - Number of debugging document requests.


• Errors during Script Runtime - Number of requests failed due to runtime errors.
• Errors from ASP Preprocessor - Number of requests failed due to preprocessor
errors.
• Errors from Script Compilers - Number of requests failed due to script compilation
errors.
• Errors/Sec - The number of errors per second.
• Memory Allocated - The total amount of memory, in bytes, currently allocated by
Active Server Pages
• Request Bytes In Total - The total size, in bytes, of all requests.
• Request Bytes Out Total - The total size, in bytes, of responses sent to clients. This
does not include standard HTTP response headers.
• Request Execution Time - The number of milliseconds required to execute the most
recent request.
• Request Wait Time - The number of milliseconds the most recent request was waiting
in the queue.
• Requests Disconnected - The number of requests disconnected due to
communication failure.
• Requests Executing - The number of requests currently executing.
• Requests Failed Total - The total number of requests failed due to errors,
authorization failure and rejections.
• Requests Not Authorized - The number of requests failed due to insufficient access
rights.
• Requests Succeeded - The number of requests that were executed successfully.
• Requests Timed Out - The number of requests that timed out.

Cache

• Async Copy Reads/Sec - The frequency of reads from cache pages that involve a
memory copy of the data from the cache to the application's buffer. The application
will regain control immediately, even if the disk must be accessed to retrieve the
page.
• Async Data Maps/Sec - The frequency that an application uses a file system, such as
NTFS or HPFS, to map a page of a file into the cache to read because it does not
wish to wait for the cache to retrieve the page if it is not in main memory.
• Async Fast Reads/Sec - The frequency of reads from cache pages that bypass the
installed file system and retrieve the data directly from the cache. Normally, file I/O
requests will invoke the appropriate file system to retrieve data from a file. This


path, however, permits direct retrieval of cache data without file system involvement,
as long as the data is in the cache. Even if the data is not in the cache, one
invocation of the file system is avoided. If the data is not in the cache, the request
(application program call) will not wait until the data has been retrieved from disk,
but will get control immediately.
• Fast Reads/Sec - The frequency of reads from cache pages that bypass the installed
file system and retrieve the data directly from the cache. Normally, file I/O requests
invoke the appropriate file system to retrieve data from a file. This path, however,
permits direct retrieval of cache data without file system involvement if the data is in
the cache. Even if the data is not in the cache, one invocation of the file system is
avoided.
• Lazy Write Flushes/Sec - The frequency with which the cache's Lazy Write thread has
written to disk. Lazy Writing is the process of updating the disk after the page has
been changed in memory. In this way, the application making the change to the file
does not have to wait for the disk write to be completed before proceeding. More
than one page can be transferred on each write operation.

HTTP Content Index

• %Cached Hits - Percentage of queries found in the query cache.


• %Cache Misses - Percentage of queries not found in the query cache.
• Active Queries - Current number of running queries.
• Cache Items - Number of completed queries in cache.
• Current Requests Queued - Current number of query requests queued.
• Queries per Minute - Number of queries per minute.
• Total Queries - Total number of queries run since service start.
• Total Requests Rejected - Total number of query requests rejected.

Internet Information Service Global

• Cache Hits % - The ratio of cache hits to all cache requests.


• Cache Misses - The total number of times a file open, directory listing or service-
specific object request was not found in the cache.
• Cached Files Handles - The number of open file handles cached by all of the Internet
Information Services.
• Current Blocked Async I/O Requests - Current requests temporarily blocked due to
bandwidth throttling settings.
• Directory Listings - The number of directory listings cached by all of the Internet
Information Services.
• Measured Async I/O Bandwidth Usage - Measured bandwidth of asynchronous I/O
averaged over a minute.
• Objects - The number of objects cached by all of the Internet Information Services.
The objects include file handle tracking objects, directory listing objects and service
specific objects.
• Total Allowed Async I/O Requests - Total requests allowed by bandwidth throttling
settings (counted since service startup).
• Total Blocked Async I/O Requests - Total requests temporarily blocked due to
bandwidth throttling settings (counted since service startup).

Logical Disk


• % Disk Read Time - The percentage of elapsed time that the selected disk drive was
busy servicing read requests.
• % Disk Time - The percentage of elapsed time that the selected disk drive was busy
servicing read or write requests.
• % Disk Write Time - The percentage of elapsed time that the selected disk drive was
busy servicing write requests.
• % Free Space - The ratio of the free space available on the logical disk unit to the
total usable space provided by the selected logical disk drive
• Avg. Disk Bytes/Read - The average number of bytes transferred from the disk
during read operations.
• Avg. Disk Bytes/Transfer - The average number of bytes transferred to or from the
disk during write or read operations.

Memory

• % Committed Bytes in Use - The ratio of the Committed Bytes to the Commit Limit.
This represents the amount of available virtual memory in use. Note that the Commit
Limit may change if the paging file is extended. This is an instantaneous value, not
an average.
• Available Bytes - Displays the size of the virtual memory currently on the Zeroed,
Free and Standby lists. Zeroed and Free memory is ready for use, with Zeroed
memory cleared to zeros. Standby memory is memory removed from a process's
Working Set but still available. Notice that this is an instantaneous count, not an
average over the time interval.
• Cache Bytes - Measures the number of bytes currently in use by the system cache.
The system cache is used to buffer data retrieved from disk or LAN. In addition, the
system cache uses memory not in use by active processes in the computer.
• Cache Bytes Peak - Measures the maximum number of bytes used by the system
cache. The system cache is used to buffer data retrieved from disk or LAN. In
addition, the system cache uses memory not in use by active processes in the
computer.
• Cache Faults/Sec - Cache faults occur whenever the cache manager does not find a
file's page in the immediate cache and must ask the memory manager to locate the
page elsewhere in memory or on the disk, so that it can be loaded into the
immediate cache.

Physical Disk

• % Disk Read Time - The percentage of elapsed time that the selected disk drive is
busy servicing read requests.
• % Disk Time - The percentage of elapsed time that the selected disk drive is busy
servicing read or write requests.
• % Disk Write Time - The percentage of elapsed time that the selected disk drive is
busy servicing write requests.
• Avg. Disk Bytes/Read - The average number of bytes transferred from the disk
during read operations.
• Avg. Disk Bytes/Transfer - The average number of bytes transferred to or from the
disk during write or read operations.
• Avg. Disk Bytes/Write - The average number of bytes transferred to the disk during
write operations.


• Avg. Disk Queue Length - The average number of both read and write requests that
were queued for the selected disk during the sample interval.

Processor

• % DPC Time - The percentage of elapsed time that the Processor spent in Deferred
Procedure Calls (DPC). When a hardware device interrupts the Processor, the
Interrupt Handler may elect to execute the majority of its work in a DPC. DPCs run at
lower priority than Interrupts. This counter can help determine the source of
excessive time being spent in Privileged Mode.
• % Interrupt Time - The percentage of elapsed time that the Processor spent handling
hardware Interrupts. When a hardware device interrupts the Processor, the Interrupt
Handler will execute to handle the condition, usually by signaling I/O completion and
possibly issuing another pending I/O request. Some of this work may be done in a
DPC (see % DPC Time.)
• % Privileged Time - The percentage of processor time spent in Privileged Mode in
non-idle threads. The Windows NT service layer, the Executive routines, and the
Windows NT Kernel execute in Privileged Mode. Device drivers for most devices other
than graphics adapters and printers also execute in Privileged Mode.
• % Processor Time - Processor Time is expressed as a percentage of the elapsed time
that a processor is busy executing a non-idle thread. It can be viewed as the fraction
of the time spent doing useful work. Each processor is assigned an idle thread in the
idle process that consumes those unproductive processor cycles not used by any
other threads.
• % User Time - The percentage of processor time spent in User Mode in non-idle
threads. All application code and subsystem code execute in User Mode. The graphics
engine, graphics device drivers, printer device drivers and the window manager also
execute in User Mode. Code executing in User Mode cannot damage the integrity of
the Windows NT Executive, Kernel, and device drivers. Unlike some early operating
systems, Windows NT uses process boundaries for subsystem protection in addition
to the traditional protection of User and Privileged modes.

Server

• Blocking Requests Rejected - The number of times the server has rejected blocking
Server Message Blocks (SMBs) due to insufficient count of free work items. May
indicate whether the maxworkitem or minfreeworkitems server parameters need
tuning.
• Bytes Received/Sec - The number of bytes the server has received from the network.
This value indicates how busy the server is.
• Bytes Total/Sec - The number of bytes the server has sent to and received from the
network. This value provides an overall indication of how busy the server is.
• Bytes Transmitted/Sec - The number of bytes the server has sent on the network.
This value indicates how busy the server is.
• Context Blocks Queued/Sec - The rate that work context blocks had to be placed on
the server's FSP queue to await server action.
• Errors Access Permissions - The number of times file opens on behalf of clients have
failed with STATUS_ACCESS_DENIED. Can indicate whether somebody is randomly
attempting to access files in hopes of accessing data that was not properly protected.


NAVIGATIONAL STEPS FOR LOADRUNNER LAB-EXERCISES

1.Creating Script Using Virtual User Generator

• Start->Program Files->LoadRunner->Virtual User Generator
• Choose File->New
• Select a Vuser type and click the OK button
• The Start Recording dialog box appears
• Beside 'Program to record', click the Browse button and browse for the application
• Choose the working directory
• Start recording into the Vuser_Init section and click the OK button
• After the application appears, change the section to Actions
• Do some actions on the application
• Change the section to Vuser_End and close the application
• Click on the Stop Recording icon in the toolbar of the Vuser Generator
• Insert the start_transaction and end_transaction statements
• Insert the rendezvous point
• Choose Vuser->Run and verify the status of the script at the bottom in the Execution Log
• Choose File->Save (remember the path of the script).

2.Running the script in the Controller with Wizard

• Start->Program Files->LoadRunner->Controller
• Choose the Wizard option and click OK
• Click Next on the welcome screen
• In the host list, click the Add button, mention the machine name and click the Next button
• Select the related script you generated in the Vuser Generator (GUI Vuser script, DB
script, RTE script)
• Select the simulation group list, click the Edit button and change the group name and
number of Vusers
• Click the Next button
• Select the Finish button
• Choose Group->Init or Group->Run or Scenario->Start
• Finally the LoadRunner Analysis graph report appears.

3.Running the script in the Controller with out Wizard

• Start->Program Files->LoadRunner->Controller
• Choose File->New; four windows appear
• Select the Vusers window
• Select Group->Add Group
• The Vuser information box appears
• Select the group name, Vuser quantity and host name
• Select Script, select the Add button and select the path of the script
• Click the OK button
• Choose Group->Init or Group->Run or Scenario->Start
• Select Results->Analyse Results.

4.Creating GUI Vuser Script Using WinRunner (GUI Vuser)


• Start->Program Files->WinRunner->WinRunner
• Choose File->New
• Start recording through Create->Record Context Sensitive
• Invoke the application
• Do some actions on the application
• Select Create->Stop Recording
• Declare the transactions and rendezvous point at the top of the script:
declare_transaction(); declare_rendezvous();
• Identify where the transaction points are to be inserted: start_transaction();
end_transaction();
• Insert the rendezvous point: rendezvous();
• Save the script and remember the path of the script.
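A minimal sketch of such a GUI Vuser script, with transaction and rendezvous names of our
own choosing:

    # GUI Vuser script timing an order insertion (TSL)
    declare_transaction ("insert_order");
    declare_rendezvous ("all_users_ready");

    rendezvous ("all_users_ready");        # wait until all Vusers reach this point
    start_transaction ("insert_order");    # begin timing
    set_window ("Flight Reservation", 10);
    button_press ("Insert Order");
    obj_wait_info ("Delete Order", "enabled", 1, 30);   # order committed
    end_transaction ("insert_order");      # stop timing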

5.Running the script in the Controller with Wizard

• Start->Program Files->LoadRunner->Controller
• Choose the Wizard option and click OK
• Click Next on the welcome screen
• In the host list, click the Add button, mention the machine name and click the Next button
• In the GUI Vuser script list, click the Add button and mention the name of the script
• Click the Next button
• Select the simulation group list, click the Edit button and change the group name and
number of Vusers
• Click the Next button
• Select the Finish button
• Select the Vusers window, select Tools->Options, select the WinRunner tab and set
the path of WinRunner:

Path: Program Files->WinRunner->dat->wrun.ini

• Select the Host window, select Host->Details, select the Vuser Limits tab and select
the GUI-WinRunner check box. Select the WinRunner tab and set the path:

Path: Program Files->WinRunner->dat->wrun.ini

• Choose Group->Init or Group->Run or Scenario->Start
• Finally the LoadRunner Analysis graph report appears.


TestDirector
Introduction

TestDirector, the industry's first global test management solution, helps organizations deploy
high-quality applications more quickly and effectively. Its four modules - Requirements, Test
Plan, Test Lab, and Defects - are seamlessly integrated, allowing for a smooth information
flow between the various testing stages. The completely Web-enabled TestDirector supports
high levels of communication and collaboration among distributed testing teams, driving a
more effective, efficient global application-testing process.

Features in TestDirector 7.5

Web-based Site Administrator


The Site Administrator includes tabs for managing projects, adding users and defining user
properties, monitoring connected users, monitoring licenses and monitoring TestDirector
server information.

Domain Management

TestDirector projects are now grouped by domain. A domain contains a group of related
TestDirector projects, and assists you in organizing and managing a large number of
projects.

Enhanced Reports and Graphs

Additional standard report types and graphs have been added, and the user interface is
richer in functionality. The new format enables you to customize more features.

Version Control

Version control enables you to keep track of the changes you make to the testing
information in your TestDirector project. You can use your version control database for
tracking manual, WinRunner and QuickTest Professional tests in the test plan tree and test
grid.

Collaboration Module

The Collaboration module, available to existing customers as an optional upgrade, allows
you to initiate an online chat session with another TestDirector user. While in a chat session,
users can share applications and make changes.

Features in TestDirector 8.0

TestDirector Advanced Reports Add-in

With the new Advanced Reports Add-in, TestDirector users are able to maximize the value of
their testing project information by generating customizable status and progress reports.
The Advanced Reports Add-in offers the flexibility to create custom report configurations
and layouts, unlimited ways to aggregate and compare data and ability to generate cross-
project analysis reports.

Automatic Traceability Notification

The new traceability automatically traces changes to the testing process entities such as
requirements or tests, and notifies the user via flag or e-mail. For example, when the


requirement changes, the associated test is flagged and tester is notified that the test may
need to be reviewed to reflect requirement changes.

Coverage Analysis View in Requirements Module

The graphical display enables you to analyze the requirements according to test coverage
status and view associated tests - grouped according to test status.

Hierarchical Test Sets

Hierarchical test sets provide the ability to better organize your test run process by grouping
test sets into folders.

Workflow for all TestDirector Modules

The addition of the script editor to all modules enables organizations to customize
TestDirector to follow and enforce any methodology and best practices.

Improved Customization

With a greater number of available user fields, ability to add memo fields and create input
masks users can customize their TestDirector projects to capture any data required by their
testing process. New rich edit option add color and formatting options to all memo fields.

TestDirector Features & Benefits

Supports the entire testing process

TestDirector incorporates all aspects of the testing process - requirements management,
planning, scheduling, running tests, issue management and project status analysis - into a
single browser-based application.

Leverages innovative Web technology

Testers, developers and business analysts can participate in and contribute to the testing
process by working seamlessly across geographic and organizational boundaries.

Uses industry-standard repositories

TestDirector integrates easily with industry-standard databases such as SQL, Oracle, Access
and Sybase.

Links test plans to requirements

TestDirector connects requirements directly to test cases, ensuring that functional
requirements have been covered by the test plan.

Integrates with Microsoft Office


TestDirector can import requirements and test plans from Microsoft Office, preserving your
investment and accelerating your testing process.

Manages manual and automated tests

TestDirector stores and runs both manual and automated tests, and can help jumpstart a
user’s automation project by converting manual tests to automated test scripts.

Accelerates testing cycles

TestDirector's TestLab manager accelerates the test execution cycles by scheduling and
running tests automatically—unattended, even overnight. The results are reported into
TestDirector’s central repository, creating an accurate audit trail for analysis.

Supports test runs across boundaries

TestDirector allows testers to run tests on their local machines and then report the results to
the repository that resides on a remote server.

Integrates with internal and third-party tools

Documented COM API allows TestDirector to be integrated both with internal tools (e.g.,
WinRunner and LoadRunner) and external third-party lifecycle applications.

Enables structured information sharing

TestDirector controls the information flow in a structured and organized manner. It defines
the role of each tester in the process and sets the appropriate permissions to ensure
information integrity.

Provides Analysis and Decision Support Tools

TestDirector's integrated graphs and reports help analyze application readiness at any point
in the testing process. Using information about requirements coverage, planning progress,
run schedules or defect statistics, managers are able to make informed decisions on
whether the application is ready to go live.

Provides easy defect reporting

TestDirector offers a defect tracking process that can identify similar defects in a database.

Generates customizable reports

TestDirector features a variety of customizable graphs and reports that provide a snapshot
of the process at any time during testing. You can save your favorite views to have instant
access to relevant project information.

Supports decision-making through analysis


TestDirector helps you make informed decisions about application readiness through dozens
of reports and analysis features.

Provides Anytime, Anywhere Access to Testing Assets

Using TestDirector's Web interface, testers, developers and business analysts can participate
in and contribute to the testing process by collaborating across geographic and
organizational boundaries.

Provides Traceability Throughout the Testing Process

TestDirector links requirements to test cases, and test cases to issues, to ensure traceability
throughout the testing cycle. When requirement changes or the defect is fixed, the tester is
notified of the change.

Integrates with Third-Party Applications

Whether an individual uses an industry standard configuration management solution,
Microsoft Office or a homegrown defect management tool, any application can be integrated
into TestDirector. Through the open API, TestDirector preserves the users' investment in
into TestDirector. Through the open API, TestDirector preserves the users’ investment in
their existing solutions and enables them to create an end-to-end lifecycle-management
solution.


Facilitates a Consistent and Repeatable Testing Process

By providing a central repository for all testing assets, TestDirector facilitates the adoption
of a more consistent testing process, which can be repeated throughout the application
lifecycle or shared across multiple applications or lines of business (LOB).

Testing Process

Test management is a method for organizing application test assets—such as test
requirements, test plans, test documentation, test scripts or test results—to enable easy
accessibility and reusability. Its aim is to deliver quality applications in less time.

The test management process is the main principle behind Mercury Interactive's
TestDirector. It is the first tool to capture the entire test management process—
requirements management, test planning, test execution and defect management—in one
powerful, scalable and flexible solution.

Managing Requirements

Requirements describe what the users or the system needs. Requirements management,
however, is a structured process for gathering, organizing, documenting and managing the
requirements throughout the project lifecycle. Too often, requirements are neglected during
the testing effort, leading to a chaotic process of fixing what you can and accepting that
certain functionality will not be verified. In many organizations, requirements are
maintained in Excel or Word documents, which makes it difficult for team members to share
information and to make frequent revisions and changes.

TestDirector supports requirements-based testing and provides the testing team with a
clear, concise and functional blueprint for developing test cases. Requirements are linked to
tests—that is, when the test passes or fails, this information is reflected in the requirement
records. You can also generate a test based on a functional requirement and instantly create
a link between the requirement, the relevant test and any defects that are uncovered during
the test run.

Test Planning

Based on the requirements, testers can start building the test plan and designing the actual
tests. Today, organizations no longer wait to start testing at the end of the development
stage, before implementation. Instead, testing and development begin simultaneously. This
parallel approach to test planning and application design ensures that testers build a
complete set of tests that cover every function the system is designed to perform.

TestDirector provides a centralized approach to test design, which is invaluable for gathering
input from different members of the testing team and providing a central reference point for
all of your future testing efforts. In the Test Plan module, you can design tests—manual and
automated—document the testing procedures and create quick graphs and reports to help
measure the progress of the test planning effort.

Running Tests

After you have addressed the test design and development issues and built the test plan,
your testing team is ready to start running tests.

TestDirector can help configure the test environment and determine which tests will run on
which machines. Most applications must be tested on different operating systems, different
browser versions or other configurations. In TestDirector's Test Lab, testers can set up
groups of machines to most efficiently use their lab resources.

TestDirector can also schedule automated tests, which saves testers time by running
multiple tests simultaneously across multiple machines on the network. Tests with
TestDirector can be scheduled to run unattended, overnight or when the system is least in
demand for other tasks. For both manual and automated tests, TestDirector can keep a
complete history of all test runs. By using this audit trail, testers can easily trace changes to
tests and test runs.

Managing Defects

The keys to creating a good defect management process are setting up the defect workflow
and assigning permission rules. With TestDirector, you can clearly define how the lifecycle of
a defect should progress, who has the authority to open a new defect, who can change a
defect's status to "fixed" and under which conditions the defect can be officially closed.
TestDirector will also help you maintain a complete history and audit trail throughout the
defect lifecycle.

Managers often decide whether the application is ready to go live based on defect analysis.
By analyzing the defect statistics in TestDirector, you can take a snapshot of the application
under test and see exactly how many defects you currently have, their status, severity,
priority, age, etc. Because TestDirector is completely Web-based, different members of the
team can have instant access to defect information, greatly improving communication in
your organization and ensuring everyone is up to date on the status of the application.

Silk Test
Introduction

Silk Test is a tool specifically designed for doing REGRESSION AND FUNCTIONALITY
testing. It is developed by Segue Software Inc. Silk Test is the industry’s leading functional
testing product for e-business applications, whether Windows-based, Web, Java, or traditional
client/server-based. Silk Test also offers test planning, management, direct database access
and validation, the flexible and robust 4Test scripting language, a built-in recovery system
for unattended testing, and the ability to test across multiple platforms, browsers and
technologies.

You have two ways to create automated tests using SilkTest:

1. Use the Record Testcase command to record actions and verification steps as you
navigate through the application.
2. Write the testcase manually using the Visual 4Test scripting language.

1. Record Testcase

The Record / Testcase command is used to record actions and verification steps as you
navigate through the application. Tests are recorded in an object-oriented language called
Visual 4Test. The recorded test reads like a logical trace of all of the steps that were
completed by the user. The Silk Test point-and-click verification system allows you to
record a verification step by selecting from a list of properties that are appropriate for the
type of object being tested. For example, you can verify the text stored in a text field.

2. Write the Testcase manually

We can write tests that are capable of accomplishing many variations on a test. The key
here is re-use. A test case can be designed to take parameters including input data and
expected results. This "data-driven" testcase is really an instance of a class of test cases
that performs certain steps to drive and verify the application-under-test. Each instance
varies by the data that it carries. Since far fewer tests are written with this approach,
changes in the GUI will result in reduced effort in updating tests. A data-driven test design
also allows for the externalization of testcase data and makes it possible to divide the
responsibilities for developing testing requirements and for developing test automation. For
example, it may be that a group of domain experts create the Testplan Detail while another
group of test engineers develop tests to satisfy those requirements.
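
As an illustration, a data-driven testcase might look like the following minimal sketch; the
Login window, its fields and the Main window are hypothetical declarations, not from any
real frame:

    testcase VerifyLogin (STRING sUser, STRING sPassword, BOOLEAN bShouldPass)
        // drive the application-under-test to the login window
        Login.SetActive ()
        Login.UserName.SetText (sUser)
        Login.Password.SetText (sPassword)
        Login.OK.Click ()
        // compare the outcome with the expected result carried by the data
        Verify (Main.Exists (), bShouldPass)

Each call such as VerifyLogin ("guest", "guest", TRUE) is then one instance of the test;
only the data varies, not the script logic.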

In a script file, an automated testcase ideally addresses one test requirement. Specifically, a
4Test function that begins with the test case keyword and contains a sequence of 4Test
statements. It drives an application to the state to be tested, verifies that the application
works as expected, and returns the application to its base state.

A script file is a file that contains one or more related testcases. A script file has a .t
extension, such as find.t.
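
A minimal testcase, then, might look like the sketch below, assuming a test frame that
declares a TextEditor main window and its Find dialog (hypothetical names):

    testcase FindDialogOpens ()
        // drive the application to the state to be tested
        TextEditor.SetActive ()
        TextEditor.SearchMenu.Find.Pick ()
        // verify the state (the heart of the testcase)
        Verify (Find.Exists (), TRUE)
        // return the application to its base state
        Find.Cancel.Click ()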

Other Segue products

The Silk products include:

• Silk Test for functional and regression testing
• Silk Performer for load and performance simulation
• Silk Pilot for functional and regression testing of CORBA and EJB servers
• Silk Radar for automated defect tracking
• Silk Vision for enterprise application health monitoring
• Silk Express for a scalability and load testing consulting solution

Silk Test Features:

Some of the features of Silk Test are given below.

• Easy to use interface
• Built-in recovery system
• The object-oriented concept
• Record & Play
• Multi-kind application testing
• Automatic generation of results
• Browser & platform independent
• 24 x 365 unattended testing
• Distributed access to test results
• Cross-platform Java testing
• Testing across multiple browsers and Windows versions
• Support for HTML, XML, JavaScript, Java, ActiveX, Windows controls, and Visual Basic
• Single-recording testing for cross-platform Java testing with the Silk Bean
• Testing against over 35 databases
• Link Tester
• Validation of advanced database structures and techniques
• Creation of Test Plans, Test Frames and Test Suites
• Integration with other Silk products

Silk Test Architecture

Normal use of an application consists of a person manipulating a keyboard and mouse to
initiate application operations. The person is said to be interacting with the GUI (Graphical
User Interface). During Silk Test testing, Silk Test interacts with the GUI to submit
operations to the application automatically. Thus Silk Test can simulate the actions of a
person who is exercising all the capabilities of an application and verifying the results of
each operation. The simulated user (Silk Test) is said to be driving the application. The
application under test reacts to the simulated user exactly as it would react to a human
user. Silk Test consists of two distinct software components that execute in separate
processes:

Silk Test host software

The Silk Test host software is the program you use to develop, edit, compile, run and debug
your 4Test scripts and test plans. This manual refers to the system that runs this program
as the host machine or the Silk Test machine.

The Agent

The 4Test Agent is the software process that translates the commands in your 4Test scripts
into GUI-specific commands. In other words, it is the Agent that actually drives and
monitors the application you are testing. One Agent can run locally on the host machine.
In a networked environment, any number of Agents can run on remote machines. This
manual refers to the systems that run remote Agents as target machines. In a
client/server environment, Silk Test drives the client application by means of an Agent
process running on each application's machine. The application then drives the server just
as it always does. Silk Test is also capable of driving the GUI belonging to a server or of
directly driving a server database by running scripts that submit SQL statements to the
database. These methods of directly manipulating the server application are intended to
support testing in which the client application drives the server.

Limitations of Silk Test:

Some of the limitations of Silk Test are given below:

• SilkTest may not recognize some objects in a window / page due to some technical
reasons.
• SilkTest may not recognize some window frames.
• The 'tag' value may get changed frequently.
• Sometimes it will be difficult to activate some windows.
• It may be necessary to make some modifications if testing is shifted to another
browser / operating system.
• In web-based applications, SilkTest will sometimes treat links as simple text.

System Requirements :

The minimum requirements a system needs to run Silk Test are given below:

• Windows NT, Windows 95, 98 or 2000
• Pentium 466 MHz or better processor (application dependent)
• 32 MB RAM
• 60 MB hard disk

Supported Environments:

• Netscape Navigator 4.x
• Internet Explorer 4 and 5
• ActiveX, Visual Basic 5 and 6
• Java JDK 1.3
• Swing 1.1
• Microsoft Web browser control

The Automated Testing Process

The testing process has these four steps:

• Creating a testplan (if you are using the testplan editor)
• Recording a test frame
• Creating testcases
• Running testcases and interpreting their results

Creating a testplan

If the testplan editor is used, the automated testing process is started by creating a
testplan. A basic testplan is structured as a hierarchical outline and contains:

• Descriptions of individual tests and groups of tests. As many levels of description as
needed can be used.
• Statements that link the test descriptions in the plan to the 4Test routines, called
testcases, that accomplish the actual work of testing.

Recording a test frame

Next, record a test frame, which contains descriptions, called window declarations, of each
of the GUI objects in your application. A window declaration specifies a logical, cross-
platform name for a GUI object, called the identifier, and maps the identifier to the object’s
actual name, called the tag. In addition, the declaration indicates the type of the object,
called its class.

Creating testcases

The 4Test commands in a testcase collectively perform three distinct actions :

• Drive the application to the state to be tested.
• Verify the state (this is the heart of the testcase).
• Return the application to its original state.

The powerful object-oriented recorder can be used to automatically capture these 4Test
commands to interact with the application, or to write the 4Test code manually if one is
comfortable with programming languages. For maximum ease and power, these two
approaches can be combined, recording the basic testcase and then extending it using
4Test's flow of control features, as in the sketch below.
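
For example, a recorded search step can be wrapped in a loop so that one recording
verifies several inputs (the TextEditor and Find declarations and the search data are
hypothetical):

    testcase FindSeveralStrings ()
        LIST OF STRING lsPatterns = {"alpha", "beta", "gamma"}
        INTEGER i
        for i = 1 to ListCount (lsPatterns)
            TextEditor.SetActive ()
            TextEditor.SearchMenu.Find.Pick ()
            Find.FindWhat.SetText (lsPatterns[i])
            Find.Cancel.Click ()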

Running testcases and interpreting results

Next, run one or more testcases, either by running a collection of scripts, called a suite, or,
if you are using the testplan editor, by running specific portions of the testplan. As each
testcase runs, statistics are written to a results file. The results file and its associated
comparison tools allow you to quickly pinpoint the problems in your application.

A Test Frame

The test frame is the backbone that supports the testcases and scripts. It is a file that
contains all the information about the application’s GUI objects that Silk Test needs when
you record testcases. This information minimally consists of a declaration for each GUI
object, but can also include any data that you want to associate with each GUI object, as
well as any new classes and methods that you want to define.

A window declaration specifies a cross-platform, logical name for a GUI object, called the
identifier, and maps the identifier to the object's actual name, called the tag. Because the
testcases use logical names, if the object's actual name changes on the current GUI, on
another GUI, or in a localized version of the application, only the tag in the window
declarations needs to be changed; you don't need to change any of the scripts. Variables,
functions, methods and properties can be added to the basic window declarations recorded
by Silk Test.
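
For illustration, declarations in a test frame might look like this sketch (the application and
its tags are hypothetical):

    window MainWin TextEditor
        tag "Text Editor"
        Menu SearchMenu
            tag "Search"
            MenuItem Find
                tag "Find"

    window DialogBox Find
        tag "Find"
        parent TextEditor
        PushButton Cancel
            tag "Cancel"

Here TextEditor, SearchMenu and Find are the identifiers used in scripts, while the quoted
strings are the tags that map them to the actual GUI objects.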

To record declarations for the main window and menu hierarchy of your application.

1. Start up your application and SilkTest.
2. Select File / New. The New dialog appears.
3. Select the Test Frame radio button and click OK. The New Test Frame dialog is
displayed, allowing you to create a test frame file for an application displayed in the
Application list box.
4. Select the application from the Application list box. If a Web application is being tested,
different fields are seen.
5. Click OK. The new test frame file is created. The file contains the 4Test declarations for
the main window and all its menus, as well as a generic declaration that is valid for each of
the standard message boxes in the application.

Silk Test - Test Plan, Test Case, Test Script

A Test Plan

A testplan is made up of a large amount of information; a structured, hierarchical outline
provides an ideal model for organizing and developing the details of the plan. A testplan
consists of two distinct parts:

• An outline that describes the test requirements
• Statements that connect the outline to the 4Test scripts and testcases that implement
the test requirements.

Using the testplan, we can create and run tests.

To start a new testplan :

• Select File / New
• Select Testplan and click OK.

An empty testplan window opens.

A Test Suite

A Test Suite is a collection of test scripts. Consider a case where we have a set of script
(.t) files. If we want to run these scripts against our application, we have to select the
required testcase or run an entire script file. But after the completion of that script file,
the user has to manually move on to the next script file to run the testcases available in
it. Instead, SilkTest provides a way to select a set of script files and run them at a stretch.
This can be done by creating a new Test Suite file and declaring the needed script files in
that suite file.

To start a new test suite:

1. Select File / New.
2. Select Test Suite and click OK.
3. In that suite file enter the script file names to run continuously.
4. Save that script file.
5. Compile the script file and run it.
6. Now the process of running the scripts will not stop after the completion of the first
script file; instead, it will automatically pass on to the next script file and run the testcases
available there.

Assume a case where there is a folder called silkscripts in the C drive with five test script
files. Here in the suite file, we are calling all the script files instead of running them
separately. The suite file will look like the one given below:

    use "c:\silkscripts\script1.t"
    use "c:\silkscripts\script2.t"
    use "c:\silkscripts\script3.t"
    use "c:\silkscripts\script4.t"
    use "c:\silkscripts\script5.t"

A Test script

A testscript file contains various testcases for various test conditions.

A Testcase

In a script file, a testcase ideally addresses one test requirement. Specifically, a 4Test
function that begins with the testcase keyword and contains a sequence of 4Test
statements. It drives an application to the state to be tested, verifies that the application
works as expected, and returns the application to its base state.

• In the SilkTest tool, select the File -> New option from the menu bar.
• In the resulting 'New' dialog box, there will be options for selecting different kinds
of files.
• Select the file type '4Test script' option.
• It will open a new script file.
• Before starting to write the testcase, declare the necessary files that are to be used
in that script file.
• Start with the keyword 'testcase' followed by the testcase name. The name of the
testcase is whatever is selected by the user. But make sure that by looking at the
name of the testcase, the objective of the testcase is understandable.

• Start the tests from scratch so that SilkTest will start the application and do the
testing from the base state.
• Use conditions / loops if necessary.
• At the end of each and every script, print a statement to know whether the testcase
has achieved its objective or not. The user can make sure that the particular part of
the application is error free by looking at the message you print.
• Try to make the testcases effective and time-saving, say by having the second test
continue from the place where the first test finishes.
• A sample testcase for registering into Yahoo mail is given below.

    testcase registration ()
        Browser.LoadPage ("mail.yahoo.com")
        SignInYahooMail.SetActive ()
        SignInYahooMail.objSignInYahooMail.SignUpNow.Click ()
        sleep (3)
        WelcomeToYahoo.SetActive ()
        WelcomeToYahoo.objWelcomeToYahoo.LastName.SetText ("lastname")
        WelcomeToYahoo.objWelcomeToYahoo.LanguageContent1.Select (5)
        WelcomeToYahoo.objWelcomeToYahoo.ContactMeOccassionallyAbout.Click ()
        WelcomeToYahoo.objWelcomeToYahoo.SubmitThisForm.Click ()
        if RegistrationSuccess.Exists ()
            Print ("Test Pass")
        else
            LogError ("Test Fail")

Silk Test installation tips

• Run the Silk Test setup from the CD or through the network.
• The Silk Test software is available in the
'Firesip\Europa\software\siltest5.0.1\siltest' directory in your Network
Neighborhood.
• Get into the above folder and select the setup.exe file to start the installation.
• During installation, it will ask for the licence file. Set the path
'Firesip\Europa\software\siltest 5.0.1\licence' for the licence.dat file.
• In the installation process, it will ask for the Silk Test / Silk Test Agent only option.
Select the Silk Test option if you are installing this for testing applications on a
stand-alone machine.
• For the 'Will you be testing browsers?' message box, select Yes if you are going to
test web-based applications.
• It will ask for the default browser option. Select the appropriate browser you want
to use for testing the application with Silk Test. Note that you are allowed to select
only one browser option. By default Silk Test goes fine with Netscape browsers.
• After installing, it will open the Silk Test tool with the QuickStart wizard open. The
QuickStart wizard will assist you in creating various Silk files. If you are a first-time
user of SilkTest, then continue with that.

Getting started with the Quickstart Wizard

If you are using Silk Test with the testplan editor, you can use the QuickStart Wizard,
which greatly simplifies the four steps of automated testing.

When you start Silk Test for the first time (or whenever you start it with no open windows),
the QuickStart Wizard is displayed automatically. You can also invoke the wizard at any
time by selecting File / New and clicking the QuickStart Wizard icon.

You can use the QuickStart Wizard to:

1. Create a testplan. You simply name the file (giving it the .pln extension) and its
directory.

2. Create a test frame, which contains descriptions of the GUI objects in your application
that you want to test. As prompted, you simply open your application and open the various
windows and dialogs that you want to test in the application. The wizard automatically
records all the declarations in a file called frame.inc. You don't have to do any coding.

3. Record testcases. You name the testcase and provide a description for the testplan, then
simply record the testcase. Again, you don't have to do any coding. The wizard
automatically saves the testcase in a script (.t) file with the same name as the testplan.

Silk Test - How to run Test Cases

4. Run testcases.

Procedure to use the wizard:

1. Invoke the wizard by selecting File / New and clicking the QuickStart Wizard icon. Now
you will name a new testplan, which will organize and manage your tests.

2. Click Next.

3. Name the file edit.pln and click Next. The next step is to record the test frame, which
defines all the windows, dialogs, menus, and so on that you want to test.

4. To create a new test frame, leave New Test Frame selected and click Next. At this point,
the wizard lists all the open (running and not minimized) applications. If Text Editor is not
open, you can open it now (it is in the directory where you installed Silk Test). After you
open the Text Editor, click on the QuickStart Wizard title bar to see Text Editor added to
the list of applications.

5. Select Text Editor and click Next.
6. The Capture Windows panel displays, describing the procedure.
7. Click Next.

8. Now you simply open a document window and open all the dialogs that you want to test
in the Text Editor. When you place the mouse pointer on a window or dialog, the wizard
records all the declarations that SilkTest needs in a file called frame.inc in the same
directory as your testplan.

9. When you have finished capturing the windows and dialogs in Text Editor, click Return to
Wizard in the Capturing New Windows dialog. Now that you have created your test frame,
you are ready to create a testcase.

10. Click Next twice.

11. Name the test Find Box and enter the description "Verify controls in Find dialog." Click
Next. Your test is now being recorded, as indicated by the Record Status window on your
screen.

12. Now go to Text Editor, select Search / Find to open the Find dialog, place your mouse
pointer over the dialog's title bar, and press Ctrl + Alt to verify its state. The Verify Window
dialog displays. Click OK to verify all properties for the dialog. Close the Find dialog (to
return to your base state), then click Done in the Record Status window. You return to the
wizard and are asked to confirm that the test is what you want.

13. Click Next.
14. Run the test by clicking the Run Test button.

15. The wizard reports the results. You can move the wizard to the side and look at the
results file that is created whenever you run a test.

16. In the wizard, click Next to save your testcase. The testcase is saved in a script (.t)
file with the same name as the testplan (in this case, edit.t).

17. Click Close to close the wizard. You see a window containing the results file from the
test you just ran. In another window is the testplan.

Configuring the settings

• In Silk Test, select the Options -> Extensions menu from the menu bar.
• It will load the Extensions dialog box.
• In that dialog box, check the kind of application you are testing.
• Say, if you are testing a web-based application in the Netscape browser, enable the
Netscape option by checking against it and un-check all the other options.
• Click on the OK button.
• Now click on the Options -> Runtime option.
• Set the 'Use Files' option to point to the SilkTest installed folder. Say, if SilkTest has
been installed on your C:\ drive, then set Use Files = "C:\Program
Files\Segue\SilkTest\extend". This is to use the common include files as per the
application.
• Since we have selected the extensions, the use file will have the declaration
'extend\netscape.inc' of the include file. Now the script is ready to open the
Netscape browser by default and run the scripts.

• From the Windows status bar, go to the Start -> Programs -> Silk Test -> Extension
Enabler option, to declare the same set of extensions (as in step 4) in the 'Extension
Enabler' dialog box.

Exposure to Silk Test IDE

To Start the Silk Test tool

• Select the 'Start' button in the Windows status bar.
• Go to the Programs -> Silk Test -> Silk Test option (with the green color icon).
• It will open the SilkTest tool.
• If you get the QuickStart wizard, close the wizard if you don't need it, or else refer
to the 'Installation tips' to go with the wizard.

To compile & run scripts

• Open the testscript file (with the extension .t).
• Select the Run -> Compile option from the menu bar. Alternatively, we can select
the icon with the 'tick' mark to compile the scripts.
• The files that are associated with the script file will also get compiled. Hence if there
is any problem with the include file, you will get an error by compiling the testscript
file itself.
• To compile a selected file, keep the mouse cursor on that particular file and give the
Run -> Compile option.
• The 'Run' option will get enabled only for the testscript files. We cannot run an
include file or a test frame.
• To run a testcase selectively, compile the testscript file and click the "=>t" icon in
the tool bar.
• To run the entire set of testcases in a testscript file, click the "==>" icon or select
the Run -> Run option.
• If there is any syntax error in the script file, it will be shown when you move to the
next line of the code. But the other errors are known only at the time of compilation
or during the run.

Silk Test - new plug & play testcase (using Action)

To open a new or existing files

In Silktest,

• Select the File -> New menu from the menu bar, or select the 'white icon' in the
left top corner of the window.
• It will ask for various kinds of files to open. Select the '4Test include file' to declare
the window objects from the application.
• The '4Test script file' option is to open the script file where we will be writing the
testscript.
• The above two files are the more important ones for building the scripts.
• Open the application you want to test. If you are going to test the Yahoo site, then
open the browser and load the page you want to start testing.
• The page from which you start testing the application will be assumed to be the
'BaseState'. We can even explicitly declare the window base state.

• The Test Script file will be used only after creating the include file. We will be using
the include file to write the script file. Hence we have to declare the include file that
we are calling in the testscript files.
• To open an existing file, select File -> Open and select the existing file.

To write a new plug & play testcase (using Action)

• This example is to write the script for logging into the Yahoo site.
• Start Silk Test by selecting it from the 'Start' menu.
• Configure the settings as given in lab I.
• Click the 'File -> New' menu, and select the '4Test script file' option.
• Click on the OK button.
• Start with the keyword 'testcase Action ()' and press the Enter key on your
keyboard. testcase is the default keyword for any testcase and the name Action is
the name of the testcase. The testcase name can be anything, but it is advisable to
name it clearly so that it represents the functionality of the test.
• Now start writing the testcase (follow the instructions below).
• Open the application in parallel, i.e., open the browser in which the application has
to run (say Netscape).
• Go to Silk Test.
• Click the Record -> Actions menu from the menu bar.
• It will load the 'Record Actions' dialog box.
• Keeping the dialog box as it is, go to the application and perform whatever actions
you want.
• SilkTest will record the events you do sequentially and you can view them in the
'Record Actions' dialog.
• After completing your task (up to whatever you want to record), click on the 'Paste
to Editor' button in the 'Record Actions' dialog box.
• Then click the Close button to close the Record Actions dialog box and go to your
application.
• Now the recorded code will be readily available in the testscript editor, inside the
testcase.
• Now delete the keyword 'recording' in the first line of the recorded code.
• Now, select the entire recorded code by keeping the mouse arrow at the leftmost
dot (.) in your editor at the first line, and drag it till the end.
• Right-click on the selected code and select the 'Move Left' option.
• The code is ready now.
• Now, compile the code from the 'Run -> Compile' option and run the script by
selecting the 'Run -> Run' menu.
• Now the testcase will automatically start the application, perform the recorded
events and report the results.
• The sample recorded testcase for the Yahoo login looks like this:
    testcase Action ()
        // [-] recording
        BrowserPage.SetActive ()
        Browser.Location.SetText ("www.yahoo.com")
        Browser.Location.TypeKeys ("")
        Yahoo.HtmlLink ("Mail|#26|$http:??www.yahoo.com?r?m2").Click ()
        BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetPosition (1, 1)
        BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetText ("username")
        BrowserPage.HtmlTextField ("Yahoo!ID:|#1").TypeKeys ("")
        BrowserPage.HtmlTextField ("Password:|#2").SetText ("password")
        BrowserPage.HtmlPushButton ("Sign In|#1").Click ()

The 4Test Language

The 4Test language is an object-oriented fourth-generation language (4GL) designed
specifically with the needs of the QA professional in mind. 4Test's powerful features are
organized into three basic kinds of functionality:

• A robust library of object-oriented classes and methods that specify how a testcase
can interact with an application's GUI objects.
• A set of statements, operators, and data types that you use to add structure and
logic to a recorded testcase.
• A library of built-in functions for performing common support tasks.

Note: This section provides a high-level look at 4Test.

Silk Test - Running the Silkscripts

The basic Silk scripts will be in two forms: one as an include file and the other as a script
file.

• The include file, with the extension *.inc, can be used for the declaration of window
names, window objects, variables, constants, structures and classes. The core
objects of the scripts lie here.
• The script file will be used for writing scripts. It will have the extension *.t. The body
of the scripts will be defined here, i.e., the testcases that meet various test
conditions will be written in the script file.

The script file (*.t) can also be used for declaring objects and the include file (*.inc) for
writing testcases, but to make the code clear we use different files for different purposes.
If no testcase is written in a file (include), then the include file can be compiled but cannot
be run. It will show an error that the file does not contain any testcases. Only a file with a
testcase present will be allowed to run.

Before running the scripts, a separate declaration file has to be written (for declaring the
objects) and the script file (for writing scripts using that declaration file), and both have to
be compiled.
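
As a minimal sketch of this split (the window name, tag and file names are hypothetical),
the include file holds the declarations and the script file pulls them in with a use statement:

    // frame.inc -- declarations only; compiles but cannot be run
    window BrowserChild Yahoo
        tag "Yahoo!*"

    // login.t -- script file containing the runnable testcase
    use "frame.inc"

    testcase CheckHomePage ()
        Verify (Yahoo.Exists (), TRUE)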

The steps to be followed for running the scripts are as below.

• Open the Silk Test tool.
• Open the script (*.t) file that has to be run.
• Compile the script by selecting the Run -> Compile menu from the menu bar (or)
from the compile icon.
• It will compile that particular script and the other related files called by that script.
The user can confirm this by looking at the progress status (in yellow color) in the
bottom-right corner of the SilkTest tool.
• If there is any error, the error details are displayed at compile time. The user has to
make the necessary changes.
• Then, select Run -> Testcase from the menu bar (or) else select the Run icon.
• The testcases can be run selectively or all at a stretch.
• If the selective method is chosen, it will ask for the testcase to be run from a list of
testcases.
• After selecting the testcase and starting the run, SilkTest will automatically start the
application and begin the test from the base state.

Recording the Events / Action

Writing scripts in SilkTest involves writing steps of commands, with declarations of window
names and their objects before that. To avoid these difficulties and to make the process
easier (this is an alternative to writing the scripts line by line), SilkTest provides a special
feature of recording events.

The steps are given below.

1. Create a new testcase.
2. Select the Record -> Actions menu option.
3. After getting the 'Record Actions' dialog box, the sequence of steps to be tested should
be performed.
4. That is, the tester has to simply do the ordinary testing process by selecting or using
the windows & their objects.
5. After completing these steps, the user has to click the 'Paste to Editor' button in the
'Record Actions' dialog box.
6. Now the scripts are automatically available in the script file.
7. Save the script and run that testcase.

The recorded statements for logging in to the Yahoo site will look like the sample given below.

    [-] recording
    BrowserPage.SetActive ()
    Browser.Location.SetText ("www.yahoo.com")
    Browser.Location.TypeKeys ("")
    Yahoo.HtmlLink ("Mail|#26|$http:??www.yahoo.com?r?m2").Click ()
    BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetPosition (1, 1)
    BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetText ("username")
    BrowserPage.HtmlTextField ("Yahoo!ID:|#1").TypeKeys ("")
    BrowserPage.HtmlTextField ("Password:|#2").SetText ("password")
    BrowserPage.HtmlPushButton ("Sign In|#1").Click ()

The alternative for the above recorded statements will be as below:

    Browser.LoadPage ("www.yahoo.com")    // loads the Yahoo homepage as the default page
    if Yahoo.Exists ()                    // checking for the existence of the homepage
        Print ("Yahoo window exists")     // confirming that the window exists
        Yahoo.objYahoo.Loginname.SetText ("username")
        Yahoo.objYahoo.Password.SetText ("password")
        Yahoo.objYahoo.Submit.Click ()

The difference between the above two scripts is that method II needs the windows & their
objects to be declared before the scripts are written. That is not the case with the recorded
kind of code.

Silk Test Features

Platform Independent

Silk Test doesn't care how the application is created, in which software the application is
written, what kind of design is used, in which browser it is being run, or in which operating
system the application is running.

All that an application needs to be tested using SilkTest is a frame (like a window).

Browser Independent

There are various kinds of browsers used by various people for running their applications.
The user may use any browser of his choice to test the standard application. Each and every
browser acts differently with different applications. They show the same page differently,
and the web objects they display can also be aligned or displayed in a different manner.

SilkTest just looks at these browser contents as objects, and hence no images or text go
unidentified. Also, we can write a test in one browser and run it in any other browser (to
some extent). That is, using SilkTest, we can do cross-browser testing.

With minor modifications, your tests are robust enough to support different browsers and
different versions of these browsers.

Technology Independent

SilkTest does not care how the application was built. It seamlessly works with the different
web technologies commonly used today.

How to use the same code for multiple browsers:

Start writing the Silk scripts. Capture the window declarations (.inc file) and write the .t
file. Say you capture the declarations from Internet Explorer and run the tests successfully
on it. As we captured the declarations from IE, we now have to make the same test cases
run on Netscape, since the tag values change across browsers.

Testing the Windows based applications

Before starting to write scripts, enable the settings given below.

1. Declare all the window names and their objects (used in writing scripts), starting from
the first window.
2. In the File -> New option in the menu bar, select the test frame.
3. In the resulting 'New Test Frame' dialog box, specify the path of the executable file of
your application.
4. After submitting that dialog box, SilkTest will automatically create a declaration file with
the default window declared.
5. Use that file to create your testscripts.

Testing the Java based applications

Before you start testing Java applications or applets, you have to set the Java classpath.

• Point to the Java archive (.jar file) that contains the software that powers SilkTest's
Java support for JDK 1.2 and JRE 1.2. This file is called SilkTest_Java2.jar.
• When you install SilkTest, SilkTest_Java2.jar is installed in the \JavaEx directory.
• If you will use only JDK 1.2 for testing, you can activate Java support for JDK 1.2 by
copying SilkTest_Java2.jar from \JavaEx to \jre\lib\ext.

If you do not copy SilkTest_Java2.jar to your JDK 1.2 install directory, you must point to it
from your CLASSPATH.

Test Partner
Introduction

Batch Testing:--

We can create batches/suites by using the keyword "Run". In this tool, the calling test is
known as the Driver script and the called test is known as the Test script.

Syntax: Run “Test script” (or) Run(“Test script”)
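
As a minimal sketch, a driver script that runs three test scripts in sequence might look
like this (the script names are hypothetical):

    ' Driver script: runs the named test scripts one after another
    Run "LoginTest"
    Run "OrderEntryTest"
    Run "LogoutTest"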

Including an Asset:--

To include a VBA or other non-Test Partner asset in a script, add a declaration using the
following syntax:

$TPInclude "asset Name"

where asset Name is the name of the asset that you are including. Asset names are unique
across all asset types, so you don’t need to specify what type of asset you are including.
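
For example, to reuse a shared module of helper routines from a script (the asset name
here is hypothetical):

    $TPInclude "CommonHelpers"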

Object Mapping:--

It can be used to provide simplified, easily understood aliases for the names of Windows
objects. Once a window is registered in the Object Map, all references to it in scripts, check
definitions, and event definitions are made by its alias, rather than by its actual attach
name. The attach name is an important concept when testing applications using Test
Partner.

Check Points:--

A check is a definition of the expected state of some aspect of the target system at a
particular point. In Test Partner, checks are saved and managed as assets. This means you
always have the option to reuse a check in more than one script. The following are various
checks available in Test Partner.

i) Bitmap Check:

Bitmap checks allow you to verify the appearance of a bitmap image. When you create the
check, you capture the image within a rectangular area of the screen. When the check is
verified, the same area is captured and compared to the defined image. If the two images
match according to the criteria you defined, the check passes. If not, the check fails. These
checks are used to check the appearance of toolbars, the desktop, and other windows that
contain non-textual information.

ii) Clock Check:

Clock checks measure the time the system takes to perform a process. Clock checks help
you determine how the system performs under varying CPU or network loads. When you
create the check, you specify an acceptable response time. When the check is verified, the
system’s actual response time is recorded and compared to the specified time. It can

• carry out performance checks on the target application.
• determine whether, under controlled conditions, the target application performs tasks
within pre-defined response times.
• record the effects of varying CPU and network loads on the system.

iii) Content Check:

Content checks test the contents of tables and list controls in a window or web page. A
content check enables you to verify the contents of controls that it supports. Currently,
tables and list controls in a Windows-based or Web-based application are supported. The
Windows NT Version 4 desktops are also list controls.

The content check for tables enables you to optionally check the number of rows and
columns in the table and the case of the text in each table cell.

The content check for list controls enables you to optionally check the number of items,
positions of the items, which item(s) are selected, the text of each list item, and the case of
the text.

iv) Field Check:

Like text checks, Field checks enable you to verify that required text is present in the target
application, but they enable you to verify that text as data, such as numbers or dates. For
example, you can see if a value falls between a lower and upper limit, or if a particular area
of the screen contains today’s date. You can create field checks that verify the following
data:

• ASCII values
• Numeric values
• Date values (fixed and aged)
• Time values
• Patterns

v) Property Check:

Property checks verify the properties of the controls in a dialog or web page. You can
check the size and position of each control, their legends and IDs, and whether they are
active, disabled, selected, or cleared. You can check a single control, or you can check
several controls within an application window.

vi) Text Check:

Text checks provide an exact comparison of the text in a window or individual area to
defined text. If you check a whole screen, areas that contain legitimately variable data, such
as dates and login IDs, can be ignored. Unlike bitmap checks, which simply compare the
appearance of an area of the screen with an expected appearance, text checks actually read
the displayed data as strings. This enables more sophisticated checking to be performed.

Events:--

Events are unscheduled occurrences or conditions to which you want the target application
to respond in a specified manner. Test Partner supports two categories of events: Wait and
Whenever. A Wait event tells Test Partner to wait for a specified occurrence before
proceeding. Wait events are useful in situations where you cannot anticipate the amount of
time a response will take. An example of a Wait event would be waiting for a system login
prompt. When your script is running against a network-based application that requires the
user to log in, the amount of time it takes for the login prompt to display may vary. To
account for this variance, you can insert a Wait event that instructs your script to wait for
the login prompt before proceeding to type a username and password.

A Whenever event tells Test Partner to watch for a specific occurrence and, if it occurs,
perform a special set of steps. Whenever events are useful for trapping unexpected error
conditions during a test run. For example, you can include events in your scripts to
recognize when the connection to the server has been interrupted by a communications
error or a network system message has been received, so that the script can report or work
around the problem. In a script, Test Partner automatically inserts the Script_Whenever
function by default to handle the whenever event. If a whenever event is inserted into a
module, shared module, or class module, you must customize the whenever event handler
code. Test Partner supports the following types of Events:

i) Bitmap Event:-- A bitmap event detects the presence, absence, or state of a graphic in
a window.

ii) Date/Time Event:--

Date/Time events enable you to define a date or time condition. Test Partner recognizes the
event by monitoring the internal clock of the computer on which it and the target application
are running.

iii) Key Event:--

Watch for the entry of a particular keystroke combination by the user. You can use key
events to:

• Build your own “hotkeys” to make Test Partner perform an action whenever the
hotkey is used.
• Interrupt a script to take manual control of the target application.
• Pause a script until a particular key is used.

• Prevent certain keys from being entered.

iv) Menu Events:-- Watch for when a particular menu item is selected by the user.

v) Mouse Events:-- Watch for when one of the mouse buttons is clicked or released in a
certain window.

vi) Screen Events:--

A screen event detects the presence or absence of text in a window. The most common use
of screen events is to synchronize with host-based applications and to detect error
messages.

vii) Window Events:--

A window event detects an action on a window; for example, its creation, destruction,
movement, or its existence.

Data Driven Test:--

The Test Partner Active Data Test Creation wizard provides a non-programmatic way to
create data-driven tests. Such tests are useful for testing form-based applications. Using the
wizard, you can record a test, choose the fields you want to include in your data file, then
populate the data file itself using a data table.

To create a data test

1. From the Script menu, choose Create Data Test. The Data Test wizard appears.
2. Read the instructions and click Next.
3. Follow the instructions in the Data Layout Assistant for the three steps necessary to
define the scope of the test.
4. Enter a name for the Data Table.
5. Exclude any fields you do not want to include in the test by unchecking the Use checkbox
and click Next.
6. Read the instructions and click Finish.

TestPartner shows the Data Table, which includes a column for each field you defined and
one row of data. The data table also includes a column labelled Results Script, to specify a
script to be run after the data driven test.

To modify the data table.

1. If you have just recorded a new data test and the data table is open, proceed to step 3.

2. To modify an existing data test, select Modify Data Table on the Script menu and choose
the data table you want to change.

3. To add rows and populate the table with test data, right-click on the empty row labelled
END (or any other row) and select Insert on the context menu. TestPartner inserts a new
row above the selected row.

4. Enter test data into the cells of the new row.

5. Alternatively, right-click the empty row and choose Import. You can import data from a
tab-delimited text (.TXT) or comma-delimited (.CSV) file to populate the cells of the data
table. If you select the END row in the table, TestPartner will add multiple rows if needed to
accommodate the data.

6. To delete rows from the table, right-click the row and choose Delete. To delete multiple
rows, press the Ctrl or Shift (for contiguous rows) key while selecting rows to delete.

7. To launch another script from the data test, insert the name of the script in the Results
Script field in the table at any point(s) at which you want to run the script.

8. When you have finished editing the data table, click Save and Close to exit.

Interview questions
1) How have you used WinRunner in your project?

Ans. I have been using WinRunner for creating automated scripts for GUI, functional and
regression testing of the AUT.

2) Explain WinRunner testing process?

Ans. The WinRunner testing process involves six main stages:

i. Create GUI Map File so that WinRunner can recognize the GUI objects in the application
being tested
ii. Create test scripts by recording, programming, or a combination of both. While recording
tests, insert checkpoints where you want to check the response of the application being
tested.
iii. Debug Test: run tests in Debug mode to make sure they run smoothly
iv. Run Tests: run tests in Verify mode to test your application.
v. View Results: determines the success or failure of the tests.
vi. Report Defects: If a test run fails due to a defect in the application being tested, you can
report information about the defect directly from the Test Results window.

3) What is contained in the GUI map?

Ans. WinRunner stores information it learns about a window or object in a GUI Map. When
WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s
description in the GUI map and then looks for an object with the same properties in the
application being tested. Each of these objects in the GUI Map file will be having a logical
name and a physical description.

There are 2 types of GUI Map files:

i. Global GUI Map file: a single GUI Map file for the entire application
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test
created.

4) How does WinRunner recognize objects on the application?

Ans. WinRunner uses the GUI Map file to recognize objects on the application. When
WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s
description in the GUI map and then looks for an object with the same properties in the
application being tested.

5) Have you created test scripts and what is contained in the test scripts?

Ans. Yes, I have created test scripts. They contain statements in Mercury Interactive's Test
Script Language (TSL). These statements appear as a test script in a test window. You can
then enhance your recorded test script, either by typing in additional TSL functions and
programming elements or by using WinRunner's visual programming tool, the Function
Generator.
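
A recorded Context Sensitive script, for example, is just a sequence of TSL statements
like the sketch below (the window and field names are hypothetical):

    # set focus to a window, fill in a field and press a button
    set_window ("Login", 5);
    edit_set ("Name:", "guest");
    button_press ("OK");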

6) How does WinRunner evaluate test results?

Ans. Following each test run, WinRunner displays the results in a report. The report details
all the major events that occurred during the run, such as checkpoints, error messages,
system messages, or user messages. If mismatches are detected at checkpoints during the
test run, you can view the expected results and the actual results from the Test Results
window.

7) Have you performed debugging of the scripts?

Ans. Yes, I have performed debugging of scripts. We can debug a script by executing it in
the debug mode. We can also debug a script using the Step, Step Into and Step Out
functionalities provided by WinRunner.

8) How do you run your test scripts?

Ans. We run tests in Verify mode to test the application. Each time WinRunner encounters
a checkpoint in the test script, it compares the current data of the application being tested
to the expected data captured earlier. If any mismatches are found, WinRunner captures
them as actual results.

9) How do you analyze results and report the defects?

Ans. Following each test run, WinRunner displays the results in a report. The report details
all the major events that occurred during the run, such as checkpoints, error messages,
system messages, or user messages. If mismatches are detected at checkpoints during the
test run, you can view the expected results and the actual results from the Test Results
window. If a test run fails due to a defect in the application being tested, you can report
information about the defect directly from the Test Results window. This information is sent
via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of Test Director software?

Ans. TestDirector is Mercury Interactive’s software test management tool. It helps quality
assurance personnel plan and organize the testing process. With TestDirector you can create
a database of manual and automated tests, build test cycles, run tests, and report and track
defects. You can also create reports and graphs to help review the progress of planning
tests, running tests, and tracking defects before a software release.

11) How did you integrate your automated scripts with TestDirector?

Ans. When you work with WinRunner, you can choose to save your tests directly to your
TestDirector database. Also, while creating a test case in TestDirector, we can specify
whether the script is automated or manual. If it is an automated script, then TestDirector
will build a skeleton for the script that can later be modified into one which can be used to
test the AUT.

12) What are the different modes of recording?

Ans. There are two types of recording in WinRunner:

i. Context Sensitive recording records the operations you perform on your application by
identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-
coordinates traveled by the mouse pointer across the screen.

13) What is the purpose of loading WinRunner Add-Ins?

Ans. Add-Ins are used in WinRunner to load functions specific to the particular add-in to the
memory. While creating a script only those functions in the add-in selected will be listed in
the function generator and while executing the script only those functions in the loaded add-
in will be executed else WinRunner will give an error message saying it does not recognize
the function.

14) What are the reasons that WinRunner fails to identify an object on the GUI?

Ans. WinRunner fails to identify an object in a GUI due to various reasons:

i. The object is not a standard windows object.
ii. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not
be able to learn any of the objects displayed in the browser window.

15) What do you mean by the logical name of the object?

Ans. An object’s logical name is determined by its class. In most cases, the logical name is
the label that appears on an object.

16) If the object does not have a name then what will be the logical name?

Ans. If the object does not have a name then the logical name could be the attached text.

17) What is the difference between the GUI map and GUI map files?

Ans. The GUI map is actually the sum of one or more GUI map files. There are two modes
for organizing GUI map files.

i. Global GUI Map file: a single GUI Map file for the entire application

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test
created. A GUI Map file is a file which contains the windows and the objects learned by
WinRunner with their logical names and physical descriptions.

18) How do you view the contents of the GUI map?

Ans. GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from
the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created
and the windows and objects learned in to them with their logical name and physical
description.

19) When you create a GUI map, do you record all the objects or specific objects?

Ans. If we are learning a window, then WinRunner automatically learns all the objects in the
window; otherwise we will be identifying those objects which are to be learned in a window,
since we will be working with only those objects while creating scripts.

20) What is the purpose of set_window command?

Ans. Set_Window command sets the focus to the specified window. We use this command
to set the focus to the required window before executing tests on a particular window.

Syntax: set_window(<window logical name>, time); The logical name is the logical name of
the window and time is the time the execution has to wait till it gets the given window into
focus.
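
Example (a minimal TSL sketch; the window and button names are illustrative):

set_window("Flight Reservation", 10);  # wait up to 10 seconds for the window to get focus
button_press("Insert Order");          # subsequent statements act on objects in this window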

21) How do you load GUI map?

Ans. We can load a GUI Map file by using the GUI_load command.

Syntax: GUI_load(<file_name>);
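
Example (the file path is illustrative):

GUI_load("C:\\gui_maps\\flight_app.gui");  # load the GUI map file into memory before replay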

22) What is the disadvantage of loading the GUI maps through start up scripts?

Ans. 1. If we are using a single GUI Map file for the entire AUT, then the memory used by
the GUI Map may be very high.

2.If there is any change in the object being learned then WinRunner will not be able to
recognize the object, as it is not in the GUI Map file loaded in the memory. So we will have
to learn the object again and update the GUI File and reload it.

23) How do you unload the GUI map?

Ans. We can use GUI_close to unload a specific GUI Map file or else we call use
GUI_close_all command to unload all the GUI Map files loaded in the memory.

Syntax: GUI_close(<file_name>); or GUI_close_all;

24) What actually happens when you load GUI map?

Ans. When we load a GUI Map file, the information about the windows and the objects with
their logical names and physical description are loaded into memory. So when the
WinRunner executes a script on a particular window, it can identify the objects using this
information loaded in the memory.

25) What is the purpose of the temp GUI map file?


Ans. While recording a script, WinRunner learns objects and windows by itself. This is
actually stored into the temporary GUI Map file. We can specify in the General Options
whether this temporary GUI Map file should be loaded each time.

26) What is the extension of gui map file?

Ans. The extension for a GUI Map file is “.gui”.

27) How do you find an object in a GUI map?

Ans. The GUI Map Editor provides Find and Show buttons.

i. To find a particular object from the GUI Map file in the application, select the object and
click the Show button. This blinks the selected object in the application.

ii. To find a particular object in a GUI Map file, click the Find button, which gives the option
to select the object in the application. When the object is selected, if the object has been
learned into the GUI Map file, it will be highlighted in the GUI Map file.

28) What different actions are performed by the Find and Show buttons?

Ans. 1. To find a particular object from the GUI Map file in the application, select the object
and click the Show button. This blinks the selected object in the application.

2. To find a particular object in a GUI Map file, click the Find button, which gives the option
to select the object in the application. When the object is selected, if the object has been
learned into the GUI Map file, it will be highlighted in the GUI Map file.

29) How do you identify which files are loaded in the GUI map?

Ans. The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded
into the memory.

30) How do you modify the logical name or the physical description of the objects
in GUI map?

Ans. You can modify the logical name or the physical description of an object in a GUI map
file using the GUI Map Editor.

31) When do you feel you need to modify the logical name?

Ans. Changing the logical name of an object is useful when the assigned logical name is not
sufficiently descriptive or is too long.

32) When is it appropriate to change the physical description?

Ans. Changing the physical description is necessary when the property value of an object
changes.

33) How does WinRunner handle varying window labels?


Ans. We can handle varying window labels using regular expressions. WinRunner uses two
“hidden” properties in order to use regular expression in an object’s physical description.
These properties are regexp_label and regexp_MSW_class.

i. The regexp_label property is used for windows only. It operates “behind the scenes” to
insert a regular expression into a window’s label description.

ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class.
It is obligatory for all types of windows and for the object class object.
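
For illustration, here is a hedged TSL sketch (the window caption pattern is hypothetical). A
value prefixed with "!" in a physical description is treated by WinRunner as a regular
expression:

# hypothetical: wait up to 5 seconds for any window whose label starts with "Order No."
if (win_exists("{class: window, label: \"!Order No\\..*\"}", 5) == E_OK)
    report_msg("order window found");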

34) What is the purpose of the regexp_label property and regexp_MSW_class property?

Ans. The regexp_label property is used for windows only. It operates “behind the scenes” to
insert a regular expression into a window’s label description.

The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It
is obligatory for all types of windows and for the object class object.

35) How do you suppress a regular expression?

Ans. We can suppress the regular expression of a window by replacing the regexp_label
property with label property.

36) How do you copy and move objects between different GUI map files?

Ans. We can copy and move objects between different GUI Map files using the GUI Map
Editor. The steps to be followed are:

i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files
simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in
the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control
key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select
All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.

37) How do you select multiple objects during merging the files?

Ans. Use the Shift key and/or Control key to select multiple objects. To select all objects in
a GUI map file, choose Edit > Select All.

38) How do you clear a GUI Map file?

Ans. We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.


39) How do you filter the objects in the GUI map?

Ans. GUI Map Editor has a Filter option. This provides for filtering with 3 different types of
options.
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use
any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.

40) How do you configure the GUI map?

a. When WinRunner learns the description of a GUI object, it does not learn all its
properties. Instead, it learns the minimum number of properties to provide a unique
identification of the object.

b. Many applications also contain custom GUI objects. A custom object is any object not
belonging to one of the standard classes used by WinRunner. These objects are therefore
assigned to the generic “object” class. When WinRunner records an operation on a custom
object, it generates obj_mouse_ statements in the test script.

c. If a custom object is similar to a standard object, you can map it to one of the standard
classes. You can also configure the properties WinRunner uses to identify a custom object
during Context Sensitive testing. The mapping and the configuration you set are valid only
for the current WinRunner session. To make the mapping and the configuration permanent,
you must add configuration statements to your startup test script.
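
As a hedged illustration of such a startup-script statement (the custom class name
"AcmeGrid" is hypothetical, and the exact configuration calls may vary by WinRunner
version):

# map the hypothetical custom class "AcmeGrid" to the standard push_button class
set_class_map("AcmeGrid", "push_button");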


Load Runner Questions

1. What is load testing? Load testing is to test whether the application works fine with
the loads that result from a large number of simultaneous users and transactions, and
to determine whether it can handle peak usage periods.
2. What is Performance testing? - Timing for both read and update transactions
should be gathered to determine whether system functions are being performed in
an acceptable timeframe. This should be done standalone and then in a multi user
environment to determine the effect of multiple transactions on the timing of a single
transaction.
3. Did you use LoadRunner? What version? Yes. Version 7.2.
4. Explain the Load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure
the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by
each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a
testing session. It includes a list of machines, scripts, and Vusers that run during the
scenario. We create scenarios using the LoadRunner Controller. We can create manual
scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number
of Vusers, the load generator machines, and the percentage of Vusers to be assigned to
each script. For web tests, we may create a goal-oriented scenario where we define the
goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple
Vusers to perform tasks simultaneously. Before the testing, we set the scenario
configuration and scheduling. We can run the entire scenario, Vuser groups, or
individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner
online runtime, transaction, system resource, Web resource, Web server resource, Web
application server resource, database server resource, network delay, streaming media
resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the
performance of the application under different loads. We use LoadRunner’s graphs
and reports to analyze the application’s performance.
5. When do you do load and performance Testing? - We perform load testing once
we are done with interface (GUI) testing. Modern system architectures are large and
complex. Whereas single-user testing focuses primarily on functionality and the user
interface of a system component, application testing focuses on performance and reliability of an
entire system. For example, a typical application-testing scenario might depict 1000
users logging in simultaneously to a system. This gives rise to issues such as what is
the response time of the system, does it crash, will it go with different software
applications and platforms, can it hold so many hundreds and thousands of users,
etc. This is when we do load and performance testing.
6. What are the components of LoadRunner? - The components of LoadRunner are
The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis
and Monitoring, LoadRunner Books Online.
7. What Component of LoadRunner would you use to record a Script? - The
Virtual User Generator (VuGen) component is used to record a script. It enables you
to develop Vuser scripts for a variety of application types and communication
protocols.
8. What Component of LoadRunner would you use to play Back the script in
multi user mode? - The Controller component is used to playback the script in
multi-user mode. This is done during a scenario run where a vuser script is executed
by a number of vusers in a group.
9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to
emulate heavy user load on the server. Rendezvous points instruct Vusers to wait
during test execution for multiple Vusers to arrive at a certain point, in order that
they may simultaneously perform a task. For example, to emulate peak load on the
bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash
into their accounts at the same time.
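
In the script itself a rendezvous is a single statement; the rendezvous name below is
illustrative:

lr_rendezvous("deposit_cash"); // each Vuser waits here until the rendezvous policy releases them
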
10. What is a scenario? - A scenario defines the events that occur during each testing
session. For example, a scenario defines and controls the number of users to
emulate, the actions to be performed, and the machines on which the virtual users
run their emulations.
11. Explain the recording mode for web Vuser script? - We use VuGen to develop a
Vuser script by recording a user performing typical business processes on a client
application. VuGen creates the script by recording the activity between the client and
the server. For example, in web based applications, VuGen monitors the client end of
the database and traces all the requests sent to, and received from, the database
server. We use VuGen to: Monitor the communication between the application and
the server; Generate the required function calls; and Insert the generated function
calls into a Vuser script.
12. Why do you create parameters? - Parameters are like script variables. They are
used to vary input to the server and to emulate real users. Different sets of data are
sent to the server each time the script is run. Parameters help better simulate the usage
model for more accurate testing from the Controller: one script can emulate many
different users on the system.
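
As a hedged sketch (the step and field names are hypothetical), a parameterized web step
might look like this; at run time {UserName} is replaced with a value from the parameter's
data source:

web_submit_form("login",
    ITEMDATA,
    "Name=username", "Value={UserName}", ENDITEM,  // value drawn from a data file
    LAST);
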
13. What is correlation? Explain the difference between automatic correlation
and manual correlation? - Correlation is used to obtain data which are unique for
each run of the script and which are generated by nested queries. Correlation
provides the value to avoid errors arising out of duplicate values and also optimizing
the code (to avoid nested queries). Automatic correlation is where we set some rules
for correlation. It can be application server specific. Here values are replaced by data
which are created by these rules. In manual correlation, we scan for the value we want
to correlate and use 'create correlation' to correlate it.
14. How do you find out where correlation is required? Give few examples from
your projects? - Two ways: First we can scan for correlations, and see the list of
values which can be correlated. From this we can pick a value to be correlated.
Secondly, we can record two scripts and compare them. We can look up the
difference file to see for the values which needed to be correlated. In my project,
there was a unique id developed for each customer, it was nothing but the Insurance
Number; it was generated automatically, it was sequential, and this value was
unique. I had to correlate this value, in order to avoid errors while running my script.
I did using scan for correlation.
15. Where do you set automatic correlation options? - Automatic correlation from
web point of view can be set in recording options and correlation tab. Here we can
enable correlation for the entire script and choose either issue online messages or
offline actions, where we can define rules for that correlation. Automatic correlation
for database can be done using show output window and scan for correlation and
picking the correlate query tab and choose which query value we want to correlate. If
we know the specific value to be correlated, we just use 'create correlation' for the
value and specify how the value is to be created.
16. What is a function to capture dynamic values in the web Vuser script? -
Web_reg_save_param function saves dynamic data information to a parameter.
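
A hedged sketch (the parameter name and boundaries are illustrative); note that the
registration must appear before the step whose server response contains the dynamic value:

web_reg_save_param("SessionId",
    "LB=session_id=",  // left boundary of the dynamic value
    "RB=;",            // right boundary
    LAST);
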
17. When do you disable log in Virtual User Generator, When do you choose
standard and extended logs? - Once we debug our script and verify that it is
functional, we can enable logging for errors only. When we add a script to a scenario,
logging is automatically disabled. Standard Log Option: When you select
Standard log, it creates a standard log of functions and messages sent during script
execution to use for debugging. Disable this option for large load testing scenarios.
When you copy a script to a scenario, logging is automatically disabled. Extended Log
Option: Select extended log to create an extended log, including warnings and other
messages.
Disable this option for large load testing scenarios. When you copy a script to a
scenario, logging is automatically disabled. We can specify which additional
information should be added to the extended log using the Extended log options.
18. How do you debug a LoadRunner script? - VuGen contains two options to help
debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug
settings in the Options dialog box allow us to determine the extent of the trace to be
performed during scenario execution. The debug information is written to the Output
window. We can manually set the message class within your script using the
lr_set_debug_message function. This is useful if we want to receive debug
information about a small section of the script only.
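
For example (a minimal sketch), the message class can be switched on around a suspect
section and switched off afterwards:

lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);  // verbose logging on
// ... the small section of the script being investigated ...
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_OFF); // restore normal logging
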
19. How do you write user defined functions in LR? Give me few functions you
wrote in your previous project? - Before we create the User Defined functions we
need to create the external library (DLL) with the function. We add this library to the
VuGen bin directory. Once the library is added, then we assign the user defined function
as a parameter. The function should have the following format: __declspec (dllexport)
char* <function name>(char*, char*). Examples of user defined functions are as follows:
GetVersion, GetCurrentTime, GetPlatform are some of the user defined functions used in
my earlier project.
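
A hedged sketch of one such DLL function in the required format (the name and return
value are illustrative):

__declspec(dllexport) char *GetVersion(char *param1, char *param2)
{
    return "1.0.0"; // a real implementation would compute and return a useful value
}
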
20. What are the changes you can make in run-time settings? - The Run Time
Settings that we make are: a) Pacing - It has iteration count. b) Log - Under this
we have Disable Logging, Standard Log, and Extended Log. c) Think Time - In think
time we have two options like Ignore think time and Replay think time. d) General -
Under the General tab we can set the vusers as process or as multithreading and
whether each step is a transaction.
21. How do you perform functional testing under load? - Functionality under load
can be tested by running several Vusers concurrently. By increasing the amount of
Vusers, we can determine how much load the server can sustain.
22. What is Ramp up? How do you set this? - This option is used to gradually
increase the amount of Vusers/load on the server. An initial value is set and a value to
wait between intervals can be specified. To set Ramp Up, go to ‘Scenario Scheduling
Options’.
23. What is the advantage of running the Vuser as thread? - VuGen provides the
facility to use multithreading. This enables more Vusers to be run per
generator. If the Vuser is run as a process, the same driver program is loaded into
memory for each Vuser, thus taking up a large amount of memory. This limits the
number of Vusers that can be run on a single
generator. If the Vuser is run as a thread, only one instance of the driver program is
loaded into memory for the given number of
Vusers (say 100). Each thread shares the memory of the parent driver program, thus
enabling more Vusers to be run per generator.
24. If you want to stop the execution of your script on error, how do you do
that? - The lr_abort function aborts the execution of a Vuser script. It instructs the
Vuser to stop executing the Actions section, execute the vuser_end section and end
the execution. This function is useful when you need to manually abort a script
execution as a result of a specific error condition. When you end a script using this
function, the Vuser is assigned the status "Stopped". For this to take effect, we have
to first uncheck the “Continue on error” option in Run-Time Settings.
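
A minimal sketch (the failure flag is hypothetical and would be set by an earlier check in
the script):

int login_failed = 1;  // assumption: set by an earlier verification step in the script
if (login_failed)
{
    lr_error_message("Login failed - aborting the Vuser");
    lr_abort();
}
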
25. What is the relation between Response Time and Throughput? - The
Throughput graph shows the amount of data in bytes that the Vusers received from
the server in a second. When we compare this with the transaction response time,
we will notice that as throughput decreased, the response time also decreased.
Similarly, the peak throughput and highest response time would occur approximately
at the same time.
26. Explain the Configuration of your systems? - The configuration of our systems
refers to that of the client machines on which we run the Vusers. The configuration of
any client machine includes its hardware settings, memory, operating system,
software applications, development tools, etc. This system component configuration
should match with the overall system configuration that would include the network
infrastructure, the web server, the database server, and any other components that
go with this larger system so as to achieve the load testing objectives.
27. How do you identify the performance bottlenecks? - Performance Bottlenecks
can be detected by using monitors. These monitors might be application server
monitors, web server monitors, database server monitors and network monitors.
They help in finding out the troubled area in our scenario which causes increased
response time. The measurements made are usually performance response time,
throughput, hits/sec, network delay graphs, etc.
28. If web server, database and Network are all fine where could be the
problem? - The problem could be in the system itself or in the application server or
in the code written for the application.
29. How did you find web server related issues? - Using Web resource monitors we
can find the performance of web servers. Using these monitors we can analyze
throughput on the web server, number of hits per second that
occurred during scenario, the number of http responses per second, the number of
downloaded pages per second.
30. How did you find database related issues? - By running the “Database” monitor with
the help of the “Data Resource Graph” we can find database related issues. E.g. you can
specify the resource you want to measure before running the Controller, and then
you can see database related issues.
31. How did you plan the Load? What are the Criteria? - Load test is planned to
decide the number of users, what kind of machines we are going to use and from
where they are run. It is based on 2 important documents, Task Distribution Diagram
and Transaction profile. Task Distribution Diagram gives us the information on
number of users for a particular transaction and the time of the load. The peak usage
and off-usage are decided from this Diagram. Transaction profile gives us the
information about the transactions name and their priority levels with regard to the
scenario we are deciding.
32. What does vuser_init action contain? - Vuser_init action contains procedures to
login to a server.
33. What does vuser_end action contain? - Vuser_end section contains log off
procedures.
34. What is think time? How do you change the threshold? - Think time is the
time that a real user waits between actions. Example: When a user receives data
from a server, the user may wait several seconds to review the data before
responding. This delay is known as the think time. Changing the Threshold:
Threshold level is the level below which the recorded think time will be ignored. The
default value is five (5) seconds. We can change the think time threshold in the
Recording options of the Vugen.
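
In a recorded script the think time appears as a call like the following (the 8-second pause
is illustrative); pauses below the threshold are simply not recorded:

lr_think_time(8); // replayed (or ignored) according to the Think Time run-time settings
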
35. What is the difference between standard log and extended log? - The
standard log sends a subset of functions and messages sent during script execution
to a log. The subset depends on the Vuser type. Extended log sends detailed script
execution messages to the output log. This is mainly used during debugging when
we want information about: Parameter substitution. Data returned by the server.
Advanced trace.
36. Explain the following functions: - lr_debug_message - The lr_debug_message
function sends a debug message to the output log when the specified message class
is set. lr_output_message - The lr_output_message function sends notifications to
the Controller Output window and the Vuser log file. lr_error_message - The
lr_error_message function sends an error message to the LoadRunner Output
window. lrd_stmt - The lrd_stmt function associates a character string (usually a
SQL statement) with a cursor. This function sets a SQL statement to be processed.
lrd_fetch - The lrd_fetch function fetches the next row from the result set.
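
For example (a minimal sketch; the message text and the parameter name {OrderId} are
illustrative), these message functions accept printf-style format strings:

lr_output_message("Order id returned: %s", lr_eval_string("{OrderId}"));
lr_error_message("Unexpected order id: %s", lr_eval_string("{OrderId}"));
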
37. Throughput - If the throughput scales upward as time progresses and the
number of Vusers increase, this indicates that the bandwidth is sufficient. If
the graph were to remain relatively flat as the number of Vusers increased, it would
be reasonable to conclude that the bandwidth is constraining the volume of
data delivered.
38. Types of Goals in Goal-Oriented Scenario - Load Runner provides you with five
different types of goals in a goal oriented scenario:
1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve
39. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the
response time graph, you can see that as the number of Vusers increases, the
average response time of the check itinerary transaction very gradually increases. In
other words, the average response time steadily increases as the load
increases. At 56 Vusers, there is a sudden, sharp increase in the average response
time. We say that the test broke the server. That is the mean time before failure
(MTBF). The response time clearly began to degrade when there were more than 56
Vusers running simultaneously.

Silk Test Questions

1. How does the Recovery System work in SilkTest?

2. What is the purpose of the user-defined base state method?

3. What are the components of SilkTest?

4. What are the important features of SilkTest as compared to other tools?

5. How do you define a new class in SilkTest?

6. What is SilkMeter and how does it work with SilkTest?

Test Director
1. Types of views in DataStage Director?
2. Orchestrate Vs DataStage Parallel Extender?
3. What is an Exception? What are types of Exception?
4. What are Routines and where/how are they written and have you written any routines
before?
5. What are the command line functions that import and export the DS jobs?
6. How many types of database triggers can be specified on a table? What are they?
7. What is NLS in DataStage? How do we use NLS in DataStage? What advantages in that?
At the time of ins. . .
8. What are types of Hashed File?
9. What are the datatypes available in PL/SQL?


General
1. What is 'Software Quality Assurance'?
Software QA involves the entire software development Process - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the
Books section for a list of useful books on Software Quality Assurance.)

2.What is 'Software Testing'?


Testing involves operation of a system or application under controlled conditions and
evaluating the results (eg, 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should include
both normal and abnormal conditions. Testing should intentionally attempt to make things
go wrong to determine if things happen when they shouldn't or things don't happen when
they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization's size and business structure.

3. What are some recent major computer system failures caused by software
bugs?

* Media reports in January of 2005 detailed severe problems with a $170 million high-profile
U.S. government IT systems project. Software testing was one of the five major problem
areas according to a report of the commission reviewing the project. Studies were under
way to determine which, if any, portions of the project could be salvaged.

* In July 2004 newspapers reported that a new government welfare management system in
Canada costing several hundred million dollars was unable to handle a simple benefits rate
increase after being put into live operation. Reportedly the original contract allowed for only
6 weeks of acceptance testing and the system was never tested for its ability to handle a
rate increase.

* Millions of bank accounts were impacted by errors due to installation of inadequately
tested software code in the transaction processing system of a major North American bank,
according to mid-2004 news reports. Articles about the incident stated that it took two
weeks to fix all the resulting errors, that additional problems resulted when the incident
drew a large number of e-mail phishing attacks against the bank's customers, and that the
total cost of the incident could exceed $100 million.

* A bug in site management software utilized by companies with a significant percentage of
worldwide web traffic was reported in May of 2004. The bug resulted in performance
problems for many of the sites simultaneously and required disabling of the software until
the bug was fixed.

* According to news reports in April of 2004, a software bug was determined to be a major
contributor to the 2003 Northeast blackout, the worst power system failure in North
American history. The failure involved loss of electrical power to 50 million customers,
forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug
was reportedly in one utility company's vendor-supplied power monitoring and management
system, which was unable to correctly handle and report on an unusual confluence of
initially localized events. The error was found and corrected after examining millions of lines
of code.

* In early 2004, news reports revealed the intentional use of a software bug as a counter-
espionage tool. According to the report, in the early 1980's one nation surreptitiously
allowed a hostile nation's espionage service to steal a version of sophisticated industrial
software that had intentionally-added flaws. This eventually resulted in major industrial
disruption in the country that used the stolen flawed software.

* A major U.S. retailer was reportedly hit with a large government fine in October of 2003
due to web site errors that enabled customers to view one another's online orders.

* News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall
procedure in which a software upgrade fixed the problems.

* In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would not
run due to their inability to recognize the date '31/12/2000'; the trains were started by
altering the control system's date settings.

* News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage processing
system that did not meet specifications, was delivered late, and didn't work.

* In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.

* In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to
be lost in space due to a simple data conversion error. It was determined that spacecraft
software used certain data in English units that should have been in metric units. Among
other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander
mission, which failed for unknown reasons in December 1999. Several investigating panels
were convened to determine the process failures that allowed the error to go undetected.


* Bugs in software supporting a large commercial high-speed data network affected 70,000
business customers over a period of 8 days in August of 1999. Among those affected was
the electronic trading system of the largest U.S. futures exchange, which was shut down for
most of a week as a result of the outages.

* January 1998 news reports told of software problems at a major U.S. telecommunications
company that resulted in no charges for long distance calls for a month for 400,000
customers. The problem went undetected until customers called up with questions about
their bills.

4.Why is it often hard for management to get serious about quality assurance?

* Solving problems is a high-visibility process; preventing problems is low-visibility. This is
illustrated by an old parable: In ancient China there was a family of healers, one of whom
was known throughout the land and employed as a physician to a great lord.

5.Why does software have bugs?

* Miscommunication or no communication - as to specifics of what an application should or
shouldn't do (the application's requirements).

* Software complexity - the complexity of current software applications can be difficult to
comprehend for anyone without experience in modern-day software development. Multi-
tiered applications, client-server and distributed applications, data communications,
enormous relational databases, and sheer size of applications have all contributed to the
exponential growth in software/system complexity.

* Programming errors - programmers, like anyone else, can make mistakes.

* Changing requirements (whether documented or undocumented) - the end-user may not
understand the effects of changes, or may understand and request them anyway - redesign,
rescheduling of engineers, effects on other projects, work already completed that may have
to be redone or thrown out, hardware requirements that may be affected, etc. If there are
many minor changes or any major changes, known and unknown dependencies among
parts of the project are likely to interact and cause problems, and the complexity of
coordinating changes may result in errors. Enthusiasm of engineering staff may be affected.
In some fast-changing business environments, continuously modified requirements may be
a fact of life. In this case, management must understand the resulting risks, and QA and
test engineers must adapt and plan for continuous extensive testing to keep the inevitable
bugs from running out of control - see 'What can be done if requirements are changing
continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as
XP, also in Part 2 of the FAQ.

* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of
guesswork. When deadlines loom and the crunch comes, mistakes will be made.

* Egos - people prefer to say things like:

* * 'no problem'

* * 'piece of cake'


* * 'I can whip that out in a few hours'

* * 'it should be easy to update that old code'

* instead of:

* * 'that adds a lot of complexity and we could end up making a lot of mistakes'

* * 'we have no idea if we can do that; we'll wing it'

* * 'I can't estimate how long it will take, until I take a close look at it'

* * 'we can't figure out what that old spaghetti code did in the first place'

If there are too many unrealistic 'no problem's', the result is bugs.

* Poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ('if it was hard to
write, it should be hard to read').

* Software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.

6.How can new Software QA processes be introduced in an existing organization?

* A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.

* Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with productivity
so as to keep bureaucracy from getting out of hand.

* For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers, feedback
to developers, and ensuring adequate communications among customers, managers,
developers, and testers.

* The most value for effort will often be in (a) requirements management processes, with a
goal of clear, complete, testable requirement specifications embodied in requirements or
design documentation, or in 'agile'-type environments extensive continuous coordination
with end-users, (b) design inspections and code inspections, and (c) post-
mortems/retrospectives.

7.What is verification? validation?

* Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and takes
place after verifications are completed. The term 'IV & V' refers to Independent Verification
and Validation.

8.What is a 'walkthrough'?

* A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.

9. What's an 'inspection'?

* An inspection is more formalized than a 'walkthrough', typically with 3-8 people including
a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems
and see what's missing, not to fix anything. Attendees should prepare for this type of
meeting by reading thru the document; most problems will be found during this preparation.
The result of the inspection meeting should be a written report.

10.What kinds of testing should be considered?

* Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.

* White box testing - based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths, conditions.

* Unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.

* Incremental integration testing - continuous testing of an application as new functionality
is added; requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test
drivers be developed as needed; done by programmers or by testers.

* Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client
and server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.

* Functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing.)

* System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.

* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use, such
as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.


* Sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging down systems
to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to
warrant further testing in its current state.

* Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially near
the end of the development cycle. Automated testing tools can be especially useful for this
type of testing.

* Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.

* Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails.

* Stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.

* Performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.

* Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.

* Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

* Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.

* Failover testing - typically used interchangeably with 'recovery testing'.

* Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.

* Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.

* Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.

* Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.

* Context-driven testing - testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-critical medical
equipment software would be completely different than that for a low-cost computer game.

* User acceptance testing - determining if software is satisfactory to an end-user or
customer.

* Comparison testing - comparing software weaknesses and strengths to competing
products.

* Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or
others, not by programmers or testers.

* Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.

* Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.

11.What are 5 common solutions to software development problems?

* Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements
that are agreed to by all players. Use prototypes to help nail down requirements. In
'agile'-type environments, continuous coordination with customers/end-users is necessary.

* Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the project
without burning out.

* Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate
time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and
built-in testing and diagnostic capabilities.

* Stick to initial requirements as much as possible - be prepared to defend against
excessive changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in related
schedule changes. If possible, work closely with customers/end-users to manage
expectations. This will provide them a higher comfort level with their requirements decisions
and minimize excessive changes later on.

* Communication - require walkthroughs and inspections when appropriate; make extensive
use of group communication tools - e-mail, groupware, networked bug-tracking tools and
change management tools, intranet capabilities, etc.; ensure that
information/documentation is available and up-to-date - preferably electronic, not paper;
promote teamwork and cooperation; use prototypes if possible to clarify customers'
expectations.

12.What is software 'quality'?

* Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development project
might include end-users, customer acceptance testers, customer contract officers, customer
management, the development organization's management/accountants/testers/salespeople,
future software maintenance engineers, stockholders, magazine columnists, etc. Each type of
'customer' will have their own slant on
'quality' - the accounting department might define quality in terms of profits while an end-
user might define quality as user-friendly and bug-free.

13.What is 'good code'?

* 'Good code' is code that works, is bug free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There
are also various theories and metrics, such as McCabe Complexity metrics. It should be kept
in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer
reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and
enforce standards. For C and C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:

* Minimize or eliminate use of global variables.

* Use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.

* Use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.

* Function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.

* Function descriptions should be clearly spelled out in comments preceding a function's
code.

* Organize code for readability.

* Use whitespace generously - vertically and horizontally.

* Each line of code should contain 70 characters max.

* One code statement per line.

* Coding style should be consistent throughout a program (eg, use of brackets, indentations,
naming conventions, etc.)

* In adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments (including
header blocks) as lines of code.


* No matter how small, an application should include documentation of the overall program
function and flow (even a few paragraphs is better than nothing); or if possible a separate
flow chart and detailed program documentation.

* Make extensive use of error handling procedures and status and error logging.

* For C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note that
the Java programming language eliminates multiple inheritance and operator overloading.)

* For C++, keep class methods small, less than 50 lines of code per method is preferable.

* For C++, make liberal use of exception handlers.
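
As a small hedged C sketch pulling several of these rules together (descriptive names, a
function header comment, consistent style, and explicit error handling):

#include <stdio.h>
#include <string.h>

/* FindUserRecord: returns the index of userName in the users array,
   or -1 if the arguments are invalid or the name is not found. */
int FindUserRecord(const char *users[], int userCount, const char *userName)
{
    int index;

    if (users == NULL || userName == NULL)
    {
        fprintf(stderr, "FindUserRecord: invalid argument\n"); /* error handling and logging */
        return -1;
    }
    for (index = 0; index < userCount; index++)
    {
        if (strcmp(users[index], userName) == 0)
        {
            return index; /* found */
        }
    }
    return -1; /* not found */
}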

14.What is 'good design'?

* 'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error-handling
and status logging capability; and works correctly when implemented. Good functional
design is indicated by an application whose functionality can be traced back to customer and
end-user requirements. For programs that have a user interface, it's often a good idea to
assume that the end user will have little computer knowledge and may not read a user
manual or even the on-line help; some common rules-of-thumb include:

* The program should act in a way that least surprises the user

* It should always be evident to the user what can be done next and how to exit

* The program shouldn't let the users do something stupid without warning them.

15.What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

* SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.
Defense Department to help improve software development processes.

* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model
Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that
determine effectiveness in delivering quality software. It is geared to large organizations
such as large U.S. Defense Department contractors. However, many of the QA processes
involved are appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals
to successfully complete projects. Few if any processes in place; successes may not be
repeatable.

* Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.

* Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to oversee
software processes, and training programs are used to ensure understanding and
compliance.

* Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.

* Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.

* Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of
those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings
during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4%
at 5.) The median size of organizations was 100 software engineering/maintenance
personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated
at Level 1, the most problematical key process area was in Software Quality Assurance.

* ISO = 'International Organisation for Standardization' - The ISO 9001:2000 standard
(which replaces the previous standard of 1994) concerns quality systems that are assessed
by outside auditors, and it applies to many kinds of production and manufacturing
organizations, not just software. It covers documentation, design, development, production,
testing, installation, servicing, and other processes. The full set of standards consists of:
(a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-2000 - Quality
Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality
Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified,
a third-party auditor assesses an organization, and certification is typically good for about 3
years, after which a complete reassessment is required. Note that ISO certification does not
necessarily indicate quality products - it indicates only that documented processes are
followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards
can be purchased via the ASQ web site at http://e-standards.asq.org/

* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard
829), 'IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard
for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

* ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality).

* Other software development/IT management process assessment methods besides CMMI
and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

16.What is the 'software life cycle'?

* The life cycle begins when an application is first conceived and ends when it is no longer in
use. It includes aspects such as initial concept, requirements analysis, functional design,
internal design, documentation planning, test planning, coding, document preparation,
integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

17.Will automated testing tools make testing easier?

* Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects they can be valuable.


* A common type of automated tool is the 'record/playback' type. For example, a tester
could click through all combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is
typically in the form of text based on a scripting language that is interpretable by the testing
tool. If new buttons are added, or some underlying code in the application is changed, etc.
the application might then be retested by just 'playing back' the 'recorded' actions, and
comparing the logging results to check effects of the changes. The problem with such tools
is that if there are continual changes to the system being tested, the 'recordings' may have
to be changed so much that it becomes very time-consuming to continuously update the
scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be
a difficult task. Note that there are record/playback tools for text-based interfaces also, and
for all types of platforms.

* Another common type of approach for automation of functional testing is 'data-driven' or
'keyword-driven' automated testing, in which the test drivers are separated from the data
and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text
box'). Test drivers can be in the form of automated test tools or custom-written testing
software. The data and actions can be more easily maintained - such as via a spreadsheet -
since they are separate from the test drivers. The test drivers 'read' the data/action
information to perform specified tests. This approach can enable more efficient control,
development, documentation, and maintenance of automated tests/test cases.

* Other automated tools can include:

* Code analyzers - monitor code complexity, adherence to standards, etc.

* Coverage analyzers - these tools check which parts of the code have been exercised by a
test, and may be oriented to code statement coverage, condition coverage, path coverage,
etc.

* Memory analyzers - such as bounds-checkers and leak detectors.

* Load/performance test tools - for testing client/server and web applications under various
load levels.

* Web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, a web site's interactions are secure.

* Other tools - for test case management, documentation management, bug reporting, and
configuration management.
