Software Testing

Contents:
Introduction
Testing Start Process
Testing Stop Process
Testing Strategy
Test Plan
Risk Analysis
Software Testing Life Cycle
Software Testing Types:
  Static Testing
  Dynamic Testing
  Blackbox Testing
  Whitebox Testing
  Unit Testing
  Requirements Testing
  Regression Testing
  Error Handling Testing
  Manual Support Testing
  Intersystem Testing
  Control Testing
  Parallel Testing
  Volume Testing
  Stress Testing
  Performance Testing
Testing Tools:
  WinRunner
  LoadRunner
  TestDirector
  SilkTest
  TestPartner
Interview Questions:
  WinRunner
  LoadRunner
  SilkTest
  TestDirector
  General Testing Questions
Software Testing
Testing Introduction
Testing is a process used to help identify the correctness, completeness and quality of
developed computer software. With that in mind, testing can never completely establish the
correctness of computer software. In other words, testing is nothing but CRITICISM or
COMPARISON, where comparison means comparing the actual value with the expected one.
There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following rote
procedure. One definition of testing is "the process of questioning a product in order to
evaluate it", where the "questions" are things the tester tries to do with the product, and
the product answers with its behavior in reaction to the probing of the tester. Although most
of the intellectual processes of testing are nearly identical to that of review or inspection,
the word testing is connoted to mean the dynamic analysis of the product—putting the
product through its paces.
The quality of the application can and normally does vary widely from system to system but
some of the common quality attributes include reliability, stability, portability, maintainability
and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and
criteria.
Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But testing
cannot show the absence of defects; it can only show that defects are present.
Testing early in the life cycle reduces the errors. Test deliverables are associated with every
phase of development. The goal of a software tester is to find bugs, find them as early as
possible, and make sure they are fixed.
The number one cause of Software bugs is the Specification. There are several reasons
specifications are the largest bug producer.
In many instances a spec simply isn't written. Other reasons may be that the spec isn't
thorough enough, it's constantly changing, or it's not communicated well to the entire team.
Planning software is vitally important. If it’s not done correctly bugs will be created.
The next largest source of bugs is the design; that's where the programmers lay out the plan
for their software. Compare it to an architect creating the blueprint for a building. Bugs
occur here for the same reasons they occur in the specification: it's rushed, changed, or not
well communicated.
Coding errors may be more familiar to you if you are a programmer. Typically these can be
traced to software complexity, poor documentation, schedule pressure or just plain
dumb mistakes. It's important to note that many bugs that appear on the surface to be
programming errors can really be traced to the specification. It's quite common to hear a
programmer say, "Oh, so that's what it's supposed to do. If someone had told me that, I
wouldn't have written the code that way."
The other category is the catch-all for what is left. Some bugs can be blamed on false
positives, conditions that were thought to be bugs but really weren't. There may be
duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can also be
traced to testing errors.
Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug
found and fixed during the early stages when the specification is being written might cost
next to nothing, or 10 cents in our example. The same bug, if not found until the software is
coded and tested, might cost $1 to $10. If a customer finds it, the cost would easily top
$100.
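As a sketch, this tenfold escalation can be tabulated; the phase names and the 10-cent starting figure simply restate the example above:

```python
# Illustrative sketch of the "costs are logarithmic" rule: the cost of fixing
# a bug grows roughly tenfold for each development phase it survives.
PHASES = ["specification", "design", "coding", "testing", "release"]

def fix_cost_cents(phase, base_cents=10, factor=10):
    """Approximate cost (in cents) of fixing a bug first found in `phase`."""
    return base_cents * factor ** PHASES.index(phase)

for p in PHASES:
    print(f"{p:>13}: ${fix_cost_cents(p) / 100:,.2f}")
```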
This can be difficult to determine. Many modern software applications are so complex, and
run in such an interdependent environment, that complete testing can never be done.
"When to stop testing" is one of the most difficult questions for a test engineer. Common
factors in deciding when to stop are:
• Deadlines (release or testing deadlines) have been reached
• Test cases have been completed with a certain pass percentage
• The test budget is depleted
• Coverage of code, functionality or requirements reaches a specified point
• The bug rate falls below a certain level
• The beta or alpha testing period ends
Practically, the decision to stop testing is based on the level of risk acceptable to
management. As testing is a never-ending process, we can never assume that 100% testing
has been done; we can only minimize the risk of shipping the product to the client with X
amount of testing done. The risk can be measured by risk analysis, but for a small-duration,
low-budget, low-resource project, the risk can be judged more simply.
Test Strategy
A test strategy should be specific, practical and justified.
The purpose of a test strategy is to clarify the major tasks and challenges of the test
project.
Test Approach and Test Architecture are other terms commonly used to describe what I’m
calling test strategy.
"We will use black box testing, cause-effect graphing, boundary testing, and white box
testing to test this product against its specification."
Test Strategy: type of project, type of software, when testing will occur, critical success
factors, tradeoffs
• Derived from the Test Approach, Requirements, Project Plan, Functional Spec. and
Design Spec.
• Details the project-specific Test Approach.
• Lists general (high-level) Test Case areas.
• Includes a testing Risk Assessment.
• Includes a preliminary Test Schedule.
• Lists Resource requirements.
Test Plan
The test strategy identifies the multiple test levels that are going to be performed for the
project. Activities at each level must be planned well in advance and formally documented.
The individual test levels are carried out based on these individual plans.
Entry means the entry point to that phase. For example, for unit testing, coding must be
complete before unit testing can start. Task is the activity that is performed.
Validation is the way in which the progress, correctness and compliance are verified for
that phase. Exit states the completion criteria of that phase, after the validation is done. For
example, the exit criterion for unit testing is that all unit test cases must pass.
The unit test plan is the overall plan to carry out the unit test activities. The lead tester
prepares it and distributes it to the individual testers; it contains the following
sections.
What is to be tested?
The unit test plan must clearly specify the scope of unit testing. Normally the basic
input/output of the units, along with their basic functionality, will be tested. In this case
mostly the input units will be tested for format, alignment, accuracy and totals. The
UTP will clearly give the rules for what data types are present in the system, their format and
their boundary conditions. This list may not be exhaustive, but it is better to have a
complete list of these details.
Sequence of Testing
The sequence of test activities to be carried out in this phase is listed in this section. This
includes whether to execute positive test cases or negative test cases first, whether to
execute test cases based on priority, or based on test groups, etc. Positive test cases prove
that the system does what it is supposed to do; negative test cases prove that the system
does not do what it is not supposed to do. Testing of the screens, files, database, etc. is to
be given in proper sequence.
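As an illustration, here is a hypothetical positive/negative split for a single input field; the `accept_age` validator and its 18..60 rule are invented for this sketch:

```python
# Hypothetical example: positive vs. negative test cases for an "age" input
# field that must accept integers in the range 18..60.
def accept_age(value):
    return isinstance(value, int) and 18 <= value <= 60

# Positive cases: the system does what it is supposed to do.
positive_cases = [18, 35, 60]
# Negative cases: the system rejects what it is not supposed to accept.
negative_cases = [17, 61, -1, "abc", None]

assert all(accept_age(v) for v in positive_cases)
assert not any(accept_age(v) for v in negative_cases)
```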
This section also describes how the independent functionality of each unit is tested,
excluding any communication between the unit and other units. The interface part is out of
scope at this test level. Apart from the above sections, the following sections are addressed,
very specific to unit testing.
The integration test plan is the overall plan for carrying out the activities in the integration
test level, which contains the following sections.
What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing:
internal and external interfaces, with the requests and responses to be explained. This need
not go deep into technical details, but the general approach to how the interfaces are
triggered is explained.
Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are
to be integrated will be specified in this section. In this, the dependencies between the
modules play a vital role. If a unit B has to be executed, it may need the data that is fed by
unit A and unit X. In this case, the units A and X have to be integrated and then using that
data, the unit B has to be tested. This has to be stated for the whole set of units in the
program. Given this correctly, the testing activities will slowly build the product, unit by
unit, integrating them as they go.
The system test plan is the overall plan for carrying out the system test level activities. In the
system test, apart from testing the functional aspects of the system, there are some special
testing activities carried out, such as stress testing etc. The following are the sections
normally present in system test plan.
What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, the
system testing is based on the requirements. All requirements are to be verified in the
scope of system testing. This covers the functionality of the product. Apart from this, any
special testing performed is also stated here.
The requirements can be grouped in terms of the functionality. Based on this, there may be
priorities also among the functional groups. For example, in a banking application, anything
related to customer accounts can be grouped into one area, anything related to inter-branch
transactions may be grouped into one area, etc. In the same way, for the product being
tested, these areas are to be mentioned here, and the suggested sequence of testing these
areas, based on their priorities, is to be described.
The client performs the acceptance testing at their site. It will be very similar to the
system test performed by the software development unit. Since the client is the one who
decides the format and testing methods of acceptance testing, there is no specific way to
predict how they will carry out the testing, but it will not differ much from the system
testing. Assume that all the rules which apply to the system test apply to acceptance
testing as well.
Since this is just one level of testing done by the client for the overall product, it may
include test cases including the unit and integration test level details.
A sample Test Plan Outline along with their description is as shown below:
1. BACKGROUND – This item summarizes the functions of the application system and the
tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS – Indicates any anticipated assumptions which will be made while testing
the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which
will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement
which won't be tested, and why not.
7. APPROACH - Describes the data flows and test philosophy (simulation or live execution,
etc.). This section also mentions all the approaches which will be followed at the various
stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement or itemized list of expected outputs and
tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish checkpoints in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered?
Test report
Test software
11. TESTING TASKS - Functional tasks (e.g., equipment setup)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test passes, such as unit tests, integration tests and
system tests, should be clearly mentioned, along with the estimated efforts.
Risk Analysis:
A risk is a potential for loss or damage to an organization from materialized threats. Risk
analysis attempts to identify all the risks and then quantify their severity. A threat, as we
have seen, is a possible damaging event. If it occurs, it exploits a vulnerability in the
security of a computer-based system.
Risk Identification:
1. Software Risks: Knowledge of the most common risks associated with Software
development, and the platform you are working on.
2. Business Risks: Most common risks associated with the business using the Software
3. Testing Risks: Knowledge of the most common risks associated with Software Testing
for the platform you are working on, tools being used, and test methods being applied.
4. Premature Release Risk: Ability to determine the risk associated with releasing
unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated
with implementing and operating information technology, products and process; assessing
their likelihood, and initiating strategies to test those risks.
Traceability means being able to trace back and forth how and where any work product
fulfills the directions of the preceding (source) product. The matrix deals with the where;
the how you have to work out yourself, once you know the where.
Take e.g. the Requirement of User Friendliness (UF). Since UF is a complex concept, it is not
solved by just one design-solution and it is not solved by one line of code. Many partial
design-solutions may contribute to this Requirement and many groups of lines of code may
contribute to it.
A Requirements-Design Traceability Matrix puts on one side (e.g. left) the sub-
requirements that together are supposed to solve the UF requirement, along with other
(sub-)requirements. On the other side (e.g. top) you specify all design solutions. Now you
can connect on the cross points of the matrix, which design solutions solve (more, or less)
any requirement. If a design solution does not solve any requirement, it should be deleted,
as it is of no value.
Having this matrix, you can check whether any requirement has at least one design solution
and by checking the solution(s) you may see whether the requirement is sufficiently solved
by this (or the set of) connected design(s).
If you have to change any requirement, you can see which designs are affected. And if you
change any design, you can check which requirements may be affected and see what the
impact is.
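A minimal sketch of such a matrix, with invented requirement and design IDs, might look like:

```python
# Sketch of a requirements-design traceability matrix as a mapping from
# requirement IDs to the design solutions that (partly) satisfy them.
# All IDs and names here are made up for illustration.
matrix = {
    "UF-1 consistent menus": {"D-10 menu framework"},
    "UF-2 undo everywhere":  {"D-11 command stack", "D-12 edit widgets"},
    "R-7 export to CSV":     set(),   # gap: no design solution yet
}
all_designs = {"D-10 menu framework", "D-11 command stack",
               "D-12 edit widgets", "D-99 splash screen"}

# Requirements with no design solution are unresolved gaps.
unsolved = [req for req, designs in matrix.items() if not designs]

# Design solutions that solve no requirement add no value and can be dropped.
used = set().union(*matrix.values())
orphan_designs = all_designs - used
```

The same dictionary-of-sets shape works for a design-code matrix; only the row and column labels change.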
In a Design-Code Traceability Matrix you can do the same to keep trace of how and which
code solves a particular design and how changes in design or code affect each other.
Traceability also prevents delays in the project timeline that can be brought about by
having to backtrack to fill gaps.
Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting
A scenario is a typical interaction from a user's perspective, used for system requirements
studies or testing; in other words, "an actual or realistic example scenario". A use case
describes the use of a system from start to finish. Use cases focus attention on aspects of a
system useful to people outside of the system itself.
Use Case:
A collection of possible scenarios between the system under discussion and external actors,
characterized by the goal the primary actor has toward the system's declared
responsibilities, showing how the primary actor's goal might be delivered or might fail.
Use cases are goals (use cases and goals are used interchangeably) that are made up of
scenarios. Scenarios consist of a sequence of steps to achieve the goal, each step in a
scenario is a sub (or mini) goal of the use case. As such each sub goal represents either
another use case (subordinate use case) or an autonomous action that is at the lowest level
desired by our use case decomposition.
There are two scopes that use cases are written from: Strategic and System. There are also
three levels: Summary, User and Sub-function.
Strategic Scope:
The goal (use case) is a strategic goal with respect to the system. These goals are goals of
value to the organization. The use case shows how the system is used to benefit the
organization. These strategic use cases will eventually use some of the same lower-level
(subordinate) use cases.
System Scope:
Use cases at system scope are bounded by the system under development. The goals
represent specific functionality required of the system. The majority of the use cases are at
system scope. These use cases are often steps in strategic-level use cases.
Sub-function Level:
A sub goal or step is below the main level of interest to the user. Examples are "logging in"
and "locate a device in a DB". Always at System Scope.
User Level:
This is the level of greatest interest. It represents a user task or elementary business
process. A user-level goal addresses the question "Does your job performance depend on
how many of these you do in a day?" For example, "Create Site View" or "Create New
Device" would be user-level goals, but "Log In to System" would not. Always at System
Scope.
Summary Level:
Written for either strategic or system scope, summary goals represent collections of
user-level goals. For example, the summary goal "Configure Data Base" might include as a
step the user-level goal "Add Device to database". Either at System or Strategic Scope.
Test Documentation
Test documentation is a required tool for managing and maintaining the testing process.
Documents produced by testers should answer the following questions:
In entomology (the study of real, living Bugs), the term life cycle refers to the various
stages that an insect assumes over its life. If you think back to your high school biology
class, you will remember that the life cycle stages for most insects are the egg, larvae,
pupae and adult. It seems appropriate, given that software problems are also called bugs,
that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an
example of the simplest, and most optimal, software bug life cycle.
This example shows that when a bug is found by a software tester, it's logged and assigned
to a programmer to be fixed. This state is called the open state. Once the programmer fixes
the code, he assigns it back to the tester and the bug enters the resolved state. The tester
then performs a regression test to confirm that the bug is indeed fixed and, if so, closes it
out. The bug then enters its final state, the closed state.
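This simple life cycle can be sketched as a small state machine; the state names come from the text, while the transition table itself is a minimal assumption:

```python
# A minimal sketch of the simple bug life cycle, with allowed transitions.
TRANSITIONS = {
    "open":     {"resolved"},        # programmer fixes the bug
    "resolved": {"closed", "open"},  # tester confirms the fix, or reopens
    "closed":   {"open"},            # a regression reopens a closed bug
}

def advance(state, new_state):
    """Move a bug to `new_state`, enforcing the legal transitions."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# The optimal path: open -> resolved -> closed.
state = "open"
state = advance(state, "resolved")
state = advance(state, "closed")
```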
In some situations though, the life cycle gets a bit more complicated.
In this case the life cycle starts out the same, with the tester opening the bug and assigning
it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to
fix and assigns it to the project manager to decide. The project manager agrees with the
programmer and places the bug in the resolved state as a "won't-fix" bug. The tester
disagrees, looks for and finds a more obvious and general case that demonstrates the bug,
reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves
it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.
You can see that a bug might undergo numerous changes and iterations over its life,
sometimes looping back and starting the life all over again. Figure below takes the simple
model above and adds to it possible decisions, approvals, and looping that can occur in most
projects. Of course every software company and project will have its own system, but this
figure is fairly generic and should cover almost any bug life cycle that you'll encounter.
The generic life cycle has two additional states and extra connecting lines. The review state
is where the project manager or a committee, sometimes called a change control board,
decides whether the bug should be fixed. In some projects all bugs go through the review
state before they're assigned to the programmer for fixing. In other projects, this may not
occur until near the end of the project, or not at all. Notice that the review state can also go
directly to the closed state. This happens if the review decides that the bug shouldn't be
fixed: it could be too minor, is really not a problem, or is a testing error. The other
additional state is deferred. The review may determine that the bug should be considered
for fixing at some time in the future, but not for this release of the software.
The additional line from the resolved state back to the open state covers the situation where
the tester finds that the bug hasn't been fixed. It gets reopened and the bug's life cycle
repeats.
The two dotted lines that loop from the closed and the deferred states back to the open state
rarely occur but are important enough to mention. Since a tester never gives up, it's
possible that a bug that was thought to be fixed, tested and closed could reappear. Such
bugs are often called regressions. It's possible that a deferred bug could later be proven
serious enough to fix immediately. If either of these occurs, the bug is reopened and started
through the process again. Most project teams adopt rules for who can change the state of
a bug or assign it to someone else. For example, maybe only the project manager can
decide to defer a bug, or only a tester is permitted to close a bug. What's important is that
once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the
necessary information to drive it to being fixed and closed.
Testing Types
Static testing
The Verification activities fall into the category of Static Testing. During static testing, you
have a checklist to check whether the work you are doing is going as per the set standards
of the organization. These standards can be for Coding, Integrating and Deployment.
Reviews, inspections and walkthroughs are static testing methodologies.
Dynamic testing
Dynamic Testing involves working with the software, giving input values and checking if the
output is as expected. These are the Validation activities. Unit Tests, Integration Tests,
System Tests and Acceptance Tests are a few of the dynamic testing methodologies. As we
go further, let us understand the various test life cycles and get to know the testing
terminologies.
Difference between Static and Dynamic Testing: Static testing verifies work products
(documents, designs, code) against standards without executing the software, whereas
dynamic testing executes the software with input values and checks the actual output
against the expected output.
Blackbox Testing
Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
White box testing should be performed early in the testing process, while black box testing
tends to be applied during later stages. Test cases should be derived which
1. Reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test
cases can be derived. Equivalence partitioning strives to define a test case that uncovers
classes of errors and thereby reduces the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class represents a
set of valid or invalid states for input conditions.
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are
defined.
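Rule 1 above can be sketched in code; the choice of one sample value per class is one common convention, not the only one:

```python
# Sketch of equivalence partitioning for a ranged input condition: one valid
# and two invalid equivalence classes, each represented by a sample value.
def range_partitions(low, high):
    """Equivalence-class samples for an integer range [low, high]."""
    return {
        "valid":         (low + high) // 2,  # inside the range
        "invalid_below": low - 1,            # below the range
        "invalid_above": high + 1,           # above the range
    }

classes = range_partitions(1, 100)
```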
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing on input conditions solely, BVA derives test cases from the output
domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just
above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these
limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
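Guideline 1 can be sketched as a small helper; the "month" field below is an invented example:

```python
# Sketch of boundary value analysis for an input range bounded by a and b:
# test a, b, and the values just below and just above each bound.
def boundary_values(a, b, step=1):
    """Boundary test values for an integer range [a, b]."""
    return [a - step, a, a + step, b - step, b, b + step]

values = boundary_values(1, 12)   # e.g. a hypothetical "month" field
```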
Cause-Effect Graphing
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
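A toy illustration of steps 3 and 4, with invented causes and effects for a hypothetical login module:

```python
# Toy decision table: each rule (row) becomes one test case. The causes
# C1 = "username known" and C2 = "password correct" and the effects are
# invented for illustration.
decision_table = [
    # (C1,    C2,    expected effect)
    (True,  True,  "grant access"),
    (True,  False, "reject password"),
    (False, True,  "unknown user"),
    (False, False, "unknown user"),
]

# Step 4: convert each decision-table rule into a test case.
test_cases = [
    {"username_known": c1, "password_correct": c2, "expected": effect}
    for c1, c2, effect in decision_table
]
```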
Whitebox Testing
White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural
design and use it as a guide for defining a basis set of execution paths. Test cases that
exercise the basis set are guaranteed to execute every statement in the program at least
once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the
derivation of the basis set. Each flow graph node represents one or more procedural
statements. The edges between nodes represent flow of control. An edge must terminate at
a node, even if the node does not represent any useful procedural statements. A region in a
flow graph is an area bounded by edges and nodes. Each node that contains a condition is
called a predicate node. Cyclomatic complexity is a metric that provides a quantitative
measure of the logical complexity of a program. It defines the number of independent paths
in the basis set and thus provides an upper bound for the number of tests that must be
performed.
An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set
is not unique. Any number of different basis sets can be derived for a given procedural
design. Cyclomatic complexity, V(G), for a flow graph G is equal to V(G) = E - N + 2, where
E is the number of edges and N is the number of nodes in the flow graph. Equivalently,
V(G) = P + 1, where P is the number of predicate nodes in G.
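A minimal sketch of the V(G) = E - N + 2 computation on an invented flow graph:

```python
# Sketch: cyclomatic complexity V(G) = E - N + 2 for a small flow graph,
# given as an adjacency list (node -> successor nodes). The graph below
# models an invented if/else followed by a loop, just for illustration.
graph = {
    1: [2, 3],   # predicate node (the if/else decision)
    2: [4],
    3: [4],
    4: [5, 1],   # predicate node (the loop test, looping back to 1)
    5: [],       # exit node
}

def cyclomatic_complexity(g):
    edges = sum(len(succ) for succ in g.values())
    nodes = len(g)
    return edges - nodes + 2

v = cyclomatic_complexity(graph)   # 6 edges, 5 nodes -> V(G) = 3
```

Note that the graph has two predicate nodes, so V(G) = P + 1 = 3 agrees with the edge/node count.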
Graph theory algorithms can be applied to graph matrices (tabular representations of flow
graphs) to help in the analysis necessary to produce the basis set.
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different
classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops, where n is the maximum number of
allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n - 1, n, and n + 1 passes through the loop.
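A standard set of simple-loop tests exercises 0, 1, 2, m (with m < n), n - 1, n, and n + 1 passes; as a sketch, with m defaulting to n // 2 as an arbitrary "typical" value:

```python
# Sketch of simple-loop test values: pass counts to exercise for a loop
# with a maximum of n allowable passes.
def simple_loop_pass_counts(n, m=None):
    if m is None:
        m = n // 2        # an arbitrary typical value with m < n
    return [0, 1, 2, m, n - 1, n, n + 1]

counts = simple_loop_pass_counts(10)
```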
Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops since this
would result in a geometrically increasing number of test cases. One approach for nested
loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at
minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others.
If they are not independent (e.g. the loop counter for one is the loop counter for the other),
then the nested approach can be used.
Unstructured Loops
Whenever possible, this class of loops should be redesigned to reflect the use of structured
programming constructs.
Other white box techniques include:
1. Condition testing, which exercises the logical conditions in a program.
2. Data flow testing, which selects test paths according to the locations of definitions and
uses of variables in the program.
Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular
module of source code.
The idea is to write test cases for every non-trivial function or method in the module so that
each test case is separate from the others if possible. This type of testing is mostly done by
the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. It provides a written contract that the piece must satisfy. This isolated
testing provides four main benefits:
Encourages change
Unit testing allows the programmer to refactor code at a later date, and make sure the
module still works correctly (regression testing). This provides the benefit of encouraging
programmers to make changes to the code since it is easy for the programmer to check if
the piece is still working properly.
Simplifies Integration
Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a
bottom-up testing style approach. Testing the parts of a program first and then testing the
sum of its parts makes integration testing easier.
Documentation
Unit testing provides a sort of "living document" for the class being tested. Clients looking to
learn how to use the class can look at the unit tests to determine how to use the class to fit
their needs.
Separation of Interface from Implementation
Because some classes may have references to other classes, testing a class can frequently
spill over into testing another class. A common example of this is classes that depend on a
database; in order to test the class, the tester finds herself writing code that interacts with
the database. This is a mistake, because a unit test should never go outside of its own class
boundary. As a result, the software developer abstracts an interface around the database
connection, and then implements that interface with their own Mock Object. This results in
loosely coupled code, thus minimizing dependencies in the system.
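A minimal sketch of that approach; all class names here are invented for illustration:

```python
# Sketch of the mock-object pattern: the class under test depends on an
# abstracted store interface rather than a real database, so the unit test
# substitutes an in-memory fake and never crosses the class boundary.
class UserStore:
    """Interface abstracted around the database connection."""
    def lookup(self, user_id):
        raise NotImplementedError

class Greeter:
    def __init__(self, store):
        self.store = store          # dependency injected, not hard-wired

    def greet(self, user_id):
        name = self.store.lookup(user_id)
        return f"Hello, {name}!"

class MockUserStore(UserStore):
    """Mock object: canned data, no database required."""
    def lookup(self, user_id):
        return {42: "Ada"}.get(user_id, "stranger")

greeting = Greeter(MockUserStore()).greet(42)
```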
Limitations
It is important to realize that unit-testing will not catch every error in the program. By
definition, it only tests the functionality of the units themselves. Therefore, it will not catch
integration errors, performance problems and any other system-wide issues. In addition, it
may not be trivial to anticipate all special cases of input the program unit under study may
receive in reality. Unit testing is only effective if it is used in conjunction with other software
testing activities.
Requirement testing
Usage:
Objective:
• Security officer
• DBA
• Internal auditors
• Record retention
• Comptroller
How to Use
• These test conditions are generalized ones, which become test cases as the SDLC
progresses, until the system is fully operational.
• Test conditions are more effective when created from the user's requirements.
• If test conditions are created from documents, then any errors in those documents
will get incorporated into the test conditions, and testing will not be able to find
those errors.
• If test conditions are created from sources other than documents, error trapping
is effective.
• Functional checklist created.
When to Use
Example
• Creating test matrix to prove that system requirements as documented are the
requirements desired by the user.
• Creating checklist to verify that application complies to the organizational policies
and procedures.
Regression testing
Usage:
Objective:
How to Use
• Test cases that were used previously for the already tested segment are re-run to
ensure that the results of the segment tested currently and the results of the same
segment tested earlier are the same.
• Test automation is needed to carry out the test transactions (test condition
execution); otherwise the process is very time-consuming and tedious.
• In this kind of testing, cost/benefit should be carefully evaluated; otherwise the
effort spent on testing would be high and the payback minimal.
When to Use
• When there is a high risk that new changes may affect unchanged areas of the
application system.
• In the development process: regression testing should be carried out after the pre-
determined changes are incorporated in the application system.
• In the maintenance phase: regression testing should be carried out if there is a high
risk that loss may occur when changes are made to the system.
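The re-run described above can be automated very simply: keep the earlier results as a baseline and compare the current run against it. A minimal Python sketch, in which the `price_with_tax` function and the baseline values are hypothetical:

```python
# Hypothetical function under regression test.
def price_with_tax(amount, rate=0.08):
    return round(amount * (1 + rate), 2)

# Baseline captured when the segment was first tested and accepted.
BASELINE = {(100, 0.08): 108.0, (19.99, 0.08): 21.59, (0, 0.08): 0.0}

def run_regression(func, baseline):
    """Re-run every stored test case; report inputs whose results changed."""
    failures = {}
    for args, expected in baseline.items():
        actual = func(*args)
        if actual != expected:
            failures[args] = (expected, actual)
    return failures

# An empty dict means the current results match the earlier run.
mismatches = run_regression(price_with_tax, BASELINE)
```

After each predetermined change, re-running `run_regression` shows at a glance whether the unchanged behavior still holds.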
Example
Disadvantage
Error handling testing
Objective:
How to Use
When to Use
• Throughout SDLC.
• Impact from errors should be identified and should be corrected to reduce the errors
to acceptable level.
• Used to assist in error management process of system development and
maintenance.
Example
• Create a set of erroneous transactions and enter them into the application system,
then find out whether the system is able to identify the problems.
• Using iterative testing, enter transactions and trap errors. Correct them. Then enter
transactions with errors that were not present in the system earlier.
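The erroneous-transaction example can be sketched as feeding deliberately bad records to a validator and confirming every problem is identified. The transaction fields and rules below are invented for illustration:

```python
def validate_transaction(txn):
    """Return a list of problems found in one transaction record."""
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if not txn.get("account"):
        errors.append("missing account number")
    if txn.get("currency") not in {"USD", "EUR", "INR"}:
        errors.append("unknown currency")
    return errors

# A set of erroneous transactions entered into the system under test:
bad_txns = [
    {"amount": -5, "account": "A-1", "currency": "USD"},
    {"amount": 10, "account": "", "currency": "USD"},
    {"amount": 10, "account": "A-2", "currency": "XYZ"},
]
# Every erroneous transaction should be flagged with at least one error.
results = [validate_transaction(t) for t in bad_txns]
```

The iterative step is then: correct the flagged errors, and submit a fresh batch containing error types not trapped earlier.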
Manual support testing
Usage:
• It involves testing of all the functions performed by the people while preparing the
data and while using the data from the automated system.
Objective:
How to Use
• Testing people requires testing the interface between the people and the application
system.
When to Use
Example
• Provide input personnel with the type of information they would normally receive
from their customers and then have them transcribe that information and enter it in
the computer.
• Users can be provided a series of test conditions and then asked to respond to those
conditions. Conducted in this manner, manual support testing is like an examination
in which the users are asked to obtain the answer from the procedures and manuals
available to them.
Intersystem testing
Usage:
Objective:
• Determine that proper parameters and data are correctly passed between the
applications.
• Ensure that documentation for the involved systems is correct and accurate.
• Ensure that proper timing and coordination of functions exists between the
application systems.
How to Use
When to Use
Example
• Develop a test transaction set in one application and pass it to another system to
verify the processing.
• Enter test transactions in a live production environment and then use the integrated
test facility to check the processing from one system to another.
• Verify that new changes to the parameters in the systems being tested are
reflected in the documentation.
Disadvantage
Time-consuming and tedious if test automation is not done.
Control testing
Usage:
Objective:
How to Use
When to Use
Example
Parallel testing
Usage:
• To ensure that the processing of the new application (new version) is consistent with
the processing of the previous application version.
Objective:
How to Use
• The same input data should be run through two versions of the same application system.
• Parallel testing can be done with the whole system or with part of the system (a segment).
When to Use
Example
• Operating the new and old versions of a payroll system to determine that the paychecks
from both systems are reconcilable.
• Running the old version of the application to ensure that the functions of the old system
work correctly with respect to the problems encountered in the new system.
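Running identical input through two versions and reconciling the outputs, as in the payroll example, can be sketched as follows. Both payroll functions are hypothetical stand-ins for the old and new versions:

```python
def payroll_v1(hours, rate):
    # Old version: straight pay, overtime at 1.5x beyond 40 hours.
    base = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return round(base + overtime, 2)

def payroll_v2(hours, rate):
    # New version: same rules, refactored internals.
    overtime_hours = max(hours - 40, 0)
    return round((hours - overtime_hours) * rate + overtime_hours * rate * 1.5, 2)

def parallel_test(cases):
    """Run identical input through both versions; return mismatched paychecks."""
    return [c for c in cases if payroll_v1(*c) != payroll_v2(*c)]

cases = [(38, 20.0), (40, 20.0), (45, 20.0), (60, 15.5)]
mismatches = parallel_test(cases)  # an empty list means the versions reconcile
```

Any tuple returned in `mismatches` is an input whose paychecks from the two versions do not reconcile and needs investigation.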
Volume testing
Whichever title you choose (for us, volume test), here we are talking about realistically
exercising an application in order to measure the service delivered to users at different
levels of usage. We are particularly interested in its behavior when the maximum number of
users is concurrently active and when the database contains the greatest data volume.
The creation of a volume test environment requires considerable effort. It is essential that
the correct level of complexity exists in terms of the data within the database and the range
of transactions and data used by the scripted users, if the tests are to reliably reflect the
intended production environment. Once the test environment is built, it must be fully utilised.
Volume tests offer much more than simple service delivery measurement. The exercise
should seek to answer the following questions:
What service level can be guaranteed? How can it be specified and monitored?
Are changes in user behaviour likely? What impact will such changes have on resource
consumption and service delivery?
The purpose of volume testing is to find weaknesses in the system with respect to its
handling of large amounts of data during extended time periods.
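A crude illustration of the volume-testing idea, assuming an in-memory SQLite table stands in for the production database: load a large data volume, then check that a representative query still completes and returns correct results.

```python
import sqlite3
import time

def volume_test(row_count=100_000):
    """Load a large volume of rows, then time a representative query."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    # Populate the database with the greatest data volume under test.
    con.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((float(i % 500),) for i in range(row_count)),
    )
    start = time.perf_counter()
    total, = con.execute("SELECT COUNT(*) FROM orders").fetchone()
    elapsed = time.perf_counter() - start
    con.close()
    return total, elapsed

rows, seconds = volume_test()
```

A real volume test would of course use the production schema, realistic data complexity, and scripted concurrent users rather than a single query.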
Stress testing
The purpose of stress testing is to find defects in the system's capacity to handle large
numbers of transactions during peak periods. For example, a script might require users to
login and proceed with their daily activities while, at the same time, requiring that a series
of workstations emulating a large number of other systems are running recorded scripts
that add, update, or delete from the database.
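The peak-load scenario above, with many emulated workstations submitting transactions concurrently, can be sketched with threads. The in-memory transaction counter is a hypothetical stand-in for the system under stress:

```python
import threading

class TransactionCounter:
    """Stands in for the system under stress; counts processed transactions."""
    def __init__(self):
        self._lock = threading.Lock()
        self.processed = 0

    def handle(self, txn):
        with self._lock:  # serialize access, as a transactional store would
            self.processed += 1

def stress_test(workstations=20, txns_per_station=500):
    system = TransactionCounter()

    def station():
        # Each emulated workstation replays its recorded script.
        for i in range(txns_per_station):
            system.handle({"op": "add", "n": i})

    threads = [threading.Thread(target=station) for _ in range(workstations)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return system.processed  # should equal the total offered load

processed = stress_test()
```

If the processed count falls short of the offered load, transactions were lost under peak conditions, exactly the class of defect stress testing is meant to expose.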
Performance testing
According to Hamilton [10], the performance problems are most often the result of the
client or server being configured inappropriately.
The best strategy for improving client-sever performance is a three-step process [11]. First,
execute controlled performance tests that collect the data about volume, stress, and loading
tests. Second, analyze the collected data. Third, examine and tune the database queries
and, if necessary, provide temporary data storage on the client while the application is
executing.
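The three-step strategy can be made concrete: step one collects controlled timing data, step two analyzes it (here just min/average/max), leaving step three, tuning the queries, to the team. The `run_query` function below is a hypothetical stand-in for a client/server call:

```python
import statistics
import time

def run_query():
    # Hypothetical stand-in for a client/server database query.
    time.sleep(0.001)
    return "ok"

def collect_timings(samples=30):
    """Step 1: execute controlled performance tests and collect the data."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return timings

def analyze(timings):
    """Step 2: summarize the collected data before tuning (step 3)."""
    return {
        "min": min(timings),
        "avg": statistics.mean(timings),
        "max": max(timings),
    }

report = analyze(collect_timings())
```

A wide gap between the average and maximum times is often the first pointer toward the queries worth examining and tuning in step three.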
Testing tools
Win Runner
Introduction
Automatic Recovery
The Recovery Manager provides an easy-to-use wizard that guides you through the process
of defining a recovery scenario. You can specify one or more operations that enable the test
run to continue after an exception event occurs. This functionality is especially useful during
unattended test runs, when errors or crashes could interrupt the testing process until
manual intervention occurs.
Silent Installation
Now you can install WinRunner in an unattended mode using previously recorded installation
preferences. This feature is especially beneficial for those who use enterprise software
management products or any automated software distribution mechanisms.
WinRunner works with both TestDirector 6.0, which is client/server-based, and TestDirector
7.x, which is Web-based. When reporting defects from WinRunner’s test results window,
basic information about the test and any checkpoints can be automatically populated in
TestDirector’s defect form. WinRunner now supports version control, which enables updating
and revising test scripts while maintaining old versions of each test.
Support for Citrix and Microsoft Terminal Servers makes it possible to open several window
clients and run WinRunner on each client as a single user. Also, this can be used with
LoadRunner to run multiple WinRunner Vusers.
WinRunner 7.5 includes support for Internet Explorer 6.x and Netscape 6.x, Windows XP
and Sybase's PowerBuilder 8, in addition to 30+ environments already supported by
WinRunner 7.
WinRunner provides the most powerful, productive and cost-effective solution for verifying
enterprise application functionality. For more information on WinRunner, contact a Mercury
Interactive local representative for pricing, evaluation, and distribution information.
The Function Generator presents a quick and error-free way to design tests and enhance
scripts without any programming knowledge. Testers can simply point at a GUI object, and
WinRunner will examine it, determine its class and suggest an appropriate function to be
used.
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database,
allowing testers to compare expected and actual outcomes and identify potential problems
with numerous GUI objects and their functionality.
Built-in Database Verification confirms values stored in the database and ensures
transaction accuracy and the data integrity of records that have been updated, deleted and
added.
WinRunner’s GUI Spy automatically identifies, records and displays the properties of
standard GUI objects, ActiveX controls, as well as Java objects and methods. This ensures
that every object in the user interface is recognized by the script and can be tested.
The GUI map provides a centralized object repository, allowing testers to verify and modify
any tested object. These changes are then automatically propagated to all appropriate
scripts, eliminating the need to build new scripts each time the application is modified.
WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In
addition, it provides targeted solutions for such leading ERP/CRM applications as SAP,
Siebel, PeopleSoft and a number of others.
• Start->Program Files->WinRunner->WinRunner
• Select the Rapid Test Script Wizard (or) Create->Rapid Test Script Wizard
• Click the Next button of the welcome to script wizard screen
• Select the hand icon, click on the application window and click the Next button
• Select the tests and click the Next button
• Select navigation controls and click the Next button
• Set the learning flow (Express or Comprehensive) and click the Learn button
• Select start application Yes or No, then click the Next button
• Save the startup script and GUI map files, click the Next button
• Save the selected tests, click the Next button
• Click the OK button
• The script will be generated; then run the scripts: Run->Run from Top
• Find the results of each script and select Tools->Text Report in the WinRunner test results.
• Open an application.
• Select Tools->GUI Map Configuration; a window pops up.
• Click the Add button; click on the hand icon.
• Click on the object which is to be configured. A user-defined class for that object is
added to the list.
• Select the user-defined class you added and press the 'Configure' button.
• Mapped to Class: (select a corresponding standard class from the combo box).
• You can move the properties from Available Properties to Learned Properties by
selecting the Insert button.
• Select the selector and recording methods.
• Click the OK button.
• Now you will observe WinRunner identifying the configured objects.
For object/window.
Synchronization Point
Without Synchronization:
With Synchronization:
• Open a "Calc" application in two windows (assuming these are two versions)
• Create->Get Text->For Object/Window
• Click on some button in one window
• Stop recording
• Repeat steps 1 to 4 to capture the text of the same object from the other "Calc" application.
• Add the following TSL (note: change "text" to text1 & text2 for each statement):
if (text1 == text2) report_msg("correct " & text1); else report_msg("incorrect " & text2);
• Run & see the results
Using GUI-Spy:
Using the GUI Spy, you can view and verify the properties of any GUI object in the selected
application.
• Tools->GUI Spy…
• Select Spy On (select Object or Window)
• Select the hand icon button
• Point to the object or window and press Ctrl_L + F3.
• You can view and verify the properties.
Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define
the coordinates of that object, and assign it a logical name
Using the GUI Map Editor, you can view and modify the properties of any GUI object on
selected application. To modify an object’s logical name in a GUI map file
• Start->Programs->WinRunner->Sample Applications->Flight 1A
• Open the Flight Reservation application
• Go to the WinRunner window
• Create->Start Recording
• Select File->New Order, insert the fields; click Insert Order
• Tools->Data Table; enter different customer names in one row and tickets in
another row.
• By default, the two column names are Noname1 and Noname2.
• Tools->Data Driver Wizard
• Click the Next button and select the data table
• Select Parameterize the test; select the Line by Line check box
• Click the Next button
• Parameterize each specific value with the column names of the table; repeat for all
• Finally, click the Finish button.
• Run->Run from Top
• View the results.
Manual Merge
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open
GUI maps will be closed and all unsaved changes will be discarded; click the 'OK'
button.
• Select Manual Merge. Manual Merge enables you to manually add GUI objects
from the source files to the target file.
• To specify the target GUI map file, click the browse button and select a GUI map file.
• To specify the source GUI map file, click the add button and select a source GUI map file.
• Click the 'OK' button.
• The GUI Map File Manual Merge tool opens. Select objects and move them from the
source file to the target file.
• Close the GUI Map File Manual Merge tool.
Auto Merge
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open
GUI maps will be closed and all unsaved changes will be discarded; click the 'OK'
button.
• Select Auto Merge as the merge type. With Auto Merge, the source GUI map files
are merged automatically if there are no conflicts.
• To specify the target GUI map file, click the browse button and select a GUI map file.
• To specify the source GUI map file, click the add button and select a source GUI
map file.
• Click the 'OK' button. A message confirms the merge.
• db_connect("query1", "DSN=Flight32");
• db_execute_query("query1", "select * from Orders", rec);
• db_get_field_value("query1", "#0", "#0");
• db_get_headers("query1", field_num, headers);
• db_get_row("query1", 5, row_con);
• db_write_records("query1", "c:\\str.txt", TRUE, 10);
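The TSL calls above follow the familiar connect/query/fetch pattern of any database API. As a rough analogue, here is the same sequence in Python using the standard-library sqlite3 module; the Orders table and its rows are invented for illustration:

```python
import sqlite3

# Connect (TSL: db_connect) -- an in-memory database stands in for the DSN.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (order_id INTEGER, customer TEXT)")
con.execute("INSERT INTO Orders VALUES (1, 'Smith'), (2, 'Jones')")

# Execute a query (TSL: db_execute_query).
cur = con.execute("SELECT * FROM Orders")

# Column headers (TSL: db_get_headers).
headers = [col[0] for col in cur.description]

# Fetch rows and individual field values (TSL: db_get_row / db_get_field_value).
rows = cur.fetchall()
first_field = rows[0][0]
```

Built-in database verification in WinRunner automates exactly this kind of fetch-and-compare against expected values.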
1. web_browser_invoke ( browser, site );
// invokes the browser and opens a specified site. browser The name of the browser (IE or
NETSCAPE). site The address of the site.
2. web_cursor_to_image ( image, x, y );
// moves the cursor to an image on a page. image The logical name of the image. x,y The
x- and y-coordinates of the mouse pointer when moved to an image
3. web_cursor_to_label ( label, x, y );
// moves the cursor to a label on a page. label The name of the label. x,y The x- and y-
coordinates of the mouse pointer when moved to a label.
4.web_cursor_to_link ( link, x, y );
// moves the cursor to a link on a page. link The name of the link. x,y The x- and y-
coordinates of the mouse pointer when moved to a link.
5.web_cursor_to_obj ( object, x, y );
// moves the cursor to an object on a page. object The name of the object. x,y The x- and
y-coordinates of the mouse pointer when moved to an object.
6. web_event ( object, event_name, x, y );
// runs an event on a specified object. object The logical name of the recorded object.
event_name The name of an event handler. x,y The x- and y-coordinates of the mouse
pointer when moved to an object.
7.web_file_browse ( object );
// sets the text value in a file-type object. object A file-type object. Value A text string.
13. web_get_run_event_mode ( out_mode );
// returns the current run mode. out_mode The run mode in use. If the mode is FALSE, the
default parameter, the test runs by mouse operations. If TRUE is specified, the test runs by
events.
14. web_get_timeout ( out_timeout );
// returns the maximum time that WinRunner waits for a response from the web. out_timeout
The maximum interval in seconds.
15.web_image_click ( image, x, y );
// clicks a hypergraphic link or an image. image The logical name of the image. x,y The x-
and y-coordinates of the mouse pointer when clicked on a hypergraphic link or an image.
// checks whether a URL name of a link is valid (not broken). name The logical name of a
link. valid The status of the link may be valid (TRUE) or invalid (FALSE)
object The logical name of an object. x,y The x- and y-coordinates of the mouse pointer
when clicked on an object.
26. web_restore_event_default ( );
// resets all events to their default settings.
27. web_set_event ( class, event_name, event_type, event_status );
// sets the maximum time WinRunner waits for a response from the web.
30. web_set_tooltip_color ( fg_color, bg_color );
// waits for the navigation of a frame to be completed.
32. web_url_valid ( URL, valid );
Load Runner
Load Runner - Introduction
The Virtual User Generator allows us to determine what actions we would like our Vusers, or
virtual users, to perform within the application. We create scripts that generate a series of
actions, such as logging on, navigating through the application, and exiting the program.
The Controller takes the scripts that we have made and runs them through a schedule that
we set up. We tell the Controller how many users to activate, when to activate them, and
how to group the users and keep track of them.
The Results and Analysis program gives us all the results of the load test in various forms. It
allows us to see summaries of data, as well as the details of the load test for pinpointing
problems or bottlenecks.
This powerful feature set enables LoadRunner to quickly point out the effect of the wide area
network (WAN) on application reliability, performance, and response time. Provided through
technology from Shunra Software, this WAN emulation capability introduces testing for
bandwidth limits, latency, network errors, and more to LoadRunner.
XML Support
With LoadRunner's XML support, you can quickly and easily view and manipulate XML data
within the test scripts
How it Works
LoadRunner Hosted Virtual Users complements in-house load testing tools and allows
companies to load test their Web-based applications from outside the firewall using Mercury
Interactive's infrastructure. Customers begin by using LoadRunner Hosted Virtual Users'
simple Web interface to schedule tests and reserve machines on Mercury Interactive's load
farm. At the scheduled time, they select the recorded scripts to be uploaded and start
running the tests on the host machines*. These scripts will emulate the behavior of real
users on the application and generate load on the system.
Through LoadRunner Hosted Virtual Users’ Web interface, testers can view real-time
performance metrics, such as hits per second, throughput, transaction response times and
hardware resource usage (e.g., CPU and memory levels). They also can view performance
metrics gathered by Mercury Interactive’s server monitors and correlate this with end-user
performance data to diagnose bottlenecks on the back end.
The interface to LoadRunner Hosted Virtual Users enables test teams to control the load test
and view tests in progress, no matter their locations. When the test is complete, testers can
analyze results online, as well as download data for further analysis.
*Customers who do not own LoadRunner can download the VUGen component for free to
record their scripts. Likewise, the LoadRunner analysis pack can be downloaded for free.
LoadRunner Hosted Virtual Users gives testers complete control of the testing process while
providing critical real-time performance information, as well as views of the individual
machines generating the load.
At any time in the application lifecycle, organizations can use LoadRunner Hosted Virtual
Users to verify performance and fine-tune systems for greater efficiency, scalability and
availability. The application under test only needs to be accessible via the Web.
Testing groups create the scripts, run the tests and perform their own analyses. They can
perform testing at their convenience and easily access all performance data to quickly
diagnose performance problems.
With LoadRunner Hosted Virtual Users, organizations do not need to invest in additional
hardware, software or bandwidth to increase their testing coverage. Mercury Interactive’s
load testing infrastructure is available 24x7 and consists of load farms located worldwide. As
a result, organizations can generate real-user loads over the Internet to stress their Web-
based applications at any time, from anywhere.
To minimize the impact of the monitoring on the system under test, LoadRunner enables IT
groups to extract data without having to install intrusive capture agents on the monitored
servers. As a result, LoadRunner can be used to monitor the performance of the servers
regardless of the hardware and operating system on which they run. Setup and installation
of the monitors therefore is trivial. Since all the monitoring information is sampled at a low
frequency (typically 1 to 5 seconds) there is only a negligible effect on the servers.
Supported Monitors
Astra LoadTest and LoadRunner support monitors for the following components:
Client-side Monitors
End-to-end transaction monitors - Provide end-user response times, hits per second,
transactions per second
The Hits per Second graph shows the number of hits on the Web server (y-axis) as a
function of the elapsed time in the scenario (x-axis). This graph can display the whole
scenario, or the last 60, 180, 600 or 3600 seconds. You can compare this graph to the
Transaction Response Time graph to see how the number of hits affects transaction
performance.
Throughput
The Throughput graph shows the amount of throughput on the Web server (y-axis) during
each second of the scenario run (x-axis). Throughput is measured in kilobytes and
represents the amount of data that the Vusers received from the server at any given
second. You can compare this graph to the Transaction Response Time graph to see how the
throughput affects transaction performance.
HTTP Responses
The HTTP Responses per Second graph shows the number of HTTP status codes, which
indicate the status of HTTP requests (for example, "the request was successful" or "the
page was not found"), returned from the Web server during each second of the scenario
run (x-axis), grouped by status code.
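The grouping this graph performs, HTTP status codes bucketed per second of the run, can be sketched from raw (elapsed-second, status) samples. The sample data below is invented:

```python
from collections import Counter, defaultdict

def responses_per_second(samples):
    """Group (elapsed_second, http_status) samples by second, then by status code."""
    buckets = defaultdict(Counter)
    for second, status in samples:
        buckets[second][status] += 1
    return buckets

# Hypothetical samples: mostly 200 OK, one 404 Not Found in second 1.
samples = [(0, 200), (0, 200), (1, 200), (1, 404), (1, 200), (2, 500)]
graph = responses_per_second(samples)
```

Plotting each status code's count against the elapsed second reproduces the grouped view the graph provides.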
• Pages Downloaded per Second The Pages Downloaded per Second graph shows the
number of Web pages downloaded from the server during each second of the
scenario run. This graph helps you evaluate the amount of load Vusers generate, in
terms of the number of pages downloaded. Like throughput, downloaded pages per
second is a representation of the amount of data that the Vusers received from the
server at any given second.
The User-Defined Data Points graph allows you to add your own measurements by defining a
data point function in your Vuser script. Data point information is gathered each time the
script executes the function or step. The User-Defined Data Point graph shows the average
value of the data points during the scenario run. The x-axis represents the number of
seconds elapsed since the start time of the run. The y-axis displays the average values of
the recorded data point statements.
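The per-second averaging described above can be sketched directly: each script iteration records a named value, and the graph plots the average per elapsed second. The sample values below are invented:

```python
from collections import defaultdict

def average_data_points(points):
    """points: (elapsed_second, value) pairs recorded by the Vuser script."""
    sums = defaultdict(lambda: [0.0, 0])
    for second, value in points:
        sums[second][0] += value
        sums[second][1] += 1
    return {second: total / count for second, (total, count) in sums.items()}

points = [(0, 10.0), (0, 20.0), (1, 30.0)]
averages = average_data_points(points)
```

The resulting mapping of elapsed second to average value is exactly what the y-axis of the graph displays.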
Transaction Monitors
• Transaction Response Time The Transaction Response Time graph shows the
response time of transactions in seconds (y-axis) as a function of the elapsed time in
the scenario (x-axis).
• Transaction per Second (Passed) The Transaction per Second (Passed) graph
shows the number of successful transactions performed per second (y-axis) as a
function of the elapsed time in the scenario (x-axis).
• Transaction per Second (Failed) The Transaction per Second (Failed) graph shows
the number of failed transactions per second (y-axis) as a function of the elapsed
time in the scenario (x-axis).
The monitor's Runtime graph provides information about the status of the Vusers running in
the current scenario on all host machines. The graph shows the number of running Vusers,
while the information in the legend indicates the number of Vusers in each state.
The Status field of each Vuser displays the current status of the Vuser. The following
table describes each Vuser status.
• Running The total number of Vusers currently running on all load generators.
• Ready The number of Vusers that completed the initialization section of the script
and are ready to run.
• Finished The number of Vusers that have finished running. This includes both
Vusers that passed and failed.
• Error The number of Vusers whose execution generated an error.
• DNS Resolution Displays the amount of time needed to resolve the DNS name to
an IP address, using the closest DNS server. The DNS Lookup measurement is a
good indicator of problems in DNS resolution, or problems with the DNS server.
• Connection Time Displays the amount of time needed to establish an initial
connection with the Web server hosting the specified URL. The connection
measurement is a good indicator of problems along the network. It also indicates
whether the server is responsive to requests.
• Time To First Buffer Displays the amount of time that passes from the initial HTTP
request (usually GET) until the first buffer is successfully received back from the Web
server. The first buffer measurement is a good indicator of Web server delay as well
as network latency.
• Server and Network Time The Time to First Buffer Breakdown graph also displays
each Web page component's relative server and network time (in seconds) for the
period of time until the first buffer is successfully received back from the Web server.
If the download time for a component is high, you can use this graph to determine
whether the problem is server- or network-related.
• Receive Time Displays the amount of time that passes until the last byte arrives
from the server and the downloading is complete. The Receive measurement is a
good indicator of network quality (look at the time/size ratio to calculate receive
rate).
• Client Time Displays the average amount of time that passes while a request is
delayed on the client machine due to browser think time or other client-related
delays.
• Error Time Displays the average amount of time that passes from the moment an
HTTP request is sent until the moment an error message (HTTP errors only) is
returned.
• SSL Handshaking Time Displays the amount of time taken to establish an SSL
connection (includes the client hello, server hello, client public key transfer, server
certificate transfer, and other stages). The SSL Handshaking measurement is only
applicable for HTTPS communications.
• FTP Authentication Displays the time taken to authenticate the client. With FTP, a
server must authenticate a client before it starts processing the client's commands.
The FTP Authentication measurement is only applicable for FTP protocol
communications.
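Two of the phases listed above, DNS resolution and initial connection time, can be measured directly with standard-library sockets. This is a rough analogue of those metrics, not LoadRunner's implementation; the local listener exists only to make the sketch self-contained:

```python
import socket
import time

def connection_breakdown(host, port):
    """Measure DNS-resolution and TCP-connection time for one host:port."""
    t0 = time.perf_counter()
    # DNS lookup (analogue of the DNS Resolution metric); force IPv4.
    addr = socket.getaddrinfo(host, port, family=socket.AF_INET,
                              proto=socket.IPPROTO_TCP)[0][4]
    dns_time = time.perf_counter() - t0

    t1 = time.perf_counter()
    # Initial TCP connection (analogue of the Connection Time metric).
    with socket.create_connection(addr, timeout=5):
        connect_time = time.perf_counter() - t1
    return {"dns": dns_time, "connect": connect_time}

# Demonstrate against a local listener so no external network is needed.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
timings = connection_breakdown("localhost", server.getsockname()[1])
server.close()
```

Measuring later phases (time to first buffer, receive time) would additionally require sending an HTTP request and timing the arrival of the first and last response bytes.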
Server Monitors
ASP Server
Cache
HTTP Content Index
Internet Information Service Global
Logical Disk
Memory
Physical Disk
Processor
Server
Cache
• Async Copy Reads/Sec - The frequency of reads from cache pages that involve a
memory copy of the data from the cache to the application's buffer. The application
will regain control immediately, even if the disk must be accessed to retrieve the
page.
• Async Data Maps/Sec - The frequency that an application uses a file system, such as
NTFS or HPFS, to map a page of a file into the cache to read because it does not
wish to wait for the cache to retrieve the page if it is not in main memory.
• Async Fast Reads/Sec - The frequency of reads from cache pages that bypass the
installed file system and retrieve the data directly from the cache. Normally, file I/O
requests will invoke the appropriate file system to retrieve data from a file. This
path, however, permits direct retrieval of cache data without file system involvement,
as long as the data is in the cache. Even if the data is not in the cache, one
invocation of the file system is avoided. If the data is not in the cache, the request
(application program call) will not wait until the data has been retrieved from disk,
but will get control immediately.
• Fast Reads/Sec - The frequency of reads from cache pages that bypass the installed
file system and retrieve the data directly from the cache. Normally, file I/O requests
invoke the appropriate file system to retrieve data from a file. This path, however,
permits direct retrieval of cache data without file system involvement if the data is in
the cache. Even if the data is not in the cache, one invocation of the file system is
avoided.
• Lazy Write Flushes/Sec - The frequency with which the cache's Lazy Write thread has
written to disk. Lazy Writing is the process of updating the disk after the page has
been changed in memory. In this way, the application making the change to the file
does not have to wait for the disk write to be completed before proceeding. More
than one page can be transferred on each write operation.
Logical Disk
• % Disk Read Time - The percentage of elapsed time that the selected disk drive was
busy servicing read requests.
• % Disk Time - The percentage of elapsed time that the selected disk drive was busy
servicing read or write requests.
• % Disk Write Time - The percentage of elapsed time that the selected disk drive was
busy servicing write requests.
• % Free Space - The ratio of the free space available on the logical disk unit to the
total usable space provided by the selected logical disk drive
• Avg. Disk Bytes/Read - The average number of bytes transferred from the disk
during read operations.
• Avg. Disk Bytes/Transfer - The average number of bytes transferred to or from the
disk during write or read operations.
Memory
• % Committed Bytes in Use - The ratio of the Committed Bytes to the Commit Limit.
This represents the amount of available virtual memory in use. Note that the Commit
Limit may change if the paging file is extended. This is an instantaneous value, not
an average.
• Available Bytes - Displays the size of the virtual memory currently on the Zeroed,
Free and Standby lists. Zeroed and Free memory is ready for use, with Zeroed
memory cleared to zeros. Standby memory is memory removed from a process's
Working Set but still available. Notice that this is an instantaneous count, not an
average over the time interval.
• Cache Bytes - Measures the number of bytes currently in use by the system cache.
The system cache is used to buffer data retrieved from disk or LAN. In addition, the
system cache uses memory not in use by active processes in the computer.
• Cache Bytes Peak - Measures the maximum number of bytes used by the system
cache. The system cache is used to buffer data retrieved from disk or LAN. In
addition, the system cache uses memory not in use by active processes in the
computer.
• Cache Faults/Sec - Cache faults occur whenever the cache manager does not find a
file's page in the immediate cache and must ask the memory manager to locate the
page elsewhere in memory or on the disk, so that it can be loaded into the
immediate cache.
Physical Disk
• % Disk Read Time - The percentage of elapsed time that the selected disk drive is
busy servicing read requests.
• % Disk Time - The percentage of elapsed time that the selected disk drive is busy
servicing read or write requests.
• % Disk Write Time - The percentage of elapsed time that the selected disk drive is
busy servicing write requests.
• Avg. Disk Bytes/Read - The average number of bytes transferred from the disk
during read operations.
• Avg. Disk Bytes/Transfer - The average number of bytes transferred to or from the
disk during write or read operations.
• Avg. Disk Bytes/Write - The average number of bytes transferred to the disk during
write operations.
• Avg. Disk Queue Length - The average number of both read and write requests that
were queued for the selected disk during the sample interval.
Processor
• % DPC Time - The percentage of elapsed time that the Processor spent in Deferred
Procedure Calls (DPC). When a hardware device interrupts the Processor, the
Interrupt Handler may elect to execute the majority of its work in a DPC. DPCs run at
lower priority than Interrupts. This counter can help determine the source of
excessive time being spent in Privileged Mode.
• % Interrupt Time - The percentage of elapsed time that the Processor spent handling
hardware Interrupts. When a hardware device interrupts the Processor, the Interrupt
Handler will execute to handle the condition, usually by signaling I/O completion and
possibly issuing another pending I/O request. Some of this work may be done in a
DPC (see % DPC Time.)
• % Privileged Time - The percentage of processor time spent in Privileged Mode in
non-idle threads. The Windows NT service layer, the Executive routines, and the
Windows NT Kernel execute in Privileged Mode. Device drivers for most devices other
than graphics adapters and printers also execute in Privileged Mode. Unlike some
early operating systems, Windows NT uses process boundaries for subsystem
protection in addition to the traditional protection of User and Privileged modes.
• % Processor Time - Processor Time is expressed as a percentage of the elapsed time
that a processor is busy executing a non-idle thread. It can be viewed as the fraction
of the time spent doing useful work. Each processor is assigned an idle thread in the
idle process that consumes those unproductive processor cycles not used by any
other threads.
• % User Time - The percentage of processor time spent in User Mode in non-idle
threads. All application code and subsystem code execute in User Mode. The graphics
engine, graphics device drivers, printer device drivers and the window manager also
execute in User Mode. Code executing in User Mode cannot damage the integrity of
the Windows NT Executive, Kernel, and device drivers. Unlike some early operating
systems, Windows NT uses process boundaries for subsystem protection in addition
to the traditional protection of User and Privileged modes.
Server
• Blocking Requests Rejected - The number of times the server has rejected blocking
Server Message Blocks (SMBs) due to an insufficient count of free work items. This
may indicate whether the MaxWorkItems or MinFreeWorkItems server parameters
need tuning.
• Bytes Received/Sec - The number of bytes the server has received from the network.
This value indicates how busy the server is.
• Bytes Total/Sec - The number of bytes the server has sent to and received from the
network. This value provides an overall indication of how busy the server is.
• Bytes Transmitted/Sec - The number of bytes the server has sent on the network.
This value indicates how busy the server is.
• Context Blocks Queued/Sec - The rate at which work context blocks had to be placed
on the server's FSP queue to await server action.
• Errors Access Permissions - The number of times file opens on behalf of clients have
failed with STATUS_ACCESS_DENIED. Can indicate whether somebody is randomly
attempting to access files in hopes of accessing data that was not properly protected.
• In the Host window, select Host -> Details. On the Vuser Limits tab, select the
GUI-WinRunner check box. On the WinRunner tab, set the path:
Program Files -> WinRunner -> dat -> wrun.ini
TestDirector
Introduction
TestDirector, the industry’s first global test management solution, helps organizations deploy
high-quality applications more quickly and effectively. Its four modules (Requirements,
Test Plan, Test Lab, and Defects) are seamlessly integrated, allowing for a smooth information
flow between various testing stages. The completely Web-enabled TestDirector supports
high levels of communication and collaboration among distributed testing teams, driving a
more effective, efficient global application-testing process.
The Site Administrator includes tabs for managing projects, adding users and defining user
properties, monitoring connected users, monitoring licenses and monitoring TestDirector
server information.
Domain Management
TestDirector projects are now grouped by domain. A domain contains a group of related
TestDirector projects, and assists you in organizing and managing a large number of
projects.
Additional standard report types and graphs have been added, and the user interface is
richer in functionality. The new format enables you to customize more features.
Version Control
Version control enables you to keep track of the changes you make to the testing
information in your TestDirector project. You can use your version control database for
tracking manual, WinRunner and QuickTest Professional tests in the test plan tree and test
grid.
Collaboration Module
With the new Advanced Reports Add-in, TestDirector users are able to maximize the value of
their testing project information by generating customizable status and progress reports.
The Advanced Reports Add-in offers the flexibility to create custom report configurations
and layouts, unlimited ways to aggregate and compare data, and the ability to generate
cross-project analysis reports.
The new traceability feature automatically tracks changes to testing process entities, such
as requirements or tests, and notifies the user via a flag or e-mail. For example, when the
requirement changes, the associated test is flagged and the tester is notified that the test
may need to be reviewed to reflect the requirement changes.
The graphical display enables you to analyze the requirements according to test coverage
status and view associated tests - grouped according to test status.
Hierarchical test sets provide the ability to better organize your test run process by grouping
test sets into folders.
The addition of the script editor to all modules enables organizations to customize
TestDirector to follow and enforce any methodology and best practices.
Improved Customization
With a greater number of available user fields, the ability to add memo fields, and the
ability to create input masks, users can customize their TestDirector projects to capture
any data required by their testing process. A new rich edit option adds color and
formatting options to all memo fields.
Testers, developers and business analysts can participate in and contribute to the testing
process by working seamlessly across geographic and organizational boundaries.
TestDirector integrates easily with industry-standard databases such as SQL Server, Oracle,
Access and Sybase.
TestDirector can import requirements and test plans from Microsoft Office, preserving your
investment and accelerating your testing process.
TestDirector stores and runs both manual and automated tests, and can help jumpstart a
user’s automation project by converting manual tests to automated test scripts.
TestDirector's TestLab manager accelerates the test execution cycles by scheduling and
running tests automatically—unattended, even overnight. The results are reported into
TestDirector’s central repository, creating an accurate audit trail for analysis.
TestDirector allows testers to run tests on their local machines and then report the results to
the repository that resides on a remote server.
Documented COM API allows TestDirector to be integrated both with internal tools (e.g.,
WinRunner and LoadRunner) and external third-party lifecycle applications.
TestDirector controls the information flow in a structured and organized manner. It defines
the role of each tester in the process and sets the appropriate permissions to ensure
information integrity.
TestDirector's integrated graphs and reports help analyze application readiness at any point
in the testing process. Using information about requirements coverage, planning progress,
run schedules or defect statistics, managers are able to make informed decisions on
whether the application is ready to go live.
TestDirector offers a defect tracking process that can identify similar defects in a database.
TestDirector features a variety of customizable graphs and reports that provide a snapshot
of the process at any time during testing. You can save your favorite views to have instant
access to relevant project information.
TestDirector helps you make informed decisions about application readiness through dozens
of reports and analysis features.
Using TestDirector's Web interface, testers, developers and business analysts can participate
in and contribute to the testing process by collaborating across geographic and
organizational boundaries.
TestDirector links requirements to test cases, and test cases to issues, to ensure traceability
throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is
notified of the change.
By providing a central repository for all testing assets, TestDirector facilitates the adoption
of a more consistent testing process, which can be repeated throughout the application
lifecycle or shared across multiple applications or lines of business (LOB).
Testing Process
The test management process is the main principle behind Mercury Interactive's
TestDirector. It is the first tool to capture the entire test management process—
requirements management, test planning, test execution and defect management—in one
powerful, scalable and flexible solution.
Managing Requirements
Requirements are what the users or the system needs. Requirements management,
however, is a structured process for gathering, organizing, documenting and managing the
requirements throughout the project lifecycle. Too often, requirements are neglected during
the testing effort, leading to a chaotic process of fixing what you can and accepting that
certain functionality will not be verified. In many organizations, requirements are
maintained in Excel or Word documents, which makes it difficult for team members to share
information and to make frequent revisions and changes.
TestDirector supports requirements-based testing and provides the testing team with a
clear, concise and functional blueprint for developing test cases. Requirements are linked to
tests—that is, when the test passes or fails, this information is reflected in the requirement
records. You can also generate a test based on a functional requirement and instantly create
a link between the requirement, the relevant test and any defects that are uncovered during
the test run.
Test Planning
Based on the requirements, testers can start building the test plan and designing the actual
tests. Today, organizations no longer wait to start testing at the end of the development
stage, before implementation. Instead, testing and development begin simultaneously. This
parallel approach to test planning and application design ensures that testers build a
complete set of tests that cover every function the system is designed to perform.
TestDirector provides a centralized approach to test design, which is invaluable for gathering
input from different members of the testing team and providing a central reference point for
all of your future testing efforts. In the Test Plan module, you can design tests—manual and
automated—document the testing procedures and create quick graphs and reports to help
measure the progress of the test planning effort.
Running Tests
After you have addressed the test design and development issues and built the test plan,
your testing team is ready to start running tests.
TestDirector can help configure the test environment and determine which tests will run on
which machines. Most applications must be tested on different operating systems, different
browser versions or other configurations. In TestDirector's Test Lab, testers can set up
groups of machines to most efficiently use their lab resources.
TestDirector can also schedule automated tests, which saves testers time by running
multiple tests simultaneously across multiple machines on the network. Tests with
TestDirector can be scheduled to run unattended, overnight, or when demand on the system
from other tasks is lowest. For both manual and automated tests, TestDirector can keep a
complete history of all test runs. By using this audit trail, testers can easily trace changes to
tests and test runs.
Managing Defects
The keys to creating a good defect management process are setting up the defect workflow
and assigning permission rules. With TestDirector, you can clearly define how the lifecycle of
a defect should progress, who has the authority to open a new defect, who can change a
defect's status to "fixed" and under which conditions the defect can be officially closed.
TestDirector will also help you maintain a complete history and audit trail throughout the
defect lifecycle.
Managers often decide whether the application is ready to go live based on defect analysis.
By analyzing the defect statistics in TestDirector, you can take a snapshot of the application
under test and see exactly how many defects you currently have, their status, severity,
priority, age, etc. Because TestDirector is completely Web-based, different members of the
team can have instant access to defect information, greatly improving communication in
your organization and ensuring everyone is up to date on the status of the application.
Silk Test
Introduction
Silk Test is a tool specifically designed for doing regression and functionality
testing. It is developed by Segue Software Inc. Silk Test is the industry's leading functional
testing product for e-business applications, whether Windows-based, Web, Java, or
traditional client/server-based. Silk Test also offers test planning, management, direct database access
and validation, the flexible and robust 4Test scripting language, a built-in recovery system
for unattended testing, and the ability to test across multiple platforms, browsers and
technologies.
There are two ways to create a testcase:
1. Use the Record Testcase command to record actions and verification steps as you
navigate through the application.
2. Write the testcase manually using the Visual 4Test scripting language.
1. Record Testcase
The Record Testcase command is used to record actions and verification steps as you
navigate through the application. Tests are recorded in an object-oriented language called
Visual 4Test. The recorded test reads like a logical trace of all of the steps that were
completed by the user. The Silk Test point-and-click verification system allows you to record
a verification step by selecting from a list of properties that are appropriate for the type of
object being tested. For example, you can verify the text that is stored in a text field.
We can write tests that are capable of accomplishing many variations on a test. The key
here is re-use. A test case can be designed to take parameters including input data and
expected results. This "data-driven" testcase is really an instance of a class of test cases
that performs certain steps to drive and verify the application-under-test. Each instance
varies by the data that it carries. Since far fewer tests are written with this approach,
changes in the GUI will result in reduced effort in updating tests. A data-driven test design
also allows for the externalization of testcase data and makes it possible to divide the
responsibilities for developing testing requirements and for developing test automation. For
example, it may be that a group of domain experts create the Testplan Detail while another
group of test engineers develop tests to satisfy those requirements.
In a script file, an automated testcase ideally addresses one test requirement. Specifically, a
4Test function that begins with the testcase keyword and contains a sequence of 4Test
statements. It drives an application to the state to be tested, verifies that the application
works as expected, and returns the application to its base state.
A script file is a file that contains one or more related testcases. A script file has a .t
extension, such as find.t
The Silk Test host software is the program you use to develop, edit, compile, run and debug
your 4Test scripts and test plans. This manual refers to the system that runs this program
as the host machine or the Silk Test machine.
The Agent
The 4Test Agent is the software process that translates the commands in your 4Test scripts
into GUI-specific commands. In other words, it is the Agent that actually drives and
monitors the application you are testing. One Agent can run locally on the host machine. In
a networked environment, any number of Agents can run on remote machines. This manual
refers to the systems that run remote Agents as target machines. In a client/server
environment, Silk Test drives the client application by means of an Agent process running on
each application's machine. The application then drives the server just as it always does.
Silk Test is also capable of driving the GUI belonging to a server or of directly driving a
server database by running scripts that submit SQL statements to the database. These
methods of directly manipulating the server application are intended to support testing in
which the client application drives the server.
Limitations of Silk Test:
• SilkTest may not recognize some objects in a window / page due to technical
reasons.
• SilkTest may not recognize some window frames.
• The 'tag' value may change frequently.
• Sometimes it will be difficult to activate some windows.
• It may be necessary to make some modifications if testing is shifted to another
browser / operating system.
• In Web-based applications, SilkTest will sometimes treat links as simple text.
System Requirements :
The minimum requirements a system needs to run Silk Test are given below:
Supported Environments:
The Automated Testing Process
Creating a testplan
If the testplan editor is used, the automated testing process is started by creating a
testplan. A basic testplan is structured as a hierarchical outline and contains:
• Descriptions of individual tests and groups of tests. As many levels of description as
needed can be used.
• Statements that link the test descriptions in the plan to the 4Test routines, called
testcases, that accomplish the actual work of testing.
Next, record a test frame, which contains descriptions, called window declarations, of each
of the GUI objects in your application. A window declaration specifies a logical, cross-
platform name for a GUI object, called the identifier, and maps the identifier to the object’s
actual name, called the tag. In addition, the declaration indicates the type of the object,
called its class.
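As described above, a window declaration maps a logical identifier to the object's tag and class. A minimal sketch (all names hypothetical) might look like this:

```4test
// Hypothetical window declarations from a test frame (frame.inc).
// The class comes first, then the identifier; the quoted "tag"
// holds the object's actual name in the GUI.
window MainWin TextEditor
    tag "Text Editor"
    Menu File
        tag "File"
        MenuItem New
            tag "New"
    Menu Search
        tag "Search"
        MenuItem Find
            tag "Find"
```

Scripts refer only to identifiers such as TextEditor.Search.Find, so if the actual caption changes, only the tag needs updating.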
Creating testcases
The powerful object-oriented recorder can be used to automatically capture these 4Test
commands as you interact with the application, or you can write the 4Test code manually if
you are comfortable with programming languages. For maximum ease and power, these two
approaches can be combined, recording the basic testcase and then extending it using
4Test's flow-of-control features.
Next, run one or more testcases, either by running a collection of scripts, called a suite, or,
if you are using the testplan editor, by running specific portions of the testplan. As each
testcase runs, statistics are written to a results file. The results file and its associated
comparison tools allow you to quickly pinpoint the problems in your application.
A Test Frame
The test frame is the backbone that supports the testcases and scripts. It is a file that
contains all the information about the application’s GUI objects that Silk Test needs when
you record testcases. This information minimally consists of a declaration for each GUI
object, but can also include any data that you want to associate with each GUI object, as
well as any new classes and methods that you want to define.
A window declaration specifies a cross-platform, logical name for a GUI object, called the
identifier, and maps the identifier to the object’s actual name, called the tag. Because the
testcases use logical names, if the object's actual name changes on the current GUI, on
another GUI, or in a localized version of the application, only the tag in the window
declarations needs to be changed; you don't need to change any of the scripts. Variables,
functions, methods and properties can be added to the basic window declarations recorded
by Silk Test.
Silk Test can record declarations for the main window and menu hierarchy of your
application.
A Test Plan
A Test Suite
A Test Suite is a collection of test scripts. Consider a case where we have a set of script
(.t) files. If we want to run these scripts against our application, we have to select the
required testcase or run the entire script file. After that script file completes, the user has
to manually switch to the next script file to run the testcases available in that script.
Instead, SilkTest provides a way to select a set of script files and run them in one stretch.
This can be done by creating a new Test Suite file and declaring the needed script files in
that suite file.
Assume a case where there is a folder called silk scripts in the C drive with five test script
files. Here in the suite file, we call all the script files instead of running those script files
separately. The suite file will look as given below:
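The original suite listing is not reproduced in this document; as a sketch (the folder and file names below are assumed for illustration), a suite (.s) file simply names the script files to run, one per line:

```4test
// Hypothetical suite file, e.g. silkscripts.s; names are
// illustrative. Each line names a script file, run in order.
c:\silkscripts\script1.t
c:\silkscripts\script2.t
c:\silkscripts\script3.t
c:\silkscripts\script4.t
c:\silkscripts\script5.t
```

Running the suite executes every listed script in sequence, with no manual switching between script files.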
A Test script
A Testcase
In a script file, a testcase ideally addresses one test requirement. Specifically, a 4Test
function that begins with the testcase keyword and contains a sequence of 4Test
statements. It drives an application to the state to be tested, verifies that the application
works as expected, and returns the application to its base state.
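The structure just described (drive to the tested state, verify, return to the base state) can be sketched as follows; the window names are hypothetical:

```4test
// Hypothetical testcase skeleton showing the three phases.
testcase CheckAboutBox ()
    // 1. drive the application to the state to be tested
    TextEditor.Help.About.Pick ()
    // 2. verify that the application works as expected
    Verify (AboutBox.Exists (), TRUE)
    // 3. return the application to its base state
    AboutBox.OK.Click ()
```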
• In the SilkTest tool, select the File -> New option from the menu bar.
• In the resulting "New" dialog box, there will be options for selecting different
kinds of files.
• Select the "4Test script" file type option.
• It will open a new script file.
• Before starting to write the testcase, declare the necessary files that are to be used
in that script file.
• Start with the keyword "testcase" followed by the testcase name. The name of the
testcase is whatever the user selects, but make sure that the objective of the
testcase is understandable by looking at its name.
• Start the tests from scratch so that SilkTest will start the application and do the
testing from the base state.
• Use conditions / loops if necessary.
• At the end of each and every script, print a statement to show whether the testcase
has achieved its objective or not. The user can make sure that the particular part of
the application is error free by looking at the message you print.
• Try to make the testcases effective and time-saving, say, by having the second
test continue from the place where the first test finishes.
• A sample testcase for registering into Yahoo mail:
testcase Registration ()
    Browser.LoadPage ("mail.yahoo.com")
    SignInYahooMail.SetActive ()
    SignInYahooMail.objSignInYahooMail.SignUpNow.Click ()
    sleep (3)
    WelcomeToYahoo.SetActive ()
    WelcomeToYahoo.objWelcomeToYahoo.LastName.SetText ("lastname")
    WelcomeToYahoo.objWelcomeToYahoo.LanguageContent1.Select (5)
    WelcomeToYahoo.objWelcomeToYahoo.ContactMeOccasionallyAbout.Click ()
    WelcomeToYahoo.objWelcomeToYahoo.SubmitThisForm.Click ()
    if RegistrationSuccess.Exists ()
        Print ("Test Pass")
    else
        LogError ("Test Fail")
• Run the Silk Test setup from the CD or through the network.
• The Silk Test software is available in the
"Firesip\Europa\software\siltest5.0.1\siltest" directory in your Network
Neighborhood.
• Get into the above folder and select the setup.exe file to start the installation.
• During installation, it will ask for the licence file. Set the path
"Firesip\Europa\software\siltest 5.0.1\licence" for the licence.dat file.
• In the installation process, it will ask for the Silk Test / Silk Test Agent only option.
Select the Silk Test option if you are installing this for testing applications on a
stand-alone machine.
• For the "Will you be testing browsers?" message box, select Yes if you are
going to test Web-based applications.
• It will ask for the default browser option. Select the browser you want to use to
test the application with Silk Test. Note that you are allowed to select only one
browser option. By default Silk Test goes fine with Netscape browsers.
• After installing, it will open the Silk Test tool with the QuickStart wizard open. The
QuickStart wizard will assist you in creating various Silk files. If you are a first-time
user of SilkTest, then continue with it.
If you are using Silk Test with the testplan editor, you can use the QuickStart Wizard, which
greatly simplifies the four steps of automated testing.
When you start Silk Test the first time (or whenever you start and have no open windows),
the QuickStart Wizard is displayed automatically. You can also invoke the wizard at any time
by selecting File/New and clicking the QuickStart Wizard icon.
1. Create a testplan. You simply name the file (giving it the .pln extension) and its directory.
2. Create a test frame, which contains descriptions of the GUI objects in your application
that you want to test. As prompted, you simply open your application and open the various
windows and dialogs that you want to test in the application. The wizard automatically
records all the declarations in a file called frame.inc. You don't have to do any coding.
3. Record testcases. You name the testcase and provide a description for the testplan, then
simply record the testcase. Again, you don't have to do any coding. The wizard
automatically saves the testcase in a script (.t) file with the same name as the testplan.
4. Run testcases.
1. Invoke the wizard by selecting File / New and clicking the QuickStart Wizard icon. Now
you will name a new testplan, which will organize and manage your tests.
2. Click Next.
3. Name the file edit.pln and click Next. The next step is to record the test frame, which
defines all the windows, dialogs, menus, and so on that you want to test.
4. To create a new test frame, leave New Test Frame selected and click Next. At this point,
the wizard lists all the open (running and not minimized) applications. If Text Editor is not
open, you can open it now (it is in the directory where you installed Silk Test). After you
open the Text Editor, click on the QuickStart Wizard title bar to see Text Editor added to the
list of applications.
8. Now you simply open a document window and open all the dialogs that you want to test
in the Text Editor. When you place the mouse pointer on a window or dialog, the wizard
records all the declarations that SilkTest needs in a file called frame.inc in the same
directory as your testplan.
9. When you have finished capturing the windows and dialogs in Text Editor, click Return to
Wizard in the Capturing New Windows dialog. Now that you have created your test frame,
you are ready to create a testcase.
11. Name the test Find Box and enter the description "Verify controls in Find dialog." Click
Next. Your test is now being recorded, as indicated by the Record Status window on your
screen.
12. Now go to Text Editor, select Search / Find to open the Find dialog, place your mouse
pointer over the dialog's title bar, and press Ctrl + Alt to verify its state. The Verify Window
dialog displays. Click OK to verify all properties for the dialog. Close the Find dialog (to
return to your base state), then click Done in the Record Status window. You return to the
wizard and are asked to confirm that the test is what you want.
15. The wizard reports the results. You an move the wizard to the side and look at the
results file that is created whenever you run a test.
16. In the wizard, click Next to save your testcase. The testcase is saved in a script (.t) file
with the same name as the testplan (in this case, edit.t).
17. Click Close to close the wizard. You see a window containing the results file from the
test you just ran. In another window is the testplan.
• In SilkTest, select the Options -> Extensions menu from the menu bar.
• It will load the Extensions dialog box.
• In that dialog box, check the kind of application you are testing.
• Say, if you are testing a web-based application in the Netscape browser, enable the
Netscape option by checking against it and un-check all the other options.
• Click on the OK button.
• Now click on the Options -> Runtime option.
• Set the ‘Use Files’ option to point to the SilkTest installed folder. Say, if SilkTest has
been installed on your C:\ drive, then set Use Files = “C:\Program
Files\Segue\SilkTest\Extend”. This is to use the common include files as per the
application.
• Since we have selected the Extensions, the Use Files entry will include the
declaration ‘extend\netscape.inc’ for the include file. Now the script is ready to open
the Netscape browser by default and run the scripts.
• From the Windows taskbar, go to Start -> Programs -> Silk Test -> Extension
Enabler to declare the same set of extensions (as in step 4) in the ‘Extension
Enabler’ dialog box.
In Silktest,
• Select the File -> New menu from the menu bar, or click the ‘white icon’ in
the top-left corner of the window.
• It will ask for various kinds of files to open. Select the ‘4Test include file’ to declare
the window objects from the application.
• The ‘4Test Script file’ option is to open the script file where we will be writing the
testscript.
• The above two file types are the most important for building the scripts.
• Open the application you want to test. If you are going to test the Yahoo site, then
open the browser and load the page from which you want to start testing.
• The page at which you start testing the application is assumed to be the ‘BaseState’.
We can also explicitly declare the window base state.
• The Test Script file will be used only after creating the include file. We will be using
the include file to write the script file. Hence we have to declare the include file that
we are calling in the testscript files.
• To open an existing file, select File -> Open and select the existing file.
• This example is to write the script for logging into the yahoo site.
• Start the Silk Test by selecting from the ‘Start’ menu.
• Configure the settings as given in lab I.
• Click ‘File -> New ‘menu, and select the ‘4Test script file’ option.
• Click on the OK button.
• Start with the keyword ‘testcase Action ()’ and press the Enter key on your keyboard.
testcase is the default keyword for any testcase, and Action is the name of
the testcase. The testcase name can be anything, but it is advisable to name it clearly
so that it represents the functionality of the test.
• Now start writing the testcase (follow the instructions below).
• Open the application in parallel, i.e., open the browser in which the application has to
run (say, Netscape).
• Go to Silk Test
• Click the Record -> Actions menu from the menu bar.
• It will load the ‘Record Actions’ dialog box.
• Keep the dialog box as it is, go to the application, and perform whatever actions you
want.
• SilkTest will record the events you perform sequentially, and you can view them in
the ‘Record Actions’ dialog.
• After completing your task (up to whatever you want to record), click on the ‘Paste to
Editor’ button in the ‘Record Actions’ dialog box.
• Then click the close button to close the Record actions dialog box and go to your
application.
• Now the recorded code will be readily available in the testscript editor, inside the
testcase.
• Now delete the keyword ‘recording’ in your first line of the recorded code.
• Now, select the entire recorded code by placing the mouse pointer at the leftmost dot
(.) in your editor on the first line and dragging it till the end.
• Right click on the selected code and select the ‘Move Left’ option.
• The code is ready now.
• Now, compile the code with the ‘Run -> Compile’ option and run the script by
selecting the ‘Run -> Run’ menu.
• Now the testcase will automatically start the application, perform the recorded
events, and report the results.
• The sample recorded testcase for the yahoo login looks like this:
testcase Action ()
// [-] recording
BrowserPage.SetActive ()
Browser.Location.SetText ("www.yahoo.com")
Browser.Location.TypeKeys ("")
Yahoo.HtmlLink ("Mail|#26|$http:??www.yahoo.com?r?m2").Click ()
BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetPosition (1, 1)
• A robust library of object-oriented classes and methods that specify how a testcase
can interact with an application’s GUI objects.
• A set of statements, operators, and data types that you use to add structure and
logic to a recorded testcase.
• A library of built-in functions for performing common support tasks.
Note: This section provides a high-level look at 4Test.
The basic Silk scripts come in two forms: one as an include file and the other as a script
file.
• The include file, with the extension *.inc, is used for the declaration of window
names, window objects, variables, constants, structures and classes. The core
objects of the scripts lie here.
• The script file is used for writing scripts. It has the extension *.t. The
body of the scripts is defined here, i.e., the testcases that meet the various test
conditions are written in the script file.
The script file (*.t) can also be used for declaring objects, and the include file (*.inc) for
writing testcases, but to keep the code clear we use different files for different purposes. If
no testcase is written in an include file, the include file can be compiled but cannot be
run; it will show an error that the file does not contain any testcases. Only a file with a
testcase present will be allowed to run.
Before running the scripts, a separate declaration file has to be written (for declaring the
objects) along with the script file (for writing scripts using that declaration file), and both
have to be compiled.
• It will compile that particular script and the other related files called by that script.
The user can confirm this by looking at the progress status (in yellow) in the
bottom-right corner of the SilkTest tool.
• If there is any error, the error details are displayed at compile time. The user has
to make the necessary changes.
• Then select Run -> Testcase from the menu bar, or else select the Run icon.
• The testcases can be run selectively or all at a stretch.
• If the selective method is selected, it will ask for the testcase to be run from a list of
testcases.
• After you select the testcase and start the run, SilkTest will automatically start the
application and begin the test from the basestate.
Writing scripts in SilkTest involves steps of commands, with the window names and their
objects declared beforehand. To avoid these difficulties and to make the process easier (this
is an alternative to writing the script steps line by line), SilkTest provides a special feature
for recording events.
The recorded statements for logging in to the yahoo site will look like the sample given below.
• [-] recording
• BrowserPage.SetActive ()
• Browser.Location.SetText ("www.yahoo.com")
• Browser.Location.TypeKeys ("")
• Yahoo.HtmlLink ("Mail|#26|$http:??www.yahoo.com?r?m2").Click ()
• BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetPosition (1, 1)
• BrowserPage.HtmlTextField ("Yahoo!ID:|#1").SetText ("username")
• BrowserPage.HtmlTextField ("Yahoo!ID:|#1").TypeKeys ("")
• BrowserPage.HtmlTextField ("Password:|#2").SetText ("password")
• BrowserPage.HtmlPushButton ("Sign In|#1").Click ()
The difference between the above two kinds of scripts is that method II needs the windows
and their objects to be declared before the scripts are written. That is not the case with the
recorded kind of code.
Platform Independent
SilkTest doesn’t care how the application is created, in which software the application
is written, what kind of design is used, in which browser it is run, or in which operating
system the application is running.
All that an application needs in order to be tested using SilkTest is a frame (like a
window).
Browser Independent
There are various kinds of browsers used by various people for running their applications.
The user may use any browser of his choice to test the standard application. Each and every
browser acts differently with different applications. They show the same page differently.
The web objects they display can also be aligned or displayed in a different manner.
SilkTest simply treats these browser contents as objects, so images and text do not go
unidentified. Also, we can write a test in one browser and run it in any other browser (to
some extent), i.e., using SilkTest we can do cross-browser testing.
With minor modifications, your tests are robust enough to support different browsers and
different versions of these browsers.
Technology Independent
SilkTest does not care how the application was built. It seamlessly works with the different
web technologies commonly used today.
Start writing the silk scripts. Capture the window declarations (.inc file) and write the .t file.
Say you capture the declarations from Internet Explorer and the test runs successfully on it.
Since we captured the declarations from I.E., we now have to make the same test case run
on Netscape, because the tag values change across browsers.
1. Declare all the window names and its objects (used in writing scripts) starting from the
first window.
2. In the File-> New option in the menu bar, select the test frame.
3. In the resulting ‘New Test Frame’ dialog box, specify the path of the executable file of
your application.
4. After submitting that dialog box, SilkTest will automatically create a declaration file
with the default window declared.
5. Use that file to create your testscripts.
Before you start testing the java applications or applets, you have to set the java classpath.
• Point to a Java archive (.jar file) that contains the software that powers SilkTest’s Java
support for JDK 1.2 and JRE 1.2. This file is called SilkTest_Java2.jar.
• When you install SilkTest, SilkTest_Java2.jar is installed in this directory: \JavaEx.
• If you will use only JDK 1.2 for testing, you can activate Java support for JDK 1.2 by
copying SilkTest_Java2.jar from \JavaEx to \jre\lib\ext.
If you do not copy SilkTest_Java2.jar to your JDK 1.2 install directory, you must point to it
from your CLASSPATH.
Test Partner
Introduction
Batch Testing:--
We can create batches/suites by using the keyword “Run”. In this tool, the calling test is
known as the Driver script and the called test is known as the Test script.
Including an Asset:--
To include a VBA or other non-Test Partner asset in a script, add a declaration using the
following syntax:
where asset Name is the name of the asset that you are including. Asset names are unique
across all asset types, so you don’t need to specify what type of asset you are including.
Object Mapping:--
It can be used to provide simplified, easily understood aliases for the names of Windows
objects. Once a window is registered in the Object Map, all references to it in scripts, check
definitions, and event definitions are made by its alias, rather than by its actual attach
name. The attach name is an important concept when testing applications using Test
Partner.
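Conceptually, the Object Map is just a lookup table from a friendly alias to the real attach name. A minimal Python sketch of the idea (the aliases and attach names below are invented examples, not TestPartner syntax):

```python
# Object Map: script-friendly alias -> underlying attach name
# (both sides are invented examples for illustration).
object_map = {
    "LoginButton": "PushButton(Name='btn_OK_32')",
    "UserField":   "EditBox(Id=1041)",
}

def attach(alias):
    """Resolve an alias to the attach name a script would otherwise use."""
    return object_map[alias]

print(attach("LoginButton"))
```

Scripts, check definitions and event definitions then refer only to the alias, so when the real attach name changes, only the map entry needs updating.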
Check Points:--
A check is a definition of the expected state of some aspect of the target system at a
particular point. In Test Partner, checks are saved and managed as assets. This means you
always have the option to reuse a check in more than one script. The following are various
checks available in Test Partner.
i) Bitmap Check:-
– Bitmap checks allow you to verify the appearance of a bitmap image. When you create the
check, you capture the image within a rectangular area of the screen. When the check is
verified, the same area is captured and compared to the defined image. If the two images
match according to the criteria you defined, the check passes. If not, the check fails. These
checks are used to check the appearance of toolbars, the desktop, and other windows that
contain non-textual information.
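At its core, a bitmap check is a comparison of a captured pixel region against a baseline captured earlier. A minimal Python sketch of the exact-match case, with small byte strings standing in for real screen captures:

```python
def bitmap_check(baseline: bytes, captured: bytes) -> bool:
    """Pass only if the captured region matches the baseline exactly."""
    return baseline == captured

# Baseline captured when the check was created: a tiny 4-pixel RGBA region.
baseline = bytes([255, 0, 0, 255] * 4)

assert bitmap_check(baseline, bytes([255, 0, 0, 255] * 4))      # match: passes
assert not bitmap_check(baseline, bytes([0, 255, 0, 255] * 4))  # mismatch: fails
```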
ii) Clock Check:-
– Clock checks measure the time the system takes to perform a process. Clock checks help
you determine how the system performs under varying CPU or network loads. When you
create the check, you specify an acceptable response time. When the check is verified, the
system’s actual response time is recorded and compared to the specified time.
iii) Content Check:-
– Content checks test the contents of tables and list controls in a window or web page. A
content check enables you to verify the contents of controls that it supports. Currently,
tables and list controls in a Windows-based or Web-based application are supported. The
Windows NT Version 4 desktops are also list controls.
The content check for tables enables you to optionally check the number of rows and
columns in the table and the case of the text in each table cell.
The content check for list controls enables you to optionally check the number of items,
positions of the items, which item(s) are selected, the text of each list item, and the case of
the text.
iv) Field Check:-
– Like text checks, field checks enable you to verify that required text is present in the target
application, but they enable you to verify that text as data, such as numbers or dates. For
example, you can see if a value falls between a lower and upper limit, or if a particular area
of the screen contains today’s date. You can create field checks that verify the following
data:
• ASCII values
• Numeric values
• Date values (fixed and aged)
• Time values
• Patterns
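Two of the field checks listed above, a numeric range check and a today's-date check, can be sketched in Python as follows (the date format is an assumption for illustration):

```python
from datetime import date

def check_numeric_range(text, low, high):
    """Field check: the text parses as a number within [low, high]."""
    return low <= float(text) <= high

def check_is_today(text, fmt="%Y-%m-%d"):
    """Field check: the text is today's date in the given format."""
    return text == date.today().strftime(fmt)

assert check_numeric_range("42.5", 0, 100)       # within the limits
assert not check_numeric_range("150", 0, 100)    # outside the limits
assert check_is_today(date.today().strftime("%Y-%m-%d"))
```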
v) Property Check:-
– Property checks verify the properties of the controls in a dialog or web page. You can
check the size and position of each control, their legends and IDs, and whether they are
active, disabled, selected, or cleared. You can check a single control, or you can check
several controls within an application window.
vi) Text Check:-
– Text checks provide an exact comparison of the text in a window or individual area to
defined text. If you check a whole screen, areas that contain legitimately variable data, such
as dates and login IDs, can be ignored. Unlike bitmap checks, which simply compare the
appearance of an area of the screen with an expected appearance, text checks actually read
the displayed data as strings. This enables more sophisticated checking to be performed.
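Because text checks read the displayed data as strings, variable areas such as dates can be masked before comparison. A rough Python sketch of that idea (the date pattern used as the ignore region is an invented example):

```python
import re

def text_check(expected, actual, ignore_pattern=r"\d{2}/\d{2}/\d{4}"):
    """Compare two screens' text, masking regions that match ignore_pattern."""
    mask = lambda s: re.sub(ignore_pattern, "<IGNORED>", s)
    return mask(expected) == mask(actual)

# Dates differ but are masked, so the check still passes.
assert text_check("Logged in on 01/02/2004", "Logged in on 28/11/2005")
# Genuinely different text still fails.
assert not text_check("Welcome, admin", "Welcome, guest")
```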
Events:--
Events are unscheduled occurrences or conditions to which you want the target application
to respond in a specified manner. Test Partner supports two categories of events: Wait and
Whenever. A Wait event tells Test Partner to wait for a specified occurrence before
proceeding. Wait events are useful in situations where you cannot anticipate the amount of
time a response will take. An example of a Wait event would be waiting for a system login
prompt. When your script is running against a network-based application that requires the
user to log in, the amount of time it takes for the login prompt to display may vary. To
account for this variance, you can insert a Wait event that instructs your script to wait for
the login prompt before proceeding to type a username and password.
A Whenever event tells Test Partner to watch for a specific occurrence and, if it occurs,
perform a special set of steps. Whenever events are useful for trapping unexpected error
conditions during a test run. For example, you can include events in your scripts to
recognize when the connection to the server has been interrupted by a communications
error or a network system message has been received, so that the script can report or work
around the problem. In a script, Test Partner automatically inserts the Script_Whenever
function by default to handle the whenever event. If a whenever event is inserted into a
module, shared module, or class module, you must customize the whenever event handler
code. Test Partner supports the following types of Events:
i) Bitmap Event:-- A bitmap event detects the presence, absence, or state of a graphic in
a window.
ii) Date/Time Events:-- Date/Time events enable you to define a date or time condition. Test Partner recognizes the
event by monitoring the internal clock of the computer on which it and the target application
are running.
iii) Key Events:-- Key events watch for the entry of a particular keystroke combination by the user. You can use key
events to:
• Build your own “hotkeys” to make Test Partner perform an action whenever the
hotkey is used.
• Interrupt a script to take manual control of the target application.
• Pause a script until a particular key is used.
iv) Menu Events:-- Watch for when a particular menu item is selected by the user.
v) Mouse Events:-- Watch for when one of the mouse buttons is clicked or released in a
certain window.
vi) Screen Events:-- A screen event detects the presence or absence of text in a window. The most common use
of screen events is to synchronize with host-based applications and to detect error
messages.
vii) Window Events:-- A window event detects an action on a window; for example, its creation, destruction,
movement, or its existence.
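The Wait behavior described above, polling for an occurrence instead of sleeping for a fixed time, can be sketched generically in Python (the condition and timings are placeholders, not TestPartner API):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.1):
    """Wait event: poll `condition` until it is true or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# e.g. wait for a "login prompt" flag that another part of the test sets.
prompt_shown = {"value": True}
assert wait_for(lambda: prompt_shown["value"], timeout=1.0)
assert not wait_for(lambda: False, timeout=0.3)  # times out, returns False
```

This is why Wait events are robust where fixed delays are not: the script proceeds as soon as the prompt appears, and fails cleanly if it never does.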
The Test Partner Active Data Test Creation wizard provides a non-programmatic way to
create data-driven tests. Such tests are useful for testing form-based applications. Using the
wizard, you can record a test, choose the fields you want to include in your data file, then
populate the data file itself using a data table.
1. From the Script menu, choose Create Data Test. The Data Test wizard appears.
2. Read the instructions and click Next.
3. Follow the instructions in the Data Layout Assistant for the three steps necessary to
define the scope of the test.
4. Enter a name for the Data Table.
5. Exclude any fields you do not want to include in the test by unchecking the Use checkbox
and click Next.
6. Read the instructions and click Finish.
TestPartner shows the Data Table, which includes a column for each field you defined and
one row of data. The data table also includes a column labelled Results Script, to specify a
script to be run after the data-driven test.
1. If you have just recorded a new data test and the data table is open, proceed to step 3.
2. To modify an existing data test, select Modify Data Table on the Script menu and choose
the data table you want to change.
3. To add rows and populate the table with test data, right-click on the empty row labelled
END (or any other row) and select Insert on the context menu. Test Partner inserts a new
row above the selected row.
5. Alternatively, right-click the empty row and choose Import. You can import data from a
tab-delimited text (.TXT) or comma-delimited (.CSV) file to populate the cells of the data
table. If you select the END row in the table, TestPartner will add multiple rows if needed to
accommodate the data.
6. To delete rows from the table, right-click the row and choose Delete. To delete multiple
rows, press the Ctrl or Shift (for contiguous rows) key while selecting rows to delete.
7. To launch another script from the data test, insert the name of the script in the Results
Script field in the table at any point(s) at which you want to run the script.
8. When you have finished editing the data table, click Save and Close to exit.
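The data-table mechanics above reduce to reading one row per iteration and feeding its fields to the recorded steps. A minimal Python sketch of such a CSV-driven loop (the field names and the stand-in for the recorded steps are invented):

```python
import csv
import io

# Stand-in for a data table exported as comma-delimited (.CSV).
data_table = io.StringIO(
    "username,password,expected\n"
    "alice,secret1,ok\n"
    "bob,badpass,fail\n"
)

def run_recorded_steps(username, password):
    """Placeholder for the recorded form-filling steps."""
    return "ok" if password == "secret1" else "fail"

results = []
for row in csv.DictReader(data_table):
    outcome = run_recorded_steps(row["username"], row["password"])
    results.append(outcome == row["expected"])

print(results)  # one pass/fail per data row
```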
Interview questions
1) Have you used WinRunner in your project?
Ans. Yes, I have been using WinRunner for creating automated scripts for GUI, functional
and regression testing of the AUT.
Ans. WinRunner stores information it learns about a window or object in a GUI Map. When
WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s
description in the GUI map and then looks for an object with the same properties in the
application being tested. Each of these objects in the GUI Map file will be having a logical
name and a physical description.
Ans. WinRunner uses the GUI Map file to recognize objects on the application. When
WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s
description in the GUI map and then looks for an object with the same properties in the
application being tested.
5) Have you created test scripts and what is contained in the test scripts?
Ans. Yes, I have created test scripts. They contain statements in Mercury Interactive’s Test
Script Language (TSL). These statements appear as a test script in a test window. You can
then enhance your recorded test script, either by typing in additional TSL functions and
programming elements or by using WinRunner’s visual programming tool, the Function
Generator.
Ans. Following each test run, WinRunner displays the results in a report. The report details
all the major events that occurred during the run, such as checkpoints, error messages,
system messages, or user messages. If mismatches are detected at checkpoints during the
test run, you can view the expected results and the actual results from the Test Results
window.
Ans. Yes, I have performed debugging of scripts. We can debug the script by executing the
script in the debug mode. We can also debug the script using the Step, Step Into, and Step
Out functionalities provided by WinRunner.
Ans. We run tests in Verify mode to test the application. Each time WinRunner encounters
a checkpoint in the test script, it compares the current data of the application being tested
to the expected data captured earlier. If any mismatches are found, WinRunner captures
them as actual results.
Ans. Following each test run, WinRunner displays the results in a report. The report details
all the major events that occurred during the run, such as checkpoints, error messages,
system messages, or user messages. If mismatches are detected at checkpoints during the
test run, you can view the expected results and the actual results from the Test Results
window. If a test run fails due to a defect in the application being tested, you can report
information about the defect directly from the Test Results window. This information is sent
via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
Ans. TestDirector is Mercury Interactive’s software test management tool. It helps quality
assurance personnel plan and organize the testing process. With TestDirector you can create
a database of manual and automated tests, build test cycles, run tests, and report and track
defects. You can also create reports and graphs to help review the progress of planning
tests, running tests, and tracking defects before a software release.
Ans. When you work with WinRunner, you can choose to save your tests directly to your
TestDirector database, or, while creating a test case in TestDirector, we can specify
whether the script is automated or manual. If it is an automated script, then TestDirector
will build a skeleton for the script that can later be modified into one which could be used
to test the AUT.
Ans. Add-Ins are used in WinRunner to load functions specific to the particular add-in to the
memory. While creating a script only those functions in the add-in selected will be listed in
the function generator and while executing the script only those functions in the loaded add-
in will be executed else WinRunner will give an error message saying it does not recognize
the function.
14) What are the reasons that WinRunner fails to identify an object on the GUI?
Ans. An object’s logical name is determined by its class. In most cases, the logical name is
the label that appears on an object.
16) If the object does not have a name then what will be the logical name?
Ans. If the object does not have a name then the logical name could be the attached text.
17) What is the different between GUI map and GUI map files?
Ans. The GUI map is actually the sum of one or more GUI map files. There are two modes
for organizing GUI map files.
i. Global GUI Map file: a single GUI Map file for the entire application
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test
created. GUI Map file is a file which contains the windows and the objects learned by the
WinRunner with its logical name and their physical description.
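The logical-name-to-physical-description lookup can be sketched in Python (the property sets below are invented examples, not real WinRunner descriptions):

```python
# GUI map: logical name -> physical description (invented examples).
gui_map = {
    "Login": {"class": "window", "label": "Login"},
    "OK":    {"class": "push_button", "label": "OK"},
    "Name":  {"class": "edit", "attached_text": "Name:"},
}

def find_object(logical_name, objects_on_screen):
    """Locate the screen object whose properties match the GUI map entry."""
    description = gui_map[logical_name]
    for obj in objects_on_screen:
        if all(obj.get(k) == v for k, v in description.items()):
            return obj
    return None  # object not found on screen

screen = [{"class": "push_button", "label": "OK", "x": 10, "y": 20}]
assert find_object("OK", screen) is screen[0]
assert find_object("Name", screen) is None
```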
Ans. GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from
the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created
and the windows and objects learned in to them with their logical name and physical
description.
19) When you create a GUI map, do you record all the objects or specific objects?
Ans. If we are learning a window, then WinRunner automatically learns all the objects in the
window; otherwise we identify those objects which are to be learned in a window, since
we will be working with only those objects while creating scripts.
Ans. Set_Window command sets the focus to the specified window. We use this command
to set the focus to the required window before executing tests on a particular window.
Syntax: set_window (<logical name>, time); The logical name is the logical name of the
window, and time is the time the execution will wait till it gets the given window into focus.
Syntax: GUI_load (<file_name>);
22) What is the disadvantage of loading the GUI maps through start up scripts?
Ans. 1. If we are using a single GUI Map file for the entire AUT, then the memory used by
the GUI Map may be very high.
2. If there is any change in an object being learned, then WinRunner will not be able to
recognize the object, as it is not in the GUI Map file loaded in the memory. So we will have
to learn the object again, update the GUI file, and reload it.
Ans. We can use GUI_close to unload a specific GUI Map file, or else we can use the
GUI_close_all command to unload all the GUI Map files loaded in the memory.
Ans. When we load a GUI Map file, the information about the windows and the objects with
their logical names and physical description are loaded into memory. So when the
WinRunner executes a script on a particular window, it can identify the objects using this
information loaded in the memory.
Ans. While recording a script, WinRunner learns objects and windows by itself. This is
actually stored in a temporary GUI Map file. We can specify in the General Options whether
this temporary GUI Map file should be loaded each time.
Ans. The GUI Map Editor provides Find and Show buttons.
i. To locate in the application a particular object listed in the GUI Map file, select the object
and click the Show button. This blinks the selected object.
ii. To find a particular object in a GUI Map file click the Find button, which gives the option
to select the object. When the object is selected, if the object has been learned to the GUI
Map file it will be focused in the GUI Map file.
28) What different actions are performed by find and show button?
Ans. 1. To locate in the application a particular object listed in the GUI Map file, select the
object and click the Show button. This blinks the selected object.
2.To find a particular object in a GUI Map file click the Find button, which gives the option to
select the object. When the object is selected, if the object has been learned to the GUI Map
file it will be focused in the GUI Map file.
29) How do you identify which files are loaded in the GUI map?
Ans. The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded
into the memory.
30) How do you modify the logical name or the physical description of the objects
in GUI map?
Ans. You can modify the logical name or the physical description of an object in a GUI map
file using the GUI Map Editor.
31) When do you feel you need to modify the logical name?
Ans. Changing the logical name of an object is useful when the assigned logical name is not
sufficiently descriptive or is too long.
Ans. Changing the physical description is necessary when the property value of an object
changes.
Ans. We can handle varying window labels using regular expressions. WinRunner uses two
“hidden” properties in order to use regular expression in an object’s physical description.
These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates “behind the scenes” to
insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class.
It is obligatory for all types of windows and for the object class object.
Ans. The regexp_label property is used for windows only. It operates “behind the scenes” to
insert a regular expression into a window’s label description.
Ans. We can suppress the regular expression of a window by replacing the regexp_label
property with label property.
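The effect of a regular expression in a label can be illustrated outside WinRunner as well. A small Python sketch, with an invented label pattern, showing one description matching a family of window titles:

```python
import re

# Like regexp_label: one description matching varying window labels,
# e.g. "Document1 - Notepad", "Report.txt - Notepad", ...
regexp_label = r".* - Notepad"

def matches_window(label):
    """True if the window label fits the regular-expression description."""
    return re.fullmatch(regexp_label, label) is not None

assert matches_window("Document1 - Notepad")
assert matches_window("Report.txt - Notepad")
assert not matches_window("Calculator")
```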
36) How do you copy and move objects between different GUI map files?
Ans. We can copy and move objects between different GUI Map files using the GUI Map
Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files
simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in
the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control
key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select
All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.
37) How do you select multiple objects during merging the files?
Ans. Use the Shift key and/or Control key to select multiple objects. To select all objects in
a GUI map file, choose Edit > Select All.
Ans. We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.
Ans. The GUI Map Editor has a Filter option, which provides three different types of
filtering:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use
any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.
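Each filter option is a simple predicate over the map entries. A Python sketch of filter option iii, filtering a small invented GUI map by class:

```python
# Invented GUI map entries for illustration.
gui_map = {
    "OK":     {"class": "push_button", "label": "OK"},
    "Cancel": {"class": "push_button", "label": "Cancel"},
    "Name":   {"class": "edit", "attached_text": "Name:"},
}

def filter_by_class(gmap, cls):
    """Filter option iii: keep only objects of the given class."""
    return {name: desc for name, desc in gmap.items()
            if desc["class"] == cls}

buttons = filter_by_class(gui_map, "push_button")
print(sorted(buttons))  # ['Cancel', 'OK']
```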
a. When WinRunner learns the description of a GUI object, it does not learn all its
properties. Instead, it learns the minimum number of properties to provide a unique
identification of the object.
b. Many applications also contain custom GUI objects. A custom object is any object not
belonging to one of the standard classes used by WinRunner. These objects are therefore
assigned to the generic “object” class. When WinRunner records an operation on a custom
object, it generates obj_mouse_ statements in the test script.
c. If a custom object is similar to a standard object, you can map it to one of the standard
classes. You can also configure the properties WinRunner uses to identify a custom object
during Context Sensitive testing. The mapping and the configuration you set are valid only
for the current WinRunner session. To make the mapping and the configuration permanent,
you must add configuration statements to your startup test script.
1. What is load testing? - Load testing tests whether the application works fine with
the loads that result from a large number of simultaneous users and transactions, and
determines whether it can handle peak usage periods.
2. What is Performance testing? - Timing for both read and update transactions
should be gathered to determine whether system functions are being performed in
an acceptable timeframe. This should be done standalone and then in a multi user
environment to determine the effect of multiple transactions on the timing of a single
transaction.
3. Did you use LoadRunner? What version? - Yes. Version 7.2.
4. Explain the Load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure
the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed
by each Vuser, tasks performed by Vusers as a whole, and tasks measured as
transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a
testing session. It includes a list of machines, scripts, and Vusers that run during the
scenario. We create scenarios using the LoadRunner Controller. We can create
manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define
the number of Vusers, the load generator machines, and the percentage of Vusers to
be assigned to each script. For web tests, we may create a goal-oriented scenario in
which we define the goal that our test has to achieve; LoadRunner automatically
builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple
Vusers to perform tasks simultaneously. Before the testing, we set the scenario
configuration and scheduling. We can run the entire scenario, Vuser groups, or
individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the
LoadRunner online runtime, transaction, system resource, Web resource, Web server
resource, Web application server resource, database server resource, network delay,
streaming media resource, firewall server resource, ERP server resource, and Java
performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the
performance of the application under different loads. We use LoadRunner’s graphs
and reports to analyze the application’s performance.
5. When do you do load and performance testing? - We perform load testing once
we are done with interface (GUI) testing. Modern system architectures are large and
complex. Whereas single-user testing focuses primarily on the functionality and user
interface of a system component, application testing focuses on the performance and
reliability of an entire system. For example, a typical application-testing scenario might depict 1000
users logging in simultaneously to a system. This gives rise to issues such as: what is
the response time of the system? Does it crash? Will it work with different software
applications and platforms? Can it handle so many hundreds and thousands of users?
This is when we do load and performance testing.
6. What are the components of LoadRunner? - The components of LoadRunner are:
the Virtual User Generator, the Controller and the Agent process, LoadRunner
Analysis and Monitoring, and LoadRunner Books Online.
7. What Component of LoadRunner would you use to record a Script? - The
Virtual User Generator (VuGen) component is used to record a script. It enables you
to develop Vuser scripts for a variety of application types and communication
protocols.
8. What Component of LoadRunner would you use to play Back the script in
multi user mode? - The Controller component is used to playback the script in
multi-user mode. This is done during a scenario run where a vuser script is executed
by a number of vusers in a group.
9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to
emulate heavy user load on the server. Rendezvous points instruct Vusers to wait
during test execution for multiple Vusers to arrive at a certain point, in order that
they may simultaneously perform a task. For example, to emulate peak load on the
bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash
into their accounts at the same time.
10. What is a scenario? - A scenario defines the events that occur during each testing
session. For example, a scenario defines and controls the number of users to
emulate, the actions to be performed, and the machines on which the virtual users
run their emulations.
11. Explain the recording mode for web Vuser script? - We use VuGen to develop a
Vuser script by recording a user performing typical business processes on a client
application. VuGen creates the script by recording the activity between the client and
the server. For example, in web-based applications, VuGen monitors the client end
of the application and traces all the requests sent to, and received from, the
server. We use VuGen to: Monitor the communication between the application and
the server; Generate the required function calls; and Insert the generated function
calls into a Vuser script.
12. Why do you create parameters? - Parameters are like script variables. They are
used to vary input to the server and to emulate real users. Different sets of data are
sent to the server each time the script is run. This better simulates the usage model,
allowing more accurate testing from the Controller: one script can emulate many
different users on the system.
13. What is correlation? Explain the difference between automatic correlation
and manual correlation? - Correlation is used to obtain data which are unique for
each run of the script and which are generated by nested queries. Correlation
provides the value to avoid errors arising out of duplicate values and also optimizing
the code (to avoid nested queries). Automatic correlation is where we set some rules
for correlation. It can be application server specific. Here values are replaced by data
which are created by these rules. In manual correlation, we scan for the value we
want to correlate and then use create correlation to correlate it.
14. How do you find out where correlation is required? Give few examples from
your projects? - Two ways: First we can scan for correlations, and see the list of
values which can be correlated. From this we can pick a value to be correlated.
Secondly, we can record two scripts and compare them. We can look up the
difference file to see the values which need to be correlated. In my project, there
was a unique ID generated for each customer; it was nothing but the Insurance
Number; it was generated automatically and sequentially, and the value was unique.
I had to correlate this value in order to avoid errors while running my script. I did
this using scan for correlation.
15. Where do you set automatic correlation options? - Automatic correlation from
web point of view can be set in recording options and correlation tab. Here we can
enable correlation for the entire script and choose either issue online messages or
offline actions, where we can define rules for that correlation. Automatic correlation
for database can be done using show output window and scan for correlation and
picking the correlate query tab and choose which query value we want to correlate. If
we know the specific value to be correlated, we just do create correlation for the
value and specify how the value to be created.
16. What is a function to capture dynamic values in the web Vuser script? -
The web_reg_save_param function saves dynamic data information to a parameter.
17. When do you disable logs in Virtual User Generator, and when do you choose
standard and extended logs? - Once we debug our script and verify that it is
functional, we can enable logging for errors only. When we add a script to a
scenario, logging is automatically disabled. Standard Log option: when you select
Standard log, it creates a standard log of the functions and messages sent during
script execution, to use for debugging. Extended Log option: select Extended log to
create an extended log that also includes warnings and other messages; we can
specify which additional information should be added to the extended log using the
Extended Log options. For both options, disable logging for large load-testing
scenarios; when you copy a script to a scenario, logging is automatically disabled.
18. How do you debug a LoadRunner script? - VuGen contains two options to help
debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug
settings in the Options dialog box allow us to determine the extent of the trace to be
performed during scenario execution. The debug information is written to the Output
window. We can manually set the message class within our script using the
lr_set_debug_message function. This is useful if we want to receive debug
information about a small section of the script only.
19. How do you write user-defined functions in LR? Give me a few functions you
wrote in your previous project? - Before we create a user-defined function we need
to create the external library (DLL) containing the function. We add this library to
the VuGen bin directory. Once the library is added, we assign the user-defined
function as a parameter. The function should have the following format: __declspec
(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and
GetPlatform are some of the user-defined functions used in my earlier project.
20. What are the changes you can make in run-time settings? - The Run-Time
Settings that we make are: a) Pacing - this includes the iteration count. b) Log -
under this we have Disable Logging, Standard Log, and Extended Log. c) Think Time
- here we have two options: Ignore think time and Replay think time. d) General -
under the General tab we can set the Vusers as a process or as multithreading, and
whether each step is a transaction.
21. How do you perform functional testing under load? - Functionality under load
can be tested by running several Vusers concurrently. By increasing the amount of
Vusers, we can determine how much load the server can sustain.
22. What is Ramp up? How do you set this? - This option is used to gradually
increase the amount of Vusers/load on the server. An initial value is set, and a value
to wait between intervals can be specified.
Test Director
1. Types of views in DataStage Director?
2. Orchestrate vs DataStage Parallel Extender?
3. What is an Exception? What are the types of Exceptions?
4. What are Routines, where/how are they written, and have you written any routines
before?
5. What are the command-line functions that import and export the DS jobs?
6. How many types of database triggers can be specified on a table? What are they?
7. What is NLS in DataStage? How do we use NLS in DataStage? What are its advantages?
At the time of ins. . .
8. What are the types of Hashed File?
9. What are the datatypes available in PL/SQL?
General
1. What is 'Software Quality Assurance'?
Software QA involves the entire software development process - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are followed, and
ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the
Books section for a list of useful books on Software Quality Assurance.)
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common are
project teams that include a mix of testers and developers who work closely together, with
overall QA processes monitored by project managers. It will depend on what best fits an
organization's size and business structure.
3. What are some recent major computer system failures caused by software
bugs?
* Media reports in January of 2005 detailed severe problems with a $170 million high-profile
U.S. government IT systems project. Software testing was one of the five major problem
areas according to a report of the commission reviewing the project. Studies were under
way to determine which, if any, portions of the project could be salvaged.
* In July 2004 newspapers reported that a new government welfare management system in
Canada costing several hundred million dollars was unable to handle a simple benefits rate
increase after being put into live operation. Reportedly the original contract allowed for only
6 weeks of acceptance testing and the system was never tested for its ability to handle a
rate increase.
* According to news reports in April of 2004, a software bug was determined to be a major
contributor to the 2003 Northeast blackout, the worst power system failure in North
American history. The failure involved loss of electrical power to 50 million customers,
forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug
was reportedly in one utility company's vendor-supplied power monitoring and management
system, which was unable to correctly handle and report on an unusual confluence of
initially localized events. The error was found and corrected after examining millions of lines
of code.
* In early 2004, news reports revealed the intentional use of a software bug as a counter-
espionage tool. According to the report, in the early 1980's one nation surreptitiously
allowed a hostile nation's espionage service to steal a version of sophisticated industrial
software that had intentionally-added flaws. This eventually resulted in major industrial
disruption in the country that used the stolen flawed software.
* A major U.S. retailer was reportedly hit with a large government fine in October of 2003
due to web site errors that enabled customers to view one another's online orders.
* News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall
procedure in which a software upgrade fixed the problems.
* In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would not
run due to their inability to recognize the date '31/12/2000'; the trains were started by
altering the control system's date settings.
* News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage processing
system that did not meet specifications, was delivered late, and didn't work.
* In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.
* In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to
be lost in space due to a simple data conversion error. It was determined that spacecraft
software used certain data in English units that should have been in metric units. Among
other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander
mission, which failed for unknown reasons in December 1999. Several investigating panels
were convened to determine the process failures that allowed the error to go undetected.
* Bugs in software supporting a large commercial high-speed data network affected 70,000
business customers over a period of 8 days in August of 1999. Among those affected was
the electronic trading system of the largest U.S. futures exchange, which was shut down for
most of a week as a result of the outages.
* January 1998 news reports told of software problems at a major U.S. telecommunications
company that resulted in no charges for long distance calls for a month for 400,000
customers. The problem went undetected until customers called up with questions about
their bills.
4.Why is it often hard for management to get serious about quality assurance?
* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of
guesswork. When deadlines loom and the crunch comes, mistakes will be made.
* * 'no problem'
* * 'piece of cake'
* instead of:
* * 'that adds a lot of complexity and we could end up making a lot of mistakes'
* * 'I can't estimate how long it will take, until I take a close look at it'
* * 'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
* Poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ('if it was hard to
write, it should be hard to read').
* Software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.
* A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.
* Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with productivity
so as to keep bureaucracy from getting out of hand.
* For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers, feedback
to developers, and ensuring adequate communications among customers, managers,
developers, and testers.
* The most value for effort will often be in (a) requirements management processes, with a
goal of clear, complete, testable requirement specifications embodied in requirements or
design documentation, or in 'agile'-type environments extensive continuous coordination
with end-users, (b) design inspections and code inspections, and (c) post-
mortems/retrospectives.
* Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and takes
place after verifications are completed. The term 'IV & V' refers to Independent Verification
and Validation.
8. What is a 'walkthrough'?
* A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.
What is an 'inspection'?
* An inspection is more formalized than a 'walkthrough', typically with 3-8 people including
a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
document such as a requirements spec or a test plan, and the purpose is to find problems
and see what's missing, not to fix anything. Attendees should prepare for this type of
meeting by reading through the document; most problems will be found during this preparation.
The result of the inspection meeting should be a written report.
* Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
* White box testing - based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths, conditions.
* Unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.
* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use, such
as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
* Sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging down systems
to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to
warrant further testing in its current state.
* Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails.
* Stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
* Performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
* Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
* Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
* Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
* Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
* Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
* Context-driven testing - testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-critical medical
equipment software would be completely different than that for a low-cost computer game.
* Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
* Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
* Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the project
without burning out.
* Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate
time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and
built-in testing and diagnostic capabilities.
* Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.
* * 'Good code' is code that works, is bug-free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There
are also various theories and metrics, such as McCabe Complexity metrics. It should be kept
in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer
reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and
enforce standards. For C and C++ coding, here are some typical ideas to consider in setting
rules/standards; these may or may not apply to a particular situation:
* Use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.
* Use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
* Function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.
* Coding style should be consistent throughout a program (e.g., use of brackets,
indentation, naming conventions, etc.).
* In adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments (including
header blocks) as lines of code.
* No matter how small, an application should include documentation of the overall program
function and flow (even a few paragraphs is better than nothing); or if possible a separate
flow chart and detailed program documentation.
* Make extensive use of error handling procedures and status and error logging.
* For C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note that
the Java programming language eliminates multiple inheritance and operator overloading).
* For C++, keep class methods small, less than 50 lines of code per method is preferable.
* * 'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error-handling
and status logging capability; and works correctly when implemented. Good functional
design is indicated by an application whose functionality can be traced back to customer and
end-user requirements. For programs that have a user interface, it's often a good idea to
assume that the end user will have little computer knowledge and may not read a user
manual or even the on-line help; some common rules-of-thumb include:
* The program should act in a way that least surprises the user
* It should always be evident to the user what can be done next and how to exit
* The program shouldn't let the users do something stupid without warning them.
* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model
Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that
determine effectiveness in delivering quality software. It is geared to large organizations
such as large U.S. Defense Department contractors. However, many of the QA processes
involved are appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals
to successfully complete projects. Few if any processes in place; successes may not be
repeatable.
* Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in charge of
software processes, and training programs are used to ensure understanding and
compliance.
* Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
* Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.
* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard
829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard
for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
* ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ
(American Society for Quality).
* The life cycle begins when an application is first conceived and ends when it is no longer in
use. It includes aspects such as initial concept, requirements analysis, functional design,
internal design, documentation planning, test planning, coding, document preparation,
integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
* Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or ongoing long-term projects, they can be valuable.
* A common type of automated tool is the 'record/playback' type. For example, a tester
could click through all combinations of menu choices, dialog box choices, buttons, etc. in an
application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is
typically in the form of text based on a scripting language that is interpretable by the testing
tool. If new buttons are added, or some underlying code in the application is changed, etc.
the application might then be retested by just 'playing back' the 'recorded' actions, and
comparing the logging results to check effects of the changes. The problem with such tools
is that if there are continual changes to the system being tested, the 'recordings' may have
to be changed so much that it becomes very time-consuming to continuously update the
scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be
a difficult task. Note that there are record/playback tools for text-based interfaces also, and
for all types of platforms.
* Coverage analyzers - these tools check which parts of the code have been exercised by a
test, and may be oriented to code statement coverage, condition coverage, path coverage,
etc.
* Load/performance test tools - for testing client/server and web applications under various
load levels.
* Web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and that a web site's interactions are secure.
* Other tools - for test case management, documentation management, bug reporting, and
configuration management.