
Testing Methods

1. White Box
Also called ‘Structural Testing’ or ‘Glass Box Testing’, this method tests the code with the system internals in mind. Because the inner workings are examined, it is typically performed by developers.
• Mutation Testing
A number of mutants of the same program are created with minor changes; given the same test cases, none of the mutants' results should coincide with the result of the original program (a small sketch follows).
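A minimal sketch of the idea in Python, using a hypothetical function under test (real mutation tools generate and run the mutants automatically):

    # Hypothetical unit under test.
    def is_positive(x):
        return x > 0

    # A mutant: the same program with one minor change ('>' became '>=').
    def is_positive_mutant(x):
        return x >= 0

    # A test suite is adequate for this mutant only if some test case makes
    # the mutant's result differ from the original's ("killing" the mutant).
    for x in (5, -3, 0):
        if is_positive(x) != is_positive_mutant(x):
            print(f"mutant killed by input x={x}")  # only x=0 kills it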

• Basis Path Testing
Testing is based on flow graph notation and uses cyclomatic complexity and graph matrices to derive independent execution paths (see the sketch below).
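As a small illustration (hypothetical function, not from the original text), cyclomatic complexity can be taken as the number of binary decisions plus one, which equals the number of independent paths a basis path test must exercise:

    # Two binary decisions, so V(G) = 2 + 1 = 3: three independent paths.
    def classify(x):
        if x < 0:           # decision 1
            return "negative"
        if x == 0:          # decision 2
            return "zero"
        return "positive"

    # One test input per basis path.
    assert [classify(x) for x in (-1, 0, 1)] == ["negative", "zero", "positive"]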

• Control Structure Testing
The flow of control through execution paths is considered for testing. It also covers:
Condition testing: branch testing, domain testing.
Data flow testing.
Loop testing: simple, nested, concatenated, and unstructured loops.
2. Gray Box
Similar to black box testing, but the test cases, risk assessments, and test methods are developed based on knowledge of the internal data and flow structures.

3. Black Box
Also called ‘Functional Testing’, as it concentrates on testing the functionality rather than the internal details of the code.
Test cases are designed based on the task descriptions.
• Comparison Testing
The results of test cases are compared with the results of a test oracle.

• Graph Based Testing
Cause-and-effect graphs are generated, and cyclomatic complexity is considered when deriving the test cases.

• Boundary Value Testing
Boundary values of the equivalence classes are tested, since failures tend to cluster at the boundaries, which equivalence class testing alone can miss.

• Equivalence Class Testing
Test inputs are partitioned into equivalence classes such that one representative input validates all the input values in that class (a combined sketch follows).
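A combined sketch of boundary value and equivalence class testing, assuming a hypothetical input field that accepts ages 18 through 65 (the range and validation rule are illustrative only):

    # Assumed validation rule under test.
    def accepts_age(age):
        return 18 <= age <= 65

    # Equivalence classes: one representative input per class.
    equivalence_cases = {30: True, 10: False, 80: False}

    # Boundary values: the edges of the valid class, where failures cluster
    # and which equivalence class testing alone can miss.
    boundary_cases = {17: False, 18: True, 65: True, 66: False}

    for age, expected in {**equivalence_cases, **boundary_cases}.items():
        assert accepts_age(age) == expected, f"failed for age={age}"
    print("all equivalence and boundary cases passed")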

Levels of Testing

1. Unit Testing.
• Unit testing is primarily carried out by the developers themselves.
• Deals with the functional correctness and completeness of individual program units.
• White box testing methods are employed.
2. Integration Testing.
• Integration Testing: Deals with testing when several program units are integrated.
• Regression testing: A change of behavior due to modification or addition is called ‘regression’. Regression testing verifies that such changes have not broken existing behavior.
• Incremental Integration Testing: Checks for bugs that appear when a module is integrated with the existing ones.
• Smoke Testing: A battery of tests that checks the basic functionality of the program. If it fails, the build is not sent for further testing.
3. System Testing.
• System Testing: Deals with testing the whole program system for its intended purpose.
• Recovery testing: The system is forced to fail, and how well it recovers from the failure is checked.
• Security Testing: Checks the capability of the system to defend itself from hostile attacks on programs and data.
• Load & Stress Testing: The system is tested at maximum load, and extreme stress points are identified.
• Performance Testing: Used to determine the processing speed.
• Installation Testing: Installation and uninstallation are verified on the target platform.
4. Acceptance Testing.
• UAT ensures that the project satisfies the customer's requirements.
• Alpha Testing: Testing done by the client at the developer's site.
• Beta Testing: Testing done by the end users at the client's site.
• Long Term Testing: Checks for faults that occur during long-term usage of the product.
• Compatibility Testing: Determines how well the product works across different environments and alongside other products.

Software Testing
Testing involves operating a system or application under controlled conditions and evaluating the results. Every test consists of 3 steps:
Planning: The inputs to be given, the results to be expected, and the procedure to follow are planned.
Execution: Preparing the test environment, completing the test, and determining the test results.
Evaluation: Comparing the actual test outcome with what the correct outcome should have been.
Automated Testing
Automated testing is as simple as removing the "human factor" and letting the computer do the thinking. This can range from integrated debug tests to much more intricate processes. The idea of these tests is to find bugs that are often very challenging or time-intensive for human testers to find. This sort of testing can save many man-hours and can be more "efficient" in some cases. But it costs more to ask a developer to write more lines of code into the game (or an external tool) than it does to pay a tester, and there is always the chance of a bug in the bug-testing program itself. Reusability is another problem; you may not be able to transfer a testing program from one title (or platform) to another. And of course, there is always the "human factor" of testing that can never truly be replaced.
Other successful alternatives or variations: Nothing is infallible. Realistically, a moderate split of human and automated testing can rule out a wider range of possible bugs, rather than relying solely on one or the other. Giving the testers limited access to any automated tools can often help speed up the test cycle.
Release Acceptance Test
The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on each development release to check that each build is stable
enough for further testing. Typically, this test suite consists of entrance and exit test cases plus test cases that check mainstream functions of the program with
mainstream data. Copies of the RAT can be distributed to developers so that they can run the tests before submitting builds to the testing group. If a build does not
pass a RAT test, it is reasonable to do the following:

• Suspend testing on the new build and resume testing on the prior build until another build is received.
• Report the failing criteria to the development team.
• Request a new build.
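A minimal RAT/smoke sketch, assuming a hypothetical build that exposes a health endpoint and a login page; the URLs are placeholders, and a failure maps to the suspend/report/request actions listed above:

    import sys
    import urllib.request

    SMOKE_CHECKS = [
        ("health endpoint", "http://build-under-test.local/health"),
        ("login page",      "http://build-under-test.local/login"),
    ]

    def run_rat():
        for name, url in SMOKE_CHECKS:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if resp.status != 200:
                        print(f"RAT FAIL: {name} returned {resp.status}")
                        return False
            except OSError as err:
                print(f"RAT FAIL: {name} unreachable ({err})")
                return False
        print("RAT passed: build is stable enough for further testing")
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_rat() else 1)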

Functional Acceptance Simple Test


The functional acceptance simple test (FAST) is run on each development release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration). This test suite consists of simple test cases that check the lowest level of functionality for each command, to ensure that task-oriented functional tests (TOFTs) can be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic commands, the context of the feature that is formed by these combined commands, or the end result of the overall feature. For example, FAST for a File/Save As menu command checks that the Save As dialog box displays. However, it does not validate that the overall file-saving feature works, nor does it validate the integrity of saved files.
Deployment Acceptance Test
The configuration on which the Web system will be deployed often differs considerably from the development and test configurations. Testing efforts must consider this in the preparation and writing of test cases for installation-time acceptance tests. This type of test usually includes the full installation of the applications to the targeted environments or configurations.
Task-Oriented Functional Test
The task-oriented functional test (TOFT) consists of positive test cases that are designed to verify program features by checking the task that each feature
performs against specifications, user guides, requirements, and design documents. Usually, features are organized into list or test matrix format. Each feature is
tested for:

• The validity of the task it performs with supported data conditions under supported operating conditions
• The integrity of the task's end result
• The feature's integrity when used in conjunction with related features

Forced-Error Test
The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated; the list is used as a baseline for developing test cases, and an attempt is made to generate each error message in the list. Obviously, tests to validate error-handling schemes cannot be performed until all the error handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes the error messages are not available; the error cases can still be considered by walking through the program and deciding how the program might fail in a given user interface (such as a dialog) or in the course of executing a given task or printing a given report. Test cases should be created for each condition to determine what error message is generated.
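A small forced-error sketch, assuming a hypothetical parser and an already-generated baseline list of its error messages:

    # Baseline list of error messages the program is known to issue.
    BASELINE_ERRORS = {
        "empty input": "error: input is empty",
        "bad number":  "error: not a number",
    }

    # Hypothetical unit under test.
    def parse_quantity(text):
        if not text:
            raise ValueError("error: input is empty")
        if not text.isdigit():
            raise ValueError("error: not a number")
        return int(text)

    # Force each error condition and compare the message to the baseline.
    def force_error(case, bad_input):
        try:
            parse_quantity(bad_input)
            print(f"FET FAIL: {case} raised no error")
        except ValueError as err:
            assert str(err) == BASELINE_ERRORS[case], f"unexpected message for {case}"

    force_error("empty input", "")
    force_error("bad number", "12x")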
Real-world User-level Test
These tests simulate the actions customers may take with a program. Real-world user-level testing often detects errors that are otherwise missed by formal test types.
Exploratory Test
Exploratory Tests do not involve a test plan, checklist, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about
places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled. It can also be reserved for
unforeseen downtime that presents itself during the testing process.
Compatibility and Configuration Testing
Compatibility and configuration testing is performed to check that an application functions properly across various hardware and software environments. Often, the strategy is to run the functional acceptance simple tests or a subset of the task-oriented functional tests on a range of software and hardware configurations. Sometimes, another strategy is to create a specific test that takes into account the error risks associated with configuration differences. For example, you might design an extensive series of tests to check for browser compatibility issues. Software compatibility configurations include variances in OS versions, input/output (I/O) devices, extensions, network software, concurrent applications, online services, and firewalls. Hardware configurations include variances in manufacturers, CPU types, RAM, graphic display cards, video capture cards, sound cards, monitors, network cards, and connection types (e.g., T1, DSL, modem).
Documentation
Testing of reference guides and user guides checks that all features are reasonably documented. Every page of documentation should be keystroke-tested for the following errors:

• Accuracy of every statement of fact
• Accuracy of every screen shot, figure, and illustration
• Accuracy of placement of figures and illustrations
• Accuracy of every tutorial, tip, and instruction
• Accuracy of marketing collateral (claims, system requirements, and screen shots)
• Accuracy of downloadable documentation (PDFs, HTML, or text files)

Online Help Test


Online help tests check the accuracy of help contents, correctness of features in the help system, and functionality of the help system.
Install/uninstall Test
Web systems often require both client-side and server-side installs. Testing of the installer checks that installed features function properly--including icons, support documentation, the README file, and registry keys. The test verifies that the correct directories are created and that the correct system files are copied to the appropriate directories. The test also confirms that various error conditions are detected and handled gracefully.
Testing of the uninstaller checks that the installed directories and files are appropriately removed, that configuration and system-related files are also appropriately removed or modified, and that the operating environment is restored to its original state.
User Interface Tests
Ease-of-use UI testing evaluates how intuitive a system is. Issues pertaining to navigation, usability, commands, and accessibility are considered. User interface functionality testing examines how well a UI operates to specifications.
AREAS COVERED IN UI TESTING

• Usability
• Look and feel
• Navigation controls/navigation bar
• Instructional and technical information style
• Images
• Tables
• Navigation branching
• Accessibility

External Beta Testing


External beta testing offers developers their first glimpse at how users may actually interact with a program. Copies of the program or a test URL, sometimes accompanied by a letter of instruction, are sent out to a group of volunteers who try out the program and respond to questions in the letter. Beta testing is black-box, real-world testing. Beta testing can be difficult to manage, and the feedback that it generates normally comes too late in the development process to contribute to improved usability and functionality. External beta-tester feedback may be reflected in a README file or deferred to future releases.
Security Tests
Security measures protect Web systems from both internal and external threats. E-commerce concerns and the growing popularity of Web-based applications
have made security testing increasingly relevant. Security tests determine whether a company's security policies have been properly implemented; they evaluate
the functionality of existing systems, not whether the security policies that have been implemented are appropriate.
PRIMARY COMPONENTS REQUIRING SECURITY TESTING

• Application software
• Database
• Servers
• Client workstations
• Networks

Unit Tests
Unit tests are positive tests that evaluate the integrity of software code units before they are integrated with other software units. Developers normally perform unit testing. Unit testing represents the first round of software testing--when developers test their own software and fix errors in private.
Click-Stream Testing
Click-stream testing shows which URLs the user clicked, the Web site's user activity by time period during the day, and other data otherwise found in the Web server logs. Popular choices for click-stream statistics include the KeyNote Systems Internet weather report, the WebTrends log analysis utility, and the NetMechanic monitoring service.
Disadvantage: Click-stream statistics reveal almost nothing about the users' ability to achieve their goals using the Web site. For example, a Web site may show a million page views, but 35% of the page views may simply be pages with the message "Found no search results." With click-stream testing, there is no way to tell when users reach their goals.
Click-stream measurement tests
These tests make a request for a set of Web pages and record statistics about the response, including total page views per hour, total hits per week, total user sessions per week, and derivatives of these numbers. The downside is that if your Web-enabled application takes twice as many pages as it should for a user to complete his or her goal, the click-stream test makes it look as though your Web site is popular, while to the user your Web site is frustrating.
HTML content-checking tests
HTML content-checking tests make a request to a Web page, parse the response for HTTP hyperlinks, request those hyperlinks from their associated hosts, and check whether the links return successful or exceptional conditions. The downside is that the hyperlinks in a Web-enabled application are dynamic and can change depending on the user's actions. There is little way to know the context of the hyperlinks in a Web-enabled application, so just checking the links' validity is meaningless if not misleading. These tests were meant to test static Web sites, not Web-enabled applications.
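A content-checking sketch in this spirit, using only the Python standard library; as the text notes, it is suitable for static pages only, and the starting URL is a placeholder:

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    import urllib.request

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        with urllib.request.urlopen(page_url, timeout=10) as resp:
            collector = LinkCollector()
            collector.feed(resp.read().decode(errors="replace"))
        for href in collector.links:
            target = urljoin(page_url, href)
            if not target.startswith(("http://", "https://")):
                continue  # skip mailto:, javascript:, etc.
            try:
                status = urllib.request.urlopen(target, timeout=10).status
                print(f"{status}  {target}")
            except OSError as err:
                print(f"BROKEN  {target} ({err})")

    check_links("http://example.com/")  # placeholder starting page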
Web-Enabled Application Measurement Tests

1. Mean time between failures, in seconds
2. Amount of time in seconds for each user session, sometimes known as a transaction
3. Application availability and peak usage periods
4. Which media elements are most used (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Windows Media Player vs. QuickTime)

Ping tests
Ping tests use the Internet Control Message Protocol (ICMP) to send a ping request to a server. If the ping returns, the server is assumed to be alive and well. The downside is that a Web server will usually continue to answer ping requests even when the Web-enabled application has crashed.
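A sketch contrasting the two probes; the host is a placeholder, and the ping flag shown is the Unix form:

    import subprocess
    import urllib.request

    def host_answers_ping(host):
        # '-c 1' sends a single packet on Unix (use '-n 1' on Windows).
        return subprocess.run(["ping", "-c", "1", host],
                              capture_output=True).returncode == 0

    def application_alive(url):
        # An application-level probe: the check that actually matters.
        try:
            return urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:
            return False

    host = "example.com"  # placeholder host
    print("host answers ping:", host_answers_ping(host))
    print("application alive:", application_alive(f"http://{host}/"))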
Unit Testing
Unit testing finds problems and errors at the module level before the software leaves development. Unit testing is accomplished by adding a small amount of code to the module that validates the module's responses.
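A minimal unit-test sketch in this sense, with a hypothetical module function and a small amount of validating code alongside it:

    import unittest

    # Hypothetical module function under test.
    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_case(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)
        def test_zero_discount(self):
            self.assertEqual(apply_discount(80.0, 0), 80.0)
        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(50.0, 120)

    if __name__ == "__main__":
        unittest.main()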
System-Level Test
System-level tests consist of batteries of tests that are designed to fully exercise a program as a whole and check that all elements of the integrated system function properly.
Functional System Testing
System tests check that the software functions properly from end to end. The components of the system include: a database, Web-enabled application software modules, Web servers, Web-enabled application frameworks, Web browser software, TCP/IP networking routers, media servers to stream audio and video, and messaging services for email.
A common mistake of test professionals is to believe that they are conducting system tests when they are actually testing a single component of the system. For example, checking that the Web server returns a page is not a system test if the page contains only static HTML.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper
execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for
ensuring that this level of testing is performed.
The system testing checklist includes questions about:

• Functional completeness of the system or the add-on module
• Runtime behavior on various operating systems or different hardware configurations
• Installability and configurability on various systems
• Capacity limitations (maximum file size, number of records, maximum number of concurrent users, etc.)
• Behavior in response to problems in the programming environment (system crash, unavailable network, full hard disk, printer not ready)
• Protection against unauthorized access to data and programs

"black-box" (or functional) testing


Black Box Testing is testing without knowledge of the internal workings of the item being tested. The Outside world comes into contact with the test items, --only
through the application interface ,,, an internal module interface, or the INPUT/OUTPUT description of a batch process. They check whether interface definitions
are adhered to in all situation and whether the product conform to all fixed requirements. Test cases are created based on the task descriptions.
Black Box Testing assumes that the tester does not know anything about the application that is going to be tested. The tester needs to understand what the
program should do, and this is achieved through the business requirements and meeting and talking with users.
Funcional tests: This type of tests will evaluate a specific operating condition using inputs and validating results. Functional tests are designed to test boundaries.
A combination of correst and incorrect data should be used in this type of test.

Scalability and Performance Testing
Scalability and performance testing is the way to understand how the system will handle the load caused by many concurrent users. In a Web environment, concurrent use is measured as simply the number of users making requests at the same time.
Performance testing is designed to measure how quickly the program completes a given task. The primary objective is to determine whether the processing speed is acceptable in all parts of the program. If explicit requirements specify program performance, then performance tests are often performed as acceptance tests.
As a rule, performance tests are easy to automate. This makes sense above all when you want to make a performance comparison of different system conditions while using the user interface. The capture and automatic replay of user actions during testing eliminates variations in response times.
This type of test should be designed to verify response and execution times. Bottlenecks in a system are generally found during this stage of testing.
Stress Testing
Stress testing overwhelms the product to assess performance, reliability, and efficiency; it finds the breakpoint at which the system fails, and it increases the load progressively to gather the information needed to determine the maximum number of concurrent users.
Stress tests force programs to operate under limited resource conditions. The goal is to push the upper functional limits of a program to ensure that it can function correctly and handle error conditions gracefully. Examples of resources that may be artificially manipulated to create stressful conditions include memory, disk space, and network bandwidth. If other memory-oriented tests are also planned, they should be performed here as part of the stress test suite. Stress tests can be automated.
Breakpoints reveal the capabilities and weaknesses of the product:

• High volumes of data
• Device connections
• Long transaction chains

Stress Test Environment:


As you set up your testing environment for a stress test, you need to make sure you can answer the following questions:

• Will my test be able to support all the users and still maintain performance?
• Will my test be able to simulate the number of transactions that pass through in a matter of hours?
• Will my test be able to uncover whether the system will break?
• Will my server crash if the load continues over and over?
The test should be set up so that you can simulate the load; for example:

• If you have a remote Web site, you should be able to monitor up to four Web sites or URLs.
• There should be a way to monitor the load intervals.
• The load test should be able to simulate SSL (secure server) connections.
• The test should be able to simulate a user submitting form data (GET method).
• The test should be set up to simulate and authenticate keyword verification.
• The test should be able to simulate up to six email or pager mail addresses, and an alert should occur when there is a failure.

It is important to remember when stressing your Web site to give a certain number of users a page to stress test and give them a certain amount of time in which to
run the test.
Some of the key data features that can help you measure this type of stress test, determine the load, and uncover bottlenecks in the system are:

• Amount of memory available and used
• The processor time used
• The number of requests per second
• The amount of time it takes ASP pages to be set up
• Server timing errors

Load Testing
The process of modeling application usage conditions and running them against the application and system under test, to analyze the application and system and determine capacity, throughput, speed, transaction-handling capability, scalability, and reliability while under stress.
This type of test is designed to identify possible overloads to the system, such as too many users signed on to the system, too many terminals on the network, or a network system that is too slow.
Load testing is a simulation of how a browser will respond to intense use by many individuals. The Web sessions can be recorded live and set up so that the test can be run during peak times and also during slow times. The following are two different types of load tests:
Single session - A single session should be set up on a browser that will have one or multiple responses. The timing of the data should be put in a file. After the test, you can set up a separate file for report analysis.
Multiple session - A multiple session should be developed on multiple browsers with one or multiple responses. Multivariate statistical methods may be needed for a complex but general performance model.
When performing stress testing, looping transactions back on themselves so that the system stresses itself simulates stress loads and may be useful for finding synchronization problems and timing bugs, Web priority problems, memory bugs, and Windows API problems. For example, you may want to simulate an incoming message that is then put out on a looped-back line; this in turn will generate another incoming message. Then you can use another system of comparable size to create the stress load.
Memory leaks are often found under stress testing. A memory leak occurs when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after several iterations available memory is reduced until the system fails.
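A multiple-session load sketch along the lines described above: several concurrent "browsers" hit the same URL while response times are recorded. The URL, user count, and request count are placeholders:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL, USERS, REQUESTS_PER_USER = "http://example.com/", 20, 5

    def user_session(user_id):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(URL, timeout=30).read()
                timings.append(time.perf_counter() - start)
            except OSError:
                timings.append(None)  # failed request
        return timings

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(user_session, range(USERS)))

    ok = [t for session in results for t in session if t is not None]
    failed = sum(1 for session in results for t in session if t is None)
    if ok:
        print(f"{len(ok)} ok, {failed} failed, avg {sum(ok)/len(ok):.3f}s")
    else:
        print(f"all {failed} requests failed")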

Peak Load and Testing Parameters:


Determining your peak load is important before beginning the assessment of the Web site test. It may mean more than just using user requests per second to stress the system. There should be a combination of determinants, such as requests per second, processor time, and memory usage. There is also the consideration of the type of information that is on your Web page, from graphics and code processing, such as scripts, to ASP pages. Then it is important to determine what is fast and what is slow for your system. The type of connection can be a critical component here, such as T1 or T3 versus a modem hookup. After you have selected your threshold, you can stress your system to additional limits.
As a tester you need to set up test parameters to make sure you can log the number of users coming into and leaving the test. This should be started in a small way and steadily increased. The test should also begin by selecting a test page that may not have a large amount of graphics, steadily increasing the complexity of the test by increasing the number of graphics and image requests. Keep in mind that images will take up additional bandwidth and resources on the server but do not really have a large impact on the server's processor.
Another important item to remember is that you need to account for the length of time the user will spend surfing each page. As you test, you should set up a log to determine the approximate time spent on each page, whether it is 25 or 30 seconds. It may be recorded that each user spends at least 30 seconds on each page, which will produce a heightened load on the server as requests are queued; this queuing can be analyzed as the test continues.
Load/Volume Test
Load/volume tests study how a program handles large amounts of data, excessive calculations, and excessive processing. These tests do not necessarily have to push or exceed upper functional limits. Load/volume tests can, and usually must, be automated.
Focus of Load/Volume Testing

• Pushing through large amounts of data with extreme processing demands
• Requesting many processes simultaneously
• Repeating tasks over a long period of time

Load/volume tests, which involve extreme conditions, are normally run after the execution of feature-level tests, which prove that a program functions correctly under normal conditions.

Difference between Load and Stress testing
The idea of stress testing is to find the breaking point in order to find bugs that will make that break potentially harmful. Load testing is merely testing at the highest transaction arrival rate in performance testing to see resource contention, database locks, etc.
Web Capacity Testing Load and Stress
The performance of the load or stress test Web site should be monitored with the following in mind:

• The load test should be able to support all browsers.
• The load test should be able to support all Web servers.
• The tool should be able to simulate up to 500 users or playback machines.
• The tool should be able to run on Windows NT, Linux, Solaris, and most Unix variants.
• There should be a way to simulate various users at different connection speeds.
• After the tests are run, you should be able to report the transactions, URLs, and number of users who visited the site.
• The test cases should be assembled in a like fashion to set up test suites.
• There should be a way to test the different server and port addresses.
• There should be a way to account for the user's cookies.

Performance Test
The primary goal of performance testing is to develop effective enhancement strategies for maintaining acceptable system performance. Performance testing is a capacity analysis and planning process in which measurement data are used to predict when load levels will exhaust system resources.

The Mock Test


It is a good idea to set up a mock test before you begin your actual test. This is a way to measure the server's stressed performance. As you progress with your stress testing, you can set up a measurement of metrics to determine the efficiency of the test.
After the initial test, you can determine the breaking point for the server. It may be a processor problem or even a memory problem. You need to be able to check your log to determine the average amount of time that it takes your processor to perform the test. Running graphics or even ASP pages can cause processor problems and a limitation every time you run your stress test.
Memory tends to be a problem with the stress test. This may be due to a memory leak or a lack of memory. You need to log and monitor the amount of disk capacity during the stress test. As mentioned earlier, bandwidth can account for the slowdown of Web site processing speed. If the test hangs and there is a large waiting period, your processor cannot handle the amount of stress on the system.
Simulate Resources
It is important to be able to run the system in a high-stress format so that you can actually simulate the resources and understand how to handle a specific load. For example, a bank transaction processing system may be designed to process up to 150 transactions per second, whereas an operating system may be designed to handle up to 200 separate terminals. The different tests need to be designed to ensure that the system can process the expected load. This type of testing usually involves planning a series of tests where the load is gradually increased to reflect the expected usage pattern. The stress tests can then steadily increase the load on the system beyond the maximum design load until the system fails.
This type of testing has the dual function of testing the system for failure and looking for the combination of events that occur when a load is placed on the server. Stress testing can then determine whether overloading the system results in loss of data or loss of user service to the customers. The use of stress testing is particularly relevant to an e-commerce system with a Web database.
Increase Capacity Testing
When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that will simulate the same items on the ASP page and have it send the information to a test bed with a process that completes just a small data output. By doing this, you will have your processor still stressing the system but not taking up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to work. Dividing the requests per second by the total number of users or threads gives the throughput per user; it tells you at what point the server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests per second; with 100 users it is 10 requests per second; with 200 users it is 15 requests per second; and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:
05/50 = 0.100
10/100 = 0.100
15/200 = 0.075
20/300 ≈ 0.067
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as
long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some
indication of how much leeway you have to handle expected peaks.
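The per-user ratio from the example above, computed directly:

    measurements = {50: 5, 100: 10, 200: 15, 300: 20}  # users -> requests/sec

    for users, rps in measurements.items():
        print(f"{users:>3} users: {rps / users:.3f} requests/sec per user")
    # Prints 0.100, 0.100, 0.075, 0.067: per-user throughput drops as load grows.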

Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?
Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?
Speed testing
Is the Web-enabled application taking too long to respond?

Boundary Test
Boundary tests are designed to check a program's response to extreme input values, including the extreme output values those inputs generate. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing; if so, it is useful to determine how to drive it through extremes and special conditions such as zero or overflow.
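A sketch of driving an intermediate variable to its extreme from non-extreme inputs, assuming a hypothetical 16-bit accumulator limit:

    INT16_MAX = 32767  # assumed limit on the intermediate value

    def scaled_sum(a, b, scale):
        total = (a + b) * scale  # the intermediate variable to watch
        if total > INT16_MAX:
            raise OverflowError("accumulator overflow")
        return total

    # Mid-range inputs can still push the intermediate over its boundary:
    for a, b, scale in [(100, 100, 163), (100, 100, 164)]:
        try:
            print(a, b, scale, "->", scaled_sum(a, b, scale))
        except OverflowError as err:
            print(a, b, scale, "->", err)  # (100+100)*164 = 32800 overflows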
Boundary timing testing
What happens when your Web-enabled application request times out or takes a really long time to respond?
Regression testing
Did a new build break an existing function? Testing is repeated after changes to manage the risks related to product enhancement.
A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in subsequent program versions.
Regression testing: After every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions are all working correctly.
Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:

• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or during the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that the proven correctly functional features are still working as expected.

Database Testing

Items to check when testing a database

What to test      Environment                Tools/technique
Search results    System test environment    Black box and white box techniques
Response time     System test environment    Syntax testing / functional testing
Data integrity    Development environment    White box testing
Data validity     Development environment    White box testing

Query response time


The turnaround time for responding to queries in a database must be short; therefore, query response time is essential for online transactions. The results from this test will help identify problems, such as possible bottlenecks in the network, specific queries, the database structure, or the hardware.

Data integrity
Data stored in the database should include such items as the catalog, pricing, shipping tables, tax tables, order database, and customer information. Testing must verify the integrity of the stored data. Testing should be done on a regular basis because data changes over time.

Data integrity tests


Data integrity can be tested as follows to ensure that the data is valid and not corrupt (a small sketch follows the list):

• Test the creation, modification, and deletion of data in tables as specified in the business requirements.
• Test to make sure that sets of radio buttons represent a fixed set of values. You should also check for NULL or EMPTY values.
• Test to make sure that data is saved to the database and that each value gets saved fully. You should watch for the truncation of strings and check that numeric values are not rounded off.
• Test to make sure that default values are stored and saved.
• Test compatibility with old data. You should ensure that updates do not affect the data you already have on file in your database.
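A small sketch of two of these checks (round-trip without truncation, and default values) against an in-memory SQLite database; the schema is illustrative:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE customer (
                        id      INTEGER PRIMARY KEY,
                        name    TEXT NOT NULL,
                        country TEXT DEFAULT 'US')""")

    long_name = "x" * 500
    conn.execute("INSERT INTO customer (name) VALUES (?)", (long_name,))
    name, country = conn.execute("SELECT name, country FROM customer").fetchone()

    assert name == long_name, "string was truncated or altered"
    assert country == "US", "default value was not applied"
    print("integrity checks passed")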

Data validity
The most common data errors are due to incorrect data entry; these are called data validity errors.
Recovery testing
• The system recovers from faults and resumes processing within a predefined period of time.
• The system is fault-tolerant, which means that processing faults do not halt the overall functioning of the system.
• Data recovery and restart are correct in case of automatic recovery. If recovery requires human intervention, the mean time to repair the database is within predefined acceptable limits.

When testing a SQL server

• If the Web site publishes from inside the SQL Server straight to a Web page, is the data accurate and of the correct data type?
• If the SQL Server reads from a stored procedure to produce a Web page, and the stored procedure is changed, does the data on the page change?
• If you are using FrontPage or InterDev, is the data connection to your pages secure?
• Does the database have scheduled maintenance with a log, so that testers can see changes or errors?
• Can the tester check to see how backups are being handled?
• Is the database secure?

When testing an Access database

• If the database is creating Web pages from the database to a URL, is the information correct and updated? If the pages are not dynamic or Active Server Pages, they will not update automatically.
• If the tables in the database are linked to another database, make sure that all the links are active and giving relevant information.
• Are fields such as zip code, phone numbers, dates, currency, and social security number formatted properly?
• If there are formulas in the database, do they work? How will they handle updates if numbers change (for example, updating taxes)?
• Do the forms populate the correct tables?
• Is the database secure?

When testing a FoxPro database

• If the database is linked to other databases, are the links secure and working?
• If the database publishes to the Internet, is the data correct?
• When data is deployed, is it still accurate?
• Do the queries give accurate information to the reports?
• If the database performs calculations, are the calculations accurate?

Other important database and security features

• Credit card transactions
• Shopping carts
• Payment transaction security

Secure Sockets Layer (SSL)


SSL is the leading security protocol on the Internet.
When an SSL session is started, the server sends its public key to the browser, which the browser uses to send a randomly generated secret key back to the server, establishing a secret key exchange for that session.
SSL is a protocol submitted to the WWW Consortium (W3C) working group on security for consideration as a standard security handshake used to initiate the TCP/IP connection. This handshake results in the client and server agreeing on the level of security that they will use, and it fulfills any authentication requirements for the connection. SSL's role is to encrypt and decrypt the byte stream of the application protocol being used. This means that all the information in both the HTTP request and the HTTP response is fully encrypted, including the URL the client is requesting, any submitted form contents (such as credit card numbers), any HTTP access authorization information (user names and passwords), and all the data returned from the server to the client.
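A small sketch that confirms a TLS/SSL handshake succeeds and reports what was negotiated, using Python's standard library; the host is a placeholder:

    import socket
    import ssl

    host = "example.com"  # placeholder host
    context = ssl.create_default_context()  # verifies the certificate chain

    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
            print("cipher suite:", tls.cipher()[0])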

Transport Layer Security (TLS)


TLS is a major security standard on the Internet. TLS is backward compatible with SSL and uses Triple Data Encryption Standard (3DES) encryption.
Three Waves of Software Development

1. Desktop
Development style: An event-driven framework surrounds individual procedural functions. The common style is to have a hierarchical set of menus presenting a set of commands.
Test style: Test each function against a written functional specification.

2. Client/Server
Development style: Structured programming commands organized into hierarchical menu lists combine a common drop-down menu bar with graphical windows containing controls.
Test style: Test each function against a written functional specification, plus throughput to server clients.

3. Web-enabled
Development style: Visual integrated development tools facilitate object-oriented design patterns. The common style is to provide multiple paths to accomplishing tasks. (The net effect is that you can't just walk through a set of hierarchical menus and arrive at the same test result any more.)
Test style: Capture/record/playback watches how an application is used and then provides reports comparing how the playback differed from the original recording.

Desktop application development and Test automation


The software was written to provide a friendly interface for information workers: spreadsheet jockeys, business people needing written reports, and game players. The full spectrum of desktop software could pretty well be categorized into spreadsheet, word processor, database, and entertainment categories, since desktop computers were rarely networked to other information resources. Desktop applications used the keyboard, and later a mouse, to navigate through windows and drop-down menus. Inside a desktop application software package one would find an event-driven framework surrounding individual procedural functions. The automation focused on improving the time it took to test a desktop application for functionality. The test utilities link into desktop applications and try each command as though a user were accessing the menu and window commands. Most QA technicians testing a desktop application compare the function of all the menus and windows to a written functional specification document. The variation from the document to the performance shows the relative health of a desktop application.

Client/Server Development and Test automation

The original intent for client/server applications was to separate presentation logic from business logic. In an ideal system design, the client was responsible for presenting the user interface and command elements (drop-down menus, buttons, controls) and for displaying results in a set of windows, charts, and dials. The client connected to a server to process functions, and the server responded with data.
In a client/server environment, the protocols are cleanly defined so that all the clients use the same protocols to communicate with the server.
The client-side frameworks provide the same functionality as desktop application frameworks, plus most of the communication code needed to issue commands to the server and the code needed to automatically update a client with new functions received from the server. The server-side frameworks provide the code needed to receive and handle requests from multiple clients, plus the code to connect to databases for data persistence and to remote information providers. Additionally, these frameworks need to handle stateful transactions and intermittent network connections. Stateful transactions require multiple steps to accomplish a task.
Client/server applications are normally transactional in nature, and usually several interactions with the user are needed to finish a single request. For example, in a stock trading application the user begins a transaction by identifying themselves to the server, looking up an order code, and then submitting a request to the server, which receives the request and presents the results to the user. The client-side application normally knows something about the transaction - for example, it will normally store the user identification and a session code such as a cookie value across the user's interaction with the server-based application. Users like it better when the client-side application knows about the transaction, because each step in a request can be optimized in the client application. For example, in the stock trading example the client application could calculate a stock trade commission locally without having to communicate with the server.
Client/server application test automation provides the functionality of desktop application test automation plus these:

• Client/server applications operate in a network environment. The tests need not only to check the function of an application, but also to test how the application handles slow or intermittent network performance.
• Automated tests are ideal for determining the number of client applications a server is able to efficiently handle at any given time.
• The server is usually a middle tier between the client application and several data sources. Automated tests need to check the server for correct functionality while it communicates with the data sources.

Black-Box testing on Windows-based Applications

Editable Fields Checking and Validation:

• Valid/invalid characters/strings data in all editable fields
• Valid minimum/maximum/mid-range values in fields
• Null strings (or no data) in required fields
• Record length (character limit) in text/memo fields
• Cut/copy/paste into/from fields when possible

Non-Editable Fields Checking:

• Check all text/spelling in warnings and error messages/dialogs
• Invoke/check all menu items and their options

Application Usability:

• Appearance and layout (placement and alignment of objects on screen)
• User interface test (open all menus, check all items)
• Basic functionality checking (File+Open+Save, etc.)
• Right mouse click sensitivity
• Resize/minimize/maximize/restore application windows (check minimum application size)
• Scrollability when applicable (scrollbars, keyboard, autoscrolling)
• Keyboard and mouse navigation, highlighting, dragging, drag/drop
• Print in landscape and portrait modes
• Check F1, What's This, and Help menu
• Shortcut and accelerator keys
• Tab key order and navigation in all dialog boxes and menus

Web-Enabled Development and Test automation

Web-enabled applications go further in these areas:

• Web-enabled applications are meant to be stateless. HTTP was designed to be stateless. Each request from a Web-enabled application is meant to be atomic and not rely on any previous requests. This has huge advantages for system architecture and datacenter provisioning. When requests are stateless, then any server can respond to the request and any request handler on any server may service the request.
• Web-enabled applications are platform independent. The client application may be written for Windows, Macintosh, Linux, or any other platform that is capable of implementing the command protocol and network connection to the server.
• Web-enabled applications expect the client application to provide presentation rendering and simple scripting capabilities. The client application is usually a browser; however, it may also be a dedicated client application such as a retail cash register, a Windows-based data analysis tool, or an electronic address book in your mobile phone.

The missing context in Web-enabled application test automation means that software developers and QA technicians must manually script tests for each Web-enabled application. Plus, they need to maintain the test scripts as the application changes. Web-enabled application test automation tools focus on making the script writing and maintenance tasks easier. The test automation tools offer these features:

• A friendly, graphical user interface to integrate the record, edit, and run-time script functions.
• A recorder that watches how an application is used and writes a test script for you.
• A playback utility that drives a Web-enabled application by processing the test script and logging the results. The playback utility also provides the facility to play back several concurrently running copies of the same script to check the system for scalability and load testing.
• A report utility to show how the playback differed from the original recording. The differences may be slower or faster performance times, errors, and incomplete transactions.

Testing Web Site Applications

Many developers and testers are making the transition from testing traditional client/server, PC, and/or mainframe systems to testing rapidly changing Web applications.
Web test: This testing focuses on how well all parts of the Web site hold together, whether links inside and outside the Web site are working, and whether all parts of the Web site are connected.
A Web site's server side consists of three back-end layers:

• Web server
• Application server
• Data layer

Black-box testing for Web-based applications

1. Browser functionality

• Is the browser compatible with the application design? There are many different types of browsers available.

GUI design components

• Are the scroll bars, buttons, and frames compatible with the browser and functional?
• Check the functionality of the scroll bars on the interface of the Web page to make sure the user can scroll through items and make the correct selection from a list of items.
• The buttons on the interface need to be functional, and each hyperlink should go to the correct page.
• If frames are used on the interface, they should be checked for the correct size and whether all of the components fit within the viewing screen of the monitor.

2. User interface
One of the reasons the Web browser is being used as the front end to applications is its ease of use. Users who have been on the Web before will probably know how to navigate a well-built Web site. While you are concentrating on this portion of testing, it is important to verify that the application is easy to use. Many will believe that this is the least important area to test, but if you want to be successful, the site had better be easy to use.
3. Instructions
You want to make sure there are instructions. Even if you think the Web site is simple, there will always be someone who needs some clarification. Additionally, you need to test the documentation to verify that the instructions are correct. If you follow each instruction, does the expected result occur?

4. Site map or navigational bar


Does the site have a map? Sometimes power users know exactly where they want to go and don't want to wade through lengthy introductions. Or new users get
lost easily. Either way a site map and/or an ever-present navigational bar can help guide the user. You need to verify that the site map is correct. Does each link
on the map actually exist? Are there links on the site that are not represented on the map? Is the navigational bar present on every screen? Is it consistent? Does
each link work on each page? Is it organized in an intuitive manner?

5. Content
To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need
some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations
department on the exact wording of the content.
You also want to make sure the site looks professional. Overuse of bold text, big fonts and blinking (ugh) can turn away a customer quickly. It might be a good idea
to consult a graphic designer to look over the site during User Acceptance Testing. You wouldn't slap together a brochure with bold text everywhere, so you want
to handle the web site with the same level of professionalism.
Finally, you want to make sure that any time a web reference is given that it is hyperlinked. Plenty of sites ask you to email them at a specific address or to
download a browser from an address. But if the user can't click on it, they are going to be annoyed.

6. Colors/backgrounds
Ever since the web became popular, everyone thinks they are graphic designers. Unfortunately, some developers are more interested in their new backgrounds,
than ease of use. Sites will have yellow text on a purple picture of a fractal pattern. (If you've never seen this, try most sites at GeoCities or AOL.) This may seem
"pretty neat", but it's not easy to use.
Usually, the best idea is to use little or no background. If you have a background, it might be a single color on the left side of the page, containing the navigational
bar. But, patterns and pictures distract the user.
7. Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply
show them. However, bandwidth is precious to the client and the server, so you need to conserve memory usage. Do all the images add value to each page, or do
they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less?
In general, you don't want large pictures on the front page, since most users who abandon a page due to a large load will do it on the front page. If you can get
them to see the front page quickly, it will increase the chance they will stay.

8. Tables
You also want to verify that tables are set up properly. Does the user constantly have to scroll right to see the price of the item? Would it be more effective to put the price closer to the left and put minuscule details to the right? Are the columns wide enough, or does every row have to wrap around? Are certain columns considerably longer than others?

9. Wrap-around
Finally, you will want to verify that wrap-around occurs properly. If the text refers to "a picture on the right", make sure the picture is on the right. Make sure that
widowed and orphaned sentences and paragraphs don't layout in an awkward manner because of pictures.

10. Functionality
The functionality of the web site is why your company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does
stuff".

11. Links
A link is the vehicle that gets the user from page to page. You will need to verify two things for each link: that the link brings you to the page it said it would and that the pages you are linking to actually exist. It may sound a little silly, but I have seen plenty of Web sites with internal broken links.

12. Forms
When a user submits information through a form it needs to work properly. The submit button needs to work. If the form is for an online registration, the user
should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer
should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret
and use that information.

13. Data verification

If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid
values. If this is the case, you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make
sure the system accepts it).
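A sketch of that State-field check; the list here is abbreviated and illustrative:

    VALID_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}  # abbreviated list

    def state_is_valid(value, valid=VALID_STATES):
        return value.strip().upper() in valid

    assert state_is_valid("ca")
    assert not state_is_valid("ZZ")

    # Add a bogus value: if the system now accepts it, the lookup is live
    # (the program really consults the list rather than hard-coded logic).
    assert state_is_valid("ZZ", valid=VALID_STATES | {"ZZ"})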

14. Cookies
Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make
sure the cookies work. If the cookie is used for statistics, verify that totals are being counted properly. And you'll probably want to make sure those cookies are
encrypted too; otherwise people can edit their cookies and skew your statistics.
15. Application specific functional requirements
Most importantly, you want to verify the application-specific functional requirements. Try to perform all the functions a user would: place an order, change an order, cancel an order, check the status of an order, change shipping information before an order is shipped, pay online, ad nauseam.
This is why your users will show up on your doorstep, so you need to make sure you can do what you advertise.
16. Interface Testing
Many times, a web site is not an island. The site will call external servers for additional data, verification of data, or fulfillment of orders.

17. Server interface


The first interface you should test is the interface between the browser and the server. You should attempt transactions, then view the server logs and verify that
what you're seeing in the browser is actually happening on the server. It's also a good idea to run queries on the database to make sure the transaction data is
being stored properly.

18. External interfaces


Some web systems have external interfaces. For example, a merchant might verify credit card transactions in real time in order to reduce fraud. You will need to
send several test transactions using the web interface. Try credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using
a Discover card. (A script can check the first digit of the credit card number: 3 for American Express, 4 for Visa, 5 for MasterCard, or 6 for Discover, before the
transaction is sent.) Basically, you want to make sure that the software can handle every possible message returned by the external server.
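As a rough sketch of that first-digit check (in C; the card number shown is the well-known Visa test number, and real validation would also check length and the Luhn checksum):

    #include <stdio.h>

    /* Map the first digit of a card number to the card network */
    static const char *card_type(const char *number)
    {
        switch (number[0]) {
        case '3': return "American Express";
        case '4': return "Visa";
        case '5': return "MasterCard";
        case '6': return "Discover";
        default:  return "Unknown";
        }
    }

    int main(void)
    {
        printf("%s\n", card_type("4111111111111111"));  /* prints Visa */
        return 0;
    }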

19. Error handling


One of the areas most often left untested is interface error handling. Usually we try to make sure our own system can handle all of its errors, but we rarely plan for the
other systems' errors or for the unexpected. Try leaving the site mid-transaction: what happens? Does the order complete anyway? Try losing the internet
connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these
situations? Are charges still made to credit cards? If the interruption is not user-initiated, does the order get stored so customer service reps can call back if the
user doesn't return to the site?

20. Compatibility
You will also want to verify that the application works on the machines your customers will be using. If the product is going to the web for the world to use, you
will need to try different combinations of operating system, browser, video setting, and modem speed.

21. Operating systems


Does the site work for both Macs and IBM-compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make
sure that the site doesn't use plug-ins only available for one OS, if your users will use both.

22. Browsers
Does your site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work for certain browsers. Make sure there are alternate (ALT) tags
for images, in case someone is using a text browser. If you're using SSL security, you only need to check browsers of version 3.0 and higher, but verify that there is a
message for those using older browsers.

23. Video settings


Does the layout still look good at 640x480 or 800x600? Are fonts too small to read? Are they too big? Does all the text and graphic alignment still work?

24. Modem/connection speeds


Does it take 10 minutes to load a page over a 28.8 modem when you tested hooked up to a T1? Users will expect long download times when they are grabbing
documents or demos, but not on the front page. Make sure that the images aren't too large. Make sure that marketing didn't put in 50k of size -6 font keywords for
search engines.
25. Printers
Users like to print. The concept behind the web should save paper and reduce printing, but most people would still rather read on paper than on the screen, so you
need to verify that the pages print properly. Sometimes images and text align differently on the printed page than on the screen. You need to at least verify that
order confirmation screens can be printed properly.

26. Combinations
Now you get to try combinations. Maybe 800x600 looks good on the Mac but not on the IBM. Maybe IBM with Netscape works, but not with Lynx.
If the web site will be used internally, testing might be a little easier. If the company has an official web browser choice, then you just need to verify that it
works for that browser. If everyone has a T1 connection, then you might not need to check load times. (But keep in mind that some people may dial in from home.)
With internal applications, the development team can make disclaimers about system requirements and only support those setups. But ideally, the site
should work on all machines, so you don't limit growth and changes in the future.

27. Load/Stress
You will need to verify that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of
continuous use. Accessibility is extremely important to users: if they get a "busy signal", they hang up and call the competition. The system must be
checked not only so your customers can gain access; many times crackers will attempt to gain access to a system by overloading it. For the sake of security, your
system needs to know what to do when it's overloaded, not simply blow up.
Many users at the same time
If the site has just put up the results of a national lottery, it had better be able to handle millions of users right after the winning numbers are posted. A load test tool
can simulate a large number of users accessing the site at the same time.
Large amount of data from each user
Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 different books? Or what if
grandma wants to send a gift to each of her 50 grandchildren for Christmas (separate mailing addresses for each, of course)? Can your system handle large
amounts of data from a single user?
Long period of continuous use
If the site is intended to take orders for flower deliveries, then it had better be able to handle the week before Mother's Day. If the site offers web-based email, it had better
be able to run for months or even years without downtime.
You will probably want to use an automated test tool to implement these types of tests, since they are difficult to do manually. Imagine coordinating 100 people to
hit the site at the same time. Now try 100,000 people. Generally, the tool will pay for itself the second or third time you use it. Once the tool is set up, running
another test is just a click away.

28. Security
Even if you aren't accepting credit card payments, security is very important. The web site will be the only exposure some customers have to your company. And,
if that exposure is a hacked page, they won't feel safe doing business with you.
29. Directory setup
The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so that a directory listing doesn't
appear.
One company I was consulting for didn't observe this principle. I right-clicked on an image and found the path "...com/objects/images". I went to that directory
manually and found a complete listing of the images on that site. That wasn't too important. Next, I went up to the parent directory, "...com/objects", and I hit the
jackpot. There were plenty of goodies, but what caught my eye were the historical pages. They had changed their prices every month and kept the old pages. I
browsed around and could figure out their profit margin and how low they were willing to go on a contract. If a potential customer did a little browsing first, they
would have had a definite advantage at the bargaining table.
30. SSL
Many sites use SSL for secure transactions. You know you have entered an SSL site because there will be a browser warning and the HTTP in the location field of
the browser will change to HTTPS. If your development group uses SSL, you need to make sure there is an alternate page for browsers with versions below 3.0,
since SSL is not compatible with those browsers. You also need to make sure that there are warnings when you enter and leave the secured site. Is there a
timeout limit? What happens if the user tries a transaction after the timeout?

31. Logins
In order to validate users, several sites require customers to log in. This makes things easier for the customer, since they don't have to re-enter personal information
every time. You need to verify that the system does not allow invalid usernames/passwords and that it does allow valid logins. Is there a maximum number of failed
logins allowed before the server locks out the current user? Is the lockout based on IP? What if the maximum number of failed login attempts is three, and you try three, but
then enter a valid login? What are the rules for password selection?

32. Log files


Behind the scenes, you will need to verify that the server logs are working properly. Does the log track every transaction? Does it track unsuccessful login attempts?
Does it track stolen credit card usage? What does it store for each transaction? The IP address? The user name?

33. Scripting languages


Scripting languages are a constant source of security holes. The details are different for each language. Some exploits allow access to the root directory. Others
allow access to the mail server. Find out what scripting languages are being used and research the loopholes. It might also be a good idea to subscribe to a
security newsgroup that discusses the language you will be testing.

34. Web Server Testing Features

• Feature: Definition
• Transactions: The number of times the test script requested the current URL
• Elapsed time: The number of seconds it took to run the request
• Bytes transferred: The total number of bytes sent or received, less HTTP headers
• Response time: The average time it took for the server to respond to each individual request
• Transaction rate: The average number of transactions the server was able to handle per second
• Transferance: The average number of bytes transferred per second
• Concurrency: The average number of simultaneous connections the server was able to handle during the test session
• Status code nnn: How many times a particular HTTP status code was seen
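The last few features are simple ratios, so they can be recomputed from the raw counters as a sanity check on whatever tool produced the report. A back-of-the-envelope sketch in C, with illustrative numbers:

    #include <stdio.h>

    int main(void)
    {
        double transactions = 1200.0;  /* completed requests             */
        double elapsed_s    = 60.0;    /* test duration in seconds       */
        double bytes        = 4.8e6;   /* bytes moved, less HTTP headers */

        printf("Transaction rate: %.1f trans/sec\n", transactions / elapsed_s);
        printf("Transferance:     %.1f bytes/sec\n", bytes / elapsed_s);
        return 0;
    }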

What is localization (L10N)?

• Adapting a (software) product to a local or regional market.
• Goal: appropriate linguistic and cultural aspects.
• Performed by translators, localizers, language engineers.

Localization
The aspect of development and testing relating to the translation of the software and its presentation to the end user. This includes translating the program,
choosing appropriate icons and graphics, and other cultural considerations. It may also include translating the program's help files and documentation. You
could think of localization as pertaining to the presentation of your program: the things the user sees.
Internationalization (I18N)

• Developing a (software) product in such a way that it will be easy to adapt it to other markets (languages and cultures).
• Goal: eliminate the need to reprogram or recompile the original program.
• Carried out by software development in conjunction with localization.
• Handling foreign text and data within a program.
• Sorting, importing and exporting text and data; correct handling of currency and date and time formats; string parsing; upper and lower case handling.
• Separating strings from the source code, and making sure that the foreign-language strings have enough space in your user interface to be displayed
correctly.

Internationalization
The aspect of development and testing relating to handling foreign text and data within a program. This includes sorting, importing and exporting text and
data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth. It also includes the task of separating
strings (or user interface text) from the source code, and making sure that the foreign-language strings have enough space in your user interface to be displayed
correctly. You could think of internationalization as pertaining to the underlying functionality and workings of your program.
What do I18N/L10N stand for?
These two abbreviations mean internationalization and localization respectively. Using the word "internationalization" as an example, here is how these
abbreviations are derived. First, you take the first letter of the word you want to abbreviate; in this case the letter "I". Next, you take the last letter in the word; in this
case the letter "N". These become the first and last letters of the abbreviation. Finally, you count the remaining letters in the word between the first and last letter.
In this case, "nternationalizatio" has 18 characters in it, so we plug the number 18 between the "I" and "N"; thus I18N.
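The derivation is mechanical enough to express in a few lines of C (a toy sketch):

    #include <stdio.h>
    #include <string.h>

    /* Print the numeronym of a word: first letter, the count of the
       letters in between, then the last letter. */
    static void numeronym(const char *word)
    {
        size_t len = strlen(word);
        if (len > 2)
            printf("%c%zu%c\n", word[0], len - 2, word[len - 1]);
    }

    int main(void)
    {
        numeronym("internationalization");  /* prints i18n */
        numeronym("localization");          /* prints l10n */
        return 0;
    }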
I18N and L10N

• I18N and L10N comprise the whole of the effort involved in enabling a product.
• I18N is stuff you have to do once.
• L10N is stuff you have to do over and over again.
• The more stuff you push into I18N and out of L10N, the less complicated and expensive the process becomes.
Globalization (G11N)

• Activities performed for the purpose of marketing a (software) product in regional markets.
• Goal: global marketing that accounts for economic and legal factors.
• Focus on marketing: total enterprise solutions and management support.

Aspects of Localization

• Terminology
The selection and definition as well as the correct and consistent usage of terms are preconditions for successful localization:
• laymen and expert users
• in most cases innovative domains and topics
• huge projects with many persons involved
• consistent terminology throughout all products
• no synonyms allowed
• predefined terminology (environment, laws, specifications, guidelines, corporate language)
• Symbols
• Symbols are culture-dependent, but often they cannot be modified by the localizer.
• Symbols are often adopted from other (common) spheres of life.
• Symbols often use allusions (concrete for abstract); in some cases, homonyms or even homophones are used.
• Illustrations and Graphics
• Illustrations and graphics are very often culture-dependent, not only in content but also in the way they are presented.
• Illustrations and graphics should be adapted to the (technical) needs of the target market (screen shots, manuals).
• Illustrations and graphics often contain textual elements that must be localized but cannot be isolated.
• Colors have different meanings in different cultures.
• Character sets
• Languages are based on different character sets ("alphabets").
• The localized product must be able to handle (display, process, sort etc.) the needed character set.
• "US English" character sets are 1 byte (7-bit or 8-bit).
• Product development for internationalization should be based on Unicode (2-byte or 4-byte).
• Fonts and typography
• Font types and font families are used in different cultures with varying frequency and for different text types and parts of text.
• Example: in English manuals fonts with serifs (Times Roman) are preferred; in German manuals fonts without serifs (Helvetica) are preferred.
• Example: English uses capitalization more frequently (e.g. CAUTION) for headers and parts of text.
• Language and style
• In addition to the language-specific features of grammar, syntax and style, there are cultural conventions that must be taken into account for
localization.
• In (US) English an informal style is preferred: the reader is addressed directly, "simple" verbs are used, repetition of parts of text is accepted,
etc.
• Formulating headers (L10N)
• Long compound words (L10N)
• Elements of the user interface, e.g.:
Open the File menu
The Open dialog box appears
Click the Copy command
• Formats
• Date, currency, units of measurement
• Paper format
• Different lengths of text
1. Consequences for the volume of documents, number of pages, page numbering (table of contents, index), etc.
2. Also for the size of buttons (IT, machines, etc.)

What do we need to consider in localization testing?

• Testing resource files (separate strings from the code). Solution: create a pseudo build.
• String expansion: string size changes can break layout and alignment. When words or sentences are translated into other languages, most of the time the
resulting string will be either longer or shorter than the native-language version. Two solutions to this problem:
1. Account for the space needed for string expansion, adjusting the layout of your dialog accordingly.
2. Separate your dialog resources into separate dynamic libraries.
• Data format localization (see the date-formatting sketch after this list):
European style: DD/MM/YY
North American style: MM/DD/YY
Also currency, time and number formats, and addresses.
• Character sets: ASCII or non-ASCII.
Single-byte characters: 8-bit (US), 256 characters.
Double-byte characters: 16-bit (Chinese), 65,535 code points.
• Encoding: Unicode supports many different written languages in the world, all in a single character encoding. Note: for a double-byte character set, it
is often better to convert from Unicode to UTF-8 for Chinese, because UTF-8 is a variable-length encoding of Unicode that can easily be sent through the
network via single-byte streams.
• Builds and installer: creating an environment that supports a single version of your code and multiple versions of the language files.
• The program's installation and uninstallation on foreign machines.
• Testing with foreign characters.
Example: enter foreign text for the username and password.
For entering European or Latin characters on Windows:
1. Use the Character Map tool (Start > Programs > Accessories > System Tools > Character Map).
2. Use Alt key codes (escape sequences), e.g. ALT + 128.
For Asian languages, use what is usually called an IME (input method editor).
Chinese example: GB encoding with the Pinyin input mode.
• Foreign keyboards or the On-Screen Keyboard.
• Text filters: programs that are used to collect and manipulate data usually provide the user with a mechanism for searching and filtering that data. As a
global software tester, you need to make sure that the filtering and searching capabilities of your program work correctly with foreign text. A common problem:
accent marks in foreign text are ignored.
• Loading, saving, importing, and exporting high and low ASCII.
• Asian text in the program: how double-byte character sets work.
• Watch the style.
• Two environments to test for a program in Chinese:
1. A Chinese Windows system (in China).
2. An English Windows system with Chinese language support (in the USA).
Microsoft language codes:
CHS - Chinese Simplified
CHT - Chinese Traditional (Taiwan)
ENU - English (United States)
FRA - French (France)
Java language codes:
zh_CN - Chinese Simplified
zh_TW - Chinese Traditional (Taiwan)
en or en_US - English (United States)
fr or fr_FR - French (France)
• More to consider in localization testing:
• Hot keys
• Garbled text in translation
• Error message identifiers
• Hyphenation rules
• Spelling rules
• Sorting rules
• Uppercase and lowercase conversion
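For the data format point above, here is a minimal C sketch of locale-dependent date formatting. It assumes the named locales are installed on the test machine; on a machine without them, setlocale returns NULL and the block is skipped.

    #include <stdio.h>
    #include <time.h>
    #include <locale.h>

    int main(void)
    {
        char buf[64];
        time_t now = time(NULL);
        struct tm *t = localtime(&now);

        if (setlocale(LC_TIME, "en_US.UTF-8"))   /* North American style */
        {
            strftime(buf, sizeof buf, "%x", t);
            printf("en_US: %s\n", buf);          /* e.g. MM/DD/YY order */
        }
        if (setlocale(LC_TIME, "de_DE.UTF-8"))   /* European style */
        {
            strftime(buf, sizeof buf, "%x", t);
            printf("de_DE: %s\n", buf);          /* e.g. DD.MM.YY order */
        }
        return 0;
    }

The point for the tester is that the code should not change between markets; only the locale data should.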

Testing for Localizable Strings - separating strings from code


Localizable strings are strings that are no longer hard-coded and compiled directly into the program's executable files. To verify this, the tester creates a pseudo build,
automatically or manually. A pseudo-language build means that the native-language string files are run through a parser that changes them in some way so they
can easily be identified by the testing department. Next, a build is created using these strings so that they show up in the user interface instead of the English strings. This
helps find strings that haven't been put into separate string files, problems with displaying foreign text, and problems with string expansion where foreign strings
would be truncated.
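A tiny sketch of that parser idea, assuming a marker-and-padding scheme (the markers and the rough 30% expansion factor are illustrative choices, not a standard):

    #include <stdio.h>
    #include <string.h>

    /* Wrap a native string in markers and pad it, so hard-coded
       (unwrapped) strings stand out in the UI and string-expansion
       problems surface before real translation begins. */
    static void pseudo_localize(const char *in, char *out, size_t outsz)
    {
        int pad = (int)(strlen(in) * 3 / 10) + 1;   /* ~30% longer */
        snprintf(out, outsz, "[!! %s %.*s !!]", in, pad, "~~~~~~~~~~~~~~~~");
    }

    int main(void)
    {
        char buf[128];
        pseudo_localize("Open File", buf, sizeof buf);
        puts(buf);   /* prints [!! Open File ~~~ !!] */
        return 0;
    }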
Online help

• Accuracy
• Good readability
• Help is a combination of writing and programming
• Test hypertext links
• Test the index
• Watch the style

Some specific testing recommendations

• Make your own list of localization requirements


• Automated testing

The documentation tester's objectives

• Checking the technical accuracy of every word.
• Look out for confusion in the writing.
• Look out for missing features.
• Give suggestions without having to justify them.

Load / Stress Testing of Websites

1. The Importance of Scalability & Load Testing


Some very high profile websites have suffered from serious outages and/or performance issues due to the number of people hitting their website. E-commerce
sites that spent heavily on advertising but not nearly enough on ensuring the quality or reliability of their service have ended up with poor web-site performance,
system downtime and/or serious errors, with the predictable result that customers are being lost.

In the case of toysrus.com, the web site couldn't handle the approximately 1000 percent increase in traffic that its advertising campaign generated. Similarly,
Encyclopaedia Britannica was unable to keep up with the number of users during the weeks immediately following its promotion of free access to its online
database. The truth is, these problems could probably have been prevented, had adequate load testing taken place.

When creating an eCommerce portal, companies will want to know whether their infrastructure can handle the predicted levels of traffic, to measure performance
and verify stability.

These types of services include Scalability / Load / Stress testing, as well as Live Performance Monitoring.

Load testing tools can be used to test the system behaviour and performance under stressful conditions by emulating thousands of virtual users. These virtual
users stress the application even harder than real users would, while monitoring the behaviour and response times of the different components. This enables
companies to minimise test cycles and optimise performance, hence accelerating deployment, while providing a level of confidence in the system. Once launched,
the site can be regularly checked using Live Performance Monitoring tools to monitor site performance in real time, in order to detect and report any performance
problems - before users can experience them.

2. Preparing for a Load Test


The first step in designing a Web site load test is to measure as accurately as possible the current load levels.
Measuring Current Load Levels
The best way to capture the nature of Web site load is to identify and track (e.g. using a log analyzer) a set of key user session variables that are applicable and
relevant to your Web site traffic.
Some of the variables that could be tracked include:
the length of the session (measured in pages)
the duration of the session (measured in minutes and seconds)
the type of pages that were visited during the session (e.g. home page, product information page, credit card information page)
the typical/most popular 'flow' or path through the website
the % of 'browse' vs. 'purchase' sessions
the % of each type of user (new user vs. returning registered user)

Measure how many people visit the site per week, month, or day. Then break down these current traffic patterns into one-hour time slices and identify the peak
hours (e.g. if you get lots of traffic during lunch time) and the number of users during those peak hours. This information can then be used to estimate the
number of concurrent users on your site.

3. Concurrent Users
Although your site may be handling x users per day, only a small percentage of these users will be hitting your site at the same time. For example, if
you have 3000 unique users hitting your site in one day, all 3000 are not going to be using the site between 11.01 and 11.05 am.
So, once you have identified your peak hour, divide this hour into 5- or 10-minute slices (use your own judgement here, based on the length of the
average user session) to get the number of concurrent users for that time slice.
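As a back-of-the-envelope sketch of that estimate (all numbers are illustrative):

    #include <stdio.h>

    int main(void)
    {
        double daily_users   = 3000.0;  /* unique users per day               */
        double peak_share    = 0.20;    /* fraction arriving in the peak hour */
        double slice_minutes = 10.0;    /* slice ~ average session length     */

        double peak_hour_users = daily_users * peak_share;            /* 600 */
        double concurrent = peak_hour_users / (60.0 / slice_minutes); /* 100 */

        printf("Roughly %.0f concurrent users per %.0f-minute slice\n",
               concurrent, slice_minutes);
        return 0;
    }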
4. Estimating Target Load Levels
Once you have identified the current load levels, the next step is to understand as accurately and as objectively as possible the nature of the load that must be
generated during the testing.

Using the current usage figures, estimate how many people will visit the site per week, month, or day. Then divide that number to arrive at realistic peak-hour
scenarios.

It is important to understand the volume patterns, and to determine what load levels your web site might be subjected to (and must therefore be tested for).

There are four key variables that must be understood in order to estimate target load levels:
how the overall amount of traffic to your Web site is expected to grow
the peak load level which might occur within the overall traffic
how quickly the number of users might ramp up to that peak load level
how long that peak load level is expected to last

Once you have an estimate of overall traffic growth, you’ll need to estimate the peak level you might expect within that overall volume.

5. Estimating Test Duration


The duration of the peak is also very important: a Web site that deals very well with a peak level for five or ten minutes may crumble if that same load level is
sustained for longer. You should use the length of the average user session as a base for determining the load test duration.

6. Ramp-up Rate
As mentioned earlier, although your site may be handling x users per day, only a small percentage of these users will be hitting your site at the same
time.

Therefore, when preparing your load test scenario, you should take into account the fact that users will hit the website at different times, and that during your peak
hour the number of concurrent users will likely build up gradually to reach the peak number of users, before tailing off as the peak hour comes to a close.
The rate at which the number of users builds up, the "ramp-up rate", should be factored into the load test scenarios (i.e. you should not just jump to the maximum
value, but increase the load in a series of steps).

7. Scenario Identification
The information gathered during the analysis of the current traffic is used to create the scenarios that will be used to load test the web site.
The identified scenarios aim to accurately emulate the behavior of real users navigating through the Web site.
For example, a seven-page session that results in a purchase is going to create more load on the Web site than a seven-page session that involves only browsing.
A browsing session might only involve the serving of static pages, while a purchase session will involve a number of elements, including the inventory database,
the customer database, a credit card transaction with verification going through a third-party system, and a notification email. A single purchase session might put
as much load on some of the system's resources as twenty browsing sessions.
Similar reasoning applies to purchases from new vs. returning users. A new user purchase might involve a significant amount of account setup and verification,
something existing users may not require. The database load created by a single new-user purchase may equal that of five purchases by existing users, so you
should differentiate the two types of purchases.
8. Script Preparation
Next, program your load test tool to run each scenario with the appropriate number of each type of user playing back concurrently to produce the load scenario.

The key elements of a load test design are:

test objective
pass/fail criteria
script description
scenario description

Load Test Objective


The objective of this load test is to determine if the Web site, as currently configured, will be able to handle the X number of sessions/hr peak load level
anticipated. If the system fails to scale as anticipated, the results will be analyzed to identify the bottlenecks.

Pass/Fail Criteria
The load test will be considered a success if the Web site will handle the target load of X number of sessions/hr while maintaining the pre-defined average page
response times (if applicable). The page response time will be measured and will represent the elapsed time between a page request and the time the last byte is
received.

Since in most cases user sessions follow just a few navigation patterns, you will not need hundreds of individual scripts to achieve realism; if you choose
carefully, a dozen scripts will take care of most Web sites.

9. Script Execution
Scripts should be combined to describe a load testing scenario. A basic scenario includes the scripts that will be executed, the percentages in which those scripts
will be executed, and a description of how the load will be ramped up.
By emulating multiple business processes, the load testing can generate a load equivalent to X numbers of virtual users on a Web application. During these load
tests, real-time performance monitors are used to measure the response times for each transaction and check that the correct content is being delivered to users.
In this way, they can determine how well the site is handling the load and identify any bottlenecks.
The execution of the scripts opens X number of HTTP sessions (each simulating a user) with the target Web site and replays the scripts over and over again.
Every few minutes, X more simulated users are added, and this continues until the web site fails to meet a specific performance goal.

10. System Performance Monitoring


It is vital during the execution phase to monitor all aspects of the website. This includes measuring and monitoring the CPU usage and performance of the
various components of the website - i.e. not just the webserver, but the database and other parts as well (such as firewalls, load balancing tools etc.).
For example, one e-tailer whose site fell over (apparently due to high load) discovered, when analysing the performance bottlenecks on the site, that the
webserver had in fact only been operating at 50% of capacity. Further investigation revealed that the credit card authorisation engine was the cause of the failure: it
was not responding quickly enough for the website, which then fell over while waiting for too many responses from the authorisation engine. They resolved
this issue by changing the authorisation engine and amending the website code so that if there were any issues with authorisation responses in future, the site
would not crash.
Similarly, another ecommerce site found that the performance issues they were experiencing were due to the database: while the webserver
CPU usage was only at 25%, the backend DB server CPU usage was at 86%. Their solution was to upgrade the DB server.
Therefore, it is necessary to use (and install if necessary) performance monitoring tools to check each aspect of the website architecture during the execution phase.
11. Suggested Execution Strategy:
Start with a test at 50% of the expected virtual user capacity for 15 minutes and a medium ramp rate. The different members of the team (testers will also need to
be monitoring CPU usage during the testing) should be able to check whether the website is handling the load efficiently or whether some resources are already
showing high utilization.
After making any system adjustments, run the test again or proceed to 75% of the expected load. Continue with the testing and proceed to 100%, then up to 150% of
the expected load, while monitoring and making the necessary adjustments to your system as you go along.

12. Results Analysis


Often the first indication that something is wrong is that end-user response times start to climb. Knowing which pages are failing will help you narrow down where
the problem is.
Whichever load test tool you use, it will need to produce reports that highlight the following:
Page response time by load level
Completed and abandoned sessions by load level
Page views and page hits by load level
HTTP and network errors by load level
Concurrent users by minute
Missing links report, if applicable
Full detailed report which includes response time by page and by transaction, lost sales opportunities, analysis and recommendations

13. Important Considerations


When testing websites, it is critically important to test from outside the firewall. Web-based load testing services, based outside the firewall, can identify
bottlenecks that are only found by testing in this manner.
Web-based stress testing of web sites is therefore more accurate when it comes to measuring a site's capacity constraints.
Web traffic is rarely uniformly distributed, and most Web sites exhibit very noticeable peaks in their volume patterns. Typically, there are a few points in time (one
or two days of the week, or a couple of hours each day) when the traffic to the Web site is highest.
LoadRunner from Mercury - Load Testing Software, Automated Software Performance Testing

1. What is load testing?


Load testing checks whether the application works correctly under the loads that result from large numbers of simultaneous users and transactions, and
determines whether it can handle peak usage periods.

2. What is Performance testing?


Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This
should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

3. What is LoadRunner?
LoadRunner works by creating virtual users who take the place of real users operating client software, for example sending requests using the HTTP protocol to IIS or
Apache web servers. Requests from many virtual user clients are generated by load generators in order to create a load on the various servers under test.
These load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on scenarios invoking
compiled scripts and associated run-time settings.
Scripts are crafted using Mercury's Virtual User script Generator ("VuGen"), which generates C-language script code to be executed by virtual users by
capturing network traffic between Internet application clients and servers.
With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from the load generators and makes them available to the Analysis program,
which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.

Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.
Errors during each run are stored in a database file which can be read by Microsoft Access.

4. What are Virtual Users?


Unlike a WinRunner workstation, which emulates a single user's use of a client, LoadRunner can emulate thousands of Virtual Users (Vusers).
Load generators are controlled by VuGen scripts which issue non-GUI API calls using the same protocols as the client under test, whereas WinRunner GUI Vusers
emulate keystrokes, mouse clicks, and other user interface actions on the client being tested.
Only one GUI Vuser can run from a machine unless LoadRunner Terminal Services Manager manages remote machines with the Terminal Server Agent enabled and
logged into a Terminal Services Client session.
During run-time, threaded Vusers share a common memory pool, so threading supports more Vusers per load generator.
The status of Vusers on all load generators starts at "Running", then goes to "Ready" after the init section of the script has run. Vusers end in a
passed or failed status. Vusers are automatically "Stopped" when the load generator is overloaded.
To use Web Services monitors for SOAP and XML, a separate license is needed, and Vusers require the Web Services add-in installed with Feature Pack (FP1).
No additional license is needed for standard web (HTTP) server monitors for Apache, IIS, and Netscape.

5. Using Windows Remote Desktop Connection


To keep Windows Remote Desktop Connection sessions from timing out during a test, the Terminal Services on each machine should be configured as follows:

1. Click Start, point to Programs (or Control Panel), Administrative Tools, and choose Terminal Services Configuration.
2. Open the Connections folder in the tree by clicking it once.
3. Right-click RDP-Tcp and select Properties.
4. Click the Sessions tab.
5. Make sure "Override user settings" is checked.
6. Set the Idle session limit to the maximum of 2 days instead of the default 2 hours.
7. Click Apply.
8. Click OK to confirm the message "Configuration changes have been made to the system registry; however, the user session now active on the RDP-Tcp
connection will not be changed."

6. Explain the load testing process (Version 7.2)?


Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured
as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during
the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we
define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented
scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the
scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server
resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource,
and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s
graphs and reports to analyze the application’s performance.

7. When do you do load and performance Testing?


We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses
primarily on the functionality and user interface of a single system component, application testing focuses on the performance and reliability of an entire system.
For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the
response time of the system? Does it crash? Will it work with different software applications and platforms? Can it hold so many hundreds and thousands of users?
This is when we do load and performance testing.

8. What are the components of LoadRunner?


The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books
Online.

9. What Component of LoadRunner would you use to record a Script?


The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and
communication protocols.


10. What Component of LoadRunner would you use to play back the script in multi-user mode?
The Controller component is used to play back the script in multi-user mode. This is done during a scenario run, where a Vuser script is executed by a number of
Vusers in a group.

11. What is a rendezvous point?


You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for
multiple Vusers to arrive at a certain point, so that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can
insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
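In a Vuser script, the rendezvous is a single call placed just before the step it synchronizes. A minimal sketch for the Action section (the rendezvous, transaction, and URL names are illustrative; the rendezvous must also be defined in the Controller scenario):

    lr_rendezvous("deposit_cash");      /* all Vusers wait here...   */

    lr_start_transaction("deposit");    /* ...then fire together     */
    web_url("deposit",
            "URL=http://bank.example.com/deposit?amount=50",
            LAST);
    lr_end_transaction("deposit", LR_AUTO);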

12. What is a scenario?


A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to
be performed, and the machines on which the virtual users run their emulations.
13. Explain the recording mode for web Vuser scripts?
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording
the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests
sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required
function calls; and Insert the generated function calls into a Vuser script.

14. Why do you create parameters?


Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the
script is run. This better simulates the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
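For example, a recorded login step with hard-coded values can be parameterized so that each iteration draws a new row from a data file. A minimal sketch (the form and parameter names are illustrative):

    /* Recorded: "Value=jsmith" and "Value=secret" were hard-coded.
       After parameterization, {UserName} and {Password} are drawn
       from a data table on each iteration. */
    web_submit_form("login",
        ITEMDATA,
        "Name=username", "Value={UserName}", ENDITEM,
        "Name=password", "Value={Password}", ENDITEM,
        LAST);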

15. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid
errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can
be application-server specific. Here values are replaced by data which are created by these rules. In manual correlation, the value we want to correlate is scanned
and Create Correlation is used to correlate it.

16. How do you find out where correlation is required?


Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can
record two scripts and compare them; we can look at the difference file to see the values which need to be correlated.

17. Where do you set automatic correlation options?


Automatic correlation from the web point of view can be set in the recording options, on the correlation tab. Here we can enable correlation for the entire script and
choose either to issue online messages or to perform offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done
by using the show output window, scanning for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the
specific value to be correlated, we just do Create Correlation for the value and specify how the value is to be created.

18. What is the function to capture dynamic values in the web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.
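A minimal correlation sketch: the registration call must come before the request whose response contains the dynamic value (the boundaries, parameter, and URLs here are illustrative):

    /* Capture the first value found between the boundaries into
       the parameter SessionID. */
    web_reg_save_param("SessionID",
                       "LB=session_id=",
                       "RB=&",
                       "Ord=1",
                       LAST);

    web_url("login",
            "URL=http://www.example.com/login",
            LAST);

    /* Later requests replay the captured value as {SessionID}. */
    web_url("account",
            "URL=http://www.example.com/account?session_id={SessionID}",
            LAST);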

19. VuGen Recording and Scripting


LoadRunner script code obtained from recording is in ANSI C language syntax, represented by icons in icon view until you click Script View.
20. What are Scenarios?
Scenarios encapsulate the Vuser groups and scripts to be executed on load generators at run-time.
Manual scenarios distribute the total number of Vusers among scripts based on analyst-specified percentages (evenly among load generators).
Goal-oriented scenarios are automatically created based on a specified transaction response time or number of hits/transactions per second (TPS). Test analysts
specify the % of the target among scripts.

21. What are the typical settings for each type of run scenario?

22. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically
disabled. Standard log option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, which you can
use for debugging. Disable this option for large load testing scenarios. Extended log option: select Extended log to create an extended log, including warnings and
other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which
additional information should be added to the extended log using the Extended log options.

23. How do you debug a LoadRunner script?


VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us
to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the
message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about only a small section of the
script.

24. How do you write user-defined functions in LR?


Before we create user-defined functions, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the
library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec (dllexport) char* <function name>(char*, char*)
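A minimal sketch of such a function (the name and body are illustrative): compile it into a DLL, copy the DLL to the VuGen bin directory, and load it in the script (for example with lr_load_dll) before calling it.

    #include <string.h>

    /* Matches the required format:
       __declspec(dllexport) char* <name>(char*, char*) */
    __declspec(dllexport) char *str_concat(char *s1, char *s2)
    {
        static char result[512];   /* static so the pointer stays valid */
        strcpy(result, s1);
        strcat(result, s2);
        return result;
    }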

25. What are the changes you can make in run-time settings?
The run-time settings that we make are:

1. Pacing: contains the iteration count.
2. Log: under this we can disable logging, or choose Standard log or Extended log.
3. Think Time: here we have two options, Ignore think time or Replay think time.
4. General: under the General tab we can set the Vusers to run as a process or as multithreading, and choose whether each step is a transaction.

26. Where do you set iterations for Vuser testing?


We set iterations in the run-time settings of VuGen. The navigation for this is: Run-Time Settings, Pacing tab, set the number of iterations.

27. How do you perform functional testing under load?


Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server
can sustain.

28. Using network drive mappings


If several load generators need to access the same physical files, rather than having to remember to copy the files each time they change, each load generator
can reference a common folder using a mapped drive. But since drive mappings are associated with a specific user:

1. Log on to the load generator as the user the load generator will use.
2. Open Windows Explorer and, under Tools, select Map Network Drive and create a drive. It saves time and hassle to have consistent drive letters across
load generators, so some organizations reserve certain drive letters for specific locations.
3. Open the LoadRunner service within Services (accessed from Control Panel, Administrative Tools).
4. Click the "Log On" tab.
5. Specify the username and password the load generator service will use. (A dot appears in front of the username if the userid is for the local domain.)
6. Stop and start the service again.

29. What is ramp-up? How do you set this?


This option is used to gradually increase the number of Vusers or the load on the server. An initial value is set, and a value to wait between intervals can be specified. To
set ramp-up, go to 'Scenario Scheduling Options'.

30. What is the advantage of running the Vuser as a thread?


VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is
loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser
is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the
parent driver program, thus enabling more Vusers to be run per generator.

31. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the
execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this
function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the "Continue on error" option in Run-Time Settings.
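A minimal sketch of the pattern (the check and the parameter name are illustrative):

    /* Abort the Vuser if the server did not return the expected status.
       "Continue on error" must be unchecked in Run-Time Settings. */
    if (strcmp(lr_eval_string("{LoginStatus}"), "OK") != 0)
    {
        lr_error_message("Login failed - aborting Vuser");
        lr_abort();
    }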
32. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction
response time, we will notice that as throughput decreases, the response time also decreases. Similarly, the peak throughput and the highest response time would
occur at approximately the same time.
33. Explain the configuration of your systems?
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware
settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system
configuration, which would include the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to
achieve the load testing objectives.

34. How do you identify performance bottlenecks?


Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors,
and network monitors. They help find the troubled area in our scenario which causes increased response time. The measurements made are usually
response time, throughput, hits/sec, network delay graphs, etc.

35. If the web server, database, and network are all fine, where could the problem be?
The problem could be in the system itself, in the application server, or in the code written for the application.

36. How did you find web server related issues?


Using Web Resource monitors, we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per
second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

37. How did you find database related issues?


By running the database monitor and with the help of the Data Resource graph, we can find database related issues. For example, you can specify the resource you
want to measure before running the Controller, and then you can see the database related issues.

38. What is the difference between an Overlay graph and a Correlate graph?
Overlay graph: overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right
y-axis shows the values of the graph that was merged. Correlate graph: plots the y-axes of two graphs against each other. The active graph's y-axis becomes the
x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

39. How did you plan the load? What are the criteria?
A load test is planned to decide the number of users, what kinds of machines we are going to use, and from where they will be run. It is based on two important documents,
the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us information on the number of users for a particular transaction and the timing
of the load. The peak usage and off-usage periods are decided from this diagram. The Transaction Profile gives us information about the transaction names and their
priority levels with regard to the scenario we are designing.

40. What does the vuser_init action contain?


The vuser_init action contains procedures to log in to a server.

41. What does the vuser_end action contain?


The vuser_end section contains log-off procedures.
42. What is think time? How do you change the threshold?
Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the
data before responding. This delay is known as think time. Changing the threshold: the threshold level is the level below which recorded think time will be
ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.
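In a script, think time appears as a single call between steps. A short sketch with illustrative values and step names:

    web_url("view_account",
            "URL=http://www.example.com/account",
            LAST);

    lr_think_time(8);   /* the recorded user paused ~8 seconds here;
                           replay behavior depends on Run-Time Settings */

    web_url("statement",
            "URL=http://www.example.com/statement",
            LAST);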

43. What is the difference between standard log and extended log?
The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends
detailed script execution messages to the output log. This is mainly used during debugging, when we want information about parameter substitution, data returned
by the server, and advanced trace.

44. What is lr_debug_message?


The lr_debug_message function sends a debug message to the output log when the specified message class is set.

45. What is lr_output_message?


The lr_output_message function sends notifications to the Controller Output window and the Vuser log file.

46. What is lr_error_message?


The lr_error_message function sends an error message to the LoadRunner Output window.
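The three message functions side by side in a short sketch (the message-class switch and the {OrderID} parameter are illustrative):

    /* Enable the extended-log message class, then emit one message
       of each kind. */
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);

    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "Debug: processing order %s",
                     lr_eval_string("{OrderID}"));

    lr_output_message("Order %s submitted", lr_eval_string("{OrderID}"));
    lr_error_message("Order %s failed validation", lr_eval_string("{OrderID}"));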

47. What is lrd_stmt?


The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed.
48. What is lrd_fetch?
The lrd_fetch function fetches the next row from the result set.
49. What is Throughput?
If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain
relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

50. Types of Goals in a Goal-Oriented Scenario


LoadRunner provides you with five different types of goals in a goal-oriented scenario:

1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve

Analysis of a scenario (bottlenecks): in the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the
average response time of the check-itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load
increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure
(MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

53. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value, avoiding errors arising out of duplicate values, and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, the response is scanned for the value we want to correlate, and Create Correlation is used to correlate it.

54. Where do you set automatic correlation options?


Automatic correlation, from the web point of view, can be set in the Recording Options, Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for the correlation. Automatic correlation for a database can be done using the Show Output window: scan for correlation, pick the Correlate Query tab, and choose which query value you want to correlate. If we know the specific value to be correlated, we just do Create Correlation for the value and specify how the value is to be created.

55. What is the function to capture dynamic values in the web Vuser script?


The web_reg_save_param function saves dynamic data information to a parameter.
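A minimal sketch (the parameter name and the boundary strings are examples; the function must be registered before the step whose response contains the value):

web_reg_save_param("SessionID", "LB=session_id=", "RB=&", "Ord=1", LAST);
web_url("login", "URL=http://server/login", LAST);
/* the captured value is then referenced as {SessionID} in later steps */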
WinRunner is an automated software functional-testing tool from Mercury Interactive, used for functional and regression testing.

Q: For new users, how do you use WinRunner to test software applications automatically?
A: The following steps may be of help to you when automating tests:

1. MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful
tests. Also as you will see from the steps below this set of manual tests will form your plan to tackle automation of your application.
2. Once you have a set of manual tests look at them and decide which ones you can automate using your current level of expertise. NOTE that there will
be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
3. Automate the tests selected in step 2 - initially you will use capture/replay following the steps in the manual test, but you will soon see that to produce meaningful and informative tests you need to add additional code to your test, e.g. use tl_step() to report test results. As this process continues you will soon see that there are operations that you repeatedly do in multiple tests - these are then candidates for user-defined functions and compiled modules.
4. Once you have completed step 3 go back to step 2 and you will find that the knowledge you have gained in step 3 will now allow you to select some
more tests that you can do.

If you continue going through this loop you will gradually become more familiar with WR and TSL, in fact you will probably find that eventually you do very little
capture/replay and more straight TSL coding.

Q: How to use WinRunner to check whether a record was updated, deleted, or inserted?
Use WinRunner's checkpoint feature: Create > Database Checkpoint > Runtime Record Check.
Q: How to use WinRunner to test the login screen
A: When you enter a wrong id or password, you get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists or not.
3. Playback: enter a wrong id or password; if win_exists returns true, the application is handling invalid logins correctly. Enter a good id and password; if win_exists returns false, the application is working correctly.
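A minimal TSL sketch of step 3 (the window, field, and dialog names are hypothetical):

# attempt an invalid login and check for the error dialog
set_window("Login", 5);
edit_set("Username", "baduser");
edit_set("Password", "badpass");
button_press("OK");
if (win_exists("Login Failed", 5) == E_OK)
    tl_step("Invalid login", PASS, "Error dialog shown for bad credentials.");
else
    tl_step("Invalid login", FAIL, "No error dialog for bad credentials.");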

Q: After clicking on "login" button, they opens other windows of the web application, how to check that page is opened or not
When your expecting "Window1" to come up after clicking on Login...
Capture the window in the GUI Map. No two windows in an web based
application can have the same html_name property. Hence, this would
be the property to check.
First try a simple win_exists("window1", ) in an IF condition.

If that does'nt work, try the function,

win_exists("{ class: window, MSW_class: html_frame,


html_name: "window1"}",);
Q: WinRunner test script for checking all the links at a time
location = 0;
set_window("YourWindow", 5);

while (obj_exists((link = "{class: object, MSW_class: html_text_link, location: " & location & "}")) == E_OK)
{
    obj_highlight(link);
    web_obj_get_info(link, "name", name);
    web_link_valid(link, valid);
    if (valid)
        tl_step("Check web link", PASS, "Web link \"" & name & "\" is valid.");
    else
        tl_step("Check web link", FAIL, "Web link \"" & name & "\" is not valid.");
    location++;
}

Q: How to get the resolution settings


Use get_screen_res(x,y) to get the screen resolution in WR7.5.
or
Use get_resolution (Vert_Pix_int, Horz_Pix_int, Frequency_int) in WR7.01

Q: WITHOUT the GUI map, use the physical description directly...


It's easy: just take the description straight out of the GUI map (curly braces and all), put it into a variable (or pass it as a string), and use that in place of the object name.

button_press ( "btn_OK" );
becomes
button_press ( "{class: push_button, label: OK}" );

Q: What are the three modes of running the scripts?


WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.
Verify
Use the Verify mode to check your application.
Debug
Use the Debug mode to help you identify bugs in a test script.
Update
Use the Update mode to update the expected results of a test or to create a new expected results folder.

Q: How do you handle unexpected events and errors?


WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.
WinRunner enables you to handle the following types of exceptions:
Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.
TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.
Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.
Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a
test run.

Q: How do you handle pop-up exceptions?


A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception we make WinRunner learn the window and also specify a handler for the exception. It could be:
Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type a name in the User Defined Function Name box.

Q: How do you handle TSL exceptions?


Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL
exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this
function to respond to the unexpected error in the way that meets your specific testing needs.
Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box.
Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
Q: How to write an email address validation script in TSL?
public function IsValidEMAIL(in strText)
{
    auto aryEmail[], aryEmail2[], n;

    n = split(strText, aryEmail, "@");

    if (n != 2)
        return FALSE;

    # Ensure the string "@MyISP.Com" does not pass...
    if (!length(aryEmail[1]))
        return FALSE;

    n = split(aryEmail[2], aryEmail2, ".");

    if (n < 2)
        return FALSE;

    # Ensure the string "Recipient@." does not pass...
    # (check that both the first and the last domain components are non-empty)
    if (!(length(aryEmail2[1]) * length(aryEmail2[n])))
        return FALSE;

    return TRUE;
}

Q: How to have WinRunner insert yesterday's date into a field in the application?
1) Use get_time to get the PC system time in seconds since 01/01/1970

2) Subtract 86400 (the number of seconds in a day) from it

3) Use time_str to convert the result into a date format

4) If the format of the returned date is not correct, use string manipulations to get the format you require

5) Insert the date into your application
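A sketch of steps 1-3 in TSL:

t = get_time();           # seconds since 01/01/1970
t = t - 86400;            # go back one day
yesterday = time_str(t);  # e.g. "Fri Oct 10 14:22:05 2003"; reformat as needed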

Alternatively you could try the following:

1) In an Excel datasheet create a column with an appropriate name, and in the first cell of the column use the Excel formula =TODAY()-1

2) Format the cell to give you the required date format

3) Use the ddt_ functions to read the date from the Excel datasheet

4) Insert the retrieved date into your application

Q: How can WinRunner run a single script that supports multiple languages?
Actually, you can have scripts that run for different locales. I have a set of scripts that run for Japanese as well as English locales. The idea is to have objects recorded in the GUI Map with a locale-independent physical description. This can be achieved in two ways.
1. After recording the object in the GUI Map, inspect the description and ensure that no language-specific properties are used. For example, the html_name property for an object of class html_text_link could be based on the text. You can remove these language-dependent properties if it doesn't really affect your object recognition. If it does affect it, you need to find another property for the object that is locale independent. This new property may be something that's already there, or you may need to create it. This leads to the next option.
2. Have developers assign a locale independent property like 'objname' or something to all objects that you use in your automated scripts. Now, modify your GUI
Map description for the particular object to look for this property instead of the standard locale dependent properties recorded by WR (these default properties are
in GUI Map Configuration).
or
You could also use a GUI map for each locale. Prefix the GUI map name with the locale (e.g. jpn_UserWindow.gui and enu_UserWindow.gui) and load the correct
map based on the current machine locale. Specifically, you can use the get_lang() function to obtain the current language setting, then load the appropriate GUI
map in your init script. Take a look at the sample scripts supplied with WinRunner (for the flight application). I think those scripts are created for both English and
Japanese locales.

After taking care of different GUIs for different locales, the script also needs some modification. If you are scripting in English and then moving on to any other language (say Japanese), all the user inputs will be in English. Because of this the script will fail, as it expects Japanese input for the Japanese locale. Instead, assign all the user inputs to variables and use those wherever the script needs them. These variables have to be assigned (perhaps after the driver script) before you call the script you want to run. You should have a different variable script for each language. Depending on the language you want to run, call the appropriate variable script file. This will let you run the same script with different locales.
Q: How to use a regular expression in the physical description of a window in the GUI map?
Several web page windows have similar html names - they all end in or contain "| MyCompany". The GUI Map has saved the following physical description for one of these windows:
{
class: window,
html_name: "Dynamic Name | MyCompany"
MSW_class: html_frame
}

The "Dynamic Name " part of the html name changes with the different pages.

Replace:

{
class: window,
html_name: "!.*| MyCompany"
MSW_class: html_frame
}

Regular expressions in GUI maps always begin with "!".

Q: How to force WR to learn the sub-items on a menu...?


If WR is not learning sub-items, the easy way is to add those sub-items manually to the GUI map. Of course, you need to study the menu description and always add the PARENT menu name for that particular sub-menu.
Q: How to check whether a specific icon is highlighted (in focus) or not?
set_window("Name of the window", 5);
obj_get_info("Name of the object", "focused", out_value);  # retrieve the "focused" property

check out_value and proceed accordingly

Q: BitMap or GUI Checkpoints


DO NOT use bitmap or GUI checkpoints for dynamic verification. These checkpoints are purely for static verification. There are, of course, workarounds, but they are mostly not worth the effort.

Q: How to get the information from the status bar without doing any activity/click on the hyperlink?
You can use the statusbar_get_text("Status Bar", 0, text); function. The text variable then contains the status bar statement.

or

web_cursor_to_link ( link, x, y );

link: the name of the link.
x, y: the x- and y-coordinates of the mouse pointer when moved to the link, relative to the upper left corner of the link.

Q: Object name Changing dynamically?


1. logical name: "chkESActivity"
{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: 90
}
2. logical name: "chkESActivity_1"

{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: 91
}
Replace with:

Logical:"CheckBox" # you give any name as the logical name


{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: "![0-9][0-9]" # changes were done here
}

you can use any of the checkbox commands, for example:

button_set("CheckBox", ON);  # checks any check box with part_value ranging from 00 to 99
Q: Text Field Validations
Need to validate text fields against
1. Null
2. Not Null.
3. whether it allows any Special Characters.
4. whether it allows numeric contents.
5. Maximum length of the field etc.

1) From the requirements find out what the behaviour of the text field in question should be. Things you need to know are:
what should happen if the field is left blank
what special characters are allowed
whether it is an alpha, numeric or alphanumeric field, etc.

2) Write manual tests for doing what you want. This will create a structure
to form the basis of your WR tests.

3) now create your WR scripts. I suggest that you use data driven tests and
use Excel spreadsheets for your inputs instead of having user input.
For example the following structure will test whether the text field will
accept special characters :

open the data table

for each value in the data table
    get value
    insert value into text field
    attempt to use the value inserted
    if result is as expected
        report pass
    else
        report fail
next value in data table

in this case the data table will contain all the special characters
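A sketch of that structure in TSL using the ddt_ functions (the table path, column name, window and field names are examples; replace the pass/fail check with your expected-result logic):

table = "C:\\TestData\\special_chars.xls";
ddt_open(table, DDT_MODE_READ);
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);
    value = ddt_val(table, "special_char");   # read the next special character
    set_window("YourWindow", 5);
    edit_set("YourTextField", value);          # insert value into the text field
    button_press("Submit");                    # attempt to use the inserted value
    if (win_exists("Error", 3) == E_OK)        # example check: an error dialog appears
        tl_step("Special char " & i, FAIL, "Field rejected \"" & value & "\".");
    else
        tl_step("Special char " & i, PASS, "Field accepted \"" & value & "\".");
}
ddt_close(table);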

Q: Load multiple guimaps into an array


#GUIMAPS-------------------------------------------------------------------
static guiname1 = "MMAQ_guimap.gui";
static guiname2 = "SSPicker_guimap.gui";
static guiname3 = "TradeEntry.gui";
static guiLoad[] = {guiname1, guiname2, guiname3};

Then I just call the function:

#LOAD GUIMAP FILES VIA THE LOAD GUIMAP FUNCTION (this closes ALL open guimaps)
rc = loadGui(guiLoad);
if (rc != "Pass")  # check the success of the GUI_load
{
    tl_step("Guiload", FAIL, "Failed to load Guimap(s) for " & getvar("testname"));  # log the failure
    texit("Failed to load Guimap(s) for " & getvar("testname"));
}

public function loadGui(inout guiLoad[])
{
    static i;
    static rc;

    # close any temp GUI map files
    GUI_close("");
    GUI_close_all();

    # GUIPATH is a user-defined constant holding the folder of the GUI map files
    for (i in guiLoad)
    {
        rc = GUI_load(GUIPATH & guiLoad[i]);
        if ((rc != 0) && (rc != E_OK))  # check the GUI_load
        {
            return ("Failed to load " & guiLoad[i]);
        }
    }
    return ("Pass");
}
Q: Read and write to the registry using the Windows API functions
function space(isize)
{
auto s;
auto i;
for (i =1;i<=isize;i++)
{
s = s & " ";

}
return(s);
}

load_dll("c:\\windows\\system32\\ADVAPI32.DLL");
extern long RegDeleteKey( long, string<1024> );
extern long RegCloseKey(long);
extern long RegQueryValueExA(long,string,long,long,inout string<1024>,inout long );
extern long RegOpenKeyExA(long,string,long ,long,inout long);
extern long RegSetValueExA(long,string,long,long,string,long);

MainKey = 2147483649;  # HKEY_CURRENT_USER (0x80000001)
SubKey = "Software\\TestConverter\\TCEditor\\Settings";  # this is where you set your subkey path

const ERROR_SUCCESS = 0;
const KEY_ALL_ACCESS = 983103;

ret = RegOpenKeyExA(MainKey, SubKey, 0, KEY_ALL_ACCESS, hKey);  # open the key
if (ret == ERROR_SUCCESS)
{
    cbData = 256;
    tmp = space(256);
    KeyType = 0;
    # replace "Last language" with the key you want to read
    ret = RegQueryValueExA(hKey, "Last language", 0, KeyType, tmp, cbData);
}
pause (tmp);
NewSetting = "SQABASIC";
cbData = length(NewSetting) + 1;
ret = RegSetValueExA(hKey,"Last language",0,KeyType,NewSetting,cbData);
# replace "Last language" with the key you want to write

cbData = 256;
tmp = space(256);
KeyType = 0;
ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData);
# verifies you changed the key
pause (tmp);

RegCloseKey(hKey); # close the key

Q: How to break an infinite loop


set_window("Browser Main Window",1);
text="";
start = get_time();
while(text!="Done")
{
statusbar_get_text("Status Bar",0,text);
now = get_time();
if ( (now-start) == 60 ) # Specify no of seconds after which u want
break
{
break;
}
}

Q: User-defined function that would write to the Print-log as well as write to a file
function writeLog(in strMessage)
{
    auto logFile;
    logFile = "C:\FilePath\...";              # path as in the original; point this at your log file
    file_open(logFile, FO_MODE_APPEND);       # open (or create) the file for appending
    file_printf(logFile, "%s\n", strMessage); # write to the file
    file_close(logFile);
    printf(strMessage);                       # write to the print log
}
Q: How to do text matching?
You could try embedding it in an if statement. If/when it fails, use a tl_step statement to report the failure and then do a texit to leave the test. Another idea would be to use win_get_text or web_frame_get_text to capture the text of the object, then do a comparison (using the match function) to determine its existence.
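A sketch of the second idea in TSL (the window name and expected text are examples):

# capture the window text and check for an expected substring
set_window("Order Status", 5);
win_get_text("Order Status", text);       # read all text in the window
if (match(text, "Order confirmed"))       # match() does pattern matching on the captured text
    tl_step("Check text", PASS, "Expected text found.");
else
{
    tl_step("Check text", FAIL, "Expected text not found.");
    texit("Stopping test: expected text missing.");
}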

Q: the MSW_id value sometimes changes, rendering the GUI map useless
MSW_Id's will continue to change as long as your developers are modifying your application. Having dealt with this, I determined that each MSW_Id shifted by the
same amount and I was able to modify the entries in the gui map rather easily and continue testing.
Instead of using the MSW_id use the "location". If you use your GUI spy it will give you every detail it can. Then add or remove what you don't want.

Q: With the DB checkpoint, it's able to show the current values in the form, but it's not showing the values that are saved in the table
This looks like it's happening because the data has been written to the database after your checkpoint, so you have to do a runtime record check: Create > Database Checkpoint > Runtime Record Check. You may also have to perform some customization in TSL if the data displayed in the application is in a different format than the data in the database. For example, converting radio buttons to a database-readable form involves the following:

# Flight Reservation
set_window("Flight Reservation", 2);
# edit_set("Date of Flight:", "06/08/02");

# retrieve the three radio button states
button_get_state("First", first);
button_get_state("Business", bus);
button_get_state("Economy", econ);

# establish a variable with the correct numeric value,
# based on which radio button is set
if (first)
    service = "1";
if (bus)
    service = "2";
if (econ)
    service = "3";

set_window("Untitled - Notepad", 3);
edit_set("Report Area", service);

db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
Increase Capacity Testing
When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that will simulate the same items on the ASP page and have it send the information to a test bed with a process that completes just a small data output. By doing this, you will have your processor still stressing the system but not taking up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to work. Dividing the requests per second by the total number of users or threads gives the throughput per user; it will tell you at what point the server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests per second, with 100 users it is 10 requests per second, with 200 users it is 15 requests per second, and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:
05/50 = 0.1
10/100 = 0.1
15/200 = 0.075
20/300 = 0.067
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some indication of how much leeway you have to handle expected peaks.

Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?

Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?

Speed testing
Is the Web-enabled application taking too long to respond?

Boundary Test
Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values. It is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing. If so, it is useful to determine how to drive it through the extremes and special conditions such as zero or an overflow condition.

Boundary timing testing


What happens when your Web-enabled application request times out or takes a really long time to respond?

Regression testing
Did a new build break an existing function? Repeat testing after changes, to manage the risks related to product enhancement.
A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in subsequent program versions.
Regression testing: after every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions are all working correctly.
Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:

• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, issue status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or in the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that the proven correctly functional features are still working as expected.
Database Testing

Items to check when testing a database

What to test       Environment                  Tools/technique
Search results     System test environment      Black box and white box techniques
Response time      System test environment      Syntax testing / functional testing
Data integrity     Development environment      White box testing
Data validity      Development environment      White box testing

Q:How do you find an object in a GUI map?


The GUI Map Editor provides Find and Show buttons.
To find a particular object of the GUI Map file in the application, select the object and click the Show button; this blinks the selected object in the application.
To find a particular object in a GUI Map file, click the Find button, which gives you a pointer to select the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

Q:What different actions are performed by find and show button?


Show: to find a particular object of the GUI Map file in the application, select the object and click the Show button; this blinks the selected object in the application.
Find: to find a particular object in a GUI Map file, click the Find button, which gives you a pointer to select the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

Q:How do you identify which files are loaded in the GUI map?
The GUI Map Editor has a drop down GUI File displaying all the GUI Map files loaded into the memory.

Q:How do you modify the logical name or the physical description of the objects in GUI map?
You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

Q:When do you feel you need to modify the logical name?


Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

Q:When it is appropriate to change physical description?


Changing the physical description is necessary when the property value of an object changes.

Q:How WinRunner handles varying window labels?


We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use regular expression in an object’s physical
description. These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class
object.

Q:What is the purpose of regexp_label property and regexp_MSW_class property?


The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

Q:How do you suppress a regular expression?


We can suppress the regular expression of a window by replacing the regexp_label property with label property.
Q:How do you copy and move objects between different GUI map files?
We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:

1. Choose Tools - GUI Map Editor to open the GUI Map Editor.
2. Choose View - GUI Files.
3. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
4. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
5. In one file, select the objects you want to copy or move. Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.
6. Click Copy or Move.
7. To restore the GUI Map Editor to its original size, click Collapse.
Q:How do you select multiple objects during merging the files?
Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.

Q:How do you clear a GUI map files?


We can clear a GUI Map file using the Clear All option in the GUI Map Editor.

Q:How do you filter the objects in the GUI map?


GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options.

1. Logical name displays only objects with the specified logical name.
2. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
3. Class displays only objects of the specified class, such as all the push buttons.

Q:How do you configure GUI map?

1. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to
provide a unique identification of the object.
2. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner.
These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_
statements in the test script.
3. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses
to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session.
To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

Q:What is the purpose of GUI map configuration?


GUI Map configuration is used to map a custom object to a standard object.

Q:How do you make the configuration and mappings permanent?


The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must
add configuration statements to your startup test script.

Q:What is the purpose of GUI spy?


Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the
properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner
learns.
Q:What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore.?
1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the
static class (static text), for which the default is Pass Up.)
2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this
element is a window, and the operation is recorded as win_mouse_click.
3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were object class.
4) Ignore instructs WinRunner to disregard all operations performed on the class.

Q:How do you find out which is the start up file in WinRunner?


The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

Q:What are the virtual objects and how do you learn them?

• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click
statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and
run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
To define a virtual object using the Virtual Object wizard:
1. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
2. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible rows that are displayed in the window. For a table class, select the number of visible rows and columns. Click Next.
3. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments
to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If
the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
4. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object
contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests
virtual_object, virtual_push_button, virtual_list, etc.
5. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the
same name before confirming your choice. Click Next.

Q:What are the two modes of recording?


There are 2 modes of recording in WinRunner
1. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
2. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

Q:What is a checkpoint and what are different types of checkpoints?


Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.
You can add four types of checkpoints to your test scripts:

1. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
2. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version.
3. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
4. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

Q:What are data driven tests?


When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a
loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to
the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use
the DataDriver Wizard to parameterize your test and store the data in a data table.

Q:What are the synchronization points?


Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a
database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse
cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
Q:What is parameterizing?
In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is
stored in a data table.

Q:How do you maintain the document information of the test scripts?


Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name
of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

Q:What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?
You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To
create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
button_check_info
scroll_check_info
edit_check_info
static_check_info
list_check_info
win_check_info
obj_check_info
Syntax: button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
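For example (the object names are from the sample Flight Reservation application):

set_window("Flight Reservation", 5);
button_check_info("Insert Order", "enabled", 1);  # verify the button is enabled
edit_check_info("Name:", "focused", 0);           # verify the edit field is not in focus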

Q:What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

• You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or
you can specify which properties to check.
• Creating a GUI Checkpoint using the Default Checks
• You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a
GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
• To create a GUI checkpoint using default checks:
1. Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If
you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse
movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The
WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement. Syntax: obj_check_gui ( object, checklist, expected_results_file, time );
• Creating a GUI Checkpoint by Specifying which Properties to Check
• You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that
it is in focus, instead of enabled.
• To create a GUI checkpoint by specifying which properties to check:
• Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are
recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that
you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the
mouse pointer becomes a pointing hand, and a help window opens on the screen.
• Double-click the object or window. The Check GUI dialog box opens.
• Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
• Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in
the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click
the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the
Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a
default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static
text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results
folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a
win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object,
checklist, expected results file, time );

Q:What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?
To create a GUI checkpoint for two or more objects:

• Choose Create GUI Checkpoint For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in
Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint
dialog box opens.
• Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
• To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
• The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.
• Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box
reopens.
• The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name
in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the
Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify
Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify
arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard
objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain
properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
• To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI
objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );


obj_check_gui ( object, checklist, expected results file, time );

Q:What information is contained in the checklist file and in which file expected results are stored?
The checklist file contains information about the objects and the properties of the objects we are verifying.
The gui*.chk file, stored in the exp folder, contains the expected results.

Q:What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

• You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check.
WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you
run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a
mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and
difference), you can identify the nature of the discrepancy.
• When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a
checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
• Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN
AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test,
you can also use the Analog function check_window to check a bitmap.
• To capture a window or object as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar.
Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is
minimized, the mouse pointer becomes a pointing hand, and a help window opens.
2. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement
in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( object, bitmap, time );
3. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );
4. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:
win_check_bitmap ("Flight Reservation", "Img2", 1);
5. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);

Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

Q:What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?
• You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single
window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper
left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup
window), its coordinates are relative to the entire screen (the root window).
• To capture an area of the screen as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are
recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer
becomes a crosshairs pointer, and a help window opens.
2. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the
mouse button.
3. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your
script.
4. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width,
height );

Q:What do you verify with the database checkpoint default and what command it generates, explain syntax?

• By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in
your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your
application.
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result
set. The result set is set of values retrieved from the results of the query.
• You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.
• You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected
values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database
checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check(checklist_file, expected_restult);
• You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the
current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record
Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );
ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured
during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification
wizard.
SuccessConditions ----- Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber --- An out parameter returning the number of records in the database.

Q:How do you handle dynamically changing area of the window in the bitmap checkpoints?
The "difference between bitmaps" option in the Run tab of the General Options dialog defines the minimum number of pixels that constitute a bitmap mismatch.

Q:What do you verify with the database check point custom and what command it generates, explain syntax?

• When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a
result set.
• You can create a custom check on a database in order to:
• check the contents of part or the entire result set
• edit the expected results of the contents of the result set
• count the rows in the result set
• count the columns in the result set
• You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

Q:What do you verify with the sync point for object/window property and what command it generates, explain syntax?

• Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your
test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
• You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait
for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform
an operation on that object.
• You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point.
These functions have the following syntax:
obj_exists ( object [, time ] ); win_exists ( window [, time ] );
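For example, to wait up to 10 seconds for the sample Flight Reservation window before acting on it:

if (win_exists("Flight Reservation", 10) == E_OK)
    set_window("Flight Reservation", 1);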
Q:What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured
earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
Q:What is the purpose of obligatory and optional properties of the objects?
For each class, WinRunner learns a set of default properties. Each default property is classified obligatory or optional.

1. An obligatory property is always learned (if it exists).


2. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a
list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list,
and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

Q:When the optional properties are learned?


An optional property is used only if the obligatory properties do not provide unique identification of an object.

Q:What is the purpose of location indicator and index indicator in GUI map configuration?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of
selectors are available:
A location selector uses the spatial position of objects.
The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same
description.
An index selector uses a unique number to identify the object in a window.
The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the
same description may change within a window.

Q:How do you handle custom objects?


A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object
class. WinRunner records operations on custom objects using obj_mouse_ statements.
If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a
custom object during Context Sensitive testing.

Q:What is the name of custom class in WinRunner and what methods it applies on the custom objects?
WinRunner learns custom class objects under the generic object class. WinRunner records operations on custom objects using obj_* statements (for example, obj_mouse_click).

Q:In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of
selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.

Q:What do you verify with the sync point for screen area and what command it generates, explain syntax?
For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution. Syntax:
obj_wait_bitmap(object, image, time, x, y, width, height);

Q:How do you edit checklist file and when do you need to edit the checklist file?
WinRunner has an edit checklist file option under the Create menu. Select Edit GUI Checklist to modify a GUI checklist file and Edit Database Checklist to edit a
database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the
checklist file, whether it is test specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.
Q:How do you edit the expected value of an object?
We can modify the expected value of the object by executing the script in the Update mode. We can also manually edit the gui*.chk file under the exp folder,
which contains the expected values, to change the values.

Q:How do you modify the expected results of a GUI checkpoint?


We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in the Update mode.

Q:How do you handle ActiveX and Visual basic objects?


WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of
functions to work on ActiveX and VB objects.

Q:How do you create ODBC query?


We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the
database. The SQL file will contain the connection string and the SQL statement.

Q:How do you record a data driven test?


We can create a data-driven test using data from a flat file, a data table or a database.
Using a flat file: we store the data to be used in a required format in the file. We access the file using the file manipulation commands, read data from the
file and assign the values to variables (a sketch follows this paragraph).
Data table: an Excel file. We can store test data in these files and manipulate them. We use the 'ddt_*' functions to manipulate data in the data table.
Database: we store test data in the database and access it using 'db_*' functions.
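As an illustration of the flat-file approach, here is a hedged sketch (the file path and its one-value-per-line format are assumptions):
# open a hypothetical data file for reading
file = "c:\\qa\\data\\orders.txt";
if (file_open (file, FO_MODE_READ) != E_OK)
    report_msg ("cannot open " & file);
# read one line per iteration and use it as test data
while (file_getline (file, line) == E_OK)
    report_msg ("read value: " & line);
file_close (file);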

Q:How do you convert a database file to a text file?


You can use Data Junction to create a conversion file which converts a database to a target text file.

Q:How do you parameterize database check points?


When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint.
This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

Q:How do you create parameterize SQL commands?


A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question
mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:
SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)
SELECT defines the columns to include in the query.
FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query. Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.
When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the
db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:
db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);
The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.
Q:What checkpoints will you use to read and check text on the GUI, and what is their syntax?

• You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an
object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming
elements to your test scripts to verify the contents of the text.
• You can use a text checkpoint to:
• Read text from a GUI object or window in your application, using obj_get_text and win_get_text
• Search for text in an object or window, using win_find_text and obj_find_text
• Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
• Click on text in an object or window, using obj_click_on_text and win_click_on_text

Q:How to get text from an object/window?


We use obj_get_text (logical_name, out_text) function to get the text from an object
We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

Q:How to get text from a screen area?


We use win_get_text (window, out_text [, x1, y1, x2, y2]), supplying the optional coordinates to read the text from a specific area of the window.

Q:Which TSL functions will you use for searching text on the window?
find_text ( string, out_coord_array, search_area [, string_def ] );
win_find_text ( window, string, result_array [, search_area [, string_def ] ] );
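A small sketch combining the text functions (the window name and search string are hypothetical; checking the return code against E_OK is an assumption consistent with other TSL functions):
# read all visible text from the window into a variable
win_get_text ("Flight Reservation", text);
# search for a string inside the window; its coordinates are returned in the array
if (win_find_text ("Flight Reservation", "Order No", coords) == E_OK)
    report_msg ("string found in window");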

Q:What are the steps of creating a data driven test?


The steps involved in data driven testing are:
Creating a test
Converting to a data-driven test and preparing a database
Running the test
Analyzing the test results.

Q: How to use the DataDriver Wizard?


You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded
operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test
script that you want to run in a loop with multiple sets of data.
To create a data-driven test:

• If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
• Choose Tools > DataDriver Wizard. If you want to turn only part of the test into a data-driven test, click Cancel, select those lines in the test script and
reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.
• The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test.
Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table.
By default, the data table is stored in the test folder.
• In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, table. At the beginning
of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name
is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
• Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to
the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it
reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are
necessary in order to iterate over the rows in the table (a sketch of these generated statements follows this list). Note that you can also add
these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must
contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test
script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be
installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically
included in your WinRunner package; to purchase it, contact your Mercury Interactive representative, and for detailed information on working
with it, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function,
and adds columns with variable values for the parameters to the data table. Line by line: opens a wizard screen for each line of the selected
test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or
use an existing column when parameterizing data. Automatically: replaces all data with ddt_val statements and adds new columns to the data
table; the first argument of the function is the name of the column in the data table, and the replaced data is inserted into the table.
• The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The
Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different
argument to replace. Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data
table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or
assign a new name.
• The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
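For reference, the loop skeleton the wizard generates looks roughly like the following hedged sketch (the table name "default.xls", the "Name" column, and the edit_set line stand in for whatever the wizard actually records in your test):
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open table.");
ddt_get_row_count (table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row (table, table_Row);
    # a recorded statement, parameterized with ddt_val
    edit_set ("Name:", ddt_val (table, "Name"));
}
ddt_close (table);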

Q: How do you handle object exceptions?


During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run
and distort results.
You could use exception handling to detect a change in a property of a GUI object during the test run, and recover test execution by calling a handler function
and continuing with the test run.

Q: What is a compiled module?


A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its
functions are automatically compiled and remain in memory. You can call them directly from within any test.
Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less
error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

Q: What is the difference between script and compile module?


A test script contains the executable code in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable.
WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of Compiled Module.
By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are
dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:
call cso_init();
call( "C:\\MyAppFolder\\" & "app_init" );
Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:
reload ("C:\\MyAppFolder\\" & "flt_lib");
or load ("C:\\MyAppFolder\\" & "flt_lib");

Q:How do you write messages to the report?


To write a message to a report we use the report_msg statement.
Syntax: report_msg (message);

Q:What is a command to invoke application?


invoke_application is the function used to invoke an application.
Syntax: invoke_application(file, command_option, working_dir, SHOW);

Q:What is the purpose of tl_step command?


Used to determine whether sections of a test pass or fail.
Syntax: tl_step(step_name, status, description);

Q:Which TSL function will you use to compare two files?


We can compare 2 files in WinRunner using the file_compare function. Syntax: file_compare (file1, file2 [, save file]);
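Putting the last few functions together, a hedged fragment (the paths and file names are illustrative; a tl_step status of 0 records a pass, non-zero a fail):
# launch the application under test
invoke_application ("c:\\apps\\flight4a.exe", "", "c:\\apps", SW_SHOW);
report_msg ("application launched");
# compare an output file against a baseline and record a pass/fail step
if (file_compare ("c:\\tmp\\out.txt", "c:\\tmp\\expected.txt") == E_OK)
    tl_step ("compare_output", 0, "output matches baseline");
else
    tl_step ("compare_output", 1, "output differs from baseline");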
Q:What is the use of function generator?
The Function Generator provides a quick, error-free way to program scripts. You can:
Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.
Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.
Add Customization functions that enable you to modify WinRunner to suit your testing environment.

Q:What is the use of putting call and call_close statements in the test script?
You can use two types of call statements to invoke one test from another:
A call statement invokes a test from within another test.
A call_close statement invokes a test from within a script and closes the test when the test is completed.
Q:What is the use of treturn and texit statements in the test script?
The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.
The syntax is: treturn [( expression )]; texit [( expression )];
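A hedged sketch of the calling pattern (the test path is hypothetical, and assigning the call's return value to a variable is an assumption based on the description above):
# ---- inside the called test ----
if (win_exists ("Main Window", 5) == E_OK)
    treturn (0);   # report success to the caller
else
    treturn (1);   # report failure

# ---- inside the calling test ----
rc = call "c:\\qa\\tests\\verify_login" ();
if (rc != 0)
    report_msg ("verify_login failed");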

Q:What do auto, static, public and extern variables mean?


auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the
variable is created each time the function is called.
static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort
command. The variable is initialized only the first time its declaration is executed.
public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.
extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
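A short sketch of the four classes in a compiled module (the names are illustrative):
public g_app_name = "Flight";    # visible to all tests and compiled modules
static s_run_count = 0;          # private to this module; value persists across calls

function count_runs ()
{
    auto msg;                    # local to the function; recreated on every call
    s_run_count++;
    msg = "run number " & s_run_count;
    report_msg (msg);
}

# in another test or module, reference the public variable:
# extern g_app_name;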

Q:How do you declare constants?


The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the
constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.
The syntax of this declaration is: [class] const name [= expression];

Q:How do you declare arrays?


The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
class array_name [ ] [=init_expression]
The array class may be any of the classes used for variable declarations (auto, static, public, extern).
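For example (a hedged sketch; the brace-list initializer follows the init_expression syntax above, and the values are illustrative):
const MAX_ROWS = 100;                     # public constant by default
static const APP_TITLE = "Flight";        # static constant, private to this module
public browsers [] = {"IE", "Netscape"};  # array size is not declared in TSL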

Q:How do you load and unload a compiled module?


In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests
will then be able to access the function until you quit WinRunner or unload the compiled module.
You can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester. It is not
displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload
statement with no parameters (global unload).
load (module_name [,1|0] [,1|0] );
The module_name is the name of an existing compiled module.
Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1
indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates
that the module will close automatically; 0 indicates that the module will remain open.
(Default = 0)
The unload function removes a loaded module or selected functions from memory.
It has the following syntax:
unload ( [ module_name | test_name [ , "function_name" ] ] );
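A usage sketch (the module path is hypothetical):
# load a user module (0) and close its window after loading (1)
load ("c:\\qa\\lib\\flt_lib", 0, 1);
# ... call functions defined in flt_lib ...
unload ("c:\\qa\\lib\\flt_lib");   # remove the entire module from memory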

Q:Why do you use the reload function?


If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of
unload and load).
The syntax of the reload function is:
reload ( module_name [ ,1|0 ] [ ,1|0 ] );
The module_name is the name of an existing compiled module.
Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1
indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 1 indicates
that the module will close automatically. 0 indicates that the module will remain open.
(Default = 0)

Q:Write and explain a compiled module.


Write TSL functions for the following interactive modes (a partial sketch follows the list):
i. Creating a dialog box with any message you specify, and an edit field.
ii. Create dialog box with list of items and message.
iii. Create dialog box with edit field, check box, and execute button, and a cancel button.
iv. Creating a browse dialog box from which user selects a file.
v. Create a dialog box with two edit fields, one for login and another for password input.
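Two of these dialogs can be sketched with TSL's built-in dialog functions (a hedged sketch: the prompt strings are illustrative, and the comma-separated item-list format and return-value details are assumptions):
# prompt the user for a value; returns the text entered
order = create_input_dialog ("Enter an order number:");
# let the user pick one item from a list of items
choice = create_list_dialog ("Select flight", "Pick a flight:", "101,102,103");
report_msg ("order=" & order & ", flight=" & choice);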
Q:How have you used WinRunner in your project?
Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
Q:Explain WinRunner testing process?
WinRunner testing process involves six main stages
Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the
application being tested.
Debug Test: run tests in Debug mode to make sure they run smoothly
Run Tests: run tests in Verify mode to test your application.
View Results: determines the success or failure of the tests.
Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

Q:What is contained in the GUI map?


WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an
object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file
will be having a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application.
GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

Q:How does WinRunner recognize objects on the application?


WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s
description in the GUI map and then looks for an object with the same properties in the application being tested.

Q:Have you created test scripts and what is contained in the test scripts?
Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test
window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual
programming tool, the Function Generator.

Q:How does WinRunner evaluate test results?


Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error
messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual
results from the Test Results window.

Q:Have you performed debugging of the scripts?


Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step
Into, Step out functionalities provided by the WinRunner.

Q:How do you run your test scripts?


We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the
application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

Q:How do you analyze results and report the defects?


Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error
messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual
results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the
Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

Q:What is the use of Test Director software?


TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector
you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to
help review the progress of planning tests, running tests, and tracking defects before a software release.

Q:Have you integrated your automated scripts from TestDirector?


When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector
we can specify whether the script is automated or manual, and if it is an automated script, TestDirector will build a skeleton for the script that can be later modified
into one which could be used to test the AUT.

Q:What are the different modes of recording?
There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User
Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the
screen.
Q:What is the purpose of loading WinRunner Add-Ins?
Add-Ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only those functions in the selected add-in
will be listed in the Function Generator, and while executing the script only those functions in the loaded add-in will be executed; otherwise WinRunner will give an
error message saying it does not recognize the function.

Q:What are the reasons that WinRunner fails to identify an object on the GUI?
WinRunner fails to identify an object in a GUI due to various reasons. The object is not a standard windows object. If the browser used is not compatible with the
WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

Q:What is meant by the logical name of the object?


An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

Q:If the object does not have a name then what will be the logical name?
If the object does not have a name then the logical name could be the attached text.

Q:What is the difference between the GUI map and GUI map files?
The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for
the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created. GUI Map file is a file which contains the
windows and the objects learned by the WinRunner with its logical name and their physical description.

Q:How do you view the contents of the GUI map?


GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various
GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

Q:Do you learn all the objects in a window?


If we are learning a window, WinRunner automatically learns all the objects in the window. Otherwise, we identify only those objects in the window that are to be
learned, since we will be working with only those objects while creating scripts.

The following questions are left for the reader; answers are not provided here:
How do you call a function from external libraries (dll).
What is the purpose of load_dll?
How do you load and unload external libraries?
How do you declare external functions in TSL?
How do you call windows APIs, explain with an example?
What is the purpose of step, step into, step out, step to cursor commands for debugging your script?
How do you update your expected results?
How do you run your script with multiple sets of expected results?
How do you view and evaluate test results for various check points?
How do you view the results of file comparison?
What is the purpose of Wdiff utility?
What are batch tests and how do you create and run batch tests ?
How do you store and view batch test results?
How do you execute your tests from windows run command?
Explain different command line options?
What TSL function you will use to pause your script?
What is the purpose of setting a break point?
What is a watch list?
During debugging how do you monitor the value of the variables?
Describe the process of planning a test in WinRunner?
How do you record a new script?
Can you e-mail a WinRunner script?
How can a person run a previously saved WinRunner script?
How can you synchronize WinRunner scripts?
What is a GUI map? How does it work?
How can you verify application behavior?
Explain in detail how WinRunner checkpoints work. What are standard checkpoints?
What is a data-driven test? What are the benefits of a data-driven test?
How do you modify logical names on GUI map?
Why would you use batch testing under WinRunner? Explain advantages and disadvantages. Give an example of one project where you used batch testing.
How do you pass parameter values between the tests?
Have you used WinRunner Recovery Manager?
What is an exception handler? Why would you define one in WinRunner?
We’re testing an application that returns a graphical object (i.e., a map) as a result of the user query. Explain how you’d teach WinRunner to recognize and
analyze the returned object.
What is a TSL? Write a simple script in TSL.
1. What is SilkTest?
SilkTest is a software testing automation tool developed by Segue Software, Inc.
2. What is the Segue Testing Methodology?
Segue testing methodology is a six-phase testing process:

1. Plan - Determine the testing strategy and define specific test requirements.
2. Capture - Classify the GUI objects in your application and build a framework for running your tests.
3. Create - Create automated, reusable tests. Use recording and/or programming to build test scripts written in Segue's 4Test language.
4. Run - Select specific tests and execute them against the AUT.
5. Report - Analyze test results and generate defect reports.
6. Track - Track defects in the AUT and perform regression testing.

3. What is AUT?
AUT stands for Application Under Test.
4. What is SilkTest Host?
SilkTest Host is a SilkTest component that manages and executes test scripts. SilkTest Host usually runs on a machine different from the one where the
AUT (Application Under Test) is running.
5. What is SilkTest Agent?
SilkTest Agent is a SilkTest component that receives testing commands from the SilkTest Host and interacts with AUT (Application Under Test) directly. SilkTest
Agent usually runs on the same machine where AUT is running.
6. What is 4Test?
4Test is a test scripting language used by SilkTest to compose test scripts to perform automated tests. 4Test is an object-oriented fourth-generation language. It
consists of 3 sets of functionalities:

1. A robust library of object-oriented classes and methods that specify how a testcase can interact with an application’s GUI objects.
2. A set of statements, operators and data types that you use to introduce structure and logic to a recorded testcase.
3. A library of built-in functions for performing common support tasks.

7. What is the DOM browser extension?


Document Object Model (DOM) browser extension is a SilkTest add-on component for testing Web applications. DOM browser extension communicates directly
with the Web browser to recognize, categorize and manipulate objects on a Web page. It does this by working with the actual HTML code, rather than relying on
the visual pattern recognition techniques currently employed by the Virtual Object (VO) extension.
8. What is the VO browser extension?
Virtual Object (VO) browser extension is a SilkTest add-on component for testing Web applications. The VO browser extension uses sophisticated pattern
recognition techniques to identify browser-rendered objects. The VO extension sees Web pages as they appear visually; it does not read or recognize HTML tags
in the Web application code. Instead, the VO extension sees the objects in a Web page (for example, links, tables, images and compound controls) the way that
you do, regardless of the technology behind them.
9. What is SilkTest project?
A SilkTest project is a collection of files that contains required information about a test project.
10. How to create a new SilkTest project?

1. Run SilkTest.
2. Select Basic Workflow bar.
3. Click Open Project on the Workflow bar.
4. Select New Project.
5. Double click the Create Project icon in the New Project dialog box.
6. On the Create Project dialog box, enter your project name and your project description.
7. Click OK.
8. SilkTest will create a new subdirectory under SilkTest project directory, and save all files related to the new project under that subdirectory.

11. How to open an existing SilkTest project?

1. Run SilkTest.
2. Select File menu.
3. Select Open Project.
4. Select the project.
5. Click OK.
6. SilkTest will open the selected project.

12. What is a SilkTest Testplan?


The SilkTest testplan is an outline that provides a framework for the software testing process and serves as the point of control for organizing and managing your
test requirements. A testplan consists of two distinct parts: an outline, which is a formatted description of the test requirements, and statements, which are used to
connect the testplan to SilkTest scripts and testcases that implement the test requirements.
13. Where is a testplan stored?
A SilkTest testplan is stored in a file with .pln file extension.
14. How to create and edit a testplan?

1. Make sure your project is open.


2. Click the Files tab in the Project Explorer.
3. Right-click the Plan folder.
4. Click New File.
5. An untitled testplan file opens in the SilkTest testplan editor.
6. Click File/Save menu to save the testplan.

15. What are the types of text lines in a testplan file?


A testplan file contains text lines. There are 5 types of text lines in a testplan file:

1. Comment - Marked in green color: Providing commentary information.
2. Group description - Marked in black color: Providing descriptions for groups of tests. Tests in a testplan can be grouped into multiple levels of groups.
3. Test description - Marked in blue color: Providing descriptions for individual tests.
4. Testplan statement - Marked in dark red color: Providing relations to link scripts, testcases, test data, closed sub testplans or an include file to the
testplan.
5. Open subplan file marker - Marked in magenta color: Providing relations to link sub testplans to be included in a master testplan.

16. How to create group and sub group descriptions in a testplan?


In a testplan, each text line starting from column 0 represents a top level group description. To create a sub group description:

1. Move the cursor to the next line below the top level group description.
2. Click Outline/Move Right.
3. The text line will be indented to the right to become a sub group description.
17. What are testplan attributes?
Testplan attributes are user defined characteristics to be associated with test group descriptions and/or test descriptions. You search, identify, and/or report test
cases based on values of the different attributes.
18. What are the default testplan attributes?
SilkTest offers you 3 predefined default attributes:

1. Category: The type of testcase or group of testcases. For example, you can use this attribute to categorize your test groups as "Boundary value tests",
"Navigation tests", etc.
2. Component: The name of the application modules to be tested.
3. Developer: The name of the QA engineer assigned to develop the testcase or group of testcases.

19. How to define new testplan attributes?

1. Make sure your test project is open.


2. Click Testplan/Define Attributes menu. The Define Attributes dialog box shows up. You should see 3 predefined default attributes: Category,
Component, and Developer.
3. Click the New button. The New Attribute dialog box shows up.
4. Enter a name for your new attribute. For example: "Level" to indicate the complexity level of test cases.
5. Select an attribute type: Normal, Edit, or Set.
6. Click OK.

20. How to define values for a testplan attribute?


You must define values for a testplan attribute before using it:

1. Make sure your test project is open.


2. Click Testplan/Define Attributes menu. The Define Attributes dialog box shows up. You should see 3 predefined default attributes and other attributes
defined by yourself.
3. Select an attribute. For example, "Component". The Values box should be empty.
4. Enter a value in Add box. For example, "Catalog".
5. Click Add. Value "Catalog" should be inserted into the Values box.
6. Repeat the last two steps to add more values.

21. Where are the testplan attributes stored?


Testplan attributes are stored in the testplan initialization file, testplan.ini, in the SilkTest installation directory.
22. How to assign attribute values to test cases?

1. Make sure your testplan is open.


2. Click on the test case for which you want to assign an attribute value.
3. Click Testplan/Detail menu. The Testplan Details dialog box shows up.
4. Click the Test Attribute tab.
5. Click the Component field. The dropdown list shows up with all values of "Component".
6. Select one of the values in the dropdown list.
7. Click OK.

23. What is a test frame?


A test frame is a file that contains information about the application you are testing. Information stored in a test frame will be used as references when SilkTest
records and executes testcases. A test frame is stored in an include file with file extension .inc.
24. How to create a test frame?

1. Make sure your Web browser is active and showing your Web application home page. Do not minimize this Web page window.
2. Make sure your test project is open.
3. Click File/New menu. The New dialog box shows up.
4. Select the Test Frame radio button.
5. Click OK. The New Test Frame dialog box shows up with a list all active Web applications.
6. Select your Web application.
7. Enter a test frame name. For example: HomeFrame.inc.
8. Review the window name. It should be the HTML title of your Web application. You can rename it, if needed.
9. Click OK to close the New Test Frame dialog box.
10. Click File/Save menu.

25. What is stored in a test frame?


A test frame is a text file, which records the following types of information for a Web application:

1. Comment: Commentary information.


2. wMainWindow: A string constant to identify your application's home page.
3. Home page window: An object of class BrowserChild window that holds application home page.
4. sLocation: The URL of your application's home page.
5. sUserName and dPassword: User name and password if needed to login to your Web application.
6. BrowserSize: A pair of values to indicate the size of the browser window.
7. Home page objects: A list of all objects on the home page, such as HtmlImage, HtmlText, HtmlLinks, etc.

26. How does the DOM browser extension identify a Web application UI object?


A Web application UI object is identified in two parts:

1. Identify the Web browser window where the Web application is running. For example, a Web browser window can be identified as
"Browser.BrowserChild("Yahoo Home Page")". Another Web browser window can be identified as "Browser.BrowserChild("Google Home Page")".
2. Identify the Web UI object based on the HTML element that represents the UI object. For example, an image in a Web page can be identified as
"HtmlImage("Yahoo Logo")"; A hyperlink in a Web page can be identified as "HtmlLink("Site Map")";

The full identification of a Web application UI object is the concatenation of the browser window identification and the HTML element identification. For example, the
Yahoo logo image is identified as: Browser.BrowserChild("Yahoo Home Page").HtmlImage("Yahoo Logo"). The site map link is identified as:
Browser.BrowserChild("Google Home Page").HtmlLink("Site Map").
27. What is the syntax of UI object identifier used by DOM extension?
The DOM browser extension uses the following syntax for Web UI objects:
Browser.BrowserChild("page_title").html_class("object_tag")

1. "page_title" is the title of the Web page, defined by the HTML <TITLE> tag.
2. "object_tag" is the label of the HTML element. How a HTML element is labeled depending on the type of HTML element.

28. What is multi-tagging?


Multi-tagging is a technique used by the DOM browser extension to identify a Web page UI object. Whenever possible, DOM extension inserts more than one tag
into the object identifier in following format:
Browser.BrowserChild("page_title").html_class("caption_tag|#index_tag|window_tag")

1. "caption_tag" is the caption of the HTML element.


2. "#index_tag" is the index of this HTML element, counting from the beginning of this page of the same class of HTML elements.
3. "window_tag" is the window identifier.

29. How to add objects of other pages to a test frame?


If your Web application has pages other than the home page, you should also record their page objects into the test frame:

1. Make sure your Web browser is active and showing another page of your Web application.
2. Make sure SilkTest is running.
3. Click File/Open menu.
4. Select your test frame file. For example: HomeFrame.inc.
5. Click OK to open the test frame.
6. Click Record/Window Declarations menu. The Record Window Declarations dialog box shows up.
7. Click your Web application window. Web page objects are recorded in the Record Window Declarations dialog box.
8. Press Ctrl+Alt to pause the recording.
9. Click "Paste to Editor" button. All recorded objects will be inserted into the test frame.
10. Repeat this for other Web pages, if needed.

30. How to specify a browser extension for a Web application?

1. Run SilkTest.
2. Open Internet Explorer (IE).
3. Enter the URL of the Web application.
4. Leave the IE window with the Web application. Don't minimize the IE window.
5. Go back to the SilkTest window.
6. Select Basic Workflow bar.
7. Click Enable Extensions on the Workflow bar.
8. The Enable Extensions dialog will show up. Your Web application running in the IE window will be listed in the dialog box.
9. Select your Web application and click Select.
10. The Extension Settings dialog will show up. Click OK to enable the DOM browser extension.

31. What is DefaultBaseState?


The DefaultBaseState is a starting point of a test project, from which the Recovery System can automatically restart your test cases when they fail to continue.
To test your DefaultBaseState:

1. Close your Web application and other Web browsers.


2. Make sure your test frame is open.
3. Click Run/Application State menu. The Run Application State dialog box shows up with a list of states. One of them should be DefaultBaseState.
4. Select DefaultBaseState.
5. Click Run button. The Runtime Status dialog box shows up. And the Results File dialog box shows up too.
6. You should see no error message in the results file.

32. What are the important aspects of a test case?

1. Each test case must be independent of other test cases.


2. Each test case should have a single test purpose.
3. Each test case should start from a base state and return to the same base state.

33. What is the standard flow of execution of a test case?


1. Starting from the base state.
2. Drive the application to the state where the expected result should occur.
3. Verify the actual result against the expected result.
4. Declare the test case as passed or failed.
5. Return to the base state.

34. How to record a test case?

1. Run SilkTest.
2. Click Option/Runtime menu. The Runtime Options dialog box shows up.
3. Edit the Use Files field to include your test frame file and the explorer.inc file. For example: ...\HomeFrame.inc,extend\explorer.inc.
4. Make sure IE 5.x DOM is selected.
5. Click OK to close the Runtime Options dialog box.
6. Open your test project.
7. Click Record/Testcase menu. The Record Testcase dialog box shows up.
8. Name your test case. For example: LoginTest.
9. Select DefaultBaseState in the Application State dropdown list.
10. Click Start Recording button. The Record Testcase dialog closes. Your Web application will be automatically started by SilkTest, based on the
information in the test frame file. The SilkTest Editor window closes. The Record Status dialog box shows up.
11. Continue to use your Web application. SilkTest records everything you did on your application.
12. Click the "Done" button on the Recording Status dialog box to stop recording. The Recording Status dialog box closes. The Record Testcase dialog box
shows up again.
13. Click Paste to Editor. SilkTest will insert the recorded activities as 4Test statements into a script file. The Record Testcase dialog closes.
14. Click File/Save menu to save the script file. You can enter a script file name. For example, LoginTest.t.

35. How to include a test case into a testplan?

1. Make sure your testplan is open.


2. Enter a test description into your testplan. For example, "Test login process".
3. Select this test description.
4. Click Testplan/Detail menu. The Testplan Detail dialog box shows up.
5. Click the Test Execution tab on the Testplan Detail dialog box.
6. Click the "Scripts" button to browse and select a test case script file. For example, LoginTest.t.
7. Click the "Testcases" button, to select a testcase recored in the specified script file.
8. Click OK to close the Testplan Detail dialog box.

36. How to record a test case into a testplan automatically?


Test cases can be recorded first without a testplan, then included into a testplan later. Test cases can also be recorded into a testplan directly:

1. Make sure your testplan is open.


2. Enter a test description into your testplan. For example, "Test change password".
3. Select this test description.
4. Click Record/Testcase menu.
5. Enter a name for the script file.
6. Click Open. The Record Testcase dialog box shows up.
7. Enter a testcase name in the Testcase Name field.
8. Select DefaultBaseState in the Application State dropdown list.
9. Click Start Recording button. The Record Testcase dialog closes. Your Web application will be automatically started by SilkTest, based on the
information in the test frame file. The SilkTest Editor window closes. The Record Status dialog box shows up.
10. Continue to use your Web application. SilkTest records everything you did on your application.
11. Click the "Done" button on the Recording Status dialog box to stop recording. The Recording Status dialog box closes. The Record Testcase dialog box
shows up again.
12. Click Paste to Editor. SilkTest will insert the recorded activities as 4Test statements into a script file. The Record Testcase dialog closes.
13. Click File/Save menu to save the script file. You can enter a script file name. For example, ChangePasswordTest.t.

37. How to define an object verification in a test case?


While recording a test case, you can define verification points to verify UI objects:

1. Make sure you are in the process of recording a testcase.


2. Make sure the Record Status dialog box is on the screen.
3. Make sure your recording reached the Web page that has the UI object you want to verify.
4. Click the background (blank area) of the Web page. Do not click any objects on the page.
5. Press Ctrl-Alt. The Verify Window dialog box shows up. All the objects on the current Web page are listed on the Verify Window dialog box.
6. Select the object to be verified in the object list. Un-select all other objects.
7. Select the property to be verified in the property list. Un-select all other properties.
8. Click OK to close the Verify Window dialog box.
9. Continue your recording.

38. How to run a test case from a test script file?


A test script file can store multiple test cases. You can run a testcase from a test script file:

1. Open the test script file.


2. Select the test case in the test file.
3. Click Run/Testcase menu. The Run Testcase dialog box shows up.
4. Click the Run button. SilkTest starts to run the test case.
5. Do not touch mouse or keyboard, to avoid interrupting the test case execution.
6. SilkTest finishes executing the testcase. The Results window shows up with the execution result.
7. Review the execution result.

39. How to run a test case from a testplan file?


If a testcase is linked to a testplan, you can run it from the testplan:

1. Open the testplan.


2. Select the test description line which has the testcase linked.
3. Click Run/Testcase menu. The Run Testcase dialog box shows up.
4. Click the Run button. SilkTest starts to run the test case.
5. Do not touch mouse or keyboard, to avoid interrupting the test case execution.
6. SilkTest finishes executing the testcase. The Results window shows up with the execution result.
7. Review the execution result.

40. How to run all test cases in a testplan?

1. Open the testplan.


2. Click Run/Run All Tests menu. SilkTest starts to run all the test cases in the testplan.
3. Do not touch mouse or keyboard, to avoid interrupting the test case execution.
4. SilkTest finishes executing the test cases. The Results window shows up with the execution result.
5. Review the execution result.

41. How to select a group of test cases in a testplan to run?


Usually, a testplan contains a large number of test cases. Sometimes you may not want to run all of them; instead, you can select a group of test cases and run
only those:

1. Open the testplan.


2. Select the test description line (linked to the testcase) to mark.
3. Click Testplan/Mark menu. The selected test description line is marked.
4. Repeat this process to select more linked testcases.
5. Click the Run/Run Marked Tests menu. SilkTest runs all the marked testcases.
6. Do not touch mouse or keyboard, to avoid interrupting the test case execution.
7. SilkTest finishes executing the testcases. The Results window shows up with the execution result.
8. Review the execution result.

42. What's in the test result file?

1. Result summary: The name of the script file. The name of the testcase. The machine on which the tests ran. The starting time and the total elapsed time.
The number and percentage of testcases that passed and failed. The total number of errors and warnings.
2. Result detail: List of errors and detailed information.

43. How to link an error in the result file to the script file?

1. Make sure the Result window is open with result file.


2. Locate the error message in the result file.
3. Select the error message.
4. Click the Results/Goto Source menu. The original script file opens up, showing the place where the error originated.

44. How to generate a Pass/Fail report?

1. Make sure the Result window is open with result file.


2. Click Results/Pass/Fail Report. The Pass/Fail Report dialog box shows up.
3. Select an attribute on which you want the report to be based. For example: Component.
4. Click the Generate button.
5. SilkTest generates a report in the Pass/Fail Report dialog box.
6. You can print or export the report.
7. Click the Close button to close the Pass/Fail Report dialog box.

45. What is DBTester?


DBTester is a testing tool that allows you to access a database server directly through ODBC drivers. If your application is a database-driven application, you can
perform a test through the application UI, and verify data changes in the database with DBTester without using the application UI.
46. What are the functions offered by DBTester?
DBTester offers 6 functions. You can use them directly in your test cases:

1. DB_Connect: Opens a database connection linking the data through the specified ODBC DSN name. DB_Connect returns a connection handle which
can be used with other DBTester functions. SQL statements can be submitted to the database. For example: con = DB_Connect("dsn=dsn_name")
2. DB_Disconnect: Closes the database connection represented by the specified connection handle. All resources related to this connection are also
released. For example: DB_Disconnect(con)
3. DB_ExecuteSql: Sends the specified SQL statement to the specified database connection for execution. DB_ExecuteSql returns a query result handler
which can be used by the DB_FetchNext function. For example: res = DB_ExecuteSql(con, "SELECT * FROM ...")
4. DB_FetchNext: Retrieves the next row from the specified query result handler. For example: DB_FetchNext(res, col1, col2, col3, ...)
5. DB_FetchPrevious: Retrieves the previous row from the specified query result handler.
6. DB_FinishSql: Closes the specified query result handler. For example: DB_FinishSql(res)

What Is Rational Robot?


Rational Robot is a complete set of components for automating the testing of Microsoft Windows client/server and Internet applications.
The main component of Robot lets you start recording tests in as few as two mouse clicks. After recording, Robot plays back the tests in a fraction of the time it
would take to repeat the actions manually.
Which products does Rational Robot install with?
ClearQuest - Change-Request management tool that tracks and manages defects and change requests through the development process.
Rational LogViewer and Comparators - are the tools you use to view logs and test results created when you play back scripts.
Rational Robot - is the tool that you use to develop both GUI and VU (virtual user) scripts.
SQL Anywhere - a database product to help create, maintain and run your Rational repositories.
Rational Test Manager - is the component that you use to plan your tests, manage your test assets, and run queries and reports.
Rational Administrator - is the component that you use to create and manage repositories.
Rational SiteCheck - is the component that you use to test the structural integrity of your intranet or www site.
Additional Rational Products available only with Rational Suite TestStudio or PerformanceStudio:
Test Factory - Component-based testing tool that automatically generates TestFactory scripts according to the application's navigational structure.
Diagnostic Tools
Rational Purify - is a comprehensive C/C++ run time error checking tool.
Rational Visual Quantify - is a performance profiler that provides performance analysis of a product, to aid in improving performance of the code.
Rational Visual PureCoverage - is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been
exercised.
Performance Studio - Tool used for automating performance tests on client/server systems.
Rational Synchronizer - Tool used to share data from Rational Rose, RequisitePro and Rational Robot.
RequisitePro - Create and define requirements for your development process. The baseline version is incorporated into Rational Team Test. The full version in
Rational Suite TestStudio allows you to customize requirements databases, and adds features like traceability, change notification and attribute management.
What is Rational Administrator?
Use the Rational Administrator to:

• create and manage projects.


• Create a project under configuration management.
• Create a project outside of configuration management.
• Connect to a project.
• See projects that are not on your machine (register a project).
• Delete a project.
• Create and manage users and groups for a Rational Test datastore.
• Create and manage projects containing Rational RequisitePro projects and Rational Rose models.
• Manage security privileges for the entire Rational project.
• Configure a SQL Anywhere database server.

What two kinds of GUI scripts can you create using Rational Robot?


1. GUI scripts for functional testing
2. sessions for performance testing.

• Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification
points.
• Perform full performance testing. Use Robot and TestManager together to record and play back sessions that help you determine whether a multi-client
system is performing within user-defined standards under varying loads.
• Create and edit scripts using the SQABasic and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for
powerful integrated programming during script development. (VU scripting is used with sessions in performance testing.)
• Test applications developed with IDEs such as Java, HTML, Visual Basic, Oracle Forms, Delphi, and PowerBuilder. You can test objects even if they are
not visible in the application’s interface.
• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Rational Quantify, and Rational
PureCoverage. You can play back scripts under a diagnostic tool and see the results in the log.

What is datapool?
A datapool is a source of variable test data that scripts can draw from during playback.
How to create a datapool?
When creating a datapool, you specify the kinds of data (called data types) that the script will send (for example, customer names, addresses, and unique order numbers or product names). When you finish defining the datapool, TestManager automatically generates the number of rows of data that you specify.
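At playback time, a script reads datapool rows through the SQABasic datapool functions. A minimal sketch, assuming a datapool named Customers whose first column holds a customer name (both the datapool name and the column layout are hypothetical; check the SQABasic reference for exact signatures):

Sub Main
    Dim dp As Long
    Dim custName As String
    'Open the datapool (the name "Customers" is hypothetical)
    dp = SQADatapoolOpen("Customers")
    'Advance to the next row of test data
    Call SQADatapoolFetch(dp)
    'Read the first column of the current row into custName
    Call SQADatapoolValue(dp, 1, custName)
    'Use the value, for example type it into the control that has focus
    InputKeys custName
    Call SQADatapoolClose(dp)
End Sub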
How to analyze results in the log and Comparators?
You use TestManager to view the logs that are created when you run scripts and schedules.
Use the log to:
--View the results of running a script, including verification point failures, procedural failures, aborts, and any additional playback information. Reviewing the results
in the log reveals whether each script and verification point passed or failed.
Use the Comparators to:
--Analyze the results of verification points to determine why a script may have failed. Robot includes four Comparators:
. Object Properties Comparator
. Text Comparator
. Grid Comparator
. Image Comparator
Rational SiteCheck
Use Rational SiteCheck to test the structural integrity of your intranet or World Wide Web site. SiteCheck is designed to help you view, track, and maintain your rapidly
changing site. Use SiteCheck to:
• Visualize the structure of your Web site and display the relationship between each page and the rest of the site.
• Identify and analyze Web pages with active content, such as forms, Java, JavaScript, ActiveX, and Visual Basic Script (VBScript).
• Filter information so that you can inspect specific file types and defects, including broken links.
• Examine and edit the source code for any Web page, with color-coded text.
• Update and repair files using the integrated editor, or configure your favorite HTML editor to perform modifications to HTML files.
• Perform comprehensive testing of secure Web sites. SiteCheck provides Secure Socket Layer (SSL) support, proxy server configuration, and support for
multiple password realms.

What is a verification point?


A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test.
Verification point types
1. Alphanumeric:
Captures and tests alphanumeric data in Windows objects that contain text, such as edit boxes, check boxes, group boxes, labels, push buttons, radio buttons,
toolbars, and windows (captions).
2. Clipboard:
Captures and compares alphanumeric data that has been copied to the Clipboard.
3. File Comparison:
Compares two specified files during playback.
4. File Existence:
Verifies the existence of a specified file during playback.
5. Menu:
Captures and compares the menu title, menu items, shortcut keys, and the state of selected menus.
6. Module Existence:
Verifies whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
7. Object Data:
Captures and compares the data inside standard Windows objects.
8. Object Properties:
Captures and compares the properties of standard Windows objects.
9. Region Image:
Captures a region of the screen as a bitmap.
10. Web Site Compare:
Captures a baseline of a Web site and compares it to the Web site at another point in time.
11. Web Site Scan:
Checks the contents of a Web site with every revision and ensures that changes have not resulted in defects.
12. Window Existence:
Verifies the existence and status of a specified window during playback.
13. Window Image:
Captures a window as a bitmap.
How to create a verification point?

1. Do one of the following:


. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click a verification point button on the GUI Insert toolbar.
3. In the Verification Point Name dialog box, edit the name as appropriate. The name can be a maximum of 20 characters.
4. Optionally, set the Wait state options. For information, see the next section, Setting a Wait State for a Verification Point.
5. Optionally, set the Expected result option.
6. Click OK.

How to add a wait state when creating a verification point?

1. Start to create the verification point.


2. In the Verification Point Name dialog box, select Apply wait state to verification point.
3. Type values for the following options:
Retry every - How often Robot retries the verification point during playback. Robot retries until the verification point passes or until the timeout limit is reached.
Timeout after - The maximum amount of time that Robot waits for the verification point to pass before it times out. If the timeout limit is reached and the verification point has not passed, Robot enters a failure in the log. The script playback either continues or stops, based on the setting in the Error Recovery tab of the GUI Playback Options dialog box.
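In the generated script, these wait-state values appear in the Wait parameter of the verification point command. A minimal sketch, assuming a Window Existence verification point (the caption and VP name are hypothetical):

'Retry every 2 seconds, time out after 30 seconds
Result = WindowVP (Exists, "Caption=Order Confirmation", "VP=ConfirmWin;Wait=2,30")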

How to set the expected result when creating a verification point?


1. Start to create a verification point.
2. In the Verification Point Name dialog box, click Pass or Fail.
What are two verification points for use with Web sites?
1. Use the Web Site Scan verification point to check the content of your Web site with every revision and ensure that changes have not resulted in defects.
2. Use the Web Site Compare verification point to capture a baseline of your Web site and compare it to the Web site at another point in time.
How to select the object to test?

1. Start creating the verification point.


2. In the Verification Point Name dialog box, type a name and click OK to open the Select Object dialog box.
3. Do one of the following:
. Select Automatically close dialog box after object selection to have the Select Object dialog box close after you select the object to test.
. Clear Automatically close dialog box after object selection to have the Select Object dialog box reappear after you select the object to test. You will need to click OK to close the dialog box.
To select a visible object directly from the application, continue with step 4. To select an object from a list of all objects on the desktop, skip to step 5.
4. To select a visible object directly from the application, drag the Object Finder tool over the object and release the mouse button.
5. To select a visible or hidden object from a list of all objects on the Windows desktop, click Browse to open the Object List dialog box. Select the object
from the list and click OK.

What's a verification method?


The verification method specifies how Robot compares the baseline data captured while recording with the data captured during playback.
Eight verification methods

1. Case-Sensitive - Verifies that the text captured during recording exactly matches the captured text during playback.
2. Case-Insensitive - Verifies that the text captured during recording matches the captured text during playback in content but not necessarily in case.
3. Find Sub String Case-Sensitive - Verifies that the text captured during recording exactly matches a subset of the captured text during playback.
4. Find Sub String Case-Insensitive - Verifies that the text captured during recording matches a subset of the captured text during playback in content but
not necessarily in case.
5. Numeric Equivalence - Verifies that the values of the data captured during recording exactly match the values captured during playback.
6. Numeric Range - Verifies that the values of the data captured during recording fall within a specified range during playback. You specify the From and
To values for the numeric range. During playback, the verification point verifies that the numbers are within that range.
7. User-Defined and Apply a User-Defined DLL test function - Passes text to a function within a dynamic-link library (DLL) so that you can run your own
custom tests. You specify the path for the directory and name of the custom DLL and the function. The verification point passes or fails based on the
result that it receives back from the DLL function.
8. Verify that selected field is blank - Verifies that the selected field contains no text or numeric data. If the field is blank, the verification point passes.

What's an identification method?


An identification method tells Robot how to identify the values to compare during record and playback.
There are four identification methods

1. By Content - to verify that the recorded values exist during playback.


2. By Location - to verify that the recorded values exist in the same locations during playback.
3. By Title - to verify that the recorded values remain with their titles (names of menus or columns) during playback, even though the columns may have
changed locations.
4. By Key/Value - to verify that the recorded values in a row remain the same during playback.

How to rename a verification point and its associated files?

1. Right-click the verification point name in the Asset (left) pane and click Rename.
2. Type the new name and press ENTER.
3. Click the top of the script in the Script (right) pane.
4. Click Edit > Replace.
5. Type the old name in the Find what box. Type the new name in the Replace with box.
6. Click Replace All.
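For reference, the Result= line you are editing carries the verification point name in its VP= parameter. A hypothetical example for an Object Properties verification point:

Result = PushButtonVP (CompareProperties, "Text=OK", "VP=OKButtonProps")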

How to copy a verification point?

1. Right-click the verification point in the Asset (left) pane and click Copy.
2. In the same script or in a different script (in the same project), right-click Verification Points in the Asset pane.
3. Click Paste to paste a copy of the verification point and its associated files into the project. If a verification point with that name already exists, Robot
appends a unique number to the name. You can also copy and paste by dragging the verification point to Verification Points in the Asset pane.
4. Click the top of the Script (right) pane of the original script.
5. Click Edit > Find and locate the line with the verification point name that you just copied.
6. Select the entire line, which starts with Result=.
7. Click Edit > Copy.
8. Return to the script that you used in step 2. Click the location in the script where you want to paste the line. Click Edit > Paste.
9. Change the name of the verification point to match the name in the Asset pane.

How to delete a verification point and its associated files?

1. Right-click the verification point name in the Asset (left) pane and click Delete.
2. Click the top of the script in the Script (right) pane.
3. Click Edit > Find.
4. Type the name of the deleted verification point in the Find what box.
5. Click Find Next.
6. Delete the entire line, which starts with Result=.
7. Repeat steps 5 and 6 until you have deleted all references.

What's TestManager
Rational TestManager is the one place to manage all testing activities--planning, design, implementation, execution, and analysis. TestManager ties testing with
the rest of the development effort, joining your testing assets and tools to provide a single point from which to understand the exact state of your project.
TestManager supports five testing activities:
1. Plan Test.
2. Design Test.
3. Implement Test.
4. Execute Tests.
5. Evaluate Tests.
Test plan
TestManager is used to define test requirements, define test scripts, and link these requirements and scripts to your test plans (written in Word).
Test plan - A test plan defines a testing project so it can be properly measured and controlled. The test plan usually describes the features and functions you are going to test and how you are going to test them. Test plans also discuss resource requirements and project schedules.
Test Requirements
Test requirements are defined in the Requirement Hierarchy in TestManager. The requirements hierarchy is a graphical outline of requirements and nested child
requirements.
Requirements are stored in the RequisitePro database. RequisitePro is a tool that helps project teams control the development process by managing and tracking changes to requirements.

TestManager includes a baseline version of RequisitePro. The full version, with more features and customizations, is available in Rational Suite TestStudio.
TestManager's wizard
TestManager has a wizard that you can use to copy or import test scripts and other test assets (Datapools) from one project to another.
How does TestManager manage test logs?
When a Robot script runs, the output creates a test log. Test logs are now managed in the TestManager application, which allows you to organize your logs into any structure you need.
You can create a directory structure that suits your needs, create build names for each build or development version, and create folders in which to group the builds.
What's TestFactory
Rational TestFactory is a component-based testing tool that automatically generates TestFactory scripts according to the application’s navigational structure.
TestFactory is integrated with Robot and its components to provide a full array of tools for team testing under Windows NT 4.0, Windows 2000, Windows 98, and
Windows 95.
With TestFactory, you can:
--Automatically create and maintain a detailed map of the application-under-test.
--Automatically generate both scripts that provide extensive product coverage and scripts that encounter defects, without recording.
--Track executed and unexecuted source code, and report its detailed findings.
--Shorten the product testing cycle by minimizing the time invested in writing navigation code.
--Play back Robot scripts in TestFactory to see extended code coverage information and to create regression suites; play back TestFactory scripts in Robot to
debug them.
What's ClearQuest
Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With
ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and
documentation modifications.
With Robot and ClearQuest, you can:
-- Submit defects directly from the TestManager log or SiteCheck.
-- Modify and track defects and change requests.
-- Analyze project progress by running queries, charts, and reports.
Rational diagnostic tools
Use the Rational diagnostic tools to perform runtime error checking, profile application performance, and analyze code coverage during playback of a Robot script.

• Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components
of an application, including third-party libraries, ensuring that code is reliable.
• Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and
eliminate performance bottlenecks within an application.
• Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been
exercised, preventing untested code from reaching the end-user.

TestManager can be used for Performance Testing


Rational TestManager is a sophisticated tool that can be used for automating performance tests on client/server systems. A client/server system includes client
applications accessing a database or application server, and browsers accessing a Web server.
Performance testing uses Rational Robot and Rational TestManager. Use Robot to record client/server conversations and store them in scripts. Use TestManager
to schedule and play back the scripts. During playback, TestManager can emulate hundreds, even thousands, of users placing heavy loads and stress on your
database and Web servers.
What's RequisitePro
Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by
linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle.
How to Control How Robot Responds to Unknown Objects?
1. Open the GUI Record Options dialog box.
2. In the General tab, do one of the following:
-- Select Define unknown objects as type Generic to have Robot automatically associate unknown objects encountered while recording with the Generic object
type.
-- Clear Define unknown objects as type Generic to have Robot suspend recording and open the Define Object dialog box if it encounters an unknown object
during recording. Use this dialog box to associate the object with an object type.
3. Click OK or change other options.
How to record a GUI script?

1. Prepare to record the script.


2. If necessary, enable your application for testing.
3. Make sure your recording options are set appropriately for the recording session.
4. Click the Record GUI Script button on the toolbar to open the Record GUI dialog box.
5. Type a name (40 characters maximum) or select a script from the list. The listed scripts have already been recorded in Robot, or generated in
TestFactory. To change the list, select a query from the Query list. The query lets you narrow down the displayed list, which is useful in projects with
hundreds of scripts. You create queries in TestManager, and you modify queries in TestManager or Robot.
If a prefix has been defined for script autonaming, Robot displays the prefix in the Name box. To edit this name, either type in the Name box, or click
Options, change the prefix in the Prefix box, and click OK.
6. To change the recording options, click Options. When finished, click OK.
7. If you selected a previously recorded script, you can change the properties by clicking Properties. When finished, click OK. To change the properties of a
new script, record the script first. After recording, click File > Properties.
8. Click OK to start recording. The following events occur:
. If you selected a script that has already been recorded, Robot asks if you want to overwrite it. Click Yes. (If you record over a previously-recorded script, you overwrite the script file, but any existing properties are applied to the new script.)
. Robot is minimized by default.
. The floating GUI Record toolbar appears. You can use this toolbar to pause or stop recording, display Robot, and insert features into a script.
9. Start the application-under-test as follows:
a. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.
b. Click the appropriate Start button on the GUI Insert toolbar.
c. Fill in the dialog box and click OK.
10. Perform actions as needed to navigate through the application.
11. Insert features as needed. You can insert features such as verification points, comments, and timers.
12. If necessary, switch from Object-Oriented Recording to low-level recording. Object-Oriented Recording examines Windows GUI objects and other
objects in the application-under-test without depending on precise timing or screen coordinates. Low-level recording tracks detailed mouse movements
and keyboard actions by screen coordinates and exact timing.
13. When finished, click the Stop Recording button on the GUI Record toolbar. The Robot main window appears as follows:
- The script that you recorded appears in a Script window within the Robot main window.
- The verification points and low-level scripts in the script (if any) appear in the Asset pane on the left.
- The text of the script appears in the Script pane on the right.
14. Optionally, change the script properties by clicking File > Properties.

Restoring the Robot Main Window During Recording


When Robot is minimized or is hidden behind other windows during recording, you can bring it to the foreground in any of the following ways:
. Click the Open Robot Window button on the GUI Record toolbar.
. Click the Robot button on the Windows taskbar.
. Use the hot key combination CTRL+SHIFT+F to display the window and CTRL+SHIFT+H to hide the window.
Setting GUI Recording Options
1. Open the GUI Record Options dialog box by doing one of the following:
--Before you start recording, click Tools > GUI Record Options.
-- Start recording by clicking the Record GUI Script button on the toolbar.
In the Record GUI dialog box, click Options.
2. Set the options on each tab.
3. Click OK.
Naming Scripts Automatically
1. Open the GUI Record Options dialog box.
2. In the General tab, type a prefix in the Prefix box.
Clear the box if you do not want a prefix. If the box is cleared, you will need to type a name each time you record a new script.
3. Click OK or change other options.
The next time you record a new script, the prefix and a number appear in the Name box of the Record GUI dialog box.
For example, if the autonaming prefix is Test and six scripts beginning with Test already exist, Test7 appears in the Name box when you record a new script.
How to change the object order preference?

1. Open the GUI Record Options dialog box.


2. Click the Object Recognition Order tab.
3. Select a preference in the Object order preference list.
4. Click OK or change other options.

How to change the order of the object recognition methods for an object type?

1. Open the GUI Record Options dialog box.


2. Click the Object Recognition Order tab.
3. Select a preference in the Object order preference list. If you will be testing C++ applications, change the object order preference to C++ Recognition
Order.
4. From the Object type list, select the object type to modify. The fixed set of recognition methods for the selected object type appears in the Recognition
method order list in its last saved order.
5. Select an object recognition method in the list, and then click Move Up or Move Down. Changes made to the recognition method order take place
immediately, and cannot be undone by the Cancel button. To restore the original default order, click Default.
6. Click OK.

Important Notes:
. Changes to the recognition method order affect scripts that are recorded after the change. They do not affect the playback of scripts that have already been
recorded.
. Changes to the recognition method order are stored in the project. For example, if you change the order for the CheckBox object, the new order is stored in the
project and affects all users of that project.
. Changes to the order for an object affect only the currently-selected preference. For example, if you change the order for the CheckBox object in the default preference, the order is not changed in the C++ preference.
How to create a new object order preference?

1. In an ASCII editor, create an empty text file with the extension .ord.
2. Save the file in the Dat folder of the project.
3. Click Tools > GUI Record Options.
4. Click the Object Recognition Order tab.
5. From the Object order preferences list, select the name of the file you created.
6. Change the method order to customize your preferences.

How to define an object class and map an object type to it?

1. Identify the class name of the window that corresponds to the object. You can use the Spy++ utility in Visual C++ to identify the class name. You can
also use the Robot Inspector tool by clicking Tools > Inspector.
2. In Robot, click Tools > General Options, and then click the Object Mapping tab.
3. From the Object type list, select the standard object type to be associated with the new object class name. Robot displays the class names already
available for that object type in the Object classes list box.
4. Click Add.
5. Type the class name you identified in step 1 and click OK.
6. Click OK.

How to modify or delete a custom class name?

1. Click Tools > General Options, and then click the Object Mapping tab.
2. From the Object type list, select the standard object type that is associated with the object class name. Robot displays the class names already available
for that object type in the Object classes list.
3. From the Object classes list, select the name to modify or delete.
4. Do one of the following:
. To modify the class name, click Modify. Change the name and click OK.
. To delete the object class mapping, click Delete. Click OK at the confirmation prompt.
5. Click OK.

Pausing and Resuming the Recording of a Script


To pause recording:
--Click the Pause button on the GUI Record toolbar. Robot indicates a paused state by:
----Depressing the Pause button.
----Displaying Recording Suspended in the status bar.
----Displaying a check mark next to the Record > Pause command.
To resume recording:
-- Click Pause again.
Note: Always resume recording with the application-under-test in the same state that it was in when you paused.
Robot has two recording modes
1. Object-Oriented Recording mode
Examines objects in the application-under-test at the Windows layer during recording and playback. Robot uses internal object names to identify objects, instead of
using mouse movements or absolute screen coordinates. If objects in your application’s graphical user interface (GUI) change locations, your tests still pass
because the scripts are not location dependent. As a result, Object-Oriented Recording insulates the GUI script from minor user interface changes and simplifies
GUI script maintenance.
2. Low-level recording mode
Tracks detailed mouse movements and keyboard actions by screen coordinates and exact timing. Use low-level recording when you are testing functionality that
requires the tracking of detailed mouse actions, such as in painting, drawing, or CAD applications.
To switch between the two modes during recording, do one of the following:
------Press CTRL+SHIFT+R.
------Click the Open Robot Window button on the GUI Record toolbar (or press CTRL+SHIFT+F) to bring Robot to the foreground. Click Record > Turn Low-Level
Recording On/Off.
How to end the recording of a script?
Click the Stop Recording button on the GUI Record toolbar.
How to define script properties?

1. Do one of the following:


. If the script is open, click File > Properties.
. If the script is not open, click File > Open > Script. Select the script and click the Properties button.
2. In the Script Properties dialog box, define the properties. For detailed information about an item, click the question mark near the upper-right corner of
the dialog box, and then click the item.
3. Click OK.

How to code a script manually?

1. In Robot, click File > New > Script.


2. Type a script name (40 characters maximum) and, optionally, a description of the script.
3. Click GUI.
4. Click OK. Robot creates an empty script with the following lines:
Sub Main
Dim Result As Integer
'Initially Recorded: 01/17/05 14:55:53
'Script Name: GUI Script
End Sub
5. Begin coding the GUI script.
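Continuing from the skeleton above, a minimal hand-coded GUI script might look like the following sketch (the window caption, button text, and VP name are hypothetical):

Sub Main
    Dim Result As Integer
    'Bring the application window into context
    Window SetContext, "Caption=Order Entry", ""
    'Click a button in that window
    PushButton Click, "Text=Submit"
    'Verify that a confirmation window appears
    Result = WindowVP (Exists, "Caption=Order Confirmation", "VP=ConfirmWin")
End Sub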

How to add a new action to an existing script?

1. If necessary, open the script by clicking File > Open > Script.
2. If you are currently debugging, click Debug > Stop.
3. In the Script window, click where you want to insert the new actions. Make sure that the application-under-test is in the appropriate state to begin
recording at the text cursor position.
4. Click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options
dialog box.
5. Continue working with the application-under-test as you normally do when recording a script.

How to add a feature to an existing GUI script?

1. If necessary, open the script by clicking File > Open > Script.
2. If you are currently debugging, click Debug > Stop.
3. In the Script window, click where you want to insert the feature. Make sure that the application-under-test is in the appropriate state to insert the feature
at the text cursor position.
4. Do one of the following:
- To add the feature without going into recording mode, click the Display GUI Insert Toolbar button on the Standard toolbar. The Robot Script window remains open.
- To start recording and add the feature, click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options dialog box. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.
5. Click the appropriate button on the GUI Insert toolbar.
6. Continue adding the feature as usual.

How to batch compile scripts and library source files?

1. Click File > Batch Compile.


2. Select an option to filter the type of scripts or files you want to appear in the Available list: GUI scripts, VU scripts, or SQABasic library source files.
3. Optionally, select List only modules that require compilation to display only those files that have not yet been compiled or that have changed since they
were last compiled.
4. Select one or more files in the Available list and click > or >>. Robot compiles the files in the same order in which they appear in the Selected list.
5. Click OK to compile the selected files.

How to set and clear breakpoints?

1. If necessary, open a script by clicking File > Open> Script.


2. Place the pointer on the line where you want to set a new breakpoint or clear an existing breakpoint. You can only place a breakpoint on a line where an SQABasic command is executed. Breakpoints on comments, labels, and blank lines are not supported. Also, a few commands do not support breakpoints (for example, Dim and Sub).
3. Click once to insert a blinking text cursor. (You can also highlight the entire line or any part of the line.)
4. Click Debug > Set or Clear Breakpoint. If you set a breakpoint, Robot inserts a solid red circle in the left margin or highlights the line. If you clear a
breakpoint, Robot removes the circle or highlighting.
5. If you set a breakpoint, click Debug > Go. Robot executes as far as the breakpoint, and then displays a yellow arrow in the left margin of that line or
highlights the line.

The following steps outline the general process for recording a script.

1. Set the session recording options. Recording options tell Robot how to record and generate scripts. You set recording options to specify:
- The type of recording you want to perform, such as API, network, or proxy. The recording method you choose determines some of the other recording options you need to set.
- Script generation options, such as specifying whether you want the script to include datapool commands or think time delays, and whether you want to
filter out protocols to control the size of the script.
2. Start the recording session. With the API recording method, you must start recording first, at which point Robot prompts you for the name of the client.
With the other recording methods, network and proxy, you can start recording before or after you start the client.
3. Start the client application.
4. Record the transactions. While you are recording the transactions, you can split the session into multiple scripts, each representing a logical unit of work.
5. Optionally, insert timers, blocks, comments, and other features into the script during recording.
6. Close the client application.
7. Stop recording.
8. Robot automatically generates scripts.

How to create a shell script to Play Back Scripts in Sequence?

1. Click File > New > GUI Shell Script.


2. Type a name (40 characters maximum).
3. Optionally, type a description.
4. Click OK.
5. To add scripts, select one or more scripts in the Available list and click > or >>. Robot plays back scripts in the same order in which they appear in the
Selected list.
6. Click OK.
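Conceptually, the generated shell script is little more than a sequence of CallScript commands. A sketch with hypothetical script names:

Sub Main
    'Robot plays these scripts back in the order listed
    CallScript "Login"
    CallScript "PlaceOrder"
    CallScript "Logout"
End Sub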

Playing Back a Shell Script

1. Click Tools > GUI Playback Options.


2. In the Playback tab, clear the Acknowledge results check box. This prevents a pass/fail result message box from appearing for each verification point.
You can still view the results in the log after playback.
3. Set the other options in the tabs as appropriate.
4. Click OK.

Starting Applications
1. Do one of the following:
. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Do one of the following:
. To start most applications, click the Start Application button. You can specify that you want the application to start under Rational Purify, Quantify, or
PureCoverage during playback.
. To start a Java application that you want to start under Quantify or PureCoverage during playback, click the Start Java Application button.
. To start an HTML application, click the Start Browser button.
3. Fill in the dialog box and click OK.
For information about an item in the dialog box, click the question mark in the upper-right corner and then click the item.
4. Continue recording or editing the script.
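In the script, these Start buttons generate SQABasic start commands along the following lines (the path and URL are hypothetical):

'Start a standard Windows application
StartApplication """C:\Apps\OrderEntry.exe"""
'Start a browser on a given page
StartBrowser "http://www.example.com/login", "WindowTag=WEBBrowser"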
How to insert a call to a previously recorded script while recording or editing?

1. Do one of the following:


. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Call Script button on the GUI Insert toolbar.
3. Select a GUI script from the list.
4. Do one of the following:
. Select Run Now if the script being recorded depends on the state in which the called script leaves the application-under-test. If this check box is selected, Robot adds the script call to the recording script and immediately plays back the called script when you click OK.
. Clear Run Now if the called script starts and ends at the same point in the application-under-test, so that the script being recorded does not depend on the called script. If this check box is cleared, Robot adds the script call to the recording script but does not play back the called script when you click OK.
5. Click OK to continue recording or editing.
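Whichever you choose for Run Now, the call itself appears in the script as a single line, for example (script name hypothetical):

CallScript "Login"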

How to insert a verification point while recording or editing a script?

1. Do one of the following:


. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click a verification point button on the GUI Insert toolbar.
3. In the Verification Point Name dialog box, edit the name of the verification point as appropriate. Robot automatically names the verification point with the
verification point type, and adds a number if there is more than one of the same type in the script.
4. Optionally, set the Wait state options. The wait state specifies how often Robot should retry the verification point until it passes or times out, and how
long Robot should keep trying the verification point before it times out.
5. Optionally, set the Expected result option. When you create a verification point, the expected result is usually that the verification point will pass - for
example, that a window does exist during playback. However, you can also indicate that you expect the verification point to fail - for example, that a
window does not exist during playback.
6. Click OK.
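Both options surface as parameters of the generated verification point command. A sketch, assuming a Window Existence verification point that is expected to fail because the window should be absent (the caption, VP name, and the ExpectedResult parameter usage are hypothetical here):

'This VP is logged as expected when the Error window does NOT appear
Result = WindowVP (Exists, "Caption=Error", "VP=NoErrWin;Wait=2,30;ExpectedResult=FAIL")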

Measuring Specific Task Performance


1. During recording, start a timer.
2. Start an application task or transaction.
3. Insert a verification point with a wait state. For example, insert a Window Existence verification point that waits up to 30 seconds for a window that indicates the task is complete.
4. Stop the timer.
5. Continue recording other actions or stop the recording. After you play back the script, the log shows the timing results.
How to insert a timer while recording or editing a script?
1. Do one of the following:
. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Start Timer button on the GUI Insert toolbar.
3. Type a timer name (40 characters maximum) and click OK. If you start more than one timer, make sure you give each timer a different name.
4. Perform the timed activity.
5. Immediately after performing the timed activity, click the Stop Timer button on the GUI Insert toolbar.
6. Select a timer name from the list of timers you started and click OK.
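In the script, the timer brackets whatever you recorded between starting and stopping it. A sketch with hypothetical names:

StartTimer "RunQuery"
'The timed activity: submit the query
PushButton Click, "Text=Run Query"
'Wait for the results window so the timer captures the whole task
Result = WindowVP (Exists, "Caption=Query Results", "VP=ResultsWin;Wait=2,30")
StopTimer "RunQuery"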

Playing Back a Script that Includes Timers


1. Click Tools > GUI Playback Options.
2. In the Playback tab, clear Acknowledge results.
This prevents a pass/fail result message box from appearing for each verification point. You can still view the results in the log after playback.
3. In the Playback tab, set the Delay between commands value to 0. This removes any extra Robot timing delays from the performance measurement. If you need
a delay before a single command, click Insert > Delay and type a delay value.
4. Click OK.
How to insert a log message into a script during recording or editing?
1. Do one of the following:
. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Write to Log button on the GUI Insert toolbar.
After playback, you can view logs and messages using TestManager.
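The Write to Log feature generates an SQALogMessage call. A minimal sketch (the message text is hypothetical):

'Write an informational message, with no pass/fail result, to the log
SQALogMessage sqaNone, "Reached the order entry screen", "Optional description"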
How to Choose Network Recording?

1. Click Tools > Session Record Options.
2. Click the Method tab, and click Network recording.
3. Optionally, click the Method:Network tab, and select the client/server pair that you will record. The default is to record all of the network traffic to and from your computer.
4. Optionally, click the Generator Filtering tab to specify the network protocols to include in the script that Robot generates.

How to Choose Proxy Recording

1. Click Tools > Session Record Options.
2. Click the Method tab, and click Proxy recording.
3. Click the Method:Proxy tab to:
. Create a proxy computer.
. Identify client/server pairs that will communicate through the proxy.

How to set up and use proxy recording?


• Start Robot on the proxy computer.
• In the Proxy Administration dialog box, match up the proxy computer and port with each server to be used in the test.
• In the Method:Proxy tab of the Session Record Options dialog box, match up each client with the server it will send requests to. Be sure to specify the actual server and not the proxy computer.
• Configure each client to send requests to the proxy computer, not to the server. For example, if the client will be sending requests to an Oracle database, use
the Oracle client configuration software to specify the proxy computer’s address and port number, not the server’s.
• On each client computer, a tester should start the client application and navigate to the point where recording will begin.
• On the proxy computer, enable recording (File > Record Session).
• With recording enabled, each tester at each client computer performs the transactions to record.
• When all transactions are complete, stop recording on the proxy computer.
How to create a proxy computer?

1. Click Tools > Session Record Options.
2. Click the Method tab and make sure that Proxy recording is selected.
3. Click the Method:Proxy tab.
4. Click Proxy Admin.
5. In Proxy:Port, specify the proxy computer’s port number. Note that Robot has already detected and specified the proxy computer’s name. You can specify any available port number. Avoid the well-known ports (those below 1024). If you specify a port number that is unavailable, Robot prompts you for a new port number.
6. In the Server:Port list, select a server involved in the test.
7. Click Create Proxy.

How to associate each client in the test with the server it will communicate with?

1. Click Tools > Session Record Options.
2. Click the Method tab and make sure that Proxy recording is selected.
3. Click the Method:Proxy tab.
4. Select a client in the Client[:Port] list. The client port is optional.
5. Select the client’s server in the Server:Port list. The server port is required.
6. Click Add.

Controlling the Values Accepted When an HTTP Script Is Played Back


You can set recording options that control which status values are acceptable when a script that accesses a Web server is played back. If you do not set any
recording options, the script plays back successfully only if the playback conditions exactly match the conditions during recording. However, you can set recording
options so that a script plays back successfully even if:
• The server responds with partial or full page data during record or playback.
• The response was cached during record or playback.
• The script was redirected to another http server during playback.
• You are recording a number of HTTP scripts and plan to play them back in a different order.

How to expand the conditions under which a script will play back successfully?

1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select HTTP at the Protocol section, and then select one or more of the following:
a. Allow partial responses
Select this option to enable a script to play back successfully if the HTTP server responds with partial data during playback. This generates a script that sets the TSS environment variable Http_control to HTTP_PARTIAL_OK. Leaving this box cleared enforces strict interpretation of recorded responses during playback.
b. Allow cache responses
Select this option to enable a script to play back successfully if a response is cached differently during playback. This generates a script that sets the TSS environment variable Http_control to HTTP_CACHE_OK. Leaving this box cleared enforces strict interpretation of recorded cache responses during playback.
c. Allow redirects
Select this option to enable a script to play back successfully if the script was directed to another HTTP server during playback or recording. This generates a script that sets the TSS environment variable Http_control to HTTP_REDIRECT_OK. Leaving this box cleared enforces strict interpretation of recorded redirects during playback.
d. Use HTTP keep-alives for connections in a session with multiple scripts. You should generally leave this box cleared.

How to specify the level of data correlation?


TestManager finds the session IDs (and other correlated variables) and, when you run the suite, automatically generates the proper script commands to extract
their actual values.
Before you record a script, you can tell TestManager to correlate all possible values (the default), not to correlate any values, or to correlate only a specific list of
variables that you provide.
To specify the level of data correlation:

1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. At Correlate variables in response, select one of the following:
a. All - All variables are correlated. You should generally select this option. Select another option only if you encounter problems when you play back the
script.
b. Specific - Only the variables that you select are correlated.
c. None - No variables are correlated.

Providing the Name of an Oracle Database

1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select Oracle at the Protocol section.
4. Enter the name that the client application uses in the Database name box.

Assigning a Prefix to IIOP Command IDs and Including IORs in IIOP_bind


If you are recording IIOP requests, you can have Robot automatically assign an identifying prefix to IIOP emulation command IDs. You can also include the
original IORs in iiop_bind commands.

1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select IIOP at the Protocol section, and then select one or more of the following:
- Use operation name to prefix emulation command IDs
- Include original IORs in iiop_bind commands

DCOM Recording
Robot records DCOM client applications that are constructed with Visual Basic (compiled executables), Java (compiled executables), or C++, with the restriction that the C++ interfaces must be usable by Visual Basic; that is, they must use attributes that conform to OLE Automation. No preprocessing of the application is necessary before recording begins.
Assigning a Prefix to DCOM Command IDs
If you are recording DCOM requests, you can have Robot automatically assign an identifying prefix to DCOM emulation command IDs.

1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select DCOM at the Protocol section, and then select an event label. The label you select determines the prefix of the emulation command IDs in your
script.

How to add a client or server computer?


If you are using network or proxy recording, and the computer that you want to use is not listed in the Method Network and Method Proxy tabs, you can add it to
the list.

1. Click Tools > Session Record Options


2. Click the Method:Network tab or Method:Proxy tab, depending on whether you are adding a computer for network or proxy recording.
3. Click Manage Computers.
4. Click New.
5. In the Name box of the Computers group, type a name to associate with the network name of the computer that you are adding. You can assign any
name up to 40 characters.
6. Type the computer’s network name.
7. Optionally, click Ping to make sure that the network name you just typed is correct. If it is correct, -Successful Ping of network name- appears in the
status bar.
8. Select Client System to list this computer as a client. Select Server System to list this computer as a server. You can select both choices.
9. Click Add.
10. Take the following steps to use a port number with the network name. A port number is required for servers used in proxy recording:
a. In the Ports group, type a name to associate with the port number that you are adding. You can assign any name up to 40 characters.
b. Type the port number to use with the computer’s network name.
c. Click Apply, then OK.
11. Click Close.

How to remove a client or server computer from the Method:Network tab?


1. Click Tools > Session Record Options.
2. Click the Method:Network tab.
3. Click Manage Computers.
4. In the Name box of the Computers group, select the computer name to remove from the list.
5. Click Delete.
6. Click Close.
How to remove a port name and number from a computer’s address or edit the information?
1. Click Tools > Session Record Options.
2. Click the Method:Network tab.
3. Click Manage Computers.
4. In the Name box of the Computers group, select the computer name associated with the port that you are removing.
5. Under Ports, select the port name to remove.
6. Click Edit and make the changes.
7. Click Close.
How to record a single script in a session?

1. In Robot, click the Record Session button. Alternatively, click File > Record Session, or press CTRL+SHIFT+R.
2. Type the session name (40 characters maximum), or accept the default name. You will specify the script name when you finish recording the script. If
you have not yet set your session recording options, do so now by clicking Options.
3. Click OK in the Record Session - Enter Session Name dialog box. The following events occur:
. Robot is minimized (default behavior).
. The Session Record floating toolbar appears (default behavior). You can use this toolbar to stop recording, redisplay Robot, split a script, and insert
features into a script.
. The Session Recorder icon appears on the taskbar. The icon blinks as Robot captures requests and responses.
4. If the Start Application dialog box is displayed, provide the following information, and click OK:
. The path of the executable file for the browser or database application.
. Optionally, the working directory for any components (such as DLLs) that the client application needs at runtime.
. Optionally, any arguments that you want to pass to the client application. The Start Application dialog box appears only if you are performing API
recording, or if you are performing network or proxy recording and selected Prompt for application name on start recording in the General tab of the
Session Record Options dialog box.
5. Perform the transactions that you want to record. As the application sends requests to the server, notice the activity in the Session Recorder window.
Progress bars and request statistics appear in the top of the window.
If there is no activity in the Session Recorder window (or if the Session Recorder icon never blinks), there is a problem with the recording. Stop recording
and try to find the cause of the problem.
6. Optionally, insert features such as blocks and timers through the Session Insert floating toolbar or through the Robot Insert menu.
7. Optionally, when you finish recording transactions, close the client application. With API recording, when you close the client, Robot asks whether you
want to stop recording. If so, click Yes, and either name the session or click to ignore the recorded information in the Generating Scripts dialog box.
8. Click the Stop Recording button on the Session Record floating toolbar.

How to stop recording and generate scripts in a Session?


1. Click the Stop Recording button on the Session Record floating toolbar.
2. In the Name of the just-recorded script box, type or select a name for the script that you just finished recording, or accept the default name.
3. Click OK.
How to see error messages if problems occur during script generation?
The error message appears in the status bar of the Generating Scripts dialog box, like this: "Completed with warnings and/or errors"
To see the list of errors, click Details. If the text of an error is truncated, you can either:
. Double-click the text to see the entire message.
. Press CTRL+C to copy the text to the Clipboard.
How to cancel a script in a single-script session?
1. During recording, click the Stop button on the Session Record floating toolbar.
2. In the Stop Recording dialog box, click Ignore just-recorded information.
3. Click OK in the Stop Recording dialog box.
4. Click OK to acknowledge that the session is being deleted.
How to cancel the current script in a multi-script session?
1. During recording, click the Split Script button on the Session Record floating toolbar.
2. In the Split Script dialog box, click Ignore just-recorded information.
3. Click OK.
How to cancel all scripts in a multi-script session?
1. During recording, click the Stop button on the Session Record floating toolbar.
2. Click OK in the Stop Recording dialog box.
3. Immediately click Cancel in the Generating Scripts dialog box.
When would you want to split a session?
If quick script development time is a priority - perhaps because testable builds are developed daily, or because web content is updated daily.
How to split a session into multiple scripts?
1. During recording, at the point where you want to end one script and begin a new one, click the Split Script button on the Session Record floating toolbar.
2. Enter a name for the script that you are ending, or accept the default name.
3. Click OK.
4. Repeat the previous steps as many times as needed to end one script and begin another.
5. After you click the Stop Recording button to end the recording session, type or select a name for the last script you recorded, or accept the default name.
How to import a session file and regenerate scripts?
You can import a session from a different computer into your current project.

1. In Robot, click Tools > Import Session. The Open dialog box appears.
2. Click the session file, then click Open. The session and its scripts are now in your project.
3. To regenerate the scripts in the session you imported, click Tools > Regenerate Test Scripts from Session, and select the session you imported.
4. To regenerate the suite, click Tools > Rational Suite TestStudio > Rational TestManager.
5. Click File > New Suite. The New Suite dialog box appears.
6. Select Existing Session, and click OK.
7. TestManager displays a list of sessions that are in the project. Click the name of the session that you imported, and click OK.

How to regenerate Scripts from a Session?


1. In Robot, click Tools > Regenerate Test Scripts from Session.
2. Click the name of the session to use.
3. Click OK to acknowledge that the regeneration operation is complete.
How to Define Script Properties in Robot?
1. Click File > Open > Test Script to open the Open Test Script dialog box.
2. Click the script you are defining properties for.
3. Click Properties.
4. Define the script’s properties, and click OK.
How to Find the Session Associated with a Script?
1. In Robot, click File > Open > Test Script.
2. Click the name of the script whose associated session you want to view.
3. Click Properties.
4. Click General.
5. View the session name in Referenced Session.
How do Timers Work?
1. Start the timer (click Insert > Start Timer) just before you click the button to send the query. This action inserts the VU emulation command start_time into the
script.
2. Stop the timer (click Insert > Stop Timer) as soon as the results appear. This action inserts the VU emulation command stop_time into the script.
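The same measurement can be made in a GUI script with SQABasic timer commands. The following is a minimal sketch, assuming Robot's SQABasic StartTimer and StopTimer commands; the window captions, button text, and timer name are hypothetical placeholders, not taken from a recorded script.

Sub Main
    'Hypothetical query window; the caption is a placeholder.
    Window SetContext, "Caption=Query Window", ""

    'Start timing just before the action that sends the query.
    StartTimer "QueryResponse"
    PushButton Click, "Text=Submit"

    'Stop timing as soon as the results window appears.
    Window SetContext, "Caption=Results Window", ""
    StopTimer "QueryResponse"
End Sub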
What's a block?
A block is a set of contiguous lines of code that you want to make distinct from the rest of the script. Typically, you use a block to identify a transaction within a
script.
What are a block's characteristics?

• A block begins with a comment. In the VU language, a block begins like this:
/* Start_Block "BlockName" */
• Robot automatically starts a timer at the start of the block. In the VU language, the timer looks like this:
start_time ["BlockName"] _fs_ts;
Typically, the start_time emulation command is inserted after the first action, but with an argument (the read-only variable _fs_ts) that refers to the start of
the first action.
• The ID of every emulation command in a block is constructed the same way: the block name followed by a unique, three-digit autonumber. For
example, in the VU language:
http_header_recv ["BlockName002"] 200;
When you end a block, command IDs are constructed as they were before you started the block. For example, if the last command ID before the block
was Script025, the next command ID after the block will be Script026.
• A block ends with a stop_time command plus a comment. For example, in the VU language:
stop_time ["BlockName"]; /* Stop_Block */
• A script can have up to 50 blocks.

Why Use Blocks?


. To associate the block and timer names with the emulation command that performs the transaction.
. To include the block name in TestManager reports, thus enabling you to filter the reports by block name.
. To make the script easier to read, and to provide an immediate context for a line within the block through command IDs.
How to insert a block into a script?

1. If the Session Insert floating toolbar is not already displayed, click the Insert button on the Session Record floating toolbar.
2. Click the Start Block button at the point in the script where you want the block to begin (for example, just before you start to record a transaction).
3. Type the block name. Robot uses this name as the prefix for all command IDs in the block. The maximum number of characters for a command ID prefix
is seven.
4. Click OK.
5. Record all of the client requests in the block.
6. Click the Stop Block button to end the current block, and click OK.
7. Continue recording the other sections of the script. When you start and stop a block during recording, the commands are reported as annotations in the
Annotations window.
What's a synchronization point?
A synchronization point lets you coordinate the activities of a number of virtual testers by pausing the execution of each tester at a particular point.
Why Use Synchronization Points?
By synchronizing virtual testers to perform the same activity at the same time, you can make that activity occur at some particular point of interest in your test.
Typically, synchronization points that you insert into scripts are used in conjunction with timers to determine the effect of varying workload on the timed activity.
How to insert Synchronization Points?
You can insert a synchronization point into a script (through Robot) or into a suite (through TestManager).
1. Into a script, in one of the following ways:
. During recording, through the Sync Point toolbar button or through the Insert menu.
. During script editing, by manually typing the synchronization point command name into the script.
2. Into a suite, through the TestManager Synchronization Point dialog box.
Why Restore the Test Environment Before Playback?
The state of the Windows environment as well as your application-under-test can affect script playback. If there are differences between the recorded environment
and the playback environment, playback problems can occur.
How to set GUI playback options?
Open the GUI Playback Options dialog box by doing one of the following:
. Before you start playback, click Tools > GUI Playback Options.
. Start playback by clicking the Playback Script button on the toolbar. In the Playback dialog box, click Options.
How to set Log Options for Playback?

1. Open the GUI Playback Options dialog box.


2. Click the Log tab.
3. To output the playback results to the log so you can view them, select Output playback results to log.
4. To have the log appear automatically after playback is complete, select View log after playback. If you clear this, you can still view the log after playback
by clicking Tools > Rational Test > TestManager, and then opening the log.
5. To have Robot prompt you before it overwrites a log, select Prompt before overwrite log.
6. Click one of the following:
Specify log information at playback - Displays the Specify Log Information dialog box so that you can specify the build, log folder, and log.
Use default log information at playback - Uses the same build and log folder that was used during the last playback, and uses the script name as the log name.
7. Click OK or change other options.

Why set Wait State and Delay Options?


If a script needs to wait before executing a particular command, you can insert a delay for just that command. If you are testing an application in which time
estimates are not predictable, you can define a wait state for a verification point so that playback waits based on specific conditions rather than on absolute time.
How to set the wait state options?
1. Open the GUI Playback Options dialog box.
2. Click the Wait State tab.
3. To specify how often Robot checks for the existence of a window, type a number in the Retry test every box.
4. To specify how long Robot waits for a window before it times out, type a number in the Timeout after box.
5. Click OK or change other options.
How to set the delay options for commands and keystrokes?
1. Open the GUI Playback Options dialog box.
2. Click the Playback tab.
3. Click Delay between commands. Type the delay value.
This is the delay between each user action command and between each verification point command during playback.
4. Click Delay between keystrokes. Type the delay value.
5. Click OK or change other options.
How to set Error Recovery Options?
Use the error recovery options to specify how Robot handles script command failures and verification point failures.

1. Open the GUI Playback Options dialog box.


2. Click the Error Recovery tab.
3. To specify what Robot should do if it encounters a failure, click one of the following options under both On script command failure and On verification
point failure:
Continue execution - Continues playback of the script.
Skip current script - Terminates playback of the current script. If the script with the failure was called from another script, playback resumes with the
command following the CallScript command.
Abort playback - Terminates playback of the current script. If the script with the failure was called from another script, the calling script also terminates.
4. Click OK or change other options.
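The skip and abort behaviors matter most when scripts are nested. Below is a minimal SQABasic sketch, assuming only the standard CallScript command; the script names are hypothetical placeholders.

Sub Main
    'Parent script calling two child scripts; the names are placeholders.
    CallScript "Login"

    'With Skip current script selected, a failure inside Login resumes
    'playback here, at the command following the CallScript.
    CallScript "PlaceOrder"

    'With Abort playback selected, a failure inside either child script
    'terminates this calling script as well.
End Sub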

What's an unexpected active window?


An unexpected active window is any unplanned window that appears during script playback and prevents the expected window from being made active (for
example, an error message from the network or application-under-test). These windows can interrupt playback and cause false failures.
How to set options to specify how Robot responds to unexpected active windows?

1. Open the GUI Playback Options dialog box.


2. Click the Unexpected Active Window tab.
3. To have Robot detect unexpected active windows and capture the screen image for viewing in the Image Comparator, select Detect unexpected active
windows and Capture screen image.
4. Specify how Robot should respond to an unexpected active window:
Send key - Robot sends the specified keystroke: ENTER, ESCAPE, or any alphabetic key (A through Z).
Select pushbutton with focus - Robot clicks the push button with focus.
Send WM_CLOSE to window - Robot sends a Windows WM_CLOSE message.
This is equivalent to clicking the Windows Close button.
5. Specify what Robot should do if it cannot remove an unexpected active window:
Continue running script - Robot continues script playback with the next command in the script after the one being processed when the unexpected active
window appeared. Playback continues even if the unexpected active window cannot be removed. This may result in repeated script command failures.
Skip current script - Robot halts playback of the current script. If the script that detected the unexpected active window was called from within another
script, playback resumes with the script command following the CallScript command.
Abort playback - Robot halts playback completely. If the script that detected the unexpected active window was called from within another script, the
calling script also stops running.
6. Click OK or change other options.

Why set the Diagnostic Tools Options?


You can use the Rational diagnostic tools - Rational Purify, Quantify, and PureCoverage - to collect diagnostic information about an application during playback of
a Robot script.
After playback, Robot can integrate the diagnostic tool’s results into the Robot log, so that you can view all of the playback results in one place. You can choose to
show any combination of errors, warnings, and informational messages. You can then double-click a result in the log to open the script in Robot and the
appropriate file in the diagnostic tool.
How to set options to specify the diagnostic tool to be used during playback?
1. Open the GUI Playback Options dialog box.
2. Click the Diagnostic Tools tab. Then do the following:
a. Click the diagnostic tool under which the application should run. The options are enabled if you have the tools installed.
b. Optionally, change this value; it multiplies wait state and delay values.
c. Select the type of information to show in the log.
3. If you selected any of the log check boxes in step 2c, click the Log tab and select both Log management check boxes.
4. Click OK or change other options.
What's the Trap utility?
Robot uses the Trap utility to detect the occurrence of General Protection Faults (GPF) and the location of offending function calls during playback. If a GPF is
detected, Robot updates a log file that provides information about the state of the Windows session that was running.
How to automatically start Trap during playback?
1. Open the GUI Playback Options dialog box.
2. Click the Trap tab.
3. Select Start Trap to enable the other options.
4. To include the contents of the stack for non-current tasks, select Stack trace.
5. To include the modules and class list information, select Module and class list.
6. Click one of the following to specify what Trap should do after detecting a GPF:
Restart Windows session - Trap restarts Windows.
Call user-defined sub procedure - Trap calls the sub-procedure in the module that you specify. Select this option to specify your own custom SQABasic error
handling. Type the names of the library source file (with an .sbl extension) and the sub-procedure.
7. Click OK or change other options.
How to play back a GUI script?
1. Prepare for playback by restoring the test environment.
2. Set your playback options. You can also set these options after you start playback.
3. Click the Playback Script button on the toolbar.
4. Type a name or select it from the list.
5. To change the playback options, click GUI Options. When finished, click OK.
6. Click OK to continue.
7. If the Specify Log Information dialog box appears, fill in the dialog box and click OK.
8. If a prompt appears asking if you want to overwrite the log, do one of the following:
. Click Yes to overwrite the log.
. Click No to return to the Specify Log Information dialog box. Change the build, log folder, and/or log information.
. Click Cancel to cancel the playback.
What Is a Datapool?
A datapool is a test dataset. It supplies data values to the variables in a script during script playback. Datapools should be considered whenever multiple records
are being sent to the server in a single playback, and you want to send a different record each time.
A datapool consists of two files.
. Datapool values are stored in a text file with a .csv extension.
. Datapool column names are stored in a specification (.spc) file. The Robot or TestManager software is always responsible for creating and maintaining this file.
You should never edit this file directly.
.csv and .spc files are stored in the Datapool directory of your Robot project.
Datapool Limits.
A datapool can have up to 150 columns if the Rational Test software automatically generates the data for the datapool, or 32,768 columns if you import the datapool
from another database or source. A datapool can have up to 2 billion rows. Datapool names can be 1 to 40 characters long, with a description of 1 to 255 characters.
What Kinds of Problems Does a Datapool Solve?
1. Problem: During recording, you create a personnel file for a new employee, using the employee’s unique social security number. Each time the script is played
back, there is an attempt to create the same personnel file and supply the same social security number. The application rejects the duplicate requests.
Solution: Use a datapool to send different employee data, including unique social security numbers, to the server each time the script is played back.
2. Problem: You delete a record during recording. During playback, each instance and iteration of the script attempts to delete the same record, and Record Not
Found errors result.
Solution: Use a datapool to reference a different record in the deletion request each time the script is played back.
3. Problem: The client application reads a database record while you record a script for a performance test. During playback, that same record is read hundreds of
times. Because the client application is well designed, it puts the record in cache memory, making its retrieval deceptively fast in subsequent fetches. The
response times that the performance test yields will be inaccurate.
Solution: Use a datapool to request a different record each time the script is played back.
How to create a Datapool with Robot?
1. Plan the Datapool
- What datapool columns do you need?
- What data type should you assign to each column?
- Do you need to create data types?
2. Generate the code
- Manually add datapool commands to the script
- Match up script variable names with datapool columns
3. Create and Populate the Datapool
- In TestManager, define datapool columns (including assigning a data type to each datapool column).
- Generate the data.
- Verify the generated data is correct (open the datapool back up to view it).
- Test with script
Datapool data types.
A datapool data type is a source of data for one datapool column. There are two kinds of datapool data types:
Standard data types - These are included with Rational Robot, and consist of types such as first name, last name, city, and state (see Appendix C of the Rational
Robot documentation for a full list of data types).
User-defined data types - These are data types that the user creates. You must create a data type if none of the standard data types contains the kind of values
you need for your script.
Datapool Commands
To use the datapool functionality, you must include the sqautil.sbh header file in every script that will use a datapool.
SQADatapoolOpen - used to open a specific datapool. It returns a datapool ID:
dp=SQADatapoolOpen("name of datapool")
SQADatapoolFetch - used to retrieve an entire row (record) of values from the datapool:
Call SQADatapoolFetch(dp)
SQADatapoolValue - used to retrieve an individual value from the fetched datapool row and assign it to a script variable:
Call SQADatapoolValue(dp, 4, ccNum)
dp = the datapool ID returned by SQADatapoolOpen
4 = the column number in the datapool
ccNum = the script variable that receives the value
SQADatapoolClose - used to close a specific datapool:
Call SQADatapoolClose(dp)
Example of a GUI script that uses datapools:
'$Include "sqautil.sbh"

Sub Main
    Dim Result As Integer
    Dim x As Integer
    'Datapool ID returned by SQADatapoolOpen
    Dim dp As Long
    'Variable to be assigned data from the datapool
    Dim ccNum As String

    'Open the datapool
    dp = SQADatapoolOpen("name of datapool")

    'Fetch a full record from the datapool 10 times
    For x = 0 To 9
        Call SQADatapoolFetch(dp)

        Window SetContext, "Caption=Main Window; Class=Window", ""
        PushButton Click, "Text=Order"

        'Add the datapool value to the order window
        Window SetContext, "Caption=Order Window; Class=Window", ""
        EditBox Click, "ObjectIndex=3", "Coords=13,14"
        Call SQADatapoolValue(dp, 4, ccNum)
        InputKeys ccNum

        Window SetContext, "Caption=Order Window; Class=Window", ""
        PushButton Click, "Text=OK"
    Next x

    Call SQADatapoolClose(dp)
End Sub

How to edit datapool configuration and to begin the process of defining and generating a datapool:
1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
This dialog box lets you edit the DATAPOOL_CONFIG section of the script.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Take one of these actions:
. Click Create to define and populate the new datapool.
. Click Close if you do not want to define and populate a datapool at this time.
6. If you clicked Create in the previous step, continue by following the instructions in the next section.
How to Define and populate the datapool?

1. To insert one or more new columns into the datapool file:


a. Click the row located either just before or just after the location where you want to insert the new datapool column. (Note that the order in which
datapool column names are listed in Name determines the order in which values are stored in a datapool record.)
An arrow appears next to the name of the datapool row you clicked.
b. Click either Insert before or Insert after, depending on where you want to insert the datapool column.
c. Type a name for the new datapool column (40 characters maximum).
Make sure there is a script variable of the same name listed in the Configure Datapool in Test Script dialog box. The case of the names must match.
2. For each datapool column in the grid, assign a data type to the column, and modify the default property values for the column as appropriate.
3. When finished defining datapool columns, type a number in the No. of records to generate field.
If a different row has to be retrieved with each fetch, make sure the datapool has at least as many rows as the number of users (and user iterations) that
will be requesting rows at runtime.
4. Click Generate Data.
You cannot generate data for a datapool that has more than 150 columns. Alternatively, if you do not want to generate any data now, click Save to save
your datapool column definitions, and then click Close.
5. Optionally, click Yes to see a brief summary of the generated data.

How to correct errors if the datapool values are not successfully generated?
1. Click Yes to see the error report.
2. After viewing the cause of the errors, click Cancel.
3. Correct the errors in the Datapool Fields grid.
How to edit a datapool’s column definitions while in Robot?

1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Click Edit Specification to open the Datapool Specification dialog box, where you update datapool column definitions.
6. To insert one or more new columns into the datapool file:
a. Click the row located either just before or just after the location where you want to insert the new datapool column.
An arrow appears next to the name of the datapool row you clicked.
b. Click either Insert before or Insert after, depending on where you want to insert the datapool column.
c. Type a name for the new datapool column (40 characters maximum).
Make sure there is a script variable of the same name listed in the Configure Datapool in Test Script dialog box. Case of the names must match.
7. When finished modifying datapool columns, type a number in the No. of records to generate field. If a different row has to be retrieved with each fetch,
make sure the datapool has at least as many rows as the number of users (and user iterations) that will be requesting rows at runtime.
8. Click Generate Data.
You cannot generate data for a datapool that has more than 150 columns. Alternatively, if you do not want to generate any data now, click Save to save
your datapool column definitions, and then click Close.
9. Optionally, click Yes to see a brief summary of the generated data.

How to view or edit a datapool’s values while in Robot?

1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Click Edit Existing Data.
6. In the Edit Datapool dialog box, edit datapool values as appropriate.
7. When finished editing datapool values, click Save, and then click Close.

Using Datapools with GUI Scripts.


A GUI script can access a datapool when it is played back in Robot. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the
same datapool as other GUI scripts and/or session scripts.
There are differences in the way GUI scripts and session scripts are set up for datapool access:
. You must add datapool commands to GUI scripts manually while editing the script in Robot. Robot adds datapool commands to session scripts automatically.
. There is no DATAPOOL_CONFIG statement in a GUI script. The command SQADatapoolOpen defines the access method to use for the datapool.
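Because a GUI script has no DATAPOOL_CONFIG section, the access method is set in code when the datapool is opened. The sketch below assumes that SQADatapoolOpen accepts optional wrap, sequence, and share arguments and that the SQA_DP_SEQUENTIAL constant is declared in sqautil.sbh; verify the exact signature against your installed header. The datapool name and column number are placeholders.

'$Include "sqautil.sbh"

Sub Main
    Dim dp As Long
    Dim custName As String

    'Assumed optional arguments: wrap at the end of the datapool,
    'sequential row access, and sharing across scripts.
    dp = SQADatapoolOpen("customers", True, SQA_DP_SEQUENTIAL, True)

    Call SQADatapoolFetch(dp)
    Call SQADatapoolValue(dp, 1, custName)

    Call SQADatapoolClose(dp)
End Sub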
How to verify the Visual Basic extension is loaded?
To test Visual Basic applications, you should first verify that the Robot Visual Basic extension is loaded in Robot.
To verify that the extension is loaded:
1. Start Robot.
2. Click Tools > Extension Manager.
3. Verify that Visual Basic is selected. If not, select it.
4. To improve the performance of Robot, clear the check boxes of all environments that you do not plan to test.
5. Exit Robot.
Robot Support for Oracle Forms Applications.
You can use Robot to test Oracle Forms objects, including:
. Windows
. Forms
. Canvas-views
. Oracle Trees (Tree-views)
. Base-table blocks (single- and multi-record)
. Items, including OLE containers
Before you can test an Oracle Forms application, you need to run the Enabler.
How to run the Enabler?

1. Start the Rational Test Oracle Forms Enabler from the folder in which it was installed (the default folder is Developer 2000).
2. Click Browse. Select the .fmb file that you want to make testable and click OK.
3. Click Add Rational Test Object Testing Library.
4. Set the following options as needed:
Backup original FMB file - Creates a backup file before the file is enabled.
Enable all FMB files in selected directory - Enables every .fmb file in the directory. If this check box is not selected, only the .fmb file in the Oracle FMB
file box is enabled.
Generate each selected FMB file - Generates each .fmb file after enabling it.
5. Click Advanced to open the following dialog box.
6. If you selected the Generate each selected FMB file option, type your database connection parameters in the Database tab.
7. Click the Directories tab.
8. If you need to change the default locations of the Object Testing Library and Oracle home directory, select Override Oracle paths in registry. Click each
Browse button and select the new location.
9. Click the General tab.
10. To send the output in the Status box of the Enabler to a log file that you can view or print, select Write status output to log file.
11. If objects in your application contain the WHEN-MOUSE-ENTER trigger, the Enabler prepends sqa_mouse_handler; to each trigger. This is necessary
for Robot to correctly record mouse actions against these objects. If you need to prevent this modification, clear Modify local WHEN-MOUSE-ENTER
triggers.
12. Click OK.
13. Click Enable. As the file is enabled, information appears in the Status box.

An Oracle Forms application is made up of visual and nonvisual objects.


. Visual objects are GUI objects that you can see in the application. Examples are check boxes and push buttons.
. Nonvisual objects are non-GUI objects that you cannot see in the application. Examples are blocks and forms.
There are two views when you test a visual or nonvisual Oracle object.
1. Full View - Includes all objects (visual and nonvisual) in the application. In this view, items are children of blocks, which are children of a form. This
view also includes canvas-views and windows. When you select an object from the full view, the object is identified by its complete path in the script.
2. GUI View - Includes only the visual (GUI) objects in the application. In this view, all objects are children of a window. When you select an object from
the GUI view, the object is identified by its block.item name relative to the window in the script.
There are two methods to test the properties of an Oracle object.
1. Object Properties verification point - Use to test properties while recording or editing a script.
2. Object Scripting commands - Use to test properties programmatically while editing a script.
Robot Support for HTML Applications.
Rational Robot provides comprehensive support for testing HTML applications that run on the World Wide Web. Robot lets you test both static and dynamically-
generated pages accessed from both standard and secured HTTP servers, regardless of make or model. Robot examines the data and properties of each HTML
element, letting you test the elements that appear on your Web pages, including table data, links, and form elements, such as buttons, check boxes, lists, and text.
Configuring Your Browser before you record scripts.
Before you record scripts, you should configure Internet Explorer and/or Netscape Navigator so that scripts will play back in the same way as when you recorded
them. For best results, you should configure Internet Explorer and/or Navigator identically on both the computer that you record scripts on and the computer that
you play back scripts on. In addition, you should disable the cookie prompt.
Verifying that the HTML Extension Is Loaded.
To test HTML applications, you must first make sure that the HTML extension is loaded in Robot. To do this:
1. Start Robot.
2. Click Tools > Extension Manager.
3. Verify that HTML-MSIE or HTML-Navigator is selected. If not, select it.
4. To improve the performance of Robot, clear the check boxes of all environments that you do not plan to test.
5. Exit Robot.
Enabling HTML Testing in Robot.
After loading the HTML extension, you must enable HTML testing so that Robot can recognize HTML elements. You can do this either by starting Internet Explorer
or Netscape Navigator through the Robot Start Browser command or by loading the Rational ActiveX Test control.
To enable HTML testing using the Start Browser command:
1. Start recording in Robot. To record, click the Record GUI Script button on the Robot toolbar.
2. Type a script name or select a name from the list.
3. Click OK to display the GUI Record toolbar.
4. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.
5. Click the Start Browser button on the GUI Insert toolbar.
6. Type the URL of the HTML application that you plan to test, or click Browse and select a local file.
7. Type the name of a tag to uniquely identify this instance of the browser. By using tags, you can test multiple instances of the browser.
8. Click OK.
How to test an HTML element’s data?

1. Start recording in Robot.


2. Navigate to the Web page that contains the elements to test. For example, navigate to the page that is returned after the user submits a page to be
processed.
3. Click the Object Data Verification Point button on the GUI Insert toolbar.
4. Assign a name, wait state, and expected result for the verification point and then click OK.
5. In the Select Object dialog box, drag the Object Finder tool over the page until the element that you want to test appears in the TestTip.
6. Release the mouse button and click OK.
7. If the Object Data Test dialog box appears, select the data test to use and click OK.
8. Select the verification method that Robot should use to compare the baseline data captured while recording with the data captured during playback.
9. Click OK.
10. When finished, click the Stop Recording button on the GUI Record toolbar.

How to Test for Text within a Table ?


1. Add an Object Data verification point.
2. Select the HTMLTable object with the Object Finder tool.
3. Select a Contents data test.
4. Select the Case Sensitive verification method to test for all of the text in the table. Select the Find Sub String Case Sensitive verification method to
test for any text item within the table.
How to Test the Destination of a Link?
1. Add an Object Data verification point.
2. With the Object Finder tool, select HTMLLink to test a text-based link.
3. Select a Contents data test to capture the URL of the destination.
4. Select the Case Sensitive verification method to test for the entire URL. Select the Find Sub String Case Sensitive verification method to test for part
of the URL.
How to test an HTML element’s properties?
1. Start recording in Robot.
2. Navigate to the Web page that contains the element to test.
3. Click the Object Properties Verification Point button on the GUI Insert toolbar.
4. Assign a name, wait state, and expected result for the verification point and then click OK.
5. Select the element to test and then click OK.
6. Click OK to insert the verification point.
How to capture the entire window, including the applet?
1. Add an Object Properties verification point.
2. With the Object Finder tool, point to the title bar of the browser window until Window appears in the TestTip.
3. Click OK.
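When you complete a verification point, Robot writes a VP command into the GUI script. The lines below are a hedged sketch of what such a recorded line might look like; they assume the SQABasic WindowVP command with the CompareProperties test and the "VP=...;Wait=..." parameter syntax, and the caption, verification point name, and wait values are placeholders.

Sub Main
    Dim Result As Integer
    'Point to the browser title bar; the caption is a placeholder.
    Window SetContext, "Caption=Applet Demo - Microsoft Internet Explorer", ""
    'Wait=2,30 would retry every 2 seconds for up to 30 seconds (assumed wait-state syntax).
    Result = WindowVP (CompareProperties, "Caption=Applet Demo - Microsoft Internet Explorer", "VP=AppletWindow;Wait=2,30")
End Sub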
How to capture only the properties of the Java applet?
1. Add an Object Properties verification point.
2. With the Object Finder tool, point to the Java applet until JavaWindow appears in the TestTip.
3. Click OK.
How to Record Mouse Movements?
With Dynamic HTML, it is possible to cause a page to change color or to cause text on a page to update simply by moving the mouse over the page. To
capture these mouse movements:
1. Start recording in Robot.
2. Navigate to the page that contains the Dynamic HTML.
3. Press CTRL+SHIFT+R to enter low-level recording mode.
4. Move the pointer over the portion of the page that is affected by the mouse movement.
5. Press CTRL+SHIFT+R to stop low-level recording mode.
6. Insert an Object Properties verification point.
Robot Support for Java.
Rational Robot provides comprehensive support for testing GUI components in both Java applets and standalone Java applications. With its Object Testing
technology, Robot examines the data and properties of Java components. This means that Robot can do the following:
--Determine the names of components in your program, and use those names for object recognition.
--Capture properties of Java components with the Object Properties verification point.
--Capture data in Java components with the Object Data verification point.
To make Java applets and applications testable, you need to run the Java Enabler.
How to run the Java Enabler?
1. Make sure that Robot is closed.
2. Click Start > Programs > Rational product name > Rational Test > Java Enabler.
3. Select one of the available Java enabling types.
4. Select the environments to enable.
5. Click Next.
6. Click Yes to view the log file.
How to verify that the Java extension is loaded?
1. Start Robot.
2. Click Tools > Extension Manager.
3. Verify that Java is selected. If not, select it.
4. To improve the performance of Robot, clear the check boxes of all environments that you do not plan to test.
5. Exit Robot.
How to test a Java component’s data?
1. Start recording in Robot.
2. Open the Java applet or application that you want to test.
3. Navigate to the page that you want to test.
4. Start creating the Object Data verification point.
5. Assign a name, wait state, and expected result for the verification point and then click OK.
6. In the Select Object dialog box, drag the Object Finder tool over the page until the component you want to test appears in the TestTip.
7. Release the mouse button.
8. If the dialog box is still open, click OK.
9. If the Object Data Tests dialog box appears, select the data test to use and click OK.
10. Complete the verification point as usual.
How to test the contents of a Java panel?
1. Start recording in Robot.
2. Open the Java applet or application that you want to test.
3. Navigate to the page that you want to test.
4. Start creating the Object Data verification point.
5. Assign a name, wait state, and expected result for the verification point and then click OK.
6. In the Select Object dialog box, drag the Object Finder tool over the page until JavaPanel appears in the TestTip.
7. Release the mouse button.
8. If the dialog box is still open, click OK.
9. If the Object Data Tests dialog box appears, select the data test to use and click OK.
10. Complete the verification point as usual.
How to test the properties of Java components?
1. Start recording in Robot.
2. Navigate to the page that contains the component you want to test.
3. Start creating the Object Properties verification point as usual.
4. Assign a name, wait state, and expected result for the verification point and then click OK.
5. In the Select Object dialog box, drag the Object Finder tool over the page until the component that you want to test appears in the TestTip.
6. Release the mouse button. If the Select Object dialog box is still open, click OK.
7. Click OK to complete the verification point.
Robot Support for PowerBuilder Applications.
Rational Robot provides comprehensive support for testing applications built with PowerBuilder.
With its Object Testing technology, Robot examines data and properties that are not visible to the user. Robot uses Object-Oriented Recording to recognize a
PowerBuilder object by its internal name.
You can use Robot to test all PowerBuilder and third-party components, including:
. DataStore controls and hidden DataWindows
. ActiveX controls
. RichTextEdit controls
. DataWindows with RichText presentation style
. All properties of a DataWindow computed field, including expression and value
How to create or edit a custom data test?
1. Display the object for which you want to create the data test.
2. In Robot, click Tools > Object Data Test Definition.
3. Click Select to open the Select Object dialog box.
4. Select the object for which you want to create the data test in one of the following ways:
-- Drag the Object Finder tool over the object and release the mouse button. As you move the Object Finder tool over an object, the object type appears in the
yellow TestTip.
-- Click Browse to open the Object List dialog box, select the object from the list, and click OK.
The Object List dialog box shows a hierarchical list of all objects on the Windows desktop, including hidden objects.
5. If the Select Object dialog box is still open, click OK to close it. The object classification of the selected object and its data tests appear in the Object Data Test
Definition dialog box.
If the object is Unknown (not defined), the Define Object dialog box appears. Select an object type and click OK to open the Object Data Test Definition dialog box.
6. Do one of the following to display the Create/Edit Object Data Test dialog box:
-- To create a new test, type a name (50 characters maximum) in the Data test name box and click New.
-- To edit a custom test, select the test from the list and click Edit.
-- To copy a test and edit the copy, select the test and click Copy. Type the new name and click OK. Then, click Edit.
7. Select a property from the Property to test list. This property is the one whose values you want to capture in the data test.
8. Select the Column check box to add parameters for the vertical axis. Select the Row check box to add parameters for the horizontal axis.
9. Type an expression in the From and To boxes, or click the Expression button to the right of each box to build the expression.
An expression is a single value or property, or a combination of values, properties, and operators.
10. In the Using box (in the Create/Edit Object Data Test dialog box), type a property or select it from the list to further define the property that you are capturing
and testing.
The Using box specifies what property Robot will modify to affect its iteration. For example, to iterate from row 0 to row Rows-1, Robot will set the Row property.
11. Select the check boxes under Additional parameters as needed.
12. In the Description box, type a description that indicates what the data test does.
13. Optionally, click Test to do the following:
--Verify the syntax of the data test before you save it.
-- If the syntax is correct, watch Robot perform the data test on the selected object.
When the test has ended, Robot opens a dialog box with the captured data. Click OK to close the dialog box.
14. Click OK to save the test and automatically verify it.
If the syntax of the expression is incorrect, the incorrect area is highlighted so you can correct it and then resave the test.
How to copy, rename, or delete a data test?
1. Click Tools > Object Data Test Definition.
2. Select the data test.
3. Do one of the following:
-- To copy the test, click Copy. Type a new name (50 characters maximum) and click OK.
-- To rename the test, click Rename. Type a new name (50 characters maximum) and click OK.
-- To delete the test, click Delete. Click OK to confirm the deletion.
What is Rational Suite?
Rational Suite is a set of tools for every member of the software development team. It contains the following tools:

• Rational Unified Process


• Rational RequisitePro
• Rational ClearQuest
• Rational SoDA
• Rational ClearCase LT
• Rational TestManager
• Rational ProjectConsole
• Rational Rose
• Rational PureCoverage
• Rational Purify
• Rational Quantify
• Rational Robot
• Rational TestFactory
• Rational Process Workbench
• Rational NetDeploy
• Rational SiteLoad

Rational tools are sold with the following packages:

• Team Unifying Platform - Rational Unified Process, Rational RequisitePro, Rational ClearQuest, Rational SoDA, Rational ClearCase LT, Rational
TestManager, and Rational ProjectConsole.
• Analyst Studio - Team Unifying Platform, and Rational Rose.
• DevelopmentStudio - Team Unifying Platform, Rational Rose, Rational PureCoverage, Rational Purify, and Rational Quantify.
• TestStudio - Team Unifying Platform, Rational PureCoverage, Rational Purify, Rational Quantify, Rational Robot, and Rational TestFactory.
• Enterprise - Team Unifying Platform, Rational Rose, Rational PureCoverage, Rational Purify, Rational Quantify, Rational Robot, Rational TestFactory,
and Process Workbench.
• Content Studio - Team Unifying Platform, Rational NetDeploy, and Rational SiteLoad.

What are the software development best practices suggested by Rational Suite?

• Develop software iteratively. Iterative development means analyzing, designing, and implementing incremental subsets of the system over the project
lifecycle. The project team plans, develops, and tests an identified subset of system functionality for each iteration. The team develops the next
increment, integrates it with the first iteration, and so on. Each iteration results in either an internal or external release and moves you closer to the goal
of delivering a product that meets its requirements.
• Manage requirements. A requirement is one criterion for a project's success. Your project requirements answer questions like "What do customers
want?" and "What new features must we absolutely ship in the next version?" Most software development teams work with requirements. On smaller,
less formal projects, requirements might be kept in text files or e-mail messages. Other projects can use more formal ways of recording and maintaining
requirements.
• Use component-based architectures. Software architecture is the fundamental framework on which you construct a software project. When you define
an architecture, you design a system's structural elements and their behavior, and you decide how these elements fit into progressively larger
subsystems.
• Model software visually. Visual modeling helps you manage software design complexity. At its simplest level, visual modeling means creating a
graphical blueprint of your system's architecture. Visual models can also help you detect inconsistencies between requirements, designs, and
implementations. They help you evaluate your system's architecture, ensuring sound design.
• Continuously verify quality. Verifying software quality means testing what has been built against defined requirements. Testing includes verifying that
the system delivers required functionality and verifying reliability and its ability to perform under load.
• Manage change. It is important to manage change in a trackable, repeatable, and predictable way. Change management includes facilitating parallel
development, tracking and handling enhancement and change requests, defining repeatable development processes, and reliably reproducing software
builds.

How to register a Rational project in Rational Administrator?

1. Click Start > Programs > Rational Suite > Rational Administrator.
2. Right-click on Projects. From the shortcut menu that appears, click Register Existing Project.
3. In the Select Rational Project dialog box, browse to the Rational project file (.rsp), then click Open. Your project name should be added below Projects.
4. Right-click on your project name, then click Connect. All assets and development information associated with your project will appear.

What are the databases used by a Rational project to store project information?

• ClearQuest database - used to store project's change requests (defects and enhancement requests).
• RequisitePro database - used to store project's business and system requirements.
• Rational Test datastore - used to store project's testing information (test assets, logs, and reports).
• Rational Rose - used to store project's visual models.

What Is the Rational Unified Process (RUP)?


RUP is a process framework for developing software that helps you:

• Coordinate the developmental responsibilities of the entire development team.


• Produce high-quality software.
• Meet the needs of your users.
• Work within a set schedule and budget.
• Leverage new technologies.

What Are Key Concepts in the Rational Unified Process

• A discipline shows all activities that produce a particular set of software assets. RUP describes development disciplines at an overview level - including
a summary of all roles, workflows, activities, and artifacts that are involved.
• A role is defined as the behavior and responsibilities of an individual or a group of individuals on a project team. One person can act in the capacity of
several roles over the course of a project. Conversely, many people can act in the capacity of a single role in a project. Roles are responsible for
creating artifacts.
• A workflow is the sequence of activities that workers perform toward a common goal. A workflow diagram serves as a high-level map for a set of related
activities. The arrows between activities represent the typical, though not required, flow of work between activities.
• An activity is a unit of work that is performed by a particular role. It is a set of ordered steps, like a recipe, for creating an artifact.
• An artifact is something a role produces as the result of performing an activity. In RUP, the artifacts produced in one activity are often used as input into
other activities. An artifact can be small or large, simple or complex, formal or informal. Examples of artifacts are: a test plan, a vision document, a model
of a system's architecture, a script that automates builds, or application code.

How to use TestManager to organize a test plan?


In TestManager, you can create a test plan to store information about the purpose and goals of testing within a Rational project, and the strategies to implement
and run testing. You can have one or more test plans in a project. A test plan can include properties such as the test plan name, configurations associated with the
test plan, and a time frame for when a test plan must pass.
A test plan contains test case folders, which in turn contain test cases. A test case is a testable and verifiable behavior in a system. You can organize the test plan
and test case folders in the way that makes sense for your organization. For example, you can have a test case folder for each tester in your department, for each
phase of testing, or for each use case.
How to define a test case?
A test case describes the testable and verifiable behavior in a system. A test case can also describe the extent to which you will test an area of the application.
Existing project artifacts, such as requirements, provide information about the application and can be used as test inputs for your test cases. TestManager
provides built-in test input types, but almost any artifact can be used as a test input.
For example, here's what the following artifacts offer as test inputs:

• Requirements describe a condition or capability to which a system must conform.


• Visual models provide a graphic representation of a system's structure and interrelationships

Web Testing Checklist about Usability

Navigation
1. Is terminology consistent?
2. Are navigation buttons consistently located?
3. Is navigation to the correct/intended destination?
4. Is the flow to destination (page to page) logical?
5. Is the flow within a page top-to-bottom, left-to-right?
6. Is there a logical way to return?
7. Are the business steps within the process clear or mapped?
8. Are navigation standards followed?

Ease of Use
1. Are help facilities provided as appropriate?
2. Are selection options clear?
3. Are ADA standards followed?
4. Is the terminology appropriate to the intended audience?
5. Is scrolling minimal, and are screens resizable?
6. Do menus load first?
7. Do graphics have reasonable load times?
8. Are there multiple paths through site (search options) that are user chosen?
9. Are messages understandable?
10. Are confirmation messages available as appropriate?

Presentation of Information
1. Are fonts consistent within functionality?
2. Are the company display standards followed?
- Logos
- Font size
- Colors
- Scrolling
- Object use
3. Are legal requirements met?
4. Is content sequenced properly?
5. Are web-based colors used?
6. Is there appropriate use of white space?
7. Are tools provided (as needed) in order to access the information?
8. Are attachments provided in a static format?
9. Is spelling and grammar correct?
10. Are alternative presentation options available (for limited browsers or performance issues)?

How to interpret/Use Info


1. Is terminology appropriate to the intended audience?
2. Are clear instructions provided?
3. Are there help facilities?
4. Are there appropriate external links?
5. Is expanded information provided on services and products? (why and how)
6. Are multiple views/layouts available?
Web Testing Checklist about Compatibility and Portability

Overall
1. Are requirements driven by business needs and not technology?
Audience
1. Has the audience been defined?
2. Is there a process for identifying the audience?
3. Is the process for identifying the audience current?
4. Is the process reviewed periodically?
5. Is there appropriate use of audience segmentation?
6. Is the application compatible with the audience experience level?
7. Where possible, has the audience readiness been ensured?
8. Are text version and/or upgrade links present?

Testing Process
1. Does the testing process include appropriate verifications? (e.g., reviews, inspections and walkthroughs)
2. Is the testing environment compatible with the operating systems of the audience?
3. Does the testing process and environment legitimately simulate the real world?

Operating Systems Environment/Platform


1. Have the operating environments and platforms been defined?
2. Have the most critical platforms been identified?
3. Have audience expectations been properly managed?
4. Have the business users/marketing been adequately prepared for what will be tested?
5. Have sign-offs been obtained?

Risk
1. Has the risk tolerance been assessed to identify the vital few platforms to test?

Hardware
1. Is the test hardware compatible with all screen types, sizes, resolution of the audience?
2. Is the test hardware compatible with all means of access, modems, etc of the audience?
3. Is the test hardware compatible with all languages of the audience?
4. Is the test hardware compatible with all databases of the audience?
5. Does the test hardware contain the compatible plug-ins and DLLs of the audience?

General
1. Is the application compatible with standards and conventions of the audience?
2. Is the application compatible with copyright laws and licenses?
Access Control
1. Is there a defined standard for login names/passwords?
2. Are good aging procedures in place for passwords?
3. Are users locked out after a given number of password failures?
4. Is there a link for help (e.g., forgotten passwords)?
5. Is there a process for password administration?
6. Have authorization levels been defined?
7. Is management sign-off in place for authorizations?

Disaster Recovery
1. Have service levels been defined (e.g., how long should recovery take)?
2. Are fail-over solutions needed?
3. Is there a way to reroute to another server in the event of a site crash?
4. Are executables, data, and content backed up on a defined interval appropriate for the level of risk?
5. Are disaster recovery process & procedures defined in writing? If so, are they current?
6. Have recovery procedures been tested?
7. Are site assets adequately insured?
8. Is a third-party "hot site" available for emergency recovery?
9. Has a Business Contingency Plan been developed to maintain the business while the site is being restored?
10. Have all levels in organization gone through the needed training & drills?
11. Do support notification procedures exist & are they followed?
12. Do support notification procedures support a 24/7 operation?
13. Have criteria been defined to evaluate recovery completion/correctness?

Firewalls
1. Was the software installed correctly?
2. Are firewalls installed at adequate levels in the organization and architecture? (e.g., corporate data, human resources data, customer transaction files, etc.)
3. Have firewalls been tested? (e.g., to allow & deny access).
4. Is the security administrator aware of known firewall defects?
5. Is there a link to access control?
6. Are firewalls installed in effective locations in the architecture? (e.g., proxy servers, data servers, etc.)

Proxy Servers
1. Have undesirable/unauthorized external sites been defined and screened out? (e.g., gaming sites)
2. Is traffic logged?
3. Is user access defined?

Privacy
1. Is sensitive data restricted from being viewed by unauthorized users?
2. Is proprietary content copyrighted?
3. Is information about company employees limited on public web site?
4. Is the privacy policy communicated to users and customers?
5. Is there adequate legal support and accountability of privacy practices?
Data Security
1. Are data inputs adequately filtered?
2. Are data access privileges identified? (e.g., read, write, update and query)
3. Are data access privileges enforced?
4. Have data backup and restore processes been defined?
5. Have data backup and restore processes been tested?
6. Have file permissions been established?
7. Have file permissions been tested?
8. Have sensitive and critical data been allocated to secure locations?
9. Have data archival and retrieval procedures been defined?
10. Have data archival and retrieval procedures been tested?

Monitoring
1. Are network monitoring tools in place?
2. Are network monitoring tools working effectively?
3. Do monitors detect
- Network time-outs?
- Network concurrent usage?
- IP spoofing?
4. Is personnel access control monitored?
5. Is personnel internet activity monitored?
- Sites visited
- Transactions created
- Links accessed

Security Administration
1. Have security administration procedures been defined?
2. Is there a way to verify that security administration procedures are followed?
3. Are security audits performed?
4. Is there a person or team responsible for security administration?
5. Are checks & balances in place?
6. Is there an adequate backup for the security administrator?

Encryption
1. Are encryption systems/levels defined?
2. Is there a standard of what is to be encrypted?
3. Are customers compatible in terms of encryption levels and protocols?
4. Are encryption techniques for transactions being used for secured transactions?
- Secure socket layer (SSL)
- Virtual Private Networks (VPNs)
5. Have the encryption processes and standards been documented?

Viruses
1. Are virus detection tools in place?
2. Have the virus data files been updated on a current basis?
3. Are virus updates scheduled?
4. Is a response procedure for virus attacks in place?
5. Are notification of updates to virus files obtained from anti-virus software vendor?
6. Does the security administrator maintain an informational partnership with the anti-virus software vendor?
7. Does the security administrator subscribe to early warning e-mail services? (e.g., www.foo.org or www.bar.net)
8. Has a key contact been defined for the notification of a virus presence?
9. Has an automated response been developed to respond to a virus presence?
10. Is the communication & training of virus prevention and response procedures to users adequate?
Web Testing Checklist about Performance (1)

Tools
1. Has a load testing tool been identified?
2. Is the tool compatible with the environment?
3. Has licensing been identified?
4. Have external and internal support been identified?
5. Have employees been trained?
Number of Users
1. Have the maximum number of users been identified?
2. Has the complexity of the system been analyzed?
3. Has the user profile been identified?
4. Have user peaks been identified?
5. Have languages been identified (e.g., English, Spanish, French) for global sites?
6. Have session lengths been identified by number of users?
7. Has the number of user configurations been identified?

Expectations/Requirements
1. Has the response time been identified?
2. Has the client response time been identified?
3. Has the expected vendor response time been identified?
4. Have the maximum and acceptable response times been defined?
5. Has response time been met at the various thresholds? (a simple measurement sketch follows this list)
6. Has the break point been identified for capacity planning?
7. Do you know what caused the crash if the application was taken to the breaking point?
8. How many transactions for a given period of time have been identified (bottlenecks)?
9. Have availability of service levels been defined?
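
A dedicated load-testing tool is the right instrument here, but a minimal probe like the following can sanity-check response times against the agreed thresholds; the URL and user count are assumptions:

    # Fire N concurrent GET requests and report worst/average response time.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://www.example.com/"   # placeholder target
    USERS = 20                         # simulated concurrent users

    def timed_get(_):
        start = time.monotonic()
        with urlopen(URL, timeout=30) as resp:
            resp.read()
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = list(pool.map(timed_get, range(USERS)))

    print("max %.2fs, avg %.2fs" % (max(times), sum(times) / len(times)))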

Architecture
1. Has the database capacity been identified?
2. Has anticipated growth data been obtained?
3. Is the database self-contained?
4. Is the system architecture defined?
" Tiers
" Servers
" Network
5. Has the anticipated volume for initial test been defined - with allowance for future growth?
6. Has plan for vertical growth been identified?
7. Have the various environments been created?
8. Has historical experience with the databases and equipment been documented?
9. Has the current system diagram been developed?
10. Is load balancing available?
11. Have the types of programming languages been identified?
12. Can back end processes be accessed?
Web Testing Checklist about Performance (2)
Resources
1. Are people with skill sets available?
2. Have the following skill sets been acquired?
" DBA
" Doc
" BA
" QA
" Tool Experts
" Internal and external support
" Project manager
" Training

Time Frame
1. When will the application be ready for performance testing?
2. How much time is available for performance testing?
3. How many iterations of testing will take place?

Test Environment
1. Does the test environment exist?
2. Is the environment self-contained?
3. Can one iteration of testing be performed in production?
4. Is a copy of production data available for testing?
5. Are end-users available for testing and analysis?
6. Will the test use virtual users?
7. Does the test environment mirror production?
8. Have the differences been documented? (constraints)
9. Is the test environment still available after production release?
10. Have version control processes been used to ensure the correct versions of applications and data in the test environment?
11. Have the times been identified when you will receive the test data (globally) time frame?
12. Are there considerations for fail-over recovery? Disaster recovery?
13. Are replacement servers available?
14. Have back-up procedures been written?
Web Testing Checklist about Correctness (1)

Data
1. Does the application write to the database properly?
2. Does the application read from the database correctly? (a write/read round-trip sketch follows this list)
3. Is transient data retained?
4. Does the application follow concurrency rules?
5. Are text fields storing information correctly?
6. Is inventory or out of stock being tracked properly?
7. Is there redundant info within the web site?
8. Is forward/backward caching working correctly?
9. Are requirements for timing out of session met?
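
A hedged example of the write/read round-trip behind items 1 and 2, using an in-memory SQLite table as a stand-in for the application's real database (the schema is invented for the sketch):

    # What the application writes must come back unchanged when read.
    import sqlite3
    import unittest

    class WriteReadRoundTrip(unittest.TestCase):
        def test_text_field_stored_correctly(self):
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE items (name TEXT)")
            original = "O'Brien & Sons - 100% \"quoted\""   # awkward characters
            conn.execute("INSERT INTO items (name) VALUES (?)", (original,))
            (stored,) = conn.execute("SELECT name FROM items").fetchone()
            self.assertEqual(stored, original)
            conn.close()

    if __name__ == "__main__":
        unittest.main()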

Presentation
1. Are the field data properly displayed?
2. Is the spelling correct?
3. Are the page layouts and format based on requirements?
(e.g., visual highlighting, etc.)
4. Does the URL show you are on a secure page?
5. Is the tab order correct on all screens?
6. Do the interfaces meet specific visual standards (internal)?
7. Do the interfaces meet current GUI standards?
8. Do the print functions work correctly?

Navigation
1. Can you navigate to the links correctly?
2. Do Email links work correctly?

Functionality
1. Is the application recording the number of hits correctly?
2. Are calculations correct?
3. Are edit rules being consistently applied?
4. Is the site listed on search engines properly?
5. Is the help information correct?
6. Do internal searches return correct results?
7. Are follow-up confirmations sent correctly?
8. Are errors being handled correctly?
9. Does the application properly interface with other applications?

Environment
1. Are user sessions terminated properly?
2. Is response time adequate based upon specifications?

Check list for Validation Testing

• Is a complete software requirements specification available?
• Are requirements bounded?
• Have equivalence classes been defined to exercise input?
• Have boundary tests been derived to exercise the software at its boundaries? (a boundary-test sketch follows this list)
• Have test suites been developed to validate each software function?
• Have test suites been developed to validate all data structures?
• Have test suites been developed to assess software performance?
• Have test suites been developed to test software behavior?
• Have test suites been developed to fully exercise the user interface?
• Have test suites been developed to exercise all error handling?
• Are use-cases available to perform scenario testing?
• Is statistical use testing (SEPA, 5/e, Chapter 26) being considered as an element of validation?
• Have tests been developed to exercise the software against procedures defined in user documentation and help facilities?
• Have error reporting and correction mechanisms been established?
• Has a deficiency list been created?
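
A boundary-test sketch for the checklist item above, assuming a hypothetical input that requirements bound to the range 1..99:

    # Exercise the values on and immediately around each boundary.
    import unittest

    def accept_quantity(qty):
        """Invented validator under test: quantities of 1..99 are legal."""
        return 1 <= qty <= 99

    class BoundaryTests(unittest.TestCase):
        def test_boundaries(self):
            cases = [(0, False), (1, True), (2, True),
                     (98, True), (99, True), (100, False)]
            for qty, expected in cases:
                self.assertEqual(accept_quantity(qty), expected, msg="qty=%d" % qty)

    if __name__ == "__main__":
        unittest.main()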

Check list for Conducting Unit Testing

• Is the number of input parameters equal to number of arguments?
• Do parameter and argument attributes match?
• Do parameter and argument units system match?
• Is the number of arguments transmitted to called modules equal to number of parameters?
• Are the attributes of arguments transmitted to called modules equal to attributes of parameters?
• Is the units system of arguments transmitted to called modules equal to units system of parameters?
• Are the number of attributes and the order of arguments to built-in functions correct?
• Are any references to parameters not associated with current point of entry?
• Have input-only arguments been altered?
• Are global variable definitions consistent across modules?
• Are constraints passed as arguments?
• When a module performs external I/O, additional interface tests must be conducted:
• File attributes correct?
• OPEN/CLOSE statements correct?
• Format specification matches I/O statement?
• Buffer size matches record size?
• Files opened before use?
• End-of-file conditions handled?
• Any textual errors in output information?
• Improper or inconsistent typing
• Erroneous initialization or default values
• Incorrect (misspelled or truncated) variable names
• Inconsistent data types
• Underflow, overflow and addressing exceptions
• Has the component interface been fully tested?
• Have local data structures been exercised at their boundaries?
• Has the cyclomatic complexity of the module been determined?
• Have all independent basis paths been tested?
• Have all loops been tested appropriately?
• Have data flow paths been tested?
• Have all error handling paths been tested? (a sketch exercising an error path and loop boundaries follows this list)
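
A small illustration of the last two items, with an invented function so the error path and the loop's boundary cases can be forced explicitly:

    # One test drives the error-handling path; the other runs the loop
    # at its boundaries (single element, then a typical case).
    import unittest

    def average(values):
        if not values:
            raise ValueError("average() of an empty sequence")
        return sum(values) / len(values)

    class AverageTests(unittest.TestCase):
        def test_error_path(self):
            with self.assertRaises(ValueError):
                average([])

        def test_loop_boundaries(self):
            self.assertEqual(average([5]), 5)        # loop body runs once
            self.assertEqual(average([1, 2, 3]), 2)  # typical case

    if __name__ == "__main__":
        unittest.main()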

Check list about General (1)

General

• Pages fit within the resolution (800x600).
• Design works with liquid tables to fill the user's window size.
• Separate print versions provided for long documents (liquid tables may negate this necessity). Accommodates A4 size paper.
• Site doesn't use frames.
• Complex tables are minimized.
• Newer technologies are generally avoided for 1-2 years from release, or if used alternative traditional forms of content are easily available.

Home vs. Subsequent Pages & Sections

• Home page logo is larger and more centrally placed than on other pages.
• Home page includes navigation, summary of news/promotions, and a search feature.
• Home page answers: Where am I; What does this site do; How do I find what I want?
• Larger navigation space on home page, smaller on subsequent pages.
• Logo is present and consistently placed on all subsequent pages (towards upper left hand corner).
• "Home" link is present on all subsequent pages (but not home page).
• If subsites are present, each has a home page, and includes a link back to the global home page.

Navigation

• Navigation supports user scenarios gathered in the User Task Assessment phase (prior to design).
• Users can see all levels of navigation leading to any page.
• Breadcrumb navigation is present (for larger and some smaller sites).
• Site uses DHTML pop-up to show alternative destinations for that navigation level.
• Navigation can be easily learned.
• Navigation is consistently placed and changes in response to rollover or selection.
• Navigation is available when needed (especially when the user is finished doing something).
• Supplemental navigation is offered appropriately (links on each page, a site map/index, a search engine).
• Navigation uses visual hierarchies like movement, color, position, size, etc., to differentiate it from other page elements.
• Navigation uses precise, descriptive labels in the user's language. Icon navigation is accompanied by text descriptors.
• Navigation answers: Where am I (relative to site structure); Where have I been (obvious visited links); Where can I go (embedded, structural, and
associative links)?
• Redundant navigation is avoided.

Functional Items

• Terms like "previous/back" and "next" are replaced by more descriptive labels indicating the information to be found.
• Pull-down menus include a go button.
• Logins are brief.
• Forms are short and on one page (or demonstrate step X of Y, and why collecting a larger amount of data is important and how the user will benefit).
• Documentation pages are searchable and have an abundance of examples. Instructions are task-oriented and step-by-step. A short conceptual model of
the system is available, including a diagram that explains how the different parts work together. Terms or difficult concepts are linked to a glossary.

Linking

• Links are underlined.
• Size of large pages and multi-media files is indicated next to the link, with estimated download times.
• Important links are above the fold.
• Links to related information appear at bottom of content or above/near the top.
• Linked titles make sense out of context.
• If site requires registration or subscription, provides special URLs for free linking. Indicates the pages are freely linkable, and includes an easy method to discover the URL.
• If site is running an ad, it links to a page with the relevant content, not the corporate home page.
• Keeps linked phrases short to aid scanning (2-4 words).
• Links on meaningful words and phrases. Avoids phrases like, "click here."
• Includes a brief description of what the user should expect on the linked page. In code:
• Uses relative links when linking between pages in a site. Uses absolute links to pages on unrelated sites.
• Uses link titles in the code for IE users (preferably less than 60 characters, no more than 80).

Search Capabilities

• A search feature appears on every page (exceptions include pop-up forms and the like).
• Search box is wide to allow for visible search parameters.
• Advanced Search, if included, is named just that (to scare off novices).
• Search system performs a spelling check and offers synonym expansion.
• Site avoids scoped searching. If included it indicates scope at top of both query and results pages, and additionally offers an automatic extended site
search immediately with the same parameters.
• Results do not include a visible scoring system.
• Eliminates duplicate occurrences of the same results (e.g., foo.com/bar vs. foo.com/bar/ vs. foo.com/bar/index.html). A normalization sketch follows this list.
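
One plausible way (an assumption, not the only approach) to collapse the duplicates named above is to normalize each result URL before comparison:

    # foo.com/bar, foo.com/bar/ and foo.com/bar/index.html become one key.
    from urllib.parse import urlsplit, urlunsplit

    def normalize(url):
        parts = urlsplit(url)
        path = parts.path
        if path.endswith("/index.html"):
            path = path[:-len("index.html")]
        if not path.endswith("/"):
            path += "/"
        return urlunsplit((parts.scheme, parts.netloc.lower(), path, parts.query, ""))

    urls = ["http://foo.com/bar", "http://foo.com/bar/", "http://foo.com/bar/index.html"]
    assert len({normalize(u) for u in urls}) == 1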

Page Design

• Content accounts for 50% to 80% of a page's design (what's left over after logos, navigation, non-content imagery, ads, white space, footers, etc.).
• Page elements are consistent, and important information is above the fold.
• Pages load in 10 seconds or less on the user's bandwidth.
• Pages degrade adequately on older browsers.
• Text is over plain background, and there is high contrast between the two.
• Link styles are minimal (generally one each of link, visited, hover, and active states). Additional link styles are used only if necessary.
• Specified the layout of any liquid areas (usually content) in terms of percentages.

Fonts and Graphics

• Graphics are properly optimized.
• Text in graphics is generally avoided.
• Preferred fonts are used: Verdana, Arial, Geneva, sans-serif.
• Fonts, when enlarged, don't destroy layout.
• Images are reused rather than rotated.
• Page still works with graphics turned off.
• Graphics included are necessary to support the message.
• Fonts are large enough and scalable.
• Browser chrome is removed from screen shots.
• Animation and 3D graphics are generally avoided.

Content Design

• Uses bullets, lists, very short paragraphs, etc. to make content scannable.
• Articles are structured with scannable nested headings.
• Content is formatted in chunks targeted to user interest, not just broken into multiple pages.
• No moving text; most is left-justified; sans-serif for small text; no upper-case sentences/paragraphs; italics and bold are used sparingly.
• Dates follow the international format (year-month-day) or are written out (August 30, 2001); both forms are shown in the sketch below.
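
Both accepted date forms, shown with Python's standard datetime module (the sample date is the one used in the checklist):

    import datetime

    d = datetime.date(2001, 8, 30)
    print(d.isoformat())             # 2001-08-30  (international year-month-day)
    print(d.strftime("%B %d, %Y"))   # August 30, 2001  (written out)
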
Writing

• Writing is brief, concise, and well edited.
• Information has persistent value.
• Avoids vanity pages.
• Starts each page with the conclusion, and only gradually adds the detail supporting that conclusion.
• One idea per paragraph.
• Uses simple sentence structures and words.
• Gives users just the facts. Uses humor with caution.
• Uses objective language.

Check list about General (4)

Folder Structure

• Folder names are all lower-case and follow the alpha-numeric rules found under "Naming Conventions" below.
• Segmented the site sections according to:
- Root directory (the "images" folder usually goes at the top level within the root folder)
- Sub-directories (usually one for each area of the site, plus an images folder at the top level within the root directory)
- Images are restricted to one folder ("images") at the top level within the root directory (for global images); if a great number of images are going to be used only section-specifically, those are stored in local "images" folders

Naming Conventions

• Uses the client's preferred naming method. If possible, uses longer descriptive names (like "content_design.htm" vs. "contdesi.htm").
• Uses alphanumeric characters (a-z, 0-9) and - (dash) or _ (underscore)
• Doesn't use spaces in file names.
• Avoids characters which require a shift key to create, or any punctuation other than a period.
• Uses only lower-case letters.
• Ends filenames in .htm (not .html). A filename validation sketch follows this list.
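
A compact validation sketch for the naming rules above; the pattern is one reading of those rules and may need tightening for a given client:

    # Lower-case alphanumerics plus dash/underscore, ending in .htm
    import re

    NAME_RE = re.compile(r"^[a-z0-9_-]+\.htm$")

    for name in ["content_design.htm", "ContDesi.HTM", "my page.htm", "notes.html"]:
        print(name, "->", "ok" if NAME_RE.match(name) else "rejected")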

Multimedia

• Any files taking longer than 10 seconds to download include a size warning (> 50kb on a 56kbps modem, > 200kb on fast connections). Also includes the running time of video clips or animations, and indicates any non-standard formats. (The 10-second arithmetic is sketched after this list.)
• Includes a short summary (and a still clip) of the linked object.
• If appropriate to the content, includes links to helper applications, like Adobe Acrobat Reader if the file is a .pdf.
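
The 10-second rule above follows from simple arithmetic, sketched here (sizes in kilobytes, link speed in kilobits per second):

    # 50 KB over a 56 kbps modem: 50 * 8 / 56 = ~7 s of raw transfer,
    # so anything much larger deserves a size warning.
    def seconds_to_download(size_kb, link_kbps):
        return size_kb * 8 / link_kbps   # KB -> kilobits, then divide by kbps

    print(round(seconds_to_download(50, 56), 1))    # ~7.1
    print(round(seconds_to_download(200, 56), 1))   # ~28.6 -> warn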

Page Titles

• Follows title strategy ... Page Content Descriptor : Site Name, Site section (E.g.: Content Implementation Guidelines : CDG Solutions, Usability Process )
• Tries to use only two to six words, and makes their meaning clear when taken out of context.
• The first word(s) are important information-carrying one(s).
• Avoids making several page titles start with the same word.

Headlines

• Describes the article in terms that relate to the user.
• Uses plain language.
• Avoids enticing teasers that don't describe.

CSS

• Uses CSS to format content appearance (as supported by browsers), rather than older HTML methods.
• Uses browser detection to serve the visitor a CSS file that is appropriate for their browser/platform combination.
• Uses linked style sheets.
Documentation and Help Pages

• When using screen shots, browser chrome was cropped out.
• Hired a professional (a technical writer) to write help sections.
• Documentation pages are searchable.
• Documentation section has an abundance of examples.
• Instructions are task-oriented and step-by-step.
• A short conceptual model of the system is provided, including a diagram that explains how the different parts work together.
• Terms or difficult concepts are linked to a glossary.

Content Management
• Site has procedures in place to remove outdated information immediately (such as calendar events which have passed).
Checklist: Graphical User Interface

The original five-column table (Test Type, Description, Purpose, Considerations, Variations) is reconstructed below as one entry per test type.

Transfer Functions
- Description: Navigate from each window to all possible windows.
- Purpose: Test interrelated processing between windows.
- Considerations: All sequences? Important combinations? Negative - no transfers.
- Variations: Menu Bar - Mouseclick; RMB; Toolbar; Buttons - Push; Buttons - Hot Key; Buttons - Keyboard; Menu Bar - Hot Keys; Menu Bar - Keyboard.

Data Conditions for Transfer Functions
- Description: Test transfers with general (record-level) data conditions.
- Purpose: Test data row retrieval and transfer functions using data windows.
- Considerations: Different for list windows vs. one-record display.
- Variations: List window with no data; list window with one record (row); list window >1 row - last row; list window >1 row - not first or last row; one-row display window.

Verify Window Display Data Selection
- Description: Verify inquiry data displays.
- Purpose: Tests stored procedure/GUI retrieval of data.
- Variations: Select inquiry entity in list window (not from list); lists of columns; single row display; DropDownListBox - contents; DropDownListBox - retrieval; specific data retrieval conditions (max, null, etc.).

Field Level Data Entry
- Description: Test data entry for a single column.
- Purpose: Test GUI field edits.
- Considerations: PBEdit040's within data windows.
- Variations: Field edit formats; required field - no data; maximum data length; valid value; invalid value; invalid data format. (A field-edit test sketch follows this checklist.)

Row Data Maintenance
- Description: Test data row handling from GUI to database.
- Purpose: Test stored procedure/GUI add/change/delete functions.
- Considerations: Do an inquiry after the update to verify the database update (delete and add).
- Variations: New; change to non-key field; change to key field; delete.

Application Window Controls
- Description: Test buttons, scroll bars and other window control types.
- Purpose: Test GUI processing functions.
- Considerations: Controls which do transfers are covered under Transfer Functions; Retrieve or OK buttons that retrieve need a follow-up inquiry to check the retrieval; Link, Unlink, Change and Delete need a follow-up inquiry to check database updates; New is tested as field data entry.
- Variations: Transfer buttons; OK; miscellaneous; CLOSE/CANCEL; RETRIEVE; database updates (LINK, UNLINK, CHANGE, DELETE); data entry (NEW); radio buttons; scroll bars (vertical/horizontal).

Standard Window Controls/Functions
- Variations: Window control menu (Max, Min); print functions (Print, Printer Setup); edit functions (Cut, Copy, Paste); window functions (Previous Window, Close All, Open Window List, Tile, Layer, Cascade).

Application HELP
- Variations: Microhelp; balloon notes; Help - Index; Help - Table of Contents; Help - Jump Words; Help - Text.

Miscellaneous
- Variations: Job status; online report(s); informational windows - application content specific; informational windows - button; fatal application errors.
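
A sketch of the "Field Level Data Entry" variations, run against an invented field edit (required, maximum length, fixed format) purely for illustration:

    # Each assertion maps to one variation: no data, maximum data length,
    # valid value, and invalid data format.
    import re
    import unittest

    def validate_part_number(text, max_len=10):
        """Invented field edit: required, <= max_len chars, format AAA-9999."""
        if not text:
            return "required"
        if len(text) > max_len:
            return "too long"
        if not re.fullmatch(r"[A-Z]{3}-\d{4}", text):
            return "bad format"
        return "ok"

    class FieldEditTests(unittest.TestCase):
        def test_variations(self):
            self.assertEqual(validate_part_number(""), "required")
            self.assertEqual(validate_part_number("A" * 11), "too long")
            self.assertEqual(validate_part_number("ABC-1234"), "ok")
            self.assertEqual(validate_part_number("abc-1234"), "bad format")

    if __name__ == "__main__":
        unittest.main()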
