1. White Box
Also called ‘Structural Testing’ or ‘Glass Box Testing’, white box testing exercises the code with the system specs in mind. The inner workings of the program are considered, so it is typically performed by developers.
• Mutation Testing
A number of mutants of the program are created with minor changes, and none of their results should coincide with the result of the original program for the same test case.
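The idea above can be sketched in a few lines. This is a hedged illustration, not a real mutation-testing tool (tools such as mutmut generate mutants automatically); the function `is_adult` and its hand-made mutant are hypothetical examples.

```python
# A hand-crafted "mutant" of a function: one operator is changed, and a
# good test suite must contain at least one case that distinguishes
# (i.e., "kills") the mutant.

def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: '>=' changed to '>'
    return age > 18

test_cases = [17, 18, 19]

# The mutant is killed if any test case yields a different result.
killed = any(is_adult(a) != is_adult_mutant(a) for a in test_cases)
print("mutant killed:", killed)   # the boundary case 18 kills the mutant
```

If the test suite contained only 17 and 19, the mutant would survive, which signals that the tests are too weak.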
3. Black Box
Also called ‘Functional Testing’ as it concentrates on testing of the functionality rather than the internal details of code.
Test cases are designed based on the task descriptions.
• Comparison Testing
Test case results are compared with the results of a test oracle.
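Comparison testing can be sketched by using a trusted implementation as the oracle. In this hedged example, Python's built-in sorted() serves as the oracle, and `my_sort` is a hypothetical implementation under test:

```python
# Comparison testing sketch: random inputs are fed to both the
# implementation under test and a trusted oracle, and the outputs are
# compared.

import random

def my_sort(items):                 # implementation under test (insertion sort)
    out = list(items)
    for i in range(len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

for _ in range(100):
    data = [random.randint(0, 99) for _ in range(20)]
    # sorted() acts as the test oracle.
    assert my_sort(data) == sorted(data), f"mismatch for {data}"
print("all comparisons against the oracle passed")
```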
Gray Box Testing : Similar to black box testing, but the test cases, risk assessments, and test methods involved in gray box testing are developed based on knowledge of the internal data and flow structures.
Levels of Testing
1. Unit Testing.
• Unit Testing is primarily carried out by the developers themselves.
• Deals with the functional correctness and completeness of individual program units.
• White box testing methods are employed.
2. Integration Testing.
• Integration Testing: Deals with testing when several program units are integrated.
• Regression testing : A change of behavior due to modification or addition is called ‘regression’. Regression testing verifies that such changes have not broken existing behavior.
• Incremental Integration Testing : Checks for bugs that appear when a module is integrated into the existing system.
• Smoke Testing : A battery of tests that checks the basic functionality of the program. If it fails, the build is not sent for further testing.
3. System Testing.
• System Testing - Deals with testing the whole program system for its intended purpose.
• Recovery testing : The system is forced to fail, and it is checked how well the system recovers from the failure.
• Security Testing : Checks the capability of system to defend itself from hostile attack on programs and data.
• Load & Stress Testing : The system is tested for max load and extreme stress points are figured out.
• Performance Testing : Used to determine the processing speed.
• Installation Testing : Installation and uninstallation are checked on the target platform.
4. Acceptance Testing.
• UAT ensures that the project satisfies the customer requirements.
• Alpha Testing : A test done by the client at the developer’s site.
• Beta Testing : A test done by the end-users at their own site.
• Long Term Testing : Checks for faults that occur during long-term usage of the product.
• Compatibility Testing : Determines how well the product handles a transition to other environments or product versions.
Software Testing
Testing involves operation of a system or application under controlled conditions and evaluating the results. Every Test consists of 3 steps :
Planning : The inputs to be given, the results to be obtained, and the process to follow are planned.
Execution : Preparing the test environment, completing the test, and determining test results.
Evaluation : Comparing the actual test outcome with what the correct outcome should have been.
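The three steps above can be sketched as a tiny test driver. This is a minimal illustration; `square` is a hypothetical unit under test:

```python
# Planning: each planned case pairs an input with its expected result.
# Execution: the unit under test is run on each input.
# Evaluation: actual output is compared with the expected output.

def square(x):          # hypothetical unit under test
    return x * x

plan = [(2, 4), (-3, 9), (0, 0)]        # Planning

for value, expected in plan:            # Execution
    actual = square(value)
    verdict = "PASS" if actual == expected else "FAIL"   # Evaluation
    print(f"square({value}) = {actual}, expected {expected}: {verdict}")
```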
Automated Testing
Automated testing is as simple as removing the "human factor" and letting the computer do the thinking. This can range from integrated debug tests to much more intricate processes. The idea of these tests is to find bugs that are often very challenging or time intensive for human testers to find. This sort of testing can save many man-hours and can be more "efficient" in some cases. But it costs more to ask a developer to write more lines of code into the game (or an external tool) than it does to pay a tester, and there is always the chance of a bug in the bug-testing program itself. Reusability is another problem; you may not be able to transfer a testing program from one title (or platform) to another. And of course, there is always the "human factor" of testing that can never truly be replaced.
Other successful alternatives or variations: Nothing is infallible. Realistically, a moderate split of human and automated testing can rule out a wider range of possible bugs than relying solely on one or the other. Giving the testers limited access to any automated tools can often help speed up the test cycle.
Release Acceptance Test
The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on each development release to check that each build is stable
enough for further testing. Typically, this test suite consists of entrance and exit test cases plus test cases that check mainstream functions of the program with
mainstream data. Copies of the RAT can be distributed to developers so that they can run the tests before submitting builds to the testing group. If a build does not
pass a RAT test, it is reasonable to do the following:
• Suspend testing on the new build and resume testing on the prior build until another build is received.
• Report the failing criteria to the development team.
• Request a new build.
• The validity of the task it performs with supported data conditions under supported operating conditions.
• The integrity of the task's end result.
• The feature's integrity when used in conjunction with related features.
Forced-Error Test
The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated. The list is used as a baseline for developing test cases: an attempt is made to generate each error message in the list. Obviously, tests that validate error-handling schemes cannot be performed until all the error handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes, the error messages are not available. The error cases can still be considered by walking through the program and deciding how the program might fail in a given user interface (such as a dialog), in the course of executing a given task, or when printing a given report. Test cases should be created for each condition to determine what error message is generated.
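A forced-error test can be sketched as a set of negative cases checked against the baseline list of error messages. This is a hedged example; `parse_age` and its messages are hypothetical stand-ins for the program under test:

```python
# Each bad input is paired with the error message it should force.
# The test fails if no error is raised or the wrong message appears.

def parse_age(text):                       # hypothetical unit under test
    if not text.strip().isdigit():
        raise ValueError("Age must be a whole number")
    age = int(text)
    if age > 150:
        raise ValueError("Age out of range")
    return age

error_baseline = {                         # baseline list of error messages
    "abc": "Age must be a whole number",
    "999": "Age out of range",
}

for bad_input, expected_msg in error_baseline.items():
    try:
        parse_age(bad_input)
        print(f"{bad_input!r}: NO ERROR (test fails)")
    except ValueError as err:
        ok = str(err) == expected_msg
        print(f"{bad_input!r}: {'PASS' if ok else 'FAIL'}")
```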
Real-world User-level Test
These tests simulate the actions customers may take with a program. Real-World user-level testing often detects errors that are otherwise missed by formal test
types.
Exploratory Test
Exploratory Tests do not involve a test plan, checklist, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about
places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled. It can also be reserved for
unforeseen downtime that presents itself during the testing process.
Compatibility and Configuration Testing
Compatibility and configuration testing is performed to check that an application functions properly across various hardware and software environments. Often, the strategy is to run the functional acceptance simple tests, or a subset of the task-oriented functional tests, on a range of software and hardware configurations. Sometimes, another strategy is to create a specific test that takes into account the error risks associated with configuration differences. For example, you might design an extensive series of tests to check for browser compatibility issues. Software compatibility configurations include variances in OS versions, input/output (I/O) devices, extensions, network software, concurrent applications, online services, and firewalls. Hardware configurations include variances in manufacturers, CPU types, RAM, graphics display cards, video capture cards, sound cards, monitors, network cards, and connection types (e.g., T1, DSL, modem).
Documentation
Testing of reference guides and user guides checks that all features are reasonably documented. Every page of documentation should be keystroke-tested for errors in the following areas:
• Usability
• Look and feel
• Navigation controls/navigation bar
• Instructional and technical information style
• Images
• Tables
• Navigation branching
• Accessibility
• Application software
• Database
• Servers
• Client workstations
• Networks
Unit Tests
Unit tests are positive tests that evaluate the integrity of software code units before they are integrated with other software units. Developers normally perform unit testing. Unit testing represents the first round of software testing--when developers test their own software and fix errors in private.
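A developer-side unit test can be sketched with the standard unittest module. This is a minimal illustration; `shipping_cost` and its pricing rules are hypothetical:

```python
# Positive unit tests for a single code unit, run before integration.

import unittest

def shipping_cost(weight_kg):              # hypothetical unit under test
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

class ShippingCostTest(unittest.TestCase):
    def test_base_rate(self):
        self.assertEqual(shipping_cost(1), 5.0)

    def test_heavier_parcel(self):
        self.assertEqual(shipping_cost(3), 9.0)   # 5.0 + 2.0 * 2

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShippingCostTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```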
Click-Stream Testing
Click-stream testing shows which URLs the user clicked, the Web site's user activity by time period during the day, and other data otherwise found in the Web server logs. Popular choices for click-stream testing statistics include the KeyNote Systems Internet weather report, the WebTrends log analysis utility, and the NetMechanic monitoring service.
Disadvantage: Click-stream testing statistics reveal almost nothing about the user's ability to achieve their goals using the Web site. For example, a Web site may show a million page views, but 35% of the page views may simply be pages with the message "Found no search results." With click-stream testing, there is no way to tell when users reach their goals.
Click-stream measurement tests
These tests make a request for a set of Web pages and record statistics about the response, including total page views per hour, total hits per week, total user sessions per week, and derivatives of these numbers. The downside is that if your Web-enabled application takes twice as many pages as it should for a user to complete his or her goal, the click-stream test makes it look as though your Web site is popular, while to the user your Web site is frustrating.
HTML content-checking tests
HTML content-checking tests make a request to a Web page, parse the response for HTTP hyperlinks, request the hyperlinks from their associated hosts, and check whether the links return successful or exceptional conditions. The downside is that the hyperlinks in a Web-enabled application are dynamic and can change depending on the user's actions. There is little way to know the context of the hyperlinks in a Web-enabled application, so just checking the links' validity is meaningless if not misleading. These tests were meant to test static Web sites, not Web-enabled applications.
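The link-extraction half of such a test can be sketched with only the standard library. In this hedged example the page content is inline; a real content checker would fetch it with urllib.request and then request each extracted link to record its HTTP status:

```python
# Parse an HTML page for hyperlinks; each collected href would then be
# requested and its response status checked.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="/index.html">Home</a> <a href="/missing.html">Gone</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)    # ['/index.html', '/missing.html']
```

As the text notes, this technique suits static sites; dynamically generated links need context the checker does not have.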
Web-Enabled Application Measurement Tests
Ping tests
Ping tests use the Internet Control Message Protocol (ICMP) to send a ping request to a server. If the ping returns, the server is assumed to be alive and well. The downside is that a Web server will usually continue to answer ping requests even when the Web-enabled application has crashed.
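Because of that downside, an application health check should make a real request instead of a ping. In this hedged sketch, a throwaway local HTTP server stands in for the Web-enabled application, and the check requests an actual page:

```python
# Start a disposable local HTTP server and verify the application layer
# answers a real HTTP request (a ping would only prove the host is up).

import http.server
import threading
import urllib.request

server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
with urllib.request.urlopen(url, timeout=5) as response:
    status = response.status
    print("application responded with HTTP", status)

server.shutdown()
```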
Unit Testing
Unit testing finds problems and errors at the module level before the software leaves development. Unit testing is accomplished by adding a small amount of code to the module that validates the module's responses.
System-Level Test
System-level tests consist of batteries of tests that are designed to fully exercise a program as a whole and check that all elements of the integrated system function properly.
Functional System Testing
System tests check that the software functions properly from end to end. The components of the system include: a database, Web-enabled application software modules, Web servers, Web-enabled application frameworks, deployed Web browser software, TCP/IP networking routers, media servers to stream audio and video, and messaging services for email.
A common mistake of test professionals is to believe that they are conducting system tests while they are actually testing a single component of the system. For
example, checking that the Web server returns a page is not a system test if the page contains only a static HTML page.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper
execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for
ensuring that this level of testing is performed.
The system testing checklist includes questions such as:
• Will my test be able to support all the users and still maintain performance?
• Will my test be able to simulate the number of transactions that pass through in a matter of hours?
• Will my test be able to uncover whether the system will break?
• Will my server crash if the load continues over and over?
The test should be set up so that you can simulate the load; for example:
• If you have a remote Web site you should be able to monitor up to four Web sites or URLs.
• There should be a way to monitor the load intervals.
• The load test should be able to simulate SSL (secure server) connections.
• The test should be able to simulate a user submitting form data (GET method).
• The test should be set up to simulate and authenticate keyword verification.
• The test should be able to simulate up to six email or pager mail addresses and an alert should occur when there is a failure.
It is important to remember when stressing your Web site to give a certain number of users a page to stress test and give them a certain amount of time in which to
run the test.
Some of the key data features that can help you measure this type of stress test, determine the load, and uncover bottlenecks in the system are described below.
Load Testing
The process of modeling application usage conditions and running them against the application and system under test, to analyze the application and system and determine capacity, throughput, speed, transaction handling capabilities, scalability, and reliability while under stress.
This type of test is designed to identify possible overloads to the system, such as too many users signed on to the system, too many terminals on the network, or a network that is too slow.
Load testing is a simulation of how a browser will respond to intense use by many individuals. The Web sessions can be recorded live and set up so that the test can be run during peak times and also during slow times. The following are two different types of load tests:
Single session - A single session should be set up on one browser with one or multiple responses. The timing of the data should be put in a file. After the test, you can set up a separate file for report analysis.
Multiple session - A multiple session should be developed on multiple browsers with one or multiple responses. Multivariate statistical methods may be needed for a complex but general performance model.
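A multiple-session load test can be sketched with worker threads that each time one "session". This is a hedged illustration: the stub transaction merely sleeps to imitate server latency, and in a real test it would be replaced by actual HTTP requests against the system under test:

```python
# Run 50 simulated sessions across 10 concurrent workers and collect
# per-session timings for later report analysis.

import time
from concurrent.futures import ThreadPoolExecutor

def run_session(session_id):
    start = time.perf_counter()
    time.sleep(0.05)                      # stand-in for a request/response
    return session_id, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(run_session, range(50)))

durations = [t for _, t in timings]
print(f"sessions: {len(durations)}, "
      f"mean: {sum(durations) / len(durations):.3f}s, "
      f"max: {max(durations):.3f}s")
```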
When performing stress testing, looping transactions back on themselves so that the system stresses itself simulates stress loads and may be useful for finding synchronization problems and timing bugs, Web priority problems, memory bugs, and Windows API problems. For example, you may want to simulate an incoming message that is then put out on a looped-back line; this in turn will generate another incoming message. You can then use another system of comparable size to create the stress load.
Memory leaks are often found under stress testing. A memory leak occurs when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after several iterations the available memory is reduced until the system fails.
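The leak pattern described above can be caught in a sketch using the standard tracemalloc module: memory is measured before and after many iterations, and steady growth suggests allocations are not being returned. The leaky function here is deliberately contrived for illustration:

```python
# Snapshot traced memory around 1000 iterations of a deliberately leaky
# operation; large growth indicates memory is not being released.

import tracemalloc

leak = []                       # survives every iteration

def leaky_operation():
    leak.append("x" * 10_000)   # allocated memory is never released

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(1000):
    leaky_operation()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
print(f"memory grew by about {growth // 1024} KiB over 1000 iterations")
```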
Load/volume tests, which involve extreme conditions, are normally run after the execution of feature-level tests, which prove that a program functions correctly
under normal conditions.
Performance Test
The primary goal of performance-testing is to develop effective enhancement strategies for maintaining acceptable system performance. Performance testing is a
capacity analysis and planning process in which measurement data are used to predict when load levels will exhaust system resources.
Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?
Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?
Speed testing
Is the Web-enabled application taking too long to respond?
Boundary Test
Boundary tests are designed to check a program's response to extreme input values, and to the extreme output values generated by those inputs. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing. If so, it is useful to determine how to drive it through its extremes and special conditions, such as zero or an overflow condition.
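Boundary testing can be sketched by probing a function at, just below, and just above each boundary of its valid range. The discount function and its rules here are hypothetical:

```python
# Probe both range boundaries (1 and 100) and the internal rule
# boundary (50) from both sides.

def discount_percent(quantity):            # hypothetical unit under test
    if quantity < 1 or quantity > 100:
        raise ValueError("quantity out of range")
    return 10 if quantity >= 50 else 0

boundary_cases = {
    0: "error", 1: 0,            # lower boundary and just below it
    49: 0, 50: 10,               # internal boundary of the discount rule
    100: 10, 101: "error",       # upper boundary and just above it
}

for qty, expected in boundary_cases.items():
    try:
        actual = discount_percent(qty)
    except ValueError:
        actual = "error"
    print(qty, actual, "PASS" if actual == expected else "FAIL")
```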
Boundary timing testing
What happens when your Web-enabled application request times out or takes a really long time to respond?
Regression testing
Did a new build break an existing function? Testing is repeated after changes to manage the risk related to product enhancement.
A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in subsequent program versions.
Regression testing: After every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions are all working correctly.
Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:
• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.
Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, issue status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full-regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or when the project is in the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that the features proven correctly functional are still working as expected.
Database Testing
Data integrity
Data stored in the database should include such items as the catalog, pricing, shipping tables, tax tables, order database, and customer information. Testing must verify the integrity of the stored data. Testing should be done on a regular basis because data changes over time.
• Test the creation, modification, and deletion of data in tables as specified in the business requirement.
• Test to make sure that sets of radio buttons represent a fixed set of values. You should also check for NULL or EMPTY values.
• Test to make sure that data is saved to the database and that each value gets saved fully. You should watch for the truncation of strings and check that numeric values are not rounded off.
• Test to make sure that default values are stored and saved.
• Test the compatibility with old data. You should ensure that all updates do not affect the data you have on file in your database.
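The checks above can be sketched against an in-memory SQLite table. This is a hedged illustration with a hypothetical `orders` schema; a real test would run against the application's actual database:

```python
# Verify three integrity properties: defaults are stored, long values
# round-trip without truncation, and NULLs are rejected where forbidden.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer TEXT NOT NULL,
    status TEXT DEFAULT 'new')""")

# Default values are stored and saved.
conn.execute("INSERT INTO orders (customer) VALUES ('Ada')")
assert conn.execute("SELECT status FROM orders").fetchone()[0] == "new"

# Values are saved fully (no truncation of long strings).
long_name = "x" * 500
conn.execute("INSERT INTO orders (customer) VALUES (?)", (long_name,))
stored = conn.execute(
    "SELECT customer FROM orders WHERE id = 2").fetchone()[0]
assert stored == long_name

# NULL values are rejected by the NOT NULL constraint.
try:
    conn.execute("INSERT INTO orders (customer) VALUES (NULL)")
    print("NULL accepted (integrity test fails)")
except sqlite3.IntegrityError:
    print("NULL rejected as required")
```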
Data validity
The most common data errors are due to incorrect data entry, called data validity errors.
Recovery testing
• The system recovers from faults and resumes processing within a predefined period of time.
• The system is fault-tolerant, which means that processing faults do not halt the overall functioning of the system.
• Data recovery and restart are correct in case of automatic recovery. If recovery requires human intervention, the mean time to repair the database is
within predefined acceptable limits.
• If the Web site publishes from inside the SQL Server straight to a Web page, is the data accurate and of the correct data type?
• If the SQL Server reads from a stored procedure to produce a Web page or if the stored procedure is changed, does the data on the page change?
• If you are using FrontPage or Visual InterDev, is the data connection to your pages secure?
• Does the database have scheduled maintenance with a log so testers can see changes or errors?
• Can the tester check to see how backups are being handled?
• Is the database secure?
• If the database is creating Web pages from the database to a URL, is the information correct and updated? If the pages are not dynamic or Active Server Pages, they will not update automatically.
• If the tables in the database are linked to another database, make sure that all the links are active and giving relevant information.
• Are fields such as zip code, phone numbers, dates, currency, and social security number formatted properly?
• If there are formulas in the database, do they work? How will they take care of updates if numbers change (for example, updating taxes)?
• Do the forms populate the correct tables?
• If the database is linked to other databases, are the links secure and working?
• If the database publishes to the Internet, is the data correct?
• When data is deployed, is it still accurate?
• Do the queries give accurate information to the reports?
• If the database performs calculations, are the calculations accurate?
Type of application, development style, and test style by wave:
1. Development style: structured programming, with commands organized into hierarchical menu lists. Test style: test each function against a written functional specification.
2. Client/Server. Development style: a common dropdown menu bar combined with graphical windows containing controls. Test style: test throughput from the server to its clients.
3. Web-enabled. Development style: visual integrated development tools facilitate object-oriented design patterns; the common style is to provide multiple paths to accomplishing tasks. (The net effect is that you can't just walk through a set of hierarchical menus and arrive at the same test result any more.) Test style: capture/record/playback watches how an application is used and then provides reports comparing how the playback differed from the original recording.
• Client/server applications operate in a network environment. The tests need not only to check the function of an application; they need to test how the application handles slow or intermittent network performance.
• Automated tests are ideal for determining the number of client applications a server is able to efficiently handle at any given time.
• The server is usually a middle tier between the client application and several data sources. Automated tests need to check the server for correct
functionality while it communicates with the data source.
Application Usability:
• Web-enabled applications are meant to be stateless. HTTP was designed to be stateless. Each request from a Web-enabled application is meant to be atomic and not rely on any previous requests. This has huge advantages for system architecture and datacenter provisioning. When requests are stateless, then any server can respond to the request and any request handler on any server may service the request.
• Web-enabled applications are platform independent. The client application may be written for Windows, Macintosh, Linux, or any other platform that is capable of implementing the command protocol and network connection to the server.
• Web-enabled applications expect the client application to provide presentation rendering and simple scripting capabilities. The client application is usually a browser; however, it may also be a dedicated client application such as a retail cash register, a Windows-based data analysis tool, or an electronic address book in your mobile phone.
The missing context in Web-enabled application test automation means that software developers and QA technicians must manually script tests for each Web-enabled application. Plus, they need to maintain the test scripts as the application changes. Web-enabled application test automation tools focus on making the script writing and maintenance tasks easier. These test automation tools offer the following features:
• A friendly, graphical user interface to integrate the record, edit, and run-time script functions.
• A recorder that watches how an application is used and writes a test script for you.
• A playback utility that drives a Web-enabled application by processing the test script and logging the results. The playback utility also provides the facility to play back several concurrently running copies of the same script to check the system for scalability and load testing.
• A report utility to show how the playback differed from the original recording. The differences may be slower or faster performance times, errors, and
incomplete transactions.
• Web server
• Application server
• Data layers
• Are the scroll bars, buttons, and frames compatible with the browser and functional?
• Check the functionality of the scroll bars on the interface of the Web page to make sure that the user can scroll through items and make the correct selection from a list of items.
• The buttons on the interface need to be functional, and each hyperlink should go to the correct page.
• If frames are used on the interface, they should be checked for the correct size and whether all of the components fit within the viewing screen of the monitor.
2. User Interface
One of the reasons the web browser is being used as the front end to applications is the ease of use. Users who have been on the web before will probably know
how to navigate a well-built web site. While you are concentrating on this portion of testing it is important to verify that the application is easy to use. Many will
believe that this is the least important area to test, but if you want to be successful, the site better be easy to use.
3. Instructions
You want to make sure there are instructions. Even if you think the web site is simple, there will always be someone who needs some clarification. Additionally,
you need to test the documentation to verify that the instructions are correct. If you follow each instruction does the expected result occur?
5. Content
To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need
some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations
department on the exact wording of the content.
You also want to make sure the site looks professional. Overuse of bold text, big fonts and blinking (ugh) can turn away a customer quickly. It might be a good idea
to consult a graphic designer to look over the site during User Acceptance Testing. You wouldn't slap together a brochure with bold text everywhere, so you want
to handle the web site with the same level of professionalism.
Finally, you want to make sure that any time a web reference is given that it is hyperlinked. Plenty of sites ask you to email them at a specific address or to
download a browser from an address. But if the user can't click on it, they are going to be annoyed.
6. Colors/backgrounds
Ever since the web became popular, everyone thinks they are graphic designers. Unfortunately, some developers are more interested in their new backgrounds,
than ease of use. Sites will have yellow text on a purple picture of a fractal pattern. (If you've never seen this, try most sites at GeoCities or AOL.) This may seem
"pretty neat", but it's not easy to use.
Usually, the best idea is to use little or no background. If you have a background, it might be a single color on the left side of the page, containing the navigational
bar. But, patterns and pictures distract the user.
7. Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply
show them. However, bandwidth is precious to the client and the server, so you need to conserve memory usage. Do all the images add value to each page, or do
they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less?
In general, you don't want large pictures on the front page, since most users who abandon a page due to a large load will do it on the front page. If you can get
them to see the front page quickly, it will increase the chance they will stay.
8. Tables
You also want to verify that tables are set up properly. Does the user constantly have to scroll right to see the price of the item? Would it be more effective to put the price closer to the left and put minuscule details to the right? Are the columns wide enough, or does every row have to wrap around? Are certain columns considerably longer than others?
9. Wrap-around
Finally, you will want to verify that wrap-around occurs properly. If the text refers to "a picture on the right", make sure the picture is on the right. Make sure that widowed and orphaned sentences and paragraphs aren't laid out in an awkward manner because of pictures.
10. Functionality
The functionality of the web site is why your company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does
stuff".
11. Links
A link is the vehicle that gets the user from page to page. You will need to verify two things for each link: that the link brings you to the page it said it would and that the pages you are linking to actually exist. It may sound a little silly, but I have seen plenty of web sites with internal broken links.
12. Forms
When a user submits information through a form it needs to work properly. The submit button needs to work. If the form is for an online registration, the user
should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer
should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret
and use that information.
If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid
values. If this is the case, you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make
sure the system accepts it).
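The State-field check above can be sketched directly, including the trick of adding a bogus value to the list to prove the program really consults it. The field name and value list here are hypothetical:

```python
# Validate a form field against a list of valid values, then confirm
# the validator reads the list (a bogus value added to the list should
# then be accepted).

VALID_STATES = ["CA", "NY", "TX", "WA"]

def is_valid_state(value, valid_states=VALID_STATES):
    return value.strip().upper() in valid_states

assert is_valid_state("ca")                          # valid entry passes
assert not is_valid_state("ZZ")                      # invalid entry rejected
assert is_valid_state("ZZ", VALID_STATES + ["ZZ"])   # list is really consulted
print("state-field validation checks passed")
```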
14. Cookies
Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make
sure the cookies work. If the cookie is used for statistics, verify that totals are being counted properly. And you'll probably want to make sure those cookies are
encrypted too, otherwise people can edit their cookies and skew your statistics.
15. Application-specific functional requirements
Most importantly, you want to verify the application-specific functional requirements. Try to perform all the functions a user
would: place an order, change an order, cancel an order, check the status of an order, change shipping information before an order is shipped, pay online, ad
nauseam.
This is why your users will show up on your doorstep, so you need to make sure you can do what you advertise.
16. Interface Testing
Many times, a web site is not an island. The site will call external servers for additional data, verification of data or fulfillment of orders.
19. Compatibility
You will also want to verify that the application can work on the machines your customers will be using. If the product is going to the web for the world to use, you
will need to try different combinations of operating system, browser, video setting and modem speed.
21. Browsers
Does your site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work in certain browsers. Make sure there are alternate (ALT) tags
for images, in case someone is using a text browser. If you're using SSL security, you only need to check browsers 3.0 and higher, but verify that there is a
message for those using older browsers.
24. Combinations
Now you get to try combinations. Maybe 800x600 looks good on the Mac but not on the IBM. Maybe IBM with Netscape works, but not with Lynx.
If the web site will be used internally it might make testing a little easier. If the company has an official web browser choice, then you just need to verify that it
works for that browser. If everyone has a T1 connection, then you might not need to check load times. (But keep in mind, some people may dial in from home.)
With internal applications, the development team can make disclaimers about system requirements and only support those system setups. But, ideally, the site
should work on all machines so you don't limit growth and changes in the future.
25. Load/Stress
You will need to verify that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of
continuous use. Accessibility is extremely important to users. If they get a "busy signal", they hang up and call the competition. Not only must the system be
checked so your customers can gain access, but many times crackers will attempt to gain access to a system by overloading it. For the sake of security, your
system needs to know what to do when it's overloaded and not simply blow up.
Many users at the same time
If the site just put up the results of a national lottery, it had better be able to handle millions of users right after the winning numbers are posted. A load test tool would
be able to simulate a large number of users accessing the site at the same time.
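As a toy illustration of what such a tool does, the sketch below fires many simulated "users" concurrently and collects their response times. The request here is a stub (a short sleep) rather than real HTTP, so the shape of the test is visible without a live server; a real driver would call urllib or a load tool's client instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Stand-in for an HTTP request; pretends the server takes ~10 ms."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def hammer(n_users):
    """Fire n_users simulated requests at once and collect response times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(fake_request, range(n_users)))

times = hammer(50)
print(f"{len(times)} responses, worst {max(times) * 1000:.0f} ms")
```

Commercial tools scale this same idea to thousands of virtual users spread across load generator machines.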
Large amount of data from each user
Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 different books? Or what if
grandma wants to send a gift to each of her 50 grandchildren for Christmas (separate mailing addresses for each, of course.) Can your system handle large
amounts of data from a single user?
Long period of continuous use
If the site is intended to take orders for flower deliveries, then it better be able to handle the week before Mother's Day. If the site offers web-based email, it better
be able to run for months or even years, without downtimes.
You will probably want to use an automated test tool to implement these types of tests, since they are difficult to do manually. Imagine coordinating 100 people to
hit the site at the same time. Now try 100,000 people. Generally, the tool will pay for itself the second or third time you use it. Once the tool is set up, running
another test is just a click away.
26. Security
Even if you aren't accepting credit card payments, security is very important. The web site will be the only exposure some customers have to your company. And,
if that exposure is a hacked page, they won't feel safe doing business with you.
27. Directory setup
The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so a directory listing doesn't
appear.
One company I was consulting for didn't observe this principle. I right-clicked on an image and found the path "...com/objects/images". I went to that directory
manually and found a complete listing of the images on that site. That wasn't too important. Next, I went to the directory below that: "...com/objects" and I hit the
jackpot. There were plenty of goodies, but what caught my eye were the historical pages. They had changed their prices every month and kept the old pages. I
browsed around and could figure out their profit margin and how low they were willing to go on a contract. If a potential customer did a little browsing first, they
would have had a definite advantage at the bargaining table.
SSL
Many sites use SSL for secure transactions. You know you have entered an SSL site because there will be a browser warning and the HTTP in the location field on
the browser will change to HTTPS. If your development group uses SSL, you need to make sure there is an alternate page for browsers with versions less than 3.0,
since SSL is not compatible with those browsers. You also need to make sure that there are warnings when you enter and leave the secured site. Is there a
timeout limit? What happens if the user tries a transaction after the timeout?
28. Logins
In order to validate users, several sites require customers to login. This makes it easier for the customer since they don't have to re-enter personal information
every time. You need to verify that the system does not allow invalid usernames/password and that it does allow valid logins. Is there a maximum number of failed
logins allowed before the server locks out the current user? Is the lockout based on IP? What if the maximum failed login attempts is three, and you try three, but
then enter a valid login? What are the rules for password selection?
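The lockout questions above are exactly the kind of edge cases worth scripting. The sketch below exercises a toy login service (a stand-in written here for illustration, not any real system) that locks an account after three failures, including the tricky case raised in the text: three failed attempts followed by a valid login.

```python
class LoginService:
    """Toy stand-in for the system under test: locks after 3 failures."""
    MAX_FAILURES = 3

    def __init__(self, password="s3cret"):
        self._password = password
        self.failures = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return "locked"
        if password == self._password:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
        return "denied"

svc = LoginService()
# Edge case: three failures, then a *valid* login attempt.
results = [svc.login("wrong") for _ in range(3)] + [svc.login("s3cret")]
print(results)  # the valid attempt is refused because the account is locked
```

Whether the real system should behave this way (and whether the lockout is per-account or per-IP) is precisely what the test plan must specify.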
• Feature: Definition
• Transactions: The number of times the test script requested the current URL
• Elapsed time: The number of seconds it took to run the request
• Bytes transferred: The total number of bytes sent or received, less HTTP headers
• Response time: The average time it took for the server to respond to each individual request.
• Transaction rate: The average number of transactions the server was able to handle per second.
• Transferance: The average number of bytes transferred per second.
• Concurrency: The average number of simultaneous connections the server was able to handle during the test session.
• Status code nnn: This indicates how many times a particular HTTP status code was seen.
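These statistics are simple ratios over the run, which makes them easy to recompute as a sanity check on a tool's report. A sketch with assumed run figures (all numbers are illustrative):

```python
# Assumed figures from a one-minute test run (illustrative only)
transactions = 1200            # times the script requested the URL
elapsed = 60.0                 # seconds the run took
bytes_moved = 9_600_000        # bytes sent/received, less HTTP headers
total_response = 300.0         # sum of all individual response times (s)

transaction_rate = transactions / elapsed        # transactions per second
transferance = bytes_moved / elapsed             # bytes per second
response_time = total_response / transactions    # average per request (s)
concurrency = total_response / elapsed           # avg simultaneous connections
print(transaction_rate, transferance, response_time, concurrency)
```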
Localization
The aspect of development and testing relating to the translation of the software and its presentation to the end user. This includes translating the program,
choosing appropriate icons and graphics, and other cultural considerations. It also may include translating the program's help files and the documentation. You
could think of localization as pertaining to the presentation of your program; the things the user sees.
Internationalization (I18N)
• Developing a (software) product in such a way that it will be easy to adapt it to other markets (languages and cultures)
• Goal: eliminate the need to reprogram or recompile the original program
• Carried out by SW-Development in conjunction with Localization
• Handling foreign text and data within a program
• Sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling.
• Separating strings from the source code, and making sure that the foreign-language strings have enough space in your user interface to be displayed
correctly
Internationalization
The aspect of development and testing relating to handling foreign text and data within a program. This would include sorting, importing and exporting text and
data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth. It also includes the task of separating
strings (or user interface text) from the source code, and making sure that the foreign language strings have enough space in your user interface to be displayed
correctly. You could think of internationalization as pertaining to the underlying functionality and workings of your program.
What do I18N/L10N stand for?
These two abbreviations mean internationalization and localization respectively. Using the word "internationalization" as an example; here is how these
abbreviations are derived. First, you take the first letter of the word you want to abbreviate; in this case the letter "I". Next, you take the last letter in the word; in this
case the letter "N". These become the first and last letters in the abbreviation. Finally, you count the remaining letters in the word between the first and last letter.
In this case, "nternationalizatio" has 18 characters in it, so we will plug the number 18 between the "I" and "N"; thus I18N.
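The derivation above is mechanical enough to express in a few lines of code:

```python
def numeronym(word):
    """I18N-style abbreviation: first letter, count of middle letters, last letter."""
    return f"{word[0]}{len(word) - 2}{word[-1]}".upper()

print(numeronym("internationalization"))  # I18N
print(numeronym("localization"))          # L10N
print(numeronym("globalization"))         # G11N
```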
I18N and L10N
• I18N and L10N comprise the whole of the effort involved in enabling a product.
• I18N is "stuff" you have to do once.
• L10N is "stuff" you have to do over and over again.
• The more stuff you push into I18N out of L10N, the less complicated and expensive the process becomes.
Globalization (G11N)
• Activities performed for the purpose of marketing a (software) product in regional markets
• Goal: Global marketing that accounts for economic and legal factors.
• Focus on marketing: total enterprise solutions and management support
Aspects of Localization
• Terminology
The selection and definition as well as the correct and consistent usage of terms are preconditions for successful localization:
• laymen and expert users
• in most cases innovative domains and topics
• huge projects with many persons involved
• consistent terminology throughout all products
• no synonyms allowed
• predefined terminology (environment, laws, specifications, guidelines, corporate language)
• Symbols
• Symbols are culture-dependent, but often they cannot be modified by the localizer.
• Symbols are often adopted from other (common) spheres of life.
• Symbols often use allusions (concrete for abstract); in some cases, homonyms or even homophones are used.
• Illustrations and Graphics
• Illustrations and graphics are very often culture-dependent, not only in content but also in the way they are presented
• Illustrations and graphics should be adapted to the (technical) needs of the target market (screen shots, manuals)
• Illustrations and graphics often contain textual elements that must be localized, but cannot be isolated.
• Colors have different meanings in different cultures
• Character sets
• Languages are based on different character sets ("alphabets")
• the localized product must be able to handle (display, process, sort, etc.) the needed character set
• "US English" character sets (1 byte, 7 bit or 8 bit)
• Product development for internationalization should be based on UNICODE (2 byte or 4 byte)
• Fonts and typography
• Font types and font families are used in different cultures with varying frequency and for different text types and parts of text
• Example: In English manuals fonts with serifs(Times Roman) are preferred; In German manuals fonts without serifs(Helvetica) are preferred
• Example: English uses capitalization more frequently(e.g. CAUTION) for headers and parts of text
• Language and style
• In addition to the language specific features of grammar, syntax and style, there are cultural conventions that must be taken into account for
localization.
• In (US English) an informal style is preferred, the reader is addressed directly, "simple" verbs are used, repetition of parts of text is accepted,
etc.
• Formulating headers (L10N)
• Long compound words (L10N)
• Elements of the user interface
Open the File menu
The Open dialog box appears
Click the copy command
• Formats
• Date, currency, units of measurement
• Paper format
• Different length of text
1. Consequences for the volume of documents, number of pages, page number (table of contents, index)etc.
2. Also for the size of buttons (IT, machines etc.)
• Testing resource files [separate strings from the code]. Solution: create a pseudo build.
• String expansion: string size changes can break layout and alignment. When words or sentences are translated into other languages, most of the time the
resulting string will be either longer or shorter than the native-language version of the string. Two solutions to this problem:
1. Account for the space needed for string expansion, adjusting the layout of your dialog accordingly
2. Separate your dialog resources into separate dynamic libraries.
• Data format localization:
European style:DD/MM/YY
North American style: MM/DD/YY
Currency, time and number formats, addresses.
• Character sets: ASCII or non-ASCII
Single-byte characters (8-bit, e.g. US English): 256 characters
Double-byte characters (16-bit, e.g. Chinese): 65,536 code points
• Encoding: Unicode supports many different written languages of the world, all in a single character encoding. Note: for a double-byte character set such as
Chinese, it is often better to convert from Unicode to UTF-8, because UTF-8 is a variable-length encoding of Unicode that can be easily sent through the
network via single-byte streams.
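The variable-length property mentioned in the note is easy to demonstrate: ASCII characters stay one byte in UTF-8, while accented Latin and CJK characters take more, and fixed-width UTF-16 spends two bytes on everything.

```python
# Compare byte lengths of the same characters in UTF-8 vs UTF-16.
for ch in ("A", "é", "中"):
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")
    print(ch, len(utf8), len(utf16))
# "A" is 1 UTF-8 byte, "é" is 2, "中" is 3; UTF-16 uses 2 bytes for all
# three, which is why UTF-8 is friendlier to byte-oriented network streams.
```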
• Builds and installer: creating an environment that supports a single version of your code, and multiple versions of the language files.
• The program's installation and uninstallation on foreign machines.
• Testing with foreign characters:
Example:
enter foreign text for the username and password.
For entering European or Latin characters on Windows:
1. Use the Character Map tool
(Start -> Programs -> Accessories -> System Tools)
2. Use escape sequences. Example: ALT + 128
For Asian languages, use what is usually called an IME (input method editor)
Chinese: I use GB encoding with pinyin input mode
• Foreign Keyboards or On-Screen keyboard
• Text filters: Programs that are used to collect and manipulate data usually provide the user with a mechanism for searching and filtering that data. As a
global software tester, you need to make sure that the filtering and searching capabilities of your program work correctly with foreign text. A common problem:
filters that need to ignore the accent marks used in foreign text.
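One common way to make a filter accent-insensitive is Unicode decomposition: normalize to NFD and drop the combining marks before comparing. A sketch (the record list is invented for illustration):

```python
import unicodedata

def strip_accents(text):
    """Decompose to NFD and drop combining marks, so 'café' matches 'cafe'."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

records = ["café", "naïve", "cafe au lait", "résumé"]
query = "cafe"
hits = [r for r in records if query in strip_accents(r)]
print(hits)  # ['café', 'cafe au lait']
```

A tester would feed such accented data through the application's own search and verify it behaves according to the requirements, whichever way they specify accents should be treated.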
• Loading, saving, importing,and exporting high and low ASCII
• Asian text in the program: how double-byte character sets work
• Watch the style
• Two environments in which to test a program in Chinese:
1. On a Chinese Windows system (in China)
2. On an English Windows system with Chinese language support (in the USA)
Microsoft language codes:
CHS - Chinese Simplified
CHT - Chinese Traditional(Taiwan)
ENU - English (United States)
FRA - French (France)
Java language codes
zh_CN - Chinese Simplified
zh_TW - Chinese Traditional (Taiwan)
fr or fr_FR - French (France)
en or en_US - English (United States)
• More things to consider in localization testing:
• Hot key.
• Garbled in translation
• Error message identifiers
• Hyphenation rules
• Spelling rules
• Sorting rules
• Uppercase and lowercase conversion
• Accuracy
• Good reading
• Help is a combination of writing and programming
• Test hypertext links
• Test the index
• Watch the style
In the case of toysrus.com, its web site couldn't handle the approximately 1000 percent increase in traffic that their advertising campaign generated. Similarly,
Encyclopaedia Britannica was unable to keep up with the amount of users during the immediate weeks following their promotion of free access to its online
database. The truth is, these problems could probably have been prevented, had adequate load testing taken place.
When creating an eCommerce portal, companies will want to know whether their infrastructure can handle the predicted levels of traffic, to measure performance
and verify stability.
These types of services include Scalability / Load / Stress testing, as well as Live Performance Monitoring.
Load testing tools can be used to test the system behaviour and performance under stressful conditions by emulating thousands of virtual users. These virtual
users stress the application even harder than real users would, while monitoring the behaviour and response times of the different components. This enables
companies to minimise test cycles and optimise performance, hence accelerating deployment, while providing a level of confidence in the system. Once launched,
the site can be regularly checked using Live Performance Monitoring tools to monitor site performance in real time, in order to detect and report any performance
problems - before users can experience them.
Measure how many people visit the site per week/month or day. Then break down these current traffic patterns into one-hour time slices, and identify the peak-
hours (i.e. if you get lots of traffic during lunch time etc.), and the numbers of users during those peak hours. This information can then be used to estimate the
number of concurrent users on your site.
3. Concurrent Users
Although your site may be handling x number of users per day, only a small percentage of these users would be hitting your site at the same time. For example, if
you have 3000 unique users hitting your site on one day, all 3000 are not going to be using the site between 11:01 and 11:05 am.
So, once you have identified your peak hour, divide this hour into 5 or 10 minute slices [you should use your own judgement here, based on the length of the
average user session] to get the number of concurrent users for that time slice.
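The slicing described above is just arithmetic. A sketch with assumed traffic figures (all numbers are illustrative, not measurements):

```python
# Assumed traffic profile for the estimate
daily_users = 3000          # unique users per day
peak_hour_share = 0.20      # 20% of the day's traffic falls in the peak hour
avg_session_min = 10        # average user session length, in minutes

peak_hour_users = daily_users * peak_hour_share       # users in the peak hour
slices_per_hour = 60 / avg_session_min                # ten-minute slices
concurrent_users = peak_hour_users / slices_per_hour  # rough concurrency
print(int(concurrent_users))
```

With these assumptions, 3000 daily users translate into roughly 100 concurrent users: a far smaller number than the daily total, which is exactly the point of the section.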
4. Estimating Target Load Levels
Once you have identified the current load levels, the next step is to understand as accurately and as objectively as possible the nature of the load that must be
generated during the testing.
Using the current usage figures, estimate how many people will visit the site per week/month or day. Then divide that number to attain realistic peak-hour
scenarios.
It is important to understand the volume patterns, and to determine what load levels your web site might be subjected to (and must therefore be tested for).
There are four key variables that must be understood in order to estimate target load levels:
• how the overall amount of traffic to your Web site is expected to grow
• the peak load level which might occur within the overall traffic
• how quickly the number of users might ramp up to that peak load level
• how long that peak load level is expected to last
Once you have an estimate of overall traffic growth, you’ll need to estimate the peak level you might expect within that overall volume.
6. Ramp-up Rate
As mentioned earlier, although your site may be handling x number of users per day, only a small percentage of these users would be hitting your site at the same
time.
Therefore, when preparing your load test scenario, you should take into account the fact that users will hit the website at different times, and that during your peak
hour the number of concurrent users will likely gradually build up to reach the peak number of users, before tailing off as the peak hour comes to a close.
The rate at which the number of users builds up, the "Ramp-up Rate", should be factored into the load test scenarios (i.e. you should not just jump to the maximum
value, but increase in a series of steps).
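A ramp-up schedule of this shape (build up in steps, hold at the peak, tail off) can be generated mechanically. The step counts below are arbitrary placeholders:

```python
def ramp_schedule(peak_users, steps, hold_steps=2):
    """Stepwise build-up to peak, hold, then tail off; not an instant jump."""
    up = [round(peak_users * (i + 1) / steps) for i in range(steps)]
    hold = [peak_users] * hold_steps
    down = list(reversed(up[:-1]))
    return up + hold + down

print(ramp_schedule(100, 4))
# [25, 50, 75, 100, 100, 100, 75, 50, 25]
```

Each number is the concurrent-user target for one time slice of the scenario; a load tool would hold each level for a few minutes before stepping to the next.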
7. Scenario Identification
The information gathered during the analysis of the current traffic is used to create the scenarios that are to be used to load test the web site.
The identified scenarios aim to accurately emulate the behavior of real users navigating through the Web site.
For example, a seven-page session that results in a purchase is going to create more load on the Web site than a seven-page session that involves only browsing.
A browsing session might only involve the serving of static pages, while a purchase session will involve a number of elements, including the inventory database,
the customer database, a credit card transaction with verification going through a third-party system, and a notification email. A single purchase session might put
as much load on some of the system’s resources as twenty browsing sessions.
Similar reasoning may apply to purchases from new vs. returning users. A new user purchase might involve a significant amount of account setup and verification
—something existing users may not require. The database load created by a single new user purchase may equal that of five purchases by existing users, so you
should differentiate the two types of purchases.
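The reasoning above amounts to weighting each session type by its relative cost. A sketch with assumed weights and session counts (both invented for illustration, echoing the "one purchase equals many browses" ratios in the text):

```python
# Relative load weights and session counts are illustrative assumptions.
weights = {"browse": 1, "purchase_existing": 4, "purchase_new": 20}
sessions = {"browse": 850, "purchase_existing": 120, "purchase_new": 30}

load_units = {kind: sessions[kind] * weights[kind] for kind in sessions}
total = sum(load_units.values())
print(load_units, total)
# Browsing dominates the session count, yet the two purchase types together
# contribute more load units - which is why the scenario mix must reflect
# session *types*, not just raw session counts.
```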
8. Script Preparation
Next, program your load test tool to run each scenario with the numbers and types of users playing back concurrently to produce the load scenario. Each script should document:
• test objective
• pass/fail criteria
• script description
• scenario description
Pass/Fail Criteria
The load test will be considered a success if the Web site will handle the target load of X number of sessions/hr while maintaining the pre-defined average page
response times (if applicable). The page response time will be measured and will represent the elapsed time between a page request and the time the last byte is
received.
Since in most cases the user sessions follow just a few navigation patterns, you will not need hundreds of individual scripts to achieve realism—if you choose
carefully, a dozen scripts will take care of most Web sites.
9. Script Execution
Scripts should be combined to describe a load testing scenario. A basic scenario includes the scripts that will be executed, the percentages in which those scripts
will be executed, and a description of how the load will be ramped up.
By emulating multiple business processes, the load testing can generate a load equivalent to X numbers of virtual users on a Web application. During these load
tests, real-time performance monitors are used to measure the response times for each transaction and check that the correct content is being delivered to users.
In this way, they can determine how well the site is handling the load and identify any bottlenecks.
The execution of the scripts opens X number of HTTP sessions (each simulating a user) with the target Web site and replays the scripts over and over again.
Every few minutes it adds X more simulated users and continues to do so until the web site fails to meet a specific performance goal.
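The add-users-until-the-goal-is-missed loop can be sketched against a toy server model. The capacity figure and degradation curve below are invented stand-ins; a real test would measure an actual system rather than a formula.

```python
def response_time(users):
    """Toy model of a server: flat until saturation, then degrading sharply."""
    capacity = 50
    return 0.2 if users <= capacity else 0.2 * (users / capacity) ** 3

def find_breaking_point(step=10, goal=1.0, limit=1000):
    """Add `step` users at a time until the response-time goal is missed."""
    users = 0
    while users < limit:
        users += step
        if response_time(users) > goal:
            return users
    return None

print(find_breaking_point())  # first step at which the 1-second goal is missed
```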
3. What is LoadRunner?
LoadRunner works by creating virtual users who take the place of real users operating client software, such as sending requests using the HTTP protocol to IIS or
Apache web servers. Requests from many virtual user clients are generated by Load Generators in order to create a load on various servers under test.
These load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on Scenarios invoking
compiled Scripts and associated Run-time Settings.
Scripts are crafted using Mercury's Virtual User script Generator (named "VuGen"), which generates C-language script code to be executed by virtual users by
capturing network traffic between Internet application clients and servers.
With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program,
which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.
Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.
Errors during each run are stored in a database file which can be read by Microsoft Access.
1. Click Start, point to Programs (or Control Panel), Administrative Tools, and choose Terminal Services Configuration.
2. Open the Connections folder in the tree by clicking it once.
3. Right-click RDP-Tcp and select Properties.
4. Click the Sessions tab.
5. Make sure "Override user settings" is checked.
6. Set the Idle session limit to the maximum of 2 days instead of the default 2 hours.
7. Click Apply.
8. Click OK to confirm the message "Configuration changes have been made to the system registry; however, the user session now active on the RDP-Tcp
connection will not be changed."
12. What Component of LoadRunner would you use to play Back the script in multi user mode?
The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of
vusers in a group.
17. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid
errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can
be application-server specific. Here values are replaced by data which are created by these rules. In manual correlation, the value we want to correlate is scanned,
and the Create Correlation function is used to correlate it.
20. What is a function to capture dynamic values in the web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.
23. What are the typical settings for each type of run scenario?
24. When do you disable log in Virtual User Generator, When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically
disabled. Standard Log Option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for
debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. Extended Log Option:
select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a
script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log
options.
27. What are the changes you can make in run-time settings?
The Run Time Settings that we make are:
1. Log on to the load generator as the user the load generator will use.
2. Open Windows Explorer and under Tools select Map a Network Drive and create a drive. It saves time and hassle to have consistent drive letters across
load generators, so some organizations reserve certain drive letters for specific locations.
3. Open the LoadRunner service within Services (accessed from Control Panel, Administrative Tools).
4. Click the "Login" tab.
5. Specify the username and password the load generator service will use. (A dot appears in front of the username if the userid is for the local domain.)
6. Stop and start the service again.
33. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the
execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this
function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the Continue on error option in Run-Time Settings.
34. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction
response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would
occur approximately at the same time.
35. Explain the Configuration of your systems?
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware
settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system
configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to
achieve the load testing objectives.
37. If web server, database and Network are all fine where could be the problem?
The problem could be in the system itself or in the application server or in the code written for the application.
40. What is the difference between Overlay graph and Correlate graph?
Overlay Graph: overlays the contents of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph's values, and the
right Y-axis shows the values of the graph that was merged. Correlate Graph: plots the Y-axes of two graphs against each other. The active graph's Y-axis becomes
the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.
41. How did you plan the Load? What are the Criteria?
Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents,
Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time
of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their
priority levels with regard to the scenario we are deciding.
45. What is the difference between standard log and extended log?
The standard log sends a subset of functions and messages sent during script execution to a log. The subset depends on the Vuser type. The extended log sends
detailed script execution messages to the output log. This is mainly used during debugging when we want information about: parameter substitution, data returned
by the server, and advanced trace.
Analysis Scenario (Bottlenecks): in the Running Vusers graph correlated with the response time graph, you
can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the
average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the
test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running
simultaneously.
Q: For new users, how to use WinRunner to test software applications automatically?
A: The following steps may be of help to you when automating tests:
1. MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful
tests. Also as you will see from the steps below this set of manual tests will form your plan to tackle automation of your application.
2. Once you have a set of manual tests look at them and decide which ones you can automate using your current level of expertise. NOTE that there will
be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
3. Automate the tests selected in step 2 - initially you will use capture/replay following the steps in the manual test, but you will soon see that to produce
meaningful and informative tests you need to add additional code to your test, e.g. use tl_step() to report test results. As this process continues you will
soon see that there are operations that you repeatedly perform in multiple tests - these are then candidates for user-defined functions and compiled modules.
4. Once you have completed step 3 go back to step 2 and you will find that the knowledge you have gained in step 3 will now allow you to select some
more tests that you can do.
If you continue going through this loop you will gradually become more familiar with WR and TSL, in fact you will probably find that eventually you do very little
capture/replay and more straight TSL coding.
Q: How to use WinRunner to check whether a record was updated, deleted, or inserted?
A: Use WinRunner's checkpoint features: Create > Database Checkpoint > Runtime Record Check.
Q: How to use WinRunner to test the login screen?
A: When you enter a wrong id or password, you get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists or not.
3. Playback: enter a wrong id or password; if win_exists returns true, your application is working correctly.
Enter a good id or password; if win_exists returns false, your application is working correctly.
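The steps above can be sketched in TSL; the window and field names here are assumptions for illustration only:

```
# log in with deliberately bad credentials (names are hypothetical)
set_window ("Login", 5);
edit_set ("User ID:", "baduser");
password_edit_set ("Password:", password_encrypt ("badpass"));
button_press ("OK");

# a wrong id/password should pop up an error dialog
if (win_exists ("Login Error", 5) == E_OK)
    tl_step ("login_check", 0, "Error dialog shown for bad credentials.");   # 0 = pass
else
    tl_step ("login_check", 1, "No error dialog for bad credentials.");      # 1 = fail
```

The same pattern, with the condition inverted, verifies that good credentials do not raise the dialog.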
Q: After clicking on the "Login" button, the web application opens other windows. How to check that the page opened or not?
When you are expecting "Window1" to come up after clicking on Login, capture the window in the GUI Map. No two windows in a web-based
application can have the same html_name property, hence this would be the property to check.
First try a simple win_exists("window1") in an IF condition.
You can also address objects directly by their physical description in the script, so that
button_press ( "btn_OK" );
becomes
button_press ( "{class: push_button, label: OK}" );
Q: How to have WinRunner insert yesterday's date into a field in the application?
1) Use get_time to get the PC system time in seconds since 01/01/1970.
2) Subtract 86,400 (the number of seconds in a day) and convert the result back to a date string with time_str.
3) If the format of the returned date is not correct, use string manipulations to get the format you require.
Alternatively, use the ddt_ functions to read the date from an Excel datasheet.
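A minimal TSL sketch of those steps, using the Flight Reservation sample's date field:

```
# yesterday = now minus one day (86400 seconds)
t = get_time () - 86400;
yesterday = time_str (t);    # a full date string; reformat with string
                             # manipulations if the field needs "MM/DD/YY"

set_window ("Flight Reservation", 5);
edit_set ("Date of Flight:", yesterday);
```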
After taking care of different GUIs for different locales, the script also needs some modification. If you script in English and then move to any other
language (say Japanese), all the user inputs will still be in English, so the script will fail because it expects Japanese input when run against the Japanese version. Instead,
assign all the user inputs to variables and use those wherever the script needs them. These variables have to be assigned (perhaps after the driver
script) before you call the script you want to run. You should have a different variable script for each language; depending on the language you want to
run against, call the appropriate variable script file. This will let you run the same script with different locales.
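One way to structure this in TSL; the script paths and variable names are hypothetical:

```
# vars_en / vars_jp are per-locale variable scripts that set public
# globals, e.g.  public g_agent_name = "John";  (English version)
if (locale == "JPN")
    call "c:\\tests\\vars_jp" ();
else
    call "c:\\tests\\vars_en" ();

# the functional script uses the variables, never hard-coded literals
call "c:\\tests\\book_flight" ();
```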
Q: How to use a regular _expression in the physical description of a window in the GUI map?
Several web page windows with similar html names - they all end in or contain "| MyCompany" The GUI Map has saved the following physical description for one
of these windows:
{
class: window,
html_name: "Dynamic Name | MyCompany"
MSW_class: html_frame
}
The "Dynamic Name" part of the html name changes with the different pages. Replace the description with the following (the leading "!" tells WinRunner to treat the string as a regular expression):
{
class: window,
html_name: "!.*| MyCompany"
MSW_class: html_frame
}
Q: How to get the information from the status bar without doing any activity/clicking on the hyperlink?
You can use the statusbar_get_text("Status Bar", 0, text); function -
the text variable then contains the status bar contents.
Alternatively, move the mouse over the link first with
web_cursor_to_link ( link, x, y );
1) From the requirements, find out what the behaviour of the text field in
question should be. Things you need to know are:
what should happen if the field is left blank
what special characters are allowed
is it an alpha, numeric or alphanumeric field, etc.
2) Write manual tests for doing what you want. This will create a structure
to form the basis of your WR tests.
3) Now create your WR scripts. I suggest that you use data-driven tests and
use Excel spreadsheets for your inputs instead of having user input.
For example, the following structure will test whether the text field will
accept special characters:
in this case the data table will contain all the special characters
# body of a user-defined function that loads each GUI map file listed
# in the guiLoad array (the surrounding declaration was cut off; the
# function name below is supplied for completeness)
function loadGuiMaps ()
{
    auto i, rc;
    for (i in guiLoad)
    {
        rc = GUI_load (GUIPATH & guiLoad[i]);
        if ((rc != 0) && (rc != E_OK))   # check the GUI_load result
        {
            return ("Failed to load " & guiLoad[i]);
        }
    }
    return ("Pass");
}
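The data-driven approach from step 3 can be sketched with the ddt_ functions; the table path, window, field, and column names are assumptions:

```
# walk an Excel data table of special characters and try each one
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open data table.");
ddt_get_row_count (table, rows);

for (i = 1; i <= rows; i++)
{
    ddt_set_row (table, i);
    set_window ("MyForm", 5);                          # hypothetical window
    edit_set ("Name:", ddt_val (table, "special_char"));
    obj_check_gui ("Name:", "list1.ckl", "gui1", 1);   # verify field state
}
ddt_close (table);
```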
Q: Read and write to the registry using the Windows API functions
function space (isize)
{
    auto s;
    auto i;
    for (i = 1; i <= isize; i++)
    {
        s = s & " ";
    }
    return (s);
}

load_dll ("c:\\windows\\system32\\ADVAPI32.DLL");
extern long RegDeleteKey ( long, string<1024> );
extern long RegCloseKey ( long );
extern long RegQueryValueExA ( long, string, long, long, inout string<1024>, inout long );
extern long RegOpenKeyExA ( long, string, long, long, inout long );
extern long RegSetValueExA ( long, string, long, long, string, long );

# hKey must first be obtained by opening the key with RegOpenKeyExA
# (that call was omitted in the original answer)
cbData = 256;
tmp = space (256);
KeyType = 0;
ret = RegQueryValueExA (hKey, "Last language", 0, KeyType, tmp, cbData);
# verifies you changed the key
pause (tmp);
# release the handle when done: RegCloseKey (hKey);
Q: User-defined function that would write to the print log as well as write to a file
function writeLog (in strMessage)
{
    auto strFile = "C:\\FilePath\\...";   # substitute your own log file path
    file_open (strFile, FO_MODE_APPEND);
    file_printf (strFile, "%s\n", strMessage);
    printf (strMessage);
    file_close (strFile);
}
Q: How to do text matching?
You could try embedding it in an if statement. If/when it fails, use a tl_step statement to report the result and then call texit to leave the test. Another idea would
be to use win_get_text or web_frame_get_text to capture the text of the object and then do a comparison (using the match function) to determine its existence.
Q: The MSW_id value sometimes changes, rendering the GUI map useless
MSW_id values will continue to change as long as your developers are modifying your application. Having dealt with this, I determined that each MSW_id shifted by the
same amount, so I was able to modify the entries in the GUI map rather easily and continue testing.
Alternatively, instead of using the MSW_id, use the "location" property. If you use the GUI Spy it will give you every detail it can; then add or remove what you don't want.
Q: Having the DB checkpoint, it is able to show the current values in the form but not the values saved in the table
This looks like it is happening because the data is written to the database after your checkpoint, so you have to do a runtime record check: Create > Database Checkpoint > Runtime Record Check. You may also have to perform some customization in TSL if the data displayed in the application is in a different format than the data in the database. For example, converting radio buttons to database-readable form involves the following:
# Flight Reservation: bus and econ hold the states of the Business and
# Economy radio buttons (read earlier, e.g. with button_get_state)
set_window ("Flight Reservation", 2);
# edit_set ("Date of Flight:", "06/08/02");
if (bus)
    service = "2";
if (econ)
    service = "3";
set_window ("Untitled - Notepad", 3);
edit_set ("Report Area", service);
db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num);
Increase Capacity Testing
When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP
pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that simulates the same items on the ASP page
and have it send the information to a test bed with a process that completes just a small data output. By doing this, your processor still stresses the
system but does not take up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to
work. Dividing the requests per second by the total number of users or threads gives the number of transactions per second, and tells you at what point the
server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests
per second; with 100 users it is 10 requests per second, with 200 users it is 15 requests per second, and eventually with 300 users it is 20 requests per second.
Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:
05/50 = 0.1
10/100 = 0.1
15/200 = 0.075
20/300 = 0.073
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as
long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some
indication of how much leeway you have to handle expected peaks.
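The ratio calculation above can be sketched as a small TSL loop (numbers taken from the example):

```
# requests per second measured at each user load
users[1] = 50;   rps[1] = 5;
users[2] = 100;  rps[2] = 10;
users[3] = 200;  rps[3] = 15;
users[4] = 300;  rps[4] = 20;

for (i = 1; i <= 4; i++)
{
    # requests per second per user: falling values mean falling efficiency
    printf ("%d users: %f requests/sec/user", users[i], rps[i] / users[i]);
}
```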
Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?
Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?
Speed testing
Is the Web-enabled application taking too long to respond?
Boundary Test
Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to
check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary
results from non-extreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an
intermediate variable involved in the processing; if so, it is useful to determine how to drive it through extremes and special conditions such as zero or an
overflow condition.
Regression testing
Did a new build break an existing function? Testing is repeated after changes to manage the risks of product enhancement.
A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been
fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in
subsequent program versions.
Regression testing: after every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and
existing functions are all working correctly.
Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that
were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build.
Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be
automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases
that were originally reported:
• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to
report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding
problems.
Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed:
either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer
reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or in the final phase of testing (depending on the organizational procedure), a full regression test cycle
should be run to confirm that the features proven correctly functional are still working as expected.
Database Testing
Search results are verified in the system test environment using both black box and white box techniques.
Q:How do you identify which files are loaded in the GUI map?
The GUI Map Editor has a drop down GUI File displaying all the GUI Map files loaded into the memory.
Q:How do you modify the logical name or the physical description of the objects in GUI map?
You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.
1. Choose Tools - GUI Map Editor to open the GUI Map Editor.
2. Choose View - GUI Files.
3. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
4. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
5. In one file, select the objects you want to copy or move. Use the Shift key and or Control key to select multiple objects. To select all objects in a GUI
map file, choose Edit - Select All.
6. Click Copy or Move.
7. To restore the GUI Map Editor to its original size, click Collapse.
Q:How do you select multiple objects during merging the files?
Use the Shift key and or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.
1. Logical name displays only objects with the specified logical name.
2. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
3. Class displays only objects of the specified class, such as all the push buttons.
1. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to
provide a unique identification of the object.
2. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner.
These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_
statements in the test script.
3. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses
to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session.
To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
Q:What are the virtual objects and how do you learn them?
• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click
statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and
run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
To define a virtual object using the Virtual Object wizard:
1. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
2. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible rows that are
displayed in the window; for a table class, select the number of visible rows and columns. Click Next.
3. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments
to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If
the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
4. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object
contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests
virtual_object, virtual_push_button, virtual_list, etc.
5. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the
same name before confirming your choice. Click Next.
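Once defined, the virtual object is used like any standard object in the script; the names below are the wizard's default suggestion and a hypothetical window:

```
# click the bitmap as if it were a real push button
set_window ("Main Window", 5);        # hypothetical window name
button_press ("virtual_push_button");
```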
1. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
2. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version.
3. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
4. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.
Q:What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?
You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To
create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
button_check_info
scroll_check_info
edit_check_info
static_check_info
list_check_info
win_check_info
obj_check_info
Syntax: button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
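For example, single-property checks against the Flight Reservation sample might look like this (the expected values are assumptions for illustration):

```
set_window ("Flight Reservation", 5);
# verify the Insert Order button is disabled before any data is entered
button_check_info ("Insert Order", "enabled", 0);
# verify the Date of Flight edit field is empty
edit_check_info ("Date of Flight:", "value", "");
```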
Q:What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?
• You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or
you can specify which properties to check.
• Creating a GUI Checkpoint using the Default Checks
• You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a
GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
• To create a GUI checkpoint using default checks:
1. Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If
you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse
movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The
WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results
folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui (for an object) or
win_check_gui (for a window) statement. Syntax: obj_check_gui ( object, checklist, expected_results_file, time );
win_check_gui ( window, checklist, expected_results_file, time );
• Creating a GUI Checkpoint by Specifying which Properties to Check
• You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that
it is in focus, instead of enabled.
• To create a GUI checkpoint by specifying which properties to check:
• Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are
recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that
you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the
mouse pointer becomes a pointing hand, and a help window opens on the screen.
• Double-click the object or window. The Check GUI dialog box opens.
• Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
• Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in
the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click
the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the
Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a
default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static
text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results
folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a
win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object,
checklist, expected_results_file, time );
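A recorded object checkpoint typically looks like this; the checklist and result file names follow WinRunner's default naming and are shown for illustration:

```
set_window ("Flight Reservation", 5);
# compare the selected properties of the Order No. field against the
# expected results captured at record time
obj_check_gui ("Order No.:", "list1.ckl", "gui1", 1);
```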
Q:What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?
To create a GUI checkpoint for two or more objects:
• Choose Create - GUI Checkpoint - For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in
Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint
dialog box opens.
• Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
• To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
• The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.
• Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box
reopens.
• The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name
in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the
Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify
Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify
arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard
objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain
properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
• To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI
objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.
Q:What information is contained in the checklist file and in which file expected results are stored?
The checklist file contains information about the objects and the properties of each object we are verifying.
The expected results are stored in the gui*.chk file in the test's exp folder.
Q:What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?
• You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check.
WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you
run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a
mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and
difference), you can identify the nature of the discrepancy.
• When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a
checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
• Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN
AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test,
you can also use the Analog function check_window to check a bitmap.
• To capture a window or object as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar.
Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is
minimized, the mouse pointer becomes a pointing hand, and a help window opens.
2. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement
in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( window, bitmap, time );
3. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );
4. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:
win_check_bitmap ("Flight Reservation", "Img2", 1);
5. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);
Q:What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?
• You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single
window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper
left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup
window), its coordinates are relative to the entire screen (the root window).
• To capture an area of the screen as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are
recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer
becomes a crosshairs pointer, and a help window opens.
2. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the
mouse button.
3. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your
script.
4. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width,
height );
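For example, a checkpoint on a 100x50 pixel area near the top-left corner of the window might be recorded as (coordinates and image name are illustrative):

```
# compare a 100x50 area starting at (10, 10) against the stored bitmap
win_check_bitmap ("Flight Reservation", "Img3", 1, 10, 10, 100, 50);
```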
Q:What do you verify with the database checkpoint default and what command it generates, explain syntax?
• By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in
your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your
application.
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result
set. The result set is set of values retrieved from the results of the query.
• You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the
corresponding values in the database. If the comparison does not meet the success criteria you
• specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching
records were found, exactly one matching record was found, or where no matching records are found.
• You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected
values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database
checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check ( checklist_file, expected_results_file );
• You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the
current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record
Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );
ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured
during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification
wizard.
SuccessConditions ----- Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber --- An out parameter returning the number of records in the database.
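As a hedged sketch (the checklist file name "list1.cvr" is hypothetical; the wizard generates the real one), a statement inserted by the wizard might look like:

```
# Hypothetical checklist file created by the Runtime Record Checkpoint wizard;
# record_num receives the number of matching records found in the database.
db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
```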
Q:How do you handle dynamically changing area of the window in the bitmap checkpoints?
The "Difference between bitmaps" option in the Run tab of the General Options dialog box defines the minimum number of pixels that constitutes a bitmap mismatch, so small changes in a dynamic area can be tolerated.
Q:What do you verify with the database check point custom and what command it generates, explain syntax?
• When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a
result set.
• You can create a custom check on a database in order to:
• check the contents of part or the entire result set
• edit the expected results of the contents of the result set
• count the rows in the result set
• count the columns in the result set
• You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.
Q:What do you verify with the sync point for object/window property and what command it generates, explain syntax?
• Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your
test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
• You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
• You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point.
These functions have the following syntax:
obj_exists ( object [, time ] ); win_exists ( window [, time ] );
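A minimal usage sketch (the window name "Flight Reservation" is an assumption for illustration):

```
# Wait up to 10 seconds for the window to appear before operating on it.
if (win_exists("Flight Reservation", 10) == E_OK)
    set_window("Flight Reservation", 10);
```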
Q:What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured
earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
Q:What is the purpose of obligatory and optional properties of the objects?
For each class, WinRunner learns a set of default properties. Each default property is classified as obligatory or optional.
Q:What is the purpose of location indicator and index indicator in GUI map configuration?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of
selectors are available:
A location selector uses the spatial position of objects.
The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same
description.
An index selector uses a unique number to identify the object in a window.
The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the
same description may change within a window.
Q:What is the name of custom class in WinRunner and what methods it applies on the custom objects?
WinRunner learns custom class objects under the generic object class. WinRunner records operations on custom objects using obj_ statements.
Q:In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of
selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.
Q:What do you verify with the sync point for screen area and what command it generates, explain syntax?
For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution.
Syntax:
obj_wait_bitmap(object, image, time, x, y, width, height);
Q:How do you edit checklist file and when do you need to edit the checklist file?
WinRunner has an edit checklist file option under the Create menu. Select Edit GUI Checklist to modify a GUI checklist file and Edit Database Checklist to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens up the window to edit the properties of the objects.
Q:How do you edit the expected value of an object?
We can modify the expected value of the object by executing the script in Update mode. We can also manually edit the gui*.chk file, which contains the expected values and resides under the exp folder, to change the values.
• You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an
object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming
elements to your test scripts to verify the contents of the text.
• You can use a text checkpoint to:
• Read text from a GUI object or window in your application, using obj_get_text and win_get_text
• Search for text in an object or window, using win_find_text and obj_find_text
• Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
• Click on text in an object or window, using obj_click_on_text and win_click_on_text
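As a hedged illustration of the first of these functions (the object name and expected text are hypothetical):

```
# Read the text of a hypothetical edit field and verify its contents.
obj_get_text("Order No:", text);
if (text != "86")
    tl_step("check_order", 1, "Unexpected order number: " & text);  # 1 = fail
```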
Q:Which TSL functions you will use for Searching text on the window
find_text ( string, out_coord_array, search_area [, string_def ] );
win_find_text ( window, string, result_array [, search_area [, string_def ] ] );
• If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
• Choose Tools - DataDriver Wizard.
• If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you
want to turn the entire test into a data-driven test, click Next.
• The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the Browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.
• In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, table.
• At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
• Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows in the table. Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. You can parameterize in one of two ways:
• Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.
• Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.
• The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The
Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different
argument to replace.
Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test. Adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.
• The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
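The loop that the wizard generates can be sketched as follows (the table name "default.xls", the "Name" column, and the edit-field name are hypothetical):

```
# Hedged sketch of a wizard-generated data-driven loop.
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open data table.");
ddt_get_row_count(table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row(table, table_Row);
    # Parameterized statements read the current row with ddt_val:
    edit_set("Name:", ddt_val(table, "Name"));
}
ddt_close(table);
```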
Q:What is the use of putting call and call_close statements in the test script?
You can use two types of call statements to invoke one test from another:
A call statement invokes a test from within another test.
A call_close statement invokes a test from within a script and closes the test when the test is completed.
Q:What is the use of treturn and texit statements in the test script?
The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.
The syntax is: treturn [( expression )]; texit [( expression )];
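A minimal sketch (the test path and the return value are hypothetical):

```
# In the called test: stop and pass a value back to the caller.
treturn("fail");

# In the calling test: invoke the test and inspect the value it returned.
result = call "C:\\tests\\login_test" ();
if (result == "fail")
    tl_step("login", 1, "Called test reported failure.");  # 1 = fail
```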
Q:Have you created test scripts and what is contained in the test scripts?
Yes, I have created test scripts. They contain statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.
Q:If the object does not have a name then what will be the logical name?
If the object does not have a name then the logical name could be the attached text.
Q:What is the difference between the GUI map and GUI map files?
The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map File: a single GUI map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI map file for each test created. A GUI map file contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.
Q:Answers to the following questions are coming up soon. If you know the answers, please email them to us!
How do you call a function from external libraries (dll).
What is the purpose of load_dll?
How do you load and unload external libraries?
How do you declare external functions in TSL?
How do you call windows APIs, explain with an example?
What is the purpose of step, step into, step out, step to cursor commands for debugging your script?
How do you update your expected results?
How do you run your script with multiple sets of expected results?
How do you view and evaluate test results for various check points?
How do you view the results of file comparison?
What is the purpose of Wdiff utility?
What are batch tests and how do you create and run batch tests ?
How do you store and view batch test results?
How do you execute your tests from windows run command?
Explain different command line options?
What TSL function you will use to pause your script?
What is the purpose of setting a break point?
What is a watch list?
During debugging how do you monitor the value of the variables?
Describe the process of planning a test in WinRunner?
How do you record a new script?
Can you e-mail a WinRunner script?
How can a person run a previously saved WinRunner script?
How can you synchronize WinRunner scripts?
What is a GUI map? How does it work?
How can you verify application behavior?
Explain in detail how WinRunner checkpoints work. What are standard checkpoints?
What is a data-driven test? What are the benefits of a data-driven test?
How do you modify logical names on GUI map?
Why would you use batch testing under WinRunner? Explain advantages and disadvantages. Give an example of one project where you used batch testing.
How do you pass parameter values between the tests?
Have you used WinRunner Recovery Manager?
What is an exception handler? Why would you define one in WinRunner?
We’re testing an application that returns a graphical object (i.e., a map) as a result of the user query. Explain how you’d teach WinRunner to recognize and
analyze the returned object.
What is a TSL? Write a simple script in TSL.
1. What is SilkTest?
SilkTest is a software testing automation tool developed by Segue Software, Inc.
2. What is the Segue Testing Methodology?
Segue testing methodology is a six-phase testing process:
1. Plan - Determine the testing strategy and define specific test requirements.
2. Capture - Classify the GUI objects in your application and build a framework for running your tests.
3. Create - Create automated, reusable tests. Use recording and/ or programming to build test scripts written in Segue's 4Test language.
4. Run - Select specific tests and execute them against the AUT.
5. Report - Analyze test results and generate defect reports.
6. Track - Track defects in the AUT and perform regression testing.
3. What is AUT?
AUT stands for Application Under Test.
4. What is SilkTest Host?
SilkTest Host is a SilkTest component that manages and executes test scripts. SilkTest Host usually runs on a separate machine different than the machine where
AUT (Application Under Test) is running.
5. What is SilkTest Agent?
SilkTest Agent is a SilkTest component that receives testing commands from the SilkTest Host and interacts with AUT (Application Under Test) directly. SilkTest
Agent usually runs on the same machine where AUT is running.
6. What is 4Test?
4Test is a test scripting language used by SilkTest to compose test scripts to perform automated tests. 4Test is an object-oriented fourth-generation language. It
consists of 3 sets of functionalities:
1. A robust library of object-oriented classes and methods that specify how a testcase can interact with an application’s GUI objects.
2. A set of statements, operators and data types that you use to introduce structure and logic to a recorded testcase.
3. A library of built-in functions for performing common support tasks.
1. Run SilkTest.
2. Select Basic Workflow bar.
3. Click Open Project on the Workflow bar.
4. Select New Project.
5. Double click Create Project icon in the New Project dialog box
6. On the Create Project dialog box, enter your project name and your project description.
7. Click OK.
8. SilkTest will create a new subdirectory under SilkTest project directory, and save all files related to the new project under that subdirectory.
1. Run SilkTest.
2. Select File menu.
3. Select Open Project.
4. Select the project.
5. Click OK.
6. SilkTest will open the selected project.
1. Move the cursor to the next line below the top-level group description.
2. Click Outline/Move Right.
3. The text line will be indented to the right to become a sub-group description.
16. What are testplan attributes?
Testplan attributes are user-defined characteristics associated with test group descriptions and/or test descriptions. You can search, identify, and/or report test cases based on the values of the different attributes.
17. What are the default testplan attributes?
SilkTest offers you 3 predefined default attributes:
1. Category: The type of testcase or group of testcases. For example, you can use this attribute to categorize your test groups as "Boundary value tests", "Navigation tests", etc.
2. Component: The name of the application modules to be tested.
3. Developer: The name of the QA engineer assigned to develop the testcase or group of testcases.
1. Make sure your Web browser is active and showing your Web application home page. Do not minimize this Web page window.
2. Make sure your test project is open.
3. Click File/New menu. The New dialog box shows up.
4. Select the Test Frame radio button.
5. Click OK. The New Test Frame dialog box shows up with a list all active Web applications.
6. Select your Web application.
7. Enter a test frame name. For example: HomeFrame.inc.
8. Review the window name. It should be the HTML title of your Web application. You can rename it, if needed.
9. Click OK to close the New Test Frame dialog box.
10. Click File/Save menu.
1. Identify the Web browser window where the Web application is running. For example, a Web browser window can be identified as
"Browser.BrowserChild("Yahoo Home Page")". Another Web browser window can be identified as "Browser.BrowserChild("Google Home Page")".
2. Identify the Web UI object based on the HTML element that represents the UI object. For example, an image in a Web page can be identified as
"HtmlImage("Yahoo Logo")"; A hyperlink in a Web page can be identified as "HtmlLink("Site Map")";
The full identification of a Web application UI object is the concatenation of the browser window identification and the HTML element identification. For example, the
Yahoo logo image is identified as: Browser.BrowserChild("Yahoo Home Page").HtmlImage("Yahoo Logo"). The site map link is identified as:
Browser.BrowserChild("Google Home Page").HtmlLink("Site Map").
26. What is the syntax of UI object identifier used by DOM extension?
The DOM browser extension uses the following syntax for Web UI objects:
Browser.BrowserChild("page_title").html_class("object_tag")
1. "page_title" is the title of the Web page, defined by the HTML <TITLE> tag.
2. "object_tag" is the label of the HTML element. How an HTML element is labeled depends on the type of HTML element.
1. Make sure your Web browser is active and showing another page of your Web application.
2. Make sure SilkTest is running.
3. Click File/Open menu.
4. Select your test frame file. For example: HomeFrame.inc.
5. Click OK to open the test frame.
6. Click Record/Window Declarations menu. The Record Window Declarations dialog box shows up.
7. Click your Web application window. Web page objects are recorded in the Record Window Declarations dialog box.
8. Press Ctrl+Alt to pause the recording.
9. Click "Paste to Editor" button. All recorded objects will be inserted into the test frame.
10. Repeat this for other Web pages, if needed.
1. Run SilkTest.
2. Open Internet Explorer (IE).
3. Enter the URL of the Web application.
4. Leave the IE window with the Web application. Don't minimize the IE window.
5. Go back to the SilkTest window.
6. Select Basic Workflow bar.
7. Click Enable Extensions on the Workflow bar.
8. The Enable Extensions dialog will show up. Your Web application running in the IE window will be listed in the dialog box.
9. Select your Web application and click Select.
10. The Extension Settings dialog will show up. Click OK to enable the DOM browser extension.
1. Run SilkTest.
2. Click Options/Runtime menu. The Runtime Options dialog box shows up.
3. Edit the Use Files field to include your test frame file and the explorer.inc file. For example: ...\HomeFrame.inc,extend\explorer.inc.
4. Make sure IE 5.x DOM is selected.
5. Click OK to close the Runtime Options dialog box.
6. Open your test project.
7. Click Record/Testcase menu. The Record Testcase dialog box shows up.
8. Name your test case. For example: LoginTest.
9. Select DefaultBaseState in the Application State dropdown list.
10. Click the Start Recording button. The Record Testcase dialog closes. Your Web application will be automatically started by SilkTest, based on the information in the test frame file. The SilkTest Editor window closes. The Record Status dialog box shows up.
11. Continue to use your Web application. SilkTest records everything you do in your application.
12. Click the "Done" button on the Record Status dialog box to stop recording. The Record Status dialog box closes. The Record Testcase dialog box shows up again.
13. Click Paste to Editor. SilkTest will insert the recorded activities as 4Test statements into a script file. The Record Testcase dialog closes.
14. Click File/Save menu to save the script file. You can enter a script file name. For example, LoginTest.t.
1. Result summary: The name of the script file; the name of the testcase; the machine on which the tests ran; the starting time and the total elapsed time; the number and percentage of testcases that passed and failed; the total number of errors and warnings.
2. Result detail: A list of errors and detailed information.
42. How to link an error in the result file to the script file?
1. DB_Connect: Opens a database connection linking the data through the specified ODBC DSN name. DB_Connect returns a connection handle which can be used in other DBTester functions. SQL statements can be submitted to the database. For example: con = DB_Connect("dsn=dsn_name")
2. DB_Disconnect: Closes the database connection represented by the specified connection handle. All resources related to this connection are also released. For example: DB_Disconnect(con)
3. DB_ExecuteSql: Sends the specified SQL statement to the specified database connection for execution. DB_ExecuteSql returns a query result handler
which can be used by the DB_FetchNext function. For example: res = DB_ExecuteSql(con, "SELECT * FROM ...")
4. DB_FetchNext: Retrieves the next row from the specified query result handler. For example: DB_FetchNext(res, col1, col2, col3, ...)
5. DB_FetchPrevious: Retrieves the previous row from the specified query result handler.
6. DB_FinishSql: Closes the specified query result handler. For example: DB_FinishSql(res)
• Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification
points.
• Perform full performance testing. Use Robot and TestManager together to record and play back sessions that help you determine whether a multi-client
system is performing within user-defined standards under varying loads.
• Create and edit scripts using the SQABasic and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for
powerful integrated programming during script development. (VU scripting is used with sessions in performance testing.)
• Test applications developed with IDEs such as Java, HTML, Visual Basic, Oracle Forms, Delphi, and PowerBuilder. You can test objects even if they are
not visible in the application’s interface.
• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Rational Quantify, and Rational
PureCoverage. You can play back scripts under a diagnostic tool and see the results in the log.
What is datapool?
A datapool is a source of variable test data that scripts can draw from during playback.
How to create a datapool?
When creating a datapool, you specify the kinds of data (called data types) that the script will send for example, customer names, addresses, and unique order
numbers or product names. When you finish defining the datapool, TestManager automatically generates the number of rows of data that you specify.
How to analyze results in the log and Comparators?
You use TestManager to view the logs that are created when you run scripts and schedules.
Use the log to:
--View the results of running a script, including verification point failures, procedural failures, aborts, and any additional playback information. Reviewing the results
in the log reveals whether each script and verification point passed or failed.
Use the Comparators to:
--Analyze the results of verification points to determine why a script may have failed. Robot includes four Comparators:
.Object Properties Comparator
.Text Comparator
.Grid Comparator
.Image Comparator
Rational SiteCheck
Use Rational SiteCheck to test the structural integrity of your intranet or World Wide Web site. SiteCheck is designed to help you view, track, and maintain your rapidly changing site. Use SiteCheck to:
• Visualize the structure of your Web site and display the relationship between each page and the rest of the site.
• Identify and analyze Web pages with active content, such as forms, Java, JavaScript, ActiveX, and Visual Basic Script (VBScript).
• Filter information so that you can inspect specific file types and defects, including broken links.
• Examine and edit the source code for any Web page, with color-coded text.
• Update and repair files using the integrated editor, or configure your favorite HTML editor to perform modifications to HTML files.
• Perform comprehensive testing of secure Web sites. SiteCheck provides Secure Socket Layer (SSL) support, proxy server configuration, and support for
multiple password realms.
1. Case-Sensitive - Verifies that the text captured during recording exactly matches the captured text during playback.
2. Case-Insensitive - Verifies that the text captured during recording matches the captured text during playback in content but not necessarily in case.
3. Find Sub String Case-Sensitive - Verifies that the text captured during recording exactly matches a subset of the captured text during playback.
4. Find Sub String Case-Insensitive - Verifies that the text captured during recording matches a subset of the captured text during playback in content but
not necessarily in case.
5. Numeric Equivalence - Verifies that the values of the data captured during recording exactly match the values captured during playback.
6. Numeric Range - Verifies that the values of the data captured during recording fall within a specified range during playback. You specify the From and
To values for the numeric range. During playback, the verification point verifies that the numbers are within that range.
7. User-Defined and Apply a User-Defined DLL test function - Passes text to a function within a dynamic-link library (DLL) so that you can run your own
custom tests. You specify the path for the directory and name of the custom DLL and the function. The verification point passes or fails based on the
result that it receives back from the DLL function.
8. Verify that selected field is blank - Verifies that the selected field contains no text or numeric data. If the field is blank, the verification point passes.
1. Right-click the verification point name in the Asset (left) pane and click Rename.
2. Type the new name and press ENTER.
3. Click the top of the script in the Script (right) pane.
4. Click Edit > Replace.
5. Type the old name in the Find what box. Type the new name in the Replace with box.
6. Click Replace All.
1. Right-click the verification point in the Asset (left) pane and click Copy.
2. In the same script or in a different script (in the same project), right-click Verification Points in the Asset pane.
3. Click Paste to paste a copy of the verification point and its associated files into the project. If a verification point with that name already exists, Robot
appends a unique number to the name. You can also copy and paste by dragging the verification point to Verification Points in the Asset pane.
4. Click the top of the Script (right) pane of the original script.
5. Click Edit > Find and locate the line with the verification point name that you just copied.
6. Select the entire line, which starts with Result=.
7. Click Edit > Copy.
8. Return to the script that you used in step 2. Click the location in the script where you want to paste the line. Click Edit > Paste.
9. Change the name of the verification point to match the name in the Asset pane.
1. Right-click the verification point name in the Asset (left) pane and click Delete.
2. Click the top of the script in the Script (right) pane.
3. Click Edit > Find.
4. Type the name of the deleted verification point in the Find what box.
5. Click Find Next.
6. Delete the entire line, which starts with Result=.
7. Repeat steps 5 and 6 until you have deleted all references.
What's TestManager
Rational TestManager is the one place to manage all testing activities--planning, design, implementation, execution, and analysis. TestManager ties testing with
the rest of the development effort, joining your testing assets and tools to provide a single point from which to understand the exact state of your project.
Test Manager supports five testing activities:
1. Plan Test.
2. Design Test.
3. Implement Test.
4. Execute Tests.
5. Evaluate Tests.
Test plan
TestManager is used to define test requirements, define test scripts, and link these requirements and scripts to your test plans (written in Word).
Test plan - A test plan defines a testing project so it can be properly measured and controlled. The test plan usually describes the features and functions you are
going to test and how you are going to test them. Test plans also discuss resource requirement and project schedules.
Test Requirements
Test requirements are defined in the Requirement Hierarchy in TestManager. The requirements hierarchy is a graphical outline of requirements and nested child
requirements.
Requirements are stored in the RequisitePro database. RequisitePro is a tool that helps project teams control the development process by managing and tracking changes to requirements.
TestManager includes a baseline version of RequisitePro. The full version, with more features and customization, is available in Rational Suite TestStudio.
TestManager's wizard
TestManager has a wizard that you can use to copy or import test scripts and other test assets (Datapools) from one project to another.
How does TestManager manage test logs?
When a Robot script runs, its output creates a test log. Test logs are now managed in the TestManager application. Rational now allows you to organize your logs into any type of format you need.
You can create a directory structure that suits your needs, create build names for each build version (or development), and create folders in which to put the builds.
What's TestFactory?
Rational TestFactory is a component-based testing tool that automatically generates TestFactory scripts according to the application’s navigational structure.
TestFactory is integrated with Robot and its components to provide a full array of tools for team testing under Windows NT 4.0, Windows 2000, Windows 98, and
Windows 95.
With TestFactory, you can:
--Automatically create and maintain a detailed map of the application-under-test.
--Automatically generate both scripts that provide extensive product coverage and scripts that encounter defects, without recording.
--Track executed and unexecuted source code, and report its detailed findings.
--Shorten the product testing cycle by minimizing the time invested in writing navigation code.
--Play back Robot scripts in TestFactory to see extended code coverage information and to create regression suites; play back TestFactory scripts in Robot to
debug them.
What's ClearQuest?
Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With
ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and
documentation modifications.
With Robot and ClearQuest, you can:
-- Submit defects directly from the TestManager log or SiteCheck.
-- Modify and track defects and change requests.
-- Analyze project progress by running queries, charts, and reports.
Rational diagnostic tools
Use the Rational diagnostic tools to perform runtime error checking, profile application performance, and analyze code coverage during playback of a Robot script.
• Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components
of an application, including third-party libraries, ensuring that code is reliable.
• Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and
eliminate performance bottlenecks within an application.
• Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been
exercised, preventing untested code from reaching the end-user.
How to change the order of the object recognition methods for an object type?
Important Notes:
. Changes to the recognition method order affect scripts that are recorded after the change. They do not affect the playback of scripts that have already been
recorded.
. Changes to the recognition method order are stored in the project. For example, if you change the order for the CheckBox object, the new order is stored in the
project and affects all users of that project.
. Changes to the order for an object affect only the currently selected preference. For example, if you change the order for the CheckBox object in one preference, the order is not changed in the C++ preference.
How to create a new object order preference?
1. In an ASCII editor, create an empty text file with the extension .ord.
2. Save the file in the Dat folder of the project.
3. Click Tools > GUI Record Options.
4. Click the Object Recognition Order tab.
5. From the Object order preferences list, select the name of the file you created.
6. Change the method order to customize your preferences.
How to map a new object class name to an object type?
1. Identify the class name of the window that corresponds to the object. You can use the Spy++ utility in Visual C++ to identify the class name. You can also use the Robot Inspector tool by clicking Tools > Inspector.
2. In Robot, click Tools > General Options, and then click the Object Mapping tab.
3. From the Object type list, select the standard object type to be associated with the new object class name. Robot displays the class names already
available for that object type in the Object classes list box.
4. Click Add.
5. Type the class name you identified in step 1 and click OK.
6. Click OK.
How to modify or delete an object class mapping?
1. Click Tools > General Options, and then click the Object Mapping tab.
2. From the Object type list, select the standard object type that is associated with the object class name. Robot displays the class names already available
for that object type in the Object classes list.
3. From the Object classes list, select the name to modify or delete.
4. Do one of the following:
. To modify the class name, click Modify. Change the name and click OK.
. To delete the object class mapping, click Delete. Click OK at the confirmation prompt.
5. Click OK.
How to insert new actions into an existing script?
1. If necessary, open the script by clicking File > Open > Script.
2. If you are currently debugging, click Debug > Stop.
3. In the Script window, click where you want to insert the new actions. Make sure that the application-under-test is in the appropriate state to begin
recording at the text cursor position.
4. Click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options
dialog box.
5. Continue working with the application-under-test as you normally do when recording a script.
How to insert a feature into an existing script?
1. If necessary, open the script by clicking File > Open > Script.
2. If you are currently debugging, click Debug > Stop.
3. In the Script window, click where you want to insert the feature. Make sure that the application-under-test is in the appropriate state to insert the feature
at the text cursor position.
4. Do one of the following:
- To add the feature without going into recording mode, click the Display GUI Insert Toolbar button on the Standard toolbar. The Robot Script window remains open.
- To start recording and add the feature, click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options dialog box. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.
5. Click the appropriate button on the GUI Insert toolbar.
6. Continue adding the feature as usual.
The following steps outline the general process for recording a script.
1. Set the session recording options. Recording options tell Robot how to record and generate scripts. You set recording options to specify:
- The type of recording you want to perform, such as API, network, or proxy. The recording method you choose determines some of the other recording options you need to set.
- Script generation options, such as specifying whether you want the script to include datapool commands or think time delays, and whether you want to
filter out protocols to control the size of the script.
2. Start the recording session. With the API recording method, you must start recording first, at which point Robot prompts you for the name of the client.
With the other recording methods, network and proxy, you can start recording before or after you start the client.
3. Start the client application.
4. Record the transactions. While you are recording the transactions, you can split the session into multiple scripts, each representing a logical unit of work.
5. Optionally, insert timers, blocks, comments, and other features into the script during recording.
6. Close the client application.
7. Stop recording.
8. Robot automatically generates scripts.
Starting Applications
1. Do one of the following:
. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Do one of the following:
. To start most applications, click the Start Application button. You can specify that you want the application to start under Rational Purify, Quantify, or
PureCoverage during playback.
. To start a Java application that you want to start under Quantify or PureCoverage during playback, click the Start Java Application button.
. To start an HTML application, click the Start Browser button.
3. Fill in the dialog box and click OK.
For information about an item in the dialog box, click the question mark in the upper-right corner and then click the item.
4. Continue recording or editing the script.
How to insert a call to a previously recorded script while recording or editing?
How to set the network recording method?
1. Click Tools > Session Record Options.
2. Click the Method tab, and click Network recording.
3. Optionally, click the Method:Network tab, and select the client/server pair that you will record. The default is to record all of the network traffic to and from your computer.
4. Optionally, click the Generator Filtering tab to specify the network protocols to include in the script that Robot generates.
How to set the proxy recording method?
1. Click Tools > Session Record Options.
2. Click the Method tab, and click Proxy recording.
3. Click the Method:Proxy tab to:
. Create a proxy computer.
. Identify client/server pairs that will communicate through the proxy.
How to create a proxy computer?
1. Click Tools > Session Record Options.
2. Click the Method tab and make sure that Proxy recording is selected.
3. Click the Method:Proxy tab.
4. Click Proxy Admin.
5. In Proxy:Port, specify the proxy computer's port number. Note that Robot has already detected and specified the proxy computer's name. You can specify any available port number. Avoid the well-known ports (those below 1024). If you specify a port number that is unavailable, Robot prompts you for a new port number.
6. In the Server:Port list, select a server involved in the test.
7. Click Create Proxy.
How to associate each client in the test with the server it will communicate with?
1. Click Tools > Session Record Options.
2. Click the Method tab and make sure that Proxy recording is selected.
3. Click the Method:Proxy tab.
4. Select a client in the Client[:Port] list. The client port is optional.
5. Select the client's server in the Server:Port list. The server port is required.
6. Click Add.
How to expand the conditions under which a script will play back successfully?
1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select HTTP at the Protocol section, and then select one or more of the following:
a. Allow partial responses
Select this option to enable a script to play back successfully if the HTTP server responds with partial data during playback. This generates a script that sets the TSS environment variable Http_control to HTTP_PARTIAL_OK. Leaving this box cleared enforces strict interpretation of recorded responses during playback.
b. Allow cache responses
Select this option to enable a script to play back successfully if a response is cached differently during playback. This generates a script that sets the TSS environment variable Http_control to HTTP_CACHE_OK. Leaving this box cleared enforces strict interpretation of recorded cache responses during playback.
c. Allow redirects
Select this option to enable a script to play back successfully if the script was directed to another HTTP server during playback or recording. This generates a script that sets the TSS environment variable Http_control to HTTP_REDIRECT_OK. Leaving this box cleared enforces strict interpretation of recorded redirects during playback.
d. Use HTTP keep-alives for connections in a session with multiple scripts. You should generally leave this box cleared.
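The options above are reflected in the generated script through the Http_control TSS environment variable. As a sketch only (the push/pop statements for environment variables and the combining of flags are assumptions here; check your generated script for the exact form), a script that tolerates both partial and redirected responses would contain something like:

```
/* Sketch: relax HTTP playback checking for the commands that follow. */
/* The push statement and the OR-combination of flags are assumed;    */
/* compare with what Robot actually generates for your options.       */
push Http_control = HTTP_PARTIAL_OK | HTTP_REDIRECT_OK;

/* ... recorded HTTP emulation commands play back here ... */

pop Http_control;   /* restore the previous checking behavior */
```

Leaving Http_control at its default corresponds to strict interpretation of the recorded responses.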
How to control which variables are correlated in generated scripts?
1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. At Correlate variables in response, select one of the following:
a. All - All variables are correlated. You should generally select this option. Select another option only if you encounter problems when you play back the
script.
b. Specific - Only the variables that you select are correlated.
c. None - No variables are correlated.
How to set the database name for Oracle recording?
1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select Oracle at the Protocol section.
4. Enter the name that the client application uses in the Database name box.
How to set IIOP script generation options?
1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select IIOP at the Protocol section, and then select one or more of the following:
- Use operation name to prefix emulation command IDs
- Include original IORs in iiop_bind commands
DCOM Recording
Robot records DCOM client applications that are constructed with Visual Basic (compiled executables), Java (compiled executables), or C++, with the restriction that the C++ interfaces must be usable from Visual Basic; that is, they must use attributes that conform to OLE Automation. No preprocessing of the application is necessary before recording begins.
Assigning a Prefix to DCOM Command IDs
If you are recording DCOM requests, you can have Robot automatically assign an identifying prefix to DCOM emulation command IDs.
1. Click Tools > Session Record Options.
2. Click the Generator per protocol tab.
3. Select DCOM at the Protocol section, and then select an event label. The label you select determines the prefix of the emulation command IDs in your
script.
How to record a session?
1. In Robot, click the Record Session button. Alternatively, click File > Record Session, or press CTRL+SHIFT+R.
2. Type the session name (40 characters maximum), or accept the default name. You will specify the script name when you finish recording the script. If you have not yet set your session recording options, do so now by clicking Options.
3. Click OK in the Record Session - Enter Session Name dialog box. The following events occur:
. Robot is minimized (default behavior).
. The Session Record floating toolbar appears (default behavior). You can use this toolbar to stop recording, redisplay Robot, split a script, and insert
features into a script.
. The Session Recorder icon appears on the taskbar. The icon blinks as Robot captures requests and responses.
4. If the Start Application dialog box is displayed, provide the following information, and click OK:
. The path of the executable file for the browser or database application.
. Optionally, the working directory for any components (such as DLLs) that the client application needs at runtime.
. Optionally, any arguments that you want to pass to the client application. The Start Application dialog box appears only if you are performing API
recording, or if you are performing network or proxy recording and selected Prompt for application name on start recording in the General tab of the
Session Record Options dialog box.
5. Perform the transactions that you want to record. As the application sends requests to the server, notice the activity in the Session Recorder window.
Progress bars and request statistics appear in the top of the window.
If there is no activity in the Session Recorder window (or if the Session Recorder icon never blinks), there is a problem with the recording. Stop recording
and try to find the cause of the problem.
6. Optionally, insert features such as blocks and timers through the Session Insert floating toolbar or through the Robot Insert menu.
7. Optionally, when you finish recording transactions, close the client application. With API recording, when you close the client, Robot asks whether you
want to stop recording. If so, click Yes, and either name the session or click to ignore the recorded information in the Generating Scripts dialog box.
8. Click the Stop Recording button on the Session Record floating toolbar.
How to import a session from another project?
1. In Robot, click Tools > Import Session. The Open dialog box appears.
2. Click the session file, then click Open. The session and its scripts are now in your project.
3. To regenerate the scripts in the session you imported, click Tools > Regenerate Test Scripts from Session, and select the session you imported.
4. To regenerate the suite, click Tools > Rational Suite TestStudio > Rational TestManager.
5. Click File > New Suite. The New Suite dialog box appears.
6. Select Existing Session, and click OK.
7. TestManager displays a list of sessions that are in the project. Click the name of the session that you imported, and click OK.
• A block begins with a comment. In the VU language, a block begins like this:
/* Start_Block "BlockName" */
• Robot automatically starts a timer at the start of the block. In the VU language, the timer looks like this:
start_time ["BlockName"] _fs_ts;
Typically, the start_time emulation command is inserted after the first action, but with an argument to use a read-only variable that refers to the start of
the first action.
• The ID of every emulation command in a block is constructed in the same way: the block name followed by a unique, three-digit autonumber. For example, in the VU language:
http_header_recv ["BlockName002"] 200;
When you end a block, command IDs are constructed as they were before you started the block. For example, if the last command ID before the block
was Script025, the next command ID after the block will be Script026.
• A block ends with a stop_time command plus a comment. For example, in the VU language:
stop_time ["BlockName"]; /* Stop_Block */
• A script can have up to 50 blocks.
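Putting the pieces above together, a recorded block named BlockName would look roughly like this in a generated VU script (the emulation command between the timers is illustrative; only the comments, start_time, stop_time, and ID-prefix behavior come from the description above):

```
/* Start_Block "BlockName" */
start_time ["BlockName"] _fs_ts;          /* timer starts with the block */

/* ... recorded emulation commands, IDs prefixed with the block name ... */
http_header_recv ["BlockName002"] 200;    /* expect an HTTP 200 response */

stop_time ["BlockName"]; /* Stop_Block */ /* timer stops when the block ends */
```

After the stop_time command, command IDs revert to the prefix in use before the block started.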
How to insert a block while recording?
1. If the Session Insert floating toolbar is not already displayed, click the Insert button on the Session Record floating toolbar.
2. Click the Start Block button at the point in the script where you want the block to begin, for example, just before you start to record a transaction.
3. Type the block name. Robot uses this name as the prefix for all command IDs in the block. The maximum number of characters for a command ID prefix
is seven.
4. Click OK.
5. Record all of the client requests in the block.
6. Click the Stop Block button to end the current block, and click OK.
7. Continue recording the other sections of the script. When you start and stop a block during recording, the commands are reported as annotations in the Annotations window.
What's a synchronization point?
A synchronization point lets you coordinate the activities of a number of virtual testers by pausing the execution of each tester at a particular point.
Why Use Synchronization Points?
By synchronizing virtual testers to perform the same activity at the same time, you can make that activity occur at some particular point of interest in your test.
Typically, synchronization points that you insert into scripts are used in conjunction with timers to determine the effect of varying workload on the timed activity.
How to insert synchronization points?
You can insert a synchronization point into a script (through Robot) or into a suite (through TestManager).
1. Into a script, in one of the following ways:
. During recording, through the Sync Point toolbar button or through the Insert menu.
. During script editing, by manually typing the synchronization point command name into the script.
2. Into a suite, through the TestManager Synchronization Point dialog box.
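For example, a synchronization point typed into a VU script by hand (option 1 above) is a single emulation command placed just before the activity to be coordinated. The command name sync_point, its syntax, and the point name below are assumptions; compare with the command Robot inserts for you during recording:

```
/* All virtual testers pause here until the synchronization point is   */
/* released, then perform the timed activity together (sketch only).   */
sync_point "OrderSubmit";             /* point name is illustrative */
start_time ["OrderSubmit"] _fs_ts;
/* ... recorded requests for the activity being timed ... */
stop_time ["OrderSubmit"];
```

Pairing the synchronization point with a timer, as sketched, is what lets you measure the effect of the combined workload on the timed activity.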
Why Restore the Test Environment Before Playback?
The state of the Windows environment as well as your application-under-test can affect script playback. If there are differences between the recorded environment
and the playback environment, playback problems can occur.
How to set GUI playback options?
Open the GUI Playback Options dialog box by doing one of the following:
. Before you start playback, click Tools > GUI Playback Options.
. Start playback by clicking the Playback Script button on the toolbar. In the Playback dialog box, click Options.
How to set log options for playback?
The following SQABasic sketch shows a script that fetches records from a datapool. The datapool name and window captions are illustrative, and the SQADatapoolOpen and SQADatapoolValue calls, which the original fragment omitted, are added here so the example is complete:

Sub Main
    Dim x As Integer
    'Handle used to reference the datapool
    Dim dp As Long
    'Variable to be assigned data from the datapool
    Dim ccNum As String

    'Open the datapool (name is illustrative)
    dp = SQADatapoolOpen("OrderData")

    'Fetch a full record from the datapool 10 times
    For x = 0 To 9
        Call SQADatapoolFetch(dp)
        'Read the value of the first column into ccNum
        Call SQADatapoolValue(dp, 1, ccNum)
        Window SetContext, "Caption=Main Window; Class=Window", ""
        PushButton Click, "Text=Order"
        Window SetContext, "Caption=Order Window; Class=Window", ""
    Next x

    Call SQADatapoolClose(dp)
End Sub
How to edit the datapool configuration and begin defining and generating a datapool?
1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
This dialog box lets you edit the DATAPOOL_CONFIG section of the script.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Take one of these actions:
.Click Create to define and populate the new datapool.
.Click Close if you do not want to define and populate a datapool at this time.
6. If you clicked Create in the previous step, continue by following the instructions in the section
How to Define and populate the datapool?
How to correct errors if the datapool values are not successfully generated?
1. Click Yes to see the error report.
2. After viewing the cause of the errors, click Cancel.
3. Correct the errors in the Datapool Fields grid.
How to edit a datapool’s column definitions while in Robot?
1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Click Edit Specification to open the Datapool Specification dialog box, where you update datapool column definitions.
6. To insert one or more new columns into the datapool file:
a. Click the row located either just before or just after the location where you want to insert the new datapool column. An arrow appears next to the name of the datapool row you clicked.
b. Click either Insert before or Insert after, depending on where you want to insert the datapool column.
c. Type a name for the new datapool column (40 characters maximum).
Make sure there is a script variable of the same name listed in the Configure Datapool in Test Script dialog box. The case of the names must match.
7. When finished modifying datapool columns, type a number in the No. of records to generate field. If a different row has to be retrieved with each fetch,
make sure the datapool has at least as many rows as the number of users (and user iterations) that will be requesting rows at runtime.
8. Click Generate Data.
You cannot generate data for a datapool that has more than 150 columns. Alternatively, if you do not want to generate any data now, click Save to save
your datapool column definitions, and then click Close.
9. Optionally, click Yes to see a brief summary of the generated data.
How to edit existing datapool values?
1. If the script that will access the datapool is not open for editing, click File > Open > Test Script to open it.
2. Click Edit > Datapool Information to open the Configure Datapool in Test Script dialog box.
3. Either accept the defaults in the Configure Datapool in Test Script dialog box, or make any appropriate changes.
4. When finished making any changes, click Save.
5. Click Edit Existing Data.
6. In the Edit Datapool dialog box, edit datapool values as appropriate.
7. When finished editing datapool values, click Save, and then click Close.
How to enable an Oracle Forms application for testing?
1. Start the Rational Test Oracle Forms Enabler from the folder in which it was installed (the default folder is Developer 2000).
2. Click Browse. Select the .fmb file that you want to make testable and click OK.
3. Click Add Rational Test Object Testing Library.
4. Set the following options as needed:
Backup original FMB file - Creates a backup file before the file is enabled.
Enable all FMB files in selected directory - Enables every .fmb file in the directory. If this check box is not selected, only the .fmb file in the Oracle FMB file box is enabled.
Generate each selected FMB file - Generates each .fmb file after enabling it.
5. Click Advanced to open the following dialog box.
6. If you selected the Generate each selected FMB file option, type your database connection parameters in the Database tab.
7. Click the Directories tab.
8. If you need to change the default locations of the Object Testing Library and Oracle home directory, select Override Oracle paths in registry. Click each
Browse button and select the new location.
9. Click the General tab.
10. To send the output in the Status box of the Enabler to a log file that you can view or print, select Write status output to log file.
11. If objects in your application contain the WHEN-MOUSE-ENTER trigger, the Enabler prepends sqa_mouse_handler; to each trigger. This is necessary for Robot to correctly record mouse actions against these objects. If you need to prevent this modification, clear Modify local WHEN-MOUSE-ENTER triggers.
12. Click OK.
13. Click Enable. As the file is enabled, information appears in the Status box.
What components are included in each Rational Suite edition?
• Team Unifying Platform - Rational Unified Process, Rational RequisitePro, Rational ClearQuest, Rational SoDA, Rational ClearCase LT, Rational TestManager, and Rational ProjectConsole.
• Analyst Studio - Team Unifying Platform, and Rational Rose.
• DevelopmentStudio - Team Unifying Platform, Rational Rose, Rational PureCoverage, Rational Purify, and Rational Quantify.
• TestStudio - Team Unifying Platform, Rational PureCoverage, Rational Purify, Rational Quantify, Rational Robot, and Rational TestFactory.
• Enterprise - Team Unifying Platform, Rational Rose, Rational PureCoverage, Rational Purify, Rational Quantify, Rational Robot, Rational TestFactory, and Process Workbench.
• Content Studio - Team Unifying Platform, Rational NetDeploy, and Rational SiteLoad.
What are the software development best practices suggested by Rational Suite?
• Develop software iteratively. Iterative development means analyzing, designing, and implementing incremental subsets of the system over the project
lifecycle. The project team plans, develops, and tests an identified subset of system functionality for each iteration. The team develops the next
increment, integrates it with the first iteration, and so on. Each iteration results in either an internal or external release and moves you closer to the goal
of delivering a product that meets its requirements.
• Manage requirements. A requirement is one criterion for a project's success. Your project requirements answer questions like "What do customers
want?" and "What new features must we absolutely ship in the next version?" Most software development teams work with requirements. On smaller,
less formal projects, requirements might be kept in text files or e-mail messages. Other projects can use more formal ways of recording and maintaining
requirements.
• Use component-based architectures. Software architecture is the fundamental framework on which you construct a software project. When you define
an architecture, you design a system's structural elements and their behavior, and you decide how these elements fit into progressively larger
subsystems.
• Model software visually. Visual modeling helps you manage software design complexity. At its simplest level, visual modeling means creating a
graphical blueprint of your system's architecture. Visual models can also help you detect inconsistencies between requirements, designs, and
implementations. They help you evaluate your system's architecture, ensuring sound design.
• Continuously verify quality. Verifying software quality means testing what has been built against defined requirements. Testing includes verifying that
the system delivers required functionality and verifying reliability and its ability to perform under load.
• Manage change. It is important to manage change in a trackable, repeatable, and predictable way. Change management includes facilitating parallel
development, tracking and handling enhancement and change requests, defining repeatable development processes, and reliably reproducing software
builds.
How to register an existing project in Rational Administrator?
1. Run Start > Programs > Rational Suite > Rational Administrator.
2. Right-click on Projects. From the shortcut menu that appears, click Register Existing Project.
3. In the Select Rational Project dialog box, browse to the Rational project file (.rsp), then click Open. Your project name should be added below Projects.
4. Right-click on your project name, then click Connect. All assets and development information associated with your project will appear.
What are the databases used by a Rational project to store project information?
• ClearQuest database - used to store project's change requests (defects and enhancement requests).
• RequisitePro database - used to store project's business and system requirements.
• Rational Test datastore - used to store project's testing information (test assets, logs, and reports).
• Rational Rose - used to store project's visual models.
What are the key concepts in RUP?
• A discipline shows all activities that produce a particular set of software assets. RUP describes development disciplines at an overview level, including a summary of all roles, workflows, activities, and artifacts that are involved.
• A role is defined as the behavior and responsibilities of an individual or a group of individuals on a project team. One person can act in the capacity of
several roles over the course of a project. Conversely, many people can act in the capacity of a single role in a project. Roles are responsible for
creating artifacts.
• A workflow is the sequence of activities that workers perform toward a common goal. A workflow diagram serves as a high-level map for a set of related
activities. The arrows between activities represent the typical, though not required, flow of work between activities.
• An activity is a unit of work that is performed by a particular role. It is a set of ordered steps, like a recipe, for creating an artifact.
• An artifact is something a role produces as the result of performing an activity. In RUP, the artifacts produced in one activity are often used as input into
other activities. An artifact can be small or large, simple or complex, formal or informal. Examples of artifacts are: a test plan, a vision document, a model
of a system's architecture, a script that automates builds, or application code.
Navigation
1. Is terminology consistent?
2. Are navigation buttons consistently located?
3. Is navigation to the correct/intended destination?
4. Is the flow to destination (page to page) logical?
5. Is the flow within a page top-to-bottom, left-to-right?
6. Is there a logical way to return?
7. Are the business steps within the process clear or mapped?
8. Are navigation standards followed?
Ease of Use
1. Are help facilities provided as appropriate?
2. Are selection options clear?
3. Are ADA standards followed?
4. Is the terminology appropriate to the intended audience?
5. Is there minimal scrolling and resizeable screens?
6. Do menus load first?
7. Do graphics have reasonable load times?
8. Are there multiple paths through site (search options) that are user chosen?
9. Are messages understandable?
10. Are confirmation messages available as appropriate?
Presentation of Information
1. Are fonts consistent within functionality?
2. Are the company display standards followed?
- Logos
- Font size
- Colors
- Scrolling
- Object use
3. Are legal requirements met?
4. Is content sequenced properly?
5. Are web-based colors used?
6. Is there appropriate use of white space?
7. Are tools provided (as needed) in order to access the information?
8. Are attachments provided in a static format?
9. Is spelling and grammar correct?
10. Are alternative presentation options available (for limited browsers or performance issues)?
Overall
1. Are requirements driven by business needs and not technology?
Audience
1. Has the audience been defined?
2. Is there a process for identifying the audience?
3. Is the process for identifying the audience current?
4. Is the process reviewed periodically?
5. Is there appropriate use of audience segmentation?
6. Is the application compatible with the audience experience level?
7. Where possible, has the audience readiness been ensured?
8. Are text version and/or upgrade links present?
Testing Process
1. Does the testing process include appropriate verifications? (e.g., reviews, inspections and walkthroughs)
2. Is the testing environment compatible with the operating systems of the audience?
3. Does the testing process and environment legitimately simulate the real world?
Risk
1. Has the risk tolerance been assessed to identify the vital few platforms to test?
Hardware
1. Is the test hardware compatible with all screen types, sizes, resolution of the audience?
2. Is the test hardware compatible with all means of access, modems, etc of the audience?
3. Is the test hardware compatible will all languages of the audience?
4. Is the test hardware compatible with all databases of the audience?
5. Does the test hardware contain the compatible plug-ins and DLLs of the audience?
General
1. Is the application compatible with standards and conventions of the audience?
2. Is the application compatible with copyright laws and licenses?
Access Control
1. Is there a defined standard for login names/passwords?
2. Are good aging procedures in place for passwords?
3. Are users locked out after a given number of password failures?
4. Is there a link for help (e.g., forgotten passwords?)
5. Is there a process for password administration?
6. Have authorization levels been defined?
7. Is management sign-off in place for authorizations?
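Item 3 of the Access Control checklist (locking users out after a given number of password failures) can be sketched in code. This is a minimal illustration, not a production design; the threshold of 3 attempts and the 15-minute lockout window are assumed policy values, not requirements from the checklist.

```python
from datetime import datetime, timedelta

MAX_FAILURES = 3                      # assumed policy: lock after 3 bad attempts
LOCKOUT_PERIOD = timedelta(minutes=15)  # assumed lockout window

class AccountLockout:
    """Tracks failed logins per user and locks the account past a threshold."""

    def __init__(self):
        self.failures = {}  # username -> (failure count, time of last failure)

    def record_failure(self, user, now=None):
        now = now or datetime.now()
        count, _ = self.failures.get(user, (0, now))
        self.failures[user] = (count + 1, now)

    def is_locked(self, user, now=None):
        now = now or datetime.now()
        count, last = self.failures.get(user, (0, now))
        # Locked only while the lockout window has not yet expired.
        return count >= MAX_FAILURES and (now - last) < LOCKOUT_PERIOD

    def record_success(self, user):
        self.failures.pop(user, None)  # a successful login resets the counter

lock = AccountLockout()
for _ in range(3):
    lock.record_failure("alice")
print(lock.is_locked("alice"))   # → True
lock.record_success("alice")
print(lock.is_locked("alice"))   # → False
```

A tester would verify both the lockout itself and that the counter resets on success, since a missing reset quietly locks out legitimate users.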
Disaster Recovery
1. Have service levels been defined? (e.g., how long should recovery take?)
2. Are fail-over solutions needed?
3. Is there a way to reroute to another server in the event of a site crash?
4. Are executables, data, and content backed up on a defined interval appropriate for the level of risk?
5. Are disaster recovery process & procedures defined in writing? If so, are they current?
6. Have recovery procedures been tested?
7. Are site assets adequately insured?
8. Is a third-party "hot site" available for emergency recovery?
9. Has a Business Contingency Plan been developed to maintain the business while the site is being restored?
10. Have all levels in the organization gone through the needed training & drills?
11. Do support notification procedures exist & are they followed?
12. Do support notification procedures support a 24/7 operation?
13. Have criteria been defined to evaluate recovery completion / correctness?
Firewalls
1. Was the software installed correctly?
2. Are firewalls installed at adequate levels in the organization and architecture? (e.g., corporate data, human resources data, customer transaction files, etc.)
3. Have firewalls been tested? (e.g., to allow & deny access).
4. Is the security administrator aware of known firewall defects?
5. Is there a link to access control?
6. Are firewalls installed in effective locations in the architecture? (e.g., proxy servers, data servers, etc.)
Proxy Servers
1. Have undesirable / unauthorized external sites been defined and screened out? (e.g. gaming sites, etc.)
2. Is traffic logged?
3. Is user access defined?
Privacy
1. Is sensitive data restricted from being viewed by unauthorized users?
2. Is proprietary content copyrighted?
3. Is information about company employees limited on public web site?
4. Is the privacy policy communicated to users and customers?
5. Is there adequate legal support and accountability of privacy practices?
Data Security
1. Are data inputs adequately filtered?
2. Are data access privileges identified? (e.g., read, write, update and query)
3. Are data access privileges enforced?
4. Have data backup and restore processes been defined?
5. Have data backup and restore processes been tested?
6. Have file permissions been established?
7. Have file permissions been tested?
8. Have sensitive and critical data been allocated to secure locations?
9. Have data archival and retrieval procedures been defined?
10. Have data archival and retrieval procedures been tested?
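Item 1 of the Data Security checklist, filtering data inputs, is most commonly tested at the database boundary. The sketch below shows the standard defense of parameterized queries using Python's built-in sqlite3 module; the table and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # The ? placeholder makes the driver treat the input strictly as data,
    # so a crafted value cannot change the SQL statement itself.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))         # → [('alice', 'admin')]
print(find_user("x' OR '1'='1"))  # → []  (the injection attempt matches nothing)
```

A security test would feed exactly this kind of hostile input and confirm that the query returns nothing rather than every row.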
Monitoring
1. Are network monitoring tools in place?
2. Are network monitoring tools working effectively?
3. Do monitors detect
- Network time-outs?
- Network concurrent usage?
- IP spoofing?
4. Is personnel access control monitored?
5. Is personnel internet activity monitored?
- Sites visited
- Transactions created
- Links accessed
Security Administration
1. Have security administration procedures been defined?
2. Is there a way to verify that security administration procedures are followed?
3. Are security audits performed?
4. Is there a person or team responsible for security administration?
5. Are checks & balances in place?
6. Is there an adequate backup for the security administrator?
Encryption
1. Are encryption systems/levels defined?
2. Is there a standard of what is to be encrypted?
3. Are customers compatible in terms of encryption levels and protocols?
4. Are encryption techniques for transactions being used for secured transactions?
- Secure socket layer (SSL)
- Virtual Private Networks (VPNs)
5. Have the encryption processes and standards been documented?
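When verifying that SSL/TLS is actually configured for secured transactions (Encryption checklist, item 4), a useful baseline check is the client-side context. The sketch below uses Python's standard ssl module; `create_default_context()` is documented to enable certificate verification and hostname checking by default, which a test can assert before any network traffic is involved.

```python
import ssl

# A default context carries sensible security settings for client
# connections: peer certificates are required and hostnames are checked.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
print(ctx.check_hostname)                    # → True
```

Testing the live handshake against the actual servers (and the VPN path, where used) is still required; this only confirms the client is not silently configured to skip verification.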
Viruses
1. Are virus detection tools in place?
2. Have the virus data files been updated on a current basis?
3. Are virus updates scheduled?
4. Is a response procedure for virus attacks in place?
5. Are notifications of updates to virus files obtained from the anti-virus software vendor?
6. Does the security administrator maintain an informational partnership with the anti-virus software vendor?
7. Does the security administrator subscribe to early warning e-mail services? (e.g., www.foo.org or www.bar.net)
8. Has a key contact been defined for the notification of a virus presence?
9. Has an automated response been developed to respond to a virus presence?
10. Is the communication & training of virus prevention and response procedures to users adequate?
Web Testing Checklist about Performance (1)
Tools
1. Has a load testing tool been identified?
2. Is the tool compatible with the environment?
3. Has licensing been identified?
4. Have external and internal support been identified?
5. Have employees been trained?
Number of Users
1. Have the maximum number of users been identified?
2. Has the complexity of the system been analyzed?
3. Has the user profile been identified?
4. Have user peaks been identified?
5. Have languages been identified (e.g., English, Spanish, French) for global sites?
6. Have the length of sessions been identified by the number of users?
7. Has the number of user configurations been identified?
Expectations/Requirements
1. Has the response time been identified?
2. Has the client response time been identified?
3. Has the expected vendor response time been identified?
4. Have the maximum and acceptable response times been defined?
5. Has response time been met at the various thresholds?
6. Has the break point been identified for capacity planning?
7. Do you know what caused the crash if the application was taken to the breaking point?
8. How many transactions for a given period of time have been identified (bottlenecks)?
9. Have availability of service levels been defined?
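The Expectations/Requirements items above boil down to measuring response times under concurrent load and comparing the worst case against an agreed ceiling. A minimal sketch, assuming the real HTTP call is replaced by the stub `handle_request` below (a stand-in, not part of any checklist):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for a real request to the system under test."""
    time.sleep(0.01)  # simulated server work
    return i

def measure(concurrent_users, requests_per_user, max_acceptable):
    """Drive concurrent users, record per-request timings, check the worst case."""
    timings = []

    def one_user(_):
        for i in range(requests_per_user):
            start = time.perf_counter()
            handle_request(i)
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(one_user, range(concurrent_users)))

    worst = max(timings)
    return worst, worst <= max_acceptable

worst, ok = measure(concurrent_users=5, requests_per_user=4, max_acceptable=1.0)
print(ok)   # → True (the stub always responds well under the 1-second ceiling)
```

Real load testing tools (item 1 under Tools) do the same thing at scale, adding ramp-up profiles, percentile reporting, and per-transaction breakdowns to locate the break point.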
Architecture
1. Has the database capacity been identified?
2. Has anticipated growth data been obtained?
3. Is the database self-contained?
4. Is the system architecture defined?
" Tiers
" Servers
" Network
5. Has the anticipated volume for initial test been defined - with allowance for future growth?
6. Has a plan for vertical growth been identified?
7. Have the various environments been created?
8. Has historical experience with the databases and equipment been documented?
9. Has the current system diagram been developed?
10. Is load balancing available?
11. Have the types of programming languages been identified?
12. Can back end processes be accessed?
Web Testing Checklist about Performance (2)
Resources
1. Are people with skill sets available?
2. Have the following skill sets been acquired?
" DBA
" Doc
" BA
" QA
" Tool Experts
" Internal and external support
" Project manager
" Training
Time Frame
1. When will the application be ready for performance testing?
2. How much time is available for performance testing?
3. How many iterations of testing will take place?
Test Environment
1. Does the test environment exist?
2. Is the environment self-contained?
3. Can one iteration of testing be performed in production?
4. Is a copy of production data available for testing?
5. Are end-users available for testing and analysis?
6. Will the test use virtual users?
7. Does the test environment mirror production?
8. Have the differences been documented? (constraints)
9. Is the test environment available after production release?
10. Have version control processes been used to ensure the correct versions of applications and data in the test environment?
11. Have the times been identified when you will receive the test data (globally) time frame?
12. Are there considerations for fail-over recovery? Disaster recovery?
13. Are replacement servers available?
14. Have back-up procedures been written?
Web Testing Checklist about Correctness (1)
Data
1. Does the application write to the database properly?
2. Does the application read from the database correctly?
3. Is transient data retained?
4. Does the application follow concurrency rules?
5. Are text fields storing information correctly?
6. Is inventory or out of stock being tracked properly?
7. Is there redundant info within web site?
8. Is forward/backward caching working correctly?
9. Are requirements for timing out of session met?
Presentation
1. Are the field data properly displayed?
2. Is the spelling correct?
3. Are the page layouts and format based on requirements?
(e.g., visual highlighting, etc.)
4. Does the URL show you are in secure page?
5. Is the tab order correct on all screens?
6. Do the interfaces meet specific visual standards(internal)?
7. Do the interfaces meet current GUI standards?
8. Do the print functions work correctly?
Navigation
1. Can you navigate to the links correctly?
2. Do Email links work correctly?
Functionality
1. Is the application recording the number of hits correctly?
2. Are calculations correct?
3. Are edits rules being consistently applied?
4. Is the site listed on search engines properly?
5. Is the help information correct?
6. Do internal searches return correct results?
7. Are follow-up confirmations sent correctly?
8. Are errors being handled correctly?
9. Does the application properly interface with other applications?
Environment
1. Are user sessions terminated properly?
2. Is response time adequate based upon specifications?
General
• Home page logo is larger and more centrally placed than on other pages.
• Home page includes navigation, summary of news/promotions, and a search feature.
• Home page answers: Where am I; What does this site do; How do I find what I want?
• Larger navigation space on home page, smaller on subsequent pages.
• Logo is present and consistently placed on all subsequent pages (towards upper left hand corner).
• "Home" link is present on all subsequent pages (but not home page).
• If subsites are present, each has a home page, and includes a link back to the global home page.
Navigation
• Navigation supports user scenarios gathered in the User Task Assessment phase (prior to design).
• Users can see all levels of navigation leading to any page.
• Breadcrumb navigation is present (for larger and some smaller sites).
• Site uses DHTML pop-up to show alternative destinations for that navigation level.
• Navigation can be easily learned.
• Navigation is consistently placed and changes in response to rollover or selection.
• Navigation is available when needed (especially when the user is finished doing something).
• Supplemental navigation is offered appropriately (links on each page, a site map/index, a search engine).
• Navigation uses visual hierarchies like movement, color, position, size, etc., to differentiate it from other page elements.
• Navigation uses precise, descriptive labels in the user's language. Icon navigation is accompanied by text descriptors.
• Navigation answers: Where am I (relative to site structure); Where have I been (obvious visited links); Where can I go (embedded, structural, and
associative links)?
• Redundant navigation is avoided.
Functional Items
• Terms like "previous/back" and "next" are replaced by more descriptive labels indicating the information to be found.
• Pull-down menus include a go button.
• Logins are brief.
• Forms are short and on one page (or demonstrate step X of Y, and why collecting a larger amount of data is important and how the user will benefit).
• Documentation pages are searchable and have an abundance of examples. Instructions are task-oriented and step-by-step. A short conceptual model of
the system is available, including a diagram that explains how the different parts work together. Terms or difficult concepts are linked to a glossary.
Linking
Search Capabilities
• A search feature appears on every page (exceptions include pop-up forms and the like).
• Search box is wide to allow for visible search parameters.
• Advanced Search, if included, is named just that (to scare off novices).
• Search system performs a spelling check and offers synonym expansion.
• Site avoids scoped searching. If included it indicates scope at top of both query and results pages, and additionally offers an automatic extended site
search immediately with the same parameters.
• Results do not include a visible scoring system.
• Eliminates duplicate occurrences of the same result (e.g., foo.com/bar vs. foo.com/bar/ vs. foo.com/bar/index.html).
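Eliminating duplicate occurrences of the same result usually means normalizing URLs before comparing them. A minimal sketch of one plausible normalization (lower-casing, stripping a trailing slash or index page; real search engines apply more rules than this):

```python
def normalize_url(url):
    """Collapse equivalent URL spellings so duplicate results can be dropped."""
    url = url.lower()
    for suffix in ("/index.html", "/index.htm"):
        if url.endswith(suffix):
            url = url[: -len(suffix)]
    return url.rstrip("/")

results = ["foo.com/bar", "foo.com/bar/", "foo.com/bar/index.html", "foo.com/baz"]
seen, unique = set(), []
for r in results:
    key = normalize_url(r)
    if key not in seen:       # keep only the first spelling of each page
        seen.add(key)
        unique.append(r)
print(unique)   # → ['foo.com/bar', 'foo.com/baz']
```

A test of the search feature can feed exactly the three spellings from the checklist and confirm only one result comes back.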
Page Design
• Content accounts for 50% to 80% of a page's design (what's left over after logos, navigation, non-content imagery, ads, white space, footers, etc.).
• Page elements are consistent, and important information is above the fold.
• Pages load in 10 seconds or less on users' bandwidth.
• Pages degrade adequately on older browsers.
• Text is over plain background, and there is high contrast between the two.
• Link styles are minimal (generally one each of link, visited, hover, and active states). Additional link styles are used only if necessary.
• Specified the layout of any liquid areas (usually content) in terms of percentages.
Content Design
• Uses bullets, lists, very short paragraphs, etc. to make content scannable.
• Articles are structured with scannable nested headings.
• Content is formatted in chunks targeted to user interest, not just broken into multiple pages.
• No moving text; most is left-justified; sans-serif for small text; no upper-case sentences/paragraphs; italics and bold are used sparingly.
• Dates follow the international format (year-month-day) or are written out (August 30, 2001).
Writing
Folder Structure
• Folder names are all lower-case and follow the alpha-numeric rules found under "Naming Conventions" below.
• Segmented the site sections according to:
- Root directory (the "images" folder usually goes at the top level within the root folder)
- Sub-directories (usually one for each area of the site)
- Images are restricted to one "images" folder at the top level within the root directory (for global images); if a great number of images will be used only section-specifically, those are stored in local "images" folders
Naming Conventions
• Uses the client's preferred naming method. If possible, uses longer descriptive names (like "content_design.htm" vs. "contdesi.htm").
• Uses alphanumeric characters (a-z, 0-9) and - (dash) or _ (underscore)
• Doesn't use spaces in file names.
• Avoids characters which require a shift key to create, or any punctuation other than a period.
• Uses only lower-case letters.
• Ends filenames in .htm (not .html).
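The naming rules above can be checked automatically over a whole site tree. This sketch encodes them as a single regular expression; it assumes the stricter reading of the rules (lower-case alphanumerics plus dash or underscore, exactly one period, `.htm` ending).

```python
import re

# Assumed rules from the checklist: lower-case a-z and 0-9, plus - or _,
# no spaces or shifted punctuation, a single period, ending in .htm
VALID_NAME = re.compile(r"[a-z0-9_-]+\.htm")

def is_valid_filename(name):
    return VALID_NAME.fullmatch(name) is not None

print(is_valid_filename("content_design.htm"))  # → True
print(is_valid_filename("ContDesi.html"))       # → False (upper-case, .html ending)
print(is_valid_filename("cont desi.htm"))       # → False (space in name)
```

Run against every file in the root directory, such a check catches naming drift before broken links appear on case-sensitive servers.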
Multimedia
• Any files taking longer than 10 seconds to download include a size warning (> 50kb on a 56kbps modem, > 200kb on fast connections). Also includes
the running time of video clips or animations, and indicate any non-standard formats.
• Includes a short summary (and a still clip) of the linked object.
• If appropriate to the content, includes links to helper applications, like Adobe Acrobat Reader if the file is a .pdf.
Page Titles
• Follows title strategy ... Page Content Descriptor : Site Name, Site section (E.g.: Content Implementation Guidelines : CDG Solutions, Usability Process )
• Tries to use only two to six words, and makes their meaning clear when taken out of context.
• The first word(s) are important information-carrying one(s).
• Avoids making several page titles start with the same word.
Headlines
CSS
• Uses CSS to format content appearance (as supported by browsers), rather than older HTML methods.
• Uses browser detection to serve the visitor a CSS file that is appropriate for their browser/platform combination.
• Uses linked style sheets.
Documentation and Help Pages
Content Management
Site has procedures in place to remove outdated information immediately (such as calendar events which have passed).
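A procedure for removing outdated information such as past calendar events is easy to automate. A minimal sketch, with made-up event records and a hypothetical `date` field marking when each entry stops being relevant:

```python
from datetime import date

# Hypothetical event records for illustration.
events = [
    {"title": "Spring release webinar", "date": date(2001, 4, 12)},
    {"title": "Annual user conference", "date": date(2099, 9, 30)},
]

def current_events(items, today=None):
    """Keep only calendar entries whose date has not passed."""
    today = today or date.today()
    return [e for e in items if e["date"] >= today]

print([e["title"] for e in current_events(events)])  # → ['Annual user conference']
```

Running a filter like this as part of the publishing step removes stale entries immediately, rather than relying on someone noticing them.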
Checklist: Graphical User Interface
Navigate from each window to all possible windows (all sequences? important combinations? negative - no transfers):
- Menu Bar - mouse click
- Menu Bar - hot keys
- Menu Bar - keyboard
- RMB (right mouse button)
- Toolbar
- Buttons - push
- Buttons - hot key
- Buttons - keyboard
Test transfers with general data conditions (record level) between windows:
- Test interrelated processing
- Transfer functions differ for list windows vs. one-record display
Test data row retrieval and data transfer functions using conditions:
- List window with no data
- List window with one record (row) in list
- List window with >1 row - first and last row
- List window with >1 row - not first or last row
Row Data Maintenance - test data row handling from GUI to database, and stored procedure/GUI add/change/delete functions:
- New
- Change to non-key field
- Change to key field (delete and add)
- Delete
- Note: do an inquiry after update to verify the database update
Standard Window Controls/Functions:
- Scroll bars (vertical/horizontal)
- Window control menu (Max, Min)
- Print functions (Print, Printer Setup)
- Edit functions (Cut, Copy, Paste)
- Window functions (Previous Window, Close All, Open Window List, Tile, Layer, Cascade)
Application HELP:
- Microhelp
- Balloon notes
- Help - Index
- Help - Table of Contents
- Help - Jump words
- Help - Text
Miscellaneous Application Specific:
- Job status
- Online report(s)
- Informational windows - content
- Informational windows - button