Contents
Best Practices for OATS – OpenScript Functional Testing
Script Naming related best practices
Assets related best practices
Scripting related best practices
Execution related best practices
Settings related best practices
Script Naming related best practices
1. Script names should contain only alphanumeric characters, with no special characters except "_".
Purpose: avoids unexpected OpenScript errors caused by special characters in a script name.
2. Keep the script name short, meaningful and less than 60 characters in total.
Purpose: OpenScript may throw errors when executing scripts whose names exceed 60 characters.
Assets related best practices
4. Keep any attached re-usable assets relative to the repository, unless they are present inside the script folder.
Purpose: helps move scripts easily from one location to another without export and import processes.
5. Avoid using Java classes for re-usable methods or libraries as much as possible, except in special cases.
Purpose: they require more technically skilled people, and OpenScript may sometimes throw an "Object cannot be resolved" error.
Scripting related best practices
6. Avoid index attributes when identifying objects in web applications, for all UI components except document and form. If you record the scripts, remove the index attribute from the object identification of the desired UI components under Recording > Web Functional.
Regular way:
web.window("/web:window[@index='0' or @title='Google']").waitForPage(null);
Better way:
web.window("/web:window[@title='Google']").waitForPage(null);
Purpose: scripts will often fail if an index attribute is present in the list of attributes used to identify a UI component in the XPath.
8. Instead of placing think statements in the script, it is better to call waitFor(time) on the target object on which the next step will perform an operation.
Purpose: this reduces the chance of script failure under varying conditions of the application under test, such as the time taken to load a page or objects refreshing on it. It is also important wherever AJAX requests are involved.
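The difference between a fixed think and a conditional wait can be sketched in plain Java. This is an illustration of the polling idea only, not the OpenScript API; the class and method names are invented for the example.

```java
// Illustrative sketch (not the OpenScript API): a polling wait that returns
// as soon as a condition holds, instead of sleeping for a fixed think time.
import java.util.function.BooleanSupplier;

public class WaitForDemo {
    // Polls the condition every pollMs until it is true or timeoutMs elapses.
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out waiting for the object
            }
            try {
                Thread.sleep(pollMs); // brief pause before the next check
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true; // object is ready; the script continues immediately
    }

    public static void main(String[] args) {
        long readyAt = System.currentTimeMillis() + 200; // element "appears" after 200 ms
        boolean found = waitFor(() -> System.currentTimeMillis() >= readyAt, 2000, 50);
        System.out.println("element ready: " + found);
    }
}
```

Unlike a fixed think, the wait ends as soon as the object is available, and only fails when the timeout is genuinely exceeded.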
9. When the attributes used to identify an object or element are not unique or not constant, it is better to identify the object based on a label or prompt present near it, i.e. dynamically get the object from a label associated with it by using methods like getElementsByTag, parent, etc.
Purpose: brings more stability to the script, as objects are identified through their associated labels; this is especially useful when the attribute values identifying an object change frequently or are not unique.
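The label-anchored lookup can be demonstrated with the JDK's built-in XPath support. This is a plain-Java sketch of the idea, not OpenScript code; the markup, ids and method name are hypothetical.

```java
// Illustrative sketch (plain Java XPath, not the OpenScript API): locate a
// field via the stable text of the label next to it, rather than via its own
// dynamic attributes.
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class LabelLookup {
    // Returns the id of the input that immediately follows the given label text,
    // or null when no such label exists.
    static String inputIdForLabel(String html, String labelText) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(html)));
        // Anchor on the label text, then step to the sibling input element.
        String expr = "//label[text()='" + labelText + "']/following-sibling::input[1]";
        Element input = (Element) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODE);
        return input == null ? null : input.getAttribute("id");
    }

    public static void main(String[] args) throws Exception {
        // The input's id is dynamic, but the label text is stable.
        String page = "<form><label>Order Number</label><input id='dyn_83921'/></form>";
        System.out.println(inputIdForLabel(page, "Order Number")); // prints dyn_83921
    }
}
```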
10. If you are using a scripting-based test automation framework, it is suggested to keep shared object libraries instead of making XPaths part of the scripts themselves.
Purpose: if object attributes are embedded in the scripts, maintenance overhead increases whenever the objects, or the unique attributes used to identify them, change.
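A minimal shared object library can be as simple as a properties file mapping logical names to XPaths, loaded by every script. This is a sketch under that assumption; the keys and locators shown are invented for the example.

```java
// Illustrative sketch: a shared object library backed by a properties file,
// so XPaths live in one place instead of being hard-coded in each script.
import java.io.StringReader;
import java.util.Properties;

public class ObjectRepository {
    private final Properties locators = new Properties();

    ObjectRepository(String propertiesText) throws Exception {
        // In practice this would be loaded from a shared file in the repository.
        locators.load(new StringReader(propertiesText));
    }

    // Scripts look up objects by logical name; when an attribute changes,
    // only the repository entry is edited, not every script.
    String xpathFor(String logicalName) {
        String xpath = locators.getProperty(logicalName);
        if (xpath == null) {
            throw new IllegalArgumentException("No locator for: " + logicalName);
        }
        return xpath;
    }

    public static void main(String[] args) throws Exception {
        ObjectRepository repo = new ObjectRepository(
                "login.username=//input[@name='user']\n"
              + "login.submit=//button[@id='go']\n");
        System.out.println(repo.xpathFor("login.submit"));
    }
}
```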
11. Avoid multi-level nested looping conditions in the scripts.
Purpose: otherwise it takes more time to determine whether an execution failure is due to a script issue or a functional issue.
12. Keep scripts simple; do not over-generalise a single test script to satisfy many different scenarios.
Purpose: over-generalisation creates challenges when debugging or maintaining the scripts.
14. Develop individual scripts that run for no more than 30 minutes; avoid scripts with lengthy execution times as much as possible.
Purpose: scripts with shorter execution times are more stable.
15. Save any values captured from the application under test to a physical location (i.e. files), and also print them to the log or result file for reference.
Purpose: when a script fails midway and you want to continue executing the remaining part, you can assign these saved values and resume; otherwise the data captured during execution is lost whenever a script stops abruptly for unknown reasons or due to some issue.
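One straightforward way to persist captured values is a properties file written during the run and reloaded on resume. This is a plain-Java sketch of that idea; the file name and key are hypothetical.

```java
// Illustrative sketch: persist values captured during a run so a resumed
// script can reload them instead of re-capturing them from the application.
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class CapturedValues {
    static void save(Path file, Properties values) throws Exception {
        try (Writer w = Files.newBufferedWriter(file)) {
            values.store(w, "values captured during script execution");
        }
    }

    static Properties load(Path file) throws Exception {
        Properties values = new Properties();
        try (Reader r = Files.newBufferedReader(file)) {
            values.load(r);
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("captured", ".properties");
        Properties run = new Properties();
        run.setProperty("orderNumber", "SO-48211"); // value captured from the app
        save(file, run);
        // Later, a resumed run reloads the value instead of re-capturing it.
        System.out.println(load(file).getProperty("orderNumber"));
    }
}
```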
16. Write appropriate info statements at different parts of the script so that they appear clearly in the final result file.
Purpose: helpful while debugging the scripts.
17. Go through your entire application, identify which attributes are generally used to identify each object uniquely, compile the list of attributes per object, and circulate it to the whole team so that everyone stays in sync and uses the same attributes.
Purpose: brings a standard approach to building better scripts.
18. When the application under test has web tables and you must perform an operation on an object based on values present in other columns, it is better to dynamically find the row whose values match the respective search columns, then dynamically get the target object from the target column of that same row and perform the desired operation.
Purpose: data in web tables is generally dynamic, and the display order changes with other transactions in the application under test; logic of this kind brings stability to the script.
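The row-matching logic can be sketched in plain Java over an in-memory table. In a real script the rows would be read from the web table; the order numbers and column values below are invented for the example.

```java
// Illustrative sketch: find the row whose search column matches the expected
// value, then read the target cell from that same row, so the script does not
// depend on the row's position in the table.
import java.util.List;

public class TableLookup {
    // Returns the value in targetCol for the first row where searchCol equals
    // key, or null when no row matches.
    static String cellForRow(List<List<String>> rows, int searchCol, String key, int targetCol) {
        for (List<String> row : rows) {
            if (key.equals(row.get(searchCol))) {
                return row.get(targetCol); // same row, target column
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<List<String>> rows = List.of(
                List.of("SO-1001", "Open",   "Edit"),
                List.of("SO-1002", "Closed", "View"));
        // Row order may change between runs, so match on the order number.
        System.out.println(cellForRow(rows, 0, "SO-1002", 2)); // prints View
    }
}
```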
19. Regularize the attributes in the XPath so that the object is identified uniquely without fail. For example: a web page / window object is identified by a title whose value is "Order Creation 1235", and this is the only attribute that can identify the window. The script might fail the next time it is executed with that literal attribute value, but if we regularize it as "Order Creation*" the same script can run without fail any number of times.
Purpose: brings better stability to the script across multiple runs.
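The regularized match can be expressed as a pattern on the stable prefix of the title. This plain-Java sketch uses a regular expression to illustrate the idea; the method name is invented for the example.

```java
// Illustrative sketch: "regularize" a window title by matching its stable
// prefix with a pattern instead of the full, run-specific value.
import java.util.regex.Pattern;

public class TitleMatch {
    // Matches titles such as "Order Creation 1235", "Order Creation 99", ...
    static boolean isOrderCreationWindow(String title) {
        return Pattern.matches("Order Creation.*", title);
    }

    public static void main(String[] args) {
        System.out.println(isOrderCreationWindow("Order Creation 1235")); // true
        System.out.println(isOrderCreationWindow("Order Inquiry 1235"));  // false
    }
}
```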
Execution related best practices
20. Maintain a shared location where you store the execution results of each cycle, per test script executed during that cycle.
These result folders are present in each script / master script, generally at this location:
Master Script > Results > SessionXYZ
Here SessionXYZ is the session folder generated for the run, which we need to identify and archive.
Purpose: as a general practice, people delete results after execution once everything looks fine; by archiving them instead, we keep a reference to go back and check on a need basis.
21. Follow a practice where one person stabilizes a script and another person on the team runs the same script three times, of which it must pass at least twice.
Purpose: these runs, also called dry runs, give better confidence that the script works correctly.
Settings related best practices
23. Enable the OpenScript setting that captures a screenshot on any script failure.
Purpose: this helps speed up execution-failure analysis, especially for nightly / unattended script executions.