VOLUME 5 • ISSUE 12 • DECEMBER 2008 • $8.95 • www.stpmag.com
Reeled In By
The Allure
Of Test Tools?
School Your Team On Putting
Test Automation Into Practice page 12
12 • COVER STORY
Reeled In By The Allure Of
Fancy Test Automation Tools?
Here are some real-world examples that can school your team on the ways
of putting test automation into practice.
By Aaron Cook and Mark Lustig
Departments
5 • Editorial
How many software testers are there in the
world, anyway?
6 • Contributors
Get to know this month’s experts and the
best practices they preach.
7 • ST&Pedia
Industry lingo that gets you up to speed.
11 • Feedback
It’s your chance to tell us where to go.
27 • Quality Gates For the Five Phases Of Automated Software Testing
“You shall not pass,” someone might bellow, as if protecting the team from a dangerous peril. And so might your team if it embarks on the journey through the five phases of automated testing.
By Elfriede Dustin

32 • Best Practices
SCM tools are great, but they won’t run your business or make the coffee.
By Joel Shore

34 • Future Test
When automating Web service testing, there’s a right way and a wrong way.
By Elena Petrovskaya and Sergey Verkholazov
EDITORIAL

Editor
Edward J. Correia
+1-631-421-4158 x100
ecorreia@bzmedia.com

Editorial Director
Alan Zeichick
+1-650-359-4763
alan@bzmedia.com

Copy Desk

Contributing Editors
Adam LoBelia
Diana Scheben
Matt Heusser
Chris McMahon
Joel Shore

How Many Software Testers Are Out There?
What is the size of the software testing market? How many software testers exist in the world?

I was asked that question recently, and I had to admit that I had no idea of the answer. My estimate was about 250,000, but that was just a guess. It was not based on research, statistical measurement or anything at all, really. It was just a number plucked from thin air.

Numbers I do know for sure are these: The circulation of this magazine is about 25,000. My e-mail newsletter goes to another 40,000. Our conferences (there are three of them each year) attract another few thousand software testers, QA professionals and senior test managers. Of course, there’s some overlap between magazine and newsletter readers and conference attendees, so let’s say BZ Media is reaching about 60,000 unique testers. Is that a lot, or are we just scratching the surface?

To help find the answer, I contacted Thomas Murphy, a research analyst with Gartner. While he couldn’t provide even an approximate body count, he said that sales of software testing tools this year are expected to yield US$2.2 billion in revenue for the likes of Borland, Hewlett-Packard and other AD tool makers. That figure includes about $1.5 billion in test-tool revenue for so-called “distributed” platforms such as Linux, Mac OS X and Windows, and another $700 million for mainframes.

As for people, Murphy estimates that for the enterprise—that is, companies building applications for internal use—the most common ratio for developers to testers is about 5-to-1. That ratio can be quite different for ISVs. “At Microsoft, for instance, the ratio is almost 1-to-1. But most companies doing a good job [of QA] are in the 1-to-3 or 1-to-5 range.” However, the ratio in about a third of companies is around 1-to-10, he said, which skews the average. “We focus on the enterprise. ISVs tend to have a higher saturation of testers. I’ve never seen an enterprise with a 1-to-1 ratio; maybe with certain teams [working on] critical apps. So I would say that 1-to-5 as an industry average would be close.”

Murphy points to what he called the flip side of that argument. “How many lines of code does it take to test a line of code?” If, let’s say, that ratio was 10-to-1, “how could I have three developers cranking out code and expect one tester to keep up with that?” Of course, test tools and automation have the potential to help the tester here, but his point is well taken; there are not enough testers out there.

Which perhaps helps to explain the meteoric rise of test-only organizations cropping up in India. “We’ve seen a huge growth of Indian off-shore groups where all they do is testing. Tata has spun out testing as its own entity,” he said, referring to the gigantic Indian conglomerate. “They wanted [the testing organization] to be independent, with IV&V characteristics.

“Testing has been a growth business in India. There’s a huge number of individuals doing manual testing, [and] building regression suites and frameworks for package testing.” They’ve also had to shift their processes to adapt to changes in technology. “As they get deeper into testing Web 2.0 and SOA, just to get to the same level of quality they used to have requires tremendous increases in their own quality proactives.”

Still, the number of software testers in the world remains elusive. I’ve only just begun to search.
— Edward J. Correia

ART & PRODUCTION
Art Director
Mara Leonardi

SALES & MARKETING
Publisher
Ted Bahr
+1-631-421-4158 x101
ted@bzmedia.com

Associate Publisher
David Karp
+1-631-421-4158 x102
dkarp@bzmedia.com

Advertising Traffic
Nidia Argueta
+1-631-421-4158 x125
nargueta@bzmedia.com

Reprints
Lisa Abelson
+1-516-379-7097
labelson@bzmedia.com

List Services
Lisa Fiske
+1-631-479-2977
lfiske@bzmedia.com

Accounting
Viena Ludewig
+1-631-421-4158 x110
vludewig@bzmedia.com

READER SERVICE
Director of Circulation
Agnes Vanek
+1-631-443-4158
avanek@bzmedia.com

Customer Service/Subscriptions
+1-847-763-9692
stpmag@halldata.com

President: Ted Bahr
Executive Vice President: Alan Zeichick

BZ Media LLC
7 High Street, Suite 407
Huntington, NY 11743
+1-631-421-4158
fax +1-631-421-4130
www.bzmedia.com
info@bzmedia.com

Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscriber Services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.
If you’ve ever been handed a software-test automation tool and told that it will make
you more productive, you’ll want to turn to page 12 and read our lead feature. It was
written by automation experts AARON COOK and MARK LUSTIG, who themselves have
taken the bait of quick-fix offers and share their experiences for making it work.
Zephyr 2.0: Now in the Cloud

Zephyr in late October launched version 2.0 of its namesake software test management tool, which is now available as a SaaS in Amazon’s Elastic Compute Cloud. Zephyr gives development teams a Flex-based system for collaboration; resource, document and project management; test-case creation, automation and archiving; defect tracking; and reporting. The system was previously available only as a self-hosted system.

“The advantage of a SaaS is that you need no hardware,” said Zephyr CEO Samir Shah. “It’s a predeveloped back-end up and running immediately with 24/7 availability, backup, restore, and high bandwidth access from anywhere.” The cost for either system is US$65 per user per month. If you deploy in-house, you also get three free users for the first year.

Also new in Zephyr 2.0 is Zbots, which Shah said are automation agents for integrating Zephyr with other test tools. “These are little software agents that run on the remote machines on which you run Selenium, [HP QuickTestPro] or [Borland] SilkTest.” They let you run all your automated tests from within Zephyr, and bring the results back into Zephyr for analysis, reporting and communication to the team. “You build your automation scripts in those tools,” he explained, “then within Zephyr, you get comprehensive views of all test cases so you can look at the coverage you have across manual and automated tests.”

The system also lets you schedule tests to run. “When it comes time to run automated scripts, we can kick off automation scripts on the target machines, bring results back to Zephyr and update metrics and dashboards in real time.” Shah claims that moving test cases into Zephyr is largely automatic, particularly if they’re stored in Excel spreadsheets or Word documents, which he said is common. “At the end of the day, you have to give a report of the quality of your software. That process is manual and laborious. We automate that by bringing all that [data] into one place, from release to release, sprint to sprint, via a live dashboard. We give the test team and management a view into everything you are testing.”

Also new in Zephyr 2.0 is two-way integration with the JIRA defect tracking system. “Objects stay in JIRA, and if they’re modified in one place they’re modified in the other. We built a clean Web services integration, and the live dashboard also reflects changes in JIRA.”

Version 2.0 improves the UI, Shah said, making test cases faster and easier to write thanks to a full-screen mode. “When a tester is just writing test cases, we expand that for them and let them speed through that process.” Importing and reporting also were improved, he said.

The test manager’s ‘desktop’ in Zephyr provides access to information about available human and technology resources, test metrics and execution schedules, defects and administrative settings.
FEEDBACK: Letters should include the writer’s name, city, state, company
affiliation, e-mail address and daytime phone number. Send your thoughts to
feedback@bzmedia.com. Letters become the property of BZ Media and may
be edited for space and style.
However, verifying the print formatting for the document can easily be automated, allowing the test engineer to focus efforts on other critical testing tasks.

Functional test scripts are ideal for building a regression test suite. As new functionality is developed, the functional script is added to the regression test suite. By leveraging functional test automation, the test team can easily validate functionality across all environments, data sets, and select business processes. The test team can more easily and readily document and replicate identified defects for developers. This ensures that the development team can replicate and resolve the defects faster. They can run the regression tests on upgraded and enhanced applications and environments during off hours so that the team is more focused on new functionality introduced during the last build or release to QA. Functional test automation can also provide support for specific technologies (e.g., GUI, text-based, protocol specific), including custom controls.

[…] generation across multiple machines and data centers/geographic locations.

Security and Vulnerability Testing
Security testing tools enable security vulnerability detection early in the software development life cycle, during the development phase as well as testing phases. Proactive security scanning accelerates remediation, and saves both time and money when compared with later detection.

Static source code testing scans an application’s source code line by line to detect vulnerabilities. Dynamic testing tests applications at runtime in many environments (i.e., development, acceptance test, and production). A mature security and vulnerability testing process will combine both static and dynamic testing. Today’s tools will identify most vulnerabilities, enabling developers to prioritize and address these issues.

Automation is an investment

Test Environments
When planning the test automation environment, a […]ly and concurrently. The size and scale of each environment also can be managed to enable the appropriate testing while minimizing the overall infrastructure requirement.

The sample environment topology below defines a potential automation infrastructure. This includes:
• Primary test management and coordination server
• Automation test centralized control server
• Automation test results repository and analysis server
• System and transaction monitoring and metrics server
• Security and vulnerability test execution server
• Functional automation execution server(s)
• Load and stress automation execution server(s)
• Sample application under test (AUT), including:
  • Web server
  • Application server(s)
  • Database server
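A topology list like the one above can also be captured as data, so that automation scripts address each server by role rather than by hard-coded hostname. The following Python sketch is purely illustrative; every hostname and role key is an assumption invented for the example, not something defined by the article.

```python
# Illustrative sketch: the sample automation topology as a data structure.
# All hostnames and role names below are invented placeholders.
TOPOLOGY = {
    "test_management": "tm01.example.com",         # primary test management/coordination
    "automation_control": "ctl01.example.com",     # centralized control server
    "results_repository": "results01.example.com", # results repository and analysis
    "monitoring": "mon01.example.com",             # system/transaction monitoring and metrics
    "security_testing": "sec01.example.com",       # security and vulnerability execution
    "functional_exec": ["func01.example.com", "func02.example.com"],
    "load_exec": ["load01.example.com"],
    "aut": {                                       # sample application under test
        "web": "web01.example.com",
        "app": ["app01.example.com"],
        "db": "db01.example.com",
    },
}

def execution_hosts(topology):
    """Flatten the functional and load execution server lists."""
    return list(topology["functional_exec"]) + list(topology["load_exec"])
```

With the topology in one place, a dispatcher can iterate over `execution_hosts(TOPOLOGY)` to fan automated test runs out to every execution server.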
[…] automation maintenance costs (e.g., script maintenance).

One simple approach to determining when it makes sense to automate a test case is: […]

The benefits of automation are twofold, starting with a clear cost savings associated with test execution. First, automated tests are significantly faster to execute and report on than manual tests. Secondly, resources now have more time to dedicate toward other testing responsibilities such as test case development, test strategy, executing tests under fault conditions, etc.

Additional benefits of automation include:
• Flexibility to automate standalone components of the testing process. This may include automating regression tests, but also actions directly related to test management, including defect tracking and requirements management.
• Increased depth of testing. Automated scripts can enable more thorough testing by systematically testing all potential combinations of criteria. For example, a single screen may be used for searching, with results formatted differently for different results sets returned. By automating the execution of all possible permutations of test results, a system can be more thoroughly executed.
• Increased range of testing. Introduce automation into areas not currently being executed. For example, if an organization is not currently executing standalone component testing, this area could be automated.

Though ROI is a prevailing tool for conveying the value of test automation, the less quantifiable business advantages of testing may be even more important than cost reduction or optimization of the testing process itself. These advantages include the value of increased system quality to the organization. Higher quality yields higher user satisfaction, better business execution, better information, and higher operational performance. Conversely, the negative effects of a low-quality experience may include decreased user satisfaction, missed opportunities, unexpected downtime, poor information, missed transactions, and countless other detriments to the company. In this sense, we consider the missed revenue or wasted expense of ill-performing systems and software, not just the isolated testing process.

Test Automation Considerations
When moving toward test automation, a number of factors must be considered.

1. What to automate: Automation can be used for scenario (end-to-end) based testing, component-based testing, batch testing, and workflow processes, to name a few. The goals of automation differ based on the area of focus. A given project may choose to implement differing levels of automation as time and priorities permit.

2. Amount of test cases to be automated, based on the overall functionality of an application: What is the duration of the testing life cycle for a given application? The more test cases that are required to be executed, the more valuable and beneficial the execution of test cases becomes. Automation is more cost effective when required for a higher number of tests and with multiple permutations of test case values. In addition, automated scripts execute at a consistent pace, while human testers’ pace varies widely, based on the individual tester’s experience and productivity.

3. Application release cycles: Are releases daily, weekly, monthly, or annually? The maintenance of automation scripts can be more challenging as the frequency of releases increases if a framework-based approach is not used. The frequent execution of automated regression tests will yield financial savings as more regression test cycles can be executed quickly, reducing overall regression test efforts from an employee and time standpoint. The more automated scripts are executed, the more valuable the scripts become. It is worth noting the dependency on a consistent environment for test automation. To mitigate risks associated with environment consistency, environments must be consistently maintained.

4. System release frequency: Depending on how often systems are released into the environments where test automation will be executed (e.g., acceptance testing environment, performance testing environment, regression testing environments), automated testing will be more effective in minimizing the risk of releasing defects into a production environment, lowering the cost of resolving defects.

5. Data integrity requirements within and across test cases: If testers create their own data sets, data integrity may be compromised if any inadvertent entry errors are made. Automation scripts use data sets that are proven to maintain data integrity and provide repeatable expected results. Synergistic benefits are attained when consistent test data can be used to validate individual test case functionality as well as multi-test case functionality. For example, one set of automated test cases provides data entry while another set of test cases can test reporting functionality.

6. Automation engineer staffing and skills: Individual proficiency with test automation varies widely based on experience. Training courses alone will result in a minimum level of competency with a specific tool. Practical experience, based on multiple projects and lessons learned, is the ideal means to achieve a strong set of skills, time permitting. A more pragmatic means is to leverage the lessons learned from industry best practices, and the experiences of highly skilled individuals in the industry. An additional staffing consideration is resource permanence. Specifically, are the automation engineers permanent or temporary (e.g., contractors, consultants, outsourced resources)? Automation engineers can easily develop test scripts following individual scripting styles that can lead to costly or difficult knowledge transfers.

7. Organizational support of test automation: Are automation tools currently owned within the organization, and/or has budget been allocated to support test automation? Budget considerations include:
• Test automation software
• Hardware to host and execute automated tests
• Resources for automation scripting and execution, including test management

Test Automation Framework
To develop a comprehensive and extensible test automation framework that is easy to maintain, it’s extremely helpful if the automation engineers understand the AUT and its underlying technologies. They should understand how the test automation tool interacts with and handles UI controls, forms, and the underlying API and database calls.

The test team also needs to understand and participate in the development process so that the appropriate code drops can be incorporated into the overall automation effort and the targeted framework development. This comes into play if the development organization follows a traditional waterfall SDLC. If the dev team follows a waterfall approach, then major functionality in the new code has to be accounted for in the framework. This can cause significant automation challenges. If the development team is following an agile or other iterative approach, then the automation engineers should be embedded in the team. This also has implications for how the automation framework will be developed.

Waterfall Methodology
To estimate the development effort for the test automation framework, the automation engineers require access to the business requirements and technical specifications. These begin to define the elements that will be incorporated into the test automation framework. From the design specifications and business requirements, the automation team begins to identify those functions that will be used across more than one test case. This forms the outline of the framework. Once these functions are identified, the automation engineers then begin to develop the code required to access and validate between the AUT and the test automation tool. These functions often take the form of the tool interacting with each type of control on a particular form. Determining the proper approach for custom-built controls and the test tool can be challenging. These functions often take the form of setting a value (typically provided by the business test case) and validating a result (from the AUT). By breaking each function down into the […]

TABLE 1: GUTS OF THE FRAMEWORK
A business scenario defined to validate that the application will not allow the end user to define a duplicate customer record.

Components required:
• Launch Application
  LaunchSampleApp(browser, url)
  Returns message (success or failure)
• Login
  Login(userID, password)
  Returns message (success or failure)
• Create new unique customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  Outputs data to a file including application-generated customer ID, org, name, and address
  Returns message (success or failure)
• Query and verify new customer data
  QueryCustomerRecord(file path, file name)
  Returns message (success or failure)
• Create identical customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  Outputs data to a file including application-generated customer ID, org, name, and address
  Returns message (success or failure)
  Handle successful error message(s) from application
• Gracefully log out and return application to the home page
  Logout
  Returns message (success or failure)
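The components in Table 1 can be composed into an executable scenario. The Python sketch below is illustrative only: the component names and the success/failure return convention come from the table, while the snake_case wrappers, the stub bodies, and the in-memory duplicate check standing in for a real AUT are all assumptions.

```python
# Sketch of Table 1's duplicate-customer scenario. Stub implementations are
# placeholders for real framework components; only the component names and
# the "returns success or failure" convention come from the table.

def launch_sample_app(browser, url):      # LaunchSampleApp(browser, url)
    return "success"

def login(user_id, password):             # Login(userID, password)
    return "success"

_seen = set()                             # simulates the AUT's customer store

def create_unique_customer(org, name, address, file_name=None):
    # CreateUniqueCustomer: the real component also writes the application-
    # generated customer ID to a file; here we only simulate the duplicate check.
    key = (org, name, address)
    if key in _seen:
        return "failure"                  # AUT rejects the duplicate record
    _seen.add(key)
    return "success"

def run_duplicate_customer_scenario():
    steps = [
        launch_sample_app("firefox", "http://host/sampleapp"),
        login("tester", "secret"),
        create_unique_customer("Acme", "Jane Doe", "1 Main St"),
        # Creating the identical record must fail; that expected failure is
        # the "successful error message" the scenario validates.
        "success" if create_unique_customer("Acme", "Jane Doe", "1 Main St") == "failure" else "failure",
    ]
    return all(step == "success" for step in steps)
```

Each step reports success or failure, so the scenario driver can stop at the first broken component and report exactly which one failed.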
[…]tion of functions to build and automate, priority and resources required. From that point forward, the automation framework begins to grow and mature alongside of the application under […]

[…] the automation engineer to focus on the tool interaction with your AUT without having to reinvent the standard output methods. Any QA engineer can pick up that function or snippet of code and know what […]

[…] consistent, stable code in less time and at lower cost.
TEACH […] TO BE MORE TESTABLE

I […]ance testing tools, there is one statement I’ve heard from customers more than any other: “Changing the application to make it more testable with your tool is not an option for my organization.” If companies don’t change this mindset, then test automation tools will never deliver more than marginal improvements to testing productivity.

Testability improvements can be applied to existing, complex applications to bring enhanced benefits and simplicity to testers. This article will teach you how to enhance testability in legacy applications where a complete re-design of the application is not an option (and special-purpose testability interfaces cannot easily be introduced).

You’ll also learn how testability improvements can be applied to an existing, complex (not “Hello World”) application, and the benefits gathered through improved testability. This article covers testability improvements from the perspective of functional testing as well as from that of performance and load testing, showing which changes are needed in the application to address either perspective. You’ll also understand the effort needed to enhance testability, with the benefits gathered in terms of increased test coverage and test automation and decreased test maintenance, and how an agile development process will help to successfully implement testability improvements in the application.

What Do I Mean by ‘Testability?’
There are many ways to interpret the term testability, some in a general and vague sense and others highly specific. For the context of this article, I define testability from a test automation perspective. I found the following definition, which describes testability in terms of visibility and control, most useful:

Visibility is the ability to observe the states, outputs, resource usage and other side effects of the software under test. Control is the ability to apply inputs to the software under test or place it in specified states. Ultimately, testability means having reliable and convenient interfaces to drive the execution and verification of tests.

Unfortunately for many existing applications, testability was not a key objective from the start of the project. For these applications, it often is impossible to introduce […] the initial design and architecture of the application. Introducing them in an existing application can cause major architectural changes, which most often includes re-writing of extensive parts of the application that no one is willing to pay for. Also, test interfaces need a layered architecture. Introducing an additional layer for the testability interfaces is often impossible for monolithic application architectures.

Testability interfaces for performance testing need to be able to sufficiently test the multiuser aspects of an application. This can become complex, as it usually requires a remote-able test interface. Thus, special-purpose testability interfaces for performance testing are even less likely to exist than testability interfaces for the functional aspects of the application.

To provide test automation for applications that were not designed with testability in mind, existing application interfaces need to be used for testing purposes. In most cases this means using the graphical user interface (GUI) for functional testing, and a protocol interface or a remote API interface for performance testing. This is the approach traditional […]
• A GUI with “custom controls” (GUI controls that are not natively recognized by the test tool) is problematic because it provides only limited visibility and control for the test tool.
• GUI controls with weak UI object recognition mechanisms (such as screen or window coordinates, UI indexes, or captions) provide limited control, especially when it comes to maintenance. This results in fragile tests that break each time the application UI changes, even minimally.
• Protocol interfaces used for performance testing are also problematic. Session state and control information is often hidden in opaque data structures that are not suitable for testing purposes. Also, the semantic meaning of the protocol often is undocumented.

But using existing interfaces need not be less efficient or effective than using special-purpose testing interfaces. Often slight modifications to how the application uses these interfaces can increase the testability of an application significantly. And again, using the existing application interfaces for testing is often the only option you have for existing applications.
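The visibility/control definition can be made concrete with a small sketch. The class and method names below are invented for illustration: `observe` gives visibility into state and side effects, while `force_state` gives control by placing the object directly into a specified state without driving the full UI.

```python
# Illustrative only: a session object with a minimal testability interface.
# "Visibility" = the test can observe internal state and side effects;
# "control" = the test can place the object into a specified state directly.
class Session:
    def __init__(self):
        self._state = "logged_out"
        self._events = []

    def login(self, user, password):
        self._state = "logged_in"
        self._events.append(("login", user))

    # --- testability interface ---
    def observe(self):
        """Visibility: expose current state and recorded side effects."""
        return {"state": self._state, "events": list(self._events)}

    def force_state(self, state):
        """Control: put the session directly into a specified state."""
        self._state = state

s = Session()
s.force_state("logged_in")                  # control: skip the UI login flow
assert s.observe()["state"] == "logged_in"  # visibility: verify directly
```

A test that can force a session straight into `"logged_in"` no longer has to replay the whole login flow before every check, which is exactly the convenience the definition above is after.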
Ernst Ambichl is chief scientist at Borland.
FutureTest is a two-day conference for software test and QA managers. FutureTest will provide practical, results-oriented sessions taught by top industry professionals.

• Stay abreast of trends and best practices for managing […] you may never have tried before
• Hear from the greatest minds in the software test/QA community and gain their knowledge and wisdom
• Hear practitioners inside leading companies share their test management and software quality assurance secrets
• All content is practical, thought-provoking, future-thinking guidance that you can use today
• Lead your organization in building higher-quality, more secure software and realize the benefits sooner

REGISTER by December 19
Just $1,095 (SAVE $350!)
www.futuretest.net
OLD DOG, NEW TRICKS
actionable controls on one page, which CIDs are only used to provide more between the test scripts and the different
in our case all use the same request for- context information for the test tool, the localized versions of the application.
mat shown above. The only way to iden- testers, and developers. There is no Captions are of course language depend-
tify a control using its control handle is change in the application behavior – the ent, and therefore are not a good candi-
by the order number of the control han- application still uses the control handle date for stable control identifiers.
dle compared to other control handles and ignores the CID. The first option would have been to
in the response data (which is either The HTTP request format changed localize the test scripts themselves by
HTML/JavaScript or JSON in our case). from: externalizing the captions and providing
E.g.: The control handle for the localized versions of the externalized cap-
“Login” button in the “Login” page http://host/borland/mod_1?control_handle= tions. But still this approach introduces
68273841207300477604!...
might be the 6th control handle in the a maintenance burden when captions
page. A parsing statement needed to to: change or when new languages need to
parse the value of the handler for the be supported.
“Login” button then might look like that http://host/borland/mod_1?control_handle= Using the caption to identify an
*tm_btn_login.68273841207300477604!...
is Figure 2. HTML link (HTML fragment of an
It should be obvious from the small As CIDs are included in all responses HTML page):
example in Figure 2 that using the con- as part of their control handlers, it is easy
trol handle to identify controls is far from to create a parsing rule the uniquely pars- <A …
HREF=" http://...control_handle=*tm_btn_login
a stable recognition technique. The prob- es the control handle – the search will .6827..."
lems that come with this approach are: only return one control handle as the >Login</A>
• Poor readability: Scripts are hard to read CIDs are always unique in the context
as they use order number information to of the response. Calling the actual “Login” link then
• Fragile to changes: Adding controls or just rearranging the order of controls will cause scripts to break or, even worse, introduce unexpected behavior causing subsequent errors that are very hard to find.
• Poor maintainability: Frequent changes to the scripts are needed. Detecting changes in the order number of a control handle is cumbersome, as you have to search through response data to find the new order number of the control handle.

Stable Control Identifiers
Trying to optimize the recognition technique within the test scripts by adopting more intelligent parsing rules (instead of only searching for occurrences of "control_handle") proved not to be practical. We even found it counterproductive, because the more unambiguous the parsing rules were, the less stable they became.

So we decided to address the root cause of the problem by creating stable identifiers for controls. Of course, this needed changes in the application code—but more on this later.

We introduced the notion of a control identifier (CID), which uniquely identifies a certain control. Then we extended the protocol so that CIDs could easily be consumed by the test tool. By using meaningful names for CIDs, we made it easy for testers to identify controls in request/response data and test scripts. It was also easier for the developers to associate CIDs with the code that implements the control. The same script as above now looks like:

WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin,
    "control_handle=*tm_btn_login.", "!", 1);
WebPage("http://host/borland/mod_1?control_handle=*tm_btn_login." +
    hBtnLogin + "!");

By introducing CIDs and exposing them at the HTTP protocol level, we are now able to build extremely reliable and stable test scripts. Because a CID will not change for an existing control, there is no maintenance work related to changes such as introducing new controls in a page, filling the page with dynamic content that exposes links with control handles (like a list of links), or reordering controls on a page. The scripts are also more readable, and communication between testers and developers is easier because they use the same notion when talking about the controls of the application.

Functional Testing
The problem here was the existence of language-dependent test scripts. Our existing functional test scripts heavily relied on recognizing GUI controls and windows using their captions. For example, for push buttons, the caption is the displayed name of the button. For links, the caption is the displayed text of the link. And for text fields, the caption is the text label preceding the text field. A caption-based call to the "Login" link might look like this:

MyApp.HtmlLink("Login").Click();

To automate functional testing for all localized versions of the application, we needed to minimize these caption dependencies. As we had already introduced the concept of stable control identifiers (CIDs) for performance testing, we wanted to reuse these identifiers for GUI-level testing as well. Using CIDs makes the test scripts language independent, without the need to localize the scripts (at least for control recognition; verification of control properties may still need language-dependent code). To make the CID accessible to our functional testing tool, the HTML controls of our application exposed a custom HTML attribute named "CID." This attribute is ignored by the browser but is accessible from our functional testing tool using the browser's DOM.

Using the CID to identify a link:

<A …
    HREF="http://...control_handle=*tm_btn_login.6827..."
    CID="tm_btn_login"
>Login</A>

Calling the actual "Login" link using the CID then might look like this:

MyApp.HtmlLink("CID=tm_btn_login").Click();

Existing Test Scripts
We had existing functional test scripts into which we needed to introduce the new mechanism for identifying controls. So it was essential that we had separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them.
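To illustrate the DOM-lookup idea, here is a minimal sketch in Python; the article's actual tooling is not shown, so the parser choice, sample markup, and localized caption below are all assumptions:

```python
# Sketch: finding a control by its custom CID attribute rather than its
# localized caption. The markup and the caption text are invented examples.
import xml.etree.ElementTree as ET

page = """<html><body>
<a HREF="http://host/app?control_handle=*tm_btn_login.6827!"
   CID="tm_btn_login">Anmelden</a>
</body></html>"""

root = ET.fromstring(page)

# A caption-based lookup would need the localized text ("Anmelden" here,
# "Login" in the English build); the CID lookup works in every locale.
link = root.find(".//a[@CID='tm_btn_login']")
print(link.text)  # the displayed caption, whatever the locale
```

The same attribute-based query can be expressed in any DOM- or XPath-capable client, which is exactly what made the custom attribute usable from the functional testing tool.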
Therefore we only had to change the declaration of the controls, but not their usage in multiple test scripts.

The extended protocol now generates the following control handler, which contains the CID as part of the handler:

*tm_btn_login.68273841207300477604!

A script that encapsulates control information from actions might look something like this:
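One way to picture that separation of declaration from usage, as a hedged Python-style sketch (all names here are invented; the article's actual tool syntax differs):

```python
# Invented sketch: control declarations live in one module; the scripts that
# use the controls never mention how a control is recognized.
class Control:
    def __init__(self, cid):
        # The recognition strategy is defined in exactly one place. Switching
        # from caption-based to CID-based lookup changes only this line.
        self.locator = f"CID={cid}"

# Declarations (one per UI control):
LOGIN_LINK = Control("tm_btn_login")

# Usage in the many test scripts stays unchanged:
def login_test(app):
    app.html_link(LOGIN_LINK.locator).click()
```

Because every script references only the declared name, a change to the recognition mechanism touches the declaration module alone.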
related checks and balances along with Automated Software Testing, the team is not only responsible for the final test automation effort, but also for helping enforce that quality is built into the entire Automated Software Testing life cycle throughout. The automation team is held responsible for defining, implementing and verifying quality.

It is the goal of this section to provide program management and the technical lead a solid set of automated technical process best practices and recommendations that will improve the quality of the testing program, increase productivity with respect to schedule and work performed, and aid successful automation efforts.

Testing Phases and Milestones
Independent of the specific needs of the application under test (AUT), Automated Software Testing will implement a structured technical process and approach to automation and a specific set of phases and milestones for each program. Those phases consist of:
• Phase 1: Requirements Gathering—Analyze automated testing needs and develop high-level test strategies
• Phase 2: Design & Develop Test Cases
• Phase 3: Automation framework and test script development
• Phase 4: Automated test execution and results reporting
• Phase 5: Program review
Our overall project approach to accomplish automated testing for a specific effort is listed in the project milestones below.

Phase 1: Requirements Gathering
Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off meeting is to become familiar with the AUT's background, related testing processes, automated testing needs, and schedules. Any additional information regarding the AUT will also be collected for further analysis. This phase serves as the baseline for an effective automation program; i.e., the test requirements will serve as a blueprint for the entire Automated Software Testing effort.

Some of the information you gather for each AUT might include:
• Requirements
• Test Cases
• Test Procedures
• Expected Results
• Interface Specifications

In the event that some information is not available, the automator will work with the customer to derive and/or develop it as needed. Additionally, automation during this phase will generally follow this process:
1) Evaluate the AUT's current manual testing process and determine
   a) areas for testing technique improvement
   b) areas for automated testing
   c) the current quality index, as applicable (depending on AUT state)
   d) initial manual test timelines and duration metrics (to be used as a comparison baseline for ROI)
   e) the "automation index"—i.e. determine what lends itself to automation (see the next item)
2) Analyze existing AUT test requirements for ability to automate
   a) If program or test requirements are not documented, the automation effort will include documenting the specific requirements that need to be automated, to allow for a requirements traceability matrix (RTM)
   b) Requirements are automated based on various criteria, such as
      i) Most critical feature paths
      ii) Most often reused (i.e. automating a test requirement that only has to be run once might not be cost effective)
      iii) Most complex areas, which are often the most error-prone
      iv) Most data combinations, since testing all permutations and combinations manually is time-consuming and often not feasible
      v) Highest risk areas
      vi) Test areas that are most time consuming, for example test performance data output and analysis
3) Evaluate the test automation ROI of each test requirement
   a) Prioritize test automation implementation based on largest ROI
   b) Analyze the AUT's current life-cycle tool use and evaluate reuse of existing tools
   c) Assess and recommend any additional tool use or the required development
   d) Finalize the manual test effort baseline to be used for ROI calculation
A key technical objective is to demonstrate a significant reduction in test time. Therefore this phase involves a detailed
assessment of the time required to manually execute and validate results. The assessment will include measuring the actual test time required for manually executing and validating the tests. Important in the assessment is not only the time required to execute the tests but also the time required to validate the results. Depending on the nature of the application and tests, validation of results can often take significantly longer than the time to execute the tests.

FIGURE 1: THE ATLM
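As a rough illustration of the ROI arithmetic this baseline feeds (the formula shape and all figures below are invented, not from the article):

```python
# Invented illustration: compare the manual baseline (execution plus
# validation time, which often dominates) against automated runs and the
# one-time automation development cost.
def automation_roi(manual_exec_min, manual_validate_min,
                   runs_per_release, automation_dev_hours,
                   automated_run_min=5):
    """Hours saved per release divided by hours invested in automation."""
    manual_hours = (manual_exec_min + manual_validate_min) * runs_per_release / 60
    automated_hours = automated_run_min * runs_per_release / 60
    return (manual_hours - automated_hours) / automation_dev_hours

# 30 min to execute, 90 min to validate, 20 runs per release, 40 dev hours:
ratio = automation_roi(30, 90, 20, 40)
print(round(ratio, 2))  # a value near 1.0 means payback within one release
```

The point of the baseline measurement is to make both inputs to this ratio observed numbers rather than guesses.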
Based on this analysis, the automator would then develop a recommendation for the testing tools and products most compatible with the AUT. This important step is often overlooked. When it is skipped and tools are simply bought up front without consideration for the application, the result is suboptimal in the best case, and in the worst case the tools simply cannot be used at all.
At this time, the automator would also identify and develop additional software as required to support automating the testing. This software would provide interfaces and other utilities as required to support any unique requirements while maximizing the use of COTS testing tools and products. A brief description:
• Assess the existing automation framework for component reusability
• GUI record/playback utilities compatible with AUT display/GUI applications (as applicable, in rare cases)
• Library of test scripts able to interface to AUT standard/certified simulation/stimulation equipment and scenarios used for scenario simulation
• Library of tools to support retrieval of test data generation
• Data repository for expected results
• Library of performance testing tools able to support/measure real-time and non-real-time AUTs
• Test scheduler able to support distributed testing across multiple computers and test precedence

The final step for this phase will be to complete the configuration for the application(s), including the procurement and installation of the recommended testing tools and products along with the additional software utilities developed.

The products of Phase 1 will typically be:
1. Report on test improvement opportunities, as applicable
2. Automation index
3. Automated Software Testing requirements walkthrough with stakeholders, resulting in agreement
4. Presentation report on recommendations for tests to automate, i.e. test requirements to be automated
5. Initial summary of the high-level test automation approach
6. Presentation report on test tool or in-house development needs and associated recommendations
7. Automation utilities
8. Application configuration details
9. Summary of the test environment
10. Timelines
11. Summary of the current manual testing level of effort (LOE), to be used as a baseline for automated testing ROI measurements

Once the list of test requirements for automation has been agreed to by the program, they can be entered in the requirements management tool and/or test management tool for documentation and tracking purposes.

Phase 2: Manual Test Case Development and Review
Armed with the products of Phase 1, manual test cases can now be developed. If test cases already exist, they can simply be analyzed, mapped as applicable to the automated test requirements and reused, ultimately to be marked as automated test cases.

It is important to note that for a test to be automated, manual test cases need to be documented, as computer inputs and expectations differ from human inputs. As a general best practice, before any test can be automated, it needs to be documented and vetted with the customer to verify its accuracy and that the automator's understanding of the test cases is correct. This can be accomplished via a test case walkthrough.

Deriving effective test cases is important for successfully implementing this type of automation. Automating inefficient test cases will result in poor test program performance.

In addition to the test procedures, other documentation, such as the interface specifications for the software, is also needed to develop the test scripts. As required, the automator will develop any missing test procedures and will inspect the software, if available, to determine the interfaces if specifications are not available.

Phase 2 also includes the collection and entry of the requirements and test cases into the test manager and/or requirements management tool (RM tool), as applicable. The end result is a populated requirements traceability matrix inside the test manager and RM tool that links requirements to test cases. This central repository provides a mechanism to organize test results by test cases and requirements.

The test case, related test procedure, test data input and expected results for each test case are also collected, documented, organized and verified at this time. The expected results provide the baseline to determine the pass or fail status of each test. Verification of the expected results will include manual execution of the test cases and validation that the expected results were produced. In cases where exceptions are noted, the automator will work with the customer to resolve the discrepancies and, as needed, update the expected results. The verification step for the expected results ensures that the team will be using the correct baseline of expected results for the software baseline under test.

Also during the manual test assessment, pass/fail status as determined through manual execution will be documented. Software trouble reports will be documented accordingly.

The products of Phase 2 will typically be:
1. Documented manual test cases to be automated (or existing test cases modified and marked as "to be automated")
2. Test case walkthrough and priority agreement
3. Test case implementation by phase/priority and timeline
4. Populated requirements traceability matrix
5. Any software trouble reports associated with manual test execution
6. First draft of the "Automated Software Testing Project Strategy and Charter" (as described in the Project Management portion of this document)

Phase 3: Automated Framework and Test Script Development
This phase will allow for analysis and evaluation of existing frameworks and automation artifacts. It is expected that for each subsequent implementation, there will be software utilities and test scripts we'll be able to reuse from previous tasks. During this phase we determine which scripts can be reused.

As needed, the automation framework will be modified, and test scripts to execute each of the test cases are developed. Scripts will be developed for each of the test cases based on the test procedures for each case.

The recommended process for developing an automated test framework or test scripts is the same as would be used for developing a software application. Key to a technical approach to developing test scripts is that implementations are based on generally accepted development standards; no proprietary implementation should be allowed. This task also includes verifying that each test script works as expected.

The products of Automated Software Testing Phase 3 will typically be:
1. Modified automated test framework; reused test scripts (as applicable)
2. Test case automation—newly developed test scripts
3. High-level walkthrough of automated test cases with the internal or external customer
4. Updated requirements traceability matrix

Phase 4: Automated Test Execution and Results Reporting
The next step is to execute the automated tests and develop the framework and related test scripts. Pass/fail status will be captured and recorded in the test manager. An analysis and comparison of manual and automated test times and pass/fail results will be conducted and summarized in a test presentation report. Depending on the nature of the application and tests, you might also complete an analysis that characterizes the range of performance for the application.

The products of Automated Software Testing Phase 4 will typically be:
1. Test Report including Pass/Fail Status by Test Case and Requirement (including updated RTM)
2. Test Execution Times (Manual and Automated)—initial ROI reports
3. Test Summary Presentation
4. Automated Software Testing training, as required

Phase 5: Program Review and Assessment
The goal of Automated Software Testing implementations is to allow for continued improvement. During the fifth phase, we will review the performance of the automation program to determine where improvements can be made.

Throughout the automation effort, we collect test metrics, many during the test execution phase. It is not beneficial to wait until the end of the automation project to document insights gained into how to improve specific procedures. When needed, we will alter detailed procedures during the test program, when it becomes apparent that such changes are necessary to improve the efficiency of an ongoing activity.

The test program review also includes an assessment of whether the automation efforts satisfy completion criteria for the AUT, and whether the automation effort itself has been completed. The review also could include an evaluation of progress measurements and other metrics collected, as required by the program.

The evaluation of the test metrics should examine how well the original test program time/sizing measurements compared with the actual number of hours expended and test procedures developed to accomplish the automation. The review of test metrics should conclude with improvement recommendations, as needed.

Just as important is documenting the activities that the automation effort performed well and did correctly, in order to be able to repeat these successful processes. Once the project is complete, proposals for corrective action will surely be beneficial to the next project, but corrective actions applied during the test program can be significant enough to improve the final results of the test program.

Automated Software Testing efforts will adopt, as part of their culture, an ongoing iterative process of lessons-learned activities. This approach will encourage automation implementers to take the responsibility to raise corrective action proposals immediately, when such actions potentially have significant impact on test program performance. This promotes leadership behavior from each test engineer.

The products of Phase 5 will typically be:
1. The final report

Quality Gates
Internal controls and quality assurance processes verify that each phase has been completed successfully, while keeping the customer involved. Controls include quality gates for each phase, such as technical interchanges and walkthroughs that include the customer, use of standards, and process measurement.

Successful completion of the activities prescribed by the process should be the only approved gateway to the next phase. Those approval activities or quality gates include technical interchanges, walkthroughs, internal inspections, examination of constraints and associated risks, configuration management; tracked and monitored schedules and cost; corrective actions; and more, as this section describes. Figure 1 reflects typical quality gates, which apply to Automated Software Testing milestones.

FIGURE 2: AUTOMATED SOFTWARE TESTING PHASES, MILESTONES AND QUALITY GATES

Our process controls verify that the output of one stage, represented in Figure 2, is fit to be used as the input to the next stage. Verifying that output is satisfactory may be an iterative process, and verification is accomplished by customer review meetings, internal meetings and comparing the output against defined standards and other project-specific criteria, as applicable. Additional quality gate activities will take place as applicable, for example:

Technical interchanges and walkthroughs with the customer and the automation team represent an evaluation technique that will take place during, and as a final step of, each Automated Software Testing phase. These evaluation techniques can be applied to all deliverables, i.e. test requirements, test cases, automation design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed examination by a person or a group other than the author. These interchanges and walkthroughs are intended to help find defects, detect non-adherence to Automated Software Testing
standards, test procedure issues, and other problems.

Examples of technical interchange meetings include an overview of test requirement documentation. When test requirements are defined in terms that are testable and correct, errors are prevented from entering the development pipeline, where they would eventually be reflected as possible defects in the deliverable. Automation design-component walkthroughs can be performed to ensure that the design is consistent with the defined requirements, conforms to standards and the applicable design methodology, and that errors are minimized.

Technical reviews and inspections have proven to be the most effective form of preventing miscommunication, allowing for defect detection and removal.

Internal automator inspections of deliverable work products will take place to support the detection and removal of defects early in the development and test cycles; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

A careful examination of goals, constraints and associated risks will take place, which will lead to a systematic automation strategy, produce a predictable, higher-quality outcome and enable a high degree of success. Combining a careful examination of constraints, as a defect prevention technology, together with defect detection technologies will yield the best results.

Any constraint and associated risk will be communicated to the customer, and risk mitigation strategies will be developed as necessary.

Defined QA processes allow for constant risk assessment and review. If a risk is identified, appropriate mitigation strategies can be deployed. We require ongoing review of cost, schedules, processes and implementation to prevent potential problems from going unnoticed until it is too late; instead, our process assures problems are addressed and corrected immediately.

Experience shows that it's important to protect the integrity of the Automated Software Testing processes and environment. Means of achieving this include testing any new technology in isolation, validating that tools, for example, perform up to specifications and marketing claims before being used on any AUT or customer environment.

The automator also will verify that any upgrades to a technology still run in the current environment. The previous version of the tool may have performed correctly, and a new version may perform fine in other environments, but it might adversely affect the team's particular environment. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process and help with roll-back in the event of failure.

The automator incorporates the use of configuration management tools to help control the integrity of the automation artifacts. For example, we will include all automation framework components, script files, test case and test procedure documentation, schedules and cost tracking data under a configuration management system. This assures us that the latest and accurate version control and records of Automated Software Testing artifacts and products are maintained.

Schedules Are Defined, Tracked and Communicated
It's also important to define, track and communicate project schedules. Schedule task durations are determined based on past historical performance and associated best estimates. Also, any schedule dependencies and critical path elements should be considered up front and incorporated into the schedule.

If the program is under a tight deadline, for example, only the automation tasks that can be delivered on time should be included in the schedule. During Phase 1, test requirements are prioritized. This allows the most critical tasks to be included and prioritized for completion first, and less critical and lower priority tasks to be placed later in the schedule.

After Phase 1, an initial schedule is presented to the customer for approval. During the technical interchanges and walkthroughs, schedules are presented on an ongoing basis to allow for continuous schedule communication and monitoring. Potential schedule risks will be communicated well in advance, and risk mitigation strategies will be explored and implemented as needed. Any potential schedule slip can be communicated to the customer immediately and necessary adjustments made accordingly.

Tracking schedules on an ongoing basis also contributes to tracking and controlling costs. Costs that are tracked can be controlled. By closely tracking schedules and other required resources, the automator assures that a cost tracking and controlling process is followed. Inspections, walkthroughs and other status reporting will allow for a closely monitored cost control tracking activity.

Performance is continuously tracked, with the necessary visibility into project performance, related schedule and cost. The automator manager maintains the record of delivery dates (planned vs. actual) and continuously evaluates the project schedule. This is maintained in conjunction with all project tracking activities, is presented at weekly status reports and is submitted with the monthly status report.

Even with the best-laid plans and implementation, corrective actions and adjustments are unavoidable. Good QA processes will allow for continuous evaluation and adjustment of task implementation. If a process is too rigid, implementation can be doomed to failure.

When making adjustments, it's critical to discuss any and all changes with customers, and to explain why the adjustment is recommended and the impact of not making the change. This communication is essential for customer buy-in.

QA processes should allow for and support the implementation of necessary corrective actions. This allows for strategic course correction, schedule adjustments, and deviation from the Automated Software Testing phases to adjust to specific project needs. This also allows for continuous process improvement and, ultimately, a successful delivery. ý

REFERENCES
• This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated Software Testing.
• As used at IDT.
• Implementing Automated Software Testing, Addison-Wesley, Feb. 2009.
…the well-meant spirit of getting the product out the door—to circumvent check-in policies.

A simple, low-tech way to avoid potential disasters, such as sending the wrong version of code into production, is a simple checklist attached to developers' cubicle walls with a push pin. "This method of managing repetitive tasks and making sure…

…Microsoft's Visual Studio Team System: "Every build counts."

Guadagno's view of the development world is simple: Build control and process automation must be ingrained in the culture of the team and not something that's left to a manager. "If it's not instilled in the culture of the team, if they don't…

…builds," he says, "are not a substitute for… good architecture." ý

Joel Shore is a 20-year industry veteran and has authored numerous books on personal computing. He owns and operates Reference Guide, a technical product reviewing and documentation consultancy in Southboro, Mass.
l
after significant changes absolutely no need to write ful to send requests one after another
were made in the archi- your own script for XML in the correct order. Also be mindful of
tecture. From the very file comparison; just find the uniqueness of test data during the
beginning we intended to
We are able to the most suitable free tool test. The service might not allow you to
automate as many of these and use it within your create, for example, several users with the
tests as possible. Some of automate about scripts. However, this same name. It’s also common for one
the techniques we used for method is not applicable request in a sequence to require data
automated verification of 70 percent of when dealing with dynam- from a previous response. In this case you
Web services functionality ic data or if a new version need to think about storing that data and
were as follows: our test cases of Web services involves using it on the fly.
Response schema val- changes in the request All of these techniques were imple-
1. idation. This is the
most simple and formal
[with a] structure or Web services
architecture.
mented using a home-grown tool which
we were using in our day-to-day work.
procedure of response small utility Check XML nodes Using that tool, we are able to automate
structure verification. There
are free schema validators built in-house.
3. and their values. In
the case of dynamic data, it
about 70 percent of our test cases. This
small utility that was originally created by
out there that can easily can be useful to validate developers for developers to cover typi-
be built into automated l only the existence of nodes cal predefined test conditions and grad-
scripts. However, schema validation is not using XPath query and/or check the ually evolved into rather powerful tool.
sufficient of you also need to validate the node values using patterns. Having such a tool in your own organi-
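The node-and-pattern technique above can be sketched in a few lines of Python; the response body, element names, and patterns are invented for illustration:

```python
# Invented example of checking node existence and value patterns when the
# response contains dynamic data (IDs, timestamps) that cannot be compared
# against a fixed expected file.
import re
import xml.etree.ElementTree as ET

response = ("<user><id>8431</id><name>jsmith</name>"
            "<created>2008-12-01</created></user>")
root = ET.fromstring(response)

# Dynamic fields: assert that the node exists and its value matches a pattern.
assert root.find("id") is not None
assert re.fullmatch(r"\d+", root.find("id").text)
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", root.find("created").text)

# Stable fields can still be compared exactly; a value like the user ID can
# also be stored and reused in the next request of the sequence.
assert root.find("name").text == "jsmith"
user_id = root.find("id").text
```

The last line shows the "store and use on the fly" idea: a value pulled from one response is kept so a later request in the sequence can include it.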