
A BZ Media Publication

BEST PRACTICES: SCM Tools
VOLUME 5 • ISSUE 12 • DECEMBER 2008 • $8.95 • www.stpmag.com

Reeled In By The Allure Of Test Tools?
School Your Team On Putting Test Automation Into Practice (page 12)

Teach Old Apps New Testing Tricks

Using Quality Gates To Prevent Automation Project Failure
Contents
A BZ Media Publication

12 COVER STORY
Reeled In By The Allure Of Fancy Test Automation Tools?
Here are some real-world examples that can school your team on the ways of putting test automation into practice.
By Aaron Cook and Mark Lustig

18 Teach Old Apps Some New Tricks
When redesign’s not an option, and adding testability interfaces is difficult, you need ways to improve testability of existing legacy apps.
By Ernst Ambichl

27 Quality Gates For the Five Phases Of Automated Software Testing
“You shall not pass,” someone might bellow, as if protecting the team from a dangerous peril. And so might your team if it embarks on the journey through the five phases of automated testing.
By Elfriede Dustin

Departments

5 • Editorial
How many software testers are there in the world, anyway?

6 • Contributors
Get to know this month’s experts and the best practices they preach.

7 • ST&Pedia
Industry lingo that gets you up to speed.

9 • Out of the Box
News and products for testers.

11 • Feedback
It’s your chance to tell us where to go.

32 • Best Practices
SCM tools are great, but they won’t run your business or make the coffee.
By Joel Shore

34 • Future Test
When automating Web service testing, there’s a right way and a wrong way.
By Elena Petrovskaya and Sergey Verkholazov

DECEMBER 2008 www.stpmag.com • 3


Ed Notes

EDITORIAL
Editor: Edward J. Correia, +1-631-421-4158 x100, ecorreia@bzmedia.com
Editorial Director: Alan Zeichick, +1-650-359-4763, alan@bzmedia.com
Copy Desk: Adam LoBelia, Diana Scheben
Contributing Editors: Matt Heusser, Chris McMahon, Joel Shore

ART & PRODUCTION
Art Director: Mara Leonardi

SALES & MARKETING
Publisher: Ted Bahr, +1-631-421-4158 x101, ted@bzmedia.com
Associate Publisher: David Karp, +1-631-421-4158 x102, dkarp@bzmedia.com
Advertising Traffic: Nidia Argueta, +1-631-421-4158 x125, nargueta@bzmedia.com
Reprints: Lisa Abelson, +1-516-379-7097, labelson@bzmedia.com
List Services: Lisa Fiske, +1-631-479-2977, lfiske@bzmedia.com
Accounting: Viena Ludewig, +1-631-421-4158 x110, vludewig@bzmedia.com

READER SERVICE
Director of Circulation: Agnes Vanek, +1-631-443-4158, avanek@bzmedia.com
Customer Service/Subscriptions: +1-847-763-9692, stpmag@halldata.com

BZ Media LLC
President: Ted Bahr
Executive Vice President: Alan Zeichick
7 High Street, Suite 407, Huntington, NY 11743
+1-631-421-4158, fax +1-631-421-4130
www.bzmedia.com, info@bzmedia.com

Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY, 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance subscriber services may be reached at stpmag@halldata.com or by calling 1-847-763-9692.

How Many Software Testers Are Out There?
By Edward J. Correia

What is the size of the software testing market? How many software testers exist in the world?
I was asked that question recently, and I had to admit that I had no idea of the answer. My estimate was about 250,000, but that was just a guess. It was not based on research, statistical measurement or anything at all, really. It was just a number plucked from thin air.
Numbers I do know for sure are these: The circulation of this magazine is about 25,000. My e-mail newsletter goes to another 40,000. Our conferences (there are three of them each year) attract another few thousand software testers, QA professionals and senior test managers. Of course, there’s some overlap between magazine and newsletter readers and conference attendees, so let’s say BZ Media is reaching about 60,000 unique testers. Is that a lot, or are we just scratching the surface?
To help find the answer, I contacted Thomas Murphy, a research analyst with Gartner. While he couldn’t provide even an approximate body count, he said that sales of software testing tools this year are expected to yield US$2.2 billion in revenue for the likes of Borland, Hewlett-Packard and other AD tool makers. That figure includes about $1.5 billion in test-tool revenue for so-called “distributed” platforms such as Linux, Mac OS X and Windows, and another $700 million for mainframes.
As for people, Murphy estimates that for the enterprise—that is, companies building applications for internal use—the most common ratio for developers to testers is about 5-to-1. That ratio can be quite different for ISVs. “At Microsoft, for instance, the ratio is almost 1-to-1. But most companies doing a good job [of QA] are in the 1-to-3 or 1-to-5 range.” However, the ratio in about a third of companies is around 1-to-10, he said, which skews the average. “We focus on the enterprise. ISVs tend to have a higher saturation of testers. I’ve never seen an enterprise with a 1-to-1 ratio; maybe with certain teams [working on] critical apps. So I would say that 1-to-5 as an industry average would be close.”
Murphy points to what he called the flip-side of that argument. “How many lines of code does it take to test a line of code?” If, let’s say, that ratio was 10-to-1, “how could I have three developers cranking out code and expect one tester to keep up with that?” Of course, test tools and automation have the potential to help the tester here, but his point is well taken; there are not enough testers out there.
Which perhaps helps to explain the meteoric rise of test-only organizations cropping up in India. “We’ve seen a huge growth of Indian off-shore groups where all they do is testing. Tata has spun out testing as its own entity,” he said, referring to the gigantic Indian conglomerate. “They wanted [the testing organization] to be independent, with IV&V characteristics.
“Testing has been a growth business in India. There’s a huge number of individuals doing manual testing, [and] building regression suites and frameworks for package testing.” They’ve also had to shift their processes to adapt to changes in technology. “As they get deeper into testing Web 2.0 and SOA, just to get to the same level of quality they used to have requires tremendous increases in their own quality proactives.”
Still, the number of software testers in the world remains elusive. I’ve only just begun to search. ý


Contributors

If you’ve ever been handed a software-test automation tool and told that it will make
you more productive, you’ll want to turn to page 12 and read our lead feature. It was
written by automation experts AARON COOK and MARK LUSTIG, who themselves have
taken the bait of quick-fix offers and share their experiences for making it work.

AARON COOK is the quality assurance practice leader at
Collaborative Consulting and has been with the company
for nearly five years. He has led teams of QA engineers and
analysts at organizations ranging from startups to large multi-
nationals. Aaron has extensive experience in the design,
development, implementation and maintenance of QA proj-
ects supporting manual, automated and performance test-
ing processes, and is a member of the American Society for
Quality. Prior to joining Collaborative, he managed the test
automation and performance engineering team for a start-
up company.

MARK LUSTIG is the director of performance engineering
and quality assurance at Collaborative Consulting. In addi-
tion to being a hands-on performance engineer, he special-
izes in application and technical architecture for multi-tiered
Internet and distributed systems. Prior to joining Collaborative,
Mark was a principal consultant for CSC Consulting. Both
men are regular speakers at the Software Test & Performance
conference.

In her upcoming book titled “Implementing Automated Software Testing,” (Addison-Wesley, Feb. 2009), ELFRIEDE
DUSTIN details the Automated Software Testing Process best
practices for test and QA professionals. In this third and final
installment on automated software testing, which begins on
page 27, she provides relevant excerpts from that book on
processes describing use of quality gates at each phase of a
project as a means of preventing automation failure.
Elfriede has authored or collaborated on numerous oth-
er works, including “Effective Software Testing” (Addison
Wesley, 2002), “Automated Software Testing,” (Addison Wesley,
1999) and “Quality Web Systems,” (Addison Wesley, 2001). Her latest book “The Art
of Software Security Testing,” (Symantec Press, 2006), was co-authored with security
experts Chris Wysopal, Lucas Nelson, and Dino Dai Zovi.

Once again we welcome ERNST AMBICHL, Borland’s chief
scientist, to our pages. In the March ‘08 issue, Ernst schooled
us on methods of load testing early in development to pre-
vent downstream performance problems. This time he tells
us how to make legacy and other existing applications more
testable when redesign is not an option, and adding testabil-
ity interfaces would be difficult.
Ernst served as chief technology officer at Segue Software
until 2006, when the maker of SilkTest and other QA tools
was acquired by Borland. He was responsible for the devel-
opment and architecture of Segue’s SilkPerformer and
SilkCentral product lines. For Borland, Ernst is responsible for the architecture of
Borland’s Lifecycle Quality Management products.
TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.



ST&Pedia
Translating the jargon of testing into plain English

The Automation Automaton
By Matt Heusser and Chris McMahon

Even the experts don’t agree on the “right” way to approach test automation. If there was one, it would address one specific situation—which might or might not match yours. What we might all agree on are which problems are the most challenging, and on some of the ways to attack them. We’ll start with two of the most common problems of test automation:

The Minefield
Exploratory testing pioneer James Bach once famously compared the practice of software testing to searching for mines in a field. “If you travel the same path through the field again and again, you won’t find a lot of mines,” he said, asserting that it’s actually a great way to avoid mines.
Automated tests that repeat the same steps over and over find bugs only when things stop working. In some cases, as in a prepared demo in which the user never veers from a script, that may be exactly what you want. But if you know that users won’t stick to the prepared test scripts, the strategy of giving them a list of tests to repeat may not be optimal.

The Oracle
While injecting randomness solves the minefield problem, it introduces another. If there are calculations in the code, the automated tests now need to calculate answers to determine success or failure. Consider testing software that calculates mortgage amounts based on a percentage, loan amount, and payment period. Evaluating the correct output for a given input requires another engine to calculate what the answers should be. And to make sure that answer is correct, you need another oracle. And to evaluate that...and on and on it goes.

RECORD/PLAYBACK
In the 1990s, a common automation strategy was to record a tester’s behavior and the AUT’s responses during a manual test by use of screen capture. Later, these recorded tests can be played back repeatedly without the presence of the manual tester. Unfortunately, these recordings are fragile. They’re subject to (false) failure due to changes in date, screen resolution or size, icons or other minute screen changes.

KEYWORD DRIVEN
An alternative to record/playback is to isolate user interface elements by IDs, and to examine only the specific elements mentioned. Keyword-driven frameworks generally take input in the form of a table, usually with three columns, as shown in Table 1.

TABLE 1
Command                   Element        Argument
type                      your_name_txt  Matthew
click                     Button         Submit
wait_for_text_present_ok  Body           hello, Matthew

Some versions omit the Command or verb, where the framework assumes that every verb is a “click” or that certain actions will always be performed in a certain order. This approach is often referred to as data-driven.
Where record/playback reports too many false failures, keyword-driven frameworks do not evaluate anything more than the exact elements they are told to inspect, and can report too many false successes. For example, in Table 1, if the software accidentally also included a cached last name, and wrote “Hello, Matthew Heusser,” this would technically constitute failure but would evaluate as true.

MODEL DRIVEN TESTING (MDT)
Most software applications can be seen as a list of valid states and transitions between states, also known as a finite state machine. MDT is an approach popularized by Harry Robinson that automates tests by understanding the possible valid inputs, randomly selecting a choice and value to insert. Some MDT software systems even record the order the tests run in, so they can be played back to recreate the error. Under the right circumstances, this can be a powerful approach.

AUTOMATED UNIT TESTS
Software developers commonly use Test Driven Development (TDD) and automated unit tests to generate some confidence that changes to the software did not introduce a serious “break,” or regression. Strictly, TDD is unit testing typically developed in the same language as the code, and does not have the GUI integration challenges associated with automat-
(continued on page 33)

Q: What would your answers be?
• Can we shrink the project schedule if we use test automation?
• Now that the tests are repeatable...are they any good?
• Which heuristics are you using?
• Are we doing an SVT after our BVT? Does the performance testing pass?
A: ST&Pedia will help you answer questions like these and earn the respect you deserve.

Upcoming topics:
January: Change Management, ALM
February: Tuning SOA Performance
March: Web Perf. Management
April: Security & Vuln. Testing
May: Unit Testing
June: Build Management

Matt Heusser and Chris McMahon are career software developers, testers and bloggers. They’re colleagues at Socialtext, where they perform testing and quality assurance for the company’s Web-based collaboration software.
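The keyword-driven table shown in Table 1 can be executed by a small interpreter. The sketch below is only illustrative: the element names and the plain dictionary standing in for the application under test are invented for this example, not part of any real framework.

```python
# Minimal keyword-driven interpreter. A real framework would dispatch
# these commands to a GUI or browser driver; here a dict stands in
# for the application under test so the sketch is self-contained.

def run_table(rows, app):
    """Execute (command, element, argument) rows against `app`."""
    results = []
    for command, element, argument in rows:
        if command == "type":
            app[element] = argument          # fill in a field
            results.append((command, True))
        elif command == "click":
            app["last_click"] = argument     # press a button
            # simulate the app greeting the typed name
            app["Body"] = "hello, " + app.get("your_name_txt", "")
            results.append((command, True))
        elif command == "wait_for_text_present_ok":
            # checks ONLY the named element: this is the false-success
            # risk described above, since extra text goes unnoticed
            ok = argument in app.get(element, "")
            results.append((command, ok))
        else:
            raise ValueError("unknown command: " + command)
    return results

# The rows of Table 1, expressed as data:
table_1 = [
    ("type", "your_name_txt", "Matthew"),
    ("click", "Button", "Submit"),
    ("wait_for_text_present_ok", "Body", "hello, Matthew"),
]

print(run_table(table_1, {}))
```

Note that the final check uses substring containment, so a page that accidentally rendered “hello, Matthew Heusser” would still pass, which is exactly the false-success behavior the column describes.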

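The finite-state view behind MDT can also be sketched in a few lines. The login/home/settings model below is invented for illustration; the point is that a test run is a random walk over valid transitions that records its path, so a failing run can be replayed, as some MDT tools do.

```python
import random

# Hypothetical model of an app as a finite state machine:
# state -> list of (action, next_state) transitions considered valid.
MODEL = {
    "logged_out": [("login", "home")],
    "home": [("open_settings", "settings"), ("logout", "logged_out")],
    "settings": [("back", "home")],
}

def random_walk(model, start, steps, seed=None):
    """Randomly exercise the model, recording the path taken so a
    failing run can be replayed later."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action, state = rng.choice(model[state])
        path.append(action)          # the replayable record
        assert state in model, "model reached an undefined state"
    return path

# Recording the seed makes the "random" run reproducible:
assert random_walk(MODEL, "logged_out", 10, seed=7) == \
       random_walk(MODEL, "logged_out", 10, seed=7)
```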


Hardware Makers Get Windows 7 Pre-Beta

Attendees of the Windows Hardware Engineering Conference (WinHEC) in Los Angeles in early November received a pre-beta of Windows 7, the version of Windows that Microsoft said will replace Vista. The software, which Microsoft characterized in a statement as “API-complete,” introduces a series of capabilities the company says will “make it easier for hardware partners to create new experiences for Windows PC customers.” The move is intended to “rally hardware engineers to begin development and testing” for its nascent operating system. Windows 7 could be available as soon as the middle of next year, according to a Nov. 7 report on downloadsquad.com. Microsoft’s original promise for the operating system when it was introduced in 2007 was that of a three-year timetable that “will ultimately be determined by meeting the quality bar.”
Word from Microsoft is that development of Windows 7 is on schedule. In a Nov. 5 statement, Microsoft said touch-sensitivity and simplified broadband configuration were ready. Helping to afford those opportunities (Microsoft hopes) is a new component called Devices and Printers, which reportedly presents a combined browser for files, devices and settings. “Devices can be connected to the PC using USB, Bluetooth or Wi-Fi,” said the statement, making no mention of devices connected via serial, parallel, SCSI, FireWire, IDE, PS/2, PCI or SATA. The module also provides Wizards.
Claiming to simplify connections to the Internet while mobile, Microsoft has broadened the “View Available Networks” feature to include mobile broadband. For makers of specialized or highly custom devices, Microsoft provides Device Stage, which “provides information on the device status and runs common tasks in a single window customized by the device manufacturer.” Microsoft also said that the “Start menu, Windows Taskbar and Windows Explorer are touch-ready” in Windows 7. It’s unclear if application developers can access the Windows Touch APIs. WinHEC attendees also received a pre-beta of Windows Server 2008 R2.

Zephyr 2.0: Now in the Cloud

Zephyr in late October launched version 2.0 of its namesake software test management tool, which is now available as a SaaS in Amazon’s Elastic Compute Cloud. Zephyr gives development teams a Flex-based system for collaboration; resource, document and project management; test-case creation, automation and archiving; defect tracking; and reporting. The system was previously available only as a self-hosted system.
“The advantage of a SaaS is that you need no hardware,” said Zephyr CEO Samir Shah. “It’s a predeveloped back-end up and running immediately with 24/7 availability, backup, restore, and high bandwidth access from anywhere.” The cost for either system is US$65 per user per month. If you deploy in-house, you also get three free users for the first year.
Also new in Zephyr 2.0 is Zbots, which Shah said are automation agents for integrating Zephyr with other test tools. “These are little software agents that run on the remote machines on which you run Selenium, [HP QuickTestPro] or [Borland] SilkTest.” They let you run all your automated tests from within Zephyr, and bring the results back into Zephyr for analysis, reporting and communication to the team. “You build your automation scripts in those tools,” he explained, “then within Zephyr, you get comprehensive views of all test cases so you can look at the coverage you have across manual and automated tests.”
The system also lets you schedule tests to run. “When it comes time to run automated scripts, we can kick off automation scripts on the target machines, bring results back to Zephyr and update metrics and dashboards in real time.” Shah claims that moving test cases into Zephyr is largely automatic, particularly if they’re stored in Excel spreadsheets or Word documents, which he said is common. “At the end of the day, you have to give a report of the quality of your software. That process is manual and laborious. We automate that by bringing all that [data] into one place, from release to release, sprint to sprint, via a live dashboard. We give the test team and management a view into everything you are testing.”
Also new in Zephyr 2.0 is two-way integration with the JIRA defect tracking system. “Objects stay in JIRA, and if they’re modified in one place they’re modified in the other. We built a clean Web services integration, and the live dashboard also reflects changes in JIRA.”
Version 2.0 improves the UI, Shah said, making test cases faster and easier to write thanks to a full-screen mode. “When a tester is just writing test cases, we expand that for them and let them speed through that process.” Importing and reporting also were improved, he said.

The test manager’s ‘desktop’ in Zephyr provides access to information about available human and technology resources, test metrics and execution schedules, defects and administrative settings.



Feedback

SOFTWARE IS BUGGED BY BUGS


Regarding “Software is Deployed, Bugs and All,” Test & QA Report,
July 29, 2008, (http://sdtimes.com/link/3299432634):
I was very interested in this article. I will have to track down
the white paper and take a look, but there are some comments
that I’d like to make based on the article. First, a disclosure of my
own: I used to work for a company that made tools in this space
and presently work for a distributor of IBM Rational tools.
Your “paradoxical tidbit” is I think absolutely a correct obser-
vation. It highlights something I’ve observed for some time...that
many businesses seem to think managers in the IT department
don’t necessarily need to have an IT background, or that they have
an IT background but [no] real management training.
The net result is they either don’t fully understand the impli-
cations of problems with the process, or they understand the prob-
lems but don’t have the management skills to address them. It’s
very rare to find a truly well managed IT department. I think what
you’ve described is really a symptom of that.
Some of the other “conclusions” seem less clearly supportable.
Statistics is a dangerous game. Once you start looking at the low-
er percentage results it is very easy to make conclusions that may
be supportable, but often there are other equally supportable con-
clusions.
For example, take the data regarding time required to fix field defects: just
because it could take 20-30 days to fix a problem, that isn’t neces-
sarily the worst outcome for a business. I can think of several cas-
es where this would be acceptable. To list a few...
1. The cost to the business of not releasing the product with
defects could be even greater
2. There is a work around for the defect
3. The defect is non-critical
4. There is a release cycle in place that only allows fixes to be
deployed once a month.
I’m sure there are others.
The takeaways are where the real test of a survey lies. From
what you’ve published of the report, it seems that the second one
is supportable, after a fashion. Clearly they need to fix their process,
and it would seem obvious that an automated solution should be a
part of that. However, I do wonder if they actually got data to
support that from the survey.
The first conclusion says there are “debilitating consequences.”
Again I wonder if the survey actually established that. Clearly there
are consequences, but were they debilitating? Was there any data
about the commercial impact of the defects? Without that it is
hard to say. Yes we all know about the legendary bugs and their
consequences, but that does not automatically imply that all defects
are debilitating.
In any event, it is a topic that should be discussed more often
and I enjoyed the article.
Mark McLaughlin
Software Traction
South Australia

FEEDBACK: Letters should include the writer’s name, city, state, company
affiliation, e-mail address and daytime phone number. Send your thoughts to
feedback@bzmedia.com. Letters become the property of BZ Media and may
be edited for space and style.



Beyond Tools: Test Automation in Practice
Once you’ve taken the bait, how to make the most of your catch
By Aaron Cook and Mark Lustig

One of the biggest challenges facing quality teams today is the development and maintenance of a viable test automation solution. Tool vendors do a great job of selling their products, but too often lack a comprehensive plan for putting the automation solution into practice.
This article will help you undertake a real-world approach to planning, developing, and implementing your test automation as part of the overall quality efforts for your organization. You will learn how to establish a test automation environment regardless of the tools in your testing architecture and infrastructure. We also give you an approach to calculating the ROI of implementing test automation in your organization, an example test automation framework including the test case and test scenario assembly, and rules for maintenance of your automation suite and how to account for changes to your Application Under Test (AUT). We also cover an approach to the overall test management of your newly implemented test automation suite.
The set of automation tools an organization requires should fit the needs, environments, and phases of its specific situation. Automation requires an integrated set of tools and the corresponding infrastructure to support test development, management, and execution. Depending on your quality organization, this set of automation tools and technologies can include solutions for test management, functional test automation, security and vulnerability testing, and performance testing. Each of these areas is covered in detail.
In most organizations we have seen, phases and environments typically include development and unit testing, system integration testing, user acceptance testing, performance testing, security and vulnerability testing, and regression testing. Depending on your specific quality processes and software development methodologies, automation may not be applicable or practical for all testing phases.
Additional environments to consider may include a break-fix/firecall environment, which is used to create and test production “hot fixes” prior to deployment to production. A training environment may also exist. An example of this would be for a call center, where training sessions occur regularly. While this is usually a version of the production environment, its dedicated use is to support training.
Automation is most effective when applied to a well-defined, disciplined set of test processes. If the test processes are not well defined, though maturing, it may be practical and beneficial to tactically apply test automation over time. This would happen in parallel to the continued refinement and definition of the test process. A key area where automation can begin is test management.

Test Management
Mature test management solutions provide a repeatable process for gathering



and maintaining requirements; planning, developing and scheduling tests; analyzing results; and managing defects and issues. The core components of test management typically include test requirements management (and/or integration with a requirements management solution), and the ability to plan and coordinate the testing along with integration with test automation solutions. Many of the test management processes and solutions available today also include reporting features such as trending, time tracking, test results, project comparisons, and auditing. Test managers need to be cognizant of integrating with software change and configuration management solutions as well as defect management and tracking.
These core components can be employed within a single tool or can be integrated using multiple tools. Using one integrated tool can improve the quality process and provide a seamless enterprise-wide solution for standards, communication and collaboration among distributed test teams by unifying multiple processes and resources throughout the organization.
For example, business analysts can use the test management tool(s) (TMT) to define and store application business requirements and testing objectives. This allows the test managers and test analysts to use the TMT to define and store test plans and test cases. The test automation engineers can use the TMT to create and store their automated scripts and associated frameworks. The QA testers can use the TMT to run both manual and automated tests and design and run the reports necessary to examine the test execution results as well as tracking and trending the application defects. The team’s program and project managers can use the TMT to create status reports, manage resource allocation and decide whether an application is ready to be released to production. Also, the TMT can be integrated with other automation test suite tools for functional test automation, security and vulnerability test automation, and performance test automation.
The test management tool allows for a central repository to store and manage testing work products and processes, and to link these work products to other artifacts for traceability. For example, by linking a business requirement to a test case and linking a test case to a defect, the TMT can be used to generate a traceability report allowing the project team to further analyze the root cause of identified defects. In addition, the TMT allows for standards to be maintained, which increases the likelihood that quality remains high. However, facilitating all resources to leverage this tool requires management sponsorship and support.

Functional Test Automation
Functional test automation should be coupled with manual testing. It is not practical for all requirements and functions to be automated. For example, verifying the contents of a printed document will likely best be done manually.
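The requirement-to-test-case-to-defect linkage described above is, at bottom, a pair of joins. As a hedged illustration (the record shapes below are invented for this sketch, not any particular TMT’s schema), a traceability report might be derived like this:

```python
# Toy records standing in for TMT artifacts; field names are invented.
requirements = {"REQ-1": "Customer can reset password"}
test_cases = [{"id": "TC-7", "req": "REQ-1", "name": "reset via e-mail"}]
defects = [{"id": "DEF-42", "test": "TC-7", "summary": "reset link expires early"}]

def traceability(requirements, test_cases, defects):
    """Join requirement -> test case -> defect for root-cause analysis."""
    report = []
    for tc in test_cases:
        # find defects linked to this test case
        linked = [d for d in defects if d["test"] == tc["id"]]
        report.append({
            "requirement": requirements[tc["req"]],
            "test_case": tc["name"],
            "defects": [d["id"] for d in linked],
        })
    return report

print(traceability(requirements, test_cases, defects))
```

Run against the toy data, the report ties each requirement to its test case and any defects filed against it, which is the view a project team would use to trace a defect back to the requirement it violates.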



HOOKED ON TESTING

However, verifying the print formatting for the document can easily be automated, allowing the test engineer to focus efforts on other critical testing tasks.
Functional test scripts are ideal for building a regression test suite. As new functionality is developed, the functional script is added to the regression test suite. By leveraging functional test automation, the test team can easily validate functionality across all environments, data sets, and select business processes. The test team can more easily and readily document and replicate identified defects for developers. This ensures that the development team can replicate and resolve the defects faster. They can run the regression tests on upgraded and enhanced applications and environments during off hours, so that the team is more focused on new functionality introduced during the last build or release to QA. Functional test automation can also provide support for specific technologies (e.g., GUI, text-based, protocol specific), including custom controls.

Performance Testing
Performance testing tools can be used to measure load/stress capability and predict system behavior using limited resources. Performance testing tools can emulate hundreds and thousands of concurrent users, putting the application through rigorous real-life user loads. IT departments can stress an application from end-to-end and measure response times of key business processes. Performance tools also collect system and component-level performance information through system monitors and diagnostic modules. These metrics can be combined to analyze and allow teams to drill down to isolate bottlenecks within the architecture, with load generation across multiple machines and data centers/geographic locations.

Security and Vulnerability Testing
Security testing tools enable security vulnerability detection early in the software development life cycle, during the development phase as well as testing phases. Proactive security scanning accelerates remediation, and saves both time and money when compared with later detection.
Static source code testing scans an application’s source code line by line to detect vulnerabilities. Dynamic testing tests applications at runtime in many environments (i.e., development, acceptance test, and production). A mature security and vulnerability testing process will combine both static and dynamic testing. Today’s tools will identify most vulnerabilities, enabling developers to prioritize and address these issues.

Test Environments
When planning the test automation environment, a number of considerations must be addressed. These include the breadth of applications, the demand for automation within the enterprise, the types of automation being executed, and the resource needs over time to support all system requirements. A key first step is tool(s) selection. Practitioners should consider questions such as:
• Should all tools be from a single vendor?
• How well do the tools interact with the AUT?
• How will tools from multiple vendors work together?
That last issue—that of integration—is key. Connecting tools effectively increases quality from requirements definition through requirements validation and reporting.
…ly and concurrently. The size and scale of each environment also can be managed to enable the appropriate testing while minimizing the overall infrastructure requirement.
The sample environment topology below defines a potential automation infrastructure. This includes:
• Primary test management and coordination server
• Automation test centralized control server
• Automation test results repository and analysis server
• System and transaction monitoring and metrics server
• Security and vulnerability test execution server
• Functional automation execution server(s)
• Load and stress automation execution server(s)
• Sample application under test (AUT), including:
  • Web server
  • Application server(s)
  • Database server

Return on Investment
Defining the ROI of an automation installation can be straightforward. For example, comparisons across time to create a defect before and after test automation, time to create and execute a test plan, and time to create reports before and after test management can all be effective measures.
Test automation is an investment that yields the most significant benefits over time. Automation is most effectively applied to a well-defined, disciplined test process. The testing life cycle is multi-phased, and includes unit, integration, system, performance, environment, and user acceptance testing. Each of these phases has different opportunities for cost and benefit improvements. Additionally, good test management is a key competency in realizing improvements and benefits when using test automation.
There are different ways of defining the return on investment (ROI) of test
ture. Most of the commercial and open The next step is to determine the automation. It is important to realize that
source performance testing tools avail- number of test tool environments while ROI is easily defined as benefits over
able include metrics for key technologies, required. At a minimum, a production costs, benefits are more accurately
operating systems, programming lan- test tools environment will be necessary defined as the benefits of automated testing
guages and protocols. to support test execution. It is worth con- versus the benefits of manual testing over costs
They include the ability to perform sidering the need for a test automation of automated testing versus the costs of man-
visual script recording for productivity, as development environment as well. By hav- ual testing. Simply put, there are costs asso-
well as script based viewing and editing. ing both a test automation development ciated with test automation including soft-
The performance test automation tools environment and production automation ware acquisition and maintenance, train-
also provide flexible load distribution to environment, automation engineers can ing costs, automation development costs
create synthetic users and distribute load develop and execute tests independent- (e.g., test script development), and

14 • Software Test & Performance DECEMBER 2008


HOOKED ON TESTING

automation maintenance costs (e.g., test script maintenance).
One simple approach to determining when it makes sense to automate a test case is to weigh these costs against the benefits. The benefits of automation are two-fold, starting with a clear cost savings associated with test execution. First, automated tests are significantly faster to execute and report on than manual tests. Second, resources now have more time to dedicate toward other testing responsibilities such as test case development, test strategy, and executing tests under fault conditions.
Additional benefits of automation include:
• Flexibility to automate standalone components of the testing process. This may include automating regression tests, but also actions directly related to test management, including defect tracking and requirements management.
• Increased depth of testing. Automated scripts can enable more thorough testing by systematically testing all potential combinations of criteria. For example, a single screen may be used for searching, with results formatted differently for different result sets returned. By automating the execution of all possible permutations of results, a system can be more thoroughly executed.
• Increased range of testing. Introduce automation into areas not currently being executed. For example, if an organization is not currently executing standalone component testing, this area could be automated.
Though ROI is a prevailing tool for conveying the value of test automation, the less quantifiable business advantages of testing may be even more important than cost reduction or optimization of the testing process itself. These advantages include the value of increased system quality to the organization. Higher quality yields higher user satisfaction, better business execution, better information, and higher operational performance. Conversely, a low-quality experience may result in decreased user satisfaction, missed opportunities, unexpected downtime, poor information, missed transactions, and countless other detriments to the company. In this sense, we consider the missed revenue or wasted expense of ill-performing systems and software, not just the isolated testing process.

Test Automation Considerations
When moving toward test automation, a number of factors must be considered.
1. What to automate: Automation can be used for scenario (end-to-end) based testing, component-based testing, batch testing, and workflow processes, to name a few. The goals of automation differ based on the area of focus. A given project may choose to implement differing levels of automation as time and priorities permit.
2. Amount of test cases to be automated, based on the overall functionality of an application: What is the duration of the testing life cycle for a given application? The more test cases that are required to be executed, the more valuable and beneficial the execution of test cases becomes. Automation is more cost effective when required for a higher number of tests and with multiple permutations of test case values. In addition, automated scripts execute at a consistent pace, while individual testers' pace varies widely, based on each tester's experience and productivity.
3. Application release cycles: Are releases daily, weekly, monthly, or annually? The maintenance of automation scripts can be more challenging as the frequency of releases increases if a framework-based approach is not used. The frequency of execution of automated regression tests




will yield financial savings as more regression test cycles can be executed quickly, reducing overall regression test efforts from an employee and time standpoint. The more automated scripts are executed, the more valuable the scripts become. It is worth noting the dependency on a consistent environment for test automation. To mitigate risks associated with environment consistency, environments must be consistently maintained.
4. System release frequency: Depending on how often systems are released into the environments where test automation will be executed (e.g., acceptance testing environment, performance testing environment, regression testing environments), automated testing will be more effective in minimizing the risk of releasing defects into a production environment, lowering the cost of resolving defects.
5. Data integrity requirements within and across test cases: If testers create their own data sets, data integrity may be compromised if any inadvertent entry errors are made. Automation scripts use data sets that are proven to maintain data integrity and provide repeatable expected results. Synergistic benefits are attained when consistent test data can be used to validate individual test case functionality as well as multi-test case functionality. For example, one set of automated test cases provides data entry while another set of test cases can test reporting functionality.
6. Automation engineer staffing and skills: Individual experience with test automation varies widely. Training courses alone will result in a minimum level of competency with a specific tool. Practical experience, based on multiple projects and lessons learned, is the ideal means to achieve a strong set of skills, time permitting. A more pragmatic means is to leverage the lessons learned from industry best practices, and the experiences of highly skilled individuals in the industry. An additional staffing consideration is resource permanence. Specifically, are the automation engineers permanent or temporary (e.g., contractors, consultants, outsourced resources)? Automation engineers can easily develop test scripts following individual scripting styles that can lead to costly or difficult knowledge transfers.
7. Organizational support of test automation: Are automation tools currently owned within the organization, and/or has budget been allocated to support test automation? Budget considerations include:
• Test automation software
• Hardware to host and execute automated tests
• Resources for automation scripting and execution, including test management.

TABLE 1: GUTS OF THE FRAMEWORK
A business scenario defined to validate that the application will not allow the end user to define a duplicate customer record.
Components required:
• Launch Application
  LaunchSampleApp(browser, url)
  Returns message (success or failure)
• Login
  Login(userID, password)
  Returns message (success or failure)
• Create new unique customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  Outputs data to a file including application-generated customer ID, org, name, and address
  Returns message (success or failure)
• Query and verify new customer data
  QueryCustomerRecord(file path, file name)
  Returns message (success or failure)
• Create identical customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  Outputs data to a file including application-generated customer ID, org, name, and address
  Returns message (success or failure)
  Handle successful error message(s) from application
• Gracefully log out and return application to the home page
  Logout
  Returns message (success or failure)

A Test Automation Framework
To develop a comprehensive and extensible test automation framework that is easy to maintain, it's extremely helpful if the automation engineers understand the AUT and its underlying technologies. They should understand how the test automation tool interacts with and handles UI controls, forms, and the underlying API and database calls.
The test team also needs to understand and participate in the development process so that the appropriate code drops can be incorporated into the overall automation effort and the targeted framework development. This comes into play if the development organization follows a traditional waterfall SDLC. If the dev team follows a waterfall approach, then major functionality in the new code has to be accounted for in the framework. This can cause significant automation challenges. If the development team is following an agile or other iterative approach, then the automation engineers should be embedded in the team. This also has implications for how the automation framework will be developed.

Waterfall Methodology
To estimate the development effort for the test automation framework, the automation engineers require access to the business requirements and technical specifications. These begin to define the elements that will be incorporated into the test automation framework. From the design specifications and business requirements, the automation team begins to identify those functions that will be used across more than one test case. This forms the outline of the framework.
Once these functions are identified, the automation engineers then begin to develop the code required to access and validate between the AUT and the test automation tool. These functions often take the form of the tool interacting with each type of control on a particular form. Determining the proper approach for custom-built controls and the test tool can be challenging. These functions often take the form of setting a value (typically provided by the business test case) and validating a result (from the AUT). By breaking each function down into the




simplest construct, the test automation engineer can begin to assemble them into more complex functions easily, without a great deal of code development.
For example, let's say the AUT has a requirement for Login. This is driven from the business requirement. This can also be a good example of a reusable test component. The function Login can be broken down into four separate functions: SetUserID, SetPassword, SubmitForm, and VerifyLogin. Each of these functions can be generalized so that you end up with a series of functions to set a value (in this case, both the UserID and the Password), a generic submit function, and a generic validate-result function. These functions form the basis for the beginning of the development of your test automation framework.

FIGURE 1: TRANSPARENT SCALES

WebTableSearchByRow_v3

'This function searches through a webtable to locate any item in the table.
'WebTableSearchByRow_v3 returns the row number of the correct item.

'This function takes 6 parameters
' @param wtsbr_browser        ' Browser as defined in the Library
' @param wtsbr_page           ' Page as defined in the Library
' @param wtsbr_webtable       ' WebTable as defined in the Library
' @param wtsbr_rowcount       ' Total Row Count for the WebTable
' @param wtsbr_searchcriteria ' String value to search for
' @param wtsbr_columncount    ' Total column count for the webtable

' return Row Number for Search Criteria

Example:
rownum = WebTableSearchByRow_v3 (br, pg, wt, rc, sc, cc)

Agile Methodology
In the case of an Agile methodology, oftentimes the development begins by coding a test case that will fail. Then the developers write just enough code to pass the test case. Once this is accomplished, the code is added to the hourly/daily build and compiled into the release candidate.
For test automation to be successful in this type of environment, the test automation engineers need to be involved upfront in the decisions of what tests to automate and what priority to automate them in. In this case, the team determines the list of requirements for the given sprint and assigns priority to the development items. The testers, in conjunction with the rest of the team, determine which test cases will be candidates for this sprint and in what order to begin development. From there, the actual development begins to follow a similar pattern to the earlier waterfall methodology. For example, suppose one of the scrum team decides to develop the AUT login function. The testers will follow the same process to decompose the high-level business process into three discrete technical components: SetValue, SubmitForm, and VerifyResult. (Table 1 shows another example.) These functions will form the basis for the underlying test automation framework. After each sprint is accomplished, the scrum team performs the same evaluation of functions to build and automate, priority, and resources required. From that point forward, the automation framework begins to grow and mature alongside of the application under development.

Putting the Pieces Together
When it comes time to execute your automated tests, it is often the case that they must be assembled into multiple technical and business scenarios. By taking a framework-based approach to building the automation components, the job of assembly is straightforward. The business community needs to dictate the actual business scenario that requires validation. They can do this in a number of ways. One way is to provide the scenario to the automation engineers (in a spreadsheet, say) and have them perform the assembly. A more pragmatic approach is for the business community to navigate to the test management system that is used for the storage and execution of the test automation components and to assemble the scenario directly. This makes the job of tracking and trending the results much easier and less prone to interpretation.
Because maintenance is the key to all test automation, the QA team needs to standardize the approach and methods for building all test automation functions. Each function needs to have a standard header/comment area describing the function as well as all necessary parameters. All outputs and return values need to be standardized. This allows for greater flexibility in the development of your automation code by allowing the automation engineer to focus on the tool interaction with your AUT without having to reinvent the standard output methods. Any QA engineer can pick up that function or snippet of code and know what the output message will be. For example, all WebTableSearchByRow functions return a row number based on the search criteria (Figure 1).
To be successful, the automation effort must have organizational support and be given the same degree of commitment as the initial development effort for the project, and goals for test automation must be identified and laid out up front. Automation should have the support and involvement of the entire team, and the tool(s) should be selected and evaluated from within your environment.
Your test automation team needs a stable and consistent environment in which to develop components of the test automation framework and the discrete test cases themselves. The number and type of tests to be automated need to be defined (functional UI, system-level API, performance, scalability, security), and automators need to have dedicated time to develop the automation components, store and version them in a source code control system, and schedule the test runs to coincide with stable, testable software releases.
Failure in any one of these areas can derail the entire automation project. Remember, test automation cannot completely replace manual testing, and not all areas of an application are candidates for automation. But test automation of the parts that allow it, when properly implemented, can enhance the ability of your quality organization to release consistent, stable code in less time and at lower cost. ý

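The standardized, status-returning components described above (and the duplicate-customer scenario of Table 1) can be sketched in outline. What follows is an illustrative Python sketch, not the article's tool-specific code; the function names, the credential check, and the in-memory "database" are hypothetical stand-ins for a real AUT:

```python
# Illustrative sketch of standardized test components: every component
# returns the same (success, message) pair, so business scenarios can be
# assembled from a simple table of steps, as the article describes.

def login(user_id, password):
    ok = (user_id, password) == ("demo", "secret")
    return ok, "login ok" if ok else "login failed"

def create_unique_customer(org, name, address, db):
    key = (org, name)
    if key in db:
        return False, "duplicate customer rejected"
    db[key] = address
    return True, "customer created"

def run_scenario(steps):
    """Execute (component, args) pairs; stop and report on the first failure."""
    results = []
    for func, args in steps:
        ok, msg = func(*args)
        results.append((func.__name__, ok, msg))
        if not ok:
            break
    return results

db = {}
steps = [
    (login, ("demo", "secret")),
    (create_unique_customer, ("Borland", "Ada Lovelace", "1 Main St", db)),
    (create_unique_customer, ("Borland", "Ada Lovelace", "1 Main St", db)),
]
results = run_scenario(steps)  # third step fails: duplicate record
```

Because every component returns the same shape of result, a spreadsheet of business steps maps directly onto the `steps` table, and any QA engineer can predict what the output message will be.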

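The "increased depth of testing" benefit discussed in this article, systematically executing all potential combinations of criteria, can be sketched with a small generator. The criteria names and values below are invented purely for illustration:

```python
from itertools import product

# Hypothetical search-screen criteria; automation can systematically execute
# every combination, which a manual tester would rarely attempt.
criteria = {
    "region":  ["US", "EU", "APAC"],
    "status":  ["active", "inactive"],
    "sort_by": ["name", "date"],
}

def all_permutations(criteria):
    """Yield one test-case dict per combination of criteria values."""
    names = sorted(criteria)
    for values in product(*(criteria[n] for n in names)):
        yield dict(zip(names, values))

cases = list(all_permutations(criteria))  # 3 * 2 * 2 = 12 test cases
```

Twelve combinations is trivial for a script to enumerate and execute at a consistent pace, but already tedious and error-prone to cover by hand; real search screens multiply far beyond that.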

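The article's framing of ROI, the benefits of automated versus manual testing over their respective costs, can be reduced to a simple break-even sketch. The cost figures used in the example are assumptions for illustration, not measurements:

```python
import math

# Purely illustrative cost model: "ROI" here is manual execution cost minus
# automated execution cost over the same number of test cycles.

def automation_roi(setup_cost, cost_per_auto_run, cost_per_manual_run, runs):
    """Cumulative saving of automated vs. manual execution after `runs` cycles."""
    automated = setup_cost + cost_per_auto_run * runs
    manual = cost_per_manual_run * runs
    return manual - automated  # positive once automation has paid for itself

def break_even_runs(setup_cost, cost_per_auto_run, cost_per_manual_run):
    """Smallest number of cycles at which automation becomes cheaper."""
    saving_per_run = cost_per_manual_run - cost_per_auto_run
    if saving_per_run <= 0:
        return None  # execution savings alone never recoup the setup cost
    return math.ceil(setup_cost / saving_per_run)
```

The sketch reflects the article's point that automation pays off over time: the setup cost is fixed, while the per-cycle saving compounds with every additional regression run.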
TEACH Your Old Applications To Be More Testable

By Ernst Ambichl

In my 15 years of experience in building functional and performance testing tools, there is one statement I've heard from customers more than any other: "Changing the application to make it more testable with your tool is not an option for my organization." If companies don't change this mindset, then test automation tools will never deliver more than marginal improvements to testing productivity.
Testability improvements can be applied to existing, complex applications to bring enhanced benefits and simplicity to testers. This article will teach you how to enhance testability in legacy applications where a complete re-design of the application is not an option (and special purpose testability interfaces can not easily be introduced).
You'll also learn how testability improvements can be applied to an existing, complex (not "Hello World") application and the benefits gathered through improved testability. This article covers testability improvements from the perspective of functional testing as well as from that of performance and load testing, showing which changes are needed in the application to address either perspective. You'll also understand the effort needed to enhance testability, weighed against the benefits gathered in terms of increased test coverage and test automation and decreased test maintenance, and how an agile development process will help to successfully implement testability improvements in the application.

What Do I Mean by 'Testability?'
There are many ways to interpret the term testability, some in a general and vague sense and others highly specific. For the context of this article, I define testability from a test automation perspective. I found the following definition, which describes testability in terms of visibility and control, most useful:
Visibility is the ability to observe the states, outputs, resource usage and other side effects of the software under test.
Control is the ability to apply inputs to the software under test or place it in specified states.
Ultimately, testability means having reliable and convenient interfaces to drive the execution and verification of tests.

Stepchild of the Architecture
The likelihood and willingness of development teams to provide test interfaces varies considerably.
Designing your application with testability in mind automatically leads to better design, with better modularization and less coupling between modules. For modern development practices like extreme programming or test-driven development, testability is one of the cornerstones of good application design and a built-in practice.
Unfortunately, for many existing applications, testability was not a key objective from the start of the project. For these applications, it often is impossible to introduce new special purpose test interfaces. Test interfaces need to be part of the initial design and architecture of the application. Introducing them in an existing application can cause major architectural changes, which most often includes re-writing of extensive parts of the application that no one is willing to pay for. Also, test interfaces need a layered architecture. Introducing an additional layer for the testability interfaces is often impossible for monolithic application architectures.
Testability interfaces for performance testing need to be able to sufficiently test the multiuser aspects of an application. This can become complex, as it usually requires a remote-able test interface. Thus, special purpose testability interfaces for performance testing are even less likely to exist than testability interfaces for the functional aspects of the application.
To provide test automation for applications that were not designed with testability in mind, existing application interfaces need to be used for testing purposes. In most cases this means using the graphical user interface (GUI) for functional testing and a protocol interface or a remote API interface for performance testing. This is the approach traditional


functional and load testing tools are using.
Of course, there are problems using the available application interfaces for testing purposes, and you will find many examples of their deficiencies, especially when reading articles on agile testing. Some of those examples are:
• Existing application interfaces are usually not built for testing. Using them for testing purposes may use them in a way the existing client of the interface never would have intended. Driving tests through these interfaces can cause unexpected application behavior as well as limited visibility and control of the application for the test tool.
• A GUI with "custom controls" (GUI controls that are not natively recognized by the test tool) is problematic because it provides only limited visibility and control for the test tool.
• GUI controls with weak UI object recognition mechanisms (such as screen or window coordinates, UI indexes, or captions) provide limited control, especially when it comes to maintenance. This results in fragile tests that break each time the application UI changes, even minimally.
• Protocol interfaces used for performance testing are also problematic. Session state and control information is often hidden in opaque data structures that are not suitable for testing purposes. Also, the semantic meaning of the protocol often is undocumented.
But using existing interfaces need not be less efficient or effective than using special purpose testing interfaces. Often slight modifications to how the application is using these interfaces can increase the testability of an application significantly. And again, using the existing application interfaces for testing is often the only option you have for an existing application.

Ernst Ambichl is chief scientist at Borland.

Interfaces with Testability Hooks
A key issue with testability using an existing interface is being able to name and distinguish interface controls using stable identifiers. Often it's the absence of stable identifiers for interface controls that makes our life as testers so hard.
A stable identifier for a control means that the identifier for a certain control is always the same – between invocations of the control as well as between different versions of the application itself. A stable identifier also needs to be unique in the context of its usage, meaning that there is not a second control with the same identifier accessible at the same time. This does not necessarily mean that you need to use GUID-style identifiers that are unique in a global context. Identifiers for controls should be readable and provide meaningful names. Naming conventions for these identifiers will make it easier to associate the identifier to the actual control.
Using stable identifiers also avoids the drawbacks of using the control hierarchy for recognizing controls. Control hierarchies are often used if you're using weak control identifiers, which are not unique in the current context of the application. By using the hierarchy of controls, you're providing a "path" for how to find the



www.futuretest.net
OLD DOG, NEW TRICKS

control. Relying on this path for control recognition just increases the dependency of the control recognition on other controls, and introduces a maintenance burden when this hierarchy changes.
To improve the testability of your application, one of the simplest and most efficient practices is to introduce stable control identifiers and expose them via the existing application interfaces. This practice not only works for functional testing using the GUI, but also can be adopted for performance testing using a protocol approach. How to accomplish this for a concrete application is shown next.

Case Study: Testability Improvements in an Enterprise Application
A major theme for the latest release of one of our key applications that we are developing in Borland's R&D lab in Linz, Austria, was to make the application ready for enterprise-wide, global usage. This resulted in two key requirements:
• Providing localized versions of the application.
• Providing significantly increased performance and scalability of the application in distributed environments.
Our application is a Web-based multiuser application using a HTML/Ajax front-end with multiple tiers built on a Java EE infrastructure with a database backend. The application was developed over several years and now contains several hundred thousand lines of code.
To increase the amount of test automation, we wanted to be able to test localized versions of the application with minimal or no changes to the functional test scripts we used for the English version of the product. Also, we wanted to be able to test the scalability and performance of existing functionality on a regular nightly build basis over the whole release cycle. These regular performance tests should ensure that we detect performance regressions as early as possible. New functionality should be tested for performance as soon as the functionality was available, and in combination with existing functionality, to ensure it was built meeting the scalability and performance requirements. We thought that by exe-

The amount of test automation for performance tests increases the importance of maintainable and stable performance test scripts. In previous releases, our performance test scripts lacked stability, maintainability and customizability. Often it happened that only small modifications in the application broke performance test scripts.
Also, it was hard to detect the reason for script failures. Instead of searching for the root cause of broken scripts, performance test scripts were re-recorded and re-customized. Complex test cases were not covered (especially scenarios that change data), as they were hard to build and had a high likelihood to break when changes were introduced in the application. Also, customization of tests was complicated, and increased the fragility of the tests.
To get a better insight on the reasons for the poor testability, we needed to look at the architecture of the application and the interfaces we used for testing. For Web-based applications, the most common approach for performance testing is to test at the HTTP-protocol level. So let's take a look into the HTTP protocol of the application.
Our application is highly dynamic, so HTTP requests and responses are dynamic, using URL query string parameters in the following way: Each request that triggers an action on a control (such as pressing a button, clicking a link, expanding a tree node) uses an opaque control handle to identify the control on the server:

http://host/borland/mod_1?control_handle=68273841207300477604!...

On the server, this control handle is used to identify the actual control instance and the requested action. As the control handle references the actual instance of a control, the value of the handle changes after each server request (as in this case, where a new instance of the same control will be created). The control handle is the only information the server exposes through its interface. This is perfectly fine for the functionality of the application, but is a nightmare for every testing tool (and of course for testers).
What does this mean for identifying the actions on controls in a test tool? It means that because of the dynamic nature of the requests (control handles constantly change), a recorded, static HTTP-based test script never will work, as it calls control handles which are not valid. So the first thing testers need to do

FIG. 1: PARSING WORDS

Parse the dynamic control handle from a HTTP response:
WebParseDataBound(<dynamic control handle>, "control_handle=", "!", …)

The function WebParseDataBound parses the response data of a HTTP request using "control_handle=" as the left boundary and "!" as the right boundary and returns the result into <dynamic control handle>. Your tool's API will vary, of course, but might look something like this.

Use the dynamic control handle in an HTTP request:
http://host/borland/mod_1?control_handle=<dynamic control handle>!...

FIG. 2: PARSING LOGINS

WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin, "control_handle=", "!", 6);

To call the actual "Login" button then might look like this:

WebPage("http://host/borland/mod_1?control_handle=" + hBtnLogin + "!");
cuting performance tests on a regular ic. This means that they include dynam- is to replace the static control handles in
basis, we should be able to continuously ic session information as well as dynam- the test script with the dynamic handles
measure the progress we were making ic information about the active UI con- generated by the server at runtime.
toward the defined scalability and per- trols. While session information (as in Sophisticated performance test tools will
formance objectives. many other Web applications) is trans- allow you to define parsing and replace-
ported using cookies and therefore auto- ment rules that do this for you and cre-
Performance Testing matically handled by most HTTP based ate the needed modifications in the test
Among the problems we had was frag- testing tools, the information about script automatically (Figure 1).
ile performance test scripts. Increasing actions and controls is represented as An application of course has many

22 • Software Test & Performance DECEMBER 2008


OLD DOG, NEW TRICKS

actionable controls on one page, which CIDs are only used to provide more between the test scripts and the different
in our case all use the same request for- context information for the test tool, the localized versions of the application.
mat shown above. The only way to iden- testers, and developers. There is no Captions are of course language depend-
tify a control using its control handle is change in the application behavior – the ent, and therefore are not a good candi-
by the order number of the control han- application still uses the control handle date for stable control identifiers.
dle compared to other control handles and ignores the CID. The first option would have been to
in the response data (which is either The HTTP request format changed localize the test scripts themselves by
HTML/JavaScript or JSON in our case). from: externalizing the captions and providing
E.g.: The control handle for the localized versions of the externalized cap-
“Login” button in the “Login” page http://host/borland/mod_1?control_handle= tions. But still this approach introduces
68273841207300477604!...
might be the 6th control handle in the a maintenance burden when captions
page. A parsing statement needed to to: change or when new languages need to
parse the value of the handler for the be supported.
“Login” button then might look like that http://host/borland/mod_1?control_handle= Using the caption to identify an
*tm_btn_login.68273841207300477604!...
is Figure 2. HTML link (HTML fragment of an
It should be obvious from the small As CIDs are included in all responses HTML page):
example in Figure 2 that using the con- as part of their control handlers, it is easy
trol handle to identify controls is far from to create a parsing rule the uniquely pars- <A …
HREF=" http://...control_handle=*tm_btn_login
a stable recognition technique. The prob- es the control handle – the search will .6827..."
lems that come with this approach are: only return one control handle as the >Login</A>
• Poor readability: Scripts are hard to read CIDs are always unique in the context
as they use order number information to of the response. Calling the actual “Login” link then
identify controls. The same script as above now looks like: might look like this:
• Fragile to changes: Adding controls or
just rearranging the order of controls WebPage(“http://host/borland/login/”); MyApp.HtmlLink(“Login”).Click();
WebParseDataBound(hBtnLogin,
will cause scripts to break or even worse "control_handle=*tm_btn_login.", “!”, 1);
As we already have introduced the con-
introduces unexpected behavior caus- cept of stable control identifiers (CIDs)
ing subsequent errors which are very WebPage(“http://host/borland/mod_1?control_ for performance testing we wanted to
hard to find. handle==*tm_btn_login.” + reuse these identifiers also for GUI level
hBtnLogin + “!”);
• Poor maintainability: Frequent changes testing. Using CIDs makes the test scripts
to the scripts are needed. Detecting the By introducing CIDs and exposing language independent without the need
changes of the order number of control CIDs at the HTTP protocol-level, we are to localize the scripts (at least for control
handles is cumbersome, as you have to now able to build extremely reliable and recognition – verification of control prop-
search through response data to find the stable test scripts. Because a CID will not erties still may need language dependent
new order number of the control handle. change for an existing control, there is code). To make the CID accessible for
no maintenance work related to changes our functional testing tool, the HTML
Stable Control Identifiers such as introducing new controls in a controls of our application exposed a cus-
Trying to optimize the recognition tech- page, filling the page with dynamic con- tom HTML attribute named “CID.” This
nique within the test scripts by adapt- tent that exposes links with control han- attribute is ignored by the browser but
ing more intelligent parsing rules dlers (like a list of links), or reordering is accessible from our functional testing
(instead of only searching for occur- controls on a page. Also the scripts are tool using the browser’s DOM.
rences of “control_handle”) proved not now more readable and the communi-
to be practical. We even found it coun- cation between testers and developers is Using the CID to identify a link:
terproductive because the more unam- easier because they use the same notion
biguous the parsing rules were the less when they were talking about the con- <A …
HREF=" http://...control_handle=*tm_btn_login
stable they became. trols of the application. .6827..."
So we decided to address the root CID="tm_btn_login"
cause of the problem by creating stable Functional Testing >Login</A>
identifiers for controls. Of course, this The problem here was the existence of
needed changes in the application code language-dependent test scripts. Our Calling the actual “Login” link using
—but more on this later. existing functional test scripts heavily the CID then might look like this:
We introduced the notion of a control relied on recognizing GUI controls and
identifier (CID), which uniquely identifies windows using their caption. MyApp.HtmlLink(“CID=tm_btn_login”).Click();

a certain control. Then we extended the For example, for push buttons, the
protocol so that CIDs could easily be con- caption is the displayed name of the but- Existing Test Scripts
sumed by the test tool. By using mean- ton. For links the caption is the displayed We had existing functional test scripts
ingful names for CIDs, we made it easy text of the link. And for text fields, the where we needed to introduce to the
for testers to identify controls in caption is the text label preceding the new mechanism how to identify con-
request/response data and test scripts. text field. trols. So it was essential that we had sep-
Also was then easier for the developers To automate functional testing for all arated the declaration of UI controls
to associate CIDs with the code that imple- localized versions of the application, we and how controls are recognized from
ments the control. needed to minimize the dependencies the actual test scripts using them.
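The payoff of separating control declarations from the scripts that use them can also be sketched in plain Java. This is an illustrative sketch only; ControlMap and its methods are hypothetical names of my own, not the API of the testing tool described in this article:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: control *declarations* live in one place, so
// switching recognition from caption-based to CID-based locators
// touches only the declaration, never the scripts that act on the
// logical control name.
public class ControlMap {
    private final Map<String, String> locators = new HashMap<>();

    // Declaration: map a logical control name to a locator exactly once.
    public void declare(String logicalName, String locator) {
        locators.put(logicalName, locator);
    }

    // Usage: test scripts look up controls by logical name only.
    public String locatorFor(String logicalName) {
        String locator = locators.get(logicalName);
        if (locator == null) {
            throw new IllegalArgumentException("undeclared control: " + logicalName);
        }
        return locator;
    }

    public static void main(String[] args) {
        ControlMap app = new ControlMap();
        // before: app.declare("Login", "caption=Login");
        app.declare("Login", "CID=tm_btn_login"); // language-independent locator
        System.out.println(app.locatorFor("Login")); // prints CID=tm_btn_login
    }
}
```

Every script that acts on the logical “Login” control keeps working unchanged when the single declaration line switches from a caption to a CID, which is exactly the property the next section relies on.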




Therefore we only had to change the declaration of the controls, but not their usage in multiple test scripts. A script that separates control information from actions might look something like this:

// declaration of the control
BrowserChild MyApp {
  HtmlLink Login {
    tag “CID=tm_btn_login” // before: tag “Login”
  }
}

// action on the control
MyApp.Login.Click();

This approach works similarly for GUI toolkits such as Java SWT and Java Swing/AWT, and for toolkits supporting Microsoft User Interface Automation (MSUIA) and Adobe Flex Automation. All these GUI toolkits allow developers to add custom properties to controls, or offer a special property that you can use to expose CIDs to testing tools. Of course, you need to check whether your GUI testing tool is able to work with custom properties of controls for each toolkit.

Code Changes Needed to Add Testability Hooks
One of the most enlightening experiences in this project was how easy it was to add the testability hooks to the application code. When we spoke with the developers of the UI framework used in the application and explained the need to use CIDs for recognizing controls, they immediately found that there was already functionality in the framework APIs that allowed them to add custom HTML attributes to the controls the UI framework provided.
So no changes in the UI framework code were even needed to support CIDs for functional testing! Certainly there was work related to creating CIDs for each UI control of the application. But this work was done step by step, at first introducing CIDs for the parts of the application on which we focused our testing efforts.
Changes in the application code to introduce a CID:

Link login = new Link("Login");
login.addAttribute("CID", "tm_btn_login"); // additional code line to
                                           // create the CID for the control

More Changes For Performance Testing
For performance testing, we needed to extend the protocol of the application to also include the CID as part of the control handle, which is used to relate the actual control instance on the server to the UI control in the browser. Once they understood what we needed, our UI framework developers immediately figured out how to accomplish it. Again, the changes were minimal and were needed in just one base class of the framework.
Changes in the UI framework of the application code to introduce CIDs in control handles:

private String generateControlHandle() {
    String cid = this.getAttribute(“CID”);
    if (cid != null)
        return "*" + cid + "." + this.generateOldControlHandle();
    else
        return this.generateOldControlHandle();
}

The new version of the generateControlHandle method, which is implemented in the base class of all UI control classes, now generates the following control handle, which contains the CID as part of the handle:

*tm_btn_login.68273841207300477604!

Lessons Learned
Costs for enhancing testability using the existing application interfaces were minimal. One of the most enlightening experiences in this project was how easy it was to add the testability hooks once we were able to express the problem to the developers. The code changes needed in the application to add the testability hooks to the existing application interfaces (GUI and protocol) were minimal.
There was no need to create special-purpose testing interfaces, which would have caused a major rework of the application. And the runtime overhead added by the testability hooks was negligible. Most effort was related to introducing stable identifiers (CIDs) for all relevant controls of the application. All changes needed in the application summed up to an effort of about four person-weeks, which is less than 1 percent of the resources working on this application per year!
For the changes and the effort to make them, the benefits we gained were dramatic. Maintenance work for test scripts dropped significantly. This was especially true for performance testing, where for the first time we were able to have a maintainable set of tests. Before that, we needed to recreate test scripts whenever the application changed. The performance testing scripts are now extremely stable, and if the application changes, the test scripts are easy to adjust.
What’s more, the changes in the existing functional test scripts that were needed for testing different localized versions of the application were small. Here, it helped that we had already separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them. Being able to also cover the localized versions of the application with automated tests not only increased the test coverage of the application, but reduced the time we needed to provide the localized versions. We are currently releasing all localized versions and the English version at the same time.
Performance testing is now done by regularly running a core set of benchmark tests with different workloads against different configurations on each nightly build. So there is continuous information on how the performance and scalability of the application are improving (or degrading). We’re also now able to detect defects that affect performance or scalability as soon as they are introduced, which is a huge benefit, as performance problems are usually extremely hard to diagnose. Knowing how performance has changed between two nightly builds greatly reduces the time and effort needed to find the root cause.
Moving to an agile development process was the catalyst. When it’s so easy to improve the testability of existing applications, and by that increase test automation and improve quality so significantly, I wonder why it’s not done more often.

REFERENCES
• Software Test Automation, Mark Fewster and Dorothy Graham, Addison-Wesley, 1999
• Design for Testability, Bret Pettichord, paper, 2002
• Lessons Learned in Software Testing, Cem Kaner et al., John Wiley & Sons, 2002
• “Why GUI tests fail a lot (from a tools perspective),” Agile Tester blog, http://developer-in-test.blogspot.com/2008/09/problems-with-gui-automation-testing.html
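As a closing sketch of the tool side of this protocol change: once the CID is embedded in the handle, the parsing rule a test tool needs amounts to one regular expression. The CidParser class below is a hypothetical illustration of mine, not part of the framework described above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: given raw response data, extract the dynamic
// control handle that follows a known, stable CID in the extended
// "*<cid>.<handle>!" format described in the article.
public class CidParser {
    public static String handleFor(String responseData, String cid) {
        // The CID prefix is matched literally; only the handle part varies.
        Pattern p = Pattern.compile(Pattern.quote("*" + cid + ".") + "([^!]+)!");
        Matcher m = p.matcher(responseData);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String response = "<A HREF=\"http://host/borland/mod_1?control_handle="
                + "*tm_btn_login.68273841207300477604!\">Login</A>";
        String handle = handleFor(response, "tm_btn_login");
        // Rebuild the next request with the freshly parsed handle:
        System.out.println("http://host/borland/mod_1?control_handle=*tm_btn_login."
                + handle + "!");
    }
}
```

Because a CID is unique within a response, the first match is the only match, which is what makes such a rule stable against reordered or newly added controls.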



QUALITY GATES

By Elfriede Dustin

Implementing a successful Automated Software Testing effort requires a well-defined and structured, but lightweight, technical process that uses minimal overhead; such a process is described here. It is based on proven system and software engineering processes and consists of five phases, each requiring the “pass” of a quality gate before moving on to the next phase. By implementing quality gates, we help enforce that quality is built into the entire implementation, thus preventing late and expensive rework (see Figure 2).
The overall process implementation can be verified via inspections, quality checklists and other audit activities, each of which is covered later in this section. In my experience in the defense industry, I have modified the Automated Testing Lifecycle Methodology (ATLM) to adapt to the needs of my current employer, Innovative Defense Technologies. See Figure 1 for an illustration of the process, which is further defined next.
Our proposed Automated Software Testing technical process needs to be flexible enough to allow for ongoing iterative and incremental improvement feedback loops, including adjustments to specific project needs. For example, if test requirements and test cases already exist for a project, Automated Software Testing will evaluate the existing test artifacts for reuse, modify them as required and mark them as “to be automated,” instead of re-documenting the test requirement and test case documentation from scratch. The goal is to reuse existing components and artifacts, and to use or modify them as appropriate, whenever possible.
The Automated Software Testing phases and selected best practices need to be adapted to each task at hand, and need to be revisited and reviewed for effectiveness on an ongoing basis. An approach for this is described here.
The very best standards and processes are not useful if stakeholders don’t know about them or don’t adhere to them. Therefore, Automated Software Testing processes and procedures are documented, communicated, enforced and tracked. Training for the process will need to take place.
Our process best practices span all phases of the Automated Software Testing life cycle. For example, in the requirements phase, an initial schedule is developed and maintained throughout each phase of the Automated Software Testing implementation (e.g., updating percentage complete to allow for program status tracking). See the section on quality gate activities related to schedules.
Weekly status updates are also an important ingredient of successful system program management, which spans all phases of the development life cycle. See our section on quality gates related to status reporting.
Post-mortems or lessons learned play an essential part in these efforts, and are conducted to help avoid repeating past mistakes in ongoing or new development efforts. See our section on quality gates related to inspections and reviews.

Elfriede Dustin is currently employed by Innovative Defense Technologies (IDT), a Virginia-based software testing consulting company specializing in automated testing.

By implementing quality gates and related checks and balances throughout Automated Software Testing, the team is not only responsible for the final test automation efforts, but also helps enforce that quality is built into the entire Automated Software Testing life cycle. The automation team is held responsible for defining, implementing and verifying quality.
It is the goal of this section to provide program management and the technical lead with a solid set of automated-testing technical process best practices and recommendations that will improve the quality of the testing program, increase productivity with respect to schedule and work performed, and aid successful automation efforts.

Testing Phases and Milestones
Independent of the specific needs of the application under test (AUT), Automated Software Testing will implement a structured technical process and approach to automation, and a specific set of phases and milestones for each program.
Those phases consist of:
• Phase 1: Requirements gathering: analyze automated testing needs and develop high-level test strategies
• Phase 2: Design and develop test cases
• Phase 3: Automation framework and test script development
• Phase 4: Automated test execution and results reporting
• Phase 5: Program review
Our overall project approach to accomplish automated testing for a specific effort is listed in the project milestones below.

FIGURE 1: THE ATLM

Phase 1: Requirements Gathering
Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off meeting is to become familiar with the AUT’s background, related testing processes, automated testing needs, and schedules. Any additional information regarding the AUT will also be collected for further analysis. This phase serves as the baseline for an effective automation program; i.e., the test requirements will serve as a blueprint for the entire Automated Software Testing effort.
Some of the information you gather for each AUT might include:
• Requirements
• Test Cases
• Test Procedures
• Expected Results
• Interface Specifications
In the event that some information is not available, the automator will work with the customer to derive and/or develop it as needed. Additionally, automation during this phase will generally follow this process:
1) Evaluate the AUT’s current manual testing process and determine:
a) areas for testing technique improvement
b) areas for automated testing
c) the current quality index, as applicable (depending on the AUT’s state)
d) initial manual test timelines and duration metrics, to be collected and used as a comparison baseline for ROI
e) the “automation index,” i.e., what lends itself to automation (see the next item)
2) Analyze existing AUT test requirements for ability to automate:
a) If program or test requirements are not documented, the automation effort will include documenting the specific requirements that need to be automated, to allow for a requirements traceability matrix (RTM)
b) Requirements are automated based on various criteria, such as:
i) Most critical feature paths
ii) Most often reused (i.e., automating a test requirement that only has to be run once might not be cost effective)
iii) Most complex areas, which are often the most error-prone
iv) Most data combinations, since testing all permutations and combinations manually is time-consuming and often not feasible
v) Highest-risk areas
vi) Test areas that are most time consuming, for example test performance data output and analysis
3) Evaluate the test automation ROI of each test requirement:
a) Prioritize test automation implementation based on the largest ROI
b) Analyze the AUT’s current life-cycle tool use and evaluate reuse of existing tools
c) Assess and recommend any additional tool use or the required development
d) Finalize the manual test effort baseline to be used for the ROI calculation
A key technical objective is to demonstrate a significant reduction in test time. Therefore, this phase involves a detailed assessment of the time required to manually execute and validate results. The assessment will include measuring the actual test time required for manually executing and validating the tests. Important in the assessment is not only the time required to execute the tests, but also the time to validate the results. Depending on the nature of the application and tests, validation of results can often take significantly longer than the time to execute the tests.
Based on this analysis, the automator would then develop a recommendation for the testing tools and products most compatible with the AUT. This important step is often overlooked. When it is overlooked, and tools are simply bought up front without consideration for the application, the result is less-than-optimal results in the best case, and simply not being able to use the tools at all in the worst case.
At this time, the automator would also



identify and develop additional software Once the list of test requirements for time. The expected results provide the
as required to support automating the automation has been agreed to by the baseline to determine pass or fail status
testing. This software would provide inter- program, they can be entered in the of each test. Verification of the expected
faces and other utilities as required to requirements management tool and/or results will include manual execution of
support any unique requirements while test management tool for documentation the test cases and validation that the
maximizing the use of COTS testing tools and tracking purposes. expected results were produced. In the
and products. A brief description: case where exceptions are noted, the
• Assess existing automation framework Phase 2: Manual Test Case automator will work with the customer to
for component reusability Development and Review resolve the discrepancies, and as needed
• GUI record/playback utilities compat- Armed with the products of phase 1, update the expected results. The verifi-
ible with AUT display/GUI applications manual test cases can now be devel- cation step for the expected results
(as applicable in rare cases) oped. If test cases already exist, they can ensures that the team will be using the
• Library of test scripts able to interface simply be analyzed, mapped as applica- correct baseline of expected results for
to AUT standard/certified simula- ble to the automated test requirements the software baseline under test.
tion/stimulation equipment and sce- and reused, ultimately to be marked as Also during the manual test assess-
narios used for scenario simulation automated test cases. ment, pass/fail status as determined
• Library of tools to support retrieval of It is important to note for a test to be through manual execution will be docu-
test data generation automated, manual test cases need to be mented. Software trouble reports will be
• Data repository for expect- l automated, as computer documented accordingly.
ed results inputs and expectations
• Library of performance Automating differ from human inputs. The products of Phase 2 will typically be:
testing tools able to sup- As a general best practice, 1. Documented manual test cases to
port/measure real time
and non-real time AUTs
inefficient test before any test can be
automated, it needs to be
be automated (or existing test cas-
es modified and marked as “to be
• Test scheduler able to sup-
port distributed testing
cases will documented and vetted
with the customer to veri-
automated”)
2. Test case walkthrough and priority
across multiple computers fy its accuracy and that the agreement
and test precedence
result in poor automator’s understand- 3. Test case implementation by
The final step for this ing of the test cases is cor- phase/priority and timeline
phase will be to complete
test program rect. This can be accom- 4. Populated requirements traceabil-
the configuration for the plished via a test case walk- ity matrix
application(s), including performance. through. 5. Any software trouble reports asso-
the procurement and instal- l Deriving effective test ciated with manual test execution
lation of the recommended cases is important for suc- 6. First draft of “Automated Software
testing tools and products along with the cessfully implementing this type of Testing Project Strategy and
additional software utilities developed. automation. Automating inefficient test Charter” (as described in the
cases will result in poor test program per- Project Management portion of this
The products of Phase 1 will typically be : formance. document)
1. Report on test improvement In addition to the test procedures, oth-
opportunities, as applicable er documentation, such as the interface Phase 3 : Automated Framework
2. Automation index specifications for the software are also and Test Script Development
3. Automated Software Testing needed to develop the test scripts. As This phase will allow for analysis and
requirements walkthrough with required, the automator will develop any evaluation of existing frameworks and
stakeholders, resulting in agree- missing test procedures and will inspect automation artifacts. It is expected that
ment the software, if available, to determine for each subsequent implementation,
4. Presentation report on recom- the interfaces if specifications are not there will be software utilities and test
mendations for tests to automate, available. scripts we’ll be able to reuse from previ-
i.e. test requirements to be auto- Phase 2 also includes a collection and ous tasks. During this phase we deter-
mated entry of the requirements and test cases mine scripts that can be reused.
5. Initial summary of high level test into the test manager and/or require- As needed, the automation framework
automation approach ments management tool (RM tool), as will be modified, and test scripts to execute
6. Presentation report on test tool applicable. The end result is a populated each of the test cases are developed. Scripts
or in-house development needs requirements traceability matrix inside will be developed for each of the test cas-
and associated recommendations the test manager and RM tool that links es based on test procedures for each case.
7. Automation utilities requirements to test cases. This central The recommended process for devel-
8. Application configuration details repository provides a mechanism to oping an automated test framework or
9. Summary of test environment organize test results by test cases and test scripts is the same as would be used
10. Timelines requirements. for developing a software application.
11. Summary of current manual test- The test case, related test procedure, Keys to a technical approach to devel-
ing level of effort (LOE) to be test data input and expected results from oping test scripts is that implementations
used as a baseline for automated each test case are also collected, docu- are based on generally accepted devel-
testing ROI measurements mented, organized and verified at this opment standards; no proprietary imple-

DECEMBER 2008 www.stpmag.com • 29


QUALITY GATES

mentation should be allowed. This task also includes verifying that each test script works as expected.

The products of Automated Software Testing Phase 3 will typically be:
1. Modified automated test framework; reused test scripts (as applicable)
2. Test case automation—newly developed test scripts
3. High-level walkthrough of automated test cases with internal or external customer
4. Updated requirements traceability matrix

Phase 4: Automated Test Execution and Results Reporting
The next step is to execute the automated tests using the framework and related test scripts developed. Pass/fail status will be captured and recorded in the test manager. An analysis and comparison of manual and automated test times and pass/fail results will be conducted and summarized in a test presentation report. Depending on the nature of the application and tests, you might also complete an analysis that characterizes the range of performance for the application.

The products of Automated Software Testing Phase 4 will typically be:
1. Test Report including Pass/Fail Status by Test Case and Requirement (including updated RTM)
2. Test Execution Times (Manual and Automated)—initial ROI reports
3. Test Summary Presentation
4. Automated Software Testing training, as required

Phase 5: Program Review and Assessment
The goal of Automated Software Testing implementations is to allow for continued improvements. During the fifth phase, we review the performance of the automation program to determine where improvements can be made. Throughout the automation efforts, we collect test metrics, many during the test execution phase. It is not beneficial to wait until the end of the automation project to document insights gained into how to improve specific procedures. When needed, we alter detailed procedures during the test program, when it becomes apparent that such changes are necessary to improve the efficiency of an ongoing activity.

The test program review also includes an assessment of whether automation efforts satisfy completion criteria for the AUT, and whether the automation effort itself has been completed. The review also could include an evaluation of progress measurements and other metrics collected, as required by the program.

The evaluation of the test metrics should examine how well original test program time/sizing measurements compared with the actual number of hours expended and test procedures developed to accomplish the automation. The review of test metrics should conclude with improvement recommendations, as needed.

Just as important is to document the activities that automation efforts performed well and did correctly, in order to be able to repeat these successful processes. Once the project is complete, proposals for corrective action will surely be beneficial to the next project, but corrective actions applied during the test program can be significant enough to improve the final results of the test program.

Automated Software Testing efforts will adopt, as part of their culture, an ongoing iterative process of lessons-learned activities. This approach will encourage automation implementers to take the responsibility to raise corrective action proposals immediately, when such actions potentially have significant impact on test program performance. This promotes leadership behavior from each test engineer.

The products of Phase 5 will typically be:
1. The final report

Quality Gates
Internal controls and quality assurance processes verify that each phase has been completed successfully, while keeping the customer involved. Controls include quality gates for each phase, such as technical interchanges and walkthroughs that include the customer, use of standards, and process measurement. Successful completion of the activities prescribed by the process should be the only approved gateway to the next phase. Those approval activities or quality gates include technical interchanges, walkthroughs, internal inspections, examination of constraints and associated risks, configuration management; tracked and monitored schedules and cost; corrective actions; and more, as this section describes. Figure 1 reflects typical quality gates, which apply to Automated Software Testing milestones.

Our process controls verify that the output of one stage represented in Figure 2 is fit to be used as the input to the next stage. Verifying that output is satisfactory may be an iterative process, and verification is accomplished by customer review meetings, internal meetings, and comparing the output against defined standards and other project-specific criteria, as applicable. Additional quality gate activities will take place as applicable, for example:

FIGURE 2: AUTOMATED SOFTWARE TESTING PHASES, MILESTONES AND QUALITY GATES

Technical interchanges and walkthroughs with the customer and the automation team represent an evaluation technique that will take place during, and as a final step of, each Automated Software Testing phase. These evaluation techniques can be applied to all deliverables, i.e., test requirements, test cases, automation design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed examination by a person or a group other than the author. These interchanges and walkthroughs are intended to help find defects, detect non-adherence to Automated Software Testing
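The Phase 4 analysis above—comparing manual baseline times against automated execution times and reporting pass/fail status—can be sketched as a small report generator. This is a hypothetical illustration, not the tool from the article; the data structure, field names and numbers are invented.

```python
# Hypothetical sketch of a Phase 4 "initial ROI report": summarize pass/fail
# status and compare baseline manual effort with measured automated run times.
from dataclasses import dataclass

@dataclass
class TestResult:
    case_id: str
    passed: bool
    manual_minutes: float     # baseline manual LOE captured in Phase 1
    automated_minutes: float  # measured automated execution time

def roi_report(results):
    """Summarize pass/fail counts and the time saved by automation."""
    total_manual = sum(r.manual_minutes for r in results)
    total_auto = sum(r.automated_minutes for r in results)
    return {
        "passed": sum(1 for r in results if r.passed),
        "failed": sum(1 for r in results if not r.passed),
        "manual_minutes": total_manual,
        "automated_minutes": total_auto,
        "time_saved_pct": round(100 * (total_manual - total_auto) / total_manual, 1),
    }

results = [
    TestResult("TC-01", True, 30.0, 2.0),
    TestResult("TC-02", True, 45.0, 3.0),
    TestResult("TC-03", False, 15.0, 1.0),
]
report = roi_report(results)
print(report)
```

A real report would also roll these numbers up by requirement for the updated RTM; the per-case summary is the minimum needed for the manual-vs.-automated comparison the phase calls for.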


standards, test procedure issues, and other problems.

Examples of technical interchange meetings include an overview of test requirement documentation. When test requirements are defined in terms that are testable and correct, errors are prevented from entering the development pipeline, which would eventually be reflected as possible defects in the deliverable. Automation design-component walkthroughs can be performed to ensure that the design is consistent with defined requirements, conforms to standards and the applicable design methodology, and that errors are minimized.

Technical reviews and inspections have proven to be the most effective form of preventing miscommunication, allowing for defect detection and removal. Internal automator inspections of deliverable work products will take place to support the detection and removal of defects early in the development and test cycles; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

A careful examination of goals and constraints and associated risks will take place, which will lead to a systematic automation strategy, will produce a predictable, higher-quality outcome and will enable a high degree of success. Combining a careful examination of constraints, as a defect prevention technology, together with defect detection technologies will yield the best results.

Any constraint and associated risk will be communicated to the customer, and risk mitigation strategies will be developed as necessary. Defined QA processes allow for constant risk assessment and review. If a risk is identified, appropriate mitigation strategies can be deployed. We require ongoing review of cost, schedules, processes and implementation to prevent potential problems from going unnoticed until it is too late; instead, our process assures problems are addressed and corrected immediately.

Experience shows that it's important to protect the integrity of the Automated Software Testing processes and environment. Means of achieving this include the testing of any new technologies in isolation. This includes validating that tools, for example, perform up to specifications and marketing claims before being used on any AUT or customer environment. The automator also will verify that any upgrades to a technology still run in the current environment. The previous version of the tool may have performed correctly, and a new version may perform fine in other environments, but might adversely affect the team's particular environment. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process and help with roll-back in the event of failure.

The automator incorporates the use of configuration management tools to help control the integrity of the automation artifacts. For example, we will include all automation framework components, script files, test case and test procedure documentation, schedules and cost tracking data under a configuration management system. This assures us that the latest and accurate version control and records of Automated Software Testing artifacts and products are maintained.

Schedules Are Defined, Tracked and Communicated
It's also important to define, track and communicate project schedules. Schedule task durations are determined based on past historical performance and associated best estimates. Also, any schedule dependencies and critical path elements should be considered up front and incorporated into the schedule.

If the program is under a tight deadline, for example, only the automation tasks that can be delivered on time should be included in the schedule. During Phase 1, test requirements are prioritized. This allows the most critical tasks to be included and prioritized for completion first, and less critical, lower-priority tasks to be placed later in the schedule.

After Phase 1, an initial schedule is presented to the customer for approval. During the technical interchanges and walkthroughs, schedules are presented on an ongoing basis to allow for continuous schedule communication and monitoring. Potential schedule risks will be communicated well in advance, and risk mitigation strategies will be explored and implemented as needed. Any potential schedule slip can be communicated to the customer immediately and necessary adjustments made accordingly.

Tracking schedules on an ongoing basis also contributes to tracking and controlling costs. Costs that are tracked can be controlled. By closely tracking schedules and other required resources, the automator assures that a cost tracking and controlling process is followed. Inspections, walkthroughs and other status reporting will allow for a closely monitored cost control tracking activity.

Performance is continuously tracked with the necessary visibility into project performance, related schedule and cost. The automator manager maintains the record of delivery dates (planned vs. actual) and continuously evaluates the project schedule. This is maintained in conjunction with all project tracking activities, is presented at weekly status reports and is submitted with the monthly status report.

Even with the best-laid plans and implementation, corrective actions and adjustments are unavoidable. Good QA processes will allow for continuous evaluation and adjustment of task implementation. If a process is too rigid, implementation can be doomed to failure. When making adjustments, it's critical to discuss any and all changes with customers, explain why the adjustment is recommended and the impact of not making the change. This communication is essential for customer buy-in.

QA processes should allow for and support the implementation of necessary corrective actions. This allows for strategic course correction, schedule adjustments, and deviation from the Automated Software Testing phases to adjust to specific project needs. This also allows for continuous process improvement and, ultimately, a successful delivery.

REFERENCES
• This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated Software Testing.
• As used at IDT.
• Implementing Automated Software Testing, Addison-Wesley, Feb. 2009.
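The planned-vs.-actual delivery tracking described above can be sketched as a small script that flags potential schedule slips so they can be communicated to the customer immediately. This is an invented illustration, not the article's process; milestone names and dates are assumptions.

```python
# Hypothetical sketch: the automator manager's record of delivery dates
# (planned vs. actual), with a check that flags late or at-risk milestones.
from datetime import date

# None as the actual date means the milestone has not been delivered yet.
milestones = {
    "Phase 1 sign-off": (date(2008, 9, 1), date(2008, 9, 1)),
    "Framework ready": (date(2008, 10, 6), date(2008, 10, 13)),
    "Test scripts complete": (date(2008, 11, 3), None),
}

def slipped(milestones, today):
    """Milestones delivered late, plus undelivered ones already past plan."""
    late = []
    for name, (planned, actual) in milestones.items():
        if (actual and actual > planned) or (actual is None and today > planned):
            late.append(name)
    return late

report = slipped(milestones, today=date(2008, 11, 10))
print(report)
```

Run weekly, a check like this gives the status report its list of items needing a communicated risk mitigation strategy.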



Best Practices

SCM Tools Are Great,


But Won’t Mind the Store
When it comes to managing collaborative code development, it is not so much the choice of tools used, but rather, ensuring that everyone plays by the same set of rules. It takes only one cowboy coder to bring disaster to otherwise carefully constructed processes, or one rogue developer to bring a project to a standstill.

Whether you've implemented an open-source tool such as CVS, Subversion, or JEDI, or a commercial tool, experts agree there had better be a man in a striped shirt ready to throw a penalty flag if shortcuts are attempted.

"You can't have a free-for-all when it comes to version control; it's essential to have clearly defined policies for code check-in and check-out," says David Kapfhammer, practice director for the Quality Assurance and Testing Solutions Organization at IT services provider Keane. "We see companies installing sophisticated source code and configuration management systems, but what they neglect to do is audit their processes."

And it's often large, well-established corporations that should know better. Kapfhammer says he is working with "very mature, very big clients" in the healthcare and insurance industries that have all the right tools and processes in place, "but no one is auditing, no one is making sure everyone follows the rules."

It boils down to simple human nature. With enormous pressure to ensure that schedules are met (though they often are not), it's common for developers—in their well-meant spirit of getting the product out the door—to circumvent check-in policies.

A simple, low-tech way to avoid potential disasters, such as sending the wrong version of code into production, is a simple checklist attached to developers' cubicle walls with a push pin. "This method of managing repetitive tasks and making sure that the details of the development process are followed sounds completely silly, but it works," says Kapfhammer.

That seems simple enough. But the follow-up question, of course, is "OK, but exactly which procedures are we talking about?" That's where the idea of a process model comes into play.

The current darling of process models is the Information Technology Infrastructure Library (ITIL), a comprehensive set of concepts and policies for managing development, operations, and even infrastructure. There's serious weight behind ITIL—it's actually a trademark of the United Kingdom's Office of Government Commerce.

The official ITIL Web site, www.itil-officialsite.com (apparently they're serious about being THE official site), describes ITIL as the "most widely accepted approach to IT service management in the world." It also notes that ITIL aims to provide a "cohesive set of best practices, drawn from the public and private sectors internationally." That's especially important as projects pass among teams worldwide in an effort to keep development active 24 hours a day.

But again, it works only if everyone plays, and only if development tasks are granular enough to be manageable.

"If I had to boil everything I know about source code management and version control down to three words, it would be this," says Norman Guadagno, director of product management for Microsoft's Visual Studio Team System: "Every build counts."

Guadagno's view of the development world is simple: Build control and process automation must be ingrained in the culture of the team and not something that's left to a manager. "If it's not instilled in the culture of the team, if they don't understand branching structures and changes, it all becomes disconnected from what makes great software." Developers, he notes, often think about build quality as something that happens downstream. "The thinking is 'if I build and it breaks, it's no big deal.' But that's not true. Do it up front and you'll invariably deliver a better product and do it more timely."

Branching, while necessary in an environment where huge projects are divvied up among many developers, is the proverbial double-edged sword. Handled correctly, it simplifies development and maintenance. But implement a model based on bad architecture, and projects can collapse under their own weight.

Guadagno recalled a small ISV with a specialized product in use by five customers. Their model of building a custom version of the app for each customer and doing that with separate branches was a good idea, he says. "But what happens if they grow to 50 or a hundred customers?" Introducing new functionality eventually will cause every build to break—and it did. Spinning off different branches and hoping for the best when it comes to customizations simply didn't work.

"It turns out that this was not a problem that any source code management tool or process could solve. The problem was the original system architecture, which could handle the addition of a few new customers, but which never foresaw the impact of adding dozens."

Guadagno's point is that while tools, processes, and procedures are essential in assuring a successful outcome, poor design will still yield poor results. "Good builds," he says, "are not a substitute for good architecture."

Joel Shore is a 20-year industry veteran and has authored numerous books on personal computing. He owns and operates Reference Guide, a technical product reviewing and documentation consultancy in Southboro, Mass.



ST&Pedia
< continued from page 7

…automated acceptance testing.

AUTOMATED BUSINESS-FACING TESTS
A final approach is to have some method of accessing the business logic outside of the GUI, and to test the business logic by itself in a standalone form. In this way, the tests become a sort of example of "good" operations, or an executable specification. Two open-source tools designed to enable this are FIT and Fitnesse. Business-facing tests also sidestep the GUI, but miss any rendering bugs that exist.

AUTOMATED TEST SETUP
Software can be complex and yet not have simple save—open functionality—for example, databases. Building import/export hooks, and a "sample test database," can save time, energy, and effort that would otherwise have to be expended over and over again manually. Automated environment setup is a form of test automation.

LOAD AND PERFORMANCE TESTING
Measuring a page load—or simulating 100 page loads at the exact same time—will take some kind of tooling to do economically.

SMOKE TESTING
Frequently a team's first test automation project, automating a build/deploy/sanity-check process almost always pays off. (Chris once spent a few days building a smoke test framework that increased efficiency 600% for months afterward.)

COMPUTER ASSISTED TESTING
Jonathan Kohl is well known for using GUI scripting tools to populate a test environment with a standard set-up. The test automation brings the application to the interesting area, where a human being takes over and executes sapient tests in the complex environment.

Any form of cybernetics can make a person better, faster, and less error prone. The one thing they cannot do is think. There's no shame in outsourcing required, repetitive work to a computer, but you will always need a human being somewhere to drive the strategy. Or, to paraphrase James Bach: The critical part of testing is essentially a conversation between a human and the software. What would it even mean to "automate" a conversation?
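The "automated test setup" idea above—a sample test database rebuilt on demand instead of loaded by hand before each run—can be sketched in a few lines. This is a hypothetical illustration; the schema and rows are invented.

```python
# Minimal sketch of automated environment setup: rebuild a throwaway
# "sample test database" pre-populated with known test data on every run.
import sqlite3

def build_sample_db(path=":memory:"):
    """Create a fresh database with a fixed, known data set for tests."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",), ("carol",)])
    conn.commit()
    return conn

conn = build_sample_db()
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)
```

Because the setup is a script rather than a manual checklist, every tester (and the nightly smoke run) starts from the same known state.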


Future Test

Automate Web Service
Testing the Right Way

Web services are perfect candidates for automated testing. Compared with validating a UI, Web services testing is quite simple: send a request, get a response, verify it. But it's not as easy as it may look. The main challenge is to identify the formal procedures that could be used for automated validation and verification. Other problems include dynamic data and differing request formats.

In the project our team was working on when this was written, there were many different Web services in need of testing. We needed to check not only the format of the request and response, but also the business logic, the data returned by the services and the service behavior after significant changes were made in the architecture. From the very beginning we intended to automate as many of these tests as possible. Some of the techniques we used for automated verification of Web services functionality were as follows:

1. Response schema validation. This is the most simple and formal procedure of response structure verification. There are free schema validators out there that can easily be built into automated scripts. However, schema validation is not sufficient if you also need to validate the Web services logic based on data returned from a service.

2. Comparison of the actual response with the expected response. This is a perfect method for Web service regression testing because it allows for checking of both response structure and data within the response. After successful manual validation, all valid responses are considered correct and saved for future reference. The responses of all subsequent versions of that Web service will then be compared with these template files. There is absolutely no need to write your own script for XML file comparison; just find the most suitable free tool and use it within your scripts. However, this method is not applicable when dealing with dynamic data, or if a new version of the Web service involves changes in the request structure or the Web services architecture.

3. Checking XML nodes and their values. In the case of dynamic data, it can be useful to validate only the existence of nodes using an XPath query and/or check the node values using patterns.

4. Validation of XML response data against the original data source. As a "data source" it's possible to use a database, another service, XML files, non-XML files and, in general, everything that can be considered a "data provider" for Web services. In combination with response schema validation, this method could be the perfect solution for testing any service with dynamic data. On the other hand, it requires additional skills to write your own scripts for fetching data from the data source and comparing it with the service response data.

5. Scenario testing. This is used when the test case is not a single request but a set of requests sent in sequence to check Web services behavior. Usually Web services scenario testing includes checking one Web service operation using another. For example, you can check that the CreateUser request works properly by sending a GetUser request and validating the GetUser response. In general, any of the above techniques can be used for Web services scenarios. Just be careful to send requests one after another in the correct order. Also be mindful of the uniqueness of test data during the test. The service might not allow you to create, for example, several users with the same name. It's also common for one request in a sequence to require data from a previous response. In this case you need to think about storing that data and using it on the fly.

All of these techniques were implemented using a home-grown tool which we were using in our day-to-day work. Using that tool, we are able to automate about 70 percent of our test cases. This small utility was originally created by developers for developers to cover typical predefined test conditions, and gradually evolved into a rather powerful tool. Having such a tool in your own organization could help your testing efforts well into the future.

Elena Petrovskaya is lead software testing engineer and Sergey Verkholazov is chief software engineer working for EPAM Systems in one of its development centers in Eastern Europe, and currently engaged in a project for a leading provider of financial information.
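Techniques 1 and 3 above can be sketched together in a few lines. This is an invented illustration, not the authors' tool: a real project would validate against an XSD with a schema validator, so a simple required-element list stands in for the schema here, and ElementTree's limited XPath support handles the node and value checks. The response payload and element names are assumptions.

```python
# Hypothetical sketch: structural check of a Web service response plus
# node-value checks (fixed nodes by value, dynamic nodes by pattern).
import re
import xml.etree.ElementTree as ET

RESPONSE = """
<GetUserResponse>
  <User><Id>42</Id><Name>jdoe</Name><Created>2008-11-03</Created></User>
</GetUserResponse>
"""

def missing_elements(root, required):
    """Return the required element paths that are absent from the response."""
    return [path for path in required if root.find(path) is None]

root = ET.fromstring(RESPONSE)

# Technique 1 (stand-in for schema validation): required elements must exist.
missing = missing_elements(root, ["User/Id", "User/Name", "User/Created"])

# Technique 3: check a fixed node by value, a dynamic node by pattern.
name_ok = root.find("User/Name").text == "jdoe"
date_ok = re.fullmatch(r"\d{4}-\d{2}-\d{2}", root.find("User/Created").text) is not None

print(missing, name_ok, date_ok)
```

The pattern check is what makes dynamic data tractable: the Created timestamp changes on every run, but its shape should not.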
