
A BZ Media Publication

Best Practices: SOA, Web Services
VOLUME 5 • ISSUE 11 • NOVEMBER 2008 • $8.95 • www.stpmag.com

Like Duh, It’s the Network: Latency Test, Excellent!

When Automation Projects Fail Because of YOU

Dial ‘P’ For Performance
VOLUME 5 • ISSUE 11 • NOVEMBER 2008

Contents
A BZ Media Publication

COVER STORY
14  Go Behind the Scenes At The Testers Choice Awards
This year’s most celebrated products as selected by you, our audience of testers in the mosh pit of corporate QA teams. By Edward J. Correia

22  Press ‘P’ For Performance
Distinguishing between components in a large system is not always easy; testing them all is even harder. Here’s a step-by-step guide to performance testing simple enough for a school child. By Keven Lui

28  Automation Project Failure—Not On Your Watch!
Management expectations don’t always jibe with reality. The time to set them straight is the moment you know there’s a mismatch. In part two of a three-part series on project automation failures, learn how to approach these sometimes delicate matters. By Elfriede Dustin

33  Network Slow? How to Know
Sometimes performance problems are so obvious that they’re overlooked by even the sharpest minds. Learn everyday techniques to determine if your network is the tortoise or the hare. By Nels Hoenig

DEPARTMENTS

7 • Editorial
Check out CMG, a great non-profit resource with hundreds of documents about performance analysis.

8 • Contributors
Get to know this month’s experts and the best practices they preach.

9 • Feedback
It’s your chance to tell us where to go.

11 • Out of the Box
News and products for testers.

13 • ST&Pedia
Industry lingo that gets you up to speed.

37 • Best Practices
SOA testing is as much about business as it is about technology. By Joel Shore

38 • Future Test
Take apart SOAs to prevent costly performance rework. By John Michelsen


Ed Notes

CMG: Data For Your Thoughts
By Edward J. Correia

I’ve stumbled upon a resource that I believe you’ll want to know about and use. It’s called the Computer Measurement Group (www.cmg.org), and in essence, it’s where the world’s performance data goes to retire. Well, that’s the goal, at least.

According to CMG director Michael Salsburg, the non-profit CMG started as a group of IBM mainframe users doing performance analysis. It soon expanded into Linux and Unix. “In a simple way, it delivers a service to businesses. Members of CMG have a reputation as experts at making sure that levels of service end-to-end are what the business wants.”

The site contains hundreds of white papers, submitted by members, chronicling their performance measurement experiences.

“You can get any level of performance you want if you have enough money,” says Salsburg. “The real balance is to get most performance for least money.” That’s what CMG aims to help companies do. For example, one of this month’s most popular articles is called “Xen vs. VMware – The Battle of Brands,” which compares performance of the two.

Another is “Using ITIL Best Practices to Create a Capacity Management Process,” detailing the Information Technology Infrastructure Library for service management adopted by British Airways, Hewlett Packard, IBM, Microsoft, Proctor and Gamble and many other top-flight companies.

Does your organization employ telecommuters? Perhaps home-office workers should read “Remote But Not Forgotten – Working from Home Successfully,” which contains advice for home-office workers about building and maintaining trust relationships with co-workers and bosses.

People doing specific types of testing might align themselves with particular authors. “A person who writes and presents a lot of software performance engineering papers is Dr. Connie Smith. Or when you look at software test and quality assurance, it’s devoted to function testing,” Salsburg says. But at some point, he said, your department will have to guarantee a level of quality of service for the application. “That’s where we come in. We see ourselves as a go-to community where you can see what’s out there and how products are performing. People write papers and they present them, and their companies pay their way. We offer a community and that’s what we want people to know about.”

More than the big picture, CMG also can help testers get into the nuts and bolts of testing. “We would help you understand the techniques of how to instrument your application, to understand performance and forecast how it would perform on a production system and under heavy load.” That’s something you might not be able to create in your test environment if you can’t afford to duplicate your production environment, he added.

EDITORIAL
Editor: Edward J. Correia, +1-631-421-4158 x100, ecorreia@bzmedia.com
Editorial Director: Alan Zeichick, +1-650-359-4763, alan@bzmedia.com
Copy Desk: Adam LoBelia, Diana Scheben
Contributing Editors: Matt Heusser, Chris McMahon, Joel Shore

ART & PRODUCTION
Art Director: LuAnn T. Palazzo, lpalazzo@bzmedia.com

SALES & MARKETING
Publisher: Ted Bahr, +1-631-421-4158 x101, ted@bzmedia.com
Associate Publisher: David Karp, +1-631-421-4158 x102, dkarp@bzmedia.com
Advertising Traffic: Liz Franklin, +1-631-421-4158 x103, lfranklin@bzmedia.com
Reprints: Lisa Abelson, +1-516-379-7097, labelson@bzmedia.com
List Services: Lisa Fiske, +1-631-479-2977, lfiske@bzmedia.com
Accounting: Viena Ludewig, +1-631-421-4158 x110, vludewig@bzmedia.com

READER SERVICE
Director of Circulation: Agnes Vanek, +1-631-443-4158, avanek@bzmedia.com
Customer Service/Subscriptions: +1-847-763-9692, stpmag@halldata.com

Cover Photo Illustration by The Design Diva, NY

BZ Media LLC: Ted Bahr, President; Alan Zeichick, Executive Vice President. 7 High Street, Suite 407, Huntington, NY 11743. +1-631-421-4158, fax +1-631-421-4130, www.bzmedia.com, info@bzmedia.com

Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals postage paid at Huntington, NY and additional offices. Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyrighted 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere. POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Subscriber services may be reached at stpmag@halldata.com or by calling +1-847-763-9692.


Contributors

KEVEN LUI graduated from Shanghai Jiao Tong University in China. Over the past 10 years, Keven has gained experience with software development, project management, quality assurance and testing, working closely with numerous clients to develop and deploy software applications for Hong Kong Air Cargo Terminals Ltd. He currently serves as the company’s test leader. Keven has published articles in Chinese about software management, CMM/CMMI and testing. Starting on page 22, he discusses how to simplify complex testing.

ELFRIEDE DUSTIN is the author of “Effective Software Testing” (Addison-Wesley, 2002) and lead author of numerous other works. She is also a testing consultant currently employed by Innovative Defense Technologies, a Virginia-based software testing consulting company specializing in automated testing. In her role as consultant, Elfriede has evaluated scores of software testing tools. This month, she presents the second installment of her three-part series on software automation testing, beginning on page 28.

NELS HOENIG has 15 years’ experience in the IT industry as a proven software quality engineer, project leader, business analyst and application support specialist. Nels has worked both in the development of new applications and the implementation of existing applications. In his role as senior quality assurance leader at DSW, he’s an expert at listening to user concerns and developing a trust relationship that focuses on issues and allows the project to succeed. In this issue of Software Test & Performance, he writes about how network performance problems can sometimes be so obvious that they’re overlooked by the test team. Read all about it beginning on page 33.

TO CONTACT AN AUTHOR, please send e-mail to feedback@bzmedia.com.


Feedback

‘TOPS’ NOT ALWAYS ‘BEST’
Regarding the “China’s Top Coders” editorial in the Sept. 2008 issue of Software Test & Performance:

As always, I enjoyed your editorial comments. The TopCoder piece was very interesting and thought provoking. It is particularly interesting that it takes a game-like atmosphere and turns it into an opportunity to deliver value to a company…should I say BRILLIANT?!

Not sure how many of these 114,000+ programmers are formally trained (from the surveys it looks like less than 35%), and if the distribution is such that the majority are either Eastern European or Asian, then is it a true barometric measure of programming excellence? Programming application is not cultural, nor is it hereditary; it’s pragmatic, with well-defined rule sets. But let’s talk about the business value proposition and what makes the outcome successful.

Simply, TopCoder puts out narrow-scope challenges (or open-ended ones when it comes to doing an efficiency improvement project, which then creates a research atmosphere) that allow these people to deliver. In a typical IT setting the scope is constantly changing, the business dynamics are substantial, and the requirements are seldom adequate. For these reasons, the Agility paradigm has become an attractive solution to these issues. I’m wondering if we put these pools of talent into these settings whether the delivery outcome would be the same.

One might argue that delivery is all that matters. But I’m not sure that I would trust strategically important applications to this approach.
Respectfully,
Jerry E. Durant, Chairman Emeritus
The International Institute for Outsource Management
Winter Springs, FL

THAT’S SO 1990’S
Regarding “Which Functional Testers Do the Best Job?” July 15, 2008 (http://www.sdtimes.com/link/32536): The fact is, most of the products you mentioned were developed in the 1990s for the application context of the 1990s. Beginning in 2001 we here at Software Research took a new tack, based on the recognition (well, OK, the bet!) that Web browser-enabled applications would win out. The bet was a good one, of course.

The result of our efforts was our eValid solution—now patented—that puts ALL of the test functionality in a browser. It is a very simple idea: test Web applications from where they are normally driven. But doing it is somewhat more complex. But what we have is simple, easy to use, inexpensive, and 100 percent capable.

eValid, now in Version 8, is in use in 1000s of companies worldwide. In monitoring mode—functional tests played back regularly—eValid accounts for [about] 10,000,000 monitoring test playbacks per year.
Edward Miller, President
www.e-valid.co, San Francisco, CA

While the sentence “You can think more clearly when you are busy if you plan ahead” is okay, more clarity and completeness of thought is conveyed by the following: “You can think more clearly when you are busy if you have planned ahead.”
Michael F. Stanton

A PAT ON THE BACK
Regarding “Security & Quality: A Union Of Equals,” Software Test & Performance, Oct. 2008: When you have a few spare moments, please give a well-deserved pat on the back to Danny Allan for his remarkable piece of journalism, and your “guts” (or other anatomical parts) for printing it.

I’d like your permission to pass this article around, especially to Steve Lasky, Publisher/Editor-in-Chief of “Security Technology and Design.”

Attached for your reading pleasure, or lining your wife’s bird cage, is something I have worked at since 1962 to help me be a better Air Force NCO/security person and now a security consultant, one who chooses his projects with the greatest of care.
Bill Warnock

WHERE THE ACTION IS
Regarding “Ten Concepts for Successful Automation Scripts,” Test & QA Report, Sept. 6, 2008 (http://www.sdtimes.com/link/32824): Although it is essential for the test automation engineer to have development skills, it is not necessarily the case for those who enter and maintain the test scripts. This is possible if your test automation approach follows a high-level, framework-driven approach.

The script editors of our test automation scripts are business analysts, testers and managers who type keywords—or (as Hans Buwalda would say) action words—into an Excel form. Those people do not need development skills, but I absolutely agree that without such a framework any script editor would need those skills, and it is essential—as you say—to have a good understanding of how software is organized into re-usable pieces. In our case it is of great benefit that only a small number of specialists are needed to keep the core framework running for all users.

My contribution to this test automation article today is also a new cartoon of mine. If automation is done right, bugs will learn to fear automation (see attached picture). =;O)
Best regards,
T.J. Zelger
Test lead, CORE
Audatex Systems, Switzerland

Editor’s Note: We will include the picture in a future edition of the Test & QA Report.

FEEDBACK: Letters should include the writer’s name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to feedback@bzmedia.com. Letters become the property of BZ Media and may be edited for space and style.


Out of the Box

Gomez Monitors Multimedia Web Experience

Active Streaming XF is a new service from Gomez that measures the speed and quality of streamed video and audio delivered over the Web. Self-service, with a highly intuitive diagnostics interface, Active Streaming XF rapidly identifies and troubleshoots streaming media performance issues so that businesses like advertisers, broadcasters, retailers and social networks can deliver content without delay or disruption to audiences around the world.

Nikki Baird, managing partner at RSR Research, a research company run by retailers for the retail industry, said: “Interest in video as part of the online commerce experience is definitely on the rise. Between product demos and reviews alone, the potential for the medium is huge. A solution, like Active Streaming XF, that can help sites ensure a high quality video experience for visitors is very timely.”

According to Gomez, possible uses for the Active Streaming XF service include measuring the performance and quality of high-value streams as experienced by end users in different geographies. It can also be used for verifying if syndicated or third-party content sources are negatively affecting the end-user experience; validating the performance claims of hardware vendors or content delivery networks prior to purchase; and objectively managing service level agreements with streaming service providers.

[Photo caption: Active Streaming XF can validate the effectiveness of content delivery networks and multimedia delivery infrastructure.]

The company explained that the Active Streaming XF interface provides actionable, near real-time insight into metrics such as start-up and buffer time. Entirely self-service, users can set up, execute and analyze tests within minutes, making the Active Streaming XF service flexible and easy to administer, and speeding time to resolution, said Gomez.

The service is sold on an annual subscription basis, with the price varying depending on the features needed.

Oracle Delivers App Testing Suite

Oracle Corp.’s new Application Testing Suite is an addition to its Oracle Enterprise Manager product line. The new suite provides an integrated solution for load testing, functional testing and test management, letting customers test packaged, Web and SOA-based apps and their underlying infrastructure.

The Oracle Application Testing Suite employs an integrated scripting platform for load and functional testing that the company says can help cut scripting time in half and reduce weeks from a typical project’s testing schedule. The suite, which works with both Oracle and non-Oracle applications, includes:

• Load Testing for Web Applications: By simulating tens of thousands of virtual users accessing the application simultaneously, Load Testing for Web Applications measures the effect of the load on application performance, providing a way to validate the performance and scalability of applications and Web services. The tool can test against synthetic and real-world test loads.

• Functional Testing for Web Applications: Lets developers and testers execute functional tests of applications and Web services. According to Oracle, the transaction engine simplifies automated test script generation and enables out-of-the-box automation of the most complex Web applications and associated technologies to quantifiably validate the end user’s experience.

• Test Manager for Web Applications: Helps users manage the Web application testing process, enabling full test coverage, by allowing users to define testing requirements, specify and execute manual or automated tests to validate requirements, and then manage the defects that those tests uncover.

The company says that the Oracle Application Testing Suite provides a common, standards-based scripting environment that links functional testing and load testing to simplify the test process and accelerate completion of tests by as much as 50 percent. The test suite also includes a functional and load testing accelerator for Oracle’s Siebel CRM.



OpenMake And Serena Promote Meister Builds

OpenMake Software, the company behind the open-source Meister build-management system, is working with Serena Software to offer a migration path for customers using Serena Builder and Serena Builder Enterprise. Serena customers now have the choice to move their Builder support and maintenance from Serena Software to OpenMake Software, providing them immediate access to the latest version of OpenMake Meister. The last release of Serena Builder, version 6.41, uses an older version of the Meister code, which is currently up to version 7.2.

“Moving to OpenMake Meister 7.2 will provide the Serena Builder customers with many new features and enhancements,” states Stephen King, CEO of OpenMake Software. “The most important of these features include Meister’s improved continuous integration server, its ability to perform build mash-ups that synchronize IDE builds with the Continuous Integration builds, and support for Microsoft Team Build and Visual Studio 2008.”

“OpenMake Software and Serena Software will work together to support interoperability between Serena Builder and OpenMake Meister,” said Steven Mitzenmacher, Vice President of Corporate Development at Serena Software.

New SOAP Test Software Knows The Score

How standards-compliant is your SOAP and WSDL? Check the score to see if it passes the test, says Crosscheck Networks. The company’s SOAPSonar 4.0 test software incorporates modules that enable QA and development teams to quickly test the quality of their enterprise WSDLs based on an easy-to-understand scoring system. The scoring system provides the teams a valuable feedback loop to improve the quality of their WSDLs, thus increasing interoperability, security and integrity of services.

“This release addresses one of the major pain points that SOA users face, that is, how to test for quality of WSDLs,” said Mamoon Yunus, CTO at Forum Systems and an advisor to Crosscheck Networks. “The latest SOAPSonar release offers unique capabilities that provide strong validation features that help strengthen the quality of enterprise services.”

Version 4.0 of the software, now shipping, offers WS-Policy Consumption and Policy Generation reporting, WSDL scoring, custom XSD Schema support, Schematron validation, SmartCard certificates, and asynchronous performance testing. It’s a free upgrade for customers running SOAPSonar 3.5 or later.

HP and Surgient Play Virtually Together

Surgient, a virtualization company, is working with Hewlett-Packard to improve the speed and accuracy of software development and testing.

The Surgient Virtual Automation Platform optimizes IT’s ability to support critical business initiatives, effectively managing diverse virtual resources, according to Surgient. Integrated with HP Server Automation software, the solution empowers IT teams to automate the creation of identical production environments, whether physical or virtual, allowing delivery of the highest-quality software applications.

“With our pre-existing integration with HP Quality Center and new integration with HP Server Automation software, we can provide organizations with a comprehensive solution for the creation and management of cost-effective virtual environments for the testing of production-ready software applications,” said Sameer Jagtap, Surgient’s vice president of product strategy. “Customers can accelerate test cycles by consolidating test infrastructure and automating the set-up of clean, identical production environments on-demand.”

IT operations and development test teams can now use Surgient to define the virtual blank-slate deployment environment, and then populate that environment by leveraging existing, production-ready operating system sequences from HP Server Automation to build replicas of the production systems in the virtual environment.

“Customers can manage change more effectively and meet compliance needs with this integrated solution,” said Michel Feaster, director of products and Business Service Automation Software at HP. “By creating identical, clean copies of production systems and leveraging the system of record for production deployments to be used for testing purposes, customers gain greater flexibility in getting new applications out to market.”

Registration Opens for FutureTest 2009

Come to FutureTest 2009, Feb. 24-29 in New York City, to focus on what it takes to deliver. Whether you’re a vice president of software test, director of software quality, project manager or a senior development manager with responsibility for test, FutureTest 2009 is where you want to be.

If you’re evaluating Web software built for use by your customers or by employees within your company, or if you’re charged with vetting Web apps built for internal consumption, FutureTest is for you. If you want to keep your organization on the cutting edge of software testing and quality assurance, FutureTest 2009 is the must-attend event of the year for you and your senior test professionals.

At FutureTest 2009, the editors of Software Test & Performance have assembled the greatest minds in software test/QA to help you look into the future so that you can better manage the present, while always being ready for change.

The eXtreme Early Bird registration price of $995 is a $500 savings from the full conference price. This discount expires on Nov. 21, so register soon at futuretest.net. Follow the conference as it evolves at blog.futuretest.net.

Send product announcements to stpnews@bzmedia.com



ST&Pedia
Translating the jargon of testing into plain English

Taster’s, er, Testers Choice
By Matt Heusser and Chris McMahon

So you’re a tester. How will you test? Put simply, here are the main things you will need:
1. Some test data.
2. A way to send that data to whatever you’re testing.
3. A means of analyzing the result.
4. A way to figure out what went wrong.
5. A way to report about what you did.
6. To see what happens when you use a whole lot of test data.
7. A way to report bugs and how they were fixed.

Doing all this manually can take a lot of time, and in the case of changing versions over time, can lead to mistakes. To simplify the process and minimize mistakes, companies and testers have developed tools to do some of this work for you.

This month’s issue of Software Test & Performance—the Testers Choice issue—describes the tools that readers like best and use most. And this month’s ST&Pedia is devoted to some of the Testers Choice categories, to help you understand how they’re defined and how they fit into your daily life as a tester.

Q: What would your answers be? Did you exhaustively test this? Are we doing an SVT after our BVT? Does the performance testing pass? What are your equivalence classes? Which heuristics are you using?

A: ST&Pedia will help you answer questions like these and earn the respect you deserve.

Upcoming topics: December: Test Automation. January 2009: Change Management, ALM. February: Tuning SOA Performance. March: Web Perf. Management. April: Security & Vuln. Testing. May: Unit Testing.

DATA TEST/PERFORMANCE tools allow for setup, population, migration, testing, performance measurement, and tuning of databases, Web servers and all forms of data-driven services.

FUNCTIONAL TEST tools evaluate applications and components through executing code. They typically “run” automated tests, which may drive code at the unit level, run business logic at the acceptance level, or directly drive a user interface.
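To make the unit-level case concrete, here is a minimal sketch of such an automated test written with JUnit 4; the ShoppingCart class and its methods are invented for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {
    // Send known data to the code under test, then analyze
    // the result against an expected value.
    @Test
    public void totalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("widget", 250);  // prices in cents
        cart.add("gadget", 750);
        assertEquals(1000, cart.totalInCents());
    }
}

A functional test tool runs hundreds of checks like this one and reports which passed, which failed, and what went wrong.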
STATIC/DYNAMIC CODE ANALYSIS tools look at the code, either examining it as text looking for flaws (static), or providing instrumentation on the code as it is executed (dynamic). Instrumented code can provide information about the software under test, such as the percentage of lines of code, or possible code branches, that were covered by the tests and where uncovered (missed) spots lie.
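Conceptually, the dynamic variety works by planting probes that record which paths actually execute. The hand-written Java sketch below illustrates the idea with invented names; real analyzers inject equivalent probes into compiled code automatically:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CoverageProbe {
    static final ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

    // The kind of probe an instrumenting tool would insert at each branch.
    static void hit(String branchId) {
        hits.merge(branchId, 1, Integer::sum);
    }

    static int classify(int x) {
        if (x < 0) { hit("classify:negative"); return -1; }
        else       { hit("classify:non-negative"); return 1; }
    }

    public static void main(String[] args) {
        classify(5);
        // Branch IDs that never appear are the uncovered (missed) spots.
        System.out.println("branches hit: " + hits.keySet());
    }
}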
TEST/QA MANAGEMENT tools provide planning, organizing, and oversight tools for the test process, such as the work breakdown structure, the requirements that have been tested, what remains to be tested, known issues, and creation of reports and summaries.

DEFECT/ISSUE MANAGEMENT tools track the status of known issues and provide a history of when bugs were discovered, by whom, how to reproduce the issue, and when the issue was fixed.

LOAD/PERFORMANCE TEST tools help measure the average experience for a client, and many generate and measure performance under load.

SOA/WEB SERVICES TEST tools help evaluate a Web services stack, such as SOAP, XML, and WSDL, typically by sending (simulated) messages to the Web server and evaluating responses.
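As a rough illustration of what happens under the hood, the sketch below posts a hand-built SOAP envelope and checks the HTTP response; the endpoint and operation are hypothetical, and a real tool adds WSDL awareness, schema validation and much richer response analysis:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapSmokeTest {
    public static void main(String[] args) throws Exception {
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body><GetQuote><symbol>IBM</symbol></GetQuote></soap:Body>" +
            "</soap:Envelope>";
        // Hypothetical endpoint, for illustration only.
        URL url = new URL("http://example.com/services/quote");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"GetQuote\"");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }
        // Evaluate the response: an HTTP error or SOAP fault fails the check.
        System.out.println(conn.getResponseCode() == 200 ? "PASS" : "FAIL");
    }
}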
SECURITY TEST tools identify potential vulnerabilities of applications, generally either by hardening the application or operating system (such as with a firewall) or by generating multiple attacks on the system and listing vulnerabilities. Some also spot “vulns” using static code analysis.

TEST AUTOMATION tools execute computer programs and evaluate the results. The two most common approaches to GUI test automation are record/playback/compare of screenshots, and driving and comparing the GUI through object IDs, known as keyword-driven testing. (More on test automation next month.)
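A keyword-driven framework can be pictured as a small interpreter that maps each action word onto driver code, so non-programmers can write tests as rows in a spreadsheet. A minimal sketch, with every name invented for illustration:

import java.util.List;

public class KeywordRunner {
    // A stand-in for whatever actually drives the GUI.
    interface Gui {
        void type(String field, String value);
        void click(String id);
        String textOf(String id);
    }

    // Each row holds an action word followed by its arguments,
    // e.g. {"enter", "username", "kevin"} or {"click", "login"}.
    static void run(List<String[]> rows, Gui gui) {
        for (String[] row : rows) {
            switch (row[0]) {
                case "enter":  gui.type(row[1], row[2]); break;
                case "click":  gui.click(row[1]); break;
                case "verify":
                    if (!gui.textOf(row[1]).equals(row[2]))
                        throw new AssertionError("Mismatch at " + row[1]);
                    break;
                default:
                    throw new IllegalArgumentException("Unknown action: " + row[0]);
            }
        }
    }
}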
EMBEDDED/MOBILE TEST/PERFORMANCE tools specialize in the testing of embedded or wireless applications and systems, either with support for embedded operating systems and/or with specific interfaces to and features for testing mobile devices.

SCM/BUILD MANAGEMENT tools create an automated library of code changes, including differences. Build management systems assist or automate builds, integrate components, and automate the develop-and-deploy cycle by running builds and automating build-time tasks.

Matt Heusser and Chris McMahon are career software developers, testers and bloggers. They’re colleagues at Socialtext, where they perform testing and quality assurance for the company’s Web-based collaboration software.


Party Time. Excellent.
Take a Trip Backstage to Hang With This Year’s Test-Tool Heroes (We’re Not Worthy!)

By Edward J. Correia

Ex-squeeze me! OK, pop quiz: In 2006, did you vote for Mercury Interactive as your starring supplier of testing tools? Chyeah! You all put your hands together for LoadRunner, QuickTest Professional and TestDirector for Quality Center.

I mean, like, last year the players were the same, dude, but the awards went to Hewlett-Packard, which acquired Mercury in November 2006. Bonus! These tools have received our standing ovations for four years running. They are like truly amazing and excellent.

Just as last year’s tallies bestowed the awesome LoadRunner with our Grand Prize for most votes overall, so too does the versatile tool receive this year’s top honor, leading in the individual categories of Java/Test Performance, Data/Test Performance and Load/Test Performance. Schwing!

So here’s your backstage pass to party with the best testing tools to take the stage in 2008, along with the two closest warm-up bands. This year’s categories included defect management, automation, security, performance, free solutions and those from new players—seventeen categories in all. Maybe you’ll see your Testers Choice. Party on, Garth.


GRAND PRIZE WINNER 2008
Hewlett-Packard LoadRunner

The tester’s perennial favorite is LoadRunner, a multi-faceted load and performance testing tool. LoadRunner examines application behavior and performance of running systems as it applies load to simulate application activities. The tool is capable of simulating real-world scenarios with hundreds or thousands of simultaneous nodes simulating an application, Web service, Web server, database or other program in Windows or Unix.

HP’s most recent major update of LoadRunner came in June 2007, when it enhanced the tool to better address the testing needs of companies deploying applications based on so-called Web 2.0 technologies.

Data/Test Performance

LOADRUNNER took the most votes in the data/test performance category, which groups products in terms of their ability to apply test data to an application or system and evaluate how it handles the processing.

At the core of LoadRunner’s power is the Virtual User generator (VuGen), which creates test scripts for playback and test case simulation. Script parameters can be modified as needed to adapt to different cases, data parameters (such as for keyword-driven testing), correlation and error handling. A controller module runs the scripts and can simulate large numbers of users while the analysis module tracks and graphs behavior and performance results.

If you’re new to LoadRunner or need to pick up a few new techniques, you might refer to the March 2007 issue of Software Test & Performance for an excellent tutorial by consultant Francisco Sambade on using the tool to test multiple protocols.

Runners-up in this category were Compuware File-AID/CS and Red Gate’s SQL Data Generator. File-AID is an enterprise data management tool that permits testers to quickly build test data environments across a multitude of systems, including mainframes, MVS, DB2 and distributed systems (as the finalist CS edition).

Introduced just this past April, Red Gate’s SQL Data Generator has generated enough attention from testers to unseat Intel’s VTune Performance Analyzer, last year’s runner-up in the category. The Red Gate tool’s claim to fame is the ability to populate “two million rows of intelligent data for 10 tables in less time than it takes to get a cup of coffee.” What’s more, it does so with a single mouse-click, according to the company.
Functional Test

Once again, HP’s QUICKTEST PROFESSIONAL takes the top prize for functional testing. The 2006 Grand Prize winner, QuickTest Pro is a Windows-only record-and-playback tool for functional and regression GUI testing, and it includes a scripting language based on VBScript that can control and manipulate program objects and allow testers to script out test procedures.

Leading it to victory in 2006 were enhancements to the tool’s team collaboration capabilities, a new object repository manager and the ability to share function libraries across tester workgroups. It also added keyword management and drag-and-drop test-step construction, XML output for test-result reporting, and a new and more accurate debugger that identifies errors while building or maintaining test cases. HP has released no major updates recently.

Runners-up in the functional test category were IBM Rational Functional Tester and Borland SilkTest. Neither of last year’s finalists—Parasoft SOAtest and Compuware Optimal Quality Management—came close this time.

Functional Tester is an automated function and regression test tool for GUI and data-driven testing. It allows scripting with its own ScriptAssure environment or can be driven by Visual Basic .NET, Java from within Eclipse, or custom controls built with Java or .NET through a proxy SDK.

SilkTest is also an automated function and regression testing tool, with cross-platform testing capabilities for applications built for client/server, Java, .NET and the Web. It also includes a scripting framework to help facilitate reuse of test assets.
Static/Dynamic Code Analysis

Testers handed the top honor in the code analysis category to IBM’s RATIONAL SOFTWARE ANALYZER DEVELOPER EDITION, unseating three-time incumbent PurifyPlus, a useful tool with long and storied roots, now also developed and marketed by IBM Rational. The automated runtime analysis tool spots memory leaks, profiles application performance and analyzes code coverage. Supported languages include C/C++, Java, the .NET languages, Visual Basic, and Visual C++. Versions are available for Linux, Unix and Windows.

Also unseated by this year’s runners-up—the Eclipse Test and Performance Tools Platform and HP’s DevInspect—were Compuware’s DevPartner Studio and Parasoft’s Jtest, last year’s finalists in the category.

Eclipse’s TPTP project for Linux and Windows got an infusion of new features last June as part of the Ganymede release train. Major new capabilities include increased test coverage through test creation, automation and expanded run time execution; the ability to execute multiple instances of the tool in parallel; and simplified handling of test assets, including the ability to cut, copy, paste and rename. Also new are an improved thread profiler and a new profiler API for Java 1.5 and above.

Inherited along with HP’s acquisition of SPI Dynamics near the end of 2007, DevInspect was widely praised this year by testers. The tool is intended primarily for automated security testing and remediation during development.

Test/QA Management

HP is once again on top with TESTDIRECTOR FOR QUALITY CENTER, which testers voted this year and last their favorite tool for test and QA management. TestDirector for Quality Center includes modules for requirements management, test planning, test lab and defect management.

TestDirector allows testers and managers to gather requirements, design and schedule manual and automated tests, and analyze the results, all through a browser. Also included are graphical reporting and integration with HP’s WinRunner and QuickTest Professional tools. Options include extensions for Oracle, SAP and SOA.

Finalists in this category were Borland’s SilkCentral Test Manager (as last year) and IBM’s Optim Test Data Management Solution, kicking VMware Lab Manager from its third-place perch last year.

Since being acquired by Borland along with Segue Software in February 2006, SilkCentral Test Manager has been integrated with VMware Lab Manager to streamline cross-platform testing, and with Eclipse to facilitate capturing of test results in a centralized console for impact analysis. The browser-based environment for remote, simultaneous test execution and management of JUnit/NUnit and other third-party testing frameworks also includes a manual test client to ensure repeatable data collection and reporting. Reporting is provided by BIRT.

Optim Test Data Management Solution is IBM’s way to manage test data and improve application quality. The tool permits testers to apply specific standards of coverage, create error or boundary conditions and create an environment that closely simulates production. The same tool can be used across enterprise apps, including PeopleSoft and Siebel; databases from IBM, Microsoft, Oracle, Sybase and others; and Linux, Unix, Windows and z/OS.
Defect/Issue Management

Last year there were two winners in the defect and issue management category: HP TestDirector for Quality Center and the Mozilla Foundation’s Bugzilla were tied. This year, TESTDIRECTOR alone took the top spot and Bugzilla moved to third, edged out by Microsoft’s Visual Studio Team Edition for Software Testers.

The defect tracking module in TestDirector automatically checks its central defect database for similar defects each time a new defect is logged, helping to reduce duplicates. The tool tabulates defect statistics to aid management deployment decisions and identify trends.

A relative newcomer, Visual Studio Team Edition for Software Testers can perform functional and load testing on Web applications and Web sites, and manages and permits reuse of test cases, all without writing a single line of code.

Bugzilla began its life in 1998 and was originally written in Tcl by Terry Weissman. Thinking another language might get more traction within the community, Weissman decided to port it to Perl, which resulted in Bugzilla 2.0. In April 2000, the project was handed off to Tara Hernandez, who gained more developer participation, including that of its current custodian, Dave Miller. Bugzilla last year won the top spot in the free test/performance tools category, but this year slipped to runner-up.
Load/Performance Test

This year’s winners in the load and performance testing category are a carbon copy of last year’s. LOADRUNNER again topped the list, with capabilities centered around its Virtual User Generator record-and-playback tool. VuGen generates an editable script of user actions that are routed through a protocol-dependent proxy during recording. So when scripts are generated using the Web/HTTP edition, testers can set LoadRunner to generate either URL- or HTML-based scripting, for example.

Runners-up in this category again were IBM Rational Performance Tester and Borland SilkPerformer. Performance Tester stresses Web applications and tests their scalability. Integrated with IBM’s Tivoli management environment, it enables large, multi-user tests while using minimal hardware resources. Distributed controller agents are available for Linux, Windows and z/OS. Test results are displayed in a Linux or Windows application with high-level and detailed tree views and a tree-based text editor.

SilkPerformer is an automated, sharable environment for executing load and stress performance tests across Java and .NET applications with server-side analysis. The tool can simulate thousands of users with no licensing restrictions for controllers or protocols. A visual script recorder enables script editing and manipulation. A plug-in extends test creation to Eclipse. There’s also a SOA edition.

SOA/Web Services Test

In the SOA and Web services testing category, last year’s winner was this year’s no-show. LoadRunner has fallen out of favor for testing these types of applications, replaced in 2008 by IBM’s RATIONAL TESTER FOR SOA QUALITY.

When your app relies on someone else’s Web service, you can’t leave availability to chance. For example, if you’re depending on a credit authorization service that’s accessed via the Web, how can you ensure that it won’t fail on Black Friday, traditionally America’s busiest shopping day? While you can’t control the credit agency’s service, you can use Tester for SOA Quality to simulate thousands of hits against its service and watch how your application behaves while simultaneously trying to conduct hundreds or thousands of transactions. The tool also works with services that present no GUI.

Runners-up in the category were HP’s QuickTest Professional and Parasoft SOA Quality Solution.

A perennial favorite of testers, QuickTest Pro is an automated functional and regression testing tool for GUI applications that records and plays back user actions on Web or native applications. Scripts are recorded in a proprietary scripting language based on Microsoft’s VBScript and can be modified, played back and reused.

Parasoft’s SOA Quality Solution incorporates the company’s multi-layer workflow approach to testing and quality assurance. Capabilities include policy enforcement with interoperability and consistency checking across SOA layers; multi-layer verification of the underlying implementation for auditing; service emulation with end-to-end testing and emulation of business logic or transactions; regression testing; penetration testing; and load and performance testing.
Security Test

Last year, SPI Dynamics took the top prize in security testing with WebInspect, its security scanning and assessment tool for Web applications. Now owned by HP, which took both runner-up slots in the category, WebInspect this year was displaced by IBM’s RATIONAL APPSCAN STANDARD EDITION.

Inherited along with IBM’s acquisition of Watchfire, AppScan is an automated security auditing tool for developers, QA staff and security auditors. It’s also available in multi-user and SaaS versions. IBM in October introduced AppScan Express Edition, a version of the utility aimed at small and mid-sized businesses with support for apps made with AJAX, Flash and other so-called Web 2.0 technologies.

HP’s Assessment Management Platform, a first-timer for the Testers Choice awards, is an automated, centrally controlled security scanning and testing platform for Web applications. It permits distributed teams to access and remediate security vulnerabilities through a dashboard-style interface.

WebInspect last year, prior to its acquisition by HP, got itself a shiny new profiler that scans Web apps and suggests configuration settings for the most effective testing. It also received a real-time traffic monitor that reports HTTP activity during a scan. A results window displays requests and responses sent by WebInspect during crawls and security audits. The tool was completely rewritten in January 2007, and SPI claimed performance improvements and compatibility with modern technologies and techniques. WebInspect 7.5 reportedly further improved auditing capabilities for spotting vulnerabilities in AJAX-based applications and strengthened its support for Windows Vista.
SCM/Build Management

Microsoft VISUAL SOURCESAFE was tops in the SCM/build management category for the second year running. Originally developed by One Tree Software as SourceSafe, this tool was first released in the early 1990s as version 3.1, which a Wikipedia author speculates was meant to match the current version of Windows at the time. Microsoft’s SCM at the time was called Delta and was said to be inferior. Microsoft acquired the 16-bit SourceSafe and released a 32-bit version as Visual SourceSafe 4.0 around 1995. It would be 10 years before the company released VSS 2005, the first client-server version; until then it was limited to local use.

Among VSS’ strongest attributes are its tight integration with Visual Studio, relative ease of use and price; it’s free with certain subscriptions to MSDN. It also offers numerous expansion possibilities thanks to a pluggable architecture. Its future in the enterprise seems uncertain, however, since many of its capabilities are implemented in Visual Studio Team Foundation Server and several good rival systems are available from the open-source community.

One such rival system is Subversion, which again this year is a runner-up along with ElectricCommander from Electric Cloud. The Subversion version control system, particularly popular among open-source developers, was launched in 2000 by CollabNet. Subversion commits are atomic and do not cause corruption if interrupted. All moves, adds, changes and deletions are versioned, including files, directories, trees, file metadata and symbolic links. It supports binary file storage with file-locking for unmerged files, and sends diffs in both directions.

Build automation tool maker Electric Cloud in late September began shipping ElectricCommander 3.0, which included the ability to “preflight” newly modified source code to determine if it will correctly build into an application. The new version also integrates with Eclipse and Visual Studio. ElectricCommander automates program component assembly without regard to development language or build utility. It supports numerous scripting languages, including Bash, Perl, Python and Tcl. It also works with AccuRev, ClearCase, Perforce, Subversion and Synergy software configuration management systems.

Test Automation

QUICKTEST PROFESSIONAL again comes out on top, this time in the test automation category. QuickTest Pro includes a pluggable architecture to allow customization, with plug-ins for ActiveX controls, Web applications and VB objects; optional .NET-language objects and multimedia plug-ins are available. The tool also saves the AUT’s screens along with the recorded script, highlighting the portion of the screen being tested as an aid to regression testing. These are also useful for creating checkpoints, which can apply to images, tables and pages, and permit verification of expected application behavior during automated tests. Other features include virtual objects, output value for data verification, data- and transaction-driven testing, and exception handling.

Compuware TestPartner is an automated GUI function testing tool that uses Microsoft’s Visual Basic for Applications for scripting, debugging and error handling. Tests also can be created using “Visual Navigator,” a storyboard environment. By displaying screens of the AUT, testers select from lists of automation steps to assemble storyboards. They also can add test logic, variables and other particulars.

Another newcomer to the Testers Choice awards is HP Business Process Testing, a Web-based test design solution that the company says is intended to allow subject-matter experts to build and execute manual and automated tests without any programming knowledge. It automates testing and creation of documentation, and is also said to reduce test maintenance.

Embedded/Mobile Test/Performance

Parasoft took the top prize in the embedded/mobile test and performance category with its C++ TEST, putting IBM’s Rational Test RealTime out of the top spot. C++ Test is a solution for code analysis and review, coverage analysis, regression testing and automation of unit and component testing. Parasoft offers versions for embedded and non-embedded apps for Linux, Solaris and Windows. C++ Test works with Eclipse and Visual Studio as well as in batch processes.

Runners-up in the category were TestShell from QualiSystems and Prevent from Coverity.

TestShell, the flagship of tool maker QualiSystems, is a line of products that includes tools for planning, building, executing and controlling tests and analyzing the results. The heart of the suite is TestShell Foundation, the engine and database that provide a pluggable infrastructure to grow with the needs of the testing organization.

Aptly named for its intended purpose is Prevent, Coverity’s code scanning tool for finding programming errors using static analysis before compilation. It works with C, C++ and Java source code, and integrates with an existing build environment and development process.
Java Test/Performance

LOADRUNNER is the testers choice in the Java test and performance category for the third year running. During load tests, the tool uses monitors to gauge the performance of the applications or components under test. Available monitors include those for virtual users, transaction rate, network latency, Web and database server responsiveness (these are server-specific) and for server resources.

Monitor data is saved for examination by the analysis tool, which processes and graphs the completed scenario results. By analyzing these graphs and combining them into reports, testers gain an understanding of the big performance picture and can make or suggest adjustments to tune performance.

JUnit and the Eclipse Test and Performance Tools Platform also received high marks.

Often heralded as the grandfather of unit testing frameworks is JUnit, one of the earliest and most successful of its kind. Celebrating its 10th birthday in 2007, JUnit has spawned ports to numerous languages including C# (NUnit), C++ (CPPUnit), Fortran (fUnit), Perl (Test::Class and Test::Unit), PHP (PHPUnit) and Python (PyUnit). There’s even a version for JavaScript (JSUnit). JUnit has been downloaded more than 2 million times in 10 years, according to one study, and is included as a plug-in to all major IDEs, including Eclipse.

JUnit 4.5 was released in August and simplifies JUnit extensions with new public extension points for inserting behavior into the standard JUnit 4 class runner. Advances in JUnit 4.4 included a new API with an “extensible and readable syntax,” according to the release notes. The API enables features like assumptions, or the declaration of explicit dependencies when a tester has no control of forces that might cause a test to fail. Assumptions also give rise to theories, which can capture some aspect of behavior in “possibly infinite numbers of potential scenarios,” read the notes.
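To illustrate, an assumption lets a test opt out, rather than fail, when a precondition outside the tester’s control is not met. Here is a minimal sketch against the JUnit 4 Assume API; the Windows-only class under test is invented for illustration:

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;
import org.junit.Test;

public class PathHandlingTest {
    @Test
    public void recognizesDriveLetters() {
        // If this assumption fails, the test is ignored rather than
        // failed: the tester can't control which OS runs the build.
        assumeTrue(System.getProperty("os.name").startsWith("Windows"));
        assertEquals("C:", WindowsPaths.driveOf("C:\\temp\\report.txt"));
    }
}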
When it was updated to version 4.5.1 in September, the Eclipse Test and Performance Tools Platform (TPTP) workbench and Agent Controller received Java 1.6 support, numerous bug fixes and increased test coverage through test creation, automation and expanded run-time execution. The update includes numerous enhancements over 4.5, its Ganymede release in June.

.NET Test/Performance

For testing performance of .NET applications, testers this year chose Microsoft VISUAL STUDIO TEAM SYSTEM EDITION SOFTWARE TESTER, moving it up from its second-place finish last year to trade places with HP LoadRunner.

Team System is an assemblage of server- and client-side components that have trickled onto the market since 2005. The primary server-side components are Team Foundation Server and SQL Server. The server stores all source code and supports multiple simultaneous checkouts, branching, merging, conflict resolution and tracking of all changes. Security can be applied at any level.

Reporting is handled by SQL Server Reporting Services. Included reports can show code change over time, lists of bugs without test cases, and regressions on previously passing tests. There’s also a build server with tracking. On the client side, Visual Studio enables code analysis, code coverage and test tools for build validation.

Support for .NET languages was added to LoadRunner beginning with version 8, which also added a diagnostics server, commander, mediator and probes for Java EE and .NET. Other improvements made LoadRunner 8 more scalable, and easier to install, configure and use, according to the company.

LoadRunner permits hundreds of simulated users to be launched from one or several machines with the LoadRunner agent installed. These load generators, or “injectors,” are instructed on how and when to launch scripts using LoadRunner scenarios so that simulations can be ramped up.

The .NET version of Compuware DevPartner Studio also was a runner-up in this category. The company also offers versions for C/C++ and Java. DevPartner analyzes quality and complexity of code and can detect memory leaks, timing issues and coverage. The software also can help with memory optimization; CPU, disk and network resource management; fault simulation; and error detection.


Free Test/
cycle management capabilities into the tool, will integrate it with
its Office Project Server, and add full traceability across projects
and resources as well as metrics and dashboards.
Testers last year chose HP Performance Center as their
favorite integrated test/performance suite; this year it was run-
Performance
ner-up. The tool combines all the capabilities of LoadRunner
with test-asset and human-resource management features and Of the free testing and performance tools available today, testers
reporting in a centralized repository accessible through a browser with role-based security. Load-test configurations and test scripts can be created, edited and scheduled for execution. Progress can be tracked, graphed and compared with data from past projects or releases. Performance Center also can reboot and deploy patches to remote nodes.

Also a runner-up was TestComplete from AutomatedQA. TestComplete is a record and playback tool that automates UI functional testing of Flash, Java, .NET, Web and Windows apps. It also can handle database and back-end HTTP load testing. Recorded tests can be modified and reused for regression testing.

This year, readers chose the TEST AND PERFORMANCE TOOLS PLATFORM of Eclipse, bouncing Bugzilla, last year's winner, into a runner-up. TPTP is a test and performance life-cycle tool, addressing a project's needs from early testing to production application monitoring, and everything in between. Test editing and execution, monitoring, tracing and profiling, and log analysis capabilities all are core parts of this tool, which supports embedded systems, standalone applications and enterprise-grade apps, as well as high-performance computing systems.

One of Eclipse's earliest and most successful projects is still TPTP, and it has enjoyed tremendous support from the community. Now at version 4.5.1, its subprojects include those for building tools for monitoring and logging performance and resource allocation of application servers; for building testing tools that edit, manage, deploy and execute tests; and for application trace and profiling.

Runners-up were JUnit and Bugzilla. It's safe to say you've succeeded when your name is widely used as a verb. For instance: "Google that term and see what you get," or "Be sure to JUnit your code before checking it in." JUnit has become synonymous with unit testing and has been built into Eclipse for years now. Its success has even exceeded the expectations of co-creator Kent Beck.

Bugzilla, the open-source defect tracking system, celebrated version 3.0 last year, nine years after v2. Chief among the enhancements are huge performance gains, a Web services interface, and the ability to control and customize far more of the environment than in prior versions, according to project developers.

Commercial Test/Performance Under $500/Seat

The prime choice for sub-$500 commercial testing solutions was SOFTWARE PLANNER, making 2008 a peak in the roller-coaster ride that was the past three years for Pragmatic. The SaaS tool earned top marks this year, was a runner-up in 2007 and took the top spot in 2006. Software Planner is a Web-based hierarchical project planner that uses task linking to prevent one task from beginning before another is completed.

Project managers can use the system to set baseline dates and costs and track deviations as the project proceeds. Team members are alerted to new tasks via e-mail; multi-level security limits access to authorized people. Billed as an application life-cycle management tool, it helps IT departments manage requirements, project deliverables, test cases, defects and trouble tickets. The tool integrates a calendar, dashboards, document collaboration and sharing, a knowledge base, and threaded discussions. Software Planner also is available as a self-hosted solution.

Runners-up were Mindreef SOAPscope Developer and Shunra Virtual Enterprise Desktop Standard.

Mindreef has been earning awards almost since coming onto the scene in 2002. SOAPscope 5.0, a tool for testing SOAP-based apps, was awarded best solution from a new company in 2006. SOAPscope Developer allows teams to create, test, deliver and support Web services and SOA components, and to automate XML-oriented tasks. The tool is included with SOAPscope Server 6.1, a server-based solution intended for use and collaboration by all members of the SOA and Web services team, including analysts and managers.

Shunra Virtual Enterprise creates a virtual network for use as a simulation environment for application testing. It permits testers to determine application performance and user experience under a variety of network conditions and "impairments," such as high network load or traffic, excessive collisions, broadcast storms, and other types of real-world production scenarios. Test results are displayed graphically, and testers can drill into particular results for issue specifics and analysis.

Best Solution From a New Player

VMLogix LABMANAGER 3.5 took this year's top spot for Best Solution from a New Player, which we define as a company less than five years old. Released in May, version 3.5 added support for Citrix XenServer 4.1 to its existing support for virtualization systems from Microsoft and VMware. LabManager 3.5 also includes some major enhancements to the user interface and to virtual network management.

Last year's winner in this category was Defender from Fortify Software. This year the company reached runner-up with another new product: Fortify 360, which takes aim at the software development life cycle with the ability to identify, prioritize and repair vulnerabilities during every phase of development.

Requirements management company Blueprint grabbed a runner-up slot with Requirements Center, which brings cloud-based collaboration to application users and business analysts. As the company puts it, the solution puts these people at the heart of the development process, "allowing them to collaboratively define, visualize and validate their requirements in business terms, all before a single line of code has been developed."



Breaking Down Large Systems to Component Parts Turns Complex Testing Into Child's Play

By Keven Lui

A ten-year software developer and QA professional, Keven Lui is test leader at Hong Kong Air Cargo Terminals, Ltd.

Most experienced testers are familiar with performance testing. But have you ever tested full-scale performance that includes all sub-systems? Do you know how to distinguish between them? You will by the time you finish reading this.

The goal of performance testing is not to find bugs; it is to guarantee that the system can support a specified load, as well as properly recover from the excess use of resources. Through performance testing, we try to determine a system's performance, validate the quality attributes (such as reliability and stability) of the system, expose the bottlenecks, and measure the peak loads.

Many of today's systems can be considered huge—such as those of Amazon and eBay—e-commerce Websites that service thousands of users simultaneously all over the world. How do administrators of those sites ensure high efficiency and stability? Of course, that's done with performance testing. But do you know how to define the testing scenario, which functions should be included, the approach and steps, the areas of coverage and so on? Do you know how to touch all system components and exercise the system's full-scale performance? What follows is my method for doing that.

Before you do any testing, please consider some basic principles of performance testing:
• Ensure the test environment is stable and that test conditions are as similar to the production environment as possible.
• Try to simulate the random nature of the workload.
• Estimate the workload's growth every year and factor that into your tests.
• Ensure the version of software deployed and tested is correct.
• During a performance test, ramp up the system workload gradually, with adequate time between steps to allow the system to stabilize.
• Always monitor and record system stats, including CPU and memory utilization of clients and all servers used by the AUT.

In my practice, a full-scale performance test consists of the Response Time Test, the Load Test, the Stress Test, the Stability Test and the Database Volume Test. Many times people use these terms interchangeably, but I don't agree that they are interchangeable. A few definitions:

System Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Test: A test to determine how fast some aspect of a system performs under a particular workload; the validation and verification of other quality attributes of the system, such as scalability, reliability and resource usage.

Response Time Test: The basic performance test, and the one of most concern to end-users. The response time test evaluates the compliance of a system or component with specified performance requirements. It usually measures the end-to-end timing of critical business processes and/or transactions. I sometimes refer to it as the end-to-end response time test, or e2e time test, and often assign benchmark values. For example, the home page of your Website should display completely in less than 5 seconds.
Load Test: You can call this performance testing with loads. It tests an application under normal loads, such as testing a Web site under a range of actual loads, to determine the response times for various critical transactions or business processes and whether they meet a requirement (or Service Level Agreement, SLA). It also tests the capability of the application to function correctly under load by measuring transaction pass/fail/error rates.

Stress Test: This evaluates a system at the specified limitation of load to determine the load under which it fails, and how. It is important to know in advance if a "stress" situation will result in a catastrophic system failure, or even a "slow down."

Stability Test: Tests a system at different levels of load for prolonged periods of time. It simulates normal execution, multiplying the transactions that would be expected in a single day, week or other period by several times. It is intended to identify performance problems that might appear only after a large number of transactions have been executed, and is especially useful for servers intended to operate 24/7.

Database Volume Test: The database volume test is designed specifically for applications that access huge stores of data. It's important to know how the application will operate not only when it's new, but also as the database becomes populated and nears or exceeds its maximum capacity. Problems over time might include degraded response times for searching and report generation, added time or resource requirements for backup and restore functions, and an increase in transaction errors.

Once you execute all these tests with positive results, you can be confident in your system's performance and stability.

TABLE 1: LOAD TIMES

Function Name       | Min Response Time (s) | Max Response Time (s)
Home                | 8.343                 | 14.422
About               | 0.431                 | 1.839
Contact Us          | 0.570                 | 0.799
News                | 0.721                 | 6.002
Available Position  | 0.656                 | 1.926
Products & Services | 0.527                 | 9.440
Online Support      | 0.566                 | 3.849

TABLE 2: KNOW WHAT TO TEST

Critical Business Function: The most important functions to the business; those that will impact revenue or profit. Example: in a ticket sales system, the critical functions might be pricing and discounts, revenue collection, statistical analysis and reporting.

High Frequency Function: The function or UI that is most frequently used. Example: in a travel Website, the most frequently used UI/functions might include airline, airport, calendar, flight schedules, resorts near destination and prices.

Query Function: The query function is the basic function of the application; at least one query should be involved in the selected list. Example: in a human resources Website, "Search Jobs" and "Search Candidates" are the basic query functions.

Create or Update Function: At least one create or update function should be included, and it should verify correct behavior in case an update attempt is made while the database table (or record) is locked. Example: in a blog site, "Register" might be the first step before submitting a comment is allowed.

Heavy Throughput Function (Optional): If the testing will be impacted by the network or bandwidth, a heavy throughput function should be included. Example: streaming or downloading a video.

Activity 1: Response Time Testing
This is the narrow definition of performance testing, and it focuses on determining the application client's response time.

Put simply, all you need to do is launch the application and capture the response times for each function and for the loading of all UI pages from a client machine. This testing should also be repeatable and consistent: if you re-execute it several times, you should get the same or similar response times for each function. Of course, automation tools can simplify this job, and should be used if available.

Traditionally, response time is defined as the interval between when a user initiates a request to the server and when the first part of the response is received by the application. However, the customer's perspective is sometimes different, and long delays should prompt a message. In practice, you can define the response time of a Web application system to include the processing time on the client machine and the time taken to download and display all contents. Table 1 shows measured display times for some typical Web pages.

When defining response times, you will find that some are unacceptable according to the SLA. Generally, for a Web system, the time a single screen takes to perform a business logic transaction should be between five and 10 seconds, with UI display time limited to five to eight seconds. If you were the customer, would you want to wait more than 15 seconds for the next page? Certainly there are some principles that will help you decide which components or functions should be tuned:
• Is it critical to the business?
• Is there heavy throughput, or are there many functions, on the page?
• Is this a function or page with a large percentage of hits?
• Is the function related to the database?
• Is this a most complex function?
• Is this a page with a large picture or resource?

Also, the application should meet the SLA or otherwise satisfy the customer. This type of testing will help you determine whether clients will be satisfied or if functions should be tuned by the developer.
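The repeatable capture that Activity 1 describes is easy to script. What follows is a minimal sketch, not part of the article's toolkit: it times several full page downloads from a client machine and compares the worst case against a benchmark. The URLs, the sample count and the 10-second benchmark are hypothetical placeholders.

import statistics
import time
import urllib.request

# Hypothetical pages to time; substitute the AUT's real URLs.
PAGES = {
    "Home": "http://www.example.com/",
    "News": "http://www.example.com/news",
}
SAMPLES = 5                # re-execute several times for consistency
BENCHMARK_SECONDS = 10.0   # assumed SLA-style benchmark value

for name, url in PAGES.items():
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()  # include the full download, per the e2e definition
        timings.append(time.perf_counter() - start)
    print(f"{name}: min={min(timings):.3f}s max={max(timings):.3f}s "
          f"mean={statistics.mean(timings):.3f}s")
    if max(timings) > BENCHMARK_SECONDS:
        print(f"  WARNING: {name} exceeds the {BENCHMARK_SECONDS}s benchmark")

Run from the same client machine each time, this yields min/max pairs comparable to Table 1, although it cannot measure browser rendering time.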


Activity 2: Load Testing
Similar to Activity 1, load testing will capture a series of response times under load (sometimes called background loading or background noise). These are various load levels below the daily maximum load expectation, used to determine whether the application still functions properly. You will find there is no difference between performance testing and load testing except the level of background loading. Although the methods are similar, their goals are quite different. In this testing, you will validate that response times and transaction pass/fail/error rates under a range of loads meet the SLA. Usually, transactions per second (TPS), HTTP connections, concurrent users or throughput measurements are used to represent the load level. For example, one useful test might be done with a workload of 100 concurrent users.

From this testing, you will determine the performance thresholds of the application and of server resources such as memory, CPU utilization and disk usage.

Activity 1 performance testing gave you the response times of every function. In Activity 2, we load-test a nucleus of functions selected using the judgments in Table 2.

You can separate the background loading into three levels of the maximum load expectation: none (or lower than 1 percent), 50 percent and 100 percent. As in performance testing, start the background load first and measure the response times after the background load stabilizes. For efficiency, I suggest covering all functions, even trivial ones; you can perform the testing of the ten most important functions in order.

In this testing, the same important check should be done, that is, checking the transaction pass/fail/error rates. Load-test tools such as HP Mercury LoadRunner and IBM Rational Performance Tester are useful here to simulate multiple users accessing the application concurrently. I have found that a setting of 60-120 minutes for runtime duration is good enough to ensure a stable simulation environment.

As for exit criteria for this testing, the transaction failure rates should be lower than 3 percent for most systems. For financial systems, or others in which transactions are mission-critical, 0.01 percent might be closer to optimal.

Activity 3: Stress Testing
Following Activity 2, stress testing initiates the maximum expected system load, then raises it in an attempt to reach the system's absolute limit. Typically, stress testing starts with load testing, and then additional users or activities are gradually increased until the application server "slows down" or begins to work abnormally. For database applications, you can try more complex database queries, continuous system input or heavy database load.

With stress testing, we want to see what happens when the "acceptable" load is exceeded. Does the system crash? How long does it take to recover once the load is reduced? Does it fail in a way that causes loss of data or other damage? In particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks.

When the system's load limitation is reached, system performance will usually degrade and the system responds more and more slowly; but when the excessive load is removed, the performance should return to normal. So when you execute the stress testing with initial loads, you can add additional users or activities with a gradual ramp-up (e.g., add one more user every 5 minutes until the system's load is stable); at the same time, monitor the CPU and memory usage on the application server and database server. It's also useful to monitor performance on the test clients and all other Web servers and application servers touched by the application.
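Commercial tools such as LoadRunner or Rational Performance Tester handle this simulation for you, but the ramp-up logic itself is simple. The following sketch is only an illustration with a hypothetical target URL: it adds one virtual user at a time, holds the full load, and then checks the 3 percent failure-rate exit criterion mentioned above.

import threading
import time
import urllib.request

TARGET = "http://www.example.com/"  # hypothetical AUT entry page
RAMP_STEP_SECONDS = 300             # add one user every 5 minutes, as above
MAX_USERS = 100                     # the article's example workload
HOLD_SECONDS = 60 * 60              # hold full load (60-120 minutes suggested)

results = {"pass": 0, "fail": 0}
lock = threading.Lock()
stop = threading.Event()

def virtual_user():
    # Each virtual user loops on the same transaction until the test ends.
    while not stop.is_set():
        try:
            with urllib.request.urlopen(TARGET, timeout=30) as r:
                r.read()
            ok = True
        except Exception:
            ok = False
        with lock:
            results["pass" if ok else "fail"] += 1

for _ in range(MAX_USERS):
    threading.Thread(target=virtual_user, daemon=True).start()
    time.sleep(RAMP_STEP_SECONDS)   # let the system stabilize between steps

time.sleep(HOLD_SECONDS)
stop.set()

total = results["pass"] + results["fail"]
failure_rate = results["fail"] / total if total else 0.0
print(f"transactions={total}, failure rate={failure_rate:.2%}")
print("exit criterion met" if failure_rate < 0.03 else "exit criterion NOT met")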


Performance counter data is used to determine when a test client, Web server or application server has reached its maximum CPU or memory use. These performance counters will be the easiest way to determine where the bottleneck is occurring. For example, total processor time is the best counter for viewing processor saturation; a number consistently above 90 percent on one or more processors indicates that the test is too intense for the hardware.

Generally, there are four levels of bottleneck: application level, database level, operating system level and network level. Table 3 shows some typical counters used in server performance tests, and can be helpful when trying to find less obvious causes of performance problems. Table 4 contains stress-test results from a Web server test.

TABLE 3: SERVER PERFORMANCE COUNTERS

Processor, % Total Processor Time (reference: < 75%): If the server's processor utilization stays high for extended periods, it might indicate a bottleneck and that more CPU power is needed.

Processor, Load Average (reference: < 3 per CPU core): The average number of processes waiting for execution should, in general, be less than the number of CPU cores. For a system with a single CPU core, a load average less than two is optimal; three is acceptable; more than five is sub-optimal.

Memory, Available Bytes (reference: > 10% of physical memory): The amount of physical memory available for use by the server. If the server does not have enough memory to handle its workload, it will spend more time reading from disk. Sometimes Page Faults/sec can indicate a memory shortage.

PhysicalDisk, % Disk Time (reference: < 80%): The percentage of elapsed time that the disk is busy with read/write activity. If this counter is high and the processor and network bandwidth are not saturated, there is likely a disk bottleneck. A number consistently above 80% may indicate a memory leak.

TABLE 4: STRESSFUL RESULTS

Processor, % Total Processor Time (reference < 75%): Max 86% for 6s, Min 76% for 3s. Result: Pass
Processor, Load Average (reference < 3 per CPU core): Max 1.324, Min 0.338. Result: Pass
PhysicalDisk, % Disk Time (reference < 80%): 34%. Result: Pass
Memory, Available Bytes (reference > 10% of physical memory): Max 1,024,113 (25%), Min 614,408 (15%). Result: Pass

Activity 4: Stability Testing
After Activities 1 to 3, you will test the permanency of the system; that is stability testing. In this testing, the system will be running at various loads for prolonged periods of time.

Why should we do this testing? For real-time and embedded systems, it can help ensure that systems will run normally through many days and nights. Simply put, this test simulates more and varied workloads across a whole day, or a longer period, than would be expected in a working day. Stability testing can identify problems that only occur after the system has been running a large number of transactions over a long period of time.

The memory leak is generally the most common issue, showing up as gradual degradation of the response time of some functions and reduction in efficiency; serious memory leaks will almost always eventually result in a larger crisis or failure. This testing provides an opportunity to identify defects that load and stress tests may not, due to their relatively short duration.

A stability test should run for as long as possible, so the weekend is often an opportune time for this test.

Of course, the stability test executes one of the performance tests, but it is also important to measure CPU and physical memory utilization, and system response times. If a particular server process needs to be available for the application to operate, it is necessary to record the server's memory usage at the start and the end, as well as the client's (or JVM's) memory usage.
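Recording server stats across a long stability run can be automated. Here is a minimal monitoring sketch checked against the Table 3 thresholds; it assumes the third-party psutil library is installed on the monitored machine (its load average is emulated on Windows), and the one-minute sampling interval is an arbitrary choice.

import time
import psutil  # third-party; pip install psutil

CPU_LIMIT = 75.0   # Table 3: % Total Processor Time < 75%
MEM_LIMIT = 10.0   # Table 3: available memory > 10% of physical

while True:
    cpu = psutil.cpu_percent(interval=1)   # sample CPU over one second
    load1, _, _ = psutil.getloadavg()      # 1-minute load average
    load_per_core = load1 / psutil.cpu_count()
    mem = psutil.virtual_memory()
    available_pct = mem.available / mem.total * 100

    line = (f"cpu={cpu:.1f}% load/core={load_per_core:.2f} "
            f"mem_available={available_pct:.1f}%")
    if cpu > CPU_LIMIT or load_per_core > 3 or available_pct < MEM_LIMIT:
        line += "  (Table 3 threshold exceeded)"
    print(line, flush=True)
    time.sleep(60)   # assumed one-minute recording interval

Redirecting this output to a file gives you the start-of-run and end-of-run memory figures the text above says to record.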


Activity 5: Database Volume Test
If you're deploying a database application, it's recommended that you add the database volume test to your required list. It's important to know how the database will behave when it's loaded to its maximum with data, or even beyond. This test will let you know of problems with data corruption, degraded response time for searching or generating reports, slow backup and restore processing, or an increased number of transaction errors.

This testing is similar to the response time test (Activity 1). The idea is to record the response times when the system is running with and without maximum amounts of data. The volume test might also reveal functions running abnormally, slow or no response, no disk space for writing the log file, no space in databases or data files, or complex searches with sorting through many tables that lead to overflow, to name just a few.

TABLE 5: VARIED RESULTS

Item                                         | Case 1    | Case 2  | Case 3
Database Volume Setting                      | 20 GB     | 20 GB   | 20 GB
Database Volume Utilization                  | 99.5%     | 42%     | 3%
Table Record Count (CARGO)                   | 1,200,920 | 450,760 | 10,873
Table Record Count (CONTAINER)               | 1,504,842 | 600,765 | 7,769
Table Record Count (CUSTOMER)                | 40,981    | 9,864   | 1,265
Response Time (s): Add Cargo Record          | 2.282     | 2.165   | 1.837
Response Time (s): Search Cargo Record       | 21.102    | 6.125   | 4.062
Response Time (s): Assign Cargo in Container | 5.473     | 5.314   | 3.765
Response Time (s): Search Customer           | 15.023    | 11.904  | 2.125

Table 5 contains some results from a test with various amounts of test data. Of course, when you test a huge database system, you should try to generate realistic data in all tables. For example, if the "customers" table involves over a million records but 999,000 of those are identical, you will get unrealistic results when searching for a customer name.

It's helpful to keep a few things in mind when doing these types of tests. Concerns (or complaints) about response time and application stability typically come from end-users; inspection of the hardware system is where you'll find network throughput and resource utilization bottlenecks; and source code and database optimization are the developer's domain. But the tester has responsibility for all these areas, and a full-scale performance test is the only way to ensure that the whole system is healthy and will perform well over time.

Though the test activities explained here each have quite different meanings and scopes, there is no clear boundary between them. They must be taken together to properly evaluate an entire system.

REFERENCES
• "Performance vs. load vs. stress testing," Grig Gheorghiu, February 28, 2005. http://agiletesting.blogspot.com/2005/02/performance-vs-load-vs-stress-testing.html
• "Patterns & Practices: Performance Testing Guidance for Web Applications," J.D. Meier, Scott Barber, Carlos Farre, Prashant Bansode and Dennis Rea
• "Software performance testing," http://en.wikipedia.org/wiki/Software_performance_testing
• "Performance Testing for Ajax-based Applications," Aztecsoft Limited, May 14, 2008
• "General Functionality and Stability Test Procedure," James Bach
• "Performance Testing Considerations," http://www.visibleprogress.com/software_performance_testing.htm



By Elfriede Dustin

Part II: Myths, Realities and How to Know the Difference

While at a new project kickoff meeting, the project manager introduces you as the test lead. The manager says that the project will use an automated test tool and that, due to this planned automation, the test effort is not expected to be significant. The project manager concludes the meeting with a request that you submit within the next week a recommendation of the specific test tool required, together with a cost estimate for the procurement of the tool.

Huh? You're caught totally by surprise. What in the world just happened? You sit alone in the meeting room, wondering where the project manager got his expectations with regard to automated testing. You decide they must have been developed after reading a vendor brochure.

A scenario similar to this is one that I have actually experienced. If this ever happens to you, my suggestion would be to clear up and correct these misconceptions immediately. The idea of automated testing often brings high expectations, and a lot is demanded from the technology. But some people have the notion that an automated test tool does everything from test planning through execution, without manual intervention of any kind. And while it would be great if such a tool existed, that is pure fantasy; no such tool exists.

In fact, automated test tools in most cases do not initially reduce the test effort nor the test schedule. While it has been proven that automation is valuable and can produce a return on investment, there isn't always an immediate payback.

Such misconceptions of test automation are many and persistent in the software industry. Here are some of the more common ones, along with guidance on how to manage them.

Automatic Test Tools Do Everything
Currently, there is no commercially available tool that can create a comprehensive test plan while also supporting test design and execution.

Throughout a software test career, the test engineer can expect to witness test tool demonstrations and review an abundant amount of test tool literature. Often the test engineer will be asked to stand before a senior manager, or a number of managers, to give a test tool functionality overview. As always, the presenter must bear in mind the audience. In this case, the audience may represent individuals with just enough technical knowledge to make them enthusiastic about automated testing, while not enough to be aware of the complexity involved with an automated test effort. Specifically, the managers may have obtained information about automated test tools third hand, and may have reached the wrong interpretation of the actual capability of automated test tools.

What the audience at the management presentation may be waiting to hear is that the tool you are proposing automatically develops the test plan, designs and creates your test procedures, executes all the test procedures and analyzes the results for you. You meanwhile start out the presentation by informing the group that automated test tools should be viewed as enhancements to manual testing, and that automated test tools will not automatically develop the test plan, design and create the test procedures, and execute the test procedures.

Soon into the presentation, and after several management questions, it becomes apparent just how much of a divide exists between the reality of the test tool capabilities and the perceptions of the individuals in the audience. The term "automated test tool" seems to bring with it a great deal of wishful thinking that is not closely aligned with reality. An automated test tool will not replace the human factor necessary for testing a product. The proficiencies, analytical capabilities and subject matter expertise of test engineers and other quality assurance experts will still be needed to keep the testing machinery running. A test tool can be viewed as an additional part of the machinery that supports the release of a good product.

One Tool Fits All
Currently, not one single test tool exists that can be used to support all operating system environments.

Generally, a single test tool will not fulfill all the testing requirements for an organization, nor is it likely to run on all its platforms. Consider the experience of one test engineer encountering such a situation. The test engineer, Dave, was asked by a manager to find a test tool that could be used to automate all their real-time embedded system tests. The department was using various operating systems, including VxWorks and Integrity, plus mainframe, Linux and Windows XP. Programming languages included Java and C++, and various n-tier server and Web technologies also were involved.

Expectations had to be managed. It had to be made clear that there did not exist any single tool that was compatible with all of those variables. Granted, more vendors have provided cross-platform compatibility since then, but the capabilities of such tools might not meet your needs, and often more than one tool and feature is required to test the various platforms and technologies.

Test Efforts are Immediately Reduced
Introduction of automated test tools will not immediately reduce the test effort, and will usually increase it at first.

A primary impetus for introducing an automated test tool onto a project is to reduce the test effort. Experience has shown that a learning curve is associated with the attempts to apply automated testing to a new project and
to the effective use of automated testing. Test effort savings do not always come immediately. Still, test or project managers have read the test tool literature and are anxious to realize the potential of the automated tools.

There is a good chance that the test effort will actually increase initially when an automated test tool is first brought into an organization. When introducing an automated test tool to a new project, a whole new level of complexity is being added to the test program. And while there may be a learning curve for the test engineers to become smart and efficient in the use of the tool, there are still manual tests to be performed on the project as well. The reasons why an entire test effort generally cannot be automated are outlined later in this article.

Initial introduction of automated testing also requires careful analysis of the application-under-test in order to determine which sections of the application can be automated. Test automation also requires careful attention to automated test procedure design and development. The automated test effort can be viewed as a mini-development lifecycle, complete with the planning and coordination issues that come along with a development effort. (To understand what a team has to learn before an automated test tool can be effective, don't miss part III of this article series.)

Schedules are Immediately Compressed
An automated test tool will not immediately minimize the testing schedule, and will usually lengthen it at first.

Another misconception is that automated test tools will immediately minimize the test schedule. Since the testing effort actually increases initially, the testing schedule will not experience the anticipated decrease at first, and an allowance for schedule increase should be built in whenever automation is introduced. This is because when rolling out an automated test tool, an entirely new testing process has to be developed and implemented. The entire test team, and possibly the development team, also needs to learn new skills and become familiar with (and follow) this new automated testing process. Once an automated testing process has been established and effectively implemented, the project can expect gains in productivity and turnaround time that can have a positive effect on schedule and cost.

Automation Tools are Easy to Use
An automated tool requires new skills, and additional training is usually required. Plan for the costs and time of training and a learning curve.

Many tool makers market their tools by exaggerating the ease of use of the tool. They deny any learning curve that is associated with the use of the new tool. The vendors are quick to point out that the tool can simply capture (record) test engineer keystrokes and (like magic) a script is created in the background, which can then simply be reused for playback and edited as necessary.

Efficient automation is not that simple. The test scripts generated during recording need to be modified manually to make them reusable and to allow playback under various application states and with variable data. This requires language and scripting knowledge. Scripts also need to be modified so as to be maintainable. For a test engineer to be able to modify the scripts, they need to be trained on the tool and its built-in scripting language. New training requirements and a learning curve can be expected with the use of any new tool.

All Tests Can be Automated
Not all tests can be automated.

Automated testing is an enhancement to manual testing. It is unreasonable to expect that 100 percent of the tests on a project can be automated. For example, when an automated GUI test tool is first introduced, it is beneficial to conduct some compatibility tests on the AUT to determine whether the tool will be able to recognize all objects and third-party controls.

The performance of compatibility tests is especially important for GUI test tools, because such tools have difficulty recognizing some custom control features within the application. These include the little calendars or spin controls that are incorporated into many applications, especially Windows applications. These controls or widgets are often written by third parties, and most test tool manufacturers can't keep up with the hundreds of clever controls churned out by various companies.


It might be that the test tool is compatible with all releases of C++ and Java, for example, but if an incompatible third-party custom control is introduced into the application, the result will be that the tool might not recognize the object on the screen. It might be the case that most of the application uses a third-party grid that the test tool does not recognize. The test engineer will have to decide whether to automate this part of the application by defining a custom object within the automation tool, to find a workaround solution, or to test this control manually only.

There are other tests that are physically impossible to automate, such as verifying a printout. The test engineer can automatically send a document to the printer, but then has to verify the results by physically walking over to the printer to make sure the document really printed. The printer could have been off line or out of paper.

Neither can a test tool automate 100 percent of the test requirements of any given test effort. Given the endless number of permutations and combinations of system and user actions possible with n-tier (client/middle-layer/server) architecture and GUI-based applications, a test engineer or team does not have enough time to test every possibility, manually or otherwise.

It also should be noted that no test team can possibly have enough time or resources to support 100 percent test automation of an entire application. It is not possible to test all inputs or all combinations and permutations of all inputs. It is impossible to exhaustively test all paths of even a moderate system. As a result, it is not feasible to approach the test effort for the entire AUT with the goal of testing 100 percent of the application.

Another limiting factor is cost. Some tests are more expensive to automate than others, and it might make sense simply to test those manually. A test that is only executed once per development cycle is often not worth automating. For example, an end-of-year report of a health claim system might only be run once, because of all the setup activity involved to generate this report. Since this report is executed rarely, it's more cost effective not to automate it. When deciding which test procedures to automate, a test engineer needs to evaluate the cost-effectiveness of developing an automated script.

When performing this cost analysis, the test engineer will need to weed out redundant tests. The goal for test procedure coverage using automated testing is for each single test to exercise multiple items, while avoiding duplication of tests. For each test, an evaluation should be performed to ascertain the value of automating the test.

As a starting point, Table 1 provides a list of questions to ask when deciding whether or not to automate a test. If the answers are "Yes" to most of the questions, chances are the test lends itself to automation.

TABLE 1: TO AUTOMATE OR NOT TO AUTOMATE?

Number of test executions:
• Is the test executed more than once?
• Is the test run on a regular basis, i.e., often reused, such as part of regression or build testing?

Criticality of test:
• Does the test cover the most critical feature paths?
• Does the test cover the most complex, often most error-prone, area?

Cost of test:
• Is the test impossible or prohibitively expensive to perform manually, as with concurrency, soak/endurance, performance and memory leak detection testing?
• Is the test very time consuming, such as expected-results analysis of hundreds or thousands of outputs?

Type of test:
• Does the test require many data combinations using the same test steps (i.e., multiple data inputs for the same feature)?
• Does the test need to be verified on multiple software and hardware configurations?
• Are the expected results constant, i.e., they do not change or vary from run to run?

Application/System under test:
• Is the test run on a stable application, i.e., the features of the application are not in constant flux?
• Does the application use compatible technology and an open architecture?

Automation Provides 100 Percent Test Coverage
Even with automation, not everything can be tested. It is impossible to perform a 100 percent test of all the possible simple inputs to a system. It is also impossible to exhaustively test every combination and path of a system.

One of the major reasons why testing has the potential to be an infinite task is that, in order to know that there are no problems with a function, it must be tested with all possible data, valid and invalid. Automated testing may increase the breadth and depth of test coverage, yet even with automated testing there still isn't enough time or resources to perform a 100 percent exhaustive test.

The sheer volume of permutations and combinations is simply too staggering. Take for example the test of a function that handles the verification of a user password. Each user on a computer system has a password, which is generally six to eight characters long, where each character is an uppercase letter or a digit, and each password must contain at least one digit. How many possible password character combinations do you think there are in this example? According to Kenneth H. Rosen in "Discrete Mathematics and Its Applications," there are 2,684,483,063,360 possible variations of passwords.

Even if it were possible to create a test procedure each minute, or sixty test procedures per hour, equaling 480 test procedures per day, it would still take 155 years to prepare and execute a complete test. Therefore, not all possible inputs could be exercised during a test. With this rapid expansion it would be nearly impossible to exercise all inputs, and in fact it has been proven to be impractical.

Let's take for example the test of the telephone system in North America. The format of telephone numbers in North America is specified by a numbering plan. A telephone number consists of ten digits, which are split into a three-digit area code, a three-digit central office code and a four-digit station code. Because of signaling considerations, there are certain restrictions on some of these digits.

To specify the allowable format, let X denote a digit that can take any of the values 0 through 9, and let N denote a digit that can take any of the values 2 through 9. The formats of the area, office and station codes are NXX, NXX and XXXX, respectively. (Under an older plan, office codes had the format NNX, giving only 8 x 8 x 10 = 640 of them.) How many different North American telephone numbers are possible under this plan? There are 8 x 10 x 10 = 800 area codes with format NXX and likewise 800 office codes, along with 10 x 10 x 10 x 10 = 10,000 station codes with format XXXX. Consequently, it follows that there are 800 x 800 x 10,000 = 6,400,000,000 different numbers available. And this would only test the valid numbers and inputs to the system; it hasn't even touched on the invalid numbers that could be applied. This is another example that shows how impractical it is to test all combinations of input data for a system [1].
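Both counting arguments above are easy to check. A few lines reproduce Rosen's password count and the article's telephone-number total.

# Passwords: 6 to 8 characters drawn from 36 symbols (26 uppercase
# letters plus 10 digits), with at least one digit, so subtract the
# all-letter strings of each length.
passwords = sum(36**n - 26**n for n in range(6, 9))
print(passwords)  # 2684483063360, matching Rosen's figure

# Telephone numbers: 800 NXX area codes x 800 NXX office codes
# x 10,000 XXXX station codes (N is 2-9, X is 0-9).
print((8 * 10 * 10) * (8 * 10 * 10) * (10 * 10 * 10 * 10))  # 6400000000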


In view of this, random code reviews of critical modules are often done. It is also necessary to rely on the testing process to discover defects early. Such test activities, which include requirements, design and code walkthroughs, support the process of defect prevention. Both defect prevention and detection technologies need to be part of an effective automated testing process. Given the potential magnitude of any test, the test team needs to rely on test procedure design techniques, such as boundary testing, equivalence testing and combinatorics, where only representative data samples are used.

Test Automation is the Same As Capture and Playback
Hitting a record button doesn't produce an effective automated script.

Many companies and automated testers still equate automated software testing with using capture/playback tools. Capture/playback in itself is inefficient automated software testing at best, usually creating non-reusable scripts.

Capture/playback tools record the test engineer's keystrokes in the tool's provided scripting language and allow for script playback for baseline verification. Automated test tools mimic the actions of the test engineer. During testing, the engineer uses the keyboard and mouse to perform some type of test or action. The testing tool captures all keystrokes or images and subsequent results, which are baselined in an automated test script. During test playback, scripts compare the latest outputs with the previous baseline. Testing tools often provide built-in, reusable test functions, which can be useful, and most test tools provide for non-intrusive testing; i.e., they interact with the AUT as if the test tool were not involved, and won't modify or profile code used in the AUT.

Capture/playback-generated scripts provide challenges. For example, capture/playback tools record hard-coded values; i.e., if an input called "First Name" is recorded, that "First Name" will be hard-coded, rendering the test script usable only for that "First Name." If variables are to be added, scripts must be modified. To read in more than one "First Name," the ability to read data from a file or database would have to be added, as would the ability to include conditional statements or looping constructs.

Additionally, capture/playback tools don't implement software development best practices right out of the box; scripts need to be modified to be maintainable and modular. Vendor-provided capture/playback tools also don't necessarily provide all the testing features required; code enhancements are often needed to meet testing needs. Finally, vendor-provided tools are not necessarily compatible with the system engineering environment, and software testing frameworks need to be developed in-house to enhance some of the existing tools' capabilities.

Automated Software Testing is A Manual Tester Activity
Use of capture/playback tools and script recording does not an automated tester make.

While automated software testing does not immediately replace the manual testing and analytical skills required for effective and efficient test case development, the skills required for automated testing are different from those required for manual software testing. Often, however, companies buy into the vendor and marketing hype that automated software testing is simply hitting a record button to automate a script. That is simply not the case.

It is important to distinguish between the skills required for manual testing versus those for automated software testing. In essence, an automated software tester needs software development skills. A manual tester without any training or background in software development will have a difficult time implementing successful automated testing programs.

Losing Sight of the Testing Goal: Finding Defects
Often during automated software testing activities, the focus is on creating the best automation framework and the best automation software, and we lose sight of the testing goal: to find defects. It's important that testing techniques such as boundary value analysis, risk-based testing, equivalence partitioning and deriving the best suitable test cases are applied.

You might have employed the latest and greatest automated software development techniques and used the best developers to implement your automation framework. It performs fast with tens of thousands of test case results, and it's getting rave reviews. But no matter how sophisticated your automated testing framework is, if defects slip into production that the automated testing scripts were supposed to catch, your automated testing effort will be considered a failure.

The most successful automated testing efforts require a mix of tools, frameworks and skills. Capture/playback solutions get the biggest hype, but are only one piece of the automated testing solution. There is no magic bullet to automated testing, and no one tool exists on the market that can solve all your automated testing woes. Automated frameworks developed in-house, various scripting languages, skills and knowledge of testing techniques, analytical skills and development experience are all required for automated testing to be successful.

Next Time
In the next and final part of this series I'll discuss additional pitfalls to avoid, such as picking the wrong tool; how to better integrate tools, by asking for standards across tool vendors; and how an automated software testing process can help implement a successful automated testing effort.

Elfriede Dustin is currently employed by Innovative Defense Technologies (IDT), a software testing consulting company specializing in automated testing.

REFERENCES
1. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications.
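Returning to the hard-coded "First Name" example above: the usual fix is to make the recorded script data-driven. The sketch below is only an illustration; fill_first_name stands in for whatever a real capture/playback tool generated, and test_data.csv (with a first_name column) is a hypothetical data file.

import csv

def fill_first_name(first_name: str) -> None:
    # Stand-in for the recorded action that types into the AUT's
    # "First Name" field.
    print(f"typing into First Name field: {first_name}")

with open("test_data.csv", newline="") as f:
    for row in csv.DictReader(f):            # one iteration per data row
        fill_first_name(row["first_name"])   # parameter instead of a literal

Wrapping conditional logic and loops around such parameters is exactly the kind of modification the article says recorded scripts need before they become reusable.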


Are Latency Issues Holding Back Your App?
A Few Simple Tests Will Keep You In Front
By Nels Hoenig

Nels Hoenig is a lead software quality engineer for IT consulting firm Princeton Information.

Have you considered the impact of latency on your Web-based internal applications?

Network latency is composed of two factors: the latency that is native to the connection (transmission speed and distance), and latency due to other traffic on the network. Native latency makes the assumption that the entire network and Internet pipe is yours for the asking. The cold reality is that you share this pipe with all the other users in your office, and also with all the other traffic on the Internet. We know that the Internet is used for many things, and that latency can vary tremendously during the day due to legitimate and not-so-legitimate activities.

No matter how fast your application runs and how fast your servers are, transaction speed is limited by the speed of the network used to move data between the client and the server. This article covers my recent experience implementing a global Web-based solution and some


latency issues found in a remote location. It also includes DOS code samples that can be used to create simple data-collection tools to measure latency at a given location, to determine if a problem exists.

"Your application sucks" was not the best subject line in my inbox that morning, but based on the other people copied, it was apparently the first one opened. The message arrived just after we had extended manual testing to users in India, after testing using remote logins to machines in India. The remote testing was OK, but not as fast as in the U.S. In manual remote testing from this location, processing time was longer than in U.S. locations, but the times had been acceptable.

So how were we to interpret this somewhat inflammatory message?

Step 1: Define "Sucks"
As we had a sample database and the transaction was Web-based, we had the remote users perform manual tests identical to the tests we had done using desktop remote. Based on the answers that came back the next morning, "suck" would seem a valid term. The users were reporting transaction times as much as 500 percent worse than our testing results using remote desktop to run the same test on machines located in the same location. Problem confirmed.

Step 2: Investigation
We checked the server to see if it was having performance issues or if backups were running when the issue was reported. No issues; keep looking. Next we checked the users' PCs for adequate resources or unexpected constraints. No issues; keep looking.

We had machines located in India that we could remote into. So we set the alarm for 3:00 am and tested it ourselves. This testing confirmed the issue, but also showed that the problem existed only in India and only during the India working day (Figure 1).

[FIG. 1: WORKING DAY, THE INDIA WAY. Response times for the Complete Process, Clear Filter and Login transactions plotted across the India business day.]

To further understand the issue, we used an automated testing tool to run a sample transaction set from India for 11 hours, to get a perspective of performance and how it was fluctuating. Now we had real proof that this was a problem to solve. We had eliminated the application, the computers, the user and the server as possible bottlenecks. Attention now focused on the network.

Step 3: Testing
Now that we understood the problem was rooted in the network and that it only happened during working hours, it was time to take a closer look at the network. We had previously done some sample trace route testing to measure the latency (the number of hops and the time needed to get a packet from India to our server). A latency of 300 - 350 ms was considered reasonable based on the distance involved. This also approximated (but sometimes exceeded) the ISP vendor's Service Level Agreement, which promised latency of less than 330 ms.

Performing a trace route is straightforward. Simply open a DOS window, type in the command "tracert" followed by the target URL and hit enter. The standard command will return up to 30 hops on numbered lines, with the URL, IP address and the time (in milliseconds) needed to complete each hop. Your results should look similar to this:

C:\>tracert stpmag.com

Tracing route to stpmag.com [198.170.245.48] over a maximum of 30 hops:

  1     1 ms    <1 ms    <1 ms  192.168.1.1
  2     8 ms     9 ms    13 ms  cpe-75-185-104-1.insight.res.rr.com [75.185.113.1]
  3     7 ms     9 ms     9 ms  network-065-024-192-197.insight.rr.com [65.24.192.197]
  4    10 ms     9 ms     9 ms  pos13-2.clmboh1-rtr2.columbus.rr.com [65.24.192.38]
  5    10 ms     9 ms     9 ms  pos13-1.clmboh1-rtr1.columbus.rr.com [24.95.86.165]
  6    20 ms    18 ms    19 ms  tge5-3-0.ncntoh1-rtr0.mwrtn.rr.com [65.25.137.249]
  7   118 ms   207 ms   203 ms  te-3-4.car1.Cleveland1.Level3.net [64.156.66.97]
  8    21 ms    19 ms    19 ms  ae-2-4.bar1.Cleveland1.Level3.net [4.69.132.194]
  9    25 ms    35 ms    35 ms  ae-6-6.ebr1.Washington1.Level3.net [4.69.136.190]
 10    40 ms    35 ms    36 ms  ae-61-61.csw1.Washington1.Level3.net [4.69.134.130]
 11    27 ms    26 ms    28 ms  ae-1-69.edge2.Washington4.Level3.net [4.68.17.19]
 12    40 ms    27 ms    28 ms  4.68.63.186
 13    30 ms    28 ms    29 ms  ae-1.r21.asbnva01.us.bb.gin.ntt.net [129.250.2.180]
 14    33 ms    28 ms    28 ms  xe-1-2.r03.stngva01.us.bb.gin.ntt.net [129.250.2.19]
 15    30 ms    28 ms    28 ms  ge-0-0-0.r01.stngva01.us.wh.verio.net [129.250.27.218]
 16    34 ms    28 ms    28 ms  ge-26.a0719.stngva01.us.wh.verio.net [204.2.125.122]
 17    28 ms    30 ms    26 ms  www.stpmag.com [198.170.245.48]

Trace complete.


IT’S THE NETWORK, STUPID

Since the performance results FIG. 2: SLA, BLOWN AWAY


seemed to vary so much during the day,
we decided to run an extended latency
test over a 24-hour period. Some very 600

simple DOS-command scripts were


550
used to collect the data and run a trac-
ert every 10 minutes. When the results 500

are viewed in a raw form, it is hard to


450 8:00 pm
see the pattern, but when the results are SLA
changed to show an hourly average view 400 330 ms
of the latency, the issue comes into
focus (see Figure 2). 350

300
Step 4: Fix the Issue
Now that we understood that the net- 250

work was the source of the perform-


200
ance issue, we worked with the net-
5:50 pm
7:50 pm
9:50 pm
11:50 pm
1:50 am
3:50 am
5:50 am
7:50 am
9:50 am
11:50 am
1:50 pm
3:50 pm
5:50 pm
7:50 pm
9:50 pm
11:50 pm
1:50 am
3:50 am
5:50 am
7:50 am
9:50 am
11:50 am
1:50 pm
3:50 pm
5:50 pm
8:00 pm
10:00 pm
12:00 am
4:00 am
6:00 am
10:00 am
12:00 pm
2:00 pm
4:10 pm
work team to make changes.
Efforts to improve this situation approached several fronts:
• We informed the ISP serving this location that we were measuring latency against the terms of the SLA. The ISP made some improvements to the network connection.
• We increased the bandwidth of our Internet connection, allowing more traffic to flow without being buffered.
• We tightened the enforcement of proxy controls to limit network usage for non-business-related activities.
• We told users that they were contributing to the performance problem, and asked that they be part of the solution.
Once these changes were applied, the same test was repeated. The results are shown in Figure 3. The changes greatly reduced the variability in the latency seen by the users, which gave them an improved experience for all networked applications.

Step 5: Learn Your Lesson
What we learned from this project was that application performance depends not only on the program, the client and the server; the network being used can also be a source of the problem. You should never assume anything is fine without conducting a test to confirm it. We were happy to find that this problem was not with our application, and that resolving it improved performance for all the remote users at our India location.

[FIG. 3: AFTER FIX, PERFORMANCE CLICKS; Bangalore latency on a 250-to-430-ms scale, plotted against Bangalore time of day for the 25 April and 15 May test runs]

The Scripts
This process used three DOS scripts to collect the data into a text file. We did have to do some manual analysis of the results, but it is not that hard: all you are looking for are hops where the latency is above your maximum allowed. If you are going to run these tests, please check with your network/security administration beforehand. This kind of test can look like an attempt to hack the server, and you don't want to have one of "those meetings."
We used one script to record the date of the test, one to record the time of the test, and finally the tracert script to do the actual test. By testing this way we didn't need any fancy applications, and the scripts can be used on any machine running the XP operating system. We used the Windows scheduler to call the scripts.

Date Script (date.bat)
date /t >>test.txt

Time Script (time.bat)
time /t >>test.txt

Tracert Script (tracert.bat)
The tracert command has several options; we set them as follows:
tracert -h 10 -w 500 www.stpmag.com >>test.txt

The general form is:
tracert -h maximum_hops -w timeout target

What the parameters do:
-h maximum_hops: Specifies the maximum number of hops to search for the target. The default is 30; if your site resolves in fewer, save some paper and set this to a more realistic value.
-w timeout: Waits the number of milliseconds specified by timeout for each reply; you can adjust this value to allow more time if needed.
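Once the log exists, the manual pass described above goes faster with a filter that flags the slow hops. A minimal sketch (the file name and the 400-ms ceiling are ours for illustration; adjust the pattern to your own maximum allowed, and note that it assumes the "NNN ms" column format of English-language tracert output):

Slow-Hop Filter (find_slow_hops.bat, a sketch)
rem Print log lines containing a three-digit sample of 400 ms or more
findstr /R /C:"[4-9][0-9][0-9] ms" test.txt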

Using Windows Scheduler
The Windows scheduler is located inside the Accessories folder under System Tools and is labeled Task Scheduler. You will select DOS command as the transaction type; be sure the user you run the task as has execute permissions on the machine.
You will want to use the Advanced tab when scheduling a task to repeat. If it is going to repeat every 10 minutes for a 24-hour period, this tab should look like the example in Figure 5.
Notes: The "/t" used in the date and time scripts causes the command to report the current value; leaving it out will cause the command to prompt for an update instead. The ">>" causes the results to append to the end of the existing information in the results file.
We located all the scripts and the data file in a common folder on the desktop, but they could be anywhere on the testing PC.
The scripts need to be saved as batch (.bat) files so the scheduler can call them. Make sure your .bat files are executable and that the results file is not read-only.
You may find that once your trace reaches the data center (firewall), the trace will no longer resolve. By this point you have completed the connection to the data center, and response time inside the data center should not be the issue. Your network administrators can assist should you need to measure response time inside the firewall.
Leave a minute or so between when your scripts execute. For example, have the date script run at 00:03 once a day, the time stamp run twice per hour at 05 and 35 after the hour, and the trace command run at 00, 10, 20, … every ten minutes. This ensures the results file is always available and that no conflicts will occur.
If you have a load balancer or other device that routes traffic to different servers, you can use the IP address of the target as an alternative to the URL. This can be handy if you are measuring the impact of the load balancer compared to going direct to the application server.
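If you prefer the command line to the scheduler GUI, the schtasks tool (included with XP Professional and later) can create the same staggered jobs. A sketch follows, with the task names and the C:\latency path ours for illustration; note that the exact /st time format differs slightly among Windows versions:

Scheduler Commands (a sketch)
rem One daily date stamp, two hourly time stamps, one trace every 10 minutes
schtasks /create /tn "LatencyDate" /tr "C:\latency\date.bat" /sc daily /st 00:03
schtasks /create /tn "LatencyTime1" /tr "C:\latency\time.bat" /sc hourly /st 00:05
schtasks /create /tn "LatencyTime2" /tr "C:\latency\time.bat" /sc hourly /st 00:35
schtasks /create /tn "LatencyTrace" /tr "C:\latency\tracert.bat" /sc minute /mo 10

Two hourly tasks stand in for the twice-per-hour time stamp, since a single task cannot fire at both offsets.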



Best Practices

Testing SOA’s About Business, Not Just Technology

If one thing is clear from talking to developers and testers of SOA applications, it's that SOA's very structure dictates that the approach used in testing must differ from methods taken with more traditional projects and constructs.
Doron Reuveni, CEO and co-founder of uTest, a cloud-based testing services Website, says that since SOA systems are based on multiple Web services components, a multifaceted approach to testing is crucial.
Unit testing of code functionality without any interfaces should come first, to verify that each component functions as it should on its own, he says. System testing to verify the interfaces between the various components follows. Reuveni notes that because system testing touches components within the IT ecosystem and infrastructure, developers must verify correct application and infrastructure behaviors. "It is important to always test applications in their entirety," he says. Load testing is last; it verifies the system's load capabilities across the end-to-end functionality.
This was true for VetSource, an online supplier of pharmaceuticals to veterinarians and pet owners. The company is in the midst of implementing several SOA-based applications that handle payment processing and order fulfillment on behalf of hundreds of veterinary clinics. For products not inventoried in its own facilities, VetSource passes orders directly to manufacturers or other distributors. Of the 13 people on its development team, two are dedicated to QA.
"We are learning how to merge automated testing at build time as a way to smooth out our development cycle," says technology director Craig Sutter. "We're a fast-moving group with a fairly large plate of development tasks, so we address this up front to eliminate iterations." In the process, the team relies on QA for requirements checking during each two- to three-week development cycle. VetSource is using a combination of tools and methods, including Ajax, Mule, RESTClient for testing RESTful Web services, and IBM's Rational Robot.
SOA applications represent a shift in culture that requires thinking about the overall business process, not just the processing logic of individual services. That advice comes from David Kapfhammer, practice director for the Quality Assurance and Testing Solutions Organization at IT services provider Keane Inc. "SOA was supposed to address the need for getting to market faster. Looking from that perspective, you build your architecture around services and you become more business driven and less technology driven."
Among the miscalculations that Reuveni sees in testing is taking a view that's too narrow. "It's important to remember that testing a SOA system only through the GUI layer is not sufficient." He also points to SOA's modularity, among its chief advantages, as having a downside. "In most cases, SOA systems are built based on off-the-shelf components combined with self-developed components. Most development teams find it hard to verify the interface between the two." In many cases he sees developers finding bugs in code they didn't write but which still influence application behavior dramatically.
At outsource developer TeraCode, the methodology is to test early and test often. "One of the advantages of SOA is that the components are discrete and can be tested independently of the completed product, so you should be testing each component as early as possible," said CEO Michael Sack. Yet testing later in the development cycle remains equally essential. "Given our agile methodology, we often get hit with changes during the final weeks of a project, and that necessitates a lot of rapid releases, which in turn requires a lot of QA resources and QA cycles to fully regress each build."
TeraCode relies on a dual-strategy approach: automated unit tests and extensive manual execution of test scripts.
For testing, Sack's observations point to coverage as a key challenge, questioning whether it's possible to test every aspect adequately, fully test everything, and get it done in the time allotted. His solution is turning to uTest. It has helped his developers run more test scripts, get better bug reporting and, though it increases bug count, ultimately leads to a better, more stable release.
So, what's to be avoided? Everyone agrees on several perils:
• Choosing the wrong "off-the-shelf" components.
• Testing primarily the presentation layer.
• Not performing sufficient regression testing between builds.
• Not doing load testing early.
From a technical perspective, SOA apps demand a shift in testing management. "It's a system of systems and independent services, and the developer group often doesn't have control over all of those services," says Kapfhammer. "If a service changes because the business changes, and 100 different apps use this service, you need a strategy for testing every one."

Joel Shore is a 20-year industry veteran and has authored numerous books on personal computing. He owns and operates Reference Guide, a technical product reviewing and documentation consultancy in Southboro, Mass.



Future Test

Getting a Grip On SOA Speed

The phrase "test early, test often" is a worn-out mantra by now. But in the performance lab, testing is still considered a near-deployment step. That timing worked fine for homegrown monolithic client-server apps, but it doesn't cut it for today's highly distributed, service-oriented business applications.
No matter what technology you're testing, or testing with, high performance will never happen if you're not signing up to performance-test individual technologies and components early in the development lifecycle.
Take for instance a lab testing a solution with an expected service level of a 2.1-second response time. The initial UI is finally ready for load testing, which means the app is already about 95 percent completed. If performance comes in at less than 2.1 seconds, fine. But if it doesn't, the entire team has major concerns, as most of the connections within the app are already integrated. Pinpointing the culprit service or component is nearly impossible.
To move performance testing into the lifecycle, each contributing development team should be assigned a specific metric and must perform its own performance testing. A quality engineering resource should be tasked with making that happen.
If my solution has an expected response time of 2.1 seconds, I need to decompose that SLA into performance expectations at the component level. In this example, the app server may take 0.1 seconds, while we give 0.3 seconds to the Verify Order process, 0.5 seconds to Order Lookup, and 1.2 seconds to Order Response. The key is to notify the development teams what their expected response times are early in the lifecycle, so performance risks can be mitigated at the component level. If we wait for the entire application to exceed the SLA, it will be much harder to pinpoint the culprit in the integrated application.
Enterprise applications are frequently too complex and varied to be considered "fully tested" unless they're built using a test-enabled model. For years, our hardware engineering counterparts have understood the need for a test harness. These test beds enabled electronics testers to exercise the instrumented device in a variety of ways, and the device itself reports its failures such that engineers can quickly understand the failures and correct them.
For every order hitting the application, thousands of lines of code from many different authors over years of deployments are executed. It isn't feasible for my solution-level performance-testing team to prove that every possible error condition or performance issue that might occur over time has been avoided. Therefore, a test-enabled component is needed for the test lab to provide value.
Virtualization has helped mitigate some of the hardships of trying to test components early, be it virtualizing server configurations for faster setups and teardowns or simulating the behavior of other components you depend on.
In the waterfall approach, you might test the three components in order, aiming for that 2.1-second benchmark. However, the complete solution isn't tested until the integration is completed, and at that point, the performance testing results might balloon to a 4-second response time. The first temptation after getting such terrible results might be to throw money at the hardware and bandwidth.
Aside from the real monetary constraint and time delay of provisioning each project with its own high-capacity lab, buying additional hardware and bandwidth is like adding oil to a leaky engine; it buys time without fixing the problem. Running a system on faster hardware may buy you a 10 percent improvement in response time, but the workflow will reach the same choke points, just a little faster. Changing hardware rarely fixes threading issues, critical process failures or bad coding.
We can better manage end-user performance adherence if we don't wait for a completed application to test. Furthermore, there are few, if any, enterprise systems that are still developed and tested by a single team. Today's distributed applications offer increased functionality and flexibility, but that comes with the cost of a higher degree of variability, and less predictable performance as a result.
Here's to getting the whole team signed up for better performance.

John Michelsen is CTO of iTKO, a maker of QA tools for SOA applications.



