

Agile Testing Strategies


As Scott shows here, the quality of your system is only as good as the quality of your
testing efforts.
By Scott Ambler
December 12, 2006
URL:http://www.ddj.com/tools/196603549

Scott is a DDJ Senior Contributing Editor and author of numerous IT books. He can be contacted at
www.ambysoft.com/scottAmbler.html.

Over the past few months, I've had more and more people ask me about how we go about testing on
agile projects. Agile developers are definitely "test infected," and this month, I explore several
strategies for testing on agile software development projects. A word of warning: Although we
clearly don't follow our father's serial testing methodology of yesteryear, I suspect that we can still
learn a few tricks from dear old dad.

Let's start by setting the philosophical groundwork:

• First, you want to test as early as you possibly can because the potential impact of a defect rises exponentially over time (this isn't always true, but it's something to be concerned about). In fact, many agile developers prefer a test-first approach.
• Second, you want to test as often as possible, and more importantly, as effectively as possible, to increase the chance that you'll find defects. Although this increases your costs in the short term, studies have shown that greater investment in testing reduces the total cost of ownership of a system due to improved quality.
• Third, you want to do just enough testing for your situation: Commercial banking software requires a greater investment in testing than membership administration software for your local Girl Scouts group.
• Fourth, pair testing, just like pair programming and modeling with others, is an exceptionally good idea. My general philosophy is that software development is a lot like swimming—it's very dangerous to do it alone.

Testing Throughout the Lifecycle

Figure 1 presents a high-level view of the agile lifecycle for the purpose of testing (see "Initiating an
Agile Project" at www.ddj.com/dept/architect/188700850 for details). Agile projects go through an
often short Initiation phase (Iteration 0) where we set the foundation for the project; a Construction
phase where we develop the system in an evolutionary (iterative and incremental) manner; an End Game phase where we transition our system into production; and a Production phase where we
operate the system and support users. Don't fear the serial boogeyman: The Initiation phase is not a
requirements phase, nor is the End Game a testing phase.

Figure 1: Test activities during the agile lifecycle.

Testing activities vary throughout the lifecycle. During Iteration 0, you perform initial setup tasks.
This includes identifying the people who will be on the external "investigative" testing team,
identifying and potentially installing your testing tools, and starting to schedule scarce resources
such as a usability-testing lab if required. If your project has a deadline, you likely want to identify
the date by which your project must enter the End Game. The good news is that you'll discover that
increased testing during construction iterations enables you to do less testing during the End Game.

A significant amount of testing occurs during construction iterations—remember, agilists test often,
test early, and usually test first. This is confirmatory testing against the stakeholder's current intent
and is typically milestone-based at the unit level. This is a great start, but it's not the entire testing
picture (which is why we also need investigative testing that is risk-based at more of an integration
level). Regardless of the style, your true goal should be to test, not to plan to test, and certainly not to
write comprehensive documentation about how you intend to hopefully test at some point. Agilists
still do planning, and we still write documentation, but our focus is on high-value activities such as
actual testing.

During the End Game, you may be required to perform final testing efforts for the release, including
full system and acceptance testing. This is true if you are required by legislation to do so (common in life-critical
situations such as medical software development) or if your organization has defined service-level
agreements with customers who require it. Luckily, if you've tested effectively during the
construction iterations, your final testing efforts will prove to be straightforward and quick. If you're
counting on doing any form of "serious testing" during the End Game, then you're likely in trouble
because your team won't have sufficient time to act on any defects that you do find.

Testing During a Construction Iteration

The majority of testing occurs during construction iterations on agile projects. Your testing effort,
just like your system, evolves throughout construction. Figure 2 depicts two construction iterations, indicating that there is confirmatory testing performed by the team, and in parallel, investigative
testing efforts ideally performed by an independent test team (I've adopted the terms "confirmatory"
and "investigative" testing from Michael Bolton, a thought leader within the testing community).
Although it isn't always possible to have an independent test team, particularly for small projects, it
is highly desirable. Confirmatory testing focuses on verifying that the system fulfills the intent of the
stakeholders as described to the team to date, whereas investigative testing strives to discover
problems that the development team didn't consider.

Figure 2: Incremental testing throughout the agile development lifecycle.

There are two aspects to confirmatory testing: agile acceptance testing and developer testing, both of
which are automated to enable continuous regression testing throughout the lifecycle. Confirmatory
testing is the agile equivalent of testing to the specification, and in fact, we consider acceptance tests
to be the primary part of the requirements specification and our developer tests to be the primary part
of the design specification. Both of these concepts are applications of the agile practice of single
sourcing information whenever possible.
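
As a concrete illustration of this single-sourcing idea, here is a minimal sketch of an agile acceptance test written so that it doubles as a requirements statement. The OverdraftPolicy class, its rule, and the numbers are hypothetical examples of my own, and the sketch assumes JUnit 4 on the classpath.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Acceptance tests phrased in the stakeholders' vocabulary: the test names
    // and assertions record the agreed business rule, so the suite can stand in
    // for a separate requirements document.
    public class OverdraftAcceptanceTest {
        @Test
        public void withdrawalsWithinTheApprovedLimitAreHonored() {
            OverdraftPolicy policy = new OverdraftPolicy(100); // limit agreed with stakeholders
            assertTrue(policy.allowsWithdrawal(20, 100));      // balance 20, withdraw 100
        }

        @Test
        public void withdrawalsBeyondTheApprovedLimitAreRejected() {
            OverdraftPolicy policy = new OverdraftPolicy(100);
            assertFalse(policy.allowsWithdrawal(20, 200));
        }
    }

    // Hypothetical production class that the acceptance tests exercise.
    class OverdraftPolicy {
        private final int limit;
        public OverdraftPolicy(int limit) { this.limit = limit; }
        public boolean allowsWithdrawal(int balance, int amount) {
            return amount <= balance + limit;
        }
    }

Because the rule lives in an executable test, a later change to the stakeholders' intent shows up as a failing test rather than as an out-of-date document.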

Agile acceptance testing is a mix of traditional functional testing and traditional acceptance testing
because the development team and their stakeholders are doing it collaboratively. Developer testing
is a mix of traditional unit testing and traditional class/component/service integration testing.
Developer testing strives to verify both the application code and the database schema (for more
information, see my article "Ensuring Database Quality"; www.ddj.com/architect). Your goal is to
look for coding errors, perform at least coverage testing if not full path testing, and ensure that the system
meets the current intent of its stakeholders. Developer testing is often done in a test-first manner,
where a single test is written and then sufficient production code is written to fulfill that test (see
www.agiledata.org/essays/tdd.html for details). Interestingly, this test-first approach is considered a
detailed design activity first and a testing activity second.
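
As a rough sketch of that test-first rhythm, the example below shows a single developer test written ahead of a hypothetical Account class, followed by just enough production code to make it pass. The class and method names are my own illustration, not anything from this article, and JUnit 4 is assumed.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: write one failing test that describes the intended behavior.
    public class AccountTest {
        @Test
        public void depositIncreasesTheBalance() {
            Account account = new Account(); // starts with a zero balance
            account.deposit(50);
            assertEquals(50, account.getBalance());
        }
    }

    // Step 2: write just enough production code to make that test pass,
    // then refactor and repeat with the next test.
    class Account {
        private int balance;
        public void deposit(int amount) { balance += amount; }
        public int getBalance()         { return balance; }
    }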

Automation is an important aspect of construction testing due to the increased need for regression
testing on evolutionary projects. The FitNesse testing framework (www.fitnesse.org), arguably a
requirements documentation tool, is often used to automate agile acceptance tests. It is also possible
to generate acceptance test cases from use cases and scenario definitions or from process diagrams
such as UML activity diagrams or flow charts, and tools are beginning to emerge to do exactly this.
The XUnit framework—JUnit (www.junit.org) for Java and VBUnit (www.vbunit.org) for Visual
Basic—is used to automate developer tests. Commercial testing tools such as HP Mercury's
TestDirector (www.mercury.com/us/products/quality-center/testdirector) or IBM Rational's
TestManager (www-128.ibm.com/developerworks/rational/products/testmanager) are also good options to consider, as they often prove more sophisticated than their open source alternatives. Static
code analysis tools, such as FindBugs (findbugs.sourceforge.net), are often included in automated
testing runs to help identify potential quality problems in the source code.
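
To give a feel for how acceptance-test automation looks in practice, here is a minimal Fit-style column fixture of the kind FitNesse drives from a wiki table. The DiscountFixture class and its five-percent rule are hypothetical, and the sketch assumes the Fit library (fit.ColumnFixture) bundled with FitNesse is on the classpath.

    import fit.ColumnFixture;

    // A FitNesse wiki table names this fixture; each row fills in the public
    // input field and checks the calculated column, for example:
    //
    //   |DiscountFixture        |
    //   |orderTotal |discount() |
    //   |100.00     |0.00       |
    //   |1500.00    |75.00      |
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal; // input column supplied by the table

        // Calculated column: hypothetical rule of 5% off orders over 1000.
        public double discount() {
            return orderTotal > 1000 ? orderTotal * 0.05 : 0.0;
        }
    }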

Investigative Testing

As I was writing this column, I was lucky enough to attend a presentation given by Dr. Cem Kaner
to the Toronto Association of Systems and Software Quality (TASSQ). Kaner, coauthor of Lessons
Learned in Software Testing (Wiley, 2001), described his thoughts and experiences in software
testing, and as a result, helped me to conceptualize some of my own ideas. In particular, I had been
struggling to properly describe the activities of investigative testing efforts during construction
iterations, and Kaner's presentation helped to coalesce my experiences.

A separate test team? Preposterous, you say! Actually, there is significant value to be gained by submitting your system to an independent test team at intervals throughout the lifecycle so that they can verify the quality of your work. Agile teams produce working software at the end of each construction iteration; therefore, you have something new to test at that point. A common practice is to provide a new version of the system at least once a week, regardless of your iteration length; this is a particularly good strategy the closer you get to the End Game.

The investigative test team's goal should be to ask, "What could go wrong?" and to explore potential
scenarios that neither the development team nor business stakeholders may have considered. They're
attempting to address the question, "Is this system any good?" and not, "Does this system fulfill the
written specification?" The confirmatory testing efforts verify whether the system fulfills the intent,
so simply repeating that work isn't going to add much value. Kaner promotes the idea that good
testers look for defects that programmers missed, exploring the unique blind spots of the individual
developers.

Investigative testers describe potential problems in the form of defect stories—the agile equivalent of
a defect report. A defect story is treated as a form of requirement—it is estimated and prioritized and
put on your requirements stack. The need to fix a defect is a type of requirement, so it makes perfect
sense to address it just like any other requirement. As you would expect, during the End Game, the
only requirement type that you're working on is defect stories.
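
To illustrate how a defect story can sit on the same prioritized stack as any other requirement, here is a small, purely hypothetical sketch; the class names, fields, and sample items are mine, not anything prescribed by agile methods.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical work-item model: defect stories and feature stories share one
    // prioritized, estimated requirements stack.
    public class RequirementsStack {
        static class WorkItem {
            final String kind;        // "feature story" or "defect story"
            final String description;
            final int priority;       // 1 = most important
            final int estimatePoints; // relative size, just like any other story

            WorkItem(String kind, String description, int priority, int estimatePoints) {
                this.kind = kind;
                this.description = description;
                this.priority = priority;
                this.estimatePoints = estimatePoints;
            }
        }

        private final List<WorkItem> stack = new ArrayList<WorkItem>();

        void add(WorkItem item) {
            stack.add(item);
            Collections.sort(stack, new Comparator<WorkItem>() {
                public int compare(WorkItem a, WorkItem b) { return a.priority - b.priority; }
            });
        }

        WorkItem next() {
            return stack.isEmpty() ? null : stack.remove(0);
        }

        public static void main(String[] args) {
            RequirementsStack backlog = new RequirementsStack();
            backlog.add(new WorkItem("feature story", "Transfer funds between accounts", 2, 5));
            backlog.add(new WorkItem("defect story", "Statement omits pending deposits", 1, 3));
            System.out.println("Next item to work on: " + backlog.next().description);
        }
    }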

Your investigative testing will address common issues such as load/stress testing, integration testing,
and security testing. Scenario testing, against both the system itself and the supporting
documentation, is also common. You may also do some form of usability testing—the user interface
is the system to most end users; therefore, usability is critical to success. The UI includes both the
screens that people interact with and the documentation that they read, implying that you need to test
both.

Good investigative testing reveals any problems that developers missed long before they become too expensive to address. It also provides feedback to management that the team is
successfully delivering high-quality working software on a regular basis. Kaner pointed out that
there's no one right way to go about investigative testing, nor is there one correct list of techniques to
employ. Your efforts must reflect the goals of the project team that you're supporting. For example,
is the goal to determine whether the system is ready to be shipped? Is it to ensure that the system
interoperates with other existing systems? Is it to help the developers to identify problems in their
own testing efforts by pointing out the causes of defects that they missed? Is it to minimize the
chance of a lawsuit against your organization or its managers? The context in which you are testing
will determine what and how you test—not only will the context differ for each project, it will also change over the life of the project.

The type of confirmatory testing performed by agile teams is only one part of the testing picture—it is the agile equivalent of traditional smoke testing. This is a great start, and having automated
regression testing provides the safety net required by evolutionary development techniques.
Investigative testing enables you to explore the critical "big picture" issues, as well as the "little picture" issues that nobody thought of until now, something that confirmatory testing typically does not do.

Quality Is Job #1

The testing approach I've described here is different from the traditional, documentation-heavy
approach where you throw the system and its specifications over the wall to testers and hope for the
best. My experience is that the quality of your system is only as good as the quality of your testing
efforts.

Thanks to Michael Bolton, Cem Kaner, Renu L. Rajani, and Steve Robinson for their insightful
feedback.

Agile Testing Resources

You should find the following resources full of provocative ideas about how to improve your
testing efforts:

• Agile Testing Mailing List (tech.groups.yahoo.com/group/agile-testing).

• Context-Driven Software Testing Mailing List (tech.groups.yahoo.com/group/software-testing).

• "High-Volume Test Automation," by C. Kaner, W.P. Bond, and P. McGee (www.kaner.com/pdfs/HVAT_STAR.pdf).

• James Bach's blog (www.satisfice.com/blog).

• "Lessons Learned in Software Testing," by C. Kaner, J. Bach, and B. Pettichord (www.testinglessons.com).

• Michael Bolton's testing articles (www.developsense.com/articles).

• "Roadmap for Agile Testing" and "The Testing Team's Motto," by Brian Marick (www.testing.com/writings).

• "What Is Exploratory Testing?" by James Bach (www.satisfice.com/articles/what_is_et.shtml).
