
TESTER

Essential for software testers


SUBSCRIBE
It’s FREE
for testers

February 2011 £4 /€5 v2.0 number 7

Managing
manual
testing

Including articles by:


Bogdan Bereza-Jarociński (VictO)
Mohamed Patel (Equiem)
George Wilson (Original Software)
Derk-Jan de Grood (Valori)
David Yuill (HP)
Erik van Veenendaal
Ashwin Palaparthi (AppLabs)
Marek Kucharski (Parasoft)
From the editor

Managing manual testing

The term manual testing, like automated testing, means more than just test execution. The idea that testing could ever be done without human intervention is dead: all non-trivial software models or at least interfaces with reality, yet can never match its complexity. That is why whenever people's interests are to be trusted to software, people will be needed to adjust testing to protect those interests better. This issue of Professional Tester is about making and controlling the adjustments.

As several of our contributors have noted in different ways, a key challenge of manual test execution is documenting it for repeatability and incident reporting. Using advanced screen recorders such as BBTestAssistant (see http://www.bbtestassistant.com) is a fast-growing approach, and we have five licences, worth $199 each, for Blueberry Software's innovative and popular tool to give away. They will go to the first five readers to email me at editor@professionaltester.com identifying the story, book, TV programme or film to which each article headline in this issue refers.

Edward Bishop
Editor

Contact
Editor
Edward Bishop
editor@professionaltester.com
IN THIS ISSUE

Managing manual testing

4 Robot, I
Bogdan Bereza-Jarociński proposes a new kind of tool

6 The edge of human
Hands-on performance testing with Mohamed Patel

10 We can remember it for you wholesale
Marek Kucharski on keeping manual execution traceable

13 Now wait for last year
Derk-Jan de Grood wants testers to be busier

15 Use the force better, Luke
David Yuill introduces HP Sprinter

19 Equality Unconquered
Power to the people, says George Wilson

Test process improvement

16 To maturity, and beyond
TMMi has recently been completed. Erik van Veenendaal tells us what is new

Test data

18 Better than life
Ashwin Palaparthi explains how AppLabs fabricates rather than prepares test data

Feature

22 Test library
Reviews of three new testing books and BSI's new web accessibility standard

Managing Director
Niels Valkering
ops@professionaltester.com

Art Director
Christiaan van Heest
art@professionaltester.com

Sales
Rikkert van Erp
advertise@professionaltester.com

Publisher
Jerome H. Mol
publisher@professionaltester.com

Subscriptions
subscribe@professionaltester.com

Contributors to this issue:
Bogdan Bereza-Jarociński
Mohamed Patel
George Wilson
Derk-Jan de Grood
David Yuill
Erik van Veenendaal
Ashwin Palaparthi
Marek Kucharski

Professional Tester is published bimonthly by Test Publishing Ltd. We aim to promote editorial independence and free debate: views expressed by contributors are not necessarily those of the editor nor of the proprietors.

©Test Publishing Ltd 2010. All rights reserved. No part of this publication may be reproduced in any form without prior written permission. “Professional Tester” is a trademark of Test Publishing Ltd.
Visit professionaltester.com for the latest news and commentary

PT - February 2011 - professionaltester.com 3


Managing manual testing

Robot, I
by Bogdan Bereza-Jarociński

What if the generation of tests for manual execution were automated?

Bogdan Bereza-Jarociński envisages an approach which reverses current common practice

Many articles in Professional Tester are concerned with improving automated dynamic test execution, because the potential benefits of doing that are well understood. However, in practice a great deal of test execution is still done manually.

Some believe that will change, and more and more testing will become automated. They may be right: computers can be better than people at executing tests correctly, repeating tests consistently and checking results accurately, all of which are of course vital to effective retesting and regression testing.

Others think there must always be a place for manual test execution – for exactly the same reasons:

• sometimes inconsistent execution, inadvertent or not, increases coverage and therefore the potential to detect defects which automated execution would miss

• sometimes people notice anomalies which a tool has been configured, wrongly, not to look for, or to ignore because they were not foreseen

• automated execution can validate software, but a person can evaluate it: he or she adds business knowledge, understanding, intuition, imagination and empathy with users that a tool cannot emulate

• the act of executing a test can lead a person to create additional valuable tests

• when time and resources are short, people can be asked to attempt the best testing possible under the prevailing conditions. In contrast, tools usually have a fixed preparation and maintenance overhead which must be paid before they can be used to any advantage

• when part of a test cannot be run as written for an obvious reason, such as a minor change to interface design, a person can work around it (when permitted, with care and raising an incident against the test) to complete other parts whose results may be the more important at the time. However trivial, such an obstacle usually stumps automated execution completely.

Some of these limitations of automation may diminish in the future, but few doubt that at present at least some manual execution is essential.

So the variation in test execution and checking introduced by people is sometimes desirable, sometimes not. When it is not, how can we eliminate it? I suggest that the answer is to define the tests more explicitly. Much of the weakness of manual execution comes from its association with manual test preparation. Even when standards and templates are used, test specifications leave some room for interpretation. If that could be eliminated, manual testers could still use them as a basis for useful variations in both the actions taken and what they look for, but could be trusted far more to execute them correctly and not miss any significant discrepancy when needed. All the advantages, and almost none of the disadvantages (the exceptions being speed and use of human resources), of manual testing would be realised.



Managing manual testing

[Figure 1: routes from test descriptions to executable tests – and back. From the test basis (specification, model or system), test cases described in a test specification language pass through a translator (a keyword-based test generator) to become test procedures described in an execution language, which drive both automated and manual execution. Manual execution, informed by the tester's expectation and understanding and by exploration, produces test results, variations and new tests.]

Could this be achieved by using a tool to create consistent, unambiguous tests for people to execute?

Test specification defining what the test is
To automate the creation of detailed test procedures, the test cases (pre-conditions, input, expected output and post-conditions) must be described unambiguously. Most formal languages used to define test cases are developed and used locally within an organization or even for a specific project. Some tool vendors provide basic frameworks for such languages, for example HP's Business Process Testing, which enables the creation of test cases as blocks of words which can then be manipulated graphically. They are not very much like programming languages, but more like business modelling languages, so that business rather than technical people can learn to write test cases and test scenarios using them.

Tailored languages can be made 100% suitable for the purposes of an organization and project. On the other hand, building, teaching and learning such languages is expensive, and they tend to hamper collaboration, so a standard language for this purpose would be desirable. Perhaps one could be based on a meta-language such as BPML or UML, or adapted from a language used to describe test cases such as TTCN or LabVIEW?

Test description telling a person how to execute the test
Manual test preparation usually involves, to some degree, the use of natural language. This creates a lot of problems: different test analysts have different description styles, different organizations use different description guidelines, and different testers may understand a description differently.

Using a formal language at this stage too should eliminate the first two problems but would only change the third. The formal languages used to specify tests are designed for machine, not human, readability. Executing tests expressed in them manually would be difficult, painstaking, error-prone work. The keyword-driven approach, designed to make it easier to create and maintain automated tests, may be a partial solution, but the person executing would need to either know, or make constant reference to, the definitions of the keywords. Again this would probably be excessively demanding work in most circumstances.

So a second language is needed: more abstract, easily readable, but still formal enough to define detailed procedures with no ambiguity. It may resemble natural language, express the actions and inputs in some very ergonomic tabular or graphical form, include visual cues, and/or communicate with the person executing some other way. There is an opportunity here for great design but it must not go too far: for example, having a person repeat inputs shown in a sequence of images, or even a movie or similar, might take too much of his or her attention away from the test item.

Given the definition of such a language or description system – which because of its purpose would need to be both simple and small – it should be quite easy to write a “translator” program that generates it from the test specification language.

Ideally, it would be possible to define new tests directly using the “execution language” or system too. That would be extremely useful in incident reporting and retesting when an execution-time variation of the procedure, or a new test created in an ad-hoc or exploratory way, detects an anomaly, or when it is desired to add such a test, passed or not, to a regression suite. Depending on the form the language or system takes, an extra interface or “development kit” might be necessary to achieve this, and/or syntax checking tools could be used to verify and debug “handcoded” tests.

Finally, the execution procedure could be used also as input for automated generation of tests for automated execution, on the same principle as keyword-driven automation methods. Thus the same tests could be run manually or automatically as most appropriate for the current objectives. Doing both and comparing the results, such as resulting change to back-end data etc, could be interesting too: it may help to reveal some subtle and dangerous defects, such as timing issues, that either manual or automated execution alone cannot reveal.

Bogdan Bereza-Jarociński is a testing consultant, speaker and trainer and a long-time contributor to Professional Tester. He is the proprietor of VictO (http://victo.eu)
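The “translator” idea lends itself to a small sketch. This is an illustrative assumption only: the article deliberately leaves both languages undefined, so the mini test-case format, field names and step templates below are invented, not taken from any real tool.

```python
# A test case in a hypothetical formal specification format: preconditions,
# steps and expected outcomes, with no free natural-language text.
spec = {
    "id": "TC-017",
    "preconditions": ["customer 'C042' exists", "user is logged in as clerk"],
    "steps": [
        {"action": "open", "target": "customer search page"},
        {"action": "enter", "target": "customer id field", "value": "C042"},
        {"action": "press", "target": "Search button"},
    ],
    "expected": ["customer C042's account details are displayed"],
}

def translate(spec):
    """Render a formal test case as unambiguous numbered instructions
    for a person to execute: the "execution language" of the article."""
    templates = {
        "open":  "Open the {target}.",
        "enter": "Type '{value}' into the {target}.",
        "press": "Press the {target}.",
    }
    lines = [f"Test {spec['id']}"]
    lines += [f"Given: {p}" for p in spec["preconditions"]]
    for n, step in enumerate(spec["steps"], start=1):
        lines.append(f"{n}. " + templates[step["action"]].format(**step))
    lines += [f"Check: {e}" for e in spec["expected"]]
    return "\n".join(lines)

print(translate(spec))
```

The point of the sketch is that the templates admit no interpretation: every tester executing TC-017 types the same value into the same field, yet remains free to vary usefully around the scripted steps.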



Managing manual testing

The edge of human

by Mohamed Patel

Applications are mutating in unpredictable ways. Performance testing must be able to adapt

Mohamed Patel tells us about his favourite tool

After many years as a performance tester I have learned to expect the unexpected. While most other testing disciplines aim for repeatability and predictability, performance testing has always been about ad-hoc problem solving. We operate not on the clean, well-lit superstructure of user interfaces and architecture designs but in the very murky depths. These days even most developers don't know what goes on at low level as they assemble their applications from dinky components and flashy development kits.

Constructing effective test scripts from protocol transactions requires peering into dark recesses, and whatever bizarre things are found must be simulated by many replicants, using complex logic and data handling, each behaving realistically but differently: simple cloning usually won't do. While functional testing may be (very slowly) moving towards standardization, performance testing is diversifying. It's been a long time since I have worked on two similar projects. Rather, it's amazing how different each new situation – ie application and testing requirement – is from all the others I've seen before.

The power to change
So increasingly, assuring performance requires not systematic skills or prescriptive tools, but extreme flexibility. In order to deliver the testing required, the tester must be able to adapt and innovate methods and to override and extend automated functionality. At Equiem we use and consult on many performance testing tools and are often asked which we prefer. On a simple comparison of features, there is often little to choose between them: some are slightly stronger than others in various areas, but not importantly so. A good fit with development and other testing technologies in use can be a factor too. But to us the vital thing is extensibility: the capacity to create the behaviours you need, rather than paying for many built-in ones you don't. On that criterion, the leading tool is Facilita's Forecast.

Test implementation using actual application code
Using Forecast's capabilities to the full requires coding. We don't see disadvantage in this as we believe it is a skill the modern performance tester simply must possess. For example, DLLs, JARs or .NET assemblies can be associated with custom virtual users. The external code then becomes available for use within the scripts: the developer's, or third-


Is your test process ready to cope with increased workload?

According to our recently published World Quality Report 2010-2011, in co-production with Sogeti and HP, investment is shifting towards building new applications¹, which means process improvements are necessary in order to cope with the increasing workload.

One way to achieve the necessary improvements is using the Test Process Improvement model - TPI NEXT®. Developed by Sogeti, TPI NEXT® is our world-leading model for providing an objective step-by-step guide to business-driven test process improvement.

How the TPI NEXT® model works
The TPI NEXT® assessment is used to measure your testing process. How mature is your organization at a particular moment? Which business drivers need to be addressed? Interviews and accelerators assist in creating a target maturity matrix. This provides an overview of the Key Areas that should be improved in order to reach a higher maturity. These are prioritized and the corresponding improvements and implementation support is defined. This approach has already been successfully applied at large international clients such as Air France-KLM. General conclusions drawn from these assessments support implementation of the model in future assessments.

Conclusions drawn from carrying out TPI NEXT® scans

• The TPI NEXT® model is highly suitable for tailor-made scans for organizations and businesses.

• Ensure that the people being interviewed know up-front that this is not an audit – people themselves are not being judged!

• TPI NEXT® scans require thorough planning, especially when there is a short timeframe and stakeholders reside in different countries.

• Implementing the improvements after the TPI NEXT® assessment needs attention and commitment from management.

Beyond TPI NEXT®
Capgemini has extensive experience in providing a clear visualization of the improvements and implementation that result from a TPI NEXT® assessment. By implementing the conclusions from TPI NEXT® scans and clarifying the roadmap to our clients, it becomes easier to evaluate and check the necessary improvements. This approach combined with the full commitment of the Capgemini team has proven to be especially appreciated by our clients.

Spreading the experience
At Capgemini in the Netherlands, the TPI Expert Group is currently developing courses to help clients put our experience into their practice. Researching and combining different test process improvement models and best practices, together with Capgemini's Quality Blueprint (which provides a comparative benchmark against the industry standard), leads to practical support and guidance throughout the improvement process.

For more information about TPI NEXT® and the activities of the Capgemini TPI Expert Group, please contact us at testen.nl@capgemini.com.

¹ http://www.uk.capgemini.com/insights-and-resources/by-publication/world_quality_report_2010__2011

www.nl.capgemini.com
Managing manual testing

party, libraries can be used to encode and decode data during testing without needing access to proprietary or otherwise unavailable source code.

Object-oriented scripting
The class structure of the scripts enables the tester to override any of the base methods, introducing his or her own logic, conditions and validation. I used this recently when the application under test required client-side timestamping and pre-validation of data before every HTTP POST request. Coding this once, in the custom virtual user, means it is automatically implemented in all requests sent by all scripts, including ones yet to be created. These concepts are of course nothing new to OO programmers, but many other performance testing tools try to hide the real code behind user interfaces or simplified procedural languages that serve only to restrict what you can do.

Global editing
Heavily UI-based tools can be very cumbersome, requiring user input for every data item to be correlated or modified, making for a great deal of error-prone editing. Instead, Forecast has a wizard to define script generation rules. When a pattern, eg a header type, URL or specific document content, is detected, the rule inserts code for correlation, checking, extraction, header creation and so on. Rather than editing the scripts, one edits and extends the rules: the scripts are then regenerated according to the new rules.

Dynamic form handling
Imagine a large form with many fields, perhaps containing details of a retrieved customer account, and a test that requires one field to be amended before the form is submitted. In other tools, the script contains code to populate all fields. A Forecast script refers to the one changed field only, making it easier to edit, extend or re-use. The other field inputs are correlated automatically with the values embedded when the form was served. They are in the script only as comments: if desired, this behaviour can be changed, causing the script always to amend more fields.

Now, suppose the form is designed to change depending on the data: that is, it can have more, fewer or different fields depending on the customer type and/or history. With many tools, it is necessary to determine every possible variant of the form – which itself can be difficult or impossible – and create specific scripts to handle each. With Forecast, provided the fields they amend are still present, scripts always execute. Values of all other fields, no matter what they are, are captured when the form is served and sent automatically when it is submitted.

Mo Patel's 25-year IT career has included successful performance testing of many complex applications in the retail, banking and public sectors. He is a founder and director of Equiem (http://equiem.com) which specializes in highly tailored performance testing services. For more information about Forecast see http://facilita.co.uk


For testers and test managers
who want to enhance their
knowledge of test management

In striving for operational efficiency and quality, and to satisfy growing government regulation, more and more companies are testing software professionally.

In “Advanced Test Management” testers and test managers will find:


- an overview of various approaches and techniques
- numerous examples, tips and tricks, tables and illustrations

The book provides a clearer and more effective manual for a well-oiled testing approach. This knowledge
allows you to arrange custom software testing and integrate it in any business environment.

The book ties in with the knowledge needed to gain the ISTQB Advanced Test Management Certificate in
Software Testing. ps_testware is an accredited ISTQB Foundation and Advanced training provider.

HOW TO GET IT?

You can order this book for 44,95 EUR* via www.pstestware.com, or get your copy for free when attending ISTQB Advanced Test Management training.

* (excl. VAT and shipping costs)

ps_testware is a leading company specialized in software testing, software quality and quality assurance. With offices in Belgium, The Netherlands and France, ps_testware provides services in all matters of structured software testing and related fields.

For a detailed table of contents: www.pstestware.com


Managing manual testing

We can remember it for you wholesale

by Marek Kucharski

To stay on track, trace

Many were surprised last autumn when Parasoft embraced manual testing. Marek Kucharski explains what has changed

There is a Polish saying that “only a cow does not change her mind”. She is happy just to chew grass. Parasoft, a company known for advanced tools, has been like that cow for years: we thought that dedicated testers would eventually be replaced almost completely by tests created by developers and executed automatically overnight. We've changed our mind. We still believe in automation, and we use our products to automate a very high proportion of our internal testing. But we now acknowledge that sometimes manual testing is the best, or even the only, option and have extended our ALM platform, Concerto, to embrace it: making it as traceable, auditable and integrated with development as automated testing.

Visibility makes faster work
Figure 1 shows a requirement in Concerto. The tabs at the top summarize and give access to detailed information about the work done so far. To implement the requirement, 32 development tasks were identified (these include all development work, not just coding); 37,641 lines of code have been created or modified; two automated tests have been run and detected no defects; and nine manual tests (shown under the Scenarios tab) have been run, of which four have failed.

Thus the people executing the manual tests have visibility of what everyone else, including developers, has already done:

Figure 1: A requirement in Concerto Project Center



Managing manual testing

Figure 2: manual test scenarios being managed and edited

unit testing, static analysis, regression coverage and everything else. This input to manual testing helps to target it precisely, making it easier and more cost-effective. When a developer completes a task, he or she creates a test for it, manual or automated. The steps taken and their outcomes are recorded: they demonstrate to the tester what is considered correct behaviour, far more quickly and clearly than a formal description. This avoids misunderstandings and helps the tester know what is and is not an incident. The tester adds additional scenarios, based or not on the ones provided by development.

Traceability makes less work
Then, the loop is closed: when a defect is detected and fixed, Concerto enables traceability of test to requirement, requirement to code, and defect to modification. It therefore knows at all times exactly what tests, both manual and automated, need to be re-executed for retesting and regression testing purposes (figure 3). This information enables enormous savings in what is by nature a time-consuming activity. Finally, a static analysis rule can be created to prevent the construct(s) that caused the defect being repeated anywhere else. Collaboration at that level between development and QA makes future expensive defects simply not happen.

Figure 3: retests recommended due to code modification



Managing manual testing

Tracer makes traceability work


These facilities are available even when code is created by an external development organization which is not using Concerto. Tracer identifies the methods and objects used when each test case is executed, mapping test cases to code and completing the traceability information needed to manage and report (figure 4) the entire development and test effort, including manual testing, comprehensively.

Figure 4: reporting on manual testing

Marek Kucharski is CEO Europe of Parasoft. For more information about Concerto see
http://parasoft.com/concerto



Managing manual testing

Now wait for last year

by Derk-Jan de Grood

The things that delay testing and how to avoid them

Why are we waiting? Derk-Jan de Grood finds out

Software testing is usually on the critical path. Most testers feel the burden of the anxiety and impatience of managers and stakeholders, who expect testing activity to reflect it, and often express surprise to find this is not the case: when the storm everywhere else is reaching its height, the testers… wait. Why, when even small delays can have severe impact on the project timeline? Because while test execution is given high priority, providing the things needed to do it is not. Not having those things at the right times (i) delays the start of test execution; (ii) makes test execution take longer; and (iii) makes testing less effective and dependable by requiring more assumptions to be made.

Good test managers try to emphasize the advantages of early involvement and working in parallel with development, so that test execution can begin immediately and be done efficiently whenever work products are released. Unfortunately those concepts are still not well understood – or are not taken seriously – by other leaders, whose concern with project timescales tends to make them concentrate on removing any potential cause of delay to development, forcing more of the testing work to take place later, actually causing worse delays.

To help managers to reduce time-to-market, we as testers need to get the measures required for better, more timely testing higher on their agenda. Trying to do this by making them understand testing better has failed. We might achieve more by focusing instead on what they do understand, highlighting the project issues that cause delay to testing and what might be done to eliminate it.

I recently carried out questionnaire-based research with testers in multiple industries, aiming to discover the causes of the wasted waiting time. This article discusses the three causes cited most often, and suggests approaches to arguing for project change that might help to mitigate them. That should keep testers busy for more, and unduly pressurized for less, of the time, helping to bring about what everyone wants – quality products delivered faster.

Unavailability of test environment
Test execution time is typically greatly increased because the test environment is unavailable, unstable or unusable.

This is a familiar but still common situation for testers. I recently worked on a large project with a one-week release cycle. Unfortunately “release” meant only delivery of code. The deployment and configuration required to enable meaningful test execution and results checking took several days, reducing testing time to two or three days a week.

Unstable and slow environments are also commonly experienced. Both cause very significant delay and risk. In the first case, the environment crashes before the test is complete so it must be repeated unnecessarily. Casual testers – ie stakeholders and developers – who “explore” software often do not understand that “carrying on where you left off” is usually impossible and always dangerous in systematic testing. As well as wasting their time waiting for responses, sluggish interfaces affect testers' concentration and lead to mistakes.

Discussions with project managers regarding test environments tend to be rare


Managing manual testing

because they are seen as a side issue, not contributing directly to the delivered product. It needs to be made clear that if release and testing of code are on the critical path then so is making the release testable. The concept of a timeline event called “release to testing” – which occurs only after (i) the developers have released not a build but a full installation and (ii) the testers have verified it – might help. Failing that, asking the following questions will help to anticipate problems, and taking action to make more of the answers positive will reduce execution time.

• are test environment considerations being included in project risk management?

• has the number of environments, and instances of each, been established or estimated?

• does the test plan include identification, and the project plan the creation and maintenance, of test data?

• have the specifications of environments yet to be created been documented and agreed?

• have the stability and performance of environments already created been assessed?

• will there be any requirement to share environments with other activities or projects?

• does the project plan include allocation of environments to testing and does the schedule show the associated dependencies?

• have technical support resources been allocated to provide and configure user accounts and then to assist users of the environments?

• has configuration management to enable environments to be reproduced exactly, and to track change and difference between environments, been implemented?

Delay in fixing show-stopping bugs
Testing aims to find the most important bugs first, but those bugs are often showstoppers which cause testing to be suspended until they are fixed. But how long will that take? It's hard to say. Bug fixing is seldom a well-managed process. It's important to ensure that management realises, and takes into account, the fact that while incidents are being reported, reproduced, discussed and resolved, testers will often be waiting.

When discussing this problem with management, ask the following questions:

• is the incident management mechanism sufficient, and are all who should be using it doing so correctly?

• if an incident occurs and a tester is unsure whether to raise it, what should he or she do?

• does the project plan allocate sufficient time for incidents to be resolved (investigated and, if necessary, fixed)? Do its estimates take into account the quality, complexity and commenting of code and the availability of the people needed to resolve incidents?

Lack of sufficient information about the system
Every tester understands the importance of getting system definitions (specifications, requirements, use cases, stories etc) as early as possible. If they are late, test preparation is made difficult, causing subsequent time-consuming change. If they must be chased, the time for test preparation is reduced, with the same result. If they are inadequate, the time taken to discuss incidents is increased, impacting bug-fix time.

To create and justify a strategy that will not fail because of waiting caused by unsafe assumptions, ask management:

• expressed quantitatively (eg on a scale of 1 to 10, where 1 is a blank sheet of paper and 10 is the complete, perfect design), how detailed can we expect documented requirements to be (i) before the testing project begins; (ii) at management-defined milestones in the critical path of the development project?

• how accurate can we expect documented requirements to be, as a proportion of the final features of the accepted product, at those critical milestones?

• is the information needed by the various project participants, including testers, identified and agreed before system definitions are documented?

• where detailed documentation of that information is not to be available, are information sharing activities to replace it planned?

Other reasons testing time is lost
The research identified four other common issues that force testers to wait. The full results and many more suggestions are included in a comprehensive checklist, intended to offer a fresh approach to opening and maintaining productive dialogue. Its questions align with the concerns and areas of expertise of project management, helping to eliminate time wasting from any testing effort. The checklist is available free at http://www.smartest.nl/toolstemplates/procesverbetering

Derk-Jan de Grood is a test manager at Valori (http://valori.nl) and author of TestGoal: Result-Driven Testing (Springer, ISBN 9783540788287). His new book in Dutch, Grip op IT: De Held Die Voor Mijn Nachtrust Zorgt (Academic Service, ISBN 9789012582599) will be published later this year. He speaks frequently at international testing conferences, including about his passion for aligning IT and business

PT - February 2011 - professionaltester.com


Managing manual testing

Use the force better, Luke

by David Yuill

Manual test execution is monotonous, time consuming and error prone. Why is it still so common?

David Yuill introduces HP's new concept: accelerated manual test execution

In certain situations manual testing is better than automated. It can take many forms, adapting to achieve immediate objectives and solve or work around problems at any point in the application lifecycle, making it popular with agile development teams and with V-model-minded testers. Some of those forms require little preparation and none require script recording, coding or intricate technical setup. Some do not require technical skill: there will always be parts of applications that must be tested manually by business analysts and end users as well as testers. There will always be the need to check for important defects very quickly using knowledge and experience rather than systematic techniques. And testers will always be expected to, and want to, explore products in creative, unplanned ways to improve assurance against unforeseen high-impact failures. Entirely manual testing also has disadvantages, but they are greatly reduced by HP's new Sprinter technology, which is now core functionality in Quality Center.

Data-driven manual testing is inaccurate
A wrongly-performed step or incorrect data input can lead to overlooked defects or wasteful false incidents. Repeating the same steps multiple times with different inputs makes mistakes even more likely, because of the sheer tedium and the need to switch attention between the application under test and the data source. As time becomes short, discipline is lost under pressure to take shortcuts, deliberately skipping steps or entering incorrect data. HP Sprinter, under manual control, automatically injects the correct data into every field, increasing speed, accuracy and ease.

Manual compatibility testing takes too long
It is typically possible to execute manual tests on a very limited number of environments: there simply is not time to continue to repeat execution. HP Sprinter's mirror testing replicates manual execution automatically and simultaneously across multiple platforms and configurations, increasing compatibility coverage.

Reporting and reproducing incidents wastes time
Whenever manual execution has a nonsystematic element, reporting an incident sufficiently becomes difficult. That delays resolution, which in turn delays testing. Sprinter records and logs manual testing steps precisely, ensuring that every incident reported is reproduced at the first attempt. The recording is easy to read, and HP Sprinter provides state-of-the-art screen and movie capture and annotation facilities to accelerate test documentation and incident management and resolution.

David Yuill is Apps Product Solution Marketing Manager (EMEA) at HP. For more information about HP Sprinter see http://hp.com/go/sprinter and http://youtube.com/watch?v=-G8C61PnlS0
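The data-driven pattern Sprinter automates can be sketched generically. This is illustrative Python, not HP's API: the `run_data_driven` helper, the field names and the data rows are all invented for the example.

```python
import csv
import io

def run_data_driven(step_fn, rows):
    """Execute one test procedure once per data row, collecting verdicts.

    step_fn: callable taking a dict of field values, returning True on pass.
    rows: iterable of dicts, e.g. parsed from a CSV data source.
    """
    results = []
    for row in rows:
        try:
            results.append((row, step_fn(row)))
        except Exception:
            # A failing or crashing step is recorded, not allowed to stop the run
            results.append((row, False))
    return results

# Hypothetical data source: a tool would inject each row into the form fields
DATA = "qty,price\n1,10\n0,10\n-1,10\n"

def order_entry(fields):
    # Invented validity check standing in for the application under test
    return int(fields["qty"]) > 0 and int(fields["price"]) > 0

rows = csv.DictReader(io.StringIO(DATA))
results = run_data_driven(order_entry, rows)
print([ok for _, ok in results])  # one verdict per data row: [True, False, False]
```

The point of the pattern is the separation it enforces: the procedure is written once and the data varies, so the tedium and transcription errors the article describes cannot creep in.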



Test process improvement

To maturity, and beyond


by Erik van Veenendaal

TMMi intends to help organizations achieve more effective, more efficient, continually improving testing. The first complete version was launched last month.

Erik van Veenendaal describes the final two maturity levels which have been added

The Test Maturity Model Integration (“TMMi”) is a guideline and reference framework for test process improvement. Such a framework is often called a “model”, that is a generalized description of how an activity, in this case testing, should be done. TMMi can be used to complement Capability Maturity Model Integration (“CMMI”), the Carnegie Mellon Software Engineering Institute's wider process improvement approach (see http://sei.cmu.edu/cmmi), or independently.

Applying TMMi to evaluate and improve an organization's test process should increase test productivity and therefore product quality. In achieving this it benefits testers by promoting education, sufficient resourcing and tight integration of testing with development.

Like CMMI, TMMi defines maturity levels, process areas, improvement goals and practices. An organization that has not implemented TMMi is assumed to be at maturity level 1. Being at level 2, called “Managed”, requires the practices most testers would consider basic and essential to any test project: decision on approach, production of plans and application of techniques. I call it “the project-oriented level”.

The goals and practices required by level 3, “Defined”, invoke a test organization, professional testers (that is, people whose main role is testing and who are trained to perform it), earlier and more strategic test planning, non-functional testing and reviews. These practices are deployed across the organization, not just at the project level. I think of level 3 as the one where testing has become institutionalized: that is, defined, managed and organized. To achieve that, testers are involved in development projects at or near their commencement.

Version 3.1 of TMMi, launched at EuroSTAR in December 2010, defines its top levels: 4 “Measured” and 5 “Optimization”.

TMMi level 4: Measured
This is the level where testing becomes self-aware. The Test Measurement process area requires that the technical, managerial and operational resources achieved to reach level 3 are used to put in place an organization-wide programme capable of measuring the effectiveness and productivity of testing, to assess productivity and monitor improvement. Analysis of the measurements taken is used to support (i) taking of decisions based on fact and (ii) prediction of future test performance and cost.

Rather than being simply necessary to detect defects, testing at this level is evaluation: everything that is done to check the quality of all work products, throughout the software lifecycle. That quality is understood quantitatively, supporting the achievement of specified quality needs, attributes and metrics. Work products are evaluated against these quantitative criteria and management is informed and driven by that evaluation throughout the lifecycle. All of these practices are covered in the Product Quality Evaluation process area.

The Advanced Peer Reviews process area is introduced and builds on the review practices from level 3. Peer review becomes a practice to measure work product quality early in the life cycle. The findings and measurement results are the basis of the strategy, planning and implementation of dynamic testing of subsequent (work) products.

Figure 1: TMMi maturity levels and their process areas
5 Optimization: Defect Prevention; Test Process Optimization; Quality Control
4 Measured: Test Measurement; Software Quality Evaluation; Advanced Peer Reviews
3 Defined: Test Organization; Test Training Program; Test Lifecycle and Integration; Non-Functional Testing; Peer Reviews
2 Managed: Test Policy and Strategy; Test Planning; Test Monitoring and Control; Test Design and Execution; Test Environment
1 Initial

TMMi level 5: Optimization
When the improvement goals at levels 2, 3 and 4 have been achieved, testing is defined completely and measured accurately, enabling its cost and effectiveness to be controlled. At level 5 the measurements become statistical and the control detailed enough to be used to fine-tune the process and achieve continuous further improvement: testing becomes self-optimizing.

Improvement is defined as that which helps to achieve the organization's business objectives. The basis for improvement is a quantitative understanding of the causes of variation inherent to the process; incremental and innovative change is applied to address those causes, increasing predictability. An optimizing process is also supported as much as possible by automation and able to support technology transfer and test process component reuse.

To achieve such a process a permanent group, formed of appropriately skilled and trained people, is formally established. Some organizations call this the Test Process Group or TPG: it relates to and grows from the test organization defined at TMMi level 3, but now takes on responsibility for practices introduced at level 5: establishing and applying a procedure to identify process enhancements, developing and maintaining a library of reusable process assets, and evaluating and selecting new test methods and tools.

Level 5 introduces a new process area, Defect Prevention. Defects are analyzed to identify their causes and action taken, comprising change to the test and/or other processes as necessary, to prevent the introduction of similar and related defects in future. By including these practices, at level 5 the objective of testing becomes to prevent defects.

This and the other process areas introduced at level 5, Test Process Optimization and Quality Control, are interdependent and cyclic: Defect Prevention assists product and process Quality Control, which contributes to Test Process Optimization, which in turn feeds into Defect Prevention and Quality Control. All three process areas are, in turn, supported by the continuing practices within the process areas established at the lower levels.

Erik van Veenendaal (http://erikvanveenendaal.nl) is a widely-recognized expert in software testing, an international testing consultant and trainer and the founder of Improve Quality Services BV (http://improveqs.nl). He is the lead author and developer of TMMi and vice chair of the TMMi Foundation. His new book with Jan Jaap Cannegieter, The Little TMMi: Objective-Driven Test Process Improvement, is reviewed on page 22.


Test data

Better than life

by Ashwin Palaparthi

Real data limits testing: fabricated data empowers it

Ashwin Palaparthi explains how AppLabs creates the data it needs to test enterprise-level applications

Getting test data with enough volume, variety and variability is often troublesome, more so when testing multi-environment enterprise systems that will interface with external systems. Using real data has compliance implications and adapting it to deal with them properly often compromises testing effectiveness. The painstaking work done to make the data “safe” and extend it for instrumentation purposes while maintaining integrity and dependency is very expensive to repeat when change to the application under test occurs.

To address this challenge AppLabs creates test data from scratch, using its own Data Fabrication Toolkit (“DFT”) to produce the very large numbers of records commonly necessary for testing in banking, financial, insurance and healthcare applications, or to test database performance or validate analytics in any system. DFT is integral to our service delivery. It includes features to populate and maintain referential integrity of specific, difficult fields including US Social Security number, UK National Insurance number and credit card details. The data can also be very rich. DFT includes facilities to calculate and insert values that:

• are functionally representative, in order to achieve coverage of classes and domains
• violate defined constraints, for robustness and reliability testing
• contain security threats such as SQL injection and persistent cross-site scripting attempts.

Sculpting and controlling the data
The inputs to DFT include XML files containing the field definitions plus captured metadata that controls the quality, variety and variability factors such as referential integrity, geographical and demographic variation and business intelligence. Its configuration controls include support for static configurable lookup, and weighted-random pickup of data from enumeration sets.

As well as populating test databases, DFT can create related test input data (figure 1). In recent client projects we have used this capability to create data-driven test suites – for both manual and automated execution – to invoke and exercise specific combinations of input and test data, and meta-driven test suites to permutate the order in which test cases are executed with each regression cycle. Test inputs can be varied using fully configurable randomization too.

Deploying, refreshing and updating
Loading data produced by DFT into the test DBMSs is automated using Apache ANT (http://ant.apache.org).

Once DFT has been configured to produce the required data, the same configuration can be used again but with the addition of uniformly-distributed or stochastic randomization. This creates further data which has the same defined characteristics and is governed by the same constraints, but is materially different, refreshing the test data so increasing the coverage and defect finding potential of testing.

To update data in a managed rather than random way, the characteristics that make each record valid or invalid are recorded and can be varied at will: so the minimum amount of change to make valid records invalid and vice-versa can be applied easily, and coverage of the range of factors that make them so monitored. A second approach uses a small amount of seed data to ensure the presence of specific, desired records among the many created on-the-fly.

The test data is under full configuration management at all times: DFT is integrated tightly with CVS (http://nongnu.org/cvs).

Figure 1: DFT in test and test data generation. Data model metadata and field-level configuration feed AppLabs DFT, which produces test data (multiple versions) and test input and configuration correlated to test actions; these drive calls to external systems and the app under test (multiple versions and environments) and populate test databases (multiple types).

Ashwin Palaparthi (ashwin.p@applabs.com) is VP, innovation at AppLabs, which he rejoined recently when it acquired ValueMinds, the company he left to found three years ago and which has created many innovative test tools including testersdesk.com.
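The “weighted-random pickup from enumeration sets” described above can be sketched generically. This is plain Python with invented field definitions, not DFT or its XML configuration format:

```python
import random

# Invented field definitions: each field has an enumeration of values with
# weights, so fabricated records follow a realistic distribution.
FIELDS = {
    "country": [("UK", 60), ("US", 30), ("NL", 10)],
    "account": [("current", 70), ("savings", 25), ("frozen", 5)],
}

def fabricate_record(fields, rng):
    """Build one record by weighted-random pickup from each enumeration set."""
    record = {}
    for name, enum in fields.items():
        values = [v for v, _ in enum]
        weights = [w for _, w in enum]
        record[name] = rng.choices(values, weights=weights, k=1)[0]
    return record

rng = random.Random(42)  # seeded, so a refresh run is reproducible
batch = [fabricate_record(FIELDS, rng) for _ in range(1000)]
uk_share = sum(r["country"] == "UK" for r in batch) / len(batch)
print(round(uk_share, 2))  # close to the configured 0.6 weight
```

Seeding the generator gives the configuration-management property the article stresses: the same configuration and seed reproduce the same data, while a new seed refreshes it within the same constraints.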



Managing manual testing

Equality Unconquered
by George Wilson

Testing will never stop diversifying and never should

George Wilson says quality management tools should provide liberation, not limitation

Application Quality Management (AQM) products have many functions. They do different things for people in different roles, depending on complex factors such as process, entities and other tools. However, their purpose is clear and unchanging: to ensure business objectives are met. Doing that well is becoming harder as software organizations are increasingly challenged to achieve better quality, less risk, faster delivery and lower costs.

So development methodologies are evolving, becoming more agile and closely aligned to changing business need. Testing is having to change too but, as always, how it should is less obvious. As development becomes more reactive and unpredictable, the ways in which testing is organized and performed across organizations, teams and even projects are becoming more, not less, varied and complex. To cope, testing must remain able to change itself, to integrate more and more closely with project management, development and operations and to involve business more and more directly. The days when these functions operated in their own silos and communicated infrequently are gone. Intrinsic continuous connection between them is now mandatory.

AQM has not kept up. The market-leading products continue to impose their own processes: a narrow and limiting hierarchical workspace of requirements, tasks and defects based on the practices of the programmers that created them or on dubious interpretations of incomplete, decades-old standards. At Original Software we formed the opinion some time ago that what testers need now is not a prescriptive database application, but a quality management platform that can be used to implement and support any process and integrate with any external activity or tool. Our offering, Qualify, was launched last year. Like older products it stores, monitors, controls and communicates information about requirements, design, build, test planning and control, test execution, test environment and deployment, providing a unified view. But it is designed for use by business analysts, project managers and operations as well as development and testing staff, enabling all to implement their own processes exactly and assimilate them seamlessly.

Choose your own adventure
Qualify's data definitions are completely configurable: its flexibility is limitless. It comes with templates based on all the popular methodologies, including traditional ones, for customization, or can be set up from scratch within realistic time: a customer with expertise in Sogeti's TMap implemented it fully using Qualify in 48 hours. A user account, once created, is available at all times, even across different methodologies and roles. Its attributes are retained and its permissions can be configured separately for each project. While other tools require a great deal of data input before testing effort can begin, and continue to provide more questions than answers for a long time after that, Qualify hits the ground running. This flexibility is particularly valuable to testing consultancies and service providers: they can provide the test process each of their customers prefers using a single product and re-using assets and expertise common to multiple projects. And when an improvement to a process is identified, it can be implemented immediately.

To each according to need, not ability
Direct, objectives-driven management requires removal of unnecessary barriers between roles. Rules such as that only a test manager can assign test cases to testers, that only developers can change the status of an incident to “fixed” and only a tester to “retested”, and that testers produce summary reports for BAs to read, are simply not agile. The idea that anyone in the team can take on any of the team's tasks is. Qualify's extreme ease of use makes that a reality: its interface is clean, simple, intuitive and completely code free. Here's an example:

Suppose “Sally Business Analyst” is indisposed and “Joe Developer” is to take responsibility for executing her test cases related to the requirement “Order Entry 1.10”. First we view all test cases for that requirement (figure 1). Next, we group them to get all Sally's ones together: that's done simply by dragging the “Allocated To” column heading onto the “Group by” area just above it (figure 2).

Figure 1: test cases for requirement “Order Entry 1.10”
Figure 2: grouping test cases

Now the affected test cases are selected in the familiar Windows way: click the first and shift-click the last. All are dragged and dropped onto the “Joe Developer” group (figure 3). And that's it.

Figure 3: reassigning multiple test cases

Another example: a DBA, a developer and their PM, Cindy, start work on new test cases. Cindy changes the status of all three by control-clicking them and dragging them to the “In Progress” group (figure 4). They are now tasks in progress throughout Qualify: in Gantt charts, reports, the affected users' calendars (figure 5) and work lists, and the practically infinite other data views.

Figure 4: changing status of multiple test cases
Figure 5: Qualify's planner view

Auditability and accountability for today's business world
It's no longer enough to report the results of testing. Business stakeholders need the ability to define their own reporting so that they can understand precisely what has been done at whatever level is needed to meet their rapidly-changing needs, including evidencing compliance. Qualify provides absolutely detailed history so that complete audit trailing, versioning, rollback and coverage measurement can be achieved easily. It outputs fully customizable reports in various formats including publish-ready HTML and PDF.

George Wilson is a founder and general manager of Original Software (http://origsoft.com).


Test library

Revelation space

Advanced Test Management
by Patrick Hendrickx and Chris Van Bael
ps_testware, ISBN 9789090257273
Available from http://amazon.fr, soon from http://amazon.com

In his foreword Alain Bultink says that ps_testware and ISTQB share the same philosophy of testing. That is the great strength of this book: the authors have embraced completely the often mysterious ISTQB Advanced Level Syllabus and explained their interpretations of it more clearly than anyone else has yet managed. Even better, the explanation is practical: the titles of many sections begin with “How to...” and they really do demonstrate actually doing the things needed to choose the right exam answer. Diagrams (not including the silly photos at the start of each chapter) are well executed. As the syllabus requires, much is drawn from other sources, but the whole adheres tightly to ISTQB's prescriptions – exactly what someone aiming to pass Advanced Level Test Manager needs. Indeed, the tripartite nature of the syllabus means that portions of the book can be used also by those studying for Test Analyst and Technical Test Analyst. A subset of the content would also be good, in my opinion better than BCS's official book, for Foundation Level candidates. The large page count (685) is due to the use of “structured writing”, which is granular, organized and formulaic. Paragraphs are very short with many in tables, and element types and subheadings are plentiful and diverse. It works brilliantly for study and reference, but less well for personal learning and understanding, because the fragmentation makes linear reading heavy going: there is almost no narrative. So, this is a fine study aid, positively essential for anyone taking ISTQB-AL-TM. It's also a good textbook, although it's better to dip into than read through, and would be even better if it contained fewer defects: typos etc fall to the eye too readily, and those publishing books for testers should do more to convince readers that they are eating their own dog food. Whether it's a sourcebook for a test manager depends on to what extent he or she agrees with ISTQB, but it contains much valuable, accessible information and is undeniably a worthy addition to the genre.

The Little TMMi: Objective-Driven Test Process Improvement
by Erik van Veenendaal and Jan Jaap Cannegieter
UTN, ISBN 9789490986032
Available from http://www.utn.nl

As contributors to and enthusiasts for TMMi, the authors want it to be adopted by more test organizations. Their book aims to promote that and is deliberately compact in order to make the model more accessible. In fact 50 or so pages are the text of the model, but with the low-level detail removed: over 56,000 words cut down to about 13,000. That obviously makes it easier to digest, but the very stiff and general style remains. Explaining what it means (which is often far from obvious) in a less formal way, with examples, may have achieved the objective better. The last 20 pages are original: they describe assessments, then implementation using IDEAL. These sections are much easier to read and are worthwhile, but again are prescriptive rather than instructive. This book is a convenient way to learn what TMMi says we should do, but a practical, self-contained guide to tell us how is needed too.

BS 8878:2010 Web accessibility – Code of practice
BSI, ISBN 9780580626548
Available from http://bsigroup.com

This new standard has replaced PAS 78:2006 Guide to good practice in commissioning accessible websites and is very well written in a modern style, making it far more readable than that and other older standards such as those familiar to testers. It's an essential guide for anyone formulating new web strategy, such as a startup, and more experienced web application managers will appreciate having almost all the current advice in one place. For example, did you know that a United Nations convention (http://un.org/disabilities/convention/conventionfull.shtml) requires “products to be usable by all people to the greatest extent possible”? Other than mind-boggling but useless facts like that, however, web testers will find nothing new. We are told to create an accessibility test plan. Its obvious contents are described in four brief bullet points. The test methods mandated are markup validation, WAI conformity checking, executing tests without using a mouse and with assistive technologies, “expert reviews” (heuristic evaluations and walkthroughs) and observing representative users. Finally we are reminded to repeat accessibility testing when the site is updated. The best information in this document, such as the clear explanation of current legal and other obligations, business justification for good accessibility and discussion of the needs of people with different disabilities, has implications for testing but does not help to do it. After all these years, a usability and accessibility testing standard or at least textbook providing innovative, applicable test techniques is still sorely needed.
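Some of the methods the standard mandates, such as markup validation, are mechanical enough to automate today. As a sketch of one markup-level accessibility check (Python standard library only; a hypothetical mini-checker, not a WAI conformance tool), flag images that lack usable alternative text:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags whose alt attribute is missing or blank.

    Note: an empty alt can be legitimate for purely decorative images,
    so each hit needs human review rather than automatic failure.
    """
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.violations.append(self.getpos())  # (line, column)

page = """<html><body>
<img src="logo.png" alt="Company logo">
<img src="decor.png">
<img src="chart.png" alt="">
</body></html>"""

checker = AltTextChecker()
checker.feed(page)
print(len(checker.violations))  # 2 images lack usable alt text
```

Checks like this cover only the mechanical fraction of accessibility; the expert reviews and user observation the standard also mandates remain manual.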





Selenium Simplified
by Alan Richardson
Compendium Developments, ISBN 9780956733214
Available from http://www.compendiumdev.co.uk and http://amazon.com

This is a tutorial to be followed by a learner with hands on a computer – the crashiest of courses imaginable, in not only Selenium but Java, XPath, CSS Selectors, JUnit and much more. Its direct, conversational style is fast-paced but very easy to follow, and well-explained code and screen shots are plentiful. It's highly accessible, easily achieving its goal of being suitable for almost anyone, even those with no previous coding or automation knowledge, and will be enjoyed just as much by the experienced. Windows is used throughout, but other than in the short sections on installation etc, the steps are similar under Mac or Linux. Whatever the OS, in parts where the test items and tools interact directly with it some readers might be in danger of getting stuck if something about their environment causes behaviour different to what the narrative expects. Apart from that, the only weaknesses result from the author's decision to self-publish. That's obviously to be encouraged, but the production is not yet good enough. The A4 format makes it too big and heavy for comfortable reading in the hands, yet it is perfect bound, so will not lie open on a desk, and graphics are printed in greyscale, reducing the readability of screenshots showing code being edited. Finally, no book should be judged by its cover, but especially not this one. Buying the e-book entitles one always to download newer versions, which have already addressed many of the language and layout defects in the printed copy we reviewed, so is easiest to recommend. Anyway, that's the best way to read it – onscreen, in colour, alongside the applications it teaches you to use. It looks fine on a handheld reader too. Whichever format you prefer, if you want to learn this stuff you must get this book – there's no better way, except being shown by the author in person. Testing, not only automated, needs many more truly practical books like this.

Thanks to its author we have one printed and three e-book copies of Selenium Simplified to give away. For your chance of getting one, send an email to books@professionaltester.com telling us what is your favourite testing book and why. The free books will go to the writers of the first four emails received.


15% off all SQS training for Professional Tester readers*

Shoot for success


All your testing requirements firmly under control

Not only does software testing save time and money; it also protects your good reputation.
Ensuring high quality in your IT projects is challenging and requires continuous training.

• Specialist training can solve your real-life testing and QA problems. Give your testing team the winning edge with best practices drawn from successful real-life projects!

• Learning how to best use industry-leading tools from vendors including HP and Microsoft will ensure you maximise your investment and increase your team's efficiency.

• For practitioners, our ISTQB® Certified Tester training series drives confidence and guarantees a secure platform to build success on.

• Our TrainingFLEX scheme will help you squeeze more value from your training budget and ensure success.

* Courses must be booked and taken by 30 April 2011, quote code PT1102 to qualify

Details of all SQS training services are available at www.sqs-uk.com/training
