
Basics Of Testing

Test Case Designing


What is testing?
What do we mean by the word testing? We use the words test and testing in everyday life.

With respect to software, testing could be described as “checking that the software is OK”.

In more detail, software testing can be stated as:


“The process of validating and verifying that a software program/application/product
meets the requirements that guided its design and development, and works as expected.”

Testing is a ‘Risk Management’ activity. It is carried out to reduce the risk
involved in a software implementation.
Why is testing necessary?
To answer this question we should ask ourselves some questions, like:
Does it matter if there are mistakes in what we do?
Does it matter if we don’t find those mistakes?

We know that in our normal life some of our mistakes do not matter, while some cause
big problems. So we need to find the mistakes that matter. It is the same with software
systems.

Testing is necessary because we all make mistakes. The complex nature of the software built
today, and the speed with which new software is required to be available, inevitably lead to
mistakes. We need to check anything and everything we produce, because things can always go
wrong. Humans make mistakes all the time – it is what we do best!

Testing is necessary to measure & improve the quality of the product.


Testing principles
The following principles offer general guidelines common to all types of testing:

• Testing is context dependent: [ Software system context ]


Testing is done differently in different contexts. For example, safety critical software is
tested differently from an e-commerce site.

• Exhaustive testing is difficult: [ How much testing is enough? ]


Testing everything (all combinations of inputs & preconditions) is not feasible except
for trivial cases. Instead of exhaustive testing, we use risk & priorities to focus our
testing efforts.
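
As a rough, hypothetical illustration of the scale involved (the two-integer function and the throughput figure below are invented purely for the arithmetic), even a small function has far too many input combinations to test exhaustively:

    # Hypothetical arithmetic: input combinations for a function that
    # takes two 32-bit integer parameters.
    values_per_int = 2 ** 32               # possible values of one 32-bit integer
    combinations = values_per_int ** 2     # all pairs of inputs: 2**64

    # Even at a billion tests per second, exhaustive testing is hopeless.
    seconds = combinations / 1_000_000_000
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{combinations} combinations, about {years:.0f} years at 1e9 tests/sec")

Running this prints roughly 585 years, which is why risk and priorities, not exhaustiveness, must drive the test effort.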

• Early testing:
Testing activities should start as early as possible in the software development life cycle.
• Pesticide paradox: [ Defect clusters change over time ]
If the same tests are repeated over & over again, eventually the same set of test cases will
no longer find any new bugs. To overcome the ‘pesticide paradox’, the test cases need to
be regularly reviewed and revised, and new & different tests need to be written to exercise
different parts of the software or system to potentially find more defects.

• Testing shows presence of defects: [ Is the software defect free? ]


Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even
if no defects are found, it is not a proof of correctness.

• Absence of errors fallacy: [ If we don’t find defects does that mean the users will accept the
software?]
Finding & fixing defects does not help if the system built is unusable and does not fulfil
the user’s needs and expectations.
Test process
The following steps make up the test process:

• Test Planning
• Test Analysis & Specification
• Test Execution
• Test Recording (Verification)
• Checking for completion

• Test Planning:
The test plan outlines the approach to testing, the scope of testing, the test
stages, the entry & exit criteria, the environment requirements, the staffing &
training needs, and the schedule for testing.
• Test Analysis and Specification:
This includes analyzing the test requirements and designing test cases. The main objective in
test design is to create detailed test scripts that exercise the test criteria efficiently,
achieving maximum coverage with the minimum number of test cases and the minimum
amount of test data.
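
As a sketch of this objective, equivalence partitioning is one common way to get coverage from few cases: one representative value stands in for a whole class of inputs. The discount rule below is a hypothetical example, not taken from any specification in this document:

    # Hypothetical system under test: 10% discount for orders of 100 or more.
    def discounted_price(amount):
        return amount * 0.9 if amount >= 100 else amount

    # One test case per equivalence partition: maximum coverage with the
    # minimum number of test cases and minimum test data.
    def test_below_threshold_partition():
        assert discounted_price(50) == 50        # any amount < 100 behaves alike

    def test_at_or_above_threshold_partition():
        assert discounted_price(200) == 180.0    # any amount >= 100 behaves alike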

• Test Execution:
The test execution schedule outlines the sequence in which the tests are to be applied. The
sequence depends on priority and on the natural order in which the tests should be carried out.
Before test execution, the test environment needs to be set up and ready for testing, satisfying
the standing data requirements identified during test planning.
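
A small sketch of such a schedule, ordering tests by priority first and then by the natural order in which they must be carried out (the test IDs and numbers here are invented):

    # Hypothetical execution schedule: lower priority number = more urgent.
    tests = [
        {"id": "TC-3", "priority": 2, "order": 1},   # must run before TC-5
        {"id": "TC-5", "priority": 2, "order": 2},
        {"id": "TC-1", "priority": 1, "order": 1},   # highest priority, runs first
    ]
    schedule = sorted(tests, key=lambda t: (t["priority"], t["order"]))
    print([t["id"] for t in schedule])               # ['TC-1', 'TC-3', 'TC-5']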
• Test Recording:
Once a test is executed, its results should be recorded – did it pass or fail? The actual
outcome should be compared against the expected outcome.
The test results for each test case should at least clearly record:
• The identities & versions of the software under test & the test specification
• Date & time of test execution
• Result of the test (pass/fail)
• The actual outcome
• Schedule for re-tests (in case of failure)
• Defect reference for failures
• Comments & observations
It should be possible to establish, by reference to the test records, that all of the specified
testing activities have been carried out.
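
A minimal sketch of such a test record as a data structure; the field names are illustrative, not a prescribed format:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class TestRecord:
        # One executed test case, capturing the details listed above.
        test_case_id: str
        software_version: str            # identity & version of software under test
        spec_version: str                # version of the test specification
        executed_at: datetime            # date & time of test execution
        result: str                      # "pass" or "fail"
        actual_outcome: str
        retest_scheduled: Optional[datetime] = None   # only set when the test failed
        defect_reference: Optional[str] = None        # only set for failures
        comments: List[str] = field(default_factory=list)  # comments & observations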

Verification – Testing is an iterative process: tests that fail need to be recorded, the
underlying defects corrected, and the tests re-run, repeatedly if necessary, until all the major
defects have been resolved, in line with the predefined exit criteria.
• Checking for test completion:
Test completion (or test exit) criteria are used to determine when testing is complete.
These criteria may be defined in terms of cost, time, faults found or coverage.

The test records should be checked against the previously specified test completion
criteria. If these criteria are not met, the test activity that must be repeated in order
to meet them should be identified, and the test process should be restarted from that
point. It may sometimes be necessary to repeat the test specification as well.
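
A small sketch of how such exit criteria might be checked mechanically; the thresholds here are invented for illustration, not recommended values:

    # Hypothetical exit-criteria check: testing is complete only when every
    # criterion agreed during test planning has been met.
    def testing_complete(coverage, open_major_defects, cost_spent, cost_limit):
        return (
            coverage >= 0.90                # e.g. 90% statement coverage required
            and open_major_defects == 0     # all major defects resolved
            and cost_spent <= cost_limit    # still within the agreed budget
        )

    print(testing_complete(coverage=0.95, open_major_defects=0,
                           cost_spent=800, cost_limit=1000))   # True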
Testing Techniques
Broadly there are two main categories, static and dynamic. Dynamic techniques are subdivided
into three more categories: specification-based (black-box), structure-based (white-box) and
experience-based.

Static testing techniques


Static testing techniques do not involve executing the code being examined and are
generally used before any tests are executed on the software. Static testing techniques are
primarily used early in the lifecycle. All deliverables, including code, can be tested
using these techniques. All these techniques find faults, and because they usually
find faults early, static test activities provide extremely good value for money. Static
techniques can improve both quality and productivity; all software organizations should
consider using reviews in all major aspects of their work, including requirements, design,
implementation, testing, and maintenance.

How can we evaluate or analyze a requirements document, a design document, or a user manual?
How can we effectively pre-examine the source code before execution? One powerful technique
that can be used is static testing, e.g. reviews. In principle, all software work products can be
tested using review techniques.

The use of static testing, e.g. reviews, on software work products has various advantages:

• Since static testing can start early in the life cycle, early feedback on quality issues can be
established, e.g. an early validation of user requirements, rather than only late in the life
cycle during acceptance testing.

• By detecting defects at an early stage, rework costs are kept relatively low, and thus the
quality of software products can be improved.

• Since rework effort is substantially reduced, development productivity figures are likely
to increase.

• The evaluation by a team has the additional advantage that there is an exchange of
information between the participants.

• Static tests contribute to an increased awareness of quality issues.

In conclusion, static testing is a very suitable method for improving the quality of software
products.
Dynamic testing techniques
Dynamic techniques, the traditional method of running tests by executing the software, are
appropriate for all stages where executable software components are available. Dynamic
techniques are subdivided into three more categories:
• specification-based (black-box)
• structure-based (white-box)
• experience-based

Specification-based (black-box) testing technique


This is also known as the 'black-box' or input/output-driven testing technique,
because testers view the software as a black box with inputs and outputs; they
have no knowledge of how the system or component is structured inside the box.
In essence, the tester concentrates on what the software does, not how it does it.
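
For example, a black-box test derives its inputs and expected outputs purely from the specification. The sketch below tests the standard library's calendar.isleap against the leap-year rule ("divisible by 4, except centuries not divisible by 400") without any knowledge of its implementation:

    # Black-box sketch: expected results come from the specification alone.
    from calendar import isleap   # the component under test

    def test_leap_year_against_specification():
        assert isleap(2004)          # divisible by 4
        assert not isleap(2001)      # not divisible by 4
        assert not isleap(1900)      # century not divisible by 400
        assert isleap(2000)          # century divisible by 400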
Structure-based (white-box) testing technique
Structure-based testing techniques use the internal structure of the software to
derive test cases. They are commonly called 'white-box' or 'glass-box' techniques
(implying you can see into the system) since they require knowledge of how the
software is implemented, i.e. how it works. For example, a structural technique
may be concerned with exercising loops in the software. Different test cases may
be derived to exercise the loop once, twice, and many times. This may be done
regardless of the functionality of the software.
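
A sketch of that loop example: three test cases derived by looking at the structure of a (hypothetical) summing function, so that its loop is executed zero times, once, and many times:

    # White-box sketch: test cases chosen from the code's structure,
    # not from its specification.
    def total_of(values):
        total = 0
        for v in values:        # the loop we want to exercise
            total += v
        return total

    def test_loop_executed_zero_times():
        assert total_of([]) == 0

    def test_loop_executed_once():
        assert total_of([5]) == 5

    def test_loop_executed_many_times():
        assert total_of([1, 2, 3, 4]) == 10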

Experience-based testing technique


In experience-based techniques, people's knowledge, skills and background are
prime contributors to the test conditions and test cases. The experience of both
technical and business people is important, as they bring different perspectives to
the test analysis and design process. Due to previous experience with similar
systems, they may have insights into what could go wrong, which is very useful
for testing.
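
As an illustration, an experienced tester might use error guessing: trying the kinds of input that have broken similar systems before. The parsing function below is hypothetical:

    # Experience-based sketch: inputs chosen from experience of what commonly
    # goes wrong -- empty strings, stray whitespace, non-numeric text.
    def parse_quantity(text):
        text = text.strip()
        if not text.isdigit():
            raise ValueError(f"not a quantity: {text!r}")
        return int(text)

    def test_error_guessing():
        assert parse_quantity(" 7 ") == 7        # whitespace often trips parsers
        for bad in ["", "abc", "-1", "1.5"]:     # classic troublemakers
            try:
                parse_quantity(bad)
                assert False, f"expected ValueError for {bad!r}"
            except ValueError:
                pass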
Where to apply these different categories of techniques

Specification-based technique is appropriate at all levels of testing (component testing through to
acceptance testing) where a specification exists. When performing system or acceptance
testing, the requirements specification or functional specification may form the basis of the tests.
When performing component or integration testing, a design document or low-level specification
forms the basis of the tests.

Structure-based technique can also be used at all levels of testing. Developers use structure-based
technique in component testing and component integration testing, especially where there is good
tool support for code coverage. Structure-based technique can also be used in system and
acceptance testing, but the structures are different. For example, the coverage of menu options or
major business transactions could be the structural element in system or acceptance testing.

Experience-based technique is used to complement specification-based and structure-based
techniques, and is also used when there is no specification, or if the specification is inadequate or
out of date.
Designing Test cases
Being able to write good test cases is an essential skill for every test analyst. It is a learned skill,
combining the theory of good test practices with on-the-job experience.

There are many benefits to being able to produce a set of test cases that will provide you and your
customers with confidence that the application has been well tested. Being able to write good test
scripts means that you can quickly and easily produce a set of tests which will cover the major
functions and expose bugs early in the testing process.

What follows is a set of guidelines for writing test scripts. Use these in combination with your own
knowledge and experience to enhance the scripts that you write on an everyday basis.
The guidelines are as follows (a sketch of a test case written to them appears after the list):

• The test case format should give the person running the test the best opportunity to run
it as efficiently and effectively as possible
• One test step should be identified with one expected result
• Expected Results add value to the test
• Test Cases should be easy to maintain
• Test cases should target functionality that has changed
• Test cases should check each different scenario just once
• Test data conditions should be kept to a minimum
• Each expected result should have a defined pass / fail
• Test case should have appropriate level of detail
• Test case can be traced to a specific requirement
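
As a sketch, here is how one such test case might be recorded: one expected result per step, a defined pass/fail check for each, traceable to a requirement, with minimal test data. The format, the requirement ID and the login scenario are all hypothetical:

    # Hypothetical test case following the guidelines above.
    test_case = {
        "id": "TC-042",
        "requirement": "REQ-LOGIN-3",     # traceable to a specific requirement
        "objective": "A valid user can log in",
        "test_data": {"user": "alice", "password": "s3cret"},   # kept minimal
        "steps": [
            # (action, expected result): one expected result per test step
            ("Open the login page",
             "The login form is displayed"),
            ("Enter the user name and password, then press Login",
             "The home page is displayed with the user's name in the header"),
        ],
    }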
Defect Raising & Tracking
When running a test, we might observe actual results that vary from expected results. This
is not a bad thing - one of the major goals of testing is to find problems. Different organizations
have different names to describe such situations. Commonly, they're called incidents, bugs, defects,
problems or issues.

When a tester tests an application and finds a defect, the life cycle of the defect
starts, and it becomes very important to communicate the defect to the developers in order
to get it fixed, to keep track of its current status, and to find out whether any similar defect
was found in previous rounds of testing. For this purpose, manually created documents can
be used, which need to be circulated to everyone associated with the project (developers,
testers, managers etc.); nowadays, however, many bug reporting tools are available which
help in tracking and managing bugs in an effective way.
It becomes necessary to log a defect in a proper way, track the defect, and keep a log of
defects for future reference. Following are some of the important details which should be provided
during defect logging or reporting:

• Project Name
• Bug ID
• Description
• Status
• Detected By
• Assigned To
• Date Detected
• Detected in Version
• Severity
• Priority
• Date of Closure, etc.

Along with these details, the expected and actual results, screenshots, error logs, etc. taken
at the time of test case execution should also be attached to the bug, for the developer's
reference.

After the bug is reported, it is assigned a status of 'New', which keeps changing as the
bug-fixing process progresses. As the tracking process is generally not automated, it becomes
important to keep the information on the bug up to date from the time it is raised until it is
closed.
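
A minimal sketch of such a defect record and its changing status; the field names and status values are illustrative, since every bug-reporting tool defines its own:

    from dataclasses import dataclass, field
    from typing import List, Optional

    STATUSES = ["New", "Assigned", "Fixed", "Retested", "Closed"]  # illustrative

    @dataclass
    class DefectReport:
        bug_id: str
        project_name: str
        description: str
        severity: str                          # e.g. "Major"
        priority: str                          # e.g. "High"
        detected_by: str
        detected_in_version: str
        date_detected: str
        assigned_to: Optional[str] = None
        date_of_closure: Optional[str] = None
        status: str = "New"                    # every new report starts as 'New'
        attachments: List[str] = field(default_factory=list)   # screenshots, logs

        def advance(self, new_status):
            # Update the status as the bug-fixing process progresses.
            assert new_status in STATUSES, f"unknown status: {new_status}"
            self.status = new_status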
Summary
In conclusion, remember that the topics discussed here are guidelines. Testing is a wide
and varied discipline, and what works in one situation may not be applicable to every
situation.

Use your analytical skills and your experience to judge how much detail is required in your
test scripts.

The aim here is not to minimize the amount of testing we do, but to get the most benefit
from what we do test.

By doing this we contribute to the overall success of the projects we work on and we add
greater value to the part we play in the software development lifecycle.
