
Software testing

Software testing is the process used to measure the quality of developed computer software.
Usually, quality is constrained to such topics as correctness, completeness, security, but can
also include more technical requirements as described under the ISO standard ISO 9126, such
as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing
is a process of technical investigation, performed on behalf of stakeholders, that is intended to
reveal quality-related information about the product with respect to the context in which it is
intended to operate. This includes, but is not limited to, the process of executing a program or
application with the intent of finding errors. Quality is not an absolute; it is value to some person.
With that in mind, testing can never completely establish the correctness of arbitrary computer
software; testing furnishes a criticism or comparison that compares the state and behaviour of
the product against a specification. An important point is that software testing should be
distinguished from the separate discipline of Software Quality Assurance (SQA), which
encompasses all business process areas, not just testing.
Today, software has grown in complexity and size, and the software product a developer builds is driven by a System Requirement Specification. Every software product has a target audience: a video game, for example, has a completely different audience from banking software. Therefore, when an organization invests large sums in making a software product, it must ensure that the product is acceptable to the end users, its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is a fully dedicated discipline of evaluating the quality of the software.
There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following routine
procedure. One definition of testing is "the process of questioning a product in order to evaluate
it", where the "questions" are operations the tester attempts to execute with the product, and the
product answers with its behavior in reaction to the probing of the tester. Although most of the
intellectual processes of testing are nearly identical to that of review or inspection, the word
testing is also used to connote the dynamic analysis of the product—putting the product through
its paces. Sometimes one therefore refers to reviews, walkthroughs or inspections as "static
testing", whereas actually running the program with a given set of test cases in a given
development stage is often referred to as "dynamic testing", to emphasize the fact that formal
review processes form part of the overall testing scope.
Introduction

In general, software engineers distinguish software faults from software failures. In case of a
failure, the software does not do what the user expects. A fault is a programming error that may
or may not actually manifest as a failure. A fault can also be described as an error in the correctness of the semantics of a computer program. A fault will become a failure if the exact
computation conditions are met, one of them being that the faulty portion of computer software
executes on the CPU. A fault can also turn into a failure when the software is ported to a
different hardware platform or a different compiler, or when the software gets extended.
Software testing may be viewed as a sub-field of Software Quality Assurance but typically exists
independently (and there may be no SQA areas in some companies). In SQA, software process
specialists and auditors take a broader view on software and its development. They examine
and change the software engineering process itself to reduce the number of faults that end up in the delivered code, or to deliver faster.
Regardless of the methods used or level of formality involved, the desired result of testing is a
level of confidence in the software so that the organization is confident that the software has an
acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the
software. An arcade video game designed to simulate flying an airplane would presumably have
a much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product can be very
large, and the number of configurations of the product larger still. Bugs that occur infrequently
are difficult to find in testing. A rule of thumb is that a system that is expected to function without
faults for a certain length of time must have already been tested for at least that length of time.
This has severe consequences for projects to write long-lived reliable software, since it is not
usually commercially viable to test over the proposed length of time unless this is a relatively
short period. A few days or a week would normally be acceptable, but any longer period would
usually have to be simulated according to carefully prescribed start and end conditions.
A common practice of software testing is that it is performed by an independent group of testers
after the functionality is developed but before it is shipped to the customer. This practice often
results in the testing phase being used as project buffer to compensate for project delays,
thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it as an ongoing process until the project finishes.
This is highly problematic in terms of controlling changes to software: if faults or failures are
found part way into the project, the decision to correct the software needs to be taken on the
basis of whether or not these defects will delay the remainder of the project. If the software does
need correction, this needs to be rigorously controlled using a version numbering system, and
software testers need to be accurate in knowing that they are testing the correct version, and
will need to re-test the part of the software wherein the defects were found. The correct start
point needs to be identified for retesting. There are added risks in that new defects may be
introduced as part of the corrections, and the original requirement can also change part way
through, in which instance previous successful tests may no longer meet the requirement and
will need to be re-specified and redone (part of regression testing). Clearly the possibilities for
projects being delayed and running over budget are significant.
Another common practice is for test suites to be developed during technical support escalation
procedures. Such tests are then maintained in regression testing suites to ensure that future
updates to the software don't repeat any of the known mistakes.
It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is
reasonable based on the risk of any given defect contributing to or being confused with further
defects later in the system or process. In particular, if a defect erroneously changes the state of
the data on which the software is operating, that data is no longer reliable and therefore any
testing after that point cannot be relied on even if there are no further actual software defects.
                    Time Detected [1]
Time Introduced     Requirements   Architecture   Construction   System Test   Post-Release
Requirements        1              3              5-10           10            10-100
Architecture        -              1              10             15            25-100
Construction        -              -              1              10            10-25

In counterpoint, some emerging software disciplines such as extreme programming and the
agile software development movement, adhere to a "test-driven software development" model.
In this process unit tests are written first, by the software engineers (often with pair
programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test
suites. The test suites are continuously updated as new failure conditions and corner cases are
discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and generally
integrated into the build process (with inherently interactive tests being relegated to a partially
manual build acceptance process).
The software, tools, samples of data input and output, and configurations are all referred to
collectively as a test harness.
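As a minimal illustration of the test-first style described above, the sketch below (in Python, using the standard unittest module; the shopping-cart class and its methods are hypothetical, not taken from this text) shows a unit test that would be written before the code it exercises and would fail until the corresponding behaviour exists:

    import unittest

    class ShoppingCart:
        """Minimal implementation, grown only far enough to satisfy the tests below."""
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    class TestShoppingCart(unittest.TestCase):
        def test_total_of_empty_cart_is_zero(self):
            self.assertEqual(ShoppingCart().total(), 0)

        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("book", 12.50)
            cart.add("pen", 1.25)
            self.assertAlmostEqual(cart.total(), 13.75)

    if __name__ == "__main__":
        unittest.main()

In a test-driven cycle the TestShoppingCart class is committed first, the build shows it failing, and the implementation is then extended until the suite passes; the tests stay in the build afterwards as regression tests.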
Software Testing Axioms

1. It is impossible to test a program completely.
2. Software testing is a risk-based exercise.
3. Testing cannot show that bugs don't exist.
4. The more bugs you find, the more bugs there are.
5. Not all the bugs you find will be fixed.
6. Product specifications are never final.

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. [2]
Although his attention was on breakage testing it illustrated the desire of the software
engineering community to separate fundamental development activities, such as debugging,
from that of verification. Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases
and goals in software testing as follows:[3]
Until 1956 was the debugging-oriented period, when testing was often associated with debugging: there was no clear difference between testing and debugging. From 1957 to 1978 came the demonstration-oriented period, in which debugging and testing were now distinguished; in this period the aim was to show that the software satisfies its requirements. The time between 1979 and 1982 is described as the destruction-oriented period, where the goal was to find errors. 1983 to 1987 is classified as the evaluation-oriented period: the intention here is that during the software lifecycle a product evaluation is provided and quality is measured. From 1988 on it was seen as the prevention-oriented period, where tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.
Dr. Gelperin chaired the IEEE 829-1989 (Test Documentation Standard) effort, with Dr. Hetzel writing the book The Complete Guide to Software Testing. Both works were pivotal to today's testing culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven additive.
SDLC : Software Development Life Cycle
The following are the activities of the SDLC:
1) System engineering and modeling
2) Software requirements analysis
3) Systems analysis and design
4) Code generation
5) Testing
6) Deployment and maintenance
System Engineering and Modeling
In this process we have to identify the project's requirements and the main features proposed for the application. The development team visits the customer and studies their system, investigating the need for possible software automation in the given system. By the end of this investigation, the team writes a document that holds the specifications for the customer's system.
Software Requirement Analysis
In software requirements analysis, the requirements for the proposed system are analysed first. To understand the nature of the program to be built, the system engineer must understand the information domain for the software, as well as the required functions, performance and interfacing. From the available information the system engineer develops a list of actors, use cases and system-level requirements for the project. With the help of key users, the list of use cases and requirements is reviewed, refined and updated in an iterative fashion until the user is satisfied that it represents the essence of the proposed system.
Systems analysis and design
Design is the process of deciding exactly how the specifications are to be implemented. It defines specifically how the software is to be written, including an object model with properties and methods for each object, the client/server technology, the number of tiers needed for the package architecture, and a detailed database design. Analysis and design are very important in the whole development cycle; any glitch in the design can be very expensive to fix at a later stage of development.
Code generation
The design must be translated into a machine-readable form; the code generation step performs this task. The development phase involves the actual coding of the entire application. If the design has been worked out in detail, code generation can be accomplished without much complication. Programming tools such as compilers and interpreters for languages like C, C++ and Java are used for coding, and the right programming language is chosen with respect to the type of application.
Testing
After the coding, program testing begins. Different methods are available to detect the errors in the code, and some companies have developed their own testing tools.
Deployment and Maintenance
Deployment and maintenance is a staged roll-out of the new application; it involves installation and initial training, and may involve hardware and network upgrades. Software will certainly undergo change once it is delivered to the customer, and there are many reasons for such change. Changes can happen because of unexpected input values into the system; in addition, changes in the surrounding system can directly affect the software's operation. The software should therefore be developed to accommodate changes that may arise during the post-implementation period.
Life Cycle of Testing Process

This section explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once the project is confirmed to start, its development can be divided into the following phases:
• Software requirements phase.
• Software Design
• Implementation
• Testing
• Maintenance
In the whole development process, testing consumes the largest amount of time, but many developers overlook this and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.
The various phases involved in testing, with regard to the software development life cycle are:
1. Requirements stage
2. Test Plan
3. Test Design.
4. Design Reviews
5. Code Reviews
6. Test Cases preparation.
7. Test Execution
8. Test Reports.
9. Bugs Reporting
10. Reworking on patches.
11. Release to production.
Requirements Stage
Normally, in many companies, only developers take part in the requirements stage. Especially for product-based companies, a tester should also be involved in this stage, since a tester thinks from the user's side in a way a developer may not. A separate panel should be formed for each module, comprising a developer, a tester and a user, and panel meetings should be scheduled in order to gather everyone's view. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".
Test Plan
Without a good plan, no work succeeds, and the testing of software is no exception. The test plan document is the most important document for bringing in a process-oriented approach, and it should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:

• Total number of features to be tested.
• Testing approaches to be followed.
• The testing methodologies.
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc.
Test Design
Test design is done based on the requirements of the project, and tests have to be designed according to whether manual or automated testing will be done. For automated testing, the different paths through the application are identified first, and an end-to-end checklist is prepared covering all the features of the project.

The test design is usually represented diagrammatically and involves several stages, which can be summarized as follows:
• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.
• Then the design is drawn.

The test design is the most critical step, since it determines the test case preparation; the quality of the test design therefore largely determines the quality of the testing process.
Test Cases Preparation
Test cases should be prepared based on the following scenarios:

• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
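
A small sketch of how these scenarios translate into concrete test cases, here for a hypothetical age-validation function (Python, unittest; the valid range of 18 to 60 is an assumed requirement used only for illustration):

    import unittest

    def is_valid_age(age):
        """Hypothetical rule under test: age must be an integer from 18 to 60 inclusive."""
        return isinstance(age, int) and 18 <= age <= 60

    class TestAgeValidation(unittest.TestCase):
        def test_positive_scenario(self):
            self.assertTrue(is_valid_age(30))        # typical valid input

        def test_negative_scenario(self):
            self.assertFalse(is_valid_age(-5))       # clearly invalid input
            self.assertFalse(is_valid_age("thirty")) # wrong type

        def test_boundary_conditions(self):
            self.assertTrue(is_valid_age(18))        # lower boundary
            self.assertTrue(is_valid_age(60))        # upper boundary
            self.assertFalse(is_valid_age(17))       # just below the boundary
            self.assertFalse(is_valid_age(61))       # just above the boundary

    if __name__ == "__main__":
        unittest.main()

Real-world scenarios would add cases drawn from how actual users exercise the feature, for example ages typed with surrounding spaces or copied from another form.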

Design Reviews
The software design is done in a systematic manner, often using the UML language. The tester can review the design and suggest ideas and any modifications needed.
Code Reviews
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, with his own unit test cases. Though a developer does the unit testing, a tester should do it as well: developers may overlook some of the minute mistakes in the code which a tester can find.
Test Execution and Bugs Reporting
Once the unit testing is completed and the code is released to QA, the functional testing is
done. A top-level testing is done at the beginning of the testing to find out the top-level failures. If
any top-level failures occur, the bugs should be reported to the developer immediately to get the
required workaround.

The test reports should be documented properly and the bugs have to be reported to the
developer after the testing is completed.
Release to Production
Once the bugs are fixed, another release is given to QA with the modified changes, and regression testing is executed. Once QA approves the software, it is released to production; before releasing to production, another round of top-level testing is done.

The testing process is an iterative process. Once the bugs are fixed, the testing has to be done
repeatedly. Thus the testing process is an unending process.

Bug Life Cycle:


Introduction:
A bug can be defined as abnormal behaviour of the software. No software exists without bugs; the elimination of bugs from the software depends on the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application under Test (AUT).
Bug Life Cycle:
In the software development process, a bug has a life cycle, and it should go through that life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states as it moves through the life cycle.

The different states of a bug can be summarized as follows:


1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:
1. New: When the bug is posted for the first time, its state is "NEW". This means that the bug has not yet been approved.
2. Open: After a tester has posted a bug, the test lead approves that the bug is genuine and changes the state to "OPEN".
3. Assign: Once the lead has changed the state to "OPEN", he assigns the bug to the corresponding developer or developer team, and the state of the bug is changed to "ASSIGN".
4. Test: Once the developer fixes the bug, he has to hand it back to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST", which specifies that the fix has been made and released to the testing team.
5. Deferred: A bug moved to the deferred state is expected to be fixed in a later release. Many factors can lead to this decision: the priority of the bug may be low, there may be a lack of time before the release, or the bug may have no major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects it, and the state of the bug is changed to "REJECTED".
7. Duplicate: If the bug is reported twice, or two bugs describe the same underlying problem, then one bug's status is changed to "DUPLICATE".
8. Verified: Once the bug is fixed and the status has been changed to "TEST", the tester retests it. If the bug is no longer present in the software, he confirms that it is fixed and changes the status to "VERIFIED".
9. Reopened: If the bug still exists even after the developer's fix, the tester changes the status to "REOPENED" and the bug traverses the life cycle once again.
10. Closed: Once the bug is fixed and retested by the tester, and the tester is satisfied that it no longer exists in the software, the status of the bug is changed to "CLOSED". This state means that the bug is fixed, tested and approved.
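
The state names above map naturally onto a small state machine. The sketch below (Python; the exact transition rules are one reasonable reading of the stage descriptions above, not a standard) shows one way a defect tracker might encode them:

    from enum import Enum

    class BugState(Enum):
        NEW = "new"
        OPEN = "open"
        ASSIGN = "assign"
        TEST = "test"
        VERIFIED = "verified"
        DEFERRED = "deferred"
        REOPENED = "reopened"
        DUPLICATE = "duplicate"
        REJECTED = "rejected"
        CLOSED = "closed"

    # Allowed transitions, following the stage descriptions above (illustrative).
    ALLOWED = {
        BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
        BugState.OPEN: {BugState.ASSIGN, BugState.DUPLICATE},
        BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED, BugState.REJECTED},
        BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
        BugState.VERIFIED: {BugState.CLOSED},
        BugState.REOPENED: {BugState.ASSIGN},
        BugState.DEFERRED: {BugState.ASSIGN},
    }

    def move(current, target):
        """Raise if the requested transition is not allowed by the life cycle."""
        if target not in ALLOWED.get(current, set()):
            raise ValueError(f"cannot move bug from {current.name} to {target.name}")
        return target

For example, move(BugState.TEST, BugState.VERIFIED) succeeds, while move(BugState.NEW, BugState.CLOSED) raises ValueError, which is exactly the kind of rule a tracking tool enforces so that every bug goes through the life cycle before it is closed.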
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines on deciding the Severity of Bug:
Indicate the impact each defect has on testing efforts or users and administrators of the
application under test. This information is used by developers and management as the basis for
assigning priority of work on defects.
A sample guideline for assignment of Priority Levels during the product test phase includes:
1. Critical / Show Stopper: An item that prevents further testing of the product or function under test can be classified as a critical bug, and no workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
2. Major / High: A defect that does not function as expected or designed, or that causes other functionality to fail to meet requirements, can be classified as a major bug; a workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3. Average / Medium: Defects that do not conform to standards and conventions can be classified as medium bugs; easy workarounds exist to achieve the functionality objectives. Examples include matching visual and text links which lead to different end points.
4. Minor / Low: Cosmetic defects which do not affect the functionality of the system can be classified as minor bugs.
Guidelines on writing Bug Description:
A bug can be expressed as "result followed by the action": the unexpected behaviour that occurs when a particular action takes place is given as the bug description.
1. Be specific. State the expected behaviour which did not occur (such as a pop-up that did not appear) and the behaviour which occurred instead.
2. Use the present tense.
3. Don't use unnecessary words.
4. Don't add exclamation points. End sentences with a period.
5. DON'T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Always mention the steps to reproduce the bug.

White box, black box, and grey box testing

White box and black box testing are terms used to describe the point of view that a test
engineer takes when designing test cases. Black box testing treats the software as a black-box
without any understanding as to how the internals behave. Thus, the tester inputs data and only
sees the output from the test object. This level of testing usually requires thorough test cases to
be provided to the tester who then can simply verify that for a given input, the output value (or
behavior), is the same as the expected value specified in the test case.
White box testing, however, is when the tester has access to the internal data structures, code,
and algorithms. For this reason, unit testing and debugging can be classified as white-box
testing and it usually requires writing code, or at a minimum, stepping through it, and thus
requires more skill than the black-box tester. If the software in test is an interface or API of any
sort, white-box testing is almost always required.
In recent years the term grey box testing has come into common usage. This involves having
access to internal data structures and algorithms for purposes of designing the test cases, but
testing at the user, or black-box level. Manipulating input data and formatting output do not
qualify as grey-box because the input and output are clearly outside of the black-box we are
calling the software under test. This is particularly important when conducting integration testing
between two modules of code written by two different developers, where only the interfaces are
exposed for test.
Grey box testing could be used, for example, when testing a client-server application: the tester controls the input, inspects the resulting value in an SQL database, and observes the output value, then compares all three (the input, the stored SQL value, and the output) to determine whether the data was corrupted on database insertion or retrieval.
Grey box testing is thus a combination of black box and white box testing. The intention of this testing is to find defects caused by bad design or bad implementation of the system: the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.
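
A minimal sketch of the client-server scenario just described, using Python's built-in sqlite3 module as a stand-in for the real database (the save_user function and the users table are hypothetical):

    import sqlite3

    def save_user(conn, name):
        """Hypothetical code under test: persists a user and returns its new id."""
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()
        return cur.lastrowid

    def test_grey_box_round_trip():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

        user_input = "alice"                      # 1. controlled input
        new_id = save_user(conn, user_input)      #    exercised through the public interface

        row = conn.execute("SELECT name FROM users WHERE id = ?", (new_id,)).fetchone()
        stored_value = row[0]                     # 2. value inspected directly in the database

        assert stored_value == user_input         # 3. input, stored value and output must agree
        assert isinstance(new_id, int)

    if __name__ == "__main__":
        test_grey_box_round_trip()
        print("grey-box round trip ok")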

Verification and validation

Software testing is used in association with verification and validation (V&V). Verification is the
checking of or testing of items, including software, for conformance and consistency with an
associated specification. Software testing is just one kind of verification, which also uses
techniques such as reviews, inspections, and walkthroughs. Validation is the process of
checking what has been specified is what the user actually wanted.
• Verification: Have we built the software right? (i.e. does it match the specification).
• Validation: Have we built the right software? (i.e. Is this what the customer wants?)
Levels of testing

• Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
• Integration testing exposes defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested until the
software works as a system.
• Functional testing tests at any level (class, module, interface, or system) for proper
functionality as defined in the specification.
• System testing tests a completely integrated system to verify that it meets its
requirements.
• System integration testing verifies that a system is integrated to any external or third
party systems defined in the system requirements.
• Acceptance testing can be conducted by the end-user, customer, or client to validate
whether or not to accept the product. Acceptance testing may be performed as part of
the hand-off process between any two phases of development.
o Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site. Alpha
testing is often employed for off-the-shelf software as a form of internal
acceptance testing, before the software goes to beta testing.
o Beta testing comes after alpha testing. Versions of the software, known as
beta versions, are released to a limited audience outside of the company. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to
the open public to increase the feedback field to a maximal number of future
users.
It should be noted that although both Alpha and Beta are referred to as testing it is in fact use
immersion. The rigors that are applied are often unsystematic and many of the basic tenets of
testing process are not used. The Alpha and Beta period provides insight into environmental
and utilization conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects, a regression test
re-runs previously passing tests on the modified software to ensure that the modifications
haven't unintentionally caused a regression of previous functionality. Regression testing can be
performed at any or all of the above test levels. These regression tests are often automated.

Test cases, suites, scripts, and scenarios

A test case is a software testing document, which consists of event, action, input, output,
expected result, and actual result. Clinically defined (IEEE 829-1998) a test case is an input and
an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate
test procedure that can be exercised against multiple test cases, as a matter of economy) but
with one expected result or expected outcome. The optional fields are a test case ID, test step
or order of execution number, related requirement(s), depth, test category, author, and check
boxes for whether the test is automatable and has been automated. Larger test cases may also
contain prerequisite states or steps, and descriptions. A test case should also contain a place for
the actual result. These steps can be stored in a word processor document, spreadsheet,
database, or other common repository. In a database system, you may also be able to see past
test results and who generated the results and the system configuration used to generate those
results. These past results would usually be stored in a separate table.
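
One simple way to hold the fields listed above in a repository is a small record type. The sketch below (Python dataclass) mirrors that description; the field names and example values are illustrative only:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestCase:
        case_id: str                           # optional identifier, e.g. "TC-042"
        related_requirements: List[str]        # requirement ids this case traces to
        preconditions: str                     # prerequisite state or steps
        steps: List[str]                       # actions / inputs, in execution order
        expected_result: str                   # what the specification says should happen
        actual_result: Optional[str] = None    # filled in during execution
        automated: bool = False                # check box: has this case been automated?

    tc = TestCase(
        case_id="TC-001",
        related_requirements=["REQ-12"],
        preconditions="user is logged in",
        steps=["open the reports page", "export as CSV"],
        expected_result="a CSV file with one row per report is downloaded",
    )

Stored as rows in a spreadsheet or database table, the same fields support the history queries mentioned above: past results, who ran them, and on which system configuration.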
The term test script is the combination of a test case, test procedure, and test data. Initially the
term was derived from the product of work created by automated regression test tools. Today,
test scripts can be manual, automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often also
contains more detailed instructions or goals for each collection of test cases. It definitely
contains a section where the tester identifies the system configuration used during testing. A
group of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be
called a test specification. If sequence is specified, it can be called a test script, scenario, or
procedure.

A sample testing cycle


Although testing varies between organizations, there is a cycle to testing:
1. Requirements Analysis: Testing should begin in the requirements phase of the software
development life cycle.
During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests will work.
2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
A lot of activities will be carried out during testing, so a plan is needed.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in
testing software.
4. Test Execution: Testers execute the software based on the plans and tests and report
any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for release.
6. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team. Some may be
caused by errors in configuring the test software to match the development or production
environment. Some defects can be handled by a workaround in the production environment.
Others might be deferred to future releases of the software, or the deficiency might be accepted
by the business user. There are yet other defects that may be rejected by the development team (with due reason, of course) if they deem the report invalid.

Code coverage



Code coverage is inherently a white box testing activity. The target software is built with special
options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a
system that are rarely or never accessed under normal conditions (error handling and the like)
and helps reassure test engineers that the most important conditions (function points) have
been tested.
Test engineers can look at code coverage test results to help them devise test cases and input
or configuration sets that will increase the code coverage over vital functions. Two common
forms of code coverage used by testers are statement (or line) coverage, and path (or edge)
coverage. Line coverage reports on the execution footprint of testing in terms of which lines of
code were executed to complete the test. Edge coverage reports which branches, or code
decision points were executed to complete the test. They both report a coverage metric,
measured as a percentage.
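
The difference between the two metrics is easiest to see on a tiny example. In the sketch below (Python; the function and test are invented for illustration), a single test executes every line of the function yet takes only one of the two branches of the if statement:

    def discount(price, is_member):
        rate = 0.0
        if is_member:          # branch point: taken / not taken
            rate = 0.1
        return price * (1 - rate)

    def test_member_discount():
        # Executes every line of discount(), so statement (line) coverage is 100%,
        # but the "is_member is False" branch is never taken,
        # so branch (edge) coverage is incomplete.
        assert abs(discount(100, True) - 90.0) < 1e-9

Run under a coverage tool with branch measurement enabled (for example, with coverage.py: coverage run --branch -m pytest, followed by coverage report), the report would flag the missing False branch even though every statement has been executed.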
Generally code coverage tools and libraries exact a performance and/or memory or other
resource cost which is unacceptable to normal operations of the software. Thus they are only
used in the lab. As one might expect there are classes of software that cannot be feasibly
subjected to these coverage tests, though a degree of coverage mapping can be approximated
through analysis rather than direct testing.
There are also some sorts of defects which are affected by such tools. In particular some race
conditions or similar real time sensitive operations can be masked when run under code
coverage environments; and conversely some of these defects may become easier to find as a
result of the additional overhead of the testing code.
Code coverage may be regarded as a more up-to-date incarnation of debugging in that the
automated tools used to achieve statement and path coverage are often referred to as
“debugging utilities”. These tools allow the program code under test to be observed on screen
whilst the program is executing, and commands and keyboard function keys are available to
allow the code to be “stepped” through literally line by line. Alternatively it is possible to define
pinpointed lines of code as “breakpoints” which will allow a large section of the code to be
executed, then stopping at that point and displaying that part of the program on screen. Judging
where to put breakpoints is based on a reasonable understanding of the program indicating that
a particular defect is thought to exist around that point. The data values held in program
variables can also be examined and in some instances (with care) altered to try out “what if”
scenarios. Clearly use of a debugging tool is more the domain of the software engineer at a unit
test level, and it is more likely that the software tester will ask the software engineer to perform
this. However, it is useful for the tester to understand the concept of a debugging tool.

Controversy

There is considerable controversy among testing writers and consultants about what constitutes
responsible software testing. Members of the "context-driven" school of testing believe that
there are no "best practices" of testing, but rather that testing is a set of skills that allow the
tester to select or invent testing practices to suit each unique situation. In addition, prominent
members of the community consider much of the writing about software testing to be doctrine,
mythology, and folklore. Some might contend that this belief directly contradicts standards such
as the IEEE 829 test documentation standard, and organizations such as the Food and Drug
Administration who promote them. The context-driven school's retort is that Lessons Learned in
Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that
not all software testing occurs in a regulated environment and that practices appropriate for
such environments would be ruinously expensive, unnecessary, and inappropriate for other
contexts; and that in any case the FDA generally promotes the principle of the least
burdensome approach.
Some of the major controversies include:
Agile vs. traditional
Starting around 1990, a new style of writing about testing began to challenge what had come
before. The seminal work in this regard is widely considered to be Testing Computer Software,
by Cem Kaner.[4] Instead of assuming that testers have full access to source code and complete
specifications, these writers, including Kaner and James Bach, argued that testers must learn to
work under conditions of uncertainty and constant change. Meanwhile, an opposing trend
toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The
agile testing movement (which includes but is not limited to forms of testing practiced on agile
development projects) has popularity mainly in commercial circles, whereas the CMM was
embraced by government and military software providers.
However, saying that "maturity models" like CMM gained ground against or in opposition to Agile testing may not be right: the Agile movement is a 'way of working', while the CMM is a process improvement idea.
But another point of view must be considered: the operational culture of an organization. While it
may be true that testers must have an ability to work in a world of uncertainty, it is also true that
their flexibility must have direction. In many cases test cultures are self-directed and as a result
fruitless; unproductive results can ensue. Furthermore, providing positive evidence of defects
may either indicate that you have found the tip of a much larger problem, or that you have
exhausted all possibilities. A framework is a test of Testing. It provides a boundary that can
measure (validate) the capacity of our work. Both sides have argued, and will continue to argue, the virtues of their work. The proof, however, is in each and every assessment of delivery quality. It does little good to test systematically if you are too narrowly focused. On the other hand, finding a bunch of errors is not an indicator that Agile methods were the driving force; you may simply have stumbled upon an obviously poor piece of work.

Exploratory vs. scripted


Exploratory testing means simultaneous test design and test execution with an emphasis on
learning. Scripted testing means that learning and test design happen prior to test execution,
and quite often the learning has to be done again during test execution. Exploratory testing is
very common, but in most writing and training about testing it is barely mentioned and generally
misunderstood. Some writers consider it a primary and essential practice. Structured
exploratory testing is a compromise when the testers are familiar with the software. A vague test
plan, known as a test charter, is written up, describing what functionalities need to be tested but
not how, allowing the individual testers to choose the method and steps of testing.
There are two main disadvantages associated with a primarily exploratory testing approach. The
first is that there is no opportunity to prevent defects, which can happen when the designing of
tests in advance serves as a form of structured static testing that often reveals problems in
system requirements and design. The second is that, even with test charters, demonstrating test
coverage and achieving repeatability of tests using a purely exploratory testing approach is
difficult. For this reason, a blended approach of scripted and exploratory testing is often used to
reap the benefits while mitigating each approach's disadvantages.

Manual vs. automated


Some writers believe that test automation is so expensive relative to its value that it should be
used sparingly. Others, such as advocates of agile development, recommend automating 100%
of all tests. A challenge with automation is that automated testing requires automated test
oracles (an oracle is a mechanism or principle by which a problem in the software can be
recognised). Such tools have value in load testing software (by signing on to an application with
hundreds or thousands of instances simultaneously), or in checking for intermittent errors in
software. The success of automated software testing depends on complete and comprehensive
test planning. Software development strategies such as test-driven development are highly
compatible with the idea of devoting a large part of an organization's testing resources to
automated testing. Many large software organizations perform automated testing. Some have
developed their own automated testing environments specifically for internal development, and
not for resale.

Software design vs. software implementation


Software testers should not be limited to testing the software implementation; they should also test the software design. With this assumption, the role and involvement of testers will change dramatically, and the test cycle will change too. To test the software design, testers review the requirement and design specifications together with the designer and programmer, which helps to identify bugs earlier.

Certification

Several certification programs exist to support the professional aspirations of software testers
and quality assurance specialists. No certification currently offered actually requires the
applicant to demonstrate the ability to test software. No certification is based on a widely
accepted body of knowledge. No certification board decertifies individuals.
This has led some to declare that the testing field is not ready for certification. [5] Certification
itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot
guarantee their competence, or professionalism as a tester.[6]
Certifications can be grouped into exam-based and education-based. Exam-based certifications require passing an exam, for which candidates can also prepare by self-study: examples are the ISTQB and QAI certifications. Education-based certifications are instructor-led sessions in which each course has to be passed, e.g. those of the IIST (International Institute for Software Testing).

Testing certifications
• CSTE offered by the Quality Assurance Institute (QAI)
• CSTP offered by the International Institute for Software Testing
• CSTP (TM) (Australian Version) offered by the K. J. Ross & Associates
• CATe offered by the International Institute for Software Testing
• ISEB offered by the Information Systems Examinations Board
• ISTQB offered by the International Software Testing Qualification Board
Quality assurance certifications
• CSQE offered by the American Society for Quality (ASQ)
• CSQA offered by the Quality Assurance Institute (QAI)

Who watches the watchmen?

One principle in software testing is summed up by the classical Latin question posed by
Juvenal: Quis Custodiet Ipsos Custodes (Who watches the watchmen?), or is alternatively
referred to informally as the "Heisenbug" concept (a common misconception that confuses
Heisenberg's uncertainty principle with observer effect). The idea is that any form of observation
is also an interaction, that the act of testing can also affect that which is being tested.
In practical terms the test engineer is testing software (and sometimes hardware or firmware)
with other software (and hardware and firmware). The process can fail in ways that are not the
result of defects in the target but rather result from defects in (or indeed intended features of)
the testing tool.
There are metrics being developed to measure the effectiveness of testing. One method is by
analyzing code coverage (this is highly controversial) - where everyone can agree what areas
are not being covered at all and try to improve coverage in these areas.
Bugs can also be placed into code on purpose, and the number of bugs that have not been
found can be predicted based on the percentage of intentionally placed bugs that were found.
The problem is that it assumes that the intentional bugs are the same type of bug as the
unintentional ones.
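As a worked illustration of this seeding technique (the numbers are invented): if 20 bugs are intentionally seeded and testing finds 15 of them (75%) along with 60 real bugs, applying the same find rate to the real population suggests roughly 60 / 0.75 = 80 real bugs in total, i.e. about 20 still unfound. A sketch of the arithmetic:

    def estimate_remaining(seeded_total, seeded_found, real_found):
        """Estimate undiscovered real bugs from the seeded-bug find rate (illustrative only)."""
        if seeded_found == 0:
            raise ValueError("no seeded bugs found; the estimate is undefined")
        find_rate = seeded_found / seeded_total        # e.g. 15 / 20 = 0.75
        estimated_total = real_found / find_rate       # e.g. 60 / 0.75 = 80
        return estimated_total - real_found            # e.g. roughly 20 still hidden

    print(estimate_remaining(seeded_total=20, seeded_found=15, real_found=60))  # -> 20.0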
Finally, there is the analysis of historical find-rates. By measuring how many bugs are found and
comparing them to predicted numbers (based on past experience with similar projects), certain
assumptions regarding the effectiveness of testing can be made. While not an absolute
measurement of quality, if a project is halfway complete and there have been no defects found,
then changes may be needed to the procedures being employed by QA.

Roles in software testing

Software testing can be done by software testers. Until the 1950s the term software tester was
used generally, but later it was also seen as a separate profession. Following the periods and the different goals in software testing described by D. Gelperin and W.C. Hetzel, different roles have been established: test lead/manager, tester, test designer, test automator/automation developer, and test administrator.
Participants of testing team:
1. Tester
2. Developer
3. Business Analyst
4. Customer
5. Information Service Management
6. Test Manager
7. Senior Organization Management
8. Quality team

Software Testing Life Cycle

Software testing life cycle identifies what test activities to carry out and when (what is the best
time) to accomplish those test activities. Even though testing differs between organizations,
there is a testing life cycle.

Software Testing Life Cycle consists of the following (generic) phases:


• Test Planning,
• Test Analysis,
• Test Design,
• Construction and verification,
• Testing Cycles,
• Final Testing and Implementation and
• Post Implementation.
Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement of the software testing life cycle is to control and manage all of the testing: manual, automated and performance.
Test Planning
This is the phase where the project manager decides what needs to be tested and whether the available budget is appropriate. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no fixed end point.
Activities at this stage include preparation of a high-level test plan (according to the IEEE test plan template). The Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in this software test plan and revolve around a test plan.

Test Analysis
Once the test plan is made and agreed upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC, whether automation is needed or planned, and if so when the appropriate time to automate is, and what specific documentation is needed for testing.
Proper and regular meetings should be held between the testing teams, project managers, development teams and business analysts to check the progress of things; this gives a fair idea of the movement of the project and ensures the completeness of the test plan created in the planning phase, which further helps in refining the testing strategy created earlier. We also start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define areas for stress and performance testing.
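
The functional validation matrix mentioned here is essentially a requirement-to-test-case mapping. A minimal sketch (Python; the requirement and test case ids are made up) of how it can be kept and checked for gaps:

    # Requirement id -> test cases that exercise it (illustrative data).
    validation_matrix = {
        "REQ-01 user login": ["TC-001", "TC-002"],
        "REQ-02 password reset": ["TC-003"],
        "REQ-03 audit logging": [],          # not yet covered by any test case
    }

    def uncovered_requirements(matrix):
        """Return requirements that no test case covers; the goal is an empty list."""
        return [req for req, cases in matrix.items() if not cases]

    print(uncovered_requirements(validation_matrix))   # -> ['REQ-03 audit logging']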

Test Design
Test plans and test cases developed in the analysis phase are revised, and the functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If automation is planned, the test cases to automate are selected and script writing for them begins. Test data is prepared, and standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Construction and verification


In this phase we have to complete all the test plans and test cases, finish the scripting of the automated test cases, and complete the stress and performance testing plans. We also have to support the development team in their unit testing phase, and bugs are of course reported as and when they are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles
In this phase we have to complete testing cycles until test cases are executed without errors or
a predefined condition is reached. Run test cases --> Report Bugs --> revise test cases (if
needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle
3….)

Final Testing and Implementation


In this phase we execute the remaining stress and performance test cases, complete and update the testing documentation, and provide and complete the different metrics for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.
Post Implementation
In this phase, the testing process is evaluated and the lessons learnt from it are documented. A line of attack to prevent similar problems in future projects is identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done and test machines are restored to their baselines in this stage.
Software Testing Life Cycle

Phase                 Activities                                        Outcome

Planning              Create high-level test plan                       Test plan, refined specification

Analysis              Create detailed test plan, functional             Revised test plan, functional validation
                      validation matrix, test cases                     matrix, test cases

Design                Test cases are revised; select which              Revised test cases, test data sets,
                      test cases to automate                            risk assessment sheet

Construction          Scripting of the test cases to automate           Test procedures/scripts, drivers,
                                                                        test results, bug reports

Testing cycles        Complete testing cycles                           Test results, bug reports

Final testing         Execute remaining stress and performance          Test results and different metrics
                      tests, complete documentation                     on test efforts

Post implementation   Evaluate testing processes                        Plan for improvement of the testing process

BLACK BOX TESTING

FUNCTIONAL TESTING
In this type of testing, the software is tested for the functional requirements. The tests are
written in order to check if the application behaves as expected. Although functional testing is
often done toward the end of the development cycle, it can, and should, be started much earlier. Individual components and processes can be tested early on, even before it's possible to
do functional testing on the entire system. Functional testing covers how well the system
executes the functions it is supposed to execute—including user commands, data manipulation,
searches and business processes, user screens, and integrations. Functional testing covers the
obvious surface type of functions, as well as the back-end operations (such as security and how
upgrades affect the system).
STRESS TESTING:
The application is tested against heavy load such as complex numerical values, large number of
inputs, large number of queries etc. which checks for the stress/load the applications can
withstand. Stress testing deals with the quality of the application in the environment. The idea
is to create an environment more demanding of the application than the application would
experience under normal work loads. This is the hardest and most complex category of testing
to accomplish and it requires a joint effort from all teams. A test environment is established with
many testing stations. At each station, a script is exercising the system. These scripts are
usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is
repeated until a level of stress is reached that is higher than expected to be present at a
customer site. Race conditions and memory leaks are often found under stress testing. A race
condition is a conflict between at least two tests. Each test works correctly when done in
isolation. When the two tests are run in parallel, one or both of the tests fail. This is usually due
to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory
behind and does not correctly return the memory to the memory allocation scheme. The test
seems to run correctly, but after being exercised several times, available memory is reduced
until the system fails.
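
A very small sketch of the idea (Python; a handful of threads stands in for the "testing stations" above, and the unsynchronised counter is a deliberately planted defect of the kind stress testing tends to expose):

    import threading

    class Counter:
        """Deliberately missing a lock so that parallel load can expose the race."""
        def __init__(self):
            self.value = 0

        def increment(self):
            current = self.value        # read
            self.value = current + 1    # write; another thread may interleave here

    def stress(counter, iterations):
        for _ in range(iterations):
            counter.increment()

    if __name__ == "__main__":
        counter = Counter()
        threads = [threading.Thread(target=stress, args=(counter, 100_000)) for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # 8 threads x 100,000 increments should give 800,000; under parallel load the
        # total is often, though not always, lower because of lost updates.
        print("expected 800000, got", counter.value)

A real stress test drives the whole system rather than one class, but the pattern is the same: keep raising the concurrent load until defects such as races or leaks surface.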
LOAD TESTING
The application is tested against heavy loads or inputs such as testing of web sites in order to
find out at what point the web-site/application fails or at what point its performance degrades.
Load testing operates at a predefined load level, usually the highest load that the system can
accept while still functioning properly. Note that load testing does not aim to break the system by
overwhelming it, but instead tries to keep the system constantly humming like a well-oiled
machine. In the context of load testing, extreme importance should be given to having large datasets available for testing. Bugs simply do not surface unless you deal with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory; thousands
of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on
file systems, etc. Testers obviously need automated tools to generate these large data sets, but
fortunately any good scripting language worth its salt will do the job.
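
A sketch of the kind of throwaway generation script this paragraph has in mind (Python; the output format, a CSV of synthetic account records, is just an example):

    import csv
    import random
    import string

    def random_name(length=8):
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def generate_accounts(path, count):
        """Write `count` synthetic user records for loading into the system under test."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["username", "email", "quota_mb"])
            for i in range(count):
                name = f"{random_name()}{i}"
                writer.writerow([name, f"{name}@example.test", random.choice([100, 500, 1000])])

    if __name__ == "__main__":
        generate_accounts("accounts.csv", 100_000)   # scale the count up to the target load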
ADHOC TESTING
This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc testing
helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers in learning the application prior to starting any other testing. It is the least formal method of
testing. One of the best uses of ad hoc testing is for discovery. Reading the requirements or
specifications (if they exist) rarely gives you a good sense of how a program actually behaves.
Even the user documentation may not capture the “look and feel” of a program. Ad hoc testing
can find holes in your test strategy, and can expose relationships between subsystems that
would otherwise not be apparent. In this way, it serves as a tool for checking the completeness
of your testing. Missing cases can be found and added to your testing arsenal. Finding new
tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or
your test team, “What other tests of this class should we be running?” Defects found while doing
ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad
hoc testing is to determine the priorities for your other testing activities. For example, a photo-viewing program might allow the user to sort the photographs that are being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad hoc testing of this photograph-sorting feature
uncovers problems, then the formal testing might receive a higher priority.
EXPLORATORY TESTING
This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can
be orders of magnitude more productive than scripted testing. At least unconsciously, testers
perform exploratory testing at one time or another. Yet it doesn't get much respect in our field. It
can be considered as “Scientific Thinking” applied in real time.
USABILITY TESTING
This testing is also called ‘Testing for User-Friendliness’. It is done when the user interface of
the application is an important consideration and needs to suit a specific type of user.
Usability testing is the process of working with end-users directly and indirectly to
assess how the user perceives a software package and how they interact with it. This process
will uncover areas of difficulty for users as well as areas of strength. The goal of usability
testing should be to limit and remove difficulties for users and to leverage areas of strength for
maximum usability. This testing should ideally involve direct user feedback, indirect feedback
(observed behavior), and when possible computer supported feedback. Computer supported
feedback is often (if not always) left out of this process. Computer supported feedback can be
as simple as a timer on a dialog to monitor how long it takes users to use the dialog and
counters to determine how often certain conditions occur (e.g. error messages, help messages,
etc). Often, this involves trivial modifications to existing software, but can result in tremendous
return on investment. Ultimately, usability testing should result in changes to the delivered
product in line with the discoveries made regarding usability. These changes should be directly
related to real-world usability by average users. As much as possible, documentation should be
written supporting changes so that in the future, similar situations can be handled with ease.
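As a rough illustration of computer supported feedback, the sketch below wraps a hypothetical dialog with a timer and event counters; the class, method, and event names are invented for this example and are not taken from any particular UI toolkit.

import time
from collections import Counter

class InstrumentedDialog:
    """Wraps a dialog interaction with a timer and usability event counters."""

    def __init__(self, name):
        self.name = name
        self.events = Counter()   # e.g. how often errors or help messages appear
        self.durations = []       # seconds spent in the dialog per use
        self._opened_at = None

    def open(self):
        self._opened_at = time.monotonic()

    def record(self, event):
        """Count a usability-relevant event such as 'error_message' or 'help_opened'."""
        self.events[event] += 1

    def close(self):
        self.durations.append(time.monotonic() - self._opened_at)

# Usage: timings and counts are later aggregated across many test users.
dialog = InstrumentedDialog("export_photo")
dialog.open()
dialog.record("help_opened")
dialog.close()
print(dialog.durations, dict(dialog.events))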
SMOKE TESTING
This type of testing is also called sanity testing and is done in order to check whether the
application is ready for further, more thorough testing and is working properly, at least up to the
minimum expected level.
A test of new or repaired equipment by turning it on. If it smokes... guess what... it doesn't work!
The term also refers to testing the basic functions of software. The term was originally coined in
the manufacture of containers and pipes, where smoke was introduced to determine if there
were any leaks. A common practice at Microsoft and some other shrink-wrap software
companies is the "daily build and smoke test" process. Every file is compiled, linked, and
combined into an executable program every day, and the program is then put through a "smoke
test," a relatively simple check to see whether the product "smokes" when it runs.
RECOVERY TESTING
Recovery testing is basically done in order to check how quickly and how well the application can
recover from any type of crash, hardware failure, or similar event. The type or extent of recovery
is specified in the requirement specifications. It is basically testing how well a system recovers
from crashes, hardware failures, or other catastrophic problems.
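A recovery test can be simulated even in miniature. The self-contained sketch below uses a toy service that commits its state to disk; the test simulates an abrupt crash by discarding the running instance and then verifies that a fresh instance recovers the last committed value. All names are invented for the example.

import json
import os
import tempfile
import unittest

class CounterService:
    """Toy service that persists its counter to disk on every commit."""

    def __init__(self, state_path):
        self.state_path = state_path
        self.value = self._load()

    def _load(self):
        if os.path.exists(self.state_path):
            with open(self.state_path) as f:
                return json.load(f)["value"]
        return 0

    def increment_and_commit(self):
        self.value += 1
        with open(self.state_path, "w") as f:
            json.dump({"value": self.value}, f)

class RecoveryTest(unittest.TestCase):
    def test_recovers_committed_state_after_crash(self):
        state_path = os.path.join(tempfile.mkdtemp(), "state.json")
        service = CounterService(state_path)
        service.increment_and_commit()
        service.increment_and_commit()
        del service                              # simulate an abrupt crash
        recovered = CounterService(state_path)   # restart the service
        self.assertEqual(recovered.value, 2)

if __name__ == "__main__":
    unittest.main()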
VOLUME TESTING
Volume testing checks the efficiency of the application when handling data in bulk. A huge amount
of data is processed through the application under test in order to find the extreme limits of the
system.
Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware
and software) to a series of tests where the volume of data being processed is the subject of the
test. Such systems can be transaction processing systems capturing real-time sales, or systems
performing database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits to a system's capacity and
ascertain whether such limits are acceptable to meet the projected capacity of the organization’s
business processing.
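As a rough sketch of the idea, the test below pushes a large number of rows through an on-disk SQLite database and verifies that the system still answers queries correctly at that volume. The table layout and the one-million-row figure are illustrative assumptions, not a recommended target.

import sqlite3
import unittest

class VolumeTest(unittest.TestCase):
    def test_bulk_insert_and_query(self):
        conn = sqlite3.connect("volume_test.db")
        conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)")
        conn.execute("DELETE FROM sales")
        # Generate rows lazily so the whole dataset never sits in memory.
        rows = ((i, float(i % 100)) for i in range(1_000_000))
        conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
        conn.commit()
        (count,) = conn.execute("SELECT COUNT(*) FROM sales").fetchone()
        self.assertEqual(count, 1_000_000)
        conn.close()

if __name__ == "__main__":
    unittest.main()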
SCENARIO TESTING
Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program
and easy to evaluate for the tester. They provide meaningful combinations of functions and
variables rather than the more artificial combinations you get with domain testing or
combinatorial test design.
REGRESSION TESTING
Regression testing is a style of testing that focuses on retesting after changes are made. In
traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented
regression testing, we test the same areas as before, but we use different (increasingly
complex) tests. Traditional regression tests are often partially automated. These notes focus on
traditional regression testing.
Regression testing attempts to mitigate two risks:
o A change that was intended to fix a bug failed.
o Some change had a side effect, unfixing an old bug or introducing a new bug.
Regression testing approaches differ in their focus. Common examples include:
o Bug regression: We retest a specific bug that has been allegedly fixed.
o Old fix regression testing: We retest several old bugs that were fixed, to see if they are back.
(This is the classical notion of regression: the program has regressed to a bad state.)
o General functional regression: We retest the product broadly, including areas that worked
before, to see whether more recent changes have destabilized working code. (This is the typical
scope of automated regression testing.)
o Conversion or port testing: The program is ported to a new platform and a subset of the
regression test suite is run to determine whether the port was successful. (Here, the main
changes of interest might be in the new platform, rather than the modified old code.)
o Configuration testing: The program is run with a new device, on a new version of the operating
system, or in conjunction with a new application. This is like port testing except that the
underlying code hasn't been changed; only the external components that the software under test
must interact with have.
o Localization testing: The program is modified to present its user interface in a different
language and/or following a different set of cultural rules. Localization testing may involve
several old tests (some of which have been modified to take into account the new language)
along with several new (non-regression) tests.
o Smoke testing, also known as build verification testing: A relatively small suite of tests is used
to qualify a new build. Normally, the tester is asking whether any component is so obviously or
badly broken that the build is not worth testing, whether some components are broken in ways
that suggest a corrupt build, or whether critical fixes that were the primary intent of the new build
didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build
stops), not just a new set of bug reports.
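Bug regression, the first item above, is often the easiest to automate: once a defect is fixed, the failing input is frozen into a permanent test so the bug cannot silently return. A minimal sketch, with an invented parser and an invented bug number, could look like this:

import unittest

def parse_price(text):
    """Parse a price string such as '1,299.50' into a float."""
    return float(text.replace(",", ""))

class BugRegressionTest(unittest.TestCase):
    def test_bug_1234_thousands_separator(self):
        # Hypothetical bug #1234: prices with a thousands separator crashed
        # the parser. The fix must keep handling this input in every build.
        self.assertEqual(parse_price("1,299.50"), 1299.50)

if __name__ == "__main__":
    unittest.main()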
USER ACCEPTANCE TESTING
In this type of testing, the software is handed over to the users in order to find out whether the
software meets their expectations and works as expected. In software development, user
acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is
a phase of software development in which the software is tested in the "real world" by the
intended audience. UAT can be done by in-house testing in which volunteers or paid test
subjects use the software or, more typically for widely-distributed software, by making the test
version available for downloading and free trial over the Web. The experiences of the early
users are forwarded back to the developers who make final changes before releasing the
software commercially.
ALPHA TESTING
In this type of testing, the users are invited to the development center, where they use the
application and the developers note every particular input or action carried out by the user. Any
type of abnormal behavior of the system is noted and rectified by the developers.
BETA TESTING
In this type of testing, the software is distributed as a beta version to the users and users test
the application at their sites. As the users explore the software, any exception or defect that
occurs is reported to the developers. Beta testing comes after alpha testing. Versions of the
software, known as beta versions, are released to a limited audience outside of the company.
The software is released to groups of people so that further testing can ensure the product has
few faults or bugs. Sometimes, beta versions are made available to the open public to increase
the feedback field to a maximal number of future users.
WHITE BOX TESTING
UNIT TESTING
The developer carries out unit testing in order to check if the particular module or unit of code is
working fine. Unit testing comes at the most basic level, as it is carried out as and when a unit of
code is developed or a particular functionality is built. Unit testing deals with testing a
unit as a whole. This would test the interaction of many functions but confine the test within
one unit. The exact scope of a unit is left to interpretation. Supporting test code, sometimes
called scaffolding, may be necessary to support an individual test. This type of testing is driven
by the architecture and implementation teams. This focus is also called black-box testing
because only the details of the interface are visible to the test. Limits that are global to a unit are
tested here.
In the construction industry, scaffolding is a temporary, easy to assemble and
disassemble, frame placed around a building to facilitate the construction of the building. The
construction workers first build the scaffolding and then the building. Later the scaffolding is
removed, exposing the completed building. Similarly, in software testing, one particular test may
need some supporting software. This software establishes an environment around the test. Only
when this environment is established can a correct evaluation of the test take place. The
scaffolding software may establish state and values for data structures as well as providing
dummy external functions for the test. Different scaffolding software may be needed from one
test to another test. Scaffolding software rarely is considered part of the system. Sometimes the
scaffolding software becomes larger than the system software being tested. Usually the
scaffolding software is not of the same quality as the system software and frequently is quite
fragile. A small change in the test may lead to much larger changes in the scaffolding.
Internal and unit testing can be automated with the help of coverage tools. A coverage tool analyzes the
source code and generates a test that will execute every alternative thread of execution. It is still
up to the programmer to combine this test into meaningful cases to validate the result of each
thread of execution. Typically, the coverage tool is used in a slightly different way. First the
coverage tool is used to augment the source by placing informational prints after each line of
code. Then the testing suite is executed generating an audit trail. This audit trail is analyzed and
reports the percent of the total system code executed during the test suite. If the coverage is
high and the untested source lines are of low impact on the system's overall quality, then no
additional tests are required.
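As a small illustration of scaffolding, the sketch below tests a unit that depends on an external service by supplying a dummy (stub) source in its place; every name here is invented for the example.

import unittest

def fetch_exchange_rate(currency, rate_source):
    """Unit under test: looks up a rate through an injected external source."""
    rate = rate_source(currency)
    if rate <= 0:
        raise ValueError("invalid rate")
    return rate

def stub_rate_source(currency):
    """Scaffolding: a dummy external function with fixed, known answers."""
    return {"EUR": 1.1, "GBP": 1.3}.get(currency, -1.0)

class ExchangeRateUnitTest(unittest.TestCase):
    def test_known_currency(self):
        self.assertEqual(fetch_exchange_rate("EUR", stub_rate_source), 1.1)

    def test_unknown_currency_rejected(self):
        with self.assertRaises(ValueError):
            fetch_exchange_rate("XXX", stub_rate_source)

if __name__ == "__main__":
    unittest.main()

Such a suite can then be run under a coverage tool (for example, coverage run -m unittest followed by coverage report with the coverage.py package) to see which source lines the tests actually executed.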
STATIC & DYNAMIC ANALYSIS
Static analysis involves going through the code in order to find out any possible defect in the
code. Dynamic analysis involves executing the code and analyzing the output.
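A tiny example of the difference: a static checker can flag the defect below without executing anything, whereas a dynamic test only exposes it when the faulty branch actually runs. The function is invented purely for illustration.

def report_total(items):
    if not items:
        return "no items"
    return f"total: {totl}"    # typo: undefined name; tools such as pyflakes
                               # flag this statically, without running the code

def test_dynamic_analysis():
    # Dynamic analysis: executing the code with a non-empty list exposes the
    # NameError that static analysis predicted.
    try:
        report_total([1, 2, 3])
        assert False, "expected a NameError"
    except NameError:
        pass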
STATEMENT COVERAGE
In this type of testing the code is executed in such a manner that every statement of the
application is executed at least once. It helps in assuring that all the statements execute without
any side effect.
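A minimal illustration, with an invented function: a single well-chosen test can drive execution through every statement of the code under test.

def label_order(quantity):
    label = "small"
    if quantity > 10:
        label = "bulk"
    return label

def test_statement_coverage():
    # quantity=20 drives execution through every statement of the function.
    assert label_order(20) == "bulk"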
BRANCH COVERAGE
No software application can be written in a continuous mode of coding; at some point the code
has to branch in order to perform a particular piece of functionality. Branch coverage testing
helps in validating all the branches in the code and making sure that no branch leads to
abnormal behavior of the application.
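Branch coverage is stricter than statement coverage. In the invented example below, the first test alone already executes every statement, but only the second test exercises the branch where the condition is false.

def apply_bonus(score, is_vip):
    bonus = 0
    if is_vip:
        bonus = 10
    return score + bonus

def test_vip_branch():
    # One test executes every statement (full statement coverage)...
    assert apply_bonus(50, True) == 60

def test_non_vip_branch():
    # ...but branch coverage also requires the case where the if is not taken.
    assert apply_bonus(50, False) == 50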
SECURITY TESTING
Security testing is carried out in order to find out how well the system can protect itself from
unauthorized access, hacking and cracking, damage to code or data, and similar attacks on the
application. This type of testing needs sophisticated testing techniques.
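As one small, illustrative example of a security test, the sketch below feeds a classic SQL injection string to a login helper built on a parameterized query and asserts that it is not authenticated. The helper, table, and in-memory database are assumptions of the example, not part of any real product.

import sqlite3
import unittest

def is_valid_login(conn, username, password):
    # Parameterized query: user input is never spliced into the SQL text.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

class SecurityTest(unittest.TestCase):
    def test_sql_injection_attempt_is_rejected(self):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'secret')")
        self.assertFalse(is_valid_login(conn, "alice", "' OR '1'='1"))

if __name__ == "__main__":
    unittest.main()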
MUTATION TESTING
A kind of testing in which small, deliberate changes (mutations) are made to the application's
code and the existing tests are re-run to see whether they catch the modified code. It helps in
judging how effective the test suite is, and in finding out which code and which coding strategy
help in developing the functionality effectively.
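Worked by hand, the idea looks like this: a small mutation (here, changing > to >=) is applied to a copy of the code and the existing test is run against the mutant; if the test fails, the mutant is "killed", which is evidence that the test suite is effective. Everything below is an invented example; in practice, dedicated tools automate the generation of mutants.

def is_adult(age):
    return age > 18            # original code

def is_adult_mutant(age):
    return age >= 18           # mutant: boundary operator changed

def test_boundary():
    # This test kills the mutant: the original returns False at 18, the
    # mutant would return True, so the same assertion fails against it.
    assert is_adult(18) is False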