Software testing is the process used to measure the quality of developed computer software.
Usually, quality is constrained to such topics as correctness, completeness, and security, but can
also include more technical requirements as described under the ISO 9126 standard, such
as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing
is a process of technical investigation, performed on behalf of stakeholders, that is intended to
reveal quality-related information about the product with respect to the context in which it is
intended to operate. This includes, but is not limited to, the process of executing a program or
application with the intent of finding errors. Quality is not an absolute; it is value to some person.
With that in mind, testing can never completely establish the correctness of arbitrary computer
software; testing furnishes a criticism or comparison that compares the state and behaviour of
the product against a specification. An important point is that software testing should be
distinguished from the separate discipline of Software Quality Assurance (SQA), which
encompasses all business process areas, not just testing.
Today, software has grown in complexity and size. A software product is developed according to
its System Requirement Specification, and every software product has a target audience. For
example, video game software has a completely different audience from banking software.
Therefore, when an organization invests large sums in making a software product, it must
ensure that the product is acceptable to the end users or its target audience. This is where
software testing comes into play. Software testing is not merely finding defects or bugs in the
software; it is a completely dedicated discipline of evaluating the quality of the software.
There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following routine
procedure. One definition of testing is "the process of questioning a product in order to evaluate
it", where the "questions" are operations the tester attempts to execute with the product, and the
product answers with its behavior in reaction to the probing of the tester. Although most of the
intellectual processes of testing are nearly identical to that of review or inspection, the word
testing is also used to connote the dynamic analysis of the product—putting the product through
its paces. Sometimes one therefore refers to reviews, walkthroughs or inspections as "static
testing", whereas actually running the program with a given set of test cases in a given
development stage is often referred to as "dynamic testing", to emphasize the fact that formal
review processes form part of the overall testing scope.
Introduction
In general, software engineers distinguish software faults from software failures. In case of a
failure, the software does not do what the user expects. A fault is a programming error that may
or may not actually manifest as a failure. A fault can also be described as an error in the
correctness of the semantic of a computer program. A fault will become a failure if the exact
computation conditions are met, one of them being that the faulty portion of computer software
executes on the CPU. A fault can also turn into a failure when the software is ported to a
different hardware platform or a different compiler, or when the software gets extended.
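The fault/failure distinction can be illustrated with a small sketch (the `average` function is hypothetical): the fault is latent until the exact computation conditions are met.

```python
# A fault (latent programming error) that becomes a failure only when
# a specific computation path executes. The function is hypothetical.
def average(values):
    # Fault: no guard for an empty list; division by len(values)
    # raises ZeroDivisionError for that input.
    return sum(values) / len(values)

# The fault stays latent for typical inputs...
assert average([2, 4, 6]) == 4.0

# ...and manifests as a failure only under the triggering condition.
try:
    average([])
except ZeroDivisionError:
    print("failure observed")
```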
Software testing may be viewed as a sub-field of Software Quality Assurance but typically exists
independently (and there may be no SQA areas in some companies). In SQA, software process
specialists and auditors take a broader view on software and its development. They examine
and change the software engineering process itself to reduce the number of faults that end up in
the code, or to deliver faster.
Regardless of the methods used or level of formality involved, the desired result of testing is a
level of confidence in the software so that the organization is confident that the software has an
acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the
software. An arcade video game designed to simulate flying an airplane would presumably have
a much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product can be very
large, and the number of configurations of the product larger still. Bugs that occur infrequently
are difficult to find in testing. A rule of thumb is that a system that is expected to function without
faults for a certain length of time must have already been tested for at least that length of time.
This has severe consequences for projects to write long-lived reliable software, since it is not
usually commercially viable to test over the proposed length of time unless this is a relatively
short period. A few days or a week would normally be acceptable, but any longer period would
usually have to be simulated according to carefully prescribed start and end conditions.
A common practice of software testing is that it is performed by an independent group of testers
after the functionality is developed but before it is shipped to the customer. This practice often
results in the testing phase being used as project buffer to compensate for project delays,
thereby compromising the time devoted to testing. Another practice is to start software testing at
the same moment the project starts and continue it continuously until the project finishes.
This is highly problematic in terms of controlling changes to software: if faults or failures are
found part way into the project, the decision to correct the software needs to be taken on the
basis of whether or not these defects will delay the remainder of the project. If the software does
need correction, this needs to be rigorously controlled using a version numbering system, and
software testers need to be accurate in knowing that they are testing the correct version, and
will need to re-test the part of the software wherein the defects were found. The correct start
point needs to be identified for retesting. There are added risks in that new defects may be
introduced as part of the corrections, and the original requirement can also change part way
through, in which instance previous successful tests may no longer meet the requirement and
will need to be re-specified and redone (part of regression testing). Clearly the possibilities for
projects being delayed and running over budget are significant.
Another common practice is for test suites to be developed during technical support escalation
procedures. Such tests are then maintained in regression testing suites to ensure that future
updates to the software don't repeat any of the known mistakes.
It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is
reasonable based on the risk of any given defect contributing to or being confused with further
defects later in the system or process. In particular, if a defect erroneously changes the state of
the data on which the software is operating, that data is no longer reliable and therefore any
testing after that point cannot be relied on even if there are no further actual software defects.
The relative cost to fix a defect, by the phase in which it was introduced and the phase in which
it was detected, is roughly as follows [1]:

Time Introduced | Requirements  Architecture  Construction  System Test  Post-Release  (Time Detected)
Requirements    | 1             3             5-10          10           10-100
Architecture    | -             1             10            15           25-100
Construction    | -             -             1             10           10-25
In counterpoint, some emerging software disciplines such as extreme programming and the
agile software development movement, adhere to a "test-driven software development" model.
In this process unit tests are written first, by the software engineers (often with pair
programming in the extreme programming methodology). Of course these tests fail initially, as
they are expected to. Then, as code is written, it passes incrementally larger portions of the test
suites. The test suites are continuously updated as new failure conditions and corner cases are
discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and generally
integrated into the build process (with inherently interactive tests being relegated to a partially
manual build acceptance process).
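A minimal sketch of this test-first flow, using Python's unittest module (the `parse_version` function and its cases are hypothetical):

```python
import unittest

# Hypothetical unit under test. In test-driven style the tests below
# are written first and fail until this implementation satisfies them.
def parse_version(text):
    major, minor, patch = (int(part) for part in text.split("."))
    return (major, minor, patch)

class ParseVersionTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_corner_case_added_later(self):
        # Corner cases discovered during testing are folded back into
        # the suite, as described above.
        with self.assertRaises(ValueError):
            parse_version("not.a.version")
```

Such a suite would typically be run as part of the build, e.g. via `python -m unittest`.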
The software, tools, samples of data input and output, and configurations are all referred to
collectively as a test harness.
Software Testing Axioms
History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. [2]
Although his attention was on breakage testing it illustrated the desire of the software
engineering community to separate fundamental development activities, such as debugging,
from that of verification. Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases
and goals in software testing as follows:[3]
Until 1956 was the debugging-oriented period, when testing was often associated with
debugging: there was no clear difference between testing and debugging. From 1957 to 1978
came the demonstration-oriented period, in which debugging and testing were now
distinguished; in this period it was shown that software satisfies the requirements. The time
between 1979 and 1982 is known as the destruction-oriented period, where the goal was to find
errors. 1983-1987 is classified as the evaluation-oriented period: the intention here is that during
the software lifecycle a product evaluation is provided and quality is measured. From 1988 on it
was seen as the prevention-oriented period, where tests were to demonstrate that software
satisfies its specification, to detect faults, and to prevent faults.
Dr. Gelperin chaired the IEEE 829-1989 (Test Documentation Standard) with Dr. Hetzel writing
the book The Complete Guide to Software Testing. Both works were pivotal to today's testing
culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went
on to develop High Impact Inspection Technology that builds upon traditional Inspections but
utilizes a test driven additive.
SDLC : Software Development Life Cycle
The following are the activities of the SDLC:
1) System engineering and modeling
2) Software requirements analysis
3) Systems analysis and design
4) Code generation
5) Testing
6) Deployment and Maintenance
System Engineering and Modeling
In this process we have to identify the project's requirements and the main features proposed
for the application. Here the development team visits the customer and studies their system,
investigating the need for possible software automation in the given system. By the end of the
investigation, the team writes a document that holds the specifications for the customer's system.
Software Requirement Analysis
In software requirements analysis, the first step is to analyze the requirements for the proposed
system. To understand the nature of the program to be built, the system engineer must
understand the information domain for the software, as well as the required functions,
performance, and interfacing. From the available information, the system engineer develops a
list of the actors, use cases, and system-level requirements for the project. With the help of key
users, the list of use cases and requirements is reviewed, refined, and updated in an iterative
fashion until the users are satisfied that it represents the essence of the proposed system.
Systems analysis and design
Design is the process of deciding exactly how the specifications are to be implemented. It
defines specifically how the software is to be written, including an object model with properties
and methods for each object, the client/server technology, the number of tiers needed for the
package architecture, and a detailed database design. Analysis and design are very important
in the whole development cycle; any glitch in the design can be very expensive to fix in the
later stages of software development.
Code generation
The design must be translated into a machine-readable form; the code generation step performs
this task. The development phase involves the actual coding of the entire application. If the
design is produced in a detailed manner, code generation can be accomplished without much
complication. Programming tools such as compilers and interpreters for languages like C, C++,
and Java are used for coding, and the right programming language is chosen with respect to
the type of application.
Testing
After the coding, program testing begins. Different methods are available to detect errors in the
code, and some companies have developed their own testing tools.
Deployment and Maintenance
Deployment and maintenance is a staged roll-out of the new application; this involves
installation and initial training, and may involve hardware and network upgrades. Software will
definitely undergo change once it is delivered to the customer. There are many reasons for
change: change can happen because of unexpected input values into the system, and changes
in the surrounding system can directly affect the software's operation. The software should
therefore be developed to accommodate changes that could happen during the post-
implementation period.
Life Cycle of Testing Process
This article explains the different steps in the life cycle of the testing process. Each phase of the
development process has a specific input and a specific output. Once the project is confirmed to
start, the development of the project can be divided into the following phases:
• Software requirements phase.
• Software Design
• Implementation
• Testing
• Maintenance
In the whole development process, testing consumes the highest amount of time, but most
developers overlook that, and the testing phase is generally neglected. As a consequence,
erroneous software is released. The testing team should be involved right from the requirements
stage itself.
The various phases involved in testing, with regard to the software development life cycle are:
1. Requirements stage
2. Test Plan
3. Test Design.
4. Design Reviews
5. Code Reviews
6. Test Cases preparation.
7. Test Execution
8. Test Reports.
9. Bugs Reporting
10. Reworking on patches.
11. Release to production.
Requirements Stage
Normally in many companies, only developers take part in the requirements stage. Especially in
product-based companies, a tester should also be involved in this stage, since a tester thinks
from the user's side in a way a developer may not. A separate panel should be formed for each
module, comprising a developer, a tester, and a user, and panel meetings should be scheduled
in order to gather everyone's views. All the requirements should be documented properly for
further use; this document is called the "Software Requirements Specification".
Test Plan
Without a good plan, no work succeeds, and the testing process for software likewise requires a
good plan. The test plan is the most important document for bringing in a process-oriented
approach, and it should be prepared after the requirements of the project are confirmed. A test
plan document typically records the scope, approach, resources, and schedule of the intended
testing activities.
Test Design
The test design is represented pictorially and involves various stages. These stages can be
summarized as follows:
• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.
Then the design is drawn. The test design is the most critical stage, as it decides the test case
preparation; the test design therefore determines the quality of the testing process.
Test Cases Preparation
Test cases should be prepared based on the following scenarios:
• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
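The four scenario categories above can be sketched as one test case each, for a hypothetical `clamp_percent` function:

```python
# Hypothetical unit under test: clamps a percentage to the 0-100 range,
# rejecting non-numeric input.
def clamp_percent(value):
    if not isinstance(value, (int, float)):
        raise TypeError("value must be numeric")
    return max(0, min(100, value))

# One test case per scenario category.
assert clamp_percent(42) == 42           # positive scenario: normal input
try:                                      # negative scenario: invalid input
    clamp_percent("42")
    raise AssertionError("expected TypeError")
except TypeError:
    pass
assert clamp_percent(0) == 0              # boundary condition: lower edge
assert clamp_percent(100) == 100          # boundary condition: upper edge
assert clamp_percent(103.5) == 100        # real-world scenario: out-of-range data
```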
Design Reviews
The software design is done in a systematic manner or using UML. The tester can review the
design and suggest ideas and any modifications needed.
Code Reviews
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be
ready to do unit testing on it and must be ready with his own unit test cases. Though a
developer does the unit testing, a tester must also do it: developers may overlook some minute
mistakes in the code that a tester may find.
Test Execution and Bugs Reporting
Once unit testing is completed and the code is released to QA, functional testing is done.
Top-level testing is done at the beginning of the testing to find top-level failures. If any top-level
failures occur, the bugs should be reported to the developer immediately to get the required
workaround.
The test reports should be documented properly and the bugs have to be reported to the
developer after the testing is completed.
Release to Production
Once the bugs are fixed, another release is given to the QA with the modified changes.
Regression testing is executed. Once the QA assures the software, the software is released to
production. Before releasing to production, another round of top-level testing is done.
The testing process is an iterative process. Once the bugs are fixed, the testing has to be done
repeatedly. Thus the testing process is an unending process.
White box and black box testing are terms used to describe the point of view that a test
engineer takes when designing test cases. Black box testing treats the software as a black-box
without any understanding as to how the internals behave. Thus, the tester inputs data and only
sees the output from the test object. This level of testing usually requires thorough test cases to
be provided to the tester who then can simply verify that for a given input, the output value (or
behavior), is the same as the expected value specified in the test case.
White box testing, however, is when the tester has access to the internal data structures, code,
and algorithms. For this reason, unit testing and debugging can be classified as white-box
testing and it usually requires writing code, or at a minimum, stepping through it, and thus
requires more skill than the black-box tester. If the software in test is an interface or API of any
sort, white-box testing is almost always required.
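The two points of view can be contrasted on a single hypothetical function: the black-box test uses only the specified input/output contract, while the white-box test uses knowledge of an internal branch to pick its probes.

```python
# Hypothetical function under test.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:   # internal branch: flat rate for light parcels
        return 5.0
    return 5.0 + (weight_kg - 1.0) * 2.0

# Black-box test: only the specified contract is used (3 kg costs 9.0).
assert shipping_cost(3.0) == 9.0

# White-box tests: knowledge of the internal branch at 1.0 kg tells the
# tester to probe exactly that boundary.
assert shipping_cost(1.0) == 5.0
assert shipping_cost(1.001) > 5.0
```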
In recent years the term grey box testing has come into common usage. This involves having
access to internal data structures and algorithms for purposes of designing the test cases, but
testing at the user, or black-box level. Manipulating input data and formatting output do not
qualify as grey-box because the input and output are clearly outside of the black-box we are
calling the software under test. This is particularly important when conducting integration testing
between two modules of code written by two different developers, where only the interfaces are
exposed for test.
Grey box testing could be used in the context of testing a client-server environment when the
tester has control over the input, inspects the value in a SQL database, and the output value,
and then compares all three (the input, sql value, and output), to determine if the data got
corrupt on the database insertion or retrieval.
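The client-server scenario above can be sketched with an in-memory SQLite database; the schema and the save/load helpers are hypothetical.

```python
import sqlite3

# Grey-box sketch: the tester controls the input, inspects the stored
# value directly in the database, and compares input, stored value,
# and output to locate where corruption would occur.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def save_user(name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def load_user():
    return conn.execute("SELECT name FROM users").fetchone()[0]

input_value = "Ada"
save_user(input_value)
# Direct inspection of the database, bypassing the application code.
stored_value = conn.execute("SELECT name FROM users").fetchone()[0]
output_value = load_user()

# All three must agree, else data was corrupted on insertion or retrieval.
assert input_value == stored_value == output_value
```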
Grey box testing is the combination of black box and white box testing. The intention of this
testing is to find defects related to bad design or bad implementation of the system. In grey box
testing, the test engineer is equipped with knowledge of the system and designs test cases or
test data based on that knowledge.
Software testing is used in association with verification and validation (V&V). Verification is the
checking of or testing of items, including software, for conformance and consistency with an
associated specification. Software testing is just one kind of verification, which also uses
techniques such as reviews, inspections, and walkthroughs. Validation is the process of
checking what has been specified is what the user actually wanted.
• Verification: Have we built the software right? (i.e. does it match the specification).
• Validation: Have we built the right software? (i.e. Is this what the customer wants?)
Levels of testing
• Unit testing tests the minimal software component, or module. Each unit (basic component) of
the software is tested to verify that the detailed design for the unit has been correctly
implemented. In an Object-oriented environment, this is usually at the class level, and the
minimal unit tests include the constructors and destructors.
• Integration testing exposes defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested until the
software works as a system.
• Functional testing tests at any level (class, module, interface, or system) for proper
functionality as defined in the specification.
• System testing tests a completely integrated system to verify that it meets its
requirements.
• System integration testing verifies that a system is integrated to any external or third
party systems defined in the system requirements.
• Acceptance testing can be conducted by the end-user, customer, or client to validate
whether or not to accept the product. Acceptance testing may be performed as part of
the hand-off process between any two phases of development.
o Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site. Alpha
testing is often employed for off-the-shelf software as a form of internal
acceptance testing, before the software goes to beta testing.
o Beta testing comes after alpha testing. Versions of the software, known as
beta versions, are released to a limited audience outside of the company. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to
the open public to increase the feedback field to a maximal number of future
users.
It should be noted that although both alpha and beta are referred to as testing, they are in fact
use immersion: the rigors applied are often unsystematic, and many of the basic tenets of the
testing process are not used. The alpha and beta periods provide insight into environmental
and utilization conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects, a regression test
re-runs previously passing tests on the modified software to ensure that the modifications
haven't unintentionally caused a regression of previous functionality. Regression testing can be
performed at any or all of the above test levels. These regression tests are often automated.
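A minimal regression-suite sketch: previously passing checks are kept and re-run after every modification. The `slugify` function and its tests are hypothetical.

```python
# Hypothetical unit under test.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Previously passing tests, kept so future changes can't silently
# regress old behaviour.
REGRESSION_TESTS = [
    lambda: slugify("Hello World") == "hello-world",   # original behaviour
    lambda: slugify("  padded  ") == "padded",         # fix from an old bug report
]

def run_regression_suite():
    # Returns the indices of failed tests; an empty list means no regression.
    return [i for i, test in enumerate(REGRESSION_TESTS) if not test()]

assert run_regression_suite() == []
```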
A test case is a software testing document, which consists of event, action, input, output,
expected result, and actual result. Clinically defined (IEEE 829-1998) a test case is an input and
an expected result. This can be as pragmatic as 'for condition x your derived result is y',
whereas other test cases describe in more detail the input scenario and what results might be
expected. It can occasionally be a series of steps (but often steps are contained in a separate
test procedure that can be exercised against multiple test cases, as a matter of economy) but
with one expected result or expected outcome. The optional fields are a test case ID, test step
or order of execution number, related requirement(s), depth, test category, author, and check
boxes for whether the test is automatable and has been automated. Larger test cases may also
contain prerequisite states or steps, and descriptions. A test case should also contain a place for
the actual result. These steps can be stored in a word processor document, spreadsheet,
database, or other common repository. In a database system, you may also be able to see past
test results and who generated the results and the system configuration used to generate those
results. These past results would usually be stored in a separate table.
The term test script is the combination of a test case, test procedure, and test data. Initially the
term was derived from the product of work created by automated regression test tools. Today,
test scripts can be manual, automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often also
contains more detailed instructions or goals for each collection of test cases. It definitely
contains a section where the tester identifies the system configuration used during testing. A
group of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be
called a test specification. If sequence is specified, it can be called a test script, scenario, or
procedure.
Code coverage
Controversy
There is considerable controversy among testing writers and consultants about what constitutes
responsible software testing. Members of the "context-driven" school of testing believe that
there are no "best practices" of testing, but rather that testing is a set of skills that allow the
tester to select or invent testing practices to suit each unique situation. In addition, prominent
members of the community consider much of the writing about software testing to be doctrine,
mythology, and folklore. Some might contend that this belief directly contradicts standards such
as the IEEE 829 test documentation standard, and organizations such as the Food and Drug
Administration who promote them. The context-driven school's retort is that Lessons Learned in
Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that
not all software testing occurs in a regulated environment and that practices appropriate for
such environments would be ruinously expensive, unnecessary, and inappropriate for other
contexts; and that in any case the FDA generally promotes the principle of the least
burdensome approach.
Some of the major controversies include:
Agile vs. traditional
Starting around 1990, a new style of writing about testing began to challenge what had come
before. The seminal work in this regard is widely considered to be Testing Computer Software,
by Cem Kaner.[4] Instead of assuming that testers have full access to source code and complete
specifications, these writers, including Kaner and James Bach, argued that testers must learn to
work under conditions of uncertainty and constant change. Meanwhile, an opposing trend
toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The
agile testing movement (which includes but is not limited to forms of testing practiced on agile
development projects) has popularity mainly in commercial circles, whereas the CMM was
embraced by government and military software providers.
However, saying that "maturity models" like the CMM gained ground against, or in opposition to,
agile testing may not be right: the agile movement is a way of working, while the CMM is a
process improvement idea.
But another point of view must be considered: the operational culture of an organization. While it
may be true that testers must have an ability to work in a world of uncertainty, it is also true that
their flexibility must have direction. In many cases test cultures are self-directed and as a result
fruitless; unproductive results can ensue. Furthermore, providing positive evidence of defects
may either indicate that you have found the tip of a much larger problem, or that you have
exhausted all possibilities. A framework is a test of Testing. It provides a boundary that can
measure (validate) the capacity of our work. Both sides have, and will continue to argue the
virtues of their work. The proof however is in each and every assessment of delivery quality. It
does little good to test systematically if you are too narrowly focused. On the other hand, finding
a bunch of errors is not an indicator that Agile methods were the driving force; you may simply
have stumbled upon an obviously poor piece of work.
Certification
Several certification programs exist to support the professional aspirations of software testers
and quality assurance specialists. No certification currently offered actually requires the
applicant to demonstrate the ability to test software. No certification is based on a widely
accepted body of knowledge. No certification board decertifies individuals.
This has led some to declare that the testing field is not ready for certification. [5] Certification
itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot
guarantee their competence, or professionalism as a tester.[6]
Certifications can be grouped into exam-based and education-based. Exam-based certifications
require passing an exam, for which one can also prepare by self-study: e.g., ISTQB or QAI.
Education-based certifications are instructor-led sessions, where each course has to be passed,
e.g. IIST (International Institute for Software Testing).
Testing certifications
• CSTE offered by the Quality Assurance Institute (QAI)
• CSTP offered by the International Institute for Software Testing
• CSTP (TM) (Australian Version) offered by the K. J. Ross & Associates
• CATe offered by the International Institute for Software Testing
• ISEB offered by the Information Systems Examinations Board
• ISTQB offered by the International Software Testing Qualification Board
Quality assurance certifications
• CSQE offered by the American Society for Quality (ASQ)
• CSQA offered by the Quality Assurance Institute (QAI)
One principle in software testing is summed up by the classical Latin question posed by
Juvenal: Quis Custodiet Ipsos Custodes (Who watches the watchmen?), or is alternatively
referred to informally as the "Heisenbug" concept (a common misconception that confuses
Heisenberg's uncertainty principle with observer effect). The idea is that any form of observation
is also an interaction, that the act of testing can also affect that which is being tested.
In practical terms the test engineer is testing software (and sometimes hardware or firmware)
with other software (and hardware and firmware). The process can fail in ways that are not the
result of defects in the target but rather result from defects in (or indeed intended features of)
the testing tool.
Metrics are being developed to measure the effectiveness of testing. One method, analyzing
code coverage, is highly controversial, though everyone can agree on which areas are not being
covered at all and try to improve coverage in those areas.
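The idea of agreeing on what is not covered can be sketched with hand instrumentation; real projects would use a tool such as coverage.py instead, and the `grade` function is hypothetical.

```python
# Sketch of branch-coverage bookkeeping: each branch records that it
# ran, and unexercised branches are reported.
covered = set()

def grade(score):
    if score >= 90:
        covered.add("high")
        return "A"
    covered.add("low")
    return "B"

# A test suite exercising only one branch...
assert grade(95) == "A"

# ...makes it unambiguous which branch is NOT covered.
uncovered = {"high", "low"} - covered
assert uncovered == {"low"}
```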
Bugs can also be placed into code on purpose, and the number of bugs that have not been
found can be predicted based on the percentage of intentionally placed bugs that were found.
The problem is that it assumes that the intentional bugs are the same type of bug as the
unintentional ones.
Finally, there is the analysis of historical find-rates. By measuring how many bugs are found and
comparing them to predicted numbers (based on past experience with similar projects), certain
assumptions regarding the effectiveness of testing can be made. While not an absolute
measurement of quality, if a project is halfway complete and there have been no defects found,
then changes may be needed to the procedures being employed by QA.
Software testing can be done by software testers. Until the 1980s the term "software tester" was
used generally, but later it was also seen as a separate profession. Regarding the periods and
the different goals in software testing (see D. Gelperin and W.C. Hetzel) there have been
established different roles: test lead/manager, tester, test designer, test automater/automation
developer, and test administrator.
Participants of the testing team:
1. Tester
2. Developer
3. Business Analyst
4. Customer
5. Information Service Management
6. Test Manager
7. Senior Organization Management
8. Quality team
The software testing life cycle identifies which test activities to carry out and when (what is the
best time) to accomplish them. Even though testing differs between organizations, there is a
common testing life cycle.
Test Analysis
Once the test plan is made and agreed upon, the next step is to delve a little deeper into the
project and decide what types of testing should be carried out at the different stages of the
SDLC, whether we need or plan to automate, and if so when the appropriate time to automate
is, and what specific documentation is needed for testing.
Proper and regular meetings should be held between the testing teams, project managers,
development teams, and business analysts to check the progress of things. This gives a fair
idea of the movement of the project, ensures the completeness of the test plan created in the
planning phase, and helps in refining the testing strategy created earlier. We will start creating
test case formats and the test cases themselves. In this stage we need to develop a functional
validation matrix based on the business requirements to ensure that all system requirements
are covered by one or more test cases, identify which test cases to automate, and begin review
of the documentation, i.e. functional design, business requirements, product specifications,
product externals, etc. We also have to define areas for stress and performance testing.
Test Design
Test plans and cases developed in the analysis phase are revised. The functional validation
matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If
automation is planned, the test cases to automate are selected and script writing begins. Test
data is prepared. Standards for unit testing and pass/fail criteria are defined here. The testing
schedule is revised (if necessary) and finalized, and the test environment is prepared.
Testing Cycles
In this phase we have to complete testing cycles until the test cases are executed without errors
or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if
needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle
3, ...)
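The cycle just described can be sketched as a loop. The functions here are hypothetical stand-ins for real test execution and bug fixing; the point is only the control flow: run, report, fix, repeat until clean or the cycle limit is reached.

```python
def run_cycles(test_cases, fix_bug, max_cycles=3):
    """Repeat test cycles until no case fails or the cycle limit is hit."""
    for cycle in range(1, max_cycles + 1):
        failures = [case for case in test_cases if not case()]
        if not failures:
            return cycle          # all cases passed in this cycle
        for case in failures:     # "report bugs" followed by "bug fixing"
            fix_bug(case)
    return max_cycles

# Toy example: one case fails until its bug is "fixed" in cycle 1,
# so the suite runs clean in cycle 2.
state = {"fixed": False}
case = lambda: state["fixed"]
cycles_needed = run_cycles([case], fix_bug=lambda c: state.update(fixed=True))
print(cycles_needed)  # -> 2
```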
The phases, activities, and deliverables can be summarized as follows:
Planning: create a high-level test plan. Deliverables: test plan, refined specification.
Design: test cases are revised; select which test cases to automate. Deliverables: revised test
cases, test data sets, risk assessment sheet.
Final testing: execute remaining stress and performance tests; complete documentation.
Deliverables: test results and different metrics on test efforts.
FUNCTIONAL TESTING
In this type of testing, the software is tested for the functional requirements. The tests are
written to check whether the application behaves as expected. Although functional testing is
often done toward the end of the development cycle, it can, and should, be started much
earlier. Individual components and processes can be tested early on, even before it is possible
to do functional testing on the entire system. Functional testing covers how well the system
executes the functions it is supposed to execute, including user commands, data manipulation,
searches and business processes, user screens, and integrations. It covers the obvious
surface functions as well as back-end operations (such as security and how upgrades affect
the system).
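A minimal functional test in the style described above drives one user-visible function and asserts on the expected behavior. The search function and its requirement are hypothetical stand-ins for a real application entry point and its specification.

```python
def search_products(catalog, keyword):
    """The function under test: a simple case-insensitive search."""
    return [name for name in catalog if keyword.lower() in name.lower()]

catalog = ["Desk Lamp", "LED Lamp", "Office Chair"]

# Expected behaviour from the (hypothetical) requirement: the search is
# case-insensitive and returns only matching items, in catalog order.
assert search_products(catalog, "lamp") == ["Desk Lamp", "LED Lamp"]
assert search_products(catalog, "sofa") == []
print("functional tests passed")
```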
STRESS TESTING
The application is tested against heavy load, such as complex numerical values, large numbers
of inputs, large numbers of queries, etc., to check how much stress/load the application can
withstand. Stress testing deals with the quality of the application in its environment. The idea
is to create an environment more demanding of the application than the application would
experience under normal work loads. This is the hardest and most complex category of testing
to accomplish and it requires a joint effort from all teams. A test environment is established with
many testing stations. At each station, a script is exercising the system. These scripts are
usually based on the regression suite. More and more stations are added, all simultaneously
hammering on the system, until the system breaks. The system is repaired and the stress test is
repeated until a level of stress is reached that is higher than expected to be present at a
customer site. Race conditions and memory leaks are often found under stress testing. A race
condition is a conflict between at least two tests. Each test works correctly when done in
isolation. When the two tests are run in parallel, one or both of the tests fail. This is usually due
to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory
behind and does not correctly return the memory to the memory allocation scheme. The test
seems to run correctly, but after being exercised several times, available memory is reduced
until the system fails.
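The lost-update behind a race condition can be demonstrated directly. The sketch below widens the window between the read and the write so the race fires reliably, then shows the lock-based fix the text alludes to; the 0.01-second delay is artificial, chosen only to make the demonstration deterministic.

```python
import threading
import time

counter = 0

def unsafe_increment():
    """Read-modify-write with a deliberately widened race window."""
    global counter
    tmp = counter          # read the shared value
    time.sleep(0.01)       # other threads read the same stale value here
    counter = tmp + 1      # write back, losing the others' updates

threads = [threading.Thread(target=unsafe_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)             # fewer than 5: updates were lost

# The fix: guard the read-modify-write with a correctly managed lock.
lock = threading.Lock()
safe_counter = 0

def safe_increment():
    global safe_counter
    with lock:             # only one thread at a time updates the value
        safe_counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)        # always 5
```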
LOAD TESTING
The application is tested against heavy loads or inputs such as testing of web sites in order to
find out at what point the web-site/application fails or at what point its performance degrades.
Load testing operates at a predefined load level, usually the highest load that the system can
accept while still functioning properly. Note that load testing does not aim to break the system
by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled
machine. In the context of load testing, extreme importance should be given to having large
data sets available for testing. Bugs simply do not surface unless you deal with very large
entities, such as thousands of users in repositories such as LDAP/NIS/Active Directory,
thousands of mail-server mailboxes, multi-gigabyte tables in databases, deep file/directory
hierarchies on file systems, etc. Testers obviously need automated tools to generate these large
data sets, and fortunately any scripting language worth its salt will do the job.
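Generating such a data set with a scripting language can look like the sketch below. The LDIF-style entries are illustrative only; a real directory import would need a schema-correct format, but the streaming pattern (generate lazily, write incrementally) is what keeps very large data sets practical.

```python
import os
import tempfile

def generate_users(count):
    """Lazily yield one synthetic directory entry per user."""
    for i in range(count):
        yield (f"dn: uid=user{i:05d},ou=people,dc=example,dc=com\n"
               f"uid: user{i:05d}\n"
               f"mail: user{i:05d}@example.com\n")

# Stream 10,000 entries to disk without holding them all in memory.
out_path = os.path.join(tempfile.mkdtemp(), "bulk_users.ldif")
written = 0
with open(out_path, "w") as out:
    for entry in generate_users(10_000):
        out.write(entry + "\n")
        written += 1
print(written)  # -> 10000
```

Scaling `count` up by a few orders of magnitude is how the "very large entities" the text mentions are produced without hand-crafting data.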
ADHOC TESTING
This type of testing is done without any formal test plan or test case creation. Ad hoc testing
helps in deciding the scope and duration of the other kinds of testing, and it also helps testers
learn the application prior to starting any other testing. It is the least formal method of
testing. One of the best uses of ad hoc testing is for discovery. Reading the requirements or
specifications (if they exist) rarely gives you a good sense of how a program actually behaves.
Even the user documentation may not capture the “look and feel” of a program. Ad hoc testing
can find holes in your test strategy, and can expose relationships between subsystems that
would otherwise not be apparent. In this way, it serves as a tool for checking the completeness
of your testing. Missing cases can be found and added to your testing arsenal. Finding new
tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or
your test team, “What other tests of this class should we be running?” Defects found while doing
ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad
hoc testing is to determine the priorities for your other testing activities. In our example program,
Panorama may allow the user to sort photographs that are being displayed. If ad hoc testing
shows this to work well, the formal testing of this feature might be deferred until the problematic
areas are completed. On the other hand, if ad hoc testing of this sorting photograph feature
uncovers problems, then the formal testing might receive a higher priority.
EXPLORATORY TESTING
This testing is similar to ad hoc testing and is done in order to learn/explore the application.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can
be orders of magnitude more productive than scripted testing. All testers perform exploratory
testing at one time or another, at least unconsciously, yet it doesn't get much respect in our
field. It can be considered "scientific thinking" in real time.
USABILITY TESTING
This testing is also called 'testing for user-friendliness'. It is done when the user interface of the
application is an important consideration and needs to be tailored to a specific type of user.
Usability testing is the process of working with end users directly and indirectly to
assess how the user perceives a software package and how they interact with it. This process
will uncover areas of difficulty for users as well as areas of strength. The goal of usability
testing should be to limit and remove difficulties for users and to leverage areas of strength for
maximum usability. This testing should ideally involve direct user feedback, indirect feedback
(observed behavior), and when possible computer supported feedback. Computer supported
feedback is often (if not always) left out of this process. Computer supported feedback can be
as simple as a timer on a dialog to monitor how long it takes users to use the dialog and
counters to determine how often certain conditions occur (i.e. error messages, help messages,
etc.). Often this involves trivial modifications to existing software, but it can result in a tremendous
return on investment. Ultimately, usability testing should result in changes to the delivered
product in line with the discoveries made regarding usability. These changes should be directly
related to real-world usability by average users. As much as possible, documentation should be
written supporting changes so that in the future, similar situations can be handled with ease.
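The computer-supported feedback described above (a timer on a dialog plus counters for conditions like error messages) is a small amount of instrumentation. The dialog here is simulated; in a real UI the same hooks would wrap the toolkit's open/close and error events.

```python
import time

class DialogMetrics:
    """Timer and counters attached to one dialog, as sketched above."""
    def __init__(self):
        self.durations = []     # seconds each user spent in the dialog
        self.error_count = 0    # how often an error message appeared

    def open(self):
        self._opened = time.monotonic()

    def close(self):
        self.durations.append(time.monotonic() - self._opened)

    def on_error(self):
        self.error_count += 1

# Simulated session: user opens the dialog, hits one error, closes it.
metrics = DialogMetrics()
metrics.open()
metrics.on_error()
metrics.close()
print(len(metrics.durations), metrics.error_count)  # -> 1 1
```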
SMOKE TESTING
This type of testing is also called sanity testing. It is done to check whether the application is
ready for further major testing and is working properly, without failing at the most basic level.
A test of new or repaired equipment by turning it on. If it smokes... guess what... it doesn't work!
The term also refers to testing the basic functions of software. The term was originally coined in
the manufacture of containers and pipes, where smoke was introduced to determine if there
were any leaks. A common practice at Microsoft and some other shrink-wrap software
companies is the "daily build and smoke test" process. Every file is compiled, linked, and
combined into an executable program every day, and the program is then put through a "smoke
test," a relatively simple check to see whether the product "smokes" when it runs.
RECOVERY TESTING
Recovery testing is basically done in order to check how quickly and how well the application
can recover from any type of crash, hardware failure, etc. The type or extent of recovery is
specified in the requirement specifications. It is basically testing how well a system recovers
from crashes, hardware failures, or other catastrophic problems.
VOLUME TESTING
Volume testing checks the efficiency of the application. A huge amount of data is processed
through the application (which is being tested) in order to check the extreme limitations of the
system.
Volume testing, as its name implies, purposely subjects a system (both hardware and
software) to a series of tests where the volume of data being processed is the subject of the
test. Such systems can be transaction-processing systems capturing real-time sales, or
database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits to a system's capacity and
ascertain whether such limits are acceptable to meet the projected capacity of the organization’s
business processing.
SCENARIO TESTING
Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program
and easy to evaluate for the tester. They provide meaningful combinations of functions and
variables rather than the more artificial combinations you get with domain testing or
combinatorial test design.
REGRESSION TESTING
Regression testing is a style of testing that focuses on retesting after changes are made. In
traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented
regression testing, we test the same areas as before, but we use different (increasingly
complex) tests. Traditional regression tests are often partially automated. These notes focus on
traditional regression testing.
Regression testing attempts to mitigate two risks:
o A change that was intended to fix a bug failed.
o Some change had a side effect, unfixing an old bug or introducing a new bug.
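Traditional regression testing can be sketched as a reused suite of old expectations rerun after every change, guarding against exactly the two risks above: an incomplete fix, and a side effect that breaks something that used to work. The function under test is a hypothetical stand-in.

```python
def price_with_discount(price, percent):
    """Changed code under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# The reused regression suite: (arguments, expected result) pairs that
# must keep holding after every change to the function above.
REGRESSION_SUITE = [
    ((100.0, 10), 90.0),   # guards the original behaviour
    ((100.0, 0), 100.0),   # guards an old bug fix: 0% must change nothing
    ((20.0, 25), 15.0),    # guards another previously verified case
]

failures = [(args, expected) for args, expected in REGRESSION_SUITE
            if price_with_discount(*args) != expected]
print("regression failures:", failures)  # -> regression failures: []
```

A change that reintroduces an old bug (say, mishandling a 0% discount) would show up immediately as a non-empty failure list.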