After completing the project planning phase, one moves on to the "requirements definition" phase. When, and only when, the requirements are fully defined, one proceeds to design. This design should be a plan for implementing the given requirements.
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or overlap between them.
The central idea behind the waterfall model is that time spent early on making sure that requirements and design are absolutely correct is very useful in economic terms (it will save much time and effort later). One should therefore make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation.
It is argued that the waterfall model in general can be suited to software projects
which are stable (especially with unchanging requirements) and where it is
possible and likely that designers will be able to fully predict problem areas of the
system and produce a correct design before implementation is started.
The waterfall model also requires that implementers follow the well-made, complete design accurately, ensuring that the integration of the system proceeds smoothly.
The waterfall model, however, is argued by many to be a bad idea in practice, mainly because of the belief that it is impossible to get one phase of a software product's lifecycle "perfected" before moving on to the next phases and learning from them (or at least, the belief that this is impossible for any non-trivial program). For example, clients may not be aware of exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this.
If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating a good deal of effort if overly large amounts of time have been invested in the model.
In response to the perceived problems with the "pure" waterfall model, many modified waterfall models have been introduced, namely Royce's final model, the sashimi model, and other alternative models. These models may address some or all of the criticisms of the "pure" waterfall model. There are other alternative SDLC models, such as the Spiral and V models, which are explained later in this chapter.
After the project is completed, the Primary Developer Representative (PDR) and Primary End-User Representative (PER), in concert with other customer and development team personnel, develop a list of recommendations for enhancement of the current software.
Prototypes
The software development team, to clarify requirements and/or design elements,
may generate mockups and prototypes of screens, reports, and processes.
Although some of the prototypes may appear to be very substantial, they're
generally similar to a movie set: everything looks good from the front but there's
nothing in the back.
When a prototype is generated, the developer produces the minimum amount of code necessary to clarify the requirements or design elements under consideration.
The PER and PDR may, at their discretion, allow the development effort to
continue while previous stage deliverables are updated in cases where the
impacts are minimal and strictly limited in scope. In this case, the changes must
be carefully tracked to make sure all their impacts are appropriately handled.
Spiral Lifecycle
The spiral model starts with an initial pass through a standard waterfall lifecycle,
using a subset of the total requirements to develop a robust prototype. After an
evaluation period, the cycle is initiated again, adding new functionality and
releasing the next prototype. This process continues with the prototype becoming
larger and larger with each iteration, hence the spiral.
The Spiral model is used most often in large projects and needs constant review to stay on target. For smaller projects, the concept of agile software development is becoming a viable alternative. Agile software development tends to be rather more extreme in its approach than the spiral model.
The theory is that the set of requirements is hierarchical in nature, with additional functionality building on the first efforts. This is a sound practice for systems where the entire problem is well defined from the start, such as modeling and simulation software. Business-oriented database projects do not enjoy this advantage. Most of the functions in a database solution are essentially independent of one another, although they may make use of common data. As a result, the prototype suffers from the same flaws as the prototyping lifecycle. For this reason, software development teams usually decide against the use of the spiral lifecycle for database projects.
V-Model
The V-model was originally developed from the waterfall software process model. The four main process phases (requirements, specification, design and implementation) each have a corresponding verification and validation testing phase. Implementation of modules is tested by unit testing, system design is tested by integration testing, system specifications are tested by system testing and, finally, acceptance testing verifies the requirements. The V-model gets its name from the timing of the phases. Starting from the requirements, the system is developed one phase at a time until the lowest phase, the implementation phase, is finished. At this stage testing begins, starting from unit testing and moving up one test level at a time until the acceptance testing phase is completed. During the development stage, the program will be tested at all levels simultaneously.
The different levels in the V-model are unit tests, integration tests, system tests and acceptance tests. The unit tests and integration tests ensure that the system design is followed in the code. The system and acceptance tests ensure that the system does what the customer wants it to do. The test levels are planned so that each level tests different aspects of the program and so that the testing levels are independent of each other. The traditional V-model states that testing at a higher level is started only when the previous test level is completed.
Verification vs Validation

Verification: "Am I building the product right?"
- The review of interim work steps and key artifacts, like walkthroughs, inspections and mentor feedback, ensuring consistency at each stage of the project.

Validation: "Am I building the right product?"
- Determines if the system satisfies the requirements and accesses the right data (in terms of the data required by the requirement).
- High level activity.
- Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly with respect to the user needs and requirements.
Levels of Testing

Types of Testing
The test strategy consists of a series of tests that will fully exercise the product, uncover its defects and measure its full capability. A list of a few of these tests, with a brief explanation of each, is given below.
Sanity test
System test
Performance test
Security test
Functionality/Automated test
Recovery test
Document test
Beta test
Load
Volume
Usability
Reliability
Storage
Internationalization
Configuration
Compatibility
Installation
Scalability
Sanity Test
Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
System Test
System tests focus on the behavior of the product. User scenarios will be executed against the system, as well as screen mapping and error message testing. Overall, system tests will exercise the integrated system and verify that it meets the requirements defined in the requirements document.
Performance Test
Performance tests will be conducted to ensure that the product's response time meets user expectations and does not fall outside the specified performance criteria. During these tests, the response time will be measured under simulated heavy stress and/or volume.
Security Test
Security tests will determine how secure the product is. They will verify that unauthorized user access to confidential data is prevented, and will also verify data storage security, the strength of encryption algorithms, vulnerability to hacking, etc.
Functionality/Automated Test
A suite of automated tests will be developed to test the basic functionality of the
product and perform regression testing on all areas of the system to identify and
log all errors.
Load Testing
Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can be load-tested as well. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data.
When the load placed on the system is raised beyond normal usage patterns, in
order to test the system's response at unusually high or peak loads, it is known as
Stress testing. The load is usually so great that error conditions are the expected
result, although there is a gray area between the two domains and no clear
boundary exists when an activity ceases to be a load test and becomes a stress
test.
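To make the idea concrete, here is a minimal load-test sketch in Python. It is illustrative only: the URL and the number of simulated users are assumptions, and a real load test would use a dedicated tool and far richer measurement.

    import threading
    import time
    import urllib.request

    URL = "http://localhost:8080/"   # hypothetical service under test
    USERS = 50                       # assumed number of concurrent simulated users

    def simulated_user(results, idx):
        # One simulated user: issue a request and record its response time.
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10)
            results[idx] = time.time() - start
        except Exception:
            results[idx] = None      # count as a failure

    results = [None] * USERS
    threads = [threading.Thread(target=simulated_user, args=(results, i))
               for i in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    ok = [r for r in results if r is not None]
    if ok:
        print(f"{len(ok)}/{USERS} requests succeeded, "
              f"mean response time {sum(ok) / len(ok):.3f}s")
    else:
        print("all requests failed")

Raising USERS well beyond the expected usage pattern turns the same sketch into a crude stress test.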
Volume Testing
Volume testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of the data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or could be database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits of a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization's business processing.
Usability testing
Usability testing is carried out pre-release so that any significant issues identified can be addressed. It can be carried out at various stages of the design process. In the early stages, however, techniques such as walkthroughs are often more appropriate.
Exploratory testing occurs early in the design process, when the development team can decide the appropriate direction for the site's look and feel, navigation, and functionality.
Assessment testing occurs when the site is close to launch. Here you can get
feedback on issues that might present huge problems for users but are relatively
simple to fix.
Evaluation testing can be useful to validate the success of a site subsequent to
launch. A site can be scored and compared to competitors, and this scorecard can
be used to evaluate the project's success.
Storage testing
Storage testing is used to study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them.
Reliability testing
Reliability testing verifies the probability of failure-free operation of a computer program in a specified environment for a specified time.
The reliability of an object is defined as the probability that it will not fail under specified conditions over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t), a function of time t: the probability that the object will not fail within time t.
Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in; the software does not break, rather it was always broken. But unless conditions are right to excite a flaw, it will go unnoticed - the software will appear to work properly.
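As a small illustration, the sketch below evaluates R(t) under an assumed exponential failure model, R(t) = exp(-lambda * t). Both the failure rate used and the exponential form itself are assumptions for the example; the text defines R(t) only in general terms.

    import math

    failure_rate = 0.002      # assumed failures per hour (illustrative only)

    def reliability(t_hours):
        # Probability that the system runs failure-free for t_hours.
        return math.exp(-failure_rate * t_hours)

    for t in (10, 100, 1000):
        print(f"R({t}) = {reliability(t):.4f}")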
Internationalization testing
Testing related to handling foreign text and data (for example, currency) within the program. This includes sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth.
Compatibility Testing
Compatibility testing is the process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
Install/uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.
Scalability testing
Scalability testing is a subtype of performance testing where performance requirements for response time, throughput, and/or utilization are tested as the load on the system under test (SUT) is increased over time.
Configuration testing
Quality Monitor runs tests to automatically verify (before deployment) that all
installation elements such as file extensions and shortcuts are configured
correctly and that all key functionality will work in the intended environment.
For example, all of the application shortcuts are automatically listed. Each one can
be selected and run. If the shortcut launches the application as expected, a
positive comment can be logged and the test will be marked as successful.
Test Techniques
There are mainly two types of test techniques: black box testing and white box testing.
In black box testing, the tester needs no knowledge of the implementation, including specific programming languages. Its limitations are:
- only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- without clear and concise specifications, test cases are hard to design
Most testing-related research has been directed towards white box testing. White box test cases:
- guarantee that all independent paths within a module have been executed at least once
- execute all loops at their boundaries and within their operational bounds
We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis; our assumptions about the logical flow of a program and its data may lead us to make design errors that are uncovered only when path testing commences.
Typographical errors are random: when a program is translated into programming language source code, it is likely that some typing errors will occur. A typo is as likely to exist on an obscure logical path as on a mainstream path.
Aspects of Code to consider
White box testing of software is predicated on a close examination of procedural detail. White box test cases exercise specific sets of conditions and/or loops, testing logical paths through the software. The "status of the program" may be examined at various points to determine if the expected or asserted status corresponds to the actual status.
At first glance it would seem that very thorough white box testing would lead to a 100% correct program. All we need to do is define all logical paths, develop test cases to exercise them and evaluate their results, i.e. generate test cases to exercise program logic exhaustively. Unfortunately, exhaustive testing presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. Consider, for example, the procedural design that might correspond to a 100-line Pascal program with a single loop that may be executed no more than 20 times: there are approximately 10^14 possible paths that may be executed!
To put this number in perspective, assume that a magic test processor ("magic" because no such processor exists) has been developed for exhaustive testing. The processor can develop a test case, execute it and evaluate the result in one millisecond. Working 24 hours a day, 365 days a year, the processor would work for about 3,170 years to test such a program. This would, undeniably, cause havoc in most development schedules. Exhaustive testing is impossible for large software systems. White box testing should not, however, be dismissed as impractical. A limited number of important logical paths can be selected and exercised, and important data structures can be probed for validity.
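The arithmetic behind the 3,170-year figure is easy to reproduce; the sketch below assumes exactly 10^14 paths and one millisecond per test case, as in the example.

    paths = 10 ** 14
    seconds = paths * 1e-3               # one millisecond per test case
    years = seconds / (365 * 24 * 3600)  # working 24 hours a day, 365 days a year
    print(f"{years:,.0f} years")         # about 3,171 years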
Test Adequacy Criteria
White box testing is very useful for establishing test adequacy as well as for designing test cases. The test adequacy criteria are based on "code coverage analysis", which includes coverage of statements, branches, paths and loops.
The code coverage analysis methods mentioned above are discussed in the
section below.
The goal of white box testing is to ensure that the internal components of a program are working properly. A common focus is on structural elements such as statements and branches. The tester develops test cases that exercise these structural elements to determine if defects exist in the program structure. By exercising all of these structural elements, the tester hopes to improve the chances of detecting defects.
Testers need a framework for deciding which structural elements to select as a focus of testing, for choosing the appropriate test data, and for deciding when the testing efforts are adequate enough to terminate the process with confidence that the software is working properly. The criteria can be viewed as representing minimal standards for testing a program.
The application scope of adequacy criteria also includes helping testers to select a test data set for the program based on the selected properties.
Statement Testing
Branch testing
Path testing
Loop testing
Statement testing
In this method every source language statement in the program is executed at least once, so that no statements are missed out. Here we need to be concerned about the statements controlled by decisions. Statement testing is usually regarded as the minimum level of coverage in the hierarchy of code coverage and is therefore a weak technique. It is based on the notion that it would feel absurd to release a piece of software without having executed every statement.
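A minimal sketch of what statement coverage demands, using an invented function: a single test input leaves the body of the if unexecuted, so a second input is needed before every statement has run at least once.

    def clamp_positive(a):
        # Illustrative unit under test.
        result = a
        if a < 0:
            result = 0   # this statement runs only when a < 0
        return result

    # Together these two inputs execute every source statement at least once:
    assert clamp_positive(5) == 5    # misses the body of the if
    assert clamp_positive(-5) == 0   # executes the remaining statement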
In statement testing, 100% coverage may be difficult to achieve, considering the following:
Path testing
Choosing the distinct paths is not easy, and different people may choose different distinct paths.
Path coverage is assessed by:
Path coverage = (number of paths executed) / (total number of paths)
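A small sketch of the metric, with invented path names; in practice a coverage tool, not a hand-written set, would record which paths the tests exercised.

    # All distinct paths through a hypothetical module, named by their nodes.
    total_paths = {"A-B-D", "A-B-E", "A-C-D", "A-C-E"}
    executed = {"A-B-D", "A-C-E"}    # paths actually exercised by the tests

    path_coverage = len(executed & total_paths) / len(total_paths)
    print(f"Path coverage = {path_coverage:.0%}")   # 50%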
Loop testing
Loop testing strategies focus on detecting common defects associated with loop structures such as simple loops, concatenated loops and nested loops. Loop testing makes sure that all the loops in the program have been traversed at least once during testing. Defects in these areas are normally due to poor programming practices or inadequate reviewing.
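One common simple-loop heuristic is to exercise a loop with zero, one, two, a typical number, and around the maximum number of iterations. The sketch below applies that heuristic to an invented function, assuming a design maximum of n = 100 iterations.

    def moving_sum(values):
        # Illustrative unit under test: its loop runs once per element.
        total = 0
        for v in values:
            total += v
        return total

    n = 100   # assumed maximum number of loop iterations
    cases = {
        "zero passes": [],
        "one pass": [1],
        "two passes": [1, 2],
        "typical": list(range(n // 2)),
        "n - 1 passes": list(range(n - 1)),
        "n passes": list(range(n)),
        "n + 1 passes": list(range(n + 1)),   # just beyond the design maximum
    }
    for name, data in cases.items():
        print(name, "->", moving_sum(data))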
Why Code Coverage Analysis?
Code coverage analysis is necessary to satisfy test adequacy criteria, and in practice it is also used to set testing goals and to develop and evaluate test data. In the context of coverage analysis, testers often refer to test adequacy criteria as "coverage criteria".
When a coverage-related testing goal is expressed as a percentage, it is often called the "degree of coverage". The planned degree of coverage is usually specified in the test plan and then measured, when the tests are actually executed, by a coverage tool. The planned degree of coverage is usually specified as 100% if the tester wants to completely satisfy the commonly applied test adequacy or coverage criteria.
Under some circumstances, the planned degree of coverage may be less than 100%, possibly due to the following reasons:
1. The nature of the unit.
2. The time set aside for testing is not adequate to achieve 100% coverage.
3. There are not enough trained testers to complete coverage for all units.
The application of coverage analysis is associated with the use of control and data models to represent structural elements and data. The logic elements are based on the flow of control in the code, such as program statements.
All structured programs can be built from three basic primes: sequential, decision and iterative. Primes are nothing but representations of the flow of control in a software program, which can take any one of these forms. The figure shows the graphical representation of the primes mentioned above.
[Figure: the three primes - sequence, condition (true/false branches) and iteration.]
Using the concept of a prime and the ability to combine primes to develop structured code, a control flow diagram for the software under test can be developed. The tester can use the flow graph to evaluate the code with respect to its testability, as well as to develop white box test cases. The direction of each transfer of control depends upon the outcome of the condition.
Code Complexity
Cyclomatic Complexity
Thomas McCabe coined the term cyclomatic complexity. Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. The concept of cyclomatic complexity can be well explained using a flow graph representation of a program, as shown below.
[Figure: a flow graph with numbered nodes connected by edges.]
In the diagram above circles denote nodes, and arrows denote edges. Each circle
represents a processing task (one or more source code statements) and arrows
represent flow of control.
Code Complexity Value
McCabe defines a software complexity measure that is based on the cyclomatic complexity of a program graph for a module. Otherwise known as the code complexity number, cyclomatic complexity is a very useful attribute for the tester. The formula proposed by McCabe can be applied to flow graphs that have no disconnected components. Cyclomatic complexity is computed in one of the following ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V, for a flow graph is defined as V = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V, is also defined as V = P + 1, where P is the number of predicate nodes contained in the flow graph. A predicate node is one that contains a condition, with two or more edges emanating from it.
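A small sketch of the second formula, computed for an invented flow graph held as an adjacency list; the same graph has one predicate node, so V = P + 1 gives the same answer.

    # Hypothetical flow graph: node 1 is a predicate node (two outgoing edges).
    flow_graph = {
        1: [2, 3],
        2: [4],
        3: [4],
        4: [],       # exit node
    }

    N = len(flow_graph)                                # nodes
    E = sum(len(out) for out in flow_graph.values())   # edges
    V = E - N + 2
    print(f"E = {E}, N = {N}, V = {V}")                # E = 4, N = 4, V = 2

The two independent paths here (1-2-4 and 1-3-4) match the computed V = 2.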
The tester can use the value of V, along with past project data, to approximate the testing time and resources required to test a software module. The complexity value V, along with the control flow graph, gives the tester another tool for developing white box test cases using the concept of paths.
The bug life cycle begins when a programmer, software developer, or architect makes a mistake in any phase (requirements analysis, functional design, internal design, and so on), creating an unintentional software defect, i.e. a bug; it ends when the bug is fixed and is no longer in existence.
What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be retested.
Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations.
A variety of commercial problem-tracking/management software tools are available, which can be used by software test engineers. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
Test cases
What's a Test Case?
A test case specifies the pretest state of the implementation under test (IUT) and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. This specification includes messages generated by the IUT, exceptions, returned values, and the resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment. Some more definitions are listed below.
A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
A test case is a document specifying inputs, predicted results, and a set of execution conditions for a test item (as per the IEEE Standard 829-1983 definition).
Test cases are the specific inputs that you'll try and the procedures that you'll follow when you test the software.
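The sketch below casts one such documented test case in code form, following the definitions above: test inputs, execution conditions and expected results, including an expected exception. The function under test, apply_discount, is invented for the example.

    import unittest

    def apply_discount(price, percent):
        # Hypothetical unit under test: discount a price by a percentage.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_discount(self):
            # Inputs: price=200.0, percent=10; expected result: 180.0
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_invalid_percent_raises(self):
            # The expected result may be an exception, per the definitions above.
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()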
Always review your work, particularly after several days (when possible). Writing does get easier with practice, but it never gets easy.
Use Cases
To write test cases, it is best if use cases are available.
A use case typically includes the following elements:
Name. The name should implicitly express the user's intent or purpose of the use case, such as "Enroll Student in Seminar."
Actors [Optional]. The list of actors associated with the use case. Although this information is contained in the use case itself, it helps to increase the understandability of the use case when the diagram is unavailable.
Frequency. How often this use case is invoked by the actor. This is often a free-form answer, such as once per user login or once per month.
Extended use case [Optional]. The use case that this use case extends (if any). An extend association is a generalization relationship where an extending use case continues the behavior of a base use case.
Included use cases [Optional]. A list of the use cases this one includes. An include association is a generalization relationship denoting the inclusion of the behavior described by a use case within another use case. This is modeled using a use-case association with the <<include>> stereotype. Also known as a uses or a has-a relationship.
Change history [Optional]. Details about when the use case was modified, why, and by whom.
Where such use cases are not available, do not attempt to create them, as it is very difficult to create accurate and complete use cases unless the design documents are available.
b) For each component in the work-flow, create a list of the screens that are
involved.
c) For each screen, create a list of fields that must be checked.
For each
There should not be any ambiguity in the steps nor in the resulting values.
j) Test cases must be written to arrive at positive results. When writing test cases for applications where the use cases are not available, negative-result test cases are also needed.
Software Inspection
Software inspection is a verification process which is carried out in a formal, systematic and organized manner. Verification techniques are used during all phases of the software development life cycle to ensure that product generation is happening the "right way". The verification techniques other than formal inspection are reviews and walkthroughs. These processes work directly on the work product under development. Other verification techniques, which also ensure product quality, are management reviews and audits. We will be focusing more on reviews, walkthroughs and formal inspections.
Several steps are carried out during the inspection process, as the figure reveals:

[Figure: the inspection process - inspection policies and plan, entry criteria, checklist, invitation, preparation, inspection meeting, reporting results (fed into the defect database and the metric database), rework and follow-up, and exit.]

The responsibility for initiating and carrying through the steps belongs to the inspection leader.
The inspection leader plans for the inspection, sets the date, schedules the meeting, runs the inspection meeting, appoints a recorder to record the results, and monitors the follow-up period after the review.
Entry criteria
The inspection process begins when the inspection pre-conditions are met, as specified in the inspection policies, procedures and plans. A personal pre-inspection should be performed carefully by each team member. Errors, problems and items of concern should be noted by each individual for each item on the list. When the actual meeting takes place, the document under review is presented by a reader and is discussed as it is read. Attention is paid to issues related to quality, adherence to standards, testability, traceability and satisfaction of the user's requirements.
Checklist
The checklist varies with the software artifact being inspected. It contains items that the inspection participants should focus their attention on, check and evaluate. The inspection participants address each item on the checklist. The recorder records any discrepancies, misunderstandings, errors and ambiguities, or in general any problem associated with an item. The completed checklist is part of the review summary document.
Reviewers use the organization's standard checklist for all work products to look for common errors. Specific checklists are also prepared for individual work products to increase review effectiveness before the actual review/inspection. Checklists are dynamic and improve over time in the organization through analysis of the root causes of the common issues found during the inspection process.
Invitation
The inspection leader invites each member participating in the meeting and
distributes the documents that are essential for the conduct of the meeting.
Preparation
The key item that the inspection leader prepares is the checklist of items that serves as the agenda for the review, which helps in determining the areas of focus, the objectives and the tactics to be used.
Inspection meeting
The inspection leader announces the inspection meeting and distributes the items to be inspected, the checklist and any other auxiliary materials to the participants, usually a day or two before the scheduled meeting. Participants must do their homework and study the items and the checklist. The group as a whole addresses all the items in the checklist, and the problems found are recorded. The recorder documents all the findings and measurements.
Reporting results
When the inspection meeting has been completed (all agenda items covered), the inspectors are usually asked to sign a written document that is sometimes called a summary report. The defect report is stored in the defect database. Inspection metrics are also recorded in the metric database.
The inspection process requires formal follow-up. Rework sessions should be scheduled as needed and monitored to ensure that all problems identified at the inspection meeting have been addressed and resolved. This is the responsibility of the inspection leader. Only when all problems have been resolved, and the item has been re-inspected by the group or the moderator, is the inspection process complete.
Recorder/scribe - Documents the findings and measurements of the review.
Author - Owner of the document; presents the review item; performs any needed rework on the reviewed item.
Inspectors - Attend review-training sessions; prepare for reviews; participate in meetings; evaluate the reviewed item; perform rework where appropriate.
Understandability:
Code inspection
When software code is inspected, one can check the code's adherence to the design specification and the constraints it is supposed to handle. We can also check its use of the language and the correspondence of the terms in the code with the data dictionary.
Reviews
A review involves a meeting of a group of people whose intention is to evaluate a software-related item. Reviews are a type of static testing technique that can be used to evaluate the quality of software artifacts such as a requirements document, a test plan, a design document or a code component.
The composition of a review group may consist of managers, clients, developers, testers and other personnel, depending on the type of artifact under review.
Review objectives
The general goals for the reviewers are to
-
Reviews are characterized by their less formal nature compared to inspections. Reviews are more like a peer group discussion, with no specific focus on defect identification, data collection and analysis. Reviews do not require elaborate preparation.
Walkthroughs
Walkthroughs are a type of technical review where the producer of the review material serves as the review leader and actually guides the progress of the review. They are normally applied to the following documents.
-
In the case of detailed design and code walkthroughs, test inputs may be selected, and review participants walk through the design or code with the set of inputs in a line-by-line manner. The reader can compare this process to manual execution of the code. If the presenter gives a skilled presentation of the material, the walkthrough participants are able to build a comprehensive mental model of the detailed design or code, and are able to evaluate its quality and detect defects.
The primary intent of a walkthrough is to familiarize a team with a work product. Walkthroughs are useful when one wants to impart a complex work product to a team. For example, a walkthrough of a complex design document can be given by the lead designer to the coding team.
Advantages of walkthroughs
The following are the advantages of a walkthrough that make it attractive for reviewing less critical artifacts:
- One can eliminate the checklist and preparation steps for a walkthrough.
Benefits of Inspection
The benefits of inspection can be classified as direct and indirect benefits, which are discussed below.
Direct benefits:
- Testing cost, and in turn the time required for testing, can be reduced by 50% to 90%.
Indirect benefits:
The indirect benefits include the following:
Management benefits:
- The software inspection process gives visibility to the relevant facts and
Deadline benefits:
- Inspection will give early danger signals, so that one can act appropriately and reduce the impact on the project's deadline.
- The quality of the work is improved to a greater extent, and it becomes more maintainable, when inspection is carried out.
- Also, software professionals can expect to live under less intense deadline pressure.
There are mixed-mode operations that can give rise to errors in the program. Selective testing of execution paths is an essential task during unit tests. Test cases should be designed to uncover erroneous computations, incorrect comparisons and improper control flow. Path and loop testing are effective techniques to uncover a broad array of path errors. They should also uncover potential defects in
-
Be sure to design tests that execute every error handling path; if we do not, the path may fail when it is invoked. Good design dictates that error conditions be anticipated, and error handling paths set up to reroute or cleanly terminate processing when an error occurs.
Among the potential errors that should be tested when error handling is evaluated are:
- The error noted in the unit code does not correspond to the error encountered.
[Figure: the unit test environment - a driver feeds test cases to the module to be tested, exercising its interface, local data structures, boundary conditions, independent paths and error handling paths; stubs stand in for subordinate modules; results are collected.]
The unit test environment is illustrated in the figure above. Since the component is not a standalone program, driver and/or stub software must be developed for each unit test. In most applications the driver is nothing other than a "main program" that accepts test case data, passes such data to the component to be tested, and prints the relevant results. Stubs serve to replace modules that are subordinate to (called by) the component under test. A stub, or "dummy subprogram", uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module under test.
Drivers and stubs represent overhead, as they are not part of the final deliverable. A judicious decision must be made whether to write many stubs and drivers or to postpone some of the testing to the integration phase.
Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
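A minimal sketch of a driver and a stub, with invented module names: compute_invoice stands in for the component under test, and its subordinate tax-lookup module is replaced by a stub that prints verification of entry and returns a canned value.

    def stub_tax_rate(region):
        # Stub for the subordinate module: minimal behavior, fixed result.
        print(f"stub_tax_rate called with region={region!r}")
        return 0.10

    def compute_invoice(amount, region, tax_rate_lookup):
        # Component under test: total = amount plus regional tax.
        return amount * (1 + tax_rate_lookup(region))

    def driver():
        # Driver ("main program"): feeds test case data to the component
        # and prints the relevant results.
        for amount, region in [(100.0, "north"), (250.0, "south")]:
            total = compute_invoice(amount, region, stub_tax_rate)
            print(f"amount={amount}, region={region} -> total={total}")

    driver()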
Integration testing
There is a risk that data can be lost at an interface. Similarly, one module can have an inadvertent adverse effect on another. When several sub-functions are combined, the desired major function may not be produced, and individually acceptable imprecision may be magnified to unacceptable levels. Global data structures can also present problems. In order to catch such issues, we need to carry out integration testing.
Integration testing is a systematic testing technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces.
The interfaces are more adequately tested when the units are finally connected to a full and working implementation of the units they call and those that call them. As a consequence of this integration process, software subsystems are put together to form a complete system, and integration testing is carried out.
With a few minor exceptions, integration tests should be performed on units that have been reviewed and have successfully passed unit testing. A tester might believe, erroneously, that since a unit has already been tested during unit test with a driver and stubs, it does not need to be retested in combination with other units during integration tests. However, a unit tested in isolation may not have been tested adequately for the situation where it is combined with other modules.
One unit at a time is integrated into a set of previously integrated modules which have passed a set of integration tests. The interfaces and the functionality of the new unit, in combination with the previously integrated units, are tested.
Integration Strategies
Non-Incremental Approach
This approach is also called the big bang approach. A representation of modules in the big bang approach is shown in the figure.
[Figure: big bang integration - modules M1 through M8 combined at once.]
In this approach, all the modules are put together, combined randomly into an integrated system, and then tested as a whole; all modules are combined in advance and the entire program is tested. The result is usually chaos, because a whole set of errors is encountered. Correction is difficult, because isolation of the causes is complicated by the vast expanse of the program. Once these errors are corrected, new defects pop up, and this becomes a seemingly endless loop. To solve this problem we come up with another strategy, which is called the incremental approach.
Incremental Approach
As the name suggests, here we assemble the different units one by one, in a systematic or incremental manner. Each module is tested before it is integrated with the next unit, and this continues until all the units are integrated into one system. We can classify the incremental approach as:
- Top down approach
- Bottom up approach
- Sandwich approach
[Figure: top-down integration of modules M1 through M8.]
Advantages of top-down integration
The top-down integration strategy verifies the major control or decision points early in the test process. In a well-factored program structure, decision-making occurs at the upper levels, so if a major control problem exists it will be recognized early.
If a depth-first approach is used, a complete function of the software may be implemented and demonstrated early. Early demonstration of functional capability is a confidence builder.
Disadvantages of top-down integration
Logistical problems can arise with this approach. The most common occurs when processing at low levels of the hierarchy is required to adequately test the upper levels: at the beginning of top-down testing, stubs replace the low-level modules of the program structure, so no significant data can flow upward in the hierarchy. The tester is left with the choice of delaying many tests until the stubs are replaced with actual modules, but this delay in tests causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules.
Bottom-up Integration
As its name implies, in bottom-up integration testing begins with the construction and testing of atomic modules, i.e., components at the lowest levels in the program structure. Since components are integrated from the bottom up, the processing required for the components subordinate to a given level is always available, and the need for stubs is eliminated.
Bottom-up integration is a technique in which modules are added to the growing subsystem starting from the bottom, or lower, level. It begins with testing the lowest-level modules, which do not call other modules; drivers are needed to test them. The next step is to integrate the modules on the next upper level of the structure. In this process, after a module has been tested, its driver is replaced by the actual module that is next to be integrated.

[Figure: bottom-up integration - low-level components combined into clusters 1, 2 and 3, each tested with a driver (D1, D2, D3), subordinate to modules Ma and Mb.]
A cluster consists of classes that are related and that may work together to support a required functionality for the complete system. In the figure, components are combined to form clusters 1, 2 and 3. Each of these clusters is tested using a driver, shown as a dashed box. Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. This process is followed for the other modules as well.
Advantages of bottom-up integration
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be substantially reduced and the integration of clusters is greatly simplified. Such a combined approach is called sandwich integration testing.
Bottom-up integration has the advantage that the lower-level modules are usually well tested early in the integration process. This is important if these modules are candidates for reuse.
Disadvantages of bottom-up integration
The major disadvantage of this strategy is that the program as an entity does not exist until the last module has been added to it. This drawback is tempered by easier test case design and a lack of stubs.
The upper-level modules in bottom-up integration are tested later in the integration process and consequently may not be tested as well. If they are critical decision makers, this can be risky.
Sandwich Integration
Sandwich integration is a strategy where both top-down and bottom-up integration strategies are used to integration test a program. This combined approach is useful when the program structure is very complex and frequent use of drivers and stubs becomes unavoidable: by using the sandwich approach on some portions of the code, we can eliminate excessive use of drivers and stubs, thereby simplifying the integration testing process. It uses the top-down approach for modules in the upper levels of the hierarchy and the bottom-up approach for the lower levels.
The figure below shows the top-down and bottom-up approaches incorporated in a sandwich model:

[Figure: sandwich integration of modules M1 through M8.]

In the figure, the bottom-up approach is applied to modules M1, M2 and M3, while the top-down approach is applied to M1, M4 and M8. The sandwich model has been widely used, since it combines the top-down and bottom-up approaches.
Regression Testing
Regression tests are run every time a change is made to the software, so that we can confirm that the change has not broken or altered any other part of the software program. Regression testing is an important strategy for reducing side effects.
Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O may occur and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
In a broader context, successful testing results in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, etc.) is changed. Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors.
As integration testing proceeds, the number of regression tests can grow quite large. Regression testing may be conducted manually or by using automated tools. It is impractical and inefficient to re-execute every test for every program function once a change has occurred. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
Tests that focus on the software components that have been changed constitute the third class of test cases in the regression test suite.
Smoke Testing
Smoke testing is an integration testing strategy that is commonly used when "shrink-wrapped" software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. The smoke testing approach encompasses the following activities:
- Software components that have been translated into code are integrated into a "build." A build includes all data files, libraries, reusable modules and engineered components that are required to implement one or more product functions.
- A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.
- The build is integrated with other builds, and the entire product in its current form is smoke tested daily. The integration approach may be top-down or bottom-up.
Treat the daily build as the heartbeat of the project: if there is no heartbeat, the project is dead. The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly.
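A minimal daily smoke-test sketch follows. The build module name and its workflow function are invented; the point is the shape of the check - quick, end to end, pass/fail - not the specific calls.

    import sys

    def smoke_tests():
        checks = []

        # 1. The build can be imported at all (catches packaging show-stoppers).
        try:
            import myproduct                 # hypothetical build under test
            checks.append(("import build", True))
        except Exception:
            checks.append(("import build", False))
            return checks                    # nothing else can run

        # 2. One core operation completes end to end without crashing.
        try:
            myproduct.run_basic_workflow()   # hypothetical core function
            checks.append(("basic workflow", True))
        except Exception:
            checks.append(("basic workflow", False))
        return checks

    results = smoke_tests()
    for name, passed in results:
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    sys.exit(0 if all(passed for _, passed in results) else 1)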
Benefits of smoke testing
Smoke testing provides a number of benefits when it is applied to complex, time-critical software engineering projects. Some of them are discussed below.
Integration risk is minimized - because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
Error diagnosis and correction are simplified - software that has just been added to the build is a probable cause of a newly discovered defect, and smoke testing makes such defect identification easier.
Modules that use critical system resources like CPU, memory, I/O
devices, etc.
Tests designed to uncover errors associated with local or global data structures are conducted.
Non-functional issues:
Performance - Tests designed to verify performance bounds established during software design are conducted. Performance issues, such as the time requirements for a transaction, should also be subjected to tests.
Resource hogging issues - When modules are integrated, one module may interfere with the resources used by another module, or might require additional resources for its own functions. This usage of additional resources like CPU, memory, external devices, etc. can cause interference and contention.
System Testing
Why System Testing?
System testing requires a large amount of resources. The goal is to ensure that the system performs according to its requirements. The system test evaluates both functional behavior and quality requirements such as performance, reliability, usability and security. This phase of testing is especially useful for detecting external hardware and software interface defects, for example those causing race conditions, deadlocks, problems with interrupts and exception handling, and ineffective memory usage. After system test, the software will be turned over to users for evaluation during acceptance testing and alpha/beta tests. The organization will want to ensure that the quality of the software has been measured and evaluated before users or clients are invited to use the system.
Exploratory Testing
How it differs from Scripted Testing?
The plainest definition of exploratory testing is test design and test execution at
the same time. This is the opposite of scripted testing (predefined test
procedures, whether manual or automated). Exploratory tests, unlike scripted
tests, are not defined in advance and carried out precisely according to plan. This
may sound like a straightforward distinction, but in practice it's murky. That's
because "defined" is a spectrum. Even an otherwise elaborately defined test
procedure will leave many interesting details (such as how quickly to type on the
keyboard, or what kinds of behavior to recognize as a failure) to the discretion of
the tester. Likewise, even a free-form exploratory test session will involve tacit
constraints or mandates about what parts of the product to test, or what
strategies to use. A good exploratory tester will write down test ideas and use
them in later test cycles. Such notes sometimes look a lot like test scripts, even if
they aren't. Exploratory testing is sometimes confused with "ad hoc" testing. Ad hoc testing normally refers to a process of improvised, impromptu bug searching. By definition, anyone can do ad hoc testing. The term "exploratory testing" was coined by Cem Kaner in his book Testing Computer Software.
Balancing Exploratory Testing with Scripted Testing
To the extent that the next test we do is influenced by the result of the last test
we did, we are doing exploratory testing. We become more exploratory when we
can't tell what tests should be run, in advance of the test cycle, or when we
haven't yet had the opportunity to create those tests. If we are running scripted
tests, and new information comes to light that suggests a better test strategy, we
may switch to an exploratory mode (as in the case of discovering a new failure
that requires investigation). Conversely, we take a more scripted approach when
there is little uncertainty about how we want to test, new tests are relatively
unimportant, the need for efficiency and reliability in executing those tests is
worth the effort of scripting, and when we are prepared to pay the cost of
documenting and maintaining tests. The results of exploratory testing aren't
necessarily radically different than those of scripted testing, and the two
approaches to testing are fully compatible.
Why Exploratory Testing?
Recurring themes in the management of an effective exploratory test cycle are the tester, test strategy, test reporting and test mission. The scripted approach to testing attempts to mechanize the test process by taking test ideas out of a test designer's head and putting them on paper. There's a lot of value in that way of
testing. But exploratory testers take the view that writing down test scripts and
following them tends to disrupt the intellectual processes that make testers able
to find important problems quickly. The more we can make testing intellectually
rich and fluid, the more likely we will hit upon the right tests at the right time.
That's where the power of exploratory testing comes in: the richness of this
process is only limited by the breadth and depth of our imagination and our
emerging insights into the nature of the product under test.
Scripting has its place. We can imagine testing situations where efficiency and
repeatability are so important that we should script or automate them. For
example, in the case where a test platform is only intermittently available, such as
a client-server project where there are only a few configured servers available and
they must be shared by testing and development. The logistics of such a situation
may dictate that we script tests carefully in advance to get the most out of every
second of limited test execution time. Exploratory testing is especially useful in
complex testing situations, when little is known about the product, or as part of
preparing a set of scripted tests. The basic rule is this: exploratory testing is called
for any time the next test you should perform is not obvious, or when you want to
go beyond the obvious.
Testing Process
The diagram above outlines the test process approach that will be followed.
a. Organize Project - involves creating a System Test Plan, Schedule & Test
b. & Exit Criteria, Expected Results, etc. In general, test
d. data set-ups.
e. Execute Project Integration Test.
f. Execute Operations Acceptance Test.
g. Signoff - Signoff happens when all pre-defined exit criteria have been achieved.
The fundamental test process comprises planning, specification, execution, recording and checking for completion. The planning process consists of five main stages, as the figure below depicts.

[Figure: the five main stages - test planning, test design, test execution, test recording and the test phases.]
Test Planning
Test planning involves producing a document that describes your overall approach and test objectives. Completion or exit criteria must be specified so that you know when testing (at any stage) is complete. Plan your tests.
A tactical test plan must be developed to describe when and how testing will occur. The test plan should provide background information on the software being tested, on the test objectives and risks, and on the business functions to be tested and the specific tests to be performed.
Entrance Criteria
- Hardware has been acquired and installed.
- Test cases and test data have been identified and are available.
- The Software Requirements Specification (SRS) and Test Plan have been signed off.
- Acceptance tests have been completed, with a pass rate of not less than 80%.
Resumption Criteria
In the event that system testing is suspended, resumption criteria will be specified, and testing will not re-commence until the software reaches these criteria.
Test Design
This involves designing test conditions and test cases using many of the
automated tools out in the market today. You produce a document that describes
the tests that you will carry out. It is important to determine the expected
results prior to test execution.
Test Execution
This involves actually running the specified tests on a computer system either
manually or by using an automated Test tool like WinRunner, Rational Robot or
SilkTest.
Test Recording
This involves keeping good records of the test activities that you have carried out.
Versions of the software you have tested and the test design are recorded along
with the actual results of each test.
- All the first and second severity bugs found during QA testing have been resolved.
- At least all high-exposure minor and insignificant bugs have been fixed, and a resolution has been identified and a plan is in place to address the remaining bugs.
- All high-risk test cases (high-risk functions) have been successfully executed.
- The system must meet all stated security requirements. The system and data resources must be protected against accidental and/or intentional modifications or misuse.
References
List all documents that support this test plan. Refer to the actual version/release
number of the document as stored in the configuration management system. Do
not duplicate the text from other documents as this will reduce the viability of this
document and increase the maintenance effort. Documents that can be
referenced include:
Project Plan
Requirements specifications
Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master test plan, etc.). This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that
contain information relevant to this project/process. If preferable, you can create a
references section to contain all reference documents.
Identify the scope of the plan in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (Analysis & Reviews), and possibly the process to be used for change control and communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially,
something you will test, a list of what is to be tested. This can be developed from
the software application inventories as well as other sources of documentation
and information.
This can be controlled by a local Configuration Management (CM) process, if you have one. This information includes version numbers, configuration requirements…
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USER'S view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L):
High, Medium and Low. These types of levels are understandable to a User. You
should be prepared to discuss why a particular level was chosen.
Features not to be tested
This is a listing of what is not to be tested, from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the
level of the plan (master, acceptance, etc.) and should be in agreement with all
higher and lower levels of plans. Overall rules and processes should be identified.
Hardware
Software
What levels of regression testing will be done and how much at each test
level?
How will elements in the requirements and design that do not make sense
or are un-testable be processed?
If this is a master test plan the overall project testing approach and coverage
requirements must also be identified.
Specify if there are special requirements for the testing.
MTBF, Mean Time between Failures - if this is a valid measurement for the
test involved and if the data is available.
At the Master test plan level this could be items such as: …
This could be an individual test case level criterion, or a unit level plan, or it can be general functional requirements for higher level plans.
What is the number and severity of defects located?
Specify what constitutes stoppage for a test or series of tests and what is the
acceptable level of defects that will allow the testing to proceed past the defects.
Testing after a truly fatal error will generate conditions that may be identified as
defects but are in fact ghost errors caused by the earlier defects that were
ignored.
Test deliverables
What is to be delivered as part of this plan?
Test cases.
Simulators.
One thing that is not a test deliverable is the software itself; that is listed under test items and is delivered by development.
Remaining test tasks
If this is a multi-phase process or if the application is to be released in increments
there may be parts of the application that this plan does not address. These areas
need to be identified to avoid any confusion should defects be reported back on
those future functions. This will also allow the users and testers to avoid
incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only
cover a portion of the total functions/features. This status needs to be identified
so that those other areas have plans developed for them and to avoid wasting
resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain
descriptions of those test tasks belonging to both the internal groups and the
external groups.
Environmental needs
Are there any special requirements for this test plan, such as:
How will test data be provided? Are there special collection requirements
or specific ranges of data that must be provided?
Setting risks.
Who makes the critical go/no go decisions for items not covered in the test
plans?
Schedule
A schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and the testing, being part of the overall project plan, will slip with it.
As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, "Let's put something out even if it does not really work all that well." And, as we all know, this is usually the worst possible decision.
How slippage in the schedule is to be handled should also be addressed here.
If the users know in advance that a slippage in the development will cause a slippage in the test and the overall delivery of the system, they just may be a little more tolerant, if they know it's in their interest to get a better tested application.
By spelling out the effects here you have a chance to discuss them in
advance of their actual occurrence. You may even get the users to agree to
a few defects in advance, if the schedule slips.
At this point, all relevant milestones should be identified with their relationship to
the development process identified. This will also help in identifying and tracking
potential slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity
dates. This prevents the test team from being perceived as the cause of a delay.
For example, if system testing is to begin after delivery of the final build, then
system testing begins the day after delivery. If the delivery is late, system testing
starts from the day of delivery, not on a specific date. This is called dependent or
relative dating.
Planning risks and contingencies
What are the overall risks to the project with an emphasis on the testing process?
The test schedule and development schedule will move out an appropriate
number of days. This rarely occurs, as most projects tend to have fixed
delivery dates.
These two items could lower the overall quality of the delivered product.
The test team will work overtime (this could affect team morale).
Management is usually reluctant to accept scenarios such as the one above even
though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result
is that testing is cut back or omitted completely, neither of which should be an
acceptable option.
Approvals
Who can approve the process as complete and allow the project to proceed to the
next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
The audience for a unit test level plan is different than that of an
integration, system or master level plan.
The level and type of knowledge at the various levels will be different as well.
Programmers are very technical but may not have a clear understanding of
the overall business process driving the project.
Users may have varying levels of business acumen and very little technical
skills.
Always be wary of users who claim high levels of technical skills and
programmers that claim to fully understand the business process. These
types of individuals can cause more harm than good if they do not have
the skills they believe they possess.
Glossary
Used to define terms and acronyms used in the document, and testing in general,
to eliminate confusion and promote consistent communications.
…these processes. A variety of commercial problem-tracking/management software tools are available. A bug report should include:
Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
The function, module, feature, object, screen, etc. where the bug occurred
Tester name
Test date
Description of fix
Date of fix
Retest date
Retest results
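As an illustration only, the fields listed above could be captured in a simple record; the field names below are assumptions, not a prescribed schema.

    # Sketch: one way to hold the bug-report fields listed above.
    # Field names are illustrative, not a prescribed schema.
    from dataclasses import dataclass

    @dataclass
    class BugReport:
        location: str          # function, module, feature, object, screen, etc.
        tester_name: str
        test_date: str
        description: str       # enough detail to understand and reproduce the bug
        fix_description: str = ""
        fix_date: str = ""
        retest_date: str = ""
        retest_results: str = ""

    report = BugReport(
        location="Login screen",
        tester_name="A. Tester",
        test_date="2006-03-01",
        description="Password field accepts one-character passwords.",
    )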
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every
possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects. This
requires judgment skills, common sense, and experience. Considerations can
include:
Which parts of the code are most complex, and thus most subject to
errors?
Which parts of the requirements and design are unclear or poorly thought
out?
Test Automation
Scope of Automation
Software must be tested to have confidence that it will work as it should in its
intended environment. Software testing needs to be effective at finding any
defects which are there, but it should also be efficient, performing the tests as
quickly as possible.
Automating software testing can significantly reduce the effort required for
adequate testing or significantly increase the testing that can be done in limited
time. Tests can be run in minutes that would take hours to run manually. Savings
as high as 80% of manual testing effort have been achieved using automation.
At first glance it seems that automating testing is an easy task: just buy one of the popular test execution tools, record the manual tests, and play them back whenever you want. Unfortunately, it does not work like that in practice.
Just as there is more to software design than knowing a programming language,
there is more to automating testing than knowing a testing tool.
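One way to picture the difference between record-and-playback and real automation is a data-driven sketch like the one below: the test logic is written once and new cases are added as data, rather than re-recorded. The add function is only a stand-in for a unit under test.

    # Sketch: data-driven automation - logic written once, cases added as data.
    # add() is a stand-in for the real unit under test.
    def add(a, b):
        return a + b

    test_cases = [
        # (input_a, input_b, expected)
        (1, 2, 3),
        (0, 0, 0),
        (-5, 5, 0),
    ]

    for a, b, expected in test_cases:
        actual = add(a, b)
        status = "PASS" if actual == expected else "FAIL"
        print("add(%s, %s) -> %s (expected %s): %s" % (a, b, actual, expected, status))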
Tool Support for Life-Cycle testing
The figure shows the availability of tool support for testing at every stage of the software development life cycle. The different types of tools and their positions are as shown:
Figure: tool support for life-cycle testing. The development stages - Requirement Specification, Architectural design, Detailed design and Code - are matched against Acceptance test, System test, Integration test and Unit test. The tool types shown are: test design tools, logical design tools, physical design tools, performance simulator tools, management tools, static analysis tools, test execution and comparison tools, coverage tools, dynamic analysis tools, and debugging tools.
Figure: the building blocks of automated test execution - a Driver and Stub(s), Simulator(s) and Emulator(s), the Setup, Execute and Cleanup tasks, Test Data, an Oracle, the Test Result, and the Test Log.
Driver
Drivers are tools used to control and operate the software being tested.
Stubs
Stubs are essentially the opposite of drivers and they receive or respond to the
data that the software sends. Stubs are frequently used when software needs to
communicate with external devices.
Simulator
Simulators are used in place of actual systems and behave like the actual system. Simulators are an excellent way of test automation when the actual system that the software interfaces with is not available.
Emulator
The term emulator is used to describe a device that is a plug-in replacement for the real device. A PC…
Setup
The tasks required to be done before a test or a set of tests can be executed. The setups may be built for a single test, a test set or a test suite, as per the need.
Execute
The task of actually running the test or set of tests against the software under test.
Oracle
The mechanism used to determine the expected result of a test, against which the actual result is compared.
Test data
Data that exists (for example, in a database) before a test is executed and that affects, or is affected by, the software under test.
Cleanup
This task is done after a test or set of tests has finished or stopped, in order to leave the system in a clean state for the next test or set of tests. It is particularly important where a test has failed to complete.
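Putting the tasks above together, one cycle of an automated test might look like the sketch below. Every function in it is a hypothetical placeholder, not a real tool's API.

    # Sketch of one harness cycle: setup -> execute -> compare with oracle -> cleanup.
    import os, tempfile

    def setup():
        # Create the preconditions the test needs (here, a scratch file).
        fd, path = tempfile.mkstemp()
        os.close(fd)
        return path

    def execute(path):
        # Exercise the "software under test": write a record, then read it back.
        with open(path, "w") as f:
            f.write("record-1")
        with open(path) as f:
            return f.read()

    def oracle():
        # Expected result, determined independently of the execution.
        return "record-1"

    def cleanup(path):
        # Leave the system in a clean state for the next test.
        os.remove(path)

    path = setup()
    try:
        print("PASS" if execute(path) == oracle() else "FAIL")
    finally:
        cleanup(path)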
Benefits of Automation
Test automation can enable some testing tasks to be performed far more efficiently than could ever be done by testing manually. Some of the benefits are included below.
A clear benefit of automation is the ability to run more tests in less time, and therefore to make it possible to run them more often. This leads to greater confidence in the system.
When testing manually, expected outcomes typically include the obvious things that are visible to the tester. However, there are attributes that should be tested which are not easy to verify manually. For example, a GUI object may trigger some event that does not produce an immediate output. A test execution tool may be able to check that the event has been triggered, which would otherwise not be possible to check without using a tool.
Automating menial and boring tasks, such as repeatedly entering the same test inputs, gives greater accuracy as well as improved staff morale, and frees skilled testers to put more effort into designing better test cases to be run.
There will also be some tests which are best done manually. The testers can do a better job of manual testing if there are fewer tests to be run manually. Machines which would otherwise lie idle overnight or at weekends can be used to run automated tests.
Tests that are repeated automatically will be repeated exactly every time (at least
the inputs will be, the output may differ due to timing). This gives a level of
Reuse of Tests
The effort put into deciding what to test, designing the tests and building the tests can be spread over many executions of those tests. Tests which will be reused are worth spending time on, to make sure that they are reliable.
Increased confidence
Knowing that an extensive set of automated tests has run successfully, there can be greater confidence when the system is released, provided that the tests being run are good tests.
The other benefits of test automation are:
It is fast
It is not possible, and sometimes not desirable, to automate all testing activities or all tests. Tests that should probably not be automated include:
A test is most likely to reveal a defect when it is run for the first time. If a test
A tool can only identify the differences between the actual and expected outcomes; that is, it helps in making a comparison. When tests are run, the tool will tell you whether each test has passed or failed, when in fact the results have only matched your expected outcomes. It is therefore all the more important to be confident of the quality of the tests that are to be automated.
Automating a set of tests does not make them more effective than those same tests run manually. Automation can eventually improve the efficiency of the tests, i.e. how much they cost to run and how long they take to run. It also affects the resolvability of the tests.
A tool is only software which follows instructions to execute a set of test cases. A human tester performing the same tasks can work differently, effectively and creatively. When unexpected events happen that are not part of the planned sequence of test cases, a human tester can identify them easily.
proof: mathematical process which demonstrates the…
…process of systematically studying and inspecting…
…system testing is mainly based on black-box methods. The…
White-box testing: A test method where the tester views the internal behavior and structure of the program. The testing strategy permits one to examine the internal structure of the program. In using this strategy, the tester derives test data from an examination of the program's logic, without neglecting the requirements in the specification. The goal of this test method is to achieve high test coverage, that is, examination of as many of the program's statements, branches and paths as possible.
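As a small, hedged illustration of deriving test data from the program's logic: the hypothetical function below has two branches, and white-box test design chooses inputs so that each branch is executed at least once.

    # Choosing inputs from the code's structure so that both branches run.
    def classify(age):
        if age < 18:                 # branch 1
            return "minor"
        return "adult"               # branch 2

    assert classify(10) == "minor"   # exercises branch 1
    assert classify(30) == "adult"   # exercises branch 2
    print("both branches covered")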
Points to ponder
1. What testing approaches can you tell me about?
A: Each of the followings represents a different testing approach: black box
testing, white box testing, unit testing, incremental testing, integration testing,
functional testing, system testing, end-to-end testing, sanity testing, regression
testing, acceptance testing, load testing, performance testing, usability testing,
install/uninstall testing, recovery testing, security testing, compatibility testing,
exploratory testing, ad-hoc testing, user acceptance testing, comparison testing,
alpha testing, beta testing, and mutation testing.
2. What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and
hardware) under extraordinary operating conditions. For example, when a web
server is stress tested, testing aims to find out how many users can be on-line, at
the same time, without crashing the server. Stress testing tests the stability of a
given system or entity. Stress testing tests something beyond its normal
operational capacity, in order to observe any negative results.
3. What is load testing?
A: Load testing simulates the expected usage of a software program, by
simulating multiple users that access the program's services concurrently. Load
testing is most useful and most relevant for multi-user systems, client/server
models, including web servers. For example, the load placed on the system is
increased above normal usage patterns, in order to test the system's response at
peak loads.
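A toy sketch of the idea follows: several simulated users call a service concurrently and the responses are timed. handle_request is a stand-in; a real load test would drive the deployed system with a dedicated tool.

    # Simulating concurrent users against a stand-in service.
    import threading, time

    def handle_request(user_id, results):
        start = time.time()
        time.sleep(0.01)             # stand-in for real server-side work
        results[user_id] = time.time() - start

    results = {}
    threads = [threading.Thread(target=handle_request, args=(i, results))
               for i in range(50)]   # 50 simulated concurrent users
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("%d requests served, worst response %.3fs"
          % (len(results), max(results.values())))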
4. What is the difference between stress testing and load testing?
A: Load testing generally stops short of stress testing. During stress testing, the
load is so great that the expected results are errors, though there is gray area in
between stress testing and load testing. Load testing is a blanket term that is
used in many different ways across the professional software testing community.
The term, load testing, is often used synonymously with stress testing,
performance testing, reliability testing, and volume testing.
5. What is the difference between performance testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and
volume testing. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though there is
gray area in between stress testing and load testing.
6. What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and
volume testing. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though there is
gray area in between stress testing and load testing.
7. What is automated testing?
A: Automated testing is a formally specified and controlled approach to testing, in which the tests are executed by a software tool rather than by hand.
8. What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and
volume testing. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though there is
gray area in between stress testing and load testing.
9. What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.
10. What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and
quality of software. Actually, testing cannot establish the correctness of software.
It can find defects, but cannot prove there are no defects.
11. What is alpha testing?
A: Alpha testing is final testing before the software is released to the general
public. First, (and this is called the first phase of alpha testing), the software is
tested by in-house developers. They use either debugger software, or hardware
assisted debuggers. The goal is to catch bugs quickly. Then, (and this is called
second stage of alpha testing), the software is handed over to software QA staff
for additional testing in an environment that is similar to the intended use.
27. What is a software failure?
A: A software failure occurs when the software does not do what the user expects
to see. Software faults, on the other hand, are hidden programming errors.
Software faults become software failures only when the exact computation
conditions are met, and the faulty portion of the code is executed on the CPU. This
can occur during normal usage. Other times it occurs when the software is ported to a different hardware platform, or when the software is ported to a different compiler, or when the software gets extended.
28. Who is a test engineer?
A: We, test engineers, are engineers who specialize in testing. We create test cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of measurements, and evaluate results of system/integration/regression testing.
29. Who is a QA engineer?
A: QA engineers are test engineers, but they do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. A QA engineer is successful if people listen to him, if people use his tests, if people think that he's useful, and if he's happy doing his work. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code. But I've never seen it. Instead of coaching, QA engineers tend to be process people.
30. What do test case templates look like?
A: Software test cases are documents that describe inputs, actions, or events and
their expected results, in order to determine if all features of an application are
working correctly. A software test case template is, for example, a 6-column table,
where column 1 is the "Test case ID number", column 2 is the "Test case name",
column 3 is the "Test objective", column 4 is the "Test conditions/setup", column 5
is the "Input data requirements/steps", and column 6 is the "Expected results". All
documents should be written to a certain standard and template. Standards and
templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.
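For illustration, the 6-column template described above could be represented as a record whose fields mirror the columns; the names are assumptions, not a standard.

    # The 6-column test case template as a record.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        case_id: str           # column 1: Test case ID number
        name: str              # column 2: Test case name
        objective: str         # column 3: Test objective
        conditions_setup: str  # column 4: Test conditions/setup
        input_steps: str       # column 5: Input data requirements/steps
        expected_results: str  # column 6: Expected results

    tc = TestCase(
        case_id="TC-001",
        name="Valid login",
        objective="Verify login with a valid user",
        conditions_setup="User account exists; application at login screen",
        input_steps="Enter valid user name and password; press Login",
        expected_results="Home page is displayed",
    )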
…is no QA team. Should he take responsibility to set up QA…
cases completed with certain percentage passed, or when bug rate falls below a
certain level. But, if these are project management tools, why should we label
them quality assurance tools?
34. What is role of the QA engineer?
A: The QA Engineer's function is to use the system much like real users would,
find all the bugs, find ways to replicate the bugs, submit bug reports to the
developers, and to provide feedback to the developers, i.e. tell them if they've
achieved the desired level of quality.
35. What metrics can be used for bug tracking?
A: Metrics that can be used for bug tracking include the total number of bugs,
total number of bugs that have been fixed, number of new bugs per week, and
number of fixes per week. Other metrics in quality assurance include...
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric
(AC), module design complexity metric (iv(G)), essential complexity metric (ev
(G)), pathological complexity metric (pv (G)), design complexity metric (S0),
integration complexity metric (S1), object integration complexity metric (OS1),
global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric
(TDR), maintenance severity metric (maint_severity), data reference severity
metric (DR_severity), data complexity severity metric (DV_severity), global data
severity metric (gdv_severity).
36. What metrics can be used for bug tracking? (Cont'd...)
McCabe object-oriented software metrics: encapsulation percent public data
(PCTPUB), access to public data (PUBDATA), polymorphism percent of…
developers that can fix it. After the problem is resolved, fixes should be retested.
Additionally, determinations should be made regarding requirements, software,
hardware, safety impact, etc., for regression testing to check the fixes didn't
create other problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these determinations. A variety of commercial, problem
tracking/management software tools are available. These tools, with the detailed
input of software test engineers, will give the team complete information so
developers can understand the bug, get an idea of its severity, reproduce it and
fix it.
…interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task.
Software Configuration Management (SCM) relates to Configuration Management (CM). SCM is the control, and the recording of, changes that are made
made to the software and documentation throughout the software development
life cycle (SDLC). SCM covers the tools and processes used to control, coordinate
and track code, requirements, documentation, problems, change requests,
designs, tools, compilers, libraries, patches, and changes made to them, and to
keep track of who makes the changes. We, test engineers have experience with a
full range of CM tools and concepts, and can easily adapt to an organization's
software tool and process needs.
48. What are some of the software configuration management tools?
A: Software configuration management tools include Rational ClearCase, DOORS,
PVCS, CVS; and there are many others. Rational ClearCase is a popular software
tool, made by Rational Software, for revision control of source code. DOORS or
"Dynamic Object Oriented Requirements System" is a requirements version
A: Verification takes place before validation, and not vice versa. Verification
evaluates documents, plans, code, requirements, and specifications. Validation,
on the other hand, evaluates the product itself. The inputs of verification are
checklists, issues lists, walk-throughs and inspection meetings, reviews and
meetings. The input of validation, on the other hand, is the actual testing of an
actual product. The output of verification is a nearly perfect set of documents,
plans, specifications, and requirements document. The output of validation, on the
other hand, is a nearly perfect, actual product.
54. What is documentation change management?
A: Documentation change management is part of configuration management
(CM). CM covers the tools and processes used to control, coordinate and track
code, requirements, documentation, problems, change requests, designs, tools,
compilers, libraries, patches, changes made to them and who makes the changes.
55. What is up time?
A: "Up time" is the time period when a system is operational and in service. Up
time is the sum of busy time and idle time. For example, if, out of 168 hours, a
system has been busy for 50 hours, idle for 110 hours, and down for 8 hours,
then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50
=) 160 hours.
56. What is upwardly compatible software?
A: Upwardly compatible software is compatible with a later or more complex
version of itself. For example, upwardly compatible software is able to handle files
created by a later version of itself.
57. What is upward compression?
A: In software design, upward compression means a form of demodularization, in
which a subordinate module is copied into the body of a superior module.
58. What is usability?
A: Usability means ease of use: the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a software product.
59. What is user documentation?
A: User documentation is a document that describes the way a software product
or system should be used to obtain the desired results.
60. What is a user manual?
61. What is the difference between user documentation and a user manual?
A: When a distinction is made between those who operate and use a computer
system for its intended purpose, separate user documentation and user manuals are created. Operators get user documentation, and users get user manuals.
62. What is user friendly software?
A: A computer program is user friendly when it is designed with ease of use as one of the primary objectives of its design.
63. What is a user friendly document?
A: A document is user friendly when it is designed with ease of use as one of the primary objectives of its design.
64. What is a user guide?
A: User guide is the same as the user manual. It is a document that presents
information necessary to employ a system or component to obtain the desired
results. Typically, what is described are system and component capabilities,
limitations, options, permitted inputs, expected outputs, error messages, and
special instructions.
65. What is user interface?
A: User interface is the interface between a human user and a computer system.
It enables the passage of information between a human user and hardware or
software components of a computer system.
66. What is a utility?
A: A utility is a software tool designed to perform some frequently used support function. For example, a program to print files.
67. What is utilization?
A: Utilization is the ratio of time a system is busy, divided by the time it is
available. Utilization is a useful measure in evaluating computer performance.
68. What is V&V?
…and identification of the software, identification of changes incorporated into this version, and installation and operating information unique to this version of the software.
…level of testing. The term 'performance testing' is often used synonymously with stress testing, load testing, reliability testing, and volume testing.
89. What is disaster recovery testing?
A: Disaster recovery testing is testing how well the system recovers from
disasters, crashes, hardware failures, or other catastrophic problems.
90. How do you conduct peer reviews?
A: Peer reviews, sometimes called PDRs, are formal meetings, more formalized than a walk-through, and typically consist of 3-10 people, including the test lead, the task lead (the author of whatever is being reviewed) and a facilitator (to make notes). The subject of the PDR is typically a code block, release, feature, or document.
The purpose of the PDR is to find problems and see what is missing, not to fix
anything. The result of the meeting is documented in a written report. Attendees
should prepare for PDRs by reading through documents, before the meeting
starts; most problems are found during this preparation. Why are PDRs so useful?
Because PDRs are cost-effective methods of ensuring quality, because bug
prevention is more cost effective than bug detection.
91. How do you check the security of your application?
A: To check the security of an application, we can use security/penetration
testing. Security/penetration testing is testing how well the system is protected
against unauthorized internal or external access, or willful damage. This type of
testing usually requires sophisticated testing techniques.
92. When testing the password field, what is your focus?
A: When testing the password field, one needs to verify that passwords are
encrypted.
93. What is the objective of regression testing?
A: The objective of regression testing is to test that the fixes have not created any
other problems elsewhere. In other words, the objective is to ensure the software
has remained intact. A baseline set of data and scripts is maintained and
executed, to verify that changes introduced during the release have not "undone"
any previous code. Expected results from the baseline are compared to results of
the software under test. All discrepancies are highlighted and accounted for,
before testing proceeds to the next level.
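A minimal sketch of the baseline idea: stored expected results are replayed against the software under test, and any discrepancy is highlighted. Both the baseline contents and run_case are hypothetical.

    # Compare current results against a stored regression baseline.
    baseline = {                       # hypothetical stored expected results
        "TC-001": "Home page displayed",
        "TC-002": "Error message shown",
    }

    def run_case(case_id):
        # Stand-in for executing the real test case against the new build.
        outputs = {"TC-001": "Home page displayed",
                   "TC-002": "Error message shown"}
        return outputs[case_id]

    discrepancies = {cid: (expected, run_case(cid))
                     for cid, expected in baseline.items()
                     if run_case(cid) != expected}

    print("regression clean" if not discrepancies
          else "investigate: %s" % discrepancies)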
94. What stage of bug fixing is the most cost effective?
A: Bug prevention, i.e. inspections, PDRs, and walk-throughs, is more cost
effective than bug detection.
95. What can you tell about white box testing?
A: White box testing is a testing approach that examines the application's program structure, and derives test cases from the application's program logic. Clear box testing, glass box testing and open box testing are all white box types of testing.
96. What black box testing types can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal
software design or code. Black box testing is based on requirements and
functionality. Functional testing is also a black-box type of testing geared to
Stochastic testing is a series of random tests over time. The software under test
typically passes the individual tests, but our goal is to see if it can pass a large
series of the individual tests.
106. What is mutation testing?
A: In mutation testing, we create mutant software, we make the mutant software fail, and thus demonstrate the adequacy of our test case. When we create a set of mutant software, each mutant differs from the original software by one mutation, i.e. one single syntax change made to one of its program statements; i.e. each mutant software contains only one single fault.
When we apply test cases to the original software and to the mutant software, we
evaluate if our test case is adequate. Our test case is inadequate, if both the
original software and all mutant software generate the same output. Our test case
is adequate, if our test case detects faults or if at least one mutant software
generates a different output than does the original software for our test case.
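A tiny worked example of the idea: the mutant below differs from the original by a single syntax change (>= becomes >), and a test case is adequate if it makes the two produce different outputs.

    # One mutant, one syntax change: >= mutated to >.
    def original(x):
        return "big" if x >= 10 else "small"

    def mutant(x):
        return "big" if x > 10 else "small"   # the single mutation

    x = 10                                    # an adequate test case
    print(original(x), mutant(x))             # "big small": outputs differ,
                                              # so this test case kills the mutant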
107. What is PDR?
A: PDR is an acronym. In the world of software QA/testing, it stands for "peer
design review", or "peer review".
108. What is good about PDRs?
A: PDRs are informal meetings. PDRs make perfect sense, because they're for the
mutual benefit of you and your end client. Your end client requires a PDR, because
they work on a product, and want to come up with the very best possible design
and documentation. Your end client requires you to have a PDR, because when
you organize a PDR, you invite and assemble the end client's best experts and
encourage them to voice their concerns as to what should or should not go into
the design and documentation, and why. When you're a developer, designer,
author, or writer, it's also to your advantage to come up with the best possible
design and documentation. Therefore you want to embrace the idea of the PDR,
because holding a PDR gives you a significant opportunity to invite and assemble
the end client's best experts and make them work for you for one hour, for your
own benefit. To come up with the best possible design and documentation, you
want to encourage your end client's experts to speak up and voice their concerns
as to what should or should not go into your design and documentation, and why.
109. Why is it that my company requires a PDR?
A: Your Company requires a PDR, because your company wants to be the owner
of the very best possible design and documentation. Your company requires a
PDR, because when you organize a PDR, you invite, assemble and encourage the
company's best experts to voice their concerns as to what should or should not go
into your design and documentation, and why. Remember, PDRs are not about
you, but about design and documentation. Please don't be negative; please do not
assume your company is finding fault with your work, or distrusting you in any
way. There is a 90+ per cent probability your company wants you, likes you and
trusts you, because you're a specialist, and because your company hired you after
a long and careful selection process.
Your company requires a PDR, because PDRs are useful and constructive. Just
about everyone - even corporate chief executive officers (CEOs) - attend PDRs
from time to time. When a corporate CEO attends a PDR, he has to listen for
"feedback" from shareholders. When a CEO attends a PDR, the meeting is called
the "annual shareholders' meeting".
110. Give me a list of ten good things about PDRs!
A:
1. PDRs are easy, because all your meeting attendees are your coworkers
and friends.
2. PDRs do produce results. With the help of your meeting attendees,
PDRs help you produce better designs and better documents than the
ones you could come up with, without the help of your meeting
attendees.
3. Preparation for PDRs helps a lot, but, in the worst case, if you had no
time to read every page of every document, it's still OK for you to show
up at the PDR.
4. It's technical expertise that counts the most, but many times you can
influence your group just as much, or even more so, if you're dominant
or have good acting skills.
5. PDRs are easy, because, even at the best and biggest companies, you
can dominate the meeting by being either very negative, or very bright
and wise.
6. It is easy to deliver gentle suggestions and constructive criticism. The
brightest and wisest meeting attendees are usually gentle on you; they
deliver gentle suggestions that are constructive, not destructive.
7. You get many chances to express your ideas, every time a meeting attendee asks you to justify why you wrote what you wrote.
8. PDRs are effective, because there is no need to wait for anything or
anyone; because the attendees make decisions quickly (as to what
errors are in your document). There is no confusion either, because all
the group's recommendations are clearly written down for you by the
PDR's facilitator.
…have any additional suggestions, recommendations, or comments?"
5. "What is the outcome of this peer review?" At the end of the peer review, the facilitator asks the attendees of the peer review to make a decision as to the outcome of the peer review. I.e., "What is our consensus?" "Are we accepting the design (or document or code)?" Or, "Are we accepting it with minor…"
review, and if they're not well prepared, the facilitator can send them back to their
desks, and even ask the task lead to reschedule the peer review. The facilitator's
script for the entry criteria includes the following questions:
1. Are all the required attendees present at the peer review?
2. Have all the attendees received all the relevant documents and reports?
3. Are all the attendees well prepared for this peer review?
4. Have all the preceding life cycle activities been concluded?
5. Are there any changes to the baseline?
113. What are the parameters of peer reviews?
A: By definition, parameters are values on which something else depends. Peer
reviews depend on the attendance and active participation of several key people;
usually the facilitator, task lead, test lead, and at least one additional reviewer.
The attendance of these four people is usually required for the approval of the PDR. According to company policy, depending on your company, other participants are often invited, but generally not required for approval. Peer
reviews depend on the facilitator, sometimes known as the moderator, who
controls the meeting, keeps the meeting on schedule, and records all suggestions
from all attendees. Peer reviews greatly depend on the developer, also known as
the designer, author, or task lead, usually a software engineer, who is most
familiar with the project, and most likely able to answer any questions or address
any concerns that may come up during the peer review. Peer reviews greatly
depend on the tester, also known as test lead, or bench test person -- usually
another software engineer -- who is also familiar with the project, and most likely
able to answer any questions or address any concerns that may come up during
the peer review. Peer reviews greatly depend on the participation of additional
reviewers and additional attendees who often make specific suggestions and
recommendations, and ask the largest number of questions.
114. What types of review meetings can you tell me about?
A: Of review meetings, peer design reviews are the most common. Peer design
reviews are so common that they tend to replace both inspections and walkthroughs. Peer design reviews can be classified according to the 'subject' of the
review. I.e., "Is this a document review, design review, or code review?" Peer
design reviews can be classified according to the 'role' you play at the meeting.
I.e., "Are you the task lead, test lead, facilitator, moderator, or additional
reviewer?" Peer design reviews can be classified according to the 'job title of
attendees. I.e., "Is this a meeting of peers, managers, systems engineers, or
system integration testers?" Peer design reviews can be classified according to
what is being reviewed at the meeting. I.e., "Are we reviewing the work of a
115. How can I shift my focus and area of work from QC to QA?
A:
1. Focus on your strengths, skills, and abilities! Realize that there are MANY
similarities between Quality Control and Quality Assurance! Realize that you have
MANY transferable skills!
2. Make a plan! Develop a belief that getting a job in QA is easy! HR professionals
cannot tell the difference between quality control and quality assurance! HR
professionals tend to respond to keywords (i.e. QC and QA), without knowing the
exact meaning of those keywords!
3. Make it a reality! Invest your time! Get some hands-on experience! Do some QA
work! Do any QA work, even if, for a few months, you get paid a little less than
usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in
your life!
4. Read all you can, and that includes reading product pamphlets, manuals,
books, information on the Internet, and whatever information you can lay your
hands on! If there is a will, there is a way! You CAN do it, if you put your mind to
it! You CAN learn to do QA work, with little or no outside help!
116. What techniques and tools can enable me to migrate from QC to
QA?
A: Refer to the answer for question #115 above.
117. What is the difference between build and release?
A: Builds and releases are similar, because both builds and releases are end
products of software development processes. Builds and releases are similar,
because both builds and releases help developers and QA teams to deliver
reliable software. Build means a version of software, typically one that is still in
testing. Usually a version number is given to a released product, but, sometimes,
a build number is used instead.
Difference #1: Builds refer to software that is still in testing, release refers to
software that is usually no longer in testing.
Difference #2: Builds occur more frequently; releases occur less frequently.
Difference #3: Versions are based on builds, and not vice versa. Builds, or
usually a series of builds, are generated first, as often as one build per every
morning, depending on the company, and then every release is based on a build,
or several builds, i.e. the accumulated code of several builds.
118. What is CMM?
A: CMM is an acronym that stands for Capability Maturity Model. The idea of CMM
is, as to future efforts in developing and testing software, concepts and
experiences do not always point us in the right direction, and therefore we should
develop processes, and then refine those processes. There are five CMM levels, of
which Level 5 is the highest:
CMM Level 1 is called "Initial".
CMM Level 2 is called "Repeatable".
CMM Level 3 is called "Defined".
CMM Level 4 is called "Managed".
CMM Level 5 is called "Optimized".
There are not many Level 5 companies; most hardly need to be. Within the United States, fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires all companies with federal government contracts to maintain a minimum of a CMM Level 3 assessment. CMM
assessments take two weeks. They're conducted by a nine-member team led by a
SEI-certified lead assessor.
119. What are CMM levels and their definitions?
A: There are five CMM levels of which level 5 is the highest.
…project tracking, subcontract management, QA, and configuration management.
…training programs, process focus, integrated software…
…product quality, and if both the software process and the software products are quantitatively understood and controlled. Software processes are at CMM level 4, if there is software quality management (SQM) and quantitative process management.
…management, defect prevention, and technology change management.
120. What is the difference between bug and defect in software testing?
A: In software testing, the difference between bug and defect is small, and
depends on your company. For some companies, bug and defect are synonymous,
while others believe bug is a subset of defect. Generally speaking, we, software
test engineers, discover BOTH bugs and defects, before bugs and defects damage
the reputation of our company. We, QA engineers, use the software much like real
users would, to find BOTH bugs and defects, to find ways to replicate BOTH bugs
and defects, to submit bug reports to the developers, and to provide feedback to
the developers, i.e. tell them if they've achieved the desired level of quality.
Therefore, we, software engineers, do not differentiate between bugs and defects.
In our bug reports, we include BOTH bugs and defects, and any differences
between them are minor. Difference number one: In bug reports, the defects are
usually easier to describe. Difference number two: In bug reports, it is usually
easier to write the descriptions on how to replicate the defects. Defects tend to
require brief explanations only.
121. What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of
black box testing and white box testing. Gray box testing is not black box testing,
because the tester does know some of the internal workings of the software under
test.
In grey box testing, the tester applies a limited number of test cases to the
internal workings of the software under test. In the remaining part of the grey box
testing, one takes a black box approach in applying inputs to the software under
test and observing the outputs. Gray box testing is a powerful idea. The concept is
simple; if one knows something about how the product works on the inside, one
can test it better, even from the outside.
Grey box testing is not to be confused with white box testing; i.e. a testing
approach that attempts to cover the internals of the product in detail. Grey box
testing is a test strategy based partly on internals. The testing approach is known
as gray box testing, when one does have some knowledge, but not the full knowledge, of the internals of the product one is testing. In gray box testing, just as in black box testing, you test from the outside of the product, but you make better-informed testing choices, because you know how the underlying software components operate and interact.
122. What is the difference between version and release?
A: Both version and release indicate a particular point in the software
development life cycle, or in the lifecycle of a document. The two terms, version
and release, are similar (i.e. mean pretty much the same thing), but there are
minor differences between them. Version means a VARIATION of an earlier, or
original, type; for example, "I've downloaded the latest version of the software
from the Internet. The latest version number is 3.3." Release, on the other hand, is
the ACT OR INSTANCE of issuing something for publication, use, or distribution.
Release is something thus released. For example: "A new release of a software
program."
123. What is data integrity?
A: Data integrity is one of the six fundamental components of information
security. Data integrity is the completeness, soundness, and wholeness of the
data that also complies with the intention of the creators of the data. In
databases, important data -- including customer information, order databases, and pricing tables -- may be stored. In databases, data integrity is achieved by
preventing accidental, or deliberate, or unauthorized insertion, or modification, or
destruction of data.
124. How do you test data integrity?
A: Data integrity testing should verify the completeness, soundness, and
wholeness of the stored data. Testing should be performed on a regular basis,
because important data can and will change over time. Data integrity tests
include the followings:
1. Verify that you can create, modify, and delete any data in tables.
2. Verify that sets of radio buttons represent fixed sets of values.
3. Verify that a blank value can be retrieved from the database.
4. Verify that, when a particular set of data is saved to the database, each value
gets saved fully, and the truncation of strings and rounding of numeric values do
not occur.
5. Verify that the default values are saved in the database, if the user input is not
specified.
6. Verify compatibility with old data, old hardware, versions of operating systems,
and interfaces with other software.
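Check 4 above (values saved fully, with no truncation or rounding) might be automated along these lines; the sketch uses an in-memory SQLite table as a stand-in for the real database.

    # Round-trip a value and verify it was stored without alteration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE prices (item TEXT, amount TEXT)")

    saved = "12345.6789"               # stored as text to avoid rounding
    conn.execute("INSERT INTO prices VALUES (?, ?)", ("widget", saved))
    (retrieved,) = conn.execute(
        "SELECT amount FROM prices WHERE item = 'widget'").fetchone()

    assert retrieved == saved, "value was truncated or rounded on save"
    print("data integrity check passed")
    conn.close()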
125. What is data validity?
A: Data validity is the correctness and reasonableness of data. Reasonableness of
data means, for example, account numbers falling within a range, numeric data being all digits, dates having a valid month, day and year, and proper names being spelled correctly. Data validity errors are probably the most common, and the most difficult
to detect, data-related errors. What causes data validity errors? Data validity
errors are usually caused by incorrect data entries, when a large volume of data is
entered in a short period of time. For example, 12/25/2005 is entered as
13/25/2005 by mistake. This date is therefore invalid. How can you reduce data
validity errors? Use simple field validation rules. Technique 1: If the date field in a
database uses the MM/DD/YYYY format, then use a program with the following two
data validation rules: "MM should not exceed 12, and DD should not exceed 31".
Technique 2: If the original figures do not seem to match the ones in the database,
then use a program to validate data fields. Compare the sum of the numbers in
the database data field to the original sum of numbers from the source. If there is
a difference between the figures, it is an indication of an error in at least one data
element.
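Technique 1 above can be coded as a simple field-validation rule; the sketch assumes the MM/DD/YYYY format mentioned in the answer.

    # Simple MM/DD/YYYY field validation: "MM should not exceed 12, DD not 31".
    def is_valid_date(field):
        parts = field.split("/")
        if len(parts) != 3:
            return False
        mm, dd, yyyy = parts
        if not (mm.isdigit() and dd.isdigit() and yyyy.isdigit()):
            return False
        return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31

    print(is_valid_date("12/25/2005"))   # True
    print(is_valid_date("13/25/2005"))   # False - month 13 is invalid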
126. What is the difference between data validity and data integrity?
A:
Difference number two: Data validity errors are more common, while data
integrity errors are less common. Difference number three: Errors in data
validity are caused by HUMANS -- usually data entry personnel who
enter, for example, 13/25/2005, by mistake, while errors in data integrity
are caused by BUGS in computer programs that, for example, cause the
overwriting of some of the data in the database, when one attempts to
retrieve a blank value from the database.
#3: Static testing is many times more cost-effective than dynamic testing.
#6: Static testing gives you comprehensive diagnostics for your code.
#8: Dynamic testing usually takes longer than static testing. Dynamic
testing may involve running several test cases, each of which may take
longer than compilation.
#10: Static testing can be done before compilation, while dynamic testing
can take place only after compilation and linking.
#11: Static testing can find all of the following that dynamic testing cannot
find: syntax errors, code that is hard to maintain, code that is hard to test,
code that does not conform to coding standards, and ANSI violations.
modules and subsystems. To put it differently, top down design looks at the whole
system, and then explodes it into subsystems, or smaller parts. A systems
engineer or systems analyst determines what the top level objectives are, and
how they can be met. He then divides the system into subsystems, i.e. breaks the
whole system into logical, manageable-size modules, and deals with them
individually.
133. How can I be effective and efficient, when I do black box testing of
ecommerce web sites?
A: When you're doing black box testing of e-commerce web sites, you're most
efficient and effective when you're testing the sites' Visual Appeal, Contents, and
Home Pages. When you want to be effective and efficient, you need to verify that
the site is well planned. Verify that the site is customer-friendly. Verify that the
choices of colors are attractive. Verify that the choices of fonts are attractive.
Verify that the site's audio is customer friendly. Verify that the site's video is
attractive. Verify that the choice of graphics is attractive. Verify that every page of
the site is displayed properly on all the popular browsers. Verify the authenticity of
facts. Ensure the site provides reliable and consistent information. Test the site for
appearance. Test the site for grammatical and spelling errors. Test the site for
visual appeal, choice of browsers, consistency of font size, download time, broken
links, missing links, incorrect links, and browser compatibility. Test each toolbar,
each menu item, every window, every field prompt, every pop-up text, and every
error message. Test every page of the site for left and right justifications, every
shortcut key, each control, each push button, every radio button, and each item
on every drop-down menu. Test each list box, and each help menu item. Also
check, if the command buttons are grayed out when they're not in use.
134. What is the difference between top down and bottom up design?
A: Top down design proceeds from the abstract (entity) to get to the concrete
(design). The Bottom up design proceeds from the concrete (design) to get to the
abstract (entity). Top down design is most often used in designing brand new
systems, while bottom up design is sometimes used when one is reverse
engineering a design; i.e. when one is trying to figure out what somebody else
designed in an existing system. Bottom up design begins the design with the
lowest level modules or subsystems, and progresses upward to the main program,
module, or subsystem. With bottom up design, a structure chart is necessary to
determine the order of execution, and the development of drivers is necessary to
complete the bottom up approach. Top down design, on the other hand, begins
the design with the main or top level module, and progresses downward to the
lowest level modules or subsystems. Real life sometimes is a combination of top
down design and bottom up design. For instance, data modeling sessions tend to
be iterative, bouncing back and forth between top down and bottom up modes, as
the need arises.
135. What is the definition of bottom up design?
A: Bottom up design begins the design at the lowest level modules or
subsystems, and progresses upward to the design of the main program, main
module, or main subsystem. To determine the order of execution, a structure
chart is needed, and, to complete the bottom up design, the development of
drivers is needed. In software design - assuming that the data you start with is a
pretty good model of what you're trying to do - bottom up design generally starts
with the known data (e.g. customer lists, order forms), then the data is broken
into chunks (i.e. entities) appropriate for planning a relational database. This
process reveals what relationships the entities have, and what the entities'
attributes are. In software design, bottom up design doesn't only mean writing the
program in a different order, but there is more to it. When you design bottom up,
you often end up with a different program. Instead of a single, monolithic
program, you get a larger language, with more abstract operators, and a smaller
program written in it. Once you abstract out the parts which are merely utilities,
what is left is a much shorter program. The higher you build up the language, the
less distance you will have to travel down to it, from the top. Bottom up design
makes it easy to reuse code blocks. For example, many of the utilities you write
for one program are also useful for programs you have to write later. Bottom up
design also makes programs easier to read.
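To give you a toy illustration of this bottom-up style, here is a Python sketch
(all names are invented for the example): small, reusable utilities are written
and tested first, and the top-level program ends up being a short composition of
them.

    def parse_order_line(line):
        """Low-level utility: 'customer,item,qty' -> (customer, item, int(qty))."""
        customer, item, qty = line.strip().split(",")
        return customer, item, int(qty)

    # A throwaway "driver", in the sense used above: it exercises the
    # lowest-level module before any higher-level code exists.
    assert parse_order_line("acme,widget,3") == ("acme", "widget", 3)

    def total_by_customer(orders):
        """Mid-level utility built on top of the one below it."""
        totals = {}
        for customer, _item, qty in orders:
            totals[customer] = totals.get(customer, 0) + qty
        return totals

    def report(lines):
        """Top-level 'main program': short, because the utilities carry the load."""
        orders = [parse_order_line(line) for line in lines]
        for customer, total in sorted(total_by_customer(orders).items()):
            print(f"{customer}: {total}")

    report(["acme,widget,3", "acme,bolt,2", "zeta,widget,5"])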
136. What is smoke testing?
A: Smoke testing is a relatively simple check to see whether the product
"smokes" when it runs. Smoke testing is also known as ad hoc testing, i.e. testing
without a formal test plan. With many projects, smoke testing is carried out in
addition to formal testing. If smoke testing is carried out by a skilled tester, it can
often find problems that are not caught during regular testing. Sometimes, if
testing occurs very early or very late in the software development cycle, this can
be the only kind of testing that can be performed. Smoke tests are, by definition,
not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily
build and smoke test process. This means, every file is compiled, linked, and
combined into an executable file every single day, and then the software is smoke
tested. Smoke testing minimizes integration risk, reduces the risk of low quality,
supports easier defect diagnosis, and improves morale. Smoke testing does not
have to be exhaustive, but should expose any major problems. Smoke testing
should be thorough enough that, if it passes, the tester can assume the product is
stable enough to be tested more thoroughly. Without smoke testing, the daily
build is just a time wasting exercise. Smoke testing is the sentry that guards
against any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then
as the system grows, smoke testing should expand and grow, from a few seconds
to 30 minutes or more.
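As a concrete sketch, a smoke suite might start life as a handful of shallow
checks like the following (Python with pytest; the `myapp` package and its
`create_app` and `handle_request` calls are hypothetical names for your own
application's API):

    import pytest

    # Skip the whole module cleanly if today's build did not produce the package.
    myapp = pytest.importorskip("myapp")

    def test_application_starts():
        # The most basic "does it smoke?" check: the app can be constructed.
        app = myapp.create_app()
        assert app is not None

    def test_home_page_responds():
        # One shallow end-to-end poke, not an exhaustive page-by-page test.
        response = myapp.create_app().handle_request("/")
        assert response.status_code == 200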
137. What is the difference between monkey testing and smoke testing?
A:
Difference#4: "Smart monkeys" are valuable for load and stress testing,
but not very valuable for smoke testing, because they are too expensive
for smoke testing.
Difference#7: Monkey testing does not evolve. Smoke testing, on the other
hand, evolves as the system evolves from something simple to something
more thorough.
138. Tell me about the process of daily builds and smoke tests
A: The idea behind the process of daily builds and smoke tests is to build the
product every day, and test it every day. The software development process at
Microsoft and many other software companies requires daily builds and smoke
tests. According to their process, every day, every single file has to be compiled,
linked, and combined into an executable program. And, then, the program has to
be "smoke tested". Smoke testing is a relatively simple check to see whether the
product "smokes" when it runs. You should add revisions to the build only when it
makes sense to do so. You should to establish a Build Group, and build *daily*; set
your *own standard* for what constitutes "breaking the build", and create a
penalty for breaking the build, and check for broken builds *every day*. In
addition to the daily builds, you should smoke test the builds, and smoke test
them Daily. You should make the smoke test Evolve, as the system evolves. You
should build and smoke test Daily, even when the project is under pressure. Think
about the many benefits of this process! The process of daily builds and smoke
tests minimizes the integration risk, reduces the risk of low quality, supports
easier defect diagnosis, improves morale, enforces discipline, and keeps pressurecooker projects on track. If you build and smoke test *daily*, success will come,
even when you're working on large projects!
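A bare-bones driver for such a process might look like the sketch below. The
build and test command lines are placeholders for whatever your project
actually uses, and running it every day is left to cron or a CI server.

    import subprocess
    import sys
    from datetime import date

    def run(step, command):
        print(f"[{date.today()}] {step}: {' '.join(command)}")
        if subprocess.run(command).returncode != 0:
            # Whatever your team's own standard is, a failed step "breaks the build".
            sys.exit(f"{step} FAILED - the build is broken")

    run("build", ["make", "all"])                       # compile and link every file
    run("smoke test", ["pytest", "-m", "smoke", "-q"])  # shallow checks only
    print("Daily build passed the smoke test.")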
139. What is the purpose of test strategy?
A:
Reason#2: Having a test strategy does satisfy one important step in the
software testing process.
Reason#3: The test strategy document tells us how the software product
will be tested.
Reason#5: The test strategy document describes the roles, responsibilities,
and the resources required for the test, and schedule constraints.
Reason#7: The test strategy is decided first, before lower level decisions
are made on the test plan, test design, and other testing issues.
140. What is the role of documentation in software testing?
A: Documentation plays a critical role in software testing. Specifications,
designs, business rules, inspection reports, configurations, code changes, test
plans, test cases, bug reports, user manuals should all be documented, so that
they are repeatable. Document files should be well organized. There should be a
system for easily finding and obtaining documents, and determining what
document has a particular piece of information. We should use documentation
change management, if possible.
141. What is the purpose of a test plan?
A:
Reason#2: We create a test plan because it can and will help people
outside the test group to understand the why and how of product
validation.
Reason#7: We create a test plan because one of the outputs of creating a test
strategy is an approved and signed-off test plan document.
Reason#8: We create a test plan because the software testing methodology is a
three-step process, and one of the steps is the creation of a test plan.
142. Give me one test case that catches all the bugs!
A: If there is a "magic bullet", i.e. the one test case that has a good possibility to
catch ALL the bugs, or at least the most important bugs, it is a challenge to find it,
because test cases depend on requirements; requirements depend on what
customers need; and customers can have a great many different needs. As software
systems are getting increasingly complex, it is increasingly more challenging to
write test cases. It is true that there are ways to create "minimal test cases" which
can greatly simplify the test steps to be executed. But, writing such test cases is
time consuming, and project deadlines often prevent us from going that route.
Often the lack of enough time for testing is the reason for bugs to occur in the
field. However, even with ample time to catch the "most important bugs", bugs
still surface with amazing spontaneity. The challenge is, developers do not seem
to know how to avoid providing the many opportunities for bugs to hide, and
testers do not seem to know where the bugs are hiding.
143. What is the difference between a test plan and a test scenario?
A:
Difference#1: A test plan is a document that describes the scope, approach,
resources, and schedule of intended testing activities, while a test scenario is a
document that describes both typical and atypical situations that may occur in the
use of an application.
Difference#2: Test plans define the scope, approach, resources, and schedule of
the intended testing activities, while test procedures define test conditions, data
to be used for testing, and expected results, including database updates, file
outputs, and report results.
Difference#3: A test plan is a description of the scope,
approach, resources, and schedule of intended testing activities, while a test
scenario is a description of test cases that ensure that a business process flow,
applicable to the customer, is tested from end to end.
144. What is a test scenario?
A: The terms "test scenario" and "test case" are often used synonymously. Test
scenarios are test cases, or test scripts, and the sequence in which they are to be
executed. Test scenarios are test cases that ensure that business process flows
are tested from end to end. Test scenarios are either independent tests, or a
series of tests that follow each other, each depending upon the output of the
previous one. Test scenarios are prepared by reviewing functional
requirements, and preparing logical groups of functions that can be further broken
into test procedures. Test scenarios are designed to represent both typical and
unusual situations that may occur in the application. Test engineers define unit
test requirements and unit test scenarios. Test engineers also execute unit test
scenarios. It is the test team that, with assistance of developers and clients,
develops test scenarios for integration and system testing. Test scenarios are
executed through the use of test procedures or scripts. Test procedures or scripts
define a series of steps necessary to perform one or more test scenarios. Test
procedures or scripts may cover multiple test scenarios.
145. Give me some sample test cases you would write!
A: For instance, if one of the requirements is, "Brake lights shall be on, when the
brake pedal is depressed", then, based on this one simple requirement, for
starters, I would write all of the following test cases:
Test case#1: "Inputs: The headlights are on. The brake pedal is depressed.
Expected result: The brake lights are on. Verify that the brake lights are on, when
the brake pedal is depressed."
Test case#2: "Inputs: The left turn lights are on. The brake pedal is depressed.
Expected result: The brake lights are on. Verify that the brake lights are on, when
the brake pedal is depressed."
Test case#3: "Inputs: The right turn lights are on. The brake pedal is
depressed. Expected result: The brake lights are on. Verify that the brake lights
are on, when the brake pedal is depressed."
As you might have guessed, in the work place, in real life, requirements are more
complex than this one; and, just to verify this one, simple requirement, there is a
need for many more test cases.
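These cases share one action and one expected result, which makes them natural
candidates for a table-driven test. The sketch below invents a trivial Car model
purely to make the example self-contained; it is not any real system under test.

    class Car:
        def __init__(self):
            self.brake_pedal_depressed = False
            self.headlights_on = False
            self.left_turn_on = False
            self.right_turn_on = False
        @property
        def brake_lights_on(self):
            # The requirement under test: brake lights follow the pedal.
            return self.brake_pedal_depressed

    cases = [
        ("Test case#1: headlights on",        {"headlights_on": True}),
        ("Test case#2: left turn lights on",  {"left_turn_on": True}),
        ("Test case#3: right turn lights on", {"right_turn_on": True}),
    ]

    for name, extra_inputs in cases:
        car = Car()
        for attr, value in extra_inputs.items():
            setattr(car, attr, value)       # the input that varies per case
        car.brake_pedal_depressed = True    # the common input
        assert car.brake_lights_on, name    # the common expected result
        print(f"{name}: PASS")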
146. How do you write test cases?
A: When I write test cases, I concentrate on one requirement at a time. Then,
based on that one requirement, I come up with several real life scenarios that are
likely to occur in the use of the application by end users. When I write test cases, I
describe the inputs, action, or event, and their expected results, in order to
determine if a feature of an application is working correctly. To make the test case
complete, I also add particulars e.g. test case identifiers, test case names,
objectives, test conditions (or setups), input data requirements (or steps), and
expected results. If I have a choice, I prefer writing test cases as early as possible
in the development life cycle. Why? Because, as a side benefit of writing test
cases, many times I am able to find problems in the requirements or design of an
application. And, because the process of developing test cases makes me
completely think through the operation of the application. You can learn to write
test cases.
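One way to keep those particulars together is a simple structured record. The
field names below merely mirror the list above; they are not any standard tool's
schema, and the sample values are invented.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str
        name: str
        objective: str
        setup: str                                 # test conditions
        steps: list = field(default_factory=list)  # input data requirements
        expected_result: str = ""

    tc = TestCase(
        identifier="TC-BRAKE-001",
        name="Brake lights with headlights on",
        objective="Verify brake lights turn on when the pedal is depressed",
        setup="Headlights on",
        steps=["Depress the brake pedal"],
        expected_result="Brake lights are on",
    )
    print(tc.identifier, "-", tc.name)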
147. What is a parameter?
A: A parameter is an item of information - such as a name, a number, or a
selected option - that is passed to a program by a user or another program. By
definition, a parameter is a value on which something else depends. Any desired
numerical value may be given as a parameter. We use parameters when we want
to allow a specified range of variables. We use parameters when we want to
differentiate behavior or pass input data to computer programs or their
subprograms. Thus, when we are testing, the parameters of the test can be varied
to produce different results, because parameters do affect the operation of the
program receiving them. Example 1: We use a parameter, such as temperature,
that defines a system. In this definition, it is temperature that defines the system
and determines its behavior. Example 2: In the definition of function f(x) = x + 10,
x is a parameter. In this definition, x defines the f(x) function and determines its
behavior. Thus, when we are testing, x can be varied to make f(x) produce
different values, because the value of x does affect the value of f(x). When
parameters are passed to a function or subroutine, they are called arguments.
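In code, the f(x) = x + 10 example reads as follows: x is the parameter in the
definition, and the concrete values passed in at call time are the arguments.

    def f(x):          # x is the parameter in the definition
        return x + 10

    print(f(0))        # the argument 0 produces 10
    print(f(32))       # the argument 32 produces 42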
148. What is a constant?
A: In software or software testing, a constant is a meaningful name that
represents a number, or string, that does not change. Constants are variables
whose value remains the same, i.e. constant, throughout the execution of a
program. Why do developers use constants? Because if we have code that
contains constant values that keep reappearing, or, if we have code that depends
on certain numbers that are difficult to remember, we can improve both the
readability and maintainability of our code, by using constants. To give you an
example, let's suppose we declare a constant and we call it Pi. We set it to
3.14159265 and use it throughout our code. Constants, such as Pi, as the name
implies, store values that remain constant throughout the execution of our
program. Keep in mind that, unlike variables which can be read from and written
to, constants are read-only variables. Although constants resemble variables, we
cannot modify or assign new values to them, as we can to variables. But we can
make constants public, or private. We can also specify what data type they are.
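Here is the Pi example in Python. Python has no true constants, so the
convention is an ALL_CAPS name, optionally marked Final so that static checkers
flag any attempt to reassign it.

    from typing import Final

    PI: Final = 3.14159265  # read-only by convention; checkers enforce Final

    def circle_area(radius: float) -> float:
        return PI * radius * radius

    print(circle_area(2.0))  # 12.5663706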
149. What is a requirements test matrix?
A: The requirements test matrix is a project management tool for tracking and
managing testing efforts, based on requirements, throughout the project's life
cycle. The requirements test matrix is a table, where requirement descriptions are
put in the rows of the table, and the descriptions of testing efforts are put in the
column headers of the same table. The requirements test matrix is similar to the
requirements traceability matrix, which is a representation of user requirements
aligned against system functionality. The requirements traceability matrix ensures
that all user requirements are addressed by the system integration team and
implemented in the system integration effort. The requirements test matrix is a
representation of user requirements aligned against system testing. Similarly to
the requirements traceability matrix, the requirements test matrix ensures that all
user requirements are addressed by the system test team and implemented in
the system testing effort.
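At its simplest, such a matrix is just requirements in the rows and testing
efforts in the columns. The sketch below, with invented requirement IDs and test
names, also shows the matrix's main payoff: exposing requirements that no test
covers yet.

    matrix = {
        "REQ-001 Brake lights on pedal press":      ["TC1", "TC2", "TC3"],
        "REQ-002 Pages render in popular browsers": ["TC10", "TC11"],
        "REQ-003 No broken links on home page":     [],  # a coverage gap!
    }

    for requirement, tests in matrix.items():
        status = ", ".join(tests) if tests else "NOT COVERED"
        print(f"{requirement:<42} -> {status}")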
150. What is reliability testing?
A: Reliability testing is designing reliability test cases, using accelerated reliability
techniques (e.g. step-stress, test/analyze/fix, and continuously increasing stress
testing techniques), and testing units or systems to failure, in order to obtain raw
failure time data for product life analysis. The purpose of reliability testing is to
determine product reliability, and to determine whether the software meets the
customer's reliability requirements. In the system test phase, or after the software
is fully developed, one reliability testing technique we use is a test/analyze/fix
technique, where we couple reliability testing with the removal of faults. When
we identify a failure, we send the software back to the developers, for repair. The
developers build a new version of the software, and then we run another test iteration. We
track failure intensity (e.g. failures per transaction, or failures per hour) in order to
guide our test process, and to determine the feasibility of the software release,
and to determine whether the software meets the customer's reliability
requirements.
151. Give me an example of reliability testing.
A: For example, our products are defibrillators. From direct contact with
customers during the requirements gathering phase, our sales team learns that a
large hospital wants to purchase defibrillators with the assurance that 99 out of
every 100 shocks will be delivered properly. In this example, the fact that our
defibrillator is able to run for 250 hours without any failure, in order to
demonstrate the reliability, is irrelevant to these customers. In order to test for
reliability we need to translate terminology that is meaningful to the customers
into equivalent delivery units, such as the number of shocks. We describe the
customer needs in a quantifiable manner, using the customer's terminology. For
example, our goal of quantified reliability testing becomes as follows: Our
defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur
from 1,000 shocks. Then, for example, we use a test/analyze/fix technique, and
couple reliability testing with the removal of errors. When we identify a failed
delivery of a shock, we send the software back to the developers, for repair. The
developers build a new version of the software, and then we deliver another 1,000
shocks into dummy resistor loads. We track failure intensity (i.e. number of
failures per 1,000 shocks) in order to guide our reliability testing, and to
determine the feasibility of the software release, and to determine whether the
software meets our customers' reliability requirements.
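The arithmetic behind this example is simple enough to sketch. The per-iteration
failure counts below are made-up numbers for the illustration.

    GOAL_FAILURES = 10           # release goal: 10 or fewer failures...
    SHOCKS_PER_ITERATION = 1000  # ...per 1,000 shocks into dummy loads

    failure_counts = [38, 21, 12, 7]  # one entry per test/analyze/fix iteration

    for i, failures in enumerate(failure_counts, start=1):
        intensity = failures / SHOCKS_PER_ITERATION  # failures per shock
        verdict = ("meets the goal" if failures <= GOAL_FAILURES
                   else "back to the developers")
        print(f"Iteration {i}: {failures} failures, "
              f"intensity {intensity:.1%} -> {verdict}")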
152. What is verification?
A: Verification ensures the product is designed to deliver all functionality to the
customer; it typically involves reviews and meetings to evaluate documents,
plans, code, requirements and specifications; this can be done with checklists,
issues lists, and walk-through and inspection meetings.
153. What is validation?
A: Validation ensures that functionality, as defined in requirements, is the
intended behavior of the product; validation typically involves actual testing and
takes place after Verifications are completed.
154. What is a walk-through?
A: A walk-through is an informal meeting for evaluation or informational purposes.
A walk-through is also a process at an abstract level. It's the process of inspecting
software code by following paths through the code (as determined by input
conditions and choices made along the way). The purpose of a code walk-through is
to ensure that the code fits its purpose. Walk-throughs also offer opportunities to
assess an individual's or team's competency.
155. What is an inspection?
A: An inspection is a formal meeting, more formalized than a walk-through, and
typically consists of 3-10 people, including a moderator, a reader, and a recorder
(to take notes); the author of the document under review typically attends as well. The
subject of the inspection is typically a document, such as a requirements
document or a test plan. The purpose of an inspection is to find problems and see
what is missing, not to fix anything. The result of the meeting should be
documented in a written report. Attendees should prepare for this type of meeting
by reading through the document, before the meeting starts; most problems are
found during this preparation. Preparation for inspections is difficult, but is one of
the most cost effective methods of ensuring quality, since bug prevention is more
cost effective than bug detection.
156. What is quality?
A: Quality is a subjective term, and it depends on who the customer is. A
wide-angle view of the customers of a software development project might include
end-users, customer acceptance testers, customer contract officers, customer
management, salespeople, software engineers, stockholders and accountants. Each
type of customer will have his or
her own slant on quality. The accounting department might define quality in terms
of profits, while an end-user might define quality as user friendly and bug free.
157. What is good code?
A: Good code is code that works, is free of bugs, and is readable and
maintainable. Organizations usually have coding standards all developers should
adhere to, but every programmer and software engineer has different ideas about
what is best, and what are too many or too few rules. We need to keep in mind
that excessive use of rules can stifle both productivity and creativity. Peer reviews
and code analysis tools can be used to check for problems and enforce standards.
158. What is good design?
A: Design could mean too many things, but often refers to functional design or
internal design. Good functional design is indicated by software whose functionality
can be traced back to customer and end-user requirements. Good internal design is
indicated by software code whose overall structure is clear, understandable, easily
modifiable and maintainable; is robust with sufficient error handling and status
logging capability; and works correctly when implemented.
159. What is software life cycle?
A: The software life cycle begins when a software product is first conceived and
ends when it is no longer in use. It includes phases like initial concept,
requirements analysis, functional design, internal design, documentation planning,
test planning, coding, document preparation, integration, testing, maintenance,
updates, retesting, and phase-out.