
B4.

3-R3: Software Testing And Quality Management

Chapters Jan-04 Jul-04 Jan-05 Jul-05 Jan-06 Jul-06 Jan-07 Jul-07 Jan-08 Total
01. Testing Software 0 4 6 14 6 10 8 4 12 64
02. Software Faults and Failures 4 8 0 0 6 22 18 6 22 86
03. Verification and Validation 22 4 22 8 18 20 0 6 4 104
04. Testing Techniques and Strategies 60 34 38 29 38 10 8 10 28 255
05. Building Tests and Test Plans 10 28 0 26 0 6 16 18 4 108
06. Testing Specialized Systems and Applications 16 22 18 36 34 22 32 28 4 212
07. Testing Measurements and Tools 6 18 16 4 12 12 30 22 34 154
08. Quality Assurance and Standards 18 18 36 19 22 34 24 42 28 241
Total 136 136 136 136 136 136 136 136 136 1224

1. Testing Software

July-2004 [4]
1.
f) Briefly list out the parties who have vested interest in software testing and their interest too.
[4]

1. System analysts and architects collect requirements and design the applications or software
products at a functional and structural level. They are aware of the use cases involved and of the
activities that could be performed. They have an interest that all such cases and activities are
properly tested to assure the functional integrity of the application. They may also be interested in
performance testing for the validation of their major architectural decisions.

2. Software developers are interested in unit testing various components that they create, as well as in
the subsequent integration testing. They are mainly concerned that the software is bug-free and
delivers the expected outputs.

3. QA teams are exclusively focused on all aspects of the software quality and may plan, design and
run a variety of tests. They may also analyze the test results and make sure that the software passes
certain preset standards of quality.

4. End users, the final beneficiaries of the software, may perform a number of functionality and
performance tests, in what is commonly called acceptance testing. The tests results help them to
accept a software application or to select between a number of software products offered by
different vendors.

5. Security experts may perform specific testing to ensure that no unauthorized people have access to
protected resources.

6. Auditors may perform functionality tests to ensure that the application conforms to the users’
standards and accurately reports information.

January-2007 [8]
1.
e) What is informational cohesion? [4]
A class exhibits informational cohesion if the tasks its methods perform operate on the same
information or data. In object oriented programming this information would be the information
contained in the variables of an object.

For example, the Airplane class exhibits informational cohesion because its methods all work on the
same information: the speed and altitude of some airplane object.

class Airplane {
    double speed, altitude;   // the shared data every method operates on
    void takeoff() { ... }
    void fly() { ... }
    void land() { ... }
}

Note that the informational cohesion of this class is ruined if we add a method for computing taxes or
browsing web pages.

f) What is typical about a Windows process in regards to memory allocation? [4]

Each process is allocated its own block of available RAM space; no process can access another
process's code or data. If a process crashes, it dies alone, without taking the
entire OS or a bunch of other applications down.
July-2007 [4]
1.
a) What is Software Testing Life Cycle? [4]

Software testing life cycle identifies what test activities to carry out and when (what is the best time) to
accomplish those test activities. Even though testing differs between organizations, there is a testing
life cycle.

Software Testing Life Cycle consists of seven (generic) phases:

 Test Planning,
 Test Analysis,
 Test Design,
 Construction and verification,
 Testing Cycles,
 Final Testing and Implementation and
 Post Implementation.

Software testing has its own life cycle that intersects with every stage of the SDLC. The basic
requirement in the software testing life cycle is to control and manage testing in all its forms –
manual, automated and performance.

2. Software Faults and Failures


January-2004 [4]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer using
one or two sentences. Irrelevant and unnecessarily long answers shall be avoided.
d) Error and failure are synonymous in software testing terminology. [4]

FALSE. To fully understand the facets of software testing, it is important to clarify the terms “fault”, “error”
and “failure”. A failure is the manifested inability of the program to perform the function required,
i.e., a system malfunction evidenced by incorrect output, abnormal termination
or unmet time and space constraints. The cause of a failure, e.g., a missing
or incorrect piece of code, is a fault. A fault may remain undetected for a long time, until some event
activates it. When this happens, it first brings the program into an intermediate
unstable state, called an error, which, if and when it propagates to the output, eventually causes the failure.

July-2004 [8]
1.
a) Explain why it is not necessary for a program to be completely free of defects before it is delivered to its
customers. To what extent can testing be used to validate that the program is fit for its purpose?
[4]

1. A program need not be completely free of defects before delivery if:


1. Remaining defects are minor defects that do not cause system corruption and which are
transient i.e. which can be cleared when new data is input.
2. Remaining defects are such that they are recoverable and a recovery function that
causes minimum user disruption is available.
3. The benefits to the customer's business from the system exceed the problems that might
be caused by the remaining system defects.

Testing cannot completely validate that a system is fit for its intended purpose as this requires a
detailed knowledge of what that purpose will be and exactly how the system will be used. As
these details inevitably change between deciding to procure a system and deploying that
system, the testing will be necessarily incomplete. In addition, it is practically impossible for all
except trivial system to have a complete test set that covers all possible ways that the system is
likely to be used.
c) Discuss the differences in testing a business critical system, a safety critical system and a
system whose failure would not seriously affect lives, health or business. [4]

January-2006 [6]
7.
b) Explain at least one defect metric and how this metric can be collected. Also explain how defects can be
effectively tracked for a software product. [6]

The number of defects that are found in the product is one of the main indicators of
quality. Hence, we will look at progress metrics that reflect the defects of a product.
Defects get detected by the testing team and get fixed by the development team. Based on
this, defect metrics are further classified into test defect metrics and development defect
metrics.
Defect Density:
The defect density is measured by adding the number of defects reported by Software Quality
Assurance to the number of defects reported by peer reviews, and dividing the sum by the actual
size of the product (which can be measured in KLOC, SLOC or function points).
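A minimal sketch of this computation (the defect counts and the 20 KLOC size are hypothetical figures):

```python
def defect_density(sqa_defects, peer_defects, size_kloc):
    """Total reported defects (SQA + peer review) divided by product size in KLOC."""
    return (sqa_defects + peer_defects) / size_kloc

# Hypothetical figures: 30 SQA defects, 10 peer-review defects, 20 KLOC
print(defect_density(30, 10, 20))  # 2.0 defects per KLOC
```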

July-2006 [22]
1.
a) Distinguish clearly between the terms fault and failure in software development. [4]

In the context of any discussion of software quality and reliability, failure is nonconformance to
software requirements. Yet, even within this definition, there are gradations. Failures can be only
annoying or catastrophic. One failure can be corrected within seconds while another requires weeks or
even months to correct. Complicating the issue even further, the correction of one failure may in fact
result in the introduction of other errors that ultimately result in other failures. All software failures can
be traced to design or implementation problems.
2.
a) How can you determine the number of latent defects in a software product during the testing phase?
[6]

Latent defects are those defects that still remain in the software product even when delivered to the
customer. These can be identified effectively with Inspections. Regarding the true volume of latent
defects shipped with a product to users, in most cases this can never really be determined. We do not
yet have the ability to decisively determine the real number of defects shipped with a product. We can
project total defects based on other characteristics in the process or we can assume zero defects based
on other process data and patterns, but we cannot prove it. For example, we can analyze error depletion
curves prior to delivery to make a prediction of latent defects after delivery.

4.
b) Explain how the different defects in a system can be classified. Why is it necessary to classify
the defects into several classes? [6]

c) How can we estimate the Cost of Repairing the software defect in a program. [6]

Failure costs are those that would disappear if no defects appeared before shipping a product to
customers. Failure costs may be subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment. Internal
failure costs include:
1. Rework 2. Repair 3. Failure mode analysis
External failure costs are associated with defects found after the product has been shipped to the
customer. Examples of external failure costs are:
1. Complaint resolution 2. Product return and replacement 3. Help line support 4. Warranty work

To illustrate the cost impact of early error detection, we consider a series of relative costs that are
based on actual cost data collected for large software projects. Assume that an error uncovered during
design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered just before
testing commences will cost 6.5 units; during testing, 15 units; and after release, between 60 and 100
units.
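Sketching the arithmetic, with the lower bound of 60 units taken for post-release correction:

```python
# Relative correction costs cited above (design-phase cost = 1.0 unit)
RELATIVE_COST = {
    "design": 1.0,
    "before_testing": 6.5,
    "during_testing": 15.0,
    "after_release": 60.0,  # lower bound; the text cites 60 to 100 units
}

def correction_cost(errors, phase):
    """Total relative cost of correcting `errors` defects uncovered in `phase`."""
    return errors * RELATIVE_COST[phase]

# The same ten errors cost 10 units at design time but at least 600 after release
print(correction_cost(10, "design"), correction_cost(10, "after_release"))
```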
January-2007 [18]
3.
a) What are the basic concepts behind software fault tolerance? What are design diversity and
independent failure modes? Explain in detail. [10]

Software fault tolerance is the ability for software to detect and recover from a fault that is happening
or has already happened in either the software or hardware in the system in which the software is
running in order to provide service in accordance with the specification. Software fault tolerance is a
necessary component in order to construct the next generation of highly available and reliable
computing systems, from embedded systems to data warehouse systems. Software fault tolerance is not
a solution unto itself, however, and it is important to realize that software fault tolerance is just one
piece necessary to create the next generation of systems.
Design diversity was not a concept applied to the solutions to hardware fault tolerance, and to this
end, N-Way redundant systems solved many single errors by replicating the same hardware. Software
fault tolerance tries to leverage the experience of hardware fault tolerance to solve a different problem,
but by doing so creates a need for design diversity in order to properly create a redundant system.
Design diversity is a solution to software fault tolerance only so far as it is possible to create diverse
and equivalent specifications so that programmers can create software which has different enough
designs that they don't share similar failure modes.

b) Explain the differences between recovery block and N-version software method of fault tolerance.
[8]
The recovery block method is a simple method developed by Randell from what was observed
as somewhat current practice at the time. The recovery block operates with an adjudicator
which confirms the results of various implementations of the same algorithm. In a system with
recovery blocks, the system view is broken down into fault recoverable blocks. The entire
system is constructed of these fault tolerant blocks. Each block contains at least a primary,
secondary, and exceptional case code along with an adjudicator.
Recovery block operation still has the same dependency which most software fault tolerance
systems have: design diversity.
The recovery block system is also complicated by the fact that it requires the ability to roll back
the state of the system from trying an alternate.
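A minimal sketch of the recovery block scheme; the alternates and the adjudicator here are hypothetical stand-ins, and a real system would checkpoint and restore far richer state:

```python
def recovery_block(state, alternates, acceptable):
    """Try each alternate in turn; `acceptable` is the adjudicator, and the
    saved checkpoint lets the system roll back before trying the next one."""
    for alt in alternates:
        checkpoint = dict(state)      # save state before trying an alternate
        result = alt(state)
        if acceptable(result):
            return result
        state.clear()
        state.update(checkpoint)      # roll back, then try the next alternate
    raise RuntimeError("all alternates rejected by the adjudicator")

primary = lambda s: -1            # simulated faulty primary implementation
secondary = lambda s: s["x"] * 2  # correct secondary alternate
print(recovery_block({"x": 21}, [primary, secondary], lambda r: r >= 0))  # 42
```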
The N-version software concept attempts to parallel the traditional hardware fault tolerance
concept of N-way redundant hardware. In an N-version software system, each module is made
with up to N different implementations. Each variant accomplishes the same task, but hopefully
in a different way. Each version then submits its answer to a voter or decider, which determines
the correct answer and returns it as the result of the module.
The N-version method presents the possibility of various faults being generated, but
successfully masked and ignored within the system.
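The voter can be sketched with a simple majority rule (the three variants are hypothetical; a practical decider may also need inexact voting, e.g. for floating-point results):

```python
from collections import Counter

def n_version(args, versions):
    """Run every version on the same input and let a majority voter decide."""
    results = [v(*args) for v in versions]
    answer, votes = Counter(results).most_common(1)[0]
    if votes > len(versions) // 2:
        return answer               # the faulty minority is masked
    raise RuntimeError("no majority: the voter cannot adjudicate")

v1 = lambda x: x * x    # variant 1
v2 = lambda x: x ** 2   # variant 2, same task implemented differently
v3 = lambda x: x + x    # faulty variant, outvoted two to one
print(n_version((5,), [v1, v2, v3]))  # 25
```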

Difference between recovery block and N-version software method.


In recovery blocks, each alternative would be executed serially until an acceptable solution is
found as determined by the adjudicator. The recovery block method has been extended to
include concurrent execution of the various alternatives.
The N-version method has always been designed to be implemented using N-way hardware
concurrently. In a serial retry system, the cost in time of trying multiple alternatives may be too
expensive, especially for a real-time system. The recovery block method requires that each
module build a specific adjudicator; in the N-version method, a single decider may be used.

July-2007 [6]
7.
c) What are the effects of software bugs on the system? How can bugs cause security vulnerability?
[6]
A software bug is an error, flaw or fault in a computer program that stops the
software from working as intended. It is one of the more serious classes of computer problems. The
problem occurs when the source code of the software is written with certain flaws, which
result in incorrect output.

Effects of Software Bug

A software bug can have the following outcomes, some of which might be very serious:

1. Inaccurate results
2. Problems in the functionality of the program if it has a number of bugs
3. Bugs might even cause the system, and all services in it, to crash
4. Security problems might also occur

Ways to prevent such Software Bugs

Faults introduced during the programming task are known as software bugs. A bug can be very harmful to
the working of the software: apart from hampering the normal work process, it also decreases the efficiency
of the system. The software industry has introduced several prevention measures to help programmers,
including the following:

• Defensive programming to reduce the introduction of bugs
• New programming techniques designed to tackle bugs
• Programming techniques designed to resist software bugs
• Certain programming languages designed to help programmers deal with bugs

January-2008 [22]
1.
a) How can the cost of repairing defects in the software be minimized? Discuss with examples.
[4]
1. Finding and fixing a software problem after delivery is often 100 times more expensive than finding and
fixing it during the requirements and design phase.
2. Current software projects spend about 40 to 50 percent of their effort on avoidable rework.
3. About 80 percent of avoidable rework comes from 20 percent of the defects.
4. About 80 percent of the defects come from 20 percent of the modules, and about half the modules are
defect free.
5. Peer reviews catch 60 percent of the defects.
6. Perspective-based reviews catch 35 percent more defects than non-directed reviews.
7. Disciplined personal practices can reduce defect introduction rates by up to 75 percent.
8. All other things being equal, it costs 50 percent more per source instruction to develop high-
dependability software products than to develop low-dependability software products. However, the
investment is more than worth it if the project involves significant operations and maintenance costs.
3.
a) What is software failure? How is it related with faults? Explain bath tub curve nature of the
software reliability and compare it with the hardware reliability. [6]

2c) How do you calculate error index by making classification of various errors/causes into different
categories? Discuss applicability of the Pareto’s Principle in this case. [6]

3. Verification and Validation


January-2005 [22]
6.
a) When during the development process is the compliance with coding standards is checked? List
two coding standards each for (i) enhancing readability of the code, (ii) reuse of the code.
[6]

January-2006 [18]
2.
b) Explain the difference between code inspection and code walk-through. Why is detection and correction
of errors during inspection and walkthrough preferable to that achieved using testing?
[6]
Static unit testing is conducted as a part of a larger philosophical belief that a software
product should undergo a phase of inspection and correction at each milestone in its life
cycle. At a certain milestone, the product need not be in its final form. For example,
completion of coding is a milestone, even though coding of all the units may not make
the desired product. After coding, the next milestone is testing all or a substantial
number of units forming the major components of the product. Thus, before units are
individually tested by actually executing them, they are subjected to the usual review and
correction. The idea behind review is to find defects as
close to their points of origin as possible, so that those defects are eliminated with less
effort and the interim product contains fewer defects before the next task is undertaken.
Code is reviewed by applying techniques commonly known as inspection and
walkthrough.
Inspection:
1. It is a step-by-step peer group review of a work product, with each step checked against
predetermined criteria.
2. This inspection process usually concentrates on discovering errors, not correcting them.

3. The inspection process is a way of identifying early the most error-prone sections of the
program, helping to focus more attention on these sections during the computer-based testing
processes.

Walkthrough:
1. It is a review where the author leads the team through a manual or simulated execution of the
product using predefined scenarios.

2. The procedure in the meeting is different. Rather than simply reading the program or using
error checklists, the participants “play computer.” The person designated as the tester comes to
the meeting armed with a small set of paper test cases—representative sets of inputs (and
expected outputs) for the program or module.
3. Each test case is mentally executed. That is, the test data are walked through the logic of the
program. The state of the program (i.e., the values of the variables) is monitored on paper or
whiteboard.

1. Inspections and walkthroughs are more effective, again because people other than the program’s
author are involved in the process.
2. They result in lower debugging (error-correction) costs, because when an error is found it is
usually precisely located in the code.
3. These methods are generally effective in finding 30 to 70 percent of the logic-design and coding
errors in typical programs.

July-2006 [20]
2.
c) What do you understand by “code review effectiveness”? How can review effectiveness be
determined? [6]
The effectiveness of static testing is limited by the ability of a reviewer to find defects in
code by visual means. However, if occurrences of defects depend on some actual values
of variables, then it is a difficult task to identify those defects by visual means. Therefore,
a unit must be executed to observe its behaviors in response to a variety of inputs.
Finally, whatever may be the effectiveness of static tests, one cannot feel confident
without actually running the code.

January-2008 [4]
1.
b) Differentiate between verification and validation. Can a proof of correctness of software be provided?
[4]
Verification refers to the set of activities that ensure that software correctly implements a specific function.
Validation refers to a different set of activities that ensure that the software that has been built is traceable to
customer requirements.

4. Testing Techniques and Strategies


January-2004 [60]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer
using one or two sentences. Irrelevant and unnecessarily long answers shall be avoided.
b) The effectiveness of a test suite in detecting errors can be determined by counting the total number of
test cases present in the test suite. [4]

e) Development of suitable driver and stub functions are essential for carrying out effective system
testing of a product. [4]
f) The main purpose of Integration testing is to find design errors. [4]
g) Introduction of additional sequence type statements in a program would not increase the
program's cyclomatic complexity. [4]

2.
c) Design the black-box test suite for a function that accepts two pairs of floating point numbers
representing two coordinate points. Each pair of coordinate points represents the center and a
point on the circumference of the circle. The function prints whether the

January-2005 [38]
3.
b) Design the black-box suite for a program that accepts two strings and checks if the first string is
a substring of the second string and displays the number of times the first string occurs in the
second string. [6]

January-2006 [38]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer
using one or two sentences. Irrelevant and unnecessarily long answers will be penalized.
b) Introduction of additional sequence type of statements in a program can not increase its
cyclomatic complexity. [4]
January-2008 [28]
2.
b) Consider the program of determination of next date in a calendar. The input is a triple of day,
month and year within the range 1 ≤ month ≤ 12, 1 ≤ day ≤ 31 and 1900 ≤ year ≤ 2005
respectively. The possible outputs would be next date or invalid input date. Design boundary
values, robust and worst test cases for this program. [6]
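As a sketch of the boundary-value part, the standard 4n + 1 basic cases (min, min+1, nominal, max-1, max, one variable varied at a time) for the three input ranges can be generated as follows; robust testing would additionally include the out-of-range values min-1 and max+1:

```python
def bva_cases(ranges):
    """Basic boundary-value test cases: for each variable take min, min+1,
    max-1 and max with the other variables at nominal values -> 4n + 1 cases."""
    nominal = tuple((lo + hi) // 2 for lo, hi in ranges)
    cases = {nominal}
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

# (day, month, year) ranges from the question
cases = bva_cases([(1, 31), (1, 12), (1900, 2005)])
print(len(cases))  # 13 test cases for n = 3 variables
```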

b) Discuss the importance of path testing during white box or structural testing. What is code walk
through? What do you mean by code not reachable? How do you find out not reachable code in
a program? [9]

5. Building Tests and Test Plans


January-2004 [10]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer
using one or two sentences. Irrelevant and unnecessarily long answers shall be avoided.
a) System test plan can be prepared immediately after the requirements specification phase is
complete. [4]
7.
a) Normally as testing continues on a software product more and more errors are discovered.
Explain how you would decide when to stop testing. [6]

July-2004 [28]
3.
a) What is a test plan? Discuss the features of good test-plans. [6]

The purpose of system test planning, or simply test planning, is to get ready and organized for
test execution.
The purpose of a system test plan is summarized as follows:
• It provides guidance for the executive management to support the test project, thereby allowing
them to release the necessary resources to perform the test activity.

• It establishes the foundation of the system testing part of the overall software project.
• It provides assurance of test coverage by creating a requirements traceability matrix.
• It outlines an orderly schedule of events and test milestones that are tracked.
• It specifies the personnel, financial, equipment, and facility resources required to support the
system testing part of a software project.

b) Normally test design methods involve large number of test cases and it is nearly impossible to execute
all of the tests. Describe and illustrate the various strategies and criteria employed by practitioners to
reduce the number of test cases. [12]

The test design activities must be performed in a planned manner in order to
meet some technical criteria, such as effectiveness, and economic criteria, such
as productivity. Therefore, we consider the following factors during test design: (i)
coverage metrics, (ii) effectiveness, (iii) productivity, (iv) validation, (v)
maintenance, and (vi) user skill.

4.
b) Develop a test plan for exhaustive testing of a program that computes the roots, all possible
types, of a quadratic equation. [10]

July-2005 [26]
1.
f) Explain the concept of a test case and test plan. [4]
2.
a) DOEACC is planning to start online testing. It will use an automated process for recording candidate
information, scheduling candidates for exams, keeping track of results and sending out certificates.
Write a brief test plan for this project. [6]
b) Software testing can be an unending process. What criteria are used to stop testing?
[6]
3.
b) You are a tester for testing a large system. The system data model is very large with many attributes
and there are many interdependencies within the fields. What steps would you use to test the system
and what are the effects of the steps you have taken on the test plan?
[6]
4.
a) Explain and give examples of the following black box techniques?
• Error Guessing. [4]

Without using any particular methodology such as boundary-value analysis or cause-effect graphing,
these people seem to have a knack for sniffing out errors. One explanation of this is that these people
are practicing, subconsciously more often than not, a test-case-design technique that could be termed
error guessing. Given a particular program, they surmise, both by intuition and experience, certain
probable types of errors and then write test cases to expose those errors. It is difficult to give a
procedure for the error-guessing technique since it is largely an intuitive and ad hoc process. The basic
idea is to enumerate a list of possible errors or error-prone situations and then write test cases based on
the list.
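By way of illustration, a few error-guessed cases for a hypothetical routine that counts occurrences of one string in another (an empty needle, overlapping matches, a needle longer than the haystack, an exact match):

```python
def count_occurrences(needle, haystack):
    """Count (possibly overlapping) occurrences of needle in haystack."""
    if not needle:
        return 0
    return sum(haystack[i:i + len(needle)] == needle
               for i in range(len(haystack) - len(needle) + 1))

# Each tuple is (needle, haystack, expected result)
error_guesses = [("", "abc", 0), ("aa", "aaa", 2), ("abcd", "abc", 0), ("abc", "abc", 1)]
for needle, haystack, expected in error_guesses:
    assert count_occurrences(needle, haystack) == expected
```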

July-2006 [6]
2.
b) Identify the types of information that should be presented in the test summary report.
[6]
1. Report identifier
2. Summary
3. Variances
4. Summary of results
5. Evaluation
6. Recommendations
7. Summary of activities
8. Approval

The report identifier uniquely identifies the report. It is used to keep track of the
document under version control.
The summary section summarizes what acceptance testing activities took place,
including the test phases, releases of the software used, and the test environment. This
section normally includes references to the ATP, acceptance criteria, and requirements
specification.
The variances section describes any difference between the testing that was planned
and the actual testing carried out. It provides an insight into a process for improving
acceptance test planning in the future.
In the summary of results section of the document test results are summarized. The
section gives the total number of test cases executed, the number of passing test cases,
and the number of failing test cases; identifies all the defects; and summarizes the
acceptance criteria to be changed.
The evaluation section provides an overall assessment of each category of the quality
attributes identified in the acceptance criteria document, including their limitations. This
evaluation is based on the test results from each category of the test plan. The
deviations of the acceptance criteria that are captured in the ACC during the acceptance
testing are discussed.
The recommendations section includes the acceptance test team’s overall
recommendation: (i) unconditionally accept the system, (ii) accept the system subject to
certain conditions being met, or (iii) reject the system. However, the ultimate decision is
made by the business experts of the supplier and the buyer organization.

The summary of activities section summarizes the testing activities and the major
events. This section includes information about the resources consumed by the various
activities. For example, the total manpower involved in and the time spent for each of
the major testing activities are given. This section is useful to management for
accurately estimating future acceptance testing efforts.
Finally, the names and titles of all the people who will approve this report are listed in the
approvals section. Ideally, the approvers of this report should be the same people who
approved the corresponding ATP, because the summary report describes all the activities
outlined in the ATP. If some of the reviewers have minor disagreements, they may note
their views before signing off on the document.

January-2007 [16]
1.
d) When do you decide to stop testing any further? [4]

Testing software systems is a complex task due to the interdependencies within the system, the complexity
of applications, etc. Complete testing is not possible for almost all projects, and a determination needs to be
made early on by the project test team, SQA and PM on when to stop testing.
This will be outlined in the detailed test plan, and it should be discussed and agreed upon with the customer.
It is generally a good idea to list what testing is going to be left out.

E.g.: it may be agreed that testing all branches of the code will not be done, due to the time constraints of
the project, and that successful execution of test case xyz will imply that the system works as intended.

Some of the common factors and constraints that should be considered when deciding when to stop testing
are:
1. Testing budget of the project, or when the cost of continued testing does not justify the project cost.
2. Resources available and their skills.
3. Project deadline and test completion deadline.
4. Critical or key test cases successfully completed. Certain test cases, even if they fail, may not be show
stoppers.
5. Functional coverage and code coverage meeting the client requirements to a certain point.
6. Defect rates falling below a certain specified level and high-priority bugs being resolved.
7. Project progressing from alpha to beta and so on.

Testing is a potentially endless process. Once the product is delivered, the customer tests it every day as
they use it. So the decision has to be made early as to what is the acceptable risk, based on the level of
testing possible.

g) What is a testable design? [4]
4.
b) What are clean tests and dirty tests? Which one works when? [8]

July-2007 [18]
3. List down all the components that are to be part of a test plan. How do you decide on when to STOP
testing? [18]
The components of a good test plan are as follows:
1. Objectives. The objectives of each testing phase must be defined.
2. Completion criteria. Criteria must be designed to specify when each testing phase will be judged to
be complete.
3. Schedules. Calendar time schedules are needed for each phase.

4. Responsibilities. For each phase, the people who will design, write, execute, and verify test cases,
and the people who will repair discovered errors, should be identified.
5. Test case libraries and standards. In a large project, systematic methods of identifying, writing,
and storing test cases are necessary.
6. Tools. The required test tools must be identified, including a plan for who will develop or acquire
them, how they will be used, and when they are needed.
7. Computer time.
8. Hardware configuration. If special hardware configurations or devices are needed, a plan is
required that describes the requirements, how they will be met, and when they are needed.
9. Integration. Part of the test plan is a definition of how the program will be pieced together (for
example, incremental top-down testing).
10. Tracking procedures. Means must be identified to track various aspects of the testing progress,
including the location of error-prone modules and estimation of progress with respect to the schedule,
resources, and completion criteria.
11. Debugging procedures. Mechanisms must be defined for reporting detected errors, tracking the
progress of corrections, and adding the corrections to the system. Schedules, responsibilities, tools, and
computer time/resources also must be part of the debugging plan.
12. Regression testing. Regression testing is performed after making a functional improvement or
repair to the program. Its purpose is to determine whether the change has regressed other aspects of the
program. It usually is performed by rerunning some subset of the program’s test cases. Regression
testing is important because changes and error corrections tend to be much more error prone than the
original program code (in much the same way that most typographical errors in newspapers are the
result of last-minute editorial changes, rather than changes in the original copy). A plan for regression
testing—who, how, when—also is necessary.
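
The rerun-a-subset idea described above can be sketched in a few lines. This is a minimal, illustrative Python sketch: `add` is a hypothetical stand-in for a just-repaired unit, and the suite is a saved set of previously passing cases.

```python
def add(a, b):
    # Hypothetical unit that has just been "repaired".
    return a + b

# A regression suite is simply a stored set of (inputs, expected-output)
# pairs that passed before the change was made.
regression_suite = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def run_regression(func, suite):
    """Rerun the suite; return the cases whose output no longer matches."""
    return [(args, expected, func(*args))
            for args, expected in suite
            if func(*args) != expected]

failures = run_regression(add, regression_suite)
print("regressions:", failures)  # an empty list means nothing regressed
```

If the repair had changed existing behavior, the mismatching cases would appear in the returned list, flagging a regression.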
Determining the exit criteria of the final test cycle is a complex issue. It involves the nature of products
(e.g., shrink-wrap application versus an operating system), business strategy related to the product,
marketing opportunities and timing, and customer requirements. The test cycle is considered complete
when the following predicates hold: (i) new test cases are designed and documented for
those defects that were not detected by the existing test case, referred to as test case escapes; (ii) all
test cases are executed at least once; (iii) 95% of test cases pass; and (iv) all the known defects are in
the closed state.
January-2008 [4]
1.

c) “Complete testing of software is just not possible.” State whether this statement is true or false
and illustrate with an example. [4]

6. Testing Specialized Systems and Application


January-2004 [16]
2.
a) Usability of a software product is tested during which type of testing: unit, Integration or system
testing? How is usability tested? [6]
b) Explain difference between testing a system program based on object-oriented and procedure-
oriented approach. [4]

July-2005 [36]
4.
b) Suppose your company is about to roll out an E-Commerce application. It is not possible to test the
application on all types of browsers on all platforms and operating systems. What steps would you take
in the testing environment to reduce the business risks and commercial risks?
[6]
7.
b) Discuss the salient features of graphical interface testing. How is it different from WWW
Testing? [6]
In modern-day software applications, users access functionalities via GUIs. Users of the client-
server technology find it convenient to use GUI based applications. The GUI tests are designed to
verify the interface to the users of an application. These tests verify different components
(objects) such as icons, menu bars, dialogue boxes, scroll bars, list boxes, and radio buttons. Ease
of use (usability) of the GUI design and output reports from the viewpoint of actual system users
should be checked. Usefulness of the online help, error messages, tutorials, and user manuals are
verified. The GUI can be utilized to test the functionality behind the
interface, such as accurate response to database queries.

January-2006 [34]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer using
one or two sentences. Irrelevant and unnecessarily long answers will be penalized.
g) A satisfactory way to test object-oriented programs is to test all the methods supported by the
different classes individually and then to perform adequate integration and system testing.
[4]
6.
b) Why is effective testing of real-time and embedded systems considered more difficult than
testing traditional systems? Explain a satisfactory scheme to test real-time and embedded
systems. [6]
7.
a) What do you understand by volume testing? Explain using a suitable example how volume test
cases can be designed and the types of defects these tests can help to detect.
[6]

January-2007 [32]
2.
a) What are the typical problems in testing web services? What are the differences between testing
intranet-based and Internet-based web services in an organization? [6]
b) In the context of web services testing, explain the following with example:
• Proof-of-concept testing
[12]
5.

b) What is mutation testing and random testing? Illustrate your answer by an example.
[10]

July-2007 [28]
1.
b) What security risks must be addressed in a web application test plan? [4]
f) What are the benefits of reliability testing? [4]
The goal of this method is the development of a set of theorems about the
program in question, the proof of which guarantees the absence of errors in the
program.
5. In the context of software testing explain the following briefly: -
d) Context-driven testing [3]

7. Testing Measurements and Tools


January-2004 [6]
6.
c) Distinguish between the static and dynamic analysis of a program. How are static and dynamic program
analysis results useful? [6]
Static analysis techniques, where the term “static” does not refer to the techniques themselves (they
can use automated analysis tools) but means that they do not involve the execution of the tested
system, serve different purposes, such as checking the adherence of the implementation to the
specifications or detecting flaws in the code via inspection or review. Static techniques are based solely
on the manual or automated examination of project documentation, of software models and code, and
of other related information about requirements and design. They are particularly valuable when a
language such as C is used, which has weak typing and hence many errors go undetected by the compiler.

Dynamic Analysis: Dynamic analysis techniques exercise the software in order to expose possible
failures. Dynamic analysis of a software system involves actual program execution in order to expose
possible program failures. The behavioral and performance properties of the program are also
observed. Programs are executed with both typical and carefully chosen input values. Often, the
input set of a program can be impractically large, so for practical considerations a finite subset
of the input set is selected.
Therefore, in testing, we observe some representative program behaviors and reach a conclusion
about the quality of the system. Careful selection of a finite test set is crucial to reaching a
reliable conclusion.
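
The contrast can be illustrated in a few lines. In this hedged sketch, `divide` is a hypothetical unit: the static pass merely inspects the parse tree without running anything, while the dynamic pass actually executes the code with a carefully chosen input.

```python
import ast

SOURCE = "def divide(a, b):\n    return a / b\n"

# Static analysis: examine the code without executing it.
tree = ast.parse(SOURCE)
div_nodes = [node for node in ast.walk(tree) if isinstance(node, ast.Div)]
print("static: division operators found =", len(div_nodes))  # 1

# Dynamic analysis: execute the code and observe its behavior.
namespace = {}
exec(SOURCE, namespace)
divide = namespace["divide"]
try:
    divide(1, 0)  # a carefully chosen input exposes the failure
except ZeroDivisionError:
    print("dynamic: ZeroDivisionError exposed only at run time")
```

The static pass can warn that a division exists (and hence a possible divide-by-zero), but only the dynamic pass demonstrates the actual failure.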

July-2004 [18]
5.
a) Static Analysis is a technique for assessing the structural characteristics of source code.
Explain this technique by taking a simple example. Bring out the utility and limitations of static
analyzers. [12]

January-2005 [16]
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer
using one or two sentences. Irrelevant and unnecessarily long answers will be penalized.
b) Use of static and dynamic program analysis tools is an effective substitute for thorough testing.
[4]
2.

a) What do you understand by automatic program analysis? Give a broad classification of the
different types of program analysis tools used during program development. What are the
different types of information produced by each type of tool? [6]

July-2005 [4]
1.
c) Differentiate between static testing and dynamic testing? [4]

In static unit testing, a programmer does not execute the unit; instead, the code is examined over
all possible behaviors that might arise during run time. Static unit testing is also known as
non-execution-based unit testing. In static unit testing, the code of each unit is validated against
the requirements of the unit by reviewing the code.
Dynamic unit testing is execution based. In dynamic unit testing, a program unit is actually
executed and its outcomes are observed. Dynamic unit testing means testing the code by actually
running it.
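
A minimal dynamic unit test, in which the unit is actually executed and its outcome observed, might look like the following sketch using Python's standard `unittest` module; the unit `absolute` is a made-up example.

```python
import unittest

def absolute(x):
    # Unit under test.
    return -x if x < 0 else x

class AbsoluteTest(unittest.TestCase):
    def test_negative(self):
        self.assertEqual(absolute(-5), 5)

    def test_zero(self):
        self.assertEqual(absolute(0), 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AbsoluteTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

A static review of `absolute`, by contrast, would examine the same code line by line against its requirements without ever running it.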

January-2006 [12]
6.
a) Explain two test coverage metrics for procedural code. How are these useful? Can these be used
satisfactorily for object-oriented programs? Explain your answer. [6]
c) Distinguish between the static and dynamic analysis of a program. Explain at least one metric that a
static analysis tool reports and one metric that a dynamic analysis tool reports. How are these metrics
useful? [6]

July-2006 [12]
3.
a) What do you understand by test data generation? Explain, how test data can be generated
automatically. [6]

January-2007 [30]
1.
c) What are the steps in automating the testing process? [4]

6.
a) What are the advantages of dynamic analysis over static testing? Explain dynamic analysis
process. [10]
b) What are the categories of dynamic analyzers available? Describe their important features.
[8]
7.
a) Explain the general architecture of a test-data generator with a diagram. Give an example.
[8]
Test Data Generator: These tools assist programmers in selecting test data that cause a program
to behave in a desired manner. Test data generators can offer several capabilities beyond the
basics of data generation:
• They can generate a large number of variations of a desired data set based
on a description of the characteristics that has been fed into the tool.
• They can generate test input data from source code.
• They can generate equivalence classes and values close to the boundaries.
• They can calculate the desired extent of boundary value testing.
• They can estimate the likelihood of the test data being able to reveal faults.
• They can generate data to assist in mutation analysis.
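
Two of these capabilities, boundary values and equivalence classes, can be sketched as follows. The valid range [1, 100] is an assumed input domain used purely for illustration.

```python
import random

def boundary_values(lo, hi):
    """Values at and just beyond the boundaries of the valid range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative value from each equivalence class."""
    return {"invalid_low": lo - 10,
            "valid": random.randint(lo, hi),
            "invalid_high": hi + 10}

print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))
```
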

July-2007 [22]
1.
e) How do you choose the most suitable test automation tool for your project? [4]

1. Meeting requirements.
2. Technology expectations.
3. Training/skills.
4. Management aspects.
5. Collect the experiences of other organizations which used similar test tools.
6. Keep a checklist of questions to be asked of the vendors on cost/effort/support.
7. Identify the list of tools that meet the above requirements.
4. What is a Test case? What are the generic types of automated test tools available?
[18]

A test case is a set of sequential steps to execute a test, operating on a set of predefined inputs to
produce certain expected outputs. There are two types of test cases, namely automated and manual. An
automated test case is an object for execution for other modules in the architecture and does not
represent any interaction by itself.
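
The definition above can be mirrored in a small data structure. The field names and the login scenario below are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ManualTestCase:
    identifier: str
    steps: list       # sequential steps to execute the test
    inputs: dict      # predefined inputs
    expected: dict    # expected outputs
    actual: dict = field(default_factory=dict)

    def passed(self):
        return self.actual == self.expected

tc = ManualTestCase("TC-LOGIN-001",
                    steps=["open login page", "enter credentials", "submit"],
                    inputs={"user": "alice", "password": "secret"},
                    expected={"status": "logged in"})
tc.actual = {"status": "logged in"}  # recorded after execution
print(tc.identifier, "passed:", tc.passed())
```
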

January-2008 [34]
1.
f) Discuss main objectives of performing measurement in software engineering domain.
[4]
Measurement lets one evaluate parameters of interest in a quantitative manner as
follows:
1. Evaluate the effectiveness of a technique used in performing a task. One can
evaluate the effectiveness of a test generation technique by counting the number
of defects detected by test cases generated by following the technique and those
detected by test cases generated by other means.
2. Evaluate the productivity of the development activities. One can keep
track of productivity by counting the number of test cases designed per day, the
number of test cases executed per day, and so on.
3. Evaluate the quality of the product. By monitoring the number of defects
detected per week of testing, one can observe the quality level of the system.
4. Evaluate the product testing.
5.
a) How do we measure function points of software? Compute the function point value for a software project
with the following details.
Number of User Inputs =10
Number of User Outputs =20
Number of Enquiries =10
Number of Files=6
Number of External Interfaces=3

Assume the multipliers at their average values (4, 5, 4, 10, 7) and all the complexity adjustment factors at
their moderate to average values (2.5). [6]
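
A worked computation for the figures in the question, assuming the standard formula FP = count total × (0.65 + 0.01 × ΣFi) with 14 complexity adjustment factors, each at the moderate value 2.5:

```python
counts      = {"inputs": 10, "outputs": 20, "enquiries": 10,
               "files": 6, "interfaces": 3}
multipliers = {"inputs": 4, "outputs": 5, "enquiries": 4,
               "files": 10, "interfaces": 7}   # average weights from the question

unadjusted = sum(counts[k] * multipliers[k] for k in counts)
print("count total:", unadjusted)              # 40 + 100 + 40 + 60 + 21 = 261

sum_fi = 14 * 2.5                              # 14 factors at 2.5 each = 35
fp = unadjusted * (0.65 + 0.01 * sum_fi)       # adjustment multiplier = 1.0
print("function points:", round(fp))           # 261
```

So the function point value works out to 261.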

b) What do you understand by static analysis of a program? How is the information generated during static
analysis useful? [6]

Static Analysis: As the term “static” suggests, it is based on the examination of a number of
documents, namely requirements documents, software models, design documents, and source
code. Traditional static analysis includes code review, inspection, walk-through, algorithm
analysis, and proof of correctness. It does not involve actual execution of the code under
development. Instead, it examines code and reasons over all possible behaviors
that might arise during run time. Compiler optimizations are a standard form of static analysis.

c) Discuss test coverage metrics of procedural software. Can these also be used for object-oriented
programs? [6]
7.
a) List various software project management activities and give various options to achieve reliable cost and
schedule estimates. [6]
c) Give the structure of a software testing tool and discuss its various components in brief.
[6]
Software testing is a highly labor intensive task. A test engineer can use a variety of tools, such as
a static code analyzer, a test data generator, and a network analyzer, if a network-based
application or protocol is under test. Those tools are useful in increasing the efficiency and
effectiveness of testing.

8. Quality Assurance and Standards


January-2004 [18]
5.
b) Do you agree with the following statement:
“Modern quality assurance paradigms are centered around carrying out thorough product testing.”
Justify your answer. [6]

July-2004 [18]
7.
a) Explain what you understand by software process quality and software product quality. How
would you assure the quality of software product? [10]
b) A commonly used software quality measure is the number of known errors per thousand lines of
product source code. Compare the usefulness of this measure for developers and users. What
are the possible problems with relying on this measure as the sole expression of software
quality? [8]

January-2005 [36]
5.
a) What problems would you face if you are developing several versions of the same product
according to a client’s request and you are not using any configuration management tools?
[6]

July-2005 [19]
6.
b) Describe the popular software quality assurance models. Compare and contrast the ISO 9000
and CMM models. [9]

7.
a) How can Software Quality Assurance process be implemented without stifling productivity?
Explain. [6]

July-2006 [34]
7.
c) List four metrics that can be determined from an analysis of a program’s source code and would
correlate well with the reliability of the delivered software. [6]

January-2007 [24]
1.
b) What is the difference between robustness and correctness? [4]
Correctness: A software system is expected to meet the explicitly specified functional requirements
and the implicitly expected nonfunctional requirements. If a software system satisfies all the functional
requirements, the system is said to be correct. However, a correct software system may still be
unacceptable to customers if the system fails to meet unstated requirements, such as stability,
performance, and scalability. On the other hand, even an incorrect system may be accepted by users.
Robustness means how sensitive a system is to erroneous input and changes in its operational
environment. Tests in this category are designed to verify how gracefully the system behaves in error
situations and in a changed operational environment. The purpose is to deliberately break the system,
not as an end in itself, but as a means to find errors. Types of robustness tests include: (1) boundary
value, (2) power cycling, (3) on-line insertion and removal, (4) high availability, and (5) degraded node testing.
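
A robustness test at the unit level can be sketched as follows. `parse_age` and its valid range are hypothetical; the point is that erroneous and boundary inputs must be handled gracefully rather than crashing the system.

```python
def parse_age(text):
    """Reject erroneous input gracefully instead of crashing."""
    try:
        age = int(text)
    except ValueError:
        return None                 # graceful failure on malformed input
    return age if 0 <= age <= 150 else None

# Robustness tests deliberately feed invalid and boundary values.
for bad in ["abc", "", "-1", "151"]:
    assert parse_age(bad) is None
assert parse_age("0") == 0 and parse_age("150") == 150
print("robustness checks passed")
```
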
4.
a) Can we directly measure the quality of software? What are the basic three sets of factors that determine
software quality? How can their indicators be derived? [10]

July-2007 [42]
2. What are the common causes for bugs in software that can be handled by better software project
management and quality practices? What are the five levels of CMM? From which level, according to
you, do good quality practices start prevailing, thereby reducing the causes of bugs?
[18]
6. What are the various types of risks in software projects? What is traceability? What is the use of
Requirements-Design Traceability Matrix? [18]
7.
a) How can Quality Assurance processes be incorporated in a software organization? Illustrate any one
process with an example. [6]

January-2008 [28]
1.
g) Give main features of the Capability Maturity Model (CMM). [4]

The capability maturity model (CMM), developed by the Software Engineering Institute
(SEI) at Carnegie Mellon University, allows an organization to evaluate its software
development processes. The model supports incremental process improvement.
Level 1: Initial. At this level, software is developed by following no process
model. There is not much planning involved. Even if a plan is prepared, it may not
be followed. Individuals make decisions based on their own capabilities and skills.
Level 2: Repeatable. At this level, the concept of a process exists so that success
can be repeated for similar projects. Performance of the proven activities of past
projects is used to prepare plans for future projects.
Level 3: Defined. At this level, documentation plays a key role. Processes that
are related to project management and software development activities are documented,
reviewed, standardized, and integrated with organizational processes.

Level 4: Managed. At this level, metrics play a key role. Metrics concerning
processes and products are collected and analyzed. Those metrics are used to gain
quantitative insight into both process and product qualities.
Level 5: Optimizing. At this level, organizations strive to improve their processes
on a continual basis. This is achieved in two steps: (i) observe the effects
of processes, by measuring a few key metrics, on the quality, cost, and lead
time of software products and (ii) effect changes to the processes by introducing
new techniques, methods, tools, and strategies.
6.
a) How do you define software quality? Give a list of various software quality criteria and attributes.
[6]
Software quality is defined as “conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit characteristics that are
expected of all professionally developed software.”

b) “Software Quality Assurance (SQA) is an umbrella activity.” Illustrate this statement and give various
activities required for software quality assurance. Discuss the importance of software configuration
management in modern quality paradigms. [6]

An effective quality process must focus on:


• Paying much attention to customer’s requirements
• Making efforts to continuously improve quality
• Integrating measurement processes with product design and development
• Pushing the quality concept down to the lowest level of the organization
• Developing a system-level perspective with an emphasis on methodology
and process
• Eliminating waste through continuous improvement

c) What do you understand by the reliability of software? Give some metrics which can be
determined from the analysis of source code of software and can be correlated with the
reliability. [6]
Reliability is one of the metrics used to measure the quality of a software system. Software
reliability is defined as the probability that the software executes without failure for a specified
amount of time in a specified environment. The longer a system runs without failure, the more
reliable it is. A software reliability model provides a family of growth curves that describe the
decline of failure rate as defects are submitted and closed during the system testing phase.

7.
b) What are software reliability growth models? Using the logarithmic Poisson execution time model,
calculate the current failure intensity for a software system which has an initial failure intensity of 20
failures/hour. The failure intensity decay parameter is 0.02 per failure and the system has experienced 100
failures up to this point. [6]
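
The calculation asked for in part (b) follows from the logarithmic Poisson model, in which the current failure intensity is λ(μ) = λ0 · e^(−θμ):

```python
import math

lambda0 = 20.0   # initial failure intensity, failures/hour
theta = 0.02     # failure intensity decay parameter, per failure
mu = 100         # failures experienced so far

current = lambda0 * math.exp(-theta * mu)   # 20 * e^(-2)
print(f"current failure intensity: {current:.2f} failures/hour")  # 2.71
```

So the current failure intensity is 20 · e^(−2), approximately 2.71 failures/hour.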
