
Chapter 3

Q1) Quality Assurance: a classification scheme?



Defect prevention through error blocking or error source removal:
Defect reduction through fault detection and removal:
Defect containment through failure prevention and containment:

Defect prevention through error blocking or error source removal:
Explain and identify techniques in the context of error blocking or error source removal to effectively apply the defect prevention strategy?
These QA activities prevent certain types of faults from being injected into the software. Since errors are the missing or
incorrect human actions that lead to the injection of faults into software systems, we can directly remove the underlying
causes for them. Therefore, defect prevention can be done in two generic ways:
Eliminating certain error sources, such as removing ambiguities or correcting human misconceptions, which are the
root causes for the errors.
Fault prevention by directly correcting or blocking missing or incorrect human actions. This group of techniques breaks
the causal relation between error sources and faults through the use of certain tools and technologies, enforcement of
certain process and product standards, etc., as the sketch below illustrates.
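As a concrete, hypothetical illustration of error blocking through tools: the sketch below uses Python type annotations so that a static checker such as mypy can reject an incorrect human action before the fault is ever injected into the code base. The function name and scenario are invented for this example.

```python
# Hypothetical illustration of error blocking: the annotation lets a static
# checker (e.g., mypy) reject an incorrect human action -- passing strings
# where numbers are required -- before the fault enters the code base.

def average_response_time(samples: list[float]) -> float:
    """Return the mean of a non-empty list of timing samples."""
    if not samples:
        raise ValueError("samples must be non-empty")
    return sum(samples) / len(samples)

# average_response_time(["120", "80"])   # blocked by the type checker
print(average_response_time([120.0, 80.0, 100.0]))  # 100.0
```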

Defect reduction through fault detection and removal:
These QA alternatives detect and remove certain faults once they have been injected into the software systems.

Inspection directly detects and removes faults from software code, design documents, etc.
Testing removes faults based on related failure observations during program execution.

Defect containment through failure prevention and containment:

These containment measures focus on the failures by either containing them to local areas so that there are no global failures
observable to users, or limiting the damage caused by software system failures. Therefore, defect containment can be done in
two generic ways:

Some QA alternatives, such as the use of fault-tolerance techniques, break the causal relation between faults and
failures so that local faults will not cause global failures, thus tolerating these local faults.

A related extension to fault tolerance is containment measures to avoid catastrophic consequences, such as death,
personal injury, and severe property or environmental damage, in case of failures. For example, failure containment
for real-time control software used in nuclear reactors may include concrete walls to encircle and contain radioactive
material in case of a reactor meltdown due to software failures, in order to prevent damage to the environment and
people's health.
Q2) Defect prevention through education and training?

Education and training provide people-based solutions for error source elimination. It has long been observed by software
practitioners that the people factor is the most important factor that determines the quality and, ultimately, the success or
failure of most software projects.
The education and training effort for error source elimination should focus on the following areas:

Product and domain specific knowledge.
If the people involved are not familiar with the product type or application domain, there is a good chance that wrong
solutions will be implemented. For example, developers unfamiliar with embedded software may design software without
considering its environmental constraints, thus leading to various interface and interaction problems between software and its
physical surroundings.
Software development knowledge and expertise.
This plays an important role in developing high-quality software products. For example, a lack of expertise with requirements
analysis and product specification usually leads to many problems and rework in subsequent design, coding, and testing activities.
Knowledge about development methodology, technology, and tools.
This also plays an important role in developing high-quality software products.
Development process knowledge.
If the project personnel do not have a good understanding of the development process involved, there is little chance that the
process can be implemented correctly. For example, if the people involved in incremental software development do not know
how the individual development efforts for different increments fit together, the uncoordinated development may lead to
many interface or interaction problems.

Q3) Defect Reduction through Inspection?

Software inspections are critical examinations of software artifacts by human inspectors aimed at discovering and fixing faults in
the software systems. Inspection is a well-known QA alternative familiar to most experienced software quality professionals.

Inspections are critical reading and analysis of software code or other software artifacts, such as designs, product
specifications, test plans, etc.

Inspections are typically conducted by multiple human inspectors, through some coordination process. Multiple
inspection phases or sessions might be used.

Faults are detected directly in inspection by human inspectors, either during their individual inspections or various
types of group sessions.

Identified faults need to be removed as a result of the inspection process, and their removal also needs to be verified.

The inspection processes vary, but typically include some planning and follow-up activities in addition to the core
inspection activity.

Q4) What is defect containment? Identify the strategies to promote software fault tolerance and failure containment?

Because of the large size and high complexity of most software systems in use today, defect reduction activities such as
testing and inspection can only reduce the number of faults to a fairly low level, but not completely eliminate them.
For software systems where failure impact is substantial, such as many real-time control software sub-systems used in
medical, nuclear, transportation, and other embedded systems, this low defect level and failure risk may still be
inadequate, so some additional QA alternatives are needed.
On the other hand, these few remaining faults may be triggered under rare conditions, making it unrealistic to attempt to
generate the huge number of test cases needed to cover all these conditions. Some other means must be used either to
prevent failures by breaking the causal relations between these faults and the resulting failures, thus tolerating these faults,
or to contain the failures by reducing the resulting damage.
Fault tolerance
Software fault tolerance ideas originate from fault tolerance designs in traditional hardware systems that require higher levels
of reliability, availability, or dependability. In such systems, spare parts and backup units are commonly used to keep the
systems in operational condition, perhaps at a reduced capability, in the presence of unit or part failures. The primary software
fault tolerance techniques include recovery blocks and N-version programming (NVP).
Recovery blocks
Recovery blocks use redundancy over time as the basic mechanism for fault tolerance, ensuring that a local failure will not lead
to a global failure. Typically, if the result of one execution fails an acceptance test, execution is repeated or an alternate version
is tried, in the hope that it will not lead to the same local failure.
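A minimal sketch of the recovery-block idea, assuming an acceptance test and a list of alternate implementations; all names here are illustrative, not from the text:

```python
# Try each alternate in turn; release a result only if it passes the
# acceptance test, so a local fault does not become a global failure.

def recovery_block(inputs, alternates, acceptance_test):
    for algorithm in alternates:          # redundancy over time
        try:
            result = algorithm(inputs)
            if acceptance_test(result):   # only acceptable results escape
                return result
        except Exception:
            pass                          # a crash counts as a local failure
    raise RuntimeError("all alternates failed the acceptance test")

# Usage: a primary sort plus a simple backup, guarded by a sortedness check.
is_sorted = lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))
print(recovery_block([3, 1, 2], [sorted, lambda xs: sorted(xs)], is_sorted))
```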
NVP (N-Version Programming)
NVP uses parallel redundancy, where N copies of the program, each of a different version, are run in parallel. The decision
algorithm in NVP ensures that local failures in a limited number of these parallel versions will not compromise the global
execution results.
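A minimal NVP sketch with a majority-vote decision algorithm; the versions run sequentially here for simplicity (a real system would run them in parallel), and all names are illustrative:

```python
from collections import Counter

def nvp_vote(versions, x):
    """Run all N versions and return the strict-majority result."""
    results = [v(x) for v in versions]
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(versions) // 2:        # no majority: signal global failure
        raise RuntimeError("no majority agreement among versions")
    return winner

# Three independently written "versions" of absolute value, one faulty.
v1 = abs
v2 = lambda x: x if x >= 0 else -x
v3 = lambda x: x                           # faulty for negative inputs
print(nvp_vote([v1, v2, v3], -5))          # 5: the faulty version is outvoted
```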
Failure containment
For safety-critical systems, the primary concern is our ability to prevent accidents from happening, where an accident is a
failure with a severe consequence. Even low failure probabilities for software are not tolerable in such systems if those failures
are likely to lead to accidents. Therefore, in addition to the above QA techniques, various specific techniques are also used
for safety-critical systems, based on analysis of hazards, or logical pre-conditions for accidents.

Hazard elimination through substitution, simplification, decoupling, elimination of specific human errors, and reduction of
hazardous materials or conditions. These techniques reduce certain defect injections or substitute non-hazardous ones for
hazardous ones. The general approach is similar to the defect prevention and defect reduction techniques surveyed earlier, but
with a focus on those problems involved in hazardous situations.
Hazard reduction through design for controllability (for example, automatic pressure release in boilers), use of locking devices
(for example, hardware/software interlocks, as sketched below), and failure minimization using safety margins and redundancy.
These techniques are similar to the fault tolerance techniques surveyed above, where local failures are contained without
leading to system failures.
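A minimal sketch of a software interlock combined with design for controllability; the boiler scenario and all names are hypothetical:

```python
class BoilerController:
    """Refuses hazardous commands and vents pressure instead."""

    def __init__(self, max_safe_pressure: float):
        self.max_safe_pressure = max_safe_pressure
        self.relief_valve_open = False

    def increase_heat(self, current_pressure: float) -> bool:
        # Interlock: block the hazardous action when preconditions fail.
        if current_pressure >= self.max_safe_pressure:
            self.relief_valve_open = True   # automatic pressure release
            return False
        return True                         # safe to apply more heat

controller = BoilerController(max_safe_pressure=10.0)
print(controller.increase_heat(12.0))       # False: command blocked
print(controller.relief_valve_open)         # True: pressure vented instead
```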
Hazard control through reducing exposure, isolation and containment (for example, barriers between the system and the
environment), protection systems (active protection activated in case of hazard), and fail-safe design (passive protection that
fails in a safe state without causing further damage). These techniques reduce the severity of failures, thereby weakening the
link between failures and accidents.
Damage control through escape routes, safe abandonment of products and materials, and devices for limiting physical
damage to equipment or people. These techniques reduce the severity of accidents, thus limiting the damage caused by
these accidents and related software failures.
Chapter 4
Q1) Why are defect tracking and defect handling important in quality assurance? Explain the techniques.

An important part of the normal execution of various QA activities is defect handling. At the minimum, each
discovered defect needs to be resolved. To ensure its resolution, some records must be kept and tracked. The exact way
defects are handled is also influenced by the specific QA activities that led to their initial discovery, the project environment,
and other factors.
Another important part of QA activities is defect tracking, which monitors and records what happens to each defect after its
initial discovery, up until its final resolution.

Techniques
Defect handling is an important part of QA that involves multiple parties. For example, during testing, the developers who fix
discovered defects are typically not the same as the testers who observed and reported the problems in the first place. The
exception is unit testing, which is usually carried out in parallel with coding by the same person. However, most defects from
unit testing are not formally tracked because they are considered part of the implementation activities.
In many organizations, defect handling is implicitly assumed to be part of the project management activities, and is handled
in similar ways to configuration management. A formalized defect handling process highlights important activities and
associated rules, the parties involved, and their responsibilities. It is typically defined by the different states associated with
individual defect status and the transitions among these states due to status changes. Such status changes follow certain rules
defined by project management. For example, a newly reported defect has the status "new", and may go through various
status changes, such as "working" and "re-verify", until it is closed. Different defect handling processes may include different
collections of defect status and other possible attributes, as the sketch below illustrates.
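A minimal sketch of such a defect-status state machine; the particular states and transition rules below are illustrative, since each project defines its own:

```python
# Allowed transitions for one illustrative defect handling process.
ALLOWED = {
    "new":       {"working"},
    "working":   {"re-verify"},
    "re-verify": {"closed", "working"},   # failed re-verification reopens work
    "closed":    set(),
}

class Defect:
    def __init__(self, defect_id: str):
        self.defect_id = defect_id
        self.status = "new"
        self.history = ["new"]            # defect tracking: record every change

    def transition(self, new_status: str) -> None:
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

d = Defect("DEF-42")
d.transition("working")
d.transition("re-verify")
d.transition("closed")
print(d.history)   # ['new', 'working', 're-verify', 'closed']
```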
Q4) Explain verification and validation. How do they relate to defects?
Verification
Software verification provides objective evidence that the design outputs of a particular phase of the software development
lifecycle meet all of the specified requirements for that phase. Software verification looks for consistency, completeness, and
correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent
conclusion that software is validated. In other words, verification ensures that you built it right.
Validation
Software validation is confirmation by examination and provision of objective evidence that software specifications conform to
user needs and intended uses, and that the particular requirements implemented through software can be consistently
fulfilled. Since software is usually part of a larger hardware system, software validation typically includes evidence that all
software requirements have been implemented correctly and completely and are traceable to system requirements.
In other words, validation ensures that you built the right thing.

Verification and validation should establish confidence that the software is fit for purpose.
Generally, this does not mean the software must be completely free of defects.
Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is
needed. In practice, users accept this: software need not be defect-free, only dependable enough for its purpose.
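As an illustrative contrast (the function and its spec are invented for this sketch): verification checks the code against its specification, while validation asks whether that specification serves the user's actual need.

```python
def shipping_cost(weight_kg: float) -> float:
    """Spec: flat 5.00 up to 1 kg, then 2.00 per additional kg."""
    return 5.0 + max(0.0, weight_kg - 1.0) * 2.0

# Verification ("did we build it right?"): check outputs against the spec.
assert shipping_cost(0.5) == 5.0
assert shipping_cost(3.0) == 9.0

# Validation ("did we build the right thing?"): confirm with the customer,
# e.g., in an acceptance test, that a flat-plus-per-kg tariff is what the
# business actually intended -- no amount of verification can answer that.
print("spec verified; validation requires the customer's judgment")
```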

Q5) Explain the verification and validation activities associated with the V-model?
The V-model is a variation of the waterfall process model in which the different development phases are presented in a
V-shaped graph, relating specific verification or validation activities to their corresponding requirements or specifications. For
example, customer requirements are validated by operational use, while product specification, high-level design, and low-level
design are verified by system test, integration test, and component test, respectively.
In addition, system test also validates the product by focusing on how the overall system operates under an environment that
resembles that of target customers. In a sense, the users' operational environment is captured as part of the product
specification or as part of the testing model. At the bottom, coding and unit testing are typically grouped into a single phase,
where the code itself specifies the expected behavior and needs to be verified through unit testing. Sometimes, various other
QA activities, such as inspections, reviews, walkthroughs, analyses, formal verification, etc., are also associated with the left
arm of the V-model and illustrated by additional dotted lines pointing to the specific phases.
Similar to the mapping of QA activities to other process models above, validation and verification activities can be mapped into
non-sequential processes such as incremental, iterative, spiral, and extreme programming processes. Typically, there is some
level of user involvement in each part or iteration. Therefore, validation plays a more important role in these processes than in
the waterfall process or the V-model.



Q2) Static and dynamic techniques?

QA techniques can be categorized into two types: static and dynamic. The selection, objectives, and organization of a
particular technique depend on the requirements and nature of the project, and selection is based on very different criteria
depending on the methodology being used. Unlike dynamic techniques, static techniques do not involve the execution of code.
Static techniques involve the examination of code and documentation by individuals or groups, and this examination may be
assisted by software tools; examples include inspection of the requirements specification and technical reviews of the code.
Testing and simulation are dynamic techniques.
Sometimes static techniques are used to support dynamic techniques, and vice versa. The waterfall model uses both static and
dynamic techniques, whereas agile methods mostly use dynamic techniques.
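A small sketch of the distinction (the code under analysis is invented): the static check examines the source without running it, while the dynamic check executes it and observes a failure.

```python
import ast

source = "def divide(a, b):\n    return a / b\n"

# Static technique: inspect the parse tree without executing anything.
tree = ast.parse(source)
divisions = [node for node in ast.walk(tree) if isinstance(node, ast.Div)]
print(f"static: {len(divisions)} division(s) found; review divisors for zero")

# Dynamic technique: execute the code and observe its runtime behavior.
namespace = {}
exec(source, namespace)
try:
    namespace["divide"](1, 0)
except ZeroDivisionError:
    print("dynamic: failure observed when executing divide(1, 0)")
```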

Q5) Explain the SQA and verification & validation aspects of agile methods?
Any iteration in agile development must produce artifacts, typically code, that pass a verification and validation phase;
therefore, every iteration must include such a phase.
For requirements and design, verification and validation is the result of peer reviews with team members and with the
customer. (There is a customer even if the product is developed for internal use, for example a department.)
For coding, verification and validation is done through code reviews, unit testing, and functional testing.
An agile methodology does NOT mean that there will be "frequent changes in requirements."
What agile specifically does is begin with a very high-level view of the requirements at commencement, and then, through the
iterative process, hone those requirements with your stakeholders as they begin to interact with the product itself.
So, it's important to know that there are not frequent changes in requirements; rather, there is a constant process of
increasing the detail of the requirements.
This is very important when it comes to "Verification and Validation" - because if the requirements were changing then yes, it
would be a nightmare to follow those changes with any hope of verifying and validating the deliverables. However, what
actually happens is that with the increase in detail in the requirements comes a commensurate increase in detail in the
verification and validation processes.
Validation will mostly come in the form of tests, primarily functional and integration tests for the business, and unit tests for
the development teams themselves. It also comes from stakeholder walk-throughs and buy-in: with each iteration, the
stakeholder views what has been done and either rejects it because it fails validation, or accepts and/or refines it because it
has passed validation for that stage of the iteration.
Something that hasn't been mentioned in regard to verification is test coverage tools and reports. These are an important
part of verification, and the different levels of detail available in the reports can provide different levels of verification: an
internal level of detail for the development team, and an external level of detail for the stakeholders themselves.
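A minimal sketch of programmatic coverage measurement with the coverage.py package (pip install coverage); the measured function is an invented stand-in, and teams more commonly run the equivalent CLI (`coverage run -m pytest`, then `coverage report`):

```python
import coverage

def classify(n: int) -> str:
    if n < 0:
        return "negative"
    return "non-negative"

cov = coverage.Coverage()
cov.start()
classify(5)            # only the non-negative branch is exercised
cov.stop()
cov.save()
total = cov.report()   # prints a per-file summary table to stdout
print(f"total coverage: {total:.0f}% -- the negative branch stays unverified")
```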
