
CAUSAL ANALYSIS AND RESOLUTION

Cyrus Fakharzadeh
USC Computer Science

CS577b 3/20/00

Outline
- Definitions
- Defect analysis review
- Sample causal analysis exercises
- Defect prevention KPA

Definitions
- Causal analysis: the analysis of defects to determine their underlying root cause.
- Causal analysis meeting: a meeting, conducted after completing a specific task, to analyze defects uncovered during the performance of that task.

Defect Analysis
- Defect: any flaw in the specification, design, or implementation of a product.
- Facilitate process improvement through defect analysis:
  - defect categorization, to identify where work must be done and to predict future defects
  - causal analysis, to prevent problems from recurring

Fault Distributions
Phase             Fault Origin   Fault Detection   Cost per Fault
Requirements          50%              3%              ~1 KDM
Design                40%              5%              ~1 KDM
Coding                10%              7%              ~1 KDM
Functional Test        -              25%              ~6 KDM
System Test            -              50%             ~12 KDM
Field Use              -              10%             ~20 KDM

KDM = kilo Deutsche Marks (thousands of German marks)
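As a rough worked check of the leverage this table implies, the detection percentages and per-fault costs combine into an expected cost per fault. A minimal sketch (the calculation is an illustration, not part of the original slide):

```python
# Expected cost per fault: weight each detection phase's per-fault cost (KDM)
# by the fraction of faults found there (values from the table above).
detection = {"requirements": 0.03, "design": 0.05, "coding": 0.07,
             "functional_test": 0.25, "system_test": 0.50, "field_use": 0.10}
cost_kdm = {"requirements": 1, "design": 1, "coding": 1,
            "functional_test": 6, "system_test": 12, "field_use": 20}

expected = sum(detection[p] * cost_kdm[p] for p in detection)
print(f"Expected cost per fault: ~{expected:.1f} KDM")  # ~9.7 KDM
```

Most of the expected cost comes from the 60% of faults that survive into system test and field use, which is the argument for finding faults earlier.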

Fault Distributions (cont.)


[Table: fault introduction distribution, fault detection distribution, and relative fault cost per phase (Requirements, Design, Coding, Functional Test, System Test, Field Use) at process maturity levels 1-4.]

Sample Defect Data


Defect data should be collected by:
- detection activity
- when detected
- introduction phase
- type
- mode

A defect introduction and removal matrix can be generated and used for defect prevention, to help answer: what are the high-leverage opportunities for defect prevention and cost containment?
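One way to capture a defect record with the collection fields above. A minimal sketch; the field names and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One collected defect, keyed by the fields listed above."""
    detection_activity: str   # e.g. "design inspection", "system test"
    detected_on: str          # when the defect was detected
    introduction_phase: str   # e.g. "requirements", "design", "code"
    defect_type: str          # e.g. "logic", "interface", "computation"
    mode: str                 # e.g. "missing", "wrong", "extra"
```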

Defect Flow Tracking


A defect introduction and removal matrix can be generated and used as a basis for defect analysis and prevention. Percentage of defects by phase injected (columns) and phase detected (rows):

Phase detected        Requirements  Prelim. design  Detailed design  Code/unit test  Total
Requirements               37%            -               -                -            8%
Preliminary design         22%           38%             -                -           16%
Detailed design            15%           18%             34%              -           17%
Code/unit test              7%           24%             28%             43%          25%
Integration testing         7%            9%             14%             29%          14%
System testing             11%           12%             24%             29%          19%
Total                     100%          100%            100%            100%         100%
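A matrix like this can be generated directly from per-defect records. A minimal sketch using pandas (the sample records and field names are illustrative, not data from the slide):

```python
import pandas as pd

# One row per defect: the phase it was injected in and the phase it was
# detected in (sample records; field names are illustrative).
defects = pd.DataFrame({
    "injected": ["requirements", "requirements", "design", "code", "code"],
    "detected": ["design", "system test", "code/unit test",
                 "code/unit test", "system test"],
})

# Percentage of each injection phase's defects caught in each detection phase;
# columns then sum to 100%, as in the matrix above.
matrix = pd.crosstab(defects["detected"], defects["injected"],
                     normalize="columns") * 100
print(matrix.round(1))
```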

Causal Analysis
- Data on defects is collected and categorized
- Trace each defect to its underlying cause
- Isolate the vital few causes
  - Pareto principle: 80% of defects are traceable to 20% of all possible causes (see the sketch below)
- Move to correct the problems that caused the defects
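Once defects are tallied by cause, the vital few can be isolated mechanically. A minimal sketch assuming an 80% cutoff (the cause names and counts are invented for illustration):

```python
# Tally of defects per root cause (invented for illustration).
causes = {"ambiguous requirement": 42, "missed edge case": 25,
          "interface mismatch": 18, "typo": 8, "tool error": 7}

# Walk causes in descending frequency until ~80% of defects are covered.
total = sum(causes.values())
covered, vital_few = 0, []
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    vital_few.append(cause)
    covered += count
    if covered / total >= 0.80:
        break

print(vital_few)  # ['ambiguous requirement', 'missed edge case', 'interface mismatch']
```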

Causal Analysis Form Fields


Post-inspection example:
- Moderator, date
- Subject, subject type
- Item number
- Assigned to
- Defect category (interface, requirements, design, code, other)
- Item description
- Probable cause
- Suggestions for eliminating probable cause
- Action taken
- Number of hours to take corrective action

Causal Analysis Example

[Pareto chart: defect frequency (0-1000) by defect category: Correctness, Clarity, Completeness, Consistency, Compliance, Maintainability, Functionality, Interface, Performance, Testability, Reusability, Traceability.]
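A Pareto chart like the one above can be produced from categorized defect counts. A minimal sketch using matplotlib (the counts are invented for illustration, not taken from the original chart):

```python
import matplotlib.pyplot as plt

# Invented defect counts per category (not the data behind the original chart).
counts = {"Correctness": 870, "Clarity": 620, "Completeness": 410,
          "Consistency": 260, "Interface": 140, "Other": 90}

# Pareto chart: categories in descending order of frequency.
categories = sorted(counts, key=counts.get, reverse=True)
plt.bar(categories, [counts[c] for c in categories])
plt.xlabel("Defect Category")
plt.ylabel("Frequency")
plt.title("Causal Analysis Example")
plt.tight_layout()
plt.show()
```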

Typical Analysis Steps


1. Sort data by defect origin. Count the number in each group. Arrange the totals in descending order of total hours.
2. Calculate the average fix times for each of the totals in the first step (see the sketch after this list).
3. For the top two or three totals in step 1, count the defects sorted by defect type and multiply by the appropriate average fix times. Limit the number of types to the largest totals plus a single total for all others.
4. Add up the defects in each module. Get totals for the five most frequently changed modules plus a single total for all others.
5. Review the defect reports for the defects included in the largest totals from steps 3 and 4. Summarize the defect-report suggestions for how the defects might have been prevented or found earlier.
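Steps 1 and 2 reduce to a group-by over the defect log. A minimal sketch, assuming each defect record carries an origin and a fix time in hours (the sample values are invented):

```python
from collections import defaultdict

# (origin, fix_hours) pairs from a defect log (sample values invented).
defects = [("code", 4.0), ("design", 9.5), ("code", 2.5),
           ("requirements", 14.0), ("design", 6.0), ("code", 3.5)]

hours_by_origin = defaultdict(list)
for origin, hours in defects:
    hours_by_origin[origin].append(hours)

# Step 1: totals in descending order of total hours; step 2: average fix times.
for origin in sorted(hours_by_origin, key=lambda o: sum(hours_by_origin[o]),
                     reverse=True):
    hrs = hours_by_origin[origin]
    print(f"{origin:>12}: {sum(hrs):5.1f} h total, {sum(hrs) / len(hrs):4.1f} h avg")
```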

Causal Analysis Exercise #1


The following defect data is from a completed project; another project with the same generic component types is being planned, with no reuse. Use causal analysis to identify the highest risks and make suggestions for the new project.

Component  Type                Rework hours
C          hardware interface       25
B          communication             3
B          communication             6
B          hardware interface       15
B          hardware interface       18
A          communication             4
A          logic                    12
B          logic                     5
A          logic                    12
A          logic                    14
B          user interface           19
C          logic                    20
A          user interface           23
C          user interface           42

Exercise #1 Answer
Determine the defect types and components that contribute the most rework:

Type                Rework hours
user interface           84
logic                    63
hardware interface       58
communication            13

=> Concentrate on the user interface (resolve risk early, allocate resources, user prototyping, inspections, etc.)

Component  Rework hours
C               87
B               66
A               65

=> Concentrate on component C
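A minimal sketch that reproduces these totals from the exercise table, using collections.Counter:

```python
from collections import Counter

# (component, type, rework_hours) rows from the exercise table.
rows = [("C", "hardware interface", 25), ("B", "communication", 3),
        ("B", "communication", 6), ("B", "hardware interface", 15),
        ("B", "hardware interface", 18), ("A", "communication", 4),
        ("A", "logic", 12), ("B", "logic", 5), ("A", "logic", 12),
        ("A", "logic", 14), ("B", "user interface", 19), ("C", "logic", 20),
        ("A", "user interface", 23), ("C", "user interface", 42)]

rework_by_type, rework_by_component = Counter(), Counter()
for component, defect_type, hours in rows:
    rework_by_type[defect_type] += hours
    rework_by_component[component] += hours

print(rework_by_type.most_common())       # user interface 84, logic 63, ...
print(rework_by_component.most_common())  # C 87, B 66, A 65
```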

Causal Analysis Exercise #2


Analyze the following defect data. Produce three Pareto column charts (or tables in descending order) showing: 1) the distribution of defect origins; 2) an effort-weighted distribution of defect origins, showing the normalized hours to fix defects; 3) an effort-weighted distribution of defect types for the top two defect origins from chart #1. Make summary suggestions for the development process.

Weighting factors (normalized cost to fix defect types if not found until testing):
Specification  14   (e.g., it takes 14 times as much effort to fix a specification defect in the test phase as in the specification phase)
Design         6.2
Code           2.5
Documentation  1
Other          1
Operator       1

Defect #  Origin               Type
1         Documentation        Standards
2         Code                 Logic
3         Documentation        Process Comm.
4         Design               S/W Interface
5         Code                 Computation
6         Code                 Logic
7         Specification        User Interface
8         Design               Process Comm.
9         Specification        Functionality
10        Code                 Logic
11        Design               User Interface
12        Code                 Logic
13        Design               H/W Interface
14        Other                Process Comm.
15        Code                 Computation
16        Environment Support  Standards
17        Other                Process Comm.
18        Specification        Functionality
19        Code                 Computation
20        Code                 Logic

Exercise #2 Answers
Chart 1 -- distribution of defect origins:

Origin               # of defects
Code                       8
Design                     4
Specification              3
Documentation              2
Other                      2
Environment Support        1

Chart 2 -- effort-weighted distribution of defect origins:

Origin               # of defects  Weight  Total weight
Specification              3         14        42
Design                     4          6.2      24.8
Code                       8          2.5      20
Documentation              2          1         2
Other                      2          1         2
Environment Support        1          1         1

Chart 3 -- effort-weighted defect types for the top two origins (Code, Design):

Code defect type     # of defects  Weight  Total weight
Logic                      5          2.5      12.5
Computation                3          2.5       7.5

Design defect type   # of defects  Weight  Total weight
S/W Interface              1          6.2       6.2
Process Comm.              1          6.2       6.2
User Interface             1          6.2       6.2
H/W Interface              1          6.2       6.2
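The answer tables can be reproduced mechanically. A minimal sketch for chart 2 (the weight for Environment Support is taken as 1, matching the answer table; it is not among the stated weighting factors):

```python
from collections import Counter

# Origin of each of the 20 defects, in defect-number order.
origins = ["Documentation", "Code", "Documentation", "Design", "Code",
           "Code", "Specification", "Design", "Specification", "Code",
           "Design", "Code", "Design", "Other", "Code", "Environment Support",
           "Other", "Specification", "Code", "Code"]

# Normalized cost-to-fix weights; Environment Support is assumed to be 1.
weight = {"Specification": 14, "Design": 6.2, "Code": 2.5,
          "Documentation": 1, "Other": 1, "Environment Support": 1}

counts = Counter(origins)
weighted = {o: n * weight[o] for o, n in counts.items()}
for origin, w in sorted(weighted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{origin:>20}: {counts[origin]} defects, weighted total {w:g}")
```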

Level 4 Relationship to Level 5 KPAs


Data analysis from Level 4 activities enables focusing the performance of the Level 5 KPAs: Defect Prevention (DP), Technology Change Management (TCM), and Process Change Management (PCM).

Defect Prevention
The purpose of Defect Prevention is to identify the cause of defects and prevent them from recurring.

Defect Prevention involves analyzing defects that were encountered in the past and taking specific actions to prevent the occurrence of those types of defects in the future. The defects may have been identified on other projects as well as in earlier stages or tasks of the current project. Defect prevention activities are also one mechanism for spreading lessons learned between projects.

Trends are analyzed to track the types of defects that have been encountered and to identify defects that are likely to recur. Based on an understanding of the project's defined software process and how it is implemented (as described in the Integrated Software Management and Software Product Engineering key process areas), the root causes of the defects and the implications of the defects for future activities are determined. Both the project and the organization take specific actions to prevent recurrence of the defects.

Defect Prevention (DP) ETVX Diagram


ENTRY
1. Policy for organization to perform DP activities (C1)
2. Policy for projects to perform DP activities (C2)
3. Organization-level team exists to coordinate DP activities (Ab1)
4. Project-level team exists to coordinate DP activities (Ab2)
5. Adequate resources/funding (Ab3)
6. Training for members of the S/W engineering group and related groups (Ab4)
7. Procedures for Ac1, Ac3, Ac6, and Ac7

TASK
1. Develop the project's DP plan (Ac1)
2. Team holds kick-off meeting to prepare for DP activities (Ac2)
3. Conduct causal analysis meetings (Ac3)
4. Conduct coordination meetings to review the implementation of action proposals from the causal analysis meetings (Ac4)
5. Document and track DP data (Ac5)
6. Revise the organization's standard process as a result of DP actions (Ac6)
7. Revise the project's defined process as a result of DP actions (Ac7)
8. Provide feedback to developers on the status and results of DP actions (Ac8)

VERIFICATION
1. Reviews with senior management (V1)
2. Reviews with project manager (V2)
3. Reviews/audits by SQA (V3)
4. Measurement of status of DP activities (M1)

EXIT
1. DP activities are planned (G1)
2. Common causes of defects are sought out and identified (G2)
3. Common causes of defects are prioritized and systematically eliminated (G3)

Defect Prevention Policies


Organization defect prevention policy should state:
- Long-term plans and commitments are established for funding, staffing, and other resources for defect prevention.
- The resources needed are allocated for the defect prevention activities.
- Defect prevention activities are implemented across the organization to improve the software processes and products.
- The results of the defect prevention activities are reviewed to ensure the effectiveness of those activities.
- Management and technical actions identified as a result of the defect prevention activities are addressed.

Project defect prevention policy should state:
- Defect prevention activities are included in each project's software development plan.
- The resources needed are allocated for the defect prevention activities.
- Project management and technical actions identified as a result of the defect prevention activities are addressed.

DP Tools and Training


Tools:
- statistical analysis tools
- database systems
- other

Examples of DP training:
- defect prevention methods
- conduct of task kick-off meetings
- conduct of causal analysis meetings
- statistical methods (e.g., cause/effect diagrams and Pareto analysis)

DP Project Activities
The project plan for defect prevention:
1. Identifies the defect prevention activities (e.g., task kick-off and causal analysis meetings) that will be held.
2. Specifies the schedule of defect prevention activities.
3. Covers the assigned responsibilities and resources required, including staff and tools.
4. Undergoes peer review.

Kick-off meetings are held to familiarize the members of the team with the details of the implementation of the process, as well as any recent changes to the process. Causal analysis meetings are held.

Causal Analysis Procedures


The causal analysis meeting procedure typically specifies:
1. Each team that performs a software task conducts causal analysis meetings.
   - A causal analysis meeting is conducted shortly after the task is completed.
   - Meetings are conducted during the software task if and when the number of defects uncovered warrants the additional meetings.
   - Periodic causal analysis meetings are conducted after software products are released to the customer, as appropriate.
   - For software tasks of long duration, periodic in-process defect prevention meetings are conducted, as appropriate. An example of a long-duration task is a level-of-effort customer support task.
2. The meetings are led by a person trained in conducting causal analysis meetings.
3. Defects are identified and analyzed to determine their root causes. An example of a method to determine root causes is cause/effect diagrams.

Causal Analysis Procedures (cont.)


4. The defects are assigned to categories of root causes. Examples of defect root cause categories include: inadequate training, breakdown of communications, not accounting for all details of the problem, and making mistakes in manual procedures (e.g., typing).
5. Proposed actions to prevent the future occurrence of identified defects and similar defects are developed and documented. Examples of proposed actions include modifications to: the process, training, tools, methods, communications, and software work products.
6. Common causes of defects are identified and documented. Examples of common causes include: frequent errors made in invoking a certain system function, and frequent errors made in a related group of software units.
7. The results of the meeting are recorded for use by the organization and other projects.

DP Team Activities
Each of the teams assigned to coordinate defect prevention activities meets on a periodic basis to review and coordinate implementation of action proposals from the causal analysis meetings. The teams involved may be at the organization or project level. Team activities include:
1. Review the output from the causal analysis meetings and select action proposals that will be addressed.
2. Review action proposals that have been assigned to them by other teams coordinating defect prevention activities in the organization and select action proposals that will be addressed.
3. Review actions taken by the other teams in the organization to assess whether these actions can be applied to their activities and processes.
4. Perform a preliminary analysis of the action proposals and set their priorities. Priority is usually nonrigorous and is based on an understanding of: the causes of defects, the implications of not addressing the defects, the cost to implement process improvements to prevent the defects, and the expected impact on software quality. An example of a technique used to set priorities for the action proposals is Pareto analysis.

DP Team Activities (cont.)


5. Reassign action proposals to teams at another level in the organization, as appropriate.
6. Document their rationale for decisions and provide the decision and the rationale to the submitters of the action proposals.
7. Assign responsibility for implementing the action items resulting from the action proposals. Implementation of the action items includes making immediate changes to the activities that are within the purview of the team and arranging for other changes. Members of the team usually implement the action items, but, in some cases, the team can arrange for someone else to implement an action item.
8. Review results of defect prevention experiments and take actions to incorporate the results of successful experiments into the rest of the project or organization, as appropriate. Examples of defect prevention experiments include: using a temporarily modified process, and using a new tool.
9. Track the status of the action proposals and action items.

DP Team Activities (cont.)


10. Document software process improvement proposals for the organization's standard software process and the projects' defined software processes, as appropriate. The submitters of the action proposal are designated as the submitters of the software process improvement proposals.
11. Review and verify completed action items before they are closed.
12. Ensure that significant efforts and successes in preventing defects are recognized.

DP Documentation and Tracking Activities


Activity 5 -- Defect prevention data are documented and tracked across the teams coordinating defect prevention activities.
1. Action proposals identified in causal analysis meetings are documented. Examples of data in the description of an action proposal include: originator of the action proposal, description of the defect, description of the defect cause, defect cause category, stage when the defect was injected, stage when the defect was identified, description of the action proposal, and action proposal category.
2. Action items resulting from action proposals are documented. Examples of data in the description of an action item include: the person responsible for implementing it, a description of the areas affected by it, the individuals who are to be kept informed of its status, the next date its status will be reviewed, the rationale for key decisions, a description of implementation actions, the time and cost for identifying the defect and correcting it, and the estimated cost of not fixing the defect.
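These data items map naturally onto a simple record type. An illustrative sketch; the field names are assumptions drawn from the list above, not a defined schema:

```python
from dataclasses import dataclass

@dataclass
class ActionProposal:
    """Documented action proposal; fields follow the data items listed above."""
    originator: str
    defect_description: str
    cause_description: str
    cause_category: str       # e.g. "breakdown of communications"
    stage_injected: str       # e.g. "design"
    stage_identified: str     # e.g. "system test"
    proposal_description: str
    proposal_category: str    # e.g. "process change"
```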

DP Feedback
Feedback is needed on the status and results of the organization's and project's defect prevention activities on a periodic basis. The feedback provides:
1. A summary of the major defect categories.
2. The frequency distribution of defects in the major defect categories.
3. Significant innovations and actions taken to address the major defect categories.
4. A summary status of the action proposals and action items.

Examples of means to provide this feedback include: electronic bulletin boards, newsletters, and information flow meetings.

DP Measurements
Examples:
- the cumulative costs of defect prevention activities (e.g., holding causal analysis meetings and implementing action items)
- the time and cost for identifying the defects and correcting them, compared to the estimated cost of not correcting the defects
- profiles measuring the number of action items proposed, open, and completed
- the number of defects injected in each stage, cumulatively, and over releases of similar products
- the total number of defects

DP Management Reviews
DP management reviews cover:
1. A summary of the major defect categories and the frequency distribution of defects in these categories.
2. A summary of the major action categories and the frequency distribution of actions in these categories.
3. Significant actions taken to address the major defect categories.
4. A summary status of the proposed, open, and completed action items.
5. A summary of the effectiveness of and savings attributable to the defect prevention activities.
6. The actual cost of completed defect prevention activities and the projected cost of planned defect prevention activities.

References
Defect Prevention (DP):

- Inderpal Bhandari, Michael Halliday, et al., "A Case Study of Software Process Improvement During Development," IEEE Transactions on Software Engineering, Vol. 19, No. 12, December 1993, pp. 1157-1170.
- R. Chillarege, I. Bhandari, et al., "Orthogonal Defect Classification -- A Concept for In-Process Measurements," IEEE Transactions on Software Engineering, Vol. 18, No. 11, November 1992, pp. 943-956.
- Julia L. Gale, Jesus R. Tirso, and C. Art Burchfield, "Implementing the Defect Prevention Process in the MVS Interactive Programming Organization," IBM Systems Journal, Vol. 29, No. 1, 1990, pp. 33-43.
- C.L. Jones, "A Process-Integrated Approach to Defect Prevention," IBM Systems Journal, Vol. 24, No. 2, 1985, pp. 150-167.
- Juichirou Kajihara, Goro Amamiya, and Tetsuo Saya, "Learning from Bugs," IEEE Software, Vol. 10, No. 5, September 1993, pp. 46-54.
- R.G. Mays, C.L. Jones, G.J. Holloway, and D.P. Studinski, "Experiences with Defect Prevention," IBM Systems Journal, Vol. 29, No. 1, 1990, pp. 4-32.
- Norman Bridge and Corinne Miller, "Orthogonal Defect Classification Using Defect Data to Improve Software Development," Proceedings of the 7th International Conference on Software Quality, Montgomery, Alabama, 6-8 October 1997, pp. 197-213.
