
Continuous Improvement/ Problem Solving Handbook

Closing the Gaps to Excellence

Rob De La Espriella DLE Total Quality Management, LLC

Revision 0

GENERAL DISTRIBUTION: Copyright 2006 by DLE Total Quality Management, LLC. Not for sale or for commercial use. All other rights reserved. NOTICE: Neither DLE Total Quality Management, LLC, nor any person acting on its behalf, (a) makes any warranty or representation, expressed or implied, with respect to the accuracy, completeness, or usefulness of the information contained in this document, or that the use of any information, apparatus, method, or process disclosed in this document may not infringe on privately owned rights, or (b) assumes any liability with respect to the use of, or for damages resulting from the use of, any information, apparatus, method, or process disclosed in this document.

Continuous Improvement / Problem Solving Handbook

TABLE OF CONTENTS

1.0 PURPOSE ............................................................ 3
2.0 SCOPE .............................................................. 3
3.0 DEFINITIONS ........................................................ 3

GUIDELINES FOR IMPLEMENTING AN EFFECTIVE PERFORMANCE IMPROVEMENT PROGRAM ........ 4

4.0 INTRODUCTION TO CONTINUOUS IMPROVEMENT ............................. 4
5.0 THE CONTINUOUS IMPROVEMENT CYCLE ................................... 4
    5.1 Monitor Performance ............................................ 5
    5.2 Identify And Define The Problems ............................... 6
    5.3 Analyze The Problems ........................................... 6
    5.4 Determine the Causes ........................................... 6
    5.5 Develop and Implement Corrective Actions ....................... 6
    5.6 Adjust Programs and Processes .................................. 6
6.0 MONITORING PERFORMANCE ............................................. 7
7.0 DOCUMENTING IDENTIFIED PROBLEMS .................................... 8
8.0 PRE-SCREENING OF CONDITION REPORTS ................................. 10
9.0 ENHANCED CRITERIA FOR THE STATION CR SCREENING TEAM ................ 11
10.0 QUALITY REVIEWS OF ASSIGNED CRS ................................... 11
11.0 DAILY CAP LOOK-AHEAD REPORTS ...................................... 12
12.0 ANALYZING DEPARTMENT PERFORMANCE DATA ............................. 12
13.0 PREPARATIONS FOR THE PERIODIC DEPARTMENT PERFORMANCE IMPROVEMENT MEETINGS .. 15
14.0 DEPARTMENT PERFORMANCE IMPROVEMENT MEETINGS ....................... 16
15.0 COMMUNICATING RESULTS ............................................. 17
16.0 TRACKING CORRECTIVE ACTIONS ....................................... 18
17.0 PROBLEM SOLVING ................................................... 19
18.0 SELECTING CORRECTIVE ACTIONS ...................................... 20
19.0 MONTHLY DCAC/HUDC ALIGNMENT MEETINGS .............................. 21

ATTACHMENT 1 ........................................................... 23
PROBLEM SOLVING TOOLS .................................................. 23
I. DATA ANALYSIS TOOLS ................................................. 23
    A. Tables and Spreadsheets ......................................... 23
    B. Histograms ...................................................... 24
    C. Pareto Charts ................................................... 26
    D. Change Analysis ................................................. 29
    E. Flowchart / Process Maps ........................................ 31
II. TOOLS FOR IDENTIFYING CAUSAL FACTORS ............................... 34
    A. Cause and Effect Analysis (Fishbone Diagrams) ................... 34
    B. Hazards / Barriers / Targets (HBT) Analysis ..................... 36
    C. Fault Tree Analysis ............................................. 38
III. EVALUATING AND IMPLEMENTING CORRECTIVE ACTIONS .................... 40
    A. Countermeasures Matrix .......................................... 40
Copyright 2006 by DLE Total Quality Management, LLC Page 2 of 41


1.0 PURPOSE

1.1 The purpose of a Performance Improvement Program is to support the pursuit of, and commitment to, excellence and continuous improvement, a fundamental goal of many business units. In a continuous improvement paradigm, personnel practice self-evaluation and problem solving as part of everyday business.

1.2 This Handbook provides enhanced guidelines for implementing a Performance Improvement Program, in many cases supplementing existing guidelines for Corrective Action Programs. The main benefits of the enhanced guidance provided in this Handbook are: (1) more efficient use of department resources, by reviewing low-level CAPs/CRs collectively rather than individually; (2) prioritized corrective action efforts, by focusing on resolving the most significant problems first; (3) identification and elimination of underlying causal factors, by addressing problems at the lower levels before they can result in bigger problems; and (4) a formal methodology (rather than a theory) for establishing a culture that values and practices continuous improvement.

1.3 To encourage the use of formal cause evaluation tools, some of the more effective and easy-to-use problem solving tools and techniques are also provided. In general, the Handbook will help the station broaden its Corrective Action Program into a Performance Improvement Program with continuous improvement as its foundation, and it will help personnel become better problem solvers.

2.0 SCOPE

2.1 This Handbook is applicable to all personnel who implement the Corrective Action / Performance Improvement Program at your station.

2.2 This Handbook builds on the principles established by your existing Corrective Action Program implementing procedures, which establish the framework for identifying and resolving current problems and for anticipating and preventing future problems through trending and analysis. The scope of those procedures usually includes the steps for completing periodic (monthly or quarterly) self-evaluation reviews and the associated meetings and reports.

2.3 The guidance in this Handbook is not intended to conflict with site-specific procedures for Condition Reporting and Station Trending.

3.0 DEFINITIONS

3.1 None


Guidelines for Implementing an Effective Performance Improvement Program

4.0 Introduction to Continuous Improvement

Continuous improvement is sometimes mistaken for a philosophy rather than a methodical approach to improving performance. Continuous improvement encompasses all of the programs and processes used to improve station performance, and the effective implementation of those processes is the key to creating a continuous improvement environment. Central to an effective continuous improvement method is the identification and resolution of adverse conditions. That is where the great gains are to be made: by applying fundamental problem solving capabilities on a continual basis at the lowest levels of the organization. Fostering this self-evaluation and problem solving culture across a broad cross section of the facility, as part of everyday business, is another essential element of continuous improvement.

4.1 Additional keys to an effective continuous improvement methodology include:

4.1.1 When monitoring station performance, Key Performance Indicators are not used as bean counts but are used to measure performance in meaningful ways and to trigger actions that drive performance improvement. Indicators should monitor in-process parameters in addition to outcome indicators, so that adjustments can be made prior to a negative outcome; for example, monitoring behaviors so that adverse trends can be corrected before those behaviors result in consequential outcomes.

4.1.2 When evaluating problems, there must be an understanding that station problems have to be addressed below the symptomatic level, using trending and data analysis to determine the underlying causal factors and root causes. Solving problems at the symptomatic level is not the most effective use of limited resources. Use formal tools for data analysis and causal factor identification to address problems below the symptomatic level. Most problems can be evaluated and addressed with these basic evaluation tools: tables, Pareto diagrams, Hazards / Barriers / Targets Analysis, Cause and Effect Diagrams, Fault Tree Analysis, and a Countermeasures Matrix.

4.1.3 Management has to establish the proper balance between problem identification and problem resolution, so that the organization is not overwhelmed. Stations tend to emphasize documenting identified problems while applying insufficient emphasis to evaluating and resolving them. In some cases there are resource limitations to overcome, but in most cases what is missing is an efficient method for using department resources in an effective and prioritized manner. For example, trending the CAP data and showing the results on Pareto charts is an effective way to keep resources focused on the top issues, rather than on individual problem resolution.

4.1.4 When selecting corrective actions, especially for significant problems, use a graded approach to determine which solutions to implement. A tool such as a countermeasures matrix is valuable for sorting and prioritizing potential solutions, and it also documents for posterity which solutions were considered but not implemented, and the reasons why. An often-cited (if largely apocryphal) example of failing to consider alternate solutions is NASA's space pen, said to have cost millions to develop so that it could write under any conditions, while the Russians simply used pencils.
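The graded approach in 4.1.4 can be sketched in code. Below is a minimal, hypothetical countermeasures-matrix scoring routine in Python; the 1-to-5 effectiveness and cost scales, the effectiveness-to-cost ratio, and the sample actions are illustrative assumptions, not values from this Handbook (Attachment 1 describes the matrix itself).

```python
# Minimal countermeasures-matrix sketch (illustrative only).
# Scoring scales and the ratio-based ranking are assumptions,
# not the Handbook's prescribed method.

def rank_countermeasures(candidates):
    """Rank candidate corrective actions by effectiveness-to-cost ratio.

    Each candidate is (name, effectiveness 1-5, cost 1-5, feasible).
    Infeasible options are retained with a zero score, so the record
    shows which solutions were considered but not implemented.
    """
    scored = []
    for name, effectiveness, cost, feasible in candidates:
        score = effectiveness / cost if feasible else 0.0
        scored.append({"action": name, "score": round(score, 2),
                       "feasible": feasible})
    return sorted(scored, key=lambda row: row["score"], reverse=True)

# Hypothetical candidate solutions for one identified cause.
candidates = [
    ("Revise procedure section 4.2", 4, 1, True),
    ("Install redundant level switch", 5, 4, True),
    ("Full system redesign", 5, 5, False),  # documented, not implemented
]

for row in rank_countermeasures(candidates):
    print(row["action"], row["score"], row["feasible"])
```

The cheap, effective procedure revision ranks first; the rejected redesign stays on the list for posterity, as 4.1.4 recommends.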

5.0 The Continuous Improvement Cycle

On average, more than 10,000 condition reports (CRs) may be written each year at a typical nuclear station. Of the total CRs written in a year, only a small percentage will require Root Cause Evaluations (RCEs) or Apparent Cause Evaluations (ACEs). The remaining CRs are addressed individually, or they are closed to actions taken and trended by the station trend programs. What is noteworthy about the thousands of conditions that do not require RCEs or ACEs is that, in many cases, their underlying causes are the same as those causing the bigger events.


With the premise that in many cases the underlying causes of minor and major events may be similar, Parts 1 and 2 of this Handbook recommend approaches to implementing the Performance Improvement Program in a manner that prioritizes and resolves problems at the lower levels, before they can result in bigger problems. The INPO Performance Improvement Model is a sound approach to achieving continuous improvement; although there are different variations of this model across the industry, most follow the same fundamental approach. Below is a simplified description of each segment of the continuous improvement cycle.

5.1 Monitor Performance

5.1.1 Monitoring performance is crucial to continuous improvement. Below are a few of the key processes and tools used to monitor performance, which are discussed in more detail later in the Handbook:

a. Operator Rounds
b. Human Performance (HU) Program
c. Self-Assessment Program
d. Corrective Action Program
e. Key Performance Indicators (KPIs)
f. Station Trend Program
g. Performance Improvement Program

h. System Health Reports

5.2 Identify And Define The Problems

5.2.1 Many of the underlying causal factors for lower level issues are the same ones that go on to cause the big events. By documenting as much information as possible for every problem, the station can more readily identify and address underlying causal factors before they cause events with considerably more consequences. And the cost of addressing these causes at the low-level-problem stage is a fraction of the cost of recovering from a consequential event.

5.2.2 Your Condition Reporting procedure describes the requirements for documenting adverse conditions on Condition Reports (CRs) and capturing available information.

5.3 Analyze The Problems

5.3.1 Finding and eliminating the underlying causes of a problem can be a real challenge, and without the use of investigative methods or tools, the likelihood of problems recurring increases. Even subject matter experts would have limited success without sound fundamental tools and techniques to analyze data and identify the underlying causal factors. For significant problems, the tools used to conduct RCEs and ACEs are usually well defined in station procedures.

5.3.2 The vast majority of lower level problems are addressed individually through the Corrective Action Program and collectively through the Performance Improvement Program. However, many station procedures provide only broad guidance on how the performance improvement process is used to evaluate the bulk of performance data to improve performance.

5.4 Determine the Causes

Levels of Causes: a problem or an event is usually the result of multiple causes that combine to cause the problem. By evaluating a set of problems (symptoms), you can find the underlying causes, as they are the ones that contributed to multiple symptoms. In classic problem solving, you can identify some of the station's biggest underlying causes and root causes by evaluating just a few events thoroughly.

5.4.1 Symptoms: these are not regarded as actual causes but as signs of existing problems.

5.4.2 Underlying Causes: causes that directly contributed to the problem, or causes that did not directly cause the problem but are linked through a cause-and-effect relationship to other causes that ultimately created the problem.

5.4.3 Root Causes: the lowest, most actionable causes of the problem. If the root causes were not present, the problem would never have occurred; if the root causes are removed, the problem or event will not recur.

5.5 Develop and Implement Corrective Actions

Corrective actions are the countermeasures you put in place to address the identified causes and prevent similar problems from recurring. There is a vast range of corrective actions that can be applied to any given set of causes, with varying degrees of effectiveness and cost. The challenge is to implement the most cost-effective solution, as every business has limited budgets and resources available for implementing corrective actions.

5.6 Adjust Programs and Processes

Once corrective actions have been determined to be effective in a particular application, consider where else they would be beneficial. Using an approach similar to the extent of condition for an identified problem, consider the extent of the solution for replication in other departments or stations.

5.6.1 Determine which fleet standards and procedures were affected by the corrective actions and whether they would benefit from standardizing the corrective actions.


5.6.2 Determine whether the corrective actions should be incorporated into associated training programs.

5.6.3 Discuss the corrective actions and their effectiveness at Department Performance Improvement Meetings so that they can be evaluated for site-wide implementation.

5.6.4 Discuss the corrective actions and their effectiveness at Station Performance Improvement Meetings so that they can be evaluated for fleet-wide implementation.

6.0 Monitoring Performance

Monitoring performance is crucial to continuous improvement. Below are a few of the key processes and tools that should be in place to monitor performance.

6.1 Operator Rounds: one of the most effective means of monitoring the plant is the Operators' daily routine of monitoring station parameters, as well as general conditions in the plant.

6.2 Human Performance (HU) Program: the HU Program requires us to conduct performance observations to collect data on human performance behaviors that can be trended.

6.3 Self-Assessment Program: the Self-Assessment Program provides guidance for conducting continuous and focused self-assessments.

6.4 Corrective Action Program: our ability to effectively monitor performance relies heavily on the Corrective Action Program. Identifying and documenting adverse conditions, problems, issues and near misses allows the station to conduct meaningful reviews of performance in all areas.

6.5 Key Performance Indicators (KPIs): KPIs rely largely on the CAP database as their source of information, and they are used extensively to monitor performance and identify adverse trends in most areas of the station.

6.6 Station Trend Program: trending is usually conducted at the station level by a Station Trend Coordinator and supported at the department level by Department Corrective Action Coordinators (DCACs).

6.7 Performance Improvement Program: the Performance Improvement Program is used to monitor performance on a periodic (quarterly or monthly) basis by reviewing available sources of performance-related information, including CRs and Human Performance observations, to identify and address adverse trends.

6.8 System Health Reports: the Equipment Reliability Program relies heavily on System Engineers and their monitoring of system performance. This has grown from Maintenance Rule monitoring into a more advanced process managed by the Plant Health Committee.
These processes should be periodically assessed to ensure they are being implemented in an effective manner.
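As an illustration of the in-process monitoring idea in 4.1.1 and the KPI discussion above, the sketch below flags an adverse trend when a monthly CR count rises for several consecutive months. The three-month window and the sample counts are assumptions for demonstration; real KPI trigger criteria come from station procedures.

```python
# In-process KPI trend check (window size and data are assumptions).
# The goal is to trigger action on a rising trend before it results
# in a consequential outcome, per 4.1.1.

def adverse_trend(monthly_counts, window=3):
    """Return True if the last `window` monthly counts are strictly rising."""
    if len(monthly_counts) < window:
        return False
    tail = monthly_counts[-window:]
    return all(a < b for a, b in zip(tail, tail[1:]))

# Hypothetical monthly counts of Human Performance CRs.
hu_crs_per_month = [8, 7, 9, 11, 14]
print(adverse_trend(hu_crs_per_month))  # rising for three months: flagged
```

A check like this is deliberately simple; stations would normally pair it with the KPI thresholds already defined in their indicator program.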


7.0 Documenting Identified Problems

Why is Documentation so Important? A key to being able to evaluate lower significance issues is doing an excellent job of documenting every problem entered in the corrective action database. Many of the underlying causal factors for lower level issues are the same ones that go on to cause the big events. Therefore, a major benefit of having great documentation for every problem is that you can identify and address underlying causal factors before they cause events with considerably more consequences. In many cases, only the subject matter experts know a key piece of information, and it will be lost or forgotten if it is not captured when the CR is written.

7.1 Your Condition Reporting process describes the requirements for documenting adverse conditions and assigning problem resolution to applicable departments. The following guidelines are provided to help originators, department CAP and HU coordinators, and supervisors better understand what should be considered when filling out a CR.

a. Detailed Description (required field): provide as much information as possible, including:
- The plant status/conditions at the time of the problem
- How the problem was discovered
- The personnel that were involved (by title/position), what shift or schedule they were on, and what crew they were assigned to
- What process or evolution was being conducted (e.g. maintenance, tagging, testing, operating the plant, planning, scheduling, boric acid corrosion controls, self-assessment, management observations, administrative tasks)
- What procedures were being used
- The location where the problem took place
- The equipment involved and how it performed
- What tools were in use and their contribution to the problem
- Observations from other involved personnel
- Any physical evidence that can be shown to others

b. Why did this occur: even if the originator does not know or is unsure, they can add value by documenting their observations in the following areas. Think of the physical or administrative barriers in place to prevent this problem from happening in the first place, and which ones might have failed:

1) Physical controls include:
a) Automatic shutdown devices
b) Safety and relief devices
c) Conservative design margins
d) Redundant equipment
e) Locked doors and valves
f) Fire barriers and seals

g) Alarms and annunciators



2) Administrative controls include:
a) Management & supervisory oversight
b) Operating and maintenance procedures
c) Programs, policies and practices
d) Training and qualification program
e) Worker practices
f) Signs and postings

- Whether the problem or issue specifically involved human performance, worker practices or equipment failure.
- Whether the processes in use at the time were new or recently revised.
- Whether the task or evolution was conducted successfully in the past, and whether anything was different when this problem occurred.
- Any environmental factors or conditions pertinent to the problem, such as working the second night of a back shift, the lighting in a particular location, or the physical conditions of the work environment.
- Whether there was any supervision on the job.
- Any known previous history in the problem area and past corrective actions (to prompt a review of the TeamTrack database).
- Any similar experiences at other plants (operating experience).

c. Immediate Actions Taken to address the issue: there may be more actions needed to address a bigger issue, so look for a detailed description of the actions taken.

d. Recommendations: if the problem has not been corrected, in many cases the person documenting the problem is the subject matter expert and may know best how to address it in the future to prevent it from recurring. Check with the originator and ask what they believe would address the problem if money were no object. In some cases the subject matter experts can help the station avoid costly solutions for simple problems.

e. References: the specific reference numbers for any documents associated with the problem, so that they can be researched at a later time. For example:
- Work Order numbers
- Procedures
- Drawing numbers
- Operator logs
- Alarm printouts
- Laboratory test reports
- Sample analysis results
- Recorded measurements
- Vendor data
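A department could automate part of the documentation check these guidelines imply. The sketch below verifies that the key CR narrative fields are filled in before pre-screening; the field names are hypothetical placeholders for whatever schema your CR database actually uses.

```python
# CR completeness check sketch. Field names are illustrative
# assumptions; map them to your station's CR database schema.

REQUIRED_FIELDS = [
    "detailed_description",
    "why_occurred",
    "immediate_actions",
    "recommendations",
    "references",
]

def missing_fields(cr):
    """Return the required CR fields that are empty or absent."""
    return [f for f in REQUIRED_FIELDS if not cr.get(f, "").strip()]

# A hypothetical, partially completed CR record.
cr = {
    "detailed_description": "RCP seal leak-off high during plant heatup...",
    "immediate_actions": "Shift Manager notified; flow being monitored.",
    "references": "WO 123456, OP-2.1",
}
print(missing_fields(cr))  # fields to return to the originator
```

Gaps found this way would go back to the originator's supervisor, consistent with the pre-screening flow described in Sections 8 and 9.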


8.0 Pre-Screening of Condition Reports

8.1 After a CR is generated, it is typically pre-screened by the department supervisor and the Department Corrective Action Coordinators (DCACs) for completeness, and then it is sent to the Station Screening Team.

8.2 The following guidelines are provided to ensure that CRs contain as much information as possible before going to the Screening Team and being assigned for action.

a. Reference material for use during pre-screening:
- Condition Reporting Process
- Guidance on CR Significance Levels
- Guidance on CR Trending
- INPO Performance Objectives and Criteria Binning Tool (chart)
- Station logic diagrams and flow diagrams
- Procedures listed on the CR
- Plant Procedures

b. Enter as much information as is available into the CR. In some cases a little research may be involved, such as looking up work order numbers for equipment problems and referring to your station's Trend Handbook for the proper codes. The following fields should be filled in to provide good information for processing the CR and for station and department trending:
- System Number
- Equipment Number
- Significance Level
- Process Codes
- Activity Codes
- Method of Discovery
- Focus areas such as:
  - Safety
  - Human Performance
  - Equipment Reliability
  - Corrective Action Program
  - Management & Supervision

- Group causing the problem
- INPO Performance Objective: refer to the INPO binning chart
- Equipment failure mode (may require contacting the originator or a subject matter expert)
- Any other appropriate trend codes

c. Conduct a review of CR history to look for trends:
- Conduct word searches in the CR database to identify whether the components have failed in the past or the issues have recurred.


- Document the results of the CR history review in the CR.
- Note any potential adverse trends that should be reviewed by the department that is assigned the CR.
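The CR history review above can be approximated with a simple keyword search. The sketch below is a stand-in for the word-search capability of a real CR database; the CR numbers and descriptions are invented for illustration.

```python
# Recurrence search sketch over prior CR descriptions. This stands in
# for the word searches a real CR database provides; all records here
# are synthetic examples.

def find_recurrences(history, keywords):
    """Return CR numbers whose description mentions every keyword."""
    hits = []
    for cr_number, description in history:
        text = description.lower()
        if all(k.lower() in text for k in keywords):
            hits.append(cr_number)
    return hits

# Hypothetical CR history records: (CR number, description).
history = [
    ("CR-04-0112", "Service water pump 1B tripped on overcurrent"),
    ("CR-05-0891", "Service water pump 1B breaker failed to close"),
    ("CR-06-0023", "Instrument air compressor unloader valve stuck"),
]
print(find_recurrences(history, ["service water", "1B"]))
```

Two prior failures of the same component would be noted in the CR as a potential adverse trend for the assigned department to review.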

9.0 Enhanced Criteria for the Station CR Screening Team

9.1 Criteria for the Station CR Screening Team are usually found in the CAP / CR procedures.

NOTE
The following guidance should not supersede or conflict with existing instructions. It provides an additional basis for consideration when determining whether a CR should be closed to actions taken and evaluated collectively by the departments.

9.2 Enhanced CAP/CR Screening Criteria

If the documented condition meets the criteria below, then the CR is assigned as usual under existing guidelines. Otherwise, the CR should be closed to actions taken or to trend, allowing the departments to evaluate the conditions collectively rather than individually under a continual Performance Improvement Program:

a. It is a significance level A or B condition requiring a root or apparent cause evaluation.

b. It is a significance level C condition, but requires immediate actions because it is an immediate safety concern in the areas of:
- Industrial Safety
- Nuclear Safety
- Radiological Safety

c. It is a significance level C condition, but requires immediate actions because the condition impacts Plant Reliability. For example:
- The condition is causing a loss of megawatts
- Degraded performance of redundant equipment

d. In addition to the above criteria, additional actions may be warranted if the condition poses a risk to the station. For example:
- Conditions impacting commitments with fixed deadlines
- Conditions affecting actions to complete station initiatives
- Any condition that warrants immediate actions based on Management or Supervisory inputs

e. If there is not enough information on the CR to determine how to disposition it, the CAP/CR should go back to the supervisor of the originator for additional information.
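The enhanced screening criteria in 9.2 reduce to a small decision rule. The sketch below encodes criteria a through c as a function; it simplifies criterion d (risk inputs need case-by-case judgment) and the parameter names are assumptions, not terms from any CR database.

```python
# Decision sketch of the enhanced screening criteria in 9.2.
# Criterion d (risk to the station) is omitted for brevity, since it
# calls for management judgment rather than a fixed rule.

def disposition(level, immediate_safety=False, impacts_reliability=False):
    """Return 'assign' when a CR meets 9.2 criteria a-c, else 'close to trend'."""
    if level in ("A", "B"):
        # Criterion a: requires a root or apparent cause evaluation.
        return "assign"
    if level == "C" and (immediate_safety or impacts_reliability):
        # Criteria b and c: level C but needs immediate action.
        return "assign"
    # Otherwise, close to actions taken or to trend and evaluate
    # collectively under the Performance Improvement Program.
    return "close to trend"

print(disposition("A"))
print(disposition("C", impacts_reliability=True))
print(disposition("C"))
```

Only the first two cases are individually assigned; the routine level C condition is left for collective departmental evaluation, which is the resource-saving point of 9.2.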

10.0 Quality Reviews of Assigned CRs

10.1 Once the CRs have been screened by the Station Screening Team and assigned to the departments, the CRs should be screened by the receiving department:

a. Check that the CR has been assigned to the correct organization.

b. Check that the level of effort assigned is appropriate for the task.

c. If there are any issues with either the assigned organization or the level of effort, refer to existing guidance on how to resolve those issues with the CAP administrators.

d. DCACs can review the CRs and provide initial recommendations to the department supervisor on:
- Priority
- Due date
- The responsible activity performer

11.0 Daily CAP Look-Ahead Reports

11.1 On a daily basis, DCACs should monitor corrective action backlogs and conduct a one-week look-ahead to ensure that there will be no overdue corrective actions in the department.

11.2 In the CR database, run a report using the responsible group codes for the associated department, for activities (actions or evaluations) that are coming due within the next calendar week.

11.3 Notify management of those activities that are coming due within the next two days.

11.4 Periodically (at management's discretion) provide department management with a complete listing of all activities for the department for their consideration.

a. DCACs should proactively review the station schedule for activities impacting work in the organization, such as refueling outages, major inspections, holidays and heavy vacation periods, against the due dates for assigned activities.

b. DCACs can make recommendations to their management on pulling actions up, or moving them back past those periods where there is a high likelihood of competing resources.
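The daily look-ahead in 11.1 through 11.3 amounts to a date filter over open activities. The sketch below is a minimal version: the one-week horizon and two-day notification window follow the text, while the record format, IDs, and dates are assumptions.

```python
# One-week look-ahead sketch for open corrective actions.
# Activity IDs and dates are hypothetical; the 7-day horizon and
# 2-day management-notification window follow Sections 11.1-11.3.
from datetime import date, timedelta

def look_ahead(actions, today, days=7):
    """Return actions due within `days`, soonest first, flagging
    those due within two days for management notification."""
    horizon = today + timedelta(days=days)
    due_soon = [a for a in actions if today <= a["due"] <= horizon]
    due_soon.sort(key=lambda a: a["due"])
    for a in due_soon:
        a["notify_management"] = a["due"] <= today + timedelta(days=2)
    return due_soon

actions = [
    {"id": "CA-1182", "due": date(2006, 6, 6)},
    {"id": "CA-1190", "due": date(2006, 6, 20)},
    {"id": "CA-1175", "due": date(2006, 6, 5)},
]
for a in look_ahead(actions, today=date(2006, 6, 5)):
    print(a["id"], a["due"], a["notify_management"])
```

In practice this report would be run directly in the CR database against the department's responsible group codes, as 11.2 describes.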

12.0 Analyzing Department Performance Data

12.1 Each department should conduct periodic (monthly or quarterly) trending of CR data, key performance indicators and other measures of performance. The following guidance is provided to assist the DCACs in managing the periodic self-evaluations.

a. Each month, DCACs query the CR database and prepare reports containing approximately six months' worth of closed CRs assigned to their department. The reports should include the following information:
- CR Number
- Description
- Why did this occur
- Immediate actions taken
- Recommendations
- References
- Process Codes
- Activity Codes
- Method of Discovery

- Focus areas:
  - Safety
  - Human Performance
  - Equipment Reliability
  - Corrective Action Program
  - Management & Supervision

- Group Causing the Problem
- INPO Performance Objective
- Equipment failure mode

b. Transfer the six months' worth of data from TeamTrack into a spreadsheet.

NOTES
Because the number of CRs varies greatly between departments, time frames longer or shorter than six months should be considered, based on collecting a meaningful amount of data to analyze. To save time executing these routine reports every month, the trend reports can be standardized and automated.

c. Conduct common cause analysis and trending of the data. Filter and sort the data by the various trend codes and tabulate the number of occurrences in each category. Chart the information using Pareto charts for further evaluation. Applying Pareto's Principle to CAP data, identify problem areas where approximately 20% of the causes are generating approximately 80% of the problems. During problem solving efforts, solving that 20% of the causes resolves the majority of a problem area; resources should then be applied to other top problem areas rather than continuing to chase the last 20% of the problems (which are caused by the remaining 80% of the causes). By applying Pareto's Principle, the department stays focused on the most important contributors to adverse trends.
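The 80/20 screening described above can be sketched as a simple tally over exported trend codes. The cause codes and counts below are hypothetical; a real report would read the department's CR export.

```python
from collections import Counter

# Illustrative cause codes pulled from six months of closed CRs
# (hypothetical data, not an actual department export).
cause_codes = (["procedure_adherence"] * 25 + ["self_checking"] * 15 +
               ["training"] * 6 + ["labeling"] * 3 + ["communications"] * 1)

def pareto_top(codes, threshold=0.8):
    """Return the smallest set of leading causes covering ~80% of occurrences."""
    counts = Counter(codes).most_common()   # sorted largest to smallest
    total = sum(n for _, n in counts)
    top, cumulative = [], 0
    for cause, n in counts:
        top.append((cause, n))
        cumulative += n
        if cumulative / total >= threshold:
            break
    return top

top = pareto_top(cause_codes)
```

Here two of the five cause codes account for 80% of the occurrences, so those two would receive the department's attention first.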


EXAMPLE In the example above, the first level Pareto for the Focus areas shows 50 CRs in the area of Human Performance. The next step is to pull those 50 CRs and further evaluate the data associated with those 50. The next level Pareto shows that the largest contributing cause of Human Performance issues was Procedure Adherence, with 25 CRs. The analysis of the 25 procedure adherence CRs found that 15 were caused by inadequate procedures. At the monthly DSEM, the department would pursue an ACE to evaluate the 15 CRs involving procedure quality.

d. Examples of data analysis tools are included in Attachment 1.
e. In addition, key performance indicators (KPIs) and other performance data for the department should also be reviewed. These include:
• Industrial Safety Data
• Department KPIs
• Human Performance Data: analyzed by the Human Performance Department Coordinators (HUDCs)
• Self-assessment Program Performance
• Benchmarking information
• Operating Experience Information
• Corrective Action Program Performance
• Department Budgets

f. Below are suggestions for reviewing some of the more important areas.
• Industrial Safety Program: collectively review applicable KPIs for industrial safety together with any available information from TeamTrack, and analyze for adverse trends and common causes.
• Human Performance Observations: trend in the same manner as CRs, as described herein.
• Corrective Action Program Indicators: collectively review the information from the CR data base together with published Corrective Action Program KPIs and metrics, and analyze for adverse trends and common causes.
• Self-assessment Program: review self-assessment reports completed in the past month and look for recurring problem areas where an escalated action plan may be required.
• Department Radiation Protection Indicators: collectively review the information from TeamTrack together with Radiation Protection KPIs and metrics, and analyze for adverse trends and common causes.
• Department Budget Indicators: review department budget KPIs and metrics and identify where actual spending is more than 10% over the authorized budget. Year-end projections should be closely monitored; ensure that recovery plans are in place if projections show the department will be over budget at year's end. (Note: the current department indicators do not show year-end projections.) Also identify if the department is more than 10% under budget, so that management can discuss how the surplus could be used to offset other departments that are over budget, and consider adjustments to future budgets.
• Benchmarking information: review the department's benchmarking schedule and ensure that completed trips were properly documented. Also review completed reports and check that any performance gaps to be addressed are tracked through the corrective action program.
• Operating Experience: review the report of OE assignments for the department and ensure that they are on track to be completed on time. Review completed reports and check that any performance gaps to be addressed are tracked through the corrective action program.
• Miscellaneous Custom Department Indicators: review customized department KPIs and metrics, as well as the custom trend hot buttons, and analyze for adverse trends and common causes.

13.0 Preparations for the Periodic Department Performance Improvement Meetings

13.1 Each period, the departments should hold a Performance Improvement meeting to review department performance data.
13.2 The following guidance is provided for preparing a summary of the overall trending and analysis to be discussed at the Department Performance Improvement Meetings:
a. A list of root cause evaluations and apparent cause evaluations completed during the previous month.
b. The list of top performance issues and adverse trends identified through data analysis and trending, and existing corrective actions.
c. The list of key performance indicators that are RED and existing recovery plans.

d. A list of department performance improvement items and associated action items completed during the past month, such as:


e. Operating experience reviews
f. Self-assessments

g. Benchmarking reports
h. Requests for support from other departments or industry peers.
i. Actions or projects that will require additional funding that need to be presented to management.
j. The three best examples of improved performance in the department during the past month, and how the Performance Improvement Program and other performance improvement tools were used to achieve those improvements.

14.0 Department Performance Improvement Meetings

14.1 Attendance:
a. Department Managers determine the final list of attendees.
b. Department Managers lead the monthly meetings and are supported primarily by the DCACs and HUDCs.
c. DCACs are responsible for reserving a room and sending the invitation to all attendees using Lotus Notes.

14.2 Agenda: there are many inputs to the self-evaluation meetings, and it is not possible to review all of the associated information during the meeting itself. For example, an analysis of corrective action program data for common causes and trends can take days. The majority of that information should be reviewed ahead of time by the DCACs and HUDCs, and those reviews can be summarized during the meeting. The following is a recommended agenda for periodic performance improvement meetings.
a. Review root cause evaluations and apparent cause evaluations completed during the previous month and discuss:
• Quality of the evaluations / grades and whether improvement is needed.
• Timeliness of the evaluations.
• Recurrence of root causes and causal factors.

b. Review the results of the monthly self-evaluation trending and analysis conducted by the DCACs and the HUDCs.
• Summarize the review of department key performance indicators.
• Summarize the data analysis conducted prior to the meeting using charts, Pareto diagrams and other indicators.
• Highlight the department's top performance issues and discuss RED indicators in detail:
1) Industrial and Radiation Safety indicators.
2) Corrective Action Program indicators.
3) Human Performance Observation data, Management Observations, Work Observations and Observations of Training.
4) Department Error Rate and Event Rate.
• Discuss corrective action plans for identified problem areas:
1) Review of existing corrective actions for problem areas that have already been documented on a CR.


2) New Action Requests (CRs) needed to address top performance issues and adverse trends that have not been previously addressed through the corrective action program.
3) Review of existing recovery plans for RED key performance indicators to determine whether they are on track.
4) New recovery plans for RED key performance indicators that do not have an action plan to restore the indicator to GREEN.
c. Review general reports and other department performance improvement items completed during the past month, such as the ones listed below, to ensure that all corrective actions or improvement initiatives are documented in the corrective action program for tracking to completion. Also review the schedule of upcoming activities to ensure they are on track.
• Operating Experience Reviews
• Self-assessments
• Benchmarking reports
• Reports from external agencies
• Budget report

d. Identify the need for support from other departments such as Training and Engineering, any major support required from off-site peers, and any additional funding requests that need to be presented to management. Training requests should be formally transmitted to the Training Manager. Requests for engineering assistance should be filled out in accordance with existing processes.

e. Identify the three best examples of improved performance in the department during the past month and how the Performance Improvement Program and other performance improvement tools were used to achieve those improvements. Document those examples in the periodic performance improvement meeting reports.

15.0 Communicating Results

15.1 Reports: following the performance improvement meetings, the departments are responsible for filling out a report.
a. The summary report should clearly convey:
• The top three problem areas for the department and ongoing corrective action efforts.
• The top three success stories in resolving identified problems.
• Any additional support or funding required to resolve the problems.
• Any feedback or requests for action being sent to other organizations, such as Training and Engineering.

b. Completed DEM summaries are forwarded to the CAP Manager.
15.2 Provide Results to Other Departments: the results of meetings are valuable inputs to other programs and organizations. The following are a few of the main functional areas that should be provided with meeting results as appropriate.
a. Training Program: training needs or requests
b. Station Trend Program: inputs on adverse trends

c. Self-assessment Program: inputs for future self-assessments
d. The Engineering Department: feedback on equipment problems

16.0 Tracking Corrective Actions

Recognizing that every organization is challenged with managing its limited resources wisely, the following guidance is provided for problem solving efforts.

16.1 Establish and maintain an ongoing integrated list of department problems and update the list on a monthly basis.
a. Use the results of the monthly meetings to update the list of top problems. The reports contain a list of the top problems identified for that month, based on a review of the past six months' worth of performance data.
b. Also consider other adverse conditions and trends identified outside of the performance improvement process.
16.2 Grade each problem as it is placed on the list and keep the problems in order of relative ranking. Consider using formal criteria to rank the problems. For example, grade the relative significance of each problem with respect to:
• Nuclear Safety
• Industrial Safety
• Radiological Safety
• Equipment Reliability
• Plant Performance
• Regulatory Impact or Commitments
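One way to turn the criteria above into a relative ranking is a weighted score per problem. The weights, scores and problem names below are illustrative only; actual values would be set by department management.

```python
# Hypothetical weighting of the handbook's grading criteria; each problem
# is scored 0-3 against each criterion.
WEIGHTS = {"nuclear_safety": 5, "industrial_safety": 4, "radiological_safety": 4,
           "equipment_reliability": 3, "plant_performance": 2, "regulatory_impact": 3}

problems = {
    "Procedure adherence errors": {"nuclear_safety": 2, "industrial_safety": 1,
                                   "radiological_safety": 0, "equipment_reliability": 1,
                                   "plant_performance": 2, "regulatory_impact": 1},
    "Mispositioned breakers":     {"nuclear_safety": 3, "industrial_safety": 2,
                                   "radiological_safety": 1, "equipment_reliability": 2,
                                   "plant_performance": 1, "regulatory_impact": 2},
}

def grade(scores):
    """Overall grade: weighted sum of the criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Relative ranking, highest grade first (the order used for assigning actions).
ranked = sorted(problems, key=lambda p: grade(problems[p]), reverse=True)
```

Keeping the scores in the integrated list lets the ranking be recomputed each month as problems are added, resolved, or regraded.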

16.3 The list should capture the following essential information:
a. Ranking
b. Overall Grade
c. Department
d. Date Identified
e. Corrective Action Tracking Number
f. Summary of Problem
g. Summary of Corrective Actions
h. Focus on Four Area
i. INPO Performance Objective
j. Significance Level
k. Assigned Priority
l. Assignee
m. Due Date
n. Date Closed
o. Key Performance Indicator Used to Track Performance Improvements
16.4 Assign actions to address those problems in order of priority, and track those actions in the corrective action program.


a. Take into consideration available resources and do not assign more actions than can be completed in a high quality manner. b. Issue apparent cause evaluations for adverse trends and significant issues. c. Issue root cause evaluations when actions to prevent recurrence are required.

16.5 Update the list on a monthly basis to ensure that the departments keep their resources focused on addressing their most important problems first.
16.6 Issues that do not get addressed one month may be addressed in subsequent months, as the higher priority problems are resolved and the remaining problem areas rise in the relative rankings.
16.7 By maintaining a list of ranked problems and following the above methodology, each department can demonstrate that it is using its available resources in the most effective and efficient manner, by applying them to the most important problem areas first.

17.0 Problem Solving

17.1 There may be more than 10,000 action requests (CRs) written each year at a typical nuclear station. Of the total CRs written in a year, approximately 30 to 50 may require Root Cause Evaluations (RCEs), and approximately 300 to 500 may require Apparent Cause Evaluations (ACEs). The remaining CRs are addressed individually or closed to actions taken, and trended by the station trend programs. What is noteworthy about the thousands of conditions that did not require RCEs and ACEs is that in many cases, their underlying causes are the same as those that are causing the bigger events.
17.2 The Performance Improvement Program was written with the premise that there is great value in analyzing those 10,000 lower level CRs for adverse trends and common causes, as eliminating their underlying causes can prevent larger events from taking place.
17.3 In previous sections of this Handbook, guidance was provided on how to identify problem areas for analysis and resolution in an organized and prioritized manner. The following guidance is provided to enhance the determination of underlying causal factors and the establishment of appropriate corrective actions.
a. When choosing a problem to solve, it is advantageous to have a set of data to evaluate rather than a single occurrence. As described in earlier sections, binning problems and charting them on Pareto diagrams provides a good method to determine the set of data to use.
b. Formal problem solving tools should be applied, such as:
• Cause and Effect Analysis
• Hazards / Barriers / Targets
• Fault Tree Analysis
• Events and Causal Factors Charting
• Five Whys Analysis
• Matrix Diagram
c. The primary purpose of using formal tools is to identify the underlying causal factors that are causing the steady stream of problems that show up every month as symptoms (performance issues). The following diagram shows how being good problem solvers involves focusing on underlying causal factors and root causes, not on the symptoms.


[Diagram: a pyramid with the symptomatic level (the number of condition reports each month) at the top, underlying causal factors beneath it, and root causes at the base]

d. A key to an effective Performance Improvement Program is that personnel across the station use these tools in everyday applications and thereby become better problem solvers, eliminating underlying causes and latent organizational weaknesses that were causing repetitive problems.
e. Examples of some of the more effective and simple to use problem solving tools are included in Attachment 1.

18.0 Selecting Corrective Actions

18.1 Corrective actions are the countermeasures put in place to address the identified causes and prevent similar problems from recurring. There is a vast range of corrective actions that can be applied to any given set of causes, with varying degrees of effectiveness and cost. The challenge is to pick cost-effective solutions, since every business has limited resources.
18.2 A tool that can help organize potential corrective actions, and help the evaluators decide which ones should be implemented, is the Countermeasures Matrix. Once the corrective actions are selected, an Action Plan can be generated from the matrix to schedule the implementation and track the plan to conclusion.

Focus on Problems vs. Focus on Solutions
When NASA began launching astronauts into space, they found out that pens would not work in zero gravity. To solve this problem, they hired Andersen Consulting (today Accenture). It took a decade and 12 million dollars, but they developed a pen that worked in zero gravity, upside down, under water, on practically any surface including crystal, and in a temperature range from below freezing to over 300 C. The Russians used pencils.

18.3 The NASA example really drives home the possible range of corrective actions that could be used to solve a problem. In many cases there will be a range of ideas on how to solve a


particular problem, and the countermeasures matrix guides the team through a logical approach to select the solutions that are most in line with the company's goals.
18.4 Instructions for how to complete a countermeasures table are included in Attachment 1.

19.0 Monthly Alignment Meetings

19.1 Purpose: at least once a month, the DCACs, HUDCs and the Station Trend Coordinator meet with the Corrective Action Program Manager to share lessons learned and compare how the departments are implementing the Corrective Action Program and the DCAC functions, so that a consistent approach is taken across the station. The meetings also serve as a forum to discuss the top problem areas among the departments to look for common causes and trends across the station.
19.2 Attendance: DCACs, HUDCs, the Station Trend Coordinator, the Human Performance and Corrective Action Program Supervisors and the CAP Manager. The meeting should be facilitated by one of the Supervisors.
19.3 Agenda: the following is a recommended agenda for the monthly DCAC Alignment Meetings.
a. Discuss difficulties in performing DCAC responsibilities, such as downloading and analyzing data, applying trend codes and preparing for the monthly meetings.
b. Discuss the top performance issues for each department. Look for any common set of causal factors across the departments for possible station trends. If the same causal factors are identified across multiple departments, the Station Trend Coordinator should determine whether a station level ACE should be issued to further evaluate the issues collectively.

NOTE
When underlying causes show up in multiple departments, it is an indication that there are root causes at work generating problems across a wider spectrum. Similarly, when the same causes show up across the fleet, it is an indication of limiting weaknesses in the organization that are at the core of the company's business models.


Attachment 1 Problem Solving Fundamentals

I. Data Analysis Tools

There are numerous data analysis tools to choose from, depending on the application; some are listed below for your information. Each has its advantages and disadvantages. This Handbook contains a few of the more effective tools for analyzing data, carefully selected for their ease of use (noted by an *).

Data Analysis Tools
• Tables & Spreadsheets*
• Histogram*
• Pareto Chart*
• Process Map*
• Task Analysis
• Change Analysis*
• Flowchart*
• Spider Chart
• Performance Matrix
• Is / Is Not Matrix
• Sampling
• Scatter Diagram
• Problem Concentration Diagram
• Affinity Diagram
• Spaghetti Diagrams

A. Tables and Spreadsheets

One of the most basic and useful methods to display data is a table. Tables are extremely useful because they form the basis for most charts. They are an excellent way to organize data and should be used wherever possible. Almost everyone is familiar with tables, so we will not go into detail on how to construct one. However, not everyone is familiar with spreadsheets, which are an excellent tool for organizing tables and charting data. Suggestions for improving your skills in using spreadsheets:
1. In the DATA menu, there is a SORT function. With this feature you can sort your data by rows or columns to arrange the data in ascending or descending order.


2. In the DATA menu, there is an AUTOFILTER function. With this function you can filter the data in a column by pre-set criteria, or you can customize what specific information you want the spreadsheet to provide.
3. In the DATA menu, there is a VALIDATION function. This function allows you to create pull-down menus for your data set. First you define a table of information to choose from, and then you format a range of cells with a drop-down arrow that lets you choose the desired information from the pre-defined table to place in each cell.
4. In the WINDOW menu, there is a FREEZE PANES function. By placing the cursor on a cell and clicking this function, you freeze every column to the left of the cell and every row above the cell. This feature is very useful for large tables where you want to keep the heading row and the first column in view as you scroll.
5. The CHART WIZARD is a quick method to chart your data sets. The Wizard will guide you step by step in selecting the right chart (there is a preview feature if you are not sure what it will look like), and it prompts you to enter the necessary headings and axis labels.
6. The FUNCTION (fx) button allows you to put formulas in cells, such as SUM and AVERAGE, allowing the spreadsheet to do calculations for you.

NOTE Most programs come with a HELP menu that will guide you through setting up and using these useful functions.

7. Using Spreadsheets to Analyze Data
a. By organizing data into a spreadsheet, you will be able to sort and filter the data so that common causes and contributors can be counted.
b. The most important aspect of analyzing data is having as much information as possible. Where data is missing, the DCACs will have to interface with the originators or subject matter experts to fill in the blanks. This step is sometimes overlooked, which dilutes the accuracy of the analysis.
c. Once the data is organized into common causes and contributors, count the occurrences in each area so that histograms and Pareto charts can be prepared.

d. The primary goal of this analysis is to prepare Pareto charts that pictorially show which groups of problems are causing the most CRs within the department, so that they can receive top priority.
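The sort, filter and count steps above can also be sketched in code. The rows and field layout below are hypothetical stand-ins for an exported CR spreadsheet.

```python
from collections import Counter

# Hypothetical rows as exported to a spreadsheet:
# (CR number, focus area, cause code).
rows = [
    ("CR-101", "Human Performance",     "procedure_adherence"),
    ("CR-102", "Human Performance",     "self_checking"),
    ("CR-103", "Equipment Reliability", "lubrication"),
    ("CR-104", "Human Performance",     "procedure_adherence"),
]

# Count CRs per focus area (like filtering and counting with AUTOFILTER),
# then drill into one focus area and count its cause codes. These tallies
# are the inputs for a histogram or Pareto chart.
by_focus = Counter(focus for _, focus, _ in rows)
hp_causes = Counter(cause for _, focus, cause in rows
                    if focus == "Human Performance")
```

The two-level tally mirrors the first-level and second-level Paretos described in Section 12: first by focus area, then by cause within the dominant area.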

B. Histograms

Once we have organized our information in a table or spreadsheet, it is often useful to view the information pictorially. Histograms, also called bar charts, are used to graphically represent the distribution and variation of a data set, because it is easier to see this variation in a chart than in a table. Histograms support common cause analysis because causes can be grouped together and displayed in a histogram.


Below is an example of a table and its associated histogram.

Work Practices Root Cause Subfactor             # of CRs
Self-checking                                      38
Procedure adherence                                33
Intended/required verification not performed       21
5 miscellaneous categories                         14
Total                                             106

[Histogram: "Work Practices Causal Factors", showing the subfactors above on the x-axis and the total CRs for Work Practices on the y-axis]
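As a rough illustration, the Work Practices counts can even be rendered as a quick text histogram; in practice a spreadsheet chart would be used, and the scale factor here is arbitrary.

```python
# The Work Practices subfactor counts from the table above.
data = [("Self-checking", 38),
        ("Procedure adherence", 33),
        ("Intended/required verification not performed", 21),
        ("5 miscellaneous categories", 14)]

def text_histogram(data, scale=2):
    """Render one '#' per `scale` CRs so the variation is visible at a glance."""
    lines = []
    for label, count in data:
        lines.append(f"{label:<45} {'#' * (count // scale)} {count}")
    return "\n".join(lines)

chart = text_histogram(data)
```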


C. Pareto Charts

Vilfredo Pareto, an Italian economist in the 1800s, first developed the Pareto Principle. He was concerned with the distribution of wealth in society and claimed that 80% of the wealth was owned by 20% of the population. In modern applications, Pareto charts help us prioritize the order in which problems should be addressed: by addressing roughly 20% of the causes, we can eliminate roughly 80% of the problems. Pareto charts are also one of the better methods to chart the results of common cause analysis.

Steps for preparing a Pareto Chart

1. Capture the data in a table or spreadsheet.

Equipment Failure Mode       Count
Aging                          7
Design                         2
Environmental Protection       1
Fabrication                    1
Foreign Material               1
Installation                   1
Lubrication                    8
Operation                      1
Pressure                       4
Temperature                    5
Total                         31

2. Sort the data so that the causes are arranged from largest to smallest.

Equipment Failure Mode       Count
Lubrication                    8
Aging                          7
Temperature                    5
Pressure                       4
Design                         2
Environmental Protection       1
Fabrication                    1
Foreign Material               1
Installation                   1
Operation                      1
Total                         31


3. Construct a Histogram

[Histogram: "Equipment Failure Modes", showing the ten failure modes (Lubrication, Aging, Temperature, Pressure, Design, Environmental Protection, Fabrication, Foreign Material, Installation, Operation) on the x-axis and the number of equipment failures on the y-axis]

4. Determine the % contribution of each failure mode to the total, and the cumulative % of the failures when added from largest to smallest (a spreadsheet can do the calculations for you).

Equipment Failure Mode (Causes)   Count   % of Total   Cumulative %
Lubrication                          8      25.8%         25.8%
Aging                                7      22.6%         48.4%
Temperature                          5      16.1%         64.5%
Pressure                             4      12.9%         77.4%
Design                               2       6.5%         83.9%
Environmental Protection             1       3.2%         87.1%
Fabrication                          1       3.2%         90.3%
Foreign Material                     1       3.2%         93.5%
Installation                         1       3.2%         96.8%
Operation                            1       3.2%        100.0%
Total                               31

Following Vilfredo Pareto's principle, select the first four failure modes for further evaluation and corrective actions. In this case, 77.4% of the failures were caused by 40% of the causes. Below is a representation of the Pareto chart, although due to limitations of the software, the cumulative % (line graph) is not displayed in classic Pareto format. In a classic Pareto, the primary Y axis (on the left) and the secondary Y axis (on the right) are set at the total number of failures (N=31), so that the 100% value corresponds to 31 failures.

[Pareto chart: "Pareto of Equipment Failure Modes" (N=31). Bars show the failure counts in descending order (Lubrication 8, Aging 7, Temperature 5, Pressure 4, Design 2, and five modes with 1 each); a line shows the cumulative %, rising from 25.8% to 100.0%]

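The % of Total and Cumulative % calculation in step 4 can be sketched as follows, using the sorted failure-mode counts:

```python
# Sorted failure-mode counts from step 2.
counts = [("Lubrication", 8), ("Aging", 7), ("Temperature", 5), ("Pressure", 4),
          ("Design", 2), ("Environmental Protection", 1), ("Fabrication", 1),
          ("Foreign Material", 1), ("Installation", 1), ("Operation", 1)]

total = sum(n for _, n in counts)   # N = 31

# Build (mode, count, % of total, cumulative %) rows, largest to smallest.
rows, cumulative = [], 0.0
for mode, n in counts:
    pct = 100.0 * n / total
    cumulative += pct
    rows.append((mode, n, round(pct, 1), round(cumulative, 1)))
```

Scanning the cumulative column for the first row at or above 80% identifies where to draw the Pareto cut line; here the fourth row (Pressure) reaches 77.4%, matching the selection of the first four failure modes.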

D. Change Analysis

Change Analysis is the comparison of a successfully performed activity to the same activity performed unsuccessfully. During the process of collecting information, all identified changes are written down. The differences are then analyzed for their effects in producing the inappropriate action or adverse event. Change Analysis helps you analyze a process and develop evaluation leads and questions to follow up on in your search for causes. The question the evaluator needs to ask is: what was different from all the other times we carried out the same task or activity without an inappropriate action or adverse event?

EXAMPLE
On your way to work you take the car out of the garage and notice oil on the floor where the car was parked. The car had never leaked oil before. Right away you would ask yourself what has changed between yesterday and today. You quickly identify that yesterday you changed the oil in the car. Although at this point you may not know exactly where the oil is coming from, you have a good lead to follow up on. When you check under the car, you notice that the oil is coming from the oil filter gasket, and you promptly correct the problem by tightening the filter. The cause of the oil leak was a loose oil filter. The underlying causal factor may be that you did not have the right level of knowledge or all the appropriate tools, but you change the oil yourself to save money. The root cause might be that you are thrifty by nature and saving money is a major factor in all your decisions.

In power plant applications, change analysis is useful in situations when a task or procedure has been done successfully many times in the past, but the last time it resulted in an undesirable outcome. The following are the general steps for conducting Change Analysis.

1. Obtain the help of subject matter experts who have done the task or evolution successfully in the past, as well as the personnel who were involved in the latest attempt, to review the chronology and available information. 2. Fill out the Change Analysis Worksheet to identify differences between the successful completion and the unsuccessful attempt, asking the five simple questions of What, Where, When, How and Who.


Change Analysis Worksheet (example)

Change Factor: WHAT (conditions, activity, equipment)
Successful outcome: Tag out the pump breaker for the 1A pump for PM.
What was different this time? Tagged the wrong breaker (pump 1B).
Adverse effect or consequences: 1A pump was not tagged and started on a demand signal.
Follow-up questions: What barriers should have prevented this from happening? How did Operations conduct the tagging and verification? Were the breakers properly labeled? Was the lighting adequate in the breaker cubicle?

Change Factor: WHAT (conditions, activity, equipment)
Successful outcome: Mechanics verify their work area is safe before starting work.
What was different this time? The tags were not verified to be hung on the 1A breaker.
Adverse effect or consequences: Worked on the pump without a proper boundary.
Follow-up questions: Why didn't the mechanics check that the tags were hung before starting work? What barriers should have prevented this from happening?

Change Factor: WHEN (occurrence, plant status, schedule)
Successful outcome: The Shift Supervisor usually approves the tag out by 0500 and tags are hung by 0700.
What was different this time? The Shift Supervisor approved the tag out at 0700, and Operators were in a hurry to get the tags hung by 0800.
Follow-up questions: How is scheduling done? Why did the Shift Supervisor not approve the tag out by 0500? How long does it usually take to hang these tags? Was there inordinate time pressure?

Change Factor: WHERE (physical locations, environmental conditions, step of procedure)
Successful outcome: Components to be tagged out are properly labeled and the area is well lit.
What was different this time? The 1A and 1B breakers for these pumps were not labeled. The area was well lit.
Adverse effect or consequences: Operator went to the wrong breaker.
Follow-up questions: Why were the breakers not labeled? How many other components are not labeled? What is the status of the plant labeling program? Was the Operator properly trained on this equipment? What was his level of experience?

Change Factor: HOW (work practice, omission, extraneous actions, out of sequence, poor procedure)
Successful outcome: Independent verifications are conducted when hanging tags.
What was different this time? The operators worked together and did not independently verify tag placement.
Adverse effect or consequences: Wrong breaker was tagged.
Follow-up questions: Why did the second verification not catch the error? Were the standards for hanging and verifying tags clear, and were they followed? What did the Operators use to verify they were on the right breaker? Is the guidance clear for how verification is to be done? Are the Operators trained on V&V?

Change Factor: HOW (work practice, omission, extraneous actions, out of sequence, poor procedure)
Successful outcome: Mechanics walk down the tag out prior to starting the job.
What was different this time? Mechanics sent a mechanic apprentice to check the tags.
Adverse effect or consequences: The mechanic apprentice saw the tag but did not recognize it was on the wrong breaker.
Follow-up questions: Who is responsible for ensuring the area is safe to start work? Is the guidance clear for verifying work areas are safe prior to starting work?

Change Factor: WHO (personnel involved (by job title, not name), supervision)
Successful outcome: Qualified Mechanics verify the area is safe before starting work.
What was different this time? A mechanic apprentice verified the tags were hung.
Adverse effect or consequences: The mechanic apprentice saw the tag but did not recognize it was on the wrong breaker.
Follow-up questions: Was the apprentice qualified to do tag verification?
3. Use the information gained from the change analysis to pursue the causal factors.
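The worksheet comparison above can be sketched in code. This is a minimal illustration only, not part of the Handbook's process: the field names and example values are hypothetical, and a real worksheet would also record the adverse effects and follow-up questions for each change factor.

```python
# Sketch of a change-analysis comparison (hypothetical field names).
# Each record captures one change factor (WHAT, WHEN, WHERE, HOW, WHO);
# the worksheet rows of interest are the factors where the successful
# baseline and the unsuccessful attempt differ.

def change_analysis(baseline: dict, current: dict) -> dict:
    """Return {factor: (baseline_value, current_value)} for every
    factor whose value changed between the two attempts."""
    changes = {}
    for factor in ("WHAT", "WHEN", "WHERE", "HOW", "WHO"):
        before = baseline.get(factor)
        after = current.get(factor)
        if before != after:
            changes[factor] = (before, after)
    return changes

baseline = {
    "WHAT": "Tag out the 1A pump breaker for PM",
    "WHO": "Qualified operator performs independent verification",
}
current = {
    "WHAT": "Tagged the 1B pump breaker instead",
    "WHO": "Qualified operator performs independent verification",
}
# Only WHAT changed, so only WHAT appears in the output.
print(change_analysis(baseline, current))
```

Each changed factor then becomes a line of questioning, as in step 3.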


E. Flowchart / Process Maps

Many of the problems at the station involve processes and procedures. In many cases it is advisable to prepare a flowchart of the process (or a segment of the process).

Flowcharts and process maps can be used:
1. To provide an understanding of the process and help pinpoint the problem area in support of cause evaluations.
2. To conduct a gap analysis between the existing process or procedure and industry standards of excellence for the same process.
3. To evaluate the process flow and look for opportunities to make the process more efficient.
4. To identify where a procedure is missing key steps or critical guidance.

NOTE One of the most effective methods for writing a new procedure or revising an existing process is to first prepare a detailed flowchart of the process, and then use the steps in the flowchart as the basis for the necessary guidance. (Section III of this Handbook was prepared using a detailed process map shown below).


Steps for preparing a flowchart
1. Flowcharts can be created on a single page or on a large whiteboard using post-it notes, depending on the complexity.
2. The basic shapes used in flowcharts are as follows:
   - Oval: start and stop points (e.g., START)
   - Rectangle: a process step (e.g., Step 3.2.1)
   - Diamond: a decision point, with YES and NO branches
   - Document symbol: documents produced or referenced
3. The shapes are connected in the sequence dictated by a procedure or a process. If there is no procedure, subject matter experts can be used to lay out the process. Build in the desired level of detail, starting with a high-level view of the process.
4. A more sophisticated method to map out a process is the Cross-Functional Flowchart. This method can be used to show how each group or person involved in the process interfaces with the process, and different process phases or segments can also be shown.
   a. Along the top margin of the flowchart, establish columns for each group or function involved in the process.
   b. Along the left margin of the flowchart, establish rows for each phase or segment of the process.
   c. When preparing the chart, follow the columns and rows to separate the groups and phases of the process as applicable.

This degree of sophistication is particularly useful when preparing multiple procedures for a complex process involving different organizations. For example, to prepare a specific procedure for one of the groups or functional areas, follow the column containing the steps that the group is involved in, down through all the phases of the process.
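As an illustration only, the cross-functional layout described above can be modeled as plain data, where each step records its group (column) and phase (row). The group names and steps below are hypothetical.

```python
# Sketch of a cross-functional flowchart as plain data (hypothetical
# structure and content). Each step records which group (column) and
# phase (row) it belongs to; extracting one group's column yields the
# step list for that group's procedure.

steps = [
    {"id": 1, "group": "Operations",  "phase": "Planning",  "text": "Request tag out"},
    {"id": 2, "group": "Scheduling",  "phase": "Planning",  "text": "Schedule the work"},
    {"id": 3, "group": "Operations",  "phase": "Execution", "text": "Hang and verify tags"},
    {"id": 4, "group": "Maintenance", "phase": "Execution", "text": "Perform the PM"},
]

def column_for(group: str, steps: list) -> list:
    """Follow one group's column down through all phases."""
    return [s["text"] for s in steps if s["group"] == group]

print(column_for("Operations", steps))  # → ['Request tag out', 'Hang and verify tags']
```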

5. Keep in mind that there are usually three versions of a flowchart:


[Diagrams: three versions of the same flowchart — "What You Think" (a simple, linear flow of steps), "What It Really Is" (a tangled flow with extra steps, loops and decision points), and "What You Want it to Be" (a streamlined flow with only the necessary steps, decisions and documents).]


II. Tools for Identifying Causal Factors

A. Cause and Effect Analysis (Fishbone Diagrams)

[Fishbone diagram: major bones labeled Process, Program, Personnel and Procedure, all pointing to the PROBLEM box at the head of the fish.]
Also known as the Fishbone Diagram or Ishikawa Diagram, after Kaoru Ishikawa, the Japanese quality management expert who created it. Once a clearly defined problem has been selected from the data analysis, cause and effect analysis can be used to:
- Help the evaluators push past the symptoms, systematically evaluate potential causes and determine which are most likely to be root causes
- Provide a structured method to ensure that a balanced list of potential causes has been generated and that possible causes are not overlooked
- Provide a pictorial representation that shows the relationship between the problem and the underlying causes

Steps for conducting Cause and Effect Analysis
1. Assemble a facilitator familiar with the process and a group of subject matter experts in the problem area.
2. Write the specific problem statement to be evaluated at the head of the fishbone, and be as specific as possible. For example: "Procedure adherence contributed to 50% of the 50 CRs in the Operations Department for the period of May to October 2006."
3. Construct the major bones of the fishbone. They can vary and are determined based on the area to be evaluated. Typical major bones to choose from are listed below:
   - Man, Method, Management, Material, Machine, Measurement
   - Personnel, Processes, Procedures, Standards/Expectations & Policies, Plant Equipment & Facilities, Tools
4. Considering each of the major categories, brainstorm a list of all the possible first-level causes and write them on the branches that apply.
5. For each of the possible causes in the major categories, ask why the cause exists until the answer is no longer actionable or reasonable.

Example Problem: Procedure adherence contributed to 50% of the 50 CRs in the Operations Department for the period of May to October 2006

1. Procedures Category:
   - Why were there 25 CRs on procedure adherence in Ops? Because operating procedures are less than adequate.
   - Why are operating procedures less than adequate? Because they are outdated.
   - Why are they outdated? Because they have not been upgraded since they were originated.
   - Why haven't they been upgraded since they were originated? Because major procedure upgrades have not been funded.
   - Why has a procedure upgrade project not been funded? Because a project proposal has not been submitted to the annual station budget.
   - Why has a project proposal not been submitted? Because of unclear standards and expectations in this area.

Taking the evaluation to a cause that is not actionable is not sensible and can lead to frustration. For example: gravity, the earth's rotation, or the annual budget for the Nuclear Business Unit.

NOTE In the above example, the evaluators drilled down on the Procedures category and asked why each time. One thing to note is that each question can have more than one possible answer, creating many branches for the same question. The strength of the Fishbone diagram is that it keeps all these causal relationships in order and represents them pictorially in an easy to understand format.
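The search for lowest-level causes that recur on different branches can be sketched in code. The fishbone data below is a condensed, partly hypothetical version of the example above; it is an illustration, not part of the Handbook's method.

```python
# Sketch of a fishbone as nested dicts (partly hypothetical data): each
# key is a cause, each value maps to its deeper "why" answers. Leaves
# (empty dicts) are the lowest-level causes; causes that appear as a
# leaf under more than one branch are the most likely underlying causes.

from collections import Counter

def leaf_causes(tree: dict, counts: Counter) -> None:
    """Count every lowest-level (leaf) cause in the fishbone."""
    for cause, deeper in tree.items():
        if deeper:
            leaf_causes(deeper, counts)
        else:
            counts[cause] += 1

fishbone = {
    "Procedures": {"Procedures outdated": {"Upgrades not funded":
        {"Unclear standards and expectations": {}}}},
    "Personnel": {"Verification skipped":
        {"Unclear standards and expectations": {}}},
}

counts = Counter()
leaf_causes(fishbone, counts)
repeated = [c for c, n in counts.items() if n > 1]
print(repeated)  # → ['Unclear standards and expectations']
```

The causes in `repeated` are the candidates to "cloud" on the diagram and confirm with the screening questions.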


6. Once the diagram is complete, look for the lowest level causes that have come up repeatedly in different parts of the Fishbone. Those are the most likely underlying causes of the problem. Cloud these underlying causes on the diagram.

NOTE There are usually several underlying causes for any given problem. If the analysis only identified one possible cause, consider adding additional major categories and continuing the analysis.

7. Confirm these causes using a simple screening:
   a. If these causes never existed, would the problem still have occurred?
   b. If these causes are eliminated, would the problem completely go away?
8. Causes that have been confirmed as the most likely causes or root causes are assigned corrective actions using a countermeasures matrix.

B. Hazards / Barriers / Targets (HBT) Analysis

HBT analysis is used to identify barriers that should have prevented the problem from occurring. The analysis identifies where barriers were missing, weak or ineffective, and what new barriers might be needed. At nuclear plants, there are many threats (hazards) to the safe and reliable operation of the station. Examples include:
- Radiation
- Electricity
- High energy systems
- Aging equipment
- Heavy loads
- Confined spaces

The targets that the barriers are designed to protect are usually employees and the general public, as well as plant equipment that is required either for the safe operation of the plant or to mitigate the consequences of design basis events. Nuclear plants employ a defense-in-depth concept, and for every potential hazard there are numerous barriers in place that were designed to prevent it. There are two general types of barriers: physical and administrative.

1. Physical barriers include:
   - Automatic shutdown devices
   - Safety and relief devices
   - Conservative design margins
   - Engineered safety features
   - Radiation shielding
   - Redundant equipment
   - Locked doors and valves
   - Fire barriers and seals

2. Administrative barriers include:
   - Regulations
   - Technical Specifications
   - Training and qualification programs
   - Operating and maintenance procedures
   - Programs, policies and practices
   - Management & supervisory oversight

Steps for conducting HBT analysis: 1. The problem being evaluated is regarded as the threat or hazard, and the target is usually personnel or equipment.

[Diagram: Hazards (steam, electricity, radiation, heavy loads, confined spaces, hot surfaces, extreme temperatures, rotating equipment) are separated from the Targets (personnel and SSCs) by Barriers (postings, doors, locks, insulation, shielding, standards, procedures, training, experience, supervision, oversight, regulations, best practices).]
2. For the identified problem, prepare a list of barriers that were put in place to protect the target from the hazard.
3. Place the identified barriers in a Barrier Analysis Worksheet and analyze each barrier to determine to what extent it failed and why. Failed barriers should be assigned corrective actions.
4. Similarly, identify potential new barriers that, had they existed, may have prevented the problem from occurring. Potential new barriers are also considered for action in a countermeasures matrix.


Barrier Analysis Worksheet

Hazard / Problem / Consequence: Worker received electrical shock

Barrier That Should Have Prevented the Problem (or New Barrier) — Why Did the Barrier Fail?
- Industrial Safety Program — Guidelines for checking work area not followed
- Human Performance Program — Did not self-check or peer check; no questioning attitude
- Worker Practices — Guidelines for working on energized systems not followed
- Tagging Program — Tags were hung on wrong component
- NEW: Live-Dead-Live Checks — Add new requirement to work practices Handbook

C. Fault Tree Analysis

Fault tree analysis is a systematic approach to identifying potential causal factors. It provides an excellent representation of the problem and its potential causes, and is most useful for equipment problems because they have a finite number of possible causes that can be identified and validated through a process of elimination. Fault tree analysis is an excellent troubleshooting tool as well.

Steps for preparing a Fault Tree
1. Fault trees can be created on a single page or on a large whiteboard using post-it notes, depending on the complexity.
2. The basic shapes used in fault trees are as follows:

- AND gate: used when all of the inputs below it must occur to trigger the event above it
- OR gate: used when any one of the inputs below it can trigger the event above it
- Rectangle: an event or cause
- Circle: a basic event (the lowest possible cause)
3. Write the specific problem statement to be evaluated at the top of the fault tree.
4. Using available information such as drawings, wiring diagrams and procedures, construct the fault tree using AND gates when all of the underlying causes have to occur to trigger the higher level event, and OR gates when any one of the causes can trigger the higher level event.
5. The last possible cause in each series is depicted by a circle. The fault tree charting is complete when the lowest causes shown on the diagram are all circles.
6. Once all the possible causes are charted, begin a process of elimination based on actual data or expertise. Each fault path to the problem is evaluated and verified to exist or not. This may involve one or more of the following:
   a. Equipment testing
   b. Preventive maintenance that exercises the potential fault
   c. Physical inspections
   d. Review of event chronology
   e. Review of governing procedures
   f. Personnel interviews and eyewitness accounts
7. The process ends when all of the potential fault paths have been evaluated and as many of the fault paths as possible ruled out. The remaining basic events that cannot be eliminated will require corrective actions to ensure that the fault path has been addressed.
Example (text rendering of the fault tree diagram):

COMPONENT COOLING WATER PP WILL NOT START (OR)
- NO BUS POWER (AND)
  - OFFSITE PWR OOS
  - EDG PWR OOS
- NO CONTROL POWER (OR)
  - DC BKR OPEN
  - CONTRL PWR FUSES BLOWN
- START LOGIC NOT SATISFIED (OR)
  - LOW BEARING WATER FLOW: SUCTION VLV CLOSED (OR)
    - VALVE NOT FULL OPEN
    - LIMIT SW DEFECT
  - XFER SW NOT SELECTED TO CONTROL ROOM
  - PP START NOT SELECTED ON CONTROL SW
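The AND/OR gate logic used in the steps above can be sketched in code. This is a minimal illustration with hypothetical event names: it simply checks which fault paths still stand after some basic events have been ruled out by testing or inspection.

```python
# Sketch of fault-tree evaluation (hypothetical structure). Gates are
# ("AND", [children]) or ("OR", [children]); basic events are strings
# looked up in `ruled_out`. A path still "stands" if it has not been
# eliminated.

def stands(node, ruled_out: set) -> bool:
    if isinstance(node, str):                 # basic event (circle)
        return node not in ruled_out
    kind, children = node
    results = [stands(c, ruled_out) for c in children]
    return all(results) if kind == "AND" else any(results)

tree = ("OR", [
    ("AND", ["OFFSITE PWR OOS", "EDG PWR OOS"]),       # no bus power
    ("OR",  ["DC BKR OPEN", "CTRL PWR FUSES BLOWN"]),  # no control power
])

# Testing confirmed offsite power and the DC breaker are fine:
ruled_out = {"OFFSITE PWR OOS", "DC BKR OPEN"}
print(stands(tree, ruled_out))  # → True: blown fuses not yet ruled out
```

When `stands` returns False for every branch, all fault paths have been eliminated; any basic events that cannot be ruled out require corrective actions.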


III. Evaluating and Implementing Corrective Actions

A. Countermeasures Matrix

A Countermeasures Matrix should be used for evaluating proposed corrective actions that come out of the ACEs used to evaluate adverse trends identified through the DSEMs. In addition, this useful tool can be used to evaluate proposed corrective actions that are generated by:
- Root cause evaluations
- Apparent cause evaluations
- Self-assessments

A Countermeasures Matrix does the following:
- Graphically shows the relationship between the likely causes and the proposed corrective actions.
- Helps the team select the countermeasures that best address those causes using formal screening criteria.
- Provides management with a prioritized list of corrective actions so that decisions can be made based on station budgets, resources, etc.
- After the decision is made on what is to be implemented, documents what was not implemented so the risk of recurrence is understood.

Below are the steps for filling out a Countermeasures Matrix.


Example matrix (Feasibility, Effectiveness and Cost Effective each scored 1=LOW to 5=HIGH):

Problem: Problem Statement
Most Likely Causes: 1, 2
Other Factors: (none)

Corrective  Implementation  Feasibility  Effectiveness  Cost       Overall  Ranking  Selected for
Action      Details                                     Effective  Score             Implementation
A           B, C            1            5              1          7        4        NO
D           E, F            2            4              2          8        3        NO
G           H, I            4            3              4          11       2        YES
J           K, L            5            3              5          13       1        YES
a. Problem: the problem statement used at the start of the analysis.
b. Most Likely Causes: the list of most likely causes that were identified through the evaluation process. To help zero in on the most likely causes, ask the following questions:
   1) If this cause never existed, would the problem still have occurred?
   2) If this cause is eliminated, would the problem completely go away?
c. Corrective Actions: the big-picture corrective actions that will address the problem area.

d. Implementation Details: the main tasks needed to implement each of the corrective actions. Providing this next level of detail helps in the analysis by identifying major hurdles to implementing a corrective action.
e. Scoring: the team can select any number of factors to score the proposed list of corrective actions. Scoring can be done on any desired scale (1 to 5 or 1 to 10, with the highest value being the most effective score). Examples of scoring factors:

1) Feasibility: a rating based on how hard or easy it is to implement the corrective action.
2) Effectiveness: the likelihood that the problem will be eliminated; a rating based on how effectively the corrective action will address the cause.
3) Cost Effectiveness: the cost of implementing the corrective action is sometimes left out of pure problem solving efforts, but it is a critical criterion given limited budgets and resources. Measure cost in terms of both dollar amounts and person-hours required to implement. In some cases there is no cash outlay, but the staff may have to invest hundreds of hours of department resources.
4) Other Factors: add additional columns to consider other factors that affect the decision. For example:
   - Whether the action enhances regulatory margin
   - The positive effect on industrial, radiological and nuclear safety
   - The positive effect on station morale

5) Overall Score: the sum of the factors that were considered. Consider increasing the range from 5 to 10 and multiplying the scores instead of adding them when there is not enough separation between the scores.
6) Ranking: the final ranking based on the scores. Once the corrective actions are ranked, they should be sorted in order of priority.
7) Selected for Implementation: YES or NO. The purpose of ranking the corrective actions is to allow management to decide how many solutions can be implemented based on available budgets and resources.
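The scoring, ranking and selection steps can be sketched as follows. The action names and score values are hypothetical examples; the sum-then-rank logic mirrors the overall score, ranking and selection columns of the matrix.

```python
# Sketch of countermeasure scoring and ranking (hypothetical data).
# Each action gets 1-5 scores for feasibility, effectiveness and cost
# effectiveness; the overall score is their sum, and management selects
# the top N for implementation based on budget and resources.

actions = [
    {"name": "A", "scores": (1, 5, 1)},
    {"name": "D", "scores": (2, 4, 2)},
    {"name": "G", "scores": (4, 3, 4)},
    {"name": "J", "scores": (5, 3, 5)},
]

for a in actions:
    a["overall"] = sum(a["scores"])

# Sort in order of priority (highest overall score first).
ranked = sorted(actions, key=lambda a: a["overall"], reverse=True)
top_n = 2  # budget allows implementing two actions
for rank, a in enumerate(ranked, start=1):
    a["rank"] = rank
    a["selected"] = rank <= top_n

print([(a["name"], a["overall"], a["selected"]) for a in ranked])
```

Actions not selected stay in the matrix as a record, so the residual risk of recurrence remains visible.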

NOTE The matrix will remain as a record of the actions taken and not taken. The matrix can be revisited and additional corrective actions may be funded and implemented as desired if the KPIs show that the problem has not been sufficiently addressed by the initial corrective actions.

***END OF TEXT***

