
Bottom Line Success with Six Sigma

Define key process output variables and their effects on the cost of poor quality
by Forrest W. Breyfogle III and Becki Meadows

Learning disabilities are difficult, regardless of who or what they affect. Unfortunately, learning disabilities can be fatal to organizations, causing most companies to throw in the towel before hitting their 40th anniversary.1 However, more organizations are overcoming these disabilities by evolving into learning organizations. They are defying the odds stacked against them by wisely applying a particular strategy: the Six Sigma methodology. Six Sigma offers a road map for changing data into knowledge, reducing the amount of daily firefighting and uncovering opportunities that affect both the customer and the bottom line. Some organizations, however, have had mediocre results after implementing Six Sigma. The reasons for this vary but typically lie within a company's infrastructure--the road map used for executing projects and establishing metrics.

Companies with profitable Six Sigma strategies are successful because they maintain effective infrastructures for selecting, supporting and executing projects. They do so by thinking of their infrastructures as ongoing processes that are continually improved and revisited through executive planning. Figure 1 offers a version of this process. This article focuses on the first step under the project teams category in Figure 1. Within this step--defining the project key process output variables (KPOVs)--the emphasis is on counting defects, choosing metrics and calculating the cost of poor quality (COPQ), otherwise known as the cost of doing nothing (CODN).


Counting defects

Organizations often waste time creating metrics that are not appropriate for the outputs being measured. Executive leaders can get deceptive results if they force all projects to adopt a one size fits all metric in order to compare the quality of products and services from various departments. From a management point of view, having one universal metric seems beneficial. However, such a directive can lead to ineffective activities and often encourages playing games with the numbers. Metrics that expose the hidden factory--rework within a process--such as defects per million opportunities (DPMO), can be very beneficial to some projects; however, the same metric can require a huge amount of questionably valued effort on other projects. To illustrate how metrics can be deceptive, consider the following scenarios:

Scenario 1: Measuring defects in a manufacturing process. Consider the manufacturing of high precision sheet metal, where surface voids and scratches are unacceptable. In a one size fits all metric culture, we would be forced to define an area of the part as an opportunity (for failure) since this manufacturing process does not consist of discrete parts. These boundaries are supposed to reflect customer needs, but they typically lead to inconsistencies and playing games with the numbers. For instance, one group might select a square foot as an opportunity, while another might select a square inch or a square millimeter. If the sigma quality level or DPMO rate does not look good, a team might even believe it needs to improve the perception of product quality by changing the area considered. This can yield dramatically different results, offering little insight into the process.

Scenario 2: Measuring defects in the service industry. In Six Sigma, a defect rate is one measure of the frequency with which an event does not meet customer expectations. In a manufacturing environment, for example, if a shaft is larger than a diameter tolerance specification limit, the shaft could be too large to rotate freely within a bushing that is part of a later assembly. In this process, manufacturing specification limits have physical meaning; in a service environment, however, this is often not the case. In the airline industry, whenever an airplane departs 15 minutes or more later than scheduled, the event is labeled a defect. This defect metric is not an accurate representation of the desires of the customers affected by the process. If it were a good descriptive metric, passengers would be equally dissatisfied whether the airplane were 16 minutes late or 3 hours late. This is typically not the case, since a departure that is off by 16 minutes might not affect a connection at another destination or cause other inconveniences, while a departure that is 3 hours late could cause much more passenger frustration and inconvenience.

Choosing the right metrics

A Six Sigma business strategy should encourage creating the right metric(s) for each situation. We discourage using a sigma quality level metric (such as equating a 3.4 DPMO rate with a Six Sigma quality level) as the universal measure. This metric can often lead to the wrong activity and the fabrication of numbers, an extension of the pitfalls previously described. With a Six Sigma business strategy, we are trying to determine more than just a snapshot of the rates of occurrence for a process.
We really want a picture that describes the output of a process over time, along with additional metrics, to give us insight as to where to focus our improvement efforts. Unfortunately, organizations often encourage practitioners to compile data in a format that does not lead to useful information.
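To make the opportunity-counting pitfall of Scenario 1 and the sigma quality level discussion concrete, here is a minimal sketch in Python using entirely hypothetical defect counts and sheet dimensions (the article itself gives no numbers). It shows how the same physical inspection data yield very different DPMO and sigma quality level figures depending on how an opportunity is defined.

```python
# Hypothetical illustration: same defect data, different "opportunity" definitions.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: float) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Conventional sigma quality level: the normal quantile for the
    long-term defect rate plus the customary 1.5 sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# 120 surface defects found on 500 sheets, each sheet 4 ft x 2 ft
# (8 sq ft, or 1,152 sq in) -- made-up numbers for illustration only.
defects, sheets = 120, 500

for label, opportunities_per_sheet in [("square foot", 8), ("square inch", 1_152)]:
    d = dpmo(defects, sheets, opportunities_per_sheet)
    print(f"Opportunity = one {label}: DPMO = {d:,.0f}, "
          f"sigma level = {sigma_level(d):.2f}")
```

With identical inspection results, the reported performance moves from roughly a 3.4 to a 5.0 sigma quality level simply because the opportunity was redefined from a square foot to a square inch--exactly the kind of numbers game a one size fits all metric invites.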

To avoid this pitfall, we suggest infrequent sampling from a process and the creation of "30,000 foot level" metrics for a project (see Figure 2). The frequency of sampling should be such that the typical noise variability of the process has a chance to occur between sampling points. For example, if raw material changes daily, we might select one data point per day. It should be emphasized that the intent of a 30,000 foot level control chart (and later process capability analysis) is not to understand what might be causing variability or unsatisfactory results. The intent is to establish a process baseline and the COPQ/CODN from the vantage point of a final customer, which in Figure 2 is determined from the frequency of nonconformance before the process change.

When organizations use this approach to track key metrics, they typically redirect resources from firefighting modes to fire prevention activities through Six Sigma projects. The reason is that many undesirable outcomes previously considered special causes would now be considered common causes that can be fixed only through systematic process improvement efforts. This approach gives focus to what is sometimes called long-term variability within Six Sigma.

When the 30,000 foot level metrics and COPQ/CODN indicate that change is needed (see Figure 2), teams tap into organizational wisdom to determine where to focus improvement efforts or future passive analyses. Sometimes these techniques capture low hanging fruit--improvement ideas that are obviously beneficial. In other cases, there will be a need to test theories through passive analyses using advanced statistical tools (such as analysis of variance, regression analysis and variance components analyses) in order to determine the key process input variables (KPIVs) affecting the 30,000 foot level metrics. Passive analyses can then lead to proactive testing in the improve phase of Six Sigma, using the power of design of experiments. The control phase would then be used to maintain the identified KPIVs so that project improvement benefits are sustained after the Six Sigma practitioner moves on to another project. This approach is a much more powerful strategy than using short-term process entitlement as a driving metric, as suggested by some Six Sigma providers.
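As a rough illustration of this baselining idea (a sketch under stated assumptions, not the authors' exact procedure or Figure 2 itself), the code below assumes one daily reading of a continuous output, builds individuals (XmR) control chart limits to judge stability at the 30,000 foot level, and converts the long-term nonconformance fraction into a ballpark annual COPQ/CODN figure using hypothetical cost and volume numbers.

```python
# Sketch of a 30,000 foot level baseline from infrequent (daily) sampling.
# All data, specification, cost and volume values are hypothetical illustrations.
import random
import statistics

random.seed(1)
daily_samples = [random.gauss(10.0, 0.6) for _ in range(100)]  # one reading per day
upper_spec = 11.0                                              # customer upper spec limit

# Individuals (XmR) chart limits from the average moving range (2.66 = 3/d2, d2 = 1.128).
moving_ranges = [abs(b - a) for a, b in zip(daily_samples, daily_samples[1:])]
center = statistics.fmean(daily_samples)
mr_bar = statistics.fmean(moving_ranges)
ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
stable = all(lcl <= x <= ucl for x in daily_samples)

# Long-term (overall) variation drives the customer-view nonconformance estimate.
sigma_long_term = statistics.stdev(daily_samples)
p_nonconforming = 1 - statistics.NormalDist(center, sigma_long_term).cdf(upper_spec)

# Rough COPQ/CODN: nonconforming fraction x annual volume x cost per nonconformance.
annual_volume = 250_000          # hypothetical units per year
cost_per_nonconformance = 45.0   # hypothetical rework/scrap/expedite cost per unit

copq = p_nonconforming * annual_volume * cost_per_nonconformance
print(f"Process stable (no points beyond XmR limits): {stable}")
print(f"Estimated long-term fraction nonconforming: {p_nonconforming:.2%}")
print(f"Ballpark annual COPQ/CODN: ${copq:,.0f}")
```

If the chart shows a stable (in control) process yet the COPQ/CODN figure is unacceptable, that combination signals the nonconformances are common cause--the point in Figure 2 where a systematic improvement project, rather than firefighting of individual points, is warranted.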

Calculating the cost of poor quality

Organizations are inconsistent in how they count bottom-line Six Sigma project benefits. Some organizations count only hard savings and will not give focus to soft savings--improving efficiency where there is no immediate head count reduction. If only hard savings are considered, there will be minimal effort to improve efficiency when there is no immediate head count reduction, or to pursue cost prevention activities such as reducing development cycle times. Both improved efficiency and cost prevention activities, however, can be very beneficial to an organization and should, in our opinion, be addressed within a Six Sigma business strategy.

In addition to being a controversial metric, soft savings can be difficult to determine. For the same process, one person can calculate a soft savings amount that differs considerably from another person's figure.

Calculating COPQ/CODN is a subprocess of Figure 1, involving employees from multiple levels of the organization. The ideal process incorporates a rough estimate of COPQ/CODN in the selection of strategic projects. The project team later refines this calculation with the help of a finance representative.

COPQ/CODN should be a common metric considered within all projects. It effectively communicates project worth throughout all levels of the organization and ties quality to the bottom line. Even though a COPQ/CODN metric that includes soft savings is, at times, ambiguous, with some agreed upon guidelines this metric can help lead organizations to the right activity. The wise implementation of the COPQ/CODN metric is a critical element of any successful Six Sigma infrastructure.

REFERENCE

1. Peter M. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday/Currency, 1990).

BIBLIOGRAPHY

Breyfogle III, Forrest W., Implementing Six Sigma: Smarter Solutions Using Statistical Methods (New York: John Wiley & Sons, 1999).

Breyfogle III, Forrest W., Statistical Methods for Testing, Development and Manufacturing (New York: John Wiley & Sons, 1992).

Breyfogle III, Forrest W., James M. Cupello and Becki Meadows, Managing Six Sigma: A Practical Guide to Understanding, Assessing and Implementing the Strategy That Yields Bottom-Line Success (New York: John Wiley & Sons, 2001).

FORREST W. BREYFOGLE III is the president of Smarter Solutions Inc., headquartered in Austin, TX. He earned a master's degree in mechanical engineering from the University of Texas at Austin. Breyfogle has authored several books, including Implementing Six Sigma and Managing Six Sigma. He is a Fellow of ASQ and a certified quality engineer and reliability engineer.

BECKI MEADOWS is a Six Sigma consultant with Smarter Solutions in Boulder, CO. She earned her bachelor's degree in mechanical engineering from the University of Michigan in Ann Arbor. Meadows is a co-author of the book Managing Six Sigma.

As seen in Quality Progress, May 2001
