

Unit 7 Statistical Quality Control


Objectives

After reading this unit you should be able to:

- Apply statistical thinking to quality improvement;
- Use the 7 QC tools to troubleshoot quality problems;
- Determine the process capability of a manufacturing process;
- Explain what a control chart is and how to design one;
- Select appropriate control charts for different applications;
- Understand the principles behind sampling methods; and
- Understand the role of 6-sigma and other advanced QC methods.

Structure

7.1 What is Statistical Quality Control?
7.2 Process Capability: A Discerning Measure of Process Performance
7.3 The Seven Quality Improvement Tools
7.4 Control Charts for Variables Data
7.5 Special Control Charts for Variables Data
7.6 Control Charts for Attributes
7.7 Choosing the Correct SPC Chart
7.8 Implementing SPC
7.9 Acceptance Sampling
7.10 What are Taguchi Methods?
7.11 The Six-Sigma Principle
7.12 Summary
7.13 Key Words
7.14 Self-Assessment Questions
7.15 Further Readings and References

7.1 What is Statistical Quality Control?

The concept of TQM is basically very simple. Each part of the organization has customers, some external and many internal. Identifying what the customer requirements are and setting about to meet them is the core of a total quality approach. This requires a good management system, methods including statistical quality control (SQC), and teamwork. A well-operated, documented management system provides the necessary foundation for the successful application of SQC. Note, however, that SQC is not just a collection of techniques. It is a strategy for reducing variability, the root cause of many quality problems. SQC refers to the use of statistical methods to improve quality and thereby customer satisfaction. However, this task is seldom trivial because real-world processes are affected by numerous uncontrolled factors. For instance, within every factory, conditions fluctuate with time. Variations occur in the incoming materials, in machine conditions, in the environment and in operator performance. A steel plant, for example, may purchase good quality ore from a mine, but the physical and chemical characteristics of ore coming from different locations in the mine may vary. Thus, everything isn't always "in control." Besides ore, in steel making, furnace conditions may vary from heat to heat. In welding, it is not possible to form two exactly identical joints, and faulty joints may occur occasionally. In a cutting process, the size of each piece of material cut varies; even the highest-quality cutting machine has some inherent variability. In addition to such inherent variability, a large number of other factors may also influence processes (Figure 7.1). Many of these variations cannot be predicted with certainty, although sometimes it is possible to trace the unusual patterns of such variations to their root cause(s). If we have collected sufficient data on these variations, we can tell, in terms of probability, what is most likely to occur next if no action is taken. If we know what is likely to occur next under given conditions, we can take suitable actions to try to maintain or improve the acceptability of the output. This is the rationale of statistical quality control.
[Figure 7.1: Input, Control and Output Variables of a Process. The process receives inputs of raw material (quality/quantity) and labour (training level); control variables such as set points for temperature, cutting speed, raw material specs and recipe; and noise such as ambient temperature, vibration, humidity and supply voltage. Together with its state variables, these determine the outputs: quality of finished products, variation in output, and the level of customer satisfaction.]

Another area in which statistical methods can help to improve product quality is the design of products and processes. It is now well understood that over two-thirds of all product malfunctions may be traced to their design. Indeed, the characteristics or quality of a product depend greatly on the choice of materials, the settings of various parameters in the design of the product, and the production process settings. In order to locate an optimal setting of the various parameters which gives the best product, we may consider using models relating the outcome to the various parameters, if such models can be established by theory or through experimental work. Such a model is diagrammatically shown in Figure 7.2. However, in many cases a theoretical model y = f(x) relating the final output responses (y1, y2, y3, ...) to the input parameters (x1, x2, x3, ...) is either extremely difficult to establish or mathematically intractable. The following two examples illustrate such cases.

Example 1: In the bakery industry, the taste, tenderness and texture of a kind of bread depend on various input parameters such as the origin of the flour used, the amount of sugar, the amount of baking powder, the baking temperature profile and baking time, the type of oven used, and so on. In order to improve the quality of the bread produced, the baker may use a model which relates the input parameters to the output quality of the bread.
[Figure 7.2: An Idealized Process Model. Input factors x1, x2, ... enter the process, which produces the output responses y1, y2, ...]

Finding theoretical models that quantify the taste, tenderness and texture of the bread produced, and relating these quantities to the various input parameters on the basis of present scientific knowledge, is a formidable task. However, the baker can easily use the statistical methods of regression analysis to establish empirical models and use them to locate an optimal setting of the input parameters.
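To make this concrete, here is a minimal sketch of the baker's empirical-model approach. The input settings, taste scores and variable names are hypothetical assumptions, not data from the text; the point is only to show a least-squares fit standing in for the regression analysis mentioned above.

```python
# A minimal sketch of fitting an empirical model by least squares.
# Each row of X holds hypothetical settings (sugar_g, baking_powder_g,
# oven_temp_C) for one trial batch; `taste` holds the panel's score.
import numpy as np

X = np.array([
    [40.0, 4.0, 180.0],
    [50.0, 4.0, 190.0],
    [40.0, 6.0, 190.0],
    [50.0, 6.0, 180.0],
    [45.0, 5.0, 185.0],
])
taste = np.array([6.1, 7.0, 6.6, 6.9, 7.4])

# Fit the empirical linear model y = b0 + b1*x1 + b2*x2 + b3*x3.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, taste, rcond=None)
print("intercept and coefficients:", coef)

# Predict the taste score for a candidate setting of the inputs.
candidate = np.array([1.0, 48.0, 5.0, 188.0])
print("predicted taste:", candidate @ coef)
```

With a model like this in hand, the baker can search over candidate settings for the one with the best predicted response, which is the "optimal setting" the text refers to.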

Example 2: Sometimes there are great difficulties in solving an engineering problem using established theoretical models. The heat accumulated on a chip in an electronic circuit during normal operation will raise the temperature of the chip and shorten its life. In order to improve the quality of the circuit, the designer would like to optimize the design of the circuit so that the heat accumulated on the chip will not exceed a certain level. This accumulated heat can be expressed theoretically in terms of other parameters in the circuit using a complicated system of ten or more daunting partial differential equations, which in principle could be used to optimize the circuit design. However, it is usually not possible to solve such a system analytically, and solving it numerically on a computer also presents computational difficulties. In this situation, a statistical methodology known as design of experiments (DOE) can be used to find an optimal design of the circuit without going through the complicated exercise of solving partial differential equations. In other cases, control may need to be exercised on-line, while the process is in progress, based on how the process is performing, in order to maintain product quality. Thus the problems addressed by statistical quality control are numerous and diverse.

SQC engages the following three methodologies:


1. Acceptance Sampling

This method is also called "sampling inspection." When products are required to be inspected but it is not feasible to inspect 100% of them, samples of the product may be taken for inspection and conclusions drawn from the results of inspecting the samples. This technique specifies how to draw samples from a population and what rules to use to determine the acceptability of the product being inspected.
2. Statistical Process Control (SPC)

Even in an apparently stable production process, the products produced are subject to random variations. SPC aims at controlling the variability of process output using a device called the control chart. On a control chart, a certain characteristic of the product is plotted. Under normal conditions these plotted points are expected to vary in a "usual" way on the chart. When abnormal points or patterns appear on the chart, it is a statistical indication that the process parameters or product characteristics might have changed undesirably. At this point an investigation is conducted to discover the unusual or abnormal conditions (e.g. tool breakdown, use of wrong raw material, temperature controller failure, etc.). Subsequently, corrective actions are taken to remove the abnormality. In addition to the use of control charts, SPC also monitors process capability, an indicator of the adequacy of the manufacturing process to meet customer requirements under routine operating conditions. In summary, SPC aims at maintaining a stable, capable and predictable process. Note, however, that since SPC requires processes to display measurable variation, it is ineffective for quality levels approaching six-sigma, though it is quite effective for companies in the early stages of quality improvement efforts.
3. Design of Experiments

Trial and error can be used to run experiments in the design of products and processes, in order to find an optimal setting of the parameters so that products of good quality will be produced. However, performing experiments by unscientific trial and error is frequently very inefficient in the search for an optimal solution. Application of the statistical methodology of "design of experiments" (DOE) helps us perform such experiments scientifically and systematically. Additionally, such methods greatly reduce the total effort expended in product or process development experiments, while increasing the accuracy of the results. DOE forms an integral part of Taguchi methods, the techniques that produce high quality and robust product and process designs.
The Correct Use of Statistical Quality Control Methods

The production of a product typically progresses as indicated in the simplified flow diagram shown in Figure 7.3. In order to improve the quality of the final product, design of experiments (DOE) may be used in Step 1 and Step 2, acceptance sampling may be used in Step 3 and Step 5, and statistical process control (SPC) may be used in Step 4.

[Figure 7.3: Production from Design to Despatch. A simplified flow: (1) product design, (2) process design, (3) procurement of materials and parts, (4) production, (5) product despatch.]

The SPC approach brings several benefits, which are as follows:

- There are no restrictions as to the type of process being controlled or studied, and the process tackled will be improved.
- Decisions guided by SPC are based on facts, not opinions. Thus a lot of "emotion" is removed from problems by SPC.
- Quality awareness of the workforce increases, because they become directly involved in the improvement process.
- Knowledge and experience of those who operate the process are released in a systematic way through the investigative approach. They understand their role in problem solving, which includes collecting facts, communicating facts, and making decisions.
- Management and supervisors solve problems methodically, instead of by using a seat-of-the-pants style.

7.2 Process Capability: A Discerning Measure of Process Performance

We now introduce an important concept employed in thinking statistically about real-life processes. Process capability is the range over which the "natural variation" of a process occurs, as determined by the system of common or random causes; that is, process capability indicates what the process can deliver under "stable" conditions, when it is said to be under statistical control. The capability of a process is the fraction of output that can routinely be found to be within specifications (specs). A capable process has 99.73% or more of its output within specifications (Figures 7.4 and 7.5).

[Figure 7.4: A Capable Process: natural variation is within the spec range. Figure 7.5: A Process that is not Capable: natural variation exceeds the spec range.]

Process capability refers to how capable a process is of making parts that are within the range of engineering or customer specifications. Figure 7.4 shows the distribution of the dimension of parts for a machining process whose output follows the bell-shaped normal distribution. This process is capable because the distribution of its output lies wholly within the specification range. The process shown in Figure 7.5 is not capable.

Process control, on the other hand, refers to maintaining the performance of a process at its current capability level. Process control involves a range of activities such as sampling the process product, charting its performance, determining the causes of any excessive variation and taking corrective action. As mentioned above, the capability of a process is an expression of the comparison of product specs to the range of natural variability seen in the process. In simple terms, process capability expresses the proportion or fraction of output that a process can routinely deliver within the specifications. A process subjected to a capability study answers two key questions: "Does the process need to be improved?" and "How much does the process need to be improved?" Knowing process capability allows manufacturing and quality managers to predict, quantitatively, how well a process will meet specs, and to specify the equipment requirements and the level of control necessary to maintain the firm's capability.

For example, if design specs require a length of metal tubing to be cut within one-tenth of an inch, a process consisting of a worker with a ruler and hacksaw will probably result in a large percentage of nonconforming product; the process, due to its high inherent or natural variability, is not capable of meeting the design specs. Management would face here three possible choices: (1) measure each piece and re-cut nonconforming tubing, (2) develop a better process by investing in new technology, or (3) change the specifications. Such decisions are usually based on economics. Remember that under routine production, the cost to produce one unit of the product (i.e., its unit cost) is the same whether the product ultimately ends up falling within or outside specs. Rather, the firm may be forced to raise the price of the within-spec products (those that are acceptable to customers) and thus weaken its competitive position. "Scrap and/or rework out-of-spec or defective parts" is therefore a poor business strategy, since labour and materials have already been invested in the unacceptable product. Additionally, inspection errors will probably allow some nonconforming products to leave the production facility if the firm aims at making parts that just meet the specs. On the other hand, new technology might require substantial investment the firm cannot afford. Changes in design, meanwhile, may sacrifice fitness-for-use requirements and result in a lower quality product. These factors demonstrate the need to consider process capability during product design and in the acceptance of new contracts. Many firms now require process capability data from their vendors. Both ISO 9000 and QS 9000 quality management systems require a firm to determine its process capability.

Process capability has three important components: (1) the design specifications, (2) the centering of the natural variation, and (3) the range, or spread, of the variation.
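As a rough illustration of these ideas, the sketch below estimates what fraction of output the tubing process above would deliver within specs, assuming (purely for illustration) a normally distributed cut length; the mean, sigma and spec limits are invented numbers. The Cp and Cpk indices computed at the end follow their standard definitions, which this unit draws on later.

```python
# A minimal sketch, assuming a normally distributed process; the mean,
# sigma and spec limits below are illustrative, not taken from the text.
from scipy.stats import norm

mean, sigma = 10.02, 0.08   # hypothetical cut-length centre and spread (inches)
lsl, usl = 9.9, 10.1        # specs: 10.0 +/- 0.1 inch

# Capability as defined above: the fraction of routine output within specs.
within = norm.cdf(usl, mean, sigma) - norm.cdf(lsl, mean, sigma)
print(f"fraction within specs: {within:.3f}")   # ~0.775: far from capable

# The standard capability indices: Cp compares the spec width to 6 sigma;
# Cpk also penalizes an off-centre process.
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
```

Here barely 78 percent of the cut pieces would fall within specs, which is the quantitative version of saying the hacksaw process "is not capable of meeting the design specs."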
Figures 7.6 to 7.9 illustrate four possible outcomes that can arise when natural process variability is compared with product specs. In Figure 7.6 the specifications are wider than the natural variation; one would therefore expect that this process will always produce conforming products as long as it remains in control. It may even be possible to reduce costs by investing in a cheaper technology that permits a larger variation in the process output.

[Figure 7.6: A Capable Process: output is wholly within the spec limits. Figure 7.7: A Process with Natural Variability equal to the Spec Range.]

In Figure 7.7, the natural variation and the specifications are the same. A small percentage of nonconforming products might be produced; thus, the process should be closely monitored. In Figure 7.8, the range of natural variability is larger than the specification; thus, the current process would not always meet specifications even when it is in control. This situation often results from a lack of adequate communication between the design department and manufacturing, a task entrusted to manufacturing engineers. If the process is in control but cannot produce according to the design specifications, the question should be raised whether the specifications have been correctly applied, or whether they may be relaxed without adversely affecting the assembly or subsequent use of the product. If the specifications are realistic and firm, an effort must be made to improve the process to the point where it is capable of producing consistently within specifications. Finally, in Figure 7.9, the capability is the same as in Figure 7.7, but the process average is off-center.

[Figure 7.8: A Process whose Natural Variability exceeds the Spec Range. Figure 7.9: A Process with Adequate Capability but an Off-Center Mean.]

Usually this can be corrected by a simple adjustment of a machine setting or by recalibrating the inspection equipment used to capture the measurements. If no action is taken, however, a substantial portion of the output will fall outside the spec limits even though the process has the inherent capability to meet specifications.

We may define the capability study from another perspective. A capability study is a technique for analyzing the random variability found in a production process. In every manufacturing process there is some variability. This variability may be large or small, but it is always present. It can be divided into two types:

- Variability due to common (random) causes
- Variability due to assignable (special) causes

The first type of variability is said to be inherent in the process and can be expected to occur naturally within it. It is attributed to a multitude of factors which behave like a constant system of chance causes affecting the process. Called common or random causes, such factors include equipment vibration, passing traffic, atmospheric pressure or temperature changes, electrical voltage or humidity fluctuations, changes in the operator's physical or emotional condition, and so on. Such forces determine whether a tossed coin will end up showing a head or a tail when it lands on the floor. Together, however, these "chances" form a unique, stable and describable distribution. The behaviour of a process operating under such conditions is predictable (Figure 7.10).
[Figure 7.10: Common Causes of Variation present, but no Assignable Causes: variability can be predicted.]

Inherent variability may be reduced by changing the environment or the technology but, for a given set of operating conditions, it can never be completely eliminated from a process. Variability due to assignable causes, on the other hand, refers to variation that can be linked to specific or special causes that disturb a process. Examples are tool failure, power supply interruption, process controller malfunction, adding wrong ingredients or wrong quantities, switching a vendor, etc. Assignable causes are fewer in number and are usually identifiable through investigation on the shop floor or an examination of process logs. The effect (i.e., the variation in the process) caused by an assignable factor, however, is usually large and detectable when compared with the inherent variability seen in the process. If the assignable causes are controlled properly, the portion of total process variability associated with them can be reduced and even eliminated. Still, the effect of assignable causes cannot be described by a single distribution (Figure 7.11).
[Figure 7.11: Both Common and Assignable Causes affecting the Process: variability cannot be predicted.]

A capability study measures the inherent variability, or the performance potential, of a process when no assignable causes are present (i.e. when the process is said to be in statistical control). Since inherent variability can be described by a unique distribution, usually a normal distribution, capability can be evaluated by utilizing the properties of this distribution. Recall that capability is the proportion of routine process output that remains within product specs. Even approximate capability calculations done using histograms enable manufacturers to take a preventive approach to defects. This approach is in contrast with the traditional two-step process: production personnel make the product while QC personnel inspect and screen out products that do not meet specifications. Such QC is wasteful and expensive, since it allows plant resources, including time and materials, to be put into products that are not salable. It is also unreliable, since even 100 percent inspection would fail to catch all defective products. SPC aims at correcting undesirable changes in the output of a process. Such changes may affect the centering (or accuracy) of the process, or its variability (spread or precision). These effects are shown graphically in Figure 7.12.

Control Limits are not an Indication of Capability


Those new to SPC often have the misconception that they don't need to calculate capability indices. Some even think that they can compare their control limits to the spec limits. This is not true, because control limits look at the distribution of averages (xbar, p, np, u, etc.), while capability indices look at the distribution of individual measurements (x). The distribution of x for a process will always be more spread out than the distribution of its xbar values (Figure 7.12(b)). Therefore, the control limits are often within the specification limits while the plus-and-minus 3-sigma distribution of individual part dimensions (x) is not. The statistical theory of the "central limit theorem" says that the averages of samples or subgroups (xbar) follow a distribution that is closer to normal.

[Figure 7.12(a): Process Accuracy and Precision: the four combinations of good and poor accuracy (centering) with good and poor precision (spread).]


[Figure 7.12(b): Control Limits versus Spec Limits: the natural variation of individual measurements (x) is wider than the spread of the subgroup averages (xbar) between the lower and upper control limits.]
This is why we can easily construct control charts on process data that are themselves not normally distributed. But averages cannot be used for capability calculations, because capability evaluates the individual parts delivered by a process. After all, parts get shipped to customers, not averages.
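The point is easy to check numerically. The simulation below, a sketch with invented numbers, draws subgroups of five parts and shows that the subgroup averages spread far less than the individual parts, which is why control limits computed from averages sit well inside sensible spec limits.

```python
# A small simulation: averages of n parts spread far less than the
# individual parts. All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, subgroups = 5, 10_000
x = rng.normal(loc=0.75, scale=0.07, size=(subgroups, n))  # individual parts
xbar = x.mean(axis=1)                                      # subgroup averages

print("std of individual parts :", x.std())
print("std of subgroup averages:", xbar.std())
print("theory predicts sigma/sqrt(n) =", 0.07 / np.sqrt(n))
```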
What Capability Studies Do for You

Capability studies are most often used to quickly determine whether a process can meet specs, or how many parts will exceed the specifications. However, there are numerous other practical uses:

- Estimating the percentage of defective parts to be expected
- Evaluating new equipment purchases
- Predicting whether design tolerances can be met
- Assigning equipment to production
- Planning process control checks
- Analyzing the interrelationship of sequential processes
- Making adjustments during manufacture
- Setting specifications
- Costing out contracts


Since a capability study determines the inherent reproducibility of parts created in a process, it can even be applied to many problems outside the domain of manufacturing, such as inspection, administration and engineering. There are instances where capability measurements are valuable even when it is not practical to determine in advance whether the process is in control. Such an analysis is called a performance study. Performance studies can be useful for examining incoming lots of materials or one-time-only production runs. In the case of an incoming lot, a performance study cannot tell us that the process that produced the materials is in control, but it may tell us, from the shape of the distribution, what percentage of the parts are out of specs or, more importantly, whether the distribution was truncated by the vendor sorting out the obvious bad parts.
How to Set Up a Capability Study

Before we set up a capability study, we must select the critical dimension or quality characteristic (it must be a measurable variable) to be examined. This dimension is the one that must meet product specs. In the simplest case, the study dimension is the result of a single, direct product-and-measurement process. In more complicated studies, the critical dimension may be the result of several processing steps or stages. It may become necessary in these cases to perform capability studies on each process stage. Studies on early process stages frequently prove to be more valuable than elaborate capability studies done on later processes, since early processes lay the foundation (i.e., constitute the input) which may affect later operations.

Once the critical dimension is selected, data measurements can be collected. This can be accomplished manually or by using automatic gaging and fixturing linked to a data collection device or computer. When measurements on a critical dimension are made, it is important to ensure that the measuring instrument is as precise as possible, preferably one order of magnitude finer than the specification. Otherwise, the measuring process itself will contribute excess variation to the dimension data as recorded. Using handheld data collectors with automatic gages may help reduce the errors introduced by the process of measurement, data recording, and transcription for post-processing by computer. The ideal situation is to collect as much data as possible over a defined time period. This will yield a reliable capability number, since it is based upon a large sample size.

In the practice of process improvement, determining process capability is Step 5:

Step 1: Gather process data.
Step 2: Plot the data on control charts.
Step 3: Find the control limits.
Step 4: Get the process in control (in other words, identify and eliminate assignable causes).
Step 5: Calculate process capability.
Step 6: If process capability is not sufficient, improve the process (reduce its inherent variation), and go back to Step 1.

Capability Calculations, Condition 1: The Process Must be in Control!

Process capability formulas commonly used by industry require that the process be in control and normally distributed before one takes samples to estimate process capability. All standard capability indices assume that the process is in control and that the individual data follow a normal distribution.

If the process is not in control, capability indices are not valid, even if they appear to indicate that the process is capable. Three different statistical tools are used together to determine whether a process is in control and follows a normal distribution. These are:

- a control chart;
- visual analysis of a histogram; and
- mathematical analysis of the distribution to test whether it is normal.

Note that no single tool can do the job here; all three must be used together. Control charts (discussed in detail later in this Unit) are the most common method for maintaining a process in statistical control. For a process to be in control, all points plotted on the control chart must be inside the control limits, with no apparent patterns (e.g., trends) present. A histogram (described in Section 7.3) allows us to quickly see (a) whether any parts are outside the spec limits and (b) where the distribution sits relative to the specification range. If the process is one that naturally follows a normal distribution, then the histogram should approximate a bell-shaped curve when the process is in control. Note, however, that a process can be in control without its individual values following a normal distribution, if the process is inherently non-normal.
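For the third tool, a formal normality test, most statistics packages will do. The sketch below uses the Shapiro-Wilk test from scipy as one possible choice (an assumption; the text does not prescribe a specific test), applied to stand-in data.

```python
# A minimal sketch of a mathematical normality check using the
# Shapiro-Wilk test; `data` stands in for real individual measurements
# collected while the process was in control.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
data = rng.normal(0.75, 0.07, size=150)   # stand-in for real measurements

stat, p_value = shapiro(data)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: evidence of non-normality; "
          "standard capability indices may not be valid")
else:
    print(f"p = {p_value:.3f}: no evidence against normality")
```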

Capability Calculations, Condition 2: The Distribution Must be Normal!

Many processes naturally follow a bell-shaped curve (a normal distribution), but some do not. Examples of non-normal dimensions are roundness, squareness, flatness and positional tolerances; they have a natural barrier at zero. In these cases, a perfect measurement is zero (for example, no ovality in the roundness measurement); there can never be a value less than zero. The standard capability indices are not valid for such non-normal distributions. Tests for normality, available in SPC textbooks, can assist you in identifying whether or not a process is normal. If a process is not normal, you may have to use special capability measures that apply to non-normal distributions [1].
Statistical Process Control (SPC) Methodology

Control charts, like the other basic tools for quality improvement, are relatively simple to use. In general, control charts have two objectives: (a) to help restore the accuracy of the process, so that the process average stays near the target, and (b) to help minimize variation in the process, to ensure that good precision is maintained in the output (see Figure 7.12 (a) and (b)). Control charts have three basic applications: (1) to establish a state of statistical control, (2) to monitor a process and signal when the process goes out of control, and (3) to determine process capability. The following is a summary of the steps required to develop and use control charts. Steps 1 through 4 focus on establishing a state of statistical control; in Step 5, the charts are used for ongoing monitoring; and finally, in Step 6, the data are used for process capability analysis.

1. Preparation
   a. Choose the variable or attribute to be measured.
   b. Determine the basis, size and frequency of sampling.
   c. Set up the correct control chart.

2. Data Collection
   a. Record the data.
   b. Calculate the relevant statistics.
   c. Plot the statistics on the chart.

3. Determination of trial control limits
   a. Draw the center line (process average) on the chart.
   b. Compute the upper and lower control limits.

4. Analysis and interpretation
   a. Investigate the chart for lack of control.
   b. Eliminate out-of-control points.
   c. Re-compute the control limits if necessary.

5. Use as a problem-solving tool
   a. Continue data collection and plotting.
   b. Identify out-of-control situations and take corrective action.
6. Use the control chart data to determine process capability, if desired.

In Section 7.3 we review the "seven quality improvement tools", simple methods popularized by the Japanese, that can do a great deal to bring a poorly performing process into control and then to improve it further. In Section 7.4 we discuss the SPC methodology in detail and the construction, interpretation and use of the different types of process control charts. Although many different charts will be described, they differ only in the type of measurement for which the chart is used; the same analysis and interpretation methodology applies to each of them.

7.3 The Seven Quality Improvement Tools

In SPC, numbers and information form the basis for decisions and actions. Therefore, a thorough data recording system, manual or otherwise, is an essential enabler for SPC. To allow one to interpret fully and derive maximum use from quality-related data, a set of simple statistical "tools" has evolved over the past fifty years. These tools offer any organization an easy means to collect, present and analyze most of such data. In this section we briefly review these tools. An extended description of them may be found in the Quality Management Standard ISO 9004-4 (1994). In the 1950s Japanese industry began to learn and apply in earnest the statistical methods that the American statisticians Walter Shewhart and W. Edwards Deming had developed in the 1930s and 1940s to help manage quality. Subsequently, progress in continuous quality improvement in Japan led to a significant expansion of the use of many simple statistical tools on the shop floor. Kaoru Ishikawa, head of the Japanese Union of

Scientists and Engineers (JUSE), later formalized the use of these tools in Japanese manufacturing with the introduction of the 7 Quality Control (7 QC) tools. The seven Ishikawa tools reviewed below are now an integral part of quality control on shop floors around the world. Many Indian industries use them routinely. Some of these quality improvement tools were discussed earlier, in the first Part.

7.3.1 Flow Chart

The flowchart lists the order of activities in a project or process and their interdependency. It expresses detailed process knowledge. To express this knowledge, certain standard symbols are used: the oval symbol indicates the beginning or end of the process; boxes indicate action items, while diamonds indicate decision or check points. The flowchart can be used to identify the steps affecting quality and the potential control points. Another effective use of the flowchart is to map the ideal process and the actual process and to identify their differences as targets for improvement. Flowcharting is often the first step in Business Process Reengineering (BPR).

7.3.2 Histogram

The histogram is a bar chart showing the distribution of a variable quantity or characteristic. An example of a "live" histogram would be to line up, by height, a group of students enrolled in a course. Normally, one individual would be the tallest and one the shortest, with a cluster of individuals bunched around the average height. In manufacturing, the histogram can rapidly identify the nature of quality problems in a process by the shape of the distribution as well as by its width.

[Figure: a histogram of a process dimension shown against the lower and upper spec limits and the tolerance range.]
It informally establishes process capability. It can also help compare two or more distributions.

7.3.3 Pareto Chart

The Pareto chart, as shown below, indicates the distribution of effects attributable to various causes or factors, arranged from the most frequent to the least frequent. This tool is named after Vilfredo Pareto, the Italian economist who determined that wealth is not evenly distributed and that a few of the people hold most of the money. The chart is a graphical picture of the relative frequencies of different types of quality problems, with the most frequent problem type gaining clear visibility. Thus the Pareto chart separates the vital few from the trivial many, and it highlights the problems that should be worked on first to get the most improvement. Historically, 80% of problems are caused by 20% of the factors.
[Figure: a Pareto chart of defect causes (operator, calibration, misc., etc.) by relative frequency.]
7.3.4 Cause and Effect Diagram

The cause and effect diagram is also called the fishbone chart, because of its appearance, and the Ishikawa diagram, after the man who popularized its use in Japan. Its most frequent use is to list the causes of some particular quality problem or defect. The lines coming off the core horizontal line are the main causes, while the lines coming off those are subcauses. The cause and effect diagram identifies problem areas where data should be collected and analyzed. It is used to develop reaction plans that help investigate out-of-control points found on control charts. It is also the first step in planning design of experiments (DOE) studies and in applying Taguchi methods to improve product and process designs.
[Figure: a cause and effect (fishbone) diagram with main branches for Man (training), Machine (mixing), Methods and Materials (quality).]
7.3.5 Scatter Diagram

The scatter diagram shows any existing pattern in the relationship between two variables that are thought to be related. For example, is there a relationship between outside temperature and cases of the common cold? As temperatures drop, do cases of the common cold rise in number? The more closely the scatter points hug a diagonal line, the closer the relationship between the variables being studied is to one-to-one. Thus, the scatter diagram may be used to develop informal models that predict the future based on past correlations.

[Figure: a scatter diagram of cases of common cold per 100 persons against outdoor temperature (°F).]

7.3.6 Run Chart

The run chart shows the history and pattern of variation. It is a plot of data points in time sequence, connected by a line. Its primary use is in determining trends over time. The analyst should indicate on the chart whether up is good or down is good. This tool is used at the beginning of the change process to see what the problems are. It is also used at the end (or check) part of the change process to see whether the change made has resulted in a permanent process improvement.
[Figure: a run chart of sample averages plotted over time.]
7.3.7 Control Chart

Whereas a histogram gives a static picture of process variability, a run chart or a control chart illustrates the dynamic performance (i.e., the performance over time) of the process. The control chart in particular is a powerful process quality monitoring device, and it constitutes the core of statistical process control (SPC). It is a line chart marked with control limits at 3 standard deviations (s) above and below the average quality level. These limits are based on the statistical studies of shop data conducted in the 1930s by Dr Walter Shewhart. By comparing certain measures of the process output, such as xbar, R, p, u, c, etc. (see Section 7.4), to their control limits, one can distinguish quality variation that is due to common or random causes from variation that is produced by the occurrence of assignable events (special causes). Failure to distinguish between common causes and special causes of variation can actually increase the variation in the output of a process. This is often due to the mistaken belief that whenever process output is off target, some adjustment must be made. However, knowing when to leave a process alone is an important step in maintaining control over a process.

[Figure 7.13: Frequency Distribution and Histogram of a Typical Process. Table 7.1: Thirty Samples of Quality Measurements.]

Equally important is knowing when to take action to prevent the production of nonconforming product. Using actual industry data, Shewhart demonstrated that a sensible strategy for controlling quality is first to eliminate the special causes with the help of the control chart and then to systematically reduce the common causes. This strategy reduces the variation in process output with a high degree of reliability while it improves the acceptability of the product.

Statistical process control (SPC) is actually a methodology for monitoring a process to (a) identify the special causes of variation and (b) signal the need to take corrective action when appropriate. When special causes are present, the process is deemed to be out of control. If the variation in the process is due to common causes alone, the process is said to be in statistical control. A practical definition of statistical control is that both the process averages and the process variances are constant over time (Figure 7.10). Such a process is stable and predictable. SPC uses control charts as the basic tool to improve both quality and productivity. SPC provides a means by which a firm may demonstrate its quality capability, an activity necessary for survival in today's highly competitive markets. Also, many customers (e.g., the automotive companies) now require evidence that their suppliers use SPC in managing their operations. Note again that SPC requires processes to display measurable variation; even though it is quite effective for companies in the early stages of quality efforts, it becomes ineffective in producing improvements once the quality level approaches six-sigma.

Before we leave this section, we repeat that process capability calculations make little sense if the process is not in statistical control, because the data are then confounded by special (or assignable) causes and thus do not represent the inherent capability of the process. The simple tools described in this section may be good enough to enable you to check this. To see this, consider the data in Table 7.1 (previous page), which shows 150 measurements of a quality characteristic from a manufacturing process with specifications 0.75 ± 0.25. Each row corresponds to a sample of size n = 5 taken every 15 minutes. The average of each sample is also given in the last column. A frequency distribution and histogram of these data is shown in Figure 7.13. The data form a relatively symmetric distribution with a mean of 0.762 and a standard deviation of 0.0738. Using these values, we find that Cpk = 1.075 and form the impression that the process capability is at least marginally acceptable.
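The Cpk figure quoted above can be reproduced in a couple of lines, assuming the standard definition Cpk = min(USL - mean, mean - LSL) / 3s (the text quotes the value without showing the formula):

```python
# Reproducing the capability arithmetic of the worked example: specs are
# 0.75 +/- 0.25, and the 150 measurements gave mean 0.762 and standard
# deviation 0.0738 (values taken from the text above).
mean, s = 0.762, 0.0738
lsl, usl = 0.75 - 0.25, 0.75 + 0.25

cpk = min(usl - mean, mean - lsl) / (3 * s)
print(f"Cpk = {cpk:.3f}")   # prints Cpk = 1.075, matching the text
```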

7.4 Control Charts for Variables Data

As we mentioned in the previous section, the control chart is a powerful process quality monitoring device, and it constitutes the core of statistical process control (SPC). In the SPC methodology, knowing when to leave a process alone is in itself an important step in maintaining control over a process. Control charts enable us to do that. Equally important is knowing when to take action to prevent the production of nonconforming product. Indeed, failure to distinguish between variation produced by common causes and that produced by special causes can actually increase the variation in the output of a process. Again, control charts empower us here. When a plotted point crosses any of the control limits, special causes are indicated to be present; the process is then deemed to be out of control and is investigated to remove the source of the disturbance. Otherwise, when the variation stays within the control limits, it is indicated to be due to common causes only; the process is then said to be in "statistical control" and should be left alone. Statistical control is defined as the state in which both the process averages and variances are constant over time, so that the process output is stable and predictable (Figure 7.10). Control charts help us bring a process within such control.

Most processes that deliver a "product" or a "service" may be monitored by measuring their output over time and then plotting these measurements appropriately. However, processes differ in the nature of those outputs. Variables data are those output characteristics that are measurable along a continuous scale. Examples of variables data are length, weight and viscosity. By contrast, some output may only be judged to be good or bad, "acceptable" or "unacceptable", such as the print quality of a photocopier or the defective knots produced per meter by a weaving machine. In such cases we categorize the output as an attribute that is either acceptable or unacceptable; we cannot put it on a continuous scale as we do with weight or viscosity. The SPC methodology provides a variety of different types of control charts to work with such diversity. For variables data, the control charts most commonly used are the "xbar" chart and the "R-chart" (range chart). The xbar chart is used to monitor the centering of the process, to help control its accuracy (Figure 7.9). The R-chart monitors the dispersion, or precision, of the process. The range R, rather than the standard deviation, is used as the measure of variation simply to enable workers on the factory floor to perform control chart calculations by hand, as done for example in the turbine blade machining shop at BHEL, Hardwar. For large samples, and when the data can be processed by a computer, the standard deviation is a better measure of variability.

7.4.1 Constructing xbar and R-Charts and Establishing Statistical Control

The first step in developing xbar and R-charts is to gather data. Usually, about 25 to 30 samples are collected. Sample sizes between 3 and 10 are generally used, with 5 being the most common. The number of samples is denoted by k, and n denotes the sample size. For each sample i, the mean (denoted xbar_i) and the range (R_i) are computed. These values are then plotted on their respective control charts. Next, the overall mean (x-doublebar) and the average range (Rbar) are calculated. These values specify the center lines for the xbar and R-charts, respectively. The mean of a sample is the average of its n measurements:

xbar = (x1 + x2 + ... + xn) / n

The overall mean is the average of the k sample means xbar_i:

x-doublebar = (xbar_1 + xbar_2 + ... + xbar_k) / k

The average range is similarly computed:

Rbar = (R_1 + R_2 + ... + R_k) / k

The average range and the overall mean are used to compute control limits for the R- and xbar charts. Control limits are easily calculated using the following formulas:

UCL_R = D4 Rbar
LCL_R = D3 Rbar
UCL_xbar = x-doublebar + A2 Rbar
LCL_xbar = x-doublebar - A2 Rbar

where the constants D3, D4 and A2 depend on the sample size n and may be found in Table 9.1. Control limits represent the range within which all points are expected to fall if the process is in statistical control, i.e., operating only under the influence of random or common causes. If any points fall outside the control limits, or if any unusual patterns are observed, then some special (called assignable) cause has probably affected the process. Note, however, that if assignable causes are affecting the process, then the process data are not representative of the true state of statistical control, and hence the calculations of the center line and control limits will be biased. To be effective, SPC requires the center line and control limit calculations to be unbiased. Therefore, before control charts are set up for routine use by the factory, any out-of-control data points should be eliminated from the data table and new values for x-doublebar, Rbar and the control limits re-computed, as illustrated below. In order to determine whether a process is in statistical control, the R-chart is always analyzed first. Since the control limits in the xbar chart depend on the average range, special causes in the R-chart may produce unusual patterns in the xbar chart, even when the centering of the process is in control. (An example of this is given later in this unit.) Once statistical control is established for the R-chart, attention may turn to the xbar chart.
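A minimal sketch of these calculations for subgroups of size n = 5 follows; the measurement data are simulated stand-ins, and the factors A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard tabulated values for n = 5.

```python
# A minimal sketch of the xbar and R control-limit calculations.
# `samples` stands in for the 25-30 subgroups gathered in practice.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(0.75, 0.07, size=(25, 5))   # 25 subgroups of n = 5

xbars = samples.mean(axis=1)                     # xbar_i for each subgroup
ranges = samples.max(axis=1) - samples.min(axis=1)   # R_i for each subgroup
x_dbar, r_bar = xbars.mean(), ranges.mean()      # center lines

A2, D3, D4 = 0.577, 0.0, 2.114                   # standard factors for n = 5
print("R-chart limits   :", D3 * r_bar, "to", D4 * r_bar)
print("xbar-chart limits:", x_dbar - A2 * r_bar, "to", x_dbar + A2 * r_bar)
```

In routine use one would plot each R_i against the R-chart limits first, then each xbar_i against the xbar-chart limits, exactly in the order the text prescribes.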

Figure 7.14: A Control Chart Data Sheet

Figure 7.14 shows a typical data sheet used for recording. This form provides space (under "Notes") for descriptive information about the process and for recording the sample observations and computed statistics. Subsequently, the control charts are drawn.

7.4.2 Interpreting Abnormal Patterns in Control Charts

When a process is in statistical control, the points on a control chart fluctuate randomly between the control limits with no recognizable, non-random pattern. The following checklist provides a set of general rules for examining a process to determine whether it is in control:

1. No points are outside the control limits.
2. The number of points above and below the center line is about the same.
3. The points seem to fall randomly above and below the center line.
4. Most points, but not all, are near the center line, and only a few are close to the control limits.

The underlying assumption behind these rules is that the distribution of sample means is normal. This assumption follows from the central limit theorem of statistics, which states that the distribution of sample means approaches a normal distribution as the sample size increases, regardless of the original distribution. Of course, for small sample sizes the distribution of the original data must be reasonably normal for this assumption to hold. The upper and lower control limits are computed to be three standard deviations from the overall mean. Thus, the probability that any sample mean falls outside the control limits is very small. This probability is the origin of rule 1. Since the normal distribution is symmetric, about the same number of points fall above as below the center line. Also, since the mean of the normal distribution is the median, about half the points fall on either side of the center line. Finally, about 68 percent of a normal distribution falls within one standard deviation of the mean; thus, most but not all points should be close to the center line. These characteristics hold provided that the mean and variance of the original data have not changed during the time the data were collected; that is, the process is stable. Several types of unusual patterns arise in control charts; they are reviewed here along with an indication of their typical causes.
One Point Outside Control Limits

A single point outside the control limits (see Figure 7.15) is usually produced by a special cause. Often, the R-chart provides a similar indication. Once in a while, however, such points are a normal part of the process and occur simply by chance. A common reason for a point falling outside a control limit is an error in the calculation of xbar or R for the sample. You should always check your calculations whenever this occurs. Other possible causes are a sudden power surge, a broken tool, measurement error, or an incomplete or omitted operation in the process.

[Figure 7.15: Single Point Outside Control Limits.]

Sudden Shift in the Process Average

An unusual number of consecutive points falling on one side of the center line (see Figure 7.16) is usually an indication that the process average has suddenly shifted. Typically, this occurrence is the result of an external influence that has affected the process, which would be considered a special cause. In both the xbar and R-charts, possible causes might be a new operator, a new inspector, a new machine setting, or a change in the setup or method.

[Figure 7.16: Shift in Process Average.]

If the shift is up in the R-chart, the process has become less uniform. Typical causes are carelessness of operators, poor or inadequate maintenance, or possibly a fixture in need of repair. If the shift is down in the R-chart, the uniformity of the process has improved. This might be the result of improved workmanship or better machines or materials. As mentioned, every effort should be made to determine the reason for the improvement and to maintain it.

Three rules of thumb are used for early detection of process shifts. A simple rule is that if eight consecutive points fall on one side of the center line, one may conclude that the mean has shifted. For the other two, divide the region between the center line and each control limit into three equal parts. Then if (a) two of three consecutive points fall in the outer one-third region between the center line and one of the control limits, or (b) four of five consecutive points fall within the outer two-thirds region, one would also conclude that the process has gone out of control.
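The first rule of thumb is easy to automate. The sketch below, with assumed inputs, scans a sequence of subgroup averages and flags a run of eight consecutive points on one side of the center line:

```python
# A minimal sketch of the eight-in-a-row shift rule. `xbars` and
# `center` are assumed inputs (subgroup averages and the center line).
def shift_detected(xbars, center, run_length=8):
    run = 0
    prev_side = 0
    for x in xbars:
        side = 1 if x > center else (-1 if x < center else 0)
        # Extend the run only while consecutive points stay on one side.
        run = run + 1 if side == prev_side and side != 0 else 1
        prev_side = side
        if run >= run_length:
            return True
    return False

# Example: a process that drifts above a center line of 0.75.
print(shift_detected([0.76, 0.77, 0.76, 0.78, 0.76, 0.77, 0.76, 0.78], 0.75))
```

The two zone-based rules can be coded the same way by counting how many of the last three (or five) points fall beyond the relevant one-third boundary.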

Cycles

Cycles are short, repeated patterns in the chart, alternating between high peaks and low valleys (see Figure 7.17). These patterns are the result of causes that come and go on a regular basis. In the xbar chart, cycles may be the result of operator rotation or fatigue at the end of a shift, different gauges used by different inspectors, seasonal effects such as temperature or humidity, or differences between day and night shifts. In the R-chart, cycles can occur from maintenance schedules, rotation of fixtures or gages, differences between shifts, or operator fatigue.

[Figure 7.17: Cycles.]

Trends

A trend is the result of some cause that gradually affects the quality characteristics of the product and causes the points on a control chart to move gradually up or down from the center line. As a new group of operators gains experience on the job, for example, or as maintenance of equipment improves over time, a trend may occur. In the xbar chart, trends may be the result of improving operator skills, dirt or chip buildup in fixtures, tool wear, changes in temperature or humidity, or aging of equipment. In the R-chart, an increasing trend may be due to a gradual decline in material quality, operator fatigue, gradual loosening of a fixture or a tool, or dulling of a tool. A decreasing trend often is the result of improved operator skill or work methods, better purchased materials, or improved or more frequent maintenance.

Hugging the Center Line
Hugging the center line occurs when nearly all the points fall close to the center line (see Figure 7.18). On the control chart it appears that the control limits are too wide. A common cause of hugging the center line is that the sample includes one item systematically taken from each of several machines, spindles, operators, and so on. A simple example will illustrate this pattern. Suppose that one machine produces parts whose diameters average 7.508 with a variation of only a few thousandths, and a second machine produces parts whose diameters average 7.502, again with only a small variation. Taken together, parts from both machines would yield a range of variation that would probably be between 7.500 and 7.510, and average about 7.505, since one machine will always be high and the other always low. Even though a large variation occurs in the parts taken as a whole, the sample averages will not reflect this variation. In such a case, a control chart should be constructed for each machine, spindle, operator, and so on. An often overlooked cause for this pattern is miscalculation of the control limits, perhaps by using the wrong factor from the table or by misplacing the decimal point in the computations.

[Figure 7.18: Hugging the Centre Line.]

Hugging the Control Limits
This pattern shows up when many points are near the control limits with very few in between. It is often called a mixture and is actually a combination of two different patterns on the same chart. A mixture can be split into two separate patterns.

A mixture pattern can result when different lots of material are used in one process, or when parts are produced by different machines but fed into a common inspection group.

Instability

Instability is characterized by unnatural and erratic fluctuations on both sides of the chart over a period of time (see Figure 7.19). Points will often lie outside both the upper and lower control limits without a consistent pattern. Assignable causes may be more difficult to identify in this case than when specific patterns are present. A frequent cause of instability is over-adjustment of a machine, or the same causes that produce hugging of the control limits.

As suggested earlier, the R-chart should be analyzed before the xbar chart, because some out-of-control conditions in the R-chart may cause out-of-control conditions in the xbar chart. Also, as the variability in the process decreases, all the sample observations will lie closer to the true population mean, and therefore their average, xbar, will not vary much from sample to sample. If this reduction in the variation can be identified and controlled, new control limits should be computed for both charts.

7.4.3 Routine Process Monitoring and Control

After a process is determined to be in control, the charts should be used on a daily basis to monitor production, identify any special causes that might arise, and make corrections as necessary. More important, the chart tells us when to leave the process alone! Unnecessary adjustments to a process result in nonproductive labour, reduced production, and increased variability of output. It is more productive if the operators themselves take the samples and chart the data, because they can then react quickly to changes in the process and immediately make adjustments. To do this effectively, training of the operators is essential. Many companies conduct in-house training programmes to teach operators and supervisors the elementary methods of statistical quality control. Not only does this training provide the mathematical and technical skills that are required, but it also gives the shop floor personnel increased quality consciousness. Introduction of control charts on the shop floor typically leads to improvements in conformance, particularly when the process is labour intensive. Apparently, management involvement in operators' work produces positive behavioural modifications (as first demonstrated in the famous Hawthorne studies). Under such circumstances, and as good practice, management and operators should revise the control limits periodically and determine a new process capability as improvements take place.


Another important point must be noted. Control charts are designed to be used by production operators rather than by inspectors or QC personnel. Under the philosophy of statistical process control, the burden of quality rests with the operators themselves. The use of control charts allows operators to react quickly to special causes of variation. The range is used in place of the standard deviation for the very reason that it allows shop-floor personnel to easily make the necessary computations to plot points on a control chart. The experience even in Indian factories such as the turbine blade machining shop in BHEL Hardwar strongly supports this assertion. The right approach taken by management in ingraining the correct outlook among the workers appears to hold the key here.

7.4.4

Estimating Plant Process Capability

After a process has been brought to a state of statistical control by eliminating special causes of variation, the data may be used to obtain a rough estimate of process capability. This approach uses the average range Rbar rather than the standard deviation estimated from the original data. Nevertheless, it is a quick and useful method, provided that the distribution of the original data is reasonably normal. Under the normality assumption, the standard deviation (sx) of the original data {x} can be estimated as

sx = Rbar / d2

where d2 is a constant that depends on the sample size and is also given in Table 7.1. Process capability may then be determined by comparing the spec range (tolerance) to 6sx. The natural variation of individual measurements is given by x-doublebar ± 3sx.
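A minimal sketch of this estimate in Python may help fix the arithmetic. The d2 value is the standard constant for subgroups of size n = 5 (Table 7.1); Rbar and the specification limits are assumed numbers, not data from the text:

```python
# Rough process-capability estimate from the average range, assuming
# approximately normal data.
d2 = 2.326                 # Table 7.1 constant for subgroup size n = 5
Rbar = 0.40                # average range from an in-control R-chart (assumed)
USL, LSL = 51.0, 49.0      # hypothetical specification limits

sigma_hat = Rbar / d2                 # estimated process standard deviation
natural_spread = 6 * sigma_hat        # natural variation of individual values
Cp = (USL - LSL) / natural_spread     # simple capability index

print(f"sigma_hat = {sigma_hat:.4f}")
print(f"6-sigma spread = {natural_spread:.4f}, Cp = {Cp:.2f}")
```

A Cp value above 1 means the natural 6-sigma spread fits within the specification range.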

When a process is in control, the xbar and R charts may be used to make decisions about the state of the process during its operation. We use three zones on the charts to help in the routine management of the process. The zones are marked on the charts as follows:
- Zone 1 falls between the upper and lower warning lines (the ± 2-sigma lines). If the plotted points fall in this zone, the process has remained stable and actions or adjustments are unnecessary; indeed, any adjustment here may increase the amount of variability.
- Zone 2 falls between the warning lines and the control limits. Any point found in Zone 2 suggests that there may have been an assignable change, and another sample must be taken immediately to check that out.
- Zone 3 falls beyond the upper or lower control limit. Any point falling in this zone indicates that the process should be investigated and that, if action is taken, the latest estimates of x-doublebar and Rbar should be used to revise the control limits.

[Figure 7.20: Actions for a Second Sample Following a Warning Signal in Zone 2]
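The three-zone rule is easy to mechanize. The sketch below, with made-up values for the grand mean and average range and the A2 constant for subgroups of four, classifies a plotted sample mean; the warning lines sit at two-thirds of the distance to the 3-sigma control limits:

```python
A2 = 0.729                    # Table 7.1 constant for n = 4
xdbar, Rbar = 50.0, 1.2       # hypothetical grand mean and average range

UCL, LCL = xdbar + A2 * Rbar, xdbar - A2 * Rbar                  # control limits
UWL, LWL = xdbar + (2/3) * A2 * Rbar, xdbar - (2/3) * A2 * Rbar  # warning lines

def zone(xbar: float) -> str:
    """Classify a plotted sample mean into one of the three zones."""
    if LWL <= xbar <= UWL:
        return "Zone 1: stable - leave the process alone"
    if LCL <= xbar <= UCL:
        return "Zone 2: warning - take another sample immediately"
    return "Zone 3: out of control - investigate the process"

for x in (50.2, 50.7, 51.1):
    print(f"xbar = {x}: {zone(x)}")
```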

7.4.6

Modified Control Limits

Modified control limits often are used when process capability is very good. For example, suppose that the process capability of a factory is only 60 percent of the tolerance (Cp = 1.67) and that the process mean can be controlled by a simple machine adjustment. Management may quickly discover the impracticality of investigating every isolated point that falls outside the usual control limits, because the output is probably still well within specifications. In such cases, the usual control limits may be replaced with the following modified control limits:

URLx = USL - Am Rbar
LRLx = LSL + Am Rbar

where URLx is the upper reject level, LRLx is the lower reject level, and USL and LSL are the upper and lower specification limits respectively. The Am values are determined from statistical principles and are shown in Table 7.2. The modified control limits allow for more variation than the ordinary control limits and still provide high confidence that the product produced will remain within specification. It is important to note that modified limits apply only if process capability is at least 60 to 75 percent of the tolerance. However, if the mean must be controlled closely, a conventional xbar-chart should be used even if the process capability is good. Also, if the process standard deviation (sx) is likely to shift, do not modify the control limits.

Table 7.2: Factors for Control Limits and Standard Deviation



Example 3 (Computing Modified Control Limits for the Silicon Wafer Case): Shown below are the calculations for the silicon wafer thickness example considered in this section. Since the sample size is 3, Am = 0.749. Therefore, the modified limits follow from the formulas above.

Observe that if the process is centered on the nominal, the modified control limits are "looser" (wider) than the ordinary control limits. For this example, before the modified control limits are implemented, the centering of the process would first have to be corrected from its current (estimated) value of 47.0 to the specification midpoint of 50.0.
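A sketch of the computation in Python; since the worked numbers for the wafer case sit in the supplement, the spec limits and average range below are assumed values around the stated midpoint of 50.0:

```python
Am = 0.749                 # Table 7.2 constant for samples of size 3
USL, LSL = 55.0, 45.0      # hypothetical specification limits (midpoint 50.0)
Rbar = 1.0                 # hypothetical average range

URL = USL - Am * Rbar      # upper reject level
LRL = LSL + Am * Rbar      # lower reject level
print(f"Modified limits: URL = {URL:.3f}, LRL = {LRL:.3f}")
```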

7.5

Special Control Charts for Variables Data

Several alternatives to the popular xbar and R-chart for process control of variables measurements are available. This section discusses some of them.

7.5.1

xbar and s-Charts

An alternative to using the R-chart along with the xbar chart is to compute and plot the standard deviation s of each sample instead of the range R. The range has traditionally been used: it involves less computational effort and is easier for shop-floor personnel to understand. Using s nevertheless has its advantages: the sample standard deviation is a more sensitive and better indicator of process variability, especially for larger sample sizes. Thus, when tight control of variability is required, s should be used. With the use of modern calculators and personal computers, the computational burden of computing s is reduced or eliminated, and s has thus become a viable alternative to R. The sample standard deviation is computed as

s = sqrt[ (sum of (xi - xbar)^2 over the n observations) / (n - 1) ]

To construct an s-chart, compute the standard deviation for each sample. Next, compute the average standard deviation sbar by averaging the sample standard deviations over all samples. (Notice that this computation is analogous to computing Rbar.) Control limits for the s-chart are given by

UCLs = B4 sbar
LCLs = B3 sbar

where B3 and B4 are constants found in Table 7.1. For the associated xbar chart, the control limits derived from sbar are

UCLxbar = x-doublebar + A3 sbar
LCLxbar = x-doublebar - A3 sbar

where A3 is a constant that is a function of the sample size (n) and may also be found in Table 7.1. Observe that these formulas are equivalent to those for the xbar- and R-charts except that the constants differ. Constructing xbar and s-charts is illustrated by an example (please refer to Annexure 4 of the supplement).
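A short Python sketch of the construction, using illustrative subgroup data and the standard constants for subgroups of size 5:

```python
import statistics

B3, B4, A3 = 0.0, 2.089, 1.427   # Table 7.1 constants for n = 5

samples = [                       # made-up subgroups of five measurements
    [50.1, 49.8, 50.3, 49.9, 50.0],
    [49.7, 50.2, 50.1, 49.6, 50.4],
    [50.0, 50.5, 49.9, 50.2, 49.8],
]

s_values = [statistics.stdev(s) for s in samples]  # per-sample std deviations
xbars = [statistics.fmean(s) for s in samples]     # per-sample means

sbar = statistics.fmean(s_values)   # centre line of the s-chart
xdbar = statistics.fmean(xbars)     # centre line of the xbar-chart

print(f"s-chart:    UCL = {B4 * sbar:.4f}, LCL = {B3 * sbar:.4f}")
print(f"xbar-chart: UCL = {xdbar + A3 * sbar:.4f}, "
      f"LCL = {xdbar - A3 * sbar:.4f}")
```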

7.5.2

Charts for Individuals

With the development of automated inspection for many processes, manufacturers can now easily inspect and measure quality characteristics on every item produced. Hence the sample size for process control is n = 1, and a control chart for individual measurements, also called an x-chart, can be used. Other examples in which x-charts are useful include accounting data such as shipments, orders, absences and accidents; production records of temperature, humidity, voltage or pressure; and the results of physical or chemical analyses.

With individual measurements, the process standard deviation can be estimated and three-sigma control limits used. As shown earlier, Rbar/d2 provides an estimate of the process standard deviation. Thus, an x-chart for individual measurements would have three-sigma control limits defined by

UCLx = x-average + 3 Rbar/d2
LCLx = x-average - 3 Rbar/d2

Samples of size 1, however, do not furnish enough information for measuring process variability. Process variability can instead be determined by using a moving range, i.e., the range of n successive observations. For example, a moving range for n = 2 is computed by finding the absolute difference between each pair of successive observations. The number of observations used in the moving range determines the constant d2; for n = 2, Table 7.1 gives d2 = 1.128. In a similar fashion, larger values of n can be used to compute moving ranges. The moving range chart has control limits defined by

UCLR = D4 Rbar
LCLR = D3 Rbar

which is comparable to the ordinary range chart.

Constructing an x-chart with moving ranges is illustrated by an example (please see Annexure 5 of the supplement). Control charts for individuals have the advantage that specifications can be drawn on the chart and compared directly with the control limits. Some disadvantages also exist:
- Individuals charts are less sensitive to many of the conditions that can be detected by xbar and R-charts; for example, the process must shift by a large amount before the change in the mean is detected.
- Short cycles and trends may appear on an individuals chart and not on an xbar or R-chart.
- The assumption of normality of the observations is more critical than for xbar and R-charts; when the normality assumption does not hold, a greater chance for error is present.
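The computations for an individuals (x) chart and its two-period moving-range chart can be sketched as follows; the observations are the first few values from self-assessment question 34, and d2 = 1.128, D3 = 0, D4 = 3.267 are the Table 7.1 constants for n = 2:

```python
d2, D3, D4 = 1.128, 0.0, 3.267

x = [9.0, 9.5, 8.4, 11.5, 10.3, 12.1, 11.4, 10.0, 11.0]
mr = [abs(b - a) for a, b in zip(x, x[1:])]   # two-period moving ranges

xbar = sum(x) / len(x)          # centre line of the x-chart
mrbar = sum(mr) / len(mr)       # centre line of the moving-range chart

print(f"x-chart:  UCL = {xbar + 3 * mrbar / d2:.2f}, "
      f"LCL = {xbar - 3 * mrbar / d2:.2f}")
print(f"MR-chart: UCL = {D4 * mrbar:.2f}, LCL = {D3 * mrbar:.2f}")
```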

7.6

Control Charts for Attributes

Attributes quality data assume only two values: good or bad, pass or fail. Attributes usually cannot be measured, but they can be observed and counted, and they are useful in quality management in many practical situations. For instance, in printing packages for consumer products, colour quality can be rated as acceptable or not acceptable, and a sheet of cardboard either is damaged or is not. Usually, attributes data are easy to collect, often by visual inspection. Many accounting records, such as percent scrapped, are also usually readily available. However, one drawback in using attributes data is that large samples are necessary to obtain valid statistical results. The topics dealing with the fraction nonconforming (p) chart, p-charts with variable sample size, np-charts for the number nonconforming, and charts for defects are discussed in Annexure 6 of the supplement.

7.7

Choosing the Correct SPC Chart

Confusion often exists over which chart is appropriate for a specific application, since the c- and u-charts apply to situations in which the quality characteristics inspected do not necessarily come from discrete units. The key issue to consider is whether the sampling unit is constant. For example, suppose that an electronics manufacturer produces circuit boards. The boards may contain various defects, such as faulty components and missing connections. Because the sampling unit, the circuit board, is constant (assuming that all boards are the same), a c-chart is appropriate. If the process produces boards of varying sizes with different numbers of components and connections, then a u-chart would apply. As another example, consider a telemarketing firm that wants to track the number of calls needed to make one sale. In this case the firm has no physical sampling unit. However, an analogy can be made with the circuit boards: the sale corresponds to the circuit board, and the number of calls to the number of defects. In both examples, the number of occurrences in relation to a constant entity is being measured.
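The selection logic above can be summarized in a few lines of code; the category names used here are informal labels, not a standard taxonomy:

```python
def choose_chart(data_type: str, constant_unit: bool = True) -> str:
    """Suggest an SPC chart for the given kind of quality data."""
    if data_type == "variables":       # measured values
        return "xbar and R (or s) charts; x and moving-range charts if n = 1"
    if data_type == "defectives":      # each unit classified good/bad
        return "p-chart (np-chart if the sample size is constant)"
    if data_type == "defects":         # counts of defects
        return "c-chart" if constant_unit else "u-chart"
    raise ValueError(f"unknown data type: {data_type!r}")

print(choose_chart("defects", constant_unit=False))   # -> u-chart
```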

7.8

Implementing SPC

The original methods of SQC have been available for over 75 years now; Walter Shewhart first proposed the control chart in 1924. However, studies show that managers still do not understand variation. Where do you find the motivation for using SQC? Studies in industry show that, when used properly, SQC or SPC reduces the cost of quality: the cost of excessive inspection, scrap, returns from customers and warranty service. Good quality, as the Japanese have demonstrated, edges out competition, builds repute, brings in new customers, raises morale and expands business. A good quality culture also propagates: companies using SPC frequently require their suppliers to use it also. This generates considerable benefit. Where there is low use of SPC, the major reason often is lack of knowledge of variation and of the importance of understanding it in order to improve customer satisfaction. Successful firms, on the other hand, repeatedly show certain characteristics:
- Their top management understands variation and the importance of SPC methods in managing it successfully. They do not delegate this task to the QC department.
- All people involved in the use of the technique understand what they are being asked to do and why it would help them.
- Training, followed by clear and written instructions on agreed procedures, is systematically introduced and followed up through audits.

The above principles form the core of the general principles of good quality management, be it through ISO 9000, QS 9000 or TQM. The bottom line is that you must find your own motivation to create and deliver quality. If you do it as a fad, you will neither get the results nor maintain credibility with your own people about the initiatives that you take in the form of management interventions. So, to summarize:
- Studies show that to succeed with SPC you must understand variation, however boring that may appear.
- When SPC is used properly, quality costs go down.
- Low usage of SPC is associated with lack of knowledge and training, even among senior management.
- SPC needs to be systematically introduced. A step-wise introduction of SPC would include:
  - review of management systems,
  - review of requirements and design specs,
  - emphasis on the need for process understanding and control,
  - planning for education and training,
  - tackling one problem at a time based on customer complaints/feedback,
  - recording of detailed data,
  - measuring process capability, and
  - making routine use of data to manage the process.

The first prudent step toward implementing SPC in an organization would be "to put the house in order," which may be done by getting the firm registered for ISO 9000.

7.9

Acceptance Sampling


Sampling inspection is a screening mechanism used to separate acceptable products from products of poor quality; it does not actually improve the quality of any product. The most obvious way to tell whether a product item is acceptable or not is to inspect it or to use it. If this can be done to every item, that is, if a 100% inspection can be performed, there would be no need to use acceptance sampling. However, in many cases it is neither economical nor possible to do a 100% inspection. For instance, if the cost of inspecting an item is higher than the value of the item, which usually is true for low-cost mass-produced products such as injection-molded parts or flashlight bulbs, a 100% inspection is not justified; if the equipment cost and labour cost to inspect an item are high, only a small fraction of the product can be inspected. If the inspection is a destructive test (for example, a life test for electronic components or a car crash test), obviously a 100% inspection will destroy all the products. When inspection is necessary and 100% inspection is not possible, acceptance sampling can be employed.

A sampling plan is a method for guiding the acceptance sampling process. It specifies the procedure for drawing samples to inspect from a batch of products, and then the rule for deciding whether to accept or reject the whole batch based on the results of this inspection. The sample is a small number of items taken from the batch rather than the whole batch. The action of rejecting the batch means not accepting it for consumption; this may include downgrading the batch, selling it at a lower price, or returning it to its supplier or vendor. Suppose that a sampling plan specifies that (a) n items are drawn randomly from a batch to form a sample, and (b) the batch is rejected if and only if more than c of these n items are found defective or non-conforming. An operating characteristic curve, or OC-curve, of a sampling plan is defined as the plot of the probability Pa(p) that the batch will be accepted against the fraction p of defective products in the batch. The larger Pa(p) is, the more likely it is that the batch is accepted; a higher likelihood of acceptance benefits the producer. On the other hand, the smaller Pa(p) is, the harder it will be for the batch to be accepted. This benefits and even protects the consumer, who wants some assurance against receiving bad products and would prefer accepting batches with a low p (fraction defective) value.

In order to specify a sampling plan with the Pa(p) characteristics described above, the numbers n and c must be correctly chosen. This specification requires us first to fix two batch quality (fraction defective) levels, namely the AQL (acceptable quality level) and the RQL (rejection quality level). AQL and RQL (explained below) are two key quality parameters frequently used in designing a sampling plan. An ideal sampling plan would be one that accepts the batch with 100% probability if the fraction of defective items in it is less than or equal to AQL, and that rejects the batch with 100% probability if the fraction of defective items in the batch is larger than AQL. Most customers realize that perfect quality (e.g., a batch or lot of parts containing no defective parts at all) is perhaps impossible to expect; some defectives will always be there. Therefore, the customer decides to tolerate a small fraction of defectives in his/her purchases. However, the customer certainly wants a high level of assurance that the sampling plan used to screen the incoming lots will reject lots with fraction defective levels exceeding some decidedly poor quality threshold, called the rejection quality level or RQL.

In reality, RQL is a defect level that causes a great deal of heartache once such a lot enters the customer's factory. The supplier of those parts, on the other hand, wants to ensure that the customer's acceptance sampling plan will not reject too many lots with defect levels that are certainly within the customer's tolerance, i.e., acceptable to the customer on the average. Generally, the customer sets a quality threshold here also, called AQL. This is the worst lot fraction defective that is acceptable to the customer, on an average basis, in the shipments he/she receives.

7.9.2

The Operating Characteristic (OC) Curve of a Sampling Plan

A bit of thinking will indicate that only error-free 100% inspection of all items in a lot would accept the batch with 100% probability if the fraction of defective items in it is less than or equal to AQL, and reject the lot with 100% probability if the fraction of defective items in it is larger than AQL. Such performance cannot be realized otherwise. The OC-curve of such an ideal plan is shown in Figure 7.22.

[Figure 7.22: The Ideal OC Curve]

In order to correctly evaluate the OC-curve of an arbitrary (not 100%-inspection) sampling plan with parameters (n, c), we need to harness certain principles of probability theory, as follows. Suppose that the batch is large or the production process is continuous, so that drawing a sample item with or without replacement has about the same result. In such cases we are permitted to assume that the number of defective items x in a sample of sufficiently large size n follows a binomial probability distribution. Pa(p) is then given by the expression

Pa(p) = sum from x = 0 to c of C(n, x) p^x (1 - p)^(n - x)

where C(n, x) is the number of combinations of n items taken x at a time.

Now, if the producer is willing to sacrifice a little, so that the batch with fraction defective AQL is accepted with a probability of at least (1 - a), where a is a small positive number, and the consumer is willing to sacrifice a little, so that the batch with fraction defective RQL is accepted with a probability of at most b, where b is a small positive number and RQL > AQL, then the following two inequalities can be established:

Pa(AQL) >= 1 - a
Pa(RQL) <= b

From these two inequalities the numbers n and c can be solved for (though the solution may not be unique). The OC-curve of such a sampling plan is shown in Figure 7.23. The number a is called the producer's risk, and the number b the consumer's risk. As a common practice in industry, the magnitudes of a and b are usually set at some value from 0.01 to 0.1. Nomograms are available for easily obtaining solution(s) of the inequalities given above.

[Figure 7.23: The OC Curve of a Single Sampling Plan]

The sampling plan described above is called a single sampling plan (Figure 7.24) because the decision to accept or reject the batch is made in a single stage, after drawing a single sample from the batch being evaluated. To set up a practical single sampling procedure you need to specify (1) AQL, (2) a, the producer's risk, (3) RQL, and (4) b, the consumer's risk. The plan itself may be developed from a tool called the Larsen nomogram, given in statistical quality control texts and handbooks. A typical plan specifying AQL = 0.01, a = 0.05, RQL = 0.06 and b = 0.10 will have sample size n = 89 and maximum number of defectives allowed c = 2.
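The OC curve of this quoted plan is easy to verify numerically. The sketch below computes Pa(p) from the binomial expression given earlier for n = 89 and c = 2; note that Pa(AQL) comes out near 1 - a and Pa(RQL) below b:

```python
from math import comb

def Pa(p: float, n: int = 89, c: int = 2) -> float:
    """Probability of accepting a lot with fraction defective p."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

print(f"Pa(AQL = 0.01) = {Pa(0.01):.3f}")   # roughly 0.94 (producer's risk ~6%)
print(f"Pa(RQL = 0.06) = {Pa(0.06):.3f}")   # roughly 0.09 (consumer's risk ~9%)
```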

7.9.3 Double Sampling Plans

A double sampling plan is one in which a sample of size n1 is drawn and inspected, and the batch is accepted or rejected according to whether the number d1 of defective items found in the sample is <= r1 or >= r2 (where r1 < r2); if the number of defective items lies between r1 and r2, a further sample of size n2 is drawn and inspected, and the batch is accepted or rejected according to whether the total number d1 + d2 of defective items in the first and second samples together is <= r3 or > r3. This procedure is shown diagrammatically in Figure 7.25. The average sample number (ASN) of a double sampling plan is P1(p) n1 + (1 - P1(p)) (n1 + n2), where P1(p) is the probability that a decision (to accept or to reject) is reached on the first sample alone. If the value of p in the batch is very low or very high, a decision can usually be made upon inspection of the first sample, and the ASN of a double sampling plan will then be smaller than the sample size of a single sampling plan with the same producer's risk (a) and consumer's risk (b).
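A sketch of the ASN arithmetic, with made-up plan sizes; P1 itself would be computed from the first sample's acceptance and rejection probabilities at a given p:

```python
def asn(p1_decision: float, n1: int = 50, n2: int = 100) -> float:
    """Expected items inspected per lot for a double sampling plan."""
    return n1 * p1_decision + (n1 + n2) * (1 - p1_decision)

# Very good (or very bad) lots are usually settled on the first sample,
# so the ASN stays close to n1:
for p1 in (0.95, 0.60):
    print(f"P1 = {p1:.2f} -> ASN = {asn(p1):.1f} items")
```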

[Figure 7.24: The Single Sampling Plan]

[Figure 7.25: The Double Sampling Plan]

The idea of the double sampling plan can be extended to construct sampling plans of more than two stages, namely multiple sampling plans. A sequential sampling plan is a sampling plan in which one item is drawn and inspected at a time, in such a way that if a decision (to accept or reject the batch) can be made upon inspection of the first item, the process stops; if a decision cannot be made, a second item is drawn and inspected, and if a decision can be made upon inspection of the second item, the plan stops; otherwise a third item is drawn and inspected, and so on, until a decision can be made. However, multiple sampling plans and sequential sampling plans are not so commonly used, because their implementation in practice is more complicated than that of single and double sampling plans.

7.9.4

AOQ and AOQL (Average Outgoing Quality Limit)

The quantity p Pa(p) is called the average outgoing quality (AOQ) of a sampling plan at fraction defective p. This is the quality to be expected at the customer's end if the sampling plan is used consistently and repeatedly to accept or reject incoming lots. It is clear that if p = 0, then AOQ = 0. If p = 1, that is, if all the product items in the batch are defective, then the Pa(p) of any sampling plan that we use should equal 0, for otherwise the plan would have no utility; hence AOQ = 0 also when p = 1. Since AOQ can never be negative, it has a global maximum point in the range (0, 1); this maximum is called the average outgoing quality limit (AOQL) of the sampling plan. The graph of AOQ against p for a typical sampling plan is shown in Figure 7.26. Although a sampling plan can be specified by setting the producer's risk (a) and consumer's risk (b) at AQL and RQL, the quantity AOQL can also be used to specify a sampling plan.

[Figure 7.26: The AOQ Curve of a Sampling Plan]
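For a concrete plan, the AOQ curve and its maximum can be found numerically. The sketch below reuses the single sampling plan n = 89, c = 2 from earlier and locates the AOQL by a simple grid search:

```python
from math import comb

def Pa(p: float, n: int = 89, c: int = 2) -> float:
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def aoq(p: float) -> float:
    return p * Pa(p)   # average outgoing quality at incoming quality p

grid = [i / 1000 for i in range(1, 1000)]
p_star = max(grid, key=aoq)
print(f"AOQL ~ {aoq(p_star):.4f} at p ~ {p_star:.3f}")   # about 0.015
```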

Another type of sampling plan, different from the above, is the continuous sampling procedure (CSP). The rationale of CSP is that if we are not sure that the products coming from a process are of good quality, a 100% inspection is adopted; if the quality of the products is found to be good, then only a fraction of the products is inspected. In the simplest CSP, 100% inspection is performed initially; if no defective items are found after a specified number of items have been inspected (which suggests that the quality of the product being produced is good), 100% inspection is stopped and only a fraction f of the products is inspected. During fraction inspection, if a defective item is found (which suggests that the quality of the products might have deteriorated), fraction inspection is stopped and 100% inspection is resumed. More refined CSPs have also been constructed, for example by setting f at 1/2 at the first stage, 1/4 at the second stage, and so on.

7.9.5

Variables Sampling Plans

All the sampling plans described above are called attribute sampling plans, because the inspection procedure is based on a "go"/"no-go" basis; that is, an item is either regarded as non-defective and accepted, or regarded as defective and not accepted. Variable sampling plans are sampling plans in which continuous measurements (such as dimensional or weight checks) are made on each item in the sample, and the decision whether to accept or to reject the batch is based on the sample mean, the average of the measurements obtained from all items contained in the sample. A variable sampling plan can be used, for example, when a product item is regarded as acceptable if a certain measurement x (diameter, length, hardness, etc.) exceeds a pre-set lower specification limit L; otherwise the item is regarded as not acceptable (see Figure 7.27).

[Figure 7.27: Distribution of Individual Measurements (x)]



The measurements {x} of the products produced have a population mean μ, say. When μ is much larger than L, we can expect that most items will have an x value greater than L, and all such items would be acceptable; when μ is much less than L, we can expect that most items will have x values less than L, and all such items would not be acceptable. A variable sampling plan can be constructed by specifying a sample size n and a lower cut-off value c for the sample mean xbar, such that when a sample of size n is drawn and all items in this sample are measured, the lot is accepted if the sample mean xbar exceeds c; the lot is rejected otherwise. We require that when the population mean of the product produced is μ1 or larger, the lot is accepted with a probability of at least 1 - a, and when the population mean is μ2 or smaller, the lot is accepted with a probability of only b or less, where a = producer's risk and b = consumer's risk (Figure 7.28).
[Figure 7.28: Producer's (a) and Consumer's (b) Risks]

Suppose that x_q is the value of x such that the probability that x < x_q is q. According to the criterion given above, we can derive the two conditions

P(xbar > c when the population mean is μ1) >= 1 - a, and
P(xbar > c when the population mean is μ2) <= b.

This system of inequalities may not have a unique (n, c) solution. From elementary statistical theory, if the form of x_q is known (for example, when x follows a normal distribution), these inequalities let us determine a minimum value for the sample size n (an integer) and a range for the cut-off point c for the sample mean xbar. Such a sampling plan is called a single specification limit variable sampling plan. If an upper specification limit U instead of a lower specification limit L is set for x, we need only consider the lower specification limit problem with (-x) replacing x and (-U) replacing L. When a product item is regarded as acceptable only if a certain measurement x lies between a lower specification limit L and an upper specification limit U, a double specification limit variable sampling plan is used. In such a plan, a sample size n, a lower cut-off value cL and an upper cut-off value cU for the sample mean xbar are specified. A batch is accepted if and only if the sample mean of a sample of size n from the batch lies between cL and cU. Calculations for cL, cU and n are more complicated than in the single specification limit case.
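For the single (lower) specification limit case under normality with known sigma, the minimum n and the feasible range for c follow directly from the two probability conditions. A sketch with assumed numbers:

```python
from math import ceil, sqrt
from statistics import NormalDist

mu1, mu2 = 10.5, 10.0    # acceptable / unacceptable population means (assumed)
sigma = 0.8              # known process standard deviation (assumed)
a, b = 0.05, 0.10        # producer's and consumer's risks

z = NormalDist().inv_cdf
z_a, z_b = z(1 - a), z(1 - b)

# Both risk conditions can be met only if n is at least this large:
n = ceil(((z_a + z_b) * sigma / (mu1 - mu2)) ** 2)

# Any cut-off between these two values satisfies both conditions:
c_lo = mu2 + z_b * sigma / sqrt(n)
c_hi = mu1 - z_a * sigma / sqrt(n)
print(f"n = {n}, accept when xbar > c, with c in [{c_lo:.3f}, {c_hi:.3f}]")
```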

MIL-STD-105E Sampling Plans

International standards for sampling plans are now available. Many of these are based on the work of Dodge and Romig. The plan, originally developed for single and multiple attribute sampling for the US Army during WW II, is now widely used in industry. It is called MIL-STD-105E. An equivalent Indian standard, known as IS 2500, has been published by the Bureau of Indian Standards. Many other official standards for various attribute sampling plans (such as those based on AOQ or CSPs, and so on) and variable sampling plans (assuming the variable has a normal distribution, with the population variance known or unknown, and so on) have been published by the US government and the British Standards Institution.

Sampling Inspection is only a Screening Tool!

Before we end this section, we stress again that acceptance sampling, or sampling inspection, is only a screening tool for separating batches or lots of good quality products from batches of poor quality products. To some extent this screening assures the quality of incoming parts and materials. The use of sampling plans does help an industry do this screening more effectively than drawing samples arbitrarily. Therefore, sampling inspection can be used during purchasing, for checking the quality of incoming materials, whenever one is not sure about the conditions and QC procedures in use in the vendor's plant. Acceptance sampling can also be used for the final checking of products after production (Figure 7.3). This, to a limited degree, assures the quality of the products being readied for a customer before they are physically dispatched; even Motorola uses acceptance sampling as a temporary means of quality control until permanent corrective actions can be implemented. But note that, unlike SPC, acceptance sampling does not help in preventing the production of poor quality products.

7.10

What are Taguchi Methods?

In this section we briefly discuss methods that belong to the domain of quality engineering, a recently formalized discipline that aims at developing products whose superior performance delights the discriminating user, not only when the package is opened but throughout their lifetime of use. The quality of such products is robust, i.e., it remains unaffected by the deleterious impact of environmental or other factors often beyond the user's control. Since the topic of quality engineering is of notably broad appeal, we include below a brief review of the associated rationale and methods.

The term "quality engineering" (QE) was until recently used by Japanese quality experts only. One such expert is Genichi Taguchi (1986), who reasoned that even the best available manufacturing technology was by itself no assurance that the final product would actually function in the hands of its user as desired. To achieve this, Taguchi suggested, the designer must "engineer" quality into the product, just as he/she specifies the product's physical dimensions to make the dimensions of the final product correct. QE requires systematic experimentation with carefully developed prototypes whose performance is tested in actual field conditions. The object is to discover the optimum set-point values of the different design parameters, to ensure that the final product will perform as expected, consistently, when in actual use. A quality-engineered product has robust performance.

7.10.1 Taguchi's Thoughts on Quality

Taguchi, a Japanese electrical engineer by training, is credited with several contributions to the management and assurance of quality. Taguchi studied the methods of design of experiments (DOE) at the Indian Statistical Institute in the 1950s and later applied these methods in a very creative manner to improve product and process design. His methods now form the foundation of engineering design methodology in many leading industries around the world, including AT&T, General Motors and IBM. In the 1980s his methods were popularized in the USA by Madhav Phadke and Raghu Kacker of the Bell Laboratories. Taguchi's contributions may be classified under the following three headings:
- The loss function
- Robust design of products and production processes
- Simplified industrial statistical experiments

[Figure 7.29: Loss to Society Increases Whenever Performance Deviates from the Target. Taguchi's view: target is best, and loss rises as performance deviates from it; the traditional view: "within spec is OK, outside spec is bad".]

The essence of the loss function concept may be stated as follows. Whenever a product deviates from its target performance, it generates a loss to society (Figure 7.29). This loss is minimum when performance is right on target, and it grows gradually as one deviates from the target. Such a philosophy suggests that the traditional "if it is within specs, the product is good" view of judging a product's quality is not correct. If your foot size is 7, then a shoe of a size different from 7 will cause you inconvenience, pain, loose fit, even embarrassment. Under such conditions it is meaningless to seek a shoe that merely meets a spec given as (7 ± x). To state it again, the loss function philosophy says that for a producer the best strategy is to produce products as close to the target as possible, rather than aiming merely at "being within specifications."
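The loss function commonly used to express this idea is the quadratic L(y) = k (y - T)^2, where T is the target and k is fixed by the loss A incurred at a spec limit T + delta (so k = A / delta^2). A sketch using the shoe example, with an assumed monetary loss at the spec limit:

```python
T, delta = 7.0, 0.5    # target shoe size and spec half-width (illustrative)
A = 200.0              # assumed loss (Rs.) when the size is at a spec limit
k = A / delta**2

def loss(y: float) -> float:
    """Taguchi's quadratic loss for a delivered size y."""
    return k * (y - T) ** 2

for y in (7.0, 7.25, 7.5, 7.75):
    print(f"size {y:.2f} -> loss Rs. {loss(y):7.2f}")
```

Note that the loss is zero only at the target and already substantial just inside the spec limit, which is exactly the contrast with the traditional "within spec is OK" view.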

The other contributions of Taguchi are a methodology to minimize performance or quality problems arising due to non-ideal operating or environmental conditions, and a simplified method, known as orthogonal array experiments, to help conduct multi-factor experiments toward seeking the best product or process design. These ideas are described below.

7.10.2

The Secret of Creating a Robust Design

A practice common in traditional engineering design is sensitivity analysis. For instance, in traditional electronic circuit design, as well as in the development of performance design equations, sensitivity analysis of the circuit developed remains a key step that the designer must complete before his job is over. Sensitivity analysis evaluates the likely changes in the device's performance, usually due to element value tolerances or due to value changes with time and temperature. Sensitivity analysis also determines the changes to be expected in the design's performance due to variations in factors of uncontrollable character. If the design is found to be too sensitive, the designer projects the worst-case scenario to help plan for the unexpected. However, studies indicate that worst-case projections or conservative designs are often unnecessary, and that a "robust design" can greatly reduce off-target performance caused by poorly controlled manufacturing conditions, temperature or humidity shifts, wider component tolerances used during fabrication, and also field abuse that might occur due to voltage/frequency fluctuations, vibration, etc. Robust design should not be confused with rugged or conservative design, which adds to unit cost by using heavier insulation or high-reliability, high-tolerance components. As an engineering methodology, robust design seeks to reduce the sensitivity of the product/process performance to the uncontrolled factors through a careful selection of the values of the design parameters. One straightforward way to produce robust designs is to apply the "Taguchi method", which may be illustrated as follows. Suppose that a European product (chocolate bars) is to be introduced in a tropical country where the ambient temperature rises to 45°C. If the European product formulation is directly adopted, the result may be molten bars on store shelves in Bombay and Singapore, and gooey hands and dirty dresses, due to the high temperature sensitivity of the European formula (Curve 1, Figure 7.30).

[Figure 7.30: Plasticity of the Chocolate Bar vs. Ambient Temperature]

The behavior of the bar's plasticity may be experimentally explored to determine its robustness to temperature, but few product designers actually attempt this. Taguchi would suggest that we do here some special "statistical experiments" in which both the bar's formulation (the original European, and perhaps an alternate prototype formulation that we would call "X") and the ambient temperature are varied simultaneously and systematically, and the consequent plasticities observed. Taguchi was able to show that by such experiments it is often possible to discover an alternate bar design (here an appropriate chocolate bar formulation) that would be robust to temperature. The trick, he said, is to uncover any "exploitable" interaction between the effect of changing the design (e.g., from the European formulation to Formulation "X") and temperature. In the language of statistics, two factors are said to interact when the influence of one on a response is found to depend on the setting of the other factor (Montgomery, 1997). Figure 7.31 shows such an interaction, experimentally uncovered. Thus, a "robust" chocolate bar may be created for the tropical market if the original European formulation is changed to Formulation "X".

[Figure 7.31: Interaction of the Effects of Temperature and Chocolate Formulation]

7.10.3

Robust D e d g n by fihe "Two-atepH Taguahi Method

Note that a product's performance is "fixed" primarily by its design, i.e., by the settings selected for its various design factors. Performance may also be affected by noise: environmental factors, unit-to-unit variation in material, workmanship and methods, or aging and deterioration (Figure 7.32). The breakthrough in product design that Taguchi achieved renders performance robust even in the presence of noise, without actually controlling the noise factors themselves. Taguchi's special "design × noise array" experiments (Figure 7.33) discover those optimum settings. Briefly, the procedure first builds a special assortment of prototype designs (as guided by the "design array") and then tests these prototypes for their robustness in "noisy" conditions. For this, each prototype is "shaken" by deliberately subjecting it to different levels of noise (selected from the "noise array", which simulates noise variation in field conditions). Thus performance is studied systematically under noise, in order to find, eventually, a design that is insensitive to the influence of noise.

[Figure 7.32: Design Factors and Noise Both Impact the Response]

To guide the discovery of the "optimum" design factor settings, Taguchi suggested a two-step procedure. In Step 1, optimum settings for certain design factors (called "robustness seeking" factors) are sought so as to ensure that the response (for the bar, plasticity) becomes robust (i.e., the bar does not collapse into a blob at least up to a temperature of 50°C). In Step 2, the optimum setting of some other design factor (called the "adjustment" factor) is sought to put the design's average response at the desired target (e.g., a plasticity level that is easily chewable).

For the chocolate bar design problem, the alternative design "factors" are two candidate formulations: one the original European, and the other that we called "X". Thus, the design array contains the two alternatives "X" and "Eu". "Noise" here is ambient temperature, to be experimentally varied over the range 0°C to 50°C, as seen in the tropics. Figure 7.33 shows the experimental outcome of hypothetical design × noise array experiments. For instance, the response value Plasticity X-50 was observed when Formulation X was tested at 50°C. Figure 7.31 is a compact display of these experimental results. It is evident that Formulation X's behavior is quite robust even at tropical temperatures. Therefore, the adoption of Formulation "X" would make the chocolate bar "robust", i.e., its plasticity would not vary much even if the ambient temperature had wide swings.

[Figure 7.33: Design × Noise Array Experiments and Their Results]
7.10.4 Taguchi's Orthogonal Array Experiments

Rather typically, the performance of a product (or process) is affected by a multitude of factors. It is also well known that over 2/3 of all product malfunctions may be traced to the design of the product. To the extent that basic scientific knowledge can guide the design, the designer does his/her best to come up with a selection of design parameter values that would ensure good performance. Frequently, though, not everything can be predicted by theory; experimentation or prototyping must be resorted to and the design must be empirically optimized. The planning of such experimental multiple-factor investigations falls in the domain of statistical design of experiments (DOE). Many textbook methods for conducting multi-factor experiments, however, are too elaborate and cumbersome. This has discouraged many practicing engineers from trying out this powerful methodology in real design and optimization work. Taguchi observed this and popularized a class of simpler experimental plans that can still reveal a lot about the performance of a product or process, without the burden of heavy theory. An example follows.

A fire extinguisher is to be designed so as to effectively cover flames in case of fire. The designer wishes to achieve this either by using higher pressure inside the CO2 cylinder or by altering the nozzle design. Theoretical models of such systems using computational fluid dynamics (CFD) are too complex and too cumbersome to optimize. The question to be answered is: which is more effective, higher pressure or a wider nozzle? The Taguchi orthogonal array experimental scheme would set up the following plan consisting of only four specifically designed experiments:

Run   Nozzle (mm)   Pressure (bars)   Observed Spray Area (sq. m)
1     5             2                 0.8
2     10            2                 0.9
3     5             4                 1.6
4     10            4                 1.9

It is clear from the factor effect plots of these results that the designer would be much better off by increasing pressure. Nozzle diameter seems to have little effect on the extent of the area covered by the extinguisher.
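The factor-effect arithmetic behind that conclusion is a one-liner per factor: average the response at each factor's high level and subtract the average at its low level. A sketch using the four runs from the table:

```python
from statistics import fmean

runs = [  # (nozzle_mm, pressure_bars, spray_area_m2)
    (5, 2, 0.8), (10, 2, 0.9), (5, 4, 1.6), (10, 4, 1.9),
]

def effect(col: int) -> float:
    """Main effect of the factor in column `col` on the spray area."""
    lo, hi = sorted({r[col] for r in runs})
    return (fmean(r[2] for r in runs if r[col] == hi)
            - fmean(r[2] for r in runs if r[col] == lo))

print(f"Nozzle effect:   {effect(0):+.2f} sq. m")   # +0.20
print(f"Pressure effect: {effect(1):+.2f} sq. m")   # +0.90
```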

7.11 The Six-Sigma Principle


The six sigma principle is Motorola's own rendering of what is known in the quality literature as the zero defects (ZD) program. Zero defects is a philosophical benchmark or standard of excellence in quality proposed by Philip Crosby. Crosby explained the mission and essence of ZD by the statement "What standard would you set on how many babies nurses are allowed to drop?" ZD is aimed at stimulating each employee to care about accuracy and completeness, to pay attention to detail, and to improve work habits. By adopting this mind-set, everyone assumes the responsibility of reducing his or her own errors to zero.

One might think that having three-sigma quality, i.e., the natural variability (mean ± 3sx) equal to the tolerance (= upper spec limit - lower spec limit, in other words Cp = 1.0), would mean good enough quality. After all, if the distribution is normal, only 0.27% of the output would be expected to fall outside the product's specs or tolerance range. But what does this really mean? An average aircraft consists of 10,000 different parts. At 3-sigma quality, 27 of those parts in an assembled aircraft would be defective. At this performance level, there would be no electricity or water in Delhi for one full day each year. Even four-sigma quality may not be OK. At the four-sigma level there would be 62.10 minutes of telephone shutdown every week. You might wish to know where performance is today. Restaurant bill errors are near 3-sigma. Payroll processing errors are near 4-sigma. Wire transfer of funds in banks is near 4-sigma, and so is baggage mishandling by airlines. The average Indian manufacturing industry is near the 3-sigma level. Airline flight fatality rates are at about the 6.5-sigma level (0.25 per million landings). At the two-sigma level, a company's cost of returns, scrap, rework and erosion of market share eats up over a third of its yearly sales. For the typical Indian, a 1-hour train delay, an incorrect eye operation or drug administration, or no electricity or water for half a day is no surprise; he/she routinely experiences even worse performance. Quantitatively, such performance is worse than two-sigma. Can this be called acceptable? One- or two-sigma performance is downright noncompetitive. Besides adopting TQM as the way to conduct business, many companies worldwide are now seriously looking at six-sigma benchmarks to assess where they stand. Six sigma not only reduces defects and raises customer acceptability; it has now been shown at Allied Signal Inc., Motorola, Raytheon, Bombardier Aerospace and Xerox that it can actually save money as well. Therefore, it is no surprise that Motorola aggressively set the following quality goal for itself in 1987 and then didn't want to stop till they achieved it:

Improve product and services quality ten times by 1989, and at least one hundred fold by 1991. Achieve six-sigma capability by 1992. With a deep sense of urgency, spread dedication to quality to every facet of the corporation, and achieve a culture of continual improvement to assure total customer satisfaction. There is only one goal: zero defects in everything we do.

The Steps to Six Sigma


The concept of six-sigma quality, i.e., shrinking the inherent variation in a process to half of the spec range (Cp = 2.0) while allowing the mean to shift at most 1.5 sigma from the spec midpoint (the target quality), is explained by Figure 7.34. The area under the shifted curves beyond the six sigma range (the tolerance limits) is only 0.0000034, or 3.4 parts per million. If the process mean can be controlled to within ± 1.5 sx of the target, a maximum of 3.4 defects per million pieces produced can be expected. If the process mean is held exactly on target, only 2.0 defects per billion would be expected.
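These tail areas are easy to verify with the standard normal distribution. The sketch below counts the fraction of a normal process falling outside spec limits placed k sigma from the target, with the mean drifted by 1.5 sigma:

```python
from statistics import NormalDist

def dpmo(k: float, shift: float = 1.5) -> float:
    """Defects per million with specs at +/- k sigma and a mean shift."""
    nd = NormalDist()
    out_of_spec = nd.cdf(-k - shift) + (1 - nd.cdf(k - shift))
    return out_of_spec * 1e6

print(f"3-sigma, shifted: {dpmo(3):9.1f} ppm")           # ~66,800 ppm
print(f"6-sigma, shifted: {dpmo(6):9.2f} ppm")           # ~3.4 ppm
print(f"6-sigma, centred: {dpmo(6, 0) * 1000:.1f} ppb")  # ~2 per billion
```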
[Figure 7.34: The Six Sigma Process]

This is why, within its organization, Motorola defines six sigma as a state of the production or service unit that represents "almost perfect quality." Motorola prescribes six steps to achieve the six-sigma state, as follows.

Step 1: Identify the product you create or the service you provide.
Step 2: Identify the customer(s) for your product or service, and determine what they consider important.
Step 3: Identify what you need in order to provide the product or service that satisfies the customer.
Step 4: Define the process for doing the work.
Step 5: Mistake-proof the process and eliminate wasted effort.
Step 6: Ensure continuous improvement by measuring, analyzing and controlling the improved process.

Many companies have adopted the Measure-Analyze-Improve-Control cycle to step into six sigma. Typically they proceed as follows:
- Select critical-to-quality characteristics
- Define performance standards (the targets to be achieved)
- Validate measurement systems (to ensure that the data are reliable)
- Establish product capability (how good are you now?)
- Define performance objectives
- Identify sources of variation (seven tools, etc.)
- Screen potential causes (correlation studies, etc.)
- Discover the relationship between variables (causes or factors) and the output (DOE)
- Establish operating tolerances for input factors and output variables
- Validate the measurement system
- Determine process capability (Cpk) (can you deliver? what do you need to improve?)
- Implement process controls

One must audit and review nonstop to ensure that one is moving along the charted path. The aspect common between six sigma and ZD ("zero defects") is that both concepts require maximum participation by the entire organization. In other words, they require unrelenting effort by management and the involvement of all employees. Companies such as General Motors have used a four-phase approach to seek six sigma:

1. Measure: Select critical quality characteristics through Pareto charts; determine the existing frequency of defects, define the target performance standard, validate the measurement system, and establish the existing process capability.
2. Analyze: Understand when, where, and why defects occur by defining performance objectives and sources of variation.
3. Improve: Identify potential causes, discover cause-effect relationships, and establish operating tolerances.
4. Control: Maintain improvements by validating the measurement system, determining process capability, and implementing process control systems.

It is reported that in GM a new culture has been created. An individual or team devotes all its time and energy to solving one problem at a time, designs solutions with customers' assistance, and helps to minimize bureaucracy in supporting the six-sigma initiative.
The Bluebird Plan

The Japanese have recently evolved the "Bluebird Plan," a "third option" beyond SPC and TQM designed to achieve the four objectives of business excellence. These objectives are establishing corporate ethics, maintaining and boosting international competitiveness, ensuring stable employment, and improving the national quality of life. The Bluebird Plan provides a forum for government, labour and management to discuss the actions that need to be taken. In Japan the plan set out an action program of reform for the three years 1997-1999, noted to be a critical time that would determine the direction of Japan's future. Striking about the plan is the employers' acceptance that the relationship between labour and management is an imperative "stabilizing force in society." Thus it reaches beyond the tenets of TQM.

7.12

Summary

A. Key Points about Control Charts

Let us sum up the key points about control charts:
- Every process exhibits some variability. Variability is caused by a multitude of factors that affect a manufacturing or service process while it is operating.

- In SPC, a process is assumed to be disturbed by two types of factors, called (1) random causes and (2) assignable causes. Random causes are many, but their individual effect is small. The combined effect of all random causes is manifested as the inherent variability of the process.
- A performance measure called "process capability" quantifies the inherent variability of a process. Process capability measures the fraction of the total process output that falls within tolerance (or spec range) when no assignable factors are affecting the process.
- Assignable factors, on the other hand, are few in number (we can usually "put our finger" on them), but their effect is large and noticeable on an appropriately drawn control chart. When disturbed by an assignable cause, the average value of the process output may deviate from its desired target value, so the process may lose accuracy. Process spread or dispersion may also widen, causing a worsening of precision. Hence, the appearance of assignable causes should be detected quickly and corrective steps should be taken to stop such factors from affecting the process.
- The mean or sample average (xbar) indicates the extent of accuracy of the process output (the location of the output distribution relative to the target). The standard deviation, or range (R), indicates the precision of the output.
- The purpose of drawing control charts is to determine when the process should be left alone and when it should be investigated and, if necessary, adjusted or rectified to remove any assignable cause affecting it.
B. SPC for variables data progresses in three stages:

(i) examination of the state of statistical control of the process using xbar and R charts, (ii) a process capability study to compare the process spread and its location with the specifications, and (iii) routine process control using the control charts.
C. Key Points about Control Chart Construction

Let us summarise the key points about control chart construction:
- Statistical process control (SPC) is a methodology for monitoring a process to identify special causes of variation and signal the need to take corrective action when appropriate.
- Capability and control are independent concepts. Ideally, we would like a process to have both high capability and be in control. If a process is not in control, it should first be brought into control before attempting to evaluate process capability.
- Control charts have three basic applications: (1) establishing a state of statistical control, (2) monitoring a process to identify special causes, and (3) determining process capability.
- Control charts for variables data include xbar- and R-charts, xbar- and s-charts, and individuals and moving-range charts. xbar- and s-charts are alternatives to xbar- and R-charts for larger sample sizes; the sample standard deviation provides a better indication of process variability than the range. Individuals charts are useful when every item can be inspected and when a long lead time exists for producing an item. Moving ranges are used to measure the variability in individuals charts.
- A process is in control if no points are outside the control limits; the number of points above and below the center line is about the same; the points seem to fall randomly above and below the center line; and most points (but not all) are near the center line, with only a few close to the control limits.
- Typical out-of-control conditions are represented by sudden shifts in the mean value, cycles, trends, hugging of the center line, hugging of the control limits, and instability.
- Modified control limits can be used when the process capability is known to be good. These wider limits reduce the amount of investigation of isolated points that would fall outside the usual control limits.
- Charts for attributes include p-, np-, c- and u-charts. The np-chart is an alternative to the p-chart, and controls the number nonconforming for attributes data. Charts for defects include the c-chart and the u-chart: the c-chart is used for a constant sample size and the u-chart for a variable sample size.

7.13

Key Words

Acceptance Sampling: The process of selecting a sample from incoming inputs (raw material, components, etc.) or from semi-finished or finished products for the purpose of analysing the results. This screening mechanism/technique specifies how to draw samples and what rules determine the acceptability of the incoming inputs or products.

Control Chart: A line chart showing the control limits at three standard deviations above and below the average quality level. It constitutes the core of SPC.

Design of Experiments: The application of statistical methods for producing high-quality, robust product and process designs (one of the Taguchi methods).

Fishbone Chart: A diagram showing the main causes and sub-causes of a quality problem or defect.

Flow Chart: A device/method showing the order of activities in a process or a project and their interdependency.

Histogram: A bar chart showing the distribution of variable quantities or characteristics.

Pareto Chart: A chart showing the distribution of effects attributable to various causes or factors, arranged from the most frequent to the least frequent.

Process Capability: The range over which the 'natural variation' of a process occurs, as determined by the system of common/random causes.

Run Chart: A chart showing the history and pattern of variation in time sequence.

Scatter Diagram: A diagram showing the pattern that exists in the relationship between two variables.

Six Sigma Principle: Motorola's rendering of the Zero Defects (ZD) philosophy, a benchmark or standard of excellence in quality; ZD was proposed by Philip Crosby. It aims at stimulating each employee to care about the accuracy and completeness of what they do.

Statistical Process Control (SPC): A process to control the variability of output using control charts.

Statistical Quality Control (SQC): The use of statistical methods to improve or enhance quality for customer satisfaction. It involves monitoring a process to identify the unique causes of variation and signal appropriate corrective actions.

7.14

Self-Assessment Questions

1. Define statistical process control and discuss its advantages.
2. What does the term statistical control mean? Explain the difference between capability and control.
3. What are the disadvantages of simply using histograms to study process capability?
4. Discuss the three primary applications of control charts.
5. Describe the difference between variables and attributes data. What types of control charts are used for each?
6. Briefly describe the methodology of constructing and using control charts.
7. What does one look for in interpreting control charts? Explain the possible causes of different out-of-control indicators.
8. How should control charts be used by shop-floor personnel?
9. What are modified control limits? Under what conditions should they be used?
10. How are variables control charts used to determine process capability?
11. Describe the difference between control limits and specification limits.
12. Why is the s-chart sometimes used in place of the R-chart?
13. Describe some situations in which a chart for individual measurements would be used.
14. Explain the concept of a moving range. Why is a moving range chart difficult to interpret?
15. Explain the difference between defects and defectives.
16. Briefly describe the process of constructing a p-chart. What are the key differences compared with an x-chart?
Sample    Observations
1         3.55  3.64  4.37
2         3.61  3.42  4.07
3         3.61  3.36  4.34
4         4.13  3.50  3.61
5         4.06  3.28  3.07
6         4.48  4.32  3.71
7         3.25  3.58  3.51
8         4.25  3.38  3.00
9         4.35  3.64  3.20
10        3.62  3.61  3.43
11        -     3.28  3.12
12        3.38  3.15  3.09
13        2.85  3.44  4.06
14        3.59  3.61  3.34
15        3.60  2.83  2.84
16        2.69  3.57  3.28
17        3.07  3.18  3.11
18        2.86  3.69  3.05
19        3.68  3.59  3.93
20        2.90  3.41  3.37
21        3.57  3.63  2.72
22        2.82  3.55  3.56
23        3.82  2.91  3.80
24        3.14  3.83  3.80
25        3.97  3.34  3.65
26        3.77  3.60  3.81
27        4.12  3.38  3.37
28        3.92  3.60  3.54
29        3.50  4.08  4.09
30        4.23  3.62  3.00

17. Does an np-chart provide any different information than a p-chart? Why would an np-chart be used?
18. Explain the difference between a c-chart and a u-chart.
19. Discuss how to use charts for defects in a quality rating system.
20. Describe the rules for determining the appropriate control chart to use in any given situation.
21. Where do statistical methods fit into the TQM philosophy?
22. Describe each of the "seven tools" of quality management. Where would you use them?
23. What is the difference between SPC (statistical process control) and SQC (statistical quality control)?
24. Define the terms accuracy, precision and process control. Which control chart would enable you to control the accuracy of a process? Which chart would help you control precision?
25. Name the statistical method used in checking the acceptability of parts supplied in lots by a vendor.
26. Define the terms AQL, RQL, producer's risk and consumer's risk.
27. How does a sampling plan help you control the producer's and consumer's risks?
28. What does the technique called DOE (design of experiments) do for you?

33. Suppose that the following sample means and standard deviations are observed for samples of size 5.

34. Construct charts for individuals using both two-period and three-period moving ranges for the following observations: 9.0, 9.5, 8.4, 11.5, 10.3, 12.1, 11.4, 10.0, 11.0, 12.7, 11.3, 17.2, 12.6, 12.5, 13.0, 12.0, 11.2, 11.1, 11.5, 12.5, 12.1.

35. The fraction defective for an automotive piston is given below for 20 samples. Two hundred units are inspected each day. Construct a p-chart and interpret the results.

7.15

Further Readings and References

D. C. Montgomery (1993), Statistical Quality Control, 3rd ed., John Wiley.
T. P. Bagchi (1996), ISO 9000: Concepts, Methods and Implementation, 2nd ed., Wheeler.
