
statistical tolerance analysis basics: Root Sum Square (RSS)

In my last post on worst-case tolerance analysis I concluded with the fact that the worst-case method,
although extremely safe, is also extremely expensive.

Allow me to elaborate, then offer a resolution in the form of statistical tolerance analysis.

cost
A worst-case tolerance analysis is great to make sure that your parts will always fit, but if you’re producing
millions of parts, ensuring each and every one works is expensive and, under most circumstances,
impractical.

Consider these two scenarios.

1. You make a million parts, and it costs you $1.00 per part to make sure that every single one works.
2. You make a million parts, but decide to go with cheaper, less accurate parts. Now your cost is $0.99
per part, but 1,000 parts won’t fit.

In the first scenario, your cost is:

$1.00/part * 1,000,000 parts = $1,000,000

In the second scenario, your cost is:

$0.99/part * 1,000,000 parts = $990,000,


but you have to throw away the 1,000 rejects which cost $0.99/part. So your total cost is:

$990,000 + 1,000 × $0.99 = $990,990, which means you save $9,010.

Those actual numbers are make-believe, but the lesson holds true: by producing less precise (read:
crappier) parts and throwing some of them away, you save money.

Sold yet? Good. Now let’s take a look at the theory.

statistical tolerance analysis: theory


The first thing you’ll want to think of is the bell curve. You may recall the bell curve being used to explain
that some of your classmates were smart, some were dumb, but most were about average.

The same principle holds true in tolerance analysis. The bell curve (only now it's called the "normal
distribution") states that when you take a lot of measurements, be it of test scores or block thicknesses,
some measurements will be low, some high, and most in the middle.

Of course, "just about" and "most" don't help you get things done. Math does, and that's where the
normal distribution (and Excel… attachment below) come in.

sidebar: Initially I planned on diving deep into the math of RSS, but Hileman does such a good job on the
details, I'll stick with the broad strokes here. I highly suggest printing out his post and sitting down in a
quiet room; it's the only way to digest the heavy stuff.

the normal distribution and “defects per million”


Using the normal distribution, you can determine how many defects (defined as parts that come in outside
of allowable tolerances) will occur. The standard unit of measure is “defects per million”, so we’ll stick
with that.

There are two numbers you need to create a normal distribution, and they are represented by μ (pronounced
"mew") and σ (pronounced "sigma"):

 μ is the mean, a measure of the “center” of a distribution.


 σ is the standard deviation, a measure of how spread out a distribution is. For example, the number sets
{0,10} and {5,5} both have an average of 5, but the {0,10} set is spread out and thus has a higher standard
deviation.

Using one of our blocks (remember those?) as an example…


Let’s say you measure five blocks like the one above (in practice it’s best to measure 30 at the very least, but we’ll keep it at 5 for the
example) and get the following results:

 x1 = 1.001″
 x2 = 0.995″
 x3 = 1.000″
 x4 = 1.001″
 x5 = 1.003″

The average (μ) is 1.000 and the standard deviation (σ) is .003. Plug those into a normal distribution, and
your tolerances break down like this (see the 'after production' tab in the attached Excel file for formulas):

If you require the blocks to be 1.000±.003 (±1σ), the blocks will pass inspection 68.27% of the time…
317,311 defects per million.

If you require the blocks to be 1.000±.006 (±2σ), the blocks will pass inspection 95.45% of the time…
45,500 defects per million.

If you require the blocks to be 1.000±.009 (±3σ), the blocks will pass inspection 99.73% of the time…
2,700 defects per million.

and so on.

Using the data above you can say with confidence (assuming you measured enough blocks!) that if you
were to use a million blocks, all but 2700 of them would come in between 0.991 and 1.009.
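If you'd rather script it than spreadsheet it, the same numbers fall straight out of the normal CDF. Here's a quick Python sketch (my own, standard library only) using the five block measurements above:

import math
import statistics

def pass_rate(k):
    # Fraction of a normal distribution inside ±k standard deviations:
    # Phi(k) - Phi(-k), written with the error function so only the
    # standard library is needed.
    return math.erf(k / math.sqrt(2))

blocks = [1.001, 0.995, 1.000, 1.001, 1.003]   # the five measured thicknesses
mu = statistics.mean(blocks)                   # 1.000
sigma = statistics.stdev(blocks)               # 0.003 (sample standard deviation)

for k in (1, 2, 3):
    dpm = (1 - pass_rate(k)) * 1_000_000       # defects per million
    print(f"±{k}σ = ±{k * sigma:.3f}: pass {pass_rate(k):.2%}, ~{dpm:,.0f} dpm")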

root sum square and the standard deviation


If you’ve followed the logic closely you may notice a catch-22. Ideally, you want to do a tolerance
analysis before you go to production, but how can you determine μ or σ without having samples to test…
which you will only get after production?

You make (and state… repeatedly) assumptions.

The μ part is easy. You just assume that the mean will be equal to the nominal (in our case, 1.000). This is
usually a solid assumption and only begins to get dicey when you talk about the nominal shifting (some
like to plan for up to 1.5σ!) over the course of millions of cycles (perhaps due to tool wear), but that is
another topic.

For σ, a conservative estimate is that your tolerance can be held to a quality of ±3σ, meaning that a
tolerance of ±.005 will yield you a σ of 0.005/3 = 0.00167.

Let's play this out… If you are stacking five blocks @ 1.000±.005, you add up the five nominals to
get μ, and take the square root of the sum of the squares of the standard deviations of the tolerances (wordy,
I know), which looks like this… SQRT([.005/3]^2+[.005/3]^2+[.005/3]^2+[.005/3]^2+[.005/3]^2) = .0037
(you divide by 3 because you are assuming that your tolerances represent 3 standard deviations).
That's as wordy as I'm going to get on the math (the post is already longer than I'd like); you can see it
working for yourself in the 'before production' tab of the attached Excel file.
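Here's the same five-block calculation as a quick Python sketch (mine, standard library only), again assuming each ±.005 tolerance spans 3 standard deviations:

import math

nominals = [1.000] * 5                 # five blocks @ 1.000 ± .005
tolerances = [0.005] * 5
sigmas = [t / 3 for t in tolerances]   # assume each tolerance spans ±3σ

mu = sum(nominals)                                   # 5.000
sigma = math.sqrt(sum(s ** 2 for s in sigmas))       # RSS: ~0.0037

print(f"stack: {mu:.3f} ± {3 * sigma:.4f} at ±3σ")   # ±0.0112 vs ±0.025 worst case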

Just remember to treat those numbers with the respect they deserve, and that industry-accepted
assumptions are no replacement for a heart-to-heart (and email trail) with your manufacturer. Trying to
push a manufacturer to hold tolerances they aren't comfortable with is a draining and often futile exercise.

The tolerances dictate the design, not the other way around.

update: My series of posts on worst-case, root sum square, and monte carlo tolerance analysis started off
as just a brief introduction to the basics. Since then I have heard from a number of you asking for a clear,
concise (everything else out there is so heavy), usable guide to both the math behind tolerance analysis and
real-world examples of when to use it. I’m currently working on it, but would love to hear what YOU would
like out of it. Let me know in the comments or contact me through the site.


statistical tolerance analysis basics: worst-case tolerance analysis
fair warning: if you’ve never heard the phrase “tolerance analysis”, you’ll likely never have to perform it
and should just spend the next 5 minutes elsewhere. Otherwise…

A well-performed tolerance analysis will add years to your life.

No worrying about parts fitting together, no worries about sloppy fits. A well-done tolerance analysis both
proves the design and helps communicate which dimensions are really important to the manufacturer.

It’s a good place to be.

But where to begin? The easiest place to start is the worst-case method.

the worst-case method


The worst-case tolerance analysis method is simple arithmetic (that's right… just addition and subtraction),
so let's start there.

For example, let’s say we have a block

The block is 1.000″ thick, and the (honest) guy selling it to you guarantees accuracy within ±0.005″ (5
thousandths of an inch). Thus, you are guaranteed that the part will be between 0.995″ and 1.005″.
Simple enough? Great. You just performed a worst-case tolerance analysis of that part.

Easy right? You bet.

But what happens if you stack, let's say, five blocks?

The five 1.000″ blocks add up to, no surprise here, 5.000″. But the tolerances stack up as well, to ±0.025″.

Now things are becoming less precise. When we had one block we knew the thickness within 0.010″
(0.995″ – 1.005″). Now that we have five blocks, our range has ballooned to .050″ (4.975″ – 5.025″).
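If you'd like to script it, the worst-case stack is just addition. A throwaway Python sketch (my own):

parts = [(1.000, 0.005)] * 5            # (nominal, ± tolerance) for each block

nominal = sum(n for n, t in parts)      # 5.000: nominals add
tol = sum(t for n, t in parts)          # 0.025: tolerances add too
print(f"{nominal:.3f} ± {tol:.3f} -> {nominal - tol:.3f} to {nominal + tol:.3f}")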

sidebar: 0.050″ is just under 1/16th of an inch. If that seems negligible to you, just look at the sticky gas
pedal problems Toyota has been having lately. This stuff is important, especially when it comes to
mechanisms.

If you want to determine a window inside which every last one of your block assemblies will land, you are
done. Just design everything else around the fact that this block assembly can be off by as much as ±0.025″.
Either that, or go back to the block salesman and shell out a small fortune for more accurate blocks.

If, on the other hand, you lack the small fortune for precision parts, the willingness to accept such a
sloppy block assembly, or both, you do have a recourse:

You need to delve into the world of Root Sum Square (RSS) tolerance analysis.

PROCESS TOLERANCING:
A SOLUTION TO THE DILEMMA OF
WORST-CASE VERSUS STATISTICAL
TOLERANCING
Dr. Wayne A. Taylor

When solving tolerancing problems, one must choose between worst-case tolerancing and statistical
tolerancing. Both of these methods have their pros and cons. If worst-case tolerancing is used, all
tolerances must be specified as worst-case tolerances. If statistical tolerancing is used, all tolerances must
be specified as statistical tolerances. In reality, the behavior of certain inputs is best described using worst-
case tolerances while the behavior of other inputs is best described by statistical tolerances. Still other
inputs are not adequately represented by either type of tolerance. This article reviews the two current
methods of tolerancing along with their pros and cons. It then introduces a new method of tolerancing,
called process tolerancing, which solves many of the problems associated with the current approaches.
Process tolerancing represents a unified approach to tolerancing that encompasses both of the previous
approaches. Using process tolerancing, worst-case tolerances and statistical tolerances can be combined
into the same analysis. The flexibility provided by process tolerancing results in solutions to such common
problems as multiple components from the same lot and off-center processes.

KEY WORDS: Statistical Tolerance, Worst-Case Tolerance, Process Tolerance, Variation Transmission
Analysis, Propagation of Error, and Six Sigma Quality.

INTRODUCTION

Tolerance analysis is used during product design. For example, consider the design of a pump. Flow rate is
a key characteristic that must be controlled. It is called an output variable. Flow rate is affected by such
factors as diameter of the piston, stroke length, motor speed and viscosity of the solution. These factors are
called input variables. The goal of tolerance analysis is to predict the behavior of the output variable based
on an understanding of how the inputs behave. The behavior of the inputs can then be adjusted to cause the
output to behave as desired.

Tolerance analysis is equally important to process development. For example, consider the installation of a
new heat seal machine. Seal strength is the output variable. Input variables include die temperature, seal
time, and pressure. Using tolerance analysis, a process window can be determined for the inputs that
ensures that seal strength stays within its specification limits.

Two methods of tolerancing currently exist: worst-case and statistical tolerancing. Both have their pros and
cons. These two methods will be reviewed and compared. A new method of tolerancing, called process
tolerancing, will then be introduced. Process tolerancing represents a unified approach to tolerancing that
encompasses both the previous approaches.

The greatest benefit of process tolerancing is that it allows both worst-case and statistical tolerances to be
combined in the same analysis. Some inputs are best described by worst-case tolerances. For example,
consider solution viscosity for the pump. Different customers will use the pump with different solutions.
Each customer wants the correct flow rate. As a result, a worst-case tolerance best describes the behavior of
solution viscosity. Some inputs are best described by statistical tolerances. For example, consider die
temperature on the heat seal machine. Die temperature is controlled automatically using closed-loop
feedback. This ensures it remains at the specified target. Its behavior is best described using a statistical
tolerance. The purpose of a tolerance analysis is to predict the behavior of the output based on knowledge
of how the inputs behave. Key to its success is the ability to accurately describe the behavior of the inputs.
Using process tolerancing, you can select the type of tolerance that best describes the behavior of each
input.

Some inputs are not well represented by either worst-case or statistical tolerances. For example, inputs
from processes that are off-center or lack statistical control. Process tolerancing offers much greater
flexibility than the current two methods and can be used to better represent such situations. Because of this
added flexibility, process tolerancing provides solutions to such common problems as multiple components
from the same lot and off-center processes.
WORST-CASE TOLERANCES

Worst-case tolerances are specified in the form 1.00 ± 0.03". The tolerance half-width of 0.03" represents
the maximum amount that a "good" unit can deviate from target. This worst-case tolerance can also be
represented in interval form as (0.97", 1.03"). A unit falling outside the range of 0.97" to 1.03" fails to
meet the tolerance and is called a nonconformance or defect. With worst-case tolerances, units of product
can fall anywhere in the tolerance interval. All units could be 0.97", all units could be 1.03", or the units
could be uniformly spread across the interval. Nonsymmetrical tolerances can also be specified, but are
ignored here because they can be easily translated into equivalent symmetrical tolerances.

If worst-case tolerances are specified for the inputs, a worst-case tolerance can be calculated for the output.
Let the output be represented by y and the inputs by x1, …, xn. Suppose y = f(x1, …, xn) and that the inputs
have worst-case tolerances txi ± Δxi. Then the resulting worst-case tolerance for y, denoted ty ± Δy, is:

ly = min f(x1, …, xn) over txi − Δxi ≤ xi ≤ txi + Δxi (Eq. 1)

uy = max f(x1, …, xn) over txi − Δxi ≤ xi ≤ txi + Δxi (Eq. 2)

ty = (ly + uy)/2 (Eq. 3)

Δy = (uy − ly)/2 (Eq. 4)

The terms ly and uy represent the minimum and maximum values of y over the worst-case tolerances for the
inputs. The worst-case tolerance for y is the interval (ly, uy). The target for y, ty, is the midpoint of this
interval while Δy is half the width of the interval. The following example demonstrates the use of these
equations.

Example: Suppose that four identical components are stacked on top of each other and that we are
interested in controlling the overall height. Overall height is denoted by h. Specification limits for overall
height are from 3.9" to 4.1". The height of the four components are denoted by d1, …, d4. Suppose these
components have a tolerance of 1.00  0.03". Then:

h = d1 + d2 + d3 + d4 (Eq. 5)

lh = 4 × 0.97" = 3.88" (Eq. 6)

uh = 4 × 1.03" = 4.12" (Eq. 7)

th = (3.88" + 4.12")/2 = 4.00" (Eq. 8)

Δh = (4.12" − 3.88")/2 = 0.12" (Eq. 9)

The resulting worst-case tolerance for h is 4.00 ± 0.12" or (3.88", 4.12"). This tolerance is shown in Figure
1 along with the specification limits. Worst-case tolerancing predicts that the specification limits will not
be met and that the tolerance for component height must be tightened.

Figure 1: Worst-case tolerance for overall height


4-component stack example

When the output is the sum of the inputs as in the above example, the worst-case tolerance equations
simplify to:

ty = tx1 + tx2 + … + txn (Eq. 10)

Δy = Δx1 + Δx2 + … + Δxn (Eq. 11)

Equation 11 is frequently cited in textbooks on tolerancing. However, when more complex equations are
involved, Equations 1-4 must be used instead.
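As an illustrative sketch (not part of the original paper), Equations 1-4 can be evaluated in a few lines of Python by searching the corners of the input tolerance box; this corner search is exact when f is monotonic in each input, while a general f would require a numerical optimizer:

from itertools import product

def worst_case(f, tolerances):
    # Equations 1-4: worst-case tolerance ty ± Δy for y = f(x1, ..., xn),
    # with tolerances given as (txi, Δxi) pairs. Only the corners of the
    # tolerance box are searched, which is exact for monotonic f.
    corners = product(*[(t - d, t + d) for t, d in tolerances])
    values = [f(*c) for c in corners]
    ly, uy = min(values), max(values)        # Eq. 1 and Eq. 2
    return (ly + uy) / 2, (uy - ly) / 2      # ty (Eq. 3) and Δy (Eq. 4)

# 4-component stack: h = d1 + d2 + d3 + d4, each component 1.00 ± 0.03
t_h, d_h = worst_case(lambda *d: sum(d), [(1.00, 0.03)] * 4)
print(f"{t_h:.2f} ± {d_h:.2f}")              # 4.00 ± 0.12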

STATISTICAL TOLERANCES

Statistical tolerances are determined by a target, tx, and maximum standard deviation, σ̄x. In this article they
are denoted by tx ±σ σ̄x. They are more commonly denoted by tx ± Δx. When using the ± symbol, it is
most common to have Δx = 3σ̄x. However, this relationship is not formalized in the standards and the six
sigma approach uses Δx = 6σ̄x. Throughout this article we will assume that Δx = 3σ̄x. Using this
interpretation, both 1.00 ±σ 0.01" and 1.00 ± 0.03" denote the statistical tolerance with target tx = 1.00"
and maximum standard deviation σ̄x = 0.01".

The worst-case tolerance 1.00 ± 0.03" and the statistical tolerance 1.00 ± 0.03" both assume that units of
product fall in the interval (0.97", 1.03"). However, the statistical tolerance is more restrictive. The
statistical tolerance further requires that the units of product have an average of 1.00" and a standard
deviation no greater than 0.01". Statistical tolerancing assumes centered processes. It further assumes that
the inputs vary independently of each other, i.e., are not correlated. Figure 2 shows a production batch that
meets the worst-case tolerance 1.00 ± 0.03" but fails to meet the assumptions of the corresponding
statistical tolerance 1.00 ± 0.03".

Figure 2: Production batch satisfying the requirements of a


worst-case tolerance but not a statistical tolerance

If statistical tolerances are specified for the inputs, a statistical tolerance can be calculated for the output.
This amounts to determining the average and standard deviation of the output. Simulations can always be
used to predict these two values. However, there also exist a variety of methods for deriving approximate
and sometimes exact equations for the average and standard deviation. Details can be found in Taylor
(1991), Cox (1986) and Evans (1975). These approaches have a variety of names including statistical
tolerance analysis, propagation of errors and variation transmission analysis.
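As a minimal illustration of the simulation route (a sketch, not taken from the cited references), the average and standard deviation of the output can be estimated by sampling the inputs:

import random
import statistics

def simulate(f, inputs, n=100_000):
    # Estimate the average and standard deviation of y = f(x1, ..., xn)
    # by sampling each input as an independent normal, per the statistical
    # tolerancing assumptions. inputs is a list of (target, sigma) pairs.
    ys = [f(*(random.gauss(t, s) for t, s in inputs)) for _ in range(n)]
    return statistics.mean(ys), statistics.stdev(ys)

# 4-component stack, each component targeted at 1.00 with sigma 0.01
mu_h, sigma_h = simulate(lambda *d: sum(d), [(1.00, 0.01)] * 4)
print(f"mean ≈ {mu_h:.3f}, sigma ≈ {sigma_h:.3f}")   # ≈ 4.000, ≈ 0.020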

Returning to the case where y = f(x1, …, xn), equations can be derived for the average and standard
deviation of y in terms of the averages and standard deviations of the xi's. These will be denoted
μy(μx1, …, μxn, σx1, …, σxn) and σy(μx1, …, μxn, σx1, …, σxn) respectively. The statistical tolerance for y can then
be calculated as follows:

ty = μy(tx1, …, txn, σ̄x1, …, σ̄xn) (Eq. 12)

σ̄y = σy(tx1, …, txn, σ̄x1, …, σ̄xn) (Eq. 13)

When the output is the sum of the inputs, the statistical tolerancing equations simplify to:

ty = tx1 + tx2 + … + txn (Eq. 14)

σ̄y = √(σ̄x1² + σ̄x2² + … + σ̄xn²) (Eq. 15)

Multiplying both sides of Equation 15 by 3 gives:

3σ̄y = √((3σ̄x1)² + (3σ̄x2)² + … + (3σ̄xn)²) (Eq. 16)

Equation 16 is frequently cited in textbooks on tolerancing. It still holds even if we assume Δx = 6σ̄x.
When more complex equations are involved, Equations 12 and 13 should be used instead.
Example (continued): Returning to the example of four components stacked on top of each other, assume
each component has a statistical tolerance of 1.00 ±σ 0.01". Then the statistical tolerance for overall height
is:

th = 4 × 1.00" = 4.00" (Eq. 17)

σ̄h = √(4 × 0.01²) = 0.02" (Eq. 18)

3σ̄h = 0.06" (Eq. 19)

The resulting statistical tolerance for h is 4.00 ±σ 0.02" or (3.94", 4.06"). This tolerance is shown in Figure 3
along with the specification limits. Statistical tolerancing predicts that the specification limits will be easily
met. No change is required to the tolerance for component height.

Figure 3: Statistical tolerance for overall height


4-component stack example

COMPARISON OF THE TWO APPROACHES TO TOLERANCING

In the 4-component stack example, worst-case tolerancing results in a tolerance for overall height of 4.00 ±
0.12" while statistical tolerancing results in a tolerance of 4.00 ± 0.06". The worst-case tolerance is
double the width of the statistical tolerance. Statistical tolerancing predicts that the specification limits will
be met and that the tolerance for component height does not need to be tightened. Worst-case tolerancing
predicts that the specification limits will not be met and that the tolerance for component height must be
tightened. The two methods of tolerancing produce different results that can lead to opposite
conclusions. Selecting which method of tolerancing to use is an important decision. This section examines
the pros and cons of each approach.

Worst-case tolerancing is the safer approach. If the inputs are within their respective tolerances, the output
is guaranteed to be within its worst-case tolerance. This is especially important for products like heart
valves or critical components on airplanes. However, this guarantee comes at a high cost. Worst-case
tolerancing guards against a worst-case scenario that is highly unlikely if not impossible in most situations.
Worst-case tolerancing guards against all inputs simultaneously being at the same extreme. In the
4-component stack example, this would require that all four components be either 0.97" or 1.03". For this to
occur repeatedly would require the entire batch of components to be at one of the two extremes. This could
only occur if the process producing the components had zero variation.
Suppose instead that the process producing the components had a standard deviation of 0.005". As shown
in Figure 4, the process average must stay between 0.985" and 1.015". Otherwise some components would
fall outside the tolerance limits. The two normal curves in Figure 4 represent the worst-case conditions for
component height when the standard deviation is 0.005".

Figure 4: Worst-case conditions for component height when σ = 0.005"

When component height has an average of 0.985" and a standard deviation of 0.005", overall height will
have an average of 3.94" and a standard deviation of 0.01". Likewise, when component height has an
average of 1.015" and a standard deviation of 0.005", overall height will have an average of 4.06" and a
standard deviation of 0.01". These two sets of conditions represent the worst-case scenarios for overall
height. They are shown in Figure 5. As a result of the variation, so long as the components meet their
tolerance, overall height will easily meet the specification limits.

Figure 5: Resulting worst-case conditions for overall height when σ = 0.005"

Worst-case tolerancing generally overestimates the range of the outputs. This can result in the unnecessary
tightening of tolerances for the inputs, which drives up costs. Worst-case tolerancing is the safer but more
costly of the two approaches.

Statistical tolerancing results in narrower tolerances for the outputs. In the 4-component stack example,
statistical tolerancing resulted in a tolerance for overall height that was half as wide as that obtained using
worst-case tolerancing. As a result, the tolerances of the inputs do not have to be tightened as frequently or
by as much. Statistical tolerancing is the less costly of the two approaches. When applicable, we would like
to use statistical tolerancing. However, statistical tolerancing assumes centered processes. Violation of this
assumption can result in defective product.
To see how, let us return to the 4-component stack example. Figure 6 shows how component height is
supposed to behave when a statistical tolerance of 1.0 ± 0.03" is specified (μ = 1.0", σ = 0.01"). It also
shows the corresponding behavior of overall height (μ = 4.0", σ = 0.02").

Figure 6: How component height and overall height are supposed to behave

Consider what would happen if the component height actually behaved as in Figure 7 (μ = 0.976", σ =
0.002"). All the component heights are within the tolerance. However, the process is off-center. Figure 7
also shows the resulting behavior of overall height (μ = 3.904", σ = 0.004"). As a result of the components
being off-center, the overall height of some units exceeds the specification limits and all the units exceed
the statistical tolerance of 3.94" to 4.06". Statistical tolerancing, while less costly, can underestimate the
range of the outputs.

Figure 7: How component height and overall height actually behave

To counter the tendency of statistical tolerances to underestimate the range of the outputs, Bender (1962)
recommends multiplying statistical tolerances by a factor of 1.5. For the 4-component stack example, this
results in a tolerance of 4.0 ± 0.09" or (3.91", 4.09"). The resulting tolerance is halfway between the
statistical and worst-case tolerances. It is probably a more accurate estimate of the actual behavior of
overall height. However, the factor 1.5 is somewhat arbitrary and certainly not appropriate for every
problem.

Another way of adjusting for the tendency of statistical tolerances to underestimate the range of the outputs
is to estimate separately the amount that the average varies around the target and then calculate the total
variation of the input as follows:

σtotal = √(σwithin² + σmean²) (Eq. 20)

Benderizing tolerances amounts to assuming that the process average has a standard deviation equal to
112% of the within-lot standard deviation.

These two approaches are only appropriate when the process varies around the target over longer periods of
time. Neither approach properly accounts for a process that is consistently run off-target. For example,
sheeting thickness might purposely be run closely to the lower specification limit in order to reduce
material usage.

THE DILEMMA

Statistical tolerancing is less costly. We would like to use it when appropriate. If statistical tolerancing is
selected, statistical tolerances must be specified for all inputs. However, statistical tolerances are not
appropriate for all inputs. To be safe, if some inputs are best represented by worst-case tolerances,
worst-case tolerancing should be used instead. But if worst-case tolerancing is selected, worst-case tolerances
must be specified for all inputs, even those meeting the assumptions of statistical tolerancing. This drives up
cost. What is needed is a method of combining the two types of tolerances into a single analysis. Process
tolerancing, the topic of the next section, will allow us to do just this.

PROCESS TOLERANCING

Process tolerancing is a new approach to tolerancing that is implemented in Taylor (1997). A process
tolerance consists of a minimum average, maximum average and maximum standard deviation. For
example, it might be specified that the average for component height be between 0.99" and 1.01" and that
the standard deviation be no more than 0.0067". This process tolerance is shown in Figure 8. The two
normal curves represent the minimum and maximum average with the maximum standard deviation.

Figure 8: Process tolerance for component height

Process tolerances will be denoted by tx ± Δ̄x ±σ σ̄x. The term tx ± Δ̄x denotes an interval containing the
average. The minimum average is tx − Δ̄x and the maximum average is tx + Δ̄x. The maximum standard
deviation is σ̄x. For example, the process tolerance shown in Figure 8 is denoted by 1.0 ± 0.01 ±σ 0.0067".
Process tolerances ensure that individual units all fall in the range (tx − Δ̄x − 3σ̄x, tx + Δ̄x + 3σ̄x). As shown in
Figure 8, the process tolerance 1.0 ± 0.01 ±σ 0.0067" ensures individual components fall in the interval
(0.97", 1.03").

Process tolerances differ from worst-case tolerances in that worst-case tolerances represent requirements
for individual components while process tolerances represent requirements for the process producing the
components. Hence its name. Process tolerances are more similar to statistical tolerances. However,
process tolerances do not require that the process remain centered. Instead, process tolerances specify an
operating window that the process average must remain within.

If process tolerances are specified for the inputs, a process tolerance can be calculated for the output. This
requires equations for the average and standard deviation. Returning to the case where y = f(x1, …, xn), let
these two equations be represented by μy(μx1, …, μxn, σx1, …, σxn) and σy(μx1, …, μxn, σx1, …, σxn).
These two equations can be obtained using the same methods used to calculate statistical tolerances.
Further let the process tolerance for xi be denoted txi ± Δ̄xi ±σ σ̄xi. Then the resulting process tolerance for y,
denoted ty ± Δ̄y ±σ σ̄y, is:

ly = min μy(μx1, …, μxn, σ̄x1, …, σ̄xn) over txi − Δ̄xi ≤ μxi ≤ txi + Δ̄xi (Eq. 21)

uy = max μy(μx1, …, μxn, σ̄x1, …, σ̄xn) over txi − Δ̄xi ≤ μxi ≤ txi + Δ̄xi (Eq. 22)

ty = (ly + uy)/2 (Eq. 23)

Δ̄y = (uy − ly)/2 (Eq. 24)

σ̄y = max σy(μx1, …, μxn, σ̄x1, …, σ̄xn) over txi − Δ̄xi ≤ μxi ≤ txi + Δ̄xi (Eq. 25)

If the inputs satisfy their respective process tolerances, then y is guaranteed to satisfy the process tolerance
given by Equations 21-25.

When the output is the sum of the inputs, the process tolerancing equations simplify to:

ty = tx1 + tx2 + … + txn (Eq. 26)

Δ̄y = Δ̄x1 + Δ̄x2 + … + Δ̄xn (Eq. 27)

σ̄y = √(σ̄x1² + σ̄x2² + … + σ̄xn²) (Eq. 28)

Equation 27 is the same as Equation 11 for worst-case tolerances. Equation 28 is the same as Equation 15
for statistical tolerances. A process tolerance represents a worst-case tolerance for the average and a
statistical tolerance for the variation around the average. When more complex equations are involved,
Equations 21 through 25 should be used instead.
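For the sum-of-inputs case, Equations 26-28 can be scripted directly. A minimal Python sketch (illustrative only, using the notation above):

import math

def process_tolerance_sum(inputs):
    # Equations 26-28 for y = x1 + ... + xn, with each input given as a
    # (t, avg_window, max_sigma) triple, i.e. t ± Δ̄ ±σ σ̄. Targets and
    # average windows add; standard deviations combine root-sum-square.
    t = sum(ti for ti, di, si in inputs)                  # Eq. 26
    d = sum(di for ti, di, si in inputs)                  # Eq. 27
    s = math.sqrt(sum(si ** 2 for ti, di, si in inputs))  # Eq. 28
    return t, d, s

t, d, s = process_tolerance_sum([(1.00, 0.01, 0.00667)] * 4)
print(f"{t:.2f} ± {d:.2f} ±σ {s:.5f}")                   # 4.00 ± 0.04 ±σ 0.01334
print(f"units fall in ({t - d - 3*s:.2f}, {t + d + 3*s:.2f})")   # (3.92, 4.08)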
Example (continued): Returning to the example of four components stacked on top of each other, assume
each component has a process tolerance of 1.00 ± 0.01 ±σ 0.00667". This tolerance was shown in Figure 8.
It is roughly equivalent to the worst-case tolerance 1.00 ± 0.03" and the statistical tolerance 1.00 ±σ 0.01".
All three tolerances result in units in the range (0.97", 1.03"). The resulting process tolerance for overall
height is calculated as follows:

th = 4 × 1.00" = 4.00" (Eq. 29)

Δ̄h = 4 × 0.01" = 0.04" (Eq. 30)

σ̄h = √(4 × 0.00667²) = 0.01334" (Eq. 31)

The resulting process tolerance for h is 4.00 ± 0.04 ±σ 0.01334". This tolerance is shown in Figure 9 along
with the specification limits. Individual units of product are expected to fall in the interval (3.92", 4.08").
Process tolerancing predicts that the specification limits will be met. No change is required to the tolerance
for component height.

Figure 9: Process tolerance for overall height


4-component stack example

The results of the three different approaches to tolerance analysis are summarized in Figure 10. Process
tolerancing predicts the overall heights will be in the interval (3.92", 4.08"). This falls between the
worst-case tolerance of (3.88", 4.12") and the statistical tolerance of (3.94", 4.06").
Figure 10: Comparison of three approaches to tolerance analysis
4-component stack example

A UNIFIED APPROACH TO TOLERANCING

Process tolerancing represents a unified approach to tolerancing that encompasses both the previous
approaches. The process tolerance tx ± Δ̄x ±σ 0 is in fact a worst-case tolerance. In this case, Equations 21-25
simplify to Equations 1-4. The process tolerance tx ± 0 ±σ σ̄x is in fact a statistical tolerance. In this case,
Equations 21-25 simplify to Equations 12-13. Worst-case tolerancing guards against the worst-case
conditions, which can only occur in the unlikely event of zero variation (σ̄x = 0). Statistical tolerances make
the restrictive assumption of centered processes (Δ̄x = 0). In one sense statistical tolerances represent the
best-case conditions. Neither of these two extreme cases adequately describes all inputs. Process
tolerancing provides a middle ground. The purpose of a tolerance analysis is to predict the behavior of the
output based on knowledge of how the inputs behave. Key to its success is the ability to accurately describe
the behavior of the inputs. Process tolerancing provides the needed flexibility to accurately describe the
behavior of the inputs and to therefore accurately predict the behavior of the outputs.

Even if you never specify a process tolerance, you can still benefit from process tolerancing. Process
tolerancing allows both worst-case and statistical tolerances to be combined in the same analysis. Some
inputs are best described by worst-case tolerances. Other inputs are better described by statistical
tolerances. However, if worst-case tolerancing is used, all tolerances must be specified as worst-case
tolerances and if statistical tolerancing is used, all tolerances must be specified as statistical tolerances.
Using process tolerancing, you can select the type of tolerance for each input that best describes its
behavior. The decision you must make changes from "Which type of tolerancing should I use?" to "Which
type of tolerance best describes the behavior of each input?" The following seven examples illustrate how
tolerances might be specified for different circumstances.

Example 1 – Usage Temperature (environmental input): The flow rate of a pump is affected by usage
temperature. The pump is intended for use over a temperature range of 40 °F to 120 °F. A worst-case
tolerance of 80 °F ± 40 °F best describes the behavior of usage temperature since individual pumps could
be run continuously at each of the temperature extremes. In general, inputs representing usage and
environmental conditions as well as other inputs outside the control of the designer should be specified as
worst-case tolerances.
Example 2 – Die Temperature (automatically controlled input): Die temperature affects the seal
strength of a heat seal machine. The heat sealer continuously monitors die temperature and automatically
adjusts it. This ensures that die temperature is centered on the target. The behavior of die temperature is
best described using a statistical tolerance. Suppose the target is 150 °F and that the standard deviation of
die temperature is 5 °F. Then the statistical tolerance 150 ±σ 5 °F could be specified. In general, inputs
with automatic controls to keep them centered should be specified as statistical tolerances.

Example 3 – Motor Speed (manually controlled input): The flow rate of a pump is affected by motor
speed. The motors are purchased from a supplier. A capability study performed on the supplier’s process
indicates that their process is in control with a standard deviation of 0.1 rpm. The statistical tolerance
18.0 ±σ 0.1 rpm might be considered. However, there is a concern that the capability study, run during the
production of a single lot, is not representative of the full range of variation that will be experienced over
more extended periods of time. One possibility is to Benderize the tolerance by multiplying it by 1.5. This
results in the statistical tolerance 18.0 ±σ 0.15 rpm. Another approach is to allow an operating window
for the average. Suppose that an X̄ control chart is used to keep the process centered. If the process
shifts off-target by 1.5σ, the first point following the shift has a 70% chance of falling outside the control
limits. This control chart is capable of keeping the process within ±1.5σ of target. A process tolerance of
18 ± 0.15 ±σ 0.1 rpm could be specified. This allows a ±1.5σ operating window around the target. In general,
manually controlled processes are best represented by process tolerances specifying the operating window
that can be maintained. Stable processes and processes controlled using control charts typically hold a
±1.5σ operating window.

Example 4 – Line voltage (unstable process): The line voltage affects the seal strength of a sonic sealing
machine. Voltage fluctuates within the range 120 ± 10 volts. It cycles up and down. It can run for long
periods of time in the upper end or the lower end of the range. This input is best described using the
worst-case tolerance 120 ± 10 volts. In general, unstable processes should be handled using worst-case or process
tolerances. If process tolerances are used, the minimum and maximum averages should represent the
extremes seen over extended periods of time.

Example 5 – Sheeting Thickness (consistently off-target input): Sheeting thickness affects seal strength
of a heat sealer. The requirements have been specified as 15 ± 1 mil (a mil is one thousandth of an inch).
Suppose the sheeting supplier purposely runs below the target in order to reduce material costs. They average
14.6 mils with a standard deviation of 0.2 mils. Off-center processes violate the assumption of statistical
tolerancing. Statistical tolerances should not be used in this case. The process tolerance 15 ± 0.4 ±σ 0.2 mil
accounts for the off-center average. In general, only process or worst-case tolerances should be used for
consistently off-target processes.

Example 6 – 4-Component Stack (multiple identical components): In the 4-component stack problem,
the four components are identical. As a result, finished units consist of four components coming from the
same supplier lot. A capability study performed on the supplier's process indicates that their process is in
control with a standard deviation of 0.00667". Using historical data it is estimated that the process average
varies from lot to lot with a standard deviation of 0.0075". The total standard deviation is then
√(0.00667² + 0.0075²) = 0.01". However, specifying a statistical tolerance of 1.0 ±σ 0.01" and performing
a tolerance analysis can produce misleading results. When the lot average is low, all the components tend
to be low. When the lot average is high, all the components tend to be high. This creates a correlation
between the inputs, violating the assumption of independence. Statistical tolerances should not be used in
this case. A process tolerance of 1.0 ± 0.015 ±σ 0.00667" could be used instead. It accounts for the effect
of below and above average lots. In general, only process and worst-case tolerances should be used for
products containing multiple identical components or correlated inputs.

Example 7 – Spring Force (deterioration over time): The closure force of a car door hinge is affected by
spring force. The springs are purchased from a supplier. The target is 10 lbs. A capability study performed
on the supplier's process indicates that their process is in control with a standard deviation of 0.2 lbs. A
process tolerance of 10 ± 0.3 ±σ 0.2 lb. could be used, allowing a ±1.5σ operating window around the
target. This represents the manufacturing variation. However, in this case the springs are also prone to
deterioration over time. Life testing of the springs indicates that they deteriorate by 1 lb. over a 5 year
period. Deterioration is best represented by a worst-case tolerance since the closure force must be within
specifications at both the beginning and the end of the 5 year period. This deterioration corresponds to
the worst-case tolerance 10 +0/−1 lb. Combining these two tolerances results in 10 +0.3/−1.3 ±σ 0.2 lb. In
general, deterioration of an input should be represented as a worst-case tolerance. The deterioration effect
should then be combined with the manufacturing variation to obtain the final tolerance.

SIX SIGMA QUALITY

There is an important link between six sigma quality and process tolerancing. Six sigma quality, a term
introduced at Motorola, requires that a measurable characteristic such as flow rate or seal strength have no
more than 3.4 defects per million units produced. This is in turn translated into the requirement that the
average and standard deviation meet the conditions specified by the process tolerance ty ± Δ̄y ±σ σ̄y where:

ty = (USL + LSL)/2 (Eq. 32)

Δ̄y = 1.5 σ̄y (Eq. 33)

σ̄y = (USL − LSL)/12 (Eq. 34)

USL is the upper specification limit and LSL is the lower specification limit. This process tolerance is
shown in Figure 11.

Figure 11: Six Sigma Process Tolerance


For the process tolerance in Figure 11, the worst-case scenario is when the average is 1.5σ off-target,
resulting in an average that is 4.5σ from the nearest specification limit. Based on the normal distribution,
when the process average is 4.5σ from the specification limit, the defect rate is 3.4 defects per million. This
is the definition of six sigma quality. The name comes from the fact that six sigma quality requires
specifications that are ±6σ wide.
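Equations 32-34 and the 3.4 defects-per-million figure can be checked with a few lines of Python (an illustrative sketch; the specification limits are those of the stack example):

import math

def six_sigma_process_tolerance(lsl, usl):
    # Equations 32-34: the specification width is ±6σ (Eq. 34) and the
    # average is allowed to drift ±1.5σ around the midpoint (Eq. 32-33).
    t = (usl + lsl) / 2
    sigma = (usl - lsl) / 12
    delta = 1.5 * sigma
    return t, delta, sigma

t, delta, sigma = six_sigma_process_tolerance(3.9, 4.1)
print(f"{t} ± {delta:.4f} ±σ {sigma:.4f}")

# worst case: the average sits 4.5σ from the nearer specification limit
tail = 0.5 * math.erfc(4.5 / math.sqrt(2))       # one-sided normal tail
print(f"{tail * 1e6:.1f} defects per million")   # ≈ 3.4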

Since the requirements for six sigma quality are specified in the form of a process tolerance, it is only
natural that process tolerancing be used to ensure that these tolerances are achieved. Process tolerancing
provides an alternative to the combined dynamic and static mean tolerance analysis procedure described in
Harry and Lawson (1992).

SUMMARY

Worst-case tolerancing tends to overestimate the variation of the output. Cost suffers when the variation of
the output is overestimated. Statistical tolerancing tends to underestimate the output variation. Quality
suffers when the variation of the output is underestimated. Using process tolerancing, it is possible to
accurately predict the behavior of the output, providing the desired quality at a lower cost.

Process tolerancing provides a unified approach to tolerancing which encompasses both worst-case and
statistical tolerancing. It allows worst-case tolerances to be used for some inputs and statistical tolerances
to be used for others. No longer does one have to use the same type of tolerance for all inputs. The purpose of
a tolerance analysis is to predict the behavior of the output based on knowledge of how the inputs behave.
Key to its success is the ability to accurately describe the behavior of the inputs. Using process tolerancing,
you can describe behaviors for inputs not adequately described by either worst-case or statistical tolerances.
This provides solutions to commonly occurring problems such as off-center processes and multiple
identical components.

REFERENCE LIST

Bender, A. (1962). Benderizing Tolerances - A Simple Practical Probability Method of Handling
Tolerances for Limit-Stack-Ups. Graphic Science, December.

Cox, N. D. (1986). How to Perform Statistical Tolerance Analysis. Milwaukee: ASQC Quality Press.

Evans, D. H. (1975). Statistical Tolerancing: The State of the Art. Journal of Quality Technology 7 (1): 1-
12.

Harry, M. J. and Lawson J. R. (1992). Six Sigma Producibility Analysis and Process Characterization.
New York: Addison-Wesley.

Taylor, W. A. (1991). Optimization and Variation Reduction in Quality. Libertyville, IL: Taylor
Enterprises, Inc.

Taylor, W. A. (1997). VarTran User Manual. Libertyville, IL: Taylor Enterprises, Inc.

3. Linear Tolerance Analysis


When analyzing a linear tolerance stackup, if you take the root sum square of your tolerances,
there is a very good probability that your limits will fall within this zone. If you have just a few tolerances
and you feel lucky, you can take the root mean square of your tolerances (see the sketch after the definitions below).

o RSS (root sum square)

The square root of the sum of the squares of the tolerances. See Figure 1, below.

o RMS (root mean square)

The square root of the sum of the squares of the tolerances divided by the number of
tolerances.
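To make the two definitions concrete, here is a small Python sketch (illustrative; the three tolerance values are made up):

import math

tols = [0.005, 0.010, 0.003]   # example ± tolerances in a linear stack

worst = sum(tols)                                        # ±0.0180
rss = math.sqrt(sum(t ** 2 for t in tols))               # ±0.0116
rms = math.sqrt(sum(t ** 2 for t in tols) / len(tols))   # ±0.0067

print(f"worst case ±{worst:.4f}, RSS ±{rss:.4f}, RMS ±{rms:.4f}")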

Tolerance analysis is the general term for activities related to the study of accumulated variation in
mechanical parts and assemblies. Its methods may be used on other types of systems subject to
accumulated variation, such as mechanical and electrical systems. Engineers analyze tolerances for the
purpose of evaluating geometric dimensioning and tolerancing (GD&T). Methods include 2D tolerance
stacks, 3D Monte Carlo simulations, and datum conversions.

Tolerance stackups or tolerance stacks are terms used to describe the problem-solving process in
mechanical engineering of calculating the effects of the accumulated variation that is allowed by specified
dimensions and tolerances. Typically these dimensions and tolerances are specified on an engineering
drawing. Arithmetic tolerance stackups use the worst-case maximum or minimum values of dimensions
and tolerances to calculate the maximum and minimum distance (clearance or interference) between two
features or parts. Statistical tolerance stackups evaluate the maximum and minimum values based on the
absolute arithmetic calculation combined with some method for establishing likelihood of obtaining the
maximum and minimum values, such as Root Sum Square (RSS) or Monte-Carlo methods.

While no official engineering standard covers the process or format of tolerance analysis and stackups,
these are essential components of good product design. Tolerance stackups should be used as part of the
mechanical design process, both as a predictive and a problem-solving tool. The methods used to conduct a
tolerance stackup depend somewhat upon the engineering dimensioning and tolerancing standards that are
referenced in the engineering documentation, such as American Society of Mechanical Engineers (ASME)
Y14.5, ASME Y14.41, or the relevant ISO dimensioning and tolerancing standards. Understanding the
tolerances, concepts and boundaries created by these standards is vital to performing accurate calculations.

Tolerance stackups serve engineers by:

 helping them study dimensional relationships within an assembly.


 giving designers a means of calculating part tolerances.
 helping engineers compare design proposals.
 helping designers produce complete drawings.

Concerns with tolerance stackups


A safety factor is often included in designs because of concerns about:

 Operational temperature and pressure of the parts or assembly.


 Wear.
 Deflection of components after assembly.
 The possibility or probability that the parts are slightly out of specification (but passed inspection).
 The sensitivity or importance of the stack (what happens if the design conditions are not met).

Taguchi quality loss function and specification tolerance design

Title: Taguchi Quality Loss Function and Specification Tolerance Design

Authors: Erin Knight, Matt Russell, Dipti Sawalka, Spencer Yendell

Date Presented: November 28, 2006


Date Revised: December 8, 2006


Introduction
Genichi Taguchi provided a whole new way to evaluate the quality of a product. Traditionally, product
quality has been treated as a correlation between loss and market size for the product. Actual quality of the
product was thought of as adherence to product specifications. Loss due to quality has usually only been
thought of as additional costs in manufacturing (i.e. materials, re-tooling, etc.) to the producer up to the
time of shipment or sale of the product. It was believed that after sale of the product, the consumer was the
one to bear costs due to quality loss, either in repairs or the purchase of a new product. It has actually been
proven in most cases that in the end the manufacturer is the one to bear the costs of quality loss, due to
things like negative feedback from customers. Taguchi changed the perspective of quality by correlating
quality with cost and loss in dollars not only at the manufacturing level, but also to the customer and
society in general.

Taguchi Quality Loss Function


You will most likely encounter Taguchi methods in a manufacturing context. They are statistical methods
developed by Genichi Taguchi to improve the quality of products. Whereas statisticians before him
focused on improving the mean outcome of a process, Taguchi recognized that in an industrial process it is
vital to produce a product on target, and that variation around the mean causes poor manufactured
quality. For example, car windshields that hit the target thickness on average are useless if each one varies
significantly from the target specifications.

What are the losses to society from poor quality?

Taguchi's key argument was that the cost of poor quality goes beyond direct costs to the manufacturer such
as reworking or waste costs. Traditionally manufacturers have considered only the costs of quality up to the
point of shipping out the product. Taguchi aims to quantify costs over the lifetime of the product. Long
term costs to the manufacturer would include brand reputation and loss of customer satisfaction leading to
declining market share. Other costs to the consumer would include costs from low durability, difficulty
interfacing with other parts, or the need to build in safety margins.

Great, so what is the actual loss function?

Think for a moment about how the costs of quality would vary with the product's deviation on either side of
the target. If you were to plot the costs versus the diameter of a nut, for example, you would have a
quadratic function, with a minimum of zero at the target diameter. We expect therefore that the loss (L) will
be a quadratic function of the deviation from the target (m). The squared-error loss
function has been in use since the 1930s, but Taguchi modified the function to represent total losses. Next
we will walk through the derivation of the Taguchi Loss Function.

Loss function for one piece of product:

L = k (y − m)²

Where:
L = Loss in Dollars
y = Quality Characteristic (diameter, concentration, etc)
m = Target Value for y
k = Constant (defined below)

The cost of the countermeasure, or action taken by the customer to account for a defective product at either
end of the specification range, A0, is found by substituting y = m + Δ0 into the loss function:

A0 = k Δ0²

Now we can solve for the constant k:

k = A0 / Δ0²

Since we are not usually concerned with only one piece of product, the loss function for multiple units is

L = k × MSD

where MSD = σ² + (ȳ − m)² is the mean squared deviation from the target.

The process capability index (Cp) is used to forecast the quality level of non-defective products that will be
shipped out. The Cp has been used in traditional quality control and is defined rather abstractly as:

Cp = Δ / (3σ)

where Δ represents the tolerance (half the specification range) of the product.

Substituting k into the loss function and then rearranging in terms of Cp (for a process centered on the
target, where MSD = σ² and σ = Δ0/(3Cp)),

Taguchi Loss Function:

L = A0 / (9 Cp²)

The Taguchi quality loss function is a way to assess economic loss from a deviation in quality without
having to develop the unique function for each quality characteristic. As a function of the traditionally used
process capability index, it also puts this unitless value into monetary units.
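As a concrete sketch of the nominal-the-best loss function (illustrative Python; the function names are not standard):

def k_constant(A0, delta0):
    # k = A0 / Δ0²: the consumer loss A0 occurs at deviation Δ0 from target
    return A0 / delta0 ** 2

def loss_one(y, m, k):
    # nominal-the-best loss for a single unit: L = k (y - m)²
    return k * (y - m) ** 2

def loss_batch(ys, m, k):
    # average loss per unit for a batch: L = k × mean squared deviation
    msd = sum((y - m) ** 2 for y in ys) / len(ys)
    return k * msd

k = k_constant(A0=10.0, delta0=10.0)    # $10 consumer loss at ±10 g (paint example)
print(loss_one(185.0, 200.0, k))        # 22.5 dollars at 15 g off target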

Quality Loss Function for Various Quality Characteristics


There are three characteristics used to define the quality loss function:

1. Nominal–the-Best Characteristic
2. Smaller-the-Better Characteristic
3. Larger-the-Better Characteristic

Each of these characteristic types is defined by a different set of equations, which differ from the
general form of the loss function equation.
Nominal–Defined upper and lower boundaries

For a nominal characteristic, there is a defined target value for the product which has to be achieved. There
is a specified upper and lower limit, with the target specification being the middle point. Quality in this
case is defined in terms of deviation from the target value. An example of this characteristic is the thickness
of a windshield in a car.

The equation used to describe the loss function of one unit of product:

L = k (y − m)²

Where:
L = Loss in Dollars
y = Output Value
m = Target Value of Output
k = Proportionality Constant

The proportionality constant (k) for nominal-the-best characteristics can be defined as:

k = A0 / Δ0²

Where:
A0 = Consumer Loss (in Dollars)
Δ0 = Maximum Deviation from Target Allowed by Consumer

When there is more than one piece of product, the following form of the loss function is used:

L = k × MSD

where MSD = σ² + (ȳ − m)² is the Mean Squared Deviation from the target.

A graphical representation of the Nominal characteristic is shown below. As the output value (y) deviates
from the target value (m), increasing the mean squared deviation, the loss (L) increases. There is no loss
when the output value is equal to the target value (y = m).
Smaller-the-Better

In the case of Smaller-the-Better characteristic, the ideal target value is defined as zero. An example of this
characteristic is minimization of heat losses in a heat exchanger. Minimizing this characteristic as much as
possible would produce a more desirable product.

The equation used to describe the loss function of one unit of product:

L = k y²

Where:
k = Proportionality Constant
y = Output Value

The proportionality constant (k) for the Smaller-the-Better characteristic can be determined as follows:

k = A0 / y0²

Where:
A0 = Consumer Loss (in Dollars)
y0 = Maximum Consumer Tolerated Output Value

A graphical representation of the Smaller-the-Better characteristic is below. The loss is minimized as the
output value is minimized.
Larger-the-Better

The Larger-the-Better characteristic is just the opposite of the Smaller-the-Better characteristic. For this
characteristic type, it is preferred to maximize the result, and the ideal target value is infinity. An example
of this characteristic is maximizing the product yield from a process.

The equation used to describe the loss function of one unit of product:

L = k / y²

Where:
k = Proportionality Constant
y0 = Minimum Consumer Tolerated Output Value

The proportionality constant (k) for the Larger-the-Better characteristic is found the same way, by
substituting the consumer loss A0 and the minimum tolerated output value y0 into the loss function, which
gives k = A0 y0². The only difference between the two cases is the definition of y0.

A graphical representation of the Larger-the-Better characteristic is shown below. This characteristic is the
opposite of the Smaller-the-Better characteristic, as the loss is minimized as the output value is maximized.
Specifying Tolerances for a Process
A manufacturer is responsible for only shipping products that meet certain specifications. Products that do
not meet these determined specifications are defective and cannot be shipped for sale, resulting in a loss to
the company. In aiming to meet these specifications, manufacturers have a determined level of tolerance
for deviation from the desired target specification. The problem that often occurs is products that barely
meet specifications are shipped and fail after customer purchase. This causes negative feedback from
customers, which results in losses to the manufacturer and ultimately society. The standard fix for this
problem is to tighten up the tolerances. More stringent tolerances would result in fewer products failing on
customers, reducing losses in the market, but they would also result in increased costs to manufacturers.
Before Taguchi, there was no set method for determining optimal tolerances for a given process.

Since it is very difficult to quantify the loss to society for a defective product after customer purchase,
Taguchi predicts the quality level. The quality loss function is the basis for determining tolerances for a
process. In quality engineering, tolerance is defined as the deviation from the target, not the deviation
between products. Taguchi's method determines tolerances that aim for a balance between losses to the
manufacturer and the customer. To determine these tolerances, the quality loss function can be used to
determine how much it costs the manufacturer to fix the defective product before shipment, and compare
that value to the cost that the defective product would have on the customer (society).

Worked out Example 1


Suppose you are manufacturing green paint. To determine a specification for the pigment, you must
determine both a functional tolerance and customer loss. The functional tolerance, Δ0, is a value for every
product characteristic at which 50% of customers view the product as defective. The customer loss, A0, is
the average loss occurring at this point. Your target is 200 g of pigment in each gallon of paint. The average
cost to the consumer, HomePainto, is $10 per gallon from returns or adjusting the pigment. The paint
becomes unsatisfactory if it is out of the range 200 ± 10 g.

Calculate the loss imparted to society from a gallon of paint with only 185 g of pigment:

L = k (y − m)² = (10 / 10²) × (185 − 200)² = 22.50 dollars/gallon of paint

This figure is a rough approximation of the cost imparted to society from poor quality.

Worked out Example 2


Expanding on the first paint example, let's decide what the manufacturing tolerance should be. The
manufacturing tolerance is the economic break-even point for reworking scrap. Suppose the off-target paint
can be adjusted at the end of the line for $1 a gallon. At what pigment level, should the manufacturer spend
the $1 to adjust the paint?

The manufacturing tolerance is determined by setting L = $1:

1 = 0.1 × (y − m)²  →  |y − m| = √10 ≈ 3.16 g of pigment

As long as the paint is within ±3.16 g of the pigment target, the factory should not spend $1 to adjust the
pigment at the end of the line, because the loss without the rework will be less than $1. The manufacturing
tolerance represents a break-even point between the manufacturer and the consumer and sets limits for
shipping the product. If the paint manufacturer ships the product from example 1 with 185 g of pigment,
they are saving $1 of reworking costs but imparting a cost of $22.50 on society. These additional costs will
surface through loss of customer satisfaction and thus brand reputation, loss of market share and returned
products.
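The arithmetic of both worked examples condenses into a few lines of Python (an illustrative sketch using the values from the text):

import math

A0, delta0 = 10.0, 10.0          # consumer loss of $10 at ±10 g of pigment
k = A0 / delta0 ** 2             # 0.1 dollars per g²

loss_shipped = k * (185 - 200) ** 2     # ship 15 g off target: $22.50 to society
rework = 1.0                            # cost to adjust the pigment in-factory
mfg_tol = math.sqrt(rework / k)         # break-even deviation: ±3.16 g

print(f"loss at 185 g: ${loss_shipped:.2f}; rework once past ±{mfg_tol:.2f} g")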

Multiple Choice Question 1


How do Taguchi's methods differ from traditional ways of calculating losses due to poor quality?
A) They average losses over a 12 month period of time

B) They include not only losses to the manufacturer up to the point of shipping, but also losses to society

C) They put the losses in Yen instead of Dollars

D) They calculate the cost per product

Multiple Choice Question 2


How do you select a tolerance range?

A) Ask your consumers which products they are dissatisfied with

B) Find the break even point for fixing the product and the cost imparted to society from not fixing the
product

C) Make the tolerance as small as your equipment will allow

D) Use the standard safety factor of 4

What are Taguchi designs?


Taguchi designs are related to fractional factorial designs, many of which are large screening designs.

Genichi Taguchi, a Japanese engineer, proposed several approaches to experimental designs that are sometimes called "Taguchi Methods." These methods utilize two-, three-, and mixed-level fractional factorial designs. Large screening designs seem to be particularly favored by Taguchi adherents.

Taguchi refers to experimental design as "off-line quality control" because it is a method of ensuring good performance in the design stage of products or processes. Some experimental designs, however, such as those used in evolutionary operation, can be used on-line while the process is running. He has also published a booklet of design nomograms ("Orthogonal Arrays and Linear Graphs," 1987, American Supplier Institute) which may be used as a design guide, similar to the table of fractional factorial designs given previously in Section 5.3. Some of the well-known Taguchi orthogonal arrays (L9, L18, L27 and L36) were given earlier when three-level, mixed-level and fractional factorial designs were discussed.

If these were the only aspects of "Taguchi Designs," there would be little
additional reason to consider them over and above our previous
discussion on factorials. "Taguchi" designs are similar to our familiar
fractional factorial designs. However, Taguchi has introduced several
noteworthy new ways of conceptualizing an experiment that are very
valuable, especially in product development and industrial engineering,
and we will look at two of his main ideas, namely Parameter Design and
Tolerance Design.

Parameter Design

Taguchi advocated using inner and outer array designs to take into account noise factors (outer) and design factors (inner).

The aim here is to make a product or process less variable (more robust) in the face of variation over which we have little or no control. A simple fictitious example might be that of the starter motor of an automobile that has to perform reliably in the face of variation in ambient temperature and varying states of battery weakness. The engineer has control over, say, number of armature turns, gauge of armature wire, and ferric content of magnet alloy.

Conventionally, one can view this as an experiment in five factors. Taguchi has pointed out the usefulness of viewing it instead as a set-up of three inner array factors (turns, gauge, ferric %) over which we have design control, plus an outer array of factors over which we have control only in the laboratory (temperature, battery voltage).

Pictorially, we can view this design as a conventional design in the inner array factors (compare Figure 3.1) with the addition of a "small" outer array factorial design at each corner of the "inner array" box.

Let I1 = "turns," I2 = "gauge," I3 = "ferric %," E1 = "temperature," and E2 = "voltage." Then we construct a 2³ design "box" for the I's, and at each of the eight corners so constructed, we place a 2² design "box" for the E's, as shown in Figure 5.17.

FIGURE 5.17: Inner 2³ and outer 2² arrays for robust design, with 'I' the inner array and 'E' the outer array.

We now have a total of 8 × 4 = 32 experimental settings, or runs. These are set out in Table 5.7, in which the 2³ design in the I's is given in standard order on the left of the table and the 2² design in the E's is written out sideways along the top. Note that the experiment would not be run in the standard order but should, as always, have its runs randomized. The output measured is the percent of (theoretical) maximum torque.

TABLE 5.7: Design table, in standard order, for the parameter design of Figure 5.17

                              E1:   -1    +1    -1    +1
  Run    I1   I2   I3         E2:   -1    -1    +1    +1     MEAN    STD. DEV.
   1     -1   -1   -1               75    86    67    98     81.5      13.5
   2     +1   -1   -1               87    78    56    91     78.0      15.6
   3     -1   +1   -1               77    89    78     8     63.0      37.1
   4     +1   +1   -1               95    65    77    95     83.0      14.7
   5     -1   -1   +1               78    78    59    94     77.3      14.3
   6     +1   -1   +1               56    79    67    94     74.0      16.3
   7     -1   +1   +1               79    80    66    85     77.5       8.1
   8     +1   +1   +1               71    80    73    95     79.8      10.9

Note that there are four outputs measured on each row. These correspond to the four 'outer array' design points placed at each corner of the 'inner array' box; as the inner array box has eight corners, there are eight rows in all.

Each row yields a mean and standard deviation of the % of maximum torque.


Ideally there would be one row that had both the highest average torque
and the lowest standard deviation (variability). Row 4 has the highest
torque and row 7 has the lowest variability, so we are forced to
compromise. We can't simply `pick the winner.'
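For readers who want to check the row statistics, here is a small Python sketch (standard library only) that builds the crossed inner × outer array and recomputes each row's mean and sample standard deviation from the Table 5.7 data. The variable names are our own.

```python
import itertools
import statistics

# 2^3 inner array (design factors I1, I2, I3) in standard order:
# reverse each tuple so that I1 varies fastest, as in Table 5.7.
inner = [t[::-1] for t in itertools.product([-1, +1], repeat=3)]

# 2^2 outer array (noise factors E1, E2), one point per output column
outer = [t[::-1] for t in itertools.product([-1, +1], repeat=2)]
print(f"total runs: {len(inner)} x {len(outer)} = {len(inner) * len(outer)}")

# Observed % of maximum torque from Table 5.7, one row per inner-array corner
torque = [
    [75, 86, 67, 98],
    [87, 78, 56, 91],
    [77, 89, 78, 8],
    [95, 65, 77, 95],
    [78, 78, 59, 94],
    [56, 79, 67, 94],
    [79, 80, 66, 85],
    [71, 80, 73, 95],
]

for run, (setting, ys) in enumerate(zip(inner, torque), start=1):
    mean = statistics.mean(ys)       # row mean
    sd = statistics.stdev(ys)        # sample std. dev. (n - 1), as tabled
    print(f"run {run}: I = {setting}  mean = {mean:.2f}  sd = {sd:.2f}")
```

Rounding the printed values to one decimal reproduces the MEAN and STD. DEV. columns of the table, including the large standard deviation of row 3 caused by the outlying output of 8.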

One might also observe that all the outcomes occur at the corners of the design 'box', which means that we cannot see 'inside' the box. An optimum point might occur within the box, and we can search for such a point using contour plots. Contour plots were illustrated in the example of response surface design analysis given in Section 4.

Note that we could have used fractional factorials for either the inner or outer array designs, or for both.

Tolerance Design

Taguchi also advocated tolerance studies to determine, based on a loss or cost function, which variables have critical tolerances that need to be tightened.

This section deals with the problem of how, and when, to specify tightened tolerances for a product or a process so that quality and performance/productivity are enhanced. Every product or process has a number, perhaps a large number, of components. We explain here how to identify the critical components to target when tolerances have to be tightened.

It is a natural impulse to believe that the quality and performance of any item can easily be improved by merely tightening up on some or all of its tolerance requirements. By this we mean that if the old version of the item specified, say, machining to ±1 micron, we naturally believe that we can obtain better performance by specifying machining to ±½ micron.

This can become expensive, however, and is often not a guarantee of much better performance. One has merely to witness the high initial and maintenance costs of such tight-tolerance-level items as space vehicles, expensive automobiles, etc. to realize that tolerance design, the selection of critical tolerances and the re-specification of those critical tolerances, is not a task to be undertaken without careful thought. In fact, it is recommended that tolerance design be performed only after extensive parameter design studies have been completed, as a last resort to improve quality and productivity.

Example

Customers for an electronic component complained to their supplier that the measurement reported by the supplier on the as-delivered items appeared to be imprecise. The supplier undertook to investigate the matter.

The supplier's engineers reported that the measurement in question was made up of two components, which we label x and y, and the final measurement M was reported according to the standard formula

M = K x/y

with K a known physical constant. Components x and y were measured separately in the laboratory using two different techniques, and the results combined by software to produce M. Buying new measurement devices for both components would be prohibitively expensive, and it was not even known by how much the x or y component tolerances should be improved to produce the desired improvement in the precision of M.

Assume that in a measurement of a standard item the 'true' value of x is x0 and for y it is y0. Let f(x, y) = M; then the Taylor series expansion of f(x, y) about (x0, y0) is

$$f(x, y) = f(x_0, y_0) + (x - x_0)\frac{\partial f}{\partial x} + (y - y_0)\frac{\partial f}{\partial y} + \frac{(x - x_0)^2}{2}\frac{\partial^2 f}{\partial x^2} + (x - x_0)(y - y_0)\frac{\partial^2 f}{\partial x\,\partial y} + \frac{(y - y_0)^2}{2}\frac{\partial^2 f}{\partial y^2} + \cdots$$

with all the partial derivatives evaluated at (x0, y0).

Applying this formula to M(x, y) = Kx/y (for which the second derivative with respect to x is identically zero), we obtain

$$M = \frac{Kx_0}{y_0} + (x - x_0)\frac{K}{y_0} - (y - y_0)\frac{Kx_0}{y_0^2} - (x - x_0)(y - y_0)\frac{K}{y_0^2} + (y - y_0)^2\frac{Kx_0}{y_0^3} + \cdots$$

It is assumed known from experience that the measurements of x show a distribution with an average value x0 and a standard deviation σx = 0.003 x-units.

In addition, we assume that the distribution of x is normal. Since 99.74% of a normal distribution's range is covered by 6σ, we take 3σx = 0.009 x-units to be the existing tolerance Tx for measurements on x. That is, Tx = ±0.009 x-units is the 'play' around x0 that we expect from the existing measurement system.

It is also assumed known that the y measurements show a normal distribution around y0, with standard deviation σy = 0.004 y-units. Thus Ty = ±3σy = ±0.012 y-units.

Now ±Tx and ±Ty may be thought of as 'worst case' values for (x − x0) and (y − y0). Substituting Tx for (x − x0) and Ty for (y − y0) in the expanded formula for M(x, y), we have

$$M = \frac{Kx_0}{y_0} + T_x\frac{K}{y_0} - T_y\frac{Kx_0}{y_0^2} - T_xT_y\frac{K}{y_0^2} + T_y^2\frac{Kx_0}{y_0^3} + \cdots$$

The $T_y^2$ and $T_xT_y$ terms, and all terms of higher order, are going to be at least an order of magnitude smaller than the terms in Tx and in Ty, and for this reason we drop them, so that

$$M \approx \frac{Kx_0}{y_0} + T_x\frac{K}{y_0} - T_y\frac{Kx_0}{y_0^2}$$

Thus, a 'worst case' Euclidean distance of M(x, y) from its ideal value Kx0/y0 is (approximately)

$$\Delta_M = \sqrt{\left(T_x\frac{K}{y_0}\right)^2 + \left(T_y\frac{Kx_0}{y_0^2}\right)^2}$$

This shows the relative contributions of the components to the variation in the measurement.
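As a rough numerical companion to the propagation formula, the sketch below plugs in the stated tolerances Tx = 0.009 and Ty = 0.012. The constants K, x0, and y0 are never given in the example, so the values used here are placeholders of our own.

```python
import math

K, x0, y0 = 1.0, 1.0, 1.0      # hypothetical values; not given in the example
Tx, Ty = 0.009, 0.012          # existing tolerances, 3*sigma for x and y

# First-order sensitivities of M = K*x/y, evaluated at (x0, y0)
dM_dx = K / y0                 # |dM/dx|
dM_dy = K * x0 / y0 ** 2       # |dM/dy|

contrib_x = Tx * dM_dx         # worst-case contribution of x
contrib_y = Ty * dM_dy         # worst-case contribution of y
delta = math.hypot(contrib_x, contrib_y)   # worst-case Euclidean distance

print(f"x contributes {contrib_x:.4f}, y contributes {contrib_y:.4f}")
print(f"worst-case deviation of M: {delta:.4f}")
```

Comparing `contrib_x` with `contrib_y` shows directly which component's tolerance is the better target for the available budget.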

As y0 is a known quantity and reduction in Tx and in Ty each carries its own price tag, it becomes an economic decision whether one should spend resources to reduce Tx or Ty, or both.

In this example, we have used a Taylor series approximation to obtain a simple expression that highlights the effects of Tx and Ty on the precision of M. Alternatively, one might simulate values of M = K·x/y for a specified (Tx, Ty) and (x0, y0), and then summarize the results with a model for the variability of M as a function of (Tx, Ty).
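A minimal Monte Carlo version of that alternative might look like the following sketch. The normality assumption, the sample size, and the seed are ours, and K, x0, y0 are the same placeholders as before.

```python
import random
import statistics

K, x0, y0 = 1.0, 1.0, 1.0          # hypothetical constants, as before
sigma_x, sigma_y = 0.003, 0.004    # std. devs. given in the example
# (Tx = 3*sigma_x and Ty = 3*sigma_y under the normality assumption)

random.seed(1)
samples = [
    K * random.gauss(x0, sigma_x) / random.gauss(y0, sigma_y)
    for _ in range(100_000)
]

print(f"mean of M  : {statistics.mean(samples):.5f}")
print(f"sigma of M : {statistics.stdev(samples):.5f}")
# Repeating this for several (Tx, Ty) pairs lets one model the variability
# of M as a function of the tolerances.
```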

In other applications, no functional form is available and one must use experimentation to empirically determine the optimal tolerance design. See Bisgaard and Steinberg (1997).
