
Copyright © 1997 DaimlerChrysler Corporation

Design of Experiments
Volume I
(Taguchi Approach)
September 1987
December 1988 (Revision A)
December 1997 (Revision B)
Quality & Reliability Planning
Preface
The Design of Experiments Volume I (Taguchi Approach) is one in a series of "How to" manuals published by
Chrysler Corporation. The purpose of this manual is to:
1) Serve as an introduction to DOE using the methods applied by Dr. G. Taguchi.
2) Refresh the user who has already had some training in the Taguchi approach to DOE.
3) Demonstrate application of the techniques via example.
4) Present some useful topics, including some classical statistical approaches that are
beyond the scope of most introductory Taguchi method courses.
The nine chapters of this manual are presented in three sections:
Section 1 - Setting the stage for DOE
Section 2 - Analyzing experimental data
Section 3 - DOE approach to a total design process
This manual is not a substitute for a comprehensive DOE class because no text can take the place of an interactive
classroom experience. This manual supplements that experience. DOE classes at Chrysler are offered under the
auspices of the Chrysler Quality Institute (CQI).
Design of Experiments Volume I (Taguchi Approach) Revision B is an update of Revision A, which was prepared by
Mike I. Burke. The new manual contains the following revisions:
1) Corrections of typos
2) The addition of a section (Chapter VII - Supplement 2) on Engineering Optimization
(Robust Design).
Revision B was prepared by Ming-Wei Lu, William D. Carlson, David M. Krick and Richard J. Rudy.
In this manual, the DOE techniques advocated by Dr. Genichi Taguchi will be discussed. For the classical DOE
approach, the reader is encouraged to consult the Blue Dot Manual - Design of Experiments Volume II (Classical
Approach) and other reference books.
Any questions or comments regarding proposed improvements to this manual are encouraged and should be addressed
to the Chrysler Corporation - Quality & Reliability Planning Department, CIMS 484-04-10, attention Ming-Wei Lu.
Table Of Contents
Section 1 - Setting the Stage for DOE........................................................................................ 1
1.1 Introduction ...................................................................................................................................... 2
1.2 Planning the Experiment .................................................................................................................. 9
1.3 Supplement....................................................................................................................................... 17
1.4 Setting Up the Experiment ............................................................................................................... 19
1.5 Supplement....................................................................................................................................... 33
Section 2 - Analyzing Experimental Data................................................................................................. 35
2.1 Loss Function and Signal-to-Noise.................................................................................................. 37
2.2 Supplement....................................................................................................................................... 45
2.3 Analysis............................................................................................................................................ 47
2.4 Supplement....................................................................................................................................... 63
2.5 Analysis of Classified Data.............................................................................................................. 67
2.6 Supplement....................................................................................................................................... 75
2.7 Dynamic Situations .......................................................................................................................... 77
2.8 Supplement 1.................................................................................................................................... 87
2.9 Supplement 2.................................................................................................................................... 89
Section 3 - DOE Approach to a Total Design Process .......................................................................... 97
3.1 Parameter Design ............................................................................................................................. 99
3.2 Tolerance Design.............................................................................................................................. 107
The ANOVA Table .............................................................................................................................. 115
DOE Checklist................................................................................................................................................... 119
Chrysler DOE Examples ................................................................................................................................ 121
Speedometer Cable Noise Study.............................................................................................................. 123
SMC Process Improvement Study ........................................................................................................... 135
Bibliography ...................................................................................................................................................... 141
Appendix..................................................................................................................................... 143
Section 1
Setting the Stage for DOE
INTRODUCTION
1.1 Introduction
Design of Experiments (DOE) is a way to efficiently plan and structure an investigatory testing program.
Although DOE is often perceived to be a problem solving tool, its greatest benefit can come as a problem
avoidance tool. This document will present both approaches through examples.
This manual is organized into nine chapters. The reader who is looking for a basic DOE introduction in order
to participate with some understanding in a problem solving group is urged to study and understand the first
two chapters. The remaining chapters discuss more complex topics including problem avoidance in product
and process design, more advanced experimental layouts, and understanding the analysis in more detail.
Purpose
The purpose of this chapter is to:
1. Discuss the benefit of using a structured approach to DOE.
2. Present a brief description of the philosophy Dr. Taguchi uses in his approach to product and process
design and DOE.
Summary
# Thorough planning sets the stage for a successful DOE.
# The application of DOE can result in significant savings in time and testing resources.
# A properly structured experiment will give the maximum amount of information while a poorly
structured experiment may yield nothing or give erroneous answers.
# Taguchi methods are designed to minimize the total cost of the product (i.e., internal production cost
+ cost of customer operation, including the cost of customer dissatisfaction).
# Situations where simulations are possible offer special opportunities to use DOE.
# The difference between Taguchi methods and classical statistics is primarily in the philosophy of
application.
Why DOE (Design of Experiments) is a Valuable Tool
DOE is a valuable tool because:
1. DOE helps the responsible group plan, conduct and analyze test programs more efficiently.
2. DOE is an effective way to reduce cost.
Usually the term DOE brings to mind only the analysis of experimental data. The application of DOE
necessitates a much broader approach that encompasses the total process involved in testing. The skills
required to conduct an effective test program fall into three main categories:
1. Planning/Organizational
2. Technical
3. Analytical/Statistical
The planning of the experiment is a critical phase. If the groundwork laid in the planning phase is faulty, the
best analytic techniques will not salvage the disaster. The tendency to run off and conduct tests as soon as a
problem is found, without planning the outcome, should be resisted. The benefits from up-front planning
almost always outweigh the small investment of time and effort. Too often, time and resources are wasted
running down blind alleys that could have been avoided. Section 2 contains a more detailed discussion of
planning and the techniques used to assure a well planned experiment.
DOE can be a powerful tool in situations where the effect on a measured output of several factors, each at
two or more levels, must be determined. In the traditional "one factor at a time" approach, each test result is
used in a small number of comparisons. In DOE, each test is used in every comparison. This allows the
maximum amount of information to be extracted. A simplified example follows:
Example
Situation
A problem solving brainstorming group suspects 7 factors (named A, B, C, D, E, F and G), each at 2 levels
(Level 1 and Level 2), of influencing a critical, measurable function of the design. The group wants to
determine the best settings of these factors to optimize the measured test results.
One factor at a time
The group tests configurations containing the following combinations of the factors:
TEST      LEVEL OF FACTOR (1 & 2 INDICATE THE DIFFERENT LEVELS)      RESULTS
NUMBER    A  B  C  D  E  F  G                                        a      b
1 1 1 1 1 1 1 1 271.4 266.3
2 2 1 1 1 1 1 1 215.0 211.2
3 1 2 1 1 1 1 1 275.3 271.1
4 1 2 2 1 1 1 1 235.2 231.5
5 1 2 1 2 1 1 1 296.6 301.6
6 1 2 1 2 2 1 1 305.2 301.1
7 1 2 1 2 2 2 1 277.8 275.3
8 1 2 1 2 2 1 2 251.9 254.3
Two evaluations (a & b) are run at each test configuration rather than a single evaluation in order to attain
a higher confidence in the difference between factor levels (assumes no need for a "tie breaker"). The group
makes the following comparisons:
          TEST NUMBERS USED TO DETERMINE:      DIFFERENCE
FACTOR    LEVEL 1       LEVEL 2                (LEVEL 1 - LEVEL 2)
A 1a, 1b 2a, 2b 55.8
B 1a, 1b 3a, 3b -4.4
C 3a, 3b 4a, 4b 39.9
D 3a, 3b 5a, 5b -25.7
E 5a, 5b 6a, 6b -4.3
F 6a, 6b 7a, 7b 26.1
G 6a, 6b 8a, 8b 50.1
16 total tests are run and 4 tests are used to determine the difference between levels for each factor.
The best combination of factors is (1,2,1,2,2,1,1) for factors A through G.
However, using DOE
The group runs the following test configurations:
TEST      LEVEL OF FACTOR (1 & 2 INDICATE THE DIFFERENT LEVELS)
NUMBER    A  B  C  D  E  F  G                                        RESULTS
1 1 1 1 1 1 1 1 270.7
2 1 1 1 2 2 2 2 223.8
3 1 2 2 1 1 2 2 158.2
4 1 2 2 2 2 1 1 263.1
5 2 1 2 1 2 1 2 129.3
6 2 1 2 2 1 2 1 175.1
7 2 2 1 1 2 2 1 195.4
8 2 2 1 2 1 1 2 194.6
The group makes the following comparisons:
          TEST NUMBERS USED TO DETERMINE:      DIFFERENCE
FACTOR    LEVEL 1       LEVEL 2                (LEVEL 1 - LEVEL 2)
A 1,2,3,4 5,6,7,8 55.4
B 1,2,5,6 3,4,7,8 -3.1
C 1,2,7,8 3,4,5,6 39.7
D 1,3,5,7 2,4,6,8 -25.8
E 1,3,6,8 2,4,5,7 -3.3
F 1,4,5,8 2,3,6,7 26.3
G 1,4,6,7 2,3,5,8 49.6
8 total tests are run and all 8 tests are used to determine the difference between levels for each factor. This can
be done because each level of every factor equally impacts the determination of the average response at all
levels of all of the other factors (e.g., of the 4 tests run at A=1, 2 were run at B=1 and 2 were run at B=2; this
is also true of the 4 tests run at A=2). This relationship is called orthogonality. This concept is very
important, and the reader should work through the relationships between the levels of at least two other factors
to better understand the use of orthogonality in this testing matrix.
After pooling, the best level is [1, (1 or 2), 1, 2, (1 or 2), 1, 1] for A through G. Factors B and E are not
significant and may be set to the least expensive level.
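The analysis above is easy to reproduce. The following sketch (ours, not the manual's) recomputes the level differences from the L8 matrix and results tabulated above, and checks the orthogonality property described there: for any two factors, each combination of levels occurs in exactly two of the eight tests.

```python
from itertools import combinations, product

# L8 design matrix (columns A through G) and results, copied from the table above.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
results = [270.7, 223.8, 158.2, 263.1, 129.3, 175.1, 195.4, 194.6]

def level_difference(col):
    """Average response at level 1 minus average response at level 2."""
    lvl1 = [r for row, r in zip(L8, results) if row[col] == 1]
    lvl2 = [r for row, r in zip(L8, results) if row[col] == 2]
    return sum(lvl1) / len(lvl1) - sum(lvl2) / len(lvl2)

# Orthogonality check: for any two columns, each of the four level pairs
# (1,1), (1,2), (2,1), (2,2) occurs in exactly 2 of the 8 tests.
for c1, c2 in combinations(range(7), 2):
    pairs = [(row[c1], row[c2]) for row in L8]
    assert all(pairs.count(p) == 2 for p in product((1, 2), repeat=2))

for name, col in zip("ABCDEFG", range(7)):
    print(f"{name}: {level_difference(col):+.1f}")
```

Note that every one of the eight results enters every factor comparison, which is why each level average is based on four tests rather than two.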
Comparison of the Two Methods
                         NUMBER      ESTIMATE AT        CONFIDENCE INTERVAL
                         OF TESTS    THE BEST LEVELS    AT 90% CONFIDENCE
One Factor at a Time     16          301.1              ±3.7
DOE                       8          299.6              ±3.3
HALF AS MANY TESTS ARE REQUIRED USING A DOE APPROACH AND THE
ESTIMATE AT EACH LEVEL IS BETTER (4 tests per factor level versus 2) !!
THIS IS ALMOST LIKE GETTING SOMETHING FOR NOTHING. The only thing that is required is that
the group plan out what is to be learned before running any of the tests. The savings in time and testing
resources can be significant. Direct benefits include reduced product development time, improved
problem correction response and more satisfied customers.
This approach to DOE is also very flexible and can accommodate known or suspected interactions and
factors with more than two levels. A properly structured experiment will give the maximum amount of
information possible. An experiment that is less well designed will be an inefficient use of scarce resources.
Taguchi Philosophy
The main emphasis of Dr. Taguchi's approach is to minimize the total cost to society. He uses the "Loss
Function" (Section 2.1) to evaluate the total cost impact of alternative quality improvement actions. In Dr.
Taguchi's view, we all have an important societal responsibility to minimize the sum of the internal cost of
producing a product and the external cost the customer incurs in using the product. The customer's cost
includes the cost of dissatisfaction. This responsibility should be in harmony with every company's
objectives when the long term view of survival and customer satisfaction is considered. Profits may be
maximized in the short-run by deceiving today's customers or trading away the future. Traditionally, the next
quarter's or next year's "bottom line" has been the driving force in most corporations. Times have changed,
worldwide competition has grown, and customers have become more concerned with the total product cost.
In this environment, survival becomes a real issue and customer satisfaction must be a part of the Cost
Equation that drives the decision process.
Dr. Taguchi uses Signal-to-Noise (S/N) ratio as the operational way of incorporating the Loss Function into
experimental design. Experimental S/N is analogous to the S/N measurement developed in the
audio/electronics industry. S/N is used to assure that designs and processes are robust over different
conditions of uncontrollable "noise" factors. S/N is introduced in Section 2.1 and developed in examples in
later chapters.
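As a rough sketch of how these two quantities are computed, the following uses the standard quadratic loss L(y) = k(y - target)^2 and the nominal-is-best S/N ratio 10*log10(mean^2/variance). The cost coefficient k and the sample measurements are invented for illustration; the manual develops these quantities properly in Section 2.1 and later chapters.

```python
import math
import statistics

def quadratic_loss(y, target, k):
    """Taguchi quadratic loss for a single unit: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

def sn_nominal_is_best(values):
    """Nominal-is-best S/N ratio, in dB: 10*log10(mean^2 / variance).
    A higher S/N means the response varies less relative to its level."""
    m = statistics.mean(values)
    v = statistics.variance(values)
    return 10 * math.log10(m * m / v)

# Hypothetical measurements against a target of 10.0, with a made-up
# cost coefficient k (dollars per squared unit of deviation).
sample = [9.8, 10.1, 10.0, 9.9, 10.2]
k = 2.0
avg_loss = statistics.mean(quadratic_loss(y, 10.0, k) for y in sample)
print(f"average loss per part: ${avg_loss:.3f}")
print(f"S/N (nominal-is-best): {sn_nominal_is_best(sample):.1f} dB")
```

The average loss rewards both centering on target and low variability, which is exactly the behavior the S/N ratio is designed to maximize.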
There are three basic types of product design activity in Dr. Taguchi's approach:
1. System design
2. Parameter design
3. Tolerance design
System design involves basic research to understand Nature. System design involves scientific principles,
their extension to unknown situations and the development of highly structured basic relationships.
Parameter and tolerance design involve optimizing the system design using empirical methods. Taguchi's
methods are most useful in parameter and tolerance designs. The rest of this document will discuss these
applications.
Parameter design optimizes the product or process design to reach the target value with minimum possible
variability with the cheapest available components. Note the emphasis on striving to satisfy the requirements
in the least costly manner. Parameter design is discussed in Section 3.1.
The tolerance design phase only occurs if the variability achieved with the least costly components is too
large to meet product goals. In tolerance design, the sensitivity of the design to changes in component
tolerances is investigated. The goal is to determine which components should be more tightly controlled and
which are not as critical. Again, the driving force is cost. Tolerance design is discussed in Section 3.2.
Problem resolution might appear to be another type of product design; however, if targets are set correctly
and parameter and tolerance design occur, there will be little need for problem resolution. When problems do
arise, they are attacked using elements of both parameter and tolerance design, as the situation warrants.
Miscellaneous Thoughts
# A tremendous opportunity exists when the basic relationships between components are defined in
equation form in the system design phase. This occurs in electrical circuit design, finite element
analysis and other situations. In these cases, once the equations are known, testing can be simulated
on a computer and the "best" component values and appropriate tolerances obtained. It might be
argued that the true best values would not be located using this technique, only the local maxima
would be obtained. The equations involved are generally too complex to solve for the true best
values using calculus. Determining the local best values in the region that the experienced design
engineer considers most promising is generally the best available approach. It definitely has merit
over choosing several values and solving for the remaining ones. The cost involved is computation
time and the benefit is a robust design using the widest possible tolerances.
# Those readers who have some experience in classical statistics may wonder about the differences
between the classical and Taguchi approaches. Although there are some operational differences, the
biggest difference is in philosophical emphasis. Classical statistics emphasizes the producer's risk.
This means a factor's effect must be shown to be significantly different from zero at a high
confidence level to warrant a choice between levels. Taguchi uses percent contribution as a way to
evaluate test results from a consumer's risk standpoint. The reasoning is that, if a factor has a high
percent contribution, more often than not, it is worth pursuing. In this respect, the Taguchi approach
is less conservative than the classical approach. Dr. Taguchi uses orthogonal arrays extensively in
his approach and has formulated them into a "cookbook" approach that is relatively easy to learn and
apply. Classical statistics has several different ways of designing experiments including orthogonal
arrays. In some cases, another approach may be more efficient than the orthogonal array. However,
the application of these methods is complex and is usually left to the statisticians. Dr. Taguchi also
approaches uncontrollable "noise" differently. He emphasizes developing a design that is robust over
the levels of noise factors. This means that the design will perform at or near target regardless of
what is happening with the uncontrollable factors. Classical statistics seeks to remove the noise
factors from consideration by "blocking" the noise factors.
# In certain cases, the approaches Taguchi recommends may be more complicated than other
statistical approaches or may be questioned by classical statisticians. In these cases, alternative
approaches are presented as supplemental information at the end of the appropriate chapter.
Additional analysis techniques are also presented in the Chapter Supplements.
# The reader is encouraged to thoroughly analyze the data using all appropriate tools. Incomplete
analysis can result in incorrect conclusions.
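The simulation opportunity described in the first thought above can be illustrated with a toy sketch. Everything below is invented for illustration: a simple voltage-divider equation stands in for a known system-design relationship, and the candidate component values and tolerances are hypothetical. The point is only that once the governing equations are known, each candidate design can be "tested" in software over its tolerance extremes instead of on hardware.

```python
import itertools

def v_out(v_in, r1, r2):
    """Voltage divider output: a stand-in for any known design equation."""
    return v_in * r2 / (r1 + r2)

TARGET = 5.0   # desired output, volts (hypothetical)
V_IN = 12.0    # supply voltage (hypothetical)
TOL = 0.05     # +/-5% component tolerance (hypothetical)

# Candidate (R1, R2) designs to evaluate, in ohms.
candidates = [(r1, r2) for r1 in (1000, 1400, 2000) for r2 in (700, 1000, 1500)]

def worst_case_error(r1, r2):
    """Largest deviation from target over the four tolerance corners."""
    corners = itertools.product((1 - TOL, 1 + TOL), repeat=2)
    return max(abs(v_out(V_IN, r1 * a, r2 * b) - TARGET) for a, b in corners)

# Pick the candidate that stays closest to target even at tolerance extremes.
best = min(candidates, key=lambda c: worst_case_error(*c))
print("most robust candidate (R1, R2):", best,
      "worst-case error:", round(worst_case_error(*best), 3), "V")
```

In a real application the candidate grid would come from an orthogonal array over the component values, and the analysis would use the S/N machinery developed later in the manual; the exhaustive search here is only to keep the sketch short.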
PLANNING THE EXPERIMENT
1.2 Planning the Experiment
Purpose
The purpose of this chapter is to:
1. Impress upon the reader the importance of planning the experiment as a prerequisite to achieving
successful results.
2. Present some tools to use and points to consider during the planning phase.
3. Demonstrate DOE applications via simple examples.
Summary and Important Points
# A successful DOE application starts with understanding the situation and asking the right
question(s). To illustrate, suppose that the scrap rate for a particular process is too high. If the
process is not in control, a DOE may not be appropriate to improve the process. In this case,
Statistical Process Control (SPC) tools should be employed to learn more about the present process.
The important consideration is what type of test program will best answer the basic question.
Although this document primarily addresses the structuring of a DOE test program so that its results
can be analyzed using matrix manipulations, the reader should realize that there are many other
techniques that sometimes fit the situation better or should be used to yield a better preliminary
understanding. These techniques include: Failure Mode and Effects Analysis (FMEA), SPC,
Process Potential Studies, Gauge R & R (Repeatability and Reproducibility), Reliability
Development and Fault Tree Analysis. These techniques are described in the texts listed in the
bibliography.
# The initial group planning and set-up of an experiment define what questions will be answered by the
experiment. A poorly planned experiment will yield results that are of limited use at best.
# Most well-constructed experiments consist of at least two rounds of testing. The first round is used
to identify the major contributing factors. The second and subsequent testing rounds are used to
investigate the major contributors in more detail.
# As a rule-of-thumb, about 25% of the testing resources should be used to identify major contributors.
The balance of the resources should be used for detailed investigation and for confirmation.
# A brainstorming session should precede running the experiments to assure that the collective
questions of the appropriate experts are addressed and that each involved person agrees on the goals
of the experiment, methods, and each person's role.
# A brainstorming group sometimes brings together people with divergent views and "hidden
agendas". In these cases, it may take several meetings to build a sense of group identity and
purpose. The group leader needs to take a broad view of the situation and negotiate with the
others, both in the group and individually, to reach group consensus.
# Confirmation runs should always be made just in case the experiment does not contain all the
important factors.
Brainstorming
The first steps in planning a DOE are to define the situation to be addressed, identify the participants, and
determine the scope and the goal of the investigation. This information should be written down in terms that
are as specific as possible so that everyone involved can agree on and share a common understanding and
purpose. The experts involved should pool their understanding of the subject. In a brainstorming session,
each participant is encouraged to offer an opinion of which factors cause the effect. All ideas are recorded
without question or discussion at this stage. To aid in the organization of the proposed factors, a branching
(fishbone) format is often used where each main branch is a main aspect of the effect under investigation (e.g.
Material, Methods, Machine, Operator). The construction of a Cause-and-Effect (Fishbone or Ishikawa)
Diagram in a brainstorming session provides a structured, efficient way of assuring that pertinent ideas are
collected and considered and that the discussion stays on track. An example of a partially completed
Cause-and-Effect Diagram follows:
After the participants have expressed their ideas on possible causes, the factors are discussed and prioritized
for investigation. Usually, a three level (High, Moderate & Low) rating system is used to indicate the group
consensus on the level of suspected contribution. Quite often, the rating will be determined by a simple vote
of the participants. In situations where several different areas of contributing expertise are represented, a
participant's vote outside of his area of expertise may not have the importance of the expert's vote. Handling
this situation becomes a management challenge for the group leader and is beyond the scope of this
document.
During the brainstorming and prioritization process, the participants should keep the following things clearly
in mind:
1. The situation - What is the present state-of-affairs and why aren't we satisfied?
2. The goal - When will we be satisfied? (at least in the short term)
3. The constraints - How much time and resources can we use in the investigation?
4. The approach - Is DOE appropriate right now or should we do other research first?
5. The measurement technique and response - What measurement technique will be used and what
response will be measured? This is an important point that is sometimes not given much thought.
The most obvious response is not always the best. As an example:
Choice of Response
Consider the gap between two vehicle body panels. At first look, that gap could be used as the response in a
DOE aimed at achieving a target gap. However, the gap can be a symptom of more basic problems with:
# The width of the panels.
# The location of holes in the panels.
# The location of the attachment points on the body frame.
All of these must be right for the gap to be as intended. If the goal of the experiment is to identify which of
these has the biggest impact on the gap, the choice of the gap as a response is appropriate. If the purpose is
to minimize the deviation from the target gap, the gap may not be the right response. A more basic
investigation of the factors that contribute to the underlying cause is required. Don't confuse the symptom
with the underlying causes. This thought process is very similar to the thought process used in SPC and
FMEA and draws heavily upon the experience of the experts to frame the right question. In DOE, the choice
of an improper response could result in an inconclusive experiment or in a solution that might not work as
things change due to interactions between the factors. An interaction occurs when the change in the response
due to a change in the level of a factor is different for the different levels of a second factor. An example
follows:
[Figure: interaction example. Response plotted against Factor 1 (Low Level to High Level), with one line for
Factor 2 = Low and another for Factor 2 = High.]
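The interaction definition above can be made concrete with a few lines of code. This is a sketch with made-up response values, not data from the manual: the effect of Factor 1 is computed separately at each level of Factor 2, and a difference between those two effects signals an interaction.

```python
# response[(factor1_level, factor2_level)] -- hypothetical 2x2 results.
response = {
    ("low", "low"): 40, ("high", "low"): 70,    # Factor 1 effect = +30 at F2=low
    ("low", "high"): 55, ("high", "high"): 50,  # Factor 1 effect =  -5 at F2=high
}

effect_f1_at_f2_low = response[("high", "low")] - response[("low", "low")]
effect_f1_at_f2_high = response[("high", "high")] - response[("low", "high")]

# If the two effects differ, the change in response due to Factor 1 depends
# on the level of Factor 2 -- the definition of an interaction.
interaction = effect_f1_at_f2_high - effect_f1_at_f2_low
print("F1 effect at F2=low :", effect_f1_at_f2_low)
print("F1 effect at F2=high:", effect_f1_at_f2_high)
print("interaction:", interaction)
```

With no interaction the two lines in the plot would be parallel and `interaction` would be zero.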
The choice of the proper response characteristic will usually result in few interactions being significant. Since
there is a limitation as to how much information can be extracted from a given number of experiments,
choosing the right response will allow the investigation of the maximum number of factors in the minimum
number of tests without interactions between factors blurring the factor main effect. Interactions will be
discussed in more detail in Section 2.1. The proper set-up of an experiment is not only a statistical task.
Statistics serves to focus the technical expertise of the participating experts into the most efficient approach.
In summary, the response should:
1. Relate to the underlying causes and not be a symptom.
2. Be measurable (if possible, a continuous response should be chosen).
3. Be repeatable for the test procedure.
The prioritization process continues until the most critical factors that can be addressed within the resources
of the test program are identified. The next step is to determine:
1. Are the factors controllable or are some of them "noise" beyond our control?
2. Do the factors interact?
3. What levels of each factor should be considered?
4. How do these levels relate to production limits or specs?
5. Who will supply the parts, machines and testing facilities and when will they be available?
6. Does everyone agree to the statement of the problem, goal, approach and allocation of roles?
7. What kind of test procedure will be used?
When all of these questions have been answered, the person who is acting as the statistical resource for the
group can translate the answers into a hypothesis and experimental set-up to test the hypothesis. The
following example illustrates how the process can work:
Example
A particular bracket has started to fail in the field with a higher than ordinary frequency. Dan, the design
engineer, and Paula, the process engineer, are alerted to the problem and agree to form a problem solving
team to investigate the situation. Dan reviews the design FMEA, while Paula reviews the process FMEA.
The information relating to the previously anticipated potential causes of this failure and SPC charts for the
appropriate critical characteristics are brought to the first meeting.
The team consists of:
# Dan
# Paula
# Mike, the machine operator
# Mark, the metallurgist
# Doris, another manufacturing engineer who has taken a DOE course and has agreed to help the
group set up the DOE.
In the first meeting, the group discussed the applicable areas from the FMEAs, reviewed the SPC charts, and
began a Cause-and-Effect listing for the observed failure mode. At the conclusion of the meeting, Dan was
assigned to determine if the loads on the bracket had changed due to changes in the components attached to it,
Paula was asked to investigate if there had been any change to the incoming material, Mark was asked to
consider the testing procedure that should be used to duplicate field failure modes and the response that
should be measured, and all of the group members were asked to consider additions to the Cause-and-Effect
list.
At the second meeting, the participants reported on their assignments and continued constructing the
Cause-and-Effect Diagram. Their Cause-and-Effect Diagram is shown below with the specific causes shown as "C1,
C2, ..." rather than the actual descriptions which would appear on a real C & E Diagram.
The group easily reached the consensus that 7 of the potential causes were suspected of contributing to the
field problem. Doris agreed to set up the experiment assuming 2 levels for each factor and the others
determined what those levels should be to relate the experiment to the production reality. Doris returned to
the group and announced that she was able to use an L8 orthogonal array to set up the experiment and that 8
tests were all that were needed at this time. The test matrix for the 7 suspected factors is shown below:
TEST      LEVELS FOR EACH SUSPECTED FACTOR FOR EACH OF 8 TESTS
NUMBER    C1   C2   C7   C11   C13   C15   C16
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2
[Cause-and-Effect Diagram (referenced above): effect "Bracket Breaks"; main branches Operator/Machine
Interface, Machine, Material, Process and Design; specific causes labeled C1 through C17.]

Doris explained that this matrix would allow the group to determine if there was a difference in test responses
for the two levels of each factor and would prioritize the within-factor differences. Since the two levels of
each factor represented an actual situation that existed in production during the time the failed parts were
produced, this information could be used to correct the problem. By now Mark had identified a test
procedure and response that seemed to fit the requirements outlined in this section.
Two weeks were required to gather all the material and parts for the experiment and to run the experiment.
The test results were:
TEST NUMBER RESULT
1 10
2 13
3 15
4 17
5 14
6 16
7 19
8 21
While Doris entered the data into the computer for analysis, Dan and Paula plotted the data to see if anything
was readily apparent. The factor level plots are shown below:
When Doris finished with the computer, she reported that of all the variability seen in the data, 53.65% was
[Plots of averages (higher responses are better): response at levels 1 and 2 of each of the factors C1, C2, C7, C11, C13, C15 and C16, on a response scale of 13 to 18.]
due to the change in factor C2, 33.38% was due to the change in factor C1, and 11.92% was due to the
change in factor C11. The remainder, 1.04%, was due to the other factors and experimental error. The large
percentage variability contribution coupled with the fact that the differences between the levels of the three
factors are significant from an engineering standpoint indicate that these three factors may indeed be the
culprits. The computer analysis indicated that the best estimate for a test run at C1=2, C2=2, and C11=2 is
21.4. One of the eight tests in the experiment was run at this condition and the result was 21. Two
confirmatory tests were run and the results were 22 and 20. The group then moved into a second phase of the
investigation to identify what the spec limits should be on C1, C2, and C11. In the second round of testing, 8
tests were required to investigate 3 levels for each of the three factors. The set-up for the second round of
testing involves an advanced procedure (idle column method) that will be presented in a later chapter, so the
example will be concluded for now.
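The percentages Doris reported can be reproduced by hand. The sketch below computes Taguchi-style percent contributions from the L8 matrix and the eight results; the pooling of the four smallest factor sums of squares into an error estimate is an assumption made here to match the reported figures, since the original computer analysis is not shown.

```python
# Percent-contribution analysis for the L8 screening example above.
# The pooling choice (four smallest factors pooled as error) is an
# assumption used to reproduce the reported percentages.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
y = [10, 13, 15, 17, 14, 16, 19, 21]

n = len(y)
grand = sum(y) / n
ss_total = sum((v - grand) ** 2 for v in y)   # 83.875

# Sum of squares for a 2-level column: SS = (n/4) * contrast^2
ss = {}
for j, name in enumerate(factors):
    avg1 = sum(v for row, v in zip(L8, y) if row[j] == 1) / (n / 2)
    avg2 = sum(v for row, v in zip(L8, y) if row[j] == 2) / (n / 2)
    ss[name] = (n / 4) * (avg2 - avg1) ** 2

# Pool the four smallest factors as error (df = 4), then compute
# the percent contribution: rho = (SS - df * Ve) / SS_total
pooled = sorted(ss, key=ss.get)[:4]
ve = sum(ss[f] for f in pooled) / 4
for name in ("C2", "C1", "C11"):
    rho = 100 * (ss[name] - ve) / ss_total
    print(f"{name}: {rho:.2f}%")   # C2: 53.65%, C1: 33.38%, C11: 11.92%
```

The remainder, about 1.04%, is attributed to the pooled factors and experimental error, matching the text.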
In summary, the group in the example took the following actions:
1. Gathered appropriate back-up data.
2. Called together the right experts.
3. Made a list of the possible causes for the problem.
4. Prioritized the possible causes.
5. Determined the proper test procedure and response.
6. Reached agreement prior to running any tests.
7. Approached the investigation in a structured manner.
8. Asked and addressed one question at a time.
Final Point
There are many ways to approach a particular DOE. In the situation where testing or material is very
expensive, the most efficient experimental layout must be used. In the following chapters, techniques will be
introduced that help the experimenter optimize the experimental design. Additional opportunities to optimize
the experiment should always be considered. Consider the situation where there is a five part process. A
brainstorming group has constructed a Cause-and-Effect Diagram for a particular process problem. The
number of suspected factors for each part of the process is: Part 1: 5 factors; Part 2: 2 factors; Part 3: 5 factors; Part 4: 3 factors; Part 5: 6 factors (21 in all).
The obvious approach would be to set up the experiment with 21 factors. An alternative approach would be
to consider only 7 factors for the first round of testing. These would be the 6 factors within Part 5 plus one
factor for the best and worst input to Part 5. If the difference in input to Part 5 is significant, then the
investigation is expanded upstream. The decision to approach a problem in this manner is dependent upon
the beliefs of the experts. If the experts have a strong prior belief that a factor in Part 1, for instance, is
significant then a different approach should be used. This approach is also dependent upon the structure of
the situation.
The above example is presented to illustrate the point that the experimenter should always be alert for ways
to test more efficiently and effectively.
1.3 Supplement
An additional useful method of looking at the data is to plot the contrasts on normal probability paper. For a
two level factor, the contrast is the average of all the tests run at one level subtracted from the average of the
tests run at the other level. For the example in this chapter:
FACTOR
AVERAGE AT
LEVEL ONE
AVERAGE AT
LEVEL TWO
CONTRAST
(LEVEL 2 AVG. -
LEVEL 1 AVG.)
C1 13.75 17.50 3.75
C2 13.25 18.00 4.75
C7 15.75 15.50 -0.25
C11 14.50 16.75 2.25
C13 15.50 15.75 0.25
C15 15.50 15.75 0.25
C16 15.50 15.75 0.25
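The contrast column of the table can be verified directly from the L8 matrix and the eight test results; a short sketch of the arithmetic:

```python
# Recomputing the contrasts (level 2 average minus level 1 average)
# from the L8 matrix and the eight test results of the example.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
y = [10, 13, 15, 17, 14, 16, 19, 21]

for j, name in enumerate(factors):
    lvl1 = [v for row, v in zip(L8, y) if row[j] == 1]
    lvl2 = [v for row, v in zip(L8, y) if row[j] == 2]
    contrast = sum(lvl2) / len(lvl2) - sum(lvl1) / len(lvl1)
    print(f"{name}: {contrast:+.2f}")
# C1: +3.75, C2: +4.75, C7: -0.25, C11: +2.25, C13/C15/C16: +0.25
```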
These contrasts are plotted on normal probability paper versus median ranks. The values for median ranks
are available in many statistics and reliability books and are used in Weibull reliability plotting. The Appendix
of this manual contains normal probability paper that is already set up for plotting of contrasts for some of
the common test set-ups. For this example, the normal contrast plot is shown below.
[Normal contrast plot: the seven contrasts, from C7 at the low end through C15, C13, C16 and C11 to C1 and C2, plotted against the numerical rank corresponding to median rank probability.]
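When the median rank tables are not at hand, Benard's approximation, median rank = (i - 0.3)/(n + 0.4), reproduces the table values closely; the sketch below uses that approximation (an assumption in place of the tabulated values and the pre-printed Appendix paper) for the seven contrasts:

```python
# Median ranks via Benard's approximation (i - 0.3)/(n + 0.4);
# a stand-in for the table lookup described in the text.
n = 7  # seven contrasts to plot
for i in range(1, n + 1):
    print(f"rank {i}: median rank = {(i - 0.3) / (n + 0.4):.3f}")
# the middle rank (i = 4) falls at the 50% probability point
```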
To plot the contrasts on normal paper, the contrasts are ranked in numerical order. Here, from -0.25 (C7) to
4.75 (C2). The contrasts are then plotted against the median ranks or, in this case, against the rank number
shown on the left margin of the plot. Factors that are significant have contrasts that are relatively far from
zero and do not lie on a rough line defined by the rest of the factors. These factors can lie off the line on the
right side (Level 2 higher) or on the left side (Level 1 higher). In this example, there seem to be two separate
lines defined by the contrasts. This could be due either to:
- C1, C2, and C11 being significant while the others are not, or
- one or more bad data points occurring when C1, C2, and C11 are either all at the lower level or all at the higher level.
In this example, C1, C2, C11 and C16 were at Level 2 and the other factors were set at Level 1 for run
number 8. Depending on the situation, it would be worthwhile to either rerun that test or to investigate the
circumstances that accompanied that test (e.g., Was the test hard to run because of the factor settings or did
something else change that was not in the experiment ?). In the example, this combination of factors
represented the best observed outcome and the confirmation runs supported the results of the original test.
Plotting contrasts is a way of better understanding the data. It helps the experimenter get a visual picture of
what is happening with the data. Sometimes, information that might get lost in a table of data will be crystal
clear on a plot.
1.4 Setting Up the Experiment
Purpose
This chapter discusses:
1. The choice of the number of levels for each factor.
2. Fitting a linear graph to the experiment.
3. Special applications to reduce the number of tests.
4. How to handle noise factors in an experiment.
Summary
# In the Taguchi approach, the first round of testing should be a screening experiment which
evaluates many factors.
# The following round(s) of testing are usually used to investigate particular factors in more detail.
# The choice of levels to be tested is dependent upon the question that is to be addressed.
# Linear graphs are tools that aid the experimenter in setting up the experiment when interactions are
to be investigated.
# The degrees of freedom associated with a factor is the number of pairwise comparisons that can be
made.
# Orthogonal arrays specify the hardware set-up for each test.
# Interaction columns are not considered during hardware set-up.
# L12, L18 and L36 orthogonal arrays have special properties regarding interactions.
# A 4 level factor can be assigned to three 2 level columns in an orthogonal array.
# An 8 level factor can be assigned to seven 2 level columns.
# A 9 level factor can be assigned to four 3 level columns.
# There are two ways to assign a 2 level factor in a 3 level column.
# 3 level factors can be assigned in a 2 level array.
Choice of the Number of Factor Levels
To review:
A factor is a unique component or characteristic about which a decision will be made. A factor level is
one of the choices of the factor to be evaluated (e.g. if the screw speed of a machine is the factor to be
investigated, two factor levels might be 1200 rpm and 1400 rpm). Investigating a larger number of levels for
a factor requires more tests than investigating a smaller number of levels. There is usually a trade-off
required between the amount of information needed from the experiment to be very confident of the results
and the time and resources available. If testing and material are cheap and there is time available, then
evaluate many levels for each factor. Usually, this is not the case and two or three levels for each factor are
recommended. An exception to this occurs when the factor is non-continuous and several levels are of
interest. Examples of this type of factor include the evaluation of a number of suppliers, machines or types
of material. This situation will be discussed later in this chapter.
The first round of testing is usually designed to screen a large number of factors. To accomplish this in a
small number of tests, 2 levels per factor are usually tested. The choice of the levels depends upon the
question to be addressed. If the question is "Have we specified the right spec limits?" or "What happens to
the response in the worst possible situation?", then the choice of what the levels should be is clear.
A more complicated question to address is "How will the distribution in production affect the response?". As
suppliers become capable of maintaining low variability about a target value, testing at the spec limits will not
give a good answer to this question. There are at least two approaches that can be used:
1. Test at the production limits, as a worst case.
2. Test at other points that put less emphasis on the tails of the distribution where few parts are
produced and more emphasis on the bulk of the distribution. It is a difficult choice to pick two points
to represent an entire distribution. If this approach is being used, a rule-of-thumb is to choose levels
that encompass approximately 70% of that distribution (mean ± 1 standard deviation).
The main point of this discussion is that the choice of levels is an integral part of the experimental definition
and should be carefully considered by the group setting up the experiment.
The second and subsequent rounds of testing are usually designed to investigate particular factors in more
detail. Generally, three levels per factor are recommended. Using two levels allows the estimation of a linear
trend between the points tested. The testing of three levels gives an indication of non-linearity of the
response across the levels tested. This non-linearity can be used in determining specification limits to
optimize the response. Although this concept will be explored in more detail in the chapter on tolerance
design, its application can be illustrated as follows:
[Two plots of responses versus levels of Factor 1: levels A and B in the first round; levels B, C and D in the second round.]
1st Round of Testing
Level B of Factor 1 gives a response that is
more desirable than does Level A.
2nd Round of Testing
Level B gives a response that is more desirable
than does either C or D. However, the
differences are not great. Spec limits are set at C
& D with B as the nominal.
In a manner similar to the two level per factor situation, the choice of the specific three levels to be tested
depends upon the specific question under investigation. Testing at three levels can be used by the
experimenter to focus-in on a particular area of the possible factor settings to optimize the response over as
large a range as possible. If three levels of a factor are used to gain understanding for an entire distribution, a
rule-of-thumb is to choose the levels at the mean and mean ± 1.225 standard deviations which encompasses
approximately 78% of the distribution. These rules-of-thumb will be used in tolerance design (Chapter IX).
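The coverage figures behind both rules-of-thumb can be checked directly, assuming the factor follows a normal distribution in production (that distributional assumption, and the code itself, are illustrations rather than part of the original analysis):

```python
# Normal-distribution coverage of the two rule-of-thumb level choices.
from statistics import NormalDist

nd = NormalDist()
two_level = nd.cdf(1.0) - nd.cdf(-1.0)        # levels at mean +/- 1 sd
three_level = nd.cdf(1.225) - nd.cdf(-1.225)  # levels at mean +/- 1.225 sd
print(f"two-level rule:   {two_level:.1%}")   # roughly 70% (68.3%)
print(f"three-level rule: {three_level:.1%}") # approximately 78%
```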
Linear Graphs
After the number of levels has been determined for each factor, the next step is to decide which experimental
set-up to use. Dr. Taguchi uses a tool called "Linear Graphs" to aid the experimenter in this process. Linear
graphs show how factors may be assigned to orthogonal arrays. Linear graphs are provided in the Appendix
for the following situations:
ORTHOGONAL ARRAYS
1. All factors at two levels L4, L8, L12, L16, L32
2. All factors at three levels L9, L27
3. A mix of 2 and 3 level factors L18, L36
Degrees of Freedom
In the orthogonal array designation, the number following the L indicates how many testing set-ups are
involved. This number is also one more than the "degrees of freedom" available in the test set-up. Degrees
of freedom are the number of pairwise comparisons that can be made. In comparing the levels of a two level
factor, one comparison is made and one degree of freedom is expended. For a three level factor, two
comparisons are made as follows. First compare A & B, then compare whichever is "best" with C to
determine which of the three is "best". Two degrees of freedom are expended in this comparison. Once the
number of levels for each factor is determined, the degrees of freedom required for each factor are summed.
This sum plus one becomes the bottom limit to the orthogonal array choice.
The degrees of freedom for an interaction are determined by multiplying the degrees of freedom for the
factors involved in the interaction.
A two level factor interacting with a two level factor requires 1 degree of freedom (df) [1*1=1]
A three level factor interacting with a three level factor requires 4 dfs [2*2 =4]
A three level factor interacting with a two level factor requires 2 dfs [2*1=2]
Although the test response should be chosen to minimize the occurrence of interactions, there will be times
when the experts know or strongly suspect that interactions occur. In these cases, linear graphs allow the
interaction to be included into the experiment easily.
If more than one test is run for each test set-up, the total df is the total number of tests run minus one. The
dfs used for assigning factors remains the same as without the repetition. The other dfs are used to estimate
the non-repeatability of the experiment (Section 2.3).
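The degrees-of-freedom bookkeeping can be sketched in a few lines. The factors and interactions below anticipate the worked example later in this chapter (four 2 level factors with BxD and CxD suspected) and are used here only for illustration:

```python
# Degrees-of-freedom bookkeeping for choosing an orthogonal array.
# Factor levels and suspected interactions are assumptions taken
# from the later example in this chapter.
factor_levels = {"A": 2, "B": 2, "C": 2, "D": 2}
interactions = [("B", "D"), ("C", "D")]

# each factor contributes (levels - 1) df
df = sum(levels - 1 for levels in factor_levels.values())
# each interaction contributes the product of its factors' dfs
df += sum((factor_levels[a] - 1) * (factor_levels[b] - 1)
          for a, b in interactions)
min_runs = df + 1
print(f"df required: {df}, minimum tests: {min_runs}")
# df required: 6, minimum tests: 7 -> the L8 is the smallest standard fit
```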
Using Orthogonal Arrays and Linear Graphs
In an orthogonal array, the number of rows corresponds to the number of tests to be run and, in fact, each row
describes a test set-up. The factors to be investigated are each assigned to a column of the array. The value
that appears in that column for a particular test (row) tells to what level that factor should be set for that test.
As an example, consider an L4 test set-up:
COLUMN
ROW
1 2 3
1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1
If factor A was assigned to Column 1 and factor B was assigned to Column 2, then test number 3 would be
set up with A at level 2 and B at level 1.
The sum of the degrees of freedom required for each column (a 2 level column requires 1 df; a 3 level column
requires 2 df) equals the sum of the available dfs in the set-up. Another property of the arrays is that
orthogonality is maintained among the columns. Orthogonality, mentioned in Chapter I, is the property that
allows each level of every factor to equally impact the average response at each level of every other factor.
Using the L4 as an example, for the tests where column 1 (factor A) is at level 1, column 2 (factor B) is tested
at the low level and at the high level an equal number of times. This is also true of column 1 at level 2. In
fact, orthogonality is maintained for all three columns. The reader is invited to study the L4 and verify this
statement.
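That verification can also be done by machine; the sketch below checks that every pair of L4 columns contains each level combination exactly once:

```python
# Checking the balance (orthogonality) of the L4 columns: every pair
# of columns should contain each level combination exactly once.
from itertools import combinations

L4 = [[1, 1, 1],
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 1]]

for i, j in combinations(range(3), 2):
    pairs = [(row[i], row[j]) for row in L4]
    assert all(pairs.count(p) == 1
               for p in [(1, 1), (1, 2), (2, 1), (2, 2)])
print("every pair of columns is balanced")
```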
In the Appendix, near the orthogonal array are line and dot figures that look a little like "stick" drawings.
These are linear graphs. The dots represent the factors that can be assigned to the orthogonal array and the
lines represent the possible interaction of the two dots joined by the line. The numbers next to the dots and
lines correspond to the column numbers in the orthogonal array. For example, the linear graph for the L4 is:
[Linear graph: dots 1 and 2 joined by a line labeled 3.]
The interpretation of this linear graph is that if a factor is assigned to column 1 and a factor is assigned to
column 2, column 3 can be used to evaluate their interaction. If the interaction is not suspected of influencing
the response, another factor can be assigned to column 3. If no other factor remains, column 3 is left
unassigned and becomes an estimator of experimental error or non-repeatability. This will be explained in
more detail in Section 2.3.
The interrelationships among the columns are such that there are many ways of writing the linear graphs.
The linear graphs presented in the Appendix represent some of the commonly used types.
Column Interaction (Triangular) Table
Also shown in the Appendix near the orthogonal array is the Column Interaction Table for that particular
array. This table shows in which column(s) the interaction would be located for every combination of two
columns. The linear graphs have been constructed using this information. The L8 column interaction table is
shown as an example:
                  COLUMN
COLUMN    1   2   3   4   5   6   7
   1          3   2   5   4   7   6
   2              1   6   7   4   5
   3                  7   6   5   4
   4                      1   2   3
   5                          3   2
   6                              1
The interaction between two factors can be assigned by finding the intersection in the Column Interaction
Table of the orthogonal array columns to which those factors have been assigned. As an example, suppose
that a factor was assigned to column 3 and another factor was assigned to column 5. If the brainstorming
group suspects that the interaction of these two factors is a significant influence and includes that interaction
in the analysis, that interaction must be assigned to column 6 in the orthogonal array.
Note that the interaction of two 2 level factors (1 degree of freedom each) can be assigned to one column
which has 1 degree of freedom (1 * 1 = 1).
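For readers who prefer to generate such tables rather than look them up: in the standard numbering of the 2 level arrays, the interaction column of two columns turns out to be the bitwise exclusive-or of their column numbers. The sketch below rebuilds the L8 table from that observation (offered as a property of the standard numbering, not as the manual's derivation):

```python
# Rebuilding the L8 column interaction table from the bitwise-XOR
# regularity of the standard 2 level array numbering.
for i in range(1, 7):
    cells = " ".join(str(i ^ j) for j in range(i + 1, 8))
    print(f"column {i} with columns {i + 1} through 7: {cells}")
# e.g., columns 3 and 5 interact in column 3 ^ 5 = 6, as in the table
```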
Factors with 3 Levels
The orthogonal arrays, linear graphs and column interaction tables for situations with factors with three levels
are very similar to the two level situation. Since a three level factor requires 2 degrees of freedom, the three
level orthogonal array columns use 2 of the available dfs. The interaction of two 3 level factors requires 4 dfs
(2 * 2). In the linear graphs and column interaction table an interaction is shown with two column numbers.
If an interaction is being investigated, it must be assigned to two columns. The L9 orthogonal array, linear
graph and column interaction table are presented as an example.
L9
ORTHOGONAL ARRAY
         COLUMN
ROW    1  2  3  4
 1     1  1  1  1
 2     1  2  2  2
 3     1  3  3  3
 4     2  1  2  3
 5     2  2  3  1
 6     2  3  1  2
 7     3  1  3  2
 8     3  2  1  3
 9     3  3  2  1

LINEAR GRAPH
[Dots 1 and 2 joined by a line labeled 3, 4.]

COLUMN INTERACTION TABLE
               COLUMN
COLUMN      2     3     4
   1       3,4   2,4   2,3
   2             1,4   1,3
   3                   1,2
Interactions and Hardware Test Set-Up
The orthogonal array specifies the hardware set-up for each test. In setting-up the hardware for a particular
test in the orthogonal array, the experimenter should disregard the interaction columns and use only the
columns assigned to single factors. If an interaction is included in the experiment, its level will be based
solely on the levels of the interacting factors. The interaction will come into consideration during the analysis
of the data.
An example will serve to demonstrate the use of the linear graph and the layout of a simple experiment.
Example
A brainstorming group has constructed a Cause-and-Effect Diagram and determined that 4 factors (A through
D) are suspected of being contributors to the problem. In addition, two interactions are suspected (BxD and
CxD). The group has decided to use two levels for each factor. The experiment is laid out as follows:
1. Determine the df requirement.
4 dfs are required for the main factors (1 for each 2 level factor).
2 dfs are required for the interactions (1 for each interaction of 2 level factors).
A total of 6 dfs are required.
2. Determine a likely orthogonal array.
Since 6 dfs + 1 = 7 tests minimum and all factors have 2 levels, the L8 array is a likely place to start.
3. Draw the linear graph required for the experiment.
The linear graph required for the experiment looks like:
[Linear graph: dot D joined to dot B and to dot C by interaction lines; dot A stands alone.]
4. Compare the linear graph(s) of the orthogonal array to the linear graph required for the experiment
One of the linear graphs for the L8 that could fit is:
[One L8 linear graph: dots 1, 2 and 4 joined by interaction lines 3 (between 1 and 2), 5 (between 1 and 4) and 6 (between 2 and 4), with dot 7 standing alone.]
5. Assign factors to the orthogonal array columns.
Make the following column assignments:
COLUMN 1 2 3 4 5 6 7
Factor D B BxD C CxD A Unassigned
where, BxD indicates the interaction between B and D.
Choice of the Test Array
For a particular experiment, the test response should be chosen to minimize interactions and the smallest
orthogonal array that fits the situation should be used. The emphasis should be on assigning factors to as
many columns as possible. This allows the question posed by the situation to be answered using a minimum
number of tests.
The consideration of whether an interaction exists or not is an important issue that must be addressed in
setting up the experiment. If an interaction does exist and provision is not made for it in the experimental set-
up, its effect becomes "mixed up" or confounded with the effect of the factor assigned to the column where
the interaction would be assigned. The analysis will not be able to separate the two. This is an important
reason why confirmatory runs are necessary. Confirmatory runs should be made with the non-significant
factors set to their different levels, just to make sure.
Another way to minimize the effect of interactions is to use an L12, L18, or L36 orthogonal array. These
arrays have a special property that some, or all, of the interactions between columns are spread across all
columns more or less equally instead of being concentrated in a column. This property can be used by the
experimenter to rank the contribution of factors without worrying about interactions. There are times when this
can be a valuable tool for the experimenter. The linear graphs for those arrays tell which interactions can be
estimated and which cannot.
Factors with 4 Levels
A factor with 4 levels can easily be assigned to a 2 level orthogonal array. A 4 level factor requires 3 dfs.
Since a 2 level column has 1 df, three 2 level columns are used for the 4 level factor. The three columns
chosen must be represented in the linear graph by two dots and the connecting interaction line. One of the L8
linear graphs is:
[L8 linear graph: dots 1, 2 and 4 with interaction lines 3, 5 and 6; the column 1, 2 and 3 designators are enclosed by a line; dot 7 stands alone.]
The line enclosing the columns 1, 2, 3 designators indicates that these columns will be used for a 4 level
factor. The particular level of the 4 level factor for each run can be determined by taking any 2 of the 3
columns that are to be combined and assigning the 4 combinations to the 4 levels of the factor. As an
example, consider columns 1 and 2.
 COLUMN      4 LEVEL
 1    2      FACTOR
 1    1         1
 1    2         2
 2    1         3
 2    2         4
Although column 3 is not used in determining the level of the 4 level factor, its df is used and no other factor
can be assigned to it.
In the orthogonal array, one of the columns used for the 4 level factor is set to the levels of the 4 level factor
and the other 2 columns are set to zero for each test. For the L8 example, the modified array would be:
TEST COLUMNS
NUMBER 1 2 3 4 5 6 7
1 0 0 1 1 1 1 1
2 0 0 1 2 2 2 2
3 0 0 2 1 1 2 2
4 0 0 2 2 2 1 1
5 0 0 3 1 2 1 2
6 0 0 3 2 1 2 1
7 0 0 4 1 2 2 1
8 0 0 4 2 1 1 2
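The construction of the modified array can be sketched as follows; the mapping of the four column 1 and 2 combinations to levels 1 through 4 follows the table above:

```python
# Constructing the 4 level column in the L8 from columns 1 and 2;
# column 3 is consumed, and the combined level is written in its place.
L8 = [[1, 1, 1, 1, 1, 1, 1],
      [1, 1, 1, 2, 2, 2, 2],
      [1, 2, 2, 1, 1, 2, 2],
      [1, 2, 2, 2, 2, 1, 1],
      [2, 1, 2, 1, 2, 1, 2],
      [2, 1, 2, 2, 1, 2, 1],
      [2, 2, 1, 1, 2, 2, 1],
      [2, 2, 1, 2, 1, 1, 2]]

level_map = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}
modified = [[0, 0, level_map[(row[0], row[1])]] + row[3:] for row in L8]
for row in modified:
    print(row)
# the 4 level column reads 1, 1, 2, 2, 3, 3, 4, 4 down the eight tests
```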
Factors with 8 Levels
In a similar manner, a factor with 8 levels requires 7 dfs and takes up 7 two level columns. The particular
columns are chosen by taking a closed triangle in the linear graph and the interaction column of one of the
points of the triangle with the opposite base. One example is:
[L8 linear graph: a closed triangle of dots 1, 2 and 4 with edges 3, 5 and 6, together with column 7, the interaction of point 1 with the opposite base (column 6).]
The column interaction table indicates that the interaction of columns 1 and 6 will be in column 7. The actual
factor level for each test is determined by looking at the combinations of the three columns that make up the
corners of the triangle. In the example above:
COLUMN 8 LEVEL
1 2 4 FACTOR
1 1 1 1
1 1 2 2
1 2 1 3
1 2 2 4
2 1 1 5
2 1 2 6
2 2 1 7
2 2 2 8
None of the 7 columns which are used for the 8 level factor can be assigned to another factor.
In the orthogonal array, one of the columns used for the 8 level factor is set to the levels of the 8 level factor
and the other 6 columns are set to zero for each test.
Factors with 9 Levels
A factor with 9 levels is handled in a similar manner to a 4 level factor. The 9 level factor requires 8 dfs
which are available in four 3 level columns. Two 3 level columns and their two interaction columns are used.
One of the L27 linear graphs is:
[L27 linear graph: dots 1 and 2 joined by interaction columns 3, 4, with the column 1, 2, 3, 4 designators enclosed by a line; the remaining dots 5, 8 and 11 and interaction column pairs 6,7, 9,10 and 12,13 are also shown.]
The line enclosing the column 1, 2, 3, 4 designators indicates that these four columns will be used for the 9
level factor. The level of the 9 level factor to be used in a particular test can be determined by taking any 2 of
the 4 columns that are to be combined and assigning their 9 combinations to the 9 levels of the factor. This is
left to the reader to demonstrate.
In the orthogonal array, one of the columns used for the 9 level factor is set to the levels of the 9 level factor
and the other 3 columns are set to zero.
Using Factors with 2 Levels in a 3 Level Array
1. Dummy Treatment
Often the situation calls for a mix of factors with 2 and 3 levels. A 2 level factor can be assigned to a 3 level
column by using one of the 2 levels as the third level in the test determination. Consider using a 2 level factor
in an L9 array:
TEST COLUMN
NUMBER 1 2 3 4
1 1 1 1 1
2 1 2 2 2
3 1 3 3 3
4 2 1 2 3
5 2 2 3 1
6 2 3 1 2
7 1 1 3 2
8 1 2 1 3
9 1 3 2 1
In column 1, the second set of 1s (in experiments 7, 8 and 9) is the dummy treatment. In the analysis, the
average at level 1 of the factor assigned to column 1 is determined with more accuracy than the average at
level 2 since more tests are run at level 1. The level that is of more interest to the experimenter should be the
one used for the dummy treatment.
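The dummy treatment on column 1 can be sketched as a one-line substitution, with level 1 (the level of greater interest here) standing in for level 3:

```python
# Dummy treatment: a 2 level factor in the 3 level column 1 of the L9,
# with level 1 repeated in place of level 3, as in the array above.
col1 = [1, 1, 1, 2, 2, 2, 3, 3, 3]   # original L9 column 1
dummy = [1 if level == 3 else level for level in col1]
print(dummy)   # [1, 1, 1, 2, 2, 2, 1, 1, 1]
```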
2. Combination Method
Two 2 level factors can be assigned to a single 3 level column. This is done by assigning three of the four
combinations of the two 2 level factors to the 3 level factor and the fourth combination is not tested. As an
example, two 2 level factors are assigned to a 3 level column as follows:
FACTOR A FACTOR B 3 LEVEL COLUMN
1 1 1
1 2 2
2 1 3
Note that the combination A2B2 (A at level 2, B at level 2) is not tested. In this approach, information about the AB interaction is not
available and many ANOVA (analysis of variance) computer programs are not able to break apart the effect
of A and B. A way of doing that by hand will be presented at the end of Section 2.3.
Using Factors with 3 Levels in a 2 Level Array
A factor with 3 levels requires 2 dfs. Although it would seem that two 2 level columns combined would give
the required dfs, the interaction of those two columns is confounded with the 3 level factor. The approach
used to assign one 3 level factor to a 2 level array is to construct a 4 level column and use the dummy
treatment approach to assign the 3 level factor to the 4 level column.
Assigning more than one 3 level factor to a 2 level array uses a variation of this approach. Recall that in
constructing a 4 level column, three 2 level columns are used. These three must be shown in the linear graph
as two dots connected by an interaction line. Any two of these columns are used to determine the level to be
tested. The third column's df is used up in assigning a 4 level factor. In assigning a 3 level factor, the third
column's df is not used for the 3 level factor since it requires only 2 dfs. However, the third column is
confounded with the 3 level factor and should not be assigned to another factor. That column is said to be
"idle". When two or more 3 level factors are assigned to a 2 level array, the 3 level factors can share the
same idle column. An example of assigning two 3 level factors to an L8 array is:
[L8 linear graph: dot 1 (idle) joined to dot 2 by line 3 and to dot 4 by line 5; columns 6 and 7 remain for 2 level factors.]
Here, column 1 would be idle (a factor cannot be assigned to column 1), columns 2 and 3 would be used to
determine the levels of a 3 level factor, columns 4 and 5 would be used to determine the levels of the second 3
level factor and columns 6 and 7 are available for 2 level factors. The modified orthogonal array for this
experiment would be (level 2 is the dummy treatment in both cases):
TEST COLUMNS
NUMBER 1 2 3 4 5 6 7
1 1 0 1 0 1 1 1
2 1 0 1 0 2 2 2
3 1 0 2 0 1 2 2
4 1 0 2 0 2 1 1
5 2 0 3 0 2 1 2
6 2 0 3 0 3 2 1
7 2 0 2 0 2 2 1
8 2 0 2 0 3 1 2
The idle column approach cannot be used with 4 level factors. If it were attempted, insufficient degrees of
freedom would exist and the 4 level factors would be confounded.
Other Techniques
There are other techniques for setting up an experiment that will be mentioned here but will not be discussed
in detail. The reader is invited to read the chapter on Pseudo-Factor Design in Quality Engineering -
Product & Process Design Optimization by Yuin Wu and Dr. Willie Hobbs Moore or to consult with a
statistician to use these techniques.
1. Nesting of Factors
Occasionally, levels of one factor have meaning only at a particular level of another factor. Consider the
comparison of two types of machine. One is electrically operated and the other is hydraulically operated. The
voltage and frequency of the electrical power source and the temperature and formulation of the hydraulic
fluid are factors that have meaning for one type of machine and not the other. These factors are nested within
the machine level and require a special set-up and analysis which is discussed in the reference given above.
2. Setting up experiments with factors with large numbers of levels
Experiments with factors with large numbers of levels can be assigned to an experimental layout using
combinations of the techniques that have been covered in this booklet.
Inner Arrays & Outer Arrays
Factors are generally divided into three basic types:
1. Control factors are the factors which are to be optimized to attain the experimental goal.
2. Noise factors represent the uncontrollable elements of the system. The optimum choice of control
factor levels should be robust over the noise factor levels.
3. Signal factors represent different inputs into the system for which system response should be
different. For example, if several micrometers were to be compared, the standard thicknesses to be
measured would be levels of a signal factor. The optimum micrometer choice would be the one that
operated best at all the standard thicknesses. Signal factors will be discussed in more detail in
Section 2.7.
Control and noise factors are usually handled differently in setting up an experiment. Control factors are
entered into an orthogonal array called an inner array. The noise factors are entered into a separate array
called an outer array. These arrays are so related that every testing set-up in the inner array is evaluated
across every noise set-up in the outer array. As an example, consider an L8 inner (control) array with an L4
outer (noise) array.
L8 (inner array), with factors A through G assigned to columns 1 through 7, and the L4 (outer array) turned on its side so that each of its four runs heads a column of test results:

                                  L4 (on its side)
                                  1  2  1  2
                                  1  1  2  2
                                  1  2  2  1
L8
TEST    A  B  C  D  E  F  G
NO.     1  2  3  4  5  6  7       Test Results
 1      1  1  1  1  1  1  1       25  27  30  26
 2      1  1  1  2  2  2  2       25  27  21  19
 3      1  2  2  1  1  2  2       18  21  19  22
 4      1  2  2  2  2  1  1       26  23  27  28
 5      2  1  2  1  2  1  2       15  11  12  14
 6      2  1  2  2  1  2  1       18  15  17  18
 7      2  2  1  1  2  2  1       20  17  21  18
 8      2  2  1  2  1  1  2       19  20  20  17
The purpose of this relationship is to expose the control factor choices equally and completely to the
uncontrollable environment. This assures that the optimum factor choices will be robust. A Signal-to-Noise (S/N)
ratio can be calculated for each of the control factor array test situations. This allows the experimenter to
identify the control factor level choices that meet the target response consistently. This application will be
discussed in Sections 2.3, 3.1 and 3.2.
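As a preview of that analysis, each inner-array set-up above can be summarized across its four outer-array
(noise) conditions; the sketch below (variable names are illustrative) computes the mean and spread of each
run, a robust set-up being one that holds its average with a small spread over the noise levels:

```python
import statistics

# Observations from the L8 x L4 example: one row per inner-array run,
# one value per outer-array (noise) condition.
runs = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]

# Summarize each control-factor set-up across the noise conditions.
summary = [(statistics.mean(y), statistics.stdev(y)) for y in runs]
for i, (m, s) in enumerate(summary, start=1):
    print(f"test {i}: mean = {m:.2f}, std dev = {s:.2f}")
```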
SETTING UP THE EXPERIMENT
32 Copyright 1997 DaimlerChrysler Corporation
Randomization of the Experimental Tests
In the orthogonal arrays, each test set-up is identified by a test number. Generally, the tests should be run in
the order of test number. If the tests were run in that order, all the tests with the factor assigned to column 1
at level 1 would be run before any of the tests with that factor at level 2. A quick glance at an orthogonal
array will confirm this relationship. In fact, the columns toward the left of the array change less often than
the columns toward the right of the array. If an uncontrolled noise factor changes during the testing process,
the effect of that noise factor could be mixed in with one or more of the factor effects. This could result in an
erroneous conclusion. The possibility of this occurring can be minimized by randomizing the order of the
experiment runs. By randomizing the tests, the effect of the changing uncontrolled noise factor will be more
or less spread evenly over all the levels of the controlled factors and although the experimental error will be
increased, the effects of the controlled factors will still be identifiable. Randomization can be done as simply
as writing the test numbers on slips of paper and drawing them out of a hat.
There are two situations where randomization may not be possible or where the importance is lessened.
1. If it is very expensive, difficult or takes too much time to change the level of a factor, all tests at
one level of a factor may have to be run before the level of that factor can be changed. In this case,
noise factors should be chosen for the outer array that represent the possible variation in
uncontrolled environment as much as possible.
2. If the noise factors in the outer array are properly chosen, the confident experimenter may elect to
dispense with randomization. In most cases, the purpose of the experiment is to learn more about the
situation and the experimenter does not have complete confidence. Therefore, the test order should
be randomized whenever the circumstances permit.
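Drawing slips of paper from a hat can be sketched in a few lines of code; the seed value below is an arbitrary
illustrative choice, used only so the printed run sheet is reproducible:

```python
import random

# Randomize the run order of the eight L8 test set-ups
# ("slips of paper drawn from a hat").
test_numbers = list(range(1, 9))
rng = random.Random(1987)   # fixed seed: illustrative, for a reproducible run sheet
run_order = test_numbers[:]
rng.shuffle(run_order)
print("run the tests in this order:", run_order)
```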
Final Point
Dr. Taguchi stresses evaluating as many main factors as possible and filling up the available columns. If it
turns out that the experimental design will leave some columns unassigned, certain column assignment
schemes are better than others. The rationale behind these choices is
that they minimize the confounding of unsuspected 2 factor interactions with the main factors. A detailed
discussion is beyond the scope of this document. The reader is invited to read Chapter 12 of Statistics for
Experimenters by G. Box, W. Hunter and J. S. Hunter to learn more about this concept.
Consider an L8 for which there are to be four 2 level factors assigned. This implies that there will be 3
columns that will not be assigned to a main factor. There are 35 ways the 4 factors can be assigned to the 7
columns. The recommended assignment is to use columns 1, 2, 4 and 7 for the main factors. The interactions
to be evaluated, the linear graphs and the column interaction table determine if the recommended column
assignments are usable for a particular experiment.
The recommended column assignments are given in the table below.
RECOMMENDED FACTOR ASSIGNMENT BY COLUMN

              L8 ARRAY     L16 ARRAY                   L32 ARRAY
  4 Factors   1, 2, 4, 7   1, 2, 4, 8
  5 Factors   *            1, 2, 4, 8, 15              1, 2, 4, 8, 16
  6 Factors   *            1, 2, 4, 8, 11, 13          1, 2, 4, 8, 16, 31
  7 Factors   *            1, 2, 4, 7, 8, 11, 13       1, 2, 4, 8, 15, 16, 23
  8 Factors   ----         1, 2, 4, 7, 8, 11, 13, 14   1, 2, 4, 8, 15, 16, 23, 27
  9 Factors   ----         *                           1, 2, 4, 8, 15, 16, 23, 27, 29
 10 Factors   ----         *                           1, 2, 4, 8, 15, 16, 23, 27, 29, 30
 11 Factors   ----         *                           1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21
 12 Factors   ----         *                           1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22
 13 Factors   ----         *                           1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25
 14 Factors   ----         *                           1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26
 15 Factors   ----         *                           1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28

  *    No recommended assignment scheme
  ---- More factors than the array has columns
The linear graphs in the Appendix have been constructed to facilitate the use of these recommended column
assignments. The reader will find that the linear graphs in other books and reference material may not make
these assignments available. There are many equally valid ways that linear graphs for the larger arrays can
be constructed from the column interaction table. It is not feasible for any one book to list all the
possibilities.
SUPPLEMENT
Copyright 1997 DaimlerChrysler Corporation 35
1.5 Supplement
In many cases, the brainstorming group may not have a good feel for whether interactions exist or not. In
these cases, two alternatives are usually considered:
1. Design an experiment that allows all two factor interactions to be estimated,
or
2. Design an experiment in which no factor is assigned to a column that also contains the interaction of
two other factors, although pairs of two factor interactions may be assigned to the same column. The
Recommended Factor Assignments given on page 32 are examples of this approach.
The second approach is based on the assumption that few of the interactions will be significant and that later
testing can be used to investigate them in more detail. The reader is urged to seek statistical assistance in
approaching this type of experiment.
Sometimes, the response is not related to the input factors in a linear fashion. Testing each factor at two
levels allows only a linear relationship to be defined and, in this more complex situation, can give misleading
results. A detailed statistical analysis tool called Response Surface Methodology can be used to investigate
the complex relationship of the input factors to the response in these cases.
All of this seems to indicate that DOEs must be lengthy and complicated when interactions or non-linear
relationships are suspected. In most situations, time and resources are not available to run a large
experiment. Sometimes, a transformation of the measured data or of a quantitative input factor can allow a
linear model to fit within the region covered by the input factors. The linear model requires fewer data points
than a curvilinear model and is easier to interpret. Unfortunately, unless there are multiple observations at
each inner array set-up, the choice of transformation is guided mainly by the experience of the experimenter
or by trying several transformations and seeing which one fits best.
The choice of the proper transformation to use is related to the choice of the proper response. As an example,
two common measures of fuel usage are "miles per gallon" and "liters per kilometer". With the
multiplication of a constant, these two measures are inverses of each other. A model that is linear in mpg will
be definitely non-linear in l/km. Which measurement is correct? There is no easy answer. The experimenter
should evaluate several different transformations to determine the best model. Some transformations that are
useful are:
    y = 1/Y
    y = Y^(-1/2)
    y = log Y or ln Y     useful for comparing variances.
    y = Y^(1/2)           useful for count data (Poisson distributed) like the
                          number of flaws on a painted surface.
When there is more than one observation at each inner array test set-up, either through replication or through
testing with an outer array, another guide to choosing the right transformation can be used. For the ANOVA
to work correctly, the variances at all test points should be equal. The observed variances should be
compared as follows:
1. Calculate the average (X̄) and the standard deviation (s) for each inner array test set-up.
2. Take the log or ln of each X̄ and s.
3. Plot log s (y-axis) versus log X̄ (x-axis) and estimate the slope.
4. Use the estimated slope as a rough guide to determine which transformation to use:
    Slope     Transformation
    0.0       no transformation
    0.5       y = Y^(1/2)
    1.0       y = log (Y) or ln (Y)
    1.5       y = Y^(-1/2)
    2.0       y = 1/Y
It should be noted that the addition or subtraction of a constant before plotting will not affect the standard
deviation but will affect the relative spacing of the log X̄ values and hence the slope of the line. This approach can
be used to improve the fit of the transformation. With the widespread use of computers, data analysis of this
type should be fairly easy and should be pursued as a means to get the most information out of the data.
Examples of this approach will be given in the supplements of subsequent chapters.
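The four-step slope procedure can be sketched in code as follows; the function name and the
"nearest slope" selection rule are illustrative choices, not part of the manual:

```python
import math

def suggest_transformation(groups):
    """Regress log(s) on log(X-bar) across the inner-array set-ups and map
    the slope to a transformation. Assumes positive averages, nonzero
    standard deviations, and set-up averages that are not all equal."""
    logx, logs = [], []
    for g in groups:
        mean = sum(g) / len(g)
        var = sum((v - mean) ** 2 for v in g) / (len(g) - 1)
        logx.append(math.log10(mean))
        logs.append(math.log10(math.sqrt(var)))
    mx = sum(logx) / len(logx)
    my = sum(logs) / len(logs)
    # least-squares slope of log(s) versus log(X-bar)
    slope = (sum((x - mx) * (y - my) for x, y in zip(logx, logs))
             / sum((x - mx) ** 2 for x in logx))
    guide = [(0.0, "no transformation"), (0.5, "y = Y^(1/2)"),
             (1.0, "y = log(Y)"), (1.5, "y = Y^(-1/2)"), (2.0, "y = 1/Y")]
    return slope, min(guide, key=lambda g: abs(g[0] - slope))[1]
```

For example, set-ups whose standard deviation grows in proportion to their average give a slope near 1.0,
pointing to the log transformation.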
The reader is invited to refer to Statistics for Experimenters by G. Box, W. Hunter, and J. S. Hunter to
learn more about the use of transformations in analyzing data.
SECTION 2
Analyzing Experimental Data
2.1 Loss Function and Signal-to-Noise
Purpose
This chapter discusses:
1. The Taguchi Loss Function and its cost oriented approach to product design.
2. A comparison of the loss function and the traditional approach.
3. The use of the loss function in evaluating alternative actions.
4. A comparison of the loss function and Cpk and the appropriate use of each.
5. The relationship of the loss function and the Signal-to-Noise (S/N) calculation that Dr. Taguchi uses
in Design of Experiments.
Summary
# The loss function is a way of incorporating "hidden costs" (lost sales, unhappy customers, warranty
costs, etc.) into the product decision process.
# A quadratic form of the loss function is used because it is the simplest form and directly relates to
the S/N calculations used in analyzing DOE data where noise factors are included.
# The general form of the loss function is L = k [ expression ] where "k" is a
constant that relates to the cost increase associated with failing to satisfy
customer requirements, and [ expression ] depends upon whether:
a. There is a nominal response target (Nominal the Best),
b. The optimum response is as low as possible (Smaller the Better), or
c. The optimum response is as high as possible (Larger the Better)
# The general form of the Signal-to-Noise calculation is:
      S/N = c log10 [ expression ]
  where c is either -10 for Smaller the Better & Larger the Better or 10 for Nominal the Best.
# S/N analysis is used to determine the best situation when there are noise factors present, i.e. when
uncontrollable factors such as ambient temperature or humidity are included in the experimental
design, S/N is used to identify the situation that best meets the target across all levels of the
uncontrolled noise.
# The goal of a DOE S/N analysis is to maximize the S/N.
# This goal emphasizes attaining the target with small variability.
# The situation that yields the highest S/N will also give the least loss.
LOSS FUNCTION AND SIGNAL-TO-NOISE
40 Copyright 1997 DaimlerChrysler Corporation
[Figure: traditional loss — zero loss (in $) between the spec limits, jumping to the rework or scrap cost
outside them.]
Loss Function and the Traditional Approach
In the Traditional Approach to considering Company Loss, parts produced within the spec limits perform
equally well and parts outside of the spec limits are equally bad. This approach has a fallacy in that it
assumes that parts produced at the target and parts just inside the spec limit perform the same and that parts
just inside and just outside the spec limits perform differently.
Statistical Process Control (SPC) and Process Capability calculations (Cpk) have brought to the
manufacturing floor an awareness of the importance of reducing process variability and centering around the
target. However, the question still remains, "How can this thought process carry over into product and
process decisions?".
The loss function provides a way of considering customer satisfaction in a quantitative manner during the
development of a product and its manufacturing process. The Loss Function is the cornerstone of the
Taguchi philosophy. The basic premise of the loss function is that there is a particular target value for each
critical characteristic that will best satisfy all customer requirements. Parts or systems that are produced
close to this target will satisfy the customer. Parts or systems that are produced farther away from the target
will not satisfy the customer as well. The level of satisfaction decreases as the distance from the target
increases. The loss function approximates the total cost to society, including customer dissatisfaction, of
producing a part at a particular characteristic value.
Taken for a whole production run, the total cost to society is based on the variability of the process and the
distance of the distribution mean to the target. Decisions that affect process variability and centering or the
range over which customers will be satisfied can be evaluated using the common measurement of loss to
society.
The loss function can be used when considering the expenditure of resources. Customer dissatisfaction is
very difficult to quantify and is often ignored in the traditional approach. Its inclusion into the decision
process via the loss function highlights a gold mine in customer perceived quality and repeat purchases that
would be hidden otherwise. This gold mine is often available at a relatively minor expense applied to
improving the product or process.
NOTE: Use of the Loss Function implies a total system which starts with the determination of
targets that reflect the greatest level of customer satisfaction. Calculation of losses using
nominals that were set using other methods may yield erroneous results.
LOSS FUNCTION AND SIGNAL-TO-NOISE
Copyright 1997 DaimlerChrysler Corporation 41
[Figure: Nominal-the-Best quadratic loss curve — cost is zero at the target m and rises to A0 at m - Δ and
m + Δ.]
Calculation of the Loss Function
Dr. Taguchi uses a quadratic equation to describe the loss function. A quadratic form was chosen because:
1. It is the simplest equation that fulfills the requirement that loss increase as the response moves away from the target.
2. Taguchi believes that, historically, costs behave in this fashion.
3. The quadratic form allows direct conversion from signal-to-noise ratios and decomposition used in
analysis of experimental results.
The general form for the loss function is:

    L(x) = k (x - m)²

where, L(x) is the loss associated with producing a part at value "x".
       k is a unique constant determined for each situation.
       x is the measured value of the characteristic.
       m is the target of the characteristic.

When the general form is extended to a production of "n" items, the average loss is:

    L = (k/n) Σ (x - m)²

This can be simplified to:

    L = k [ σ² + (T - m)² ]

where, σ² is the population piece-to-piece variance.
       T is the population mean.
       (T - m) is the offset of the population mean from the target.
In the Nominal-the-Best (NTB) situation illustrated by the quadratic loss curve above:

A0 is the cost incurred in the field by the customer or warranty when a part is produced Δ from the target.
Δ is the point at which 50% of the customers would have the part repaired or replaced. A0 and Δ define the
shape of the loss function and the value of "k".

The loss resulting from producing a part at m - Δ is:

    L(m - Δ) = k (m - Δ - m)²
    A0 = k Δ²
    k = A0 / Δ²

In general, the loss per piece is:

    L(x) = (A0 / Δ²) (x - m)²

The loss for the population is:

    L = (A0 / Δ²) (σ² + offset²)
Example
A simple example will illustrate these calculations.
A particular component is manufactured at an internal supplier, shipped to an assembly plant and assembled
into the vehicle. If this component deviates from its target of 300 units by 10 or more, the average customer
will complain and the estimated warranty cost will be $150.00. In this case,
    k = $150 / (10 units)²
    k = $1.50 per unit²

SPC records indicate that the process average is 295 units and the standard deviation is 8 units. The present
total loss is:

    L = k [ σ² + (T - 300)² ]
    L = $1.50 [ 8² + (295 - 300)² ]
    L = $133.50 per part
50,000 parts are produced per year.
The total yearly loss (and opportunity for improvement) is $6.7 million.
LOSS FUNCTION AND SIGNAL-TO-NOISE
Copyright 1997 DaimlerChrysler Corporation 43
Situation 1
It is estimated that a redesign of the system would make the system more robust and the average customer
would complain if the component deviated by 15 units or more from 300. In this case:
    k = $150 / (15 units)²
    k = $0.67 per unit²

The total loss would be:

    L = $0.67 [ 8² + (295 - 300)² ]
    L = $59.63 per part
The net yearly improvement due to redesigning the system would be:
Improvement = ($133.50 - $59.63) * 50000
Improvement = $3,693,500
This saving should be balanced against the cost of the redesign.
Situation 2
It is estimated that a new machine at the component manufacturing plant would improve the mean of the
distribution to 297 units and reduce the standard deviation to 6 units. In this case, the total loss would be:

    L = $1.50 [ 6² + (297 - 300)² ]
    L = $67.50 per part
The net yearly improvement due to using the new machine would be:
Improvement = ($133.50 - $67.50) * 50000
Improvement = $3,300,000
This saving should be balanced against the cost of the new machine.
From these situations, it is apparent that the quality of decisions using the loss function is heavily dependent
on the quality of the data that goes into the loss function. The loss function emphasizes making a decision
based on quantitative total cost data. In the traditional approach, decisions are difficult because of the
unknowns and differing subjective interpretations. The loss function approach requires investigation to
remove some of the unknowns. Subjective interpretations become numeric assumptions and analyses which
are easier to discuss and can be shown to be based on facts.
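The arithmetic of the example and of Situation 2 can be reproduced directly from the NTB loss formulas;
the helper names below are illustrative, not from the manual:

```python
# Reproduce the component example: k from the field-complaint point, then
# the average NTB loss L = k * (sigma^2 + (mean - target)^2) per part.
def loss_constant(field_cost, delta):
    return field_cost / delta ** 2

def average_loss(k, sigma, mean, target):
    return k * (sigma ** 2 + (mean - target) ** 2)

k = loss_constant(150.0, 10.0)                    # $1.50 per unit^2
current = average_loss(k, 8.0, 295.0, 300.0)      # $133.50 per part
new_machine = average_loss(k, 6.0, 297.0, 300.0)  # $67.50 per part (Situation 2)
yearly_saving = (current - new_machine) * 50_000  # $3,300,000 at 50,000 parts/year
```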
[Figure: Smaller-the-Better loss curve — loss is zero at a zero response and rises quadratically; A0 is the
loss at response X0.]

# In the Smaller-the-Better situation illustrated above, the loss function reduces to:

      L = k [ (1/n) Σ xi² ]

# For the Larger-the-Better situation, the loss function reduces to:

      L = k [ (1/n) Σ (1/xi²) ]

The loss functions for the three situations are summarized as:

                                  LOSS FUNCTION
    Nominal the best (NTB)        L = k ( σ² + offset² )
    Smaller the better (STB)      L = k ( (1/n) Σ xi² )
    Larger the better (LTB)       L = k ( (1/n) Σ (1/xi²) )

[Figure: Larger-the-Better loss curve — loss decreases toward zero as the response increases; A0 is the
loss at response X0.]

Comparison of the Loss Function and Cpk
The loss function can be used to evaluate process performance. It provides an emphasis on both reducing
variability and centering the process since those actions have a net effect of reducing the value of the loss
function. Process performance is normally evaluated using Cpk, which is calculated using the following
equation:

    Cpk = minimum [ (upper spec limit - X̄) / (3 × standard deviation) ,
                    (X̄ - lower spec limit) / (3 × standard deviation) ]

where, X̄ = the average of the process.

Both the loss function and Cpk emphasize minimizing the variability and centering the process on the target.
The relative benefits of the two are summarized below:

Loss Function
    Provides more emphasis on the target.
    Relates to customer costs.
    Can be used to prioritize the effect of different processes.

Cpk
    Is easier to understand and use.
    Is based only on data from the process and specifications.
    Is normalized for all processes.

The Loss Function represents the type of thinking that must go into making strategic management decisions
regarding the product and process for critical characteristics. Cpk is an easily used tool for monitoring actual
production processes.

The following shows Cpk and the value of the loss function for five different cases. In each of these cases the
specification is 20 ± 4 and the value of k in the loss function is $2 per unit².

                    CASE 1    CASE 2    CASE 3    CASE 4    CASE 5
    Average         20        18        17.2      20        20
    Sigma           1.33      0.67      0.4       2.82      0.67
    Cpk             1         1         1         0.47      2
    Loss (k = 2)    3.56      8.89      16        16        0.89
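The case calculations can be reproduced directly; the helper names are illustrative. (The table's $3.56 for
Case 1 reflects sigma taken as exactly 4/3, so sigma = 1.33 computes to $3.54.)

```python
def cpk(mean, sigma, lsl, usl):
    # minimum of the upper and lower capability ratios
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

def ntb_loss(k, sigma, mean, target):
    return k * (sigma ** 2 + (mean - target) ** 2)

# Cases 1-3 from the table: all have Cpk = 1 but very different losses.
for mean, sigma in [(20, 1.33), (18, 0.67), (17.2, 0.4)]:
    print(f"mean={mean:<5} sigma={sigma:<5} "
          f"Cpk={cpk(mean, sigma, 16, 24):.2f} "
          f"loss=${ntb_loss(2, sigma, mean, 20):.2f}")
```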
Both Cpk and the Loss Function emphasize reducing the part-to-part variability and centering the process on
target.

The use of Cpk is recommended in production areas to monitor process performance because of the ease of
understanding and the clear relationship of Cpk and the other SPC tools. Management decisions regarding
the location of distributions with small variability within a large specification tolerance should be based on a
loss function type of approach. (See Cases 2 & 5 above.)
The Loss Function approach should be used to determine the target value and to evaluate the relative merits
of two or more courses of action because of the emphasis on Cost and on including customer satisfaction as a
factor in making basic product and process decisions. These questions also lend themselves to the use of
Design of Experiments. The relationship of the Loss Function to the Signal-to-Noise DOE calculations used
by Dr. Taguchi will now be discussed.
Signal-to-Noise (S/N)
Signal-to-noise is a calculated value that Dr. Taguchi recommends to analyze DOE results. It incorporates
both the average response and the variability of the data. S/N is a measure of the strength of the signal
relative to the strength of the noise (variability). The goal is always to maximize the S/N. S/N ratios are so constructed that
if the average response is far from the target, recentering the response has a bigger effect on the S/N than
does reducing the variability. When the average response is close to the target, reducing the variability has a
bigger effect. There are three basic formulas used for calculating S/N as follows (formulas for the loss
function are shown for reference) :
                              SIGNAL-TO-NOISE (S/N)                LOSS FUNCTION (L)
Smaller the better (STB)      -10 log10 [ (1/n) Σ xi² ]            k ( (1/n) Σ xi² )
Larger the better (LTB)       -10 log10 [ (1/n) Σ (1/xi²) ]        k ( (1/n) Σ (1/xi²) )
Nominal the best (NTB)        10 log10 [ (1/n) (Sm - Ve) / Ve ]    k ( σ² + offset² )

where, Sm = ( Σ xi )² / n
       Ve = ( Σ xi² - Sm ) / (n - 1)
S/N for a particular testing condition is calculated by considering all the data that was run at that particular
condition across all noise factors. Actual analysis techniques will be covered in later chapters.
The relationships between S/N and Loss Function are obvious for STB and LTB. The expressions contained
in brackets are the same. With some effort this can also be shown to be true for the NTB situation. S/N and
the loss function actually measure the same thing. When S/N is maximized, the loss function will be
minimized. S/N is used in Design of Experiments rather than the loss function because it is more
understandable from an engineering standpoint and because it is not necessary to compute the value of "k"
when comparing two alternate courses of action. Once the optimal situation is determined using S/N
analysis, the value of the loss function can be determined by solving for MSD in the S/N equation and placing
that value in the loss function. In this manner, the total cost implications of the choices can be compared.
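The three S/N formulas translate directly into code; the function names below are illustrative:

```python
import math

def sn_stb(y):
    # Smaller the Better: -10 log10( (1/n) * sum(yi^2) )
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_ltb(y):
    # Larger the Better: -10 log10( (1/n) * sum(1/yi^2) )
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_ntb(y):
    # Nominal the Best: 10 log10( (1/n) * (Sm - Ve) / Ve )
    n = len(y)
    sm = sum(y) ** 2 / n
    ve = (sum(v * v for v in y) - sm) / (n - 1)
    return 10 * math.log10((sm - ve) / ve / n)
```

For example, an inner-array run with observations 18, 21, 19 and 22 gives an NTB S/N of about 20.78 dB.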
S/N calculations are also used in DOE to search for "robust" factor values. These are values around which
production variability has the least effect on the response. Sections 3.1 and 3.2 discuss this use of S/N.
2.2 Supplement
Many statisticians disagree with the use of the previously defined S/N ratios to analyze DOE data. They do
recognize the need to analyze both location effects and dispersion (variance) effects but use other measures.
Dr. George Box's report, Studies in Quality Improvement: Signal To Noise Ratios, Performance
Criteria and Transformation (see Bibliography), is recommended to the reader who wishes to learn more
detail about this disagreement and some of the other methods that are available.
In brief, Dr. Box disagrees with the STB and LTB S/N calculations and finds the NTB S/N to be inefficient.
The approach that he supports is to calculate the log (or ln) of the standard deviation of the data, log(s), at
each inner array set-up in place of the S/N ratio. The log is used because the standard deviation tends to be
log-normally distributed. The raw data should be analyzed (with appropriate transformations) to determine
which factors control the average of the response and the log(s) should be analyzed to determine which
factors control the variance of the response. From these two analyses, the experimenter can choose the
combination of factors that gives the response that best fills the requirements.
The data in the following table illustrate some of the concerns with the NTB S/N ratio:
    TEST    RAW DATA (4 REPS.)            STANDARD DEVIATION    NTB S/N
    A       1      2      4      5        1.83                   3.89
    B       15     11     12     14       1.83                  17.03
    C       18     21     19     22       1.83                  20.78
    D       24     24     28.12  28.12    2.38                  20.78
    E       42.55  42.8   50     50       4.23                  20.78
The first three tests (A through C) have the same standard deviation but very different S/N, while the last
three tests (C through E) have the same S/N but very different standard deviations. The NTB S/N ratio
places emphasis on getting a higher response value. This approach might lead to difficulties in tuning the
response to a specific target.
It should be noted that Taguchi does discuss other S/N measures in some of his works that have not been
widely available in English. An alternate NTB S/N ratio is available in the computer program ANOVA-TM
which is distributed by Advanced Systems and Designs, Inc. (ASD) of Farmington Hills, Michigan and is
based on Taguchi's approach. This S/N ratio is:
    NTB II S/N = -10 log (s²) = -20 log (s)
Maximizing this S/N is equivalent to minimizing log(s). Examples using this S/N ratio will be developed in
the supplements of the following sections.
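The comparison in the table above can be reproduced, computing both the standard NTB S/N and the
-20 log(s) alternative for each row (function names are illustrative):

```python
import math
import statistics

# Data from the table above: five test set-ups with four replicates each.
rows = {
    "A": [1, 2, 4, 5],
    "B": [15, 11, 12, 14],
    "C": [18, 21, 19, 22],
    "D": [24, 24, 28.12, 28.12],
    "E": [42.55, 42.8, 50, 50],
}

def sn_ntb(y):
    # standard Nominal-the-Best S/N
    n = len(y)
    sm = sum(y) ** 2 / n
    ve = (sum(v * v for v in y) - sm) / (n - 1)
    return 10 * math.log10((sm - ve) / ve / n)

def sn_ntb_ii(y):
    # alternate NTB S/N: -20 log10(s), equivalent to minimizing log(s)
    return -20 * math.log10(statistics.stdev(y))

for name, y in rows.items():
    print(name, round(statistics.stdev(y), 2), round(sn_ntb(y), 2),
          round(sn_ntb_ii(y), 2))
```

Rows A through C share one standard deviation yet get very different NTB S/N values, while the
-20 log(s) measure ranks them identically.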
2.3 Analysis
Purpose
The purpose of this section is to:
1. Introduce graphical and numerical analysis of experimental data.
2. Present a method for estimating a response value and assigning a confidence interval for it.
3. Discuss the use and interpretation of Signal-to-Noise (S/N) ratio calculations.
Summary
# Graphical techniques can be used to analyze DOE data when computer programs are not available.
# Graphical techniques should be used in conjunction with numerical analysis as an aid in
communicating the experimental results.
# The ANOVA computer programs provide a measure of the relative contribution of each factor to the
total variability observed in the data.
# An estimation of the response at the optimum factor levels can be made using the ANOVA Table
and level averages.
# The effect of multi-level factors can be decomposed, e.g. into linear, quadratic or cubic elements, to
aid in the interpretation of the ANOVA Table.
# S/N ratios can be used to determine the factor level choices that have the best combination of
response variability and offset from the target.
Graphical Analysis
In the example in Section 1.2, Dan and Paula calculated and plotted the average response at each factor level.
Since the experimental design they used (an L8) is orthogonal, the average at each level of a factor is equally
impacted by the effect of the levels of the other factors. This allows the graphical approach to have direct
usage. This example from Section 1.2 follows.
L8 ORTHOGONAL ARRAY
TEST        LEVELS FOR EACH SUSPECTED FACTOR (8 TESTS)       TEST
NUMBER      C1   C2   C7   C11   C13   C15   C16             RESULT
1 1 1 1 1 1 1 1 10
2 1 1 1 2 2 2 2 13
3 1 2 2 1 1 2 2 15
4 1 2 2 2 2 1 1 17
5 2 1 2 1 2 1 2 14
6 2 1 2 2 1 2 1 16
7 2 2 1 1 2 2 1 19
8 2 2 1 2 1 1 2 21
Note: The C numbers (e.g. C11, C13) are factor names.
The factor level plots are shown below:
[e.g. C1 level 1 = (10 + 13 + 15 + 17) / 4 = 13.75]
Factors C1, C2 and C11 clearly have a different response for each of their two levels. The difference between
levels is much smaller for the other factors. If the goal of the experiment was to identify situations that
minimize or maximize the response, C1, C2 and C11 are important while the others are not.
[Figure: PLOTS OF AVERAGES (HIGHER RESPONSES ARE BETTER) — for each factor C1, C2, C7, C11, C13, C15 and
C16, the average response at level 1 and level 2, plotted on a response scale of 13 to 18.]
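The level averages behind the plots can be computed directly from the L8 table (variable names are
illustrative):

```python
# L8 design and results from the table above; compute the level averages
# that the plots display.
design = [
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
]
results = [10, 13, 15, 17, 14, 16, 19, 21]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

averages = {}
for col, name in enumerate(factors):
    for level in (1, 2):
        vals = [r for row, r in zip(design, results) if row[col] == level]
        averages[(name, level)] = sum(vals) / len(vals)

for name in factors:
    print(name, averages[(name, 1)], averages[(name, 2)])
```

Because the array is orthogonal, each of these averages is equally impacted by the levels of the other
factors, which is what makes the plots directly interpretable.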
Graphical Analysis is a valid, powerful technique which is especially useful in the following situations:
1. When computer analysis programs are not available.
2. When a quick picture of the experimental results is desired.
3. As a visual aid in conjunction with computer analysis.
Once the experiment has been set up correctly, the graphical analysis can be easily used and can point the way
to improvements.
Analysis of Variance (ANOVA)
As was mentioned in the Preface, mathematical calculations and detailed discussions will not be included in
this book. The interested reader should consult the books listed in the Bibliography for rigorous
mathematical discussions. The approach given here will focus on the interpretation of the ANOVA analysis.
ANOVA is a matrix analysis procedure that partitions the total variation measured in a set of data. These
partitions are the portions that are due to the difference in response between the levels of each factor. The
number of degrees of freedom (df) associated with an experimental set-up is also the maximum number of
partitions that can be made. Consider the L8 experiment from Section 1.2 that was illustrated above in the
Graphical Analysis section. The following ANOVA Table summarizes the analysis:
ANOVA TABLE

    COLUMN   SOURCE           df    SS        MS        F RATIO    S'        %
    1        C1               1     28.125    28.125    225        28.000    33.38
    2        C2               1     45.125    45.125    361        45.000    53.65
    3        C7               1*    0.125     0.125
    4        C11              1     10.125    10.125    81         10.000    11.92
    5        C13              1*    0.125     0.125
    6        C15              1*    0.125     0.125
    7        C16              1*    0.125     0.125
             error            ---
             (pooled error)   4     0.500     0.125                0.875     1.04
             Total            7     83.875    11.982               83.875

Legend: df = degrees of freedom
        SS = sum of squares
        MS = mean square
        S' = pure sum of squares
The column number shows to what column of the orthogonal array the source (factor) was assigned.
Normally, the column number is not shown in an ANOVA Table. The df column shows the df(s) associated
with the factor in the Source column. The SS column contains the Sum of Squares. The SS is a measure of
the spread of the data due to that factor. The Total SS is the sum of the SS due to all of the Sources. The MS
or Mean Square column is the SS/df for each Source. The MS is also known as the Variance.
The row with "error" in the Source column is left blank in this experiment. If one of the columns had not
been assigned or if the experiment had been replicated, then the unassigned dfs would be used to estimate
error. Error is the non-repeatability of the experiment with everything held as constant as possible. The
ANOVA technique compares the variability contribution of each factor to the variability due to error. Factors
that do not demonstrate much difference in response over the levels tested have a variability that is not much
different from the error estimate. The df and SS from these factors are pooled into the error term. Pooling is
done by adding the df and SS into the error df and SS. Pooling the insignificant factors into the error can
provide a better estimate of the error.

Initially, no estimate of error was made in the L8 example because there were no unassigned columns
or repetitions. Because of this, a true estimate of the error cannot be made. However, the purpose of the
experiment is to identify the factors that have a usable difference in response between the levels. In this
experiment, the factors with relatively small MS were pooled and called "error". Pooling requires that the
experimenter judge which differences are significant from an operational standpoint. This judgement is based
on the prior knowledge of the system being studied. In the example, factors C7, C13, C15 and C16 have
much lower MS than do the other factors and are pooled to construct an error estimate. The * next to a df
indicates that the df and SS for that factor were pooled into the error term.
The F ratio column contains the ratio of the MS for a source to the MS for the pooled error. This ratio is used
to statistically test whether the variance due to that factor is significantly different from the error variance.
As a quick rule-of-thumb, if the F ratio is greater than 3, the experimenter should suspect that there is a
significant difference. Dr. Taguchi does not emphasize the use of the F ratio statistical test in his approach to
DOE. A detailed description of the use of the F test can be found in Statistics for Experimenters by Box,
Hunter and Hunter.
In the determination of the SS of a factor, the non-repeatability of the experiment is still present. The number
in the S' column is an attempt to totally remove the SS due to error and leave the "pure" SS which is due only
to the source factor. The error MS times the df is subtracted from the SS to leave the pure SS or S' value for a
factor. The amount that is subtracted from each non-pooled factor is then added to the pooled error SS and
the total is entered as the error S'. In this way the total SS remains the same.
The % column contains the S' value divided by the total SS times 100% . This gives the percent contribution
by that factor to the total variation of the data. This information can be used directly in prioritizing the
factors. In the experiment that has been discussed, C2 makes the greatest contribution, C1 contributes less
and C11 contributes still less. It can be argued that the graphical analysis can display those conclusions quite
well. In more complicated experiments with many factors and factors with a large number of levels, the
ANOVA Table can display the analysis in a more concise form and quickly lead the experimenter to the most
important factors. A more detailed discussion of the mechanics of the ANOVA Table is given after Section
3.2 (Pg. 115-118).
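The pooling arithmetic described above can be sketched in a few lines of Python. This is an illustration only, not part of the manual's procedure; the df and SS values are copied from the ANOVA table at the start of this section.

```python
# Source df and SS from the L8 ANOVA table above: {factor: (df, SS)}.
sources = {"C1": (1, 28.125), "C2": (1, 45.125), "C7": (1, 0.125),
           "C11": (1, 10.125), "C13": (1, 0.125), "C15": (1, 0.125),
           "C16": (1, 0.125)}
pooled = {"C7", "C13", "C15", "C16"}   # factors judged insignificant

# Pool the insignificant dfs and SS into the error term.
err_df = sum(sources[s][0] for s in pooled)        # 4
err_ss = sum(sources[s][1] for s in pooled)        # 0.500
err_ms = err_ss / err_df                           # 0.125
total_ss = sum(ss for _, ss in sources.values())   # 83.875

# F ratio, "pure" SS (S') and percent contribution for each
# non-pooled factor.
table = {}
for name, (df, ss) in sources.items():
    if name in pooled:
        continue
    ms = ss / df
    f_ratio = ms / err_ms            # factor variance vs. error variance
    s_prime = ss - df * err_ms       # remove the error contamination
    table[name] = (f_ratio, s_prime, round(100 * s_prime / total_ss, 2))

# What was subtracted from the factors is added back to the error S'
# so that the total SS is preserved.
unpooled_df = sum(df for n, (df, _) in sources.items() if n not in pooled)
err_s_prime = err_ss + err_ms * unpooled_df        # 0.875
```

Running this reproduces the F ratio, S' and % columns of the table (e.g. F = 225 and 33.38% for C1).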
Estimation at the Optimum Level
The ANOVA Table is used to identify important factors. The experimenter refers to the average response at
each level of the important factors to choose the best combination of factor levels. All of the best levels can
be combined to estimate the response at the best factor combination. Consider the case where the second
level of factor A (A2), the third level of factor B (B3), the first level of factor C (C1) and the interaction of C1
and D1 are determined to be the best combination of factors. An estimate of the response at these conditions
can be made using the equation:

Topt = T + (A2 - T) + (B3 - T) + (C1 - T) + [(C1D1 - T) - (C1 - T) - (D1 - T)]

where, T = the average response of all the data
A2 = the average of the data run at A2
B3 = the average of the data run at B3
C1 = the average of the data run at C1
D1 = the average of the data run at D1
Each factor which is a significant contributor appears in a manner similar to A2, B3 and C1 above. The term
in brackets [ ] addresses the optimum level of the CD interaction and is an example of the way in which
interactions are handled.
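The additive prediction equation can be sketched numerically. The level averages below are hypothetical values chosen only to illustrate the arithmetic; they are not from any example in this manual.

```python
# Hypothetical level averages (illustrative values only).
T = 20.0        # average response of all the data
A2 = 23.0       # average of the data run at A2
B3 = 18.5       # average of the data run at B3
C1 = 21.0       # average of the data run at C1
D1 = 19.0       # average of the data run at D1
C1D1 = 22.5     # average of the runs made at both C1 and D1

# The bracketed term keeps only the pure interaction part of C1D1,
# so the C1 and D1 main effects are not double-counted.
T_opt = (T + (A2 - T) + (B3 - T) + (C1 - T)
         + ((C1D1 - T) - (C1 - T) - (D1 - T)))
print(T_opt)   # 25.0 for these illustrative averages
```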
Confidence Interval Around the Estimation
A 90% confidence interval can be calculated for confirmatory tests using the equation:

CI = ± sqrt[ F(1, dfe, .05) * MSe * (1/ne + 1/nr) ]

where, F(1, dfe, .05) is a value from an F statistical table. The F values are based on two different degrees of
freedom and the desired confidence. In this case, the first degree of freedom is always 1 and the
second is the degree of freedom of the pooled error (dfe). The value .05 is used since .05
in each direction (±) sums to a 10% risk, which corresponds to 90% confidence.

MSe is the Mean Square of the pooled error term.
nr is the number of confirmatory tests to be run.
ne is the effective number of replications and is calculated as follows:

ne = (Total Number Of Experiments) / (Sum of the dfs of all the factors and interactions that are
significant and appear in the equation, plus 1 df for the mean)

For the Topt that was just considered, ne is calculated as follows:

SOURCE df
A 1
B 2
C 1
CD 1
mean 1
Total 6
Consider that an L36 was run with no repetitions.

ne = 36 / 6 = 6.0

[Figure: Response vs. Factor Level, three panels - Linear Only, Quadratic Only, and Both Linear &
Quadratic; this figure accompanies the ANOVA Decomposition of Multi-Level Factors discussion below]
Interpretation and Use
The confidence interval about the estimated value is used as a check when verification runs are made. If the
average of the verification runs does not fall within the interval, there is strong reason to believe that a very
important factor may have been left out of the experiment.
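The interval calculation can be sketched as follows. The dfe = 4 and MSe = 0.125 values come from the pooled error of the L8 ANOVA table earlier in this chapter; the F value is read from an F table, and nr is an assumed number of confirmatory runs.

```python
import math

F = 7.71          # F(1, dfe=4, .05) read from an F statistical table
MSe = 0.125       # mean square of the pooled error (L8 example above)
ne = 8 / 4        # 8 runs / (3 significant factor dfs + 1 df for the mean)
nr = 3            # number of confirmatory tests to be run (assumption)

half_width = math.sqrt(F * MSe * (1 / ne + 1 / nr))
# The prediction is then quoted as Topt +/- half_width.
print(round(half_width, 3))
```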
ANOVA Decomposition of Multi-Level Factors
When a factor is tested at two levels, an estimate of the linear change in response between the two levels can
be made. When a factor is tested at more than two levels, more complex relationships must be investigated.
With a three level factor, both the linear and quadratic relationships can be investigated. These relationships
are demonstrated below:
This relationship is important to consider even when the factor levels are not continuous (e.g. different
machines or suppliers). Consider the following situation:

[Figure: Response vs. Supplier for Suppliers 1, 2 and 3]

The dotted line is the linear response and indicates no significant difference. However, Supplier 2 is different
from Suppliers 1 & 3. This difference can be found only if the quadratic relationship is considered.
The number of higher order relationships that can be investigated is determined by the degrees of freedom of
the source.
LEVELS OF A FACTOR df RELATIONSHIPS
2 1 linear
3 2 linear, quadratic
4 3 linear, quadratic, cubic
5 4 linear, quadratic, cubic, quartic
etc.
In the ANOVA Table, the number of relationships that should be investigated is the same as the df. The total
SS for a factor is decomposed into parts with unit dfs. These parts are the linear, quadratic, cubic, etc. parts
of the relationship. Each part can then be treated separately and the parts with small MS are pooled into the
error term. The type of relationship that remains as significant can guide the experimenter in investigating
the level averages.
S/N Calculations & Interpretations
Control factors and noise factors were introduced in Section 1.4. Control factors appear in an orthogonal
array called an inner array. Noise factors which represent the uncontrolled or uncontrollable environment are
entered into a separate array called an outer array. The following example of an L8 inner (control) array with
an L4 outer (noise) array was first presented in Section 1.4. Actual responses and factor names are added
here in development of the example.
L8 (inner array)                      L4 (outer array, on its side)
                                           X   1  1  2  2
                                           Y   1  2  1  2
                                           Z   1  2  2  1
TEST    A  B  C  D  E  F  G
NO.     1  2  3  4  5  6  7           TEST RESULTS
1       1  1  1  1  1  1  1           25  27  30  26
2       1  1  1  2  2  2  2           25  27  21  19
3       1  2  2  1  1  2  2           18  21  19  22
4       1  2  2  2  2  1  1           26  23  27  28
5       2  1  2  1  2  1  2           15  11  12  14
6       2  1  2  2  1  2  1           18  15  17  18
7       2  2  1  1  2  2  1           20  17  21  18
8       2  2  1  2  1  1  2           19  20  20  17
This type of experimental set-up and analysis evaluates each of the control factor choices (L8 array factors)
over the expected range of the uncontrollable environment (L4 array factors). This assures that the optimal
factor levels from the L8 array will be robust. A S/N can be calculated for each test situation. These S/N
ratios are then used in an ANOVA to identify the situation that maximizes the S/N.
Smaller-the-Better (STB)
The following S/N ratios are calculated for the STB situation using the equations given in Section 2.1 and
assuming that the optimum value is zero and that the responses shown represent deviation from that target:
TEST NUMBER STB S/N
1 -28.65
2 -27.32
3 -26.05
4 -28.32
5 -22.34
6 -24.63
7 -25.61
8 -25.59
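The STB S/N values above can be reproduced directly from the raw responses. This sketch assumes the STB equation S/N = -10 log10(mean of y²), i.e. the responses are treated as deviations from a target of zero.

```python
import math

# Raw responses for the eight inner-array runs (four noise conditions each).
runs = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]

def stb_sn(ys):
    """Smaller-the-better S/N: -10 log10 of the mean squared response."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

for i, ys in enumerate(runs, start=1):
    print(i, round(stb_sn(ys), 2))
```

The printed values match the STB S/N table above (-28.65 for test 1, -22.34 for test 5, and so on).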
The S/N ratios for testing situations are then analyzed using an ANOVA Table. The STB ANOVA Table for
the example is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 18.487 18.487 84.803 18.269 61.56
B 1 0.864 0.864 3.963 0.646 2.18
C 1 4.232 4.232 19.413 4.014 13.53
D 1 1.295 1.295 5.940 1.077 3.63
E 1* 0.223 0.223
F 1* 0.213 0.213
G 1 4.362 4.362 20.009 4.144 13.96
error ---
(pooled error) 2 0.436 0.218 1.526 5.14
Total 7 29.676 4.239
The ANOVA Table indicates that factors A, G and C are the most significant contributors. Inspection of the
level averages shows that the highest S/N values (least negative), in order of contribution, occur at
A2 G2 C2 D1 B1. Estimation of the S/N at the optimal levels can be made from the S/N level averages using the
technique discussed earlier in this chapter. Likewise, estimation of the raw data average response at the
optimal level can be made from the response level averages at the optimal S/N factor levels.
Larger-the-Better (LTB)
The same data will be used to demonstrate the LTB notation. In this case, the optimum value is infinity.
Examples of this include strength or fuel economy. The following S/N ratios are calculated using the LTB
equation given in Section 2.1:
TEST NUMBER LTB S/N
1 28.57
2 26.98
3 25.94
4 28.23
5 22.08
6 24.54
7 25.48
8 25.52
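The LTB values can be reproduced in the same way. This sketch assumes the LTB equation S/N = -10 log10(mean of 1/y²):

```python
import math

runs = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]

def ltb_sn(ys):
    """Larger-the-better S/N: -10 log10 of the mean of 1/y^2."""
    return -10 * math.log10(sum(1 / (y * y) for y in ys) / len(ys))

for i, ys in enumerate(runs, start=1):
    print(i, round(ltb_sn(ys), 2))
```

The printed values match the LTB S/N table above (28.57 for test 1, 22.08 for test 5, and so on).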
The S/N ratios for testing situations are then analyzed using an ANOVA Table. The LTB ANOVA Table for
the example is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 18.292 18.292 55.442 17.966 58.99
B 1 1.121 1.121 3.397 0.791 2.60
C 1 4.160 4.160 12.606 3.830 12.58
D 1 1.271 1.271 3.852 0.941 3.09
E 1* 0.396 0.396
F 1* 0.264 0.264
G 1 4.947 4.947 14.991 4.617 15.16
error ---
(pooled error) 2 0.660 0.330 2.310 7.59
Total 7 30.454 4.351
Inspection of the ANOVA Table and the level averages shows that the highest S/N values occur at
A1 G1 C1 D2 B2. Interpretation of the LTB analysis is similar to the STB analysis.
Nominal-the-Best (NTB)
Analysis of the NTB experiment is a two part process. Again, the same data will be used to illustrate this
approach. The target value will be assumed to be 24 in this case.
1. The S/N values are analyzed. The following S/N are calculated:
TEST NUMBER NTB S/N
1 21.93
2 15.96
3 20.78
4 21.60
5 17.03
6 21.59
7 20.33
8 22.56
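The NTB S/N values can be reproduced assuming the NTB form S/N = 10 log10[(Sm - Ve) / (n·Ve)], where Sm = n·(mean)² and Ve is the sample variance; this is the form that matches the table values, and it is presented here as a sketch rather than a quotation of the Section 2.1 equation.

```python
import math

runs = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]

def ntb_sn(ys):
    """Nominal-the-best S/N: 10 log10 of (Sm - Ve) / (n * Ve)."""
    n = len(ys)
    m = sum(ys) / n
    ve = sum((y - m) ** 2 for y in ys) / (n - 1)   # sample variance
    sm = n * m * m                                 # SS due to the mean
    return 10 * math.log10((sm - ve) / (n * ve))

for i, ys in enumerate(runs, start=1):
    print(i, round(ntb_sn(ys), 2))
```

The printed values agree with the NTB S/N table above to within rounding.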
The S/N ratios for testing situations are then analyzed using an ANOVA Table. The NTB ANOVA Table for
the example is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1* 0.193 0.193
B 1 9.618 9.618 55.339 9.441 23.10
C 1* 0.006 0.006
D 1* 0.333 0.333
E 1 17.816 17.816 100.655 17.639 43.16
F 1 2.477 2.477 13.994 2.300 5.63
G 1 10.424 10.424 58.893 10.247 25.07
error ---
(pooled error) 3 0.532 0.177 1.240 3.03
Total 7 40.867 5.838
The ANOVA Table and the level averages indicate that E1 G1 B2 F1 are the optimal choices from a S/N
standpoint. These are the factor choices that should result in the minimum variance of the response.
2. The ANOVA analysis and level averages of the raw data are then investigated to determine if there
are other factors that have significantly different responses at their different levels but are not
significant in the S/N analysis. These factors can be used to tune the average response to the desired
value but do not appreciably affect the variability of the response. The ANOVA Table of the raw
data follows:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 392.000 392.000 84.940 387.385 53.95
B 1* 8.000 8.000
C 1 72.000 72.000 15.601 67.385 9.39
D 1 18.000 18.000 3.900 13.385 1.86
E 1* 2.000 2.000
F 1 18.000 18.000 3.900 13.385 1.86
G 1 98.000 98.000 21.235 93.385 13.01
X 1* 0.125 0.125
Y 1* 3.125 3.125
Z 1* 0.000 0.000
error 21 106.750 5.083
(pooled error) 26 120.000 4.615 143.075 19.93
Total 31 718.000 23.161
From the ANOVA Table, it can be seen that the significant contributors to the observed variability of the data
are the factors A, G, C, D and F. This can be combined with the S/N analysis and interpreted as follows:
1. Factors that influence variability only - B, E.
2. Factors that influence both variability and average response - G.
3. Factors that influence the average only - A, C.
4. Factors that have little or no influence on either variability or average response - D, F.
The results from this experiment indicate that factors B, E and G should be set to the levels with the highest
S/N. Factor G should be set to the level with the highest S/N rather than using it to tune the average since its
relative contribution to S/N variability is greater than its contribution to the variability of the raw data. This
decision might change based on cost implications and the ability to use factors A and C to tune the average
response. Factors A and C should be investigated to determine if they can be set to levels which will allow
the target value of 24 to be attained. This may be possible with factors that have continuous values. Factors
with discrete choices like vendor or machine number can not be interpolated. Factors D and F should be set
to the levels that are the least expensive. A series of confirmation runs should be made when the optimum
levels have been determined. The average response and S/N should be compared to the predicted values.
Combination Design
Combination design was mentioned in Section 1.4 as a way of assigning two 2 level factors to a single 3 level
column. This is done by assigning 3 of the 4 combinations of the two 2 level factors to the 3 level factor and
the fourth combination is not tested. As an example, two 2 level factors are assigned to a 3 level column as
follows:
FACTOR A FACTOR B
3 LEVEL COLUMN
COMBINED FACTOR (A.B)
1 1 1
2 1 2
2 2 3
Note that the combination A1B2 is not tested. In this approach, information about the AB interaction is not
available and many ANOVA computer programs are not able to break apart the effect of A and B.
The sum of squares (SS) in the ANOVA Table that is due to factor A.B contains both the SS due to factor A
and the SS due to factor B. These two SS are not additive since the factors A and B are not orthogonal. This
means:

SS(A.B) ≠ SS(A) + SS(B)

The SS of A and B can be calculated separately as follows:

SS(A) = (T(A.B1) - T(A.B2))² / (2 * r)
SS(B) = (T(A.B2) - T(A.B3))² / (2 * r)

where, T(A.B1) = the sum of all responses run at the first level of A.B
T(A.B2) = the sum of all responses run at the second level of A.B
T(A.B3) = the sum of all responses run at the third level of A.B
r = the number of data points run at each level of A.B
The MS of A and B then can be separately compared to the error MS to determine if either or both factors are
significant. The df for both A and B is 1. If one of the factors is significant and the other is not, the ANOVA
should be rerun with the significant factor shown with a dummy treatment and the other factor excluded from
the analysis.
Example
The following factors will be evaluated using an L9 orthogonal array:
FACTOR NUMBER OF LEVELS
A 2
B 2
C 3
D 3
E 3
A and B will be combined into a single 3 level column. The test array and results are:
A  B  A.B  C  D  E    TEST RESULTS    SUM OF THE TEST RESULTS
1 1 1 1 1 1 7 10 17
1 1 1 2 2 2 3 6 9
1 1 1 3 3 3 5 3 8
2 1 2 1 2 3 22 18 40
2 1 2 2 3 1 13 15 28
2 1 2 3 1 2 9 8 17
2 2 3 1 3 2 12 16 28
2 2 3 2 1 3 12 10 22
2 2 3 3 2 1 15 12 27
The sum of the data at each level of A.B is:
for A.B = 1, the sum is 17 + 9 + 8 = 34
for A.B = 2, the sum is 40 + 28 + 17 = 85
for A.B = 3, the sum is 28 + 22 + 27 = 77
SS(A) = (34 - 85)² / (2 * 6) = 216.75

SS(B) = (85 - 77)² / (2 * 6) = 5.33
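The decomposition above can be sketched numerically (a sketch for illustration, using the level totals computed from the L9 results):

```python
# Totals of the responses at each level of the combined factor A.B
T1, T2, T3 = 34, 85, 77      # sums computed above
r = 6                        # data points per level of A.B (3 runs x 2 reps)

SS_A = (T1 - T2) ** 2 / (2 * r)   # contrast between A.B = 1 and A.B = 2
SS_B = (T2 - T3) ** 2 / (2 * r)   # contrast between A.B = 2 and A.B = 3
print(SS_A, round(SS_B, 2))       # 216.75 and 5.33
```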
The ANOVA Table for the data is shown below. The decomposed SS for A and B are shown in ( ) and are
not added into the total SS.
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A.B 2 250.778 125.389 31.347 242.778 53.50
(A) 1 (216.750) 216.750 54.188
(B) 1 (5.333) 5.333 1.333
C 2 100.778 50.389 12.597 92.778 20.45
D 2 33.778 16.889 4.222 25.778 5.68
E 2 32.444 16.222 4.056 24.444 5.39
error 9 36.000 4.000
(pooled error) 9 36.000 4.000 68.000 14.99
Total 17 453.778 26.693
The F ratio for factor B indicates that the effect of the change in factor B on the response is insignificant.
Factor B is excluded from the analysis and factor A is analyzed with a dummy treatment. The ANOVA
Table for this analysis follows:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 245.444 245.444 59.386 241.311 53.18
C 2 100.778 50.389 12.192 92.512 20.39
D 2 33.778 16.889 4.086 25.512 5.62
E 2 32.444 16.222 3.925 24.178 5.33
error 10 41.334 4.133
(pooled error) 10 41.334 4.133 70.265 15.48
Total 17 453.778 26.693
The analysis continues using the techniques described in this chapter.
What Can Go Wrong
The purpose of most DOE's is to predict what the response will be at the optimum condition. Confirmatory
tests should be run to assure the experimenter that the projected results are valid. Sometimes, the
confirmatory tests are significantly different from the projected results. This can be due to one or more of the
following:
# There was an error in the basic assumptions made in setting up the experiment
- Not all of the important factors were controlled in the experiment.
- The factors interacted in a manner that was not accounted for.
- The response that was measured was not the proper response or was only a symptom of
something more basic (see Section 1.2).
- An important noise factor was not included in the experiment (e.g. the experimental tests
were run on sunny days while the confirmatory tests were run on a rainy day).
# The experimental test equipment is not capable of providing consistent, repeatable test results.
# A mistake was made in setting up one or more of the experimental tests.
The experimenter who is faced with data that does not support the prediction is forced to ask which of these
problems affected the results. It is important that all of these problems be considered and investigated, if
appropriate. If two or more of these problems co-existed, correcting only one problem may not improve the
experimental results.
Even though it may seem that the experiment was a failure, that is not necessarily true. Experimentation
should be considered an organized approach to uncovering a working knowledge about a situation. The
"failed" experiment does provide new knowledge about the situation that should be used in setting up the next
iteration of experimental testing.
The prior statement may sound too idealistic for the "real" world where deadlines are very important. A
failed experiment may cause some people to doubt the usefulness of the DOE approach and extol the virtues
of traditional one-factor-at-a-time testing. All of the problems listed above that could cause a DOE to fail,
will also cause a one-factor-at-a-time experiment to fail. In DOE, the problem will be found fairly early since
relatively few tests are run. In one-factor-at-a-time testing, the problem may not surface until many tests have
been run or the problem may not even be identified in the testing program. In this case, the problem may not
show up until production or field use.
The importance of meeting "real" world deadlines makes the planning stage of the experiment critical. Proper
planning, including consideration of the experience and knowledge of the experts, will enable the
experimenter to avoid many of the possible problems. Deadlines are never a good excuse for not taking the
time to adequately plan an experiment.
SUPPLEMENT
Copyright 1997 DaimlerChrysler Corporation 63
2.4 Supplement
The data used to demonstrate the S/N calculations in this chapter will here be analyzed using the approach
NTB(II) S/N = -10 log(s²) = -20 log(s). This approach was discussed in the supplement of Section 2.1. The
data is repeated below:

L8 (inner array)                      L4 (outer array, on its side)
                                           X   1  1  2  2
                                           Y   1  2  1  2
                                           Z   1  2  2  1
TEST    A  B  C  D  E  F  G                                      -20
NO.     1  2  3  4  5  6  7           TEST RESULTS        s     Log(s)
1       1  1  1  1  1  1  1           25  27  30  26    2.16    -6.690
2       1  1  1  2  2  2  2           25  27  21  19    3.65   -11.249
3       1  2  2  1  1  2  2           18  21  19  22    1.83    -5.229
4       1  2  2  2  2  1  1           26  23  27  28    2.16    -6.690
5       2  1  2  1  2  1  2           15  11  12  14    1.83    -5.229
6       2  1  2  2  1  2  1           18  15  17  18    1.41    -3.010
7       2  2  1  1  2  2  1           20  17  21  18    1.83    -5.229
8       2  2  1  2  1  1  2           19  20  20  17    1.41    -3.010
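The s and -20 Log(s) columns can be reproduced directly (a sketch; s is the ordinary sample standard deviation of the four responses in each run):

```python
import math

runs = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]

def ntb2_sn(ys):
    """NTB(II) S/N: -20 log10 of the sample standard deviation."""
    n = len(ys)
    m = sum(ys) / n
    s = math.sqrt(sum((y - m) ** 2 for y in ys) / (n - 1))
    return s, -20 * math.log10(s)

for i, ys in enumerate(runs, start=1):
    s, sn = ntb2_sn(ys)
    print(i, round(s, 2), round(sn, 3))
```

The printed values match the table above (2.16 and -6.690 for test 1, and so on).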
The S/N ratios for the testing situations are then analyzed using an ANOVA Table. The NTB(II) ANOVA
Table for the example is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 22.379 22.379 24.746 21.474 44.90
B 1 4.531 4.531 5.010 3.627 7.58
C 1 4.531 4.531 5.010 3.627 7.58
D 1* 0.313 0.313
E 1 13.670 13.670 15.117 12.766 26.69
F 1* 1.200 1.200
G 1* 1.200 1.200
error ---
(pooled error) 3 2.713 0.904
Total 7 47.823 6.832
To help interpret the ANOVA Table, the level standard deviation averages and the level S/N averages are
shown for the significant factors:

FACTOR   LEVEL   AVERAGE STANDARD DEVIATION   NTB(II) S/N
A        1                2.36                  -7.465
         2                1.61                  -4.120
B        1                2.12                  -6.545
         2                1.79                  -5.039
C        1                2.12                  -6.545
         2                1.79                  -5.039
E        1                1.67                  -4.485
         2                2.26                  -7.099

To give a visual impact of the spread of the data and what the above table really means, it would be wise to
plot the data for each factor level. The plots of the average standard deviation by factor level are shown
below:

[Figure: Average Standard Deviation by Factor Level - the average standard deviation at levels 1 and 2 of
Factors A, B, C and E]
The ANOVA Table and the level average standard deviations indicate that A2 B2 C2 E1 are the optimal
choices from an NTB(II) S/N standpoint. The analysis of the raw data remains the same as shown in the
chapter. The average level of the response should be targeted using the results of the raw data analysis. This
is true regardless of whether the goal is as small as possible, as large as possible, or to meet a specific value.
The variance should be minimized by maximizing the NTB(II) S/N. The experimenter must make the trade-off
between the choice of factor levels that adjust the response average and the choice of factor levels that
minimize the variance of the response.
A comparison of the results of the two methods shows clear differences. As an example, for the situation
where a specific value is targeted (NTB), the factor level choices are:
NTB - B2 E1 G1 to minimize variability, A and C set to achieve target.

NTB(II) - B2 E1 to minimize variability, G set to achieve target. If the target is attainable using factor
G, use A2 C2 to minimize variability, otherwise, set C and/or A to achieve target.
There is no complete agreement among all statisticians and DOE practitioners as to which approach gives
better results. As a general rule, the reader is encouraged to:
1. Plot the data including raw and/or transformed values, level averages and standard deviations, and
any other information that seems appropriate. One picture is worth a thousand words.
2. Analyze the data using the appropriate analysis techniques.
3. Compare the results to the data plots to determine which set of results makes the most sense.
Perform this comparison fairly and resist the temptation to choose the results solely on whether
they support convenient conclusions.
4. Run confirmation tests.
DOE is a powerful tool that can help the experimenter get the most out of scarce testing resources. However,
like any powerful tool, care must be taken to understand how to use the tool and how to interpret the results.
ANALYSIS OF CLASSIFIED DATA
Copyright 1997 DaimlerChrysler Corporation 69
2.5 Analysis of Classified Data
Purpose
The purpose of this section is to:
1. Discuss the Classified Attribute Analysis and Classified Variable Analysis approaches to analyzing
classified responses.
2. Present examples of how these techniques are used.
Summary
# Some responses cannot be measured on a continuous scale.
# Classified Attribute Analysis and Classified Variable Analysis can be used to analyze data that can
be divided into sequential classes.
# Three to five responses are recommended at each experimental set-up.
# Classified Attribute Analysis is used when the sample size for every set-up is the same.
# Classified Variable Analysis is used when the sample size for every set-up is not the same.
# Both Classified Attribute Analysis and Classified Variable Analysis analyze cumulative rating class
frequencies.
# If possible, a continuous scale response should be used in place of a classified scale response since
more samples are required for the classified analysis and measurement error is generally higher.
Classified Responses
Some experimental responses cannot be measured on a continuous scale although they can be divided into
sequential classes. Examples include appearance and performance ratings. In these situations, three to five
rating classes are generally the optimum number because this number allows major differences in the
responses to be identified and yet, does not require the rater to identify differences that are too subtle. Two
related analysis techniques are used to analyze classified responses:
1. Classified Attribute Analysis is used when the total number of items rated is the same for every test
matrix set-up.
2. Classified Variable Analysis is used when the total number of items rated is not the same for every
test matrix set-up.
Three to five responses at each experimental set-up are recommended to give a good evaluation of the class
distribution of responses at that set-up. As with continuous measurements, more responses at each set-up
allow smaller differences to be identified.
Classified Attribute Analysis
This technique converts the observed frequency in each class into a cumulative frequency for the classes. As
an example, if there are three classes, the observed and cumulative frequencies might be:
OBSERVED FREQUENCY
CUMULATIVE FREQUENCY
Class I 2 2
Class II 1 3
Class III 1 4
It is assumed that the reader will use a computer program to analyze the classified data. The specific input
format will depend on the computer program that the reader uses. The mathematical derivations and
philosophies of this approach will not be presented here. The interested reader is invited to refer to Quality
Engineering: Product & Process Design Optimization by Yuin Wu and Dr. Willie Hobbs Moore.
Example
There are three grades used to evaluate paint appearance of a product. They are Bad, OK and Good. Seven
factors (A through G), each at two levels, are evaluated to determine the combination of factor levels that
optimizes paint appearance. Five products are evaluated at each testing situation in an L8 orthogonal array.
Test results are:
TEST SET-UP AND RESULTS
FREQUENCY IN EACH GRADE
A B C D E F G BAD OK GOOD
1 1 1 1 1 1 1 2 3 0
1 1 1 2 2 2 2 3 2 0
1 2 2 1 1 2 2 4 1 0
1 2 2 2 2 1 1 0 2 3
2 1 2 1 2 1 2 0 4 1
2 1 2 2 1 2 1 1 3 1
2 2 1 1 2 2 1 0 3 2
2 2 1 2 1 1 2 0 1 4
The ANOVA analysis for this data is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 2 11.668 5.834 7.820 10.176 12.72
B 2 6.678 3.339 4.476 5.186 6.48
C 2* 0.125 0.063
D 2* 3.668 1.834
E 2* 2.259 1.130
F 2 7.935 3.968 5.319 6.443 8.05
G 2* 2.259 1.130
error 64 45.409 0.710
(pooled error) 72 53.720 0.746 58.196 72.75
Total 78 80.000 1.026
Note that the degrees of freedom are calculated differently from the non-classified situation. The df of each
source is
(The number of levels of that factor - 1) * (The number of classes - 1)
In this example, the number of levels of each factor is 2 and the number of classes is 3. For each factor,
df = (2 -1) * (3 - 1) = 2
The total df is
(The total number of rated items - 1) * (The number of classes - 1)
The total df for this example is:
df = (40 - 1) * (3 - 1) = 78
The error df is the total df minus the df of each of the factors.
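The df rules above can be sketched for this example (a sketch; 7 two-level factors, 3 classes and 40 rated items):

```python
# Degrees of freedom for the classified paint-appearance example.
levels, classes, items, factors = 2, 3, 40, 7

factor_df = (levels - 1) * (classes - 1)        # 2 per factor
total_df = (items - 1) * (classes - 1)          # 78
error_df = total_df - factors * factor_df       # 64, as in the ANOVA table
print(factor_df, total_df, error_df)            # 2 78 64
```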
From the ANOVA table, factors A, B and F are identified as significant. The effects of these factors are
shown below:
           OBSERVED         % RATE OF           CUMULATIVE       CUMULATIVE
          FREQUENCY      OCCURRENCE (R.O.)      FREQUENCY          % R.O.
       BAD  OK  GOOD      BAD  OK  GOOD       BAD  OK  GOOD    BAD  OK  GOOD
A1      9    8    3        45   40   15        9   17   20      45   85  100
A2      1   11    8         5   55   40        1   12   20       5   60  100
B1      6   12    2        30   60   10        6   18   20      30   90  100
B2      4    7    9        20   35   45        4   11   20      20   55  100
F1      2   10    8        10   50   40        2   12   20      10   60  100
F2      8    9    3        40   45   15        8   17   20      40   85  100
Total  10   19   11                                             25   73  100
Although interpretation and use of the ANOVA Table in Classified Attribute Analysis is the same as for the
non-classified situation, a significant difference does exist in estimating the cumulative rate of occurrence for
each class under the optimum condition.
[Figure: Factor Effects - Cumulative Rate of Occurrence (%) at each factor level (A-1, A-2, B-1, B-2, F-1,
F-2) for the Bad, OK and Good classes]
Percentages near 0% or 100% are not additive. The cumulative rates of occurrence can be transformed using
the Omega Method to obtain values that are additive. In the Omega Method, the cumulative percentage (p) is
transformed to a new value (A) as follows:
A = -10 log (1/p - 1)
The units of A are decibels (db).
Using this transformation, the estimated cumulative rate of occurrence for each class at the optimum
condition (A2 B2 F1) is calculated as follows:

db of estimate = db of T + (db of A2 - db of T) + (db of B2 - db of T) + (db of F1 - db of T)

The estimated cumulative rate of occurrence for each class for the optimum condition is:

Class I
db of estimate = db of .25 + (db of .05 - db of .25) + (db of .20 - db of .25) + (db of .10 - db of .25)
= -4.77 + (-12.79 + 4.77) + (-6.02 + 4.77) + (-9.54 + 4.77)
= -18.81
= 1%

Class II
db of estimate = db of .73 + (db of .60 - db of .73) + (db of .55 - db of .73) + (db of .60 - db of .73)
= -4.25
= 27%
These results are summarized into the following table:
RATE OF OCCURRENCE AT THE OPTIMUM SETTINGS
CLASS
CUMULATIVE
RATE OF OCCURRENCE RATE OF OCCURRENCE
Bad 1% 1%
OK 27% 26%
Good 100% 73%
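The Omega Method arithmetic can be sketched as a pair of transforms (a sketch; the cumulative rates come from the factor effects table above):

```python
import math

def to_db(p):
    """Omega transform: cumulative rate p (0 < p < 1) to decibels."""
    return -10 * math.log10(1 / p - 1)

def from_db(db):
    """Inverse Omega transform: decibels back to a rate."""
    return 1 / (1 + 10 ** (-db / 10))

# Class I (Bad): overall rate T and the rates at the optimum levels.
T, A2, B2, F1 = 0.25, 0.05, 0.20, 0.10
est_db = (to_db(T) + (to_db(A2) - to_db(T))
          + (to_db(B2) - to_db(T)) + (to_db(F1) - to_db(T)))

# Class II (Bad + OK): same calculation with the cumulative rates.
T2c, A2c, B2c, F1c = 0.73, 0.60, 0.55, 0.60
est2_db = (to_db(T2c) + (to_db(A2c) - to_db(T2c))
           + (to_db(B2c) - to_db(T2c)) + (to_db(F1c) - to_db(T2c)))

print(round(est_db, 2), round(100 * from_db(est_db)))    # -18.81 and 1 (%)
print(round(est2_db, 2), round(100 * from_db(est2_db)))  # -4.25 and 27 (%)
```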
Classified Variable Analysis
Classified Variable Analysis is used when the number of items evaluated is not the same for all test matrix
set-ups. As with Classified Attribute Analysis, the computer analyzes the cumulative frequencies.
Example
Four factors (A, B, C and D) are suspected of influencing door closing efforts for a particular car model. An
experiment was set up that evaluated each of these factors at 3 levels. An L9 orthogonal array was used to
evaluate the factor levels. Door closing effort ratings were made by a group of typical customers. Each
customer was asked to evaluate the doors on a scale of 1 to 3 as follows:
CLASS DESCRIPTION OF EFFORT
1 Unacceptable
2 Barely Acceptable
3 Very Good Feel
The experimental set-up and test results are:
TEST SET-UP AND RESULTS
                              NUMBER OF RATINGS    CLASS % RATE OF    CLASS CUMULATIVE
                                  BY CLASS           OCCURRENCE        FREQUENCY (%)
A  B  C  D    RATINGS             1  2  3             1  2  3             1  2  3
1 1 1 1 5 1 3 1 20 60 20 20 80 100
1 2 2 2 4 2 1 1 50 25 25 50 75 100
1 3 3 3 5 2 3 0 40 60 0 40 100 100
2 1 2 3 4 0 0 4 0 0 100 0 0 100
2 2 3 1 4 0 1 3 0 25 75 0 25 100
2 3 1 2 4 0 1 3 0 25 75 0 25 100
3 1 3 2 5 3 2 0 60 40 0 60 100 100
3 2 1 3 5 4 1 0 80 20 0 80 100 100
3 3 2 1 4 3 1 0 75 25 0 75 100 100
The ANOVA analysis for this data is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 4 871.296 217.824 447.277 869.348 48.30
B 4 34.404 8.601 17.661 32.456 1.80
C 4 25.185 6.296 12.928 23.234 1.29
D 4* 4.827 1.207
error 1782 864.291 0.485
(pooled error) 1786 869.118 0.487 874.962 48.61
Total 1798 1800.000 1.001
From the ANOVA Table, factors A, B and C are identified as significant. The effects of these factors are
shown below:
FACTOR     % RATE OF OCCURRENCE             CUMULATIVE % RATE OF OCCURRENCE
& LEVEL    CLASS 1   CLASS 2   CLASS 3      CLASS 1   CLASS 2   CLASS 3
A1 36.7 48.3 15.0 36.7 85.0 100
A2 0 16.7 83.3 0 16.7 100
A3 71.7 28.3 0 71.7 100.0 100
B1 26.7 33.3 40.0 26.7 60.0 100
B2 43.3 23.3 33.3 43.3 66.6 100
B3 38.3 36.7 25.0 38.3 75.0 100
C1 33.3 35.0 31.7 33.3 68.3 100
C2 41.7 16.7 41.7 41.7 58.4 100
C3 33.3 41.7 25.0 33.3 75.0 100
Total 36.1 31.1 32.8 36.1 67.2 100
Factor Effects

[Figure: cumulative rate of occurrence (%) plotted against factor level (A-1 through A-3, B-1 through B-3, C-1 through C-3) for Classes I, II and III]
The choice of the optimum levels is clear for factors A and B. A2 and B1 are the best choices. Two different choices are possible for factor C, depending on the overall goal of the design. If the goal is to minimize the occurrence of unacceptable efforts, C1 is the best choice. If the goal is to maximize the number of customer ratings of "very good", then C2 is the best choice. For this example, C1 will be chosen as the preferred factor setting. The estimated rate of occurrence for each class for the optimum setting, A2 B1 C1, can be calculated using the Omega Method. The estimated rates are:
RATE OF OCCURRENCE AT THE OPTIMUM SETTINGS

CLASS                    CUMULATIVE RATE OF OCCURRENCE   RATE OF OCCURRENCE
I (Unacceptable)         0%                              0%
II (Barely Acceptable)   13.4%                           13.4%
III (Very Good Feel)     100%                            86.6%
The df for the factors are calculated in the same way as with the Classified Attribute Analysis, i.e.
df = (number of factor levels - 1) * (number of classes - 1)
In Classified Variable Analysis, the total number of items evaluated at each condition is not equal. To
"normalize" these sample sizes, percentages are analyzed and the "sample size" for each test set-up becomes
100 (for 100%). The total df is
(The total of the "sample sizes" - 1) * (The number of classes - 1)
For this example, the total df is:
df = (900 - 1) * (3 - 1) = 1798
The error df is the total df minus the df of each of the factors.
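These df formulas reproduce the df column of the ANOVA table above; a quick check in Python:

```python
# Door-closing effort example: 4 factors at 3 levels, 3 response classes,
# nine L9 set-ups "normalized" to a sample size of 100 each.
levels, classes = 3, 3
factor_df = (levels - 1) * (classes - 1)   # df for each of A, B, C, D
total_df = (9 * 100 - 1) * (classes - 1)   # (total "sample size" - 1) * (classes - 1)
error_df = total_df - 4 * factor_df        # total df minus the factor dfs
print(factor_df, total_df, error_df)       # 4 1798 1782
```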
Discussion of the Degrees of Freedom
In both Classified Attribute Analysis and Classified Variable Analysis, the total degrees of freedom are much
larger than the number of items evaluated. The interpretation of the F ratios and the calculation of a
confidence interval is complicated by the large number of degrees of freedom and will not be addressed here.
The analysis techniques for classified responses are not as completely developed as are the techniques for the
analysis of continuous data. In Dr. Taguchi's approach, the emphasis is on using the % contribution to
prioritize alternative choices. Although better statistical techniques may be developed to handle classified
data, Classified Attribute and Classified Variable Analyses can be used to identify the large contributors to
variation in classified responses.
SUPPLEMENT
2.6 Supplement
As was mentioned in the Discussion of the Degrees of Freedom, there is no consensus among statisticians
regarding the best method to use to analyze classified data. An alternative to the methods
described in this chapter is to transform the classified data into variable data and analyze the data as
described in Chapter V. A drawback to this approach is that the relative difference in the transformed values
should reflect the relative difference in the classifications and this is sometimes difficult to do. A simple
example from the medical field will illustrate this. Four different groups of patients suffering from the same
disease are each given a different medicine. The purpose is to determine which medicine is best. The
response classes are:
CLASS DESCRIPTION
A Patient Improves
B No Change In Patient
C Patient Dies
If Class A is given a value of 1 and Class B is given a value of 2, what should Class C be given? Is the
difference between Classes B and C the same as the difference between Classes A and B? Twice the
difference? Three times?
Dr. George Box is of the opinion that this difficulty can be overcome by analyzing the variable data using
several different transformations from the classifications. In most instances, the choice of the best response
won't be affected by the different relative values placed on the classifications and, in every case, the data will
be much easier to analyze and interpret. The example given on page 68 will be worked as an example.
Example
There are three grades used to evaluate paint appearance of a product. They are Bad, OK and Good. The
classified data are transformed into variable data as follows: Bad = 1, OK = 3, Good = 4. This puts
emphasis on avoiding situations that result in "bad" responses. Seven factors (A through G), each at two
levels, are evaluated to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results are:
TEST SET-UP AND RESULTS

                       FREQUENCY IN EACH GRADE    TRANSFORMED
A  B  C  D  E  F  G    BAD   OK   GOOD            DATA
1 1 1 1 1 1 1 2 3 0 1 1 3 3 3
1 1 1 2 2 2 2 3 2 0 1 1 1 3 3
1 2 2 1 1 2 2 4 1 0 1 1 1 1 3
1 2 2 2 2 1 1 0 2 3 3 3 4 4 4
2 1 2 1 2 1 2 0 4 1 3 3 3 3 4
2 1 2 2 1 2 1 1 3 1 1 3 3 3 4
2 2 1 1 2 2 1 0 3 2 3 3 3 4 4
2 2 1 2 1 1 2 0 1 4 3 4 4 4 4
The ANOVA analysis for the raw data is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 11.03 11.03 14.33 10.26 20.94
B 1 3.03 3.03 3.93 2.26 4.61
C 1* 0.03 0.03
D 1* 2.03 2.03
E 1* 2.03 2.03
F 1 7.23 7.23 9.39 6.46 13.18
G 1* 2.03 2.03
error 32 21.60 0.68
(pooled error) 36 27.70 0.77 30.01 61.27
Total 39 48.98 1.26
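The SS values in this ANOVA can be reproduced from the transformed ratings and the L8 column assignments above; a sketch (variable names are ours):

```python
# Transformed ratings (Bad=1, OK=3, Good=4) for the eight L8 set-ups.
data = [
    [1, 1, 3, 3, 3], [1, 1, 1, 3, 3], [1, 1, 1, 1, 3], [3, 3, 4, 4, 4],
    [3, 3, 3, 3, 4], [1, 3, 3, 3, 4], [3, 3, 3, 4, 4], [3, 4, 4, 4, 4],
]
# Level (1 or 2) of factors A, B and F in each run, from the L8 columns.
A = [1, 1, 1, 1, 2, 2, 2, 2]
B = [1, 1, 2, 2, 1, 1, 2, 2]
F = [1, 2, 2, 1, 1, 2, 2, 1]

all_y = [y for run in data for y in run]
grand = sum(all_y) / len(all_y)

def ss_factor(levels):
    """Between-level sum of squares for a two-level factor."""
    ss = 0.0
    for lv in (1, 2):
        ys = [y for run, l in zip(data, levels) if l == lv for y in run]
        ss += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return ss

ss_total = sum((y - grand) ** 2 for y in all_y)
print(ss_factor(A), ss_factor(B), ss_factor(F), ss_total)
# approx. 11.03, 3.03, 7.23 and 48.98, matching the ANOVA table
```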
Plotting of the data and inspection of the level averages reveals that the best factor choices are: A2 B2 F1.

The ANOVA analysis for the NTB II S/N ratios is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 23.81 23.81 16.76 22.39 25.18
B 1 23.81 23.81 16.76 22.39 25.18
C 1* 0.39 0.39
D 1* 0.39 0.39
E 1 13.21 13.21 9.29 11.79 13.26
F 1 23.81 23.81 16.76 22.39 25.18
G 1* 3.49 3.49
error
(pooled error) 3 4.26 1.42 9.95 11.19
Total 7 88.91 12.70
Plotting of the S/N data and inspection of the level averages reveals that the best factor choices are: A2 B2 E2 F1. The best choices overall are: A2 B2 E2 F1. This compares with the best choice of A2 B2 F1 from the Accumulation Analysis.
Each of the methods has one further disadvantage. Using the transformation approach, it's not possible to make a projection of what the distribution of classes would look like at the optimum settings. The Accumulation Analysis was not able to identify the effect on the standard deviation of the ratings due to factor E. Each approach tells a different part of the story and both should be used to get the full picture.
DYNAMIC SITUATIONS
2.7 Dynamic Situations
Purpose
This chapter discusses:
1. What dynamic test situations are.
2. How a test plan should be set up in a dynamic situation.
3. The analysis of test data.
Summary
# Systems that respond differently, by design, to different levels of an input factor are called dynamic.
# The analytical procedures discussed in prior chapters emphasize reducing data variability across all
noise factors and are not appropriate for dynamic situations without some modifications.
# A nominal-the-best (NTB) signal-to-noise (S/N) ratio can be calculated for each test by doing a
separate ANOVA for each test set-up.
Definition
In many instances, the experimenter knows that the optimum response for a system changes with levels of an
input signal. Using the signal-to-noise techniques described in the previous chapters would yield incorrect
results. These techniques emphasize repeatability across all levels of the noise factors. In a dynamic
situation, the experimenter wants different responses depending upon the level of an input signal factor. Two
examples are:
1. If two or more length measurement devices are compared, the standard lengths to be measured
become signal factor levels for the comparison. The experimenter would want a measurement device
that gives a reading that is relative to the different standard lengths measured and is repeatable at
each standard measured.
2. Several control factors are to be included in an experiment to determine the combination that
optimizes vehicle braking distance. The tests are run at two different vehicle speeds. The vehicle
speed would be treated as a signal factor. The experimenter would want the braking distance to be
repeatable at each vehicle speed and reflect the customers' needs and desires for braking distance at
each vehicle speed. These needs and desires would not be the same for all vehicle speeds. (Note: It
might seem that the goal should be to minimize the braking distance at each vehicle speed, however,
if the braking were too abrupt, the driver might lose control of the vehicle.)
Discussion
The analysis of dynamic test data can be complicated. The following conditions exist in the examples that
follow. If these conditions are not present in a dynamic experiment, help from a statistician should be sought
before setting up the experiment.
Conditions
1. Signal factors will be assigned to an outer array in the experimental set-up.
2. The signal factor(s) will have either 2 or 3 levels.
3. If there are 3 levels for a signal factor, the intervals between the adjacent levels will be equal.
4. The experimental test includes either noise factors in an outer array or repetitions so that for each
inner array control factor set-up, 2 or more runs are made for each signal factor.
Analysis
The general approach that is used to analyze the data is:
1. The test results for each inner array set-up (test number) are separately analyzed using Analysis of
Variance (ANOVA). The ANOVA Table for these analyses will be:
ANOVA TABLE (RAW DATA)

SOURCE    df            SS            V
Signal    df_s          SS_s          V_s
error     df_t - df_s   SS_t - SS_s   V_e
Total     df_t          SS_t
2. A nominal-the-best signal-to-noise ratio is calculated for each inner array set-up from these ANOVA Tables as follows:

   S/N = 10 log10 [ (SS_s - V_e) / (r * s * h^2 * V_e) ]

   where, r = the number of data in each level of the signal factor for this inner array set-up
   s = 0.5 if the signal factor has 2 levels
   or
   s = 2.0 if the signal factor has 3 levels
   h = the interval between the adjacent levels of the signal factor
   SS_s = the signal factor sum of squares from the ANOVA Table
   V_e = the error variance from the ANOVA Table

3. The calculated S/N ratio for each inner array set-up is then used in a Nominal-the-Best (NTB) S/N Analysis of Variance to determine which control factor settings should be used to reduce variability and which should be used to tune the response to the desired output.

The application of these steps will be developed more fully through the following examples.
Example
Two different types of automatic optical measurement devices are to be compared. Two orientations of the
devices are possible, horizontal or vertical. These are assigned to an L4 inner array as follows:
FACTOR COLUMN
Type (T) 1
Orientation (O) 2
TxO Interaction 3
Items with two different surface finishes will be measured by the devices. Surface finish (F) will be a noise
factor.
Two standard lengths of 10 and 20 mm will be evaluated. These will be the two levels of the signal factor
(S). The test matrix and test results for the experiment are:
TEST RESULTS

TEST      TEST MATRIX      S1             S2             NTB
NUMBER    T   O   TxO      F1     F2      F1     F2      S/N
1 1 1 1 9.8 9.7 20.4 20.2 19.33
2 1 2 2 10.2 9.9 20.3 20.1 14.94
3 2 1 2 9.6 9.9 19.6 20.0 12.05
4 2 2 1 10.2 9.8 19.7 19.5 12.65
For Test Number 1, the S/N ratio is calculated from the ANOVA Table for just the runs in Test Number 1.
S F TEST RESULT
1 1 9.8
1 2 9.7
2 1 20.4
2 2 20.2
The ANOVA Table for these data is:
ANOVA TABLE (RAW DATA)

SOURCE    df    SS        V
Signal    1     111.303   111.303
error     2     0.026     0.013
Total     3     111.328
The S/N ratio is calculated as follows:

S/N = 10 log10 [ (SS_s - V_e) / (r * s * h^2 * V_e) ]
    = 10 log10 [ (111.303 - 0.013) / (2 * 0.5 * 10^2 * 0.013) ]
    = 19.33

A S/N ratio for each of the other test set-ups is calculated in a similar manner. These S/N ratios are then analyzed using the S/N ratio as a single response for each test set-up.

ANOVA TABLE (S/N Ratio Used As Raw Data)

SOURCE           df   SS       MS       F RATIO   S'       %
T                1    20.794   20.794   4.311     15.970   52.46
O                1*   4.494    4.494
TxO              1*   5.153    5.153
error            ---
(pooled error)   2    9.647    4.824              14.471   47.54
Total            3    30.647   10.147

The level averages for the data are:

LEVEL AVERAGES (NTB S/N DATA)

          O1      O2      OVERALL
T1        19.33   14.98   17.16
T2        12.05   12.65   12.35
Overall   15.69   13.82   14.76

Inspection of the data shows that the setting of T that gives the highest S/N is level 1. Although there are not enough test set-ups to allow the statistical identification of level 1 of factor O as the optimum, the data suggest that orientation 1 might work the best with device 1 and should be further investigated.
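The quantities above can be computed directly from the Test Number 1 data; the function names here are ours, not the manual's. Note that the tabulated 19.33 uses the rounded V_e = 0.013, so carrying full precision shifts the S/N slightly:

```python
import math

def signal_anova(groups):
    """One-way ANOVA of one test set-up's data, grouped by signal level.
    Returns (SS_signal, V_error)."""
    all_y = [y for g in groups for y in g]
    grand = sum(all_y) / len(all_y)
    ss_s = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_e = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    df_e = sum(len(g) - 1 for g in groups)
    return ss_s, ss_e / df_e

def dynamic_ntb_sn(ss_s, v_e, r, s, h):
    """NTB S/N = 10*log10((SS_s - V_e) / (r * s * h**2 * V_e))."""
    return 10 * math.log10((ss_s - v_e) / (r * s * h ** 2 * v_e))

# Test Number 1: readings at the 10 mm and 20 mm standards.
ss_s, v_e = signal_anova([[9.8, 9.7], [20.4, 20.2]])
print(ss_s, v_e)  # SS_s approx. 111.303, V_e = 0.0125 (0.013 rounded)

# r = 2 data per signal level, s = 0.5 (2 levels), h = 10 mm interval.
print(dynamic_ntb_sn(111.303, 0.013, r=2, s=0.5, h=10))  # approx. 19.33
```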
The level averages of the raw data are:

LEVEL AVERAGES (RAW DATA)

               S1      S2      OVERALL
T1             9.90    20.25   15.08
T2             9.88    19.70   14.79
O1             9.75    20.05   14.90
O2             10.03   19.90   14.97
Average of S   9.89    19.98
Overall                        14.93
Average at T1 O1: 15.03

The predicted averages are calculated using the techniques given in Section 2.3, using the interaction of T1 and O1 (assumed) as the optimum setting.
The predicted averages are:

for S1 (10 mm): 14.93 + [(15.03 - 14.93) - (15.08 - 14.93) - (14.90 - 14.93)] + (9.89 - 14.93) = 9.87 mm
for S2 (20 mm): 14.93 + [(15.03 - 14.93) - (15.08 - 14.93) - (14.90 - 14.93)] + (19.98 - 14.93) = 19.96 mm

Note that the readings at the optimum do not average out to the standard exactly. This assumes that the output reading can be calibrated to reflect the standard measured. The emphasis in this approach is to provide readings with low variability at each standard level of input.

This example was very simple and it may seem that the ANOVA was not really necessary. In many cases, the inner array will be more complicated than an L4 and the technique shown in this example will help the experimenter make an informed choice.
Example
The effect of several factors on vehicle braking distance is to be investigated. The control factors to be
investigated are assigned to an L8 orthogonal inner array as follows:
COLUMN
1   A - Content of material "Z" in the brake pads
2   B - Content of material "Y" in the brake pads
3   AxB Interaction
4   C - Hydraulic fluid type
5   Unassigned
6   Unassigned
7   D - Brake pad design
Noise and signal factors are assigned to an L8 outer array as follows:
COLUMN
1 S - Vehicle speed (30 mph or 60 mph)
2 T - Tire size
3 Unassigned
4 P - Pavement type (asphalt or concrete)
5 Unassigned
6 Unassigned
7 Unassigned
In this example Vehicle Speed is a signal factor. The braking distance cannot be the same when starting from 30 mph as when starting from 60 mph and therefore, different responses are expected. The
experimenter has determined through market research that for this type of vehicle, the customer would prefer
that the braking distance be 35 feet from 30 mph and 130 feet from 60 mph.
The test set-up and results are:
                         S1 (30 mph)                  S2 (60 mph)
                     T1           T2             T1             T2         NTB
A  B  AxB  C  D    P1    P2    P1    P2      P1     P2     P1     P2      S/N
1  1  1    1  1    39.9  40.6  40.4  40.6    140.9  141.2  140.7  139.6   15.73
1  1  1    2  2    42.5  42.8  42.4  42.7    143.0  142.4  141.1  142.8   14.62
1  2  2    1  2    45.0  41.3  41.0  44.8    141.2  143.1  143.4  142.7   5.90
1  2  2    2  1    40.7  39.7  40.5  40.7    141.3  140.7  140.9  139.7   15.25
2  1  2    1  2    39.4  40.1  39.7  38.1    139.9  139.7  141.1  139.7   12.74
2  1  2    2  1    37.5  37.3  37.6  37.3    137.6  138.0  137.1  137.2   20.64
2  2  1    1  1    36.1  38.4  36.9  37.9    135.1  139.3  138.4  136.1   6.58
2  2  1    2  2    39.6  40.4  40.5  40.3    139.7  140.1  142.0  138.5   9.89
The unassigned columns are not shown to conserve space and to make the table more presentable. The outer array is also shown somewhat differently, with column 1 (factor S) at the top, column 2 (factor T) in the middle, and column 4 (factor P) at the bottom. Although this arrangement can be used to present the data, the unassigned columns should be added back to the arrays to aid the experimenter's understanding of the analysis and the application of the inner and outer L8 orthogonal arrays.

For the first test set-up, the S/N ratio is calculated from the ANOVA Table for the data in the first row.
The ANOVA Table for these data is:

ANOVA TABLE (RAW DATA)

SOURCE    df    SS          V
Signal    1     20090.101   20090.101
error     6     1.786       0.298
Total     7     20091.888
The S/N ratio is calculated as follows:

S/N = 10 log10 [ (SS_s - V_e) / (r * s * h^2 * V_e) ]
    = 10 log10 [ (20090.101 - 0.298) / (4 * 0.5 * 30^2 * 0.298) ]
    = 15.73
A S/N ratio for each of the other test set-ups is calculated in a similar manner. These S/N ratios are then
analyzed using the S/N ratio as a single response for each test set-up.
ANOVA TABLE (S/N RATIO USED AS RAW DATA)
SOURCE df SS MS F RATIO S' %
A 1* 0.340 0.340
B 1 85.217 85.217 44.453 83.300 47.87
AxB 1 7.431 7.431 3.876 5.514 3.17
C 1 47.288 47.288 24.668 45.371 26.08
Unassigned 1* 1.103 1.103
Unassigned 1* 4.307 4.307
D 1 28.313 28.313 14.769 26.396 15.17
error ---
(pooled error) 3 5.750 1.917 13.418 7.71
Total 7 173.998 24.857
The ANOVA Table indicates that factors B, C and D and the interaction of factors A and B are significant.
The level averages for these factors are:
LEVEL AVERAGES

        B1              B2
     A1      A2      A1      A2        C1      C2      D1      D2
     15.18   16.69   10.58   8.24      10.24   15.10   14.55   10.79
The ANOVA Table and the level averages indicate that B1, C2 and D1 are the optimal choices from a S/N standpoint. These are the factor choices that should result in the minimum variance of the response.
An analysis of the raw data would identify the signal factor as the most significant contributor to the variation
of the data. However, this information is not useful. To increase the ability of the analysis to clearly show
the significant control factors, the target braking distance for each of the signal factor levels is subtracted
from all of the data collected at that signal factor level. This reduces the percent level of contribution of the
signal factor and increases the percent level of contribution of the control factors while maintaining their
relative order of contribution. This transformation makes the effects of the control factors more visible but
does not affect their significance. The transformed data are shown below:
                         S1                            S2
                     T1          T2              T1           T2
A  B  AxB  C  D    P1    P2    P1    P2       P1    P2     P1    P2
1  1  1    1  1    4.9   5.6   5.4   5.6      10.9  11.2   10.7  9.6
1  1  1    2  2    7.5   7.8   7.4   7.7      13.0  12.4   11.1  12.8
1  2  2    1  2    10.0  6.3   6.0   9.8      11.2  13.1   13.4  12.7
1  2  2    2  1    5.7   4.7   5.5   5.7      11.3  10.7   10.9  9.7
2  1  2    1  2    4.4   5.1   4.7   3.1      9.9   9.7    11.1  9.7
2  1  2    2  1    2.5   2.3   2.6   2.3      7.6   8.0    7.1   7.2
2  2  1    1  1    1.1   3.4   1.9   2.9      5.1   9.3    8.4   6.1
2  2  1    2  2    4.6   5.4   5.5   5.3      9.7   10.1   12.0  8.5
The ANOVA Table for these data is:

ANOVA TABLE

SOURCE           df    SS        MS        F RATIO   S'        %
A                1     150.369   150.369   155.340   149.401   21.56
B                1*    1.56E-4   1.56E-4
AxB              1*    0.473     0.473
C                1*    0.833     0.833
Unassigned       1*    2.600     2.600
Unassigned       1*    2.213     2.213
D                1     101.758   101.758   105.122   100.790   14.55
S                1     382.691   382.691   395.342   381.723   55.09
T                1*    0.083     0.083
Unassigned       1*    0.170     0.170
P                1*    0.375     0.375
Unassigned       1*    1.658     1.658
Unassigned       1*    0.508     0.508
Unassigned       1*    2.520     2.520
AxS              1*    0.083     0.083
DxS              1*    0.098     0.098
error            47    46.441    0.988
(pooled error)   60    58.055    0.968               60.959    8.80
Total            63    692.874
The interactions between all the columns of the inner array and all the columns of the outer array are available
for investigation. For this example, only the AxS and DxS interactions are investigated to give an indication
of whether factors A and D "behave" consistently at the two levels of the signal factor. The analysis indicates
that control factors A and D are significant contributors to the variation of the data. The difference in
responses between the two levels of these factors is independent of the signal level. The analysis also
identified the signal factor, S, as an important contributor to the data variation. This was already known.
The level averages are:

LEVEL AVERAGES

                  S1     S2      OVERALL
A1                6.58   11.54   9.06
A2                3.59   8.41    6.00
D1                3.86   8.68    6.27
D2                6.40   11.28   8.79
Average of S      5.08   9.98
Overall Average                  7.53
The predicted averages are calculated using the techniques given in Section 2.3, using A2 and D1 as the optimum settings and adding the values that were subtracted prior to the ANOVA:

for S1 (30 mph): [7.53 + (6.00 - 7.53) + (6.27 - 7.53) + (5.08 - 7.53)] + 35 = 37.29 feet
for S2 (60 mph): [7.53 + (6.00 - 7.53) + (6.27 - 7.53) + (9.98 - 7.53)] + 130 = 137.19 feet

Factor B should be set to level 1 and factor C should be set to level 2 to maximize the S/N ratio.

Since the target values were not obtained at the optimum settings, the experimenter must either continue to investigate other ways of reducing the stopping distance or accept the consequences of failing to fully satisfy the customer's requirements.
SUPPLEMENT 1
2.8 Supplement 1

This supplement shows how the two examples given in this chapter would be worked using the NTB II S/N approach. The NTB S/N ratio for a dynamic situation is:

NTB S/N = 10 log10 [ (SS_s - V_e) / (r * s * h^2 * V_e) ]

This equation is explained on page 78. Using the same terminology, the NTB II S/N is:

NTB II S/N = -10 log10 (V_e), which equals -20 log10 (error standard deviation)

First Example (pages 81 through 83)

The calculations for the NTB S/Ns are discussed on page 80. The same steps are followed for the NTB II approach until the final S/N calculation. The two sets of S/N ratios are contrasted below:
TEST NUMBER   NTB S/N   NTB II S/N
1             19.33     19.03
2             14.94     14.88
3             12.05     12.04
4             12.65     13.01
When the NTB II S/N ratios were analyzed, the ANOVA Table and the interpretation of the level averages were essentially the same as those for the NTB S/N.
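The NTB II column can be reproduced directly from the raw data of the four test set-ups; a sketch (the `ntb2_sn` helper is ours):

```python
import math

def ntb2_sn(groups):
    """NTB II S/N = -10*log10(V_e), where V_e is the pooled within-
    signal-level (error) variance for one inner-array set-up."""
    ss_e = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
    df_e = sum(len(g) - 1 for g in groups)
    return -10 * math.log10(ss_e / df_e)

# Raw data for the four L4 set-ups of the first example: each set-up
# has two readings at the 10 mm standard and two at the 20 mm standard.
tests = [
    ([9.8, 9.7],  [20.4, 20.2]),
    ([10.2, 9.9], [20.3, 20.1]),
    ([9.6, 9.9],  [19.6, 20.0]),
    ([10.2, 9.8], [19.7, 19.5]),
]
for i, groups in enumerate(tests, start=1):
    print(f"Test {i}: NTB II S/N = {ntb2_sn(groups):.2f} db")
# Test 1: 19.03, Test 2: 14.88, Test 3: 12.04, Test 4: 13.01
```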
Second Example (pages 84 through 88)

The calculations for the NTB S/Ns are discussed on pages 83 and 84. In doing the NTB II analysis, it was suspected that the standard deviation of the data might be related to the average of the data. In other words, the spread of the stopping distances might be greater at Standard Two (60 mph) than at Standard One (30 mph). Using the procedure given in Section 1.5 Supplement, the averages and standard deviations were compared as follows:

1. For each vehicle speed, the average stopping distance and the standard deviation were calculated (16 averages and 16 standard deviations in total).

2. The log (standard deviation) was plotted versus the log (average).

3. The slope was estimated to be in the range of 0.2 to 0.3 with large scatter in the data. By comparing this value to the table on page 34, it was determined that there was not a strong need to transform the data.

The NTB II S/N ratios were calculated for the untransformed data. The two sets of S/N ratios are compared below:

TEST NUMBER   NTB S/N   NTB II S/N
1             15.73     5.26
2             14.62     4.19
3             5.90      -4.51
4             15.25     4.62
5             12.74     2.21
6             20.64     10.18
7             6.58      -3.87
8             9.89      -0.56
When the NTB II S/N ratios were analyzed, the ANOVA Table and the interpretation of the level averages were essentially the same as those for the NTB S/N. The reader is encouraged to run the analysis to confirm this. The analysis of the raw data did not change. The conclusions also remained the same as before.
Conclusions

For these two examples, the NTB and NTB II methods give equivalent results. However, this does not prove equivalency of the methods. On other sets of data, differences in the results obtained have been demonstrated. Of the two methods, the NTB II approach is easier to understand since maximizing the NTB II S/N is the same as minimizing the error variance for the chosen combination of factor levels.

The experimenter should always analyze the data completely, plot the data, compare the results to the data plots, and run confirmation tests.
SUPPLEMENT 2 - ENGINEERING OPTIMIZATION (ROBUST DESIGN)
2.9 Supplement 2 - Engineering Optimization (Robust Design)
This supplement addresses an approach to developing effective engineering optimization (Robust Design). An example of such an approach will be demonstrated. For more information, please see References [1], [2] and [3].
The Approach to Engineering Optimization (Robust Design)

Successful engineering optimization requires engineers to focus on system, product and/or component function. It is essential that engineers be provided a method by which some numerical estimation of the robustness of design alternatives can be evaluated prior to and subsequent to design creation. Such a method is built on the premise that the greatest optimization can be achieved by understanding a system's ideal function and then searching for a low-cost design in which the deviation from the ideal is minimal (i.e., the design is robust, or unaffected by uncontrolled inputs).
The approach to engineering optimization involves:
(A) Defining the engineering optimization
Step 1: Define the engineered system
Step 2: Consider the output response
Step 3: Consider the noise space
Step 4: Consider the input signal
Step 5: Define the ideal function
Step 6: Consider the control factors
(B) Designing and conducting an appropriate experiment
(C) Two-step optimization
Step 1: Maximize S/N
Step 2: Adjust the sensitivity to the desired sensitivity level
Example
In this section an example is discussed to illustrate the approach to engineering optimization outlined above.
Defining the engineering optimization
Step 1: Define the Engineered System
The system is a flexible push-pull cable assembly (see Figure 1). It is used to transmit mechanical energy in
complex routings. Typical automotive applications include a hood release, trunk release and mirror
adjustment.
[Figure 1 - Flexible push-pull cable assembly]
Step 2: Consider the Output Response
The output response on our push-pull cable example is the output displacement (mm). A specific output
displacement is generated for a given input displacement.
Step 3: Consider the Noise Space
Some noise factors and levels pertinent to our push-pull cable example are given below:
FACTOR DESCRIPTION N1 N2
Number of cycles 10 1000
Assembly length Short Long
Routing Gentle Severe
Operating load Low Heavy
Step 4: Consider the Input Signal
The input signal to our push-pull example is the input displacement (mm). A specific input displacement
(input signal) produces a corresponding output displacement (output response). The three levels of the input
signal factor are given below:
Level 1 = 8 mm
Level 2 = 16 mm
Level 3 = 24 mm
[Figure 2 - The ideal function: output displacement (y) versus input displacement (M), the line y = M]
Step 5: Define the Ideal Function

The ideal function (transformation of input signal to output response) serves as the conceptual framework that drives engineering optimization. Most ideal functions are linear. The ideal relationship is

y = α + βM + e

where, y = output response
       α = constant
       β = slope
       M = input signal
       e = error

There are three basic issues to be considered in this relationship:
(1) The linearity of the relationship.
(2) The slope of the line of the relationship.
(3) The error about the line of the relationship.
The user wants the hood to open (unlatch) with an easy pull of the lever. The push-pull cable must efficiently
transmit the motion from the actuating lever to the hood latch retaining rod. The ideal relationship is that the
output displacement (y) equals input displacement (M) regardless of noise factors such as routing, loading
and temperature.
The relationship that expresses the ideal function is shown in Figure 2.
SUPPLEMENT 2 - ENGINEERING OPTIMIZATION (ROBUST DESIGN)
94 Copyright 1997 DaimlerChrysler Corporation
Step 6: Consider the Control Factors
A complete listing of control factors and levels pertinent to our push-pull cable example is given in Table 1.
FACTOR DESCRIPTION LEVEL 1 LEVEL 2
A Liner Material Modulus T - 20% T + 20%
B Braid Wire Diameter T - 0.1 mm Theoretical
C Liner Wall Thickness T - 0.2 mm Theoretical
D Braid Density Theoretical T + 10%
E Coating Material Modulus Theoretical T + 20%
F Coating Thickness Theoretical T + 0.3 mm
G Cable Diameter T - 0.1 mm Theoretical
H Cable Material Modulus T - 20% Theoretical
I Cable Type 1 x 19 7 x 7
J Braid Wire Modulus Theoretical T + 20%
K Cable Inner Diameter Cable dia. + 0.1 mm Cable dia. + 0.2 mm
Table 1 - Push-Pull Cable Actuator Control Factors
Designing and conducting an appropriate experiment
Once the above six steps have been completed, an experiment is conducted to optimize the function of the
actuator. The experimental layout with the test data is shown in Table 2.
SUPPLEMENT 2 - ENGINEERING OPTIMIZATION (ROBUST DESGIN)
Copyright 1997 DaimlerChrysler Corporation 95
                              M1      M2       M3
A B C D E F G H I J K         8 mm    16 mm    24 mm        η      β (db)
1 1 1 1 1 1 1 1 1 1 1 1 N1 4.97 5.19 5.41 10.05 10.49 10.94 13.28 13.95 14.62 -2.48 -4.58
N2 4.69 4.90 5.11 9.48 9.90 10.32 12.53 13.16 13.79
2 1 1 1 1 1 2 2 2 2 2 2 N1 4.00 4.27 4.54 7.20 7.74 8.27 8.56 9.37 10.18 -7.74 -7.57
N2 3.84 4.10 4.35 6.90 7.42 7.93 8.21 8.98 9.76
3 1 1 2 2 2 1 1 1 2 2 2 N1 4.46 4.65 4.84 9.04 9.41 9.79 11.78 12.35 12.91 -2.62 -5.51
N2 4.40 4.48 4.66 8.70 9.06 9.42 11.34 11.89 12.43
4 1 2 1 2 2 1 2 2 1 1 2 N1 5.00 5.14 5.28 9.59 9.86 10.14 13.66 14.07 14.48 3.01 -4.55
N2 4.89 5.03 5.16 9.38 9.65 9.92 13.36 13.76 14.17
5 1 2 2 1 2 2 1 2 1 2 1 N1 6.00 6.27 6.54 12.10 12.64 13.18 18.20 19.01 19.82 2.79 -2.22
N2 5.75 6.01 6.27 11.60 12.12 12.64 17.45 18.23 19.00
6 1 2 2 2 1 2 2 1 2 1 1 N1 3.86 4.00 4.14 7.70 7.98 8.26 10.50 10.92 11.33 -0.17 -6.71
N2 3.72 3.85 3.99 7.42 7.68 7.95 10.11 10.51 10.91
7 2 1 2 2 1 1 2 2 1 2 1 N1 4.04 4.23 4.42 7.29 7.66 8.03 8.72 9.28 9.84 -7.13 -7.57
N2 3.96 4.14 4.32 7.13 7.49 7.86 8.53 9.08 9.62
8 2 1 2 1 2 2 2 1 1 1 2 N1 3.39 3.61 3.83 6.36 6.80 7.25 6.96 7.63 8.29 -9.46 -9.17
N2 3.20 3.41 3.62 6.00 6.42 6.84 6.57 7.20 7.83
9 2 1 1 2 2 2 1 2 2 1 1 N1 6.13 6.27 6.41 11.98 12.25 12.53 16.53 16.94 17.36 1.44 -2.84
N2 6.00 6.13 6.27 11.72 11.99 12.26 16.17 16.58 16.98
10 2 2 2 1 1 1 1 2 2 1 2 N1 6.54 6.76 6.98 13.18 13.62 14.06 19.82 20.48 21.14 4.42 -1.57
N2 6.27 6.48 6.69 12.64 13.06 13.48 19.00 19.64 20.27
11 2 2 1 2 1 2 1 1 1 2 2 N1 4.89 5.08 5.27 10.28 10.65 11.03 15.15 15.71 16.27 3.69 -3.82
N2 4.71 4.89 5.07 9.90 10.26 10.62 14.58 15.12 15.66
12 2 2 1 1 2 1 2 1 2 2 1 N1 3.28 3.55 3.82 6.53 7.07 7.61 8.73 9.54 10.36 -4.99 -7.99
N2 3.09 3.35 3.60 6.16 6.67 7.19 8.23 8.23 9.77
Table 2 - Push-Pull Cable Actuator Experiment Layout and Test Data
The raw data is the output displacement measured in millimeters. For each combination of control factors, 18 measurements of output displacement were obtained and used to compute a corresponding signal-to-noise ratio (η).
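The η and β (db) entries in Table 2 can be reproduced with the usual dynamic (zero-point proportional) forms, β = ΣMy / ΣM² and η = 10 log10[(S_β - V_e) / (r·V_e)] with r = ΣM², presumably the formulas derived in the "S/N ratio and sensitivity calculations" section that follows. A sketch for run 1 (the N2 reading printed as 6.90 at 8 mm reproduces the tabulated -2.48 and -4.58 only as 4.90, so 4.90 is assumed here):

```python
import math

# Table 2, run 1: output displacement at M = 8, 16, 24 mm,
# three readings per signal level under each noise condition N1, N2.
M = [8, 16, 24]
n1 = [[4.97, 5.19, 5.41], [10.05, 10.49, 10.94], [13.28, 13.95, 14.62]]
n2 = [[4.69, 4.90, 5.11], [9.48, 9.90, 10.32], [12.53, 13.16, 13.79]]

pairs = [(m, y) for readings in (n1, n2)
         for m, ys in zip(M, readings) for y in ys]
r = sum(m * m for m, _ in pairs)                   # sum of M^2 over 18 points
s_beta = sum(m * y for m, y in pairs) ** 2 / r     # variation due to the slope
s_total = sum(y * y for _, y in pairs)
v_e = (s_total - s_beta) / (len(pairs) - 1)        # error variance, 17 df

beta = sum(m * y for m, y in pairs) / r            # least-squares slope
eta = 10 * math.log10((s_beta - v_e) / (r * v_e))  # S/N ratio in db
print(round(eta, 2), round(20 * math.log10(beta), 2))  # -2.48 -4.58
```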
Two-step optimization
Step 1: Maximize S/N
The levels for the factors that affect the S/N ratio are selected so as to maximize the ratio. As can be seen in
Table 3 (which contains the average S/N ratio values for each level of each factor), five factors were found to
have high impact on the ratio. On this basis, levels B2, D2, G1, H2 and J1 were chosen for the push-pull
cable actuator.
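Step 1 can be reproduced mechanically from Table 3: rank the factors by the difference between their level averages and, for each high-impact factor, pick the level with the larger average S/N. A sketch (the averages are transcribed from Table 3; the variable names are ours):

```python
# Average S/N ratio by level (Level 1, Level 2), transcribed from Table 3.
snr = {
    "A": (-1.20, -2.01), "B": (-4.67, 1.46), "C": (-1.18, -2.03),
    "D": (-2.91, -0.30), "E": (-1.57, -1.64), "F": (-1.63, -1.58),
    "G": (1.21, -4.41),  "H": (-2.67, -0.54), "I": (-1.60, -1.61),
    "J": (-0.54, -2.67), "K": (-1.76, -1.45),
}

# The five largest level-to-level differences mark the high-impact factors.
impact = sorted(snr, key=lambda f: -abs(snr[f][0] - snr[f][1]))[:5]

# For each high-impact factor, choose the level with the larger average S/N.
best = {f: 1 + (snr[f][1] > snr[f][0]) for f in impact}
```

Running this reproduces the manual's selection of B2, D2, G1, H2 and J1.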
SUPPLEMENT 2 - ENGINEERING OPTIMIZATION (ROBUST DESIGN)
96 Copyright 1997 DaimlerChrysler Corporation
A B C D E F G H I J K
Level 1 -1.20 -4.67 -1.18 -2.91 -1.57 -1.63 1.21 -2.67 -1.60 -0.54 -1.76
Level 2 -2.01 1.46 -2.03 -0.30 -1.64 -1.58 -4.41 -0.54 -1.61 -2.67 -1.45
Difference 0.81 6.13 0.85 2.61 0.07 0.05 5.62 2.13 0.01 2.13 0.31
Table 3 - Signal-to-Noise Ratio Response
Step 2: Adjust the Sensitivity to the Desired Sensitivity Level
To adjust the sensitivity to the desired sensitivity level, a separate analysis is performed to find the factors
that move the slope. In particular, we look at the factors that were found to have little or no effect on the S/N
ratio (Factors A, C, E, F, I and K). If it had turned out that any of these factors moved the slope, then their
levels would have been selected to maximize the slope. The β (db) in Table 2 is an indicator of the slope
(sensitivity) in decibel units.
A B C D E F G H I J K
Level 1 -5.19 -6.21 -5.23 -5.52 -5.30 -5.30 -3.42 -6.30 -5.32 -4.90 -5.32
Level 2 -5.49 -4.48 -5.46 -5.17 -5.38 -5.39 -7.26 -4.39 -5.37 -5.78 -5.37
Difference 0.30 1.73 0.23 0.35 0.08 0.09 3.84 1.91 0.05 0.88 0.05
Table 4 - Sensitivity Response
As can be seen in Table 4, which contains the average β (db) slopes for each level of each factor, these six
factors (Factors A, C, E, F, I and K) have essentially no effect on the slope. For factors that have no strong
impact on either the S/N ratio or the slope, the engineer chooses the levels at which it is easiest or most
economical to operate. Once the final combination of control factor levels is selected, the predicted S/N ratio
and slope for this combination are estimated. A confirmation experiment is conducted to confirm the expected
gain.
S/N ratio and sensitivity calculations
The calculations are illustrated using the data from test combination No. 1 in Table 2.
S/N ratio (η) calculation
The S/N ratio (η) is called the "zero point proportional" S/N ratio. The equations that follow are based on
the method of "Least Squares" and drive the best-fit line through zero.
where r0 = the number of data points at each signal level:

r = r0 (M1² + M2² + M3²) = 6 (8² + 16² + 24²) = 5376

ST = y1² + y2² + ... + y18² = 1883.1882
where Yi is the sum of the readings at the i-th signal level (i = 1, 2, 3):

Sβ = (1/r) (M1 Y1 + M2 Y2 + M3 Y3)²
Sβ = (1/5376) (8(30.27) + 16(61.18) + 24(81.33))²
Sβ = 1872.7074

Se = ST - Sβ = 1883.1882 - 1872.7074 = 10.4808

where k is the number of signal levels:

Ve = Se / (k r0 - 1) = 10.4808 / (3 × 6 - 1) = 0.6165

η = 10 log [ (1/r)(Sβ - Ve) / Ve ]
η = 10 log [ (1/5376)(1872.7074 - 0.6165) / 0.6165 ]
η = -2.48 db

Sensitivity (β (db)) calculation

β (db) = 10 log [ (1/r)(Sβ - Ve) ]
β (db) = 10 log [ (1/5376)(1872.7074 - 0.6165) ]
β (db) = -4.58 db

Reference

[1] Diana Byrne and Jim Quinlan, "Robust Function for Attaining High Reliability at Low Cost",
Proceedings of the Annual Reliability and Maintainability Symposium, pp. 183-191, 1993.
[2] "Robust Design - One Day Overview", ITEQ International, Livonia, MI, 1994.
[3] "Robust Design Workshop", American Supplier Institute Inc., Allen Park, MI, 1993.
SECTION 3
DOE Approach to a Total Design Process
PARAMETER DESIGN
3.1 Parameter Design
Purpose
This section provides an example of how the DOE technique is used to determine design factor target levels.
This approach is an upstream attempt to develop a robust product that will avoid problems later in
production. The emphasis at this stage is on using wide tolerance levels to provide a product that is easy to
manufacture and still meets all requirements.
Summary
# After the basic system design of a product has been determined, the design emphasis should be on
determining the component target levels using wide, low cost, tolerance levels.
# In a parameter design DOE, the control factors are entered into an inner array and the low cost
tolerance limits are entered into an outer array with the other noise factors.
# Testing at the tolerance limits does not provide a direct estimate of the response variance of the total
production distribution.
# An estimate of the response variance of the total production distribution can be inferred through the
DOE test results and prior knowledge about the production distribution or through computerized
simulations.
# An estimate of the Loss associated with production using the wide tolerances can be made using the
observed offset from the target and the calculated variance of the production distribution
Discussion
After the basic design of a product is determined, the next step is to determine to what levels the components
of that product should be set to assure that the target will be met. The experience of the designer or design
team is useful in establishing the starting values for the investigation. This investigation begins by
determining what the component target values should be using wide component tolerances. This is called
Parameter Design. If the resultant variability around the product response target is too great, the next step is
to determine which tolerances should be tightened. This approach, Tolerance Design, will be discussed in
Section 3.2.
Example
A particular product has been designed with 5 components (A through E). The target response for the
product is 59.0 units. Field experience has indicated that when the response differs from the target by 5 units,
the average customer would complain and the repair cost would be $150. From this information, the k value
in the Loss Function can be calculated.
k = $150 / 5²
k = $6.00 per unit²
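The value of k, and the loss implied for any observed response y, can be sketched as follows (a minimal illustration of the quadratic loss function using the $150 / 5-unit figures above; the function name is ours):

```python
# Quadratic (Taguchi) loss: L(y) = k * (y - target)**2
repair_cost = 150.0   # $ repair cost at the customer-complaint deviation
deviation = 5.0       # units off target at which the average customer complains
target = 59.0         # target response for the product

k = repair_cost / deviation ** 2   # $6.00 per unit squared

def loss(y):
    """Expected loss ($) for a single piece with response y."""
    return k * (y - target) ** 2
```

For example, a piece at exactly 5 units off target (y = 64) incurs the full $150 repair cost, while a piece on target incurs no loss.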
A brainstorming group, which consisted of the designer and other experts in this area, determined that the
response is linear over the component range of interest and the components should be evaluated at the
following levels:
COMPONENT
LEVELS
(Units are those appropriate to each component)
(FACTOR) LOW HIGH
A 1000 1500
B 400 700
C 50 70
D 1300 2200
E 1200 1600
Note: Factor A is more expensive to manufacture at the high level than at the lower level.
These factors will be assigned to an L8 inner array. The two unassigned columns will be used for an estimate
of the experimental error.
An L8 outer array contains the low cost tolerances as follows:
LOW COST TOLERANCE
(FACTOR) LOW HIGH
A -50 +50
B -15 +15
C -5 +5
D -200 +200
E -100 +100
The tolerance amounts are added to/subtracted from the control level as indicated by the outer array. The
brainstorming group suspects that two other noise factors are significant, namely, Temperature (T) and
Humidity (H) of the assembly environment. The noise and tolerance factors are combined into an L8 outer
array. The testing set-up and test results are:
L8
OUTER ARRAY
T 1 2 2 1 2 1 1 2
H 1 2 2 1 1 2 2 1
E 1 2 1 2 2 1 2 1
D 1 2 1 2 1 2 1 2
L8 INNER ARRAY C 1 1 2 2 2 2 1 1
B 1 1 2 2 1 1 2 2 NTB
A B C D E X Y A 1 1 1 1 2 2 2 2 S/N
1 1 1 1 1 1 1 51.4 49.5 48.9 56.6 52.7 50.8 46.0 51.9 24.34
1 1 1 2 2 2 2 58.6 56.7 56.8 60.3 58.2 56.3 52.8 56.7 28.36
1 2 2 1 1 2 2 52.1 59.1 58.9 62.7 61.5 51.1 47.0 50.9 19.58
1 2 2 2 2 1 1 62.6 60.0 61.6 67.2 62.5 59.4 56.0 62.7 25.59
2 1 2 1 2 1 2 47.2 45.3 45.2 51.3 47.3 45.4 41.3 47.4 24.28
2 1 2 2 1 2 1 50.1 48.8 48.4 54.6 51.0 49.1 44.5 50.4 24.86
2 2 1 1 2 2 1 40.1 38.6 37.7 44.6 40.7 38.7 34.8 39.7 22.99
2 2 1 2 1 1 2 46.6 43.2 42.6 49.4 46.6 43.6 39.2 45.4 23.12
X and Y are the unassigned columns that will be used to estimate error.
An understanding of the way the testing matrix is interpreted can be reached by considering the factor A.
When the inner array column associated with factor A has a value of 1, the value of A is 1000. When there is
a 2 in that column, the value of A is 1500. The actual test values of A are also determined by the tolerance
value of A in the outer array. If the outer array value of A is 1, then 50 is subtracted from the value of A
determined in the inner array. If the outer array value is 2, then 50 is added to it. This can be summarized as
follows:
ACTUAL TEST VALUES OF A
OUTER ARRAY TOLERANCE VALUE OF A
INNER ARRAY VALUE OF A 1 2
1 950 1050
2 1450 1550
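The table above generalizes to a one-line rule: actual test value = nominal setting from the inner array plus the signed tolerance from the outer array. A sketch (the dictionary and function names are ours; the values are factor A's from the example):

```python
# Factor A: inner-array levels and low-cost tolerance from the example.
A_LEVELS = {1: 1000, 2: 1500}   # inner array: nominal setting
A_TOL = {1: -50, 2: +50}        # outer array: subtract or add the tolerance

def actual_value(inner, outer, levels=A_LEVELS, tol=A_TOL):
    """Actual test value = inner-array nominal + signed outer-array tolerance."""
    return levels[inner] + tol[outer]
```

Evaluating the four combinations reproduces the 950/1050/1450/1550 table above.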
The ANOVA Table and Level Averages for the most significant factors for the S/N and raw data follow:
NOMINAL-THE-BEST ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1* 0.860 0.860
B 1 13.965 13.965 21.992 13.330 30.50
C 1 2.533 2.533 3.989 1.898 4.34
D 1 14.455 14.455 22.764 13.820 31.62
E 1 10.845 10.845 17.079 10.210 23.36
X 1* 2.477 2.477
Y 1* 10.424 10.424
error ---
(pooled error) 3 1.906 0.635 4.446 10.17
Total 7 43.704 6.243
S/N LEVEL AVERAGES
FACTOR LEVEL 1 LEVEL 2
D 22.80 25.48
B 25.46 22.82
E 22.98 25.30
Average of all data = 24.14
RAW DATA ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 2032.883 2032.883 680.577 2029.896 56.85
B 1 9.533 9.533 3.192 6.546 0.18
C 1 435.244 435.244 145.713 432.257 12.11
D 1 427.973 427.973 143.279 424.986 11.90
E 1 13.231 13.231 4.430 10.244 0.29
X 1* 3.658 3.658
Y 1* 3.563 3.563
A-Tol. 1 88.125 88.125 29.503 85.138 2.38
B-Tol. 1* 1.995 1.995
C-Tol. 1 113.156 113.156 37.883 110.169 3.09
D-Tol. 1 49.879 49.879 16.699 46.892 1.31
E-Tol. 1* 3.754 3.754
H 1 239.089 239.089 80.043 236.102 6.61
T 1* 3.754 3.754
error 49* 140.969 2.877
(pooled error) 54 161.297 2.987 188.180 5.27
Total 63 3570.410 56.673
LEVEL AVERAGES
FACTOR LEVEL 1 LEVEL 2
A 56.23 44.96
C 47.99 53.21
D 48.01 53.18
From the S/N Level Averages, D2 B1 E2 is clearly the best setting for S/N. The estimated S/N at that setting is:

S/N = 24.14 + (25.48 - 24.14) + (25.46 - 24.14) + (25.30 - 24.14)
S/N = 27.96
Since A1 is preferred from a cost standpoint and D2 is preferred from the S/N analysis, the next step is to
determine if the value of C can be adjusted to attain the target of 59. The average response at A1 D2 is:

Average Response = 50.6 + (56.23 - 50.6) + (53.18 - 50.6)
Average Response = 58.81
To reach a target of 59, the value of C that is included in the average response calculation must have a level
average of 50.8 since:

Target = Average Response at A1 D2 + Effect due to C
59.0 = 58.8 + (50.8 - 50.6)

The target value for C can be interpolated from the tested levels and the level averages as follows:

VALUE OF C      RESPONSE
50              47.99
Target          50.8
70              53.21

Target = 50 + [(50.8 - 47.99) / (53.21 - 47.99)] × (70 - 50) = 60.77

NOTE: 60.77 is at the same percentile between 50 and 70 as 50.8 is between 47.99 and 53.21.
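The target value for C comes from ordinary linear interpolation between the two tested levels; a sketch (the helper name is ours):

```python
def interpolate(x1, y1, x2, y2, y_target):
    """Return the x at which the straight line through (x1, y1) and
    (x2, y2) reaches y_target."""
    return x1 + (y_target - y1) / (y2 - y1) * (x2 - x1)

# Tested levels of C and their response level averages, from the table above.
c_target = interpolate(50, 47.99, 70, 53.21, 50.8)
```

This reproduces the recommended target of 60.77 for component C.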
In summary, the recommended target values are:
FACTOR TARGET VALUE
A 1000
B 400
C 60.77
D 2200
E 1600
The estimated average is 59.0 and the estimated S/N is 27.96 .
The 90% confidence on the average is:
A set of verification runs is now made using the recommended factor target values given previously. The
tolerance levels from the outer array are used to define an L8 verification run experiment as follows:
A-TOL B-TOL C-TOL D-TOL E-TOL H T TEST RESULT
1 1 1 1 1 1 1 60.2
1 1 1 2 2 2 2 57.9
1 2 2 1 1 2 2 59.5
1 2 2 2 2 1 1 64.8
2 1 2 1 2 1 2 59.4
2 1 2 2 1 2 1 58.6
2 2 1 1 2 2 1 55.7
2 2 1 2 1 1 2 59.9
The average response is 59.5 and the S/N is 27.3 . Since the average response and the S/N are close to the
predicted values, the verification runs confirm the prediction. If the average response and S/N did not
confirm the predictions, the data could be analyzed to determine which factors have response characteristics
different from those predicted.
The information from the verification runs cannot be used directly in the Loss Function since the observed
variability may be affected by testing only at the tolerance limits. The center portions of the factor
distributions are not represented in these tests.
For the situation where the change in response is assumed to be a linear increase or decrease across the
tolerance levels, the Loss Function can be easily calculated as follows:
1. If it can be assumed that the Cpk in production will be 1.0 or greater for all specified tolerances, then
the difference between the tolerance limits will be equal to, or greater than, 6 times the production
standard deviation for each component parameter.
59.0 ± √(4.02 × 2.987 × (1/16 + 1/8))
or
59.0 ± 1.50
2. The difference between the response level averages for the two tolerance limits will equal 6 times the
production response standard deviation since the product response is linearly related to the
component parameter level.
3. The response variance due to each tolerance is additive since the response effect of each component
tolerance is additive.
4. The effect of noise factors can be treated in a similar manner.
In this example, the levels of humidity were set at the average humidity ± 2 times the humidity standard
deviation. The change in response is assumed to be linear across the change in humidity. The difference in
response between the two levels represents 4 times the response standard deviation. The response variance
can be calculated as follows:
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                2.2                     0.37                    0.1344
B                1.0                     0.16                    0.0278
C                2.2                     0.37                    0.1344
D                1.6                     0.27                    0.0711
E                0.1                     0.02                    0.0003
                                         Subtotal                0.3680
Humidity (H)     3.2                     0.80                    0.6400
Error Variance                                                   2.9890
                                         Total                   3.9970
The response variance will be 3.9970 . The loss function can be calculated from the equation:
For a production run of 50,000 pieces, the total loss would be $1,274,100.
In the situation where the change in response is non-linear across the tolerance levels or noise factor levels, a
computer simulation can be used to determine the distribution of product response for each component taken
singly and for the total assembly. This situation occurs when the highest (lowest) response occurs at the
component nominal and the response decreases (increases) as the distance from the component nominal
increases. The purpose of these calculations is to estimate the response variance for the total assembly
population so that the Loss Function can be calculated. Once the value of the Loss Function has been
calculated, it can be compared to the cost of tightening the tolerances so as to determine the optimal tolerance
limits. This technique will be discussed in Section 3.2.
Loss = k (σ² + offset²)
Loss = $6.00 (3.9970 + (59.5 - 59.0)²)
Loss = $25.482 per piece
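The variance roll-up and the loss calculation can be sketched in a few lines (values transcribed from the table above; the 6-sigma divisor for tolerances and the 4-sigma divisor for humidity come from the stated assumptions):

```python
# Response difference between tolerance limits, from the table above.
tol_diff = {"A": 2.2, "B": 1.0, "C": 2.2, "D": 1.6, "E": 0.1}
humidity_diff = 3.2
error_variance = 2.9890

# Tolerance limits span 6 production std. devs. (Cpk = 1); the humidity
# levels were set at the mean +/- 2 sigma, so their spread is 4 sigma.
# Each variance is rounded to 4 places, as the manual's table does.
component_var = sum(round((d / 6) ** 2, 4) for d in tol_diff.values())
humidity_var = round((humidity_diff / 4) ** 2, 4)

total_var = component_var + humidity_var + error_variance

k = 6.00                                      # $ per unit squared
loss = k * (total_var + (59.5 - 59.0) ** 2)   # $ per piece
```

This reproduces the $25.482 per-piece loss and the $1,274,100 total for 50,000 pieces.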
TOLERANCE DESIGN
3.2 Tolerance Design
Purpose
This chapter illustrates:
1. How tolerance limits can be set so that the product will repeatably meet customer requirements with
the widest possible tolerances. The goal is to choose the most cost-efficient tolerance levels.
2. How prior knowledge about the response characteristics of the component levels of a product can be
efficiently used.
Summary
# After the component target levels have been determined, Tolerance Design can be used to choose
which tolerances should be tightened from the wide, low cost levels.
# An estimate of the response variance of the total production distribution can be inferred through the
DOE test results and prior knowledge about the production distribution or through computerized
simulations.
# An estimate of the Loss associated with production can be made using the observed offset from the
target and the calculated variance of the production distribution.
# Tolerance design can minimize testing by incorporating prior knowledge about the design type and
production process and how they impact the response.
Discussion
After the target level for each component has been determined using Parameter Design, the Loss Function
value of the product design is compared to design guidelines and to the cost of tightening tolerances. If it
costs less to tighten the tolerance than the resulting reduction in the loss function, then, for the long run, it is
better to tighten the tolerance. The evaluation of the tolerance limits and the selective tightening of the limits
is called Tolerance Design. As in parameter design, an example will illustrate this approach.
Example
Continuing the example from Section 3.1, the loss function with low cost tolerance was calculated to be
$25.482 per piece or $1,274,100 for the production run of 50,000 pieces. This calculation was based on the
assumptions that:
1. The tolerance spread is equal to 6 times the production standard deviation (Cpk = 1).
2. The response changes linearly across the tolerance limits.
3. The sum of the variance contributions for the components is the total assembly response variance.
4. Humidity was set at levels which are the average humidity ± 2 times the standard deviation and the
response changes linearly across these levels.
If any of these assumptions cannot be made, a computer simulation using the appropriate assumptions can be
used to determine the total assembly response variance.
The calculation of the response variance for the example is repeated:
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                2.2                     0.37                    0.1344
B                1.0                     0.16                    0.0278
C                2.2                     0.37                    0.1344
D                1.6                     0.27                    0.0711
E                0.1                     0.02                    0.0003
                                         Subtotal                0.3680
Humidity (H)     3.2                     0.80                    0.6400
Error Variance                                                   2.9890
                                         Total                   3.9970
The response variance will be 3.9970. The loss function was calculated from the equation:

Loss = k (σ² + offset²)
Loss = $6.00 (3.9970 + (59.5 - 59.0)²)
Loss = $25.482 per piece
For a production run of 50,000 pieces, the total loss would be $1,274,100.
From the above response variance contribution table, it can be seen that the tolerances for factors A, C and, to
a lesser degree, D are the largest component tolerance contributors to the total response variance. The costs of
reducing those tolerances are:
             LOW COST         HIGH COST                       COST TO CHANGE
             TOLERANCE        TOLERANCE        %              THE TOLERANCE
COMPONENT    LOW     HIGH     LOW     HIGH     REDUCTION      FOR 50,000 PIECES
A            -50     +50      -40     +40      20%            $5000
C            -5      +5       -4      +4       20%            $15000
D            -200    +200     -150    +150     20%            $9000
Since the response is linearly related to the component levels, a reduction of 20% in the tolerance spread will
result in a reduction of 20% in the response spread. In the situation where the response is not linear, it would
be necessary to run a computer simulation, as was mentioned previously.
The impact of tightening the tolerance of each of the three components can be summarized as follows:
TIGHTENED TOLERANCE LIMITS

             RESPONSE DIFFERENCE     RESPONSE       TIGHTENED      COST OF
             BETWEEN TIGHTENED       PRODUCTION     RESPONSE       TIGHTENED
COMPONENT    TOLERANCE LIMITS        STD. DEV.      VARIANCE       TOLERANCE
A            1.76                    0.293          0.0858         $5000
C            1.76                    0.293          0.0858         $15000
D            1.20                    0.200          0.0400         $9000
             ORIGINAL     TIGHTENED    VARIANCE %    VARIANCE % REDUCTION
COMPONENT    VARIANCE     VARIANCE     REDUCTION     PER $1000 COST
A            0.1344       0.0858       36.16         7.23
C            0.1344       0.0858       36.16         2.41
D            0.0711       0.0400       43.74         4.86
The Variance % Reduction per $1000 cost indicates that the reduction of the tolerance of component A
should be investigated first since the % reduction per $1000 cost is the greatest.
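The "Variance % Reduction per $1000 cost" ranking can be reproduced with a short script (a sketch; the variances and costs are transcribed from the tables above, and the helper name is ours):

```python
# (original variance, tightened variance, cost in $ for 50,000 pieces)
candidates = {
    "A": (0.1344, 0.0858, 5000),
    "C": (0.1344, 0.0858, 15000),
    "D": (0.0711, 0.0400, 9000),
}

def reduction_per_1000(orig, tight, cost):
    """Variance % reduction per $1000 of tolerance-tightening cost."""
    pct_reduction = 100 * (orig - tight) / orig
    return pct_reduction / (cost / 1000)

scores = {f: reduction_per_1000(*v) for f, v in candidates.items()}
# Investigate the cheapest variance reduction first.
order = sorted(scores, key=scores.get, reverse=True)
```

Sorting by this score reproduces the manual's conclusion: investigate A first, then D, then C.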
The situation for a reduction of 20% in the tolerance limits of component A is summarized in the following
table:
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                1.76                    0.293                   0.0858
B                1.0                     0.16                    0.0278
C                2.2                     0.37                    0.1344
D                1.6                     0.27                    0.0711
E                0.1                     0.02                    0.0003
                                         Subtotal                0.3194
Humidity (H)     3.2                     0.80                    0.6400
Error Variance                                                   2.9890
                                         Total                   3.9484
The response variance will be 3.9484. If the same 0.5 offset is assumed, the loss function can be calculated
as:

Loss = k (σ² + offset²)
Loss = $6.00 (3.9484 + (59.5 - 59.0)²)
Loss = $25.1904 per piece
For a production run of 50,000 pieces, the total loss would be $1,259,520. This is a $14,580 decrease in the
loss function from the low cost tolerance situation. Since the decrease in the loss function is more than the
$5000 cost of tightening the tolerance on A, it is advantageous in the long run to tighten that tolerance.
The 0.50 offset is assumed to be constant to provide a basis to compare improvement in only the variance
part of the equation. In some situations, the actions taken to reduce the response variance may also result in a
better centered response distribution.
The next step is to evaluate the loss function with the tolerance limits reduced for component D.
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                1.76                    0.293                   0.0858
B                1.0                     0.16                    0.0278
C                2.2                     0.37                    0.1344
D                1.2                     0.200                   0.0400
E                0.1                     0.02                    0.0003
                                         Subtotal                0.2883
Humidity (H)     3.2                     0.80                    0.6400
Error Variance                                                   2.9890
                                         Total                   3.9173
The response variance will be 3.9173. If the same 0.5 offset is assumed, the loss function can be calculated
as:

Loss = k (σ² + offset²)
Loss = $6.00 (3.9173 + (59.5 - 59.0)²)
Loss = $25.0038 per piece
For a production run of 50000 pieces, the total loss would be $1,250,190. This is a $9330 decrease in the
loss function from the situation with only the tolerance of A tightened. Since the decrease in the loss function
is more than the $9000 cost of tightening the tolerance on D, it is advantageous in the long run to tighten that
tolerance.
The reduction of the tolerance of component C can now be investigated.
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                1.76                    0.293                   0.0858
B                1.0                     0.16                    0.0278
C                1.76                    0.293                   0.0858
D                1.2                     0.200                   0.0400
E                0.1                     0.02                    0.0003
                                         Subtotal                0.2397
Humidity (H)     3.2                     0.80                    0.6400
Error Variance                                                   2.9890
                                         Total                   3.8687
The response variance will be 3.8687. If the same 0.5 offset is assumed, the loss function can be calculated
as:

Loss = k (σ² + offset²)
Loss = $6.00 (3.8687 + (59.5 - 59.0)²)
Loss = $24.7122 per piece
For a production run of 50000 pieces, the total loss would be $1,235,610. This is a $14,580 decrease in the
loss function from the situation with only the tolerances of A and D tightened. Since the cost of tightening
the tolerance on component C is $15,000, it would not be advantageous to tighten that tolerance.
So far, the Tolerance Design has been entirely a paper exercise based on the tests run during the Parameter
Design and the assumptions about the relationships between the component levels and the response. A set of
confirmation runs should be made with the tolerance limits for components A and D tightened. An L8
orthogonal array is used for the confirmation runs with the levels set as follows:
                          TOLERANCE LEVELS
                          LOW LEVEL     HIGH LEVEL
              COLUMN      (1)           (2)            NOMINAL
A-Tol.        1           -45           +45            1000
B-Tol.        2           -15           +15            400
C-Tol.        3           -5            +5             60.77
D-Tol.        4           -150          +150           2200
E-Tol.        5           -100          +100           1600
H (Humidity)  6           Low           High           ----
Unassigned    7
The test set-up and results:
A-TOL B-TOL C-TOL D-TOL E-TOL H UNASSIGNED TEST RESULT
1 1 1 1 1 1 1 59.7
1 1 1 2 2 2 2 57.5
1 2 2 1 1 2 2 59.5
1 2 2 2 2 1 1 63.6
2 1 2 1 2 1 2 59.3
2 1 2 2 1 2 1 58.6
2 2 1 1 2 2 1 55.8
2 2 1 2 1 1 2 59.3
The average response is 59.2 and the S/N is 28.5 . The ANOVA Table and Level Averages for all of the
factors from the verification runs are:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A-Tol. 1 6.661 6.661 20.433 6.335 18.35
B-Tol. 1 1.201 1.201 3.684 0.875 2.53
C-Tol. 1 9.461 9.461 29.022 9.135 26.46
D-Tol. 1 2.761 2.761 8.469 2.435 7.05
E-Tol. 1* 0.101 0.101
H 1 13.781 13.781 42.273 13.455 38.98
Unassigned 1* 0.551 0.551
error ---
(pooled error) 2 0.652 0.326 2.282 6.61
Total 7 34.519
LEVEL AVERAGES
FACTOR LEVEL 1 LEVEL 2
A-Tol. 60.1 58.3
B-Tol. 58.8 59.6
C-Tol. 58.1 60.3
D-Tol. 58.6 59.8
E-Tol. 59.3 59.1
H 60.5 57.9
As mentioned in Section 3.1, this information cannot be used directly in the Loss Function since the observed
variability may be affected by testing only at the tolerance limits. The center portions of the distributions are
not represented in these tests.
Since the change in response is assumed to be a linear increase or decrease across the tolerance levels, the
Loss Function can be easily calculated. The Cpk in production is assumed to be 1.0 or greater for all specified
tolerances. The difference between the tolerance limits will be equal to 6 times the production standard
deviation for each component parameter for a Cpk of 1. Since the product response is linearly related to the
component parameter level, the difference between the response level averages for the two tolerance limits
will equal 6 times the production response standard deviation. Since the response effect of each component
tolerance is additive, the response variance due to each tolerance is additive. In a similar manner, the
difference in response between the two humidity levels represents 4 times the response standard deviation.
For this example, the response variance can be calculated as follows:
TOLERANCE        RESPONSE DIFFERENCE     RESPONSE PRODUCTION
FOR FACTOR       BETWEEN TOL. LIMITS     STD. DEV.               VARIANCE
A                1.8                     0.30                    0.0900
B                0.8                     0.13                    0.0178
C                2.2                     0.37                    0.1344
D                1.2                     0.20                    0.0400
E                0.2                     0.03                    0.0011
                                         Subtotal                0.2833
Humidity (H)     2.6                     0.65                    0.4225
Error Variance                                                   0.3260
                                         Total                   1.0318
The response variance will be 1.0318. The loss function can be calculated from the equation:

Loss = k (σ² + offset²)
Loss = $6.00 (1.0318 + (59.5 - 59.0)²)
Loss = $7.6908 per piece
For a production run of 50,000 pieces, the total loss is estimated to be $384,540, compared to $1,274,100
before the tolerance design. This is an $889,560 reduction from the original estimate of the value of the loss
function. The reduction came about because:
1. The mean was relocated from 59.5 to 59.2.
2. The error variance was reduced from 2.989 to 0.326.
3. The tolerances for components A and D were tightened.
Humidity
Note that humidity was identified as an important contributor throughout this example. The experimenter
should investigate the possibility of controlling humidity to further reduce the loss function. If either the
effect of humidity on the design can be minimized, or the humidity can be controlled, the loss function could
be greatly reduced.
Testing
Eighty tests were used in the example in Sections 3.1 and 3.2. These tests were used as follows:
# Determine the target levels - 64 tests
# Confirm the choice of targets - 8 tests
# Determine the tolerances to tighten - 0 tests (based on prior knowledge and
simulation)
# Confirm the performance with tightened tolerances - 8 tests
THE ANOVA TABLE
4.0 THE ANOVA TABLE
ANOVA Table
The main part of this manual does not include a detailed discussion on the ANOVA Table and the reader is
referred to statistics texts shown in the Bibliography. A number of requests were made that the second edition
of the manual contain, at least, a description of the mechanics of how an ANOVA Table is constructed. In
this section, a very simple experiment will be analyzed manually and an ANOVA Table constructed. The
theory behind the calculations will not be given here.
As was mentioned in Section 2.3, Analysis of Variance (ANOVA) is a mathematical procedure that partitions the
total variation measured in a set of data. These partitions are the portions that are due to the difference in
response associated with the levels of each factor. To illustrate this partitioning, consider the following example:
FACTOR FACTOR INTERACTION
NUMBER A B AB RESPONSES
1 1 1 1 2 3
2 1 2 2 4 5
3 2 1 2 7 7
4 2 2 1 6 7
As a first step, the Sum of each Factor Level is calculated.
LEVEL SUMS
FACTOR LEVEL 1 LEVEL 2 TOTAL
A 14 (2+3+4+5) 27 41
B 19 22 41
AB 18 23 41
An interim calculated value called the "Correction Factor" (CF) is calculated as follows:

CF = (Sum of all of the data)² / (The number of data points)
CF = 41² / 8
CF = 210.125
The Total Sum of Squares (SST) is calculated as follows:

SST = Sum of the squares of all the data points - CF
SST = 2² + 3² + 4² + 5² + 7² + 7² + 6² + 7² - 210.125
SST = 237 - 210.125
SST = 26.875
The Sum of Squares due to Factor A is calculated as follows:

SSA = (Sum of (A level Sums)²) / (The number of data points in each level) - CF
SSA = (14² + 27²) / 4 - 210.125
SSA = 21.125

Likewise, the Sum of Squares due to Factor B and the Sum of Squares due to the AB interaction are
calculated in a similar fashion.

SSB = (19² + 22²) / 4 - 210.125 = 1.125
SSAB = (18² + 23²) / 4 - 210.125 = 3.125

The Error Sum of Squares is the Total Sum of Squares minus the sums of squares due to the factors.

SSe = SST - SSA - SSB - SSAB
SSe = 26.875 - 21.125 - 1.125 - 3.125
SSe = 1.5

Degrees of freedom (df) are the number of comparisons that can be made within the factor being addressed.
Since there are two levels of Factor A, only one comparison of the factor levels can be made (Level 1 vs Level 2)
and the df is one. Likewise, the df for Factor B is one. The df for an interaction is the product of the df of the
interacting terms. Here, the df for AB is one (1 × 1). The total df is the total number of data points minus one.
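The sums of squares above can be reproduced with a short script (a sketch; the column assignments follow the example matrix, with each run's two responses flattened into one list):

```python
data = [2, 3, 4, 5, 7, 7, 6, 7]
# Level columns for A, B, and the AB interaction, one entry per response,
# matching the four runs of the example (two responses per run).
A  = [1, 1, 1, 1, 2, 2, 2, 2]
B  = [1, 1, 2, 2, 1, 1, 2, 2]
AB = [1, 1, 2, 2, 2, 2, 1, 1]

N = len(data)
CF = sum(data) ** 2 / N            # correction factor: 41^2 / 8

def ss(column):
    """Sum of squares for a two-level column: sum of (level sum)^2 over
    the number of points per level, minus the correction factor."""
    s1 = sum(d for d, lev in zip(data, column) if lev == 1)
    s2 = sum(d for d, lev in zip(data, column) if lev == 2)
    return (s1 ** 2 + s2 ** 2) / (N // 2) - CF

SST = sum(d * d for d in data) - CF
SSA, SSB, SSAB = ss(A), ss(B), ss(AB)
SSe = SST - SSA - SSB - SSAB

MSe = SSe / 4                      # error df = 7 - 1 - 1 - 1 = 4
F_A = SSA / MSe                    # F ratio for Factor A before pooling
```

The script reproduces every entry in the first ANOVA table, including F = 56.333 for Factor A.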
Since 8 different responses were measured, the total df is 7. The error df is equal to the total df minus the df
for all the factors. In this example,

error df = dfT - dfA - dfB - dfAB
error df = 7 - 1 - 1 - 1
error df = 4
The following table summarizes what has been calculated so far:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 21.125
B 1 1.125
AB 1 3.125
error 4 1.500
(pooled error)
Total 7 26.875
The Mean Square (MS) is also known as the variance and is calculated by dividing the SS by the
corresponding df. Pooling of terms into error is discussed on page 50. In this example, since there is more
than one observation at each test situation and a true estimate of error can be made, none of the factors will be
pooled into error prior to testing for significance with the F Ratio. The F Ratio is also discussed in Section
2.2 and is calculated by dividing the MS for each factor by the pooled error MS. The ANOVA Table now is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 21.125 21.125 56.333
B 1 1.125 1.125 3.000
AB 1 3.125 3.125 8.333
error 4 1.500 0.375
(pooled error) 4 1.500 0.375
Total 7 26.875 3.840
The F Value is compared to a critical value from an F Distribution Table. The ratio of two variances follows
an F Distribution. If the ratio is significantly different from 1, the variances are significantly different from
each other. The reader is urged to refer to a good basic statistics textbook to learn more about the F
Distribution and its uses. The critical F Distribution value for a 90% confidence level, one df in the numerator
and 4 df in the denominator of the ratio is 4.54. The F Ratio for Factor B is less than the critical value and the
variation in the data due to Factor B is not significantly different from the variation due to error. The df and
SS for Factor B should be added (pooled) into the error df and SS. F Ratios are then recalculated and
re-evaluated using the pooled error variance. The ANOVA Table now is:
ANOVA TABLE
SOURCE df SS MS F RATIO S' %
A 1 21.125 21.125 40.238
B 1* 1.125 1.125
AB 1 3.125 3.125 5.952
error 4 1.500 0.375
(pooled error) 5 2.625 0.525
Total 7 26.875 3.840
The S' value is calculated by taking the SS for each significant factor and subtracting the pooled error MS
times the factor df. The values that are subtracted are then added into the error SS to determine the error
SS'. The rationale behind this calculation is explained in Quality Engineering Product and Process
Design Optimization by Y. Wu and W. H. Moore and in Fundamental Concepts in the Design of
Experiments by C. R. Hicks. In simple terms, the SS for each factor also contains within it an element due to
basic system non-repeatability (error). When this element is subtracted, the remainder is a "pure" estimate
of the SS due to the factor. S' is sometimes called the "pure sum of squares". S' is then divided by the
total SS to determine the % contribution of that factor or error term to the total variability observed in the
data.
ANOVA TABLE

SOURCE          df      SS       MS     F RATIO     S'       %
A                1    21.125   21.125   40.238    20.600   76.65
B                1*    1.125    1.125
AB               1     3.125    3.125    5.952     2.600    9.67
error            4     1.500    0.375
(pooled error)   5     2.625    0.525              3.675   13.67
Total            7    26.875    3.840

* pooled into error
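The S' and % columns can likewise be computed; the sketch below (illustrative only) uses the pooled values from the table:

```python
total_ss = 26.875
pooled_ss, pooled_ms = 2.625, 0.525   # pooled error SS and MS from the table
ss = {"A": 21.125, "AB": 3.125}       # significant factors only
df = {"A": 1, "AB": 1}

# Pure SS: subtract (pooled error MS x factor df) from each factor SS...
s_prime = {f: ss[f] - pooled_ms * df[f] for f in ss}
# ...and add the subtracted amounts back into the error S'.
error_s_prime = pooled_ss + sum(pooled_ms * df[f] for f in ss)

# Percent contribution of each term to the total variability.
pct = {f: 100 * v / total_ss for f, v in s_prime.items()}
pct["error"] = 100 * error_s_prime / total_ss
print({k: round(v, 2) for k, v in pct.items()})  # A: 76.65, AB: 9.67, error: 13.67
```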
This completes the example ANOVA Table development.
5.0 DOE Checklist
ACTION COMPLETE
Describe in measurable terms how the present situation deviates from what is desired.
Identify the proper people to be involved in the investigation and the leader of the
investigation.
Obtain agreement from those involved on:
- Scope of the investigation
- Other constraints, such as time or resources
Obtain agreement on the goal of the investigation given the scope and constraints.
Determine if DOE is appropriate or if other research should be done first.
Use brainstorming to determine what factors may be important and which of them could
interact.
Choose a response and measurement technique that:
- Relates to the underlying cause and is not a symptom
- Is measurable
- Is repeatable
Determine the test procedure to be used.
Determine which of the factors are controllable and which of them are noise.
Determine the levels to be tested for each factor to support the goal of the DOE.
Choose the appropriate experimental design for the control and noise factors.
Obtain final agreement from all involved parties on the:
- Goal - Test procedure
- Approach - Timing of the work plan
- Allocation of roles
Arrange to obtain appropriate parts, machines and testing facilities.
Monitor the testing to assure proper procedures are followed.
Use the appropriate techniques to analyze the data.
Run confirmatory experiments.
Prepare a summary report of the experiment with conclusions and recommendations.
6.0 Chrysler DOE Examples
SPEEDOMETER CABLE NOISE
DESIGN OF EXPERIMENTS
S. M. CISCHKE
NOVEMBER 17, 1986
SPEEDOMETER CABLE NOISE STUDY
6.1 SPEEDOMETER CABLE NOISE - Design of Experiments
Objective
Develop a testing method for speedometer cable noise evaluation to:
# Obtain objective and repeatable results.
# Focus efforts on noise contributors offering the best payback.
Problem Definition
# Noisy speedometer cables are a source of customer dissatisfaction.
# Cable noise is a function of many variables in each of the following areas:
- Cable system design
- Part quality
- Assembly process
# Previous testing (one factor at a time) had not produced repeatable test
results. Possible interactions between variables.
DESIGN OF EXPERIMENTS - Taguchi Method

1. Select variables and identify interactions
2. Choose appropriate linear graph and assign variables
3. Apply orthogonal array to determine test runs
4. Randomize test runs
5. Run experiment
6. Calculate analysis of variance (ANOVA) of results
7. Determine % contribution
8. Summarize results
FULL FACTORIAL VERSUS ORTHOGONAL ARRAY

3 Variables (A, B, C)
3 Levels of Each Variable (A1, A2, A3)

A: Cable Construction   A     S.W.   P
B: Cable Alignment      B1    B2     B3
C: Vibration            1     2      3

Full Factorial Requires 3^3 = 27 Tests
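The 27-run count is simply the number of level combinations. A quick illustrative check (not from the manual) in Python:

```python
from itertools import product

# Three factors at three levels each, labeled as in the study.
construction = ["A", "S.W.", "P"]   # factor A: cable construction
alignment = ["B1", "B2", "B3"]      # factor B: cable alignment
vibration = [1, 2, 3]               # factor C: vibration level

# A full factorial tests every combination once: 3 x 3 x 3 = 27 runs.
full_factorial = list(product(construction, alignment, vibration))
print(len(full_factorial))  # 27
```

The orthogonal array below covers the same factors in only 9 runs by balancing the levels so that each level of each factor appears exactly three times.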
CABLE CONSTRUCTION
(table entries are Vibration levels)

                         A (A1)   S.W. (A2)   P (A3)
                  B1       1          3         2
Cable Alignment   B2       2          1         3
                  B3       3          2         1
Orthogonal Array Requires 9 Tests:

TEST   (A) CONSTRUCTION   (B) ALIGNMENT   (C) VIBRATION   NOISE MEASUREMENT
 1           A                 B1            LEVEL 1             40
 2           S.W.              B1            LEVEL 3             30
 3           P                 B1            LEVEL 2             55
 4           A                 B2            LEVEL 2             45
 5           S.W.              B2            LEVEL 1             30
 6           P                 B2            LEVEL 3             60
 7           A                 B3            LEVEL 3             55
 8           S.W.              B3            LEVEL 2             40
 9           P                 B3            LEVEL 1             50
TEST RUNS AT EACH FACTOR LEVEL

LEVEL      A        B        C
  1      1,4,7    1,2,3    1,5,9
  2      2,5,8    4,5,6    3,4,8
  3      3,6,9    7,8,9    2,6,7

LEVEL SUMS

LEVEL      A       B       C
  1       140     125     120
  2       100     135     140
  3       165     145     145
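The Level Sums table is obtained by adding the noise readings for the tests run at each level. A short illustrative Python sketch:

```python
# Noise measurements for tests 1..9 from the orthogonal array table.
noise = [40, 30, 55, 45, 30, 60, 55, 40, 50]

# Test numbers run at each level of each factor (from the table above).
runs = {
    "A": {1: [1, 4, 7], 2: [2, 5, 8], 3: [3, 6, 9]},
    "B": {1: [1, 2, 3], 2: [4, 5, 6], 3: [7, 8, 9]},
    "C": {1: [1, 5, 9], 2: [3, 4, 8], 3: [2, 6, 7]},
}

# Sum the noise readings over the tests at each level.
level_sums = {
    factor: {level: sum(noise[t - 1] for t in tests)
             for level, tests in levels.items()}
    for factor, levels in runs.items()
}
print(level_sums["A"])  # {1: 140, 2: 100, 3: 165}
```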
ANALYSIS OF VARIANCE

VARIABLE            df     SS      MS      F       S'       P
Housing Constr.      1    266     266     52.5    261      63%
Core                 1      0.8
Housing Material     1      0.1
Speedo Dial          1      2.6
Vibration            1     15.4    15.4    3.0     10.3    2.5%
Housing Align.       1     41.9    41.9    8.3     36.8    8.9%
Lube                 1     17.4    17.4    3.4     12.3    3.0%
Error Term           8     40.7     5.08           81.3   19.6%
Total               15    414.9                   414.9    100%
THREE POSSIBILITIES

[Figure: three pie charts splitting total variability between Housing Construction and Error]

1. Housing Construction dominates: Housing Constructions are very different. Want to select for best
   quality (lowest noise).
2. Error dominates: Majority of variability is due to error. Housing is not a significant factor.
3. Roughly equal split: Inconclusive. Need statistical procedure and further testing.
DESIGN OF EXPERIMENTS - TAGUCHI ANALYSIS

[Figure: pie charts of variance contributions]

Experiment 1 (2-level test, 16 runs): Housing Construction (26%), Alignment, Vibration, Lube, Error Term.
Experiment 2 (3-level test, 27 runs): Vibration (50%), Interactions, Error Term.
RESULTS OF EXPERIMENT #2

[Figure: marginal means (combined dBA, scale 460-550) plotted by factor level]

Variables: A - Cable Housing/Vendor (24%), B - Vibration (55%), C - Cable Alignment, E - Lube,
F - Ferrule Orientation. Levels shown include P, A, S.W. (housing), #1, #2, None and Spec.
Experiment #5 - Inner/Outer Array, 64 Tests

[Figure: pie charts of variance contributions and histograms of noise readings]

Inner array: Housing Construction (79%), Alignment, Foam, Error Term.
Outer array: Vibration (48%), Temperature (47%), Speed, Error.

Noise distributions (dBA, scale 40-90):

                               Mean    Sigma   3 Sigma
S.W. (Ribbed Construction)     56.8     3.8     11.3
P (Finned Construction)        65.8     6.0     18.1
Conclusions
1. Five experiments were conducted that produced consistent and repeatable results.
2. Cable construction was determined to be a significant factor in speedo noise. Ribbed construction
showed less noise and variability than finned construction.
3. Decreasing the ambient temperature increased noise but did not change the relationship of the
other variables. Future tests can be run at room temperature without biasing the test results.
Benefits
1. Significantly fewer tests were required using the Taguchi method.
2. Speedo cable housing construction was changed on all car lines (effective November 1987) to reduce
noise and improve customer satisfaction.
3. Purchasing re-sourced a major percentage of the business based on the results of the Taguchi testing.
4. Taguchi testing forced competition between vendors. Cable vendors have now committed to using
Design of Experiments to optimize their cable designs.
Summary
1. Taguchi method of testing required fewer tests and produced repeatable results.
2. Engineering was able to focus on a major noise contributor that offered the best payback.
3. Engineering is working with the speedo cable vendors to develop uniform test procedures for future
Taguchi testing to be conducted by the vendors to optimize their cable design.
SMC PROCESS IMPROVEMENT STUDY
SHEET MOLDED
COMPOUND PROCESS
IMPROVEMENT
PRESENTED BY
P. I. HSIEH AND D. E. GOODWIN
September, 1986
6.2 SMC Process Improvement Study
Introduction
A commitment by management at Chrysler Motors to dedicate resources for working with suppliers to
improve product quality led to this case study. A Chrysler Task Force consisting of personnel from
Manufacturing, Assembly, Supplier Quality Assurance and Engineering was formed to investigate and help
improve sheet molded compound (SMC) process and product quality. The product under study in this
experiment was grille opening panels (GOP), which have been produced by Eagle Picher for many years. The
supplier had previously conducted one-factor-at-a-time experimentation to improve their process yields. These
experiments had disappointing results.
The Problem
Grille opening panels are shipped to Chrysler assembly plants from SMC suppliers and assembled into the
car prior to the paint process. Surface imperfections in the paint finish ("pops") after the paint process
caused a low first-time capability (FTC) of approximately 77% for one assembly plant. The fact that the
imperfections were not detectable until components were shipped, assembled and painted created serious
productivity concerns for the assembly plant and the supplier.
The Process
The process studied in this experiment included raw material mixing to part finishing at the supplier, Eagle
Picher. The finished parts were packaged and shipped to Chrysler plants. Figure 1 shows the process for
producing GOP at Eagle Picher.
Figure 1 - The Process
[Flowchart; steps include: Material Thickening, Glass Mixing, Mold, Cleaning, Punch, Drill,
Parts Assembled To GOP, Power Wash, Dry Off, Priming, Packaging, Shipping]

The Cause and Effect Diagram
In a brainstorming session, the Chrysler Task Force and the supplier engineering team constructed a cause and
effect diagram focused on the product and the process. An abbreviated version of the diagram appears in
Figure 2.
SMC PROCESS IMPROVEMENT STUDY
Copyright 1997 DaimlerChrysler Corporation 139
Figure 2 - Cause and Effect Diagram
Experimental Process
Using the Cause and Effect Diagram, an experiment was set up which included nine main variables:
VARIABLE                              NO. OF LEVELS    LEVEL 1      LEVEL 2
1. Mold Pressure (A)                        2          Low          High
2. Mold Temp. (B)                           2          Low          High
3. Mold Cycle (C)                           2          Low          High
4. Cutting Pattern (D)                      2          Method I     Method II
5. Priming (E)                              2          Method I     Method II
6. Viscosity (F)                            2          Low          High
7. Weight (G)                               2          Low          High
8. Material Thickening Process (H)          2          Process I    Process II
9. Glass Type (I)                           2          Type I       Type II
[Figure 2: cause and effect diagram. Branches feeding the effect "Porosity" include Cutting Pattern,
Priming, Material Thickening Process, Weight, Viscosity, Glass Type, Mold Pressure, Mold Temperature,
Mold Cycle and Filling Material.]
In addition to these nine main factors, the following five potential interactions were investigated in this
experiment:
A x B
A x C
A x D
B x F
F x H
With these experimental considerations, an L16 orthogonal array layout was chosen to conduct the
experiments. The array arrangement and responses of porosity observations can be seen in Table 1. Two
responses were recorded per experiment. Response values are the number of "pops" found.

TABLE 1
L16 ORTHOGONAL ARRAY
COL:      (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15)   Responses
EXP. NO.   A   B  AxB  F   G  BxF  E   C  AxC  H    I    D   AxD  FxH  (e)    R1   R2
  1        1   1   1   1   1   1   1   1   1   1    1    1    1    1    1     56   10
  2        1   1   1   1   1   1   1   2   2   2    2    2    2    2    2     17    2
  3        1   1   1   2   2   2   2   1   1   1    1    2    2    2    2      2    1
  4        1   1   1   2   2   2   2   2   2   2    2    1    1    1    1      4    3
  5        1   2   2   1   1   2   2   1   1   2    2    1    1    2    2      3    1
  6        1   2   2   1   1   2   2   2   2   1    1    2    2    1    1      4   13
  7        1   2   2   2   2   1   1   1   1   2    2    2    2    1    1     50   49
  8        1   2   2   2   2   1   1   2   2   1    1    1    1    2    2      2    3
  9        2   1   2   1   2   1   2   1   2   1    2    1    2    1    2      1    3
 10        2   1   2   1   2   1   2   2   1   2    1    2    1    2    1      0    3
 11        2   1   2   2   1   2   1   1   2   1    2    2    1    2    1      3    2
 12        2   1   2   2   1   2   1   2   1   2    1    1    2    1    2     12    2
 13        2   2   1   1   2   2   1   1   2   2    1    1    2    2    1      3    4
 14        2   2   1   1   2   2   1   2   1   1    2    2    1    1    2      4   10
 15        2   2   1   2   1   1   2   1   2   2    1    2    1    1    2      0    5
 16        2   2   1   2   1   1   2   2   1   1    2    1    2    2    1      0    8
Analysis
An analysis of variance (ANOVA) indicates that:
(1) Mold pressure (A), priming (E) and mold cycle (C) were significant in contributing to the
experimental variation.
(2) Material thickening process (H) and viscosity (F) were both insignificant, with F-values of 0.4854 and
0.0539, respectively.
(3) The interaction F x H was significant, with an F-value of 11.0801 (99% confidence level). A similar
observation was seen in B x F.

[Figure: average pops (porosity) plotted at each level of factors E, A and C; the preferred level of each
is shown in parentheses: (E2), (A2), (C2).]
(4) As anticipated, the interaction A x C was seen as significant. The completed ANOVA table is
presented in Table 2 with level sums of the variables.
TABLE 2 - ANOVA

SOURCE   df      S          V        F      F (pooled)     S'       Rho %
A         1    800.000    800.000   9.588     11.728     731.788    11.04
B         1     45.125     45.125   0.541       ---         ---       ---
AxB       1     15.125     15.125   0.181       ---         ---       ---
F         1      4.500      4.500   0.054       ---         ---       ---
G         1      0.500      0.500   0.006       ---         ---       ---
BxF       1    595.125    595.125   7.133      8.725      526.913     7.95
E         1    990.125    990.125  11.867     14.515      921.913    13.91
C         1    351.125    351.125   4.208      5.148      282.913     4.27
AxC       1    630.125    630.125   7.552      9.128      561.913     8.48
H         1     40.500     40.500   0.485       ---         ---       ---
I         1     50.000     50.000   0.599       ---         ---       ---
D         1     78.125     78.125   0.936       ---         ---       ---
AxD       1    120.125    120.125   1.440      1.761       51.913     0.78
FxH       1    924.500    924.500  11.080     13.553      856.288    12.92
e         1    648.000    648.000   7.766      9.500      579.788     8.75
Error    16   1335.000     83.438
(pool)  (23) (1568.873)   (68.212)                       2114.571    31.90
Total    31   6628.000    213.807

LEVEL SUMS BY VARIABLE

                    A    B    C    D    E    F    G    H    I
First Level (1)   220  121  193  115  229  134  138  122  120
Second Level (2)   60  159   87  165   51  146  142  158  160
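For a two-level factor with N total observations, the S column of Table 2 follows directly from the level sums as SS = (T1 - T2)^2 / N. A short illustrative check in Python:

```python
# Level sums from Table 2 (N = 32 observations: 16 runs x 2 responses).
level_sums = {
    "A": (220, 60), "B": (121, 159), "C": (193, 87),
    "D": (115, 165), "E": (229, 51), "F": (134, 146),
    "G": (138, 142), "H": (122, 158), "I": (120, 160),
}
N = 32

# SS = (T1 - T2)^2 / N for each two-level factor.
ss = {f: (t1 - t2) ** 2 / N for f, (t1, t2) in level_sums.items()}
print(ss["A"], ss["E"], ss["C"])  # 800.0 990.125 351.125
```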
To have a clear idea of the experimental results, the effects of each significant factor are graphed. These graphs
represent what was observed in the ANOVA (Table 2). The factors are arranged so that the most significant
is on the left.
[Figure: average pops (porosity) plotted at each level of factors B (B1, B2) and H (H1, H2).]
( ) Denotes the desired condition; Level E2 is preferred over E1.
It is observed that B, F and H are not significant factors influencing the pops being measured. However,
interactions are present between B and F (B x F) as well as between F and H (F x H).
Therefore, (B2, F1) and (H2, F1) are recommended.
The best condition based on the levels of factors as identified in this experiment is:
Mold pressure: High
Mold temperature: High
Mold cycle: High
Cutting pattern: Method II
Priming: Method II
Viscosity: Low
Weight: Low
Material thickening process: Process II
Glass type: Type I
Confirmation of Results
Confirmation experiments were carried out by the supplier in its own plant. Our assembly plant conducted a
before-and-after study on the optimum process. Results revealed an increase in first-time capability from 77%
to 96%.
Conclusion
This experiment is considered unique and successful due to the following aspects:
1) Both supplier and Chrysler engineers were very enthusiastic about participating in the Design of
Experiments. The supplier was particularly pleased that a joint effort was made to assist them in
improving their process.
2) The supplier and Chrysler have learned the power of the Taguchi Methods in DOE.
3) The supplier has expanded the use of DOE to other products.
4) The ability of the supplier to optimize internal processes for Chrysler-sourced parts benefitted both
Chrysler and the supplier.
5) Chrysler realized an improvement in first time through capability from 77% to 96% in the assembly
plant. The estimated savings from reduced inspection and repair are $900,000 per year.
6) There are additional savings that cannot be accurately dollarized. These areas include fewer material
handling personnel and reduced GOP inventory in the assembly plant and the supplier plant.
7.0 BIBLIOGRAPHY
1) A. H. Bowker and G. J. Lieberman, Engineering Statistics, Prentice-Hall, Inc., Englewood Cliffs, N.J.,
1972.
2) G. E. P. Box, Report No. 26, Studies in Quality Improvement: Signal to Noise Ratios, Performance
Criteria and Transformation, The College of Engineering, University of Wisconsin - Madison, July,
1987.
3) G. E. P. Box and N. R. Draper, Empirical Model-Building and Response Surfaces, John Wiley &
Sons, New York, 1987.
4) G. E. P. Box, W. G. Hunter and J. S. Hunter, Statistics for Experimenters, John Wiley & Sons, New York,
1978.
5) R. M. Brown and M. I. Burke, Framing of Design of Experiment (DOE), Proceedings from the
American Society for Quality Control 42nd Annual Quality Congress, May 1988.
6) J. L. Fleiss, Statistical Methods for Rates and Proportions, John Wiley & Sons, New York, 1981.
7) C. R. Hicks, Fundamental Concepts in the Design of Experiments, Holt, Rinehart and Winston, New
York, 1982.
8) K. Ishikawa, Guide to Quality Control, Asian Productivity Organization, Tokyo, Japan, 1983.
9) K. C. Kapur and L. R. Lamberson, Reliability in Engineering Design, John Wiley & Sons, New York,
1977.
10) G. Taguchi and Y. Wu, Introduction to Off-Line Quality Control, Central Japan Quality Control
Association, Nagoya, Japan, 1979.
11) Y. Wu and W. H. Moore, Quality Engineering Product and Process Design Optimization, American
Supplier Institute, Dearborn, Michigan, 1985.
12) Japanese Industrial Standard, "General Tolerancing Rules for Plastics Dimensions - JIS K 7109 -
1986", Japanese Standards Association, Tokyo, Japan, 1986 (American National Standards Institute,
New York).