
Seminar Report

On
Taguchi's Methods
Submitted by:
Ishwar Chander (800982007)
Pulkit Bajaj (800982019)
Hitesh Bansal (800982006)

Submitted to:
Mr. Tarun Nanda (Sr. Lecturer)

Department of Mechanical Engineering
Thapar University, Patiala
(Deemed University)
CONTENTS
Chapter 1
1.1 Introduction 4
1.2 Definitions of quality 5
1.2.1 Traditional and Taguchi definition of Quality 6
1.3 Taguchi's Quality Philosophy 7
1.4 Objective of Taguchi Methods 9
1.5 8-Steps in Taguchi Methodology 9
Chapter 2 (Loss Function) 10
2.1 Taguchi Loss Function 10
2.2 Variation of Quadratic Loss function 16
Chapter 3 (Analysis of Variation) 18
3.1 Understanding Variation 18
3.2 What is ANOVA 18
3.2.1 No Way ANOVA 18
3.2.1.1 Degrees of Freedom 19
3.2.2 One Way ANOVA 23
3.2.3 Two Way ANOVA 29
3.3 Example of ANOVA 35
Chapter 4 (Orthogonal Array) 45
4.1 What is Array 45
4.2 History of Array 45
4.3 Introduction of Orthogonal Array 46
4.3.1 Interacting many factors - A case study 48
4.3.1.1 Example of Orthogonal Array 49

4.3.2 A Full factorial Experiment 57
4.4 Steps in developing Orthogonal Array 59
4.4.1 Selection of factors and/or interactions to be evaluated 59
4.4.2 Selection of number of levels for the factors 59
4.4.3 Selection of the appropriate OA 60
4.4.4 Assignment of factors and/or interactions to columns 61
4.4.5 Conduct tests 63
4.4.6 Analyze results 64
4.4.7 Confirmation experiment 67
4.5 Example Experimental Procedure 67
4.6 Standard Orthogonal Array 71
Chapter 5 (Robust Designing)


5.1 What is robustness 72
5.2 The Robustness Strategy uses five primary tools 72
5.2.1 P-Diagram 73
5.2.2 Quality Measurement 74
5.2.3 Signal To Noise Ratio 75
5.3 Steps in Robust Parameter Design 76
5.4 Noise Factor 77
5.5 OFF-LINE and ON-LINE Quality Control 78
5.5.1 OFF-LINE Quality Control 78
5.5.2 ON-LINE Quality Control 78
5.5.1.1 Product Design 79
5.5.1.2 Process Design 79
5.5.2.1 Product Quality Control method (On Line Quality Control Stage 1) 80
5.5.2.2 Customer Relations (On Line Quality Control Stage 2) 80
References 84


Preface
When Japan began its reconstruction efforts after World War II, it faced an acute
shortage of good-quality raw materials, high-quality manufacturing equipment and skilled
engineers. The challenge was to produce high-quality products, and to keep improving their
quality, under these circumstances. The task of developing a methodology to meet the
challenge was assigned to Dr. Genichi Taguchi, who at that time was the manager in
charge of developing certain telecommunication products at the Electrical Communication
Laboratories (ECL) of the Nippon Telegraph and Telephone Company (NTT). Through his
research in the 1950s and the early 1960s, Dr. Taguchi developed the foundations of
robust design and validated his basic philosophies by applying them in the development
of many products. In recognition of this contribution, Dr. Taguchi received the individual
Deming Award in 1962, which is one of the highest recognitions in the quality field.
CHAPTER 1
1.1 Introduction
Genichi Taguchi attended Kiryu Technical College where he studied textile engineering.
From 1942 to 1945, he served in the Astronomical Department of the Navigation Institute
of the Imperial Japanese Navy. After that, he worked in the Ministry of Public Health and
Welfare and the Institute of Statistical Mathematics, Ministry of Education. While
working there, he was educated by Matosaburo Masuyama on the use of orthogonal arrays
and also on different experimental design techniques.
In 1950, he began working at the newly formed Electrical Communications
Laboratory of the Nippon Telephone and Telegraph Company. He stayed there for more
than 12 years and was responsible for training engineers to be more effective with their
techniques. While he was there he consulted with many different Japanese companies and
also wrote his first book on orthogonal arrays.
He served as a visiting Professor at the Indian Statistical Institute from 1954 to
1955. While he was there, Taguchi met Sir R.A. Fisher and Walter A. Shewhart. He
published the first edition of his two-volume book on Experimental design in 1958. He
made his first visit to the United States in 1962 where he was a visiting Professor at
Princeton University. In the same year, he was also awarded his PhD from Kyushu
University.
He developed the concept of the Quality Loss Function in the 1970s. He also
published the third and most current edition of his book on experimental designs. He
revisited the United States in 1980 and from then his methods spread and became more
widely used. Genichi Taguchi made many important contributions during his lifetime,
probably the most important of them to the field of quality control, though he also made
many important contributions to experimental design.
Professor Genichi Taguchi was the director of the Japanese Academy of Quality and a
four-time recipient of the Deming Prize. The term "Taguchi Methods" was coined in the
United States.
SPC can assist the operator in eliminating the special causes of
defects, thus bringing the process under control. But something more is still needed: the
continuous improvement of manufacturing processes so that the production of robust
products can be assured. This is where Taguchi comes in. He starts where SPC
(temporarily) finishes. He can help with the identification of the common causes of variation,
the most difficult to determine and eliminate in a process. He attempts to go even further: he
tries to make the process and the product robust against their effect (eliminating the effect
rather than the cause) at the design stage; indeed, in dealing with uncontrollable (noise)
factors, there is no alternative. Even where removal of the effect is impossible, he provides
a systematic procedure for controlling the noise (through tolerance design) at minimum
cost.
When Dr. Taguchi first brought his ideas to America in 1980, he was already
well known in Japan for his contribution to quality engineering. His arrival in the U.S.
went virtually unnoticed, but by 1984 his ideas had generated so much interest that Ford
Motor Company sponsored the first Supplier Symposium on Taguchi Methods.

1.2 Definitions of Quality
"Fitness for use."
- Dr. Juran (1964)

"Conformance to requirements."
- Philip Crosby, the leading promoter of the zero defects concept and author of Quality is Free (1979)

"Quality should be aimed at the needs of the consumer, present and future."
- Dr. Deming

"The totality of features and characteristics of a product or service that bear on its ability to satisfy given needs."
- The American Society for Quality Control (1983)

"The aggregate of properties of a product determining its ability to satisfy the needs it was built to satisfy."
- Russian Encyclopaedia

"The totality of features and characteristics of a product and service that bear on its ability to satisfy a given need."
- European Organization for Quality Control Glossary (1981)
Although these definitions are all different, some common threads run through them:
- Quality is a measure of the extent to which customer requirements and expectations are satisfied.
- Quality is not static, since customer expectations can change.
- Quality involves developing product or service specifications and standards to meet customer needs (quality of design) and then manufacturing products or providing services which satisfy those specifications and standards (quality of conformance).
It is important to note that these definitions of quality do not refer to grade or features.
For example, a Honda car has more features and is generally considered to be a higher-grade
car than a Maruti, but that does not mean it is of better quality. A couple with two children
may find that a Maruti does a much better job of meeting their requirements in terms of ease
of loading and unloading, comfort when the entire family is in the car, gas mileage,
maintenance, and, of course, the basic cost of the vehicle.
1.2.1 Traditional and Taguchi definition of Quality
Traditional Definition:
The more traditional "goalpost" mentality of what is considered good quality says that a
product is either good or it isn't, depending on whether or not it is within the specification
range (between the lower and upper specification limits, the goalposts). With this approach,
the specification range is more important than the nominal (target) value. But is the product
as good as it can be, or should be, just because it is within specification?

Taguchi Definition:
Taguchi says no to the above definition. He defines quality as deviation from on-target
performance. According to him, the quality of a manufactured product is the total loss
generated by that product to society from the time it is shipped. The financial loss, or
quality loss, is

L(y) = k(y − m)²

where
y = objective characteristic
m = target value
k = constant = cost of a defective product / (tolerance)² = A/Δ²
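To make the loss calculation concrete, here is a minimal Python sketch (the characteristic values, target and tolerance below are illustrative assumptions, not data from the report):

```python
def quality_loss(y, m, A, delta):
    """Taguchi quadratic loss L(y) = k*(y - m)**2, with k = A / delta**2.

    y     -- measured value of the objective characteristic
    m     -- target (nominal) value
    A     -- cost incurred when the product is just outside tolerance
    delta -- half-width of the tolerance band (m +/- delta)
    """
    k = A / delta ** 2
    return k * (y - m) ** 2

# Illustrative (assumed) numbers: target 10.0, tolerance +/-0.5, repair cost Rs 300
for y in (10.0, 10.2, 10.5, 11.0):
    print(y, round(quality_loss(y, m=10.0, A=300.0, delta=0.5), 1))
# The loss is zero on target, grows quadratically, and equals A at the tolerance limit.
```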


[Figures: Traditional Quality Definition (goalpost view); Taguchi Quality Definition (quadratic loss view)]

1.3 Taguchi's Quality Philosophy
Genichi Taguchi's impact on the world quality scene has been far-reaching. His quality
engineering system has been used successfully by many companies in Japan and
elsewhere. He stresses the importance of designing quality into products and processes,
rather than depending on the more traditional tools of on-line quality control. Taguchi's
approach differs from that of other leading quality gurus in that he focuses more on the
engineering aspects of quality than on management philosophy or statistics. Also,
Dr. Taguchi uses experimental design primarily as a tool to make products more robust, that is,
less sensitive to noise factors. He views experimental design as a tool
for reducing the effects of variation on product and process quality characteristics. Earlier
applications of experimental design focused more on optimizing average product
performance characteristics without considering effects on variation. Taguchi's quality
philosophy has seven basic elements:
1. An important dimension of the quality of a manufactured product is the total loss
generated by that product to society. At a time when the bottom line appears to be
the driving force for so many organizations, it seems strange to see loss to society
as part of product quality.
2. In a competitive economy, continuous quality improvement and cost reduction are
necessary for staying in business. This is a hard lesson to learn. Masaaki Imai (1986)
argues very persuasively that the principal difference between Japanese and
American management is that American companies look to new technologies and
innovation as the major route to improvement, while Japanese companies focus
more on kaizen, meaning gradual improvement in everything they do. Taguchi
stresses the use of experimental designs in parameter design as a way of reducing
quality costs. He identifies three types of quality costs: R&D costs, manufacturing
costs, and operating costs. All three can be reduced through the use of suitable
experimental designs.
3. A continuous quality improvement program includes continuous reduction in the
variation of product performance characteristics about their target values. Again
kaizen, but with the focus on product and process variability. This does not fit the
mold of quality as conformance to specification.
4. The customer's loss due to a product's performance variation is often
approximately proportional to the square of the deviation of the performance
characteristic from its target value. This concept of a quadratic loss function says
that any deviation from target results in some loss to the customer, and that large
deviations from target result in severe losses.
5. The final quality and cost of a manufactured product are determined to a large extent
by the engineering designs of the product and its manufacturing process. This is so
simple, and so true. The future belongs to companies which, once they understand
the variability of their manufacturing processes using statistical process control,
move their quality improvement efforts upstream to product and process design.
6. A product's (or process's) performance variation can be reduced by exploiting the
nonlinear effects of the product (or process) parameters on the performance
characteristics. This is an important statement because it gets to the heart of off-line
QC. Instead of trying to tighten specifications beyond a process's capability, perhaps
a change in design can allow specifications to be loosened. As an example, suppose
that in a heating process the tolerance on temperature is a function of the heating
time in the oven. The tolerance relationship is represented by the band in the figure.
If a process specification says the heating process is to last 4.5 minutes, then the
temperature must be held between 354.0 and 355.0 degrees, a tolerance interval
1.0 degree wide. Perhaps the oven cannot hold this tight a tolerance. One solution
would be spending a lot of money on a new oven and new controls. Another
possibility would be to change the time for the heating process to, say, 3.5 minutes.
Then the temperature would need to be held between 358.0 and 360.6 degrees, an
interval of width 2.6 degrees. If the oven could hold this tolerance, the most
economical decision might be to adjust the specifications. This would make the
process less sensitive to variation in oven temperature.
[Figure: Time-Temperature Relationship]
7. Statistically designed experiments can be used to identify the settings of product
parameters that reduce performance variation, and hence improve quality,
productivity, performance, reliability, and profits. Statistically designed experiments
will be the strategic quality weapon of the 1990s.


1.4 Objective of Taguchi Methods
The objective of Taguchi's efforts is process and product-design
improvement through the identification of easily controllable factors and their settings
which minimize the variation in product response while keeping the mean response on
target. By setting those factors at their optimal levels, the product can be made robust to
changes in operating and environmental conditions. Thus, more stable and higher-quality
products can be obtained, and this is achieved during the Taguchi parameter-design stage by
removing the bad effect of the cause rather than the cause of the bad effect. Furthermore,
since the method is applied in a systematic way at a pre-production stage (off-line), it can
greatly reduce the number of time-consuming tests needed to determine cost-effective
process conditions, thus saving costs and reducing wasted product.

1.5 8-Steps in Taguchi Methodology
1. Identify the main function, side effects and failure mode.
2. Identify the noise factor, testing condition and quality characteristics.
3. Identify the objective function to be optimized.
4. Identify the control factor and their levels.
5. Select the orthogonal array and construct the matrix experiment.
6. Conduct the matrix experiment.
7. Analyze the data; predict the optimum levels and performance.
8. Perform the verification experiment and plan the future action.



CHAPTER 2
2.1 Taguchi Loss Function
Genichi Taguchi has an unusual definition of product quality: "Quality is the loss a
product causes to society after being shipped, other than any losses caused by its intrinsic
functions." By loss, Taguchi refers to the following two categories:
- Loss caused by variability of function.
- Loss caused by harmful side effects.
An example of loss caused by variability of function would be an automobile that does not
start in cold weather. The car's owner would suffer a loss if he or she had to pay someone to
start the car. The car owner's employer loses the services of the employee, who is now late
for work. An example of a loss caused by a harmful side effect would be the frostbite
suffered by the owner of the car which would not start.
An unacceptable product which is scrapped or reworked prior to shipment is viewed by
Taguchi as a cost to the company but not a quality loss.



2.1.1 Comparing The Quality Levels of SONY TV Sets Made in
JAPAN and in SAN DIEGO
The front page of the Asahi News on April 17, 1979 compared the quality levels of Sony
color TV sets made in Japanese plants with those made in the San Diego, California plant. The
quality characteristic used to compare these sets was the color density distribution, which
affects color balance. Although all the color TV sets had the same design, most American
customers thought that the color TV sets made in the San Diego plant were of lower quality
than those made in Japan.
The distribution of the quality characteristic of these color TV sets was given in the
Asahi News (shown in the figure). The quality characteristic of the TV sets from the Japanese
Sony plants is normally distributed around the target value m. If a value of 10 is assigned
to the range of the tolerance specification for this objective characteristic, then the standard
deviation of this normally distributed curve can be calculated and is about 10/6.
In quality control, the process capability index (Cp) is usually defined as
the tolerance specification divided by 6 times the standard deviation of the objective
characteristic:

Cp = Tolerance / (6 × standard deviation)


Therefore, the process capability index of the objective characteristic of the
Japanese Sony TV sets is about 1. In addition, the mean value of the distribution of these
objective characteristics is very close to the target value m.
On the other hand, a higher percentage of TV sets from San Diego Sony are
within the tolerance limits than those from Japanese Sony. However, the color density of the
San Diego TV sets is uniformly distributed rather than normally distributed. Therefore, the
standard deviation of these uniformly distributed objective characteristics is about 1/√12
of the tolerance specification. Consequently, the process capability index of the San Diego
Sony plant is calculated as follows:

Cp = Tolerance / [6 × (Tolerance/√12)] = √12/6 ≈ 0.577

It is obvious that the process capability index of San Diego Sony is much lower than that
of Japanese Sony.
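A quick sketch of these two Cp calculations in Python (the tolerance of 10 is the value used in the text; treating the plants as normal and uniform distributions follows the description above):

```python
import math

def cp(tolerance, sigma):
    """Process capability index: tolerance range divided by 6 standard deviations."""
    return tolerance / (6 * sigma)

tolerance = 10.0
sigma_japan = tolerance / 6                   # normal distribution spanning the tolerance
sigma_san_diego = tolerance / math.sqrt(12)   # uniform distribution over the tolerance

print(round(cp(tolerance, sigma_japan), 3))       # 1.0
print(round(cp(tolerance, sigma_san_diego), 3))   # 0.577
```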
All products that are outside the tolerance specifications are considered
defective and are not shipped out of the plant. Products that are within the tolerance
specifications are assumed to pass and are shipped. As a matter of fact,
tolerance specifications are very similar to tests in schools, where 60% is usually the
dividing line between passing and failing, and 100% is the ideal score. In our example of
TV sets, the ideal is that the objective characteristic, color density, meets the
target value m. The more the color density deviates from the target value, the lower the
quality level of the TV set. If the deviation of color density exceeds the tolerance
specification, m ± 5, a TV set is considered defective. In the case of a school test, 59% or
below is failing, while 60% or above is passing. Similarly, the grades between 60% and
100% in evaluating quality can be classified as follows:
60%-69% Passing (D)
70%-79% Fair (C)
80%-89% Good (B)
90%-100% Excellent (A)
The grades D, C, B and A in parentheses above are quite commonly used in the United
States for categorizing the objective characteristics of products. Thus, one can apply this
scheme to the classification of the objective characteristic (color density) of these color
TV sets as shown in the figure. One can see that a very high percentage of Japanese Sony TV
sets are within grade B, and a very low percentage are within or below grade D. In
comparison, the color TV sets from San Diego Sony have about the same percentage in
grades A, B and C.
To reduce the difference in process capability indices between Japanese
Sony and San Diego Sony (and thus seemingly increase the quality level of the San
Diego sets), the latter tried to tighten the tolerance specification to extend only to grade C,
shown in the figure, rather than grade D. Therefore, only the products within grades A, B and C
were treated as passing. But this approach is faulty. Tightening the tolerance specifications
because of a low process capability in a production plant is as meaningless as increasing the
passing score of school tests from 60% to 70% just because students do not learn well. On
the contrary, such a school should ask its teachers to improve their instruction of the students
who do not test well, rather than simply adjusting the passing score. The next section will illustrate how
to evaluate the functional quality of products meaningfully and correctly.
Quadratic Loss Function
When an objective characteristic y deviates from its target value m, some financial loss
will occur. Therefore the financial loss, sometimes referred to simply as quality loss or
used as an expression of quality level, can be assumed to be a function of y, which we
shall designate L(y). When y meets the target m, L(y) will be at its minimum; generally,
the financial loss can be assumed to be zero under this ideal condition:

L(m) = 0    (Equation 2.1)

Since the financial loss is at a minimum at this point, the first derivative of the loss function
with respect to y at this point should also be zero. Therefore, one obtains the following
equation:

L'(m) = 0    (Equation 2.2)

If one expands the loss function L(y) in a Taylor series around the target
value m and takes Equations (2.1) and (2.2) into consideration, one gets:

L(y) = L(m) + L'(m)(y − m)/1! + L''(m)(y − m)²/2! + ...
     = L''(m)(y − m)²/2! + ...

This result is obtained because the constant term L(m) and the first derivative term L'(m) are
both zero. In addition, the third-order and higher-order terms are assumed to be negligible.
Thus, one can express the loss function as a squared term multiplied by a constant k:

L(y) = k(y − m)²    (Equation 2.3)

When the deviation of the objective characteristic from the target value increases, the
corresponding quality loss will increase. When the magnitude of the deviation is outside the
tolerance specifications, the product should be considered defective.
Let the cost due to a defective product be A and the corresponding magnitude of the
deviation from the target value be Δ. Substituting these into the right-hand side of Equation (2.3), one
can determine the value of the constant k:

k = cost of a defective product / (tolerance)² = A/Δ²

In the case of the Sony colour TV sets, let the adjustment cost be A = 300 Rs when the
colour density is outside the tolerance specification (Δ = 5).
Therefore, the value of k can be calculated by the following equation:

k = A/Δ² = 300/5² = 12 Rs

Therefore, the loss function is

L(y) = 12(y − m)²

This equation is still valid even when only one unit of product is made.
Consider the visitor to the BHEL Heavy Electric Equipment Company in India who was
told, "In our company, only one unit of product needs to be made for our nuclear power
plant. In fact, it is not really necessary for us to make another unit of product. Since the
sample size is only one, the variance is zero. Consequently, the quality loss is zero and it is
not really necessary for us to apply statistical approaches to reduce the variance of our
product."
However, the quality loss function [L(y) = k(y − m)²] is based on the
mean square deviation of the objective characteristic from its target value, not on the variance
of the products. Therefore, even when only one product is made, the corresponding quality
loss can still be calculated by Equation (2.3). Generally, the mean square deviation of the
objective characteristic from its target value can be used to estimate the mean value
of the quality loss in Equation (2.3). One can calculate the mean square deviation from target,
σ² (which in this equation is not the sample variance; the term is also called the mean square
error), by the following equation:

σ² = [(y₁ − m)² + (y₂ − m)² + ... + (yₙ − m)²] / n
Table 2.1 Quality of Sony TV sets, where the tolerance specification range is 10 and the
objective function data correspond to the figure:

Plant Location | Mean Value of Objective Function | Standard Deviation | Variance σ² | Loss L (in Rs) | Defective Ratio
Japan          | m | 10/6   | 100/36 | 33  | 0.27%
San Diego      | m | 10/√12 | 100/12 | 100 | 0.00%
Substituting this into Equation (2.3), one gets the following equation:

L = kσ²    (Equation 2.4)

From Equation (2.4), one can evaluate the difference in average quality levels between
the TV sets from Japanese Sony and those from San Diego Sony, as shown in Table 2.1.
From Table 2.1 it is clear that although the defective ratio of Japanese Sony is higher
than that of San Diego Sony, the quality level of the former is about 3 times higher than that of the
latter. Assume that one tightens the tolerance specification of the San Diego
Sony TV sets to m ± 10/3.
Also assume that these TV sets remain uniformly distributed after the tolerance
specification is tightened. The average quality level of the San Diego Sony TV sets would
then improve to the following:
L = 12 × (1/12) × (10 × 2/3)² ≈ 44 Rs

where the value of the loss function is taken as the relative quality level of the product.
This average quality loss of the San Diego Sony TV sets is 56 Rs better than the
original quality level, but still 11 Rs worse than that of the Japanese Sony TV sets. In addition,
in this type of quality improvement one must adjust the products that fall between the two
tolerance limits, m ± 10/3 and m ± 5, to bring them within m ± 10/3. In the uniform
distribution shown in Figure 2.1, 33.3% of the sets would need adjustment, which would cost 300 Rs
per unit. Therefore each TV set from San Diego Sony would cost an additional 100 Rs on
average for the adjustment:

(300)(0.333) = 100 Rs

Consequently, it is not really a good idea to spend 100 Rs more per product to gain an
improvement worth only 56 Rs. A better way is to apply quality management methods to
improve the quality level of the products.
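The comparison of average losses can be reproduced with a short Python sketch (k = 12 Rs and the tolerance of ±5 come from the text; treating the Japanese plant as normal with σ = 10/6 and the San Diego plant as uniform over the tolerance is the assumption made above):

```python
k = 12.0                                   # Rs per (unit deviation)^2, from A = 300 Rs and delta = 5

def average_loss(variance):
    """Average quality loss L = k * sigma^2 (Equation 2.4)."""
    return k * variance

var_japan = (10 / 6) ** 2                  # normal distribution, sigma = 10/6
var_san_diego = 10 ** 2 / 12               # uniform over width 10: variance = width^2 / 12
var_san_diego_tight = (20 / 3) ** 2 / 12   # uniform over the tightened width 2*(10/3)

print(round(average_loss(var_japan), 1))             # ~33.3 Rs
print(round(average_loss(var_san_diego), 1))         # 100.0 Rs
print(round(average_loss(var_san_diego_tight), 1))   # ~44.4 Rs
```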
2.2 Variation of the Quadratic Loss Function
1. Nominal-the-best type: Whenever the quality characteristic y has a finite target
value, usually nonzero, and the quality loss is symmetric on either side of the
target, such a quality characteristic is called nominal-the-best type. The loss is given by

L(y) = k(y − m)²    (Equation 1)

The color density of a television set and the output voltage of a power supply
circuit are examples of nominal-the-best type quality characteristics.
2. Smaller-the-better type: Some characteristics, such as radiation leakage from a
microwave oven, can never take negative values. Their ideal value is equal to
zero, and as their value increases, the performance becomes progressively worse.
Such characteristics are called smaller-the-better type quality characteristics. The
response time of a computer, leakage current in electronic circuits, and pollution
from an automobile are additional examples of this type of characteristic. The
quality loss in such situations can be approximated by the following function, which
is obtained from Equation 1 by substituting m = 0:

L(y) = k y²

This is a one-sided loss function because y cannot take negative values.

3. Larger-the-better type: Some characteristics, such as the bond strength of adhesives,
also do not take negative values. But zero is their worst value, and as their value
becomes larger, the performance becomes progressively better; that is, the quality
loss becomes progressively smaller. Their ideal value is infinity, and at that point the
loss is zero. Such characteristics are called larger-the-better type characteristics. It is
clear that the reciprocal of such a characteristic has the same qualitative behavior as
a smaller-the-better type characteristic. Thus we approximate the loss function for a
larger-the-better type characteristic by substituting 1/y for y in the smaller-the-better
loss:

L(y) = k (1/y²)

4. Asymmetric loss function: In certain situations, deviation of the quality characteristic
in one direction is much more harmful than in the other direction. In such cases, one
can use a different coefficient k for the two directions. Thus, the quality loss would
be approximated by the following asymmetric loss function:

L(y) = k₁(y − m)²  for y > m
L(y) = k₂(y − m)²  for y ≤ m
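The four variants can be collected into one small Python sketch (the constants k, k1, k2 and the target m are illustrative placeholders, not values from the report):

```python
def nominal_the_best(y, m, k):
    return k * (y - m) ** 2

def smaller_the_better(y, k):
    return k * y ** 2            # target m = 0

def larger_the_better(y, k):
    return k / y ** 2            # loss tends to zero as y grows large

def asymmetric(y, m, k1, k2):
    k = k1 if y > m else k2      # different coefficient on each side of the target
    return k * (y - m) ** 2

# Illustrative check: larger-the-better loss shrinks as the characteristic improves
print(larger_the_better(2.0, k=1.0), larger_the_better(10.0, k=1.0))  # 0.25 0.01
```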


CHAPTER 3
Introduction to Analysis of Variation
3.1 Understanding variation
The purpose of product or process development is to improve the performance
characteristics of the product or process relative to customers' needs and expectations.
The purpose of experimentation should be to reduce and control the variation of a
product or process; subsequently, decisions must be made about which parameters affect the
performance of the product/process.
Since variation is a large part of the discussion relative to quality, analysis of variance
(ANOVA) will be the statistical method used to interpret experimental data and make the
necessary decisions.
3.2 What is ANOVA
ANOVA is a statistically based decision tool for detecting differences in the average
performance of groups of items tested.
ANOVA is a mathematical technique which breaks total variation down into accountable
sources; that is, total variation is decomposed into its appropriate components.
We will start with a very simple case and then build up to more comprehensive situations.
Thereafter, we will apply ANOVA to some very specialized experimental situations.
3.2.1 No way ANOVA
Imagine an engineer is sent to the production line to sample a set of windshield pumps for
the purpose of measuring flow rate. The data collected is as under:
Pump No.           | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Flow rate (oz/min) | 5 | 6 | 8 | 2 | 5 | 4 | 4 | 6
1 oz/min. = 0.473 ml/s
No-way ANOVA breaks total variation down into only two components
1. The variation of the average (or mean) of all the data points relative to zero
2. The variation of individual data points around the average (traditionally called
experimental error)
The notations used in the calculation method are as under:
y_i = the i-th observation, response, or simply data point; for example y_3 = 8 oz/min
N = total number of observations
T = sum of all observations
T̄ = average of all observations = T/N = ȳ
In this case N = 8, T = 40 oz/min, and T̄ = 5.0 oz/min
What is the reason for the variation from pump to pump?
The true flow rate is actually unknown; it is only estimated through the use of some flow
meter. There will be some unknown measurement error present, but the flow rate will
nonetheless be observed and accepted as the pump's performance under the conditions of
the test.
Also, the pumps were randomly selected. Although the manufacturer nominally produces
identical pumps, there will be slight differences from pump to pump, causing a pump-to-pump
variation in performance (this is the natural variation of the process).
It is for this reason that the flow rates of the pumps are not identical.
No-way ANOVA can be illustrated graphically. [Graphical illustration not reproduced here.]
The magnitude of each observation can be represented by a line segment extending
from zero to the observation. These line segments can be divided into two portions:
- one portion attributed to the mean;
- the other portion attributed to the error.
The error includes the natural process variation and the measurement error.
The magnitude of the line segment due to the mean is indicated by extending a line
from the average value to zero.
- The magnitude of line segment due to error is indicated by the difference of the
average value from each observation.
To calculate the total variation present we will do a mathematical operation which will
allow a clearer picture to develop.
The magnitudes of each of the line segments can be squared and then summed to provide a
measure of the total variation present:

SS_T = total sum of squares = 5² + 6² + 8² + ... + 6² = 222.0

The magnitude of the line segment due to the mean can also be squared and summed:

SS_m = sum of squares due to the mean = N(T̄)². But T̄ = T/N, so SS_m = T²/N = 40²/8 = 200.0

The portion of the magnitude of each line segment due to error can be squared and summed
to provide a measure of the variation around the average value:

SS_e = error sum of squares = Σ (y_i − T̄)²  (i = 1..N)
     = 0² + 1² + 3² + (−3)² + 0² + (−1)² + (−1)² + 1² = 22.0

Note that 222.0 = 200.0 + 22.0.
This demonstrates a basic property of ANOVA: the total sum of squares is equal to the
sum of squares of its known components,

SS_T = SS_m + SS_e

The formulas for the sums of squares can be written generally as

SS_T = Σ y_i²  (i = 1..N)
SS_m = T²/N
SS_e = Σ (y_i − T̄)²  (i = 1..N)

In the above example the error sum of squares was calculated directly, but this is not
necessary, since SS_e = SS_T − SS_m.
3.2.1.1 Degrees of Freedom (dof)
To complete the ANOVA calculations, one other element must be considered, i.e. degrees of
freedom. The concept of dof is to allow 1 dof for each independent comparison that can
be made in the data.
Only one independent comparison can be made involving the mean of all the data (there is
only one mean). Therefore, only 1 dof is associated with the mean.
The concept of dof also applies to the dof associated with the error estimate. With
reference to the 8 observations, there are 7 independent comparisons that can be made to
estimate the variation in the data: data point 1 can be compared to 2, 2 to 3, 3 to 4, etc., which
gives 7 independent comparisons.
One of the questions an instructor dreads most from an audience is,
"What exactly is degrees of freedom?"
It's not that there's no answer. The mathematical answer is a single phrase, "The rank of a
quadratic form."
It is one thing to say that degrees of freedom is an index and to describe how to calculate it
for certain situations, but none of these pieces of information tells what degrees of freedom
means.
At the moment, I'm inclined to define degrees of freedom as a way of keeping score.
A data set contains a number of observations, say, n. They constitute n individual pieces of
information. These pieces of information can be used either to estimate parameters (mean)
or variability.
In general, each item being estimated costs one degree of freedom. The remaining degrees
of freedom are used to estimate variability. All we have to do is count properly.
A single sample: There are n observations. There is one parameter (the mean) that needs to
be estimated. That leaves (n − 1) degrees of freedom for estimating variability.
Two samples: There are n₁ + n₂ observations. There are two means to be estimated. That
leaves (n₁ + n₂ − 2) degrees of freedom for estimating variability.
Let ν denote the degrees of freedom:
ν_t = total dof
ν_m = dof associated with the mean (always 1 for each sample)
ν_e = dof associated with the error
Then ν_t = ν_m + ν_e, i.e. 8 = 1 + 7.
The total dof equals the total number of observations in the data set for this method of
ANOVA.

Summary of No-way ANOVA:

Source | SS  | dof ν
Mean   | 200 | 1
Error  | 22  | 7
Total  | 222 | 8

One other statistic calculated from ANOVA is the variance V. The error variance (or simply
the variance) is

V_e = SS_e / ν_e = 22/7 = 3.14

Also, the sample standard deviation is s = √V, where

V = s² = Σ (y_i − ȳ)² / (N − 1) = variance

which is essentially V_e = SS_e / ν_e.
Although the formula above is faster than ANOVA for calculating the error variance in this
case, when the experimental situations become more complex ANOVA becomes the
faster method.
The error variance is a measure of the variation due to all the uncontrolled parameters,
including the measurement error involved in a particular experiment (the set of data collected),
which is essentially the natural variation of the process.
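A minimal Python sketch of the no-way ANOVA bookkeeping for the pump data above (the flow rates are the eight values from the table):

```python
data = [5, 6, 8, 2, 5, 4, 4, 6]          # flow rates, oz/min

N = len(data)
T = sum(data)
T_bar = T / N                             # overall average

SS_T = sum(y ** 2 for y in data)          # total sum of squares
SS_m = T ** 2 / N                         # sum of squares due to the mean
SS_e = SS_T - SS_m                        # error sum of squares

dof_e = N - 1                             # one dof goes to the mean
V_e = SS_e / dof_e                        # error variance

print(SS_T, SS_m, SS_e, round(V_e, 2))    # 222 200.0 22.0 3.14
```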
3.2.2 One Way ANOVA
This is the next most complex ANOVA to conduct.
This situation considers the effect of one controlled parameter upon the performance of a
product or process, in contrast to no-way ANOVA, where no parameters were controlled.
Again let us try to solve this problem through an imaginary, yet potentially real situation.
Imagine the same engineer who sampled the flow rate of the windshield pumps is now charged
with establishing the fluid velocity generated by the windshield washer pumps.
This matters because when the fluid velocity is too low the fluid will merely dribble out, and when it is too
high the air movement past the windshield will not be able to distribute the cleaning fluid
adequately to satisfy the car driver.
The engineer proposes a test of three different orifice areas to determine which gives a
proper fluid velocity.
Before the test data are collected, some notation is introduced to simplify the mathematical
discussion:

A = factor under investigation (outlet orifice area)
A1 = 1st level of orifice area = 0.0015 sq. in
A2 = 2nd level of orifice area = 0.0030 sq. in
A3 = 3rd level of orifice area = 0.0045 sq. in
The same symbol as the level designation will also be used to denote the sum of the responses at that level:
Ai = sum of observations under level Ai
Āi = average of observations under level Ai = Ai / n_Ai
T = sum of all observations
T̄ = average of all observations = T/N
n_Ai = number of observations under level Ai
k_A = number of levels of factor A
With this notation in mind, the engineer constructs four pumps at each orifice area
(making 12 pumps to test over the three levels). The test data are as under:

Level | Area (sq. in) | Velocity (ft/s)     | Total
A1    | 0.0015        | 2.2, 1.9, 2.7, 2.0  | 8.8
A2    | 0.0030        | 1.5, 1.9, 1.7, -*   | 5.1
A3    | 0.0045        | 0.6, 0.7, 1.1, 0.8  | 3.2
Grand total: 17.1
* Pump dropped and destroyed; no data.

A1 = 8.8 ft/s, n_A1 = 4, Ā1 = 2.2 ft/s
A2 = 5.1 ft/s, n_A2 = 3, Ā2 = 1.7 ft/s
A3 = 3.2 ft/s, n_A3 = 4, Ā3 = 0.8 ft/s
k_A = 3, T = 17.1 ft/s, N = 11, T̄ = 17.1/11 ≈ 1.55 ft/s
Sum of squares (One-way ANOVA)
Two methods can be used to complete the calculations:
- including the mean;
- excluding the mean.
Method 1 (Including the mean)
As before, the total variation can be decomposed into its appropriate components:
- the variation of the mean of all observations relative to zero (variation due to the mean);
- the variation of the mean of the observations under each factor level around the average of all observations (variation due to factor A);
- the variation of the individual observations around the average of the observations under each level (variation due to error).
The calculations are identical to the no-way ANOVA example, except for the component of
variation due to factor A, the outlet orifice area.

SS_T = Σ y_i²  (i = 1..N) = 2.2² + 1.9² + ... + 0.8² = 31.19

SS_m = N(T̄)² = T²/N = 17.1²/11 = 26.583
Graphically, this can also be shown. [Graphical illustration not reproduced here.]
The magnitude of the segments for each level of factor A is squared and summed. For
instance, the length of the line segment due to level A1 is (Ā1 − T̄). There are four
observations under the A1 condition. The same type of information is collected for the
other levels of factor A.
SS_A = n_A1(Ā1 − T̄)² + n_A2(Ā2 − T̄)² + n_A3(Ā3 − T̄)²
     = 4(0.64545)² + 3(0.14545)² + 4(−0.75454)² = 4.007

The above calculation is tedious and is mathematically equivalent to:

SS_A = Σ (Ai²/n_Ai) − T²/N   (i = 1..k_A)
     = 8.8²/4 + 5.1²/3 + 3.2²/4 − 17.1²/11 = 4.007, which is the same as above.
The variation due to error is given by

SS_e = Σ_j Σ_i (y_ij − Āj)²   (summing over the k_A levels j and the n_Aj observations i within each level)
     = 0² + (−0.3)² + 0.5² + (−0.2)² + ... = 0.600
Error variation is again based on the method of least squares, but in one-way ANOVA the least
squares are evaluated around the average of each level of the controlled factor.
Error variation is the uncontrolled variation within the controlled group. Again, the total
variation is

SS_T = SS_m + SS_A + SS_e
31.190 = 26.583 + 4.007 + 0.600
Dof (including the mean)

ν_t = ν_m + ν_A + ν_e
ν_t = 11, ν_A = k_A − 1 = 2, ν_e = 11 − 1 − 2 = 8

One-way ANOVA summary (Method 1):

Source               | SS     | dof ν | Variance V
Mean m               | 26.583 | 1     | 26.583
Factor A             | 4.007  | 2     | 2.004
Uncontrolled error e | 0.600  | 8     | 0.075
Total                | 31.190 | 11    |
In the above table we have been able to estimate the variance for both factor A and the
uncontrolled error. This is what will be of interest to us when we design experiments.
Also, if you look at the calculations done for Method 1, you will observe that the mean does not
affect the calculations of the variation due to factor A and error in any way.
Thus in most experimental situations the mean has no practical value, with the exception of
lower-is-better situations, where the variation due to the mean is a measure of how far the
average is from zero and how successful a factor might be in reducing the average to zero.
Method 2: When we exclude the mean from the ANOVA calculations, the total variation is
then calculated as:
- the variation of the average of the observations under each factor level around the average of all observations;
- the variation of the individual observations around the average of the observations under each factor level.
Again, this can be demonstrated graphically. The same concept of summing the squares of the
magnitudes of the various line segments is applied in Method 2 as well.

SS_T = Σ (y_i − T̄)²  (i = 1..N) = 4.607

Mathematically this is equivalent to

SS_T = Σ y_i² − T²/N

This expression will be used to define the total variation by this method. Note that it is
equivalent to (SS_T − SS_m) from the previous calculations.
The variation due to factor A and the uncontrolled error are calculated exactly as in Method 1:

SS_A = Σ (Ai²/n_Ai) − T²/N   (i = 1..k_A)
SS_e = Σ_j Σ_i (y_ij − Āj)²
Dof (excluding the mean)

In Method 1 the dof relation was ν_t = ν_m + ν_A + ν_e, where ν_m = 1 (always) and ν_t = N.
In Method 2 (mean excluded), the dof for the mean is subtracted from both sides of the above
equation, so

ν_t = N − 1 = 11 − 1 = 10
ν_t = ν_A + ν_e
10 = (k_A − 1) + ν_e, so ν_e = 8
One-way ANOVA summary (Method 2):

Source               | SS    | dof ν | Variance V
Factor A             | 4.007 | 2     | 2.004
Uncontrolled error e | 0.600 | 8     | 0.075
Total                | 4.607 | 10    |

The variances for factor A and error are identical in both methods. The value of the mean is
disregarded in Method 2, which is the most popular method.
Only when the performance parameter is a lower-is-better characteristic would the variance
due to the mean be relevant; it provides a measure of how effective some factor might be in
reducing the average to zero.
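A short Python sketch of the Method 2 bookkeeping for the orifice-area data (the velocities per level are the values from the table above):

```python
levels = {
    "A1": [2.2, 1.9, 2.7, 2.0],
    "A2": [1.5, 1.9, 1.7],        # one pump was destroyed, so only three data points
    "A3": [0.6, 0.7, 1.1, 0.8],
}

all_y = [y for ys in levels.values() for y in ys]
N, T = len(all_y), sum(all_y)

SS_T = sum(y ** 2 for y in all_y) - T ** 2 / N                      # total (mean excluded)
SS_A = sum(sum(ys) ** 2 / len(ys) for ys in levels.values()) - T ** 2 / N
SS_e = SS_T - SS_A

dof_A = len(levels) - 1
dof_e = (N - 1) - dof_A

print(round(SS_A, 3), round(SS_e, 3))                  # 4.007 0.6
print(round(SS_A / dof_A, 3), round(SS_e / dof_e, 3))  # 2.004 0.075
```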
Let us sum up the above discussion. Define three sums of squares:
- Total corrected sum of squares (SS_T): squared deviations of the observations from the overall average.
- Error sum of squares (SS_E): squared deviations of the observations from their treatment averages.
- Treatment sum of squares (SS_trt): squared deviations of the treatment averages from the overall average (times n).

Dot notation:
y.. = Σ_i Σ_j y_ij  (i = 1..a treatments, j = 1..n observations per treatment)
ȳ.. = y../N  (the overall average)
y_i. = Σ_j y_ij
ȳ_i. = y_i./n  (the average within treatment i)
Raw SS = Σ_i Σ_j y_ij²
Total corrected SS:

SS_T = Σ_i Σ_j (y_ij − ȳ..)²

This measures the overall variability in the data.
SS_T/(N − 1) is just the sample variance of the whole data set.
DECOMPOSITION OF SS
Derivation of the following equation: SS_T = SS_trt + SS_E.

SS_T = Σ_i Σ_j (y_ij − ȳ..)²
     = Σ_i Σ_j [(y_ij − ȳ_i.) + (ȳ_i. − ȳ..)]²
     = Σ_i Σ_j (y_ij − ȳ_i.)² + n Σ_i (ȳ_i. − ȳ..)² + 2 Σ_i Σ_j (y_ij − ȳ_i.)(ȳ_i. − ȳ..)
     = SS_E + SS_trt + 2 Σ_i (ȳ_i. − ȳ..) Σ_j (y_ij − ȳ_i.)

It must be shown that the last (cross) term is zero. For each treatment i,

Σ_j (y_ij − ȳ_i.) = y_i. − n ȳ_i. = y_i. − y_i. = 0

so the cross term vanishes and SS_T = SS_E + SS_trt.
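A quick numeric check of this identity in Python, reusing the orifice-area data from Section 3.2.2 (note that the derivation above is written for equal group sizes n, while that data set has one missing value; the identity still holds with per-group sizes):

```python
groups = [[2.2, 1.9, 2.7, 2.0], [1.5, 1.9, 1.7], [0.6, 0.7, 1.1, 0.8]]

all_y = [y for g in groups for y in g]
grand_mean = sum(all_y) / len(all_y)

SS_T   = sum((y - grand_mean) ** 2 for y in all_y)
SS_E   = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
SS_trt = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

print(round(SS_T, 3), round(SS_E + SS_trt, 3))   # both ~4.607
```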
3.2.3 Two-Way ANOVA
Two-way ANOVA is the next highest order of ANOVA to review. There are two controlled
parameters in this experimental situation.
Let us consider an experimental situation. A student worked at an aluminum casting
foundry which manufactured pistons for reciprocating engines. The problem with the process
was how to attain the proper hardness (Rockwell B) of the casting for a particular product.
Engineers were interested in the effect of copper and magnesium content on casting
hardness. According to the specs, the copper content could be 3.5 to 4.5% and the magnesium
content could be 1.2 to 1.8%.
The student runs an experiment to evaluate these factors and conditions simultaneously.
If A = % copper content:    A1 = 3.5    A2 = 4.5
If B = % magnesium content: B1 = 1.2    B2 = 1.8

The experimental conditions for two factors at two levels are given by 2^f = 2² = 4
combinations, which are

A1B1, A1B2, A2B1, A2B2
Imagine four different mixes of metal constituents are prepared, castings poured and
hardness measured. Two samples are measured from each mix for hardness. The results
look like:

       A1       A2
B1   76, 78   73, 74
B2   77, 78   79, 80

To simplify the discussion, 70 points are subtracted from each of the above observations
from each of the four mixes. The transformed results can be shown as:

       A1       A2
B1    6, 8     3, 4
B2    7, 8     9, 10
Two-way ANOVA calculations:
The variation may be decomposed into more components:
1. Variation due to factor A
2. Variation due to factor B
3. Variation due to the interaction of factors A and B
4. Variation due to error
An equation for the total variation can be written as

SS_T = SS_A + SS_B + SS_AxB + SS_e

A x B represents the interaction of factors A and B. The interaction is the mutual effect of
Cu and Mg in affecting hardness.
Some preliminary calculations will speed up the ANOVA:
        A1       A2      Total
B1     6, 8     3, 4      21
B2     7, 8     9, 10     34
Total   29       26       55 (grand total)

A1 = 29, A2 = 26, B1 = 21, B2 = 34
T = 55, N = 8
n_A1 = n_A2 = n_B1 = n_B2 = 4
Total variation:

SS_T = Σ y_i² − T²/N = 6² + 8² + 3² + ... + 10² − 55²/8 = 40.875
Variation due to factor A:

SS_A = Σ (Ai²/n_Ai) − T²/N = 29²/4 + 26²/4 − 55²/8 = 1.125

Please carry out a mathematical check here: the numerator terms satisfy 29 + 26 = 55 and the
denominators 4 + 4 = 8. If these conditions are not met, the SS_A calculation will be wrong.

For a two-level experiment, when the sample sizes are equal, the equation above can be
simplified to this special formula:

SS_A = (A1 − A2)²/N = (29 − 26)²/8 = 1.125

Similarly, the variation due to factor B is

SS_B = (B1 − B2)²/N = (21 − 34)²/8 = 21.125
To calculate the variation due to the interaction of factors A and B, let (AxB)i represent the
sum of the data under the i-th combination of factors A and B, let c represent the number of
possible combinations of the interacting factors, and let n_(AxB)i be the number of data points
under that combination. Then

SS_AxB = [ Σ (AxB)i² / n_(AxB)i ]  (i = 1..c)  − T²/N − SS_A − SS_B

Note that when the various combinations are summed, squared, and divided by the number
of data points for that combination, the resulting value also includes the factor main
effects, which must be subtracted. While using this formula, all lower-order interactions
and factor effects are to be subtracted.
For the example problem:

A1B1 = 14, A1B2 = 15, A2B1 = 7, A2B2 = 19

The number of possible combinations is c = 4, and since there are two observations under
each combination, n_(AxB)i = 2.

SS_AxB = 14²/2 + 15²/2 + 7²/2 + 19²/2 − 55²/8 − 1.125 − 21.125 = 15.125

Since SS_T = SS_A + SS_B + SS_AxB + SS_e,

SS_e = 40.875 − 1.125 − 21.125 − 15.125 = 3.500
Degrees of Freedom (dof), two-way ANOVA:

ν_t = ν_A + ν_B + ν_AxB + ν_e
ν_t = N − 1 = 8 − 1 = 7
ν_A = k_A − 1 = 1
ν_B = k_B − 1 = 1
ν_AxB = (ν_A)(ν_B) = 1
ν_e = 7 − 1 − 1 − 1 = 4
ANOVA summary table (two-way):

Source  | SS     | dof ν | Variance V | F
A       | 1.125  | 1     | 1.125      | 1.29
B       | 21.125 | 1     | 21.125     | 24.14*
A x B   | 15.125 | 1     | 15.125     | 17.29**
Error e | 3.500  | 4     | 0.875      |
Total   | 40.875 | 7     |            |
* at 95% confidence
** at 90% confidence
The ANOVA results indicate that Cu by itself has no effect on the resultant hardness,
magnesium has a large effect (largest SS) on hardness and the interaction of Cu and Mg
plays a substantial part in determining hardness.
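The full two-way breakdown can be reproduced with a short Python sketch (the hardness values are the transformed data above; each F ratio is a factor's variance over the error variance):

```python
# Transformed hardness data: cell[(A level, B level)] -> observations
cells = {("A1", "B1"): [6, 8], ("A2", "B1"): [3, 4],
         ("A1", "B2"): [7, 8], ("A2", "B2"): [9, 10]}

all_y = [y for ys in cells.values() for y in ys]
N, T = len(all_y), sum(all_y)
corr = T ** 2 / N                                  # correction factor T^2/N

def level_sum(factor_index, level):
    return sum(y for key, ys in cells.items() if key[factor_index] == level for y in ys)

SS_T = sum(y ** 2 for y in all_y) - corr
SS_A = (level_sum(0, "A1") - level_sum(0, "A2")) ** 2 / N
SS_B = (level_sum(1, "B1") - level_sum(1, "B2")) ** 2 / N
SS_AB = sum(sum(ys) ** 2 / len(ys) for ys in cells.values()) - corr - SS_A - SS_B
SS_e = SS_T - SS_A - SS_B - SS_AB

V_e = SS_e / 4                                     # error dof = 4
print(SS_A, SS_B, SS_AB, SS_e)                                          # 1.125 21.125 15.125 3.5
print(round(SS_A / V_e, 2), round(SS_B / V_e, 2), round(SS_AB / V_e, 2))  # 1.29 24.14 17.29
```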
A plot of these data can be seen. [Interaction plot: hardness (2 to 10) versus levels A1 and A2,
with separate lines for B1 and B2.]
In this plot the lines are not parallel, which indicates the presence of an interaction. The
factor A effect depends on the level of factor B and vice versa. If the lines were parallel,
there would be no interaction, which means the factor A effect would be the same regardless of
the level of factor B.
Geometrically, there is some information available from the graph that may be useful. The
relative magnitudes of the various effects can be seen graphically: the B effect is the
largest, the A x B effect is next largest, and the A effect is very small.

Ā1 = 29/4 ≈ 7.3    Ā2 = 26/4 = 6.5    B̄1 = 21/4 ≈ 5.3    B̄2 = 34/4 = 8.5

By plotting the data for each factor one could observe the following typical patterns: in the
first case there is no interaction because the lines are parallel; in the second case there is
some interaction; and in the third case there is a strong interaction.
[Sketch: interaction plot annotated with the relative magnitudes of the B effect, the A x B effect, and the A effect.]

3.3 EXAMPLE OF ANOVA
During the late 1980s, Modi Xerox had a large base of customers (50 thousand)
for this copier model spread over the entire country. Many buyers of these machines
earn their livelihood by running copying services. Each of these buyers
ultimately serves a very large number of customers (end users). When copy
quality is either poor or inconsistent, these buyers earn a bad name and their
image and business are affected. In the late 1980s, the company integrated the
total quality management philosophy into its operations and placed the highest
focus on customer satisfaction; any problem of field failure was given the
highest priority for investigation. The problem of skips was subjected to
detailed investigation by a cross-functional team from the Production,
Marketing, and Quality Assurance Departments.
Problem Description
The pattern of blurred images (skips) observed in the copy is shown in the figure
above. It usually occurs between 10 and 60 mm from the lead edge of the paper.
Sometimes, on a photocopy taken on company letterhead paper, the company
logo gets blurred, which is not appreciated by the customer. This problem was
noticed in only one-third of the machines produced by the company, with the
remaining two-thirds of machines in the field working well without this
problem. The in-house test evaluation record also confirmed the problem in
only about 15% of the machines produced. Data analysis indicated that not all
the machines produced were faulty. Therefore, the focus of further
investigation was to find out what went wrong in the faulty machines, or
whether there were basic differences between the components used in good and
faulty machines.
Preliminary Investigation
A copier machine consists of more than 1000 components and assemblies. A
brainstorming session by the team helped in the identification of 16
components suspected to be responsible for the problem of blurred images.
Each suspected component had at least two possible dimensional characteristics
which could have resulted in the skip symptom. This led to more than 40
probable causes (40 dimensions arising out of 16 components) for the problem.
An attempt was made by the team to identify the real causes among these 40
probable causes. Ten bad machines were stripped open and the various dimensions
of these 16 components were measured. It was observed that all the dimensions
were well within specifications. Hence, this investigation did not give any clues
to the problem. Moreover, the time and effort spent in dismantling the faulty
machines and checking the various dimensions of the 16 components was in vain. This
gave rise to the thought that conforming to specification does not always lead
to perfect quality. The team needed to think beyond the specification in order to
find a solution to the problem.
Taguchi Experiment
An earlier brainstorming session had identified 16 components that were likely
to be the cause of this problem. A study of the travel documents of 300 problem
machines revealed that on 88% of occasions the problem was solved by
replacing one or more of only four parts of the machine. These four parts were
from the list of 16 parts identified earlier. They were considered to be critical,
and it was decided to conduct an experiment on these four parts. These parts
were the following:
(a) Drum shaft
(b) Drum gear
(c) Drum flange
(d) Feed shaft
Two sets of these parts were taken for the experiment, one from an identified
problem machine and one from a problem-free machine. The two levels
considered in the experiment were good and bad, good signifying parts from
the problem-free machine and bad signifying parts from the problem
machine. The factors and levels thus identified are given in the table below.

A full factorial experiment would have required 16 trials, while the experiment
was designed as an L8(2⁷) fractional factorial using a linear graph and orthogonal
array (OA) table developed by Taguchi.
The linear graph is presented in Fig. 9.24 and the layout in Table 9.14. A
master plan for conducting the eight experiments was prepared. The response
considered was the number of defective copies (copies exhibiting the skip
problem) in a 50-copy run. The master plan along with the response is
presented in Table 9.15.
Analysis and Results
The response considered was the fraction defective (p = d/n). The data were normalized
by the arcsine transformation, sin⁻¹(√p). Analysis of variance (ANOVA) was
performed on the normalized data and the results are presented in the table.

F(1,5) at 0.05 = 6.61; F(1,5) at 0.01 = 16.26.
Percent contribution of factor A: ρ_A = (SS_A − V_e) × 100 / SS_T = (3528 − 32.4) × 100 / 3788 = 92.3%

As can be seen from the table, factor A is highly significant (the only significant
factor), explaining 92.3% of the total variation. In other words, of the four
components studied, the drum shaft alone is the source of the skip trouble. The
problem was now narrowed down to one component from the earlier list of 16
components, giving a ray of hope for moving towards a solution. Further
investigations were carried out on the drum shaft design.
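A small Python sketch of this analysis step (the per-trial defect counts are not reproduced in the text, so the transformation is shown on an assumed count; the sum-of-squares figures 3528, 32.4 and 3788 are the ones quoted above):

```python
import math

def omega_transform(defectives, n=50):
    """Arcsine square-root normalization of a fraction defective p = d/n (in degrees)."""
    p = defectives / n
    return math.degrees(math.asin(math.sqrt(p)))

print(round(omega_transform(13), 1))   # assumed example: 13 defective copies out of 50 -> about 30.7

def percent_contribution(ss_factor, v_error, ss_total, dof_factor=1):
    """Taguchi percent contribution: (SS_factor - dof * V_e) / SS_total * 100."""
    return (ss_factor - dof_factor * v_error) / ss_total * 100

print(round(percent_contribution(3528, 32.4, 3788), 1))   # 92.3
```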
Drum Shaft Design
The configuration of the drum shaft is defined by 15 dimensions. A
brainstorming session by the team members identified wobbling and increased
play in the drum shaft as major causes of this problem. Four dimensions of the
drum shaft were suspected of causing wobbling and excessive play. These
dimensions were checked in all 20 machines (10 good and 10 bad) and found to
be well within specification. Now the question arose as to where the problem
lay: definitely not within the specification, so perhaps beyond the specification?
This led us to think beyond the specification in order to find a solution. As a
first step, the dimension patterns of good and bad machines were compared.
The dimension patterns for the four critical dimensions suspected to be the cause of
the problem are shown in the figure below.
There is not much difference in pattern between good and bad machines with
respect to dimensions B, C, and D. Dimension A, that is, the diameter over pin
(DOP) dimension of the drum shaft splines, revealed a difference in pattern
between the good and bad machines. The DOP of shafts from the 10 problem
machines was found to lie in the lower half of the specification range, whereas
in the case of the problem-free machines the DOP was found to be always in the
upper half of the specification range (shown in the figure above). The DOP dimension
of the drum shaft is shown in the figure below.
DOP (diameter over pin) is a measure of the tooth thickness, t. A higher DOP
means a greater tooth thickness of the splines, and vice versa. If the DOP of the
splines of the drum shaft is on the lower side, it will increase the
clearance, resulting in more play between the drum shaft and the drum gear
assembly.
Here, the image of the original document is transferred onto the photoreceptor
drum through a series of lenses and mirrors. The photoreceptor drum is coated
with a photo-conductor material and is electrically charged with a positive
charge. During the transfer of the image from the document, the whole of the
drum area is exposed to light except the area where the image is formed. Due to
the exposure to light, the photo-conductor material becomes a conductor and
the charge is neutralized, except in the image area. This image is called the latent
image. Subsequently, this image is transferred to paper through toner and
developer. During the transfer of the image, the drum should rotate at a uniform
speed. Any jerk to the photoreceptor drum during rotation will cause distortion
or blur of the latent image. The photoreceptor drum is driven by the drum shaft
and drum gear assembly. An excessive play between the drum shaft and drum gear
gets magnified and produces jerks in the photoreceptor unit. The bad-machine
dimension pattern clearly indicates the possibility of excessive play between the
drum shaft and drum gear assembly. A sketch of the photoreceptor assembly is
shown below.
A lower DOP results in a large gap between the drum shaft and
drum gear, which causes excessive play in the drum shaft. Technically,
excessive play between the drum and drive gear can cause the skip problem.
This theory was further confirmed when this model (X) was compared with
model Y and model Z, where no skip problem was observed. In models Y and
Z, the drum shaft and drive gear are integrated into a single unit. This
probably explains the zero play and absence of the skip defect. The drum and drive gear
assemblies of the three models X, Y, and Z are shown in the figure for comparison.
For further validation of this point, play between the drum shaft and drive
gear was eliminated by temporarily integrating the system using a drop of
Araldite (glue) in 50 problem machines. A test run was performed on all 50
machines and no skip defect was observed. This led to the conclusion that the
drum shaft DOP specifications are not fail-safe against skips. It was now felt
necessary to arrive at new specifications for DOP to ensure no excessive play
between the drum shaft and drive gear. The question arose as to how much play
can be permitted. To find an answer, a similar drive system of the very
successful two-wheeler Lambretta scooter was studied, and it was found that
the play varied between 0.04 and 0.07 mm. To be on the safe side, it was
decided to allow a maximum play of only 0.04 mm between the drum shaft and
drive gear. These drum shafts are manufactured by subcontractors, so new
specifications were reached by taking into consideration the suppliers'
capability of machining these dimensions and a maximum permissible play of
0.04 mm. The old and new specifications for DOP are shown in the figure.
Confirmatory Trial
The implications of the new specifications on other systems of the machine
were examined and it was found that the change in specification would not
create any problems. The 36 worst-affected machines were selected from the
field. Drum shafts with the new specifications were made and then fitted on
these machines. Test results of these machines showed a total elimination of
skip defects.
Ultimately, to give the customer the benefits of the study, 5000 drum shafts
with the new specifications were made and incorporated in 5000 existing
machines with the old design in the field. A sample performance audit of 800
machines (out of those 5000) in the field was carried out and none of these 800
machines indicated skip problems. This provided confidence that the new
design had worked successfully. After that, the new design was implemented
fully by releasing the new specification. The rate of occurrence of the skip
problem in the assembly line dropped from the previous 13% to less than 0.5%.
Beating the Benchmark
Machine specifications released from Rank Xerox (UK) permit the occurrence
of skip up to 10mm from the lead edge. Earlier specifications of Modi Xerox
permitted the occurrence of skip up to 60 mm from the lead edge, but, to most
of the customers, loss of
information near the lead edge is not acceptable, as
the company logos are located near the lead edge of the letterheads. The
exercise was initially taken up to reach the standard of Rank Xerox (skip up to
10mm). Now, the modified design of the drum shaft, evolved through scientific
and systematic investigation, has completely eliminated the skip and hence has
surpassed even the Rank Xerox benchmark of permitting skip up to 10 mm
from the lead edge. This is a great accomplishment towards skip-free copies:
a problem has been completely solved for which no solution was previously
available worldwide.
CHAPTER 4
ORTHOGONAL ARRAYS
4.1 WHAT IS AN ARRAY
An array's name indicates the number of rows and columns it has, and also the
number of levels in each column. Thus the array L4(2^3) has four rows and
three 2-level columns.
4.2 HISTORY OF ORTHOGONAL ARRAYS
Historically, related methods were developed for agriculture, largely in the
UK, around the time of the Second World War; Sir R. A. Fisher was particularly
associated with this work.
Here the field area has been divided up into rows and columns, and four
fertilizers (F1-F4) and four irrigation levels (I1-I4) are represented. Since
all combinations are taken, sixteen cells or plots result.
The Fisher field experiment is a full factorial experiment, since all
4 x 4 = 16 combinations of the experimental factors, fertilizer and irrigation
level, are included.
The number of combinations required may not be feasible or economic. To cut
down on the number of experimental combinations included, a Latin Square
design of experiment may be used. Here there are three fertilisers, three
irrigation levels and three alternative additives (A1-A3), but only nine
instead of the 3 x 3 x 3 = 27 combinations of the full factorial are included.
These are pivotal combinations, however, that still allow the identification
of the best fertiliser, irrigation level and additive, provided that there are
no serious non-additivities or interactions in the relationship between yield
and these control factors. The property of Latin Squares that corresponds to
this separability is that each of the labels A1, A2, A3 appears exactly once
in each row and column.
A difference from agricultural applications is that in agriculture the noise,
or uncontrollable factors, that disturb production also tend to disturb
experimentation, such as the weather. In industry, factors that disturb
production, or are uneconomic to control in production, can and should be
directly manipulated in test.
The 3 x 3 Latin square of additives referred to above is:
A1 A2 A3
A2 A3 A1
A3 A1 A2
Our desire is to identify a
design or line calibration which can best survive the transient effects in the manufacturing
process caused by the uncontrolled factors. We wish to have small piece to piece and time
to time variations associated with this noise variation. To do this we can force diversity on
noise conditions by crossing our orthogonal array of controllable factors by full factorial or
orthogonal array of noise factors.
Thus in the example, we evaluate our product for each of the nine trials
against the background of four different combinations of noise conditions.
We are looking for one of the nine rows of control-factor combinations, or for
one of the missing 72 rows (3 x 3 x 3 x 3 = 81; 81 - 9 = 72), which not only
gives the correct mean result on average, but also minimises variation away
from the mean. To do this Taguchi introduces the signal-to-noise ratio.
4.3 Introduction to Orthogonal Arrays
Engineers and Scientists are often faced with two product or process improvement
situations.
One development situation is to find a parameter that will improve some performance
characteristic to an acceptable and optimum value. This is the most typical situation in
most organizations.
A second situation is to find a less expensive, alternate design, material,
or method which will provide equivalent performance.
When searching for improved or equivalent designs, the experimenter typically
runs some tests, observes some performance of the product and makes a decision
to use or reject the new design.
In order to improve the quality of this decision, proper test strategies are utilized.
Before describing the OAs let us look at some other test strategies:
The most common test plan is to evaluate the effect of one parameter on
product performance. This is typically called a one-factor experiment.
This experiment evaluates the effect of one parameter while holding everything
else constant.
The simplest case of testing the effect of one parameter on performance would be to run a
test at two different conditions of that parameter.
For example: the effect of cutting speed on the finish of a machined part. Two different
cutting speeds could be used and the resultant finish measured to determine which cutting
speed gave better results. If the first level, the first cutting speed, is symbolized by 1 and
the second level by 2, the experimental conditions will look like this:
Trial No. Factor Level Test Results
1 1 * , *
2 2 * , *
The * symbolizes the value of finish that would be obtained.
This sample of two (in this case) could be averaged and compared to the second test.
If there happens to be an interaction of this factor with some other factor then this
interaction cannot be studied.
Several Factors One at a Time
If this doesn't work, the next progression is to evaluate the effect of several parameters on
product performance one at a time. Let us assume the experimenter has looked at four
different factors A, B, C and D each evaluated one at a time.
The resultant test program may appear like the table below:
Factor and Factor Level Test Results
Trial No. A B C D
1 1 1 1 1 * , *
2 2 1 1 1 * , *
3 1 2 1 1 * , *
4 1 1 2 1 * , *
5 1 1 1 2 * , *
One can see that the first trial is the base line condition and results of trial 2 can be
compared to trial 1 to estimate the effect of factor A on product performance.
Similarly results of trial 3 can be compared to trial 1 to estimate the effect of factor B and
so on.
The main limitation of several factors one at a time is that no interaction among the factors
studied can be observed.
Also, the strategy makes limited use of data when evaluating factor effects. Of the ten data
points we had in the above example, only two were used to compare against two others;
and the remaining six data points were temporarily ignored.
If we try to use all the data points, then the experiment will not remain orthogonal. One
main requirement of orthogonality is a balanced experiment which means equal number of
samples under various test conditions. (Equal no. of tests under A1 and A2)
For instance, in the above experiment if all the data under A1 and A2 is averaged and
compared, then this is not a fair comparison of A1 to A2.
Of the four trials under level A1, three were at level B1 and one at level
B2. The single trial under A2 was at level B1.
Therefore, if factor B has an effect on the performance, it will be part of
the observed effect of factor A, and vice versa.
Only when trial 1 is compared to other trials one at a time are the factor effects orthogonal.
Several factors all at the same time
The most desperate and urgent situations find the experimenter evaluating the
effects of several parameters on performance all at the same time.
Here the experimenter hopes that at least one of the changes will improve the
situation sufficiently.
Trial No.
Factor and Factor Level Test Results
A B C D
1 1 1 1 1 * , *
2 2 2 2 2 * , *
This situation makes separation of main factor effects impossible, let alone any interaction
effects.
Some factors may be making a positive effect and some a negative effect, but
we will not get any hint of this information.
4.3.1 Investigating many factors: a case study
In most problems, preliminary brainstorming would reveal a large number of factors which
may influence the output of the process under study.
How are the effects of these factors prioritized?
The traditional approach is to
- Isolate what is believed to be the most important factor
- Investigate this factor by itself, ignoring all others
- Make recommendations on changes to this crucial factor
- Move on to the next factor and repeat
This OFAT (One factor at a time) approach has several critical weaknesses. The factorial
approach in which several factors are studied simultaneously in a balanced manner is
much better. We will try to understand this through an example.
4.3.1.1 Example
A process producing steel springs is generating considerable scrap due to cracking after
heat treatment. A study is planned to determine better operating conditions to reduce the
cracking problem.
There are several ways to measure cracking
- Size of the crack
- Presence or absence of cracks
The response selected was
Y: the percentage without cracks in a batch of 100 springs
Three major factors were believed to affect the response
- T: Steel temperature before quenching
- C: carbon content (percent)
- O: Oil quenching temperature
Levels chosen for the study are:
Factor Low (Level 1) High (Level 2)
T 1450 F 1600 F
C 0.5% 0.7%
O 70 F 120 F
Classical approach: OFAT
Experiment: Four runs at each level of T with C and O at their low levels
Steel Tempt. % springs without cracks Average
1450 61 67 68 66 65.5
1600 79 75 71 77 75.5
Conclusion: increasing T raises the percentage of crack-free springs by 10 points (from 65.5% to 75.5%).
Problem: How general is this conclusion? Does it depend upon
- Quench Temperature?
- Carbon Content?
- Steel chemistry?
- Spring type?
- Analyst
- Etc.??
Carrying out similar OFAT experiments for C and O would require a total of 24
observations and provide limited knowledge.
Factorial Approach:
- Include all factors in a balanced design:
- To increase the generality of the conclusions, use a design that involves all eight
combinations of the three factors.
The treatments for the eight runs are given as under:
Run C T O
1 0.5 1450 70
2 0.7 1450 70
3 0.5 1600 70
4 0.7 1600 70
5 0.5 1450 120
6 0.7 1450 120
7 0.5 1600 120
8 0.7 1600 120
The above eight runs constitute a FULL FACTORIAL DESIGN. The design is balanced
for every factor. This means 4 runs have T at 1450 and 4 have T at 1600. Same is true for
C and O.
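As an illustration only (not part of the original study), the eight balanced runs can be enumerated with a few lines of Python; the factor names and level values below are those of the table above.

from itertools import product

# Levels of the three factors in the spring-cracking study
carbon = [0.5, 0.7]          # C, percent
temperature = [1450, 1600]   # T, degrees F
oil = [70, 120]              # O, degrees F

# All 2^3 = 8 treatment combinations of the full factorial
runs = [{"C": c, "T": t, "O": o} for c, t, o in product(carbon, temperature, oil)]
for i, run in enumerate(runs, start=1):
    print(i, run)

# Balance check: each level of a factor appears in exactly half of the runs
assert sum(r["T"] == 1450 for r in runs) == len(runs) // 2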
IMMEDIATE ADVANTAGES
- The effect of each factor can be assessed by comparing the responses from the
appropriate sets of four runs.
- More general conclusions
- 8 runs rather than 24 runs.
The data for the complete factorial experiment are:
Run C T O Y
1 0.5 1450 70 67
2 0.7 1450 70 61
3 0.5 1600 70 79
4 0.7 1600 70 75
5 0.5 1450 120 59
6 0.7 1450 120 52
7 0.5 1600 120 90
8 0.7 1600 120 87
The main effects of each factor can be estimated by the difference between the average of
the responses at the high level and the average of the responses at the low level.
For example, to calculate the O main effect:
Avg. of responses with O at 70 = (67 + 61 + 79 + 75) / 4 = 70.5
Avg. of responses with O at 120 = (59 + 52 + 90 + 87) / 4 = 72.0
So the main effect of O is 72.0 - 70.5 = 1.5
The apparent conclusion is that changing the oil temperature from 70 to 120 has little
effect.
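A minimal sketch of the same calculation in Python (the data are those of the table above; the helper function is illustrative, not from the report):

# Responses of the eight factorial runs: (C, T, O, Y)
runs = [
    (0.5, 1450, 70, 67), (0.7, 1450, 70, 61), (0.5, 1600, 70, 79), (0.7, 1600, 70, 75),
    (0.5, 1450, 120, 59), (0.7, 1450, 120, 52), (0.5, 1600, 120, 90), (0.7, 1600, 120, 87),
]

def main_effect(index, low, high):
    """Average response at the high level minus average at the low level."""
    avg = lambda lvl: sum(r[3] for r in runs if r[index] == lvl) / 4
    return avg(high) - avg(low)

print(main_effect(2, 70, 120))      # O main effect: 72.0 - 70.5 = 1.5
print(main_effect(1, 1450, 1600))   # T main effect
print(main_effect(0, 0.5, 0.7))     # C main effect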
The use of factorial approach allows examination of two factor interactions. For example
we can estimate the effect of factor O at each level of T.
At T = 1450:
Avg. of responses with O at 70 = (67 + 61) / 2 = 64.0
Avg. of responses with O at 120 = (59 + 52) / 2 = 55.5
So the effect of O is 55.5 - 64.0 = -8.5
At T = 1600:
Avg. of responses with O at 70 = (79 + 75) / 2 = 77.0
Avg. of responses with O at 120 = (90 + 87) / 2 = 88.5
So the effect of O is 88.5 - 77.0 = 11.5
The conclusion is that at T = 1450, increasing O decreases the average response by 8.5
whereas at T = 1600, increasing O increases the average response by 11.5.
That is, O has a strong effect but the nature of the effect depends on the value of T.
This is called interaction between T and O in their effect on the response.
[Plot: O main effect, average Y (70 to 74) versus O (70, 120)]
It is convenient to summarize the four averages corresponding to the four
combinations of T and O in a table:
                     O
T            70       120      Average
1450         64.0     55.5     59.75
1600         77.0     88.5     82.75
Average      70.5     72.0     71.25
When an interaction is present the lines on the plot will not be parallel. When an
interaction is present the effect of the two factors must be considered simultaneously.
The lines are added to the plot only to help with the interpretation. We cannot know that
the response will increase linearly.
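The two-way table itself is easy to compute; the short sketch below (using the same assumed data) prints the four cell averages that expose the T x O interaction:

runs = [
    (0.5, 1450, 70, 67), (0.7, 1450, 70, 61), (0.5, 1600, 70, 79), (0.7, 1600, 70, 75),
    (0.5, 1450, 120, 59), (0.7, 1450, 120, 52), (0.5, 1600, 120, 90), (0.7, 1600, 120, 87),
]

def cell_average(t_level, o_level):
    ys = [y for (c, t, o, y) in runs if t == t_level and o == o_level]
    return sum(ys) / len(ys)

for t in (1450, 1600):
    print(t, [cell_average(t, o) for o in (70, 120)])
# 1450 [64.0, 55.5]  -> effect of O is -8.5 at T = 1450
# 1600 [77.0, 88.5]  -> effect of O is +11.5 at T = 1600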
Two way tables of averages and plots for the other factor pairs are:
                     C
T            0.5      0.7      Average
1450         63.0     56.5     59.75
1600         84.5     81.0     82.75
Average      73.75    68.75    71.25
[Plot: interaction of T and C, response Y (50 to 90) versus T (1450, 1600) with separate lines for C = 0.5 and C = 0.7]
[Plot: interaction of T and O, response Y (50 to 90) versus T (1450, 1600) with separate lines for O = 70 and O = 120]
                     O
C            70       120      Average
0.5          73.0     74.5     73.75
0.7          68.0     69.5     68.75
Average      70.5     72.0     71.25
Conclusions:
- C has little effect
- There is an interaction between T and O.
Recommendations:
- Run the process with T and O at their high levels to produce about 90% crack free
product (further investigation at other levels might produce more improvement).
- Choose the level of C so that the lowest cost is realized.
Comparison with OFAT
On the basis of the observed data we can see that OFAT approach leads to different
conclusions if the factors are considered in the following order:
Fix T = 1450 and C = 0.5 and vary O, conclude O=70 is best
Run C T O Y
1 0.5 1450 70 67
2 0.7 1450 70 61
3 0.5 1600 70 79
4 0.7 1600 70 75
5 0.5 1450 120 59
6 0.7 1450 120 52
7 0.5 1600 120 90
8 0.7 1600 120 87
Fix O = 70 and C = 0.5 and vary T, conclude T = 1600 is best
Run C T O Y
1 0.5 1450 70 67
2 0.7 1450 70 61
3 0.5 1600 70 79
4 0.7 1600 70 75
5 0.5 1450 120 59
6 0.7 1450 120 52
7 0.5 1600 120 90
8 0.7 1600 120 87
Fix O = 70 and T = 1600 and vary C, conclude C = 0.5 is best
Run C T O Y
1 0.5 1450 70 67
2 0.7 1450 70 61
3 0.5 1600 70 79
4 0.7 1600 70 75
5 0.5 1450 120 59
6 0.7 1450 120 52
7 0.5 1600 120 90
8 0.7 1600 120 87
This approach will incorrectly conclude that
T = 1600
C = 0.5
O = 70
Is the best
Whereas the factorial approach concluded that T and O should be at their high levels and C
has no effect.
Looking at the above experimental situations, it will be pertinent to answer a few questions
here:
How can poor utilization of data and non-orthogonal situations be avoided?
How can interactions be estimated and still have an orthogonal experiment?
The use of full factorial experiments is one possibility!
And, the use of some orthogonal arrays is another.
Better Test Strategies
Let us recall the example we had discussed in the two-way ANOVA discussion. A
full factorial experiment is as shown:
Trial No.
Factor and Factor Level
Hardness data (RB)
A B
1 1 1 76 78
2 1 2 77 78
3 2 1 73 74
4 2 2 79 80
The above is a full factorial experiment and is orthogonal. Note that under level A1, factor
B has two data points under B1 condition and two under B2 condition. The same is true
under level A2.
The same balanced situation is true when looking at the experiment with
respect to the two conditions B1 and B2. Because of the balanced arrangement,
factor A does not influence the estimate of the effect of factor B and vice
versa.
All possible combinations of factors A and B at both their levels are
represented in the test matrix. Using this information, both factor and
interaction effects can be estimated.
The perfect experimental design is a full factorial, with replications, that is conducted in a
random manner.
Unfortunately, this type of experimental design may make the number of experimental
runs prohibitive, especially if the experiment is conducted on production equipment with
lengthy setup times.
The number of treatment conditions (TC) for a full factorial experiment is
determined by
TC = l^f
where TC is the number of treatment conditions, l is the number of levels and
f is the number of factors.
Thus for a two-level design: 2^2 = 4, 2^3 = 8, ..., 2^5 = 32, ...
and for a three-level design: 3^2 = 9, 3^3 = 27, 3^4 = 81, ...
If each treatment condition is replicated only once, the number of experimental runs is
doubled. Thus, for a three-level design with five factors and one replicate, the number of
runs is 486.
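A one-line check of these counts (illustrative only):

def treatment_conditions(levels: int, factors: int) -> int:
    """TC = l**f, the number of runs in a full factorial."""
    return levels ** factors

print(treatment_conditions(2, 5))       # 32 for five two-level factors
print(treatment_conditions(3, 5))       # 243 for five three-level factors
print(2 * treatment_conditions(3, 5))   # 486 runs with one replicate of each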
Table below shows a three factor full factorial design.
The design space is composed of seven columns with 1 or 2, and the design matrix is
composed of the three individual factor columns A, B, and C.
The design matrix tells us how to run the Treatment Conditions.
Treatment                        Factors                        Response
Condition     A    B    C    AB    AC    BC    ABC
1 1 1 1 2 2 2 1 *
2 2 1 1 1 1 2 2 *
3 1 2 1 1 2 1 2 *
4 2 2 1 2 1 1 1 *
5 1 1 2 2 1 1 2 *
6 2 1 2 1 2 1 1 *
7 1 2 2 1 1 2 1 *
8 2 2 2 2 2 2 2 *
Three factor interactions with a significant effect on the process are rare, and some two-
factor interactions will not occur or can be eliminated by using engineering experience and
sound judgment.
If our engineering judgment shows that there was no three-factor interaction (AxBxC), we
could place a factor D in that column and make it part of the design matrix.
Of course, we would need to have a high degree of confidence that factor D does not
interact with other columns. Similarly, we could place a factor E in column headed BxC if
we thought there was no BxC interaction. This approach keeps the number of runs the
same and adds factors.
Please note that a full factorial experiment is practical only if there are
few factors to be investigated; otherwise the matrix becomes too large.
Typically, most engineering problems will involve five or more factors
affecting performance (at least initially). For a seven-factor experiment with
each factor at two levels, 2^7 = 128 experiments need to be conducted.
However, usual time and financial limitations preclude the use of full factorial
experiments.
How can an engineer efficiently (economically) investigate these design factors?
4.3.2 A Full Factorial Experiment
An actual example of an experiment used in an engine plant to investigate the
problem of water pump leaks involved seven factors:
Factor Level 1 Level 2
Front cover design Production New
Gasket Design Production New
Front bolt torque Low High
Gasket Coating Yes No
Pump Housing Finish Rough Smooth
Rear bolt torque Low High
Torque pattern Front rear Rear front
If a full factorial is to be used in this situation, 2^7 = 128 runs will have
to be conducted (as shown in the figure below).
[Figure: tree diagram of all 2^7 = 128 combinations of factors A to G, each at levels 1 and 2]
Efficient Test Strategies
Statisticians have developed more efficient test plans which are called fractional factorial
experiments (FFEs)
FFEs use only a portion of the total combinations to estimate the main factor effects and
some, not all, of the interactions.
Certain treatment conditions are chosen to maintain the orthogonality among the various
factors and interactions.
It is obvious that a 1/8th FFE with only 16 test combinations, or a 1/16th
FFE with only 8 combinations, is much more appealing to the experimenter from
a time and cost standpoint.
Taguchi has developed a family of FFE matrices which can be utilized in various
situations. In this situation a possible matrix is an eight trial OA which is labeled as L8
matrix.
L8 OA matrix
Trial No. Column No.
1 2 3 4 5 6 7
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2
This is a 1/16th FFE which has only 8 of the possible 128 combinations represented.
One can observe that there are 7 columns in this array which may have a factor assigned to
each column.
The eight trials provide 7 dofs for the entire experiment, allocated to 7
two-level columns, each column with one dof.
The array allows all the error dofs to be traded for factor dofs and provides
the particular test combinations that accommodate that approach.
When all columns are assigned a factor, this is known as a saturated design.
The levels of factors are designated by 1s and 2s.
It can be seen that each column provides 4 tests under level 1 and 4 tests
under level 2. This is the feature that provides orthogonality to the
experiment.
This is the real power of the OA, i.e. the ability to evaluate several factors
in a minimum of tests. This is called an efficient experiment since much
information is obtained from few trials.
The assignment of factors to a saturated design FFE is not difficult; all columns are
assigned a factor.
However, the experiments which are not fully saturated (when all columns cannot be
assigned factors) may be more complicated to design.
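The balance property can be verified directly; the sketch below (not from the report) checks that every column of the L8 shown above has four 1s and four 2s, and that every pair of columns contains each of the four level combinations exactly twice, which is the formal orthogonality condition.

from itertools import combinations

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

for col in range(7):
    assert sum(row[col] == 1 for row in L8) == 4   # each column is balanced

for c1, c2 in combinations(range(7), 2):
    pairs = [(row[c1], row[c2]) for row in L8]
    for combo in [(1, 1), (1, 2), (2, 1), (2, 2)]:
        assert pairs.count(combo) == 2             # pairwise orthogonality
print("L8 is balanced and orthogonal")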
4.4 Steps in Designing, Conducting and Analyzing an Experiment
The major initial steps are:
1. Selection of factors and/or interactions to be evaluated
2. Selection of number of levels for the factors
3. Selection of the appropriate OA
4. Assignment of factors and/or interactions to columns
5. Conduct tests
6. Analyze results
7. Confirmation experiment
Steps 1 to 4 concern the actual design of the experiment.
Let us try to understand each step.
4.4.1 Selection of factors and/or interactions to evaluate
Several methods are useful for determining which factors to include in initial experiments.
These are
1. Brainstorming
2. Flowcharting
3. Cause and Effect diagrams
4.4.2 Selection of Number of Levels
The initial round of experiments should involve many factors at few levels;
two levels are recommended to minimize the size of the initial experiment.
This is because the dof for a factor is the number of levels minus one;
increasing the number of levels increases the total dof in the experiment,
which is a direct function of the total number of tests to be conducted.
The initial round of experimentation will eliminate many trivial factors from contention
and a few remaining can then be investigated with multiple levels without causing an
undue inflation in the size of the experiments.
4.4.3 Selection of OA
Degrees of Freedom:
The selection of which OA to use depends upon:
1. The number of factors and interactions of interest
2. The number of levels for the factors of interest
Recalling the ANOVA analysis:
The dof for a factor (say factor A) is v_A = k_A - 1, where k_A is the number
of levels of A.
The dof for an interaction (say A x B) is v_AxB = (v_A)(v_B).
The minimum required dof in the experiment is the sum of all the factor and
interaction dofs.
Orthogonal Arrays
Two basic kinds of arrays are available:
- Two-level arrays: L4, L8, L12, L16, L32
- Three-level arrays: L9, L18, L27
The number in the array name designates the number of trials in the array. The
total dof available in an array LN is v_LN = N - 1, where N is the number of
trials.
When an array is selected, the following inequality must be satisfied:
v_LN >= v required for factors and interactions
Depending upon the number of levels in a factor, a 2-level or a 3-level OA can
be selected.
If some factors are two-level and some three-level, then whichever is
predominant should indicate which kind of OA is selected.
Once the decision is made about the right kind of OA, the number of trials for
that array must provide an adequate total dof. When the required dof falls
between the dofs provided by two OAs, the next larger OA must be chosen.
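A small illustrative helper (the factor names are hypothetical and the list of candidate arrays is limited to the two-level family named above) shows how the required dof leads to the choice of array:

def required_dof(factor_levels, interactions):
    # One dof fewer than the number of levels per factor, product of dofs per interaction
    dof = sum(k - 1 for k in factor_levels.values())
    dof += sum((factor_levels[a] - 1) * (factor_levels[b] - 1) for a, b in interactions)
    return dof

def select_two_level_array(dof_needed, candidates=(4, 8, 12, 16, 32)):
    for n in candidates:
        if n - 1 >= dof_needed:          # an LN array provides N - 1 dofs
            return f"L{n}"
    raise ValueError("no standard array in the list is large enough")

# Hypothetical study: five two-level factors plus the CxD and CxE interactions
levels = {"A": 2, "B": 2, "C": 2, "D": 2, "E": 2}
inter = [("C", "D"), ("C", "E")]
dof = required_dof(levels, inter)
print(dof, select_two_level_array(dof))   # 7 dofs -> L8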
4.4.4 Assignment of Factors and Interactions
Before getting into the details of using some method of assignment of factors and
interactions, a demonstration of a mathematical property of OAs is in order.
Demonstration of Interaction Columns
The simplest OA is an L4 which has an arrangement as shown:
Trial No.
Column No.
First Second Third
1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1
Recall the two-way ANOVA problem:
Let us consider an experimental situation. A student worked at an aluminum casting
foundry which manufactured pistons for reciprocating engines. The problem with the
process was how to attain the proper hardness (Rockwell B) of the casting for a particular
product.
Engineers were interested in the effect of Cu and Mg content on casting hardness.
According to specifications the copper content could be 3.5 to 4.5% and the magnesium
content could be 1.2 to 1.8%.
The student runs an experiment to evaluate these factors and conditions simultaneously.
If A = % copper content: A1 = 3.5, A2 = 4.5
If B = % magnesium content: B1 = 1.2, B2 = 1.8
The experimental conditions for two factors at two levels are given by
2^f = 2^2 = 4, which are: A1B1, A1B2, A2B1, A2B2.
Imagine four different mixes of metal constituents are prepared, castings
poured and hardness measured. Two samples are measured from each mix for
hardness. The results will look like:
             A1        A2
B1           76, 78    73, 74
B2           77, 78    79, 80
To simplify the discussion, 70 points are subtracted from each value in the
above observations from each of the four mixes. The transformed results can be
shown as:
             A1        A2
B1           6, 8      3, 4
B2           7, 8      9, 10
Let us adapt this problem to a L4 OA
Factor A can be assigned to column 1 and Factor B can be assigned to column 2. The
entire experiment can be shown in a L4 OA as under:
             Column No.
Trial No.    1 (Factor A)   2 (Factor B)   3        y data (Rb - 70)
1 1 1 1 6, 8
2 1 2 2 7, 8
3 2 1 2 3, 4
4 2 2 1 9, 10
ANOVA for L4 OA
The ANOVA for an OA is conducted by calculating the sum of squares for each
column. The formula for SS_A is the same as used in the two-way ANOVA:
SS_A = (A1 - A2)^2 / N
A1 = 6 + 8 + 7 + 8 = 29
A2 = 3 + 4 + 9 + 10 = 26
SS_A = (29 - 26)^2 / 8 = 1.125
The sum of squares for factor B, column 2, is
SS_B = (B1 - B2)^2 / N = (21 - 34)^2 / 8 = 21.125
Note that the sums of squares for factors A and B are identical to the two-way
ANOVA.
The sum of squares of column 3 is
SS_3 = (3_1 - 3_2)^2 / N = (33 - 22)^2 / 8 = 15.125
where 3_1 and 3_2 are the totals of the results under levels 1 and 2 of
column 3.
Note that the value of SS_3 is equal to SS_AxB.
This is not coincidental but is a mathematical property of OAs.
The calculation is a demonstration that the third column represents the
interaction of the factors assigned to columns 1 and 2.
This particular L4 example is similar to the two-way ANOVA example and has similar test
results. Thus L4 with two factors assigned to it is equivalent to a full factorial experiment
and the ANOVA is equivalent to a two-way ANOVA because certain columns represent
the interaction of two other columns.
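A minimal sketch of the column sum-of-squares calculation for this L4 (data as above): SS_col = (T1 - T2)^2 / N, where T1 and T2 are the totals of all observations under levels 1 and 2 of the column and N = 8.

L4 = [
    ([1, 1, 1], [6, 8]),    # trial 1: column levels, results (Rb - 70)
    ([1, 2, 2], [7, 8]),
    ([2, 1, 2], [3, 4]),
    ([2, 2, 1], [9, 10]),
]
N = sum(len(y) for _, y in L4)

def column_ss(col):
    t1 = sum(sum(y) for levels, y in L4 if levels[col] == 1)
    t2 = sum(sum(y) for levels, y in L4 if levels[col] == 2)
    return (t1 - t2) ** 2 / N

print(column_ss(0), column_ss(1), column_ss(2))   # 1.125, 21.125, 15.125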
4.4.5 Conducting the experiment
Once the factors are assigned to a particular column of an OA, the test strategy has been
set and physical preparation for performing the experiment can begin.
Some decisions need to be made regarding the order of testing the various trials. Factors
are assigned to columns, trial test conditions are dictated by rows of the OA.
For an L8 OA with the factor assignment shown below, one can observe that
trial 6 requires the test conditions A2, B1, C2, D1.
Factors       A   B   AxB   C   AxC   BxC   D
Column No.    1   2   3     4   5     6     7
Trial #
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2
The interaction conditions cannot be controlled when conducting a test because they are
dependent upon main factor levels. Only the analysis is concerned with the interaction
columns.
Therefore, it is recommended that test sheets be made up which show only the main factor
levels required for each trial. This will minimize mistakes in conducting the experiment
which may inadvertently destroy the orthogonality.
4.4.6 Analysis of Experimental Results
The simple example of casting discussed for two-way ANOVA is intended to
demonstrate another basic property of OAs, which is that the total variation can be
accounted for by summing the variation for all columns.
Let us try to put the data from that two-way ANOVA example into an L8 OA.
Factor A is assigned to column 1 and B to column 2.
The first two trials of the OA represent the A1B1 condition, which has
corresponding results of 6 and 8 in the example. Similarly, trials 3 and 4
represent the A1B2 condition, which has results of 7 and 8. The complete OA is
as shown below:
Factors and Interactions:   A   B   AxB
Column No.                  1   2   3    4   5   6   7
Trial #                                              Y data (Rb - 70)
1 1 1 1 1 1 1 1 6
2 1 1 1 2 2 2 2 8
3 1 2 2 1 1 2 2 7
4 1 2 2 2 2 1 1 8
5 2 1 2 1 2 1 2 3
6 2 1 2 2 1 2 1 4
7 2 2 1 1 2 2 1 9
8 2 2 1 2 1 1 2 10
ANOVA of the Taguchi L8 OA
A1 = 6 + 8 + 7 + 8 = 29
A2 = 3 + 4 + 9 + 10 = 26
SS_A = (A1 - A2)^2 / N = (29 - 26)^2 / 8 = 1.125
The sum of squares for factor B is
SS_B = (B1 - B2)^2 / N = (21 - 34)^2 / 8 = 21.125
The sum of squares of the column A x B is
SS_AxB = (33 - 22)^2 / 8 = 15.125
Note that the SS for factor A, factor B and interaction A x B are the same as
in the two-way ANOVA calculated earlier.
Continuing with the sum of squares calculations for the other columns:
SS_4 = (4_1 - 4_2)^2 / N = (25 - 30)^2 / 8 = 3.125
SS_5 = (5_1 - 5_2)^2 / N = (27 - 28)^2 / 8 = 0.125
SS_6 = (6_1 - 6_2)^2 / N = (27 - 28)^2 / 8 = 0.125
SS_7 = (7_1 - 7_2)^2 / N = (27 - 28)^2 / 8 = 0.125
SS_e = SS_4 + SS_5 + SS_6 + SS_7 = 3.500
The total sum of squares for the unassigned columns is equal to the SS_e
calculated in the two-way ANOVA example.
Thus the unassigned columns in an OA represent an estimate of error variation.
Here the particular array selected for the experiment changes the analysis
approach slightly: the L4 has two data points per trial and the L8 has one
data point per trial.
L4 OA for the same experiment:
             Column No.
Trial No.    1 (Factor A)   2 (Factor B)   3        y data (Rb - 70)
1 1 1 1 6, 8
2 1 2 2 7, 8
3 2 1 2 3, 4
4 2 2 1 9, 10
The error variance of the L4 must come from the repetitions in each trial, but the error
variance in L8 must come from the columns since there are no repetitions within trials.
Note that:
SS_T = the sum of the SS for all columns
This is a demonstration of the property of the total sum of squares being contained
within the columns of an OA.
Column estimate of error variance
In the previous example, the unassigned columns were shown to estimate error variance
when there was only one test result per trial.
This approach of using columns to estimate the error variance will be used even if all
the columns have factors assigned to them. Some factors assigned to an experiment will
not be significant at all, even though thought to be before experimentation.
This is equivalent to saying the color of the car can affect fuel economy and assigning
two different colors to a column (2 levels). This column will have small sum of squares
because it will be estimating error variance rather than any true color effect.
When a column effect turns out to be small in an OA, it means any one of the
following:
- No factor or interaction is assigned to the column, as in the L8 example discussed above
- The assigned factor and/or interaction effect is not significant or very small
- Factor and/or interaction effects cancel each other
A fully saturated OA will depend upon some column effects turning out small relative
to others and using the small ones as estimate of error variance.
Pooling estimates of error variance
In the above example, there were four unassigned columns having four degrees
of freedom, one for each column, which provide estimates of error variance.
A better estimate is obtained by combining all four column effects into one
overall estimate of error variance with 4 dofs.
The combining of column effects to better estimate variance is referred to as pooling.
The pooling-up strategy entails F-testing the smallest column effect against
the next larger one to see if significance exists.
If no significance exists, then these two effects are pooled to test the next larger column
effect until some significant F-ratio exists.
The ANOVA table for the experimental problem discussed above will appear like this:
Source SS Dof V F
A 1.125 1 1.125
B 21.125 1 21.125 22.83
AxB 15.125 1 15.125 16.35
Col 4 3.125 1 3.125
Col 5 0.125 1 0.125
Col 6 0.125 1 0.125
Col 7 0.125 1 0.125
T 40.875 7
Error pooled 4.625 5 0.925
In this situation five smaller column effects have been pooled to form an estimate of
error variance with 5 dof associated with that estimate.
As a rule of thumb, pooling up to half of the dofs is advisable. Here that rule was
exceeded slightly because two of the column effects were substantially larger than the
others. The ANOVA summary table could be rewritten to recognize the pooling as
shown in Table below;
Source SS Dof V F
B 21.125 1 21.125 22.83
AxB 15.125 1 15.125 16.35
Error pooled 4.625 5 0.925
T 40.875 7
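The pooling arithmetic can be sketched as follows (the column sums of squares are those of the analysis above; the choice of which effects to pool is the analyst's judgment):

ss = {"A": 1.125, "B": 21.125, "AxB": 15.125,
      "col4": 3.125, "col5": 0.125, "col6": 0.125, "col7": 0.125}

pooled = ["A", "col4", "col5", "col6", "col7"]      # effects judged small
ss_error = sum(ss[name] for name in pooled)
dof_error = len(pooled)                             # one dof per 2-level column
v_error = ss_error / dof_error                      # 4.625 / 5 = 0.925

for name in ("B", "AxB"):
    v = ss[name] / 1                                # one dof per column
    print(name, round(v / v_error, 2))              # F of about 22.8 and 16.4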
4.4.7 Confirmation experiment
The confirmation experiment is the final step in verifying the conclusions from the
previous round of experimentation. The optimum conditions are set for the significant
factors and levels and several tests are made under constant conditions.
The average of the confirmation experiment results is compared to the anticipated
average based on the factors and levels tested.
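The report does not give the prediction formula; a common additive form, sketched below with the spring-experiment numbers purely for illustration, adds each significant factor's gain (its level average minus the grand mean) to the grand mean. When an interaction is significant, its gain must be added as a further term.

grand_mean = 71.25                      # grand average of the spring data
gains = {
    "T at 1600": 82.75 - 71.25,         # level average minus grand mean
    "O at 120": 72.00 - 71.25,
}
predicted = grand_mean + sum(gains.values())
print(round(predicted, 2))              # compare with the confirmation-run average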
4.5 EXAMPLE EXPERIMENTAL PROCEDURE
Popcorn experiment
This is an example to make you walk through the process of designing experiments.
The scenario is to develop process (cooking) specifications to go on a bag of popcorn.
The owner of the company has developed a new hybrid seed which may or may not use
the same cooking process recommended for the current seed.
One of the processes used by the customer is the hot oil method, which is addressed in
this situation.
Statement of problem and objective of experiment
To find the process factors which influence popcorn quality characteristics
relative to the customers' requirements. Characteristics such as
- the number of unpopped kernels in a batch,
- the fluffiness or volume of the popped corn,
- the color,
- the taste, and
- the crispiness
are typically considered.
The objective of the experiment is to find the process conditions which optimize the
various quality characteristics to provide improved popping, fluffiness, color, taste and
texture.
Measurement Methods
The number of un-popped kernels in a batch can be easily measured, provided
an equal number of uncooked kernels is used in each batch.
The fluffiness or volume can be quantified by placing the popped kernels in a
measuring cup, which again assumes that an equal number of uncooked kernels
was used in each batch.
The performance of color, taste and texture are somewhat more abstract. These may be
addressed by assigning a numerical color rating, taste rating, and texture rating to each
batch.
Quality Improvement: Problem Solving
[Flowchart of the popcorn cooking process: Heat Oil, Add Corn (Preheat), Agitation, Venting, Popped Corn, Inspect]
Popcorn factors and levels
Factor Level 1 Level 2
A Type of Oil Corn Oil Peanut Oil
B Amount of Oil Low High
C Amount of heat Medium High
D Preheat No Yes
E Agitation No Yes
F Venting No Yes
G Pan Material Aluminum Steel
H Pan Shape Shallow Deep
[Cause-and-effect diagram for the popcorn experiment: branches for Oil (type, amount), Method (preheat, agitation, venting), Pan (material, shape) and Heat (amount), leading to high-quality popcorn]
Assignment of factors to columns L16 OA
The factor list is small enough to fit into an L16 OA at a resolution 2 if 2 levels are
used for each factor.
Using the Taguchi L16 OA, the assignment of factors to the columns can be
done by referring to table D2 of the appendix.
Factors A to H are assigned to columns 1, 2, 4, 7, 8, 11, 13 and 14.
The trial data sheets can then be generated from the factor column assignment.
Popcorn factors
A B C D E F G H
Column Number
Trial 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
3 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
4 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1
5 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
6 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1
7 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1
8 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2
9 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
10 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
11 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1
12 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2
13 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1
14 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2
15 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2
16 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1
Trial # 5 data sheet will look like:
Corn Oil
High amount of oil
Medium Heat
Preheat before adding popcorn
No agitation during popping
Vented during popping
Aluminum Pan
Deep Pan shape
Conducting the experiment:
Order may be completely randomized with one test per trial. A batch of 200 seeds could be
made for each trial with the specified test conditions.
For each trial, un-popped kernels, fluffiness, color, taste and texture would be noted.
Popcorn experiment interpretation
ANOVA will be used to analyze each performance characteristic separately to determine
which factors and levels gave the best result.
Problems:
1. Assign factors A,B,C,D and E as well as interactions CxD and CxE to an OA if
all factors are using two levels.
2. Assign these factors and interactions to an OA
A,B,C,D, and E (Two levels)
AxB BxC CxE
AxC BxD DxE
AxD BxE
BxF
Answer to Problem 1:
Several possibilities exist
L8 (Resolution 1)
Column # 1 2 3 4 5 6 7
Option 1 C D CxD E CxE A B
Option 2 D E A C CxD CxE B
Option 3 A B C D E CxE CxD
L16 (Resolution 3)
Column # 1 2 4 8 11 12 15
A B C D CxE CxD E
4.6 Standard Orthogonal Array
CHAPTER 5
ROBUST DESIGNING
5.1 WHAT IS ROBUSTNESS
Robust products work well, close to ideal customer satisfaction, even when
produced in real factories and used by real customers under real conditions of
use. All products look good when they are precisely made in a model shop and
are tested under carefully controlled laboratory conditions. Only robust
products provide consistent customer satisfaction.
Robustness also greatly shortens the development time by eliminating much of the rework
that is known as build, test, and fix.
Robustness is small variation in performance. For example, Sam and John go to the target
range, and each shoots an initial round of 10 shots. Sam has his shots in a tight cluster,
which lies outside the bulls-eye.
John actually has one shot in the bulls-eye, but his success results only
from his hit-or-miss pattern. In this initial round John has one more
bulls-eye than Sam, but Sam is the
robust shooter. By a simple adjustment of his sights, Sam will move his tight cluster into
the bulls-eye for the next round. John faces a much more difficult task. He must improve
his control altogether, systematically optimizing his arm position, the tension of his sling,
and other critical parameters.
Several facts about this example reveal important characteristics of robustness:
(1) The application of the ultimate performance metrics to initial performance is often
misleading; Sam had no bulls-eyes even though he is an excellent marksman.
(2) Adjustment to the target is usually a simple secondary step.
(3) Reduction of variation is the difficult step.
(4) A metric is needed that recognizes that Sam is a good marksman and that
measures his expected performance after he adjusts his sights to the target.
Automobiles give further insight into robustness. Customers do not want a car that is a
lemon. They want one that is robust against production variations. A lemon is a car that
has excessive production variations that cause great customer dissatisfaction. To overcome
this, the production processes have to be more robust so that they produce less variation,
and the car design has to be more robust so that its performance is less sensitive to
production variations. The customers also want a car that will start readily in northern
Canada in the winter and will not overheat in southern Arizona during the summer; that is,
they want a car that is robust with respect to the variations of customer use conditions.
Customers would also prefer cars that are as good at 50,000 miles as when new, that are
robust against time and wear.
This example reveals the three sources of undesirable variation (also called noises) in
products:
(1) Variations in conditions of use
(2) Production variations
(3) Deterioration (variation with time and use)
5.2 The Robustness Strategy uses five primary tools:
1. P-Diagram is used to classify the variables associated with the product into noise,
control, signal (input), and response (output) factors.
2. Ideal Function is used to mathematically specify the ideal form of the signal-response
relationship as embodied by the design concept for making the higher-level system work
perfectly.
3. Quadratic Loss Function (also known as Quality Loss Function) is used to quantify the
loss incurred by the user due to deviation from target performance.
4. Signal-to-Noise Ratio is used for predicting the field quality through laboratory
experiments.
5. Orthogonal Arrays are used for gathering dependable information about control factors
(design parameters) with a small number of experiments.
5.2.1 P-Diagram
P-Diagram is a must for every development project. It is a way of succinctly defining the
development scope. First we identify the signal (input) and response (output) associated with
the design concept. For example, in designing the cooling system for a room the thermostat
setting is the signal and the resulting room temperature is the response.
Next consider the parameters/factors that are beyond the control of the designer. Those factors
are called noise factors. Outside temperature, opening/closing of windows, and number of
occupants are examples of noise factors. Parameters that can be specified by the designer are
called control factors. The number of registers, their locations, size of the air conditioning unit,
insulation are examples of control factors.
Ideally, the resulting room temperature should be equal to the set point temperature. Thus the
ideal function here is a straight line of slope one in the signal-response graph. This relationship
must hold for all operating conditions. However, the noise factors cause the relationship to
deviate from the ideal.
The job of the designer is to select appropriate control factors and their settings so that the
deviation from the ideal is minimum at a low cost. Such a design is called a minimum sensitivity
design or a robust design. It can be achieved by exploiting nonlinearity of the products/systems.
The Robust Design method prescribes a systematic procedure for minimizing design sensitivity
and it is called Parameter Design.
An overwhelming majority of product failures and the resulting field costs and design iterations
come from ignoring noise factors during the early design stages. The noise factors crop up one
by one as surprises in the subsequent product delivery stages causing costly failures and band-
aids. These problems are avoided in the Robust Design method by subjecting the design ideas
to noise factors through parameter design.
The next step is to specify allowed deviation of the parameters from the nominal values. It
involves balancing the added cost of tighter tolerances against the benefits to the customer.
Similar decisions must be made regarding the selection of different grades of the subsystems
and components from available alternatives. The quadratic loss function is very useful for
quantifying the impact of these decisions on customers or higher-level systems. The process of
balancing the cost is called Tolerance Design.
The result of using parameter design followed by tolerance design is successful products at low
cost.
5.2.2 Quality Measurement
In quality improvement and design optimization the metric plays a crucial role. Unfortunately, a
single metric does not serve all stages of product delivery.
It is common to use the fraction of products outside the specified limits as the measure of
quality. Though it is a good measure of the loss due to scrap, it miserably fails as a predictor of
customer satisfaction. The quality loss function serves that purpose very well.
Let us define the following variables:
m: target value for a critical product characteristic
±Δ0: allowed deviation from the target
A0: loss due to a defective product
Then the quality loss, L, suffered by an average customer due to a product
with y as the value of the characteristic is given by the following equation:
L = k (y - m)^2, where k = A0 / Δ0^2
If the output of the factory has a distribution of the critical characteristic
with mean μ and variance σ^2, then the average quality loss per unit of
product is given by:
Q = k { (μ - m)^2 + σ^2 }
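A minimal sketch of these two formulas (the numerical values are hypothetical):

def loss_coefficient(a0: float, delta0: float) -> float:
    """k = A0 / delta0^2."""
    return a0 / delta0 ** 2

def quality_loss(y: float, m: float, k: float) -> float:
    """L = k (y - m)^2 for a single unit with characteristic value y."""
    return k * (y - m) ** 2

def average_loss(mu: float, sigma: float, m: float, k: float) -> float:
    """Q = k ((mu - m)^2 + sigma^2), the average loss per unit shipped."""
    return k * ((mu - m) ** 2 + sigma ** 2)

# Hypothetical numbers: target 10.0, tolerance +/- 0.5, cost of a defective 50
k = loss_coefficient(a0=50.0, delta0=0.5)
print(quality_loss(10.2, 10.0, k))          # loss for one unit at y = 10.2
print(average_loss(10.1, 0.15, 10.0, k))    # average loss per unit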
5.2.3 Signal To Noise (S/N) Ratios
The product/process/system design phase involves deciding the best values/levels for the
control factors. The signal to noise (S/N) ratio is an ideal metric for that purpose.
The equation for average quality loss, Q, says that the customer's average quality loss depends
on the deviation of the mean from the target and also on the variance. An important class of
design optimization problem requires minimization of the variance while keeping the mean on
target.
Between the mean and standard deviation, it is typically easy to adjust the
mean on target, but reducing the variance is difficult. Therefore, the
designer should minimize the variance first and then adjust the mean on
target. Among the available control factors, most should be used to reduce
variance. Only one or two control factors are adequate for adjusting the mean
on target.
The design optimization problem can be solved in two steps:
1. Maximize the S/N ratio, η, defined as
η = 10 log10 ( μ^2 / σ^2 )
This is the step of variance reduction.
2. Adjust the mean on target using a control factor that has no effect on η.
Such a factor is called a scaling factor. This is the step of adjusting the
mean on target.
One typically looks for one scaling factor to adjust the mean on target during design and another
for adjusting the mean to compensate for process variation during manufacturing.
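A minimal sketch of this nominal-the-best S/N ratio for a set of repeated measurements (the data are hypothetical):

import math

def sn_nominal_the_best(observations):
    """eta = 10 log10(mean^2 / variance); larger is better."""
    n = len(observations)
    mean = sum(observations) / n
    var = sum((y - mean) ** 2 for y in observations) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

print(sn_nominal_the_best([9.8, 10.1, 10.0, 10.3, 9.9]))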
5.3. Steps in Robust Parameter Design
Robust Parameter design has 4 main steps:
1. Problem Formulation:
This step consists of identifying the main function, developing the P-diagram, defining the ideal
function and S/N ratio, and planning the experiments. The experiments involve changing the
control, noise and signal factors systematically using orthogonal arrays.
2. Data Collection/Simulation:
The experiments may be conducted in hardware or through simulation. It is not necessary to
have a full-scale model of the product for the purpose of experimentation. It is sufficient and
more desirable to have an essential model of the product that adequately captures the design
concept. Thus, the experiments can be done more economically.
3. Factor Effects Analysis:
The effects of the control factors are calculated in this step and the results are analyzed to
select optimum setting of the control factors.
4. Prediction/Confirmation:
In order to validate the optimum conditions we predict the performance of the product design
under baseline and optimum settings of the control factors. Then we perform confirmation
experiments under these conditions and compare the results with the predictions. If the results
of confirmation experiments agree with the predictions, then we implement the results.
Otherwise, the above steps must be iterated.
5.4 NOISE FACTORS
There are two main aspects to the Taguchi technique:
First, the behavior of a product or process is characterized in terms of
factors (parameters) that are separated into two types:
1. Controllable (or design) factors- Those whose value may be set or easily adjusted by
the designer or process engineer.
2. Uncontrollable (or noise) factors- Which are sources of variation often associated with
the production or operational environment; overall performance should, ideally, be
insensitive to their variation.
Second, the controllable factors are divided into:
1. Those which affect the average level of the response of interest, referred
to as target control factors (TCF), sometimes called signal factors.
2. Those which affect the variability in the response, the variability control
factors (VCF).
3. Those which affect neither the mean response nor the variability, and can
thus be adjusted to fit economic requirements, called the cost factors.
Taguchi's definitions of noise:
Noise: the variables/factors causing variation which are impossible or
difficult to control.
Outer noise: operating conditions and environment.
Inner noise: deterioration and manufacturing imperfections.
Purpose: to make the product/process robust against, that is insensitive to
the effect of, the noise factors (NFs).
Procedure:
a. Find the VCFs and their settings which minimize variability.
b. Find the TCFs and their settings which bring the mean response on to
target.
[Diagram: factors (parameters, variables) are classified into controllable
factors (CF), comprising variability control factors (VCF), target control
factors (TCF) and cost control factors, and noise factors (NF, uncontrollable),
comprising external noise and internal noise.]
It is this concentration on variability which distinguishes the
Taguchi approach from traditional tolerance methods or inspection-based quality control.
The idea is to reduce variability by changing the variability control factors, while
maintaining the required average performance through adjustments to the target control
factors.
5.5 OFF-LINE and ON-LINE Quality Control
Western books on quality frequently divide quality systems into two parts:
Quality of design
Quality of conformance
Dr Taguchi refers to these two parts as off-line quality control and on-line quality control,
respectively.
5.5.1 Off-line quality control: It is concerned with:
1. Correctly identifying customer needs and expectations,
2. Designing a product which will meet customer expectation,
3. Designing a product which can be consistently and economically manufactured,
4. Developing clear and adequate specifications, standards, procedures and equipment
for manufacture.
There are two stages in off-line quality control:
Product design stage
Process design stage
During the product design stage a new product is developed or an existing product is
modified. The goal here is to design a product which is manufacturable and will
meet customer requirements. During the process design stage, production and
process engineers developed manufacturing processes to meet there
specifications developed during the product design stage. Taguchi developed a
three-step approach for assuring quality with in each of the two stages of off-line
quality control. He called these steps system design, parameter design and
tolerance design.
5.5.2 On-line quality control: It is concerned with manufacturing products within the
specifications established during product design using the procedure developed during
process design. Taguchi identifies two stages of on-line quality control.
Stage 1: Production quality control methods
It has three forms:
Process diagnosis and adjustment
Prediction and correction
Measurement and action
Stage 2: customer relations
5.5.1.1 Product Design (Off-Line Quality Stage 1)
1. System Design: applying engineering and scientific knowledge to develop a
prototype design which meets customer requirements. Initial selection of parts,
materials and manufacturing technology are made at this time. The emphasis
here is on using the best available technology to meet customer requirements at
lower cost. A key difference between this step in Taguchi's approach and the
prototype design step in many western R&D departments is Taguchi's focus on
proven technology, low-cost parts, and customer requirements rather than on
using the latest technology and exotic or expensive parts.
2. Parameter Design: Determination of optimal settings for the product
parameters. The goal here is to minimise manufacturing and product lifetime
costs by minimising performance variation. This involves making the product
design robust, that is, insensitive to noise factors. A noise factor is an
uncontrollable source of
variation in the functional characteristics of a product. Taguchi identifies three
types of noise factor:
External noise: due to variation in environmental conditions such as dust,
temperature, humidity or supply voltages.
Internal noise: mainly due to deterioration such as product wear, material
aging or other changes in components or materials with time or use.
Unit-to-unit noise: the differences in products built to the same
specifications, caused by variability in materials, manufacturing equipment
and assembly processes.
3. Tolerance design: Establish tolerances around the target (nominal) values
established during parameter design. The goal is to set tolerances as wide as
possible (to reduce manufacturing costs) while still keeping the product's
functional characteristics within specified bounds.
5.5.1.2 Process Design (Off-Line Quality Control, Stage 2)
1. System design: Select the manufacturing process on the basis of knowledge of the
product and current manufacturing technology. The focus here is on building
to specification using existing machinery and processes whenever possible.
2. Parameter design: determine appropriate levels for the controllable
production process parameters. The goal here is to make the process robust,
that is, to minimise the effect of noise on the production process and the
finished product. Experimental designs are used during this step.
3. Tolerance design: establish tolerances for the process parameters identified as
critical during process parameter design. If the product or process parameter
design steps are poorly done, it may be necessary here to tighten tolerances
or specify higher-cost materials or better equipment, thus driving up
manufacturing costs.
5.5.2.1 Production Quality Control Methods (on line QC, stage 1)
Dr. Taguchi identifies three forms of on-line quality control:
1. Process diagnosis and adjustment: the process is monitored at regular intervals,
adjustments and corrections are made as needed.
2. Prediction and correction: a quantitative process parameter is measured at
regular intervals; the data are used to project trends in the process.
Whenever the process is found to be too far off target, the process is
adjusted to correct the situation. This method is also called feedback or
feed-forward control.
3. Measurement and action: quality by inspection. Every manufactured unit is
inspected. Defective units are reworked or scrapped. This is the most
expensive and least desirable form of production quality control since it
does not prevent defects from occurring or even identify all defective units.
5.5.2.2 Customer relations (on-line QC, stage 2)
Customer service can involve repair or replacement of defective products, or
compensation for losses. The complaint-handling process should be more than a
customer relations operation. Information on types of complaints and failures,
and customer
perceptions of products, should be promptly fed back to relevant functions within the
organization for corrective action.
References
Books:
- Taguchi on Robust Technology Development, Genichi Taguchi
- A Primer on the Taguchi Method, Ranjit Roy
- Designing for Quality, Robert H. Lochner and Joseph E. Matar
- Managing for Total Quality, N. Logothetis
- Quality Management, Kanishka Bedi
- Total Quality Management, Besterfield
Internet references:
- www.slideshare.net
- www.google.com
- www.wikipedia.org
- www.scribd.com