
Application of Taguchi Methods to Modern Manufacturing Systems: A Literature Survey

Amey Vidvans
The Pennsylvania State University, University Park, PA 16802

Submitted to Professor M. Jeya Chandra

1.0 Abstract:
Manufacturing products at low cost and high quality is both an engineering and an economic
challenge. There is wide interest in optimizing manufacturing systems to improve quality, reduce
manufacturing cost, and reduce the overall quality loss [1]. There are frequently many decision
variables that require setting, and traditional techniques such as one-factor-at-a-time optimization
or trial and error often fail. Robust design studies a large number of variables using a mathematical
construct called the orthogonal array to deliver the best product at the lowest cost and highest
quality. A number of documented case studies in advanced manufacturing applying these
techniques and their extensions are discussed in this literature survey to understand the
methodology and its conclusions.
2.0 Literature Survey:
We first outline the Taguchi methodology and the steps in its implementation. A process can
generally be optimized in one of three ways [1].
1.) Trial and error
This involves running a series of experiments with the variables set at values that the
experimenter feels will give the best results. After each experiment, the data are analyzed
to decide the direction of improvement.
2.) Design of Experiments (DOE)
In this approach the parameters of interest are varied over a specified range. Although full
factorial experimentation is a very systematic approach, it is prohibitively expensive and not
readily adaptable to industry.
3.) Taguchi Methods
The Taguchi Method uses orthogonal arrays, which are intended to minimize variance and
impart robustness to the process. Although similar in spirit to the DOE approach, it aims to
achieve this with the minimum possible experimentation. The objective functions in this case
are signal-to-noise ratios (SNRs), which are logarithmic functions of the response to be
optimized.
All problems discussed here are of the static type. For static problems, the SNRs are commonly of
three types: 1.) smaller-the-better, 2.) larger-the-better, and 3.) nominal-the-best.
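For concreteness, these three static SN ratios follow the standard definitions in [1]; the minimal
Python sketch below computes each of them, with hypothetical replicate data in the example.

```python
import numpy as np

def sn_smaller_the_better(y):
    # S/N = -10*log10(mean(y^2)); larger S/N means smaller responses.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    # S/N = -10*log10(mean(1/y^2)); larger S/N means larger responses.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal_the_best(y):
    # S/N = 10*log10(mean^2 / variance); larger S/N means less variation.
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Hypothetical roughness replicates for one run (smaller is better):
print(sn_smaller_the_better([2.1, 2.3, 1.9]))  # about -6.5 dB
```

In every case the SN ratio is defined so that a larger value is better, which is why it is maximized
regardless of the type of quality characteristic.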

Based on [1], the steps of the Taguchi Method are as follows.


The first step is to identify the factors of interest, followed by determination of the noise factors
and the quality characteristic of interest. Based on these, the objective function to be optimized is
developed. Next, the control factors and their levels are determined, an orthogonal array is
selected, and the experiment is conducted. From the resulting plots of process means and SN ratios,
the process can be optimized. More often than not, ANOVA is also integrated into the analysis.
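To make these steps concrete, the following sketch computes the per-level process means from
which main-effect plots are drawn, assuming the standard L9(3^4) array and hypothetical response
data; the same pattern extends directly to SN ratios.

```python
import numpy as np
import pandas as pd

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels each.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
runs = pd.DataFrame(L9, columns=["A", "B", "C", "D"])
runs["y"] = [0.41, 0.52, 0.65, 0.38, 0.44, 0.50, 0.55, 0.47, 0.43]  # hypothetical

# Main-effects table: mean response at each level of each factor.
for factor in ["A", "B", "C", "D"]:
    print(factor, runs.groupby(factor)["y"].mean().round(3).to_dict())
# For a smaller-the-better response, the best setting of each factor is
# the level with the lowest mean (equivalently, the highest SN ratio).
```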
The first case study, by Zhu et al. [2], examines distortion in a hybrid manufacturing process
caused by deposition and subsequent solidification, which creates dimensional inaccuracies.
Additive manufacturing processes give the designer a great deal of flexibility through their ability
to produce complex geometries. CNC machining, on the other hand, produces parts with high
accuracy, but tool-accessibility problems make some complex geometries cumbersome to machine.
To counter these problems, hybrid systems combining additive and subtractive manufacturing, and
hence the advantages of both, have been developed. However, the newly deposited material can
distort the final part and induce residual stresses. In the analysis stage, a mathematical model is
developed to help identify the process parameters to be used in the Taguchi design-of-experiments
(DOE) phase, since there are many factors of interest. The experiments were then statistically
analyzed to discover the correlation between distortion and the process parameters. We shall
critically evaluate only the DOE phase of robust design in this paper. The mathematical model is
essentially a force and moment balance establishing equilibrium conditions, which gives a rough
idea of the process parameters to be used.
As explained earlier, the paper develops a mathematical model to identify the critical parameters
of interest, which are then examined by the Taguchi DOE approach. These parameters were found
to be the section length of the existing part (Ls), the heights of the existing part (he) and the newly
deposited part (hn), and the layer thickness (t). The authors note that although a wide variety of
factors influence the process, only the most relevant ones are selected.
Three levels were assigned to each factor, corresponding to low, medium, and high. The 3^k design
was probably selected to detect quadratic effects if present, since two-level experiments cannot
properly characterize a quadratic effect even if they detect its presence. The interactions between
these factors were obviously relevant and were also taken into consideration. The levels of the
factors were as follows. Considering the working volume of the experimental setup, an FFF
machine, the levels for section length (Ls) were chosen as 60 mm, 90 mm, and 120 mm. To shorten
the time required for each observation, the authors selected 3 mm, 6 mm, and 9 mm as the three
levels for the height of the existing part (he); the same heights were also applied to the height of
the newly deposited part (hn). Effectively, the ratio of he to hn can therefore span a wide range:
33%, 50%, 67%, 100%, 150%, 200%, and 300%. The layer thicknesses chosen were 0.2, 0.25, and
0.3 mm, considering the stability of the machine. It should be noted that the other control factors,
such as deposition speed, were held constant for the purposes of this study.
A standard L27 orthogonal Taguchi array was selected for the experiment. As for the experimental
procedure, the top surface of each newly deposited part was face milled to provide a datum
reference for measurement. The bottom surface of each test piece was measured in scanning mode
on a coordinate measuring machine (CMM). For measuring the distortion, the part was positioned
opposite to the build direction. For accuracy, five lines at 4 mm intervals along the X axis were
scanned and the measurements recorded in the L27 table.
Beginning the analysis, the authors conduct an ANOVA in which the main effects section length,
height of existing part, height of newly deposited part, and layer thickness are denoted A, B, C,
and D respectively, and their interactions by A*B, B*C, and A*C. The ANOVA shows that the
height of the existing part (he), followed by the section length (Ls), are the most significant
parameters, while the height of the newly deposited part and the layer thickness are not even
significant, as judged from p-values greater than 0.05. Since mean plots are an important way of
judging the optimum parameter values in robust methodology, the authors turn to these next. The
main-effect plots indicate that the section length should be at its lowest value, the height of the
existing part at its highest value, and the height of the newly deposited part at its lowest, while the
deposited layer thickness should be 0.25 mm.
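A hedged sketch of how such an ANOVA might be reproduced in Python with statsmodels follows;
the file name and column names are assumptions standing in for the L27 data reported in the paper,
not the authors' actual workflow.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical file holding the 27 runs: factor columns A (Ls), B (he),
# C (hn), D (t) and the measured distortion for each run.
df = pd.read_csv("l27_distortion.csv")

# Factors treated as categorical; the three studied interactions included.
model = ols("distortion ~ C(A) + C(B) + C(C) + C(D)"
            " + C(A):C(B) + C(B):C(C) + C(A):C(C)", data=df).fit()

# Factors whose p-values exceed 0.05 (here hn and t, per the authors)
# would be judged non-significant.
print(anova_lm(model))
```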
Next we move to the interaction studies between the two primary factors of interest. The
interaction plots of section length and height of existing part show that increasing the height of the
existing part (he) significantly reduces the observed distortion, while increasing the section length
increases the degree of distortion. As expected from the ANOVA, the layer thickness does not have
much influence on part distortion. This interaction is significant because of the residual stresses
developed in the newly deposited and existing material. The authors argue that the longer the
section length of the existing part, the larger the contraction force that acts as the material
solidifies, creating higher distortion and tolerance loss.
The authors close by explaining the physical phenomena behind these results. A few areas for
improvement can be highlighted. As noted, this is a fractional study, so full information about the
effects is not available; this, however, is an impediment shared by all Taguchi Methods. Also, only
three levels of the main effects are considered, so information on higher-order effects is lost. The
number of factors considered is also very small, so complete conclusions required for optimizing
the process cannot be drawn. For confirmation, a full factorial experiment of 3^4 = 81 runs, with
at least two replicates of each, may be required. Since a good deal of information is missing when
orthogonal arrays are used, the authors recommend building a robust FEA model for greater
accuracy, which may also guide further experimental runs. The work should also have discussed
the correlation between the various factors rather than treating each as independent. Obvious
additions to the experimental factors studied would be part geometry, the effect of material, and
the scanning method employed. Part geometry is complex to quantify, but attempts could be made
using categorical variables and the surface-area-to-volume ratio. Several other factors that may
influence the distortion behavior of the built part are likewise not considered.
Building on this work, we review another example of Taguchi optimization, by Campanelli et al.
[3]. This paper optimizes a laser ablation process for surface finish. The objective of the study is
to use a reduced experimental plan for optimization while achieving robustness against the
uncontrolled variables. The material was 316L powder, processed with an Nd:YAG laser operating
in continuous mode. Parts were produced using a random island scanning strategy in order to
reduce thermal stresses and distortions. The layers were built one by one on a support plate in the
vertical direction, and the resulting density was more than 99%. Degradation and oxidation were
avoided by processing in a nitrogen atmosphere in a closed chamber. The quality characteristic,
the roughness on the xz plane, was determined by averaging the roughness along the x and z
directions. From the candidate parameters, average power (P), repetition rate (Fp), scan speed (v),
degree of overlap (O), laser defocus (df), and number of removed layers (Nrl), power, scan speed,
laser defocus, and pulse frequency were selected as experimental factors based on the authors'
understanding; of these, only average power (P), repetition rate (Fp), and scan speed (v) were
ultimately tested. The next step is to choose the levels of the parameters, which are tabulated in
Table 1.

Table 1 Levels of factors of interest

Since, as explained earlier, it is prohibitively costly to run a full factorial design, an L27 orthogonal
array at three levels is selected, and the experiment is carried out according to this setup.
An ANOVA is then carried out to determine the significant factors and their order of precedence.
From the analysis we can conclude that laser defocus, which is significant for low Nrl values,
becomes insignificant at higher values, while scan speed is significant only for higher values of
Nrl. Carrying the work forward, SN ratios of the smaller-the-better type were formulated to
minimize surface roughness. Following the procedure of selecting the parameter levels with the
highest SN ratios, the settings 20 W, 30 kHz, and 300 mm/s are chosen. As mentioned earlier, the
laser defocus is significant only for lower values of Nrl, which means that initially the laser should
be defocused before processing is carried out. The plot of surface roughness versus number of
removed layers makes it apparent that as the number of removed layers increases, the surface
roughness decreases. The main conclusions of the study are as follows.

1.) Surface roughness decreased considerably with an increasing number of removed layers.
2.) Scan speed affects surface roughness only for higher numbers of removed layers.
3.) The maximum power setting and pulse frequency optimized the surface roughness across all
ranges of the number of removed layers.
4.) In the range considered, higher scan speeds affect surface roughness only at higher numbers of
removed layers.


There are, however, a few drawbacks to this work. The physical phenomena associated with laser
processing are not discussed, and the results should make sense physically. Since only three levels
of each factor are studied, complex nonlinear relations are not very clear, even though the 3^k
design does indicate that a nonlinear relationship exists. The optimal settings obtained are not
validated with any physical model or further experimental runs. Also, since a fractional design is
used, we have no information regarding the interaction of any of the parameters with Nrl. From
prior domain knowledge, a few critical factors such as initial powder size are missing from the
analysis and should be included in further work, or at least classified as noise factors. There is also
not much clarity on how the relevant process parameters were selected. Since we know from
domain knowledge that the geometric properties of the deposited material affect mechanical
properties, the work could be extended to consider multiple quality characteristics, including
mechanical properties. It may also be helpful to use four levels to understand the nonlinear
behavior better. Here, optimization is done only over the levels tested, so an optimum lying
between these levels may be missed. We shall later consider a case study where this is accounted
for in the analysis.

Now that we are familiar with the methodology of the Taguchi Method, we consider some
extensions of the work. This approach to optimization considers only one quality characteristic,
and in the case of multiple quality characteristics it is not very useful; yet most industrial problems
involve multiple quality characteristics. This is where modern algorithms are combined with
traditional Taguchi methods to optimize the system. One such method is Grey Relational Analysis
(GRA). Another limitation of the Taguchi Method is that it can specify an optimum only at the
levels tested; the Taguchi Method integrated with neural networks can predict even complex
relations between input and output values. There has been a great deal of work on incorporating
modern methods into the traditional Taguchi Method.
We focus on a paper by Lin [4] that combines aspects of the Taguchi Method (TM), Grey
Relational Analysis, and neural networks for the optimization of a novel gas metal arc (GMA)
welding process. In GMA welding it is of prime importance to achieve the correct weld bead
geometry, since mechanical strength can be predicted from it. The depth of penetration, width,
and fusion area of the weld bead were used to describe the weld bead geometry of each specimen.
For the experimental analysis, two plates of JIS SUS 304 stainless steel and SAE 1020 low-carbon
steel with dimensions 50 x 100 x 5 mm were joined. The joint was a square groove with a joint
gap of 1.5 mm, welded with ER 308 electrode wire of 1.6 mm diameter. Mixed fluxes (MoS2 and
MoO3) were applied to the weld area of the two plates to produce a butt joint; these activating
fluxes are supposed to reduce joint preparation time and increase penetration. The welding
parameters of interest in controlling the weld bead were the voltage, the flow rate of shielding gas,
the welding speed, and the proportions of the welding fluxes mentioned above. The authors
selected the ranges of the levels as follows: welding voltage 21-24 V, shielding gas flow rate
8-16 L/min, welding speed 300-450 mm/min, and different proportions of MoS2 and MoO3 for
the mixed fluxes.


Table 2 lists the levels of each factor. In addition to these factors, a fifth factor, the cleanliness of
the joint, was treated as a noise factor at two levels.

Table 2 Levels for the 4^k design used


Consequently, an L16 array with 15 degrees of freedom was selected for the experimental runs.
We now look at the SNR plots with the depth-to-width ratio (DWR) as the quality characteristic;
this is a higher-the-better characteristic. Here a 4^k design was used, which gives better predictions
than the simple 3^k design used previously. From this, the combination A1 B1 C3 D1 is obtained
as the one that optimizes DWR. An ANOVA was also carried out, and it was discovered that factor
B was in fact non-significant. An experimental run was carried out at the parameters given by the
Taguchi optimization, and the results confirmed the initial optimization results obtained by the
Taguchi Method.
Coming now to the Grey Relational Analysis (GRA), it was noted that multiple quality
characteristics would be used: the depth of penetration, the depth-to-width ratio (DWR), and the
fusion area of each bead. In this stage, each of these quality characteristics is normalized to the
range between 0 and 1. The next step is the computation of the overall grey relational grade, the
multi-response characteristic, by assigning weights to each quality characteristic after
standardization; in this case all weights were set equal to 1. In this way the multiple-quality-
characteristic problem is turned into a single-quality-characteristic problem, and each experimental
run receives a grey relational grade.
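A minimal sketch of such a GRA computation follows, assuming the commonly used grey
relational coefficient with distinguishing coefficient zeta = 0.5 and an ideal reference sequence of
ones; the response values shown are hypothetical, not the paper's data.

```python
import numpy as np

def grey_relational_grade(Y, larger_better, weights=None, zeta=0.5):
    """Y: (runs x responses) matrix; returns one grade per run."""
    Y = np.asarray(Y, dtype=float)
    # 1) Normalize each response to [0, 1] by its optimization direction.
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    X = np.where(larger_better, (Y - lo) / (hi - lo), (hi - Y) / (hi - lo))
    # 2) Grey relational coefficient against the ideal sequence (all ones).
    delta = np.abs(1.0 - X)
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # 3) Grade = weighted mean of the coefficients across responses.
    w = np.ones(Y.shape[1]) if weights is None else np.asarray(weights, float)
    return coef @ (w / w.sum())

# Hypothetical responses: penetration depth, DWR, fusion area (all larger-better).
Y = [[4.2, 0.61, 18.0], [5.1, 0.72, 21.5], [3.8, 0.55, 16.2]]
print(grey_relational_grade(Y, larger_better=[True, True, True]))
```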
This grade is then fed into a multilayer feed-forward neural network (NN) of the type typically
used for complex predictive problems. The NN consists of nodes arranged in layers: an input layer,
one or more hidden layers, and an output layer. For this analysis it has five input nodes in the input
layer and a single output node. The training algorithm selected was Levenberg-Marquardt back-
propagation (LMBP), preferred over heuristic methods for its fast convergence; it is considered the
fastest algorithm for training multilayer networks and shares similarities with gradient-descent
and Gauss-Newton techniques. The dataset was split into training and test sets for building the NN
and testing its validity, with the NN trained to minimize the mean squared error (MSE). By trial
and error, a 5-4-1 architecture was decided upon, since it produced the least simulation error. The
transfer functions of the hidden neurons were sigmoidal and, as per convention, a linear function
was used for the output; it is also known that an overabundance of hidden neurons leads to
overfitting. From this study, the NN gives the optimum parameters tabulated in Table 3 regardless
of the value of the noise factor, i.e., cleanliness. These results were obtained by simulating the
neural network on the output of the GRA and TM for each of the parameters.
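As an illustrative stand-in, a 5-4-1 network of this kind could be sketched as follows; note that
scikit-learn does not provide a Levenberg-Marquardt trainer, so the L-BFGS solver is used in its
place, and the data here are randomly generated placeholders rather than the paper's runs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: 16 runs x 5 inputs (4 welding parameters + noise factor),
# y: grey relational grade per run. Values are placeholders.
rng = np.random.default_rng(0)
X, y = rng.random((16, 5)), rng.random(16)

# 5-4-1 architecture: one hidden layer of 4 sigmoidal units, linear output.
net = MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X[:12], y[:12])           # training split
print(net.score(X[12:], y[12:]))  # fit quality on the held-out runs
```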


Table 3 Comparison between the two methods discussed


From this we can conclude that GRA followed by the LMBP NN gives significantly better results
than the simple Taguchi Method; however, where only a rough approximation of the parameters
is required, the Taguchi Method remains a very reliable and cheap option.
One drawback of this approach is that the interaction effects between the welding parameters are
neither considered nor tested. The technique also appears to impose a high computational burden:
industry needs simple tools that give quick and accurate results, and it might not be possible to
implement such a technique where fast results are required. Considering only one noise factor is
likewise not very realistic for practical applications. Finally, since the NN parameters were selected
by trial and error, an exhaustive study of such parameters would be required before the optimum
combination is found.
We now look at another advanced application of the Taguchi Method to modern manufacturing
systems with multiple quality characteristics, in the paper by Lin et al. [5]. This paper focuses on
applying the Taguchi Method along with neural networks, a desirability function, and a genetic
algorithm to a multiple-quality-characteristic optimization problem in a continuous film sputtering
process used in the solar industry; this technique is considered superior to alternatives such as
electroplating and painting. The Industrial Technology Research Institute (ITRI) in Taiwan was
the location and sponsor of all experimentation.
Again the argument is made that selecting process parameters based on the intuition of the
engineers will not give the optimum result for the quality characteristics, absorptance (α) and
emittance (ε). Here the Taguchi Method is applied both to identify the relevant factors and to find
optimum settings of those parameters. Thereafter, by integrating desirability functions, a genetic
algorithm, and neural networks, the settings are optimized further, eventually giving a result better
than that obtained by the standalone Taguchi Method.
The desirability function is a mathematical technique for converting multiple outputs into a single
value between 0 and 1. There are different desirability formulations for the different quality
characteristics, e.g., nominal-the-best, larger-the-better, and smaller-the-better, each denoted by
di; a detailed account of these formulations can be found in the paper. After these transformations,
the indices obtained for each response variable are combined into a single compound value,

D = f(d1, d2, d3, ..., dn)

Normally D is taken to be the geometric mean of the individual desirabilities, so that D lies between
0 and 1 and is to be maximized.
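A minimal sketch of Derringer-Suich-style desirabilities of this kind is given below; the bounds
and response values used are hypothetical illustrations, not those of the paper.

```python
import numpy as np

def d_larger(y, lo, hi, s=1.0):
    # Larger-the-better desirability: 0 at or below lo, 1 at or above hi.
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_smaller(y, lo, hi, s=1.0):
    # Smaller-the-better desirability: 1 at or below lo, 0 at or above hi.
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

def overall_D(ds):
    # Compound desirability: geometric mean of the individual d_i.
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# Hypothetical bounds: absorptance in [0.85, 0.95] (maximize),
# emittance in [0.05, 0.60] (minimize).
d1 = d_larger(0.92, 0.85, 0.95)
d2 = d_smaller(0.08, 0.05, 0.60)
print(overall_D([d1, d2]))
```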
Genetic algorithms are an emerging tool in optimization. A brief outline of their function with
respect to this study follows.
Genetic algorithms work in phases called generations that ultimately drive the search toward an
optimal solution. In the first generation, the GA starts with a set of randomly generated candidates,
each called a chromosome. In subsequent generations, the GA applies three operators, reproduction,
crossover, and mutation, to produce new solutions, and better offspring may be retained for future
generations.
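A minimal GA sketch along these lines follows; the bit-sum fitness is a placeholder for the NN-
predicted desirability used in the paper, and the operator rates echo the values reported later in
this section.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(bits):
    # Placeholder objective; in the paper this would be the desirability D
    # predicted by the trained neural network for the decoded parameters.
    return bits.sum()

def ga(n_bits=14, pop=70, gens=100, cx_rate=0.8, mut_rate=0.05):
    P = rng.integers(0, 2, size=(pop, n_bits))      # random first generation
    for _ in range(gens):
        f = np.array([fitness(c) for c in P], dtype=float)
        # Reproduction: fitness-proportional (roulette-wheel) selection.
        p = (f + 1e-9) / (f + 1e-9).sum()
        P = P[rng.choice(pop, size=pop, p=p)]
        # Crossover: single-point, applied to pairs with probability cx_rate.
        for i in range(0, pop - 1, 2):
            if rng.random() < cx_rate:
                cut = int(rng.integers(1, n_bits))
                P[i, cut:], P[i + 1, cut:] = (P[i + 1, cut:].copy(),
                                              P[i, cut:].copy())
        # Mutation: flip each bit independently with probability mut_rate.
        P ^= (rng.random(P.shape) < mut_rate).astype(P.dtype)
    return P[np.argmax([fitness(c) for c in P])]

print(ga())  # best chromosome found
```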
Starting the optimization with the standalone Taguchi Method for the two quality characteristics,
emittance (ε), which is to be minimized, and absorptance (α), which is to be maximized, we have
to rely on the understanding of the engineers operating the process. The factors with their levels
are shown in Table 4. An L18 orthogonal array was chosen to carry out the analysis.

Table 4 Factors for the Taguchi Method


After suitable SN ratios are defined for each quality characteristic, the corresponding SN plots are
drawn and interpreted as per the previous discussion. An ANOVA for the two quality
characteristics, given in Table 5, is also included. A noise factor for the time of day at which the
process is carried out is incorporated as well. It should be noted that the selection of the parameters
was based on the operating engineers' experience. In the paper's terminology, SP denotes a sub-
process, which is a characteristic of the processing.


Table 5 ANOVA tables for the Taguchi Method for absorptance (α) and emittance (ε), respectively
Based on the SN ratios we can plot the main-effect SN ratios against the parameters of interest.
Based on this analysis, and since this is a multiple-objective optimization, we rely on the
interpretation of these plots by the technical team of ITRI. The optimum parameters are
determined to be A2 B2 C1 D3 E3 F3 G3. Based on this result, ten experimental runs were made;
on average they gave emittance and absorptance values of 0.086 and 0.9021, much better than the
currently obtained values of 0.6 and 0.85 respectively. Thus the implementation of the Taguchi
Method was successful.
The next step is comparing this approach to the integrated approach using neural networks, the
desirability function, and the genetic algorithm. Neural networks were used to map the relations
between the outputs and the 7 input parameters, with the back-propagation (BP) algorithm selected
for the work. Twelve samples were used for training the neural network and the remainder to test
the outputs. Three parameters of the neural network, the number of hidden nodes, the learning
rate, and the momentum, have to be given in advance; these values were fixed through trial and
error. Based on the network architecture and the selection of neural network parameters, the
learning rate and momentum were set to 0.05 and 0.9 respectively, for a 7-6-2 network architecture.
The selection was based on the lowest combined RMS error over the training and test data after
5000 learning iterations.

Now the GA approach is utilized, with the 7 input parameters encoded in the chromosome. It
should be noted that absorptance is to be maximized and emittance minimized; these had to be
transformed as required (binary transformation). The fitness function for the GA was defined by
the desirability function. Trial and error was again used to obtain optimum values of the GA
parameters: initial population size, number of generations, crossover rate, and mutation rate. From
this analysis, the crossover rate and mutation rate were set to 0.8 and 0.05 respectively, and the
initial number of chromosomes was set to 70. The response variables were transformed into a
single index using the desirability function, and based on the output of the genetic algorithm,
which converged within 1000 generations, the optimum values of the process were selected as
shown in Table 6.

Table 6 Output of GA
Confirmation experiments at these values give average absorptance and emittance of 0.9281 and
0.076 respectively, better than those achieved by the standalone Taguchi Method.

One obvious reason for the shortfall of the Taguchi Method here is the involvement of variables
that are continuous in nature rather than discrete; had the factors been discrete, the Taguchi
Method would have given the optimum solution. As seen in these case studies, no care was taken
to reduce the correlation between the explanatory factors in the analysis, and this might influence
the results. Also, since only 18 specimens were used to arrive at the conclusions, the validity of the
experiments may be questioned. Confirmation with particle swarm optimization and simulated
annealing could be done. Using 7 control factors might also affect the experimental results, so a
Plackett-Burman screening test for the main factors is recommended. Finally, since many factors
are clearly correlated, interactions must also be studied rather than being lumped into the pooled
error term as in the ANOVA table.

In all of the analysis so far, we have not considered the correlation existing between the predictor
or explanatory factors. Principal Component Analysis (PCA) is a technique that can take into
account the correlations present between the variables and reduce their number in such a way that
the resulting variables are uncorrelated, orthogonal, and ordered while still explaining the variation
in the data. It is a very useful technique for exploratory data analysis.
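A minimal PCA sketch on a correlation matrix, using hypothetical correlated data in place of real
experimental responses, might look as follows.

```python
import numpy as np

def pca_from_correlation(Y):
    # Eigen-decomposition of the correlation matrix of the responses.
    R = np.corrcoef(Y, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]   # sort by explained variance
    return eigvals[order], eigvecs[:, order]

# Hypothetical data: 9 runs of 5 strongly correlated quality characteristics.
rng = np.random.default_rng(2)
base = rng.random((9, 1))
Y = base + 0.1 * rng.random((9, 5))

eigvals, loadings = pca_from_correlation(Y)
print(eigvals / eigvals.sum())  # proportion of variation per component
```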

The work by Gu et al. [6] seeks a cheaper and easier solution to multiple-quality-characteristic
problems, using simpler methods than the one discussed above.

When making parts by injection moulding, it is obviously more expensive to use virgin plastics,
but blended plastics tend to have more undesirable properties than virgin plastics, and using
blended mixes may also be more environmentally friendly. The objective of this study was
therefore to optimize mechanical properties such as tensile strength, yield strength, flexural
modulus, flexural strength, and impact strength, and also to find the optimum mix of recycled and
virgin plastic. As explained earlier, an SN ratio of the higher-the-better type, corresponding to the
above-mentioned mechanical properties, was selected. The virgin PP material used in this work
was a high-impact copolymer, K8303, with an average pellet size of 3 mm. The recycled PP was
black pellets derived from common municipal solid waste (MSW). An anomaly in the MFI (melt
flow index) and molecular weights of the blended polymer mixes is attributed to increased mineral
content, probably because the blends were made from MSW. This fact is important in analyzing
the results later on.
An experiment based on an L9 orthogonal array with 4 factors at 3 levels, as defined in Table 7,
was carried out. These factors were selected on the basis of ease of control.

Table 7 Factors of interest in the study


Now, based on this, the SN ratios for each of the blends PP0, PP10, PP30, PP50, PP70, PP90, and
PP100 were calculated, and the multiple quality characteristics were evaluated to find the optimum
levels.
Since there are five quality characteristics of interest, there is no clear trend in the SN ratios, and
they give contradictory results; interpretation of the SN plots was not straightforward. Some of the
contradictions with respect to the different factors are explained below.
Some conclusions can nevertheless be drawn. Plastic blends processed at low melt temperature
generally had better mechanical properties; one explanation is that low temperatures prevented
excessive thermal degradation of the plastic and thus gave the specimens better strength. As factor
B, the mould temperature, increased, so did the mechanical properties, except flexural strength.
The injection speed, factor C, showed mixed trends: tensile strength, yield strength, and flexural
modulus were optimized at low injection speed, while flexural strength and impact strength were
optimized at high injection speed. The same trends were observed for factor D, the packing
pressure. The optimal results based on the larger-the-better SN plots are tabulated in Table 8.

Table 8 Optimal solutions for all quality characteristics


From these parameters, a series of verification tests were carried out with specimens for each
condition. The results are tabulated in Table 9.

Table 9 Verification test results


As can be seen from Table 9, the results for PP90 are even slightly better than those for PP100,
which indicates that the optimization may work best for PP90, as we shall see later. Flexural
strength and flexural modulus show opposite trends, which can be explained by the increasing
mineral content of the plastic blend. The improvement ratio, calculated against the average values
of the OA experiments, shows considerable improvement, meaning the optimization is effective,
with the maximum improvement being 9.58% for PP0. It is also noted that the virgin polymer has
a high impact strength, which makes sense.
Principal Component Analysis (PCA) is used to reduce the number of quality characteristics from
five to a smaller number, to ease the selection of parameters. As explained earlier, PCA reduces
the number of variables by exploiting the high correlations among them; the resulting components
are uncorrelated and together retain the variation in the data. A quantity called the AEV
(accumulated eigenvector) is introduced, which gives the proportion of variation explained by a
set of components. The AEV through the 4th principal component, calculated using eigenvalues
derived from the correlation matrix of the experimental runs, averaged 0.9889; we are therefore
justified in dropping the 5th component according to the PCA.
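The AEV itself is simply the cumulative proportion of variance carried by the leading eigenvalues;
a short sketch follows, with hypothetical eigenvalues chosen to mirror the reported 0.9889 figure.

```python
import numpy as np

# Hypothetical eigenvalues of the 5x5 correlation matrix, sorted descending;
# chosen so the accumulated value through PC4 mirrors the reported ~0.989.
eigvals = np.array([3.85, 0.62, 0.30, 0.17, 0.06])
aev = np.cumsum(eigvals) / eigvals.sum()
print(aev)  # AEV through the 4th component = 0.988, so PC5 can be dropped
```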
Based on this, optimum combinations of processing parameters were determined. From the final
data given in the paper, a ranking of the compositions was constructed for their evaluation. The
conclusion was that, even though there is a trade-off between impact strength and flexural
properties, the PCA-optimized results were better than the Taguchi-optimized results, as can be
seen from PP90, which gives the highest critical values. The results obtained from the final data
were compared with the optimal trials of the Taguchi runs: the improvement for PP10 was as high
as 6.05% for impact strength, and there was a general improvement in the flexural properties of
the PCA trials compared with the optimal Taguchi trials.
There are, however, a few drawbacks to this study. One is that interactions between the processing
parameters were not considered in the analysis, even though the authors accept that the controlling
factors are highly correlated. Also, since the proposed PCA-based approach does not take into
account the continuous nature of the variables, there is no guarantee that the parameter values
obtained are truly optimal. It is nevertheless a very relevant method that can be used in industry
at a fraction of the cost of a full experimental analysis, and its computational load is not very
heavy.


3.0 Conclusion:
We have covered the basic Taguchi Method in the first few examples and explained the
shortcomings of the papers. Different problems associated with the technique, and methods to
overcome them, were highlighted. Advanced techniques involving neural networks, the
desirability function, Grey Relational Analysis, and Principal Component Analysis were
incorporated to improve the Taguchi Method, and problems associated with multiple quality
characteristics and nonlinear relationships between the response and the inputs were solved
satisfactorily. We can conclude that the simple Taguchi Method may be used with confidence
when rough estimates of the processing parameters over discrete values are to be obtained. Further
extensions that serve as add-ons to the basic Taguchi Method include Pareto analysis and fuzzy
logic, which again address multiple quality characteristics.

References:
[1] M. Phadke, Quality Engineering Using Robust Design, 1st ed., Prentice Hall, 1989.
[2] Z. Zhu, V. Dhokia, A. Nassehi, S. Newman, Investigation of part distortions as a result of
hybrid manufacturing, Robotics and Computer-Integrated Manufacturing, 37 (2016), 23-32.
[3] S. Campanelli, G. Casalino, N. Contuzzi, A. Ludovico, Taguchi optimization of the surface
finish obtained by laser ablation on selective laser molten steel parts, Procedia CIRP, 12 (2013),
462-467.
[4] H. Lin, The use of the Taguchi method with grey relational analysis and a neural network to
optimize a novel GMA welding process, Journal of Intelligent Manufacturing, 23 (2012),
1671-1680.
[5] H. Lin, C. Su, C. Wang, B. Chang, R. Juang, Parameter optimization of continuous sputtering
process based on Taguchi methods, neural networks, desirability function and genetic algorithms,
Expert Systems with Applications, 39 (2012), 12918-12925.
[6] F. Gu, P. Hall, N. Miles, Q. Ding, T. Wu, Improvement of mechanical properties of recycled
plastic blends via optimizing processing parameters using the Taguchi method and principal
component analysis, Materials and Design, 62 (2014), 189-198.
*Tables reproduced from the relevant papers.

