AMEY VIDVANS
THE PENNSYLVANIA STATE UNIVERSITY
1.0 Abstract:
Manufacturing products at low cost and high quality is both an engineering and an economic
challenge. There is wide interest in optimizing manufacturing systems to improve quality,
reduce manufacturing cost, and reduce the overall quality loss [1]. There are frequently many
decision variables that require setting, and traditional techniques such as one-factor-at-a-time
optimization or trial and error often fail to find good settings efficiently. Robust design studies
a large number of variables using a mathematical construct called orthogonal arrays to deliver the
best product at the lowest cost and highest quality. A number of documented case studies in
advanced manufacturing applying these techniques and their extensions are discussed in this
literature survey to understand the methodology and its conclusions.
2.0 Literature Survey:
We first briefly discuss the Taguchi methodology and the steps in its implementation. Whenever
we are looking at optimizing a process, we can do it in 3 ways [1].
1.) Trial and error
This involves making a series of experiments using variables set at values that the
experimenter feels will give the best results. Data analysis on the direction of
improvement is carried out after each experiment.
2.) Design of Experiments
This approach involves the parameters of interest being varied over a specified range. Although
this is a very systematic approach to experimentation, it is prohibitively expensive and not
readily adaptable in industry.
3.) Taguchi Methods
The Taguchi Method involves the use of orthogonal arrays, which are designed to give the
least variance and impart robustness to the process. Although this approach is similar to
DOE, it is intended to achieve the same goal with the minimum possible experimentation.
The objective functions in this case are signal-to-noise ratios (SNR), which are logarithmic
transforms of the output to be optimized.
All problems discussed here are of the static type. For static problems, the SNRs are commonly of
3 types: 1.) smaller-the-better, 2.) larger-the-better, 3.) nominal-the-best.
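These three SN ratio formulas are standard in the robust-design literature [1]; the following is a minimal Python sketch (the function names are mine, not the source's):

```python
import math

def sn_smaller_the_better(y):
    # SN = -10 * log10(mean of y^2); maximized when the response is near zero
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    # SN = -10 * log10(mean of 1/y^2); maximized when the response is large
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_nominal_the_best(y):
    # SN = 10 * log10(mean^2 / variance); maximized when scatter about the mean is small
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(mean * mean / var)
```

In every case the ratio is defined so that a larger SN value is better, which lets all three problem types share one "maximize the SN ratio" analysis.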
The first step is to identify the factors of interest, followed by determination of the noise factors
and the quality characteristic of interest. Based on these, we develop the objective function to be
optimized. This is followed by determining the control factors and their levels. Next we select the
orthogonal array and conduct the experiments. From the resulting plots of process means and SN
ratios we can optimize the process. More often than not, ANOVA is also integrated into the analysis.
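As a sketch of the "process means" step above, the following hypothetical helper (not from the source) averages a response such as the SN ratio over each level of each factor; the level with the best mean is the robust setting:

```python
from collections import defaultdict

def main_effect_means(runs, responses):
    """runs: one tuple of factor levels per experiment; responses: SN ratio per run.
    Returns {factor_index: {level: mean response at that level}}."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for levels, y in zip(runs, responses):
        for factor, level in enumerate(levels):
            sums[factor][level] += y
            counts[factor][level] += 1
    return {f: {lv: sums[f][lv] / counts[f][lv] for lv in sums[f]} for f in sums}

# tiny L4(2^3) illustration: three factors at 2 levels, 4 runs, made-up SN ratios
runs = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
sn = [10.0, 20.0, 30.0, 40.0]
effects = main_effect_means(runs, sn)
```

Because the array is balanced, each level mean averages over the same number of runs, which is what makes these per-factor comparisons fair.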
The first case study, by Zhu et al [2], studies distortions in a hybrid manufacturing process
caused by deposition and subsequent solidification, which create dimensional inaccuracies.
Additive manufacturing processes provide a great deal of flexibility to the designer due to their
ability to produce complex geometries. CNC machining, on the other hand, produces parts with high
accuracy; however, due to problems of tool accessibility, it becomes cumbersome to machine some
complex geometries. To counter these problems, hybrid systems consisting of both additive and
subtractive manufacturing, which combine the advantages of both types of processes, have
been developed. However, the newly deposited material can distort the final part and induce
residual stresses. In the analysis stage, a mathematical model is developed that helps identify
the process parameters to be used in the Taguchi DOE (design of experiments) phase, since there
are many factors of interest. The experiments were then statistically analyzed to discover the
correlation between distortion and the process parameters. We shall critically evaluate only the
DOE phase of robust design in this paper. The mathematical model is essentially a force and
moment balance that develops equilibrium conditions giving a rough idea of the process
parameters to be used.
As explained earlier, the paper develops a mathematical model to enlist the critical parameters of
interest, which are then examined by the Taguchi DoE approach. These parameters were found to be
the section length of the existing part Ls, the heights of the existing part he and the newly
deposited part hn, and the layer thickness t. The authors note that although a wide variety of
factors influence the process, the most relevant ones were selected.
Three levels were assigned to each factor, corresponding to low, medium and high. The 3^k design
was probably selected to detect a quadratic effect if present, since a two-level experiment can
hint at a quadratic effect but cannot characterize it properly. The interactions between these
factors were also clearly relevant and were taken into consideration. The levels of the factors
were as follows. Keeping in mind the working volume of the experimental setup (the FFF machine),
the levels for section length (Ls) were chosen as 60 mm, 90 mm and 120 mm. To shorten the time
required for each observation, the authors selected 3 mm, 6 mm and 9 mm as the three levels for
the heights of existing parts (he). The same heights were also applied to the heights of the newly
deposited parts (hn). Effectively, the ratio of he to hn can therefore span a wide range:
33%, 50%, 67%, 100%, 150%, 200% and 300%. The layer thicknesses chosen were 0.2, 0.25 and
0.3 mm, considering the stability of the machine. It should be noted that the other control
factors, such as deposition speed, were held constant for the purposes of this study.
A standard L27 Taguchi orthogonal array was selected for the experiment. Regarding the
experimental procedure, the top surface of each newly deposited part was face milled to provide
a datum reference for measurement. The bottom surface of each test piece was measured in
scanning mode on a coordinate measuring machine (CMM). For measuring the distortion, the part
was positioned in the direction opposite to the build direction. For accuracy, 5 lines at 4 mm
intervals along the X axis were scanned and the appropriate measurements recorded in the L27 table.
Beginning the analysis, the authors conduct an ANOVA; the main effects section length, height of
existing part, height of newly deposited part and layer thickness are denoted A, B, C and D
respectively, and their interactions A*B, B*C and A*C. The ANOVA shows that the height of the
existing part (he), followed by the section length (Ls), are the most significant parameters,
while the height of the newly deposited part and the layer thickness are not significant, as
judged from their p-values, which are greater than 0.05. Since, in robust design methodology,
mean plots are an important way of judging the optimum parameter values, the authors turn to
these next. The main effect plots indicate that the section length should be at its lowest value,
the height of the existing part at its highest value, the height of the newly deposited part at
its lowest, and the deposited layer thickness at 0.25 mm.
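The significance screening step can be sketched as a simple filter on the ANOVA p-values; the p-values below are illustrative only (the paper's exact values are not reproduced here):

```python
def significant_factors(pvalues, alpha=0.05):
    # keep only effects whose ANOVA p-value falls below the significance level
    return sorted(name for name, p in pvalues.items() if p < alpha)

# illustrative p-values echoing the qualitative ANOVA outcome described above
example = {"A: section length": 0.010, "B: existing height": 0.002,
           "C: new height": 0.300, "D: layer thickness": 0.450}
kept = significant_factors(example)
```

Factors that fail the filter are typically pooled into the error estimate rather than used to set levels.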
Next, moving on to the interaction studies between the two primary factors of interest: from the
interaction plots of section length and height of existing part, it can be seen that increasing
the height of the existing part (he) significantly reduces the observed distortion, while
increasing the section length leads to an increased degree of distortion. As expected from the
ANOVA, the layer thickness does not have much influence on part distortion. This interaction is
significant because of the residual stresses developed in the new and existing deposited
material. The authors argue that the longer the section length of the existing part, the greater
the contraction force that acts as the material solidifies, which creates higher distortion or
tolerance loss.
The authors then explain the physical phenomena associated with these results. There are a few
areas of improvement that can be highlighted. As noted, this is a fractional study, so full
information about the effects is not available; this, however, is an impediment with all Taguchi
methods. Also, only 3 levels of the main effects are considered, thereby losing information on
higher-order effects. The number of factors considered is also very small, so the complete
conclusions required for optimization of the process cannot be drawn. For confirmation we may
have to run a full experiment using 3^4 = 81 treatment combinations with at least 2 replicates of
each. Since a lot of information is lost in using orthogonal arrays, the authors recommend
building a robust FEA model for greater accuracy, which may also guide further experimental runs.
The work considers each factor to be independent and should have included discussion of the
correlations among the various factors. Obvious additions to the experimental factors studied
would be part geometry, the effect of material, and the scanning method employed. Part geometry,
however, is complex to quantify; attempts could be made to represent it using categorical
variables and the surface-area-to-volume ratio. Several other factors that may influence the
distortion behavior of the built part are also not considered.
Building on this work, we can review another example of Taguchi optimization, by Campanelli et
al [3]. This paper concerns the optimization of a laser ablation process for surface finish. The
objective of this study is to use a reduced experimental plan for optimization and to achieve
robustness against the uncontrolled variables. The material used was 316L powder, processed with
an Nd:YAG laser working in continuous mode. Parts were produced using a random island scanning
strategy in order to reduce thermal stresses and distortions. The layers were built one by one on
a support plate in the vertical direction, and the resulting density was more than 99%.
Degradation and oxidation were avoided by processing in a nitrogen atmosphere in a closed
chamber. The quality characteristic, the roughness value on the xz plane, was determined by
averaging the roughness along the x and z directions. Of the candidate parameters, average power
(P), repetition rate (Fp), scan speed (v), degree of overlap (O), laser defocus (df) and number
of removed layers (Nrl), power, scan speed, laser defocus and pulse frequency were selected as
experimental factors based on the authors' understanding; of these, the authors tested only
average power (P), repetition rate (Fp) and scan speed (v). The next step is to choose the levels
of the parameters, which are tabulated in Table 1.
As explained earlier, it is prohibitively costly to run a full factorial design, so an L27
orthogonal array at 3 levels is selected, and the experiment is carried out as per this setup.
Next, an ANOVA is carried out to determine the significant factors; it also gives the order of
precedence in which parameters are significant. From the analysis we can conclude that laser
defocus, which is significant for low Nrl values, becomes insignificant as Nrl increases, while
scan speed is significant only for higher values of Nrl. Carrying this work forward, SN ratios of
the smaller-the-better kind were formulated to minimize surface roughness. Following the
procedure of selecting the parameters with the highest SN ratios, the settings are chosen as
20 W, 30 kHz and 300 mm/s. As mentioned, laser defocus is significant only for lower values of
Nrl, meaning that initially the laser is to be defocused and the processing carried out. Looking
at the plot of surface roughness versus number of removed layers, it becomes apparent that as the
number of removed layers increases, the surface roughness reduces. Thus the main conclusions of
the study are as follows.
1.) Surface roughness decreased considerably with an increasing number of removed layers.
2.) Scan speed affects surface roughness only for higher numbers of removed layers.
3.) The maximum power setting and pulse frequency optimized the surface roughness across all
ranges of the number of removed layers.
4.) In the range considered, higher scan speeds affect surface roughness only at higher numbers
of removed layers.
However, there are a few drawbacks to this work: the physical phenomena associated with laser
processing are not discussed, and the results should make sense physically. Since only 3 levels
of the factors are studied, complex nonlinear relations are not very clear, even though the 3^k
design does give an indication that a nonlinear relationship exists. The obtained optimal
settings are not validated with any physical model or further experimental runs. Also, since a
fractional design is used, we have no information regarding the interaction effects of any of the
parameters with respect to Nrl. According to prior domain knowledge, a few critical factors such
as initial powder size are missing from the analysis and should be included in further work, or
at least be classified as noise factors. There is also not much clarity on the selection of the
relevant process parameters. Since we know from prior domain knowledge that the geometric
properties of deposited material affect mechanical properties, the work may be extended by
considering multiple quality characteristics, including mechanical properties. It may also be
helpful to look at 4 levels to understand the nonlinear behavior better. Here, optimization is
also done only over the levels tested, so an optimum lying between these levels may be missed. We
shall later consider a case study where this is accounted for in the analysis.
Now that we have become familiar with the methodology of the Taguchi Method, we consider some
extensions of the work. This approach to optimization considers only one quality characteristic,
so it is not very useful for problems with multiple quality characteristics, which are the
problems we most commonly deal with in industry.
This is where cutting-edge algorithms are combined with traditional Taguchi methods to optimize
the system. One such method is grey relational analysis (GRA). Another limitation of the Taguchi
Method is that it can specify an optimum only at the levels tested; the Taguchi Method integrated
with neural networks can predict even complex relations between input and output values. There
has been a great deal of work on the application of modern methods within the traditional Taguchi
Method.
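The core GRA computation is standard: each run's normalized responses are compared against an ideal sequence, a grey relational coefficient is computed per response, and the mean of the coefficients gives a single grade to optimize. A minimal sketch with the usual distinguishing coefficient ζ = 0.5 (the function name is mine, and this is the generic formulation, not necessarily any one paper's variant):

```python
def grey_relational_grades(rows, zeta=0.5):
    """rows: per-run responses normalized to [0, 1], where 1 is ideal.
    Returns one grey relational grade per run (higher is better)."""
    deltas = [[1.0 - v for v in row] for row in rows]         # deviation from the ideal all-ones sequence
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    # grey relational coefficient: (dmin + zeta*dmax) / (delta + zeta*dmax)
    coeffs = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
              for row in deltas]
    return [sum(row) / len(row) for row in coeffs]            # grade = mean coefficient
```

Collapsing several responses into one grade is what lets the ordinary single-response Taguchi machinery be reused for a multi-response problem.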
We focus on a paper by Lin et al [4] that combines aspects of the Taguchi Method (TM), grey
relational analysis and neural networks for the optimization of a novel gas metal arc (GMA)
welding process [5]. In GMA welding it is of prime importance to obtain the correct weld bead
geometry, since mechanical strength can be predicted from it. The depth of penetration, width and
fusion area of the weld bead were used to describe the weld bead geometry of each specimen. For
the experimental analysis, two plates of JIS SUS 304 stainless steel and SAE 1020 low-carbon
steel with dimensions 50 x 100 x 5 mm were joined. The joint to be welded was a square groove
with a joint gap of 1.5 mm, and electrode wire ER 308 with a diameter of 1.6 mm was selected.
Mixed fluxes (MoS2 and MoO3) were applied on the weld area of the two plates to produce a butt
joint; these activating fluxes are supposed to reduce the joint preparation time and increase
penetration. The welding parameters of interest in controlling the weld bead were the voltage,
the flow rate of shielding gas, the welding speed, and the proportions of the welding fluxes
mentioned above. The authors selected the ranges of the levels for each parameter as follows:
welding voltage 21-24 V, flow rate of shielding gas 8-16 L/min, welding speed 300-450 mm/min, and
different proportions of MoS2 and MoO3 for the mixed fluxes.
Table 2 lists the levels of each factor. In addition to these factors, a fifth factor, used as a
noise factor at 2 levels, corresponds to the cleanliness of the joint.
A detailed account of these formulations can be found in the paper. After these transformations,
the indices obtained for each response variable are combined into a single compound value:
D = f(d1, d2, d3, ..., dn)
Normally D is taken to be the geometric mean of the individual desirabilities, so D lies between
0 and 1 and is to be maximized.
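A minimal sketch of this composite is below; the linear one-sided transforms shown are a common textbook choice for the individual desirabilities, not necessarily the paper's exact formulation:

```python
import math

def d_smaller(y, target, worst):
    # linear smaller-the-better desirability: 1 at/below target, 0 at/above worst
    if y <= target:
        return 1.0
    if y >= worst:
        return 0.0
    return (worst - y) / (worst - target)

def d_larger(y, worst, target):
    # linear larger-the-better desirability: 0 at/below worst, 1 at/above target
    if y >= target:
        return 1.0
    if y <= worst:
        return 0.0
    return (y - worst) / (target - worst)

def overall_desirability(ds):
    # D = geometric mean of the individual desirabilities; any d of 0 forces D = 0
    return math.prod(ds) ** (1.0 / len(ds))
```

The geometric mean is chosen deliberately: unlike an arithmetic mean, a single completely unacceptable response (d = 0) drives the whole compound value to zero.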
Genetic algorithms are an emerging tool in optimization; a brief outline of their function with
respect to this study follows. Genetic algorithms work in phases called generations that
ultimately drive the algorithm toward an optimal solution. In the first generation, the GA starts
with a set of randomly generated candidates, each called a chromosome. After the first
generation, the GA makes use of three operators, reproduction, crossover and mutation, to produce
subsequent generations of solutions; a better offspring may be reserved for future generations.
Starting the optimization with the standalone Taguchi method for the two quality
characteristics, emittance (to be minimized) and absorptance (to be maximized), we have to rely
on the understanding of the engineers operating the process. The factors with their levels are
shown in Table 4. An L18 orthogonal array was chosen to carry out the analysis.
Table 5: ANOVA tables for the Taguchi Method for absorptance and emittance, respectively
Based on the SN ratios we can plot the main-effect SN ratios against the parameters of interest.
Based on this analysis, and since this is a multiple-objective optimization, we rely on the
interpretation of these plots by the technical team of ITERI. The optimum parameters are
determined as A2 B2 C1 D3 E3 F3 G3. Based on this result, 10 experimental runs were performed; on
average they gave emittance and absorptance values of 0.086 and 0.9021, much better than the
currently obtained values of 0.6 and 0.85 respectively. Thus the implementation of the Taguchi
Method is successful.
The next step is to compare this approach to the integrated approach using neural networks, the
desirability function and a genetic algorithm. Neural networks were used to map the relations
between the outputs and the 7 input parameters, and the back-propagation (BP) algorithm was
selected for this work. Twelve samples were used for training the neural network and the
remaining samples to test the outputs. Three parameters of the neural network, the number of
hidden nodes, the learning rate and the momentum, have to be given in advance; these values were
fixed through trial and error. Based on the network architecture and the selection of neural
network parameters, the learning rate and momentum were selected as 0.05 and 0.9 respectively,
for the network architecture 7-6-2. The selection was made on the basis of the lowest combined
RMS error over the training and test data after 5000 learning iterations.
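A minimal back-propagation sketch with momentum, mirroring the 7-6-2 architecture and the learning rate 0.05 and momentum 0.9 quoted above (the weights, training data and helper names are illustrative, and numpy is assumed; this is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(7, 6)); b1 = np.zeros(6)   # 7 inputs -> 6 hidden units
W2 = rng.normal(scale=0.1, size=(6, 2)); b2 = np.zeros(2)   # 6 hidden -> 2 outputs
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)               # momentum buffers
LR, MOM = 0.05, 0.9

def train_step(X, Y):
    """One BP step on mean squared error; returns the pre-update loss."""
    global W1, b1, W2, b2, V1, V2
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    err = out - Y
    gW2 = H.T @ err / len(X)
    gH = (err @ W2.T) * (1.0 - H * H)        # backprop through the tanh layer
    gW1 = X.T @ gH / len(X)
    V2 = MOM * V2 - LR * gW2; W2 = W2 + V2   # momentum-accelerated updates
    V1 = MOM * V1 - LR * gW1; W1 = W1 + V1
    b2 = b2 - LR * err.mean(axis=0)
    b1 = b1 - LR * gH.mean(axis=0)
    return float((err ** 2).mean())

X = rng.normal(size=(12, 7))                 # 12 training samples, as in the paper
Y = np.tanh(X[:, :2]) * 0.5                  # illustrative target, not real process data
losses = [train_step(X, Y) for _ in range(300)]
```

Once trained, such a network serves as a cheap surrogate for the process, which is what allows the GA in the next step to evaluate candidate settings without running physical experiments.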
Next, the GA approach is utilized, with the 7 input parameters encoded in the chromosome. It
should be noted that absorptance is to be maximized and emittance minimized, so these had to be
transformed as required (a binary transformation). The fitness function for the GA was defined by
the desirability function. The trial-and-error approach is again utilized to obtain optimum
values of the GA parameters: initial population, number of generations, crossover rate and
mutation rate. From this analysis, the crossover rate and mutation rate were set to 0.8 and 0.05
respectively, and the initial number of chromosomes was set to 70. The response variables were
transformed into a single index using the desirability function. Based on the output of the
genetic algorithm, which converged within 1000 generations, the optimum process values were
selected as shown in Table 6.
Table 6 Output of GA
Confirmation experiments at these values gave average absorptance and emittance of 0.9281 and
0.076 respectively, better than that achieved by the standalone Taguchi Method.
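The GA loop described earlier (random initial chromosomes, then reproduction, crossover at rate 0.8 and mutation at rate 0.05) can be sketched as follows; the bit-counting "OneMax" fitness is a toy stand-in for the desirability function D, and all names and the smaller sizes are illustrative:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=80,
                      crossover_rate=0.8, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    # generation 0: randomly generated candidate chromosomes (bit strings)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def reproduce():
        # tournament reproduction: the fitter of two random candidates survives
        a, b = rng.sample(pop, 2)
        return list(a if fitness(a) >= fitness(b) else b)

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = reproduce(), reproduce()
            if rng.random() < crossover_rate:            # single-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):                  # bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] = 1 - child[i]
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)            # reserve the best offspring seen
    return best

best = genetic_algorithm(sum)   # maximize the number of 1-bits (toy fitness)
```

In the paper's setting the fitness call would instead decode the chromosome into process settings, feed them through the trained neural network, and return the compound desirability D.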
One of the obvious reasons for the failure of the standalone Taguchi Method here is the
involvement of variables that are continuous rather than discrete in nature; had the factors been
discrete, the Taguchi Method would have given the optimum solution. As we have seen in these case
studies, no care is taken to reduce the correlation between the explanatory factors in the
analysis, which might influence the results. Also, since only 18 specimens are used to arrive at
the conclusions, the validity of the experiments may be questioned. Confirmation with particle
swarm optimization or simulated annealing could be carried out. Using 7 control factors might
well affect the experimental results, so a Plackett-Burman screening test for the main factors is
recommended. Also, since many factors are likely to be correlated, interactions must be studied
rather than lumped into the pooled error term, as shown in the ANOVA table.
In all our analysis so far we have not considered the correlation existing between the predictor
or explanatory factors. Principal component analysis (PCA) is a technique that can take into
account the correlations present between the variables and reduce their number in such a way that
the resulting components are uncorrelated, orthogonal, and ordered by the amount of variation in
the data they explain. It is a very useful technique for exploratory data analysis.
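A minimal PCA sketch via eigendecomposition of the covariance matrix (numpy assumed; the function name is mine):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components.
    Returns (scores, explained_variance_ratio)."""
    Xc = X - X.mean(axis=0)                   # center each column
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # re-order: largest variance first
    scores = Xc @ eigvecs[:, order[:k]]       # uncorrelated component scores
    return scores, eigvals[order] / eigvals.sum()

# two strongly correlated columns: the first component captures almost everything
X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]])
scores, ratio = pca(X, 2)
```

Because the component scores are uncorrelated by construction, SN analysis on the leading components avoids double-counting correlated responses, which is exactly the property the multi-response Taguchi extensions exploit.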
In the work by Gu et al [6], the aim is to find a cheaper and easier solution to
multiple-quality-characteristic problems using simpler methods than those discussed above.
When making parts by injection moulding, it is obviously more expensive to use virgin plastics;
however, blended plastics tend to have less desirable properties than virgin plastics. Using
blended mixes might also be environmentally friendly. Thus the objective of this study was to
optimize mechanical properties such as tensile strength, yield strength, flexural modulus,
flexural strength and impact strength, and also to find the optimum mix of recycled and virgin
plastic. As explained earlier, an SN ratio of the larger-the-better type, corresponding to the
above-mentioned mechanical properties, was selected. The virgin PP material used in this work was
a high-impact copolymer, K8303, with an average pellet size of 3 mm. The recycled PP material
consisted of black pellets derived from common municipal solid waste (MSW). An anomaly in the MFI
(melt flow index) and molecular weights of the blended polymer mixes is attributed to an
increased mineral content, probably due to the fact that the blends were made from MSW. This fact
is important in analyzing the results later on.
An L9 orthogonal array for 4 factors at 3 levels, as defined in Table 7, is used. These factors
were selected on the basis of ease of control.
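The L9(3^4) array itself is the standard published one; the balance checker below is my own helper for verifying the defining property of an orthogonal array, namely that every pair of levels appears equally often in every pair of columns:

```python
# The standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array, levels=3):
    # every ordered pair of levels must appear equally often in every column pair
    n_cols = len(array[0])
    expected = len(array) // levels ** 2
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            pairs = [(row[i], row[j]) for row in array]
            if any(pairs.count(p) != expected for p in set(pairs)):
                return False
    return True
```

This balance is what justifies the main-effect averaging used throughout these studies: each level of each factor is observed against an identical mix of the other factors' levels.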
Using these parameters, a series of verification tests was carried out with specimens for each
condition. The results are tabulated in Table 9.
3.0 Conclusion:
Thus we have covered the basic Taguchi Method in the first few examples and discussed the
shortcomings of the papers. Different problems associated with this technique, and methods to
overcome them, are highlighted. Advanced techniques involving neural networks, the desirability
function, grey relational analysis and principal component analysis are incorporated to improve
the Taguchi Method. Problems associated with multiple quality characteristics and nonlinear
relationships between the response and the inputs are solved satisfactorily. We can conclude that
the simple Taguchi Method may be used with confidence when rough estimates of the processing
parameters across discrete values are to be obtained. Further extensions that serve as add-ons to
the basic Taguchi Method include Pareto analysis and fuzzy logic, which again are used for
multiple quality characteristics.
References:
[1] M. Phadke, Quality Engineering Using Robust Design, 1st ed., Prentice Hall, 1989.
[2] Z. Zhu, V. Dhokia, A. Nassehi, S. Newman, Investigation of part distortions as a result of
hybrid manufacturing, Robotics and Computer-Integrated Manufacturing, 37 (2016), 23-32.
[3] S. Campanelli, G. Casalino, N. Cottuzzi, A. Ludovico, Taguchi optimization of the surface
finish obtained by laser ablation on selective laser molten steel parts, Procedia CIRP, 12
(2013), 462-467.
[4] H. Lin, The use of the Taguchi method with grey relational analysis and a neural network to
optimize a novel GMA welding process, Journal of Intelligent Manufacturing, 23 (2012), 1671-1680.
[5] H. Lin, C. Su, C. Wang, B. Chang, R. Juang, Parameter optimization of continuous sputtering
process based on Taguchi methods, neural networks, desirability function and genetic algorithms,
Expert Systems with Applications, 39 (2012), 12918-12925.
[6] F. Gu, P. Hall, N. Miles, Q. Ding, T. Wu, Improvement of mechanical properties of recycled
plastic blends via optimizing processing parameters using Taguchi method and principal component
analysis, Materials and Design, 62 (2014), 189-198.
*Tables reproduced from relevant papers