



Based on the literature survey and research objectives discussed in the previous chapter, composites with single and multiple reinforcements have been developed. This chapter describes the procedures for alloy and composite production and the various test procedures adopted to evaluate the properties of the composites. The Design of Experiments approach and the optimisation techniques employed are presented, and the modelling techniques used are also explained in detail.

The plan of investigations is shown in Figure 3.1. Important aspects such as the selection of matrix and reinforcement materials for the production of composites, the identification of a manufacturing process for the composites, the design of experiments techniques, and studies on mechanical properties (ultimate tensile strength, hardness), microstructures, dry sliding wear and abrasive wear of the composites produced are discussed in detail under the following headings.




In the present study, composites have been developed to evaluate their mechanical and tribological characteristics. The selection of matrix and reinforcement materials for composite preparation is discussed in the following sections.

Matrix material
The desirable properties of an aluminium alloy matrix are low density, chemical compatibility with the reinforcement, thermal stability, high compressive and tensile strength, and economic efficiency. In the present study, Al-Si10Mg was used as the matrix material. The Al-Si10Mg alloy exhibits excellent resistance to corrosion under both ordinary atmospheric and marine conditions, along with high strength and hardness. The chemical composition of this alloy is given in Table 3.1.
Table 3.1 Chemical composition of Al-Si10Mg matrix alloy (in weight %)

0.2 to 0.6 | 10.0 to 13.0 | 0.6 max | 0.3 to 0.7 | 1.5 max


Reinforcement is the material added to improve various properties of the matrix material. In aluminium matrix composites, hard reinforcements such as Al2O3 and SiC strengthen the matrix both extrinsically, through improved load transfer, and intrinsically, by increasing the dislocation density. Soft reinforcements such as graphite and molybdenum disulphide (MoS2) contribute to lowering the wear rate and friction and to increasing seizure resistance.
In the present study, both hard and soft reinforcements have been used to produce composites. SiCp reinforcement with an average particle size of 40 µm and a density of 3210 kg/m3 was used to produce the composite. SiC has a melting point of 2890 °C, very low reactivity in molten metal, and is relatively cheap. The significant enhancement it provides in composite properties such as stiffness, strength and fracture toughness, together with its low reactivity and low cost, makes this reinforcement ideal for the manufacture of cast metal matrix composites requiring very good wear resistance.
Molybdenum disulphide, a solid lubricant, has been used as the soft reinforcement in this research. The MoS2 used has an average particle size of 1.5 µm and a density of 4800 kg/m3. The lubrication effectiveness of MoS2 often exceeds that of graphite, and it remains effective in a vacuum, where graphite fails. However, the efficiency of MoS2 lubrication decreases above 675 K, where it begins to form MoO3, which lowers the lubricating effect. MoS2 particles have superior load-bearing and surface-speed performance in comparison to either graphite or tungsten disulphide.


The Al-Si10Mg alloy was charged into an electrical resistance-heated furnace, specifically modified for the present investigation. The melting process was carried out at 1073 K under an argon atmosphere in a graphite crucible. Mixing of the reinforcement was conducted using a stainless steel impeller. A schematic diagram of the stir casting setup is shown in Figure 3.2. A calculated quantity of reinforcement, pre-heated to 600 K, was added slowly to the molten metal with continuous stirring. Stirring was continued for a further ten minutes after adding the reinforcement. The molten mixture was solidified in a cast iron die in the form of cylindrical pins of 15 mm diameter and 75 mm length. In the present work, Al-Si10Mg MMCs reinforced with 10 wt. % and 20 wt. % SiCp were produced. The same procedure was repeated to produce AMCs reinforced with 2 wt. % and 4 wt. % MoS2p. Hybrid metal matrix composites with 2 wt. % and 4 wt. % MoS2p combined with 10 wt. % and 20 wt. % SiCp were also cast. As-cast composites and machined wear-pin samples are shown in Figure 3.3.

Figure 3.2 Schematic diagram of stir casting setup


Figure 3.3 Composite castings, UTS and wear test specimen


Microstructural characterisation of the composite specimens was carried out using a Carl Zeiss (Goettingen) optical microscope. The specimens were metallographically polished as per standards to obtain an average roughness value of 0.8 µm and then etched with 2% hydrofluoric acid. Optical micrographs of the polished specimens were recorded at different magnifications.


Properties of the composite as well as matrix alloy specimens were tested at room temperature and a relative humidity of 55%. The specimens were machined using a CNC machine to a dimensional tolerance of 0.1 mm. The equipment used for testing, along with the test procedures, is described in the following sections.


Density measurement
The density of the composites as well as the matrix alloy was determined using a top-loading electronic balance (Mettler Toledo make). According to the Archimedean principle, a solid body immersed in a liquid apparently loses as much of its own weight as the weight of the liquid it displaces. The density of the solid body was therefore determined using a liquid of known density (water). Standard cylindrical specimens of 14 mm diameter and 20 mm length were used for these tests. The average of readings from three random specimens of each composition was taken as the density of the composite.
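The Archimedean calculation above can be sketched as follows; the function name and the specimen weights are illustrative, not measurements from this study.

```python
def archimedes_density(weight_in_air, weight_in_water, rho_liquid=1000.0):
    """Density of a solid from its dry and immersed weights (Archimedes'
    principle): the apparent loss of weight equals the weight of the
    displaced liquid, so rho_solid = m_air / (m_air - m_immersed) * rho_liq."""
    buoyancy = weight_in_air - weight_in_water  # weight of displaced water
    return weight_in_air / buoyancy * rho_liquid

# Hypothetical readings: a specimen weighing 8.0 g in air and 5.2 g in
# water has a density of about 2857 kg/m^3.
```

Because the expression is a ratio of weights, the same balance readings (in grams) can be used directly without converting to forces.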

Micro hardness measurement
Micro hardness values of the composite as well as matrix alloy specimens were measured using a Mitutoyo micro hardness tester equipped with a diamond indenter. The test surface was prepared to a metallographic finish; a 50-gram load was employed with a dwell time of 15 seconds for each sample. The average of five readings taken at different locations was taken as the hardness of the specimen.

Ultimate tensile strength measurement
Tensile testing was carried out as per ASTM E8 using a Hounsfield tensometer. The specimens prior to and after testing are shown in Figure 3.4.


Figure 3.4 Tensile Specimens before and after test


Morphology studies
The morphology of the fractured surfaces of the tensile specimens as well as the worn surfaces of the composite specimens was examined using a JEOL JSM 6360 (T100) Scanning Electron Microscope (SEM) at different magnifications.

Dry sliding wear test
A pin-on-disc apparatus was used for the dry sliding wear tests, conducted as per the ASTM G99 standard. The initial and final weights of the specimen were measured using a Mitutoyo make electronic weighing machine with an accuracy of 0.0001 g. Wear tests were conducted on 10 mm diameter and 25 mm long cylindrical specimens against a rotating EN-32 steel disc (counter face) having a hardness of 65 HRC. The difference in weights before and after the test was taken as the weight loss. Each test was repeated three times and the results were averaged. The Wear Rate (WR) of the composites was studied as a function of sliding velocity, applied load and sliding distance. WR was calculated from the weight loss of the specimen using Equation (3.1) for a constant sliding distance of 3000 m.



WR = W / (ρ × D)    (3.1)

where W = weight loss in kg
ρ = density of the material in kg/mm3
D = sliding distance in m
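Equation (3.1) can be sketched as a short helper; the example weight loss and density are hypothetical values, not results from this study.

```python
def wear_rate(weight_loss_kg, density_kg_per_mm3, sliding_distance_m):
    """Volumetric wear rate per Equation (3.1): WR = W / (rho * D),
    i.e. mm^3 of material removed per metre of sliding."""
    return weight_loss_kg / (density_kg_per_mm3 * sliding_distance_m)

# Hypothetical reading: 0.05 g (5e-5 kg) lost by a composite of density
# 2.7e-6 kg/mm^3 over the constant 3000 m sliding distance used here:
# wear_rate(5e-5, 2.7e-6, 3000) ≈ 6.17e-3 mm^3/m
```

Note the mixed units: expressing density in kg/mm^3 makes the weight loss in kg cancel directly, leaving the volume loss in mm^3.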

Figure 3.5 Schematic representation of the pin-on-disc apparatus used in this study

Table 3.2 Specifications of the tribo-system

Make : DUCOM Tribo Innovators
Testing standard : ASTM G99
Pin size : 3 to 12 mm
Disc size : 100 mm diameter x 8 mm thick
Sliding velocity : 0.26 to 10 m/s
Disc rotation speed : 100 to 2000 rpm
Maximum normal load : 200 N
Frictional force : 0 to 200 N
Wear measurement range : 0 to 4 mm



The same pin-on-disc apparatus was employed to evaluate the high-stress abrasive wear characteristics of the composites. The tests were conducted on cylindrical specimens of 10 mm diameter and 30 mm length. The disc was covered with a commercial SiC emery sheet bonded to the rotating disc. In order to expose fresh abrasive material, the specimen was moved against the parallel surface of the rotating steel disc. The mass loss of the specimens was measured before and after the wear test using an electronic weighing balance (accuracy 0.0001 g). Experiments were repeated thrice with additional specimens to obtain sufficient data for significant results. WR was calculated from the weight loss of the specimen using Equation (3.1) for a constant sliding distance of 100 m.






RSM is defined as the statistical tool that uses quantitative data
from appropriate experimental design to determine and simultaneously solve
multivariate equations. Genetic Algorithm (GA) is direct, parallel, stochastic
method for global search and optimisation, which mimic the evolution of
humans. In order to combine the benefits of both RSM and Genetic algorithm
an integrated RSM GA approach was adopted. RSM-GA approach used for
modelling, analysing and optimising the materials propertied has been
outlined in Figure 3.6.
Determination of the variables which affect the material properties
Design of Experiments
Measurement of the material property
Development of Response Surface Model(s) (RSM)
Analysis of Variance (ANOVA)
Formulation of the fitness function from the RSM
Optimisation using the Genetic Algorithm
Confirmation experiment(s)

Figure 3.6 RSM-GA integrated approach

RSM-GA is a blend of experimental design, statistical methods and optimisation techniques for constructing an empirical model and achieving optimum conditions. The procedure for RSM-GA is given below.
Step 1: Determination of the influencing factors affecting the experiment and the levels at which they are to be examined.

Step 2: Generation of a response surface design (Central Composite or Box-Behnken design)
a. Central Composite Design (CCD)
Central composite designs are often recommended when the design plan calls for sequential experimentation, because these designs can incorporate information from a properly planned factorial experiment. This design can effectively handle two-factor experiments.
b. Box-Behnken Design
Box-Behnken designs are used for non-sequential experiments, that is, when the experiment is to be performed just once. These designs allow efficient estimation of the first- and second-order coefficients. Since Box-Behnken designs have fewer design points, they take less time to run than central composite designs with the same number of factors. Box-Behnken designs also ensure that not all factors are set at their highest levels simultaneously. A comparison of the CCD and Box-Behnken designs is shown schematically in Figure 3.7.
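The run-count comparison between the two designs can be made concrete with standard counting formulas; the centre-point counts below are common defaults, not values fixed by the text.

```python
from math import comb

def ccd_runs(k, center=6):
    """Runs in a full central composite design: 2^k factorial (cube)
    points, 2k axial (star) points, plus `center` centre replicates."""
    return 2 ** k + 2 * k + center

def box_behnken_runs(k, center=3):
    """Runs in a Box-Behnken design (valid for k >= 3): midpoints of the
    cube edges, 4 * C(k, 2) of them, plus `center` centre replicates."""
    return 4 * comb(k, 2) + center

# For three factors a CCD needs 20 runs (with 6 centre points), while a
# Box-Behnken design needs only 15 (with 3 centre points).
```

This illustrates why Box-Behnken designs are attractive for one-shot experiments: the saving in runs grows with the number of factors.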


Figure 3.7 Response surface designs for three factors:
a) Central Composite Design b) Box-Behnken Design
Step 3: Experimentation and collection of response data.

Step 4: Development of a response surface model from the experimental data

By conducting experiments and applying regression analysis, a model of the response in terms of a number of independent input variables can be obtained, from which a near-optimal point can then be located. RSM has frequently been useful in the modelling and optimisation of processes. If X1, X2, ..., Xn are n independent variables and y is a dependent variable based on them, then y is called the response. The observed response y can be written as a function of the independent variables:

Y = f(X1, X2, ..., Xn) + ε

where ε is a random error component. By plotting the expected response of y, a surface, known as the response surface, is obtained. The form of f(X1, X2, ..., Xn) is unknown and may be very complicated. Thus, RSM aims at approximating f by a suitable lower-order polynomial in some region of the independent process variables.

The aim of using RSM is not only to examine the response over the whole factor space, but also to find the region of interest where the response attains its optimum or near-optimal value. By carefully examining the response surface model, the combination of factors which gives the best response can be determined. The values of the regression coefficients of the linear, square and interaction terms of the RSM models can be determined using the relation

B = (X^T X)^(-1) X^T Y


where B is the matrix of factor estimates, X^T the transpose of the calculation matrix X, and Y the matrix of measured responses. The coefficient of determination (R^2) is used to test the goodness of fit of the developed mathematical model; it provides a measure of the variability in the observed response values that can be explained by the controllable factors and their interactions.
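The coefficient estimate B = (X^T X)^(-1) X^T Y can be sketched as below; the toy design matrix and responses are illustrative, not data from this study.

```python
import numpy as np

def rsm_coefficients(X, y):
    """Least-squares regression coefficients B = (X^T X)^(-1) X^T Y.
    Each row of X is one design point; its columns hold the model terms
    (intercept, linear, square, interaction) and y holds the responses."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Solving the normal equations is numerically preferable to forming
    # the explicit inverse of X^T X.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy data generated from the known model y = 1 + 2x:
B = rsm_coefficients([[1, 0], [1, 1], [1, 2]], [1, 3, 5])
print(B)  # ≈ [1. 2.]
```

For a real RSM fit, X would contain one column per model term (including squares and cross-products of the coded factors) evaluated at each run of the design.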
The WR predicted by the RSM model was compared with the corresponding experimental values. The error percentage is given by

Error % = (100 / n) × Σ (i = 1 to n) |Y(Exp)i − Y(Pred)i| / Y(Exp)i

where Y(Exp)i is the measured response corresponding to the ith trial, Y(Pred)i is the response predicted by the RSM model for the ith trial, and n is the number of data sets. Each independent variable lies within a certain range.
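The error percentage above is a mean absolute percentage error and can be sketched directly; the example responses are hypothetical.

```python
def mean_error_percent(y_exp, y_pred):
    """Mean absolute percentage error between measured and RSM-predicted
    responses: (100 / n) * sum(|Y_exp_i - Y_pred_i| / Y_exp_i)."""
    n = len(y_exp)
    return 100.0 / n * sum(abs(e - p) / e for e, p in zip(y_exp, y_pred))

# Hypothetical responses: two trials, each 5% off, give a mean error ≈ 5%.
print(mean_error_percent([2.0, 4.0], [1.9, 4.2]))
```

Because each residual is normalised by its own measured value, trials with small responses are not swamped by trials with large ones.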

Step 5: Analysis of the regression equation from the RSM using ANOVA

ANOVA, first developed by Sir Ronald Fisher (a British statistician), is a method of partitioning the variability in a given experiment into identifiable sources of variation with their associated degrees of freedom. The F-test is simply a ratio of sample variances; comparing the F-ratio of a source with the tabulated F-ratio constitutes the test. Once the analysis of variance has been performed on a set of data and the respective sums of squares have been calculated, this information can be used to distribute the corrected sums of squares among the appropriate factors. Comparing each value with the total sum of squares gives the percentage contribution of each factor. The percentage contribution due to error provides an estimate of the adequacy of the experiment: since error refers to unknown and uncontrollable effects, a low error contribution (15% or less) suggests that no important factors have been omitted from the experiment.
Step 6: Use of contour/surface plots to visualise response surface patterns

Contour and surface plots are useful for establishing desirable
response values and operating conditions. A contour plot provides a
two-dimensional view where all points that have the same response
are connected to produce contour lines of constant responses.
A surface plot provides a three-dimensional view that may provide
a clear picture of the response surface.

Step 7: Genetic Algorithm optimisation

The Genetic Algorithm (GA) is a direct, parallel, stochastic method for global search and optimisation that mimics the process of natural evolution. GA is becoming increasingly popular for optimisation, i.e. the maximisation or minimisation of a resultant response based on the process conditions. The GA sequence used in this work is shown in Figure 3.8.


Figure 3.8 Genetic Algorithm Sequence

The concepts of GA are directly derived from natural evolution and are based on the mechanics of natural selection and natural genetics. The basic steps in a Genetic Algorithm are:

(i) Coding structure
The primary step in GA is to decide the coding structure. A coded string is termed a chromosome. Each individual chromosome is usually described as a string of symbols from {0, 1}, which are labelled genes. The number of bits that must be used to describe the parameters is problem dependent. Let each solution in a population of N such solutions Xi, i = 1, 2, ..., N, be a string of symbols {0, 1} of length L. The length L is determined by the number of variables affecting the optimum solution. The relationship between the phenotype and genotype of an individual chromosome is shown in Figure 3.9.


Figure 3.9 Relationship between the phenotype and genotype of a chromosome

(ii) Create initial population
Typically, the initial population of N solutions is selected completely at random, with each bit of each solution having a 50% chance of taking the value 0. The jth gene of the ith individual chromosome can be denoted in matrix form as Xij, so that the population POP = (Xij), where i = 1, 2, ..., N and j = 1, 2, ..., L. The genetic algorithm requires a fitness function: the fitness of an individual chromosome is the value of an objective function for its phenotype.
(iii) Evaluating the fitness
To calculate the fitness, the chromosome must first be decoded and the objective function evaluated. The fitness not only indicates how good a solution is, but also corresponds to how close the chromosome is to the optimal one. The best mathematical model obtained from the RSM, in coded form, was used as the fitness function. The selection of the chromosomes that produce successive generations is important in a Genetic Algorithm. Several selection schemes, such as roulette wheel, uniform selection and tournament selection, are available. Among them, roulette wheel selection is one of the most widely used. It is a probabilistic selection method based on the individuals' fitness, such that better individuals have an increased chance of being selected. In this scheme, each member of the population is represented by a slice of a roulette wheel whose size is decided by the member's fitness. A selection step is then a spin of the wheel, which in the long run tends to eliminate the least fit population members. The probability of an individual Xi being selected is

P[Xi chosen] = f(Xi) / Σ (j = 1 to N) f(Xj)

where f denotes the fitness.

(iv) Reproduction
Reproduction options control how the Genetic Algorithm creates the next generation. The reproduction options used by the GA are elite chromosomes, crossover chromosomes and mutation chromosomes, and reproduction consists of three basic steps.
a. Selection
Elite chromosomes are the individuals with the best fitness values, numbering the elite count (Ne), in the current generation; they are guaranteed to survive into the next generation. The elite fraction of the next-generation population (N) is

P = Ne / N

Setting Ne to a high value causes the fittest individuals to dominate the population, which can make the search less effective. Besides elite chromosomes, the GA uses the individuals in the current generation to create the children that make up the next generation, namely crossover chromosomes and mutation chromosomes. Crossover takes genes from a pair of individual chromosomes (parents) in the current generation and combines them to form a pair of children (crossover chromosome 1 and crossover chromosome 2).
b. Crossover
The number of crossover chromosomes in the next generation is termed the crossover count (Nc). The crossover fraction Pc ranges from 0 to 1. A random number R is generated and compared with the probability parameter Pc. If the random number is less than Pc, a crossover operation is selected; otherwise, no crossover is performed and the parent individual is returned. If the crossover operation is selected, a random number called the crossover point Rc is generated between 1 and the length of the chromosome Lc. Various types of crossover functions are available, such as one-point crossover, two-point crossover and heuristic crossover. One-point crossover is one of the most popular methods, wherein a pair of child chromosomes is formed by interchanging the last Lc − Rc elements of the first parent chromosome and the last Lc − Rc elements of the second parent chromosome.



Figure 3.10 Single-point crossover
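The single-point crossover of Figure 3.10 can be sketched on bit strings as follows; the function name and the five-bit parents are illustrative.

```python
import random

def one_point_crossover(parent1, parent2, rc=None):
    """Single-point crossover on equal-length bit strings: the last
    Lc - Rc genes of the two parents are interchanged, giving two
    children. If no crossover point rc is given, one is drawn at random."""
    assert len(parent1) == len(parent2)
    if rc is None:
        rc = random.randint(1, len(parent1) - 1)  # crossover point
    return parent1[:rc] + parent2[rc:], parent2[:rc] + parent1[rc:]

print(one_point_crossover("11111", "00000", rc=2))  # ('11000', '00111')
```

Each child keeps the head of one parent and inherits the tail of the other, which is exactly the exchange of the last Lc − Rc elements described in the text.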

c. Mutation
Mutation applies random changes to the genes of an individual chromosome (parent) in the current generation to produce a single new child. The number of mutation chromosomes in the next generation is termed the mutation count (Nm). Mutation can be performed using binary, uniform, Gaussian or adaptive algorithms. The uniform mutation algorithm is well suited to problems such as minimisation. It first selects a fraction of the elements of an individual chromosome for mutation, each entry having a probability equal to the mutation rate of being mutated. In the second step, the mutation operator replaces each selected entry with a random number drawn uniformly from user-specified upper and lower bounds for that element.
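The two-step uniform mutation just described can be sketched on a real-coded chromosome; the function name and bounds are illustrative assumptions.

```python
import random

def uniform_mutation(chromosome, rate, lower, upper, rng=random):
    """Uniform mutation: each gene is selected with probability `rate`
    (step 1), and every selected gene is replaced by a value drawn
    uniformly from its user-specified [lower[i], upper[i]] bounds (step 2)."""
    return [
        rng.uniform(lower[i], upper[i]) if rng.random() < rate else gene
        for i, gene in enumerate(chromosome)
    ]

# rate=0 leaves the chromosome unchanged; rate=1 resamples every gene
# inside its bounds.
```

The per-gene bounds would correspond to the experimental ranges of the wear-test factors, so mutated children always remain feasible test conditions.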

If the population size is N, the elite count is E and the crossover fraction is Pc, the numbers of each type of children in the next generation are as follows: elite count Ne = E; crossover count Nc = (N − E) × Pc; mutation count Nm = N − E − Nc.
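The partition of a generation into the three kinds of children can be checked numerically; rounding Nc to the nearest integer is an assumption, since the text does not specify how fractional counts are handled.

```python
def offspring_counts(N, E, Pc):
    """Partition a generation of size N into Ne = E elite children,
    Nc = round((N - E) * Pc) crossover children, and the remaining
    Nm = N - E - Nc mutation children."""
    Nc = round((N - E) * Pc)
    return E, Nc, N - E - Nc

print(offspring_counts(50, 2, 0.8))  # (2, 38, 10)
```

By construction the three counts always sum to the population size N, so every slot in the next generation is filled.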
(v) Decoding
On achieving optimisation, the best chromosomes are decoded into the required factor values.


Wear in composites is a complex phenomenon involving a number of process parameters, and it is vital to understand how the wear characteristics of the composites are affected by these parameters. Selecting the correct operating conditions is always a major concern, as a traditional experimental design would require many experimental runs to achieve a satisfactory result. In most processes, the testing parameters are determined either from experience or from a data book, and these do not provide optimal testing parameters for a particular situation. An approach based on the DoE technique was therefore adopted to obtain the maximum possible information from a minimum number of experiments, and the Genetic Algorithm was used to obtain optimum conditions for wear testing.