Вы находитесь на странице: 1из 19

CHAPTER 5

ANN MODELING
Introduction
Neural Networks are capable of learning complex relationships in data. By
mimicking the functions of the brain, they can discern patterns in data, and then
extrapolate predictions when given new data. The problems Neural Networks are used
for can be divided in two general groups:
Classification Problems: Problems in which you are trying to determine what type
of category an unknown item falls into. Examples include medical diagnoses and
prediction of credit repayment ability.
Numeric Problems: Situations where you need to predict a specific numeric
outcome. Examples include stock price forecasting and predicting the level of sales
during a future time period.
Neural Tools Package
A neural network is a system that takes numeric inputs, performs computations
on these inputs, and outputs one or more numeric values. When a neural net is
designed and trained for a specific application, it outputs approximately correct values
for given inputs. For example, a net could have inputs representing some easily
measured characteristics of an abalone (a sea animal), such as length, diameter and
weight. The computations performed inside the net would result in a single number,
which is generally close to the age of the animal (the age of an abalone is harder to
determine). The inspiration for neural nets comes from the structure of the brain.
A brain consists of a large number of cells, referred to as "neurons". A neuron receives
impulses from other neurons through a number of "dendrites". Depending on the
impulses received, a neuron may send a signal to other neurons, through its single
"axon", which connects to dendrites of other neurons. Like the brain, artificial neural

nets consist of elements, each of which receives a number of inputs, and generates a
single output, where the output is a relatively simple function of the inputs.
The Structure of a Neural Net
The structure of a neural net consists of connected units referred to as "nodes" or
"neurons". Each neuron performs a portion of the computations inside the net: a
neuron takes some numbers as inputs, performs a relatively simple computation on
these inputs, and returns an output. The output value of a neuron is passed on as one
of the inputs for another neuron, except for neurons that generate the final output
values of the entire system. Neurons are arranged in layers. The input layer neurons
receive the inputs for the computations, like the length, diameter, and weight of an
individual abalone. These values are passed to the neurons in the first hidden layer,
which perform computations on their inputs and pass their outputs to the next layer.
This next layer could be another hidden layer, if there is one. The outputs from the
neurons in the last hidden layer are passed to the neuron or neurons that generate the
final outputs of the net, like the age of the abalone.
Numeric and Category Prediction
When neural nets are used to predict numeric values, they typically have just
one output. This is because single-output nets are more reliable than multiple-output
nets, and almost any prediction problem can be addressed using single-output nets.
For example, instead of constructing a single net to predict the volume and the price
for a stock on the following day, it is better to build one net for price predictions, and
one for volume predictions. On the other hand, neural nets have multiple outputs when
used for classification/category prediction. For example, suppose that we want to
predict if the price of a stock the following day will "rise more that 1%", "fall more
than 1%", or "not change more than 1%". Then the net will have three numeric
outputs, and the greatest output will indicate the category selected by the net.
Training a Net
Training a net is the process of fine-tuning the parameters of the computation,
where the purpose is to make the net output approximately correct values for given

inputs. This process is guided by training data on the one hand, and the training
algorithm on the other. The training algorithm selects various sets of computation
parameters, and evaluates each set by applying the net to each training case to
determine how good the answers given by the net are. Each set of parameters is a
"trial"; the training algorithm selects new sets of parameters based on the results of
previous trials.
Computer Processing of Neural Nets
A neural net is a model of computations that can be implemented in various
types of computer hardware. A neural net could be built from small processing
elements, with each performing the work of a single neuron. However, neural nets
are typically implemented on a computer with a single powerful processor, like
most computers currently in use. With single-processor computers the program, like
NeuralTools, uses the same processor to perform each neuron's computations; in this
case the concept of a neuron describes part of the computations needed to obtain a
prediction, as opposed to a physical processing element.
Types of Neural Networks
There are various types of neural networks, differing in structure, kinds of
computations performed inside neurons, and training algorithms. One type offered in
NeuralTools is the Multi-Layer Feed forward Network. With MLF nets, a
NeuralTools user can specify if there should be one or two layers of hidden neurons,
and how many neurons the hidden layers should contain (NeuralTools provides help
with making appropriate selections, as described in the section on MLF nets).
NeuralTools also offers Generalized Regression Neural Nets and Probabilistic
Neural Nets; these are closely related, with the former used for numeric prediction,
and the latter for category prediction/classification. With GRN/PN nets there is no
need for the user to make decisions about the structure of a net. These nets always
have two hidden layers of neurons, with one neuron per training case in the first
hidden layer, and the size of the second layer determined by some facts about
training data.

The remaining sections of this chapter discuss in more detail each type of neural
network offered in NeuralTools.

Multi-Layer Feedforward Nets


Multi-Layer Feedforward Networks (also referred to as "Multi-Layer Perceptron
Networks") are systems capable of approximating complex functions, and thus
capable of modeling complex relationships between independent variables and a
dependent one.
MLF Architecture
The diagram below shows an MLF net for numeric prediction with three independent
numeric variables; the net was configured to have 2 neurons/nodes in the first hidden
layer, and 3 neurons/nodes in the second hidden layer.

The behavior of the net is determined by:


Its topology (the number of hidden layers and the numbers of
nodes in those layers)
The "weights" of connections (a parameter assigned to each
connection) and bias terms (a parameter assigned to each neuron)
Activation/transfer function, used to convert the inputs of each
neuron into its output

Specifically, a hidden neuron with n inputs first computes a weighted sum of its
inputs:
Sum = in0 * w0 + in1 * w1 + ... + inn * wn + bias
where in0 to inn are outputs of neurons in the previous layer, while w0 to wn are
connection weights; each neuron has its own bias value.
Then the activation function is applied to the Sum to generate the output of the
neuron.
A sigmoid (s-shaped) function is used as the activation function in hidden layer
neurons. Specifically, NeuralTools uses the hyperbolic tangent function. In
NeuralTools the output neuron uses identity as the activation function; that is, it
simply returns the weighted sum of its inputs. Neural nets are sometimes constructed
with sigmoid activation functions in output neurons. However, that is not needed for
a neural net to be able to approximate complex functions.
Moreover, sigmoid functions have restricted output range (-1 to 1 for the
hyperbolic tangent function), and there will typically be dependent values outside the
range. Thus using a sigmoid function in the output neuron would force an additional
transformation of output values before passing training data to the net.
When MLF nets are used for classification, they have multiple output neurons,
one corresponding to each possible dependent category. A net classifies a case by

computing its numeric outputs; the selected category is the one corresponding to the
neuron that outputs the highest value.
MLF Net Training
Training an MLF net consists of finding a set of connection weights and bias terms
that will get the net to generally give right answers when presented with new cases
(for simplicity the bias term will be omitted in the presentation below). Training
starts by assigning a set of randomly selected connection weights. A prediction is
made for each training case (by presenting independent values as inputs to obtain the
output). The output will most likely be different from the known dependent value.
Thus for each training case we have an error value. From these we compute an error
measure for the entire training set; it tells us how well the net does given the initial
weights.
The net will probably not do very well with the random initial assignment of weights,
and we proceed to subsequent trials: other assignments of weights. However, the
assignments of weights are no longer random, but rather are decided by our training
algorithm: the method for selecting connection weights based on results of previous
trials. The problem is one of optimization: we want to minimize the error measure by
changing connection weights.

Error Measures
The error measure used when training numeric prediction nets is the Mean Squared
Error over all the training cases, that is the mean squared difference between the
correct answer, and the answer given by the net. With classification, we have more
than one output for each training case (with one output corresponding to each
dependent category). We compute the Mean Squared Error over all the outputs for all
the training cases, by reference to the desired output values: for each training case we
want the output value to be close to 1 for the output corresponding to the correct
category, and we want the remaining output values to be close to 0.
Training Time

The NeuralTools MLF training algorithms restarts itself multiple times from different
initial starting weights. Therefore, the longer a net is trained, the better. The more
times it is allowed to restart itself, the more likely it is that the global minimum of the
error function will be found.
Topology Selection
The selection of the number of layers and the numbers of neurons in the layers
determines whether the net is capable of learning the relationship between the
independent variables and the dependent one. Typically a net with a single hidden
layer and two hidden neurons will not train to a satisfactory error level. However,
increasing the number of layers and neurons comes at a price that is often not worth
paying. A single hidden layer is sufficient for almost any problem; using two layers
will typically result in unnecessarily long training times. Moreover, a few neurons in
a single hidden layer are typically sufficient.
NeuralTools can auto-configure the net topology based on training data.
However, the Best Net Search feature offers a more reliable approach. As part of the
Best Net Search a range of single-hidden- layer nets with different numbers of neurons
will be trained. By default, five MLF nets, with 2 to 6 hidden neurons will be
included.

If sufficient time is available, the range can be broadened; but it is

recommended that it start with a 2-neuron net, for reasons related to preventing overtraining.
Preventing Over-Training
The term "over-training" refers to the situation where the net learns not only the
general characteristics of the relationship between independent variables and the
dependent one, but instead starts learning facts about training cases that will not apply
in general; that is, they will not apply to cases not included in training. Sometimes to
address this problem the testing set is divided into testing-while- training set, and the
proper testing set, to be used after training. The error on the testing-while-training set
is periodically computed during training. When it starts to increase, this is taken as
evidence that the net is beginning to over train, and training is stopped.

NeuralTools takes a different approach to preventing over-training. The


approach with two distinct testing sets is often unrealistic, insofar as typically there
is not enough data to split into a training set and two testing sets. Also, the increase
of error on a testing-while-training set is not a reliable indicator of over-training; the
increase could be local, and the error might continue to decrease with more training.
NeuralTools Best Net Search is designed to prevent over-training. With default
settings, Best Net Search will start with a net with 2 neurons, which is typically too
small to get over-trained. With default settings it will train nets with up to 6 neurons.
If the nets with 5 and 6 neurons over-train that will show in the results from the
single testing set; one of the nets with 2, 3 or 4 neurons will have the lowest testing
error.
Generalized Regression Neural Nets
GRN nets are used for numeric prediction/function approximation.
Architecture
A Generalized Regression Neural Net for two independent numeric variables is
structured as shown in the graph (assuming there are just three training cases):

The Pattern Layer contains one node for each training case. Presenting a
training case to the net consists here of presenting two independent numeric values.
Each neuron in the pattern layer computes its distance from the presented case. The
values passed to the Numerator and Denominator Nodes are functions of the distance
and the dependent value. The two nodes in the Summation Layer sum its inputs,
while the Output Node divides them to generate the prediction.
The distance function computed in the Pattern Layer neurons uses "smoothing
factors"; every input has its own "smoothing factor" value. With a single input, the
greater the value of the smoothing factor, the more significant distant training cases
become for the predicted value. With 2 inputs, the smoothing factor relates to the
distance along one axis on a plane, and in general, with multiple inputs, to one
dimension in multi-dimensional space.
Training a GRN net consists of optimizing smoothing factors to minimize the
error on the training set, and the Conjugate Gradient Descent optimization method is
used to accomplish that. The error measure used during training to evaluate different
sets of smoothing factors is the Mean Squared Error. However, when computing the
Squared Error for a training case, that case is temporarily excluded from the Pattern
Layer. This is because the excluded neuron would compute a zero distance, making
other neurons insignificant in the computation of the prediction.
Advantages of GRN nets:
Train fast
Do not require topology specification (numbers of hidden
layers and nodes)
PN nets not only classify, but also return the probabilities that
the case falls in different possible dependent categories
Advantages of MLF nets:
Smaller in size, thus faster to make predictions
More reliable outside the range of training data (for example,
when the value of some independent variable falls outside the

range of values for that variable in the training data); though


note that prediction outside the range of training data is still
risky with MLF nets
Capable of generalizing from very small training sets
Input Transformation
NeuralTools scales numeric variables before training, so that the values of
each variable are approximately in the same range. This is done to equalize the effect
variables have on net output during initial stages of training. When a variable is not
significant for making correct predictions, this will be reflected during training by
reducing the weights of connections leading from an input to first-hidden-layer
neurons. However, if that insignificant variable happens to have a larger order of
magnitude than other variables, the weights need to be reduced so much more to
compensate for the greater values.
The scaling uses the mean and the standard deviation for each variable,
computed on the training set. The mean is subtracted from each value, and the result
is divided by the standard deviation. The same scaling parameters are used when
testing the trained net or using it to make predictions.
Category/symbolic data cannot be used directly with a neural net, which takes
number as inputs. Consequently, every independent category variable is represented
by a number of numeric net inputs, one for every possible category. The "one-of-n"
conversion method is used.
CONJUGATE GRADIENTS DESCENT METHOD
The conjugate-gradient method is a general purpose simultaneous equation
solving method ideal for geophysical inversion and imaging. A simple form of the
algorithm iteratively searches the plane of the gradient and the previous step.
INTRODUCTION
The solution time for simultaneous linear equations grows cubically with the
number of unknowns. For equations with hundreds of unknowns the solutions require
minutes to hours. The number of unknowns somehow must be reduced by theoretical

means, or else numerical approximation methods to be used. A numerical technique


known as the conjugate-gradient method provides good approximations.
The conjugate-gradient method is an all-purpose optimizer and simultaneous
equation solver. It is useful for systems of arbitrarily high order because its iterations
can be interrupted at any stage and the partial result is an approximation that is often
useful. Like most simultaneous equation solvers, the exact answer (assuming exact
arithmetic) is attained in a finite number of steps. The conjugate-gradient method is
really a family of methods. There are perhaps a dozen or more forms of the conjugategradient algorithm. The various methods differ in treatment of underdetermined
systems, accuracy in treating ill conditioned systems, space requirements, and
numbers of dot products..
CHOICE OF DIRECTION
Any collection of search lines can be used for function minimization. Even if
the lines are random, the descent can reach the desired extremum because if the value
does not decrease when moving one way along the line, it almost certainly decreases
when moving the other way.
In the conjugate-gradient method a line is not searched. Instead a plane is
searched. A plane is made from an arbitrary linear combination of two vectors. Take
one vector to be the gradient vector g . Take the other vector to be the previous
descent step vector, say s =Xj Xj-1. Instead of g, a linear combination is needed, say
a g + s ( and are the distances to be determined). For minimizing quadratic
functions the plane search requires only the solution of a two-by-two set of linear
equations for and . (For nonquadratic functions a plane search is considered
intractable, whereas a line search proceeds by bisection).

ANN RESULTS:
Table 7 ANN Predictions for Solid and Multichannel Electrodes
SOLID

1MM

1.5MM

2MM

Trial

Ton

Toff

Pd

MRR

EWR

SR

MRR

EWR

SR

MRR

EWR

SR

MRR

EWR

SR

200

20

0.25

19.67

1.85

3.39

18.82

4.83

3.44

32.85

7.12

3.88

22.91

3.01

200

20

0.5

19.67

1.66

4.57

16.14

4.70

3.44

32.85

12.82

4.29

22.92

200

20

0.75

18.03

1.69

5.44

19.71

4.50

3.35

32.86

12.85

4.29

28.02

8.06
19.7
9
20.0
7

200

40

0.25

19.67

1.83

4.86

15.14

4.77

3.13

32.87

7.24

3.95

23.06

6.38

3.00

200

40

0.5

19.67

1.50

5.99

14.73

4.61

3.47

32.85

7.85

3.74

24.76

6.43

3.00

200

40

0.75

18.07

1.50

5.67

18.38

4.35

3.87

32.85

7.65

4.29

22.97

7.34

4.48

200

60

0.25

19.70

1.61

3.06

13.68

4.70

4.04

48.38

6.96

4.06

26.94

6.36

3.01

200

60

0.5

28.49

1.58

4.38

13.52

4.49

4.05

32.85

7.59

3.73

19.88

1.44

3.00

200

60

0.75

67.96

1.56

4.42

15.79

4.18

4.06

32.85

10.18

3.82

18.55

1.37

3.00

10

400

20

0.25

19.67

1.69

7.36

25.43

2.42

5.11

32.86

7.10

5.36

75.17

4.33

5.80

11

400

20

0.5

18.84

1.69

4.36

23.71

2.25

5.10

32.85

7.91

5.86

52.26

5.86

12

400

20

0.75

14.33

1.69

4.97

29.89

2.13

5.09

33.07

12.97

5.18

48.88

4.45
12.1
2

13

400

40

0.25

19.74

1.50

7.54

33.74

2.32

5.09

40.67

1.79

5.31

57.25

5.35

3.07

14

400

40

0.5

34.13

1.64

5.16

40.30

2.18

5.14

32.79

1.82

5.85

43.75

6.37

4.60

15

400

40

0.75

35.84

1.69

6.22

45.74

2.08

5.65

9.24

5.35

48.34

6.18

5.86

16

400

60

0.25

70.01

1.41

6.83

53.69

2.24

5.65

32.76
109.1
9

1.80

5.31

61.54

6.38

7.08

17

400

60

0.5

69.84

1.08

4.39

55.76

2.12

5.65

17.29

2.60

5.30

43.90

5.78

3.00

18

400

60

0.75

36.27

1.46

4.41

65.74

2.03

5.65

16.87

7.53

5.49

47.38

1.32

3.10

19

600

20

0.25

11.00

0.83

9.54

72.11

1.36

5.22

36.12

3.28

9.25

103.78

4.33

6.98

20

600

20

0.5

68.78

1.75

11.71

73.68

1.18

5.11

32.31

3.29

9.24

108.12

4.33

5.86

21

600

20

0.75

130.00

2.22

11.69

74.63

1.05

5.10

3.80

5.87

94.82

4.34

5.86

22

600

40

0.25

37.71

1.69

7.62

33.68

1.26

5.10

84.92
106.6
7

3.27

8.70

91.38

4.33

7.66

23

600

40

0.5

37.18

1.69

6.73

31.10

1.32

5.65

16.11

3.28

6.96

78.97

4.54

5.89

24

600

40

0.75

56.74

1.69

10.75

33.13

1.49

5.66

3.28

5.86

53.12

6.30

5.86

25

600

60

0.25

38.49

1.08

6.11

16.16

1.18

5.65

16.25
108.6
0

2.07

8.66

104.93

2.27

7.10

26

600

60

0.5

36.26

1.18

5.00

15.99

1.05

5.65

64.07

1.77

5.37

57.89

6.37

7.08

27

600

60

0.75

36.34

1.27

5.76

46.36

1.39

5.69

48.14

3.30

5.86

47.47

5.86

28

200

20

0.25

19.67

6.64

5.85

18.94

9.28

4.32

32.86

7.84

5.41

22.96

29

200

20

0.5

19.75

7.06

3.83

18.81

9.25

4.25

14.32

4.29

21.33

30

200

20

0.75

22.24

2.61

4.57

29.27

9.21

3.79

33.10
107.3
8

14.34

4.29

29.86

2.19
35.8
0
24.7
2
15.0
7

5.85
3.05

5.86

3.00
4.66
5.86

31

200

40

0.25

19.67

6.99

6.54

18.84

9.27

3.13

32.88

7.35

5.20

22.69

32

200

40

0.5

19.91

7.34

5.31

21.52

9.23

3.13

32.86

13.19

4.29

25.88

33

200

40

0.75

20.87

5.77

5.06

23.68

9.17

3.32

33.21

14.32

4.29

27.70

34

200

60

0.25

50.36

7.31

6.85

19.40

9.25

3.66

52.98

10.67

5.24

52.05

35

200

60

0.5

70.10

6.64

5.50

43.53

9.20

3.68

32.85

13.88

3.75

41.56

36

200

60

0.75

67.27

2.73

5.50

61.53

9.14

3.68

32.86

13.49

4.47

40.31

37

400

20

0.25

73.17

3.70

5.58

45.02

8.49

6.29

33.03

8.61

6.22

79.23

38

400

20

0.5

84.28

2.47

5.16

70.69

8.65

6.29

8.61

5.86

90.00

39

400

20

0.75

142.81

2.91

10.52

82.04

8.69

6.25

91.27
194.7
9

13.49

5.85

40

400

40

0.25

58.61

2.95

5.43

66.47

8.37

5.10

43.36

3.54

41

400

40

0.5

84.05

2.22

4.01

92.63

8.60

5.11

7.60

42

400

40

0.75

88.35

2.40

5.81

92.28

8.67

5.77

43

400

60

0.25

70.42

2.69

6.83

94.40

8.23

5.65

33.09
104.7
4
123.9
3

44

400

60

0.5

68.84

2.16

6.17

61.18

8.55

5.66

45

400

60

0.75

33.80

2.67

5.43

16.30

8.66

46

600

20

0.25

124.03

2.78

11.83

241.75

47

600

20

0.5

157.21

2.92

12.20

48

600

20

0.75

179.23

2.92

49

600

40

0.25

94.19

1.53

50

600

40

0.5

93.47

51

600

40

0.75

52

600

60

53

600

54

600

55

12

56

57

35.1
3
18.5
6
17.1
0
17.1
5

3.00
3.00
3.11
3.12
3.00

108.96

6.94
18.0
4
17.5
0
19.4
7
18.8
4

5.31

98.11

6.38

4.64

5.86

83.44

3.02

14.32

5.85

95.06

7.07
14.7
6

6.19

5.31

103.41

2.51

7.10

31.92

6.16

5.69

97.33

1.32

3.05

5.96

32.67

9.61

7.03

94.24

1.75

3.00

5.65

6.66

3.37

9.26

180.12

4.36

12.59

193.33

6.15

8.48

6.39

9.25

177.43

201.59

7.07

9.16

8.56

6.04

177.51

7.88
18.2
3

8.72

11.05

82.06
194.5
3
195.3
1

9.62

241.47

5.58

5.97

118.03

3.28

9.24

129.82

4.41

9.74

2.40

11.08

201.61

5.98

8.24

3.28

8.96

154.00

6.16

9.15

151.91

2.88

8.73

201.60

6.81

8.56

81.72
193.0
9

3.28

5.88

132.38

1.51

8.04

0.25

93.65

1.62

6.60

106.41

5.53

7.08

118.93

6.39

8.71

113.45

6.32

8.78

60

0.5

91.71

1.58

10.72

199.22

5.84

8.43

94.48

5.04

6.32

110.45

1.36

8.36

60

0.75

83.28

2.79

10.03

201.58

6.56

8.57

117.70

4.00

7.96

52.35

6.35

200

20

0.25

113.91

6.04

5.70

36.80

7.13

4.33

17.92

5.86

39.63

12

200

20

0.5

138.71

6.17

3.31

39.85

7.12

4.35

17.56

4.59

35.78

12

200

20

0.75

139.30

6.57

3.38

161.29

7.09

4.66

96.96
194.8
7
195.3
2

15.03

4.45

82.58

58

12

200

40

0.25

34.76

6.04

4.81

36.78

7.13

4.32

33.16

13.45

5.38

40.03

59

12

200

40

0.5

117.74

6.04

5.61

36.78

7.11

4.25

15.37

6.19

40.53

60

12

200

40

0.75

174.39

6.04

4.65

37.90

7.07

4.40

110.75
195.0
0

14.54

6.52

48.73

61

12

200

60

0.25

70.29

6.52

7.94

36.78

7.12

3.59

58.50

20.09

7.51

40.61

62

12

200

60

0.5

42.58

6.04

7.84

36.94

7.09

3.80

19.09

7.10

42.97

63

12

200

60

0.75

34.78

6.14

5.50

85.39

7.05

5.01

18.82

6.54

43.75

64

12

400

20

0.25

122.71

6.60

8.22

134.60

6.55

6.44

9.78

8.33

123.92

65

12

400

20

0.5

185.99

6.55

11.32

134.69

6.32

7.78

8.83

5.89

189.98

66

12

400

20

0.75

155.57

2.54

8.92

156.29

5.99

7.91

13.88

6.94

195.90

67

12

400

40

0.25

134.41

6.04

6.91

140.68

6.63

7.13

33.24
124.7
5
194.6
5
195.3
2
195.3
2
108.3
2

8.85

6.48

104.82

1.35
34.9
8
30.8
3
30.9
4
30.8
2
30.9
1
35.1
4
30.8
7
34.1
5
44.4
4
22.8
2
22.8
5
15.0
2
31.0
1

3.00
3.63
5.87
5.87

5.86

8.66

5.62
5.43
7.98
4.62
4.20
3.83
5.68
3.28
3.17
5.96
8.79
8.81
9.89

68

12

400

40

0.5

187.44

6.05

9.01

173.91

6.49

7.89

69

12

400

40

0.75

137.79

10.27

174.25

6.28

8.36

70

12

400

60

0.25

125.46

2.23
10.5
8

7.82

43.93

6.67

6.96

71

12

400

60

0.5

89.94

6.03

7.96

97.65

6.59

7.33

72

12

400

60

0.75

103.98

9.29

7.34

195.28

6.46

7.36

73

12

600

20

0.25

157.33

6.26

11.36

205.63

3.27

9.22

74

12

600

20

0.5

156.09

1.47

11.37

241.61

3.09

9.22

75

12

600

20

0.75

166.25

1.66

9.53

173.81

2.97

9.22

76

12

600

40

0.25

165.21

5.52

12.01

241.17

3.58

9.22

77

12

600

40

0.5

155.44

1.58

13.16

201.07

3.28

9.77

78

12

600

40

0.75

127.31

12.94

183.39

3.09

9.75

79

12

600

60

0.25

125.52

2.67
13.3
5

12.08

241.73

4.03

8.62

80

12

600

60

0.5

109.89

4.09

12.59

184.63

3.59

8.58

194.8
4
195.3
2
125.2
4
109.4
4
224.3
3
202.2
8
195.3
2
195.3
4
286.1
2
202.8
6
227.3
3
201.4
7
265.0
3

81

12

600

60

0.75

112.70

11.68

13.61

201.60

3.29

8.57

200

20

0.25

19.67

2.05

4.32

19.08

4.88

200

20

0.5

19.28

1.75

5.43

79.86

200

20

0.75

18.05

1.69

5.45

200

40

0.25

19.67

2.05

200

40

0.5

18.51

200

40

0.75

82
83
84
85
86

8.67

8.02

110.01

14.35

8.11

158.42

7.84

7.56

102.36

9.74

8.11

102.00

13.89

8.11

102.82

8.40

9.26

276.90

8.61

9.60

199.16

8.61

9.82

183.76

3.31

11.10

276.21

3.29

11.49

248.79

3.44

8.12

7.75

20.8
4
15.5
6
18.6
3
19.2
7
24.4
7
18.0
2
18.0
8
14.9
4

5.94
8.42
10.04
7.24
5.94
12.83
8.81
8.81
10.04

183.83

7.19
14.4
5
15.0
6

11.11

229.42

1.31

10.04

7.74

9.20

203.85

1.37

10.04

211.81

7.78

8.11

154.10

6.67

3.40

33.07

24.64

3.73

23.86

5.63

3.25

32.85

18.34

4.18

23.94

31.72

6.07

3.23

32.87

14.22

4.29

33.00

4.00

78.68

5.29

3.23

97.85

15.37

3.73

24.06

6.70
28.5
0
18.0
8
19.9
2
21.4
2

2.05

4.41

83.63

5.90

3.24

32.84

14.50

3.73

24.14

5.71

18.03

1.76

5.71

15.75

6.19

3.26

32.82

13.24

4.29

22.40

11.27
8.81

5.86
5.86
5.86
3.00

200

60

0.25

19.67

1.45

4.38

83.74

5.65

3.52

112.54

10.41

3.73

33.24

6.66
17.6
0
22.1
6

89

200

60

0.5

18.12

1.53

4.38

32.48

6.08

3.80

22.56

8.13

3.73

18.85

6.53

3.00

90

200

60

0.75

18.91

6.77

4.40

15.74

6.25

3.82

18.48

7.64

3.74

18.48

11.62

3.94

91

400

20

0.25

47.31

2.01

5.97

55.95

3.65

5.86

80.87

4.33

5.44

79.21

4.33

5.87

92

400

20

0.5

63.85

1.69

4.52

24.10

3.66

5.72

32.63

11.70

5.28

66.39

5.86

400

20

0.75

64.54

1.85

5.67

17.12

3.73

7.45

14.12

4.29

50.97

4.64
15.2
3

87
88

93

5.86
3.95

400

40

0.25

21.13

2.02

3.25

83.50

3.65

5.16

39.57
108.9
6

2.48

5.31

71.76

4.33

8.78

95

400

40

0.5

39.73

2.01

3.22

15.75

3.68

6.06

17.90

7.65

4.89

47.52

4.67

5.86

96

400

40

0.75

32.93

2.01

5.09

15.74

3.83

8.31

7.56

4.29

48.47

6.41

5.86

400

60

0.25

21.44

1.61

6.20

24.55

3.66

5.82

3.55

5.30

83.49

6.11

7.10

400

60

0.5

52.14

2.38

4.38

50.74

3.73

7.29

16.22
108.5
6
103.7
6

7.57

5.01

68.07

6.43

5.68

400

60

0.75

65.25

11.75

4.38

50.42

4.00

8.47

7.54

4.25

44.82

5.45

5.86

600

20

0.25

34.90

2.19

13.54

162.74

2.39

9.07

21.91
108.4
4

3.43

8.74

104.91

4.33

9.93

600

20

0.5

88.65

2.23

11.79

183.48

2.24

8.71

3.31

9.25

110.33

4.33

5.87

600

20

0.75

106.68

2.25

11.63

183.63

2.14

8.15

3.96

5.92

107.00

4.33

5.86

600

40

0.25

49.02

1.63

5.86

16.52

2.30

8.06

21.43
169.1
6
108.5
6

2.88

8.70

88.65

4.33

9.96

600

40

0.5

80.83

1.68

8.53

85.90

2.18

8.57

98.94

3.29

8.51

85.61

4.33

8.81

94

97
98
99
100
101
102
103
104

5.86

105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139

600

40

0.75

88.75

10.9
9

10.18

182.12

2.10

8.57

600

60

0.25

116.71

1.75

6.77

15.74

2.23

8.92

600

60

0.5

83.17

11.41

4.53

15.82

2.13

600

60

0.75

83.08

11.46

8.96

27.16

200

20

0.25

20.41

6.56

3.32

200

20

0.5

74.59

4.54

200

20

0.75

81.28

200

40

0.25

200

40

200

8.91

5.86

65.86

4.38

5.86

1.85

8.70

96.77

4.34

7.15

8.58

44.05
133.8
3
141.0
1

2.50

5.33

70.50

5.22

9.96

2.07

8.57

59.56

9.01

6.08

49.25

6.04

18.81

11.94

4.51

33.16

28.58

3.74

23.36

3.25

18.89

11.97

6.00

29.67

4.29

23.33

1.40

3.26

54.34

11.99

7.15

29.86

4.29

38.22

19.67

5.63

5.21

18.83

11.93

3.66

40.96
189.9
0
103.6
7

28.52

3.74

36.21

0.5

18.05

6.79

5.54

49.38

11.97

5.41

32.88

28.45

3.80

35.53

40

0.75

23.98

7.84

5.79

38.73

11.98

6.02

28.62

5.40

42.84

200

60

0.25

19.69

4.72

6.75

36.73

11.91

4.11

44.08
125.1
6

26.93

4.37

67.32

200

60

0.5

23.29

5.50

83.67

11.96

6.31

37.10

25.48

5.90

42.05

200

60

0.75

29.58

8.90
10.9
1

5.50

15.76

11.98

6.60

32.37

23.44

6.54

39.62

400

20

0.25

95.93

8.23

4.88

28.13

11.53

9.20

24.12

7.18

84.71

400

20

0.5

101.84

6.86

6.69

116.72

11.43

9.22

29.01

5.85

117.98

400

20

0.75

107.70

5.28

8.91

112.47

11.37

9.22

27.06

4.37

125.48

400

40

0.25

20.21

4.06

5.43

56.56

11.61

8.61

93.83
187.2
8
195.3
0
122.7
7

18.59

5.32

101.85

400

40

0.5

65.65

4.11

4.36

19.34

11.56

8.12

22.63

6.21

92.77

400

40

0.75

155.40

6.41

9.52

50.57

11.52

8.58

17.90

6.48

99.09

400

60

0.25

69.73

6.83

75.39

11.66

8.57

22.76

7.17

103.62

400

60

0.5

67.58

4.38

15.75

11.63

8.57

111.70

21.88

7.55

101.40

400

60

0.75

34.32

5.41

15.77

11.61

8.57

14.26

6.71

93.80

600

20

0.25

159.25

2.23

10.36

241.45

8.84

9.32

17.14

9.26

183.05

600

20

0.5

169.29

2.68

8.46

173.78

8.67

9.22

12.24

9.26

177.59

600

20

0.75

202.16

11.85

8.99

200.38

8.57

9.22

7.55

7.05

177.42

4.47
13.0
1
17.9
9

12.90

58.58
264.7
6
184.0
8
182.3
4

4.15
15.0
4

6.39

5.02
10.4
8
10.5
1

39.30
185.3
8
108.6
2

6.63
33.8
8
35.5
1
33.3
7
35.7
6
34.7
6
38.3
5
19.5
4
35.3
4
40.0
4
33.7
2
18.0
4
18.7
9
20.4
0
14.6
2
18.8
3
22.1
9

600

40

0.25

196.84

11.02

179.57

9.12

9.60

4.33

12.89

40

0.5

200.85

11.82

201.52

8.85

9.10

8.78
10.7
9

164.30

600

168.28

4.35

12.85

600

40

0.75

201.78

9.67

197.29

8.68

8.66

6.06

8.80

600

60

0.25

116.09

7.76

55.36

9.52

11.03

12.78

8.10
10.9
2

157.33

11.55
14.9
0

117.40

4.79

10.02

600

60

0.5

84.69

12.32

194.25

9.13

8.59

7.79

8.38

115.15

600

60

0.75

123.96

10.05

191.16

8.86

8.57

10.97

8.11

58.02

12

200

20

0.25

88.47

6.74

5.70

36.77

29.35

7.26

30.06

5.17

39.66

12

200

20

0.5

125.57

3.26

36.92

28.95

7.26

30.06

6.44

42.11

12

200

20

0.75

136.25

6.70
14.3
9

4.68

103.49

28.37

7.26

30.06

6.54

114.09

12

200

40

0.25

74.24

6.43

5.78

36.73

29.62

7.26

117.34

30.07

6.50

40.10

5.71
10.0
0
35.1
2
32.7
6
42.8
3
31.8
8

12.71

11.11
10.5
5

113.95
277.5
8
210.8
3
141.0
4
148.2
0
219.4
1
188.7
4
195.3
0
195.3
2

13.79

2.52
12.1
4

7.43
9.90

5.27
5.86
5.86
3.01
3.27
5.86
6.78
3.00
3.01
7.09
6.66
6.37
7.40
6.02
5.95
7.14

5.84

8.83
8.80

11.38
6.01
8.80
8.80
6.04

140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162

12

200

40

0.5

77.74

12

200

40

0.75

67.40

12

200

60

0.25

28.39

12

200

60

0.5

30.49

12

200

60

0.75

88.04

15.5
9
15.3
8
17.0
1
16.6
2
15.1
0

12

400

20

0.25

144.04

12

400

20

0.5

12

400

20

12

400

12

4.65

33.50

29.37

7.26

4.66

41.07

28.96

7.25

7.94

36.58

29.78

6.15

5.51

35.81

29.63

6.62

5.50

30.24

29.38

6.61

6.73

10.92

204.46

23.29

7.49

173.55

5.43

10.77

205.78

24.51

7.49

0.75

191.88

8.92

191.83

24.94

7.50

40

0.25

170.25

7.78

95.24

22.99

7.51

400

40

0.5

182.18

11.94
12.0
4
12.3
1

11.80

239.13

24.41

7.51

12

400

40

0.75

175.71

10.34

173.78

24.93

8.06

12

400

60

0.25

67.55

7.74

71.44

22.79

7.32

12

400

60

0.5

107.74

11.95
14.3
7
12.2
3

8.18

33.47

24.36

6.95

12

400

60

0.75

144.07

11.27

10.46

183.54

24.97

6.90

12

600

20

0.25

221.81

1.89

11.36

227.40

15.62

9.91

12

600

20

0.5

202.44

11.81

11.37

183.47

16.29

9.23

12

600

20

0.75

227.95

11.94

8.92

173.78

17.99

9.22

12

600

40

0.25

235.00

11.95

12.05

241.57

15.53

12.56

12

600

40

0.5

184.63

11.95

13.16

173.79

16.04

9.73

12

600

40

0.75

159.71

10.87

242.07

17.43

9.77

12

600

60

0.25

200.22

12.47

213.96

15.47

13.04

12

600

60

0.5

160.09

11.95
15.7
3
13.9
9

12.55

210.25

15.86

9.12

12

600

60

0.75

159.61

11.69

11.37

215.60

16.95

8.63

190.3
3
195.3
1
125.4
9

30.07

6.54

41.17

30.06

6.54

49.25

29.42

6.68

40.93

51.26
192.3
5
257.6
7
195.3
2
195.3
2
280.4
7
198.3
0
199.8
8
128.1
3
298.3
5
213.4
2
287.6
7
196.6
5
214.1
6
294.8
6
298.3
7

33.44

5.99

41.65

34.49

41.91

24.33

6.54
10.9
0

29.82

8.11

196.25

30.06

7.79

197.48

19.12

7.69

129.61

27.73

8.11

131.84

30.00

7.90

185.71

26.56

7.56

104.87

29.19

8.09

101.89

29.13

7.97

106.34

19.03

11.47

277.75

20.57

11.51

217.03

25.82

11.18

184.15

19.01

11.51

280.55

18.95

11.51

268.29

211.18
295.6
2
303.5
0
229.9
6

24.40

186.16

23.39

8.17
10.9
6
10.7
5

28.13

8.11

173.29

23.43

189.78

267.45
256.38

CONFIRMATION TEST :
Once the ANN prediction is achieved, it is needed to verify the ANN results
with experimental results. So the input parameters are randomly selected and
experiments are conducted for these conditions. The selected input parameters are
shown in the table 8. The comparison of ANN predictions and confirmation test
results are shown in the table 9. Also the errors in the predictions are indicated.
Table 8 Input parameters for verification experiments
Expt

Ip

Ton

Toff

Pf

42.0
6
45.0
2
41.1
2
45.0
0
45.0
8
33.8
4
34.1
8
34.0
2
35.4
3
34.4
7
42.4
3
25.0
7
44.1
5
42.1
9
33.8
3
18.0
5
16.8
7
25.7
2
15.9
7
24.8
4
20.11
10.1
2
23.6
6

5.91
8.72
9.91
5.75
5.61
10.40
8.81
8.81
10.04
8.70
8.81
10.04
10.02
6.89
12.90
9.63
8.81
12.27
12.90
8.82
11.04
11.29
12.84

No

1
2
3
4
5
6

+
+
+
-

(amps
)
4
4
8
8
12
12

(s)

(s)

200
400
200
400
400
600

40
60
40
20
20
60

(kg/cm
2)
0.75
0.5
0.75
0.75
0.25
0.25

Table 9 Comparison of experimental results with the ANN model prediction


ANN model
predictions

Experimental results

Electrod
e

Solid
Eectrode

1mm
MCE

1.5mm
MCE

2mm
MCE

MRR

EWR

(mg/mi
n)
16.56
46.44
19.28
94.78
110.63

(mg/mi
n)
1.72
2.86
4.56
6.02
5.78

213.16

13.88

18.06
56.38
19.62
98.06
120.39

3.56
4.52
8.8
12.56
6.34

220.17

14.59

26.52
93.2
28.66
170.82
176.94

6.12
6.58
15.62
23.84
10.38

288.96

21.88

21.32
76.48
23.81
114.39
130.25

6.52
7.04
19.82
21.66
19.38

270.06

22.46

SR

(m)
5.1
4.66
5.68
8.2
8.83
11.9
2

MRR

EWR

(mg/mi
n)
18.07
52.14
20.87
107.70
122.71

(mg/mi
n)
1.50
2.38
5.77
5.28
6.60

200.22

15.73

Prediction error(%)
SR

MRR

EWR

5.67
4.38
5.06
8.91
8.22

(mg/mi
n)
-9.12
12.27
8.25
-13.63
10.92

(mg/mi
n)
12.79
-16.78
26.54
-12.29
14.19

11.18
6.01
10.92
8.66
6.91

12.47

6.07

-13.33

4.61

15.99
-22.19
-17.48
4.20
-9.47
3.31

8.05
12.50
28.80
6.21
1.77
10.80

(m)

SR

(m)

3.44
5.66
3.54
9.06
7.22
12.3
9

18.38
50.74
23.68
112.47
134.60

4.35
3.73
9.17
11.37
6.55

3.87
7.29
3.32
9.22
6.44

10.04
-1.77
-10.00
20.69
-14.70
11.80

213.96

15.47

13.04

2.82

-6.03

5.25

4.1
4.62
4.18
5.68
8.9
11.4
8

32.85
103.76
33.21
195.30
194.65

7.65
7.57
14.32
27.06
9.78

4.29
5.01
4.29
4.37
8.33

10.30
-23.87
11.33
15.88
-14.33
10.01

10.45
-25.00
15.05
-8.32
13.51
-5.78

10.89
4.63
8.44
2.63
23.06
6.40

295.62

23.43

10.96

-2.30

-7.08

4.53

3.62
5.2
3.83
6.94
6.7
12.5
6

22.97
68.07
27.69
125.48
123.92

7.34
6.43
17.10
18.79
22.82

4.48
5.68
3.11
6.37
5.96

12.95
-7.74
-11.00
16.30
-9.69
-4.86

12.46
-12.58
-8.66
-13.72
-13.25
17.75

8.28
23.76
9.23
18.80
8.21
11.04

267.45

20.11

11.04

0.97

10.46

12.10

8.43

12.74

13.86

The number of neurons and the number of epochs were varied to reach
minimum values of root mean square error. The predicted results based on the above
model were compared with actual values and are found to be in good agreement as
shown in figure 6. The proposed model can be employed successfully in prediction of
MRR, EWR and SR of the stochastic and complex EDM process.

Figure 6. Actual

Vs
Predicted values

OPTIMIZATION :
Optimization conditions differ based on the performance characteristic
considered. Here, for MRR maximization results is the optimum condition and for
EWR and SR, minimization results in optimum condition.
Using ANN, the response parameters are predicted for all the combinations
of the machining conditions i.e., 162 experimental conditions.
Optimum conditions for MRR, EWR and SR for each of the electrode are
readily reckoned from the ANN results. Confirmation experiments are conducted for
the optimum machining conditions, response variables are found out and
comparisions among predicted and experimental values are made.

Table 10 Comparison of Optimum Conditions

Electro
de

Respons
es

TON

TOFF

Pd

EXPERIMEN
TAL VALUE

Solid
Electro
de

MRR
EWR
SR
MRR
EWR
SR
MRR
EWR
SR
MRR
EWR
SR

+
+
+
+
+
+
+
+

12
4
4
12
4
4
12
4
4
12
4
4

600
600
200
600
600
200
600
400
200
600
400
200

40
20
60
40
60
60
60
40
60
40
60
40

0.25
0.25
0.25
0.75
0.5
0.25
0.5
0.25
0.5
0.25
0.75
0.25

240.17
0.74
2.8
246.89
1.26
3.2
315.36
1.65
3.58
286.44
1.58
2.56

1mm
MCE
1.5mm
MCE
2mm
MCE

PREDICT
ED
VALUE
235
0.83
3.06
242.07
1.05
3.13
303.5
1.79
3.73
280.55
1.32
3.00

%ERRO
R
2.15
-12.16
-9.29
1.95
16.67
2.19
3.76
-8.48
-4.19
2.06
16.46
-17.19

..
CONCLUDING REMARKS :
ANN provides a means for finding the response variables for all the different
machining combinations. Conducting experiments for all the machining combinations
is time consuming and costly affair. Further, with ANN predicted results the optimum
condition for each of the MRR, EWR and SR can be found out. The predicted
responses of the ANN model are in very good agreement with the experimental
values. This method is also tested for its prediction potentials for non-experimental
patterns.

Вам также может понравиться