
Presentation
on
Neural Networks:
Adaline & Madaline
INTRODUCTION
Developed in 1960 by Widrow & Hoff.
It is very closely related to the perceptron learning rule; the rule is called the Delta rule.
It adjusts the weights to reduce the difference b/w the net i/p to the o/p unit and the desired o/p, which results in a least mean square error.
Adaline (Adaptive Linear Neuron) & Madaline (Multilayered Adaline) networks use this LMS learning rule & are applied to various neural n/w applications.
The weights on the interconnections of the Adaline & Madaline networks are adjustable.
This is a supervised type of learning.
WHAT IS AN ADALINE NETWORK?

Stands for ADAptive LINear Element.
It is a simple perceptron-like system that accomplishes classification by modifying the weights in such a way as to diminish the MSE (mean square error) at every iteration. This can be accomplished using a gradient adaptive linear element.
When the Adaline is to be used for pattern classification, then, after training, a threshold function is applied to the net input to obtain the activation.
ADALINE
It uses bipolar activations for its input signals and target output (+1 or -1).
The learning rule is called the Delta rule, the Least Mean Square (LMS) rule, or the Widrow-Hoff rule.
Fig(a): Single-layer n/w (Adaline). Inputs X1, X2, ..., Xn with weights W1, W2, ..., Wn, plus a bias input (fixed at 1, weight b), feed the single output unit Y.
ADALINE SCHEMATIC

Schematic: inputs i1, i2, ..., in are combined into the net input w0 + w1 i1 + ... + wn in to produce the output; the output is compared with the desired value class(i) (1 or -1), and the weights are adjusted to reduce the difference.
ARCHITECTURE

The architecture of the Adaline is shown in Fig(a).
The Adaline has only one output unit. This o/p unit receives i/p from several units & also from a bias, whose activation is always +1.
In Fig(a), an i/p layer with X1, ..., Xi, ..., Xn & a bias, and an o/p layer with only one neuron are present.
The links b/w the input & output neurons possess weighted interconnections. These weights get changed as the training progresses.
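
As a minimal sketch (not part of the original slides; the function name is illustrative), the net-input computation of the single output unit can be written in Python as:

def net_input(x, w, b):
    # Yin = b + sum of Xi * Wi over all input units
    return b + sum(xi * wi for xi, wi in zip(x, w))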
DELTA LEARNING RULE
(WIDROW-HOFF RULE OR LMS)
The delta rule changes the weights of the connections so as to minimize the difference b/w the net i/p to the o/p unit, Yin, & the target value t.
The delta rule is given by:

ΔWi = α(t - Yin)Xi

where Xi is the activation of i/p unit i (X is the input vector), Yin is the net i/p to the o/p unit (Yin = b + X·W), t is the target value, and α is the learning rate (here α = 0.2).
The mean square error for a particular training pattern is

E = Σj (tj - Yinj)²

The training process is continued until the error, which is the difference between the target and the net input, becomes minimum.
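
The following Python sketch (illustrative names, assuming the definitions above) applies one delta-rule update for a single training pattern:

def delta_rule_step(x, w, b, t, alpha=0.2):
    # net input Yin = b + sum(Xi * Wi)
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    err = t - y_in                                       # (t - Yin)
    w = [wi + alpha * err * xi for xi, wi in zip(x, w)]  # Wi += alpha*(t - Yin)*Xi
    b = b + alpha * err                                  # b  += alpha*(t - Yin)
    return w, b, err ** 2                                # pattern error (t - Yin)^2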
ALGORITHM
Initialize the weights (not zero, but small random values are used). Set the learning rate α.
Set the activations of the input units.
Compute the net input: Yin = b + Σi Xi Wi
From the delta learning rule: ΔWi = α(t - Yin)Xi
Update the bias & weights, for i = 1 to n:
Wi(new) = Wi(old) + α(t - Yin)Xi
b(new) = b(old) + α(t - Yin)
Test the stopping condition using E = Σj (tj - Yinj)².
The training process is continued until the error, which is the difference between the target and the net input, becomes minimum.
Finally, apply the activation function to obtain the o/p Y:
Y = f(Yin) = 1, if Yin ≥ 0
            -1, if Yin < 0

Example: Develop an Adaline n/w for the ANDNOT function (target t = +1 only when X1 = 1 and X2 = -1).

Fig(b): ANDNOT network. Inputs X1, X2 feed the output Y with initial weights W1 = W2 = 0.2 and bias b = 0.2.
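
The epoch tables below can be generated with a short, runnable Python sketch of the training loop (an illustration, not the original source; the hand-rounded table entries may differ slightly from the computed values):

# ANDNOT in bipolar form: t = +1 only for X1 = 1, X2 = -1
patterns = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]

w, b, alpha = [0.2, 0.2], 0.2, 0.2          # initial weights and learning rate
for epoch in range(6):
    total_error = 0.0
    for x, t in patterns:
        y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # net input Yin
        err = t - y_in
        w = [wi + alpha * err * xi for xi, wi in zip(x, w)]
        b += alpha * err
        total_error += err ** 2                           # accumulate (t - Yin)^2
    print(f"Epoch {epoch + 1}: W1={w[0]:.2f}, W2={w[1]:.2f}, b={b:.2f}, E={total_error:.2f}")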
Epoch 1
X1  X2  b   t    Yin    (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1    0.6    -1.6     -0.32  -0.32  -0.32  -0.12  -0.12  -0.12  2.56
 1  -1  1   1   -0.12    1.12     0.22  -0.22   0.22   0.10  -0.34   0.10  1.25
-1   1  1  -1   -0.34   -0.66     0.13  -0.13  -0.13   0.23  -0.47  -0.03  0.44
-1  -1  1  -1    0.21   -1.21     0.24   0.24  -0.24   0.47  -0.23  -0.27  1.46
Total error E = 5.71
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
Epoch 2
X1  X2  b   t    Yin    (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1   -0.02   -0.98    -0.19  -0.19  -0.19   0.28  -0.43  -0.46  0.95
 1  -1  1   1    0.25    0.76     0.15  -0.15   0.15   0.43  -0.58  -0.31  0.57
-1   1  1  -1   -1.33    0.33    -0.06   0.06   0.06   0.37  -0.51  -0.25  0.10
-1  -1  1  -1   -0.11   -0.90     0.18   0.18  -0.18   0.55  -0.33   0.43  0.8
Total error E = 2.43
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
Epoch 3
X1  X2  b   t    Yin    (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1    0.64   -1.64    -0.33  -0.33  -0.33   0.22  -0.66   0.1   2.69
 1  -1  1   1    0.98    0.018    0.03   0.03   0.03   0.22  -0.69   0.14  0.0003
-1   1  1  -1   -0.79   -0.21     0.04   0.04   0.04   0.27  -0.74   0.09  0.04
-1  -1  1  -1    0.57   -1.57     0.31  -0.31  -0.31   0.58  -0.43   0.22  2.46
Total error E = 5.198
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
Epoch 4
X1  X2  b   t    Yin     (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1   -0.06    -0.93    -0.18  -0.18  -0.18   0.39  -0.61  -0.41  0.86
 1  -1  1   1    0.601    0.39     0.08  -0.08   0.08   0.47  -0.69  -0.33  0.15
-1   1  1  -1   -1.49     0.49    -0.09   0.09   0.09   0.37  -0.59  -0.23  0.24
-1  -1  1  -1    0.006   -0.994    0.2    0.2   -0.2    0.57  -0.4   -0.45  0.98
Total error E = 2.257
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
Epoch 5
X1  X2  b   t    Yin    (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1   -0.27   -0.727   -0.14  -0.14  -0.14   0.43  -0.55  -0.59  0.52
 1  -1  1   1    0.33    0.62     0.12  -0.12   0.12   0.55  -0.67  -0.47  0.38
-1   1  1  -1   -1.69    0.69    -0.13   0.13   0.13   0.42  -0.53  -0.33  0.47
-1  -1  1  -1   -0.21   -0.79     0.15   0.15  -0.15   0.57  -0.37  -0.49  0.61
Total error E = 2.004
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
Epoch 6
X1  X2  b   t    Yin    (t-Yin)   ΔW1    ΔW2    Δb     W1     W2     b      E
 1   1  1  -1   -0.28   -0.71    -0.14  -0.14   0.14   0.43  -0.52  -0.63  0.50
 1  -1  1   1    0.37    0.68     0.13  -0.13   0.13   0.57  -0.65  -0.49  0.46
-1   1  1  -1   -1.71    0.71    -0.14   0.14   0.14   0.42  -0.6   -0.35  0.54
-1  -1  1  -1   -0.26   -0.74     0.14   0.14  -0.14   0.57  -0.45  -0.49  0.54
Total error E = 2.004
After Epoch 6 we get approximately W1 ≈ 0.5, W2 ≈ -0.5, b ≈ -0.5.
Using the above weights, the LMS error is calculated as E = Σj (tj - Yinj)².
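
As a quick check (illustrative code, not from the slides), applying the bipolar threshold activation with the rounded final weights classifies all four ANDNOT patterns correctly:

def activation(y_in):
    # bipolar threshold: +1 if Yin >= 0, else -1
    return 1 if y_in >= 0 else -1

w1, w2, b = 0.5, -0.5, -0.5
for (x1, x2), t in [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]:
    y = activation(b + x1 * w1 + x2 * w2)
    print((x1, x2), "->", y, "target", t)   # the output matches the target in every case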
MADALINE
A Madaline is a combination of Adalines.
It is also called a multilayered Adaline.
Madaline has two training algorithms, MRI & MRII.
Architecture:-
Fig(c): Architecture of a Madaline. Inputs X1, X2 connect to the hidden Adalines Z1, Z2 through weights W11, W12, W21, W22 (with biases b1, b2); Z1 and Z2 connect to the output Adaline Y through weights V1, V2 (with bias b3).
CONTINUED..
It has two hidden Adalines and one output Adaline.
There can be any number of hidden layers between the input and the output, but this increases the computation.
Here both hidden Adalines have their own bias along with the input connections. The outputs from the hidden neurons are linked to the output neuron, where the final output of the network is calculated.
MRI ALGORITHM

The weights of the hidden Adaline units are adjustable; the weights of the o/p unit are fixed.
V1 & V2 are fixed at 0.5, with bias b3 = 0.5.
The activation function for Z1, Z2 & Y is given by:
f(p) = 1, if p ≥ 0
      -1, if p < 0
The other weights may be small random values.
Set the activations of the i/p units.
Calculate the net i/p of the hidden Adaline units:
Z-in1 = b1 + X1 W11 + X2 W21
Z-in2 = b2 + X1 W12 + X2 W22
Find the o/p of the hidden Adaline units (+ve → 1, -ve → -1):
Z1 = f(Z-in1)
Z2 = f(Z-in2)
Calculate the net input to the output unit:
Y-in = b3 + Z1 V1 + Z2 V2
Apply the activation to get the output of the net:
Y = f(Y-in)
Find the error and do the weight updation:
If t = Y, no weight updation.
If t ≠ Y, then:
If t = +1, update the weights on Zj, the unit whose net input is closest to 0:
Wij(new) = Wij(old) + α(1 - Z-inj)Xi
bj(new) = bj(old) + α(1 - Z-inj)
If t = -1, update the weights on all units Zk that have positive net input:
Wik(new) = Wik(old) + α(-1 - Z-ink)Xi
bk(new) = bk(old) + α(-1 - Z-ink)
Test for the stopping condition.
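
The following Python sketch performs one complete MRI step for the 2-2-1 Madaline of Fig(c). It is an illustration (names are assumptions), following the standard MRI rule in which Zj is the unit whose net input is closest to zero and the Zk are the units with positive net input:

def f(p):
    # bipolar threshold used for Z1, Z2 and Y
    return 1 if p >= 0 else -1

def mri_step(x, t, W, biases, alpha=0.5):
    # W[j] holds the weights into hidden unit Zj; biases = [b1, b2]
    z_in = [bj + sum(xi * wij for xi, wij in zip(x, Wj))
            for Wj, bj in zip(W, biases)]
    z = [f(p) for p in z_in]
    y = f(0.5 + 0.5 * z[0] + 0.5 * z[1])        # fixed output unit: V1 = V2 = b3 = 0.5
    if t == y:
        return W, biases                         # no weight updation
    if t == 1:
        # update only Zj, the unit whose net input is closest to 0
        units = [min(range(len(z_in)), key=lambda j: abs(z_in[j]))]
    else:
        # t = -1: update every unit Zk with positive net input
        units = [j for j, p in enumerate(z_in) if p > 0]
    for j in units:
        err = t - z_in[j]                        # (t - Z-inj)
        W[j] = [wij + alpha * err * xi for xi, wij in zip(x, W[j])]
        biases[j] += alpha * err
    return W, biases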
EXAMPLE
Given α = 0.5,
W11 = 0.05, W21 = 0.2, b1 = 0.3,
W12 = 0.1, W22 = 0.2, b2 = 0.15,
V1 = V2 = b3 = 0.5.
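
Running one MRI step (using the mri_step sketch above) on the first training pattern X = (1, 1), t = -1 with these initial weights reproduces the first row of the Epoch 1 table, up to rounding:

W = [[0.05, 0.2], [0.1, 0.2]]      # [[W11, W21], [W12, W22]]
biases = [0.3, 0.15]               # [b1, b2]
W, biases = mri_step((1, 1), -1, W, biases, alpha=0.5)
print(W, biases)   # [[-0.725, -0.575], [-0.625, -0.525]] and [-0.475, -0.575]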
Epoch 1
X1  X2  b   t    Zin1    Zin2     W11    W21    b1     W12    W22    b2      Z1  Z2   Yin   Y
 1   1  1  -1    0.55    0.45    -0.72  -0.57  -0.47  -0.62  -0.52  -0.575   1   1    1.5   1
 1  -1  1   1   -0.625  -0.675    0.087 -1.38   0.337 -0.62  -0.52  -0.575  -1  -1   -0.5  -1
-1   1  1  -1   -1.137  -0.475    0.087 -1.38   0.337 -1.36   0.212  0.1625 -1  -1   -0.5  -1
-1  -1  1  -1    1.637   1.3125   1.406 -0.06  -0.98  -0.20   1.369 -0.994   1   1    1.5   1
Epoch 2
X1  X2  b   t    Zin1    Zin2     W11    W21    b1     W12    W22     b2      Z1  Z2   Yin   Y
 1   1  1  -1    0.356   0.168    0.728 -0.74  -1.65  -0.79  -0.207  -1.578   1   1    1.5   1
 1  -1  1   1   -0.184  -3.154    1.320 -1.33  -1.06  -0.79   0.785  -1.578  -1  -1   -0.5  -1
-1   1  1  -1   -3.728  -0.002    1.320 -1.33  -1.06  -1.29   0.785  -1.077  -1  -1   -0.5  -1
-1  -1  1  -1   -1.049  -1.071    1.320 -1.33  -1.06  -1.29   1.286  -1.077  -1  -1   -0.5  -1
Epoch 3
X1  X2  b   t    Zin1    Zin2     W11    W21    b1     W12    W22     b2      Z1  Z2   Yin   Y
 1   1  1  -1   -1.086  -1.083    1.320 -1.33  -1.06  -1.29   1.286  -1.077  -1  -1   -0.5  -1
 1  -1  1   1    1.591  -3.655    1.320 -1.33  -1.06  -1.29   1.286  -1.077   1  -1    0.5   1
-1   1  1  -1   -3.728   1.501    1.320 -1.33  -1.06  -1.29   1.286  -1.077  -1   1    0.5   1
-1  -1  1  -1   -1.049  -1.071    1.320 -1.33  -1.06  -1.29   1.286  -1.077  -1  -1   -0.5  -1
MRII ALGORITHM

This algorithm was proposed by Widrow, Winter & Baxter in 1987.
In this method, all the weights in the net are updated.
This algorithm differs from the MRI algorithm only in the manner of weight updation.
Initialize the weights (all weights to small random values) and set the learning rate α.
Set the activations of the i/p units.
Calculate the net i/p of the hidden Adaline units:
Z-in1 = b1 + X1 W11 + X2 W21
Z-in2 = b2 + X1 W12 + X2 W22
Find the o/p of the hidden Adaline units (+ve → 1, -ve → -1):
Z1 = f(Z-in1)
Z2 = f(Z-in2)
Calculate the net input to the output unit:
Y-in = b3 + Z1 V1 + Z2 V2
Apply the activation to get the output of the net:
Y = f(Y-in)
Find the error and do the weight updation:
If t ≠ Y, then, starting with the hidden unit whose net input is closest to 0 (and proceeding in order of increasing |net input|):
Change the unit's output: if it is +1, change it to -1; if it is -1, change it to +1.
Recompute the output of the net.
If the error is reduced, adjust the weights on this unit (moving its net input toward the new output).
Test for the stopping condition.
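
A minimal Python sketch of this trial-and-revert idea (an illustration under the same assumptions as the MRI sketch: bipolar thresholds and the fixed output unit V1 = V2 = b3 = 0.5):

def mrii_step(x, t, W, biases, alpha=0.5):
    z_in = [bj + sum(xi * wij for xi, wij in zip(x, Wj))
            for Wj, bj in zip(W, biases)]
    z = [1 if p >= 0 else -1 for p in z_in]
    y = 1 if 0.5 + 0.5 * z[0] + 0.5 * z[1] >= 0 else -1
    if t == y:
        return W, biases
    # visit hidden units in order of |net input|, smallest first
    for j in sorted(range(len(z_in)), key=lambda j: abs(z_in[j])):
        trial = z[:]
        trial[j] = -trial[j]                    # provisionally flip this unit's output
        y_trial = 1 if 0.5 + 0.5 * trial[0] + 0.5 * trial[1] >= 0 else -1
        if abs(t - y_trial) < abs(t - y):       # keep the flip only if the error is reduced
            err = trial[j] - z_in[j]            # push Z-inj toward the flipped output
            W[j] = [wij + alpha * err * xi for xi, wij in zip(x, W[j])]
            biases[j] += alpha * err
            z, y = trial, y_trial
    return W, biases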
APPLICATIONS

Useful in noise cancellation and correction.
An Adaline is found in practically every modem, where it performs adaptive filtering.
Adaline has better convergence properties than the perceptron.
