Neural Networks
Adaline & Madaline
INTRODUCTION
Developed in 1960 by Widrow & Hoff.
It is very closely related to the perceptron learning rule; the rule is called the Delta rule.
It adjusts the weights to reduce the difference between the net input to the output unit and the desired output, which results in a least mean squared error.
Adaline (Adaptive Linear Neuron) and Madaline (Multilayered Adaline) networks use this LMS learning rule and are applied to various neural network applications.
The weights on the interconnections in Adaline and Madaline networks are adjustable.
This is a supervised type of learning.
WHAT IS AN ADALINE NETWORK?
Adaline accomplishes classification by modifying weights in such a way as to diminish the MSE (mean squared error) at every iteration. This can be accomplished using gradient descent on an adaptive linear element.
When Adaline is used for pattern classification, a threshold function is applied to the net input after training to obtain the activation.
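As a minimal sketch of this step in Python (function and variable names are illustrative, not from the slides):

```python
def bipolar_step(p):
    # Threshold function applied after training: +1 for non-negative net input.
    return 1 if p >= 0 else -1

def classify(x, w, b):
    # Net input = weighted sum of the inputs plus the bias.
    y_in = sum(wi * xi for wi, xi in zip(w, x)) + b
    return bipolar_step(y_in)
```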
ADALINE
Adaline uses bipolar activations for its input signals and target output (+1 or -1).
The learning rule is called the Delta rule, the Least Mean Square (LMS) rule, or the Widrow-Hoff rule.
Fig(a):- Single-layer network (Adaline): inputs X1 ... Xn, with weights W1 ... Wn and bias b, feed the output unit Y.
ADALINE SCHEMATIC
[Schematic: for each training input i, the network output is compared with the desired value class(i) (1 or -1) and the weights are adjusted accordingly.]
ARCHITECTURE
ΔWi = α(t - Yin)Xi
where Xi is the activation of input unit i, t is the target output, Yin is the net input to the output unit, and α is the learning rate.
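A one-step sketch of this update rule in Python (names are assumptions; the slide gives only the formula):

```python
def delta_rule_step(x, t, w, b, alpha):
    # Net input to the output unit; no threshold is applied during training.
    y_in = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = t - y_in
    # Delta rule: Wi(new) = Wi(old) + alpha * (t - Yin) * Xi, and likewise for b.
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    b = b + alpha * err
    return w, b
```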
Fig(b):- Adaline for the ANDNOT function: initial weights W1 = 0.2, W2 = 0.2 and bias b = 0.2.
Epoch 1
Columns: X1, X2, b, t, Yin, (t - Yin), ΔW1, ΔW2, Δb, W1, W2, b, E [row data not recovered]
Using the weights updated over this epoch, the LMS error is calculated as E = Σj (tj - Yinj)² = 5.71. (A code sketch of this computation follows Epoch 6.)
Epoch 2
Columns as above [row data not recovered]
Using the updated weights, the LMS error E = Σj (tj - Yinj)² = 2.43.
Epoch 3
Columns as above [row data not recovered]
Using the updated weights, the LMS error E = Σj (tj - Yinj)² = 5.198.
Epoch 4
Columns as above [row data not recovered]
Using the updated weights, the LMS error E = Σj (tj - Yinj)² = 2.257.
Epoch 5
Columns as above [row data not recovered]
Using the updated weights, the LMS error E = Σj (tj - Yinj)² = 2.004.
Epoch 6
Columns as above [row data not recovered]
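A compact sketch of this epoch-by-epoch training in Python. The learning rate is not stated on these slides, so α = 0.2 is an assumption here, and the printed errors need not match the tabulated values exactly:

```python
# Bipolar ANDNOT training set: t = +1 only for (X1, X2) = (1, -1).
samples = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]

w, b = [0.2, 0.2], 0.2   # initial weights and bias from Fig(b)
alpha = 0.2              # ASSUMPTION: not given on the slides

for epoch in range(1, 7):
    for x, t in samples:
        y_in = w[0] * x[0] + w[1] * x[1] + b
        err = t - y_in
        # Delta rule updates for the weights and the bias.
        w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
        b += alpha * err
    # After the epoch, E = sum over samples of (t - Yin)^2 with updated weights.
    E = sum((t - (w[0] * x[0] + w[1] * x[1] + b)) ** 2 for x, t in samples)
    print(f"Epoch {epoch}: E = {E:.3f}")
```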
ARCHITECTURE
Fig(c):- Architecture of a Madaline: inputs X1 and X2 connect to hidden Adaline units Z1 and Z2 through weights W11, W21 (into Z1) and W12, W22 (into Z2), with biases b1 and b2; the hidden outputs connect to the output unit Y through weights V1 and V2, with bias b3.
CONTINUE..
Here both hidden units have their own bias along with the input connections. The outputs from the hidden neurons are linked to the output neuron, where the final output of the network is calculated.
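A minimal forward-pass sketch of this architecture in Python (the function name and data layout are assumptions):

```python
def madaline_forward(x, W, b_hidden, V, b3):
    # W[j] holds the weights into hidden unit Zj, e.g. W[0] = [W11, W21] for Z1.
    z_in = [sum(w * xi for w, xi in zip(Wj, x)) + bj
            for Wj, bj in zip(W, b_hidden)]
    z = [1 if zi >= 0 else -1 for zi in z_in]         # bipolar hidden outputs
    y_in = sum(vj * zj for vj, zj in zip(V, z)) + b3  # output unit's net input
    return 1 if y_in >= 0 else -1
```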
MRI (MADALINE RULE I) ALGORITHM
The activation function for the hidden and output units is
f(p) = 1, if p ≥ 0
f(p) = -1, if p < 0
The weights into the output unit Y are fixed; other weights may be small random values.
Compute the output Zj = f(Zinj) of each hidden unit, then the network output Y = f(Yin).
Find the error and do weight updation:
If t = Y, no weight updation.
If t ≠ Y, then:
If t = 1, update the weights on Zj, the unit whose net input is closest to 0:
Wij(new) = Wij(old) + α(1 - Zinj)Xi
bj(new) = bj(old) + α(1 - Zinj)
If t = -1, update the weights on all units Zk that have positive net input:
Wik(new) = Wik(old) + α(-1 - Zink)Xi
bk(new) = bk(old) + α(-1 - Zink)
Test for the stopping condition.
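A sketch of one MRI training step in Python, assuming two hidden units and the fixed output weights V1 = V2 = b3 = 0.5 used in the example below (all names are illustrative):

```python
def mri_step(x, t, W, b_hidden, alpha, V=(0.5, 0.5), b3=0.5):
    # Forward pass through the hidden Adaline units.
    z_in = [sum(w * xi for w, xi in zip(Wj, x)) + bj
            for Wj, bj in zip(W, b_hidden)]
    z = [1 if zi >= 0 else -1 for zi in z_in]
    y = 1 if sum(v * zj for v, zj in zip(V, z)) + b3 >= 0 else -1
    if t == y:
        return W, b_hidden                 # no weight updation
    if t == 1:
        # Update only Zj, the unit whose net input is closest to 0.
        j = min(range(len(z_in)), key=lambda k: abs(z_in[k]))
        W[j] = [w + alpha * (1 - z_in[j]) * xi for w, xi in zip(W[j], x)]
        b_hidden[j] += alpha * (1 - z_in[j])
    else:
        # Update every unit Zk whose net input is positive.
        for k, zk in enumerate(z_in):
            if zk > 0:
                W[k] = [w + alpha * (-1 - zk) * xi for w, xi in zip(W[k], x)]
                b_hidden[k] += alpha * (-1 - zk)
    return W, b_hidden
```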
EXAMPLE
Given α = 0.5,
W11 = 0.05, W21 = 0.2, b1 = 0.3
W12 = 0.1, W22 = 0.2, b2 = 0.15
V1 = V2 = b3 = 0.5
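Plugging the example's initial values into the mri_step sketch above. The slides do not list the training pairs, so the input and target below are purely illustrative:

```python
W = [[0.05, 0.2], [0.1, 0.2]]   # [W11, W21] into Z1, [W12, W22] into Z2
b_hidden = [0.3, 0.15]
alpha = 0.5

# One illustrative update with a bipolar input pair and target.
W, b_hidden = mri_step((1, 1), -1, W, b_hidden, alpha)
print(W, b_hidden)
```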
Epoch 1
Columns: X1, X2, b, t, Zin1, Zin2, W11, W21, b1, W12, W22, b2, Z1, Z2, Yin, Y [row data not recovered]
Epoch 2 [table not recovered]
Epoch 3 [table not recovered]
MRII (MADALINE RULE II) ALGORITHM
Compute the output of the net: Y = f(Yin).
Find the error and do weight updation:
If t ≠ Y, then:
For each hidden unit, starting with the unit whose net input is closest to 0:
Change the unit's output: if it is +1, change it to -1; if it is -1, change it to +1.
Recompute the output of the net.
If the error is reduced, adjust the weights on this unit; otherwise restore its original output.
Test for the stopping condition.
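A sketch of MRII's trial adaptation for a single training pair in Python. The restore-on-failure behaviour and all names are assumptions drawn from the standard description of the rule:

```python
def mrii_step(x, t, W, b_hidden, V, b3, alpha):
    # Forward pass: hidden net inputs, bipolar hidden outputs, network output.
    z_in = [sum(w * xi for w, xi in zip(Wj, x)) + bj
            for Wj, bj in zip(W, b_hidden)]
    z = [1 if zi >= 0 else -1 for zi in z_in]
    y = 1 if sum(v * zj for v, zj in zip(V, z)) + b3 >= 0 else -1
    if t == y:
        return W, b_hidden
    # Visit hidden units starting with the net input closest to 0.
    for j in sorted(range(len(z_in)), key=lambda k: abs(z_in[k])):
        z_trial = z[:]
        z_trial[j] = -z_trial[j]            # tentatively flip this unit's output
        y_trial = 1 if sum(v * zj for v, zj in zip(V, z_trial)) + b3 >= 0 else -1
        if y_trial == t:
            # The flip reduced the error: adjust this unit's weights toward it.
            W[j] = [w + alpha * (z_trial[j] - z_in[j]) * xi
                    for w, xi in zip(W[j], x)]
            b_hidden[j] += alpha * (z_trial[j] - z_in[j])
            break   # keep only the first successful flip; others are discarded
    return W, b_hidden
```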
APPLICATIONS