[Figure: a biological neuron — soma, dendrites, axon and synapses]
[Figure: architecture of a typical artificial neural network — input layer, middle layer and output layer]
[Figure: diagram of a single neuron — inputs x1, x2, …, xn, weighted by w1, w2, …, wn, feed the neuron, which produces output Y]
[Figure: four activation functions of a neuron — step, sign, sigmoid and linear — each plotted as output Y against net input X]
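The four functions in the figure can be written out directly (a minimal sketch; the function names are mine):

```python
import math

def step(x):
    """Hard limiter: 1 if X >= 0, else 0."""
    return 1 if x >= 0 else 0

def sign(x):
    """Bipolar hard limiter: +1 if X >= 0, else -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Smooth squashing of the net input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Identity: the output equals the net input."""
    return x

for f in (step, sign, sigmoid, linear):
    print(f.__name__, f(-1.0), f(0.0), f(1.0))
```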
[Figure: single-layer two-input perceptron — inputs x1 and x2, weighted by w1 and w2, enter a linear combiner followed by a hard limiter with threshold θ, producing output Y]
[Figure: linear separability in the perceptron — a straight-line decision boundary in the (x1, x2) plane separating class A1 from class A2]
The error at iteration p is the difference between the desired and actual outputs:

e(p) = Yd(p) − Y(p), where p = 1, 2, 3, …
Step 2: Activation
Activate the perceptron by applying inputs x1(p), x2(p), …, xn(p) and desired output Yd(p). Calculate the actual output at iteration p = 1:

Y(p) = step[ Σ_{i=1}^{n} x_i(p) w_i(p) − θ ]

where n is the number of the perceptron inputs, and step is a step activation function.
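The activation and error equations above combine with the standard perceptron weight update, w_i(p+1) = w_i(p) + α·x_i(p)·e(p), into a short training sketch. The logical-AND data, the initial values w = (0.3, −0.1), θ = 0.2 and the learning rate α = 0.1 are illustrative assumptions, not taken from this section:

```python
# Perceptron training sketch for the logical AND operation:
#   Y(p)     = step( sum_i x_i(p) w_i(p) - theta )
#   e(p)     = Yd(p) - Y(p)
#   w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p)
step = lambda x: 1 if x >= 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, theta, alpha = [0.3, -0.1], 0.2, 0.1  # illustrative starting point

for epoch in range(100):
    errors = 0
    for x, yd in data:
        y = step(sum(xi * wi for xi, wi in zip(x, w)) - theta)
        e = yd - y
        if e != 0:
            errors += 1
            w = [wi + alpha * xi * e for wi, xi in zip(w, x)]
            theta -= alpha * e  # threshold = weight on a fixed input of -1
    if errors == 0:  # every example classified correctly -> stop
        break

print(w, theta)  # converged weights and threshold
```

Because AND is linearly separable, the loop is guaranteed to terminate with zero errors.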
[Figure: two-dimensional plots of basic logical operations in the (x1, x2) plane, with 0 and 1 marked on each axis]
[Figure: multilayer perceptron with two hidden layers — input signals xi, …, xn flow forward; weight wij connects input neuron i to first-hidden-layer neuron j, and wjk connects neuron j to second-hidden-layer neuron k; the output layer produces yk, …, yl]
[Figure: three-layer back-propagation network — input, hidden and output layers; error signals propagate backwards from the output layer]
[Figure: three-layer network for the Exclusive-OR operation — inputs x1 and x2 feed hidden neurons 3 and 4 through weights w13, w14, w23 and w24; the hidden neurons feed output neuron 5 through w35 and w45, producing y5]
Negnevitsky, Pearson Education, 2002 38
The effect of the threshold applied to a neuron in the hidden or output layer is represented by its weight, θ, connected to a fixed input equal to −1.
The initial weights and threshold levels are set randomly as follows:
w13 = 0.5, w14 = 0.9, w23 = 0.4, w24 = 1.0, w35 = −1.2, w45 = 1.1, θ3 = 0.8, θ4 = −0.1 and θ5 = 0.3.
θ5 = θ5 + Δθ5 = 0.3 + 0.0127 = 0.3127
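A minimal sketch of the forward pass and the output-threshold correction for this network, assuming a sigmoid activation, learning rate α = 0.1, and the first training example x1 = x2 = 1 with desired output 0 (variable names are mine):

```python
import math

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

# Initial weights and thresholds from the text (XOR network, neurons 1-5)
w13, w14, w23, w24, w35, w45 = 0.5, 0.9, 0.4, 1.0, -1.2, 1.1
t3, t4, t5 = 0.8, -0.1, 0.3
alpha = 0.1                        # assumed learning rate

x1, x2, yd5 = 1, 1, 0              # training example: XOR(1, 1) = 0

# Forward pass through the hidden and output layers
y3 = sigmoid(x1 * w13 + x2 * w23 - t3)
y4 = sigmoid(x1 * w14 + x2 * w24 - t4)
y5 = sigmoid(y3 * w35 + y4 * w45 - t5)

# Error gradient at the output neuron and the threshold correction
e5 = yd5 - y5
delta5 = y5 * (1 - y5) * e5
dt5 = alpha * (-1) * delta5        # threshold = weight on a fixed input of -1
t5 = t5 + dt5

print(round(y5, 4), round(delta5, 4), round(t5, 4))
# y5 ≈ 0.5097, delta5 ≈ -0.1274, new theta5 ≈ 0.3127
```

The final line reproduces the threshold update θ5 = 0.3 + 0.0127 = 0.3127 stated in the text.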
[Figure: learning curve — sum-squared error vs. epoch on a logarithmic scale, falling from about 10^0 to below 10^-4 within roughly 200 epochs]
[Figure: network for the Exclusive-OR operation — all input-to-hidden weights are +1.0, the hidden-to-output weights are −2.0 and +1.0, and the thresholds (weights on fixed inputs) are +1.5 and +0.5 for the hidden neurons and +0.5 for the output neuron, which produces y5]
Decision boundaries
[Figure: three panels in the (x1, x2) plane — (a) the boundary x1 + x2 − 1.5 = 0 formed by one hidden neuron; (b) the boundary x1 + x2 − 0.5 = 0 formed by the other; (c) the combined boundaries of the complete network for the Exclusive-OR operation]
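The two boundary equations can be checked directly: one hidden neuron fires on each line, and combining them with assumed output weights −2.0 and +1.0 and threshold 0.5 reproduces Exclusive-OR:

```python
step = lambda x: 1 if x >= 0 else 0

def xor_net(x1, x2):
    # Hidden neuron A: fires when x1 + x2 >= 1.5 (both inputs on)
    ya = step(x1 + x2 - 1.5)
    # Hidden neuron B: fires when x1 + x2 >= 0.5 (at least one input on)
    yb = step(x1 + x2 - 0.5)
    # Output fires only in the strip between the two lines -> XOR
    # (output weights -2.0, +1.0 and threshold 0.5 are assumptions)
    return step(-2.0 * ya + 1.0 * yb - 0.5)

print([xor_net(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])
# -> [0, 1, 1, 0]
```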
The hyperbolic tangent activation:

Y^tanh = 2a / (1 + e^{−bX}) − a

where a and b are constants. Suitable values for a and b are: a = 1.716 and b = 0.667.
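A quick sketch of this activation with the suggested constants (the function name is mine); the output is zero at the origin and saturates at ±a:

```python
import math

A, B = 1.716, 0.667  # constants suggested in the text

def tanh_act(x):
    """Y = 2a / (1 + exp(-b*x)) - a : a bipolar sigmoid bounded in (-a, a)."""
    return 2 * A / (1 + math.exp(-B * x)) - A

print(tanh_act(0.0), tanh_act(100.0), tanh_act(-100.0))
# -> 0 at the origin, approaching +1.716 and -1.716 at the extremes
```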
We can also accelerate training by including a momentum term in the delta rule:

Δw_jk(p) = β · Δw_jk(p−1) + α · y_j(p) · δ_k(p)

where β is a positive momentum constant.
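A minimal sketch of this update, assuming a learning rate α = 0.1 and the commonly used momentum constant β = 0.95 (the gradient values below are illustrative):

```python
# Generalised delta rule with momentum:
#   dw_jk(p) = beta * dw_jk(p-1) + alpha * y_j(p) * delta_k(p)
alpha, beta = 0.1, 0.95

def momentum_update(dw_prev, y_j, delta_k):
    """Return the weight correction for iteration p."""
    return beta * dw_prev + alpha * y_j * delta_k

dw = 0.0
for _ in range(3):  # three iterations with the same gradient direction
    dw = momentum_update(dw, y_j=0.525, delta_k=-0.1274)
print(dw)  # the correction grows while the gradient keeps its sign
```

The effect is to smooth oscillations and speed up descent along directions where the gradient is consistent.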
[Figures: training traces for learning with momentum and with an adaptive learning rate — learning-rate values plotted against epoch, with training completing within roughly 80–140 epochs]
[Figure: single-layer n-neuron Hopfield network — neuron i receives input xi and produces output yi, and the output signals are fed back as the next input signals]
Y^sign = { +1, if X > 0
           −1, if X < 0
            Y, if X = 0 }

that is, if the neuron's weighted input is exactly zero, its output Y remains unchanged.
The weight matrix is built from the M fundamental memories to be stored:

W = Σ_{m=1}^{M} Y_m Y_m^T − M I

where I is the identity matrix. For the two fundamental memories Y1 = (1, 1, 1)^T and Y2 = (−1, −1, −1)^T this gives

W = | 0 2 2 |
    | 2 0 2 |
    | 2 2 0 |

and each memory is recalled as a stable state, for example:

Y2 = sign( W · (−1, −1, −1)^T − (0, 0, 0)^T ) = (−1, −1, −1)^T
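The storage rule and the recall check above can be reproduced numerically (a sketch using NumPy; the zero threshold vector is assumed):

```python
import numpy as np

# Hopfield storage rule for M fundamental memories Y_m (bipolar vectors):
#   W = sum_m Y_m Y_m^T - M * I
Y1 = np.array([ 1,  1,  1])
Y2 = np.array([-1, -1, -1])
memories = [Y1, Y2]

W = sum(np.outer(y, y) for y in memories) - len(memories) * np.eye(3, dtype=int)
print(W)  # [[0 2 2], [2 0 2], [2 2 0]]

theta = np.zeros(3)                       # assumed zero thresholds
sign = lambda v: np.where(v >= 0, 1, -1)  # ties resolved to +1 for simplicity

# Both fundamental memories are stable states of the network:
print(sign(W @ Y1 - theta))  # [ 1  1  1]
print(sign(W @ Y2 - theta))  # [-1 -1 -1]
```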
[Figure: BAM operation — in the forward direction an input vector x(p) applied to the input layer produces output y(p) on the output layer; in the backward direction y(p) is applied to the output layer to produce x(p+1) on the input layer]