Burakov Neuro PDF
UDC 004.032.6
LBC 32.818
ISBN 978-5-8088-0812-6
2013. 284 pp.
The examples in the book are implemented in the MatLab environment. Intended for students of specialty 220400.
1940-
. - . [1], , .
.
,
.
(, ), ,
.
[1] , , , , , . (),
,
. , .. ,
.
1949 . . [2] , , . . , ,
.
1950- , . [3].
- .
. .
. . . [4]
. , , . .
. [5]. ,
, .
, .
,
.
, 80- , 70-
. [6] . [7].
1980-
, .
1982 . .
[8]. [9].
,
[10].
(
) ,
, ..
, ..
, .
,
,
[11], [12], [13], [14].
[15].
([1624]
.), [2528] .
:
. , ,
.
, ,
, .. , ;
. , , -: ((x1, y1), (x2, y2),
..., (xn, yn)), ,
f.
, ;
.
. , (). , , ;
. n {y(t1),
y(t2), ..., y(tn)} t1, t2, ..., tn.
y(tn+1).
;
. , . , .
, .
;
. , ,
.
, ;
. .
, 5
, , , ,
..
[23]:
();
;
, ;
;
;
;
, ;
;
.
,
. [22]:
()
;
() ;
;
.
, , NeuroSolutions
NeuroDimension Inc., NeuralWorks Professional U/Plus
UDND Neural Ware, Inc., Process Advisor AlWare,
Inc., NeuroShell2 Ward Systems Group, BrainMaker Pro California Scientific Software ..
,
MatLab Simulink Neural Network toolbox. :
MatLab 220400 ;
Neural Network toolbox
, MatLab;
MatLab ,
.
.
.
1. .
1.1.
,
. . [29].
, . , , . .
, .
1000 2000 .
, . . 2000 .,
. 1100 .
[30], 1291 , 1374 ,
1377 , 1508 . , 1610 .
,
18
10 , 3000 300 .
, , , . ,
6800 , 5000 . ,
1/50 ,
1/10000.
, ,
. ,
-, , . ,
97% . 3% , ,
. ., , .
, .
. , . () ,
, , .
, , ,
.
:
, ), . , , , . .
, ,
, , , , ,
. . (. 1.1).
: , ,
. .
. 1.1.
,
.
.
(23 ) ,
2500 2. .
: ,
,
. (
) 29% , 17% [31].
10 20 (
100 ). ( )
10000 . 1/4
1/5 .
, ,
, , 10% ,
100
70% .
,
, , .
( ),
.
,
, .
( , ..)
.
,
, .
, . ,
() , 14 () .
.
, ( 10
[32]).
:
, . ,
,
,
.
, . ,
, ,
.
. , ,
,
(, ).
. ,
, .
. 110 . 10 ,
[33].
.
.
.
(, ) ,
, .
(), () () .
. ,
, .
,
, () . , , () .
, , ,
:
;
;
;
;
.
( ), .
, , . . , .
.
, .. . 70 (
).
20 .
, .
1520 .
7 .
7 2
- ( , ). , .
.
50000
100000. 10
20 [34].
?
,
. , , .
50000 .
[35]:
,
,
.
. ( )
1000 ().
,
,
( ).
.
.
, , , .
. ,
, , .
, , 72 .
1.2.
, ,
, . , , . [36].
. 1.2.
: (),
().
, , . ,
5 ,
70 .
, , .
( ) ,
.
,
. .
,
. 1,5 .
. .
. . , . .
. 1.2.
.
.
: , , () , , ( ).
20 , . ,
, , , 60 200 [36].
,
,
. , , , , .
.
30 . , , , .
, . () ,
.
- . 2 . ,
.
[36].
. .
. .
,
, ,
, .
, , , ,
, .
70 .
60 ,
Na+,
, .
1 , . .
1 100 . 100 /,
. - ,
.
1.3.
,
, . .
,
.
( . connection ),
,
. :
(
, );
, ,
;
, .
, , (),
, .
, , .
. 1.1.
- , . , .
() .
1.1
,
,
,
,
,
,
,
,
,
,
,
,
()
, ,
, . ,
. , , .
[28] :
, , .
:
;
, .
- ,
, . , , .
,
.
: (), () , ,
.
, , (
).
,
(. 1.3).
, .
: .
. ,
. 1.3.
, ()
. , .
, ( ). ( )
.
,
, .
, ,
. , ,
.
. :
, ,
, . . .
.
, , , .
, , . ,
, , , , . .
.
1.4.
.
, , , ().
.1.4 , .
. , , .
, .
.
RBF-
ADALINE
ART-
. 1.4.
.
.
. .
. N ,
.
, , .
, .
( ) ,
,
.
.
. ,
.
( . lateralis
) . ()
. .
,
. , ,
.
, ,
.
,
.
, ,
: , , , .
, ,
, .
1.5.
, ..
[37].
, n-
Rn, . . n- ,
. , ,
.
n n .
m , , , .
,
(-) .
,
x y . :
1. d(x, y) >= 0, and d(x, y) = 0 if and only if x = y.
2. d(x, y) = d(y, x).
3. d(x, y) <= d(x, z) + d(z, y).
, .
, , R2. : , , .
, x1 x2 (. 1.5).
, : 1 2, . ,
() .
, .
. 1.5 2 ,
1 , , ,
( , ).
( , , , , ) ( ) .
s(X) = w1x1 + w2 x2 + w3 = 0.
(1.1)
Fig. 1.5. Two classes of points in the (x1, x2) plane separated by a straight line.
Fig. 1.6. The separating line s(X) = x1 + x2 - 5 = 0, crossing each axis at 5.
. , (1.1) (1.2) . (1.2) ().
,
( ,
, ).
In n-dimensional space the distance between vectors Xi and Xj is

||Xi - Xj|| = [(Xi - Xj)'(Xi - Xj)]^(1/2).
Pi i- (i = 1, 2, , N).
X .
i- :
||X - Pi||^2 = (X - Pi)'(X - Pi) = X'X - 2X'Pi + Pi'Pi.
(1.3)
(Here the identities (A + B)' = A' + B' and X'P = P'X are used.) The term X'X in (1.3) is the same for every i, so minimizing (1.3) is equivalent to maximizing the discriminant

di(X) = X'Pi - (1/2) Pi'Pi.
(1.4)
Comparing (1.4) with the linear discriminant (1.1) gives the neuron weights

wi,j = pi,j;  wi,n+1 = -(1/2) Pi'Pi.
(1.5)
. ,
Let the prototype vectors of three classes be

P1 = [3; 3],  P2 = [6; -5],  P3 = [-8; 10].

By (1.5) the weight vectors of the corresponding neurons are

W1 = [3; 3; -9],  W2 = [6; -5; -30.5],  W3 = [-8; 10; -82],

where the last component of each Wi equals -(1/2) Pi'Pi.
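The minimum-distance discriminants above are easy to check numerically. This is an illustrative sketch (NumPy assumed), not code from the book:

```python
import numpy as np

# Prototype vectors of the three classes
P = [np.array([3.0, 3.0]), np.array([6.0, -5.0]), np.array([-8.0, 10.0])]

# Weights by (1.5): W_i = [P_i ; -0.5 * P_i' P_i]
W = [np.append(p, -0.5 * p @ p) for p in P]

def classify(x):
    """Return the index of the class with the largest discriminant d_i(X)."""
    d = [w[:2] @ x + w[2] for w in W]
    return int(np.argmax(d))
```

Each prototype is classified into its own class, since it is nearest to itself.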
The input X is assigned to class 1 when d1(X) > d2(X) and d1(X) > d3(X), and similarly for the other classes.

Fig. 1.7. Network of three neurons computing d1(X), d2(X), d3(X) (weights 3, 3, -9; 6, -5, -30.5; -8, 10, -82) whose outputs feed a max selector.
1.6.
() , .
.
( ). .
(. 1.8).
F .
F(y) [0,1] [1,1].
(. 1.2).
The neuron first forms the weighted sum of its inputs with the bias, y = XW + b, and then applies the activation function: z = f(y). With the threshold activation function (see Table 1.2) this gives f(y) = sgn(XW + b), which coincides with the discriminant function (1.2).
, (1.2).
Fig. 1.8. Model of a neuron: inputs x1 ... xN with weights w1 ... wN and bias w_{N+1} = b form the sum y = sum_{i=1}^{N} x_i w_i + w_{N+1}, which is passed through the activation function F.
Table 1.2. Typical activation functions F(y)

Linear: F(y) = ky, k > 0.
Threshold (signum, s-shaped): F(y) = sgn(y); shifted variant F(y) = +1 for y > P, -1 for y < -P.
Sigmoid (logistic): F(y) = 1 / (1 + exp(-ky)), k > 0.
Hyperbolic tangent: F(y) = th(ky), k > 0.
Step: F(y) = 1 for y >= P, 0 for y < P.
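The functions of Table 1.2 are one-liners in any language; a small illustrative sketch (not from the book):

```python
import math

def linear(y, k=1.0):
    return k * y

def signum(y):
    return 1.0 if y > 0 else (-1.0 if y < 0 else 0.0)

def sigmoid(y, k=1.0):
    # logistic s-shaped function, values in (0, 1)
    return 1.0 / (1.0 + math.exp(-k * y))

def tanh_act(y, k=1.0):
    return math.tanh(k * y)

def step(y, P=0.0):
    return 1.0 if y >= P else 0.0
```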
1.7.
(. 1.9).
Fig. 1.9. A single neuron forming x1w1 + x2w2 + b from the inputs (x1, x2); its zero level is a decision line of the form (1.1).
.
, . 1.3.
A1 A4 (. 1.10).
, OR
A1 , AND A4.
Table 1.3

x1  x2  OR(x1, x2)  AND(x1, x2)  XOR(x1, x2)
A1:  0   0   0   0   0
A2:  0   1   1   0   1
A3:  1   0   1   0   1
A4:  1   1   1   1   0

Fig. 1.10. The points A1(0,0), A2(0,1), A3(1,0), A4(1,1) in the (x1, x2) plane.
For the inputs x1 and x2 take w1 = w2 = 1 and a threshold activation function F with P = 0; the function realized is then selected by the bias b. With b = -1.5 the neuron computes AND (Fig. 1.11), and with b = -0.5 it computes OR.
. 1.12
.
XOR.
, 2 3 ,
A1 A4 . , ..
XOR .
Fig. 1.11. AND neuron: x1w1 + x2w2 + b with w1 = w2 = 1, b = -1.5.
Fig. 1.12. Decision lines of AND and OR: s1(X) = x1 + x2 - 1.5 = 0 and s2(X) = x1 + x2 - 0.5 = 0.
A single neuron, however, cannot realize XOR; the two-layer network of Fig. 1.13 is needed. Its hidden AND neuron fires only for x1 = x2 = 1, and the XOR output then becomes

z = F(1 + 1 - 0.5 - 2) = F(-0.5) = 0.
XOR
, , . 1.13 (. 1.14).
Fig. 1.13. XOR network: a hidden AND neuron (w1 = w2 = 1, b = -1.5) whose output enters the output neuron with weight w3 = -2, together with the direct inputs (w1 = w2 = 1) and bias b = -0.5.
Fig. 1.14. The same XOR network drawn as a two-layer perceptron built from AND and OR neurons.
The OR neuron realizes the line s1(X) = 0 and the AND neuron the line s2(X) = 0 shown in Fig. 1.15. The XOR output fires in the region where (s1(X) > 0) AND (s2(X) < 0), i.e. XOR(X, Y) = OR(X, Y) AND (NOT(AND(X, Y))).
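The two-layer construction above can be sketched directly with threshold units (weights as in the text):

```python
def step(y, P=0.0):
    # threshold activation
    return 1 if y >= P else 0

def xor_net(x1, x2):
    """Two-layer threshold network for XOR."""
    h = step(x1 + x2 - 1.5)                # hidden AND neuron, b = -1.5
    return step(x1 + x2 - 2.0 * h - 0.5)   # output neuron: w3 = -2, b = -0.5
```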
Fig. 1.15. Regions of the (x1, x2) plane cut out by the lines s1(X) = 0 and s2(X) = 0; XOR is true between them.
Fig. 1.16. A convex region bounded by three lines s1, s2, s3 can be selected by a two-layer network.
Fig. 1.17. Non-convex and disconnected regions require one more layer.
. 1.14 .
. 1.16 , , , . 1- .
,
(.1.17).
,
.
,
16 , 14 , .. .
(104 256)
[25].
.
, XOR. .
1.8.
. , wij i j 32
, ..
:
dw_ij = a * y_i * y_j,

where a is a learning-rate coefficient in [0, 1], y_i and y_j are the neuron outputs, and z_j denotes the desired output of neuron j. The weights are updated as

w_ij(t + 1) = w_ij(t) + dw_ij.

To keep the weights from growing without bound, a forgetting coefficient g is introduced:

w_ij(t + 1) = w_ij(t)(1 - g) + dw_ij,

where g is small. In the Widrow-Hoff (delta) rule the correction is driven by the output error instead:

dw_ij = a (z_j - y_j).
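A minimal sketch of the two update rules (plain Python; the names and the input factor x_i in the delta rule follow the usual statement of the rules rather than the book's notation):

```python
ALPHA = 0.1  # learning rate
G = 0.01     # forgetting coefficient

def hebb_update(w, y_i, y_j, forget=False):
    """One Hebbian step: strengthen w when both neurons are active."""
    dw = ALPHA * y_i * y_j
    return w * (1.0 - G) + dw if forget else w + dw

def delta_update(w, x_i, y_j, z_j):
    """Widrow-Hoff (delta) rule: correct w in proportion to the output error."""
    return w + ALPHA * (z_j - y_j) * x_i
```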
1.9.
. .
(instar) ,
, (. 1.18).
, . .
,
X W.
( ),
y
X = W.
Fig. 1.18. Instar: inputs x1 ... xN with weights w1 ... wN form y = sum_{i=1}^{N} x_i w_i.
Fig. 1.19. Outstar: a single input fans out through weights w1 ... wN to the outputs y1 ... yN.
X, , , .
w_i^(t+1) = w_i^t + a^t (x_i - w_i^t),

where t is the number of the training step and a^t is a learning-rate coefficient, of the order of 0.1, that decreases in the course of training.
,
, , .
(outstar) , (. 1.19).
Y.
. :
w_i^(t+1) = w_i^t + a^t (y_i - w_i^t).
.
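Both updates pull the weight vector toward a target vector; a tiny illustrative sketch:

```python
def instar_step(w, x, alpha):
    """Move each weight toward the corresponding input component."""
    return [wi + alpha * (xi - wi) for wi, xi in zip(w, x)]

# Repeated presentations drive w toward x
w = [0.0, 0.0]
for _ in range(100):
    w = instar_step(w, [1.0, -1.0], alpha=0.1)
```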
1.10.
(
) ,
.
.
(supervised learning).
.
, ,
.
. ,
,
.
- , ()
. ,
.
(unsupervised learning).
. , , ..
, .
()
,
. . ,
. ,
, .. .
(reinforcement learning). . , .
, ,
,
, ,
. ,
. ,
.
,
, .
,
, , :
. , .
, .. ,
;
. , ,
( ). , , .
.
,
.
: . , . .
. , ,
, , . . ,
.. .
1.11.
,
.
, .
()
.
,
In the simplest case a parameter taking n distinct values can be encoded by N binary neurons, where N = log2 n.
,
, [38].
, , ,
.
, ,
.
, . .
(), .
[0,1] [1,1].
A value x from the range [xmin, xmax] is mapped onto [0, 1] by

xN = (x - xmin) / (xmax - xmin),

and onto an arbitrary range [a, b] by

xN = (x - xmin)(b - a) / (xmax - xmin) + a.

When xmax is not known in advance, a squashing function can be used instead:

xN = 1 / (1 + e^(-x)).
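The linear scalings are one line of code; an illustrative sketch:

```python
def minmax_scale(x, x_min, x_max, a=0.0, b=1.0):
    """Map x from [x_min, x_max] linearly onto [a, b]."""
    return (x - x_min) * (b - a) / (x_max - x_min) + a
```

For example, minmax_scale(5, 0, 10) gives 0.5, and minmax_scale(5, 0, 10, -1, 1) gives 0.0.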
,
, x Z,
zi =
+ a,
zi i- Z, yi
.
( , , . .), (, , ).
, n
zi i
i- :
E = (1/N) sum_{i=1}^{N} (z_i - y_i)^2,

where z_i and y_i are the actual and desired outputs for the i-th sample. A weighted variant emphasizes the more important samples:

E = (1/N) sum_{i=1}^{N} k_i (z_i - y_i)^2.

Other error measures are also used:

E = sum_{i=1}^{N} |z_i - y_i|   or   E = max_i |z_i - y_i|.
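The error measures above can be computed together; an illustrative sketch:

```python
def errors(z, y, k=None):
    """MSE, weighted MSE, total absolute and maximum error between outputs z and targets y."""
    n = len(z)
    diffs = [zi - yi for zi, yi in zip(z, y)]
    mse = sum(d * d for d in diffs) / n
    wmse = sum(ki * d * d for ki, d in zip(k, diffs)) / n if k else mse
    abs_err = sum(abs(d) for d in diffs)
    max_err = max(abs(d) for d in diffs)
    return mse, wmse, abs_err, max_err
```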
1. ?
?
2. ? ?
3.
?
4. ?
5. ?
6. ?
7. ?
8. ?
9. ?
10.
?
11. ?
12. ?
13. ?
14. ?
15. ?
16. ?
17. ?
18. ?
19. ?
20. ?
21. ?
22. - ?
23. , ?
24. ?
25. ?
26. ?
27. ? ?
28. ? ?
29.
?
30. ?
31. ?
32. ? .
33. ?
34. ?
35. ?
36. ?
37. ,
?
38. ?
39. ?
40. ?
41. ?
?
42. ?
43. ?
44. ?
45. ?
46. ?
47. AND?
48. OR?
49.
?
50. XOR?
51.
?
52. ?
53. ?
54. ?
55. ?
56. ?
57.
?
58. ?
2.
2.1.
MatLab
. 2.1 MatLab.
Neural Net toolbox MatLab . ,
(a, b, c, ),
(a, b, c, ), (A, B, , ).
. 2.1 p - , W (
-), f , a
.
>> net = newlin([0 10; 0 10],1);
[0 10; 0 10] (. .
), ; net .
Fig. 2.1. Neuron with vector input in MatLab notation: a = f(Wp + b), where p is the input vector, W = [w1,1 ... w1,R] the weight row, and b the bias.
>> net.IW{1,1}=[3 4]
>> net.b{1}=1
IW (
). {n, m} , m n. b{n} n- .
-, :
>> P=[1; 3]
P=
1
3
:
>> X=sim(net,P)
X=
16
:
>> P=[[1; 3],[2;4],[3;8]]
P=
1 2 3
3 4 8
>> X=sim(net,P)
X=
16 23 42
The built-in activation functions are listed in Table 2.1.

Table 2.1. Activation functions in MatLab
hardlim(x) - step function (outputs 0 or 1)
hardlims(x) - symmetric step (outputs -1 or +1)
purelin(x) - linear
logsig(x) - logistic sigmoid
poslin(x) - positive linear
satlin(x) - saturating linear on [0, 1]
satlins(x) - symmetric saturating linear on [-1, 1]
radbas(x) - radial basis (Gaussian)
tribas(x) - triangular basis
tansig(x) - hyperbolic tangent sigmoid
2.2.
( . perceptio ) , [3].
(, , , , ) . (. 2.2).
: S-,
A- R-.
,
, .
, R-
.
. A-
,
S-, . R- ,
A-.
, . .
S-
A-
R-
. 2.2.
, . .
, .
.
, , .
,
. . , ,
. , . 5 . 2.3.
, ,
(. 2.2). , , .
, , .
. . 2.2,
,
,
.
[1 1 1 1 0 0 1 1 1 0 0 1 1 1 1]
. 2.3.
2.2
0
1
2
3
4
5
6
7
8
9
[1 1 1 1 0 1 1 0 1 1 0 1 1 1 1]
[0 1 0 0 1 0 0 1 0 0 1 0 0 1 0]
[1 1 1 0 0 1 1 1 1 1 0 0 1 1 1]
[1 1 1 0 0 1 1 1 1 0 0 1 1 1 1]
[1 0 1 1 0 1 1 1 1 0 0 1 0 0 1]
[1 1 1 1 0 0 1 1 1 0 0 1 1 1 1]
[1 1 1 1 0 0 1 1 1 1 0 1 1 1 1]
[1 1 1 0 0 1 0 1 0 1 0 0 1 0 0]
[1 1 1 1 0 1 1 1 1 1 0 1 1 1 1]
[1 1 1 1 0 1 1 1 1 0 0 1 1 1 1]
Each digit is additionally assigned a parity bit z (alternating 1, 0, 1, 0, ... for the digits 0-9) and a four-bit binary code Z from 0000 to 1001.

Fig. 2.4. Single-layer perceptron for digit recognition: 15 inputs Xk with weights w1 ... w15 form y_k = sum_{i=1}^{15} x_i w_i, which is passed through f(y_k).
E = sum_{k=0}^{9} e_k = sum_{k=0}^{9} |f(y_k) - z_k|,
(2.1)
where f(y_k) and z_k are the actual and desired outputs for the k-th digit.
.
W .
The training algorithm:
1. Present the k-th digit to the inputs.
2. Compute the error e_k = f(y_k) - z_k.
3. If e_k = 0, leave the weight vector W unchanged.
4. If e_k is nonzero, correct every weight whose input is active: w_i <- w_i - sgn(e_k) x_i (the weights are decreased when the output is too large and increased when it is too small).
5. Pass to the next digit. After all digits have been presented, compute the total error E by (2.1); if E > 0, return to step 1.
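The error-correction loop above can be sketched as follows (illustrative, with a generic threshold unit rather than the book's digit patterns):

```python
def train_perceptron(samples, n_inputs, epochs=100):
    """samples: list of (x, z) with x a 0/1 tuple and z the desired 0/1 output."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        total_error = 0
        for x, z in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            e = y - z
            if e != 0:
                # decrease weights of active inputs when the output is too
                # large, increase them when it is too small
                sgn = 1 if e > 0 else -1
                w = [wi - sgn * xi for wi, xi in zip(w, x)]
                b -= sgn
            total_error += abs(e)
        if total_error == 0:
            break
    return w, b

# AND is linearly separable, so the loop converges
w, b = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)], 2)
```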
,
.
.
. ,
,
, (. . 2.2).
(. 2.5).
E = sum_{k=0}^{9} e_k = sum_{k=0}^{9} sum_{j=1}^{4} |f(y_kj) - z_kj|,
(2.2)
where f(y_kj) and z_kj are the actual and desired values of the j-th output for the k-th digit; while E > 0 the weights W are corrected further.
x1
f(y1)
x2
f(y2 )
f(y3 )
xi
F(Y)
f(y4 )
x15
W
(1-)
. 2.5.
The algorithm for the four-output network:
1. Present the k-th digit to the inputs.
2. Compute the error of the first output, e_k1 = f(y_k1) - z_k1, and correct its weights as before.
3. Repeat for the remaining outputs.
4. Repeat steps 2 and 3 for all digits.
5. Compute the total error E by (2.2); if E > 0, return to step 1.
, ,
. 2.3, ,
. .
.
In the Widrow-Hoff modification the correction of the weight linking the i-th input to the j-th neuron is scaled by a learning-rate coefficient:

dw_ij = h e_j x_i,
(2.3)
where h is the learning rate (usually h is in [0.01, 0.1]), which makes the weight changes gradual.
(2.3) ,
.
. 2.6 , MatLab.
>> net = newp(PR, S)
PR R2, R ; S .
:
>> net = newp(PR, S, tf, lf)
tf {hardlim, hardlims}, hardlim; lf
{learnp, learnpn}, learnp.
initzero.
>> net.IW{1,1} = [1 1];
>> net.b{1} = [1]
adapt, ( ):
>> adapt(net,P,T)
P ; T .
Fig. 2.6. Perceptron layer in MatLab notation: input p (R x 1), weights IW1,1 (S1 x R), bias b (S1 x 1), output a1 (S1 x 1).
adapt ,
.
train , , .
.
:
>> p = [[2;2] [1;2] [2;2] [1;0.5]];
>> t =[0 1 0 1];
>> plotpv(p,t)
. , (.2.7):
>> net = newp([2 2;2 2],1);
>> net = train(net,p,t);
TRAINC, Epoch 0/100
TRAINC, Epoch 2/100
TRAINC, Performance goal met.
,
(. 2.8).
Fig. 2.7. Vectors to be classified (output of plotpv), coordinates P(1), P(2).
. 2.8.
C plotpc (. 2.9):
>> plotpc(net.IW{1,1},net.b{1});
sim:
>> a = sim(net,p)
a=
0 1 0 1
t.
OR:
>> percOR=newp([0 1; 0 1],1);
>> input = [0 0 1 1; 0 1 0 1]; d=[0 1 1 1];
>> plotpv(input,d);
>> percOR.adaptParam.passes=20;
>> percORa=adapt(percOR,input,d);
>> plotpc(percORa.IW{1},percORa.b{1});
>> rez=sim(percORa,[0 0 1 1; 0 1 0 1])
rez =
0
1 1 1
Fig. 2.9. Training vectors and the decision line drawn by plotpc.
Fig. 2.10. Decision line of the OR perceptron.
. 2.10 .
. A, B, C, D. 51
.
:
>> A = [rand(1,20)-0.6; rand(1,20)+0.6];
>> B = [rand(1,20)+0.6; rand(1,20)+0.6];
>> C = [rand(1,20)+0.6; rand(1,20)-0.6];
>> D = [rand(1,20)-0.6; rand(1,20)-0.6];
>> plot(A(1,:),A(2,:),'bs')
>> hold on
>> plot(B(1,:),B(2,:),'r+')
>> plot(C(1,:),C(2,:),'go')
>> plot(D(1,:),D(2,:),'m*')
>> grid on
>> text(.1,1.7,'Class A')
>> text(1.1,1.7,'Class B')
>> text(0.1,0.7,'Class C')
>> text(1.1,0.7,'Class D')
. 2.11.
:
>> a = [0 1]';
>> b = [1 1]';
>> c = [1 0]';
>> d = [0 0]';
Fig. 2.11. The four clusters of training vectors (Class A, Class B, Class C, Class D).
. 2.12.
:
>> P = [A B C D];
>> T = [repmat(a,1,length(A)) repmat(b,1,length(B)) ...
repmat(c,1,length(C)) repmat(d,1,length(D))];
. :
>> net = newp([1 2;1 2],2);
>> net = train(net,P,T);
. 2.12.
,
.
2.3.
, ADALIN (ADAptive LInear
Neuron networks), . ,
(), , . . 2.13.
Fig. 2.13. Linear neuron with two inputs in MatLab notation: a = purelin(Wp + b).
. 2.13, , .
, ,
,
WP + b = 0.
n = WP + b = w11p1 + w12p2 + b = 0.
:
p1 = 0:p2 = b/w12,
p2 = 0:p1 = b/w11.
, ,
b .
b = 0 WP = 0, .
.
w1,1 = 1, w1,2 = 0,5, b = 1. , ,
p1 = 2, p2 = 2
2
a = purelin(WP + b) = [1 0,5] -1 = 2 > 0.
2
54
Fig. 2.14. Layer of S linear neurons in MatLab notation: a = purelin(Wp + b), W of size S x R.
(. 2.14).
:
>> net=newlin(PR, S, id, lr),
>> net=newlin(PR, S, 0, P),
>> net=newlind(P, T),
PR R2 R ; S ; id
( ); lr ( 0,01); P RQ, Q ;
SQ;
.
>> net = newlin( [1 1; 1 1],1);
, 1 , . . .
>> net.IW{1,1} = [1 0.5];
>> net.b{1} =[1];
>> p = [2; 2];
>> a = sim(net,p)
a=2
2.4.
.
, , N :
{P1, Z1}, {P2, Z2}, ..., { PN, ZN},
Pi ; Zi .
Ai Pi.
k-
E(k) = (1/N) sum_{i=1}^{N} (Z_i - A_i)^2.
k- , . 2.15.
, e .
,
.
, :
dw1,j = w1,j(k + 1) - w1,j(k) = -a dE/dw1,j, where E = e^2(k),  j = 1, ..., R,

with the output error e(k) = z(k) - a(k) and learning-rate coefficient a.

Fig. 2.15. Linear neuron with weights w1,1 ... w1,R used in the derivation.
d(e^2(k))/dw1,j = 2e(k) de(k)/dw1,j = 2e(k) d/dw1,j [z(k) - (sum_{j=1}^{R} w1,j p_j(k) + b)] = -2e(k) p_j(k),  j = 1, ..., R.

Hence

dw1,j = -a(-2e(k) p_j(k)) = 2a e(k) p_j(k) = h e(k) p_j(k),  j = 1, ..., R,

where h = 2a, and similarly for the bias

b(k + 1) = b(k) + h e(k),

which is the Widrow-Hoff (LMS) learning rule.
, .
As an example take h = 0.4, the initial weights W(1) = [0 0 0], and the training pairs P(1) = [-1; 1; -1] with target z1 = -1 and P(2) = [1; 1; -1] with target z2 = 1. The first step gives

a(1) = W(1)P(1) = [0 0 0][-1; 1; -1] = 0,
e(1) = z1 - a(1) = -1 - 0 = -1,
W(2) = W(1) + h e(1)P(1)' = [0 0 0] + 0.4(-1)[-1 1 -1] = [0.4 -0.4 0.4].

The second step:

a(2) = W(2)P(2) = [0.4 -0.4 0.4][1; 1; -1] = -0.4,
e(2) = z2 - a(2) = 1 - (-0.4) = 1.4,
W(3) = W(2) + h e(2)P(2)' = [0.96 0.16 -0.16].

Checking the new weights on both inputs:

W(3)P(1) = [0.96 0.16 -0.16][-1; 1; -1] = -0.64,
W(3)P(2) = [0.96 0.16 -0.16][1; 1; -1] = 1.28,

so the residual errors are -1 - (-0.64) = -0.36 and 1 - 1.28 = -0.28, and

e = (1/2)[(-0.36)^2 + (-0.28)^2] = 0.1.
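The LMS iteration of this example is easy to reproduce numerically (NumPy sketch, assuming the learning rate 0.4 and the two training pairs above):

```python
import numpy as np

eta = 0.4
W = np.zeros(3)
pairs = [(np.array([-1.0, 1.0, -1.0]), -1.0),
         (np.array([1.0, 1.0, -1.0]), 1.0)]

history = []
for p, z in pairs:
    a = W @ p                # linear neuron output
    e = z - a                # output error
    W = W + eta * e * p      # LMS weight update
    history.append(W.copy())
```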
Since e^2 > e^2_min, the training is continued.
MatLab:
>> p=[-1 1; 1 1; -1 -1];
>> t=[-1 1];
>> net=newlin([-1 1; -1 1; -1 1],1);
>> net.trainParam.goal=0.1;
>> [net,tr]=train(net,p,t);
TRAINB, Epoch 0/100, MSE 1/0.
TRAINB, Epoch 25/100, MSE 0.36417/0.
TRAINB, Epoch 50/100, MSE 0.13262/0.
TRAINB, Epoch 75/100, MSE 0.048296/0.
TRAINB, Epoch 100/100, MSE 0.0175879/0.
TRAINB, Maximum epoch reached.
(100),
.
,
, , . . .
train () . ,
>> p = [2 1 2 1;2 2 2 1]; t = [0 1 0 1];
>> net = newlin( [2 2; 2 2],1);
>> net.trainParam.goal= 0.1;
>> net.trainParam.epochs = 60;
>> net = train(net,p,t);
. 2.16 ,
, .
, .
, . 2.17.
y1 = x1w11 + x2w21 + b1,
y2 = x1w12 + x2w22 + b2,
z = y1v1 + y2v2 + b3 = (x1w11 + x2w21 + b1) v1 +
+ (x1w12 + x2w22 + b2) v2 + b3 = x1(w11 v1+ w12 v2) +
+ x2(w21v1 + w22 v2) + b1v1 + b2v2 + b3 = x1q11 + x2q21 + b,
q11 = w11v1+ w12v2;q21 = w21v1 + w22v2; b = b1v1 + b2v2 + b3.
Z = VW X = QX.
ADALIN
MADALINE (Many ADAptive LInear NEurons).
. 2.18.
, , .
Fig. 2.16. Training record: the performance reaches 0.102236 against the goal 0.1 over 60 epochs.
Fig. 2.17. Two-layer linear network: inputs x1, x2, first-layer outputs y1, y2, output z.
Fig. 2.18. MADALINE: parallel ADALINE units whose outputs X are combined by an OR element.
ADALIN ,
OR .
2.5.
ADALIN, .
{p(k)}
, N1 .
. :
>> net = newlin([0,11],1);
(. 2.19):
>> net.inputWeights{1,1}.delays = [0 1 2];
>> net.IW{1,1} = [2 4 11];
>> net.b{1} = [3];
>> pi = {1 2};
:
>> p = {1 2 5 9}
>> [a,pf] = sim(net,p,pi);
Fig. 2.19. Linear network with a tapped delay line: p1(t) = p(t), p2(t) = p(t - 1), p3(t) = p(t - 2), output a(t) = purelin(Wp + b).
a =
[24] [33] [32] [63]
Indeed:

a(1) = 2*1 + 4*2 + 11*1 + 3 = 24,
a(2) = 2*2 + 4*1 + 11*2 + 3 = 33,
a(3) = 2*5 + 4*2 + 11*1 + 3 = 32,
a(4) = 2*9 + 4*5 + 11*2 + 3 = 63.
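A tapped-delay-line network of this kind is just a dot product over the current and delayed samples; a sketch reproducing the four outputs (initial delayed values 1 and 2, as in pi):

```python
def delay_line_outputs(p, w, b, initial):
    """w[0] weighs the current sample, w[1] and w[2] the delayed ones."""
    buf = list(initial)   # [p(t-2), p(t-1)] start as the pi values
    out = []
    for x in p:
        out.append(w[0] * x + w[1] * buf[-1] + w[2] * buf[-2] + b)
        buf.append(x)
    return out

outs = delay_line_outputs([1, 2, 5, 9], [2, 4, 11], 3, [1, 2])
```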
, 4, 5, 6, 7 20, 25, 30, 35. :
>> net = newlin([0,10],1);
>> net.inputWeights{1,1}.delays = [0 1 2];
>> net.IW{1,1} = [7 8 9];
>> net.b{1} = [0];
>> pi ={1 2};
>> p = {4 5 6 7};
>> T = {20 25 30 35}
>> net.adaptParam.passes = 200;
>> [net,y,E,pf,af] = adapt(net,p,T,pi);
>> T =
[20] [25] [30] [35]
:
>> sys = ss(tf(1, [1 1 1])); %
>> t = 0:0.2:10;
%
>> [Y, t] = step(sys, t)
%
>> plot(t,Y)
>> grid
>> xlabel('t')
>> ylabel('Y(t)')
. 2.20.
:
>> P = Y(1: length(t)2)';
>> T = Y(3: length(t))';
>> Z= [1 2];
>> net = newlin([1 1], 1, Z);
>> pi ={0 0};
>> net.IW{1,1} = [1 1];
>> net.adaptParam.passes = 200;
>> P1 = num2cell(P);
>> T1 = num2cell(T);
>> [net,y,E,pf,af] = adapt(net,P1,T1,pi);
Fig. 2.20. Step response Y(t) of the system over 10 s.
(. 2.21):
>> x = sim(net,P1)
>> x1 = cat(1,x{:})';
>> t1=t(1: length(t)2)';
>> plot(t1,x1,'b:+', t1,P,'ro')
.
>> [net,y,E,pf,af] = adapt(net,P1,P1,pi);
(. 2.22).
. , , . 2.23.
>> t = 0: 0.01: 6;
>> y = [sin(4.1*pi*0.2.*t.*abs(0.7*t - 5))];
>> plot(t,y)
>> ylim([-1.2 1.2])
>> xlabel('Time [sec]');
>> ylabel('Target Signal');
>> grid on
Fig. 2.21. Network output and training points, y(t) versus t.
Fig. 2.22. Result of adaptation, y(t) versus t.
Fig. 2.23. Target Signal versus Time [sec].
:
>> p = con2seq(y);
>> net = newlin([-1,1],1);
>> net.inputWeights{1,1}.delays = [0 1 2 3 4 5];
>> [net,Y,E] = adapt(net,p,p);
Fig. 2.24. Prediction error versus Time [sec].
:
>> E = seq2con(E); E = E{1};
>> plot(t,E);
>> grid on
>> legend('Prediction error')
>> xlabel('Time [sec]');
>> ylabel('Error');
>> ylim([-1.2 1.2])
. 2.24, .
:
>> net.IW{1}
ans =
0.2330 0.2045 0.1735 0.1405 0.1059 0.0702
1. MatLab?
2. MatLab?
3. MatLab?
4. ?
5. ?
6. ?
7.
MatLab?
8. train adapt, ?
9. 16 . ,
?
10.
?
11. ?
12. ?
13. ?
14. ?
15.
?
16. ?
17. ?
18.
MatLab?
19. ADALIN?
20.
ADALIN?
21. MADALINE?
3.
3.1.
()
(. feed-forward network)
. (multi-layer perceptron).
:
;
;
();
, ,
.
(..
). ,
,
.
.
. 3.1
m n.
Fig. 3.1. Single-layer feed-forward network: n inputs x1 ... xn fully connected to m output neurons y1 ... ym.
.
. . 3.2 (
).
[39].
.
1- , 2- .
, ,
.
.
,
, , .
, , .
MatLab :
NEWFF(PR,[S1 S2...SN],{TF1 TF2...TFN},BTF,BLF,PF),
where PR is an R x 2 matrix of minimum and maximum values of the R input elements; S1, S2, ..., SN are the sizes of the layers; TF1 ... TFN are their activation functions; BTF is the training function, BLF the weight-learning function and PF the performance criterion.

Fig. 3.2. Multilayer feed-forward network: inputs x1 ... xn and weight matrices W1, W2, W3; outputs y1, y2.
3.2.
-, ,
.
,
, - .
,
. ,
, ,
.
,
. ,
.
(), -,
. , ,
.
, . , :
dF(y)/dy = F(y)(1 - F(y)).
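This identity for the logistic function is easy to verify numerically (illustrative sketch):

```python
import math

def F(y):
    return 1.0 / (1.0 + math.exp(-y))

# central-difference derivative versus the identity F'(y) = F(y)(1 - F(y))
y, h = 0.7, 1e-6
numeric = (F(y + h) - F(y - h)) / (2 * h)
analytic = F(y) * (1.0 - F(y))
```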
.
, , . ,
.
, . ,
:
1) (X, Z*), X ;
2) Z = F(Y);
3) E;
4) ;
5) . 1 . .,
.
1 2 , 3 4
.
: .
:
,
, .
, , .
:
1. , . . .
2. (overfitting),
, .
. 3.3, , .
. 3.3 . . 3.3, , . 3.3, .
,
W, V (. 3.4).
. 3.3.
x1
1
1
x2
xi
xn
uj
zk =F( yk )
zp =F( yp )
m
W
z1 =F( y 1)
. 3.4.
f(X), X ,
:
X(t + 1) = X(t) - h dF(X)/dX.

Applied to the output-layer weights V this gives

V(t + 1) = V(t) - h dE(V)/dV,

or, for a single weight v_jk connecting the j-th hidden neuron with the k-th output neuron,

v_jk(t + 1) = v_jk(t) - h dE/dv_jk.

Take the error function

E = (1/2) sum_{k=1}^{p} (z_k - z_k*)^2.

Differentiating with respect to the output-layer variables,

dE/dz_k = z_k - z_k* = d_k,
dE/dy_k = (dE/dz_k)(dz_k/dy_k) = d_k z_k (1 - z_k),
dE/dv_jk = (dE/dz_k)(dz_k/dy_k)(dy_k/dv_jk) = d_k z_k (1 - z_k) u_j,

where u_j is the output of the j-th hidden neuron and y_k = sum_j v_jk u_j. For the hidden-layer weights,

a_j = sum_i w_ij x_i,  u_j = F(a_j),

so that

dE/da_j = (dE/du_j)(du_j/da_j) = d_j u_j (1 - u_j),

where the hidden-layer error d_j is accumulated over all outputs:

d_j = dE/du_j = sum_{k=1}^{p} d_k z_k (1 - z_k) v_jk.

Finally,

dE/dw_ij = (dE/du_j)(du_j/da_j)(da_j/dw_ij) = sum_{k=1}^{p} d_k z_k (1 - z_k) v_jk u_j (1 - u_j) x_i.

The error terms d are thus propagated backward from the output layer to the hidden layer, which gives the method its name.

Fig. 3.5. Notation of the derivation: inputs x_i, hidden neuron u_j with weights w_ij, output weights v_jk.
1
k
,
.
. , 100%. ,
:
, ;
, .
,
.
. , , 0,01.
, .
.
, .
, .
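The chain-rule formulas above translate directly into code; a minimal one-hidden-layer sketch with logistic activations (NumPy, illustrative, not the toolbox implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# XOR training set; a column of ones provides the bias inputs
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
Z_star = np.array([[0.], [1.], [1.], [0.]])

W = rng.normal(size=(3, 4))   # input -> hidden weights (bias row included)
V = rng.normal(size=(5, 1))   # hidden (+ bias) -> output weights
eta = 0.1

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

losses = []
for _ in range(5000):
    U = sigmoid(X @ W)                        # hidden outputs u_j
    Ub = np.hstack([U, np.ones((4, 1))])      # append a bias unit
    Z = sigmoid(Ub @ V)                       # network outputs z_k
    d_out = (Z - Z_star) * Z * (1 - Z)        # d_k z_k (1 - z_k)
    d_hid = (d_out @ V[:4].T) * U * (1 - U)   # error propagated backward
    V -= eta * Ub.T @ d_out                   # dE/dv_jk = d_out * u_j
    W -= eta * X.T @ d_hid                    # dE/dw_ij = d_hid * x_i
    losses.append(0.5 * float(np.sum((Z - Z_star) ** 2)))
```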
3.3.
4 : ,
1, 0.
3.3.
>> inp = [0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1;
          0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1;
          0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1;
          0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]
:
>> out=[0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0]
>> network=newff([0 1;0 1; 0 1; 0 1],[6 1],{'logsig','logsig'})
,
( ),
.
:
>> network=init(network);
, :
>> network.trainParam.epochs = 500;
>> network.trainParam.lr = 0.05;
>> network=train(network,inp,out);
:
>> y=sim(network,inp);
>> out-y
ans =
1.0e004 *
0.0191 0.0044 0.0204 0.0237 0.0039 0.0000 0.0316
0.0041 0.0014 0.0367 0.0253 0.1118 0.0299 0.0384
0.0000 0.0000
.
.
Example 3.4. Train a network to model a relay element with input e and output u:

u = { +1, e >= 0;  -1, e < 0 }.
MatLab :
>> w=2*pi;
>> for i=1:300
%
time(i)=0.01*i;
e(i)=exp(-time(i))*sin(w*time(i));
if (e(i)>=0.0) t(i)=1;
elseif (e(i)<0.0) t(i)=-1;
end
end
Fig. 3.6. The input signal e(t) and the relay output.
Fig. 3.7. Training points and network response: T, Y versus P, X.
>> figure(1)
>> plot(P,T,'ko',X,Y)
P T 11 , .
, . 50
0,01. . 3.7 ,
.
3.6. :
>> x = 0:.05:2;
>> y = 1 ./ ((x-.3).^2 + .01) + 1 ./ ((x-.9).^2 + .04) - 6;
:
>> net=newff([0 2], [15,1], {'tansig','purelin'},'trainlm');
>> net.trainParam.show = 50;
>> net.trainParam.epochs =100;
>> net.trainParam.goal = 0.001;
>> P=x;
>> T=y;
>> net1 = train(net, P, T);
. 3.8. 55- .
Fig. 3.8. Training record; the goal is met at the 55th epoch.
Fig. 3.9. The humps function and its network approximation: y, A versus x, P.
Training used the trainlm function, which implements the Levenberg-Marquardt algorithm.
:
>> A = sim(net1,P);
>> plot(x,y,P,A)
(. 3.9).
Example 3.7. Approximation of a function of two variables z = cos(x)*sin(y).
C MatLab , . 3.10:
>> x = -2:0.25:2; y = -2:0.25:2;
>> z = cos(x)'*sin(y);
>> mesh(x,y,z)
:
>> P = [x;y]; T = z;
{x; y} . 17 :
>> net=newff([-2 2; -2 2], [25 17], {'tansig' 'purelin'},'trainlm');
>> net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
Fig. 3.10. The surface z = cos(x)*sin(y).
Fig. 3.11. The surface reproduced by the trained network.
net.trainParam.goal = 0.001;
>> net1 = train(net, P, T);
(. 3.11):
>> A=sim(net1,P)
>> figure(1); mesh(x,y,A)
. 3.10 3.11 .
3.8. .
,
-
y = 3a + 4ab + 2c + f,
a, b, c ; f . :
>> a = rand(1,100);
>> b = rand(1,100);
>> c = rand(1,100);
>> f = rand(1,100)*0.025;
>> y = 3*a + 4*a.*b + 2*c + f;
>> P = [a; b; c];
>> T = y;
Fig. 3.12. Difference between the target values y and the network outputs over the 100 samples.
:
>> net = newff([0 1; 0 1; 0 1], [4 1], {'tansig', 'purelin'});
>> net = train(net, P,T)
:
>> out=sim(net,P);
>> t=1:100;
>> plot(t,y-out)
. 3.12.
. , .
3.9. :
>> net = newff([1 1],[30,1],{'tansig','purelin'}); %
>> net.trainParam.epochs = 500;
>> p = [-1:0.05:1];
%
>> t1 = sin(2*pi*p);
%
>> t = sin(2*pi*p)+0.1*randn(size(p));
%
>> [net,tr] = train(net,p,t);
>> t2 = sim(net,p);
>> figure(1); plot(p,t,'+',p,t2,'-',p,t1,':')
>> legend('','',''); grid on
. 3.13. ,
.
Fig. 3.13. Noisy training points, network output and the underlying sine signal.
Fig. 3.14. The same approximation on the validation data.
train,
:
>> val.P = [-0.975: 0.05: 0.975]; %
colormap(MM);
nn=min([n,25]);
for j=1:nn
subplot(5,5,j)
imagesc(reshape(alphabet(:,j),5,7)');
axis equal
axis off
end
. 3.15:
>> plotletters(alphabet);
targets
26 . 10:
>> net = newff(minmax(alphabet),[10 26],{'logsig' 'logsig'},'traingdx');
>> P = alphabet; T = targets;
:
>> net.trainParam.epochs = 1000;
>> [net,tr] = train(net,P,T);
. 3.15.
. 3.16.
(. 3.17):
. 3.16.
. 3.17.
86
. 3.18.
, .
,
, .
3.6.
- .
3.11.
,
t^2 d^2y/dt^2 + t dy/dt + (t^2 - a^2) y = 0.
. 3.19:
>> t=0:0.1:20;
y=bessel(1,t);
plot(t,y)
grid
Fig. 3.19. The Bessel function y(t) on the interval [0, 20].
:
>> net=newff([0 20], [10,1], {'tansig','purelin'},'trainlm');
>> P=t; T=y;
>> net.trainParam.show = 50;
>> net.trainParam.lr = 0.05;
>> net.trainParam.epochs = 300;
>> net.trainParam.goal = 0.001;
>> net1 = train(net, P, T);
11 (. 3.20).
:
>> A= sim(net1,P);
(. . 3.18):
>> figure(1); plot(P,T-A)
. 3.21,
, .
. 3.20.
Fig. 3.21. Approximation error T - A.
3.12.
d^2y/dt^2 + (y^2 - 1) dy/dt + y = 0.
, MatLab
Simulink (. 3.22).
Fig. 3.22. Simulink model of the Van der Pol equation: Product, Add and two Integrator blocks; the signals x and y are saved by To Workspace blocks.
Set the initial conditions of the integrators x and y (here x = 2, y = 2).
10 (
'Van_der_Pol' mdl-):
>> [t,z]=sim('Van_der_Pol',10);
. 3.23 .
, :
>> P = t';
>> T = z';
>> net=newff([0 20], [20,2], {'tansig','purelin'},'trainlm');
>> net.trainParam.show = 50;
>> net.trainParam.lr = 0.05;
>> net.trainParam.epochs = 300;
>> net.trainParam.goal = 0.001;
>> net1 = train(net, P, T);
:
>> A= sim(net1,P);
>> figure(1); plot(P,A)
>> grid
. 3.23 3.24
.
Fig. 3.23. The solutions x(t), y(t) of the Van der Pol equation.
Fig. 3.24. The network approximation A of the solutions versus P.
, .
. .
3.13. 2-
.
d^2y(t)/dt^2 + 0.5 dy(t)/dt + y(t) = x(t),  W(s) = Y(s)/X(s) = 1 / (s^2 + 0.5s + 1).

Fig. 3.25. Simulink model: Step -> Transfer Fcn 1/(s^2 + 0.5s + 1) -> Scope.
( PF ):
>> [t,y]=sim('PF',20);
(. 3.26):
>> plot(t,y(:,2))
, :
>> P = t';
>> T = y(:,2)';
>> net=newff([0 20], [15,1], {'tansig','purelin'},'trainlm');
>> net.trainParam.show = 50;
>> net.trainParam.lr = 0.05;
>> net.trainParam.epochs = 100;
>> net.trainParam.goal = 0.001;
>> net1 = train(net, P, T);
:
>> A= sim(net1,P);
>> figure(1); plot(P,A)
>> grid
Simulink
( 0,01 ):
>> gensim(net1,0.01)
Fig. 3.26. Step response y(t) of the system.
, . 3.27,
.
. 3.28, .
, (, step sine wave)
, (. 3.29).
Fig. 3.27. Simulink model comparing the plant (Step -> Transfer Fcn 1/(s^2 + 0.5s + 1)) with the generated Neural Network block; Clock drives the network input p{1}, and y(t), z(t) are saved to the workspace.
Fig. 3.28. Plant output y(t) and network output z(t).
Fig. 3.29. Responses y(t), z(t) to other test signals (step and sine wave).
, .
.
3.7.
. MatLab
.
3.14. data.txt :
 10     0
  7.5   7.07
  5    10
  2.5   7.07
  0     0
 -2.5  -7.07
 -5   -10
 -7.5  -7.07
-10     0
>> load data.txt;
>> P=data(1:9,1);
>> T=data(1:9,2);
:
>> [pn,minp,maxp,tn,mint,maxt] = premnmx(P',T')
pn =
    1.0000 0.7500 0.5000 0.2500 0 -0.2500 -0.5000 -0.7500 -1.0000
minp = -10
maxp = 10
tn =
    0 0.7070 1.0000 0.7070 0 -0.7070 -1.0000 -0.7070 0
mint = -10
maxt = 10
Example 3.15. The premnmx function maps the data onto [-1, 1]; after training, the outputs are returned to the original scale by postmnmx:
>> P = [0.92 0.73 0.47 0.74 0.29; 0.08 0.86 0.67 0.52 0.93];
>> t = [0.08 3.40 0.82 0.69 3.10];
>> [pn,minp,maxp,tn,mint,maxt] = premnmx(P,t);
>> net = newff(minmax(pn),[5 1],{'tansig' 'purelin'},'trainlm');
>> net = train(net,pn,tn); grid on
>> an = sim(net,pn)
an = 0.6493 1.0000 1.0000 0.2844 0.8578
>> a = postmnmx(an,mint,maxt)
a = 0.0800 3.4000 0.8200 0.6900 3.1000
Similarly, the prestd function normalizes data to zero mean and unit standard deviation, and poststd performs the inverse transformation.
1.
?
2. ?
3. , ,
?
4. , ?
5.
MatLab?
6. ?
7.
?
8. ?
9. ?
10. ?
11. ?
12.
?
13.
?
14. ?
15.
?
16.
?
17.
?
18.
?
19. ?
4.
4.1.
- ,
.
, ,
. ,
. .
. 4.1 . D
.
y(t)
ym(t).
.
,
,
, .
Fig. 4.1. Neural network model of a plant: delayed inputs x_k, x_{k-1}, ..., x_{k-m} and delayed outputs y_k, y_{k-1}, ..., y_{k-n} feed the network, which produces the model output y_k^m.
.
,
.
4.1.
3.13 ( . 3.26). 2- ,
(. 4.2).
,
, , . 4.2,
, .
. 4.3 (
0,01 ).
,
:
>> net=newff([0 1; 3 3; 3 3], [3,1], {'purelin','purelin'},'trainlm');
>> P = simout';
>> T = simout1';
>> net.trainParam.show = 50;
>> net.trainParam.lr = 0.05;
>> net.trainParam.epochs = 1000;
>> net.trainParam.goal = 0.001;
>> net1 = train(net, P, T);
Fig. 4.2. Network model of the second-order plant: inputs x_k, y_{k-1}, y_{k-2}, output y_k^m.
Fig. 4.3. Simulink model for collecting training data: Step -> Transfer Fcn 1/(s^2 + 0.5s + 1), with two Transport Delay blocks (0.5 s each); the signals are saved as simout and simout1.
(. 4.4).
( . 4.5).
Neural Network
Simulink-:
>> gensim(net1,0.01)
0,01 .
. 4.6,
. , .
Fig. 4.4. Training of the network model.
Fig. 4.5. Simulink model comparing the plant output y(t) with the network model output z(t).
Fig. 4.6. Plant and model step responses y(t), z(t).
4.2.
Consider the identification of a nonlinear plant described by the difference equation

y(k + 1) = y(k)(y(k - 1) + 2)(y(k) + 3) / (8 + y^2(k) + y^2(k - 1)) + u(k),

i.e. y(k + 1) = f(y(k), y(k - 1)) + u(k).
, . 4.7, f.
( u [2, 2]):
>> u=rands(1, 301)*2;
:
>> for k=2 : 301
y(k + 1)=y(k)*(y(k1) + 2)*(y(k) + 3)/(8 + y(k)^2 + y(k1)^2) + u(k);
out(k1)=(y(k + 1)u(k))/20;
in(k1)=y(k)/20;
end;
. :
>> net = newff([min(in) max(in); min(in) max(in)],[2 10 1],{'tansig'
'tansig' 'tansig'},'trainlm','learngdm','mse');
:
>> plantin=[in(1:299); in(2:300)];
>> plantout=out(1:299);
:
>> net.trainParam.epochs = 500;
>> net.trainParam.goal = 0.0005;
>> net=train(net,plantin,plantout);
Fig. 4.7. Structure of the plant: delay elements z^-1 store y(k) and y(k - 1), which together with u(k) form y(k + 1).
, :
>> yp(1)=0.0; yp(2)=0.0;out1(1)=0; out1(2)=0;
>> for k=2:500
if (k<=200)u(k)=2.0*cos(2*pi*k*0.01);
else
u(k)=1.2*sin(2*pi*k*0.05);
end;
yp(k + 1)=yp(k)*(yp(k1) + 2)*(yp(k) + 2.5)/(8.5 + yp(k)^2 + yp(k
1)^2) + u(k);
out1(k)=yp(k)/20;
out1(k1)=yp(k1)/20;
nnout(k + 1)=20*sim(net,[out1(k);out1(k1)]) + u(k);
end;
>> plot(yp, 'b');
>> hold on;
>> plot(nnout, ':k');
>> grid;
>> axis([0, 500, 4.0, 10.0]);
Fig. 4.8. Plant output (solid line) and network model output (dotted line) over 500 time steps.
Fig. 4.9. On-line learning of a neural controller: the reference model produces um(t), and the controller is trained on the error eu(t) = um(t) - u(t); e(t), x(t), u(t) and y(t) are the loop signals.
, ,
.
. , - , ..
( -) on line (. 4.9)
(. . (t)
x(t)) u(t).
, .
:
eu (t) = um (t) - u(t).
offline ,
.
4.3. -.
.
. 4.10 , .
- ,
:
>> net=newff([1 1; 5 5], [3,1], {'purelin','purelin'},'trainlm');
P = simout';
T = simout1';
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.001;
net1 = train(net, P, T);
. 4.11,
.
Fig. 4.11. Simulink model for collecting the inverse-model training data: the plant 1/(0.1s^2 + 0.01s + 1) with Derivative and Gain blocks; the input and output signals are saved as simout and simout1.
105
. 4.12 .
(. 4.13).
, , ,
.
.
Fig. 4.12. Control system in which the trained network (the inverse model) drives the plant 1/(0.1s^2 + 0.01s + 1).
Fig. 4.13. Reference step and the responses y(t), z(t).
4.2.
.
, , , .
, . .
,
.
, .
, [40].
,
(. 4.14).
. ,
(. 4.15).
Fig. 4.14. Series-parallel identification scheme: the model input uses the delayed plant outputs y_{k-1}, y_k.
Fig. 4.15. Parallel identification scheme: the model input uses its own delayed outputs.
, ,
. ,
, . , .
4.3.
. ,
, (. 4.16).
, . 4.16
(. 4.17, u(t) ; y(t) ; g(t)
; e(t) ).
, - , .
(. 4.18, g(t) ).
Fig. 4.16. Control system with an inverse model in the forward path.
Fig. 4.17. Closed-loop system: reference g(t), error e(t), control u(t), plant output y(t).
Fig. 4.18. Ideal compensation: the controller realizes the inverse operator F^-1 of the plant operator F, so that

y(t) = g(t) F^-1 F = g(t).
F ,
y = F(x) (. 4.19,).
x = F1(y) (. 4.19,).
,
. , (. 4.20).
Fig. 4.19. Direct characteristic y = F(x) and inverse characteristic x = F^-1(y) of a nonlinear element.
Fig. 4.20. Compensation of a nonlinearity by cascading it with its inverse.
For a linear plant with transfer function W(s), the inverse link Winv(s) must satisfy

W(s)Winv(s) = 1.
(4.1)
An exact inverse is generally not physically realizable. For a first-order plant

W(s) = 1 / (T1 s + 1)

one therefore requires only

W(s)Winv(s) = 1 / (T2 s + 1),
(4.2)
which gives

Winv(s) = (T1 s + 1) / (T2 s + 1),

where the small time constant T2 reconciles (4.1) and (4.2).
4.4.

W(s) = K / (T1 T2 s^2 + (T1 + T2) s + 1),

where K is the gain and T1, T2 are the time constants of the plant.
-,
The classical PID controller has the transfer function

Wp(s) = Kp + Ki/s + Kd s,
(4.3)
where Kp, Ki, Kd are the proportional, integral and derivative gains.
Let the desired closed-loop transfer function be

W0(s) = 1 / (T s + 1).

Since the closed loop gives

Wp(s)W(s) / (1 + Wp(s)W(s)) = 1 / (T s + 1),

the required controller is

Wp(s) = 1 / (T s W(s)).

Substituting W(s) yields

Wp(s) = (T1 + T2)/(K T) + (1/(K T)) (1/s) + (T1 T2/(K T)) s,

i.e. a PID controller whose gains are expressed through the plant parameters.
,
.
. .
In the time domain the PID law (4.3) reads

u(t) = Kp e(t) + Kd de(t)/dt + Ki Integral[e(t) dt].
(.4.21).
.
,
[26].
( ) ([41] .):
W(s) = (b_m s^m + b_{m-1} s^{m-1} + ... + b_1 s + b_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0),

where y is the output, x the input, a_i and b_j are constant coefficients, and m <= n.
Fig. 4.21. Structure of the PID controller: parallel channels Kp e, Kd de/dt and Ki Integral[e dt].
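The PID law in discrete form is a three-term sum over the sampled error; a minimal illustrative sketch (the gain values are arbitrary):

```python
def make_pid(kp, ki, kd, dt):
    """Stateful discrete PID: u_k = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt."""
    state = {"integral": 0.0, "e_prev": 0.0}
    def pid(e):
        state["integral"] += e * dt
        deriv = (e - state["e_prev"]) / dt
        state["e_prev"] = e
        return kp * e + ki * state["integral"] + kd * deriv
    return pid

pid = make_pid(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
u0 = pid(1.0)  # first sample of a unit error step
```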
y(s)(s^n a_n + s^{n-1} a_{n-1} + ... + s a_1 + a_0) = x(s)(s^m b_m + s^{m-1} b_{m-1} + ... + s b_1 + b_0).
.
For digital implementation the derivatives are replaced by finite differences:

s x ~ (x_k - x_{k-1}) / dt,  s^2 x ~ (x_k - 2x_{k-1} + x_{k-2}) / dt^2,  ...

where dt is the sampling period and k the number of the step. The transfer function then turns into the difference equation

y_n = sum_i a_i y_{n-i} + sum_j b_j x_{n-j},

where the coefficients a_i and b_j are expressed through those of the transfer function.
z-, , :
W (z) =
zi i .
,
.
,
.4.22. .
.
, -
y = z-1y + b1x + b2z-1x + b3 z-2 x.
. 4.22 .
.
Fig. 4.22. Structure of a digital filter: feedforward coefficients b0 ... bm, feedback coefficients a1 ... an, delay elements z^-1 and output scaling 1/a0.
, (. 4.23).
( ()) ([42 45] .):
, ;
Fig. 4.23. Neural network with tapped delay lines z^-1 on the input x and on the fed-back output.
,
, ,
.
.
, , .
, .
,
.
:
X(k + 1) = F (X(k), U (k)),
Y (k + 1) = G (X(k)),
X(k) Rn k; U(k) Rm
(); Y(k + 1) Rp k + 1.
n (NARX),
X(k) = [y(k), y(k - 1), ..., y(k - n), u(k), u(k - 1), ..., u(k - m)].
,
X(k) = [y(k), y(k 1), , y(k n)].
, . 4.23 ,
. 4.24 (D ).
,
.
4.4.
. , 115
Fig. 4.24. Identification scheme with delay lines D: plant input u(k), output y(k), predicted state X(k + 1).
Fig. 4.25. Inverse-model identification: the network receives y(k) and reproduces u^m(k), which is compared with u(k).
offline . - , ,
, . , .
.
() , . 4.26,
[26].
,
. .
() y(t) g(t):
ey (t) = y(t) - g(t).
, ,
( ),
eu(t) = 0ey(t) 0.
Fig. 4.26. Learning scheme with a reference model: reference g(k), controller output u(k), network copy u^m(k), plant output y(k).
ey(t), eu(t).
(. 4.27)
.
The weights are corrected by the gradient rule

w(k + 1) = w(k) - dw(k),
dw(k) = -h e(k) (dy(k + 1)/du(k)) (du(k)/dw(k)),

which requires the sensitivity dy(k + 1)/du(k) of the plant output to the control; this quantity is usually estimated from the plant model.
[26]. .
ey(t). ,
.
, ey(t) -
Fig. 4.27. Training the controller through the output error: reference g(k), control u(k), delayed states X(k - 1), plant output y(k).
eu(t), . . [26].
eu(t)
ey(t) .
[26, 43].
, ,
, .
offline. , .
. on line
, (. 4.28).
.
eu(t) ey(t) (forward model). ey(t),
, eu(t).
.
Fig. 4.28. On-line learning with a forward model: the model output y*(t) is compared with the plant output y(t), and the output error ey(t) is propagated back through the model to obtain the controller error eu(t).
. , ,
. ,
.
4.5. MatLab
. ,
.
Neural Network toolbox :
1. (NN Predictive Controller).
2.
(NARMA L2 Controller).
3. (Model Reference
Controller).
:
.
, .
. 4.29. , , : e = yp ym
u.
(. 4.30). .
offline, .
. .
Fig. 4.29. Plant identification scheme in MatLab: the model output ym is compared with the plant output yp, and the error e = yp - ym drives the training.
Fig. 4.30. Two-layer identification network in MatLab: the inputs yp(t) and u(t) pass through tapped delay lines (TDL) and weight matrices IW1,1, IW1,2, then through LW2,1 to the output ym(t + 1).
, .
,
,
. , , , .
. ,
.
, . , [46]:
J = sum_{j=N1}^{N2} (yr(t + j) - ym(t + j))^2 + r sum_{j=1}^{Nu} (u'(t + j - 1) - u'(t + j - 2))^2,

where N1, N2 and Nu are horizon parameters, u' is the tentative control, r a weighting factor, and yr and ym are the desired and predicted responses.
.
, . 4.31,
.
. u, ,
.
Fig. 4.31. Model predictive control scheme: reference yr, model output ym, plant output yp; the optimization block selects the control u by minimizing J.
G , ,
.
, [48],
.
, NARMAL2,
y(k + d) = f[y(k), y(k - 1), ..., y(k - n + 1), u(k - 1), ..., u(k - m + 1)] +
+ g[y(k), y(k - 1), ..., y(k - n + 1), u(k - 1), ..., u(k - m + 1)] u(k).
,
, 123
u(k) y(k), .
The controller follows directly from the model:

u(k + 1) = (yr(k + d) - f[y(k), y(k - 1), ..., y(k - n + 1), u(k - 1), ..., u(k - m + 1)]) / g[y(k), y(k - 1), ..., y(k - n + 1), u(k - 1), ..., u(k - m + 1)],

which is realizable for a delay d >= 2.
NARMAL2 ,
. 4.32.
. 4.33 NARMAL2
MatLab.
Fig. 4.32. NARMA-L2 control scheme: reference r, desired response yr.
Fig. 4.33. Structure of the NARMA-L2 controller in MatLab: tapped delay lines (TDL) and weight matrices IW, LW implement the f and g approximators; the outputs y(t + 1), y(t + 2) and the control u(t + 1) are formed from them.
Fig. 4.34. Model reference control scheme: reference-model output ym, model error em, controller error eu, desired response yr.
, .
.
,
.
Model Reference Controller
MATLAB Neural Network Blockset/Control
Systems (. 1).
1. ?
2.
?
3. ?
4. ,
?
5. -
?
6.
?
7. ?
8.
?
9. ?
10. -?
11.
?
12. ?
13. ?
14. ?
15. ?
16. ?
17. ?
18.
?
19. MatLab?
20. MatLab?
21. ?
22. ?
23. NARMAL2 MatLab?
24. ?
5.
5.1.
(Radial Basis Function RBF)
[49]. ( ) (. 5.1).
.
- ,
() .
- , i- RBF-
fi (X) = j( X - Ci ),
Ci RBF- : X,
C Rn. , .
A typical choice is the Gaussian function

phi(||X - Ci||) = exp(-||X - Ci||^2 / (2 s_i^2)),

where s is the width parameter of the function.
Fig. 5.1. RBF network: inputs x1 ... xn, a layer of radial neurons, and a linear output layer with weight matrix V producing y1 ... yk.
||X - Ci|| = sqrt((x1 - c_i1)^2 + (x2 - c_i2)^2 + ... + (xn - c_in)^2).
, i-
X Ci.
. 5.2
= 0 s = 1.
. 5.2, RBF- ,
. , ,
.
Other radial functions are also used, for example the linear function

phi(||X - Ci||) = ||X - Ci||

and the multiquadric function

phi(||X - Ci||) = (||X - Ci||^2 + s^2)^(1/2).
Fig. 5.2. The Gaussian radial function phi(x) for C = 0 and s = 1.
Fig. 5.3. Plots of alternative radial functions f(x).
RBF-
.
, RBF (. 5.4).
, , .
)
. 5.4.
() RBF- ()
. , :
m
5.2.
RBF-.
RBF- k (. 5.5).
m = n, n , {(X1, y1), (X2, y2), , (Xn, yn)}, Xi = [xi1, xi2, , xik].
,
Ci = Xi .
s , ,
.
W,
h(Xi) = yi.
W
Fig. 5.5. RBF network for exact interpolation: y = h(X) = sum_{i=1}^{n} w_i phi(||X - Ci||).
The n interpolation conditions give the linear system

[ f1(X1) f2(X1) ... fn(X1) ] [ w1 ]   [ y1 ]
[ f1(X2) f2(X2) ... fn(X2) ] [ w2 ] = [ y2 ]
[  ...     ...  ...   ...  ] [ ...]   [ ...]
[ f1(Xn) f2(Xn) ... fn(Xn) ] [ wn ]   [ yn ]

or, in matrix form,

FW = Y,

whence

W = F^-1 Y.
(5.1)
RBF- , .
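Exact RBF interpolation by (5.1) fits in a few lines (NumPy sketch; a Gaussian basis and 1-D inputs are assumed):

```python
import numpy as np

def rbf_interpolate(X, y, sigma=1.0):
    """Centers are the data points themselves; solve F w = y for the weights."""
    X = np.asarray(X, dtype=float)
    d = np.abs(X[:, None] - X[None, :])        # pairwise distances
    F = np.exp(-d**2 / (2 * sigma**2))         # interpolation matrix
    w = np.linalg.solve(F, np.asarray(y, dtype=float))
    def h(x):
        phi = np.exp(-np.abs(x - X)**2 / (2 * sigma**2))
        return float(phi @ w)
    return h

h = rbf_interpolate([0.0, 1.0, 2.0], [1.0, 3.0, 2.0])
```

At the data points the interpolant reproduces the targets exactly.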
, RBF-,
.
p ,
Yi = [yi1, yi2, , yip].
wij , i = 1,n, j = 1, p.
,
n
f1 (X1 ) f2 (X1 ) ... fn (X1 ) w11 w21 ... w1 p y11 y12 ... y1 p
f1 (X2 ) f2 (X2 ) ... fn (X2 ) w21 w22 ... w2 p y21 y22 ... y2p
=
.
f (X ) f (X ) ... f (X ) w
w
...
w
y
y
...
y
n2
np n1
n2
np
1 n
2
n
n
n n1
F
. W .
Y ()
.
W (5.1).
132
, RBF- . ,
.
m = n ,
. ,
. , m << n ( m n),
.
, .
RBF-
.
An RBF network with m radial neurons computes

y = h(X) = sum_{i=1}^{m} w_i f_i(X).
(5.2)
,
- ( ):
{(X1, y1), (X2, y2), , (Xn, yn)}.
,
n
E = (h(Xi ) - yi ) .
(5.3)
i=1
Differentiating (5.3) with respect to the weights,

dE/dwj = 2 sum_{i=1}^{n} (h(Xi) - yi) dh(Xi)/dwj,

and since by (5.2) dh(Xi)/dwj = fj(Xi),

dE/dwj = 2 sum_{i=1}^{n} (h(Xi) - yi) fj(Xi).
Setting dE/dwj = 0 for j = 1, ..., m and introducing the vectors

Fj = [fj(X1); fj(X2); ...; fj(Xn)],  H = [h(X1); h(X2); ...; h(Xn)],  Y = [y1; y2; ...; yn],

the optimality conditions become

Fj' H = Fj' Y,  j = 1, ..., m,

or, stacking them with F = [F1 F2 ... Fm],

F' H = F' Y.

Since h(Xi) = sum_{j=1}^{m} wj fj(Xi), we have H = FW, so that

F' F W = F' Y,

and

W = (F' F)^-1 F' Y = F+ Y,

where F+ is the pseudo-inverse of the n x m matrix F.
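The least-squares weights can be computed directly with a pseudo-inverse (NumPy sketch; the centers and width are illustrative choices, not from the book):

```python
import numpy as np

def rbf_least_squares(X, y, centers, sigma=0.5):
    """Fit m < n output weights by W = pinv(F) @ Y."""
    X = np.asarray(X, float)[:, None]
    C = np.asarray(centers, float)[None, :]
    F = np.exp(-(X - C)**2 / (2 * sigma**2))   # n x m design matrix
    W = np.linalg.pinv(F) @ np.asarray(y, float)
    return F, W

X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [0.0, 1.0, 0.0, -1.0, 0.0]
F, W = rbf_least_squares(X, y, centers=[0.5, 1.5])
residual = float(np.linalg.norm(F @ W - np.asarray(y)))
```

With fewer centers than data points the fit is approximate: the residual is nonzero but smaller than the norm of the data.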
Example 5.1. Approximate the data set (p = 3):

{(0.9, 1), (2.1, 1.9), (3.1, 3)}

by the sum of two Gaussian basis functions, y(x) = w1 h1(x) + w2 h2(x). The design matrix F contains the basis values h1(xi), h2(xi) at the three points (its second and third rows are [0.6977 0.8521] and [0.0773 0.6977]), and Y = [1; 1.9; 3]. Then

F' F = [0.9795 0.7024; 0.7024 1.2188],
(F' F)^-1 = [1.7398 -1.0026; -1.0026 1.3982],

and the weights are

W = (F' F)^-1 F' Y = [0.1244; 3.0373].

Fig. 5.6. The data points and the approximating curve z(x).
5.3.
,
. , :
;
;
.
, , [50].
.
.
, , () [51].
.
. , . . , , , (5.3).
Fig. 5.7. Training scheme of an RBF network.
5.4. MatLab
R , MatLab, . 5.8.
Input
w1,1
p1
w1,R
p2
pR
p3
dist
a = radbas( W p b)
. 5.8.
The block ||dist|| computes the distance between the input vector P and the weight vector W, which is then multiplied by the bias b.
. 5.9.
a = radbas(n) = exp(−n²).
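In Python terms the same transfer function is simply (a minimal sketch):

```python
import math

def radbas(n):
    """Radial basis transfer function: a = exp(-n^2)."""
    return math.exp(-n * n)
```

It peaks at 1 for n = 0 and decays monotonically as |n| grows.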
The function has a maximum of 1 when its input n is 0 and decreases as the distance between the input vector P and the weight vector W grows. The architecture of an RBF network is shown in Fig. 5.10: a radial basis layer of S1 neurons is followed by a linear output layer of S2 neurons. RBF networks are created in MATLAB by the functions newrbe and newrb; the first builds an exact (interpolating) network with one radial basis neuron per training vector, while newrb adds neurons one at a time until the error goal is met.
[Fig. 5.9. The radbas transfer function: a = 1.0 at n = 0, falling to 0.5 at n = ±0.833]

[Fig. 5.10. Architecture of an RBF network in MATLAB: a radial basis layer (||dist||, bias b1, S1 neurons) followed by a linear layer (LW2,1, bias b2, S2 neurons, output a2 = y)]
newrb:
>> P = [0 1 2 3 4 5 6 7 8 9 10]; % input vector
>> T = [0 1 2 3 4 3 2 1 2 3 4]; % target vector
>> GOAL = 0.01; % mean squared error goal
>> SPREAD = 1; % spread of the RBF neurons
>> net = newrb(P,T,GOAL,SPREAD); % create the RBF network
>> figure(1), clf, % plot the training points
>> plot(P,T,'sr','MarkerSize',8,'MarkerFaceColor','y')
>> hold on; % keep the plot
>> X = 0:0.01:10; % test inputs for the RBF network
>> Y = sim(net,X); % simulate the network
>> plot(X,Y,'LineWidth',2), grid on % plot the RBF response
Fig. 5.11 shows the response of the RBF network together with the training points. The number of radial basis neurons created can be checked:
>> net.layers{1}.size
ans = 10
[Fig. 5.12. Response of the RBF network created by newrb]

[Fig. 5.13. Response of the RBF network]
Another example approximates the MATLAB demo function humps with newrb:
>> x = 0:.05:2; y=humps(x);
>> P=x; T=y;
>> goal=0.02; spread= 0.1;
>> net1 = newrb(P,T,goal,spread);
>> A= sim(net1,P);
>> plot(x,y,P,A)
An exact RBF network can be created with newrbe; it reproduces the training points exactly and takes no GOAL parameter (Fig. 5.14):
>> P = [0 1 2 3 4 5 6 7 8 9 10];
>> T = [2 2 2 2 4 4 4 2 2 2 2];
>> SPREAD = 0.5;
>> net = newrbe(P,T,SPREAD);
>> figure(1), clf,
>> plot(P,T,'sr','MarkerSize',8,'MarkerFaceColor','y')
>> hold on;
>> X = 0:0.01:10; Y = sim(net,X); plot(X,Y,'LineWidth',2), grid on
[Fig. 5.14. Response of the RBF network created by newrbe]
RBF- ,
.
(. . 3.19). :
>> [t,z]=sim('Van_der_Pol',10);
>> P = t';
>> T = z';
>> net2=newrb(P,T,0.01);
>> A = sim(net2,P); % simulate the network just created (net2)
>> figure(1); plot(P,A)
>> grid
,
. 3.21.
MatLab RBF- GRNN
PNN.
GRNN (Generalized Regression Neural Network)
,
. - (. 5.15). .
- , RBF .
Q . ,
[Fig. 5.15. Architecture of a GRNN network: a radial basis layer of Q neurons (||dist||, bias b1) followed by a special linear layer with the normalized dot product nprod, output a2 = y]
, .
, R,
LW{2,1} .
normprod - .
,
Pi , i- - , i-
i.
SPREAD , , ,
. SPREAD .
, ,
, .
>> P = [0 1 2 3 4 5 6 7 8 9 10];
T = [2 2 2 2 4 4 4 2 2 2 2];
SPREAD = 0.5;
net = newgrnn(P,T);
figure(1), clf,
plot(P,T,'sr','MarkerSize',8,'MarkerFaceColor','y')
hold on;
X = 0:0.01:10; Y = sim(net,X); plot(X,Y,'LineWidth',2), grid on
>> net.layers{1}.size
ans = 11
(. 5.16)
. 5.14 ( ).
PNN (Probabilistic Neural Network) , .
- . (. 5.17).
- GRNN-.
[Fig. 5.16. Response of the GRNN network]
[Fig. 5.17. Architecture of a PNN network: a radial basis layer of Q neurons followed by a competitive layer of K neurons]
, .
Q -.
K , . , ,
:
>> P = [0 0;1 1;0 3;1 4;3 1;4 1;4 3]'
Tc = [1 1 2 2 3 3 3]
P =
     0     1     0     1     3     4     4
     0     1     3     4     1     1     3
Tc =
     1     1     2     2     3     3     3
T KQ, ,
, . , T(i, j) , , j- i.
>> T = ind2vec(Tc)
T=
(1,1)
1
(1,2)
1
(2,3)
1
(2,4)
1
(3,5)
1
(3,6)
1
(3,7)
1
>> T=full(T)
T =
     1     1     0     0     0     0     0
     0     0     1     1     0     0     0
     0     0     0     0     1     1     1
,
>> net = newpnn(P,T);
Y = sim(net,P);
. ,
>> P = [0.1 0.5;1.2 1.3;4 4]'
P=
0.1000 1.2000 4.0000
0.5000 1.3000 4.0000
>> Y = sim(net,P)
Y=
(1,1)
1
(1,2)
1
(3,3)
1
>> Yc = vec2ind(Y)
Yc =
1 1 3
, 1,
3.
- . ,
.
RBF-,
, .
, . , RBF- .
5.5.
, RBF- .
, , [38].
n (), m .
, . 5.18 [52].
1- () , .
V . i-
{vji}, j = {1, 2, ..., n}, . X V .
1-
,
.
x1
2
X
xn
m
V
1-
2-
. 5.18.
The distance R between the input vector X and a weight vector V can be measured in several ways:

dot product: R = Xᵀ V = Σ_{i=1}^{n} xi vi;

max (Chebyshev) metric: R = max(|x1 − v1|, |x2 − v2|, ..., |xn − vn|);

Euclidean metric: R = sqrt((x1 − v1)² + (x2 − v2)² + ... + (xn − vn)²).
, RBF-.
, ,
. 5.19.
, 1- . 5.18, ,
- .
.
2- , . 5.18, . W 147
F(R)
1
0
R
. 5.19.
the output of the network is formed as a normalized weighted sum of the first-layer activations:

y = w1 F(R1)/ΣF(Ri) + w2 F(R2)/ΣF(Ri) + ... + wm F(Rm)/ΣF(Ri) = Σ_{i=1}^{m} wi F(Ri) / Σ_{i=1}^{m} F(Ri).
RBF-,
.
, RBF-
.
1. RBF-?
2. - ?
3. RBF- ?
4. - ?
5. -
?
6. - ?
7. RBF- ?
8.
RBF-?
9. RBF-?
10.
RBF- ?
11. RBF-?
12.
RBF-?
13. RBF- ?
14.
RBF-?
15. RBF-?
16.
RBF-?
17. MatLab
RBF-? ?
18. GRNN?
19. PNN?
20. RBF-?
6.
6.1.
[53] (. 6.1).
.
XOR. ,
. XOR
. ,
1 0 1 0 0 0 0 1 1 1 1 0 1 0 1
.
, , XOR,
:
:
101000011110101
:
01000011110101?
[53]
, . 6.2.
N , K , z1, .
. 6.1.
[Fig. 6.2. Elman network: N inputs, K hidden neurons with unit-delay (z⁻¹) feedback of their outputs v1, ..., vK, and M outputs]
At time step kT0 (T0 is the sampling period) the input of the network is the extended vector

X(k) = [x0(k), x1(k), ..., xN(k), v1(k − 1), ..., vK(k − 1)],

where x0(k), x1(k), ..., xN(k) are the external inputs and v1(k − 1), ..., vK(k − 1) are the outputs of the hidden-layer neurons at the previous step. Each hidden neuron therefore computes a weighted sum over all N + K + 1 components, Σ_{j=0}^{N+K} w1ij xj(k), before applying its activation function.
, . .
, . - [54].
MatLab c
net = newelm(PR, [S1, S2, , SN], {TF1,TF2, , TFN}, BTF, BLF, PF),
PR R2 R ; S1, S2, , SN ; TF1, TF2, , TFN (
tansig); BTF ,
( traingdx); BLF , (
learngdm); PF ( mse).
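The recurrent computation described above can be sketched in Python; the tanh hidden layer and the tiny weight matrices below are illustrative assumptions, not values from the text:

```python
import math

def elman_step(x, v_prev, W1, W2):
    """One time step of an Elman network: the hidden layer sees the current
    input x concatenated with its own previous output v_prev (the context
    units), and a linear output layer reads the hidden activations."""
    z = x + v_prev  # extended input [x(k), v(k-1)]
    v = [math.tanh(sum(w * s for w, s in zip(row, z))) for row in W1]
    y = [sum(w * s for w, s in zip(row, v)) for row in W2]
    return v, y

# Illustrative weights: 1 input, 2 hidden neurons, 1 output
W1 = [[0.5, 0.1, 0.2], [-0.3, 0.4, 0.1]]
W2 = [[1.0, 1.0]]
v, y = elman_step([1.0], [0.0, 0.0], W1, W2)
v2, y2 = elman_step([1.0], v, W1, W2)  # same input, different context
```

Because the context units change between calls, the same input produces a different output at the second step, which is what lets the network react to sequences rather than isolated samples.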
.
6.1. , ,
.
20
>> P = round (rand (1, 20))
P=
00110001110110001010
>> T = [0 (P(1:end-1)+P(2:end) == 2)] % 1 where two successive ones occur
T=
00010000110010000000
:
>> Pseq = con2seq(P);
>> Tseq = con2seq(T);
>> net = newelm ([0 1], [10, 1], {'tansig', 'logsig'});
:
>> net.trainParam.goal = 0.001;
>> net.trainParam.epochs = 1000;
>> net = train(net, Pseq, Tseq);
:
>> Y = sim(net, Pseq)
Y=
[2.6513e-004] [4.0384e-004] [0.2860] [0.9994] [0.0366]
[4.4019e-004] [3.0681e-005] [4.4659e-004] [0.7891] [0.9995]
[0.0356] [9.9000e-004] [0.8228] [0.0218] [4.0585e-004] [2.9824e-005]
[4.4890e-004] [7.5032e-005] [4.7443e-004] [5.7692e-005]
.
:
>> P = round (rand (1, 20))
P=
10101001101100001111
>> Pseq = con2seq(P);
>> Y = sim(net, Pseq)
Y=
[0.0992] [0.0554] [0.0020] [2.3346e-004] [4.6559e-004] [3.8692e-005]
[3.6769e-004] [0.3296] [0.9994] [0.0365] [0.0010] [0.8307]
[0.0216] [4.0568e-004] [2.9802e-005] [3.5165e-004] [0.3219]
[0.9994] [0.9993] [0.9994]
6.2. .
,
:
>> p1 = sin(1:20);
>> p2 = sin(1:20)*2;
:
>> t1 = ones(1,20);
>> t2 = ones(1,20)*2;
:
>> p = [p1 p2 p1 p2];
>> t = [t1 t2 t1 t2];
:
>> Pseq = con2seq(p);
>> Tseq = con2seq(t);
.
:
>> a = sim(net,Pseq);
>> a1 = seq2con(a);
>> x = 1:80;
>> plot(x,a1{1,1})
. 6.3.
[Fig. 6.3. Network response to the sequence of two alternating signals]
[Fig. 6.4. Network response to signals with changed amplitudes]
:
>> p1 = sin(1:20)*1.5;
>> p2 = sin(1:20)*2.5;
>> p = [p1 p2 p1 p2 p2];
>> Pseq = con2seq(p);
>> a = sim(net,Pseq);
>> a1 = seq2con(a);
>> x = 1:100;
>> plot(x,a1{1,1})
The result is shown in Fig. 6.4.
6.2.
, (. 6.5).
(), ,
.
.
, -
. 6.5.
. 6.6.
. , ()
(. 6.6).
,
.
, .
, .
, (. 6.7).
yi(t+1) = F( Σ_j wij yj(t) + xi ),

where xi is the external input of the i-th neuron. For a threshold activation function the update rule is:

yi(t+1) = +1, if Σ_j wij yj + xi > T;
yi(t+1) = −1, if Σ_j wij yj + xi < T;
yi(t+1) = yi(t), if Σ_j wij yj + xi = T.
.
[Fig. 6.7. Hopfield network of N neurons: each output is fed back through unit delays (z⁻¹) to the inputs of all other neurons via the weights wij]
1 0.
The state of a network of N neurons is described by the vector Y = [y1, y2, ..., yN]ᵀ; a network of m binary neurons has 2^m possible states.
, .
, W : wij = wji,
: wii = 0.
, .
wij = Σ_{k=1}^{m} xik xjk,  i ≠ j;
wij = 0,  i = j,

where xjk is the j-th component of the k-th stored pattern and m is the number of stored patterns. In matrix form, for patterns of dimension n, the n×n weight matrix is

W = Σ_{k=1}^{m} Xk Xkᵀ − mE,

where E is the identity matrix (the subtraction zeroes the diagonal). Note that the number of weights grows as the square of the network size: a network of N = 120 neurons has 120² − 120 = 14280 weights. A normalized form is also used:

W = (1/n) Σ_{k=1}^{m} Xk Xkᵀ − (m/n) E.
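The storage rule and the recall dynamics can be sketched in Python for bipolar patterns (the tie-breaking choice sgn(0) = +1 and the synchronous update are assumptions of this sketch):

```python
def train_hopfield(patterns):
    """Hebbian weight matrix W = sum_k Xk Xk^T with zero diagonal,
    for bipolar (+1/-1) patterns."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    """Synchronous update y <- sgn(W y) until the state stops changing;
    sgn(0) is taken as +1 here."""
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state
```

A stored pattern is a fixed point of the dynamics, and a state with one inverted component is pulled back to it.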
:
, .
Example 6.3. Store the single pattern X = [1 1 1 −1]ᵀ. The weight matrix is

W = X Xᵀ − E = |  0   1   1  −1 |
               |  1   0   1  −1 |
               |  1   1   0  −1 |
               | −1  −1  −1   0 | .

Starting from an initial state Y(0), the network evolves as Y(t + 1) = sgn[W Y(t)]. For Y(0) = X:

W Y(0) = [3 3 3 −3]ᵀ,  Y(1) = sgn[W Y(0)] = [1 1 1 −1]ᵀ = X,

so the stored pattern is a fixed point. If Y(0) is a distorted version of X (for example, with one component inverted), one or two iterations again yield X: the network restores the stored pattern.
The operation of the network in recall mode:
1. An unknown (possibly distorted) vector is applied as the initial state Y at t = 0.
2. The new state of each neuron is computed from the weighted sum of the outputs of the other neurons.
3. Step 2 is repeated until the state vector Y stops changing, i.e. the network converges to a stable state (an attractor).
The mutual correlation of two stored patterns Xj and Xk is

Kjk = Xjᵀ Xk = Σ_{i=1}^{n} xij xik,

and the total correlation of the m stored patterns is

K = Σ_{j=1}^{m} Σ_{k=1}^{m} Kjk.
. ,
.
, .
If XF is such a spurious (false) attractor, it can be "unlearned" by the correction

W(t + 1) = W(t) − XF XFᵀ.
.
,
:
eij = −xi wij xj.

Summing over all pairs of neurons gives the energy of the network:

E = −Σ_i Σ_j xi wij xj = −Xᵀ W X.

During operation the state changes so that E decreases; minimizing E is equivalent to maximizing Σ xi wij xj. For a single stored pattern with weights wij = (1/N) xi xj, the weighted input of neuron i equals

Σ_{j=1}^{N} wij xj = (1/N) xi Σ_{j=1}^{N} xj xj = xi (1/N) Σ_{j=1}^{N} 1 = xi,

so the stored pattern is a stable state of the network.
Suppose that at step t neuron p changes its state from xp to xp*. Separating the terms that contain xp,

E(t) = −Σ_{i≠p} Σ_{j≠p} xi wij xj − Σ_{j≠p} xj wpj xp − Σ_{i≠p} xi wip xp,

and writing the same expression at step t + 1 with xp*, the change of energy is

ΔE = E(t + 1) − E(t) = 2 Σ_i xi wpi (xp − xp*).

If Σ_i xi wpi > 0, the neuron switches to xp* = +1, so ΔE ≤ 0; if Σ_i xi wpi < 0, the neuron switches to −1 and again ΔE ≤ 0.
Thus, the energy of the network does not increase during operation, and the state converges to a local minimum of E. The capacity of the network is limited: a network of N neurons can reliably store only about M ≈ 0.15N patterns.
In MATLAB a Hopfield network uses the dotprod weight function, the netsum input function and the satlins transfer function; the recurrent weights are stored in LW{1,1} (Fig. 6.8). The satlins characteristic is shown in Fig. 6.9. The network is created by

>> net = newhop(T)

where T is an R×Q matrix of Q target vectors with elements +1 or −1, and R is the vector dimension.
Example 6.4. Create a network with two stored states (three neurons):

>> T = [1 1 1; -1 -1 -1]'
T =
     1    -1
     1    -1
     1    -1
[Fig. 6.8. Architecture of the Hopfield network in MATLAB: the layer output a1(k) is fed back through LW1,1, with a1(0) = p as the initial condition]

[Fig. 6.9. The satlins transfer function: linear between −1 and +1, saturating at ±1]
A=
[3x1 double]
>> [Y,Pf,Af] = sim(net,{1 5},{},A);
, :
>> [Y{1} Y{2} Y{3} Y{4} Y{5}]
ans =
0.6971 0.8099 0.9410 1.0000 1.0000
0.9884 1.0000 1.0000 1.0000 1.0000
0.9884 1.0000 1.0000 1.0000 1.0000
,
.
6.5. :
>> vectors = [1 1 1 1 1 1 1 1 1; 1 1 1 1 1 1 1 1 1; 1
1 1 1 1 1 1 1 1; 1 1 1 1 1 1 1 1 1]';
>> net = newhop(vectors);
>> result = sim(net,4,[],vectors)
result =
1 1 1 1
1 1 1 1
1 1
1 1
1
1 1 1
1
1 1 1
1
1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
>> test = {[0.1; 0.8; 1; 0.7; 0.5; 1; 0.9; 0.85; 1]};
>> result = sim(net,{1,5},{},test);
>> for i = 1:5,
disp(sprintf('Network state after %d iterations:',i));
disp(result{i});
end
Network state after 1 iterations:
0.4930
0.8601
1.0000
1.0000
0.9661
1.0000
1.0000
0.8712
0.7384
Network state after 2 iterations:
0.7045
0.9879
1.0000
1.0000
1.0000
1.0000
1.0000
0.9904
0.7593
Network state after 3 iterations:
0.8625
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.8748
Network state after 4 iterations:
0.9966
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9993
Network state after 5 iterations:
1
1
1
1
1
1
1
1
1
6.6.
(. 6.10):
>> T = [+1 -1; -1 +1; +1 +1; -1 -1];
>> T = T';
>> plot(T(1,:),T(2,:),'ro','MarkerSize',13), hold on;
>> axis([-1.2 1.2 -1.2 1.2]);
>> title('Hopfield Network State Space');
>> xlabel('x');
>> ylabel('y');
>> net = newhop(T);
:
>> for i = 1:5
a = {rands(2,1)};
[y,Pf,Af] = sim(net,{1 20},{},a);
record = [cell2mat(a) cell2mat(y)];
start = cell2mat(a);
plot(start(1,1),start(2,1),'kx',record(1,:),
record(2,:),color(rem(i,5)+1),'LineWidth',5) % color: a string of colour codes, e.g. color = 'rgbmk'
end

[Fig. 6.10. Stored points in the Hopfield network state space]

[Fig. 6.11. Trajectories of the network from random initial states]
As Fig. 6.11 shows, the trajectories starting from random initial states (marked by crosses) converge to the stored equilibrium points.
6.7.
.
:
>> zero = [ -1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> zero = reshape(zero',1,63); % convert the 9x7 image into a row vector
>> one = [ -1 -1 -1 -1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 +1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> one = reshape(one',1,63);
>> two = [ -1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> two = reshape(two',1,63);
>> three = [ -1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> three = reshape(three',1,63);
>> four = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 +1 -1 -1 -1,
-1 +1 +1 +1 +1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> four = reshape(four',1,63);
>> five = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> five = reshape(five',1,63);
>> six = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> six = reshape(six',1,63);
>> seven = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 +1 -1 -1,
-1 -1 -1 +1 -1 -1 -1,
-1 -1 +1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 +1 -1 -1 -1 -1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> seven = reshape(seven',1,63);
>> eight = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> eight = reshape(eight',1,63);
>> nine = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 -1 -1 -1 -1 -1 -1];
>> nine = reshape(nine',1,63);
( 63 10):
>> digits1 = [zero; one; two; three; four; five; six; seven; eight; nine]';
:
>> net = newhop(digits1);
:
>> digits = {zero, one, two, three, four, five, six, seven, eight, nine};
>> bnw = [1 1 1; 0 1 0];
%
>> for P = 1:10
%
subplot(3,4,P);
digit = digits{P};
img = reshape(digit,7,9);
image((img'+1)*255/2);
axis image
axis off
colormap(bnw)
title(sprintf('Number %d', P));
end
The resulting images are shown in Fig. 6.12.
. . ,
>> five1 = [-1 -1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 -1 -1,
+1 +1 -1 -1 -1 -1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 -1 -1 +1 +1 +1 -1,
-1 -1 -1 -1 -1 +1 -1,
-1 +1 +1 +1 +1 +1 -1,
-1 +1 -1 -1 -1 -1 -1];
>> five1 = reshape(five1',1,63);
Fig. 6.12. The stored digit patterns
>> img = reshape(five1,7,9);
>> image((img'+1)*255/2);
>> axis image
>> axis off
>> colormap(bnw)
>> title(sprintf('Number 5 noise'));
. 6.13.
:
>> five2 = {five1'}
%
>> [Y,Pf,Af] = sim(net,{1 10},{},five2); %
:
>> img = reshape(Y{10},7,9);
>> image((img'+1)*255/2);
>> axis image
>> axis off
>> colormap(bnw)
>> title(sprintf('Number 5 correct'));
The restored pattern is shown in Fig. 6.14.
Fig. 6.13. The noisy pattern of the digit 5

Fig. 6.14. The pattern restored by the network
6.3.
, . .
,
- , .
, , .
().
, .
,
(. 6.15).
1-
, :
B = F(AW),
F .
,
:
A = F(BW).
1- 2-
,
. .
. 6.15.
. , .
,
.
W W.
F 0.
.
.
In the training mode the BAM weight matrix is computed from the stored pairs as

W = Σ_i Aiᵀ Bi.
Example 6.8. Store three associated pairs, given in binary form:

A1 = (1 0 0), B1 = (0 0 1 0),
A2 = (0 1 0), B2 = (1 0 0 1),
A3 = (0 0 1), B3 = (0 1 0 0).

In bipolar form (each 0 replaced by −1):

A1 = (1 −1 −1), B1 = (−1 −1 1 −1),
A2 = (−1 1 −1), B2 = (1 −1 −1 1),
A3 = (−1 −1 1), B3 = (−1 1 −1 −1).

The weight matrix:

W = A1ᵀ B1 + A2ᵀ B2 + A3ᵀ B3,

A1ᵀ B1 = | −1 −1  1 −1 |       A2ᵀ B2 = | −1  1  1 −1 |       A3ᵀ B3 = |  1 −1  1  1 |
         |  1  1 −1  1 | ,              |  1 −1 −1  1 | ,              |  1 −1  1  1 | ,
         |  1  1 −1  1 |                | −1  1  1 −1 |                | −1  1 −1 −1 |

W = | −1 −1  3 −1 |
    |  3 −1 −1  3 |
    | −1  3 −1 −1 | .

Presenting the noisy input A′ = (0.8, 0.2, 0):

B = F[A′ W] = F[−0.2  −1  2.2  −1] = [0 0 1 0] = B1,

and in the reverse direction (with B taken in bipolar form):

A = F[B Wᵀ] = F[6  −6  −2] = [1 0 0] = A1.
, ,
, . , , .
,
.
.
, ,
, . , , .
( ),
.
, [25].
.
E. .
Ej = Σ_i wij Yi − q,

where q is the threshold of neuron j. The neuron takes the state Yj = 1 with the probability

Pj = 1 / (1 + exp(−Ej / T)),

where T is the temperature. A random number k is drawn uniformly from [0, 1]; if Pj > k, the output is set to Yj = 1.
:
1
1.1.
.
1.2.
.
1.3. .
1.4. 1.1 1.3 .
1.5.
Pij+ , i j , .
2
2.1. . .
2.2. 2.1 .
2.3. Pij , i j
, .
3
3.1.
wij = wij + (Pij+ - Pij- ),
h .
6.4.
. .
, . ,
.
, .
, , X=
[x1, x2, , xn] ,
: xi = +1
( 1), xi = 1 ( 0).
N
L . ,
.
.
(. 6.16).
m ( ) y1, y2, , ym
.
[Fig. 6.16. Structure of the network: n inputs x1, ..., xn connected through the weight matrix W to m output neurons y1, ..., ym]
Xᵀ W = Σ_{i=1}^{n} xi wi = a − b,

where a is the number of components in which the vectors coincide and b is the number in which they differ. For example, for n = 5 vectors that differ in a single component, Xᵀ W = 4 − 1 = 3. Since a + b = n, the number of coinciding components can be computed as

a = n/2 + (Xᵀ W)/2 = n/2 + (1/2) Σ_{i=1}^{n} xi wi.
.
The weights of the first layer W1 are set directly from the m stored patterns:

wij = xj(i) / 2,  i = 1, ..., m,  j = 1, ..., n,

with biases P = n/2, so that the output of the i-th neuron equals the number of components in which the input X coincides with the i-th stored pattern. The activation function F is piecewise linear:

F(z) = 0 for z ≤ 0;  F(z) = z for 0 < z ≤ n;  F(z) = n for z > n.
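The first-layer computation can be sketched in Python; the stored patterns below are illustrative:

```python
def matches(x, p):
    """Number of coinciding components of two bipolar (+1/-1) vectors,
    computed as n/2 + (x . p)/2, as in the formula above."""
    dot = sum(a * b for a, b in zip(x, p))
    return (len(x) + dot) // 2

def best_pattern(x, patterns):
    """Index of the stored pattern with the most matching components
    (the role played by the maxnet layer in the full network)."""
    return max(range(len(patterns)), key=lambda i: matches(x, patterns[i]))

patterns = [[1, 1, 1, -1], [-1, -1, 1, 1]]
x = [1, 1, -1, -1]
```

Here the winner is chosen directly with max(); in the network itself this selection is performed iteratively by the second (maxnet) layer.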
, . 6.17.
,
n = 5, X = [ 1, 1, 1, 1, 1], W = [0,5; 0,5; 0,5; 0,5; 0,5].
[Fig. 6.17. Activation function of the first layer]
, .
( maxnet) (. 6.18).
, , m .
.
2- .
, () ,
() , +1.
2-
[Fig. 6.18. Two-layer network: the first layer W1 computes the similarity scores z1, ..., zm; the second layer W2 (maxnet) selects the maximum, producing the outputs y1, ..., ym]
. X.
1- , Z(0) 2- maxnet.
maxnet ,
2- .
,
. -
, X. -
m
j=1
F ,
,
.
The weights of the second (maxnet) layer are set as

wjk = 1 for j = k;  wjk = −ε for j ≠ k,

where 0 < ε < 1/(m − 1). With such mutual inhibition the iterations of the maxnet suppress all outputs except the largest one, so after convergence only the winning neuron has a nonzero output.
,
, ,
.
( ). ,
3- () (. 6.19).
. 2- , 3- ,
P, . ,
:
pm2 ... pmk
:
1. .
[Fig. 6.19. Three-layer structure: the layers W1 and W2 as before, plus an output layer W3 producing the class outputs p1, ..., pk]
2. ,
. , 100 13 , 10000 . 1000 , 1100,
100 maxnet.
3.
, .
4. .
5. ,
,
.
:
1. .
,
.
2.
, .
, , , .
6.5.
, . . ,
.
. , , .
,
. .
.
. 1-, , 2- . , ARTMAP
FuzzyART.
1.
1 . . .
1 :
1)
, ,
;
2) ,
. .
1 [25]
, , , 1 2
(. 6.20).
: m ; n .
[Fig. 6.20. Structure of the ART-1 network: comparison layer 1, recognition layer 2, gain signals g1 and g2]
( R ).
, .6.21.
tij,
.
(1 m) :
1) xi;
2) pj;
3) g1.
, ,
. g1 1, pj 0, X.
. .
, . ,
(. 6.22).
[Fig. 6.21. The comparison layer: inputs x1, ..., xm, top-down weights T1, ..., Tn, outputs p1, ..., pm, gain signal g1 and reset signal R]
[Fig. 6.22. The recognition layer: inputs c1, ..., cm, outputs r1, ..., rn]
, , .
. 6.23.
, .
.
Yj = Bjᵀ C.

The activation function F is a threshold:

F(Yj) = 1, if Yj ≥ T;
F(Yj) = 0, if Yj < T.
, R .
j , (
) Bj. ,
.
.
.
[Fig. 6.23. The reset block, comparing the vectors C and R]
. , .
- .
,
1, 0. , R
.
2. ,
. , g2
:
g2 = x1 ∨ x2 ∨ x3 ∨ ... ∨ xm.
1. , 2,
R g1 0:
. R
tij, .
T B ,
.
R , g1
0, 2/3
,
X R.
X ,
0 (. 6.24).
X ,
, X,
, .
. , , .
, R , g1
1, X, ,
.
, .
,
, B T
.
- ,
, .
[Fig. 6.24. Comparison of the input with the stored pattern of the winning neuron j (rj = 1)]
,
.
. .
The feedforward weights bij from the i-th comparison neuron to the j-th recognition neuron are set as

bij = 2 ci / ( Σ_k ck + 1 ),

where C = (c1, ..., cm) is the output vector of the comparison layer.
j,
, , : tij = ci (tij
j i
).
. , bij . ,
, (.. ).
. 1 = [1 0 0 0 0], 2 = [1 1 1 0 0]
(. . 1 2).
bij ,
1 = 1 = [1 0 0 0 0],
2 = 2 = [1 1 1 0 0].
1,
. bij
,
1 = [1 0 0 0 0],
2 = [0,5 0,5 0,5 0 0],
1 1
1, 2 0,5 ().
2 1 3/2 ( ).
3 = [1 1 0 0 0]. 1
, 2 2/3. 1 , [1 1 0 0 0].
S = 1/2, r = 2/3, 1 , 2 ( = [1 1 0 0 0]), S = 1.
, ,
,
,
.
,
,
.
1. ?
2. XOR ?
3.
?
4. ?
5.
MatLab?
6. ?
7. ?
8. ?
9.
?
10. ?
11.
?
12. ?
13. ?
14. , , ?
15. ?
16. ?
17. ?
18. ?
19.
?
20. ?
21. ?
22. ?
23.
?
24. ?
25. ?
26. ?
27. ?
28. ?
29. ?
30. 2- ?
31. 2-
?
32. 3- ?
33. ?
34. ?
35. -
?
36. ART
?
37. ART?
38. ART?
39. ART?
40. ART?
41. ART?
7.
7.1.
[9] . ,
,
.
, .
,
.
. , , , .
:
1. . . -.
2. (Self-Organized Map SOM).
.
.
SOM , .
SOM ,
,
,
. . . 7.1 .
The interaction of neurons i and j with position vectors I and J on the map is described by a neighbourhood function, for example the Gaussian

g(i, j) = exp( −||I − J||² / (2s²) ),

where s is a parameter defining the neighbourhood width.
. 7.1. SOM
. . 7.2 N .
W,
.
X
. , N >> m.
[Fig. 7.2. A layer of m output neurons y1, ..., ym fully connected to the input vector X = [x1, ..., xn] through the weight vectors W1, ..., Wm]
Each neuron i of the second layer has a weight vector Wi of the same dimension as the input X. When X is presented, the winning neuron j is the one whose weight vector is closest to X:

j = arg min_i ||X − Wi||,

where the Euclidean distance is

||X − W|| = sqrt( Σ_{i=1}^{n} (xi − wi)² ).    (7.1)
If the vectors X and W are normalized, the winner can equivalently be found by maximizing the dot product:

yj = Wjᵀ X = Σ_{i=1}^{n} wij xi,    (7.2)

j = arg max_i Wiᵀ X,

with the normalization

wi ← wi / sqrt(Σ_{j=1}^{n} wj²),  xi ← xi / sqrt(Σ_{j=1}^{n} xj²),  i = 1, ..., n.

Thus, when X is presented, the neuron with the largest output yj becomes the winner, and its weight vector Wj is adjusted toward X.
To account for lateral interactions between neurons, (7.2) is extended to

yj = Σ_{i=1}^{n} wij xi + Σ_{l≠j} g(j, l) yl,    (7.3)

where g(j, l) describes the lateral coupling of neurons j and l. Typical choices are the rectangular neighbourhood

g(i, j) = 1 for ||i − j|| ≤ r, and 0 for ||i − j|| > r,

and the Gaussian

g(i, j) = exp( −R² / (2s²) ),

where R = ||ri − rj|| is the distance between the positions ri and rj of the neurons on the map.
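One training step of a map with a Gaussian neighbourhood can be sketched in Python (one-dimensional neuron positions; the learning rate and width below are illustrative assumptions):

```python
import math

def som_step(x, W, pos, lr=0.5, sigma=1.0):
    """One Kohonen update: find the winner by Euclidean distance, then move
    every neuron's weights toward x, scaled by the Gaussian neighbourhood
    g = exp(-R^2 / (2*sigma^2)) of the winner."""
    d2 = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
    win = min(range(len(W)), key=d2.__getitem__)
    for j, w in enumerate(W):
        g = math.exp(-((pos[j] - pos[win]) ** 2) / (2 * sigma ** 2))
        for i in range(len(w)):
            w[i] += lr * g * (x[i] - w[i])
    return win

W = [[0.0, 0.0], [1.0, 1.0]]  # two neurons at map positions 0 and 1
win = som_step([0.9, 0.9], W, [0, 1])
```

The winner moves strongly toward the input, while its neighbour moves too, but by a smaller amount; that shared motion is what makes nearby map positions end up with similar weights.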
A biologically motivated form of lateral interaction is the "Mexican hat" function (Fig. 7.3): nearby neurons are excited, neurons at intermediate distance are inhibited, and distant neurons are unaffected. In practice it is often replaced by the piecewise-constant approximation of Fig. 7.4.
[Fig. 7.3. Lateral interaction function of the "Mexican hat" type]

[Fig. 7.4. Its piecewise-constant approximation]

The approximation of Fig. 7.4 can be written as

g(R) = b, R ∈ [−R0, R0];
g(R) = −b/3, R ∈ [−3R0, −R0] and R ∈ [R0, 3R0];
g(R) = 0 otherwise.

[Fig. 7.5]
7.2.
. n n- . .
n- (
, ).
n = 2
, n = 3 , n > 3
.
,
:
1. X.
2. - j (7.2).
3. .
) , :
( X W )
X W
W
.7.6.
,
.
. ().
,
,
.
, , ,
. ,
,
, . ,
. , [25].
In the convex combination method all weights are initially set to 1/sqrt(n), where n is the input dimension, and each input component is presented as

xi ← a·xi + (1 − a)/sqrt(n),

where the coefficient a grows from 0 to 1 in the course of training, so that the inputs gradually "unfold" to their true values and the weight vectors follow them.
.
. , ,
.
, , ,
, , .
Example 7.1. Cluster the following input vectors into two groups:

X1 = [1; 1; 0; 0],  X2 = [0; 0; 0; 1],  X3 = [1; 0; 0; 0],  X4 = [0; 0; 1; 1].

Two output neurons are used, i.e. two weight vectors, initialized randomly:

W1 = [0.2; 0.6; 0.5; 0.9],  W2 = [0.8; 0.4; 0.7; 0.3].

Present X1 and compute the distances by (7.1):

d1 = ||X1 − W1|| = sqrt(Σ (xi − wi)²) ≈ 1.36,
d2 = ||X1 − W2|| ≈ 0.99.

The second neuron wins, and its weights are updated with the learning rate 0.6:

W2 = W2 + 0.6 (X1 − W2) = [0.8; 0.4; 0.7; 0.3] + 0.6 ([1; 1; 0; 0] − [0.8; 0.4; 0.7; 0.3]) = [0.92; 0.76; 0.28; 0.12].

Present X2:

d1 = ||X2 − W1|| ≈ 0.81,  d2 = ||X2 − W2|| ≈ 1.5.

Now the first neuron wins, and its weights are updated:

W1 = W1 + 0.6 (X2 − W1) = [0.08; 0.24; 0.20; 0.96].

Then X3 and X4 are presented, and the process is repeated until the weights stop changing. After training the weights converge to the cluster centres:

W1 = [0; 0; 0.5; 1],  W2 = [1; 0.5; 0; 0],

so the first neuron responds to X2 and X4, and the second to X1 and X3.
7.3.
. 7.7.
. 7.7 R p ;
S1 ; C (competitive
layers) . b
.
[Fig. 7.7. Competitive layer in MATLAB: the block ||ndist|| computes the negative distance between the input p and the weight vectors IW1,1; the bias b1 and the competitive transfer function C determine the winner among the S1 neurons]
net = newc(P, S, KLR, CLR),
P (RQ)-
Q ; S ; KLR CLR
.
.
7.2. . :
>> p=[1 0 1 0; 1 0 0 0; 0 0 0 1; 0 1 0 1]
p=
1 0 1 0
1 0 0 0
0 0 0 1
0 1 0 1
:
>> net=newc([ 0 1; 0 1; 0 1; 0 1],2)
, .
.
:
>> net.trainParam.epochs = 500
>> net=train(net,p);
:
>> ves=net.IW{1,1}
ves =
0.0000 0.0000 0.5135 1.0000
1.0000 0.4789 0.0000 0.0000
.
:
>> p=[0 0 0 1]';
>> Y=sim(net,p)
Y=
(1,1)
1
>> p=[1 0 0 0]';
>> Y=sim(net,p)
Y=
(2,1)
1
7.3. . 60 :
>> A = [rand(1,20) 0.7; rand(1,20) + 0.2];
>> B = [rand(1,20) + 0.7; rand(1,20) + 0.7];
>> C = [rand(1,20) + 0.2; rand(1,20) 0.7];
>> plot(A(1,:),A(2,:),'bs')
>> hold on
>> plot(B(1,:),B(2,:),'r+')
>> plot(C(1,:),C(2,:),'go')
>> grid on
>> P = [A, B, C];
>> ncl = 3; % number of clusters
>> MN = [min(P(1,:)) max(P(1,:)); min(P(2,:)) max(P(2,:))] % ranges of the inputs
>> net = newc(MN, ncl, 0.1, 0.0005);
>> net.trainParam.epochs=49;
>> net.trainParam.show=7;
>> net = train(net,P);
>> w = net.IW{1};
>> plot(w(:,1),w(:,2),'kp');
. 7.8, .
[Fig. 7.8. The three classes of points and the cluster centres found (pentagrams)]
7.4.
.
>> X=[0 1; 0 1];
>> clusters=8;
>> points=10;
>> std=0.05;
>> P=nngenc(X,clusters,points,std);
>> plot(P(1,:),P(2,:),'+r');
>> hold on;
>> h=newc([0 1;0 1],8,.1);
>> h.trainParam.epochs=500;
>> h=init(h);
>> h=train(h,P);
>> w=h.IW{1};
>> plot(w(:,1),w(:,2),'ob');
>> xlabel('p(1)'); ylabel('p(2)');
. 7.9.
7.5. .
>> d1 = randn(3,20);
%
[Fig. 7.9. Clusters of points and the centres found by the network]
. 7.10.
7.4.
. , , SOM
. SOM .
.
, , , M N,
. ,
. SOM ,
. ,
.
, SOM :
( , );
(.. ).
net = newsom(PR,[d1,d2,...], tfcn, dfcn, olr, osteps, tlr, tnd),
PR (R2)-
R ; di ( [5
8]); tfcn ( 'hextop'); dfcn ( 'linkdist'); olr
( 0,9); osteps
( 1000); tlr
( 0,02); tnd
( 1).
hextop , gridtop randtop .
dist ,
boxdist , mandist , linkdist
.
Training a SOM in MATLAB uses the Kohonen rule: the weight increment is

dW = lr · A2 · (P′ − W),

where lr is the learning rate (olr during the ordering phase, tlr during the tuning phase) and A2 describes the neighbourhood of the winning neuron i:

A2(i, q) = 1, if neuron i is the winner for input q;
A2(j, q) = 0.5, if a(i, q) = 1 and D(i, j) ≤ nd;
A2(j, q) = 0 otherwise,

where a(i, q) is the competitive output, D(i, j) is the distance between neurons i and j, and nd is the neighbourhood size.
.
:
;
.
.
During the ordering phase the neighbourhood size and learning rate decrease with the step number s:

nd = 1.00001 + (max(d) − 1)(1 − s/S),
lr = tlr + (olr − tlr)(1 − s/S),

where max(d) is the maximum distance between neurons and S is the number of ordering steps. In the tuning phase the neighbourhood is fixed,

nd = tnd + 0.00001,

and the learning rate decreases slowly:

lr = tlr · S / s.
,
.
, .
. .
,
. ,
.
7.6. :
>> x = -1:0.05:1;
>> y=x.*x;
>> P = [x; y];
>> net=newsom([0 1;0 1],[10]);
>> net.trainParam.epochs =1000;
>> net1=train(net,P);
[Fig. 7.11. Weight vectors of the SOM after training on the parabola y = x²]
>> plotsom(net1.iw{1,1},net1.layers{1}.distances)
SOM . 7.11.
:
>> a=sim(net,[0.6;0.4])
a=
(8,1)
1
7.7. 40
:
>> P = [rand(1,40)*2; rand(1,40)];
>> net = newsom([0 2; 0 1],[3 5]);
>> net = train(net,P);
>> plot(P(1,:),P(2,:),'.g','markersize',20)
>> hold on
>> plotsom(net.iw{1,1},net.layers{1}.distances)
. 7.12.
MatLab SOM.
>> plotsompos(net,P);
, . 7.12.
>> plotsomnd(net);
.
. 7.13.
[Fig. 7.12. Weight vectors of the SOM trained on random two-dimensional data]
. 7.13. SOM
. 7.14. ,
SOM
,
:
>> plotsomhits(net2,P);
. 7.14.
,
.
, .
7.5.
,
LVQ- (Learning Vector Quantization), , .
(. 7.15).
( )
S1 (.. ).
[Fig. 7.15. LVQ network: a competitive layer (S1 neurons, a1 = compet(n1)) followed by a linear layer (S2 neurons, a2 = purelin(LW2,1 a1))]
S2 ( S2 S1).
.
LVQ-
net = newlvq(PR, S1, PC, LR, LF),
PR R2
R ; S1
; PC S2 ,
; LR
; LF .
, 1- 2- ,
. , , .
LVQ- ,
, , :
{p1, t1}, {p2, t2}, , {pN, tN}.
7.8. ( ). T:
>> P = [3 2 2 0 0 0 0 2 2 3; 0 1 1 2 1 1 2 1 1 0];
>> Tc = [1 1 1 2 2 2 2 1 1 1]; % ;
>> I1 = find(Tc==1); %
>> I2 = find(Tc==2); %
>> figure(1), clf, axis([4,4,3,3]), hold on
>> plot(P(1,I1),P(2,I1),'+r')
[Fig. 7.16. Training set and weight positions of the LVQ network]
>> plot(P(1,I2),P(2,I2),'xb')
>> T = ind2vec(Tc);
% ;
>> T = full(T);
% ;
>> net = newlvq(minmax(P), 4, [0.6 0.4]);
>> net.trainParam.epochs = 200;
>> net.trainParam.lr = 0.05;
>> net.trainParam.goal = 1e5;
>> net = train(net,P,T);
>> w = net.IW{1};
>> plot(w(:,1),w(:,2),'rp');
. 7.16, .
>> Y = sim(net,P);
>> Yc = vec2ind(Y)
Yc =
1 1 1 2 2 2 2 1 1 1
7.9. ,
, (. 7.17):
>> A = [rand(1,20) + 0.2; rand(1,20) + 0.2];
>> B = [rand(1,20) + 1.2; rand(1,20) + 1.2];
>> C = [rand(1,20) + 2.2; rand(1,20) + 2.2];
[Fig. 7.17. Three clusters of input vectors]

[Fig. 7.18. Neuron positions of the LVQ network]
>> end
>> plotsom(net.iw{1,1}');
1. ?
2. ?
3.
?
4. ?
5. ?
6. ? ?
7. ?
8.
?
9. ?
10. ?
11. ?
12. MatLab ?
13. ?
14. ?
15. ?
16. SOM?
17. MatLab SOM?
18. SOM?
19.
SOM?
20. LVQ-?
21. LVQ-?
22. MatLab LVQ-?
23. ,
LVQ- ?
8.
8.1.
.
() ,
() W(s) (. 8.1,a).
, K(s) (. 8.1,).
. , ,
.
,
, . 8.2, .
)
X(s)
W(s)
Y(s)
)
X(s)
K(s)
Y(s)
W(s)
. 8.1.
ym(t)
ey(t)
W
e(t)
g(t)
u(t)
y(t)
. 8.2.
,
.
8.2.
:
F(X) → min,  X ∈ D ⊂ Rⁿ,

where D is the feasible set and Rⁿ is the n-dimensional Euclidean space. A point X* is a local minimum if F(X*) ≤ F(X) for all X in some ε-neighbourhood

Bε(X*) = { X ∈ Rⁿ : ||X − X*|| < ε }.

Local search methods therefore find a local optimum Xlopt, which in general differs from the global optimum Xopt.
,
. ,
,
. ,
, :
. 8.3.
,
,
.
. ,
.
,
[55].
.
: (. 8.3).
, ,
. , , [11].
,
x1(t)
2
1
u(t)
3
x2(t)
4
W1
W2
w11
w12
w13
w14
w21
w22
w23
w24
w11
w21
W1
w31
w41
W2
. 8.4.
.
().
, (biologically inspired
algorithms). natural computing ( ) [56,
57]. ([12, 51, 58] .).
,
-, . . , .
.8.4.
,
, , , , . ,
.
8.3.
(simulated
annealing) [11, 59] .
( ). ,
.
() , , .
,
,
.
P(E) = exp( −E / (kT) ),

where P(E) is the probability of a state with energy E, k is Boltzmann's constant and T is the temperature. As the temperature T decreases, transitions that increase the energy become less and less probable.
.
The basic simulated annealing algorithm:
1. Set the initial temperature T = Tmax and time t = 0.
2. Choose an initial (e.g. random) point x = x0.
3. Compute F(x).
4. Generate a new point x′ = x + Δx.
5. Compute F(x′).
6. Compute ΔF = F(x′) − F(x).
7. If ΔF < 0, accept the new point: x = x′ (it is better than the old one).
8. If ΔF > 0, accept the worse point x′ with the probability

P(ΔF) = exp( −ΔF / (kT) ),

where k is a constant. Draw a random number ξ uniform on [0, 1]; if P(ΔF) > ξ, set x = x′.
9. Decrease the temperature, advance time t = t + Δt and return to step 4; the algorithm stops when T becomes small enough.
The success of the method depends on the cooling schedule: Tmax must be large enough, and a schedule that guarantees (slow) convergence is

T(t) = Tmax / log(1 + t).
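The loop above can be sketched in Python (the linear cooling, the step size and the test function here are illustrative choices of this sketch):

```python
import math
import random

def anneal(f, x0, t_max=500.0, steps=200, k=1.0, cooling=2.5, step=1.0):
    """Minimal simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-dF/(k*T)); linear cooling."""
    x, fx, T = x0, f(x0), t_max
    best, fbest = x, fx
    for _ in range(steps):
        xn = x + random.uniform(-step, step)
        fn = f(xn)
        dF = fn - fx
        if dF < 0 or random.random() < math.exp(-dF / (k * T)):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        T = max(T - cooling, 1e-9)  # keep T positive
    return best, fbest

random.seed(0)
x, fx = anneal(lambda v: v * v, x0=10.0)
```

Early on, when T is large, almost every move is accepted and the search wanders freely; as T falls, worsening moves become rare and the search settles into a minimum.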
, .
Example 8.1. Minimization of a test function by simulated annealing:

>> tm=100;
>> K=1;
>> x1=10;
>> Tm=500;
>> F1=test(x1);
>> for i=1:200
z=rand(1);
if z > 0.5 z = 1; else z = -1; % random direction of the step
end;
dx(i) = z*rand(1)*(2*Tm/500); % step size decreases with temperature
x2(i)=x1+dx(i); % candidate point
F2(i) = test(x2(i));
Fd(i) = F2(i) - F1; % change of the objective function
P(i) = 0;
if Fd(i) < 0 x1 = x2(i); F1 = F2(i); elseif Fd(i) >= 0 P(i) = exp(-Fd(i)/(K*Tm));
end;
if P(i) > rand(1) x1=x2(i); % accept a worse point with probability P
F1 = F2(i);
end;
Tm=Tm-2.5; % cooling
end;
>> figure; plot(x2); grid; % trajectory of candidate points
>> figure; plot(F2); grid; % objective values
>> figure; plot(Fd); grid; % objective changes

The test function:

function s=test(x)
s=20+x.^2+10.*(cos(2*pi.*x))
end

Search range: x ∈ [−10, 10].
: x [ 10, 10].
8.4.
() ,
, .
,
.
MatLab [51]. MatLab (toolbox) Simulink Control System toolbox.
Simulink
,
. Control System
toolbox, , , . ()
.
. 8.5.
, .
.
5. .
,
. , .
:
(), .
, ,
;
,
- Simulink.
,
: . () .
.
, (-), .
().
.
.
8.2.
.
() .
The dynamics of a DC motor are described by the equations [61]:

va = Ra ia + La (dia/dt) + ea,
T = J (dωm/dt) + TL sgn(ωm) + Bm ωm,
ea = K ωm,
T = K ia,

where va is the armature voltage; Ra the armature resistance; ia the armature current; La the armature inductance; ea the back EMF; T the electromagnetic torque; J the moment of inertia of the rotor; ωm the angular velocity; Bm the viscous friction coefficient; TL the dry (Coulomb) friction torque. The total friction torque thus consists of the Coulomb component TL sgn(ωm), which changes sign with the direction of rotation, and the viscous component Bm ωm, proportional to speed. The parameter values used below are:

Ra = 2 Ohm, La = 0.0052 H, J = 1.5·10⁻⁴ kg·m², Bm = 1·10⁻³ N·m·s, K = 0.1 V·s/rad.
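For readers without Simulink, the model can be integrated with a simple Euler scheme in Python (a sketch; the step size, final time and applied voltage are illustrative assumptions):

```python
def simulate_motor(va, t_end=0.5, dt=1e-4,
                   Ra=2.0, La=0.0052, J=1.5e-4, Bm=1e-3, K=0.1, TL=0.0):
    """Euler integration of the DC motor equations above.
    State: armature current ia and angular velocity wm."""
    ia = wm = 0.0
    for _ in range(int(t_end / dt)):
        dia = (va - Ra * ia - K * wm) / La                  # electrical equation
        sgn = 1 if wm > 0 else -1 if wm < 0 else 0
        dwm = (K * ia - TL * sgn - Bm * wm) / J             # mechanical equation
        ia += dia * dt
        wm += dwm * dt
    return ia, wm

ia, wm = simulate_motor(va=10.0)
```

With these parameters and TL = 0 the steady-state speed follows from va = Ra ia + K wm and K ia = Bm wm, giving wm = va / (Ra Bm / K + K) ≈ 83.3 rad/s.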
, Tc
m ( ). (sgn)
.
- (sgn) Tc , , .
Tc
.
. . a [11].
,
: . a. (Relay), .
Relay ,
- ,
. .
, , .
,
, 8.6.
e(t), ,
. , .
.
() , 226
1
e(t)
z1
z1
W1
W2
1-
2-
u(t)
. 8.6.
.
W1 W2,
.
. 8.6 12 (
). , .
m-:
function z=set2(X);
global k1; global k2; global k3; global k4; global k5; global k6;
global k7; global k8; global k9; global k10; global k11; global k12;
k1=X(1); k2=X(2); k3=X(3); k4=X(4); k5=X(5); k6=X(6);
k7=X(7); k8=X(8); k9=X(9); k10=X(10); k11=X(11); k12=X(12);
sim('dpt_NC');
z=sum(abs(simout-simout1));
end
m- :
, ;
(simulink model 'dpt_NC');
- ( ),
.
simout,
, ().
. 8.7 - ,
MatLab Simulink.
gatool [51].
Step
Step1
Add
wc
0.25
Gain
Out1
W11
Transport
Delay
Transport
Delay1
Out1
W12
Out1
W13
ie
Out1
W21
Sign
va
w
z
p
dotprod Saturation1
w
z
p
dotprod1 Saturation2
w
z
p
dotprod2 Saturation3
Tl
Relay
va
To Workspace1
simout1
To Workspace
simout
we
Tl
va
wm
wm
ia ia
w
z
p
dotprod3 Saturation
DC
MOTOR
150
100
(rad/s)
50
0
-50
-100
-150
0.2
0.4
0.6
0.8
1.2
. 8.8. : a ;
b
, , , ( m = 100),
,
( 0,07 ).
5070
20 .
:
3,6
2,4 8,6
. 8.8
.
. 8.6, .
, .
8.3 ( ).
-. 90%
-.
, , [62].
,
.
,
. ,
-, , .
The PID control law is

u(t) = Kp e(t) + Kd (de(t)/dt) + Ki ∫ e(t) dt,

where Kp, Kd, Ki are the proportional, derivative and integral gains; e(t) is the control error and u(t) is the controller output.
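A discrete-time version of this law is easy to sketch in Python (rectangular integration and a backward difference for the derivative are the assumptions of this sketch):

```python
class PID:
    """Discrete PID controller:
    u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, e):
        self.integral += e * self.dt           # rectangular integration
        deriv = (e - self.prev_e) / self.dt    # backward difference
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv
```

With a constant error the proportional term stays fixed while the integral term grows step by step, which is exactly the behaviour that removes steady-state error.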
- . 8.9 (g(t) y(t)
).
,
(. 8.10).
- (
) MatLab ( 1 . 8.11).
( 2 . 8.11).
,
-
Kp
g(t)
e(t)
+
Kd
Ki
y(t)
u(t)
. 8.9. -
1
0.1s 2 + 0.02s + 1
Lookup Table
Transfer Fcn
0.2
0
-0.2
-0.4
0.5
1 .5
t
2.5
. 8.11.
. ,
(. 8.12).
. 8.12 .
1- , 2- .
.
. (. . 8.11), 0,01 0,1 .
, , W1 W2, .
, .
z 1
z 1
e(t)
W1
1-
Kp
Kd
Ki
W2
2-
. 8.12.
, . . , 18
().
. .
The fitness of the i-th individual is computed from the closed-loop tracking error. If y*(t) is the desired (reference) response and yi(t) the response obtained with the i-th parameter set, then

Pi = 1 / (1 + Ei),  Ei = Σ_{k=1}^{N} | y*(tk) − yi(tk) |,

where N is the number of sample points.
. 8.13 MatLab Simulink.
. 8.13 , . simout
simout1
.
gatool MatLab. 50 , 100 .
:
-0,66 0,75 0,88
0,41
1
0,97
Step2
z
p
dotprod
w
z
p
dotprod1
w
z
p
dotprod2
Out1
W21
Derivative
du/dt
1
s
Integrator
Product
Product2
Product1
Transport
Delay
Out1
W13
Out1
W23
Transfer Fcn1
0.01s 2+0.3s+1
Transfer Fcn
0.1s 2+0.02s+1
tansig2
tansig1
tansig
Out1
W22
purelin
simout1
To Workspace1
simout
To Workspace2
purelin2
purelin1
Lookup Table2
z
p
dotprod5
z
p
dotprod4
z
p
dotprod3
Add3
Out1
Out1
Transport
Delay1
W12
W11
. 8.14 . . 8.15 - .
1.2
1
0.8
y*(t)
0.6
0.4
0.2
y(t)
0
-0.2
-0.4
0.5
1.5
t
2.5
. 8.14.
6
5
4
Kp
3
Kd
2
Ki
0.5
1 .5
t
2.5
. 8.15. -
. 8.15, , -
, - ( 2 . 8.11).
8.5.
Particle Swarm Optimization (PSO) [13]. swarm
intelligence [63], , . ,
.
.
,
, , .
, , .
. ,
, .
. , .
, .
, ..
.
, N- , (
).
, . ,
, () .
,
.
PSO
. N-
N. ,
, ( ), ,
( ).
,
,
,
.
, .
, :
1. N- :
;
G , ;
v .
2. , :
P - ( );
G -, ( ).
The velocity of each particle is used to update its position:

Xi = Xi + vi.    (8.1)

The velocity is limited to the range [−vmax, vmax], where vmax is a parameter, and updated as

vi = vi + c1 r1 (G − Xi) + c2 r2 (Pi − Xi),    (8.2)

where i is the particle index; vi its velocity; Xi its current position; Pi the best position found so far by particle i; G the best position found so far by the whole swarm; c1, c2 are acceleration coefficients; r1, r2 are random numbers uniformly distributed on [0, 1] and regenerated at every step.
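A minimal Python sketch of the whole loop, following (8.1)-(8.2) as printed (with c1 on the global-best term); the swarm size, iteration count and search range are illustrative assumptions:

```python
import random

def pso(f, dim, n_particles=20, iters=100, c1=2.0, c2=2.0, vmax=1.0):
    """Minimal particle swarm optimiser implementing (8.1)-(8.2)."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                   # personal best positions
    fp = [f(x) for x in X]
    g = min(range(n_particles), key=fp.__getitem__)
    G, fg = P[g][:], fp[g]                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] += c1 * r1 * (G[d] - X[i][d]) + c2 * r2 * (P[i][d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # velocity clamp
                X[i][d] += V[i][d]                        # (8.1)
            fx = f(X[i])
            if fx < fp[i]:
                P[i], fp[i] = X[i][:], fx
                if fx < fg:
                    G, fg = X[i][:], fx
    return G, fg

random.seed(1)
G, fg = pso(lambda x: sum(v * v for v in x), dim=2)
```

The global best can only improve over the iterations, so the returned value never exceeds the best of the initial random swarm.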
.
, .
The coefficient c2 controls attraction to the particle's own best position, and c1 attraction to the global best. A popular modification of (8.2) multiplies the previous velocity vi by an inertia weight W, which balances global exploration (W close to 1) against local refinement (smaller W) (Fig. 8.16).
. 8.16.
PSO :
;
;
;
;
.
:
PSO ;
, .
[Fig. 8.17. Movement of a particle: its new velocity combines the previous velocity v(t), attraction to its own best position P(t) and attraction to the global best G(t)]
8.6.
1990-
,
.
, [67, 68]
, .
, , . .
, , , , . (Estimation of
Distribution Algorithms EDA), [68].
EDA .
EDA
, . .
i- (. 8.1).
, .
1, 4 5 (. 8.2).
j . 8.2
j- .
j-
j [0, 1]. j, j- . EDA
.
8.1
1
2
3
4
5
6
1001010
0100101
1101010
0110110
1001111
0001101
0,109
0,697
1,790
0,090
0,238
2,531
8.2
0,667
0,333
1
4
5
1
0
1
0
1
0
Pi
0,333
0,667
0,667
0
1
0
1
0
1
0
1
1
0,333
1
1
1
0
0
1
(Differential Evolution
DE) [69, 70].
G NP D:
G : { Xi,G }, i = 1, NP.
.
.
For each target vector Xi,G:
1. Three distinct vectors Xr1,G, Xr2,G, Xr3,G, different from Xi,G, are chosen at random from the population.
2. A mutant vector is formed with the scale factor F ∈ [0, 2]:

Vi,G+1 = Xr1,G + F (Xr2,G − Xr3,G).

3. Crossover: the trial vector takes the component uji,G+1 from the mutant with the crossover probability CR, and from xji,G otherwise, j = 1, ..., D.
4. Selection: the trial vector replaces Xi,G in the next generation if it yields a better objective value.
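One generation of this scheme can be sketched in Python (population size, F, CR and the test function are illustrative assumptions of this sketch):

```python
import random

def de_step(pop, f, F=0.8, CR=0.9):
    """One generation of differential evolution: for each target vector
    build a mutant x_r1 + F*(x_r2 - x_r3), cross it over with the target,
    and keep the better of the two."""
    NP, D = len(pop), len(pop[0])
    new_pop = []
    for i in range(NP):
        r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        jrand = random.randrange(D)  # force at least one mutant component
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(D)]
        new_pop.append(trial if f(trial) <= f(pop[i]) else pop[i])
    return new_pop

random.seed(2)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(50):
    pop = de_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

Because selection keeps the better of target and trial in every slot, the best objective value in the population never increases from one generation to the next.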
. , . ,
.
. ,
, , ,
, . .
HS :
1. Initialization of the harmony memory: hms (harmony memory size) random vectors of dimension n are generated and stored together with their objective values F(Xi):

HM = | x1(1)   ...  xn(1)   F(X(1))   |
     | x1(2)   ...  xn(2)   F(X(2))   |
     | ...                            |
     | x1(hms) ...  xn(hms) F(X(hms)) |
2. A new vector X′ is generated component by component. With the probability hmcr (harmony memory considering rate) a component is taken from the memory:

xi′ ← xi(int(rand(0,1)·hms)+1);

with the probability 1 − hmcr it is generated at random.
3. A component taken from the memory is additionally perturbed with the probability par (pitch adjusting rate):

xi′ ← xi′ ± w·rand(0,1),

where w is the bandwidth of the adjustment; otherwise the component is left unchanged.
4. If the new vector X′ is better than the worst vector Xworst in the memory, Xworst is replaced by X′.
5. Steps 2-4 are repeated until a stopping criterion is met.
Typical parameter values: hms = 50-100, hmcr ∈ [0.7, 0.99], par ∈ [0.1, 0.5].
(biologically inspired algorithms) . (Ants Colony Optimization ACO)
[14]. (Bees Algorithm BA) [72, 73],
. (monkey search).
, ([74, 75]).
(). - [74]
, , .
. . , .
()
,
,
,
[75].
. 8.18.
.
n ,
P k, :
i = 1,n.
. M,
Ag = { popt1,
popti2 , ...,
poptk },
N
M
. 8.18.
-
Let Y*(t) be the desired response. If Yi(t) is the response produced by the i-th antibody, its affinity is

fi(t) = 1 / ( Σ_{k=1}^{N} | Y*(tk) − Yi(tk) | + 1 ),  i = 1, ..., n,

where N is the number of sample points. The m best antibodies are selected for cloning; the number of clones Ci of antibody Abi is proportional to its affinity:

Ci = round(kc · fi),  i = 1, ..., n,

where kc is a cloning coefficient. Each clone of Abi then undergoes mutation with an intensity inversely proportional to the affinity:

αi = round( kmut · fi⁻¹ ),  i = 1, ..., n,

where kmut is a mutation coefficient.
, ,
.
. M .
M ,
, M.
N , M .
,
. 245
,
. () [76, 77].
.
,
, , , ,
.
,
, .
, , .
()
(, ). .
.
, :
, ;
(belief space),
.
,
. : ,
(. 8.19).
:
( );
;
;
;
(
);
.
()
(
)
(
)
. 8.19.
() .
. ( ).
.
PSO.
,
, , .
, .
.
,
,
.
, , :
B(t) = (S(t), N(t)),
S(t) c , , ; N(t) , .
.
, , - .
.
,
.
.
,
.
, :
1. ,
. .
2. , , .
3. (,
), .
1.
?
2. ?
3. ?
4. ?
5. ?
6. ?
7.
?
8. ?
9.
?
10. ?
11. ?
12. ?
13. ?
14. ?
15. ?
16.
?
17. ?
18. ?
19. ?
20. ?
21. ?
22.
?
23. ?
24. ?
25. ?
26.
?
27. ?
28.
?
29. ?
30. ?
31. ?
32. ?
33. ?
34. ?
35. ?
36. ?
37. ()?
38. ?
,
, offline, ,
.
,
, , . , ,
.
,
,
.
, . . , . :
,
. ( ). ;
. .
,
.
, ,
, .
.
.
;
.
,
;
().
, .
, (VHDL, Verilog).
, .
. , , .
: , ,
(DSP), (PCI,
Ethernet-MAC, LVDS) (ARM,
PowerPC). , Altera, Atmel, Xilinx .
,
.
,
.
():
, ;
, , .
, . ,
, .
, XXI .
. ,
.
, Supercomputing 2012 IBM , Compass.
Blue Gene/Q Sequoia, . , Compass 2084 , 530
100 , 77 , .
.
.
,
, .
1.- ., . , . .: , 1956.
2.Hebb D. O. The organization of behavior: A Neuropsychological
theory. New York: Wiley, 1949.
3. . . . .: , 1965.
4. Widrow B., Hoff M. E. Adaptive switching circuits // 1960 IRE WESCON Convention Record. New York: IRE, 1960. Pt 4. P. 96–104.
5. ., . . .: , 1971.
6. Kohonen T. Correlation matrix memories // IEEE Trans. Comput. 1972. Vol. 21. P. 353–359.
7. Grossberg S. Adaptive pattern classification and universal recoding. I. Parallel development and coding of neural feature detectors // Biol. Cybernet. 1976. Vol. 23. P. 121–134.
8. Hopfield J. J. Neural networks and physical systems with emergent collective computational abilities // Proc. Nat. Acad. Sci. U.S.A. 1982. Vol. 79. P. 2554–2558.
9.Kohonen T. Self-organization and associative memory. Berlin:
Springer, 1987.
10.Rumelhart D. E., Hinton G. E., Williams R. J. Learning
internal representations by error propagation. I. Parallel distributed
processing. 1986. Vol. 1. P. 318362.
11.Aarts E. H. L., Laarhoven P. J. M. van. Simulated annealing:
Theory and applications. London: Kluwer, 1987.
12.Goldberg D. E. Genetic algorithms in search, optimization and
machine learning. New York: Addison-Wesley, Reading, MA. 1989.
13. Kennedy J., Eberhart R. Particle swarm optimization // Proc. 1995 IEEE International Conference on Neural Networks. IEEE Press. P. 1942–1948.
14. The ant system: Optimization by a colony of cooperating agents / Dorigo M. et al. // IEEE Trans. Systems Man Cybernetics. Pt B: Cybernetics. 1996. Vol. 26(1). P. 29–41.
15. : : .
/ . . . . , . . . .:
, 2001. . 5.
16. . . . .: ,
1990.
17. . ., . . . : , 1996.
18. . . . : , 1994.
19. . . : . / . . . . . .: , 2000. . 3.
20. . ., . ., . . : . / . . . . . .: , 2002. . 8.
21. / .
. . , . . . .: , 2003. . 9.
22. / .
... .: , 2004.
23. . ., . ., . . . .: , 2004.
24. . ., . .
. .: - . -, 2005.
25. . : . .: , 1992.
26. ., ., . / . . . , . . . .: , 2000.
27. . /
. . . . . .: , 2002.
28. . . .: . , 2006.
29. . .: , 1991.
30. . . . .: , 1981.
31. . : ,
. .: , 2003.
32. . . .: , 2002.
33. . : . .:
, 1995.
34. ., . / . .
.: , 1990.
35. ., . . .:
, 1975.
36. . .: , 1979.
37. ., . . .:
, 1976.
38. . . : . . .:
. 2010.
39. Hornik K., Stinchcombe M., White H. Multilayer feedforward networks are universal approximators // Neural Networks. 1989. Vol. 2. P. 359–366.
40. ,
/ . . , . . , . . , . . . : , 1997.
41. . ., . . . .: , 1972.
42.Hagan M. T., Demuth H. B. Neural networks for control //
Proc. 1999 American Control Conference. San Diego: CA, 1999.
P.16421656.
43.Neural systems for control / O. Omidvar, D. L. Elliott eds. //
New York: Academic Press. 1997. P. 272.
44. . . //
. 2002. 10. . 216.
45. . ., . . // . 2011. 2. .7994.
46.Soloway D., Haley P. J. Neural generalized predictive control//
Proc. 1996 IEEE International Symposium on Intelligent Control.
1996. P. 277281.
47.Chen S., Billings S. A. Representation of nonlinear systems: The
NARMA model // Int. J. Control. 1989. Vol. 49(3). P. 10131032.
48.Narendra K. S., Mukhopadhyay S. Adaptive control using
neural networks and approximate models //IEEE Trans. Neural
Networks. 1997. Vol. 8. P. 475485.
49. Broomhead D. S., Lowe D. Multivariable functional interpolation and adaptive networks // Complex Systems. 1988. N 2. P. 321–355.
50.Yager R., Filev D. Essentials of fuzzy modeling and control.
New York: John Wiley & Sons. 1984.
51. . . : .
.: . 2008.
52. . ., . ., . . . .: . -. 2005.
53. Elman J. L. Finding structure in time // Cognitive Sci. 1990. N 14. P. 179–211.
54. . ., . . . 6.
.: -, 2002.
55. Glover F. Future paths for integer programming and links to artificial intelligence // Comput. Oper. Res. 1986. Vol. 13(5). P. 533–549.
56. Mohammadian M., Ruhul A., Xin Y. Computational intelligence in control. London: Idea Group Inc., 2003.
57. Rudas I. F., Janos F., Kacprzyk J. Computational intelligence in engineering. Berlin; Heidelberg: Springer-Verlag, 2010.
58. . ., . ., . .
. .: , 2006.
59. Ali M., Storey C. Aspiration based simulated annealing algorithm // J. Global Optimization. 1996. N 11. P. 181–191.
60. . . // . .
. 1999. 3. . 140145.
61. . . .
.: , 1985.
62.Astrom K. J., Hagglund T. Advanced PID control. Boston: ISA,
2006.
63. Kennedy J., Eberhart R. Swarm intelligence. San Francisco, CA: Morgan Kaufmann Publ. Inc., 2001.
64. Janson S., Middendorf M. A hierarchical particle swarm optimizer and its adaptive variant // IEEE Trans. Systems, Man, and Cybernetics. Pt B: Cybernetics. 2005. Vol. 35. P. 1272–1282.
65. Angeline P. J. Using selection to improve particle swarm optimization // IEEE Int. Conference on Evolutionary Computation. 1998. P. 84–89.
66. A fuzzy adaptive turbulent particle swarm optimisation / Liu et al. // Int. J. Innovative Computing and Applications. 2007. Vol. 1. N 1. P. 39–47.
67. Burakov M. V., Konovalov A. S. Peculiarities of genetic algorithm usage when synthesizing neural and fuzzy regulators // Kluwer Int. Ser. Eng. and Comput. Sci. 2002. N 664. P. 39–48.
68. Muhlenbein H., Paass G. From recombination of genes to the estimation of distributions. I: Binary parameters // Lecture Notes in Comput. Sci. 1141: Parallel Problem Solving from Nature – PPSN IV. 1996. P. 178–187.
69. Storn R., Price K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces // J. Global Optimization. 1997. Vol. 11. P. 341–359.
70. Price K., Storn R., Lampinen J. Differential evolution – a practical approach to global optimization. Heidelberg: Springer, 2005.
71. Geem Z. W. Music-inspired harmony search algorithm: Theory and applications. Berlin: Springer, 2009.
72. The bees algorithm / D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, M. Zaidi. Technical note. Cardiff: Manufacturing Engineering Centre, Cardiff University, 2005.
73. Karaboga D. An idea based on honey bee swarm for numerical optimization: Technical report. Kayseri: Erciyes University, 2005.
Appendix 1

NEURAL NETWORK CONTROL IN MatLab

П.1.1. Neural network predictive control

In this work, the NN Predictive Controller block of the Neural Net toolbox is used to control a continuous stirred tank reactor, shown in Fig. П.1.1.
Fig. П.1.1. The continuous stirred tank reactor: inlet flows w0 and w2, inlet concentration Cb2, liquid level h(t), product concentration Cb
Fig. П.1.2. Structure of neural predictive control: a random reference signal (Random Reference) is fed to the neural network model (NN Model) and to the optimization block (Optim.), which computes the control signal (Control Signal) applied to the Plant (Continuous Stirred Tank Reactor); the controlled variable is the concentration Cb, the manipulated variable is the flow w2
Fig. П.1.3. Simulink model with the NN Predictive Controller block, the Reference and Plant Output signals, a Clock, and an X(2Y) Graph block
The Plant Identification window (Fig. П.1.4) contains the following fields:
Training Samples – the number of training samples generated by simulating the Simulink model of the plant;
Maximum Plant Input – the maximum value of the plant input signal;
Minimum Plant Input – the minimum value of the plant input signal;
Maximum Interval Value (sec) – the maximum time interval over which the random input signal is held constant;
Minimum Interval Value (sec) – the minimum time interval over which the random input signal is held constant;
Limit Output Data – when checked, the plant output is limited to a specified range;
Maximum Plant Output – the maximum value of the plant output, used when Limit Output Data is checked;
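The identification input implied by these fields is a random staircase: a level drawn between the input bounds, held for a random interval between the minimum and maximum interval values. A minimal sketch of such a generator (all numeric values are illustrative, not toolbox defaults):

```python
import random

def random_staircase(n_samples, dt, u_min, u_max, t_min, t_max, seed=0):
    """Random step signal for plant identification: each level is drawn
    from [u_min, u_max] and held for a random interval of t_min..t_max
    seconds, sampled with step dt."""
    rng = random.Random(seed)
    u = []
    while len(u) < n_samples:
        level = rng.uniform(u_min, u_max)
        hold = rng.uniform(t_min, t_max)
        u.extend([level] * max(1, int(hold / dt)))
    return u[:n_samples]

# Illustrative bounds (not the demo's settings):
signal = random_staircase(1000, dt=0.05, u_min=0.0, u_max=4.0,
                          t_min=5.0, t_max=20.0)
```

A signal of this kind excites the plant over its whole input range, which is what makes the identified neural network model valid away from a single operating point.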
Fig. П.1.5. Generated training data

After the data have been generated, the settings are accepted with Apply or OK. The Cancel button closes the window without saving the changes.
Previously stored data can be loaded with the Import Data button; the Export Data button saves the current data set for later use.
The Erase Generated Data button deletes the generated data.
When identification is complete, the Plant Identification window is closed, control returns to the Simulink model, and the identified neural network model of the plant is used by the controller.
Fig. П.1.6. Results of plant identification

After training, pressing Apply or OK accepts the identified model. The identification results are shown in Fig. П.1.6.
П.1.2. Model reference control

As the plant, consider a single-link robot arm described by the equation

d²φ/dt² = -2·dφ/dt - 10·sin φ + u,

where φ is the arm rotation angle and u is the control torque produced by the drive.
The desired behavior of the closed-loop system for a reference input r is specified by the reference model

d²yr/dt² = -6·dyr/dt - 9·yr + 9·r,

where yr is the output of the reference model. The plant and the reference model can be simulated in MatLab (Fig. П.1.7 and Fig. П.1.8).
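The two equations above can also be checked numerically with a simple forward-Euler integration; a sketch, where the step size, horizon, and the constant torque u = 5 are chosen purely for illustration:

```python
import math

def euler(f, x0, dt, n):
    """Forward-Euler integration of x' = f(x); returns the final state."""
    x = list(x0)
    for _ in range(n):
        dx = f(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Robot arm, phi'' = -2*phi' - 10*sin(phi) + u, with constant torque u = 5:
arm = euler(lambda x: [x[1], -2 * x[1] - 10 * math.sin(x[0]) + 5.0],
            [0.0, 0.0], dt=0.001, n=10000)

# Reference model, yr'' = -6*yr' - 9*yr + 9*r, with a unit step r = 1:
ref = euler(lambda x: [x[1], -6 * x[1] - 9 * x[0] + 9 * 1.0],
            [0.0, 0.0], dt=0.001, n=10000)
```

With u = 5 the arm settles at the angle where 10·sin φ = 5, i.e. φ = π/6 ≈ 0.52 rad, after damped oscillations; the reference model is critically damped (double pole at -3) and settles smoothly at yr = r = 1, which is exactly the contrast the model reference controller is trained to remove.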
Fig. П.1.8 compares the plant response φ(t) with the reference model response yr(t).
The demonstration model is opened with the command mrefrobotarm (Fig. П.1.9). Double-clicking the Model Reference Controller block opens its configuration window, shown in Fig. П.1.10.
Fig. П.1.7. Simulink diagram for simulating the robot arm (integrators, gains 2 and 10, sin block) and the reference model (integrators, gains 6, 9 and 9) with a Step input
Fig. П.1.8. Step responses of the plant, φ(t), and of the reference model, yr(t)
The controller configuration window is similar to the plant identification window (cf. Fig. П.1.4), with additional fields specific to the controller.
The Sampling Interval and Normalize Training Data fields have the same meaning as for plant identification.
No. Delayed Reference Inputs – the number of delayed values of the reference signal fed to the controller network.
No. Delayed Controller Outputs – the number of delayed controller outputs fed back to the network.
No. Delayed Plant Outputs – the number of delayed plant outputs used as controller inputs.
The training data can be loaded from mat-files of MatLab or generated from the reference model. The Reference Model field specifies the Simulink model of the desired behavior.
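The three "No. Delayed …" fields define how many past values of each signal are collected into the controller network's input vector (tapped delay lines). A sketch with assumed signal arrays and delay counts, for illustration only:

```python
def regressor(ref, u, y, k, n_ref=2, n_u=1, n_y=2):
    """Controller input vector at step k, built from tapped delay lines:
    n_ref delayed reference values, n_u delayed controller outputs and
    n_y delayed plant outputs (the three No. Delayed ... fields)."""
    return ([ref[k - j] for j in range(1, n_ref + 1)] +
            [u[k - j] for j in range(1, n_u + 1)] +
            [y[k - j] for j in range(1, n_y + 1)])

# Illustrative signal histories (not toolbox data):
r_hist = [0.1 * t for t in range(10)]
u_hist = [0.0] * 10
y_hist = [0.05 * t for t in range(10)]
x = regressor(r_hist, u_hist, y_hist, k=5)  # 2 + 1 + 2 = 5 components
```

Increasing these counts gives the controller a longer memory of the signals, at the price of a larger network input layer.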
Fig. П.1.9. The mrefrobotarm demonstration model

Fig. П.1.10. The Model Reference Controller window
The reference model must be created in advance as a Simulink model.
Controller Training Samples sets the number of training samples: the reference input is a sequence of random steps, and the controller is trained so that the plant follows the response of the reference model.
The training parameters are:
Controller Training Epochs – the number of training epochs performed for each segment of data.
Controller Training Segments – the number of segments into which the training data are divided; the data are presented to the network segment by segment.
Use Current Weights – when checked, training continues from the current controller weights instead of reinitializing them.
Use Cumulative Training – when checked, each new segment is added to the data already used, so the network is trained on the accumulated set; training becomes slower but can be more accurate.
After the parameters have been set, training is started with the Train Controller button. Controller training is considerably slower than plant identification, since it requires backpropagation through the model of the plant.
The plant response during training is displayed in the Plant Response for NN Model Reference Control window; training can be repeated if the result is unsatisfactory. Typical results are shown in Fig. П.1.11.
If training of the Model Reference Controller fails, the following recommendations may help:
1. Check the quality of the identified plant model; a poorly identified model makes controller training impossible. In this case it is recommended to:
enlarge the identification data set;
widen the range of the input signals used for identification;
retrain the plant model.
2. Improve the data used for controller training.
Fig. П.1.11. Plant response during training of the Model Reference Controller
In this case it is recommended to:
regenerate the data in the Training Data window;
increase the number of training samples;
make the data cover the whole operating range of the plant.
3. Adjust the controller training settings:
increase the number of training epochs;
use cumulative training of the segments.
П.1.3. NARMA-L2 control

The NARMA-L2 controller is the third neural controller architecture available in the Neural Network toolbox.
The demonstration model is opened with the command narmamaglev (Fig. П.1.12). As Fig. П.1.12 shows, the model consists of the following blocks:
Plant;
NARMA-L2 Controller;
Random Reference;
Clock;
Graph.
The plant is a magnetic levitation system (Fig. П.1.13): a magnet suspended above an electromagnet, whose vertical motion is described by the equation
d²y(t)/dt² = -g + (α/M)·(i²(t)/y(t)) - (β/M)·(dy(t)/dt),

where y(t) is the position of the magnet above the electromagnet; g is the gravitational acceleration; α is the field strength constant, which depends on the coil parameters; β is the viscous friction coefficient; i is the current in the electromagnet coil; M is the mass of the magnet.
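The open-loop behavior of this equation at constant current can be checked with forward-Euler integration; the parameter values below are those commonly quoted for the toolbox maglev demo and are assumed here for illustration:

```python
# Plant parameters as in the toolbox maglev demo (assumed values):
g, alpha, beta, M = 9.8, 15.0, 12.0, 3.0
i = 1.0            # constant coil current, illustrative
y, v = 1.0, 0.0    # initial magnet position and velocity
dt = 0.001
for _ in range(10000):                      # 10 s of forward-Euler steps
    a = -g + (alpha / M) * i ** 2 / y - (beta / M) * v
    y, v = y + dt * v, v + dt * a
# at equilibrium (alpha/M)*i**2/y = g, i.e. y* = alpha*i**2/(M*g)
```

With a fixed current the magnet settles at the height y* = αi²/(Mg) where the field force balances gravity; the NARMA-L2 controller's task is to move this position along an arbitrary reference by manipulating i(t).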
The NARMA-L2 Controller block is configured from Simulink: double-clicking it opens the Plant Identification – NARMA-L2 window, in which the neural network model of the plant is built.
Fig. П.1.13. The magnetic levitation system: current i(t) in the electromagnet coil, magnet position y(t)
Training is started with the Train Network button, and identification proceeds as before. When it is finished, the results are accepted with OK or Apply in the Plant Identification window, and the identified model is transferred to the NN NARMA-L2 Controller block.
The Start Simulation command then runs the closed-loop simulation (see Fig. П.1.13).
Appendix 2

THE NNtool GRAPHICAL INTERFACE

NNtool is a graphical interface for creating and training neural networks in an interactive mode. NNtool allows the user to:
create neural networks;
enter and edit input and target data;
train networks;
simulate trained networks;
export networks and data to the MatLab workspace.
NNtool thus covers the whole cycle of work with a network without programming in the command line.
NNtool is started from the MatLab command line:
>> nntool
after which the main NNtool window opens (Fig. П.2.1).

Fig. П.2.1. The main NNtool window
Pressing the New button opens a window for creating a new network or data set. As an example, consider a network trained to reproduce the XOR function (Fig. П.2.2).
The input and target data are entered in the Data window (Fig. П.2.3).
The Create button creates a network with the chosen architecture (Fig. П.2.4); the created network appears in the network list (Fig. П.2.5).
The Train button opens the network training window.
Fig. П.2.2. Creating a new network or data set
Fig. П.2.3. Entering input and target data

In the training window, the training inputs and targets are selected, after which training can be configured and started.
Training is started from the Train Network tab (Fig. П.2.6). The Training Parameters tab sets:
epochs – the maximum number of training epochs, after which training stops;
goal – the target training error; training stops once the error falls below this value;
show – the interval, in epochs, at which training progress is displayed;
Fig. П.2.4. Creating a network in NNtool

Fig. П.2.5. The created network
Fig. П.2.6. The network training window

Fig. П.2.7. Training progress
time – the maximum training time; training stops as soon as any of the criteria is met.
The course of training is shown in a plot (Fig. П.2.7); when training finishes, the network can be simulated and the results exported to the workspace.
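The interplay of the epochs, goal and time criteria can be illustrated with a small hand-written network trained on the XOR data from this example; the network size, learning rate and limits below are illustrative assumptions, not NNtool defaults:

```python
import math, random, time

def train_xor(epochs=5000, goal=0.01, max_time=10.0, seed=1):
    """Train a 2-4-1 sigmoid network on XOR by online backpropagation.
    Training stops when the first criterion fires: the epoch limit
    (epochs), the error goal (goal, mean squared error over the four
    patterns) or the time limit (max_time, seconds)."""
    rng = random.Random(seed)
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    T = [0, 1, 1, 0]
    nh = 4
    W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(nh)]  # w1, w2, bias
    W2 = [rng.uniform(-1, 1) for _ in range(nh + 1)]                  # weights + bias
    sig = lambda s: 1.0 / (1.0 + math.exp(-s))
    lr, t0, mse = 0.5, time.time(), 1.0
    for _ in range(epochs):
        err = 0.0
        for (x1, x2), t in zip(X, T):
            h = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in W1]
            y = sig(sum(w * hj for w, hj in zip(W2, h)) + W2[nh])
            e = y - t
            err += e * e
            dy = e * y * (1.0 - y)               # output-layer delta
            for j in range(nh):                  # hidden deltas and updates
                dh = dy * W2[j] * h[j] * (1.0 - h[j])
                W2[j] -= lr * dy * h[j]
                W1[j][0] -= lr * dh * x1
                W1[j][1] -= lr * dh * x2
                W1[j][2] -= lr * dh
            W2[nh] -= lr * dy
        mse = err / 4.0
        if mse < goal or time.time() - t0 > max_time:
            break
    return mse

final_mse = train_xor()
```

Whichever criterion fires first ends training, exactly as with the epochs, goal and time fields of the Training Parameters tab.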
Passed for typesetting 05.02.13. Signed for printing 25.04.13.
Format 60×84 1/16. Conv. printed sheets 16.39. Publisher's sheets 16.36. Print run 100 copies. Order 200.
St. Petersburg State University of Aerospace Instrumentation (GUAP)
190000, St. Petersburg, B. Morskaya St., 67