Overview
1. Biological inspiration
2. Artificial neurons and neural networks
3. Learning processes
4. Learning with artificial neural networks
Biological inspiration
Animals are able to react adaptively to changes in their
external and internal environment, and they use their nervous
system to perform these behaviours.
[Figure: a biological neuron, with the axon, dendrites, and synapses labelled]
The spikes travelling along the axon of the pre-synaptic
neuron trigger the release of neurotransmitter substances
at the synapse.
The neurotransmitters cause excitation or inhibition in the
dendrite of the post-synaptic neuron.
The integration of the excitatory and inhibitory signals
may produce spikes in the post-synaptic neuron.
The contribution of the signals depends on the strength of
the synaptic connection.
Artificial neurons
Neurons work by processing information. They receive and
transmit information in the form of spikes.
[Figure: an artificial neuron with inputs $x_1, \ldots, x_n$, synaptic weights $w_1, \ldots, w_n$, and output $y$]

$z = \sum_{i=1}^{n} w_i x_i, \qquad y = H(z)$
The McCulloch-Pitts model:
spikes are interpreted as spike rates;
synaptic strengths are translated into synaptic weights;
excitation means a positive product between the incoming spike rate and the corresponding synaptic weight;
inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight.
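A minimal sketch of the McCulloch-Pitts neuron in Python (the function name and the choice of a zero threshold are illustrative assumptions, not from the slides):

```python
import numpy as np

def mcculloch_pitts(x, w, threshold=0.0):
    """Weighted sum of inputs followed by a hard threshold H."""
    z = np.dot(w, x)                        # z = sum_i w_i * x_i
    return 1.0 if z >= threshold else 0.0   # y = H(z)

# Positive weights model excitation, negative weights model inhibition.
y = mcculloch_pitts([1.0, 1.0, 1.0], [0.5, 0.5, -0.4])
```

With these example values the excitatory inputs outweigh the inhibitory one, so the neuron fires.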
Nonlinear generalisation of the McCulloch-Pitts neuron:

$y = f(x, w)$

where $y$ is the neuron's output, $x$ is the vector of inputs, and $w$ is the vector of synaptic weights.

Examples:

sigmoidal neuron: $y = \dfrac{1}{1 + e^{w^T x - a}}$

Gaussian neuron: $y = e^{-\frac{\|x - w\|^2}{2a^2}}$
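The two example neurons can be sketched directly from their formulas (function names are illustrative; the sigmoid follows the sign convention written above):

```python
import numpy as np

def sigmoidal_neuron(x, w, a):
    """y = 1 / (1 + exp(w^T x - a))  -- sigmoidal activation."""
    return 1.0 / (1.0 + np.exp(np.dot(w, x) - a))

def gaussian_neuron(x, w, a):
    """y = exp(-||x - w||^2 / (2 a^2))  -- Gaussian activation."""
    d = np.asarray(x, dtype=float) - np.asarray(w, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * a * a))
```

The sigmoidal neuron outputs 0.5 when $w^T x = a$; the Gaussian neuron outputs 1 when the input coincides with its weight vector.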
The young animal learns that the green fruits are sour,
while the yellowish/reddish ones are sweet. The learning
happens by adapting the fruit-picking behaviour.
Learning as optimisation
The objective of adapting the responses on the basis of the
information received from the environment is to achieve a
better state. E.g., the animal wants to eat many energy-rich,
juicy fruits that fill its stomach and make it feel happy.
The output of a layered network is built by composing the outputs of its neurons, layer by layer:

$y_1^1 = f(x_1, w_1^1), \quad y_2^1 = f(x_2, w_2^1), \quad y_3^1 = f(x_3, w_3^1), \qquad y^1 = (y_1^1, y_2^1, y_3^1)^T$

$y_1^2 = f(y^1, w_1^2), \quad y_2^2 = f(y^1, w_2^2), \quad y_3^2 = f(y^1, w_3^2), \qquad y^2 = (y_1^2, y_2^2, y_3^2)^T$

$y_{out} = f(y^2, w_1^3)$

Altogether: $y_{out} = F(x, W)$, where $W$ is the matrix of all weight vectors.
For example, with sigmoidal neurons and a linear output neuron:

$y_k^1 = \dfrac{1}{1 + e^{w^{1,kT} x - a_k^1}}, \quad k = 1, 2, 3, \qquad y^1 = (y_1^1, y_2^1, y_3^1)^T$

$y_k^2 = \dfrac{1}{1 + e^{w^{2,kT} y^1 - a_k^2}}, \quad k = 1, 2, \qquad y^2 = (y_1^2, y_2^2)^T$

$y_{out} = w^{3T} y^2$
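The layer-by-layer composition above can be sketched as follows (function names and weight shapes are illustrative assumptions):

```python
import numpy as np

def sigmoid_layer(x, W, a):
    """y_k = 1 / (1 + exp(w^k.T x - a_k)) for every row w^k of W."""
    return 1.0 / (1.0 + np.exp(W @ x - a))

def network_output(x, W1, a1, W2, a2, w3):
    y1 = sigmoid_layer(x, W1, a1)    # first layer activations y^1
    y2 = sigmoid_layer(y1, W2, a2)   # second layer activations y^2
    return w3 @ y2                   # linear output y_out = w^3.T y^2
```

Each layer's output vector becomes the input to the next layer, so the whole network is one composed function $F(x, W)$.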
A radial basis function (RBF) neuron computes its output from the distance of the input to a centre: $r(x) = r(\|x - c\|)$.

Example, the Gaussian RBF:

$f(x) = e^{-\frac{\|x - w\|^2}{2a^2}}$

An RBF neural network combines such basis functions linearly:

$y_{out} = \sum_{k=1}^{M} w_k^2 \, e^{-\frac{\|x - w^{1,k}\|^2}{2(a_k)^2}}$
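The RBF network output can be sketched directly from the sum above (names and the example centres are illustrative assumptions):

```python
import numpy as np

def rbf_output(x, centers, widths, w2):
    """y_out = sum_k w2_k * exp(-||x - c_k||^2 / (2 a_k^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)    # squared distances ||x - w^{1,k}||^2
    phi = np.exp(-d2 / (2.0 * widths ** 2))    # Gaussian basis activations
    return float(w2 @ phi)
```

When the input coincides with a centre, that basis function contributes its full weight.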
Learning to approximate
Error measure:

$E = \dfrac{1}{N} \sum_{t=1}^{N} (F(x^t; W) - y_t)^2$

Rule for changing the synaptic weights (gradient descent):

$\Delta w_i^j = -c \, \dfrac{\partial E(W)}{\partial w_i^j}, \qquad w_i^{j,new} = w_i^j + \Delta w_i^j$

where $c$ is the learning rate.
Learning with a linear neuron, $y_{out} = w^T x$:

Data: $\{(x^1, y_1), (x^2, y_2), \ldots, (x^N, y_N)\}$

Error: $E(t) = (y(t)_{out} - y_t)^2 = (w(t)^T x^t - y_t)^2$

Learning: $w_i(t+1) = w_i(t) - c \, \dfrac{\partial E(t)}{\partial w_i}$

$\dfrac{\partial E(t)}{\partial w_i} = 2 \, (w(t)^T x^t - y_t) \, x_i^t, \qquad w(t)^T x^t = \sum_{j=1}^{m} w_j(t) \, x_j^t$

which gives

$w_i(t+1) = w_i(t) - c \, (w(t)^T x^t - y_t) \, x_i^t$

(the constant factor 2 is absorbed into the learning rate).
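The linear neuron's update rule can be run on synthetic data; here the data is generated from a known (hypothetical) linear rule so convergence can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data generated by a known linear rule y_t = w_true^T x^t
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true

w = np.zeros(2)   # synaptic weights to be learned
c = 0.1           # learning rate
for t in range(len(X)):
    err = w @ X[t] - y[t]       # w(t)^T x^t - y_t
    w = w - c * err * X[t]      # w_i(t+1) = w_i(t) - c (w(t)^T x^t - y_t) x_i^t
```

After one pass over the data the learned weights approach the generating weights, since the error at each step is driven toward zero.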
Learning with RBF neural networks:

$y_{out} = F(x, W) = \sum_{k=1}^{M} w_k^2 \, e^{-\frac{\|x - w^{1,k}\|^2}{2(a_k)^2}}$

Data: $\{(x^1, y_1), (x^2, y_2), \ldots, (x^N, y_N)\}$

Error: $E(t) = \left( \sum_{k=1}^{M} w_k^2 \, e^{-\frac{\|x^t - w^{1,k}\|^2}{2(a_k)^2}} - y_t \right)^2$

Learning (adapting the output weights): $w_i^2(t+1) = w_i^2(t) - c \, \dfrac{\partial E(t)}{\partial w_i^2}$

$\dfrac{\partial E(t)}{\partial w_i^2} = 2 \left( F(x^t, W(t)) - y_t \right) e^{-\frac{\|x^t - w^{1,i}\|^2}{2(a_i)^2}}$
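The output-weight learning rule for the RBF network can be sketched as below; the fixed centres, widths, and the data-generating weights are illustrative assumptions so that convergence can be verified:

```python
import numpy as np

def rbf_features(x, centers, widths):
    """Gaussian basis activations exp(-||x - w^{1,k}||^2 / (2 a_k^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

rng = np.random.default_rng(1)
centers = np.array([[-1.0], [0.0], [1.0]])   # fixed first-layer centres w^{1,k}
widths = np.array([0.5, 0.5, 0.5])           # fixed widths a_k

# Hypothetical training data generated by a known RBF combination
w_true = np.array([1.0, -2.0, 0.5])
X = rng.uniform(-1.0, 1.0, size=(300, 1))
y = np.array([w_true @ rbf_features(x, centers, widths) for x in X])

w2 = np.zeros(3)   # output weights to be learned
c = 0.1            # learning rate
for epoch in range(50):
    for t in range(len(X)):
        phi = rbf_features(X[t], centers, widths)
        err = w2 @ phi - y[t]            # F(x^t, W(t)) - y_t
        w2 = w2 - c * 2.0 * err * phi    # gradient step on w_i^2
```

Only the output weights are adapted here; the centres and widths are kept fixed, matching the derivative written above.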
Learning with MLP neural networks. The network computes:

$y_k^1 = \dfrac{1}{1 + e^{w^{1,kT} x - a_k^1}}, \quad k = 1, \ldots, M_1, \qquad y^1 = (y_1^1, \ldots, y_{M_1}^1)^T$

$y_k^2 = \dfrac{1}{1 + e^{w^{2,kT} y^1 - a_k^2}}, \quad k = 1, \ldots, M_2, \qquad y^2 = (y_1^2, \ldots, y_{M_2}^2)^T$

...

$y_{out} = F(x; W) = w^{pT} y^{p-1}$

with layers $1, 2, \ldots, p-1, p$.

Data: $\{(x^1, y_1), (x^2, y_2), \ldots, (x^N, y_N)\}$

Error: $E(t) = (y(t)_{out} - y_t)^2 = (F(x^t; W) - y_t)^2$

It is very complicated to calculate the weight changes by hand for a general multi-layer network.
For a network with a single hidden layer of sigmoidal neurons and a linear output,

$y_{out} = F(x; W) = \sum_{k=1}^{M} w_k^2 \, \dfrac{1}{1 + e^{w^{1,kT} x - a_k}}$

the weight changes can be calculated explicitly. For the output weights:

$\dfrac{\partial E(t)}{\partial w_i^2} = 2 \left( F(x^t, W(t)) - y_t \right) \dfrac{1}{1 + e^{w^{1,iT} x^t - a_i}}$

For the hidden-layer weights, applying the chain rule:

$\dfrac{\partial E(t)}{\partial w_j^{1,i}} = 2 \left( F(x^t, W(t)) - y_t \right) w_i^2 \, \dfrac{e^{w^{1,iT} x^t - a_i}}{\left( 1 + e^{w^{1,iT} x^t - a_i} \right)^2} \, (-x_j^t)$

giving the update rule:

$w_j^{1,i}(t+1) = w_j^{1,i}(t) - c \cdot 2 \left( F(x^t, W(t)) - y_t \right) w_i^2 \, \dfrac{e^{w^{1,iT} x^t - a_i}}{\left( 1 + e^{w^{1,iT} x^t - a_i} \right)^2} \, (-x_j^t)$
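The two gradient formulas above can be sketched as a single training step; a minimal sketch assuming the slides' sigmoid convention $1/(1 + e^{w^T x - a})$, fixed thresholds $a_i$, and an illustrative toy training pair:

```python
import numpy as np

def sig(s):
    """Sigmoid in the slides' convention: 1 / (1 + e^s)."""
    return 1.0 / (1.0 + np.exp(s))

def forward(x, W1, a1, w2):
    """F(x; W) = sum_i w2_i / (1 + exp(w^{1,i}.T x - a_i))."""
    h = sig(W1 @ x - a1)          # hidden activations y^1
    return w2 @ h, h

def backprop_step(x, y_t, W1, a1, w2, c=0.1):
    """One gradient step on E(t) = (F(x; W) - y_t)^2."""
    F, h = forward(x, W1, a1, w2)
    err2 = 2.0 * (F - y_t)                        # 2 (F(x^t, W(t)) - y_t)
    grad_w2 = err2 * h                            # dE/dw^2_i
    s = W1 @ x - a1
    dh_ds = -np.exp(s) / (1.0 + np.exp(s)) ** 2   # derivative of sig wrt its argument
    grad_W1 = np.outer(err2 * w2 * dh_ds, x)      # dE/dw^{1,i}_j via chain rule
    return W1 - c * grad_W1, w2 - c * grad_w2

# Toy example: repeatedly fit a single training pair
rng = np.random.default_rng(2)
W1 = rng.normal(scale=0.1, size=(3, 2))
a1 = np.zeros(3)
w2 = rng.normal(scale=0.1, size=3)
x, y_t = np.array([1.0, -1.0]), 0.5
for _ in range(500):
    W1, w2 = backprop_step(x, y_t, W1, a1, w2)
```

The `(-x_j)` factor from the derivation appears here as the negative sign inside `dh_ds` combined with the outer product against `x`.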
Summary
Artificial neural networks are inspired by the learning
processes that take place in biological systems.
Artificial neurons and neural networks try to imitate the
working mechanisms of their biological counterparts.
Learning can be perceived as an optimisation process.
Biological neural learning happens by the modification
of the synaptic strength. Artificial neural networks learn
in the same way.
The synapse strength modification rules for artificial
neural networks can be derived by applying
mathematical optimisation methods.
Learning tasks of artificial neural networks can be
reformulated as function approximation tasks.
Neural networks can be considered as nonlinear function
approximating tools (i.e., linear combinations of nonlinear
basis functions), where the parameters of the networks
should be found by applying optimisation methods.
The optimisation is done with respect to the approximation
error measure.
In general, a neural network with a single hidden layer
(MLP, RBF, or other) is sufficient to learn the approximation of
a nonlinear function. In such cases, general optimisation methods
can be applied to derive the update rules for the synaptic weights.