
File: /home/akira/Documents/NN


Experiment in Neural Network


2015.05.06 AKIRA KOBASHI
Consider
v: the weight vector of the neuron,
y: a training example,
a: its class label (0 or 1).
Next, the learning algorithm of the perceptron is:
v := v+y (when a = 0 and v*y < 0)
v := v-y (when a = 1 and v*y >= 0)
Third, assume that the initial weight of the neuron is v = (0,0,0);
v=vector([0,0,0]).
And the training datasets y are as follows:

y[0] = (1,0,0) in Y0
y[1] = (1,0,1) in Y0
y[2] = (1,1,0) in Y1
y[3] = (1,1,1) in Y1

where each vector is of the form (1, x-coordinate, y-coordinate).


Simply put, the set Y0 lies on the y-axis and the set Y1 lies on the line x = 1; indeed,
Y0: (0,0), (0,1) on x = 0 and Y1: (1,0), (1,1) on x = 1.
Define the training dataset y as:
y=[vector([1,0,0]),vector([1,0,1]),vector([1,1,0]),vector([1,1,1])]
Thus, we can simply implement the code to update the weight of the neuron v as:

def learn(v,y,a):
    if v*y < 0 and a == 0:
        v = v+y
    if v*y >= 0 and a == 1:
        v = v-y
    return v
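As a cross-check, here is a plain-Python sketch of the same update rule, without Sage's vector type (the `dot` helper and the list-based vectors are assumptions introduced here, not part of the original code):

```python
def dot(u, w):
    # Inner product of two equal-length vectors (stands in for Sage's v*y).
    return sum(a * b for a, b in zip(u, w))

def learn(v, y, a):
    # Perceptron update: pull v toward y on a class-0 mistake,
    # push v away from y on a class-1 mistake.
    if dot(v, y) < 0 and a == 0:
        v = [vi + yi for vi, yi in zip(v, y)]
    if dot(v, y) >= 0 and a == 1:
        v = [vi - yi for vi, yi in zip(v, y)]
    return v
```

For instance, starting from v = (0,0,0), the example y[2] = (1,1,0) with label 1 has v*y = 0 >= 0, so the rule subtracts y[2] and yields (-1,-1,0).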

Finally, this neuron learns the training datasets y repeatedly:


v = learn(v, y[0], 0)
v = learn(v, y[1], 0)
v = learn(v, y[2], 1)
v = learn(v, y[3], 1)
print v

Then we obtain:

1st iteration: (-1, -1, -1)
2nd iteration: (0, -2, -1)
3rd iteration: (1, -2, 0)
4th iteration: (1, -2, 0)

So far, the resulting decision boundary is 1 - 2x + 0y = 0, that is, x = 1/2. Briefly discussing this result, the line x = 1/2 separates the
training datasets into Y0 and Y1 precisely.
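This conclusion can be confirmed with a small plain-Python check (again assuming a list-based `dot` helper rather than Sage vectors): the final weight v = (1, -2, 0) puts every Y0 point on the non-negative side and every Y1 point on the negative side.

```python
def dot(u, w):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, w))

v = [1, -2, 0]                 # final weight from the run above
Y0 = [[1, 0, 0], [1, 0, 1]]    # class-0 points (x = 0)
Y1 = [[1, 1, 0], [1, 1, 1]]    # class-1 points (x = 1)

print(all(dot(v, y) >= 0 for y in Y0))  # prints True
print(all(dot(v, y) < 0 for y in Y1))   # prints True
```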
