7 Supervised Hebbian Learning
Exercises
E7.1 Consider the prototype patterns given to the left.
E7.3 Use the Hebb rule to determine the weight matrix for a perceptron network
(shown in Figure E7.1) to recognize the patterns shown to the left.
[The prototype patterns p1 and p2 are given as images in the original text.]

[Figure E7.1: single-neuron symmetric hard limit network. The input p is 6x1, the weight matrix W is 1x6, and the output is a = hardlims(Wp).]
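As a rough illustration of the computation E7.3 asks for, the sketch below applies the chapter's Hebb rule, W = T*P', to two stand-in 6-element bipolar prototypes (the actual patterns appear only as images in the original, so the vectors used here are hypothetical):

    % Hebb rule for a hardlims perceptron: W = T*P'.
    % The prototypes are hypothetical stand-ins, chosen orthogonal.
    P = [ 1  1  1  1  1  1;     % p1 (bipolar)
          1  1  1 -1 -1 -1]';   % p2, orthogonal to p1
    T = [1 -1];                 % target for each prototype

    W = T * P';                 % 1x6 Hebb weight matrix

    a = sign(W * P);            % hardlims, taking sign(0) as +1
    a(a == 0) = 1;
    disp(a)                     % recovers T because p1 and p2 are orthogonal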
E7.4 In Problem P7.7 we demonstrated how networks can be trained using the Hebb rule when the prototype vectors are given in binary (as opposed to bipolar) form. Repeat Exercise E7.1 using the binary representation for the prototype vectors. Show that the response of this binary network is equivalent to the response of the original bipolar network.
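One quick algebraic check of the equivalence E7.4 asks for (my own sketch, not the procedure worked out in P7.7): a binary vector pb and its bipolar counterpart p satisfy pb = (p + 1)/2, so a binary network with doubled weights and a compensating bias produces the same net input as the bipolar network.

    % Relation between bipolar and binary representations: p = 2*pb - 1.
    p  = [1; -1; -1; 1; 1; -1];     % hypothetical bipolar prototype
    pb = (p + 1) / 2;               % its binary counterpart

    % If W is the bipolar Hebb matrix, then W2 = 2*W with bias
    % b2 = -W*ones(6,1) gives W2*pb + b2 = W*(2*pb - 1) = W*p.
    W  = randn(1, 6);               % any 1x6 weight matrix
    W2 = 2 * W;
    b2 = -W * ones(6, 1);
    assert(abs((W2*pb + b2) - W*p) < 1e-12)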
E7.5 Show that an autoassociator network trained with the Hebb rule will continue to perform correctly if the diagonal elements of the weight matrix are zeroed, i.e., if the weight matrix is taken to be

W = P P^T - Q I,

where Q is the number of prototype vectors. (Hint: show that the prototype vectors continue to be eigenvectors of the new weight matrix.)
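A sketch of the argument the hint points to, assuming the Q prototypes are orthogonal bipolar vectors of length R (so the diagonal of P P^T is Q and P P^T p_q = R p_q):

    \begin{aligned}
    W' \mathbf{p}_q &= (P P^{T} - Q I)\,\mathbf{p}_q
                     = P P^{T}\mathbf{p}_q - Q\,\mathbf{p}_q \\
                    &= R\,\mathbf{p}_q - Q\,\mathbf{p}_q
                     = (R - Q)\,\mathbf{p}_q
    \end{aligned}

Each prototype therefore remains an eigenvector, now with eigenvalue R - Q, which is still positive whenever R > Q, so hardlims(W' p_q) = p_q as before.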
E7.6 We have the following input/target pairs:

p1 = [1; 0], t1 = 1;  p2 = [1; 1], t2 = -1;  p3 = [0; 1], t3 = 1.
i. Show that this problem cannot be solved unless the network uses a
bias.
ii. Use the pseudoinverse rule to design a network for these prototype
vectors. Verify that the network correctly transforms the prototype
vectors.
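A minimal MATLAB sketch for part ii (one common way to handle the bias, by appending a constant input of 1; the book may set this up differently):

    % Pseudoinverse rule with a bias: augment each input with a 1
    % so the bias becomes one more weight.
    P  = [1 1 0;
          0 1 1];             % prototype vectors as columns
    T  = [1 -1 1];            % targets
    Pa = [P; ones(1, 3)];     % augmented inputs [p; 1]

    Wa = T * pinv(Pa);        % pseudoinverse rule: Wa = T * Pa^+
    W  = Wa(1:2);             % weights
    b  = Wa(3);               % bias

    a = sign(W * P + b);      % hardlims, taking sign(0) as +1
    a(a == 0) = 1;
    disp(a)                   % should reproduce the targets T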
E7.7 Consider the reference patterns and targets given below. We want to use
these data to train a linear associator network.
p1 = [2; 4], t1 = 26;  p2 = [4; 2], t2 = 26;  p3 = [-2; -2], t3 = -26.
iv. Find and sketch the decision boundary for the network with the pseudoinverse rule weights.
v. Compare (discuss) the decision boundaries and weights for each of the methods (Hebb and pseudoinverse).
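For parts iv and v, a hedged MATLAB sketch applying both rules to the patterns above (the standard forms W = T*P' for the Hebb rule and W = T*pinv(P) for the pseudoinverse rule; plotting is left to the reader):

    P = [ 2  4 -2;
          4  2 -2];            % reference patterns as columns
    T = [26 26 -26];           % targets

    Whebb = T * P';            % Hebb rule
    Wpinv = T * pinv(P);       % pseudoinverse rule

    disp(Whebb * P)            % responses under each rule,
    disp(Wpinv * P)            % to compare against the targets T

Since each weight matrix defines its decision boundary by W*p = 0, the two rules can be compared through the directions of their weight vectors.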
iv. Find the eigenvalues and eigenvectors of the weight matrix. (Do not solve the characteristic equation |W - λI| = 0. Use an analysis of the Hebb rule.)
[The prototype patterns p1, p2 and p3 for this exercise are given as images in the original text.]
E7.9 Suppose that we have the following three reference patterns and their targets:

p1 = [3; 6], t1 = 75;  p2 = [6; 3], t2 = 75;  p3 = [-6; 3], t3 = -75.
E7.10 Suppose that we have the following two reference patterns and their targets:

p1 = [1; 1], t1 = 1;  p2 = [1; -1], t2 = -1.
i. Use the Hebb rule to determine the weight matrix for the perceptron network shown in Figure E7.3.
ii. Plot the resulting decision boundary. Is this a “good” decision boundary? Explain.
iii. Repeat part i using the pseudoinverse rule.
iv. Will there be any difference in the operation of the network if the pseudoinverse weight matrix is used? Explain.
[Figure E7.3: single-neuron symmetric hard limit network. The input p is 2x1, the weight matrix W is 1x2, and the output is a = hardlims(Wp).]
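A brief MATLAB check for parts i and iii above (my own sketch, not the book's solution):

    P = [1  1;
         1 -1];             % p1, p2 as columns
    T = [1 -1];             % targets

    Whebb = T * P';         % Hebb rule:          [0 2]
    Wpinv = T * pinv(P);    % pseudoinverse rule: [0 1], since P is invertible

    % Both matrices satisfy hardlims(W*p) = t on the prototypes; they
    % differ only in scale, so the boundary W*p = 0 is the same line.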
10 Widrow-Hoff Learning
Exercises
E10.1 An adaptive filter ADALINE is shown in Figure E10.1. Suppose that the weights of the network are given by

w1,1 = 1, w1,2 = -4, w1,3 = 2,

and the input to the filter is

y(k) = {0, 0, 0, 1, 1, 2, 0, 0}.

Find the filter output.
[Figure E10.1: ADALINE adaptive filter with a tapped delay line. The inputs y(k), y(k-1) and y(k-2) are weighted by w1,1, w1,2 and w1,3 and summed to give a(k).]
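A small MATLAB sketch of the computation E10.1 asks for (assuming the input is zero outside the values listed):

    w = [1 -4 2];                  % w1,1  w1,2  w1,3
    y = [0 0 0 1 1 2 0 0];         % input sequence

    a = zeros(size(y));
    for k = 1:numel(y)
        yk1 = 0; yk2 = 0;          % delayed inputs y(k-1), y(k-2)
        if k > 1, yk1 = y(k-1); end
        if k > 2, yk2 = y(k-2); end
        a(k) = w(1)*y(k) + w(2)*yk1 + w(3)*yk2;   % tapped delay line
    end
    disp(a)    % filter output; equivalently conv(w, y) truncated to length(y)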
E10.3 Suppose that we have the following two reference patterns and their targets:

p1 = [1; 1], t1 = 1;  p2 = [1; -1], t2 = -1.
E10.4 In this exercise we will modify the reference pattern p2 from Problem P10.3:

p1 = [1; 1], t1 = 1;  p2 = [-1; -1], t2 = -1.
i. Assume that the patterns occur with equal probability. Find the
mean square error and sketch the contour plot.
ii. Find the maximum stable learning rate.
iii. Write a MATLAB M-file to implement the LMS algorithm for this problem. Take 40 steps of the algorithm for a stable learning rate. Use the zero vector as the initial guess. Sketch the trajectory on the contour plot.
iv. Take 40 steps of the algorithm after setting the initial values of both
parameters to 1. Sketch the final decision boundary.
v. Compare the final parameters from parts (iii) and (iv). Explain your
results.
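As a starting point for part iii, a hedged sketch of an LMS loop (using the chapter's update W(k+1) = W(k) + 2αe(k)p^T(k) and simply alternating the two patterns; the learning rate is my choice, below the stability bound from part ii):

    P = [ 1 -1;
          1 -1];           % p1, p2 as columns (E10.4 data)
    T = [1 -1];            % targets
    alpha = 0.04;          % stable: must satisfy alpha < 1/lambda_max

    W = [0 0];             % initial guess: zero vector
    traj = zeros(41, 2);   % weight trajectory for the contour plot

    for k = 1:40
        i = mod(k-1, 2) + 1;                % alternate p1, p2
        e = T(i) - W * P(:, i);             % error for this presentation
        W = W + 2 * alpha * e * P(:, i)';   % LMS update
        traj(k+1, :) = W;
    end
    plot(traj(:,1), traj(:,2), 'o-')        % overlay on the contour plot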
E10.5 We again use the reference patterns and targets from Problem P10.3, and
assume that they occur with equal probability. This time we want to train
an ADALINE network with a bias. We now have three parameters to find:
w1,1, w1,2 and b.
i. Find the mean square error and the maximum stable learning rate.
ii. Write a MATLAB M-file to implement the LMS algorithm for this problem. Take 40 steps of the algorithm for a stable learning rate. Use the zero vector as the initial guess. Sketch the final decision boundary.
iii. Take 40 steps of the algorithm after setting the initial values of all three parameters to 1. Sketch the final decision boundary.
iv. Compare the final parameters and the decision boundaries from parts (ii) and (iii). Explain your results.
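For part i, the maximum stable learning rate can be checked numerically. A sketch, using the chapter's condition α < 1/λmax, where λmax is the largest eigenvalue of the input correlation matrix R = E[p p^T], and taking P10.3's patterns to be p1 = [1; 1], p2 = [1; -1] as in E10.3 above (with the bias handled as a constant input of 1):

    P  = [1  1;
          1 -1];                 % patterns, equal probability
    Pa = [P; 1 1];               % append the constant bias input
    R  = 0.5*(Pa(:,1)*Pa(:,1)') + 0.5*(Pa(:,2)*Pa(:,2)');   % E[p p']
    alpha_max = 1 / max(eig(R))  % LMS is stable for 0 < alpha < alpha_max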
E10.6 Category I consists of the vectors

[1; 1] and [-1; 2],

and Category II consists of the vectors

[0; -1] and [-4; 1].
E10.7 Suppose that we have the following three reference patterns and their targets:

p1 = [3; 6], t1 = 75;  p2 = [6; 3], t2 = 75;  p3 = [-6; 3], t3 = -75.
iv. Sketch the trajectory of the LMS algorithm on your contour plot. Assume a very small learning rate, and start with all weights equal to zero. This does not require any calculations.
E10.8 Suppose that we have the following two reference patterns and their targets:

p1 = [1; 2], t1 = -1;  p2 = [-2; 1], t2 = 1.
E10.9 Suppose that we have the following three reference patterns and their targets:

p1 = [4; 2], t1 = 5;  p2 = [2; -4], t2 = -2;  p3 = [-4; 4], t3 = 9.

The first two pairs each occur with probability 0.25, and the third pair occurs with probability 0.5. We want to train a single-neuron ADALINE network without a bias to perform the desired mapping.
i. Draw the network diagram.
ii. What is the maximum stable learning rate?
iii. Perform one iteration of the LMS algorithm. Apply the input p1 and use a learning rate of α = 0.1. Start from the initial weights x0 = [0 0]^T.
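For part iii the single iteration works out as follows (a sketch using the chapter's update W(k+1) = W(k) + 2αe(k)p^T(k) with the data as reconstructed above):

    a(0) = W(0) p1 = [0 0][4; 2] = 0
    e(0) = t1 - a(0) = 5 - 0 = 5
    W(1) = W(0) + 2(0.1)(5)[4 2] = [4 2]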
E10.10 Suppose that we have the following three reference patterns and their targets:

p1 = [2; -4], t1 = 1;  p2 = [-4; 4], t2 = -1;  p3 = [4; 2], t3 = 1.

The first two pairs each occur with probability 0.25, and the third pair occurs with probability 0.5. We want to train a single-neuron ADALINE network without a bias to perform the desired mapping.
E10.11 Suppose that we have the following four reference patterns and their targets:

p1 = [-1; 2], t1 = -1;  p2 = [2; -1], t2 = -1;  p3 = [0; -1], t3 = 1;  p4 = [-1; 0], t4 = 1.
E10.12 Suppose that we have the following three reference patterns and their targets:

p1 = [2; 4], t1 = 26;  p2 = [4; 2], t2 = 26;  p3 = [-2; -2], t3 = -26.
E10.13 Consider the adaptive predictor shown below. Assume that y(k) is a stationary random process with autocorrelation function

Cy(n) = E[ y(k) y(k+n) ].

[Figure: ADALINE adaptive predictor. The delayed inputs y(k-1) and y(k-2) are weighted by w1,1 and w1,2 and summed to give a(k); the target is t(k) = y(k), and the error is e(k) = t(k) - a(k).]
i. Write an expression for the mean square error in terms of Cy(n).
ii. Give a specific expression for the mean square error when
y(k) = sin(kπ/5).
iii. Find the eigenvalues and eigenvectors of the Hessian matrix for the
mean square error. Locate the minimum point and sketch a rough
contour plot.
iv. Find the maximum stable learning rate for the LMS algorithm.
v. Take three steps of the LMS algorithm by hand, using a stable
learning rate. Use the zero vector as the initial guess.
vi. Write a MATLAB M-file to implement the LMS algorithm for this problem. Take 40 steps of the algorithm for a stable learning rate and sketch the trajectory on the contour plot. Use the zero vector as the initial guess. Verify that the algorithm is converging to the optimal point.
vii. Verify experimentally that the algorithm is unstable for learning
rates greater than that found in part (iv).
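A hedged sketch for part vi (it assumes the reconstructed signal y(k) = sin(kπ/5) above and the two-delay predictor from the figure; the learning rate is a placeholder to be kept below the part iv bound):

    alpha = 0.02;                      % placeholder; keep below the part iv bound
    W = [0 0];                         % initial weights [w11 w12]
    y = sin((1:50) * pi / 5);          % input signal

    traj = zeros(41, 2);               % row 1 is the initial guess
    for k = 3:42                       % 40 steps; needs y(k-1) and y(k-2)
        p = [y(k-1); y(k-2)];          % delayed inputs to the predictor
        e = y(k) - W * p;              % prediction error, target t(k) = y(k)
        W = W + 2 * alpha * e * p';    % LMS update
        traj(k-1, :) = W;
    end
    plot(traj(:,1), traj(:,2), 'o-')   % overlay on the contour plot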
E10.14 Repeat Problem P10.9, but use the numerals “1”, “2” and “4”, instead of the
letters “T”, “G” and “F”. Test the trained network on each reference pattern
and on noisy patterns. Discuss the sensitivity of the network. (Use the Neu-
ral Network Design Demonstration Linear Pattern Classification (nnd10lc).)