
Quiz Question Bank

1. Obtain the output of the neuron Y for the network shown below using the following
activation functions:
(a) binary sigmoidal
(b) bipolar sigmoidal
[Figure: a single neuron Y with three inputs x1 = 0.8, x2 = 0.6 and x3 = 0.4,
connected through weights 0.1, 0.3 and -0.2 respectively, plus a constant input
1.0 with bias weight 0.35.]
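Reading the figure as inputs [0.8, 0.6, 0.4] with weights [0.1, 0.3, -0.2] and a bias weight of 0.35 on the constant input 1.0 (an assumed pairing, since the original diagram did not survive extraction), the computation can be sketched as:

```python
import math

# Assumed reading of the network figure: inputs, their weights, and a bias.
x = [0.8, 0.6, 0.4]          # inputs x1, x2, x3
w = [0.1, 0.3, -0.2]         # corresponding weights
b = 0.35                     # bias weight on the constant input 1.0

# Net input to neuron Y: 0.35 + 0.08 + 0.18 - 0.08 = 0.53
net = b + sum(xi * wi for xi, wi in zip(x, w))

# (a) binary sigmoidal: f(net) = 1 / (1 + e^-net), output in (0, 1)
y_binary = 1.0 / (1.0 + math.exp(-net))

# (b) bipolar sigmoidal: f(net) = 2 / (1 + e^-net) - 1, output in (-1, 1)
y_bipolar = 2.0 / (1.0 + math.exp(-net)) - 1.0

print(round(net, 4), round(y_binary, 4), round(y_bipolar, 4))
```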

2. Suppose a genetic algorithm uses chromosomes of the form x = abcdefgh with a fixed
length of eight genes. Each gene can be any digit between 0 and 9. Let the fitness of
individual x be calculated as:
f(x) = (a + b) - (c + d) + (e + f) - (g + h)
And let the initial population consist of four individuals with the following
chromosomes:
x1 = 6 5 4 1 3 5 3 2
x2 = 8 7 1 2 6 6 0 1
x3 = 2 3 9 2 1 2 8 5
x4 = 4 1 8 5 2 0 9 4
a) Evaluate the fitness of each individual, showing all your workings, and arrange
them in order with the fittest first and the least fit last.
b) Perform the following crossover operations:
i) Cross the fittest two individuals using one-point crossover at the middle point.
ii) Cross the second and third fittest individuals using a two-point crossover (at
points b and f).
iii) Cross the first and third fittest individuals (ranked 1st and 3rd) using a uniform
crossover.
c) Suppose the new population consists of the six offspring produced by the
crossover operations in the above question. Evaluate the fitness of the new
population, showing all your workings. Has the overall fitness improved?
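Part (a) can be checked with a short script. It assumes the alternating-sign fitness form given above:

```python
# Fitness under the assumed form f(x) = (a + b) - (c + d) + (e + f) - (g + h).
def fitness(genes):
    a, b, c, d, e, f, g, h = genes
    return (a + b) - (c + d) + (e + f) - (g + h)

population = {
    "x1": [6, 5, 4, 1, 3, 5, 3, 2],
    "x2": [8, 7, 1, 2, 6, 6, 0, 1],
    "x3": [2, 3, 9, 2, 1, 2, 8, 5],
    "x4": [4, 1, 8, 5, 2, 0, 9, 4],
}

scores = {name: fitness(g) for name, g in population.items()}
ranking = sorted(scores, key=scores.get, reverse=True)  # fittest first
print(scores)    # {'x1': 9, 'x2': 23, 'x3': -16, 'x4': -19}
print(ranking)   # ['x2', 'x1', 'x3', 'x4']
```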

3. Find the new weights of the back-propagation network shown below, using the bipolar
sigmoidal activation function. The given input pattern is [-1 1], the desired output is 1,
and the learning rate is 0.25.

4. Construct an LVQ net to cluster five vectors assigned to two classes. The following
input vectors represent the two classes 1 and 2.
Vector       Class
(1 0 0 1)      1
(1 1 0 0)      2
(0 1 1 0)      1
(1 0 0 0)      2
(0 0 1 1)      1
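A minimal LVQ-1 sketch of the training loop. The problem statement gives neither a learning rate nor an initialization scheme, so this assumes alpha = 0.1 and uses the first vector of each class as the initial prototype, training on the remaining three vectors:

```python
# (vector, class) pairs from the question.
data = [
    ([1, 0, 0, 1], 1),
    ([1, 1, 0, 0], 2),
    ([0, 1, 1, 0], 1),
    ([1, 0, 0, 0], 2),
    ([0, 0, 1, 1], 1),
]

def sq_dist(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

# Assumed initialization: first vector of class 1 and first vector of class 2.
protos = {1: list(map(float, data[0][0])), 2: list(map(float, data[1][0]))}
alpha = 0.1  # assumed learning rate

for x, cls in data[2:]:                       # train on the remaining vectors
    winner = min(protos, key=lambda c: sq_dist(x, protos[c]))
    sign = 1.0 if winner == cls else -1.0     # attract if correct, repel if not
    protos[winner] = [w + sign * alpha * (xi - w)
                      for w, xi in zip(protos[winner], x)]

print(protos)
```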
5. Consider a Kohonen self-organizing net with two cluster units and five input units.
The weight vectors for the cluster units are:
w1 = [1.0 0.9 0.7 0.3 0.2]
w2 = [0.6 0.7 0.5 0.4 1.0]
Use the square of the Euclidean distance to find the winning cluster unit for the input
pattern x = [0.0 0.2 0.1 0.2 0.0]. Find the new weights for the winning unit when
the learning rate = 0.2.
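The winner and its updated weights follow directly from the squared-distance and Kohonen update formulas; a quick check:

```python
# Cluster-unit weight vectors and input pattern from the question.
w = {
    1: [1.0, 0.9, 0.7, 0.3, 0.2],
    2: [0.6, 0.7, 0.5, 0.4, 1.0],
}
x = [0.0, 0.2, 0.1, 0.2, 0.0]
lr = 0.2

# Squared Euclidean distance to each cluster unit.
d = {j: sum((wj - xi) ** 2 for wj, xi in zip(w[j], x)) for j in w}
print(d)  # d[1] ~ 1.90, d[2] ~ 1.81 -> unit 2 wins

# Kohonen update for the winner: w <- w + lr * (x - w)
winner = min(d, key=d.get)
w[winner] = [wj + lr * (xi - wj) for wj, xi in zip(w[winner], x)]
print(winner, [round(v, 2) for v in w[winner]])
# 2 [0.48, 0.6, 0.42, 0.36, 0.8]
```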
6. A single-neuron network using f(net) = sgn(net) has been trained using the following
(x, d) pairs:
x1 = [1 -2 3 -1], d1 = -1
x2 = [0 -1 2 -1], d2 = 1
x3 = [-2 0 -9 -1], d3 = -1
The final weights obtained by using the perceptron rule are
w4 = [3 2 6 1]
Knowing that a correction of the weights is made in each iteration, determine
(a) w3, w2, w1 by backtracking the training
(b) w5, w6, w7 obtained for steps 4, 5 and 6 of training by reusing the sequence (x1,
d1), (x2, d2) and (x3, d3)

7. Given the objective function f(x) = x1^3 + 2 x1^2 x2 + 4 x1 x2^2 + 9 x2 + 3 and a
learning rate of 0.1, with initial values x1 = 0.5 and x2 = 0.5, find the descent after
three iterations using the steepest descent method.
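Differentiating the objective gives df/dx1 = 3 x1^2 + 4 x1 x2 + 4 x2^2 and df/dx2 = 2 x1^2 + 8 x1 x2 + 9, so the three descent steps can be sketched as:

```python
# Steepest descent on f(x1, x2) = x1^3 + 2 x1^2 x2 + 4 x1 x2^2 + 9 x2 + 3.
def grad(x1, x2):
    return (3 * x1**2 + 4 * x1 * x2 + 4 * x2**2,   # df/dx1
            2 * x1**2 + 8 * x1 * x2 + 9)           # df/dx2

x1, x2, lr = 0.5, 0.5, 0.1
for i in range(3):
    g1, g2 = grad(x1, x2)
    x1, x2 = x1 - lr * g1, x2 - lr * g2   # step against the gradient
    print(i + 1, round(x1, 4), round(x2, 4))
# After iteration 1: x = (0.225, -0.65)
```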
8. Maximize f(x) = x2 where x varies from 0 to 31, using genetic algorithm.
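Since x runs from 0 to 31, a 5-bit binary chromosome encodes it exactly. A minimal GA sketch with assumed parameters (population size 6, mutation rate 0.1, roulette-wheel selection, one-point crossover, elitism):

```python
import random

random.seed(0)  # deterministic run for this sketch

def decode(bits):
    return int("".join(map(str, bits)), 2)

def fitness(bits):
    return decode(bits) ** 2   # f(x) = x^2

pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(6)]

for gen in range(20):
    pop.sort(key=fitness, reverse=True)
    new_pop = [pop[0][:]]                      # elitism: keep the current best
    while len(new_pop) < len(pop):
        # roulette-wheel selection of two parents (weight 1 for fitness 0)
        p1, p2 = random.choices(pop, weights=[fitness(c) or 1 for c in pop], k=2)
        cut = random.randint(1, 4)             # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:              # bit-flip mutation
            i = random.randrange(5)
            child[i] ^= 1
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print(decode(best), fitness(best))
```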

9. Four steps of Hebbian learning of a single-neuron network are implemented starting
with w1 = [1 -1] at learning rate c = 1, using the inputs given below:
x1 = [1 -2]
x2 = [0 1]
x3 = [2 3]
x4 = [1 -1]
Find the final weights for bipolar continuous f(net), with λ = 1.
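The four steps can be sketched as below, using the Hebb rule for a continuous neuron, w <- w + c * f(net) * x, with the bipolar continuous activation and λ = 1:

```python
import math

# Bipolar continuous activation: f(net) = 2 / (1 + e^(-lambda*net)) - 1.
def f(net, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

w = [1.0, -1.0]
c = 1.0                                   # learning rate
inputs = [[1, -2], [0, 1], [2, 3], [1, -1]]

history = []
for x in inputs:
    net = sum(wi * xi for wi, xi in zip(w, x))
    o = f(net)
    w = [wi + c * o * xi for wi, xi in zip(w, x)]   # Hebbian update
    history.append([round(wi, 4) for wi in w])

print(history)  # weights after each of the four steps
```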
10. Train a McCulloch-Pitts network to behave as an XOR logic gate.
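A single McCulloch-Pitts neuron cannot realize XOR, so the usual construction is a two-layer net: z1 = x1 AND NOT x2, z2 = NOT x1 AND x2, y = z1 OR z2. Each unit fires when its weighted sum reaches its threshold:

```python
# One McCulloch-Pitts unit: output 1 iff the weighted sum meets the threshold.
def mcp(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x1, x2):
    z1 = mcp([x1, x2], [1, -1], 1)   # fires only for (1, 0)
    z2 = mcp([x1, x2], [-1, 1], 1)   # fires only for (0, 1)
    return mcp([z1, z2], [1, 1], 1)  # OR of the hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # reproduces the XOR truth table
```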
11. Perform two training steps of the network using the delta learning rule for λ = 1 and
learning rate c = 0.25. With initial weights w1 = [1 0 1], train the network with the
following (x, d) data pairs:
x1 = [2 0 -1], d1 = -1
x2 = [1 -2 -1], d2 = 1
[Hint: f(net) is bipolar continuous, so use f'(net) = (1/2)(1 - o^2).]
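The two steps can be sketched with the delta rule w <- w + c (d - o) f'(net) x, where o = f(net) is the bipolar continuous activation and f'(net) = (1/2)(1 - o^2) for λ = 1:

```python
import math

# Bipolar continuous activation, lambda = 1.
def f(net):
    return 2.0 / (1.0 + math.exp(-net)) - 1.0

w = [1.0, 0.0, 1.0]
c = 0.25
pairs = [([2, 0, -1], -1), ([1, -2, -1], 1)]

history = []
for x, d in pairs:
    net = sum(wi * xi for wi, xi in zip(w, x))
    o = f(net)
    fprime = 0.5 * (1 - o * o)                      # f'(net) = (1/2)(1 - o^2)
    w = [wi + c * (d - o) * fprime * xi for wi, xi in zip(w, x)]
    history.append([round(wi, 4) for wi in w])

print(history)  # weights after step 1 and step 2
```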
12. Classify the 2D patterns shown in the figure below using a perceptron network.
[Figure: two 2D patterns drawn as grids of '+' and '.' marks - pattern C with
target = +1 and pattern A with target = -1.]

13. Draw the architecture of ANFIS and explain its layers.


14. Compare derivative-based and derivative-free optimization.
15. Explain Newton's method with a flowchart and an example.
16. Explain the steepest descent method with a flowchart and an example.
