Above is a schematic representation of a brain neuron along with its basic components.
Artificial Neural Networks:
An Artificial Neural Network is a system loosely modeled on the human brain. It is an attempt to simulate, within specialized hardware or sophisticated software, multiple layers of simple processing elements called neurons. Each neuron is linked to certain of its neighbors with varying coefficients of connectivity that represent the strengths of these connections.
Neural networks are thus a form of multiprocessor computer system.
(figure 5)
The above image shows a basic neural network. The bottom layer is the input layer, in this case with 5 inputs. In the middle is the hidden layer, with a variable number of nodes; it is the hidden layer that performs much of the work of the network. The output layer in this case has two nodes, representing the output values we are trying to determine from the inputs.
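The layered computation just described can be sketched in a few lines of Python. The layer sizes (5 inputs, 3 hidden nodes, 2 outputs) follow the figure except for the hidden-layer size, which is my choice; the weight values are arbitrary placeholders, and the sigmoid activation is borrowed from the later discussion of active nodes:

```python
import math

def sigmoid(x):
    # Smooth "s"-shaped squashing function applied by each node.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each row of `weights` holds one node's connection weights;
    # the node's output is the sigmoid of the weighted sum of its inputs.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, w_hidden, w_output):
    # Input layer -> hidden layer -> output layer; signals flow one way only.
    hidden = layer(inputs, w_hidden)
    return layer(hidden, w_output)

# 5 inputs, 3 hidden nodes, 2 output nodes (weights chosen arbitrarily).
w_hidden = [[0.2, -0.1, 0.4, 0.0, 0.3],
            [-0.3, 0.5, 0.1, 0.2, -0.2],
            [0.1, 0.1, -0.4, 0.3, 0.2]]
w_output = [[0.5, -0.6, 0.2],
            [-0.1, 0.4, 0.3]]
outputs = forward([1.0, 0.0, 1.0, 0.5, 0.0], w_hidden, w_output)
```

Each sigmoid output lies strictly between 0 and 1, so the two output values can be read as graded responses to the five inputs.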
Architecture of neural networks:
1) Feed-forward networks:
Feed-forward ANNs (figure 1) allow signals to travel one way only: from input to output. There is no feedback (loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.
2) Feedback networks:
Feedback networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. They are dynamic; their 'state' changes continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organizations. Figure 3 is an example of a simple feedback network.
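This settling behaviour can be pictured with a minimal sketch. The tanh units, the small feedback weights, and the constant external input are all my choices, not from the text; the point is only that the state is fed back through the connections until it stops changing:

```python
import math

def settle(state, weights, inputs, steps=100, tol=1e-6):
    # Repeatedly feed the current state back through the weighted
    # connections until it stops changing, i.e. the network reaches
    # an equilibrium point.
    for _ in range(steps):
        new_state = [math.tanh(sum(w * s for w, s in zip(row, state)) + i)
                     for row, i in zip(weights, inputs)]
        if max(abs(n - s) for n, s in zip(new_state, state)) < tol:
            return new_state
        state = new_state
    return state

# Two units with small mutual feedback weights and a constant input;
# small weights guarantee the iteration settles to a fixed point.
W = [[0.0, 0.3], [0.3, 0.0]]
I = [0.5, -0.2]
eq = settle([0.0, 0.0], W, I)
```

At the equilibrium point, applying the update once more leaves the state essentially unchanged, which is exactly the "remains at the equilibrium point until the input changes" behaviour described above.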
3) Network layers:
The most common type of artificial neural network (figure 3) consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units. The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units. The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
We also distinguish single-layer and multi-layer architectures. The single-layer organization, in which all units are connected to one another, constitutes the most general case and has more potential computational power than hierarchically structured multi-layer organizations. In multi-layer networks, units are often numbered by layer instead of following a global numbering.
4) Perceptrons:
The most influential work on neural nets in the 60's went under the heading of 'perceptrons'. The perceptron turns out to be an MCP model (a neuron with weighted inputs) with some additional, fixed, pre-processing. Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended much further. It was shown mathematically that single-layer perceptrons cannot perform some basic pattern recognition operations, such as determining the parity of a shape or determining whether a shape is connected. It was later discovered that, with appropriate training, multilevel perceptrons can perform these operations.
The simple perceptron:
The network adapts as follows: change each weight by an amount proportional to the difference between the desired output and the actual output. As an equation:

ΔWi = η · (D - Y) · Ii

where η is the learning rate, D is the desired output, Y is the actual output, and Ii is the i-th input value. This is called the basic perceptron learning rule.
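The rule above can be sketched as a small training loop. It assumes binary (0/1) outputs and a threshold at zero, and it adds a bias weight; the bias and those coding choices are my additions, not part of the rule as stated:

```python
def train_perceptron(samples, eta=0.1, epochs=20):
    # samples: list of (inputs, desired_output) pairs with outputs 0 or 1.
    n = len(samples[0][0])
    w = [0.0] * n  # connection weights
    b = 0.0        # bias weight (an addition for this sketch)
    for _ in range(epochs):
        for inputs, d in samples:
            # Actual output Y: threshold the weighted sum at zero.
            y = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            # Basic perceptron learning rule: dW_i = eta * (D - Y) * I_i
            for i in range(n):
                w[i] += eta * (d - y) * inputs[i]
            b += eta * (d - y)
    return w, b

# Learn logical AND, which is linearly separable.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

On a linearly separable problem such as logical AND, repeated application of the rule drives the error to zero, which is the guarantee the perceptron convergence theorem provides.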
Active node:
In comparison, the nodes of the hidden and output layers are active. Below (figure 4) is a flow diagram of the active nodes used in the hidden and output layers of the neural network. Each input is multiplied by a weight (the Wn values) and then summed. This produces a single value that is passed through an "s"-shaped non-linear function called a sigmoid.
(figure 4)
(figure 5)
Associative mapping:
Here the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. Associative mapping can generally be broken down into two mechanisms:
1) Auto-association: an input pattern is associated with itself, and the states of the input and output units coincide. This is used to provide pattern completion, i.e. to produce a whole pattern whenever a portion of it or a distorted pattern is presented.
2) Hetero-association: this is related to two recall mechanisms: nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented, and interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented.
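Nearest-neighbour recall can be illustrated with a plain lookup rather than an actual trained network: a sketch assuming binary patterns and Hamming distance as the similarity measure (both assumptions of this example, not specified in the text):

```python
def nearest_neighbour_recall(stored, probe):
    # stored: list of (input_pattern, output_pattern) associations.
    # Return the output whose stored input pattern is closest
    # (in Hamming distance) to the presented probe pattern.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    best_in, best_out = min(stored, key=lambda p: hamming(p[0], probe))
    return best_out

pairs = [([1, 1, 0, 0], [1, 0]),
         ([0, 0, 1, 1], [0, 1])]
# A distorted version of the first stored input still recalls its output.
recalled = nearest_neighbour_recall(pairs, [1, 1, 1, 0])
```

The distorted probe differs from the first stored input in one bit but from the second in three, so recall produces the output associated with the first pattern.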
Methods of learning :
All learning methods used for adaptive neural networks can be classified into two major categories:
Learning laws:
There are a variety of learning laws in common use. These laws are mathematical algorithms used to update the connection weights. Learning is certainly more complex than the simplification represented by the learning laws currently developed. A few of the major laws are given here:
1) Hebb's Rule:
The first and best known learning rule was introduced by Donald Hebb. The basic rule is: if a neuron receives an input from another neuron, and if both are highly active (mathematically, have the same sign), the weight between the neurons should be strengthened.
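As a sketch, with activities represented as signed numbers (a common convention, not specified in the text), Hebb's rule becomes a single multiplicative update:

```python
def hebb_update(w, x, y, eta=0.1):
    # Hebb's rule: strengthen the weight when the pre-synaptic
    # activity x and post-synaptic activity y agree in sign;
    # the product x * y is positive exactly when they agree.
    return w + eta * x * y

w = 0.0
# Both neurons active with the same sign -> the weight is strengthened.
w = hebb_update(w, 1.0, 1.0)
```

When the two activities have opposite signs, the same formula weakens the connection instead.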
2) Hopfield Law:
This law is similar to Hebb's Rule with the exception that it specifies the magnitude of the strengthening or weakening. It states: "if the desired output and the input are both active, or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate." Most learning functions have some provision for a learning rate, or learning constant. Usually this term is positive and between zero and one.
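A sketch of the Hopfield law, assuming activities coded as +1 (active) and -1 (inactive), which is a coding choice of mine rather than one stated in the text:

```python
def hopfield_update(w, x, d, eta=0.1):
    # Hopfield law: if the input x and the desired output d are both
    # active or both inactive (their product is positive), increment
    # the weight by the learning rate; otherwise decrement it.
    if x * d > 0:
        return w + eta
    return w - eta
```

Unlike Hebb's rule, the step size here is always exactly the learning rate eta, matching the law's fixed-magnitude wording.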
but retaining some topological ordering of the output. In the diagram below (figure 6), an input presented to the network will cause some of the output units in a particular area to become active. Such output can then be used either for further neural network data processing or for more traditional techniques.
Applications of neural networks: