
Introduction to Neurons and Neural Networks

By

Dr. Maitreyee Dutta


Professor,
CSE Department
CONTENTS

Biological Neuron
Artificial Neuron
Signal Functions
Definition of Artificial Neural Network
Characteristics of Artificial Neural Network
Components of Artificial Neural Network
Applications of Neural Network
BIOLOGICAL NEURON

The brain is a collection of about 10 billion interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process and transmit information.
Soma: The soma, or cell body, contains the cell's nucleus and other vital components.

Dendrites: Dendrites form a tree-like structure that spreads out from the cell body; through them the neuron receives input electrical signals.
Axon: The axon is a tubular extension from the soma that carries an electrical signal away from the soma to other neurons for processing.

Each terminal button is connected to other neurons across a small gap called a synapse.
GLIAL CELLS

Glial cells function primarily as physical support for neurons. Three types:

1. Astroglia
2. Microglia
3. Oligodendroglia

(Figure: membrane of a rat brain)
ARTIFICIAL NEURON

1. Receives n inputs (plus a bias term)
2. Multiplies each input by its weight
3. Applies a signal (activation) function to the sum of the results
4. Outputs the result
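The four steps above can be sketched in a few lines of Python (an illustrative sketch; the names `neuron` and `step` are my own, not from the slides):

```python
def neuron(inputs, weights, bias, activation):
    # Steps 1-2: multiply each input by its weight and sum, adding the bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Steps 3-4: apply the signal (activation) function and output the result
    return activation(total)

step = lambda x: 1 if x > 0 else 0  # simple step activation for illustration
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1, step))  # positive net activation -> 1
```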
ARTIFICIAL NEURAL NETWORK

The jth artificial neuron receives input signals s_i from possibly n different sources.

It computes an internal activation x_j, which is a linear weighted aggregation of the impinging signals, modified by an internal threshold θ_j:

x_j = Σ_i w_ij s_i - θ_j

w_ij denotes the weight from neuron i to neuron j.

The activation of the neuron is subsequently transformed through a signal function S(·), generating the output signal s_j = S(x_j) of the neuron.
SIGNAL FUNCTION

A signal function may typically be:
binary threshold
linear threshold
sigmoidal
Gaussian
probabilistic
BINARY THRESHOLD SIGNAL FUNCTION

Net positive activations translate to a +1 signal value; net negative activations translate to a 0 signal value.
The threshold logic neuron is therefore a two-state machine:
s_j = S(x_j) ∈ {0, 1}
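A minimal Python sketch of this two-state signal function (the function name is illustrative):

```python
def binary_threshold(x):
    # Two-state signal: positive net activation -> 1, otherwise -> 0
    return 1 if x > 0 else 0

print(binary_threshold(2.5), binary_threshold(-0.3))  # 1 0
```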
PLOT OF BINARY THRESHOLD SIGNAL FUNCTION
THRESHOLD LOGIC NEURON (TLN) IN DISCRETE TIME

The response of the threshold logic neuron as a two-state machine can be extended to the bipolar case, where the signals are s_j ∈ {-1, +1}.
LINEAR THRESHOLD SIGNAL FUNCTION

λ_j = 1/x_m is the slope parameter of the function.
The figure is plotted for x_m = 2, i.e. λ_j = 0.5.
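Assuming the standard ramp form of the linear threshold function, S(x) = max(0, min(1, λ_j x)), a Python sketch:

```python
def linear_threshold(x, x_m=2.0):
    # Ramp signal with slope lambda_j = 1/x_m, saturating at 0 and 1
    lam = 1.0 / x_m
    return max(0.0, min(1.0, lam * x))

print(linear_threshold(1.0))   # 0.5 on the linear segment
print(linear_threshold(5.0))   # saturates at 1.0
print(linear_threshold(-1.0))  # saturates at 0.0
```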
SIGMOIDAL SIGNAL FUNCTION

λ_j is a gain scale factor.
In the limit, as λ_j → ∞, the smooth logistic function approaches the non-smooth binary threshold function.
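A sketch of the logistic form S(x) = 1/(1 + e^(-λx)) and its limiting behaviour (illustrative names):

```python
import math

def sigmoid(x, lam=1.0):
    # Logistic signal S(x) = 1 / (1 + exp(-lam * x)); lam is the gain
    return 1.0 / (1.0 + math.exp(-lam * x))

# As the gain lam grows, the sigmoid approaches the binary threshold function
print(sigmoid(0.0))             # 0.5
print(sigmoid(1.0, lam=50.0))   # close to 1
print(sigmoid(-1.0, lam=50.0))  # close to 0
```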
GAUSSIAN SIGNAL FUNCTION

σ_j is the Gaussian spread and c_j is the center.
Changing the center shifts the function to the right or left along the activation axis.
This function is an example of a non-monotonic signal function.
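Assuming the usual Gaussian form S(x) = exp(-(x - c)² / (2σ²)), a Python sketch (names are illustrative):

```python
import math

def gaussian_signal(x, c=0.0, sigma=1.0):
    # Gaussian signal centered at c with spread sigma; the response falls
    # off on both sides of the peak, which makes it non-monotonic
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

print(gaussian_signal(0.0))         # 1.0 at the center
print(gaussian_signal(2.0, c=2.0))  # 1.0 again after shifting the center
```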
SUMMARY OF SIGNAL FUNCTIONS

x = -10:0.1:10;
tmp = exp(-x);
y1 = 1./(1+tmp);        % logistic
y2 = (1-tmp)./(1+tmp);  % hyperbolic tangent
y3 = x;                 % linear
subplot(2,3,1); plot(x,y1); title('Logistic function');
subplot(2,3,2); plot(x,y2); title('Hyperbolic tangent function');
subplot(2,3,3); plot(x,y3); title('Linear function');
COMPARISON BETWEEN ANN AND BNN

Speed: BNN operates in milliseconds; ANN in nanoseconds.
Processing: BNN is massively parallel; ANN is less parallel than BNN.
Size and complexity: BNN has billions of neurons and can solve complex pattern recognition tasks; ANN has far fewer computational neurons and so struggles with complex pattern recognition.
Storage: in BNN, new information is added by adjusting the interconnection strengths; in ANN, new information written to the same location can destroy the previous information.
Fault tolerance: in BNN, information is distributed across the connections, so it is fault tolerant; ANN is inherently not fault tolerant, since information corrupted in memory cannot be retrieved.
Control mechanism: BNN has no central control mechanism; ANN has a central control mechanism.

DEFINITION OF ARTIFICIAL NEURAL NETWORK

Artificial neural networks are massively parallel adaptive networks of simple nonlinear computing elements called neurons, which are intended to abstract and model some of the functionality of the human nervous system in an attempt to partially capture some of its computational strengths.
CHARACTERISTICS OF ARTIFICIAL NEURAL NETWORK

Parallelism: NNs process information in parallel.
Nonlinearity: the interconnections are nonlinear, and the nonlinearity is distributed throughout the network.
Input-output mapping: NNs exhibit mapping capabilities, i.e. they can map input patterns to their associated output patterns.
Learning: NNs learn by examples.
Fault tolerance: NNs can recall full patterns from incomplete, partial or noisy patterns.
Adaptability: the free parameters can adapt to changes in the surroundings.
Response: a NN can provide a measure of confidence in its response.
EIGHT COMPONENTS OF ARTIFICIAL NEURAL NETWORKS

Neurons. These can be of three types:
Input: receive external stimuli
Hidden: compute intermediate functions
Output: generate outputs from the network

Activation state vector. This is a vector of the activation levels x_i of the individual neurons in the network, X = (x_1, ..., x_n)^T ∈ R^n.
Signal function. A function that generates
the output signal of the neuron based on
its activation.

Pattern of connectivity. This essentially determines the inter-neuron connection architecture, or the graph of the network. Connections, which model the inter-neuron synaptic efficacies, can be:
excitatory (+)
inhibitory (-)
absent (0)
Activity aggregation rule. A way of aggregating activity at a neuron, usually computed as an inner product of the input vector and the neuron's fan-in weight vector.
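The inner-product aggregation can be sketched as follows (illustrative Python; the threshold term follows the activation formula used earlier in the deck):

```python
def aggregate(signals, weights, theta=0.0):
    # Inner product of the input signal vector with the fan-in weight
    # vector, modified by the internal threshold theta
    return sum(s * w for s, w in zip(signals, weights)) - theta

print(aggregate([1.0, 0.0, 1.0], [0.5, 0.3, -0.2]))  # about 0.3
```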
Activation rule. A function that determines the new activation level of a neuron on the basis of its current activation and its external inputs.

Learning rule. Provides a means of modifying connection strengths based both on external stimuli and network performance, with the aim of improving the latter.

Environment. The environments within which neural networks operate can be:
1. deterministic (noiseless) or
2. stochastic (noisy).
APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS

Fingerprint Recognition:
Preprocessing System
Feature Extraction using Neural Network
Classification
Result
FINGERPRINT RECOGNITION SYSTEM

Image Acquisition → Edge Detection → Ridge Extraction → Thinning → Features Extraction → Classification

The image is digitised into a 512×512 image.
Edge detection and thinning remove noise and enhance the image.
EDGE DETECTION

The edges of the image are defined where the gray-scale levels change greatly. The orientation of the ridges is also determined for each 32×32 block of pixels using the gray-scale gradient.
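One way the per-block orientation estimate could look is sketched below. This is my illustration using central differences and doubled-angle averaging, not necessarily the authors' method; `block_orientation` is a hypothetical name.

```python
import math

def block_orientation(block):
    # Accumulate doubled-angle gradient components so that opposite
    # gradient directions reinforce rather than cancel
    gxx_yy, gxy2 = 0.0, 0.0
    h, w = len(block), len(block[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (block[y][x + 1] - block[y][x - 1]) / 2.0  # horizontal gradient
            gy = (block[y + 1][x] - block[y - 1][x]) / 2.0  # vertical gradient
            gxx_yy += gx * gx - gy * gy
            gxy2 += 2.0 * gx * gy
    return 0.5 * math.atan2(gxy2, gxx_yy)  # dominant gradient direction, radians

# A pure vertical intensity ramp: the gradient points along y
ramp = [[10.0 * y for _x in range(8)] for y in range(8)]
print(round(block_orientation(ramp), 4))  # pi/2, i.e. 1.5708
```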
RIDGE EXTRACTION

The ridges are extracted using the fact that the gray-scale values of pixels are maximal along the direction normal to the ridge orientation.
THINNING AND FEATURES EXTRACTION

Thinning: the extracted ridges are converted to a skeletal structure.

Ridge features extraction: features such as ridge bifurcations and ridge endings are extracted through the neural network.
CLASSIFICATION

Depending on the extracted features, a class label is assigned. Classification can be done into the following classes:
1. Arch
2. Tented Arch
3. Right Loop
4. Left Loop
APPLICATIONS OF FINGERPRINT RECOGNITION

Recognition of criminals
Security for laptops, lockers, etc.
In elections (to check who has voted and who has not)
To count individuals
OTHER APPLICATIONS

Character Recognition
Image Compression
Stock Market Prediction
Travelling Salesman Problem
Electronic nose; security and home applications
Application principles

The solution to a problem should be simple; complicated solutions waste time and resources.
If a problem can be solved with a small look-up table that can be easily calculated, that is preferable to a complex neural network with many layers trained by back-propagation.
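A toy illustration of this principle (my example, not from the slides): the 2-input XOR mapping, a classic neural network exercise, needs no trained network at all; a four-entry look-up table suffices.

```python
# Complete look-up table for 2-input XOR: no training, no layers
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(xor_table[(1, 0)])  # 1
```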
On-line neural network solutions should be very simple.
Many-layer neural networks and complex learning algorithms should be avoided if possible. Where possible, a priori knowledge should be used to set the initial parameters, so that only very short training is needed for optimal performance.
All the available data about the problem should be collected; having redundant data is usually a smaller problem than not having the necessary data.
The data should be partitioned into training, validation and testing sets.
The neural network solution to a problem should be selected from a large enough pool of potential solutions.
Because of the nature of neural networks, a single solution built in isolation is unlikely to be the optimal one; if a pool of potential solutions is generated and trained, it is more likely that one close to the optimum is found.
