The weighted inputs w1x1, ..., wnxn are simply summed and fed through the transfer function f(·) to give the output:

z = Σ_{i=1..n} wi xi ;  y = f(z)
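The single-neuron computation above can be sketched in Python; tanh is used here purely as an illustrative transfer function, and the names follow the symbols in the equation:

```python
import math

def neuron_output(x, w, f=math.tanh):
    """Single neuron: weighted sum of inputs passed through transfer function f."""
    z = sum(wi * xi for wi, xi in zip(w, x))  # z = sum_i w_i * x_i
    return f(z)                               # y = f(z)
```

With a symmetric input and opposite weights the weighted sum is zero, so the output is f(0).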
Connection Patterns: Feed-Forward and Recurrent
Sources: https://image.slidesharecdn.com/artificialneuralnetworks-121114000455-phpapp02/95/artificial-neural-networks1638.jpg?cb=1352851582
Learning
What is the learning process in an ANN?
- Updating the network architecture and connection weights so that the network can efficiently perform a task
What is the source of learning for an ANN?
- Available training patterns
- The ability of the ANN to learn automatically from examples or input-output relations
How to design a learning process?
- Knowing what information is available
- Having a model of the environment: learning paradigm
- Figuring out the update process for the weights: learning rules
- Identifying a procedure to adjust the weights by the learning rules: learning algorithm
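As an illustration of a learning rule, here is a minimal sketch of the delta (Widrow-Hoff) weight update, a generic gradient-based rule, not the specific rule used in the paper:

```python
def delta_rule_update(w, x, target, y, lr=0.1):
    """One step of the delta rule: w_i <- w_i + lr * (target - y) * x_i.
    The weights move to reduce the error between target and output y."""
    error = target - y
    return [wi + lr * error * xi for wi, xi in zip(w, x)]
```

Repeating this update over all training patterns is the core of a simple learning algorithm.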
The present paper deals with the development of a neural network model using the results of laboratory
model tests to estimate the reduction factor.
Reduction factor (RF) is the ratio of the ultimate bearing capacity of the strip footing subjected to an
eccentrically inclined load to the ultimate bearing capacity of the strip footing subjected to a centric
vertical load.
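The definition of RF can be sketched directly (the bearing-capacity values in the test are made-up illustrative numbers, not results from the paper):

```python
def reduction_factor(qu_eccentric_inclined, qu_centric_vertical):
    """RF = ultimate bearing capacity under an eccentrically inclined load
    divided by that under a centric vertical load (dimensionless)."""
    return qu_eccentric_inclined / qu_centric_vertical
```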
A backpropagation neural network is most suitable for prediction problems, and the Levenberg-Marquardt algorithm is adopted as it is efficient in comparison to other algorithms.
Out of the 78 test records shown in Table 1, 59 are used for training and the remaining 19 are reserved for testing. Each record represents a complete model test in which a strip footing under an eccentrically inclined load was loaded to failure.
All the variables (i.e. inputs and output) are normalized in the range [-1, 1] before
training.
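A minimal sketch of this normalization, assuming linear min-max scaling to [-1, 1] (the paper does not state the exact scaling formula):

```python
def normalize(values):
    """Linearly map a list of values to the range [-1, 1] (min-max scaling)."""
    lo, hi = min(values), max(values)
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]
```

The minimum of each variable maps to -1 and the maximum to 1.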
A feedforward backpropagation neural network is used, with a hyperbolic tangent sigmoid transfer function in the hidden layer and a linear transfer function at the output. The network is trained with the Levenberg-Marquardt algorithm as it is efficient in comparison to other algorithms.
The ANN has been implemented using MATLAB V 7.11.0 (R2010b).
Courtesy: Patra et al. (2012)
Results And Discussion
Therefore, the final ANN architecture used in this study is 3-4-1, i.e. 3 input neurons, 4 hidden-layer neurons, and 1 output neuron, as shown in the figure.
The residual analysis was carried out by calculating the residuals between the experimental and predicted reduction factors for the training data set. The residual (er) is defined as the difference between the experimental and predicted RF values:

er = RF(experimental) - RF(predicted)
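The residual calculation can be sketched as follows (the RF values in the test are illustrative, not taken from Table 1):

```python
def residuals(rf_experimental, rf_predicted):
    """e_r = experimental RF minus ANN-predicted RF, per training record."""
    return [e - p for e, p in zip(rf_experimental, rf_predicted)]
```

Residuals scattered evenly around zero indicate the network is not systematically over- or under-predicting.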
Courtesy: Behera et al. (2013)
Neural Interpretation Diagram
The lines joining the input-hidden and hidden-output neurons represent the weights. Positive weights are represented by solid lines and negative weights by dashed lines, and the thickness of each line is proportional to the weight's magnitude.
The trained network expresses the normalized reduction factor as

RFn = fn( Σ_{k=1..h} wk fn( Σ_{i=1..m} wik Xi + bhk ) + bo )

where RFn is the normalized value of RF in the range [-1, 1], fn is the transfer function, h is the number of neurons in the hidden layer, Xi is the normalized value of the ith input in the range [-1, 1], m is the number of input variables, wik is the connection weight between the ith input neuron and the kth neuron of the hidden layer, wk is the connection weight between the kth neuron of the hidden layer and the single output neuron, bhk is the bias at the kth neuron of the hidden layer, and bo is the bias at the output layer.
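Assuming a tanh transfer function in the hidden layer and a linear output layer (so the outer fn is the identity), the 3-4-1 forward pass described above can be sketched as:

```python
import math

def ann_forward(X, W_ih, b_h, w_ho, b_o):
    """3-4-1 forward pass: hidden_k = tanh(sum_i w_ik*X_i + b_hk);
    output RFn = sum_k w_k*hidden_k + b_o (linear output layer)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(w_row, X)) + b)
              for w_row, b in zip(W_ih, b_h)]  # one row of W_ih per hidden neuron
    return sum(w * h for w, h in zip(w_ho, hidden)) + b_o
```

With all weights set to zero, the output reduces to the output-layer bias b_o, which is a quick sanity check on the wiring.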
Inference
From my reading of this paper, ANNs can be very useful in providing a general solution with good predictive accuracy. They handle the non-linearity inherent in the real world, can cope with noisy or missing data, and provide a model equation. ANNs can also indicate the relationship between the input parameters and the output parameter.
Acknowledgement
I would like to take this opportunity to thank Dr. Amarnath Hegde for his
precious guidance in presenting this paper. I also want to extend my
gratitude to my classmates for helping me to prepare this presentation.
Thank you!