
Project Report on

“Computation of Scour Depth within Channel Contraction using ANN”

Submitted By
Avinash K. Hegde (2KL05ME012)
Manoj V. Naik (2KL05ME032)
Patel Pratik N. (2KL05ME046)
Vadher Ansh A. (2KL05ME075)

Under the Guidance of

Dr. G. Ravindranath
Dr. R. V. Raikar

DEPARTMENT OF MECHANICAL ENGINEERING


K.L.E.SOCIETY’S COLLEGE OF ENGINEERING AND
TECHNOLOGY
UDYAMBAG, BELGAUM-590008
INDEX
 Objective, Scope & Abstract
 Introduction to Scour
 Artificial Neural Network (ANN)
 Training Programs
 Testing Programs
 Results
OBJECTIVE

 To develop an ANN model for the computation of scour depth within a channel contraction.
SCOPE

 Working with a moderate-sized data set.
ABSTRACT

 Due to changes in flow properties, the channel bed may erode.

 To study the behaviour of scour depth under different conditions.

 Extensive scour-depth data are analyzed using an ANN in the MATLAB environment.
Introduction

 Scour
 Scour Depth

 CLASSIFICATION:
 1. General bed scour
 2. Local scour

 Parameters influencing scour within channel contractions:
 Channel contraction geometry (approach width b1, contracted width b2, contraction length L)

[Figure: definition sketch of the channel contraction. (a) Plan view between sections 1 and 2. (b) Elevation showing approach flow depth h1, depth h2 and scour depth dsc in the bed sediment.]
dsc = f (d50, U1, h1, b2)
Methods:
 Direct field measurement
 Laboratory model study
 Mathematical modeling
 Applying soft computing techniques to the measured values
Artificial Neural Network (ANN)

 Prediction of fluid-mechanics behaviour is time-consuming because it is extremely complicated and non-linear.

 An ANN is adopted in this work for the scour-depth studies.

 Neural networks are promising due to their ability to learn highly non-linear relationships.
Prediction through ANN
 ANN is capable of handling tasks involving:
 Incomplete data sets
 Complex and ill-defined problems
 Non-linear problems
 Systems with many interrelated parameters

 An ANN takes a set of known patterns as inputs and produces outputs that closely match the known targets.
 Types of ANN
• Multi-layer perceptrons (MLPs)
• Radial basis function networks (RBFNNs)
• Counter-propagation neural networks (CPNNs)
 A feed-forward neural network consists of:
• Input layer
• Hidden layer(s)
• Output layer
[Figure: ANN architecture. The input layer (i) receives d50, U1/Uc, h1 and b2; weights wij connect it to the hidden layer (j); weights wjk connect the hidden layer to the output layer (k), which gives dsc.]
ANN MODEL:
METHOD: Feed-forward architecture trained by the back-propagation technique.

VARIABLES: d50, U1/Uc, h1, b2 and dsc
DATA: 99 sets
INPUT: 4 (d50, U1/Uc, h1, b2)
OUTPUT: 1 (dsc)
Training algorithm
 The toolbox provides built-in training functions.
 No single algorithm is best suited to all applications; one must be selected judiciously for a particular problem.
 The choice depends on the complexity of the problem, the number of data points in the training set and the error goal.
 Functions compared: traingdx, traingda, traincgf, traincgp, traincgb, trainscg, trainoss, trainlm.
Training algorithm (contd.)
 From the graphs it is observed that TRAINLM is the fastest method for training a moderate-sized feed-forward neural network.

[Figure: performance curves at MSE level e-1 for traingdx, traingd, traingda, trainoss, trainscg, traincgb, traincgf, trainlm and traincgp.]
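The comparison behind this observation can be reproduced with a short loop. The following is a minimal sketch, assuming the normalised arrays pn and tn produced by the training program later in this report; it times each built-in training function on the same one-hidden-neuron network:

algs = {'traingdx','traingda','traincgf','traincgp', ...
        'traincgb','trainscg','trainoss','trainlm'};
for k = 1:numel(algs)
    net = newff(minmax(pn),[1,1],{'tansig','purelin'},algs{k});
    net.trainParam.epochs = 500;     % same epoch budget for every function
    net.trainParam.goal   = 1.0e-1;  % same error goal
    net.trainParam.show   = NaN;     % suppress progress printout
    tic; [net,tr] = train(net,pn,tn); el = toc;
    fprintf('%-9s epochs: %4d time: %6.2f s final MSE: %g\n', ...
            algs{k}, tr.epoch(end), el, tr.perf(end));
end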
Training Methods:
 Unsupervised or adaptive training
 Supervised training
Here both the inputs and the target outputs are provided.
During training, the default performance function for a feed-forward network is the mean squared error (MSE).
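The default performance measure can also be computed by hand. A minimal sketch, assuming the trained net and the normalised arrays pn and tn from the training program below:

a = sim(net,pn);        % network outputs on the training inputs
e = tn - a;             % errors
perf  = mse(e);         % toolbox performance function
perf2 = mean(e.^2);     % the same quantity computed by hand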
MATrix LABoratory (MATLAB):
 It is a special-purpose computer program.
 It is used to perform engineering and scientific calculations.
 It provides a Graphical User Interface.
Training program

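% Seed the random generator so the initial weights are reproducible.
% Each row of ptr is one training pattern: [d50, U1/Uc, h1, b2];
% ttr holds the corresponding measured scour depths dsc.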
rand('state',0);
ptr=[.81 .965 .087 .42;
.81 .975 .0854 .36;
.81 .972 .128 .36;
.81 .949 .125 .3;
.81 .97 .13 .24;
1.86 .954 .128 .42;
1.86 .957 .095 .36;
1.86 .968 .097 .3;
1.86 .966 .096 .24;
1.86 .955 .127 .24;
2.54 .943 .095 .42;
2.54 .956 .126 .36;
2.54 .952 .094 .3;
2.54 .947 .0955 .24;
2.54 .954 .1286 .24;
4.1 .96 .0738 .42;
4.1 .96 .0935 .42;
4.1 .928 .101 .42;
4.1 .942 .0768 .36;
4.1 .95 .0849 .36;
4.1 .925 .1366 .36;
4.1 .931 .0692 .3;
4.1 .913 .101 .3;
4.1 .907 .1322 .3;
4.1 .932 .0903 .24;
4.1 .925 .1063 .24;
4.1 .944 .1242 .24;
5.53 .976 .0859 .42;
5.53 .936 .1048 .42;
5.53 .94 .122 .42;
5.53 .958 .0707 .36;
5.53 .938 .1077 .36;
5.53 .929 .1269 .36;
5.53 .953 .071 .3;
5.53 .933 .0891 .3;
5.53 .922 .1277 .3;
5.53 .944 .0716 .24;
5.53 .941 .0835 .24;
5.53 .906 .107 .24;
7.15 .956 .0786 .42;
7.15 .914 .1036 .42;
7.15 .917 .124 .42;
7.15 .957 .0677 .36;
7.15 .923 .0857 .36;
7.15 .936 .1219 .36;
7.15 .939 .0675 .3;
7.15 .93 .0817 .3;
7.15 .917 .1033 .3;
7.15 .951 .0853 .24;
7.15 .913 .102 .24;
7.15 .926 .123 .24;
10.25 .918 .084 .42;
10.25 .901 .101 .42;
10.25 .904 .1215 .36;
10.25 .957 .077 .3;
10.25 .913 .0897 .3;
10.25 .91 .121 .3;
10.25 .919 .0892 .24;
10.25 .91 .121 .24;
14.25 .923 .089 .42;
14.25 .941 .1045 .42;
14.25 .947 .104 .36;
14.25 .92 .1247 .36;
14.25 .937 .088 .3;
14.25 .947 .122 .3;
14.25 .903 .108 .24;
14.25 .91 .121 .24];
ttr=[.026;
.029;
.056;
.095;
.149;
.045;
.063;
.096;
.143;
.154;
.025;
.064;
.088;
.11;
.139;
.03;
.036;
.04;
.043;
.056;
.065;
.069;
.083;
.097;
.092;
.129;
.131;
.037;
.041;
.044;
.051;
.07;
.069;
.068;
.08;
.099;
.074;
.098;
.103;
.034;
.041;
.044;
.052;
.062;
.068;
.071;
.081;
.093;
.089;
.103;
.135;
.029;
.031;
.069;
.071;
.073;
.104;
.087;
.138;
.05;
.052;
.073;
.074;
.081;
.108;
.118;
.144];
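% The toolbox expects one pattern per column, so transpose the tables.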
p=transpose(ptr);
t=transpose(ttr);
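% premnmx linearly scales inputs and targets to the range [-1, 1].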
[pn,minp,maxp,tn,mint,maxt]=premnmx(p,t);
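% Feed-forward network: one tansig hidden neuron and one purelin output,
% trained with the Levenberg-Marquardt algorithm (trainlm).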
net=newff(minmax(pn),[1,1],{'tansig','purelin'},'trainlm');
net.trainParam.show=50;
net.trainParam.lr=0.5;
net.trainParam.lr_inc=1.005;
net.trainParam.epochs=500;
net.trainParam.goal=1.0e-1;
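% Train the network on the normalised patterns.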
[net,tr]=train(net,pn,tn);
Testing Program
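% Re-simulate the trained network on the training inputs, then map the
% normalised outputs back to engineering units with postmnmx.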
an = sim(net,pn);
a=postmnmx(an,mint,maxt);
at=transpose(a);
erg1=[]; rs1=[];   % result accumulators (erg1 holds the percent errors)
xaxis1=1:25;       % x-axes kept from the original listing for the error plots
xaxis2=1:10;
fprintf(' dsc(exp) dsc(predi) ERROR \n');
fprintf(' in percent \n');
fprintf('----------------------------------\n');
for i=1:32
rs1(i,1)=ttr(i,1);
rs1(i,2)=at(i,1);
rs1(i,3)=abs(100*(rs1(i,1)-rs1(i,2))/rs1(i,1));
erg1(i)=rs1(i,3);
end
disp(rs1);
reg1=[];
for i=1:32
reg1(i,1)=at(i,1);
end
regt1=transpose(reg1);
rs1=[];
% Testing with new input patterns for validation
pnew=[.81 .963 .123 .42;
.81 .954 .094 .3;
.81 .964 .0937 .24;
1.86 .959 .093 .42;
1.86 .953 .13 .36;
1.86 .954 .128 .3;
2.54 .944 .1273 .42;
2.54 .957 .1 .36;
2.54 .946 .127 .3;
4.1 .923 .1373 .42;
4.1 .923 .1115 .36;
4.1 .946 .0892 .3;
4.1 .934 .0772 .24;
5.53 .927 .0726 .42;
5.53 .94 .0886 .36;
5.53 .907 .1079 .3;
5.53 .915 .1281 .24;
7.15 .956 .0833 .42;
7.15 .913 .1037 .36;
7.15 .927 .1229 .3;
7.15 .95 .0681 .24;
10.25 .9 .122 .42;
10.25 .921 .0793 .36;
10.25 .922 .089 .36;
10.25 .902 .101 .36;
10.25 .906 .1018 .3;
10.25 .925 .079 .24;
10.25 .944 .1063 .24;
14.25 .947 .122 .42;
14.25 .952 .091 .36;
14.25 .911 .1072 .3;
14.25 .944 .0875 .24];
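% Scale the new patterns with the training-set ranges (tramnmx), then
% simulate and convert the outputs back with postmnmx.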
pnewt=transpose(pnew);
pnewn = tramnmx(pnewt,minp,maxp);
anewn = sim(net,pnewn);
anew = postmnmx(anewn,mint,maxt);
anewt = transpose(anew);
fprintf('dsc(exp)test dsc(predi)test ERROR \n');
fprintf(' in percentage \n');
fprintf('--------------------------------------\n');
% NOTE: res1(:,1) must first be filled with the measured dsc values for
% the 32 test patterns; those values are not included in this listing.
for i=1:32
res1(i,2)=anewt(i,1);
res1(i,3)=abs(100*(res1(i,1)-res1(i,2))/res1(i,1));
ergt1(i)=res1(i,3);
end
disp(res1);
regt1=[];
for i=1:32
regt1(i,1)=anewt(i,1); % test-set predictions (the original listing reused 'at' here)
end
regte1=transpose(regt1);
Generating Error Graph

 plot(erg1);
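A slightly fuller sketch of the same plot, with axis labels added for readability (the labels are our addition, not part of the original program):

plot(erg1,'-o');
xlabel('Training pattern number');
ylabel('Absolute error (%)');
grid on;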
Generating Regression Graph

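% postreg regresses the network outputs a on the targets t and returns
% the slope m, intercept b and correlation coefficient r of the fit.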
 [m,b,r] = postreg(a,t);
Results
Graphs for 500 epochs:
 Training graphs (performance graphs)

[Figures: training performance curves. MSE level e-1: 1 hidden neuron. MSE level e-2: 3 and 4 hidden neurons. MSE level e-3: 10 hidden neurons.]
 Testing (performance graphs)

[Figures: testing performance curves at MSE levels e-1, e-2 and e-3.]
 Error graphs

[Figures: error graphs for (a) training and (b) testing, each at MSE levels e-1, e-2 and e-3.]
 Comparison of EXPERIMENTAL data with PREDICTED data

[Figures: experimental vs. predicted scour depths at MSE levels e-1, e-2 and e-3.]
 Regression graphs

[Figures: regression graphs for (a) training at MSE levels e-1, e-2 and e-3, and (b) testing at MSE levels e-1 and e-3.]
Conclusion
 The TRAINLM method was found to be the most suitable.
 The MSE goal was reached with different numbers of hidden neurons.
 Parameter identification.
Applications:
 Heat exchangers (petrochemical industries).
References

 Raikar, R. V. (2006). “Characteristics of Flow over Gravel Beds and Scour within Contractions and at Piers.” Ph.D. thesis, IIT Kharagpur.
 Chapman, S. J. (2002). MATLAB Programming for Engineers, 2nd Edition, Singapore.
 Haykin, S. (1999). Neural Networks, 2nd Edition, New Jersey.
 MATLAB manual.
THANK YOU
