E = Σ_{k=1}^{M} Σ_{i=1}^{N} (T_i^(k) − y_i^(k))². (4)
The weights are adjusted according to the gradient descent rule, so that the actual output of the MLP moves closer to the desired output.
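The sum-of-squares error of Eq. (4) and the gradient-descent weight adjustment can be sketched as follows. This is a minimal, single-layer linear illustration with assumed names, not the paper's MATLAB implementation (the actual MLP has hidden layers and nonlinear activations):

```python
import numpy as np

def sse(W, X, T):
    """Eq. (4): sum over patterns k and outputs i of (T_i^(k) - y_i^(k))^2."""
    Y = X @ W                      # y^(k): network output for each pattern
    return np.sum((T - Y) ** 2)

def gradient_descent_step(W, X, T, lr=0.01):
    """Move W against the gradient of E so outputs approach the targets."""
    Y = X @ W
    grad = -2.0 * X.T @ (T - Y)    # dE/dW for this linear model
    return W - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))       # 20 training patterns, 4 inputs
T = X @ np.array([[1.0], [-2.0], [0.5], [3.0]])  # known target mapping
W = np.zeros((4, 1))
for _ in range(200):
    W = gradient_descent_step(W, X, T)
print(sse(W, X, T))                # error shrinks toward zero
```

Each step moves the weights a small distance down the error surface; the learning rate `lr` here is an arbitrary illustrative choice.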
The training algorithm and the network architecture were implemented using the MATLAB program. The algorithm is designed to approach second-order training speed without computing the Hessian matrix. Because the performance function has the form of a sum of squares, the Hessian matrix was approximated as

H = J^T J. (5)
Figure 1. Diagram of the perceptron model.
Nigerian Crude Oil System for Environmental Sustainability 1997
and the gradient was computed as

g = J^T e, (6)

where J is the Jacobian matrix that contains first derivatives of the network errors with respect to the weights and biases, and e is a vector of network errors.
The Jacobian matrix is then computed through a standard back-propagation technique, and the approximated Hessian matrix is used in the quasi-Newton update equation, thus,

w_(k+1) = w_k − [J^T J + μI]^(−1) J^T e, (7)

where w_k is the vector of weights and biases at the kth iteration and μ is a scalar damping factor.
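The update of Eq. (7), built from the Hessian approximation of Eq. (5) and the gradient of Eq. (6), can be sketched on a toy least-squares problem. This is an illustrative Levenberg-Marquardt-style step with assumed names, not the paper's MATLAB code:

```python
import numpy as np

def lm_step(x, J, e, mu=1e-3):
    """Eq. (7): x_(k+1) = x_k - (J^T J + mu*I)^(-1) J^T e."""
    H = J.T @ J + mu * np.eye(J.shape[1])   # approximated Hessian, Eq. (5)
    g = J.T @ e                             # gradient, Eq. (6)
    return x - np.linalg.solve(H, g)

# Fit y = a*t + b by driving the error vector e toward zero.
t = np.linspace(0.0, 1.0, 10)
y_data = 2.0 * t + 1.0
x = np.zeros(2)                             # parameters [a, b]
for _ in range(20):
    e = (x[0] * t + x[1]) - y_data          # current model errors
    J = np.stack([t, np.ones_like(t)], axis=1)  # de/da, de/db
    x = lm_step(x, J, e)
print(x)                                    # approaches [2.0, 1.0]
```

With small μ the step behaves like Gauss-Newton (fast, near second-order); with large μ it degrades gracefully toward small gradient-descent steps, which is why no exact Hessian is needed.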
During the training phase, the weights are adjusted according to the generalized rule. To obtain accurate models for predicting P_b and B_ob as functions of the other four variables, the number of neurons was systematically varied to obtain a good fit to the data. Training was completed when the network was able to predict the given output.
Of the 542 data sets obtained from some wells in the Niger Delta of Nigeria, 264 sets were used to train the model, 142 sets were used to cross-validate the relationship established during the training process, and the remaining 136 sets were used to test the model for accuracy evaluation. Each data record contains the reservoir temperature T, solution gas-oil ratio R_s, gas specific gravity γ_g, oil API gravity γ_o, bubble point pressure P_b, and oil formation volume factor B_ob (Table 1).
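The 264/142/136 partition described above can be sketched as follows, assuming the 542 records sit in rows of a NumPy array (the column layout in the comment is hypothetical, and the shuffle-before-split choice is an assumption, not stated in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
# Placeholder for the 542 PVT records; hypothetical column order:
# T, Rs, gamma_g, gamma_o, P_b, B_ob.
data = rng.normal(size=(542, 6))

idx = rng.permutation(len(data))        # shuffle indices before partitioning
train = data[idx[:264]]                 # 264 sets for training
validate = data[idx[264:264 + 142]]     # 142 sets for cross-validation
test = data[idx[264 + 142:]]            # remaining 136 sets for testing
print(len(train), len(validate), len(test))   # 264 142 136
```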
For comparison purposes, P_b and B_ob were also calculated for some existing empirical models, such as those of Standing, Glasø, Labedi, and Elsharkawy, through their respective equations.
For the Standing model, P_b and B_ob are respectively given as

P_b = 18.2 [ (R_s/γ_g)^0.83 · antilog_10(0.00091T − 0.0125·γ_API) − 1.4 ], (8)

B_ob = 0.972 + 1.47 × 10^(−4) [ R_s (γ_g/γ_o)^0.5 + 1.25T ]^1.175. (9)
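Standing's correlations of Eqs. (8) and (9) can be sketched as below, assuming the standard published form of the correlation (T in °F, R_s in SCF/STB, γ_API in degrees API); the function names and sample inputs are illustrative:

```python
def standing_pb(rs, gamma_g, api, temp_f):
    """Eq. (8): bubble point pressure (psia) from Standing's correlation."""
    a = 0.00091 * temp_f - 0.0125 * api       # exponent of the antilog term
    return 18.2 * ((rs / gamma_g) ** 0.83 * 10.0 ** a - 1.4)

def standing_bob(rs, gamma_g, gamma_o, temp_f):
    """Eq. (9): bubble point oil formation volume factor (RB/STB)."""
    f = rs * (gamma_g / gamma_o) ** 0.5 + 1.25 * temp_f
    return 0.972 + 1.47e-4 * f ** 1.175

# Illustrative inputs within Table 1's ranges (not actual data points):
print(standing_pb(500.0, 0.8, 35.0, 180.0))   # a few thousand psia
print(standing_bob(500.0, 0.8, 0.85, 180.0))  # a little above 1 RB/STB
```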
Table 1. Range of PVT data used for training

Bubble point pressure (psia): 95–3,660
Bubble point oil formation volume factor (RB/STB): 1.0310–1.659
Solution gas-oil ratio (SCF/STB): 22–1,234
Gas specific density (air = 1): 0.6690–1.1510
Stock tank oil gravity (°API): 16.3–50.8
Reservoir temperature (°F): 108–220
1998 E. O. Obanijesu and D. O. Araromi
For the Glasø model,

B_ob = 1.0 + 10^(−6.58511 + 2.91329·log_10(B*_ob) − 0.27683·(log_10(B*_ob))²), (10)

where the correlating number B*_ob is

B*_ob = R_s (γ_g/γ_o)^0.526 + 0.968T, (11)
while the B_ob for the Labedi and Elsharkawy models was respectively calculated as

B_ob = 0.9897 + 0.0001364 [ R_s (γ_g/γ_o)^0.5 + 1.25T ]^1.175, (12)

B_ob = 1.0 + 40.428 × 10^(−5) R_s + 63.802 × 10^(−5) (T − 60) + 0.0780 × 10^(−5) [ R_s (T − 60) (γ_g/γ_o) ]. (13)
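The remaining B_ob correlations of Eqs. (10)–(13) can be sketched in the same way; base-10 logarithms are assumed in Glasø's correlating number, and function names and sample inputs are illustrative:

```python
import math

def glaso_bob(rs, gamma_g, gamma_o, temp_f):
    """Eqs. (10)-(11): Glaso's correlation via the number B*_ob."""
    b_star = rs * (gamma_g / gamma_o) ** 0.526 + 0.968 * temp_f
    lb = math.log10(b_star)
    return 1.0 + 10.0 ** (-6.58511 + 2.91329 * lb - 0.27683 * lb ** 2)

def labedi_bob(rs, gamma_g, gamma_o, temp_f):
    """Eq. (12): Labedi's correlation."""
    f = rs * (gamma_g / gamma_o) ** 0.5 + 1.25 * temp_f
    return 0.9897 + 0.0001364 * f ** 1.175

def elsharkawy_bob(rs, gamma_g, gamma_o, temp_f):
    """Eq. (13): Elsharkawy's correlation."""
    return (1.0 + 40.428e-5 * rs + 63.802e-5 * (temp_f - 60.0)
            + 0.0780e-5 * rs * (temp_f - 60.0) * (gamma_g / gamma_o))

# Illustrative inputs within Table 1's ranges (not actual data points):
print(glaso_bob(500.0, 0.8, 0.85, 180.0))
print(labedi_bob(500.0, 0.8, 0.85, 180.0))
print(elsharkawy_bob(500.0, 0.8, 0.85, 180.0))
```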
For comparative performance evaluation, average relative percent error, average absolute percent error, minimum and maximum absolute percent error, standard deviation, and correlation coefficient were used as statistical tools for comparison between the developed Neural Network (NN) model and these existing empirical models.
The average percent relative error, which is the relative deviation of the estimated values from the experimental data, was calculated as

E_r = (1/n) Σ_{i=1}^{n} E_i, (14)
where

E_i = [ (y_i − ŷ_i) / y_i ] × 100,  i = 1, 2, …, n. (15)
The average absolute percent relative error, which measures the relative absolute deviation of the estimated values from the experimental values, was calculated as

E_a = (1/n) Σ_{i=1}^{n} |E_i|. (16)
The minimum and maximum absolute percent relative errors, which define the ranges of error for each correlation, are respectively given by

E_min = min_{1≤i≤n} |E_i|, (17)

E_max = max_{1≤i≤n} |E_i|. (18)
The standard deviation, which is a measure of the spread or dispersion of the data distribution, was calculated as

σ = √[ Σ_{i=1}^{n} (E_i − E_r)² / (n − 1) ]. (19)
The correlation coefficient, which represents the degree of success in reducing the standard deviation by regression analysis, was calculated as

r = √[ 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)² ], (20)

where

ȳ = (1/n) Σ_{i=1}^{n} y_i. (21)
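The statistical measures of Eqs. (14)–(21) can be sketched together in one helper; `y` holds the experimental values and `y_hat` the predictions (array and function names are illustrative):

```python
import numpy as np

def error_stats(y, y_hat):
    """Return (E_r, E_a, E_min, E_max, sigma, r) per Eqs. (14)-(21)."""
    e = (y - y_hat) / y * 100.0                          # Eq. (15)
    e_r = np.mean(e)                                     # Eq. (14)
    e_a = np.mean(np.abs(e))                             # Eq. (16)
    e_min = np.min(np.abs(e))                            # Eq. (17)
    e_max = np.max(np.abs(e))                            # Eq. (18)
    sigma = np.sqrt(np.sum((e - e_r) ** 2) / (len(e) - 1))   # Eq. (19)
    y_bar = np.mean(y)                                   # Eq. (21)
    r = np.sqrt(1.0 - np.sum((y - y_hat) ** 2)
                / np.sum((y - y_bar) ** 2))              # Eq. (20)
    return e_r, e_a, e_min, e_max, sigma, r

# A perfect prediction gives zero errors and r = 1:
y = np.array([100.0, 200.0, 300.0, 400.0])
print(error_stats(y, y.copy()))
```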
RESULTS AND DISCUSSION
The training plots of the network, which show that the performance goal is met, are displayed in Figures 2 and 3. The accuracy of the ANN developed in this study is evaluated against the models developed by Standing, Glasø, Elsharkawy, and Labedi because of their global acceptability. The statistical results of the comparison for both P_b and B_ob are given in Tables 2 and 3. As shown in the tables, the proposed model shows high accuracy in predicting both the P_b and B_ob values, achieving the lowest minimum error, lowest maximum error, and lowest standard deviation, together with the highest correlation coefficient. The model achieves 99.2% and 98.9% correlation coefficients for P_b and B_ob, respectively, which are the highest when compared with the existing correlations (Figures 4–6; summarized in Figures 7 and 8). Absolute percent error was used to test the accuracy of the models, and the results were compared with those of the other correlations. The ANN model has the lowest bubble point error of 4.36% (Figure 9) and the lowest oil formation volume factor error of 1.73% (Figure 10). The scatter plots in Figures 11–15 depict the predicted B_ob versus the experimental B_ob values. These cross plots indicate the degree of agreement between the experimental and the predicted values: if the agreement were perfect, all points would lie on the 45° line.
NOMENCLATURE

T        temperature (°F)
T_r      reservoir temperature (°R)
T_s      separator temperature (°F)
T_k      reservoir temperature (K)
γ_o      oil specific gravity (water = 1.0)
γ_g      gas specific gravity (air = 1.0)
γ_gs     separator gas specific gravity (air = 1.0)
B_ob     bubble point oil formation volume factor (RB/STB)
T_i^(k)  target value of the ith output neuron for the given kth data pattern
y_i^(k)  prediction for the ith output neuron given the kth data pattern
M        number of training data patterns
N        number of neurons in the output layer