Вы находитесь на странице: 1из 16

Journal of Petroleum Science and Engineering 106 (2013) 1833

Contents lists available at SciVerse ScienceDirect

Journal of Petroleum Science and Engineering


journal homepage: www.elsevier.com/locate/petrol

An innovative neural forecast of cumulative oil production from a


petroleum reservoir employing higher-order neural networks (HONNs)
N. Chithra Chakra a, Ki-Young Song b, Madan M. Gupta b,n, Deoki N. Saraf a
a
b

Center for Information Technology, College of Engineering, University of Petroleum and Energy Studies, Dehradun, Uttarakhand 2487001, India
Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada S7N 5A9

art ic l e i nf o

a b s t r a c t

Article history:
Received 2 August 2012
Accepted 19 March 2013
Available online 10 April 2013

Precise and consistent production forecasting is indeed an important step for the management and
planning of petroleum reservoirs. A new neural approach to forecast cumulative oil production using
higher-order neural network (HONN) has been applied in this study. HONN overcomes the limitation of
the conventional neural networks by representing linear and nonlinear correlations of neural input
variables. Thus, HONN possesses a great potential in forecasting petroleum reservoir productions without
sufcient training data. Simulation studies were carried out on a sandstone reservoir located in Cambay
basin in Gujarat, India, to prove the efcacy of HONNs in forecasting cumulative oil production of the
eld with insufcient eld data available. A pre-processing procedure was employed in order to reduce
measurement noise in the production data from the oil eld by using a low pass lter and optimal input
variable selection using cross-correlation function (CCF). The results of these simulation studies indicate
that the HONN models have good forecasting capability with high accuracy to predict cumulative oil
production.
Crown Copyright & 2013 Published by Elsevier B.V. All rights reserved.

Keywords:
oil production forecasting
black oil reservoir
time series
higher-order neural networks
higher-order synaptic operation
data preprocessing

1. Introduction
An important phase in the eld of petroleum reservoir engineering is concerned with the forecasting of oil production from
the reservoir. This estimation of reserves involves massive investment of money, time and technology under a wide range of
operating and maintenance scenarios such as well operations
and completion, articial lift, workover, production, and injection
operations. A fairly precise estimation of oil quantity in the
reservoir is in demand; however, the rock and uid properties of
the reservoirs are highly nonlinear and heterogeneous in nature.
Therefore, it is difcult to estimate an accurate upcoming oil
production. The oil production from a reservoir depends on many
static and dynamic parameters such as porosity and permeability
of rocks (static parameters), and uid saturation and pressure in
the reservoir (dynamic parameters). When these static and
dynamic parameters are available, the forecasting of oil production
of a reservoir would be more accurate. However, all the parameter
data are not always available. This limited data access from the oil
elds lessens the accuracy of forecasting.
In the past, several forecasting methods have been developed
from decline curve analysis to soft computing techniques
(Tamhane et al., 2000). Articial intelligence tools such as neural

Corresponding author. Tel.: +1 306 966 5451.


E-mail address: madan.gupta@usask.ca (M.M. Gupta).

computing, fuzzy inference systems and genetic algorithms have


been extensively applied in petroleum industries because of their
potential to handle the nonlinearities and time-varying situations
(Mohaghegh, 2001). Neural networks (NN) is one of the most
attractive methods of articial intelligence to cope with the
nonlinearities in production forecasting (Weiss et al., 2002) as
well as in parameters estimation (Aminzadeh et al., 2000) due to
its ability to learn and adapt to new dynamic environments.
Numerous researches have shown successful implementation of
NN in the eld of oil exploration and development such as pattern
recognition in well test analysis (Al-Kaabi and Lee, 1993), reservoir
history matching (Maschio et al., 2010), prediction of phase
behavior (Habiballah et al., 1996), prediction of natural gas
production in the United States (Al-Fattah and Startzman, 2001)
and reservoir characterization (Mohaghegh et al., 2001) by mapping the complex nonlinear inputoutput relationship. In conventional NN model, each neural unit (neuron) performs linear
synaptic operation of neural inputs and synaptic weights. Later,
extensive researches on NN have been made by Lee et al. (1986),
Rumelhart and McClelland (1986), Giles and Maxwell (1987), Gosh
and Shin (1992), and Homma and Gupta (2002) to capture the
nonlinear synaptic operation of the input space.
As well, it has been reported that neural network models
outperform any other conventional statistical model such as
Autoregressive Integrated Moving Average (ARIMA), Autoregressive Moving Average (ARMA) and Autoregressive Conditioned
Heteroskedasticity (ARCH). Castellano-Mendenz (2004) reported

0920-4105/$ - see front matter Crown Copyright & 2013 Published by Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.petrol.2013.03.004

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

x R

Second-order
correlator

First-order
correlator

x1

v w0 x0 wi xi
i1

However, in nature, the correlation of neural inputs and neural


weights is not simply linear, but rather related nonlinearly. This
observation introduced a nonlinear (higher-order) synaptic operation, and NN with the higher-order synaptic operation (HOSO) (see
Fig. 1) was developed and named as higher-order neural networks

x 12

w11

x 1x2

w12

x n2

w0

wn

()

Y R1

wnn

w1...1

x1N-1x2 w1...2
x nN

x1

Y1
Y2

x2

xn

Ym

Input Layer

Hidden Layers

Output Layer

Fig. 2. A schematic diagram of HONN with multilayer.

(HONN) (Gupta et al., 2003; Song et al., 2009). HOSO of HONN


embraces the linear correlation (conventional synaptic operation)
as well as the higher-order correlation of neural inputs with
synaptic weights (up to nth-order correlation).
An Nth order HOSO is dened as
n

v w0 x0 wi1 xi1
i1 1
n

i1 1 i2 i1

w2

Fig. 1. A neural unit (neuron) with higher-order synaptic operation (HOSO) (Gupta
et al., 2003).

Neural networks (NN) are composed of several layers of neural


units (neurons): input layer, hidden layers and output layer. A
neural unit is structured mainly with two operations: synaptic
operation for weighting, and somatic operation for mapping. In a
conventional neural unit, the weighting process is operated with
linear correlation of neural inputs, xa (x0, x1, x2, ,xn)Rn+1 (x0 is
the bias), and neural weights, wa (w0, w1, w2, , wn)Rn+1
(w0 1). The linear correlation can be expressed mathematically as

xn

w1

wn...n

2. Neural networks (NN) and its extension to higher-order


neural networks (HONN)

x2

x1 N

Nth-order
correlator

multi-layer NN perform better for single step-off prediction than


linear ARMA (Box and Jenkins, 1976) model. A superior performance of NN is reported by Donaldson et al. (1993) when
compared to specialized nonlinear nance models like ARCH.
Donaldson et al. (1993) found that ARCH could only partially
remove the leptokurtosis and symmetric/asymmetric heteroskedasticity from the data used to evaluate the fat-tailed and heteroskedasticity nature of the stock return. Nayak et al. (2004) and
Tiwari et al. (2012) also reported superiority of NN model over
ARMA model in river ow forecasting of hydrological model.
Although the conventional time series models have several
advantages and are widely applied for forecasting, but these
models have their own limitations while attempting to solve
highly nonlinear time series data and also not perform well at
times (Tokar and Johnson, 1999). More details on comparison
between NN models and conventional statistical techniques for
forecasting of time series data can be found in the research paper
by Hill et al. (1994).
In order to overcome the constraints of the conventional NN
models, numerous methods of modication and improvement of
NN model have been carried out. One innovative neural structure
embeds higher-order synaptic operations (HOSO) and a new NN
model, named higher-order neural network (HONN), was developed employing the HOSO architecture (Gupta et al., 2003; Hou
et al., 2006). The exclusive feature of HONN is the expression of
the correlation of neural inputs by computing products of the
inputs. It has been found that HONN has signicant advantages
over conventional NN such as faster training, reduced network
size, and smaller forecasting errors (Redlapalli, 2004; Song et al.,
2009; Gupta et al., 2010; Tiwari et al., 2012). The advantage of NN
methods over conventional statistical techniques motivated us to
employ HONN model for forecasting of highly nonlinear production data from petroleum reservoir.
HONN has been used for the rst time to forecast cumulative oil
production from a real oil eld reservoir with limited dynamic
parameter data: (i) oil production data and (ii) oil, gas and water
production data (no data on pressure and uid saturation available).
Two case studies have been carried out to verify the potential of the
proposed neural approach with limited available parameters from an
oil eld in Cambay basin, Gujarat, India. In case study-1, data on only
one dynamic parameter, oil production, from ve producing wells, are
used for forecasting, whereas in case study-2, data on three dynamic
parameters, oil, gas and water production from ve producing wells,
are used for forecasting. A pre-processing step is included for the
preparation of neural inputs.

19

wi1 i2 xi1 xi2

i1 1 i2 i1

iN iN1

wi1 i2 iN xi1 xi2 xiN

and the somatic operation, which yields the neural output, is


dened as
y v

In this paper, different HOSO have been applied up to thirdorder, and the rst-order (conventional linear correlation), the
second-order and the third-order synaptic operations are called
linear synaptic operation (LSO), quadratic synaptic operation
(QSO) and cubic synaptic operation (CSO), respectively.
The higher-order neural network (HONN), illustrated in Fig. 2,
consists of multiple interconnected layers: input layer, hidden
layers and output layer. The input layer conveys n number of input
data to the rst hidden layer. Each hidden layer includes different
number of neurons, and the output layer contains m neurons, m
being the number of desired outputs. The number of the hidden
layers and the number of neurons in each hidden layer can be
assigned after careful investigation for different applications.

20

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

HONN is trained by an error based algorithm in which synaptic


weights (connection strength) are adjusted to minimize the error
between desired and neural outputs (Gupta and Rao, 1993; Gupta,
2008; Gupta et al., 2010). Let x(k)Rn be the neural input pattern at
time step k 1, 2,, n corresponding to desired output yd(k)R1
and neural output y(k). The error of a pattern can be calculated as
ek ykyd k

wa k 1 wa k wa k

wa k

1 2
e k
2

where the change in weight matrix is denoted by wa(k) which is


proportional to the gradient of the error function E(k) as

The overall error for an epoch E(k) is dened as


Ek

The overall error (squared error) is minimized by updating the


weight matrix wa as

Ek
wa k

where 40 is the learning rate which effects the performance of


the algorithm during the updating process. The details can be
found in the reference Gupta et al. (2003).

Well-7 Well-4
Well-8
Well-1
Well-5 Well-6

W
S

Well-2

3. Model performance evaluation criteria

N
E

Grid Top (m)


1413

Well-3
Fault

1408
1403
1399
1394

Contour

1389
1385
1380

Fig. 3. The top structure map of the oil bearing reservoir with well locations. In this
case study, the oil production from Well-1, Well-2, Well-3, Well-4 and Well-5 are
forecasted.

Several statistical methods have been used to evaluate the


performance of neural networks in the literature. In these studies,
the following performance measurements are applied to substantiate the statistical accuracy of the performance of HONNs:
mean square error (MSE), root mean square error (RMSE), and
mean absolute percentage error (MAPE). These performance
measurements are commonly used evaluation criteria in assessing
the model performance to predict the values that deviate from the
mean value. They indicate the deviation of prediction of applied
HONN models, and they are dened as
Mean Square Error (MSE):
MSE

1 n
2
yobs ypred
i
ni1 i

Table 1a
Raw monthly oil production ratios from Well-1, Well-2, Well-3, Well-4 and Well-5 and cumulative oil production ratio.
Months Well-1
(C1)

Well2
(C2)

Well3
(C3)

Well4
(C4)

Well-5
(C5)

Cumulative Oil
(C6)

Months Well-1
(C1)

Well-2
(C2)

Well-3
(C3)

Well-4
(C4)

Well-5
(C5)

Cumulative Oil
(C6)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32

0.072
0.073
0.060
0.061
0.057
0.091
0.049
0.048
0.049
0.049
0.043
0.029
0.028
0.031
0.031
0.020
0.027
0.023
0.023
0.020
0.022
0.023
0.020
0.031
0.029
0.020
0.018
0.021
0.032
0.043
0.040
0.050

0.309
0.307
0.292
0.290
0.298
0.313
0.326
0.320
0.337
0.332
0.303
0.337
0.327
0.359
0.286
0.228
0.276
0.306
0.307
0.280
0.316
0.348
0.308
0.496
0.479
0.419
0.401
0.391
0.322
0.252
0.450
0.463

0.090
0.088
0.075
0.081
0.072
0.058
0.084
0.070
0.088
0.089
0.079
0.097
0.051
0.010
0.110
0.094
0.131
0.102
0.064
0.066
0.045
0.047
0.081
0.062
0.058
0.063
0.061
0.041
0.034
0.026
0.077
0.078

0.237
0.237
0.226
0.209
0.211
0.197
0.212
0.208
0.180
0.180
0.196
0.193
0.187
0.205
0.163
0.093
0.120
0.155
0.155
0.142
0.164
0.176
0.153
0.114
0.107
0.093
0.086
0.100
0.084
0.068
0.083
0.073

0.820
0.815
0.752
0.744
0.744
0.737
0.781
0.755
0.706
0.702
0.668
0.707
0.643
0.659
0.674
0.500
0.644
0.662
0.626
0.571
0.607
0.657
0.617
0.753
0.720
0.626
0.597
0.601
0.515
0.429
0.714
0.729

33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63

0.048
0.029
0.035
0.039
0.037
0.016
0.019
0.013
0.018
0.017
0.019
0.035
0.033
0.030
0.028
0.024
0.014
0.013
0.060
0.128
0.123
0.120
0.124
0.124
0.218
0.193
0.176
0.188
0.167
0.183
0.140

0.443
0.357
0.508
0.693
0.647
0.665
0.609
0.218
0.601
0.579
0.632
0.564
0.576
0.579
0.543
0.584
0.567
0.577
0.563
0.592
0.596
0.593
0.655
0.572
0.514
0.440
0.426
0.457
0.346
0.364
0.339

0.077
0.033
0.046
0.053
0.049
0.050
0.040
0.013
0.039
0.037
0.038
0.040
0.044
0.041
0.038
0.041
0.048
0.046
0.055
0.058
0.044
0.039
0.046
0.041
0.029
0.028
0.027
0.029
0.021
0.027
0.031

0.070
0.069
0.079
0.089
0.083
0.095
0.092
0.054
0.089
0.086
0.059
0.046
0.057
0.053
0.001
0.000
0.054
0.056
0.033
0.031
0.030
0.017
0.025
0.032
0.033
0.031
0.018
0.019
0.015
0.001
0.204

0.702
0.531
0.701
0.909
0.849
0.860
0.800
0.317
0.786
0.757
0.763
0.725
0.748
0.737
0.651
0.699
0.728
0.739
0.764
0.853
0.844
0.798
0.878
0.814
0.827
0.722
0.671
0.718
0.574
0.602
0.717

0.111
0.111
0.099
0.103
0.106
0.078
0.111
0.109
0.052
0.053
0.046
0.051
0.050
0.055
0.084
0.065
0.089
0.076
0.076
0.062
0.061
0.063
0.055
0.050
0.046
0.032
0.031
0.047
0.043
0.039
0.065
0.065

0.065
0.042
0.032
0.036
0.034
0.035
0.040
0.018
0.039
0.037
0.016
0.040
0.037
0.034
0.041
0.051
0.045
0.046
0.052
0.044
0.051
0.030
0.028
0.045
0.033
0.029
0.024
0.026
0.025
0.027
0.003

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

Root mean square error (RMSE):


s
1 n
2
RMSE
yobs ypred
i
ni1 i

Mean absolute percentage error (MAPE) and mean absolute


error (MAE):
y
100 n  yobs
 i obsi
n i1
yi

pred

MAPE





Mean Absolute Error MAE

10
MAPE
100

11

where yobs is the observed data, ypred is the predicted data, and n is
the number of data points.
In order to illustrate the consistency in performance of HONN
model toward forecasting the production data, we have used three
performance measurement metrics. The results obtained using all
three metrics are different in their calculated values, but the
signicance of each metrics is similar in performance measurement of HONN model. Since the production data used as neural
input, preprocessed data and raw data have different scales, it is
preferable to use MAPE for estimating the relative error (Azadeh
et al., 2006).

4. Pre-processing: optimal selection of input variables


Before performing a prediction by HONN, it is important and
necessary to rene the available parameter data by applying preprocessing because of two main reasons: (i) noise reduction and
(ii) proper selection of input variables.
The measured oil production data from the eld include noise.
It is not appropriate to use the raw production data for neural

21

network training because NN requires extremely low learning


rates. Thus, a preprocessing of the raw experimental production
data was, therefore, incorporated in all cases. Moving average is a
type of low pass lter that transforms the time series monthly
production data into smooth trends. This lter does weighted
averaging of past data points in the time series production data
within the specied time span to generate a smoothed estimate of
a time series. The time span of moving average depends on the
analytical objectives of the problem. For case studies, we used
moving average lter with a time span of ve-points since it is
found to be optimal for reducing the random noise by retaining
the sharpest step response associated with production data.
Moving average lter is the simplest and perhaps optimal lter
that can be used for time domain signals as reported by Smith
(1997).
After noise reduction process, cross-correlation analysis is
carried out to nd optimal input variables. Determining the
signicant input variables is an important task in the process of
training HONN model for production forecasting. A thorough
understanding of dynamics of petroleum reservoir is necessary
to avoid missing key input variables and prevent introduction of
spurious ones that create confusion in the training process.
Currently, there are no dened rules for the selection of the input
variables. Most of the heuristic methods for selecting the input
variables are ad-hoc or have experimental basis. In this paper,
signicant input variables are selected by employing crosscorrelation function (CCF) for multiple parameter data. Crosscorrelation denes an interconnection between two or more
variables in a time series and can be calculated using Eq. (12).
Consider two time series xt and yt, t1, 2,, n; the time series
yt may be related to the past lags of time series xt and this can be
calculated using Eq. (12). Here rxy(k) is the cross-correlation
coefcient between xt and yt and k is the lag. This means

Table 1b
Smoothed monthly oil production ratios from Well-1, Well-2, Well-3, Well-4 and Well-5 and cumulative oil production ratio.
Months Well-1
(C1)

Well2
(C2)

Well3
(C3)

Well4
(C4)

Well-5
(C5)

Cumulative oil
(C6)

Months Well-1
(C1)

Well-2
(C2)

Well-3
(C3)

Well-4
(C4)

Well-5
(C5)

Cumulative oil
(C6)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.040
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043
0.042

0.309
0.303
0.299
0.300
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.350
0.389
0.410
0.421
0.437
0.402
0.357
0.363
0.376
0.386
0.393

0.090
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.100
0.091
0.082
0.065
0.061
0.060
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058
0.058

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.150
0.143
0.129
0.111
0.100
0.094
0.086
0.084
0.082
0.076
0.073

0.820
0.796
0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.670
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598
0.618
0.621

33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63

0.040
0.040
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.020
0.024
0.027
0.029
0.030
0.026
0.022
0.028
0.048
0.068
0.089
0.111
0.124
0.142
0.156
0.167
0.180
0.188
0.181
0.171
0.163
0.140

0.444
0.493
0.530
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.590
0.586
0.579
0.569
0.570
0.570
0.567
0.577
0.579
0.584
0.600
0.602
0.586
0.555
0.521
0.482
0.437
0.407
0.386
0.350
0.339

0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.040
0.040
0.040
0.041
0.042
0.043
0.046
0.050
0.050
0.048
0.048
0.046
0.040
0.037
0.034
0.031
0.027
0.026
0.027
0.026
0.031

0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.060
0.043
0.031
0.033
0.033
0.029
0.035
0.041
0.033
0.027
0.027
0.027
0.028
0.028
0.027
0.023
0.017
0.051
0.073
0.204

0.675
0.714
0.738
0.770
0.824
0.747
0.722
0.704
0.685
0.670
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.800
0.827
0.837
0.832
0.808
0.782
0.750
0.702
0.657
0.656
0.631
0.717

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.050
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.040
0.038
0.045
0.052
0.055
0.055

0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.030
0.030
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048
0.045
0.041
0.040
0.037
0.033
0.032
0.031
0.027
0.026
0.021
0.018
0.003

22

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

to 25 m, and these layers are separated by thin shales with


thickness between the range of 1 m and 2 m. The graphical
presentation of oil eld reservoir for this study is shown in Fig. 3.
The initial reservoir pressure was recorded as 144 kg/cm2 at
1397 m. The quantity of reserved oil inplace was 2.47 MMt, and
the cumulative oil production until September 2009 was 0.72 MMt
which is 29.1% of the inplace reserve and 64.5% of ultimate reserve.
The marginal drop in reservoir pressure against cumulative oil
production of 0.72 MMt indicates that the reservoir is operating
under active water drive. Most of the wells are producing gas to oil
ratio (GOR) in the range of 3035 v/v as wells are owing above
the bubble point pressure. Hence the model shows constant
producing GOR.
The eld started producing oil from February 2000 through
Well-1 and December 2000 through Well-2. Well-1 produces oil at
58 m3/d through 6 mm bean. The initial reservoir pressure
recorded at Well-1 was 144.6 kg/cm2 at 1385 m. The cumulative
productions of oil, gas and water from Well-1 till September 2009
are 0.156 MMt, 8.1 MMm3, and 7.2 Mm3, respectively. Later, other
wells (Well-3Well-8) were drilled and put on production in
different years until 2009. In this study, we are interested in
forecasting cumulative oil production from the ve wells having
continuous production history, Well-1, Well-2, Well-3, Well-4 and
Well-5. Two cases are studied for cumulative oil production

measurements in the variable y are lagging or leading those in x by


k time steps.
nxt xyt yxt yt
r xyk q

2 q

2
nx2t xt
ny2t yt

12

In the present study only positive lags have been used. The
presence of positive lagk between xt and yt indicates that the
relationship between these time series will be most signicant
when the data in x at time t are related to data in y at time t+k.

5. The reservoir under study


The simulation studies are carried out with the production data
from a real oil eld reservoir which has 63 months (from 2004 to
2009) of production history. The oil eld is located in southwestern part of Tarapur Block of Cambay Basin and to the west of
Cambay Gas Field in India. This eld consists of total eight oil
producing wells. The structure of the eld trends NNWSSE in
direction and bounded by a fault on either side, which separates
the structure from the adjoining lows. The reservoir structure is
controlled by EastWest trending normal fault in the north, and it
narrows down toward south. The eld is composed of three
sandstone layers (L-1, L-2, and L-3) having varying thickness up

0.25

0.12

0.08
0.06
0.04

0.15
0.1
0.05

0.02
0

Original data
Smoothed data

0.2

Original data
Smoothed data

Data value

Data value

0.1

0
0

10

20

30

40

50

60

70

10

20

30

50

60

70

0.14

0.7

0.12

Original data
Smoothed data

0.1
Data value

0.6
Data value

40
Months

Months

0.5
0.4

0.08
Original data
Smoothed data

0.06
0.04

0.3
0.2

0.02
0

10

20

30

40

50

60

70

10

20

30

Months

40

50

60

70

Months

0.25
Original data
Smoothed data

Data value

0.2
0.15
0.1
0.05
0

10

20

30

40

50

60

70

Months

Fig. 4. Oil production history from 2004 to 2009 before and after smoothing of (a) Well-1, (b) Well-2, (c) Well-3, (d) Well-4, and (e) Well-5.

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

forecasting using (i) only oil production data from ve wells and
(ii) oil, water and gas production data from ve wells.

Cross-correlation

0.8

5.1. Structure of HONN

0.6

In this study, a number of design factors for HONN were


considered such as selection of neural structure (order of synaptic
operation), numbers of neurons and hidden layers. Also, different
mapping functions (somatic operation) were selected after careful
investigation in each layer: a sigmoidal (hyperbolic tangent)
function for hidden layers and a linear function for the output
layer. In HONN, three synaptic operations were applied for HONN
modeling: linear synaptic operation (LSO), quadratic synaptic
operation (QSO) and cubic synaptic operation (CSO). It should be
noted that LSO represents the synaptic operation of the conventional NN. Only one hidden layer was used since it resulted in the
best output for time sequence applications such as forecasting
(Tiwari et al., 2012) and different number of neurons (110 for
case-1, and 15 for case-2) in the hidden layer were applied. Each
HONN model was run with learning rate of 0.01 and different
initial synaptic weights. The learning rate was dynamically
updated by multiplying with 0.7 for increasing error and with
1.05 for reducing error. After pre-processing, the data were divided
into three segments for training, validation and test. Thirty-ve
data sets were used for training, 17 for validation and remaining 10
data for testing. Each model was trained and updated for 200
epochs for training and validation before test.

0.4

0.2

0
-80

-60

-40

-20

20

40

60

20

40

60

80

Lags

1.2

Cross-correlation

1
0.8
0.6
0.4
0.2
0
-80

23

-60

-40

-20

80

5.2. Case study-1

Lags
Fig. 5. Cross-correlation of cumulative oil production to ve wells. The legend
represents the correlation between two parameters. The dark circles on the plot
indicate the highest correlation between two parameters. (a) CCF of original oil
production data and (b) CCF of smoothed oil production data.

In this case study, for training HONN, the monthly oil production ratios from month one to month 63 (6 years) were used for
cumulative oil production forecasting as shown in Table 1a. The
monthly production ratios were calculated using the maximum

Table 2
The training, validation and target data used to train HONN models for scenario 1 (lag1).
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Target

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.05
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.04
0.038
0.045

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.04
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031

0.309
0.303
0.299
0.3
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.35
0.389
0.41
0.421
0.437
0.402
0.357
0.363

0.09
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.1
0.091
0.082
0.065
0.061
0.06
0.059
0.062
0.065
0.057
0.051
0.045
0.048

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.15
0.143
0.129
0.111
0.1
0.094
0.086
0.084

0.796
0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.67
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598

24

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

Table 2 (continued )
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Target

30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52

0.052
0.055
0.055
0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.03
0.03
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048
0.045

0.037
0.043
0.042
0.04
0.04
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.048
0.068
0.089

0.376
0.386
0.393
0.444
0.493
0.53
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.59
0.586
0.579
0.569
0.57
0.57
0.567
0.577
0.579
0.584

0.051
0.058
0.058
0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.04
0.04
0.04
0.041
0.042
0.043
0.046
0.05
0.05
0.048

0.082
0.076
0.073
0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.06
0.043
0.031
0.033
0.033
0.029
0.035
0.041
0.033

0.618
0.621
0.675
0.714
0.738
0.77
0.824
0.747
0.722
0.704
0.685
0.67
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.8
0.827

Table 3
Performance measure of HONNs using single lag1.
Synaptic operation

Number of hidden layers

Number of neurons

MSE

RMSE

MAPE (%)

Mean

SD

Mean

SD

Mean

SD

Linear synaptic operation (LSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.003
0.003
0.002
0.002
0.002
0.002
0.002
0.004
0.002

0.001
0.001
0.001
0.000
0.000
0.000
0.001
0.001
0.001
0.000

0.057
0.054
0.052
0.047
0.043
0.044
0.043
0.049
0.061
0.041

0.010
0.011
0.010
0.004
0.006
0.004
0.006
0.006
0.008
0.004

6.777
5.954
6.042
5.425
4.717
5.041
5.276
5.979
6.771
4.055

1.302
1.181
0.956
0.489
0.721
0.599
0.776
0.799
0.768
0.620

Quadratic synaptic operation (QSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.001
0.003
0.001
0.003
0.001
0.002
0.001
0.002
0.003

0.001
0.000
0.001
0.000
0.001
0.000
0.001
0.000
0.001
0.001

0.055
0.038
0.050
0.038
0.051
0.038
0.044
0.036
0.041
0.052

0.008
0.003
0.008
0.005
0.012
0.004
0.006
0.006
0.010
0.010

6.610
3.943
5.464
4.355
5.248
4.081
5.268
4.131
3.898
5.663

0.711
0.383
0.902
0.852
1.454
0.781
0.822
0.831
0.590
1.127

Cubic synaptic operation (CSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.002
0.002
0.001
0.003
0.003
0.002
0.003
0.001
0.002

0.001
0.001
0.001
0.001
0.001
0.002
0.000
0.000
0.001
0.000

0.056
0.044
0.047
0.035
0.050
0.049
0.041
0.055
0.033
0.047

0.005
0.009
0.006
0.010
0.007
0.016
0.003
0.005
0.009
0.005

6.517
4.920
5.368
3.459
5.758
5.373
4.044
6.610
3.634
5.087

0.481
0.813
0.981
0.452
1.001
1.845
0.741
0.706
0.930
0.493

The numbers in bold represent best results from HONN.

production of products (approximately 9500 m3/month for oil)


through the 6 year production history of Well-1, Well-2, Well-3,
Well-4 and Well-5. For example, C1, the production ratio for
Well-1, is calculated by dividing the rst month's oil production
from this well by 9500.

In the pre-processing stage, the oil production ratio data were


smoothed using a ve point moving average lter. Table 1b shows the
smoothed oil production data for this case study. The graphical
representation of original oil production versus smoothed production
data for Well-1 to Well-5 is shown in Fig. 4. As seen in this gure, the

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

high peaks of the data were smoothed. After the smoothing process,
the cross-correlations of the cumulative oil production data for the ve
wells were calculated by the cross-correlation function (CCF). The CCF
plots of oil production before and after smoothing are presented in
Fig. 5. In Fig. 5, notations; C1, C2, C3, C4, and C5, in the legend box
corresponds to the each wells oil production from Well-1Well-5 and
C6 corresponds to cumulative oil production. C1C6, in Fig. 5, represents the cross-correlation between the oil productions from Well-1 to
cumulative oil production from all the wells. As shown, the highest
correlation occurs at lag0. Since lag0 represents current time step (no
step ahead), lag0 can be neglected for forecasting methods. From the
CCF plot, it was identied that lag1 and lag2 have the most signicant
correlation which means that the input variables in these lags are the
optimal to train HONN. HONN was trained to forecast cumulative oil
production based on three scenarios: (1) using only lag1 (single lag1)
for training; (2) using only lag2 (single lag2) for training; and (3) using
lag1 and lag2 (accumulated lag2) for training.

25

5.2.1. HONN using single Lag1


In scenario 1, rst 35 months data were used for training and
next 17 months data for validation, and the data were randomly
rearranged as presented in Table 2 after the pre-processing. In
Table 2, Input-1, Input-2, Input-3, Input-4, and Input-5 correspond
to oil productions rates from Well-1, Well-2, Well-3, Well-4 and
Well-5. It may be noted that to account for this lag1, the
cumulative oil production has been advanced by one time step.
The simulation results from HONN model in terms of RMSE,
MSE and MAPE are shown in Table 3, comparing the performances
with different number of neurons in the hidden layer. The
selection criteria for a better model are lower values of MAPE,
MSE and RMSE. From simulation results, the best model was
HONN with CSO having four neurons in the hidden layer. The
performance indices show that the best model is the HONN with
CSO resulted in MAPE 3.459%, MSE 0.001, and RMSE 0.035. In
case of HONN with LSO, MAPE 4.055% was achieved having 10

Table 4
The training, validation and target data used to train HONN models for scenario 2 (lag2).
Months

Input 1

Input-2

Input -3

Input -4

Input-5

Target

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.05
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.04
0.038
0.045
0.052
0.055
0.055
0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.03
0.03
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.04
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043
0.042
0.04
0.04
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.048
0.068

0.309
0.303
0.299
0.3
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.35
0.389
0.41
0.421
0.437
0.402
0.357
0.363
0.376
0.386
0.393
0.444
0.493
0.53
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.59
0.586
0.579
0.569
0.57
0.57
0.567
0.577
0.579

0.09
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.1
0.091
0.082
0.065
0.061
0.06
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058
0.058
0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.04
0.04
0.04
0.041
0.042
0.043
0.046
0.05
0.05

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.15
0.143
0.129
0.111
0.1
0.094
0.086
0.084
0.082
0.076
0.073
0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.06
0.043
0.031
0.033
0.033
0.029
0.035
0.041

0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.67
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598
0.618
0.621
0.675
0.714
0.738
0.77
0.824
0.747
0.722
0.704
0.685
0.67
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.8
0.827

26

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

Table 5
Performance measure of HONNs using single lag2.
Synaptic operation

Number of hidden layers

Number of neurons

MSE

RMSE

MAPE (%)

Mean

SD

Mean

SD

Mean

SD

Linear synaptic operation (LSO)

1
2
3
4
5
6
7
8
9
10

0.004
0.003
0.003
0.005
0.002
0.004
0.003
0.003
0.004
0.003

0.001
0.001
0.001
0.001
0.001
0.001
0.001
0.001
0.000
0.001

0.065
0.058
0.055
0.070
0.049
0.060
0.054
0.058
0.060
0.057

0.010
0.006
0.006
0.004
0.006
0.005
0.007
0.006
0.004
0.008

7.562
6.392
6.169
7.695
5.925
6.473
6.110
6.702
6.884
6.242

0.831
0.894
0.793
0.210
0.821
0.382
0.722
0.740
0.667
0.919

Quadratic synaptic operation (QSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.003
0.002
0.003
0.003
0.003
0.002
0.003
0.002
0.002

0.001
0.001
0.001
0.000
0.001
0.001
0.000
0.001
0.001
0.001

0.057
0.054
0.047
0.056
0.051
0.052
0.043
0.057
0.049
0.046

0.007
0.010
0.011
0.004
0.008
0.010
0.004
0.005
0.008
0.008

6.220
5.859
5.216
6.279
5.358
5.649
5.127
6.614
5.445
5.381

0.699
0.843
1.057
0.624
1.124
1.145
0.694
0.606
1.431
1.017

Cubic synaptic operation (CSO)

1
2
3
4
5
6
7
8
9
10

0.002
0.002
0.002
0.003
0.002
0.003
0.003
0.002
0.004
0.008

0.001
0.000
0.000
0.001
0.001
0.000
0.002
0.001
0.001
0.005

0.049
0.045
0.048
0.052
0.048
0.055
0.057
0.049
0.060
0.083

0.005
0.005
0.005
0.006
0.009
0.003
0.016
0.007
0.008
0.030

5.650
4.882
5.665
5.818
5.413
6.222
6.573
5.276
6.623
9.662

0.761
0.791
0.978
0.862
1.369
0.577
2.033
0.614
1.030
2.979

The numbers in bold represent best results from HONN.

Table 6
The training, validation and target data used to train HONN models for scenario 3 (accumulated lag2).
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Input-6

Input-7

Input-8

Input-9

Input-10

Target

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31

0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.05
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.04
0.038
0.045
0.052
0.055
0.055

0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.04
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043
0.042

0.303
0.299
0.3
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.35
0.389
0.41
0.421
0.437
0.402
0.357
0.363
0.376
0.386
0.393

0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.1
0.091
0.082
0.065
0.061
0.06
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058
0.058

0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.15
0.143
0.129
0.111
0.1
0.094
0.086
0.084
0.082
0.076
0.073

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.05
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.04
0.038
0.045
0.052
0.055

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.04
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043

0.309
0.303
0.299
0.3
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.35
0.389
0.41
0.421
0.437
0.402
0.357
0.363
0.376
0.386

0.09
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.1
0.091
0.082
0.065
0.061
0.06
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.15
0.143
0.129
0.111
0.1
0.094
0.086
0.084
0.082
0.076

0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.67
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598
0.618
0.621
0.675

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

27

Table 6 (continued )
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Input-6

Input-7

Input-8

Input-9

Input-10

Target

32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51

0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.03
0.03
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048
0.045

0.04
0.04
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.048
0.068
0.089

0.444
0.493
0.53
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.59
0.586
0.579
0.569
0.57
0.57
0.567
0.577
0.579
0.584

0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.04
0.04
0.04
0.041
0.042
0.043
0.046
0.05
0.05
0.048

0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.06
0.043
0.031
0.033
0.033
0.029
0.035
0.041
0.033

0.055
0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.03
0.03
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048

0.042
0.04
0.04
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.048
0.068

0.393
0.444
0.493
0.53
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.59
0.586
0.579
0.569
0.57
0.57
0.567
0.577
0.579

0.058
0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.04
0.04
0.04
0.041
0.042
0.043
0.046
0.05
0.05

0.073
0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.06
0.043
0.031
0.033
0.033
0.029
0.035
0.041

0.714
0.738
0.77
0.824
0.747
0.722
0.704
0.685
0.67
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.8
0.827

Table 7
Performance measure of HONNs using accumulated lag2.
Synaptic operation

Number of hidden layers

Number of neurons

MSE

RMSE

MAPE (%)

Mean

SD

Mean

SD

Mean

SD

Linear synaptic operation (LSO)

1
2
3
4
5
6
7
8
9
10

0.004
0.004
0.003
0.004
0.003
0.003
0.003
0.002
0.003
0.002

0.001
0.001
0.001
0.001
0.000
0.000
0.001
0.000
0.001
0.001

0.059
0.063
0.055
0.059
0.051
0.054
0.051
0.042
0.056
0.047

0.008
0.008
0.010
0.010
0.003
0.004
0.007
0.003
0.007
0.005

6.715
7.134
6.362
6.670
5.711
6.266
5.167
4.748
6.695
5.134

1.261
0.563
0.834
1.081
0.150
0.565
1.024
0.802
0.578
0.671

Quadratic synaptic operation (QSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.002
0.002
0.003
0.002
0.002
0.002
0.003
0.003
0.005

0.000
0.000
0.000
0.001
0.001
0.001
0.001
0.001
0.001
0.002

0.055
0.041
0.040
0.051
0.045
0.045
0.047
0.055
0.054
0.070

0.004
0.003
0.003
0.011
0.006
0.010
0.009
0.005
0.007
0.015

6.583
4.465
4.320
5.438
5.184
5.289
4.984
5.525
6.199
7.320

0.534
0.547
0.407
1.413
0.717
1.534
0.847
0.576
1.087
1.138

Cubic synaptic operation (CSO)

1
2
3
4
5
6
7
8
9
10

0.003
0.002
0.002
0.003
0.003
0.002
0.004
0.003
0.004
0.005

0.000
0.001
0.000
0.001
0.001
0.001
0.001
0.001
0.001
0.002

0.056
0.046
0.039
0.053
0.058
0.045
0.063
0.054
0.063
0.072

0.003
0.008
0.003
0.008
0.009
0.011
0.007
0.012
0.008
0.013

6.456
5.214
4.045
5.474
6.141
5.362
7.013
6.045
7.293
7.620

0.516
1.181
0.462
1.168
1.016
1.508
0.853
1.748
0.901
1.191

The numbers in bold represent best results from HONN.

neural units in the hidden layer, and MAPE 3.89% was achieved
by HONN with QSO having nine neurons in the hidden layer.

5.2.2. HONN using single Lag2


In scenario 2, rst 34 months data were used for training and
next 17 months data for validation were selected applying single

lag2 after pre-processing and the data were randomly rearranged


as listed in Table 4. Here again, the rst number in C6 has been
advanced by two time steps.
Table 5 lists the simulation results using single lag2 for HONN.
The performance indices show that the best model is HONN with
CSO resulting in MAPE 4.882%, MSE 0.002, and RMSE 0.045
with two neurons in the hidden layer. HONN with QSO also

28

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

resulted in low error having MAPE 5.127, MSE 0.002, RMSE


0.043 with seven neurons in the hidden layer. However, the
HONNs LSO resulted in higher errors compared to CSO and QSO
with single lag2.

5.2.3. HONN using accumulated Lag2


In this scenario, after the pre-processing, the data were
selected applying accumulated lag2 (lag1 and lag2), and the data
were randomly rearranged for training and validation as shown in

Table 8
Smoothed monthly oil, gas and water production data ratios of 5 wells and cumulative oil production ratio.
Months Well-1

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63

Well-2

Well-3

Well-4

Well-5

Cumulative
oil

Oil
(C1)

Gas
(C2)

Water
(C3)

Oil
(C4)

Gas
(C5)

Water
(C6)

Oil
(C7)

Gas
(C8)

Water
(C9)

Oil
(C10)

Gas
(C11)

Water
(C12)

Oil
(C13)

Gas
(C14)

Water
(C15)

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.050
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.040
0.038
0.045
0.052
0.055
0.055
0.054
0.048
0.042
0.036
0.035
0.033
0.033
0.034
0.030
0.030
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048
0.045
0.041
0.040
0.037
0.033
0.032
0.031
0.027
0.026
0.021
0.018
0.003

0.107
0.103
0.107
0.103
0.106
0.110
0.098
0.085
0.078
0.067
0.054
0.058
0.065
0.068
0.076
0.081
0.086
0.082
0.082
0.079
0.076
0.070
0.067
0.059
0.050
0.047
0.046
0.045
0.052
0.060
0.064
0.064
0.062
0.055
0.047
0.040
0.039
0.036
0.038
0.039
0.036
0.037
0.042
0.041
0.041
0.050
0.051
0.053
0.058
0.059
0.060
0.056
0.051
0.049
0.046
0.041
0.039
0.038
0.032
0.029
0.022
0.019
0.003

0.184
0.188
0.190
0.205
0.210
0.211
0.196
0.184
0.155
0.138
0.123
0.122
0.144
0.156
0.182
0.211
0.242
0.231
0.234
0.221
0.203
0.187
0.191
0.199
0.207
0.207
0.200
0.193
0.184
0.175
0.178
0.175
0.179
0.181
0.185
0.186
0.192
0.169
0.163
0.161
0.167
0.168
0.182
0.175
0.171
0.160
0.154
0.159
0.159
0.158
0.153
0.154
0.154
0.153
0.155
0.157
0.156
0.158
0.168
0.169
0.147
0.137
0.046

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.040
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043
0.042
0.040
0.040
0.038
0.031
0.029
0.025
0.021
0.017
0.017
0.020
0.024
0.027
0.029
0.030
0.026
0.022
0.028
0.048
0.068
0.089
0.111
0.124
0.142
0.156
0.167
0.180
0.188
0.181
0.171
0.163
0.14

0.050
0.047
0.045
0.049
0.047
0.046
0.046
0.046
0.039
0.037
0.034
0.031
0.029
0.025
0.025
0.024
0.023
0.021
0.022
0.021
0.021
0.023
0.024
0.023
0.022
0.022
0.023
0.026
0.029
0.035
0.040
0.039
0.038
0.037
0.034
0.028
0.026
0.022
0.019
0.016
0.017
0.020
0.024
0.027
0.029
0.030
0.026
0.022
0.028
0.049
0.069
0.091
0.112
0.124
0.142
0.160
0.165
0.174
0.177
0.166
0.147
0.142
0.13

0.203
0.210
0.215
0.197
0.195
0.189
0.162
0.137
0.133
0.127
0.121
0.135
0.142
0.145
0.142
0.141
0.140
0.140
0.145
0.141
0.134
0.125
0.122
0.112
0.101
0.088
0.074
0.058
0.065
0.069
0.078
0.086
0.084
0.067
0.054
0.049
0.057
0.065
0.084
0.104
0.116
0.096
0.093
0.080
0.065
0.049
0.050
0.039
0.030
0.019
0.012
0.005
0.010
0.013
0.022
0.033
0.040
0.043
0.049
0.048
0.045
0.049
0.054

0.309
0.303
0.299
0.300
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.350
0.389
0.410
0.421
0.437
0.402
0.357
0.363
0.376
0.386
0.393
0.444
0.493
0.530
0.574
0.624
0.566
0.548
0.534
0.528
0.519
0.590
0.586
0.579
0.569
0.570
0.570
0.567
0.577
0.579
0.584
0.600
0.602
0.586
0.555
0.521
0.482
0.437
0.407
0.386
0.350
0.339

0.256
0.251
0.248
0.256
0.266
0.277
0.295
0.312
0.314
0.323
0.327
0.330
0.320
0.302
0.286
0.281
0.271
0.271
0.296
0.321
0.327
0.366
0.404
0.420
0.426
0.440
0.408
0.366
0.372
0.382
0.391
0.398
0.446
0.484
0.506
0.533
0.572
0.515
0.505
0.504
0.516
0.517
0.589
0.587
0.581
0.569
0.569
0.570
0.570
0.584
0.590
0.596
0.607
0.604
0.586
0.563
0.514
0.467
0.411
0.372
0.333
0.304
0.301

0.029
0.016
0.012
0.008
0.006
0.006
0.007
0.009
0.012
0.015
0.018
0.015
0.020
0.022
0.025
0.033
0.048
0.053
0.053
0.048
0.041
0.039
0.045
0.049
0.053
0.057
0.056
0.053
0.046
0.045
0.048
0.040
0.029
0.033
0.040
0.042
0.054
0.059
0.078
0.101
0.110
0.111
0.116
0.106
0.084
0.079
0.078
0.082
0.086
0.087
0.078
0.062
0.042
0.033
0.049
0.076
0.116
0.166
0.191
0.192
0.188
0.163
0.159

0.090
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.100
0.091
0.082
0.065
0.061
0.060
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058
0.058
0.062
0.057
0.052
0.046
0.048
0.041
0.038
0.036
0.033
0.033
0.040
0.040
0.040
0.041
0.042
0.043
0.046
0.050
0.050
0.048
0.048
0.046
0.040
0.037
0.034
0.031
0.027
0.026
0.027
0.026
0.031

0.047
0.044
0.042
0.040
0.041
0.043
0.046
0.051
0.055
0.056
0.058
0.048
0.058
0.067
0.082
0.093
0.104
0.095
0.086
0.070
0.068
0.067
0.065
0.069
0.071
0.062
0.056
0.050
0.053
0.056
0.064
0.063
0.067
0.061
0.054
0.048
0.049
0.042
0.040
0.039
0.038
0.038
0.046
0.046
0.046
0.047
0.048
0.049
0.053
0.058
0.059
0.057
0.057
0.053
0.046
0.043
0.039
0.034
0.029
0.028
0.027
0.026
0.031

0.210
0.217
0.229
0.243
0.261
0.268
0.287
0.297
0.300
0.302
0.287
0.222
0.161
0.106
0.047
0.041
0.062
0.082
0.106
0.128
0.155
0.162
0.167
0.163
0.158
0.130
0.127
0.119
0.117
0.114
0.107
0.102
0.108
0.114
0.123
0.129
0.138
0.121
0.120
0.121
0.119
0.119
0.140
0.143
0.141
0.139
0.132
0.124
0.114
0.106
0.107
0.108
0.112
0.118
0.125
0.127
0.133
0.138
0.146
0.147
0.142
0.140
0.11

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.150
0.143
0.129
0.111
0.100
0.094
0.086
0.084
0.082
0.076
0.073
0.075
0.076
0.078
0.083
0.088
0.083
0.083
0.083
0.076
0.067
0.067
0.060
0.043
0.031
0.033
0.033
0.029
0.035
0.041
0.033
0.027
0.027
0.027
0.028
0.028
0.027
0.023
0.017
0.051
0.073
0.204

0.217
0.214
0.205
0.203
0.204
0.205
0.206
0.207
0.210
0.210
0.208
0.215
0.211
0.187
0.170
0.163
0.151
0.147
0.168
0.187
0.190
0.182
0.173
0.154
0.129
0.115
0.109
0.101
0.099
0.095
0.088
0.084
0.086
0.086
0.087
0.092
0.098
0.093
0.096
0.099
0.093
0.084
0.086
0.077
0.066
0.066
0.068
0.068
0.063
0.059
0.053
0.043
0.035
0.034
0.034
0.035
0.035
0.033
0.028
0.020
0.057
0.082
0.230

0.080
0.077
0.072
0.070
0.067
0.062
0.059
0.056
0.048
0.041
0.035
0.026
0.023
0.049
0.088
0.121
0.160
0.189
0.190
0.178
0.172
0.188
0.208
0.238
0.269
0.287
0.285
0.289
0.293
0.294
0.310
0.306
0.299
0.288
0.280
0.261
0.267
0.241
0.234
0.232
0.222
0.206
0.223
0.218
0.209
0.213
0.212
0.206
0.206
0.207
0.207
0.210
0.210
0.193
0.182
0.176
0.172
0.172
0.180
0.145
0.112
0.063
0.011

0.820
0.796
0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.670
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598
0.618
0.621
0.675
0.714
0.738
0.770
0.824
0.747
0.722
0.704
0.685
0.670
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.800
0.827
0.837
0.832
0.808
0.782
0.750
0.702
0.657
0.656
0.631
0.717

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

show that the best model is HONN with CSO having three neurons
in the hidden layer which resulted in MAPE 4.045%, MSE 0.002,
and RMSE 0.039.
In case study-1, overall, the performance measure of
HONN models show that HONN with CSO is the best model
for forecasting cumulative oil production by yielding a stable
value between the range of MSE 0.0020.005, RMSE

0.35

0.14

0.1

0.12

0.3

0.1

0.25

0.06

Original data
Smoothed data

0.04
0.02
0

Data value

0.12

Data value

Data value

Table 6. The Input-1 to Input-5 in Table 6 corresponds to oil


production from Well-1 to Well-5 at time lag1, and Input-6 to
Input-10 in the table corresponds to oil production from Well-1 to
Well-5 at time lag2. Hence, both lag1 and lag2 are combined to
form the training data for accumulated lag2.
The simulation results from HONN models are presented in
Table 7 with their performance accuracy. The performance indices

0.08

0.08
Original data
Smoothed data

0.06
0.04
0.02

10

20

30
40
Months

50

60

70

0.25

29

Original data
Smoothed data

0.2
0.15
0.1
0.05

10

20

30
40
Months

50

60

70

10

20

30
40
Months

50

60

70

0.25
0.25
Original data
Smoothed data

0.1

20

30
40
Months

50

60

70

10

20

30
40
Months

50

60

0.2

0.2
0

10

20

30
40
Months

50

60

70

Data value
0

0.14

0.12

0.12

0.1

0.1

Original data
Smoothed data

0.06
0.04

20

30
40
Months

50

60

10

20

30
40
Months

50

60

0.25

50

60

70

0.1

10

20

30
40
Months

50

60

70

0.35
0.3

Original data
Smoothed data

0.08
0.06
0.04

70

30
40
Months

0.15

70

Original data
Smoothed data

0.25
0.2
0.15
0.1
0.05

10

20

30
40
Months

50

60

70

10

20

30
40
Months

50

60

70

0.25
Original data
Smoothed data

0.15
0.1
0.05

0.35

Original data
Smoothed data

0.2
Data value

0.2

0.3
Data value

10

0.02

0.02

20

0.05

0.14

0.08

10

Original data
Smoothed data

0.2

0.4
0.3

0.25
Original data
Smoothed data

0.5

0.3

Original data
Smoothed data

0.1

70

Data value

Data value

0.4

Data value

Data value

0.6

0.5

0.15

0.05

0.7
Original data
Smoothed data

0.6

Data value

0.1
0.05

0.7

Data value

0.15

0.05

10

0.2
Data value

0.15

Original data
Smoothed data

0.2
Data value

Data value

0.2

0.15
0.1
0.05

Original data
Smoothed data

0.25
0.2
0.15
0.1
0.05

10

20

30
40
Months

50

60

70

10

20

30
40
Months

50

60

70

0
0

10

20

30
40
Months

50

60

70

Fig. 6. Before and after smoothing process of oil, gas and water productions from ve wells. (1a) oil from well-1, (1b) water from well-1, (1c) gas from well-1; (2a) oil from
well-2, (2b) water from well-2, (2c)gas from well-2; (3a) oil from well-3, (3b) water from well-3, (3c) gas from well-3; (4a) oil from well-4, (4b) water from well-4, (4c) gas
from well-4; and (5a) oil from well-5, (5b) water from well-5, (5c) gas from well-5.

30

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

0.0390.072 and MAPE 47.6% with few neurons in the


hidden layer.

Cross-correlation

0.8

5.3. Case study-2

0.6
0.4
0.2
0
-80

-60

-40

-20

20

40

60

80

Lags

Cross-correlation

0.8
0.6
0.4
0.2
0
-80

-60

-40

-20

20

40

60

80

Lags

Fig. 7. (a) CCF of original oil, gas and water production data and (b) CCF of
smoothed oil, gas and water production data. The dark circles on the plot indicate
the highest correlation between two parameters.

For this simulation study, monthly oil, gas and water production ratios from Well-1, Well-2, Well-3, Well-4 and Well-5 for 63
months were used for forecasting of cumulative oil production.
The production ratios used for this simulation were obtained by
dividing the production of oil, gas and water with maximum
cumulative productions (through 63 months) of oil, gas and water,
approximately 9500 m3, 275,000 m3, and 5200 m3, respectively.
Table 8 presents the smoothed monthly oil, gas and water
production data ratios of ve wells for 63 months.
This case study shows how additional input parameters (gas
and water production) inuence the efciency of HONN model in
forecasting cumulative oil production. The production data were
preprocessed by applying smoothing process and crosscorrelation. The smoothing process was carried out by a moving
averaging lter with ve sequence data points to reduce noise. Oil,
gas and water production ratios of ve producing wells before and
after smoothing are graphically represented in Fig. 6. After that,
the smoothed data were used to nd correlation between the
cumulative oil production and oil, gas, and water production from
each well by cross-correlation function (CCF) as shown in Fig. 7.
The notations C1, C2, and C3 in the legend box of Fig. 7 indicates
the oil, gas and water production of Well-1. Similarly, C4, C5, and
C6 corresponds to oil, gas and water production from Well-2 and

Table 9
The training, validation and test data sets used for training HONN model for case study-2.
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Input-6

Input-7

Input-8

Input-9

Input-10

Input-11

Input-12

Input-13

Input-14

Target

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38

0.111
0.107
0.106
0.099
0.099
0.101
0.091
0.081
0.074
0.062
0.05
0.051
0.057
0.061
0.069
0.074
0.078
0.074
0.073
0.068
0.063
0.058
0.055
0.049
0.043
0.041
0.04
0.038
0.045
0.052
0.055
0.055
0.054
0.048
0.042
0.036
0.035
0.033

0.107
0.103
0.107
0.103
0.106
0.11
0.098
0.085
0.078
0.067
0.054
0.058
0.065
0.068
0.076
0.081
0.086
0.082
0.082
0.079
0.076
0.07
0.067
0.059
0.05
0.047
0.046
0.045
0.052
0.06
0.064
0.064
0.062
0.055
0.047
0.04
0.039
0.036

0.184
0.188
0.19
0.205
0.21
0.211
0.196
0.184
0.155
0.138
0.123
0.122
0.144
0.156
0.182
0.211
0.242
0.231
0.234
0.221
0.203
0.187
0.191
0.199
0.207
0.207
0.2
0.193
0.184
0.175
0.178
0.175
0.179
0.181
0.185
0.186
0.192
0.169

0.072
0.068
0.065
0.068
0.064
0.061
0.059
0.057
0.048
0.044
0.04
0.036
0.032
0.028
0.027
0.026
0.025
0.023
0.023
0.022
0.022
0.023
0.025
0.025
0.024
0.024
0.024
0.027
0.031
0.037
0.043
0.042
0.04
0.04
0.038
0.031
0.029
0.025

0.05
0.047
0.045
0.049
0.047
0.046
0.046
0.046
0.039
0.037
0.034
0.031
0.029
0.025
0.025
0.024
0.023
0.021
0.022
0.021
0.021
0.023
0.024
0.023
0.022
0.022
0.023
0.026
0.029
0.035
0.04
0.039
0.038
0.037
0.034
0.028
0.026
0.022

0.203
0.21
0.215
0.197
0.195
0.189
0.162
0.137
0.133
0.127
0.121
0.135
0.142
0.145
0.142
0.141
0.14
0.14
0.145
0.141
0.134
0.125
0.122
0.112
0.101
0.088
0.074
0.058
0.065
0.069
0.078
0.086
0.084
0.067
0.054
0.049
0.057
0.065

0.309
0.303
0.299
0.3
0.304
0.309
0.319
0.326
0.324
0.326
0.327
0.332
0.322
0.307
0.295
0.291
0.281
0.279
0.297
0.311
0.312
0.35
0.389
0.41
0.421
0.437
0.402
0.357
0.363
0.376
0.386
0.393
0.444
0.493
0.53
0.574
0.624
0.566

0.256
0.251
0.248
0.256
0.266
0.277
0.295
0.312
0.314
0.323
0.327
0.33
0.32
0.302
0.286
0.281
0.271
0.271
0.296
0.321
0.327
0.366
0.404
0.42
0.426
0.44
0.408
0.366
0.372
0.382
0.391
0.398
0.446
0.484
0.506
0.533
0.572
0.515

0.09
0.084
0.081
0.075
0.074
0.073
0.074
0.078
0.082
0.085
0.081
0.065
0.069
0.072
0.079
0.089
0.1
0.091
0.082
0.065
0.061
0.06
0.059
0.062
0.065
0.057
0.051
0.045
0.048
0.051
0.058
0.058
0.062
0.057
0.052
0.046
0.048
0.041

0.047
0.044
0.042
0.04
0.041
0.043
0.046
0.051
0.055
0.056
0.058
0.048
0.058
0.067
0.082
0.093
0.104
0.095
0.086
0.07
0.068
0.067
0.065
0.069
0.071
0.062
0.056
0.05
0.053
0.056
0.064
0.063
0.067
0.061
0.054
0.048
0.049
0.042

0.21
0.217
0.229
0.243
0.261
0.268
0.287
0.297
0.3
0.302
0.287
0.222
0.161
0.106
0.047
0.041
0.062
0.082
0.106
0.128
0.155
0.162
0.167
0.163
0.158
0.13
0.127
0.119
0.117
0.114
0.107
0.102
0.108
0.114
0.123
0.129
0.138
0.121

0.237
0.233
0.224
0.216
0.211
0.207
0.202
0.195
0.195
0.191
0.187
0.192
0.189
0.168
0.154
0.147
0.137
0.133
0.147
0.158
0.158
0.15
0.143
0.129
0.111
0.1
0.094
0.086
0.084
0.082
0.076
0.073
0.075
0.076
0.078
0.083
0.088
0.083

0.217
0.214
0.205
0.203
0.204
0.205
0.206
0.207
0.21
0.21
0.208
0.215
0.211
0.187
0.17
0.163
0.151
0.147
0.168
0.187
0.19
0.182
0.173
0.154
0.129
0.115
0.109
0.101
0.099
0.095
0.088
0.084
0.086
0.086
0.087
0.092
0.098
0.093

0.08
0.077
0.072
0.07
0.067
0.062
0.059
0.056
0.048
0.041
0.035
0.026
0.023
0.049
0.088
0.121
0.16
0.189
0.19
0.178
0.172
0.188
0.208
0.238
0.269
0.287
0.285
0.289
0.293
0.294
0.31
0.306
0.299
0.288
0.28
0.261
0.267
0.241

0.796
0.775
0.758
0.752
0.752
0.745
0.736
0.722
0.708
0.685
0.676
0.67
0.637
0.624
0.628
0.621
0.601
0.622
0.625
0.616
0.641
0.671
0.675
0.663
0.659
0.612
0.554
0.571
0.598
0.618
0.621
0.675
0.714
0.738
0.77
0.824
0.747
0.722

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

31

Table 9 (continued )
Months

Input-1

Input-2

Input-3

Input-4

Input-5

Input-6

Input-7

Input-8

Input-9

Input-10

Input-11

Input-12

Input-13

Input-14

Target

39
40
41
42
43
44
45
46
47
48
49
50
51
52

0.033
0.034
0.03
0.03
0.034
0.033
0.034
0.041
0.042
0.043
0.047
0.048
0.048
0.045

0.038
0.039
0.036
0.037
0.042
0.041
0.041
0.05
0.051
0.053
0.058
0.059
0.06
0.056

0.163
0.161
0.167
0.168
0.182
0.175
0.171
0.16
0.154
0.159
0.159
0.158
0.153
0.154

0.021
0.017
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.048
0.068
0.089

0.019
0.016
0.017
0.02
0.024
0.027
0.029
0.03
0.026
0.022
0.028
0.049
0.069
0.091

0.084
0.104
0.116
0.096
0.093
0.08
0.065
0.049
0.05
0.039
0.03
0.019
0.012
0.005

0.548
0.534
0.528
0.519
0.59
0.586
0.579
0.569
0.57
0.57
0.567
0.577
0.579
0.584

0.505
0.504
0.516
0.517
0.589
0.587
0.581
0.569
0.569
0.57
0.57
0.584
0.59
0.596

0.038
0.036
0.033
0.033
0.04
0.04
0.04
0.041
0.042
0.043
0.046
0.05
0.05
0.048

0.04
0.039
0.038
0.038
0.046
0.046
0.046
0.047
0.048
0.049
0.053
0.058
0.059
0.057

0.12
0.121
0.119
0.119
0.14
0.143
0.141
0.139
0.132
0.124
0.114
0.106
0.107
0.108

0.083
0.083
0.076
0.067
0.067
0.06
0.043
0.031
0.033
0.033
0.029
0.035
0.041
0.033

0.096
0.099
0.093
0.084
0.086
0.077
0.066
0.066
0.068
0.068
0.063
0.059
0.053
0.043

0.234
0.232
0.222
0.206
0.223
0.218
0.209
0.213
0.212
0.206
0.206
0.207
0.207
0.21

0.704
0.685
0.67
0.756
0.746
0.725
0.712
0.713
0.711
0.716
0.757
0.786
0.8
0.827

Table 10
Performance measure of HONN models with oil, gas and water production ratios.
Synaptic operation

Number of hidden layers

Number of neurons

MSE

RMSE

MAPE (%)

Mean

SD

Mean

SD

Mean

SD

Linear synaptic operation

1
2
3
4
5

0.004
0.004
0.003
0.002
0.003

0.001
0.001
0.001
0.001
0.000

0.065
0.060
0.052
0.048
0.054

0.011
0.008
0.006
0.009
0.005

7.480
6.687
6.199
5.562
6.187

1.104
1.008
0.631
0.748
0.435

Quadratic synaptic operation

1
2
3
4
5

0.002
0.001
0.003
0.002
0.003

0.001
0.000
0.001
0.001
0.001

0.046
0.038
0.054
0.048
0.054

0.006
0.004
0.008
0.007
0.005

5.147
4.196
6.004
5.553
6.046

0.734
0.434
0.964
0.396
0.855

Cubic synaptic operation

1
2
3
4
5

0.003
0.002
0.001
0.003
0.002

0.001
0.001
0.000
0.001
0.001

0.050
0.049
0.036
0.054
0.044

0.007
0.006
0.004
0.008
0.010

5.359
5.627
3.990
6.238
4.568

0.874
0.393
0.450
1.013
0.961

so on and C16 corresponds to cumulative oil production from all


the wells. The C16C1 in legend box corresponds to crosscorrelation of cumulative oil production to oil production of
Well-1. Similarly for notations C16C2, C16C3, C16C5 etc.
represent the correlation of cumulative oil production to each
production data from corresponding ve wells.
The signicant input variables were determined using the CCF
plot. It is observed from the CCF plot (Fig. 7) that the correlations
between oil, gas, and water of ve producing wells are the most
signicant at lag0. Since lag0 does not represent step ahead, lag1
was selected as the highest cross-correlation. Also, since the CCF
for two parameters were calculated twice (i.e., CxC16 and C16Cx,
x 1, 2,, 16), one of them was dropped. C16C16 CCF was also
dropped because it represented auto-correlation. A closer look at
Fig. 7 showed that the water production from Well-3 was
negatively correlated with cumulative oil production and hence
C9 was dropped from HONN input. Thus, overall 14 input vectors
were rearranged for HONN models as listed in Table 9. Input 1 to
input 14 represents the correlation of oil, gas and water productions from each well at lag1 to the cumulative eld oil production.
First 35 months data were used for training and next 17 months
data for validation
Table 10 presents the results from HONN models with its
performance measure in terms of MSE, RMSE, and MAPE for
different congurations of neurons in the hidden layer and

synaptic operation. In this case study, the best model resulted in


MAPE 3.990%, MSE 0.001, RMSE 0.036 by HONN with CSO
having three neurons in the hidden layer.

6. Discussions
From the case studies, the performance evaluation criteria
indicate that the better cumulative oil production forecasting can
be achieved from HONN with CSO using oil, gas and water
production data. The simulation results from three different lag
scenarios also present that HONN with CSO gives best agreement
between simulation results and observed data. In this study, the
selection of lag time is an important factor that inuences the
forecasting results.
In case study-1, cross-correlation function (CCF) indicates that
the most signicant lag for oil production forecasting with only oil
production data from ve producing well is lag1, and HONN with
CSO yields the best forecasting cumulative oil production in this
case. This can be explained by recalling that ve input parameters
(oil productions from ve wells) generate complex correlation for
target parameter (oil production to be predicted). Thus, in this
case, nonlinear combination of synaptic operations could result in
better prediction. The oil production forecasting from HONN with
CSO with single lag1 for next 10 months, from month 53 to 63, is

Cumulative Oil Production (m3)

32

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

4000
Measurement values

3500

HONN-CSO Results

3000
2500
2000
1500
1000

10

Months

Cumulative Oil Production (m3)

Fig. 8. Comparison between the measured cumulative oil production and the
forecast results from HONN with CSO using single lag-1 for case study-1.

4000
Measurement values

3500

HONN-CSO Results

3000
2500
2000
1500
1000

10

Months
Fig. 9. Comparison between the measured cumulative oil production and the
forecast results from HONN with CSO from case study-2.

compared with measurement production data in Fig. 8. The gure


indicates a good match between the measured production data
and forecast results within the overall testing error of 0.025.
In case study-2, the 14 input parameters, namely oil, gas and
water production from ve oil producing wells generates nonlinearity and heterogeneity between input and target parameters.
In this case, the higher-order synaptic operations, QSO and CSO, t
more to forecast the oil production as presented in the performance measure table (Table 10). It is also observed from simulation results that the HONN with QSO and CSO provide better
agreement between observed and predicted production data even
with fewer number of neural units than the best results of HONN
with LSO. Fig. 9 illustrates the comparison between the measured
oil production data and the forecast results from HONN with CSO
for 10 months from month 53 to 63 within the overall testing error
of 0.035. Again the match is reasonably satisfactory.
Through these case studies, it is observed that the performance
of HONN with CSO in the case study-1 shows less MAPE than that
in the case study-2. This is because the number of input parameters for training HONN in the case study-2 is higher than that in
the case study-1. It should be noted that for the application of
HONN for forecasting, the number of input variables is one of the
signicant factors to determine the order of synaptic operation in
the neuron. The results indicate that by increasing the order of
combination of neural inputs, the capability of an HONN model
increases; however, the uncertainty in input data also gets multiplied resulting in higher MAPE. The performance of HONN with
HOSO is very susceptible to higher number of neurons in the
hidden layer, which induces longer computational time.

7. Conclusion
In this paper, an innovative neural approach has been
attempted for forecasting cumulative oil production using
higher-order neural network (HONN) for the rst time. Two case

studies were carried out with the data from an oil eld situated at
Cambay basin, Gujarat, India, in order to prove the capability of
forecasting of HONN.
The simulation study indicates that HONN has high potential
for application in cumulative oil production prediction with
limited available parameters of petroleum reservoirs. The HONN
methodology applied to forecasting of oil production yielded
3.459% and 3.990% of MAPE in the two cases examined. In order
to perform more effective HONN models, two pre-processing
procedures were performed: (i) noise reduction and (ii) selection
of optimal input variables. Since the measuring process of oil
production in the oil eld cannot avoid noise during the measurement, it is essential to perform a noise reduction step. Furthermore, in order to enhance the accuracy of prediction of HONN, it is
important to select and group optimal input variables. Crosscorrelation function for multiple input parameters was employed
to determine the most signicant correlation between the parameters. Considering the limited available inputs for forecasting,
the performance indicates that HONN has a high potential to
overcome this limitation. Also, for complicated input patterns,
such as in the case study-2, the higher combination of the input
products reduces the computational cost, which yields faster
results.
In summary, HONN appears to be a satisfactory tool for longterm and short term oil production forecasting. However, a careful
selection of time lag and number of neurons in the hidden layer
are required to achieve accurate oil production forecasting. Further
research is being carried out to evaluate the effectiveness of HONN
in forecasting hydrocarbon production by incorporating static and
dynamic parameters of the reservoir.

Acknowledgments
We thank Institute of Reservoir Studies, Oil and Natural Gas
Corporation (ONGC), India for their support in providing eld
production data for this work. The work of rst author was
nancially supported by the Canadian Commonwealth Fellowship
and the University of Petroleum and Energy Studies. The fourth
author acknowledges the fund provided by NSERC Discovery
Grant. The valuable comments and recommendations from the
reviewers are highly appreciated.
References
Al-Fattah, S.M., Startzman, R.A., 2001. Predicting natural gas production using
articial neural network. In: SPE Hydrocarbon Economics and Evaluation
Symposium. Society of Petroleum Engineers, Dallas, Texas, pp. 111.
Al-Kaabi, A.U., Lee, W.J., 1993. Using articial neural networks to identify the well
test interpretation model. SPE Format. Eval. 8, 233240.
Aminzadeh, F., Barhen, J., Glover, C.W., Toomarian, N.B., 2000. Reservoir parameter
estimation using a hybrid neural network. Comput. Geosci. 26, 869875.
Azadeh, A., Ghaderi, S.F., Sohrabkhani, S., 2006. Forecasting electrical consumption
by integration of neural network, time series and ANOVA. Appl. Math. Comput.
186, 17531761.
Box, G.E.P., Jenkins, G.M., 1976. Time Series Analysis:Forecasting and Control. San
Francisco7 Holden Day 1970. (revised ed. 1976).
Castellano-Mendenz, M., 2004. Modelling of the monthly and daily behaviour of
the runoff of the Xallas River using BoxJenkins and neural network methods.
J. Hydrol. 296, 3858.
Donaldson, R.G., Kamstra, M., Kim, H.Y., 1993. Evaluating Alternative Models for
Conditional Stock Volatility: Evidence from International Data. University of
British Columbia.
Giles, C.L., Maxwell, T., 1987. Learning, invariance, and generalization in high-order
neural networks. Appl. Opt. 26, 49724978.
Gosh, J., Shin, Y., 1992. Effcient higher-order neural networks for classication and
function approximation. Int. J. Neural Syst. 3, 323350.
Gupta, M.M., 2008. Correlative type higher-order neural units with applications. In:
Proceedings of the IEEE International Conference on Automation and Logistics,
ICAL 2008, pp. 715718.
Gupta, M.M., Homma, N., Hou, Z.-G., Solo, A.M.G., Bukovsky, I., 2010. Higher order
neural networks: fundamentals theroy and appplications. In: Zhang, M. (Ed.),

N.C. Chakra et al. / Journal of Petroleum Science and Engineering 106 (2013) 1833

Articial Higher-Order Neural Networks for Computer Science and Engineeing:


Trends for Emerging Applications. Information Science Reference, Hoboken,
New Jersey, pp. 397422.
Gupta, M.M., Jin, L., Homma, N., 2003. Static and Dynamic Neural Networks: From
Fundamentals to Advanced Theory, 1st ed. WileyIEEE Press, Hoboken, New
Jersey.
Gupta, M.M., Rao, D.H., 1993. Neuro-Control Systems-Theory and Applications: IEEE
Neural Networks Council.
Habiballah, W.A., Startzman, R.A., Barrufet, M.A., 1996. Use of neural networks for
prediction of vapor/liquid equilibrium K-values for light-hydrocarbon mixtures.
SPE Reservoir Eng. 11, 121126.
Hill, T., Marquezb, L., OConnor, M., Remusa, W., 1994. Articial neural network
models for forecasting and decision making. Int. J. Forecast. 10, 515.
Homma, N., Gupta, M.M., 2002. Superimposing learning for backpropagation neural
networks. Bull. Coll. Med. Sci., Tohoku Univ. 11, 253259.
Hou, Z.-G., Song, K.-Y., Gupta, M.M., Tan, M., 2006. Neural units with higher-order
synaptic operations for robotic image processing applications. Soft Comput. 11,
221228.
Lee, Y.C., Doolen, G., Chen, H.H., Sun, G.Z., Maxwell, T., Lee, H.Y., Giles, C.L., 1986.
Machine learning using a higher order correlation network. Physica D 2,
276306.
Maschio, C., de Carvalho, C.P.V., Schiozer, D.J., 2010. A new methodology to reduce
uncertainties in reservoir simulation models using observed data and sampling
techniques. J. Pet. Sci. Eng. 72, 110119.
Mohaghegh, S., Richardson, M., Ameri, S., 2001. Use of intelligent systems in
reservoir characterization via synthetic magnetic resonance logs. J. Pet. Sci. Eng.
29, 189204.

33

Nayak, P.C., Sudheer, K.P., Rangan, D.M., Ramasastri, K.S., 2004. A neuro-fuzzy
computing technique for modeling hydrological time series. J. Hydrol. 291,
5266.
Redlapalli, S.K., 2004. Development of Neural Units With Higher-Order Synaptic
Operations and Their Applications to Logic Circuits and Control Problems.
Department of Mechanical Engineering, University of Saskatchewan, Canada.
Rumelhart, D.E., McClelland, J.L., 1986. Parallel Distributed Processing: Explorations
in the Microstructure of Cognition. The MIT Press, Cambridge, MA.
Smith, S.W., 1997. The Scientist and Engineer's Guide to Digital Signal Processing.
Song, K.-Y., Gupta, M.M., Jena, D., 2009. Design of an error-based robust adaptive
controller. In: Proceedings of the IEEE International Conference on Systems,
Man and Cybernetics, SMC 2009, pp. 23862390.
Tamhane, D., Wong, P.M., Aminzadeh, F., Nikravesh, M., 2000. Soft computing for
intelligent reservoir characterization. In: Proceedings of the SPE Asia Pacic
Conference on Integrated Modelling for Asset Management Yokohama, Japan,
pp. 111.
Tiwari, M.K., Song, K.Y., Chatterjee, C., Gupta, M.M., 2012. River-ow forecasting
using higher-order neural networks. J. Hydrol. Eng. 17, 112.
Tokar, A.S., Johnson, P.A., 1999. Rainfall runoff modeling using articial neural
network. J. Hydrol. Eng. 4, 232239.
Weiss, W.W., Balch, R.S., Stubbs, B.A., 2002. How articial intelligence methods can
forecast oil production. In: SPE/DOE Improved Oil Recovery Symposium.
Society of Petroleum Engineers, Tulsa, Oklahoma, pp. 116.

Вам также может понравиться