
Methods for quantifying the uncertainty of production forecasts: a comparative study
F.J.T. Floris, Delft Geoscience Research Centre1, P.O. Box 5028, 2600 GA Delft, The Netherlands
M.D. Bush, BPAmoco, Chertsey Road, Sunbury-on-Thames, Middlesex, TW17 7LN, UK
M. Cuypers, ELF Exploration UK, 30 Buckingham Gate, London SW1E 6NN, UK
F. Roggero, Institut Français du Pétrole, 2 Avenue President Angot, Helioparc, 64000 Pau, France
A-R. Syversveen, Norwegian Computing Centre, P.O. Box 114 Blindern, Gaustadalleen 23, 0314 Oslo,
Norway
Abstract
This paper presents a comparison study in which several partners have applied methods to quantify uncertainty on
production forecasts for reservoir models conditioned to both static and dynamic well data. A synthetic case study
was set up, based on a real field case. All partners received well porosity/permeability data and historic production
data. Noise was added to both data types. A geological description was given to guide the parameterization of the
reservoir model. Partners were asked to condition their reservoir models to these data and estimate the probability
distribution of total field production at the end of the forecast period. The various approaches taken by the partners
were categorized. Results showed that for a significant number of approaches the truth case was outside the predicted
range. The choice of parameterization and initial reservoir models had the largest influence on the prediction range,
whereas the choice of reservoir simulator introduced a bias in the predicted range.

Introduction
Traditionally reservoir development decisions are based on a production forecast from a single history
matched reservoir model. To assess risk, some runs are made to check the sensitivity of the forecast.
However, formal quantification of risk requires full sampling of the forecast probability density function
for the quantity to be forecast.
Several papers have appeared on the generation of full pdfs in the geosciences. The reduction in
uncertainty of hydrocarbon pore volume due to structural and porosity / permeability uncertainty is
studied in Berteig et al. (1988). Methods for quantifying the pdf of hydrocarbon volumes in place due to
top structure uncertainty are given in Abrahamsen et al. (1992), Samson et al. (1996) and Floris &
Peersmann (1998). Uncertainty in production from fields without history matching is described in Lia et
al. (1997). Landa & Horne (1997) investigate the reduction in uncertainty on reservoir description for a
synthetic case where saturation maps from 4D seismic have been included as history data.
In this paper, we focus on production forecast uncertainty quantification (PUNQ) methods. During the
execution of the EC sponsored PUNQ project (Bos, 2000) a number of new methods have been
developed, which have been published in separate papers. The aim of this paper is to report on an integrated
case study to which all methods were applied.
Work flow for uncertainty quantification
The general workflow contains a number of standard components. Instead of organizing this paper in
terms of the separate work flows, we choose to first categorize these workflow components as building
blocks and then build the work flows from them. The general outline of the work flows is given in Figure
1.

The Delft Geoscience Research Centre is a collaboration between Delft University of Technology and the
Netherlands Organisation for Applied Scientific Research TNO


Parameterization
In the Bayesian inversion approach, the key is to condition reservoir models to all available data. This
conditioning is done through parameters present in the reservoir model. In this study we have
concentrated on the spatial distribution of porosity and permeability. Their spatial distribution can be
parameterized in various ways. The parameterizations used in this study are explained below.
Grid blocks
The most general approach is to consider all grid block values as independent parameters. In this model
the porosity and directional permeabilities for all 1761 active grid blocks are used as parameters,
resulting in about 5000 parameters. Through such an approach no preconceived idea about the geology is
incorporated in the reservoir model. All knowledge must be inferred from the well and production data
only. The main problems with this approach are the large number of parameters and the lack of spatial
continuity present in the resulting reservoir models.
Regions
The use of homogeneous regions is a way to reduce the number of parameters. Through a proper
selection of regions, some geological or reservoir engineering concepts can be incorporated in the
reservoir model. Regions can either be used to follow geological layers or genetic units within layers, or
can be used to characterize draining areas of wells. With regions, fewer parameters are needed to
characterize the reservoir model, but preconceived ideas about the characteristics of regions may be
inappropriate. The assumption of homogeneity within a region may not be justifiable and may lead to
abrupt changes across region boundaries. The region approach has, however, been the standard
approach in reservoir history matching.
Pilot points
With the advent of geostatistics, a new class of spatial parameterization approaches has emerged. They
are known as pilot point or master point approaches. The developments for characterizing aquifers in a
hydrogeological context started by de Marsily et al. (1984) with application to well interference testing.
They used only kriging solutions to the transmissivity fields. The approach was extended to the use of
Gaussian Random Fields by Ramarao et al. (1995) and Gomez-Hernandez et al. (1996). Floris (1996)
describes the application to hydrocarbon reservoir characterization involving multi-phase production data.
A number of pilot points are used to build smooth, spatially correlated corrections to porosity /
permeability fields. The method results in continuously varying heterogeneous reservoir models,
controlled by a limited number of pilot points. The pilot points are traditionally pre-fixed, but a method
for selecting a limited number of pilot points based on a combination of geological uncertainty and
sensitivity to production data has also been developed (Cuypers et al. (1998)).
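As a sketch of the pilot-point idea, corrections defined at a few points can be spread smoothly over the grid by simple kriging with an assumed Gaussian covariance (the coordinates, range and jitter below are illustrative assumptions, not values from the study):

```python
import numpy as np

def pilot_point_correction(grid_xy, pilot_xy, pilot_dp, L=0.3):
    """Spread corrections pilot_dp, defined at pilot point locations
    pilot_xy, smoothly over grid locations grid_xy by simple kriging
    with a Gaussian covariance of range L."""
    def cov(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / L**2)
    K = cov(pilot_xy, pilot_xy)                 # pilot-to-pilot covariance
    k = cov(grid_xy, pilot_xy)                  # grid-to-pilot covariance
    w = np.linalg.solve(K + 1e-9 * np.eye(len(pilot_xy)), pilot_dp)
    return k @ w                                # smooth correction field

pilots = np.array([[0.1, 0.2], [0.7, 0.8], [0.4, 0.5]])
dp = np.array([1.0, -0.5, 0.3])                 # corrections at the pilots
field = pilot_point_correction(pilots, pilots, dp)
# Evaluated at the pilot locations, the field honors the corrections.
```

The interpolated correction honors the prescribed values at the pilot points while varying continuously in between, which is the property that distinguishes this parameterization from piecewise-constant regions.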
Global parameters
A final class of parameters are those that cannot be linked to a particular spatial location, called global or
underlying parameters. Examples are stochastic parameters such as mean values, standard deviation or
correlation lengths, or object parameters such as channel width and height. Also for property fields,
which are built from a weighted sum of basis functions, the weighting coefficients generally are not
localized and thus are global parameters. In the adaptive chain approach by Holden (1998) or the gradual
deformation method by Hu (2000), the coefficients in the linear combination of property fields used to
generate a new field are a form of global parameters.


Objective function
In order to measure the extent to which a reservoir model is conditioned to the available history data, a
measure must be defined to quantify the mismatch between the simulated response of the reservoir model
and the history data. This measure is called the objective function.
Least Squares norm
Traditionally, the goal of history matching is to define a reservoir model that optimally reproduces the
observed production history of a field. The mismatch between the simulated production data and the
observed history data can be quantified using a Sum of Squares norm. For each quantity, e.g. Bottom
Hole Pressure (BHP), Gas Oil Ratio (GOR) and Water Cut (WCT), and for each time step, t_k, the difference
between the simulated value, o_sim, and the observed value, o_obs, can be calculated. Squaring these
differences and summing them results in the SoS norm. The number of data available for each quantity may
differ; for example, BHP will be measured more frequently than GOR or WCT when permanent pressure
gauges are used. To prevent the more frequently measured observations from overshadowing the significance
of the other observations, the SoS value can be divided by the total number of values available,

SoS(o_obs, p) = (1/n_w)(1/n_p)(1/n_t) Σ_ijk w_ijk [ (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk ]²    (1)

Subscripts i and j run over the wells and production data types and k runs over the report times; n_w, n_p
and n_t are the respective numbers of samples. The symbol p denotes the parameters, σ denotes the model
plus measurement error and w denotes the extra weighting factors.
When the differences between the simulated and historic responses are normalized by the measurement plus
modeling error, a normalized SoS norm results. If all weights w equal unity, then a normalized SoS of 1
implies that on average the simulated data is within the error band around the historic data. Variation of
the weight w can be used to assign more significance to particular production data for example on
reservoir engineering grounds.
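As an illustration, the normalized, weighted SoS of equation (1) can be sketched in a few lines of Python with NumPy (the toy data, array shapes and values are our own, not part of the study):

```python
import numpy as np

def sos_norm(obs, sim, sigma, w):
    """Normalized Sum-of-Squares mismatch of equation (1).

    obs, sim, sigma, w are arrays of shape (n_wells, n_types, n_times):
    observed data, simulated response, measurement-plus-model error and
    extra weights. Each residual is normalized by sigma; the weighted
    sum of squares is divided by the number of wells, data types and
    report times."""
    nw, ntypes, ntimes = obs.shape
    r = (obs - sim) / sigma
    return float(np.sum(w * r**2) / (nw * ntypes * ntimes))

# Toy data: 2 wells, 3 data types (BHP, GOR, WCT), 4 report times.
obs = np.ones((2, 3, 4))
sim = obs + 0.5                       # simulated response off by 0.5
sigma = np.full(obs.shape, 0.5)      # error band of the same size
w = np.ones(obs.shape)
# Each residual sits exactly on the error band, so the norm equals 1.
```

With all weights equal to unity, a value of 1 means the simulated data lie on average within the error band around the historic data, matching the interpretation given above.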
Likelihood function
The definition of a Bayesian likelihood function relies on the explicit specification of a model for the
uncertainty associated with the observed production. When the measurement plus modeling errors are
assumed to be independently Gaussian distributed, the likelihood function f(o_obs | p) follows formally as,

f(o_obs | p) = c exp( −(1/2) Σ_ijk [ (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk ]² )    (2)

where c is a normalization constant. This likelihood function expresses the likelihood that the historic
data can be explained by the reservoir model for which the likelihood holds. If it is low, the historic
data are unlikely to come from the reservoir model. If it is high, then the reservoir model response fits the
historic data well. Note that the Bayesian formalism does not allow subjective weighting of the
mismatches as done in the SoS objective function (both the weighting with the number of samples and the
extra weights), unless this is formally brought in as a model assumption in the production uncertainty
model.
Posterior distribution
It is generally believed that conditioning reservoir models on the least squares or likelihood function
alone is mathematically ill-posed. Several solutions to the optimization problem may occur. Moreover,
even for a single solution production data is often insensitive to some parameters (e.g. permeability in
unswept areas) or only sensitive to a combination of parameters (e.g. harmonic mean of permeability
data). Consequently, individual parameter values can be changed quite dramatically without deterioration
of the history match. Note that parameter values, which are insensitive during the history matching
period, may become sensitive during the forecasting period. Through the Bayesian prior function, the ill-posed mathematics of the optimization process can be resolved or reduced. The general formula for the
Bayesian posterior distribution is given by

f(p | o) = c f(o | p) f(p)    (3)

where f(o|p) is the likelihood function and f(p) is the prior function. When both the prior distributions on
the parameters and the production uncertainty are assumed to be Gaussian, including the prior results in
an extra Sum of Squares term (here denoted in vector notation),

f(p | o) = c exp( −(1/2) [ Σ_ijk ( (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk )² + (p − p̄)^T C_p^−1 (p − p̄) ] )    (4)

where p̄ is the vector of expectations of p and C_p is the covariance matrix.
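A minimal sketch of the negative logarithm of the posterior of equation (4), with a generic forward model sim_fn standing in for the reservoir simulator (the function name, toy dimensions and values are illustrative assumptions):

```python
import numpy as np

def neg_log_posterior(p, obs, sim_fn, sigma, p_mean, C_p):
    """Negative log of the posterior (4), up to a constant:
    0.5 * [ sum(((obs - sim)/sigma)^2) + (p - p_mean)^T C_p^-1 (p - p_mean) ].
    sim_fn(p) stands in for the reservoir simulator response."""
    r = (obs - sim_fn(p)) / sigma
    dp = p - p_mean
    return 0.5 * (np.sum(r**2) + dp @ np.linalg.solve(C_p, dp))
```

Minimizing this quantity over p gives the MAP solution discussed below; dropping the prior term recovers the (negative log) likelihood of equation (2).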


Optimization algorithms
In order to do the conditioning to production data, most approaches employ an optimization technique.
One technique is to adjust the parameters manually and inspect the improvement on the match visually.
However, here we have quantified the mismatch between observation and simulation data in terms of the
objective function, allowing the use of optimization algorithms.
Gradient optimization
The most powerful tool for optimizing smooth functions is gradient optimization. Many variations on the
theme of gradient optimization exist. The steepest descent technique follows the direction opposite to the
gradient vector. The step size must be determined by, for example, a line search or a trust region
approach. Conjugate gradients employ only mutually orthogonalized gradient directions. Levenberg-Marquardt switches from steepest descent to a Gauss-Newton approach, which also employs the second
derivative when the optimum is approached. The dog-leg method by Powell (1972) uses another
combination of the steepest descent direction and the direction proposed by the Gauss-Newton method to
further improve the optimization.
In general for numerical simulators, the gradient expressions are not present in closed form. Several
methods exist to obtain the gradient information. The most straightforward approach is to calculate the
sensitivity coefficients for each of the parameters by finite differences. Each iteration of the optimization
process now requires N+1 simulation runs, where N is the number of parameters. This is a very time-consuming process if many parameters are present in the reservoir model. Using the Broyden (1965)
technique, the gradient information can also be updated using only the evaluation of the optimization
function for each new set of parameters. Thus, each iteration in the process requires only one extra
simulation run. The gradient only needs to be initialized, requiring N extra up front simulation runs. In
recent years some reservoir simulators calculate the gradient directly based on the finite difference
equations present within the reservoir simulator. For Simulator 3, a reservoir simulator used in this study
(see Table 2), the extra time needed to calculate the gradient with respect to a single parameter is equivalent to
approximately 30 % of a traditional simulation run.
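The Broyden rank-one update mentioned above can be sketched as follows (the function and its toy usage are our illustration, not the implementation used in the study):

```python
import numpy as np

def broyden_update(J, dp, df):
    """Broyden (1965) rank-one update of the sensitivity matrix J.

    After a parameter step dp produced a change df in the simulated
    response, J is corrected so that the updated matrix maps dp
    exactly onto df, at the cost of one simulation run instead of
    N finite-difference runs per iteration."""
    return J + np.outer(df - J @ dp, dp) / (dp @ dp)
```

By construction the updated matrix satisfies the secant condition J_new @ dp == df, which is exactly the property that lets the gradient information be refreshed from a single extra simulation per iteration.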


Genetic Algorithms
The main problem with the gradient optimization technique is the danger of getting trapped in local
minima. Two techniques that perform global optimization are simulated annealing (e.g. Hegstad et al,
1994) and genetic algorithms (Goldberg, 1989). Tests have shown that for our case the SA and GA
methods required approximately the same number of simulation runs to arrive at an optimal solution. In
this work we have pursued the Genetic Algorithms, because they are easily parallelised and they can
cope with multiple optima.
In the Genetic Algorithm a population of random samples is evolved for a number of generations. During
the evolution of the population, rules inspired by genetics are used to select the best fitting reservoir
models. At the end of the optimization a population results which may consist of a number of optimal
clusters.
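A toy sketch of such a genetic algorithm, with tournament selection, arithmetic crossover and Gaussian mutation (all operator choices and settings here are illustrative assumptions, not the GA codes used by the partners):

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(objective, n_params, pop_size=30, n_gen=60,
                lo=-1.0, hi=1.0, sigma=0.1):
    """Toy GA: evolve a random population for n_gen generations and
    return the final population, which may cluster around optima."""
    pop = rng.uniform(lo, hi, size=(pop_size, n_params))
    for _ in range(n_gen):
        fit = np.array([objective(p) for p in pop])
        children = []
        for _ in range(pop_size):
            # tournament selection of two parents (lower misfit wins)
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            # arithmetic crossover plus Gaussian mutation
            t = rng.uniform()
            child = t * a + (1 - t) * b + rng.normal(0, sigma, n_params)
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    return pop

# Toy misfit with its optimum at p = (0.5, -0.5).
pop = ga_minimize(lambda p: float(np.sum((p - np.array([0.5, -0.5]))**2)), 2)
```

The final population carries more information than a single optimizer output: the cloud of individuals around each optimum gives the local characterization of the objective function exploited by the ML+/MAP+ approaches below.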
Table 1. Naming convention in Uncertainty Quantification methods.

ML* | Maximum Likelihood value
MAP* | Maximum A Posteriori value
ML+ | Maximum Likelihood value + local characterization of the likelihood function
MAP+ | Maximum A Posteriori value + local characterization of the posterior distribution
multi-ML / multi-MAP | multiple ML or MAP values using different initial models
multi-ML+ / multi-MAP+ | multiple ML or MAP values using different initial models + local characterization of the objective function around each peak
Oliver-prod | Oliver approach using only samples from the production data in the objective function
Oliver-full | Oliver approach using samples from both the prior and the production data in the objective function
Posterior sampling | Statistical sampling from the full posterior distribution

* These methods give single estimates and thus do not quantify uncertainty; they are included for naming convention only.

Uncertainty quantification
Defining the parameters to be used in the reservoir model and having a way to minimize the objective
function results in a single conditioned reservoir model. This model is called the ML
(Maximum Likelihood) solution when only sum of squares or the likelihood function is used in the
objective function or the MAP (maximum a posteriori) solution if the prior term is included. Production
for the conditioned model can be forecast from the ML or MAP parameter values. The next step is to
consider the approaches used in quantifying forecast uncertainty. Table 1 summarizes the naming
convention.
ML+/MAP+
Having obtained the ML/MAP solution, it is possible to locally characterize the objective function
around this solution and transfer this information into forecast uncertainty. Such an approach is called a
ML+/MAP+ approach. An example is local linearization of the posterior. The Scenario Test Method
(STM) is another such approach (Roggero, 1997). In the STM, a search for extreme high and low
forecasts is invoked, starting from the ML/MAP solution. As a constraint in the search, the objective
function value may not drop below a threshold value. The lower the threshold value is set, the more
extreme forecasts may be produced. This method is believed to result in a characterization of the pdf
envelope around a single ML/MAP solution. A third approach is the use of a Genetic Algorithm. As
detailed before, the GA results in a population of reservoir models. Near every optimum a set of
individuals may be present, which contain information about the posterior locally around that optimum.
Multi-ML/Multi-MAP
Another class of approaches starts from multiple initial reservoir models. This approach is termed multi-ML or multi-MAP depending on which objective function is optimized. If the objective function is truly
multi-modal, then different conditioned models are expected, which will result in different forecasts.
During the PUNQ project, a GA optimization code used by BPAmoco has been adapted, such that
starting from a single initial population the final population may be centered around a number of distinct
optima. Thus, with the resulting GA an initial random population may zoom into a number of distinct
optima.
Multi-ML+/multi-MAP+
These approaches are a combination of the previous two classes, where multiple ML/MAP solutions are
found and local characterization is performed.
Posterior sampling
The above multi-ML+/multi-MAP+ approaches only give a local characterization of the objective
function around a number of optima. When the space between the optima carries a significant
probability, this approach gives a restricted view of the possible reservoir models, and consequently may
result in an underestimation of the range of uncertainty as well as a severe bias in the forecasted mean.
Only through sampling of the complete posterior distribution can the full uncertainty be quantified. A
statistically correct way of sampling from the posterior distribution is obtained by using the class of
Markov-Chain Monte-Carlo techniques (Hegstad & Omre, 1997). A straightforward MCMC technique is
one where reservoir models are repeatedly proposed from the prior model. After reservoir simulation, the
likelihood for the reservoir model is determined. This likelihood is used as a weighting factor for the
reservoir model forecast. The drawback of this technique is that many prior samples may be needed
before a reservoir model with a reasonable likelihood occurs, each requiring a time-consuming reservoir
simulation run.
In order to improve this, a sequence of linked reservoir models can be generated, converging to samples
from the posterior distribution. Each new model is generated by making limited alterations to the current
model, creating a Markov-Chain. Still many samples may be needed in order to generate a useful number
of posterior samples. Further improvement can be gained by letting the proposed new model depend on a
set of previous models. To implement this, the GA approach can be used to select parent models and
generate child realizations from them. This leads to a new version of MCMC termed Adaptive Genetic
MCMC.
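The basic Markov-Chain step described above (propose a limited alteration of the current model, then accept or reject) can be sketched for a one-parameter toy posterior; the proposal width and the target density are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcmc_chain(neg_log_post, p0, n_steps=5000, step=0.5):
    """Plain Metropolis MCMC: propose a limited alteration of the
    current model and accept it with the Metropolis probability, so
    that the chain converges to samples from the posterior."""
    p = p0
    nlp = neg_log_post(p)
    samples = []
    for _ in range(n_steps):
        q = p + rng.normal(0.0, step)        # limited alteration
        nlq = neg_log_post(q)
        if np.log(rng.uniform()) < nlp - nlq:  # Metropolis acceptance
            p, nlp = q, nlq
        samples.append(p)
    return np.array(samples)

# Toy posterior: standard normal (negative log density up to a constant).
s = mcmc_chain(lambda p: 0.5 * p**2, 0.0)
```

In the full-field setting every call to neg_log_post hides a reservoir simulation run, which is why so much effort goes into reducing the number of chain steps, for example by the adaptive genetic proposals mentioned above.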
Oliver
In the approach suggested by Oliver et al. (1996), the advantages of the previous approaches are merged.
The approach aims at sampling from the complete posterior distribution, but using an optimization
technique to reduce the number of reservoir simulation runs needed. In the approach a sample is drawn
from the prior reservoir model. Concurrently, a sample is also drawn from the production data. This
production sampling is done because the production data contains observation errors and the reservoir
model contains modeling error. Pairs of prior reservoir samples and production samples are subsequently
history matched. The matching criterion is formed by both the mismatch between the production sample
and the simulated production data and the deviation of the reservoir model from the sampled prior
reservoir model used as starting point for the optimization. The latter term regularizes the ill-posed
character of the optimization problem. It can be proved that this approach indeed leads to a correct
sampling of the posterior distribution for Gaussian models for the reservoir geology and linear models for
the fluid flow. Oliver et al. (1996) do show that for a well test model, which is non-linear, the approach
still compares well with results from MCMC sampling.
Application of Oliver's approach to full-field reservoir engineering problems can be found in (Zhan Wu,
1998) and Floris & Bos (1998). For such multi-phase problems there is no proof that the approach
correctly samples from the posterior distribution.
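For a linear toy forward model, where the recipe provably samples the posterior, one Oliver-style sample can be sketched as follows (the matrix names and the closed-form minimization are our illustration of the idea, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def rml_sample(G, obs, sigma, p_mean, C_p):
    """One posterior sample following the Oliver et al. (1996) recipe
    for a linear forward model obs = G p + noise: draw a prior
    realization and a perturbed data realization, then minimize
    |G p - d|^2/sigma^2 + (p - p_prior)^T C_p^-1 (p - p_prior),
    which has a closed form in the linear-Gaussian case."""
    p_prior = rng.multivariate_normal(p_mean, C_p)       # prior sample
    d = obs + rng.normal(0.0, sigma, size=len(obs))      # production sample
    Cinv = np.linalg.inv(C_p)
    A = G.T @ G / sigma**2 + Cinv
    b = G.T @ d / sigma**2 + Cinv @ p_prior
    return np.linalg.solve(A, b)

# 1D check: prior N(0,1), one datum obs=1 with sigma=1 -> posterior N(0.5, 0.5).
G = np.eye(1)
samples = np.array([rml_sample(G, np.array([1.0]), 1.0,
                               np.zeros(1), np.eye(1))[0]
                    for _ in range(2000)])
```

Each sample costs one (here trivial, in practice simulator-based) minimization, which is the efficiency gain over plain MCMC that motivates the approach.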
PUNQ-S3 truth case description
The PUNQ-S3 case has been taken from a reservoir engineering study on a real field performed by one of
the industrial partners in the PUNQ project. It was qualified as a small-size industrial reservoir
engineering model. The model contains 19x28x5 grid blocks, of which 1761 blocks are active. A top
structure map of the field is shown in Figure 2. The field is bounded to the east and south by a fault, and
by a strong aquifer to the north and west. A small gas cap is located in the center of the dome shaped
structure. The field initially had six production wells located around the gas oil contact. Due to the strong
aquifer, there are no injection wells. The geometry of the field has been modeled using corner-point
geometry.
The porosity/permeability fields were regenerated in order to have more control over the underlying
geological / geostatistical model. The corresponding geological description is given in Appendix A. A
geostatistical model based on Gaussian Random Fields has been used to generate the porosity /
permeability fields. The fields were generated independently for each of the five layers. Geostatistical
parameters, such as means and variograms were chosen to be as consistent as possible with the
geological model. Collocated co-simulation was used to correlate the porosity and permeability fields
statistically within each layer. To generate the fields, the Fortran program SGCOSIM from Stanford
University was used. It is part of an extension to the GSLIB software library (Deutsch & Journel, 1992).
Figure 3 shows the resulting fields for permeability in each of the five layers. The porosity fields have
similar characteristics.
The reservoir engineering model was completed with the PVT and aquifer data from the original
reservoir model and with power law relative permeability functions. There is no capillary pressure in the
model. The production scheduling resembles the history in the original model, i.e. a first year of extended
well testing, followed by a three year shut-in period, before field production commences. The well testing
year consists of four three-monthly production periods, each having its own production rate. During field
production, two weeks of each year are used for each well to do a shut-in test to collect shut-in pressure
data. A fixed fluid production constraint is imposed on the wells. After falling below a limiting bottom
hole pressure, they will switch to a BHP constraint. Using this completed reservoir engineering model, a
synthetic history is generated using the Simulator 1. The total simulation period is 16.5 years. Pressure,
water-cut and gas-oil ratio curves have been generated for each of the wells. Figure 4 shows a typical set
of production curves. The total oil recovery after the simulation period is 3.87 × 10^6 Sm^3. The complete
data set suitable for Simulator 1 is publicly available on the internet at www.nitg.tno.nl/punq.
Gaussian noise was added to the well porosities / permeabilities and to the synthetic production data. The
standard deviation on poro / perm values was set to 15 %. For the production data the Gaussian noise was
correlated in time to mimic the more systematic character of errors in such data. The noise level on the
shut-in pressures was 3 times smaller than on the flowing pressure, respectively 1 bar and 3 bar, to reflect
the more accurate shut-in pressures. The noise level on the GOR was set at 10 % before gas breakthrough
and 25 % after gas breakthrough, reflecting the difference between the solution and the free gas situation.
Similarly, Gaussian noise of 2 % before and 5 % after water breakthrough was used for the WCT.
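Time-correlated Gaussian noise of the kind described can be sketched with a first-order autoregressive recursion (the AR(1) form and the value of rho are our assumptions; the study does not state its correlation model):

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_noise(n, sigma, rho=0.9):
    """Gaussian noise correlated in time via an AR(1) recursion.
    rho sets the correlation between consecutive report times; the
    marginal standard deviation stays equal to sigma."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    for k in range(1, n):
        e[k] = rho * e[k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return e

e = correlated_noise(20000, 1.0)
```

Unlike independent noise, consecutive errors drift together, mimicking the systematic character of gauge and metering errors in production data.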
Each of the partners in the project was given the noisy well porosities / permeabilities and synthetic
production history of the first 8 years (Note that this history includes 1 year of well testing, 3 years of
field shut-in, and covers 4 years of actual field production). The synthetic production data consisted of
the BHP, WCT and GOR for each of the six wells. Within the history period, two wells show gas
breakthrough and one well shows the onset of water breakthrough. All partners were asked to forecast the
total oil production after 16.5 years including uncertainty estimates, using the Bayesian formalism. Note
that none of the partners were given the exact porosity / permeability grids, only the geological
description. Each of the partners used his own workflow to infer these grids. In a second stage, five
incremental wells were defined. For the truth case, the extra wells resulted in an incremental recovery of
1.46 × 10^6 Sm^3. The partners were again asked to forecast the incremental recovery as a probability
distribution function.
Table 2. Approaches used by the different partners.

Partner + Approach | Parameter domains | Independent parameter(s) | Spatial technique | Uncertainty quantification | Optimization | Reservoir simulator
TNO-1 | Homogeneous layers | PORO, Kv, Kh, correlated statist. | Piecewise constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
TNO-2 | Homogeneous drainage area regions | PORO, Kv, Kh, correlated statist. | Piecewise constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
TNO-3 | Homogeneous flow path regions | PORO | Piecewise constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
Amoco-Iso | Fixed pilot points, isotropic variogram | PORO, Kv, Kh, uncorrelated | GRF | multi-ML+, start from random prior models | GA | Simulator 1
Amoco-Aniso | Fixed pilot points, anisotropic variogram | PORO, Kv, Kh, uncorrelated | GRF | multi-ML+, start from random prior models | GA | Simulator 1
Elf | Pilot points selected | PORO | GRF | multi-ML, start from sampled prior models | GN-SD hybrid + finite diff. gradients | Simulator 1
NCC-GA | Global parameters acting on whole grid | PORO, Kv, Kh, correlated | GRF | multi-ML | GA | Simulator 2
NCC-AG MCMC | Global parameters acting on whole grid | PORO, Kv, Kh, correlated statist. | GRF | Posterior sampling | GA | Simulator 2
IFP-STM | Fixed pilot points | PORO | Kriging | Scenario Test Method (ML+) | Gauss-Newton + simulator gradients | Simulator 3
IFP-Oliver | Fixed pilot points | PORO | Kriging | Oliver-prod, start from kriged solution | Gauss-Newton + simulator gradients | Simulator 3
NCC-Oliver | Fixed pilot points | PORO, Kv, Kh, correlated statist. | GRF | Oliver-full | Gauss-Newton + simulator gradients | Simulator 3

Results
General setup
In order to be able to compare results from the different approaches, a fair measure for assessing forecast
uncertainty is required. This measure is determined by the objective function, but is also influenced by
the assumptions in the model. In the Bayesian context, the objective function has a prior geological
component and a production SoS or likelihood component. With the different parameterizations used by
the partners, prior pdfs cannot be made equivalent, so it was decided that a weak prior should be used.
The production component was preset so that partners used the same objective function. The objective
function used in most approaches is given by equation (1). Only for the NCC-MCMC case was the
posterior distribution of equation (4) used. The production data comprised BHP, GOR and WCT data from all six
wells. For the standard deviations, σ_ijk, the noise levels applied to the synthetic case were used. The
weights, w_ijk, were set to 4 just before and after breakthrough in the wells with gas or water
breakthrough, to give more weight to the occurrence of the breakthrough. The wells for which no water
breakthrough was observed received a zero WCT data point, again with a weight of 4, to penalize reservoir
models that showed early breakthrough. WCT and GOR data after breakthrough and all pressure data were
given a weight of 1. (The reservoir simulators in Table 2 are numbered as follows: 1 = Eclipse, 2 = More,
3 = Athos.)
We note that it is hard to isolate the influence of the model assumptions, for example to differentiate
between a Gaussian Random Field model and a zone model for the spatial character of the porosity /
permeability field, or between a deterministic poro/perm relationship, a stochastic one, and independent
porosity and permeability. This hampers the comparison of the uncertainty ranges for forecasts from
various approaches.
Base Case
Table 2 shows which selections have been made in the workflow for the quantification of production
forecast uncertainty. Figure 5 shows the cumulative pdf curves generated by the various approaches used
by the partners. Figure 6 displays a summary in terms of the low-median-high ranges forecasted. The
results are classified into three groups, the TNO curves, the IFP/NCC-Oliver curves and the
Amoco/Elf/other-NCC curves.
TNO group
The TNO curves are generated using piecewise homogeneous regions. The digits labelling the different
curves correspond to the number of layers (first digit), the number of regions within each layer (middle
digit, if present) and the parameter types (last digit), i.e. φ only, or φ, kh and kv, respectively. The TNO
curves show larger uncertainty ranges than the other curves. Although their shapes are similar, they show a mutual
shift along the horizontal axis. Initially the wider range was attributed to the use of production sampling
in the Oliver approach. Testing this hypothesis showed that dropping the production sampling does not
significantly affect the results. Other partners' results confirmed this. Further testing led to the final
conclusion that the use of homogeneous regions is not justified in this case. The homogeneous region
models do not lead to satisfactory history matches and result in large spreads in production forecast.
Parameterizations based on heterogeneous models, such as the pilot point approaches, are more
appropriate.
Amoco / Elf / NCC-GA,-MCMC group
The Amoco-Anisotropic curve and the Elf curve are practically identical. The main difference between
the approaches that generated the curves is the use of a different optimizer. Apparently, it is not critical
in this case to use global optimization or to consider multiple clusters. This result suggests that the
posterior surface may look like a plateau and not a landscape of individually identifiable peaks. The
Amoco-Isotropic curve shows a narrower uncertainty range than the Anisotropic curve because the
variogram used for the anisotropic case has a longer range in the NW-SE direction that accounts for the
presence of elongated channels. With the longer range, less variability is present in the reservoir porosity
/ permeability fields.
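The effect of the range can be sketched numerically: lengthening the variogram range in one direction raises the correlation between neighbouring cells, leaving fewer independent degrees of freedom in each realization. The grid size, ranges and azimuth convention below are illustrative assumptions only:

```python
import numpy as np

def exp_variogram_cov(coords, sill=1.0, range_major=2000.0,
                      range_minor=800.0, azimuth_deg=135.0):
    """Covariance from an exponential variogram with geometric anisotropy.

    azimuth_deg gives the major-range direction, measured from the x-axis
    (a hypothetical convention for this sketch).
    """
    a = np.deg2rad(azimuth_deg)
    rot = np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])
    # rotate into the anisotropy frame, then scale each axis by its range
    u = coords @ rot.T / np.array([range_major, range_minor])
    d = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
    return sill * np.exp(-3.0 * d)   # exponential model with practical range

rng = np.random.default_rng(0)
nx = ny = 12
xs, ys = np.meshgrid(np.arange(nx) * 180.0, np.arange(ny) * 180.0)
coords = np.column_stack([xs.ravel(), ys.ravel()])

# isotropic: both ranges equal; anisotropic: long range along the channels
cov_iso = exp_variogram_cov(coords, range_major=800.0, range_minor=800.0)
cov_ani = exp_variogram_cov(coords, range_major=2400.0, range_minor=800.0)

# realizations drawn with the same normal deviates for comparison
z = rng.standard_normal(coords.shape[0])
field_iso = np.linalg.cholesky(cov_iso + 1e-8 * np.eye(len(z))) @ z
field_ani = np.linalg.cholesky(cov_ani + 1e-8 * np.eye(len(z))) @ z
```

With the longer major range, the covariance between adjacent cells is higher, so the anisotropic realizations are smoother and carry less cell-to-cell variability.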
The NCC-MCMC curve has a smaller range; two reasons are put forward to explain this. Firstly, in the
MCMC, the reservoir porosity/ permeability fields are linear combinations of Gaussian random fields,
thus retaining Gaussianity. This extra constraint reduces the uncertainty ranges produced in the reservoir
models. Porosity/permeability fields generated with the pilot-point approaches do not satisfy Gaussian
constraints. Secondly, after detailed analysis of the results of various approaches (Omre et al., 1999), it
was noted that the optimization step used in most approaches leads to generation of extreme values of
parameters (many parameters end up at values prescribed at geological/physical limits). These extreme
values do result in good history matches, but give wider spreads in the forecasts. Since MCMC results
generally show much smoother optimized porosity/permeability fields their production forecast range is
smaller. NCC have used the posterior distribution from equation (4) for the dynamic data conditioning in
the NCC-GA and NCC-MCMC result. Since this criterion is weaker than the least squares norm from
equation (1), because of the lack of normalization, one would expect wider ranges for these NCC curves.


Since the ranges are in fact smaller, we may conclude that the exact form of the objective function does not
significantly influence the result.
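The Gaussianity-preserving constraint can be illustrated in the spirit of the gradual deformation method (Hu, 2000): combining two independent realizations with weights cos θ and sin θ yields a new realization of the same Gaussian field, so the chain never introduces the extreme, non-Gaussian values seen with other optimizers. For simplicity this sketch uses a flattened field of independent cells:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells = 400                                # flattened porosity/perm field
z1, z2 = rng.standard_normal((2, n_cells))   # two independent realizations

# linear combination with cos/sin weights: cos^2 + sin^2 = 1, so for any
# theta the proposal has the same (Gaussian) distribution as z1 and z2
theta = 0.3
proposal = np.cos(theta) * z1 + np.sin(theta) * z2
```

Sweeping `theta` deforms the realization continuously between `z1` and `z2`, which is what makes such proposals convenient inside an MCMC or calibration loop.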
IFP / NCC-Oliver group
The two IFP curves and the NCC-Oliver curve have median values which are shifted to lower ranges.
This systematic shift is attributed to the use of another reservoir simulator for these approaches, i.e.
Simulator 3 versus Simulator 1. Note that there is no apparent shift between the approaches that use
Simulator 2 or Simulator 1. In the incremental study we will see that this shift vanishes when incremental
recovery is considered. The uncertainty range in the IFP-Oliver curve is very small. This can be
attributed to the fact that no prior sampling was used in the IFP-Oliver approach, and that the production
sampling does not contribute very much to the uncertainty range. In the NCC-Oliver approaches prior
sampling is used, resulting in an increase in the forecasted range. The STM curve has the largest range.
This is remarkable, because STM is believed to quantify the envelope of the forecast pdf around a single
posterior peak. Again, this result is consistent with a posterior surface that resembles a large plateau.
Comparison to the truth case
Figure 6 shows that five out of the eleven estimated uncertainty ranges do not include the truth case
value, thus confirming the general experience that uncertainty is often underestimated. Note that the
TNO-2 approach with its large uncertainty range still does not include the truth case, and that NCC-MCMC with its small uncertainty range does include the truth case. Again this indicates the importance
of using properly history-matched models and appropriate models for heterogeneity.
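The range check used throughout can be written down directly; the forecast ensemble below is a synthetic stand-in, not data from any of the approaches:

```python
import numpy as np

rng = np.random.default_rng(42)
# stand-in ensemble of forecast totals (million Sm3) for one approach
forecasts = np.sort(rng.normal(loc=3.6, scale=0.1, size=50))

# order the samples and take the 10 %, 50 % and 90 % values, as done for
# the low-median-high ranges of Figure 6
p10, p50, p90 = np.percentile(forecasts, [10, 50, 90])

truth = 3.87                         # illustrative truth-case value
covers_truth = p10 <= truth <= p90   # does the predicted range cover it?
```

Counting how many approaches yield `covers_truth == True` gives exactly the "five out of eleven" style tally reported here.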
Incremental case
All partners were asked to generate production forecasts for the conditioned reservoir models, including
five incremental wells. The partners were asked to quantify the incremental recovery, the difference
between total oil production of the base case and the case with the incremental wells included. Again, the
result should be a cumulative distribution function.
Figure 7 shows the cdf curves of incremental recovery for the incremental wells case. These curves can
be split into the TNO curve and all other curves. Again, we see that the homogeneous region
parameterization leads to a severe bias in the results. All other curves tend to be in agreement. Note
especially that the IFP curve now lines up with the other curves. The shift between Simulator 3 and
Simulators 1 and 2 is negated because it occurs both in the base case and in the case including the
incremental wells. Taking the difference between the two results to obtain the incremental recovery thus
cancels the shift.
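The cancellation follows from the pairing: each conditioned model is run twice, once with and once without the extra wells, and the simulator's systematic shift enters both runs. A small synthetic sketch (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 50
true_base = rng.normal(3.6, 0.1, n_models)   # stand-in totals, million Sm3
true_incr = rng.normal(0.8, 0.05, n_models)  # effect of the 5 extra wells
bias = -0.15                                 # hypothetical simulator shift

# the shifted simulator biases BOTH runs of every conditioned model
base_run = true_base + bias
incr_run = true_base + true_incr + bias

# paired per-model difference: the systematic shift cancels exactly
incremental = incr_run - base_run
p10, p50, p90 = np.percentile(incremental, [10, 50, 90])
```

Because the subtraction is done per model, only effects that differ between the two runs (here, the extra wells) survive in the incremental-recovery distribution.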
Comparison to the truth case
Most approaches underestimate the incremental recovery of the truth case. The truth case value lies on
the high side of five of the curves. The NCC curves all have ranges which do not cover the truth case
value. The fact that the truth case value lies on the upper side of the curves can be explained in hindsight. The
well locations were chosen in such a way that the recovery from the truth case permeability field would
be optimized. For an ensemble of permeability fields, these well locations are not optimal and will
produce at a lower average rate.
Discussion
Before drawing conclusions from the results, some discussion is needed regarding the comparison of the
results from the various partners. In order to make an objective comparison between the various
uncertainty quantification approaches possible, the prior and likelihood measures were prescribed.
However, especially the results from the NCC approach make it clear that the uncertainty ranges are
influenced not only by the information used to define the objective function, but also by the underlying
assumptions made and techniques used in the reservoir modeling. These can be seen as an implicit form


of prior information. In this study the important assumptions are related to the fluid flow model and the
spatial distribution of porosity and permeability. Making a particular choice for the fluid flow simulator
and for modeling the spatial distributions will affect the uncertainty quantification in an implicit way.
This makes comparison of uncertainty ranges somewhat ambiguous. In particular, it is not formally
possible to compare all results with a single true uncertainty range. Each approach has its own
underlying set of assumptions, resulting in its own uncertainty range. Therefore, we can only make
observations and not judgements in terms of the correctness of estimates.
Conclusions
From the PUNQ-S3 integrated case study we conclude the following:
- Using porosity and permeability multipliers for homogeneous layers or regions resulted in large
  uncertainty ranges and a significant bias in the production forecast, due to the poor quality of the
  history matches obtained with these models. Heterogeneous models parameterized by pilot points
  gave more consistent results.
- Production sampling (as introduced by Oliver et al. (1996)) did not significantly contribute to the
  forecast uncertainty range. Omitting the sampling of several prior geological models led to too
  narrow uncertainty ranges around the forecasts.
- The use of different reservoir simulators to model the same reservoir led to a systematic difference in
  production forecast for one of the (three) simulators. Fortunately, when considering incremental
  recovery, the difference was negated.
- Approaches which include an optimization step showed a tendency to predict larger uncertainty
  ranges, because the conditioned porosity/permeability fields contain significantly more extreme
  high and/or low values.
- Results suggest that the multi-dimensional posterior distribution looks more like a large plateau than
  like many isolated peaks.
- Comparison with the cumulative production forecast of the truth case showed that five out of the
  eleven predicted forecast ranges did not include the truth case value.
- All approaches underestimated the incremental recovery of the truth case, probably because the
  locations of the incremental wells were optimally designed for the truth case.
We realize that these conclusions may be specific to the current test case. For confirmation, the above
case study should in principle be repeated for many truth cases.
Acknowledgements
The European Commission is acknowledged for partly funding this project in the Joule-III Non-Nuclear
Energy Programme. We thank all other partners in the PUNQ project for the lively
discussions and valuable contributions to this study. We thank Elf for supplying a field case which
acted as a valuable integration tool.
References
ABRAHAMSEN, P., EGELAND, T., LIA, O., OMRE, H., 1992, An integrated approach to prediction of
hydrocarbon in place and recoverable reserve with uncertainty measures, Paper SPE 24276,
presented at 1st SPE Europ. Petr. Comp. Conf., Stavanger, 25-27 May.
BERTEIG, V., HALVORSEN, K.B., OMRE, H., 1988, Prediction of hydrocarbon pore volume with
uncertainties, Paper SPE 18325, presented at SPE Annual Technical Conference & Exhibition,
Houston, 2-5 Oct.


BOS, C.F.M., 2000, Production forecasting with Uncertainty Quantification, Final report of EC project,
NITG-TNO report NITG 99-255-A, Jan.
BROYDEN, C.G., 1965, A class of methods for solving nonlinear equations, Math. Comp., 19, pp. 577-593.
CUYPERS, M., DUBRULE, O., LAMY, P., BISSEL, R., 1998, Optimal choice of inversion
parameters for history matching with the pilot point method, Proc. ECMOR VI conf., Peebles,
8-11 Sep.
DEUTSCH, C.V., JOURNEL, A., 1992, GSLIB: Geostatistical Software Library and User's Guide,
Oxford University Press, Oxford.
FLORIS, F.J.T., 1996, Direct conditioning of Gaussian random fields to dynamic production data, Proc.
ECMOR V conf., Leoben, 3-6 Sep.
FLORIS, F.J.T., BOS, C.F.M., 1998, Quantification of uncertainty reduction by conditioning to dynamic
production data, Proc. ECMOR VI conf., Peebles, 8-11 Sep.
FLORIS, F.J.T., PEERSMANN, M.R.H.E, 1998, Uncertainty estimation in volumetrics for supporting
hydrocarbon E&P decision making, J. of Petroleum Geoscience, Vol. 4, No. 1, pp. 33-40.
GOLDBERG, D.E., 1989, Genetic algorithms in search, optimization, and machine learning, Addison-Wesley, Reading.
HEGSTAD, B.K., OMRE, H., TJELMELAND, H. and TYLER, K., 1994, Stochastic simulation and
conditioning by annealing in reservoir description, In Armstrong and Dowd (ed), Geostatistical
Simulations, Kluwer Academic Publisher, pp. 43-55.
HEGSTAD, B.K. and OMRE, H., 1997, Uncertainty assessment in history matching and forecasting, in
Baafi and Schofield (ed.), Geostatistics Wollongong '96, Vol. I, Kluwer Academic Publishers, pp.
585-596.
HOLDEN, L., 1998, Adaptive chains, Tech. Rep. SAND 11/98, Norwegian Computing Center, Oslo,
Norway.
HU, L.-Y., 2000, Gradual deformation and iterative calibration of Gaussian-related stochastic models,
Math. Geol., 32 (1), pp. 87-108.
LANDA, J.L., HORNE, R.N., 1997, A procedure to integrate well test data, reservoir performance
history and 4-D seismic information into a reservoir description, Paper SPE 38653, presented at SPE
Annual Technical Conference & Exhibition, San Antonio, 5-8 Oct.
LIA, O., OMRE, H., TJELMELAND, H., HOLDEN, L., EGELAND, T., 1997, Uncertainty in
reservoir production forecasts, AAPG Bulletin Vol. 81, Nr. 5.
MARSILY, G. de, LAVEDAN, G., BOUCHER, M., FASANINO, G., 1984, Interpretation of interference
tests in a well field using geostatistical techniques to fit the permeability distribution in a reservoir
model, in Geostatistics for Natural Resources Characterization, eds. G. Verly et al., Part 2, D. Reidel
Publ. Comp., pp. 831-849.
OLIVER, D., HE, N., REYNOLDS, A.C., 1996, Conditioning permeability fields to pressure data, Proc.
ECMOR V conf., Leoben, 3-6 Sep.
OMRE, H., TJELMELAND, H., WIST, H.T., 1999, Uncertainty in history matching: model
specification and sampling algorithms, NTNU-Trondheim Internal report, Statistics No. 6.
POWELL, M.J.D., 1972, in Numerical Methods for Non-linear Algebraic Equations (ed. W. Murray),
Academic Press, London and New York, p. 29.
RAMARAO, B.S., MARSH LAVENUE, A., DE MARSILY, G., MARIETTA, M.G., 1995, Pilot point
methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields
1. Theory and computational experiments, Water Resources Research, Vol. 31, No. 3, pp. 475-493.
ROGGERO, F., 1997, Direct Selection of Stochastic Model Realizations Constrained to Historical Data,
Paper SPE 38731, paper presented at the 1997 SPE Annual Technical Conference and Exhibition, San
Antonio, Texas, 5-8 October.


SAMSON, P., DUBRULE, O., EULER, N., 1996, Quantifying the impact of structural uncertainties on
gross-rock volume estimates, Paper SPE 35535, presented at European 3D research modelling
conference, Stavanger, 16-17 Apr.
TJELMELAND, H., 1997, A note on the Bayesian approach to history matching of reservoir
characteristics, Proceedings of IAMG'97: The Third Annual Conference of the International
Association for Mathematical Geology, ed. Pawlowsky-Glahn V., International Center for Numerical
Methods in Engineering (CIMNE), Barcelona, Spain, Vol. 2, pp. 772-777.
WEN, X-H, GOMEZ-HERNANDEZ, J.J., CAPILLA, J.E., SAHUQUILLO, A., 1996, Significance of
conditioning to piezometric head data for predictions of mass transport in groundwater modeling,
Math. Geology, Vol 28, No 7.
ZHAN WU, REYNOLDS, A.C., OLIVER, D., 1998, Conditioning geostatistical models to two-phase
production data, Paper SPE 49003, presented at SPE Annual Technical Conference & Exhibition,
New Orleans, 27-30 Sep.
Website
http://www.nitg.tno.nl/punq


Appendix A. Geological description.


This appendix gives a geological description of the heterogeneities that occur based on knowledge of the
regional geology (such as paleoslope, paleo water depth, gross environments of deposition, size and
shape of sedimentary bodies, structural trends and style) which is normally known from adjacent fields
and wildcat wells. In the geological interpretation, the layer thicknesses, which are all in the order of 5
meters, played an important role.
Sediments were deposited in a deltaic coastal plain environment. Layers 1, 3, and 5 consist of fluvial
channel fills encased in floodplain mudstone. Layer 2 represents marine or lagoonal clay with some distal
mouthbar deposits. Layer 4 represents a mouthbar or lagoonal delta encased in lagoonal clays.

Layers 1, 3, and 5 have linear streaks of highly porous sandstone (phi > 20 %), with an azimuth
somewhere between 110 and 170 degrees (SE). These sandstone streaks of about 800 m width are
embedded in a low porosity shale matrix (phi < 5 %). The width and the spacing of the streaks vary
somewhat between the layers. A summary is given in Table 3.

In layer 2, marine or lagoonal shales occur, in which distal mouthbar or distal lagoonal delta deposits
occur. They translate into a low-porosity (phi < 5%), shaly sediment, with some irregular patches of
somewhat higher porosity (phi > 5%).

Layer 4 contains mouthbars or lagoonal deltas within lagoonal clays, so a flow unit is expected
which consists of an intermediate porosity region (phi ~ 15%) with an approximate lobate shape
embedded in a low-porosity matrix (phi < 5%). The lobate shape is usually expressed as an ellipse (ratio
of the axes = 3:2) with the longest axis perpendicular to the paleocurrent (which is between 110 and 170
degrees SE).
Table 3. Expected sedimentary facies with estimates for width and spacing for major flow units for each
layer.

Layer   Facies           Width         Spacing
1       Channel Fill     800 m         2-5 km
2       Lagoonal Shale   -             -
3       Channel Fill     1000 m        2-5 km
4       Mouthbar         500-5000 m    10 km
5       Channel Fill     2000 m        4-10 km


List of tables
Table 1. Classification of Uncertainty Quantification methods.
Table 2. Approaches used by the different partners.
Table 3. Expected sedimentary facies with estimates for width and spacing for major flow units for each
layer.

List of Figures
Figure 1. General work flow used for the quantification of forecast uncertainty.
Figure 2. Top Structure map of the PUNQ-S3 case. The field contains both oil and gas. Black dots
indicate six initial wells located around the gas-oil contact. White dots indicate additional wells added in
a later phase.
Figure 3. Horizontal permeability fields in the five layers for the synthetic PUNQ-S3 case.
Figure 4. Typical dynamic production data for a well showing bottom hole pressure (BHP), oil
production rate (OPR), gas-oil ratio (GOR) and water cut (WCT). After a 1 year extended production test
and 3 years field shut-in, field production starts at a fixed oil rate. History data runs until year 8 and is
forecasted until year 16.5. This well shows the start of water production. Two other wells show gas
breakthrough.
Figure 5. Cumulative distribution functions for the total oil production forecasted up to 16.5 years. The
curves can be split up into three classes, the TNO curves, the IFP / NCC-Oliver curves and the Amoco /
Elf / other-NCC curves.
Figure 6. Summary of low-median-high ranges of cumulative production at 16.5 years for all
approaches. The ranges were generated by ordering the samples and taking the 10 %, 50 % and 90 %
models.
Figure 7. Cumulative distribution functions for incremental recovery at 16.5 years using 5 more wells.
The curves can be split up into two classes, the TNO curve and the other curves.



[Flow chart. Two inputs, "parameterization + prior pdfs" and "history data + error data", feed "prior sampling" and "production sampling", which yield a reservoir model and production data. A loop of reservoir simulation, objective function calculation and updating of the reservoir model produces a history matched reservoir model, which is used for forecasting.]

Figure 1. General work flow used for the quantification of forecast uncertainty.


[Top structure map at step 0: depth contours from 2339 to 2411 m over a model area of roughly 4000 m by 3200 m.]

Figure 2. Top Structure map of the PUNQ-S3 case. The field contains both oil and gas. Black dots
indicate six initial wells located around the gas-oil contact. White dots indicate additional wells added in
a later phase.


Figure 3. Horizontal permeability fields in the five layers for the synthetic PUNQ-S3 case.


Figure 4. Typical dynamic production data for a well showing bottom hole pressure (BHP), oil
production rate (OPR), gas-oil ratio (GOR) and water cut (WCT). After a 1 year extended
production test and 3 years field shut-in, field production starts at a fixed oil rate. History data
runs until year 8 and is forecasted until year 16.5. This well shows the start of water
production. Two other wells show gas breakthrough.


[Plot: cumulative distribution function (0 to 1) versus cumulative oil production after 16.5 years, from 3.0 to 4.2 million Sm3. Curves shown: TNO-1: 5x3 pars; TNO-2: (5x6)x3 pars; TNO-3: (5x14)x1 pars; Amoco Isotropic; Amoco Anisotropic; Elf; NCC-GA; NCC-MCMC; IFP-STM; IFP-Oliver; NCC-Oliver; truth case.]

Figure 5. Cumulative distribution functions for the total oil production forecasted up to 16.5 years. The
curves can be split up into three classes, the TNO curves, the Amoco / Elf / other-NCC curves and the
IFP / NCC-Oliver curves.


Figure 6. Summary of low-median-high ranges of cumulative production at 16.5 years for all
approaches. The ranges were generated by ordering the samples and taking the 10 %, 50 % and 90 %
models.


[Plot: cumulative distribution function (0 to 1) versus incremental oil production after 16.5 years, from 0 to 1.8 million Sm3. Curves shown: TNO-2: (5x6)x3 pars; Amoco Isotropic; Amoco Anisotropic; Elf; NCC-GA; NCC-MCMC; IFP-STM; NCC-Oliver; truth case.]

Figure 7. Cumulative distribution functions for incremental recovery at 16.5 years using 5 more wells.
The curves can be split up into two classes, the TNO curve and the other curves.
