
SPE 116675

From Mega-Cell to Giga-Cell Reservoir Simulation


Ali H. Dogru, Larry S.K. Fung, Tareq M. Al-Shaalan, Usuf Middya, and Jorge A. Pita, Saudi Arabian Oil Company

Copyright 2008, Society of Petroleum Engineers


This paper was prepared for presentation at the 2008 SPE Annual Technical Conference and Exhibition held in Denver, Colorado, USA, 21-24 September 2008.
This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its
officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to
reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
For many oil and gas reservoirs, especially the large reservoirs of the Middle East, the availability of vast amounts of
seismic, geologic and dynamic reservoir data results in high resolution geological models. Because of the
limitations of conventional reservoir simulator technologies, high resolution models are upscaled to flow simulation
models by reducing the total number of cells from millions to a few hundred thousand. Flow simulators using
upscaled reservoir properties produce averaged reservoir performance and often fall short of accurately predicting
recovery.
Recognizing the limitations of conventional simulators for giant oil and gas reservoirs, parallel reservoir
simulators have been developed. The first generation of parallel simulators increased simulator capabilities by
an order of magnitude; as a result, mega-cell (million-cell) simulation became a reality. Parallel computers,
including PC clusters, were successfully used to simulate large reservoirs with long production histories, using
millions of cells. Mega-cell simulation helped recover additional oil and gas through a better understanding of
reservoir heterogeneity. The speed of parallel hardware also made it practical to perform the many runs needed
to address uncertainty.
Despite the many benefits of parallel simulation technology for large reservoirs, the average cell size still remains
on the order of hundreds of meters for large reservoirs. To fully utilize the seismic data, smaller grid blocks, on the
order of fifty meters in length, are required. This grid block size results in billion-cell (giga-cell) models for giant
reservoirs, a two orders of magnitude increase over mega-cell simulation. To simulate giga-cell models in practical
time, new innovations in the main components of the simulator, such as the linear equation solvers, are essential.
Next-generation pre- and post-processing tools are also needed to build and analyze giga-cell models in practical
times.
This paper describes the evolution of reservoir simulator technology from the mega-cell scale to the giga-cell scale,
presenting current achievements, challenges and the road map for giga-cell simulators.
INTRODUCTION
To simulate the giant oil and gas reservoirs with sufficient resolution, parallel reservoir simulator technology is an
absolute necessity since the conventional simulators based on serial computations cannot handle these systems.
Interest in parallel reservoir simulation started over two decades ago in the oil industry. The earliest attempt was
by John Wheeler [1]. This was followed by the work of John Killough [2] and of Shiralkar and Stephenson [3].
Realizing the importance of reservoir simulation technology, in 1992 Saudi Aramco initiated a program for
developing the company's first in-house Parallel Oil Water and Gas Reservoir Simulator, POWERS [4]. The
simulator was developed on a parallel computer (Connection Machine CM-5) in 1996. The first field simulation
model, using 1.3 million cells, was successfully completed [5]. In 2000 the parallel code was ported to symmetric
multiprocessing (shared memory) computers, IBM Nighthawk systems, and later, in 2002, to distributed memory
PC clusters.
By 2002, the world's largest oil field, Ghawar, was being simulated with a full field model using 10 million active cells,
sixty years of production history and thousands of wells. This experience proved that parallel reservoir simulation
was successful. Therefore, more features, such as compositional modeling, dual porosity/dual permeability and
coupled surface facilities, were added to the standard black oil options to cover more areas of application.
Figure 1 shows the past and future trend in simulating large reservoirs at Saudi Aramco. As seen, the impact of
parallel simulation on model size is very significant. Model size in this figure follows the general trend of
exponential growth in computer hardware technology.

Figure 1: Growth of Model Size for field studies at Saudi Aramco (model size in million cells versus year, 1994 to 2008; conventional simulators versus the parallel simulator)

As parallel computers progressed from specialized designs such as Connection Machines to PC Clusters, the
programming language also progressed from High Performance Fortran to Shared Memory Parallel Fortran
(OpenMP), F90, Message Passing Interface libraries and finally parallel C++.
Industry-wide, several efforts have been made to introduce new programming languages and algorithms for
parallel simulation, starting in early 2001 with Beckner [6], followed by DeBaun et al. [7] and Shiralkar et al. [8] in 2005.

BENEFITS OF MEGA-CELL PARALLEL RESERVOIR SIMULATION


Parallel reservoir simulation offers several advantages over traditional, single-CPU simulators. Chief among them
is the ability to handle very large simulation models with millions of cells. This capability reduces the upscaling
required from the geological model to the simulation model, resulting in a better representation of reservoir
heterogeneity.
Figure 2 shows a typical industry simulation process. Geological models with millions of cells are upscaled to
multi-hundred-thousand-cell simulation models. This produces highly averaged (diluted) results. Specifically,
sharp saturation fronts and trapped oil saturations will not be captured by this approach. Simulators in this
category use single-CPU computers, and their codes therefore use non-parallel, sequential programming
languages. We will refer to them as Sequential Simulators in this article.


Figure 2: Industry practice for building simulator models from the geological models

Figure 3: Parallel Simulation Process


As mentioned above, parallel simulators do not require strong upscaling. Typically, with mild upscaling, these
models can produce highly realistic simulation results. Figure 3 displays the concept of parallel simulation.
As shown in Figure 3, only mild upscaling is applied to the geological models. This helps retain the original reservoir
heterogeneity. Hence it is expected that this process will produce more accurate results for a good geological
model. Another important side benefit is that engineers can directly update the geological model during
history matching. This is not possible with the coarse grid models used by Sequential Simulators.
To further demonstrate the benefits of parallel simulation, we present an actual field case, shown in Figure 4.
Two modeling approaches are compared. The first is a 14-layer, 53,000-active-cell coarse grid model (Dx =
500 m) run on a conventional, non-parallel industrial simulator; the second is a 128-layer, 1.4-million-active-cell,
relatively fine grid (Dx = 250 m) parallel simulation model. The figure shows a vertical cross section between
a down-flank water injector and a crestal producing well. The reservoir has been water-flooded for over 30 years.
Changes in the water saturation over these 30 years are shown on this cross section. Red or brown indicates no
change (unswept), whereas green indicates a change in water saturation (swept).


Figure 4: Field Case


If one compares the computed performance of the crestal well for water cut and pressure, one can see that both
models match the observations very well. This indicates that the 14-layer model matches the field data as well as
the 128-layer model. When the change in water saturation (sweep) was examined, however, significant
differences between the two models were observed: the 14-layer model shows good sweep between
the injector and the crestal well, while the 128-layer model shows an unswept oil zone (brown pocket). Based on
these results, a new well location was chosen, as shown in the figure. A horizontal well was drilled and oil was
found as predicted by the simulator [5]. Following the drilling, the same well was logged. The water saturation
obtained from the log is plotted against the simulator-calculated water saturations in Figure 4. As seen, the simulator
was able to closely predict the vertical water saturation profile prior to drilling the well. Based on simulator results,
four new wells were drilled in 2003 and 2004 in similar locations to produce trapped oil. All the wells drilled found
oil and are still producing at a good rate.
MOTIVATION FOR GIGA-CELL SIMULATION
Presently, multi-million-cell simulation models are used for reservoir management, recovery estimates, placing
complex wells, making future plans and developing new fields. For giant reservoirs, current grid sizes vary
from 100 m to 250 m. The question is whether or not it would be beneficial to go to smaller grid sizes.
To answer this question, we will consider a giant reservoir with associated reservoirs embedded in the same
aquifer, as shown in Figure 5. For this reservoir system we will fix the number of vertical layers at 51 (with an
average thickness of 5 ft) and calculate the total number of grid blocks for various areal grid sizes. Using a 250 m
areal grid size yields 29 million cells. A seismic-scale grid, with a 25 m areal grid size, yields nearly 3 billion cells,
and a 12.5 m areal grid would yield about 12 billion cells. These counts reflect areal refinement alone; additional
vertical layers increase the totals further. For example, a 25 m areal grid with 100 layers results in 6 billion cells,
and a 12.5 m areal grid would yield about 20 billion cells. This example was chosen for the largest reservoir, since
even a very small increase in percent recovery due to more accurate simulation would yield billions of barrels of
additional oil. Therefore, for this reservoir, it is of interest to test finer grid models and see whether vertical and
areal refinement increases the accuracy of the model. Accuracy can be measured by comparing model results,
such as water cuts and pressures, with the observed data.
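The cell counts above follow from simple arithmetic, and the short sketch below reproduces them. The paper does not state the areal extent of the reservoir-aquifer system, so the extent is back-calculated here from the quoted 29 million cells at a 250 m grid with 51 layers; treat that number, and the sketch itself, as illustrative only.

// Back-of-the-envelope check of the cell counts quoted above. The areal extent
// is inferred from the stated 29 million cells at a 250 m grid with 51 layers;
// it is an assumption used purely for illustration.
#include <cstdio>

int main() {
    const double base_cells  = 29.0e6;  // cells at 250 m areal grid, 51 layers
    const double base_dx     = 250.0;   // m
    const int    base_layers = 51;
    // implied areal extent (m^2) = areal cell count * cell area
    const double area = (base_cells / base_layers) * base_dx * base_dx;

    const double dx[]     = {250.0, 25.0, 12.5};
    const int    layers[] = {51, 100};
    for (int nz : layers) {
        for (double d : dx) {
            const double cells = (area / (d * d)) * nz;
            std::printf("dx = %6.1f m, %3d layers -> %7.2f billion cells\n",
                        d, nz, cells / 1.0e9);
        }
    }
    return 0;
}

With the inferred extent, this reproduces the figures quoted above: roughly 3 billion cells at 25 m with 51 layers, about 6 billion at 25 m with 100 layers, and about 12 and 20 billion at 12.5 m with 51 and 100 layers, respectively.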


Figure 5: Areal View of the Reservoir Aquifer Systems

ACCURACY INCREASES WITH MORE REFINEMENT


In this section we consider the giant reservoir shown in Figure 5. As seen in this figure, there are several
reservoirs in the same full field model. Reservoir 1 is the small oil reservoir located in the NE corner of Figure 5.
All reservoirs are connected by a common aquifer. Production from the reservoirs started over 60 years ago.
To show the effect of finer grid modeling of the same reservoir system, we first define a base model and
make a history run. Next, we refine the base model and repeat the same run. We then compare the
results against the field observations to make a judgment. It should be noted that the finer grid models are
constructed from the base model by reducing the grid block sizes; the refined cells are simply assigned the
properties of the corresponding mother-model cells. Hence this exercise demonstrates the effect of numerical
errors. Building very high resolution models with actual data interpolated from seismic or geostatistics will be the
subject of a future paper.
In this paper, we will present two sets of examples on the giant Ghawar model. The base model has 10 million
cells, over 60 years of production/injection history with thousands of wells involved. From this base model, first a
48 million cell model will be constructed to study the effects of vertical refinement. Next, a 192 million cell model
will be constructed from the 48 million cell model to study the effects of areal refinement.
Refinement in both directions would yield a billion-cell model. Work on this is in progress and will be reported
in a future paper. Here, a few results for a 700 million cell model are presented to illustrate the simulator's
capabilities.
Higher Vertical Resolution
Our base model is a ~10 million cell full field model covering the reservoirs shown in Figure 5. This model has 17
vertical layers and a 250 m (840 ft) areal grid size. We increase the number of layers to 85, resulting in a
48 million cell model. This is accomplished by dividing each vertical layer into five and assigning the properties of
each base-model layer to its refined layers. The objective is to see whether numerical refinement alone makes
any difference to the accuracy of the model. Figure 6 shows the layering and total number of cells for both
models. The same relative permeabilities were used in both models.
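A minimal sketch of this kind of parent-to-child layer subdivision is given below. It assumes a simple layered array of cell properties; the structure and names are illustrative only and are not taken from POWERS or GigaPOWERS.

// Sketch of vertical refinement by layer subdivision with property inheritance:
// each parent layer is split into 'nsplit' child layers and the child cells
// simply copy the parent properties (no upscaling or downscaling is applied).
// Calling refine_vertically(base, 5) turns a 17-layer grid into an 85-layer grid.
#include <utility>
#include <vector>

struct CellProps {
    double poro;               // porosity
    double kx, ky, kz;         // permeabilities
    double thickness;          // layer thickness, ft
};

using Layer = std::vector<std::vector<CellProps>>;   // [j][i]

std::vector<Layer> refine_vertically(const std::vector<Layer>& coarse, int nsplit) {
    std::vector<Layer> fine;
    fine.reserve(coarse.size() * nsplit);
    for (const Layer& parent : coarse) {
        for (int s = 0; s < nsplit; ++s) {
            Layer child = parent;                    // inherit parent properties
            for (auto& row : child)
                for (auto& cell : row)
                    cell.thickness /= nsplit;        // e.g. 15 ft -> five 3 ft layers
            fine.push_back(std::move(child));
        }
    }
    return fine;
}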
It is expected that vertical refinement of layers for a thick reservoir with good vertical permeability would better
simulate the advancing water fronts. This is due to a better representation of the gravitational effect that
influences the shape of the moving water fronts. As a result, water arrival times and cuts at the producers are
expected to be more realistic. Finally, this would also reduce or eliminate the need for pseudo functions and
unrealistic multipliers in history matching. The reservoir in this example is thick, and vertical fractures enhance
its vertical permeability.
Figure 7 demonstrates the concept: the advancing water front, due to flank water injection, is better represented
by the fine grid model than by the coarse grid (base) model. Water moves like a piston into a 15 ft base-model
layer, showing no gravity effects. In the fine grid model, where each 15 ft layer is divided into five layers of 3 ft
each, the gravity effects appear: more water is expected in the bottom layer because water is heavier than oil.
The field being studied has peripheral water injection, is thick, and has sufficient vertical permeability (in some
areas due to local fractures).

Figure 6: Base model and Vertically Refined Full Field Model definition

Figure 7: Effects of more vertical layering on injected water movement in a thick reservoir (conceptual sketch, not to scale; a 15 ft layer versus its subdivision into five 3 ft layers, Dx = 840 ft, same relative permeability curves in both cases)


Figure 8: Simulated vertical water saturation profiles at Well A in 1990 (left: 17-layer model; right: 85-layer model)

Figure 9: Well Performance Comparison of 17 and 85 layer models with field observed water production and well
pressures

Figure 10: Water Rate predictions from both models


Results of the simulation runs are shown in Figure 8. This figure illustrates the calculated vertical water fronts in
1990. The left image is from the base model, the image on the right is from the refined grid model. Figure 8 shows
that calculated vertical water fronts, due to west flank injection, confirm the concept postulated in Figure 7. The
fine grid model with 85 layers shows that water has already broken through at the producer, while the coarse grid
model with 17 layers shows no water breakthrough yet.
The next question is: which result is correct? To answer this question we need to examine the observed
water production and pressures at the producing well in this figure. When simulator results are compared with the
field observations (Figures 9 and 10), one sees that close agreement is observed for the fine grid model for water
arrival, water production rate and well pressures. The coarse grid model (base model) miscalculates water arrival
(by a few years), water production rate and pressures. In this case the coarse grid model requires further
adjustments in parameters for history matching in this area of the reservoir.

Higher Areal Resolution


Next, the areal grid resolution was increased by a factor of two in the 48 million cell model (85 layers). This was
done by subdividing the areal grid size by a factor of 2 in both X and Y, thereby quadrupling the total
number of cells (48 million cells x 4 = 192 million cells). The new model has a 125 m (420 ft) areal grid block size
and a total of 192 million active cells. It must be noted that the properties of the fine grid cells were inherited from
the coarse cells (parent-and-child logic); hence no upscaling was implemented. The objective here was again to
observe the effects of numerical discretization, and hence of numerical errors, alone.
In this section we show the improvement in the solution not only in the main reservoir, as in the previous
example, but also in the peripheral reservoirs. As seen in Figure 5, all reservoirs are embedded in a
common aquifer, and hence withdrawal from one reservoir affects the others. Therefore, simulating the whole
system should be more accurate than simulating the reservoirs individually. This is particularly important
for high permeability reservoirs such as those shown here. Accurate accounting of boundary fluxes improves the
accuracy of the solution. Below we show how much the accuracy improves, even in the nearby reservoirs, with
finer grid resolution.
The direct effect of areal refinement on the simulator results for this system appears as reduced numerical
dispersion in the areal directions. This effect is demonstrated on the conceptual model shown in Figure 11, which
illustrates the movement of injected water in a vertical cross section. There are two sub-figures in Figure 11. The
one on the left illustrates the water movement using an 840 ft (250 m) areal grid size with fine gridding in the
vertical direction (Dz = 3 ft). As illustrated, and as discussed in the previous section, water is expected to move
faster in the bottom layer and can break through at a hypothetical well located at the east boundary of this
vertical cross section. The sub-figure on the right illustrates that when a smaller areal grid of 420 ft (125 m) is
used, water movement is delayed in the x direction owing to the reduction of numerical dispersion in the x-y
(areal) directions. Therefore, water is not expected to break through at a hypothetical well located at the east
boundary of this cross section.
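As a one-line reminder of why a smaller areal grid delays the computed breakthrough, the modified-equation analysis of first-order upwind differencing for a single front moving at velocity v (an idealization of the multiphase problem solved here, not the simulator's actual discretization analysis) gives an artificial diffusion coefficient of roughly

D_{num} \approx \frac{v\,\Delta x}{2}\left(1 - \frac{v\,\Delta t}{\Delta x}\right),

so halving the areal grid size from 840 ft to 420 ft roughly halves the numerical smearing of the front at a given Courant number v\Delta t/\Delta x.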
This concept was tested by building and running two full-field simulation models at two different resolutions.
The two models represent the same reservoir over the sixty years of history, with thousands of wells producing
and injecting. The coarse areal grid model, with an areal grid size of 840 ft (250 m) and 85 layers (Dz = 3 ft),
results in 48 million cells. The fine areal grid model, with an areal grid size of 420 ft (125 m) and the same number
of vertical layers, results in a 192 million cell model. Both models were run on a distributed memory cluster for the
full sixty years of history.
For illustration purposes, the effects of the areal grid size for Well D are shown in Figure 12. Well D is located in
the oil reservoir to the NE of the main giant reservoir, as shown in Figure 5. We note that both models (48 million
cells and 192 million cells) cover the main reservoir and the peripheral reservoir to the NE of the main reservoir,
which share the same aquifer.
Figure 12 shows the vertical water movement near Well D calculated by the coarse areal grid model (48 million
cells, 840 ft areal grid) in January 1981. As seen, because of water injection at Well C, water has already broken
through at most of the perforations of Well D as a result of areal numerical dispersion effects.


Figure 11: Effect of areal refinement on the advancing vertical water front for a thick reservoir (conceptual sketch, not to scale; Dx = 840 ft versus Dx = 420 ft with 3 ft vertical layers and the same relative permeability curves; the coarser areal grid shows earlier water breakthrough, the finer areal grid delayed breakthrough)

Figure 12: Water Saturations in 1981 for the 48 Million Cell Model

On the other hand, the effect of the areally refined grid model is displayed in Figure 13. This figure shows that
numerical dispersion in the areal direction is reduced and that water has broken through only at the bottom
sections of the well, not throughout the well perforations as in Figure 12.


Figure 13: Water Saturations in 1981 for the 192 Million Cell Model

The phenomenon observed in Figures 12 and 13 is reflected in the calculated water production for the same well,
shown in Figure 14.

Figure 14: Water Production at Well D (Reservoir 1): comparison of the 48 million and 192 million cell models with the field data (history)
As illustrated in Figure 14, the coarse grid model shows premature water breakthrough, while the fine grid model
shows later water arrival and less water production, which is consistent with the observations (shown in green).
Figures 12 to 14 illustrate the clear effect of numerical errors in an actual reservoir simulation using real data.
Larger Size Models
In order to test the parallel performance of the simulator, a new model was generated from the original model (10
million cells, 17 layers, 840 ft (250 m) areal grid size) by reducing the areal grid size to 42 m x 42 m (138 ft x
138 ft). This time the number of vertical layers was kept at 34 (twice the original) in order to fit the model into the
available Linux cluster.
The model was run, and a snapshot of the fluid distribution in the peripheral field (which is also a giant carbonate
reservoir) is illustrated in Figure 15. As shown, a secondary gas cap has formed due to production. This is
consistent with the history of the reservoir and also demonstrates the simulator's capability of handling very large
systems with three-phase flow at high resolution.

Figure 15: Gas Cap formation in Reservoir 1 of 700 Million Cell Model

Towards Giga Cell Models


The motivation for giga-cell reservoir simulation is not limited to the reasons explained above. As is well known,
more and more new technologies are being deployed in oil and gas fields. These technologies provide more
detailed descriptions of the reservoirs than were previously available. Some of the new technologies include
deep-well electromagnetic surveys, new borehole gravimetric surveys, new seismic methods, the use of
geochemistry and the implementation of many new sensor technologies. With the addition of these technologies,
it will be possible to describe reservoir heterogeneity more accurately and in greater detail. This will require more
grid cells to fully describe the fluid flow in the porous media. Even for regular-size reservoirs, it will be possible to
use giga-cell simulation to understand the fluid flow better (pore-scale physics) and to develop better recovery
methods.
Critical Components of a Giga-Cell Simulator
There are several critical components of the simulator that make large model runs possible. Figure 16 shows the
distribution of computational work among the various parts of the simulator. For any of the runs described in
the previous sections, more than half of the computational time is spent in the linear solver (52 percent). For
example, if a giga-cell run takes two days to complete for a gigantic reservoir with a long history, more than one
day is spent solving the linear system of equations. If the linear solver is not scalable or does not have good
parallel efficiency, this fraction could be 80 to 90 percent. The fact that it is 52 percent here is a result of newly
developed parallel algorithms. The following sections briefly describe the solver and the scalability of the simulator
on state-of-the-art parallel machines: a multi-node Linux PC cluster with Infiniband network interconnects and an
IBM Blue Gene/P.
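How sensitive the overall runtime is to solver scalability can be illustrated with a small, purely hypothetical calculation; the efficiencies used below are assumed numbers, not measurements from the simulator. If the non-solver parts of the code scale ideally while the solver scales with a relative efficiency e, the solver's share of the runtime grows from its baseline value as follows.

// Illustrative arithmetic only: how the linear solver's share of runtime grows
// when it scales worse than the rest of the simulator. f0 is the solver share
// at the baseline core count (52 percent, as in Figure 16); e is the solver's
// parallel efficiency relative to the (assumed ideally scaling) remainder.
#include <cstdio>
#include <initializer_list>

static double solver_share(double f0, double e) {
    return (f0 / e) / ((1.0 - f0) + f0 / e);
}

int main() {
    const double f0 = 0.52;
    for (double e : {1.0, 0.5, 0.3, 0.2})
        std::printf("relative solver efficiency %.1f -> solver share %.0f%%\n",
                    e, 100.0 * solver_share(f0, e));
    return 0;
}

At a relative efficiency of 0.2 to 0.3 the solver share climbs to roughly 78 to 84 percent, approaching the 80 to 90 percent range mentioned above for a poorly scaling solver.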


Figure 16: Distribution of Computational Work in a Next Generation Parallel Simulator (pie chart: the linear solver accounts for about 52 percent of the time, with the remainder split among Jacobian construction, well calculations, property updates, I/O and other tasks)

Parallel Linear Solver


The parallel linear solver used in the simulator described in this paper, GigaPOWERS, can handle linear
systems of equations arising from both structured and unstructured grids. The theory of the solver has recently
been presented by Fung and Dogru [9].
Basically, two new parallel unstructured preconditioning methods are used in the solver. The first method is
based on matrix substructuring and is called line-solve power series (LSPS). A maximum-transmissibility ordering
scheme and a line bundling strategy were also described to strengthen the methodology appropriately in the
unstructured context. The second method is based on variable partitioning and is known as constraint pressure
residual (CPR).
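In generic terms (a sketch of the power-series idea, not necessarily the exact formulation of reference [9]), a line-solve power-series preconditioner starts from a splitting A = L - N, where L collects the strongly coupled line blocks identified by the maximum-transmissibility ordering, and truncates the Neumann series for the inverse:

A^{-1} = (L - N)^{-1} = \Big( \sum_{k=0}^{\infty} (L^{-1}N)^{k} \Big) L^{-1} \;\approx\; M_{LSPS}^{-1} = \Big( \sum_{k=0}^{m} (L^{-1}N)^{k} \Big) L^{-1},

which converges when the spectral radius of L^{-1}N is below one. Each application requires only line solves and matrix-vector products, both of which parallelize well.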
The CPR method is a divide-and-conquer strategy that specifically attacks the nature of multiphase,
multicomponent transport problems in reservoir simulation. The basic premise of CPR is that the full-system
conservation equations are mixed parabolic-hyperbolic in nature, with the pressure part of the problem being
mainly parabolic and the concentration and saturation parts being mainly hyperbolic.
Therefore, CPR aims to decompose the pressure part and solve it approximately. The pressure solution is used to
constrain the residual on the full system, thus achieving a more robust and flexible overall solution strategy.
These two methods can be combined to produce the two-stage CPR-LSPS method. We also discussed the
two-stage CPR-PAMG method, in which parallel algebraic multigrid is used as the pressure-solve preconditioner.
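For reference, a generic two-stage CPR preconditioner of the type described in the general literature (ignoring the decoupling or weighting step usually applied when forming the pressure matrix; this is a sketch, not the exact GigaPOWERS formulation) can be written as

M_{CPR}^{-1} = M_{2}^{-1}\left(I - A\,C\,A_{p}^{-1}C^{T}\right) + C\,A_{p}^{-1}C^{T}, \qquad A_{p} = C^{T} A\, C,

where C restricts the full system to the pressure unknowns, A_p^{-1} is applied approximately (for example by parallel algebraic multigrid in CPR-PAMG), and M_2^{-1} is the full-system second-stage preconditioner (for example LSPS in CPR-LSPS).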
Distributed Unstructured Grid Infrastructure
A scalable, fully distributed unstructured grid framework has been developed [10], which was designed to
accommodate both complex structured and unstructured models, from the mega-cell to the giga-cell range, in the
distributed memory computing environment. This is an important consideration, as today's computer clusters can
have multiple terabytes of memory in aggregate. The memory local to each processing node is typically only a few gigabytes
shared by several processing cores. The distributed unstructured grid infrastructure (DUGI) provides data
organization and algorithms for parallel computation in a fully local referencing system for peer-to-peer
communication. The organization is achieved by using two levels of unstructured graph descriptions: one for the
distributed computational domains and the other for the distributed unstructured grid. At each level, the
connectivities of the elements are described as distributed 3D unstructured graphs.
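The sketch below illustrates what such a two-level description can look like in code; it is a simplified illustration under our own naming, not the actual DUGI data structures.

// Illustrative two-level distributed grid description (not the actual DUGI API):
// level 1 is a small graph of computational domains (typically one per process);
// level 2 is the local unstructured cell connectivity, stored in CSR form over
// owned cells plus ghost (halo) cells mirrored from neighboring domains.
#include <cstdint>
#include <vector>

struct DomainGraph {                       // level 1: domain-to-domain adjacency
    int my_domain;                         // commonly mapped to the MPI rank
    std::vector<int> neighbor_domains;     // domains sharing a grid interface
};

struct LocalGrid {                         // level 2: cells local to this domain
    std::int64_t n_owned = 0;              // cells owned by this process
    std::int64_t n_ghost = 0;              // halo cells owned by neighbors
    // CSR cell-to-cell connectivity in local indices [0, n_owned + n_ghost)
    std::vector<std::int64_t> conn_ptr;    // size n_owned + 1
    std::vector<std::int64_t> conn_idx;    // neighboring local cell indices
    std::vector<double>       trans;       // transmissibility per connection
    std::vector<std::int64_t> local_to_global;  // global cell ids, mainly for I/O
};

All solver and flux computations can then address cells through purely local indices, with peer-to-peer halo exchanges refreshing the ghost-cell values between neighboring domains.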


Numerical Example
Two full-field reservoir models, one with 172 million cells and the other with 382 million cells, are used to test the
scalability of the new simulator. The 172 million cell model is a structured grid model with grid dimensions of
1350 x 3747 x 34.
The areal cell dimensions are 83 m * 83 m and the average thickness is about 2.2 m. The horizontal
permeabilities, Kh, range from 0.03 mD to 10 Darcies. Vertical permeabilities vary from 0.005 mD to 500 mD.
There are about 3000 wells in the model. The 382 million cell model has the grid dimensions of 2250*4996*34. It
is a finer grid simulation model for the same reservoir.
A 512-node Linux PC cluster with Infiniband network interconnects was used to run the models. Each node has
two quad-core Intel Xeon CPUs running at 3.0 GHz, giving a total of 4096 processing cores. Our initial testing
indicates that the optimal number of cores to use per node is four; using additional cores per node does not
improve performance. Thus, our optimal usage for performance is always to use four cores per node and to
increase the number of nodes as the model size increases.
The scalability of the simulator for the 172 million cell model is illustrated in Figure 17. The solver achieved a
super-linear scale-up of 4.23 when increasing from 120 nodes (480 cores) to 480 nodes (1920 cores), and the
overall Newtonian iteration achieved a scale-up of 3.6 out of an ideal 4. For 20 years of history, the total runtime
was 8.86 hours using 120 nodes, which reduces to 2.76 hours using 480 nodes. A detailed breakdown of the
timing is given in Figure 18.
Scalability results for the 382 million cell model are shown in Figure 19. In this case the solver again achieved
super-linear scale-up, and the overall Newtonian iteration achieved linear scale-up, when increasing from 240
nodes to 480 nodes. This indicates that the simulator can fully benefit from using all 512 nodes (the entire cluster)
to solve a giga-cell simulation problem with good turnaround time. The total runtime of the 20-year history
simulation for the 382 million cell problem using 480 nodes (1920 cores) was 5.63 hours.
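The overall speed-up and parallel efficiency implied by these runtimes can be recomputed directly from the figures quoted in the text; the few lines below only restate that arithmetic (note that the 3.6 quoted above refers to the Newtonian iteration, while the total runtime also includes I/O and other less scalable work).

// Speed-up and parallel efficiency implied by the quoted total runtimes for the
// 172 million cell case: 8.86 hours on 480 cores versus 2.76 hours on 1920 cores.
#include <cstdio>

int main() {
    const double t_480 = 8.86, t_1920 = 2.76;    // total runtimes, hours
    const double ideal      = 1920.0 / 480.0;    // 4.0
    const double speedup    = t_480 / t_1920;    // ~3.2 overall
    const double efficiency = speedup / ideal;   // ~0.8
    std::printf("ideal %.2f, measured %.2f, parallel efficiency %.0f%%\n",
                ideal, speedup, 100.0 * efficiency);
    return 0;
}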

Figure 17: Scalability for the 172 MM Cell Model on the Infiniband Quad Core Xeon Linux cluster (speed-up factor versus number of processing cores, 480 to 1920; solver, Newton and ideal curves)


Figure 18: CPU Time Comparison for the 172 MM cell model with 20 years of history on the Infiniband Quad Core Xeon Linux cluster (CPU time in hours for solver, Newton and total versus number of processing cores, 480 to 1920; total runtime decreases from 8.86 hours to 2.76 hours)

Figure 19: Scalability for the 382 MM Cell Model on the Infiniband Quad Core Xeon Linux cluster (speed-up factor versus number of processing cores, 960 to 1920; solver, Newton and ideal curves)


PROJECTION FOR THE FUTURE


The growth of model size and the improvements in computing power presented above can be summarized in a
new plot, shown in Figure 20. The vertical axis is logarithmic and the horizontal axis is linear. This semi-log plot
shows that model size is growing exponentially with time.

Figure 20: Growth of Model Size (model size in million cells on a logarithmic scale versus year, 1988 to 2009; conventional simulators, first generation parallel simulators (mega-cell) and second generation parallel simulators (giga-cell))


Extrapolation of the data beyond 2008 makes it evident that giga-cell simulation can be attained within the next
few years. The giga-cell simulation process will bring new challenges with it. These challenges are discussed in
later sections.
It is interesting to note that the behavior seen in Figure 20 is similar to the growth in computing power forecast by
Moore's Law. Although model size and computing power show similar trends, one should not assume that a
given code, for example one developed in 1985, would be able to run a giga-cell model simply by using more powerful
computers. As indicated in Figure 19, growth in model size on more powerful computers is achieved by
introducing new programming languages (rewriting the code) and using innovative numerical solutions that take
advantage of the new computer architectures to improve simulator scalability. Otherwise, an old code will not
scale properly with an increasing number of CPUs and, beyond a certain number of CPUs, no further speed
improvement will be possible. Ideally, scientists design the algorithms such that the scalability plot is a straight
line, or close to it. This can be accomplished with new methods of domain partitioning for load balancing, new
programming techniques and new numerical algorithms compatible with the hardware architecture.
A comprehensive discussion of the formulation of a next generation parallel or giga cell simulator will be
presented in a future publication.


GIGA-CELL SIMULATION ENVIRONMENT


Building giga-cell reservoir models and analyzing the results generated by the simulator in practical times require
totally new pre- and post-processing technologies. In particular, processing gigantic data sets as input and output
in practical times is a major challenge for conventional data modeling and visualization technologies.
Limitations in pre- and post-processing would limit the use of giga-cell simulation.
We have used two different approaches to giga-cell visualization. The first and more conventional approach uses
parallel computers to perform the visualization by subdividing the data and the rendering domain into multiple
chunks, each of which is handled by an individual processor in a computer cluster. We chose VisIt, a
visualization technology developed at the Lawrence Livermore National Laboratory [11]. VisIt uses the MPI parallel
processing paradigm to subdivide work among processors. It ran on an SGI Altix 3400 shared-memory
supercomputer with 64 processors and 512 gigabytes of memory. Figures 21 and 22 show snapshots of
simulation results using VisIt.

Figure 21: Billion Cell Visualization of Results and QC of Input Data

Figure 22: Billion Cell Visualization of Reservoir Simulation Time Step


The second approach is newer and relies heavily on the capabilities of GPUs (Graphical Processing Units), which
today support large memory (currently up to 2 GB). By using multi-resolution techniques (e.g., a levels-of-detail
approach), relatively inexpensive workstations are able to render giga-cell-size datasets. We chose the
Octreemizer [12] software from the Fraunhofer Institute for this approach, using an Appro XtremeStation with dual
GPUs (NVIDIA FX5600) and 128 gigabytes of memory. Figure 23 shows a four-display Octreemizer session using
dual GPUs to render a 2.4-billion-cell model.


Figure 23: Octreemizer Visualization of 2.4-Billion Cell Model


Octreemizer uses a hierarchical paging scheme to guarantee interactive frame rates for very large volumetric
datasets by trading texture resolution for speed. Original volumes are divided into bricks in the range of 32x32x32
to 64x64x64 voxels each. These bricks form the finest level of the octree structure. Eight neighboring bricks are
filtered into a single brick of the next coarser level, until only a single brick remains at the top of the octree.
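To make the bricking concrete, the short calculation below counts bricks per octree level for an assumed 1344 x 1344 x 1344 voxel volume (about 2.4 billion voxels, chosen only to match the order of magnitude of the model above; the actual dataset dimensions are not given in the paper).

// Octree brick bookkeeping in the spirit of the scheme described above: the
// volume is cut into 64^3-voxel bricks, and each coarser level merges up to
// 2x2x2 bricks into one. The 1344^3 volume size is an assumption.
#include <cstdio>

int main() {
    long nx = 1344, ny = 1344, nz = 1344;   // assumed voxel dimensions
    const long brick = 64;                  // finest-level brick edge, in voxels

    long bx = nx / brick, by = ny / brick, bz = nz / brick;   // 21 x 21 x 21
    int level = 0;
    while (bx > 1 || by > 1 || bz > 1) {
        std::printf("level %d: %ld x %ld x %ld bricks\n", level, bx, by, bz);
        bx = (bx + 1) / 2;  by = (by + 1) / 2;  bz = (bz + 1) / 2;  // filter 2x2x2
        ++level;
    }
    std::printf("level %d: single root brick\n", level);
    return 0;
}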
Both of these methods have been able to visualize billion-cell models at interactive frame rates for rotation and
zooming, while spending less than a minute loading each time step. We are confident that the rapid progress
of CPU, GPU and file I/O speeds will further accelerate our capabilities.
This interactivity opens the doors to in-situ visualization of online simulations with real-time analysis of the
results, bringing the insights of visualization closer to the actual model simulation than currently afforded by just
post-processing examination of the results. Such integration would also bypass file I/O bottlenecks. We envision
that large-scale parallel supercomputers and computational GPUs on the workstation will partner effectively to
achieve online simulation interactivity for Giga-cell models in the near future.

Modern remote visualization techniques are also proving helpful for visualizing results from supercomputers located
in remote data centers. Using the VNC (Virtual Network Computing) protocol and VirtualGL, all 3D manipulations
occur on the local workstation hardware, providing a very interactive response, while the simulation and the
visualization software run on the remote supercomputer. Figure 24 shows a snapshot of a workstation display
located in Dhahran, Saudi Arabia, running a visualization session on the TACC Ranger supercomputer located in
Austin, Texas. The model visualized consists of one billion cells.


Figure 24: Remote Visualization of a One-Billion-Cell Model (TACC Ranger)

SUMMARY AND CONCLUSIONS


Experience has shown that with the aid of rapidly evolving parallel computer technology and using innovative
numerical solution techniques for parallel simulators, Giga-cell reservoir simulation is within reach. Such a
simulator will reveal great detail in giant reservoirs and will enable engineers and geoscientists to build, run and
analyze highly detailed models of oil and gas reservoirs with great accuracy. This will help companies
recover additional oil and gas. Further developments and innovative approaches are needed for the full utilization
and manipulation of the giant data sets involved. The introduction of virtual reality and sound will enhance the
understanding of the results, helping to locate trapped oil zones, place new wells and design new field operations.
Overall, giga-cell simulation is expected to be beneficial for mankind in its quest to produce more hydrocarbons to
sustain the world's economic development.

ACKNOWLEDGEMENTS
The authors would like to thank Saudi Aramco's management for permission to publish this paper. The
authors would also like to thank the other Computational Modeling Technology team members: Henry Hoy,
Mokhles Mezghani, Tom Dreiman, Hemanthkumar Kesavalu, Nabil Al-Zamel, Ho Jeen Su, James Tan, Werner
Hahn, Omar Hinai, Razen Al-Harbi and Ahmad Youbi for their valuable contributions.

REFERENCES
1. Wheeler, J.A. and Smith, R.A.: "Reservoir Simulation on a Hypercube," SPE paper 19804, presented at the
   64th Annual SPE Conference and Exhibition, San Antonio, Texas, October 1989.
2. Killough, J.E. and Bhogeswara, R.: "Simulation of Compositional Reservoir Phenomena on a Distributed
   Memory Parallel Computer," J. Pet. Tech., November 1991.
3. Shiralkar, G.S., Stephenson, R.E., Joubert, W., Lubeck, O. and van Bloemen, B.: "A Production Quality
   Distributed Memory Reservoir Simulator," SPE paper 37975, presented at the SPE Symposium on Reservoir
   Simulation, Dallas, Texas, June 8-11, 1997.
4. Dogru, A.H., et al.: "A Parallel Reservoir Simulator for Large-Scale Reservoir Simulation," SPE Reservoir
   Evaluation & Engineering, pp. 11-23, 2002.
5. Pavlas, E.J.: "MPP Simulation of Complex Water Encroachment in a Large Carbonate Reservoir," SPE paper
   71628, presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana,
   September 30 - October 3, 2001.
6. Beckner, B.L., et al.: "EMpower: New Reservoir Simulation System," SPE paper 68116, presented at the SPE
   Middle East Oil Show, Bahrain, March 17-20, 2001.
7. DeBaun, D., et al.: "An Extensible Architecture for Next Generation Scalable Parallel Reservoir Simulation,"
   SPE paper 93274, presented at the SPE Reservoir Simulation Symposium, Houston, Texas, January 31 -
   February 2, 2005.
8. Shiralkar, G.S., et al.: "Development and Field Application of a High Performance, Unstructured Simulator
   with Parallel Capability," SPE paper 93080, presented at the SPE Reservoir Simulation Symposium, Houston,
   Texas, January 31 - February 2, 2005.
9. Fung, L.S. and Dogru, A.H.: "Parallel Unstructured Solver Methods for Complex Giant Reservoir Simulation,"
   SPE paper 106237, proceedings of the SPE Reservoir Simulation Symposium, Houston, Texas, February
   26-28, 2007 (accepted for publication in SPEJ).
10. Fung, L.S. and Dogru, A.H.: "Distributed Unstructured Grid Infrastructure for Complex Reservoir Simulation,"
    SPE paper 113906, proceedings of the SPE EUROPEC/EAGE Annual Conference and Exhibition, Rome,
    Italy, June 9-12, 2008.
11. Lawrence Livermore National Laboratory, VisIt software website, www.llnl.gov/visit.
12. Plate, J., et al.: "Octreemizer: A Hierarchical Approach for Interactive Roaming Through Very Large
    Volumes," IEEE TCVG Symposium on Visualization, Barcelona, May 2002.
