Abstract
For many oil & gas reservoirs, especially large reservoirs in the Middle East, the availability of vast amounts of seismic, geologic and dynamic reservoir data results in high resolution geological models. Because of the limitations of conventional reservoir simulator technologies, these high resolution models are upscaled to flow simulation models by reducing the total number of cells from millions to a few hundred thousand. Flow simulators using upscaled reservoir properties produce average reservoir performance and often fall short of accurately predicting recovery.
Realizing the limitations of conventional simulators for giant oil and gas reservoirs, parallel reservoir simulators have been developed. The first generation of parallel simulators increased simulator capabilities by an order of magnitude; as a result, mega-cell (million cell) simulation became a reality. Parallel computers, including PC clusters, were successfully used to simulate large reservoirs with long production histories using millions of cells. Mega-cell simulation helped recover additional oil and gas through a better understanding of reservoir heterogeneity. The speed of parallel hardware also made it practical to perform many runs to address uncertainty.
Despite the many benefits of parallel simulation technology, the average cell size for large reservoirs still remains on the order of hundreds of meters. To fully utilize the seismic data, smaller grid blocks on the order of fifty meters in length are required. This size of grid block results in billion (giga) cell models for giant reservoirs, a two orders of magnitude increase over mega-cell simulation. To simulate giga-cell models in practical time, new innovations in the main components of the simulator, such as linear equation solvers, are essential. In addition, next generation pre- and post-processing tools are needed to build and analyze giga-cell models in practical time.
This paper describes the evolution of reservoir simulator technology from the Mega-Cell scale to the Giga-Cell scale, presenting current achievements, challenges and the road map for Giga-Cell simulators.
INTRODUCTION
To simulate the giant oil and gas reservoirs with sufficient resolution, parallel reservoir simulator technology is an
absolute necessity since the conventional simulators based on serial computations cannot handle these systems.
Interest in parallel reservoir simulation started over two decades ago in the oil industry. The earliest attempt was by John Wheeler [1]. This was followed by the work of John Killough [2], and of Shiralkar and Stephenson [3].
Realizing the importance of reservoir simulation technology, in 1992 Saudi Aramco initiated a program for developing the company's first in-house Parallel Oil, Water and Gas Reservoir Simulator, POWERS [4]. The simulator was developed on a parallel computer (Connection Machine CM-5) in 1996, and the first field simulation model, using 1.3 million cells, was successfully completed [5]. In 2000 the parallel code was ported to symmetric parallel computers using IBM Nighthawk shared memory systems and later, in 2002, to distributed memory PC clusters.
By 2002 the world's largest oil field, Ghawar, was simulated with a full field model using 10 million active cells, sixty years of production history and thousands of wells. Experience proved that parallel computing was successful for reservoir simulation. Therefore, more features, such as compositional, dual porosity/dual permeability, and coupled surface facilities options, were added to the standard black oil options to cover more areas of application.
Figure 1 shows the past and future trend in simulating large reservoirs at Saudi Aramco. As seen, the impact of
parallel simulation on model size is very significant. Model size in this figure follows the general trend of
exponential growth in computer hardware technology.
[Figure 1: Model size (million cells) versus year, 1994-2008, for conventional simulators and the parallel simulator.]
As parallel computers progressed from specialized designs such as Connection Machines to PC clusters, the programming model also progressed from High Performance Fortran to shared memory parallel Fortran (OpenMP), Fortran 90, Message Passing Interface (MPI) libraries and, finally, parallel C++.
Industry-wide, several efforts have been made to introduce new programming languages and algorithms for parallel simulation, starting in early 2001 with Beckner [6], followed by DeBaun et al. [7] and Shiralkar et al. [8] in 2005.
Figure 2: Industry practice for building simulator models from the geological models
unrealistic multipliers in history matching. The reservoir in this example is a thick reservoir in which vertical fractures enhance the vertical permeability.
Figure 7 demonstrates the concept: the advancing water front caused by flank water injection is better represented by the fine grid model than by the coarse grid (base) model. In a base model layer (15 ft thick), water moves like a piston and shows no gravity effects. In the fine grid model, where each 15 ft layer is divided into five layers of 3 ft each, the gravity effects appear: more water accumulates in the bottom layers because water is heavier than oil. The field being studied has peripheral water injection; it is thick and has sufficient vertical permeability (in some areas due to local fractures).
Figure 6: Base model and Vertically Refined Full Field Model definition
[Schematic: a 15 ft base model layer and the same layer subdivided into five 3 ft layers (subdivision by 5); Dx = 840 ft, not to scale, same relative permeability curves used for both.]
Figure 7: Effects of more vertical layering on injected water movement in a thick reservoir
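As an illustrative sketch of the layer subdivision in Figures 6 and 7 (assumed helper code, not from the paper; the property values are placeholders), each 15 ft layer of the base model is split into five 3 ft sub-layers that initially inherit the parent layer's properties:

    # Minimal sketch: subdivide each layer of a layered model by a fixed factor,
    # copying the parent layer's properties into the new sub-layers.
    def refine_layers(layer_thickness_ft, layer_property, factor=5):
        """Return sub-layer thicknesses and properties after vertical refinement."""
        fine_dz, fine_prop = [], []
        for dz, prop in zip(layer_thickness_ft, layer_property):
            fine_dz.extend([dz / factor] * factor)   # e.g. 15 ft -> five 3 ft layers
            fine_prop.extend([prop] * factor)        # sub-layers inherit parent value
        return fine_dz, fine_prop

    # Example: the 17-layer base model becomes an 85-layer refined model.
    coarse_dz = [15.0] * 17
    coarse_kv = [50.0] * 17                          # hypothetical vertical permeability, mD
    fine_dz, fine_kv = refine_layers(coarse_dz, coarse_kv)
    assert len(fine_dz) == 85 and fine_dz[0] == 3.0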
[Figure 8: Calculated water saturation cross sections at Well A for the base and 85 layer models.]
Figure 9: Well Performance Comparison of 17 and 85 layer models with field observed water production and well
pressures
Results of the simulation runs are shown in Figure 8, which illustrates the calculated vertical water fronts in 1990. The left image is from the base model; the image on the right is from the refined grid model. Figure 8 shows that the calculated vertical water fronts, due to west flank injection, confirm the concept postulated in Figure 7. The fine grid model with 85 layers shows that water has already broken through at the producer, but the coarse grid model with 17 layers shows no water breakthrough yet.
The next question is: which of the results is correct? To answer this question we need to examine the observed water production and pressures at the producing well. When the simulator results are compared with the field observations (Figures 9 and 10), close agreement is seen for the fine grid model in water arrival, water production rate and well pressures. The coarse grid model (base model) miscalculates the water arrival (by a few years), the water production rate and the pressures. In this case the coarse grid model would require further adjustment of parameters for history matching in this area of the reservoir.
[Schematic: advancing water front in 3 ft layers with areal grid size Dx = 840 ft versus Dx = 420 ft; not to scale, same relative permeability curves used for both.]
Figure 11: Effect of areal refinement on the advancing vertical water front for a thick reservoir
Figure 12: Water Saturations in 1981 for the 48 Million Cell Model
On the other hand, the effects of the areally refined grid model are displayed in Figure 13. This figure shows that numerical dispersion in the areal direction is reduced: water has broken through only at the bottom sections of the well, not throughout the well perforations as in Figure 12.
Figure 13: Water Saturations in 1981 for the 192 Million Cell Model
The observed phenomenon in Figures 12 and 13 is reflected in the calculated water production for the same well,
Figure 14.
[Plot: water production rate at Well D, Reservoir 1, for the 48 million cell case, the 192 million cell case and the observed history.]
Figure 14: Water Production at Well D: Comparison of Models with field data
As illustrated in Figure 14, the coarse grid model shows premature water breakthrough, while the fine grid model shows later water arrival and less water production, which is consistent with the observations (shown in green).
Figures 12 to 14 illustrate the clear effect of numerical errors in an actual reservoir simulation using real data.
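To illustrate the kind of numerical dispersion discussed above, the following one-dimensional sketch (an illustrative example, not from the paper) advects a sharp water front with a first-order upwind scheme; halving the grid spacing noticeably sharpens the computed front, analogous to the areal refinement from the 48 to the 192 million cell model:

    # Illustrative 1-D demonstration of numerical dispersion: a sharp front is
    # smeared by first-order upwind differencing, with less smearing on a finer grid.
    import numpy as np

    def advect_front(n_cells, cfl=0.4):
        """March a unit saturation front to the middle of a 1-D domain."""
        s = np.zeros(n_cells)
        s[0] = 1.0                                   # injected water at the inlet
        n_steps = int(0.5 * n_cells / cfl)           # same physical end time on any grid
        for _ in range(n_steps):
            s[1:] -= cfl * (s[1:] - s[:-1])          # first-order upwind update
            s[0] = 1.0
        return s

    def front_width(s):
        """Width of the smeared front as a fraction of the domain length."""
        return np.sum((s > 0.05) & (s < 0.95)) / len(s)

    print(front_width(advect_front(100)), front_width(advect_front(200)))
    # The finer grid (200 cells) gives a narrower, sharper front than the
    # coarser grid (100 cells).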
Larger Size Models
In order to test the parallel performance of the simulator, a new model was generated from the original model (10 million cells, 17 layers, areal grid size of 840 ft, or 250 m) by reducing the areal grid size to 42 m x 42 m (138 ft x 138 ft). This time the number of vertical layers was set to 34 (twice the original) in order to fit the model into the available Linux cluster.
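As a rough consistency check (an illustrative back-of-the-envelope calculation, not taken from the paper), the refinement factors above imply a model of roughly 700 million cells, consistent with the model shown in Figure 15:

    # Back-of-the-envelope check of the refined model size.
    base_cells = 10e6                     # original model: ~10 million cells, 17 layers
    areal_factor = (250.0 / 42.0) ** 2    # 250 m cells refined to 42 m cells
    vertical_factor = 34 / 17             # 17 layers doubled to 34 layers
    refined_cells = base_cells * areal_factor * vertical_factor
    print(f"{refined_cells / 1e6:.0f} million cells")   # ~709 million cells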
The model was run, and a snapshot of the fluid distribution in the peripheral field (which is also a giant carbonate reservoir) is illustrated in Figure 15. As shown, a secondary gas cap was formed due to production. This is consistent with the history of the reservoir and also demonstrates the simulator's capability of handling very large systems with three phase flow at high resolution.
Figure 15: Gas Cap formation in Reservoir 1 of 700 Million Cell Model
[Figure: Percentage breakdown of simulator run time among Solver, Jacobian, Wells, Update, I/O and Others (values shown: 52, 17.64, 11.63, 8.35, 5.15 and 2 percent).]
A scalable fully distributed unstructured grid framework has been developed [10], designed to accommodate both complex structured and unstructured models, from the mega-cell to the giga-cell range, in a distributed memory computing environment. This is an important consideration, as today's computer clusters can have multiple terabytes of memory in aggregate, while the memory local to each processing node is typically only a few gigabytes shared by several processing cores. The distributed unstructured grid infrastructure (DUGI) provides the data organization and algorithms for parallel computation in a fully local referencing system for peer-to-peer communication. The organization is achieved by using two levels of unstructured graph descriptions: one for the distributed computational domains and the other for the distributed unstructured grid. At each level, the connectivities of the elements are described as distributed 3D unstructured graphs.
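The following minimal sketch illustrates this two-level organization (hypothetical class and field names, not the actual DUGI implementation; mpi4py is assumed for the peer-to-peer messaging):

    # Minimal sketch: a two-level description of a distributed unstructured grid
    # with fully local referencing and peer-to-peer halo exchange.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class DomainGraph:
        # Level 1: connectivity between distributed computational domains.
        my_domain: int                        # domain id (e.g. the MPI rank)
        neighbor_domains: List[int]           # domains sharing a grid interface

    @dataclass
    class LocalGrid:
        # Level 2: the unstructured grid local to one domain, in local numbering.
        n_interior: int                       # cells owned by this domain
        n_ghost: int                          # halo cells owned by neighbor domains
        cell_neighbors: Dict[int, List[int]]  # local cell id -> adjacent local ids
        send_lists: Dict[int, List[int]]      # neighbor domain -> local ids to send
        recv_lists: Dict[int, List[int]]      # neighbor domain -> ghost ids to fill

    def exchange_halo(grid, domains, values, comm):
        """Update ghost-cell values by peer-to-peer exchange with neighbor domains."""
        requests = [comm.isend([values[i] for i in grid.send_lists[nbr]], dest=nbr, tag=0)
                    for nbr in domains.neighbor_domains]
        for nbr in domains.neighbor_domains:
            for ghost_id, v in zip(grid.recv_lists[nbr], comm.recv(source=nbr, tag=0)):
                values[ghost_id] = v
        for req in requests:
            req.wait()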
Numerical Example
Two full-field reservoir models, one at 172 million cells and the other at 382 million cells, are used to test the scalability of the new simulator. The 172 million cell model is a structured grid model with grid dimensions of 1350*3747*34.
The areal cell dimensions are 83 m * 83 m and the average thickness is about 2.2 m. The horizontal
permeabilities, Kh, range from 0.03 mD to 10 Darcies. Vertical permeabilities vary from 0.005 mD to 500 mD.
There are about 3000 wells in the model. The 382 million cell model has the grid dimensions of 2250*4996*34. It
is a finer grid simulation model for the same reservoir.
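A quick arithmetic check of the quoted model sizes from their grid dimensions:

    # Cell counts implied by the grid dimensions quoted above.
    print(1350 * 3747 * 34)   # 171,987,300  -> the "172 million cell" model
    print(2250 * 4996 * 34)   # 382,194,000  -> the "382 million cell" model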
A 512-node Linux PC cluster with an InfiniBand interconnect was used to run the models. Each node has two quad-core Intel Xeon CPUs running at 3.0 GHz, giving a total of 4096 processing cores. Our initial testing indicates that the optimal number of cores per node is 4; using additional cores per node does not improve performance. Thus, our optimal usage for performance is to always use 4 cores per node and to increase the number of nodes as the model size increases.
The scalability of the simulator for the 172 million cell model is illustrated in Figure 17. The solver achieved a super-linear scale-up of 4.23 when the node count was increased from 120 nodes (480 cores) to 480 nodes (1920 cores), and the overall Newtonian iteration achieved 3.6 out of an ideal 4. For 20 years of history, the total runtime was 8.86 hours using 120 nodes, which reduces to 2.76 hours using 480 nodes. A detailed breakdown of the timing is illustrated in Figure 18.
Scalability results for the 382 million cell model are shown in Figure 19. In this case the solver also achieved super-linear scale-up, and the overall Newtonian iteration achieved linear scale-up, when the node count was increased from 240 to 480 nodes. This indicates that the simulator can fully benefit from using all 512 nodes (the entire cluster) to solve a giga-cell simulation problem with good turnaround time. The total run time of the 20-year history simulation for the 382 million cell problem using 480 nodes (1920 cores) was 5.63 hours.
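As a worked example using the total run times reported above (an illustrative calculation only), the strong-scaling speed-up and parallel efficiency of the total wall-clock time can be computed as follows; the result is somewhat below the solver and Newton-step factors because the total time also includes less scalable components such as I/O:

    # Strong-scaling speed-up and efficiency from the reported total run times.
    def speedup_and_efficiency(t_base_hr, cores_base, t_hr, cores):
        speedup = t_base_hr / t_hr          # measured speed-up vs. the base run
        ideal = cores / cores_base          # ideal (linear) speed-up
        return speedup, speedup / ideal     # efficiency = measured / ideal

    # 172 million cell model: 8.86 h on 480 cores vs. 2.76 h on 1920 cores.
    s, e = speedup_and_efficiency(8.86, 480, 2.76, 1920)
    print(f"speed-up {s:.2f} of an ideal 4.00, efficiency {e:.0%}")
    # -> speed-up 3.21 of an ideal 4.00, efficiency 80%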
[Plot: speed-up factor versus number of cores (480 to 1920) for the solver, the Newtonian iteration and ideal scaling.]
Figure 17: Scalability for the 172 MM Cell Model on the Infiniband Quad Core Xeon Linux cluster
[Bar chart: solver, Newtonian iteration and total CPU times (hours) at 480, 960, 1440 and 1920 cores.]
Figure 18: CPU Time Comparison for the 172MM cell model on the Infiniband Quad Core Xeon Linux cluster
[Plot: speed-up factor versus number of cores (960 to 1920) for the solver, the Newtonian iteration and ideal scaling.]
Figure 19: Scalability for the 382 MM Cell Model on the Infiniband Quad Core Xeon Linux cluster
[Figure: Model size (million cells, logarithmic scale) versus year, 1988-2009, showing conventional simulators, first generation parallel simulators (mega-cell) and second generation parallel simulators (giga-cell).]
Modern remote visualization techniques are also proving helpful for visualizing results from supercomputers located in remote data centers. Using the VNC (Virtual Network Computing) protocol and VirtualGL, all 3D manipulations occur on the local workstation hardware, providing a very interactive response, while the simulation and the visualization software run on the remote supercomputer. Figure 24 shows a snapshot of a workstation display located in Dhahran, Saudi Arabia, running a visualization session on the TACC Ranger supercomputer located in Austin, Texas. The model visualized consists of one billion cells.
ACKNOWLEDGEMENTS
The authors would like to thank Saudi Aramcos management for the permission to publish this paper. The
authors would also like to thank the other Computational Modeling Technology team members: Henry Hoy,
Mokhles Mezghani, Tom Dreiman, Hemanthkumar Kesavalu, Nabil Al-Zamel, Ho Jeen Su, James Tan, Werner
Hahn, Omar Hinai, Razen Al-Harbi and Ahmad Youbi for their valuable contributions.
REFERENCES
1. Wheeler, J.A. and Smith, R.A.: "Reservoir Simulation on a Hypercube," SPE paper 19804, presented at the 64th Annual SPE Conference & Exhibition, San Antonio, Texas, October 1989.
2. Killough, J.E. and Bhogeswara, R.: "Simulation of Compositional Reservoir Phenomena on a Distributed Memory Parallel Computer," J. Pet. Tech., November 1991.
3. Shiralkar, G.S., Stephenson, R.E., Joubert, W., Lubeck, O. and van Bloemen, B.: "A Production Quality Distributed Memory Reservoir Simulator," SPE paper 37975, presented at the SPE Symposium on Reservoir Simulation, Dallas, Texas, June 8-11, 1997.
4. Dogru, A.H., et al.: "A Parallel Reservoir Simulator for Large-Scale Reservoir Simulation," SPE Reservoir Evaluation & Engineering Journal, pp. 11-23, 2002.
5. Pavlas, E.J.: "MPP Simulation of Complex Water Encroachment in a Large Carbonate Reservoir," SPE paper 71628, presented at the SPE Annual Technical Conference & Exhibition, New Orleans, Louisiana, September 30 - October 3, 2001.
6. Beckner, B.L., et al.: "EMpower: New Reservoir Simulation System," SPE paper 68116, presented at the SPE Middle East Oil Show, Bahrain, March 17-20, 2001.
7. DeBaun, D., et al.: "An Extensible Architecture for Next Generation Scalable Parallel Reservoir Simulation," SPE paper 93274, presented at the SPE Reservoir Simulation Symposium, Houston, Texas, January 31 - February 2, 2005.
8. Shiralkar, G.S., et al.: "Development and Field Application of a High Performance, Unstructured Simulator with Parallel Capability," SPE paper 93080, presented at the SPE Reservoir Simulation Symposium, Houston, Texas, January 31 - February 2, 2005.
9. Fung, L.S. and Dogru, A.H.: "Parallel Unstructured Solver Methods for Complex Giant Reservoir Simulation," SPE paper 106237, proceedings of the SPE Reservoir Simulation Symposium, Houston, Texas, February 26-28, 2007 (accepted for publication in SPEJ).
10. Fung, L.S. and Dogru, A.H.: "Distributed Unstructured Grid Infrastructure for Complex Reservoir Simulation," SPE paper 113906, proceedings of the SPE EUROPEC/EAGE Annual Conference and Exhibition, Rome, Italy, June 9-12, 2008.
11. Lawrence Livermore National Laboratory, VisIt software website: www.llnl.gov/visit.
12. Plate, J., et al.: "Octreemizer: A Hierarchical Approach for Interactive Roaming Through Very Large Volumes," IEEE TCVG Symposium on Visualization, Barcelona, May 2002.