
UNIVERSITI TEKNOLOGI PETRONAS

PAB3053
RESERVOIR MODELING AND SIMULATION
MAY 2017

Dr. Mohammed Abdalla Ayoub

Class schedule

Activity                     Time
                             2 hours (wk 7)
                             2 hours (wk 8)
                             2 hours (wk 9)
                             2 hours (wk 10)
                             2 hours (wk 11)
                             6 hours (wk 12-14)
Iterative Methods
When the number of equations is very large, the coefficient matrix is sparse but
not banded, and computer storage is critical, an iterative method is preferred to
a direct method of solution.
If the iterative process is convergent, the solution is obtained to within a specified
accuracy of the exact answer in a finite but not predeterminable number of
operations. The method is guaranteed to converge for a system with diagonal
dominance.
Iterative methods have rather simple algorithms (easy to apply and not restricted
to simple geometries and boundary conditions). They are preferred when the number of
operations in the calculation is so large that direct methods may prove
inadequate because of the accumulation of round-off errors.

Typical iterative methods

1. Jacobi method

2. Gauss-Seidel method

3. Successive Over-Relaxation (SOR), or LSOR

4. Alternating Direction Implicit (ADI) method

5. Conjugate Gradient Methods

6. Biconjugate Gradients and CGSTAB

7. Multigrid Methods

Concept of iteration

\[ Ax = b \]

In the case of an iterative solver, A is split as follows:

\[ A = C - R \]

where:
C = the approximate coefficient matrix
R = the residual matrix, representing the error in C

The iterative method is then defined as:

\[ Cx^{(n+1)} = Rx^{(n)} + b \]

or, equivalently,

\[ x^{(n+1)} = x^{(n)} + C^{-1}r^{(n)}, \qquad r^{(n)} = b - Ax^{(n)} \]
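To make the splitting concrete, the loop below is a minimal MATLAB sketch of this residual-correction iteration. Taking C = diag(A) (the Jacobi splitting) and the function name splitting_solve are illustrative assumptions, not part of the slides.

% Residual-correction iteration: x^(n+1) = x^(n) + C\r, r = b - A*x.
% C is illustratively the diagonal of A; any easily invertible
% approximation of A may be substituted.
function x = splitting_solve(A, b, tol, maxit)
    C = diag(diag(A));            % approximate coefficient matrix
    x = zeros(size(b));           % initial guess x^(0) = 0
    for n = 1:maxit
        r = b - A*x;              % residual of the current iterate
        if norm(r) < tol, return; end
        x = x + C\r;              % correction step
    end
end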
Iterative Methods

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 &= b_2\\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 &= b_3\\
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4 &= b_4
\end{aligned}
\]

can be converted into

\[
\begin{aligned}
x_1 &= (b_1 - a_{12}x_2 - a_{13}x_3 - a_{14}x_4)/a_{11}\\
x_2 &= (b_2 - a_{21}x_1 - a_{23}x_3 - a_{24}x_4)/a_{22}\\
x_3 &= (b_3 - a_{31}x_1 - a_{32}x_2 - a_{34}x_4)/a_{33}\\
x_4 &= (b_4 - a_{41}x_1 - a_{42}x_2 - a_{43}x_3)/a_{44}
\end{aligned}
\]
Iterative Methods (a simpler form)
Idea behind iterative methods: convert Ax = b into an equivalent system x = Cx + d.

\[ Ax = b \;\Longleftrightarrow\; x = Cx + d \]

Generate a sequence of approximations (iterates) x^{(1)}, x^{(2)}, ..., starting from an initial guess x^{(0)}:

\[ x^{(j)} = Cx^{(j-1)} + d \]

This is similar to the fixed-point iteration method.
Rearrange Matrix Equations

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 &= b_2\\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 &= b_3\\
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4 &= b_4
\end{aligned}
\]

Rewrite the matrix equation in the same way:

\[
\begin{aligned}
x_1 &= -\frac{a_{12}}{a_{11}}x_2 - \frac{a_{13}}{a_{11}}x_3 - \frac{a_{14}}{a_{11}}x_4 + \frac{b_1}{a_{11}}\\
x_2 &= -\frac{a_{21}}{a_{22}}x_1 - \frac{a_{23}}{a_{22}}x_3 - \frac{a_{24}}{a_{22}}x_4 + \frac{b_2}{a_{22}}\\
x_3 &= -\frac{a_{31}}{a_{33}}x_1 - \frac{a_{32}}{a_{33}}x_2 - \frac{a_{34}}{a_{33}}x_4 + \frac{b_3}{a_{33}}\\
x_4 &= -\frac{a_{41}}{a_{44}}x_1 - \frac{a_{42}}{a_{44}}x_2 - \frac{a_{43}}{a_{44}}x_3 + \frac{b_4}{a_{44}}
\end{aligned}
\]

(n equations and n variables)
Iterative Methods

\[ Ax = b \;\Longrightarrow\; x^{(j)} = Cx^{(j-1)} + d, \qquad C_{ii} = 0 \]

x and d are column vectors, and C is a square matrix:

\[
C = \begin{bmatrix}
0 & -a_{12}/a_{11} & -a_{13}/a_{11} & -a_{14}/a_{11}\\
-a_{21}/a_{22} & 0 & -a_{23}/a_{22} & -a_{24}/a_{22}\\
-a_{31}/a_{33} & -a_{32}/a_{33} & 0 & -a_{34}/a_{33}\\
-a_{41}/a_{44} & -a_{42}/a_{44} & -a_{43}/a_{44} & 0
\end{bmatrix},
\qquad
d = \begin{bmatrix}
b_1/a_{11}\\ b_2/a_{22}\\ b_3/a_{33}\\ b_4/a_{44}
\end{bmatrix}
\]
Convergence Criterion

(1) Relative change in each unknown:

\[ \varepsilon_{a,i} = \left|\frac{x_i^{(j)} - x_i^{(j-1)}}{x_i^{(j)}}\right| \times 100\% < \varepsilon_s \quad \text{for all } x_i \]

(2) Norm of the residual vector:

\[ \lVert Ax - b \rVert < \varepsilon_s \]
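As a small illustration, both tests can be written in MATLAB as follows; xnew, xold, and the tolerance eps_s are assumed to be supplied by the surrounding iteration loop.

% Stopping tests for the current iterate xnew against the previous xold.
eps_a = abs((xnew - xold)./xnew) * 100;   % criterion (1), per unknown (%)
test1 = all(eps_a < eps_s);
test2 = norm(A*xnew - b) < eps_s;         % criterion (2), residual norm
converged = test1 || test2;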
Jacobi method
The Jacobi method is considered one of the basic iterative methods.
An iterative technique to solve Ax = b starts with an initial
approximation x^{(0)} and generates a sequence \( \{x^{(k)}\}_{k=0}^{\infty} \).
First we convert the system Ax = b into an equivalent form

\[ x = Tx + c \]

and generate the sequence of approximations by

\[ x^{(k)} = Tx^{(k-1)} + c, \qquad k = 1, 2, 3, \ldots \]

The stopping criterion:

\[ \frac{\lVert x^{(k)} - x^{(k-1)} \rVert}{\lVert x^{(k)} \rVert} < \varepsilon \]
Jacobi method: Example
Consider the following set of equations:

\[
\begin{aligned}
10x_1 - x_2 + 2x_3 &= 6\\
-x_1 + 11x_2 - x_3 + 3x_4 &= 25\\
2x_1 - x_2 + 10x_3 - x_4 &= -11\\
3x_2 - x_3 + 8x_4 &= 15
\end{aligned}
\]

Convert the set Ax = b into the form x = Tx + c:

\[
\begin{aligned}
x_1 &= \tfrac{1}{10}x_2 - \tfrac{1}{5}x_3 + \tfrac{3}{5}\\
x_2 &= \tfrac{1}{11}x_1 + \tfrac{1}{11}x_3 - \tfrac{3}{11}x_4 + \tfrac{25}{11}\\
x_3 &= -\tfrac{1}{5}x_1 + \tfrac{1}{10}x_2 + \tfrac{1}{10}x_4 - \tfrac{11}{10}\\
x_4 &= -\tfrac{3}{8}x_2 + \tfrac{1}{8}x_3 + \tfrac{15}{8}
\end{aligned}
\]
Jacobi method: Example
The first iteration:

\[
\begin{aligned}
x_1^{(1)} &= \tfrac{1}{10}x_2^{(0)} - \tfrac{1}{5}x_3^{(0)} + \tfrac{3}{5}\\
x_2^{(1)} &= \tfrac{1}{11}x_1^{(0)} + \tfrac{1}{11}x_3^{(0)} - \tfrac{3}{11}x_4^{(0)} + \tfrac{25}{11}\\
x_3^{(1)} &= -\tfrac{1}{5}x_1^{(0)} + \tfrac{1}{10}x_2^{(0)} + \tfrac{1}{10}x_4^{(0)} - \tfrac{11}{10}\\
x_4^{(1)} &= -\tfrac{3}{8}x_2^{(0)} + \tfrac{1}{8}x_3^{(0)} + \tfrac{15}{8}
\end{aligned}
\]

Starting with x_1^{(0)} = 0, x_2^{(0)} = 0, x_3^{(0)} = 0 and x_4^{(0)} = 0:

\[
x_1^{(1)} = 0.6000, \quad x_2^{(1)} = 2.2727, \quad x_3^{(1)} = -1.1000, \quad x_4^{(1)} = 1.8750
\]
Jacobi method: Example
The second iteration:

\[
\begin{aligned}
x_1^{(2)} &= \tfrac{1}{10}x_2^{(1)} - \tfrac{1}{5}x_3^{(1)} + \tfrac{3}{5}\\
x_2^{(2)} &= \tfrac{1}{11}x_1^{(1)} + \tfrac{1}{11}x_3^{(1)} - \tfrac{3}{11}x_4^{(1)} + \tfrac{25}{11}\\
x_3^{(2)} &= -\tfrac{1}{5}x_1^{(1)} + \tfrac{1}{10}x_2^{(1)} + \tfrac{1}{10}x_4^{(1)} - \tfrac{11}{10}\\
x_4^{(2)} &= -\tfrac{3}{8}x_2^{(1)} + \tfrac{1}{8}x_3^{(1)} + \tfrac{15}{8}
\end{aligned}
\]

In general, for iteration k:

\[
\begin{aligned}
x_1^{(k)} &= \tfrac{1}{10}x_2^{(k-1)} - \tfrac{1}{5}x_3^{(k-1)} + \tfrac{3}{5}\\
x_2^{(k)} &= \tfrac{1}{11}x_1^{(k-1)} + \tfrac{1}{11}x_3^{(k-1)} - \tfrac{3}{11}x_4^{(k-1)} + \tfrac{25}{11}\\
x_3^{(k)} &= -\tfrac{1}{5}x_1^{(k-1)} + \tfrac{1}{10}x_2^{(k-1)} + \tfrac{1}{10}x_4^{(k-1)} - \tfrac{11}{10}\\
x_4^{(k)} &= -\tfrac{3}{8}x_2^{(k-1)} + \tfrac{1}{8}x_3^{(k-1)} + \tfrac{15}{8}
\end{aligned}
\]
Jacobi method: Example
Results:

iteration     0         1         2         3
x_1^(k)     0.0000    0.6000    1.0473    0.9326
x_2^(k)     0.0000    2.2727    1.7159    2.0530
x_3^(k)     0.0000   -1.1000   -0.8052   -1.0493
x_4^(k)     0.0000    1.8750    0.8852    1.1309
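These numbers can be checked with a few lines of MATLAB; the matrix form x^(k) = D^{-1}(b - (A - D)x^{(k-1)}), with D the diagonal of A, is equivalent to the component formulas above (a sketch for verification, not code from the slides).

% Jacobi iteration for the worked example; reproduces the table above.
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];
b = [6; 25; -11; 15];
x = zeros(4,1);                  % x^(0) = (0, 0, 0, 0)
D = diag(diag(A));               % diagonal part of A
for k = 1:3
    x = D \ (b - (A - D)*x);     % x^(k) = T x^(k-1) + c
    fprintf('k=%d: %8.4f %8.4f %8.4f %8.4f\n', k, x);
end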
Gauss-Seidel method
Gauss-Seidel iteration:

(1) This is a very simple, efficient point-iterative procedure for solving large, sparse
systems of algebraic equations.

(2) The idea of GS is to compute x^{(k)} using the most recently calculated values. In our
example:

\[
\begin{aligned}
x_1^{(k)} &= \tfrac{1}{10}x_2^{(k-1)} - \tfrac{1}{5}x_3^{(k-1)} + \tfrac{3}{5}\\
x_2^{(k)} &= \tfrac{1}{11}x_1^{(k)} + \tfrac{1}{11}x_3^{(k-1)} - \tfrac{3}{11}x_4^{(k-1)} + \tfrac{25}{11}\\
x_3^{(k)} &= -\tfrac{1}{5}x_1^{(k)} + \tfrac{1}{10}x_2^{(k)} + \tfrac{1}{10}x_4^{(k-1)} - \tfrac{11}{10}\\
x_4^{(k)} &= -\tfrac{3}{8}x_2^{(k)} + \tfrac{1}{8}x_3^{(k)} + \tfrac{15}{8}
\end{aligned}
\]
Gauss-Seidel method
Starting the iterations with x^{(0)} = (0, 0, 0, 0), we obtain:

iteration     0         1         2
x_1^(k)     0.0000    0.6000    1.0302
x_2^(k)     0.0000    2.3273    2.0369
x_3^(k)     0.0000   -0.9873   -1.0145
x_4^(k)     0.0000    0.8789    0.9843
Gauss-Seidel method, cont.
(2) Consider the following three equations:

\[
\begin{aligned}
a_{11}P_1 + a_{12}P_2 + a_{13}P_3 &= d_1\\
a_{21}P_1 + a_{22}P_2 + a_{23}P_3 &= d_2\\
a_{31}P_1 + a_{32}P_2 + a_{33}P_3 &= d_3
\end{aligned}
\]

where \( a_{ii} \neq 0 \) for i = 1 to 3.

The equations are successively solved for the main-diagonal unknowns:

\[
\begin{aligned}
P_1 &= (d_1 - a_{12}P_2 - a_{13}P_3)/a_{11}\\
P_2 &= (d_2 - a_{21}P_1 - a_{23}P_3)/a_{22}\\
P_3 &= (d_3 - a_{31}P_1 - a_{32}P_2)/a_{33}
\end{aligned}
\]

Initial guesses are chosen as \( P_1^{(0)}, P_2^{(0)}, P_3^{(0)} \).
Gauss-Seidel Method, cont.
(3) These guessed values are used together with the most recently
computed values to complete the first round of iterations:

\[
\begin{aligned}
P_1^{(1)} &= \tfrac{1}{a_{11}}\left(d_1 - a_{12}P_2^{(0)} - a_{13}P_3^{(0)}\right)\\
P_2^{(1)} &= \tfrac{1}{a_{22}}\left(d_2 - a_{21}P_1^{(1)} - a_{23}P_3^{(0)}\right)\\
P_3^{(1)} &= \tfrac{1}{a_{33}}\left(d_3 - a_{31}P_1^{(1)} - a_{32}P_2^{(1)}\right)
\end{aligned}
\]

In general, for a system of n equations,

\[
x_i^{(k)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k-1)}\right), \qquad i = 1, 2, \ldots, n
\]
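A minimal MATLAB sketch of one sweep of this formula is shown below; x enters as the previous iterate (a column vector) and is overwritten in place, so the newly computed entries are used immediately within the same sweep. The helper name gs_sweep is an assumption for illustration.

% One Gauss-Seidel sweep over the unknowns.
function x = gs_sweep(A, b, x)
    n = length(b);
    for i = 1:n
        s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
        x(i) = (b(i) - s) / A(i,i);   % most recent values already in x
    end
end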
Gauss-Seidel Method, cont.
(4) These first approximations are used together with the most recently
computed values to complete the second round of iterations:

\[
\begin{aligned}
P_1^{(2)} &= \tfrac{1}{a_{11}}\left(d_1 - a_{12}P_2^{(1)} - a_{13}P_3^{(1)}\right)\\
P_2^{(2)} &= \tfrac{1}{a_{22}}\left(d_2 - a_{21}P_1^{(2)} - a_{23}P_3^{(1)}\right)\\
P_3^{(2)} &= \tfrac{1}{a_{33}}\left(d_3 - a_{31}P_1^{(2)} - a_{32}P_2^{(2)}\right)
\end{aligned}
\]

The iteration procedure is continued in a similar manner.
Gauss-Seidel Method, cont.
We note that in each equation the largest element (in magnitude) is on the diagonal.

Example: consider the system

\[
\begin{aligned}
6P_1 + P_2 + 3P_3 &= 17\\
P_1 - 10P_2 + 4P_3 &= -7\\
P_1 + P_2 + 3P_3 &= 12
\end{aligned}
\]

These equations are solved for the main-diagonal unknowns as

\[
\begin{aligned}
P_1 &= \tfrac{1}{6}(17 - P_2 - 3P_3)\\
P_2 &= \tfrac{1}{10}(7 + P_1 + 4P_3)\\
P_3 &= \tfrac{1}{3}(12 - P_1 - P_2)
\end{aligned}
\]

and the initial guess values are arbitrarily chosen as \( P_1^{(0)} = P_2^{(0)} = P_3^{(0)} = 1 \).

The first round of iterations is determined as follows:

\[
\begin{aligned}
P_1^{(1)} &= \tfrac{1}{6}\left(17 - P_2^{(0)} - 3P_3^{(0)}\right) = 2.167\\
P_2^{(1)} &= \tfrac{1}{10}\left(7 + P_1^{(1)} + 4P_3^{(0)}\right) = 1.317\\
P_3^{(1)} &= \tfrac{1}{3}\left(12 - P_1^{(1)} - P_2^{(1)}\right) = 2.839
\end{aligned}
\]
Gauss-Seidel Method, cont.
The second round of iterations is determined as follows:

\[
\begin{aligned}
P_1^{(2)} &= \tfrac{1}{6}\left(17 - P_2^{(1)} - 3P_3^{(1)}\right) = 1.194\\
P_2^{(2)} &= \tfrac{1}{10}\left(7 + P_1^{(2)} + 4P_3^{(1)}\right) = 1.955\\
P_3^{(2)} &= \tfrac{1}{3}\left(12 - P_1^{(2)} - P_2^{(2)}\right) = 2.950
\end{aligned}
\]

The third round of iterations is determined as follows:

\[
\begin{aligned}
P_1^{(3)} &= \tfrac{1}{6}\left(17 - P_2^{(2)} - 3P_3^{(2)}\right) = 1.032\\
P_2^{(3)} &= \tfrac{1}{10}\left(7 + P_1^{(3)} + 4P_3^{(2)}\right) = 1.983\\
P_3^{(3)} &= \tfrac{1}{3}\left(12 - P_1^{(3)} - P_2^{(3)}\right) = 2.995
\end{aligned}
\]

The values obtained with three iterations are sufficiently close
to the exact answer, P_1 = 1, P_2 = 2, P_3 = 3.
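The three rounds can be verified with a short MATLAB loop that mirrors the update order used above (a sketch for checking the arithmetic):

% Reproduces the hand-computed Gauss-Seidel rounds.
P = [1; 1; 1];                        % initial guesses P^(0)
for k = 1:3
    P(1) = (17 - P(2) - 3*P(3)) / 6;
    P(2) = (7 + P(1) + 4*P(3)) / 10;
    P(3) = (12 - P(1) - P(2)) / 3;
    fprintf('round %d: %.3f %.3f %.3f\n', k, P);
end
% exact answer: P1 = 1, P2 = 2, P3 = 3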
Successive Over-Relaxation (SOR)

(1) The Gauss-Seidel method generally does not converge sufficiently fast.
Successive over-relaxation is a method that can accelerate the convergence.

(2) The basic idea in this approach is to write the Gauss-Seidel update in
residual-correction form:

\[
\begin{aligned}
T_1^{(n+1)} &= T_1^{(n)} + \tfrac{1}{a_{11}}\left(d_1 - a_{11}T_1^{(n)} - a_{12}T_2^{(n)} - a_{13}T_3^{(n)}\right)\\
T_2^{(n+1)} &= T_2^{(n)} + \tfrac{1}{a_{22}}\left(d_2 - a_{21}T_1^{(n+1)} - a_{22}T_2^{(n)} - a_{23}T_3^{(n)}\right)\\
T_3^{(n+1)} &= T_3^{(n)} + \tfrac{1}{a_{33}}\left(d_3 - a_{31}T_1^{(n+1)} - a_{32}T_2^{(n+1)} - a_{33}T_3^{(n)}\right)
\end{aligned}
\]
Successive Over-Relaxation (SOR), cont.

(3) In the SOR method the bracketed terms are multiplied by a
factor ω, called the relaxation parameter, and the equations are
rewritten as:

\[
\begin{aligned}
T_1^{(n+1)} &= T_1^{(n)} + \tfrac{\omega}{a_{11}}\left(d_1 - a_{11}T_1^{(n)} - a_{12}T_2^{(n)} - a_{13}T_3^{(n)}\right)\\
T_2^{(n+1)} &= T_2^{(n)} + \tfrac{\omega}{a_{22}}\left(d_2 - a_{21}T_1^{(n+1)} - a_{22}T_2^{(n)} - a_{23}T_3^{(n)}\right)\\
T_3^{(n+1)} &= T_3^{(n)} + \tfrac{\omega}{a_{33}}\left(d_3 - a_{31}T_1^{(n+1)} - a_{32}T_2^{(n+1)} - a_{33}T_3^{(n)}\right)
\end{aligned}
\]
Successive Over-Relaxation (SOR), cont.

(4) The value of the relaxation parameter ω must lie in the
range 0 < ω < 2; 1 < ω < 2 gives over-relaxation, while ω = 1
reduces to Gauss-Seidel iteration.

(5) The above procedure for SOR can be generalized to the
case of M equations as

\[
x_i^{(n+1)} = x_i^{(n)} + \frac{\omega}{a_{ii}}\left(d_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(n+1)} - \sum_{j=i}^{M} a_{ij}x_j^{(n)}\right), \qquad i = 1, 2, \ldots, M
\]
SOR: Example

\[
\begin{aligned}
4x_1 + 3x_2 &= 24\\
3x_1 + 4x_2 - x_3 &= 30\\
-x_2 + 4x_3 &= -24
\end{aligned}
\]

Exact solution: x = (3, 4, -5)
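A hedged MATLAB sketch of SOR applied to this example follows; the choice omega = 1.25 is illustrative (the slides do not specify a value), and 20 sweeps are more than enough here.

% SOR sweeps for the example above.
A = [4 3 0; 3 4 -1; 0 -1 4];
b = [24; 30; -24];
x = zeros(3,1);  omega = 1.25;
for k = 1:20
    for i = 1:3
        xgs  = (b(i) - A(i,[1:i-1,i+1:3])*x([1:i-1,i+1:3])) / A(i,i);
        x(i) = (1-omega)*x(i) + omega*xgs;   % relaxed Gauss-Seidel update
    end
end
disp(x')   % approaches the exact solution (3, 4, -5)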
In summary

Jacobi Method

\[
\begin{aligned}
x_1^{\text{new}} &= \left(b_1 - a_{12}x_2^{\text{old}} - a_{13}x_3^{\text{old}} - a_{14}x_4^{\text{old}}\right)/a_{11}\\
x_2^{\text{new}} &= \left(b_2 - a_{21}x_1^{\text{old}} - a_{23}x_3^{\text{old}} - a_{24}x_4^{\text{old}}\right)/a_{22}\\
x_3^{\text{new}} &= \left(b_3 - a_{31}x_1^{\text{old}} - a_{32}x_2^{\text{old}} - a_{34}x_4^{\text{old}}\right)/a_{33}\\
x_4^{\text{new}} &= \left(b_4 - a_{41}x_1^{\text{old}} - a_{42}x_2^{\text{old}} - a_{43}x_3^{\text{old}}\right)/a_{44}
\end{aligned}
\]
In summary

Gauss-Seidel Method
Differs from the Jacobi method by sequential updating: use the new x_i
immediately as they become available.

\[
\begin{aligned}
x_1^{\text{new}} &= \left(b_1 - a_{12}x_2^{\text{old}} - a_{13}x_3^{\text{old}} - a_{14}x_4^{\text{old}}\right)/a_{11}\\
x_2^{\text{new}} &= \left(b_2 - a_{21}x_1^{\text{new}} - a_{23}x_3^{\text{old}} - a_{24}x_4^{\text{old}}\right)/a_{22}\\
x_3^{\text{new}} &= \left(b_3 - a_{31}x_1^{\text{new}} - a_{32}x_2^{\text{new}} - a_{34}x_4^{\text{old}}\right)/a_{33}\\
x_4^{\text{new}} &= \left(b_4 - a_{41}x_1^{\text{new}} - a_{42}x_2^{\text{new}} - a_{43}x_3^{\text{new}}\right)/a_{44}
\end{aligned}
\]
In summary

Gauss-Seidel Method
Use the new x_i at the j-th iteration as soon as they become available:

\[
\begin{aligned}
x_1^{(j)} &= \left(b_1 - a_{12}x_2^{(j-1)} - a_{13}x_3^{(j-1)} - a_{14}x_4^{(j-1)}\right)/a_{11}\\
x_2^{(j)} &= \left(b_2 - a_{21}x_1^{(j)} - a_{23}x_3^{(j-1)} - a_{24}x_4^{(j-1)}\right)/a_{22}\\
x_3^{(j)} &= \left(b_3 - a_{31}x_1^{(j)} - a_{32}x_2^{(j)} - a_{34}x_4^{(j-1)}\right)/a_{33}\\
x_4^{(j)} &= \left(b_4 - a_{41}x_1^{(j)} - a_{42}x_2^{(j)} - a_{43}x_3^{(j)}\right)/a_{44}
\end{aligned}
\]

Convergence check:

\[
\varepsilon_{a,i} = \left|\frac{x_i^{(j)} - x_i^{(j-1)}}{x_i^{(j)}}\right| \times 100\% < \varepsilon_s \quad \text{for all } x_i
\]
Diagonally Dominant Matrix

\[
A = \begin{bmatrix} 8 & 1 & 2 & 3\\ 1 & 10 & 2 & 5\\ 1 & 6 & 12 & 3\\ 3 & 2 & 3 & 9 \end{bmatrix}
\]

is strictly diagonally dominant because, in every row, the magnitude of the
diagonal element is greater than the sum of the absolute values of the other
elements in that row.

\[
A = \begin{bmatrix} 8 & 1 & 2 & 3\\ 1 & 6 & 2 & 5\\ 1 & 6 & 12 & 3\\ 3 & 2 & 3 & 9 \end{bmatrix}
\]

is not diagonally dominant (in row 2, 6 < 1 + 2 + 5 = 8).
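The row test can be automated; the helper below is a sketch (the name is_diag_dominant is an assumption, not from the slides).

% True if every diagonal entry strictly dominates the sum of the
% absolute values of the off-diagonal entries in its row.
function tf = is_diag_dominant(A)
    d  = abs(diag(A));
    s  = sum(abs(A), 2) - d;    % row sums of off-diagonal magnitudes
    tf = all(d > s);
end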
Jacobi and Gauss-Seidel

Example:

\[
\begin{aligned}
5x_1 - 2x_2 + 2x_3 &= 10\\
2x_1 - 4x_2 - x_3 &= -7\\
3x_1 - x_2 + 6x_3 &= 12
\end{aligned}
\]

Jacobi:

\[
\begin{aligned}
x_1^{\text{new}} &= \tfrac{2}{5}x_2^{\text{old}} - \tfrac{2}{5}x_3^{\text{old}} + \tfrac{10}{5}\\
x_2^{\text{new}} &= \tfrac{2}{4}x_1^{\text{old}} - \tfrac{1}{4}x_3^{\text{old}} + \tfrac{7}{4}\\
x_3^{\text{new}} &= -\tfrac{3}{6}x_1^{\text{old}} + \tfrac{1}{6}x_2^{\text{old}} + \tfrac{12}{6}
\end{aligned}
\]

Gauss-Seidel:

\[
\begin{aligned}
x_1^{\text{new}} &= \tfrac{2}{5}x_2^{\text{old}} - \tfrac{2}{5}x_3^{\text{old}} + \tfrac{10}{5}\\
x_2^{\text{new}} &= \tfrac{2}{4}x_1^{\text{new}} - \tfrac{1}{4}x_3^{\text{old}} + \tfrac{7}{4}\\
x_3^{\text{new}} &= -\tfrac{3}{6}x_1^{\text{new}} + \tfrac{1}{6}x_2^{\text{new}} + \tfrac{12}{6}
\end{aligned}
\]
Example

\[
\begin{aligned}
-5x_1 + 12x_3 &= 80\\
4x_1 - x_2 - x_3 &= -2\\
6x_1 + 8x_2 &= 45
\end{aligned}
\qquad
A = \begin{bmatrix} -5 & 0 & 12\\ 4 & -1 & -1\\ 6 & 8 & 0 \end{bmatrix}
\]

Not diagonally dominant!
The order of the equations can be important.
Rearrange the equations to ensure convergence:

\[
\begin{aligned}
4x_1 - x_2 - x_3 &= -2\\
6x_1 + 8x_2 &= 45\\
-5x_1 + 12x_3 &= 80
\end{aligned}
\qquad
A = \begin{bmatrix} 4 & -1 & -1\\ 6 & 8 & 0\\ -5 & 0 & 12 \end{bmatrix}
\]
Gauss-Seidel Iteration

Rearrange:

\[
\begin{aligned}
x_1 &= (x_2 + x_3 - 2)/4\\
x_2 &= (45 - 6x_1)/8\\
x_3 &= (80 + 5x_1)/12
\end{aligned}
\]

Assume x_1 = x_2 = x_3 = 0.

First iteration:

\[
\begin{aligned}
x_1 &= (0 + 0 - 2)/4 = -0.5\\
x_2 &= [45 - 6(-0.5)]/8 = 6.0\\
x_3 &= [80 + 5(-0.5)]/12 = 6.4583
\end{aligned}
\]
Gauss-Seidel Method

Second iteration:

\[
\begin{aligned}
x_1 &= (-2 + 6 + 6.4583)/4 = 2.6146\\
x_2 &= [45 - 6(2.6146)]/8 = 3.6641\\
x_3 &= [80 + 5(2.6146)]/12 = 7.7561
\end{aligned}
\]

Third iteration:

\[
\begin{aligned}
x_1 &= (-2 + 3.6641 + 7.7561)/4 = 2.3550\\
x_2 &= [45 - 6(2.3550)]/8 = 3.8587\\
x_3 &= [80 + 5(2.3550)]/12 = 7.6479
\end{aligned}
\]

Fourth iteration:

\[
\begin{aligned}
x_1 &= (-2 + 3.8587 + 7.6479)/4 = 2.3767\\
x_2 &= [45 - 6(2.3767)]/8 = 3.8425\\
x_3 &= [80 + 5(2.3767)]/12 = 7.6569
\end{aligned}
\]

5th: x_1 = 2.3749, x_2 = 3.8439, x_3 = 7.6562
6th: x_1 = 2.3750, x_2 = 3.8437, x_3 = 7.6563
7th: x_1 = 2.3750, x_2 = 3.8438, x_3 = 7.6562
Gauss-Seidel Iteration

A = [4 -1 -1; 6 8 0; -5 0 12];
b = [-2 45 80];
x = Seidel(A,b,x0,tol,100);

 i        x1        x2        x3
 1.0000  -0.5000    6.0000    6.4583
 2.0000   2.6146    3.6641    7.7561
 3.0000   2.3550    3.8587    7.6479
 4.0000   2.3767    3.8425    7.6569
 5.0000   2.3749    3.8439    7.6562
 6.0000   2.3750    3.8437    7.6563
 7.0000   2.3750    3.8438    7.6562
 8.0000   2.3750    3.8437    7.6563

Gauss-Seidel method converged

Converges faster than the Jacobi method shown on the next page.
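The Seidel helper called above is not listed in the slides; a minimal sketch consistent with the call signature and the printed output might look like this:

function x = Seidel(A, b, x0, tol, maxit)
% Gauss-Seidel solver: sweep until successive iterates agree to tol.
    x = x0(:);  b = b(:);  n = length(b);
    for i = 1:maxit
        xold = x;
        for j = 1:n
            s = A(j,[1:j-1,j+1:n]) * x([1:j-1,j+1:n]);
            x(j) = (b(j) - s) / A(j,j);
        end
        fprintf('%8.4f %9.4f %9.4f %9.4f\n', i, x);  % echo (3-unknown case)
        if norm(x - xold, inf) < tol
            disp('Gauss-Seidel method converged');  return
        end
    end
end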
Jacobi Iteration

A = [4 -1 -1; 6 8 0; -5 0 12];
b = [-2 45 80];
x = Jacobi(A,b,0.0001,100);

 i        x1        x2        x3
 1.0000  -0.5000    5.6250    6.6667
 2.0000   2.5729    6.0000    6.4583
 3.0000   2.6146    3.6953    7.7387
 4.0000   2.3585    3.6641    7.7561
 5.0000   2.3550    3.8561    7.6494
 6.0000   2.3764    3.8587    7.6479
 7.0000   2.3767    3.8427    7.6568
 8.0000   2.3749    3.8425    7.6569
 9.0000   2.3749    3.8438    7.6562
10.0000   2.3750    3.8439    7.6562
11.0000   2.3750    3.8437    7.6563
12.0000   2.3750    3.8437    7.6563
13.0000   2.3750    3.8438    7.6562
14.0000   2.3750    3.8438    7.6562

Jacobi method converged
Relaxation Method

Relaxation (weighting) factor ω:

Gauss-Seidel method: ω = 1
Over-relaxation: 1 < ω < 2
Under-relaxation: 0 < ω < 1

\[
x_i^{\text{new}} = \omega\, x_i^{\text{GS}} + (1-\omega)\, x_i^{\text{old}}
\]

where x_i^{GS} is the Gauss-Seidel update. This is the basis of
Successive Over-Relaxation (SOR).
Successive Over-Relaxation (SOR)

Relaxation method:

G-S method:

\[
x_2^{\text{new}} = \left(b_2 - a_{21}x_1^{\text{new}} - a_{23}x_3^{\text{old}} - a_{24}x_4^{\text{old}}\right)/a_{22}
\]

SOR method:

\[
x_2^{\text{new}} = (1-\omega)\,x_2^{\text{old}} + \omega\left(b_2 - a_{21}x_1^{\text{new}} - a_{23}x_3^{\text{old}} - a_{24}x_4^{\text{old}}\right)/a_{22}
\]

The full sweep:

\[
\begin{aligned}
x_1^{\text{new}} &= (1-\omega)\,x_1^{\text{old}} + \omega\left(b_1 - a_{12}x_2^{\text{old}} - a_{13}x_3^{\text{old}} - a_{14}x_4^{\text{old}}\right)/a_{11}\\
x_2^{\text{new}} &= (1-\omega)\,x_2^{\text{old}} + \omega\left(b_2 - a_{21}x_1^{\text{new}} - a_{23}x_3^{\text{old}} - a_{24}x_4^{\text{old}}\right)/a_{22}\\
x_3^{\text{new}} &= (1-\omega)\,x_3^{\text{old}} + \omega\left(b_3 - a_{31}x_1^{\text{new}} - a_{32}x_2^{\text{new}} - a_{34}x_4^{\text{old}}\right)/a_{33}\\
x_4^{\text{new}} &= (1-\omega)\,x_4^{\text{old}} + \omega\left(b_4 - a_{41}x_1^{\text{new}} - a_{42}x_2^{\text{new}} - a_{43}x_3^{\text{new}}\right)/a_{44}
\end{aligned}
\]
SOR Iterations

Rearrange:

\[
\begin{aligned}
x_1 &= (x_2 + x_3 - 2)/4\\
x_2 &= (45 - 6x_1)/8\\
x_3 &= (80 + 5x_1)/12
\end{aligned}
\]

Assume x_1 = x_2 = x_3 = 0, and ω = 1.2:

\[
x_i = \omega\, x_i^{\text{GS}} + (1-\omega)\, x_i^{\text{old}} \qquad \text{(GS: Gauss-Seidel)}
\]

First iteration:

\[
\begin{aligned}
x_1 &= (-0.2)(0) + 1.2\,(0 + 0 - 2)/4 = -0.6\\
x_2 &= (-0.2)(0) + 1.2\,[45 - 6(-0.6)]/8 = 7.29\\
x_3 &= (-0.2)(0) + 1.2\,[80 + 5(-0.6)]/12 = 7.7
\end{aligned}
\]
SOR Iterations

Second iteration:

\[
\begin{aligned}
x_1 &= (-0.2)(-0.6) + 1.2\,(7.29 + 7.7 - 2)/4 = 4.017\\
x_2 &= (-0.2)(7.29) + 1.2\,[45 - 6(4.017)]/8 = 1.6767\\
x_3 &= (-0.2)(7.7) + 1.2\,[80 + 5(4.017)]/12 = 8.4685
\end{aligned}
\]

Third iteration:

\[
\begin{aligned}
x_1 &= (-0.2)(4.017) + 1.2\,(1.6767 + 8.4685 - 2)/4 = 1.6402\\
x_2 &= (-0.2)(1.6767) + 1.2\,[45 - 6(1.6402)]/8 = 4.9385\\
x_3 &= (-0.2)(8.4685) + 1.2\,[80 + 5(1.6402)]/12 = 7.1264
\end{aligned}
\]

With ω = 1.2 this converges more slowly (see the MATLAB solutions).
There is an optimal relaxation parameter.
Optimized

[Figure from the original slide (convergence versus relaxation parameter) not recovered.]
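One way to locate the optimum empirically is to scan ω and count the sweeps needed to converge; the sketch below does this for the rearranged example (the grid of ω values and the tolerance are illustrative assumptions):

% Count SOR sweeps to convergence over a range of omega values.
A = [4 -1 -1; 6 8 0; -5 0 12];  b = [-2; 45; 80];
for omega = 0.8:0.1:1.4
    x = zeros(3,1);  k = 0;
    while norm(b - A*x, inf) > 1e-6 && k < 200
        for i = 1:3
            xgs  = (b(i) - A(i,[1:i-1,i+1:3])*x([1:i-1,i+1:3])) / A(i,i);
            x(i) = (1-omega)*x(i) + omega*xgs;
        end
        k = k + 1;
    end
    fprintf('omega = %.1f: %d sweeps\n', omega, k);
end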
Current Methods in Use

The computational work required by the solver limits the size of the
problem that can be treated.

With the increasing number of cells, iterative methods are used.

A variety of iterative methods are available to solve large sparse
matrices; some of them take advantage of the symmetric
character of the matrix.

During the 1980s, researchers worked on iterative methods to
satisfy the increasing need for robust and efficient methods.
Current Methods in Use

Preconditioning is designed to obtain a fast inversion with
minimal storage and to ensure that material is conserved at each
iteration.

One way to approximate matrix A is nested factorization.
This approximation is used as a preconditioning matrix for a
truncated conjugate-gradient algorithm ("Orthomin").

Several variations of the nested factorization method were
tested and implemented in reservoir simulators.
