
Chapter 3: Iterative Methods for Linear Systems

Why do we need to solve big linear systems?


E.g. 1 Spectral method: Express the solution as

    u(x) = Σ_{i=1}^{n} a_i φ_i(x).

To find the coefficients a_i, we need to solve a system of linear equations.

E.g. 2 Finite difference method: Consider u''(x) = f(x) where 0 < x < 1 and u(0) = a_0, u(1) = a_1.
In calculus, we know (by Taylor expansion)

    g(x + h) = g(x) + h g'(x) + (h²/2) g''(x) + ...
    g(x − h) = g(x) − h g'(x) + (h²/2) g''(x) − ...

so that

    g(x + h) − 2g(x) + g(x − h) ≈ h² g''(x),   i.e.   g''(x) ≈ [g(x + h) − 2g(x) + g(x − h)] / h².

Now, partition [0, 1] into x_i = ih, h = 1/(n+1).
Then the differential equation can be approximated by

    (u_{i+1} − 2u_i + u_{i−1}) / h² = f(x_i),   i = 1, 2, ..., n.

This is a system of linear equations:

      [ u_1 ]   [ f(x_1) − u_0/h²     ]
      [ u_2 ]   [ f(x_2)              ]
    A [  ⋮  ] = [  ⋮                  ]
      [ u_n ]   [ f(x_n) − u_{n+1}/h² ]

with the boundary values u_0 = a_0, u_{n+1} = a_1, and where

               [ −2   1                ]
               [  1  −2   1            ]
    A = (1/h²) [      ⋱   ⋱   ⋱        ]
               [           1  −2   1   ]
               [               1  −2   ]
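As a sanity check, this system is easy to assemble and solve directly. Below is a minimal NumPy sketch (the choice f(x) = sin(πx) and zero boundary values are my own test assumptions, not from the notes):

```python
import numpy as np

# 1D Poisson problem u''(x) = f(x) on (0, 1), u(0) = a0, u(1) = a1,
# discretized with n interior points, h = 1/(n+1).
n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)              # interior nodes x_i = i*h

# A = (1/h^2) * tridiag(1, -2, 1)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

a0, a1 = 0.0, 0.0                            # boundary values (test assumption)
f = np.sin(np.pi * x)                        # right-hand side (test assumption)
b = f.copy()
b[0] -= a0 / h**2                            # boundary terms moved to the RHS
b[-1] -= a1 / h**2

u = np.linalg.solve(A, b)                    # direct solve, O(n^3)
u_exact = -np.sin(np.pi * x) / np.pi**2      # exact solution for this f
print(np.max(np.abs(u - u_exact)))           # error is O(h^2)
```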

How to solve a big linear system?

From linear algebra, we learnt Gaussian elimination.


Starting from

    [ a_11  a_12  ...  a_1n ] [ x_1 ]   [ b_1 ]
    [ a_21  a_22  ...  a_2n ] [ x_2 ] = [ b_2 ]
    [   ⋮     ⋮          ⋮  ] [  ⋮  ]   [  ⋮  ]
    [ a_n1  a_n2  ...  a_nn ] [ x_n ]   [ b_n ]

elementary row operations reduce the system to an upper triangular one,

    [ c_11  c_12  ...  c_1n ] [ x_1 ]   [ b'_1 ]
    [       c_22  ...  c_2n ] [ x_2 ] = [ b'_2 ]
    [                    ⋮  ] [  ⋮  ]   [  ⋮   ]
    [                  c_nn ] [ x_n ]   [ b'_n ]

which is then solved by backward substitution.
Computational cost: O(n³) [check it if interested].
From linear algebra, we also learnt LU factorization.
Decompose the matrix A into A = LU (L lower triangular, U upper triangular, obtained by elementary row operations).
Then solve the equation by:

    A x = b  ⟺  L(U x) = b.

Let y = U x. Solve L y = b first (forward substitution). Then solve U x = y (backward substitution, easy).
If A is symmetric positive definite (x^T A x > 0 for all x ≠ 0), then A = L L^T; the decomposition can be done by the Cholesky decomposition (numerical analysis).
Computational cost: O(n³).
Goal:
Develop iterative methods: find a sequence x^0, x^1, x^2, ... such that x^k → x* = solution as k → ∞ (stop when the error is small enough).
Splitting method for general linear systems

Consider the system A x = f, A ∈ M_{n×n}(ℝ) (n is big).
We can split A as follows:

    A = M + (A − M) = M − (M − A) =: N − P.

Solving A x = f is equivalent to solving:

    (N − P) x = f  ⟺  N x = P x + f.

We can develop an iterative scheme as follows:

    N x^{k+1} = P x^k + f

to get a sequence {x^k}_{k=0}^∞.

It can be shown that if {x^k} converges, it converges to the solution of A x = f.
Many different choices of the splitting!
Goal:
N is simple to invert (such as a diagonal matrix).
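In code, every splitting method is the same two-line loop; only N and P change. A hedged sketch (the residual-based stopping rule and all names are my own choices):

```python
import numpy as np

def splitting_iteration(N, P, f, x0, tol=1e-10, max_iter=10_000):
    """Iterate N x^{k+1} = P x^k + f until the residual of A x = f is small,
    where A = N - P. In practice one exploits the structure of N (diagonal,
    triangular, ...) instead of calling a dense solver each step."""
    A = N - P
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(N, P @ x + f)
        if np.linalg.norm(A @ x - f) < tol:
            return x, k
    return x, max_iter
```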
Splitting choice 1: Jacobi Method

Split A as A = D + (A − D), where D contains the diagonal entries of A only.
Then A x = f becomes:

    D x^{k+1} + (A − D) x^k = f
    D x^{k+1} = (D − A) x^k + f
    x^{k+1} = D^{-1}(D − A) x^k + D^{-1} f.

This is equivalent to solving:

    a_11 x_1^{k+1} + a_12 x_2^k     + ... + a_1n x_n^k     = f_1   for x_1^{k+1}
    a_21 x_1^k     + a_22 x_2^{k+1} + ... + a_2n x_n^k     = f_2   for x_2^{k+1}
        ⋮
    a_n1 x_1^k     + a_n2 x_2^k     + ... + a_nn x_n^{k+1} = f_n   for x_n^{k+1}.
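Translating the componentwise equations into NumPy gives a short Jacobi routine (a sketch; the stopping rule is my own choice):

```python
import numpy as np

def jacobi(A, f, x0, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: x^{k+1} = D^{-1}((D - A) x^k + f)."""
    d = np.diag(A)                # diagonal entries a_ii (must be nonzero)
    R = A - np.diag(d)            # off-diagonal part of A
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x_new = (f - R @ x) / d   # solve row i for x_i^{k+1}
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```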

Example: Consider

    [  5  −2  −3 ] [ x_1 ]   [ −1 ]
    [ −3   9  −1 ] [ x_2 ] = [  2 ]
    [  2  −1   7 ] [ x_3 ]   [  3 ]

Then:

              [ 5  0  0 ]⁻¹ ( [  0  2  3 ]       [ −1 ] )
    x^{k+1} = [ 0  9  0 ]    ( [  3  0  1 ] x^k + [  2 ] )
              [ 0  0  7 ]    ( [ −2  1  0 ]       [  3 ] )

Start with x^0 = (0, 0, 0)^T. The sequence converges in 7 iterations to x^7 ≈ (0.186, 0.331, 0.423)^T.

Splitting choice 2: Gauss-Seidel Method

Split A as:

    A = L + D + U

(L strictly lower triangular, D diagonal, U strictly upper triangular).
Develop an iterative scheme as:

    L x^{k+1} + D x^{k+1} + U x^k = f.

This is equivalent to:

    a_11 x_1^{k+1} + a_12 x_2^k     + ... + a_1n x_n^k     = f_1   for x_1^{k+1}
    a_21 x_1^{k+1} + a_22 x_2^{k+1} + ... + a_2n x_n^k     = f_2   for x_2^{k+1}
        ⋮
    a_n1 x_1^{k+1} + a_n2 x_2^{k+1} + ... + a_nn x_n^{k+1} = f_n   for x_n^{k+1}.

Gauss-Seidel is equivalent to

    x^{k+1} = −(L + D)^{-1} U x^k + (L + D)^{-1} f.
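Gauss-Seidel is the same sweep as Jacobi, except each updated entry is used immediately; a plain-loop sketch (again with my own stopping rule):

```python
import numpy as np

def gauss_seidel(A, f, x0, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel: sweep i = 1..n, using already-updated entries."""
    n = len(f)
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s_new = A[i, :i] @ x[:i]              # uses x^{k+1} entries
            s_old = A[i, i + 1:] @ x_old[i + 1:]  # uses x^k entries
            x[i] = (f[i] - s_new - s_old) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k
    return x, max_iter
```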
Example: Continue with the last example.

              [  5   0  0 ]⁻¹ ( [ 0  2  3 ]       [ −1 ] )
    x^{k+1} = [ −3   9  0 ]    ( [ 0  0  1 ] x^k + [  2 ] )
              [  2  −1  7 ]    ( [ 0  0  0 ]       [  3 ] )

Start with x^0 = (0, 0, 0)^T. The sequence converges in 7 iterations to x^7 ≈ (0.186, 0.331, 0.423)^T.

Do the Jacobi / Gauss-Seidel methods always converge?

Example: Consider:

    [  1  −5 ] [ x_1 ]   [ −4 ]
    [ −7   1 ] [ x_2 ] = [ −6 ]

Then the Jacobi method gives:

    x^{k+1} = [ 0  5 ] x^k + [ −4 ]
              [ 7  0 ]       [ −6 ]

Start with x^0 = (0, 0)^T. Then

    x^1 = (−4, −6)^T,  x^2 = (−34, −34)^T,  x^3 = (−174, −244)^T,  ...,  x^7 = (−214374, −300124)^T,

which doesn't converge.
The real solution is (x_1, x_2) = (1, 1)!
How about Gauss-Seidel? It also doesn't converge!

Our next goal is to determine when the Jacobi method and the Gauss-Seidel method converge.
Answer: the matrix A must satisfy a certain property: strict diagonal dominance (SDD).
Analysis of Convergence

Let A = N − P.

Goal: Solve A x = f  ⟺  (N − P) x = f.

We have N x = P x + f, which gives the iterative scheme:

    N x^{m+1} = P x^m + f,   m = 0, 1, 2, ...

Let x* be the solution of A x = f.
Define the error e^m := x^m − x*, m = 0, 1, 2, ...
Now

    N x^{m+1} = P x^m + f    (1)
    N x*      = P x*  + f    (2)

(1) − (2):  N(x^{m+1} − x*) = P(x^m − x*)  ⟹  N e^{m+1} = P e^m  ⟹  e^{m+1} = N^{-1} P e^m.

So, letting M = N^{-1} P, we have

    e^m = M^m e^0.
Assume {u_1, ..., u_n} is a set of linearly independent eigenvectors of M (the u_i can be complex-valued vectors).
Let e^0 = Σ_{i=1}^n a_i u_i. Then:

    e^m = M^m e^0 = Σ_{i=1}^n a_i M^m u_i = Σ_{i=1}^n a_i λ_i^m u_i

where λ_1, λ_2, ..., λ_n are the corresponding eigenvalues (they can be complex).
Suppose we order the eigenvalues:

    |λ_1| ≥ |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|.

Then

    e^m = λ_1^m ( a_1 u_1 + Σ_{i=2}^n a_i (λ_i/λ_1)^m u_i ).

Assume |λ_1| < 1. Then e^m → 0 as m → ∞.

In order to reduce the error by a factor of 10^{−m}, we need k iterations where |λ_1|^k ≤ 10^{−m}. That is,

    k ≥ m / (−log_10 ρ(M)) =: m / R.

We call ρ(M) the asymptotic convergence factor.
We call R := −log_10 ρ(M) the asymptotic convergence rate.
In other words, the spectral radius of M,

    ρ(M) = max_k { |λ_k| : λ_k an eigenvalue of M },

is a good indicator of the rate of convergence.


But finding ρ(M) is difficult!
Solution: compute it numerically (next topic).
Useful Theorem: Gerschgorin Theorem

Let e = (e_1, e_2, ..., e_n)^T be an eigenvector of A = (a_ij) with eigenvalue λ.
Then:

    A e = λ e.

Hence for each i (1 ≤ i ≤ n):

    Σ_{j=1}^n a_ij e_j = λ e_i
    a_ii e_i + Σ_{j≠i} a_ij e_j = λ e_i
    e_i (λ − a_ii) = Σ_{j≠i} a_ij e_j
    |e_i| |λ − a_ii| ≤ Σ_{j≠i} |a_ij| |e_j|.

Suppose the component of largest absolute value is |e_l| ≠ 0 (so |e_l| ≥ |e_j| for all j).
Then:

    |e_l| |λ − a_ll| ≤ Σ_{j≠l} |a_lj| |e_j| ≤ Σ_{j≠l} |a_lj| |e_l|

    ⟹  |λ − a_ll| ≤ Σ_{j≠l} |a_lj|.

So we have

    λ ∈ Ball( a_ll ; Σ_{j≠l} |a_lj| ),

the ball with centre a_ll and radius Σ_{j≠l} |a_lj|.
Note: We don't know l unless we know λ and e.
But we can conclude:

    λ ∈ ⋃_{l=1}^n Ball( a_ll ; Σ_{j≠l} |a_lj| ).

Example: Determine upper bounds on the eigenvalues of the matrix:

        [  2  −1   0   0 ]
    A = [ −1   2  −1   0 ]
        [  0  −1   2  −1 ]
        [  0   0  −1   2 ]

All eigenvalues lie within ⋃_{l=1}^4 Ball( a_ll ; Σ_{j≠l} |a_lj| ).
For l = 1 and 4:  Ball = { λ : |λ − 2| ≤ 1 }.
For l = 2 and 3:  Ball = { λ : |λ − 2| ≤ 2 }.
Therefore

    ⋃_{l=1}^4 Ball( a_ll ; Σ_{j≠l} |a_lj| ) = disk with radius 2 and centre at (2, 0).

Since A is symmetric, all eigenvalues are real.
Thus 0 ≤ λ ≤ 4.
In fact, the eigenvalues of A are λ_1 = 3.618, λ_2 = 2.618, λ_3 = 1.382, λ_4 = 0.382.
That is, ρ(A) = λ_1 ≤ 4.
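A quick numerical confirmation of the disks for this matrix (a sketch):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])

# Gerschgorin disk for row l: centre a_ll, radius = sum_{j != l} |a_lj|
for l in range(A.shape[0]):
    radius = np.sum(np.abs(A[l])) - np.abs(A[l, l])
    print(f"row {l + 1}: centre {A[l, l]:.0f}, radius {radius:.0f}")

# all eigenvalues must lie in the union of the disks
print(np.sort(np.linalg.eigvalsh(A)))   # [0.382, 1.382, 2.618, 3.618]
```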
To prove the convergence of the Jacobi method and the Gauss-Seidel method, let us introduce a definition.
Definition: A matrix A = (a_ij) is called strictly diagonally dominant (SDD) if:

    |a_ii| > Σ_{j=1, j≠i}^n |a_ij|,   i = 1, 2, ..., n.

Theorem 1: If a matrix A is SDD, then A must be non-singular.

Proof: Recall that all eigenvalues lie in ⋃_{l=1}^n Ball( a_ll ; Σ_{j≠l} |a_lj| ).
Now A is SDD iff:

    |a_ll| > Σ_{j≠l} |a_lj|,   l = 1, 2, ..., n.

Therefore every ball Ball( a_ll ; Σ_{j≠l} |a_lj| ) excludes 0.
Hence, no eigenvalue can be 0.
If A were singular, then there would exist v ≠ 0 such that A v = 0 = 0·v.
This implies λ = 0 is an eigenvalue. Contradiction.
So A is non-singular. ∎
Now, we prove the convergence of the Jacobi method.
Recall that the Jacobi method can be written as:

    x^{m+1} = D^{-1}(D − A) x^m + D^{-1} f.

Theorem: The Jacobi method converges to the solution of A x = f if A is strictly diagonally dominant.
Proof: Note that

    x_i^{m+1} = −(1/a_ii) Σ_{j≠i} a_ij x_j^m + f_i / a_ii,   i = 1, 2, ..., n.    (1)

Let x* be the solution. Then we also have

    x_i* = −(1/a_ii) Σ_{j≠i} a_ij x_j* + f_i / a_ii.    (2)

(1) − (2):

    e_i^{m+1} = −(1/a_ii) Σ_{j≠i} a_ij e_j^m.

Therefore

    |e_i^{m+1}| ≤ Σ_{j≠i} |a_ij / a_ii| |e_j^m| ≤ ( Σ_{j≠i} |a_ij / a_ii| ) ‖e^m‖_∞ ≤ r ‖e^m‖_∞,

where ‖e^m‖_∞ = max_j |e_j^m| and

    r = max_i Σ_{j≠i} |a_ij / a_ii| < 1   (since A is SDD).

Hence ‖e^{m+1}‖_∞ ≤ r ‖e^m‖_∞, and inductively

    ‖e^m‖_∞ ≤ r^m ‖e^0‖_∞.

Therefore ‖e^m‖_∞ → 0 as m → ∞. ∎

Theorem: The Gauss-Seidel method converges to the solution of A x = f if A is strictly diagonally dominant.
Proof: The Gauss-Seidel method can be written as:

    x_i^{m+1} = −Σ_{j=1}^{i−1} (a_ij / a_ii) x_j^{m+1} − Σ_{j=i+1}^{n} (a_ij / a_ii) x_j^m + f_i / a_ii.    (1)

Let x* be the solution of A x = f. Then:

    x_i* = −Σ_{j=1}^{i−1} (a_ij / a_ii) x_j* − Σ_{j=i+1}^{n} (a_ij / a_ii) x_j* + f_i / a_ii.    (2)

(1) − (2):

    e_i^{m+1} = −Σ_{j=1}^{i−1} (a_ij / a_ii) e_j^{m+1} − Σ_{j=i+1}^{n} (a_ij / a_ii) e_j^m.

Let e^m = (e_1^m, ..., e_n^m) and r = max_i Σ_{j≠i} |a_ij / a_ii| < 1. Again, we will prove:

    |e_i^{m+1}| ≤ r ‖e^m‖_∞,   i = 1, 2, ..., n,

by induction on i.
When i = 1:

    |e_1^{m+1}| ≤ Σ_{j=2}^{n} |a_1j / a_11| |e_j^m| ≤ ‖e^m‖_∞ Σ_{j=2}^{n} |a_1j / a_11| ≤ r ‖e^m‖_∞.

Assume |e_j^{m+1}| ≤ r ‖e^m‖_∞ for j = 1, 2, ..., i − 1.
Then:

    |e_i^{m+1}| ≤ Σ_{j=1}^{i−1} |a_ij / a_ii| |e_j^{m+1}| + Σ_{j=i+1}^{n} |a_ij / a_ii| |e_j^m|
                ≤ r ‖e^m‖_∞ Σ_{j=1}^{i−1} |a_ij / a_ii| + ‖e^m‖_∞ Σ_{j=i+1}^{n} |a_ij / a_ii|
                ≤ ‖e^m‖_∞ Σ_{j≠i} |a_ij / a_ii|
                ≤ r ‖e^m‖_∞.

By induction, |e_i^{m+1}| ≤ r ‖e^m‖_∞ for all i.
Hence

    ‖e^{m+1}‖_∞ ≤ r ‖e^m‖_∞.

Therefore

    ‖e^m‖_∞ ≤ r^m ‖e^0‖_∞ → 0 as m → ∞, since r < 1. ∎

Example: Consider

    A x = [ 10   1 ] [ x_1 ] = [ 12 ] = b.
          [  1  10 ] [ x_2 ]   [ 21 ]

A is SDD. Therefore both the Jacobi method and the Gauss-Seidel method converge.

Let us compare the convergence rates of the two methods.


Solution: Jacobi method: D = [ 10 0 ; 0 10 ] and

    x^{k+1} = D^{-1}(D − A) x^k + D^{-1} [ 12 ].
                                         [ 21 ]

Let

    M = D^{-1}(D − A) = [ 10   0 ]⁻¹ [  0  −1 ] = [    0    −1/10 ]
                        [  0  10 ]   [ −1   0 ]   [ −1/10     0   ].

We need to check the spectral radius of M.
Eigenvalues of M:  λ² − 1/100 = 0  ⟹  λ = 1/10 or λ = −1/10.
Therefore M is diagonalizable and ρ(M) = 1/10.

Recall: the error e^m = x^m − x* behaves like e^m ≈ ρ(M)^m K_J = (1/10)^m K_J for some constant vector K_J.
Now, consider the Gauss-Seidel method:
Take

    L + D = [ 10   0 ],   U = [ 0  1 ],
            [  1  10 ]        [ 0  0 ]

so that

    x^{k+1} = −(L + D)^{-1} U x^k + (L + D)^{-1} b.

We need to check:

    M = −(L + D)^{-1} U = [ 0  −1/10  ]
                          [ 0   1/100 ].

Eigenvalues of M:  λ(λ − 1/100) = 0  ⟹  λ = 1/100 or λ = 0.
Therefore

    ρ(M) = 1/100   and   e^m ≈ (1/100)^m K_GS.

So Gauss-Seidel converges faster.


In fact, recall that in order to reduce the error by a factor of 10^{−m}, we need k iterations such that |λ_1|^k ≤ 10^{−m}, i.e. k ≥ m / (−log_10 ρ(M)).

For Jacobi:       k ≥ m / (−log_10(1/10))  = m.
For Gauss-Seidel: k ≥ m / (−log_10(1/100)) = m/2.

Therefore Gauss-Seidel converges twice as fast as Jacobi.
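Both spectral radii can be verified in a few lines (a sketch, using the splitting matrices defined above):

```python
import numpy as np

A = np.array([[10., 1.],
              [ 1., 10.]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

M_jacobi = np.linalg.solve(D, D - A)     # D^{-1}(D - A)
M_gs = np.linalg.solve(D + L, -U)        # -(L + D)^{-1} U

rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
print(rho(M_jacobi))   # 0.1  -> rate R = 1
print(rho(M_gs))       # 0.01 -> rate R = 2, twice as fast
```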


What if M = N^{-1} P is not diagonalizable?
Theorem: Let A ∈ M_{n×n}(ℂ) be a complex-valued matrix and let

    ρ(A) = max_i { |λ_i| },

where λ_1, λ_2, ..., λ_n are the eigenvalues of A; ρ(A) is called the spectral radius. Then

    lim_{k→∞} A^k = 0   iff   ρ(A) < 1.

Proof: (⟹) Let λ be an eigenvalue with eigenvector v ≠ 0.
Then A^k v = λ^k v. Thus,

    0 = lim_{k→∞} A^k v = lim_{k→∞} λ^k v = v · lim_{k→∞} λ^k,

so lim_{k→∞} λ^k = 0, which forces |λ| < 1. Hence ρ(A) < 1.


(⟸) Let λ_1, λ_2, ..., λ_s be the eigenvalues of A. From linear algebra, there exists an invertible Q ∈ M_{n×n}(ℂ) such that:

    A = Q J Q^{-1}

where J is the Jordan canonical form of A:

    J = diag( J_{m_1}(λ_1), J_{m_2}(λ_2), ..., J_{m_s}(λ_s) )

with Jordan blocks

                   [ λ_i   1            ]
    J_{m_i}(λ_i) = [      λ_i   ⋱       ]  ∈ M_{m_i×m_i}(ℂ),   1 ≤ i ≤ s.
                   [            ⋱    1  ]
                   [                λ_i ]

Now, A^k = Q J^k Q^{-1} and

    J^k = diag( J_{m_1}(λ_1)^k, J_{m_2}(λ_2)^k, ..., J_{m_s}(λ_s)^k ).

Also, for k ≥ m_i − 1, the block powers are upper triangular:

                     [ λ_i^k   C(k,1) λ_i^{k−1}   ...   C(k, m_i−1) λ_i^{k−m_i+1} ]
    J_{m_i}(λ_i)^k = [          λ_i^k              ⋱              ⋮               ]
                     [                              ⋱     C(k,1) λ_i^{k−1}        ]
                     [                                     λ_i^k                  ]

where C(k, j) denotes the binomial coefficient.
Since ρ(A) < 1, we have |λ_i| < 1 for all i, and each entry C(k, j) λ_i^{k−j} → 0 as k → ∞, because polynomial growth in k loses to geometric decay.
Therefore lim_{k→∞} J_{m_i}(λ_i)^k = 0 for all i, and so J^k → 0 as k → ∞.
Thus,

    lim_{k→∞} A^k = lim_{k→∞} Q J^k Q^{-1} = 0. ∎

Remark: Following the same idea, ρ(A) > 1 implies

    ‖A^k‖ → ∞ as k → ∞,   where ‖A‖ = max_{i,j} |a_ij|.

Corollary: The iteration scheme x^{k+1} = M x^k + b converges iff ρ(M) < 1.

Proof: Consider

    x^{k+1} = M x^k + b    (1)
    x*      = M x*  + b    (2)   (x* = solution)

Therefore

    e^{k+1} = M(x^k − x*)  ⟹  e^{k+1} = M e^k  ⟹  e^k = M^k e^0.

If ρ(M) < 1, then M^k → 0 as k → ∞, and hence e^k → 0 for any e^0.
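The corollary is about the spectral radius, not any particular norm; a two-line sketch showing that powers decay even when an entrywise norm of M exceeds 1:

```python
import numpy as np

M = np.array([[0.5, 0.9],
              [0.0, 0.5]])                    # rho(M) = 0.5 but ||M||_inf = 1.4
print(np.max(np.abs(np.linalg.eigvals(M))))   # 0.5 < 1
print(np.linalg.norm(np.linalg.matrix_power(M, 50), np.inf))  # ~1e-13: M^k -> 0
```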
Splitting choice 3: Successive Overrelaxation Method (SOR)

    A = L + D + U.

Consider the iterative scheme (introducing the sequence x^k and an auxiliary sequence x̂^k):

    L x^{k+1} + D x̂^{k+1} + U x^k = b       (*)
    x^{k+1} = x^k + ω (x̂^{k+1} − x^k)       (**)

so that

    x̂^{k+1} = (1/ω) x^{k+1} + (1 − 1/ω) x^k.

Putting (**) into (*), we have:

    (L + (1/ω) D) x^{k+1} + (U + (1 − 1/ω) D) x^k = b

or

    (L + (1/ω) D) x^{k+1} = ((1/ω) D − (D + U)) x^k + b.       (SOR)

Clearly, SOR is equivalent to splitting A as:

    A = N − P = (L + (1/ω) D) − ((1/ω − 1) D − U).

In fact, SOR is equivalent to solving:

    a_11 x̂_1^{k+1} + a_12 x_2^k     + ... + a_1n x_n^k     = b_1,   then x_1^{k+1} = x_1^k + ω (x̂_1^{k+1} − x_1^k)
    a_21 x_1^{k+1} + a_22 x̂_2^{k+1} + ... + a_2n x_n^k     = b_2,   then x_2^{k+1} = x_2^k + ω (x̂_2^{k+1} − x_2^k)
        ⋮
    a_n1 x_1^{k+1} + a_n2 x_2^{k+1} + ... + a_nn x̂_n^{k+1} = b_n,   then x_n^{k+1} = x_n^k + ω (x̂_n^{k+1} − x_n^k).
Remark: SOR = Gauss-Seidel if ω = 1.
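As a sketch, SOR is a one-line modification of the Gauss-Seidel sweep: compute the Gauss-Seidel value and then relax (names and stopping rule are my own choices):

```python
import numpy as np

def sor(A, b, omega, x0, tol=1e-10, max_iter=10_000):
    """SOR sweep: x_i^{k+1} = x_i^k + omega * (x_hat_i - x_i^k), where
    x_hat_i is the Gauss-Seidel value. omega = 1 recovers Gauss-Seidel."""
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_hat = (b[i] - s) / A[i, i]       # Gauss-Seidel value
            x[i] = x_old[i] + omega * (x_hat - x_old[i])
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k
    return x, max_iter
```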


Condition for the convergence of SOR
Theorem: A necessary (but not sufficient) condition for SOR to converge is 0 < ω < 2.
Proof: Consider det(N^{-1} P). Since N = L + (1/ω)D is lower triangular and P = (1/ω − 1)D − U is upper triangular,

    det(N^{-1} P) = det( (L + (1/ω) D)^{-1} ) · det( (1/ω − 1) D − U )
                  = det( ((1/ω) D)^{-1} ) · det( (1/ω)(1 − ω) D )
                  = det( (1 − ω) I ) = (1 − ω)^n.

Since det(N^{-1} P) = Π_i λ_i, where the λ_i are the eigenvalues of N^{-1} P,

    |1 − ω|^n = Π_i |λ_i| ≤ ( max_i |λ_i| )^n = ρ(N^{-1} P)^n,

therefore

    ρ(N^{-1} P) ≥ |ω − 1|.

Now, the SOR method converges iff ρ(N^{-1} P) < 1.
So convergence requires |ω − 1| ≤ ρ(N^{-1} P) < 1, which forces 0 < ω < 2. ∎
Remark: In general, SOR converges if and only if

    ρ( (L + (1/ω) D)^{-1} ((1/ω) D − (D + U)) ) < 1.

Therefore, to find a sufficient condition for the SOR method to converge, we need to check the eigenvalues of the matrix:

    (L + (1/ω) D)^{-1} ((1/ω) D − (D + U)).

Example: Let us go back to A x = b where A = [ 10 1 ; 1 10 ].
Recall ρ(M_Jacobi) = 1/10 and ρ(M_GS) = 1/100:
Gauss-Seidel converges faster!

Now, consider the convergence rate of the SOR method.

Recall the SOR method reads:

    x^{k+1} = (L + (1/ω) D)^{-1} ((1/ω) D − (D + U)) x^k + (L + (1/ω) D)^{-1} b.

So,

    M_SOR = (L + (1/ω) D)^{-1} ((1/ω) D − (D + U))
          = ( (1/ω)(D + ωL) )^{-1} ( (1/ω)(D − ω(D + U)) )
          = (D + ωL)^{-1} ((1 − ω) D − ωU).

We examine ρ(M_SOR). Here

    (1 − ω) D − ωU = [ 10(1 − ω)     −ω     ]
                     [     0     10(1 − ω)  ]

and

    D + ωL = [ 10   0 ],   (D + ωL)^{-1} = (1/100) [ 10   0 ],
             [  ω  10 ]                            [ −ω  10 ]

so

    M_SOR = (D + ωL)^{-1} ((1 − ω) D − ωU) = [     1 − ω             −ω/10        ]
                                             [ −ω(1 − ω)/10   ω²/100 + (1 − ω)   ].

The characteristic polynomial of M_SOR is:

    [ (1 − ω) − λ ] [ ω²/100 + (1 − ω) − λ ] − ω²(1 − ω)/100 = 0.

Simplifying:

    λ² − ( 2(1 − ω) + ω²/100 ) λ + (1 − ω)² = 0.

Then:

    λ = (1 − ω) + ω²/200 ± (ω/20) √( 4(1 − ω) + ω²/100 ).

When ω = 1 (Gauss-Seidel method), λ = 0 or λ = 1/100.
Changing ω changes λ.
Choice of ω?
Let us choose ω such that the discriminant vanishes: 4(1 − ω) + ω²/100 = 0.
Then the equation has equal roots. Since ω²/200 = −2(1 − ω) in that case,

    λ = (1 − ω) + ω²/200 = (1 − ω) − 2(1 − ω) = ω − 1.

The smallest value of ω (0 < ω < 2) such that 4(1 − ω) + ω²/100 = 0 is:

    ω = 1.002512579

(which is very close to Gauss-Seidel).
But!! ρ(M_SOR) = 0.002512579 (compare to ρ(M_GS) = 0.01, ρ(M_J) = 0.1):
SOR converges much faster than Gauss-Seidel!
Remark:
ρ(M_SOR) is very sensitive to ω. If we can hit the right value of ω, we can improve the speed of convergence significantly!
One major task in computational math is to find the right parameter ω!
How can we choose the optimal ω in simple cases?
In general it is difficult to choose; ω is usually taken within 0 < ω < 2.
But for some special matrices, the optimal ω can be found easily, as the scan below illustrates.
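For the 2×2 example this sensitivity is easy to see by scanning ω and computing ρ(M_SOR) directly (a sketch):

```python
import numpy as np

A = np.array([[10., 1.],
              [ 1., 10.]])
D, L, U = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)

def rho_sor(omega):
    # M_SOR = (D + omega L)^{-1} ((1 - omega) D - omega U)
    M = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)
    return np.max(np.abs(np.linalg.eigvals(M)))

for omega in (0.8, 1.0, 1.0025126, 1.1, 1.5):
    print(f"omega = {omega:<10} rho(M_SOR) = {rho_sor(omega):.7f}")
# rho dips sharply at omega ~ 1.0025126 and then grows like omega - 1
```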
Definition: Consider the system A x = b, and let A = D + L + U.
If the eigenvalues of

    α D^{-1} L + (1/α) D^{-1} U,   α ≠ 0,

are independent of α, then the matrix is said to be consistently ordered.

Theorem: If A is consistently ordered, then the optimal ω for the SOR method is:

    ω = 2 / ( 1 + √(1 − ρ(M_J)²) )

where M_J = M in the Jacobi method, M_J = D^{-1}(D − A) = −D^{-1}(L + U).

Proof: Consistently ordered means the eigenvalues of

    α D^{-1} L + (1/α) D^{-1} U

are the same as those of

    D^{-1} L + D^{-1} U   (the Jacobi matrix up to sign; put α = 1).

Note that taking α = −1 in the definition shows the spectrum is symmetric about 0, so the sign is immaterial.

Now consider the eigenvalues of M_SOR. The characteristic polynomial is

    det( M_SOR − λI ) = 0,   i.e.   det( (D + ωL)^{-1} ((1 − ω) D − ωU) − λI ) = 0.

Factoring out det( (D + ωL)^{-1} ) ≠ 0:

    det( (1 − ω) D − ωU − λ(D + ωL) ) = 0.

So λ satisfies:

    det( (1 − ω − λ) D − λωL − ωU ) = 0.

Since ω ≠ 0, dividing through by ω λ^{1/2} (for λ ≠ 0), the non-zero eigenvalues must satisfy:

    det( λ^{1/2} D^{-1} L + λ^{-1/2} D^{-1} U − ((λ + ω − 1)/(ω λ^{1/2})) I ) = 0

(up to an overall sign). Since A is consistently ordered, the eigenvalues of

    λ^{1/2} D^{-1} L + λ^{-1/2} D^{-1} U

are the same as those of M_J.

Let the eigenvalues of M_J be μ. Then the non-zero eigenvalues λ of M_SOR satisfy:

    (λ + ω − 1) / (ω λ^{1/2}) = μ   for some μ.    (*)

For λ ≠ 0, we can solve (*) and get

    λ = (1 − ω) + ω²μ²/2 ± ωμ √( (1 − ω) + ω²μ²/4 ).    (**)

Each μ gives one or two eigenvalues λ.

λ depends on ω. We want to find ω such that ρ(M_SOR) is as small as possible.
One can show that this happens when the roots in (**) are equal for the μ of maximum modulus, μ̄ = ρ(M_J). That is,

    (1 − ω) + ω² μ̄²/4 = 0   ⟹   ω = 2 (1 − √(1 − μ̄²)) / μ̄².

We look for the smallest value of ω (0 < ω < 2), and so

    ω = 2 (1 − √(1 − μ̄²)) / μ̄² = 2 / (1 + √(1 − ρ(M_J)²)). ∎

Example: Consider A = [ 10 1 ; 1 10 ].
Then:

    α D^{-1} L + (1/α) D^{-1} U = [   0     1/(10α) ]
                                  [ α/10      0     ]

whose eigenvalues satisfy λ² = 1/100 regardless of α, so A is consistently ordered.
Optimal ω:

    ω = 2 / (1 + √(1 − ρ(M_J)²)) = 2 / (1 + √(1 − 1/100)) = 1.0025126

(same as in Example 1).

The fastest convergence rate is:

    ρ(M_SOR) = 0.0025126.
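The optimal-ω formula reproduces this number directly (a short check):

```python
import numpy as np

rho_jacobi = 0.1                                   # from this example
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_jacobi**2))
print(omega_opt)          # 1.0025126...
print(omega_opt - 1.0)    # 0.0025126... = rho(M_SOR) at the optimum
```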
SOR method:
Take N = (1/ω) D + L and P = (1/ω − 1) D − U.
Then A = N − P.
The iterative scheme:

    x^{k+1} = ((1/ω) D + L)^{-1} ((1/ω − 1) D − U) x^k + ((1/ω) D + L)^{-1} b
            = (D + ωL)^{-1} ((1 − ω) D − ωU) x^k + ω (D + ωL)^{-1} b.

Recall:
SOR converges ⟹ |ω − 1| < 1, i.e. 0 < ω < 2.
In general, SOR converges ⟺ ρ( (D + ωL)^{-1} ((1 − ω) D − ωU) ) < 1.
ω = 1 gives the Gauss-Seidel method.

Remark: In particular, a tridiagonal matrix

        [ *  *           ]
    A = [ *  *  *        ]
        [    ⋱  ⋱  ⋱     ]
        [       *  *     ]

is consistently ordered.

Example: Solve u'' = f, u(0) = a, u(1) = b.

Partition [0, 1] by x_0 = 0 < x_1 = h < x_2 = 2h < ... < x_n = 1.
Approximate u'' by

    u''(x) ≈ (u(x + h) − 2u(x) + u(x − h)) / h²;

then u'' = f can be approximated by

    A [ u(x_1), ..., u(x_{n−1}) ]^T = b̃,

where

               [ −2   1            ]
    A = (1/h²) [  1  −2   1        ]
               [      ⋱   ⋱   ⋱    ]
               [           1  −2   ]

is tridiagonal, hence consistently ordered.

Examples of consistently ordered matrices

Example 1: Consider a block tridiagonal matrix of the form

        [ D_1    T_12                       ]
    A = [ T_21   D_2    T_23                ]
        [         ⋱      ⋱       ⋱          ]
        [                T_{p,p−1}    D_p   ]

where the D_i are diagonal matrices. Then A is consistently ordered.

To see this, it suffices to see that D^{-1}L + D^{-1}U and z D^{-1}L + (1/z) D^{-1}U are similar for all z ≠ 0.
Note,

    z D^{-1} L + (1/z) D^{-1} U = X ( D^{-1} L + D^{-1} U ) X^{-1}

where

    X = diag( I, zI, z²I, ..., z^{p−1} I ).

Example 2: Another type of consistently ordered matrices:

        [ T_1      D_12                         ]
    A = [ D_21     T_2      D_23                ]
        [           ⋱        ⋱        ⋱         ]
        [                   D_{p,p−1}    T_p    ]

where the T_i ∈ M_{n×n} are tridiagonal matrices (and the D_{ij} are diagonal).

Proof: Complicated!
Theorem [D. Young]: Assume:
1. ω ∈ (0, 2)
2. M_JAC has only real eigenvalues
3. μ̄ = ρ(M_JAC) < 1
4. A is consistently ordered.
Then ρ(M_SOR,ω) < 1.
In fact,

    ρ(M_SOR,ω) = { 1 − ω + (1/2) ω²μ̄² + ωμ̄ √(1 − ω + ω²μ̄²/4),   for 0 < ω ≤ ω_opt
                 { ω − 1,                                         for ω_opt < ω < 2

where

    ω_opt = 2 / (1 + √(1 − μ̄²)).

Convergence conditions for SOR

Theorem: If A is symmetric positive definite, then the SOR method converges for all 0 < ω < 2.
Theorem: If A is strictly diagonally dominant, then SOR converges for 0 < ω ≤ 1.
Proof: If A is SDD, then a_ii ≠ 0 and A is invertible. The SOR method reads:

    x^{k+1} = M_SOR x^k + c

where

    M_SOR = (D + ωL)^{-1} ((1 − ω) D − ωU),   (A = L + D + U).

We need to show that if 0 < ω ≤ 1, then ρ(M_SOR) < 1.

We will prove this by contradiction.
Suppose there is an eigenvalue λ with |λ| ≥ 1. Then

    det( λI − M_SOR ) = 0.

Also,

    det( (D + ωL)^{-1} ( λ(D + ωL) − (1 − ω) D + ωU ) ) = 0.

Since a_ii ≠ 0, det( (D + ωL)^{-1} ) ≠ 0, so dividing the second factor by λ:

    det(C) = 0,   where   C = ( 1 − (1 − ω)/λ ) D + ωL + (ω/λ) U.

Then, using |λ| ≥ 1 and 0 ≤ 1 − ω < 1,

    |C_ii| = | 1 − (1 − ω)/λ | |a_ii| ≥ ( 1 − (1 − ω)/|λ| ) |a_ii| ≥ ( 1 − (1 − ω) ) |a_ii| = ω |a_ii|,

while, since A is SDD,

    Σ_{j≠i} |C_ij| = ω ( Σ_{j<i} |a_ij| + (1/|λ|) Σ_{j>i} |a_ij| ) ≤ ω Σ_{j≠i} |a_ij| < ω |a_ii| ≤ |C_ii|.

So C is also SDD.
Thus det(C) ≠ 0. Contradiction!
Hence all eigenvalues of M_SOR satisfy |λ| < 1.
Thus ρ(M_SOR) < 1, and hence SOR converges. ∎