
SPECIAL METHODS
THOMAS METHOD

This method emerges as a simplification of the LU factorization of a tridiagonal matrix.

We know that a positive definite matrix A has a unique symmetric square root F such that $F\,F = A$.

Now, if we do not insist on symmetry, there is a very large set of (non-symmetric) matrices G such that $G\,G^{T} = A$, and which may also be regarded as "square roots" of A. The positive definite matrix A is then said to be factored into the "square" of its square root.
THOMAS METHOD

One of these factorizations is of particular interest, both from theoretical and practical standpoints: the Cholesky decomposition, which is expressed as follows.

• Let A be a positive definite matrix. Then there exists a unique lower triangular matrix L with positive diagonal elements such that $A = LL^{T}$.
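For instance, a small 2×2 illustration (this particular matrix is my own choice, not one from the slides):

$$
A = \begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix}
= \begin{pmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{pmatrix}
\begin{pmatrix} 2 & 1 \\ 0 & \sqrt{2} \end{pmatrix}
= LL^{T}.
$$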
THOMAS METHOD

• Writing the tridiagonal matrix A (sub-diagonal a, main diagonal b, super-diagonal c) as the product A = LU of a unit lower bidiagonal L and an upper bidiagonal U gives the following expressions:

$$U_{1,1} = b_1$$
$$L_{n,n-1} = \frac{a_n}{U_{n-1,n-1}}, \qquad U_{n-1,n} = c_{n-1}, \qquad U_{n,n} = b_n - L_{n,n-1}\,U_{n-1,n}$$

where $a_1 = 0$ and $c_n = 0$.
THOMAS METHOD

• Sweeping from k = 2 to n leads to the following:

$$L_{k,k-1} = \frac{a_k}{U_{k-1,k-1}}, \qquad U_{k-1,k} = c_{k-1}, \qquad U_{k,k} = b_k - L_{k,k-1}\,U_{k-1,k}$$
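A minimal Python sketch of this factorization sweep (the function name `thomas_factor` and the band storage are my own choices; the 1-based indices of the slides become 0-based here):

```python
import numpy as np

def thomas_factor(a, b, c):
    """LU-factor a tridiagonal matrix given by its three bands.

    a : sub-diagonal   (a[0] is unused, i.e. a_1 = 0)
    b : main diagonal
    c : super-diagonal (c[-1] is unused, i.e. c_n = 0)

    Returns the sub-diagonal of L and the two non-zero bands of U.
    """
    n = len(b)
    L_sub = np.zeros(n)    # L[k, k-1], stored at index k
    U_diag = np.zeros(n)   # U[k, k]
    U_sup = np.zeros(n)    # U[k-1, k], stored at index k

    U_diag[0] = b[0]                             # U_{1,1} = b_1
    for k in range(1, n):                        # sweep k = 2 .. n
        L_sub[k] = a[k] / U_diag[k - 1]          # L_{k,k-1} = a_k / U_{k-1,k-1}
        U_sup[k] = c[k - 1]                      # U_{k-1,k} = c_{k-1}
        U_diag[k] = b[k] - L_sub[k] * U_sup[k]   # U_{k,k} = b_k - L_{k,k-1} U_{k-1,k}
    return L_sub, U_diag, U_sup
```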
THOMAS METHOD

• If LUx = r and we set Ux = d, then Ld = r, and therefore, by forward substitution:

$$d_1 = r_1$$
$$d_k = r_k - L_{k,k-1}\,d_{k-1}, \qquad k = 2, \ldots, n$$
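A matching sketch of the forward substitution (the name `thomas_forward` is assumed; `L_sub` is the band produced by the factorization above):

```python
import numpy as np

def thomas_forward(L_sub, r):
    """Solve L d = r for a unit lower bidiagonal L, where L_sub[k] = L[k, k-1]."""
    n = len(r)
    d = np.zeros(n)
    d[0] = r[0]                              # d_1 = r_1
    for k in range(1, n):                    # k = 2 .. n
        d[k] = r[k] - L_sub[k] * d[k - 1]    # d_k = r_k - L_{k,k-1} d_{k-1}
    return d
```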
THOMAS METHOD

• Finally, we solve Ux = d by backward substitution:

$$x_n = \frac{d_n}{U_{n,n}}$$
$$x_k = \frac{d_k - \sum_{j=k+1}^{n} U_{k,j}\,x_j}{U_{k,k}}, \qquad k = n-1, \ldots, 1$$

(Since U is upper bidiagonal, only the $j = k+1$ term of the sum is non-zero.)
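A sketch of the backward substitution (the name `thomas_backward` is assumed; it uses only the two non-zero bands of U):

```python
import numpy as np

def thomas_backward(U_diag, U_sup, d):
    """Solve U x = d for an upper bidiagonal U, where U_diag[k] = U[k, k] and U_sup[k] = U[k-1, k]."""
    n = len(d)
    x = np.zeros(n)
    x[-1] = d[-1] / U_diag[-1]                               # x_n = d_n / U_{n,n}
    for k in range(n - 2, -1, -1):                           # k = n-1 .. 1
        x[k] = (d[k] - U_sup[k + 1] * x[k + 1]) / U_diag[k]  # only U_{k,k+1} is non-zero
    return x
```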
EXAMPLE

• Solve the following system using the Thomas method:

$$x_1 - 4x_2 = 17$$
$$-2x_1 - x_2 - x_3 = -9$$
$$-2x_2 - x_3 = 4$$

• The vectors a, b, c and r are identified as follows:

$$b_1 = 1, \quad a_1 = 0, \quad c_1 = -4, \quad r_1 = 17$$
$$b_2 = -1, \quad a_2 = -2, \quad c_2 = -1, \quad r_2 = -9$$
$$b_3 = -1, \quad a_3 = -2, \quad c_3 = 0, \quad r_3 = 4$$
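As a quick cross-check of these coefficients (a sketch using numpy, not part of the original slides), the same system can be solved densely:

```python
import numpy as np

# Tridiagonal system of the example, assembled as a dense matrix
A = np.array([[ 1.0, -4.0,  0.0],
              [-2.0, -1.0, -1.0],
              [ 0.0, -2.0, -1.0]])
r = np.array([17.0, -9.0, 4.0])

print(np.linalg.solve(A, r))   # expected: [ 5. -3.  2.]
```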
EXAMPLE

• We obtain the following equalities:

$$U_{1,1} = b_1 = 1$$

For k = 2:
$$L_{2,1} = \frac{a_2}{U_{1,1}} = \frac{-2}{1} = -2, \qquad U_{1,2} = c_1 = -4$$
$$U_{2,2} = b_2 - L_{2,1}U_{1,2} = -1 - (-2)(-4) = -9$$

For k = 3:
$$L_{3,2} = \frac{a_3}{U_{2,2}} = \frac{-2}{-9} = \frac{2}{9}, \qquad U_{2,3} = c_2 = -1$$
$$U_{3,3} = b_3 - L_{3,2}U_{2,3} = -1 - \left(\tfrac{2}{9}\right)(-1) = -\frac{7}{9}$$
EXAMPLE

• Now that L and U are known, Ld = r is solved by forward (progressive) substitution:

$$\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & \tfrac{2}{9} & 1 \end{pmatrix}
\begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix} =
\begin{pmatrix} 17 \\ -9 \\ 4 \end{pmatrix}$$

$$d_1 = r_1 = 17$$
For k = 2:
$$d_2 = r_2 - L_{2,1}d_1 = -9 - (-2)(17) = 25$$
For k = 3:
$$d_3 = r_3 - L_{3,2}d_2 = 4 - \left(\tfrac{2}{9}\right)(25) = -\frac{14}{9}$$
EXAMPLE

• Finally, Ux = d is solved by backward (regressive) substitution:

$$\begin{pmatrix} 1 & -4 & 0 \\ 0 & -9 & -1 \\ 0 & 0 & -\tfrac{7}{9} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 17 \\ 25 \\ -\tfrac{14}{9} \end{pmatrix}$$

$$x_3 = \frac{d_3}{U_{3,3}} = \frac{-14/9}{-7/9} = 2$$
For k = n - 1 = 2:
$$x_2 = \frac{d_2 - U_{2,3}x_3}{U_{2,2}} = \frac{25 - (-1)(2)}{-9} = -3$$
For k = n - 2 = 1:
$$x_1 = \frac{d_1 - U_{1,2}x_2}{U_{1,1}} = \frac{17 - (-4)(-3)}{1} = 5$$
EXAMPLE

• So the solution vector is:

$$x = \begin{pmatrix} 5 \\ -3 \\ 2 \end{pmatrix}$$
CHOLESKY METHODS

If A is only positive semi-definite, the diagonal elements of L can only be said to be non-negative.

The Cholesky factorization can be represented symbolically by:

$$
\underbrace{\begin{pmatrix}
L_{11} & & & \\
L_{21} & L_{22} & & \\
\vdots & \vdots & \ddots & \\
L_{n,1} & L_{n,2} & \cdots & L_{n,n}
\end{pmatrix}}_{L}
\underbrace{\begin{pmatrix}
L_{11} & L_{21} & \cdots & L_{n,1} \\
 & L_{22} & \cdots & L_{n,2} \\
 & & \ddots & \vdots \\
 & & & L_{n,n}
\end{pmatrix}}_{L^{T}}
=
\underbrace{\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,n} \\
a_{21} & a_{22} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{pmatrix}}_{A},
\qquad A = LL^{T}
$$
CHOLESKY METHODS

The Cholesky factorization is the preferred numerical method for calculating the inverse and the determinant of a positive definite matrix (in particular, of a covariance matrix), as well as for simulating a multivariate normal random variable.
CHOLESKY METHODS

• From the product of the n-th row of L with the n-th column of $L^{T}$ we have:

$$L_{n,1}^2 + L_{n,2}^2 + \cdots + L_{n,n-2}^2 + L_{n,n-1}^2 + L_{n,n}^2 = a_{n,n}$$
$$L_{n,n}^2 = a_{n,n} - \left(L_{n,1}^2 + L_{n,2}^2 + \cdots + L_{n,n-1}^2\right) = a_{n,n} - \sum_{j=1}^{n-1} L_{n,j}^2$$
$$L_{n,n} = \sqrt{a_{n,n} - \sum_{j=1}^{n-1} L_{n,j}^2}$$
CHOLESKY METHODS

• Sweeping from k = 1 to n we have:

$$L_{k,k} = \sqrt{a_{k,k} - \sum_{j=1}^{k-1} L_{k,j}^2}$$
CHOLESKY METHODS

• On the other hand, if we multiply the n-th row of L by column (n-1) of $L^{T}$ we have:

$$L_{n,1}L_{n-1,1} + L_{n,2}L_{n-1,2} + \cdots + L_{n,n-2}L_{n-1,n-2} + L_{n,n-1}L_{n-1,n-1} = a_{n,n-1}$$
$$L_{n,n-1} = \frac{a_{n,n-1} - \sum_{j=1}^{n-2} L_{n,j}L_{n-1,j}}{L_{n-1,n-1}}$$
CHOLESKY METHODS

• Sweeping from k = 1 to n we have:

$$L_{k,i} = \frac{a_{k,i} - \sum_{j=1}^{i-1} L_{k,j}L_{i,j}}{L_{i,i}}, \qquad \text{where } 1 \le i \le k-1$$
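A compact Python sketch of these two recurrences (the function name `cholesky_lower` and the row-by-row loop order are my own choices):

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky factorization A = L L^T of a symmetric positive definite matrix A.

    Returns the lower triangular factor L with positive diagonal.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        # Off-diagonal entries: L_{k,i} = (a_{k,i} - sum_j L_{k,j} L_{i,j}) / L_{i,i}
        for i in range(k):
            L[k, i] = (A[k, i] - np.dot(L[k, :i], L[i, :i])) / L[i, i]
        # Diagonal entry: L_{k,k} = sqrt(a_{k,k} - sum_j L_{k,j}^2)
        L[k, k] = np.sqrt(A[k, k] - np.dot(L[k, :k], L[k, :k]))
    return L
```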
EXAMPLE

• Apply Cholesky to decompose the following symmetric matrix:

$$A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$

• For k = 1:

$$L_{1,1} = \sqrt{a_{1,1}} = \sqrt{2}$$
EXAMPLE

• For k = 2:

$$L_{2,1} = \frac{a_{2,1}}{L_{1,1}} = \frac{1}{\sqrt{2}} \quad (i = 1)$$
$$L_{2,2} = \sqrt{a_{2,2} - L_{2,1}^2} = \sqrt{2 - \left(\tfrac{1}{\sqrt{2}}\right)^2} = \sqrt{\tfrac{3}{2}}$$

• For k = 3:

$$L_{3,1} = \frac{a_{3,1}}{L_{1,1}} = \frac{1}{\sqrt{2}} \quad (i = 1)$$
$$L_{3,2} = \frac{a_{3,2} - L_{3,1}L_{2,1}}{L_{2,2}} = \frac{1 - \left(\tfrac{1}{\sqrt{2}}\right)\left(\tfrac{1}{\sqrt{2}}\right)}{\sqrt{3/2}} = \frac{1}{\sqrt{6}} \quad (i = 2)$$
$$L_{3,3} = \sqrt{a_{3,3} - L_{3,1}^2 - L_{3,2}^2} = \sqrt{2 - \tfrac{1}{2} - \tfrac{1}{6}} = \frac{2}{\sqrt{3}}$$
EXAMPLE

• Finally, the decomposition found is:

$$L = \begin{pmatrix} \sqrt{2} & 0 & 0 \\ \tfrac{1}{\sqrt{2}} & \sqrt{\tfrac{3}{2}} & 0 \\ \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{6}} & \tfrac{2}{\sqrt{3}} \end{pmatrix}$$
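A quick numerical check of this result (a numpy sketch, not part of the original slides):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

L = np.array([[np.sqrt(2),    0.0,           0.0],
              [1/np.sqrt(2),  np.sqrt(3/2),  0.0],
              [1/np.sqrt(2),  1/np.sqrt(6),  2/np.sqrt(3)]])

print(np.allclose(L @ L.T, A))                # True: L L^T reproduces A
print(np.allclose(np.linalg.cholesky(A), L))  # True: matches numpy's lower Cholesky factor
```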
