
LU Decomposition

CS 330
September 24, 2008
1 LU Decomposition
We can write an N × N matrix A as the product of a lower triangular matrix L and an upper triangular
matrix U as follows (case N = 4):

    LU = A                                                                                    (1)

    \begin{pmatrix}
        1         & 0         & 0         & 0 \\
        \ell_{10} & 1         & 0         & 0 \\
        \ell_{20} & \ell_{21} & 1         & 0 \\
        \ell_{30} & \ell_{31} & \ell_{32} & 1
    \end{pmatrix}
    \begin{pmatrix}
        u_{00} & u_{01} & u_{02} & u_{03} \\
        0      & u_{11} & u_{12} & u_{13} \\
        0      & 0      & u_{22} & u_{23} \\
        0      & 0      & 0      & u_{33}
    \end{pmatrix}
    =
    \begin{pmatrix}
        a_{00} & a_{01} & a_{02} & a_{03} \\
        a_{10} & a_{11} & a_{12} & a_{13} \\
        a_{20} & a_{21} & a_{22} & a_{23} \\
        a_{30} & a_{31} & a_{32} & a_{33}
    \end{pmatrix}                                                                             (2)
We then generate N^2 equations for the N^2 unknowns and order these equations according to the columns
of A, underlining the unknown to solve for as we progress:

    \underline{u_{00}} = a_{00}                                              column j = 0    (3)
    \underline{\ell_{10}}\, u_{00} = a_{10}                                                   (4)
    \underline{\ell_{20}}\, u_{00} = a_{20}                                                   (5)
    \underline{\ell_{30}}\, u_{00} = a_{30}                                                   (6)

    \underline{u_{01}} = a_{01}                                              column j = 1    (7)
    \ell_{10} u_{01} + \underline{u_{11}} = a_{11}                                            (8)
    \ell_{20} u_{01} + \underline{\ell_{21}}\, u_{11} = a_{21}                                (9)
    \ell_{30} u_{01} + \underline{\ell_{31}}\, u_{11} = a_{31}                                (10)

    \underline{u_{02}} = a_{02}                                              column j = 2    (11)
    \ell_{10} u_{02} + \underline{u_{12}} = a_{12}                                            (12)
    \ell_{20} u_{02} + \ell_{21} u_{12} + \underline{u_{22}} = a_{22}                         (13)
    \ell_{30} u_{02} + \ell_{31} u_{12} + \underline{\ell_{32}}\, u_{22} = a_{32}             (14)

    \underline{u_{03}} = a_{03}                                              column j = 3    (15)
    \ell_{10} u_{03} + \underline{u_{13}} = a_{13}                                            (16)
    \ell_{20} u_{03} + \ell_{21} u_{13} + \underline{u_{23}} = a_{23}                         (17)
    \ell_{30} u_{03} + \ell_{31} u_{13} + \ell_{32} u_{23} + \underline{u_{33}} = a_{33}      (18)
This reveals the direct method called Crout's algorithm (or Doolittle factorization) for solving for the unknowns:

    1   for j = 0 ... N-1
    2       for i = 0 ... j
    3           u_{ij} = a_{ij} - \sum_{k=0}^{i-1} \ell_{ik} u_{kj}
    4       for i = j+1 ... N-1
    5           \ell_{ij} = (1/u_{jj}) ( a_{ij} - \sum_{k=0}^{j-1} \ell_{ik} u_{kj} )
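As a concrete illustration, here is a minimal C sketch of lines 1-5 above. The function name lu_doolittle, the flat row-major arrays, and the IDX macro are illustrative choices, not part of the notes; with no pivoting it divides by a (near-)zero u_{jj} whenever one appears, which is exactly the problem Section 1.1 addresses.

    /* Sketch of lines 1-5: factor A into separate L (unit diagonal) and U.
     * No pivoting: fails if a pivot u_jj is (near) zero. */
    #include <stddef.h>

    #define IDX(i, j, n) ((i) * (n) + (j))   /* row-major indexing helper */

    void lu_doolittle(const double *A, double *L, double *U, size_t N)
    {
        for (size_t i = 0; i < N; i++)       /* start from L = I, U = 0 */
            for (size_t j = 0; j < N; j++) {
                L[IDX(i, j, N)] = (i == j) ? 1.0 : 0.0;
                U[IDX(i, j, N)] = 0.0;
            }

        for (size_t j = 0; j < N; j++) {
            for (size_t i = 0; i <= j; i++) {          /* line 3: u_ij, i <= j */
                double sum = A[IDX(i, j, N)];
                for (size_t k = 0; k < i; k++)
                    sum -= L[IDX(i, k, N)] * U[IDX(k, j, N)];
                U[IDX(i, j, N)] = sum;
            }
            for (size_t i = j + 1; i < N; i++) {       /* line 5: l_ij, i > j */
                double sum = A[IDX(i, j, N)];
                for (size_t k = 0; k < j; k++)
                    sum -= L[IDX(i, k, N)] * U[IDX(k, j, N)];
                L[IDX(i, j, N)] = sum / U[IDX(j, j, N)];
            }
        }
    }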
We can store L and U in a single matrix, since we do not need to explicitly store the zeroes (nor the
implicit 1's on the diagonal of L):

    Combined LU matrix =
    \begin{pmatrix}
        u_{00}    & u_{01}    & u_{02}    & u_{03} \\
        \ell_{10} & u_{11}    & u_{12}    & u_{13} \\
        \ell_{20} & \ell_{21} & u_{22}    & u_{23} \\
        \ell_{30} & \ell_{31} & \ell_{32} & u_{33}
    \end{pmatrix}.                                                                            (19)
Furthermore, each a_{ij} is referenced exactly once as we solve for each \ell_{ij} or u_{ij}, so we can replace A in place
as we go. Crout's in-place modification of the algorithm is as follows:

    1   for j = 0 ... N-1
    2       for i = 0 ... j
    3           a_{ij} = a_{ij} - \sum_{k=0}^{i-1} a_{ik} a_{kj}
    4       for i = j+1 ... N-1
    5           a_{ij} = (1/a_{jj}) ( a_{ij} - \sum_{k=0}^{j-1} a_{ik} a_{kj} )
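Here is a similar C sketch of the in-place variant, overwriting A with the combined LU matrix of Equation 19. As before, the name lu_inplace and the row-major layout are assumptions, and there is still no protection against a zero pivot a_{jj}.

    /* Sketch of the in-place algorithm: A is overwritten with the combined
     * LU matrix.  No pivoting yet. */
    #include <stddef.h>

    void lu_inplace(double *A, size_t N)     /* A is N x N, row-major */
    {
        for (size_t j = 0; j < N; j++) {
            for (size_t i = 0; i <= j; i++) {           /* line 3 */
                double sum = A[i * N + j];
                for (size_t k = 0; k < i; k++)
                    sum -= A[i * N + k] * A[k * N + j];
                A[i * N + j] = sum;
            }
            for (size_t i = j + 1; i < N; i++) {        /* line 5 */
                double sum = A[i * N + j];
                for (size_t k = 0; k < j; k++)
                    sum -= A[i * N + k] * A[k * N + j];
                A[i * N + j] = sum / A[j * N + j];
            }
        }
    }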
1.1 Partial Pivoting
Line 5 of Crout's algorithm has a problem when u_{jj} ≈ 0. We can use partial pivoting (row swapping) to
avoid this situation as much as possible. We will actually store LU for a row-wise permutation of A and
record how A is permuted.

Line 3 computes the u_{ij} values on and above the diagonal (i ≤ j). Line 5 computes the \ell_{ij} values below
the diagonal (i > j). Note that the expression in parentheses on Line 5 is the same as the expression on
the right-hand side of Line 3 when i = j (i.e., on the diagonal). Therefore, we can put off the division by
u_{jj} on Line 5 and wait to see if one of the \ell_{ij}'s below the diagonal (before division) would make a better pivot value; if so,
we perform the row swap and go back and do the necessary divisions once the appropriate pivot value is in
place. The array mutate[0, ..., N-1] records row permutations (i.e., row i of LU equals row mutate[i] of
A). The sign of d depends on the parity of the number of row exchanges (it is used for computing |A|).
    0   mutate[] = {0, ..., N-1}                         // Initialize row permutation array (no row exchanges yet).
    1   d = +1                                           // Initialize row swap parity value.
    2   for j = 0 ... N-1                                // We replace A with LU column by column...
    3       for i = 0 ... j                              //   Compute a_{ij} ← u_{ij} on and above diagonal.
    4           a_{ij} = a_{ij} - \sum_{k=0}^{i-1} a_{ik} a_{kj}   // (Note: if i = 0 then the sum is 0.)
    5       p = |a_{jj}|                                 // p = initial pivot value
    6       n = j                                        // n = initial pivot row
    7       for i = j+1 ... N-1                          //   Compute a_{ij} ← \ell_{ij} below diagonal.
    8           a_{ij} = a_{ij} - \sum_{k=0}^{j-1} a_{ik} a_{kj}
    9           if |a_{ij}| > p                          // If better pivot found...
    10              p = |a_{ij}|                         //   ...then record new pivot.
    11              n = i
    12      if p = 0 abort!                              // Singular matrix! If p ≈ 0 we may have problems.
    13      if n ≠ j                                     // If best pivot off diagonal...
    14          swap rows n and j of A                   //   ...(Note: previous pivots unaltered)
    15          swap mutate[n] and mutate[j]             //   ...record row exchange
    16          d = -d                                   //   ...flip parity
    17      for i = j+1 ... N-1                          // Perform divisions below diagonal.
    18          a_{ij} = a_{ij} / a_{jj}
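Below is one possible C realization of lines 0-18; it is a sketch under the assumptions used earlier (row-major storage, hypothetical names lu_decompose, mutate, d). The only liberty taken is on line 12: instead of testing p = 0 exactly, it rejects pivots below a small, arbitrary threshold.

    /* Sketch of lines 0-18: overwrite A with the combined LU matrix of a
     * row-wise permutation of A, recording the permutation in mutate[]
     * and the swap parity in *d.  Returns 0 on success, -1 if singular. */
    #include <math.h>
    #include <stddef.h>

    int lu_decompose(double *A, size_t N, size_t *mutate, double *d)
    {
        for (size_t i = 0; i < N; i++) mutate[i] = i;   /* line 0 */
        *d = +1.0;                                      /* line 1 */

        for (size_t j = 0; j < N; j++) {                /* line 2 */
            for (size_t i = 0; i <= j; i++) {           /* lines 3-4 */
                double sum = A[i * N + j];
                for (size_t k = 0; k < i; k++)
                    sum -= A[i * N + k] * A[k * N + j];
                A[i * N + j] = sum;
            }
            double p = fabs(A[j * N + j]);              /* line 5 */
            size_t n = j;                               /* line 6 */
            for (size_t i = j + 1; i < N; i++) {        /* lines 7-11 */
                double sum = A[i * N + j];
                for (size_t k = 0; k < j; k++)
                    sum -= A[i * N + k] * A[k * N + j];
                A[i * N + j] = sum;                     /* division deferred */
                if (fabs(sum) > p) { p = fabs(sum); n = i; }
            }
            if (p < 1e-14)                              /* line 12 (threshold is arbitrary) */
                return -1;                              /* singular (or nearly so) */
            if (n != j) {                               /* lines 13-16 */
                for (size_t k = 0; k < N; k++) {        /* swap rows n and j of A */
                    double t = A[n * N + k];
                    A[n * N + k] = A[j * N + k];
                    A[j * N + k] = t;
                }
                size_t t = mutate[n]; mutate[n] = mutate[j]; mutate[j] = t;
                *d = -(*d);
            }
            for (size_t i = j + 1; i < N; i++)          /* lines 17-18 */
                A[i * N + j] /= A[j * N + j];
        }
        return 0;
    }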
2 Applications
2.1 Solving Ax = b for multiple right-hand sides
We can use the LU decomposition to solve multiple systems of the form Ax = b where A remains the
same but b changes. In fact, the b's do not need to be known ahead of time. Given A = LU we have
Ax = b (20)
(LU)x = L(Ux) = b. (21)
We first solve
Ly = b (22)
for y and then solve
Ux = y (23)
for x. Each triangular system can be easily solved. We solve Equation 22 via forward substitution (note that
we must first permute b to account for row exchanges):
    y_0 = b_{mutate[0]},
    y_i = b_{mutate[i]} - \sum_{j=0}^{i-1} \ell_{ij} y_j,           i = 1, ..., N-1.
Equation 23 is solved by back substitution:
    x_{N-1} = y_{N-1} / u_{N-1,N-1},
    x_i = (1/u_{ii}) ( y_i - \sum_{j=i+1}^{N-1} u_{ij} x_j ),       i = N-2, ..., 0.
Note that the solution x does not need to be un-permuted: its entries are the coefficients of a linear combination
of the columns of A, and partial pivoting performed only row exchanges, never column exchanges.
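Here is a hedged C sketch of the forward and back substitutions above, reusing the combined LU matrix and mutate[] array produced by the earlier lu_decompose sketch (all names are illustrative). Decompose once, then call this once per right-hand side b.

    /* Solve Ax = b given the combined LU matrix and the row permutation.
     * b is read through mutate[]; x comes back unpermuted. */
    #include <stddef.h>

    void lu_solve(const double *LU, size_t N, const size_t *mutate,
                  const double *b, double *x)
    {
        /* Forward substitution: Ly = b (L has a unit diagonal, no division). */
        for (size_t i = 0; i < N; i++) {
            double sum = b[mutate[i]];
            for (size_t j = 0; j < i; j++)
                sum -= LU[i * N + j] * x[j];    /* x[] holds y so far */
            x[i] = sum;
        }
        /* Back substitution: Ux = y. */
        for (size_t ii = N; ii-- > 0; ) {       /* ii = N-1, ..., 0 */
            double sum = x[ii];
            for (size_t j = ii + 1; j < N; j++)
                sum -= LU[ii * N + j] * x[j];
            x[ii] = sum / LU[ii * N + ii];
        }
    }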
Crout's algorithm requires O(N^3) multiplications to perform the LU decomposition. Forward and back
substitution take another O(N^2) multiplications for each right-hand side. Gaussian elimination and back-solving
require O(N^3) operations. Solving a linear system via LU decomposition requires about a third of the operations
needed via Gaussian elimination. In addition, we only need another O(N^2) operations to solve using a
different b vector!
2.1.1 Iterative Improvement
Given that x is the exact solution to Equation 20, the above procedure yields only an approximate solution
\hat{x} = x + \delta x due to limited-precision arithmetic. If we multiply A by our approximation we have

    A \hat{x} = \hat{b}                                                                       (24)
    A (x + \delta x) = b + \delta b                                                           (25)
    A \, \delta x = \delta b.                                                                 (26)

Since we know b and we can compute \hat{b} = A \hat{x}, we can determine \delta b as follows:

    \delta b = A \hat{x} - b                                                                  (27)

(note that the right-hand side should be computed with higher precision). Then we can solve Equation 26 for
\delta x (using our LU decomposition). Our refined solution is then

    x = \hat{x} - \delta x.                                                                   (28)

Lather, rinse, repeat as often as desired. Since we have already performed O(N^3) operations computing \hat{x},
why not spend another O(N^2) operations improving the solution?
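A C sketch of one improvement step follows, under the same assumptions as the earlier sketches (lu_decompose/lu_solve, row-major arrays). It also assumes a copy of the original A is still available, since the LU array no longer holds it, and uses long double accumulation as a stand-in for the "higher precision" mentioned above.

    /* One step of iterative improvement: x is the current approximation
     * and is updated in place via Equations 26-28. */
    #include <stddef.h>
    #include <stdlib.h>

    void lu_solve(const double *LU, size_t N, const size_t *mutate,
                  const double *b, double *x);   /* from the sketch above */

    void lu_improve(const double *A, const double *LU, size_t N,
                    const size_t *mutate, const double *b, double *x)
    {
        double *r  = malloc(N * sizeof *r);      /* residual, delta-b  */
        double *dx = malloc(N * sizeof *dx);     /* correction, delta-x */
        if (!r || !dx) { free(r); free(dx); return; }

        /* delta-b = A*xhat - b, accumulated in extra precision (Eq. 27). */
        for (size_t i = 0; i < N; i++) {
            long double sum = -(long double)b[i];
            for (size_t j = 0; j < N; j++)
                sum += (long double)A[i * N + j] * x[j];
            r[i] = (double)sum;
        }
        lu_solve(LU, N, mutate, r, dx);          /* solve A*delta-x = delta-b (Eq. 26) */
        for (size_t i = 0; i < N; i++)
            x[i] -= dx[i];                       /* x = xhat - delta-x (Eq. 28) */

        free(r); free(dx);
    }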
2.2 Matrix Inversion
In this case, you have N right-hand sides:

    A X =
    \begin{pmatrix}
        1      & 0      & \cdots & 0      \\
        0      & 1      & \cdots & 0      \\
        \vdots & \vdots & \ddots & \vdots \\
        0      & 0      & \cdots & 1
    \end{pmatrix}                                                                             (29)
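One way to realize this with the earlier sketches: solve Ax = e_j for each column e_j of the identity and scatter the results into the columns of X = A^{-1}. The function name lu_invert and the scratch buffers are illustrative.

    /* Invert A from its LU decomposition by solving N systems, one per
     * column of the identity matrix. Xinv is N x N, row-major. */
    #include <stddef.h>
    #include <stdlib.h>

    void lu_solve(const double *LU, size_t N, const size_t *mutate,
                  const double *b, double *x);   /* from the sketch above */

    void lu_invert(const double *LU, size_t N, const size_t *mutate, double *Xinv)
    {
        double *e = calloc(N, sizeof *e);        /* unit vector e_j */
        double *x = malloc(N * sizeof *x);       /* one column of the inverse */
        if (!e || !x) { free(e); free(x); return; }

        for (size_t j = 0; j < N; j++) {
            e[j] = 1.0;
            lu_solve(LU, N, mutate, e, x);
            for (size_t i = 0; i < N; i++)
                Xinv[i * N + j] = x[i];          /* scatter into column j */
            e[j] = 0.0;
        }
        free(e); free(x);
    }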
2.3 Determinant
    |A| = \prod_{j=0}^{N-1} u_{jj}                                                            (30)

Note that we must scale the result by d (which is ±1) to account for row exchanges.
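As a sketch, the determinant follows directly from the diagonal of the combined LU matrix and the parity d returned by the decomposition (det L = 1 since L has a unit diagonal):

    /* Determinant via Equation 30, scaled by the row-swap parity d. */
    #include <stddef.h>

    double lu_determinant(const double *LU, size_t N, double d)
    {
        double det = d;                          /* d = +1 or -1 from lu_decompose */
        for (size_t j = 0; j < N; j++)
            det *= LU[j * N + j];                /* product of the u_jj's */
        return det;
    }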