
CE 341/441 - Lecture 3 - Fall 2004

LECTURE 3
DIRECT SOLUTIONS TO LINEAR ALGEBRAIC SYSTEMS - CONTINUED
Ill-conditioning of Matrices
There is no clear-cut or precise definition of an ill-conditioned matrix.
Effects of ill-conditioning
Roundoff error accrues in the calculations
Can potentially result in very inaccurate solutions
Small variations in the matrix coefficients cause large variations in the solution
Detection of ill-conditioning in a matrix
An inaccurate solution X_poor can satisfy an ill-conditioned system quite well!
Apply back substitution to check for ill-conditioning:
Solve AX = B through Gauss or another direct method → X_poor
Substitute back: A X_poor → B_poor
Comparing, we find that B_poor ≈ B

Back substitution is not a good detection technique.


The effects of ill-conditioning are very subtle!
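To make this concrete, here is a minimal Python/NumPy sketch (not from the original notes; the Hilbert matrix is used as a stock ill-conditioned example):

```python
import numpy as np

# The 12x12 Hilbert matrix is a classic ill-conditioned example.
N = 12
A = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])
X_true = np.ones(N)
B = A @ X_true

X_poor = np.linalg.solve(A, B)   # direct solution, polluted by roundoff
B_poor = A @ X_poor              # substitute back into the equations

print("max error in X:", np.max(np.abs(X_poor - X_true)))  # can be many orders larger
print("max error in B:", np.max(np.abs(B_poor - B)))        # near machine precision
```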
Examine the inverse of matrix A
If there are elements of A⁻¹ which are many orders of magnitude larger than those of the original matrix A, then A is probably ill-conditioned.
It is always best to normalize the rows of the original matrix such that the maximum
magnitude is of order 1
Evaluate A⁻¹ using the same method with which you are solving the system of equations. Now compute A⁻¹A and compare the result to I. If there is a significant deviation, serious roundoff is present!
Compute (A⁻¹)⁻¹ using the same method with which you are solving the system of equations and compare it to A. This is a more severe test of roundoff since roundoff is accumulated both in the original inversion and in the re-inversion.
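A sketch of these inverse-based checks in Python/NumPy, with NumPy's routines standing in for "the same method with which you are solving the system":

```python
import numpy as np

N = 12
A = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])  # ill-conditioned

A_inv = np.linalg.inv(A)
# Check 1: are elements of A_inv many orders of magnitude larger than those of A?
print("magnitude ratio:", np.max(np.abs(A_inv)) / np.max(np.abs(A)))

# Check 2: compare A_inv @ A to the identity matrix.
print("deviation from I:", np.max(np.abs(A_inv @ A - np.eye(N))))

# Check 3: re-invert and compare (A^-1)^-1 to A, a more severe roundoff test.
print("re-inversion error:", np.max(np.abs(np.linalg.inv(A_inv) - A)))
```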


We can also evaluate ill-conditioning by examining the normalized determinant. The matrix may be ill-conditioned when:

$$\frac{\det A}{\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N} a_{ij}^2}} \ll 1$$

where

$$\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N} a_{ij}^2} = \text{the Euclidean norm of } A$$

If the matrix A is diagonally dominant, i.e. the absolute value of the diagonal term is greater than or equal to the sum of the absolute values of the off-diagonal terms for each row, then the matrix is not ill-conditioned:

$$|a_{ii}| \geq \sum_{\substack{j=1 \\ j \neq i}}^{N} |a_{ij}| \qquad i = 1, 2, \ldots, N$$
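Both tests are straightforward to code; a minimal sketch (function names are illustrative, not from the notes):

```python
import numpy as np

def normalized_determinant(A):
    """det A divided by the Euclidean norm of A; a value << 1 suggests
    possible ill-conditioning."""
    return np.linalg.det(A) / np.sqrt(np.sum(A**2))

def is_diagonally_dominant(A):
    """True if |a_ii| >= sum over j != i of |a_ij| for every row i."""
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag >= off_diag))
```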


Effects of ill-conditioning are most serious in large dense matrices (especially those obtained in problems such as curve fitting by least squares)
Sparse banded matrices which result from Finite Difference and Finite Element methods are typically much better conditioned (i.e. one can solve fairly large sets of equations without excessive roundoff error problems)
Ways to overcome ill-conditioning
Make sure you pivot!
Use a large word size (i.e. use double precision)
Can use error correction schemes to improve the accuracy of the answers
Use iterative methods


Factor Method (Cholesky Method)


Problem with Gauss elimination
Right hand side load vector, B , must be available at the time of matrix triangulation
If B is not available during the triangulation process, the entire triangulation process
must be repeated!
Procedure is not well suited for solving problems in which B changes
AX = B_1   →   O(N³) + O(N²) steps
AX = B_2   →   O(N³) + O(N²) steps
⋮
AX = B_R   →   O(N³) + O(N²) steps

Using Gauss elimination, O(N³R) operations are required, where N = size of the system of equations and R = number of different load vectors which must be solved for.
The concept of the factor method is to facilitate the solution of multiple right hand sides without having to go through a re-triangulation process for each B_r.

Factorization step
Given A , find P and Q such that
A = PQ

where A, P and Q are all N × N matrices


We note that |A| ≠ 0 ⇒ |P| |Q| ≠ 0, and therefore neither P nor Q can be singular.

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix} \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{bmatrix}$$

A full P and a full Q would give 2N² unknowns, but A supplies only N² equations: we can only have N² unknowns!
Define P as lower triangular
Define Q as upper triangular
Now we have N² + N unknowns


Reduce the number of unknowns by selecting either

p_ii = 1,   i = 1, ..., N   (Doolittle Method)

or

q_ii = 1,   i = 1, ..., N   (Crout Method)

Now we only have N² unknowns! We can solve for all unknown elements of P and Q by proceeding from left to right and top to bottom.
For the Doolittle choice:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ p_{21} & 1 & 0 \\ p_{31} & p_{32} & 1 \end{bmatrix} \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ 0 & q_{22} & q_{23} \\ 0 & 0 & q_{33} \end{bmatrix}$$

Multiplying out the right hand side:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ p_{21}q_{11} & p_{21}q_{12} + q_{22} & p_{21}q_{13} + q_{23} \\ p_{31}q_{11} & p_{31}q_{12} + p_{32}q_{22} & p_{31}q_{13} + p_{32}q_{23} + q_{33} \end{bmatrix}$$

Factorization proceeds from left to right and then top to bottom; at each step the equation contains exactly one unknown not yet computed (shown in parentheses below), with all other values already solved in earlier steps:
a₁₁ = q₁₁                              (solve for q₁₁)
a₁₂ = q₁₂                              (solve for q₁₂)
a₁₃ = q₁₃                              (solve for q₁₃)
a₂₁ = p₂₁ q₁₁                          (solve for p₂₁)
a₂₂ = p₂₁ q₁₂ + q₂₂                    (solve for q₂₂)
a₂₃ = p₂₁ q₁₃ + q₂₃                    (solve for q₂₃)
a₃₁ = p₃₁ q₁₁                          (solve for p₃₁)
a₃₂ = p₃₁ q₁₂ + p₃₂ q₂₂                (solve for p₃₂)
a₃₃ = p₃₁ q₁₃ + p₃₂ q₂₃ + q₃₃          (solve for q₃₃)
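The same left-to-right, top-to-bottom sweep generalizes directly to an N × N matrix. A minimal Python sketch of the Doolittle variant (no pivoting, so it assumes no zero pivot appears on the diagonal of U):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle factorization (p_ii = 1): A = L U with L unit lower
    triangular and U upper triangular. Sketch only: no pivoting."""
    N = A.shape[0]
    L = np.eye(N)
    U = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):          # row i of U, left to right
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for k in range(i + 1, N):      # column i of L, top to bottom
            L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
    return L, U
```

For any square A with nonzero pivots, L @ U reproduces A to within roundoff.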

We can compute the elements of P and Q and store them directly into the old A matrix as the procedure progresses.

A = PQ

where
P = lower triangular matrix → rename L
Q = upper triangular matrix → rename U

Therefore, renaming these matrices, we note that A has been factored as

A = LU


Now considering the equation to be solved


AX = B

However, A = LU where L and U are known, therefore:

(LU)X = B

Forward and backward substitution procedures will then yield the solution. Changing the order in which the product is formed:

L(UX) = B

Now let
Y = UX

Hence we have two systems of simultaneous equations


LY = B
UX = Y

Apply a forward substitution sweep to solve for Y for the system of equations
LY = B

Apply a backward substitution sweep to solve for X for the system of equations
UX = Y
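A minimal Python sketch of the two sweeps (assuming L and U come from the factorization step; names are illustrative):

```python
import numpy as np

def forward_substitution(L, B):
    """Solve the lower triangular system LY = B, top row first."""
    N = len(B)
    Y = np.zeros(N)
    for i in range(N):
        Y[i] = (B[i] - L[i, :i] @ Y[:i]) / L[i, i]
    return Y

def backward_substitution(U, Y):
    """Solve the upper triangular system UX = Y, bottom row first."""
    N = len(Y)
    X = np.zeros(N)
    for i in range(N - 1, -1, -1):
        X[i] = (Y[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return X

# Combined solve, given A = LU:
# X = backward_substitution(U, forward_substitution(L, B))
```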

Notes on Factorization Methods


Procedure
Perform the factorization by solving for L and U
Perform the sequential forward and backward substitution procedures to solve for Y
and X
The factor method is very similar to Gauss elimination although the order in which the
operations are carried out is somewhat different.
Number of operations:
O(N³) for LU decomposition (same as triangulation for Gauss)
O(N²) for forward/backward substitution (same as the backward sweep for Gauss)


Advantages of LU factorization over Gauss Elimination


Can solve for any load vector B at any time with O(N²) operations (other than triangulation, which is done only once with O(N³) operations)
Generally has somewhat smaller roundoff error
Example comparing costs
If we are solving R systems of N × N equations in which the matrix A stays the same and only the vector B changes, compare the overall costs for Gauss elimination and LU factorization.
Gauss Elimination costs:
Triangulation Cost = R [O(N³)]
Back Substitution Cost = R [O(N²)]
Total Cost = R [O(N³) + O(N²)]
Total Cost for Large N ≈ R · O(N³)


LU factorization costs:
LU Factorization Cost = O(N³) (performed only once)
Back/Forward Substitution Cost = R [O(N²)]
Total Cost = [O(N³) + R · O(N²)]
Total Cost for R ≫ N ≈ R · O(N²)

Considering some typical values for N and R:

N        R        Gauss Elim.    LU Factorization   Ratio of Costs
                  O(R·N³)        O(R·N²)            O(N)
1,000    5,000    5×10¹²         5×10⁹              1,000
10⁵      10⁶      10²¹           10¹⁶               100,000

We can also implement LU factorization (decomposition) in banded mode, and the savings compared to banded Gauss elimination would be O(M) (where M = bandwidth).
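This factor-once, solve-many pattern is exactly what standard library routines expose. A sketch using SciPy's lu_factor/lu_solve (library calls, not the hand-coded procedure above; the test matrix is an assumption for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N, R = 500, 1000
A = rng.random((N, N)) + N * np.eye(N)   # a well-conditioned test matrix

lu, piv = lu_factor(A)                   # O(N^3), performed once
for _ in range(R):                       # R different load vectors
    b = rng.random(N)
    x = lu_solve((lu, piv), b)           # O(N^2) per load vector
```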


Other Factorization Methods for Symmetrical Matrices


Solve AX = B assuming that A is symmetrical, i.e.

Aᵀ = A   or   aᵢⱼ = aⱼᵢ

Cholesky Square Root Method


Requires that A is symmetrical (Aᵀ = A) and positive definite (Uᵀ A U > 0 for any nonzero vector U).

The first step is to decompose the matrix A:

A = L Lᵀ   or   A = Uᵀ U

Note that the diagonal terms of L or U do not equal unity.


We note that the symmetric A contains 6 independent entries and that L contains 6 unknowns, and therefore we can perform the factorization:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \qquad L = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}$$

Substituting the factored form of the matrix and changing the order in which the products are taken:

L(Lᵀ X) = B

Let Lᵀ X = Y and substitute. Now sequentially solve

LY = B   by forward substitution
Lᵀ X = Y   by backward substitution
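A minimal Python sketch of the square-root decomposition itself for a general N × N symmetric positive definite matrix (illustrative, with no safeguards):

```python
import numpy as np

def cholesky_decompose(A):
    """Factor symmetric positive definite A as A = L L^T, proceeding
    row by row. Sketch only: no check that the square-root argument
    stays positive."""
    N = A.shape[0]
    L = np.zeros((N, N))
    for i in range(N):
        for j in range(i):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L
```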

LDLᵀ Method
Decompose A = L D Lᵀ
where L = lower triangular matrix
where D = diagonal matrix
Set the diagonal terms of L to unity.
Solving for the elements of L and D:

A = L D Lᵀ

Substituting and changing the order in which the products are formed:

L(D Lᵀ X) = B

Now let

D Lᵀ X = Y


Now sequentially solve

LY = B   by forward substitution
Lᵀ X = D⁻¹ Y   by backward substitution

Note that the elements of D⁻¹ are simply the reciprocals of the diagonal terms of D, so D⁻¹ is easily computed.
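A minimal Python sketch of the LDLᵀ factorization, storing D as a vector of its diagonal terms (illustrative; assumes no dᵢᵢ vanishes):

```python
import numpy as np

def ldlt_decompose(A):
    """Factor symmetric A as A = L D L^T with unit diagonal on L.
    D is returned as the 1-D array d of its diagonal terms."""
    N = A.shape[0]
    L = np.eye(N)
    d = np.zeros(N)
    for i in range(N):
        d[i] = A[i, i] - (L[i, :i] ** 2) @ d[:i]
        for k in range(i + 1, N):
            L[k, i] = (A[k, i] - (L[k, :i] * L[i, :i]) @ d[:i]) / d[i]
    return L, d
```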


Computation of the Determinant for Factorization Methods


Notes:
If

A = LU

then

|A| = |L| |U|

The determinant of a triangularized matrix equals the product of the diagonal terms. In the case of LU factorization:

$$|A| = \prod_{i=1}^{N} l_{ii} \quad \text{(Crout)}$$

$$|A| = \prod_{i=1}^{N} u_{ii} \quad \text{(Doolittle)}$$

In the case of LDLᵀ decomposition:

$$|A| = |D| = \prod_{i=1}^{N} d_{ii}$$

For Gauss Elimination, simply keep a running product of the pivot elements.
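Once the factors are known, the determinant costs only O(N) operations; a sketch (the inputs refer to the factor routines sketched earlier):

```python
import numpy as np

def det_from_doolittle(U):
    """|A| = product of the u_ii (Doolittle: l_ii = 1)."""
    return np.prod(np.diag(U))

def det_from_ldlt(d):
    """|A| = |D| = product of the d_ii (d = diagonal of D as a vector)."""
    return np.prod(d)
```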