
DIRECT METHODS FOR THE
SOLUTION OF LINEAR
EQUATION SYSTEMS
Lizeth Paola Barrero Riaño
Numerical Methods, Industrial University of Santander

BASIC FUNDAMENTALS
Symmetric Matrix
Transposes Matrix
Determinant
Upper Triangular Matrix
Lower Triangular Matrix
Banded Matrix
Augmented Matrix
Matrix Multiplication

Matrix
A matrix consists of a rectangular array of elements represented by a single symbol. [A] is the shorthand notation for the matrix and a_ij designates an individual element of the matrix.

A horizontal set of elements is called a row (i) and a vertical set is called a column (j).

          [ a11  a12  a13  ...  a1m ]
    [A] = [ a21  a22  a23  ...  a2m ]
          [  .    .    .          . ]
          [ an1  an2  an3  ...  anm ]

For example, a21, a22, ..., a2m form row 2, and a13, a23, ..., an3 form column 3.

Symmetric Matrix
It is a square matrix in which the elements are symmetric about the main diagonal.

    A_nn = [a_ij] is a symmetric matrix if a_ij = a_ji for all i, j.

Scalar, diagonal and identity matrices are symmetric matrices.

If A is a symmetric matrix, then:
a. The product A·A^t is defined and is a symmetric matrix.
b. The sum of symmetric matrices is a symmetric matrix.
c. The product of two symmetric matrices is a symmetric matrix if the matrices commute.
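These properties can be checked numerically. A minimal sketch, assuming NumPy is available (the matrices here are illustrative, not from the slides):

```python
import numpy as np

# An illustrative symmetric matrix: it equals its transpose
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 5.0, 4.0],
              [3.0, 4.0, 6.0]])

def is_symmetric(M):
    """A square matrix is symmetric when a_ij == a_ji for all i, j."""
    return M.shape[0] == M.shape[1] and np.array_equal(M, M.T)

print(is_symmetric(A))          # True
print(is_symmetric(A + A))      # True: the sum of symmetric matrices is symmetric
print(is_symmetric(np.eye(3)))  # True: the identity matrix is symmetric
```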

Transpose Matrix

Given any matrix A = (a_ij) of m×n order, the matrix B = (b_ij) of n×m order is the transpose of A if the rows of A are the columns of B. This operation is usually denoted by A^t:

    A_mn = [a_ij]    A^t_nm = [b_ij]  such that  b_ji = a_ij for all i, j

Example:

    A_23 = [ 1  2  2 ]        A^t_32 = [ 1  0 ]
           [ 0  4  3 ]                 [ 2  4 ]
                                       [ 2  3 ]

    B_12 = [ 2  9 ]           B^t_21 = [ 2 ]
                                       [ 9 ]

Properties:
a. (A^t)^t = A
b. (A + B)^t = A^t + B^t
c. (A·B)^t = B^t·A^t
d. (k·A)^t = k·A^t , for any scalar k
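The transpose properties can be verified on the example matrices above. A short sketch, assuming NumPy:

```python
import numpy as np

# The example matrices from the slide
A = np.array([[1, 2, 2],
              [0, 4, 3]])        # 2x3
B = np.array([[2, 9]])           # 1x2

print(A.T)   # 3x2: the rows of A become the columns of A^t
print(B.T)   # 2x1 column vector

# Property a: (A^t)^t = A
assert np.array_equal(A.T.T, A)
# Property d: (k.A)^t = k.A^t
assert np.array_equal((3 * A).T, 3 * A.T)
# Property c: (B.A)^t = A^t.B^t  (B is 1x2, A is 2x3, so B.A is defined)
assert np.array_equal((B @ A).T, A.T @ B.T)
```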

Determinant
Given a square matrix A of size n, its determinant is defined as the sum of the products of the elements of any chosen line of the matrix (row or column) by their corresponding cofactors.

Example:

To the matrix

        [ -2   4   5 ]
    A = [  6   7  -3 ]
        [  3   0   2 ]

applying the definition, if we choose the third row:

             | 4   5 |        | -2   5 |       | -2  4 |
    det(A) = 3|       | + 0·(-|        |) + 2·|        |
             | 7  -3 |        |  6  -3 |      |  6  7 |

           = 3(-12-35) + 0·(-(6-30)) + 2(-14-24) = -141 + 0 - 76 = -217
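The expansion above can be checked numerically. A sketch assuming NumPy; the minus signs on the entries -2 and -3, lost in the slide's rendering, are restored here because the cofactor arithmetic (-12-35, -(6-30), -14-24) determines them uniquely:

```python
import numpy as np

A = np.array([[-2.0, 4.0,  5.0],
              [ 6.0, 7.0, -3.0],
              [ 3.0, 0.0,  2.0]])

# Cofactor expansion along the third row, exactly as in the slide
det_row3 = (3 * (4 * (-3) - 5 * 7)
            + 0 * (-((-2) * (-3) - 5 * 6))
            + 2 * ((-2) * 7 - 4 * 6))
print(det_row3)                     # -217
print(round(np.linalg.det(A)))      # -217, matching the built-in determinant
```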

Determinant Properties
a. If a matrix has a line (row or column) of zeros, the determinant is zero.
b. If a matrix has two equal or proportional rows, the determinant is null.
c. If we permute two parallel lines of a square matrix, its determinant changes sign.
d. If we multiply all elements of a line of a determinant by a number, the determinant is multiplied by that number.
e. If to a line of a matrix we add another line multiplied by a number, the determinant does not change.
f. The determinant of a matrix is equal to the determinant of its transpose.
g. If A has an inverse matrix A^-1, it is verified that:

    det(A^-1) = 1 / det(A)

Upper and Lower Triangular Matrix

Upper Triangular Matrix
A_nn = [a_ij] is upper triangular if a_ij = 0 for i > j.
It is a square matrix in which all the elements under the main diagonal are zero.

Example:

        [ 2  0  2 ]
    A = [ 0  1  0 ]
        [ 0  0  3 ]

Lower Triangular Matrix
A_nn = [a_ij] is lower triangular if a_ij = 0 for i < j.
It is a square matrix in which all the elements above the main diagonal are zero.

Example:

        [ 2  0  0 ]
    D = [ 2  1  0 ]
        [ 1  1  1 ]

Banded Matrix
A band matrix is a sparse matrix whose nonzero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.

Examples:

Diagonal:
        [ 2  0  0  0 ]
    C = [ 0  1  0  0 ]
        [ 0  0  5  0 ]
        [ 0  0  0  7 ]

Tridiagonal:
        [ 4  0  0  0  0 ]
        [ 7  8  1  0  0 ]
    D = [ 0  0  5  2  0 ]
        [ 0  0  1  3  5 ]
        [ 0  0  0  3  4 ]

Pentadiagonal:
        [ 8  7  6  0  0 ]
        [ 9  3  0  2  0 ]
    M = [ 3  1  8  9 10 ]
        [ 0  0  3  5  8 ]
        [ 0  0  7  4  0 ]
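The bandwidth of a matrix (the largest distance of a nonzero entry from the main diagonal) classifies these examples: 0 for diagonal, 1 for tridiagonal, 2 for pentadiagonal. A small sketch assuming NumPy; the `bandwidth` helper is illustrative, not a library function:

```python
import numpy as np

def bandwidth(M):
    """Largest |i - j| over the nonzero entries of M: 0 for diagonal,
    1 for tridiagonal, 2 for pentadiagonal matrices."""
    rows, cols = np.nonzero(M)
    return int(np.max(np.abs(rows - cols)))

C = np.diag([2, 1, 5, 7])
D = np.array([[4, 0, 0, 0, 0],
              [7, 8, 1, 0, 0],
              [0, 0, 5, 2, 0],
              [0, 0, 1, 3, 5],
              [0, 0, 0, 3, 4]])
M = np.array([[8, 7, 6, 0,  0],
              [9, 3, 0, 2,  0],
              [3, 1, 8, 9, 10],
              [0, 0, 3, 5,  8],
              [0, 0, 7, 4,  0]])

print(bandwidth(C), bandwidth(D), bandwidth(M))   # 0 1 2
```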

Augmented Matrix
The extended or augmented matrix is the matrix formed by the coefficient matrix and the vector of independent terms, which are usually separated with a dotted line.

Example:

        [ 1  3  2 ]         [ 4 ]
    A = [ 2  0  1 ] ,   B = [ 3 ]
        [ 5  2  2 ]         [ 1 ]

The augmented matrix [A|B] is represented as follows:

            [ 1  3  2 ⋮ 4 ]
    [A|B] = [ 2  0  1 ⋮ 3 ]
            [ 5  2  2 ⋮ 1 ]

Matrix Multiplication

To define A·B it is necessary that the number of columns in the first matrix coincides with the number of rows in the second matrix. The order of the product is given by the number of rows in the first matrix by the number of columns in the second matrix. That is, if A is of m×n order and B is of n×p order, then C = A·B is of m×p order.

Given A_mn = [a_ij] and B_np = [b_ij], the product A·B is another matrix C_mp in which each element c_ij is the product of the i-th row of A by the j-th column of B, namely the element

    c_ij = Σ_{k=1}^{n} a_ik · b_kj

If

        [ a11 ... a1n ]          [ b11 ... b1p ]
    A = [  .   .   .  ]      B = [  .   .   .  ]
        [ am1 ... amn ]          [ bn1 ... bnp ]

then

          [ a11·b11 + ... + a1n·bn1   ...   a11·b1p + ... + a1n·bnp ]
    A·B = [            .              .                .            ]
          [ am1·b11 + ... + amn·bn1   ...   am1·b1p + ... + amn·bnp ]

Matrix Multiplication
Example:

        [ 1  3  5 ]         [ 1   3  4  0 ]
    A = [ 0  0  1 ]     B = [ 0  -8  9  1 ]
        [ 4  1  2 ]         [ 7   5  5  1 ]

                  [ 36   4  56  8 ]
    C = A · B  =  [  7   5   5  1 ]
                  [ 18  14  35  3 ]
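This product can be reproduced numerically. A sketch assuming NumPy; the -8 entry of B, lost in the slide's rendering, is restored because the stated result C (entries 4 and 14 in the second column) determines it:

```python
import numpy as np

A = np.array([[1, 3, 5],
              [0, 0, 1],
              [4, 1, 2]])
B = np.array([[1,  3, 4, 0],
              [0, -8, 9, 1],
              [7,  5, 5, 1]])

# c_ij = sum_k a_ik * b_kj; A is 3x3 and B is 3x4, so C is 3x4
C = A @ B
print(C)   # rows: [36 4 56 8], [7 5 5 1], [18 14 35 3]
```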

Solution of Linear Algebraic Equations

Linear algebra is one of the cornerstones of modern computational mathematics. Almost all numerical schemes, such as the finite element method and the finite difference method, are in fact techniques that transform, assemble, reduce, rearrange, and/or approximate the differential, integral, or other types of equations to systems of linear algebraic equations.

A system of linear algebraic equations can be expressed as

    a11·x1 + a12·x2 + ... + a1n·xn = b1
    a21·x1 + a22·x2 + ... + a2n·xn = b2
      .        .                .
    am1·x1 + am2·x2 + ... + amn·xn = bm

where a_ij and b_i are constants, i = 1, 2, ..., m, j = 1, 2, ..., n.

Solution of Linear Algebraic Equations

Or, in matrix form:

    A·X = B

In this part, we deal with the case of determining the values x1, x2, ..., xn that simultaneously satisfy a set of equations.

Solving a system with a coefficient matrix A_mxn is equivalent to finding the intersection point(s) of all m surfaces (lines) in an n-dimensional space. If all m surfaces happen to pass through a single point, the solution is unique. If the intersected part is a line or a surface, there are an infinite number of solutions, usually expressed by a particular solution added to a linear combination of typically n - m vectors. Otherwise, the solution does not exist.

Small Systems of Linear Equations

1. Graphical Method
2. Cramer's Rule
3. The Elimination of Unknowns

1. Graphical Method
When solving a system with two linear equations in two variables, we are looking for the point where the two lines cross. This can be determined by graphing each line on the same coordinate system and estimating the point of intersection.

When two straight lines are graphed, one of three possibilities may result:

Graphical Method
Case 1. Independent system: one solution point.
When two lines cross in exactly one point, the system is consistent and independent, and the solution is the one ordered pair where the two lines cross. The coordinates of this ordered pair can be estimated from the graph.

Graphical Method
Case 2. Inconsistent system: no solution and no intersection point.
This graph shows two distinct lines that are parallel. Since parallel lines never cross, there can be no intersection; that is, for a system of equations that graphs as parallel lines, there can be no solution. This is called an "inconsistent" system.

Graphical Method
Case 3. Dependent system: the solution is the whole line.
This graph appears to show only one line. Actually, it is the same line drawn twice. These "two" lines, really being the same line, "intersect" at every point along their length. This is called a "dependent" system.

Graphical Method
ADVANTAGES:
The graphical method is good because it clearly illustrates the principle involved.
DISADVANTAGES:
It does not always give us an exact solution. For instance, if the lines cross at a shallow angle it can be just about impossible to tell where they cross.
It cannot be used when we have more than two unknowns.

Graphical Method
Example
Solve the following system by graphing.
    2x - 3y = -2
    4x + y = 24
First, we must solve each equation for "y =", so we can graph easily:
    2x - 3y = -2
    2x + 2 = 3y
    (2/3)x + (2/3) = y

    4x + y = 24
    y = -4x + 24

Graphical Method
The second line will be easy to graph using just the slope and intercept, but a T-chart is necessary for the first line.

     x     y = (2/3)x + (2/3)          y = -4x + 24
    -4    -8/3 + 2/3 = -6/3 = -2       16 + 24 = 40
    -1    -2/3 + 2/3 = 0                4 + 24 = 28
     2     4/3 + 2/3 = 6/3 = 2         -8 + 24 = 16
     5    10/3 + 2/3 = 12/3 = 4       -20 + 24 = 4
     8    16/3 + 2/3 = 18/3 = 6       -32 + 24 = -8

Solution: (x, y) = (5, 4)
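The graphical estimate can be confirmed by solving the system numerically. A sketch assuming NumPy:

```python
import numpy as np

# The same system, solved numerically instead of by graphing:
#   2x - 3y = -2
#   4x +  y = 24
A = np.array([[2.0, -3.0],
              [4.0,  1.0]])
b = np.array([-2.0, 24.0])

x, y = np.linalg.solve(A, b)
print(x, y)   # approximately (5, 4), the intersection point read off the graph
```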

Cramer's Rule
Cramer's rule is another technique that is best suited to small numbers of equations. This rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants, with denominator D, and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, ..., bn.

For example, x1 would be computed as

          | b1  a12  a13 |
          | b2  a22  a23 |
          | b3  a32  a33 |
    x1 = -----------------
                 D

Example

Use Cramer's Rule to solve the system:
    5x + 4y = 2
    6x + 5y = 1

Solution:
We begin by setting up and evaluating the three determinants:

         | 5  4 |
    D  = |      | = 5·5 - 6·4 = 25 - 24 = 1
         | 6  5 |

         | 2  4 |
    Dx = |      | = 2·5 - 1·4 = 10 - 4 = 6
         | 1  5 |

         | 5  2 |
    Dy = |      | = 5·1 - 6·2 = 5 - 12 = -7
         | 6  1 |

Example

From Cramer's Rule, we have:

    x = Dx/D = 6/1 = 6    and    y = Dy/D = -7/1 = -7

The solution is (6, -7).

Cramer's Rule does not apply if D = 0. When D = 0, the system is either inconsistent or dependent. Another method must be used to solve it.
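The rule for a 2x2 system, including the D = 0 caveat, can be sketched as a short function; `cramer_2x2` is an illustrative helper, not a library routine:

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by Cramer's rule; returns None when D = 0,
    since the rule does not apply to inconsistent or dependent systems."""
    D = a11 * a22 - a21 * a12
    if D == 0:
        return None
    Dx = b1 * a22 - b2 * a12    # replace the x-column of D by the constants
    Dy = a11 * b2 - a21 * b1    # replace the y-column of D by the constants
    return Dx / D, Dy / D

# 5x + 4y = 2 and 6x + 5y = 1  ->  D = 1, Dx = 6, Dy = -7
print(cramer_2x2(5, 4, 6, 5, 2, 1))   # (6.0, -7.0)
```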

The Elimination of Unknowns

The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other unknown.

The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations:

    a11·x1 + a12·x2 = b1        (1)
    a21·x1 + a22·x2 = b2        (2)

For example, these equations might be multiplied by a21 and a11 to give

    a11·a21·x1 + a12·a21·x2 = b1·a21    (3)
    a21·a11·x1 + a22·a11·x2 = b2·a11    (4)

The Elimination of Unknowns
Subtracting Eq. 3 from Eq. 4 will, therefore, eliminate the x1 term from the equations to yield

    a22·a11·x2 - a12·a21·x2 = b2·a11 - b1·a21

which can be solved for x2:

    x2 = (a11·b2 - a21·b1) / (a11·a22 - a12·a21)

This equation can then be substituted into Eq. 1, which can be solved for x1:

    x1 = (a22·b1 - a12·b2) / (a11·a22 - a12·a21)

The Elimination of Unknowns
Notice that these equations follow directly from Cramer's rule, which states

         | b1  a12 |
         | b2  a22 |     b1·a22 - a12·b2
    x1 = ----------- = -------------------
         | a11 a12 |    a11·a22 - a12·a21
         | a21 a22 |

         | a11  b1 |
         | a21  b2 |     a11·b2 - b1·a21
    x2 = ----------- = -------------------
         | a11 a12 |    a11·a22 - a12·a21
         | a21 a22 |

EXAMPLE
Use the elimination of unknowns to solve:

    3·x1 + 2·x2 = 18
     -x1 + 2·x2 = 2

Solution:

    x1 = (2(18) - 2(2)) / (3(2) - 2(-1)) = 32/8 = 4

    x2 = (3(2) - (-1)(18)) / (3(2) - 2(-1)) = 24/8 = 3
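The closed-form expressions above can be packaged as a small function and checked on this example; `eliminate_2x2` is an illustrative helper:

```python
def eliminate_2x2(a11, a12, b1, a21, a22, b2):
    """Elimination of unknowns for two equations, using the closed-form
    expressions derived above (equivalent to Cramer's rule)."""
    denom = a11 * a22 - a12 * a21
    x1 = (a22 * b1 - a12 * b2) / denom
    x2 = (a11 * b2 - a21 * b1) / denom
    return x1, x2

# 3*x1 + 2*x2 = 18 and -x1 + 2*x2 = 2
print(eliminate_2x2(3, 2, 18, -1, 2, 2))   # (4.0, 3.0)
```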

Gaussian Elimination

Gaussian Elimination is considered the workhorse of computational science for the solution of a system of linear equations. Karl Friedrich Gauss, a great 19th century mathematician, suggested this elimination method as a part of his proof of a particular theorem.

Gaussian Elimination is a systematic application of elementary row operations to a system of linear equations in order to convert the system to upper triangular form. Once the coefficient matrix is in upper triangular form, the system is solved by back substitution.

Gaussian Elimination
The general procedure for Gaussian Elimination can be summarized in the following steps:

1. Write the augmented matrix for the system of linear equations.
2. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form. If a zero is located on the diagonal, switch the rows until a nonzero is in that place. If you are unable to do so, stop; the system has either infinite or no solutions.
3. Use back substitution to find the solution of the problem.

Gaussian Elimination
Example
1. Write the augmented matrix for the system of linear equations.

    2y + z = 4
    x + y + 2z = 6
    2x + y + z = 7

    [ 0  2  1 ⋮ 4 ]
    [ 1  1  2 ⋮ 6 ]
    [ 2  1  1 ⋮ 7 ]

2. Use elementary row operations on the augmented matrix [A|b] to transform A into upper triangular form.

    [ 0  2  1 ⋮ 4 ]  (r2)
    [ 1  1  2 ⋮ 6 ]  (r1)      Change row 1 to row 2 and vice versa
    [ 2  1  1 ⋮ 7 ]

    [ 1  1  2 ⋮ 6 ]
    [ 0  2  1 ⋮ 4 ]
    [ 2  1  1 ⋮ 7 ]  (r3) - (2·r1)

Gaussian Elimination

Notice that the original coefficient matrix had a 0 on the diagonal in row 1. Since we needed to use multiples of that diagonal element to eliminate the elements below it, we switched two rows in order to move a nonzero element into that position. We can use the same technique when a 0 appears on the diagonal as a result of calculation. If it is not possible to move a nonzero onto the diagonal by interchanging rows, then the system has either infinitely many solutions or no solution, and the coefficient matrix is said to be singular.

    [ 1   1   2 ⋮  6 ]
    [ 0   2   1 ⋮  4 ]
    [ 0  -1  -3 ⋮ -5 ]  (r3) + (1/2·r2)

    [ 1  1    2  ⋮  6 ]
    [ 0  2    1  ⋮  4 ]
    [ 0  0  -5/2 ⋮ -3 ]

Since all of the nonzero elements are now located in the upper triangle of the matrix, we have completed the first phase of solving a system of linear equations using Gaussian Elimination.

Gaussian Elimination
The second and final phase of Gaussian Elimination is back substitution. During this phase, we solve for the values of the unknowns, working our way up from the bottom row.

3. Use back substitution to find the solution of the problem.

    [ 1  1    2  ⋮  6 ]
    [ 0  2    1  ⋮  4 ]
    [ 0  0  -5/2 ⋮ -3 ]

The last row in the augmented matrix represents the equation:

    -(5/2)·z = -3   =>   z = 6/5

Gaussian Elimination
The second row of the augmented matrix represents the equation:

    2y + z = 4   =>   y = (4 - z)/2 = (4 - 6/5)/2   =>   y = 7/5

Finally, the first row of the augmented matrix represents the equation:

    x + y + 2z = 6   =>   x = 6 - y - 2z = 6 - 7/5 - 2(6/5)   =>   x = 11/5
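The two phases, forward elimination with row switching and back substitution, can be sketched in NumPy and checked against this worked example:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination with row switching on zero pivots,
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        if A[k, k] == 0:                     # switch rows on a zero pivot
            rows = np.nonzero(A[k + 1:, k])[0]
            if rows.size == 0:
                raise ValueError("no unique solution")
            j = k + 1 + rows[0]
            A[[k, j]] = A[[j, k]]
            b[[k, j]] = b[[j, k]]
        for i in range(k + 1, n):            # eliminate below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution, bottom up
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[0, 2, 1], [1, 1, 2], [2, 1, 1]])
b = np.array([4, 6, 7])
print(gaussian_elimination(A, b))   # approximately [2.2, 1.4, 1.2] = (11/5, 7/5, 6/5)
```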

Gauss-Jordan Elimination
As in Gaussian Elimination, again we are transforming the coefficient matrix into another matrix that is much easier to solve, and the system represented by the new augmented matrix has the same solution set as the original system.

In Gauss-Jordan Elimination, the goal is to transform the coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix one column at a time. We work to eliminate the elements both above and below the diagonal element of a given column in one pass through the matrix.

Gauss-Jordan Elimination
The general procedure for Gauss-Jordan Elimination can be summarized in the following steps:
1. Write the augmented matrix for the system of linear equations.
2. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form. If a zero is located on the diagonal, switch the rows until a nonzero is in that place. If you are unable to do so, stop; the system has either infinite or no solutions.
3. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one.

Gauss-Jordan Elimination
Example

We will apply Gauss-Jordan Elimination to the same example that was used to demonstrate Gaussian Elimination.

1. Write the augmented matrix for the system of linear equations.

    2y + z = 4           [ 0  2  1 ⋮ 4 ]
    x + y + 2z = 6       [ 1  1  2 ⋮ 6 ]
    2x + y + z = 7       [ 2  1  1 ⋮ 7 ]

2. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form.

    [ 0  2  1 ⋮ 4 ]  (r2)
    [ 1  1  2 ⋮ 6 ]  (r1)
    [ 2  1  1 ⋮ 7 ]

    [ 1  1  2 ⋮ 6 ]
    [ 0  2  1 ⋮ 4 ]
    [ 2  1  1 ⋮ 7 ]  (r3) - (2·r1)

Gauss-Jordan Elimination

    [ 1   1   2 ⋮  6 ]  (r1) - (1/2·r2)
    [ 0   2   1 ⋮  4 ]
    [ 0  -1  -3 ⋮ -5 ]  (r3) + (1/2·r2)

    [ 1  0  3/2  ⋮  4 ]  (r1) + (3/5·r3)
    [ 0  2   1   ⋮  4 ]  (r2) + (2/5·r3)
    [ 0  0  -5/2 ⋮ -3 ]

    [ 1  0   0   ⋮ 11/5 ]
    [ 0  2   0   ⋮ 14/5 ]
    [ 0  0  -5/2 ⋮  -3  ]

3. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one.

    [ 1  0   0   ⋮ 11/5 ]
    [ 0  2   0   ⋮ 14/5 ]  (r2) · (1/2)
    [ 0  0  -5/2 ⋮  -3  ]  (r3) · (-2/5)

    [ 1  0  0 ⋮ 11/5 ]
    [ 0  1  0 ⋮  7/5 ]
    [ 0  0  1 ⋮  6/5 ]

Then,

    x = 11/5 ,  y = 7/5 ,  and  z = 6/5

Notice that the coefficient matrix is now a diagonal matrix with ones on the diagonal. This is a special matrix called the identity matrix.
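The procedure can be sketched compactly in NumPy. One implementation choice differs from the slides: this variant normalizes each pivot to 1 as it goes, rather than dividing the diagonal out at the end, which is a common way to fold step 3 into step 2:

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A|b] to the identity matrix; the right-hand column
    is then the solution, with no back substitution required."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        if M[k, k] == 0:                     # switch rows on a zero pivot
            j = k + 1 + np.nonzero(M[k + 1:, k])[0][0]
            M[[k, j]] = M[[j, k]]
        M[k] /= M[k, k]                      # make the pivot equal to one
        for i in range(n):                   # clear above AND below the pivot
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[0, 2, 1], [1, 1, 2], [2, 1, 1]])
b = np.array([4, 6, 7])
print(gauss_jordan(A, b))   # approximately [2.2, 1.4, 1.2], i.e. x = 11/5, y = 7/5, z = 6/5
```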

LU Decomposition

Just as was the case with Gauss elimination, LU decomposition requires pivoting to avoid division by zero. However, to simplify the following description, we will defer the issue of pivoting until after the fundamental approach is elaborated. In addition, the following explanation is limited to a set of three simultaneous equations. The results can be directly extended to n-dimensional systems.

Linear algebraic notation can be rearranged to give

    [A]{X} - {B} = 0        (1)

Suppose that this equation could be expressed as an upper triangular system:

    [ u11  u12  u13 ] [ x1 ]   [ d1 ]
    [  0   u22  u23 ] [ x2 ] = [ d2 ]
    [  0    0   u33 ] [ x3 ]   [ d3 ]

Elimination is used to reduce the system to upper triangular form. The above equation can also be expressed in matrix notation and rearranged to give

    [U]{X} - {D} = 0        (3)

LU Decomposition
Now, assume that there is a lower diagonal matrix with 1s on the diagonal,

        [  1    0   0 ]
    L = [ l21   1   0 ]
        [ l31  l32  1 ]

that has the property that when Eq. 3 is premultiplied by it, Eq. 1 is the result. That is,

    [L]{[U]{X} - {D}} = [A]{X} - {B}

If this equation holds, it follows from the rules for matrix multiplication that

    [L][U] = [A]        (6)

and

    [L]{D} = {B}        (7)

LU Decomposition
A two-step strategy for obtaining solutions can be based on Eqs. 3, 6 and 7:

LU decomposition step. [A] is factored or decomposed into lower [L] and upper [U] triangular matrices.

Substitution step. [L] and [U] are used to determine a solution {X} for a right-hand side {B}. This step itself consists of two steps. First, Eq. 7 is used to generate an intermediate vector {D} by forward substitution. Then, the result is substituted into Eq. 3, which can be solved by back substitution for {X}.

On the other hand, Gauss elimination itself can be used to carry out the decomposition of [A] into [L] and [U].
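The two-step strategy can be sketched in NumPy: a Doolittle factorization (1s on the diagonal of L, no pivoting), then forward and back substitution. The matrix below is the row-swapped coefficient matrix from the Gaussian elimination example, so no zero pivot arises:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle factorization without pivoting: A = L.U with 1s on
    the diagonal of L. Assumes no zero pivot is encountered."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # store the elimination factor
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = np.zeros(n)
    for i in range(n):                       # forward substitution: [L]{D} = {B}
        d[i] = b[i] - L[i, :i] @ d[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution: [U]{X} = {D}
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1, 1, 2], [0, 2, 1], [2, 1, 1]])
b = np.array([6.0, 4.0, 7.0])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))    # True: [L][U] = [A]
print(lu_solve(L, U, b))        # approximately [2.2, 1.4, 1.2], as with Gauss elimination
```

Once [L] and [U] are computed, additional right-hand sides {B} can be solved by repeating only the cheap substitution step.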

Bibliography
CHAPRA, Steven. Numerical Methods for Engineers. McGraw-Hill, 2000.
http://www.efunda.com/math
http://www.purplemath.com
http://ceee.rice.edu/Books
