
# RELAXATION METHODS

## Presenters

- Saleem Abdool 11/0935/1430
- Michal Dhani 10/0937/1299
- Quincy Chester 10/0937/2244
- Kevin Tucker 10/0937/2516

Outline of presentation
- Introduction
- Types of relaxation methods
  - Jacobi method
  - Gauss-Seidel method
  - Successive over-relaxation (SOR)
- Summary
- References
- Worksheet
- Program for solving systems by relaxation methods

Introduction
What is a relaxation method?
It is a method of solving simultaneous equations by guessing a solution and then reducing the resulting errors by successive approximations until all the errors are less than some specified amount.

Why relaxation methods?
Relaxation methods, which are the basis of the Jacobi, Gauss-Seidel, and successive over-relaxation (SOR) methods, may be applied to any system of linear equations to iteratively improve an approximation to the exact solution.

Such systems may be solved by either (1) direct or (2) iterative methods.

Introduction
Direct Methods
There are three such methods:
- Solution by determinants (Cramer's rule)
- Solution by the inverse matrix
- Solution by successive elimination

The first two methods are not practical for solving large systems of equations. Even the third can demand so much computer memory that one needs to resort to the iterative alternative.

Additionally, direct methods accumulate round-off errors, which can produce incorrect solutions on computers that do not carry sufficient precision.

Introduction
Iterative Methods
Because of the limited precision with which most computations represent numbers, one is unlikely to obtain the exact solution even with a direct method.

Iterative methods do not, in theory, produce the exact solution in a finite number of iterations. However, given the imprecise nature of number representation on computers, iterative methods can have advantages over direct methods. In practice, for large systems of equations, iterative methods are the ones to choose.
Objectives
- To identify the iterative methods
- To show the application of iterative methods in engineering
- To determine the convergence of each method

Why use Iterative Techniques?
Simultaneous linear algebraic equations can be solved with Gaussian elimination and the Gauss-Jordan method. These techniques are known as direct methods. Problems can arise from round-off errors and from zeros on the diagonal.

One means of obtaining an approximate solution to the equations is to start from an educated guess and improve it iteratively.

Iterative Methods
We will look at three iterative
methods:
Jacobi Method
Gauss-Seidel Method
Successive over Relaxation (SOR)

Jacobi method
In numerical linear algebra, the Jacobi method is an algorithm for determining the solutions of a system of linear equations in which each diagonal element dominates, in absolute value, the other elements of its row. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after the German mathematician Carl Gustav Jacob Jacobi.
Jacobi method
The technique solves for the entire set of x values in each iteration.

The values are not updated until an iteration is completed.
Jacobi method
Given a square system of n linear equations Ax = b, where A = [a_ij] is the coefficient matrix, x the vector of unknowns, and b the right-hand-side vector, A can be decomposed into a diagonal component D and the remainder R:

A = D + R

The iteration is then

x^(k+1) = D^(-1) (b − R x^(k))
Example
Consider a linear system Ax = b with an initial estimate x^(0). We use the equation x^(k+1) = D^(-1)(b − R x^(k)), described above, to estimate x. First, we rewrite the equation in the more convenient form x^(k+1) = T x^(k) + C, where T = −D^(-1)(L + U) and C = D^(-1) b. Note that R = L + U, where L and U are the strictly lower and strictly upper triangular parts of A. With T and C calculated from the known entries of A and b, each new estimate x^(k+1) is obtained from the previous one, and the process is repeated until convergence (i.e., until the change between successive iterates is small).
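The decomposition above translates directly into code. A minimal NumPy sketch (the function name and the sample system are illustrative, not from the slides):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=200):
    """Jacobi iteration: x_new = D^{-1} (b - R x), where A = D + R."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # remainder: the off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # every component uses only old values
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant sample system: 4x1 + x2 = 1, 2x1 + 3x2 = 2
x = jacobi([[4.0, 1.0], [2.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
print(x)   # close to [0.1, 0.6]
```

Note that `x_new` is computed for all components at once, so no value is updated until the sweep is complete, exactly as the slides describe.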

The Gauss-Seidel Method

This is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is guaranteed only if the matrix is diagonally dominant or symmetric and positive definite.

How this method works:

Assume an initial guess for the solution array, then algebraically solve each linear equation for x_i. After each iteration, compute the absolute relative approximate error and check whether it is within a prespecified tolerance; if not, repeat the iteration.

Why
The Gauss-Seidel method allows the user to control round-off error. Elimination methods such as Gaussian elimination and LU decomposition are prone to round-off error.
Also: if the physics of the problem is understood, a close initial guess can be made, decreasing the number of iterations needed.

In linear algebra, LU decomposition (also called LU factorization) factorizes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. LU decomposition is a key step in several fundamental numerical algorithms in linear algebra, such as solving a system of linear equations, inverting a matrix, or computing the determinant of a matrix. It can be viewed as the matrix form of Gaussian elimination, and was introduced by the Polish astronomer and mathematician Tadeusz Banachiewicz.

Gauss-Seidel Method

http://numericalmethods.eng.usf.edu
Algorithm
Rewriting each equation:

From equation 1:    x_1 = (c_1 − a_12 x_2 − a_13 x_3 − … − a_1n x_n) / a_11
From equation 2:    x_2 = (c_2 − a_21 x_1 − a_23 x_3 − … − a_2n x_n) / a_22
⋮
From equation n−1:  x_(n−1) = (c_(n−1) − a_(n−1,1) x_1 − a_(n−1,2) x_2 − … − a_(n−1,n−2) x_(n−2) − a_(n−1,n) x_n) / a_(n−1,n−1)
From equation n:    x_n = (c_n − a_n1 x_1 − a_n2 x_2 − … − a_(n,n−1) x_(n−1)) / a_nn
Gauss-Seidel Method

Algorithm
General form of each equation:

x_1 = (c_1 − Σ_{j=1, j≠1}^{n} a_1j x_j) / a_11
x_2 = (c_2 − Σ_{j=1, j≠2}^{n} a_2j x_j) / a_22
⋮
x_(n−1) = (c_(n−1) − Σ_{j=1, j≠n−1}^{n} a_(n−1,j) x_j) / a_(n−1,n−1)
x_n = (c_n − Σ_{j=1, j≠n}^{n} a_nj x_j) / a_nn

General form for any row i:

x_i = (c_i − Σ_{j=1, j≠i}^{n} a_ij x_j) / a_ii,   i = 1, 2, …, n

How or where can this equation be used?
Gauss-Seidel Method

Solve for the unknowns:
Assume an initial guess for [X] = [x_1, x_2, …, x_(n−1), x_n]^T.
Use the rewritten equations to solve for each value of x_i.
Important: remember to use the most recent value of x_i, which means applying values already calculated in the current iteration to the calculations remaining in it.
Gauss-Seidel Method

Calculate the absolute relative approximate error:

|ε_a|_i = | (x_i^new − x_i^old) / x_i^new | × 100

So when has the answer been found?

The iterations are stopped when the absolute relative approximate error is less than a prespecified tolerance for all unknowns.
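The sweep-then-check loop just described can be sketched as follows; the function name and the default tolerance are illustrative. The test system is the diagonally dominant one solved later in Example 2:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol_pct=0.05, max_iter=100):
    """Gauss-Seidel sweep: solve equation i for x_i using the most recent
    values, then stop once every absolute relative approximate error
    |eps_a|_i = |(x_new - x_old) / x_new| * 100 is below tol_pct."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]       # sum over j != i, newest values
            x[i] = (b[i] - s) / A[i, i]
        eps = np.abs((x - x_old) / x) * 100     # absolute relative approx. error, %
        if np.max(eps) < tol_pct:
            break
    return x

# Diagonally dominant system (Example 2 below) converges to [1, 3, 4]:
x = gauss_seidel([[12, 3, -5], [1, 5, 3], [3, 7, 13]], [1, 28, 76], [1, 0, 1])
print(x)
```

Because `x[i]` is overwritten inside the inner loop, later equations in the same sweep automatically see the updated values, which is the defining difference from the Jacobi method.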
Gauss-Seidel Method: Example 1

The upward velocity of a rocket is given at three different times.

Table 1. Velocity vs. time data.

| t (s) | v (m/s) |
|---|---|
| 5 | 106.8 |
| 8 | 177.2 |
| 12 | 279.2 |

The velocity data is approximated by a polynomial:

v(t) = a_1 t^2 + a_2 t + a_3,   5 ≤ t ≤ 12
Gauss-Seidel Method: Example 1

Using a matrix template of the form

    [ t_1^2  t_1  1 ] [ a_1 ]   [ v_1 ]
    [ t_2^2  t_2  1 ] [ a_2 ] = [ v_2 ]
    [ t_3^2  t_3  1 ] [ a_3 ]   [ v_3 ]

the system of equations becomes

    [ 25   5   1 ] [ a_1 ]   [ 106.8 ]
    [ 64   8   1 ] [ a_2 ] = [ 177.2 ]
    [ 144  12  1 ] [ a_3 ]   [ 279.2 ]

Initial guess: assume [a_1, a_2, a_3]^T = [1, 2, 5]^T.
Gauss-Seidel Method: Example 1

Rewriting each equation:

a_1 = (106.8 − 5a_2 − a_3) / 25
a_2 = (177.2 − 64a_1 − a_3) / 8
a_3 = (279.2 − 144a_1 − 12a_2) / 1

Gauss-Seidel Method: Example 1

Applying the initial guess [a_1, a_2, a_3]^T = [1, 2, 5]^T and solving for each a_i:

a_1 = (106.8 − 5(2) − (5)) / 25 = 3.6720
a_2 = (177.2 − 64(3.6720) − (5)) / 8 = −7.8510
a_3 = (279.2 − 144(3.6720) − 12(−7.8510)) / 1 = −155.36

When solving for a_2, how many of the initial guess values were used?
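This first sweep can be checked with a few lines of arithmetic; note that a_2 and a_3 come out negative:

```python
# First Gauss-Seidel sweep of Example 1, starting from the guess (1, 2, 5).
a1_0, a2_0, a3_0 = 1.0, 2.0, 5.0
a1 = (106.8 - 5 * a2_0 - a3_0) / 25       # uses the old a2 and a3
a2 = (177.2 - 64 * a1 - a3_0) / 8         # uses the NEW a1, old a3
a3 = (279.2 - 144 * a1 - 12 * a2) / 1     # uses the NEW a1 and a2
print(a1, a2, a3)   # approximately 3.672, -7.851, -155.356
```

Only one initial guess value survives into the a_2 update (the old a_3); a_1 has already been replaced.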
Gauss-Seidel Method: Example 1

Finding the absolute relative approximate error

|ε_a|_i = | (x_i^new − x_i^old) / x_i^new | × 100

at the end of the first iteration, with [a_1, a_2, a_3]^T = [3.6720, −7.8510, −155.36]^T:

|ε_a|_1 = | (3.6720 − 1.0000) / 3.6720 | × 100 = 72.76%
|ε_a|_2 = | (−7.8510 − 2.0000) / (−7.8510) | × 100 = 125.47%
|ε_a|_3 = | (−155.36 − 5.0000) / (−155.36) | × 100 = 103.22%

The maximum absolute relative approximate error is 125.47%.
Gauss-Seidel Method: Example 1

Iteration #2
Using [a_1, a_2, a_3]^T = [3.6720, −7.8510, −155.36]^T from iteration #1, the values of a_i are found:

a_1 = (106.8 − 5(−7.8510) − (−155.36)) / 25 = 12.056
a_2 = (177.2 − 64(12.056) − (−155.36)) / 8 = −54.882
a_3 = (279.2 − 144(12.056) − 12(−54.882)) / 1 = −798.34
Gauss-Seidel Method: Example 1

Finding the absolute relative approximate error at the end of the second iteration, with [a_1, a_2, a_3]^T = [12.056, −54.882, −798.34]^T:

|ε_a|_1 = | (12.056 − 3.6720) / 12.056 | × 100 = 69.543%
|ε_a|_2 = | (−54.882 − (−7.8510)) / (−54.882) | × 100 = 85.695%
|ε_a|_3 = | (−798.34 − (−155.36)) / (−798.34) | × 100 = 80.540%

The maximum absolute relative approximate error is 85.695%.

Gauss-Seidel Method: Example 1

Repeating more iterations, the following values are obtained:

| Iteration | a_1 | ε_a,1 (%) | a_2 | ε_a,2 (%) | a_3 | ε_a,3 (%) |
|---|---|---|---|---|---|---|
| 1 | 3.6720 | 72.767 | −7.8510 | 125.47 | −155.36 | 103.22 |
| 2 | 12.056 | 69.543 | −54.882 | 85.695 | −798.34 | 80.540 |
| 3 | 47.182 | 74.447 | −255.51 | 78.521 | −3448.9 | 76.852 |
| 4 | 193.33 | 75.595 | −1093.4 | 76.632 | −14440 | 76.116 |
| 5 | 800.53 | 75.850 | −4577.2 | 76.112 | −60072 | 75.963 |
| 6 | 3322.6 | 75.906 | −19049 | 75.972 | −249580 | 75.931 |

Notice: the relative errors are not decreasing at any significant rate. Also, the solution is not converging to the true solution of

[a_1, a_2, a_3]^T = [0.29048, 19.690, 1.0857]^T
Gauss-Seidel Method: Pitfall

Even though the iterations were carried out correctly, the answer is not converging to the true solution. This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.

One class of systems always converges: those with a diagonally dominant coefficient matrix.

Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if

|a_ii| ≥ Σ_{j=1, j≠i}^{n} |a_ij|   for all i, and
|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|   for at least one i.
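The condition above is easy to test in code. A small sketch (the function name is illustrative), checked against the coefficient matrices of Example 1 and Example 2:

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| >= sum over j != i of |a_ij| for every row,
    with strict inequality holding for at least one row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off = A.sum(axis=1) - diag     # row sums of off-diagonal magnitudes
    return bool(np.all(diag >= off) and np.any(diag > off))

print(is_diagonally_dominant([[25, 5, 1], [64, 8, 1], [144, 12, 1]]))  # False
print(is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]]))    # True
```

Example 1's matrix fails the test (row 2: |8| < 64 + 1), which is consistent with the divergence just observed.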
Gauss-Seidel Method: Example 2

Given the system of equations

12x_1 + 3x_2 − 5x_3 = 1
x_1 + 5x_2 + 3x_3 = 28
3x_1 + 7x_2 + 13x_3 = 76

with an initial guess of [x_1, x_2, x_3]^T = [1, 0, 1]^T, the coefficient matrix is

          [ 12  3  −5 ]
    [A] = [  1  5   3 ]
          [  3  7  13 ]
Gauss-Seidel Method: Example 2

Rewriting each equation:

x_1 = (1 − 3x_2 + 5x_3) / 12
x_2 = (28 − x_1 − 3x_3) / 5
x_3 = (76 − 3x_1 − 7x_2) / 13

With the initial guess [x_1, x_2, x_3]^T = [1, 0, 1]^T:

x_1 = (1 − 3(0) + 5(1)) / 12 = 0.50000
x_2 = (28 − (0.50000) − 3(1)) / 5 = 4.9000
x_3 = (76 − 3(0.50000) − 7(4.9000)) / 13 = 3.0923
Gauss-Seidel Method: Example 2

The absolute relative approximate errors:

|ε_a|_1 = | (0.50000 − 1.0000) / 0.50000 | × 100 = 100.00%
|ε_a|_2 = | (4.9000 − 0) / 4.9000 | × 100 = 100.00%
|ε_a|_3 = | (3.0923 − 1.0000) / 3.0923 | × 100 = 67.662%

The maximum absolute relative error after the first iteration is 100%.
Gauss-Seidel Method: Example 2

After iteration #1, [x_1, x_2, x_3]^T = [0.50000, 4.9000, 3.0923]^T. Substituting these x values into the equations:

x_1 = (1 − 3(4.9000) + 5(3.0923)) / 12 = 0.14679
x_2 = (28 − (0.14679) − 3(3.0923)) / 5 = 3.7153
x_3 = (76 − 3(0.14679) − 7(3.7153)) / 13 = 3.8118

After iteration #2, [x_1, x_2, x_3]^T = [0.14679, 3.7153, 3.8118]^T.
Gauss-Seidel Method: Example 2

Iteration #2 absolute relative approximate errors:

|ε_a|_1 = | (0.14679 − 0.50000) / 0.14679 | × 100 = 240.61%
|ε_a|_2 = | (3.7153 − 4.9000) / 3.7153 | × 100 = 31.889%
|ε_a|_3 = | (3.8118 − 3.0923) / 3.8118 | × 100 = 18.876%

The maximum absolute relative error after the second iteration is 240.61%.

Gauss-Seidel Method: Example 2

Repeating more iterations, the following values are obtained:

| Iteration | x_1 | ε_a,1 (%) | x_2 | ε_a,2 (%) | x_3 | ε_a,3 (%) |
|---|---|---|---|---|---|---|
| 1 | 0.50000 | 100.00 | 4.9000 | 100.00 | 3.0923 | 67.662 |
| 2 | 0.14679 | 240.61 | 3.7153 | 31.889 | 3.8118 | 18.876 |
| 3 | 0.74275 | 80.236 | 3.1644 | 17.408 | 3.9708 | 4.0042 |
| 4 | 0.94675 | 21.546 | 3.0281 | 4.4996 | 3.9971 | 0.65772 |
| 5 | 0.99177 | 4.5391 | 3.0034 | 0.82499 | 4.0001 | 0.074383 |
| 6 | 0.99919 | 0.74307 | 3.0001 | 0.10856 | 4.0001 | 0.00101 |

The solution obtained, [x_1, x_2, x_3]^T = [0.99919, 3.0001, 4.0001]^T, is close to the exact solution of [x_1, x_2, x_3]^T = [1, 3, 4]^T.
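The table above can be regenerated with a short loop; six sweeps land close to the exact solution:

```python
# Six Gauss-Seidel sweeps for Example 2, starting from the guess (1, 0, 1).
x1, x2, x3 = 1.0, 0.0, 1.0
for k in range(6):
    x1 = (1 - 3 * x2 + 5 * x3) / 12
    x2 = (28 - x1 - 3 * x3) / 5
    x3 = (76 - 3 * x1 - 7 * x2) / 13
print(x1, x2, x3)   # approaches the exact solution [1, 3, 4]
```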
Successive Over-Relaxation Method
This method is based on the Gauss-Seidel method and is specially formulated to give a more accurate solution in fewer iterations. This is done with the use of a relaxation factor ω.

The method is based on the general formula

x_i^(k+1) = x_i^(k) + ω r_i^(k)

where r_i^(k) is the residual of x_i^(k).

From a rearranged form of the Gauss-Seidel method, namely

x_i^(k+1) = x_i^(k) + (1/a_ii) (c_i − Σ_{j=1}^{i−1} a_ij x_j^(k+1) − Σ_{j=i}^{n} a_ij x_j^(k))

we can see the residual bracketed. The SOR formula then becomes:

x_i^(k+1) = x_i^(k) + (ω/a_ii) (c_i − Σ_{j=1}^{i−1} a_ij x_j^(k+1) − Σ_{j=i}^{n} a_ij x_j^(k))
Example
Solve for the values of x in the following system:

2x_1 − x_2 = 1
−x_1 + 2x_2 − x_3 = 0
−x_2 + 2x_3 = 1

which becomes, in matrix form,

    [  2  −1   0 ] [ x_1 ]   [ 1 ]
    [ −1   2  −1 ] [ x_2 ] = [ 0 ]
    [  0  −1   2 ] [ x_3 ]   [ 1 ]

where the residual for each x_i is

r_i^(k) = (1/a_ii) (c_i − Σ_{j=1}^{i−1} a_ij x_j^(k+1) − Σ_{j=i}^{n} a_ij x_j^(k))
Example contd.
Taking the initial guess x^(0) = [0, 0, 0]^T, the first residual is

r_1^(0) = (1/2) (1 − 2(0) + (0)) = 1/2

Using the equation

x_i^(k+1) = x_i^(k) + ω r_i^(k)

and taking ω to be 1 in this case:

x_1^(0+1) = 0 + 1 × (1/2) = 1/2

And it continues as shown below:

| k | x_1 | x_2 | x_3 |
|---|---|---|---|
| 1 | 0.5 | 0.25 | 0.625 |
| 2 | 0.625 | 0.625 | 0.8125 |
| ⋮ | | | |
| 10 | 0.9985 | 0.9985 | 0.9926 |
Example contd.
Checking the value of each variable after 5 cycles (k = 5) at different values of ω:

| ω | x_1 | x_2 | x_3 |
|---|---|---|---|
| 0.5 | 0.651 | 0.545 | 0.704 |
| 1.0 | 0.953 | 0.953 | 0.976 |
| 1.99 | 0.995 | 0.995 | 1.94 |
The key to successfully using this method is obtaining an appropriate value of ω each time. This allows convergence to be faster than in the previously mentioned methods.

For 0 < ω < 2 the system will converge.
- For 0 < ω < 1, convergence is slower than the Gauss-Seidel method.
- At ω = 1, the method reduces to the Gauss-Seidel method.
- For 1 < ω < 2, the system converges fastest and in the fewest steps.

This does not mean that the highest possible value of ω gives the fastest convergence; an appropriate value of ω can be obtained by trial and error.
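Trial and error is easy to automate: count the sweeps needed at a few values of ω for the worked tridiagonal example (the tolerance and the ω values tried are illustrative):

```python
import numpy as np

A = np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 2]])
b = np.array([1.0, 0, 1])

def sweeps_to_converge(omega, tol=1e-8, max_iter=1000):
    """Number of SOR sweeps until successive iterates differ by < tol."""
    x = np.zeros(3)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(3):
            x[i] += omega * (b[i] - A[i] @ x) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return k
    return max_iter

for omega in (0.5, 1.0, 1.2, 1.5):
    print(omega, sweeps_to_converge(omega))
```

For this system, under-relaxation (ω = 0.5) takes the most sweeps, ω = 1 (Gauss-Seidel) fewer, and a value slightly above 1 fewer still, matching the discussion above.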

The End
