1. Numerical analysis
Numerical analysis is the branch of mathematics that studies and develops algorithms using numerical approximation for the problems of mathematical analysis (continuous mathematics). Numerical techniques are widely used by scientists and engineers to solve their problems. A major advantage of numerical techniques is that a numerical answer can be obtained even when a problem has no analytical solution. However, a result from numerical analysis is, in general, an approximation, which can be made as accurate as desired by retaining more digits. For example,
\[
\frac{8}{3} = 2.66666\ldots = 2 + \frac{6}{10} + \frac{6}{10^2} + \frac{6}{10^3} + \cdots
\]
3.1. Rounding and chopping. Let x be any real number and fl(x) be its machine approximation. There are two ways to cut a real number
\[
x = (0.a_1 a_2 \ldots a_n a_{n+1} \ldots)_\beta \, \beta^e, \qquad a_1 \neq 0,
\]
down to n digits for storage.
(1) Chopping: we ignore the digits after a_n and write
\[
fl(x) = (0.a_1 a_2 \ldots a_n)_\beta \, \beta^e.
\]
(2) Rounding: rounding is defined as
\[
fl(x) =
\begin{cases}
(0.a_1 a_2 \ldots a_n)_\beta \, \beta^e, & 0 \le a_{n+1} < \beta/2 \quad \text{(rounding down)},\\[4pt]
\big[(0.a_1 a_2 \ldots a_n)_\beta + (0.00\ldots01)_\beta\big] \, \beta^e, & \beta/2 \le a_{n+1} < \beta \quad \text{(rounding up)}.
\end{cases}
\]
Example 1.
\[
fl\!\left(\frac{6}{7}\right) =
\begin{cases}
0.86 \times 10^0 & \text{(rounding)},\\
0.85 \times 10^0 & \text{(chopping)}.
\end{cases}
\]
Rules for rounding off numbers:
(1) If the digit to be dropped is greater than 5, the last retained digit is
increased by one. For example,
12.6 is rounded to 13.
(2) If the digit to be dropped is less than 5, the last remaining digit is left
as it is. For example,
12.4 is rounded to 12.
(3) If the digit to be dropped is 5, and if any digit following it is not zero,
the last remaining digit is increased by one. For example,
12.51 is rounded to 13.
(4) If the digit to be dropped is 5 and is followed only by zeros, the last
remaining digit is increased by one if it is odd, but left as it is if even. For
example,
11.5 is rounded to 12, and 12.5 is rounded to 12.
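Rules (1)-(4) taken together are exactly round-half-to-even ("banker's rounding") on decimal digits. A minimal sketch using Python's standard decimal module (the helper name round_off is ours, not a library function):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_off(value: str, places: int = 0) -> Decimal:
    # Rules (1)-(4) correspond to round-half-to-even on decimal digits.
    exponent = Decimal(1).scaleb(-places)  # places=0 -> quantum of 1
    return Decimal(value).quantize(exponent, rounding=ROUND_HALF_EVEN)

print(round_off("12.6"))   # 13  (rule 1: dropped digit > 5)
print(round_off("12.4"))   # 12  (rule 2: dropped digit < 5)
print(round_off("12.51"))  # 13  (rule 3: 5 followed by a nonzero digit)
print(round_off("11.5"))   # 12  (rule 4: odd last digit goes up)
print(round_off("12.5"))   # 12  (rule 4: even last digit stays)
```

The input is passed as a string so the decimal digits are exact; constructing a Decimal from a binary float would already carry representation error.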
Definition 3.2 (Absolute and relative error). If x* is the approximation to the exact value x, then the absolute error is |x − x*|, and the relative error is
\[
\frac{|x - x^*|}{|x|}.
\]
Remark: As a measure of accuracy, the absolute error may be misleading
and the relative error is more meaningful.
Definition 3.3 (Overflow and underflow). An overflow occurs when a number is too large to fit into the floating-point system in use, i.e., e > M. An underflow occurs when a number is too small, i.e., e < m.
When overflow occurs in the course of a calculation, this is generally fatal.
But underflow is non-fatal: the system usually sets the number to 0 and
continues. (Matlab does this, quietly.)
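Both behaviors are easy to observe in IEEE double precision, where the exponent limits correspond to magnitudes of roughly 1e308 and 1e-308 (a sketch; in IEEE arithmetic overflow produces an infinity rather than aborting, and underflow is quietly flushed to zero):

```python
import math

big = 1e308
print(big * 10)       # inf: overflow saturates to infinity
tiny = 1e-308
print(tiny / 1e20)    # 0.0: underflow past the subnormal range gives zero
```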
Chopping errors: For chopping we have
\[
x = (0.a_1 a_2 \ldots)_\beta \, \beta^e = \beta^e \sum_{i=1}^{\infty} a_i \beta^{-i}, \qquad a_1 \neq 0,
\]
\[
fl(x) = (0.a_1 a_2 \ldots a_n)_\beta \, \beta^e = \beta^e \sum_{i=1}^{n} a_i \beta^{-i}.
\]
Therefore
\[
|x - fl(x)| = \beta^e \sum_{i=n+1}^{\infty} a_i \beta^{-i}
\le \beta^e \sum_{i=n+1}^{\infty} (\beta - 1)\, \beta^{-i}
= \beta^e (\beta - 1) \left[ \frac{1}{\beta^{n+1}} + \frac{1}{\beta^{n+2}} + \cdots \right]
= \beta^{e-n}.
\]
Now
\[
|x| = (0.a_1 a_2 \ldots)_\beta \, \beta^e \ge \frac{1}{\beta}\, \beta^e = \beta^{e-1}.
\]
Therefore
\[
\frac{|x - fl(x)|}{|x|} \le \frac{\beta^{e-n}}{\beta^{e-1}} = \beta^{1-n}.
\]
Rounding errors: For rounding,
\[
fl(x) =
\begin{cases}
\beta^e \sum_{i=1}^{n} a_i \beta^{-i}, & 0 \le a_{n+1} < \beta/2,\\[4pt]
\beta^e \left[ \sum_{i=1}^{n} a_i \beta^{-i} + \beta^{-n} \right], & \beta/2 \le a_{n+1} < \beta.
\end{cases}
\]
In the first case (rounding down), since a_{n+1} \le \beta/2 - 1,
\[
|x - fl(x)| = \beta^e \sum_{i=n+1}^{\infty} a_i \beta^{-i}
= \beta^e \left[ \frac{a_{n+1}}{\beta^{n+1}} + \sum_{i=n+2}^{\infty} \frac{a_i}{\beta^{i}} \right]
\le \beta^e \left[ \frac{\beta/2 - 1}{\beta^{n+1}} + \frac{1}{\beta^{n+1}} \right]
= \frac{1}{2}\, \beta^{e-n}.
\]
In the second case (rounding up),
\[
|x - fl(x)| = \beta^e \left| \beta^{-n} - \sum_{i=n+1}^{\infty} a_i \beta^{-i} \right|
\le \beta^e \left[ \frac{1}{\beta^{n}} - \frac{a_{n+1}}{\beta^{n+1}} \right],
\]
and since a_{n+1} \ge \beta/2,
\[
|x - fl(x)| \le \beta^e \left[ \frac{1}{\beta^{n}} - \frac{\beta/2}{\beta^{n+1}} \right] = \frac{1}{2}\, \beta^{e-n}.
\]
Hence in both cases
\[
\frac{|x - fl(x)|}{|x|} \le \frac{\tfrac{1}{2}\beta^{e-n}}{\beta^{e-1}} = \frac{1}{2}\, \beta^{1-n}.
\]
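The chopping bound can be checked numerically. A minimal sketch of chopping to n significant digits in base 10 (BETA and N are illustrative choices, and chop is our helper, not a library routine):

```python
import math

BETA, N = 10, 4  # assumed base and mantissa length for illustration

def chop(x: float) -> float:
    # keep the first N significant base-BETA digits, discard the rest
    e = math.floor(math.log(abs(x), BETA)) + 1       # mantissa in [1/BETA, 1)
    digits = math.floor(abs(x) / BETA**e * BETA**N)  # first N digits as an integer
    return math.copysign(digits * float(BETA)**(e - N), x)

x = 8 / 3
rel = abs(x - chop(x)) / abs(x)
print(rel, rel <= BETA**(1 - N))  # relative chopping error stays below beta^(1-n)
```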
5. Significant Figures
Error in addition of numbers. Let X = x_1 + x_2 + \cdots + x_n. Then
\[
\Delta X = \Delta x_1 + \Delta x_2 + \cdots + \Delta x_n,
\]
and dividing by X,
\[
\frac{\Delta X}{X} = \frac{\Delta x_1}{X} + \frac{\Delta x_2}{X} + \cdots + \frac{\Delta x_n}{X},
\]
which is a relative error. Now,
\[
\left| \frac{\Delta X}{X} \right| \le \left| \frac{\Delta x_1}{X} \right| + \left| \frac{\Delta x_2}{X} \right| + \cdots + \left| \frac{\Delta x_n}{X} \right|,
\]
which is a maximum relative error. Therefore, when the given numbers are added, the magnitude of the absolute error in the result is at most the sum of the magnitudes of the absolute errors in those numbers.
Error in subtraction of numbers. As in the case of addition, we obtain the maximum absolute error for subtraction of numbers:
\[
|\Delta X| \le |\Delta x_1| + |\Delta x_2|.
\]
Also
\[
\left| \frac{\Delta X}{X} \right| \le \left| \frac{\Delta x_1}{X} \right| + \left| \frac{\Delta x_2}{X} \right|,
\]
which is a maximum relative error in subtraction of numbers.
Error in product of numbers. Let X = x_1 x_2 \cdots x_n. Then, using the general formula for error,
\[
\Delta X = \frac{\partial X}{\partial x_1}\,\Delta x_1 + \frac{\partial X}{\partial x_2}\,\Delta x_2 + \cdots + \frac{\partial X}{\partial x_n}\,\Delta x_n.
\]
We have
\[
\frac{\Delta X}{X} = \frac{\Delta x_1}{X}\frac{\partial X}{\partial x_1} + \frac{\Delta x_2}{X}\frac{\partial X}{\partial x_2} + \cdots + \frac{\Delta x_n}{X}\frac{\partial X}{\partial x_n}.
\]
Now
\[
\frac{1}{X}\frac{\partial X}{\partial x_1} = \frac{x_2 x_3 \cdots x_n}{x_1 x_2 x_3 \cdots x_n} = \frac{1}{x_1},
\qquad
\frac{1}{X}\frac{\partial X}{\partial x_2} = \frac{x_1 x_3 \cdots x_n}{x_1 x_2 x_3 \cdots x_n} = \frac{1}{x_2},
\qquad \ldots, \qquad
\frac{1}{X}\frac{\partial X}{\partial x_n} = \frac{x_1 x_2 \cdots x_{n-1}}{x_1 x_2 x_3 \cdots x_n} = \frac{1}{x_n}.
\]
Therefore
\[
\frac{\Delta X}{X} = \frac{\Delta x_1}{x_1} + \frac{\Delta x_2}{x_2} + \cdots + \frac{\Delta x_n}{x_n},
\]
and the maximum relative and absolute errors are given by
\[
E_r = \left| \frac{\Delta X}{X} \right| = \left| \frac{\Delta x_1}{x_1} \right| + \left| \frac{\Delta x_2}{x_2} \right| + \cdots + \left| \frac{\Delta x_n}{x_n} \right|,
\qquad
E_a = |\Delta X| = E_r\, |x_1 x_2 \cdots x_n|.
\]
Error in division of numbers. Let X = x_1 / x_2. Then
\[
\Delta X = \frac{\partial X}{\partial x_1}\,\Delta x_1 + \frac{\partial X}{\partial x_2}\,\Delta x_2.
\]
We have
\[
\frac{\Delta X}{X} = \frac{\Delta x_1}{X}\frac{\partial X}{\partial x_1} + \frac{\Delta x_2}{X}\frac{\partial X}{\partial x_2}
= \frac{\Delta x_1}{x_1} - \frac{\Delta x_2}{x_2}.
\]
Therefore the relative error is
\[
E_r = \left| \frac{\Delta x_1}{x_1} \right| + \left| \frac{\Delta x_2}{x_2} \right|,
\]
and the absolute error is
\[
E_a = E_r\, |X|.
\]
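For products and quotients, then, the component relative errors add in magnitude. A small Python sketch (the measured values and tolerances below are hypothetical):

```python
def max_relative_error(values_and_errors):
    # for X a product or quotient of x_i:  E_r = sum |dx_i / x_i|
    return sum(abs(dx / x) for x, dx in values_and_errors)

# hypothetical measurements: x1 = 3.14 +/- 0.005, x2 = 2.72 +/- 0.005
er = max_relative_error([(3.14, 0.005), (2.72, 0.005)])
ea = er * abs(3.14 * 2.72)   # E_a = E_r * |X|
print(er, ea)
```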
Example 2. Add the floating-point numbers 0.4546e3 and 0.5433e7.
Sol. The operands have unequal exponents. To add these floating-point numbers, align both operands to the larger exponent:
0.5433e7 + 0.0000e7 = 0.5433e7.
(Because 0.4546e3, rewritten with exponent 7, becomes 0.0000e7: its digits are shifted out of the four-digit mantissa.)
Example 3. Add the floating-point numbers 0.6434e3 and 0.4845e3.
Sol. This problem has equal exponents, but on adding we get 1.1279e3: the mantissa has 5 digits and is greater than 1, which is why it is shifted right one place. Hence we get the resultant value 0.1127e4.
Example 4. Subtract the following floating-point numbers:
1. 0.5424e−99 from 0.5452e−99
2. 0.3862e−7 from 0.9682e−7
Sol. On subtracting we get 0.0028e−99. Again this is a floating-point number but not in normalized form. To convert it to normalized form, shift the mantissa to the left by 1. Therefore we get 0.028e−100, whose exponent falls below the minimum. This condition is called an underflow condition.
Similarly, after subtraction we get 0.5820e−7.
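Examples 2 and 3 can be reproduced with Python's decimal module by setting a 4-significant-digit context (a sketch; note that decimal rounds by default rather than chopping, so Example 3 gives 0.1128e4 here instead of the chopped 0.1127e4):

```python
from decimal import Decimal, Context

ctx = Context(prec=4)  # 4 significant digits, mimicking the examples

# Example 2: the smaller operand is shifted out entirely
print(ctx.add(Decimal("0.4546e3"), Decimal("0.5433e7")))  # 5.433E+6

# Example 3: the sum exceeds one digit before the point and is renormalized
print(ctx.add(Decimal("0.6434e3"), Decimal("0.4845e3")))  # 1128
```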
\[
g(500) = \frac{500}{\sqrt{501} + \sqrt{500}}
= \frac{0.500000 \times 10^3}{0.223830 \times 10^2 + 0.223607 \times 10^2}
= \frac{0.500000 \times 10^3}{0.447437 \times 10^2}
= 0.111748 \times 10^2.
\]
If more digits are used, the result can be computed to correspondingly higher accuracy.
Consider next the quadratic formula
\[
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
Consider the equation x^2 + 62.10x + 1 = 0 and discuss the numerical results.
Sol. Using the quadratic formula and 8-digit rounding arithmetic, we obtain the two roots
\[
x_1 = -0.01610723, \qquad x_2 = -62.08390.
\]
We use these values as exact values. Now we perform the calculations with 4-digit rounding arithmetic. Then \sqrt{b^2 - 4ac} = \sqrt{3856.41 - 4.000} \approx 62.06, so
\[
fl(x_2) = \frac{-62.10 - 62.06}{2} = \frac{-124.2}{2} = -62.10,
\qquad
\frac{|fl(x_2) - x_2|}{|x_2|} = \frac{|-62.10 + 62.08390|}{|-62.08390|} = 0.259 \times 10^{-3}.
\]
In this equation b^2 = (62.10)^2 is much larger than 4ac = 4; hence b and \sqrt{b^2 - 4ac} are two nearly equal numbers. The calculation of x_1 involves the subtraction of two nearly equal numbers, whereas x_2 involves the addition of two nearly equal numbers, which does not cause a serious loss of significant figures.
To obtain a more accurate 4-digit rounding approximation for x_1, we change the formulation by rationalizing the numerator, that is,
\[
x_1 = \frac{-2c}{b + \sqrt{b^2 - 4ac}}.
\]
Then
\[
fl(x_1) = \frac{-2.000}{62.10 + 62.06} = -2.000/124.2 = -0.01610.
\]
The relative error in computing x_1 is now reduced to 0.62 \times 10^{-3}. However, if we rationalize the numerator in x_2, we get
\[
x_2 = \frac{-2c}{b - \sqrt{b^2 - 4ac}}.
\]
The use of this formula involves not only the subtraction of two nearly equal numbers but also division by a small number, which degrades the accuracy:
\[
fl(x_2) = \frac{-2.000}{62.10 - 62.06} = -2.000/0.04000 = -50.00.
\]
The relative error in x_2 becomes 0.19.
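The rationalization trick generalizes to a standard stable quadratic solver: compute the larger-magnitude root with the sign choice that makes b and the square root add (never cancel), then recover the other root from the product x_1 x_2 = c/a. A sketch:

```python
import math

def stable_roots(a: float, b: float, c: float):
    # sign choice: b and the square root always add, so no cancellation
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2
    return q / a, c / q   # (larger-magnitude root, smaller-magnitude root)

big, small = stable_roots(1.0, 62.10, 1.0)
print(big, small)   # close to -62.08390 and -0.01610723
```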
The subtraction of nearly equal quantities can also arise in series evaluation. For small x, computing x − sin x directly loses significant figures; instead, rearrange the series:
\[
x - \sin x = x - \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right)
= \frac{x^3}{6} - \frac{x^5}{6 \cdot 20} + \frac{x^7}{6 \cdot 20 \cdot 42} - \cdots
= \frac{x^3}{6} \left( 1 - \frac{x^2}{20} \left( 1 - \frac{x^2}{42} \left( 1 - \frac{x^2}{72} (\cdots) \right) \right) \right).
\]
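A sketch of evaluating x − sin x by the nested series rather than by direct subtraction (the cutoff of six nested factors is an arbitrary choice, adequate for moderate x; the inner denominators are (2k)(2k+1): 20, 42, 72, ...):

```python
import math

def x_minus_sin(x: float, factors: int = 6) -> float:
    # nested evaluation: (x^3/6)(1 - (x^2/20)(1 - (x^2/42)(...)))
    acc = 1.0
    for k in range(factors + 1, 1, -1):   # innermost factor first
        acc = 1.0 - x * x / (2 * k * (2 * k + 1)) * acc
    return x ** 3 / 6 * acc

x = 1e-4
print(x - math.sin(x))   # direct: heavy cancellation of nearly equal numbers
print(x_minus_sin(x))    # series: full precision, close to x^3/6
```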
7.2. Numerical stability. Another theme that occurs repeatedly in numerical analysis is the distinction between numerical algorithms that are stable and those that are not. Informally speaking, a numerical process is unstable if small errors made at one stage of the process are magnified and propagated in subsequent stages, seriously degrading the accuracy of the overall calculation. Whether a process is stable or unstable should be decided on the basis of the relative error.
7.3. Conditioning. The words condition and conditioning are used to indicate how sensitive the solution of a problem may be to small changes in the input data. A problem is ill-conditioned if small changes in the data can produce large changes in the results. For certain types of problems, a condition number can be defined. If that number is large, it indicates an ill-conditioned problem; in contrast, if the number is modest, the problem is recognized as well-conditioned.
The condition number can be calculated in the following manner:
\[
\kappa = \frac{\left| \dfrac{f(x) - f(x^*)}{f(x)} \right|}{\left| \dfrac{x - x^*}{x} \right|}
\approx \left| \frac{x f'(x)}{f(x)} \right|.
\]
For example, take
\[
f(x) = \frac{1}{1 - x^2}, \qquad
\kappa = \left| \frac{x f'(x)}{f(x)} \right| = \frac{2x^2}{|1 - x^2|}.
\]
The condition number can be quite large for |x| \approx 1; therefore, the function is ill-conditioned there.
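A sketch computing κ = |x f′(x)/f(x)| for this example, with the derivative supplied by hand:

```python
def condition_number(f, dfdx, x: float) -> float:
    # kappa = |x * f'(x) / f(x)|
    return abs(x * dfdx(x) / f(x))

f = lambda x: 1.0 / (1.0 - x * x)
dfdx = lambda x: 2.0 * x / (1.0 - x * x) ** 2

print(condition_number(f, dfdx, 0.999))  # large (~1e3): ill-conditioned near |x| = 1
print(condition_number(f, dfdx, 0.1))    # ~0.02: well-conditioned
```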
Remarks
(1) Accuracy tells us the closeness of the computed solution to the true solution of the problem. Accuracy depends on the conditioning of the problem as well as the stability of the algorithm.
(2) Stability alone does not guarantee accurate results: applying a stable algorithm to a well-conditioned problem yields an accurate solution.
(3) Inaccuracy can result from applying a stable algorithm to an ill-conditioned problem, or an unstable algorithm to a well-conditioned problem.
Exercises
(1) Assume 3-digit mantissa with rounding
(a) Evaluate y = x3 3x2 + 4x + 0.21 for x = 2.73.
(b) Evaluate y = [(x 3)x + 4]x + 0.21 for x = 2.73.
Compare and discuss the errors obtained in part (a) and (b).
(2) Given f(x) = \dfrac{1}{x} - \dfrac{1}{x+1}. Assume a 3-decimal mantissa with rounding.
(a) Evaluate f(1000) directly.
(b) Evaluate f(1000) as accurately as possible using an alternative approach.
(c) Find the relative error of f(1000) in parts (a) and (b).
(3) Associativity does not necessarily hold for floating-point addition (or multiplication).
Let a = 0.8567 \times 10^0, b = 0.1325 \times 10^4, c = -0.1325 \times 10^4. Then
a + (b + c) = 0.8567 \times 10^0, while (a + b) + c = 0.1000 \times 10^1.
The two answers are NOT the same! Show the calculations.
(4) Find the smaller root of the equation
x^2 - 400x + 1 = 0
using four-digit rounding arithmetic.
(5) Discuss the condition number of the polynomial function f(x) = 2x^2 + x - 1.