
Order of Convergence

Richardson Extrapolation
Log-Log charts

January 19, 2006

1 Order of convergence
The following two series can be used to calculate π:

    π²/6 = Σ_{n=1}^∞ 1/n²                               (1)

    (π² − 8)/16 = Σ_{n=1}^∞ 1/((2n + 1)²(2n − 1)²)      (2)

But which series is better? It is the one that converges faster. Of these series,
the second one is likely to converge faster since its terms decay roughly at the
rate of 1/n⁴ compared to 1/n² for the first series.
How can we quantify the rate of convergence and confirm our guess? We
can do so experimentally. Let us begin by studying the first series. Later we
will apply the same arguments to the second series.
The following table contains partial sums of the first series (multiplied by
6, with the square root taken) for N = 10, 100, 1000, 10000, 100000, 1000000
terms:
    N          S_N = √(6 Σ_{n=1}^N 1/n²)     π − S_N
    10         3.0493616359820696318         0.0922310176077236067
    100        3.1320765318091059044         0.0095161217806873341
    1000       3.1406380562059931231         0.0009545973838001154
    10000      3.1414971639472092032         0.0000954896425840353
    100000     3.1415831043264409591         0.0000095492633522794
    1000000    3.1415916986604670200         0.0000009549293262185
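The partial sums in this table can be reproduced with a few lines of code. The sketch below uses Python rather than the Matlab that appears later in these notes; the helper name `S` is ours, not from the notes.

```python
import math

def S(N):
    """Partial sum of series (1), rescaled to estimate pi:
    S_N = sqrt(6 * sum_{n=1}^N 1/n^2)."""
    return math.sqrt(6 * sum(1.0 / n**2 for n in range(1, N + 1)))

# Reproduce the first few rows of the table.
for N in (10, 100, 1000):
    print(N, S(N), math.pi - S(N))
```

The printed errors match the third column of the table to the precision of floating-point arithmetic.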

There are several important things to observe.


1. The series, in fact, converges to π.

2. It does so rather slowly. In order to reduce the "error" by a factor of 10
we need to include 10 times as many terms in the series. In other words, in order
to get one more digit right, we need 10 times as many terms. If the pattern
continues (and it does) we would need to include 10¹⁶ terms in the series in
order to obtain an estimate for π with 16 correct digits. This would take days.
3. There is something magical about the error. With each step, it decays
not by "roughly 10", but almost by "precisely 10". The digits ...95 appear from
N = 10 on while the digits ...954 appear from N = 100 on, and things get even
more stable later on.
The last observation is very important. It leads us to believe that the partial
sums S_N behave in the following way:

    S_N = π + A/N + smaller terms, probably B/N², C/N³, etc.
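We can test this conjecture directly: if S_N = π + A/N + ..., then N·(π − S_N) should settle toward a constant (namely −A). A Python sketch (the helper name `S` is ours, not from the notes):

```python
import math

def S(N):
    # S_N = sqrt(6 * sum_{n=1}^N 1/n^2), the rescaled partial sum of series (1).
    return math.sqrt(6 * sum(1.0 / n**2 for n in range(1, N + 1)))

# If the error behaves like A/N, this product should approach a constant.
for N in (100, 1000, 10000):
    print(N, N * (math.pi - S(N)))
```

The printed values settle toward roughly 0.955, supporting the conjectured A/N leading term.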

2 Richardson Extrapolation
We have

    S_N     = π + A/N      + ...
    S_{10N} = π + A/(10N)  + ...

We can solve for π!

    π = (10 S_{10N} − S_N)/9 + ...

So define the following series

    T_N = (10 S_{10N} − S_N)/9

and let us study how well it converges to π. Construct a table like the one above, but
this time for T_N:
    N          T_N = (10 S_{10N} − S_N)/9     π − T_N
    10         3.1412670757898877124          0.00032557779990552612
    100        3.1415893366945361474          0.00000331689525709108
    1000       3.1415926203628998787          0.00000003322689335979
    10000      3.1415926532574667098          0.00000000033232652875
    100000     3.1415926535864699156          0.00000000000332332291
    1000000    ...                            ...
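The extrapolated values T_N can be computed directly from the partial sums. A Python sketch (the function names `S` and `T` are ours, not from the notes):

```python
import math

def S(N):
    # Rescaled partial sum of series (1): sqrt(6 * sum_{n=1}^N 1/n^2).
    return math.sqrt(6 * sum(1.0 / n**2 for n in range(1, N + 1)))

def T(N):
    # One step of Richardson extrapolation: cancels the A/N term in S_N.
    return (10 * S(10 * N) - S(N)) / 9

for N in (10, 100):
    print(N, T(N), math.pi - T(N))
```

The output matches the first two rows of the T_N table.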

Much better convergence! Now we pick up two digits with each factor of 10
in N. If we use this method, we shrink the amount of time that we need
to wait for 16 digits by a factor of 10⁸! Instead of days, it would now take seconds!
But once again we see that the error has very regular behavior. We first
see the digits ...33 repeating, and then it becomes ...332, etc. Therefore, we can
once again conjecture that T_N behaves the following way:

    T_N = π + B/N² + smaller terms, probably C/N³, D/N⁴, etc.
We then have

    T_N     = π + B/N²       + ...
    T_{10N} = π + B/(100N²)  + ...

We can solve for π again:

    π = (100 T_{10N} − T_N)/99 + ...

so define

    U_N = (100 T_{10N} − T_N)/99
Construct a table for U_N:

    N          U_N = (100 T_{10N} − T_N)/99     π − U_N
    10         3.1415925918551891618            6.17346040766891 × 10⁻⁸
    100        3.1415926535312671891            5.8526049392 × 10⁻¹¹
    1000       3.1415926535897350615            5.81770101 × 10⁻¹⁴
    10000      3.1415926535897931802            5.82471 × 10⁻¹⁷
    100000     ...                              ...
    1000000    ...                              ...
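The second extrapolation step can be coded the same way. A Python sketch (the function names `S`, `T`, and `U` are ours, not from the notes):

```python
import math

def S(N):
    # Rescaled partial sum of series (1).
    return math.sqrt(6 * sum(1.0 / n**2 for n in range(1, N + 1)))

def T(N):
    # First Richardson step: cancels the A/N term.
    return (10 * S(10 * N) - S(N)) / 9

def U(N):
    # Second Richardson step: cancels the B/N^2 term of T_N.
    return (100 * T(10 * N) - T(N)) / 99

print(U(10), math.pi - U(10))
```

The output matches the first row of the U_N table, with an error of about 6 × 10⁻⁸ already at N = 10.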

Three digits for every factor of 10 in N .


Could we have done the two steps simultaneously? Yes.
    S_N      = π + A/N      + B/N²        + ...
    S_{10N}  = π + A/(10N)  + B/(100N²)   + ...
    S_{100N} = π + A/(100N) + B/(10000N²) + ...

Put these equations in matrix form:

    [ 1   1/N       1/N²        ] [ π ]   [ S_N      ]
    [ 1   1/(10N)   1/(100N²)   ] [ A ] = [ S_{10N}  ]
    [ 1   1/(100N)  1/(10000N²) ] [ B ]   [ S_{100N} ]

Solve for π (for example, by inverting the matrix):

    π = (1000/891) S_{100N} − (10/81) S_{10N} + (1/891) S_N + ...

Define

    V_N = (1000/891) S_{100N} − (10/81) S_{10N} + (1/891) S_N
and construct the table for V_N:

    N          V_N = (1000/891) S_{100N} − (10/81) S_{10N} + (1/891) S_N     π − V_N
    10         3.1415925918551891618            6.17346040766891 × 10⁻⁸
    100        3.1415926535312671891            5.8526049392 × 10⁻¹¹
    1000       3.1415926535897350615            5.81770101 × 10⁻¹⁴
    10000      3.1415926535897931802            5.82471 × 10⁻¹⁷
    100000     ...                              ...
    1000000    ...                              ...

Not surprisingly, it is the same table!
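We can confirm numerically that the one-shot formula V_N agrees with the two-step extrapolation. A Python sketch (the function names `S`, `U`, and `V` are ours, not from the notes):

```python
import math

def S(N):
    # Rescaled partial sum of series (1).
    return math.sqrt(6 * sum(1.0 / n**2 for n in range(1, N + 1)))

def V(N):
    # Both Richardson steps at once: eliminates A/N and B/N^2 together.
    return 1000.0 / 891 * S(100 * N) - 10.0 / 81 * S(10 * N) + 1.0 / 891 * S(N)

def U(N):
    # The same thing done in two separate extrapolation steps.
    T = lambda M: (10 * S(10 * M) - S(M)) / 9
    return (100 * T(10 * N) - T(N)) / 99

print(V(10), abs(V(10) - U(10)))
```

The two formulas agree to machine precision, as the algebra predicts (note that 1000/891 − 110/891 = 10/81 appears with a minus sign).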

3 How to determine the order of convergence graphically

In the examples above we were able to easily guess how the error scales with N.
But there is a systematic way to determine the nature of the dependence.
Consider the relationship y = x³. We let Matlab plot this relationship for a
few values of x.
>> x = 1:10;
>> y = x.^3;
>> plot(x, y, 'ro-', 'LineWidth', 3); grid on;

[Figure: plot of y = x³ for x = 1, ..., 10 on linear axes]

By simply looking at the graph, it is difficult to tell whether the relationship
between x and y is quadratic, cubic, x⁴, or something altogether different. Poly-
nomial relationships (xⁿ) can be clearly exposed by plotting ln x versus ln y.
If

    y = xⁿ

then

    ln y = n ln x

so the relationship between ln y and ln x is linear and the slope is n. Let us see
this with Matlab:
>> x = 1:10;
>> y = x.^3;
>> plot(log(x), log(y), 'ro-', 'LineWidth', 3); grid on;

[Figure: plot of ln y versus ln x for y = x³]

We observe that, as we expected, the points fall on a straight line. The slope
of the line can be determined by a visual inspection. Note the grid points that
the line passes through. It passes through the point (0, 0) and also through the
point (2, 6). So its slope is 3.
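The same slope can be read off numerically from any two points on the line. A short Python sketch (Python rather than the Matlab used in these notes):

```python
import math

# Two points on the curve y = x^3.
x1, y1 = 1.0, 1.0**3
x2, y2 = 10.0, 10.0**3

# The slope of ln y versus ln x recovers the exponent n in y = x^n.
slope = (math.log(y2) - math.log(y1)) / (math.log(x2) - math.log(x1))
print(slope)  # approximately 3
```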
The only drawback of this approach is that rather than indicating the values
of x and y, the axes indicate the values of ln(x) and ln(y). In order to overcome
this shortcoming, log-log plots were invented. When you ask a piece of software
to build a log-log plot, it will do what we just did (plot the logarithm of the
x-variable against the logarithm of the y-variable) but it will label the axes
according to the original values rather than the logarithms thereof. Here is a
demonstration:
>> x = 1:10;
>> y = x.^3;
>> loglog(x, y, 'ro-', 'LineWidth', 3); grid on;

[Figure: the same data on a log-log plot; the axes are labeled with powers of 10, from 10⁰ to 10³]

Let us now determine in this systematic way the rate of convergence of the
series (1). We'll use the following Matlab code:
>> N = [ 10 100 1000 10000 100000 1000000 ];
>> errSn = [
0.0922310176077236067
0.0095161217806873341
0.0009545973838001154
0.0000954896425840353
0.0000095492633522794
0.0000009549293262185
]; % Note: Matlab only uses the first 16 digits
>> loglog(N, errSn, 'ro-', 'LineWidth', 3); grid on;
And now let us add to this plot the blue line for the error after we have performed
Richardson extrapolation:
>> errTn = [
0.00032557779990552612
0.00000331689525709108
0.00000003322689335979
0.00000000033232652875
0.00000000000332332291
];
>> hold on;
>> loglog(N(1:5), errTn, 'bo-', 'LineWidth', 3); grid on;
The results can be seen in the following figure:

[Figure: log-log plot of the error π − S_N (red) and the post-Richardson error π − T_N (blue) versus N]

We can see by inspection that the slope of the red line (before Richardson) is −1
while the slope of the blue line (after Richardson) is −2. We conclude that
the pre-Richardson error behaves like 1/N while the post-Richardson error
behaves like 1/N².
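The slopes can also be estimated numerically from the tabulated errors instead of by eye. A Python sketch using the endpoint slope of the log-log data (the helper name `loglog_slope` is ours):

```python
import math

# Error data copied from the tables above.
N = [10, 100, 1000, 10000, 100000, 1000000]
errSn = [0.0922310176077236067, 0.0095161217806873341, 0.0009545973838001154,
         0.0000954896425840353, 0.0000095492633522794, 0.0000009549293262185]
errTn = [0.00032557779990552612, 0.00000331689525709108, 0.00000003322689335979,
         0.00000000033232652875, 0.00000000000332332291]

def loglog_slope(xs, ys):
    # Slope of log(y) versus log(x) between the first and last data points.
    return (math.log(ys[-1]) - math.log(ys[0])) / (math.log(xs[-1]) - math.log(xs[0]))

print(loglog_slope(N, errSn))      # close to -1
print(loglog_slope(N[:5], errTn))  # close to -2
```

The computed slopes come out near −1 and −2, confirming the 1/N and 1/N² behavior read off the plot.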
