Академический Документы
Профессиональный Документы
Культура Документы
Naoki Osada
Acceleration Methods
for Slowly Convergent Sequences
and their Applications
January 1993
Naoki Osada
CONTENTS
Introduction
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. Asymptotic preliminaries . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . .
10
. . . . . . . . . . . . . . . . . . . . .
13
. . . . . . . . . . . . . . . . . . . . . . .
13
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
14
15
. . . . . .
17
3. Innite series . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19
19
23
4. Numerical integration
. . . . . . . . . . . . . . . . . . . . . . . .
28
28
29
. . . . . . . . . . . .
31
. . . . . . . . . . . .
33
. . . . . . . . . . . . . . . . . . . . . . . . . . .
33
5. Basic concepts
33
. . . . . . . . .
34
. . . . . . . . . . . . . . . . . . . . . . . . . .
35
. . . . . . . . . . . . . . . . .
35
. . . . . . . . . . . .
37
39
39
40
. . . . . . . . . . .
43
8. The -algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . .
50
. . . . . . . . . . . . . . . . . . . .
50
. . . . . . . . . . . . . . . . . . . . . . . . .
51
. . . . . . . . . . . .
52
. . . . . . . . . . . . . . .
53
. . . . . . . . . . . . . . . . . . . . . . .
57
. . . . . . . . . . . .
57
. . . . . . . .
58
. . . . . . . . . . . . . . . . . . . . . . .
61
. . . . . . . . . . . . . .
64
64
. . . . . . . . . .
66
. . . . . . . . . . . . .
68
. . . . . . . . . . . . . . . . . . . . .
72
. . . . . . . . . . . .
72
74
75
. . . . . . . . . . . . . . .
. . . . . . . .
77
. . . . . . . . . . . . . . . . . . . . . . . . . .
80
. . . . . . . . . . .
80
. . . . . . . . . . . .
82
. . . . . . . . . . . . . . . . . .
87
. . . . . . . . . . . . . . . . . . .
87
90
. . . . . . . . . . . . . . . . .
93
. . . . . . . . . . . . . . . . . . . . . . . .
93
. . . . . . . . . . . . . . . . . . . . . . . . . . .
94
. . . . . . . . . . . . . . . . . . . . . . . .
95
. . . . . . . . . . . . . . . . . . . . . . . . . . .
98
14.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . .
15. Application to numerical integration
100
. . . . . . . . . . . . . . . .
101
. . . . . . . . . . . . . . . . . . . . . . . . .
101
102
. . . . . . . . . . . . . . . .
106
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
108
15.1 Introduction
ii
References
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
111
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
115
. . . . . . . . . . . .
FORTRAN program
115
. . . . . . . . .
117
. . . . . . . . . . . . . . . . . . . . . . . .
120
iii
INTRODUCTION
Sequence transformations
Convergent numerical sequences occur quite often in natural science and engineering.
Some of such sequences converge very slowly and their limits are not available without
a suitable convergence acceleration method. This is the raison detre of the study of the
convergence acceleration method.
A convergence acceleration method is usually represented as a sequence transformation. Let S and T be sets of real sequences. A mapping T : S T is called a sequence
transformation, and we write (tn ) = T (sn ) for (sn ) S . Let T : S T be a sequence
transformation and (sn ) S . T accelerates (sn ) if
tn s
= 0,
n s(n) s
lim
(sn+1 sn )2
(sn )2
= sn
,
sn+2 2sn+1 + sn
2 sn
(1)
where (sn ) is a scalar sequence. As C. Brezinski pointed out1 , the rst proposer of the 2
process was the greatest Japanese mathematician Takakazu Seki (or K
owa Seki,
1642?-1708). Seki used the 2 process computing in Katsuy
o Samp
o vol. IV
, which was edited by his disciple Murahide Araki in 1712. Let sn be the
perimeter of the polygon with 2n sides inscribed in a circle of diameter one. From
s15 = 3.14159 26487 76985 6708,
s16 = 3.14159 26523 86591 3571,
s17 = 3.14159 26532 88992 7759,
Seki computed
1 C.
t15 = s16 +
(2)
and he concluded = 3.14159 26535 89.2 The formula (2) is nothing but the 2 process.
Seki obtained seventeen-gure accuracy from s15 , s16 and s17 whose gure of accuracy is
less than ten.
Seki did not explain the reason for (2), but Yoshisuke Matsunaga (1692?1744), a disciple of Murahide Araki, explained it in Kigenkai , an annotated edition
of Katsuy
o Samp
o as follows. Suppose that b = a + ar, c = a + ar + ar2 . Then
b+
(b a)(c b)
a
=
,
(b a) (c b)
1r
1 2
1 1
)2
n2 .
(3)
(1)j 2j+1 2j n
=
+
(2 ) ,
2n
(2j
+
1)!
j=1
1
16
)n+1
.
Hirayama, K. Shimodaira, and H. Hirose(eds.), Takakazu Sekis collected works, English translation
by J. Sudo, (Osaka Kyoiku Tosho, 1974). pp.57-58.
3 M. Fujiwara, History of mathematics in Japan before the Meiji era, vol. II (in Japanese), under the
auspices of the Japan Academy, (Iwanami, 1956) (= , , , ).
p.180.
In 1926, A. C. Aitken[1] iteratively applied the 2 process nding the dominant root
of an algebraic equation, and so it is now named after him. He used (1) repeatedly as
follows:
(n)
T0
n N,
= sn ,
(n+1)
(n)
(n)
Tk+1 = Tk
(Tk
(n+2)
Tk
(n)
Tk )2
(n+1)
2Tk
(n)
+ Tk
k = 0, 1, . . . ; n N.
n,
(4)
i = 0, . . . , k,
(n)
tn = Ek
sn
g1 (n)
g (n)
= k
1
g1 (n)
gk (n)
gk (n + 1)
1
g1 (n + 1)
gk (n + 1)
sn+1
g1 (n + 1)
sn+k
g1 (n + k)
gk (n + k)
g1 (n + k)
gk (n + k)
is exact for the model sequence (4). This sequence transformation includes many famous
sequence transformations as follows:
(i) The Aitken 2 process : k = 1 and g1 (n) = sn .
(ii) The Richardson extrapolation : gj (n) = xjn , where (xn ) is an auxiary sequence.
(iii) Shanks transformation : gj (n) = sn+j1 .
(iv) The Levin u-transformation : gj (n) = n1j sn1 .
In 1979 and 1980, T. H
avie[20] and C. Brezinski[10] gave independently a recursive
(n)
cj gj (n),
(5)
j=1
where s, c1 , c2 , . . . are unknown constants and (gj (n)) is a known asymptotic scale. More
precisely, in 1990, A. Sidi[53] proved that if the E-algorithm is applied to the sequence
(5), then for xed k,
(n)
Ek
(n)
Ek1 s
(
=O
gk+1 (n)
gk (n)
)
,
as n .
(6)
cj
sn s + n
,
nj
j=0
(7)
B2j
1 1
1
1
1 + + + + log n + +
,
2 3
n
2n j=1 2jn2j
(8)
where is the Euler constant and B2j are the Bernoulli numbers. Since the sequence
n 1
i=1 i log n satises (7), the formula (8) is the earliest acceleration for a logarithmic
sequence satisfying (7).
For the partial sum sn =
n
i=1
(9)
where A1 (> 1), A2 , A3 , . . . are constants. We note that an asymptotic expansion (7)
implies (9). They assumed
s sn sn1
)
2
1
+ 2 + ... ,
1 n + 0 +
n
n
c2
c4
+ 4 + ....
2
n
n
s5 e
16
=
,
s4 e
25
cj
.
nj
j=0
(10)
formula dened by
(n)
s0
= sn
(n+1)
(n)
(n)
(n)
(n1)
sk )(sk sk
2k + 1 (sk
(n+1)
(n)
(n1)
2k
s
2s + s
(n)
sk+1 = sk
and proved that if it is applied to a sequence satisfying (7), then for xed k,
(n)
sk s = O(n2k ),
as n .
1 = 0,
(n)
(n)
(n+1)
= k2 +
= sn
k1
(n+1)
(n)
k1 k1
and proved that if it is applied to a sequence satisfying (7), then for xed k,
(n)
2k s = O((n + k)2k ),
as n .
The modied Aitken 2 formula and the generalized -algorithm require the knowledge of the exponent in (7). But Osada showed that can be computed using these
methods as follows. For a given sequence (sn ) satisfying (7), n denotes by
(
n = 1 +
1
),
sn
2 sn1
then Bjrstad, Dahlquist and Grosse[4] proved that the sequence (n ) has the asymptotic
expansion of the form
n + n
tj
.
nj
j=0
Thus by applying these methods with the exponent 2 to (n ), the exponent in (7)
can be estimated.
Organization of this paper
In Chapter I, the asymptotic properties of slowly convergent scalar sequences are
dealt, and examples of sequence, which will be taken up in Chapter II as an objective of
application of acceleration methods, are given. In Section 1, asymptotic preliminaries,
i.e., O-symbol, the Euler-Maclaurin summation formula and so on, are introduced. In
Section 2, some terminologies for slowly convergent sequences are given. In Section 3
and 4, partial sums of innite series, and numerical integration are taken up as slowly
convergent scalar sequences.
In Chapter II, acceleraton methods for scalar sequences are dealt. Taken up methods
are as follows:
(i) Methods added some new results by the author; the -algorithm, the generalized
-algorithm, and the modied Aitken 2 formula.
(ii) Other important methods; the E-algorithm, the Richardson extrapolation, the
-algorithm, Levins transforms, the d transform, the Aitken 2 process, and the Lubkin
W transform.
For these methods, the derivation, convergence theorem or asymptotic behaviour,
and numerical examples are given. For methods mentioned in (i), details are described,
7
but for others only important facts are described. For other information, see Brezinski[9],
Brezinski and Redivo Zaglia[11], Weniger[56][57], and Wimp[58].
In Section 14, these methods are compared using numerical examples of innite
series. In Section 15, these methods are applied to numerical integration.
For conveniences sake, FORTRAN subroutines of the automatic generalized algorithm and the automatic modied Aitken 2 formula are appended.
The numerical computations in this paper were carried out on NEC ACOS-610
computer at Computer Science Center of Nagasaki Institute of Applied Science in double
precision with approximately 16 digits unless otherwise stated.
(1.1)
x V.
(1.2)
f (x) = o(g(x)) as x b,
(1.3)
And we write
x V .
(1.4)
In the rest of this subsection b is xed and the qualifying phrase as x b is omitted.
Let f1 (x) f2 (x) and f3 (x) be innitesimal at b. We write f1 (x) = f2 (x) + O(f3 (x)),
if f1 (x)f2 (x) = O(f3 (x)). Similarly we write f1 (x) = f2 (x)+o(f3 (x)), if f1 (x)f2 (x) =
o(f3 (x)).
If f (x)/g(x) tends to unity, we write
f (x) g(x).
Then g is called an asymptotic approximation to f .
9
(1.5)
(1.6)
is valid. Let (fn (x)) be an asymptotic scale dened in V . Let f (x) be a function dened
in V . If there exist constants c1 , c2 , . . . , such that
f (x) =
(1.7)
k=1
ck fk (x),
(1.8)
k=1
and (1.8) is called an asymptotic expansion of f (x) with respect to (fn (n)). We note that
(1.8) implies f (x) c1 f1 (x) in the sense of (1.5).
If f (x) has an asymptotic expansion (1.8), then the coecients (ck ) are unique:
c1 = lim f (x)/f1 (x),
xb
(
)
n1
cn = lim (f (x)
ck fk (x))/fn (x) ,
xb
(1.9a)
n = 2, 3, . . . .
(1.9b)
k=1
k=1 ck fk (x),
we often write
f (x) g(x) +
ck fk (x)
(1.10)
k=1
x
Bn n
=
x ,
x
e 1 n=0 n!
10
|x| < 2,
(1.11)
where the left-hand side of (1.11) equals 1 when x = 0. The Bernoulli numbers are
computed recursively by
B0 = 1
n1
(n)
Bk = 0
k
(1.12a)
n = 2, 3, . . . .
(1.12b)
k=0
(1.13)
It is known that
|B2j |
4(2j)!
(2)2j
for j N.
(1.14)
Tn
a
p
)
B2j 2j ( (2j1)
f (x)dx =
h
f
(b) f (2j1) (a) +O(h2p+2 ), as h +0, (1.15a)
(2j)!
j=1
where
h = (b a)/n,
Tn = h
)
n1
1
1
f (a) +
f (a + ih) + f (b) .
2
2
i=1
(1.15b)
i=0
n+m
f (n + i) =
f (x)dx +
n
1
(f (n) + f (n + m))
2
p
)
B2j ( (2j1)
f
(n + m) f (2j1) (n) + Rp (n, m),
(2j)!
j=1
11
(1.16)
4e2
|Rp (n, m)|
(2)2p+1
n+m
|f (2p+1) (x)|dx.
(1.17)
4e2
|f (2p) (n + m) f (2p) (n)|.
2p+1
(2)
(1.18)
Theorem 1.3
Let f (x) be a function of C 2p+1 class in [n, n + 2m]. Then
2m
(1)i f (n + i) =
i=0
1
(f (n) + f (n + 2m))
2
p
)
4e2 (22p+1 + 1)
|Rp (n, m)|
(2)2p+1
n+2m
|f (2p+1) (x)|dx.
(1.20)
(1.21)
The following theorem gives the asymptotic expansion of the midpoint rule.
Theorem 1.4
Let f (x) be a function of C 2p+2 class in a closed interval [a, b]. Then the following
asymptotic formula is satised.
Mn
a
p
)
+ O(h2p+2 ), as h +0,
where
h = (b a)/n,
Mn = h
(1.22a)
1
f (a + (i )h).
2
i=1
(1.22b)
|sn+1 s|
B
|sn s|p
n n0 .
(2.1)
2-nd order convergence is usually called a quadratic convergence and 3-rd order convergence is sometimes called a cubic convergence.
If there exist C > 0 and n0 N such that
|sn+1 s| C|sn s|p
n n0 ,
(2.2)
then (sn ) is said to have at least order p or be at least p-th order convergence.
As is well known, under suitable conditions, Newtons iteration
sn+1 = sn
f (sn )
f 0 (sn )
(2.3)
sn sn1
f (sn ) f (sn1 )
(2.4)
A p-th order convergent sequence (sn ) is said to have the asymptotic error constant
C > 0 if
|sn+1 s|
= C.
n |sn s|p
lim
(2.5)
We note that p-th order convergent sequences not necessarily have the asymptotic error
constant. For example, let (sn ) be a sequence dened by
)
(
1 1
n
+ (1) (sn s)p + s.
sn+1 =
2 4
13
(2.6)
If p = 1, or p > 1 and 0 < 31/(1p ) (3/4)1/(p1) (s0 s) < 1, then (sn ) converges to s and
satises (2.1) with A = 1/4, B = 3/4, but |sn+1 s|/|sn s|p does not converge.
If (sn ) has order p > 1 and |sn+1 s|/|sn s| converges, then
sn+1 s
= 0.
n sn s
lim
(2.7)
0 < || < 1.
(2.8)
sn+1 s
= ,
n sn s
1 < 1, 6= 0,
lim
(2.9)
(sn ) is called a linearly convergent sequence and is called the rate of contraction.
In practice, linearly convergent sequences occur in the following situations:
(i) Partial sums sn of alternating series satises (2.9) with the rate of contraction
= 1.
(ii) Suppose that f (x) has a zero of multiplicity m > 1 and is of class C 2 in a
neighbourhood of . If s0 is suciently close to , then Newtons iteration (2.3) converges
linearly to with the rate of contraction 1 1/m.
(iii) Suppose that an equation x = g(x) has a xed point , g(x) is of class C 2 in
a neighbourhood of and 0 < |g 0 ()| < 1. If s0 is suciently close to , then (sn )
generated by sn+1 = g(sn ) converges linearly to with the rate of contraction g 0 ().
The convergence of a linearly convergent sequence whose asymptotic error constant
is close to 1 is so slow that it is necessary to accelerate the convergence. However, it is
easy to accelerate linearly convergent sequences. For example, the Aitken 2 process can
accelerate any linealy convergent sequence(Henrici[21]).
Some linearly convergent sequences have an asymptotic expansion of the form
sn s +
cj jn ,
j=1
14
as n ,
(2.10)
where c1 , c2 , . . . and 1 , 2 , . . . are constants independent of n. A sequence (sn ) satisfying (2.10) with known constants 1 , 2 , . . . can be quite eciently accelerated by the
Richardson extrapolation. When 1 , 2 , . . . in (2.10) are unknown, the sequence can be
eciently accelerated by the -algorithm.
Other some linearly convergent sequences such as the partial sums of certain altenating series (see Theorem 3.3) have an asymptotic expansion of the form
sn s + n n
cj
,
j
n
j=0
as n ,
(2.11)
sn+1 s
=1
sn s
(2.12)
1
sn =
,
j
j=1
> 1.
(2.13)
cj
,
j
n
j=0
(2.14)
cj
sn s + n
,
nj
j=0
(2.15)
where < 0 and c0 (6= 0), c1 , . . . are constants. Some other logarithmic sequences (sn )
have the asymptotic expansion of the form
sn s + n (log n)
j=0 i=0
15
ci,j
,
(log n)i nj
(2.16)
where < 0 or = 0 and < 0, and c0,0 (6= 0), ci,j , . . . are constants. The asymptotic
formula (2.15) or (2.16) is a special case of the following one
sn = s + O(n ),
as n ,
(2.17)
or
sn = s + O(n (log n) ),
as n ,
(2.18)
respectively. When in (2.17) or (2.18) is close to 0, (sn ) converges very slowly to s, but
when is suciently large, e.g. > 10, (sn ) converges rapidly to s.
Furthermore, logarithmic sequences which have the asymptotic formula
sn = s + O((log(log n)) ),
as n ,
( < 0)
(2.19)
occur in some literatures. Sequences satisfying (2.18) with = 0 or (2.19) converge quite
slowly.
According to their origin we can classify practical logarithmic sequences into the
following two categories:
(a) one from continuous problems by applying a discretization method.
(b) one from discrete problems.
There are many numerical problems in the class (a) such as numerical dierentiation, numerical integration, ordinary dierential equations, partial dierential equations,
integral equations and so on. Let s be the true value of such a problem and h a mesh
size. For many cases an approximation T (h) has an asymptotic formula
T (h) = s +
cj hj + O(hk+1 ),
(2.20)
j=1
where c1 , . . . , ck , 0 < 1 < < k+1 are constants. Setting sn = T (c/n) and dj = cj cj
for c > 0, we have
sn = s +
dj nj + O(nk+1 ),
(2.21)
j=1
=s+
dj nj + O(nk+1 ),
j=1
16
(2.22)
Tn =
f (x)dx +
a
cj (22j )n + O((22p2 )n ),
as n ,
(2.23)
j=1
(2.24)
j=1
such that
gj+1 (n)
= 0,
n gj (n)
j = 1, 2, . . . ,
(2.25)
gj (n + 1)
= 1,
n
gj (n)
j = 1, 2, . . . .
(2.26)
lim
and
lim
A double sequence (gj (n)) satisfying (2.25) and (2.26) is called an asymptotic logarithmic scale.
2.4 Criterion of linearly or logarithmically convergent sequences
The formulae (2.9) and (2.12) involve the limit s, so that they are of no use in
practice. However, under certain conditions, (2.9) and (2.12) can be replaced sn s with
sn = sn+1 sn . Namely, if lim sn+1 /sn = , and if one of the following conditions
n
(i) (Wimp[58]) 0 < || < 1, or || > 1, (For the divergence case || > 1, s can be
any number.)
17
(2.27)
18
(2.28)
3 Innite series
Most popular slowly convergent sequences are alternating series as well as logarithmically convergent series. In this section we describe the asymptotic expansions of innite
series and give examples that will be used as test problems.
3.1 Alternating series*
We begin with the denition of the property (C) that is a special case of Widders
completely monotonic functions4 .
Let a > 0. A function f (x) has the property (C) in (a, ), if the following four
conditions are satised:
(i) f (x) is of class C in (a, ),
(ii) (1)r f (r) (x) > 0 for x > a, r = 0, 1, . . . (completely monotonic),
(iii) f (x) 0 as x ,
(iv) for r = 0, 1, . . . , f (r+1) (x)/f (r) (x) 0 as x .
If f (x) has the property (C), then for r = 0, 1, . . . , f (r) (x) 0 as x , thus
(f (r) (x))r=0,1,... is an asymptotic scale.
Theorem 3.1 Suppose that a function f (x) has the property (C) in (a, ). Then
1
B2j (22j 1) (2j1)
(1) f (n + i) f (n)
f
(n),
2
(2j)!
i=0
j=1
i
as n ,
(3.1)
i=0
(1)i f (n + i) =
1
(f (n) + f (n + 2m))
2
+
where
p
)
4e2 (22p+1 + 1)
|Rp (n, m)|
(2)2p+1
n+2m
|f (2p+1) (x)|dx.
(3.3)
*The material in this subsection is taken from the authors paper: N. Osada, Asymptotic expansions and
acceleration methods for alternating series (in Japanese), Trans. Inform. Process. Soc. Japan, 28(1987)
No.5, pp.431436. ( = , , )
4 See, D. V. Widder, The Laplace transform, (Princeton, 1946), p.145.
19
(3.4)
Letting m , the series in the left-hand side of (3.2) converges and f (r) (n + 2m) 0
for r = 0, 1, . . . . So we obtain
1
B2j (22j 1) (2j1)
(1) f (n + i) = f (n)
f
(n) + O(f (2p) (n)).
2
(2j)!
i=0
j=1
i
(3.5)
(1)i1 f (i),
(3.6)
i=1
where f (x) has the property (C) in (a, ) for some a > 0. Let sn be the n-th partial sum
of (3.6). Then the following asymptotic expansion holds:
2j
1
B2j (2 1) (2j1)
sn s (1)n1 f (n) +
f
(n) ,
2
(2j)!
j=1
as n .
(3.7)
Proof. Since
sn s = (1)
n1
f (n) + (1)
(1)i1 f (n + i 1),
(3.8)
i=1
Notation (Levin and Sidi[27]) Let < 0. We denote A () by the set of all functions of
class C in (a, ) for some a > 0 satisfying the following two conditions:
(i) f (x) has the asymptotic expansion
f (x) x
aj
,
j
x
j=0
as x ,
(3.9)
(ii) Derivatives of any order of f (x) have asymptotic expansions, which can be
obtained by dierentiating that in (3.9) formally term by term.
20
Theorem 3.3 Suppose that f (x) A () has the property (C). Let the asymptotic expan
sion of f (x) be (3.9). Let sn be the n-th partial sum of the series s = i=1 (1)i1 f (i).
Then sn s has the asymptotic expansion
cj
n
,
j
n
j=0
sn s (1)
n1
where
1
cj = aj +
2
b(j+1)/2c
k=1
as n ,
2k1
B2k (22k 1)
( j + i)
aj+12k
(2k)!
i=1
(3.10)
(3.11)
m=0
am
(2k1
)
( m + 1 i) xm2k+1 .
(3.12)
i=1
1
( m + 1 i),
am
cj = aj +
2
(2k)!
i=1
(3.13)
k,m
where the summation in the right-hand side of (3.13) is taken all integers k, m such that
k 1, m 0, m + 2k 1 = j.
(3.14)
Since the solutions of (3.14) are only k = 1, 2, . . . , b(j + 1)/2c, we obtain the desired
result.
Using Theorem 3.2 and Theorem 3.3, we can obtain the asymptotic expansions of
typical alternating series.
Example 3.1 In order to illustrate Theorem 3.2, we consider rst a very simple example,
log 2 =
(1)i1
i=1
(3.15)
Let f (x) = 1/x and sn be the n-th partial sum of (3.15). Since f (2j1) (x) = (2j
1)!x2j , by (3.7) we have
2j
1
B2j (2 1)
sn log 2 (1)n1
.
2n j=1 (2j)n2j
21
(3.16)
n1
1
n
)
1
1
1
1
17
1
+
5+
+ O( 9 ) .
2 4n 8n3
4n
16n7
n
(3.17)
Example 3.2 In order to illustrate Theorem 3.3, we next consider the Leibniz series
(1)i1
=
.
4
2i
1
i=1
Since
(3.18)
1
1
1
=
1+
,
j
2x 1
2x
(2x)
j=1
|x| >
1
,
2
(3.19)
1
sn = (1)n1
4
n
)
1
1
5
61
1385
1
+
+ O( 10 ) ,
4 16n2
64n4
256n6
1024n8
n
(3.20)
(1)i1
i=1
> 0, 6= 1,
(3.21)
where () is the Riemann zeta function. For 0 < < 1, (3.21) is justied by analytic
continuation.5 Putting f (x) = 1/x in Theorem 3.2, we have
1
B2j (2 1) (2j1)
sn (1 21 )() (1)n1 f (n) +
f
(n) ,
2
(2j)!
j=1
2j
(3.22)
where sn is the n-th partial sum of (3.21). The rst 4 terms of the right-hand side of
(3.22) are as follows:
sn (1 21 )()
(
)
1
= (1)
n 2 4n
48n3
480n5
1
+ O( +7 ).
(3.23)
n
5 See,
E. C. Titchmarsh, The theory of the Riemann zeta function, 2nd ed. (Clarendon Press, Oxford,
1986), p.21.
22
the n-th partial sum of the series s = i=1 f (i). Suppose that both the innite integral
sn s =
1
B2j (2j1)
f (x)dx + f (n) +
f
(n) + O(f (2p) (n)),
2
(2j)!
j=1
as n , (3.24)
n+m
f (n + i) =
f (x)dx +
n
i=0
1
(f (n) + f (n + m))
2
p
)
B2j ( (2j1)
+
f
(n + m) f (2j1) (n) + Rp (n, m)
(2j)!
j=1
where
|Rp (n, m)|
4e2
|f (2p) (n + m) f (2p) (n)|.
(2)2p+1
(3.25)
(3.26)
Letting m , the series in the left-hand side of (3.25) converges and f (r) (n+m)
0 for r = 0, 1, . . . . So we obtain
s sn1 =
1
B2j (2j1)
f (x)dx + f (n)
f
(n) + O(f (2p) (n)).
2
(2j)!
j=1
(3.27)
Theorem 3.5 Let f (x) be a function belonging to A () with < 1. Let the asymptotic
expansion of f (x) be
aj
f (x) x
,
j
x
j=0
as x ,
(3.28)
**The material in this subsection is taken from the authors paper: N. Osada, Asymptotic expansions
and acceleration methods for logarithmically convergent series (in Japanese), Trans. Inform. Process. Soc.
Japan, 29(1988) No.3, pp.256261. ( = , , )
23
f (x)dx
n
i=1
converge. Then the n-th partial sum sn has the asymptotic expansion of the form
sn s n
+1
cj
,
j
n
j=0
as n ,
(3.29)
where
a1
a0
+
2
bj/2c
2k1
B2k
aj
1
cj =
+ aj1 +
aj2k
( j + 1 + l),
+1j
2
(2k)!
c0 =
a0
,
+1
c1 =
k=1
(j > 1)
l=1
(3.30)
sn s
1
B2j (2j1)
f
(n),
f (x)dx + f (n) +
2
(2j)!
j=1
as n .
(3.31)
f (x)dx
k=0
ak
n+1k ,
+1k
(3.32)
k=0
ak
(2j1
)
( k + 1 l) nk2j+1 .
(3.33)
l=1
By (3.31),(3.32) and (3.33), the coecient of n+1j in (3.29) coinsides with cj in (3.30).
This completes the proof.
Using Theorem 3.4, we can obtain the well-known asymptotic expansion of the
Riemann zeta function.
Example 3.4 (The Riemann zeta function) Let us consider
n
1
sn =
,
i
i=1
24
> 1,
(3.34)
1
1 B2j
1
+
n
n +
1
2
(2j)!
j=1
(2j1
)
( + l 1) n2j+1 ,
(3.35)
l=1
thus we obtain
1
1 B2j
1
sn s
n
+ n
( + 1) ( + 2j 2)n2j+1 .
1
2
(2j)!
j=1
(3.36)
In particular, if = 2 then
2
1
sn
=
6
n
)
1
1
1
1
1
1
1 +
+
+ O( 10 ) , as n . (3.37)
2n 6n2
30n4
42n6
30n8
n
i + e1/i
)2
(3.38)
i=1
f (n) n
1 +
j=1
2
1
(j 1)!nj
2
j=0
aj
,
nj
, we have
(3.39)
where the coecients aj in (3.39) are given in Table 3.1. By Theorem 3.5, the n-th
partial sum sn has the asymptotic expansion of the form
sn s n
1 2
j=0
cj
,
nj
(3.40)
where s = 1.71379 67355 40301 48654 and the coecients cj in (3.40) are also given in
Table 3.1.
25
Table 3.1
Coecients aj in (3.39) and cj in (3.40)
j
0
1
2
3
4
5
6
7
8
aj
1
2
1 (1/2)2
1 (1/6) 2
1/12 (5/12)
2
(23/120) 2
11/45 + (7/60)2
1/72 (1/240) 2
19/160 (221/2520) 2
cj
1 2
3/2
2 (25/12)
2
1/2 + (1/2) 2
227/840 + (71/630) 2
41/168 (37/168)
2
67/4968 + (173/6210) 2
3367/12240 + (5111/24480) 2
2924867/14212800 (106187/676800) 2
There are logarithmic terms in the asymptotic expansions of the following series.
Example 3.6 Let us consider
sn =
log i
i=2
> 1.
(3.41)
Since d/d(i ) = log i/i , sn converges to 0 (), where 0 (s) is the derivative of
the Riemann zeta function. Let f (x) = log x/x . Then
log x + 1
,
x+1
( + 1) log x 2 1
f 00 (x) =
,
x+2
( + 1)( + 2) log x + 32 + 6 + 2
f 000 (x) =
.
x+3
f 0 (x) =
(3.42a)
(3.42b)
(3.42c)
By Theorem 3.4,
0
sn + () =
log x
log n B2j (2j1)
dx
+
+
f
(n) + O(f (2p) (n)),
x
2n
(2j)!
j=1
p
(3.43)
thus
sn + 0 () =
log n
1
1
(1 )n
(1 )2 n1
log n log n + 1 ( + 1)( + 2) log n + 32 + 6 + 2
+
+
2n
12n+1
720n+3
log n
+ O( +5 ).
(3.44)
n
26
In particular, if = 2,
sn + 0 (2) =
log n
1
log n log n
1
log n
13
log n
+
+
+
+ O( 7 ),
2
3
3
5
5
n
n
2n
6n
12n
30n
360n
n
(3.45)
i=2
1
,
i(log i)
> 1.
(3.46)
1
1
log n +
+
(1 )(log n)
2n(log n)
12n2 (log n)+1
6(log n)3 + 11(log n)2 + 6( + 1) log n + ( + 1)( + 2)
+
720n4 (log n)+3
1
+ O( 6
).
n (log n)
(3.47)
sn s =
(3.48)
This series converges quite slowly. When = 2, by the rst 1043 terms, we can
obtain only two exact digits. P. Henrici6 computed this series using the Plana summation
formula. The asymptotic expansion (3.47) is due to Osada[38].
6 P.
Henrici, Computational analysis with the HP-25 Pocket Calculator, (Wiley, New York, 1977).
27
4 Numerical integration
Innite integrals and improper integrals usually converge slowly. Such an integral
implies slowly convergent sequence or innite series by a suitable method. In this section,
we deal with the convergence of numerical integrals and give some examples.
4.1 Semi-innite integrals with positive monotonically decreasing integrands
Let f (x) be a continuous function dened in [a, ). If the limit
f (x)dx
lim
(4.1)
n=1
xn
f (x)dx.
(4.2)
xn1
() =
x1 ex dx,
> 0.
(4.3)
Let sn be dened by sn =
n
0
sn () = n
x1 ex dx. Then
1 n
)
(
1
1
+ O( 2 ) ,
1
n
n
as n .
(4.4)
In particular, if N, then
sn () = n1 en 1 +
j=1
( 1) ( j)
.
nj
(4.5)
(4.4) or (4.5) shows that (sn ) converges linearly to () with the contraction ratio 1/e.
Especially when = 1, the innite series n=1 (sn sn1 ) becomes a geometric series
with the common ratio 1/e.
28
aj
f (x) x
,
xj
j=0
as x ,
(4.6)
f (x)dx converges.
a
f (t)dt I x
+1
j=0
aj
,
( j + 1)xj
as x .
(4.7)
=
2
Integrating
dx
.
1 + x2
(4.8)
(1)j1
1
=
,
2j
1 + t2
t
j=1
t > 1,
(4.9)
(1)j1
dt
=
,
1 + t2
(2j 1)x2j1
j=1
x > 1.
(4.10)
We note that the right-hand side of (4.10) converges uniformly to tan1 1/x = /2
x
tan1 x provided that x > 1. The equality (4.10) shows that 0 dt/(1 + t2 ) converges
logarithmically to /2 as x .
4.2 Semi-innite integrals with oscillatory integrands
Let (x) be an oscillating function whose zeros in (a, ) are x1 < x2 < . . . . Set
x0 = a. Let f (x) be a positive continuous function on [a, ) such that the semi-innite
integral
I=
f (x)(x)dx
(4.11)
xn
In =
f (x)(x)dx.
xn1
29
(4.12)
In
(4.13)
n=1
ex sin xdx,
I=
< 0,
(4.14)
e + 1 (n1)
e
.
1 + 2
(4.15)
Therefore the innite series (4.13) is a geometric series with the common ratio e .
If an integrand is a product of an oscillating function and a rational function converging to 0 as x , then the innite series (4.13) usually becomes an alternating
series satisfying In+1 /In 1 as n .
Example 4.4 Let us consider
I=
0
sin x
dx,
x
0 < < 2,
(4.16)
In =
(n1)
sin x
dx.
x
(4.17)
Substituting t = n x, we have
)
(
sin(n t)
t
n1
In =
dt = (1)
(n)
1
sin tdt
(n t)
n
0
0
( + 1) . . . ( + j 1)
(1)n1 (n) 2 +
tj sin tdt .
j
j!(n)
0
j=1
By Theorem 3.3,
Ik I (1)
k=1
n1
cj
,
j
n
j=0
(4.18)
(4.19)
lim
+0
f (x)dx
a+
(4.20)
an improper integral, and the endpoint a is called an integrable singularity. Similar for
the case limxb0 f (x) = .
f (x)dx. Let Mn
a
1
f (a + (i )h),
2
i=1
h=
ba
.
n
(4.21)
> 1,
(4.22)
p+1
j=1
2p+1
(2k 1)( k)
(212j 1)B2j (2j1)
f
(1)
+
g (k) (0)
2j
+k+1
(2j)!n
k!n
k=0
2p2
+ O(n
),
(4.23)
(4.24)
31
Mn
x dx c0 h1+ +
cj h2j .
0
(4.26)
j=1
, > 1,
(4.27)
Mn
0
1 (x) = x g(x),
(4.28)
p1 (j)
0 (0)(2j 1)( j)
f (x)dx =
j!n+j+1
j=0
p1
(j)
(4.29)
An integrable singular point in Theorem 4.1 or Theorem 4.2 is called an algebraic
singularity.
Example 4.6 Let us consider the Beta function
1
B(p, q) =
xp1 (1 x)q1 dx,
p, q > 0.
(4.30)
(
)
Mn B(p, q)
aj npj + bj nqj , as n .
(4.31)
j=0
, > 1,
aj + bj log n cj
+
+ O(np ).
Mn
f (x)dx =
+j+1
+j+1
n
n
0
j=0
j=0
(4.32)
(4.33)
x log xdx,
I=
> 1,
(4.34)
aj + bj log n cj
+
+ O(np ).
Mn I =
+j+1
j+1
n
n
j=0
j=0
32
(4.35)
tn s
= 0.
n s(n) s
lim
(5.1)
(sn+1 sn )2
sn+2 2sn+1 + sn
(5.2)
can be represented as
(sn+1 sn )(sn+2 sn+1 )
,
sn+2 2sn+1 + sn
(sn+2 sn+1 )2
= sn+2
,
sn+2 2sn+1 + sn
sn sn+2 s2n+1
=
,
s
2sn+1 + sn
n+2
sn
sn+1
sn sn+1
,
=
1
1
sn sn+1
= sn+1
(5.3)
(5.4)
(5.5)
(5.6)
where stands for the forward dierence, i.e. sn = sn+1 sn . The algorithm (5.6)
coincides with Shanks e1 tansformation. All algorithms (5.2) to (5.6) are equivalent
in theory but are dierent in numerical computation; e.g., the number of arithmetic
operations or rounding errors.
Let T be a sequence transformation satisfying (5.1). Either T or an algorithm for
T is called a convergence acceleration method, or an acceleration method for short, or a
33
speed-up method. A method for estimating the limit or the antilimit of a sequence (sn )
is called an extrapolation method.
The name extrapolation is explained as follows. We take an extrapolation function
f (n) with k + 1 unknown constants. Under suitable conditions, by solving the system of
equations
f (n + i) = sn+i ,
i = 0, . . . , k,
(5.7)
34
6. The E-algorithm
Many sequence transformations can be represented as a ratio of two determinants.
The E-algorithm is a recursive algorithm for such transformations and a quite general
method.
6.1 The derivation of the E-algorithm
Suppose that a sequence (sn ) with the limit or the antilimit s satises
sn = s +
cj gj (n).
(6.1)
j=1
Here (gj (n)) is a given auxiliary double sequence which can depend on the sequence
(sn ) whereas c1 , . . . , ck are constants independent of (sn ) and n. The auxiliary double
sequence is not necessarily an asymptotic scale.
Solving the system of linear equations
(n)
sn+i = Tk
(n)
+ c1 g1 (n + i) + + ck gk (n + i),
i = 0, . . . , k,
(6.2)
(n)
Tk
sn+1
sn
g1 (n) g1 (n + 1)
g (n) gk (n + 1)
= k
1
1
g1 (n) g1 (n + 1)
gk (n) gk (n + 1)
If (sn ) satises
sn s +
sn+k
g1 (n + k)
gk (n + k)
.
1
g1 (n + k)
gk (n + k)
cj gj (n),
(6.3)
(6.4)
j=1
(n)
approximation to s.
Many well known sequence transformations such as the Richardson extrapolation,
the Shanks transformation, Levins transformations and so on can be represented as (6.3).
(n)
35
(n)
dened as follows7 :
(n)
E0
= sn ,
n = 0, 1, . . . ,
(n)
g0,j = gj (n),
(6.5a)
j = 1, 2, . . . ; n = 0, 1, . . . .
(6.6a)
For k = 1, 2, . . . and n = 0, 1, . . .
(n)
(n)
Ek
(n+1)
(n+1)
(n)
(n+1)
(n+1) (n)
(n+1)
(n)
gk1,k gk1,k
(n)
(n)
gk,j
(n+1) (n)
(6.5b)
j = k + 1, k + 2, . . . .
(6.6b)
The equality (6.5b) is called the main rule and (6.6b) is called the auxiliary rule.
The following theorem is fundamental.
(n)
(n)
(n)
(n)
Gk,j
(n)
Nk
sn
g (n)
= 1
gk (n)
gj (n)
g (n)
= 1
gk (n)
gj (n + k)
g1 (n + k)
,
gk (n + k)
sn+k
g1 (n + k)
,
gk (n + k)
be dened by
(n)
Dk
g (n)
= 1
gk (n)
(6.7)
g1 (n + k)
,
gk (n + k)
(6.8)
(n)
(n)
j > k,
(n)
(n)
n = 0, 1, . . . ; k = 1, 2, . . . .
Ek
= Nk /Dk ,
(6.9)
(6.10)
36
Since the expressions (6.5b) and (6.6b) are prone to round-o-error eects, it is
(n)
better to compute Ek
(n)
(n)
Ek1
(n)
gk1,j
(n+1)
(n)
Ek1 Ek1
(n)
gk1,k (n+1)
,
(n)
gk1,k gk1,k
(6.11)
and
(n)
gk,j
(n+1)
gk1,j
(n)
gk1,k (n+1)
gk1,k
(n)
gk1,j
(n)
gk1,k
(6.12)
respectively.
6.2 The acceleration theorems of the E-algorithm
When (gj (n)) is an asymptotic scale, the following theorem is valid.
Theorem 6.2 (Sidi[53]) Suppose the following four conditions are satised.
(i) lim sn = s,
n
if i 6= j.
(iii) For any j, lim gj+1 (n + 1)/gj (n) = 0,
n
cj gj (n) as n .
(6.13)
j=1
(n)
Ek
and
s ck+1
(n)
Ek
(n)
Ek1 s
bk+1 bj
gk+1 (n)
1
b
j
j=1
(
=O
gk+1 (n)
gk (n)
as n
(6.14)
)
as n .
(6.15)
A logarithmic scale does not satisfy the assumption (ii) of Theorem 6.2, but satises
the assumptions of the next theorem.
37
Theorem 6.3 (Matos and Prevost[31]) If the conditions (i)(iii)(iv) of theorem 6.2 are
satised, and if for any j, p and any n N ,
gj+p (n)
..
gj+p (n + p)
then for any k 0,
0,
gj (n + p)
gj (n)
..
.
(6.16)
(n)
lim
Ek+1 s
(n)
Ek
= 0.
(6.17)
The above theorem is important because the following examples satisfy the assumption (6.16). (Brezinski et al.[11, p.69])
(1) Let (g(n)) be a logarithmic totally monotone sequence, i.e. lim g(n + 1)/g(n) =
n
1 and (1)k k g(n) 0 k. Let (gj (n)) be dened by g1 (n) = g(n), gj (n) = (1)j j g(n)
(j > 1).
(2) gj (n) = xnj with 1 > x1 > x2 > > 0 and 0 < 1 < 2 < . . . .
(3) gj (n) = nj with 1 > 1 > 2 > > 0.
(4) gj (n) = 1/((n + 1)j (log(n + 2))j ) with 0 < 1 2 . . . and j < j+1 if
j = j+1 .
38
1
2
1
(T2n Tn ) < C < Tn + Un .
3
3
3
(7.1)
= n+1 +
(n+2)
1
(n+1 n ) ,
3
(n+1)
(n+1)
n = 2, 3, . . . .
(n)
constructed
)
1 ( (n+1)
(n)
+
,
1
15 1
(n)
2
(n+1)
1
)/(1
(7.2)
(7.3)
Similarly, he constructed
(n)
k
8 A.
(n+1)
= k1 +
(
)
1
(n+1)
(n)
k1 ,
22k 1 k1
n = 2, 3, . . . ; k = 2, 3, . . . .
(7.4)
Hirayama, History of circle ratio (in Japanese), (Osaka Kyoiku Tosho, 1980), pp.75-76. (=,
(), )
39
(2)
8 = 3.14159 26535 89793 23846 26433 83279 50288 41971 2
(7.5)
cj h2j + O(h2m+2 ),
as h +0,
(7.6)
j=1
(7.7)
We dene T1 (h0 , h1 ) by the left-hand side of (7.7). Then T1 (h0 , h1 ) is a better approximation to s than T (h1 ) provided that h0 is suciently small. When we set h0 = h and
h1 = h/2, (7.7) becomes
T1 (h, h/2) = s +
(7.8)
j=2
where
T1 (h, h/2) =
T (h) 22 T (h/2)
,
1 22
(7.9)
then we have
T2 (h, h/2, h/4) = s +
(7.10)
(7.11)
j=3
where c00j = c0j (1 242j )/(1 24 ). By (7.6),(7.8) and (7.11), when h is suciently small,
T2 (h, h/2, h/4) is better than both T1 (h/2, h/4) and T (h/4).
9 M.
40
The formula
Extending of (7.9) and (7.10), we dene the two dimensional array (Tk ) by
(n)
T0
(n)
Tk
= T (2n h)
=
(n)
Tk1
n = 0, 1, . . . ,
(n+1)
22k Tk1
1 22k
(7.12a)
n = 0, 1, . . . ; k = 1, 2, . . . ,
(7.12b)
Tk
(n+1)
= Tk1
(
)
1
(n+1)
(n)
T
T
k1 .
22k 1 k1
(7.13)
(n)
and T2
= T1 (h/2n , h/2n+1 )
T0
(1)
T0
(2)
T0
(3)
T0
..
.
(0)
T1
(1)
T1
(2)
T1
(0)
T2
(1)
T2
(7.14)
(0)
T3
..
2
(nk)
Tk
=s+
cj
h2j 22j(nk) + O(22(m+1)(nk) ),
2i
12
i=1
(7.15)
j=k+1
Tk
s:
(
(nk)
Tk
s ck+1
1 22i(2k+2)
i=1
1 22i
)
h2k+2 2(nk)(2k+2) .
(7.16)
)
n1
1
1
f (a) +
f (a + ih) + f (b) ,
2
2
i=1
41
h=
ba
n
(7.17)
1
f (a + (i )h),
2
i=1
h=
ba
n
(7.18)
T (h)
f (x)dx
a
cj h2j , as h +0.
(7.19)
j=1
Thus we can apply the Richardson extrapolation to T (h). This method using the trapezoidal rule is called the Romberg quadrature and its algorithm is as follows:
(n)
T0
(n)
Tk
= T2n
=
n = 0, 1, . . . ,
(n)
Tk1
(n+1)
22k Tk1
1 22k
(7.20a)
n = 0, 1, . . . ; k = 1, 2, . . . ,
(7.20b)
(n)
Tk
f (x)dx
a
)
B2k+2 ( (2k+1)
f
(b) f (2k+1) (a) (b a)2k+2 2n(2k+2)
(2k + 2)!
1 22i2k2
i=1
1 22i
)
.
(7.21)
I=
(7.22)
We give errors of the T -table and the number of functional evaluations in Table 7.1. The
Romberg quadrature is quite ecient for such proper integrals.
42
Table 7.1
The errors of the T -table by the Romberg quadrature
(n)
f.e.
T0
0
1
2
3
4
2
3
5
9
17
(n1)
T1
1.41 101
3.56 102
8.94 103
2.24 103
5.59 104
5.79 104
3.70 105
2.23 106
1.46 107
(n2)
T2
(n3)
T3
8.59 107
1.38 108
2.16 1010
3.35 1010
1.34 1012
(0)
(n4)
T4
3.32 1014
is
which is close to T4
(7.23)
I in Table 7.1.
sn s +
cj xjn ,
(7.24)
j=1
j
cj x
n ,
(7.25)
j=1
where j s are known constants, cj s are unknown constants, and (xn ) is a known particular auxiliary sequence.
In particular, when xn = n and j = j, then (7.25) becomes
sn s +
cj (j )n .
(7.26)
j=1
this paper, xji means either j th power of a scalar xi or the i th component of a vector xj . In (7.24),
xjn = (xn )j .
10 In
43
cj nj ,
(7.27)
j=1
cj gj (n),
(7.28)
j=1
where (gj (n)) is a known asymptotic scale whereas cj s are unknown constants.
7.3.1 Polynomial extrapolation
Suppose that a sequence (sn ) satises
sn s +
cj xjn ,
(7.29)
j=1
where (xn ) is a known auxiliary sequence and cj s are unknown constants. Solving the
system of equations
(n)
sn+i = Tk
cj xjn+i ,
i = 0, . . . , k,
(7.30)
j=1
(n)
(n)
Tk
sn
xn
k
x
= n
1
xn
k
xn
sn+1
xn+1
xkn+1
1
xn+1
xkn+1
sn+k
xn+k
k
xn+k
.
1
xn+k
xkn+k
(7.31)
(7.32)
1
.
sn =
2
i
i=1
44
(7.33)
2
1
1
1
1
1
1
5
+ 2 3 +
+ ....
5
7
9
6
n 2n
6n
30n
42n
30n
66n11
(nk)
We give errors of Tk
(7.34)
digits. And, between n = 1 to 16, the best result is T11 s = 2.78 1013 . This
(2)
n
1
2
3
4
5
6
7
8
9
10
11
12
n
6
7
8
9
10
11
12
(n)
T0
6.45 101
3.95 101
2.84 101
2.21 101
1.81 101
1.54 101
1.33 101
1.18 101
1.05 101
9.52 102
8.69 102
8.00 102
(n5)
T5
1.73 105
3.43 106
8.82 107
2.80 107
1.04 107
4.36 108
2.01 108
(n1)
T1
(n2)
T2
1.45 101
6.16 102
3.38 102
2.13 102
1.47 102
1.07 102
8.14 103
6.40 103
5.17 103
4.26 103
3.57 103
(n6)
T6
1.99 102
6.05 103
2.57 103
1.32 103
7.67 104
4.84 104
3.25 104
2.28 104
1.66 104
1.25 104
(n7)
T7
1.12 106
3.18 108
2.14 108
1.33 108
6.70 109
3.38 109
1.23 107
3.65 108
9.79 109
2.95 109
1.01 109
(n3)
T3
1.42 103
2.58 104
7.30 105
2.67 105
1.15 105
5.64 106
3.01 106
1.73 106
1.05 106
(n8)
T8
(n4)
T4
3.12 105
1.96 105
8.06 106
3.57 106
1.74 106
9.24 107
5.24 107
3.14 107
(n9)
T9
2.57 108
3.11 109 6.01 1010
3.86 1010 2.18 1010
3.30 1011 8.48 1011
When xn = 1/n, the subsequence s0n = s2n has the asymptotic expansion of the
form
s0n
j=1
45
cj (2n )j .
(7.35)
Then we can apply the Richardson extrapolation with xn = 2n . This method requires
many terms but usually gives high accuracy.
For a sequence (sn ) satisfying (7.27), the Richardson extrapolation is as follows.
(n)
T0
(n)
Tk
= sn
(n+1)
= Tk1
k
(n+1)
(n)
Tk1 Tk1 .
1 k
(nk)
Tk
(7.36a)
s ck+1
i+1 i
i=1
1 i
(7.36b)
s:
)
nk+1 .
(7.37)
Example 7.3 We apply the Richardson extrapolation with xn = 2n to (2) once more.
(nk)
2 /6 in Table 7.3.
Table 7.3
n terms
0
1
2
3
4
5
6
7
8
1
2
4
8
16
32
64
128
256
n terms
5
6
7
8
32
64
128
256
(n)
T0
6.45 101
3.95 101
2.21 101
1.18 101
6.06 102
3.08 102
1.55 102
7.78 103
3.90 103
(n5)
T5
(n1)
T1
1.45 101
4.77 102
1.37 102
3.66 103
9.46 104
2.40 104
6.06 105
1.52 105
(n6)
T6
(n2)
T2
1.53 102
2.36 103
3.17 104
4.04 105
5.08 106
6.36 107
7.95 108
(n7)
T7
(n3)
T3
5.16 104
2.46 105
8.97 107
2.93 108
9.28 1010
2.91 1011
(n8)
T8
(n4)
T4
8.76 106
1.32 107
1.34 109
1.14 1011
9.06 1014
6.44 108
3.10 1010 1.84 1010
8.87 1013 2.82 1013 1.92 1013
1.94 1015 2.22 1016 8.33 1017 5.55 1017
Using 8 partial sums s1 , s2 , s4 , s8 , s16 , s32 , s128 , and s256 , we obtain 16 signicant
digits. This method is hardly aected by rouding errors such as cancellation of signicant
digits.
46
cj nj ,
(7.38)
j=1
where j s are known constants whereas cj s are unknown constants. The Richardson
extrapolation for (7.38) is dened by
(n)
T0
(n)
Tk
= s2n
n = 0, 1, . . . ,
(
)
1
(n+1)
(n+1)
(n)
Tk1 Tk1 .
= Tk1 +
2 k 1
(7.39a)
n = 0, 1, . . . ; k = 1, 2, . . . .
(7.39b)
cj hj .
(7.40)
j=1
T0
(n)
Tk
= T (2n h)
n = 0, 1, . . . ,
(
)
1
(n+1)
(n)
(n+1)
Tk1 Tk1 ,
= Tk1 +
2 k 1
(7.41a)
n = 0, 1, . . . ; k = 1, 2, . . . ,
(7.41b)
I=
0
dx
= 2.
x
(7.42)
Let Mn be the n panels midpoint rule for (7.42). By Example 4.5, we have
Mn 2 c0 h
1/2
cj h2j .
(7.43)
j=1
(0)
47
Table 7.4
(n)
f.e. T0
0
1
2
3
4
5
1
3
7
15
31
63
1.41
1.57
1.69
1.78
1.84
1.89
(n1)
(n2)
T1
1.971
1.9921
1.9979
1.99949
1.99987
(n3)
T2
dx
(n4)
T3
(n5)
T4
T5
1.99914
1.999930
1.999983
1.9999952 1.99999957
1.99999983
1.99999969 1.9999999920 1.9999999986 1.99999999932
Example 7.5 Next we apply the Richardson extrapolation to the Beta function
1
B(2/3, 1/3) =
x1/3 (1 x)2/3 dx,
(7.44)
where B(2/3, 1/3) = 2/ 3 = 3.62759 87284 68436. Let Mn be the n-panels midpoint
rule for (7.44). By Example 4.6, we have
Mn B(2/3, 1/3)
(7.45)
j=1
(0)
We give the T -table in Table 7.5. Using 255 functional evaluations, we obtain T7
3.62759 69010, whose error is 1.83 10
(n)
(n1)
(n2)
f.e. T0
T1
T2
0
1
2
3
4
5
6
7
1
3
7
15
31
63
127
255
3.68
3.70
3.691
3.672
3.657
3.646
3.639
3.74
3.665
3.639
3.631
3.6289
3.6280
2.00
2.34
2.62
2.84
3.01
3.14
3.25
3.33
(n3)
(n4)
T4
T3
3.615
3.6230
3.6262
3.62720
3.62748
(n5)
T5
(n6)
T6
(n7)
T7
3.6262
3.6277 3.6280
3.62765 3.62764 3.627563
3.62761 3.627601 3.6275935 3.6275969
48
49
8. The -algorithm
The -algorithm is a recursive algorithm for the Shanks transformation and is one
of the most familiar convergence acceleration methods.
8.1 The Shanks transformation
Suppose that a sequence (sn ) with the limit or the antilimit s satises
)
( k
k
ai sn+i =
ai s, n N,
i=0
(8.1a)
i=0
k
ai 6= 0,
(8.1b)
i=0
(8.2)
sn+1 s
sn s
sn+2 s
sn+1 s
sn+k s sn+k+1 s
thus we obtain
1
1
sn+1
sn
s
sn+k1 sn+k
Therefore,
sn+k s
sn+k+1 s
= 0,
sn+2k s
1
sn
sn+k sn
=
sn+2k1
sn+k1
sn
s
s = n+k1
1
sn
sn+k1
sn+1
sn+1
sn+k
1
sn+1
sn+k
50
sn+1
sn+1
sn+k
sn+k
sn+k
sn+2k1
.
1
sn+k
sn+2k1
(8.3)
sn+k
sn+k
. (8.4)
sn+2k1
(8.5)
sn
sn
s
ek (sn ) = n+k1
1
sn
sn+k1
sn+1
sn+1
sn+k
1
sn+1
sn+k
sn+k
sn+k
sn+2k1
.
1
sn+k
sn+2k1
(8.6)
The sequence transformation (sn ) 7 (ek (sn )) is called Shanks e-transformation, or the
Shanks transformation. In particular, e1 is the Aitken 2 process. By construction, ek is
exact on a sequence satisfying (8.1).
If a sequence (sn ) satises
sn = s +
cj (n)nj ,
n N,
(8.7)
j=1
where cj (n) are polynomials in n and j 6= 1 are constants, then (sn ) satises (8.1).
On concerning the necessary and sucient condition for (8.1), see Brezinski and Redivo
Zaglia[11, p.79].
The Shanks transformation was rst considered by R.J. Schmidt[49] in 1941. He used
this method for solving a system of linear equations by iteration but not for accelerating
the convergence. For that reason his paper was neglected until P. Wynn[59] quoted it
in 1956. The Shanks transformation did not receive much attention until Shanks[51]
published in 1955.
The main drawback of the Shanks transformation is to compute determinants of
large order. This drawback was recovered by the -algorithm proposed by Wynn.
8.2 The -algorithm
Immediately after Shanks rediscovered the e-transformation, P. Wynn proposed a
recursive algorithm which is named the -algorithm. He proved the following theorem by
using determinantal identities.
Theorem 8.1 (Wynn[59]) If
(n)
2k = ek (sn ),
1
(n)
2k+1 =
,
ek (sn )
51
(8.8a)
(8.8b)
then
(n)
(n+1)
k+1 = k1 +
(n+1)
(n)
k = 1, 2, . . . ,
(8.9)
1 = 0,
(n)
(n)
= sn ,
1
(n+1)
k+1 = k1 +
n = 0, 1, . . . ,
(n+1)
k
(n)
(8.10a)
k = 0, 1, . . . .
(8.10b)
There are many research papers on the -algorithm. See, for details, Brezinski[9], Brezinski and Redivo Zaglia[11].
8.3 The asymptotic properties of the -algorithm
(n)
sn s +
aj (n + b)j ,
a1 6= 0,
(8.11)
as n .
(8.12)
j=1
2k s +
a1
,
(k + 1)(n + b)
Theorem 8.2 shows that the -algorithm cannot accelerate a logarithmically convergent sequence satisfying (8.11). For, by (8.12)
(n)
2k s
1
=
.
n sn+2k s
k+1
(8.13)
lim
aj (n + b)j ,
a1 6= 0,
(8.14)
j=1
2k s +
(1)n (k!)2 a1
,
4k (n + b)2k+1
52
as n .
(8.15)
Theorem 8.3 shows that the -algorithm works well on partial sums of altenating
series
sn =
(1)i1
i=1
1
,
ai + b
(8.16)
aj nj ,
(8.17)
j=1
2k s +
as n .
(8.18)
aj nj ,
(8.19)
j=1
2k s + (1)n
as n .
(8.20)
The above theorems are further extended by J. Wimp. The following theorem is an
extension of Theorem 8.3.
Theorem 8.6 (Wimp[58, p.127]) If the -algorithm is applied to a sequence (sn ) satisfying
sn s + n (n + b)
cj nj ,
(8.21)
j=0
(8.22)
Example 8.1 We apply the -algorithm to the partial sums of alternating series
sn =
(1)i1
2i 1
i=1
(8.23)
1
sn = + (1)n1
4
n
(n2k)
We give sn and 2k
)
1
1
1
+ O( 4 ) .
4 16n2
n
(8.24)
obtain 10 exact digits. And by the rst 20 terms, we obtain 15 exact digits.
Table 8.1
The -algorithm applying to (8.23)
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
20
(n2k)
sn
1.000
0.666
0.866
0.723
0.834
0.744
0.820
0.754
0.813
0.760
0.808
0.764
0.804
0.767
0.802
0.772
0.785
2k
0.791
0.7833
0.78558
0.78534
0.78540
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
7
3
68
832
8126
81682
81623
81635
81633
81634
81633
81633
4
67
01
97448
97448
(1)i1
.
i
i=1
(8.25)
1
1
2)( ) + (1)n1
2
n
54
)
1
5
1
1
+
+ O( 5 ) ,
2 8n 128n3
n
(8.26)
where (1
2)( 12 ) = 0.60489 86434 21630. Theorem 8.6 and (8.26) show that the
(n2k)
in Table 8.2,
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
20
1.00
0.29
0.87
0.37
0.81
0.40
0.78
0.43
0.76
0.45
0.75
0.46
0.74
0.47
0.73
0.49
0.60
(n2k)
2k
0.6107
0.6022
0.60504
0.60484
0.60490
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
9
2
74
875
8614
86466
86426
86435
86433
86434
86434
86434
1
99
243
21630
21630
(2)i1
i=1
1
2
(8.27)
55
Table 8.3
The -algorithm applying to (8.27)
n
(n2k)
sn
1
1.0
2
0.0
3
1.3
4
0.6
5
2.5
6
2.8
7
6.3
8
9.6
9
18.7
10
32.4
11
60.6
12
109.9
23
119788.2
24 229737.0
0.5
2k
0.571
0.533
0.5507
0.5485
0.54940
0.54926
0.54931
0.54930
0.54930
0.54930
0.54930
0.54930
0.54930
56
2
32
661
595
61443 34119
61443 34055
61443 34055
9. Levins transformations
In a favorable inuence survey [54], Smith and Ford concluded that the Levin utransform is the best available across-the-board method. In this section we deal with
Levins transformations.
9.1 The derivation of the Levin T transformation
If a sequence (sn ) satises
sn = s + Rn
k1
j=0
cj
,
nj
(9.1)
where Rn are nonzero functions of n and c0 (6= 0), . . . ,ck1 are constants independent of
n, then the limit or the antilimit s is represented as
sn
Rn
..
R /nk1
s= n
1
Rn
..
Rn /nk1
k1
Rn+k /(n + k)
.
1
Rn+k
..
k1
Rn+k /(n + k)
...
...
..
.
sn+k
Rn+k
..
.
...
...
...
..
.
...
(9.2)
cj
,
j
n
j=0
(9.3)
(n)
Tk
sn
Rn
..
Rn /nk1
=
1
Rn
..
Rn /nk1
...
...
..
.
...
...
...
..
.
...
k1
Rn+k /(n + k)
Rn+k
..
k1
Rn+k /(n + k)
sn+k
Rn+k
..
.
(9.4)
)k1
( )(
sn+j
k
n+j
(1)
n+k
Rn+j
j
j=0
= k
.
( )(
)k1
k
n
+
j
1
(1)j
j
n+k
Rn+j
j=0
k
(n)
Tk
(n)
The formula Tk
(9.5)
in Section 6.
(n)
E0
= sn ,
(n)
g0,j = n1j Rn ,
n = 1, 2, . . . ; j = 1, 2, . . . ,
(9.6a)
(n)
(n)
Ek
(n)
Ek1
(n)
gk1,k
Ek1
(n)
n = 1, 2, . . . ; k = 1, 2, . . . ,
(9.6b)
gk1,k
(n)
(n)
gk,j
(n)
gk1,j
(n)
gk1,k
gk1,j
(n)
n = 1, 2, . . . ; k = 1, 2, . . . , j > k,
gk1,k
(9.6c)
(n)
(n)
= Tk .
cj
f (x)
xj
j=0
as x .
(9.7)
]
c1
c2
c0 +
+ 2 + ... ,
n
n
(9.8)
Tk
(
)
s = O n n2k
as n .
(9.9)
Tk
(
)
s = O nk
as n .
(9.10)
Theorem 9.1(1) shows that the Levin t- and v-transforms accelerate certain alternating series.
Recently, N. Osada[41] has extended the Levin transforms to vector sequences. These
vector sequence transformations satisfy asymptotic properties similar to Theorem 9.1.
Example 9.1 Let us consider
s0 = 0,
n
(1)i1
sn =
,
i
i=1
n = 1, 2, . . . .
(9.11)
We apply the Levin u-, v-, and t-transforms on (9.11). For the Levin u-transform, we
(1)
(1)
take Tn1 , while Tn2 for v- and t-transforms. We give signicant digits of sn , dened
(1)
sn
Levin u
(1)
Tn1
Levin v
(1)
Tn2
Levin t
(1)
Tn2
1
2
3
4
5
6
7
8
9
10
11
12
13
14
0.40
0.51
0.58
0.63
0.67
0.71
0.74
0.77
0.79
0.81
0.83
0.85
0.87
0.88
0.99
2.30
3.69
5.93
6.33
7.52
9.46
10.14
11.27
13.06
13.94
14.89
15.56
2.30
4.14
4.45
5.53
6.93
9.17
9.42
10.70
13.52
13.18
14.36
15.56
2.23
3.12
4.19
5.44
6.95
8.82
9.43
10.77
12.68
13.16
14.40
15.52
59
Theorem 9.1(2) shows that the Levin u- and v-transforms accelerate logarithmically
convergent sequences of the form
sn s n
]
c1
c2
c0 +
+ 2 + ... ,
n
n
(9.12)
1
.
i i
i=1
(9.13)
We apply the Levin u-, v-, and t-transforms on (9.13) with s0 = 0. Similar to the
(1)
above example, we show signicant digits of sn and Tnk in Table 9.2. Both the u- and
v-transforms accelerate but the t-transform cannot.
Table 9.2
The signicant digits of the Levin transforms applying to (1.5)
Levin u
Levin v
Levin t
(1)
Tn2
Tn2
1.22
2.26
2.35
3.00
3.93
5.29
6.25
7.08
8.34
8.55
7.57
6.97
0.08
0.18
0.27
0.35
0.42
0.48
0.53
0.58
0.62
0.66
0.70
0.73
sn
(1)
Tn1
1
2
3
4
5
6
7
8
9
10
11
12
13
14
0.21
0.10
0.03
0.03
0.07
0.11
0.14
0.16
0.19
0.21
0.23
0.25
0.26
0.28
0.39
1.22
2.37
3.74
4.16
5.42
6.18
6.93
9.01
8.72
8.89
8.14
7.58
(1)
When the asymptotic expansion of logarithmically convergent sequence has logarithmic terms such as log n/n, the Levin u-, v-transforms do not work eectively.
Example 9.3 Consider a sequence dened by
sn =
log j
j=2
60
j2
(9.14)
As we described in Example 3.6, the sequence (sn ) converges to 0 (2), and has the
asymptotic expansion
sn 0 (2)
log n
1
log n log n
1
+
+
...,
2
3
n
n
2n
6n
12n3
(9.15)
where (s) is the Riemann zeta function and 0 (s) is its derivative. We apply the Levin
u-, v-, and t-transforms on (9.14) and show signicant digits in Table 9.3.
Table 9.3
The signicant digits of the Levin transforms applying to (9.14)
Levin u
Levin v
Levin t
(2)
Tn3
Tn3
0.12
0.49
0.33
1.18
1.62
3.18
2.50
2.72
2.81
2.91
3.00
3.08
3.16
0.45
0.47
0.58
0.69
0.79
0.87
0.95
1.02
1.09
1.15
1.21
1.26
1.31
sn
(2)
Tn2
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
0.03
0.12
0.19
0.26
0.31
0.36
0.40
0.43
0.47
0.50
0.52
0.55
0.57
0.60
0.62
0.47
0.49
0.71
1.77
1.82
2.23
2.28
2.40
2.50
2.59
2.67
2.75
2.82
(2)
m1
(n)
Ri
i=0
(n)
where Ri
k1
j=0
ci,j
,
nj
(9.16)
61
sn
...
sn+mk
(n)
(n+mk)
R
.
.
.
R
0
0
(n)
(n+mk)
R1
...
R1
.
.
.
.
.
.
(n)
(n+mk)
...
Rm1
Rm1
(n)
(n+mk)
R /n
.
.
.
R
/(n
+
mk)
0
0
..
..
.
(n) .
(n+mk)
R
k1
k1
/n
.
.
.
R
/(n
+
mk)
m1
s = m1
.
1
...
1
(n)
(n+mk)
R0
...
R0
(n)
(n+mk)
R1
...
R1
..
..
.
.
(n)
(n+mk)
Rm1
.
.
.
R
m1
(n)
(n+mk)
R
/n
.
.
.
R
/(n
+
mk)
0
0
..
..
.
.
(n)
(n+mk)
k1
k1
Rm1 /n
. . . Rm1 /(n + mk)
(9.17)
(m)
We denote by Tmk,n the right-hand side of (9.17). The transformation of (sn ) into
(m)
E0
= sn ,
(9.18a)
(n)
p = 0, 1, . . . ; q = 1, . . . , m
(9.18b)
(n)
(n)
Ek
(n)
Ek1
(n)
gk1,k
(n)
gk1,k
Ek1
(n)
k = 1, 2, . . . ,
(9.18c)
k = 1, 2, . . . , j > k,
(9.18d)
gk1,k
(n)
(n)
gk,j
(n)
gk1,j
gk1,j
(n)
gk1,k
(n)
(m)
aj log n + bj
j=1
nj
62
(9.19)
sn (2) +
n snk
ckj nj .
(9.20)
j=0
k=1
This asymptotic expansion suggests that the d-transformation works well on (sn ). We
apply the Levin u, which can be regarded as the d(1) -transform, the d(2) -transform, and
the d(3) -transform to (sn ) and show signicant digits in Table 9.4.
Table 9.4
The signicant digits of the d-transforms applying to (9.14)
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
sn
0.03
0.12
0.19
0.26
0.31
0.36
0.40
0.43
0.47
0.50
0.52
0.55
0.57
0.60
0.62
Levin u
d(2)
d(3)
0.47
0.49
0.71
1.77
1.82
2.23
2.28
2.40
2.50
2.59
2.67
2.75
2.82
0.67
0.88
2.06
2.11
3.59
4.23
4.99
5.36
7.07
6.66
5.75
1.12
1.31
1.54
2.68
3.21
3.99
5.17
5.65
5.99
63
(10.1a)
k sn = k1 sn+1 k1 sn
k = 1, 2, . . . .
(10.1b)
(10.2a)
k sn = k1 sn k1 sn1
k = 1, 2, . . . ; n k.
(10.2b)
(10.3)
sn sn+2 s2n+1
(sn )2
=
s
.
n
2 sn
2 sn
(10.4)
(sn )2
,
2 sn
(10.5)
or equivalently
sn+1 sn+1
,
sn+1 sn+1
(sn+2 )2
= sn+2
.
2 sn+2
tn = sn+1
(10.6)
(10.7)
As is well known, the Aitken 2 process accelerates any linearly convergent sequence.
64
(10.8)
tn s
= 0.
n sn+2 s
lim
(10.9)
The Aitken 2 process can be applied iteratively as follows. The two dimensional
(n)
T0
= sn ,
n = 1, 2, . . . ,
(n+1)
(n)
(n)
Tk+1 = Tk
(Tk
(n+2)
Tk
(10.10a)
(n)
Tk )2
(n+1)
2Tk
(n)
k = 0, 1, . . . ; n = 1, 2, . . . .
+ Tk
(10.10b)
(n)
(n)
= 2
for any
11
cj (n + b)j ,
c0 6= 0,
(10.11)
j=0
[
]
c0 n+2k (n + b)2k ()( + 2) ( + 2k 2)
1
1 + O(
) . (10.12)
s+
( 1)2k
n+b
Example 10.1 We apply the iterated Aitken 2 process to the partial sums of alternating
series
sn =
(1)i1
i=1
(n2k)
We give sn and Tk
2i 1
(10.13)
65
(n2k)
of 2k
are also given in Table 10.1. The iterated Aitken 2 process is better than the
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
(n2k)
Tk
sn
1.000
0.666
0.866
0.723
0.834
0.744
0.820
0.754
0.813
0.760
0.808
0.764
0.804
0.767
0.802
0.785
0.791
0.7833
0.78552
0.78536
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
0.78539
25
98
77
8178
8159
81634
81633
81633
81633
81633
81633
9
69
9779
9731
97444
97448
SD of
Aitken 2
SD of
-algorithm
2.20
2.69
3.89
4.45
5.78
6.35
7.82
8.39
8.31
10.55
12.46
12.86
14.37
2.20
2.69
3.73
4.30
5.25
5.87
6.78
7.43
8.31
8.98
9.84
10.52
11.37
(10.14)
2 sn
66
1 sn sn
= s + O(n2 ).
sn sn
Proof. See Appendix A.
(3) sn
A sequence transformation
sn 7 sn
1 (sn )2
2 sn
(10.15)
1 sn sn
sn sn
(10.16)
s0 = 0,
(n)
s0
= sn ,
(10.17a)
n = 1, 2, . . . ,
(10.17b)
(n)
(n)
sk+1
(n)
sk
(n)
2k + 1 sk sk
,
2k s(n) s(n)
k
k = 0, 1, . . . ; n k + 1,
(10.17c)
where and operate on the upper indexes.12 The formula (10.17) is called the modied
Aitken 2 formula. The following two theorems are fundamental.
(n)
s = n2k
(k)
c0
]
(k)
(k)
c1
c2
1
+
+ 2 + O( 3 ) ,
n
n
n
then
(n)
sk+1
is represented as
s=n
2k2
(k)
c0 6= 0,
[
]
1
(k+1)
c0
+ O( ) ,
n
(10.18)
(10.19)
where
(k)
(k+1)
c0
12 sk
n
(k)
(k)
2c2
c
(c )2
+
.
= 0 (1 + 2k) (k) 1
12
(2k )(2k + 1 )
c0 (2k )2
(n)
67
in (10.17).
(10.20)
(j)
Theorem 10.5 (Bjrstad, Dahlquist and Grosse) With the above notation, if c0
6= 0
for j = 0, . . . , k 1, then
(n)
sk s = O(n2k ) as n .
(10.21)
Example 10.2 We apply the iterated Aitken 2 process and modied Aitken 2 process
(n2k)
(nl)
and sl
k = b(n 1)/2c and l = bn/2c. By the rst 11 terms, we obtain 10 exact digits by the
modied Aitken 2 formula.
Table 10.2
The iterated Aitken 2 process and
the modied Aitken 2 formula applying to (1.5)
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
(n2k)
sn
Tk
1.00
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.61
1.77
1.90
2.14
2.19
2.33
2.36
2.44
2.46
2.539
2.524
2.525
2.522
2.612
(nl)
sl
2.640
2.6205
2.6159
2.61232
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
9
657
560
53431
53475
53487
53487
53487
53486
53486
53486
5
16
10
17
04
13
85
68
cj
,
j
n
j=0
(10.22)
where < 0 and c0 (6= 0), c1 , . . . are constants. Drummond[15] commented that for a
sequence satisfying (10.22),
(
1
) + 1,
sn
2 sn1
(10.23)
where the sign ; denotes approximate equality. Moreover Bjrstad et al.[4] show that
n + n
tj
,
nj
j=0
(10.24)
where n is dened by the right-hand side of (10.23) and t0 (6= 0), t1 , . . . are constants.
Thus the sequence (n ) itself can be accelerated by the modied Aitken 2 process
with = 2 in (10.17c).
Suppose that the rst n terms s1 , . . . , sn of a sequence satisfying (10.22) are given.
(m)
Initialization. t0 = 0.
For m = 1, . . . , n 2,
(m)
t0
1
) + 1,
sm
2 sm1
(10.25a)
(m)
(m)
tk+1
(m)
tk
(m)
2k + 3 tk tk
,
2k + 2 t(m) t(m)
k
(10.25b)
k = 0, . . . , bm/2c 1,
(m)
where tk
(m+1)
= tk
(m)
tk
(m)
and tk
{
n =
(m)
= tk
(k)
(m1)
tk
. Then we put
tk1
if n is odd,
(k)
tk
if n is even,
(10.26)
where k = b(n 1)/2c. Substituting n for in the denition of the modied Aitken 2
formula (10.17), we can obtain the followings:
(0)
Initialization. sn,0 = 0.
For m = 1, . . . , n,
(m)
sn,0 = sm ,
(10.27a)
(m)
(m)
sn,k+1
(m)
sn,k
(m)
2k + 1 n sn,k sn,k
,
2k n s(m) s(m)
n,k
k = 0, . . . , bn/2c 1,
69
n,k
(10.27b)
(m)
(m+1)
(m)
(m)
(m)
(m1)
This scheme is called the automatic modied Aitken 2 -formula. The data ow of
this scheme is as follows (case n = 4):
s1
s4
&
&
&
s1
s4,0
s2
(2)
s4,0
s3
s4,0
s4
s4,0
s2
s3
(0)
t0
t0
t0
(1)
&
&
(2)
(1)
t1
= 4
(1)
(3)
(4)
&
&
&
(1)
s4,1
(2)
&
&
(2)
s4,2
s4,1
(3)
s4,1
(k)
(10.28a)
or if n is odd and
(k+1)
|sn,k
(k)
sn,k | < ,
(10.28b)
where k = bn/2c.
A FORTRAN subroutine of the automatic modied Aitken 2 formula is given in
Appendix.
Example 10.3 We apply the automatic modied Aitken 2 formula to the partial sums
(nk)
rst 11 terms, we obtain 11 exact digits by the automatic modied Aitken 2 formula.
70
Table 10.3
The automatic modied Aitken 2 formula
applying to (1.5)
n
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
1.00
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.61
(nk)
0.544
0.5071
0.50015
0.50001
0.49999
0.49999
0.50000
0.50000
0.50000
0.49999
0.49999
0.50000
0.50000
sn,k
3
9938
9967
017
0068
00001
99992
99999
00000
00000
71
3
3
23
79
00
2.55
2.604
2.61218
2.61236
2.61235
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
00
4
541
525
5314
53486
53488
53487
53486
53486
35
2
2
20
8549
sn+1 s
sn+1
= lim
= 1.
n
sn s
sn
(11.1)
1
lim
(11.2)
where (6= 0), s R. As Kowalewski[25, p.268] proved, 0 < 1. The equality (11.2)
is equivalent to
lim
sn+1 sn
= .
(sn+1 s)2 sn
(11.3)
(11.4)
1 sn sn+1
.
2 sn
(11.5)
When = 1, the right-hand side of (11.5) coincides with the Aitken 2 process. When
0 < < 1, that of (11.5) coincides with the modied Aitken 2 formula.
If in (11.3) is unknown, solving the equation
sn+2 sn+1
sn+1 sn
=
2
(sn+2 s) sn+1
(sn+1 s)2 sn
(11.6)
sn+1 sn 2 sn+1
.
sn+2 2 sn sn 2 sn+1
72
(11.7)
A sequence transformation
sn+1 sn 2 sn+1
W : sn
7 sn+1
sn+2 2 sn sn 2 sn+1
(11.8)
sn+1 sn 2 sn+1
.
sn+2 2 sn sn 2 sn+1
(11.9)
s
( n)
=
1
2
sn
sn+1
1
sn
= sn+2
.
2
1
1
+
sn+2
sn+1
sn
Since the Aitken 2 process can be represented as
(
)
sn
sn
),
tn = (
1
sn
(11.10)
(11.11)
(11.12)
(11.13)
the formula (11.11) means that the W transform is a modication of the Aitken 2
process. Lubkin[29], Tucker[55] and Wimp[58] studied the relationship between the accelerativeness for the W transform and the Aitken 2 process.
We remark that Wn is also represented as
sn+1
sn+2
1
1
73
(11.14)
(n+1)
n1
j=1
ja + b + 1
,
ja + b
(11.15)
1/b)
(i) 6= 0, 1,
(ii) = 0 and sn+1 /sn is of constant sign for suciently large n,
c1
c2
+ 2 + ...,
n
n
(11.16)
W0
= sn ,
(n)
(11.17a)
(n+1)
Wk+1 = Wk
(n)
where Wk
(n+1)
= Wk
(n+1)
(n)
(n+1)
Wk
Wk 2 Wk
,
(n+2) 2
(n)
(n)
(n+1)
Wk
Wk Wk 2 Wk
k = 0, 1, . . . ,
(11.17b)
(n)
formation.
In order to give an asymptotic formula of the iterated W transform, we dene the
transformation (Sablonni`ere[47]). For a sequence (sn ), the transform is dened by
(sn ) = sn+1
+ 1 sn sn+1
,
2 sn
(11.18)
cj
sn s + n
,
nj
j=0
(n+1)
(sn ) = s1
(11.19)
in (10.17).
> 0.
(11.20)
+ 1 sn sn+1
,
2 sn
(11.21)
and
+1
sn+1 ( sn ) =
(n)
14 W (n) in (11.17) coincides with
of
k
k
15 When the sequence s is dened in n
n
sn+2 sn+1
sn sn+1
2
sn+1
2 sn
)
.
75
(11.22)
Using the above lemma and the asymptotic formula of the modied Aitken 2 formula
(Theorem 10.4), Sablonni`ere has proved the following theorem.
(n)
Wk
s = n2k
]
(k)
(k)
c
c
1
(k)
c0 + 1 + 22 + O( 3 ) ,
n
n
n
then
(n)
Wk+1
s=n
2k2
(k)
c0 6= 0,
(11.23)
]
[
1
(k+1)
c0
+ O( ) ,
n
(11.24)
where
(k)
(k)
(k+1)
c0
(k)
= 0
(k)
6( 2k)
c0 ( 2k)3
(k)
(k)
(k)
(11.25)
(j)
Wk
s = O(n2k )
as n
(11.26)
cj
sn s + n
,
nj/2
j=0
(11.27)
Wk
(k)
(k)
(k)
c0 6= 0,
(11.28)
then
(n)
(k+1) k/21/2
Wk+1 s = c0
(k+1) k/21
+ c1
76
+ O(nk/23/2 ),
(11.29)
where
(k)
(k+1)
c0
c1
=
2
(k 2) (k + 2 2)
(11.30)
and
(k)
(k+1)
c1
c0 (k 2)4 (k + 2 2)2
(11.31)
Recently, Osada[39] has extended the iterated W transform to vector sequences; the
Euclidean W transform and the vector W transform. A similar property to Theorem
11.7 holds for both transforms.
11.4 Numerical examples of the iterated W transformation
For linearly convergent sequences the W transform works well.
Example 11.1 Let us consider
sn =
1
(1)i1 .
i
i=1
(11.32)
(n3k)
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
17
1.00
0.29
0.87
0.37
0.81
0.40
0.78
0.43
0.76
0.45
0.75
0.46
0.74
0.47
0.72
0.60
(n3k)
Wk
0.6061
0.60442
0.60511
0.60490
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
0.60489
77
0
79
888
86446
86430
86435
86432
86432
86432
86432
6
4
2192
2155
21630
21630
Example 11.2 We apply the iterated W transform to the partial sums of (1.5). We
(n3k)
give sn , Wk
9 exact digits. For this series, the iterated Aitken 2 process cannot accelerate but the
iterated W transform can do. However, the W transform is inferior to the automatic
modied Aitken 2 formula.
Table 11.2
The iterated W transform applying to (1.5)
n
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
2.0
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.10
2.61
(n3k)
Wk
2.590
2.6019
2.6063
2.61234
2.61236
2.61236
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
3
2
90
527
5326
5337
5330
5365
53440
53486
Example 11.3 We apply the iterated W transform to the partial sums of (1.5)+(2) =
4.25730 94155 33714:
sn =
i=1
(n3k)
We give sn , Wk
i+1
.
i2
(11.33)
we obtain 4 exact digits. For comparison, we show Levin v-transform Tn2 . The W
transform is slightly better than the Levin v-transform.
78
Table 11.3
The iterated W transform applying to (1.5) + (2)
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
20
(n3k)
(1)
sn
Wk
Tn2
1.0
2.6
2.9
3.09
3.22
3.31
3.39
3.45
3.50
3.54
3.58
3.61
3.63
3.66
3.70
3.77
4.25
4.14
4.17
4.19
4.2568
4.2590
4.2596
4.2596
4.2596
4.2596
4.2596
4.2596
4.2591
4.25727 2
4.25730 9
4.05
4.18
4.212
4.226
4.234
4.240
4.243
4.246
4.248
4.249
4.2509
4.2518
4.2525
4.2546
4.25730
79
(12.1a)
(12.1b)
k = 2, 3, . . . .
(12.1c)
f (x) = f (x1 ) +
(12.2a)
x x2
1 (x1 , x2 ) +
2 (x1 , x2 , x3 ) 0 (x1 ) +
x x3
..
..
(12.2b)
The equality of (12.2) holds for x = x1 , . . . , xl . The right-hand side of (12.2) is called
Thieles interpolation formula.
(n)
for
= sn ,
(n)
0
1
(n)
(12.3a)
1
(n+1)
0
(n)
(12.3b)
k
(n+1)
= k2 +
(n+1)
k1
(n)
k1
80
k = 2, 3, . . . .
(12.3c)
nm
nm1
(m)
1 +
nm2
(m)
(m)
2 0 + (m)
(m)
3 1 + . .
..
(12.4)
(12.5)
(m)
nk + a1 nk1 + + ak
sn ; 2k k
,
n + b1 nk1 + + bk
(12.6)
where a1 , . . . , ak , b1 , . . . , bk are constants independent of n and the sign ; denotes approximate equality. By construction, the equality of (12.6) holds for n = m, . . . , m + 2k.
Suppose a sequence (sn ) with the limit s satises
sn =
snk + a1 nk1 + + ak
.
nk + b1 nk1 + + bk
(12.7)
(m)
Then, by the above discussion, 2k = s for any m. Thus the -algorithm is a rational
extrapolation which is exact on a sequence satisfying (12.7).
A sequence satisfying (12.7) has an asymptotic expansion of the form
(
)
c1
c2
sn s + n c0 +
+ 2 + ... ,
n
n
as n
(12.8)
81
(12.9a)
C0 = c,
(12.9b)
2k 1
k = 1, 2, . . . ,
C2k2
2k
= C2k2 +
k = 1, 2, . . . ,
(1 )C2k1
C2k1 = C2k3 +
C2k
(12.9c)
(12.9d)
This sequence (Cn ) is called the associated sequence of the -algorithm with respect to
and c.
For the associated sequence of the -algorithm, the following two theorems hold.
Theorem 12.1 Under the above notation,
k(2 )(3 ) (k )
, k = 1, 2, . . .
c(1 + ) (k 1 + )
c(1 + ) (k + )
=
, k = 1, 2, . . . .
(1 )(2 ) (k )
C2k1 =
(12.10a)
C2k
(12.10b)
C2k+1 = C2k1 +
(12.11)
Similarly,
C2k+2 =
c(1 + )(2 + ) (k + 1 + )
.
(1 ) (k + 1 )
82
(12.12)
We remark that Theorem 12.1 is still valid when is an integer and k < ||.
Theorem 12.2 Under the above notation,
c()
C2k
=
,
2
k k
()
lim
(12.13)
(x) = lim
we obtain the result.
(12.14)
as n .
(12.15)
Let (Cn ) be the associated sequence of the -algorithm with respect to and c0 in (12.15).
Let A = (1 )(1/2 + c1 /c0 ). Then the following formulae are valid.
(1)
(n)
1
= C1 (n + 1)
[
1+
]
A
B1
3
+
+ O((n + 1) ) ,
n + 1 (n + 1)2
(12.16)
where
B1 =
2 1 c1 (1 ) (1 )2 c1 2
c2 (2 )
+
+
+
.
2
2
12
2c0
c0
c0
(12.17)
(2)
(n)
2
[
= s + C2 (n + 1) 1 +
]
c1
B2
3
+
+ O((n + 1) ) ,
c0 (n + 1) (n + 1)2
(12.18)
where
B2 =
c0 (1 + )
2c1 2
c2 (5 2 )
+
+
.
6(1 )
c0 (1 )
(1 )2
83
(12.19)
2j = s + C2j (n + j) 1 +
+ O((n + j)2 )
c0 (n + j)
(12.20)
(12.21)
= c0 (n + 1)
[
1
(
)
]
A
1 c1 (1 )
c2
2
+
+
+
n+1
6
2c0
c0 (n + 1)2
+ O((n + 1)3 ).
(12.22)
Hence, we obtain
(n)
1
[
1+
= C1 (n + 1)
]
B1
A
3
+
+ O((n + 1) ) .
n + 1 (n + 1)2
(12.23)
2k s
C2k
sn+2k s
as n ,
(12.24)
where the sign means asymptotic approximate. When is small non-integer, the
-algorithm cannot be available.
When is a negative integer, say k, we have C0 6= 0, . . . , C2k2 6= 0 and C2k = 0.
Thus, it follows from Theorem 12.3 that
2k = s + O((n + k)k2 ),
(n)
as n .
(12.25)
We give sn and 2k
1
.
2
i
i=1
(12.26)
Table 12.1
The -algorithm applying to (2)
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
20
(n2k)
sn
1.00
1.25
1.36
1.42
1.46
1.49
1.511
1.527
1.539
1.549
1.558
1.564
1.570
1.575
1.580
1.596
1.644
2k
1.650
1.6468
1.64489
1.64492
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
1.64493
2
437
414
40643
40662
40668
40668
40668
40668
40668
40668
40668
8
64
418
56
82
56
50
48
1
.
sn =
i i
i=1
(n2k)
We give sn and 2k
(12.27)
85
Table 12.2
The -algorithm applying to (1.5)
(n2k)
sn
2k
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
2.0
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.10
2.61
2.19
2.25
2.40
2.42
2.48
2.49
2.525
2.528
2.520
2.553
2.552
2.553
2.564
2.612
86
as n
(13.1)
where < 0 and c0 (6= 0), c1 , . . . are constants independent of n. As we proved in the
preceding section, the -algorithm works well on a sequence satisfying (13.1) if and only
if is a negative integer. In this section we extend -algorithm that works well on (13.1)
for any < 0.
13.1 The generalized -algorithm
For a sequence (sn ) satisfying (13.1), we put s0 = 0 if s0 is not dened. We dene
(n)
k
as follows:
(n)
1 = 0,
(13.2a)
(n)
= sn ,
(n)
= k2 +
(13.2b)
(n+1)
k1
(n+1)
k1
(n)
k1
k = 1, 2, . . . .
(13.2c)
This procedure is called the generalized -algorithm with a parameter [37]. It is obvious
that, when = 1, the generalized -algorithm coincides with the -algorithm dened
in (12.3).
(n)
produced by applying
(n)
Theorem 13.1 (Osada) If 2k1 and 2k satisfy the asymptotic formulae of the forms
1
(n)
2k1 =
(k1)
d0
[
]
(n + k 1)2k1 1 + O((n + k 1)1 ) ,
(n)
2k = s + (n + k)2k
(k1)
with d0
]
(k)
(k)
d
d
(k)
2
d0 + 1 +
+ O((n + k)3 )
n + k (n + k)2
(13.3)
(13.4)
(k)
6= 0 and d0 6= 0, then
]
[
(n)
(k+1)
+ O((n + k + 1)1 ) ,
2k+2 = s + (n + k + 1)2k2 d0
(13.5)
*The material in this section is taken from the authors paper: N. Osada, A convergence acceleration
method for some logarithmically convergent sequences, SIAM J. Numer. Anal. 27(1990), pp.178-189.
87
where
(k)
(k+1)
d0
(k)
(k)
d0
(d )2
2d2
(2k 1) (k) 1
+
12
(2k )(2k + 1)
d0 (2k )2
(k)
(d0 )2 (2k 1)
(k1)
d0
(2k + 1)
(13.6)
2k
(n)
(k)
Hence, we obtain
2k
(n+1)
2k
(n)
2k
1
(k)
d0
(n + k + 1)2k+1
1
(
(k)
1
d
+ (k) 1
2 d (2k )
0
2k + 1
n+k+1
(k)
(k)
(d )2 (2k + 1)
2k 1
d
+ 1(k) + 1 (k)
12
2d0
(d )2 (2k )2
) 0
(k)
d2 (2k + 2)
2k + 1
(k)
2
d0 (2k )(2k + 1) (n + k + 1)
]
+O((n + k + 1)3 ) .
(k)
d2 (2k
(k)
+ 2)
(k)
d0 (2k
)(2k + 1)
]
+O((n + k + 1)3 ) .
88
d0
(k1)
d0
(2k
+ 1)
(13.8)
2k + 1
(n + k + 1)2
(13.9)
Similarly we obtain
(n)
(n+1)
2k+2 = 2k
2k + 1
(n+1)
2k+1
(n)
2k+1
[
= s + (n + k + 1)2k2
(k)
(k)
d0
(d )2
(2k 1) (k) 1
12
d0 (2k )2
(k)
(k)
2d2
(d0 )2 (2k 1)
+ (k1)
(2k )(2k + 1)
d0
(2k + 1)
]
+O((n + k + 1)1 ) .
(13.10)
(j)
2k = s + O((n + k)2k ).
(13.11)
Proof. By means of the induction on k, the proof follows from Theorem 13.1.
(n)
(n+1)
= s1
(n)
(n+k)
s dened in (10.17).
give sn , 2k
, where l = bn/2c.
89
Table 13.1
The generalized -algorithm and
the modied Aitken 2 formula applied to (1.5)
(nk)
(n2k)
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
0
1
1
2
2
3
3
4
4
5
5
6
6
7
1.00
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.61
sk
2k
2.640
2.6205
2.61215
2.61232
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
3
71
572
5334
53458
53488
53487
53489
53487
53486
53486
0
2
2
0
848
854
2.640
2.6205
2.61217
2.61232
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
9
657
560
53431
53475
53487
53487
53487
53486
53486
53486
5
16
10
17
04
13
85
)
c1
c2
c0 +
+ 2 + ... ,
n
n
as n
(13.1)
where < 0 and c0 (6= 0), c1 , . . . are unknown constants independent of n. We dene n
by
(
n = 1 +
1
).
sn
2 sn1
(13.12)
(
)
t1
t2
t0 + + 2 + . . . ,
n
n
90
as n ,
(13.13)
where t0 (6= 0), t1 , . . . are unknown constants independent of n. Thus by applying the
generalized -algorithm with parameter 2 on (n ), we can estimate the exponent .
Suppose that the rst n terms of a sequence (sn ) satisfying (13.1) are given. Then
(m)
as follows:
(m)
1 = 0,
(13.14a)
(m)
= m ,
(m)
= k2
m 1,
(m+1)
(13.14b)
k+1
(m+1)
k1
(m)
k1
k = 1, 2, . . . .
(13.14c)
Next, we dene n (n 3) by
{
n =
n3
(1)
if n is odd,
(0)
if n is even.
n2
(13.15)
n,0 = sm ,
(m)
n,1
(m)
n,k
m = 0, . . . , n
n
, m = 0, 1, . . . , n 1,
= (m+1)
(m)
n,0 n,0
k 1 n
(m+1)
= n,k2 + (m+1)
, k = 2, . . . , n; m = 0, . . . , n k.
(m)
n,k1 n,k1
(13.16a)
(13.16b)
(13.16c)
This scheme is called the automatic generalized -algorithm. The data ow of this scheme
is as follows (case n = 4):
s1
s4
&
&
&
s1
4,0
(2)
4,0
(3)
4,0
(4)
4,0
s2
s3
s2
s3
s4
(1)
(2)
0
(1)
&
&
&
4,1
&
(0)
(1)
1
&
&
(0)
4,2
&
(1)
4,2
&
(2)
4,2
&
&
(0)
&
4,4
(0)
(1)
4,1
(2)
4,1
(3)
4,1
91
(0)
4,3
(1)
4,3
(0)
(1)
(1)
Table 13.2
The automatic generalized -algorithm
applied to (1.5)
n
sn
1
2
3
4
5
6
7
8
9
10
11
12
13
14
1.00
1.35
1.54
1.67
1.76
1.82
1.88
1.92
1.96
1.99
2.02
2.04
2.06
2.08
2.61
(n2k)
0.544
0.5071
0.50015
0.50001
0.50000
0.49999
0.49999
0.50000
0.50000
0.50000
0.49999
0.50000
0.50000
n,2k
4
0052
9980
99946
00002
00001
00002
99999
00000
00000
3
1
9
76
93
00
2.55
2.604
2.61217
2.61236
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
2.61237
60
568
53453
53487
53487
53488
53488
53487
53486
53486
1
1
9
2
2
49
85
92
cj nj ,
c0 6= 0, 6= 0},
(14.1)
j=0
sn+1 s
sn+1
= lim
= 1 },
n sn s
n sn
(14.2)
respectively. For (sn ) S , (sn ) converges if 0 < || < 1, and (sn ) diverges if || > 1.
We consider subsets of S and LOGSF as follows:
cj nj ,
c0 6= 0, < 0},
(14.3)
j=0
L1 = { (sn ) | sn s + n
L2 = { (sn ) | sn s + n
L3 = { (sn ) | sn s +
j=0
cj nj ,
c0 6= 0, N},
(14.4)
cj nj ,
c0 6= 0, < 0},
(14.5)
j=0
m
i=1
cij nj ,
(14.6)
j=0
aj + bj log n
,
L4 = { (sn ) | sn s + n
nj
j=0
L5 = { (sn ) | sn s + n (log n)
j=0 i=0
< 0},
cij
,
(log n)i nj
(14.7)
*The material in this section is an improvement of the authors informal paper: N. Osada, Asymptotic
expansion and acceleration methods for certain logarithmically convrgent sequences, RIMS Kokyuroku
676(1988), pp.195-207.
93
L2 L4 LOGSF,
L2 L5 LOGSF.
(14.9)
94
Table 14.1
Test series
set No.
S
1
2
(0.8)i
i=1
n
Alt
3
4
L1
5
6
7
8
= 1, = 4
(1)i1
1
i
log 2
= 1 Example 3.1
2 /6
= 1 Example 3.4
= 2 Example 3.4
=1
1 = 0.5, 2 = 1
= 1 Example 3.6
log i
i i
1
i(log i)2
= 1 Example 3.7
i=1
n
1
(1)i1
i
i=1
1
i2
i=1
n
11
i+1
i2
log i
i=1
n
i=1
L5 12
i3
i i
i=1
n (
)2
1/i
i+e
i=1
L4 10
n+1
i=2
= 1, = 0.8
log 5
i=1
L3
log 5
4i
i
i=1
L2
asymptotic expansion
(1)i1
i=1
n
sum
i2
2 Example 3.5
(14.10)
1nm
Acceleration methods taken up in this section are as follows. The -algorithm, the
Levin u-, v-, and t-transforms, the d(2) - and d(3) -transforms, the iterated Aitken 2 process, the automatic modied Aitken 2 formula, the W transform, the -algorithm, and
the automatic generalized -algorithm. All methods require no knowledge of asymptotic
expansion of objective sequence. We compare with the quantities listed in Table 14.2.
Table 14.2
Acceleration methods
acceleration method
denition
quantity
-algorithm
(8.10)
2k
Levin u-transform
9.1
Tn1
Levin v-transform
9.1
Tn2
Levin t-transform
9.1
Tn2
d(2) -transform
9.3
E2l2
d(3) -transform
9.3
E3m3
(10.10)
10.3
(11.17)
-algorithm
(12.3)
13.2
(n2k)
Tk
, k = b(n 1)/2c
(nl)
sn,l , l = bn/2c
Wp(n3p) , p = b(n 1)/3c
(n2l)
2l
, l = bn/2c
(n2k)
n,2k , k = b(n 1)/2c
(n2k)
, k = b(n 1)/2c
(1)
(1)
(1)
(n2l+2)
, l = bn/2c
(n3m+3)
, m = bn/3c
For each acceleration method we show the maximum signicant digits from 20 terms
of each test series in Table 14.3. In table 14.3, the number of terms is abbreviated to
NT, and the number of signicant digits is abbreviated to SD. Numerical computations reported in this section were carried out on the NEC ACOS-610 computer in
double precision with approximately 16 digits.
96
Table 14.3
The maximum signicant digits
partial sum
NT
SD
S
Alt
L1
L2
L3
L4
L5
1
2
3
4
5
6
7
8
9
10
11
12
20
2.72
diverge
20
1.61
20
0.96
20
1.31
20
2.92
20
0.35
20
0.17
20
0.31
20
0.71
20
0.35
20
0.49
-algorithm
NT
SD
20
20
20
20
20
20
20
20
20
20
20
20
8.01
7.81
15.11
15.74
2.06
4.17
0.78
0.53
0.75
1.62
0.01
0.69
Levin u
NT
SD
19
16
15
14
14
12
10
13
20
20
20
20
10.78
10.78
15.90
15.56
11.46
11.49
9.01
8.44
2.58
3.11
0.98
1.07
Levin v
NT
SD
20
16
15
14
12
12
12
13
19
20
20
20
10.40
11.58
15.86
15.56
9.64
11.01
8.55
7.58
2.96
3.47
1.24
1.13
Levin t
NT
SD
S
Alt
L1
L2
L3
L4
L5
1
2
3
4
5
6
7
8
9
10
11
12
20
16
14
15
20
20
20
20
20
20
20
20
10.49
10.54
15.95
15.54
2.28
4.60
0.89
0.16
0.87
1.52
0.03
0.74
d(2) -trans
NT
SD
20
19
20
20
18
16
14
15
12
13
13
19
7.16
8.36
13.85
15.90
11.29
12.44
9.82
10.76
5.54
7.07
4.98
1.29
97
d(3) -trans
NT
SD
20
19
20
19
20
16
19
18
15
20
17
20
6.45
6.18
12.84
15.31
10.69
12.20
10.20
9.63
8.54
6.99
4.80
1.59
Aitken 2
NT
SD
17
17
19
16
20
19
19
7
19
15
20
20
9.62
11.77
16.08
16.08
3.18
5.72
1.42
1.05
1.35
1.33
0.18
0.81
aut mod 2
NT
SD
S
Alt
L1
L2
L3
L4
L5
1
2
3
4
5
6
7
8
9
10
11
12
13
18
19
19
12
19
11
19
19
16
14
20
Lubkin W
NT
SD
6.76
10.52
16.01
15.50
11.02
12.53
9.79
9.70
3.04
3.03
0.75
1.13
20
19
19
17
15
18
15
13
20
12
20
20
-algorithm
NT
SD
10.69
11.13
16.26
15.54
9.66
10.92
8.34
8.38
4.43
2.25
0.46
1.31
decelerate
decelerate
decelerate
decelerate
18
11.66
20
12.81
18
1.52
20
1.03
17
1.54
20
3.03
18
0.44
20
0.89
aut gen
NT
SD
20
20
19
19
20
14
9
17
18
19
17
20
8.06
7.99
14.51
14.59
12.18
13.05
10.51
11.29
3.08
3.58
1.27
1.15
14.4 Extraction
As Delahahe and Germain-Bonne[14] proved, there is no acceleration method that
can accelerate all sequences belonging to LOGSF. However, if a logarithmic sequence
(sn ) satises the asymptotic form
sn = s + O(n ),
(14.11)
sn = s + O(n (log n) ),
(14.12)
or
(14.13)
s2n = s + O((2 )n n ),
(14.14)
or
98
respectively. Both (14.13) and (14.14) converges linearly to s with contraction ratio 2 .
In particular, if a sequence (sn ) belongs to L3 , the subsequence (s2n ) of (sn ) satises
the asymptotic expansion of the form
s2n s +
cj nj ,
(14.15)
j=1
where cj and 1 > 1 > 2 > > 0 are constants. By Theorem 8.4, the -algorithm and
the iterated Aitken 2 process can accelerate the subsequence eciently.
If (sn ) satises
sn = s + O((log n) ),
(14.16)
(14.17)
that is, (s2n ) converges logarithmically. Therefore the d-transform or the automatic
generalized -algorithm are expected to accelerate the convergence of (s2n ).
In Table 14.4, we take up the Levin t-transform, the -algorithm, the iterated Aitken
2 process, and the d(2) -transform as acceleration methods, and we apply to the last 6
series in Table 14.1. Though we do not list in Table 14.4, the Levin t-transform is slightly
better than the Levin u-, v-transforms. The d(2) -transform is slightly better than the
d(3) -transform.
Table 14.4
The maximum signicant digits
series
partial sum
Levin
No.
NT
SD
NT
7
8
9
10
11
12
16384
16384
16384
16384
16384
16384
1.81
1.36
1.80
3.18
0.74
0.99
16384
16384
16384
16384
16384
16384
t
SD
-algorithm
NT
SD
Aitken 2
NT
SD
d(2)
NT
SD
Table 14.5
The maximum signicant digits for
n
2
+1
i=2
series
No.
1
.
i(log i)2
Levin u
Lubkin W
NT
NT
NT
SD
SD
SD
NT
3.91 2048
SD
d(3)
NT
SD
14.5 Conclusions
Table 14.3 show that the best available methods are the d(2) - and d(3) -transforms.
For S , all tested methods except the -algorithm work well. For L1 and L2 , the
automatic generalized -algorithm is the best.
The Levin u-, v-transforms, the automatic generalized -algorithm, and the automatic modied Aitken 2 formula, and Lubkins W transforms are good methods. The
automatic generalized -algorithm and the automatic modied Aitken 2 formula are
generalizations of the sequence transformation
sn 7 sn
1 sn sn1
,
2 sn1
(14.18)
100
j=1
xj
f (x)dx =
xj1
Ij .
(15.1)
j=1
cj nj .
(15.2)
j=1
j=0
101
cj nj ,
(15.3)
or
aj + bj log n
cj + dj log n
Sn I + n
+n
.
j
n
nj
j=0
j=0
(15.4)
If f (x) is of class C in [a, b] and if the quadrature formula is either the trapezoidal rule
or the midpoint rule, then j = 2j 2 in (15.3).
II-1. When j s in (15.3), or and in (15.4) are known, by applying the Richardson
extrapolation or the E-algorithm to (Sn ) or (S2n ), the convergence of (Sn ) is accelerated.
In 1955, W. Romberg[45] applied the Richardson extrapolation to (S2n ) when j =
2j 2 in (15.3). Since 1961, many authors such as I. Navot[33] and H. Rutishauser[46]
applied it to improper integrals, see Joyces survey paper[22].
II-2. When the asymptotic scale in the asymptotic expansion is unknown, acceleration methods taken up in the previous section are applied to (Sn ) or (S2n ). As we saw in
the previous section, when there are integers i and j such that i j is not integer, we
can not obtain a high accurate result by applying an acceleration method to (Sn ) itself.
However, for (15.3) and (15.4) it is expected to obtain a good result by applying the
-algorithm or the iterated Aitken 2 process to (S2n ). The rst proposer of this method
was C. Brezinski[6,7]. In 1970 and 1971, he applied the -algorithm with parameter to
(S2n ) and the -algorithm when j = 2j 2 in (15.3). Subsequently many authors such
as D. K. Kahaner[23] applied acceleration methods to nite integrals.
III. For another way of applying extrapolation methods to numerical integration,
see Brezinski and Redivo Zaglia[11, p.366386] and Rabinowitz[43].
15.2 Application to semi-innite integrals
We apply the above method I-1 to integrals listed in Table 15.1.
102
Table 15.1
Semi-innite integrals
with a monotonically decreasing integrand
No.
integral
Sn =
ex dx
geometric series
x2 ex dx
Sn
3
0
2n
exact
dx
1 + x2
f (x)dx
cj
+ j=1 2j1
2
n
2j
methods that we apply are the Levin u-transform, the -algorithm, Lubkins W transform, the iterated Aitken 2 -process, the d(2) -transform and the automatic generalized
-algorithm. The tolerances are = 106 and 1012 . For 0 dx/(1 + x2 ), we take
= 109 instead of 1012 . The stopping criterion is |Tn Tn1 | < , where (Tn ) is
the accelerated sequence. The results are shown in Table 15.2. Throughout this section,
the number of terms is abbreviated to T, and the number of functional evaluations is
abbreviated to FE.
Table 15.2
Number of terms, functional evaluations, and errors
= 106
Levin
No. T FE
1
2
3
u-transform
error
5 43
6.16 109
8 96 1.24 109
10 102
6.46 109
Aitken 2
No. T
1
2
3
3
8
FE
29
96
T
4
8
process
error
-algorithm
FE
error
Lubkins W transform
T FE
error
36
6.19 109
96 6.60 1010
failure
5
8
9
d(2) -transform
automatic generalized
FE
error
6.16 109 7 57
6.16 109
1.91 109 8 96 6.58 1010
failure
10 102 3.24 108
103
43
96
95
T FE
5
8
9
43
96
95
6.19 109
5.05 108
9.63 107
error
6.16 109
7.75 1010
4.17 108
= 1012
Levin
No. T FE
1
2
Aitken
No. T FE
1
2
u-transform
error
process
error
Lubkins W transform
T FE
error
-algorithm
FE
error
d(2) -transform
FE
error
automatic generalized
T FE
error
d(2) -transform
Levin u-transform
No. T
3
FE
error
FE
error
automatic generalized
T
FE
error
When = 1012 , all methods except the automatic generalized -algorithm (T= 27,
integral
4
0
exact
ex cos xdx
0.5
x sin x
dx
x2 + 1
cos x
dx
x2 + 1
cos x
dx
x2 + 1
sin x
dx
x2
/(2e)
/(2e)
0.42102 44382 40708
0.50406 70619 06919
104
We compute integrals between two consecutive zeros by the Romberg method. Acceleration methods considered here are the Levin u-transform, the -algorithm, and the
d(2) -transform. These methods require no knowledge of asymptotic expansion of the integrand or the integral. The tolerances are = 106 and 1012 . The results are shown
in Table 15.4.
Table 15.4
Number of terms, functional evaluations, and errors
= 106
Levin
No. T FE
1
2
3
4
5
u-transform
error
-algorithm
FE
error
d(2) -transform
FE
error
5 81 7.44 109
9 137
1.46 107
8 97 7.75 108
9 161 1.21 107
8 97 1.05 108
= 1012
No.
1
2
3
4
5
Levin u
T
FE
5 353
12 641
12 897
12 897
12 1025
transform
error
2.50 1016
4.53 1014
1.82 1014
2.01 1013
9.01 1015
-algorithm
FE
error
4 321
4.16 1017
18 833 9.40 1015
17 1217
8.10 1014
18 1281 6.90 1014
17 1345
3.68 1014
d(2) -transform
FE
error
The Levin v- and t-transforms perform similar to the Levin u-transform. The iterated Aitken 2 process and Lubkins W trasnsform are slightly better than the algorithm. The d(2) -transform is better that the d(3) -transform. The best acceleration
methods we tested for semi-innite oscillating integrals are the Levin transforms.
These results are less than Hasegawa and Toriis results[19], but they are available
in practice because they require no knowledge of the integrand.
105
No.
integral
xdx
exact
2/3
Sn = h
i=1
Sn I + c0 n
1.5
2
0
dx
2 1
B( , )
3 3
4
0
log x
dx
x
Sn I + c0 n0.5 +
j=1
cj n2j
cj n2j
j=1
2/ 3
4
Sn I +
Sn I +
j=1
j=0
+cj nj1 )
We use the midpoint rule as the quadrature formula. Acceleration methods are
the Levin u-transform, the -algorithm, Lubkins W transform, the iterated Aitken 2 process, the d(2) -transform and the automatic generalized -algorithm. The tolerance is
= 106 and the maximum number of terms is 15. The results are shown in Table 15.6.
106
Table 15.6
Number of terms, functional evaluations, and errors
= 106
No.
1
2
3
4
No.
1
2
3
4
Levin u-transform
T
FE
error
8
255
3.70 109 7
13 8191 2.09 107 8
15 32767
7.21 106 13
14 16383 4.90 107 11
Aitken 2 process
T
FE
error
-algorithm
FE
error
Lubkins W transform
T
FE
error
automatic generalized
T
FE
error
6
63 4.22 107 8
255 1.20 107 8
255
9
511 9.10 1010 14 16383 3.36 108 15 32767
15 32767 6.56 106 15 32767
1.64 105 15 32767
6
15 32767 2.01 10
15 32767 1.06 107 14 16383
2.24 1011
8.26 109
9.97 106
3.20 107
The -algorithm is the best. For the tolerance = 109 , only the -algorithm
succeeds on all integrals listed in Table 15.5, provided that the number of terms is less
than or equals to 15.
107
CONCLUSIONS
In this paper we studied acceleration methods for slowly convergent scalar sequences
from asymptotic view point, and applied these methods to numerical integration.
In conclusion, our opinion is as follows.
1. Suppose that a sequnce (sn ) has an asymptotic expansion of the form
sn s +
cj gj (n).
(1)
j=1
(2)
2. By the above 1, if we know (gj (n)), we can choose a suitable acceleration method
for (sn ).
3. We show the most suitable methods in the following Table 1. We append the
number of theorem giving the asymptotic formula.
4. For a logarithmically convergent sequence (sn ), we can usually obtain higher
accuracy when we apply to (s2n ).
5. There is no all-purpose acceleration method. The best method of all we treated
is the d-transform, and the second best method is the automatic generalized -algorithm.
In application, we can usually determine a type of an aymptotic expansion of an objective
sequence. For example, when we apply the midpoint rule to an improper integral with
endpoint singularity, the objective sequence has the asymptotic expansion of the form
sn s + n
cj n
j=0
+n
dj nj .
(3)
j=0
) gives high
accurate result. In particular, it is a good method that the -algorithm is applied to M2n ,
where M2n is the 2n panels midpoint rule.
108
Table 1
Suitable acceleration methods
asymptotic expansion
sn s
n
j=1 cj j
n n j=0 cj nj
n j=0 cj nj
cij ni j
j=0 (aj
asymptotic scale
unknown
(sn )
Richardson extrapolation1)
-algorithm2)
(sn )
E-algorithm3)
Levin transforms4)
(sn )
(s2n )
generalized -algorithm5)
modied Aitken 2 formula6)
Richardson extrapolation1)
automatic generalized
-algorithm
-algorithm2)
(sn )
(s2n )
E-algorithm3)
Richardson extrapolation1)
d-transform
-algorithm2)
E-algorithm3)
E-algorithm3)
d-transform
-algorithm2)
E-algorithm3)
E-algorithm3)
d-transform
i j
n
i,j cij (log n)
1)
asymptotic scale
known
(sn )
(s2n )
Formula (7.37), 2) Theorem 8.4, 3) Theorem 6.2, 4) Theorem 9.1, 5) Theorem 13.2, 6) Theorem 10.5.
(4)
nj
(5)
109
Acknowledgements
I would like to express my deepest gratitude to Professor T. Torii of Nagoya University for his constant guidance and encouragement in the preparation of this thesis.
I would like to heartly thank Professor T. Mitsui of Nagoya University for his advice
which improved this thesis.
I am indebted to Professor S. Kuwabara of Nagoya University for his taking the
trouble to referee this thesis.
I would like to give my thank Professor C. Brezinski of Unversite des Sciences et
Technologies de Lille for the invitation to an international congress in Luminy, September
1989, that made the turning point of my research.
I am grateful to Professor M. Iri of the University of Tokyo and Professor K.
Nakashima of Waseda University for their help and encouragement. I would also like
to thank Professor I. Ninomiya of Chubu University who taught me fundamentals of
numerical analysis on occasion at various meetings when I began to study it.
110
REFERENCES
[1] A.C. Aitken, On Bernoullis numerical solution of algebraic equations, Proc. Roy. Soc.
Edinburgh Ser. A 46(1926), 289305.
[2] W.G. Bickley and J.C.P. Miller, The numerical summation of slowly convergent series
of positive terms, Philos. Mag. 7th Ser. 22(1936), 754767.
[3] P. Bjrstad, G. Dahlquist and E. Grosse, Extrapolation of asymptotic expansions by a
modied Aitken 2 -formula, STAN-CS-79-719 (Computer Science Dept., Stanford Univ.,
1979).
[4] P. Bjrstad, G. Dahlquist and E. Grosse, Extrapolation of asymptotic expansions by
a modied Aitken 2 -formula, BIT 21(1981), 5665.
ements de mathematique, Fonctions dune variable reelle, (Hermann,
[5] N. Bourbaki, El
Paris, 1961).
[6] C. Brezinski, Application du -algorithme `
a la quadrature numerique, C. R. Acad.
Sc. Paris t.270(1970), 12521253.
[7] C. Brezinski, Etudes sur les - et -algorithmes, Numer. Math. 17(1971), 153162.
[8] C. Brezinski, Acceleration de suits `
a convergence logarithmique C. R. Acad. Sc. Paris
t.273(1971), 727730.
[9] C. Brezinski, Acceleration de la convergence en analyse numerique Lecture notes in
Math., 584 (Springer, Berlin, 1977).
[10] C. Brezinski, A general extrapolation algorithm, Numer. Math. 35(1980), 175187.
[11] C. Brezinski and M. Redivo Zaglia, Extrapolation methods: theory and practice,
(Elsevier, Amsterdam, 1991).
[12] N. G. de Bruijn, Asymptotic Methods in Analysis (Dover Publ., New York, 1981).
[13] F. Cordellier, Caracterisation des suites que la premi`ere etape du -algorithme transforme en suites constantes, C. R. Acad. Sc. Paris t.284(1971), 389392.
[14] J. P. Delahaye and B. Germain-Bonne, Resultats negatifs en acceleration de la convergence, Numer. Math. 35(1980), 443457.
[15] J.E. Drummond, Summing a common type of slowly convergent series of positive
terms, J. Austral. Math. Soc. 19 Ser. B(1976), 416421.
[16] W.F. Ford and A. Sidi, An algorithm for a generalization of the Richardson extrapolation process, SIAM J. Numer. Anal. 24(1987), 1212-1232.
[17] H.L. Gray and W.D. Clark, On a class of nonlinear transformations and their applications to the evaluation of innite series, J. Res. Nat. Bur. Stand. 73B(1969), 251274.
111
[18] S. Gustafson, A method of computing limit values, SIAM J. Numer. Anal. 10(1973),
10801090.
[19] T. Hasegawa and T. Torii, Indenite integration of oscillatory functions by the Chebyshev series expansion, J. Comput. Appl. Math. 17(1987), 2129.
[20] T. H
avie, Generalized Neville type extrapolation schemes, BIT 19(1979), 204213.
[21] P. Henrici, Elements of numerical analysis, (John Wiley and Sons, New York, 1964).
[22] D.C. Joyce, Survey of extrapolation processes in numerical analysis, SIAM Rev.
13(1971), 435490.
[23] D. K. Kahaner, Numerical quadrature by the -algorithm, Math. Comp. 26(1972),
689693.
[24] K. Knopp, Theory and application of innite series, 2nd. English ed., (Dover Publ.,
New York, 1990).
[25] C. Kowalewski, Acceleration de la convergence pour certaines suites `
a convergence
logarithmique, Lecture note in Math. 888(1981), 263272.
[26] D. Levin, Development of non-linear transformations for improving convergence of
sequences, Intern. J. Computer Math. 3(1973), 371388.
[27] D. Levin and A. Sidi, Two new classes of nonlinear transformations for accelerating
the convergence of innite integrals and series, Appl. Math. and Comp. 9(1981), 175215.
[28] I. M. Longman, Note on a method for computing innite integrals of oscillatory
functions, Proc. Cambridge Phil. Soc. 52(1956), 764768.
[29] S. Lubkin, A method of summing innite series, J. Res. Nat. Bur. Stand. 48(1952),
228254.
[30] J.N. Lyness and B.W. Ninham, Numerical quadrature and asymptotic expansions,
Math. Comp. 21(1967), 162178.
[31] A.C. Matos and M. Prevost, Acceleratoon property for the columns of the Ealgorithm, Numer. Algorithms 2(1992), 393408.
[32] K. Murota and M. Sugihara, A remark on Aitkens 2 -process, Trans. Inform. Process. Soc. Japan 25(1984), 892894. (in Japanese) (= , , Aitken
, ).
112
[35] N. Osada, Asymptotic expansions and acceleration methods for alternating series (in
Japanese), Trans. Inform. Process. Soc. Japan 28(1987), 431436. ( = ,
, ).
[36] N. Osada, Asymptotic expansions and acceleration methods for logarithmically convergent series (in Japanese), Trans. Inform. Process. Soc. Japan, 29(1988), 256261. ( =
, , ).
113
114
(A.1)
where < 0 and c0 (6= 0), c1 , c2 are constants. Then the following asymptotic formul
hold.
(sn )2
c0
=
s
+
n + O(n1 ).
2 sn
1
1 (sn )2
(2) sn
= s + O(n1 ).
2
sn
1 sn sn
= s + O(n2 ).
(3) sn
sn sn
Proof. (1) Using (A.1) and the binomial expansion, we have
[(
]
]
[(
)
)1
1
1
1 + c1 n1
1+
1
sn = c0 n
1+
n
n
[(
]
)2
1
2
+ c2 n
1+
1 + O(n4 )
n
)
(
1
c0 + c1 ( 1)n2
= c0 n1 +
2
[
]
1
1
+ c0 ( 1) + c1 ( 1) + c2 ( 2)n3 + O(n4 )
6
2
(1) sn
and
]
)1
1
2 sn = c0 n1
1+
1
n
[(
]
(
)
)2
1
1
+
c0 + c1 ( 1)n2
1+
1 + O(n4 )
2
n
[
(
)
]
c1
1
1
2
= c0 ( 1)n
1+ 1+
( 2) + O( 2 ) .
c0
n
n
[(
By (A.2),
2
(sn ) =
c20 2 n22
)
]
[
(
1
1
2c1
( 1) + O( 2 ) ,
1+ 1+
c0
n
n
then we have
115
(A.2)
]
[
(
)
(sn )2
c0
c1
1
1
=
n 1+ 1+
( 2) + O( 2 )
2 sn
1
c0
n
n
[
(
)
]
2c1
1
1
1 1+
( 1) + O( 2 )
c0
n
n
[
(
)
]
c0
c1 1
1
=
n 1+ 1+
+ O( 2 ) .
1
c0 n
n
Thus we obtain
sn
(A.3)
(sn )2
c0
=s+
n + O(n1 ).
2
sn
1
1 (sn )2
= s c0 n1 + O(n2 ).
2
sn
(3) Similarly,
(
)
1
sn = c0 n
+ c0 + c1 ( 1)n2
2
[
]
1
1
+ c0 ( 1) c1 ( 1) + c2 ( 2)n3 + O(n4 ).
6
2
1
c20 2 n22
[
]
2c1
1
1
1+
( 1) + O( 2 ) ,
c0
n
n
and
sn sn = c0 ( 1)n
[
]
c1 ( 2) 1
1
1+
+ O( 2 ) .
c0
n
n
[
]
sn sn
c0
c1 1
1
=
n 1+
+ O( 2 ) .
sn n
1
c0 n
n
Thus
Therefore we obtain
sn
as desired.
1 sn sn
= s + O(n2 ),
sn n
116
(A.4)
+ 1 sn sn+1
,
2 sn
cj
.
j
n
j=0
(B.1)
If Wn,k is represented as
[
Wn,k s = n
2k
(k)
c0
]
(k)
(k)
c1
c2
1
+
+ 2 + O( 3 ) ,
n
n
n
then
Wn,k+1 s = n
2k2
(k)
c0 6= 0,
(B.2)
]
[
1
(k+1)
+ O( ) ,
c0
n
where
(k)
(k+1)
c0
(k)
+
(k)
(k)
Proof. Let c0 , c1
(k)
(k)
(k)
6( 2k)
c0 ( 2k)3
(k)
(k)
and c2
(B.3)
be dened by
[
Wn,k s = (n + k)2k
]
(k)
(k)
c
1
(k)
2
+ O(
) .
c0 + 1 +
n + k (n + k)2
(n + k)3
Then
(
)
(k)
(k)
(k)
Wn,k s = c0 n2k + c0 k( 2k) + c1 n2k1
(
)
1 (k) 2
(k)
(k)
+
n2k2
c0 k ( 2k)( 2k 1) + c1 k( 2k 1) + c2
2
+ O(n2k3 )
(B.4)
117
(k)
c0 = c0 ,
(k)
(k)
(k)
c1 = c0 k( 2k) + c1 ,
1 (k)
(k)
(k)
(k)
c2 = c0 k 2 ( 2k)( 2k 1) c1 k( 2k 1) + c2 .
2
By Theorem 10.4,
(k+1)
2k (Wn,k ) s = d0
where
(k)
(k)
(k+1)
d0
(k)
c0 (1 + 2k)
(
c )2
2
c2
(k) 1
+
,
12
( 2k)( 2k 1)
c0 ( 2k)2
(k)
(k)
(k)
= 0
(k)
12
c0 ( 2k)2
(k)
(k)
(k)
(B.5)
By (B.2),
Wn+1,k
)
(k)
(
2k
1)
1
3
c
(k)
= c0 ( 2k)n2k1 1 +
( 2k 1) + 1 (k)
2
n
c0 ( 2k)
(
)
(k)
(k)
c
7
3c ( 2k 1)
2k 2
+ (k) 2
+
( 2k 1) + 1 (k)
6
n2
2c0 ( 2k)
c0 ( 2k)
]
1
+O( 3 ) ,
n
(
(B.6)
and
Wn+1,k 2k (Wn,k )
[
(
=
(k)
c0 n2k
1+
2k +
(k)
c1
(k)
c0
1
n
(k)
(k)
(k+1)
c ( 2k 1) c2
d
1
+
( 2k)( 2k 1) + 1
+ (k) 0 (k)
(k)
2
c0
c0
c0
]
1
+O( 3 ) .
n
118
1
n2
(B.7)
We put
Wn,k+1 s = Wn+1,k N/D.
Then
N = Wn+1,k (Wn+1,k 2k (Wn,k ))
[
(
)
(k)
3
c
(2
4k
1)
5
1
(k)
= (c0 )2 ( 2k)n24k1 1 +
5k + 1 (k)
2
2
n
c0 ( 2k)
(
(k)
c1 ( 2k 1)(5 10k 3)
19 19k 7
)+
+ ( 2k 1)(
(k)
6
3
3
c0 ( 2k)
]
)
(k)
(k)
(k+1)
2c2 ( 2k 1) (c1 )2 ( 2k 1) d0
1
1
+
+
(k)
+ O( 3 ) .
(k)
(k)
n2
n
c ( 2k)
(c )2 ( 2k)
c
0
(B.8)
Similarly, we have
D = Wn+1,k 2k (Wn,k )
[
(
(k)
= c0 ( 2k)n2k1 1 +
(
7
( 2k 1) +
6
]
1
+O( 3 ) .
n
(k)
c ( 2k 1)
3
( 2k 1) + 1 (k)
2
c0 ( 2k)
(k)
3c1 ( 2k 1)
(k)
2c0 ( 2k)
(k)
c2
(k)
c0 (
2k)
1
n
(k+1)
d0
(k)
c0 ( 2k)
2k 2
n2
(B.9)
+
( 2k)( 2k 1) +
2
(k)
(k)
(k)
2
c0
c0
c0 ( 2k) n
]
1
(B.10)
+O( 3 ) .
n
Since Wn,k+1 s = Wn+1,k N/D , we obtain
(k+1)
2d
Wn,k+1 s = 0
n2k2 + O(n2k3 ).
2k
This completes the proof.
119
FORTRAN PROGRAM
Here we give a FORTRAN program that includes the subroutines GENRHO, the
generalized -algorithm, and MODAIT, the modied Aitken 2 formula. The main routine given below is an example of applications of the subroutines to the series
n
1
.
sn =
i i
i=1
The parameters NMAX, EPSTOR, DMINTOR, XTV in GENRHO and MODAIT are as
follows:
NMAX
EPSTOR
DMINTOR
A positive number. For a variable x, if |x| <DMINTOR then the program stop.
XTV
ILL
TH
XX
(n2k)
2k
RHO
(n1k)
RHO(1,k):k
(nk)
RHO(1,k):k
KOPT
(input)
(output)
(n2k)
is
sk
(n1k)
S(1,k):sk
(nk)
S(1,k):sk
KOPT
(input)
(output)
(nk)
is
DS
(nk1)
(nk2)
DS(1,k):sk
sk
(nk)
DS(1,k):sk
(nk1)
sk
(input)
(output)
102
103
109
202
203
209
210
211
220
221
229
231
232
239
201
300
D2X=DX-DX0
DD=DX/D2X
GO TO 209
ENDIF
DX0=DX
DX=XX-X0
D2X=DX-DX0
DD0=DD
DD=DX/D2X
ALPHA=1.0D0/(DD-DD0)+1.0D0
TH=-2.0D0
NN=N-2
GO TO (202,203),IACCL
CALL GENRHO(ALPHA,TRHO,NN,DMINTOR,KOPT,NMAX,ILL,TH)
GO TO 209
CALL MODAIT(ALPHA,TS,DTS,NN,DMINTOR,KOPT,NMAX,ILL,TH)
GO TO 209
CONTINUE
ERX=ABS(XX-XTV)
SDXER=-LOG10(ERX)
XP=XX
IF (N.LE.2) GO TO 229
GO TO (210,220),IACCL
CONTINUE
DO 211 NN=1,N
XP=X(NN)
TH=ALPHA
CALL GENRHO(XP,RHO,NN,DMINTOR,KOPT,NMAX,ILL,TH)
CONTINUE
GO TO 229
CONTINUE
DO 221 NN=1,N
XP=X(NN)
TH=ALPHA
CALL MODAIT(XP,S,DS,NN,DMINTOR,KOPT,NMAX,ILL,TH)
CONTINUE
GO TO 229
CONTINUE
ER=ABS(XP-XTV)
SDER=-LOG10(ER)
IF (N.LE.2) GO TO 232
WRITE (*,2000) N,XX,ALPHA,XP,SDXER,SDER,KOPT
GO TO 239
WRITE (*,2010) N,XX
GO TO 239
CONTINUE
DXP=ABS(XP-XP0)
XP0=XP
IF (DXP.LT.EPSTOR) GO TO 300
IF (ILL.GE.1) GO TO 300
IF (N.LE.2) GO TO 201
CONTINUE
CONTINUE
122
101
2000
2010
3000
3100
9999
WRITE (*,3100)
IF (ILL.GE.1) THEN
WRITE (*,*) ABNORMALLY ENDED
ENDIF
WRITE (*,3100)
CONTINUE
FORMAT (I4,3D25.15,2F7.2,I5)
FORMAT (I4,1D25.15)
FORMAT (1H1)
FORMAT (/)
STOP
END
FUNCTION TERM(N)
REAL*8 X,TERM
X=DBLE(N)
TERM=1.0D0/X/SQRT(X)
RETURN
END
SUBROUTINE GENRHO(XX,RHO,N,DMINTOR,KOPT,NMAX,ILL,TH)
REAL*8 DMINTOR,ER,TH
REAL*8 RHO(0:1,0:NMAX)
REAL*8 DRHO,XX
KOPT=0
IF (N.EQ.1) GO TO 110
KEND=N-1
DO 101 K=KEND,0,-1
RHO(0,K)=RHO(1,K)
101 CONTINUE
110 CONTINUE
RHO(1,0)=XX
IF (N.EQ.1) THEN
RHO(1,1)=-TH/XX
GO TO 199
ENDIF
DRHO=RHO(1,0)-RHO(0,0)
ER=ABS(DRHO)
KOPT=0
IF (ER.LT.DMINTOR) THEN
ILL=1
GO TO 199
ENDIF
RHO(1,1)=-TH/DRHO
KEND=N
DO 121 K=2,KEND
DRHO=RHO(1,K-1)-RHO(0,K-1)
ER=ABS(DRHO)
IF ((ER.LT.DMINTOR).AND.(MOD(K,2).EQ.1)) THEN
KOPT=K-1
GO TO 140
ENDIF
IF ((ER.LT.DMINTOR).AND.(MOD(K,2).EQ.0)) THEN
123
ILL=1
GO TO 199
ENDIF
RHO(1,K)=RHO(0,K-2)+(DBLE(K-1)-TH)/DRHO
121 CONTINUE
IF (MOD(N,2).EQ.0) THEN
KOPT=N
ELSE
KOPT=N-1
ENDIF
140 CONTINUE
XX=RHO(1,KOPT)
199 RETURN
END
101
110
111
120
140
199
SUBROUTINE MODAIT(XX,S,DS,N,DMINTOR,KOPT,NMAX,ILL,TH)
REAL*8 DMINTOR,TH,COEF
REAL*8 W1,W2,XX
REAL*8 S(0:1,0:NMAX),DS(0:1,0:NMAX)
KOPT=0
IF (N.EQ.1) GO TO 110
KEND=INT((N-1)/2)
DO 101 K=0,KEND
S(0,K)=S(1,K)
IF ((MOD(N,2).EQ.1).AND.(K.EQ.KEND)) GO TO 101
DS(0,K)=DS(1,K)
CONTINUE
CONTINUE
S(1,0)=XX
IF (N.EQ.1) THEN
DS(1,0)=XX
GO TO 199
ENDIF
DS(1,0)=XX-S(0,0)
KEND=INT(N/2)
DO 111 K=1,KEND
W1=DS(0,K-1)*DS(1,K-1)
W2=DS(1,K-1)-DS(0,K-1)
IF (ABS(W2).LT.DMINTOR) THEN
ILL=1
GO TO 199
ENDIF
COEF=(DBLE(2*K-1)-TH)/(DBLE(2*K-2)-TH)
S(1,K)=S(0,K-1)-COEF*W1/W2
IF (N.EQ.2*K-1) GO TO 111
DS(1,K)=S(1,K)-S(0,K)
CONTINUE
KOPT=INT(N/2)
CONTINUE
XX=S(1,KOPT)
RETURN
END
124