Padé-Type Approximation and General Orthogonal Polynomials

ISNM 50:
International Series of Numerical Mathematics
Internationale Schriftenreihe zur Numerischen Mathematik
Série internationale d'Analyse numérique
Vol. 50

Claude Brezinski

Padé-Type Approximation and General Orthogonal Polynomials

1980
Springer Basel AG
CIP-Kurztitelaufnahme der Deutschen Bibliothek

Brezinski, Claude:
Padé-type approximation and general orthogonal polynomials / by Claude Brezinski. - Basel, Boston, Stuttgart: Birkhäuser, 1980.
(International series of numerical mathematics; Vol. 50)
Contents

Introduction  7

Chapter 1: Padé-type approximants  9
1.1 Definition of the approximants  9
1.2 Basic properties  14
1.3 Convergence theorems  24
1.4 Some applications  28
1.5 Higher order approximants  32

Chapter 2: General orthogonal polynomials  40
2.1 Definition  40
2.2 Recurrence relation  43

Chapter 4: Generalizations  159
4.1 The topological ε-algorithm  159

Appendix  227
Bibliography  240
Index  249
Introduction
In the last few years Padé approximants have become more and more widely used in various fields of physics, chemistry and mathematics. They provide rational approximations to functions which are formally defined by a power series expansion. Padé approximants are also closely related to some methods which are used in numerical analysis to accelerate the convergence of sequences and iterative processes.
Many books have recently appeared on this subject, dealing with algebraic properties, the study of convergence, applications and so on. Chapters on Padé approximants can also be found in older books on continued fractions because these two subjects have a strong connection. The scope of this book is quite different. A Padé approximant is defined so that its power series expansion matches the power series to be approximated as far as possible. This property completely defines the denominator as well as the numerator of the Padé approximant under consideration. The trouble arising with Padé approximants is the location of the poles, that is, the location of the zeros of the denominator. One has no control over these poles and it is impossible to force them to be in some region of the complex plane.
This was the reason for the definition and study of the so-called Padé-type approximants. In such approximants it is possible to choose some of the poles and then to define the denominator and the numerator so that the expansion of the approximant matches the series to be approximated as far as possible. On one hand it is possible to choose all the poles and, on the other hand, it is possible to choose no pole at all, which is nothing but the definition of a Padé approximant. Such an approach to the problem directly leads to the introduction of general orthogonal polynomials into the theory of Padé approximants. This connection, known for a long time, had not been fully exploited. Thus the aim of this book is twofold: first to introduce Padé-type approximants and secondly to study Padé approximants on the basis of general orthogonal polynomials. The complete algebraic theory of Padé approximants is unified on this basis; old and new results can be easily obtained, such as recurrence schemes for computing Padé approximants, error formulas, matrix interpretations and so on. Properties of some convergence acceleration methods for sequences of numbers or for sequences of vectors can also be derived from the theory. Padé-type approximants are also very useful in applications since they can provide better results than Padé approximants.
The material contained in this book leads to many research problems, especially in the new field of Padé-type approximants. Many results appear here for the first time.
The contents are as follows. Chapter 1 deals with the definition and general properties of Padé-type approximants. General orthogonal ...
Chapter 1
Padé-type approximants
1.1. Definition of the approximants

Let

f(t) = Σ_{i=0}^∞ c_i t^i.
This is a formal equality in the sense that if the series on the right-hand side converges for some t then f(t) is equal to its sum; if the series diverges, f represents its analytic continuation (assumed to exist). In the sequel we shall only be dealing with formal power series and formal equalities, which means that the series developments of both sides of an equality are the same.
Our purpose is to construct a rational fraction whose denominator has degree k and whose numerator has degree k - 1, such that its expansion in ascending powers of t coincides with the expansion of f up to the degree k - 1.
There are several reasons for looking for such rational approximations to series. The first is to obtain an approximation to a function which can, for example, be used in computations. The second is that the series may converge too slowly to be of any use and that we want to accelerate its convergence. The third reason is that only a few coefficients of the series may be known and that a good approximation to the series is needed to obtain properties of the function it represents.
Let us now begin our investigation by defining a linear functional c acting on the space of real polynomials by

c(x^i) = c_i for i = 0, 1, ....

The number c_i is said to be the moment of order i of the functional c.
Lemma 1.1.
f(t) = c((1 - xt)^{-1}).

Proof.
c((1 - xt)^{-1}) = c(1) + c(x)t + c(x^2)t^2 + ... = c_0 + c_1 t + c_2 t^2 + ... = f(t).
Let v be an arbitrary polynomial of degree k,
v(x) = b o + b1x + ... + bkxk,
and define w by

w(t) = c((v(x) - v(t))/(x - t)).

Lemma 1.2. w is a polynomial of degree k - 1,

w(t) = a_0 + a_1 t + ... + a_{k-1} t^{k-1},

with

a_i = Σ_{j=0}^{k-i-1} c_j b_{i+j+1} for i = 0, ..., k - 1.
Proof.
(v(x) - v(t))/(x - t) = Σ_{i=1}^k b_i (x^i - t^i)/(x - t)
since
(x^i - t^i)/(x - t) = x^{i-1} + x^{i-2} t + ... + x t^{i-2} + t^{i-1}.
Applying c to both sides of the preceding equality, the result immediately follows.
Two polynomials v differing only by the constant term give rise to the
same polynomial w.
Let us now define ṽ and w̃ by
ṽ(t) = t^k v(t^{-1}), w̃(t) = t^{k-1} w(t^{-1}).
That is to say, we reverse the enumeration of the coefficients in v and w so that
ṽ(t) = b_0 t^k + b_1 t^{k-1} + ... + b_k,
w̃(t) = a_0 t^{k-1} + a_1 t^{k-2} + ... + a_{k-1}.
w̃ (or w) is said to be associated to v (or ṽ) with respect to the functional c.
The main result is

Theorem 1.1.
w̃(t)/ṽ(t) - f(t) = O(t^k), t → 0.

Such a rational fraction, whose expansion agrees with that of f up to the degree k - 1, is called a Padé-type approximant of f and will be denoted by (k-1/k)_f(t).
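As a concrete illustration of Lemma 1.2 and Theorem 1.1 (not taken from the book), the construction can be carried out with exact rational arithmetic. The test series f(t) = 1/(1 - t), i.e. c_i = 1, and the generating polynomial v(x) = (x - 1/2)^3 are arbitrary choices:

```python
from fractions import Fraction

def assoc_numer(c, b):
    """Lemma 1.2: coefficients a_0..a_{k-1} of w, a_i = sum_j c_j b_{i+j+1}."""
    k = len(b) - 1
    return [sum((c[j] * b[i + j + 1] for j in range(k - i)), Fraction(0))
            for i in range(k)]

def series_div(num, den, n):
    """Ascending-power coefficients of num/den up to degree n (den[0] != 0)."""
    num = list(num) + [Fraction(0)] * (n + 1 - len(num))
    den = list(den) + [Fraction(0)] * (n + 1 - len(den))
    out = []
    for i in range(n + 1):
        out.append((num[i] - sum(out[j] * den[i - j] for j in range(i))) / den[0])
    return out

k = 3
c = [Fraction(1)] * k                 # only c_0..c_{k-1} enter the construction
b = [Fraction(-1, 8), Fraction(3, 4), Fraction(-3, 2), Fraction(1)]  # (x - 1/2)^3
a = assoc_numer(c, b)
w_tilde = a[::-1]                     # w~(t) = t^{k-1} w(1/t)
v_tilde = b[::-1]                     # v~(t) = t^k  v(1/t)
print(series_div(w_tilde, v_tilde, k - 1))   # reproduces c_0..c_{k-1}
```

The first k coefficients of w̃/ṽ equal c_0, ..., c_{k-1}, while the coefficient of t^k differs, in accordance with w̃(t)/ṽ(t) - f(t) = O(t^k).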
Let us now write
f(t) = c_0 + c_1 t + ... + c_n t^n + t^{n+1} f_n(t)
with
f_n(t) = c_{n+1} + c_{n+2} t + ....
A Padé-type approximant of f_n with the same denominator ṽ can be constructed with
w(t) = c^{(n+1)}((v(x) - v(t))/(x - t)),
where the shifted functional c^{(n+1)} is defined by
c^{(m)}(x^i) = c(x^{m+i}) = c_{m+i}.
The reason for this shift is due to the fact that the first coefficient of f_n is c_{n+1}. The preceding rational fraction has a denominator of degree k and a numerator of degree n + k. We shall denote such a fraction by (n + k/k)_f(t).
Theorem 1.2.
(n + k/k)_f(t) - f(t) = O(t^{n+k+1}), t → 0.

Proof. Applying the functional c with the convention that c(x^i) = c_i = 0 for i < 0, the result follows from Theorem 1.1 applied to f_n.
Let
T_n(t) = 0 + 0·t + ... + 0·t^{n-2} + c_0 t^{n-1} + c_1 t^n + ... = t^{n-1} f(t).
Since the first n - 1 coefficients of T_n are equal to zero it is easy to see, from Lemma 1.2, that w reduces to a polynomial of degree k and that w̃ has only terms from the degree n - 1 up to the degree n + k - 1. Thus the rational fraction
t^{-n+1} (n + k - 1/n + k)_{T_n}(t)
has a numerator of degree k and a denominator of degree n + k; it will be denoted by (k/n + k)_f(t).

Theorem 1.3.
(k/n + k)_f(t) - f(t) = O(t^{k+1}).

Proof. From Theorem 1.1 we get
(n + k - 1/n + k)_{T_n}(t) - T_n(t) = O(t^{n+k}),
and the result follows after division by t^{n-1}.

An equivalent way to obtain these approximants is to set
w(t) = c^{(-n+1)}((v(x) - v(t))/(x - t))
and
w̃(t) = t^{n+k-1} w(t^{-1}).
More generally,
(p/q)_f(t) = Σ_{i=0}^{p-q} c_i t^i + t^{p-q+1} w̃(t)/ṽ(t),
with w constructed from the functional c^{(p-q+1)} and with the convention that the sum is equal to zero if p - q < 0. We will call v the generating polynomial of the approximant. We have
(p/q)_f(t) - f(t) = O(t^{p+1}).
Thus a whole table of approximants can be constructed:

(0/0)  (0/1)  (0/2) ...
(1/0)  (1/1)  (1/2) ...
(2/0)  (2/1)  (2/2) ...
..................

with
(p/0)_f(t) = c_0 + ... + c_p t^p.
This array will be called the Padé-type table. The (k/k) approximants are called diagonal approximants. Recurrence relations hold between the elements of this table if the generating polynomials of the various approximants belong to a family of orthogonal polynomials. Such a case has been studied in [65] and it will be described in much more detail in the following chapters. Recurrence relations can also hold in other cases.
There is, in general, no connection between the upper half of the table and the lower half. The only connection, without any additional assumptions on the denominators of the approximants, occurs when p = q, that is, for diagonal approximants.
Let g be the reciprocal series of f, which is formally defined by
f(t)g(t) = 1.
If we write g as
g(t) = d_0 + d_1 t + d_2 t^2 + ...,
its coefficients are given (provided c_0 ≠ 0) by c_0 d_0 = 1 and
c_0 d_i + c_1 d_{i-1} + ... + c_i d_0 = 0 for i = 1, 2, ....
Let w^{(1)} be defined by
w^{(1)}(t) = c^{(1)}((v(x) - v(t))/(x - t)),
so that the numerator of (k/k)_f(t) is c_0 ṽ(t) + t w̃^{(1)}(t). Also, consider the approximant (k/k)_g(t) with c_0 ṽ(t) + t w̃^{(1)}(t), the numerator of (k/k)_f, as denominator. Then we can establish

Property 1.1.
(k/k)_f(t) (k/k)_g(t) = 1.

Proof. Write (k/k)_f(t) = p̃(t)/q̃(t). By construction
(k/k)_f(t) = p̃(t)/q̃(t) = Σ_{i=0}^k c_i t^i + O(t^{k+1}),
(k/k)_g(t) = q̃'(t)/p̃(t) = Σ_{i=0}^k d_i t^i + O(t^{k+1}).
Thus
(k/k)_f(t)(k/k)_g(t) = q̃'(t)/q̃(t) = f(t)g(t) + O(t^{k+1}) = 1 + O(t^{k+1}).
Since q̃ and q̃' have the same degree k, q̃' = q̃ and the property follows.
This property is called "reciprocal covariance". The study of some
other properties is the aim of the next section.
1.2.
Basic properties
The properties which will be studied in this section can be divided into three classes: the first one deals with general algebraic properties of Padé-type approximants; the second is devoted to the study of the error, while the third ...

Property 1.2 (unicity). Let W be a polynomial of degree p and V a polynomial of degree q such that
W(t)/V(t) - f(t) = O(t^{p+1}).
Then
W(t)/V(t) = (p/q)_f(t),
where the denominator of (p/q)_f is taken as V.

Proof. Assume first that p ≥ q and write W(t)/V(t) = R(t) + Q(t)/V(t), where R has degree p - q and Q has degree at most q - 1. It follows from the assumption that
R(t) = Σ_{i=0}^{p-q} c_i t^i
and that Q(t)/V(t) = c_{p-q+1} t^{p-q+1} + c_{p-q+2} t^{p-q+2} + ... + c_p t^p + O(t^{p+1}). Then the coefficients of the polynomials Q and V must be linked by relationships similar to those of Lemma 1.2; that is to say, Q is associated to V with respect to the functional c^{(p-q+1)}, and then W(t)/V(t) = (p/q)_f(t) as defined in section 1.1. The case with p < q can be treated in the same way.
Let us now turn to the property of linearity, whose proof is obvious from Property 1.2.

Property 1.3. If h(t) = a f(t) + R_1(t), where a is a constant and R_1 a polynomial of degree at most p - q, then
(p/q)_h(t) = a (p/q)_f(t) + R_1(t)
if the two approximants are constructed from the same denominator.

Property 1.4. Let
h(t) = (A + B f(at/(1 + bt)))/(C + D f(at/(1 + bt))).
Then
(k/k)_h(t) = (A + B (k/k)_f(at/(1 + bt)))/(C + D (k/k)_f(at/(1 + bt)))
if the two approximants are constructed from the same denominator.

Proof. Consider first h(t) = f(at/(1 + bt)) and set t' = at/(1 + bt). Then
(k/k)_f(t') = f(t') + O(t'^{k+1}) = h(t) + O(t^{k+1}).
If W̃ and Ṽ, with coefficients a_i and b_i, are respectively the numerator and the denominator of (k/k)_f, then
(k/k)_f(t') = (Σ_{i=0}^k a_i (at/(1 + bt))^i) / (Σ_{i=0}^k b_i (at/(1 + bt))^i).
Multiplying numerator and denominator by (1 + bt)^k gives
(k/k)_f(t') = (Σ_{i=0}^k a_i (at)^i (1 + bt)^{k-i}) / (Σ_{i=0}^k b_i (at)^i (1 + bt)^{k-i}).
This is the ratio of two polynomials of degree k in t such that its series expansion agrees with h up to the term of degree k. It follows from the property of unicity (Property 1.2) that (k/k)_f(t') = (k/k)_h(t).
Now let h be defined by
h(t) = (A + B f(t))/(C + D f(t)),
and consider the fraction (A + B (k/k)_f(t))/(C + D (k/k)_f(t)).
Carrying out the formal division of these two series, if the constant term C + Dc_0 of the denominator is different from zero, we get
(A + B (k/k)_f(t))/(C + D (k/k)_f(t)) = (A + B f(t))/(C + D f(t)) + O(t^{k+1}).
From the property of unicity it follows that this rational fraction is (k/k)_h(t).
Gathering together the two parts of the proof gives the property.
Another useful property of Padé-type approximants is a compact formula for them, quite similar to Nuttall's compact formula for Padé approximants [93].

Property 1.5. Let {q_n} be any sequence of polynomials such that q_n has the exact degree n for all n. Let V be the k × k matrix with elements
V_ij = c((1 - xt) q_{i-1}(x) q_{j-1}(x)) for i, j = 1, ..., k,
let v' be the vector with components
v'_i = c(q_{i-1}(x)(1 - v(x)/v(t^{-1}))) for i = 1, ..., k,
and let u be the vector with components u_i = c(q_{i-1}(x)) for i = 1, ..., k. Then
(k-1/k)_f(t) = (u, V^{-1} v'),
where (·, ·) denotes the scalar product of two vectors and the denominator of (k-1/k)_f is taken as ṽ.
Proof. From the definitions we have
(k-1/k)_f(t) = (1/v(t^{-1})) c((v(t^{-1}) - v(x))/(1 - xt)).
It is easy to see that
(1/v(t^{-1})) (v(t^{-1}) - v(x))/(1 - xt)
is a polynomial of degree k - 1 in x; writing it as β_0 q_0(x) + ... + β_{k-1} q_{k-1}(x), we get
(k-1/k)_f(t) = β_0 c(q_0) + ... + β_{k-1} c(q_{k-1}).
For i = 0, ..., k - 1 we get
c(q_i(x)(1 - v(x)/v(t^{-1}))) = c(q_i(x)(1 - xt)(β_0 q_0(x) + ... + β_{k-1} q_{k-1}(x))).
That is, in matrix form, the β_i are the solution of the linear system Vβ = v', and the property follows.

The basic idea for such a compact formula in the case of Padé approximants for some special series is due to Magnus [90] and Shenton [119]. Here we have extended this idea for our purpose.
If q_n(x) = x^n, then V_ij = c_{i+j-2} - t c_{i+j-1} for i, j = 1, ..., k and u_i = c_{i-1} for i = 1, ..., k. This is the exact generalization of Nuttall's compact formula for Padé approximants to the case of Padé-type approximants. However, since (k-1/k)_f only depends on c_0, ..., c_{k-1}, arbitrary values can be given in the preceding compact formula to c_k, ..., c_{2k-1}. In particular they can be set to zero, so that V becomes the matrix with entries V_ij = c_{i+j-2} - t c_{i+j-1} in which c_i = 0 for i ≥ k. Writing V = A - tB we get
(k-1/k)_f(t) = Σ_{i=0}^∞ (u, (A^{-1}B)^i A^{-1} v') t^i.
It immediately follows from Theorem 1.1 that, if (p/q)_f(t) = P_1(t)/Q(t) and (p + 1/q)_f(t) = P_2(t)/Q(t) are constructed from the same denominator, then
P_2(t)/Q(t) = P_1(t)/Q(t) - A t^{p+1} + O(t^{p+2})
for some constant A, so that the two numerators differ only in the term of degree p + 1.
If
f(t) = Σ_{i=0}^∞ f_i t^i and g(t) = Σ_{i=0}^∞ g_i t^i,
and if h(t) = Σ_{i=0}^∞ (f_i + g_i) t^i, then
(p/q)_h(t) = (p/q)_f(t) + (p/q)_g(t)
when the approximants are constructed from the same denominator; this follows from the unicity property, since the right hand side is a rational fraction of the required degrees whose expansion agrees with that of h up to the degree p.
Theorem 1.4.
f(t) - (k-1/k)_f(t) = (t^k/ṽ(t)) c(v(x)/(1 - xt)).

Proof. We have
ṽ(t)f(t) - w̃(t) = t^k [v(t^{-1}) c((1 - xt)^{-1}) - c((v(t^{-1}) - v(x))/(1 - xt))] = t^k c(v(x)/(1 - xt)),
and the result follows on dividing by ṽ(t). Expanding the right hand side we also get
f(t) - (k-1/k)_f(t) = (t^k/ṽ(t)) Σ_{i=0}^∞ d_i t^i with d_i = c(x^i v(x)).

When the series f converges in a disc |t| < R, the functional c can be represented by a contour integral. Since
(1/2πi) ∮_C x^{i-1} f(x^{-1}) dx = c_i = c(x^i)
for a circle C: |x| = r with r > 1/R, we have, for any polynomial F,
c(F(x)) = (1/2πi) ∮_C x^{-1} f(x^{-1}) F(x) dx.
Using this representation we get the following expression for the error term of Padé-type approximants:
f(t) - (k-1/k)_f(t) = (t^k/ṽ(t)) (1/2πi) ∮_C (x^{-1} f(x^{-1}) v(x))/(1 - xt) dx.
From Theorem 1.4 and from the definitions of the Padé-type approximants we immediately obtain

Corollary 1.2.
f(t) - (p/q)_f(t) = (t^{p+1}/ṽ(t)) c^{(p-q+1)}(v(x)/(1 - xt)),
where v is the generating polynomial of (p/q)_f.
Let us now assume that the zeros x_1, ..., x_k of v are distinct, and let P be the Lagrange interpolation polynomial of the function g(x) = (1 - xt)^{-1} at these points:
P(x) = Σ_{i=1}^k (v(x)/((x - x_i) v'(x_i))) (1 - x_i t)^{-1}.

Theorem 1.5.
(k-1/k)_f(t) = c(P).
Applying c to P we get the partial fraction decomposition
(k-1/k)_f(t) = c(P) = Σ_{i=1}^k A_i (1 - x_i t)^{-1} with A_i = w(x_i)/v'(x_i),
since c(v(x)/(x - x_i)) = c((v(x) - v(x_i))/(x - x_i)) = w(x_i). A second consequence of Theorem 1.5 is a determinantal formula: writing P in the determinantal form of the Lagrange interpolation polynomial and applying c, (k-1/k)_f(t) appears as a ratio of two determinants whose entries are built from the moments c_0, ..., c_{k-1}, the points x_i and the factors (1 - x_i t)^{-1}, the denominator being the Vandermonde determinant of x_1, ..., x_k.
The third consequence of Theorem 1.5 is that, using a formula given by Magnus [90, p. 17], we can obtain a second compact formula for Padé-type approximants. Let W be the k × k matrix

W = | 1        ...  1        |
    | x_1      ...  x_k      |
    | ...                    |
    | x_1^{k-1} ... x_k^{k-1}|,

g = (c_0, ..., c_{k-1})^T and l' = ((1 - x_1 t)^{-1}, ..., (1 - x_k t)^{-1})^T. Then
(k-1/k)_f(t) = (g, W^{-1} l').
It is easy to see that these two formulas for (k-1/k)_f are exactly the same.
Corollary 1.3.
(k-1/k)_f(t) = Σ_{i=0}^∞ e_i t^i with e_i = Σ_{j=1}^k A_j x_j^i, i = 0, 1, ....

Since the expansion agrees with that of f up to the degree k - 1,
c_i = Σ_{j=1}^k A_j x_j^i for i = 0, ..., k - 1,
and therefore
f(t) - (k-1/k)_f(t) = Σ_{i=k}^∞ (c_i - e_i) t^i.
These results can be extended to multiple zeros by means of the general Hermite interpolation polynomial P, defined by the conditions
P^{(j)}(x_i) = (d^j/dx^j)(1 - xt)^{-1}|_{x = x_i} for j = 0, ..., k_i - 1, i = 1, ..., n,
where now
v(x) = (x - x_1)^{k_1} ... (x - x_n)^{k_n}, k_1 + ... + k_n = k, x_1, ..., x_n distinct,
and where w, ṽ and w̃ are defined as above. It is well known that the general Hermite interpolation polynomial can be deduced from the Lagrange interpolation polynomial by continuity arguments. Thus we have obtained the following general theorem

Theorem 1.6. Let P be the general Hermite interpolation polynomial for the function (1 - xt)^{-1} as defined above. Then
c(P) = (k-1/k)_f(t).
This theorem can also be proved by writing down P and showing that c(P) is the partial fraction decomposition of w̃(t)/ṽ(t).

Remark. If n = 1 then v(x) = (x - x_1)^k and (k-1/k)_f(t) = w̃(t)/(1 - x_1 t)^k. In particular, if x_1 = 0, then w(t) = c_{k-1} + c_{k-2} t + ... + c_0 t^{k-1} and (k-1/k)_f(t) = c_0 + c_1 t + ... + c_{k-1} t^{k-1}. The corresponding general Hermite interpolation polynomial in this case is the Taylor interpolation polynomial at x_1 = 0, that is P(x) = 1 + xt + ... + x^{k-1} t^{k-1}, which is the truncation of the series expansion of (1 - xt)^{-1}. If x_1 = 1 then (k-1/k) is the generalized Euler transformation of f [65].
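The special case above can be checked mechanically: with v(x) = x^k the reversed denominator is ṽ(t) = 1 and the approximant collapses to the truncated Taylor sum. A minimal sketch (the sample coefficients are arbitrary):

```python
from fractions import Fraction

def pade_type_k1k(c, b):
    """Lemma 1.2 plus the coefficient reversal of section 1.1."""
    k = len(b) - 1
    a = [sum(c[j] * b[i + j + 1] for j in range(k - i)) for i in range(k)]
    return a[::-1], b[::-1]          # ascending coefficients of w~, v~

c = [Fraction(n) for n in (3, 1, 4, 1, 5)]     # arbitrary sample coefficients
k = len(c)
b = [Fraction(0)] * k + [Fraction(1)]          # v(x) = x^k
w_t, v_t = pade_type_k1k(c, b)
print(w_t == c)          # True: the numerator is the truncated series
print(v_t)               # [1, 0, ..., 0]: the denominator v~(t) = 1
```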
Conversely, let the points x_i and the multiplicities k_i be given.

Theorem 1.7.
(k-1/k)_f(t) = c(P),
where P is the general Hermite interpolation polynomial as defined above.

Proof. One only has to write down the partial fraction decomposition of w̃(t)/ṽ(t) to see that it is equal to c(P), where P is the general Hermite interpolation polynomial.
This approximation depends on k since for each value of k a new polynomial v has to be chosen. It can be called V_k and written as
V_k = a_{k0} S_0 + ... + a_{kk} S_k
with
a_{k0} + ... + a_{kk} = 1.
If, for each k, the polynomial v does not depend on the coefficients c_i then this transformation is linear and is called a summation method. The sequence {V_n} converges to the same limit as the sequence {S_n} for any converging sequence {S_n} if the three conditions of the Toeplitz theorem are satisfied. We shall return to this problem in the next section.
1.3. Convergence theorems

In this section we consider the convergence of the sequences of approximants (n + k/k)_f(t) when n → ∞ or k → ∞. Let
S_n = Σ_{i=0}^n c_i t^i
and let us write the approximant as a combination of partial sums,
(n + k/k)_f(t) = Σ_{i=0}^k B_i S_{n+i},
with
Σ_{i=0}^k B_i = 1.
In fact, since the polynomial v can depend on k and n, the B_i's also depend on k and n. For a fixed value of t, this is a summation process and the convergence of the transformed sequence to the same limit will be ensured if the conditions of the Toeplitz theorem are met. Returning to the notations of the preceding section, this theorem says that a necessary and sufficient condition for {V_n} to converge to the same limit as {S_n} for any converging sequence is that

Σ_{i=0}^k |a_{ki}| ≤ M for k = 0, 1, ...;
lim_{k→∞} a_{ki} = 0 for i = 0, 1, ...;
lim_{k→∞} Σ_{i=0}^k a_{ki} = 1.
Theorem 1.8. Let t ∈ D. The sequence (n + k/k)_f(t) converges to f(t) when n → ∞ and for k ≥ 0 fixed if
Σ_{i=0}^k |B_i| ≤ M, ∀n.
The sequence (n + k/k)_f(t) converges to f(t) when k → ∞ and for n ≥ 0 fixed if
lim_{k→∞} B_i = 0 for i = 0, 1, ...,
and
Σ_{i=0}^k |B_i| ≤ M, ∀k.
For v(x) = (x - x_1) ... (x - x_k) the weights are
B_i = b_i t^{k-i}/ṽ(t) for i = 0, ..., k and n = 0, 1, ...,
with B_0^{(0)} = 1. If t ≤ 0, x_i ≥ 0 and lim x_i = 0, then lim_{k→∞} B_i^{(k)} = 0 for i = 0, 1, ... and, by Theorem 1.8, the sequence (n + k/k)_f(t) converges to f(t). The quantities governing these conditions are the products t x_i. Thus Lemmas 1 and 2 in the Wimp paper become
Theorem 1.11. Let t ∈ D, let {x_i} be an infinite sequence of numbers such that t x_i ≤ 0 and t x_i ≠ 1, ∀i, and let v(x) = (x - x_1) ... (x - x_k). If the series
Σ_{i=1}^∞ t x_i/(1 - t x_i)
converges then
lim_{k→∞} (n + k/k)_f(t) = f(t), ∀n ≥ 0 fixed.

Remark. The condition on the convergence of the series implies that the sequence {x_i} converges to zero as in the preceding theorem.

Theorem 1.12. Let t ∈ D. If, for all k, the zeros x_i of v are such that t x_i ∈ [-a, 0] with a > 0, then the sequence (n + k/k)_f converges to f(t) when k → ∞ for fixed n ≥ 0.
Let us now study the particular case where the coefficients c_i are given by
c_i = ∫_a^b x^i dα(x), i = 0, 1, ...,
so that
f(t) = ∫_a^b dα(x)/(1 - xt).
If
Σ_{i=1}^k |A_i| ≤ M, ∀k,
then
lim_{k→∞} (k-1/k)_f(t) = f(t).
1.4. Some applications
If the zeros of the denominator have negative real parts, if β_i ≤ α_i for i = p + 1, ..., k and if 0 ≤ α_i for i = k + 1, ..., n + k, where p is the integer part of k/2, then r is A-acceptable.

Proof. We follow the ideas of [48]. Since the zeros of the denominator of (k/n + k) have negative real parts, r is analytic in the right half-plane. If n ≥ 1 then lim_{|t|→∞} |r(t)| = 0. On the imaginary axis,
|r(it)|^2 = (1 + (β_{p+1} - α_{p+1}) t^{2(p+1)} + ... + (β_{n+k} - α_{n+k}) t^{2(n+k)}) / (1 + α_1 t^2 + ... + α_{n+k} t^{2(n+k)})
with the convention that β_i = 0 for i ≥ k + 1. Thus, if β_i ≤ α_i for i = p + 1, ..., n + k, then |r(it)|^2 ≤ 1. If n = 0 then β_k = a_k^2 and α_k = b_k^2, which implies that lim_{|t|→∞} |r(it)| ≤ 1.
Such approximants are very useful for integrating systems of linear differential equations since the denominator can be factored. Such approximations have been recently studied by Saff, Schönhage and Varga [112] but with a different numerator. Let us first study the convergence of these approximants.
Theorem 1.15. ∀t ≥ 0,
lim_{k→∞} (k-1/k)_f(t) = e^t,
the generating polynomial being v(x) = (x + 1/k)^k.

Proof. We shall use the second part of Theorem 1.8 with n = -1. It is obvious that B_i ≥ 0. We only have to prove that the B_i's tend to zero when k → ∞, since the exponential series converges for every t ≥ 0. Since B_i = b_i t^{k-i}/ṽ(t) and since ṽ(t) tends to e^t, we only have to study the convergence of b_i t^{k-i} to zero. We get
ṽ(t) = (1 + t/k)^k = 1 + t + Σ_{i=2}^k (1 - 1/k)(1 - 2/k) ... (1 - (i-1)/k) t^i/i!.
Thus
b_i t^{k-i} = C(k, i)(t/k)^{k-i} ≤ t^{k-i}/(k - i)!,
which tends to zero when k → ∞ for every fixed i, with b_k = 1.
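Theorem 1.15 can be observed numerically. The sketch below (an illustration, not the book's code) uses the generating polynomial v(x) = (x + 1/k)^k from the proof, so that ṽ(t) = (1 + t/k)^k and all poles of the approximant sit at t = -k:

```python
import math

def pade_type_k1k(c, b):
    """Lemma 1.2 plus the coefficient reversal of section 1.1."""
    k = len(b) - 1
    a = [sum(c[j] * b[i + j + 1] for j in range(k - i)) for i in range(k)]
    return a[::-1], b[::-1]          # ascending coefficients of w~, v~

def horner(p, t):
    return sum(cf * t ** i for i, cf in enumerate(p))

def approx_exp(t, k):
    # generating polynomial v(x) = (x + 1/k)^k: b_i = C(k,i) (1/k)^{k-i}
    b = [math.comb(k, i) * (1.0 / k) ** (k - i) for i in range(k + 1)]
    c = [1.0 / math.factorial(i) for i in range(k)]
    w_t, v_t = pade_type_k1k(c, b)
    return horner(w_t, t) / horner(v_t, t)

errs = [abs(approx_exp(1.0, k) - math.e) for k in (5, 10, 20)]
print(errs)
```

The printed errors decrease rapidly: the approximant is a weighted average of partial sums of the exponential series, and the weights concentrate on the most accurate ones as k grows.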
However, it is easy to show by using Theorem 1.14 that the approximants (k-1/k) are A-acceptable for k = 1, ..., 4. So also are the approximants (k/k) for k = 1, ..., 3, but the approximant (4/4) is not A-acceptable.
Another useful application of rational approximation is to the inversion of the Laplace transform. Let f be the Laplace transform of g,
f(t) = ∫_0^∞ g(x) e^{-xt} dx,
and let us assume that the power series expansion of f is known at least up to some power.
We can replace f by some Padé-type approximant and then invert it. It will provide us with an approximation to g. The idea of such a method is due to Longman [87]. For the Laplace transform inversion of a rational function one needs either the partial fraction decomposition or a special trick due to Longman and Sharir [88] involving the summation of infinite series. If the rational function is a Padé-type approximant then the poles are known and the partial fraction decomposition is easy. Moreover the poles can be arbitrarily chosen, which can be a very interesting feature.
We must, of course, only consider Padé-type approximants with a denominator having a degree greater than the degree of the numerator, since lim_{t→∞} f(t) = 0. If we consider the (k-1/k) approximants and if we assume that all the zeros of the denominator are distinct, then we get the approximation
g(x) ≈ -Σ_{i=1}^k A_i x_i^{-1} e^{x/x_i}
with
A_i = w(x_i)/v'(x_i).
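A small numerical sketch of this inversion procedure (hypothetical data, not from the book): take f(t) = 1/(1 + t), the Laplace transform of g(x) = e^{-x}, so c_i = (-1)^i, and choose the zeros of v arbitrarily. Because one chosen zero (x = -1) happens to coincide with the support of the underlying functional, the recovery is exact here; in general it is only approximate.

```python
import math

def assoc_numer(c, b):
    """Numerator w of Lemma 1.2 (ascending coefficients a_0..a_{k-1})."""
    k = len(b) - 1
    return [sum(c[j] * b[i + j + 1] for j in range(k - i)) for i in range(k)]

def polyval(p, x):
    return sum(cf * x ** i for i, cf in enumerate(p))

xs = [-1.0, -2.0, -4.0]                # hypothetical choice of zeros of v
b = [8.0, 14.0, 7.0, 1.0]              # v(x) = (x+1)(x+2)(x+4)
dv = [14.0, 14.0, 3.0]                 # v'(x)
c = [(-1.0) ** i for i in range(len(xs))]
w = assoc_numer(c, b)
A = [polyval(w, x) / polyval(dv, x) for x in xs]   # A_i = w(x_i)/v'(x_i)

def g_approx(x):
    # term-by-term inversion of sum_i A_i/(1 - x_i t)
    return -sum(Ai / xi * math.exp(x / xi) for Ai, xi in zip(A, xs))

print(A)               # [1.0, 0.0, 0.0]: one pole matches the true singularity
print(g_approx(1.0))   # close to e^{-1}
```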
We shall now give an application of Padé-type approximants to computer arithmetic. It is well known that computers work with numbers having only a finite number of digits. Thus arithmetic operations are not performed exactly and the arithmetic is not the classical one (for example, addition is non-associative). These errors can propagate very rapidly in a sequence of operations and the result obtained may have no resemblance to the true result. Many attempts have been made recently, either to estimate these errors or to correct them. It is also possible to use a nonclassical arithmetic. Such an arithmetic has been recently proposed by Krishnamurthy, Mahadeva Rao and Subramanian [80]; it is based on p-adic numbers.
Let a/b be a rational number which is assumed, for simplicity, to be positive and less than one. Let p be a prime number. It is well known (see, for example, [7]) that a/b can be uniquely written in the p-adic form as
a/b = Σ_{i=0}^∞ c_i p^i,
where the coefficients c_i are integers such that 0 ≤ c_i ≤ p - 1. This p-adic form can be symbolically written
a/b = c_0 . c_1 c_2 c_3 ....
For example we have (with p = 5)
1/3 = 2.313131...
= 2 + 3p(1 + p^2 + p^4 + ...) + p^2(1 + p^2 + p^4 + ...).
In the p-adic norm the series 1 + p^2 + p^4 + ... converges to (1 - p^2)^{-1} and thus we have
2.313131... = 2 + (3p + p^2)/(1 - p^2) = 8/24 = 1/3.
Such a p-adic expansion cannot, of course, be represented with finite length arithmetic. Thus Krishnamurthy et al. propose to replace this infinite expansion by the finite one
c_0 + c_1 p + ... + c_{k-1} p^{k-1} - (c_k p^k + ... + c_{2k-1} p^{2k-1})/(p^k - 1).
They call such a representation an H(p, r) code for rational numbers, where r = 2k.
Arithmetic operations can be exactly performed in that code and the exact H(p, r) code of the result is obtained. Then the main problem is the conversion from H(p, r) code to rationals. If a/b in its H(p, r) code is written as W/(p^k - 1), where W is the numerator of the representation, then these authors proved that
a(p^k - 1) - bW ≡ 0 (mod p^{2k}),
or
a(p^k - 1) - bW = Kp^{2k},
where K is an integer; the values of the numbers a and b satisfying this diophantine equation can be found by writing the H(p, r) code of b/a.
The results of Krishnamurthy et al. will now be explained in the framework of Padé-type approximation. Let us consider the series f(p) = a/b = c_0 + c_1 p + c_2 p^2 + ... and let us construct its (2k-1/k) Padé-type approximant with v(x) = x^k - 1 as generating polynomial. We know that
(2k-1/k)_f(p) = c_0 + ... + c_{k-1} p^{k-1} + p^k (k-1/k)_{f_{k-1}}(p)
with
f_{k-1}(p) = c_k + c_{k+1} p + ....
It is easy to check that
(k-1/k)_{f_{k-1}}(p) = (c_k + ... + c_{2k-1} p^{k-1})/(1 - p^k)
and thus
(2k-1/k)_f(p) = c_0 + c_1 p + ... + c_{k-1} p^{k-1} - (c_k p^k + ... + c_{2k-1} p^{2k-1})/(p^k - 1),
which is the H(p, r) code of a/b.
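The identification above can be verified directly for a/b = 2/7, p = 5, k = 2 (a hypothetical example; the `padic_digits` helper is not part of the original text):

```python
from fractions import Fraction

def padic_digits(a, b, p, n):
    """First n p-adic digits of a/b (b not divisible by p)."""
    r = Fraction(a, b)
    digits = []
    for _ in range(n):
        d = (r.numerator * pow(r.denominator, -1, p)) % p
        digits.append(d)
        r = (r - d) / p
    return digits

p, k = 5, 2
a, b = 2, 7
c = padic_digits(a, b, p, 2 * k)          # digits c_0 .. c_{2k-1}
# H(p, r) code = (2k-1/k) Pade-type approximant with v(x) = x^k - 1:
low = sum(c[i] * p ** i for i in range(k))
high = sum(c[k + i] * p ** (k + i) for i in range(k))
W = low * (p ** k - 1) - high             # numerator of W/(p^k - 1)
# conversion identity: a(p^k - 1) - bW is divisible by p^{2k}
lhs = a * (p ** k - 1) - b * W
print(c, W, lhs // p ** (2 * k))
```

The last printed value is the integer K of the diophantine relation a(p^k - 1) - bW = Kp^{2k}.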
The error formula of Corollary 1.2 applied to this approximant gives
a/b - (2k-1/k)_f(p) = (p^{2k}/(1 - p^k)) c^{(k)}(v(x)/(1 - xp)).
If we set (2k-1/k)_f(p) = W/(p^k - 1), multiplication by b(p^k - 1) gives
a(p^k - 1) - bW = -b p^{2k} c^{(k)}(v(x)/(1 - xp)).
Since the left hand side of this relation is an integer, so is the right hand side. Moreover c^{(k)}(v(x)(1 - xp)^{-1}) = c^{(k)}(v(x)(1 + xp + x^2 p^2 + ...)) is an integer and we can write
a(p^k - 1) - bW = Kp^{2k},
where K is an integer.
Thus H(p, r) codes are identical to Padé-type approximants and the conversion procedure from the code to rationals follows from the error term of Padé-type approximants.

It is possible that the theory of Padé-type approximants can be useful for deriving new results for H(p, r) codes or constructing more powerful codes for exact arithmetic.
1.5. Higher order approximants

The order of a Padé-type approximant can be raised by subjecting the generating polynomial v to additional conditions.

Theorem 1.16. If
c(x^i v(x)) = 0 for i = 0, ..., m - 1,
with m ≤ k, then
f(t) - (k-1/k)_f(t) = O(t^{k+m}).
The computation of such a Padé-type approximant requires the knowledge of c_0, ..., c_{m+k-1}. Thus, from the algebraic point of view, nothing has been gained, and if we want to compare approximants with additional conditions to approximants with no conditions we must compare approximants using the same number of coefficients of the series.

The arbitrary constants on which the polynomial v depends are either its coefficients or its zeros. One can, of course, choose the values of some coefficients and compute the remaining coefficients with the additional conditions. Such a method seems to be of no help because the coefficients of a polynomial have no immediate meaning for approximation. The best way seems to be to choose k - m arbitrary points x_1, ..., x_{k-m}, to let u(x) = (x - x_1) ... (x - x_{k-m}), to write v(x) = u(x) P_m(x), and to determine the polynomial P_m of degree m by the additional conditions.
Chapter 1
34
If m = k, all the coefficients of v are determined by the conditions
c(x^i v(x)) = Σ_{j=0}^k c_{i+j} b_j = 0 for i = 0, ..., k - 1.
One of the b_i's is arbitrary. Let us choose b_k = 1 so that v has the exact degree k. We get
Σ_{j=0}^{k-1} c_{i+j} b_j = -c_{i+k}, i = 0, ..., k - 1.
Thus b_0, ..., b_{k-1} are obtained as the solution of a system of linear equations. The determinant of this system must be different from zero. Such a determinant is called a Hankel determinant and is denoted by H_k(c_0).
If H_k(c_0) ≠ 0 then the approximant exists and is called a Padé approximant. We shall denote it by [k-1/k]_f(t). The additional conditions are precisely the orthogonality conditions defining the polynomial P_k = v, and they lead to the error formula
f(t) - [k-1/k]_f(t) = (t^{2k}/P̃_k^2(t)) c(P_k^2(x)/(1 - xt)).
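The linear system defining a Padé denominator can be solved exactly. A sketch for f(t) = log(1 + t)/t, whose moments are c_i = (-1)^i/(i + 1) (the example is mine, not the book's):

```python
from fractions import Fraction

# moments of f(t) = log(1+t)/t : c_i = (-1)^i/(i+1)
c = [Fraction((-1) ** i, i + 1) for i in range(6)]
k = 2
# conditions c(x^i v(x)) = 0, i = 0..k-1, with b_k = 1:
#   c_i b_0 + c_{i+1} b_1 = -c_{i+2}
# solved here by Cramer's rule; the Hankel determinant H_k(c_0) must not vanish
det = c[0] * c[2] - c[1] * c[1]
assert det != 0
b0 = (-c[2] * c[2] + c[1] * c[3]) / det
b1 = (c[1] * c[2] - c[0] * c[3]) / det
b = [b0, b1, Fraction(1)]
# check the orthogonality conditions that define the Pade denominator
for i in range(k):
    assert sum(c[i + j] * b[j] for j in range(k + 1)) == 0
print(b)   # v(x) = 1/6 + x + x^2
```

The computed v(x) = 1/6 + x + x^2 gives ṽ(t) = 1 + t + t^2/6, the denominator of the [1/2] Padé approximant of log(1 + t)/t.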
If v(x) = x^{k-m} u(x), where u = P_m of degree m satisfies the additional conditions, then
(k-1/k)_f(t) = [k-1/m]_f(t).

Proof. We have v(x) = x^{k-m} u(x) where u has degree m. But ṽ(t) = t^k v(t^{-1}) = t^m u(t^{-1}) = ũ(t), which shows that ṽ has degree m. Moreover, by Theorem 1.16,
f(t) - (k-1/k)_f(t) = O(t^{k+m}).
(k-1/k)_f is the ratio of a polynomial of degree k - 1 to a polynomial of degree m, and the result is proved by Theorem 1.18. This property generalizes a result given by Wheeler [136] in a particular case.
Property 1.5 holds for Padé approximants. If q_n(x) = x^n, it is exactly Nuttall's compact formula. If q_n(x) = P_n(x), then we obtain a result in which the entries are built from the quantities
d_i = c(x^i P_k(x)) = b_0 c_i + b_1 c_{i+1} + ... + b_k c_{i+k}.
If the zeros of P_k are distinct then the determinantal formula given after Theorem 1.5 is still valid, and the next formula too. In this case Corollary 1.3 also holds. From the additional conditions and from the definition of Q_k we get the following exactness property:

Property 1.10. If f is a rational function, f(t) = a(t)/b(t), with a of degree p and b of degree q, then
[p/q]_f(t) = f(t).
Proof. Let f(t) = a(t)/b(t) and [p/q]_f(t) = U(t)/V(t). Then we can write
a(t) - b(t)f(t) = 0 = Σ_{i=p+q+1}^∞ d_i t^i
with d_i = 0 for i ≥ p + q + 1, while
U(t) - V(t)f(t) = Σ_{i=p+q+1}^∞ e_i t^i,
where V is uniquely determined. Then the result follows from the unicity
property of Theorem 1.18.
Such a result does not hold in general for Padé-type approximants since, for a given degree of approximation, the polynomial v can be arbitrarily chosen. If v = b then the property is true.

Property 1.11. Let g be the reciprocal series of f, defined by f(t)g(t) = 1; then
[p/q]_f(t) [q/p]_g(t) = 1.
It is not our purpose in this book to study extensively the well-known properties of Padé approximants. They can be found, for example, in [9]. Let us now give two results which show another link between Padé-type approximants and Padé approximants.
Theorem 1.19. Let (k-1/k)_f be the Padé-type approximant constructed from v. If v(x) = u(x)P_m(x) with m ≤ k ≤ 2m, then
(k-1/k)_f(t) = [m-1/m]_f(t).
Proof.
w(t) = c((u(x)P_m(x) - u(t)P_m(t))/(x - t))
= c(u(t)(P_m(x) - P_m(t))/(x - t) + P_m(x)(u(x) - u(t))/(x - t)).
Since (u(x) - u(t))/(x - t) is a polynomial of degree k - m - 1 ≤ m - 1 in x, the second term vanishes by the orthogonality of P_m, and thus
w(t) = u(t) c((P_m(x) - P_m(t))/(x - t)) = u(t) Q_m(t).
Hence w̃(t)/ṽ(t) = ũ(t)Q̃_m(t)/(ũ(t)P̃_m(t)) = Q̃_m(t)/P̃_m(t) = [m-1/m]_f(t).
Theorem 1.20.
[p/q]'_f(t) = (p + q - 1/2q)_{f'}(t)
if the generating polynomial of the Padé-type approximant is the square of the generating polynomial of the Padé approximant.

Proof. [p/q]'_f(t) is the ratio of a polynomial of degree p + q - 1 to a polynomial of degree 2q. Moreover
[p/q]'_f(t) - f'(t) = O(t^{p+q}).
Thus, by Property 1.2, the result follows if the denominator of (p + q - 1/2q)_{f'} is chosen as the square of that of [p/q]_f.
The sequences of Padé approximants converge for n = -1, 0, 1, ... and k = 0, 1, .... Moreover, when c is positive definite with spectrum in [a, b],
f(t) - [k-1/k]_f(t) = (H_{k+1}(c_0)/H_k(c_0)) t^{2k}/(1 - ξt)^{2k+1}
with ξ ∈ [a, b]. From this theorem one can obtain bounds for the error, ∀t ∈ (-∞, 0] and ∀t ∈ [0, d] with d < R. These results will be proved later.
Padé-type approximants of order p, where p is the degree of the numerator, depend linearly on the coefficients of the series, while Padé-type approximants of higher orders and Padé approximants depend nonlinearly on the coefficients. Padé approximants can be applied to the approximation of the limit S of a sequence {S_n}, as has been done in section 1.2 for Padé-type approximants. If we consider the series f defined by c_0 = S_0, c_i = S_i - S_{i-1} for i = 1, 2, ..., then
[n + k/k]_f(1) = e_k(S_n) = ε_{2k}^{(n)},
where e_k denotes the Shanks transformation and the ε's are the quantities computed by the ε-algorithm; the error S - ε_{2k}^{(0)} can be expressed by means of c(P_k(x)/(1 - x)).
Chapter 2
General orthogonal polynomials
2.1. Definition
Let P be the vector space of real polynomials and let {c_k} be a given infinite sequence of real numbers. We define a linear functional c on P by the relations
c(x^i) = c_i, i = 0, 1, ....
c is completely determined by the sequence {c_k}, and c_k is said to be the moment of order k of the functional c.
A sequence of polynomials {P_k}, P_k of exact degree k, is said to be orthogonal with respect to c if
c(x^i P_k(x)) = 0 for i = 0, ..., k - 1.
Writing P_k(x) = a_0 + a_1 x + ... + a_k x^k, we get
a_0 c_i + a_1 c_{i+1} + ... + a_k c_{i+k} = 0 for i = 0, ..., k - 1.
Since P_k is required to have the exact degree k, let us give an arbitrary nonzero value to a_k. The solution of the preceding system of linear equations will give us a_0, ..., a_{k-1} under the necessary and sufficient condition that the following Hankel determinant be different from zero:

H_k^{(0)} = | c_0      c_1  ...  c_{k-1}  |
            | c_1      c_2  ...  c_k      |
            | ...                         |
            | c_{k-1}  c_k  ...  c_{2k-2} |  ≠ 0.
Theorem 2.1.

P_k(x) = D_k | c_0      c_1  ...  c_k      |
             | c_1      c_2  ...  c_{k+1}  |
             | ...                         |
             | c_{k-1}  c_k  ...  c_{2k-1} |
             | 1        x    ...  x^k      |

for k = 1, 2, ..., and P_0(x) = D_0, where the D_k's are arbitrary nonzero constants.
Proof. Multiplying P_k by x^i and applying the functional c gives, for i = 0, ..., k - 1, a determinant in which two rows are identical, and which is therefore zero.

From these conditions it also follows that c(P_n P_m) = 0 for n ≠ m, since one polynomial has a degree less than the other one. This relation is really the orthogonality relation of the polynomials {P_k}; the name of orthogonal polynomials comes from it. This relation can be regarded as a bilinear form in an indefinite inner product space [12]. This point of view has been recently developed by Van Rossum [106].
If H_{k+1}^{(0)} ≠ 0 then it follows from Theorem 2.1 that c(x^k P_k) ≠ 0 and c(P_k^2) ≠ 0. If H_1^{(0)}, ..., H_{k-1}^{(0)} are different from zero and if H_k^{(0)} = 0, then the polynomials P_0, ..., P_{k-1} exist and satisfy
c(x^n P_m) = 0 and c(P_n P_m) = 0 for n < m,
c(x^m P_m) ≠ 0 and c(P_m^2) ≠ 0 for m ≤ k - 2,
but
c(x^{k-1} P_{k-1}) = c(P_{k-1}^2) = 0.
Orthogonal polynomials can also be obtained by applying the Gram-Schmidt orthogonalisation process to {x^k}.
Theorem 2.2.
P_0(x) = 1,
P_{n+1}(x) = x^{n+1} - Σ_{i=0}^n (c(x^{n+1} P_i)/c(P_i^2)) P_i(x), n = 0, 1, ....

Proof. Let us assume that the orthogonality relations are satisfied for all k ≤ n; then
c(P_{n+1} P_k) = c(x^{n+1} P_k) - (c(x^{n+1} P_k)/c(P_k^2)) c(P_k^2) = 0 for k ≤ n,
and the result follows by induction. For the polynomials of Theorem 2.1 we find
c(P_k^2) = D_k^2 H_k^{(0)} H_{k+1}^{(0)}.
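Theorem 2.2 can be executed literally with exact arithmetic. Taking the moments c_i = ∫_{-1}^{1} x^i dx (an illustrative choice, not the book's) produces the monic Legendre polynomials:

```python
from fractions import Fraction

c = [Fraction(2, i + 1) if i % 2 == 0 else Fraction(0) for i in range(10)]

def apply_c(p):                      # the functional c on ascending coefficients
    return sum(a * c[i] for i, a in enumerate(p))

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, bq in enumerate(q):
            out[i + j] += a * bq
    return out

# Gram-Schmidt on 1, x, x^2, ... (Theorem 2.2)
P = [[Fraction(1)]]
for n in range(3):
    x_next = [Fraction(0)] * (n + 1) + [Fraction(1)]      # x^{n+1}
    new = list(x_next)
    for Pi in P:
        coef = apply_c(mul(x_next, Pi)) / apply_c(mul(Pi, Pi))
        new = [a - coef * (Pi[i] if i < len(Pi) else 0)
               for i, a in enumerate(new)]
    P.append(new)

print(P[2])    # x^2 - 1/3, the monic Legendre polynomial of degree 2
```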
If H_k^{(0)} ≥ 0 for all k then c(P_k^2) ≥ 0 for all k and the functional c (or the sequence {c_k}) is said to be positive. If H_k^{(0)} > 0 for all k then c is positive definite.

Let us recall that a necessary and sufficient condition for c to be positive definite is the existence of a function α, bounded, nondecreasing and with infinitely many points of increase, such that
c_i = ∫_{-∞}^{+∞} x^i dα(x).
This result has been proved by Widder [139]. In that case the orthogonal polynomials we get are the classical orthogonal polynomials [54, 129]. Finding α from the moments c_i is called the moment problem. It is a difficult problem and many books have been written on this subject [2, 124]. When α has only a finite number k of points of increase then c is positive but not definite, and it can be proved that H_n^{(0)} = 0 for all n > k.
When c is positive definite we can consider the orthonormal polynomials P*_k, defined by P*_0(x) = c_0^{-1/2} and
P*_{n+1}(x) = K_{n+1} (x^{n+1} - Σ_{i=0}^n c(x^{n+1} P*_i) P*_i(x)) for n = 0, 1, ...,
where K_{n+1} is chosen so that c(P*_{n+1}^2) = 1. The proof of orthogonality is as in Theorem 2.2. If we write P*_k as P*_k(x) = t_k x^k + ..., then we find that t_k^2 = H_k^{(0)}/H_{k+1}^{(0)} and then
H_{k+1}^{(0)} = (t_0 t_1 ... t_k)^{-2},
which is a result proved by Geronimus [60].
2.2.
Recurrence relation

We first have to show that the polynomials P_k are linearly independent. If Σ_i a_i P_i(x) = 0, ∀x, then multiplying by P_j and applying c
we get a_j c(P_j^2) = 0 and thus a_j = 0 for all j.

Theorem 2.4. The polynomials {P_k} satisfy the following recurrence relation

P_{k+1}(x) = (A_{k+1}x + B_{k+1})P_k(x) − C_{k+1}P_{k−1}(x)   for k = 0, 1, ...

with P_{−1}(x) = 0 and P_0(x) = an arbitrary non zero constant. The constants are
given by

A_{k+1} = h_{k+1}/β_k,   B_{k+1} = −A_{k+1}α_k/h_k   and   C_{k+1} = β_{k−1}h_{k+1}/(β_k h_{k−1})

with

α_k = c(xP_k^2),   β_k = c(xP_kP_{k+1}),   h_k = c(P_k^2).
Proof. Since the polynomials P_k form a basis of 𝒫 any polynomial can be
written in this basis. Thus

xP_k(x) = a_{k+1}^{(k)}P_{k+1}(x) + a_k^{(k)}P_k(x) + ⋯ + a_0^{(k)}P_0(x).

Multiplying by P_i and applying c we obtain a_i^{(k)} = c(xP_kP_i)/h_i, which is zero
for i < k − 1. If i = k we get

a_k^{(k)} = α_k/h_k;

if i = k − 1, a_{k−1}^{(k)} = β_{k−1}/h_{k−1}; if i = k + 1 we get

a_{k+1}^{(k)} = β_k/h_{k+1}.

Solving for P_{k+1} gives the recurrence relation with the stated constants. We obtain

A_{k+1} = h_{k+1}/β_k = t_{k+1}/t_k   (2.1)

where t_k denotes the leading coefficient of P_k. t_k and t_{k+1} can be chosen arbitrarily and thus A_{k+1} can be computed directly
from this formula without the knowledge of P_{k+1}.
We also get

C_{k+1} = (β_{k−1}/β_k)(h_{k+1}/h_{k−1}) = (t_{k−1}t_{k+1}/t_k^2)(h_k/h_{k−1}),   for k = 1, 2, ....   (2.2)

Writing P_k(x) = t_k x^k + s_k x^{k−1} + ⋯ we have

α_k = c((t_k x^{k+1} + s_k x^k + ⋯)P_k) = t_k c(x^{k+1}P_k) + s_k c(x^k P_k)   (2.3)

and thus

B_{k+1} = −α_k t_{k+1}/(h_k t_k).   (2.4)

Finally, expanding c(x^k · xP_k) in the basis gives

c(x^{k+1}P_k) = (α_k/h_k) c(x^k P_k) + (β_{k−1}/h_{k−1}) c(x^k P_{k−1}).   (2.5)
These relations cannot be used in practice because they also need the
knowledge of P_{k+1}.
One of the simplest choices for t_k is t_k = 1, ∀k. In that case A_{k+1} = 1 and
the formula of Theorem 2.1 applies with D_k = 1/H_k^{(0)}. The polynomials are
said to be monic.
We get

h_{k+1}/β_k = A_{k+1} = 1

and then

B_{k+1} = −α_k/h_k,   C_{k+1} = h_k/h_{k−1}

with h_k = c(x^k P_k) = H_{k+1}^{(0)}/H_k^{(0)} and α_k = c(x^{k+1}P_k) + s_k c(x^k P_k). Let us write
P_k as

P_k(x) = Σ_{i=0}^{k} p_i^{(k)} x^i.

Then putting this into the recurrence relation and identifying the coefficients
of the various powers of x we get a recursive method to compute the
coefficients of P_k:

P_{−1}(x) = 0,   p_0^{(−1)} = p_1^{(−1)} = 0,   h_{−1} = 1,
P_0(x) = 1,   p_0^{(0)} = 1,   p_{−1}^{(0)} = p_1^{(0)} = 0,   h_0 = c_0,
P_1(x) = x − c_1/c_0,   p_1^{(1)} = 1,   p_0^{(1)} = −c_1/c_0,   p_{−1}^{(1)} = p_2^{(1)} = 0.

Then, for k = 1, 2, ..., we compute

h_k = Σ_{i=0}^{k} c_{k+i} p_i^{(k)},
γ_k = Σ_{i=0}^{k} c_{k+i+1} p_i^{(k)},
α_k = γ_k + p_{k−1}^{(k)} h_k,
B_{k+1} = −α_k/h_k,
C_{k+1} = h_k/h_{k−1},
P_{k+1}(x) = (x + B_{k+1})P_k(x) − C_{k+1}P_{k−1}(x),

that is p_i^{(k+1)} = p_{i−1}^{(k)} + B_{k+1}p_i^{(k)} − C_{k+1}p_i^{(k−1)} for i = 0, ..., k + 1.
This is the bordering method obtained in [23] to compute Pade approximants. This bordering method had been obtained by using a bordering
technique for solving systems of linear equations with bordered matrices as
is the case for Pade approximants. Thus we see that this bordering method is
perfectly explained with the theory of orthogonal polynomials since it is only
the recurrence relation of orthogonal polynomials.
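The recursive scheme above can be sketched directly (this is not the bordered-matrix implementation of [23]; the function name and the use of exact rationals are illustrative choices). As test data we take the moments of the Lebesgue measure on [−1, 1], for which the monic orthogonal polynomials are the monic Legendre polynomials:

```python
from fractions import Fraction

def monic_orthogonal(c, N):
    """Monic orthogonal polynomials P_0, ..., P_N for the moments c,
    computed by the recursive method above (coefficient lists, low degree first)."""
    polys = [[Fraction(1)], [-c[1] / c[0], Fraction(1)]]   # P_0, P_1
    h_prev = c[0]                                          # h_0 = c_0
    for k in range(1, N):
        P = polys[k]
        h = sum(c[k + i] * P[i] for i in range(k + 1))       # h_k
        g = sum(c[k + i + 1] * P[i] for i in range(k + 1))   # gamma_k
        a = g + P[k - 1] * h                                 # alpha_k
        B, C = -a / h, h / h_prev
        # P_{k+1}(x) = (x + B_{k+1}) P_k(x) - C_{k+1} P_{k-1}(x)
        new = [Fraction(0)] * (k + 2)
        for i in range(k + 1):
            new[i + 1] += P[i]       # x * P_k
            new[i] += B * P[i]       # B_{k+1} * P_k
        for i in range(k):
            new[i] -= C * polys[k - 1][i]   # - C_{k+1} * P_{k-1}
        polys.append(new)
        h_prev = h
    return polys

# Moments c_i of the Lebesgue measure on [-1, 1]
c = [Fraction(2, i + 1) if i % 2 == 0 else Fraction(0) for i in range(8)]
polys = monic_orthogonal(c, 3)
assert polys[2] == [Fraction(-1, 3), Fraction(0), Fraction(1)]       # x^2 - 1/3
assert polys[3] == [Fraction(0), Fraction(-3, 5), Fraction(0), Fraction(1)]  # x^3 - (3/5)x
```

The asserted coefficients are those of the monic Legendre polynomials, as expected.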
Remark 1. Using (2.3) and writing the expressions for α_k/h_k and α_{k−1}/h_{k−1}
we get

B_{k+1} = γ_{k−1}/h_{k−1} − γ_k/h_k,   with γ_{−1} = 0.
Remark 2. The expressions for Ak+l and Bk+l can be obtained more easily
by identification of terms of the same degree in the recurrence relation.
However the longer proof was useful in obtaining the last recursive method.
The preceding recursive method furnishes the coefficients of the recurrence relation as well as the coefficients of the orthogonal polynomials. If
only the coefficients of the recurrence relation are needed, then a method
due to Chebyshev [39] can be used.
Let us consider the infinite matrix Z with elements z_{k,n} = c(P_k x^n). By
the orthogonality property z_{k,n} = 0 if n < k. Moreover z_{−1,n} = 0 and z_{0,n} = c_n.
Multiplying the recurrence relation of the monic orthogonal polynomials
by x^n and applying the functional c to both sides of the equation we get

z_{k+1,n} = z_{k,n+1} + B_{k+1} z_{k,n} − C_{k+1} z_{k−1,n}.

Then z_{k+1,k−1} = 0 implies

C_{k+1} = z_{k,k}/z_{k−1,k−1}   (C_1 = c_0)

and z_{k+1,k} = 0 implies

B_{k+1} = z_{k−1,k}/z_{k−1,k−1} − z_{k,k+1}/z_{k,k}.

Thus, starting from the row {c_n}, the rows of Z and the coefficients of the
recurrence relation can be computed simultaneously.
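A minimal sketch of this method of Chebyshev, with the Legendre moments as assumed test data (for which every B_{k+1} vanishes and C_2 = 1/3, C_3 = 4/15):

```python
from fractions import Fraction

# Moments on [-1, 1]; the orthogonal polynomials are the monic Legendre ones
c = [Fraction(2, i + 1) if i % 2 == 0 else Fraction(0) for i in range(8)]

N = 3                                  # number of recurrence coefficients wanted
zprev = [Fraction(0)] * len(c)         # row z_{-1,n} = 0
zcur = list(c)                         # row z_{0,n} = c_n
B, C = [], []
for k in range(N):
    # z_{k+1,k-1} = 0 and z_{k+1,k} = 0 determine C_{k+1} and B_{k+1}
    Ck = c[0] if k == 0 else zcur[k] / zprev[k - 1]
    Bk = -zcur[k + 1] / zcur[k] + (zprev[k] / zprev[k - 1] if k > 0 else 0)
    # z_{k+1,n} = z_{k,n+1} + B_{k+1} z_{k,n} - C_{k+1} z_{k-1,n}
    znext = [zcur[n + 1] + Bk * zcur[n] - Ck * zprev[n]
             for n in range(len(zcur) - 1)]
    B.append(Bk); C.append(Ck)
    zprev, zcur = zcur, znext

assert B == [0, 0, 0]                          # Legendre: all B_{k+1} = 0
assert C[1:] == [Fraction(1, 3), Fraction(4, 15)]
```

Each pass shortens the stored row by one entry, so 2N moments suffice for N coefficients.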
Now let {Π_k} be a family of polynomials, known in advance, with Π_0(x) = 1,
and set ν_n = c(Π_n); the ν_n are called modified moments.
Let us now proceed in two stages: we first compute the {ν_n} from the {c_n}
and then determine the coefficients {B_{k+1}} and {C_{k+1}} from the modified
moments.
For the first stage, let Y be the matrix with elements y_{k,n} = c(Π_k x^n). We have

y_{−1,n} = 0,   y_{0,n} = c_n.

Moreover

y_{k,0} = ν_k.

Thus we have to compute the first column of the matrix Y from its first row
which is known. Multiplying the recurrence relation of the polynomials {Π_k}
(whose coefficients are known) by x^n and applying the functional c to both sides, we get a relation expressing
the row k + 1 of Y in terms of the rows k and k − 1.
Thus, starting from the two initial rows, all the rows of Y can be successively
computed. This method [49] generalizes a method due to Wheeler
[136, 137].
The second stage consists in computing the coefficients of the recurrence relation from the modified moments. The method used, due to
Wheeler [136], is very similar to the Chebyshev method.
Let us consider the matrix Z with elements z_{k,n} = c(P_k Π_n), which are
zero if n < k. Multiplying the recurrence relation of {P_k} by Π_n and applying
c we get

z_{k+1,n} = c(xP_kΠ_n) + B_{k+1} z_{k,n} − C_{k+1} z_{k−1,n},

and c(xP_kΠ_n) can be expressed by means of the recurrence relation of the
{Π_n} (which expresses xΠ_n in terms of Π_{n+1}, Π_n and Π_{n−1}). Then the
conditions z_{k+1,k−1} = 0 and z_{k+1,k} = 0 again determine C_{k+1} and B_{k+1}.

We also have, from (2.2) and since h_k = t_k c(x^k P_k),

C_{k+1} = (t_{k−1}t_{k+1}/t_k^2)(h_k/h_{k−1}) = A_{k+1} c(x^k P_k)/c(x^{k−1}P_{k−1}).   (2.6)
We also have

c(x^{k+1}P_k) = −(s_{k+1}/t_{k+1}) c(x^k P_k)
             = −(B_{k+1}/A_{k+1} + s_k/t_k) c(x^k P_k)
             = −(B_{k+1}/A_{k+1}) c(x^k P_k) + (C_{k+1}/A_{k+1}) c(x^k P_{k−1}),   (2.7)

the first equality coming from x^{k+1} = t_{k+1}^{−1}(P_{k+1}(x) − s_{k+1}x^k − ⋯) and the
orthogonality relations, the second from s_{k+1}/t_{k+1} = s_k/t_k + B_{k+1}/A_{k+1}, and the
third from applying c(x^k ·) to the recurrence relation.
Let p be a polynomial of degree k written as

p(x) = Σ_{i=0}^{k} a_i P_i(x),   with a_k ≠ 0.

Then

c(pP_k) = Σ_{i=0}^{k} a_i c(P_kP_i) = a_k c(P_k^2).
2.3.
Algebraic properties

Theorem 2.6 (Christoffel-Darboux identity).

(t_k/(h_k t_{k+1})) [P_{k+1}(x)P_k(t) − P_{k+1}(t)P_k(x)] = (x − t) Σ_{i=0}^{k} h_i^{−1} P_i(x)P_i(t).

Proof. Let us write the recurrence relation at the points x and t, multiply the
first by P_i(t) and the second by P_i(x), and subtract.
For all i, A_{i+1} is different from zero since P_{i+1} has the exact degree i + 1. Let
us multiply this equality by h_i^{−1}A_{i+1}^{−1} and use A_{i+1} = t_{i+1}/t_i and
C_{i+1} = A_{i+1}h_i/(A_i h_{i−1}).
We get

(x − t)h_i^{−1}P_i(x)P_i(t) = h_i^{−1}A_{i+1}^{−1}[P_i(t)P_{i+1}(x) − P_i(x)P_{i+1}(t)]
                           − h_{i−1}^{−1}A_i^{−1}[P_{i−1}(t)P_i(x) − P_{i−1}(x)P_i(t)]

and the result follows by summing from i = 0 to k.

Corollary 2.1. Letting t → x,

(t_k/(h_k t_{k+1})) [P'_{k+1}(x)P_k(x) − P_{k+1}(x)P'_k(x)] = Σ_{i=0}^{k} h_i^{−1} P_i^2(x).

Let

K_k(x, t) = Σ_{i=0}^{k} h_i^{−1} P_i(x)P_i(t).
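As a quick sanity check of the Christoffel-Darboux identity in the monic case (t_k = 1), one can evaluate both sides exactly for the first monic Legendre polynomials; the values h_i used below are computed from the moments 2/(i+1) and are assumptions of this example:

```python
from fractions import Fraction

# Monic Legendre data on [-1, 1]: P_0 = 1, P_1 = x, P_2 = x^2 - 1/3,
# with h_0 = 2, h_1 = 2/3, h_2 = 8/45.
h = [Fraction(2), Fraction(2, 3), Fraction(8, 45)]
P = [lambda x: Fraction(1),
     lambda x: x,
     lambda x: x * x - Fraction(1, 3)]

def lhs(k, x, t):
    # (1/h_k)[P_{k+1}(x)P_k(t) - P_{k+1}(t)P_k(x)]  (t_k = t_{k+1} = 1)
    return (P[k + 1](x) * P[k](t) - P[k + 1](t) * P[k](x)) / h[k]

def rhs(k, x, t):
    # (x - t) * sum_{i=0}^{k} h_i^{-1} P_i(x) P_i(t)
    return (x - t) * sum(P[i](x) * P[i](t) / h[i] for i in range(k + 1))

x, t = Fraction(1, 2), Fraction(1, 3)
assert lhs(1, x, t) == rhs(1, x, t) == Fraction(1, 8)
```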
Theorem 2.7. If p has degree at most k then

c(p(x)K_k(x, t)) = p(t).

Proof. Let us write

p(x) = Σ_{j=0}^{k} b_j P_j(x).

Then

c(p(x)K_k(x, t)) = Σ_{i=0}^{k} h_i^{−1} P_i(t) c(P_i(x)p(x))
                 = Σ_{i=0}^{k} h_i^{−1} P_i(t) b_i c(P_i^2)
                 = Σ_{i=0}^{k} b_i P_i(t) = p(t).

If p has degree less than k then c(K_k(x, t)p(x)(x − t)) = p(t)(t − t) = 0.
From Theorem 2.7 we get in particular

c(x^n K_k(x, t)) = t^n   for n = 0, ..., k,   and   c(K_k^2(x, t)) = K_k(t, t).

Corollary 2.2.

H_{k+1}^{(0)} K_k(x, t) = − | c_0   c_1  ⋯  c_k      1   |
                           | c_1   c_2  ⋯  c_{k+1}  x   |
                           | ⋮                      ⋮   |
                           | c_k   ⋯       c_{2k}   x^k |
                           | 1     t    ⋯  t^k      0   |.

Proof. The right hand side, divided by −H_{k+1}^{(0)}, is a polynomial of degree k in
x; applying c(x^n ·) replaces the column (1, x, ..., x^k)^T by (c_n, ..., c_{n+k})^T. For any
n = 0, ..., k let us replace this new column by its difference from the n-th column
of the moment block; expanding the resulting determinant gives

H_{k+1}^{(0)} c(x^n K_k(x, t)) = t^n H_{k+1}^{(0)},

that is c(x^n K_k(x, t)) = t^n for n = 0, ..., k, which characterizes K_k.

Let

x̃ = (1, x, ..., x^k)^T,   t̃ = (1, t, ..., t^k)^T,   A_{k+1} = (c_{i+j})_{i,j=0}^{k}.

Expanding the bordered determinant (Schur complement) gives

K_k(x, t) = t̃^T A_{k+1}^{−1} x̃.
53
Theorem 2.8. For a fixed value of t the polynomials {K_k(x, t)} form a family
of orthogonal polynomials with respect to the functional c^{(t)} defined by
c^{(t)}(p(x)) = c((x − t)p(x)).

Proof. For m ≤ n − 1,

c((x − t)K_n(x, t)K_m(x, t)) = (t − t)K_m(t, t) = 0

by the reproducing property of Theorem 2.7 applied to the polynomial
(x − t)K_m(x, t), whose degree is m + 1 ≤ n.
Let us now introduce the polynomials

Q_k(t) = c((P_k(x) − P_k(t))/(x − t)),

the functional c acting on the variable x. We say that the polynomials {Q_k} are associated with the polynomials {P_k}.
They are sometimes called orthogonal polynomials of second kind. We have
already proved that Q_k has degree k − 1.
From Theorem 2.1 we immediately get

Theorem 2.9.

Q_k(t) = D_k | c_0      c_1  ⋯  c_k       |
             | ⋮                          |
             | c_{k−1}  ⋯       c_{2k−1}  |
             | 0  c_0  c_0t + c_1  ⋯  c_0t^{k−1} + c_1t^{k−2} + ⋯ + c_{k−1} |.
Theorem 2.10. The polynomials {Q_k} satisfy the same recurrence relation as
the polynomials {P_k} with Q_{−1}(x) = −1, Q_0(x) = 0 and C_1 = A_1c(P_0).

Proof. Let us write the recurrence relation for the polynomials P_k with x
and t as variables. Let us subtract these two recurrence relations and divide
by x − t. Since

(xP_k(x) − tP_k(t))/(x − t) = t (P_k(x) − P_k(t))/(x − t) + P_k(x),

applying c gives

Q_{k+1}(t) = (A_{k+1}t + B_{k+1})Q_k(t) − C_{k+1}Q_{k−1}(t) + A_{k+1}c(P_k),

and c(P_k) = 0 for k = 1, 2, .... Finally

Q_1(t) = c( (A_1x + B_1 − A_1t − B_1)/(x − t) · P_0 ) = A_1c(P_0),

and since Q_0 = 0 and Q_{−1} = −1 the recurrence also holds for k = 0 provided
C_1 = A_1c(P_0).
Theorem 2.12.

P_k(x)Q_{k+1}(x) − Q_k(x)P_{k+1}(x) = A_{k+1}h_k   for k ≥ 0

(for the normalization P_0 = 1).

Proof. Let us write the recurrence relation for P_{k+1} and multiply it by Q_k.
Then write the recurrence relation for Q_{k+1} and multiply it by P_k. If we
subtract these two relations we get

P_k(x)Q_{k+1}(x) − Q_k(x)P_{k+1}(x) = C_{k+1}[P_{k−1}(x)Q_k(x) − Q_{k−1}(x)P_k(x)].

Repeating this and using C_{j+1} = A_{j+1}h_j/(A_j h_{j−1}) gives the factor
A_{k+1}h_k/(A_1h_0), and P_0(x)Q_1(x) = A_1c(P_0)P_0 = A_1h_0 when P_0 = 1.

Corollary 2.3.

[Q_{k+1}(x)/P_{k+1}(x) − Q_k(x)/P_k(x)]^{−1} + [Q_k(x)/P_k(x) − Q_{k−1}(x)/P_{k−1}(x)]^{−1}
    = (A_{k+1}x + B_{k+1})P_k^2(x)(A_{k+1}h_k)^{−1}.

Proof. Using Theorem 2.12 twice we see that the left hand side of the
preceding relation is equal to

P_k(x)[P_{k+1}(x)/(A_{k+1}h_k) + P_{k−1}(x)/(A_k h_{k−1})].

Replacing P_{k+1} by its expression from the recurrence relation and
A_{k+1}h_k/(A_k h_{k−1}) by C_{k+1} we get the desired result.
In the same way one obtains a Christoffel-Darboux type identity mixing the
two families:

(t_k/(h_k t_{k+1}))[P_{k+1}(x)Q_k(t) − P_k(x)Q_{k+1}(t)] = (x − t) Σ_{i=0}^{k} h_i^{−1} P_i(x)Q_i(t) − 1.

The proof is the same telescoping argument as for Theorem 2.6, since the P_k
and the Q_k satisfy the same recurrence relation; the boundary term equals
C_1h_0^{−1}A_1^{−1} and, using the definitions of C_1, Q_1, Q_0, P_1 and P_0, we thus find
that it is equal to 1.

Corollary 2.4. Letting x → t in this identity,

(t_k/(h_k t_{k+1}))[P_{k+1}(t)Q_k(t) − P_k(t)Q_{k+1}(t)] = −1,

which is Theorem 2.12 again; differentiating the relation of Theorem 2.12 we
get the second result

P'_k(x)Q_{k+1}(x) + P_k(x)Q'_{k+1}(x) − Q'_k(x)P_{k+1}(x) − Q_k(x)P'_{k+1}(x) = 0.

Applying c to this relation and using the orthogonality relations
(c(P_kQ'_{k+1}) = c(P_{k+1}Q'_k) = 0 since Q'_{k+1} and Q'_k have degree less than k)
we get

c(Q_kP'_{k+1}) = c(Q_{k+1}P'_k).
Let

H_k(x, t) = Σ_{i=0}^{k} h_i^{−1} P_i(x)Q_i(t)

and let

M_k(t) = ( Q_k(t)   Q_{k−1}(t) ),   N_k(t) = ( A_kt + B_k   1 )
         ( P_k(t)   P_{k−1}(t) )              ( −C_k         0 )

for k = 1, 2, .... Thus we have

M_k(t) = M_0(t)N_1(t)N_2(t) ⋯ N_k(t)

with

M_0(t) = ( 0  −1 ).
         ( 1   0 )

Taking determinants, det N_j(t) = C_j and det M_0(t) = 1, so

det M_{k+1}(t) = P_k(t)Q_{k+1}(t) − Q_k(t)P_{k+1}(t) = C_1C_2 ⋯ C_{k+1} = A_{k+1}h_k.

Using this result, relation (2.6) becomes

A_1A_2 ⋯ A_k c(x^k P_k) = h_k

and relation (2.7)

c(x^{k+1}P_k) = −[B_1/A_1 + B_2/A_2 + ⋯ + B_{k+1}/A_{k+1}] c(x^k P_k).

2.4.
We shall now study what happens to the zeros when the functional is
positive definite. We first have to study in more detail the meaning of
positive definiteness. First,

c(q) > 0

for every polynomial q ≠ 0 such that q(x) ≥ 0, ∀x.
Proof. Let us recall that, for c positive definite, H_k^{(0)} > 0 and c(P_k^2) > 0, ∀k.
Let p be an arbitrary polynomial with degree k. Then

p(x) = a_0P_0(x) + ⋯ + a_kP_k(x)

and c(p^2) = Σ a_i^2 c(P_i^2) > 0 if p ≠ 0. Now a polynomial q such that q(x) ≥ 0, ∀x,
has even degree and can be written as

q(x) = d Π_{j=1}^{m} (x − x_j − iy_j)(x − x_j + iy_j),   d > 0.

Setting

A(x) + iB(x) = d^{1/2} Π_{j=1}^{m} (x − x_j − iy_j),
A(x) − iB(x) = d^{1/2} Π_{j=1}^{m} (x − x_j + iy_j),

we get q(x) = A^2(x) + B^2(x) and thus c(q) = c(A^2) + c(B^2) > 0.
Theorem 2.14. If c is positive definite then P_k has k real and distinct zeros.

Proof. Let x_1, ..., x_n be the real zeros of odd multiplicity of P_k and let
v(x) = (x − x_1) ⋯ (x − x_n). Then v(x)P_k(x) keeps a constant sign, while if n < k,

c((x − x_1) ⋯ (x − x_n)P_k(x)) = 0,

which contradicts the preceding result; thus n = k.
Let us now prove that the zeros are distinct. Let us assume that x_1 has
multiplicity n > 1. Then we can write

P_k(x) = (x − x_1)^n u(x).

If n is even then

c((x − x_1)^n u^2(x)) > 0,

but this quantity equals c(P_k(x)u(x)) = 0 since u has degree k − n < k. If n is
odd then

c((x − x_1)^{n+1} u^2(x)) > 0,

but this quantity equals c(P_k(x)(x − x_1)u(x)) = 0 since (x − x_1)u(x) has degree
k − n + 1 < k. In both cases we get a contradiction.
The zeros of consecutive polynomials, and those of the associated polynomials, interlace. More precisely:

c) At any zero z of P_{k+1}, Corollary 2.1 gives

P'_{k+1}(z)P_k(z) = (h_k t_{k+1}/t_k) Σ_{i=0}^{k} h_i^{−1} P_i^2(z) > 0.

If y and z are two consecutive zeros of P_{k+1} then
P'_{k+1}(y) and P'_{k+1}(z) have opposite signs. So have P_k(y) and P_k(z), which
shows that P_k has a zero between y and z. From Theorem 2.14 this zero
cannot be y or z. The converse is proved in the same way.

b) Let y and z be two consecutive zeros of P_{k+1}. From Theorem 2.12 it
follows that

P_k(y)Q_{k+1}(y) = P_k(z)Q_{k+1}(z) = A_{k+1}h_k.

From c), P_k(y) and P_k(z) have opposite signs. So have Q_{k+1}(y) and Q_{k+1}(z),
which shows that Q_{k+1} has a zero between y and z. Since P_{k+1} has degree
k + 1, it has k + 1 distinct real zeros. Between two consecutive zeros there is
a zero of Q_{k+1}. Thus Q_{k+1} has k real distinct zeros.

d) From Theorem 2.11 we know that the polynomials Q_k satisfy
Corollary 2.1. Then the same proof as in c) can be given.

e) Let y and z be two consecutive zeros of P_k. From Theorem 2.12,

Q_k(y)P_{k+1}(y) = Q_k(z)P_{k+1}(z) = −A_{k+1}h_k.

P_{k+1}(y) and P_{k+1}(z) have opposite signs. So have Q_k(y) and Q_k(z) and Q_k
has one zero between y and z; Q_k cannot have more than one zero between
y and z since Q_k only has k − 1 zeros.
If c is positive definite the monic polynomial P_k also has an extremal
property:

c(P_k^2) ≤ c(p^2)

for every monic polynomial p of degree k, since writing p = P_k + Σ_{i<k} a_iP_i
gives c(p^2) = c(P_k^2) + Σ_{i<k} a_i^2 c(P_i^2), with equality only for p = P_k.
Let a be a real number and set

P_k(x, a) = P_k(x) − aP_{k−1}(x),
Q_k(x, a) = Q_k(x) − aQ_{k−1}(x).

Then

c(x^i P_k(x, a)) = 0   for i = 0, ..., k − 2.
2.5.

Let x_1, ..., x_k be distinct points and let L_k be the polynomial interpolating a
function f at these points:

L_k(x_i) = f(x_i)   for i = 1, ..., k.

In Lagrange's form

L_k(x) = v_k(x) Σ_{i=1}^{k} f(x_i)/((x − x_i)v'_k(x_i))

with v_k(x) = (x − x_1) ⋯ (x − x_k), and we get

c(L_k) = Σ_{i=1}^{k} A_i^{(k)} f(x_i)

with

A_i^{(k)} = c( v_k(x)/((x − x_i)v'_k(x_i)) ).

Such a formula is called an interpolatory quadrature formula; it is exact for
every polynomial f of degree at most k − 1, and in particular

c_i = c(x^i) = Σ_{j=1}^{k} A_j^{(k)} x_j^i,   for i = 0, ..., k − 1.
Theorem 2.20. If x_1, ..., x_k are the zeros of P_k then the quadrature formula is
exact for every polynomial P of degree at most 2k − 1.

Proof. Let us divide P by P_k:

P(x) = Q(x)P_k(x) + R(x),   with deg R ≤ k − 1 and deg Q ≤ k − 1.

Then c(QP_k) = 0 by orthogonality and, since the formula is interpolatory,

c(P) = c(R) = Σ_{i=1}^{k} A_i^{(k)} R(x_i).

But

P(x_i) = Q(x_i)P_k(x_i) + R(x_i) = R(x_i)

since P_k(x_i) = 0, and thus c(P) = Σ_{i=1}^{k} A_i^{(k)} P(x_i).
In the case where x_1, ..., x_k are the roots of P_k (which are assumed to
be distinct) the interpolatory quadrature formula is called a Gaussian
quadrature formula. It is exactly the method we used in the first chapter to
construct Pade approximants; the corresponding function was f(x) =
(1 − xt)^{−1} and the result of Theorem 1.15 follows from Theorem 2.20.
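A small illustration, under the assumption that c is integration over [−1, 1]: the two-point Gaussian rule built from the zeros of the monic P_2(x) = x^2 − 1/3 is exact up to degree 2k − 1 = 3. The helper `functional` is a stand-in for c:

```python
import numpy as np

# Moments of c on [-1, 1]: c_i = integral of x^i
moments = np.array([2 / (i + 1) if i % 2 == 0 else 0.0 for i in range(6)])

def functional(poly):
    """Apply c to a polynomial given by coefficients poly[i] of x^i."""
    return sum(a * moments[i] for i, a in enumerate(poly))

# Zeros of the monic orthogonal polynomial P_2(x) = x^2 - 1/3 are the nodes
nodes = np.sort(np.roots([1.0, 0.0, -1.0 / 3.0]))

# Weights A_i^{(k)} = c(v_k(x) / ((x - x_i) v_k'(x_i)))
weights = []
for i, xi in enumerate(nodes):
    others = [x for j, x in enumerate(nodes) if j != i]
    num = np.poly(others)                       # prod (x - x_j), highest first
    denom = np.prod([xi - x for x in others])
    weights.append(functional(num[::-1]) / denom)

# Exactness for all polynomials of degree <= 2k - 1 = 3
for poly in ([0, 0, 0, 1], [1, 2, 3, 0], [0, 0, 1]):   # x^3, 3x^2+2x+1, x^2
    gauss = sum(w * np.polyval(list(reversed(poly)), x)
                for w, x in zip(weights, nodes))
    assert abs(gauss - functional(poly)) < 1e-12
```

Here both weights come out equal to 1 and the nodes are ±1/√3, the classical two-point Gauss-Legendre rule.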
Let us now give some other expressions for the coefficients A_i^{(k)}. First,
since P_k(x_i) = 0,

A_i^{(k)} = Q_k(x_i)/P'_k(x_i)
         = [P_{k−1}(x_i)Q_k(x_i) − P_k(x_i)Q_{k−1}(x_i)] / [P_{k−1}(x_i)P'_k(x_i) − P_k(x_i)P'_{k−1}(x_i)].

We also have, by Theorem 2.12,

A_i^{(k)} = A_k h_{k−1}/(P'_k(x_i)P_{k−1}(x_i)).

Let us consider

P(x) = P_k^2(x)/(P'^2_k(x_i)(x − x_i)^2).

Since P has degree 2k − 2 and P(x_i) = δ_{ij},

c(P) = Σ_{j=1}^{k} A_j^{(k)} δ_{ij} = A_i^{(k)}.

This shows that, if the functional c is positive definite, then A_i^{(k)} > 0, ∀i and
k.
Let us now study the stability of these quadrature formulas. If the values
f(x_i) are replaced by perturbed values f(x_i) + ε_i, the error induced on c(L_k) is

E_k = Σ_{i=1}^{k} A_i^{(k)} ε_i.

The formulas are stable if and only if there exists M (independent of k) such that

Σ_{i=1}^{k} |A_i^{(k)}| ≤ M,   ∀k.

Proof. The condition is sufficient since

|E_k| ≤ Σ_{i=1}^{k} |A_i^{(k)}| · max_i |ε_i| ≤ M max_i |ε_i|.

Let us now prove that the condition is necessary. Let us assume that
Σ_{i=1}^{k} |A_i^{(k)}| is not bounded and take ε_i = A_i^{(k)}/|A_i^{(k)}|. Then E_k = Σ_{i=1}^{k} |A_i^{(k)}|
can be made arbitrarily large while max_i |ε_i| = 1.
Let l_k(f) = Σ_{i=1}^{k} A_i^{(k)} f(x_i). From the Banach-Steinhaus theorem,
lim_{k→∞} l_k(f) = c(f) for all f in a Banach space V if and only if it holds

∀f ∈ D

where D is dense in V (D̄ = V) and

‖l_k‖ ≤ M,   ∀k,

where M is independent of k. If ‖f‖ = max_{x∈[a,b]} |f(x)| then

‖l_k‖ = Σ_{i=1}^{k} |A_i^{(k)}|.

By Theorem 2.19 we see that the first condition of the preceding theorem is
satisfied and we have

Theorem 2.24. A necessary and sufficient condition in order that

lim_{k→∞} l_k(f) = c(f),   ∀f ∈ C[a, b],

is that there exists M (independent of k) such that

Σ_{i=1}^{k} |A_i^{(k)}| ≤ M,   ∀k.

Thus the conditions for stability and for convergence are the same. We
see that Theorem 1.11 is a consequence of this theorem, as is the Toeplitz
theorem which was used in section 1.3 to obtain Theorem 1.8.
From the preceding theorem we get

Corollary 2.5. If the functional c is positive definite then Gaussian quadrature
formulas are stable and convergent on C[a, b].

Proof. We have already seen that A_i^{(k)} = 1/K_{k−1}(x_i, x_i). Thus if the functional c
is positive then A_i^{(k)} > 0 and

Σ_{i=1}^{k} |A_i^{(k)}| = Σ_{i=1}^{k} A_i^{(k)} = c_0,   ∀k.

Let c be defined by

c(x^i) = ∫_a^b x^i dα(x).
If a and b are finite and 1/t ∉ [a, b] then
(1 − xt)^{−1} ∈ C[a, b] and Corollary 2.5 applies. Thus we have proved that, in
that case,

lim_{k→∞} [k − 1/k]_f(t) = f(t).

Let P be the polynomial interpolating f at the zeros of P_k, each counted
twice (Hermite interpolation). Then, if f is 2k times differentiable,

f(x) = P(x) + P_k^2(x) f^{(2k)}(ξ)/(2k)!

where ξ ∈ [a, b] and depends on x.
Thus

c(f) = c(P) + (1/(2k)!) c(P_k^2(x) f^{(2k)}(ξ)),

but, by Theorem 2.20,

c(P) = Σ_{i=1}^{k} A_i^{(k)} P(x_i) = Σ_{i=1}^{k} A_i^{(k)} f(x_i) = c(L_k),

and thus, if f^{(2k)} is continuous,

c(f) − c(L_k) = (f^{(2k)}(η)/(2k)!) c(P_k^2) = (h_k/(2k)!) f^{(2k)}(η),   η ∈ [a, b].
If the c_i are given by c_i = ∫_a^b x^i dα(x)
with α bounded and nondecreasing in [a, b], then all the preceding assumptions are satisfied and we get the error term given in Theorem 1.18 as well
as the inequalities which follow it.
Let us now assume that the power series expansion of f is known (or
only some of the first coefficients):

f(x) = Σ_{i=0}^{∞} f_i x^i.

Then

c(f) = Σ_{i=0}^{∞} c_i f_i.

Quadrature formulas can also be based on the quasi-orthogonal polynomials
P_k(x, a), with coefficients

c( P_k(x, a)/((x − x_i)P'_k(x_i, a)) ),

where the x_i are now the zeros of P_k(x, a).
Applying the Gaussian quadrature formula to P_nP_m gives

c(P_nP_m) = Σ_{i=1}^{k} A_i^{(k)} P_n(x_i)P_m(x_i)

where the x_i's are the roots of P_k, which are assumed to be distinct. If
n + m ≤ 2k − 1 then the orthogonality property gives

Σ_{i=1}^{k} A_i^{(k)} P_n(x_i)P_m(x_i) = 0,   for n ≠ m.

If n = m then

Σ_{i=1}^{k} A_i^{(k)} P_n^2(x_i) ≠ 0.
In this section we saw that the theory of general orthogonal polynomials can be used to prove old and new results about Padetype and Pade
approximants and that the proofs are generally easier than the classical
proofs.
2.6.
Matrix formalism

Let us consider the infinite tridiagonal matrix

J = ( a_0  b_0  0    0    ⋯ )
    ( e_1  a_1  b_1  0    ⋯ )
    ( 0    e_2  a_2  b_2  ⋯ )
    ( ⋯                     ).

Starting from this matrix one can consider the difference equation

e_k y_{k−1} + a_k y_k + b_k y_{k+1} = x y_k,   k = 1, 2, ....

This difference equation is defined by one row of the J matrix. The first row
of J gives

a_0 y_0 + b_0 y_1 = x y_0,   y_0 = 1.

It is easy to see that, for all k, y_k is a polynomial of degree k with respect to
the parameter x. We shall write y_k = P_k(x). For that sequence of polynomials
to exist all the numbers b_k must be different from zero. This assumption will
now always be made. The polynomials P_0, P_1, ... are linearly independent:
if d_0, ..., d_n exist such that

d_0P_0(x) + ⋯ + d_nP_n(x) = 0,   ∀x,

then, since the leading coefficient of P_n is (b_0b_1 ⋯ b_{n−1})^{−1} ≠ 0, we get d_n = 0
and, step by step, d_{n−1} = ⋯ = d_0 = 0.
Thus any polynomial can be expressed in this basis. We can now define
a linear functional c on 𝒫 by

c(P_0^2) = 1   and   c(P_nP_k) = 0,   if n ≠ k.
This relation defines the functional c on the whole space since any polynomial P of degree k can be uniquely written as P = d_0P_0 + ⋯ + d_kP_k, and then

c(P) = c(P_0P) = Σ_{i=0}^{k} d_i c(P_0P_i) = d_0 c(P_0^2).
Theorem 2.25. The functional c is positive definite if and only if e_k b_{k−1} > 0
for k = 1, 2, ....

Proof. Let A(x) = Σ_i d_i P_i(x) be an arbitrary nonzero polynomial. Then

c(A^2) = Σ_i Σ_j d_i d_j c(P_iP_j) = Σ_i d_i^2 c(P_i^2),

so c is positive definite iff h_k = c(P_k^2) > 0 for all k. But, using the notation of
Theorem 2.4,

A_{k+1} = b_k^{−1} = t_{k+1}/t_k

and then

C_{k+1} = e_k/b_k = (b_{k−1}/b_k)(h_k/h_{k−1})

and

e_k b_{k−1}^{−1} = h_k h_{k−1}^{−1}.

Thus if e_k and b_{k−1} have the same sign so have h_k and h_{k−1}, ∀k. Since
h_0 = 1 the result follows.
Theorem 2.26. A necessary and sufficient condition for the functional c to be
positive definite is that the matrices

A_{k+1} = ( c_0    c_1    ⋯  c_k     )
          ( c_1    c_2    ⋯  c_{k+1} )
          ( ⋮                        )
          ( c_k    c_{k+1} ⋯  c_{2k} )

be positive definite for all k.

Proof. Let A(x) = a_0 + a_1x + ⋯ + a_kx^k be an arbitrary polynomial. Then

c(A^2) = Σ_{i=0}^{k} Σ_{j=0}^{k} a_i a_j c(x^i x^j) = Σ_{i=0}^{k} Σ_{j=0}^{k} c_{i+j} a_i a_j

and

c(A^2) = (a, A_{k+1}a),   where a = (a_0, ..., a_k)^T.
Theorem 2.27. Let c be positive definite and let a be such that P_k(a) ≠ 0. Let
V(a) be the number of zeros of P_k which are greater than a. Then V(a) is
equal to the number of changes of sign in the sequence P_0(a), P_1(a), ..., P_k(a).
We set

a = (a_0, ..., a_k, 0, 0, ...)^T,   b = (b_0, ..., b_k, 0, 0, ...)^T

for polynomials p(x) = a_0 + ⋯ + a_kx^k and q(x) = b_0 + ⋯ + b_kx^k. We can
define on 𝒫 a bilinear form by

(p, q) = c(pq) = (a, Ab)

where A = (c_{i+j})_{i,j=0}^{∞} and where (·,·) is the ordinary scalar product of two
vectors (which is well defined since the vectors a and b have only a finite
number of non zero components).
If H_k^{(0)} ≠ 0, ∀k, the bilinear form is definite. From the Schur theorem we
know that an orthogonal matrix Q and a diagonal matrix D exist such that

A = QDQ^T

where the columns of Q are the eigenvectors of A and where the eigenvalues of A are the diagonal elements of D. Thus

(p, p) = (a, QDQ^Ta) = (Q^Ta, DQ^Ta).

The scalar product is different from zero for all a ≠ 0 iff all the eigenvalues
of A are different from zero, that is to say H_k^{(0)} ≠ 0, ∀k. With this bilinear
form, 𝒫 is an inner product space [12]. The bilinear form is positive
definite iff H_k^{(0)} > 0, ∀k. With this bilinear form, 𝒫 becomes a pre-Hilbert
space. This point of view has been recently developed by Van Rossum
[106].
When c is positive definite it is possible to apply the Schwarz inequality
and we get

c_{i+j}^2 ≤ c_{2i} c_{2j},   i, j = 0, 1, ....

Hölder inequalities also apply and some other inequalities for positive
moment sequences can be deduced from them [32] (see section 2.10).
Let us now come to the matrix interpretation of orthogonal polynomials. Following Gragg [63] we set

P_k(x) = p_0^{(k)} + p_1^{(k)}x + ⋯ + p_k^{(k)}x^k

and let L be the infinite lower triangular matrix whose row k contains the
coefficients p_0^{(k)}, p_1^{(k)}, ..., p_k^{(k)}, 0, 0, ....
We shall denote by B_k the square matrix of the first k rows and columns of
any infinite matrix B.
Let H be the diagonal matrix with elements h_i = c(P_i^2). Then the
orthogonality relation of the family {P_k} can be written as

LAL^T = H,   or   L_kA_kL_k^T = H_k,   for k = 0, 1, ....

Since P_k has the exact degree k for all k, the matrices L_k are invertible
and we get

A_k = L_k^{−1}H_k(L_k^{−1})^T,   A = L^{−1}H(L^{−1})^T,   k = 0, 1, ....
Theorem 2.28. For the monic orthogonal polynomials,

P_k(x) = det(xI_k − J_k),   k = 1, 2, ...,

where J_k is the matrix of the first k rows and columns of the tridiagonal matrix

J = ( −B_1   1                )
    (  C_2  −B_2   1          )
    (        C_3  −B_3   1    )
    (              ⋱          ).

Proof. Expanding the determinant

det ( x + B_1   −1                  )
    ( −C_2      x + B_2   −1        )
    (                     ⋱         )
    (           −C_k      x + B_k   )

with respect to its last row gives

det(xI_k − J_k) = (x + B_k)det(xI_{k−1} − J_{k−1}) − C_k det(xI_{k−2} − J_{k−2}),

which is the recurrence relation of the monic polynomials P_k.

Theorem 2.30.

Q_k(x) = C_1 det(xI_{k−1} − J̃_{k−1}),   k = 2, 3, ...,

where J̃ is obtained from J by deleting its first row and column.

Proof. This is the same as that in Theorem 2.28 since the polynomials Q_k
satisfy the same recurrence relation with a different initialisation.

The zeros of Q_k are the eigenvalues of J̃_{k−1} and the Hadamard-Gerschgorin theorem can be used to locate these zeros. If c is positive
definite the theory of J matrices shows that the zeros of Q_k are real and distinct.
Let T be the infinite matrix with ones on its first superdiagonal and zeros
elsewhere, which represents multiplication by x in the basis 1, x, x^2, ...; the
relations above then read

JL = LT.

Let x_1^{(k)}, ..., x_k^{(k)} denote the zeros of P_k and let F_k be the Frobenius
(companion) matrix of the monic polynomial P_k,

F_k = ( 0           1                              )
      ( 0           0           1                  )
      (                         ⋱          1       )
      ( −p_0^{(k)}  −p_1^{(k)}  ⋯  −p_{k−1}^{(k)}  ),

whose characteristic polynomial is P_k. Truncating JL = LT to the first k rows
and columns and using P_k(x) = x^k + p_{k−1}^{(k)}x^{k−1} + ⋯ + p_0^{(k)} to eliminate x^k,
we get

J_kL_k = L_kF_k,   k = 2, 3, ....

Let D_k = diag(x_1^{(k)}, ..., x_k^{(k)}) and let V_k be the Vandermonde matrix whose
j-th column is (1, x_j^{(k)}, ..., (x_j^{(k)})^{k−1})^T. Then

F_kV_k = V_kD_k

which shows that the eigenvalues of F_k are the zeros of P_k. We also have

J_kL_k = L_kF_k

and then

J_kL_kV_k = L_kF_kV_k = L_kV_kD_k.

If we set Q_k = L_kV_k then

J_kQ_k = Q_kD_k

which shows that the matrices J_k and D_k are similar. Thus the zeros of P_k
are the eigenvalues of J_k.
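A numerical check of this similarity, with the monic Legendre recurrence coefficients as assumed data (all B_{k+1} = 0, C_2 = 1/3, C_3 = 4/15): the eigenvalues of J_3 coincide with the zeros of P_3(x) = x^3 − (3/5)x:

```python
import numpy as np

# Tridiagonal J_3 built from the monic Legendre recurrence coefficients
J3 = np.array([[0.0, 1.0,  0.0],
               [1/3, 0.0,  1.0],
               [0.0, 4/15, 0.0]])

eigs = np.sort(np.linalg.eigvals(J3).real)

# Zeros of the monic Legendre polynomial P_3(x) = x^3 - (3/5) x
zeros = np.sort(np.roots([1.0, 0.0, -3/5, 0.0]))

assert np.allclose(eigs, zeros)
```

Note that J_3 need not be symmetric; symmetrizing it with H^{1/2} as below gives the usual Jacobi matrix with the same eigenvalues.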
Let us now consider the matrix W_k^{−1} of order k with elements
K_{k−1}(x_i^{(k)}, x_j^{(k)}) for i, j = 1, ..., k. From the Christoffel-Darboux relation
(Theorem 2.6) and from Corollary 2.1 we see that this matrix is diagonal
and K_{k−1}(x_i^{(k)}, x_i^{(k)}) = 1/A_i^{(k)} ≠ 0. Since the elements of Q_k are equal to
P_{i−1}(x_j^{(k)}),

Q_k^T H_k^{−1} Q_k = W_k^{−1}

and thus

H_k = Q_k W_k Q_k^T.
Let R_k(x) = (xI_k − J_k)^{−1} be the resolvent of J_k and let us write

R_k(x) = Σ_{i=1}^{k} (x − x_i^{(k)})^{−1} R_i^{(k)}

where the matrices R_i^{(k)} are called residue matrices. Then we formally have

R_k(x) = Σ_{i=0}^{∞} J_k^i x^{−i−1},   I_k = Σ_{i=1}^{k} R_i^{(k)},   J_k = Σ_{i=1}^{k} R_i^{(k)} x_i^{(k)}.

Since J_k = Q_kD_kQ_k^{−1} and Q_k^{−1} = W_kQ_k^TH_k^{−1}, the elements of R_k(x) are

(R_k(x))_{i,j} = Σ_{n=1}^{k} A_n^{(k)} h_{j−1}^{−1} P_{i−1}(x_n^{(k)}) P_{j−1}(x_n^{(k)}) / (x − x_n^{(k)}),   i, j = 1, ..., k.

Using the Christoffel-Darboux formula we see that the matrices R_n^{(k)} are
projectors, that is to say R_n^{(k)2} = R_n^{(k)}. If the functional c is positive the
polynomials P_k^* = P_k/h_k^{1/2} are orthonormal and the matrix Q_k^* = H_k^{−1/2}Q_kW_k^{1/2}
is orthogonal. The matrix

J_k^* = H_k^{−1/2}J_kH_k^{1/2}

is symmetric and we have

J_k^*Q_k^* = Q_k^*D_k.

The residue matrices R_n^{(k)*} = H_k^{−1/2}R_n^{(k)}H_k^{1/2} of the matrix (xI_k − J_k^*)^{−1} are
projectors. They are also symmetric, and thus

(R_n^{(k)*})^T = R_n^{(k)*} = (R_n^{(k)*})^2.
We shall now give some brief remarks about Hankel quadratic forms
and the Rayleigh-Ritz principle. This principle will be extended in section
2.7.1.
Let q be an arbitrary polynomial with degree k,

q(x) = Σ_{i=0}^{k} q_i x^i = Σ_{i=0}^{k} a_i P_i(x).

Then

c(q^2) = Σ_{i,j=0}^{k} c_{i+j} q_i q_j = Σ_{i=0}^{k} a_i^2 c(P_i^2).
From the inertia law of real quadratic forms [55] we know that the number of
positive terms and the number of negative terms is independent of the
representation. Moreover a representation of the form
c(q^2) = Σ_{i=0}^{k} b_i^2 λ_i
exists where the λ_i's are the eigenvalues of the matrix A_{k+1}. Thus the
number of positive c(P_i^2) is equal to the number of positive eigenvalues of
A_{k+1} and the number of negative c(P_i^2) is equal to the number of negative
eigenvalues of A_{k+1}. When passing from A_k to A_{k+1} the number of positive
eigenvalues or the number of negative eigenvalues is increased by one. If
c(P_i^2) > 0, ∀i, then all the eigenvalues of the matrices A_k are positive, ∀k, and
the functional c is positive definite, which is a previous result.

Remark. If c(P_i^2) > 0 for i = 0, ..., k then the preceding properties are true
up to the index k.
Let q and a be vectors with components q_0, ..., q_k and a_0, ..., a_k
respectively. Then

c(q^2) = (a, H_{k+1}a) = (a, L_{k+1}A_{k+1}L_{k+1}^T a).

Let the eigenvalues of A_{k+1} be indexed so that λ_k ≤ λ_{k−1} ≤ ⋯ ≤ λ_0. The
Rayleigh-Ritz principle gives

λ_k ≤ (q, A_{k+1}q)/(q, q) ≤ λ_0,   ∀q ∈ ℝ^{k+1}, q ≠ 0.

Since this inequality is valid for all vectors q we can take q = L_{k+1}^T a and thus
we get

λ_k ≤ (a, H_{k+1}a)/(L_{k+1}^T a, L_{k+1}^T a) ≤ λ_0.

We also have

λ_k = min_{q≠0} (q, A_{k+1}q)/(q, q) = min_{a≠0} (a, H_{k+1}a)/(L_{k+1}^T a, L_{k+1}^T a).

2.7.
The aim of this section is to show the close connection which exists between
orthogonal polynomials and some projection methods used in the theory of
linear operators. Some of these projection methods are related to a generalization of Pade approximants for series whose coefficients belong to a
topological vector space. This connection will be studied in section 4.1.2.
2.7.1. The moment method
Let E be a Hilbert space with inner product denoted by (.,.) and let
zo, Zb ... ,Zk be elements of E which are assumed to be linearly independent. Let
Ek
Ak
Zl = Akz O,
Z2
Zkl
= A k z 1 = A~zo,
= A k z k  2 = A~l Zo,
Hkzk = Akz k  1,
Hkzk E Ek
76
Chapter 2
Let us use the fact that Zk  Hkz k is orthogonal to Ek> that is to say
orthogonal to zo, ... ,Zkl' We get
(Zk  HkZk> zn) = 0,
for
= 0, ... , kl
or
(Zo, zo)ao + (zo, zl)al + ... +(zo, zkl)akl + (zo, Zk) = 0,
Since zo, ... , Zkl are linearly independent, the determinant of this system is
different from zero and ao, . .. ,akl are uniquely determined.
Let
Pk(t)=aO+a1t+ ... +ak_ltkl+tk.
Then
Pk(Ak)zo= aozo+ +aklzkl + Hkz k = 0.
We also have
AkPk(Ak)zo = Pk(Ak)AkzO= PdAk)zl
= 0,
= 0,
'flu
E k. In particular if U is an
Pk(Ak)u =Pk(A)U =0
=AkZ +b
P(Ak)z + Q(Ak)b
Ak)z.
The degree of Q is one less than the degree of P and P(I) = 1. The
coefficients of Q are determined from those of P by the preceding relation.
Let us choose P as P(t)=Pk (t)/Pk (1) where Pk is the characteristic
polynomial of A k. Since 1  Ak is invertible then Pdl) f: and we get
77
since
Pk(A k ) = P(Ak ) = O.
= aJ I
a i and
i=O
k
ak
= 1 and if
0, ... , k 1.
If the equation we want to solve in Ek is written as
= I
a j for i
j=i+l
AkZ=b
P(Ak)z + Q(Ak)b
This relation shows that the degree of Q is one less than the degree of P and
that P(O) = 1. Let us choose P(t) = P k (t)/Pk (0) where P k is the characteristic
polynomial of A k. Then PdO) # 0 since Ak is invertible and P(O) = 1; then
we have
Z =
P(Ak)z + Q(Ak)b
= Q(Ak)b.
Since v E Eb Hkv
= v and we have
78
Chapter 2
Thus
Ak=HkAHk
for
or
kl
for
i=O
with
Hkx =aozo+ +aklzkl
and we get
Hkx
Ua
U(UTU)lUTX
= U(UTU)lUT.
n=O, 1, ...
since
79
<A~k+l)<A~k)<
...
<A~il,
for
i = 1, 2, ....
i = 1,2, ...
k>oo
k = i, i + 1, ....
finally
A =inf A(k)
,
'
k = i, i + 1, ....
i=O
i=O
The moment method corresponds to the choice cf>i = Zi. If Zi = A izo then
better results are obtained since the Zi are connected with the problem to be
solved. Convergence theorems for the moment method are particular cases
of convergence theorems for Galerkin method.
80
Chapter 2
Yo= y,
= AZk 1 = A kZ,
Yk
= A *Ykl = (A *)ky,
Obviously we have
Vi
and j
with
i +j
= k.
k=O, 1, ....
We set
and
Yk =Pk(A*)y
Zk =Pk(A)z.
Thus
c(PD = (Yb Zk),
c(xP~) = (Yb Az k ).
We get
Pk+l(A)= (A + Bk+1)PdA)Ck+1Pkl(A)
= (A * + Bk+1)Yk 
Ck+1 Ykl
with
Z1 =OEE,
Zo = Z E E,
51I =OEE',
Yo = YE E',
k = 0,1, ...
81
which is exactly Lanczos biorthogonalization method. From the orthogonality relation of the family {Pd we obtain directly the result of Lanczos
k f n.
C=YZ.
Let
zo, Z1> . ..
and
Zk
= PdA)z =
p~k)Zb
i=O
k
Yk = Pk(A *)y =
p~k)Yk
i=O
Y=LY.
Thus
YZ = L YZL T = LCL T
H.
n
00
det C =
(Yb Zk).
k=O
AZ=ZjT,
We also have
YA =LYA =LTy=jLY
{zd. We
82
Chapter 2
YA=iY,
H E has finite dimension k then Yk = Zk = 0 since there cannot be more
than k linearly independent elements. P k is the characteristic polynomial of
A. All the preceding results are still valid with truncated matrices of
dimension k. Thus P k is also the characteristic polynomial of the matrix Ak
formed by the first k rows and columns of A. We also see that A and j (Ak
and jk) are similar. If A is seHadjoint then the sequence {Ck} is positive.
H Y = Z then Yk = Zk, Vk and the polynomial P k is given by
(zn, zo)ao + (zn, zl)al + ... + (zm zkl)akl + (zn, Zk) = 0
for n = 0, ... , k 1. It follows from the results of the preceding section
that P k is the characteristic polynomial of the operator Ak = HkAHk obtained by the moment method. Now let
Bi = qi + eil>
Ci
= 1, 2, ... ,
eo = 0
= qi1E;1
with
e1
0
Lk =
~.
C": ":}
R k
0
1
R(O) =
(0)
qk
e<0)
kl
0
1
where all the matrices are infinite. Then the preceding results apply and we
get
1(0)
R(O)TL (O)T
83
subdiagonal. We set
_ B~ll = q~ll + e~~\,
C~1) = q~=\ e~~l'
= q~l) + e~~b
e~O)q~<:\ = ep)qp).
]<1)
R(1)TL (1)T
where R(1) and L (1) are obtained by putting the index 1 instead of 0 in the
definition of R (0) and L (0). Then we can set ]<2) = L (l)T R (1)T and so on. This is
Rutishauser's LR algorithm to compute the eigenvalues of a matrix
[107,108,110]. We get
qkn ) + ekn ) = qkn +1) + ekn_"11),
eknl qk"21 = ekn +1)qkn +1),
with
We have
qiO) = BiO) = c(xP6)/c(P6) = Cl/CO,
qi1) =  Bi1) = qiO) + eiO)
but
eiO) = C~O)/qiO) = c(Pi)/c(xP6) = (cOC2  ci)/(COC1)
and then
(1)
ql
2
_CO::...C",2__C~1
C2
Co
COC1
Cl
1
=+
Chapter 2
84
IAll < IA21 < ... < 1",1 < ... then
lim q~n) = Ak, for k = 1, 2, ... ,
of A satisfy
Let us now use Lanczos method to solve the system Bx = b in IRn where B is
a square symmetric positive definite matrix. We have seen that, if we choose
y = z, then the polynomials Pk produced by Lanczos method are the
characteristic polynomials of the matrices Bk = HkBHk obtained by the
moment method.
Let Xk be the solution of BkXk = b. In section 2.7.1 we saw that
Xk = Ok (Bk)b,
Ok (B k ) = B;l,
k=1,2, . ..
where Ok is defined by
1 Pdt)/PdO) = tOdt).
Moreover, since
m = Hk
Ok (B k ) = Ok (HkBHk ) = HkOk(B)Hk
Then
Zo
= ab
Zo
= b we get
zo= b
+ {3k+l)Zk 
Zk+l = (B
{3k+l
85
'Yk+lZkl
P1(t) = 0,
= 0, 1, ...
P o(t)=1
k=O, 1, ...
P k is the characteristic polynomial of the matrix Bk obtained by the moment
method from b, Bb, . .. ,Bkb. If we replace Pk(t) by P k (0)[1 tQk(t)] we get
Let us now replace t by the matrix B. We get a matrix. Let us mUltiply the
vector b by this matrix and let rk = BXk  b; then
Pk+1 (O)rk+l = (B + {3k+l)Pk (O)rk  'Yk+lPkl (O)rkl,
Pk+1(O)Xk+l
Xk+l = Xk
Xk)].
We set
Uk = rk + ILkl 'Yk+l (Xkl  Xk).
Then we obtain
Let Ak
= 1L~l'Yk+l.
We finally get
Uk = rk + Akukl>
Xk+l = Xk
k=O, 1, ... ,
+ ILkUk
We now have to find simple expressions for the coefficients Ak and ILk.
Let us first establish a property of the sequence {rk}. From Lanczos method
86
Chapter 2
we know that Zk=Pk(B)zo=PdB)b; thus 'k=Zk/Pk(O) and the biorthogonality relation (Zb z;) = 0 if k t i gives rise to
(rk, r;) =
0, if
k t i.
Thus
('k+1> Uk)
= AkAk 1 AO('k+l, U 1) = O.
= rk + ILkBuk
and
(Ub rk+l)=O=(Ub rk)+lLk(UbBuk).
Ak
,o=b,
Uo='o
Uk = rk + AkUkl,
ILk = CUb rk)/(Uk, BUk),
+ ILkUk,
'k + ILkBuk
Xk+1 = Xk
'k+l =
= (U;, 'k+l 
'b we get
87
fLk1
we obtain
CkI
Ck
C2 kl
1
_t k 
with
Thus we obtain
Xk
CkI
Ck
C2 k1
b
BkIb
k=1,2, ...
=
Co
CI
Ck
CkI
Ck
C2kl
= Db,
V2 = BVI = DBb,
Let
Sk
= Span {Vb' .. , vd
and let
Wk
on
Chapter 2
(Vi' Vi)
(V, V,)
Thus
b
Wk
= D
Ck
C2k1
BkIb
Co
Ckl

CI
...
Ck
Ck
...
C2k1
Let us now give an expression for the error. From a classical result [83]
(Vb
VI)
(Vb VI)
(V, VI)
...
(Vb
Vk)
VI
(Vb vd
(V, vd
Vk
= DxDxk
VW k =
(VI, Vt)
(VbVk)
(Vb
(Vb
Vt)
Vk)
89
and thus
Ck
C2kl
Bk1b
Co
Ckl
XXk=
Cl
Ck
C;k
C2 kl
We have
(v  Wk, V  Wk) = (D(x  xd, D(x  Xk
= (x  Xk, B(x  Xk = IIx  xkll~
IIx xklFa = (v  Wk, v) = (D(x  Xk), Dx) = (x  Xk, b).
Thus we get
Cl
Ck
(b, b)
C2kl
(Bk1b, b)
/ H(1)
(x, b)
Co
where
Cl
H k1) =
Ck
Ck
C2kl
(Bk1b, b)
(b, b)
(Bb, b)
(Bk1b,b)
(Bkb,b)
'"
~~k~, ~).
/Hkl).
(B 2k  1b,b)
Thus we get
Bx = b = aoBb+ ... +~_lBnb,
90
Chapter 2
The columns of the determinant in the numerator of Ilx  x,,11~ are thus
linearly dependent and it follows that x" = x. If b, Bb, ... ,Bkb are linearly
dependent but if b, Bb, . .. ,B k 1 b are linearly independent for some k then
Xk = x. It can also be proved that (u Vb U V;) = 0 Vk1= i [79].
Let us now consider the case of an arbitrary matrix B. Lanczos method
can still be applied to construct the polynomials Pk Now Pn will be the
characteristic polynomial of B and we shall obtain
x = Qn(B)b
if
zo=ab.
Xk
+ ILkUk
We have to find out the new expressions for Ak and ILk because in the
preceding relations we have extensively used the fact that B was symmetric
which is no longer assumed. We set
Xk = Qk(BT)b,
rk =BTXk b.
We get
rk = (BTQk(BT) I)b = Pk(BT)b/Pk(0).
Thus
rk = Yk/PdO).
rJ = 0,
we get
Vi 1= k.
and thus
(rk+b Uk) = AkAk 1 ... AO(rk+l' Ul) = O.
But
rk+l
= rk + ILkBTUk,
Uk = rk
and
91
which gives
ILk
Moreover
A = _ P~l(O) (Yb Zk)
k
P~(O) (Ykl, Zkl)
(rk> rk)
(rkl> rkl)
ro=ro=b,
uo=uo=b
Uk = rk + AkUkl,
ILk = CUb rk)/(Ub Bu k ),
+ ILkUb
rk+1 = rk + ILkBub
rk+l = rk + ILkBTUk.
Xk+l = Xk
Vi f k
Vi =!= k
Vi =!= k.
Thus
ILk
x..
2.8.
k = 0,1, ... ,
with
Po(X) = 1 and
P_1(x)=0.
92
Chapter 2
relation we get
X1(Pk+1(x) + qk+1Pk (x))
= Pk(x)ekx1(pk(X) + qkPkl(X)),
k =0,1, ....
We set
Pk(x) = X1(Pk+1(X) + qk+lPk(X))
Pk+1(x) = xPdX)qk+lPk(X),
(2.8)
k = 1, 2, ... ,
(2.9)
with
Po(x) = X1(P1(X)+q1PO(x)) = X1(P1(X) B1PO(x)) = Po(x) = 1.
+ qk+l C(PiPk ) = O.
That is to say that the polynomials {Pd are orthogonal with respect to the
sequence Cb C2'
From Theorem 2.8 we see that the polynomials Pk are identical to
Kk(x, 0), apart from a mUltiplying factor. The associated polynomials Ok are
defined by
Ok(t) = c(Pk(X) Pk(t)) = c(x Pk(x) Pk(t)).
xt
xt
But
Pk(x)  Pdt) = (Pk+l(X)+qk+lPk(X))Xl_(Pk+l(t) +qk+1Pk(t))t1
= (tPk+1(X)  XPk+1(t) + qk+l(tPdx)  xPk(t)))X1t 1.
We also have
tPk(x) xPk(t) = tPk(x) tPdt) + tPk(t) xPdt)
= t(Pk(x) 
Pk(t))  (x  t)Pk(t).
Thus
x(Pk (x) 
Pk (t)) = t1[t(Pk+1(x) 
Pk + 1 (t))  (x  t)Pk+1(t)
93
Co
Co
(2.10)
We know that the polynomials {Pk} and {Ok} satisfy the same recurrence
relation and we have
Qk+l(t)cot 1Pk+1(t) = (t + B k+1)Qk(t) C k+1Qkl(t)
 cot\t + B k+1)Pk(t) + cot1Ck+lPk_l(t)
= tOk(t)qk+l Qk(t) ekQk(t)qkekOkl(t)
 COPk(t) + qk+l cot 1Pk(t) + ekcOt 1Pk(t)
+ COqkekt 1P k  1(t).
k=1,2, ...
(2.11)
with
Qo(t) = Oo(t) = O.
Thus
Pk+l(X) = (x qk+l ek+l)Pdx)ekqk+1Pkl(X)
= ekilk
Let us check that th~ polynomials {Ok} satisfy the same recurrence relation
as the polynomials {Pk }. We have
Ok+l(t) = tOk+1(t) COP k+1 {t) ek+l Ok(t);
94
Chapter 2
The family {Pd and {Pd are called adjacent (or contiguous) systems of
orthogonal polynomials [103].
It is obvious that the same process can be indefinitely repeated. Thus
we shall obtain polynomials orthogonal to the sequence C2, C3, ... and then
polynomials orthogonal to the sequence C3 , C4 , and so on.
For simplicity we set eLO) = ek> qLO) = qk> eLl) = ek> qLl) = ilk> pLO) = Pk> PLl) =
Pk> OLO) = Ok, oLl) = Ok and for the functionals cia) == C and C(l) == c.
In general we shall denote by {pLn)} the family of orthogonal polynomials with respect to the sequence Cn> Cn+b ... that is to say with respect to
the functional c(n) defined by
i = 0, 1, ....
The associated polynomials will be denoted by OLn) and the coefficients
occurring in the recurrence relation by eLn) and qLn).
These sequences are related by
c(n)(x i )
= Cn + i ,
(2.12)
qLnl l
(2.13)
qbl)
e\O)
ebl)
q(O)
q\l)
qb2)
eb2)
e~l)
qil )
q~2)
q~)
e~)
2 .
e\2)
q(2)
q~3)
ei3)
2 .
q(3)
2 .
All the entries of this array can be calculated if the first diagonal is known, i.e., q_0^{(0)} = 0, e_0^{(0)} = 0, q_1^{(0)}, e_1^{(0)}, q_2^{(0)}, .... They can also be computed from the first three columns, i.e., q_0^{(n)} = 0, e_0^{(n)} = 0 and q_1^{(n)} = c_{n+1}/c_n. This last result comes out by shifting the indices since we have already proved that q_1^{(0)} = c_1/c_0.
In fact the column {q_0^{(n)}} is not useful. Starting from {e_0^{(n)}} and {q_1^{(n)}}, relation (2.12) gives {e_1^{(n)}} and then (2.13) gives {q_2^{(n)}}, and so on.
The qd-algorithm always relates entries located at the four vertices of a rhombus; the qd-algorithm is therefore said to be a rhombus algorithm.
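The rhombus rules can be sketched in a few lines of code. The following is a minimal illustration, not the book's implementation: it assumes the forms of (2.12) and (2.13) given above, with initializations e_0^{(n)} = 0 and q_1^{(n)} = c_{n+1}/c_n, and uses exact rational arithmetic to avoid the instability mentioned below.

```python
from fractions import Fraction
import math

def qd_table(c, K):
    """Build the q- and e-columns of the qd-algorithm from the moments c.

    q[k][n] plays the role of q_k^{(n)}, e[k][n] of e_k^{(n)}.
    """
    q = {1: [Fraction(c[n + 1], c[n]) for n in range(len(c) - 1)]}
    e = {0: [Fraction(0)] * len(c)}
    k = 1
    while True:
        prev_q, prev_e = q[k], e[k - 1]
        if len(prev_q) < 2:
            break
        # rhombus rule (2.12): e_k^{(n)} = q_k^{(n+1)} - q_k^{(n)} + e_{k-1}^{(n+1)}
        e[k] = [prev_q[n + 1] - prev_q[n] + prev_e[n + 1]
                for n in range(len(prev_q) - 1)]
        if len(e[k]) < 2 or k == K:
            break
        # rhombus rule (2.13): q_{k+1}^{(n)} = q_k^{(n+1)} e_k^{(n+1)} / e_k^{(n)}
        q[k + 1] = [prev_q[n + 1] * e[k][n + 1] / e[k][n]
                    for n in range(len(e[k]) - 1)]
        k += 1
    return q, e

# moments c_n = n!, for which the table is known in closed form:
# q_k^{(n)} = n + k and e_k^{(n)} = k
c = [math.factorial(n) for n in range(8)]
q, e = qd_table(c, 3)
```

For these moments one checks q_2^{(0)} = 2, e_2^{(0)} = 2, q_3^{(0)} = 3, in agreement with the closed forms above.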
Gathering all the preceding results we obtain (with the convention that q_0^{(n)} e_0^{(n)} = -c_n, for all n):

P_{-1}^{(n)}(x) = 0,   P_0^{(n)}(x) = 1,
P_{k+1}^{(n)}(x) = (x - q_{k+1}^{(n)} - e_k^{(n)}) P_k^{(n)}(x) - q_k^{(n)} e_k^{(n)} P_{k-1}^{(n)}(x),   k = 0, 1, ...,

Q_{-1}^{(n)}(x) = 1,   Q_0^{(n)}(x) = 0,
Q_{k+1}^{(n)}(x) = (x - q_{k+1}^{(n)} - e_k^{(n)}) Q_k^{(n)}(x) - q_k^{(n)} e_k^{(n)} Q_{k-1}^{(n)}(x),   k = 0, 1, ...,

P_k^{(n+1)}(x) = P_k^{(n)}(x) - e_k^{(n)} P_{k-1}^{(n+1)}(x),   k = 1, 2, ....
Moreover

H_k^{(n)} P_k^{(n)}(x) = | c_n       c_{n+1}   ...  c_{n+k}    |
                         | ...                                 |
                         | c_{n+k-1} c_{n+k}   ...  c_{n+2k-1} |
                         | 1         x         ...  x^k        |

and

H_k^{(n)} Q_k^{(n)}(x) = | c_n       ...  c_{n+k}    |
                         | ...                       |
                         | c_{n+k-1} ...  c_{n+2k-1} |
                         | 0   c_n   c_n x + c_{n+1}   ...   c_n x^{k-1} + c_{n+1} x^{k-2} + ... + c_{n+k-2} x + c_{n+k-1} |.

For x = 0 we get

e_k^{(n)} P_{k-1}^{(n+1)}(0) = P_k^{(n)}(0) - P_k^{(n+1)}(0),
P_k^{(n)}(0) = -q_k^{(n)} P_{k-1}^{(n)}(0).
96
Chapter 2
But
Cn+kl
Cn +2kl
Thus
H(n+l) H(n) H(n+2) _ [H(n+l)]2
k
k
k,
H(n+2)
H(n)H(n+l)
kl
k
k
(n)_~
ek
(2.14)
H(n+l)H(n)
k
kl
 H(n)H(n+l)
k
kl
(n) _
qk
(2.15)
= h~n)jM"21
with
Thus
Using (2.15) we obtain
(n) _
ek 
H (n+l)H(n)
kl
k+l
(2.16)
H~n)H~n+l)
n, k = 0,1, .. ,.
(2.17)
The determinants H~n) are called Hankel determinants. They obey the
recurrence relation (2.17) which must be initialized with Hbn ) = 1 and
Hin ) = Cn for n = 0,1, ... since hbn ) = Cn = Hin)jH&n).
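The recurrence (2.17) can be used to fill the whole table of Hankel determinants from the two initial rows. A minimal sketch (assuming exact rational moments so no cancellation occurs):

```python
from fractions import Fraction
import math

def hankel_table(c, K):
    """H[k][n] = H_k^{(n)}, computed column by column through (2.17):
    H_{k+1}^{(n)} = (H_k^{(n)} H_k^{(n+2)} - [H_k^{(n+1)}]^2) / H_{k-1}^{(n+2)},
    initialized with H_0^{(n)} = 1 and H_1^{(n)} = c_n."""
    H = {0: [Fraction(1)] * len(c), 1: [Fraction(x) for x in c]}
    for k in range(1, K):
        H[k + 1] = [(H[k][n] * H[k][n + 2] - H[k][n + 1] ** 2) / H[k - 1][n + 2]
                    for n in range(len(H[k]) - 2)]
    return H

c = [math.factorial(n) for n in range(8)]   # c_n = n!
H = hankel_table(c, 3)
```

For c_n = n! one gets, e.g., H_2^{(0)} = c_0 c_2 - c_1^2 = 1 and H_3^{(0)} = 4, which agrees with a direct cofactor expansion of the 3x3 determinant.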
If the convention that c_i = 0 for i < 0 is made, then Hankel determinants with a negative value of n can be defined. Obviously

H_k^{(n)} = 0,   for all n <= -k,

while H_k^{(-k+1)}, whose entries vanish above the antidiagonal of c_0's, reduces to

H_k^{(-k+1)} = (-1)^{k(k-1)/2} c_0^k,

and similarly

H_k^{(-k+2)} = (-1)^{k(k-1)/2} | c_1  c_2  ...  c_k     |
                               | c_0  c_1  ...  c_{k-1} |
                               | ...                    |
                               | 0    ...   c_0  c_1    |.

The value of this last determinant can easily be obtained, as will be seen in the next section. Thus the recurrence relation of Hankel determinants can be used for k = 0, 1, ... and n = -k + 1, -k + 2, ... with the preceding initial conditions.
From that we can also define the quantities e_k^{(n)} and q_k^{(n)} for k = 0, 1, ... and n = -k + 1, -k + 2, ..., and we thus obtain an extended qd array, bordered on the left by the vertical boundary conditions (the columns e_0^{(n)} = 0 and q_1^{(n)} = c_{n+1}/c_n) and along its upper diagonal by the horizontal boundary conditions (the entries q_k^{(-k+1)}, e_k^{(-k+1)}, k = 1, 2, ...).
Starting from the vertical boundary conditions this array can be computed by means of relations (2.12) and (2.13). Starting from the horizontal boundary conditions and using (2.12) and (2.13) also allows one to compute the whole array. This last method is called the progressive form of the qd-algorithm and is much more stable from the numerical point of view [71].
Obviously all the preceding polynomials can now be defined for k = 0, 1, ... and n = -k + 1, -k + 2, .... In the sequel, when we write "for all k and n", it will have the preceding meaning.
We have, of course, assumed that H_k^{(n)} != 0 for all n and k. In that case we say that the sequence {c_n} is completely definite. We shall always assume from now on that this condition is satisfied.
Some results about the zeros of adjacent systems of orthogonal polynomials will now be proved.

Theorem 2.31. For all k and n, P_k^{(n)} has no common zero with P_k^{(n+1)}, P_{k-1}^{(n+1)}, P_k^{(n-1)} and P_{k+1}^{(n-1)}. Moreover, P_k^{(n)}(0) != 0.

Proof. If P_k^{(n)} and P_k^{(n+1)} had a common zero then, by the relation P_k^{(n+1)}(x) = P_k^{(n)}(x) - e_k^{(n)} P_{k-1}^{(n+1)}(x), it would also be a zero of P_{k-1}^{(n+1)}, which is impossible by Theorem 2.14. The two other proofs are similar. If P_k^{(n)}(0) = 0 then

P_{k+1}^{(n)}(0) = 0 . P_k^{(n+1)}(0) - q_{k+1}^{(n)} P_k^{(n)}(0) = 0,

which is impossible by the preceding result. From the determinantal formula for P_k^{(n)} we see that

H_k^{(n)} P_k^{(n)}(0) = (-1)^k H_k^{(n+1)},

which shows that H_k^{(n+1)} = 0, contrary to the assumption that {c_k} is completely definite.
Remark. For the classical Legendre polynomials it is well known that P_{2k+1}(0) = 0; but c_{2n+1} = INT_{-1}^{1} x^{2n+1} dx = 0 and thus {c_k} is not completely definite since H_1^{(2n+1)} = 0 for all n.
Theorem 2.32. If the sequence c_{n+1}, c_{n+2}, ... is positive definite then, for all k, the zeros of P_{k+1}^{(n)} are real and distinct; two consecutive zeros of P_{k+1}^{(n+1)} are separated by a zero of P_k^{(n+1)} and by a zero of P_{k+1}^{(n)}; two consecutive zeros of P_k^{(n+1)} are separated by a zero of P_{k+1}^{(n)}, by a zero of P_{k+1}^{(n+1)} and by a zero of P_k^{(n)}.

Proof. From Theorem 2.15 we know that the zeros of P_{k+1}^{(n+1)} are real and distinct and that two consecutive zeros of P_{k+1}^{(n+1)} are separated by one zero of P_k^{(n+1)} (and conversely).
Let z_1 and z_2 be two consecutive zeros of P_{k+1}^{(n+1)}; then

P_{k+1}^{(n)}(z_1) = e_{k+1}^{(n)} P_k^{(n+1)}(z_1),   P_{k+1}^{(n)}(z_2) = e_{k+1}^{(n)} P_k^{(n+1)}(z_2).

Since P_k^{(n+1)} has one zero between z_1 and z_2, P_k^{(n+1)}(z_1) and P_k^{(n+1)}(z_2) have opposite signs. So have P_{k+1}^{(n)}(z_1) and P_{k+1}^{(n)}(z_2), which shows that P_{k+1}^{(n)} has one zero between z_1 and z_2. Since P_{k+1}^{(n+1)} has k + 1 distinct real zeros, P_{k+1}^{(n)} has k distinct real zeros separating them; since P_{k+1}^{(n)} is a real polynomial, its last zero is also real.
Let now z_1 and z_2 be two consecutive zeros of P_k^{(n+1)}; then

P_{k+1}^{(n+1)}(z_1) = P_{k+1}^{(n)}(z_1),   P_{k+1}^{(n+1)}(z_2) = P_{k+1}^{(n)}(z_2).

Since P_{k+1}^{(n+1)} has one zero between z_1 and z_2, P_{k+1}^{(n+1)}(z_1) and P_{k+1}^{(n+1)}(z_2) have opposite signs. So have P_{k+1}^{(n)}(z_1) and P_{k+1}^{(n)}(z_2), which shows that P_{k+1}^{(n)} has one zero between z_1 and z_2. We also have

P_{k+1}^{(n)}(z_1) = -q_{k+1}^{(n)} P_k^{(n)}(z_1),   P_{k+1}^{(n)}(z_2) = -q_{k+1}^{(n)} P_k^{(n)}(z_2),

which shows in the same way that P_k^{(n)} has one zero between z_1 and z_2.
Using the determinantal expressions of e_k^{(n)} and q_k^{(n)}, the preceding relations become, with H_k^{(n)}(x) = H_k^{(n)} P_k^{(n)}(x),

H_k^{(n)} H_k^{(n+1)}(x) = H_k^{(n+1)} H_k^{(n)}(x) - H_{k+1}^{(n)} H_{k-1}^{(n+1)}(x),

with H_{-1}^{(n)}(x) = 0 and H_0^{(n)}(x) = 1.
Taking x = 0 in this relation leads to the recurrence relation (2.17) for Hankel determinants.
Taking x = 1 we have

e_k^{(n)} = (P_k^{(n)}(1) - P_k^{(n+1)}(1)) / P_{k-1}^{(n+1)}(1).
In the determinantal expression of P_k^{(n)}(1) each column (except the first one) can be replaced by its difference from the preceding column, and then the determinant can be expanded with respect to its last row.
Defining the operator DELTA as

DELTA^0 c_n = c_n,   DELTA^{p+1} c_n = DELTA^p c_{n+1} - DELTA^p c_n,   p, n = 0, 1, ...,

we obtain

H_k(DELTA c_{n+1}) H_k(c_n) - H_k(DELTA c_n) H_k(c_{n+1}) = H_{k-1}(DELTA c_{n+1}) H_{k+1}(c_n).   (2.18)

We also have

q_k^{(n)} = (P_{k-1}^{(n+1)}(1) - P_k^{(n)}(1)) / P_{k-1}^{(n)}(1)

and we get

H_{k-1}(DELTA c_{n+1}) H_k(c_n) + H_k(DELTA c_n) H_{k-1}(c_{n+1}) = H_{k-1}(DELTA c_n) H_k(c_{n+1}).   (2.19)
As in Chapter 1, we set

P~_k^{(n)}(x) = x^k P_k^{(n)}(x^{-1}),   Q~_k^{(n)}(x) = x^{k-1} Q_k^{(n)}(x^{-1}).   (2.20)

From the preceding relations we obtain

P~_{k+1}^{(n)}(x) = (1 - (q_{k+1}^{(n)} + e_k^{(n)}) x) P~_k^{(n)}(x) - q_k^{(n)} e_k^{(n)} x^2 P~_{k-1}^{(n)}(x),   k = 0, 1, ...,   (2.21)

P~_k^{(n+1)}(x) = P~_k^{(n)}(x) - e_k^{(n)} x P~_{k-1}^{(n+1)}(x),   k = 0, 1, ...,   (2.22)

P~_k^{(n+1)}(x) = P~_{k+1}^{(n)}(x) + q_{k+1}^{(n)} x P~_k^{(n)}(x),   k = 1, 2, ....   (2.23)

The recurrence relation can also be transformed to get recurrence relations for the polynomials {P~_k^{(n)}} and {Q~_k^{(n)}} and also for the polynomials {H_k^{(n)}(x)} and {H~_k^{(n)}(x)}, where

H~_k^{(n)}(x) = x^k H_k^{(n)}(x^{-1}).
In particular

H_k^{(n)} H_k^{(n+1)}(x) = H_k^{(n+1)} H_k^{(n)}(x) - H_{k+1}^{(n)} H_{k-1}^{(n+1)}(x),   (2.24)

x H_{k+1}^{(n)} H_k^{(n+1)}(x) = H_k^{(n+1)} H_{k+1}^{(n)}(x) + H_{k+1}^{(n+1)} H_k^{(n)}(x),   (2.25)

together with a three-term relation (2.26) involving [H_{k+1}^{(n)}]^2 x^2 H_{k-1}^{(n)}(x). Replacing q_{k+1}^{(n)} and e_k^{(n+1)} by their determinantal expressions and then using (2.17) we get

H_k^{(n+2)} H_k^{(n)}(x) = H_k^{(n+1)} H_k^{(n+1)}(x) + x H_{k+1}^{(n)} H_{k-1}^{(n+2)}(x).   (2.27)

Taking the expression of H_{k-1}^{(n+1)}(x) from (2.27) (by changing n into n - 1) and using (2.24) we obtain (2.28). After changing n into n - 1 in (2.26) we get H_k^{(n-1)}(x) and, using (2.25), relations (2.29)-(2.31) follow in the same way. From (2.25) the expression in square brackets occurring there is equal to H_{k+1}^{(n-1)} H_k^{(n)}(x); then, dividing by H_k^{(n+1)} H_{k+1}^{(n-1)}, we get (2.32). Moreover

e_k^{(n)} P_{k-1}^{(n+1)'}(0) = P_k^{(n)'}(0) - P_k^{(n+1)'}(0).   (2.33)
where H-bar_k^{(n)} = H_k(DELTA c_n) for the sake of simplicity. Taking x = 1 in these eight relations we get

H_k^{(n)} H-bar_k^{(n+1)} = H_k^{(n+1)} H-bar_k^{(n)} + H_{k+1}^{(n)} H-bar_{k-1}^{(n+1)},   (2.34)

together with seven further identities (2.35)-(2.41) of the same type, each relating products of the determinants H_k^{(n)} and H-bar_k^{(n)} with shifted indices.
These relations lead to the quantities

r_k^{(n)} = H-bar_k^{(n)} / H_k^{(n-1)},   s_k^{(n)} = H-bar_k^{(n)} / H_k^{(n)},   (2.42)

which satisfy

r_k^{(n)} / s_k^{(n)} = 1 + ...,   (2.43)

the numbers r_k^{(n)} and s_k^{(n)} being connected with the values H_k^{(n)}(x)/H_k^{(n)} = P_k^{(n)}(x).
These quantities allow one to pass from one family of adjacent orthogonal polynomials to another:

P_k^{(n+1)}(x) = (s_k^{(n)} / s_k^{(n+1)}) [P_k^{(n)}(x) - (r_{k-1}^{(n+1)} / s_{k-1}^{(n+1)}) P_{k-1}^{(n+1)}(x)],   (2.46)

P_{k+1}^{(n)}(x) = (r_{k+1}^{(n+1)} / s_{k+1}^{(n)}) [x P_k^{(n+1)}(x) - (r_{k+1}^{(n+1)} / r_{k+1}^{(n)}) P_k^{(n)}(x)],   (2.47)

together with the companion relations (2.48)-(2.52) (with, e.g., E_k^{(n)} = 1 + D_k^{(n)} + F_k^{(n)}), and

P_{k+1}^{(n)}(x) = (I_k^{(n)} x + J_k^{(n)}) P_k^{(n)}(x) - K_k^{(n)} P_{k-1}^{(n)}(x),   (2.53)

with coefficients given by (2.54)-(2.59). This is the rs-algorithm.
It must also be noticed that a connection exists between the rs- and qd-algorithms since

q_{k+1}^{(n)} = r_{k+1}^{(n+1)} s_k^{(n+1)} / (r_{k+1}^{(n)} s_k^{(n)}),   e_{k+1}^{(n)} = r_{k+2}^{(n)} s_{k+1}^{(n)} / (r_{k+1}^{(n+1)} s_{k+1}^{(n+1)}).
Let us now define the polynomials {U_k} by

U_{2k}(t) = P_k^{(n)}(t^2),   k = 0, 1, ...,
U_{2k+1}(t) = t P_k^{(n+1)}(t^2),   k = 0, 1, ...,

with U_0(x) = 1, and the linear functional u by u(x^{2i}) = c_{n+i}, u(x^{2i+1}) = 0, i = 0, 1, ....

Theorem 2.33. The polynomials {U_k} are orthogonal with respect to the functional u (that is to say with respect to the sequence c_n, 0, c_{n+1}, 0, ...).

Proof. U_k U_m is a polynomial which only contains odd powers of x if one of the indices is odd and the other even. Thus, by definition of u, u(U_k U_m) = 0. Moreover

u(U_{2k} U_{2m}) = u(P_k^{(n)}(x^2) P_m^{(n)}(x^2)) = c^{(n)}(P_k^{(n)}(x) P_m^{(n)}(x)) = 0   if k != m,

u(U_{2k+1} U_{2m+1}) = u(x^2 P_k^{(n+1)}(x^2) P_m^{(n+1)}(x^2)) = c^{(n)}(x P_k^{(n+1)}(x) P_m^{(n+1)}(x)) = c^{(n+1)}(P_k^{(n+1)}(x) P_m^{(n+1)}(x)) = 0   if k != m.

Let us now consider the polynomials {V_k} defined by

V_{2k}(t) = t Q_k^{(n)}(t^2),
V_{2k+1}(t) = Q_k^{(n+1)}(t^2) + c_n P_k^{(n+1)}(t^2).

Theorem 2.34.

V_k(t) = u((U_k(x) - U_k(t)) / (x - t)).
Proof. Write P_k^{(n)}(x) = a_0 + a_1 x + ... + a_k x^k, so that

(U_{2k}(x) - U_{2k}(t)) / (x - t) = a_1(x + t) + a_2(x^3 + x^2 t + x t^2 + t^3) + ... + a_k(x^{2k-1} + x^{2k-2} t + ... + t^{2k-1}).

Applying u, only the even powers of x contribute, and we get

V_{2k}(t) = t c^{(n)}(a_1 + a_2(x + t^2) + ... + a_k(x^{k-1} + x^{k-2} t^2 + ... + x t^{2k-4} + t^{2k-2})) = t Q_k^{(n)}(t^2).

Similarly, with P_k^{(n+1)}(x) = a_0 + a_1 x + ... + a_k x^k,

V_{2k+1}(t) = u(a_0 + a_1 t^2 + ... + a_k t^{2k}) + u(x^2 [a_1 + a_2(t^2 + x^2) + ... + a_k(t^{2k-2} + ... + x^{2k-2})])
= c_n P_k^{(n+1)}(t^2) + c^{(n)}(x [a_1 + a_2(t^2 + x) + ... + a_k(t^{2k-2} + ... + x^{k-1})]) = c_n P_k^{(n+1)}(t^2) + Q_k^{(n+1)}(t^2),

with V_0(x) = 0. We also have

Q_k^{(n+1)}(x^2) = x^2 Q_k^{(n)}(x^2) - c_n P_k^{(n)}(x^2) - e_k^{(n)} Q_{k-1}^{(n+1)}(x^2),   k = 1, 2, ....
2.9. Let f(x) = SUM_{i=0}^{inf} c_i x^i and let g be the reciprocal series of f, so that f(x) g(x) = 1.
If we write g(x) = SUM_{i=0}^{inf} d_i x^i, then c_0 d_0 = 1 and

c_0 d_i + c_1 d_{i-1} + ... + c_i d_0 = 0,   i = 1, 2, ...,

and thus we obtain

d_k = ((-1)^k / c_0^{k+1}) | c_1  c_0  0    ...  0   |
                           | c_2  c_1  c_0  ...  0   |
                           | ...                     |
                           | c_k  c_{k-1}  ...  c_1  |.

We thus obtain the values of the determinants H_k^{(-k+2)}, which are used to initialize the progressive form of the qd-algorithm:

e_1^{(-k+1)} = d_{k-1}/d_k,   q_1^{(-k+1)} = d_k/d_{k-1}.

Let d be the linear functional defined by

d(x^i) = d_i   for   i = 0, 1, ...,   d(x^i) = 0   for   i < 0.
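The coefficients of the reciprocal series can be computed directly from the convolution relations above, without forming the determinants; a short illustration (not the book's algorithm):

```python
from fractions import Fraction
import math

def reciprocal_series(c, K):
    """Coefficients d_0 .. d_K of g(x) = 1/f(x), where f(x) = sum c_i x^i,
    obtained from f(x) g(x) = 1, i.e. c_0 d_i + c_1 d_{i-1} + ... + c_i d_0 = 0."""
    d = [Fraction(1) / c[0]]
    for k in range(1, K + 1):
        d.append(-sum(c[j] * d[k - j] for j in range(1, k + 1)) / c[0])
    return d

# f(x) = exp(x): c_i = 1/i!, so g(x) = exp(-x) and d_i = (-1)^i / i!
c = [Fraction(1, math.factorial(i)) for i in range(6)]
d = reciprocal_series(c, 5)
```

For the exponential series the computed d_i indeed equal (-1)^i / i!, as expected from g(x) = exp(-x).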
Theorem 2.35. Let

R_k(x) = | c_2  c_3  ...  c_{k+1}  |
         | c_3  c_4  ...  c_{k+2}  |
         | ...                     |
         | c_k  c_{k+1}  ...  c_{2k-1} |
         | c_0 x + c_1   c_0 x^2 + c_1 x + c_2   ...   c_0 x^k + c_1 x^{k-1} + ... + c_k |.

Then d(x^p R_k(x)) = 0 for p = 0, ..., k - 1.
For p = 0 the elements of the last row of the determinant which is equal to d(R_k) are given by

d(c_0 x + c_1) = c_0 d_1 + c_1 d_0 = 0,
d(c_0 x^2 + c_1 x + c_2) = c_0 d_2 + c_1 d_1 + c_2 d_0 = 0,

and so on up to

d(c_0 x^k + c_1 x^{k-1} + ... + c_{k-1} x + c_k) = c_0 d_k + c_1 d_{k-1} + ... + c_{k-1} d_1 + c_k d_0 = 0.

For p > 0,

d(x^p (c_0 x + c_1)) = c_0 d_{p+1} + c_1 d_p = -SUM_{i=2}^{p+1} c_i d_{p+1-i} = -SUM_{i=0}^{p-1} c_{i+2} d_{p-1-i},

d(x^p (c_0 x^2 + c_1 x + c_2)) = -SUM_{i=3}^{p+2} c_i d_{p+2-i} = -SUM_{i=0}^{p-1} c_{i+3} d_{p-1-i},

and so on up to

d(x^p (c_0 x^k + ... + c_k)) = c_0 d_{p+k} + ... + c_k d_p = -SUM_{i=k+1}^{p+k} c_i d_{p+k-i} = -SUM_{i=0}^{p-1} c_{i+k+1} d_{p-1-i}.

Thus, if p <= k - 1, this last row is a linear combination of the preceding rows with the coefficients d_{p-1}, d_{p-2}, ..., d_0 and the determinant is equal to zero.
In the same way it can be proved that

d(x^p Q_k(x)) = 0   for   p = 1, ..., k - 2,

where Q_k is the polynomial associated with P_k. This relation is not true for p = 0.
Now let {S_k} be the polynomials associated with {R_k}. They are defined in the usual way by

S_k(t) = d((R_k(x) - R_k(t)) / (x - t)).

Theorem 2.36.

S_k(t) = | c_2  c_3  ...  c_{k+1}  |
         | c_3  c_4  ...  c_{k+2}  |
         | ...                     |
         | c_k  c_{k+1}  ...  c_{2k-1} |
         | 1    t    ...  t^{k-1}  |.

Proof. Let us apply the functional d to the determinant giving (R_k(x) - R_k(t))/(x - t). The elements of the last row of S_k(t) will be

d(c_0) = c_0 d_0 = 1,
d(c_0 (x + t) + c_1) = c_0 d_1 + c_0 d_0 t + c_1 d_0 = t,

and so on up to

d(c_0 (x^{k-1} + x^{k-2} t + ... + t^{k-1}) + ...) = t^{k-1}.
From the formulas given in the last section and from Theorems 2.35 and 2.36 we get

Corollary 2.7.

S_k(x) = P_{k-1}^{(2)}(x),
R_k(x) = (c_0 x + c_1) P_{k-1}^{(2)}(x) + Q_{k-1}^{(2)}(x).

This means that the polynomials {S_k} are orthogonal with respect to the functional c^{(2)} (that is to say with respect to the sequence c_2, c_3, ...). Reciprocally, the polynomials {Q_k} are orthogonal with respect to the sequence d_2, d_3, ..., or with respect to the functional d^{(2)} defined by

d^{(2)}(x^i) = d_{2+i},   i = 0, 1, ....
with

P_{-1}^{(2)}(x) = 0   and   P_0^{(2)}(x) = arbitrary constant != 0.

The associated polynomials {Q_k^{(2)}} satisfy the same relation, and from Corollary 2.7 we immediately get

Theorem 2.37. The polynomials {R_k} and {S_k} satisfy, for k = 0, 1, ..., the recurrence relation of the polynomials {P_k^{(2)}}, with

R_0(x) = 1,   R_1(x) = c_0 x + c_1,
S_0(x) = 0,   S_1(x) = 1.
All the properties satisfied by the polynomials {P_k^{(2)}} and {Q_k^{(2)}} produce properties for the polynomials {R_k} and {S_k}. The proofs are obvious and are omitted. We get, writing h_k = c^{(2)}((P_k^{(2)})^2):

Theorem 2.38. For k = 0, 1, ...,

R_{k+2}(x) R_{k+1}(t) - R_{k+1}(x) R_{k+2}(t) = A_{k+1} h_k (x - t) SUM_{i=0}^{k} h_i^{-1} R_{i+1}(x) R_{i+1}(t),

R_{k+2}(x) S_{k+1}(t) - R_{k+1}(x) S_{k+2}(t) = A_{k+1} h_k (x - t) SUM_{i=0}^{k} h_i^{-1} R_{i+1}(x) S_{i+1}(t),

S_{k+1}(x) R_{k+2}(x) - S_{k+2}(x) R_{k+1}(x) = A_{k+1} h_k,

where A_{k+1} is the ratio of the leading coefficients of R_{k+2} and R_{k+1}.
Let us now study the conditions for {d_k} to be positive definite. Let us write R_k(x) = a_k x^k + .... We get

d(R_k^2) = a_k d(x^k R_k),

with the sequence {e_k} defined by

SUM_i e_i c_{k-i} = 0,   k = 1, 2, ....

The sequence {e_k} is positive definite if H_k(e_0) H_{k+1}(e_0) > 0 for all k. Writing H_{k+1}(e_0) and making some linear combinations of the rows and of the columns, and using the preceding relations, we get

H_{k+1}(e_0) = d_0^{k+1} H_{k+1}(c_1)

and thus

H_k(e_0) H_{k+1}(e_0) = d_0^{2k+1} H_k(c_1) H_{k+1}(c_1).
The polynomials R_k^{(n+1)} and S_k^{(n+1)} have determinantal expressions analogous to those of P_k^{(n+1)} and Q_k^{(n+1)}, with the c_i replaced by the d_i:

R_k^{(n+1)}(x) = | d_{n+1}  d_{n+2}  ...  d_{n+k+1} |
                 | ...                              |
                 | d_{n+k}  d_{n+k+1}  ...  d_{n+2k} |
                 | 1        x        ...  x^k       |   (up to normalization).

Multiplying the last row of such a determinant by SUM_{i=0}^{n} c_i x^{n-i} (with the convention that c_i = 0 and SUM_{j=0}^{p} c_j x^{p-j} = 0 if i < 0 or p < 0), adding suitable combinations of the other rows, and effecting the same combinations in SUM_{i=0}^{n} d_i x^{n-i} R_k^{(n+1)}(x) + x^{n+1} S_k^{(n+1)}(x), we find that this determinant is equal to c_0^{-1} P_{n+k}^{(-n+1)}(x). Analogous relations are obtained by interchanging the roles of the sequences {c_n} and {d_n}. With the preceding convention on the sum, all these results can be gathered in [24]:
For k = 0, 1, ... and n = 0, 1, 2, ...,

c_0 R_{n+k}^{(-n+1)}(x) = P_k^{(n+1)}(x) SUM_{i=0}^{n} c_i x^{n-i} + x^{n+1} Q_k^{(n+1)}(x),

d_0 P_{n+k}^{(-n+1)}(x) = R_k^{(n+1)}(x) SUM_{i=0}^{n} d_i x^{n-i} + x^{n+1} S_k^{(n+1)}(x).

Remark. From these relations we see that Q_{n+k}^{(-n+1)} and S_{n+k}^{(-n+1)} are polynomials of degree k for n = 1, 2, ....
For n = 0 we get

c_0 R_k^{(1)}(x) = c_0 P_k^{(1)}(x) + x Q_k^{(1)}(x),
d_0 P_k^{(1)}(x) = d_0 R_k^{(1)}(x) + x S_k^{(1)}(x).

For n = 1 we get

P_k^{(2)}(x)(c_0 x + c_1) + x^2 Q_k^{(2)}(x) = c_0 R_{k+1}^{(0)}(x),
R_k^{(2)}(x)(d_0 x + d_1) + x^2 S_k^{(2)}(x) = d_0 P_{k+1}^{(0)}(x),

and also

S_k^{(0)}(x) = d_0 P_{k-1}^{(2)}(x),   Q_k^{(0)}(x) = c_0 R_{k-1}^{(2)}(x).
Remark. For negative values of n the relations of Theorems 2.41 and 2.42 can be written as

c_0 R_k^{(n+1)}(x) = Q_{n+k}^{(-n+1)}(x),   d_0 P_k^{(n+1)}(x) = S_{n+k}^{(-n+1)}(x)

for k = 0, 1, ... and n = 1, 2, ....
Let us now study the orthogonality properties of the associated polynomials. From Corollary 2.7 we saw that the polynomials {S_k} are orthogonal with respect to the sequence c_2, c_3, .... Similarly the polynomials {Q_k} are orthogonal with respect to the sequence d_2, d_3, .... We shall now generalize these results.

Theorem 2.43. For fixed n >= 1 the polynomials {Q_{n+k}^{(-n+1)}} are orthogonal with respect to the sequence d_{n+1}, d_{n+2}, ..., and the polynomials {S_{n+k}^{(-n+1)}} are orthogonal with respect to the sequence c_{n+1}, c_{n+2}, .... For fixed n >= 0 the polynomials {Q_k^{(n)}} are orthogonal with respect to the sequence d-bar_2^{(n)}, d-bar_3^{(n)}, ..., and the polynomials {S_k^{(n)}} are orthogonal with respect to the sequence c-bar_2^{(n)}, c-bar_3^{(n)}, ..., with

d-bar_k^{(n)} = (-1)^{k(k+1)/2} H_k(c_{n-k+2}) / c_0^{k+1},
c-bar_k^{(n)} = (-1)^{k(k+1)/2} H_k(d_{n-k+2}) / d_0^{k+1}.
Proof. Let f^{(n)}(x) = SUM_{i=0}^{inf} c_i^{(n)} x^i with c_i^{(n)} = c_{n+i}, and let g^{(n)}(x) = SUM_{i=0}^{inf} d_i^{(n)} x^i be its reciprocal series. Writing P_k^{(n)}(x) = a_0 + a_1 x + ... + a_k x^k and expanding d^{(n)}(x^i Q_k^{(n)}) by means of the convolution relations between the c_i^{(n)} and the d_i^{(n)}, one is left with

c^{(n)}(x^i P_k^{(n)}) = a_0 c_{n+i} + ... + a_k c_{n+i+k} = 0   for   i = 0, ..., k - 1,

which shows that the polynomials {Q_k^{(n)}} are orthogonal with respect to the sequence d_2^{(n)}, d_3^{(n)}, ..., and a symmetric result holds for {S_k^{(n)}}.
The sequences {c_k^{(n)}} and {d_k^{(n)}} are related to the sequences {c_k} and {d_k} as was shown at the beginning of this section, and we get the formulas of the theorem by shifting the indices.

Remark. From the preceding proof we see that d^{(n)}(x^{-1} Q_k^{(n)}) = 0 with the convention that d^{(n)}(x^{-1}) = d_1^{(n)}.
Another kind of orthogonality, which is not the classical one, can be defined for the polynomials {Q_k^{(n)}} and {S_k^{(n)}} as follows.
For n = 0, 1, ... and p = 0, ..., n + k - 1 we have

c_0 d^{(-n+1)}(x^p R_{n+k}^{(-n+1)}) = d^{(-n+1)}(P_k^{(n+1)}(x) x^p SUM_{i=0}^{n} c_i x^{n-i}) + d^{(-n+1)}(x^p Q_k^{(n+1)}).

Since Q_k^{(n+1)} has degree k - 1, the polynomials {Q_k^{(n+1)}} are orthogonal with respect to the functional d^{(-n+1)} if n - k - 1 >= k - 2; that is to say

d^{(-n+1)}(x^p Q_k^{(n+1)}) = 0   for   p = 0, ..., n - k - 1   and   2k <= n + 1,
c^{(-n+1)}(x^p S_k^{(n+1)}) = 0   for   p = 0, ..., n - k - 1   and   2k <= n + 1.

Thus such an orthogonality is not valid for all k when n is a fixed integer.
The results of this section generalize some previous results of Van Rossum [101, 102] and Wynn [145].
Since the polynomials R_{n+k}^{(-n+1)} can be defined from the polynomials P_k^{(n+1)} and Q_k^{(n+1)}, and since these polynomials satisfy (2.20), (2.21), (2.22), (2.23) and a recurrence relation, the polynomials R_{n+k}^{(-n+1)} satisfy analogous relations. Multiplying the first relation by SUM_{i=0}^{n-1} c_i x^{n-1-i} and the second by x^n and adding, we get the second relation; using the second relation in the first one and the value of N_{k-1}^{(n+1)} obtained from the first relation with k - 1 instead of k, we get the third relation. The initializations come out from the definitions.
The polynomials R_{n+k}^{(n)} are directly related to the polynomials P_k^{(n)} as follows.

Theorem 2.45.

c_0 R_{n+k}^{(n)}(t) = c((x^n P_k^{(n)}(x) - t^n P_k^{(n)}(t)) / (x - t)).

Proof.

c_0 R_{n+k}^{(n)}(t) = P_k^{(n)}(t) c((x^n - t^n)/(x - t)) + c(x^n (P_k^{(n)}(x) - P_k^{(n)}(t))/(x - t)) = P_k^{(n)}(t) c((x^n - t^n)/(x - t)) + Q_k^{(n)}(t),

since

Q_k^{(n)}(t) = c^{(n)}((P_k^{(n)}(x) - P_k^{(n)}(t))/(x - t)) = c(x^n (P_k^{(n)}(x) - P_k^{(n)}(t))/(x - t)).
Moreover, let us say that the functional c is positive (nonnegative) if c(P) > 0 for every polynomial P such that P(x) >= 0 and P(x) not identically 0 (if c(P) >= 0 for every polynomial P such that P(x) >= 0).

Theorem 2.46. A necessary and sufficient condition that there should exist a nondecreasing function alpha with infinitely many points of increase (with a finite number of points of increase) such that

c_n = c(x^n) = INT_{-inf}^{+inf} x^n d alpha(x),   n = 0, 1, ...,

is that c be positive (nonnegative).

Theorem 2.47. A necessary and sufficient condition that there should exist a nondecreasing function alpha with infinitely many points of increase such that

c_n = c(x^n) = INT_{-inf}^{+inf} x^n d alpha(x),   n = 0, 1, ...,

is that

H_k^{(0)} > 0,   k = 0, 1, ....

Theorem 2.48. A necessary and sufficient condition that there should exist a nondecreasing function alpha such that

c_n = c(x^n) = INT_{0}^{inf} x^n d alpha(x),   n = 0, 1, ...,

is that H_k^{(0)} >= 0 and H_k^{(1)} >= 0, k = 0, 1, ....
In this case the sequence {c_n} is called a Stieltjes moment sequence and we shall write {c_n} in MS. The fundamental property of MS sequences is given by

Theorem 2.49. A necessary and sufficient condition that there should exist a nondecreasing function alpha with infinitely many points of increase such that

c_n = c(x^n) = INT_{0}^{inf} x^n d alpha(x),   n = 0, 1, ...,

is that

H_k^{(0)} > 0 and H_k^{(1)} > 0,   k = 0, 1, ....

The same result can be proved for alpha with a finite number of points of increase but, in that case, at least one of the preceding determinants must be zero. The same is not true for MH sequences.

Theorem 2.50. A necessary and sufficient condition that there should exist a bounded nondecreasing function alpha on [0, 1] such that

c_n = c(x^n) = INT_{0}^{1} x^n d alpha(x),   n = 0, 1, ...,

is that

(-1)^k DELTA^k c_n >= 0,   n, k = 0, 1, ....
This theorem has been proved by Hausdorff [68] and such a sequence is called a Hausdorff moment sequence. The sequence {c_n} is also called totally monotonic and we shall write {c_n} in TM. Another important class of sequences is that of totally oscillating (TO) sequences. We shall write {c_n} in TO if {(-1)^n c_n} in TM.
The following fundamental property of TM sequences has been proved by Wynn [144].

Theorem 2.51. Let {c_n} be a convergent sequence. A necessary and sufficient condition that {c_n} in TM is that

H_k^{(0)} >= 0   and   H_k^{(1)} >= 0,   k = 0, 1, ....
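The finite-difference condition of Theorem 2.50 is easy to check mechanically. The following sketch (an illustration, not part of the book) tests (-1)^k DELTA^k c_n >= 0 on all the differences that the available data allows, using exact rationals so that no rounding can flip a sign:

```python
from fractions import Fraction

def is_tm(c):
    """Check Hausdorff's condition (-1)^k (Delta^k c)_n >= 0 (Theorem 2.50)
    for all difference rows computable from the finite sample c."""
    row = list(c)
    sign = 1                       # (-1)^k for the current difference order k
    while row:
        if any(sign * x < 0 for x in row):
            return False
        row = [row[n + 1] - row[n] for n in range(len(row) - 1)]
        sign = -sign
    return True

# c_n = 1/(n+1) are the moments of d alpha(x) = dx on [0, 1], hence TM
c = [Fraction(1, n + 1) for n in range(10)]
increasing = [Fraction(1), Fraction(1), Fraction(2)]   # cannot be TM
```

Applied to c_n = 1/(n+1) the check succeeds (indeed (-1)^k DELTA^k c_n = k! n!/(n+k+1)! . (n+1)!/n! > 0), while it fails for an increasing sequence. Of course a finite sample can only refute, never prove, total monotonicity.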
We shall also need the following Hölder-type inequality:

(INT_a^b |h(x) g(x)|^r d alpha(x))^{1/r} <= (INT_a^b |h(x)|^p d alpha(x))^{1/p} (INT_a^b |g(x)|^q d alpha(x))^{1/q},

where h in L_alpha^p[a, b], g in L_alpha^q[a, b], and where p, q and r are finite real numbers satisfying p^{-1} + q^{-1} = r^{-1}. This inequality implies that hg in L_alpha^r[a, b]; it has been used in probability theory by Liapounoff in the discrete case [85] (see also [67, p. 146]).

Theorem 2.52. If {c_n} in TM or MS and if p^{-1} + q^{-1} = r^{-1}, then

c_{r(n+k)} <= c_{np}^{r/p} c_{kq}^{r/q}

for all n and k such that r(n + k), np and kq are nonnegative integers. Moreover, if these integers are even then the inequality is true for MH sequences.

In particular

c_{n+k}^2 <= c_{2n} c_{2k},   n, k = 0, 1, ....

If {c_n} in MH this inequality is true for k = 0, 1, ... and n = 0, 2, 4, ....
By taking n = 0, k = 1 and q = r + 1 we get

Corollary 2.10. If {c_n} in TM or MS then

(c_r/c_0)^{1/r} <= (c_{r+1}/c_0)^{1/(r+1)},   r = 1, 2, ....

Moreover, for TM sequences (c_n/c_0)^{1/n} <= 1 for all n, and

0 <= lim_{n->inf} (c_n/c_0)^{1/n} = R^{-1} <= 1.

In this corollary we have to assume that c_0 != 0, that is to say that alpha is nonconstant in its interval of definition. The second part of the corollary follows from c_{n+1} <= c_n for TM sequences.
Let us now look more carefully into TM sequences. We have first

Theorem 2.53. Let {c_n} in TM. If k exists such that c_{k+1} = c_k, then c_n = a for all n >= 1, with c_0 >= a >= 0. Conversely, if c_1 != 0 then c_n != 0 for all n.

Proof. We have c_n = INT_0^1 x^n d alpha(x) with alpha bounded and nondecreasing in [0, 1]. If k exists such that c_{k+1} = c_k, then

DELTA c_k = 0 = -INT_0^1 x^k (1 - x) d alpha(x).
119
lim
~Cn+l~
Cl
if C 1 =f
,;;;
then
1,
Cn
Cn+l/Cn
= lim
c~/n = R 1 ~
n+ OO
n+oo
f(t)
L c/o
Proof. We assume
Theorem 2.51
Cl
=f
so that, by Theorem
2.53,
n =f 0, 'v'n. From
Cn
a, then
n+oo
= a(l)a(l).
n l\n
=
da(x),
n =0,1, ...
Let an be defined by
an(O) = an(O+) = 0,
for
an (1)  an (1) = Cn .
Then, from a theorem due to Helly [69], we know that no< n 1 < n2 < ...
exist such that
lim a nk (x)
= a(x)
k+oo
Thus we have
a = lim cnk = lim [a nk (1)  a nk (1)] = a(l) a(l).
k+oo
k+oo
Since the sequences {(-1)^k DELTA^k c_n} are TM, we have

DELTA^k c_n DELTA^{k+2} c_n >= (DELTA^{k+1} c_n)^2,   n, k = 0, 1, ....

Let

f(t) = SUM_{i=0}^{inf} c_i t^i = INT_0^1 d alpha(x) / (1 - xt)   and   f_n(t) = SUM_{i=0}^{n} c_i t^i.

Then

{f(t) - f_n(t)} in TM   for all t in [0, R[,

and the corresponding result holds for all t in ]-R, 0].
If -1 <= xt < 1 then 1 + xt + ... + x^n t^n = (1 - x^{n+1} t^{n+1}) / (1 - xt), and we get

f(t) - f_n(t) = INT_0^1 (t^{n+1} x^{n+1} / (1 - xt)) d alpha(x)   for   t in ]-R, R[.

Let alpha_n be defined by d alpha_n(x) = (t^{n+1} x^{n+1} / (1 - xt)) d alpha(x); alpha_n is bounded and nondecreasing in [0, 1] for all t in [0, R[ and the result immediately follows from Theorem 2.50.
All the preceding results apply to the sequences {(-1)^k DELTA^k c_n} for a fixed value of k since they also are TM sequences. They can also be easily transformed for TO sequences.
Let us now show how to generate new TM sequences from a given TM sequence [21].
Theorem 2.57. Let a(x) = SUM_{i=0}^{inf} a_i x^i be a power series with radius of convergence R_a, such that a_i >= 0 for all i, and let {c_n} in TM with c_0 < R_a. Then {a(c_n)} in TM.

Proof. It is well known that if {u_n} and {v_n} are TM sequences then so are {u_n v_n} and {a u_n + b v_n} with a, b >= 0 (the same is true for MS sequences). Let a_k(x) = a_0 + ... + a_k x^k. Then {a_k(c_n)} in TM for a fixed k. Since 0 <= ... <= c_n <= ... <= c_1 <= c_0 < R_a, a(c_n) = lim_{k->inf} a_k(c_n) exists for all n. Thus {a(c_n)} is the limit, as k -> inf, of TM sequences and is itself TM.
Sequences                    Conditions
a^r - (a - c_n)^r            0 <= r <= 1,  c_0 <= a
(a - c_n)^{-r}               r >= 0,  c_0 < a
tg c_n                       c_0 < pi/2
1/cos c_n                    c_0 < pi/2
1/sin c_n - 1/c_n            c_0 < pi/2
a^{c_n}                      a > 1
-Log(1 - c_n)                c_0 < 1
argth c_n                    c_0 < 1
arc sin c_n                  c_0 < 1
arc cos c_n                  c_0 < 1
sh c_n                       no condition
ch c_n                       no condition
The same result can be proved for MS sequences but the conditions must be satisfied by all the c_n's. But we previously saw that if {c_n} in MS and converges then {c_n} in TM. Thus for diverging MS sequences the only possible series a for which Theorem 2.57 holds are entire series (R_a = inf) and we get

Corollary 2.13. Let {c_n} in MS and {c_n} not in TM; then the sequences {a^{c_n}} (a > 1), {sh c_n} and {ch c_n} are MS sequences.
Moreover, if c_0 < 1,

lim_{n->inf} c_n^{1/n} = lim_{n->inf} (c_n/c_0)^{1/n} = 0.

Let us now characterize the sequences of the form

c_n = INT_0^1 x^n a(x) dx,   n = 0, 1, ...,

with a bounded density a, |a(x)| <= M almost everywhere: such sequences are exactly those satisfying

|DELTA^k c_n| <= M k! / ((n + 1) ... (n + k + 1)),   n, k = 0, 1, ....
Proof. Let us first prove that the condition is necessary. We assume that

c_n = INT_0^1 x^n a(x) dx.

Then

|DELTA^k c_n| <= INT_0^1 x^n (1 - x)^k |a(x)| dx <= M INT_0^1 x^n (1 - x)^k dx = M k! / ((n + 1) ... (n + k + 1)),

since INT_0^1 x^n (1 - x)^k dx = n! k! / (n + k + 1)!. Thus

-M k! / ((n + 1) ... (n + k + 1)) <= (-1)^k DELTA^k c_n <= M k! / ((n + 1) ... (n + k + 1)).

But

DELTA^k (1/(n + 1)) = (-1)^k k! / ((n + 1) ... (n + k + 1)),

which shows that the sequences {M/(n + 1) - c_n} and {M/(n + 1) + c_n} are TM sequences.
From Theorem 2.50, bounded and nondecreasing alpha and beta exist such that

M/(n + 1) - c_n = INT_0^1 x^n d alpha(x),   M/(n + 1) + c_n = INT_0^1 x^n d beta(x).

But

M/(n + 1) = INT_0^1 x^n M dx,

and thus

c_n = INT_0^1 x^n d[Mx - alpha(x)] = INT_0^1 x^n d[beta(x) - Mx].

By unicity we have

Mx - alpha(x) = beta(x) - Mx = phi(x).
and

DELTA_h phi(x) = Mh - DELTA_h alpha(x) = DELTA_h beta(x) - Mh,   for all x >= 0,

so that

-M <= DELTA_h phi(x)/h <= M.

But phi is a function of bounded variation and has a derivative a(x) almost everywhere. Thus, by Lebesgue's theorem,

lim_{h->0} INT_0^x (DELTA_h phi(u)/h) du = INT_0^x a(u) du,

and

INT_0^x (DELTA_h phi(u)/h) du = (1/h) INT_x^{x+h} phi(u) du - (1/h) INT_0^h phi(u) du.

But phi is continuous in [0, inf) and so the right hand side of the preceding relation tends to phi(x) - phi(0) = phi(x) when h tends to zero. Hence phi is absolutely continuous and we have

phi(x) = INT_0^x a(u) du

and

c_n = INT_0^1 x^n d phi(x) = INT_0^1 x^n a(x) dx,   n = 0, 1, ....

More generally,

|DELTA^k c_n| <= M k! / ((n - p + 1) ... (n - p + k + 1))   for   n = p, p + 1, ...   and   k = 0, 1, ....
Chapter 3
Pade approximants and related matters

The aim of this chapter is to show how the theory of general orthogonal polynomials can be used to derive old and new results for Pade approximants and related matters such as continued fractions and the epsilon-algorithm. All the known results are obtained much more easily by using general orthogonal polynomials than they were previously; no special prerequisite is needed, such as Schweins's development or extensional identities. Moreover everything follows in a very natural way and a unified approach to Pade approximants and related subjects is obtained. New results are also derived, such as recursive methods to compute Pade approximants, or the connection between the topological epsilon-algorithm and the biconjugate gradient method. The first attempt to use the theory of general orthogonal polynomials in Pade approximants seems to be due to Wall [133]. This problem was also tackled in [3, 4, 25, 101, 125, 145].
3.1. Pade approximants

In section 1.5 we considered the approximants

(n + k/k)_f(t) = SUM_{i=0}^{n} c_i t^i + t^{n+1} (k - 1/k)_{f_n}(t)

with

f_n(t) = c_{n+1} + c_{n+2} t + ....

That is to say that the linear functional used to define (k - 1/k)_{f_n} was the functional c^{(n+1)} such that c^{(n+1)}(x^i) = c_{n+1+i}. In section 1.5 we saw that if the generating polynomial of (k - 1/k)_{f_n} is the orthogonal polynomial of degree k with respect to the functional c^{(n+1)}, then (n + k/k)_f(t) becomes the Pade approximant [n + k/k]_f(t). Thus, by using the notations of section 2.8, we get

[n + k/k]_f(t) = SUM_{i=0}^{n} c_i t^i + t^{n+1} Q~_k^{(n+1)}(t) / P~_k^{(n+1)}(t).
Similarly, the linear functional used for the approximants below the main diagonal is the one attached to the reversed series. If the generating polynomial of this approximant is chosen as the orthogonal polynomial of degree n + k with respect to that functional, then we get the Pade approximant [k/n + k]_f(t), and we obtain

[k/n + k]_f(t) = t^{n+1} Q~_{n+k}^{(-n+1)}(t) / P~_{n+k}^{(-n+1)}(t).

With the convention that a sum with a negative upper index is equal to zero, the two preceding results can be gathered together in the following theorem.

Theorem 3.1.

[p/q]_f(t) = SUM_{i=0}^{p-q} c_i t^i + t^{p-q+1} Q~_q^{(p-q+1)}(t) / P~_q^{(p-q+1)}(t),   p, q = 0, 1, ...
= c_0 R~_p^{(q-p+1)}(t) / P~_q^{(p-q+1)}(t),   p, q = 0, 1, ...
= N~_q^{(p-q+1)}(t) / P~_q^{(p-q+1)}(t).

From the orthogonality relations defining the denominator we also get the order of approximation:

f(t) - [p/q]_f(t) = (H_{q+1}^{(p-q+1)} / H_q^{(p-q+1)}) t^{p+q+1} + O(t^{p+q+2});

in particular

f(t) - [k - 1/k]_f(t) = (H_{k+1}^{(0)} / H_k^{(0)}) t^{2k} + O(t^{2k+1}).
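In practice the coefficients of [p/q] can be obtained from the classical Hankel linear system for the denominator. The following sketch illustrates this standard construction (it is not one of the recursive algorithms of this chapter): the denominator coefficients b_1, ..., b_q solve SUM_j b_j c_{m-j} = 0 for m = p+1, ..., p+q with b_0 = 1, and the numerator then follows by convolution.

```python
from fractions import Fraction
import math

def solve_linear(A, b):
    """Gauss-Jordan elimination with exact Fractions (A square, nonsingular)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, p, q):
    """[p/q] Pade approximant of sum c_i t^i: returns (numerator, denominator)
    coefficient lists, the denominator being normalized by b_0 = 1."""
    cc = lambda i: c[i] if 0 <= i < len(c) else Fraction(0)
    A = [[cc(p + 1 + m - j) for j in range(1, q + 1)] for m in range(q)]
    rhs = [-cc(p + 1 + m) for m in range(q)]
    b = [Fraction(1)] + (solve_linear(A, rhs) if q else [])
    a = [sum(b[j] * cc(m - j) for j in range(min(m, q) + 1)) for m in range(p + 1)]
    return a, b

c = [Fraction(1, math.factorial(i)) for i in range(6)]   # exp(t)
num, den = pade(c, 1, 1)
```

For exp(t) this gives [1/1] = (1 + t/2)/(1 - t/2) and [2/2] = (1 + t/2 + t^2/12)/(1 - t/2 + t^2/12), the familiar diagonal approximants.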
128
Chapter 3
Using Theorem 3.1 and Theorem 1.15 we can obtain expressions for
the error
Theorem 3.4.
t P+ q + 1
= _t P+q + 1
C(pq+1)
i_
(P(Pq+l)(X)
C(p+l) .:!.q.:........:..
P~ q+l)(t)
_

(xq P(Pq+l)(X)
xt
1 xt
tP+q+ 1
[P~q+l)(t)f
([P(Pq+l)(X)f)
(pq+l) "'q::.._'.;..;;;...
1 xt
f(t) = ~
cl + t p  q +1 c (pq+l)l xt)l).
i=O
But
Q~q+l)(t) = t q 
1 c(pq+1)
P(Pq+l)(t1) _ p(Pq+l)( )
q
t 1 _ x q
x
= P~q+l)(t)c(Pq+1)l xt)l)
 tqc(pq+l)(P~q+l)(X )(1 xt)l).
Thus
t P+ 1
f(t)[p/ql(t)
= P~q+l)(t) c(pq+1)
P(Pq+1)( )
q
x
(P(Pq+l)(X)
q 1 xt
p<:q+l)
we have
= tqC(pq+l) ( x q~pq+l)(
q
x )
1xt
1xt
which proves the first relation. The second relation follows from
c(pq+1)(x q x i ) = c(P+1)(x i ) and the proof of the third relation is as in Theorem
1.15 .
This result generalizes a known result in the case of Stieltjes series [5].
From the preceding results we have
h(pq+l)
q
= C(Pq+l)(p(Pq+l)2)
= H(pq+l)/H(pq+l)
q
q+l
q
h(pq+2)
q o:+ 7
(pq+2)
h(pq+l)
ql
h(pq+l)
q
h(pq)
q
h(pq)
q+l
h(pq+l)
q
h~q+l)
129
Proof. We have
h~q+l)
H(pq+l) H(pq+2)
q+l
ql
_
H(pq+l) H(pq+2) 
h~~lq+2)
(pq+l)
eq
It is easy to check that the second member of the left hand side is equal to
q~;rl) and that the two members in the right hand side are respectively
equal to q~+7) and e~+7). Thus the relation of the theorem is nothing else
but the q  d relation.
This theorem was first proved by Van Rossum [105], using a quite
different technique.
From the preceding theorem we can deduce the
Corollary 3.1. If
h(pq+1)
q
h (pq+2)
ql
h(pq+2)
+ h(pq+l)
q
0
q
then
h (pq+2)
ql,
h(pq)
h (pq)
q+l
h(pq+2) .
+ q(pq+l)
=
q+l
q(pq)
q+l
+ e(pq)
= 0
q+l
.
h(pq+l)
q
q(pq)
q+l
e(pq+l)
q
e(pq)
q+l
q (pq+l)
q+l
We also have
h (pq)
q+l
h(pq+l)
q
_
h(pq+l) h(pq+2) q
q
(pq)
eq+l
q(pq+l)
q+l
This result was also proved by Van Rossum [105]. The condition of this
corollary is satisfied by the main diagonals of the Pade tables for e t and
00
e'Yt
IT
i=l
00
00
these examples
q~l)e~l)<O,
a;
>0 Vi and
a;
i=l
Corollary 3.2. The Pade approximants [k/k] for the two preceding series have
no poles in Re (t) < O.
For the exponential series this result has been recently very much
refined [113, 114, 115, 116].
Chapter 3
130
The last result which has been proved by Van Rossum [105] and whose
proof can be simplified is
Vk
q
ql
q+l
q
h(pq+2) = h(pq+l) .
Proof. This last relation can be written as
q(pq)
q q+l _ q+2
e(pq+l)  e(pq+l)
q
q+l
Vk.
A =H(O) H(2)
k+l kl
and thus
A = f.L = 1/2.
COcz
coc2ci
1
= 0 then
131
Theorem 3.7.
[k 1/k]f(t)
= Co det (Ikl 
tJ~_I)/det
(Ik  tJk)'
where
[k 1/k]f(t)
[kl/k]f(t)=cO
L (e,Pke)t i
for
We also have
R(k))
e, i e
i=1 1 tXi
k
[k l/k]f(t) =
if the zeros
Co
X;
Since the polynomials are monic we get ore = e' and Lk" l e = 'YkCijl with
e' = (1; ... ; 1)T. Using Vk"I'Yk = (Aik ); ; Akk))T and D~e' = (x~; . .. ; X~)T
we get
k
Ti) = L..
~ A(k) i
j
Xi'
Co( e,Jk e
i =0,1, ...
j=l
and thus
k
C.
I
=~
~
A(k)X~
J
J'
j=l
with A~k) = co(e, R~k)e) or A~k) given by one of its definitions as previously
seen.
The preceding matrix formulation of Pade approximants is equivalent
to Nuttall's formula obtained as Property 1.5. Replacing Rk by its value
132
Chapter 3
we get
[k l/k],(t)
Using now L;;l = AkL[H;;\ L;;le = hoI p&Ol" and ho = p&W co with ,,=
(co; ... ; Ck_I)T we get
L e;A~z =0
and so
k
eiA~+iz
= 0,
\fn~O.
i~O
L eiCn+i =0,
i=O
\fn~O
with Ci = (y, A~z) for all i ~ O. Let us now consider the series
defined by
f(x)
133
If
i=O
L eiA~z=O,
i=O
Moreover, if this minimal polynomial has the factor t with multiplicity r (if
r = 0, t is not a factor) then
m
L eiA~z=O,
i=r
and
mr
L er+ic,,+i = 0,
i=O
for
n = r, r + 1, ....
[n + k 1/k],(t) = N
[n + k/k 1],(t) = W
[n + k/klt(t) = C
[n + k + l/klt(t) = S
[n + k/k + 1],(t) = E
134
Chapter 3
We have
n
L e/ + tn+1Q~n+l)(t)/P~n+1)(t),
C=
;=0
n1
L e/ + tnQ~n)(t)/P~n)(t),
;=0
W=
n+1
L c/ + tn+2Q~n~2)(t)lP~n":12)(t),
;=0
S=
n+l
L e/ + tn+2Q~n+2)(t)/P~n+2)(t),
;=0
n1
= L e/ + tnQrl1(t)/P~nll(t).
;=0
tP~n)(t)
tQ~n)(t)
qk+1 
and so
p~n+1)(t)Q~n)(t) P~"l1(t)Q~n)(t)
= enP~n)(t)p~n+1)(t) + tQ~n+1)(t)p~n)(t)
 p~n)(t)Q~nl1(t).
p~n+l)(t) p~nl1(t)
Q~n)(t)]
p~n)(t) = en
Q~n+1)(t)
t p~n+1)(t)
Q~n)(t)
pr)(t)
Multiplying by t n we get
p~nl1 (t)(E  N) = .P<t+ 1)(t)( C  N).
(3.1)
ek
tP~n_+[>(t)
tQr+2)(t) .
t2QL~~2)(t)
Thus
tpLn+1)(t)QLn~2)(t)  tP~n+2)(t)Q~n_~2)(t)
= QLn+1)(t)p~n":l)(t)
 Cn+lp~n+1)(t)PLn":l)(t)  tQ~n+2)(t)p~~~2)(t).
p~n+1)(t)
135
Multiplying by
tn+ 1
we obtain
(3.2)
EN
(E C)(C N)
SC
(C W)(SC)
CW
(C W)(SC)
or
+
EC
CN
= (EC)(CN) + (EC)(CN) .
NC
SC
1
1
WC EC
+=+.
(3.3)
This rule can be used to compute recursively the whole Pade table
starting from the boundary conditions
[l!q]f(t) = 0,
[p/O],(t) =
[P/1]f(t) =
f c/,
[O/q],(t)
00,
= [q/O];l(t) = (~ dirl.
i=Q
Since the cross rule also holds for the series g and because of the reciprocal
covariance property we also have
1
+
l
N1CS1_C 1 = W1_C 1+ E1C 1 .
This rule is the inverse cross rule.
Chapter 3
136
such as [n + k/kMt) for k = 0,1, ... and n = 0, 1, ... fixed. Such a method,
which is a generalization of a method given by Gragg [63], has been found
by Brezinski [23] by using a bordering method for solving systems of linear
equations of increasing dimension. It is in fact the algorithm given in section
2.2.
Let us now come to other relations and recursive methods. From
Theorem 2.12 we know that
p~nl(t)Q~n11 (t)
Q~n)(t)p~n11 (t)
= c(n)(p~n)2) = H~nll/H~n).
Thus
1'~n)(t)(Xnll (t)  Q}:')(t)1'~n11 (t) = t2kH~n11/H~n)
and we get
_
N
Q (n) (t)
Q (n)(t)
_ n_k
_ _ t" k+l
E  t 1'~n)(t)
1'~n11(t)
H(n)
_ n+2k
k+1
t
H~n)1'~n)(t)1'~n11(t)
We get
N  E =[n +kl/k],(t)[n + k/k + 1Mt)
[H~~1]2
t n+2k
ill")(t)ill"l 1 (t)
.
(3.4)
H(n) H(n+l)
k+1 k
t"+2k
ill")(t)H(:+1)(t)
.
(3.5)
But we have
1'(n) (t) 1'(n+1)(t)
k+1_  k
p}:'+l)(t)
(n) t1'(n)(t)
= (N  E) qk.:.H k
(NC)(NE)=EC=(NE)
p~n+l)(t)
and thus
H(n) H(n+l)
k+1 _ k+1
tn+2k+1
(3.6)
H~~1 (t)H~n+l)(t)
Relations (3.4), (3.5) and (3.6) have been obtained by Wynn [146]. Now we
have
ES=(EC)(SC)=(EC)
137
But
=t
H(n) H(n+l)
H(n+l)H(n+1)(t)
k+1
k
Hln21(t)Hln+1)(t) Hln21 H ln+2)(t)
k+l
k+1
k+}
H~n21(t)Hln+2l(t)
tn+2k+2
(3.7)
N C E C
EC
tHkn++l)
WC
SC
(3.8)
Similarly we have
NC
WC
EC
SC
(3.9)
(EN)(ES)
(WC?
(EC?
[Hln+1)(t)]2
Hkn )(t)Hkn+2l(t)
[Ht+1)(t)]2
(N  W)(NE) _ (SE)(S W)
(NC?
(SC?
A,
~ (n) (t){r,n+2)(t)
H
k+l
kl
= f.L
Hk"ll (t)Hkn_+12l(t)]/[Hkn+1l(tW
N E
= (CE)(CN)
[Hkn+1)(t)Y
tn+2k+lHkn+l)Hkn++/)'
tH(n+1)H(n) H~(n+2)(t)
k+l
k+l kl
can prove the following relations which were discovered by Wynn [142]
e'kn)Qkn+1)(t) + (q'knll 
ek~/) 
t)Okn)(t) + to'k~/)(t)
and
qknl)to~nl)(t)
= Cn  1 q'knl)p'kn 1)(t) 
CnPkn)(t).
H~M+l)
and
S(L/M) =
H~M+l)(t).
NE
W
SW
E
SE
C
S
with
C=[n+k/k]f(t).
From the relations (3.4), (3.5), (3.6) and (3.7) it is easy to get
(C  E)(S  SE)
(C  S)(E  SE)
and
similar
(W, C, Sw, S).
HknllHkn++12)
Hkn+2)Hkn12
identities
for
(NW, N, W, C)
and
We also have
(C  SE)(S
 E) = [Hr:l)]2
'__
''_'
'::'''':';''0..,...::,.,.
(C  E)(S  SE)
Hk"llHkn:12)
(C  SE)(S  E)
[H~:~1)]2
(C  S)(E  SE)
Hkn+2)Hk"l2
The right hand side of these equalities is independent of t. So also is the left
hand side and we obtain ratios of Pade approximants which are invariant
with respect to t. We also get
(C  SW)(E  S) Hkn+2)Hkn++/)
(CE)(SWS) = HknllHkn+3) t,
(CNE)(SE)
(C  S)(NE  E)
Hknl 1 Hk":/) t
Hkn+ 2 ) Hln;21 ) ,
(N  C)(E  SE) _
(CSE)(N  E) 
ek+1 t,
(W  C)(S  SE)
(WS)(CSE)
qk+l t.
(n)
(n+1)
The polynomials
n+kl.
We set
Pk
n)
n+kl
Nt)(t) =
a~k.n)ti.
i=O
a~k+nLl =
Pk
n)
(l)kHkn;l)/Hkn).
Finally we get
N~nll (t)
p~nll (t)
(3.10)
(k.n+l)
an + k
Pknll (t)
(3.11) :
kl
l"lkl
l"lk
Thus
Nkn)(t) a~".k1.n+2) Nkn+l)(t)  a~k+k+1) Nkn_~2)(t)
Pkn)(t) = a~\~il.n+2) Pkn+1)(t)  a~k+k+l) Pk"+;)(t)
(3.12) *
[n+2k-3/2]
    ⇑
[n+2k-1/0] ⇒ [n+2k-1/1]
    ⇑
[n+2k/0]
This is an algorithm due to Baker [8] whose origin can now be fully understood.
Starting from [n+2k/0] and [n+2k-1/0], and starting also from [0/n+2k] and [0/n+2k-1], forms the Pindor algorithm [99].
The preceding method is independent of the arbitrary normalization
factor which is chosen for orthogonal polynomials. This normalization factor
can be modified during the computation.
We now express the relations between the coefficients of the polynomials occurring in the denominator of (3.10) and the relations between the
coefficients of the polynomials occurring in the numerator of (3.11). We get
b~t+l) = b~k+'i+2) = 0,
bkk,n+l) b\k+l,n) = bkk.n+l) b\k.n+2) + b~k++l,n)b\,=~+l),
i = 0, ... , k + 1,
a~t+l)=O,
a~k+k+l) a\k+l,n)
= a~k+k+l) a\k,n+2) 
a~k+k::r a\,=~+l),
i = 0, ... , n + k.
= a~k+k+l) b\k,n+2) 
a~~kj;2{ b\,=~+ll,
i = 0, ... , k + 1.
(3.13)
(3.14)
with
k
hkn) =
L Cn+k+ibk~~).
(3.15)
i=O
Nl"ll (t)
Pl"ll (t) =
(3.17)
where e_k^{(n)}, q_{k+1}^{(n)} are computed from (3.13), (3.14) and (3.15). Thus, relations
(3.16) and (3.17) allow us to compute recursively Padé approximants located
on a descending staircase in the Padé table without calculating the whole qd
array
[n/0]
  ⇓
[n+1/0] ⇒ [n+1/1]
            ⇓
          [n+2/1] ⇒ [n+2/2]
                      ⇓
                      ⋱
This is an algorithm due to Watson [135] which is now also fully explained.
Some other methods based on the connection with continued fractions can
be found in [142] but these methods need the knowledge of the whole qd
array or the whole table of Hankel determinants.
The relations (3.11), (3.12), (3.16) and (3.17) can be used in many
different ways to compute other sequences of Pade approximants. For
example if we use (3.17), then (3.12), then (3.11), then (3.16) and so on we
are able to compute recursively the following sequence of Pade approximants
[n/0]     [n/1]     [n/2]
   ⇑
[n+1/0] ⇒ [n+1/1]   [n+1/2] ⇒ ⋯
All these methods can also be used with negative values of n, that is to say
for the reciprocal series. Thus, by the property of reciprocal covariance, we
can compute
[0/n+2k-1] ⇒ [0/n+2k]
    ⇓
    ⋮
[2/n+2k-2]
by using alternatively (3.12) and (3.11).
If we use alternatively (3.16) and (3.17) we can calculate the sequence
[0/n] ⇒ [0/n+1]
  ⇓
[1/n+1] ⇒ [1/n+2]
  ⇓
[2/n+2] ⇒ ⋯

or the sequence

[1/n] ⇒ [1/n+1]
  ⇓
[2/n] ⇒ [2/n+1]
  ⇓
  ⋮
We can also follow some bizarre route in the Pade table such as
(the numbers 1 to 18 in the original diagram indicate the order in which the approximants along the route are computed)
Let us take

f(t) = e^t = 1 + t + t^2/2 + t^3/6 + ⋯

We have

[0/0] = 1 = N_0^{(1)}(t)/P_0^{(1)}(t),   q_1^{(1)} = 1/2.

(3.17) gives

[1/1] = N_1^{(1)}(t)/P_1^{(1)}(t) = (2 + t)/(2 - t)

and thus

[0/1] = N_1^{(0)}(t)/P_1^{(0)}(t) = 1/(1 - t).

Using (3.11) we obtain

[0/2] = 2/(2 - 2t + t^2).
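The values in this example are easy to check against a direct computation. The sketch below (a hypothetical helper, not one of the book's recursive algorithms) obtains the [p/q] coefficients by solving the defining linear system over the rationals.

```python
from fractions import Fraction
from math import factorial

def pade(c, p, q):
    """Coefficients (num, den) of the [p/q] Padé approximant of sum c_i t^i,
    normalized so that den[0] = 1; used here only to verify the example."""
    co = lambda i: c[i] if 0 <= i < len(c) else Fraction(0)
    # solve sum_{j=0..q} den[j] c_{p+k-j} = 0 for k = 1..q, with den[0] = 1
    A = [[co(p + k - j) for j in range(1, q + 1)] for k in range(1, q + 1)]
    b = [-co(p + k) for k in range(1, q + 1)]
    for i in range(q):                      # Gaussian elimination, exact
        piv = next(r for r in range(i, q) if A[r][i] != 0)
        A[i], A[piv], b[i], b[piv] = A[piv], A[i], b[piv], b[i]
        for r in range(i + 1, q):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    den = [Fraction(1)] + [Fraction(0)] * q
    for i in reversed(range(q)):            # back substitution
        den[i + 1] = (b[i] - sum(A[i][j] * den[j + 1]
                                 for j in range(i + 1, q))) / A[i][i]
    num = [sum(den[j] * co(k - j) for j in range(min(k, q) + 1))
           for k in range(p + 1)]
    return num, den

c = [Fraction(1, factorial(i)) for i in range(6)]   # e^t
```

For e^t this gives [1/1] = (1 + t/2)/(1 - t/2) = (2+t)/(2-t) and [0/2] = 1/(1 - t + t^2/2) = 2/(2 - 2t + t^2), as in the example.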
and thus
k+l k
To obtain (3.11) this ratio of determinants had been expressed in terms of
a~k+k:2{ and a~~k+1). We shall now express it in terms of quantities which can
be directly computed from p~nl1 and p~n+1) and from (3.10) we get
N~n+2)(t)
p~n+2l(t)
tb~k+~l.n) N~n+l)(t)
= b~k.n+1lPknl1(t) tbk\;l.n)p~n+l)(t)
We also have
h(n+l) h(n+1)
(n) _ (n+ll __
k___
k_
qk+l ek
 h(n) h(n+2)
k
kl
H(n+l)
k+l
[H(n+2)H(n) _ H(n) H(n+2)]
H(n+2)
k
k
k+1 kl
H (n+l)H(n)
k
k+l k
and, by using the recurrence relation (2.17) for Hankel determinants we get
m + )m
n
n + 1)
(n)
(n+1)
k+l
k
qk+l  ek
= H(n) H(n+2)
k+l
since
p~n)l(o) =
bik,n).
= (p~n+2)(t) _ Ptll(t))/t(bik,n+2) 
bik+1,n))
Hk,n)) =
b~k,n+l)p~n)(t)
b~k,n) p~n+l)(t)
b~k,n+l) N~n)(t) 
P~~12)(t)
b~k,n+l)p~n)(t)_ b~k,n)p~n+1)(t) .
Hk,n) N~n+l)(t)
[H~n+l)]2
kl
H~"11 H~n_+12)
[H~n+l)y
We thus obtain
N~n+l)(t)
p~n+l)(t)
= (p~n)(t) 
and thus
N~n_~l)(t)
N~n)(t)  N~n+l)(t)
p~n_~l)(t)
p~n>Ct)  p~n+l)(t)
b~k,n)
= 1 is satisfied.
p~n)(t)
and thus
N~n>Ct)
p~n)(t)
p~n+l)(t)  p~nll(t) .
b~k,n) =
1 is satisfied.
We also have
which gives
Nkn+ ll (t)
p~n+l)(t)
b~k,n)
h~nl)
+ h~n)b~k,nl)t)N~n)(t) 
N~n+l)(t)
(h~nl)b~k,n)
p~n+l)(t)
h~n)bkk,n)tN~nl)(t)
P k + 1 (t)  1 + h(n+l)
kl
and we obtain
Nkn;tl)(t)
pt:P(t)
h~n) (n+l)
t P k (t)  h(n+l) tPk 1 (t)
(k,n)
an + k 
kl
+ h(n)a(k
l,n+l)t)p"(n)(t)  h(n)a(k,n) tP(n+1)(t)
(h (n+l)a(k,n)
kl
n+kl
k
n+kl
k
k
n+kl
kl
*
A conversational program using these relations is given in the appendix.
Some other relations can be obtained by eliminating unknown quantities from the preceding identities.
These relations are also valid for the power series expansions of the
remainder terms for which they give recurrence relations. Let us, for
example, consider the relation (3.16) and let us set
R~n)(t)
= A(t)Nt)(t) + B(t)p~n)(t)
= Ntn)(t) 
f(t)p~n)(t)
and the preceding relation becomes a recurrence relation between remainder terms in the Pade table.
To end this section let us give a very nice method, also due to
Bussonnais, to obtain directly all these relations. Let us assume, for example, that we want to derive relation (3.16). From the basic property of Pade
approximants we know that
N_{k-1}^{(n+1)}(t) - f(t) P_{k-1}^{(n+1)}(t) = O(t^{n+2k-1}),
N_k^{(n)}(t) - f(t) P_k^{(n)}(t) = O(t^{n+2k}).
We look for A and B such that N(t) = A(t) N_k^{(n)}(t) + B(t) N_{k-1}^{(n+1)}(t) and P(t) = A(t) P_k^{(n)}(t) + B(t) P_{k-1}^{(n+1)}(t) satisfy

N(t) - f(t) P(t) = O(t^{n+2k+1})

where P has degree k and N has degree n+k and where A and B are polynomials or rational functions in t. In that way we shall have P(t) ≡ P_k^{(n+1)}(t) and N(t) ≡ N_k^{(n+1)}(t).
In this example it will be quite easy to find A and B from the first term
of the power series expansions of the remainders. The same procedure can
be also used for approximants which are not adjacent in the Pade table but
the expressions of A and B will be much more difficult to get.
3.1.4. Normality
For simplicity we set
\tilde H_k^{(n)}(t) = H_k^{(n)} \tilde P_k^{(n)}(t),   \tilde M_k^{(n)}(t) = H_k^{(n)} \tilde N_k^{(n)}(t),
so that
[n+k/k]_f(t) = \tilde M_k^{(n+1)}(t) / \tilde H_k^{(n+1)}(t).
From the relations (3.4), (3.5), (3.6) and (3.7) and after reducing to the same
denominator we get
\tilde M_k^{(n)}(t) \tilde H_{k+1}^{(n)}(t) - \tilde M_{k+1}^{(n)}(t) \tilde H_k^{(n)}(t) = [H_{k+1}^{(n)}]^2 t^{n+2k},

\tilde M_k^{(n)}(t) \tilde H_k^{(n+1)}(t) - \tilde M_k^{(n+1)}(t) \tilde H_k^{(n)}(t) = - H_{k+1}^{(n)} H_k^{(n+1)} t^{n+2k}.
Proof. From Theorem 2.43 we know that the polynomials \tilde M_k^{(n)} are orthogonal polynomials. Then from Theorem 2.32 we know that \tilde M_k^{(n)}, \tilde M_{k+1}^{(n)}, \tilde M_k^{(n+1)} and \tilde M_k^{(n+2)} have no common zero. Similarly for \tilde H_k^{(n)}, \tilde H_{k+1}^{(n)}, \tilde H_k^{(n+1)} and \tilde H_k^{(n+2)}. From the preceding relations it is easy to see that \tilde H_k^{(n)} and \tilde M_k^{(n)} have no common zero.
In this case we say that the Pade table is normal. Each Pade approximant
[p/q] is the ratio of a polynomial with exact degree p divided by a
polynomial with exact degree q.
Let us now study what happens if H_{k+1}^{(n)} = 0 for some value of n and k. In that case the coefficient of x^{k+1} in P_{k+1}^{(n)} is zero. In the recurrence relation A_{k+1}^{(n)} and C_{k+1}^{(n)} are zero and, in general, B_{k+1}^{(n)} is different from zero. So P_k^{(n)} and P_{k+1}^{(n)} are identical apart from a multiplying factor. So also are Q_k^{(n)} and Q_{k+1}^{(n)}, with the same multiplying factor, since they satisfy the same recurrence relation, and thus

[n+k/k+1]_f(t) ≡ [n+k-1/k]_f(t)
which can also be found by (3.4). From (3.5) and (3.7) we see that
[n+k/k]_f(t) ≡ [n+k-1/k]_f(t) ≡ [n+k-1/k+1]_f(t).
[n+k-1/k]   [n+k-1/k+1]
[n+k/k]     [n+k/k+1]
= q~n)q~n+l)e~n+1),
4n+22)q~n82) + e~n+l)q~n';22)
e~~2)q~n82)q~n81)
= q~n++l)e~n++l) +q~n+l)e~n+l),
= q~n:l)e~"++l)e~n12.
H{nl)H{n+l) k+2
k
.
Let us assume that H_{k+1}^{(n)} and H_{k+1}^{(n+1)} are equal to zero. Then the nine following approximants are identical

[n+k-1/k]   [n+k-1/k+1]   [n+k-1/k+2]
[n+k/k]     [n+k/k+1]     [n+k/k+2]
[n+k+1/k]   [n+k+1/k+1]   [n+k+1/k+2]
And similarly when there are more and more adjacent Hankel determinants
which are equal to zero. Identical adjacent Pade approximants only occur in
square blocks. This is called the block structure of the Pade table which has
been first systematically studied by H. Pade [94].
Let us consider five adjacent Hankel determinants located as indicated

     N
W    C    E
     S

They satisfy

N S - W E = C^2.
If two adjacent determinants (N and S are not adjacent, nor W and E) are zero then a third determinant is equal to zero. Let us consider the more complete scheme
NW   N   NE
W    C    E
SW   S   SE
If Nand E (or W) are zero then C and NE (or NW) are zero. In both cases
we have a square block with nine identical Pade approximants located
[n+k-1/k]      [n+k-1/k+1]
        H_{k+1}^{(n)}
[n+k/k]        [n+k/k+1]
If S and E (or W) are zero then C and SE (or SW) are also zero and we have a square block with nine identical Padé approximants.
If C and E (or W) are zero then NE and/or SE is zero (or NW and/or SW). If NE (or NW) is zero then so is N and we have a block with nine identical Padé approximants. The same if SE (or SW) is zero. The proof is also the same if C and N (or S) are zero.
If C and E are zero and if NE and SE are also zero then N and S are zero. Let us consider the array
         NN
    NW   N    NE
WW  W    C    E    EE
    SW   S    SE
         SS
Let us assume EE and the determinants just above and under it to be non
zero (if one of them is zero they are all zero and we have a square block
with sixteen identical Pade approximants). We shall prove that NW= W=
SW. To prove this we first need an identity proved by Gragg [62] for the
carray and which can be also written for Hankel determinants. We have
NN·C - NW·NE = N^2,
N·S - W·E = C^2,
C·SS - SW·SE = S^2,
NE·SE - C·EE = E^2,
NW·SW - WW·C = W^2,
and thus
true if SW= O.
Thus we get a square block with nine Hankel determinants equal to
zero and the sixteen corresponding Pade approximants are identical.
If H~n) = 0 for p = k, ... , k + m then all the Hankel determinants located in the square block having the preceding diagonal are zero and we get
a square block of identical Pade approximants (compare with [62]).
The preceding remarks are the basis of Cordellier's extended cross rule
when a square block of identical Pade approximants occurs in the table [47].
This rule has also been proved algebraically by Claessens and Wuytack [44], who first extended the qd algorithm to the nonnormal case.
Theorem 3.9. If the Padé table contains a square block of order m with [n+k-1/k], [n+k-1/k+m-1], [n+k+m-2/k] and [n+k+m-2/k+m-1] at its corners, then
q (nm)
k+m
q (nm)
k+m
n
m
i=1
e(n:i)
k+.1
= e(n+m1)
k
q(n+i1)
k'
i=1
n e(nm+j1)+e(n+m)
n q(nm+j1)
k+m
n
n k+J'
i
k+mj
j=1
k+m+l
j=1
= e(n+ml)
k
e (n+m)
k+m
q(n+mj)+ q(n+ml)
k
k+l
j=l
q(nm+jl)
k+m+l
j=1
= q(n+m1)
k+l
n
m
e(nJ:mj)
for
1
 , ... , m  1
,
j=l
e(n+mj)
k+j
j=l
From these relations q~n;:>, e~n;:) and q~+::;l) for i = 1, ... , m can
be computed.
Let us now give Cordellier's extended cross rule
Theorem 3.10. If the Padé table contains a square block of order m with [n+k-1/k], [n+k-1/k+m-1], [n+k+m-2/k] and [n+k+m-2/k+m-1] at its corners, then

([n+k-2/k+i-1] - [n+k-1/k])^{-1} + ([n+k+m-1/k+m-i] - [n+k-1/k])^{-1}
= ([n+k+i-2/k-1] - [n+k-1/k])^{-1} + ([n+k+m-i-1/k+m] - [n+k-1/k])^{-1}

for i = 1, ..., m.
= N kn12(x )P\:,)(x) 
Nkn)(x )~"l2(X) .
On the other hand, using Corollary 2.3 and also another intermediate result
we have
Let us set

C = [n+k-1/k],
N_i = [n+k-2/k+i-1],
S_i = [n+k+m-1/k+m-i],
W_i = [n+k+i-2/k-1],
E_i = [n+k+m-i-1/k+m].
Then we have

(N_i - C)^{-1} + (S_i - C)^{-1} = (W_i - C)^{-1} + (E_i - C)^{-1}

for i = 1, ..., m.
The inverse extended cross rule also holds. From the practical point of
view this rule is difficult to implement in the general case because the
computation of E_i needs the knowledge of C, N_i, W_i and S_i.
Cordellier has recently produced two algorithms [46] which allow us to
compute recursively a diagonal from [pIO] to [Olp] even if square blocks
occur. The implementation of these algorithms is very easy. These relations,
which use Kronecker's division algorithm [134], appear as a generalization
of the relation given in the previous section when a square block exists.
3.2.
Continued fractions
The reader is assumed to be familiar with the basic definitions and properties of a continued fraction. We only recall that if C is a continued fraction

C = b_0 + a_1/(b_1 + a_2/(b_2 + ⋯)),

its successive convergents C_k = A_k/B_k are given by

A_k = b_k A_{k-1} + a_k A_{k-2},   B_k = b_k B_{k-1} + a_k B_{k-2},   k = 1, 2, ...

with A_{-1} = 1, A_0 = b_0, B_{-1} = 0, B_0 = 1.
Thus if we consider the continued fraction C(t) built from the recurrence coefficients of the orthogonal polynomials, with

Q_0(t) = 0,   Q_1(t) = 1,   P_0(t) = 1,   P_{-1}(t) = 0,

then its convergents satisfy

C_k(t) = [k-1/k]_f(t).

This continued fraction is called the continued fraction associated with the series f (or the associated continued fraction).
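The three-term recurrences for the numerators A_k and denominators B_k recalled above can be evaluated generically. A minimal sketch (the function name and calling convention are mine):

```python
def convergents(a, b, b0=0):
    """Successive convergents A_k/B_k of b0 + a1/(b1 + a2/(b2 + ...)),
    via the standard three-term recurrences
        A_k = b_k A_{k-1} + a_k A_{k-2},  B_k = b_k B_{k-1} + a_k B_{k-2}
    with A_{-1} = 1, A_0 = b0, B_{-1} = 0, B_0 = 1."""
    A0, A1 = 1, b0          # A_{-1}, A_0
    B0, B1 = 0, 1           # B_{-1}, B_0
    out = []
    for ak, bk in zip(a, b):
        A0, A1 = A1, bk * A1 + ak * A0
        B0, B1 = B1, bk * B1 + ak * B0
        out.append((A1, B1))
    return out

out = convergents([1] * 5, [1] * 5)   # 1/(1 + 1/(1 + ...)), golden-ratio tail
```

For all partial numerators and denominators equal to 1 this produces the familiar Fibonacci convergents 1/1, 1/2, 2/3, 3/5, 5/8.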
We can deduce from it a recursive algorithm to compute the family of
monic orthogonal polynomials {P_k}.
We set
h o = C(t),
hn
1+Bn t 
Thus
IC
n +1
I
1 + Bn+lt
...
ho = C1/h b
~ = 1 + Bnt Cn+lt2/~+1'
Let
~
We have
= u"l/u,,.
n=1,2, ....
= (1 + Bnt)u" 
n = 1, 2, ...
Cn+1fu,,+1'
U_ 1 = C 1U1
= f(t) and
Uo = 1.
Po(t) = 1,
1,
n+1
= [Un 
u,,+1]
t=o'
P_{n+1}(t) = (t + B_{n+1}) P_n(t) - C_{n+1} P_{n-1}(t),
[(1 + B n+1t)u,,+1 
u,,]t=o'
t2
n+2
This algorithm can also be deduced from the division algorithm which is
currently used to compute the continued fraction corresponding to the series
f (see below the definition) [66].
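The recurrence P_{n+1}(t) = (t + B_{n+1})P_n(t) - C_{n+1}P_{n-1}(t) is easy to sketch with coefficient lists in increasing powers of t (an illustrative helper of mine, checked below on the monic Hermite-type case B_n = 0, C_{n+1} = n):

```python
def monic_sequence(B, C):
    """Monic polynomials P_0, P_1, ... from
        P_{n+1}(t) = (t + B[n]) P_n(t) - C[n] P_{n-1}(t),
    with P_{-1} = 0, P_0 = 1.  Polynomials are coefficient lists."""
    def shift(p):                     # multiply by t
        return [0] + p
    def axpy(a, p, q):                # a*p + q with zero padding
        n = max(len(p), len(q))
        p = p + [0] * (n - len(p))
        q = q + [0] * (n - len(q))
        return [a * x + y for x, y in zip(p, q)]
    Pm, P = [0], [1]
    out = [P]
    for Bn, Cn in zip(B, C):
        Pnew = axpy(Bn, P, shift(P))  # (t + Bn) P_n
        Pnew = axpy(-Cn, Pm, Pnew)    # - Cn P_{n-1}
        Pm, P = P, Pnew
        out.append(P)
    return out

P = monic_sequence([0, 0, 0], [0, 1, 2])   # He_1 = t, He_2 = t^2-1, He_3 = t^3-3t
```

With B_n = 0 and C = (0, 1, 2) this reproduces the monic Hermite polynomials t, t^2 - 1, t^3 - 3t.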
From the recurrence relation between the u" we obviously have
Bn+1
= u~(O) U~+1(0),
2Cn+2
u~(O).
= hkt2k+2
and
t+ B k+l=
Qk+1(t)Pkl(t)  Pk+1(t)Okl(t)
.
Qk(t)Pk1(t) Okl(t)Pk(t)
The first relation was already known while the second can be written as
Q_{k+1}(t) P_{k-1}(t) - P_{k+1}(t) Q_{k-1}(t) = A_k h_{k-1} t^{2k} (1 + B_{k+1} t).
k = 0,1, ...
with
Vk,
C(t)=~L_S._L
~ IA2t+B;
...
Pdt)
...
+~+~+(J(t2k2)
t 2k
t 2k + 1
and thus
C~k) = C~k+1) =
cik ) = cik
+l)
Co,
= Cb
dk ) = Ci ,
for
= 0, ... , 2k  1
..
Ck+1/tktk+1
(0)
Qdx) _ Qdt) _
2k
H k+1 2k
( 2k+1)
x Pk(x)  Pdt) CO+C1t+ ... +C2k t  H~O) t +0 t
.
All these results can be obviously generalized for the other diagonals of the Padé table by considering the continued fractions

C^{(n)}(x) = C_1^{(n)} / (x + B_1^{(n)} - C_2^{(n)} / (x + B_2^{(n)} - ⋯))

where the coefficients B_k^{(n)} and C_k^{(n)} are those occurring in the recurrence relation of the polynomials {P_k^{(n)}}. The kth convergent C_k^{(n)}(x) of this continued fraction is equal to Q_k^{(n)}(x)/P_k^{(n)}(x).
Since the polynomials {Q_k^{(n+1)}} form a family of orthogonal polynomials
they are the partial denominators of a continued fraction. This continued
fraction is
(n)/
~(n)
E(n)(x) = ~
2
Cn _
3
_ .
x+B~n)
x+B~n)
ci~(n)(x)),
= Cin)/(x + Bin)cnct>(x)),
= 0,1, ...
with ct>(x) = 0. (See [120, 121, 122, 123] for a particular case.)
In Theorems 2.33 and 2.34 we already studied some properties of
polynomials which were denoted by U_k and V_k. These results will be useful
now.
Let f_n(t) = c_n + c_{n+1} t + ⋯ and g_n(t) = f_n(t^2). Since the polynomials {U_k}
are orthogonal with respect to the sequence c_n, 0, c_{n+1}, 0, ... we have

U_{2k}(t) = t^{2k} P_k^{(n)}(t^{-2}),   V_{2k}(t) = t^{2k-2} Q_k^{(n)}(t^{-2})

and thus

[2k-1/2k]_{g_n}(t) = [k-1/k]_{f_n}(t^2).

We also have

U_{2k+1}(t) = t^{2k} P_k^{(n+1)}(t^{-2}),   V_{2k+1}(t) = t^{2k} [Q_k^{(n+1)}(t^{-2}) + c_n P_k^{(n+1)}(t^{-2})]

and we get

V_{2k+1}(t)/U_{2k+1}(t) = t^2 [k-1/k]_{f_{n+1}}(t^2) + c_n = [k/k]_{f_n}(t^2)

and finally

[2k/2k+1]_{g_n}(t) = [k/k]_{f_n}(t^2).
If now we consider the continued fraction
,q(n)1
(n)1
,q(n),
D(t) = ~
n _
1
_
1
_
2
_
t
t
t
t
k=1,2, ...
_
(n)
with Uo(t) = 1,
V_ 1 (t) =1.
U_ 1 (t) = 0.
k =0,1, ...
tion
2 _
sJ
D(t)rt
,q~n)t21
Ie~n)t2,
1
 ...
and we get
Dk (t 2) = [k l/k]g" (t).
By the preceding results we finally obtain
D2dt) = [k l/kl (t),
n
c2pt p + 1 /cpl_+
I
A(p)
I_+ +
A(p)
I..
2
p
G (p)()[,/O]()
t  PI f t +11 rp t/cp  l . 11A(p)+II(p)t
11A(p)+II(p)t
2.2
p
.p
We have
with
'Y~n)+8~nll = 'Y~~l)+8~n++l>'
'Y~n)8~n)
= 'Y~n+l)8~n:l)
c t
1!;1
I ~ ~
tIc +~p(P)t +~p(P)t + ...
P+ 1
p+l
with
= p~n)(J"~n.:tl),
p~n+l) + (J"~n+l) = p~n) + (J"~"ll.
p~n.:il)(Jtll
For more details the reader is referred to [38]. The connection between
Pade approximants and continued fractions is thus complete. Sequences of
Pade approximants can be computed either by the relations given in the
3.3.
The ε-algorithm
We shall see, in this section, how the ε-algorithm of Wynn is derived from the preceding results and how it is related to Shanks' transformation of sequences. The proofs are much simpler than the classical proofs since they do not need any special knowledge. We shall also relate the ε-algorithm to orthogonal polynomials and derive old and new results about it. The connection with the moment method will also be studied. No special prerequisite about the ε-algorithm is needed to understand this section.
With the initializations

ε_{-1}^{(n)} = 0   and   ε_0^{(n)} = S_n = sum_{i=0}^{n} c_i,

the cross rule is obviously satisfied and the numbers ε_{2k+1}^{(n)} are completely defined by the preceding identities for all k and n. Moreover the numbers ε_k^{(n)} with odd or even values of the index k are now related by

ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + (ε_k^{(n+1)} - ε_k^{(n)})^{-1}.        (3.18)

The boundary conditions become

ε_0^{(n)} = S_n = sum_{i=0}^{n} c_i,   ε_{-1}^{(n)} = 0.
ε_0^{(0)}   ε_1^{(0)}   ε_2^{(0)}   ε_3^{(0)}   ⋯
ε_0^{(1)}   ε_1^{(1)}   ε_2^{(1)}   ⋰
ε_0^{(2)}   ε_1^{(2)}   ⋰
ε_0^{(3)}   ⋰
⋮

This array is called the ε-array. In this array the lower index denotes a
column while the upper index denotes a diagonal.
The ε-algorithm relates numbers located at the four vertices of a rhombus

              ε_k^{(n)}
ε_{k-1}^{(n+1)}         ε_{k+1}^{(n)}
              ε_k^{(n+1)}

Thus, in this table, we can proceed from left to right or from top to bottom.
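The rhombus rule is all that is needed to implement the algorithm. A minimal floating-point sketch (the names are mine, not the book's):

```python
import math

def epsilon_algorithm(S):
    """Wynn's epsilon algorithm.  Returns a list of columns; column k holds
    the numbers eps_k^(n), built from eps_{-1}^{(n)} = 0, eps_0^{(n)} = S_n by
        eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)}).
    The even columns eps_{2k}^{(n)} are the Shanks transforms e_k(S_n)."""
    cols = [[0.0] * (len(S) + 1), list(S)]      # columns k = -1 and k = 0
    while len(cols[-1]) > 1:
        prev2, prev = cols[-2], cols[-1]
        cols.append([prev2[n + 1] + 1.0 / (prev[n + 1] - prev[n])
                     for n in range(len(prev) - 1)])
    return cols[1:]                             # drop the artificial column -1

# partial sums of 1 - 1/2 + 1/3 - ... = ln 2
S = [sum((-1) ** i / (i + 1) for i in range(n + 1)) for n in range(9)]
eps = epsilon_algorithm(S)
```

On the slowly converging alternating harmonic series, the even column ε_4^{(0)} is already far closer to ln 2 than the last partial sum used.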
Let us now study the connection with the qd-algorithm. We set

g_{2k}^{(n)} = P_k^{(n+1)}(t)/P_k^{(n)}(t)   and   g_{2k+1}^{(n)} = P_{k+1}^{(n)}(t)/P_k^{(n+1)}(t)

where g_k^{(n)} is a function of t. From (2.20) and (2.21) we get

P_k^{(n+1)}(t)/P_k^{(n)}(t) = 1 - e_k^{(n)} t P_{k-1}^{(n+1)}(t)/P_k^{(n)}(t),

P_{k+1}^{(n)}(t)/P_k^{(n+1)}(t) = 1 - q_{k+1}^{(n)} t P_k^{(n)}(t)/P_k^{(n+1)}(t)

and it follows that

e_k^{(n)} t = (1 - g_{2k}^{(n)}) g_{2k-1}^{(n)},
q_{k+1}^{(n)} t = (1 - g_{2k+1}^{(n)}) g_{2k}^{(n)}.        (3.19)

From the relations of the qd-algorithm we see that the g_k^{(n)} must satisfy

g_{2k-1}^{(n)} g_{2k}^{(n)} = g_{2k-1}^{(n+1)} g_{2k-2}^{(n+1)}.        (3.20)
with

g_0^{(n)} = P_0^{(n+1)}(t)/P_0^{(n)}(t) = 1.
Let us now consider the series f(t) = sum_{i=0}^{infinity} c_i t^i. The nth partial sum of this series for t = 1 is equal to S_n and we shall compute the values of the Padé approximants at t = 1 by means of the ε-algorithm. We shall obtain

ε_{2k}^{(n)} = [n+k/k]_f(1) = N_k^{(n+1)}(1)/P_k^{(n+1)}(1) = [P_k^{(n+1)}(1) sum_{i=0}^{n} c_i + Q_k^{(n+1)}(1)] / P_k^{(n+1)}(1).
Replacing P_k^{(n+1)} and Q_k^{(n+1)} by their determinantal expressions we obtain for the numerator the Hankel determinant H_{k+1}(S_n), built on S_n, ..., S_{n+2k}, and for the denominator the determinant

| 1           1           ⋯   1          |
| ΔS_n        ΔS_{n+1}    ⋯   ΔS_{n+k}   |
| ⋮                            ⋮         |
| ΔS_{n+k-1}  ⋯               ΔS_{n+2k-1}|

which equals H_k(Δ^2 S_n),
and thus

e_k(S_n) = H_{k+1}(S_n)/H_k(Δ^2 S_n),   n, k = 0, 1, ...

Wynn proved later that all the numbers e_k(S_n) can be computed recursively by the ε-algorithm without explicitly computing the values of the Hankel determinants involved in Shanks' transformation (by using their recurrence relation) and that we get

ε_{2k}^{(n)} = e_k(S_n).
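The identity ε_{2k}^{(n)} = e_k(S_n) = H_{k+1}(S_n)/H_k(Δ²S_n) is easy to check numerically. The sketch below (my own helper functions) computes the Hankel determinants directly by cofactor expansion, which is fine for the small orders k used here; on a geometric sequence e_1(S_n) is exact.

```python
from fractions import Fraction

def hankel(seq, k, n=0):
    """Hankel determinant H_k of the sequence seq starting at index n."""
    if k == 0:
        return Fraction(1)
    def det(A):                     # cofactor expansion, small k only
        if len(A) == 1:
            return A[0][0]
        return sum((-1) ** j * A[0][j]
                   * det([r[:j] + r[j + 1:] for r in A[1:]])
                   for j in range(len(A)))
    return det([[Fraction(seq[n + i + j]) for j in range(k)]
                for i in range(k)])

def shanks(S, k, n=0):
    """e_k(S_n) = H_{k+1}(S_n) / H_k(Delta^2 S_n)."""
    d2 = [S[i + 2] - 2 * S[i + 1] + S[i] for i in range(len(S) - 2)]
    return hankel(S, k + 1, n) / hankel(d2, k, n)

# partial sums of the geometric series with ratio 1/2, limit 2
S = [sum(Fraction(1, 2) ** i for i in range(n + 1)) for n in range(5)]
```

For these partial sums e_1(S_n) equals the limit 2 exactly for every n, which is the classical exactness of Shanks' transformation (and of Aitken's Δ² process) on a single geometric term.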
If we subtract we get
.,(n) _ .,(n1) _ (.,(n+1) _ .,(n _ (.,(n+1) _ .,(n) )1_ (.,(n) _ .,(n11
( "2k+2
"2k+2
"2k
"2k  "2k+1 "2k+1
"2k+1 "2k+1
.
Replacing the two differences in the left hand side by their values we obtain
(e~k:;'~  e~nJ+1)1 + (E~k~V  e~':2+1)1
= (e~k:V  E~':2+1)1 + (E~k:;'V  E~':2+1)1
with

ε_{-1}^{(n)} = 0   and   ε_1^{(n)} = 1/ΔS_n,   so that   (ε_1^{(n)})^{-1} = ΔS_n.

This relation shows that the numbers (ε_{2k+1}^{(n)})^{-1} are equal to the numbers obtained by applying Shanks' transformation to the sequence {ΔS_n}. Thus

ε_{2k+1}^{(n)} = 1/e_k(ΔS_n) = H_k(Δ^3 S_n)/H_{k+1}(ΔS_n).        (3.21)

We also have

H_{k+1}(S_{n+1}) H_k(Δ^2 S_n) - H_{k+1}(S_n) H_k(Δ^2 S_{n+1}) = H_{k+1}(ΔS_n) H_k(ΔS_{n+1}),        (3.22)

H_{k+2}(S_n) H_k(Δ^2 S_{n+1}) - H_{k+1}(S_{n+1}) H_{k+1}(Δ^2 S_n) = - H_{k+1}(ΔS_n) H_{k+1}(ΔS_{n+1}).        (3.23)
+ ,,(n+l) _
,,(n) _ ,,(n+1)
"2k+2
"2k
A=
,,(n)
"2k
"2k
(e~k~~  e~k+1)f
(e~nJ  e~k~2i)(e~nJ 
(eW+2 
eW+2)
(e~'i2  e~k+1)2
=
H k + 1(.:1 Sn)Hk 
(.:1
e~k+1)f
(e~k+2)  e~k+l2
'
Sn+2)
'In, k.
+ (,,(n+2) _
"2k2
,,(n+lI(,,(n) _ ,,(n+l1
"2k
"2k+2
"2k
= Hk (.:1 2 Sn + 1 ),
,,(n+1) _
( "2k
,,(n) )(,,(n+2) _
"2k+2 "2k
_
( ,,(n+1) _ ,,(n+2(,,(n
"2k
"2k
"2k+2
,,(n+1
"2k+2
,,(n+1
"2k+2
Hk+1(.1Sn)Hk+1(.1Sn+2)
( ,,(n+1) _
"2k
( ,,(n+1) _
"2k
,,(n+1(,,(n+2) _
"2k+2 "2k
,,(n) )(,,(n+2) _
"2k+2 "2k
,,(n) )
"2k+2
,,(n+1
"2k+2
[Hk+1(.1Sn+1>:f
Hk+1(.1Sn)Hk+1(.1Sn+2)
( ,,(n+1) _
"2k
( ,,(n+1) _
"2k
,,(n+1(,,(n+2) _ ,,(n) )
"2k+2 "2k
"2k+2
,,(n+2(,,(n) _ ,,(n+1
"2k
"2k+2
"2k+2
( ,,(n+1)_
"2k
( ,,(n+1) _
"2k
(n+1)
( B2k
,,(n+3(,,(n) _
"2k2 "2k+2
,,(n) )(,,(n+3) _
"2k+2 "2k2
(n1( (n+2)
B2k+2 e2k

>
(n+1)
( S2k
,,(n+2
"2k
,,(n+2
"2k
(n
B2k+2
(n+2( (n 1)
(n
 B2k
S2k+2  S2k+2

>
( ,,(n) _ ,,(n+1(,,(n) _
"2k
"2k
"2k+2
( ,,(n+1) _ ,,(n+1(,,(n) _
"2k
"2k+2 "2k
,,(n+1
"2k+2
,,(n) )
"2k+2
Hk (.1Sn+2)Hk+2(.1Sn)
[Hk+1(.1Sn+1>:f
Hk (.1Sn+2)Hk +1 (.1Sn+1)
Hk+l(.1Sn)Hk(.1Sn+3)
H k+1(A>S)H
( AS )
Ll n
k+1 Ll n+1
= H k(AS
)H (AS ) '
Ll n+2
k+2 Ll n1
H (AS )H (AS)
k Ll n+1
k+2 Ll n
H (AS )H (AS)
k+1
Ll
n+1
k+1
Ll
(n+2)
(n+2( (n+1)
(n+1
S2k2  S2k
S2k
 S2k+2
H k (AS
)Hk+1 (AS
)
Ll n+2
Ll n+1
Of course, relations (3.11), (3.12), (3.16) and (3.17) can be written in terms of the ε-algorithm and used to compute recursively members of the ε-array.
All the preceding equalities can be useful in proving inequalities between members of the ε-array and thus proving convergence. For example if a sequence {S_n} satisfies

(-1)^k Δ^k S_n ≥ 0,   for all n, k

and

(-1)^k H_k(Δ^{2p+1} S_n) ≥ 0,   for all k, n, p,

then from these inequalities it can be shown that [17, 18, 28]

lim_{n→∞} ε_{2k}^{(n)} = lim_{k→∞} ε_{2k}^{(n)} = S,   for all n, k
where S = lim_{n→∞} S_n and that the convergence is accelerated for totally monotonic sequences such that

lim_{n→∞} (S_{n+1} - S)/(S_n - S) ≠ 1.
At the beginning of this section we related the ε-algorithm applied to {S_n} to the sequence c_n = Δ^n S_0. We have

H_k^{(n)} = H_k(c_n) = H_k(Δ^n S_0) =

| Δ^n S_0        ⋯   Δ^{n+k-1} S_0  |
| ⋮                   ⋮             |
| Δ^{n+k-1} S_0  ⋯   Δ^{n+2k-2} S_0 |

This result is easily obtained by replacing the second column by its sum with the first one and so on, and by performing the same operations on the rows.
We have
ekn) Hkn11Hkn_~2) e~n+1)
q~n+1) = H~n)H~"+2)
If we take n
H~0~1
q (n)
k+1
= 0 then
Hk+1(SO)
.,(0)
<>2k
and thus
e~O)/q~l) = e~~/e~~_2.
Consequently we get
e~~=So
n
k
e~O)/q~l).
(3.25)
;=0
and thus
q (2)/e(1) _
k
.,(0)
1
e~~+1
/.,(0)
k  <>2k+1 <>2kh
1
e(O)
=2k+1 AS
"l
n .
k
0;=0
q~2)/e~1)
q~O) =
(3.26)
with
e~O)/q~O)
e&O)/(e~O) 
e&O).
Thus we get
k e(O)
e~O)e~O)
So
q(O) = B(O) _ B(O) .
i=1
2k2
2k
We also have
e~1)
B~0~_1  e~0~_3
(3.27)
2k+1
e(O)
2k1
1
=AS
n
k
q(1)/e(1)
i
i'
(3.28)
"" Oi=1
Thus if the sequences {e~O)}, {q~1)} and {e~1)} are known it is possible to
compute recursively the sequences {B~~} and {e~o~+1} by (3.25) and (3.28).
Reciprocally, if the sequences {e~O~} and {e~o~+l} are known the sequences {e~O)}, {e~l)}, {q~O)} and {q~l)} corresponding to the sequence {c" = ..1n So}
can be recursively obtained from (3.26) and (3.27) as follows
e~l)=O
and
q~O)=..1So/So= 1/e~0)e~0).
Let us assume that e~l~l and q~O) are known; then we get e~O) from (3.26) and
we have
Then
e~1)
B~o,: = ek (So) =
So
Sk
Sk
..12 So
S2k
..12 Sk _ 1
..12 Sk_1
..12S2k _2
ek(SO) =
So
.1 So
.1 So
.1 2 S 0
.1 kSo
.1 k+1S O
.1kSo
.12 S 0
.1k +1S O
.12kSO
.1 k+1S O
.12kSo
.1k + 1 So
which shows that ek(SO) is the ratio of the two Hankel determinants Hl2l1
and Hk2 ) for the sequence {.1"So}. If we set
w~~ = Hk"11/Hk"+2)
then
w~~ = edSo) = e~~
n e~")/q~"+l)
k
= ek (.:1"So),
i=l
and
w~~ = l/e~~+l'
n=O, 1, ... ,
Another simpler way of computing the ratios ε_{2k}^{(n)} is to use the recurrence relation of Hankel determinants for the sequence {Δ^n S_0} instead of computing the Hankel determinants for {S_n} and for {Δ^2 S_n} as suggested above. This means that the recurrence relation of Hankel determinants must be used with the initial conditions

H_0^{(n)} = 1   and   H_1^{(n)} = Δ^n S_0.
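Assuming the recurrence (2.17) referred to above is the classical Sylvester-type identity for Hankel determinants (which these initial conditions suggest), a whole table of H_k^{(n)} can be filled in without evaluating any determinant. A sketch under that assumption:

```python
from fractions import Fraction

def hankel_table(c, kmax):
    """H[k][n] = H_k^{(n)} for the sequence c, by the Sylvester-type
    recurrence (assumed to be the book's (2.17))
        H_{k+1}^{(n)} = (H_k^{(n)} H_k^{(n+2)} - [H_k^{(n+1)}]^2) / H_{k-1}^{(n+2)}
    with H_0^{(n)} = 1 and H_1^{(n)} = c_n."""
    H = [[Fraction(1)] * len(c), [Fraction(x) for x in c]]
    for k in range(1, kmax):
        H.append([(H[k][n] * H[k][n + 2] - H[k][n + 1] ** 2) / H[k - 1][n + 2]
                  for n in range(len(H[k]) - 2)])
    return H

H = hankel_table([1, 1, 2, 6, 24, 120, 720], kmax=3)   # c_n = n!
```

For c_n = n! a direct evaluation gives H_2^{(0)} = 1, H_3^{(0)} = 4 and H_3^{(1)} = 24, which the recurrence reproduces.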
Let us see one more recursive method to compute the numbers ε_{2k}^{(n)} for
different values of n and k. This method is based on the relationships (2.46)
to (2.59) of Chapter 2.
Let S be the linear functional defined by
S(Xi) = Si.
Then
S(xni>kn)(x = (l)kGln)
Gk
(n)
. .Cn.+k. . /
Cn+kl
illn)
Cn+2kl
with
illn) = Hk (.1cn).
k+1
(n)
rk+ 1
(n+l)
= G(n+l)_
rk+l G(n)
k
(n)
k
(3.29)
'k+1
which is the recursive algorithm given by Pye and Atchison for implementing the Gtransformation [100], where the numbers ,In) are computed by
(2.43), (2.44) and (2.45).
Let us now consider the particular case where Cn = .1Sn. Then
c(n)(x i ) = Cn+i = .1Sn+i =
S(xn+i+I xn+i)
xn+i)pln)(x)) = 0,
= 0, ... , k 1,
s(n) [
(:+1)
Sk
,(n)
]
ek(Sn) + (~:~) ek1 (Sn+1) ,
rk
S(n+1) [,(n+1)
]
ek+l(Sn) =~ ~:)1 ek(Sn)ek(Sn+1) ,
Sk+1 'k+1
r(n))
'k
,(n)
(~:~) ek1 (Sn+2)'
'k
sln+1))
ek+1 (Snl) = ( 1 + (nl) ek (Sn) Sk+1
sln+l)
(nI)
Sk+1
ek (Sn+l),
where Dln>,
F~n)
and
Eln )
where Iln>, Kkn) and Jln) are given by (2.55), (2.56) and (2.57),
 rln:ll)/sl~l>
= (rlnl2 + rln:12)/sln++1
1).
All these relations can be used to follow any route in the earray. The
rsalgorithm must be implemented with rin) = ..::1Sw
In section 2.7.2 we gave a method to compute the eigenvalues of a matrix A. We first compute the numbers c_n = (y, A^n z) and then use the Lanczos method to obtain a matrix J similar to A. The LR-algorithm is then applied to the tridiagonal matrix J to get its eigenvalues. Since the LR-algorithm for a tridiagonal matrix reduces to the qd-algorithm, we can directly apply the qd-algorithm to the sequence {c_n}, that is to say, with the initial conditions q_1^{(n)} = c_{n+1}/c_n.
If the eigenvalues of A are such that |λ_1| > |λ_2| > ⋯ > |λ_p| then Rutishauser [107] proved

lim_{n→∞} H_k(c_{n+1})/H_k(c_n) = λ_1 λ_2 ⋯ λ_k

and thus

lim_{n→∞} q_k^{(n)} = λ_k,   for k = 1, ..., p.
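This use of the qd-algorithm can be sketched with the classical progressive rhombus rules of Rutishauser (the function name and layout are mine). For c_n = 3^n + 1 the columns q_1^{(n)} and q_2^{(n)} approach the "eigenvalues" 3 and 1:

```python
def qd(c, kmax):
    """Progressive qd algorithm applied to the sequence c:
        q_1^{(n)} = c_{n+1}/c_n,  e_0^{(n)} = 0,
        e_k^{(n)} = q_k^{(n+1)} - q_k^{(n)} + e_{k-1}^{(n+1)},
        q_{k+1}^{(n)} = q_k^{(n+1)} e_k^{(n+1)} / e_k^{(n)}.
    Returns the q-columns as a dict indexed by k = 1, ..., kmax."""
    q = {1: [c[n + 1] / c[n] for n in range(len(c) - 1)]}
    e = {0: [0.0] * len(c)}
    for k in range(1, kmax + 1):
        e[k] = [q[k][n + 1] - q[k][n] + e[k - 1][n + 1]
                for n in range(len(q[k]) - 1)]
        if k < kmax:
            q[k + 1] = [q[k][n + 1] * e[k][n + 1] / e[k][n]
                        for n in range(len(e[k]) - 1)]
    return q

c = [3.0 ** n + 1.0 for n in range(12)]   # "moments" with lambda = 3 and 1
q = qd(c, kmax=2)
```

Each q-column converges at a rate governed by the ratio of consecutive eigenvalues, here (1/3)^n, and the e-columns tend to zero.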
But also

lim_{n→∞} e_k^{(n)} = 0   for k = 1, ..., p.
ε_{2k}^{(n)} = sum_{i=0}^{n} c_i + Q_k^{(n+1)}(1)/P_k^{(n+1)}(1).

Theorem 3.11.

ε_{2k}^{(n)} = S_n + Q_k^{(n+1)}(1)/P_k^{(n+1)}(1).
Theorem 3.12.

S - ε_{2k}^{(n)} = (1/P_k^{(n+1)}(1)) c^{(n+1)}(x^k P_k^{(n+1)}(x)/(1 - x))
= (1/P_k^{(n+1)}(1)) c^{(n+k+1)}(P_k^{(n+1)}(x)/(1 - x))
= (1/[P_k^{(n+1)}(1)]^2) c^{(n+1)}([P_k^{(n+1)}(x)]^2/(1 - x)).
p(n+l)(x))
k
=
Ix
c(n+l)
p~n+l)
(xip(n+l)(x))
k
Ix
we know that
for
1= , ... , .
(n)
e2 k
1
(n+1) (X iP<kn+1 )(X))
p~n+l)(I) C
1 x
for
i = 0, ... , k.
We have
c(n+l)(x i (l X)l) = c(n+l)(x i
If we write P<t+l) as
p~n+l)(x)
aj
for
i = 0, ... , k
j=O
for
= 0, ... , k  1.
for
i = 0, ... , k
/to
e&'iJ = e
llj.
Uk =
L a~k)Sn+i>
i=O
dk =
Pkn+l)(I).
We have
B k+lUk 
d_ 1 =0.
kl
~ (B
(k) C
(klSn+i +Bk+lak(k)Sn+k
Ck+lUkl  t..
k+lai  k+lai
i=O
kl
= t..
~ (a~k+1)a~kS ,+(a(k+1)_a(k~
I
II
n+l
k
kl '"'n+k
i=O
kl
= Uk+1  Sk+l a~k)Sn+i+l
i=O
= Uk+l
L a~k)Sn+i+l'
i=O
Ukl
(~(k)
Uk+l
(l+B
(n)
"2k+2 
(n) + C
"2k+2  "2k
d k  1 n)
(n
k+l  d "2k  "2k2
k+l
or
(n)
(n)
+(l+B
) die n )
(n
"2k+2
_ "2k2
k+l  d B2k  "2k2 .
k+l
(n)
"2k+2"2k
= C2'"
dod l
k+
dk dlk+l
n)
"2
(n
"0
Sn
'v'k> 1.
We shall now give some other expressions for ε_{2k}^{(n)}. Let us consider the determinant D built on the values S_n, ..., S_{n+2k}, bordered by a last column 1, x, ..., x^k. We multiply the first row by x and replace it by its difference from the second row, the second row by x and replace it by its difference from the third row, and so on until the kth row; then we perform the same manipulations on the columns. Setting

w_i = S_{i+2} - 2x S_{i+1} + x^2 S_i,

expanding the resulting determinant with respect to its last column and using a result of Geronimus [60], we get

D = H_k(w_n).
Let K_k^{(n)} be the reproducing kernel associated with the sequence of moments S_n, S_{n+1}, ... as defined in section 2.3.2. If x = 1 then w_i = Δ^2 S_i and we obtain

ε_{2k}^{(n)} = [K_k^{(n)}(1, 1)]^{-1}.
e~nJ =
then
e~'il ~ 0. Moreover
i = 1, ... , k.
That is
for
If h~n) ~
n+oo
n+ OCI
for
and 'In.
and

Δ^2(λ^n S_n) = λ^n (λ^2 S_{n+2} - 2λ S_{n+1} + S_n).

We set

v_n(λ) = λ^2 S_{n+2} - 2λ S_{n+1} + S_n,   w_n(λ) = S_{n+2} - 2λ S_{n+1} + λ^2 S_n.

Then

H_k(Δ^2(λ^n S_n)) = H_k(λ^n v_n(λ)) = λ^{k(n+k-1)} H_k(v_n(λ))
P;") belong to
and let us apply the moment method to the vectors Llz", ... , LlZ,,+k. For this
we consider the square matrix Z with columns Llzn, ... , LlZn +k 1 and a elRk
with components ao, . . , akl such that
The system
zTZa + ZT LlZn+k
=0
can be expressed as
(Llzn, Llzn)ao + (Llz", Llzn+1)al + ...
= Akzn+k 1 + bk
If we look at the particular form of the vector z_n then A_k must have the form of a companion matrix, with a unit subdiagonal and the coefficients a_0, a_1, a_2, ... in its last column,
which explains the connection between the ealgorithm and the moment
method.
Chapter 4
Generalizations
The aim of this chapter is to study several generalizations of the main idea
of Chapter 1 leading to Padetype approximants.
The first of these generalizations will be the case of series with coefficients in a topological vector space. The application to sequences produces a new ε-algorithm, called the topological ε-algorithm, which generalizes the scalar ε-algorithm of Wynn and which can be used to accelerate the convergence of sequences of vectors. This algorithm is also related to the biconjugate gradient method.
The other generalization deals with Padetype approximants for double
power series. Old and new results are obtained in the framework of the
general theory. Finally the case of series of functions will be studied. In all
these generalizations a fundamental role is played by orthogonal polynomials.
4.1.
The topological ε-algorithm
Let f(t) = c_0 + c_1 t + c_2 t^2 + ⋯ be a formal power series whose coefficients c_i, i = 0, 1, ..., belong to a vector space E, let v be a polynomial of degree k, and set w(t) = c((v(x) - v(t))/(x - t)).
This will be a polynomial of degree k-1 in t with coefficients in E, and from the definition of w we get

\tilde w(t) - f(t) \tilde v(t) = O(t^k)

where \tilde w and \tilde v are defined in the usual way and where O(t^k) denotes a series with coefficients in E beginning with a term of degree k. Moreover

c((1 - xt)^{-1}) = f(t)

and Theorem 1.4 applies and we get

f(t) - (k-1/k)_f(t) = (t^k / \tilde v(t)) c(v(x)/(1 - xt)).
c(x^i v(x)) = 0 ∈ E.

This is only possible if c_i, ..., c_{i+k} are linearly dependent for i = 0, ..., m and thus, in the general case, it will be impossible to construct such a polynomial v. However v can be chosen such that

⟨y, c(x^i v(x))⟩ = 0   for i = 0, ..., m

where y is an arbitrary element of the dual space E' of E and where ⟨·,·⟩ denotes the duality between E' and E. If we define the linear functional C by

C(x^i) = ⟨y, c_i⟩   for i = 0, 1, ...

then obviously

C(x^i) = ⟨y, c(x^i)⟩   for i = 0, 1, ...

and the orthogonal polynomials with respect to C are identical with those of the first chapter. Thus when m = k-1 we get
Padetype approximants of the following form
pq
[p/qJt(t)
= L c/ + tPq+lQ~q+l)(t)/f><rq+1)(t),
p, q =0,1, ...
i=O
and where
Q~q+l)
Cp  q +1+i),
i =0,1, ...
is defined by
Q(pq+l)(t) = C(pq+l) (
q
P(Pq+l)() P(Pq+l)(t)
q
X  q
x t
with
(PQ+l)( X i)_
 C
p
Q +l+i>
i =0,1, ....
t P+1
= f><rQ+1)(t) C(pQ+l)
(P(PQ+l)(X)
Q 1xt
p<:q+l)
we get
= O(t P+ q + 1)
with bj
+ ... + bqcp_q+ i =
for
= 0, 1, ...
for
c(xiv(x=O
for i P  q + 1, ... , p
V(x for i 0, ... , q1.
c(xiv(x =
or such that
C(Pq+l)(X i
is identical to
P~q+l)
and consequently
[P/q]f(t) = f(t).
+ ... + bqci =
for
0, 1, ...
i=O
then an algorithm can be found to compute recursively the Padé approximants [n+k/k]_f(t). This algorithm has been called the topological ε-algorithm. It can also be applied to any sequence {S_n} of elements of E, as was done in the scalar case. This algorithm is as follows
ε_{-1}^{(n)} = 0 ∈ E',   ε_0^{(n)} = S_n,   n = 0, 1, ...

ε_{2k+1}^{(n)} = ε_{2k-1}^{(n+1)} + y / ⟨y, ε_{2k}^{(n+1)} - ε_{2k}^{(n)}⟩,

ε_{2k+2}^{(n)} = ε_{2k}^{(n+1)} + (ε_{2k}^{(n+1)} - ε_{2k}^{(n)}) / ⟨ε_{2k+1}^{(n+1)} - ε_{2k+1}^{(n)}, ε_{2k}^{(n+1)} - ε_{2k}^{(n)}⟩,   n, k = 0, 1, ...
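These rules can be sketched in a few lines for vectors in R^p, with ⟨·,·⟩ the ordinary dot product (a hedged sketch of mine, not the book's program; for a sequence with a single geometric term the column ε_2^{(n)} is exact):

```python
def tea(S, y, k):
    """k sweeps of the topological epsilon algorithm; returns the column
    eps_{2k}^{(n)}.  S is a list of vectors (lists), y a fixed vector.
        eps_{2k+1}^{(n)} = eps_{2k-1}^{(n+1)} + y / <y, D>,
        eps_{2k+2}^{(n)} = eps_{2k}^{(n+1)} + D / <d_odd, D>,
    with D = eps_{2k}^{(n+1)} - eps_{2k}^{(n)} and d_odd the difference of
    consecutive odd-column entries."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: [a - b for a, b in zip(u, v)]
    axpy = lambda t, u, v: [t * a + b for a, b in zip(u, v)]
    odd = [[0.0] * len(y) for _ in S]          # eps_{-1}^{(n)} = 0 in E'
    even = [list(s) for s in S]                # eps_0^{(n)} = S_n
    for _ in range(k):
        D = [sub(even[n + 1], even[n]) for n in range(len(even) - 1)]
        odd = [axpy(1.0 / dot(y, D[n]), y, odd[n + 1])
               for n in range(len(D))]
        even = [axpy(1.0 / dot(sub(odd[n + 1], odd[n]), D[n]), D[n],
                     even[n + 1])
                for n in range(len(odd) - 1)]
    return even

S_limit, a, lam = [1.0, 2.0], [1.0, 2.0], 0.5
S = [[S_limit[j] + a[j] * lam ** n for j in range(2)] for n in range(5)]
r = tea(S, [1.0, 1.0], 1)
```

For S_n = S + a λ^n (a single geometric term with ⟨y, a⟩ ≠ 0) the first sweep already returns the limit S up to rounding.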
where Hk is the classical Hankel determinant for the sequence in parentheses and where
Sn+kl
(y, ..1Sn+k 1)
b 1 == a/(b, a)
Sbn ) == 1,
r~n)
== (y, ..1Sn)
and we get
G~n)
== e~nr
much more economical than the topological $\varepsilon$-algorithm since the computation of $\varepsilon_{2k}^{(n)}$ needs $k(2k+1)(5p-1)$ operations and $6kp$ words in the computer memory [27] while the computation of $G_k^{(n)}$ requires $3k(k+5)p/2 + k(13k-9)/2$ operations and only $(k+2)p$ words of the memory.
The G-transformation can also be used for implementing an acceleration method due to Germain-Bonne [59, propriété 12, p. 71]. The transformed vector $y_n$ of this method can be obtained by applying the G-transformation to
$$c_0 = (\Delta x_n, \Delta x_n), \qquad c_1 = (\Delta x_n, \Delta x_{n+1}), \dots$$
and we get
$$G_1^{(0)} = y_n.$$
If the value of $k$ or the value of $n$ is changed then all the computations must be started again.
The diagonal sequence $\{\varepsilon_{2k}^{(0)}\}$ can also be computed recursively by a method similar to the method described in section 3.3.2 for the scalar $\varepsilon$-algorithm. This method is based on the recurrence relation for orthogonal polynomials. Let $\{P_k\}$ be the orthogonal polynomials with respect to the sequence $\{(y, \Delta S_n)\}$. If we write
$$P_k(x) = \sum_{i=0}^{k} a_i^{(k)} x^i$$
then, as previously,
$$\varepsilon_{2k}^{(0)} = u_k / d_k$$
with
$$u_k = \sum_{i=0}^{k} a_i^{(k)} S_i \qquad\text{and}\qquad d_k = P_k(1).$$
From the recurrence relation of the orthogonal polynomials,
$$B_{k+1} u_k - C_{k+1} u_{k-1} = u_{k+1} - \sum_{i=0}^{k} a_i^{(k)} S_{i+1}.$$
The relation on $\sum_{i=0}^{k} a_i^{(k)} S_{n+i}$ which was the basis of our method in the scalar case is no longer true. Thus we shall set
$$u_k^{(n)} = \sum_{i=0}^{k} a_i^{(k)} S_{n+i}$$
with
$$u_k^{(0)} = u_k, \qquad u_k^{(0)}/d_k = \varepsilon_{2k}^{(0)},$$
and
$$u_0^{(n)} = S_n.$$
n e 2k+2
dk
dk
(0)
(0)
k+l  d n e 2k +  d n+l e 2k
k+l
k+l
n, k = 0,1, ...
These elements can be displayed in an array
le~~ =
2e~~ =
3e~~ =
oe~~ =
e(O)
lebO)
= SI
o
1
2e bO)
e(O)
2
= S2
1
2
3e bO)
e(O)
e(O)
4 .
e(O)
2
= S3
e(O)
,,(0)
3"'2
(0)
3~4
(0)
n e 2k
(0)
n e 2k+2
(0)/
n+l e2k
Thus, in this array, we proceed from left to right and from top to bottom
and we obtain
that is to say that the first diagonal of this array is the sequence $\{\varepsilon_{2k}^{(0)}\}$. Moreover it can be seen that $u_k^{(k)}/d_k = {}_k e_{2k}^{(0)}$ are the elements given by the second $\varepsilon$-algorithm defined in [22] and that the $_n e_{2k}^{(0)}$ for $n = 1, \dots, k-1$ are the intermediate elements for which no recursive scheme was previously known.
If now we assume the sequence $\{S_n\}$ to be generated by $S_{n+1} = AS_n + b$, where $b \in E$ and where $A$ is a linear mapping in $E$, then
$$_{n+1}e_{2k}^{(0)} = A\,{}_n e_{2k}^{(0)} + b.$$
Let us set $x_k = e_{2k}^{(0)}$ and
$$r_k = x_k - (A x_k + b).$$
Then we get
$$x_{k+1} = (1 + B_{k+1})\,\frac{d_k}{d_{k+1}}\, x_k - \frac{d_k}{d_{k+1}}\, r_k - C_{k+1}\,\frac{d_{k-1}}{d_{k+1}}\, x_{k-1}$$
or
if
A.k
= Ck+1f.LLl'
This can be compared with the conjugate gradient method for solving
(I  A)x = b as described in section 2.7.3. It is obvious that there is a very
close connection between the two methods; this connection will be studied
in the next section.
4.1.2. Solution of equations
where A is a linear operator with finite rank k (in particular it can be the
operator obtained by the moment method). We consider the following
iteration
$$x_0 \ \text{given}, \qquad x_{n+1} = A x_n + b, \qquad n = 0, 1, \dots.$$
Let $P_k(t) = e_0 + e_1 t + \dots + e_k t^k$ be the characteristic polynomial of $A$. Then
$$\sum_{i=0}^{k} e_i x_{n+i+1} = A \sum_{i=0}^{k} e_i x_{n+i} + \left(\sum_{i=0}^{k} e_i\right) b$$
and, since $\sum_{i=0}^{k} e_i A^i = 0$ by the Cayley–Hamilton theorem,
$$x = \sum_{i=0}^{k} e_i x_{n+i} \Big/ P_k(1), \qquad \forall n.$$
As we saw in section 2.7.2 the characteristic polynomial $P_k$ is the polynomial of degree $k$ belonging to the family of orthogonal polynomials with respect to the sequence $\{c_n = (y, \Delta x_n)\}$ where $y$ is an arbitrary element of the Hilbert space (since $\Delta x_{n+1} = A\,\Delta x_n$ and by taking $z = \Delta x_0$ in Lanczos' method).
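The fixed-point formula above is easy to check numerically. The following sketch (illustrative data, not from the book) builds the iterates $x_{n+1} = Ax_n + b$ and recovers the solution of $(I-A)x = b$ from the characteristic polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
A = 0.4 * rng.standard_normal((k, k))        # arbitrary linear mapping
b = rng.standard_normal(k)
x_star = np.linalg.solve(np.eye(k) - A, b)   # fixed point of x = Ax + b

# iterates x_{n+1} = A x_n + b, starting from x_0 = 0
xs = [np.zeros(k)]
for _ in range(k):
    xs.append(A @ xs[-1] + b)

# characteristic polynomial P_k(t) = e_0 + e_1 t + ... + e_k t^k
coeffs = np.poly(A)            # coefficients, highest degree first
e = coeffs[::-1]               # e[i] multiplies t^i
# x = sum_i e_i x_{n+i} / P_k(1)  (here with n = 0)
x = sum(e[i] * xs[i] for i in range(k + 1)) / np.polyval(coeffs, 1.0)
assert np.allclose(x, x_star)
```

The identity holds exactly (up to rounding) because $\sum_i e_i A^i = 0$ by the Cayley–Hamilton theorem, so the error terms $A^{n+i}(x_0 - x)$ cancel.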
Thus by using Theorem 2.1 we get
$$x = \frac{\begin{vmatrix} x_0 & \cdots & x_k \\ (y, \Delta x_0) & \cdots & (y, \Delta x_k) \\ \vdots & & \vdots \\ (y, \Delta x_{k-1}) & \cdots & (y, \Delta x_{2k-1}) \end{vmatrix}}{\begin{vmatrix} 1 & \cdots & 1 \\ (y, \Delta x_0) & \cdots & (y, \Delta x_k) \\ \vdots & & \vdots \\ (y, \Delta x_{k-1}) & \cdots & (y, \Delta x_{2k-1}) \end{vmatrix}} = \varepsilon_{2k}^{(0)}.$$
The same results hold if $P_k$ is replaced by the minimal polynomial of $A$: letting $\sum_{i=0}^{k} e_i t^i$ be this polynomial, $\sum_{i=0}^{k} e_i x_{n+i}$ is again independent of $n$ and it is easy to see that
$$x = \varepsilon_{2k}^{(0)}.$$
Analogous results can be proved if $t$ is a factor of the preceding minimal polynomial [19].
In section 2.7.3 we saw that the iterates of the conjugate gradient method are given by a similar ratio of determinants built on the vectors $0, b, Bb, \dots, B^{k-1}b$ and the moments
$$c_i = (b, z_i) = (b, B^i b)$$
when this method was applied to the solution of $Bx = b$ with $z_0 = b$.
If we now consider the vectors
$$y_0 = 0, \qquad y_{n+1} = A y_n + b, \qquad n = 0, 1, \dots$$
with $B = I - A$, and if we apply the generalized Shanks transformation to the sequence $\{y_n\}$ with $y = b$, we get the analogous ratio of determinants built on $y_0, \dots, y_k$ and the numbers $c'_i$.
Theorem 4.1. Let $B$ be a selfadjoint operator in a Hilbert space and let $\{x_k\}$ be the sequence obtained by applying the conjugate gradient method to the solution of $Bx = b$ with $x_0 = 0$. Then
$$x_k = e_k(y_0), \qquad k = 0, 1, \dots.$$
Proof. We have
$$z_i = B^i b, \qquad \Delta y_i = A^i b.$$
We shall prove by induction that $\Delta y_i = (-1)^i \Delta^i z_0$. This relation is obviously true for $i = 0$. Let us assume that it is true for an index $i$; then, since $A = I - B$,
$$\Delta y_{i+1} = A\,\Delta y_i = \Delta y_i - B\,\Delta y_i.$$
But
$$(-1)^i B\,\Delta^i z_0 = B\,\Delta y_i = B A^i b = A^i B b = A^i z_1 = (-1)^i \Delta^i z_1$$
and thus
$$\Delta y_{i+1} = (-1)^{i+1}\big(\Delta^i z_1 - \Delta^i z_0\big) = (-1)^{i+1}\, \Delta^{i+1} z_0.$$
Moreover we have
$$\Delta y_i = (-1)^i \sum_{j=0}^{i} \binom{i}{j} (-1)^j z_{i-j}.$$
We can also prove that $z_i = (-1)^i \Delta^{i+1} y_0$ and $c_i = (-1)^i \Delta^i c'_0$. In the determinantal expression for $x_k$ let us replace, in the numerator and in the denominator, the second column by its difference from the first one, the third column by its difference from the second one and so on up to the last. Then let us replace the third column by its difference from the second one and so on. Setting $v_0 = 0$ and $v_i = -B^{i-1}b$ for $i = 1, 2, \dots$ we get
Vo
Co
Xk =
Llvo
Llco
Ck 1 LlCk  l
Ll kv o
Ll kC O
Ll k Ck _1
1
Co
Llco
(l)k
Ll kco
Ckl
LlCk 1
Ll kCk _1
Xk
.d kl Co
.dkco
.d 2kl CO
1
Co
1
.dco
(_1)k
.dkco
..a
kl Co
But
.diVO =
iI
i=O
i=O
=
(:f
(1)iC{B ii)B 1 b
)=0
)=0
= _(B_I)iX+(_WX
= (W(I 
Replacing now
Xk
~ ico
by
(_1)iC~
and
~ ivo
by ( WYi we get
Yo
Yl
c~
c~
(_1)kYk
(_1)kCk
(1)kl Ck _1
(_1)kC~
(1)2kl C2k _1
1
(_1)k
c6
c~
(_1)kCk
( _1)klC~_1
(_1)kCk
(_1)2k 1 C'2k1
Multiplying, in the numerator and in the denominator, the columns $2, 4, \dots$ by $(-1)$ and the rows $2, 4, \dots$ by $(-1)$ we get the result.
Remark. The sequences $\{x_k\}$ and $\{e_k(y_0)\}$ are also identical with the sequence obtained by the moment method and with a sequence obtained by a method due to Germain-Bonne [59].
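For reference, the conjugate gradient iteration of Theorem 4.1 can be sketched as follows (a minimal textbook implementation, not the book's program; the stopping tolerance `tol` is an assumption of this sketch):

```python
import numpy as np

def conjugate_gradient(B, b, tol=1e-12):
    """Conjugate gradient for Bx = b, B symmetric positive definite, x_0 = 0."""
    x = np.zeros_like(b)
    r = b.copy()            # residual b - Bx with x_0 = 0
    p = r.copy()            # first search direction
    rs = r @ r
    for _ in range(2 * len(b)):
        Bp = B @ p
        alpha = rs / (p @ Bp)         # step length
        x += alpha * p
        r -= alpha * Bp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # converged
            break
        p = r + (rs_new / rs) * p     # new B-conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic the iterates terminate after at most $\dim E$ steps, in agreement with the finite determinantal expressions for $x_k$.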
From the practical point of view the conjugate gradient method needs less computation than the topological $\varepsilon$-algorithm, which does not use the property that $(y, A^{i+j}z) = (A^i y, A^j z)$ when $A$ is selfadjoint.
Now if $A$ is not selfadjoint then $x_k$ obtained by the biconjugate gradient method and $e_k(y_0)$ are still given by the same determinantal expressions and, since in the preceding proof no use is made of the selfadjointness of $A$ and $B$, we get the same result.
Remark. It has not been possible to find a connection with the moment
method in the general case.
The topological $\varepsilon$-algorithm can also be used to accelerate the convergence of the sequence $\{y_n\}$ to $x$ when $\|A\| < 1$. If the eigenvalues of $A$ are such that $|\lambda_1| > |\lambda_2| > |\lambda_3| > \cdots$ it can be proved that
$$\lim_{n\to\infty} \varepsilon_{2k}^{(n)} = x, \qquad \forall k,$$
and that
$$\lim_{n\to\infty} \big\|\varepsilon_{2k+2}^{(n)} - x\big\| \Big/ \big\|\varepsilon_{2k}^{(n+2)} - x\big\| = 0, \qquad k = 0, 1, \dots.$$
Thus the computation of $\varepsilon_{2k}^{(0)}$ from $S_0, \dots, S_{2k}$ can be regarded as the determination of the operator $A_k$ such that
$$\Delta S_1 = A_k\,\Delta S_0, \quad \dots, \quad \Delta S_{k-1} = A_k\,\Delta S_{k-2}, \quad \Delta S_k = A_k\,\Delta S_{k-1}.$$
This is the reason why the computation of $\varepsilon_{2k}^{(0)}$ also needs the knowledge of $S_{k+2}, \dots, S_{2k}$. If the sequence is generated by
$$S_1 = F(S_0), \quad \dots, \quad S_{2k} = F(S_{2k-1})$$
then the transform can be written
$$(I - A_k)^{-1}\big(F(x_n) - A_k x_n\big),$$
and an error expansion holds with a mapping $\varphi$ from $\mathbb{R}^k$ into itself such that $\lim_{\|h\|\to 0} \|\varphi(h)\| = 0$.
4.2.
Let f be a double power series. The aim of this section is to define and to
study Padetype approximants for such series.
The generalization presents some difficulties since a polynomial in two variables cannot, in general, be written as a product of factors of degree one. Let the linear functional $c$ be defined by
$$c(x^i y^j) = c_{ij} \qquad\text{for}\qquad i, j = 0, 1, \dots.$$
Let
$$V(x, y) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, x^i y^j.$$
We define $W$ as
$$W(t, s) = c\!\left(\frac{V(x, y) + V(t, s) - V(t, y) - V(x, s)}{(x - t)(y - s)}\right)$$
where $c$ acts on $x$ and $y$ and where $t$ and $s$ are parameters. Thus
$$\frac{V(x, y) + V(t, s) - V(x, s) - V(t, y)}{(x - t)(y - s)} = \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} b_{ij}\,\big(x^{i-1} + \dots + t^{i-1}\big)\big(y^{j-1} + \dots + s^{j-1}\big)$$
and hence
$$W(t, s) = \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} b_{ij}\, c\big((x^{i-1} + \dots + t^{i-1})(y^{j-1} + \dots + s^{j-1})\big).$$
Moreover
$$c\big((x^{i-1} + \dots + t^{i-1})(y^{j-1} + \dots + s^{j-1})\big) = \sum_{n=0}^{i-1} \sum_{m=0}^{j-1} t^n s^m\, c_{i-1-n,\, j-1-m}$$
and thus
$$W(t, s) = \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} b_{ij} \sum_{n=0}^{i-1} \sum_{m=0}^{j-1} c_{i-1-n,\, j-1-m}\, t^n s^m.$$
Lemma 4.2.
$$W(t, s) = \sum_{n=0}^{k_1-1} \sum_{m=0}^{k_2-1} a_{nm}\, t^n s^m \qquad\text{with}\qquad a_{nm} = \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} b_{ij}\, c_{i-1-n,\, j-1-m}.$$
We also set
$$\tilde V(t, s) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, t^{k_1-i} s^{k_2-j}.$$
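As a small numerical consistency check of Lemma 4.2 (the coefficients $c_{ij}$ and $b_{ij}$ below are arbitrary illustrative data, not from the book), one can verify that the reversed polynomial built from the $a_{nm}$ agrees with the product $\tilde V f$ for all powers $t^N s^M$ with $N < k_1$ and $M < k_2$:

```python
import numpy as np

k1, k2 = 2, 3
rng = np.random.default_rng(1)
b = rng.standard_normal((k1 + 1, k2 + 1))    # b[i, j]: generating polynomial V
c = rng.standard_normal((10, 10))            # series coefficients c_{ij}

def cget(i, j):
    # the functional vanishes on negative powers
    return c[i, j] if i >= 0 and j >= 0 else 0.0

# Lemma 4.2: a_nm = sum_{i=1}^{k1} sum_{j=1}^{k2} b_ij c_{i-1-n, j-1-m}
a = np.array([[sum(b[i, j] * cget(i - 1 - n, j - 1 - m)
                   for i in range(1, k1 + 1) for j in range(1, k2 + 1))
               for m in range(k2)] for n in range(k1)])

# Reversed polynomials: coefficient of t^N s^M in W~ is a[k1-1-N, k2-1-M];
# coefficient of t^i s^j in V~ is b[k1-i, k2-j].  Check W~ matches V~ f below
# the orders k1 and k2.
for N in range(k1):
    for M in range(k2):
        d = sum(b[k1 - i, k2 - j] * cget(N - i, M - j)
                for i in range(k1 + 1) for j in range(k2 + 1))
        assert abs(a[k1 - 1 - N, k2 - 1 - M] - d) < 1e-10
```

This is exactly the matching condition used below in the error expansion for $\tilde W - \tilde V f$.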
and
$$\tilde V(t, s)\, f(t, s) = \sum_{N=0}^{\infty} \sum_{M=0}^{\infty} d_{NM}\, t^N s^M \qquad\text{with}\qquad d_{NM} = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, c_{N+i-k_1,\, M+j-k_2},$$
where $c_{pq} = 0$ for $p < 0$ or $q < 0$. The numerator of the approximant is
$$\tilde W(t, s) = \sum_{N=0}^{k_1-1} \sum_{M=0}^{k_2-1} d_{NM}\, t^N s^M$$
and the error is
$$\tilde V(t, s)\, f(t, s) - \tilde W(t, s) = \sum_{N=0}^{\infty} \sum_{M=k_2}^{\infty} d_{NM}\, t^N s^M + \sum_{N=k_1}^{\infty} \sum_{M=0}^{k_2-1} d_{NM}\, t^N s^M.$$
$$f(t, s) = \sum_{i=0}^{n} \sum_{j=0}^{m} c_{ij}\, t^i s^j + t^{n+1} f_A(t, s) + s^{m+1} f_B(t, s) + t^{n+1} s^{m+1} f_W(t, s)$$
with
$$f_A(t, s) = \sum_{i=0}^{\infty} \sum_{j=0}^{m} c_{i+n+1,\, j}\, t^i s^j, \qquad f_B(t, s) = \sum_{i=0}^{n} \sum_{j=0}^{\infty} c_{i,\, j+m+1}\, t^i s^j,$$
$$f_W(t, s) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{i+n+1,\, j+m+1}\, t^i s^j.$$
In the same way we can define the complete table of the approximants $(p_1, p_2/q_1, q_2)_f(t, s)$.

Theorem 4.4.
$$(p_1, p_2/q_1, q_2)_f(t, s) = f(t, s) + O(t^i s^{p_2+1}) + O(t^{p_1+1} s^i), \qquad i \geq 0.$$

Proof. We have separated this property from the preceding one because we have to look more deeply into the structure of the approximant $(k_1, k_2/k_1, k_2)_f$ under consideration. The notations will be the same as in Property 4.1. From section 4.2.1 we have
$$(k_1, k_2/k_1, k_2)_f(t, s) = a_0 b_0 + t(k_1-1, k_2/k_1, k_2+1)_{f_A}(t, s) + s(k_1, k_2-1/k_1+1, k_2)_{f_B}(t, s) + ts(k_1-1, k_2-1/k_1, k_2)_{f_W}(t, s)$$
with
$$f_A(t, s) = \sum_{i=0}^{\infty} c_{i+1, 0}\, t^i = b_0 \sum_{i=0}^{\infty} a_{i+1} t^i, \qquad f_B(t, s) = \sum_{j=0}^{\infty} c_{0, j+1}\, s^j = a_0 \sum_{j=0}^{\infty} b_{j+1} s^j.$$
If we set
$$g_1(t) = \sum_{i=0}^{\infty} a_{i+1} t^i, \qquad g_2(s) = \sum_{j=0}^{\infty} b_{j+1} s^j$$
then
$$f_A(t, s) = b_0\, g_1(t), \qquad f_B(t, s) = a_0\, g_2(s), \qquad f_W(t, s) = g_1(t)\, g_2(s).$$
The approximant $(k_1-1, k_2/k_1, k_2+1)_{f_A}$ is constructed from the generating polynomial $v(x, y) = y\,V(x, y) = y\,v_1(x)\,v_2(y)$. Let $w_A$ be its associated polynomial; then
$$w_A(t, s) = c\!\left(\frac{y\,v_1(x)v_2(y) + s\,v_1(t)v_2(s) - y\,v_1(t)v_2(y) - s\,v_1(x)v_2(s)}{(x - t)(y - s)}\right).$$
Thus
$$\frac{\tilde w_A(t, s)}{\tilde v(t, s)} = b_0\, \frac{\tilde w^{(1)}(t)}{\tilde v_1(t)} = b_0\,(k_1-1/k_1)_{g_1}(t).$$
An analogous result is true for the approximant of $f_B$ and from Property 4.1 we know that the approximant of $f_W$ is the product of the approximants of $g_1$ and $g_2$ multiplied by $a_0 b_0$. Thus we finally get
$$(k_1, k_2/k_1, k_2)_f(t, s) = \big(a_0 + t(k_1-1/k_1)_{g_1}(t)\big)\big(b_0 + s(k_2-1/k_2)_{g_2}(s)\big) = (k_1/k_1)_{f_1}(t)\,(k_2/k_2)_{f_2}(s)$$
approximant to $f$ and let $v(x, 0)$ be the generating polynomial of the approximant to $f_1$, where $v(x, y) = y^{k_2}\, V(x, y^{-1})$; then
$$(k_1-1, k_2-1/k_1, k_2)_f(t, 0) = (k_1-1/k_1)_{f_1}(t),$$
$$(k_1, k_2/k_1, k_2)_f(t, 0) = (k_1/k_1)_{f_1}(t).$$
Proof. From Lemma 4.2 we have
$$W(t, 0) = \sum_{n=0}^{k_1-1} a_{n, k_2-1}\, t^{k_1-1-n} \qquad\text{with}\qquad a_{n, k_2-1} = \sum_{i=1}^{k_1} b_{i k_2}\, c_{i-1-n,\, 0}$$
and
$$V(t, 0) = \sum_{i=0}^{k_1} b_{i k_2}\, t^{k_1-i}.$$
Now
$$v(x, y) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, x^i y^{k_2-j}, \qquad v(x, 0) = \sum_{i=0}^{k_1} b_{i k_2}\, x^i.$$
Its associated polynomial is
$$w(t) = \sum_{n=0}^{k_1-1} w_n t^n \qquad\text{with}\qquad w_n = \sum_{i=0}^{k_1-n-1} c_{i0}\, b_{n+i+1,\, k_2} = \sum_{i=n+1}^{k_1} b_{i k_2}\, c_{i-1-n,\, 0},$$
since $c_{pq} = 0$ for $p < 0$. Thus the first part of the result is proved by using the definitions of $\tilde v(t, 0)$ and $\tilde w(t)$. Now
$$f_A(t, 0) = \sum_{i \geq 0} c_{i+1, 0}\, t^i$$
and the second part follows in the same way.
Obviously the same property also holds by exchanging the roles of t and
s. This property is called projection property.
Property 4.5. If $f(t, s) = f_1(t) + f_2(s)$ and if the generating polynomial of the approximant to $f$ is the product of the generating polynomials of the approximants to $f_1$ and $f_2$, then the approximant of the sum is the sum of the approximants.
Proof. In this case $c_{ij} = 0$ if $i \neq 0$ and $j \neq 0$, and so
$$f_A(t, s) = \sum_{i\geq 0} c_{i+1, 0}\, t^i, \qquad f_B(t, s) = \sum_{j\geq 0} c_{0, j+1}\, s^j, \qquad f_W(t, s) = 0.$$
We have $f_1(t) = c'_{00} + t f_A(t, s)$ and $f_2(s) = c''_{00} + s f_B(t, s)$ with $c'_{00} + c''_{00} = c_{00}$, and the result follows.
then $W(t, s)/V(t, s)$ is the Padé-type approximant $(k_1-1, k_2-1/k_1, k_2)_f$ with $\bar V(x, y) = x^{k_1} y^{k_2}\, V(x^{-1}, y^{-1})$ as generating polynomial.
Proof. The series $W(t, s) - V(t, s) f(t, s)$ has no terms in $t^i s^j$ for $i < k_1$ and $j < k_2$. Identifying the coefficients of the same powers of the variables we find that the coefficients of $V$, $W$ and $f$ are related by the relations used in Lemma 4.2 and in Theorem 4.3. Thus $W$ is the polynomial associated with $V$ and thus $W(t, s)/V(t, s)$ is the corresponding Padé-type approximant with $\bar V$ as generating polynomial.
Property 4.8. Let $V$ and $W$ be two polynomials with degrees $k_1$ in $t$ and $k_2$ in $s$, such that
$$W(t, s) - V(t, s) f(t, s) = O(t^i s^{k_2+1}) + O(s^i t^{k_1+1}), \qquad i \geq 0;$$
then $W(t, s)/V(t, s) = (k_1, k_2/k_1, k_2)_f(t, s)$ with $\bar V(x, y) = x^{k_1} y^{k_2}\, V(x^{-1}, y^{-1})$ as generating polynomial.
Proof. From the proof of Theorem 4.3 we know that
$$W(t, s) = \sum_{N=0}^{k_1} \sum_{M=0}^{k_2} d_{NM}\, t^N s^M \qquad\text{with}\qquad d_{NM} = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, c_{N+i-k_1,\, M+j-k_2}.$$
Moreover
$$d_{NM} = c_{00}\, b_{k_1-N,\, k_2-M} + \sum_{j=k_2-M+1}^{k_2} b_{k_1-N,\, j}\, c_{0,\, M+j-k_2} + \sum_{i=k_1-N+1}^{k_1} b_{i,\, k_2-M}\, c_{N+i-k_1,\, 0} + \sum_{i=k_1-N+1}^{k_1} \sum_{j=k_2-M+1}^{k_2} b_{ij}\, c_{N+i-k_1,\, M+j-k_2}.$$
Thus we have
SM
M~O
tI
k2
bijCN+ikt.M+jk2
i~k,N+l j~k,M+l
tN(,
N~O
N~O M~O
t
t .f
bi,k1MCN+ik"O)
,~k,N+l
tNSM(
,~k,N+l J~k,M+l
bijCN+ikt.M+jk2)'
The proof is now easy to complete. For example the third term is equal to $\bar W_A(t, s) = t^{k_1-1} s^{k_2}\, W_A(t^{-1}, s^{-1})$ with
$$W_A(t, s) = c'\!\left(\frac{y\,V(x, y) + s\,V(t, s) - y\,V(t, y) - s\,V(x, s)}{(x - t)(y - s)}\right)$$
where the functional $c'$ is defined by
$$c'(x^i y^j) = c_{i+1, 0}.$$
Then
$$W_A(t, s) = \sum_{n=0}^{k_1-1} \sum_{m=0}^{k_2} a_{nm}\, t^{k_1-n-1} s^{k_2-m} \qquad\text{with}\qquad a_{nm} = \sum_{i} b_{im}\, c_{i-n, 0},$$
which is exactly the third term. The proofs are the same for the other terms and the result is established.
Properties 4.7 and 4.8 are called first uniqueness properties.
Property 4.9. Let $g$ be the reciprocal series of $f$ defined by
$$f(t, s)\, g(t, s) = 1.$$
Then
$$(k_1, k_2/k_1, k_2)_f(t, s)\,(k_1, k_2/k_1, k_2)_g(t, s) = 1$$
if the denominator of one approximant is the numerator of the other one.
Proof. We set
$$(k_1, k_2/k_1, k_2)_f(t, s) = \tilde W(t, s)/\tilde V(t, s).$$
We have
$$\tilde W(t, s) - \tilde V(t, s) f(t, s) = O(t^i s^{k_2+1}) + O(t^{k_1+1} s^i), \qquad i \geq 0.$$
Multiplying by $g$ we get
$$g(t, s)\,\tilde W(t, s) - \tilde V(t, s) = O(t^i s^{k_2+1}) + O(t^{k_1+1} s^i)$$
which shows, by Property 4.8, that $\tilde V(t, s)/\tilde W(t, s) = (k_1, k_2/k_1, k_2)_g(t, s)$ constructed with $\bar W(x, y) = x^{k_1} y^{k_2}\, W(x^{-1}, y^{-1})$ as generating polynomial.
This property is called reciprocal covariance. A necessary and sufficient condition for $g$ to exist is that $c_{00} \neq 0$.
Property 4.10. We set
$$(k_1, k_2/k_1, k_2)_f(t, s) = U(t, s)/V(t, s)$$
and
$$g(t, s) = \frac{A + B f(t', s')}{C + D f(t', s')}$$
with $t' = at/(1 - bt)$, $s' = es/(1 - ds)$, $ae \neq 0$ and $C + D c_{00} \neq 0$.
Proof. If we set
$$U(t, s) = \sum_{i}\sum_{j} a_{ij}\, t^i s^j, \qquad V(t, s) = \sum_{i}\sum_{j} b_{ij}\, t^i s^j$$
then
$$U(t, s) - V(t, s) f(t, s) = O(t^i s^{k_2+1}) + O(t^{k_1+1} s^i), \qquad i \geq 0.$$
Moreover
$$(1 - bt)^{k_1}(1 - ds)^{k_2}\, U(t', s') = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} a_{ij}\,(at)^i (1 - bt)^{k_1-i} (es)^j (1 - ds)^{k_2-j}.$$
Thus $(k_1, k_2/k_1, k_2)_f(t', s')$ is the ratio of two polynomials with degrees $k_1$ in $t$ and $k_2$ in $s$ which fits $g$ up to the terms $t^{k_1} s^{k_2}$. Thus, by Property 4.8, this approximant is identical to $(k_1, k_2/k_1, k_2)_g(t, s)$ constructed with $(1 - bt)^{k_1}(1 - ds)^{k_2}\, V(t', s')$ as a denominator.
Now, for the second part of the proof, we verify that
$$\frac{A\,V(t, s) + B\,U(t, s)}{C\,V(t, s) + D\,U(t, s)} - \frac{A + B f(t, s)}{C + D f(t, s)} = O(t^i s^{k_2+1}) + O(t^{k_1+1} s^i), \qquad i \geq 0.$$
Since $f(0, s) = 0$, $\forall s$, we have $c_{0j} = 0$, $\forall j$, and
$$g(t, s) = \sum_{i}\sum_{j} c_{i+1,\, j}\, t^i s^j,$$
since $f$ and $t(k_1-1, k_2-1/k_1, k_2)_g$ have no terms in $t^0 s^j$. Moreover $t(k_1-1, k_2-1/k_1, k_2)_g$ is the ratio of a polynomial with degree $k_1$ in $t$ and $k_2-1$ in $s$ divided by a polynomial with degree $k_1$ in $t$ and $k_2$ in $s$. By a uniqueness proof similar to that of Property 4.8 it is identical with $(k_1, k_2-1/k_1, k_2)_f$ constructed with the same denominator.
We also have the following similar results if the approximants are constructed with the same denominators.
Property 4.12. Let $f(t, 0) = 0$, $\forall t$, and $s\,g(t, s) = f(t, s)$. Then
$$s(k_1-1, k_2-1/k_1, k_2)_g(t, s) = (k_1-1, k_2/k_1, k_2)_f(t, s).$$
Let $f(t, 0) = f(0, s) = 0$, $\forall t$ and $s$, and $st\,g(t, s) = f(t, s)$. Then
$$st(k_1-1, k_2-1/k_1, k_2)_g(t, s) = (k_1, k_2/k_1, k_2)_f(t, s).$$
Lemma 4.3.
$$f(t, s) = c\big((1 - xt)^{-1}(1 - ys)^{-1}\big).$$
Proof. We formally have
$$(1 - xt)^{-1}(1 - ys)^{-1} = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} x^i y^j\, t^i s^j,$$
and applying $c$ gives the result.
Let now the points $x_i$ and $y_j$ be distinct for $i = 1, \dots, k_1$ and $j = 1, \dots, k_2$, and let
$$u(x) = (x - x_1) \cdots (x - x_{k_1}), \qquad v(y) = (y - y_1) \cdots (y - y_{k_2}).$$
Theorem 4.5.
$$c(P) = (k_1-1, k_2-1/k_1, k_2)_f(t, s).$$
Chapter 4
204
$$P_1(x) = \sum_{i=1}^{k_1} \frac{u(x)}{u'(x_i)(x - x_i)}\cdot \frac{1}{1 - x_i t}, \qquad P_2(y) = \sum_{j=1}^{k_2} \frac{v(y)}{v'(y_j)(y - y_j)}\cdot \frac{1}{1 - y_j s}$$
and thus
$$c(P) = c\big(P_1(x) P_2(y)\big) = \sum_{i=1}^{k_1} \frac{1}{u'(x_i)(1 - x_i t)} \sum_{j=1}^{k_2} \frac{1}{v'(y_j)(1 - y_j s)}\, W(x_i, y_j)$$
with
$$W(t, s) = c\!\left(\frac{u(x) - u(t)}{x - t}\cdot \frac{v(y) - v(s)}{y - s}\right) = c\!\left(\frac{u(x)v(y) + u(t)v(s) - u(t)v(y) - u(x)v(s)}{(x - t)(y - s)}\right).$$
Thus
$$\frac{1}{st}\cdot \frac{W(t^{-1}, s^{-1})}{u(t^{-1})\, v(s^{-1})} = \tilde W(t, s)/\tilde V(t, s).$$
We know that
$$P_1(x) = g(x) + O\big[(x - t)^{k_1}\big], \qquad P_2(y) = h(y) + O\big[(y - s)^{k_2}\big],$$
and thus
$$c(P) = c\big(g(x)h(y)\big) + c\big(g(x)\,O[(y - s)^{k_2}]\big) + c\big(h(y)\,O[(x - t)^{k_1}]\big) + c\big(O[(x - t)^{k_1}]\,O[(y - s)^{k_2}]\big)$$
or
As the last term is included in the two preceding terms we get the result of
Theorem 4.3.
Let us now see how approximants with various degrees can be defined
from polynomial interpolation. We have
Theorem 4.6

W(t,s)f(t,s)V(t,s)=t 's
Proof. From the definition of
2C
W we
have
i~O
i ~ O.
L L tisiC(XiyiV(tt, y))
i=Oi=O
= t k' sk2
for
c( Xl+n m~o brmym+i )
=0
n = k 1, ... , 11.
if
Thus we get
SM
M=k2
tNdNM
N=O
with
k,
dNM
=L
k2
blmCI+Nk"m+Mk2
1=0 m=O
N=k,
~
N=k,
tN
sMdNM,
M=O
~
tN
M=k2
sMdNM
and we get the expansion of the error given in the proof of Theorem 4.3.
Before giving another expression for the error term we shall prove a
lemma which will also be useful later
Lemma 4.4.
$$d_{NM} = c\big(x^{N-k_1} y^{M-k_2}\, V(x, y)\big).$$
Proof.
$$d_{NM} = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} b_{ij}\, c\big(x^{i+N-k_1} y^{j+M-k_2}\big) = c\big(x^{N-k_1} y^{M-k_2}\, V(x, y)\big).$$
Theorem 4.7.
$$W(t, s) - f(t, s)\,V(t, s) = t^{k_1} s^{k_2}\, c\big(V(x, y)\,g(x)\,h(y)\,(1 - t^{-k_1} x^{-k_1} s^{-k_2} y^{-k_2})\big).$$
Proof. Using Lemma 4.4 and the previous expansion for the error we get
$$t^{k_1} s^{k_2}\, c\big(V(x, y)\,g(x)\,h(y)\big) = \sum_{N=k_1}^{\infty} \sum_{M=k_2}^{\infty} s^M t^N\, c\big(x^{N-k_1} y^{M-k_2}\, V(x, y)\big) = \sum_{N=k_1}^{\infty} \sum_{M=k_2}^{\infty} d_{NM}\, t^N s^M.$$
We also have
t k1 s k2 C(V(X, sl)g(x)h(y
= t k1 C(yk2V(X, y)g(x)h(y
11
00
Cii
00
xiyi da(x, y)
vex, y) E
Before ending this section let us give another expression for the
approximants which could be useful for proving convergence results.
If we set
$$S_{nm} = \sum_{i=0}^{n} \sum_{j=0}^{m} c_{ij}\, t^i s^j,$$
the approximant can also be written as a combination $\sum_{n}\sum_{m} B_{n+1, m+1}\, S_{nm}$ of these partial sums.
The conditions
$$d_{N,\, M-N+k} = 0 \qquad\text{for}\qquad M = 0, \dots, k-1 \quad\text{and}\quad N = 0, \dots, M$$
are completed by the symmetrized conditions
$$c\big((x^{N-k} y^{k-N} + x^{k-N} y^{N-k})\, V(x, y)\big) = 0 \qquad\text{for}\qquad N = 1, \dots, k.$$
This choice was proposed by Chisholm [41] to define Padé approximants for double power series. The preceding equations uniquely determine $V$. We shall call these approximants C-approximants and denote them by $\langle k-1/k\rangle_f(t, s)$ without repeating the degree for each variable since it is the same.
By construction we have
Theorem 4.9.
$$\langle k-1/k\rangle_f(t, s) - f(t, s) = O(t^i s^{2k-i}), \qquad i = 0, \dots, 2k.$$
Indeed, when $c(x^i y^j) = a(x^i)\, b(y^j)$ and $V(x, y) = u(x)\,v(y)$, the symmetrized conditions become
$$a\big(x^{N-k} u(x)\big)\, b\big(y^{k-N} v(y)\big) + a\big(x^{k-N} u(x)\big)\, b\big(y^{N-k} v(y)\big) = 0 \qquad\text{for}\qquad N = 1, \dots, k,$$
so that the projection of the approximant reduces to $[k-1/k]_{f_1}(t)$ by the uniqueness property of Padé approximants in one variable.
The other properties of section 4.2.2 are also true and mainly the covariance properties (but only with $a = e$ for homographic covariance because of the supplementary equations).
We can also construct higher order approximants with various degrees in the numerator and in the denominator. For example if we set
$$(n+k, n+k/k, k)_f(t, s) = U(t, s)/V(t, s)$$
then we have
$$V(t, s)\, f(t, s) - U(t, s) = \sum_{N=0}^{n+k} \sum_{M=k+n+1}^{\infty} d_{NM}\, t^N s^M + \sum_{N=k+n+1}^{\infty} \sum_{M=0}^{n+k} d_{NM}\, t^N s^M + \sum_{N=n+k+1}^{\infty} \sum_{M=n+k+1}^{\infty} d_{NM}\, t^N s^M.$$
The first sum is equal to
$$\sum_{M=0}^{\infty} \sum_{N=0}^{M} d_{N,\, M-N+k+n+1}\, t^N s^{M-N+k+n+1}$$
and similarly for the second one. Thus $V$ is defined by
$$c\big(x^{N-k} y^{M-N+n+1}\, V(x, y)\big) = 0, \qquad c\big(x^{M-N+n+1} y^{N-k}\, V(x, y)\big) = 0$$
for $M = 0, \dots, k-1$ and $N = 0, \dots, M$, together with the symmetrized conditions
$$d_{2k-N+n+1,\, N} + d_{N,\, 2k-N+n+1} = 0 \qquad\text{for}\qquad N = 1, \dots, k,$$
and then the approximants are defined as in section 4.2.1. They will be denoted by $\langle n+k/k\rangle_f$, and we get, by construction
Theorem 4.10.
$$\langle n+k/k\rangle_f(t, s) - f(t, s) = O(t^i s^{n+2k+1-i}), \qquad i = 0, \dots, n+2k+1.$$
For example if $n = 0$ and $k = 1$, $V$ is defined by
$$c\big(x^{-1} y\, V(x, y)\big) = 0, \qquad c\big(x y^{-1}\, V(x, y)\big) = 0, \qquad c\big((x + y)\, V(x, y)\big) = 0.$$
The corresponding numerator is
$$U(t, s) = c_{00} + t\,(c_{00} b_{01} + c_{10}) + s\,(c_{00} b_{10} + c_{01}) + st\,(c_{00} b_{00} + b_{10} c_{10} + b_{01} c_{01} + c_{11}).$$
We shall now give an expression for the error of higher order approximants
but we first need two lemmas
Lemma 4.5
kl
(1 a)(1 b)
L L ajb
i
j = 1
i=O j=O
L ajb
j=O
k
j+
L ajb
Ak =
+H ,
j=l
= 1. We assume it
L L ajbii,
i=Oj=O
k
Bk =
L ajb
k
j,
j=O
k
Ck
Lab +
j
1 j
j=l
A k+ l = Ak +
L ajb k j = Ak + Bk
j=O
and
= 1  Bk + Ck +
=
since
Bk  aBk  bBk
1
k
1 b + _ aBk + abBk
abBk
213
Generalizations
We also have
abBk =
k+l
i=O
i=1
1 ai+lbk+li = 1 a i b k+2 i =
k+l
aBk
=1
a i +1b k j
=1
j=O
CHI>
j=1
Finally
A k+1(1 a)(1 b) = 1 B k+1+ CHI.
Lemma 4.6.
k i 
i~ a i b +1 i ](1a)1(1b)\
k
Vk~1.
$$(1-a)^{-1}(1-b)^{-1} = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} a^i b^j.$$
= 0, ... , k  1
y = C(XMNyNkV(X, y =
and N
= 0, ... , M
then
with
Hk(a, b)=
j=O
j=1
1 a i b k i 1 a i b +1 i.
k
We also have
$$V(t, s)\, f(t, s) - W(t, s) = c\big(V(x, y)\,g(x)\,h(y)\, x^{-k} y^{-k}\, H_{2k}(xt, ys)\big).$$
Proof
<X
c(V(x, y)g(x)h(y)x k )
=1
1 tiSiiC(XikyiiV(X, y
i=Oj=O
k1
=1
1 tjsiiC(XikyiiV(X, y
i=O j=O
The first sum is zero from the orthogonality relations which define V. Using
Lemma 4.6 we see that the second term is equal to
c(V(x, y)g(x)h(y)xkHdxt, ys
214
Chapter 4
The error term can be decomposed into three parts. The first is
M
00
tkskC(V(X, y)g(x)h(y =
I I
dN+k,M_N+ktN+kSMN+k
M=O N=O
which has no terms with total degree less than 2k. The second term is
M
00
I L dN,M_N+ktNsMN+k
SkC(V(X, y)g(x)h(y)xkHdxt, ys
M=kN=O
00
tkc(V(x, y)g(x)h(y)ykHk(xt, ys
L L dM_N+k,NtMN+kSN
M=kN=O
kl
I I
dN,M_N+ktNsMN+k+
M=k N=O
00
00
<Xl
M=k
N=k
I I
dN,M_N+ktNsMN+k
kl
00
'\' '\' d
N MN+k + '\'
'\' d N+k,MN+k t N+k s MN+k .
t... t...
N,MN+k t s
t... t...
M=k N=O
M=O N=O
I I
00
dN+k,M_NtN+kSMN =
M+k
I I
dN,M_N+ktNsMN+k.
M=k N=k
M=k N=O
'\'
t...
M=k
M+k
(.0
'\' d
N MN+k
'\' d
tN MN
t...
N,MN+k t s
= '\'
t...
t... N,MN s
N=O
M=2k N=O
=
c(
xkykV(X, y)
M~2k
N%O
(xt)N(YS)MN).
We apply Lemma 4.6 and we get the second expression for the error.
Remarks. If $s = 0$ then $H_{2k}(xt, 0) = (xt)^{2k}$ and we get
$$V(t, 0)\, f(t, 0) - W(t, 0) = t^{2k}\, c\big(x^k y^{-k}\, V(x, y)\, g(x)\big),$$
which is the error term for Padé approximants in one variable since
$$c\big(x^n y^{-k}\, V(x, y)\big) = \sum_{i=0}^{k} b_{ik}\, c_{n+i,\, 0}.$$
and thus
V(t, s)t(t, s) Wet, s)
k 1
Xk V(x, y)
= t2k+l c(X + ykV(x, y)  s 2k+l c(yk+1
=''
In Theorem 4.11 we have not used the supplementary equations. Thus this result is true for C-approximants.
From Lemma 4.5 all the orthogonality relations for $V$ can be gathered together in
$$c\big(V(x, y)\,g(x)\,h(y)\, x^{-k}\,(1 - H_k(xt, ys))\big) = 0, \qquad \forall t, s,$$
$$c\big(V(x, y)\,g(x)\,h(y)\, y^{-k}\,(1 - H_k(xt, ys))\big) = 0, \qquad \forall t, s.$$
Comparing the two expressions for the error we get the following relation which can also be proved directly
$$x^{-k} y^{-k} H_{2k}(xt, ys) = t^k s^k\big(x^{-k} t^{-k} + y^{-k} s^{-k}\big) H_k(xt, ys) - t^k s^k.$$
There is an important property which is not satisfied by C-approximants: it is the consistency property, which means that if $f$ is the expansion of a rational function with numerator of degree $k-1$ in $t$ and $k-1$ in $s$ divided by a denominator with degree $k$ in $t$ and $k$ in $s$, then the higher order approximant $(k-1, k-1/k, k)_f$ must be identical to $f$. This failure comes from the supplementary conditions added by Chisholm because if $f(t, s) = A(t, s)/B(t, s)$ then we must have $W(t, s) = aA(t, s)$ and $V(t, s) = aB(t, s)$ and there is no reason that
$$c\big((x^{N-k} y^{k-N} + x^{k-N} y^{N-k})\, V(x, y)\big) = c\big((x^{N-k} y^{k-N} + x^{k-N} y^{N-k})\, B(x, y)\big) = 0 \qquad\text{for}\qquad N = 1, \dots, k.$$
We shall now use some other supplementary conditions which ensure that the consistency property is satisfied, in addition to the other properties.
To have the consistency property we must set to zero additional coefficients $d_{NM}$ instead of Chisholm's symmetrized equations. If we do not want to lose the symmetry property we must set $d_{MN}$ also to zero. Since we must only add $k$ supplementary conditions, $d_{NN}$ must be zero if $k$ is odd. Apart from this restriction we are free in the choice of the indexes of the terms to be set to zero.
Let us now study if the other properties impose some constraints on the choice of the indexes. For the factorisation property we have
$$d_{NM} = c\big(x^{N-k} y^{M-k}\, V(x, y)\big) = a\big(x^{N-k} u(x)\big)\, b\big(y^{M-k} v(y)\big) = 0,$$
$$d_{MN} = c\big(x^{M-k} y^{N-k}\, V(x, y)\big) = a\big(x^{M-k} u(x)\big)\, b\big(y^{N-k} v(y)\big) = 0.$$
For $(k-1, k-1/k, k)_f$ to be the product of the Padé approximants in one variable it is necessary for $u$ and $v$ to be the orthogonal polynomials of degree $k$ with respect to $a$ and $b$. That is to say
$$a\big(x^i u(x)\big) = b\big(y^i v(y)\big) = 0 \qquad\text{for}\qquad i = 0, \dots, k-1.$$
Thus we only have two possible choices for the indexes $N$ and $M$: $0 \leq N-k \leq k-1$ and $M$ arbitrary, or $0 \leq M-k \leq k-1$ and $N$ arbitrary. That is to say that $(N, M)$ must belong to the lattice in the grey zone (including the boundary) of the corresponding figure. Denoting this set of indexes by $Q_k$, we set
$$U = \sum_{(i,j)\in Q_k} d_{ij}\, t^i s^j \qquad\text{so that}\qquad V f - U = \sum_{(i,j)\notin Q_k} d_{ij}\, t^i s^j.$$
$$\sum_{(i,j)\in S} a_{ij}\, c_{p-i,\, q-j} = 0, \qquad \forall (p, q) \in A,$$
with $c_{ij} = 0$ if $i < 0$ or $j < 0$.
Proof. Let us first prove that the condition is necessary. We assume that $f$ is such a rational function. Thus
$$\sum_{(i,j)\in S} a_{ij}\, c_{p-i,\, q-j} = \begin{cases} 0 & \forall (p, q) \in A, \\ b_{pq} & \text{otherwise.} \end{cases}$$
Remark. If all the $b_{pq}$ are zero then $f$ has no power series expansion as was assumed.
Let $(i_1, j_1), \dots, (i_s, j_s)$ be the elements of $S$; the condition is then a linear system in the $a_{ij}$, $\forall (p, q) \in A$.
the homogeneous system is zero since the solution is not identically zero.
Corollary 4.2. If the corresponding determinants vanish and if the leading minor is not zero, then $f$ is a rational function.
Proof. Since the first determinant is zero, $a_{ij}$ not all zero exist such that
$$\sum_{(i,j)\in S} a_{ij}\, c_{p_k - i,\, q_k - j} = 0, \qquad k = 1, \dots, s. \tag{4.1}$$
This system of linear equations has rank $s-1$ and its leading minor is not zero. Thus it has a unique solution with $a_{i_1 j_1} = 1$. This solution is given by the remaining equations.
If the last equation in (4.1) is replaced by
$$\sum_{(i,j)\in S} a_{ij}\, c_{p - i,\, q - j} = 0, \qquad \forall (p, q) \in A, \tag{4.2}$$
then the solution of the system of the remaining equations does not change and thus the solution of (4.1) is also the same.
Thus (4.2) is satisfied $\forall (p, q) \in A$ with the same $a_{ij}$ and, by Theorem 4.12, $f$ is a rational function.
4.3. Series of functions
Let us now assume that $f$ is a series of functions
$$f(t) = \sum_{i=0}^{\infty} c_i\, g_i(t).$$
We shall try to generalize Padé-type approximants to such series.
Padé approximants for series of orthogonal polynomials have been studied by Clenshaw and Lord [45], Gragg and Johnson [64], Fleisher [52], Holdeman [75], Maehly [89] and Snyder [127]. All these authors are looking for approximations of the form
$$\sum_{k} d_k^{(p,q)}\, g_k(t) = \sum_{i=0}^{p} a_i t^i \Big/ \sum_{i=0}^{q} b_i t^i,$$
choosing the coefficients $a_i$ and $b_i$ so that the power series expansion coincides with $f$ as far as possible. This approach will now be studied in the framework of the first chapter and generalized to Padé-type approximants.
We consider the series $f$ and let $G$ be the generating function of the family $\{g_i\}$ (for properties and examples, see [74])
$$G(x, t) = \sum_{i=0}^{\infty} x^i\, g_i(t).$$
Let $c$ be the linear functional acting on the variable $x$ and defined by
$$c(x^i) = c_i, \qquad i = 0, 1, \dots.$$
We obviously have
Lemma 4.7.
$$f(t) = c\big(G(x, t)\big).$$
We shall obtain an approximation of $f$ by replacing $G$ by its interpolation polynomial $P$ at $x_1, \dots, x_k$. We have
$$P(x) = \sum_{i=1}^{k} L_i^{(k)}(x)\, G(x_i, t)$$
with
$$L_i^{(k)}(x) = \prod_{\substack{j=1 \\ j\neq i}}^{k} \frac{x - x_j}{x_i - x_j} = \frac{v(x)}{(x - x_i)\, v'(x_i)}$$
and
$$v(x) = (x - x_1) \cdots (x - x_k).$$
Thus we obtain
$$c(P) = \sum_{i=1}^{k} A_i^{(k)}\, G(x_i, t)$$
with
$$A_i^{(k)} = c\big(L_i^{(k)}(x)\big) = \frac{w(x_i)}{v'(x_i)}$$
and $w$ defined by
$$w(t) = c\!\left(\frac{v(x) - v(t)}{x - t}\right).$$
Theorem 4.13.
$$(k-1/k)_f(t) = \sum_{i=0}^{\infty} e_i\, g_i(t)$$
with
$$e_j = \sum_{i=1}^{k} A_i^{(k)}\, x_i^j, \qquad j \geq 0,$$
and
$$e_i = c_i, \qquad i = 0, \dots, k-1.$$
Proof. The interpolation polynomial reproduces the powers of $x$ up to degree $k-1$:
$$x^j = \sum_{i=1}^{k} L_i^{(k)}(x)\, x_i^j \qquad\text{for}\qquad j = 0, \dots, k-1.$$
Thus
$$c_j = \sum_{i=1}^{k} A_i^{(k)}\, x_i^j \qquad\text{for}\qquad j = 0, \dots, k-1.$$
We have
$$(k-1/k)_f(t) = \sum_{i=1}^{k} A_i^{(k)}\, G(x_i, t) = \sum_{i=1}^{k} A_i^{(k)} \sum_{j=0}^{\infty} x_i^j\, g_j(t) = \sum_{j=0}^{\infty} \left(\sum_{i=1}^{k} A_i^{(k)}\, x_i^j\right) g_j(t).$$
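The construction above can be sketched numerically. This is a minimal illustration (moments $c_j$ and interpolation points $x_i$ below are arbitrary assumed data): it computes $A_i^{(k)} = c(L_i^{(k)})$ and checks that $e_j = \sum_i A_i^{(k)} x_i^j$ reproduces $c_j$ for $j = 0, \dots, k-1$, as Theorem 4.13 states.

```python
import numpy as np

c = np.array([1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125])  # moments c_j (assumed)
xs = np.array([0.3, -0.2, 0.7])                          # interpolation points (assumed)
k = len(xs)

A = np.empty(k)
for i in range(k):
    # Lagrange basis polynomial L_i^{(k)}, applied coefficientwise to c
    Li = np.poly1d([1.0])
    denom = 1.0
    for j in range(k):
        if j != i:
            Li = Li * np.poly1d([1.0, -xs[j]])
            denom *= xs[i] - xs[j]
    coeffs = Li.coeffs[::-1] / denom      # coeffs[j] multiplies x^j
    A[i] = float(np.dot(coeffs, c[:len(coeffs)]))

# e_j = sum_i A_i x_i^j reproduces c_j for j = 0, ..., k-1
e = np.array([np.sum(A * xs ** j) for j in range(len(c))])
assert np.allclose(e[:k], c[:k])
```

The higher coefficients $e_j$, $j \geq k$, are then the approximant's coefficients in the basis $\{g_j\}$.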
Let us now show how to construct the whole table of approximants. We can write
$$G(x, t) = \sum_{i=0}^{n} x^i g_i(t) + x^{n+1}\, G_n(x, t)$$
where $G_n$ is the generating function of the family $\{g_{n+i+1}\}$ [74]. We have
$$f(t) = \sum_{i=0}^{n} c_i\, g_i(t) + c^{(n+1)}\big(G_n(x, t)\big)$$
where $c^{(n+1)}(x^i) = c_{n+i+1}$, and, replacing $G_n$ by its interpolation polynomial at $x_1, \dots, x_k$, we obtain the approximation
$$\sum_{i=0}^{n} c_i\, g_i(t) + \sum_{i=1}^{k} \frac{A_i^{(k)}}{x_i^{n+1}}\, G(x_i, t) - \sum_{i=1}^{k} \frac{A_i^{(k)}}{x_i^{n+1}} \sum_{j=0}^{n} x_i^j\, g_j(t),$$
the $A_i^{(k)}$ being now computed with the functional $c^{(n+1)}$.
We shall of course assume that none of the $x_i$'s is zero. By analogy, let us denote this approximation by $(n+k/k)_f$. We have
$$(n+k/k)_f(t) = \sum_{i=0}^{n} b_i\, g_i(t) + \sum_{i=1}^{k} B_i^{(k)}\, G(x_i, t)$$
with
$$b_i = c_i - \sum_{j=1}^{k} B_j^{(k)}\, x_j^i, \qquad i = 0, \dots, n,$$
$$B_i^{(k)} = A_i^{(k)}/x_i^{n+1}, \qquad i = 1, \dots, k,$$
and
$$c_{n+j+1} = \sum_{i=1}^{k} B_i^{(k)}\, x_i^{n+j+1} = \sum_{i=1}^{k} A_i^{(k)}\, x_i^j \qquad\text{for}\qquad j = 0, \dots, k-1,$$
that is $e_j = c_j$ for $j = n+1, \dots, n+k$.
Theorem 4.15.
$$f(t) - (n+k/k)_f(t) = O\big(g_{n+k+1}(t)\big).$$
Proof. We have
$$(n+k/k)_f(t) = \sum_{i=0}^{n} c_i\, g_i(t) + \sum_{j=n+1}^{\infty} e_j\, g_j(t)$$
with $e_j = \sum_{i=1}^{k} B_i^{(k)}\, x_i^j$. Since $e_j = c_j$ for $j = n+1, \dots, n+k$ we get
$$f(t) - (n+k/k)_f(t) = \sum_{i=n+k+1}^{\infty} (c_i - e_i)\, g_i(t).$$
{gin+1}
Xl, ... ,
Xn+k and
n+k
L A~n+k)G_n(Xi' t)
i=l
with
A~n+k)
= c<n+l)(L\n+k)(x.
We have
C
.
n+J+l
= c<n+l)(x j ) =
n+k
~
~
i=l
A ~n+k)x!
I
"
That means that the numbers A\n+k) are given as the solution of the system
C.
J
n+k
~ A~n+k)xn+jl
I..J
I
I
,
j=n+l, ... , k.
i=l
Proof
(kin + k),(t) =
n+k
00
i=l
j=O
L A\n+k)x~l L x{gj(t)
00
=L
ejgj(t)
j=O
with
ej =
n+k
L A\n+k)x~+jl.
i=l
Thus
$$(k/n+k)_f(t) = \sum_{j=0}^{k} c_j\, g_j(t) + \sum_{j=k+1}^{\infty} e_j\, g_j(t)$$
and
$$f(t) - (k/n+k)_f(t) = \sum_{i=k+1}^{\infty} (c_i - e_i)\, g_i(t).$$
Thus we are able to construct the whole table of Padé-type approximants with the property that
$$f(t) - (p/q)_f(t) = O\big(g_{p+1}(t)\big).$$
Theorem 4.17. If the interpolation points $x_1, \dots, x_q$ are the zeros of the polynomial $P_q^{(p-q+1)}$ of degree $q$ which is orthogonal with respect to the functional $c^{(p-q+1)}$, then $e_i = c_i$ for $i = 0, \dots, p+q$.
From the practical point of view the construction of $[p/q]_f(t)$ needs the determination of $P_q^{(p-q+1)}$ by using the recurrence relation and then computing its zeros. This was the procedure followed by Hornecker [76]. It is different from the one used for the classical Padé approximants since, in that case, the denominator of the approximant is $\tilde P_q^{(p-q+1)}$ and the knowledge of the zeros is not needed. It does not seem that such a method can be developed in the general case.
In the particular case where $g_i$ is the Chebyshev polynomial of degree $i$, Paszkowski [95, 96] has obtained an improvement of Hornecker's method in which the zeros do not have to be computed. Let us now give these results.
Let
$$g_i(t) = T_i(t), \qquad i = 0, 1, \dots.$$
The generating function satisfies
$$\frac{2(1 - xt)}{1 - 2xt + x^2} = 2 + 2\sum_{i=1}^{\infty} x^i\, T_i(t).$$
Let $P_k^{(n+1)}$ be the polynomial of degree $k$ orthogonal with respect to the functional $c^{(n+1)}$. We shall write the approximant as
$$(n+k/k)_f(t) = R(t)/S(t).$$
Paszkowski proved
Theorem 4.18.
$$S(t) = \tfrac{1}{2} e_0 + e_1 T_1(t) + \dots + e_k T_k(t)$$
with
$$e_i = \sum_{j=0}^{k-i} d_j\, d_{j+i}, \qquad i = 0, \dots, k,$$
$$h_0 = \tfrac{1}{2}\, c_0 e_0 + \sum_{i=1}^{k} c_i e_i,$$
$$h_i = \tfrac{1}{2}\left(c_i e_0 + \sum_{j=1}^{k} \big(c_{|i-j|} + c_{i+j}\big)\, e_j\right), \qquad i = 1, \dots, n+k.$$
Proof. We shall not give it. Let us only say that it follows from the generating function $G$ and from the multiplication rule of Chebyshev polynomials
$$T_i(t)\, T_j(t) = \tfrac{1}{2}\big(T_{|i-j|}(t) + T_{i+j}(t)\big).$$
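The multiplication rule underlying Theorem 4.18 is easy to verify numerically, for instance with NumPy's Chebyshev class (an illustrative check, not part of Paszkowski's method):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# verify T_i T_j = (T_{|i-j|} + T_{i+j}) / 2 at sample points in [-1, 1]
i, j = 3, 5
Ti = C.Chebyshev.basis(i)
Tj = C.Chebyshev.basis(j)
rhs = 0.5 * (C.Chebyshev.basis(abs(i - j)) + C.Chebyshev.basis(i + j))
t = np.linspace(-1.0, 1.0, 9)
assert np.allclose((Ti * Tj)(t), rhs(t))
```

The rule holds for all $i, j$, which is what allows the products arising in $S(t)f(t)$ to be re-expanded in the Chebyshev basis without computing any zeros.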
Appendix
To end the book let us give a conversational program to compute recursively sequences of Padé approximants. This subroutine is written in FORTRAN and it uses the relations given in section 3.1.3.
According to the computer some instructions have to be modified by the user, such as the values of the variables NL, NI and ZE.
NL is the variable used in the READ instruction for the unit: READ(NL, ) ....
NI is the variable used in the WRITE instruction for the unit: WRITE(NI, ) ....
ZE is a parameter whose role is to avoid division by zero. If a divisor is, in absolute value, less than ZE then a message is printed and the computations are stopped.
The user can, of course, change the DIMENSION statement. MK is a parameter indicating the maximum number of components of the vectors appearing in the DIMENSION instruction. The aim of MK is to avoid violation of memory.
No special knowledge of FORTRAN by the user is assumed. He only has to know what the formats I1, I2 and D21.15 are in order to enter the data.
Let us now give the listing of the subroutine followed by an example. Subroutines corresponding to other algorithms described in this book can be found in
C. BREZINSKI, Programmes FORTRAN pour transformations de suites et de séries. Publication ANO-3, Laboratoire de Calcul, Université de Lille, 1979.
228
Appendix
SUBROUTINE TRAVEL

[The printed listing is too badly reproduced in this copy to be read in full. Its recoverable outline is the following. The subroutine declares DOUBLE PRECISION work arrays C, A1, B1, A2, B2, A3, B3 and D of dimension MK = 50, the INTEGER degrees P1 and Q1, the logical unit numbers NL (input) and NI (output), and a zero test ZE. An INITIALIZATIONS section reads and checks the degrees p and q and then the coefficients of the series from C(0) up to C(p+q) in format D21.15. The body consists of six sections labelled FORMULA 1 to FORMULA 6, one for each of the recurrence relations used to travel through the table of approximants; each section ends with a call to the auxiliary subroutine PRINT, which prints the numerator and denominator coefficients of the approximant just computed. A DIAGNOSTICS section stops a computation as soon as a divisor smaller than ZE in absolute value is met, and the FORMAT statements numbered 100 to 121 produce the conversational dialogue reproduced in the example below. The listing ends with the short subroutine PRINT itself.]
[Example session. The series entered is 1 - x/2 + x**2/3 - x**3/4 + ..., the expansion of log(1+x)/x; only the legible parts of the printout are reproduced.]

DO YOU WANT TO MODIFY THE DATA? TYPE 1 FOR YES AND 0 FOR NO (I1)
0
 .100000000000001D+01
-.500000000000037D+00
DO YOU WANT TO MODIFY THE DATA? TYPE 1 FOR YES AND 0 FOR NO (I1)
0
*****************************
THE COEFFICIENTS OF ( 1/ 1) ARE:
NUMERATOR
 .100000000000000D+01   X** 0
 .166666666666605D+00   X** 1
DENOMINATOR
 .100000000000000D+01   X** 0
 .666666666666642D+00   X** 1
*****************************
ENTER 1 IF YOU WANT TO COMPUTE ( 0/ 1) , 2 TO COMPUTE ( 2/ 1)
3 TO COMPUTE ( 2/ 0) AND 4 TO COMPUTE ( 0/ 0)
ENTER 5 TO STOP AND 6 TO BEGIN AGAIN EVERYTHING (I1)
1
*****************************
THE COEFFICIENTS OF ( 0/ 1) ARE:
NUMERATOR
 .100000000000037D+01   X** 0
DENOMINATOR
 .100000000000037D+01   X** 0
 .500000000000186D+00   X** 1
*****************************
THE COEFFICIENTS OF ( 0/ 2) ARE:
NUMERATOR
 .100000000000037D+01   X** 0
DENOMINATOR
 .100000000000007D+01   X** 0
 .500000000000037D+00   X** 1
-.833333333332986D-01   X** 2
*****************************
ENTER 2 IF YOU WANT TO COMPUTE ( 1/ 2) AND 3 TO COMPUTE ( 1/ 1)
ENTER 5 TO STOP AND 6 TO BEGIN AGAIN EVERYTHING (I1)
2
*****************************
THE COEFFICIENTS OF ( 1/ 2) ARE:
NUMERATOR
 .100000000000037D+01   X** 0
 .500000000001192D+00   X** 1
DENOMINATOR
 .100000000000007D+01   X** 0
 .100000000000112D+01   X** 1
 .166666666666728D+00   X** 2
*****************************
THE COEFFICIENTS OF ( 1/ 3) ARE:
NUMERATOR
 .100000000000037D+01   X** 0
 .633333333332911D+00   X** 1
DENOMINATOR
 .100000000000001D+01   X** 0
 .113333333333269D+01   X** 1
 .233333333333060D+00   X** 2
-.111111111109525D-01   X** 3
*****************************
THE COEFFICIENTS OF ( 2/ 3) ARE:
[numerator illegible]
DENOMINATOR
 .100000000000007D+01   X** 0
 .149999999999598D+01   X** 1
 .599999999996684D+00   X** 2
 .499999999997839D-01   X** 3
*****************************
THE COEFFICIENTS OF ( 2/ 2) ARE:
NUMERATOR
 .100000000000007D+01   X** 0
 .699999999998212D+00   X** 1
 .333333333328124D-01   X** 2
DENOMINATOR
 .100000000000007D+01   X** 0
 .119999999999806D+01   X** 1
 .299999999998547D+00   X** 2
*****************************
THE COEFFICIENTS OF ( 2/ 1) ARE:
NUMERATOR
[first coefficients illegible]
-.416666666666828D-01   X** 2
DENOMINATOR
 .100000000000007D+01   X** 0
 .749999999998771D+00   X** 1
*****************************
ENTER 1 IF YOU WANT TO COMPUTE ( 2/ 2) , 2 TO COMPUTE ( 3/ 2)
3 TO COMPUTE ( 3/ 0) AND 4 TO COMPUTE ( 2/ 0)
ENTER 5 TO STOP AND 6 TO BEGIN AGAIN EVERYTHING (I1)
4
*****************************
THE COEFFICIENTS OF ( 2/ 0) ARE:
NUMERATOR
 .100000000000432D+01   X** 0
-.499999999994598D+00   X** 1
 .333333333319000D+00   X** 2
DENOMINATOR
 .100000000000000D+01   X** 0
*****************************
ENTER 1 IF YOU WANT TO COMPUTE ( 1/ 1) , 2 TO COMPUTE ( 3/ 1)
3 TO COMPUTE ( 3/ 0) AND 4 TO COMPUTE ( 1/ 0)
ENTER 5 TO STOP AND 6 TO BEGIN AGAIN EVERYTHING (I1)
3
*****************************
THE COEFFICIENTS OF ( 3/ 0) ARE:
NUMERATOR
 .999999999998212D+00   X** 0
-.500000000001645D+00   X** 1
 .333333333328491D+00   X** 2
-.249999999996405D+00   X** 3
DENOMINATOR
 .100000000000007D+01   X** 0
*****************************
ERROR
5
GOOD BYE, SEE YOU LATER
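The figures printed in this session can be checked by expanding an approximant back into a power series. The following short Python check (ours, not part of the book's program; the function name `series_of_ratio` is an assumption) re-expands the (1/2) approximant of the session, (1 + x/2)/(1 + x + x**2/6), and recovers the first coefficients 1, -1/2, 1/3, -1/4 of the series that was entered, as the definition of the approximant requires up to the power p + q = 3:

```python
from fractions import Fraction

def series_of_ratio(num, den, n):
    """First n+1 Taylor coefficients a[k] of num(x)/den(x), den[0] != 0.

    Uses the convolution identity sum_j den[j] * a[k-j] = num[k].
    """
    a = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        s = num[k] if k < len(num) else Fraction(0)
        s -= sum(den[j] * a[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        a[k] = s / den[0]
    return a

# (1/2) approximant printed in the session: (1 + x/2) / (1 + x + x**2/6)
num = [Fraction(1), Fraction(1, 2)]
den = [Fraction(1), Fraction(1), Fraction(1, 6)]
print(series_of_ratio(num, den, 3))
# prints [Fraction(1, 1), Fraction(-1, 2), Fraction(1, 3), Fraction(-1, 4)]
```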
References
[1]
N. I. AKHIEZER
The classical moment problem. Oliver and Boyd, Edinburgh, 1965.
[2]
N. I. AKHIEZER, M. KREIN
Some questions in the theory of moments. American Mathematical Society, Providence, 1962.
[3]
G. D. ALLEN, C. K. CHUI, W. R. MADYCH, F. J. NARCOWICH, P. W. SMITH
Pade approximation and orthogonal polynomials. Bull. Austral. Math. Soc., 10 (1974) 263-270.
[4]
G. D. ALLEN, C. K. CHUI, W. R. MADYCH, F. J. NARCOWICH, P. W. SMITH
Pade approximation and gaussian quadrature. Bull. Austral. Math. Soc., 11 (1974) 63-69.
[5]
G. D. ALLEN, C. K. CHUI, W. R. MADYCH, F. J. NARCOWICH, P. W. SMITH
Pade approximation of Stieltjes series. J. Approx. Theory, 14 (1975) 302-316.
[6]
R. ALT
Deux theoremes sur la A-stabilite des schemas de Runge-Kutta simplement implicites. RAIRO, R3 (1972) 99-104.
[7]
G. BACHMAN
Introduction to p-adic numbers and valuation theory. Academic Press, New York, 1964.
[8]
G. A. BAKER, Jr.
The Pade approximant method and some related generalizations. In "The Pade approximant in theoretical physics", G. A. Baker Jr. and J. L. Gammel eds., Academic Press, New York, 1970.
[9]
G. A. BAKER, Jr.
Essentials of Pade approximants. Academic Press, New York, 1975.
[10]
J. BEUNEU
Resolution des systemes d'equations lineaires par la methode des compensations. Publication 69, Laboratoire de Calcul, Universite de Lille, 1976.
[11]
D. BESSIS
Construction of variational bounds for the N-body eigenstate problem by the method of Pade approximations. In "Pade approximants method and its applications to mechanics", H. Cabannes ed., Lecture Notes in Physics 47, Springer-Verlag, Heidelberg, 1976.
[12]
J. BOGNAR
Indefinite inner product spaces. Springer-Verlag, Heidelberg, 1974.
[13]
N. BOURBAKI
Algebre. Polynomes et fractions rationnelles. Livre II, Chapitre IV. Hermann, Paris, 1950.
[14]
C. BREZINSKI
Resultats sur les procedes de sommation et l'ε-algorithme. RIRO, R3 (1970) 147-153.
[15]
C. BREZINSKI
[16]
C. BREZINSKI
Sur un algorithme de resolution des systemes non lineaires. C.R. Acad. Sc. Paris, 272A (1971) 145-148.
[17]
C. BREZINSKI
Etudes sur les ε- et ρ-algorithmes. Numer. Math., 17 (1971) 153-162.
[18]
C. BREZINSKI
L'ε-algorithme et les suites totalement monotones et oscillantes. C.R. Acad. Sc. Paris, 276A (1973) 305-308.
[19]
C. BREZINSKI
Some results in the theory of the vector ε-algorithm. Linear Algebra, 8 (1974) 77-86.
[20]
C. BREZINSKI
[21]
C. BREZINSKI
[22]
C. BREZINSKI
[23]
C. BREZINSKI
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
A. BULTHEEL
Fast algorithms for the factorization of Hankel and Toeplitz matrices and the Pade approximation problem. Report TW 42, K.U. Leuven, Belgium, October 1978.
[37]
D. BUSSONNAIS
[38]
D. BUSSONNAIS
"Tous" les algorithmes de calcul par recurrence des approximants de Pade d'une serie. Construction des fractions continues correspondantes. Seminaire d'analyse numerique 293, Universite de Grenoble, 1978.
[39]
P. L. CHEBYSHEV
Sur les fractions continues. J. Math. Pures Appl., ser. II, 3 (1858) 289-323.
[40]
T. S. CHIHARA
An introduction to orthogonal polynomials. Gordon and Breach, New York, 1978.
[41]
J. S. R. CHISHOLM
Rational approximants defined from double power series. Math. Comp., 27 (1973) 841-848.
[42]
J. S. R. CHISHOLM
Rational polynomial approximants in N variables. In "Pade approximants method and its applications to mechanics", H. Cabannes ed., Lecture Notes in Physics 47, Springer-Verlag, Heidelberg, 1976.
[43]
G. CLAESSENS
A new look at the Pade table and the different methods for computing its elements. J. Comp. Appl. Math., 1 (1975) 141-152.
[44]
G. CLAESSENS, L. WUYTACK
On the computation of non-normal Pade approximants. Report 77-41, Antwerp University, 1977.
[45]
C. W. CLENSHAW, K. LORD
Rational approximations for Chebyshev series. In "Studies in Numerical Analysis", Academic Press, New York, 1974.
[46]
F. CORDELLIER
Algorithme de calcul recursif des elements d'une table de Pade non normale. Colloque sur les approximants de Pade, Lille, 28-30 mars 1978.
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
F. CORDELLIER
R. FLETCHER
[55]
[56]
[57]
[58]
F. R. GANTMACHER
The theory of matrices. Chelsea, New York, 1960.
W. GAUTSCHI
On the construction of Gaussian quadrature rules from modified moments. Math. Comp., 24 (1970) 245-260.
E. GEKELER
On the solution of systems of equations by the epsilon algorithm of Wynn. Math. Comp., 26 (1972) 427-436.
B. GERMAIN-BONNE
B. GERMAIN-BONNE
J. GERONIMUS
[62]
[63]
[64]
[65]
[66]
[67]
[68]
[69]
[70]
[71]
[72]
[73]
[74]
J. GILEWICZ
[75]
[76]
[77]
[78]
[79]
[80]
[81]
[82]
[83]
[84]
[85]
[86]
[87]
[88]
[89]
J. H. HOLDEMAN, Jr.
A method for the approximation of functions defined by formal series expansions in orthogonal polynomials. Math. Comp., 23 (1969) 275-288.
G. HORNECKER
Approximations rationnelles voisines de la meilleure approximation au sens de Tchebycheff. C.R. Acad. Sc. Paris, 249 (1959) 939-941.
G. HORNECKER
Al. MAGNUS
[92]
Al. MAGNUS
[93]
[94]
[95]
[96]
[97]
[98]
[99]
[100]
[101]
[102]
[103]
[104]
[105]
[106]
[107]
[108]
[109]
[110]
[111]
J. NUTTALL
Convergence of Pade approximants for the Bethe-Salpeter amplitude. Phys. Rev., 157 (1967) 1312-1316.
H. PADE
Sur la representation approchee d'une fonction par des fractions rationnelles. Ann. Ec. Norm. Sup., 9 (1892) 1-93.
S. PASZKOWSKI
Approximation uniforme des fonctions continues par les fonctions rationnelles. Zastosowania Matematyki, 6 (1963) 441-458.
S. PASZKOWSKI
Zastosowania numeryczne wielomianow i szeregow Czebyszewa. Panstwowe Wydawnictwo Naukowe, Warsaw, 1975.
R. PENNACCHI
Le trasformazioni razionali di una successione. Calcolo, 5 (1968) 37-50.
O. PERRON
Die Lehre von den Kettenbrüchen. Chelsea, New York, 1950.
M. PINDOR
A simplified algorithm for calculating the Pade table derived from Baker and Longman schemes. J. Comp. Appl. Math., 2 (1976) 255-257.
W. C. PYE, T. A. ATCHISON
An algorithm for the computation of higher order G-transformations. SIAM J. Numer. Anal., 10 (1973) 1-7.
H. VAN ROSSUM
A theory of orthogonal polynomials based on the Pade table. Thesis, University of Utrecht, Van Gorcum, Assen, 1953.
H. VAN ROSSUM
Systems of orthogonal and quasi-orthogonal polynomials connected with the Pade table. Koninkl. Nederl. Akad. Weten., ser. A, 58 (1955) 517-524, 525-534, 675-682.
H. VAN ROSSUM
Contiguous orthogonal systems. Koninkl. Nederl. Akad. Weten., ser. A, 63 (1960) 323-332.
H. VAN ROSSUM
On the poles of Pade approximants to e^z. Nieuw Archief voor Wiskunde, (3) 19 (1971) 37-45.
H. VAN ROSSUM
Linear dependence of Pade diagonals. Nieuw Archief voor Wiskunde, (3) 25 (1977) 103-114.
H. VAN ROSSUM
Pade approximants and indefinite inner product spaces. In "Pade and rational approximation", E. B. Saff and R. S. Varga eds., Academic Press, New York, 1977.
H. RUTISHAUSER
Der Quotienten-Differenzen-Algorithmus. Birkhäuser Verlag, Basel, 1957.
H. RUTISHAUSER
Solution of eigenvalue problems with the LR-transformation. NBS Appl. Math. Series, 49 (1958) 47-81.
H. RUTISHAUSER
Über eine Verallgemeinerung der Kettenbrüche (Vorläufiger Bericht). ZAMM, 39 (1958) 278-279.
H. RUTISHAUSER
Stabile Sonderfälle des Quotienten-Differenzen-Algorithmus. Numer. Math., 5 (1963) 95-112.
R. A. SACK, A. F. DONOVAN
An algorithm for Gaussian quadrature given modified moments. Numer. Math., 18 (1972) 465-478.
[112]
[113]
[114]
[115]
[116]
[117]
[118]
[119]
[120]
[121]
[122]
[123]
[124]
[125]
[126]
[127]
[128]
[129]
[130]
[131]
E. B. SAFF, A. SCHONHAGE, R. S. VARGA
Geometric convergence to e^(-z) by rational functions with real poles. Numer. Math., 25 (1976) 307-322.
E. B. SAFF, R. S. VARGA
Convergence of Pade approximants to e^(-z) on unbounded sets. J. Approx. Theory, 13 (1975) 470-488.
E. B. SAFF, R. S. VARGA
On the zeros and poles of Pade approximants to e^z. Numer. Math., 25 (1976) 1-14.
E. B. SAFF, R. S. VARGA
On the zeros and poles of Pade approximants to e^z. II. In "Pade and rational approximation", E. B. Saff and R. S. Varga eds., Academic Press, New York, 1977.
E. B. SAFF, R. S. VARGA
On the zeros and poles of Pade approximants to e^z. III. Numer. Math., to appear.
D. SHANKS
An analogy between transients and mathematical sequences and some nonlinear sequence to sequence transforms suggested by it. Naval Ordnance Laboratory Memorandum 9994, White Oak, Md., 1949.
D. SHANKS
Non-linear transformations of divergent and slowly convergent sequences. J. Math. Phys., 34 (1955) 1-42.
L. R. SHENTON
A determinantal expansion for a class of definite integrals. Proc. Edinburgh Math. Soc., (2) 9 (1953) 43-52; 10 (1954) 78-91, 134-140, 153-188.
J. SHERMAN
On the numerators of the convergents of the Stieltjes continued fraction. Trans. Amer. Math. Soc., 35 (1933) 64-87.
J. SHERMAN, J. SHOHAT
On the numerators of the continued fraction λ1/(x - c1) - λ2/(x - c2) - ... Proc. Nat. Acad. Sci., 18 (1932) 283-287.
J. SHOHAT
Sur une classe etendue de fractions continues. C.R. Acad. Sc. Paris, 191 (1930) 988.
J. SHOHAT
On the Stieltjes continued fraction. Amer. J. Math., 54 (1932) 79-84.
J. SHOHAT, J. D. TAMARKIN
The problem of moments. Mathematical Surveys 1, Amer. Math. Soc., Providence, 1943.
A. VAN DER SLUIS
General orthogonal polynomials. Thesis, University of Amsterdam, 1956.
A. C. SMITH
A one-parameter method for accelerating the convergence of sequences and series. Comp. and Maths. with Appls., 4 (1978) 157-171.
M. A. SNYDER
Chebyshev methods in numerical approximation. Prentice Hall, Englewood Cliffs, 1966.
E. STIEFEL
Kernel polynomials in linear algebra and their numerical applications. NBS Appl. Math. Series, 49 (1958) 1-22.
G. SZEGO
Orthogonal polynomials. Amer. Math. Soc., Providence, 1939.
Y. TAGAMLITZKI
Sur les suites verifiant certaines inegalites. C.R. Acad. Sc. Paris, 223A (1946) 940-942.
Yu. V. VOROBYEV
Method of moments in applied mathematics. Gordon and Breach, New York, 1965.
[132]
[133]
[134]
[135]
[136]
[137]
[138]
[139]
[140]
[141]
[142]
[143]
[144]
[145]
[146]
[147]
[148]
[149]
H. S. WALL
Continued fractions and totally monotone sequences. Trans. Amer. Math. Soc., 48 (1940) 165-184.
H. S. WALL
The analytic theory of continued fractions. Van Nostrand, New York, 1948.
D. D. WARNER
Hermite interpolation with rational functions. Thesis, University of California, San Diego, 1974.
P. J. S. WATSON
Algorithms for differentiation and integration. In "Pade approximants and their applications", P. R. Graves-Morris ed., Academic Press, New York, 1973.
J. C. WHEELER
Modified moments and Gaussian quadratures. Rocky Mountains J. Math., 4 (1974) 287-296.
J. C. WHEELER, C. BLUMSTEIN
Modified moments for harmonic solids. Phys. Rev., B6 (1972) 4380-4382.
D. V. WIDDER
The Laplace transform. Princeton Univ. Press, Princeton, New Jersey, 1946.
D. V. WIDDER
An introduction to transform theory. Academic Press, New York, 1971.
J. WIMP
Toeplitz arrays, linear sequence transformations and orthogonal polynomials. Numer. Math., 23 (1974) 1-18.
P. WYNN
On a device for computing the e_m(S_n) transformation. MTAC, 10 (1956) 91-96.
P. WYNN
The rational approximation of functions which are formally defined by a power series expansion. Math. Comp., 14 (1960) 147-186.
P. WYNN
Upon systems of recursions which obtain among the quotients of the Pade table. Numer. Math., 8 (1966) 264-269.
P. WYNN
On the convergence and stability of the epsilon algorithm. SIAM J. Numer. Anal., 3 (1966) 91-122.
P. WYNN
A general system of orthogonal polynomials. Quart. J. Math. Oxford, (2) 18 (1967) 81-96.
P. WYNN
Difference-differential recursions for Pade quotients. Proc. London Math. Soc., (3) 23 (1971) 283-300.
P. WYNN
Upon some continuous prediction algorithms. Calcolo, 9 (1972) 177-234, 235-278.
P. WYNN
A numerical method for estimating parameters in mathematical models. Centre de recherches mathematiques, Universite de Montreal, CRM-443, 1974.
P. WYNN
Private communication.
This bibliography is far from complete. Pade approximants, orthogonal polynomials, continued
fractions, moment problems and convergence acceleration are very wide subjects on which a very
large literature exists, so it was impossible to give here an extensive bibliography containing
about 2000 items. Such a bibliography has been published as internal reports and is available on
request:
C. BREZINSKI
A bibliography on Pade approximation and related subjects. Publication 96,
Laboratoire de Calcul, Universite de Lille, 1977.
C. BREZINSKI
A bibliography on Pade approximation and related subjects. Addendum I. Publication
118, Laboratoire de Calcul, Universite de Lille, 1978.
Index
A-acceptability 28-30
Additional conditions 32-33
Additional property 198
Adjacent systems of orthogonal polynomials 91-105
Adjoint operator 80
Associated continued fraction 153
Associated polynomials 10, 53-57, 192
Baker algorithms 140
Baker notations 138
Biconjugate gradient method 90-91, 189
Bilinear form 69
Biorthogonalization method 79-84
Bordering method 46, 173
Bussonnais method 144-147
Canterbury approximants 190-191, 209-217
Cauchy product 69
Cayley-Hamilton theorem 185
Characteristic polynomial 75-76, 78
Chebyshev method 47-48
Chebyshev series 225-226
Chisholm approximants 190-191, 209-217
Cholesky factorization 48, 70
Christoffel-Darboux relation 50, 54-56
Compact formula 17-18, 22
Completely definite functional 97
Completely monotonic functions 119
Conjugate gradient method 84-90, 186-189
Consistency property 215-216
Contiguous systems of orthogonal polynomials 91-105
Continued fractions 152-159
Contraction 158
Convergence acceleration 189
Convergent of a continued fraction 152
Cordellier extended cross rule 151-152
Cordellier relation 137, 164
Corresponding continued fraction 157-158
C-polynomial 29
Cross rule 133-135, 162-163
Definite functional 41
Determinantal formulas 22, 41, 126-127
Difference equation 67
Diophantine equation 31
Division algorithm 152
Double power series 190-220
Δ 99
Eigenvalues 75, 78-79
Epsilon array 160
Epsilon algorithm 38, 84, 159-177, 178-190
Error term 20-21, 33-35, 127-128, 171-172, 179, 206-208, 213-215, 222
Euler transformation 24
Exponential function 28-30, 129-130
Factorization properties 195-197
Functional 9, 20, 33, 40
Galerkin method 79
g-algorithm 160-161
Gaussian factorization 48, 70
Gaussian quadrature method 34, 61-67, 82-83
Generating function 220
Generating polynomial 13
Germain-Bonne method 182, 188
Gram-Schmidt orthogonalisation 42-43
G-transformation 169, 181
Hadamard-Gerschgorin theorem 71
Hadamard polynomials 98-99
Hamburger moment sequence 116-118
Hankel determinants 34, 41, 95-97, 163, 181
Hankel polynomials 98-99
Hankel quadratic forms 74
Hausdorff moment sequence 117-123
Hermite interpolation 23-24
Hölder inequalities 70, 117
Homographic covariance 15-16, 201-202
Interpolatory quadrature methods 61-67
Invariant ratios 138-139
Inverse cross rule 135, 163
Jacobi matrices 67-69
Quadratic forms 74
Quadrature method 21-22, 34, 61-67
Quasi-orthogonal polynomials 60, 66
Rational transformation 174
Rayleigh quotient 83-84, 171
Rayleigh-Ritz principle 74-75, 78-79
Reciprocal covariance 14, 37, 127, 201
Reciprocal functional 106
Reciprocal orthogonal polynomials 105-115
Reciprocal series 13, 105-106, 200
Recurrence relation 43-47
Recursive computation of Pade approximants 135-147, 227-239
Reproducing kernels 51-53
Residue matrices 73
Rhombus algorithm 95
rs-algorithm 102
Schur theorem 69
Schwarz inequality 70
Second kind orthogonal polynomials 53-57
Semi-definite functional 115
Separation of the zeros 58-60, 98
Series of functions 220-226
Shanks transformation 38, 162, 169-170, 181
Square blocks 148, 149
Stability of quadrature methods 63-64
Staircases 140, 141
Stieltjes moment sequence 116-118
Stieltjes series 27, 38, 60, 64-66, 208
Sturm sequence 69
Summation process 24
Supplementary conditions 209, 210, 211, 215, 217
Symmetry property 197
Toeplitz theorem 25-26
Topological ε-algorithm 178-190
Totally monotonic sequence 117-123, 165
Totally oscillating sequence 117, 121, 122
Translation properties 202-203
Unicity property 15, 35, 198-200, 205-206
Uniform boundedness theorem 63-64
qd-algorithm 83-84, 94-97, 148-151, 161, 166-167, 170
Watson algorithm 141-142
Zeros 57-60