probability distributions
Hsien-Kuei Hwang
Abstract
Several new asymptotic estimates (with precise error bounds) are derived for Poisson and binomial distributions as the parameters tend to infinity. The analytic methods used are also applicable to other discrete distribution functions.
1 Introduction
Finding asymptotically efficient approximations to discrete probability distribution functions is a classic subject in probability theory. The general problem is as follows. Given a random variable $X$ depending on a certain large real parameter, say $N$, with probability distribution
$$P(X = j) = a_j(N) \qquad (j \in \mathbb{Z}),$$
find asymptotic approximations for the distribution function $\sum_{j\le m} a_j(N)$, as $N \to \infty$ and for all possible values of $m$ (depending on $N$). In general, when $m$ becomes large, the exact summation is practically not very useful. Thus there is a need to find simpler asymptotic estimates. For example, if $X$ is a binomial random variable with mean $np$, then $a_j(N) = a_j(n) = \binom{n}{j} p^j q^{n-j}$; and if $X$ is a
In addition to the two classical distributions, our methods can also be applied to the many existing Poisson and binomial variants, mixtures, and convolutions, cf. [25, Chaps. 3 and 4], and to other discrete distribution functions. Our techniques for deriving numerical bounds are also suitable for other Poisson approximation problems, cf. [5, 38, 45].
This article is organized as follows. We first list some known asymptotic estimates concerning
the Poisson distribution function in the next section. Then we state and prove our new results in
Section 2.2. Application of these methods to the technique of Poissonization is briefly discussed in
Section 2.3. A parallel study of the binomial distribution, in less detail, is given in Section 3. We
then briefly compare the different expansions derived in this article and indicate applications of our
methods to combinatorial and arithmetical problems in the final section.
Notation. The notation $[z^j]f(z)$ represents the coefficient of $z^j$ in the Taylor expansion of $f$; $(x)_j = x(x-1)\cdots(x-j+1)$ if $j\ge1$ and $(x)_0 = 1$ for any real $x$.
2 Poisson distribution

Consider a Poisson random variable $X$ with mean $\lambda > 0$:
$$P(X = j) = e^{-\lambda}\,\frac{\lambda^j}{j!} \qquad (j\ge0).$$
Let us denote by $\pi_m(\lambda)$ the distribution function of $X$:
$$\pi_m(\lambda) = e^{-\lambda}\sum_{0\le j\le m}\frac{\lambda^j}{j!} \qquad (m\ge0).$$
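For numerical work below it is convenient to have this distribution function available directly. A minimal Python sketch (the function name is ours) sums the series term by term:

```python
import math

def poisson_cdf(m: int, lam: float) -> float:
    """pi_m(lambda) = e^{-lambda} * sum_{0 <= j <= m} lambda^j / j!"""
    term, total = 1.0, 1.0          # j = 0 term of the series
    for j in range(1, m + 1):
        term *= lam / j             # lambda^j / j! from the previous term
        total += term
    return math.exp(-lam) * total
```

For example, `poisson_cdf(0, 2.0)` equals `exp(-2)`, and the function is increasing in `m` for fixed `lam`.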
In many problems in number theory (cf. [44, Chap. II.6]) and in combinatorics (cf. [22, Chaps. 3, 5, 9]), $\lambda$ is a large parameter. Moreover, once the different asymptotic behaviours of $\pi_m(\lambda)$ have been explicitly characterized, we can employ $\pi_m(\lambda)$ as a primitive asymptotic approximant for more sophisticated problems. Besides the classical Poisson approximations (cf. [5]), let us mention the distribution of integers $\le x$ with a given number of prime factors (cf. [4, 44]) and the number of components in decomposable combinatorial structures (cf. [21]). Thus we investigate the asymptotic behaviour of $\pi_m(\lambda)$ as $\lambda\to\infty$ and $m$ runs through its possible values (depending on $\lambda$). When $\lambda$ is bounded and $m\to\infty$, the asymptotic behaviour of $\pi_m(\lambda)$ can be easily derived by the usual saddlepoint method, cf. [10, 24, 26, 50]. For completeness, we include the resulting formula at the end of Section 2.2.
2.1 Known results

1. Normal approximation:
$$\pi_m(\lambda) = \Phi\!\left(\frac{m-\lambda+\frac12}{\sqrt{\lambda}}\right) + O\bigl(\lambda^{-1/2}\bigr),$$
as $\lambda\to\infty$, uniformly for $m = \lambda + O(\sqrt{\lambda})$, where $\Phi$ denotes the standard normal distribution function. For more precise Edgeworth expansions (with or without continuity correction and error bounds), see [9, 12, 32, 34] and [25, p. 162].
2. Cramér-type large deviations (cf. [32], [27, p. 100], [22, Chap. 3]):
$$1-\pi_m(\lambda) = \bigl(1-\Phi(x)\bigr)\exp\!\left(\frac{x^2}{2}-\lambda H\!\left(\frac{x}{\sqrt{\lambda}}\right)\right)\left(1+O\!\left(\frac{x+1}{\sqrt{\lambda}}\right)\right) \qquad (m=\lambda+x\sqrt{\lambda}),$$
$$\pi_m(\lambda) = \Phi(-x)\exp\!\left(\frac{x^2}{2}-\lambda H\!\left(-\frac{x}{\sqrt{\lambda}}\right)\right)\left(1+O\!\left(\frac{x+1}{\sqrt{\lambda}}\right)\right) \qquad (m=\lambda-x\sqrt{\lambda}),$$
uniformly for $x\ge0$, $x=o(\sqrt{\lambda})$, where
$$H(y) = (1+y)\log(1+y)-y = \frac{y^2}{2}+\sum_{j\ge3}\frac{(-1)^j}{j(j-1)}\,y^j, \tag{1}$$
the latter equality holding for $-1<y\le1$. Effective versions of these results can be found in [32].
3. Uniform asymptotic expansions for the incomplete gamma function: our $\pi_m(\lambda)$ is related to the incomplete gamma function by
$$\pi_m(\lambda) = Q(m+1,\lambda), \tag{2}$$
where
$$Q(a,\lambda) = \frac{1}{\Gamma(a)}\int_\lambda^\infty t^{a-1}e^{-t}\,dt,$$
$\Gamma$ being the gamma function. The asymptotic behaviour of $Q$ has been extensively studied in the literature, most notably by Temme [39, 40], see also [49]. For our purpose, let us mention the following expansion from [40]:
$$\pi_{m-1}(\lambda) \sim 1-\Phi\bigl(\eta\sqrt{m}\bigr)+\frac{e^{-m\eta^2/2}}{\sqrt{2\pi m}}\sum_{j\ge0}b_j\,m^{-j}, \tag{3}$$
where $\eta$ carries the sign of $r-1$ and $\tfrac12\eta^2 = r-1-\log r$ with $r=\lambda/m$; in particular,
$$b_0 = \frac{1}{r-1}-\frac{1}{\eta}, \qquad b_1 = \frac{1}{\eta^3}-\frac{r(r^2+10r+1)}{12(r-1)^3}.$$
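Relation (2) is easy to confirm numerically. The Python sketch below (function names and parameters are ours) approximates $Q(m+1,\lambda)$ by the trapezoidal rule on a truncated interval and compares it with the series form of $\pi_m(\lambda)$:

```python
import math

def poisson_cdf(m, lam):
    term, total = 1.0, 1.0
    for j in range(1, m + 1):
        term *= lam / j
        total += term
    return math.exp(-lam) * total

def upper_gamma_Q(a, lam, steps=100000):
    """Q(a, lam) = (1/Gamma(a)) * integral_lam^inf t^{a-1} e^{-t} dt, trapezoidal rule."""
    upper = lam + 50.0 * math.sqrt(lam) + 50.0   # integrand is negligible beyond this point
    h = (upper - lam) / steps

    def g(t):
        # t^{a-1} e^{-t} / Gamma(a), computed in log scale for stability
        return math.exp((a - 1) * math.log(t) - t - math.lgamma(a))

    total = 0.5 * (g(lam) + g(upper))
    for i in range(1, steps):
        total += g(lam + i * h)
    return h * total
```

For $m = 10$ and $\lambda = 15$, the two evaluations agree to about six decimal places.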
4. By the definition of $\pi_m(\lambda)$,
$$\pi_m(\lambda) = e^{-\lambda}\,\frac{\lambda^m}{m!}\sum_{0\le j\le m}\frac{(m)_j}{\lambda^j}. \tag{4}$$
2.2 New results
First of all, from (4), we have roughly
$$\pi_m(\lambda) \approx e^{-\lambda}\,\frac{\lambda^m}{m!}\sum_{0\le j\le m}\frac{m^j}{\lambda^j} \approx e^{-\lambda}\,\frac{\lambda^m}{m!}\cdot\frac{1}{1-m/\lambda}, \tag{5}$$
and we expect that the last expression would provide a better approximation to $\pi_m(\lambda)$ for certain ranges of $m$ than the first term in (5). On the other hand, when $m = 0$, the two approximate equalities become exact. Hence, this formal approximation might be uniformly valid for $0\le m<\lambda$. This is roughly so, as we now state.
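As a quick sanity check (in Python; the numerical values are ours), the last expression in (5) is already a serviceable approximation well to the left of the mean:

```python
import math

def poisson_cdf(m, lam):
    term, total = 1.0, 1.0
    for j in range(1, m + 1):
        term *= lam / j
        total += term
    return math.exp(-lam) * total

lam, m = 100.0, 50
lead = math.exp(-lam) * lam ** m / math.factorial(m)   # first term in (5)
approx = lead / (1.0 - m / lam)                        # last expression in (5)
exact = poisson_cdf(m, lam)
```

Here `approx` is within a few percent of `exact`, whereas `lead` alone is off by roughly the factor $1-m/\lambda$.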
Theorem 1 If $1\le m\le\lambda-A\sqrt{\lambda}$, where $A>0$, then $\pi_m(\lambda)$ satisfies
$$\pi_m(\lambda) = e^{-\lambda}\,\frac{\lambda^m}{m!}\cdot\frac{1}{1-r}\left(1+\sum_{2\le j<\nu}\frac{j!\,\tau_j(m)}{(\lambda-m)^j}+R_\nu\right), \tag{6}$$
where $r = m/\lambda$, $\nu\ge2$, the $\tau_j(m)$ are polynomials in $m$ of degree $\lfloor j/2\rfloor$ given by
$$j!\,\tau_j(m) = \sum_{0\le i\le j}\binom{j}{i}(-m)^{j-i}(m)_i, \tag{7}$$
and
$$|R_\nu| < K_\nu\,\frac{e^{1/(12m)}\,m^{\nu/2}}{(\lambda-m)^\nu} \qquad (m\ge1;\ \nu\ge2), \tag{8}$$
with $K_2 = \pi/2$ and, for $\nu\ge3$, $K_\nu = 2^{(\nu+2)/2}\,\Gamma((\nu+1)/2)/\sqrt{\pi}$. In particular,
$$|R_\nu| < K_\nu\,e^{1/12}\,A^{-\nu}.$$
The first few $\tau_j$ are
$$\tau_2(m) = -\frac{m}{2},\quad \tau_3(m) = \frac{m}{3},\quad \tau_4(m) = \frac{m(m-2)}{8},\quad \tau_5(m) = -\frac{m(5m-6)}{30},\quad \tau_6(m) = -\frac{m(3m^2-26m+24)}{144}.$$
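The following Python sketch (our own; $\lambda$, $m$, $\nu$ chosen arbitrarily) implements the expansion (6), generating the $\tau_j$ from the recurrence (13), and checks it against direct summation:

```python
import math

def poisson_cdf(m, lam):
    term, total = 1.0, 1.0
    for j in range(1, m + 1):
        term *= lam / j
        total += term
    return math.exp(-lam) * total

def taus(nu, m):
    # (j+1) tau_{j+1}(m) = -j tau_j(m) - m tau_{j-1}(m);  tau_0 = 1, tau_1 = 0
    t = [1.0, 0.0]
    for j in range(1, nu):
        t.append((-j * t[j] - m * t[j - 1]) / (j + 1))
    return t

lam, m, nu = 400.0, 200, 6
r = m / lam
t = taus(nu, m)
# e^{-lam} lam^m / m!, computed in log scale to avoid overflow
w = math.exp(-lam + m * math.log(lam) - math.lgamma(m + 1))
series = 1.0 + sum(math.factorial(j) * t[j] / (lam - m) ** j for j in range(2, nu))
approx = w / (1.0 - r) * series
exact = poisson_cdf(m, lam)
```

With $\nu = 6$ the relative error is several orders of magnitude smaller than that of the leading term $w/(1-r)$ alone.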
The method of proof extends the original one by Selberg [37] to an asymptotic expansion as in [23], the error term being further improved here. Our starting point is the integral representation
$$\pi_m(\lambda) = \frac{e^{-\lambda}}{2\pi i}\oint_{|z|=\rho}\frac{z^{-m-1}e^{\lambda z}}{1-z}\,dz \qquad (0<\rho<1). \tag{9}$$
Expanding the factor $(1-z)^{-1}$ at $z=r$,
$$\frac{1}{1-z} = \sum_{0\le j<\nu}\frac{(z-r)^j}{(1-r)^{j+1}}+\frac{(z-r)^\nu}{(1-z)(1-r)^\nu} \qquad (\nu\ge1),$$
we obtain
$$\pi_m(\lambda) = \sum_{0\le j<\nu}\frac{1}{(1-r)^{j+1}}\cdot\frac{e^{-\lambda}}{2\pi i}\oint_{|z|=r}(z-r)^j\,z^{-m-1}e^{\lambda z}\,dz + Y_\nu,$$
where
$$Y_\nu = \frac{e^{-\lambda}}{(1-r)^\nu}\cdot\frac{1}{2\pi i}\oint_{|z|=r}\frac{(z-r)^\nu}{1-z}\,z^{-m-1}e^{\lambda z}\,dz.$$
By expanding the factor $(z-r)^j$ and computing the residues, we obtain
$$\frac{1}{2\pi i}\oint_{|z|=r}(z-r)^j\,z^{-m-1}e^{\lambda z}\,dz = \frac{\lambda^{m-j}}{m!}\,j!\,\tau_j(m),$$
where, in particular, $\tau_0(m)=1$ and $\tau_1(m)=0$, this last relation motivating the choice of $r\ (=m/\lambda)$. The error term is then estimated by Laplace's method; the estimate is better if preceded by an integration by parts.
By using the relation
$$z^{-m-1}e^{\lambda z} = \frac{1}{\lambda(z-r)}\,\frac{d}{dz}\Bigl(z^{-m}e^{\lambda z}\Bigr), \tag{10}$$
we obtain
$$Y_\nu = -\frac{e^{-\lambda}}{\lambda(1-r)^\nu}\cdot\frac{1}{2\pi i}\oint_{|z|=r}\frac{(z-r)^{\nu-2}}{(1-z)^2}\,z^{-m}e^{\lambda z}\bigl((\nu-2)(1-z)+1-r\bigr)\,dz.$$
Thus
$$|Y_\nu| \le \frac{(\nu-1)\,e^{-\lambda+m}\,r^{\nu-m-1}\,\Delta_{\nu-2}}{\lambda\,(1-r)^{\nu+1}}, \tag{11}$$
where, for $\alpha\ge0$,
$$\Delta_\alpha = \frac{1}{2\pi}\int_{-\pi}^{\pi}\bigl|e^{it}-1\bigr|^{\alpha}\,e^{-m(1-\cos t)}\,dt = \frac{2^{\alpha/2}}{\pi}\int_0^{\pi}(1-\cos t)^{\alpha/2}\,e^{-m(1-\cos t)}\,dt.$$
The substitution $y=(1-\cos t)/2$ and an integration by parts give
$$\Delta_\alpha = \frac{2^{\alpha}}{\pi}\int_0^1\frac{y^{(\alpha-1)/2}\,e^{-2my}}{\sqrt{1-y}}\,dy = \frac{2^{\alpha+1}}{\pi}\int_0^1 y^{(\alpha-3)/2}\,e^{-2my}\Bigl(\frac{\alpha-1}{2}-2my\Bigr)\sqrt{1-y}\,dy.$$
By using the inequality $\sqrt{1-y}\le1-y/2$ for $0\le y\le1$ and the substitution $v=2my$, we obtain
$$\Delta_\alpha < \frac{2^{(\alpha+3)/2}}{\pi\,m^{(\alpha-1)/2}}\int_0^{\infty}\Bigl(1-\frac{v}{4m}\Bigr)\Bigl(\frac{\alpha-1}{2}-v\Bigr)e^{-v}\,v^{(\alpha-3)/2}\,dv = 2^{(\alpha-1)/2}\,\pi^{-1}\,\Gamma\Bigl(\frac{\alpha+1}{2}\Bigr)\,m^{-(\alpha+1)/2} \qquad (\alpha\ge1),$$
while, for $\alpha=0$, $\Delta_0\le\sqrt{\pi/8}\,m^{-1/2}$.¹
¹The quantity $\Delta_0$ is essentially the modified Bessel function of order $0$, $I_0(z)$ (cf. [48, p. 373] and [5, p. 263]): $\Delta_0 = e^{-m}I_0(m)$. By considering properties of the function $x^{1/2}e^{-x}I_0(x)$, we have
$$\Delta_0 \le c_0\,m^{-1/2} \quad\text{with } c_0 = 0.46882\ldots, \tag{12}$$
for all $m\ge1$. Note that $\sqrt{\pi/8} = 0.62665\ldots$
From these bounds and the inequality (cf. [48, p. 253] or [5, p. 263])
$$m! < \sqrt{2\pi}\,m^{m+1/2}\,e^{-m}\,e^{1/(12m)} \qquad (m\ge1),$$
it follows that
$$|R_\nu| = (1-r)\,e^{\lambda}\,\lambda^{-m}\,m!\,|Y_\nu| < K_\nu\,\frac{e^{1/(12m)}\,m^{\nu/2}}{(\lambda-m)^\nu}.$$
1. The distribution function admits the continued fraction representation (Legendre's continued fraction for the incomplete gamma function)
$$\pi_m(\lambda) = e^{-\lambda}\,\frac{\lambda^{m+1}}{m!}\;\cfrac{1}{\lambda-\cfrac{m}{1+\cfrac{1}{\lambda-\cfrac{m-1}{1+\cfrac{2}{\lambda-\cdots}}}}}$$
which terminates since the partial numerators $m, m-1, \ldots$ eventually vanish. Useful estimates for $\pi_m(\lambda)$ may be derived from this representation as in [46, pp. 53–56] and [3] for the binomial distribution.
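Because the representation terminates, it can be evaluated from the inside out. A Python sketch, assuming the Legendre-type layout shown above (the function name is ours):

```python
import math

def poisson_cdf_cf(m: int, lam: float) -> float:
    # innermost level: the partial numerator m - k has reached 0, leaving just lam
    v = lam
    for i in range(1, m + 1):
        # one layer  lam - i / (1 + (m - i + 1) / v),  built from the inside out
        v = lam - i / (1.0 + (m - i + 1) / v)
    return math.exp(-lam) * lam ** (m + 1) / math.factorial(m) / v
```

For instance, `poisson_cdf_cf(1, lam)` reproduces $e^{-\lambda}(1+\lambda)$ exactly.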
2. The polynomials $\tau_j(m)$ are related to the Laguerre polynomials
$$L_n^{(\alpha)}(x) = [z^n]\,(1-z)^{-\alpha-1}\exp\bigl(-xz/(1-z)\bigr)$$
by
$$\tau_j(m) = L_j^{(m-j)}(m),$$
and thus satisfy the recurrence
$$(j+1)\,\tau_{j+1}(m) = -j\,\tau_j(m)-m\,\tau_{j-1}(m) \quad (j\ge1); \qquad \tau_0(m)=1,\ \tau_1(m)=0. \tag{13}$$
These relations are computationally more useful than the defining equation (7). The $\tau_j(m)$'s are also related to Tricomi polynomials (cf. [42]) and Charlier polynomials (cf. [5, 36, 8]); see these cited papers and the references therein for asymptotics of this class of polynomials.
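Both the recurrence (13) and the Laguerre connection are easy to verify in exact arithmetic. A Python sketch using `fractions` (function names ours):

```python
from fractions import Fraction
from math import comb, factorial

def tau_rec(jmax, m):
    # (j+1) tau_{j+1} = -j tau_j - m tau_{j-1};  tau_0 = 1, tau_1 = 0
    t = [Fraction(1), Fraction(0)]
    for j in range(1, jmax):
        t.append((-j * t[j] - m * t[j - 1]) / (j + 1))
    return t

def tau_laguerre(j, m):
    # L_j^{(m-j)}(m) = sum_k (-1)^k C(m, j-k) m^k / k!
    return sum(Fraction((-1) ** k * comb(m, j - k) * m ** k, factorial(k))
               for k in range(j + 1))
```

Both routes give, e.g., $\tau_2(m) = -m/2$ and $\tau_6(m) = -m(3m^2-26m+24)/144$.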
3. For the incomplete gamma function, an expansion similar to (6) without explicit error bound was derived by Tricomi (cf. [16, p. 140, (4)]) by a different method.
The fact that $\tau_1(m)=0$ makes the leading term in (6) rather powerful, as the convergence rate (taking $\nu=2$) is of order $m/\lambda^2$, which becomes $\lambda^{-2}$ for $m=O(1)$. One may ask if such a phenomenon can be repeated, so that one would have an expansion whose successive terms are of order $m^j\lambda^{-2j}$ (the terms in (6) are of order $m^{\lfloor j/2\rfloor}\lambda^{-j}$). Adapting an idea due to Franklin and Friedman [17] for integrals of the form
$$J = \int_0^\infty t^{\lambda-1}e^{-t}f(t)\,dt,$$
we prove the following result.
Theorem 2 Let $r_j = (m-j)/\lambda$, $j\ge0$. The distribution function $\pi_m(\lambda)$ satisfies the identity
$$\pi_m(\lambda) = e^{-\lambda}\,\frac{\lambda^m}{m!}\left(\frac{1}{1-r_0}+\sum_{1\le j\le m}\frac{(-1)^j}{\lambda^{2j}}\,f_j(r_j)\,(m)_j\right), \tag{14}$$
where $f_0(z) = (1-z)^{-1}$ and
$$f_{j+1}(z) = \frac{d}{dz}\,\frac{f_j(z)-f_j(r_j)}{z-r_j}. \tag{15}$$
Moreover, for $1\le\nu\le m$,
$$\pi_m(\lambda) = e^{-\lambda}\,\frac{\lambda^m}{m!}\left(\sum_{0\le j<\nu}\frac{(-1)^j}{\lambda^{2j}}\,f_j(r_j)\,(m)_j+\widetilde R_\nu\right), \tag{16}$$
with
$$|\widetilde R_\nu| < M_\nu\,\frac{e^{1/(12(m-\nu))}\,(m)_\nu}{(\lambda-m)^{2\nu+1}} \qquad (m\ge1;\ \nu\ge1), \tag{17}$$
where $M_\nu$ is an explicitly computable constant depending only on $\nu$.
No simple general expression for the $f_j(r_j)$ seems available. In particular, setting $r=m/\lambda$, we have
$$\lambda^{-2}f_1(r_1) = \frac{1}{(1-r)\,(\lambda-m+1)^2},$$
$$\lambda^{-4}f_2(r_2) = \frac{3(\lambda-m)+4}{(1-r)\,(\lambda-m+1)^2\,(\lambda-m+2)^3},$$
$$\lambda^{-6}f_3(r_3) = \frac{15(\lambda-m)^3+90(\lambda-m)^2+175(\lambda-m)+108}{(1-r)\,(\lambda-m+1)^2\,(\lambda-m+2)^3\,(\lambda-m+3)^4}.$$
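Using the closed form of $\lambda^{-2}f_1(r_1)$ listed above, the two-term truncation of (14) can be compared numerically with direct summation (Python; the parameters are ours):

```python
import math

def poisson_cdf(m, lam):
    term, total = 1.0, 1.0
    for j in range(1, m + 1):
        term *= lam / j
        total += term
    return math.exp(-lam) * total

lam, m = 400.0, 200
r = m / lam
w = math.exp(-lam + m * math.log(lam) - math.lgamma(m + 1))  # e^{-lam} lam^m / m!
one_term = w / (1.0 - r)
# j = 1 correction: - lambda^{-2} f_1(r_1) (m)_1 = - m / ((1-r)(lam - m + 1)^2)
two_term = w * (1.0 / (1.0 - r) - m / ((1.0 - r) * (lam - m + 1) ** 2))
exact = poisson_cdf(m, lam)
```

The single correction term reduces the relative error by roughly two orders of magnitude here.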
Proof of Theorem 2. For convenience, let us write $I(m;f)$ instead of $\pi_m(\lambda)$, where $f(z) = f_0(z) = (1-z)^{-1}$. As in [17, 41], our starting point is the formula
$$I(m;f) = f(r_0)\,e^{-\lambda}\,\frac{\lambda^m}{m!}+\frac{e^{-\lambda}}{2\pi i}\oint_{|z|=r_0}z^{-m-1}e^{\lambda z}\bigl(f(z)-f(r_0)\bigr)\,dz.$$
An integration by parts based on (10) then gives
$$I(m;f) = f(r_0)\,e^{-\lambda}\,\frac{\lambda^m}{m!}-\frac{I(m-1;f_1)}{\lambda}, \tag{18}$$
for $m\ge1$. Formula (18) together with the initial condition $I(0;f) = f(0)\,e^{-\lambda} = e^{-\lambda}$ leads to (14) by induction.
Thus, to establish the asymptotic nature of (16), it suffices to estimate the error term
$$(-1)^\nu\,\lambda^{-\nu}\,I(m-\nu;f_\nu) \qquad (1\le\nu\le m).$$
To this end, we use the following representation (cf. [41, p. 238]) of $f_\nu$:
$$f_\nu(z) = \int_0^1\!\!\int_0^1\!\cdots\!\int_0^1 t_1^{2\nu-1}\,t_2^{2\nu-3}\cdots t_\nu\; f^{(2\nu)}\Bigl(r_0+t_1\bigl(r_1-r_0+t_2(r_2-r_1+\cdots+t_\nu(z-r_{\nu-1})\cdots)\bigr)\Bigr)\,dt_\nu\cdots dt_1,$$
from which we deduce by induction
$$\bigl|f_\nu\bigl(r_\nu e^{it}\bigr)\bigr| \le \frac{(2\nu)!}{2^\nu\,\nu!\,(1-r_0)^{2\nu+1}} \qquad (\nu\ge1).$$
Consequently,
$$\lambda^{-\nu}\,\bigl|I(m-\nu;f_\nu)\bigr| \le \frac{(2\nu)!\,e^{-\lambda+m-\nu}\,r_\nu^{\nu-m}}{2^\nu\,\nu!\,\pi\,\lambda^\nu\,(1-r_0)^{2\nu+1}}\int_0^{\pi}e^{-2(m-\nu)t^2/\pi^2}\,dt.$$
Proceeding along the same lines as in the estimation of $\Delta_\alpha$, we deduce (17). Note that a better numerical bound for $M_\nu$ can be obtained by applying (12).
In view of the two error terms (8) and (17), it is obvious that the expansion (14) is more powerful
than (6). Roughly, this is due to the fact that the interpolation point of each fj in (14) adaptively
varies with m j, while it remains fixed in (6). A major disadvantage of (14) is that the computation
of its coefficients becomes involved as j increases. We propose, as in [41], a modification of (14) in
which successive interpolation points are the same.
Theorem 3 Let $r=m/\lambda$. The distribution function $\pi_m(\lambda)$ satisfies the asymptotic expansion
$$\pi_m(\lambda) \sim e^{-\lambda}\,\frac{\lambda^m}{m!}\sum_{j\ge0}\frac{(-1)^j\,\tilde f_j(r)}{\lambda^j}, \tag{19}$$
where $\tilde f_0(z) = (1-z)^{-1}$ and
$$\tilde f_{j+1}(z) = z\,\frac{d}{dz}\,\frac{\tilde f_j(z)-\tilde f_j(r)}{z-r} \qquad (j\ge0).$$
The first few $\tilde f_j(r)$ are
$$\tilde f_0(r) = \frac{1}{1-r},\qquad \tilde f_1(r) = \frac{r}{(1-r)^3},\qquad \tilde f_2(r) = \frac{r(r+2)}{(1-r)^5},$$
$$\tilde f_3(r) = \frac{r(r^2+8r+6)}{(1-r)^7},\qquad \tilde f_4(r) = \frac{r(r^3+22r^2+58r+24)}{(1-r)^9},$$
$$\tilde f_5(r) = \frac{r(r^4+52r^3+328r^2+444r+120)}{(1-r)^{11}}.$$
Proof. Writing $J(f)$ for the integral in (9) and proceeding as above, but with the interpolation point fixed at $r$, we obtain
$$J(f) = \tilde f_0(r)\,e^{-\lambda}\,\frac{\lambda^m}{m!}-\frac{J(\tilde f_1)}{\lambda},$$
from which (19) follows as above.
We can write
$$\tilde f_j(z) = \sum_{h\ge0}\frac{\varpi_{j,h}(r)}{(1-r)^{2j+h+1}}\,(z-r)^h,$$
where $\varpi_{0,h}(r) = 1$ for all $h\ge0$ and
$$\varpi_{j+1,h}(r) = r\,(h+1)\,\varpi_{j,h+2}(r)+h\,(1-r)\,\varpi_{j,h+1}(r) \qquad (j,h\ge0),$$
so that $\tilde f_j(r) = \varpi_{j,0}(r)/(1-r)^{2j+1}$.
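The recurrence for the $\varpi_{j,h}$ is easily mechanized. The short Python sketch below (our own; polynomials in $r$ stored as coefficient lists, lowest degree first) reproduces the numerators of the $\tilde f_j(r)$ listed in Theorem 3:

```python
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def pscale(a, c):
    return [c * x for x in a]

def pshift(a):                      # multiplication by r
    return [0] + a

# level 0:  varpi_{0,h} = 1 for all h (enough columns for four levels)
levels = [[[1]] * 12]
for j in range(4):
    prev = levels[-1]
    cur = []
    for h in range(len(prev) - 2):
        t1 = pscale(pshift(prev[h + 2]), h + 1)                             # r (h+1) varpi_{j,h+2}
        t2 = pscale(padd(prev[h + 1], pscale(pshift(prev[h + 1]), -1)), h)  # h (1-r) varpi_{j,h+1}
        cur.append(padd(t1, t2))
    levels.append(cur)
```

Then `levels[4][0] == [0, 24, 58, 22, 1]`, i.e. $\varpi_{4,0}(r) = r(r^3+22r^2+58r+24)$, in agreement with the table.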
Remark. The preceding methods of proof apply mutatis mutandis to the case when $m\ge\lambda+A\sqrt{\lambda}$, $A>0$. It suffices to apply Cauchy's residue theorem (or, equivalently, to consider the tails $e^{-\lambda}\sum_{j>m}\lambda^j/j!$):
$$\pi_m(\lambda) = 1-\frac{e^{-\lambda}}{2\pi i}\oint_{|z|=\rho}\frac{z^{-m-1}e^{\lambda z}}{z-1}\,dz \qquad (\rho>1).$$
Many other types of normal approximation (usually of the form $\pi_m(\lambda)\approx\Phi(g(\lambda,m))$) can be found in [31, 2, 13].
Concerning the case when $m\to\infty$ and $\lambda$ bounded, we have by the saddlepoint method (cf. [50])
$$\pi_m(\lambda) = 1-\frac{e^{-\lambda+m}\,r^{-m-1}}{\sqrt{2\pi m}}\left(1+\frac{12\lambda-13}{12m}+\frac{288\lambda^2-888\lambda+313}{288m^2}+O\bigl(m^{-3}\bigr)\right),$$
where $r = m/\lambda > 1$. Note that this expansion can also be obtained from (3), but with more involved computations.
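The leading term of this tail formula is easily checked numerically (Python; parameters ours):

```python
import math

def poisson_tail(m, lam):
    """1 - pi_m(lam) = e^{-lam} * sum_{j > m} lam^j / j!"""
    # first tail term, e^{-lam} lam^{m+1} / (m+1)!, in log scale
    term = math.exp(-lam + (m + 1) * math.log(lam) - math.lgamma(m + 2))
    total, j = 0.0, m + 1
    while term > 0.0:
        total += term
        j += 1
        term *= lam / j
        if term < 1e-18 * total:
            break
    return total

lam, m = 2.0, 60
r = m / lam
approx = math.exp(-lam + m) * r ** (-m - 1) / math.sqrt(2 * math.pi * m)
exact = poisson_tail(m, lam)
```

With $\lambda$ fixed and $m$ large, the ratio `approx / exact` is $1+O(1/m)$.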
2.3 Poissonization
Poissonization is a widely used technique in stochastic processes, summability of divergent sequences, analysis of algorithms, etc.; see, for example, [1, 6, 18, 35, 19]. The idea is roughly described as follows. Given a discrete probability distribution $\{a_k\}_{k\ge0}$ (or, in general, a complex sequence), consider the Poisson generating function
$$\hat b(\lambda) = e^{-\lambda}\sum_{j\ge0}a_j\,\frac{\lambda^j}{j!} \qquad (\lambda\in\mathbb{C}).$$
According to the preceding discussions, if the growth order of $\hat b$, as $|z|\to\infty$ in a certain sector containing the positive real axis, is not too large, then $a_n$ would be well approximated by $\hat b(n)$, since $n$ is the saddle point of $e^z z^{-n}$. Thus the Poisson heuristic may be regarded as a saddlepoint heuristic.
To be more precise, let us consider the following de-Poissonization lemma from [35] (properly modified in a form suitable for our discussions).
Let $S_\theta$ be a cone in the $z$-plane:
$$S_\theta = \{z : |\arg z|\le\theta\} \qquad (0<\theta<\pi/2),$$
and let $f(z) := e^{-z}\sum_{j\ge0}a_j z^j/j!$ be an entire function for some given sequence $a_n$. If, for $z\in S_\theta$ and $|z|$ large,
$$|f(z)| = O\bigl(|z|^\beta\bigr),$$
and, for $z\notin S_\theta$,
$$|e^z f(z)| = O\bigl(|z|^\gamma e^{\alpha|z|}\bigr) \qquad (0<\alpha<1),$$
then
$$a_n = f(n)+O\bigl(n^{\beta-1/2}\bigr) \qquad (n\to\infty). \tag{20}$$
Under these conditions, $f$ satisfies
$$f^{(j)}(z) = O\bigl(|z|^{\beta-j}\bigr) \tag{21}$$
in a slightly smaller cone, by Ritt's theorem (cf. [33, pp. 9–11]). From this observation and the proof technique of Theorem 1, we can derive the asymptotic expansion
$$a_n = f(n)+\sum_{2\le j<\nu}f^{(j)}(n)\,\tau_j(n)+O\bigl(n^{\beta-\nu/2}\bigr) \qquad (n\to\infty), \tag{22}$$
for $\nu\ge2$. In particular, the error term in (20) is $O\bigl(n^{\beta-1}\bigr)$. That (22) is an asymptotic expansion is easily seen from (21) and the fact that $\tau_j(n)$ is a polynomial in $n$ of degree $\lfloor j/2\rfloor$; thus $f^{(j)}(n)\,\tau_j(n) = O\bigl(n^{\beta-\lceil j/2\rceil}\bigr)$.
On the other hand, the method of proof of Theorem 2 also applies, and we obtain
$$a_n = f(n)+\sum_{1\le j<\nu}(-1)^j\,f_j(n-j)\,(n)_j+O\bigl(n^{\beta-\nu}\bigr) \qquad (\nu\ge1),$$
the $f_j$ being defined as in (15) with $f_0=f$. Note that $f_j(n-j)\,(n)_j = O\bigl(n^{\beta-j}\bigr)$, $j\ge0$. Thus the use of the second expansion is preferable.
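As a toy illustration of (22) (our own example), take $a_j = j^2$, for which $f(z) = z^2+z$ exactly; the plain de-Poissonized value $f(n)$ then overshoots $a_n$ by exactly $n$, and the single correction $f''(n)\,\tau_2(n) = 2\cdot(-n/2)$ removes the whole error:

```python
import math

n = 100
# f(n) = e^{-n} sum_j j^2 n^j / j!, evaluated by direct summation
pmf = math.exp(-n)                    # P(Poisson(n) = 0)
fn = 0.0
for j in range(8 * n):                # tail beyond 8n is negligible
    fn += j * j * pmf
    pmf *= n / (j + 1)
a_n = n * n                           # the sequence we Poissonized
corrected = fn + 2.0 * (-n / 2.0)     # f(n) + f''(n) tau_2(n), with f'' = 2, tau_2(n) = -n/2
```

Here `fn` equals $n^2+n$ up to rounding, while `corrected` recovers $a_n = n^2$.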
3 Binomial distribution

The methods we used in the last section can be amended to treat the binomial distribution. Let $Y_n$ be a binomial random variable with parameters $p$ and $n$, $0<p<1$:
$$P(Y_n = j) = \binom{n}{j}p^j q^{n-j} \qquad (0\le j\le n),$$
where $q = 1-p$. As in the last section, we first list some known results regarding the asymptotics of the distribution function of $Y_n$, and then present some new ones. Set, for $0\le m\le n$,
$$B_m(n) = \sum_{0\le j\le m}\binom{n}{j}p^j q^{n-j}.$$
3.1 Known results

A rather complete account of different approximations and bounds for $B_m(n)$ is given in [25, pp. 114–122], most of them being of normal type; cf. also [2, 13]. To this account, we may add the references [46, 5, 30, 43].
Asymptotics of this function have been extensively studied by Temme [39, 43]. In particular, we quote the following result:
$$B_m(n) = 1-\Phi\bigl(\eta\sqrt{2n}\bigr)+\sqrt{\frac{(n-m)\,q}{2\pi(m+1)(n+1)\,q_0}}\;\Bigl(\frac{q}{q_0}\Bigr)^{n-m}\Bigl(\frac{p}{p_0}\Bigr)^{m+1}\Bigl(\frac{p_0}{p_0-p}+\frac{1}{q-q_0}\Bigr)\bigl(1+O\bigl(m^{-1}\bigr)\bigr), \tag{23}$$
where $p_0 = (m+1)/(n+1)$, $q_0 = 1-p_0$, and $\eta$ (of the sign of $p_0-p$) is defined by $\eta^2 = p_0\log(p_0/p)+q_0\log(q_0/q)$.
6. By the definition of $B_m(n)$,
$$B_m(n) = \binom{n}{m}p^m q^{n-m}\left(1+\sum_{1\le j\le m}\frac{q^j\,(m)_j}{p^j\,(n-m+j)_j}\right); \tag{24}$$
one could expand the factor $(1-z)^{-1}$ at the saddle point $z = r = qm/(p(n-m+1))$, and then proceed as above; the error estimates obtained are less satisfactory, however. Hence we use instead the following representation:
$$B_m(n) = \frac{p^{m+1}}{2\pi i}\oint_{|z|=\rho}\frac{z^{-n+m}\,(1-qz)^{-m-1}}{z-1}\,dz \qquad (1<\rho<1/q),$$
which follows either from (25) by a change of variables or from the well-known relation between binomial and negative binomial distributions (cf. [25, p. 210]).
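Identity (24) is exact and can be confirmed directly (Python; parameters ours):

```python
import math

n, m, p = 60, 18, 0.4
q = 1.0 - p
direct = sum(math.comb(n, j) * p ** j * q ** (n - j) for j in range(m + 1))
lead = math.comb(n, m) * p ** m * q ** (n - m)
s, term = 1.0, 1.0
for j in range(1, m + 1):
    # successive term ratio:  q (m - j + 1) / (p (n - m + j))
    term *= q * (m - j + 1) / (p * (n - m + j))
    s += term
```

Here `lead * s` reproduces `direct` to machine precision.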
It will be more convenient to work with
$$\beta_m(n) = B_m(n+1) = \frac{p^{m+1}}{2\pi i}\oint_{|z|=\rho}\frac{z^{-n+m-1}\,(1-qz)^{-m-1}}{z-1}\,dz \qquad (1<\rho<1/q). \tag{26}$$
Theorem 4 If $3\le m\le np-A\sqrt{n}$, where $A>0$, then $\beta_m(n)$ satisfies
$$\beta_m(n) = \binom{n}{m}\,\frac{p^{m+1}q^{n-m}}{\zeta-1}\left(1+\sum_{2\le j<\nu}\frac{(-1)^j\,j!\,\theta_j(n,m)}{(pn-m)^j\,(n-1)_{j-1}}+E_\nu\right), \tag{27}$$
where $\zeta = (n-m)/(qn)$ and the $\theta_j(n,m)$ are polynomials in $n$ and $m$ determined by the recurrence
$$(j+2)\,\theta_{j+2}(n,m) = (j+1)(n-2m)\,\theta_{j+1}(n,m)-(j+1)\,m(n-m)(n-j)\,\theta_j(n,m), \tag{28}$$
for $j\ge0$, with the initial conditions $\theta_0(n,m) = n^{-1}$ and $\theta_1(n,m) = 0$. The absolute value of the error term is bounded above by
$$|E_\nu| < C_\nu\,\frac{m^{\nu}\,(n-m)^{\nu/2}}{(pn-m)^{\nu}\,(m-\nu)^{\nu/2}\,n^{\nu/2}} \qquad (m\ge1,\ 2\le\nu<m), \tag{29}$$
with $C_\nu = 2^{(\nu+2)/2}\,e^{1/6}\,(\nu-1)\,\Gamma\bigl((\nu-1)/2\bigr)\,\pi^{-3/2}$.
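The leading term of (27), i.e. the $j=0$ residue computed in the proof, can be checked numerically (Python; parameters ours):

```python
import math

n, m, p = 200, 30, 0.3
q = 1.0 - p
# beta_m(n) = B_m(n+1), by direct summation
beta = sum(math.comb(n + 1, j) * p ** j * q ** (n + 1 - j) for j in range(m + 1))
zeta = (n - m) / (q * n)
lead = math.comb(n, m) * p ** (m + 1) * q ** (n - m) / (zeta - 1)
```

This far into the left tail ($m = 30$ against $np = 60$), `lead` agrees with `beta` to within a few percent.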
The result is clearly stronger than (23). The polynomials $\theta_j$ satisfy
$$\frac{|\theta_j(n,m)|}{(n-1)_{j-1}} = O\bigl(n^{\lfloor j/2\rfloor}\bigr) \qquad (j\ge2),$$
uniformly for $0\le m\le n$. Note that we restricted $\nu$ to be less than $m$, since otherwise a direct computation of $\beta_m(n)$ from its definition is preferable. This condition is also needed to justify the convergence of an integral.
Proof. Starting from (26), we expand the factor $(z-1)^{-1}$ at the saddle point $z=\zeta$:
$$\beta_m(n) = p^{m+1}\sum_{0\le j<\nu}\frac{(-1)^j}{(\zeta-1)^{j+1}}\cdot\frac{1}{2\pi i}\oint_{|z|=\zeta}(z-\zeta)^j\,z^{-n+m-1}\,(1-qz)^{-m-1}\,dz+Z_\nu, \tag{30}$$
where
$$Z_\nu = \frac{p^{m+1}\,(-1)^\nu}{(\zeta-1)^\nu}\cdot\frac{1}{2\pi i}\oint_{|z|=\zeta}\frac{(z-\zeta)^\nu}{z-1}\,z^{-n+m-1}\,(1-qz)^{-m-1}\,dz.$$
By an integration by part using the relation
1 d n+m
z n+m1 (1 qz)m1 = (1 qz)m ,
z
qn(z ) dz
it follows that
qeit m
1
2 1
Z
|t| dt
1 q
2
Z m/2
1 4n(n m) 2
2
< t 1+ t dt
0 2 m2
Z
= 2 2 m1 (n(n m))(1)/2 u(3)/2 (1 + u)m/2 du.
0
Note that the integral on the right-hand side (a beta function B(( 1)/2, (m +1)/2)) is convergent
for > 1 and m > 1. We next find an upper bound for this integral. We have for 2 < m
Z
u(3)/2 (1 + u)m/2 du < 2(1)/2 (m )(1)/2 (( 1)/2).
0
13
For,
Z Z
u(3)/2 (1 + u)m/2 du = (1 ew )(3)/2 e(m+1)w/2 dw
0 0
Z
w(3)/2 e(m+1)w/2 dw, if 3;
< Z0
w1/2 e(m2)w/2 dw, if = 2,
0
(
2 + 1)(1)/2 (( 1)/2), if 3;
(1)/2 (m
= p
2/(m 2), if = 2,
where 1 F1 (a; c; z) = (a, c, z) is the confluent hypergeometric functions (cf. [15, Chap. VI]). (The above
relation may also be represented in terms of other hypergeometric functions.) The final form of the
coefficients in (27) and (28) follows from the differential equation satisfied by 1 F1 and straightforward
computations.
In an analogous manner, we deduce the following results whose proofs will be omitted.
Theorem 5 Let j = (n m j)/(n 2j), j 0, and m0 = min{m, n m}. For 0 m pn 1,
m (n) satisfies the identity
!
X (1)j gj (j ) n 2j
m (n) = pm+1 q nm g0 (0 ) + j
,
1jm0
q n(n 2) (n 2j + 2) m 2j
14
where = (n m)/(qn), and ge0 (z) = (z 1)1 and
Since there are two variables (n and m) in this case, other expansions are also possible.
We list the first few terms of ge in the following.
(1 q) (1 q)
ge1 () = 3
, ge2 () = ((1 2q) + 2 q) ,
( 1) ( 1)5
(1 q) 2 2 2 2
ge3 () = (6q 6q + 1) + 2(4q 9q + 4) + q 6q + 6 .
( 1)7
4 Remarks
We have discussed analytic methods for describing the asymptotic behaviours of integrals of the forms
1 1
I I
m1 z
z e f (z)dz and z m1 (q + pz)n f (z)dz,
2i 2i
where f (z) = (1 z)1 . The function f being meromorphic, the expansions we derived are valid in
a somewhat restricted range. In general, if f is entire with moderate growth order at infinity (as the
de-Poissonization lemma in Section 2.3), our expansions would hold in a wider range for the second
parameter. This is so, for example, when f (z) = 1/(z) in the case of the Stirling numbers of the first
kind (cf. [23, 14]). A great deal of related combinatorial and arithmetical problems can be found in
[22].
Integrals of the form
1
I
z m1 L(z) f (z)dz ( ),
2i
with
X zj
L(z) = ,
j1
j!(j 1)!
arising in many combinatorial and arithmetic instances (cf. [22, Chaps. 6,10]) can also be dealt with
along the lines of this article, using known analytic properties of the modified Bessel functions.
Acknowledgements
The author is indebted to Jim Pitman and W.-Q. Liang for many valuable comments and suggestions.
References
[1] D. Aldous, Probability approximations via the Poisson clumping heuristic, Springer-Verlag, New
York, 1989.
[2] D. Alfers and H. Dinges, A normal approximation for beta and gamma tail probabilities,
Zeitschrift fur Wahrscheinlichkeitstheorie und verwandte Gebiete, 65:399420 (1984).
[3] R. R. Bahadur, Some approximations to the binomial distribution function, The Annals of
Mathematical Statistics, 31:4354 (1960).
[4] M. Balazard, H. Delange, and J.-L. Nicolas, Sur le nombre de facteurs premiers des entiers,
Comptes Rendus de lAcademie des Sciences, Serie I, Paris, 306:511514 (1988).
15
[5] A. D. Barbour, L. Holst, and S. Janson, Poisson approximation, Oxford Science Publications,
Clarendon Press, Oxford, 1992.
[7] N. Bleistein, Uniform asymptotic expansions of integrals with stationary point near algebraic
singularity, Communications on Pure and Applied Mathematics, 19:353370 (1966).
[8] Bo Rui and R. Wong, Uniform asymptotic expansion of Charlier polynomials, Methods and
Applications of Analysis, 1:294313 (1994).
[9] T.-T. Cheng, The normal approximation to the Poisson distribution and a proof of a conjecture
of Ramanujan, Bulletin of the American Mathematical Society, 55:396401 (1949).
[11] H. E. Daniels, Tail probability approximations, International Statistical Review, 55:3748 (1987).
[12] H. Delange, Sur le nombre des diviseurs premiers de n, Acta Arithmetica, 7:191215 (1962).
[13] H. Dinges, Special cases of second order Wiener germ approximations, Probability Theory and
Related Fields, 83:557 (1989).
[14] M. Drmota and M. Soria, Marking in combinatorial constructions: generating functions and
limiting distributions, Theoretical Computer Science, 144:6799 (1995).
[15] A. Erdelyi, Higher transcendental functions, volume I, Robert E. Krieger Publishing Company,
Malabar, Florida, 1953.
[16] A. Erdelyi, Higher transcendental functions, volume II, Robert E. Krieger Publishing Company,
Malabar, Florida, 1953.
[17] J. Franklin and B. Friedman, A convergent asymptotic representation for integrals, Proceedings
of the Cambridge Philosophical Society, 53:612619 (1957).
[18] G. H. Gonnet and J. I. Munro, The analysis of linear probing sort by the use of a new mathematical
transform, Journal of Algorithms, 5:451470 (1984).
[19] P. Grabner, Searching for losers, Random Structures and Algorithms, 4:99110 (1993).
[20] F. A. Haight, Handbook of the Poisson distribution, Wiley, New York, 1968.
[21] H.-K. Hwang, A Poisson geometric law for the number of components in unlabelled combinato-
rial structures, submitted.
[22] H.-K. Hwang, Theoremes limites pour les structures combinatoires et les fonctions arithmetiques,
These, Ecole polytechnique, 1994.
[23] H.-K. Hwang, Asymptotic expansions for the Stirling numbers of the first kind, Journal of
Combinatorial Theory, series A, 71:343351 (1995).
[25] N. L. Johnson, S. Kotz, and A. W. Kemp, Univariate discrete distributions, John Wiley & Sons,
Inc., New York, second edition, 1992.
[26] J. E. Kolassa, Series approximation methods in statistics, Lecture Notes in Statistics, volume 88,
Springer-Verlag, 1994.
16
[27] V. F. Kolchin, Random mappings, Optimization Software Inc., New York, 1986.
[28] J. E. Littlewood, On the probability in the tail of a binomial distribution, Advances in Applied
Probability, 1:4372 (1969).
[29] R. Lugannani and S. O. Rice, Saddlepoint approximation for the distribution of the sum of
independent random variables, Advances in Applied Probability, 12:475490 (1980).
[30] B. D. McKay, On Littlewoods estimate for the binomial distribution, Advances in Applied
Probability, 21:475478 (1989).
[31] W. Molenaar, Approximations to the Poisson, binomial and hypergeometric functions, Mathe-
matical Centre Tracts 31, Amsterdam, 1970.
[32] K. K. Norton, Estimates for partial sums of the exponential series, Journal of Mathematical
Analysis and Applications, 63:265296 (1978).
[33] F. W. J. Olver, Asymptotics and special functions, Academic Press, New York, 1974.
[35] B. Rais, P. Jacquet, and W. Szpankowski, Limiting distribution for the depth in Patricia tries,
SIAM Journal on Discrete Mathematics, 6:197213 (1993).
[36] S. O. Rice, Uniform asymptotic expansions for saddle point integralsapplication to a probability
distribution occurring in noise theory, The Bell System Technical Journal, 47:19712013 (1968).
[37] A. Selberg, Note on a paper by L. G. Sathe, Journal of the Indian Mathematical Society, 18:8387
(1954).
[38] S. Ya. Shorgin, Approximation of a generalized Binomial distribution, Theory of Probability and
its Applications, 22:846850 (1977).
[39] N. M. Temme, The asymptotic expansion of the incomplete gamma functions, SIAM Journal on
Mathematical Analysis, 10:757766 (1979).
[40] N. M. Temme, The uniform asymptotic expansions of a class of integrals related to cumulative
distribution functions, SIAM Journal on Mathematical Analysis, 13:239253 (1982).
[41] N. M. Temme, Uniform asymptotic expansions of Laplace integrals, Analysis, 3:221249 (1983).
[42] N. M. Temme, A class of polynomials related to those of Laguerre, In Lecture Notes in Mathemat-
ics, 1171, Proceedings of the Laguerre Symposium held at Bar-le-Duc, pages 459464. Springer-
Verlag, Berlin, 1985.
[43] N. M. Temme, Incomplete Laplace integrals: uniform asymptotic expansion with application to
the incomplete beta function, SIAM Journal on Mathematical Analysis, 18:16381663 (1987).
[44] G. Tenenbaum, Introduction a la theorie analytique et probabiliste des nombres, Institut Elie
Cartan, Universite de Nancy I, Nancy, France, 1990. English version by C. B. Thomas, Cambridge
University Press, 1995.
[45] J. V. Uspensky, On Ch. Jordans series for probability, Annals of Mathematics, 32:306312
(1931).
17
[47] B. L. van der Waerden, On the method of saddle-points. Applied Scientific Research, B2:3345
(1951).
[48] E. T. Whittaker and G. N. Watson, A course of modern analysis, an introduction to the general
theory of infinite processes and of analytic functions; with an account of the principal transcendental
functions. Cambridge University Press, Cambridge, 4th edition, 1927.
[50] R. Wong, Asymptotic approximations of integrals, Academic Press, Inc., Boston, 1989.
Hsien-Kuei Hwang
Institute of Statistical Science
Academia Sinica
Taipei, 11529
Taiwan
e-mail: hkhwang@stat.sinica.edu.tw
18