
Probability Theory

Richard F. Bass

These notes are © 1998 by Richard F. Bass. They may be used for personal or classroom purposes, but not for commercial purposes. Revised 2001.

1. Basic notions.
A probability or probability measure is a measure whose total mass is one. Because the origins of probability are in statistics rather than analysis, some of the terminology is different. For example, instead of denoting a measure space by $(X, \mathcal{A}, \mu)$, probabilists use $(\Omega, \mathcal{F}, P)$. So here $\Omega$ is a set, $\mathcal{F}$ is called a $\sigma$-field (which is the same thing as a $\sigma$-algebra), and $P$ is a measure with $P(\Omega) = 1$. Elements of $\mathcal{F}$ are called events. Elements of $\Omega$ are denoted $\omega$.
Instead of saying a property occurs almost everywhere, we talk about properties occurring almost surely, written a.s. Real-valued measurable functions from $\Omega$ to $\mathbb{R}$ are called random variables and are usually denoted by $X$ or $Y$ or other capital letters. We often abbreviate "random variable" by r.v.
We let $A^c = \{\omega \in \Omega : \omega \notin A\}$ (called the complement of $A$) and $B - A = B \cap A^c$.
Integration (in the sense of Lebesgue) is called expectation or expected value, and we write $E X$ for $\int X\,dP$. The notation $E[X; A]$ is often used for $\int_A X\,dP$.
The random variable $1_A$ is the function that is one if $\omega \in A$ and zero otherwise. It is called the indicator of $A$ (the name characteristic function in probability refers to the Fourier transform). Events such as $\{\omega : X(\omega) > a\}$ are almost always abbreviated by $(X > a)$.
Given a random variable $X$, we can define a probability on $\mathbb{R}$ by
$$P_X(A) = P(X \in A), \qquad A \subset \mathbb{R}. \tag{1.1}$$
The probability $P_X$ is called the law of $X$ or the distribution of $X$. We define $F_X : \mathbb{R} \to [0,1]$ by
$$F_X(x) = P_X((-\infty, x]) = P(X \le x). \tag{1.2}$$
The function $F_X$ is called the distribution function of $X$.


As an example, let $\Omega = \{H, T\}$, $\mathcal{F}$ all subsets of $\Omega$ (there are 4 of them), and $P(\{H\}) = P(\{T\}) = \frac{1}{2}$. Let $X(H) = 1$ and $X(T) = 0$. Then $P_X = \frac{1}{2}\delta_0 + \frac{1}{2}\delta_1$, where $\delta_x$ is point mass at $x$, that is, $\delta_x(A) = 1$ if $x \in A$ and 0 otherwise. $F_X(a) = 0$ if $a < 0$, $\frac{1}{2}$ if $0 \le a < 1$, and 1 if $a \ge 1$.
Proposition 1.1. The distribution function $F_X$ of a random variable $X$ satisfies:
(a) $F_X$ is nondecreasing;
(b) $F_X$ is right continuous with left limits;
(c) $\lim_{x \to \infty} F_X(x) = 1$ and $\lim_{x \to -\infty} F_X(x) = 0$.
Proof. We prove the first part of (b) and leave the others to the reader. If $x_n \downarrow x$, then $(X \le x_n) \downarrow (X \le x)$, and so $P(X \le x_n) \downarrow P(X \le x)$ since $P$ is a measure.
Note that if $x_n \uparrow x$, then $(X \le x_n) \uparrow (X < x)$, and so $F_X(x_n) \uparrow P(X < x)$.
Any function $F : \mathbb{R} \to [0,1]$ satisfying (a)-(c) of Proposition 1.1 is called a distribution function, whether or not it comes from a random variable.

Proposition 1.2. Suppose $F$ is a distribution function. There exists a random variable $X$ such that $F = F_X$.
Proof. Let $\Omega = [0,1]$, $\mathcal{F}$ the Borel $\sigma$-field, and $P$ Lebesgue measure. Define $X(\omega) = \sup\{x : F(x) < \omega\}$. It is routine to check that $F_X = F$.
In the above proof, essentially $X = F^{-1}$. However $F$ may have jumps or be constant over some intervals, so some care is needed in defining $X$.
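The construction in this proof is the inverse transform method used in practice to generate random samples. Here is a minimal sketch (assuming NumPy is available; the exponential $F$ is an illustrative choice, not part of the notes):

```python
import numpy as np

# Sample from a distribution function F via X(omega) = sup{x : F(x) < omega},
# with omega uniform on [0,1]. For F(x) = 1 - exp(-x) (exponential), the
# generalized inverse is available in closed form: F^{-1}(omega) = -log(1-omega).
rng = np.random.default_rng(0)
omega = rng.uniform(size=100_000)
x = -np.log1p(-omega)                      # X = F^{-1}(omega)

# Empirically F_X should equal F: compare at a few points.
for t in [0.5, 1.0, 2.0]:
    print(t, (x <= t).mean(), 1 - np.exp(-t))
```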
Certain distributions or laws are very common. We list some of them.
(a) Bernoulli. A random variable is Bernoulli if $P(X = 1) = p$, $P(X = 0) = 1 - p$ for some $p \in [0,1]$.
(b) Binomial. This is defined by $P(X = k) = \binom{n}{k}p^k(1-p)^{n-k}$, where $n$ is a positive integer, $0 \le k \le n$, and $p \in [0,1]$.
(c) Geometric. For $p \in (0,1)$ we set $P(X = k) = (1-p)p^k$. Here $k$ is a nonnegative integer.
(d) Poisson. For $\lambda > 0$ we set $P(X = k) = e^{-\lambda}\lambda^k/k!$. Again $k$ is a nonnegative integer.
(e) Uniform. For some positive integer $n$, set $P(X = k) = 1/n$ for $1 \le k \le n$.
If $F$ is absolutely continuous, we call $f = F'$ the density of $F$. Some examples of distributions characterized by densities are the following.
(f) Uniform on $[a,b]$. Define $f(x) = (b-a)^{-1}1_{[a,b]}(x)$. This means that if $X$ has a uniform distribution, then
$$P(X \in A) = \int_A \frac{1}{b-a}1_{[a,b]}(x)\,dx.$$
(g) Exponential. For $x > 0$ let $f(x) = \lambda e^{-\lambda x}$.
(h) Standard normal. Define $f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$. So
$$P(X \in A) = \frac{1}{\sqrt{2\pi}}\int_A e^{-x^2/2}\,dx.$$
(i) $N(\mu, \sigma^2)$. We shall see later that a standard normal has mean zero and variance one. If $Z$ is a standard normal, then a $N(\mu, \sigma^2)$ random variable has the same distribution as $\mu + \sigma Z$. It is an exercise in calculus to check that such a random variable has density
$$\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x-\mu)^2/2\sigma^2}. \tag{1.3}$$
(j) Cauchy. Here
$$f(x) = \frac{1}{\pi}\,\frac{1}{1 + x^2}.$$

We can use the law of a random variable to calculate expectations.
Proposition 1.3. If $g$ is bounded or nonnegative, then
$$E\,g(X) = \int g(x)\,P_X(dx).$$
Proof. If $g$ is the indicator of an event $A$, this is just the definition of $P_X$. By linearity, the result holds for simple functions. By the monotone convergence theorem, the result holds for nonnegative functions, and by linearity again, it holds for bounded $g$.
If $F_X$ has a density $f$, then $P_X(dx) = f(x)\,dx$. So, for example, $E X = \int xf(x)\,dx$ and $E X^2 = \int x^2 f(x)\,dx$. (We need $E|X|$ finite to justify this if $X$ is not necessarily nonnegative.)

We define the mean of a random variable to be its expectation, and the variance of a random variable is defined by
$$\operatorname{Var}X = E(X - E X)^2.$$
For example, it is routine to see that the mean of a standard normal is zero and its variance is one.
Note
$$\operatorname{Var}X = E(X^2 - 2XE X + (E X)^2) = E X^2 - (E X)^2.$$
Another equality that is useful is the following.
Proposition 1.4. If $X \ge 0$ a.s. and $p > 0$, then
$$E X^p = \int_0^\infty p\lambda^{p-1}P(X > \lambda)\,d\lambda.$$
The proof will show that this equality is also valid if we replace $P(X > \lambda)$ by $P(X \ge \lambda)$.
Proof. Use Fubini's theorem and write
$$\int_0^\infty p\lambda^{p-1}P(X > \lambda)\,d\lambda = E\int_0^\infty p\lambda^{p-1}1_{(\lambda,\infty)}(X)\,d\lambda = E\int_0^X p\lambda^{p-1}\,d\lambda = E X^p.$$
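Proposition 1.4 is easy to check numerically. A sketch (assuming NumPy; the exponential distribution and the integration grid are arbitrary choices): for $X$ exponential with $\lambda = 1$ and $p = 2$, both sides should be near $E X^2 = 2$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=50_000)        # X >= 0
p = 2.0

lhs = (x ** p).mean()                   # Monte Carlo estimate of E X^p

# Riemann sum of p * lam^(p-1) * P(X > lam) over a grid; the tail
# probability is estimated from the same sample.
lam = np.linspace(0.0, 15.0, 301)[1:]
dlam = lam[1] - lam[0]
tail = (x[None, :] > lam[:, None]).mean(axis=1)
rhs = (p * lam ** (p - 1) * tail).sum() * dlam

print(lhs, rhs)                         # both near E X^2 = 2
```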

We need two elementary inequalities.


Proposition 1.5. (Chebyshev's inequality) If $X \ge 0$,
$$P(X \ge a) \le \frac{E X}{a}.$$
Proof. We write
$$P(X \ge a) = E\big[1_{[a,\infty)}(X)\big] \le E\Big[\frac{X}{a}1_{[a,\infty)}(X)\Big] \le E X/a,$$
since $X/a$ is bigger than 1 when $X \in [a,\infty)$.
If we apply this to $X = (Y - E Y)^2$, we obtain
$$P(|Y - E Y| \ge a) = P((Y - E Y)^2 \ge a^2) \le \operatorname{Var}Y/a^2. \tag{1.4}$$
This special case of Chebyshev's inequality is sometimes itself referred to as Chebyshev's inequality, while Proposition 1.5 is sometimes called the Markov inequality.
The second inequality we need is Jensen's inequality, not to be confused with Jensen's formula of complex analysis.

Proposition 1.6. Suppose $g$ is convex and $X$ and $g(X)$ are both integrable. Then
$$g(E X) \le E\,g(X).$$
Proof. One property of convex functions is that they lie above their tangent lines, and more generally their support lines. So if $x_0 \in \mathbb{R}$, we have
$$g(x) \ge g(x_0) + c(x - x_0)$$
for some constant $c$. Take $x = X(\omega)$ and take expectations to obtain
$$E\,g(X) \ge g(x_0) + c(E X - x_0).$$
Now set $x_0$ equal to $E X$.
If $A_n$ is a sequence of sets, define $(A_n \text{ i.o.})$, read "$A_n$ infinitely often," by
$$(A_n \text{ i.o.}) = \bigcap_{n=1}^\infty \bigcup_{i=n}^\infty A_i.$$
This set consists of those $\omega$ that are in infinitely many of the $A_n$.


A simple but very important proposition is the Borel-Cantelli lemma. It has two parts, and we prove
the first part here, leaving the second part to the next section.
Proposition 1.7. (Borel-Cantelli lemma) If $\sum_n P(A_n) < \infty$, then $P(A_n \text{ i.o.}) = 0$.
Proof. We have
$$P(A_n \text{ i.o.}) = \lim_{n \to \infty} P\Big(\bigcup_{i=n}^\infty A_i\Big).$$
However,
$$P\Big(\bigcup_{i=n}^\infty A_i\Big) \le \sum_{i=n}^\infty P(A_i),$$
which tends to zero as $n \to \infty$.

2. Independence.
Let us say two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$. The events $A_1, \ldots, A_n$ are independent if
$$P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_j}) = P(A_{i_1})P(A_{i_2})\cdots P(A_{i_j})$$
for every subset $\{i_1, \ldots, i_j\}$ of $\{1, 2, \ldots, n\}$.
Proposition 2.1. If $A$ and $B$ are independent, then $A^c$ and $B$ are independent.
Proof. We write
$$P(A^c \cap B) = P(B) - P(A \cap B) = P(B) - P(A)P(B) = P(B)(1 - P(A)) = P(B)P(A^c).$$

We say two $\sigma$-fields $\mathcal{F}$ and $\mathcal{G}$ are independent if $A$ and $B$ are independent whenever $A \in \mathcal{F}$ and $B \in \mathcal{G}$. Two random variables $X$ and $Y$ are independent if the $\sigma$-field generated by $X$ and the $\sigma$-field generated by $Y$ are independent. (Recall that the $\sigma$-field generated by a random variable $X$ is given by $\{(X \in A) : A \text{ a Borel subset of } \mathbb{R}\}$.) We define the independence of $n$ $\sigma$-fields or $n$ random variables in the obvious way.
Proposition 2.1 tells us that $A$ and $B$ are independent if the random variables $1_A$ and $1_B$ are independent, so the definitions above are consistent.
If $f$ and $g$ are Borel functions and $X$ and $Y$ are independent, then $f(X)$ and $g(Y)$ are independent. This follows because the $\sigma$-field generated by $f(X)$ is a sub-$\sigma$-field of the one generated by $X$, and similarly for $g(Y)$.
Let $F_{X,Y}(x,y) = P(X \le x, Y \le y)$ denote the joint distribution function of $X$ and $Y$. (The comma inside the set means "and.")
Proposition 2.2. $F_{X,Y}(x,y) = F_X(x)F_Y(y)$ for all $x, y$ if and only if $X$ and $Y$ are independent.
Proof. If $X$ and $Y$ are independent, then $1_{(-\infty,x]}(X)$ and $1_{(-\infty,y]}(Y)$ are independent by the above comments. Using the above comments and the definition of independence, this shows $F_{X,Y}(x,y) = F_X(x)F_Y(y)$.
Conversely, if the equality holds, fix $y$ and let $\mathcal{M}_y$ denote the collection of sets $A$ for which $P(X \in A, Y \le y) = P(X \in A)P(Y \le y)$. $\mathcal{M}_y$ contains all sets of the form $(-\infty, x]$. It follows by linearity that $\mathcal{M}_y$ contains all sets of the form $(x, z]$, and then by linearity again, all sets that are finite unions of such half-open, half-closed intervals. Note that the collection of finite unions of such intervals, $\mathcal{A}$, is an algebra generating the Borel $\sigma$-field. It is clear that $\mathcal{M}_y$ is a monotone class, so by the monotone class lemma, $\mathcal{M}_y$ contains the Borel $\sigma$-field.
For a fixed set $A$, let $\mathcal{M}_A$ denote the collection of sets $B$ for which $P(X \in A, Y \in B) = P(X \in A)P(Y \in B)$. Again, $\mathcal{M}_A$ is a monotone class and by the preceding paragraph contains the $\sigma$-field generated by the collection of finite unions of intervals of the form $(x, z]$, hence contains the Borel sets. Therefore $X$ and $Y$ are independent.
The following is known as the multiplication theorem.
Proposition 2.3. If $X$, $Y$, and $XY$ are integrable and $X$ and $Y$ are independent, then $E XY = E X\,E Y$.
Proof. Consider the random variables in $\sigma(X)$ (the $\sigma$-field generated by $X$) and $\sigma(Y)$ for which the multiplication theorem is true. It holds for indicators by the definition of $X$ and $Y$ being independent. It holds for simple random variables, that is, linear combinations of indicators, by linearity of both sides. It holds for nonnegative random variables by monotone convergence. And it holds for integrable random variables by linearity again.
Let us give an example of independent random variables. Let $\Omega = \Omega_1 \times \Omega_2$ and let $P = P_1 \times P_2$, where $(\Omega_i, \mathcal{F}_i, P_i)$ are probability spaces for $i = 1, 2$. We use the product $\sigma$-field. Then it is clear that $\mathcal{F}_1$ and $\mathcal{F}_2$ are independent by the definition of $P$. If $X_1$ is a random variable such that $X_1(\omega_1, \omega_2)$ depends only on $\omega_1$ and $X_2$ depends only on $\omega_2$, then $X_1$ and $X_2$ are independent.
This example can be extended to n independent random variables, and in fact, if one has independent
random variables, one can always view them as coming from a product space. We will not use this fact.
Later on, we will talk about countable sequences of independent r.v.s and the reader may wonder whether
such things can exist. That it can is a consequence of the Kolmogorov extension theorem; see PTA, for
example.
If $X_1, \ldots, X_n$ are independent, then so are $X_1 - E X_1, \ldots, X_n - E X_n$. Assuming everything is integrable,
$$E[(X_1 - E X_1) + \cdots + (X_n - E X_n)]^2 = E(X_1 - E X_1)^2 + \cdots + E(X_n - E X_n)^2,$$
using the multiplication theorem to show that the expectations of the cross product terms are zero. We have thus shown
$$\operatorname{Var}(X_1 + \cdots + X_n) = \operatorname{Var}X_1 + \cdots + \operatorname{Var}X_n. \tag{2.1}$$
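Equation (2.1) is easy to verify by simulation. A minimal sketch (assuming NumPy; the three distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x1 = rng.uniform(-1, 1, n)       # Var X1 = 1/3
x2 = rng.exponential(2.0, n)     # Var X2 = 4
x3 = rng.normal(0.0, 3.0, n)     # Var X3 = 9

s = x1 + x2 + x3                 # independent summands
print(s.var(), x1.var() + x2.var() + x3.var())   # both near 1/3 + 4 + 9
```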
We finish up this section by proving the second half of the Borel-Cantelli lemma.
Proposition 2.4. Suppose $A_n$ is a sequence of independent events. If $\sum_n P(A_n) = \infty$, then $P(A_n \text{ i.o.}) = 1$.
Note that here the $A_n$ are independent, while in the first half of the Borel-Cantelli lemma no such assumption was necessary.
Proof. Note
$$P\Big(\bigcup_{i=n}^N A_i\Big) = 1 - P\Big(\bigcap_{i=n}^N A_i^c\Big) = 1 - \prod_{i=n}^N P(A_i^c) = 1 - \prod_{i=n}^N (1 - P(A_i)).$$
By the mean value theorem, $1 - x \le e^{-x}$, so the right hand side is greater than or equal to $1 - \exp\big(-\sum_{i=n}^N P(A_i)\big)$. As $N \to \infty$, this tends to 1, so $P\big(\bigcup_{i=n}^\infty A_i\big) = 1$. This holds for all $n$, which proves the result.

3. Convergence.
In this section we consider three ways a sequence of r.v.s $X_n$ can converge.
We say $X_n$ converges to $X$ almost surely if $(X_n \not\to X)$ has probability zero. $X_n$ converges to $X$ in probability if for each $\varepsilon$, $P(|X_n - X| > \varepsilon) \to 0$ as $n \to \infty$. $X_n$ converges to $X$ in $L^p$ if $E|X_n - X|^p \to 0$ as $n \to \infty$.
The following proposition shows some relationships among the types of convergence.
Proposition 3.1. (a) If $X_n \to X$ a.s., then $X_n \to X$ in probability.
(b) If $X_n \to X$ in $L^p$, then $X_n \to X$ in probability.
(c) If $X_n \to X$ in probability, there exists a subsequence $n_j$ such that $X_{n_j}$ converges to $X$ almost surely.
Proof. To prove (a), note $X_n - X$ tends to 0 almost surely, so $1_{(-\varepsilon,\varepsilon)^c}(X_n - X)$ also converges to 0 almost surely. Now apply the dominated convergence theorem.
(b) comes from Chebyshev's inequality:
$$P(|X_n - X| > \varepsilon) = P(|X_n - X|^p > \varepsilon^p) \le E|X_n - X|^p/\varepsilon^p \to 0$$
as $n \to \infty$.
To prove (c), choose $n_j$ larger than $n_{j-1}$ such that $P(|X_n - X| > 2^{-j}) < 2^{-j}$ whenever $n \ge n_j$. So if we let $A_i = (|X_{n_j} - X| > 2^{-j} \text{ for some } j \ge i)$, then $P(A_i) \le 2^{-i+1}$. By the Borel-Cantelli lemma $P(A_i \text{ i.o.}) = 0$. This implies $X_{n_j} \to X$ on the complement of $(A_i \text{ i.o.})$.
Let us give some examples to show there need not be any other implications among the three types of convergence.
Let $\Omega = [0,1]$, $\mathcal{F}$ the Borel $\sigma$-field, and $P$ Lebesgue measure. Let $X_n = e^n 1_{(0,1/n)}$. Then clearly $X_n$ converges to 0 almost surely and in probability, but $E X_n^p = e^{np}/n \to \infty$ for every $p$.
Let $\Omega$ be the unit circle, and let $P$ be Lebesgue measure on the circle normalized to have total mass 1. Let $t_n = \sum_{i=1}^n i^{-1}$, and let $A_n = \{\omega : t_{n-1} \le \omega < t_n\}$, identifying points on the circle with their angles. Let $X_n = 1_{A_n}$. Since $t_n \to \infty$, the arcs $A_n$ wrap around the circle infinitely often, so any point on the unit circle will be in infinitely many $A_n$, and $X_n$ does not converge almost surely to 0. But $P(A_n) = 1/2\pi n \to 0$, so $X_n \to 0$ in probability and in $L^p$.

4. Weak law of large numbers.


Suppose $X_n$ is a sequence of independent random variables. Suppose also that they all have the same distribution, that is, $F_{X_n} = F_{X_1}$ for all $n$. This situation comes up so often it has a name: independent, identically distributed, which is abbreviated i.i.d.
Define $S_n = \sum_{i=1}^n X_i$. $S_n$ is called a partial sum process. $S_n/n$ is the average value of the first $n$ of the $X_i$'s.
Theorem 4.1. (Weak law of large numbers) Suppose the $X_i$ are i.i.d. and $E X_1^2 < \infty$. Then $S_n/n \to E X_1$ in probability.
Proof. Since the $X_i$ are i.i.d., they all have the same expectation, and so $E S_n = nE X_1$. Hence $E(S_n/n - E X_1)^2$ is the variance of $S_n/n$. If $\varepsilon > 0$, by Chebyshev's inequality,
$$P(|S_n/n - E X_1| > \varepsilon) \le \frac{\operatorname{Var}(S_n/n)}{\varepsilon^2} = \frac{\sum_{i=1}^n \operatorname{Var}X_i}{n^2\varepsilon^2} = \frac{n\operatorname{Var}X_1}{n^2\varepsilon^2}. \tag{4.1}$$
Since $E X_1^2 < \infty$, then $\operatorname{Var}X_1 < \infty$, and the result follows by letting $n \to \infty$.
A nice application of the weak law of large numbers is a proof of the Weierstrass approximation theorem.
Theorem 4.2. Suppose $f$ is a continuous function on $[0,1]$ and $\varepsilon > 0$. There exists a polynomial $P$ such that $\sup_{x \in [0,1]}|f(x) - P(x)| < \varepsilon$.
Proof. Let
$$P(x) = \sum_{k=0}^n f(k/n)\binom{n}{k}x^k(1-x)^{n-k}.$$
Clearly $P$ is a polynomial. Since $f$ is continuous, there exists $M$ such that $|f(x)| \le M$ for all $x$ and there exists $\delta$ such that $|f(x) - f(y)| < \varepsilon/2$ whenever $|x - y| < \delta$.
Let $X_i$ be i.i.d. Bernoulli r.v.s with parameter $x$. Then $S_n$, the partial sum, is a binomial, and hence $P(x) = E f(S_n/n)$. The mean of $S_n/n$ is $x$. We have
$$|P(x) - f(x)| = |E f(S_n/n) - f(E X_1)| \le E|f(S_n/n) - f(E X_1)| \le 2M\,P(|S_n/n - x| > \delta) + \varepsilon/2.$$
By (4.1) the first term will be less than
$$2M\operatorname{Var}X_1/n\delta^2 = 2Mx(1-x)/n\delta^2 \le 2M/n\delta^2,$$
which will be less than $\varepsilon/2$ if $n$ is large enough, uniformly in $x$.
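The polynomial in this proof is the Bernstein polynomial of $f$. A small sketch of the approximation (assuming NumPy; the choice of $f$ and the values of $n$ are illustrative):

```python
import numpy as np
from math import comb

def bernstein(f, n, xs):
    # P(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) = E f(S_n / n), S_n ~ Bin(n, x)
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    xs = np.asarray(xs, dtype=float)[:, None]
    return (f(k / n) * coeffs * xs ** k * (1 - xs) ** (n - k)).sum(axis=1)

f = lambda t: np.abs(t - 0.5)            # continuous but not differentiable
xs = np.linspace(0.0, 1.0, 501)
for n in [10, 100, 1000]:
    err = np.max(np.abs(bernstein(f, n, xs) - f(xs)))
    print(n, err)                        # sup-norm error decreases with n
```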

5. Techniques related to almost sure convergence.


Our aim is the strong law of large numbers (SLLN), which says that $S_n/n$ converges to $E X_1$ almost surely if $E|X_1| < \infty$.
We first prove it under the assumption that $E X_1^4 < \infty$.

Proposition 5.1. Suppose $X_i$ is an i.i.d. sequence with $E X_i^4 < \infty$ and let $S_n = \sum_{i=1}^n X_i$. Then $S_n/n \to E X_1$ a.s.
Proof. By looking at $X_i - E X_i$ we may assume that the $X_i$ have mean 0. By Chebyshev,
$$P(|S_n/n| > \varepsilon) \le \frac{E(S_n/n)^4}{\varepsilon^4} = \frac{E S_n^4}{n^4\varepsilon^4}.$$
If we expand $S_n^4$, we will have terms involving $X_i^4$, terms involving $X_i^2X_j^2$, terms involving $X_i^3X_j$, terms involving $X_i^2X_jX_k$, and terms involving $X_iX_jX_kX_\ell$, with $i, j, k, \ell$ all being different. By the multiplication theorem and the fact that the $X_i$ have mean 0, the expectations of all the terms will be 0 except for those of the first two types. So
$$E S_n^4 = \sum_{i=1}^n E X_i^4 + \sum_{i \ne j} E X_i^2\,E X_j^2.$$
By the finiteness assumption, the first term on the right is bounded by $c_1 n$. By Cauchy-Schwarz, $E X_i^2 \le (E X_i^4)^{1/2} < \infty$, and there are at most $n^2$ terms in the second sum, so this second term is bounded by $c_2 n^2$. Substituting, we have
$$P(|S_n/n| > \varepsilon) \le c_3/n^2\varepsilon^4.$$
Consequently $P(|S_n/n| > \varepsilon \text{ i.o.}) = 0$ by Borel-Cantelli. Since $\varepsilon$ is arbitrary, this implies $S_n/n \to 0$ a.s.
Before we can prove the SLLN assuming only the finiteness of first moments, we need some preliminaries.
Proposition 5.2. If $Y \ge 0$, then $E Y < \infty$ if and only if $\sum_{n=1}^\infty P(Y > n) < \infty$.
Proof. By Proposition 1.4, $E Y = \int_0^\infty P(Y > x)\,dx$. $P(Y > x)$ is nonincreasing in $x$, so the integral is bounded above by $\sum_{n=0}^\infty P(Y > n)$ and bounded below by $\sum_{n=1}^\infty P(Y > n)$.
If $X_i$ is a sequence of r.v.s, the tail $\sigma$-field is defined by $\bigcap_{n=1}^\infty \sigma(X_n, X_{n+1}, \ldots)$. An example of an event in the tail $\sigma$-field is $(\limsup_{n \to \infty} X_n > a)$. Another example is $(\limsup_{n \to \infty} S_n/n > a)$. The reason for this is that if $k < n$ is fixed,
$$\frac{S_n}{n} = \frac{S_k}{n} + \frac{\sum_{i=k+1}^n X_i}{n}.$$
The first term on the right tends to 0 as $n \to \infty$. So $\limsup S_n/n = \limsup\big(\sum_{i=k+1}^n X_i\big)/n$, which is in $\sigma(X_{k+1}, X_{k+2}, \ldots)$. This holds for each $k$. The set $(\limsup S_n > a)$ is easily seen not to be in the tail $\sigma$-field.
Theorem 5.3. (Kolmogorov 0-1 law) If the $X_i$ are independent, then the events in the tail $\sigma$-field have probability 0 or 1.
This implies that in the case of i.i.d. random variables, if $S_n/n$ has a limit with positive probability, then it has a limit with probability one, and the limit must be a constant.
Proof. Let $\mathcal{M}$ be the collection of sets in $\sigma(X_{n+1}, \ldots)$ that are independent of every set in $\sigma(X_1, \ldots, X_n)$. $\mathcal{M}$ is easily seen to be a monotone class and it contains $\sigma(X_{n+1}, \ldots, X_N)$ for every $N > n$. Therefore $\mathcal{M}$ must be equal to $\sigma(X_{n+1}, \ldots)$.
If $A$ is in the tail $\sigma$-field, then for each $n$, $A$ is independent of $\sigma(X_1, \ldots, X_n)$. The class $\mathcal{M}_A$ of sets independent of $A$ is a monotone class, hence is a $\sigma$-field containing $\sigma(X_1, \ldots, X_n)$ for each $n$. Therefore $\mathcal{M}_A$ contains $\sigma(X_1, \ldots)$.
We thus have that the event $A$ is independent of itself, or
$$P(A) = P(A \cap A) = P(A)P(A) = P(A)^2.$$
This implies $P(A)$ is zero or one.
The next proposition shows that in considering a law of large numbers we can consider truncated random variables.
Proposition 5.4. Suppose $X_i$ is an i.i.d. sequence of r.v.s with $E|X_1| < \infty$. Let $X_n' = X_n 1_{(|X_n| \le n)}$. Then
(a) $X_n$ converges almost surely if and only if $X_n'$ does;
(b) if $S_n' = \sum_{i=1}^n X_i'$, then $S_n/n$ converges a.s. if and only if $S_n'/n$ does.
Proof. Let $A_n = (X_n \ne X_n') = (|X_n| > n)$. Then $P(A_n) = P(|X_n| > n) = P(|X_1| > n)$. Since $E|X_1| < \infty$, then by Proposition 5.2 we have $\sum P(A_n) < \infty$. So by the Borel-Cantelli lemma, $P(A_n \text{ i.o.}) = 0$. Thus for almost every $\omega$, $X_n = X_n'$ for $n$ sufficiently large. This proves (a).
For (b), let $k$ (depending on $\omega$) be the largest integer such that $X_k'(\omega) \ne X_k(\omega)$. Then $S_n/n - S_n'/n = (X_1 + \cdots + X_k)/n - (X_1' + \cdots + X_k')/n \to 0$ as $n \to \infty$.
Next is Kolmogorov's inequality, a special case of Doob's inequality.
Proposition 5.5. Suppose the $X_i$ are independent and $E X_i = 0$ for each $i$. Then
$$P\big(\max_{1 \le i \le n}|S_i| \ge \lambda\big) \le \frac{E S_n^2}{\lambda^2}.$$
Proof. Let $A_k = (|S_k| \ge \lambda, |S_1| < \lambda, \ldots, |S_{k-1}| < \lambda)$. Note the $A_k$ are disjoint and that $A_k \in \sigma(X_1, \ldots, X_k)$. Therefore $A_k$ is independent of $S_n - S_k$. Then
$$E S_n^2 \ge \sum_{k=1}^n E[S_n^2; A_k] = \sum_{k=1}^n E[(S_k^2 + 2S_k(S_n - S_k) + (S_n - S_k)^2); A_k] \ge \sum_{k=1}^n E[S_k^2; A_k] + 2\sum_{k=1}^n E[S_k(S_n - S_k); A_k].$$
Using the independence, $E[S_k(S_n - S_k)1_{A_k}] = E[S_k 1_{A_k}]E[S_n - S_k] = 0$. Therefore
$$E S_n^2 \ge \sum_{k=1}^n E[S_k^2; A_k] \ge \sum_{k=1}^n \lambda^2 P(A_k) = \lambda^2 P\big(\max_{1 \le k \le n}|S_k| \ge \lambda\big).$$
Our result is immediate from this.


The last result we need for now is a special case of what is known as Kronecker's lemma.
Proposition 5.6. Suppose $x_i$ are real numbers and $s_n = \sum_{i=1}^n x_i$. If $\sum_{j=1}^\infty (x_j/j)$ converges, then $s_n/n \to 0$.
Proof. Let $b_n = \sum_{j=1}^n (x_j/j)$, $b_0 = 0$, and suppose $b_n \to b$. As is well known, this implies $\big(\sum_{i=1}^n b_i\big)/n \to b$. We have $i(b_i - b_{i-1}) = x_i$, so by summation by parts,
$$\frac{s_n}{n} = \frac{\sum_{i=1}^n i(b_i - b_{i-1})}{n} = \frac{nb_n - \sum_{i=1}^{n-1} b_i}{n} = b_n - \frac{\sum_{i=1}^{n-1} b_i}{n} \to b - b = 0.$$

6. Strong law of large numbers.


This section is devoted to a proof of Kolmogorov's strong law of large numbers. We showed earlier that if $E X_i^2 < \infty$, where the $X_i$ are i.i.d., then the weak law of large numbers (WLLN) holds: $S_n/n$ converges to $E X_1$ in probability. The WLLN can be improved greatly; it is enough that $xP(|X_1| > x) \to 0$ as $x \to \infty$. Here we show the strong law (SLLN): if one has a finite first moment, then there is almost sure convergence.
First we need a lemma.
Lemma 6.1. Suppose $V_i$ is a sequence of independent r.v.s, each with mean 0. Let $W_n = \sum_{i=1}^n V_i$. If $\sum_{i=1}^\infty \operatorname{Var}V_i < \infty$, then $W_n$ converges almost surely.
Proof. Choose $n_j > n_{j-1}$ such that $\sum_{i=n_j}^\infty \operatorname{Var}V_i < 2^{-3j}$. If $n > n_j$, then applying Kolmogorov's inequality shows that
$$P\big(\max_{n_j \le i \le n}|W_i - W_{n_j}| > 2^{-j}\big) \le 2^{-3j}/2^{-2j} = 2^{-j}.$$
Letting $n \to \infty$, we have $P(A_j) \le 2^{-j}$, where
$$A_j = \big(\max_{n_j \le i}|W_i - W_{n_j}| > 2^{-j}\big).$$
By the Borel-Cantelli lemma, $P(A_j \text{ i.o.}) = 0$.
Suppose $\omega \notin (A_j \text{ i.o.})$. Let $\varepsilon > 0$. Choose $j$ large enough so that $2^{-j+1} < \varepsilon$ and $\omega \notin A_j$. If $n, m > n_j$, then
$$|W_n - W_m| \le |W_n - W_{n_j}| + |W_m - W_{n_j}| \le 2^{-j+1} < \varepsilon.$$
Since $\varepsilon$ is arbitrary, $W_n(\omega)$ is a Cauchy sequence, and hence converges.
Theorem 6.2. (SLLN) Let $X_i$ be a sequence of i.i.d. random variables. Then $S_n/n$ converges almost surely if and only if $E|X_1| < \infty$.
Proof. Let us first suppose $S_n/n$ converges a.s. and show $E|X_1| < \infty$. If $S_n(\omega)/n \to a$, then
$$\frac{S_{n-1}}{n} = \frac{S_{n-1}}{n-1}\cdot\frac{n-1}{n} \to a.$$
So
$$\frac{X_n}{n} = \frac{S_n}{n} - \frac{S_{n-1}}{n} \to a - a = 0.$$
Hence $X_n/n \to 0$, a.s. Thus $P(|X_n| > n \text{ i.o.}) = 0$. By the second part of Borel-Cantelli (the events $(|X_n| > n)$ are independent), $\sum P(|X_n| > n) < \infty$. Since the $X_i$ are i.i.d., this means $\sum_{n=1}^\infty P(|X_1| > n) < \infty$, and by Proposition 5.2, $E|X_1| < \infty$.
Now suppose $E|X_1| < \infty$. By looking at $X_i - E X_i$, we may suppose without loss of generality that $E X_i = 0$. We truncate, and let $Y_i = X_i 1_{(|X_i| \le i)}$. It suffices to show $\sum_{i=1}^n Y_i/n \to 0$ a.s., by Proposition 5.4.
Next we estimate. We have
$$E Y_i = E[X_i 1_{(|X_i| \le i)}] = E[X_1 1_{(|X_1| \le i)}] \to E X_1 = 0.$$

The convergence follows by the dominated convergence theorem, since the integrands are bounded by $|X_1|$.
To estimate the second moment of the $Y_i$, we write
$$E Y_i^2 = \int_0^\infty 2yP(|Y_i| \ge y)\,dy \le \int_0^i 2yP(|X_1| \ge y)\,dy,$$
and so
$$\sum_{i=1}^\infty E(Y_i^2/i^2) \le \sum_{i=1}^\infty \frac{1}{i^2}\int_0^i 2yP(|X_1| \ge y)\,dy = 2\int_0^\infty \Big(\sum_{i=1}^\infty \frac{1}{i^2}1_{(y \le i)}\Big)yP(|X_1| \ge y)\,dy \le 4\int_0^\infty \frac{1}{y}\,yP(|X_1| \ge y)\,dy = 4\int_0^\infty P(|X_1| \ge y)\,dy = 4E|X_1| < \infty,$$
using $\sum_{i \ge y} i^{-2} \le 2/y$ for $y \ge 1$ (and $\sum_i i^{-2} \le 2 \le 2/y$ for $y < 1$).
Let $U_i = Y_i - E Y_i$. Then $\operatorname{Var}U_i = \operatorname{Var}Y_i \le E Y_i^2$, and by the above,
$$\sum_{i=1}^\infty \operatorname{Var}(U_i/i) < \infty.$$
By Lemma 6.1 (with $V_i = U_i/i$), $\sum_{i=1}^n (U_i/i)$ converges almost surely. By Kronecker's lemma, $\big(\sum_{i=1}^n U_i\big)/n$ converges almost surely to 0. Finally, since $E Y_i \to 0$, then $\sum_{i=1}^n E Y_i/n \to 0$, hence $\sum_{i=1}^n Y_i/n \to 0$.
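The dichotomy in Theorem 6.2 shows up clearly in simulation. A sketch (assuming NumPy): running averages of integrable summands settle down, while Cauchy running averages (where $E|X_1| = \infty$; cf. Proposition 18.5(c) below, which shows $S_n/n$ is again Cauchy) never do.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10 ** 6

# E|X_1| < infinity: averages converge a.s. to E X_1 = 1.
avg_exp = np.cumsum(rng.exponential(1.0, n)) / np.arange(1, n + 1)

# E|X_1| = infinity: averages of Cauchy samples keep fluctuating.
avg_cau = np.cumsum(rng.standard_cauchy(n)) / np.arange(1, n + 1)

for k in [10**3, 10**4, 10**5, 10**6]:
    print(k, avg_exp[k - 1], avg_cau[k - 1])
```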

7. Uniform integrability. Before proceeding to some extensions of the SLLN, we discuss uniform integrability. A sequence of r.v.s is uniformly integrable if
$$\sup_i \int_{(|X_i| > M)} |X_i|\,dP \to 0$$
as $M \to \infty$.
Proposition 7.1. Suppose there exists $\varphi : [0,\infty) \to [0,\infty)$ such that $\varphi$ is nondecreasing, $\varphi(x)/x \to \infty$ as $x \to \infty$, and $\sup_i E\varphi(|X_i|) < \infty$. Then the $X_i$ are uniformly integrable.
Proof. Let $\varepsilon > 0$ and choose $x_0$ such that $x/\varphi(x) < \varepsilon$ if $x \ge x_0$. If $M \ge x_0$,
$$\int_{(|X_i| > M)} |X_i| = \int \frac{|X_i|}{\varphi(|X_i|)}\varphi(|X_i|)1_{(|X_i| > M)} \le \varepsilon\int \varphi(|X_i|) \le \varepsilon\,\sup_i E\varphi(|X_i|).$$
Proposition 7.2. If $X_n$ and $Y_n$ are two uniformly integrable sequences, then $X_n + Y_n$ is also a uniformly integrable sequence.
Proof. Since there exists $M_0$ such that $\sup_n E[|X_n|; |X_n| > M_0] < 1$ and $\sup_n E[|Y_n|; |Y_n| > M_0] < 1$, then $\sup_n E|X_n| \le M_0 + 1$, and similarly for the $Y_n$. Let $\varepsilon > 0$ and choose $M_1 > 4(M_0 + 1)/\varepsilon$ such that $\sup_n E[|X_n|; |X_n| > M_1] < \varepsilon/4$ and $\sup_n E[|Y_n|; |Y_n| > M_1] < \varepsilon/4$. Let $M_2 = 4M_1^2$.
Note $P(|X_n| + |Y_n| > M_2) \le (E|X_n| + E|Y_n|)/M_2 \le \varepsilon/(4M_1)$ by Chebyshev's inequality. Then
$$E[|X_n + Y_n|; |X_n + Y_n| > M_2] \le E[|X_n|; |X_n| > M_1] + E[|X_n|; |X_n| \le M_1, |X_n + Y_n| > M_2] + E[|Y_n|; |Y_n| > M_1] + E[|Y_n|; |Y_n| \le M_1, |X_n + Y_n| > M_2].$$
The first and third terms on the right are each less than $\varepsilon/4$ by our choice of $M_1$. The second and fourth terms are each at most $M_1 P(|X_n + Y_n| > M_2) \le \varepsilon/4$, so the sum is at most $\varepsilon$.
The main result we need in this section is Vitali's convergence theorem.
Theorem 7.3. If $X_n \to X$ almost surely and the $X_n$ are uniformly integrable, then $E|X_n - X| \to 0$.
Proof. By the above proposition, $X_n - X$ is uniformly integrable and tends to 0 a.s., so without loss of generality we may assume $X = 0$. Let $\varepsilon > 0$ and choose $M$ such that $\sup_n E[|X_n|; |X_n| > M] < \varepsilon$. Then
$$E|X_n| \le E[|X_n|; |X_n| > M] + E[|X_n|1_{(|X_n| \le M)}] \le \varepsilon + E[|X_n|1_{(|X_n| \le M)}].$$
The second term on the right goes to 0 by dominated convergence.

8. Complements to the SLLN.


Proposition 8.1. Suppose $X_i$ is an i.i.d. sequence and $E|X_1| < \infty$. Then
$$E\Big|\frac{S_n}{n} - E X_1\Big| \to 0.$$
Proof. Without loss of generality we may assume $E X_1 = 0$. By the SLLN, $S_n/n \to 0$ a.s. So by Vitali's theorem we need only show that the sequence $S_n/n$ is uniformly integrable.
Let $\varepsilon > 0$. Pick $M_1$ such that $E[|X_1|; |X_1| > M_1] < \varepsilon/2$. Pick $M_2 = M_1 E|X_1|/\varepsilon$. So
$$P(|S_n/n| > M_2) \le \frac{E|S_n|/n}{M_2} \le \frac{E|X_1|}{M_2} = \varepsilon/M_1.$$
We used here $E|S_n| \le \sum_{i=1}^n E|X_i| = nE|X_1|$. We then have
$$E[|X_i|; |S_n/n| > M_2] \le E[|X_i|; |X_i| > M_1] + E[|X_i|; |X_i| \le M_1, |S_n/n| > M_2] \le \varepsilon/2 + M_1 P(|S_n/n| > M_2) \le 2\varepsilon.$$
Finally,
$$E[|S_n/n|; |S_n/n| > M_2] \le \frac{1}{n}\sum_{i=1}^n E[|X_i|; |S_n/n| > M_2] \le 2\varepsilon.$$

We now consider the three series criterion. We prove the "if" portion here and defer the "only if" to Section 20.
Theorem 8.2. Let $X_i$ be a sequence of independent random variables, $A > 0$, and $Y_i = X_i 1_{(|X_i| \le A)}$. Then $\sum X_i$ converges a.s. if and only if all of the following three series converge: (a) $\sum P(|X_n| > A)$; (b) $\sum E Y_i$; (c) $\sum \operatorname{Var}Y_i$.
Proof of "if" part. Since (c) holds, then $\sum (Y_i - E Y_i)$ converges a.s. by Lemma 6.1. Since (b) holds, taking the difference shows $\sum Y_i$ converges a.s. Since (a) holds, $\sum P(X_i \ne Y_i) = \sum P(|X_i| > A) < \infty$, so by Borel-Cantelli, $P(X_i \ne Y_i \text{ i.o.}) = 0$. It follows that $\sum X_i$ converges a.s.

9. Conditional expectation.
If $\mathcal{F} \subset \mathcal{G}$ are two $\sigma$-fields and $X$ is an integrable $\mathcal{G}$ measurable random variable, the conditional expectation of $X$ given $\mathcal{F}$, written $E[X \mid \mathcal{F}]$ and read as "the expectation (or expected value) of $X$ given $\mathcal{F}$," is any $\mathcal{F}$ measurable random variable $Y$ such that $E[Y; A] = E[X; A]$ for every $A \in \mathcal{F}$. The conditional probability of $A \in \mathcal{G}$ given $\mathcal{F}$ is defined by $P(A \mid \mathcal{F}) = E[1_A \mid \mathcal{F}]$.
If $Y_1, Y_2$ are two $\mathcal{F}$ measurable random variables with $E[Y_1; A] = E[Y_2; A]$ for all $A \in \mathcal{F}$, then $Y_1 = Y_2$ a.s.; that is, conditional expectation is unique up to a.s. equivalence.
In the case $X$ is already $\mathcal{F}$ measurable, $E[X \mid \mathcal{F}] = X$. If $X$ is independent of $\mathcal{F}$, $E[X \mid \mathcal{F}] = E X$. Both of these facts follow immediately from the definition. For another example, which ties this definition with the one used in elementary probability courses, if $\{A_i\}$ is a finite collection of disjoint sets whose union is $\Omega$, $P(A_i) > 0$ for all $i$, and $\mathcal{F}$ is the $\sigma$-field generated by the $A_i$'s, then
$$P(A \mid \mathcal{F}) = \sum_i \frac{P(A \cap A_i)}{P(A_i)}1_{A_i}.$$
This follows since the right-hand side is $\mathcal{F}$ measurable and its expectation over any set $A_i$ is $P(A \cap A_i)$.
As an example, suppose we toss a fair coin independently 5 times and let $X_i$ be 1 or 0 depending on whether the $i$th toss was a heads or tails. Let $A$ be the event that there were 5 heads and let $\mathcal{F}_i = \sigma(X_1, \ldots, X_i)$. Then $P(A) = 1/32$ while $P(A \mid \mathcal{F}_1)$ is equal to 1/16 on the event $(X_1 = 1)$ and 0 on the event $(X_1 = 0)$. $P(A \mid \mathcal{F}_2)$ is equal to 1/8 on the event $(X_1 = 1, X_2 = 1)$ and 0 otherwise.
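Since $\Omega$ here has only 32 points, $P(A \mid \mathcal{F}_i)$ can be computed by brute force, averaging $1_A$ over each atom of $\sigma(X_1, \ldots, X_i)$. A sketch (assuming NumPy):

```python
import numpy as np
from itertools import product

# All 32 equally likely outcomes of five fair coin tosses (1 = heads).
outcomes = np.array(list(product([0, 1], repeat=5)))
A = outcomes.sum(axis=1) == 5                 # event: five heads

for i in [1, 2]:
    # Atoms of sigma(X_1, ..., X_i) are the events (X_1, ..., X_i) = v.
    for v in product([0, 1], repeat=i):
        atom = (outcomes[:, :i] == v).all(axis=1)
        print(i, v, A[atom].mean())           # value of P(A | F_i) on the atom
```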
We have
$$E[E[X \mid \mathcal{F}]] = E X \tag{9.1}$$
because $E[E[X \mid \mathcal{F}]] = E[E[X \mid \mathcal{F}]; \Omega] = E[X; \Omega] = E X$.


The following is easy to establish.
Proposition 9.1. (a) If $X \le Y$ are both integrable, then $E[X \mid \mathcal{F}] \le E[Y \mid \mathcal{F}]$ a.s.
(b) If $X$ and $Y$ are integrable and $a \in \mathbb{R}$, then $E[aX + Y \mid \mathcal{F}] = aE[X \mid \mathcal{F}] + E[Y \mid \mathcal{F}]$.
It is easy to check that limit theorems such as monotone convergence and dominated convergence have conditional expectation versions, as do inequalities like Jensen's and Chebyshev's inequalities. Thus, for example, we have the following.
Proposition 9.2. (Jensen's inequality for conditional expectations) If $g$ is convex and $X$ and $g(X)$ are integrable,
$$E[g(X) \mid \mathcal{F}] \ge g(E[X \mid \mathcal{F}]), \qquad \text{a.s.}$$
A key fact is the following.

Proposition 9.3. If $X$ and $XY$ are integrable and $Y$ is measurable with respect to $\mathcal{F}$, then
$$E[XY \mid \mathcal{F}] = YE[X \mid \mathcal{F}]. \tag{9.2}$$
Proof. If $A \in \mathcal{F}$, then for any $B \in \mathcal{F}$,
$$E\big[1_A E[X \mid \mathcal{F}]; B\big] = E\big[E[X \mid \mathcal{F}]; A \cap B\big] = E[X; A \cap B] = E[1_A X; B].$$
Since $1_A E[X \mid \mathcal{F}]$ is $\mathcal{F}$ measurable, this shows that (9.2) holds when $Y = 1_A$ and $A \in \mathcal{F}$. Using linearity and taking limits shows that (9.2) holds whenever $Y$ is $\mathcal{F}$ measurable and $X$ and $XY$ are integrable.
Two other equalities follow.
Proposition 9.4. If $\mathcal{E} \subset \mathcal{F} \subset \mathcal{G}$, then
$$E\big[E[X \mid \mathcal{F}] \mid \mathcal{E}\big] = E[X \mid \mathcal{E}] = E\big[E[X \mid \mathcal{E}] \mid \mathcal{F}\big].$$
Proof. The right equality holds because $E[X \mid \mathcal{E}]$ is $\mathcal{E}$ measurable, hence $\mathcal{F}$ measurable. To show the left equality, let $A \in \mathcal{E}$. Then since $A$ is also in $\mathcal{F}$,
$$E\big[E[E[X \mid \mathcal{F}] \mid \mathcal{E}]; A\big] = E\big[E[X \mid \mathcal{F}]; A\big] = E[X; A] = E\big[E[X \mid \mathcal{E}]; A\big].$$
Since both sides are $\mathcal{E}$ measurable, the equality follows.
To show the existence of $E[X \mid \mathcal{F}]$, we proceed as follows.
Proposition 9.5. If $X$ is integrable, then $E[X \mid \mathcal{F}]$ exists.
Proof. Using linearity, we need only consider $X \ge 0$. Define a measure $Q$ on $\mathcal{F}$ by $Q(A) = E[X; A]$ for $A \in \mathcal{F}$. This is trivially absolutely continuous with respect to $P|_{\mathcal{F}}$, the restriction of $P$ to $\mathcal{F}$. Let $E[X \mid \mathcal{F}]$ be the Radon-Nikodym derivative of $Q$ with respect to $P|_{\mathcal{F}}$. The Radon-Nikodym derivative is $\mathcal{F}$ measurable by construction and so provides the desired random variable.
When $\mathcal{F} = \sigma(Y)$, one usually writes $E[X \mid Y]$ for $E[X \mid \mathcal{F}]$. Notation that is commonly used (however, we will use it only very occasionally and only for heuristic purposes) is $E[X \mid Y = y]$. The definition is as follows. If $A \in \sigma(Y)$, then $A = (Y \in B)$ for some Borel set $B$ by the definition of $\sigma(Y)$, or $1_A = 1_B(Y)$. By linearity and taking limits, if $Z$ is $\sigma(Y)$ measurable, $Z = f(Y)$ for some Borel measurable function $f$. Set $Z = E[X \mid Y]$ and choose $f$ Borel measurable so that $Z = f(Y)$. Then $E[X \mid Y = y]$ is defined to be $f(y)$.
If $X \in L^2$ and $\mathcal{M} = \{Y \in L^2 : Y \text{ is } \mathcal{F}\text{-measurable}\}$, one can show that $E[X \mid \mathcal{F}]$ is equal to the projection of $X$ onto the subspace $\mathcal{M}$. We will not use this in these notes.
10. Stopping times.
We next want to talk about stopping times. Suppose we have a sequence of $\sigma$-fields $\mathcal{F}_i$ such that $\mathcal{F}_i \subset \mathcal{F}_{i+1}$ for each $i$. An example would be if $\mathcal{F}_i = \sigma(X_1, \ldots, X_i)$. A random mapping $N$ from $\Omega$ to $\{0, 1, 2, \ldots\}$ is called a stopping time if for each $n$, $(N \le n) \in \mathcal{F}_n$. A stopping time is also called an optional time in the Markov theory literature.
The intuition is that the sequence knows whether $N$ has happened by time $n$ by looking at $\mathcal{F}_n$. Suppose some motorists are told to drive north on Highway 99 in Seattle and stop at the first motorcycle shop past the second realtor after the city limits. So they drive north, pass the city limits, pass two realtors, and come to the next motorcycle shop, and stop. That is a stopping time. If they are instead told to stop at the third stop light before the city limits (and they had not been there before), they would need to drive to the city limits, then turn around and return past three stop lights. That is not a stopping time, because they have to go ahead of where they wanted to stop to know to stop there.
We use the notation $a \wedge b = \min(a,b)$ and $a \vee b = \max(a,b)$. The proof of the following is immediate from the definitions.
Proposition 10.1.
(a) Fixed times $n$ are stopping times.
(b) If $N_1$ and $N_2$ are stopping times, then so are $N_1 \wedge N_2$ and $N_1 \vee N_2$.
(c) If $N_n$ is a nondecreasing sequence of stopping times, then so is $N = \sup_n N_n$.
(d) If $N_n$ is a nonincreasing sequence of stopping times, then so is $N = \inf_n N_n$.
(e) If $N$ is a stopping time, then so is $N + n$.
We define $\mathcal{F}_N = \{A : A \cap (N \le n) \in \mathcal{F}_n \text{ for all } n\}$.
11. Martingales.
In this section we consider martingales. Let $\mathcal{F}_n$ be an increasing sequence of $\sigma$-fields. A sequence of random variables $M_n$ is adapted to $\mathcal{F}_n$ if for each $n$, $M_n$ is $\mathcal{F}_n$ measurable.
$M_n$ is a martingale if $M_n$ is adapted to $\mathcal{F}_n$, $M_n$ is integrable for all $n$, and
$$E[M_n \mid \mathcal{F}_{n-1}] = M_{n-1}, \qquad \text{a.s.}, \quad n = 2, 3, \ldots. \tag{11.1}$$

If we have $E[M_n \mid \mathcal{F}_{n-1}] \ge M_{n-1}$ a.s. for every $n$, then $M_n$ is a submartingale. If we have $E[M_n \mid \mathcal{F}_{n-1}] \le M_{n-1}$, we have a supermartingale. Submartingales have a tendency to increase.
Let us take a moment to look at some examples. If $X_i$ is a sequence of mean zero i.i.d. random variables and $S_n$ is the partial sum process, then $M_n = S_n$ is a martingale, since $E[M_n \mid \mathcal{F}_{n-1}] = M_{n-1} + E[M_n - M_{n-1} \mid \mathcal{F}_{n-1}] = M_{n-1} + E[M_n - M_{n-1}] = M_{n-1}$, using independence. If the $X_i$'s have variance one and $M_n = S_n^2 - n$, then
$$E[S_n^2 \mid \mathcal{F}_{n-1}] = E[(S_n - S_{n-1})^2 \mid \mathcal{F}_{n-1}] + 2S_{n-1}E[S_n \mid \mathcal{F}_{n-1}] - S_{n-1}^2 = 1 + S_{n-1}^2,$$
using independence. It follows that $M_n$ is a martingale.
Another example is the following: if $X \in L^1$ and $M_n = E[X \mid \mathcal{F}_n]$, then $M_n$ is a martingale.
If $M_n$ is a martingale and $H_n$ is $\mathcal{F}_{n-1}$ measurable and bounded for each $n$, it is easy to check that $N_n = \sum_{i=1}^n H_i(M_i - M_{i-1})$ is also a martingale.
If $M_n$ is a martingale, $g$ is convex, and $g(M_n)$ is integrable for each $n$, then by Jensen's inequality
$$E[g(M_{n+1}) \mid \mathcal{F}_n] \ge g(E[M_{n+1} \mid \mathcal{F}_n]) = g(M_n),$$
or $g(M_n)$ is a submartingale. Similarly if $g$ is convex and nondecreasing on $[0, \infty)$ and $M_n$ is a positive submartingale, then $g(M_n)$ is a submartingale because
$$E[g(M_{n+1}) \mid \mathcal{F}_n] \ge g(E[M_{n+1} \mid \mathcal{F}_n]) \ge g(M_n).$$
12. Optional stopping.
Note that if one takes expectations in (11.1), one has $E M_n = E M_{n-1}$, and by induction $E M_n = E M_0$. The theorem about martingales that lies at the basis of all other results is Doob's optional stopping theorem, which says that the same is true if we replace $n$ by a stopping time $N$. There are various versions, depending on what conditions one puts on the stopping times.
Theorem 12.1. If $N$ is a bounded stopping time with respect to $\mathcal{F}_n$ and $M_n$ a martingale, then $E M_N = E M_0$.
Proof. Since $N$ is bounded, let $K$ be the largest value $N$ takes. We write
$$E M_N = \sum_{k=0}^K E[M_N; N = k] = \sum_{k=0}^K E[M_k; N = k].$$
Note $(N = k)$ is $\mathcal{F}_j$ measurable if $j \ge k$, so
$$E[M_k; N = k] = E[M_{k+1}; N = k] = E[M_{k+2}; N = k] = \cdots = E[M_K; N = k].$$
Hence
$$E M_N = \sum_{k=0}^K E[M_K; N = k] = E M_K = E M_0.$$
This completes the proof.


The assumption that $N$ be bounded cannot be entirely dispensed with. For example, let $M_n$ be the partial sums of a sequence of i.i.d. random variables that take the values $\pm 1$, each with probability $\frac{1}{2}$. If $N = \min\{i : M_i = 1\}$, we will see later on that $N < \infty$ a.s., but $E M_N = 1 \ne 0 = E M_0$.
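A simulation of this counterexample (a sketch assuming NumPy; the walk must be capped at a finite horizon since $E N = \infty$): almost every path eventually reaches 1, so stopping at $N$ yields $M_N = 1$ even though $E M_0 = 0$.

```python
import numpy as np

rng = np.random.default_rng(4)
trials, horizon = 2000, 10 ** 4

hits = 0
for _ in range(trials):
    walk = np.cumsum(rng.choice([-1, 1], size=horizon))
    hits += bool((walk == 1).any())   # reached level 1 before the cap?

# The fraction is already close to 1; the "free" unit of profit is possible
# only because N is an unbounded stopping time.
print(hits / trials)
```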
The same proof as that in Theorem 12.1 gives the following corollary.
Corollary 12.2. If $N$ is bounded by $K$ and $M_n$ is a submartingale, then $E M_N \le E M_K$.
Also the same proof gives
Corollary 12.3. If $N$ is bounded by $K$, $A \in \mathcal{F}_N$, and $M_n$ is a submartingale, then $E[M_N; A] \le E[M_K; A]$.
Proposition 12.4. If $N_1 \le N_2$ are stopping times bounded by $K$ and $M$ is a martingale, then $E[M_{N_2} \mid \mathcal{F}_{N_1}] = M_{N_1}$, a.s.
Proof. Suppose $A \in \mathcal{F}_{N_1}$. We need to show $E[M_{N_1}; A] = E[M_{N_2}; A]$. Define a new stopping time $N_3$ by
$$N_3(\omega) = \begin{cases} N_1(\omega) & \text{if } \omega \in A, \\ N_2(\omega) & \text{if } \omega \notin A. \end{cases}$$
It is easy to check that $N_3$ is a stopping time, so $E M_{N_3} = E M_K = E M_{N_2}$ implies
$$E[M_{N_1}; A] + E[M_{N_2}; A^c] = E[M_{N_2}].$$
Subtracting $E[M_{N_2}; A^c]$ from each side completes the proof.
The following is known as the Doob decomposition.
Proposition 12.5. Suppose $X_k$ is a submartingale with respect to an increasing sequence of $\sigma$-fields $\mathcal{F}_k$. Then we can write $X_k = M_k + A_k$ such that $M_k$ is a martingale adapted to the $\mathcal{F}_k$ and $A_k$ is a sequence of random variables with $A_k$ being $\mathcal{F}_{k-1}$-measurable and $A_0 \le A_1 \le \cdots$.
Proof. Let $a_k = E[X_k \mid \mathcal{F}_{k-1}] - X_{k-1}$ for $k = 1, 2, \ldots$. Since $X_k$ is a submartingale, each $a_k \ge 0$. Then let $A_k = \sum_{i=1}^k a_i$. The fact that the $A_k$ are increasing and that $A_k$ is measurable with respect to $\mathcal{F}_{k-1}$ is clear. Set $M_k = X_k - A_k$. Then
$$E[M_{k+1} - M_k \mid \mathcal{F}_k] = E[X_{k+1} - X_k \mid \mathcal{F}_k] - a_{k+1} = 0,$$
or $M_k$ is a martingale.
Combining Propositions 12.4 and 12.5 we have
Corollary 12.6. Suppose $X_k$ is a submartingale, and $N_1 \le N_2$ are bounded stopping times. Then
$$E[X_{N_2} \mid \mathcal{F}_{N_1}] \ge X_{N_1}.$$
13. Doob's inequalities.
The first interesting consequences of the optional stopping theorems are Doob's inequalities. If $M_n$ is a martingale, denote $M_n^* = \max_{i \le n}|M_i|$.
Theorem 13.1. If $M_n$ is a martingale or a positive submartingale,
$$P(M_n^* \ge a) \le E[|M_n|; M_n^* \ge a]/a \le E|M_n|/a.$$
Proof. Set $M_{n+1} = M_n$. Let $N = \min\{j : |M_j| \ge a\} \wedge (n+1)$. Since $|\cdot|$ is convex, $|M_n|$ is a submartingale. If $A = (M_n^* \ge a)$, then $A \in \mathcal{F}_N$ and by Corollary 12.3
$$aP(M_n^* \ge a) \le E[|M_N|; A] \le E[|M_n|; A] \le E|M_n|.$$

For $p > 1$, we have the following inequality.
Theorem 13.2. If $p > 1$ and $E|M_i|^p < \infty$ for $i \le n$, then
$$E(M_n^*)^p \le \Big(\frac{p}{p-1}\Big)^p E|M_n|^p.$$
Proof. Note $M_n^* \le \sum_{i=1}^n |M_i|$, hence $M_n^* \in L^p$. We write
$$E(M_n^*)^p = \int_0^\infty pa^{p-1}P(M_n^* > a)\,da \le \int_0^\infty pa^{p-1}E[|M_n|1_{(M_n^* \ge a)}/a]\,da = E\int_0^{M_n^*} pa^{p-2}|M_n|\,da = \frac{p}{p-1}E[(M_n^*)^{p-1}|M_n|] \le \frac{p}{p-1}\big(E(M_n^*)^p\big)^{(p-1)/p}\big(E|M_n|^p\big)^{1/p}.$$
The last inequality follows by Hölder's inequality. Now divide both sides by the quantity $\big(E(M_n^*)^p\big)^{(p-1)/p}$.

14. Martingale convergence theorems.


The martingale convergence theorems are another set of important consequences of optional stopping. The main step is the upcrossing lemma. The number of upcrossings of an interval $[a,b]$ is the number of times a process crosses from below $a$ to above $b$.
To be more exact, let
$$S_1 = \min\{k : X_k \le a\}, \qquad T_1 = \min\{k > S_1 : X_k \ge b\},$$
and
$$S_{i+1} = \min\{k > T_i : X_k \le a\}, \qquad T_{i+1} = \min\{k > S_{i+1} : X_k \ge b\}.$$
The number of upcrossings $U_n$ before time $n$ is $U_n = \max\{j : T_j \le n\}$.

Theorem 14.1. (Upcrossing lemma) If $X_k$ is a submartingale,
$$E U_n \le (b-a)^{-1}E[(X_n - a)^+].$$
Proof. The number of upcrossings of $[a,b]$ by $X_k$ is the same as the number of upcrossings of $[0, b-a]$ by $Y_k = (X_k - a)^+$. Moreover $Y_k$ is still a submartingale. If we obtain the inequality for the number of upcrossings of the interval $[0, b-a]$ by the process $Y_k$, we will have the desired inequality for upcrossings of $X$.
So we may assume $a = 0$. Fix $n$ and define $Y_{n+1} = Y_n$. This will still be a submartingale. Define the $S_i, T_i$ as above, and let $S_i' = S_i \wedge (n+1)$, $T_i' = T_i \wedge (n+1)$. Since $T_{i+1} > S_{i+1} > T_i$, then $T_{n+1}' = n+1$. We write
$$E Y_{n+1} = E Y_{S_1'} + \sum_{i=1}^{n+1} E[Y_{T_i'} - Y_{S_i'}] + \sum_{i=1}^{n+1} E[Y_{S_{i+1}'} - Y_{T_i'}].$$
All the summands in the third term on the right are nonnegative since $Y_k$ is a submartingale and $S_{i+1}' \ge T_i'$. For the $j$th upcrossing, $Y_{T_j'} - Y_{S_j'} \ge b - a$, while $Y_{T_j'} - Y_{S_j'}$ is always greater than or equal to 0. So
$$\sum_{i=1}^{n+1}(Y_{T_i'} - Y_{S_i'}) \ge (b - a)U_n.$$
So
$$E U_n \le E Y_{n+1}/(b-a), \tag{14.1}$$
and $E Y_{n+1} = E Y_n = E[(X_n - a)^+]$, which is the desired bound.
This leads to the martingale convergence theorem.
Theorem 14.2. If $X_n$ is a submartingale such that $\sup_n E X_n^+ < \infty$, then $X_n$ converges a.s. as $n \to \infty$.
Proof. Let $U(a,b) = \lim_n U_n$ be the total number of upcrossings of $[a,b]$. For each pair of rationals $a < b$, by monotone convergence,
$$E U(a,b) \le (b-a)^{-1}\sup_n E(X_n - a)^+ < \infty.$$
So $U(a,b) < \infty$, a.s. Taking the union over all pairs of rationals $a, b$, we see that a.s. the sequence $X_n(\omega)$ cannot have $\limsup X_n > \liminf X_n$. Therefore $X_n$ converges a.s., although we still have to rule out the possibility of the limit being infinite. Since $X_n$ is a submartingale, $E X_n \ge E X_0$, and thus
$$E|X_n| = E X_n^+ + E X_n^- = 2E X_n^+ - E X_n \le 2E X_n^+ - E X_0.$$
By Fatou's lemma, $E\lim_n |X_n| \le \sup_n E|X_n| < \infty$, so $X_n$ converges a.s. to a finite limit.
Corollary 14.3. If $X_n$ is a positive supermartingale or a martingale bounded above or below, $X_n$ converges a.s.
Proof. If $X_n$ is a positive supermartingale, $-X_n$ is a submartingale bounded above by 0. Now apply Theorem 14.2.

If $X_n$ is a martingale bounded above, by considering $-X_n$, we may assume $X_n$ is bounded below. Looking at $X_n + M$ for fixed $M$ will not affect the convergence, so we may assume $X_n$ is bounded below by 0. Now apply the first assertion of the corollary.
Proposition 14.4. If $X_n$ is a martingale with $\sup_n E|X_n|^p < \infty$ for some $p > 1$, then the convergence is in $L^p$ as well as a.s. This is also true when $X_n$ is a submartingale. If $X_n$ is a uniformly integrable martingale, then the convergence is in $L^1$. If $X_n \to X_\infty$ in $L^1$, then $X_n = E[X_\infty \mid \mathcal{F}_n]$.
$X_n$ is a uniformly integrable martingale if the collection of random variables $X_n$ is uniformly integrable.
Proof. The $L^p$ convergence assertion follows by using Doob's inequality (Theorem 13.2) and dominated convergence. The $L^1$ convergence assertion follows since a.s. convergence together with uniform integrability implies $L^1$ convergence, by Theorem 7.3. Finally, if $j < n$, we have $X_j = E[X_n \mid \mathcal{F}_j]$. If $A \in \mathcal{F}_j$,
$$E[X_j; A] = E[X_n; A] \to E[X_\infty; A]$$
by the $L^1$ convergence of $X_n$ to $X_\infty$. Since this is true for all $A \in \mathcal{F}_j$, $X_j = E[X_\infty \mid \mathcal{F}_j]$.

15. Applications of martingales.


Let $S_n$ be your fortune at time $n$. In a fair casino, $E[S_{n+1} \mid \mathcal{F}_n] = S_n$. If $N$ is a stopping time, the optional stopping theorem says that $E S_N = E S_0$; in other words, no matter what stopping time you use and what method of betting, you will do no better on average than ending up with what you started with.
An elegant application of martingales is a proof of the SLLN. Fix $N$ large. Let $Y_i$ be i.i.d. with $E|Y_1| < \infty$. Let $Z_n = E[Y_1 \mid S_n, S_{n+1}, \ldots, S_N]$. We claim $Z_n = S_n/n$. Certainly $S_n/n$ is $\sigma(S_n, \ldots, S_N)$ measurable. If $A \in \sigma(S_n, \ldots, S_N)$ for some $n$, then $A = ((S_n, \ldots, S_N) \in B)$ for some Borel subset $B$ of $\mathbb{R}^{N-n+1}$. Since the $Y_i$ are i.i.d., for each $k \le n$,
$$E[Y_1; (S_n, \ldots, S_N) \in B] = E[Y_k; (S_n, \ldots, S_N) \in B].$$
Summing over $k \le n$ and dividing by $n$,
$$E[Y_1; (S_n, \ldots, S_N) \in B] = E[S_n/n; (S_n, \ldots, S_N) \in B].$$
Therefore $E[Y_1; A] = E[S_n/n; A]$ for every $A \in \sigma(S_n, \ldots, S_N)$. Thus $Z_n = S_n/n$.
Let $X_k = Z_{N-k}$, and let $\mathcal{F}_k = \sigma(S_{N-k}, S_{N-k+1}, \ldots, S_N)$. Note $\mathcal{F}_k$ gets larger as $k$ gets larger, and by the above $X_k = E[Y_1 \mid \mathcal{F}_k]$. This shows that $X_k$ is a martingale (cf. the next to last example in Section 11). By Doob's upcrossing inequality, if $U^X$ is the number of upcrossings of $[a,b]$ by $X_1, \ldots, X_{N-1}$, then
$$E U^X \le E X_{N-1}^+/(b-a) \le E|Y_1|/(b-a),$$
since $X_{N-1} = E[Y_1 \mid \mathcal{F}_{N-1}]$ implies $E|X_{N-1}| \le E|Y_1|$. This differs by at most one from the number of upcrossings of $[a,b]$ by $Z_1, \ldots, Z_N$. So the expected number of upcrossings of $[a,b]$ by $Z_k$ for $k \le N$ is bounded by $1 + E|Y_1|/(b-a)$. Now let $N \to \infty$. By Fatou's lemma, the expected number of upcrossings of $[a,b]$ by $Z_1, Z_2, \ldots$ is finite. Arguing as in the proof of the martingale convergence theorem, this says that $Z_n = S_n/n$ does not oscillate.
It is conceivable that $|S_n/n| \to \infty$. But by Fatou's lemma,
$$E[\lim |S_n/n|] \le \liminf E|S_n/n| \le \liminf nE|Y_1|/n = E|Y_1| < \infty.$$
Another application of martingale techniques is Wald's identities.
Proposition 15.1. Suppose the $Y_i$ are i.i.d. with $E|Y_1| < \infty$, $N$ is a stopping time with $E N < \infty$, and $N$ is independent of the $Y_i$. Then $E S_N = (E N)(E Y_1)$, where the $S_n$ are the partial sums of the $Y_i$.
Proof. $S_n - n(E Y_1)$ is a martingale, so $E S_{n \wedge N} = E(n \wedge N)E Y_1$ by optional stopping. The right hand side tends to $(E N)(E Y_1)$ by monotone convergence. $S_{n \wedge N}$ converges almost surely to $S_N$, and we need to show the expected values converge.
Note
$$|S_{n \wedge N}| = \Big|\sum_{k=0}^\infty S_{n \wedge k}1_{(N=k)}\Big| \le \sum_{k=0}^\infty \sum_{j=1}^{n \wedge k} |Y_j|1_{(N=k)} = \sum_{j=1}^n \sum_{k \ge j} |Y_j|1_{(N=k)} = \sum_{j=1}^n |Y_j|1_{(N \ge j)} \le \sum_{j=1}^\infty |Y_j|1_{(N \ge j)}.$$
The last expression, using the independence, has expected value
$$\sum_{j=1}^\infty (E|Y_j|)P(N \ge j) = (E|Y_1|)(E N) < \infty.$$
So by dominated convergence, we have $E S_{n \wedge N} \to E S_N$.


Wald's second identity is a similar expression for the variance of $S_N$.
We can use martingales to find certain hitting probabilities.
Proposition 15.2. Suppose the $Y_i$ are i.i.d. with $P(Y_1 = 1) = 1/2$, $P(Y_1 = -1) = 1/2$, and $S_n$ the partial sum process. Suppose $a$ and $b$ are positive integers. Then
$$P(S_n \text{ hits } a \text{ before } -b) = \frac{b}{a+b}.$$
If $N = \min\{n : S_n \in \{a, -b\}\}$, then $E N = ab$.
Proof. $S_n^2 - n$ is a martingale, so $E S_{n \wedge N}^2 = E(n \wedge N)$. Let $n \to \infty$. The right hand side converges to $E N$ by monotone convergence. Since $S_{n \wedge N}$ is bounded in absolute value by $a + b$, the left hand side converges by dominated convergence to $E S_N^2$, which is finite. So $E N$ is finite, hence $N$ is finite almost surely.
$S_n$ is a martingale, so $E S_{n \wedge N} = E S_0 = 0$. By dominated convergence, and the fact that $N < \infty$ a.s., hence $S_{n \wedge N} \to S_N$, we have $E S_N = 0$, or
$$aP(S_N = a) - bP(S_N = -b) = 0.$$
We also have
$$P(S_N = a) + P(S_N = -b) = 1.$$
Solving these two equations for $P(S_N = a)$ and $P(S_N = -b)$ yields our first result. Since $E N = E S_N^2 = a^2P(S_N = a) + b^2P(S_N = -b)$, substituting gives the second result.
Based on this proposition, if we let $a \to \infty$, we see that $P(N_{-b} < \infty) = 1$ and $E N_{-b} = \infty$, where $N_{-b} = \min\{n : S_n = -b\}$; by symmetry the same holds for hitting any fixed level $b$.
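A simulation of Proposition 15.2 (a sketch assuming NumPy; $a = 3$ and $b = 7$ are arbitrary): the hitting probability should be near $b/(a+b) = 0.7$ and the mean exit time near $ab = 21$.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, trials = 3, 7, 20_000

hit_a, times = 0, []
for _ in range(trials):
    s, n = 0, 0
    while s != a and s != -b:        # run until S_n exits (-b, a)
        s += rng.choice([-1, 1])
        n += 1
    hit_a += (s == a)
    times.append(n)

print(hit_a / trials, np.mean(times))   # near b/(a+b) = 0.7 and ab = 21
```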
Next we give a version of the Borel-Cantelli lemma.
Proposition 15.3. Suppose $A_n \in \mathcal{F}_n$. Then $(A_n \text{ i.o.})$ and $\big(\sum_{n=1}^\infty P(A_n \mid \mathcal{F}_{n-1}) = \infty\big)$ differ by a null set.
Proof. Let $X_n = \sum_{m=1}^n [1_{A_m} - P(A_m \mid \mathcal{F}_{m-1})]$. Note $|X_n - X_{n-1}| \le 1$. Also, it is easy to see that $E[X_n - X_{n-1} \mid \mathcal{F}_{n-1}] = 0$, so $X_n$ is a martingale.
We claim that for almost every $\omega$ either $\lim X_n$ exists and is finite, or else $\limsup X_n = \infty$ and $\liminf X_n = -\infty$. In fact, if $N = \min\{n : X_n \ge k\}$, then $X_{n \wedge N} \le k + 1$, so $X_{n \wedge N}$ converges by the martingale convergence theorem. Therefore $\lim X_n$ exists and is finite on $(N = \infty)$. So if $\lim X_n$ does not exist or is not finite, then $N < \infty$. This is true for all $k$, hence $\limsup X_n = \infty$. A similar argument shows $\liminf X_n = -\infty$ in this case.
Now if $\lim X_n$ exists and is finite, then $\sum_{n=1}^\infty 1_{A_n} = \infty$ if and only if $\sum P(A_n \mid \mathcal{F}_{n-1}) = \infty$. On the other hand, if the limit does not exist or is not finite, then $\sum 1_{A_n} = \infty$ and $\sum P(A_n \mid \mathcal{F}_{n-1}) = \infty$.

16. Weak convergence.

We will see later that if the $X_i$ are i.i.d. with mean zero and variance one, then $S_n/\sqrt{n}$ converges in the sense
$$P(S_n/\sqrt{n} \in [a,b]) \to P(Z \in [a,b]),$$
where $Z$ is a standard normal. If $S_n/\sqrt{n}$ converged in probability or almost surely, then by the zero-one law it would converge to a constant, contradicting the above. We want to generalize the above type of convergence.
We say $F_n$ converges weakly to $F$ if $F_n(x) \to F(x)$ for all $x$ at which $F$ is continuous. Here $F_n$ and $F$ are distribution functions. We say $X_n$ converges weakly to $X$ if $F_{X_n}$ converges weakly to $F_X$. We sometimes say $X_n$ converges in distribution or converges in law to $X$. Probabilities $\mu_n$ converge weakly if their corresponding distribution functions converge, that is, if $F_{\mu_n}(x) = \mu_n(-\infty, x]$ converges weakly.
An example that illustrates why we restrict the convergence to continuity points of $F$ is the following. Let $X_n = 1/n$ with probability one, and $X = 0$ with probability one. $F_{X_n}(x)$ is 0 if $x < 1/n$ and 1 otherwise. $F_{X_n}(x)$ converges to $F_X(x)$ for all $x$ except $x = 0$.
Proposition 16.1. $X_n$ converges weakly to $X$ if and only if $E g(X_n) \to E g(X)$ for all $g$ bounded and continuous.
The idea that $E g(X_n)$ converges to $E g(X)$ for all $g$ bounded and continuous makes sense for any metric space and is used as a definition of weak convergence for $X_n$ taking values in general metric spaces.
Proof. First suppose $E g(X_n)$ converges to $E g(X)$. Let $x$ be a continuity point of $F_X$, let $\varepsilon > 0$, and choose $\delta$ such that $|F_X(y) - F_X(x)| < \varepsilon$ if $|y - x| < \delta$. Choose $g$ continuous such that $g$ is one on $(-\infty, x]$, takes values between 0 and 1, and is 0 on $[x + \delta, \infty)$. Then $F_{X_n}(x) \le E g(X_n) \to E g(X) \le F_X(x + \delta) \le F_X(x) + \varepsilon$.
Similarly, if $h$ is a continuous function taking values between 0 and 1 that is 1 on $(-\infty, x - \delta]$ and 0 on $[x, \infty)$, then $F_{X_n}(x) \ge E h(X_n) \to E h(X) \ge F_X(x - \delta) \ge F_X(x) - \varepsilon$. Since $\varepsilon$ is arbitrary, $F_{X_n}(x) \to F_X(x)$.
Now suppose $X_n$ converges weakly to $X$. If $a$ and $b$ are continuity points of $F_X$ and of all the $F_{X_n}$, then $E 1_{[a,b]}(X_n) = F_{X_n}(b) - F_{X_n}(a) \to F_X(b) - F_X(a) = E 1_{[a,b]}(X)$. By taking linear combinations, we have $E g(X_n) \to E g(X)$ for every $g$ which is a step function where the endpoints of the intervals are continuity points for all the $F_{X_n}$ and for $F_X$. Since the set of points that are not a continuity point for some $F_{X_n}$ or for $F_X$ is countable, and we can approximate any continuous function on an interval uniformly by such step functions, we have $E g(X_n) \to E g(X)$ for all $g$ such that the support of $g$ is a closed interval whose endpoints are continuity points of $F_X$ and $g$ is continuous on its support.
Let $\varepsilon > 0$ and choose $M$ such that $F_X(M) > 1 - \varepsilon$ and $F_X(-M) < \varepsilon$ and so that $M$ and $-M$ are continuity points of $F_X$ and of the $F_{X_n}$. By the above argument, $E(1_{[-M,M]}g)(X_n) \to E(1_{[-M,M]}g)(X)$, where $g$ is a bounded continuous function. The difference between $E(1_{[-M,M]}g)(X)$ and $E g(X)$ is bounded by $\|g\|_\infty P(X \notin [-M,M]) \le 2\|g\|_\infty\varepsilon$. Similarly, when $X$ is replaced by $X_n$, the difference is bounded by $\|g\|_\infty P(X_n \notin [-M,M]) \to \|g\|_\infty P(X \notin [-M,M])$. So for $n$ large, it is less than $3\|g\|_\infty\varepsilon$. Since $\varepsilon$ is arbitrary, $E g(X_n) \to E g(X)$ whenever $g$ is bounded and continuous.
Let us examine the relationship between weak convergence and convergence in probability. The example of $S_n/\sqrt{n}$ shows that one can have weak convergence without convergence in probability.
Proposition 16.2. (a) If $X_n$ converges to $X$ in probability, then it converges weakly.
(b) If $X_n$ converges weakly to a constant, it converges in probability.
(c) (Slutsky's theorem) If $X_n$ converges weakly to $X$ and $Y_n$ converges weakly to a constant $c$, then $X_n + Y_n$ converges weakly to $X + c$ and $X_nY_n$ converges weakly to $cX$.
Proof. To prove (a), let $g$ be a bounded and continuous function. If $n_j$ is any subsequence, then there exists a further subsequence such that $X_{n_{j_k}}$ converges almost surely to $X$. Then by dominated convergence, $E g(X_{n_{j_k}}) \to E g(X)$. That suffices to show $E g(X_n)$ converges to $E g(X)$.
For (b), if $X_n$ converges weakly to $c$,
$$P(X_n - c > \varepsilon) = P(X_n > c + \varepsilon) = 1 - P(X_n \le c + \varepsilon) \to 1 - P(c \le c + \varepsilon) = 0.$$
We use the fact that if $Y \equiv c$, then $c + \varepsilon$ is a point of continuity for $F_Y$. A similar equation shows $P(X_n - c \le -\varepsilon) \to 0$, so $P(|X_n - c| > \varepsilon) \to 0$.
We now prove the first part of (c), leaving the second part for the reader. Let $x$ be a point such that $x - c$ is a continuity point of $F_X$. Choose $\varepsilon$ so that $x - c + \varepsilon$ is again a continuity point. Then
$$P(X_n + Y_n \le x) \le P(X_n + c \le x + \varepsilon) + P(|Y_n - c| > \varepsilon) \to P(X \le x - c + \varepsilon).$$
So $\limsup P(X_n + Y_n \le x) \le P(X + c \le x + \varepsilon)$. Since $\varepsilon$ can be as small as we like and $x - c$ is a continuity point of $F_X$, then $\limsup P(X_n + Y_n \le x) \le P(X + c \le x)$. The $\liminf$ is done similarly.
We say a sequence of distribution functions $\{F_n\}$ is tight if for each $\varepsilon > 0$ there exists $M$ such that $F_n(M) \ge 1 - \varepsilon$ and $F_n(-M) \le \varepsilon$ for all $n$. A sequence of r.v.s is tight if the corresponding distribution functions are tight; this is equivalent to $P(|X_n| \ge M) \le \varepsilon$.
Theorem 16.3. (Helly's theorem) Let $F_n$ be a sequence of distribution functions that is tight. There exists a subsequence $n_j$ and a distribution function $F$ such that $F_{n_j}$ converges weakly to $F$.
What could happen is that $X_n = n$, so that $F_{X_n} \to 0$; the tightness precludes this.
Proof. Let $q_k$ be an enumeration of the rationals. Since $F_n(q_k) \in [0,1]$, any subsequence has a further subsequence that converges. Use diagonalization so that $F_{n_j}(q_k)$ converges for each $q_k$ and call the limit $F(q_k)$. $F$ is nondecreasing along the rationals, and we define $F(x) = \inf_{q_k > x} F(q_k)$. So $F$ is right continuous and nondecreasing.
If $x$ is a point of continuity of $F$ and $\varepsilon > 0$, then there exist $r$ and $s$ rational such that $r < x < s$ and $F(s) - \varepsilon < F(x) < F(r) + \varepsilon$. Then
$$F_{n_j}(x) \ge F_{n_j}(r) \to F(r) > F(x) - \varepsilon$$
and
$$F_{n_j}(x) \le F_{n_j}(s) \to F(s) < F(x) + \varepsilon.$$
Since $\varepsilon$ is arbitrary, $F_{n_j}(x) \to F(x)$.
Since the $F_n$ are tight, there exists $M$ such that $F_n(-M) < \varepsilon$ for all $n$. Then $F(-M) \le \varepsilon$, which implies $\lim_{x \to -\infty} F(x) = 0$. Showing $\lim_{x \to \infty} F(x) = 1$ is similar. Therefore $F$ is in fact a distribution function.
We conclude by giving an easily checked criterion for tightness.
Proposition 16.4. Suppose there exists $\varphi : [0,\infty) \to [0,\infty)$ that is increasing and $\varphi(x) \to \infty$ as $x \to \infty$. If $c = \sup_n E\varphi(|X_n|) < \infty$, then the $X_n$ are tight.
Proof. Let $\varepsilon > 0$. Choose $M$ such that $\varphi(x) \ge c/\varepsilon$ if $x > M$. Then
$$P(|X_n| > M) \le \int \frac{\varphi(|X_n|)}{c/\varepsilon}1_{(|X_n| > M)}\,dP \le \frac{\varepsilon}{c}E\varphi(|X_n|) \le \varepsilon.$$
17. Characteristic functions.


We define the characteristic function of a random variable $X$ by $\varphi_X(t) = E e^{itX}$ for $t \in \mathbb{R}$.
Note that $\varphi_X(t) = \int e^{itx}\,P_X(dx)$. So if $X$ and $Y$ have the same law, they have the same characteristic function. Also, if the law of $X$ has a density, that is, $P_X(dx) = f_X(x)\,dx$, then $\varphi_X(t) = \int e^{itx}f_X(x)\,dx$, so in this case the characteristic function is the same as (one definition of) the Fourier transform of $f_X$.
Proposition 17.1. $\varphi(0) = 1$, $|\varphi(t)| \le 1$, $\varphi(-t) = \overline{\varphi(t)}$, and $\varphi$ is uniformly continuous.
Proof. Since $|e^{itx}| \le 1$, everything follows immediately from the definitions except the uniform continuity. For that we write
$$|\varphi(t+h) - \varphi(t)| = |E e^{i(t+h)X} - E e^{itX}| \le E|e^{itX}(e^{ihX} - 1)| = E|e^{ihX} - 1|.$$
$|e^{ihX} - 1|$ tends to 0 almost surely as $h \to 0$, so the right hand side tends to 0 by dominated convergence. Note that the right hand side is independent of $t$.
Proposition 17.2. $\varphi_{aX}(t) = \varphi_X(at)$ and $\varphi_{X+b}(t) = e^{itb}\varphi_X(t)$.
Proof. The first follows from $E e^{it(aX)} = E e^{i(at)X}$, and the second is similar.
Proposition 17.3. If $X$ and $Y$ are independent, then $\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t)$.
Proof. From the multiplication theorem,
$$E e^{it(X+Y)} = E e^{itX}e^{itY} = E e^{itX}\,E e^{itY}.$$
Note that if $X_1$ and $X_2$ are independent and identically distributed, then
$$\varphi_{X_1 - X_2}(t) = \varphi_{X_1}(t)\varphi_{-X_2}(t) = \varphi_{X_1}(t)\varphi_{X_2}(-t) = \varphi_{X_1}(t)\overline{\varphi_{X_2}(t)} = |\varphi_{X_1}(t)|^2.$$
Let us look at some examples of characteristic functions.
(a) Bernoulli: By direct computation, this is $pe^{it} + (1-p) = 1 - p(1 - e^{it})$.
(b) Coin flip (i.e., $P(X = +1) = P(X = -1) = 1/2$): We have $\frac{1}{2}e^{it} + \frac{1}{2}e^{-it} = \cos t$.
(c) Poisson:
$$E e^{itX} = \sum_{k=0}^\infty e^{itk}e^{-\lambda}\frac{\lambda^k}{k!} = e^{-\lambda}\sum_{k=0}^\infty \frac{(\lambda e^{it})^k}{k!} = e^{-\lambda}e^{\lambda e^{it}} = e^{\lambda(e^{it}-1)}.$$
(d) Point mass at $a$: $E e^{itX} = e^{ita}$. Note that when $a = 0$, then $\varphi \equiv 1$.
(e) Binomial: Write $X$ as the sum of $n$ independent Bernoulli r.v.s $B_i$. So
$$\varphi_X(t) = \prod_{i=1}^n \varphi_{B_i}(t) = [\varphi_{B_1}(t)]^n = [1 - p(1 - e^{it})]^n.$$
(f) Geometric: Here $P(X = k) = p(1-p)^k$, so
$$\varphi(t) = \sum_{k=0}^\infty p(1-p)^k e^{itk} = p\sum_{k=0}^\infty ((1-p)e^{it})^k = \frac{p}{1 - (1-p)e^{it}}.$$
(g) Uniform on $[a,b]$:
$$\varphi(t) = \frac{1}{b-a}\int_a^b e^{itx}\,dx = \frac{e^{itb} - e^{ita}}{(b-a)it}.$$
Note that when $a = -b$ this reduces to $\sin(bt)/bt$.
(h) Exponential:
$$\varphi(t) = \int_0^\infty e^{itx}\lambda e^{-\lambda x}\,dx = \lambda\int_0^\infty e^{(it-\lambda)x}\,dx = \frac{\lambda}{\lambda - it}.$$
(i) Standard normal:
$$\varphi(t) = \frac{1}{\sqrt{2\pi}}\int e^{itx}e^{-x^2/2}\,dx.$$
This can be done by completing the square and then doing a contour integration. Alternately, $\varphi'(t) = (1/\sqrt{2\pi})\int ixe^{itx}e^{-x^2/2}\,dx$ (do the real and imaginary parts separately, and use the dominated convergence theorem to justify taking the derivative inside). Integrating by parts (do the real and imaginary parts separately), this is equal to $-t\varphi(t)$. The only solution to $\varphi'(t) = -t\varphi(t)$ with $\varphi(0) = 1$ is $\varphi(t) = e^{-t^2/2}$.
(j) Normal with mean $\mu$ and variance $\sigma^2$: Writing $X = \mu + \sigma Z$, where $Z$ is a standard normal, then
$$\varphi_X(t) = e^{it\mu}\varphi_Z(\sigma t) = e^{it\mu - \sigma^2t^2/2}.$$
(k) Cauchy: We have
$$\varphi(t) = \frac{1}{\pi}\int \frac{e^{itx}}{1+x^2}\,dx.$$
This is a standard exercise in contour integration in complex analysis. The answer is $e^{-|t|}$.
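These formulas can be checked against the empirical characteristic function $n^{-1}\sum_j e^{itX_j}$. A sketch (assuming NumPy), comparing normal samples with example (i) and Cauchy samples with example (k):

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.standard_normal(100_000)       # standard normal sample
c = rng.standard_cauchy(100_000)       # Cauchy sample

for t in [0.5, 1.0, 2.0]:
    emp_z = np.exp(1j * t * z).mean()  # estimates exp(-t^2/2)
    emp_c = np.exp(1j * t * c).mean()  # estimates exp(-|t|)
    print(t, emp_z.real, np.exp(-t**2 / 2), emp_c.real, np.exp(-abs(t)))
```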
18. Inversion formula.
We need a preliminary real variable lemma, and then we can proceed to the inversion formula, which gives a formula for the distribution function in terms of the characteristic function.
Lemma 18.1. (a) $\int_0^N (\sin(Ax)/x)\,dx \to \pi\,\operatorname{sgn}(A)/2$ as $N \to \infty$.
(b) $\sup_a \big|\int_0^a (\sin(Ax)/x)\,dx\big| < \infty$.
Proof. If $A = 0$, this is clear. The case $A < 0$ reduces to the case $A > 0$ by the fact that $\sin$ is an odd function. By a change of variables $y = Ax$, we reduce to the case $A = 1$. Part (a) is a standard result in contour integration, and part (b) comes from the fact that the integral can be written as an alternating series.
An alternate proof of (a) is the following. $e^{-xy}\sin x$ is integrable on $\{(x,y) : 0 < x < a, 0 < y < \infty\}$. So
$$\int_0^a \frac{\sin x}{x}\,dx = \int_0^a\int_0^\infty e^{-xy}\sin x\,dy\,dx = \int_0^\infty\int_0^a e^{-xy}\sin x\,dx\,dy = \int_0^\infty \Big[\frac{-e^{-xy}}{y^2+1}(y\sin x + \cos x)\Big]_0^a\,dy$$
$$= \int_0^\infty \Big[\frac{1}{y^2+1} - \frac{e^{-ay}}{y^2+1}(y\sin a + \cos a)\Big]\,dy = \frac{\pi}{2} - \sin a\int_0^\infty \frac{ye^{-ay}}{y^2+1}\,dy - \cos a\int_0^\infty \frac{e^{-ay}}{y^2+1}\,dy.$$
The last two integrals tend to 0 as $a \to \infty$ since the integrand is bounded by $(1+y)e^{-y}$ if $a \ge 1$.
Theorem 18.2. (Inversion formula) Let $\mu$ be a probability measure and let $\varphi(t) = \int e^{itx}\,\mu(dx)$. If $a < b$, then
$$\lim_{T \to \infty}\frac{1}{2\pi}\int_{-T}^T \frac{e^{-ita} - e^{-itb}}{it}\varphi(t)\,dt = \mu(a,b) + \frac{1}{2}\mu(\{a\}) + \frac{1}{2}\mu(\{b\}).$$
The example where $\mu$ is point mass at 0, so $\varphi(t) = 1$, shows that one needs to take a limit, since the integrand in this case is $2\sin t/t$, which is not integrable.
Proof. By Fubini,
$$\int_{-T}^T \frac{e^{-ita} - e^{-itb}}{it}\varphi(t)\,dt = \int_{-T}^T\int \frac{e^{-ita} - e^{-itb}}{it}e^{itx}\,\mu(dx)\,dt = \int\int_{-T}^T \frac{e^{-ita} - e^{-itb}}{it}e^{itx}\,dt\,\mu(dx).$$
To justify this, we bound the integrand using the mean value theorem: $|(e^{-ita} - e^{-itb})/it| \le b - a$.
Expanding $e^{-itb}$ and $e^{-ita}$ using Euler's formula, and using the fact that $\cos$ is an even function and $\sin$ is odd, we are left with
$$2\int\Big[\int_0^T \frac{\sin(t(x-a))}{t}\,dt - \int_0^T \frac{\sin(t(x-b))}{t}\,dt\Big]\mu(dx).$$
Using Lemma 18.1 and dominated convergence, this tends to
$$\pi\int[\operatorname{sgn}(x-a) - \operatorname{sgn}(x-b)]\,\mu(dx).$$
Since $\operatorname{sgn}(x-a) - \operatorname{sgn}(x-b)$ equals 2 on $(a,b)$, 1 at $x = a$ and at $x = b$, and 0 otherwise, dividing by $2\pi$ gives the result.
Theorem 18.3. If $\int |\varphi(t)|\,dt < \infty$, then $\mu$ has a bounded density $f$ and
$$f(y) = \frac{1}{2\pi}\int e^{-ity}\varphi(t)\,dt.$$
Proof. We have
$$\mu(a,b) + \frac{1}{2}\mu(\{a\}) + \frac{1}{2}\mu(\{b\}) = \lim_{T \to \infty}\frac{1}{2\pi}\int_{-T}^T \frac{e^{-ita} - e^{-itb}}{it}\varphi(t)\,dt = \frac{1}{2\pi}\int \frac{e^{-ita} - e^{-itb}}{it}\varphi(t)\,dt \le \frac{b-a}{2\pi}\int |\varphi(t)|\,dt.$$
Letting $b \downarrow a$ shows that $\mu$ has no point masses.


We now write
$$\mu(x, x+h) = \frac{1}{2\pi}\int \frac{e^{-itx} - e^{-it(x+h)}}{it}\varphi(t)\,dt = \frac{1}{2\pi}\int\Big(\int_x^{x+h} e^{-ity}\,dy\Big)\varphi(t)\,dt = \int_x^{x+h}\Big(\frac{1}{2\pi}\int e^{-ity}\varphi(t)\,dt\Big)\,dy.$$
So $\mu$ has density $f(y) = (1/2\pi)\int e^{-ity}\varphi(t)\,dt$. As in the proof of Proposition 17.1, we see $f$ is continuous.
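Theorem 18.3 can be tested numerically. A sketch (assuming NumPy; the truncation at $|t| \le 20$ and the grid are arbitrary): integrating $e^{-ity}\varphi(t)/2\pi$ with $\varphi(t) = e^{-t^2/2}$ recovers the standard normal density.

```python
import numpy as np

# f(y) = (1/2pi) * integral of exp(-ity) * phi(t) dt, with phi(t) = exp(-t^2/2).
t = np.linspace(-20.0, 20.0, 4001)
dt = t[1] - t[0]
phi = np.exp(-t ** 2 / 2)

for y in [0.0, 1.0, 2.0]:
    f = (np.exp(-1j * t * y) * phi).sum() * dt / (2 * np.pi)
    print(y, f.real, np.exp(-y ** 2 / 2) / np.sqrt(2 * np.pi))
```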
A corollary to the inversion formula is the uniqueness theorem.
Theorem 18.4. If $\varphi_X = \varphi_Y$, then $P_X = P_Y$.
The following proposition can be proved directly, but the proof using characteristic functions is much easier.
Proposition 18.5. (a) If $X$ and $Y$ are independent, $X$ is a normal with mean $a$ and variance $b^2$, and $Y$ is a normal with mean $c$ and variance $d^2$, then $X + Y$ is normal with mean $a + c$ and variance $b^2 + d^2$.
(b) If $X$ and $Y$ are independent, $X$ is Poisson with parameter $\lambda_1$, and $Y$ is Poisson with parameter $\lambda_2$, then $X + Y$ is Poisson with parameter $\lambda_1 + \lambda_2$.
(c) If $X_i$ are i.i.d. Cauchy, then $S_n/n$ is Cauchy.
Proof. For (a),
$$\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t) = e^{iat - b^2t^2/2}e^{ict - d^2t^2/2} = e^{i(a+c)t - (b^2+d^2)t^2/2}.$$
Now use the uniqueness theorem. Parts (b) and (c) are proved similarly.

19. Continuity theorem.


Lemma 19.1. Suppose $\varphi$ is the characteristic function of a probability $\mu$. Then
$$\mu([-2A, 2A]) \ge A\Big|\int_{-1/A}^{1/A}\varphi(t)\,dt\Big| - 1.$$
Proof. Note
$$\frac{1}{2T}\int_{-T}^T \varphi(t)\,dt = \frac{1}{2T}\int_{-T}^T\int e^{itx}\,\mu(dx)\,dt = \int \frac{1}{2T}\int_{-T}^T e^{itx}\,dt\,\mu(dx) = \int \frac{\sin Tx}{Tx}\,\mu(dx).$$
Since $|\sin(Tx)/Tx| \le 1$ always, and $|\sin(Tx)| \le 1$ implies $|\sin(Tx)/Tx| \le 1/2TA$ if $|x| \ge 2A$, we have
$$\Big|\int \frac{\sin Tx}{Tx}\,\mu(dx)\Big| \le \mu([-2A, 2A]) + \frac{1}{2TA}\big(1 - \mu([-2A, 2A])\big) = \frac{1}{2TA} + \Big(1 - \frac{1}{2TA}\Big)\mu([-2A, 2A]).$$
Setting $T = 1/A$,
$$\frac{A}{2}\Big|\int_{-1/A}^{1/A}\varphi(t)\,dt\Big| \le \frac{1}{2} + \frac{1}{2}\mu([-2A, 2A]).$$
Now multiply both sides by 2 and rearrange.
Proposition 19.2. If $\mu_n$ converges weakly to $\mu$, then $\varphi_n$ converges to $\varphi$ uniformly on every finite interval.
Proof. Let $\varepsilon > 0$ and choose $M$ large so that $\mu([-M,M]^c) < \varepsilon$. Define $f$ to be 1 on $[-M,M]$, 0 on $[-M-1, M+1]^c$, and linear in between. Since $\int f\,d\mu_n \to \int f\,d\mu$, if $n$ is large enough,
$$\int (1-f)\,d\mu_n \le 2\varepsilon.$$
We have
$$|\varphi_n(t+h) - \varphi_n(t)| \le \int |e^{ihx} - 1|\,\mu_n(dx) \le 2\int (1-f)\,d\mu_n + |h|\int |x|\,f(x)\,\mu_n(dx) \le 4\varepsilon + |h|(M+1).$$
So for $n$ large enough and $|h| \le \varepsilon/(M+1)$, we have
$$|\varphi_n(t+h) - \varphi_n(t)| \le 5\varepsilon,$$
which says that the $\varphi_n$ are equicontinuous. Together with the pointwise convergence $\varphi_n(t) \to \varphi(t)$, this implies the convergence is uniform on finite intervals. □
The interesting result of this section is the converse, Lévy's continuity theorem.
Theorem 19.3. Suppose $\mu_n$ are probabilities, $\varphi_n(t)$ converges to a function $\varphi(t)$ for each $t$, and $\varphi$ is continuous at 0. Then $\varphi$ is the characteristic function of a probability $\mu$ and $\mu_n$ converges weakly to $\mu$.
Proof. Let $\varepsilon > 0$. Since $\varphi$ is continuous at 0, choose $\delta$ small so that
$$\Bigl|\frac{1}{2\delta}\int_{-\delta}^{\delta} \varphi(t)\,dt - 1\Bigr| < \varepsilon.$$
Using the dominated convergence theorem, choose $N$ such that
$$\frac{1}{2\delta}\int_{-\delta}^{\delta} |\varphi_n(t) - \varphi(t)|\,dt < \varepsilon$$
if $n \ge N$. So if $n \ge N$,
$$\Bigl|\frac{1}{2\delta}\int_{-\delta}^{\delta}\varphi_n(t)\,dt\Bigr| \ge \Bigl|\frac{1}{2\delta}\int_{-\delta}^{\delta}\varphi(t)\,dt\Bigr| - \frac{1}{2\delta}\int_{-\delta}^{\delta}|\varphi_n(t)-\varphi(t)|\,dt \ge 1 - 2\varepsilon.$$
By Lemma 19.1 with $A = 1/\delta$, for such $n$,
$$\mu_n[-2/\delta, 2/\delta] \ge 2(1-2\varepsilon) - 1 = 1 - 4\varepsilon.$$
This shows the $\mu_n$ are tight.
Let $\mu_{n_j}$ be a subsequence such that $\mu_{n_j}$ converges weakly, say to $\mu$. Then $\varphi_{n_j}(t) \to \varphi_\mu(t)$, hence $\varphi(t) = \varphi_\mu(t)$; that is, $\varphi$ is the characteristic function of the probability $\mu$. If $\mu'$ is any other subsequential weak limit point of $\mu_n$, then $\varphi_{\mu'}(t) = \varphi(t) = \varphi_\mu(t)$; so $\mu'$ must equal $\mu$. Hence $\mu_n$ converges weakly to $\mu$. □
We need the following estimate on moments.
Proposition 19.4. If $E|X|^k < \infty$ for an integer $k$, then $\varphi_X$ has a continuous derivative of order $k$ and
$$\varphi_X^{(k)}(t) = \int (ix)^k e^{itx}\,P_X(dx).$$
In particular, $\varphi_X^{(k)}(0) = i^k\,E X^k$.
Proof. Write
$$\frac{\varphi(t+h)-\varphi(t)}{h} = \int \frac{e^{i(t+h)x} - e^{itx}}{h}\,P_X(dx).$$
The integrand is bounded in modulus by $|x|$. So if $\int |x|\,P_X(dx) < \infty$, we can use dominated convergence to obtain the desired formula for $\varphi'(t)$. As in the proof of Proposition 17.1, we see $\varphi'(t)$ is continuous. We do the case of general $k$ by induction. Evaluating $\varphi^{(k)}$ at 0 gives the particular case. □
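As a quick numerical sanity check (our own, not part of the notes), one can differentiate a known characteristic function at 0 and compare with $i^k E X^k$; here we use the standard normal, for which $\varphi(t) = e^{-t^2/2}$:

```python
import numpy as np

# Estimate phi''(0) by a central difference and compare with i^2 E X^2 = -1.
phi = lambda t: np.exp(-t**2 / 2)     # characteristic function of N(0,1)

h = 1e-4
second_deriv = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2
print(second_deriv)                   # approximately -1 = i^2 * E X^2
```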
Here is a converse.
Proposition 19.5. If $\varphi$ is the characteristic function of a random variable $X$ and $\varphi''(0)$ exists, then $E|X|^2 < \infty$.
Proof. Note
$$0 \le \frac{2(1-\cos hx)}{h^2} = -\frac{e^{ihx} - 2 + e^{-ihx}}{h^2},$$
and $2(1-\cos hx)/h^2 \to x^2$ as $h \to 0$. So by Fatou's lemma,
$$\int x^2\,P_X(dx) \le \liminf_{h\to 0}\int \frac{2(1-\cos hx)}{h^2}\,P_X(dx) = -\limsup_{h\to 0}\frac{\varphi(h) - 2\varphi(0) + \varphi(-h)}{h^2} = -\varphi''(0) < \infty. \qquad\Box$$

One nice application of the continuity theorem is a proof of the weak law of large numbers. Its proof
is very similar to the proof of the central limit theorem, which we give in the next section.
Another nice use of characteristic functions and martingales is the following.
Proposition 19.6. Suppose $X_i$ is a sequence of independent r.v.s and $S_n$ converges weakly. Then $S_n$ converges almost surely.
Proof. Suppose $S_n$ converges weakly to $W$. Then $\varphi_{S_n}(t) \to \varphi_W(t)$ uniformly on compact sets by Proposition 19.2. Since $\varphi_W(0) = 1$ and $\varphi_W$ is continuous, there exists $\delta$ such that $|\varphi_W(t) - 1| < 1/2$ if $|t| < \delta$. So for $n$ large, $|\varphi_{S_n}(t)| \ge 1/4$ if $|t| < \delta$.
Note
$$E\bigl[e^{itS_n} \mid X_1, \ldots, X_{n-1}\bigr] = e^{itS_{n-1}}\,E\bigl[e^{itX_n} \mid X_1, \ldots, X_{n-1}\bigr] = e^{itS_{n-1}}\,\varphi_{X_n}(t).$$
Since $\varphi_{S_n}(t) = \prod \varphi_{X_i}(t)$, it follows that $e^{itS_n}/\varphi_{S_n}(t)$ is a martingale.
Therefore for $|t| < \delta$ and $n$ large, $e^{itS_n}/\varphi_{S_n}(t)$ is a bounded martingale, and hence converges almost surely. Since $\varphi_{S_n}(t) \to \varphi_W(t) \ne 0$, then $e^{itS_n}$ converges almost surely if $|t| < \delta$.
Let $A = \{(\omega, t) \in \Omega \times (-\delta,\delta) : e^{itS_n(\omega)} \text{ does not converge}\}$. For each $t$, we have almost sure convergence, so $\int 1_A(\omega, t)\,P(d\omega) = 0$. Therefore $\int\!\int 1_A\,dP\,dt = 0$, and by Fubini, $\int\!\int 1_A\,dt\,dP = 0$. Hence almost surely, $\int 1_A(\omega, t)\,dt = 0$. This means there exists a set $N$ with $P(N) = 0$ such that if $\omega \notin N$, then $e^{itS_n(\omega)}$ converges for almost every $t \in (-\delta, \delta)$.

If
/ N , by dominated convergence,

Ra
0

eitSn () dt converges, provided a < . Call the limit Aa . Also

eitSn () dt =

eiaSn () 1
iSn ()

if Sn () 6= 0 and equals a otherwise.


Since Sn converges weakly, it is not possible for |Sn | with positive probability. If we let
0
N = { : |Sn ()| } and choose
/ N N 0 , there exists a subsequence Snj () which converges
to a finite limit, say R. We can choose a < such that eiaSn () converges and eiaR 6= 1. Therefore
Aa = (eiaR 1)/R, a nonzero quantity. But then
limn eiaSn () 1
eiaSn () 1
.
Sn () = R a itS ()
Aa
e n dt
0
Therefore, except for N N 0 , we have that Sn () converges.

20. Central limit theorem.


The simplest case of the central limit theorem (CLT) is the case when the $X_i$ are i.i.d., with mean zero and variance one; then the CLT says that $S_n/\sqrt{n}$ converges weakly to a standard normal. We first prove this case.
We need the fact that if $c_n$ are complex numbers converging to $c$, then $(1 + (c_n/n))^n \to e^c$. We leave the proof of this to the reader, with the warning that any proof using logarithms needs to be done with some care, since $\log z$ is a multi-valued function when $z$ is complex.

Theorem 20.1. Suppose the Xi are i.i.d., mean zero, and variance one. Then Sn / n converges weakly to
a standard normal.
Proof. Since X1 has finite second moment, then X1 has a continuous second derivative. By Taylors
theorem,
X1 (t) = X1 (0) + 0X1 (0)t + 00X1 (0)t2 /2 + R(t),
where |R(t)|/t2 0 as |t| 0. So
X1 (t) = 1 t2 /2 + R(t).
Then
h

in
t2
Sn /n (t) = Sn (t/ n) = (X1 (t/ n))n = 1
+ R(t/ n) .
2n

Since t/ n converges to zero as n , we have


2

Sn /n (t) et

/2

Now apply the continuity theorem.
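A small simulation makes the theorem concrete (our own sketch, not part of the notes); we use $\pm 1$ coin-flip steps, which have mean zero and variance one:

```python
import numpy as np

# S_n/sqrt(n) for i.i.d. +-1 steps should look standard normal for large n.
# For +-1 steps, S_n = 2*Binomial(n, 1/2) - n.
rng = np.random.default_rng(1)
n, trials = 10_000, 100_000
s = 2.0 * rng.binomial(n, 0.5, size=trials) - n
z = s / np.sqrt(n)

# Compare empirical probabilities with standard normal values.
for c in [0.0, 1.0, 2.0]:
    print(c, (z <= c).mean())   # approx 0.500, 0.841, 0.977
```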

Let us give another proof of this simple CLT that does not use characteristic functions. For simplicity let $X_i$ be i.i.d. mean zero, variance one random variables with $E|X_i|^3 < \infty$.

Proposition 20.2. With $X_i$ as above, $S_n/\sqrt{n}$ converges weakly to a standard normal.
Proof. Let $Y_1, \ldots, Y_n$ be i.i.d. standard normal r.v.s that are independent of the $X_i$. Let $Z_1 = Y_2 + \cdots + Y_n$, $Z_2 = X_1 + Y_3 + \cdots + Y_n$, $Z_3 = X_1 + X_2 + Y_4 + \cdots + Y_n$, etc.
Let us suppose $g \in C^3$ with compact support and let $W$ be a standard normal. Our first goal is to show
$$|E g(S_n/\sqrt{n}) - E g(W)| \to 0. \tag{20.1}$$
We have
$$E g(S_n/\sqrt{n}) - E g(W) = E g(S_n/\sqrt{n}) - E g\Bigl(\sum_{i=1}^n Y_i/\sqrt{n}\Bigr) = \sum_{i=1}^n \Bigl[E g\Bigl(\frac{X_i + Z_i}{\sqrt{n}}\Bigr) - E g\Bigl(\frac{Y_i + Z_i}{\sqrt{n}}\Bigr)\Bigr].$$
By Taylor's theorem,
$$g\Bigl(\frac{X_i + Z_i}{\sqrt{n}}\Bigr) = g(Z_i/\sqrt{n}) + g'(Z_i/\sqrt{n})\frac{X_i}{\sqrt{n}} + \frac{1}{2}g''(Z_i/\sqrt{n})\frac{X_i^2}{n} + R_n,$$
where $|R_n| \le \|g'''\|_\infty |X_i|^3/n^{3/2}$. Taking expectations and using the independence of $X_i$ and $Z_i$,
$$E g\Bigl(\frac{X_i + Z_i}{\sqrt{n}}\Bigr) = E g(Z_i/\sqrt{n}) + 0 + \frac{1}{2n}E g''(Z_i/\sqrt{n}) + E R_n.$$
We have a very similar expression for $E g((Y_i + Z_i)/\sqrt{n})$. Taking the difference,
$$\Bigl|E g\Bigl(\frac{X_i + Z_i}{\sqrt{n}}\Bigr) - E g\Bigl(\frac{Y_i + Z_i}{\sqrt{n}}\Bigr)\Bigr| \le \|g'''\|_\infty\,\frac{E|X_i|^3 + E|Y_i|^3}{n^{3/2}}.$$
Summing over $i$ from 1 to $n$, we have (20.1).
By approximating continuous functions with compact support by $C^3$ functions with compact support, we have (20.1) for such $g$. Since $E(S_n/\sqrt{n})^2 = 1$, the sequence $S_n/\sqrt{n}$ is tight. So given $\varepsilon$ there exists $M$ such that $P(|S_n/\sqrt{n}| > M) < \varepsilon$ for all $n$. By taking $M$ larger if necessary, we also have $P(|W| > M) < \varepsilon$. Suppose $g$ is bounded and continuous. Let $\chi$ be a continuous function with compact support that is bounded by one, is nonnegative, and equals 1 on $[-M, M]$. By (20.1) applied to $g\chi$,
$$|E(g\chi)(S_n/\sqrt{n}) - E(g\chi)(W)| \to 0.$$
However,
$$|E g(S_n/\sqrt{n}) - E(g\chi)(S_n/\sqrt{n})| \le \|g\|_\infty\,P(|S_n/\sqrt{n}| > M) < \varepsilon\|g\|_\infty,$$
and similarly
$$|E g(W) - E(g\chi)(W)| < \varepsilon\|g\|_\infty.$$
Since $\varepsilon$ is arbitrary, this proves (20.1) for bounded continuous $g$. By Proposition 16.1, this proves our proposition. □

We give another example of the use of characteristic functions.
Proposition 20.3. Suppose for each $n$ the r.v.s $X_{ni}$, $i = 1, \ldots, n$, are i.i.d. Bernoullis with parameter $p_n$. If $np_n \to \lambda$ and $S_n = \sum_{i=1}^n X_{ni}$, then $S_n$ converges weakly to a Poisson r.v. with parameter $\lambda$.
Proof. We write
$$\varphi_{S_n}(t) = \bigl[\varphi_{X_{n1}}(t)\bigr]^n = \bigl[1 + p_n(e^{it}-1)\bigr]^n = \Bigl[1 + \frac{np_n}{n}(e^{it}-1)\Bigr]^n \to e^{\lambda(e^{it}-1)}.$$
Now apply the continuity theorem. □
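The convergence is easy to see numerically. The following sketch (ours, with an arbitrary choice $\lambda = 3$) compares the empirical distribution of $S_n$ with the Poisson probabilities:

```python
import numpy as np
from math import exp, factorial

# Binomial(n, lambda/n) counts approach Poisson(lambda) probabilities.
lam, n = 3.0, 1000
rng = np.random.default_rng(2)
s = rng.binomial(n, lam / n, size=200_000)   # S_n = sum of n Bernoulli(lambda/n)

for k in range(6):
    poisson_pk = exp(-lam) * lam**k / factorial(k)
    print(k, round((s == k).mean(), 4), round(poisson_pk, 4))
```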
A much more general theorem than Theorem 20.1 is the Lindeberg-Feller theorem.
Theorem 20.4. Suppose for each $n$, $X_{ni}$, $i = 1, \ldots, n$, are mean zero independent random variables. Suppose
(a) $\sum_{i=1}^n E X_{ni}^2 \to \sigma^2 > 0$, and
(b) for each $\varepsilon$, $\sum_{i=1}^n E\bigl[|X_{ni}|^2; |X_{ni}| > \varepsilon\bigr] \to 0$.
Let $S_n = \sum_{i=1}^n X_{ni}$. Then $S_n$ converges weakly to a normal r.v. with mean zero and variance $\sigma^2$.
Note nothing is said about independence of the $X_{ni}$ for different $n$.

Let us look at Theorem 20.1 in light of this theorem. Suppose the Yi are i.i.d. and let Xni = Yi / n.
Then
n
X

E (Yi / n)2 = E Y12


i=1

and

n
X

E [|Xni |2 ; |Xni | > ] = nE [|Y1 |2 /n; |Y1 | >

n] = E [|Y1 |2 ; |Y1 | >

n],

i=1

which tends to 0 by the dominated convergence theorem.


If the $Y_i$ are independent with mean 0, and
$$\frac{\sum_{i=1}^n E|Y_i|^3}{(\mathrm{Var}\,S_n)^{3/2}} \to 0,$$
then $S_n/(\mathrm{Var}\,S_n)^{1/2}$ converges weakly to a standard normal. This is known as Lyapounov's theorem; we leave the derivation of this from the Lindeberg-Feller theorem as an exercise for the reader.
2
Proof. Let ni be the characteristic function of Xni and let ni
be the variance of Xni . We need to show
n
Y

ni (t) et

2 /2

i=1

Using Taylor series, |eib 1 ib + b2 /2| c|b|3 for a constant c. Also,


|eib 1 ib + b2 /2| |eib 1 ib| + |b2 |/2 c|b|2 .
If we apply this to a random variable tY and take expectations,
|Y (t) (1 + itE Y t2 E Y 2 /2)| c(t2 E Y 2 t3 E Y 3 ).
Applying this to Y = Xni ,
2
|ni (t) (1 t2 ni
/2)| cE [t3 |Xni |3 t2 |Xni |2 ].

31

(20.2)

The right hand side is less than or equal to


cE [t3 |Xni |3 ;|Xni | ] + cE [t2 |Xni |2 ; |Xni | > ]
ct3 E [|Xni |2 ] + ct2 E [|Xni |2 ; |Xni | ].
Summing over i we obtain
n
X

2
|ni (t) (1 t2 ni
/2)| ct3

E [|Xni |2 ] + ct2

E [|Xni |2 ; |Xni | ].

i=1

We need the following inequality: if $|a_i|, |b_i| \le 1$, then
$$\Bigl|\prod_{i=1}^n a_i - \prod_{i=1}^n b_i\Bigr| \le \sum_{i=1}^n |a_i - b_i|.$$
To prove this, note
$$\prod_{i=1}^n a_i - \prod_{i=1}^n b_i = (a_n - b_n)\prod_{i<n} b_i + a_n\Bigl(\prod_{i<n} a_i - \prod_{i<n} b_i\Bigr)$$
and use induction.


Note $|\varphi_{ni}(t)| \le 1$ and $|1 - t^2\sigma_{ni}^2/2| \le 1$, because $\sigma_{ni}^2 \le \varepsilon^2 + E[|X_{ni}|^2; |X_{ni}| > \varepsilon] < 1/t^2$ if we take $\varepsilon$ small enough and $n$ large enough. So
$$\Bigl|\prod_{i=1}^n \varphi_{ni}(t) - \prod_{i=1}^n (1 - t^2\sigma_{ni}^2/2)\Bigr| \le c\varepsilon t^3 \sum_i E[|X_{ni}|^2] + ct^2 \sum_i E[|X_{ni}|^2; |X_{ni}| > \varepsilon].$$
Since $\sup_i \sigma_{ni}^2 \to 0$, $\log(1 - t^2\sigma_{ni}^2/2)$ is asymptotically equal to $-t^2\sigma_{ni}^2/2$, and so
$$\prod_i (1 - t^2\sigma_{ni}^2/2) = \exp\Bigl(\sum_i \log(1 - t^2\sigma_{ni}^2/2)\Bigr) \quad\text{is asymptotically equal to}\quad \exp\Bigl(-t^2\sum_i \sigma_{ni}^2/2\Bigr) \to e^{-\sigma^2 t^2/2}.$$
Since $\varepsilon$ is arbitrary, the proof is complete. □
We now complete the proof of Theorem 8.2.
Proof of "only if" part of Theorem 8.2. Since $\sum X_n$ converges, then $X_n$ must converge to zero a.s., and so $P(|X_n| > A \text{ i.o.}) = 0$. By the Borel-Cantelli lemma, this says $\sum P(|X_n| > A) < \infty$. We also conclude by Proposition 5.4 that $\sum Y_n$ converges.
Let $c_n = \sum_{i=1}^n \mathrm{Var}\,Y_i$ and suppose $c_n \to \infty$. Let $Z_{nm} = (Y_m - E Y_m)/\sqrt{c_n}$. Then $\sum_{m=1}^n \mathrm{Var}\,Z_{nm} = (1/c_n)\sum_{m=1}^n \mathrm{Var}\,Y_m = 1$. If $\varepsilon > 0$, then for $n$ large, we have $2A/\sqrt{c_n} < \varepsilon$. Since $|Y_m| \le A$ and hence $|E Y_m| \le A$, then $|Z_{nm}| \le 2A/\sqrt{c_n} < \varepsilon$. It follows that $\sum_{m=1}^n E(|Z_{nm}|^2; |Z_{nm}| > \varepsilon) = 0$ for large $n$.
By Theorem 20.4, $\sum_{m=1}^n (Y_m - E Y_m)/\sqrt{c_n}$ converges weakly to a standard normal. However, $\sum_{m=1}^n Y_m$ converges and $c_n \to \infty$, so $\sum_{m=1}^n Y_m/\sqrt{c_n}$ must converge to 0. The quantities $\sum_{m=1}^n E Y_m/\sqrt{c_n}$ are nonrandom, so there is no way the difference can converge to a standard normal, a contradiction. We conclude $c_n$ does not converge to infinity.
Let $V_i = Y_i - E Y_i$. Since $|V_i| \le 2A$, $E V_i = 0$, and $\mathrm{Var}\,V_i = \mathrm{Var}\,Y_i$, which is summable, by the "if" part of the three series criterion, $\sum V_i$ converges. Since $\sum Y_i$ converges, taking the difference shows $\sum E Y_i$ converges. □

21. Framework for Markov chains.


Suppose $S$ is a set with some topological structure that we will use as our state space. Think of $S$ as being $\mathbb{R}^d$ or the positive integers, for example. A sequence of random variables $X_0, X_1, \ldots$ is a Markov chain if
$$P(X_{n+1} \in A \mid X_0, \ldots, X_n) = P(X_{n+1} \in A \mid X_n) \tag{21.1}$$
for all $n$ and all measurable sets $A$. The definition of Markov chain has this intuition: to predict the probability that $X_{n+1}$ is in any set, we only need to know where we currently are; how we got there provides no additional information.
Let us make some additional comments. First of all, we previously considered random variables as mappings from $\Omega$ to $\mathbb{R}$. Now we want to extend our definition by allowing a random variable to be a map $X$ from $\Omega$ to $S$, where $(X \in A)$ is $\mathcal{F}$ measurable for all open sets $A$. This agrees with the definition of r.v. in the case $S = \mathbb{R}$.
Although there is quite a theory developed for Markov chains with arbitrary state spaces, we will confine our attention to the case where either S is finite, in which case we will usually suppose S = {1, 2, . . . , n},
or countable and discrete, in which case we will usually suppose S is the set of positive integers.
We are going to further restrict our attention to Markov chains where
$$P(X_{n+1} \in A \mid X_n = x) = P(X_1 \in A \mid X_0 = x),$$
that is, where the probabilities do not depend on $n$. Such Markov chains are said to have stationary transition probabilities.
Define the initial distribution of a Markov chain with stationary transition probabilities by $\mu(i) = P(X_0 = i)$. Define the transition probabilities by $p(i,j) = P(X_{n+1} = j \mid X_n = i)$. Since the transition probabilities are stationary, $p(i,j)$ does not depend on $n$.
In this case we can use the definition of conditional probability given in undergraduate classes. If
P(Xn = i) = 0 for all n, that means we never visit i and we could drop the point i from the state space.
Proposition 21.1. Let $X$ be a Markov chain with initial distribution $\mu$ and transition probabilities $p(i,j)$. Then
$$P(X_n = i_n, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0) = \mu(i_0)\,p(i_0,i_1)\cdots p(i_{n-1},i_n). \tag{21.2}$$
Proof. We use induction on $n$. It is clearly true for $n = 0$ by the definition of $\mu(i)$. Suppose it holds for $n$; we need to show it holds for $n+1$. For simplicity, we will do the case $n = 2$. Then
$$P(X_3 = i_3, X_2 = i_2, X_1 = i_1, X_0 = i_0) = E\bigl[P(X_3 = i_3 \mid X_0 = i_0, X_1 = i_1, X_2 = i_2);\ X_2 = i_2, X_1 = i_1, X_0 = i_0\bigr]$$
$$= E\bigl[P(X_3 = i_3 \mid X_2 = i_2);\ X_2 = i_2, X_1 = i_1, X_0 = i_0\bigr] = p(i_2,i_3)\,P(X_2 = i_2, X_1 = i_1, X_0 = i_0).$$
Now by the induction hypothesis,
$$P(X_2 = i_2, X_1 = i_1, X_0 = i_0) = \mu(i_0)\,p(i_0,i_1)\,p(i_1,i_2).$$
Substituting establishes the claim for $n = 3$. □
The above proposition says that the law of the Markov chain is determined by the $\mu(i)$ and $p(i,j)$. The formula (21.2) also gives a prescription for constructing a Markov chain given the $\mu(i)$ and $p(i,j)$.

P
Proposition 21.2. Suppose (i) is a sequence of nonnegative numbers with i (i) = 1 and for each i
the sequence p(i, j) is nonnegative and sums to 1. Then there exists a Markov chain with (i) as its initial
distribution and p(i, j) as the transition probabilities.
Proof. Define = S . Let F be the -fields generated by the collection of sets {(i0 , i1 , . . . , in ) : n >
0, ij S}. An element of is a sequence (i0 , i1 , . . .). Define Xj () = ij if = (i0 , i1 , . . .). Define
P(X0 = i0 , . . . , Xn = in ) by (21.2). Using the Kolmogorov extension theorem, one can show that P can be
extended to a probability on .
The above framework is rather abstract, but it is clear that under $P$ the sequence $X_n$ has initial distribution $\mu(i)$; what we need to show is that $X_n$ is a Markov chain and that
$$P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) = P(X_{n+1} = i_{n+1} \mid X_n = i_n) = p(i_n, i_{n+1}). \tag{21.3}$$
By the definition of conditional probability, the left hand side of (21.3) is
$$P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) = \frac{P(X_{n+1} = i_{n+1}, X_n = i_n, \ldots, X_0 = i_0)}{P(X_n = i_n, \ldots, X_0 = i_0)} = \frac{\mu(i_0)\cdots p(i_{n-1},i_n)\,p(i_n,i_{n+1})}{\mu(i_0)\cdots p(i_{n-1},i_n)} = p(i_n, i_{n+1}), \tag{21.4}$$
as desired.
To complete the proof we need to show
$$\frac{P(X_{n+1} = i_{n+1}, X_n = i_n)}{P(X_n = i_n)} = p(i_n, i_{n+1}),$$
or
$$P(X_{n+1} = i_{n+1}, X_n = i_n) = p(i_n, i_{n+1})\,P(X_n = i_n). \tag{21.5}$$
Now
$$P(X_n = i_n) = \sum_{i_0,\ldots,i_{n-1}} P(X_n = i_n, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = \sum_{i_0,\ldots,i_{n-1}} \mu(i_0)\cdots p(i_{n-1}, i_n),$$
and similarly
$$P(X_{n+1} = i_{n+1}, X_n = i_n) = \sum_{i_0,\ldots,i_{n-1}} P(X_{n+1} = i_{n+1}, X_n = i_n, \ldots, X_0 = i_0) = p(i_n, i_{n+1})\sum_{i_0,\ldots,i_{n-1}} \mu(i_0)\cdots p(i_{n-1}, i_n).$$
Equation (21.5) now follows. □


Note in this construction that the $X_n$ sequence is fixed and does not depend on $\mu$ or $p$. Let $p(i,j)$ be fixed. The probability we constructed above is often denoted $P^\mu$. If $\mu$ is point mass at a point $i$ or $x$, it is denoted $P^i$ or $P^x$. So we have one probability space, one sequence $X_n$, but a whole family of probabilities $P^\mu$.
Later on we will see that this framework allows one to express the Markov property and strong Markov property in a convenient way. As part of the preparation for doing this, we define the shift operators $\theta_k : \Omega \to \Omega$ by
$$\theta_k(i_0, i_1, \ldots) = (i_k, i_{k+1}, \ldots).$$
Then $X_j \circ \theta_k = X_{j+k}$. To see this, if $\omega = (i_0, i_1, \ldots)$, then
$$X_j \circ \theta_k(\omega) = X_j(i_k, i_{k+1}, \ldots) = i_{j+k} = X_{j+k}(\omega).$$
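The construction in Proposition 21.2 is easy to mirror on a computer: draw $X_0$ from $\mu$, then repeatedly draw $X_{n+1}$ from row $X_n$ of the transition matrix. Here is a minimal sketch (our own illustration, with a hypothetical two-state chain):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_chain(mu, p, n_steps):
    # Sample a trajectory of a Markov chain with initial distribution mu
    # and transition matrix p (rows of p sum to 1).
    states = np.arange(len(mu))
    x = [rng.choice(states, p=mu)]                 # X_0 ~ mu
    for _ in range(n_steps):
        x.append(rng.choice(states, p=p[x[-1]]))   # X_{n+1} ~ p(X_n, .)
    return np.array(x)

# Hypothetical two-state example.
mu = np.array([1.0, 0.0])
p = np.array([[0.9, 0.1],
              [0.5, 0.5]])
path = simulate_chain(mu, p, 10_000)
print(path[:20], (path == 0).mean())   # fraction of time spent in state 0
```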
22. Examples.
Random walk on the integers
We let $Y_i$ be an i.i.d. sequence of r.v.s, with $p = P(Y_i = 1)$ and $1 - p = P(Y_i = -1)$. Let $X_n = X_0 + \sum_{i=1}^n Y_i$. Then the $X_n$ can be viewed as a Markov chain with $p(i, i+1) = p$, $p(i, i-1) = 1-p$, and $p(i,j) = 0$ if $|j - i| \ne 1$. More general random walks on the integers also fit into this framework. To check that the random walk is Markov,
$$P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) = P(X_{n+1} - X_n = i_{n+1} - i_n \mid X_0 = i_0, \ldots, X_n = i_n) = P(X_{n+1} - X_n = i_{n+1} - i_n),$$
using the independence, while
$$P(X_{n+1} = i_{n+1} \mid X_n = i_n) = P(X_{n+1} - X_n = i_{n+1} - i_n \mid X_n = i_n) = P(X_{n+1} - X_n = i_{n+1} - i_n).$$
Random walks on graphs
Suppose we have $n$ points, and from each point there is some probability of going to another point. For example, suppose there are 5 points and we have $p(1,2) = \frac12$, $p(1,3) = \frac12$, $p(2,1) = \frac14$, $p(2,3) = \frac12$, $p(2,5) = \frac14$, $p(3,1) = \frac14$, $p(3,2) = \frac14$, $p(3,3) = \frac12$, $p(4,1) = 1$, $p(5,1) = \frac12$, $p(5,5) = \frac12$. The $p(i,j)$ are often arranged into a matrix:
$$P = \begin{pmatrix} 0 & \frac12 & \frac12 & 0 & 0\\ \frac14 & 0 & \frac12 & 0 & \frac14\\ \frac14 & \frac14 & \frac12 & 0 & 0\\ 1 & 0 & 0 & 0 & 0\\ \frac12 & 0 & 0 & 0 & \frac12 \end{pmatrix}.$$
Note the rows must sum to 1, since
$$\sum_{j=1}^5 p(i,j) = \sum_{j=1}^5 P(X_1 = j \mid X_0 = i) = P(X_1 \in S \mid X_0 = i) = 1.$$
Renewal processes
Let $Y_i$ be i.i.d. with $P(Y_i = k) = a_k$, where the $a_k$ are nonnegative and sum to 1. Let $T_0 = i_0$ and $T_n = T_0 + \sum_{i=1}^n Y_i$. We think of $Y_n$ as the lifetime of the $n$th light bulb and $T_n$ as the time when the $n$th light bulb burns out. (We replace a light bulb as soon as it burns out.) Let
$$X_n = \min\{m - n : m \ge n,\ T_i = m \text{ for some } i\}.$$
So $X_n$ is the amount of time after time $n$ until the current light bulb burns out.
If $X_n = j$ and $j > 0$, then $T_i = n + j$ for some $i$ but $T_i$ does not equal $n, n+1, \ldots, n+j-1$ for any $i$. So $T_i = (n+1) + (j-1)$ for some $i$ and $T_i$ does not equal $(n+1), (n+1)+1, \ldots, (n+1)+(j-2)$ for any $i$. Therefore $X_{n+1} = j - 1$. So $p(i, i-1) = 1$ if $i \ge 1$.
If $X_n = 0$, then a light bulb burned out at time $n$, and $X_{n+1}$ is 0 if the next light bulb burns out immediately and $j - 1$ if the new light bulb has lifetime $j$. The probability of the latter is $a_j$. So $p(0,j) = a_{j+1}$. All the other $p(i,j)$'s are 0.
Branching processes
Consider $k$ particles. At the next time interval, some of them die, and some of them split into several particles. The probability that a given particle will split into $j$ particles is given by $a_j$, $j = 0, 1, \ldots$, where the $a_j$ are nonnegative and sum to 1. The behavior of each particle is independent of the behavior of all the other particles. If $X_n$ is the number of particles at time $n$, then $X_n$ is a Markov chain. Let $Y_i$ be i.i.d. random variables with $P(Y_i = j) = a_j$. The $p(i,j)$ for $X_n$ are somewhat complicated, and can be defined by $p(i,j) = P(\sum_{m=1}^i Y_m = j)$.
Queues
We will discuss briefly the M/G/1 queue. The M refers to the fact that the customers arrive according to a Poisson process: the probability that the number of customers arriving in a time interval of length $t$ is $k$ is given by $e^{-\lambda t}(\lambda t)^k/k!$. The G refers to the fact that the length of time it takes to serve a customer is given by a distribution that is not necessarily exponential. The 1 refers to the fact that there is 1 server.
Suppose the length of time to serve one customer has distribution function $F$ with density $f$. The probability that $k$ customers arrive during the time it takes to serve one customer is
$$a_k = \int_0^\infty e^{-\lambda t}\frac{(\lambda t)^k}{k!}\,f(t)\,dt.$$
Let the $Y_i$ be i.i.d. with $P(Y_i = k - 1) = a_k$; thus $Y_i + 1$ is the number of customers arriving during the time it takes to serve one customer. Let $X_{n+1} = (X_n + Y_{n+1})^+$ be the number of customers waiting. Then $X_n$ is a Markov chain with $p(0,0) = a_0 + a_1$ and $p(j, j-1+k) = a_k$ if $j \ge 1$, $k \ge 0$.
Ehrenfest urns
Suppose we have two urns with a total of $r$ balls, $k$ in one and $r - k$ in the other. Pick one of the $r$ balls at random and move it to the other urn. Let $X_n$ be the number of balls in the first urn. $X_n$ is a Markov chain with $p(k, k+1) = (r-k)/r$, $p(k, k-1) = k/r$, and $p(i,j) = 0$ otherwise.
One model for this is to consider two containers of air with a thin tube connecting them. Suppose
a few molecules of a foreign substance are introduced. Then the number of molecules in the first container
is like an Ehrenfest urn. We shall see that all states in this model are recurrent, so infinitely often all the
molecules of the foreign substance will be in the first urn. Yet there is a tendency towards equilibrium, so
on average there will be about the same number of molecules in each container for all large times.
Birth and death processes
Suppose there are $i$ particles, and the probability of a birth is $a_i$, the probability of a death is $b_i$, where $a_i, b_i \ge 0$ and $a_i + b_i \le 1$. Setting $X_n$ equal to the number of particles, $X_n$ is a Markov chain with $p(i, i+1) = a_i$, $p(i, i-1) = b_i$, and $p(i,i) = 1 - a_i - b_i$.
23. Markov properties.
A special case of the Markov property says that
$$E^x[f(X_{n+1}) \mid \mathcal{F}_n] = E^{X_n} f(X_1). \tag{23.1}$$
The right hand side is to be interpreted as $\psi(X_n)$, where $\psi(y) = E^y f(X_1)$. The randomness on the right hand side all comes from the $X_n$. If we write $f(X_{n+1}) = f(X_1)\circ\theta_n$ and we write $Y$ for $f(X_1)$, then the above can be rewritten
$$E^x[Y \circ \theta_n \mid \mathcal{F}_n] = E^{X_n} Y.$$
Let $\mathcal{F}_\infty$ be the $\sigma$-field generated by $\cup_{n=1}^\infty \mathcal{F}_n$.
Theorem 23.1. (Markov property) If $Y$ is bounded and measurable with respect to $\mathcal{F}_\infty$, then
$$E^x[Y \circ \theta_n \mid \mathcal{F}_n] = E^{X_n}[Y], \qquad P^x\text{-a.s.},$$
for each $n$ and $x$.

Proof. If we can prove this for Y = f1 (X1 ) fm (Xm ), then taking fj (x) = 1ij (x), we will have it for Y s
of the form 1(X1 =i1 ,...,Xm =im ) . By linearity (and the fact that S is countable), we will then have it for Y s
of the form 1((X1 ,...,Xm )B) . A monotone class argument shows that such Y s generate F .
We use induction on m, and first we prove it for m = 1. We need to show
E [f1 (X1 ) n | Fn ] = E Xn f1 (X1 ).
Using linearity and the fact that S is countable, it suffices to show this for f1 (y) = 1{j} (y). Using the
definition of n , we need to show
P(Xn+1 = j | Fn ) = PXn (X1 = j),
or equivalently,
Px (Xn+1 = j; A) = E x [PXn (X1 = j); A]

(23.2)

when A Fn . By linearity it suffices to consider A of the form A = (X1 = i1 , . . . , Xn = in ). The left hand
side of (23.2) is then
Px (Xn+1 = j, X1 = i1 , . . . , Xn = ij ),
and by (21.4) this is equal to
p(in , j)Px (X1 = i1 , . . . , Xn in ) = p(in , j)Px (A).
Let g(y) = Py (X1 = j). We have
Px (X1 = j, X0 = k) =

Pk (X1 = j) if x = k,
0
if x =
6 k,

while
E x [g(X0 ); X0 = k] = E x [g(k); X0 = k] = Pk (X1 = j)Px (X0 = k) =

P( X1 = j) if x = k,
0
if x =
6 k.

It follows that
p(i, j) = Px (X1 = j | X0 = i) = Pi (X1 = j).
So the right hand side of (23.2) is
E x [p(Xn , j); x1 = i1 , . . . , Xn = in ] = p(in , j)Px (A)
37

as required.
Suppose the result holds for $m$ and we want to show it holds for $m + 1$. We have
$$E^x[f_1(X_{n+1})\cdots f_{m+1}(X_{n+m+1}) \mid \mathcal{F}_n] = E^x\bigl[E^x[f_{m+1}(X_{n+m+1}) \mid \mathcal{F}_{n+m}]\,f_1(X_{n+1})\cdots f_m(X_{n+m}) \mid \mathcal{F}_n\bigr]$$
$$= E^x\bigl[E^{X_{n+m}}[f_{m+1}(X_1)]\,f_1(X_{n+1})\cdots f_m(X_{n+m}) \mid \mathcal{F}_n\bigr] = E^x\bigl[f_1(X_{n+1})\cdots f_{m-1}(X_{n+m-1})\,h(X_{n+m}) \mid \mathcal{F}_n\bigr].$$
Here we used the result for $m = 1$ and we defined $h(y) = f_m(y)g(y)$, where $g(y) = E^y[f_{m+1}(X_1)]$. Using the induction hypothesis, this is equal to
$$E^{X_n}[f_1(X_1)\cdots f_{m-1}(X_{m-1})\,h(X_m)] = E^{X_n}\bigl[f_1(X_1)\cdots f_m(X_m)\,E^{X_m} f_{m+1}(X_1)\bigr]$$
$$= E^{X_n}\bigl[f_1(X_1)\cdots f_m(X_m)\,E[f_{m+1}(X_{m+1}) \mid \mathcal{F}_m]\bigr] = E^{X_n}[f_1(X_1)\cdots f_{m+1}(X_{m+1})],$$
which is what we needed. □
Define $\theta_N(\omega) = (\theta_{N(\omega)})(\omega)$. The strong Markov property is the same as the Markov property, but where the fixed time $n$ is replaced by a stopping time $N$.
Theorem 23.2. If $Y$ is bounded and measurable and $N$ is a finite stopping time, then
$$E^x[Y \circ \theta_N \mid \mathcal{F}_N] = E^{X_N}[Y].$$
Proof. We will show
$$P^x(X_{N+1} = j \mid \mathcal{F}_N) = P^{X_N}(X_1 = j).$$
Once we have this, we can proceed as in the proof of Theorem 23.1 to obtain our result. To show the above equality, we need to show that if $B \in \mathcal{F}_N$, then
$$P^x(X_{N+1} = j, B) = E^x[P^{X_N}(X_1 = j); B]. \tag{23.3}$$
Recall that since $B \in \mathcal{F}_N$, then $B \cap (N = k) \in \mathcal{F}_k$. We have
$$P^x(X_{N+1} = j, B, N = k) = P^x(X_{k+1} = j, B, N = k) = E^x[P^x(X_{k+1} = j \mid \mathcal{F}_k); B, N = k]$$
$$= E^x[P^{X_k}(X_1 = j); B, N = k] = E^x[P^{X_N}(X_1 = j); B, N = k].$$
Now sum over $k$; since $N$ is finite, we obtain our desired result. □
Another way of expressing the Markov property is through the Chapman-Kolmogorov equations. Let
$$p^n(i,j) = P(X_n = j \mid X_0 = i).$$
Proposition 23.3. For all $i, j, m, n$ we have
$$p^{n+m}(i,j) = \sum_{k\in S} p^n(i,k)\,p^m(k,j).$$
Proof. We write
$$P(X_{n+m} = j, X_0 = i) = \sum_k P(X_{n+m} = j, X_n = k, X_0 = i) = \sum_k P(X_{n+m} = j \mid X_n = k, X_0 = i)\,P(X_n = k \mid X_0 = i)\,P(X_0 = i)$$
$$= \sum_k P(X_{n+m} = j \mid X_n = k)\,p^n(i,k)\,P(X_0 = i) = \sum_k p^m(k,j)\,p^n(i,k)\,P(X_0 = i).$$
If we divide both sides by $P(X_0 = i)$, we have our result. □

Note the resemblance to matrix multiplication. It is clear that if $P$ is the matrix made up of the $p(i,j)$, then $P^n$ will be the matrix whose $(i,j)$ entry is $p^n(i,j)$.
24. Recurrence and transience.
Let
$$T_y = \min\{i > 0 : X_i = y\}.$$
This is the first time that $X_i$ hits the point $y$. Even if $X_0 = y$ we would have $T_y > 0$. We let $T_y^k$ be the $k$-th time that the Markov chain hits $y$, and we set
$$r(x,y) = P^x(T_y < \infty),$$
the probability starting at $x$ that the Markov chain ever hits $y$.
Proposition 24.1. $P^x(T_y^k < \infty) = r(x,y)\,r(y,y)^{k-1}$.
Proof. The case $k = 1$ is just the definition, so suppose $k > 1$. Using the strong Markov property,
$$P^x(T_y^k < \infty) = P^x(T_y\circ\theta_{T_y^{k-1}} < \infty,\ T_y^{k-1} < \infty) = E^x\bigl[P^x(T_y\circ\theta_{T_y^{k-1}} < \infty \mid \mathcal{F}_{T_y^{k-1}});\ T_y^{k-1} < \infty\bigr]$$
$$= E^x\bigl[P^{X(T_y^{k-1})}(T_y < \infty);\ T_y^{k-1} < \infty\bigr] = E^x\bigl[P^y(T_y < \infty);\ T_y^{k-1} < \infty\bigr] = r(y,y)\,P^x(T_y^{k-1} < \infty).$$
We used here the fact that at time $T_y^{k-1}$ the Markov chain must be at the point $y$. Repeating this argument $k-2$ times yields the result. □
We say that $y$ is recurrent if $r(y,y) = 1$; otherwise we say $y$ is transient. Let
$$N(y) = \sum_{n=1}^\infty 1_{(X_n = y)}.$$
Proposition 24.2. $y$ is recurrent if and only if $E^y N(y) = \infty$.
Proof. Note
$$E^y N(y) = \sum_{k=1}^\infty P^y(N(y) \ge k) = \sum_{k=1}^\infty P^y(T_y^k < \infty) = \sum_{k=1}^\infty r(y,y)^k.$$
We used the fact that $N(y)$ is the number of visits to $y$, and the number of visits being at least $k$ is the same as the time of the $k$-th visit being finite. Since $r(y,y) \le 1$, the left hand side will be finite if and only if $r(y,y) < 1$. □
Observe that
$$E^y N(y) = \sum_{n=1}^\infty P^y(X_n = y) = \sum_{n=1}^\infty p^n(y,y).$$
If we consider simple symmetric random walk on the integers, then $p^n(0,0)$ is 0 if $n$ is odd and equal to $\binom{n}{n/2}2^{-n}$ if $n$ is even. This is because in order to be at 0 after $n$ steps, the walk must have had $n/2$ positive steps and $n/2$ negative steps; the probability of this is given by the binomial distribution. Using Stirling's approximation, we see that $p^n(0,0) \sim c/\sqrt{n}$ for $n$ even, so $\sum_n p^n(0,0)$ diverges, and simple random walk in one dimension is recurrent.
Similar arguments show that simple symmetric random walk is also recurrent in 2 dimensions but transient in 3 or more dimensions.
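The divergence of the partial sums is easy to watch numerically. In the following sketch (ours, not from the notes) we use $p^2(0,0) = 1/2$ together with the exact ratio $p^n(0,0) = p^{n-2}(0,0)\,(n-1)/n$, which follows from the binomial formula:

```python
import math

# Partial sums of p^n(0,0) for 1-d simple symmetric random walk keep growing,
# confirming recurrence; p_n * sqrt(n) also stabilizes near sqrt(2/pi) ~ 0.798.
p_n, total = 0.5, 0.5
for n in range(4, 20_001, 2):        # odd n contribute 0
    p_n *= (n - 1) / n               # p^n(0,0) = C(n, n/2) 2^{-n}
    total += p_n
    if n % 4000 == 0:
        print(n, round(p_n * math.sqrt(n), 4), round(total, 2))
```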


Proposition 24.3. If x is recurrent and r(x, y) > 0, then y is recurrent and r(y, x) = 1.
Proof. First we show r(y, x) = 1. Suppose not. Since r(x, y) > 0, there is a smallest n and y1 , . . . , yn1
such that p(x, y1 )p(y1 , y2 ) p(yn1 , y) > 0. Since this is the smallest n, none of the yi can equal x. Then
Px (Tx = ) p(x, y1 ) p(yn1 , y)(1 r(y, x)) > 0,
a contradiction to x being recurrent.
Next we show that y is recurrent. Since r(y, x) > 0, there exists L such that pL (y, x) > 0. Then
pL+n+K (y, y) pL (y, x)pn (x, x)pK (x, y).
Summing over n,
X

pL+n+K (y, y) pL (y, x)pK (x, y)

pn (x, x) = .

We say a subset $C$ of $S$ is closed if $x \in C$ and $r(x,y) > 0$ implies $y \in C$. A subset $D$ is irreducible if $x, y \in D$ implies $r(x,y) > 0$.
Proposition 24.4. Let $C$ be finite and closed. Then $C$ contains a recurrent state.
From this and the preceding proposition, if $C$ is in addition irreducible, then all states in $C$ will be recurrent.
Proof. If not, for all $y$ we have $r(y,y) < 1$ and
$$E^x N(y) = \sum_{k=1}^\infty r(x,y)\,r(y,y)^{k-1} = \frac{r(x,y)}{1 - r(y,y)} < \infty.$$
Since $C$ is finite, then $\sum_{y\in C} E^x N(y) < \infty$. But that is a contradiction, since for $x \in C$ (using that $C$ is closed)
$$\sum_{y\in C} E^x N(y) = \sum_{y\in C}\sum_n p^n(x,y) = \sum_n\sum_{y\in C} p^n(x,y) = \sum_n P^x(X_n \in C) = \sum_n 1 = \infty. \qquad\Box$$

Theorem 24.5. Let $R = \{x : r(x,x) = 1\}$, the set of recurrent states. Then $R = \cup_{i=1}^\infty R_i$, where each $R_i$ is closed and irreducible.
Proof. Say $x \sim y$ if $r(x,y) > 0$. Since every state in $R$ is recurrent, $x \sim x$, and if $x \sim y$, then $y \sim x$ by Proposition 24.3. If $x \sim y$ and $y \sim z$, then $p^n(x,y) > 0$ and $p^m(y,z) > 0$ for some $n$ and $m$. Then $p^{n+m}(x,z) \ge p^n(x,y)\,p^m(y,z) > 0$, so $x \sim z$. Therefore we have an equivalence relation, and we let the $R_i$ be the equivalence classes. □
Looking at our examples, it is easy to see that in the Ehrenfest urn model all states are recurrent. For the branching process model, suppose $p(x,0) > 0$ for all $x$. Then 0 is recurrent and all the other states are transient. In the renewal chain, there are two cases. If $\{k : a_k > 0\}$ is unbounded, all states are recurrent. If $K = \max\{k : a_k > 0\}$, then $\{0, 1, \ldots, K-1\}$ are recurrent states and the rest are transient.
For the queueing model, let $\rho = \sum_k k a_k$, the expected number of people arriving during one customer's service time. We may view this as a branching process by letting all the customers arriving during one person's service time be considered the progeny of that customer. It turns out that if $\rho \le 1$, 0 is recurrent and all other states are also. If $\rho > 1$, all states are transient.
25. Stationary measures.
A probability $\pi$ is a stationary distribution if
$$\sum_x \pi(x)\,p(x,y) = \pi(y). \tag{25.1}$$
In matrix notation this is $\pi P = \pi$; that is, $\pi$ is a left eigenvector corresponding to the eigenvalue 1. In the case of a stationary distribution, $P^\pi(X_1 = y) = \pi(y)$, which implies that $X_1, X_2, \ldots$ all have the same distribution. We can use (25.1) when $\pi$ is a measure rather than a probability, in which case it is called a stationary measure.
If we have a simple symmetric random walk on the integers, $\pi(x) = 1$ for all $x$ serves as a stationary measure. In the case of an asymmetric random walk, $p(i,i+1) = p$, $p(i,i-1) = q = 1-p$ with $p \ne q$, setting $\pi(x) = (p/q)^x$ also works.
In the Ehrenfest urn model, $\pi(x) = \binom{r}{x}2^{-r}$ works. One way to see this is that $\pi$ is the distribution one gets if one flips $r$ coins and puts a coin in the first urn when the coin is heads. A transition corresponds to picking a coin at random and turning it over.
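The identity $\pi P = \pi$ for the Ehrenfest urn is also quick to verify numerically; the following is our own check, not part of the notes:

```python
import numpy as np
from math import comb

# For the Ehrenfest urn with r balls, pi(x) = C(r, x) 2^{-r} satisfies pi P = pi,
# where p(k, k+1) = (r-k)/r and p(k, k-1) = k/r.
r = 6
P = np.zeros((r + 1, r + 1))
for k in range(r + 1):
    if k < r:
        P[k, k + 1] = (r - k) / r
    if k > 0:
        P[k, k - 1] = k / r

pi = np.array([comb(r, x) / 2**r for x in range(r + 1)])
print(np.allclose(pi @ P, pi))   # True: pi is stationary
```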
Proposition 25.1. Let $a$ be recurrent and let $T = T_a$. Set
$$\mu(y) = E^a \sum_{n=0}^{T-1} 1_{(X_n = y)}.$$
Then $\mu$ is a stationary measure.

The idea of the proof is that $\mu(y)$ is the expected number of visits to $y$ by the sequence $X_0, \ldots, X_{T-1}$, while $\mu P$ is the expected number of visits to $y$ by $X_1, \ldots, X_T$. These should be the same because $X_T = X_0 = a$.
Proof. First, let $\bar p^n(a,y) = P^a(X_n = y, T > n)$. Then
$$\mu(y) = \sum_{n=0}^\infty P^a(X_n = y, T > n) = \sum_{n=0}^\infty \bar p^n(a,y)$$
and
$$\sum_y \mu(y)\,p(y,z) = \sum_y \sum_{n=0}^\infty \bar p^n(a,y)\,p(y,z).$$
Second, we consider the case $z \ne a$. Then
$$\sum_y \bar p^n(a,y)\,p(y,z) = P^a(\text{hit } z \text{ at time } n+1 \text{ without hitting } a \text{ in between}) = \bar p^{n+1}(a,z).$$
So
$$\sum_y \mu(y)\,p(y,z) = \sum_{n=0}^\infty \sum_y \bar p^n(a,y)\,p(y,z) = \sum_{n=0}^\infty \bar p^{n+1}(a,z) = \sum_{n=0}^\infty \bar p^n(a,z) = \mu(z),$$
since $\bar p^0(a,z) = 0$ when $z \ne a$.
Third, we consider the case $z = a$. Then
$$\sum_y \bar p^n(a,y)\,p(y,a) = P^a(T = n+1).$$
Recall $P^a(T = 0) = 0$, and since $a$ is recurrent, $T < \infty$ a.s. So
$$\sum_y \mu(y)\,p(y,a) = \sum_{n=0}^\infty \sum_y \bar p^n(a,y)\,p(y,a) = \sum_{n=0}^\infty P^a(T = n+1) = \sum_{n=1}^\infty P^a(T = n) = 1.$$
On the other hand,
$$\sum_{n=0}^{T-1} 1_{(X_n = a)} = 1_{(X_0 = a)} = 1, \qquad P^a\text{-a.s.},$$
hence $\mu(a) = 1$. Therefore, whether $z \ne a$ or $z = a$, we have $\mu P(z) = \mu(z)$.
Finally, we show $\mu(y) < \infty$ for every $y$. If $r(a,y) = 0$, then $\mu(y) = 0$. If $r(a,y) > 0$, then $r(y,a) > 0$ by Proposition 24.3; choose $n$ so that $p^n(y,a) > 0$. Since $\mu$ is stationary, $\mu = \mu P^n$, and so
$$1 = \mu(a) = \sum_x \mu(x)\,p^n(x,a) \ge \mu(y)\,p^n(y,a),$$
which implies $\mu(y) < \infty$. □

We next turn to uniqueness of the stationary measure. We call the stationary measure constructed in Proposition 25.1 $\mu_a$.

Proposition 25.2. If the Markov chain is irreducible and all states are recurrent, then the stationary measure is unique up to a constant multiple.
Proof. Fix $a \in S$. Let $\mu_a$ be the stationary measure constructed above and let $\nu$ be any other stationary measure.
Since $\nu = \nu P$,
$$\nu(z) = \nu(a)\,p(a,z) + \sum_{y\ne a}\nu(y)\,p(y,z) = \nu(a)\,p(a,z) + \sum_{y\ne a}\nu(a)\,p(a,y)\,p(y,z) + \sum_{x\ne a}\sum_{y\ne a}\nu(x)\,p(x,y)\,p(y,z)$$
$$= \nu(a)\,P^a(X_1 = z) + \nu(a)\,P^a(X_1 \ne a, X_2 = z) + P^\nu(X_0 \ne a, X_1 \ne a, X_2 = z).$$
Continuing,
$$\nu(z) = \nu(a)\sum_{m=1}^n P^a(X_1 \ne a, \ldots, X_{m-1} \ne a, X_m = z) + P^\nu(X_0 \ne a, \ldots, X_{n-1} \ne a, X_n = z) \ge \nu(a)\sum_{m=1}^n P^a(X_1 \ne a, \ldots, X_{m-1} \ne a, X_m = z).$$
Letting $n \to \infty$, we obtain
$$\nu(z) \ge \nu(a)\,\mu_a(z).$$
We then have
$$\nu(a) = \sum_x \nu(x)\,p^n(x,a) \ge \nu(a)\sum_x \mu_a(x)\,p^n(x,a) = \nu(a)\,\mu_a(a) = \nu(a),$$
since $\mu_a$ is stationary and $\mu_a(a) = 1$ (see the proof of Proposition 25.1). This means that we have equality, and so
$$\nu(x) = \nu(a)\,\mu_a(x)$$
whenever $p^n(x,a) > 0$. Since $r(x,a) > 0$, this happens for some $n$. Consequently
$$\frac{\nu(x)}{\nu(a)} = \mu_a(x). \qquad\Box$$

Proposition 25.3. If a stationary distribution $\pi$ exists, then $\pi(y) > 0$ implies $y$ is recurrent.
Proof. If $\pi(y) > 0$, then
$$\infty = \sum_{n=1}^\infty \pi(y) = \sum_{n=1}^\infty \sum_x \pi(x)\,p^n(x,y) = \sum_x \pi(x)\sum_{n=1}^\infty p^n(x,y) = \sum_x \pi(x)\,E^x N(y) = \sum_x \pi(x)\,r(x,y)\bigl[1 + r(y,y) + r(y,y)^2 + \cdots\bigr],$$
where the last equality uses Proposition 24.1. Since $r(x,y) \le 1$ and $\pi$ is a probability measure, this is less than or equal to
$$\sum_x \pi(x)\,(1 + r(y,y) + \cdots) = 1 + r(y,y) + r(y,y)^2 + \cdots.$$
Hence $r(y,y)$ must equal 1. □


Recall that Tx is the first time to hit x.
Proposition 25.4. If the Markov chain is irreducible and has stationary distribution $\pi$, then
$$\pi(x) = \frac{1}{E^x T_x}.$$
Proof. $\pi(x) > 0$ for some $x$. If $y \in S$, then $r(x,y) > 0$ and so $p^n(x,y) > 0$ for some $n$. Hence
$$\pi(y) = \sum_x \pi(x)\,p^n(x,y) > 0.$$
Hence by Proposition 25.3, all states are recurrent. By the uniqueness of the stationary measure, $\mu_x$ is a constant multiple of $\pi$, i.e., $\mu_x = c\pi$. Recall
$$\mu_x(y) = \sum_{n=0}^\infty P^x(X_n = y, T_x > n),$$
and so
$$\sum_y \mu_x(y) = \sum_y \sum_{n=0}^\infty P^x(X_n = y, T_x > n) = \sum_n \sum_y P^x(X_n = y, T_x > n) = \sum_n P^x(T_x > n) = E^x T_x.$$
Thus $c = E^x T_x$. Recalling that $\mu_x(x) = 1$,
$$\pi(x) = \frac{\mu_x(x)}{c} = \frac{1}{E^x T_x}. \qquad\Box$$
We make the following distinction for recurrent states. If $E^x T_x < \infty$, then $x$ is said to be positive recurrent. If $x$ is recurrent but $E^x T_x = \infty$, $x$ is null recurrent.
Proposition 25.5. Suppose a chain is irreducible.
(a) If there exists a positive recurrent state, then there is a stationary distribution.
(b) If there is a stationary distribution, all states are positive recurrent.
(c) If there exists a transient state, all states are transient.
(d) If there exists a null recurrent state, all states are null recurrent.
Proof. To show (a), if $x$ is positive recurrent, then there exists a stationary measure $\mu_x$ with $\mu_x(x) = 1$ and total mass $E^x T_x < \infty$. Then $\pi(y) = \mu_x(y)/E^x T_x$ will be a stationary distribution.
For (b), suppose $\pi(x) > 0$ for some $x$. We showed this implies $\pi(y) > 0$ for all $y$. Then $0 < \pi(y) = 1/E^y T_y$, which implies $E^y T_y < \infty$.
We showed that if $x$ is recurrent and $r(x,y) > 0$, then $y$ is recurrent. So (c) follows.
Suppose there exists a null recurrent state. If there exists a positive recurrent or transient state as well, then by (a) and (b) or by (c) all states are positive recurrent or transient, a contradiction, and (d) follows. □
26. Convergence.
Our goal is to show that under certain conditions $p^n(x,y) \to \pi(y)$, where $\pi$ is the stationary distribution. (In the null recurrent case $p^n(x,y) \to 0$.)
Consider a random walk on the set $\{0, 1\}$, where with probability one on each step the chain moves to the other state. Then $p^n(x,y) = 0$ if $x \ne y$ and $n$ is even. A less trivial case is the simple random walk on the integers. We need to eliminate this periodicity.
Suppose $x$ is recurrent, let $I_x = \{n \ge 1 : p^n(x,x) > 0\}$, and let $d_x$ be the g.c.d. (greatest common divisor) of $I_x$; $d_x$ is called the period of $x$.

Proposition 26.1. If $r(x,y) > 0$, then $d_y = d_x$.
Proof. Since $x$ is recurrent, $r(y,x) > 0$. Choose $K$ and $L$ such that $p^K(x,y) > 0$ and $p^L(y,x) > 0$. Then
$$p^{K+L+n}(y,y) \ge p^L(y,x)\,p^n(x,x)\,p^K(x,y),$$
so taking $n = 0$, we have $p^{K+L}(y,y) > 0$, or $d_y$ divides $K + L$. Hence $d_y$ divides $n$ whenever $p^n(x,x) > 0$; that is, $d_y$ divides every element of $I_x$. Hence $d_y$ divides $d_x$. By symmetry $d_x$ divides $d_y$. □
Proposition 26.2. If $d_x = 1$, there exists $m_0$ such that $p^m(x,x) > 0$ whenever $m \ge m_0$.
Proof. First of all, $I_x$ is closed under addition: if $m, n \in I_x$,
$$p^{m+n}(x,x) \ge p^m(x,x)\,p^n(x,x) > 0.$$
Secondly, if there exists $N$ such that $N, N+1 \in I_x$, let $m_0 = N^2$. If $m \ge m_0$, then $m - N^2 = kN + r$ for some $0 \le r < N$, and
$$m = r + N^2 + kN = r(N+1) + (N - r + k)N \in I_x.$$
Third, pick $n_0 \in I_x$ and $k > 0$ such that $n_0 + k \in I_x$. If $k = 1$, we are done by the second step. Since $d_x = 1$, there exists $n_1 \in I_x$ such that $k$ does not divide $n_1$. We have $n_1 = mk + r$ for some $0 < r < k$. Note $(m+1)(n_0 + k) \in I_x$ and $(m+1)n_0 + n_1 \in I_x$. The difference between these two numbers is $(m+1)k - n_1 = k - r < k$. So now we have two numbers in $I_x$ differing by at most $k - 1$. Repeating at most $k$ times, we get two numbers in $I_x$ differing by at most 1, and we are done. □
We write $d$ for $d_x$. A chain is aperiodic if $d = 1$.
If $d > 1$, we say $x \sim y$ if $p^{kd}(x,y) > 0$ for some $k > 0$. We divide $S$ into equivalence classes $S_1, \ldots, S_d$. Every $d$ steps the chain started in $S_i$ is back in $S_i$. So we look at $p' = p^d$ on $S_i$.
Theorem 26.3. Suppose the chain is irreducible, aperiodic, and has a stationary distribution $\pi$. Then $p^n(x,y) \to \pi(y)$ as $n \to \infty$.
Proof. The idea is to take two copies of the chain with different starting distributions, let them run independently until they couple, i.e., hit each other, and then have them move together. So define
$$q((x_1,y_1),(x_2,y_2)) = \begin{cases} p(x_1,x_2)\,p(y_1,y_2) & \text{if } x_1 \ne y_1,\\ p(x_1,x_2) & \text{if } x_1 = y_1,\ x_2 = y_2,\\ 0 & \text{otherwise.}\end{cases}$$
Let $Z_n = (X_n, Y_n)$ be a chain with transition probabilities $q$, and let $T = \min\{i : X_i = Y_i\}$. We have
$$P(X_n = y) = P(X_n = y, T \le n) + P(X_n = y, T > n) = P(Y_n = y, T \le n) + P(X_n = y, T > n),$$
while
$$P(Y_n = y) = P(Y_n = y, T \le n) + P(Y_n = y, T > n).$$
Subtracting,
$$P(X_n = y) - P(Y_n = y) \le P(X_n = y, T > n) - P(Y_n = y, T > n) \le P(X_n = y, T > n) \le P(T > n).$$
Using symmetry,
$$|P(X_n = y) - P(Y_n = y)| \le P(T > n).$$
Suppose we let $Y_0$ have distribution $\pi$ and $X_0 = x$. Then
$$|p^n(x,y) - \pi(y)| \le P(T > n).$$
It remains to show $P(T > n) \to 0$. To do this, consider another chain $Z'_n = (X_n, Y_n)$, where now we take $X_n$, $Y_n$ independent. Define
$$r((x_1,y_1),(x_2,y_2)) = p(x_1,x_2)\,p(y_1,y_2).$$
The chain under the transition probabilities $r$ is irreducible. To see this, there exist $K$ and $L$ such that $p^K(x_1,x_2) > 0$ and $p^L(y_1,y_2) > 0$. If $M$ is large, $p^{L+M}(x_2,x_2) > 0$ and $p^{K+M}(y_2,y_2) > 0$ by aperiodicity and Proposition 26.2. So $p^{K+L+M}(x_1,x_2) > 0$ and $p^{K+L+M}(y_1,y_2) > 0$, and hence $r^{K+L+M}((x_1,y_1),(x_2,y_2)) > 0$.
It is easy to check that $\pi'(a,b) = \pi(a)\pi(b)$ is a stationary distribution for $Z'$. Hence $Z'_n$ is recurrent, and hence the time for it to hit the diagonal $\{(y,y) : y \in S\}$ is finite a.s. However, the distribution of the time to hit the diagonal is the same for $Z$ and $Z'$, so $P(T > n) \to 0$. □
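An empirical look at the theorem (our own illustration, not from the notes), using the 5-state matrix from Section 22: state 4 there is transient (no state leads back to it), so $\pi(4) = 0$, while the remaining states form an irreducible aperiodic class; every row of $P^n$ converges to the same $\pi$.

```python
import numpy as np

P = np.array([
    [0,   1/2, 1/2, 0, 0  ],
    [1/4, 0,   1/2, 0, 1/4],
    [1/4, 1/4, 1/2, 0, 0  ],
    [1,   0,   0,   0, 0  ],
    [1/2, 0,   0,   0, 1/2],
])

Pn = np.linalg.matrix_power(P, 50)
print(Pn.round(4))               # all rows essentially identical: each is pi

pi = Pn[0]
print(np.allclose(pi @ P, pi))   # True: the common row is stationary
```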

27. Gaussian sequences.

We first prove a converse to Proposition 17.3.
Proposition 27.1. If $E e^{i(uX+vY)} = E e^{iuX}\,E e^{ivY}$ for all $u$ and $v$, then $X$ and $Y$ are independent random variables.
Proof. Let $X'$ be a random variable with the same law as $X$, $Y'$ one with the same law as $Y$, with $X'$, $Y'$ independent. (We let $\Omega = [0,1]^2$, $P$ Lebesgue measure, $X'$ a function of the first variable, and $Y'$ a function of the second variable, defined as in Proposition 1.2.) Then $E e^{i(uX'+vY')} = E e^{iuX'}\,E e^{ivY'}$. Since $X$, $X'$ have the same law, they have the same characteristic function, and similarly for $Y$, $Y'$. Therefore $(X', Y')$ has the same joint characteristic function as $(X, Y)$. By the uniqueness of the Fourier transform, $(X', Y')$ has the same joint law as $(X, Y)$, which is easily seen to imply that $X$ and $Y$ are independent. □
A sequence of random variables $X_1, \ldots, X_n$ is said to be jointly normal if there exists a sequence of independent standard normal random variables $Z_1, \ldots, Z_m$ and constants $b_{ij}$ and $a_i$ such that $X_i = \sum_{j=1}^m b_{ij}Z_j + a_i$, $i = 1, \ldots, n$. In matrix notation, $X = BZ + A$. For simplicity, in what follows let us take $A = 0$; the modifications for the general case are easy. The covariance of two random variables $X$ and $Y$ is defined to be $E[(X - EX)(Y - EY)]$. Since we are assuming our normal random variables are mean 0, we can omit the centering at expectations. Given a sequence of mean 0 random variables, we can talk about the covariance matrix, which is $\mathrm{Cov}(X) = E\,XX^t$, where $X^t$ denotes the transpose of the vector $X$. In the above case, we see $\mathrm{Cov}(X) = E[(BZ)(BZ)^t] = E[BZZ^tB^t] = BB^t$, since $E\,ZZ^t = I$, the identity.
Let us compute the joint characteristic function $E e^{iu^tX}$ of the vector $X$, where $u$ is an $n$-dimensional vector. First, if $v$ is an $m$-dimensional vector,
$$E e^{iv^tZ} = E\prod_{j=1}^m e^{iv_jZ_j} = \prod_{j=1}^m E e^{iv_jZ_j} = \prod_{j=1}^m e^{-v_j^2/2} = e^{-v^tv/2},$$
using the independence of the $Z$s. So
$$E e^{iu^tX} = E e^{i(B^tu)^tZ} = e^{-u^tBB^tu/2}.$$
By taking $u = (0, \ldots, 0, a, 0, \ldots, 0)$ to be a constant times the unit vector in the $j$th coordinate direction, we deduce that each of the $X$s is indeed normal.
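The identity $\mathrm{Cov}(X) = BB^t$ is easy to see in simulation. The following sketch is our own (with a hypothetical mixing matrix $B$), not part of the notes:

```python
import numpy as np

# For X = B Z with Z a vector of independent standard normals, Cov(X) = B B^t.
rng = np.random.default_rng(4)
B = np.array([[1.0, 0.0],
              [0.5, 2.0]])              # hypothetical mixing matrix
Z = rng.standard_normal((2, 200_000))   # columns are independent samples of Z
X = B @ Z                               # each column is a sample of X = BZ

sample_cov = X @ X.T / Z.shape[1]       # estimate of E[X X^t]
print(sample_cov.round(3))
print(B @ B.T)                          # the two matrices agree closely
```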
Proposition 27.2. If the $X_i$ are jointly normal and $\mathrm{Cov}(X_i, X_j) = 0$ for $i \ne j$, then the $X_i$ are independent.
Proof. If $\mathrm{Cov}(X) = BB^t$ is a diagonal matrix, then the joint characteristic function of the $X$s factors into a product of the characteristic functions of the $X_i$s, and so by (the $n$-variable version of) Proposition 27.1, the $X$s are independent. □

28. Stationary processes.


In this section we give some preliminaries which will be used in the next section on the ergodic theorem. We say a sequence $X_i$ is stationary if $(X_k, X_{k+1}, \ldots)$ has the same distribution as $(X_0, X_1, \ldots)$ for each $k$.
One example is when the $X_i$ are i.i.d. Another is when $X_i$ is a Markov chain, $\pi$ is the stationary distribution, and $X_0$ has distribution $\pi$.
A third example is rotations of a circle. Let $\Omega$ be the unit circle, $P$ normalized Lebesgue measure on $\Omega$, and $\theta \in [0, 2\pi)$. We let $X_0(\omega) = \omega$ and set $X_n(\omega) = \omega + n\theta \pmod{2\pi}$.
A fourth example is the Bernoulli shift: let $\Omega = [0,1)$, $P$ Lebesgue measure, $X_0(\omega) = \omega$, and $X_n(\omega)$ the binary expansion of $\omega$ from the $n$th place on.
Proposition 28.1. If $X_n$ is stationary, then $Y_k = g(X_k, X_{k+1}, \ldots)$ is stationary.
Proof. If $B \subseteq \mathbb{R}^\infty$, let
$$A = \{x = (x_0, x_1, \ldots) : (g(x_0, x_1, \ldots), g(x_1, x_2, \ldots), \ldots) \in B\}.$$
Then
$$P((Y_0, Y_1, \ldots) \in B) = P((X_0, X_1, \ldots) \in A) = P((X_k, X_{k+1}, \ldots) \in A) = P((Y_k, Y_{k+1}, \ldots) \in B). \qquad\Box$$
We say that $T : \Omega \to \Omega$ is measure preserving if $P(T^{-1}A) = P(A)$ for all $A \in \mathcal{F}$.
There is a one-to-one correspondence between measure preserving transformations and stationary sequences. Given $T$, let $X_0(\omega) = \omega$ and $X_n(\omega) = T^n\omega$. Then
$$P((X_k, X_{k+1}, \ldots) \in A) = P(\omega : (X_0, X_1, \ldots)(T^k\omega) \in A) = P((X_0, X_1, \ldots) \in A),$$
using $P(T^{-k}B) = P(B)$.
On the other hand, if $X_k$ is stationary, define $\widehat\Omega = \mathbb{R}^\infty$, and define $\widehat X_k(\omega) = \omega_k$, where $\omega = (\omega_0, \omega_1, \ldots)$. Define $\widehat P$ on $\widehat\Omega$ so that the law of $\widehat X$ under $\widehat P$ is the same as the law of $X$ under $P$. Then define $T\omega = (\omega_1, \omega_2, \ldots)$. We see that
$$\widehat P(A) = \widehat P((\omega_0, \omega_1, \ldots) \in A) = \widehat P((\widehat X_0, \widehat X_1, \ldots) \in A) = P((X_0, X_1, \ldots) \in A) = P((X_1, X_2, \ldots) \in A)$$
$$= \widehat P((\widehat X_1, \widehat X_2, \ldots) \in A) = \widehat P((\omega_1, \omega_2, \ldots) \in A) = \widehat P(T\omega \in A) = \widehat P(T^{-1}A),$$
so $T$ is measure preserving.
We say a set $A$ is invariant if $T^{-1}A = A$ (up to a null set; that is, the symmetric difference has probability zero). The invariant $\sigma$-field $\mathcal{I}$ is the collection of invariant sets. A measure preserving transformation is ergodic if the invariant $\sigma$-field is trivial.
In the case of an i.i.d. sequence, $A$ invariant means $A = T^{-n}A \in \sigma(X_n, X_{n+1}, \ldots)$ for each $n$. Hence each invariant set is in the tail $\sigma$-field, and by the Kolmogorov 0-1 law, $T$ is ergodic.
In the case of rotations, if $\theta$ is a rational multiple of $\pi$, $T$ need not be ergodic. For example, let $\theta = \pi$ and $A = (0, \pi/2) \cup (\pi, 3\pi/2)$. However, if $\theta$ is an irrational multiple of $\pi$, then $T$ is ergodic. To see that, recall that if $f$ is measurable and bounded, then $f$ is the $L^2$ limit of $\sum_{k=-K}^K c_k e^{ikx}$, where the $c_k$ are the Fourier coefficients. So
$$f(T^n x) = \sum_k c_k e^{ikx + ikn\theta} = \sum_k d_k e^{ikx},$$
where $d_k = c_k e^{ikn\theta}$. If $f(T^n x) = f(x)$ a.e., then $c_k = d_k$, or $c_k e^{ikn\theta} = c_k$. But $\theta$ is not a rational multiple of $\pi$, so $e^{ikn\theta} \ne 1$ for $k \ne 0$, so $c_k = 0$ for $k \ne 0$. Therefore $f$ is constant a.e. If we take $f = 1_A$ with $A$ invariant, this says that either $A$ is empty or $A$ is the whole space, up to sets of measure zero.
Our last example was the Bernoulli shift. Let $X_i$ be i.i.d. with $P(X_i = 1) = P(X_i = 0) = 1/2$. Let $Y_n = \sum_{m=0}^\infty 2^{-(m+1)}X_{n+m}$. So there exists $g$ such that $Y_n = g(X_n, X_{n+1}, \ldots)$. If $A$ is invariant for the Bernoulli shift,
$$A = ((Y_n, Y_{n+1}, \ldots) \in B) = ((X_n, X_{n+1}, \ldots) \in C),$$
where $C = \{x : (g(x_0, x_1, \ldots), g(x_1, x_2, \ldots), \ldots) \in B\}$. This is true for all $n$, so $A$ is in the invariant $\sigma$-field for the $X_i$s, which is trivial. Therefore $T$ is ergodic.
29. The ergodic theorem.
The key to the ergodic theorem is the following maximal lemma.
Lemma 29.1. Let $X$ be integrable. Let $T$ be a measure preserving transformation, let $X_j(\omega) = X(T^j\omega)$, let $S_k(\omega) = X_0(\omega) + \cdots + X_{k-1}(\omega)$, and $M_k(\omega) = \max(0, S_1(\omega), \ldots, S_k(\omega))$. Then $E[X; M_k > 0] \ge 0$.
Proof. If $j \le k-1$, $M_k(T\omega) \ge S_j(T\omega)$, so $X(\omega) + M_k(T\omega) \ge X(\omega) + S_j(T\omega) = S_{j+1}(\omega)$, or
$$X(\omega) \ge S_{j+1}(\omega) - M_k(T\omega), \qquad j = 1, \ldots, k-1.$$
Since $S_1(\omega) = X(\omega)$ and $M_k(T\omega) \ge 0$, also
$$X(\omega) \ge S_1(\omega) - M_k(T\omega),$$
so $X(\omega) \ge S_j(\omega) - M_k(T\omega)$ for $j = 1, \ldots, k$. Therefore
$$E[X(\omega); M_k > 0] \ge \int_{(M_k > 0)} [\max(S_1, \ldots, S_k)(\omega) - M_k(T\omega)]\,dP = \int_{(M_k > 0)} [M_k(\omega) - M_k(T\omega)]\,dP.$$
On the set $(M_k = 0)$ we have $M_k(\omega) - M_k(T\omega) = -M_k(T\omega) \le 0$. Hence
$$E[X(\omega); M_k > 0] \ge \int [M_k(\omega) - M_k(T\omega)]\,dP.$$
Since $T$ is measure preserving, $E\,M_k(\omega) - E\,M_k(T\omega) = 0$, which completes the proof. □
Recall $\mathcal{I}$ is the invariant $\sigma$-field. The ergodic theorem says the following.
Theorem 29.2. Let $T$ be measure preserving and $X$ integrable. Then
$$\frac{1}{n}\sum_{m=0}^{n-1} X(T^m\omega) \to E[X \mid \mathcal{I}],$$
where the convergence takes place almost surely and in $L^1$.


Proof. We start with the a.s. result. By looking at $X - E[X \mid \mathcal{I}]$, we may suppose $E[X \mid \mathcal{I}] = 0$. Let $\varepsilon > 0$ and $D = \{\limsup S_n/n > \varepsilon\}$. We will show $P(D) = 0$.
Let $\delta > 0$. Since $X$ is integrable, $\sum_n P(|X_n(\omega)| > \delta n) = \sum_n P(|X| > \delta n) < \infty$ (cf. proof of Proposition 5.1). By Borel-Cantelli, $|X_n|/n$ will eventually be less than $\delta$. Since $\delta$ is arbitrary, $|X_n|/n \to 0$ a.s. Since
$$(S_n/n)(T\omega) - (S_n/n)(\omega) = X_n(\omega)/n - X_0(\omega)/n \to 0,$$
then $\limsup(S_n/n)(T\omega) = \limsup(S_n/n)(\omega)$, and so $D \in \mathcal{I}$. Let $X^*(\omega) = (X(\omega) - \varepsilon)1_D(\omega)$, and define $S_n^*$ and $M_n^*$ analogously to the definitions of $S_n$ and $M_n$. On $D$, $\limsup(S_n/n) > \varepsilon$, hence $\limsup(S_n^*/n) > 0$.
Let $F = \cup_n (M_n^* > 0)$. Note $\cup_{i=0}^n (M_i^* > 0) = (M_n^* > 0)$. Also $|X^*| \le |X| + \varepsilon$ is integrable. By Lemma 29.1, $E[X^*; M_n^* > 0] \ge 0$. By dominated convergence, $E[X^*; F] \ge 0$.
We claim $D = F$, up to null sets. To see this, if $\limsup(S_n^*/n) > 0$, then $\omega \in \cup_n (M_n^* > 0)$. Hence $D \subseteq F$. On the other hand, if $\omega \in F$, then $M_n^*(\omega) > 0$ for some $n$, so $X^*(T^j\omega) \ne 0$ for some $j$. By the definition of $X^*$, for some $j$, $T^j\omega \in D$, and since $D$ is invariant, $\omega \in D$ a.s.
Recall $D \in \mathcal{I}$. Then
$$0 \le E[X^*; F] = E[X^*; D] = E[X - \varepsilon; D] = E[E[X \mid \mathcal{I}]; D] - \varepsilon P(D) = -\varepsilon P(D),$$
using the fact that $E[X \mid \mathcal{I}] = 0$. We conclude $P(D) = 0$ as desired.
Since we have this for every $\varepsilon$, then $\limsup S_n/n \le 0$. By applying the same argument to $-X$, we obtain $\liminf S_n/n \ge 0$, and we have proved the almost sure result. Let us now turn to the $L^1$ convergence.
Let $M > 0$, $X'_M = X1_{(|X| \le M)}$, and $X''_M = X - X'_M$. By the almost sure result,
$$\frac{1}{n}\sum_{m=0}^{n-1} X'_M(T^m\omega) \to E[X'_M \mid \mathcal{I}]$$
almost surely. Both sides are bounded by $M$, so
$$E\Bigl|\frac{1}{n}\sum_{m=0}^{n-1} X'_M(T^m\omega) - E[X'_M \mid \mathcal{I}]\Bigr| \to 0. \tag{29.1}$$
Let $\varepsilon > 0$ and choose $M$ large so that $E|X''_M| < \varepsilon$; this is possible by dominated convergence. We have
$$E\Bigl|\frac{1}{n}\sum_{m=0}^{n-1} X''_M(T^m\omega)\Bigr| \le \frac{1}{n}\sum_{m=0}^{n-1} E|X''_M(T^m\omega)| = E|X''_M| \le \varepsilon$$
and
$$E\bigl|E[X''_M \mid \mathcal{I}]\bigr| \le E\bigl[E[|X''_M| \mid \mathcal{I}]\bigr] = E|X''_M| \le \varepsilon.$$
So combining with (29.1),
$$\limsup_n E\Bigl|\frac{1}{n}\sum_{m=0}^{n-1} X(T^m\omega) - E[X \mid \mathcal{I}]\Bigr| \le 2\varepsilon.$$
This shows the $L^1$ convergence. □
What does the ergodic theorem tell us about our examples? In the case of i.i.d. random variables, we see $S_n/n \to EX$ almost surely and in $L^1$, since $E[X \mid \mathcal{I}] = EX$ by ergodicity. Thus this gives another proof of the SLLN.
For rotations of the circle with $X(\omega) = 1_A(\omega)$ and $\theta$ an irrational multiple of $\pi$, $E[X \mid \mathcal{I}] = EX = P(A)$, the normalized Lebesgue measure of $A$. So the ergodic theorem says that $(1/n)\sum_{m=0}^{n-1} 1_A(\omega + m\theta)$, the average number of times $\omega + m\theta$ is in $A$, converges for almost every $\omega$ to the normalized Lebesgue measure of $A$.
Finally, in the case of the Bernoulli shift, it is easy to see that the ergodic theorem says that the average number of ones in the binary expansion of almost every point in $[0,1)$ is $1/2$.
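Both averages are easy to watch numerically. The following sketch is our own illustration (with arbitrary choices of $\theta$, starting point, and set $A$), not part of the notes:

```python
import numpy as np

# Irrational rotation: time averages of 1_A converge to the normalized measure
# of A. Here A = (0, pi/3), whose normalized Lebesgue measure is (pi/3)/(2pi) = 1/6.
n = 200_000
theta = np.sqrt(2) * np.pi                 # an irrational multiple of pi
omega = 1.0                                # arbitrary starting point
orbit = (omega + theta * np.arange(n)) % (2 * np.pi)
print((orbit < np.pi / 3).mean(), 1 / 6)   # the two values are close

# Bernoulli shift: the binary digits of a typical point are i.i.d. fair bits,
# so their average converges to 1/2.
rng = np.random.default_rng(5)
digits = rng.integers(0, 2, size=n)        # stand-in for the digits of a typical point
print(digits.mean())                       # approximately 1/2
```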
