The Martingale Stopping Theorem


Scott M. LaLonde

February 27, 2013

Abstract

We present a proof of the Martingale Stopping Theorem (also known as Doob's Optional Stopping Theorem). We begin with some preliminaries on measure-theoretic probability theory, which allows us to discuss the definition and basic properties of martingales. We then state some auxiliary results and use them to prove the main theorem.

Introduction

A martingale is a sequence of random variables X_0, X_1, X_2, . . . that models a player's fortune in a fair game. That is to say, his expected fortune at time n given the history up to this point is equal to his current fortune:

E(X_n | X_1, . . . , X_{n−1}) = X_{n−1}.

This in turn implies that for all n,

E(X_n) = E(X_{n−1}) = · · · = E(X_1) = E(X_0),

so the player's expected fortune at any time is equal to his starting expected fortune.
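As a concrete illustration (a sketch of ours, not part of the paper's exposition), take X_n to be the running sum of n independent fair ±1 bets, starting from X_0 = 0. A short exact enumeration confirms that E(X_n) = E(X_0) = 0 for each n:

```python
from itertools import product

def expected_fortune(n):
    # Enumerate all 2^n equally likely sequences of fair ±1 bets and
    # compute the exact expected fortune E(X_n), with X_0 = 0.
    total = sum(sum(flips) for flips in product((-1, 1), repeat=n))
    return total / 2**n

# E(X_n) = E(X_0) = 0 for every n, as the martingale property predicts.
print([expected_fortune(n) for n in range(1, 6)])  # → [0.0, 0.0, 0.0, 0.0, 0.0]
```

Each sequence of flips pairs off with its negation, so the sum vanishes exactly rather than just approximately.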

It is natural to ask whether the game remains fair when stopped at a randomly chosen time. Loosely speaking, if T is a random stopping time and X_T denotes the game stopped at this time, do we have

E(X_T) = E(X_0)

as well? In general the answer is no, as Doyle and Snell point out. They envision a situation where the player is allowed to go into debt by any amount and to play for an arbitrarily long time. In such a situation, the player will inevitably come out ahead. There are conditions which will guarantee fairness, and Doyle and Snell [2] give them in the following theorem, which is phrased in the context of gambling.

Theorem (Martingale Stopping Theorem). A fair game that is stopped at a random time will remain fair to the end of the game if it is assumed that:

(a) There is a finite amount of money in the world.

(b) A player must stop if he wins all of this money or goes into debt by this amount.
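Before formalizing anything, it is worth seeing numerically how fairness fails when these assumptions are dropped. The sketch below (our illustration, not the paper's) tracks the strategy "play a fair ±1 game until you are ahead by 1": the probability of having banked a profit by time n climbs toward 1, so the stopped fortune satisfies E(X_T) = 1 ≠ 0 = E(X_0), even though the game is fair at every fixed time.

```python
from collections import defaultdict

def prob_stopped_by(n):
    # Exact P(T <= n) for a fair ±1 walk started at 0, where T is the first
    # time the walk hits +1 (the "quit while ahead by 1" strategy).
    running = {0: 1.0}          # position -> probability, for paths not yet stopped
    p_stopped = 0.0
    for _ in range(n):
        nxt = defaultdict(float)
        for pos, p in running.items():
            for step in (-1, 1):
                if pos + step == 1:
                    p_stopped += 0.5 * p    # hit +1: stop with fortune X_T = 1
                else:
                    nxt[pos + step] += 0.5 * p
        running = nxt
    return p_stopped

# P(T <= n) climbs toward 1, so X_T = 1 almost surely.
for n in (1, 3, 100, 1000):
    print(n, round(prob_stopped_by(n), 4))
```

Because this player may go arbitrarily far into debt and play arbitrarily long, both conditions of the theorem are violated, and stopping skews the expectation from 0 to 1.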

Our goal is to develop a more formal statement of this theorem, called Doob's Optional-Stopping Theorem, and then to prove it. We will start with some general background material on probability theory, provide formal definitions of martingales and stopping times, and finally state and prove the theorem. It should be noted that our exposition will largely be based on that of Williams [4], though a nice overview of martingales and various results about them can be found in Doob [1].

Preliminaries

Modern approaches to probability theory make much use of measure theory. Since the proof of Doob's theorem will rely heavily on some sort of integral convergence theorem (namely the Dominated Convergence Theorem), we need to introduce some background that places probability theory within the realm of measure theory.

In modern probability theory the model for a random experiment is called a probability space. This is a triple (Ω, Σ, P), where

- Ω is a set, called the sample space.

- Σ is a σ-algebra of subsets of Ω.

- P is a probability measure on (Ω, Σ), i.e. every set in Σ is measurable and P(Ω) = 1.

The notion of a probability space generalizes ideas from discrete probability. We have already mentioned that Ω is the sample space of an experiment. The σ-algebra Σ represents the set of possible outcomes, or the events to which one can assign a probability. The measure P gives the probability that an outcome occurs.

Of course in discrete probability one is usually interested in random variables, which are real-valued functions on the sample space. For us, a random variable will be a function X : Ω → R which is measurable with respect to Σ. The expected value of a random variable X is its integral with respect to the measure P:

E(X) = ∫_Ω X(ω) dP(ω),

and we will say that a random variable X is integrable if E(|X|) < ∞. Finally, we will need to make reference to the conditional expectation of a random variable: given a sub-σ-algebra A of Σ, the conditional expectation E(X|A) is a random variable which satisfies certain conditions related to X and A. The proper definition is quite complicated, so one should simply think of E(X|A) as the expectation of X given that the events contained in A have occurred. This description is not completely accurate, but it should help the reader's understanding of the uses of conditional expectation in the sequel.
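For finite sample spaces these notions reduce to elementary computations, which may help fix the ideas. In the sketch below (an illustration of ours, not the paper's), a sub-σ-algebra A is generated by a partition of Ω, and E(X|A) is the P-weighted average of X over each block of the partition:

```python
from fractions import Fraction

# Ω = faces of one fair die, P the uniform measure, X(ω) = ω.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
X = lambda w: w

# On a finite space, E(X) = ∫_Ω X dP reduces to a weighted sum.
E_X = sum(X(w) * P[w] for w in omega)
print(E_X)  # → 7/2

def cond_exp(X, partition):
    # E(X|A) for the sub-σ-algebra A generated by a partition of Ω:
    # constant on each block, equal to the P-weighted average of X there.
    out = {}
    for block in partition:
        mass = sum(P[w] for w in block)
        avg = sum(X(w) * P[w] for w in block) / mass
        out.update({w: avg for w in block})
    return out

# A = "we are told only the parity of the roll".
Y = cond_exp(X, [[1, 3, 5], [2, 4, 6]])
print(Y[1], Y[2])  # → 3 4

# Tower property: E(E(X|A)) = E(X).
assert sum(Y[w] * P[w] for w in omega) == E_X
```

The tower property checked on the last line is exactly what makes the chain E(X_n) = E(X_{n−1}) = · · · = E(X_0) work for martingales.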

Now that we have the appropriate background material out of the way, we can formally define a martingale. Fix a probability space (Ω, Σ, P), and let X = {X_n}_{n=0}^∞ be a sequence of random variables on Ω.

Definition 1. A filtration on (Ω, Σ, P) is an increasing sequence F = {F_n}_{n=1}^∞,

F_1 ⊆ F_2 ⊆ F_3 ⊆ · · · ,

of sub-σ-algebras of Σ. The sequence {X_n}_{n=1}^∞ is said to be adapted to F if X_n is F_n-measurable for each n.

Remark 2. This definition may seem abstract, but it helps to keep the following idea in mind. The σ-algebra F_n represents the information available to us at time n in a random process, or the events that we can detect at time n. That the sequence is increasing represents the fact that we gain information as the process goes on.

This idea can perhaps be made even more clear by pointing out that a common choice for F is the natural filtration (or minimal filtration):

F_n = σ(X_1, . . . , X_n).

In this case, F_n is the smallest σ-algebra on Ω making the random variables X_1, . . . , X_n measurable. The information available at time n is precisely that generated by the X_i for 1 ≤ i ≤ n. Of course more information could be available, which would correspond to a different choice of filtration F.
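The "information grows" picture of the natural filtration can be made concrete on a finite space. In this sketch (ours, not the paper's), F_n corresponds to the partition of Ω that groups paths agreeing on their first n steps, and the partition refines as n increases:

```python
from itertools import product

# Ω = all length-3 sequences of fair ±1 steps. For the natural filtration,
# F_n is generated by the partition of Ω grouping paths that agree on the
# first n steps; as n grows the partition refines, i.e. information increases.
omega = list(product((-1, 1), repeat=3))

def partition_at(n):
    blocks = {}
    for path in omega:
        blocks.setdefault(path[:n], []).append(path)
    return list(blocks.values())

for n in range(4):
    print(n, len(partition_at(n)))  # → 1, 2, 4, 8 blocks for n = 0, 1, 2, 3
```

At n = 0 nothing is known (one block, the trivial σ-algebra); at n = 3 every outcome is distinguished.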

Filtrations are important because they provide a concise way of defining a martingale. With this in mind, let F = {F_n} be a fixed filtration on (Ω, Σ, P).

Definition 3. A random process X = {X_n} is called a martingale relative to F if

(a) X is adapted to F,

(b) E(|X_n|) < ∞ for all n, and

(c) E(X_{n+1} | F_n) = X_n almost surely.
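To see Definition 3 in action, condition (c) can be verified directly for the fair ±1 walk with its natural filtration, where conditioning on F_n amounts to conditioning on the first n steps (a small check of ours, not from the paper):

```python
from itertools import product

def cond_exp_next(history):
    # For X_n = s_1 + ... + s_n with independent fair ±1 steps,
    # E(X_{n+1} | F_n) conditions on the history (s_1, ..., s_n);
    # the next step is ±1 with probability 1/2 each.
    x_n = sum(history)
    return 0.5 * (x_n + 1) + 0.5 * (x_n - 1)

# Condition (c): E(X_{n+1} | F_n) = X_n for every possible history.
for history in product((-1, 1), repeat=4):
    assert cond_exp_next(history) == sum(history)
print("martingale property holds for all length-4 histories")
```

Conditions (a) and (b) are immediate here: X_n is a function of the first n steps, and |X_n| ≤ n.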

As we have already discussed, we are interested in what happens when one stops a martingale at a random time. To do this, we need a formal way of talking about a rule for stopping a random process which does not depend on the future. This leads to the following definition of a stopping time.

Definition 4. A stopping time is a function T : Ω → {1, 2, 3, . . .} ∪ {∞} such that for each n,

{T = n} = {ω ∈ Ω : T(ω) = n} ∈ F_n.    (1)

Remark 5. Intuitively, T is a random variable taking positive integer values (and possibly ∞) which gives a rule for stopping a random process. Condition (1) says that the decision whether to stop or not at time n depends only on the information available to us at time n (i.e. the history up to and including time n). No knowledge of the future is required, since such a rule would surely result in an unfair game.
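The distinction in Remark 5 can be made concrete. In the sketch below (our illustration), the first time a fair ±1 walk hits ±2 is a stopping time, since the decision at time n uses only the first n steps; the time of the walk's running maximum is not, since two paths with the same beginning can call for different decisions depending on the future:

```python
def first_hit_time(path, level=2):
    # T = first n with |X_n| >= level: the decision at time n looks only
    # at the history X_1, ..., X_n, so condition (1) holds.
    x = 0
    for n, step in enumerate(path, start=1):
        x += step
        if abs(x) >= level:
            return n
    return None  # stands in for T = ∞ on this finite path

def argmax_time(path):
    # Time at which the walk attains its running maximum: NOT a stopping
    # time, because stopping correctly requires knowing the future.
    x, partial = 0, []
    for step in path:
        x += step
        partial.append(x)
    return partial.index(max(partial)) + 1

# Two paths that agree up to time 2 must get the same stop decision there:
p1, p2 = (1, 1, -1, -1), (1, 1, 1, 1)
assert first_hit_time(p1) == first_hit_time(p2) == 2
# ...but the argmax rule separates them using information from the future.
assert argmax_time(p1) == 2 and argmax_time(p2) == 4
```

The failure of the argmax rule is exactly the failure of condition (1): the event {T = n} is not determined by the first n steps.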

Let X = {X_n} be a random process, and let T be a stopping time. For any positive integer n and any ω ∈ Ω, we define

(T ∧ n)(ω) = min{T(ω), n}.

With this notation, we can define a stopped process.

Definition 6. The stopped process X^T = {X_n^T} is given by

X_n^T(ω) = X_{(T∧n)(ω)}(ω).

A useful result that we will need for the proof of Doob's theorem (but that we will not prove) says that X^T inherits certain desirable properties from X.

Proposition 7. If X = {X_n} is a martingale, then the stopped process X^T = {X_{T∧n}} is also a martingale. In particular, for all n we have

E(X_{T∧n}) = E(X_0).

This is part (ii) of [4, Theorem 10.9], and an outline of the proof can be found there. The proof is not difficult, but the details are not particularly enlightening from our current perspective.
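Proposition 7 can be checked exactly on a small example (ours, not the paper's): take the fair ±1 walk and let T be the first time |X_k| hits 2. Enumerating every path shows E(X_{T∧n}) = E(X_0) = 0 for each n:

```python
from itertools import product

def stopped_mean(n, level=2):
    # Exact E(X_{T∧n}) for the fair ±1 walk with T = first time |X_k| >= level.
    total = 0
    for path in product((-1, 1), repeat=n):
        x = 0
        for step in path:
            x += step
            if abs(x) >= level:
                break        # the stopped process freezes at X_T
        total += x           # x is X_{T∧n} on this path
    return total / 2**n

print([stopped_mean(n) for n in range(1, 8)])  # → all 0.0
```

As with the unstopped walk, negating every path negates X_{T∧n}, so the mean is exactly zero rather than merely small.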

We now have all the pieces in place to state and prove our main theorem. First we need to formalize what it means to stop a process at a random time. Suppose we have a martingale X = {X_n} and a stopping time T. Assume that T is almost surely finite. Then we can define a random variable X_T : Ω → R by

X_T(ω) = X_{T(ω)}(ω),

at least for ω outside some set of probability 0. (To make X_T everywhere-defined, we could set it equal to 0 on this null set.) Intuitively, E(X_T) represents the player's expected fortune when stopping at a random time. If we are to show that we still have a fair game, we will need to check that

E(X_T) = E(X_0).    (2)

Since T is almost surely finite, X_{T∧n} → X_T almost surely. Moreover, we know that E(X_{T∧n}) = E(X_0) for all n. It would be nice if we could conclude that E(X_{T∧n}) → E(X_T), since we would then have (2). This amounts to showing that

∫_Ω X_{T∧n}(ω) dP(ω) → ∫_Ω X_T(ω) dP(ω),

so we will need some condition that will allow us to invoke convergence theorems from measure theory, with the Dominated Convergence Theorem being the likely candidate.

We will prove the version of Doob's theorem given in [4, Theorem 10.10], which is essentially the same as the formal statement given in class. The proof of part (b) will differ slightly from Williams's proof, however. In the process we will obtain direct analogues of the Martingale Stopping Theorem from [2]. In this regard, the requirement that there is only a finite amount of money in the world can be encoded by assuming that the random variables X_n are uniformly bounded; this is condition (b) below. Similarly, the requirement that the player stop after a finite amount of time is obtained by requiring that T be almost surely bounded, which is condition (a). We also show that there is a third condition under which the theorem holds; this condition is essentially a limit on the size of a bet at any given time.

Theorem 8 (Doob's Optional-Stopping Theorem). Let (Ω, Σ, P) be a probability space, F = {F_n} a filtration on Ω, and X = {X_n} a martingale with respect to F. Let T be a stopping time. Suppose that any one of the following conditions holds:

(a) There is a positive integer N such that T(ω) ≤ N for all ω ∈ Ω.

(b) There is a positive real number K such that

|X_n(ω)| < K

for all n and all ω ∈ Ω, and T is almost surely finite.

(c) E(T) < ∞, and there is a positive real number K such that

|X_n(ω) − X_{n−1}(ω)| < K

for all n and all ω ∈ Ω.

Then X_T is integrable, and

E(X_T) = E(X_0).
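For intuition (a numerical sketch of ours, not part of the paper), condition (b) holds when the player must stop upon reaching +1 or −3, so fortunes stay bounded and T is almost surely finite. Doob's theorem then predicts E(X_T) = 0, matching the classical gambler's-ruin computation P(hit +1 before −3) = 3/4:

```python
from fractions import Fraction

def ruin(top=1, bottom=-3, n=400):
    # Fair ±1 walk from 0, absorbed at `top` or `bottom`, so |X_k| is bounded
    # and T is a.s. finite (condition (b)). Exact dynamic program to n steps.
    half = Fraction(1, 2)
    running = {0: Fraction(1)}
    p_top = p_bottom = Fraction(0)
    for _ in range(n):
        nxt = {}
        for pos, p in running.items():
            for step in (-1, 1):
                q = pos + step
                if q >= top:
                    p_top += half * p
                elif q <= bottom:
                    p_bottom += half * p
                else:
                    nxt[q] = nxt.get(q, Fraction(0)) + half * p
        running = nxt
    return p_top, top * p_top + bottom * p_bottom

p_win, mean = ruin()
print(float(p_win))               # → 0.75
print(abs(float(mean)) < 1e-9)    # → True: E(X_T) = 0, as the theorem predicts
```

The walk wins 1 three times as often as it loses 3, and the two outcomes cancel in expectation: exactly the fairness the theorem guarantees.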


Proof. Note that in all three cases T is a.s. finite. By our previous discussion, this implies that X_T is a.s.-defined, and we have X_{T∧n} → X_T almost surely. Furthermore, we know that X_{T∧n} is integrable for all n, and that E(X_{T∧n}) = E(X_0).

Suppose that (a) holds. Then for n ≥ N, we have T(ω) ∧ n = T(ω) for all ω. Hence X_{T∧n} = X_T for n ≥ N, and it follows that X_T is integrable with

E(X_T) = E(X_{T∧N}) = E(X_0).

Now suppose that (b) holds. Then the boundedness condition on the X_n implies that

|X_{T∧n}(ω)| < K

for all n and all ω. Also, if (c) holds, it is fairly easy to check that

X_{T∧n}(ω) = X_0(ω) + Σ_{k=1}^{(T∧n)(ω)} (X_k(ω) − X_{k−1}(ω)),

so that the bound on the increments of X gives

|X_{T∧n}(ω)| ≤ |X_0(ω)| + Σ_{k=1}^{(T∧n)(ω)} |X_k(ω) − X_{k−1}(ω)| ≤ |X_0(ω)| + K · T(ω),

and the right-hand side is integrable since E(T) < ∞.

Therefore, in either case (b) or (c) we have bounded |X_{T∧n}| by an integrable random variable, so the Dominated Convergence Theorem applies. It follows that X_T is integrable, and

lim_{n→∞} ∫_Ω X_{T∧n}(ω) dP(ω) = ∫_Ω X_T(ω) dP(ω).

Equivalently,

lim_{n→∞} E(X_{T∧n}) = E(X_T).

Since E(X_{T∧n}) = E(X_0) for all n, we conclude that E(X_T) = E(X_0).

References

[1] J. L. Doob, What is a martingale?, Amer. Math. Monthly 78 (1971), no. 5, 451–463.

[2] Peter G. Doyle and J. Laurie Snell, Random walks and electric networks, Carus Mathematical Monographs, Mathematical Association of America, Washington, D.C., 1984.

[3] Efe A. Ok, Probability theory with economic applications, https://files.nyu.edu/eo1/public/books.html.

[4] David Williams, Probability with martingales, Cambridge University Press, Cambridge, 1991.
