
Stat 150 Stochastic Processes

Spring 2009

Lecture 6: Markov Chains and First Step Analysis II


Lecturer: Jim Pitman

Further Analysis of Markov Chains


In class last time, we found that if $h = Ph$, then

(1) $\;E[h(X_{n+1}) \mid X_0, X_1, \ldots, X_n] = h(X_n)$

(2) $\;E[h(X_{n+1}) \mid h(X_0), h(X_1), \ldots, h(X_n)] = h(X_n)$

so the sequence $h(X_n)$ is a martingale. Why does (2) hold? It illustrates a conditional form of $E[Y] = E[E[Y \mid X]]$, which can be written more generically as

$$(*)\qquad E[Y \mid Z] = E\bigl[\,E[Y \mid X, Z]\,\bigm|\,Z\,\bigr].$$

To obtain (2), take $Y = h(X_{n+1})$, $Z = (h(X_0), h(X_1), \ldots, h(X_n))$, and $X = (X_0, X_1, \ldots, X_n)$. Notice that $(X, Z) = (X_0, \ldots, X_n, h(X_0), \ldots, h(X_n))$ carries the same information as $(X_0, X_1, \ldots, X_n)$, since $Z$ is a function of $X$. So

$$\begin{aligned}
E[h(X_{n+1}) \mid h(X_0), \ldots, h(X_n)]
&= E\bigl[\,E[h(X_{n+1}) \mid X_0, \ldots, X_n, h(X_0), \ldots, h(X_n)] \,\bigm|\, h(X_0), \ldots, h(X_n)\,\bigr] \\
&= E\bigl[\,E[h(X_{n+1}) \mid X_0, \ldots, X_n] \,\bigm|\, h(X_0), \ldots, h(X_n)\,\bigr] \\
&= E\bigl[\,h(X_n) \,\bigm|\, h(X_0), \ldots, h(X_n)\,\bigr] \\
&= h(X_n).
\end{aligned}$$
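As a sanity check, here is a minimal numerical sketch; the example chain is my own choice, not from the lecture. It uses a gambler's-ruin walk on $\{0, 1, 2, 3\}$ with absorbing endpoints, for which $h(x) = x/3$ satisfies $h = Ph$, and verifies property (1) by one-step simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gambler's-ruin chain on {0, 1, 2, 3}: the endpoints are absorbing and the
# interior states step left or right with probability 1/2 each.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])
h = np.array([0.0, 1/3, 2/3, 1.0])  # h(x) = x/3 is harmonic: h = P h
assert np.allclose(P @ h, h)

# Property (1): E[h(X_{n+1}) | X_n = x] = (P h)(x) = h(x).
# Monte Carlo check of one step from X_n = 1:
nxt = rng.choice(4, size=200_000, p=P[1])
print(h[nxt].mean(), "should be close to", h[1])  # both about 1/3
```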


Let us derive $(*)$ from the definition of $E[Y \mid Z]$:

$$\begin{aligned}
E[Y \mid Z = z] &= \frac{E[Y\,1(Z = z)]}{P(Z = z)} \\
&= \frac{E\bigl[\sum_x Y\,1(X = x, Z = z)\bigr]}{P(Z = z)} \\
&= \sum_x \frac{E[Y\,1(X = x, Z = z)]}{P(Z = z)} \\
&= \sum_x \frac{E[Y \mid X = x, Z = z]\,P(X = x \mid Z = z)\,P(Z = z)}{P(Z = z)} \\
&= \sum_x E[Y \mid X = x, Z = z]\,P(X = x \mid Z = z) \\
&= E\bigl[\,E[Y \mid X, Z = z]\,\bigm|\,Z = z\,\bigr].
\end{aligned}$$

Therefore $E[Y \mid Z] = E[E[Y \mid X, Z] \mid Z]$. Note that the above argument uses $1 = \sum_x 1(X = x)$, which implies $Y = \sum_x Y\,1(X = x)$. The interchange of sum and expectation can be justified provided either $Y \ge 0$ or $E|Y| < \infty$.
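The identity $(*)$ is easy to check numerically by brute force. Below is a minimal sketch (the joint distribution is an arbitrary toy example, not from the lecture) that computes both sides of $(*)$ for a finite joint pmf of $(X, Z, Y)$, mirroring the sum above term by term.

```python
import numpy as np

rng = np.random.default_rng(1)

# p[x, z, y] = P(X = x, Z = z, Y = y): an arbitrary joint pmf on a
# 3 x 2 x 4 grid of values; Y takes the values 0, 1, 2, 3.
p = rng.random((3, 2, 4))
p /= p.sum()
yvals = np.arange(4.0)

for z in range(2):
    pz = p[:, z, :].sum()                              # P(Z = z)
    lhs = (p[:, z, :].sum(axis=0) * yvals).sum() / pz  # E[Y | Z = z]
    rhs = 0.0
    for x in range(3):
        pxz = p[x, z, :].sum()                         # P(X = x, Z = z)
        e_y_xz = (p[x, z, :] * yvals).sum() / pxz      # E[Y | X = x, Z = z]
        rhs += e_y_xz * (pxz / pz)                     # weight by P(X = x | Z = z)
    assert np.isclose(lhs, rhs)
print("(*) checked for every value of z")
```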

First Step Analysis: An Example


Consider a symmetric nearest-neighbour random walk on a 2-D grid:
[Figure: a $2 \times 2$ block of interior squares, A and B on the bottom row, D and C on the top row, surrounded by eight absorbing boundary squares: 1 below A, 2 below B, 3 right of B, 4 right of C, 5 left of A, 6 left of D, 7 above D, 8 above C. From each interior square the walk steps to each of its four neighbours with probability 0.25.]
Suppose squares 1 to 8 form an absorbing boundary. Starting the walk in square A, what is the distribution of the exit point on the boundary? Let $f_j$ denote the probability that the walk exits at square $j$. Obvious symmetries (reflect the grid across the diagonal through A and C): $f_1 = f_5$, $f_2 = f_6$, $f_3 = f_7$, $f_4 = f_8$.
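Before solving anything, the exit distribution can be estimated by simulation. Here is a minimal sketch; the coordinate labelling of the squares is my own, chosen to match the figure. The estimated $f_j$ should display the symmetries above.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Interior squares: A = (0, 0), B = (1, 0), C = (1, 1), D = (0, 1).
# Absorbing boundary squares, numbered as in the figure:
boundary = {(0, -1): 1, (1, -1): 2, (2, 0): 3, (2, 1): 4,
            (-1, 0): 5, (-1, 1): 6, (0, 2): 7, (1, 2): 8}
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

hits = Counter()
n_walks = 200_000
for _ in range(n_walks):
    x, y = 0, 0                           # start at A
    while (x, y) not in boundary:
        dx, dy = steps[rng.integers(4)]   # each neighbour with probability 1/4
        x, y = x + dx, y + dy
    hits[boundary[(x, y)]] += 1

for j in sorted(hits):
    print(j, hits[j] / n_walks)
# The estimates should pair up: f1 with f5, f2 with f6, f3 with f7, f4 with f8.
```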


Less obvious: $f_2 = f_3$. Idea: let $T_B$ denote the time of first hitting state B (if ever), and let $T$ be the time of first hitting some state $i \in \{1, 2, \ldots, 8\}$. Observe that if $X_T \in \{2, 3\}$, then the walk must get there via B, which implies $T_B < \infty$. That $f_2 = f_3$ now follows from the following general fact.

General fact (Strong Markov Property): for a MC $X_0, X_1, \ldots$ with transition matrix $P$ and a state B, let $T_B$ be the first hitting time of B:

$$T_B = \begin{cases} \text{first } n \text{ such that } X_n = B, & \text{if any,} \\ \infty, & \text{if none.} \end{cases}$$

Then $(X_{T_B}, X_{T_B+1}, \ldots)$ given $T_B < \infty$ is a MC with transition matrix $P$ and initial state B. In particular, the walk restarted at B sees squares 2 and 3 as mirror images (reflect across the diagonal through B and D), so it is equally likely to exit at 2 or at 3; hence $f_2 = f_3$.

Let $f_{ij} = P_i(X_T = j)$ denote the probability, starting at state $i$, of exiting at square $j$, so that $f_j = f_{Aj}$. From the above symmetries, and the obvious fact that $P_A(T < \infty) = 1$, we see

$$f_1 + 2f_2 + f_4 = \tfrac12.$$

By conditioning on $X_1$, which is called first step analysis,

$$f_{A1} = \tfrac14 \cdot 0 + \tfrac14 \cdot 1 + \tfrac14 f_{B1} + \tfrac14 f_{D1}, \qquad \text{so} \qquad f_1 = \tfrac14 + \tfrac14 f_2 + \tfrac14 f_2 \quad (\text{by symmetry, } f_{B1} = f_{D1} = f_2);$$

$$f_{A2} = \tfrac14 \cdot 0 + \tfrac14 \cdot 0 + \tfrac14 f_{B2} + \tfrac14 f_{D2}, \qquad \text{so} \qquad f_2 = \tfrac14 f_1 + \tfrac14 f_4 \quad (\text{by symmetry, } f_{B2} = f_1,\; f_{D2} = f_4).$$

Solving the system

$$f_1 + 2f_2 + f_4 = \tfrac12, \qquad f_1 = \tfrac14 + \tfrac14 f_2 + \tfrac14 f_2, \qquad f_2 = \tfrac14 f_1 + \tfrac14 f_4$$

yields $f_1 = 7/24$, $f_2 = 1/12$, $f_4 = 1/24$. The remaining exit probabilities follow by symmetry: $f_5 = 7/24$, $f_3 = f_6 = f_7 = 1/12$, and $f_8 = 1/24$; the eight probabilities sum to 1, as they must.
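The full system can also be solved exactly without exploiting symmetry. A minimal sketch (the matrix indexing is my own): collect the interior-to-interior transition probabilities in $Q$ and the interior-to-boundary probabilities in $R$; first step analysis in matrix form reads $F = QF + R$, hence $F = (I - Q)^{-1}R$, and the row of $F$ for start A recovers the exit distribution above.

```python
import numpy as np

# Interior order: A, B, C, D.  Q holds interior-to-interior transition
# probabilities, R holds interior-to-boundary ones (columns = squares 1..8).
Q = 0.25 * np.array([
    [0, 1, 0, 1],   # A -> B, D
    [1, 0, 1, 0],   # B -> A, C
    [0, 1, 0, 1],   # C -> B, D
    [1, 0, 1, 0],   # D -> A, C
])
R = 0.25 * np.array([
    # squares: 1  2  3  4  5  6  7  8
    [1, 0, 0, 0, 1, 0, 0, 0],   # A -> 1, 5
    [0, 1, 1, 0, 0, 0, 0, 0],   # B -> 2, 3
    [0, 0, 0, 1, 0, 0, 0, 1],   # C -> 4, 8
    [0, 0, 0, 0, 0, 1, 1, 0],   # D -> 6, 7
])
# First step analysis in matrix form: F = Q F + R, i.e. (I - Q) F = R,
# where F[i, j] = P_i(X_T = j).
F = np.linalg.solve(np.eye(4) - Q, R)
print(F[0] * 24)   # row for start A: [7, 2, 2, 1, 7, 2, 2, 1]
```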
