3.1. Let {X_n}_{n≥0} be the gambler's ruin Markov chain with p = .45 and N = 6. That is,
if you currently have i > 0 dollars, in the next round the probability of winning $1 is .45 and
the probability of losing $1 is .55, and you stop playing if your fortune ever reaches $0 or $6.
Suppose the gambler starts with X_0 = 3 dollars. Use a computer to compute P^n for some
large values of n, and use this to approximate the probability that the gambler eventually
doubles his money. Explain how you got your answer.
(We'll learn how to calculate this probability exactly later in the chapter, but for now use the
above method to give a good approximation.)
Answer. Computing P^100 gives

$$P^{100} \approx \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.904769 & 5.7\times 10^{-8} & 0 & 9.3\times 10^{-8} & 0 & 3.8\times 10^{-8} & 0.095231 \\
0.788375 & 0 & 1.7\times 10^{-7} & 0 & 1.4\times 10^{-7} & 0 & 0.211625 \\
0.646116 & 1.4\times 10^{-7} & 0 & 2.3\times 10^{-7} & 0 & 9.3\times 10^{-8} & 0.353883 \\
0.472244 & 0 & 2.1\times 10^{-7} & 0 & 1.7\times 10^{-7} & 0 & 0.527755 \\
0.259734 & 8.5\times 10^{-8} & 0 & 1.4\times 10^{-7} & 0 & 5.7\times 10^{-8} & 0.740265 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$
Thus, by time 100 the gambler has gone broke with probability 0.646116 and doubled his
money with probability 0.353883. There is some small probability (less than 0.000001) that
the gambler has neither gone broke nor doubled his money by this time, so the probability
that he will ever double his money (before going broke) is very close to 0.353883.
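One way to carry out this computation is with numpy's matrix power (a sketch following the method described above; the variable names are my own):

```python
import numpy as np

# Gambler's ruin on {0, ..., 6} with win probability p = 0.45;
# states 0 and 6 are absorbing.
p, N = 0.45, 6
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i + 1] = p      # win $1
    P[i, i - 1] = 1 - p  # lose $1

# Row 3 of P^100 is the distribution of X_100 given X_0 = 3.
Pn = np.linalg.matrix_power(P, 100)
print(Pn[3, 0], Pn[3, 6])  # ≈ 0.646116 and 0.353883
```

Increasing the exponent beyond 100 changes these values only in the seventh decimal place, which is why P^100 already gives a good approximation of the absorption probabilities.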
3.2. In much the same way as we can view probability distributions on the state space
S as row vectors, we can think of functions f : S → R as column vectors. For instance, if
S = {1, 2, 3, 4} then the function f(x) = x^2 could be represented as

$$f = \begin{pmatrix} 1 \\ 4 \\ 9 \\ 16 \end{pmatrix}.$$
a) Show that (Pf)(i) (the i-th entry of the column vector Pf) equals E_i[f(X_1)].
Answer. The i-th entry of Pf is

$$(Pf)(i) = \sum_{j \in S} p_{i,j}\, f(j) = \sum_{j \in S} P_i(X_1 = j)\, f(j) = E_i[f(X_1)].$$
b) If a row vector μ is a probability distribution on S, what probabilistic quantity does
μP^n f represent?
Answer. Since μP^n is a row vector whose j-th entry is (μP^n)_j = P_μ(X_n = j), it
follows that

$$\mu P^n f = \sum_{j \in S} (\mu P^n)_j\, f(j) = \sum_{j \in S} P_\mu(X_n = j)\, f(j) = E_\mu[f(X_n)],$$

the expected value of f(X_n) when the chain is started from the initial distribution μ.
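Both identities are easy to check numerically. A quick sketch (the transition matrix P below is an arbitrary example of my own choosing, not one from the text):

```python
import numpy as np

# An arbitrary 4-state transition matrix on S = {1, 2, 3, 4}
# (each row sums to 1) and f(x) = x^2 as a column vector.
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.5, 0.0, 0.5, 0.0],
              [0.2, 0.2, 0.2, 0.4],
              [0.0, 0.3, 0.3, 0.4]])
f = np.array([1.0, 4.0, 9.0, 16.0])

# a) (Pf)(i) = sum_j p_{i,j} f(j), i.e. E_i[f(X_1)].
Pf = P @ f
assert np.allclose(
    Pf, [sum(P[i, j] * f[j] for j in range(4)) for i in range(4)])

# b) mu P^n is the distribution of X_n when X_0 ~ mu,
# so dotting it with f gives E_mu[f(X_n)].
mu = np.array([0.25, 0.25, 0.25, 0.25])
n = 5
dist_n = mu @ np.linalg.matrix_power(P, n)  # distribution of X_n
expected_value = dist_n @ f                 # E_mu[f(X_n)]
print(expected_value)
```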
$$\det(P - \lambda I) = -\lambda^3 + \frac{4}{3}\lambda^2 - \frac{1}{4}\lambda - \frac{1}{12} = -(\lambda - 1)\left(\lambda - \frac{1}{2}\right)\left(\lambda + \frac{1}{6}\right).$$

Therefore, the eigenvalues are λ_1 = 1, λ_2 = 1/2 and λ_3 = −1/6. Corresponding eigenvectors are

$$\vec v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad
\vec v_2 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad
\vec v_3 = \begin{pmatrix} 1 \\ -4/3 \\ 1 \end{pmatrix}.$$

Therefore,

$$P^n = Q D^n Q^{-1} =
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & -4/3 \\ 1 & -1 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & (1/2)^n & 0 \\ 0 & 0 & (-1/6)^n \end{pmatrix}
\begin{pmatrix} 2/7 & 3/7 & 2/7 \\ 1/2 & 0 & -1/2 \\ 3/14 & -3/7 & 3/14 \end{pmatrix}$$

$$= \begin{pmatrix} 2/7 & 3/7 & 2/7 \\ 2/7 & 3/7 & 2/7 \\ 2/7 & 3/7 & 2/7 \end{pmatrix}
+ \left(\frac{1}{2}\right)^n \begin{pmatrix} 1/2 & 0 & -1/2 \\ 0 & 0 & 0 \\ -1/2 & 0 & 1/2 \end{pmatrix}
+ \left(-\frac{1}{6}\right)^n \begin{pmatrix} 3/14 & -3/7 & 3/14 \\ -2/7 & 4/7 & -2/7 \\ 3/14 & -3/7 & 3/14 \end{pmatrix}.$$
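The transition matrix P itself does not appear in this excerpt, but taking n = 1 in the decomposition recovers it, and numpy can confirm the whole computation (a sketch, with Q, D and Q^{-1} copied from above):

```python
import numpy as np

# Eigendecomposition P^n = Q D^n Q^{-1} from the solution above.
Q = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, -4.0 / 3.0],
              [1.0, -1.0, 1.0]])
Qinv = np.array([[2 / 7, 3 / 7, 2 / 7],
                 [1 / 2, 0.0, -1 / 2],
                 [3 / 14, -3 / 7, 3 / 14]])
eig = np.array([1.0, 1 / 2, -1 / 6])

# Sanity check: Qinv really is the inverse of Q.
assert np.allclose(Q @ Qinv, np.eye(3))

# n = 1 recovers the transition matrix P itself.
P = Q @ np.diag(eig) @ Qinv
print(np.round(P, 6))

# Direct matrix powers agree with Q D^n Q^{-1}, e.g. for n = 10.
n = 10
assert np.allclose(np.linalg.matrix_power(P, n), Q @ np.diag(eig**n) @ Qinv)
```

Multiplying out QDQ^{-1} gives P with rows (1/2, 1/2, 0), (1/3, 1/3, 1/3), (0, 1/2, 1/2), and since |λ_2|, |λ_3| < 1, the two transient terms vanish as n → ∞, leaving every row equal to the stationary distribution (2/7, 3/7, 2/7).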
Problems from the book:
This is equivalent to
For the second part of the problem, consider the gambler's ruin Markov chain on
{0, 1, 2, 3, 4} with win probability p = 1/2 and started from X_0 = 2, and let A_0 = {2},
A_1 = {1} and A_2 = {0, 4}. Then

$$P_2(X_3 = 0 \mid X_0 \in A_0,\, X_1 \in A_1,\, X_2 \in A_2) = 1,$$

since from state 1 the chain can only move to 0 or 2, so conditioning on X_2 ∈ A_2 forces
X_2 = 0, and 0 is absorbing. On the other hand,

$$P_2(X_3 = 0 \mid X_2 \in A_2) = P_2(X_2 = 0 \mid X_2 \in A_2)
= \frac{P_2(X_2 = 0)}{P_2(X_2 = 0) + P_2(X_2 = 4)} = \frac{1/4}{1/4 + 1/4} = \frac{1}{2}.$$
2.4. If X_0 = i then T_i = k if the first k − 1 steps of the walk stay at i and then the k-th step
goes to a different site. That is,