Appendix D. Solutions of exercises


(i) The process is (a) stochastic (randomness in the choice of doors the mouse uses), involves
(b) state changes at discrete times (sound of the bell), and there are three (c) discrete
states (the three rooms the mouse can be in). So we have a discrete time and discrete
state space stochastic process. Is it Markovian and homogeneous? Given a state (room)
at any given time, the transition made at the next time step depends only on the
present room but not on the rooms the mouse was in at prior time steps, so the chain is
Markovian. Finally, the single time step transition probabilities do not depend on the
time on the clock, so the Markov chain is homogeneous.
[Figure: the three rooms A, B and C, with the connecting doors.]
Next we calculate the matrix $W_{XY}$ (probability to be in state $X$ at the next step if in state $Y$ at the present step). The state space is $S = \{A, B, C\}$. Since the mouse always changes room, we have $W_{AA} = W_{BB} = W_{CC} = 0$. Next the six non-diagonal elements:

from state A: 2 doors to B, 1 door to C, hence $W_{BA} = 2/3$ and $W_{CA} = 1/3$.
from state B: 2 doors to A, 1 door to C, hence $W_{AB} = 2/3$ and $W_{CB} = 1/3$.
from state C: 1 door to A, 1 door to B, hence $W_{AC} = 1/2$ and $W_{BC} = 1/2$.
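For readers who like to double-check such bookkeeping numerically, here is a minimal sketch (Python with numpy; the array layout and variable names are my own choices, not part of the exercise) that encodes this $W$ and verifies that every column sums to one:

```python
import numpy as np

# W[X, Y] = probability that the mouse is in room X at the next bell,
# given it is in room Y now; rooms ordered (A, B, C) as derived above.
W = np.array([
    [0,   2/3, 1/2],   # to room A
    [2/3, 0,   1/2],   # to room B
    [1/3, 1/3, 0  ],   # to room C
])

# The mouse always moves somewhere, so each column must sum to one.
assert np.allclose(W.sum(axis=0), 1.0)
print(W)
```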
(ii) The process is (a) stochastic (randomness in the roulette), involves (b) state changes
at discrete times (spin of the wheel), and (c) there are discrete states (capital evolves
in multiples of 10). So we have a discrete time and discrete state space stochastic
process. Note that S is not finite (no upper limit to capital). Is the process Markovian
and homogeneous? Given a state (capital) at any given time, the transition made at the
next time step depends only on the present capital and what is generated by the wheel
but not on the capital held by the player at prior time steps, so the chain is Markovian.
Finally, the single time step transition probabilities (numbers generated by the wheel)
do not depend on the time on the clock, so the Markov chain is homogeneous.
Next we calculate $W_{XY}$ (probability to have $X$ pounds at the next step, if the present capital is $Y$ pounds). Suppose we have $Y$ pounds now, and spin the wheel:

odd number generated: probability 1/2, resulting next value $X = Y + 10$
even number generated: probability 1/2, resulting next value $X = Y - 10$

Combined: $W_{Y+10,Y} = \frac{1}{2}$, $W_{Y-10,Y} = \frac{1}{2}$, and $W_{XY} = 0$ for $X \neq Y \pm 10$. Equivalently: $W_{XY} = \frac{1}{2}\delta_{X,Y+10} + \frac{1}{2}\delta_{X,Y-10}$. However, we cannot go into negative capital (it is a casino, not a lender), and in fact we need at least 10 pounds to place a bet. We have to adjust the transitions to take this into account:

$Y \geq 10$: bet possible, odd number generated: probability 1/2, resulting next value $X = Y + 10$
$Y \geq 10$: bet possible, even number generated: probability 1/2, resulting next value $X = Y - 10$
$Y < 10$: no more bets possible, next value $X = Y$.

This gives
$$Y \geq 10: \quad W_{XY} = \frac{1}{2}\,\delta_{X,Y+10} + \frac{1}{2}\,\delta_{X,Y-10} \qquad\qquad Y < 10: \quad W_{XY} = \delta_{X,Y}$$
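A quick way to get a feel for this chain is to simulate it. The sketch below (Python; the starting capital of 50 pounds and the horizon of 20 spins are arbitrary illustration choices, not part of the exercise) implements exactly the adjusted transition rule above:

```python
import random

def casino_step(capital: int) -> int:
    """One spin: win or lose 10 pounds with probability 1/2 each,
    unless the capital has dropped below 10, in which case nothing changes."""
    if capital < 10:                       # absorbing region: no more bets
        return capital
    return capital + 10 if random.random() < 0.5 else capital - 10

capital = 50
for t in range(20):
    capital = casino_step(capital)
print("capital after 20 spins:", capital)
```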
(iii) The process is (a) stochastic (randomness in picking balls), involves (b) state changes
at discrete times (the swaps), and (c) there are discrete states (the three configurations
shown above). So we have a discrete time and discrete state space stochastic process. Is
it Markovian and homogeneous? Given a state at any given time, the transition made
at the next time step depends only on this present state and which balls are taken from
the urns, but not on where the balls were at prior time steps, so the chain is Markovian.
Finally, the single time step transition probabilities (balls picked from the urns) do not
depend on the time on the clock, so the Markov chain is homogeneous.
[Figure: the three possible configurations of the balls over urns A and B, labelled state 1, state 2 and state 3.]
Next calculate $W_{XY}$ (probability to have state $X$ at the next step, if the present state is $Y$).

present state is 1: the only possible transition is $1 \to 2$, so $W_{11} = W_{31} = 0$, $W_{21} = 1$

present state is 2: let us check the possible colour draws $(c_A, c_B) \in \{(w,w), (w,b), (b,w), (b,b)\}$
$(c_A, c_B) = (w,w)$: probability $= \frac{1}{2}\cdot\frac{1}{3} = \frac{1}{6}$, new state 2
$(c_A, c_B) = (w,b)$: probability $= \frac{1}{2}\cdot\frac{2}{3} = \frac{1}{3}$, new state 3
$(c_A, c_B) = (b,w)$: probability $= \frac{1}{2}\cdot\frac{1}{3} = \frac{1}{6}$, new state 1
$(c_A, c_B) = (b,b)$: probability $= \frac{1}{2}\cdot\frac{2}{3} = \frac{1}{3}$, new state 2
hence: $W_{12} = 1/6$, $W_{22} = 1/6 + 1/3 = 1/2$, $W_{32} = 1/3$

present state is 3: always $c_A = b$, so let us check the possible colour draws $c_B \in \{w, b\}$
$c_B = w$: probability $= 2/3$, new state 2
$c_B = b$: probability $= 1/3$, new state 3
hence: $W_{13} = 0$, $W_{23} = 2/3$, $W_{33} = 1/3$
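As a sanity check, a minimal sketch (Python with exact fractions; the layout is my own choice, not part of the exercise) that collects these values into the matrix $W$ and verifies that each column sums to one:

```python
from fractions import Fraction as F

# W[X][Y] = probability of state X at the next step given present state Y,
# with states ordered 1, 2, 3 and the entries derived above.
W = [
    [F(0), F(1, 6), F(0)],      # to state 1
    [F(1), F(1, 2), F(2, 3)],   # to state 2
    [F(0), F(1, 3), F(1, 3)],   # to state 3
]

# For every present state Y the next-state probabilities must sum to one.
for Y in range(3):
    assert sum(W[X][Y] for X in range(3)) == 1
print(W)
```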
(iv) The process is (a) stochastic (randomly chosen strain), involves (b) changes at discrete
times (generations), and (c) there are discrete states (the N possible strains). So we
have a discrete time and discrete state space stochastic process. Is it Markovian and
homogeneous? Given a state (strain) at any given time, the transition made at the next
time step is fully random, so it does not even depend on the present state, let alone on earlier ones.
So the chain is Markovian. Finally, the single time step transition probabilities do not
depend on the time on the clock, so the Markov chain is homogeneous.
Next calculate $W_{XY}$ (probability to have strain $X$ at the next generation, if the present strain is $Y$). New strains are drawn fully randomly, each with probability $1/N$ to be chosen, so $W_{XY} = 1/N$ for all $X, Y \in \{1, \ldots, N\}$.
(v) Let $(P^m)_{ij} > 0$ for all $i, j \in \{1, \ldots, N\}$. We now prove that the same must be true for $P^{m+1}$. We define $z = \min_{i,j \leq N} (P^m)_{ij} > 0$ and use $\sum_k p_{ik} = 1$ for all $i \leq N$:
$$(P^{m+1})_{ij} = \sum_k p_{ik} (P^m)_{kj} \geq z \sum_k p_{ik} = z > 0$$
Via induction it now follows that $(P^n)_{ij} > 0$ for all $i, j \leq N$ and all $n \geq m$. $\square$
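The inequality is easy to check numerically. A small sketch (Python with numpy; the random 5×5 matrix is a generic test case of my own, not taken from the exercises) verifying the bound $(P^{m+1})_{ij} \geq \min_{k,j}(P^m)_{kj}$ over a range of powers:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 5x5 row-stochastic matrix (each row sums to one).
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

# Each entry of P^(m+1) is a convex combination (weights p_ik) of a column of
# P^m, so it can never drop below the smallest entry z of P^m.
for m in range(1, 8):
    z = np.linalg.matrix_power(P, m).min()
    assert (np.linalg.matrix_power(P, m + 1) >= z - 1e-12).all()
print("the lower bound z propagates from P^m to P^(m+1)")
```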
(vi) Consider the following matrix:
$$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \end{pmatrix}$$
Clearly $p_{ij} \geq 0$ for all $(i,j)$. Also we see that $\sum_j p_{ij} = 1$ for all $i \leq 4$. Hence $P$ is a stochastic matrix. Next calculate $P^2$ and $P^3$:
$$P^2 = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \end{pmatrix}$$
$$P^3 = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \end{pmatrix} = P$$
It now follows that for $m > 0$:
$$P^{2m} = P^2, \qquad P^{2m+1} = P$$
So we can go from any initial state to any final state (requiring an odd number of steps if going from $\{1,2\}$ to $\{3,4\}$ or vice versa, and requiring an even number of steps if going from $\{1,2\}$ to $\{1,2\}$ or from $\{3,4\}$ to $\{3,4\}$). Thus the system is ergodic. However, we see that there is no time $n$ such that all entries of $P^n$ are positive, hence the chain is not regular.
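A short numerical check of this alternation (Python with numpy; a sketch, with the number of powers inspected chosen arbitrarily):

```python
import numpy as np

# The 4x4 matrix from this exercise.
P = np.array([
    [0,   0,   0.5, 0.5],
    [0,   0,   0.5, 0.5],
    [0.5, 0.5, 0,   0  ],
    [0.5, 0.5, 0,   0  ],
])
P2 = P @ P

for n in range(1, 11):
    Pn = np.linalg.matrix_power(P, n)
    assert np.allclose(Pn, P if n % 2 == 1 else P2)   # powers alternate
    assert not (Pn > 0).all()                         # some entries stay zero
print("P^n alternates between P and P^2, so no power is strictly positive")
```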
(vii) Let there be an $i$ such that $p_{ij} = \delta_{ij}$ for all $j \in S$. We first prove by induction that $(P^n)_{ij} = \delta_{ij}$ for all $j \in S$ and all $n \geq 1$. The induction basis $n = 1$ is already given. Now the induction step. Suppose that $(P^n)_{ij} = \delta_{ij}$ for all $j \in S$. We now inspect $P^{n+1}$:
$$(P^{n+1})_{ij} = \sum_k (P^n)_{ik}\, p_{kj} = \sum_k \delta_{ik}\, p_{kj} = p_{ij} = \delta_{ij}$$
Hence the property holds for $n+1$, and via induction therefore for all $n \geq 1$. Since for any $n > 0$ there is thus always at least one row of $P^n$ (namely row $i$) for which all but one of the entries are zero, it can never be true that $(P^n)_{k\ell} > 0$ for all $k, \ell \in S$. The chain can therefore not be regular.
(viii) Let there be an $i$ such that $p_{ij} = \delta_{ij}$ for all $j \in S$. Define the vector $p$ with entries $p_j = \delta_{ij}$. It then follows that
$$\sum_j p_j\, p_{j\ell} = \sum_j \delta_{ij}\, p_{j\ell} = p_{i\ell} = \delta_{i\ell} = p_\ell$$
So $p$ satisfies the equation that defines stationary solutions of the Markov chain. Since its entries are non-negative and add up to one, it is a stationary state of the process.
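Both statements are easy to illustrate numerically. A sketch (Python with numpy; the 3×3 matrix is a toy example of my own with state 0 absorbing, not taken from the exercises):

```python
import numpy as np

# Toy chain in which state 0 is absorbing: p_{0j} = delta_{0j}.
P = np.array([
    [1.0, 0.0, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
])

# (vii): row 0 of every power of P remains the unit vector, so no power of P
# can have all entries positive and the chain cannot be regular.
for n in range(1, 8):
    assert np.allclose(np.linalg.matrix_power(P, n)[0], [1, 0, 0])

# (viii): the vector p with p_j = delta_{0j} is a stationary state.
p = np.array([1.0, 0.0, 0.0])
assert np.allclose(p @ P, p)
print("row 0 stays a delta vector; the delta vector is stationary")
```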
(ix) We calculated earlier the transition probabilities for the mouse in the language of $W_{XY}$. We now label the states as $(A, B, C) \to (1, 2, 3)$ and write the matrix $P$ with entries $p_{ij}$, where $p_{ij}$ is the probability to move to $j$ in one step when presently in $i$. So:

from state 1: $p_{11} = 0$, $p_{12} = 2/3$, $p_{13} = 1/3$
from state 2: $p_{21} = 2/3$, $p_{22} = 0$, $p_{23} = 1/3$
from state 3: $p_{31} = 1/2$, $p_{32} = 1/2$, $p_{33} = 0$

giving
$$P = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 2/3 & 0 & 1/3 \\ 1/2 & 1/2 & 0 \end{pmatrix}$$
Let us inspect ergodicity and regularity. To do so we calculate $P^2$:
$$P^2 = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 2/3 & 0 & 1/3 \\ 1/2 & 1/2 & 0 \end{pmatrix} \begin{pmatrix} 0 & 2/3 & 1/3 \\ 2/3 & 0 & 1/3 \\ 1/2 & 1/2 & 0 \end{pmatrix} = \begin{pmatrix} 11/18 & 1/6 & 2/9 \\ 1/6 & 11/18 & 2/9 \\ 1/3 & 1/3 & 1/3 \end{pmatrix}$$
$P^2$ has strictly positive entries, so the Markov chain is regular and hence certainly ergodic. Since chains with absorbing states cannot be ergodic, it follows that there is no absorbing state (this is also clear by direct inspection of $P$: no row is of the form $p_{ij} = \delta_{ij}$, i.e. zero everywhere except for a single 1 on the diagonal, which is the signature of an absorbing state).
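A sketch that reproduces this $P^2$ with exact fractions (Python; layout my own, not part of the original solution):

```python
from fractions import Fraction as F

# Mouse transition matrix P, rows = present room, columns = next room.
P = [
    [F(0),    F(2, 3), F(1, 3)],
    [F(2, 3), F(0),    F(1, 3)],
    [F(1, 2), F(1, 2), F(0)],
]

# P^2 computed exactly.
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert P2 == [
    [F(11, 18), F(1, 6),   F(2, 9)],
    [F(1, 6),   F(11, 18), F(2, 9)],
    [F(1, 3),   F(1, 3),   F(1, 3)],
]
assert all(x > 0 for row in P2 for x in row)   # strictly positive, hence regular
print(P2)
```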
(x) We found in an earlier exercise that the transition probabilities for the fair casino were
$$Y \geq 10: \quad W_{XY} = \frac{1}{2}\,\delta_{X,Y+10} + \frac{1}{2}\,\delta_{X,Y-10} \qquad\qquad Y < 10: \quad W_{XY} = \delta_{X,Y}$$
The set of states here is $S = \{0, 1, 2, \ldots\}$. Translation into the language of $p_{ij}$ gives
$$i \geq 10: \quad p_{ij} = \frac{1}{2}\,\delta_{j,i+10} + \frac{1}{2}\,\delta_{j,i-10} \qquad\qquad i < 10: \quad p_{ij} = \delta_{ij}$$
We see that there are absorbing states, namely all $i \in \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}$. So it is immediately clear that this process can be neither ergodic nor regular.
(xi) We found earlier the transition probabilities for the gambling banker:
$$W_{11} = 0, \quad W_{21} = 1, \quad W_{31} = 0$$
$$W_{12} = 1/6, \quad W_{22} = 1/2, \quad W_{32} = 1/3$$
$$W_{13} = 0, \quad W_{23} = 2/3, \quad W_{33} = 1/3$$
In the language of $\{p_{ij}\}$ this implies
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix}$$
Eigenvalues? Work out the eigenvalue polynomial (left- and right eigenvalues are the same):
$$\mathrm{Det}\begin{pmatrix} -\lambda & 1 & 0 \\ 1/6 & \frac{1}{2}-\lambda & 1/3 \\ 0 & 2/3 & \frac{1}{3}-\lambda \end{pmatrix} = 0: \qquad \lambda^3 - \frac{5}{6}\lambda^2 - \frac{2}{9}\lambda + \frac{1}{18} = 0$$
We know $\lambda = 1$ must be a solution, and so it is. This helps us to factorize further:
$$\lambda^3 - \frac{5}{6}\lambda^2 - \frac{2}{9}\lambda + \frac{1}{18} = (\lambda - 1)(\lambda - a)(\lambda - b)$$
$$\lambda^3 - \frac{5}{6}\lambda^2 - \frac{2}{9}\lambda + \frac{1}{18} = \lambda^3 - (a+b)\lambda^2 + ab\,\lambda - \lambda^2 + (a+b)\lambda - ab$$
$$\lambda^3 - \frac{5}{6}\lambda^2 - \frac{2}{9}\lambda + \frac{1}{18} = \lambda^3 - (a+b+1)\lambda^2 + (a+b+ab)\lambda - ab$$
So $a + b = -\frac{1}{6}$ and $ab = -\frac{1}{18}$, giving $a^2 + \frac{1}{6}a - \frac{1}{18} = 0$, and hence $a \in \{-1/3, 1/6\}$.
Conclusion: the three eigenvalues of $P$ are $\{-1/3, 1/6, 1\}$.
Let us inspect ergodicity and regularity. If we calculate $P^2$ we find
$$P^2 = \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1/6 & 1/2 & 1/3 \\ 0 & 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 1/6 & 1/2 & 1/3 \\ 1/12 & 23/36 & 5/18 \\ 1/9 & 5/9 & 1/3 \end{pmatrix}$$
So there exists an integer $n \geq 1$ such that all entries of $P^n$ are positive. This confirms that $P$ is regular, and therefore also ergodic. Since it is regular it cannot have absorbing states.
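A numerical cross-check of both the eigenvalues and the regularity claim (Python with numpy; a sketch, not part of the original solution):

```python
import numpy as np

# Transition matrix of the gambling banker.
P = np.array([
    [0,   1,   0  ],
    [1/6, 1/2, 1/3],
    [0,   2/3, 1/3],
])

eigvals = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(eigvals, [-1/3, 1/6, 1])        # matches the factorization above
print(eigvals)

assert (np.linalg.matrix_power(P, 2) > 0).all()    # P^2 strictly positive: regular
```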
(xii) For the mutating virus we have found that $W_{XY} = 1/N$ for all $X, Y \in \{1, \ldots, N\}$. In terms of $P$ this gives $p_{ij} = \frac{1}{N}$ for all $i, j \in \{1, \ldots, N\}$. Eigenvalues are calculated easily:
$$\lambda x_i = \sum_{j=1}^N \frac{1}{N}\, x_j \qquad \text{hence:} \quad \lambda = 0 \ \ \text{or} \ \ x_i = x_j \ \ \forall\, i, j$$
If $\lambda \neq 0$ the eigenvector is $(1, 1, \ldots, 1)$ (modulo normalization), substitution of which into the eigenvalue equation immediately gives $\lambda = 1$. Conclusion: the eigenvalues of $P$ are $\{0, 1\}$. The matrix $P$ has strictly positive entries, so the chain is regular and therefore also ergodic. Since it is regular it cannot have absorbing states.
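A quick check with numpy (a sketch; the strain count N = 5 is an arbitrary illustration choice):

```python
import numpy as np

N = 5
P = np.full((N, N), 1 / N)          # p_ij = 1/N for all i, j

# P is symmetric, so the symmetric eigensolver applies; it returns the
# eigenvalues in ascending order: N-1 zeros and a single eigenvalue 1.
eigvals = np.linalg.eigvalsh(P)
assert np.allclose(eigvals, [0] * (N - 1) + [1])
print(eigvals)
```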
(xiii) Back to the mouse. Eigenvalues of $P$? Work out the eigenvalue polynomial (left- and right eigenvalues are the same):
$$\mathrm{Det}\begin{pmatrix} -\lambda & 2/3 & 1/3 \\ 2/3 & -\lambda & 1/3 \\ 1/2 & 1/2 & -\lambda \end{pmatrix} = 0: \qquad \lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = 0$$
We know $\lambda = 1$ must be a solution, and so it is. This helps us to factorize further:
$$\lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = (\lambda - 1)(\lambda - a)(\lambda - b)$$
$$\lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = (\lambda - 1)\left(\lambda^2 - (a+b)\lambda + ab\right)$$
$$\lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = \lambda^3 - (a+b)\lambda^2 + ab\,\lambda - \lambda^2 + (a+b)\lambda - ab$$
$$\lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = \lambda^3 - (a+b+1)\lambda^2 + (a+b+ab)\lambda - ab$$
So $b = -a - 1$, leading to
$$\lambda^3 - \frac{7}{9}\lambda - \frac{2}{9} = \lambda^3 - (a^2 + a + 1)\lambda + a^2 + a$$
So $a^2 + a = -\frac{2}{9}$, giving $a = -\frac{1}{2} \pm \frac{1}{6}$, so $a \in \{-2/3, -1/3\}$.
Conclusion: the three eigenvalues of $P$ are $\{-2/3, -1/3, 1\}$.
Stationary states are left eigenvectors of $P$ with eigenvalue one and strictly non-negative entries. Let us calculate the left eigenvector(s) of $P$ with eigenvalue one:
$$x_1 = \frac{2}{3}x_2 + \frac{1}{2}x_3 \qquad\Leftrightarrow\qquad 6x_1 = 4x_2 + 3x_3$$
$$x_2 = \frac{2}{3}x_1 + \frac{1}{2}x_3 \qquad\Leftrightarrow\qquad 6x_2 = 4x_1 + 3x_3$$
$$x_3 = \frac{1}{3}x_1 + \frac{1}{3}x_2 \qquad\Leftrightarrow\qquad 3x_3 = x_1 + x_2$$
Combining the first two gives $x_1 = x_2$, upon which the third equation gives $x_3 = \frac{2}{3}x_1$. Hence the left eigenvector (modulo normalization) for $\lambda = 1$ is $(x_1, x_2, x_3) = (3, 3, 2)$. All entries are non-negative, and we can normalize to get probabilities that add up to one, giving us the stationary state $(p_1, p_2, p_3) = (\frac{3}{8}, \frac{3}{8}, \frac{1}{4})$. This state is unique. Asymptotically the mouse will thus spend about $\frac{3}{8}$ of its life in room A, $\frac{3}{8}$ of its life in room B, and $\frac{1}{4}$ of its life in room C.
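The same stationary state can be obtained numerically as the left eigenvector of $P$ for eigenvalue one. A sketch (Python with numpy; names and tolerances are my own choices):

```python
import numpy as np

# Mouse transition matrix (rows = present room, columns = next room).
P = np.array([
    [0,   2/3, 1/3],
    [2/3, 0,   1/3],
    [1/2, 1/2, 0  ],
])

# Left eigenvector of P for eigenvalue 1 = right eigenvector of P transposed.
w, V = np.linalg.eig(P.T)
x = V[:, np.argmin(np.abs(w - 1))].real
p = x / x.sum()                         # normalize to a probability vector
assert np.allclose(p, [3/8, 3/8, 1/4])
print(p)
```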
(xiv) In the case of the gambling banker we know that there is an eigenvalue $\lambda = 1$ (see earlier exercise). Stationary states are left eigenvectors of $P$ with eigenvalue one (i.e. right eigenvectors of $P^\dagger$ with eigenvalue one), so let us calculate these from $x = P^\dagger x$:
$$P^\dagger = \begin{pmatrix} 0 & 1/6 & 0 \\ 1 & 1/2 & 2/3 \\ 0 & 1/3 & 1/3 \end{pmatrix} \qquad\Rightarrow\qquad \begin{cases} x_1 = x_2/6 \\ x_2 = x_1 + x_2/2 + 2x_3/3 \\ x_3 = x_2/3 + x_3/3 \end{cases} \qquad\Rightarrow\qquad \begin{cases} x_1 = x_2/6 \\ x_2 = 2x_3 \\ 2x_3 = x_2 \end{cases}$$
We find that there is just one eigenvector (modulo normalization): $(x_1, x_2, x_3) = (1, 6, 3)$. All entries are non-negative, and after normalization $\sum_i p_i = 1$ we have found the (unique) stationary state: $(p_1, p_2, p_3) = (\frac{1}{10}, \frac{3}{5}, \frac{3}{10})$. Asymptotically the probability for state 1 to come up is just $1/10$, so the average gain for the banker per time step (in millions) is $9\,p_1 - 1\cdot(p_2 + p_3) = \frac{9}{10} - \frac{9}{10} = 0$. For large times this is exactly the borderline case, where loss and gain balance out. Had he instead entered a bet where he wins e.g. 8 million each time state 1 comes up, and pays 1 million when states 2 or 3 show up, then he would have lost millions.
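A numerical version of this calculation (Python with numpy; a sketch, using the payoff of +9 per occurrence of state 1 and −1 otherwise as stated above):

```python
import numpy as np

# Gambling banker transition matrix.
P = np.array([
    [0,   1,   0  ],
    [1/6, 1/2, 1/3],
    [0,   2/3, 1/3],
])

# Stationary state: left eigenvector of P for eigenvalue 1, normalized.
w, V = np.linalg.eig(P.T)
x = V[:, np.argmin(np.abs(w - 1))].real
p = x / x.sum()
assert np.allclose(p, [1/10, 3/5, 3/10])

# Average gain per time step (in millions): +9 on state 1, -1 on states 2 and 3.
gain = 9 * p[0] - (p[1] + p[2])
print(p, round(gain, 12))                  # gain is 0: the borderline case
```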
(xv) For the mutating virus we have shown that there is an eigenvalue $\lambda = 1$, and we also determined that for $\lambda = 1$ the eigenvector is $(1, 1, \ldots, 1)$. Since here $P$ is symmetric, left- and right eigenvectors are identical. Normalizing the eigenvector such that $\sum_i p_i = 1$ gives the stationary state $p_i = 1/N$ for all $i \in \{1, \ldots, N\}$. This is a proper stationary state, since all entries are non-negative. It is unique, since we found only one eigenvector for $\lambda = 1$. We now know that asymptotically the probability to find the virus in any of its $N$ strains is $1/N$ (all strains equally likely).

Can we also calculate the probabilities for finite times? Suppose the virus is in strain $k$ at time zero, so $p_i(0) = \delta_{ik}$. After $n$ iterations the probability to find it in strain $k$ is
$$p_k(n) = \sum_i p_i(0)\,(P^n)_{ik} = \sum_i \delta_{ik}\,(P^n)_{ik} = (P^n)_{kk}$$
So we require the powers of $P$:
$$(P^2)_{ij} = \sum_{\ell=1}^N p_{i\ell}\, p_{\ell j} = \sum_{\ell=1}^N \left(\frac{1}{N}\right)^2 = \frac{1}{N} = p_{ij}$$
We see that $P^2 = P$, and hence $P^n = P$ for all $n \geq 1$. From this we deduce that $p_k(n) = (P^n)_{kk} = p_{kk} = 1/N$. So the probability for the virus to be in strain $k$ at any later time is $1/N$. This should not be a surprise, given the simple form of the transition matrix $p_{ij}$; the problem becomes more interesting and complicated if we limit the possible mutations within a single generation of the virus.
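A quick verification of $P^2 = P$ and of the finite-time result (Python with numpy; the strain count $N = 6$, the starting strain and the number of generations are arbitrary illustration choices):

```python
import numpy as np

N = 6
P = np.full((N, N), 1 / N)               # p_ij = 1/N

assert np.allclose(P @ P, P)             # P^2 = P, hence P^n = P for all n >= 1

# Start in a definite strain k; after any n >= 1 generations the distribution
# over strains is already uniform.
k = 2
p0 = np.zeros(N); p0[k] = 1.0
pn = p0 @ np.linalg.matrix_power(P, 7)
assert np.allclose(pn, 1 / N)
print(pn)
```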