
1. Let $\Omega_0 := \{0, 1\}$ with $\mathcal{F}_0$ the complete $\sigma$-field and $P_0$ the uniform measure. Let $(\Omega, \mathcal{F}, P) = (\Omega_0, \mathcal{F}_0, P_0)^{\otimes \infty}$ be the product space corresponding to an infinite sequence of independent fair coin flips. Let $Q_0$ be the measure on $(\Omega_0, \mathcal{F}_0)$ giving measure $2/3$ to $1$ and $1/3$ to $0$, and let $Q = Q_0^{\otimes \infty}$ be the corresponding measure on sequences. Let $\mathcal{F}_n = \sigma(\omega_1, \ldots, \omega_n)$ be the $\sigma$-field of information in the first $n$ coordinates. Let $Z_n : \Omega \to \mathbb{R}$ be the random variable defined by $Z_n(\omega) := \frac{dQ}{dP}(\omega)$ on the $\sigma$-field $\mathcal{F}_n$. We do not know whether $Q \ll P$ on $\mathcal{F}$, but let $Z := \limsup_n Z_n$ be our candidate for $dQ/dP$ on $\mathcal{F}$.

(a) Give a formula for $Z_n(\omega)$ in terms of the first $n$ coordinates of $\omega$.

Solution: We define $a_n = |\{1 \le i \le n : \omega_i = 0\}| = n - \sum_{i=1}^{n} \omega_i$, the number of zeros among the first $n$ coordinates. Then
$$Z_n(\omega) = \left(\frac{2}{3}\right)^{a_n} \left(\frac{4}{3}\right)^{n - a_n} = \frac{4^n}{3^n \, 2^{a_n}}.$$
(b) Give a formula for $Z(\omega)$.

Solution:
$$Z(\omega) = \limsup_{n \to \infty} \frac{4^n}{3^n \, 2^{a_n}}.$$

(c) What is the law of the random variable $Z$ on $(\Omega, \mathcal{F}, P)$?

Solution: By the Strong Law of Large Numbers,
$$\frac{a_n}{n} \to \frac{1}{2} \quad \text{a.s. as } n \to \infty.$$
Thus
$$\lim_n Z_n = \lim_n \left(\frac{4}{3 \cdot 2^{a_n/n}}\right)^{n}.$$
Since $\frac{4}{3 \cdot 2^{a_n/n}} \to \frac{4}{3\sqrt{2}} < 1$ a.s., we have $Z = 0$ with probability 1.

(d) What is the law of the random variable $Z$ on $(\Omega, \mathcal{F}, Q)$?

Solution: By the Strong Law of Large Numbers,
$$\frac{a_n}{n} \to \frac{1}{3} \quad \text{a.s. as } n \to \infty.$$
Thus
$$\lim_n Z_n = \lim_n \left(\frac{4}{3 \cdot 2^{a_n/n}}\right)^{n}.$$
Since $\frac{4}{3 \cdot 2^{a_n/n}} \to \frac{4}{3 \cdot 2^{1/3}} > 1$ a.s., we have $Z = \infty$ with probability 1.
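As a quick numerical illustration (a minimal sketch; the helper `log_Zn` and its parameters are our own, not from the text), one can track $\log Z_n$ along a long simulated sequence under each measure: it drifts to $-\infty$ under $P$ and to $+\infty$ under $Q$.

```python
import numpy as np

def log_Zn(p_one, n, rng):
    """Return log Z_n = n log(4/3) - a_n log 2 for one simulated sequence
    whose coordinates equal 1 with probability p_one."""
    flips = rng.random(n) < p_one        # omega_i = 1 with probability p_one
    a_n = n - flips.sum()                # number of zeros among first n coordinates
    return n * np.log(4 / 3) - a_n * np.log(2)

rng = np.random.default_rng(0)
n = 10_000
print("log Z_n under P:", log_Zn(1 / 2, n, rng))   # strongly negative: Z_n -> 0
print("log Z_n under Q:", log_Zn(2 / 3, n, rng))   # strongly positive: Z_n -> infinity
```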

2. Durrett Exercises 5.1.9–5.1.10:

(a) Let $\mathrm{Var}(X \mid \mathcal{F}) := E(X^2 \mid \mathcal{F}) - (E(X \mid \mathcal{F}))^2$. Show that
$$\mathrm{Var}(X) = E\,\mathrm{Var}(X \mid \mathcal{F}) + \mathrm{Var}(E(X \mid \mathcal{F})).$$

Solution:
$$\begin{aligned}
E(\mathrm{Var}(X \mid \mathcal{F})) + \mathrm{Var}(E(X \mid \mathcal{F}))
&= E\big[E(X^2 \mid \mathcal{F}) - (E(X \mid \mathcal{F}))^2\big] + E\big[(E(X \mid \mathcal{F}))^2\big] - \big(E[E(X \mid \mathcal{F})]\big)^2 \\
&= E(X^2) - E\big[(E(X \mid \mathcal{F}))^2\big] + E\big[(E(X \mid \mathcal{F}))^2\big] - (E(X))^2 \\
&= E(X^2) - (E(X))^2 = \mathrm{Var}(X).
\end{aligned}$$

(b) Let $Y_1, Y_2, \ldots$ be IID with mean $\mu$ and variance $\sigma^2$. Let $N$ be a random positive integer, independent of the variables $\{Y_j\}$, with $EN^2 < \infty$, and let $X$ be the random sum defined by $X := \sum_{j=1}^{N} Y_j$. Show that
$$\mathrm{Var}(X) = \sigma^2 EN + \mu^2 \mathrm{Var}(N).$$

Solution: We first see that $E(\mathrm{Var}(X \mid N)) = E(N\sigma^2) = \sigma^2 EN$ and $\mathrm{Var}(E(X \mid N)) = \mathrm{Var}(N\mu) = \mu^2 \mathrm{Var}(N)$. Applying the decomposition from part (a) with $\mathcal{F} = \sigma(N)$ finishes the proof.
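As a sanity check (a Monte Carlo sketch with hypothetical choices: $Y_j$ exponential with mean $2$, and $N = 1 + \mathrm{Poisson}(5)$ so that $N$ is a positive integer), the empirical variance of the random sum should match $\sigma^2 EN + \mu^2 \mathrm{Var}(N)$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 2.0, 4.0                   # Exponential with mean 2: mu = 2, sigma^2 = 4
lam = 5.0
trials = 100_000
N = 1 + rng.poisson(lam, size=trials)   # positive integer; EN = lam + 1, Var(N) = lam
X = np.array([rng.exponential(mu, size=n).sum() for n in N])
print("empirical Var(X):", X.var())
print("sigma^2 EN + mu^2 Var(N):", sigma2 * (lam + 1) + mu**2 * lam)   # = 44
```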

3. Define a probability model for the following conditional probability statement. Your model should contain a probability triple $(\Omega, \mathcal{F}, P)$, a sub-$\sigma$-field $\mathcal{F}_0 \subseteq \mathcal{F}$ of information, and an event $A$. Then you should compute the conditional probability $P(A \mid \mathcal{F}_0)$. You find a strange-looking coin. Having no idea what the probability $p$ of flipping HEADS with this coin is, you guess $p$ to be uniformly distributed on $[0, 1]$. You flip once and it comes up HEADS. What is the chance that your next flip is HEADS?

Solution: Assume that given $p$ the two flips are independent. Let $\Omega = \{H, T\} \times \{H, T\} \times [0, 1]$, so that the coordinates of $\omega$ may be interpreted as the first flip, the second flip, and the value of $p$. Let $\mathcal{C}$ be the class of all subsets of $\{H, T\}$ and take $\mathcal{F} = \mathcal{C} \otimes \mathcal{C} \otimes \mathcal{B}([0, 1])$. To make the flips conditionally independent Bernoulli($p$) given $p$, we define $P$ as follows. Let $\mathcal{W}$ be the class of sets that are the product of a singleton in $\{H, T\}$, another singleton in $\{H, T\}$, and an interval in $[0, 1]$. We define $P$ on $\mathcal{W}$ by
$$P(X_1 = H, X_2 = H, p \in A) = \int_A x^2 \, dx$$
$$P(X_1 = H, X_2 = T, p \in A) = P(X_1 = T, X_2 = H, p \in A) = \int_A x(1 - x) \, dx$$
$$P(X_1 = T, X_2 = T, p \in A) = \int_A (1 - x)^2 \, dx.$$
The class $\mathcal{W}$ is a $\pi$-system, so this definition extends uniquely to $\sigma(\mathcal{W}) = \mathcal{F}$. We may now compute:
$$P(X_2 = H \mid X_1 = H) = \frac{P(X_1 = H, X_2 = H)}{P(X_1 = H)} = \frac{\int_0^1 x^2 \, dx}{\int_0^1 x \, dx} = \frac{1/3}{1/2} = \frac{2}{3}.$$
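The answer $2/3$ (Laplace's rule of succession for one observed head) is easy to confirm numerically; here is a minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 1_000_000
p = rng.random(trials)               # p ~ Uniform[0, 1]
flip1 = rng.random(trials) < p       # first flip: HEADS with probability p
flip2 = rng.random(trials) < p       # second flip, conditionally independent given p
print(flip2[flip1].mean())           # P(X2 = H | X1 = H), approximately 2/3
```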

4. A data set includes a measurement of a certain blood protein and a task performance score for each of 100 subjects. (a) Make a probability model (a space, sigma-field, measure, sub-sigma-field, random variable, etc.) in which the measurements for different subjects are independent but no assumptions are made about the dependence between the two measures for each subject or about their distribution. (b) Interpret the statement "three quarters of the variation in performance may be explained by the level of blood protein A" as a statement about variances and conditional variances in the model.

Solution: Let $\Omega = \{\text{all the subjects}\}$, and let $\mathcal{F}$ be the power set of $\Omega$. A reasonable probability measure can be defined on $\mathcal{F}$. Let $X$ be the random variable with $X(\omega)$ the amount of blood protein for subject $\omega$, and let $Y$ be the random variable with $Y(\omega)$ the task performance score for subject $\omega$. Define the sub-sigma-field $\mathcal{F}_0 = \sigma(X)$, the information contained in $X$.

According to problem 2 on this homework set, the variance of $Y$ can be decomposed into the variation due to known information, $\mathrm{Var}(E(Y \mid \mathcal{F}_0))$, and the expected variation not explained by the known information, $E[\mathrm{Var}(Y \mid \mathcal{F}_0)]$. Therefore, saying "three quarters of the variation in performance may be explained by the level of blood protein A" is equivalent to saying $\mathrm{Var}(E(Y \mid \mathcal{F}_0)) = \frac{3}{4} \mathrm{Var}(Y)$.

Note: this completely general solution allows for the possibility that $Y$ is a deterministic function of $X$, which is consistent with any data set for which $X$ is measured so accurately that no two $X$-values coincide. It is a different question whether this assumption is reasonable!
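As a concrete illustration of the identity (a synthetic toy model, not the data set from the problem), the sketch below builds $Y$ from $X$ plus independent noise calibrated so that $\mathrm{Var}(E(Y \mid X)) / \mathrm{Var}(Y) = 3/4$, and checks the ratio empirically:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
X = rng.normal(size=n)
signal = 2.0 * X                                       # E[Y | X]; variance 4
Y = signal + rng.normal(scale=np.sqrt(4 / 3), size=n)  # noise variance 4/3
# Var(E[Y|X]) / Var(Y) should be 4 / (4 + 4/3) = 3/4
print(signal.var() / Y.var())                          # approximately 0.75
```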

1. Let $\Omega = [-1, 1]$ and let $\mathcal{F}$ be the $\sigma$-field generated by the following sets: any set $A \cup -A$ with $A$ a Borel subset of $[0, 1]$, as well as all singleton sets $\{x\}$ with $x \in [-1, 1]$. Note that $\mathcal{F}$ separates points. Let $\mu = \frac{1+x}{2}\,dx$ where $dx$ is Lebesgue measure on $[-1, 1]$. What is the restriction of $\mu$ to the $\sigma$-field $\mathcal{F}$?

Solution: Let $B$ be a Borel subset of $[0, 1]$. Then $B \cup -B$ is in $\mathcal{F}$ and by definition of $\mu$ we have
$$\mu(B \cup -B) = \int_B \frac{1+x}{2}\,dx + \int_{-B} \frac{1+x}{2}\,dx = \int_B \frac{1+x}{2}\,dx + \int_B \frac{1-x}{2}\,dx = \int_B 1\,dx = m(B),$$
where $m$ is Lebesgue measure on $([0, 1], \mathcal{B})$. Thus one good answer to the question is that $\mu$ is the measure defined by $B \cup -B \mapsto m(B)$. Another good answer is to give the density of $\mu$ with respect to a known reference measure. Let $m'$ denote one half times Lebesgue measure on $[-1, 1]$, restricted to $\mathcal{F}$. If we feel comfortable with $m'$ then, after verifying that $\mu|_{\mathcal{F}} \ll m'$, it suffices to give the density of $\mu$. This turns out to be the constant 1; that is, $\mu = m'$ on $\mathcal{F}$. In general, it is easy to see that if $\mu$ has density $f$ on $[-1, 1]$ then $\mu|_{\mathcal{F}}$ has density $f(x) + f(-x)$ with respect to $m'$: the function $f(x) + f(-x)$ is clearly $\mathcal{F}$-measurable and gives the correct integrals over sets in $\mathcal{F}$.
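As a quick check of the computation $\mu(B \cup -B) = m(B)$ (the interval endpoints below are arbitrary choices of ours), one can integrate the density exactly:

```python
# Exact check that mu(B ∪ -B) = m(B) for B = [a, b] contained in [0, 1].
def F(x):
    """Antiderivative of the density (1 + x) / 2."""
    return x / 2 + x**2 / 4

a, b = 0.2, 0.7
mu_symmetric = (F(b) - F(a)) + (F(-a) - F(-b))   # mu(B) + mu(-B)
print(mu_symmetric, "vs m(B) =", b - a)          # both equal 0.5
```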

2. Fix integers $-M < k < M$. Let $\{S_n\}$ be a simple random walk whose increments $\{X_n\}$ are IID fair $\pm 1$ coin flips, started from $S_0 = k$. Let $\tau := \inf\{n : |S_n| = M\}$ be the hitting time of $\{-M, M\}$. Use Bayes' rule to compute $P(X_1 = 1 \mid S_\tau = M)$. What is the limit as $M \to \infty$? Now assume $k > 0$. Let $P_k$ denote the law of simple random walk starting from $k$ and let $\tau := \inf\{n : S_n \in \{0, M\}\}$ be the time to hit $0$ or $M$. Compute $P_k(X_1 = 1 \mid S_\tau = M)$. Again, identify the limit as $M \to \infty$. The process whose law is the limit, as $M \to \infty$, of the laws of $\{S_n\}$ conditioned to hit $M$ before $0$ is known as simple random walk conditioned to escape to $+\infty$.

Solution: By Bayes' rule,
$$P(X_1 = 1 \mid S_\tau = M) = \frac{P(S_\tau = M \mid X_1 = 1)\, P(X_1 = 1)}{P(S_\tau = M)} = \frac{P(S_\tau = M \mid X_1 = 1)}{2\, P(S_\tau = M)}.$$

Now, $P(S_\tau = M)$ is the probability that the walk reaches $M$ before reaching $-M$ when starting from $k$. This is equal to the probability that the walk reaches $M - k$ before reaching $-(M + k)$ when starting from $0$, which from Durrett, Chapter 3, we know is $(M + k)/((M - k) + (M + k)) = (M + k)/2M$. Similarly, $P(S_\tau = M \mid X_1 = 1)$ is the probability that the walk reaches $M$ before reaching $-M$ when starting from $k + 1$. Following the above argument, this equals $(M + k + 1)/2M$. Therefore,
$$P(X_1 = 1 \mid S_\tau = M) = \frac{M + k + 1}{2(M + k)} \to \frac{1}{2} \quad \text{as } M \to \infty.$$

Next,
$$P_k(X_1 = 1 \mid S_\tau = M) = \frac{P_k(S_\tau = M \mid X_1 = 1)\, P_k(X_1 = 1)}{P_k(S_\tau = M)} = \frac{P_k(S_\tau = M \mid X_1 = 1)}{2\, P_k(S_\tau = M)}.$$

As before, $P_k(S_\tau = M)$ is the probability that the walk reaches $M$ before reaching $0$ when starting from $k$, which, by similar arguments, is equal to $k/M$, and $P_k(S_\tau = M \mid X_1 = 1)$ is the probability that the walk reaches $M$ before reaching $0$ when starting from $k + 1$, which equals $(k + 1)/M$. Thus
$$P_k(X_1 = 1 \mid S_\tau = M) = \frac{k + 1}{2k},$$
which does not depend on $M$, and hence stays the same as $M \to \infty$.
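Here is a Monte Carlo sketch of the second computation (the values of $k$ and $M$ are arbitrary choices of ours): among walks started from $k$ and absorbed at $\{0, M\}$ that end at $M$, the fraction whose first step was $+1$ should be close to $(k + 1)/2k$.

```python
import numpy as np

rng = np.random.default_rng(4)
k, M, trials = 3, 10, 100_000
hits_M = 0
first_up_given_hit = 0
for _ in range(trials):
    first = 2 * rng.integers(0, 2) - 1   # first increment X_1 = +1 or -1
    s = k + first
    while 0 < s < M:                     # run until absorbed at 0 or M
        s += 2 * rng.integers(0, 2) - 1
    if s == M:
        hits_M += 1
        first_up_given_hit += (first == 1)
print("empirical:", first_up_given_hit / hits_M)
print("(k+1)/(2k) =", (k + 1) / (2 * k))   # 2/3 for k = 3
```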

3. Durrett Exercise 2.11: Let $\{X_n\}$ and $\{Y_n\}$ be sequences, adapted to $\{\mathcal{F}_n\}$, of positive, integrable random variables. Suppose $E(X_{n+1} \mid \mathcal{F}_n) \le (1 + Y_n) X_n$ with $\sum_{n=1}^{\infty} Y_n < \infty$ almost surely. Prove that $X_n$ converges almost surely to a finite limit. Hint: find a closely related nonnegative supermartingale.

Solution: Our required supermartingale is
$$M_n = \frac{X_n}{\prod_{k=1}^{n-1} (1 + Y_k)}.$$
Clearly $M_n$ is positive for each $n$, and since the product in the denominator is at least $1$, $EM_n \le EX_n < \infty$. Also,
$$E(M_{n+1} \mid \mathcal{F}_n) = \frac{1}{\prod_{k=1}^{n} (1 + Y_k)}\, E(X_{n+1} \mid \mathcal{F}_n) \le \frac{(1 + Y_n)\, X_n}{\prod_{k=1}^{n} (1 + Y_k)} = M_n.$$
Being a nonnegative supermartingale, $M_n$ converges a.s. to some limit, call it $M$. Also, since $\sum_{n=1}^{\infty} Y_n < \infty$ a.s.,
$$\prod_{k=1}^{n} (1 + Y_k) \le e^{\sum_{k=1}^{n} Y_k} \le e^{\sum_{k=1}^{\infty} Y_k} < \infty.$$
Therefore, being an increasing sequence bounded above, $\prod_{k=1}^{n} (1 + Y_k)$ converges almost surely to some limit $Y$, say. Thus $X_n = M_n \prod_{k=1}^{n-1} (1 + Y_k)$ must converge almost surely to the finite limit $MY$.
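To see the construction in action, here is a toy simulation (our own example, not from the exercise): take $Y_n = 1/n^2$, which is summable, and $X_{n+1} = X_n (1 + Y_n) U_{n+1}$ with $U_n$ IID positive and $E[U] = 1$, so the hypothesis $E(X_{n+1} \mid \mathcal{F}_n) \le (1 + Y_n) X_n$ holds with equality.

```python
import numpy as np

rng = np.random.default_rng(5)
X, prod = 1.0, 1.0                       # X_1 and the running product of (1 + Y_k)
for n in range(1, 2001):
    U = rng.uniform(0.5, 1.5)            # positive noise with E[U] = 1
    X *= (1 + 1 / n**2) * U              # E(X_{n+1} | F_n) = (1 + Y_n) X_n
    prod *= 1 + 1 / n**2
    if n % 500 == 0:
        print(f"n={n}: X_n = {X:.3e}, M_n = {X / prod:.3e}")
# Both X_n and the supermartingale M_n = X_n / prod settle down; with this
# multiplicative noise the almost-sure limit happens to be 0.
```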
