0 Голоса «за»0 Голоса «против»

Просмотров: 1159 стр.A Solution Manual for Brownian Motion can be useful for Student reference.

Jul 13, 2019

© © All Rights Reserved

PDF, TXT или читайте онлайн в Scribd

A Solution Manual for Brownian Motion can be useful for Student reference.

© All Rights Reserved

Просмотров: 1

A Solution Manual for Brownian Motion can be useful for Student reference.

© All Rights Reserved

- Neuromancer
- The E-Myth Revisited: Why Most Small Businesses Don't Work and
- How Not to Be Wrong: The Power of Mathematical Thinking
- Drive: The Surprising Truth About What Motivates Us
- Chaos: Making a New Science
- The Joy of x: A Guided Tour of Math, from One to Infinity
- How to Read a Person Like a Book
- Moonwalking with Einstein: The Art and Science of Remembering Everything
- The Wright Brothers
- The Other Einstein: A Novel
- The 6th Extinction
- The Housekeeper and the Professor: A Novel
- The Power of Discipline: 7 Ways it Can Change Your Life
- The 10X Rule: The Only Difference Between Success and Failure
- A Short History of Nearly Everything
- The Kiss Quotient: A Novel
- The End of Average: How We Succeed in a World That Values Sameness
- Made to Stick: Why Some Ideas Survive and Others Die
- Algorithms to Live By: The Computer Science of Human Decisions
- The Universe in a Nutshell

Вы находитесь на странице: 1из 159

de Gruyter Graduate, Berlin 2012

ISBN: 978–3–11–027889–7

Solution Manual

R.L. Schilling, L. Partzsch: Brownian Motion

Julian Hollender, Felix Lindner and Michael Schwarzenberger who supported us in the prepa-

ration of this solution manual. Daniel Tillich pointed out quite a few misprints and minor

errors.

Lothar Partzsch

2

Contents

19 On Diffusions 153

3

1 Robert Brown’s New Thing

Problem 1.1 (Solution) a) We show the result for Rd -valued random variables. Let ξ, η ∈ Rd .

By assumption,

ξ Xn ξ X

lim E exp [i ⟨( ), ( )⟩] = E exp [i ⟨( ), ( )⟩]

n→∞ η Yn η Y

⇐⇒ lim E exp [i⟨ξ, Xn ⟩ + i⟨η, Yn ⟩] = E exp [i⟨ξ, X⟩ + i⟨η, Y ⟩]

n→∞

d

lim E exp [i⟨η, Yn ⟩] = E exp [i⟨η, Y ⟩] or Yn Ð

→Y

n→∞

d

lim E exp [i⟨ξ, Xn ⟩] = E exp [i⟨ξ, X⟩] or Xn Ð

→ X.

n→∞

Since Xn á Yn we find

n→∞

n→∞

n→∞ n→∞

b) We have

1 almost surely d

Xn = X + ÐÐÐÐÐÐÐ→ X Ô⇒ Xn Ð

→X

n n→∞

1 almost surely d

Yn = 1 − Xn = 1 − − X ÐÐÐÐÐÐÐ→ 1 − X Ô⇒ Yn Ð

→1−X

n n→∞

almost surely d

Xn + Yn = 1 ÐÐÐÐÐÐÐ→ 1 Ô⇒ Xn + Yn Ð

→ 1.

n→∞

d d d

Xn Ð

→ X, Yn Ð

→ Y ∼ 1 − X, Xn + Yn Ð

→ 1.

d

Assume that (Xn , Yn ) Ð

→ (X, Y ). Since X á Y , we find for the distribution of X + Y :

Thus, X + Y ∼/ δ0 ∼ 1 = limn (Xn + Yn ) and this shows that we cannot have that

d

(Xn , Yn ) Ð

→ (X, Y ).

5

R.L. Schilling, L. Partzsch: Brownian Motion

d

c) If Xn á Yn and X á Y , then we have Xn + Yn Ð

→ X + Y : this follows since we have

for all ξ ∈ R:

n→∞ n→∞

n→∞ n→∞

= Ee iξX

Ee iξY

= E [eiξX eiξY ]

a)

= E eiξ(X+Y ) .

d

A similar (even easier) argument works if (Xn , Yn ) Ð

→ (X, Y ). Then we have

f (x, y) ∶= eiξ(x+y)

n→∞ n→∞

For a counterexample (if Xn and Yn are not independent), see part b).

implies X á Y and the d-convergence of the bivariate sequence (Xn , Yn ). This is a

consequence of the following

Lemma. Let (Xn )n⩾1 and (Yn )n⩾1 be sequences of random variables (or random

vectors) on the same probability space (Ω, A, P). If

d d

Xn á Yn for all n ⩾ 1 and Xn ÐÐÐ→ X and Yn ÐÐÐ→ Y,

n→∞ n→∞

d

then (Xn , Yn ) ÐÐÐ→ (X, Y ) and X á Y .

n→∞

Proof. Write φX , φY , φX,Y for the characteristic functions of X, Y and the pair

(X, Y ). By assumption

n→∞ n→∞

A similar statement is true for Yn and Y . For the pair we get, because of independence

n→∞ n→∞

n→∞

n→∞ n→∞

= Ee iξX

Ee iηY

= φX (ξ)φY (η).

6

Solution Manual. Last update June 12, 2017

Thus, φXn ,Yn (ξ, η) → h(ξ, η) = φX (ξ)φY (η). Since h is continuous at the origin

(ξ, η) = 0 and h(0, 0) = 1, we conclude from Lévy’s continuity theorem that h is a

d

(bivariate) characteristic function and that (Xn , Yn ) Ð

→ (X, Y ). Moreover,

iz

∣eiz − 1∣ = ∣∫ eζ dζ∣ ⩽ sup ∣eiy ∣ ∣z∣ = ∣z∣ (*)

0 ∣y∣⩽∣z∣

Thus,

Since limn→∞ E ei⟨ξ,Xn ⟩ = E ei⟨ξ,X⟩ , we are done if we can show that the first term in the

last line of the displayed formula tends to zero. To see this, we use the Lipschitz continuity

of the exponential function. Fix ξ ∈ Rd .

= E ∣ei⟨ξ,Yn −Xn ⟩ − 1∣

∣Yn −Xn ∣⩽δ ∣Yn −Xn ∣>δ

(*)

⩽ δ ∣ξ∣ + ∫ 2dP

∣Yn −Xn ∣>δ

ÐÐÐ→ δ ∣ξ∣ ÐÐ→ 0,

n→∞ δ→0

P

where we used in the last step the fact that Xn − Yn Ð

→ 0.

d

Problem 1.3 (Solution) Recall that Yn Ð

→ Y with Y = c a.s., i. e. where Y ∼ δc for some constant

P

c ∈ R. Since the d-limit is trivial, this implies Yn Ð

→ Y . This means that both “is this still

true”-questions can be answered in the affirmative.

d

We will show that (Xn , Yn ) Ð

→ (Xn , c) holds – without assuming anything on the joint

distribution of the random vector (Xn , Yn ), i.e. we do not make assumption on the corre-

lation structure of Xn and Yn . Since the maps x ↦ x + y and x ↦ x ⋅ y are continuous, we

see that

lim E f (Xn , Yn ) = E f (X, c) ∀f ∈ Cb (R × R)

n→∞

7

R.L. Schilling, L. Partzsch: Brownian Motion

implies both

lim E g(Xn Yn ) = E g(Xc) ∀g ∈ Cb (R)

n→∞

and

lim E h(Xn + Yn ) = E h(X + c) ∀h ∈ Cb (R).

n→∞

distributional convergence, i.e. the pointwise convergence of the characteristic functions.

This means that we take f (x, y) = ei(ξx+ηy) for any ξ, η ∈ R:

∣E ei(ξXn +ηYn ) − E ei(ξX+ηc) ∣ ⩽ ∣E ei(ξXn +ηYn ) − E ei(ξXn +ηc) ∣ + ∣E ei(ξXn +ηc) − E ei(ξX+ηc) ∣

d

The second expression on the right-hand side converges to zero as Xn Ð

→ X. For fixed

η we have that y ↦ eiηy is uniformly continuous. Therefore, the first expression on the

right-hand side becomes, with any > 0 and a suitable choice of δ = δ() > 0

E ∣eiηYn − eiηc ∣ = E [∣eiηYn − eiηc ∣ 1{∣Yn −c∣>δ} ] + E [∣eiηYn − eiηc ∣ 1{∣Yn −c∣⩽δ} ]

⩽ 2 E [1{∣Yn −c∣>δ} ] + E [1{∣Yn −c∣⩽δ} ]

⩽ 2 P(∣Yn − c∣ > δ) +

P -convergence as δ, are fixed

ÐÐÐÐÐÐÐÐÐÐÐÐÐÐÐÐ→ Ð→ 0.

n→∞ ↓0

Remark. The direct approach to (a) is possible but relatively ugly. Part (b) has a

relatively simple direct proof:

Fix ξ ∈ R.

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

Ðn→∞

ÐÐ→0 by d-convergence

For the first term on the right we find with the uniform-continuity argument from Prob-

lem 1.1.2 and any > 0 and suitable δ = δ(, ξ) that

= E ∣eiξYn − 1∣

⩽ + P (∣Yn ∣ > δ)

fixed

ÐÐÐ→ ÐÐ→ 0

n→∞ →0

8

Solution Manual. Last update June 12, 2017

Problem 1.4 (Solution) Let ξ, η ∈ R and note that f (x) = eiξx and g(y) = eiηy are bounded

and continuous functions. Thus we get

i⟨(ηξ ), (X )⟩

Ee Y = E eiξX eiηY

= E f (X)g(Y )

= lim E f (Xn )g(Y )

n→∞

n→∞

i⟨(ηξ ), (XYn )⟩

= lim E e

n→∞

d

and we see that (Xn , Y ) Ð

→ (X, Y ).

Assume now that X = φ(Y ) for some Borel function φ. Let f ∈ Cb and pick g ∶= f ○ φ.

Clearly, f ○ φ ∈ Bb and we get

= E f (Xn )g(Y )

ÐÐÐ→ E f (X)g(Y )

n→∞

= E f (X)f (X)

= E f 2 (X).

n→∞

Thus,

ÐÐÐ→ E f 2 (X) − 2 E f (X)f (X) + E f 2 (X) = 0,

n→∞

L2

i.e. f (Xn ) Ð→ f (X).

Now fix > 0 and R > 0 and set f (x) = −R ∨ x ∧ R. Clearly, f ∈ Cb . Then

P(∣Xn − X∣ > )

⩽ P(∣Xn − X∣ > , ∣X∣ ⩽ R, ∣Xn ∣ ⩽ R) + P(∣X∣ ⩾ R) + P(∣Xn ∣ ⩾ R)

= P(∣f (Xn ) − f (X)∣ > , ∣X∣ ⩽ R, ∣Xn ∣ ⩽ R) + P(∣X∣ ⩾ R) + P(∣f (Xn )∣ ⩾ R)

⩽ P(∣f (Xn ) − f (X)∣ > ) + P(∣X∣ ⩾ R) + P(∣f (Xn )∣ ⩾ R)

⩽ P(∣f (Xn ) − f (X)∣ > ) + P(∣X∣ ⩾ R) + P(∣f (X)∣ ⩾ R/2) + P(∣f (Xn ) − f (X)∣ ⩾ R/2)

where we used that {∣f (Xn )∣ ⩾ R} ⊂ {∣f (X)∣ ⩾ R/2} ∪ {∣f (Xn ) − f (X)∣ ⩾ R/2} because of

the triangle inequality: ∣f (Xn )∣ ⩽ ∣f (X)∣ + ∣f (X) − f (Xn )∣

= P(∣f (Xn ) − f (X)∣ > ) + P(∣X∣ ⩾ R/2) + P(∣X∣ ⩾ R/2) + P(∣f (Xn ) − f (X)∣ ⩾ R/2)

9

R.L. Schilling, L. Partzsch: Brownian Motion

= P(∣f (Xn ) − f (X)∣ > ) + 2 P(∣X∣ ⩾ R/2) + P(∣f (Xn ) − f (X)∣ ⩾ R/2)

1 4

⩽ ( 2 + 2 ) E (∣f (X) − f (Xn )∣2 ) + 2 P(∣X∣ ⩾ R/2)

R

,R fixed and f =fR ∈Cb X is a.s. R-valued

ÐÐÐÐÐÐÐÐÐÐÐÐ→ 2 P(∣X∣ ⩾ R/2) ÐÐÐÐÐÐÐÐÐÐ→ 0.

n→∞ R→∞

Problem 1.5 (Solution) Note that E δj = 0 and V δj = E δj2 = 1. Thus, E S⌊nt⌋ = 0 and V S⌊nt⌋ =

⌊nt⌋.

a) We have, by the central limit theorem (CLT)

√

S⌊nt⌋ ⌊nt⌋ S⌊nt⌋ CLT √

√ = √ √ ÐÐÐ→ t G1

n n ⌊nt⌋ n→∞

√

where G1 ∼ N(0, 1), hence Gt ∶= t G1 ∼ N (0, t).

b) Let s < t. Since the δj are iid, we have, S⌊nt⌋ − S⌊ns⌋ ∼ S⌊nt⌋−⌊ns⌋ , and by the central

limit theorem (CLT)

√

S⌊nt⌋−⌊ns⌋ ⌊nt⌋ − ⌊ns⌋ S⌊nt⌋−⌊ns⌋ CLT √

√ = √ √ ÐÐÐ→ t − s G1 ∼ Gt−s .

n n ⌊nt⌋ − ⌊ns⌋ n→∞

If we know that the bivariate random variable (S⌊ns⌋ , S⌊nt⌋ −S⌊ns⌋ ) converges in distri-

bution, we do get Gt ∼ Gs + Gt−s because of Problem 1.1. But this follows again from

the lemma which we prove in part d). This lemma shows that the limit has indepen-

dent coordinates, see also part c). This is as close as we can come to Gt − Gs ∼ Gt−s ,

unless we have a realization of ALL the Gt on a good space. It is Brownian motion

which will achieve just this.

c) We know that the entries of the vector (Xtnm − Xtnm−1 , . . . , Xtn2 − Xtn1 , Xtn1 ) are inde-

pendent (they depend on different blocks of the δj and the δj are iid) and, by the

one-dimensional argument of b) we see that

d √

Xtnk − Xtnk−1 ÐÐÐ→ tk − tk−1 Gk1 ∼ Gktk −tk−1 for all k = 1, . . . , m

n→∞

By the lemma in part d) we even see that

d √ √

(Xtnm − Xtnm−1 , . . . , Xtn2 − Xtn1 , Xtn1 ) ÐÐÐ→ ( t1 G11 , . . . , tm − tm−1 Gm

1 )

n→∞

Gk1 ,

√ √

( t1 G11 , . . . , tm − tm−1 Gm 1 ) ∼ (Gt1 , . . . , Gtm −tm−1 ) ∼ (Gt1 , . . . , Gtm − Gtm−1 ).

1 m

Lemma. Let (Xn )n⩾1 and (Yn )n⩾1 be sequences of random variables (or random

vectors) on the same probability space (Ω, A, P). If

d d

Xn á Yn for all n ⩾ 1 and Xn ÐÐÐ→ X and Yn ÐÐÐ→ Y,

n→∞ n→∞

d

then (Xn , Yn ) ÐÐÐ→ (X, Y ) and X á Y (for suitable versions of the rv’s).

n→∞

10

Solution Manual. Last update June 12, 2017

Proof. Write φX , φY , φX,Y for the characteristic functions of X, Y and the pair

(X, Y ). By assumption

n→∞ n→∞

A similar statement is true for Yn and Y . For the pair we get, because of independence

n→∞ n→∞

n→∞

n→∞ n→∞

= Ee iξX iηY

Ee

= φX (ξ)φY (η).

Thus, φXn ,Yn (ξ, η) → h(ξ, η) = φX (ξ)φY (η). Since h is continuous at the origin

(ξ, η) = 0 and h(0, 0) = 1, we conclude from Lévy’s continuity theorem that h is a

d

(bivariate) characteristic function and that (Xn , Yn ) Ð

→ (X, Y ). Moreover,

1 ⎛ B(t) − B( 2 ) B( 2 ) − B(s) ⎞

s+t s+t

B(t) − B(s) 1

√ =√ ⎜ √ + √ ⎟ =∶ √ (X + Y ) .

t−s 2⎝ t−s t−s ⎠ 2

2 2

2

that X ∼ N(0, 1), cf. Rényi [8, Chapter VI.5, Theorem 2, pp. 324–325].

√

√ n B −B

tj tj−1 t − s n Btj − Btj−1

Bt − Bs = tj − tj−1 ∑ √ = ∑ √

j=1 t j − t j−1 n j=1 tj − tj−1

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸n¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

=∶Gj

By assumption, the random variables (Gnj )j,n are identically distributed (for all j, n) and

independent (in j). Moreover, E(Gnj ) = 0 and V(Gnj ) = 1. Applying the central limit

theorem (for triangular arrays) we obtain

1 n d

√ ∑ Gnj Ð

→ G1

n j=1

11

2 Brownian motion as a Gaussian process

Problem 2.1 (Solution) Let us check first that f (u, v) ∶= g(u)g(v)(1 − sin u sin v) is indeed a

probability density. Clearly, f (u, v) ⩾ 0. Since g(u) = (2π)−1/2 e−u

2 /2

is even and sin u is

odd, we get

Let us show that (U, V ) is not a normal random variable. Assume that (U, V ) is normal,

then U + V ∼ N(0, σ 2 ), i.e.

E eiξ(U +V ) = e− 2 ξ

1 2 σ2

. (*)

2 2

= (∫ eiξu g(u) du) − (∫ eiξu g(u) sin u du)

2

1

= e−ξ − ( −iu

2

∫ e (e − e )g(u) du)

iξu iu

2i

2

1

= e−ξ − ( ∫ (ei(ξ+1)u − ei(ξ−1)u )g(u) du)

2

2i

2

1

= e−ξ − ( (e− 2 (ξ+1) − e− 2 (ξ−1) ))

2 1 2 1 2

2i

1 2 2

= e−ξ + (e− 2 (ξ+1) − e− 2 (ξ−1) )

2 1 2 1

4

1

= e−ξ + e−1 e−ξ (e−ξ − eξ ) ,

2 2 2

4

and this contradicts (*).

Problem 2.2 (Solution) Let (ξ1 , . . . , ξn ) ≠ (0, . . . , 0) and set t0 = 0. Then we find from (2.12)

n n n

∑ ∑ (tj ∧ tk ) ξj ξk = ∑ (tj − tj−1 )(ξj + ⋯ + ξn ) ⩾ 0.

2

(2.1)

j=1 k=1 j=1 ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

>0

Equality (= 0) occurs if, and only if, (ξj + ⋯ + ξn )2 = 0 for all j = 1, . . . , n. This implies

that ξ1 = . . . = ξn = 0.

13

R.L. Schilling, L. Partzsch: Brownian Motion

Abstract alternative: Let (Xt )t∈I be a real-valued stochastic process which has a second

moment (such that the covariance is defined!), set µt = E Xt . For any finite set S ⊂ I we

pick λs ∈ C, s ∈ S. Then

s,t∈S s,t∈S

⎛ ⎞

=E ∑ (Xs − µs )λs (Xt − µt )λt

⎝s,t∈S ⎠

s∈S t∈S

2

⎛ ⎞

= E ∣ ∑ (Xs − µs )λs ∣ ⩾ 0.

⎝ s∈S ⎠

Remark: Note that this alternative does not prove that the covariance is strictly positive

definite. A standard counterexample is to take Xs ≡ X.

Problem 2.4 (Solution) Let ei = (0, . . . , 0, 1, 0 . . .) ∈ Rn be the ith standard unit vector. Then

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

i

and

⟨B(ei + ej ), ei + ej ⟩ = bii + bjj + 2bij

We have

Problem 2.5 (Solution) a) Xt = 2Bt/4 is a BM1 : scaling property with c = 1/4, cf. 2.12.

= E(B4t − B2t )(B2t − Bt ) − E(B2t − Bt )2

= E(B4t − B2t ) E(B2t − Bt ) − E(B2t − Bt )2

(B1)

= − E(Bt2 ) = −t ≠ 0.

(B1)

√

c) Zt = tB1 is not a BM1 , the independent increments property is violated:

√ √ √ √ √ √

E(Zt − Zs )Zs = ( t − s) s E B12 = ( t − s) s ≠ 0.

14

Solution Manual. Last update June 12, 2017

1 1 x2 (y − x)2

a) fB(s),B(t) (x, y) = √ exp [− ( + )] .

2π s(t − s) 2 s t−s

b) Denote by fB(1) the density of B(1). Then we have

fB(s),B(t),B(1) (x, y, z)

=

fB(1) (z)

1 1 x2 (y − x)2 (z − y)2 z2

= √ exp [− ( + + )] (2π)1/2 exp [ ] .

(2π)3/2 s(t − s)(1 − t) 2 s t−s 1−t 2

Thus,

1 x2 (y − x)2

1 y2

fB(s),B(t)∣B(1) (x, y ∣ B(1) = 0) = √ exp [− ( + + )] .

2π s(t − s)(1 − t) 2 s t−s 1−t

Note that

x2 (y − x)2 y2 t s 2 y2 y2 t s 2 y2

+ + = (x − y) + + = (x − y) + .

s t−s 1 − t s(t − s) t t 1 − t s(t − s) t t(1 − t)

Therefore,

E(B(s)B(t) ∣ B(1) = 0)

1 ∞ 1 y2

= √ ∫ y exp [− ]×

2π s(t − s)(1 − t) y=−∞ 2 t(1 − t)

∞ 1 t s 2

×∫ x exp [− (x − y) ] dx dy

x=−∞ 2 s(t − s) t

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹√

¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

√

√

s(t−s)

= 2π s

t

y

t

1 ∞ s 1 y2

=√ √ ∫ y 2 exp [− ] dy

2π t(1 − t) y=−∞ t 2 t(1 − t)

s

= t(1 − t) = s(1 − t).

t

fB(t1 ),B(t2 ),B(t3 ),B(t4 ) (u, x, y, z)

=

fB(t1 ),B(t4 ) (u, z)

1

1 t1 (t4 − t1 ) 2

1 u2 (x − u)2 (y − x)2 (z − y)2

= [ ] exp [− ( + + + )] ×

2π t1 (t2 − t1 )(t3 − t2 )(t4 − t3 ) 2 t1 t2 − t1 t3 − t2 t4 − t3

1 u2 (z − u)2

× exp [ ( + )] .

2 t1 t4 − t1

Thus,

15

R.L. Schilling, L. Partzsch: Brownian Motion

1

1 t1 (t4 − t1 ) 2

1 x2 (y − x)2 y2

= [ ] exp [− ( + + )] .

2π t1 (t2 − t1 )(t3 − t2 )(t4 − t3 ) 2 t2 − t1 t3 − t2 t4 − t3

Observe that

x2 (y − x)2 y2 t3 − t1 t2 − t1 2 t4 − t1

+ + = (x − y) + y2.

t2 − t1 t3 − t2 t4 − t3 (t2 − t1 )(t3 − t2 ) t3 − t1 (t3 − t1 )(t4 − t3 )

Therefore, we get (using physicists’ notation: ∫ dy h(y) ∶= ∫ h(y) dy for easier read-

ability)

1 ∞ 1 t4 − t1

= ∫ dy exp [− y2] ×

2π(t4 − t3 ) y=−∞ 2 (t3 − t1 )(t4 − t3 )

y ∞ 1 t2 − t1 2 t3 − t1

×√ ∫ x exp [− (x − y) ] dx

2π(t2 − t1 )(t3 − t2 ) x=−∞ 2 t3 − t1 (t2 − t1 )(t3 − t2 )

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶

t2 −t1

=√ y

2

t3 −t1 t3 −t1

= = .

t3 − t1 t4 − t1 t4 − t1

C(s, t) = E(Xs Xt )

= E(Bs2 − s)(Bt2 − t)

= E(Bs2 − s)([Bt − Bs + Bs ]2 − t)

= E(Bs2 − s)(Bt − Bs )2 + 2 E(Bs2 − s)Bs (Bt − Bs ) + E(Bs2 − s)Bs2 − E(Bs2 − s)t

= E(Bs2 − s) E(Bt − Bs )2 + 2 E(Bs2 − s)Bs E(Bt − Bs ) + E(Bs2 − s)Bs2 − E(Bs2 − s)t

(B1)

= 2s2 = 2(s2 ∧ t2 ) = 2(s ∧ t)2 .

C(s, t) = E(Xs Xt ) = e− 2 (s+t) E Beαs Beαt = e− 2 (s+t) (eαs ∧ eαt ) = e− 2 ∣t−s∣ .

α α α

b) We have

n

= ∏ eαtk /2 fB(eαt1 ),...,B(eαtn ) (eαt1 /2 x1 , . . . , eαtn /2 xn )

k=1

16

Solution Manual. Last update June 12, 2017

−1/2

αtk /2 x /2

n n

= ∏ eαtk /2 (2π)−n/2 ( ∏ (eαtk − eαtk−1 )) e− 2 ∑k=1 (e k −e k−1 xk−1 ) /(e k −e k−1 )

1 n αt 2 αt αt

k=1 k=1

−1/2

−α(tk −tk−1 )/2

xk−1 )2 /(1−eα(tk −tk−1 ) )

n

= (2π)−n/2 ( ∏ (1 − e−α(tk −tk−1 ) )) e− 2 ∑k=1 (xk −e

1 n

k=1

Remark: the form of the density shows that the Ornstein–Uhlenbeck is strictly stationary,

i.e.

(X(t1 + h), . . . , X(tn + h) ∼ (X(t1 ), . . . , X(tn )) ∀h > 0.

Problem 2.9 (Solution) “⇒” Assume that we have (B1). Observe that the family of sets

⋃ σ(Bu1 , . . . , Bun )

0⩽u1 ⩽⋯⩽un ⩽s, n⩾1

and so

⎛1 0 0 . . . 0⎞ ⎛ Bu1 ⎞ ⎛ Bu1 ⎞

⎜ ⎟⎜ ⎟ ⎜ ⎟

⎜1 1 0 . . . 0⎟ ⎜ ⎟ ⎜ ⎟

⎜ ⎟ ⎜ Bu2 − Bu1 ⎟ ⎜ Bu2 ⎟

⎜ ⎟⎜ ⎟ ⎜ ⎟

Bt − Bs á ⎜1 1 1 . . . 0⎟ ⎜ ⎟=⎜ ⎟

⎜ ⎟ ⎜ Bu3 − Bu2 ⎟ ⎜ Bu3 ⎟

⎜ ⎟⎜ ⎟ ⎜ ⎟

⎜⋮ ⋮ ⋮ ⋱ 0⎟ ⎜ ⋮ ⎟ ⎜ ⋮ ⎟

⎜ ⎟⎜ ⎟ ⎜ ⎟

⎝1 1 1 . . . 1⎠ ⎝Bun − Bun−1 ⎠ ⎝Bun ⎠

“⇐” Let 0 = t0 ⩽ t1 < t2 < . . . < tn < ∞, n ⩾ 1. Then we find for all ξ1 , . . . , ξn ∈ Rd

E (ei ∑k=1 ⟨ξk , B(tk )−B(tk−1 )⟩ ) = E (ei⟨ξn , B(tn )−B(tn−1 )⟩ ⋅ ei ∑k=1 ⟨ξk , B(tk )−B(tk−1 )⟩ )

n n−1

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶

Ftn−1 mble., hence áB(tn )−B(tn−1 )

n−1

⋮

n

= ∏ E (ei⟨ξk , B(tk )−B(tk−1 )⟩ ).

k=1

= inf{a2 r ⩾ 0 ∶ aBr = a}

= a2 inf{r ⩾ 0 ∶ Br = 1} = a2 τ1 .

17

R.L. Schilling, L. Partzsch: Brownian Motion

E Wt2 = C(t, t) = E(Bt2 − t)2 = E(Bt4 − 2tBt2 + t2 ) = 3t2 − 2t2 + t2 = 2t2 ≠ const.

E Ys Yt = E(Bs+h − Bs )(Bt+h − Bt )

= E Bs+h Bt+h − E Bs+h Bt − E Bs Bt+h + E Bs Bt

= (s + h) ∧ (t + h) − (s + h) ∧ t − s ∧ (t + h) + s ∧ t

⎧

⎪

⎪

⎪0, if t > s + h ⇐⇒ h < t − s

= (s + h) − (s + h) ∧ t = ⎨

⎪

⎪h − (t − s),

⎪ if t ⩽ s + h ⇐⇒ h ⩾ t − s.

⎩

Swapping the roles of s and t finally gives: the process is stationary with g(t) =

(h − ∣t∣)+ = (h − ∣t∣) ∨ 0.

t↑1

and

lim Wt (ω) = B1 (ω) − lim tβ1/t (ω) − β1 (ω) = B1 (ω);

t↓1 t↓1

Corollary 2.7, W is a BM1 .

⎛ B1 ⎞

⎛ Wt1 ⎞ ⎛1 t1 0 0 ⋯ 0 −1⎞ ⎜ ⎟

⎜ ⎟ ⎜ ⎟⎜⎜ β1/t1 ⎟

⎟

⎜ Wt2 ⎟ ⎜1 0 t2 0 ⋯ −1⎟

⎜ ⎟=⎜ 0 ⎟⎜⎜ ⋮

⎟

⎟

⎜ ⎟ ⎜ ⎟⎜ ⎟

⎜ ⋮ ⎟ ⎜ ⋮ ⋮ 0 t3 ⋯ ⋮ ⋮ ⎟ ⎜ ⎟

⎜ ⎟ ⎜ ⎟ ⎜β

⎜ 1/tn ⎟

⎟

⎝Wtn ⎠ ⎝1 0 0 0 ⋯ tn −1 ⎠

⎝ β1 ⎠

18

Solution Manual. Last update June 12, 2017

and since

B1 á (β1/t1 , . . . , β1/tn , β1 )⊺

E Wt = E B1 + t E β1/t − E β1 = 0

E Wti Wtj = E(B1 + ti β1/ti − β1 )(B1 + tj β1/tj − β1 )

= 1 + ti tj t−1 −1 −1

j − ti ti − tj tj + 1 = ti = ti ∧ tj .

Case 3: Assume that 0 < t1 < . . . < tk ⩽ 1 < tk+1 < . . . < tn . Then we have

⎛ Bt1 ⎞

⎛ 1 0 ⋯ 0 ⎞⎜ ⎟

⎜ ⎟⎜ ⋮ ⎟

⎛ Wt1 ⎞ ⎜ 0 ⋱ ⎟⎜ ⎟

⎜ 0 ⎟⎜ ⎟

⎜ ⎟ ⎜ ⎟⎜ ⋮ ⎟

⎜

⎟

⎜ Wt2 ⎟ ⎜ ⋮ ⋱ ⋮ ⎟⎜ ⎟

⎜ ⎟ ⎜ ⎟⎜

⎟ ⎜ Btk ⎟

⎜ ⎟ ⎜ ⎟

⎜ ⋮ ⎟ ⎜ ⋯ 1 ⎟⎜ ⎟

⎜ ⎟=⎜ 0 0 ⎟⎜

⎟ ⎜ B1 ⎟

⎜ ⎟ ⎜ ⎟.

⎜Wtk ⎟ ⎜ ⋯ −1 ⎟⎜ ⎟

⎜ ⎟ ⎜ 1 tk+1 0 0 ⎟⎜

⎟ ⎜β1/tk+1 ⎟

⎜ ⎟ ⎜ ⎟

⎜ ⋮ ⎟ ⎜ −1 ⎟⎜ ⎟

⎜ ⎟ ⎜

⎜

1 0 tk+2 0 ⎟⎜

⎟⎜ ⋮ ⎟

⎝Wtn ⎠ ⎜ ⎟⎜ ⎟

⎜ ⋮ ⋮ ⋱ ⋮ ⎟⎜ ⎟

⎜ ⎟ ⎜ β1/tn ⎟⎟

⎝ 1 0 ⋯ tn −1 ⎠⎜ ⎟

⎝ β1 ⎠

Since

(Bt1 , . . . , Btk , B1 ) á (β1/tk+1 , . . . , β1/tn , β1 )

E Wt = 0

E Wti Wtj = E Bti (B1 + tj β1/tj − β1 ) = ti = ti ∧ tj

for i ⩽ k < j.

Problem 2.13 (Solution) The process X(t) = B(et ) has no memory since (cf. Problem 2.9)

and, therefore,

= σ(X(t + a) − X(a) ∶ t ⩾ 0).

The process X(t) ∶= e−t/2 B(et ) is not memoryless. For example, X(a + a) − X(a) is not

independent of X(a):

E(X(2a) − X(a))X(a) = E (e−a B(e2a ) − e−a/2 B(ea ))e−a/2 B(ea ) = e−3a/2 ea − e−a ea ≠ 0.

19

R.L. Schilling, L. Partzsch: Brownian Motion

Problem 2.14 (Solution) The process Wt = Ba−t − Ba , 0 ⩽ t ⩽ a clearly satisfies (B0) and (B4).

For 0 ⩽ s ⩽ t ⩽ a we find

B(s)

lim tB(1/t) = 0 Ô⇒ lim =0 a.s.

t↓0 s↑∞ s

Moreover,

2

B(s) s 1 s→∞

E( ) = 2 = ÐÐÐ→ 0

s s s

i.e. we get also convergence in mean square.

Remark: a direct proof of the SLLN is a bit more tricky. Of course we have by the classical

SLLN that

Bn ∑j=1 (Bj − Bj−1 ) SLLN

n

= ÐÐÐ→ 0 a.s.

n n n→∞

But then we have to make sure that Bs /s converges. This can be done in the following

way: fix s > 0. Then there is a unique interval (n, n + 1] such that s ∈ (n, n + 1]. Thus,

∣ ∣⩽∣ ∣+∣ ∣⋅ ⩽ + ∣ ∣

s s n+1 s n n n

and we have to show that the expression with the sup tends to zero. This can be done

by showing, e.g., that the L2 -limit of this expression goes to zero (using the reflection

principle) and with a subsequence argument.

Σ ∶= ⋃ σ(B(t) ∶ t ∈ J)

J⊂[0,∞), J countable

Clearly,

⋃ σ(Bt ) ⊂ Σ ⊂ σ(Bt ∶ t ⩾ 0) = F∞

B def

(*)

t⩾0

The first inclusion follows from the fact that each Bt is measurable with respect to Σ.

∅∈Σ and F ∈ Σ Ô⇒ F c ∈ Σ.

20

Solution Manual. Last update June 12, 2017

Let (An )n ⊂ Σ. Then, for every n there is a countable set Jn such that An ∈ σ(B(t) ∶ t ∈

Jn ). Since J = ⋃n Jn is still countable we see that An ∈ σ(B(t) ∶ t ∈ J) for all n. Since

the latter family is a σ-algebra, we find

⋃ An ∈ σ(B(t) ∶ t ∈ J) ⊂ Σ.

n

B

is, by definition, the smallest σ-algebra for which

all Bt are measurable—that

F∞

B

⊂ Σ.

B

.

Problem 2.17 (Solution) Assume that the indices t1 , . . . , tm and s1 , . . . , sn are given. Let

{u1 , . . . , up } ∶= {s1 , . . . , sn } ∪ {t1 , . . . , tm }. By assumption,

Thus, we may thin out the indices on each side without endangering independence:

{s1 , . . . , sn } ⊂ {u1 , . . . , up } and {t1 , . . . , tm } ⊂ {u1 , . . . , up }, and so

F∞ á G∞ Ô⇒ Ft á Gt .

Conversely, since (Ft )t⩾0 and (Gt )t⩾0 are filtrations we find

t⩾0 t⩾0

⋃ Ft á ⋃ Gt .

t⩾0 t⩾0

Since the families ⋃t⩾0 Ft and ⋃t⩾0 Gt are ∩-stable (use again the argument that we have

filtrations to find for F, F ′ ∈ ⋃t⩾0 Ft some t0 with F, F ′ ∈ Ft0 etc.), the σ-algebras generated

by these families are independent:

F∞ = σ ( ⋃ Ft ) á σ ( ⋃ Gt ) = G∞ .

t⩾0 t⩾0

for a BMd (Bt )t⩾0 . Then

⎡ n ⎤ ⎡ n ⎤

⎛ ⎢ ⎥⎞ ⎛ ⎢ ⎥⎞

E exp ⎢i ∑ ⟨ξj , X(tj ) − X(tj−1 )⟩⎥ = E exp ⎢i ∑ ⟨ξj , U B(tj ) − U B(tj−1 )⟩⎥⎥

⎢ ⎥ ⎢

⎝ ⎢ j=1 ⎥⎠ ⎝ ⎢ j=1 ⎥⎠

⎣ ⎦ ⎣ ⎦

21

R.L. Schilling, L. Partzsch: Brownian Motion

⎡ n ⎤

⎛ ⎢ ⎥⎞

= E exp ⎢⎢i ∑ ⟨U ⊺ ξj , B(tj ) − B(tj−1 )⟩⎥⎥

⎝ ⎢ j=1 ⎥⎠

⎣ ⎦

⎡ ⎤

⎢ 1 n ⎥

= exp ⎢⎢− ∑ (tj − tj−1 )⟨U ⊺ ξj , U ⊺ ξj ⟩⎥⎥

⎢ 2 j=1 ⎥

⎣ ⎦

⎡ ⎤

⎢ 1 n ⎥

= exp ⎢⎢− ∑ (tj − tj−1 )∣ξj ∣2 ⎥⎥ .

⎢ 2 j=1 ⎥

⎣ ⎦

(Observe ⟨U ⊺ ξj , U ⊺ ξj ⟩ = ⟨U U ⊺ ξj , ξj ⟩ = ⟨ξj , ξj ⟩ = ∣ξj ∣2 ). The claim follows.

Problem 2.20 (Solution) Note that the coordinate processes b and β are independent BM1 .

√

a) Since b á β, the process Wt = (bt + βt )/ 2 is a Gaussian process with continuous

sample paths. We determine its mean and covariance functions:

1

E Wt = √ (E bt + E βt ) = 0;

2

Cov(Ws , Wt ) = E(Ws Wt )

1

= E(bs + βs )(bt + βt )

2

1

= ( E bs bt + E βs bt + E bs βt + E βs βt )

2

1

= (s ∧ t + 0 + 0 + s ∧ t) = s ∧ t

2

where we used that, by independence, E bu βv = E bu E βv = 0. Now the claim follows

from Corollary 2.7.

√

• E(Wt bt ) = 2−1/2 E(bt + βt )βt = 2−1/2 (E bt E βt + E βt2 ) = t/ 2 ≠ 0, i.e. W and β are

NOT independent.

This means that X is not a BM2 , as its coordinates are not independent.

1 ⎛bt + βt ⎞ ⎛ bt ⎞ 1 ⎛1 1 ⎞ ⎛ bt ⎞

√ =U =√ .

2 ⎝bt − βt ⎠ ⎝βt ⎠ 2 ⎝1 −1⎠ ⎝βt ⎠

Clearly, U U ⊺ = id, i.e. Problem 2.19 shows that (Yt )t⩾0 is a BM2 .

E Xt = 0

Cov(Xt , Xs ) = E Xt Xs

= E(λbs + µβs )(λbt + µβt )

= λ2 E bs bt + λµ E bs βt + λµ E bt βs + µ2 βs βt

22

Solution Manual. Last update June 12, 2017

= λ2 E bs bt + λµ E bs E βt + λµ E bt E βs + µ2 E βs βt

= λ2 (s ∧ t) + 0 + 0 + µ2 s ∧ t = (λ2 + µ2 )(s ∧ t).

(0, βs ) ≠ (0, 0).

βs−t − βs are independent BM1 , cf. Time inversion 2.11 and Theorem 2.16.

⎛ cos α sin α ⎞ ⎛ bt ⎞

Wt = U Bt⊺ = .

⎝− sin α cos α⎠ ⎝βt ⎠

The matrix U is a rotation, hence orthogonal and we see from Problem 2.19 that W is a

Brownian motion.

Problem 2.24 (Solution) If G ∼ N(0, Q) then Q is the covariance matrix, i.e. Cov(Gj , Gk ) = qjk .

Thus, we get for s < t

= E Xsj (Xtk − Xsk ) + E(Xsj Xsk )

= E Xsj E(Xtk − Xsk ) + sqjk

= (s ∧ t)qjk .

⊺ ξ,B ⊺ ξ∣2 ⊺ ξ⟩

E ei⟨ξ,Xt ⟩ = E ei⟨Σ t⟩

= e− 2 ∣Σ = e− 2 ⟨ξ,ΣΣ

t t

,

1 1

fQ (x) = √ exp (− ⟨x, Qx⟩) .

(2πt)n det Q 2t

´¹¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹¶

n−k

where k < n is the rank of Q. The k-dimensional vector has a nondegenerate normal

distribution in Rk .

23

3 Constructions of Brownian Motion

N −1

WN (t, ω) = ∑ Gn (ω)Sn (t), t ∈ [0, 1],

n=0

converge as N → ∞ P-a.s. uniformly for t towards B(t, ω), t ∈ [0, 1]—cf. Problem 3.2.

Therefore, the random variables

1 N −1 1 1

P -a.s.

∫ WN (t) dt = ∑ Gn ∫ Sn (t) dt ÐÐÐ→ X = ∫ B(t) dt.

0 n=0 0 N →∞ 0

1

This shows that ∫0 WN (t) dt is the sum of independent N(0, 1)-random variables, hence

itself normal and so is its limit X.

From the definition of the Schauder functions (cf. Figure 3.2) we find

1 1

∫ S0 (t) dt =

0 2

1 1 −3 j

∫ S2j +k (t) dt = 2 2 , k = 0, 1, . . . , 2j − 1, j ⩾ 0.

0 4

and this shows

1 n 2 −1 3

j

1 1

∫ W2n+1 (t) dt = G0 + ∑ ∑ 2− 2 j G2j +l .

0 2 4 j=0 l=0

Consequently, since the Gj are iid N(0, 1) random variables,

1

E∫ W2n+1 (t) dt = 0,

0

1 1 n 2 −1 −3j

j

1

V∫ W2n+1 (t) dt = + ∑ ∑ 2

0 4 16 j=0 l=0

1 1 n −2j

= + ∑2

4 16 j=0

1 1 1 − 2−2(n+1)

= +

4 16 1 − 14

1 1 4 1

ÐÐÐ→ + = .

n→∞ 4 16 3 3

This means that

∞

1 3 2 −1

j

1

X = G0 + ∑ 2− 2 j ∑ G2j +l

2 j=0 4 l=0

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶

∼N(0,2j )

where the series converges P-a.s. and in mean square, and X ∼ N(0, 13 ).

25

R.L. Schilling, L. Partzsch: Brownian Motion

Problem 3.2 (Solution) a) From the definition of the Schauder functions Sn (t), n ⩾ 0, t ∈

[0, 1], we find

0 ⩽ Sn (t) ∀n, t

1 −j/2

S2j +k (t) ⩽ S2j +k ((2k + 1)/2j+1 ) = 2−j/2 /2j+1 = 2 ∀j, k, t

2

2j −1

1 −j/2

∑ S2j +k (t) ⩽ 2 (disjoint supports!)

k=0 2

By assumption,

∃C > 0, ∃ ∈ (0, 21 ), ∀n ∶ ∣an ∣ ⩽ C ⋅ n .

Thus, we find

∞ ∞ 2j −1

∑ ∣an ∣Sn (t) ⩽ ∣a0 ∣ + ∑ ∑ ∣a2j +k ∣S2j +k (t)

n=0 j=0 k=0

∞ 2j −1

⩽ ∣a0 ∣ + ∑ ∑ C ⋅ (2j+1 ) S2j +k (t)

j=0 k=0

∞

1 −j

⩽ ∣a0 ∣ + ∑ C ⋅ 2(j+1) 2 < ∞.

j=0 2

n=0 an Sn (t) converges absolutely and uniformly for t ∈ [0, 1].

√

b) For C > 2 we find from

√ √

√ 2 1 2 1 −C 2 /2

e− 2 C log n ⩽

1 2

P (∣Gn ∣ > log n) ⩽ √ n ∀n ⩾ 3

π C log n π C

∞ √

∑ P (∣Gn ∣ > log n) < ∞.

n=1

√

By the Borel–Cantelli Lemma we find that Gn (ω) = O( log n) for almost all ω, thus

Gn (ω) = O(n ) for any ∈ (0, 1/2).

n=0 Gn (ω)Sn (t) converges a.s. uniformly for

t ∈ [0, 1].

1/p

Problem 3.3 (Solution) Set ∥f ∥p ∶= ( E ∣f ∣p )

complete and that the condition stated in the problem just says that (Xn )n is a Cauchy

sequence in the space Lp (Ω, A, P; S). A good reference for this is, for example, the mono-

graph by F. Trèves [13, Chapter 46]. You will find the ‘pedestrian’ approach as Solution

2 below.

26

Solution Manual. Last update June 12, 2017

Solution 2: By assumption

m⩾Nk

l−1

∀l>k 2

∥d(XNk , XNk+1 )∥p ⩽ 2−k Ô⇒ ∥d(XNk , XNl )∥p ⩽ ∑ 2−j ⩽ .

j=k 2k

k,l→∞

This means that that (d(XNk , Xm ))k⩾0 is a Cauchy sequence in Lp (P; R). By the com-

pleteness of the space Lp (P; R) there is some fm ∈ Lp (P; R) such that

in Lp

d(XNk , Xm ) ÐÐÐ→ fm

k→∞

almost surely

d(Xnk , Xm ) ÐÐÐÐÐÐÐ→ fm .

k→∞

The subsequence nk may also depend on m. Since (nk (m))k is still a subsequence of

(Nk ), we still have d(Xnk (m) , Xm+1 ) → fm+1 in Lp , hence we can find a subsequence

(nk (m + 1))k ⊂ (nk (m))k such that d(Xnk (m+1) , Xm+1 ) → fm+1 a.s. Iterating this we see

that we can assume that (nk )k does not depend on m.

Moreover,

lim ∥fm ∥p = lim ∥ lim d(Xnk , Xm )∥p ⩽ lim lim ∥d(Xnk , Xm )∥p

m→∞ m→∞ k→∞ m→∞ k→∞

k→∞ m⩾nk

Therefore,

⩽ ∣d(Xnk , Xmr ) − fmr ∣ + ∣d(Xnk , Xmr ) − fmr ∣ + 2∣fmr ∣.

d(Xnk , Xnl ) ⩽ ∣d(Xnk , Xmr ) − fmr ∣ + ∣d(Xnk , Xmr ) − fmr ∣ + 2 ⩽ 4 ∀k, l ⩾ L().

27

R.L. Schilling, L. Partzsch: Brownian Motion

Since S is complete, this proves that (Xnk )k⩾0 converges to some X ∈ S almost surely.

n→∞ m⩾n

This condition says that the sequence dn ∶= supm⩾n dp (Xn , Xm ) converges in Lp (P; R) to

zero. Hence there is a subsequence (nk )k such that

k→∞ m⩾nk

almost surely. This shows that d(Xnk , Xnl ) → 0 as k, l → ∞, i.e. we find by the complete-

ness of the space S that Xnk → X.

we know that

⎛n ⎞

P(Xt = Yt ) = 1 ∀t ⩾ 0 Ô⇒ P(Xtj = Ytj j = 1, . . . , n) = P ⋂ {Xtj = Ytj } = 1.

⎝j=1 ⎠

Thus,

⎛n ⎞ ⎛n n ⎞

P ⋂ {Xtj ∈ Aj } = P ⋂ {Xtj ∈ Aj } ∩ ⋂ {Xtj = Ytj }

⎝j=1 ⎠ ⎝j=1 j=1 ⎠

⎛n ⎞

= P ⋂ {Xtj ∈ Aj } ∩ {Xtj = Ytj }

⎝j=1 ⎠

⎛n ⎞

= P ⋂ {Ytj ∈ Aj } ∩ {Xtj = Ytj }

⎝j=1 ⎠

⎛n ⎞

= P ⋂ {Ytj ∈ Aj } .

⎝j=1 ⎠

P(Xt = Yt ∀t ⩾ 0) = 1 Ô⇒ ∀t ⩾ 0 ∶ P(Xt = Yt ) = 1.

D ⊂ I be any countable dense subset. Then

⎛ ⎞

P ⋃ {Xq ≠ Yq } ⩽ ∑ P(Xq ≠ Yq ) = 0

⎝q∈D ⎠ q∈D

28

Solution Manual. Last update June 12, 2017

which means that P(Xq = Yq ∀q ∈ D) = 1. If I is countable, we are done. In the other case

we have, by the density of D,

D∋q D∋q

equivalent Ô⇒

/ modification: To see this let (Bt )t⩾0 and (Wt )t⩾0 be two independent

one-dimensional Brownian motions defined on the same probability space. Clearly,

these processes have the same finite-dimensional distributions, i.e. they are equivalent.

On the other hand, for any t > 0

∞ ∞

P(Bt = Wt ) = ∫ P(Bt = y) P(Wt ∈ dy) = ∫ 0 P(Wt ∈ dy) = 0.

−∞ −∞

Problem 3.6 (Solution) Since (Bq )q∈Q∩[0,∞) is uniformly continuous, there exists a unique pro-

cess (Bt )t⩾0 such that Bt = limq→t Bq and t ↦ Bt is continuous.

We use the characterization from Lemma 2.14. Its proof shows that we can derive (2.17)

⎡ ⎤

⎢ ⎛ n ⎞⎥⎥ ⎛ 1 n ⎞

⎢

E ⎢exp i ∑ ⟨ξj , Xqj − Xqj−1 ⟩ + i⟨ξ0 , Xq0 ⟩ ⎥ = exp − ∑ ∣ξj ∣2 (qj − qj−1 )

⎢ ⎝ j=1 ⎠⎥ ⎝ 2 j=1 ⎠

⎣ ⎦

on the basis of (B0)–(B3) for (Bq )q∈Q∩[0,∞) and q0 , . . . , qn ∈ Q ∩ [0, ∞).

(k) (k)

qj , k ⩾ 1. Since (2.17) holds for qj , j = 0, . . . , n and every k ⩾ 0, we can easily perform

the limit k → ∞ on both sides (on the left we use dominated convergence!) since Bt is

continuous.

This proves (2.17) for (Bt )t⩾0 , and since (Bt )t⩾0 has continuous paths, Lemma 2.14 proves

that (Bt )t⩾0 is a BM1 .

ft0 ,t,t1 (x0 , x, x1 ) = √ exp (− [ + + ])

(2π)3/2 (t1 − t)(t − t0 )t0 2 t1 − t t − t0 t0

1 1 1 (x1 − x0 )2 x20

ft0 ,t1 (x0 , x1 ) = √ exp (− [ + ]) .

(2π) (t1 − t0 )t0 2 t1 − t0 t0

ft ∣ t0 ,t1 (x ∣ x1 , x2 )

ft0 ,t,t1 (x0 , x, x1 )

=

ft0 ,t1 (x0 , x1 )

(x1 −x)2 (x−x0 )2 x20

1

(2π)3/2

√ 1

exp (− 12 [ t1 −t + + t0 ])

(t1 −t)(t−t0 )t0 t−t0

=

(x1 −x0 )2 x20

1 √ 1

(2π) (t1 −t0 )t0 exp (− 12 [ t1 −t0 + t0 ])

29

R.L. Schilling, L. Partzsch: Brownian Motion

¿

1 ÁÀ (t1 − t0 ) 1 (x1 − x)2 (x − x0 )2 (x1 − x0 )2

=√ Á exp (− [ + − ])

2π (t1 − t)(t − t0 ) 2 t1 − t t − t0 t1 − t0

¿

1 ÁÀ (t1 − t0 ) 1 (t − t0 )(x1 − x)2 + (t1 − t)(x − x0 )2 (x1 − x0 )2

=√ Á exp (− [ − ])

2π (t1 − t)(t − t0 ) 2 (t1 − t)(t − t0 ) t1 − t0

Now consider the argument in the square brackets [⋯] of the exp-function

[ − ]

(t1 − t)(t − t0 ) t1 − t0

(t1 − t0 ) t − t0 t1 − t (t1 − t)(t − t0 )

= [ (x1 − x)2 + (x − x0 )2 − (x1 − x0 )2 ]

(t1 − t)(t − t0 ) t1 − t0 t1 − t0 (t1 − t0 )2

(t1 − t0 ) t − t0 t1 − t t − t0 (t1 − t)(t − t0 ) 2

= [( + ) x2 + ( − ) x1

(t1 − t)(t − t0 ) t1 − t0 t1 − t0 t1 − t0 (t1 − t0 )2

t1 − t (t1 − t)(t − t0 ) 2

+( − ) x0

t1 − t0 (t1 − t0 )2

t − t0 t1 − t (t1 − t)(t − t0 )

−2 x1 x − 2 xx0 + 2 x1 x0 ]

t1 − t0 t1 − t0 (t1 − t0 )2

(t1 − t0 ) (t − t0 )2 2 (t1 − t)2 2

= [x2 + x + x

(t1 − t)(t − t0 ) (t1 − t0 )2 1 (t1 − t0 )2 0

t − t0 t1 − t (t1 − t)(t − t0 )

−2 x1 x − 2 xx0 + 2 x1 x0 ]

t1 − t0 t1 − t0 (t1 − t0 )2

(t1 − t0 ) t − t0 t1 − t 2

= [x − x1 − x0 ]

(t1 − t)(t − t0 ) t1 − t0 t1 − t0

(t1 − t0 ) t − t0 t1 − t 2

= [x − ( x1 + x0 )] .

(t1 − t)(t − t0 ) t1 − t0 t1 − t0

Set

(t1 − t)(t − t0 ) t − t0 t1 − t

σ2 = and m = x1 + x0

(t1 − t0 ) t1 − t0 t1 − t0

then our calculation shows that

1 (x − m)2

ft ∣ t0 ,t1 (x ∣ x1 , x2 ) = √ exp ( ).

2π σ 2σ 2

30

4 The Canonical Model

Problem 4.1 (Solution) Let F ∶ R → [0, 1] be a distribution function. We begin with a general lemma: F has a unique generalized monotone increasing right-continuous inverse G,

G(u) ∶= inf{x ∶ F(x) > u} [= sup{x ∶ F(x) ⩽ u}]. (4.1)

Indeed: For those t where F is strictly increasing and continuous, there is nothing to show.

Let us look at the two problem cases: F jumps and F is flat.

[Figure: graphs of F(t) and G(u); a jump of F between the levels w− and w+ corresponds to a flat piece of G, and vice versa.]

If F (t) jumps, we have G(w) = G(w+ ) = G(w− ) and if F (t) is flat, we take the right

endpoint of the ‘flatness interval’ [G(v−), G(v)] to define G (this leads to right-continuity

of G)

a) Let (Ω, A, P) = ([0, 1], B[0, 1], du) (du stands for Lebesgue measure λ) and define X ∶= G (G = F^{-1} as before). Then

P(X ⩽ x) = λ({u ∈ [0, 1] ∶ G(u) ⩽ x}) = λ({u ∈ [0, 1] ∶ u ⩽ F(x)}) = λ([0, F(x)]) = F(x)

(the discontinuities of F are countable, i.e. a Lebesgue null set, and do not affect these identities).
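Part a) is the classical inverse transform sampling method. A minimal sketch (the exponential distribution and all constants are my choice of example, not from the text): for F(x) = 1 − e^{−λx} the generalized inverse is G(u) = −log(1 − u)/λ, and X = G(U) with U uniform on [0, 1] has distribution F.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0

def G(u):
    """Generalized inverse of F(x) = 1 - exp(-lam * x), x >= 0."""
    return -np.log(1.0 - u) / lam

U = rng.random(200_000)        # 'Lebesgue measure' on [0, 1]
X = G(U)

print(X.mean())                # ~ 1/lam = 0.5
print(np.mean(X <= 1.0))       # ~ F(1) = 1 - exp(-2) ~ 0.865
```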

b) Use the product construction and part a). To be precise, we do the construction for

two random variables. Let X ∶ Ω → R and Y ∶ Ω′ → R be two iid copies. We define

on the product space

(Ω × Ω′ , A ⊗ A′ , P × P′ )

the new random variables ξ(ω, ω ′ ) ∶= X(ω) and η(ω, ω ′ ) ∶= Y (ω ′ ). Then we have

• ξ, η live on the same probability space

• ξ ∼ X, η ∼ Y

P × P′ (ξ ∈ A) = P × P′ ({(ω, ω ′ ) ∈ Ω × Ω′ ∶ X(ω) ∈ A})

= P × P′ ({ω ∈ Ω ∶ X(ω) ∈ A} × Ω′ )

= P({ω ∈ Ω ∶ X(ω) ∈ A})

= P(X ∈ A).

• ξáη

P × P′ (ξ ∈ A, η ∈ B) = P × P′ ({(ω, ω ′ ) ∈ Ω × Ω′ ∶ X(ω) ∈ A, Y (ω ′ ) ∈ B})

= P × P′ ({ω ∈ Ω ∶ X(ω) ∈ A} × {ω ∈ Ω′ ∶ Y (ω ′ ) ∈ B})

= P({ω ∈ Ω ∶ X(ω) ∈ A}) P′ ({ω ∈ Ω′ ∶ Y (ω ′ ) ∈ B})

= P(X ∈ A) P(Y ∈ B)

= P × P′ (ξ ∈ A) P × P′ (η ∈ B)

The same type of argument works for arbitrary products, since independence is

always defined for any finite-dimensional subfamily. In the infinite case, we have

to invoke the theorem on the existence of infinite product measures (which are

constructed via their finite marginals) and which can be seen as a particular case

of Kolmogorov’s theorem, cf. Theorem 4.8 and Theorem A.2 in the appendix.

c) The statements are the same if one uses the same construction as above. A difficulty

is to identify a multidimensional distribution function F (x). Roughly speaking, these

are functions of the form F(x) = F(x1, . . . , xn) = µ((−∞, x1] × ⋯ × (−∞, xn]) for a probability measure µ, i.e. the µ-measure of an infinite rectangle. An abstract characterisation is the following:


Solution Manual. Last update June 12, 2017

• F ∶ Rn → [0, 1]

• xj ↦ F (x) is monotone increasing

• xj ↦ F (x) is right continuous

• F (x) = 0 if at least one entry xj = −∞

• F (x) = 1 if all entries xj = +∞

• ∑_{(ε1,...,εn)∈{0,1}^n} (−1)^{∑_{k=1}^n εk} F(ε1 a1 + (1 − ε1)b1, . . . , εn an + (1 − εn)bn) ⩾ 0, where −∞ < aj < bj < ∞ and the sum runs over all tuples (ε1, . . . , εn) ∈ {0, 1}^n.

The last property is equivalent to

• ∆^{(1)}_{h1} ⋯ ∆^{(n)}_{hn} F(x) ⩾ 0 for all h1, . . . , hn ⩾ 0, where ∆^{(k)}_h F(x) = F(x + h e_k) − F(x) and e_k is the kth standard unit vector of R^n.

In principle we can construct such a multidimensional F from its marginals using

the theory of copulas, in particular, Sklar’s theorem etc. etc. etc.

Another way would be to take (Ω, A, P) = (Rn , B(Rn ), µ) where µ is the probability

measure induced by F (x). Then the random variables Xn are just the identity maps!

The independent copies are then obtained by the usual product construction.

Problem 4.2 (Solution) Step 1: Let us first show that P(lims→t Xs exists) < 1.

Since Xr á Xs and Xs ∼ −Xs we get

X_r − X_s ∼ X_r + X_s ∼ N(0, s + r) ∼ √(s + r) · X_1, X_1 ∼ N(0, 1).

Thus, for every ε > 0,

P(∣X_r − X_s∣ > ε) = P(∣X_1∣ > ε/√(s + r)) ÐÐÐÐ→ (r, s → t) P(∣X_1∣ > ε/√(2t)) ≠ 0.

This proves that Xs is not a Cauchy sequence in probability, i. e. it does not even converge

in probability towards a limit, so a.e. convergence is impossible.

In fact we have

{ω ∶ lim_{s→t} X_s(ω) does not exist} ⊃ ⋂_{k=1}^∞ { sup_{s,r∈[t−1/k, t+1/k]} ∣X_s − X_r∣ > 0 },

and so

P(lim_{s→t} X_s does not exist) ⩾ lim_{k→∞} P( sup_{s,r∈[t−1/k, t+1/k]} ∣X_s − X_r∣ > 0 ) ⩾ P(∣X_1∣ > ε/√(2t)) > 0.

Step 2: Fix t > 0, fix a sequence (t_n)_n with t_n → t, and set

A ∶= {ω ∶ lim_{s→t} X_s(ω) exists} and A(t_n) ∶= {ω ∶ lim_{n→∞} X_{t_n}(ω) exists}.


Clearly, A ⊂ A(tn ) for any such sequence. Moreover, take two sequences (sn )n , (tn )n such

that sn → t and tn → t and which have no points in common; then we get by independence

and step 1

(Xs1 , Xs2 , Xs3 . . .) á (Xt1 , Xt2 , Xt3 . . .) Ô⇒ A(tn ) á A(sn )

By Step 1, q ∶= P(A(t_n)) ⩽ 1 − P(∣X_1∣ > ε/√(2t)) < 1, and this bound is uniform in the choice of the sequence. Since there are infinitely many sequences having pairwise no points in common, and A is contained in the intersection of the corresponding independent events, we get 0 ⩽ P(A) ⩽ lim_{k→∞} q^k = 0.


5 Brownian Motion as a Martingale

Problem 5.1 (Solution) a) Let s ⩽ t. Then σ(B_t − B_s), F_s^B and σ(X) are independent, thus σ(B_t − B_s) is independent of σ(σ(X), F_s^B) =∶ F̃_s. This shows that (F̃_t)_{t⩾0} is an admissible filtration for (B_t)_{t⩾0}.

b) Let N denote the family of all subsets of P-null sets and set F̄_t^B ∶= σ(F_t^B, N). Clearly,

F_t^B ⊂ σ(F_t^B, N) = F̄_t^B.

From measure theory we know that (Ω, A, P) can be completed to (Ω, A∗, P∗) where

A∗ ∶= {A ∪ N ∶ A ∈ A, N ∈ N}, P∗(A∗) ∶= P(A) for A∗ = A ∪ N ∈ A∗.

For F̄ = F ∪ N ∈ F̄_s^B with F ∈ F_s^B, N ∈ N we get, for s ⩽ t and A ∈ B(R^d),

P∗({B_t − B_s ∈ A} ∩ (F ∪ N)) = P({B_t − B_s ∈ A} ∩ F)
    = P(B_t − B_s ∈ A) P(F)
    = P∗(B_t − B_s ∈ A) P∗(F ∪ N).

Problem 5.2 (Solution) Let t = t0 < t1 < . . . < tn, let F ∈ F_t, and consider the increments B(t_k) − B(t_{k−1}). For all ξ_1, . . . , ξ_n ∈ R^d,

E(e^{i ∑_{k=1}^n ⟨ξ_k, B(t_k)−B(t_{k−1})⟩} 1_F) = E(e^{i⟨ξ_n, B(t_n)−B(t_{n−1})⟩} · e^{i ∑_{k=1}^{n−1} ⟨ξ_k, B(t_k)−B(t_{k−1})⟩} 1_F).

The first factor is independent of B(t_n) − B(t_{n−1}), since the second factor is F_{t_{n−1}} measurable; hence

    = E(e^{i⟨ξ_n, B(t_n)−B(t_{n−1})⟩}) E(e^{i ∑_{k=1}^{n−1} ⟨ξ_k, B(t_k)−B(t_{k−1})⟩} 1_F).

Iterating this argument yields

    = ∏_{k=1}^n E(e^{i⟨ξ_k, B(t_k)−B(t_{k−1})⟩}) E 1_F.

This shows that the increments are independent among themselves (use F = Ω) and that they are all together independent of F_t (use the above calculation and the fact that the increments are among themselves independent to combine again the product under the expected value). Thus,

F_t ⫫ σ(B(t_k) − B(t_{k−1}) ∶ k = 1, . . . , n) for all t < t1 < . . . < tn, n ⩾ 1.

Problem 5.3 (Solution) a) i) E ∣Xt ∣ < ∞, since the expectation does not depend on the

filtration.

iii) Let N denote the set of all sets which are subsets of P-null sets. Denote by

P∗ the measure of the completion of (Ω, A, P) (compare with the solution to

Exercise 5.1.b)).

∫_{F∗} X_s dP∗ = ∫_F X_s dP = ∫_F X_t dP = ∫_{F∗} X_t dP∗ for F∗ = F ∪ N, F ∈ F_s, N ∈ N.

ii) Note that {Xt ≠ Yt }, its complement and any of its subsets is in Ft∗ . Let B ∈

B(Rd ). Then we get

∫_{F∗} Y_s dP∗ = ∫_{F∗} X_s dP∗ =(a)= ∫_{F∗} X_t dP∗ = ∫_{F∗} Y_t dP∗ for all F∗ ∈ F_t^∗,

Problem 5.4 (Solution) Let s < t and pick sn ↓ s such that s < sn < t. Then

E(X_t ∣ F_{s+}) ←ÐÐ (s_n ↓ s) E(X_t ∣ F_{s_n}) ⩾(sub-MG) X(s_n) ÐÐ→ (n → ∞) X(s+) = X(s),

where the last equality uses the continuity of the paths.

The convergence on the left side follows from the (sub-)martingale convergence theorem

(Lévy’s downward theorem).


Problem 5.5 (Solution) Here is a direct proof without using the hint.

E(Bt4 ∣ Fs )

= E ((Bt − Bs + Bs )4 ∣ Fs )

= Bs4 + 4Bs3 E(Bt − Bs ) + 6Bs2 E((Bt − Bs )2 ) + 4Bs E((Bt − Bs )3 ) + E((Bt − Bs )4 )

= Bs4 + 6Bs2 (t − s) + 3(t − s)2

= Bs4 − 6Bs2 s + 6Bs2 t + 3(t − s)2 ,

and

E(Bt2 ∣ Fs ) = E ((Bt − Bs + Bs )2 ∣ Fs )

= t − s + 2Bs E(Bt − Bs ) + Bs2

= Bs2 + t − s.

Combining these calculations, such that the term 6Bs2 t vanishes from the first formula,

we get

E(B_t^4 − 6tB_t^2 ∣ F_s) = B_s^4 − 6sB_s^2 + 3s² − 3t².
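Both conditional-moment formulas are easy to test numerically; the sketch below (s, t, b and the sample size are ad-hoc choices) checks E((b + √(t−s) Z)^4) = b^4 + 6b²(t−s) + 3(t−s)² and that the compensated process B_t^4 − 6tB_t^2 + 3t² has expectation 0:

```python
import numpy as np

rng = np.random.default_rng(2)
s, t, b = 1.0, 2.0, 0.7
N = 200_000
Z = rng.standard_normal(N)

# conditional fourth moment: E(B_t^4 | B_s = b) via B_t = b + sqrt(t-s) Z
lhs = np.mean((b + np.sqrt(t - s) * Z)**4)
rhs = b**4 + 6 * b**2 * (t - s) + 3 * (t - s)**2
print(lhs, rhs)                # both ~ 6.18

# the compensated process B_t^4 - 6 t B_t^2 + 3 t^2 has expectation 0
Bt = np.sqrt(t) * rng.standard_normal(N)
print(np.mean(Bt**4 - 6 * t * Bt**2 + 3 * t**2))   # ~ 0
```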

Problem 5.6 (Solution) For t = 0 we have B_0 = 0 and therefore

E e^{c∣B_0∣} = E e^{c∣B_0∣²} = 1;

for c ⩽ 0 and all t ⩾ 0 we have trivially

E e^{c∣B_t∣} ⩽ 1 and E e^{c∣B_t∣²} ⩽ 1.

Now let t > 0 and c > 0. There exists some R > 0 such that c∣x∣ < ∣x∣²/(4t) for all ∣x∣ > R. Thus, with the normalizing constant c̃ of the Gaussian density,

E e^{c∣B_t∣} = c̃ ∫_{∣x∣⩽R} e^{c∣x∣} e^{−∣x∣²/(2t)} dx + c̃ ∫_{∣x∣>R} e^{c∣x∣} e^{−∣x∣²/(2t)} dx
    ⩽ e^{cR} + c̃ ∫_{∣x∣>R} e^{−∣x∣²/(4t)} dx < ∞.

For the squared exponential we find

E e^{c∣B_t∣²} = c̃ ∫ e^{c∣x∣²} e^{−∣x∣²/(2t)} dx = c̃ ∫ e^{(c − 1/(2t))∣x∣²} dx,

and this integral is finite if, and only if, c − 1/(2t) < 0, or equivalently c < 1/(2t).


Problem 5.7 (Solution) a) We have p(t, x) = (2πt)^{−d/2} e^{−∣x∣²/(2t)}. By the chain rule we get

∂_t p(t, x) = −(d/2)(2π)^{−d/2} t^{−d/2−1} e^{−∣x∣²/(2t)} + (2πt)^{−d/2} (∣x∣²/(2t²)) e^{−∣x∣²/(2t)} = (∣x∣²/(2t²) − d/(2t)) p(t, x),

and for all j = 1, . . . , d

∂_{x_j} p(t, x) = −(x_j/t) p(t, x),
∂²_{x_j} p(t, x) = (−1/t) p(t, x) + (x_j²/t²) p(t, x).

Adding these terms and noting that ∣x∣² = ∑_{j=1}^d x_j² we get

(1/2) ∑_{j=1}^d ∂²_{x_j} p(t, x) = (∣x∣²/(2t²) − d/(2t)) p(t, x) = ∂_t p(t, x).

b) Integrating by parts twice (the boundary terms vanish, see below) we get

∫ (1/2) ∂²_{x_j} p(t, x) f(t, x) dx
    = [(1/2) ∂_{x_j} p(t, x) f(t, x)]_{x_j=−∞}^{x_j=∞} − ∫ (1/2) ∂_{x_j} p(t, x) ∂_{x_j} f(t, x) dx
    = 0 − [(1/2) p(t, x) ∂_{x_j} f(t, x)]_{x_j=−∞}^{x_j=∞} + ∫ p(t, x) (1/2) ∂²_{x_j} f(t, x) dx
    = ∫ p(t, x) (1/2) ∂²_{x_j} f(t, x) dx.

By the same arguments as in Exercise 5.6 we find that all terms are integrable and

vanish as ∣x∣ → ∞. This justifies the above calculation. Furthermore summing over

j = 1, . . . d we obtain the statement.
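The identity ∂_t p = ½ ∑_j ∂²_{x_j} p can also be verified numerically. A sketch for d = 1 (the evaluation point and step sizes are arbitrary choices):

```python
import numpy as np

def p(t, x):
    """Gaussian kernel p(t, x) = (2*pi*t)^(-1/2) * exp(-x^2 / (2t)), d = 1."""
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

t, x = 1.0, 0.7
ht, hx = 1e-4, 1e-3

dt_p  = (p(t + ht, x) - p(t - ht, x)) / (2 * ht)             # ~ d/dt p
dxx_p = (p(t, x + hx) - 2 * p(t, x) + p(t, x - hx)) / hx**2  # ~ d^2/dx^2 p

print(dt_p, 0.5 * dxx_p)   # both ~ -0.0796
```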

Problem 5.8 (Solution) Measurability (i.e. adaptedness to the Filtration Ft ) and integrability

is no issue, see also Problem 5.6.

a) Ut is only a martingale for c = 0.

Solution 1: see Exercise 5.9.

Solution 2: if c ≠ 0, E Ut is not constant, i.e. cannot be a martingale. If c = 0, Ut is

trivially a martingale.

b) Vt is a martingale since

E(V_t ∣ F_s) = t E(B_t − B_s) + tB_s − E(∫_0^s B_r dr ∣ F_s) − E(∫_s^t B_r dr ∣ F_s)
    = tB_s − ∫_0^s B_r dr − E(∫_s^t [(B_r − B_s) + B_s] dr ∣ F_s)
    = tB_s − ∫_0^s B_r dr − (t − s)B_s
    = sB_s − ∫_0^s B_r dr = V_s.


c) For W_t = aB_t³ − tB_t we get, writing B_t = B_s + (B_t − B_s) and using B_t − B_s ∼ B_{t−s},

E(W_t ∣ F_s) = aB_s³ + 3aB_s² E B_{t−s} + 3aB_s E B²_{t−s} + a E B³_{t−s} − 0 − tB_s
    = aB_s³ + (3a(t − s) − t)B_s.

This equals W_s = aB_s³ − sB_s if, and only if, 3a(t − s) − t = −s for all s ⩽ t, i.e. a = 1/3. Thus Y_t is a martingale and W_t is not a martingale.

d) As in part c) (with a = 1) we have E(B_t³ ∣ F_s) = B_s³ + 3(t − s)B_s, and as in part b)

3 E(∫_0^t B_r dr ∣ F_s) = 3 ∫_0^s B_r dr + 3(t − s)B_s.

Thus, X_t = B_t³ − 3 ∫_0^t B_r dr is a martingale.

Problem 5.9 (Solution) Note that E ∣Xt ∣ < ∞ for all a, b, cf. Problem 5.6. We have

E(X_t ∣ F_s) = E(e^{aB_t + bt} ∣ F_s) = e^{aB_s + bt} E e^{aB_{t−s}} = e^{aB_s + bt + (t−s)a²/2}.

Thus, X_t is a martingale if, and only if, bs = bt + (t − s)a²/2, i.e. b = −a²/2.
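With b = −a²/2 the expectation E X_t ≡ 1 is constant; for any other b it equals e^{(b + a²/2)t}. A quick simulation check (a, t and the sample size are ad-hoc choices):

```python
import numpy as np

rng = np.random.default_rng(3)
a, t, N = 0.8, 1.0, 500_000
Bt = np.sqrt(t) * rng.standard_normal(N)

X = np.exp(a * Bt - 0.5 * a**2 * t)      # b = -a^2/2: martingale, E X_t = 1
print(X.mean())                          # ~ 1.0

Y = np.exp(a * Bt)                       # b = 0: not a martingale
print(Y.mean(), np.exp(0.5 * a**2 * t))  # E Y = exp(a^2 t / 2) != 1
```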

Problem 5.10 (Solution) Using Problem 5.5 for each coordinate we get

E((1/d)∣B_t∣² − t ∣ F_s) = −t + (1/d) ∑_{j=1}^d E((B_t^{(j)})² ∣ F_s) = −t + (1/d) ∑_{j=1}^d ((B_s^{(j)})² + t − s) = (1/d)∣B_s∣² − s.

Problem 5.11 (Solution) For a)–c) we prove only the statements for τ ○ , the statements for τ

are proved analogously.

a) A ⊂ C ⇒ {t ⩾ 0 ∶ X_t ∈ A} ⊂ {t ⩾ 0 ∶ X_t ∈ C} ⇒ τ°_A ⩾ τ°_C.

b) By part a) we have τ°_{A∪C} ⩽ τ°_A and τ°_{A∪C} ⩽ τ°_C. Thus,

τ°_{A∪C} ⩽ min{τ°_A, τ°_C}.

In order to show the converse, min{τ°_A, τ°_C} ⩽ τ°_{A∪C}, it is enough to show that

X_t(ω) ∈ A ∪ C ⇒ t ⩾ min{τ°_A(ω), τ°_C(ω)},

since this implication shows that τ°_{A∪C}(ω) ⩾ min{τ°_A(ω), τ°_C(ω)} holds. But

X_t(ω) ∈ A ∪ C ⇒ X_t(ω) ∈ A or X_t(ω) ∈ C
    ⇒ t ⩾ τ°_A(ω) or t ⩾ τ°_C(ω)
    ⇒ t ⩾ min{τ°_A(ω), τ°_C(ω)}.

A similar argument shows τ°_{A∩C} ⩾ max{τ°_A, τ°_C}.

Remark: for the intersection we cannot expect "=". To see this consider a BM¹ starting at B_0 = 0 and the sets

A = [4, 6] and C = [1, 2] ∪ [5, 7];

then A ∩ C = [5, 6], and by the continuity of the paths τ°_A = τ_4, τ°_C = τ_1 and τ°_{A∩C} = τ_5 > τ_4 = max{τ°_A, τ°_C} a.s.

c) Set A ∶= ⋃_{n⩾1} A_n. By part a), τ°_A ⩽ τ°_{A_n} for all n ⩾ 1, and so τ°_A ⩽ inf_{n⩾1} τ°_{A_n}. In order to show the converse, τ°_A ⩾ inf_{n⩾1} τ°_{A_n}, it is enough to check that

X_t(ω) ∈ A ⇒ t ⩾ inf_{n⩾1} τ°_{A_n}(ω),

since, if this is true, this implies that τ°_A(ω) ⩾ inf_{n⩾1} τ°_{A_n}(ω). But

X_t(ω) ∈ ⋃_{n⩾1} A_n ⇒ X_t(ω) ∈ A_n for some n ∈ N ⇒ t ⩾ τ°_{A_n}(ω) for some n ∈ N ⇒ t ⩾ inf_{n⩾1} τ°_{A_n}(ω).

Note that n ↦ inf{s ⩾ 1/n ∶ X_s ∈ A} is monotone decreasing as n → ∞. Thus we get

inf_n (1/n + inf{s ⩾ 1/n ∶ X_s ∈ A}) = 0 + inf_n inf{s ⩾ 1/n ∶ X_s ∈ A} = inf{s > 0 ∶ X_s ∈ A} = τ_A.

f) Let X_t = x_0 + t. Then τ°_{{x_0}} = 0 and τ_{{x_0}} = ∞.

More generally, a similar situation may happen if we consider a process with con-

tinuous paths, a closed set F , and if we let the process start on the boundary ∂F .

Then τF○ = 0 a.s. (since the process is in the set) while τF > 0 is possible with positive

probability.


Let x0 ∈ U . Then τU○ = 0 and, since U is open and Xt is continuous, there exists an N > 0

such that

X_{1/n} ∈ U for all n ⩾ N.

Thus τU = 0.

If x0 ∉ U , then Xt (ω) ∈ U can only happen if t > 0. Thus, τU○ = τU .

For all x, z and a ∈ A we have d(x, A) ⩽ ∣x − a∣ ⩽ ∣x − z∣ + ∣z − a∣; taking the infimum over a ∈ A gives d(x, A) ⩽ ∣x − z∣ + d(z, A). Interchanging the roles of x and z shows

∣d(x, A) − d(z, A)∣ ⩽ ∣x − z∣.

Problem 5.14 (Solution) We treat the two cases simultaneously and check the three properties

of a sigma algebra:

i) We have Ω ∈ F∞ and

Ω ∩ {τ ⩽ t} = {τ ⩽ t} ∈ Ft ⊂ Ft+ .

ii) Let A ∈ F_{τ(+)}. Then

A^c ∩ {τ ⩽ t} = {τ ⩽ t} ∖ (A ∩ {τ ⩽ t}) ∈ F_{t(+)} since A ∈ F_{τ(+)}.

iii) Let A_n ∈ F_{τ(+)}, n ⩾ 1. Then

(⋃_n A_n) ∩ {τ ⩽ t} = ⋃_n (A_n ∩ {τ ⩽ t}) ∈ F_{t(+)}.

Problem 5.15 (Solution) a) Let F ∈ Fτ + , i.e., F ∈ F∞ and for all s we have F ∩ {τ ⩽ s} ∈ Fs+ .

Let t > 0. Then

F ∩ {τ < t} = ⋃ (F ∩ {τ ⩽ s}) ∈ ⋃ Fs+ ⊂ Ft .

s<t s<t

t>s t>s


b) We have {τ ⩽ t} ∈ Ft ⊂ F∞ and

⎧

⎪

⎪

⎪{τ ⩽ t} ∈ Ft if r ⩾ t;

{τ ⩽ t} ∩ {τ ∧ t ⩽ r} = ⎨

⎪

⎪

⎩{τ ⩽ r} ∈ Fr ⊂ Ft

⎪ if r < t.

Problem 5.16 (Solution) a) Let τ be the first exit time from the interval (−a, b) and set m ∶= max{a, b}, so that ∣B_{τ∧t}∣ ⩽ m. Applying optional stopping to the (complex) martingale e^{icB_t + c²t/2} (Example 5.2 d) with ξ = ic) and taking real parts we get

1 = E e^{(1/2)(τ∧t)c² + icB_{τ∧t}}, hence 1 = E(e^{(1/2)(τ∧t)c²} cos(cB_{τ∧t})).

Since ∣cB_{τ∧t}∣ ⩽ mc < π/2, the cosine is positive. By Fatou's lemma we get for all mc < π/2

1 = lim_{t→∞} E(e^{(1/2)(τ∧t)c²} cos(cB_{τ∧t})) ⩾ E(lim inf_{t→∞} e^{(1/2)(τ∧t)c²} cos(cB_{τ∧t}))
    = E(e^{(1/2)τc²} cos(cB_τ))
    ⩾ cos(mc) E e^{(1/2)τc²}.

Thus E e^{(1/2)τc²} ⩽ 1/cos(mc) < ∞ for m < π/(2c) and all c < π/(2m). Since

e^t = ∑_{j=0}^∞ t^j/j! ⇒ ∀t ⩾ 0, j ⩾ 0 ∶ e^t ⩾ t^j/j!,

we conclude that E τ^j ⩽ j! (2/c²)^j E e^{(1/2)c²τ} < ∞ for all j ⩾ 0, i.e. τ has moments of all orders.

b) By Exercise 5.8 d) the process Bt3 − 3 ∫0 Bs ds is a martingale. By optional stopping

we get

τ ∧t

E (Bτ3∧t − 3 ∫ Bs ds) = 0 for all t ⩾ 0. (*)

0

Set m = max{a, b}. By the definition of τ we see that ∣Bτ ∧t ∣ ⩽ m; since τ is integrable

we get

∣B³_{τ∧t}∣ ⩽ m³ and ∣∫_0^{τ∧t} B_s ds∣ ⩽ τ · m.

Therefore, we can use in (*) the dominated convergence theorem and let t → ∞:

E(∫_0^τ B_s ds) = (1/3) E(B_τ³)
    = (1/3)(−a)³ P(B_τ = −a) + (1/3) b³ P(B_τ = b)
    =(5.12)= (1/3) (−a³ b + b³ a)/(a + b)
    = (1/3) ab(b − a).
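Both (5.12) and the final formula can be checked with a crude Euler discretization of the exit problem (the step size, the sample size and the values a = 1, b = 2 are ad-hoc choices; the scheme only monitors the boundary at grid times, so a small bias of order √dt remains):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 1.0, 2.0                        # exit interval (-a, b), started at 0
dt = 1e-3
N = 2000

B = np.zeros(N)                        # current position of each path
I = np.zeros(N)                        # running integral of B_s ds
hit_b = np.zeros(N, dtype=bool)        # exited at the upper boundary?
alive = np.ones(N, dtype=bool)
for _ in range(500_000):               # safety cap, exits much earlier
    if not alive.any():
        break
    I[alive] += B[alive] * dt          # left-endpoint rule for the integral
    B[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    out = alive & ((B <= -a) | (B >= b))
    hit_b |= out & (B >= b)
    alive &= ~out

print(hit_b.mean(), a / (a + b))       # exit probability at b, cf. (5.12)
print(I.mean(), a * b * (b - a) / 3)   # E int_0^tau B_s ds = ab(b-a)/3
```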


Problem 5.17 (Solution) By Example 5.2 c) ∣Bt ∣2 −d⋅t is a martingale. Thus we get by optional

stopping

E(t ∧ τ_R) = (1/d) E ∣B_{t∧τ_R}∣² for all t ⩾ 0.

Since ∣B_{t∧τ_R}∣ ⩽ R, we can use monotone convergence on the left and dominated convergence on the right-hand side to get

E τ_R = sup_{t⩾0} E(t ∧ τ_R) = lim_{t→∞} (1/d) E ∣B_{t∧τ_R}∣² = (1/d) E ∣B_{τ_R}∣² = (1/d) R².

Problem 5.18 (Solution) a) We have

{σ ∧ τ ⩽ t} = {σ ⩽ t} ∪ {τ ⩽ t} ∈ F_t.

b) For every t ⩾ 0,

{σ < τ} ∩ {σ ∧ τ ⩽ t} = ({σ ⩽ t} ∩ {t < τ}) ∪ ⋃_{r∈Q∩[0,t]} ({σ ⩽ r} ∩ {r < τ}) ∈ F_t.

This shows that {σ < τ }, {σ ⩾ τ } = {σ < τ }c ∈ Fσ∧τ . Since σ and τ play symmetric

roles, we get with a similar argument that {σ > τ }, {σ ⩽ τ } = {σ > τ }c ∈ Fσ∧τ , and

the claim follows.

c) Since τ ∧σ is an integrable stopping time, we get from Wald’s identities, Theorem 5.10,

that

E Bτ2∧σ = E(τ ∧ σ) < ∞.

E(B_σ B_τ 1_{σ⩽τ}) = E(E(B_{σ∧τ} B_τ 1_{σ⩽τ} ∣ F_{τ∧σ}))
    =(b)= E(B_{σ∧τ} 1_{σ⩽τ} E(B_τ ∣ F_{τ∧σ}))
    =(*)= E(B²_{σ∧τ} 1_{σ⩽τ}).

With an analogous calculation for τ < σ we conclude

E(B_σ B_τ) = E(B²_{σ∧τ}) = E(σ ∧ τ).

In the step marked with (*) we used that for integrable stopping times σ, τ we have

E(B_τ ∣ F_{σ∧τ}) = B_{σ∧τ}. (*)

Since B_{τ∧k} ÐÐ→ (k → ∞) B_τ in L²(P), see the proof of Theorem 5.10, we get for some fixed i < k because of F_{σ∧τ∧i} ⊂ F_{σ∧τ∧k} that

∫_F B_τ dP = lim_{k→∞} ∫_F B_{τ∧k} dP = lim_{k→∞} ∫_F B_{σ∧τ∧k} dP = ∫_F B_{σ∧τ} dP for all F ∈ F_{σ∧τ∧i}.

Let ρ = σ ∧ τ (or any other stopping time). Since Fρ∧k = Fρ ∩ Fk we see that Fρ is

generated by the ∩-stable generator ⋃i Fρ∧i , and (*) follows.

d) Using c) and Wald's identities we get

E((B_τ − B_σ)²) = E B_τ² − 2 E(B_σ B_τ) + E B_σ²
    = E τ − 2 E(τ ∧ σ) + E σ
    = E(τ − 2(τ ∧ σ) + σ)
    = E ∣τ − σ∣.


6 Brownian Motion as a Markov Process

Problem 6.1 (Solution) a) Write g_t(x) = (2πt)^{−1/2} e^{−x²/(2t)} for the one-dimensional normal density. By the Markov property of a Brownian motion

E u(∣Bt+s ∣ ∣ Fs ) = E u(∣(Bt+s − Bs ) + Bs ∣ ∣ Fs )

= E u(∣(Bt+s − Bs ) + y∣)∣

y=Bs

= E u(∣Bt + y∣)∣ .

y=Bs

Since B_t ∼ −B_t, we have E u(∣B_t + y∣)∣_{y=B_s} = E u(∣B_t − y∣)∣_{y=B_s},

and, therefore,

E(u(∣B_{t+s}∣) ∣ F_s) = (1/2) [E u(∣B_t + y∣) + E u(∣B_t − y∣)]_{y=B_s}
    = (1/2) [∫_{−∞}^{∞} (u(∣z + y∣) + u(∣z − y∣)) g_t(z) dz]_{y=B_s}
    = (1/2) [∫_{−∞}^{∞} u(∣z∣) (g_t(z + y) + g_t(z − y)) dz]_{y=B_s}
    = [∫_0^{∞} u(z) (g_t(z + y) + g_t(z − y)) dz]_{y=B_s}
    = [∫_0^{∞} u(z) (g_t(z + ∣y∣) + g_t(z − ∣y∣)) dz]_{y=B_s},

where we used that g_t is even. The integral in the last line depends only on ∣y∣ = ∣B_s∣ and is independent of s; it defines a transition function, and so ∣B∣ is a Markov process.


c) Set Mt ∶= sups⩽t Bs for the running maximum, i.e. Yt = Mt − Bt . From the reflection

principle, Theorem 6.9 we know that Yt ∼ ∣Bt ∣. So the guess is that Y and ∣B∣ are

two Markov processes with the same transition function!

By the Markov property of Brownian motion,

Y_{t+s} = max{ sup_{u⩽s} B_u , sup_{s⩽u⩽s+t} B_u } − B_{t+s}
    = max{ sup_{u⩽s}(B_u − B_s) + (B_s − B_{t+s}) , sup_{0⩽u⩽t}(B_{s+u} − B_{s+t}) },

and, as Y_s = sup_{u⩽s}(B_u − B_s) is F_s measurable and (B_s − B_{t+s}), sup_{0⩽u⩽t}(B_{s+u} − B_{s+t}) ⫫ F_s, we get

E(u(Y_{t+s}) ∣ F_s) = E u( max{ y + (B_s − B_{t+s}) , sup_{0⩽u⩽t}(B_{s+u} − B_{s+t}) })∣_{y=Y_s}.

Using time inversion (cf. 2.11) we see that (B_{t−u} − B_t)_{u∈[0,t]} is again a BM¹, and we get (B_t, sup_{0⩽u⩽t}(B_u − B_t)) ∼ (−B_t, sup_{0⩽u⩽t} B_u); hence

E(u(Y_{t+s}) ∣ F_s) = E u( max{ y + B_t , sup_{0⩽u⩽t} B_u })∣_{y=Y_s}.

Using Solution 2 of Problem 6.8 we know the joint distribution of (Bt , supu⩽t Bu ):

E u( max{ y + B_t , sup_{0⩽u⩽t} B_u }) = ∫_{z=0}^∞ ∫_{x=−∞}^z u(max{y + x, z}) (2(2z − x)/√(2πt³)) e^{−(2z−x)²/(2t)} dx dz.

Splitting the integral ∫_{x=−∞}^z into the two parts with y + x ⩽ z and with y + x > z we get two terms. In the first, max{y + x, z} = z, and since

∫_{x=−∞}^{z−y} ((2z − x)/t) e^{−(2z−x)²/(2t)} dx = [e^{−(2z−x)²/(2t)}]_{x=−∞}^{z−y} = e^{−(z+y)²/(2t)},

we find

I = (2/√(2πt)) ∫_{z=0}^∞ u(z) e^{−(z+y)²/(2t)} dz.

In the second, max{y + x, z} = y + x, and interchanging the order of integration gives

II = (2/√(2πt)) ∫_{z=0}^∞ ∫_{x=z−y}^z u(y + x) ((2z − x)/t) e^{−(2z−x)²/(2t)} dx dz
    = (2/√(2πt)) ∫_{x=−y}^∞ u(y + x) ∫_{z=x}^{x+y} ((2z − x)/t) e^{−(2z−x)²/(2t)} dz dx.

The inner integral equals −(1/2)[e^{−(2z−x)²/(2t)}]_{z=x}^{z=x+y} = (1/2)(e^{−x²/(2t)} − e^{−(x+2y)²/(2t)}), and so

II = (1/√(2πt)) ∫_{x=−y}^∞ u(y + x) [e^{−x²/(2t)} − e^{−(x+2y)²/(2t)}] dx
    = (1/√(2πt)) ∫_{ξ=0}^∞ u(ξ) [e^{−(ξ−y)²/(2t)} − e^{−(ξ+y)²/(2t)}] dξ (ξ = y + x).

Finally, adding I and II we end up with

E(u(max{y + B_t, sup_{0⩽u⩽t} B_u})) = ∫_0^∞ u(z)(g_t(z + y) + g_t(z − y)) dz, y ⩾ 0,

which is exactly the transition function of ∣B∣ from part a). Hence Y is a Markov process with the same transition function as ∣B∣.

Problem 6.2 (Solution) Set

I_s = ∫_0^s B_r dr, M_s = sup_{u⩽s} B_u and F_s = F_s^B.

a) We have

E(f(M_{s+t}, B_{s+t}) ∣ F_s)
    = E(f( sup_{s⩽u⩽s+t} B_u ∨ M_s , (B_{s+t} − B_s) + B_s ) ∣ F_s)
    = E(f( [ sup_{s⩽u⩽s+t}(B_u − B_s) + B_s ] ∨ M_s , (B_{s+t} − B_s) + B_s ) ∣ F_s).

Observe that sup_{s⩽u⩽s+t}(B_u − B_s), B_{s+t} − B_s ⫫ F_s while M_s and B_s are F_s measurable. Thus, we can treat these groups of random variables separately (see, e.g., Lemma A.3):

E(f(M_{s+t}, B_{s+t}) ∣ F_s) = E(f( [ sup_{s⩽u⩽s+t}(B_u − B_s) + z ] ∨ y , (B_{s+t} − B_s) + z ))∣_{y=M_s, z=B_s} = φ(M_s, B_s)

where

φ(y, z) = E(f( [ sup_{s⩽u⩽s+t}(B_u − B_s) + z ] ∨ y , (B_{s+t} − B_s) + z )).

This shows that ((M_t, B_t))_t is a Markov process.

b) Similarly,

E(f(I_{s+t}, B_{s+t}) ∣ F_s)
    = E(f( ∫_s^{s+t} B_u du + I_s , (B_{s+t} − B_s) + B_s ) ∣ F_s)
    = E(f( ∫_s^{s+t} (B_u − B_s) du + I_s + tB_s , (B_{s+t} − B_s) + B_s ) ∣ F_s).

Observe that ∫_s^{s+t}(B_u − B_s) du, B_{s+t} − B_s ⫫ F_s while I_s + tB_s and B_s are F_s measurable. Thus, we can treat these groups of random variables separately (see, e.g., Lemma A.3):

E(f(I_{s+t}, B_{s+t}) ∣ F_s) = E(f( ∫_s^{s+t}(B_u − B_s) du + y + tz , (B_{s+t} − B_s) + z ))∣_{y=I_s, z=B_s} = φ(I_s, B_s)

where

φ(y, z) = E(f( ∫_s^{s+t}(B_u − B_s) du + y + tz , (B_{s+t} − B_s) + z )).

This shows that ((I_t, B_t))_t is a Markov process.

c) No! If we use the calculation of a) and b) for a function f(y, z) = g(y), i.e. depending only on M or I, respectively, we see that we still get

E(g(M_{t+s}) ∣ F_s) = φ(M_s, B_s) and E(g(I_{t+s}) ∣ F_s) = ψ(I_s, B_s),

and these expressions depend, in general, genuinely on B_s as well; i.e. (I_t, F_t)_t cannot be a Markov process. The same argument applies to (M_t, F_t)_t.

Problem 6.3 (Solution) First, if f ∶ R^d × ⋯ × R^d (n times) → R, f = f(x_1, . . . , x_n), x_1, . . . , x_n ∈ R^d, is bounded and measurable, we see that

E^x f(B(t_1), . . . , B(t_n)) = E f(B(t_1) + x, . . . , B(t_n) + x),

and the last expression is clearly measurable. This applies, in particular, to f = ∏nj=1 1Aj

where G ∶= ⋂nj=1 {B(tj ) ∈ Aj }, i.e. Ex 1G is Borel measurable.

Set

Γ ∶= { ⋂_{j=1}^n {B(t_j) ∈ A_j} ∶ n ⩾ 0, 0 ⩽ t_1 < ⋯ < t_n, A_1, . . . , A_n ∈ B(R^d) }

and

Σ ∶= { A ∈ F_∞^B ∶ x ↦ E^x 1_A is Borel measurable }.

Let us see that Σ is a Dynkin system. Clearly, ∅ ∈ Σ. If A ∈ Σ, then

x ↦ Ex 1Ac = Ex (1 − 1A ) = 1 − Ex 1A ∈ Bb (Rd ) Ô⇒ Ac ∈ Σ.

If (A_j)_{j⩾1} ⊂ Σ are pairwise disjoint, then A ∶= ⋃_j A_j satisfies

x ↦ E^x 1_A = ∑_j E^x 1_{A_j} ∈ B_b(R^d), i.e. A ∈ Σ.

This shows that Σ is a Dynkin System. Denote by δ(⋅) the Dynkin system generated by

the argument. Then

Γ ⊂ Σ ⊂ F_∞^B ⇒ δ(Γ) ⊂ δ(Σ) = Σ ⊂ F_∞^B.

But δ(Γ) = σ(Γ) since Γ is stable under finite intersections, and σ(Γ) = F_∞^B. This proves, in particular, that Σ = F_∞^B.

Since we can approximate every bounded F_∞^B measurable function Z by step functions with steps from F_∞^B, the claim follows.


Problem 6.4 (Solution) Following the hint we set un (x) ∶= (−n)∨x∧n. Then un (x) → u(x) ∶= x.

Using (6.7) we see

E [un (Bt+τ ) ∣ Fτ + ](ω) = EBτ (ω) un (Bt ).

Taking t = 0 and using E^y u_n(B_0) = u_n(y) we find E[u_n(B_τ) ∣ F_{τ+}](ω) = u_n(B_τ)(ω),

and we get

lim E [un (Bτ ) ∣ Fτ + ](ω) = lim un (Bτ )(ω) = Bτ (ω).

n→∞ n→∞

Since the left-hand side is F_{τ+} measurable (as limit of such measurable functions!), the claim follows.

By the scaling property of Brownian motion, (cB_{s/c²})_{s⩾0} is again a BM¹, and so

τ_{cb} = inf{s ⩾ 0 ∶ B_s = cb} ∼ inf{s ⩾ 0 ∶ cB_{s/c²} = cb} = inf{s ⩾ 0 ∶ B_{s/c²} = b}
    = inf{rc² ⩾ 0 ∶ B_r = b}
    = c² inf{r ⩾ 0 ∶ B_r = b} = c² τ_b.

c) We have

Brownian motion.

{τ_c ⩽ s} ∩ {τ_a ⩽ t} =(τ_c ⩽ τ_a)= {τ_c ⩽ s ∧ t} ∩ {τ_a ⩽ t} ∈ F_{t∧s} ∩ F_t ⊂ F_t.

This shows that {τc ⩽ s} ∈ Fτa , i.e. τc is Fτa measurable. Since c is arbitrary, {τc }c∈[0,a]

is Fτa measurable, and the claim follows.

Problem 6.7 (Solution) We begin with a simpler situation. As usual, we write τb for the first

passage time of the level b: τ_b = inf{t ⩾ 0 ∶ sup_{s⩽t} B_s = b} where b > 0. From Example 5.2 d) we know that (M_t^ξ ∶= exp(ξB_t − (1/2)tξ²))_{t⩾0} is a martingale. By optional stopping we get


that (M^ξ_{t∧τ_b})_{t⩾0} is also a martingale and has, therefore, constant expectation. Thus, for ξ > 0 (and with E = E^0)

1 = E M_0^ξ = E(exp(ξB_{t∧τ_b} − (1/2)(t ∧ τ_b)ξ²)).

Since the RV exp(ξBt∧τb ) is bounded (mind: ξ ⩾ 0 and Bt∧τb ⩽ b), we can let t → ∞ and

get

1 = E(exp(ξB_{τ_b} − (1/2)τ_b ξ²)) = E(exp(ξb − (1/2)τ_b ξ²))

or, if we take ξ = √(2λ),

E e^{−λτ_b} = e^{−√(2λ) b}.

By the symmetry B ∼ −B the same calculation applies to b < 0, and so

E e^{−λτ_b} = e^{−√(2λ) ∣b∣} for all b ∈ R.
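The formula E e^{−λτ_b} = e^{−√(2λ)∣b∣} can be tested by simulating first passage times on a grid (λ, b, step size and horizon are ad-hoc choices; paths that have not reached b by the horizon T contribute at most e^{−λT}, which is negligible here):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, b = 1.0, 1.0
dt, T, N = 1e-3, 20.0, 5000
steps = int(T / dt)

B = np.zeros(N)
tau = np.full(N, np.inf)                 # first passage time of level b
for k in range(1, steps + 1):
    B += np.sqrt(dt) * rng.standard_normal(N)
    new = np.isinf(tau) & (B >= b)
    tau[new] = k * dt

est = np.exp(-lam * tau).mean()          # exp(-inf) = 0 for unfinished paths
print(est, np.exp(-np.sqrt(2 * lam) * b))   # ~ 0.24
```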

Now let us turn to the situation of the problem. Set τ ∶= τ°_{(a,b)^c}. Here, B_{t∧τ} is bounded (it

is in the interval (a, b), and this makes things easier when it comes to optional stopping.

As before, we get by stopping the martingale (M_t^ξ)_{t⩾0} that

e^{ξx} = E^x(e^{ξB_{t∧τ} − (1/2)(t∧τ)ξ²}) ÐÐ→ (t → ∞) E^x(e^{ξB_τ − (1/2)τξ²})

for all ξ ∈ R (and not, as before, only for positive ξ! Mind also the starting point x ≠ 0, but this does not change things dramatically) by, e.g., dominated convergence. The problem is now that B_τ does not attain a particular value as it may be a or b. We get, therefore, for all ξ ∈ R

e^{ξx} = e^{ξa} E^x(e^{−(1/2)τξ²} 1_{B_τ=a}) + e^{ξb} E^x(e^{−(1/2)τξ²} 1_{B_τ=b}).

√ √ √

e 2λ x

=e 2λ a

Ex (e−λτ 1{Bτ =a} ) + e 2λ b

Ex (e−λτ 1{Bτ =b} )

√ √ √

e− 2λ x

= e− 2λ a

Ex (e−λτ 1{Bτ =a} ) + e− 2λ b

Ex (e−λτ 1{Bτ =b} )

Multiplying the first equation by e^{−√(2λ) a} and the second by e^{√(2λ) a} gives

e^{√(2λ)(x−a)} = E^x(e^{−λτ} 1_{B_τ=a}) + e^{√(2λ)(b−a)} E^x(e^{−λτ} 1_{B_τ=b})
e^{−√(2λ)(x−a)} = E^x(e^{−λτ} 1_{B_τ=a}) + e^{−√(2λ)(b−a)} E^x(e^{−λτ} 1_{B_τ=b})

and so

E^x(e^{−λτ} 1_{B_τ=b}) = sinh(√(2λ)(x − a)) / sinh(√(2λ)(b − a)) and E^x(e^{−λτ} 1_{B_τ=a}) = sinh(√(2λ)(b − x)) / sinh(√(2λ)(b − a)).

For the solution of Problem a) we only have to add these two expressions:

E^x e^{−λτ} = E^x(e^{−λτ} 1_{B_τ=a}) + E^x(e^{−λτ} 1_{B_τ=b}) = [sinh(√(2λ)(b − x)) + sinh(√(2λ)(x − a))] / sinh(√(2λ)(b − a)).


Problem 6.8 (Solution) Solution 1 (using Theorem 6.11): Let τ = τ_y be the first passage time of the level y. Then B_τ = y and we get for y ⩾ x

P(B_t ⩽ x, M_t ⩾ y) = P(B_t ⩽ x, τ ⩽ t) = P(B_{t∨τ} ⩽ x, τ ⩽ t)

by the tower property and pull-out. Now we can use Theorem 6.11 and the symmetry B ∼ −B to reflect the path at time τ; since y ⩾ x,

P(B_t ⩽ x, M_t ⩾ y) = P(B_t ⩾ 2y − x) = ∫_{2y−x}^∞ (2πt)^{−1/2} e^{−z²/(2t)} dz.

Differentiating in x and y we get

P(B_t ∈ dx, M_t ∈ dy) = (2(2y − x)/√(2πt³)) e^{−(2y−x)²/(2t)} dx dy.

Solution 2 (using Theorem 6.18): We have (with the notation of Theorem 6.18)

P(M_t < y, B_t ∈ dx) = lim_{a→−∞} P(m_t > a, M_t < y, B_t ∈ dx) =(6.19)= (dx/√(2πt)) [e^{−x²/(2t)} − e^{−(x−2y)²/(2t)}],

and if we differentiate this expression in y we get

P(B_t ∈ dx, M_t ∈ dy) = (2(2y − x)/√(2πt³)) e^{−(2y−x)²/(2t)} dx dy.
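Setting x = y, the result contains the reflection principle P(M_t ⩾ y) = 2 P(B_t ⩾ y), which a simple grid simulation can confirm (all parameters are ad-hoc choices; the discrete-time maximum slightly underestimates M_t):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
t, y = 1.0, 1.0
dt, N = 1e-4, 5000
steps = int(t / dt)

B = np.zeros(N)
M = np.zeros(N)                                # running maximum of each path
for _ in range(steps):
    B += np.sqrt(dt) * rng.standard_normal(N)
    np.maximum(M, B, out=M)

p_ref = 2 * 0.5 * (1 - erf(y / sqrt(2 * t)))   # 2 P(B_t >= y)
print((M >= y).mean(), p_ref)                  # both ~ 0.317
```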

Problem 6.9 (Solution) This is the so-called absorbed or killed Brownian motion. The result

is

P^x(B_t ∈ dz, τ_0 > t) = (g_t(x − z) − g_t(x + z)) dz = (1/√(2πt)) (e^{−(x−z)²/(2t)} − e^{−(x+z)²/(2t)}) dz

for x, z > 0 or x, z < 0.

To see this result we assume that x > 0. Write Mt = sups⩽t Bs and mt = inf s⩽t Bs for the

running maximum and minimum, respectively. Then we have for A ⊂ [0, ∞)


P^x(B_t ∈ A, τ_0 > t) = P^x(B_t ∈ A, x ⩾ m_t > 0)
    = P^0(−B_t ∈ A − x, 0 ⩾ −M_t > −x) (B ∼ −B)
    = P^0(B_t ∈ x − A, 0 ⩽ M_t < x).

P^0(B_t ∈ da, M_t ∈ db) = (2(2b − a)/√(2πt³)) exp(−(2b − a)²/(2t)) da db

and we get

P^x(B_t ∈ A, τ_0 > t) = ∫ 1_A(x − a) [∫_0^x (2(2b − a)/√(2πt³)) exp(−(2b − a)²/(2t)) db] da
    = (1/√(2πt)) ∫ 1_A(x − a) [∫_0^x (2(2b − a)/t) exp(−(2b − a)²/(2t)) db] da
    = (1/√(2πt)) ∫ 1_A(x − a) [−exp(−(2b − a)²/(2t))]_{b=0}^{b=x} da
    = (1/√(2πt)) ∫ 1_A(x − a) {exp(−a²/(2t)) − exp(−(2x − a)²/(2t))} da
    = (1/√(2πt)) ∫ 1_A(z) {exp(−(x − z)²/(2t)) − exp(−(x + z)²/(2t))} dz (z = x − a).

The calculation for x < 0 is similar (actually easier): let A ⊂ (−∞, 0]. Then

P^x(B_t ∈ A, τ_0 > t) = P^0(B_t ∈ A − x, 0 ⩽ M_t < −x)
    = ∫ 1_A(a + x) [∫_0^{−x} (2(2b − a)/√(2πt³)) exp(−(2b − a)²/(2t)) db] da
    = (1/√(2πt)) ∫ 1_A(a + x) [−exp(−(2b − a)²/(2t))]_{b=0}^{b=−x} da
    = (1/√(2πt)) ∫ 1_A(a + x) {exp(−a²/(2t)) − exp(−(2x + a)²/(2t))} da
    = (1/√(2πt)) ∫ 1_A(y) {exp(−(y − x)²/(2t)) − exp(−(x + y)²/(2t))} dy (y = a + x).

Problem 6.10 (Solution) For a compact set K ⊂ Rd the set Un ∶= K + B(0, 1/n) ∶= {x + y ∶ x ∈

K, ∣y∣ < 1/n} is open.


Setting φ_n(x) ∶= d(x, U_n^c)/(d(x, K) + d(x, U_n^c)) and using that x ↦ d(x, A) is continuous for any set A ≠ ∅, we see that φ_n(x) is continuous. Obviously, 1_{U_n}(x) ⩾ φ_n(x) ⩾ φ_{n+1}(x) ⩾ 1_K(x), and 1_K = inf_n φ_n follows.

Problem 6.11 (Solution) Set ξ̃_t ∶= inf{s ⩾ t ∶ B_s = 0}. Using the Markov property we get for a > t

P(ξ̃_t > a) = P(inf{h ⩾ 0 ∶ B_{t+h} = 0} + t > a)
    = E[P^{B_t}(inf{h ⩾ 0 ∶ B_h = 0} > a − t)]
    = E[P^x(τ_0 > a − t)∣_{x=B_t}]
    = E[P(τ_{−x} > a − t)∣_{x=B_t}]
    = E[P(τ_x > a − t)∣_{x=B_t}] (B ∼ −B)
    =(6.13)= E[∫_{a−t}^∞ (∣B_t∣/√(2πs³)) e^{−B_t²/(2s)} ds]
    = ∫_{a−t}^∞ E[(∣B_t∣/√(2πs³)) e^{−B_t²/(2s)}] ds.

Thus, differentiating with respect to a and using Brownian scaling (B_t ∼ √t B_1, cB_1 ∼ B_{c²}) yields

P(ξ̃_t ∈ da)/da = E[(∣B_t∣/√(2π(a − t)³)) exp(−B_t²/(2(a − t)))]
    = (1/((a − t)√π)) E[∣cB_1∣ exp(−(cB_1)²)]
    = (1/((a − t)√π)) E[∣B_{c²}∣ exp(−B²_{c²})],

where c² = (1/2) · t/(a − t).

It remains to compute E[∣B_s∣ e^{−B_s²}]:

E[∣B_s∣ e^{−B_s²}] = (2πs)^{−1/2} ∫_{−∞}^∞ ∣x∣ e^{−x²} e^{−x²/(2s)} dx
    = (2πs)^{−1/2} · 2 ∫_0^∞ x e^{−x²(1 + (2s)^{−1})} dx
    = (2πs)^{−1/2} (1/(1 + (2s)^{−1})) ∫_0^∞ 2(1 + (2s)^{−1}) x e^{−x²(1 + (2s)^{−1})} dx
    = (1/√(2πs)) (2s/(2s + 1)) [−e^{−x²(1 + (2s)^{−1})}]_{x=0}^{∞}
    = (1/√(2πs)) · 2s/(2s + 1).


Recall that ξ̃_t = inf{s ⩾ t ∶ B_s = 0}. This gives, with 2c² = t/(a − t) and 2c² + 1 = a/(a − t),

P(ξ̃_t ∈ da)/da = (1/((a − t)√π)) · (1/(√(2π) c)) · (2c²/(2c² + 1))
    = (1/((a − t)π)) √((a − t)/t) · (t/a)
    = (1/(aπ)) √(t/(a − t)), a > t.

Problem 6.12 (Solution) a) We have

P(B_t = 0 for some t ∈ (u, v)) = 1 − P(B_t ≠ 0 for all t ∈ (u, v)).

Since

P(B_t ≠ 0 for all t ∈ (u, v)) = (2/π) arcsin √(u/v),

we get

P(B_t = 0 for some t ∈ (u, v)) = 1 − (2/π) arcsin √(u/v).
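For u = v/4 the formula gives 1 − (2/π) arcsin(1/2) = 2/3. A crude check (grid and sample sizes are ad-hoc choices) counts discretized paths with a sign change in (u, v):

```python
import numpy as np

rng = np.random.default_rng(7)
u, v = 0.25, 1.0
dt = 1e-4
steps = int(round((v - u) / dt))
N = 5000

B = np.sqrt(u) * rng.standard_normal(N)        # B_u ~ N(0, u)
hit = np.zeros(N, dtype=bool)
for _ in range(steps):
    Bnew = B + np.sqrt(dt) * rng.standard_normal(N)
    hit |= np.sign(Bnew) != np.sign(B)         # sign change => a zero in between
    B = Bnew

p_ref = 1 - (2 / np.pi) * np.arcsin(np.sqrt(u / v))
print(hit.mean(), p_ref)                       # both ~ 2/3
```

Zeros sitting entirely between grid points are missed, so the estimate is biased slightly downwards, by an amount of order √dt.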

b) Since (u, v) ⊂ (u, w), we find with the classical conditional probability

P(B_t ≠ 0 ∀t ∈ (u, w) ∣ B_t ≠ 0 ∀t ∈ (u, v))
    = P({B_t ≠ 0 ∀t ∈ (u, w)} ∩ {B_t ≠ 0 ∀t ∈ (u, v)}) / P(B_t ≠ 0 ∀t ∈ (u, v))
    = P(B_t ≠ 0 ∀t ∈ (u, w)) / P(B_t ≠ 0 ∀t ∈ (u, v))
    =(a)= arcsin √(u/w) / arcsin √(u/v).

c) We have

lim_{u→0} P(B_t ≠ 0 ∀t ∈ (u, w) ∣ B_t ≠ 0 ∀t ∈ (u, v))
    =(b)= lim_{u→0} arcsin √(u/w) / arcsin √(u/v)
    = lim_{u→0} √(u/w) / √(u/v)
    = √(v/w),

since arcsin x ∼ x as x → 0.

Problem 6.13 (Solution) We have seen in Problem 6.1 that M − B is a Markov process with

the same law as ∣B∣. This entails immediately that ξ ∼ η.

Attention: this problem shows that it is not enough to have only Mt − Bt ∼ ∣Bt ∣ for all

t ⩾ 0, we do need that the finite-dimensional distributions coincide. The Markov property

guarantees just this once the one-dimensional distributions coincide!


7 Brownian Motion and Transition Semigroups

Problem 7.1 (Solution) Banach space: It is obvious that C∞ (Rd ) is a linear space. Let us show

that it is closed. By definition, u ∈ C∞(R^d) if u is continuous and

∀ε > 0 ∃R > 0 ∶ ∣u(x)∣ ⩽ ε for all ∣x∣ > R. (*)

Let (u_n)_n ⊂ C∞(R^d) be a Cauchy sequence for the uniform convergence. It is clear that the uniform limit u = lim_n u_n is again continuous. Fix ε > 0 and pick n(ε) with ∥u_{n(ε)} − u∥_∞ ⩽ ε. Then we get

∣u(x)∣ ⩽ ∣u_{n(ε)}(x) − u(x)∣ + ∣u_{n(ε)}(x)∣ ⩽ ∥u_{n(ε)} − u∥_∞ + ∣u_{n(ε)}(x)∣.

Since u_{n(ε)} ∈ C∞, we find with (*) some R = R(n(ε), ε) = R(ε) such that ∣u(x)∣ ⩽ 2ε for all ∣x∣ > R(ε); this proves u ∈ C∞(R^d).

Density: Fix ε > 0 and pick R > 0 as in (*), and pick a cut-off function χ = χ_R ∈ C_c(R^d) such that

1_{B(0,R)} ⩽ χ_R ⩽ 1_{B(0,2R)}.

Then χ_R u ∈ C_c(R^d) and

∥u − χ_R u∥_∞ = sup_x ∣u(x)∣(1 − χ_R(x)) ⩽ sup_{∣x∣>R} ∣u(x)∣ ⩽ ε,

which shows that C_c(R^d) is dense in C∞(R^d).

Problem 7.2 (Solution) Fix (t, y, v) ∈ [0, ∞) × R^d × C∞(R^d) and ε > 0, and take any (s, x, u) ∈

[0, ∞) × Rd × C∞ (Rd ). Then we find using the triangle inequality

∣Ps u(x) − Pt v(y)∣ ⩽ ∣Ps u(x) − Ps v(x)∣ + ∣Ps v(x) − Pt v(x)∣ + ∣Pt v(x) − Pt v(y)∣

⩽ sup ∣Ps u(x) − Ps v(x)∣ + sup ∣Ps v(x) − Ps Pt−s v(x)∣ + ∣Pt v(x) − Pt v(y)∣

x x

⩽ ∥u − v∥∞ + ∥v − Pt−s v∥∞ + ∣Pt v(x) − Pt v(y)∣


• Since P_t v ∈ C∞(R^d) is uniformly continuous, there is some δ_1 = δ_1(t, v, ε) such that ∣x − y∣ < δ_1 ⇒ ∣P_t v(x) − P_t v(y)∣ < ε.

• Using the strong continuity of the semigroup (Proposition 7.3 f)) there is some δ_2 = δ_2(t, v, ε) such that ∣t − s∣ < δ_2 ⇒ ∥P_{t−s} v − v∥_∞ ⩽ ε.

Problem 7.3 (Solution) Using the tower property, the pull-out property and the Markov property we get

E^x(f(X_t) g(X_{t+s})) = E^x(f(X_t) E^x(g(X_{t+s}) ∣ F_t)) = E^x(f(X_t) h(X_t)),

where, by the Feller property, h(y) = E^y g(X_s) is again in C∞.

Thus, E^x f(X_t)g(X_{t+s}) = E^x φ(X_t) and φ(y) = f(y)h(y) is in C∞. This shows that x ↦ E^x(f(X_t)g(X_{t+s})) is in C∞.

Using semigroups we can write the above calculation in the following form: E^x(f(X_t)g(X_{t+s})) = P_t(f · P_s g)(x).

Problem 7.4 (Solution) Set u(t, z) ∶= P_t u(z) = p_t ⋆ u(z) = (2πt)^{−d/2} ∫_{R^d} u(y) e^{−∣z−y∣²/(2t)} dy.

Since

p_t(z − y) = (2πt)^{−d/2} e^{−∣z−y∣²/(2t)}, t > 0,

decays exponentially, we see — as in the proof of Proposition 7.3 g) — that for each z (with a polynomial factor Q_k(z, y, t) coming from the derivative ∂_z^k)

∣∂_z^k p_t(z − y)∣ ⩽ sup_{∣y∣⩽2R} ∣Q_k(z, y, t)∣ 1_{B(0,2R)}(y) + sup_{∣y∣⩾2R} ∣Q_k(z, y, t) e^{−∣y∣²/(16t)}∣ e^{−∣y∣²/(16t)} 1_{B^c(0,2R)}(y).

This inequality holds uniformly in a small neighbourhood of z, i.e. we can use the differentiation lemma from measure and integration to conclude that ∂^k P_t u ∈ C_b.


x ↦ ∂_t u(t, x) is in C∞ for t > 0: This follows from the first part and the fact that

∂_t p_t(z − y) = −(d/2)(2π)^{−d/2} t^{−d/2−1} e^{−∣z−y∣²/(2t)} + (2πt)^{−d/2} (∣z−y∣²/(2t²)) e^{−∣z−y∣²/(2t)}
    = (1/2)(∣z − y∣²/t² − d/t) p_t(z − y).

Again with the domination argument of the first part we see that ∂_t ∂_x^k u(t, x) is continuous on (0, ∞) × R^d.

Problem 7.5 (Solution) (a) Note that ∣un ∣ ⩽ ∣u∣ ∈ Lp . Since ∣un −u∣p ⩽ (∣un ∣+∣u∣)p ⩽ (∣u∣+∣u∣)p =

2p ∣u∣p ∈ L1 and since ∣un (x) − u(x)∣ → 0 for every x as n → ∞, the claim follows by

dominated convergence.

(b) By Young's inequality,

∥P_t u_n − P_t u_m∥_{L^p} = ∥p_t ⋆ (u_n − u_m)∥_{L^p} ⩽ ∥p_t∥_{L^1} ∥u_n − u_m∥_{L^p} = ∥u_n − u_m∥_{L^p}.

Thus the Cauchy property of (u_n)_n carries over to (P_t u_n)_n, and therefore P̃_t u ∶= lim_n P_t u_n exists in L^p.

If vn is any other sequence in Lp with limit u, the above argument shows that

limn Pt vn also exists. ‘Mixing’ the sequences (wn ) ∶= (u1 , v1 , u2 , v2 , u3 , v3 , . . .) pro-

duces yet another convergent sequence with limit u, and we conclude that

lim_n P_t u_n = lim_n P_t w_n = lim_n P_t v_n.

(c) Any u ∈ Lp with 0 ⩽ u ⩽ 1 has a representative u ∈ Bb . And then the claim follows

since Pt is sub-Markovian.

(d) Recall that y ↦ ∥u(⋅ + y) − u∥_{L^p} is for u ∈ L^p(dx) a continuous function. By Fubini's theorem and the Hölder inequality,

∥P_t u − u∥^p_{L^p} = ∫ ∣E(u(x + B_t) − u(x))∣^p dx ⩽ E(∫ ∣u(x + B_t) − u(x)∣^p dx) = E(∥u(⋅ + B_t) − u∥^p_{L^p}).

Since the integrand is bounded by (2∥u∥_{L^p})^p and B_t → 0 a.s. as t → 0, we can use the dominated convergence theorem to conclude that lim_{t→0} ∥P_t u − u∥_{L^p} = 0.


Problem 7.7 (Solution) Using T_t 1_C(x) = p_t(x, C) = ∫ 1_C(y) p_t(x, dy) we get (with t_0 ∶= 0, x_0 ∶= x)

p^x_{t_1,...,t_n}(C_1 × ⋯ × C_n)
    = T_{t_1}(1_{C_1}[T_{t_2−t_1}(1_{C_2}{⋯ T_{t_{n−1}−t_{n−2}}(1_{C_{n−1}} ∫ 1_{C_n}(x_n) p_{t_n−t_{n−1}}(⋅, dx_n))⋯})])(x)
    = ∫ ⋯ ∫ 1_{C_1}(x_1)⋯1_{C_n}(x_n) p_{t_n−t_{n−1}}(x_{n−1}, dx_n)⋯p_{t_1−t_0}(x_0, dx_1) (n integrals)
    = ∫ ⋯ ∫ 1_{C_1×⋯×C_n}(x_1, . . . , x_n) ∏_{j=1}^n p_{t_j−t_{j−1}}(x_{j−1}, dx_j).

This shows that p^x_{t_1,...,t_n}(C_1 × ⋯ × C_n) is the restriction of

p^x_{t_1,...,t_n}(Γ) = ∫ ⋯ ∫ 1_Γ(x_1, . . . , x_n) ∏_{j=1}^n p_{t_j−t_{j−1}}(x_{j−1}, dx_j), Γ ∈ B(R^{d⋅n}),

to the rectangles,

and the right-hand side clearly defines a probability measure. By the uniqueness theorem

for measures, each measure is uniquely defined by its values on the rectangles, so we are

done.

Problem 7.8 (Solution) (a) For every a ∈ A,

inf_{α∈A} ∣x − α∣ ⩽ ∣x − a∣ ⩽ ∣x − z∣ + ∣z − a∣,

and taking the infimum over a ∈ A gives

inf_{α∈A} ∣x − α∣ ⩽ ∣x − z∣ + inf_{a∈A} ∣z − a∣.

Interchanging the roles of x and z we conclude ∣d(x, A) − d(z, A)∣ ⩽ ∣x − z∣, i.e. x ↦ d(x, A) is (Lipschitz) continuous.


d(x,Unc )

(b) By definition, Un = K + B(0, 1/n) and un (x) ∶= d(x,K)+d(x,Unc ) . Being a combination

of continuous functions, see Part (a), un is clearly continuous. Moreover,

un ∣K ≡ 1 and un ∣Unc ≡ 0.

This shows that 1K ⩽ un ⩽ 1_{Un} ÐÐ→ 1K as n → ∞.

(c) Assume, without loss of generality, that supp χn ⊂ B(0, 1/n2 ). Since 0 ⩽ un ⩽ 1, we

find

χn ⋆ un (x) = ∫ χn (x − y)un (y) dy ⩽ ∫ χn (x − y) dy = 1 ∀x.

un (y) = d(y, Unc ) / (d(y, K) + d(y, Unc )) ⩾ ((1 − γ)/n) / ((γ/n) + ((1 − γ)/n)) = ((1 − γ)/n) / (1/n) = 1 − γ  ∀ y ∈ K + B(0, γ/n).

(Essentially this means that un is ‘linear’ for x ∈ Un ∖ K!). Thus, if γ > 1/n,

χn ⋆ un (x) ⩾ (1 − γ) ∫ χn (x − y) 1K+B(0,γ/n) (y) dy

⩾ (1 − γ) ∫ χn (x − y)1x+B(0,1/n2 ) (y) dy

=1−γ ∀x ∈ K.

hence,

lim_{n→∞} χn ⋆ un (x) = 1 for all x ∈ K.

Now take x with d(x, K) > 1/n + 1/n². Since

1/n + 1/n² < d(x, K) ⩽ d(x, y) + d(y, K) Ô⇒ d(x, y) > 1/n² or d(y, K) > 1/n,

and so, using that supp χn ⊂ B(0, 1/n2 ) and supp un ⊂ K + B(0, 1/n),

χn ⋆ un (x) = ∫ χn (x − y) un (y) dy = 0 ∀ x ∶ d(x, K) > 1/n + 1/n².

It follows that limn χn ⋆ un (x) = 0 for x ∈ K c .

Alternatively, one can use vn ∶= χn ⋆ 1_{K+supp un} where (χn )n is any sequence of type δ. Indeed, as before,


For x ∈ K we find

vn (x) = ∫ χn (y) 1_{K+supp un} (x − y) dy = ∫ χn (y) dy = 1 ∀ x ∈ K.

Note, however, that χn ⋆ 1K need not be a pointwise (everywhere) approximation of 1K : consider K = {0}, then χn ⋆ 1K ≡ 0. In fact, since 1K ∈ L1 we get χn ⋆ 1K → 1K in L1 hence, for a subsequence, a.e.
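The construction in parts (b) and (c) can be sketched numerically in one dimension. The following illustration is not part of the original solution; the choices K = [−1, 1], n = 10 and a triangle kernel for χn are arbitrary.

```python
import math

# Illustration of (b)/(c): K = [-1, 1], U_n = K + B(0, 1/n),
# u_n(x) = d(x, U_n^c) / (d(x, K) + d(x, U_n^c)), triangle kernel chi_n
# supported in [-1/n^2, 1/n^2].  All parameters are arbitrary choices.
n = 10
eps = 1.0 / n**2

def d_K(x):        # distance to K = [-1, 1]
    return max(0.0, abs(x) - 1.0)

def d_Unc(x):      # distance to U_n^c = {|x| >= 1 + 1/n}
    return max(0.0, 1.0 + 1.0 / n - abs(x))

def u_n(x):
    return d_Unc(x) / (d_K(x) + d_Unc(x))

def chi_n(y):      # triangle kernel, integrates to 1, supp in [-eps, eps]
    return max(0.0, 1.0 - abs(y) / eps) / eps

def conv(x, m=2001):
    # trapezoidal rule for (chi_n * u_n)(x) over the kernel's support
    h = 2 * eps / (m - 1)
    total = 0.0
    for j in range(m):
        y = -eps + j * h
        w = 0.5 if j in (0, m - 1) else 1.0
        total += w * chi_n(y) * u_n(x - y)
    return total * h

print(conv(0.0))   # ~1 : the smoothed function equals 1 well inside K
print(conv(2.0))   # 0  : it vanishes away from U_n
```

The convolution is 1 on the interior of K (where un ≡ 1 on the whole kernel support) and 0 once the kernel support misses K + B(0, 1/n), exactly as in the proof.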

Problem 7.9 (Solution) (a) Existence, contractivity: Let us, first of all, check that the series

converges. Denote by ∥A∥ any matrix norm in Rd . Then we see

∥Pt ∥ = ∥ ∑_{j=0}^{∞} (tA)^j / j! ∥ ⩽ ∑_{j=0}^{∞} t^j ∥A^j ∥ / j! ⩽ ∑_{j=0}^{∞} t^j ∥A∥^j / j! = e^{t∥A∥} .

This shows that, in general, Pt is not a contraction. We can make it into a contraction

by setting Qt ∶= e−t∥A∥ Pt . It is clear that Qt is again a semigroup, if Pt is a semigroup.

Semigroup property: This is shown as for the one-dimensional exponential series. Indeed,

e^{(t+s)A} = ∑_{k=0}^{∞} (t + s)^k A^k / k!
= ∑_{k=0}^{∞} ∑_{j=0}^{k} (1/k!) \binom{k}{j} t^j s^{k−j} A^k
= ∑_{k=0}^{∞} ∑_{j=0}^{k} (t^j A^j / j!) (s^{k−j} A^{k−j} / (k − j)!)
= ∑_{j=0}^{∞} (t^j A^j / j!) ∑_{k=j}^{∞} s^{k−j} A^{k−j} / (k − j)!
= ∑_{j=0}^{∞} (t^j A^j / j!) ∑_{l=0}^{∞} s^l A^l / l!
= e^{tA} e^{sA} .
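The semigroup property can also be checked numerically. The sketch below is an illustration, not part of the solution: the matrix A, the times s, t and the truncation of the exponential series are arbitrary choices.

```python
# Numerical sketch: e^{(t+s)A} = e^{tA} e^{sA} for a made-up 2x2 matrix A,
# with a truncated exponential series standing in for e^{tA}.

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(2)) for j in range(2)] for i in range(2)]

def scale(X, c):
    return [[c * v for v in row] for row in X]

def expm(X, terms=40):           # truncated exponential series
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = scale(matmul(term, X), 1.0 / k)   # term = X^k / k!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[1.0, 2.0], [3.0, 4.0]]     # arbitrary example matrix
s, t = 0.3, 0.5

lhs = expm(scale(A, s + t))
rhs = matmul(expm(scale(A, s)), expm(scale(A, t)))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)   # ~ 0 up to rounding
```

With 40 series terms the truncation error is far below rounding level for these parameters, so the two sides agree to machine precision.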

∥e^{tA} − id∥ = ∥ ∑_{j=1}^{∞} t^j A^j / j! ∥ = t ∥ ∑_{j=1}^{∞} t^{j−1} A^j / j! ∥

and, as in the first calculation, we see that the series converges absolutely. Letting

t → 0 shows strong continuity, even continuity in the operator norm.


(Strong continuity only requires that lim_{t→0} ∣e^{tA} v − v∣ = 0 for every vector v.

Since

∣etA v − v∣ ⩽ ∥etA − id ∥ ⋅ ∣v∣

strong continuity is implied by uniform continuity. One can show that the generator

of a norm-continuous semigroup is already a bounded operator, see e.g. Pazy.)

(b) We have

e^{tA} − e^{sA} = ∑_{j=0}^{∞} (t^j A^j / j! − s^j A^j / j!) = ∑_{j=1}^{∞} (t^j − s^j) A^j / j! ,

hence

(e^{tA} − e^{sA}) / (t − s) = ∑_{j=1}^{∞} ((t^j − s^j)/(t − s)) A^j / j! ÐÐ→ ∑_{j=1}^{∞} j t^{j−1} A^j / j!  as s → t,

and

∑_{j=1}^{∞} j t^{j−1} A^j / j! = A ∑_{j=1}^{∞} t^{j−1} A^{j−1} / (j − 1)! = A e^{tA} .

A similar calculation, pulling out A to the back, yields that the sum is also etA A.

(c) Assume first that AB = BA. Repeated applications of this rule show Aj B k = B k Aj

for all j, k ⩾ 0. Thus,

e^{tA} e^{tB} = ∑_{j=0}^{∞} ∑_{k=0}^{∞} (t^j A^j / j!) (t^k B^k / k!) = ∑_{j,k=0}^{∞} t^{j+k} A^j B^k / (j! k!) = ∑_{j,k=0}^{∞} t^{j+k} B^k A^j / (k! j!) = e^{tB} e^{tA} .

For the converse observe that e^{tA} e^{tB} = e^{tB} e^{tA} implies (e^{tA} − id)(e^{tB} − id) = (e^{tB} − id)(e^{tA} − id), hence

lim_{t→0} ((e^{tA} − id)/t) ((e^{tB} − id)/t) = lim_{t→0} ((e^{tB} − id)/t) ((e^{tA} − id)/t)

and this proves AB = BA.

Alternative solution for the converse: If s = j/n and t = k/n for some common de-

nominator n, we get from etA etB = etB etA that

e^{tA} e^{sB} = (e^{A/n})^k (e^{B/n})^j = (e^{B/n})^j (e^{A/n})^k = e^{sB} e^{tA} ,

since e^{A/n} and e^{B/n} commute. By strong continuity this identity extends to all s, t ⩾ 0, and so

A e^{sB} = lim_{t→0} ((e^{tA} − id)/t) e^{sB} = e^{sB} lim_{t→0} ((e^{tA} − id)/t) = e^{sB} A

and,

AB = A lim_{s→0} (e^{sB} − id)/s = lim_{s→0} ((e^{sB} − id)/s) A = BA.


(d) We have

e^{A/k} = id + (1/k) A + ρk  where  k² ρk = ∑_{j=2}^{∞} A^j / (j! k^{j−2}) .

Note that k 2 ρk is bounded. Do the same for B (with the remainder term ρ′k ) and

multiply these expansions to get

e^{A/k} e^{B/k} = id + (1/k) A + (1/k) B + σk

with a remainder σk such that k² σk is bounded. In particular, for sufficiently large k,

∥(1/k) A + (1/k) B + σk ∥ < 1,

so that the logarithm series applies:

log(e^{A/k} e^{B/k}) = (1/k) A + (1/k) B + σk + σ′k ,  i.e.  k log(e^{A/k} e^{B/k}) = A + B + τk

with τk → 0 as k → ∞. By the continuity of the matrix exponential,

e^{A+B} = lim_{k→∞} e^{k log(e^{A/k} e^{B/k})} = lim_{k→∞} (e^{log(e^{A/k} e^{B/k})})^k = lim_{k→∞} (e^{A/k} e^{B/k})^k .

Alternatively, set Sk ∶= e^{(A+B)/k} and Tk ∶= e^{A/k} e^{B/k} . A telescoping sum gives

S_k^k − T_k^k = ∑_{j=0}^{k−1} S_k^j (S_k − T_k ) T_k^{k−1−j} .

∥S_k^k − T_k^k ∥ ⩽ ∑_{j=0}^{k−1} ∥S_k^j (S_k − T_k ) T_k^{k−1−j} ∥ ⩽ ∑_{j=0}^{k−1} ∥S_k^j ∥ ⋅ ∥S_k − T_k ∥ ⋅ ∥T_k^{k−1−j} ∥ ⩽ k ∥S_k − T_k ∥ ⋅ e^{∥A∥+∥B∥} .

Observe that

∥S_k − T_k ∥ = ∥ ∑_{j=0}^{∞} (A + B)^j / (k^j j!) − ∑_{j=0}^{∞} ∑_{l=0}^{∞} A^j B^l / (k^{j+l} j! l!) ∥ ⩽ C/k²

with a constant C depending only on ∥A∥ and ∥B∥. This yields Skk − Tkk → 0.
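The limit e^{A+B} = lim_k (e^{A/k} e^{B/k})^k, the Lie-Trotter product formula just proved, can also be illustrated numerically. The non-commuting matrices A, B, the value k = 500 and the series truncation below are arbitrary choices for this sketch; it is not part of the original solution.

```python
import math

# Lie-Trotter sketch: (e^{A/k} e^{B/k})^k -> e^{A+B} for non-commuting A, B.

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(2)) for j in range(2)] for i in range(2)]

def scale(X, c):
    return [[c * v for v in row] for row in X]

def expm(X, terms=40):           # truncated exponential series
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = scale(matmul(term, X), 1.0 / k)   # term = X^k / k!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
k = 500

step = matmul(expm(scale(A, 1.0 / k)), expm(scale(B, 1.0 / k)))
trotter = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(k):
    trotter = matmul(trotter, step)

target = expm([[0.0, 1.0], [1.0, 0.0]])   # e^{A+B} = [[cosh 1, sinh 1], [sinh 1, cosh 1]]
err = max(abs(trotter[i][j] - target[i][j]) for i in range(2) for j in range(2))
print(err)   # O(1/k), consistent with the bound k * ||S_k - T_k|| * e^{||A||+||B||}
```

The observed error decays like 1/k, matching the estimate ∥S_k − T_k∥ ⩽ C/k² multiplied by the factor k from the telescoping sum.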


Problem 7.10 (Solution) (a) Let 0 < s < t and assume throughout that h ∈ R is such that

t − s − h > 0. We have

Pt−(s+h) Ts+h − Pt−s Ts = Pt−(s+h) Ts+h − Pt−(s+h) Ts + Pt−(s+h) Ts − Pt−s Ts

= Pt−(s+h) (Ts+h − Ts ) + (Pt−(s+h) − Pt−s )Ts

= (Pt−(s+h) − Pt−s )(Ts+h − Ts ) + Pt−s (Ts+h − Ts ) + (Pt−(s+h) − Pt−s )Ts .

(1/h) (Pt−(s+h) Ts+h u − Pt−s Ts u)
= (Pt−(s+h) − Pt−s ) (Ts+h u − Ts u)/h + Pt−s (Ts+h u − Ts u)/h + ((Pt−(s+h) − Pt−s )/h) Ts u
= I + II + III.

Letting h → 0 gives

II → Pt−s BTs and III → −Pt−s ATs .

Ts+h u − Ts u

I = (Pt−(s+h) − Pt−s ) ( − Ts Bu) + (Pt−(s+h) − Pt−s )Ts Bu = I1 + I2 .

h

By the strong continuity of the semigroup (Pt )t , we see that I2 → 0 as h → 0.

Furthermore, by contractivity,

∥I1 ∥ ⩽ (∥Pt−(s+h) ∥ + ∥Pt−s ∥) ⋅ ∥ (Ts+h u − Ts u)/h − Ts Bu ∥ ⩽ 2 ∥ (Ts+h u − Ts u)/h − Ts Bu ∥ → 0

since u ∈ D(B).

(b) In general, no. The problem is the semigroup property (unless Tt and Ps commute

for all s, t ⩾ 0):

Ut Us = Tt Pt Ts Ps ≠ Tt Ts Pt Ps = Tt+s Pt+s = Ut+s .

It is interesting to note (and helpful for the proof of (c)) that Ut is an operator on

C∞ :

Pt Tt

Ut ∶ C∞ ÐÐÐÐ→ C∞ ÐÐÐÐ→ C∞

∥Ut f − Us f ∥ = ∥Tt Pt f − Ts Pt f + Ts Pt f − Ts Ps f ∥

⩽ ∥(Tt − Ts )Pt f ∥ + ∥Ts (Pt − Ps )f ∥

⩽ ∥(Tt − Ts )Pt f ∥ + ∥(Pt − Ps )f ∥

63

R.L. Schilling, L. Partzsch: Brownian Motion

n

(c) Set Ut,n ∶= (Tt/n Pt/n ) .

therefore, Tt/n Pt/n ∶ C∞ → C∞ as well as Ut,n .

We have ∥Ut,n f ∥ = ∥Tt/n Pt/n ⋯Tt/n Pt/n f ∥ ⩽ ∏nj=1 ∥Tt/n ∥∥Pt/n ∥∥f ∥ ⩽ ∥f ∥. So, by the

continuity of the norm

n n

show that Ut,n is strongly continuous. Let X, Y be contractions in C∞ . Then we get

= X n−1 (X − Y ) + (X n−1 − Y n−1 )Y

By iteration, we get

n−1

∥X n f − Y n f ∥ ⩽ ∑ ∥(X − Y )Y k f ∥.

k=0

Take Y = Tt/n Pt/n , X = Ts/n Ps/n where n is fixed. Then letting s → t shows the

strong continuity of each t ↦ Ut,n .

Semigroup property: Let s, t ∈ Q and write s = j/m and t = k/m for the same m.

Then we take n = l(j + k) and get

n l(j+k)

(T s+t P s+t ) = (T 1 P 1 )

n n lm lm

lj lk

= (T 1 P 1 ) (T 1 P 1 )

lm lm lm lm

lj lk

= (T j P j ) (T k P k )

ljm ljm lkm lkm

lj lk

= (T ljs P ljs ) (T t P t )

lk lk

For arbitrary s, t the semigroup property follows by approximation and the strong

continuity of Ut : let Q ∋ sn → s and Q ∋ tn → t. Then, by the contraction property,

⩽ ∥Ut f − Utn f ∥ + ∥(Us − Usn )(Utn − Ut )f ∥ + ∥(Us − Usn )Ut f ∥

⩽ ∥Ut f − Utn f ∥ + 2∥(Utn − Ut )f ∥ + ∥(Us − Usn )Ut f ∥

and the last expression tends to 0. The limit limn Usn +tn u = Us+t u is obvious.

64

Solution Manual. Last update June 12, 2017

Generator: Let us begin with a heuristic argument (by ? and ?? indicate the steps

which are questionable!). By the chain rule

d d

∣ Ut g = ∣ lim(Tt/n Pt/n )n g

dt t=0 dt t=0 n

d

= lim ∣ (Tt/n Pt/n )n g

?

n dt t=0

n−1

= lim [n(Tt/n Pt/n ) (Tt/n n1 BPt/n + Tt/n n1 APt/n )g∣ ]

??

n t=0

= Bg + Ag.

So it is sensible to assume that D(A)∩D(B) is not empty. For the rigorous argument

we have to justify the steps marked by question marks.

ds Ts Ps f exists and is Ts Af + BPs f for f ∈ D(A) ∩ D(B).

This follows similar to (a) since we have for s, h > 0

= (Ts+h − Ts )(Ps+h − Ps )f + Ts (Ps+h − Ps )f + (Ts+h − Ts )Ps f.

Divide by h. Then the first term converges to 0 as h → 0, while the other two terms

tend to Ts Af and BPs f , respectively.

theorem from calculus, e.g. Rudin [9, Theorem 7.17].

verges for some t0 > 0. If (fn′ )n converges [locally] uniformly, then (fn )n converges

[locally] uniformly to a differentiable function f and we have f ′ = limn fn′ .

This theorem holds for functions with values in any Banach space space and, there-

fore, we can apply it to the situation at hand: Fix g ∈ D(A) ∩ D(B); we know

that fn (t) ∶= Ut,n g converges (even locally uniformly) and, because of ?? , that

fn′ (t) = (Tt/n Pt/n )n−1 (Tt/n A + BPt/n )g.

Since limn (Tt/n Pt/n )n u converges locally uniformly, so does limn (Tt/n Pt/n )n−1 u; more-

over, by the strong continuity, Tt/n A + BPt/n ) → (A + B)g locally uniformly for

g ∈ D(A) ∩ D(B). Therefore, the assumptions of the theorem are satisfied and we

may interchange the limits in the calculation above.

Problem 7.11 (Solution) The idea is to show that A = − 21 ∆ is closed when defined on C2∞ (R).

Since C2∞ (R) ⊂ D(A) and since (A, D(A)) is the smallest closed extension, we are done.

So let (un )n ⊂ C2∞ (R) be a sequence such that un → u uniformly and (Aun )n is a C∞

Cauchy sequence. Since C∞ (R) is complete, we can assume that u′′n → 2g uniformly for

some g ∈ C∞ (Rd ). The aim is to show that u ∈ C2∞ .

65

R.L. Schilling, L. Partzsch: Brownian Motion

x x y

un (x) − un (0) − xu′n (0) = ∫ (u′n (y) − u′n (0)) dy = ∫ ∫ u′′n (z) dz.

0 0 0

x y x y

un (x) − un (0) − xu′n (0) = ∫ ∫ u′′n (z) dz → ∫ ∫ 2g(z) dz.

0 0 0 0

Since un (x) → u(x) and un (0) → u(0), we conclude that u′n (0) → c converges.

(b) Recall the following theorem from calculus, e.g. Rudin [9, Theorem 7.17].

verges for some t0 > 0. If (fn′ )n converges uniformly, then (fn )n converges uniformly

to a differentiable function f and we have f ′ = limn fn′ .

If we apply this with fn′ = u′′n → 2g and fn (0) = u′n (0) → c, we get that u′n (x)−u′n (0) →

x

∫0 2g(z) dt.

Let us determine the constant c′ ∶= limn u′n (0). Since u′n converges uniformly, the

limit as n → ∞ is in C∞ , and so we get

x

− lim u′n (0)) = lim lim (u′n (x) − u′n (0)) = lim ∫ 2g(z) dz

n→∞ x→−∞ n→∞ x→−∞ 0

i.e. c′ = ∫−∞ g(z) dz. We conclude that u′n (x) → ∫−∞ g(z) dt uniformly.

0 x

x y

(c) Again by the Theorem quoted in (b) we get un (x) − un (0) → ∫0 ∫−∞ 2g(z) dz uni-

0 y

formly, and with the same argument as in (b) we get un (0) = ∫−∞ ∫−∞ 2g(z) dz.

Problem 7.12 (Solution) By definition, (for all α > 0 and formally but justifiable via monotone

convergence also for α = 0)

∞

Uα 1C (x) = ∫ e−αt Pt 1C (x) dt

0

∞

=∫ e−αt E 1C (Bt + x) dt

0

∞

= E∫ e−αt 1C−x (Bt ) dt.

0

This is the ‘discounted’ (with ‘interest rate’ α) total amount of time a Brownian motion

spends in the set C − x.

Problem 7.13 (Solution) First formula: We use induction. The induction start with n = 0 is

clearly correct. Let us assume that the formula holds for some n and we do the induction

step n ↝ n + 1. We have for β ≠ α

dn dn

dn+1 dαn Uα f (x) − dβ n Uβ f (x)

Uα f (x) = lim

dαn+1 β→α β−α

n!(−1)n Uαn+1 f (x) − n!(−1)n Uβn+1 f (x)

= lim

β→α β−α

66

Solution Manual. Last update June 12, 2017

= n!(−1)n lim

β→α β−α

Using the identity an+1 − bn+1 = (a − b) ∑nj=0 an−j bj we get, since the resolvents commute,

= ∑ U U f (x) = −U α β ∑ Uα Uβ f (x)

U n−j j

β−α β − α j=0 α β

j=0

In the last line we used the resolvent identity. Now we can let β → α to get

β→α n

ÐÐ→ −Uα Uα ∑ Uαn−j Uαj f (x) = −(n + 1)Uαn+2 f (x).

j=0

n

n

∂ n (f g) = ∑ ( )∂ j f ∂ n−j g

j=0 j

n n

∂ n (αUα f (x)) = ( )α∂ n Uα f (x) + ( )∂ n−1 Uα f (x)

0 1

= αn!(−1)n Uαn+1 f (x) + n(n − 1)!(−1)n−1 Uαn f (x)

= n!(−1)n+1 (id −αUα )Uαn f (x).

Problem 7.14 (Solution) (a) Let f ⩾ 0 be a Borel function. Then we get by monotone con-

vergence

∞ ∞

U f (x) = lim Uα f (x) = lim ∫ e−αt Pt f (x) dt = ∫ Pt f (x) dt.

α→0 α→0 0 0

∞

N f (x) = U f (x) = ∫ Pt f (x) dt

0

follows for all measurable f if N f ± , U f ± are finite.

∞

g(x) = ∫ gt (x) dt

0

∞

= ∫ (2πt)−d/2 exp(−∣x∣2 /(2t)) dt

0

d/2

s=∣x∣2 /(2t)

∞

−d/2 2s ∣x∣2

= ∫ (2π) ( 2) e−s ds

dt=−∣x∣2 /(2s2 ) ds 0 ∣x∣ 2s2

∞

= ∣x∣2−d (2π)−d/2 2d/2−1 ∫ sd/2−2 e−s ds

0

= ∣x∣2−d π −d/2 21 Γ( d2 − 1)

∣x∣2−d Γ( d−2 )

= 2

2 π d/2

67

R.L. Schilling, L. Partzsch: Brownian Motion

∣x∣2−d d−2

2 Γ( 2

d−2

)

= d−2

2 π d/2 2

∣x∣2−d Γ( d2 )

= .

π d/2 (d − 2)

Problem 7.15 (Solution) (a) The process (t, Bt ) starts at (0, B0 ) = 0, and if we start at (s, x)

we consider the process (s + t, x + Bt ) = (s, x) + (t, Bt ). Let f ∈ Bb ([0, ∞) × R).

Since the motion in t is deterministic, we can use the probability space (Ω, A, P = P)

generated by the Brownian motion (Bt )t⩾0 . Then

gence that

(σ,ξ)→(s,x) (σ,ξ)→(s,x)

=E lim f (σ + t, ξ + Bt )

(σ,ξ)→(s,x)

= E f (s + t, x + Bt )

= Tt f (s, x)

which shows that Tt preserves f ∈ Cb ([0, ∞) × R). In a similar way we see that

∣(σ,ξ)∣→∞ ∣(σ,ξ)∣→∞

Tt is a semigroup: Let f ∈ C∞ ([0, ∞)×R). Then, by the independence and stationary

increments property of Brownian motion,

= E f (s + t + τ, x + (Bt+τ − Bt ) + Bt )

= E E(t,Bt ) f (s + τ, x + (Bt+τ − Bt ))

= E E(t,Bt ) f (s + τ, x + (Bτ )

= E Tτ (s + t, x + Bt )

= Tt Tτ (s, x).

that for every > 0 there is some δ > 0 such that

68

Solution Manual. Last update June 12, 2017

∣Bt ∣⩽δ

1

⩽ + 2∥f ∥∞ E(Bt2 )

δ2

t

= + 2∥f ∥∞ .

δ2

Since the estimate is uniform in (s, x), this proves strong continuity.

Markov property: this is trivial.

(b) The transition semigroup is

2 /(2t)

dy.

R

∞

Uα f (s, x) = ∫ e−tα Tt f (s, x) dt

0

Tt f (s, x) − f (s, x) E f (s + t, x + Bt ) − f (s, x)

=

t t

E f (s + t, x + Bt ) − f (s, x + Bt ) E f (s, x + Bt ) − f (s, x)

= +

t t

t→0

ÐÐ→ E ∂t f (s, x + B0 ) + 12 ∆x f (s, x)

= (∂t + 12 ∆x )f (s, x).

Note that, in view of Theorem 7.19, pointwise convergence is enough (provided the

pointwise limit is a C∞ -function).

(s,x)

(c) We get for u ∈ C1,2

∞ that under P

t

Mtu ∶= u(s + t, x + Bt ) − u(s, x) − ∫ (∂r + 21 ∆x )u(s + r, x + Br ) dr

0

is an Ft -martingale. This is the same assertion as in Theorem 5.6 (up to the choice

of u which is restricted here as we need it in the domain of the generator...).

Problem 7.16 (Solution) Let u ∈ D(A) and σ a stopping time with E σ < ∞. Use optional

stopping (Theorem A.18 in combination with remark A.21) to see that

σ∧t

u

Mσ∧t ∶= u(Xσ∧t ) − u(x) − ∫ Au(Xr ) dr

0

σ∧t

Ex u(Xσ∧t ) − u(x) = Ex (∫ Au(Xr ) dr) .

0

Since u, Au ∈ C∞ we see

σ∧t σ∧t

∣Ex (∫ Au(Xr ) dr)∣ ⩽ Ex (∫ ∥Au∥∞ dr) ⩽ ∥Au∥∞ ⋅ Ex σ < ∞,

0 0

i.e. we can use dominated convergence and let t → ∞. Because of the right-continuity of

the paths of a Feller process we get Dynkin’s formula (7.21).

69

R.L. Schilling, L. Partzsch: Brownian Motion

P(Xt ∈ F ∀t ∈ R+ ) ⩽ P(Xq ∈ F ∀q ∈ Q+ ).

Xq ∈ F ∀q ∈ Q+ Ô⇒ Xt = +lim Xq ∈ F ∀t ⩾ 0

Q ∋q→t

70

8 The PDE Connection

2 /2t

for the heat kernel. Since convolutions

are smoothing, one finds easily that P f = g ⋆ f ∈ C∞

∞ ⊂ D(∆). (There is a more general

concept behind it: whenever the semigroup is analytic—i.e. z ↦ Pz has an extension to,

say, a sector in the complex plane and it is holomorphic there—one has that Tt maps the

underlying Banach space into the domain of the generator; cf. e.g. Pazy [6, pp. 60–63].)

Lemma 8.1 def semi-

group

uniformly

u (t, x) ÐÐÐÐÐ→ Pt f (x).

→0

Moreover,

∂ 1

u (t, ⋅) = ∆x Pt P f

∂t 2

1 uniformly 1

= P ( ∆x Pt f ) ÐÐÐÐÐ→ ∆x Pt f.

2 →0 2

´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶

∈C∞

Since both the sequence and the differentiated sequence converge uniformly, we can inter-

change differentiation and the limit, cf. [9, Theorem 7.17, p. 152], and we get

∂ ∂ 1

u(t, x) = lim u (t, x) = ∆x u(t, x)

∂t →0 ∂t 2

and

u (0, ⋅) = P f ÐÐ→ f = u(0, ⋅)

→0

and we get a solution for the initial value f . The proof of the uniqueness in Lemma 8.1

stays valid.

t

dt ∫0 f (Bs ) ds = f (Bt ) so that f (Bt ) = 0. We

d

Problem 8.2 (Solution) By differentiation we get

can assume that f is positive and bounded, otherwise we could consider f ± (Bt ) ∧ c for

some constant c > 0. Now E f (Bt ) = 0 and we conclude from this that f = 0.

t t

∣χn (Bt )e−α ∫0 gn (Bs ) ds ∣ ⩽ ∣e−α ∫0 ds ∣ = e−αt ⩽ 1

71

R.L. Schilling, L. Partzsch: Brownian Motion

t t

lim χn (Bt )e−α ∫0 gn (Bs ) ds = 1R (Bt )e−α ∫0 1(0,∞) (Bs ) ds

n→∞

∞ t

vn,λ (x) = ∫ e−λt E (χn (Bt )e−α ∫0 gn (Bs ) ds ) dt ÐÐÐ→ vλ (x).

0 n→∞

′′

vn,λ (x) = 2(αχn (x) + λ)vn,λ (x) − gn (x), (*)

′′

and since the expression on the right has a limit as n → ∞, we get that limn→∞ vn,λ (x)

exists.

x x

′ ′

vn,λ (x) − vn,λ (0) = 2 ∫ (αχn (y) + λ)vn,λ (y) dy − ∫ gn (y) dy, (**)

0 0

′ ′

and, again by dominated convergence, we conclude that limn→∞ [vn,λ (x) − vn,λ (0)]

exists. In addition, the right-hand side is uniformly bounded (for all ∣x∣ ⩽ R):

x x R R

∣2 ∫ (αχn (y) + λ)vn,λ (y) dy − ∫ gn (y) dy∣ ⩽ 2 ∫ (α + λ) dy + ∫ dy

0 0 0 0

⩽ 2(α + λ + 1)R.

x

′ ′ ′

vn,λ (x) − vn,λ (0) − x vn,λ (0) = ∫ [vn,λ (z) − vn,λ (0)] dz.

0

Since the expression under the integral converges boundedly and since limn→∞ vn,λ (x)

′ ′

exists, we conclude that limn→∞ vn,λ (0) exists. Consequently, limn→∞ vn,λ (x) exists.

n→∞

vλ′ (x) ′

= lim vn,λ (x)

n→∞

vλ′′ (x) ′′

= lim vn,λ (x)

n→∞

t

Problem 8.4 (Solution) We have to show that v(t, x) ∶= ∫0 Ps g(x) ds is the unique solution of

the initial value problem (8.7) with g = g(x) satisfying ∣v(t, x)∣ ⩽ C t.

Existence: The linear growth bound is obvious from ∣Ps g(x)∣ ⩽ ∥Ps g∥∞ ⩽ ∥g∥∞ < ∞. The

rest follows from the hint if we take A = 1

2 ∆ and Lemma 7.10.

72

Solution Manual. Last update June 12, 2017

∞

Uniqueness: We proceed as in the proof of Lemma 8.1. Set vλ (x) ∶= ∫0 e−λt v(t, x) dt.

This integral is, for λ > 0, convergent and it is the Laplace transform of v(⋅, x). Under the

Laplace transform the initial value problem (8.7) with g = g(x) becomes

and this problem has a unique solution, cf. Proposition 7.13 f). Since the Laplace trans-

form is invertible, we see that v is unique.

with two integration constants c, d ∈ R. The boundary conditions u(0) = a and u(1) = b

show that

d=a and c = b − a

so that

u(x) = (b − a)x + a.

On the other hand, by Corollary 5.11 (Wald’s identities), Brownian motion started in

x ∈ (0, 1) has the probability to exit (at the exit time τ ) the interval (0, 1) in the following

way:

Px (Bτ = 1) = x and Px (Bτ = 0) = 1 − x.

Therefore, if f ∶ {0, 1} → R is a function on the boundary of the interval (0, 1) such that

f (0) = a and f (1) = b,then

This means that u(x) = Ex f (Bτ ), a result which we will see later in Section 8.4 in much

greater generality.

Problem 8.6 (Solution) The key is to show that all points in the open and bounded, hence

relatively compact, set D are non-absorbing. Thus the closure of D has an neighbourhood,

say V ⊃ D̄ such that E τDc ⩽ E τV c . Let us show that E τV c < ∞.

Since D is bounded, there is some R > 0 such that B(0, R) ⊃ D̄. Pick some test function

χ = χR such that χ∣Bc (0,R) ≡ 0 and χ ∈ Cc∞ (Rd ). Pick further some function u ∈ C2 (Rd )

such that ∆u > 0 in B(0, 2R). Here are two possibilities to get such a function:

d

u(x) = ∣x∣2 = ∑ x2j Ô⇒ 1

2 ∆u(x) = 1

j=1

x1

F (x) = F (x1 ) ∶= ∫ f (z1 ) dz1

0

73

R.L. Schilling, L. Partzsch: Brownian Motion

and

x1 x1 y1

U (x) = U (x1 ) ∶= ∫ F (y1 ) dy1 = ∫ ∫ f (z1 ) dz1 .

0 0 0

Clearly, 1

2 ∆U (x) = 1

2 ∂x21 U (x1 ) = f (x1 ), and we can arrange things by picking the correct

f.

Problem: neither u nor U will be in D(∆) (unless you are so lucky as in the proof of

Lemma 8.8 to pick instantly the right function).

∆(χ ⋅ U ) = χ ⋅ ∆U + U ⋅ ∆χ + 2⟨∇χ, ∇U ⟩

∆(χ ⋅ U )∣B(0,R) = ∆U ∣B(0,R) .

The rest of the proof follows now either as in Lemma 7.24 or Lemma 8.8 (both employ,

anyway, the same argument based on Dynkin’s formula).

Problem 8.7 (Solution) We are following the hint. Let L = ∑dj,k=1 ajk (x) ∂j ∂k + ∑dj=1 bj (x) ∂j .

Then

j,k j

j,k j

j,k

If ∣x∣ < R and χ∣B(0,R) = 1, then L(uχ)(x) = Lu(x). Set u(x) = e−x1 /γr . Then only the

2 2

2 2

2x1 − x1 2 2x2 x

− 1

∂1 u(x) = − 2 e γr2 and ∂12 u(x) = 2 ( 21 − 1) e γr2

γr γr γr

2 2

2a11 (x) 2x21 − γr

x1

2b1 (x)x1 − γr

x1

−Lu(x) = (1 − ) e 2

+ e 2

γr2 γr2 γr2

2

2a11 (x) 2x21 2b1 (x)x1 − γr

x1

=[ (1 − ) + ] e 2

γr2 γr2 γr2

2a0 2 2b0 − γr r2

⩾ [ 2 (1 − ) − ]e 2

γr γ γr

This shows that the drift b1 (x) can make the expression in the bracket negative!

Let us modify the Ansatz. Observe that for f (x) = f (x1 ) we have

74

Solution Manual. Last update June 12, 2017

!!

Lf (x) ⩾ a0 ∂12 f (x) − b0 ∂1 f (x) > 0.

This means that ∂12 f /∂1 f > b0 /a0 seems to be natural and a reasonable Ansatz would be

x1 2b0

y

f (x) = ∫ e a0 dy.

0

Then

2b0

x1 2b0 2b 0 x

∂1 f (x) = e a0 and ∂12 f (x) = e a0 1

a0

and we get

2b0 2b 0 x 2b0

x

Lf (x) = a11 (x) e a0 1 − b1 (x)e a0 1

a0

2b0 2b 0 x 2b0

x

⩾ a0 e a0 1 − b0 e a0 1

a0

2b0

x1

⩾ (2b0 − b0 ) e a0 > 0.

Problem 8.8 (Solution) Assume that B0 = 0. Any other starting point can be reduced to this

situation by shifting Brownian motion to B0 = 0. The LIL shows that a Brownian motion

satisfies

B(t) B(t)

−1 = lim √ < lim √ =1

t→0

t→0 2t log log 1t 2t log log 1t

√

i.e. B(t) oscillates for t → 0 between the curves ± 2t log log 1t . Since a Brownian motion

has continuous sample paths, this means that it has to cross the level 0 infinitely often.

Problem 8.9 (Solution) The idea is to proceed as in Example 8.12 e) where Zaremba’s needle

plays the role a truncated flat cone in dimension d = 2 (but in dimension d ⩾ 3 it has

too small dimension). The set-up is as follows: without loss of generality we take x0 = 0

(otherwise we shift Brownian motion) and we assume that the cone lies in the hyperplane

{x ∈ Rd ∶ x1 = 0} (otherwise we rotate things).

Let B(t) = (b(t), β(t)), t ⩾ 0, be a BMd where b(t) is a BM1 and β(t) is a (d − 1)-

dimensional Brownian motion. Since B is a BMd , we know that the coordinate processes

b = (b(t))t⩾0 and β = (β(t))t⩾0 are independent processes. Set σn = inf{t > 1/n ∶ b(t) = 0}.

Since 0 ∈ R is regular for {0} ⊂ R, see Example 8.12 e), we get that limn→∞ σn = τ{0} = 0

almost surely with respect to P0 . Since β á b, the random variable β(σn ) is rotationally

symmetric (see, e.g., the solution to Problem 8.10).

Let C be a flat (i.e. in the hyperplane {x ∈ Rd ∶ x1 = 0}) cone such that some truncation

C ′ of it lies in Dc . By rotational symmetry, we get

opening angle of C

P0 (β(σn ) ∈ C) = γ = .

full angle

75

R.L. Schilling, L. Partzsch: Brownian Motion

P0 (β(σn ) ∈ C ′ ) = γ.

n→∞ n→∞

Problem 8.10 (Solution) Proving that the random variable β(σn ) is absolutely continuous with

respect to Lebesgue measure is relatively easy: note that, because of the independence of

b and β, hence σn and β,

d 0 d

− P (β(σn ) ⩾ x) = − ∫ P0 (βt ⩾ x) P(σn ∈ dt)

dx dx R

d

= ∫ − P0 (βt ⩾ x) P(σn ∈ dt)

R dx

1

e−x /(2t) P(σn ∈ dt)

2

=∫ √

R 2πt

∞ 1

e−x /(2t) P(σn ∈ dt).

2

=∫ √

1/n 2πt

(observe, for the last equality, that σn takes values in [1/n, ∞).) Since the integrand

is bounded (even as t → 0), the interchange of integration and differentiation is clearly

satisfied.

Proving that the random variable β(σn ) is rotationally symmetric is easy: note that,

because of the independence of b and β, hence σn and β, we have for all Borel sets

A ⊂ Rd−1

∞

P0 (β(σn ) ∈ A) = ∫ P0 (βt ∈ A) P(σn ∈ dt)

1/n

R

∞ 1 −∣x∣2 /(2t)

=∫ e P(σn ∈ dt) dx.

1/n (2πt)(d−1)/2

(here x ∈ Rd−1 ).

76

Solution Manual. Last update June 12, 2017

It is a bit more difficult to work out the exact shape of the density. Let us first determine

the distribution of σn . Clearly,

y=b(1/n)

b∼−b

y=b(1/n)

b∼−b

−y=b(1/n)

y=b(1/n)

(6.12)

y=b(1/n)

∞

=4∫ P0 (b(t − 1/n) < y) P0 (b(1/n) ∈ dy)

0

2 1 ∞ y

√ ∫ ∫ e−z /2(t−1/n) dz e−ny /2 dy

2 2

= √

π t− 1 1 0 0

n n

√

change of variables: ζ = z/ t − 1

n

√ √

2 n ∞ 1

y/ t− n

e−ζ /2 dζ e−ny /2 dy.

2 2

= ∫ ∫

π 0 0

√ √

d 0 2 n d ∞ 1

y/ t− n

e−ζ /2 dζ eny /2 dy

2 2

− P (σn > t) = − ∫ ∫

dt √ π dt 0 0

n ∞

1 −3/2 −y 2 /2(t− n1

) −ny 2 /2

= (t − n ) ∫ ye e dy

√π 0

∞ 2

n −3/2 −y nt

= (t − n1 ) ∫ y e 2 t−1/n dy

π 0

√ ∞

n −3/2 t − n1 −y

2

nt

= (t − n1 ) [−e 2 t−1/n ]

π nt y=0

√

n −1/2 1

= (t − n1 )

π nt

77

R.L. Schilling, L. Partzsch: Brownian Motion

1 1

= √ .

π t nt − 1

Now we proceed with the d-dimensional case. We have for all x ∈ Rd−1

∞ 1

e−∣x∣ /(2t) P(σn ∈ dt) dx

2

β(σn ) ∼ ∫ (d−1)/2

1/n (2πt)

1 ∞ 1

e−∣x∣ /(2t) dt

2

= (d+1)/2 (d−1)/2 ∫ √

π 2 1/n t (d+1)/2 nt − 1

n(d−1)/2 ∞ 1

e−n∣x∣ /(2s) ds

2

= (d+1)/2 (d−1)/2 ∫ √

π 2 1 s (d+1)/2 s−1

(d−1)/2

(∗) n

= (d+1)/2 (d−1)/2 B( d2 , 12 ) 1F1 ( d2 , d+1

2 ; − 2 ∣x∣

n 2

)

π 2

where B(⋅, ⋅) is Euler’s Beta-function and 1F1 is the degenerate hypergeometric function,

cf. Gradshteyn–Ryzhik [4, Section 9.20, 9.21] and, for (∗), [4, Entry 3.471.5, p. 340].

78

9 The Variation of Brownian Paths

Problem 9.1 (Solution) Let > 0 and Π = {t0 = 0 < t1 < . . . < tm = 1} be any partition of [0, 1].

As a continuous function on a compact space, f is uniformly continuous, i.e. there exists

δ > 0 such that ∣f (x) − f (y)∣ <

2m for all x, y ∈ [0, 1] with ∣x − y∣ < δ. Pick n0 ∈ N so that

′ ∣Π∣

∣Πn ∣ < δ ∶= δ ∧ 2 for all n ⩾ n0 .

∣Π∣

Now, the balls B(tj , δ ′ ) for 0 ⩽ j ⩽ m are disjoint as δ ′ ⩽ 2 . Therefore the sets B(tj , δ ′ ) ∩

Πn0 for 0 ⩽ j ⩽ m are also disjoint, and non-empty as ∣Πn0 ∣ < δ ′ . In particular, there exists

a subpartition Π′ = {q0 = 0 < q1 < . . . < qm = 1} of Πn0 such that ∣tj − qj ∣ < δ ′ ⩽ δ for all

0 ⩽ j ⩽ m. This implies

RRR m m RRR m

RRR ∣f (t ) − f (t )∣ − RR

RRR ∑ j j−1 ∑ ∣f (q j ) − f (q j−1 RRR ⩽ ∑ ∣∣f (tj ) − f (tj−1 )∣ − ∣f (qj ) − f (qj−1 )∣∣

)∣

RRj=1 j=1 RRR j=1

m

⩽ ∑ ∣f (tj ) − f (qj ) + f (tj−1 ) − f (qj−1 )∣

j=1

m

⩽ 2 ⋅ ∑ ∣f (tj ) − f (qj )∣

j=0

⩽ .

Because adding points to a partition increases the corresponding variation sum, we have

′ Πn0

S1Π (f, 1) ⩽ S1Π (f, 1) + ⩽ S1 (f, 1) + ⩽ lim S1Πn (f, 1) + ⩽ VAR1 (f, 1) +

n→∞

n→∞

let’s discontinuous function f = 1Q∩[0,1] and Πn a refining sequence of partitions made up

of rational points.

Problem 9.2 (Solution) Note that the problem is straightforward if ∥x∥ stands for the maxi-

mum norm: ∥x∥ = max1⩽j⩽d ∣xj ∣.

Remember that all norms on Rd are equivalent. One quick way of showing this is the

following: Denote by ej with j ∈ {1, . . . , d} the usual basis of Rd . Then

1⩽j⩽d 1⩽j⩽d 1⩽j⩽d

79

R.L. Schilling, L. Partzsch: Brownian Motion

for every x = ∑dj=1 xj ej in Rd using the triangle inequality and the positive homogeneity

of norms. In particular, x ↦ ∥x∥ is a continuous mapping from Rd equipped with the

supremum-norm ∥ ⋅ ∥∞ to R, since

holds for every x, y in Rd . Hence, the extreme value theorem claims that x ↦ ∥x∥ attains

its minimum on the compact set {x ∈ Rd ∶ ∥x∥∞ = 1}. Finally, this implies A ∶= min{∥x∥ ∶

∥x∥∞ = 1} > 0 and hence

x

∥x∥ = ∥ ∥ ⋅ ∥x∥∞ ⩾ A ⋅ ∥x∥∞

∥x∥∞

for every x ≠ 0 in Rd as required.

to determine the finiteness of variations. In particular, VARp (f ; t) < ∞ if, and only if,

⎧

⎪ ⎫

⎪

⎪ ⎪

sup ⎨ ∑ ∣g(tj ) − g(tj−1 )∣p ∨ ∣h(tj ) − h(tj−1 )∣p ∶ Π finite partition of [0, 1]⎬

⎪

⎪ ⎪

⎪

⎩tj−1 ,tj ∈Π ⎭

is finite. But this term is bounded from below by VARp (g; t) ∨ VARp (h; t) and from above

by VARp (g; t) + VARp (h; t), which proves the desired result.

Problem 9.3 (Solution) Let p > 0, > 0 and Π = {t0 = 0 < t1 < . . . < tn = 1} a partition

of [0, 1]. Since f is continuous and the rational numbers are dense in R, there exist

0 < q1 < . . . < qn−1 < 1 such that qj is rational and ∣f (tj ) − f (qj )∣ < n−1/p 1/p for every

1 ⩽ j ⩽ n − 1. In particular, Π′ = {q0 = 0 < q1 < . . . < qn = 1} is a rational partition of [0, 1]

such that ∑nj=0 ∣f (tj ) − f (qj )∣p ⩽ .

φ(ta + (1 − t)0) ⩾ tφ(a) + (1 − t)φ(0) ⩾ tφ(a) for all a ⩾ 0 and t ∈ [0, 1]. Hence

a b

φ(a + b) = φ(a + b) + φ(a + b) ⩽ φ(a) + φ(b)

a+b a+b

for all a, b ⩾ 0, i.e. φ is subadditive. In particular, we have ∣x + y∣p ⩽ (∣x∣ + ∣y∣)p ⩽ ∣x∣p + ∣y∣p

and thus

For p > 1, on the other hand, and x, y ∈ R such that ∣x∣ < ∣y∣ we find

∣y∣

∣∣y∣p − ∣x∣p ∣ = ∫ ptp−1 dt ⩽ p ⋅ (∣x∣ ∨ ∣y∣)p−1 ⋅ (∣y∣ − ∣x∣) ⩽ p ⋅ (∣x∣ ∨ ∣y∣)p−1 ⋅ ∣y − x∣

∣x∣

and hence

80

Solution Manual. Last update June 12, 2017

Let p > 0 and > 0. For every partition Π = {t0 = 0 < t1 < . . . < tn = 1} there exists a

rational partition Π′ = {q0 = 0 < q1 < . . . < qn = 1} such that ∑nj=0 ∣f (tj ) − f (qj )∣1∧p ⩽ and

hence

RRR n RR

p RRR

n

RRR ∣f (t ) − f (t )∣p −

RRR ∑ j j−1 ∑ ∣f (qj ) − f (qj−1 )∣ RRR

RRj=1 j=1 RRR

n

⩽ ∑ ∣∣f (tj ) − f (tj−1 )∣p − ∣f (qj ) − f (qj−1 )∣p ∣

j=1

(*) n

⩽ max {1, (p ⋅ 2p−1 ⋅ ∥f ∥p−1

∞ )} ⋅ ∑ ∣f (tj ) − f (qj ) + f (tj−1 ) − f (qj−1 )∣

1∧p

(**)

j=1

n

⩽ C ⋅ ∑ ∣f (tj ) − f (qj )∣1∧p

j=0

⩽C ⋅

p (f ; 1) ⩽ VARp (f ; 1) where

⎧

⎪ ⎫

⎪

⎪ ′ ⎪

p (f ; 1)

VARQ ∶= sup ⎨ ∑ ∣f (qj ) − f (qj−1 )∣ ∶ Π finite, rational partition of [0, 1]⎬

p

⎪

⎪ ⎪

⎪

⎩qj−1 ,qj ∈Π′ ⎭

and hence the desired result as tends to zero.

Alternative Approach: Note that (ξ1 , . . . , ξn ) ↦ ∑nj=1 ∣f (ξj ) − f (ξj−1 )∣p is a continuous

map since it is the finite sum and composition of continuous maps, and that the rational

numbers are dense in R.

⎧

⎪ ⎫

⎪

⎪n ⎪

VAR○p (f ; t) ∶= sup ⎨ ∑ ∣f (sj ) − f (sj−1 )∣ ∶ n ∈ N and 0 < s0 < s1 < . . . < sn < t⎬

p

⎪

⎪ ⎪

⎪

⎩j=1 ⎭

because there are less (non-negative) summands in the definition of VAR○p (f ; t).

Let > 0 and Π = {t0 = 0 < t1 < . . . < tn = t} a partition of [0, t]. Set sj = tj for 1 ⩽ j ⩽ n − 1

and note that ξ ↦ ∣f (ξ0 ) − f (ξ)∣p is a continuous map for every ξ0 ∈ [0, t] since it is the

composition of continuous maps. Hence we can pick s0 ∈ (t0 , t1 ) and sn ∈ (tn−1 , tn ) with

ε

∣∣f (s1 ) − f (t0 )∣p − ∣f (s1 ) − f (s0 )∣p ∣ <

2

ε

∣∣f (tn ) − f (tn−1 )∣ − ∣f (sn ) − f (tn−1 )∣ ∣ <

p p

2

and so that 0 < s0 < s1 < . . . < sn < t. This implies

n n−1

∑ ∣f (tj ) − f (tj−1 )∣ = ∣f (s1 ) − f (t0 )∣ + ∑ ∣f (sj ) − f (sj−1 )∣ + ∣f (tn ) − f (sn−1 )∣

p p p p

j=1 j=2

n

ε ε

⩽ + ∑ ∣f (sj ) − f (sj−1 )∣p +

2 j=1 2

81

R.L. Schilling, L. Partzsch: Brownian Motion

⩽ + VAR○p (f ; t)

and thus VARp (f ; t) ⩽ + VAR○p (f ; t) since the partition Π = {t0 = 0 < t1 < . . . < tn = t} was

arbitrarily chosen. Consequently, VARp (f ; t) ⩽ VAR○p (f ; t) as tends to zero, as required.

The same argument shows that varp (f ; t) does not change its value (if it exists).

n

2

E Yn = ∑ E (B ( nk ) − B ( k−1

n

))

k=1

n

= ∑ V (B ( nk ) − B ( k−1

n

))

k=1

n

= ∑ ( nk − k−1

n

)

k=1

n

=∑ 1

n

k=1

=1

n

2

V Yn = ∑ V (B ( nk ) − B ( k−1

n

))

k=1

n

4 2 2

= ∑ E (B ( nk ) − B ( k−1

n

)) − (E (B ( nk ) − B ( k−1

n

)) )

k=1

n

k−1 2 k−1 2

= ∑ 3 ⋅ ( nk − n

) − ( nk − n

)

k=1

n

=2⋅ ∑ 1

n2

k=1

=2⋅ 1

n

n ) ∼ N(0, n ) are iid random variables. By a

1

2

standard result the sum of squares n ∑nk=1 (B( nk ) − B( k−1

n )) has a χn -distribution,

2

1

2−n/2 s 2 −1 e− 2 1[0,∞) (s).

n s

Γ( 2 )

n

and we get

n

−n/2 n

(ns) 2 −1 e− 2 1[0,∞) (s).

2 n ns

∑ (B( nk ) − B( k−1

n )) ∼ 2

k=1 Γ( 2 )

n

Here is the calculation: (in case you do not know this standard result...): If X ∼

N(0, 1) and x > 0, we have

√

√ 1 x t2

P(X ⩽ x) = P(X ⩽ x) = √ ∫ √ exp (− ) dt

2

2π − x 2

82

Solution Manual. Last update June 12, 2017

Continuing, with the change of variable t = √s,

P(X² ⩽ x) = (2/√(2π)) ∫_0^{√x} exp(−t²/2) dt = (1/√(2π)) ∫_0^x exp(−s/2) ⋅ s^{−1/2} ds,

so that differentiation in x gives the density

f_{X²}(s) = 1_{(0,∞)}(s) ⋅ (1/√(2π)) ⋅ exp(−s/2) ⋅ s^{−1/2}.

Let X1, X2, . . . be independent and identically distributed random variables with X1 ∼ N(0, 1). We want to prove by induction that for n ⩾ 1

f_{X1²+...+Xn²}(s) = Cn ⋅ 1_{(0,∞)}(s) ⋅ exp(−s/2) ⋅ s^{n/2−1}

with some normalizing constants Cn > 0. Assume that this is true for 1, . . . , n. Since X²_{n+1} is independent of X1² + . . . + Xn² and distributed like X1², we know that the density of the sum is a convolution. This leads to

f_{X1²+...+X²_{n+1}}(s) = ∫_{−∞}^∞ f_{X1²+...+Xn²}(t) ⋅ f_{X²_{n+1}}(s − t) dt
= Cn ⋅ C1 ⋅ ∫_0^s exp(−t/2) ⋅ t^{n/2−1} ⋅ exp(−(s−t)/2) ⋅ (s − t)^{−1/2} dt
= Cn ⋅ C1 ⋅ exp(−s/2) ⋅ ∫_0^s t^{n/2−1} ⋅ (s − t)^{−1/2} dt
= Cn ⋅ C1 ⋅ exp(−s/2) ⋅ s^{n/2−1} ⋅ s^{−1/2} ⋅ ∫_0^s (t/s)^{n/2−1} ⋅ (1 − t/s)^{−1/2} dt
= Cn ⋅ C1 ⋅ exp(−s/2) ⋅ s^{(n+1)/2−1} ⋅ ∫_0^1 x^{n/2−1} ⋅ (1 − x)^{−1/2} dx
= C_{n+1} ⋅ exp(−s/2) ⋅ s^{(n+1)/2−1}

using the change of variable x = t/s. Since probability densities integrate to one, we find

1 = Cn ⋅ ∫_0^∞ exp(−s/2) ⋅ s^{n/2−1} ds = Cn ⋅ 2^{n/2} ∫_0^∞ exp(−t) ⋅ t^{n/2−1} dt = Cn ⋅ 2^{n/2} ⋅ Γ(n/2)

and thus

f_{X1²+...+Xn²}(s) = (2^{n/2} ⋅ Γ(n/2))^{−1} ⋅ 1_{(0,∞)}(s) ⋅ e^{−s/2} ⋅ s^{n/2−1}.

To finish, remember that B(k/n) − B((k−1)/n) ∼ N(0, 1/n) ∼ n^{−1/2} ⋅ Xk for 1 ⩽ k ⩽ n. Hence

f_{∑(B(k/n)−B((k−1)/n))²}(s) = n ⋅ (2^{n/2} ⋅ Γ(n/2))^{−1} ⋅ 1_{(0,∞)}(s) ⋅ e^{−ns/2} ⋅ (ns)^{n/2−1}.
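A quick numerical sanity check of this density (a sketch, not part of the original solution; the integration grid and cutoff are ad-hoc choices) is to verify that it has total mass 1 and mean 1 = E Yn:

```python
import math

def density_Yn(s, n):
    # density of sum_k (B(k/n) - B((k-1)/n))^2, i.e. a chi^2_n variable scaled by 1/n
    if s <= 0:
        return 0.0
    return n * (n * s) ** (n / 2 - 1) * math.exp(-n * s / 2) / (2 ** (n / 2) * math.gamma(n / 2))

def integrate(f, a, b, steps=200000):
    # simple midpoint rule, accurate enough for this smooth integrand
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

n = 6
mass = integrate(lambda s: density_Yn(s, n), 0.0, 30.0)
mean = integrate(lambda s: s * density_Yn(s, n), 0.0, 30.0)
```

The truncation at s = 30 is harmless for n = 6 since the tail of a χ²₆ variable beyond 180 is negligible.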

c) For X ∼ N(0, 1) and ξ < 1/2 we find

E(e^{ξX²}) = (2π)^{−1/2} ∫_{−∞}^∞ e^{ξx²} e^{−x²/2} dx = (2/√(2π)) ∫_0^∞ e^{−(1−2ξ)x²/2} dx = (1 − 2ξ)^{−1/2} ⋅ (2/√(2π)) ∫_0^∞ e^{−y²/2} dy = (1 − 2ξ)^{−1/2}

using the change of variable y² = (1 − 2ξ)x². Since the moment generating function ξ ↦ (1 − 2ξ)^{−1/2} has a unique analytic extension to an open strip around the imaginary axis, the characteristic function is of the form

E(e^{iξX²}) = (1 − 2iξ)^{−1/2}.

Since B(k/n) − B((k−1)/n) ∼ N(0, 1/n), we obtain

E(e^{iξYn}) = ∏_{k=1}^n E(e^{iξ(B_{k/n} − B_{(k−1)/n})²}) = ∏_{k=1}^n E(e^{i(ξ/n)X²}) = (1 − 2i(ξ/n))^{−n/2}

and hence

lim_{n→∞} φn(ξ) = lim_{n→∞} (1 − 2iξ/n)^{−n/2} = ( lim_{n→∞} (1 − 2iξ/n)^n )^{−1/2} = (e^{−2iξ})^{−1/2} = e^{iξ}.
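The convergence of the characteristic functions can be observed numerically (a sketch; the principal branch of the complex power is used, which is harmless here since 1 − 2iξ/n stays close to 1 for large n):

```python
import cmath

def phi_n(xi, n):
    # characteristic function of Y_n: (1 - 2i xi/n)^(-n/2), principal branch
    return (1 - 2j * xi / n) ** (-n / 2)

xi = 1.7
limit = cmath.exp(1j * xi)          # the claimed limit e^{i xi}
errs = [abs(phi_n(xi, n) - limit) for n in (10, 100, 1000, 10000)]
```

The errors shrink roughly like 1/n, consistent with the limit computed above.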

d) We have shown in a) that E((Yn − 1)²) = V(Yn) = 2/n, which tends to zero as n → ∞; hence Yn → 1 in L²(P).

a) Let Z ∼ N(0, 1) and x > 0. Since y/x ⩾ 1 for all y ⩾ x,

√(2π) ⋅ P(Z > x) = ∫_x^∞ e^{−y²/2} dy ⩽ ∫_x^∞ (y/x) ⋅ e^{−y²/2} dy = (1/x) ⋅ [−e^{−y²/2}]_{y=x}^∞ = (1/x) ⋅ e^{−x²/2}

Ô⇒ P(Z > x) < (1/√(2π)) ⋅ e^{−x²/2}/x.

On the other hand, since x²/y² ⩽ 1 for y ⩾ x, an integration by parts yields

√(2π) ⋅ P(Z > x) = ∫_x^∞ e^{−y²/2} dy
⩾ ∫_x^∞ (x²/y²) ⋅ e^{−y²/2} dy
= x² ⋅ ([−(1/y) ⋅ e^{−y²/2}]_{y=x}^∞ − ∫_x^∞ e^{−y²/2} dy)
= x² ⋅ ((1/x) ⋅ e^{−x²/2} − √(2π) ⋅ P(Z > x))

Ô⇒ (1 + x²) ⋅ √(2π) ⋅ P(Z > x) ⩾ x ⋅ e^{−x²/2}

Ô⇒ P(Z > x) ⩾ (1/√(2π)) ⋅ x e^{−x²/2}/(x² + 1).
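Both bounds can be checked against the exact Gaussian tail P(Z > x) = erfc(x/√2)/2 (a sketch using only the standard library; the sample points are arbitrary):

```python
import math

def gauss_tail(x):
    # P(Z > x) for Z ~ N(0, 1), exact up to floating-point accuracy
    return 0.5 * math.erfc(x / math.sqrt(2))

def upper(x):
    # upper bound: exp(-x^2/2) / (sqrt(2*pi) * x)
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * x)

def lower(x):
    # lower bound: x * exp(-x^2/2) / (sqrt(2*pi) * (x^2 + 1))
    return x * math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * (x * x + 1))

checks = [(lower(x), gauss_tail(x), upper(x)) for x in (0.5, 1.0, 2.0, 4.0, 8.0)]
```

For large x the two bounds squeeze the tail very tightly, which is exactly what the Borel–Cantelli argument below exploits.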

b) We have

P(lim sup_{n→∞} ⋃_{k=1}^{2ⁿ} A_{k,n}) = 1 − P(lim inf_{n→∞} ⋂_{k=1}^{2ⁿ} A^c_{k,n}) ⩾ 1 − lim inf_{n→∞} P(⋂_{k=1}^{2ⁿ} A^c_{k,n}) = 1 − lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A^c_{k,n})

(using the independence of the events A_{1,n}, . . . , A_{2ⁿ,n}) and hence it suffices to prove lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A^c_{k,n}) = 0. Using 1 − x ⩽ e^{−x} we get

∏_{k=1}^{2ⁿ} P(A^c_{k,n}) = (1 − P(A_{1,n}))^{2ⁿ} ⩽ e^{−2ⁿ ⋅ P(A_{1,n})}

and a) implies

2ⁿ ⋅ P(A_{1,n}) = 2ⁿ ⋅ P(√(2^{−n}) ⋅ ∣Z∣ > c √(n 2^{−n})) = 2^{n+1} ⋅ P(Z > c√n) ⩾ (2^{n+1}/√(2π)) ⋅ (c√n/(c²n + 1)) ⋅ e^{−c²n/2}.

Now, (c²n)/(c²n + 1) → 1 as n → ∞ and thus there exists some n0 ∈ ℕ such that

c²n/(c²n + 1) ⩾ 1/2 ⇐⇒ c√n/(c²n + 1) ⩾ 1/(2c√n),

hence

2ⁿ ⋅ P(A_{1,n}) ⩾ (2ⁿ/√(2π)) ⋅ (1/(c√n)) ⋅ e^{−c²n/2} = (1/(√(2π) c√n)) ⋅ e^{(log(2) − c²/2)n}

for n ⩾ n0. Since log(2) − c²/2 > 0 if, and only if, c < √(2 log 2), we have 2ⁿ ⋅ P(A_{1,n}) → ∞ and thus lim inf_{n→∞} ∏_{k=1}^{2ⁿ} P(A^c_{k,n}) = 0 if c < √(2 log 2).

c) With c < √(2 log 2) we deduce

1 = P(lim sup_{n→∞} ⋃_{k=1}^{2ⁿ} A_{k,n})
  = P({ω : for infinitely many n there is some k ⩽ 2ⁿ such that ∣B(k2^{−n})(ω) − B((k−1)2^{−n})(ω)∣ > c √(n 2^{−n})})
  = P({ω : for infinitely many n there is some k ⩽ 2ⁿ such that ∣B(k2^{−n})(ω) − B((k−1)2^{−n})(ω)∣/√(2^{−n}) > c√n}).

Set

Φ(λ) = E(e^{λ(X²−1)}) = e^{−λ} E(e^{λX²}) = e^{−λ}(1 − 2λ)^{−1/2} for all 0 < λ < 1/2.

For λ < λ0 < 1/2 we can estimate

∣(X² − 1)² e^{λ(X²−1)}∣ ⩽ ∣X² − 1∣² ⋅ e^{λX²} ⩽ 2(X⁴ + 1) ⋅ e^{λ0 X²}.

Since λ < λ0 < 1/2 there is some ε > 0 such that λ < λ0 < λ0 + ε < 1/2. Thus,

∣(X² − 1)² e^{λ(X²−1)}∣ ⩽ 2(X⁴ + 1)e^{−εX²} ⋅ e^{(λ0+ε)X²};

the first factor is bounded and e^{(λ0+ε)X²} is integrable since λ0 + ε < 1/2, so the dominated convergence theorem applies.

Setting

L(n) := (∑_{j=1}^n aj)⁴ + 3 ⋅ ∑_{j=1}^n aj⁴ − 4 ⋅ (∑_{j=1}^n aj³) ⋅ (∑_{k=1}^n ak) + 2 ∑_{j=1}^n ∑_{k=j+1}^n aj² ak²

R(n) := 2 ⋅ ∑_{j=1}^n aj² (∑_{k=1, k≠j}^n ak)² + 4 ⋅ (∑_{j=1}^n ∑_{k=j+1}^n aj ak)²

we deduce:

a) For n = 2,

L(2) = (a1 + a2)⁴ + 3(a1⁴ + a2⁴) − 4(a1³ + a2³)(a1 + a2) + 2a1²a2²
     = (a1⁴ + 4a1³a2 + 6a1²a2² + 4a1a2³ + a2⁴) + 3(a1⁴ + a2⁴) − 4(a1⁴ + a2⁴ + a1³a2 + a2³a1) + 2a1²a2²
     = 6a1²a2² + 2a1²a2²
     = 2(a1²a2² + a2²a1²) + 4a1²a2²
     = R(2).

b) Induction step: Assume that we have already shown that the statement is true for n. Then

L(n + 1) = (∑_{j=1}^n aj + a_{n+1})⁴ + 3(∑_{j=1}^n aj⁴ + a_{n+1}⁴) − 4(∑_{j=1}^n aj³ + a_{n+1}³)(∑_{k=1}^n ak + a_{n+1}) + 2(∑_{j=1}^n ∑_{k=j+1}^n aj²ak² + ∑_{j=1}^n aj² a_{n+1}²)
= L(n) + 4(∑_{j=1}^n aj)³ a_{n+1} + 6(∑_{j=1}^n aj)² a_{n+1}² + 4(∑_{j=1}^n aj) a_{n+1}³ + a_{n+1}⁴ + 3a_{n+1}⁴ − 4a_{n+1}⁴ − 4(∑_{j=1}^n aj³) a_{n+1} − 4a_{n+1}³ (∑_{j=1}^n aj) + 2 ∑_{j=1}^n aj² a_{n+1}²
= L(n) + 4a_{n+1} (∑_{j=1}^n aj)³ + 6a_{n+1}² (∑_{j=1}^n aj)² − 4a_{n+1} (∑_{j=1}^n aj³) + 2a_{n+1}² ∑_{j=1}^n aj²

and

R(n + 1) = 2 ⋅ ∑_{j=1}^{n+1} aj² (∑_{k=1, k≠j}^{n+1} ak)² + 4 ⋅ (∑_{j=1}^{n+1} ∑_{k=j+1}^{n+1} aj ak)²
= R(n) + 2a_{n+1}² (∑_{k=1}^n ak)² + 2 ∑_{j=1}^n aj² (a_{n+1}² + 2a_{n+1} ∑_{k=1, k≠j}^n ak) + 4(∑_{j=1}^n aj a_{n+1})² + 4 ⋅ 2 ⋅ (∑_{j=1}^n ∑_{k=j+1}^n aj ak)(∑_{i=1}^n ai a_{n+1})
= R(n) + 2a_{n+1}² (∑_{k=1}^n ak)² + 2a_{n+1}² ∑_{j=1}^n aj² + 4a_{n+1} ∑_{j=1}^n ∑_{k=1, k≠j}^n aj² ak + 4a_{n+1}² (∑_{j=1}^n aj)² + 4a_{n+1} ⋅ (∑_{j=1}^n ∑_{k=1, k≠j}^n aj ak)(∑_{i=1}^n ai)
= R(n) + 6a_{n+1}² (∑_{k=1}^n ak)² + 2a_{n+1}² ∑_{j=1}^n aj² + 4a_{n+1} ∑_{j=1}^n ∑_{k=1, k≠j}^n aj² ak + 4a_{n+1} ⋅ (∑_{j=1}^n ∑_{k=1, k≠j}^n aj ak)(∑_{i=1}^n ai).

Comparing both expansions and using the induction hypothesis L(n) = R(n), we see that L(n + 1) = R(n + 1) if, and only if, a_{n+1} = 0 or

4a_{n+1} (∑_{j=1}^n aj)³ − 4a_{n+1} (∑_{j=1}^n aj³) = 4a_{n+1} ∑_{j=1}^n ∑_{k≠j} aj² ak + 4a_{n+1} ⋅ (∑_{j=1}^n ∑_{k≠j} aj ak)(∑_{i=1}^n ai),

i.e.

(∑_{j=1}^n aj)³ − (∑_{j=1}^n aj³) = ∑_{j=1}^n ∑_{k≠j} aj² ak + (∑_{j=1}^n ∑_{k≠j} aj ak)(∑_{i=1}^n ai).

Since

(∑_{j=1}^n ∑_{k≠j} aj ak)(∑_{i=1}^n ai) = ∑_{i=1}^n ∑_{j=1}^n ∑_{k≠j} ai aj ak = ∑_{i≠j, j≠k, k≠i} ai aj ak + ∑_{j=1}^n ∑_{k≠j} aj² ak + ∑_{k=1}^n ∑_{j≠k} ak² aj,

this is equivalent to

(∑_{j=1}^n aj)³ = (∑_{j=1}^n aj³) + 3 ∑_{j=1}^n ∑_{k≠j} aj² ak + ∑_{i≠j, j≠k, k≠i} ai aj ak,

which is just the multinomial expansion of (a1 + . . . + an)³. Hence L(n + 1) = R(n + 1).
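The identity L(n) = R(n) behind this induction is easy to confirm numerically for concrete coefficient vectors (a sketch; the random coefficients and seed are arbitrary choices):

```python
import random

def L(a):
    # L(n) = (sum a)^4 + 3 sum a^4 - 4 (sum a^3)(sum a) + 2 sum_{j<k} a_j^2 a_k^2
    n, S = len(a), sum(a)
    return (S ** 4 + 3 * sum(x ** 4 for x in a)
            - 4 * sum(x ** 3 for x in a) * S
            + 2 * sum(a[j] ** 2 * a[k] ** 2 for j in range(n) for k in range(j + 1, n)))

def R(a):
    # R(n) = 2 sum_j a_j^2 (sum_{k != j} a_k)^2 + 4 (sum_{j<k} a_j a_k)^2
    n, S = len(a), sum(a)
    return (2 * sum(a[j] ** 2 * (S - a[j]) ** 2 for j in range(n))
            + 4 * sum(a[j] * a[k] for j in range(n) for k in range(j + 1, n)) ** 2)

rng = random.Random(7)
vectors = [[rng.uniform(-1, 1) for _ in range(n)] for n in (2, 3, 5, 8)]
```

For every vector, L and R agree up to floating-point rounding, as the induction shows they must.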


Problem 9.9 (Solution) We prove this statement inductively. The statement is obviously true for n = 1. Assume that it holds for n. Then, since ∣aj∣, ∣bj∣ ⩽ 1,

∣∏_{j=1}^{n+1} aj − ∏_{j=1}^{n+1} bj∣ = ∣∏_{j=1}^{n+1} aj − (∏_{j=1}^n bj) ⋅ a_{n+1} + (∏_{j=1}^n bj) ⋅ a_{n+1} − ∏_{j=1}^{n+1} bj∣
⩽ ∣∏_{j=1}^n aj − ∏_{j=1}^n bj∣ ⋅ ∣a_{n+1}∣ + ∣∏_{j=1}^n bj∣ ⋅ ∣a_{n+1} − b_{n+1}∣
⩽ ∑_{j=1}^n ∣aj − bj∣ + ∣a_{n+1} − b_{n+1}∣
= ∑_{j=1}^{n+1} ∣aj − bj∣

as required.
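A quick numerical check of this inequality for numbers of modulus at most 1 (a sketch; complex entries with a fixed seed, chosen arbitrarily):

```python
import cmath
import math
import random

def unit_disc(rng):
    # a random complex number of modulus <= 1
    r, t = rng.random(), rng.uniform(0, 2 * math.pi)
    return r * cmath.exp(1j * t)

def prod(xs):
    p = 1
    for x in xs:
        p *= x
    return p

def check(n, rng):
    a = [unit_disc(rng) for _ in range(n)]
    b = [unit_disc(rng) for _ in range(n)]
    # |prod a - prod b| <= sum |a_j - b_j|  (tiny slack for rounding)
    return abs(prod(a) - prod(b)) <= sum(abs(x - y) for x, y in zip(a, b)) + 1e-12

rng = random.Random(42)
results = [check(n, rng) for n in (1, 2, 5, 20)]
```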


10 Regularity of Brownian Paths

Problem 10.1 (Solution) a) By the stationarity of the increments,

P(N_{t+h} − N_t = k) = P(N_h = k) = ((λh)^k/k!) e^{−λh}.

This shows that we have for any α > 0

E(∣N_{t+h} − N_t∣^α) = ∑_{k=0}^∞ k^α ((λh)^k/k!) e^{−λh}
= λh e^{−λh} + ∑_{k=2}^∞ k^α ((λh)^k/k!) e^{−λh}
= λh e^{−λh} + λh ∑_{k=2}^∞ k^α ((λh)^{k−1}/k!) e^{−λh}
= λh e^{−λh} + o(h)

and, thus,

lim_{h→0} E(∣N_{t+h} − N_t∣^α)/h = λ,

which means that (10.1) cannot hold for any α > 0 and β > 0.

b) Part a) shows also E(∣N_{t+h} − N_t∣^α) ⩽ c h, i.e. condition (10.1) holds for α > 0 and β = 0. Note that β > 0 is needed for the convergence of the dyadic series (with the power γ < β/α) in the proof of Theorem 10.1.

c) We have (taking λ = 1)

E(Nt) = ∑_{k=0}^∞ k (t^k/k!) e^{−t} = ∑_{k=1}^∞ k (t^k/k!) e^{−t} = t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t} = t ∑_{j=0}^∞ (t^j/j!) e^{−t} = t

and

E(Nt²) = ∑_{k=0}^∞ k² (t^k/k!) e^{−t} = ∑_{k=1}^∞ k² (t^k/k!) e^{−t} = t ∑_{k=1}^∞ k (t^{k−1}/(k−1)!) e^{−t}
= t ∑_{k=1}^∞ (k − 1) (t^{k−1}/(k−1)!) e^{−t} + t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t}
= t² ∑_{k=2}^∞ (t^{k−2}/(k−2)!) e^{−t} + t ∑_{k=1}^∞ (t^{k−1}/(k−1)!) e^{−t} = t² + t.

Thus,

E(Nt − t) = E Nt − t = 0
E((Nt − t)²) = E(Nt²) − 2t E Nt + t² = t

and, finally, if s ⩽ t,

E((Nt − t)(Ns − s)) = E((Nt − Ns − t + s)(Ns − s)) + E((Ns − s)²)
= E(Nt − Ns − t + s) ⋅ E(Ns − s) + s
= s = s ∧ t

by the independence of the increments.

Alternative Solution: One can show, as for a Brownian motion (Example 5.2 a)), that Nt − t is a martingale for the canonical filtration F^N_t = σ(Ns : s ⩽ t). The proof only uses stationary and independent increments. Thus, by the tower property, pull out and the martingale property,

E((Nt − t)(Ns − s)) = E((Ns − s) E(Nt − t ∣ F^N_s)) = E((Ns − s)²) = s = s ∧ t.
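The series manipulations above can be double-checked by truncating the Poisson series numerically (a sketch; λ = 1 as in the solution, truncation point chosen generously):

```python
import math

def poisson_moment(t, power, kmax=60):
    # E(N_t^power) for a Poisson(t) variable, series truncated at kmax
    return sum(k ** power * t ** k / math.factorial(k) * math.exp(-t)
               for k in range(kmax + 1))

t = 2.5
m1 = poisson_moment(t, 1)   # should equal t
m2 = poisson_moment(t, 2)   # should equal t^2 + t
```

From m1 and m2 one also recovers E((N_t − t)²) = m2 − 2t·m1 + t² = t, the variance computed above.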

For any p ⩾ 1 we have

max_{1⩽j⩽n} ∣xj∣^p ⩽ ∣x1∣^p + ⋯ + ∣xn∣^p = ∑_{j=1}^n ∣xj∣^p ⩽ ∑_{j=1}^n max_{1⩽k⩽n} ∣xk∣^p = n ⋅ max_{1⩽k⩽n} ∣xk∣^p.

Since max_{1⩽j⩽n} ∣xj∣^p = (max_{1⩽j⩽n} ∣xj∣)^p the claim follows (actually with n^{1/p}, which is smaller than n...).

For 0 < α ⩽ 1 we have the elementary inequality

(∣x∣ + ∣y∣)^α ⩽ ∣x∣^α + ∣y∣^α.

In the proof of Theorem 10.1 (page 154, line 1 from above and onwards) we get:

(sup_{x,y∈D, x≠y} ∣ξ(x) − ξ(y)∣/∣x − y∣^γ)^α = sup_{m⩾0} sup_{x,y∈D, 2^{−m−1}⩽∣x−y∣<2^{−m}} ∣ξ(x) − ξ(y)∣^α / 2^{−(m+1)γα}
⩽ 2^{(1+γ)α} sup_{m⩾0} ∑_{j⩾m} 2^{mγα} σj^α
⩽ 2^{(1+γ)α} ∑_{j=0}^∞ 2^{jγα} σj^α.

Taking expectations,

E[(sup_{x≠y, x,y∈D} ∣ξ(x) − ξ(y)∣/∣x − y∣^γ)^α] ⩽ 2^{(1+γ)α} ∑_{j=0}^∞ 2^{jγα} E[σj^α]
⩽ c 2^{(1+γ)α} ∑_{j=0}^∞ 2^{jγα} 3^n 2^{−jβ}
= c 2^{(1+γ)α} 3^n ∑_{j=0}^∞ 2^{j(γα−β)} < ∞.

The rest of the proof continues literally as on page 154, line 10 onwards.

Alternative Solution: use the subadditivity of Z ↦ E(∣Z∣^α) directly in the second part of the calculation, replacing ∥Z∥_{L^α} by E(∣Z∣^α).

Theorem. Let (Bt)_{t⩾0} be a BM¹. Then t ↦ Bt(ω) is for almost all ω ∈ Ω nowhere Hölder continuous of any order α > 1/2.

It is not clear if the set A_{n,α} is measurable. We will show that Ω ∖ A_{n,α} ⊂ N_{n,α} for a measurable null set N_{n,α}.

Assume that the function f is Hölder continuous of order α at the point t0 ∈ [0, n]. Then

∃ δ > 0 ∃ L > 0 ∀ t ∈ B(t0, δ) : ∣f(t) − f(t0)∣ ⩽ L ∣t − t0∣^α.

Since [0, n] is compact, we can use a covering argument to get a uniform Hölder constant. Fix ν ⩾ 3 such that 1 − να + ν/2 < 0, and consider for sufficiently large values of k ⩾ 1 the grid {j/k : j = 1, . . . , nk}. Then there exists a smallest index j = j(k) such that

t0 ⩽ j/k and j/k, . . . , (j + ν)/k ∈ B(t0, δ).

For i = j + 1, . . . , j + ν we get

∣f(i/k) − f((i−1)/k)∣ ⩽ ∣f(i/k) − f(t0)∣ + ∣f(t0) − f((i−1)/k)∣
⩽ L(∣i/k − t0∣^α + ∣(i−1)/k − t0∣^α)
⩽ L((ν+1)^α/k^α + ν^α/k^α) ⩽ 2L(ν+1)^α/k^α.

Setting

C^{L,ν,α}_m := ⋂_{k=m}^∞ ⋃_{j=1}^{kn} ⋂_{i=j+1}^{j+ν} {∣B(i/k) − B((i−1)/k)∣ ⩽ 2L(ν+1)^α/k^α}

we have

Ω ∖ A_{n,α} ⊂ ⋃_{L=1}^∞ ⋃_{m=1}^∞ C^{L,ν,α}_m.

Our assertion follows if we can show that P(C^{L,ν,α}_m) = 0 for all m, L ⩾ 1 and all rational α > 1/2. If k ⩾ m,

P(C^{L,ν,α}_m) ⩽ P(⋃_{j=1}^{kn} ⋂_{i=j+1}^{j+ν} {∣B(i/k) − B((i−1)/k)∣ ⩽ 2L(ν+1)^α/k^α})
⩽ ∑_{j=1}^{kn} P(⋂_{i=j+1}^{j+ν} {∣B(i/k) − B((i−1)/k)∣ ⩽ 2L(ν+1)^α/k^α})
= (B1), (B2) ∑_{j=1}^{kn} P({∣B(1/k)∣ ⩽ 2L(ν+1)^α/k^α})^ν
= kn ⋅ P({∣B(1/k)∣ ⩽ 2L(ν+1)^α/k^α})^ν
⩽ kn (c/k^{α−1/2})^ν = c^ν n k^{1−να+ν/2} ÐÐ→ 0 as k → ∞, since 1 − να + ν/2 < 0.

For the last estimate we use B(1/k) ∼ k^{−1/2} B(1), cf. 2.12, and therefore

P(∣B(1/k)∣ ⩽ x) = P(∣B(1)∣ ⩽ x√k) = (1/√(2π)) ∫_{−x√k}^{x√k} e^{−y²/2} dy ⩽ c x √k,

since the integrand is bounded by 1.

This proves that a Brownian path is almost surely nowhere Hölder continuous of a fixed order α > 1/2. Call the set where this holds Ωα. Then Ω0 := ⋂_{Q∋α>1/2} Ωα is a set with P(Ω0) = 1 and for all ω ∈ Ω0 we know that BM is nowhere Hölder continuous of any order α > 1/2.

The last conclusion uses the following simple remark. Let 0 < α < q < ∞. Then we have for f : [0, n] → ℝ and x, y ∈ [0, n] with ∣x − y∣ < 1 that ∣x − y∣^q ⩽ ∣x − y∣^α; hence Hölder continuity of order q at a point implies Hölder continuity of any smaller order α at that point.


Problem 10.5 (Solution) Fix ε > 0, fix a set Ω0 ⊂ Ω with P(Ω0) = 1 and h0 = h0(2, ω) such that (10.6) holds for all ω ∈ Ω0, i.e. for all h ⩽ h0 we have

sup_{0⩽t⩽1−h} ∣B(t + h, ω) − B(t, ω)∣ ⩽ 2 √(2h log(1/h)).

Pick a partition Π = {t0 = 0 < t1 < . . . < tn} of [0, 1] with mesh size h = max_j (tj − tj−1) ⩽ h0 and assume that h0/2 ⩽ h ⩽ h0. Then we get

∑_{j=1}^n ∣B(tj, ω) − B(tj−1, ω)∣^{2+2ε} ⩽ 2^{2+2ε} ⋅ 2^{1+ε} ∑_{j=1}^n ((tj − tj−1) log(1/(tj − tj−1)))^{1+ε} ⩽ cε ∑_{j=1}^n (tj − tj−1) = cε.

Thus

sup_{∣Π∣⩽h0} ∑_{j=1}^n ∣B(tj, ω) − B(tj−1, ω)∣^{2+2ε} ⩽ cε.

Since we have ∣x − y∣^p ⩽ 2^{p−1}(∣x − z∣^p + ∣z − y∣^p) and since we can refine any partition Π of [0, 1] in finitely many steps to a partition of mesh < h0, we get

VAR_{2+2ε}(B; 1) = sup_{Π⊂[0,1]} ∑_{j=1}^n ∣B(tj, ω) − B(tj−1, ω)∣^{2+2ε} < ∞

for all ω ∈ Ω0.


11 The Growth of Brownian Paths

11.1. Fix C > 2 and define An := {Mn > C √(n log n)}. By the reflection principle we find

P(An) = P(sup_{s⩽n} Bs > C √(n log n))
= 2 P(Bn > C √(n log n))
= 2 P(√n B1 > C √(n log n))   [scaling]
= 2 P(B1 > C √(log n))
⩽ (2/√(2π)) ⋅ (1/(C √(log n))) ⋅ exp(−(C²/2) log n)   [(11.1)]
= (2/√(2π)) ⋅ (1/(C √(log n))) ⋅ n^{−C²/2}.

Since C²/2 > 2, the series ∑_n P(An) converges and, by the Borel–Cantelli lemma, we see that

∃ ΩC ⊂ Ω, P(ΩC) = 1, ∀ ω ∈ ΩC ∃ n0(ω) ∀ n ⩾ n0(ω) : Mn(ω) ⩽ C √(n log n).

In particular,

∀ ω ∈ ΩC : lim sup_{n→∞} Mn/√(n log n) ⩽ C.

Since every t is in some interval [n − 1, n] and since t ↦ √(t log t) is increasing, we see that

Mt/√(t log t) ⩽ Mn/√((n−1) log(n−1)) = (Mn/√(n log n)) ⋅ (√(n log n)/√((n−1) log(n−1))),

where the last factor tends to 1 as n → ∞.

Remark: We can get the exceptional set in a uniform way: On the set Ω0 := ⋂_{Q∋C>2} ΩC we have P(Ω0) = 1 and

∀ ω ∈ Ω0 : lim sup_{n→∞} Mn/√(n log n) ⩽ 2.
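The Gaussian tail estimate used in (11.1) — 2P(B1 > x) ⩽ √(2/π) · x⁻¹ e^{−x²/2} — and the resulting n^{−C²/2} decay of P(An) can be checked numerically (a sketch; C and the sample points are arbitrary choices):

```python
import math

def two_tail(x):
    # 2 P(B_1 > x), exact via the complementary error function
    return math.erfc(x / math.sqrt(2))

def tail_bound(x):
    # sqrt(2/pi) * exp(-x^2/2) / x, the bound used in (11.1)
    return math.sqrt(2 / math.pi) * math.exp(-x * x / 2) / x

C = 2.5
terms = []
for n in (10, 100, 1000):
    x = C * math.sqrt(math.log(n))
    closed = math.sqrt(2 / math.pi) * n ** (-C * C / 2) / x   # same bound written via n^{-C^2/2}
    terms.append((two_tail(x), tail_bound(x), closed))
```

With C > 2 the bound decays like n^{−C²/2} with C²/2 > 2, which is why ∑ P(An) is summable.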

11.2. One should assume that ξ > 0. Since y ↦ exp(ξy) is monotone increasing, we see

P(sup_{s⩽t} (Bs − ½ξs) > x) = P(e^{sup_{s⩽t}(ξBs − ½ξ²s)} > e^{ξx}) ⩽ e^{−xξ} E e^{ξBt − ½ξ²t} = e^{−xξ}

by Doob's maximal inequality (A.13).

(Remark: we have shown (A.13) only for sup_{D∋s⩽t} M^ξ_s where D is a dense subset of [0, ∞). Since s ↦ M^ξ_s has continuous paths, it is easy to see that sup_{D∋s⩽t} M^ξ_s = sup_{s⩽t} M^ξ_s almost surely.)

Usage in step 1° of the Proof of Theorem 11.1: With the notation of the proof we set

t = qⁿ, ξ = q^{−n}(1 + ε) √(2qⁿ log log qⁿ) and x = ½ √(2qⁿ log log qⁿ).

Then

P(sup_{s⩽t} Bs > x + ½ξt) = P(sup_{s⩽qⁿ} Bs > ½ √(2qⁿ log log qⁿ) + ½(1 + ε) √(2qⁿ log log qⁿ))
= P(sup_{s⩽qⁿ} Bs > (1 + ε/2) √(2qⁿ log log qⁿ))
⩽ exp(−½ √(2qⁿ log log qⁿ) ⋅ q^{−n}(1 + ε) √(2qⁿ log log qⁿ))
= exp(−(1 + ε) log log qⁿ)
= 1/(log qⁿ)^{1+ε}
= 1/((log q)^{1+ε} n^{1+ε}).

11.3. Actually, the hint is not needed, the present proof can be adapted in an easier way. We perform the following changes at the beginning of page 166: Since every t > 1 is in some interval of the form [q^{n−1}, qⁿ] and since the function Λ(t) = √(2t log log t) is increasing for t > 3, we find for all t ⩾ q^{n−1} > 3

∣B(t)∣/√(2t log log t) ⩽ (sup_{s⩽qⁿ} ∣B(s)∣/√(2qⁿ log log qⁿ)) ⋅ (√(2qⁿ log log qⁿ)/√(2q^{n−1} log log q^{n−1})).

Therefore

lim sup_{t→∞} ∣B(t)∣/√(2t log log t) ⩽ (1 + ε) √q a.s.

Letting ε → 0 and q → 1 along countable sequences, we find the upper bound.

Remark: The interesting paper by Dupuis [3] shows LILs for processes (Xt)_{t⩾0} with stationary and independent increments. It is shown there that the important ingredients are estimates of the type P(Xt > x). Thus, if we know that P(Xt > x) ≍ P(sup_{s⩽t} Xs > x), we get a LIL for Xt if, and only if, we have a LIL for sup_{s⩽t} Xs.


a) By the LIL,

Bt/(b √(a + t)) = (Bt/√(2t log log t)) ⋅ (√(2t log log t)/(b √(a + t))),

where lim sup_{t→∞} of the first factor equals 1 and the second factor tends to ∞ as t → ∞. Hence

lim sup_{t→∞} Bt/(b √(a + t)) = ∞.

b) Let b ⩾ 1 and assume, to the contrary, that E τ < ∞. Then we can use the second Wald identity, cf. Theorem 5.10, and get

E(τ ∧ n) = E B²(τ ∧ n) ⩽ E(b²(a + τ ∧ n)).

Hence

(1 − b²) E(τ ∧ n) ⩽ ab² Ô⇒ E(τ ∧ n) ⩽ ab²/(1 − b²) Ô⇒ E τ ⩽ ab²/(1 − b²) < ∞   [monotone convergence].


12 Strassen’s Functional Law of the Iterated

Logarithm

The function w(t) = √t, 0 ⩽ t ⩽ 1, is a limit point of the family

Zs(t) = B(st)/√(2s log log s)

where t > 0 is fixed and s → ∞. Indeed, by Khintchine's LIL (cf. Theorem 11.1) we obtain

lim sup_{s→∞} B(st)/√(2st log log(st)) = 1   (almost surely P)

and so

lim sup_{s→∞} B(st)/√(2s log log(st)) = √t   (almost surely P),

which implies

lim sup_{s→∞} B(st)/√(2s log log s) = lim sup_{s→∞} (B(st)/√(2s log log(st))) ⋅ √(log log(st)/log log s) = √t,

since the last factor tends to 1 for s → ∞.

On the other hand, the function w(t) = √t cannot be a limit point of Zs(⋅) in C(o)[0, 1] for s → ∞. We prove this indirectly: Let sn = sn(ω) be a sequence such that limn→∞ sn = ∞ and

∥Zsn(⋅) − w(⋅)∥∞ ÐÐ→ 0 as n → ∞.

Then, for every ε > 0,

(√t − ε) ⋅ √(2sn log log sn) ⩽ B(sn t) ⩽ (√t + ε) ⋅ √(2sn log log sn)   (*)

holds for all sufficiently large n and every t ∈ [0, 1]. This, however, contradicts

(1 − ε) √(2tk log log(1/tk)) ⩽ B(tk) ⩽ (1 + ε) √(2tk log log(1/tk)),   (**)

for a sequence tk = tk(ω) → 0, k → ∞, cf. Corollary 11.2. Indeed: fix some n, then the right side of (*) is in contradiction with the left side of (**).

Remark: Note that

∫₀ᵗ w′(s)² ds = ¼ ∫₀ᵗ ds/s = +∞.


By the Cauchy–Schwarz inequality,

∣w(t)∣² = ∣∫₀ᵗ w′(s) ds∣² ⩽ ∫₀ᵗ w′(s)² ds ⋅ ∫₀ᵗ 1 ds ⩽ ∫₀¹ w′(s)² ds ⋅ t ⩽ t.

Problem 12.3 (Solution) Since u is absolutely continuous (w.r.t. Lebesgue measure), the derivative u′(t) exists for almost all t ∈ [0, 1].

Let t be a point where u′ exists and let (Πn)_{n⩾1} be a sequence of partitions of [0, 1] such that ∣Πn∣ → 0 as n → ∞. We denote the points in Πn by t^{(n)}_k. Clearly, there exists a sequence (t^{(n)}_{jn})_{n⩾1} such that t^{(n)}_{jn} ∈ Πn and t^{(n)}_{jn−1} ⩽ t ⩽ t^{(n)}_{jn} for all n ∈ ℕ and t^{(n)}_{jn} − t^{(n)}_{jn−1} → 0 as n → ∞. We obtain

fn(t) = [ (1/(t^{(n)}_{jn} − t^{(n)}_{jn−1})) ∫_{t^{(n)}_{jn−1}}^{t^{(n)}_{jn}} u′(s) ds ]²;

to simplify notation, we set tj := t^{(n)}_{jn} and tj−1 := t^{(n)}_{jn−1}, then

fn(t) = [ (1/(tj − tj−1)) ⋅ (u(tj) − u(tj−1)) ]²
= [ (1/(tj − tj−1)) ⋅ (u(tj) − u(t) + u(t) − u(tj−1)) ]²
= [ ((tj − t)/(tj − tj−1)) ⋅ (u(tj) − u(t))/(tj − t) + ((t − tj−1)/(tj − tj−1)) ⋅ (u(t) − u(tj−1))/(t − tj−1) ]²
ÐÐ→ [u′(t)]² as n → ∞,

since both difference quotients tend to u′(t) and the weights lie in [0, 1] and sum to 1.

Problem 12.4 (Solution) We use the notation of Chapter 4: Ω = C(o)[0, 1], w = ω, A = B(C(o)[0, 1]), P = µ, B(t, ω) = Bt(ω) = w(t), t ∈ [0, ∞).

a) Let (Πn)_{n⩾1} be a sequence of partitions of [0, 1] with limn→∞ ∣Πn∣ = 0,

Πn = {s^{(n)}_k : 0 = s^{(n)}_0 < s^{(n)}_1 < . . . < s^{(n)}_{ln} = 1};

by s̃^{(n)}_k, k = 1, . . . , ln we denote arbitrary intermediate points, i.e. s^{(n)}_{k−1} ⩽ s̃^{(n)}_k ⩽ s^{(n)}_k for all k. Then we have

Gφ(ω) = φ(1)B1(ω) − ∫₀¹ Bs(ω) dφ(s) = φ(1)B1(ω) − lim_{∣Πn∣→0} ∑_{k=1}^{ln} B_{s̃^{(n)}_k}(ω)(φ(s^{(n)}_k) − φ(s^{(n)}_{k−1})).

Write

Gφn := φ(1)B1 − ∑_{k=1}^{ln} B_{s̃^{(n)}_k}(φ(s^{(n)}_k) − φ(s^{(n)}_{k−1})) = ∑_{k=1}^{ln} (B1 − B_{s̃^{(n)}_k})(φ(s^{(n)}_k) − φ(s^{(n)}_{k−1})) + B1 φ(0).

Then Gφ(ω) = limn→∞ Gφn(ω) for all ω ∈ Ω. Moreover, the elementary identity

∑_{k=1}^l ak(bk − bk−1) = ∑_{k=1}^{l−1} (ak − ak+1)bk + al bl − a1 b0

implies

Gφn = ∑_{k=1}^{ln−1} (B_{s̃^{(n)}_{k+1}} − B_{s̃^{(n)}_k}) φ(s^{(n)}_k) + (B1 − B_{s̃^{(n)}_{ln}}) φ(1) − (B1 − B_{s̃^{(n)}_1}) φ(0) + B1 φ(0)
= ∑_{k=0}^{ln} (B_{s̃^{(n)}_{k+1}} − B_{s̃^{(n)}_k}) φ(s^{(n)}_k) + B_{s̃^{(n)}_1} φ(0),

where s̃^{(n)}_{ln+1} := 1, s̃^{(n)}_0 := 0. Since the increments are independent Gaussians, Gφn is Gaussian with mean 0 and

V Gφn = ∑_{k=0}^{ln} φ²(s^{(n)}_k) V(B_{s̃^{(n)}_{k+1}} − B_{s̃^{(n)}_k}) + φ²(0) V B_{s̃^{(n)}_1}
= ∑_{k=0}^{ln} φ²(s^{(n)}_k)(s̃^{(n)}_{k+1} − s̃^{(n)}_k) + φ²(0) s̃^{(n)}_1
ÐÐ→ ∫₀¹ φ²(s) ds as n → ∞.

This and limn→∞ Gφn = Gφ (P-a.s.) imply that Gφ is a Gaussian random variable with E Gφ = 0 and V Gφ = ∫₀¹ φ²(s) ds.

b) Without loss of generality we use for φ and ψ the same sequence of partitions. Clearly, Gφn ⋅ Gψn → Gφ ⋅ Gψ for n → ∞ (P-a.s.). Using the elementary inequality 2ab ⩽ a² + b² and the fact that for a Gaussian random variable E(G⁴) = 3(E(G²))², we get

E((Gφn Gψn)²) ⩽ ½ [E((Gφn)⁴) + E((Gψn)⁴)]
= (3/2) [(E(Gφn)²)² + (E(Gψn)²)²]
⩽ (3/2) [(∫₀¹ φ²(s) ds)² + (∫₀¹ ψ²(s) ds)²] + ε   (n ⩾ nε).

This (uniform integrability) implies

E(Gφn Gψn) ÐÐ→ E(Gφ Gψ) as n → ∞.

Moreover,

E(Gφn Gψn) = E[(∑_{k=0}^{ln} (B_{s̃k+1} − B_{s̃k}) φ(sk)) ⋅ (∑_{j=0}^{ln} (B_{s̃j+1} − B_{s̃j}) ψ(sj))]
+ φ(0)ψ(0) E(B²_{s̃1}) + φ(0) E[B_{s̃1} ∑_{j=0}^{ln} (B_{s̃j+1} − B_{s̃j}) ψ(sj)] + ψ(0) E[B_{s̃1} ∑_{k=0}^{ln} (B_{s̃k+1} − B_{s̃k}) φ(sk)]
= ∑_{k=0}^{ln} E((B_{s̃k+1} − B_{s̃k})²) φ(sk)ψ(sk) + ⋯   [E((B_{s̃k+1} − B_{s̃k})²) = s̃k+1 − s̃k]
ÐÐ→ ∫₀¹ φ(s)ψ(s) ds as n → ∞.

This proves

E(Gφ Gψ) = ∫₀¹ φ(s)ψ(s) ds.

c) For simple functions as in a) we have

E[(Gφn − Gψn)²] = E[(Gφn)²] − 2 E[Gφn Gψn] + E[(Gψn)²]
= ∫₀¹ φn²(s) ds − 2 ∫₀¹ φn(s)ψn(s) ds + ∫₀¹ ψn²(s) ds
= ∫₀¹ (φn(s) − ψn(s))² ds.

This and φn → φ in L² imply that (Gφn)_{n⩾1} is a Cauchy sequence in L²(Ω, A, P). Consequently, the limit X = limn→∞ Gφn exists in L². Moreover, as φn → φ in L², we also obtain that ∫₀¹ φn²(s) ds → ∫₀¹ φ²(s) ds.

Since Gφn is a Gaussian random variable with mean 0 and variance ∫₀¹ φn²(s) ds, we see that Gφ is Gaussian with mean 0 and variance ∫₀¹ φ²(s) ds.

Finally, we have φn → φ and ψn → ψ in L²([0, 1]), implying

∫₀¹ φn(s)ψn(s) ds → ∫₀¹ φ(s)ψ(s) ds.

Thus,

E(Gφ Gψ) = ∫₀¹ φ(s)ψ(s) ds.

Problem 12.5 (Solution) The vectors (X, Y) in a) – d) are a.s. limits of two-dimensional Gaussian distributions. Therefore, they are also Gaussian. Their mean is clearly 0. The general density of a two-dimensional Gaussian law (with mean zero) is given by

f(x, y) = (1/(2πσ1σ2 √(1 − ρ²))) ⋅ exp{ −(1/(2(1 − ρ²))) ⋅ (x²/σ1² + y²/σ2² − 2ρxy/(σ1σ2)) }.

In order to solve the problems we have to determine the variances σ1² = V X, σ2² = V Y and the correlation coefficient ρ = E(XY)/(σ1σ2). We will use the results of Problem 12.4.

a) σ1² = V(∫_{1/2}^t s² dw(s)) = ∫₀¹ 1_{[1/2,t]}(s) s⁴ ds = (1/5)(t⁵ − 1/32),
σ2² = V w(1/2) = 1/2 (= V B_{1/2}, cf. canonical model),
E(∫_{1/2}^t s² dw(s) ⋅ w(1/2)) = ∫₀¹ 1_{[1/2,t]}(s) s² ⋅ 1_{[0,1/2]}(s) ds = 0
Ô⇒ ρ = 0.

b) σ1² = (1/5)(t⁵ − 1/32), σ2² = V w(u + 1/2) = u + 1/2, and

E(∫_{1/2}^t s² dw(s) ⋅ w(u + 1/2)) = ∫₀¹ 1_{[1/2,t]}(s) s² ⋅ 1_{[0,u+1/2]}(s) ds = ∫_{1/2}^{(1/2+u)∧t} s² ds = (1/3)(((1/2 + u) ∧ t)³ − 1/8)

Ô⇒ ρ = (1/3)(((1/2 + u) ∧ t)³ − 1/8) / [ (1/5)(t⁵ − 1/32) ⋅ (u + 1/2) ]^{1/2}.

c) σ1² = V(∫_{1/2}^t s² dw(s)) = (1/5)(t⁵ − 1/32), σ2² = V(∫_{1/2}^t s dw(s)) = (1/3)(t³ − 1/8), and

E(∫_{1/2}^t s² dw(s) ⋅ ∫_{1/2}^t s dw(s)) = ∫_{1/2}^t s³ ds = (1/4)(t⁴ − 1/16)

Ô⇒ ρ = (1/4)(t⁴ − 1/16) / [ (1/5)(t⁵ − 1/32) ⋅ (1/3)(t³ − 1/8) ]^{1/2}.

d) σ1² = V(∫_{1/2}^1 e^s dw(s)) = ∫_{1/2}^1 e^{2s} ds = (1/2)(e² − e),
σ2² = V(w(1) − w(1/2)) = 1/2,
E(∫_{1/2}^1 e^s dw(s) ⋅ (w(1) − w(1/2))) = ∫_{1/2}^1 e^s ⋅ 1 ds = e − e^{1/2}

Ô⇒ ρ = (e − e^{1/2}) / ((1/4)(e² − e))^{1/2}.
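The variance formula in a), σ1² = (t⁵ − 1/32)/5, can be checked by simulating the Wiener integral ∫_{1/2}^t s² dw(s) as a left-point Riemann–Itô sum (a sketch; t = 1 and all sample sizes are arbitrary choices, fixed seed):

```python
import math
import random
import statistics

def wiener_integral_sample(rng, a=0.5, b=1.0, steps=200):
    # simulate int_a^b s^2 dw(s) as sum of s_{j-1}^2 * (w(s_j) - w(s_{j-1}))
    h = (b - a) / steps
    total = 0.0
    for j in range(steps):
        s = a + j * h
        total += s * s * rng.gauss(0.0, math.sqrt(h))
    return total

rng = random.Random(3)
samples = [wiener_integral_sample(rng) for _ in range(20000)]
var = statistics.pvariance(samples)
target = (1.0 - 1.0 / 32.0) / 5.0   # (t^5 - 1/32)/5 with t = 1
```

The empirical variance should be close to 0.19375, and the empirical mean close to 0, as a Wiener integral of a deterministic function is centred Gaussian.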

Problem 12.6 (Solution) Let wn ∈ F, n ⩾ 1, and wn → v in C(o)[0, 1]. We have to show that v ∈ F. Now:

wn ∈ F Ô⇒ ∃ (cn, rn) ∈ [q⁻¹, 1] × [0, 1] : ∣wn(cn rn) − wn(rn)∣ ⩾ 1.

Observe that the function (c, r) ↦ w(cr) − w(r) with (c, r) ∈ [q⁻¹, 1] × [0, 1] is continuous for every w ∈ C(o)[0, 1]. Since [q⁻¹, 1] × [0, 1] is compact, there exists a subsequence (nk)_{k⩾1} such that cnk → c̃ and rnk → r̃ as k → ∞ and (c̃, r̃) ∈ [q⁻¹, 1] × [0, 1].

Finally,

∣v(c̃ r̃) − v(r̃)∣ = lim_{k→∞} ∣wnk(cnk rnk) − wnk(rnk)∣ ⩾ 1,

and v ∈ F follows.

Problem 12.7 (Solution) Set L(t) = √(2t log log t), t ⩾ e, and sn = qⁿ, n ∈ ℕ, q > 1. Then:

a) for the first inequality: since B(sn−1)/√(sn−1) ∼ N(0, 1),

P(∣B(sn−1)∣/L(sn) > ε/4) = P(∣B(sn−1)/√(sn−1)∣ ⋅ (1/√(2q log log sn)) > ε/4) = P(∣B(1)∣ > (ε/4) √(2q log log qⁿ));

using Problem 9.6 and P(∣Z∣ > x) = 2 P(Z > x) for x ⩾ 0 this is

⩽ √(2/π) ⋅ (4/(ε √(2q log log qⁿ))) ⋅ exp{ −(ε²/32) ⋅ 2q log log qⁿ } ⩽ C/n²

if q is sufficiently large.

b) for the second inequality: by the Cauchy–Schwarz inequality,

sup_{t⩽q⁻¹} ∣w(t)∣ = sup_{t⩽q⁻¹} ∣∫₀ᵗ w′(s) ds∣ ⩽ ∫₀^{1/q} ∣w′(s)∣ ds ⩽ [∫₀^{1/q} w′(s)² ds ⋅ ∫₀^{1/q} 1 ds]^{1/2} ⩽ [∫₀¹ w′(s)² ds ⋅ (1/q)]^{1/2} ⩽ √(r/q) < ε/4

for all sufficiently large q.

c) for the third inequality: Brownian scaling B(⋅ sn)/√sn ∼ B(⋅) yields

P(sup_{0⩽t⩽q⁻¹} ∣B(t sn)∣/√(2sn log log sn) > ε/4) = P(sup_{0⩽t⩽q⁻¹} ∣B(t)∣/√(2 log log sn) > ε/4)
= P(sup_{0⩽t⩽q⁻¹} ∣B(t)∣ > (ε/4) √(2 log log sn))
⩽ 2 P(∣B(1/q)∣ > (ε/4) √(2 log log sn))   (*)
= 2 P(∣B(1/q)∣/√(1/q) > (ε/4) √(2q log log qⁿ)) ⩽ C/n²

for all q sufficiently large. In the estimate marked with (*) we used (reflection principle)

P(sup_{t⩽t0} ∣B(t)∣ > x) ⩽ 2 P(sup_{t⩽t0} B(t) > x) = 2 P(M(t0) > x) = 2 P(∣B(t0)∣ > x).

Combining a)–c),

P(∣B(sn−1)∣/L(sn) + sup_{t⩽q⁻¹} ∣w(t)∣ + sup_{0⩽t⩽q⁻¹} ∣B(t sn)∣/L(sn) > 3ε/4)
⩽ P(∣B(sn−1)∣/L(sn) > ε/4 or sup_{t⩽q⁻¹} ∣w(t)∣ > ε/4 or sup_{0⩽t⩽q⁻¹} ∣B(t sn)∣/L(sn) > ε/4)
⩽ P(∣B(sn−1)∣/L(sn) > ε/4) + P(sup_{t⩽q⁻¹} ∣w(t)∣ > ε/4) + P(sup_{0⩽t⩽q⁻¹} ∣B(t sn)∣/L(sn) > ε/4)
⩽ C/n² + 0 + C/n²

for all sufficiently large q. Using the Borel–Cantelli lemma we see that

lim sup_{n→∞} (∣B(sn−1)∣/L(sn) + sup_{t⩽q⁻¹} ∣w(t)∣ + sup_{0⩽t⩽q⁻¹} ∣B(t sn)∣/L(sn)) ⩽ 3ε/4.


13 Skorokhod Representation

It is enough to show that Bt − Bs ⊥⊥ Fs for all s ⩽ t. Let A, A′, C be Borel sets in ℝᵈ. Then we find for F ∈ F^B_s

P({Bt − Bs ∈ C} ∩ F ∩ {U ∈ A} ∩ {V ∈ A′})
= P({Bt − Bs ∈ C} ∩ F) ⋅ P({U ∈ A} ∩ {V ∈ A′})   (since U, V ⊥⊥ F^B_∞)
= P({Bt − Bs ∈ C}) ⋅ P(F) ⋅ P({U ∈ A} ∩ {V ∈ A′})   (since Bt − Bs ⊥⊥ F^B_s)
= P({Bt − Bs ∈ C}) ⋅ P(F ∩ {U ∈ A} ∩ {V ∈ A′})   (since U, V ⊥⊥ F^B_∞)

and this shows that Bt − Bs is independent of the family Es = {F ∩ G : F ∈ F^B_s, G ∈ σ(U, V)}. This family is stable under finite intersections, so Bt − Bs ⊥⊥ σ(Es) = Fs.


14 Stochastic Integrals: L2–Theory

Problem 14.1 (Solution) Since M² − ⟨M⟩ and N² − ⟨N⟩ are martingales, so are

(M + N)² − ⟨M + N⟩ and (M − N)² − ⟨M − N⟩.

Since (M + N)² − (M − N)² = 4MN and ⟨M + N⟩ − ⟨M − N⟩ = 4⟨M, N⟩ (by definition of the bracket), the difference

4MN − 4⟨M, N⟩ = [(M + N)² − ⟨M + N⟩] − [(M − N)² − ⟨M − N⟩]

is a martingale, i.e. MN − ⟨M, N⟩ is a martingale.

Problem 14.2 (Solution) Assume that we have any two representations of the same simple process,

f = ∑_j φj−1 1_{[sj−1, sj)} = ∑_k ψk−1 1_{[tk−1, tk)} on [0, T).

Refining both partitions onto the common grid, we get

f = ∑_{j,k} φj−1 1_{[sj−1, sj)} 1_{[tk−1, tk)} = ∑_{k,j} ψk−1 1_{[sj−1, sj)} 1_{[tk−1, tk)},

so that φj−1 = ψk−1 whenever [sj−1, sj) ∩ [tk−1, tk) ≠ ∅. Hence the Riemann–Itô sums taken over all (j, k) with [sj−1, sj) ∩ [tk−1, tk) ≠ ∅ agree term by term, i.e. the value of the stochastic integral does not depend on the chosen representation.


Problem 14.3 (Solution) • Positivity is clear, finiteness follows with Doob's maximal inequality:

E[sup_{s⩽T} ∣Ms∣²] ⩽ 4 sup_{s⩽T} E[∣Ms∣²] = 4 E[∣MT∣²].

• Triangle inequality:

∥M + N∥_{M²_T} = (E[sup_{s⩽T} ∣Ms + Ns∣²])^{1/2}
⩽ (E[(sup_{s⩽T} ∣Ms∣ + sup_{s⩽T} ∣Ns∣)²])^{1/2}
⩽ (E[sup_{s⩽T} ∣Ms∣²])^{1/2} + (E[sup_{s⩽T} ∣Ns∣²])^{1/2},

where we used in the first estimate the subadditivity of the supremum and in the second inequality the Minkowski inequality (triangle inequality) in L².

• Positive homogeneity:

∥λM∥_{M²_T} = (E[sup_{s⩽T} ∣λMs∣²])^{1/2} = ∣λ∣ (E[sup_{s⩽T} ∣Ms∣²])^{1/2} = ∣λ∣ ⋅ ∥M∥_{M²_T}.

• Definiteness:

∥M∥_{M²_T} = 0 ⇐⇒ sup_{s⩽T} ∣Ms∣² = 0 (almost surely).

Problem 14.4 (Solution) Let fn → f and gn → f be two sequences which approximate f in the norm of L²(λT ⊗ P). Then we have

E(∣fn ● BT − gn ● BT∣²) = E(∣(fn − gn) ● BT∣²) = E(∫₀ᵀ ∣fn(s) − gn(s)∣² ds) ÐÐ→ 0 as n → ∞.

Thus,

L²(P)-lim_{n→∞} fn ● BT = L²(P)-lim_{n→∞} gn ● BT.

Problem 14.5 (Solution) Solution 1: Let τ be a stopping time and consider the sequence of discrete stopping times

τm := (⌊2^m τ⌋ + 1)/2^m ∧ T.

Let t0 = 0 < t1 < t2 < . . . < tn = T and, without loss of generality, τm(Ω) ⊂ {t0, . . . , tn}. Then (B²_{tj} − tj)_j is again a discrete martingale and by optional stopping we get that (B²_{τm∧tj} − τm ∧ tj)_j is a discrete martingale. This means that for each m ⩾ 1 the process B²_{τm∧t} − τm ∧ t is a martingale, and this indicates that we can set ⟨B^τ⟩_t = t ∧ τ. This process makes B²_{t∧τ} − t ∧ τ into a martingale. Indeed: fix 0 ⩽ s ⩽ t ⩽ T and add them to the partition, if necessary. Then

B²_{τm∧t} ÐÐ→ B²_{τ∧t} a.e. and in L¹(P) as m → ∞,

so that for all F ∈ Fs

∫_F (B²_{τ∧s} − τ ∧ s) dP = lim_{m→∞} ∫_F (B²_{τm∧s} − τm ∧ s) dP = lim_{m→∞} ∫_F (B²_{τm∧t} − τm ∧ t) dP = ∫_F (B²_{τ∧t} − τ ∧ t) dP.

Solution 2: Since

B^τ_t = ∫₀ᵗ 1_{[0,τ)}(s) dBs

we get

⟨B^τ⟩_t = ⟨∫₀^● 1_{[0,τ)}(s) dBs⟩_t = ∫₀ᵗ 1²_{[0,τ)}(s) ds = ∫₀ᵗ 1_{[0,τ)}(s) ds = τ ∧ t.

(Of course, one should make sure that 1_{[0,τ)} ∈ L²_T, see e.g. Problem 14.14 below or Problem 15.2 in combination with Theorem 14.20.)

Problem 14.6 (Solution) We begin with a general remark: if f = 0 on [0, s] × Ω, we can use Theorem 14.13 f) and deduce f ● Bs = 0.

a) We have, by 14.13 b) and (14.19),

E[(f ● Bt)² ∣ Fs] = E[(f ● Bt − f ● Bs)² ∣ Fs] = E[∫ₛᵗ f²(r) dr ∣ Fs].

If both f and g vanish on [0, s], the same is true for f ± g. We get

E[((f ± g) ● Bt)² ∣ Fs] = E[∫ₛᵗ (f ± g)²(r) dr ∣ Fs].

Subtracting the 'minus' version from the 'plus' version gives

E[((f + g) ● Bt)² − ((f − g) ● Bt)² ∣ Fs] = E[∫ₛᵗ (f + g)²(r) − (f − g)²(r) dr ∣ Fs],

i.e.

4 E[(f ● Bt) ⋅ (g ● Bt) ∣ Fs] = 4 E[∫ₛᵗ (f ⋅ g)(r) dr ∣ Fs].

Moreover, by the martingale property and the remark above,

E(f ● Bt ∣ Fs) = f ● Bs = 0.


Problem 14.7 (Solution) Because of Lemma 14.10 it is enough to show that fn ● BT ÐÐ→ f ● BT in L²(P) as n → ∞. This follows immediately from Theorem 14.13 c):

E[∣fn ● BT − f ● BT∣²] = E[∣(fn − f) ● BT∣²] = E[∫₀ᵀ ∣fn(s) − f(s)∣² ds] ÐÐ→ 0 as n → ∞.

Problem 14.8 (Solution) Assume that (f ● B)² − A is a martingale where At is continuous and increasing. Since (f ● B)² − f² ● ⟨B⟩ is a martingale, we conclude that

A − f² ● ⟨B⟩ = [(f ● B)² − f² ● ⟨B⟩] − [(f ● B)² − A]

is a continuous martingale whose paths are of bounded variation, hence constant (= 0); thus At = ∫₀ᵗ f²(s) ds, i.e. the quadratic variation is unique.

Problem 14.9 (Solution) If Xn Ð→ X then supn E(Xn2 ) < ∞ and the claim follows from the

fact that

E ∣Xn2 − Xm

2

∣ = E [∣Xn − Xm ∣∣Xn + Xm ∣]

√ √

⩽ E ∣Xn + Xm ∣2 E ∣Xn − Xm ∣2

√ √ √

⩽ ( E ∣Xn ∣2 + E ∣Xm ∣2 ) E ∣Xn − Xm ∣2 .

Problem 14.10 (Solution) Let Π = {t0 = 0 < t1 < . . . < tn = T} be a partition of [0, T]. Then we get

BT³ = ∑_{j=1}^n (B³_{tj} − B³_{tj−1})
= ∑_{j=1}^n (B_{tj} − B_{tj−1})[B²_{tj} + B_{tj}B_{tj−1} + B²_{tj−1}]
= ∑_{j=1}^n (B_{tj} − B_{tj−1})[B²_{tj} − 2B_{tj}B_{tj−1} + B²_{tj−1} + 3B_{tj}B_{tj−1}]
= ∑_{j=1}^n (B_{tj} − B_{tj−1})[(B_{tj} − B_{tj−1})² + 3B_{tj}B_{tj−1}]
= ∑_{j=1}^n (B_{tj} − B_{tj−1})[(B_{tj} − B_{tj−1})² + 3B²_{tj−1} + 3B_{tj−1}(B_{tj} − B_{tj−1})]
= ∑_{j=1}^n (B_{tj} − B_{tj−1})³ + 3 ∑_{j=1}^n B²_{tj−1}(B_{tj} − B_{tj−1}) + 3 ∑_{j=1}^n B_{tj−1}(B_{tj} − B_{tj−1})²
= ∑_{j=1}^n (B_{tj} − B_{tj−1})³ + 3 ∑_{j=1}^n B²_{tj−1}(B_{tj} − B_{tj−1}) + 3 ∑_{j=1}^n B_{tj−1}(tj − tj−1) + 3 ∑_{j=1}^n B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)]
= I1 + I2 + I3 + I4.

Clearly,

I2 ÐÐ→ 3 ∫₀ᵀ B²_s dBs and I3 ÐÐ→ 3 ∫₀ᵀ Bs ds as ∣Π∣ → 0,

the first as the L²-limit defining the stochastic integral, the second as a Riemann integral. The latter also converges in L² since I2 and, as we will see in a moment, I1 and I4 converge in L²-sense.

Let us show that I1, I4 → 0. Since E I1 = 0 it suffices to control the variance:

V I1 = V(∑_{j=1}^n (B_{tj} − B_{tj−1})³) =(B1) ∑_{j=1}^n V((B_{tj} − B_{tj−1})³) =(B2) ∑_{j=1}^n V(B³_{tj−tj−1}) =(scaling) ∑_{j=1}^n (tj − tj−1)³ V(B1³) ⩽ ∣Π∣² ∑_{j=1}^n (tj − tj−1) V(B1³) ÐÐ→ 0 as ∣Π∣ → 0.

Moreover, since the mixed terms vanish (see below),

E(I4²) = 9 E(∑_{j=1}^n ∑_{k=1}^n B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)] B_{tk−1}[(B_{tk} − B_{tk−1})² − (tk − tk−1)])
= 9 ∑_{j=1}^n E(B²_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)]²)
=(B1) 9 ∑_{j=1}^n E(B²_{tj−1}) E([(B_{tj} − B_{tj−1})² − (tj − tj−1)]²)
=(B2) 9 ∑_{j=1}^n E(B²_{tj−1}) E([B²_{tj−tj−1} − (tj − tj−1)]²)
=(scaling) 9 ∑_{j=1}^n tj−1 E(B1²)(tj − tj−1)² E([B1² − 1]²)
= 9 ∑_{j=1}^n tj−1 (tj − tj−1)² V(B1²)
⩽ 9T ∣Π∣ ∑_{j=1}^n (tj − tj−1) V(B1²) ÐÐ→ 0 as ∣Π∣ → 0.

Now for the argument with the mixed terms. Let j < k; then tj−1 < tj ⩽ tk−1 < tk, and by the tower property,

E(B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)] B_{tk−1}[(B_{tk} − B_{tk−1})² − (tk − tk−1)])
=(tower) E(E[B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)] B_{tk−1}[(B_{tk} − B_{tk−1})² − (tk − tk−1)] ∣ F_{tk−1}])
=(pull out) E(B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)] B_{tk−1} E[(B_{tk} − B_{tk−1})² − (tk − tk−1) ∣ F_{tk−1}])
=(B1) E(B_{tj−1}[(B_{tj} − B_{tj−1})² − (tj − tj−1)] B_{tk−1}) ⋅ E[(B_{tk} − B_{tk−1})² − (tk − tk−1)]
= 0,

since E[(B_{tk} − B_{tk−1})²] = tk − tk−1.
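The decomposition BT³ = 3∫B²dB + 3∫B ds plus vanishing remainders can be watched numerically: on a fine grid, the discrete sums I2 + I3 already reproduce BT³ up to a small L² error, namely I1 + I4 (a sketch; grid size, sample count and tolerances are generous ad-hoc choices, fixed seed):

```python
import math
import random

def path_error(rng, n=2000, T=1.0):
    # simulate one Brownian path on a grid of n steps and return I1 + I4
    h = T / n
    B = [0.0]
    for _ in range(n):
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(h)))
    I2 = 3 * sum(B[j - 1] ** 2 * (B[j] - B[j - 1]) for j in range(1, n + 1))
    I3 = 3 * sum(B[j - 1] * h for j in range(1, n + 1))
    return B[-1] ** 3 - (I2 + I3)   # = I1 + I4, should be small for fine grids

rng = random.Random(11)
mse = sum(path_error(rng) ** 2 for _ in range(200)) / 200
```

The mean squared remainder is of order ∣Π∣ = T/n, consistent with the variance bounds for I1 and I4 computed above.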

Problem 14.11 (Solution) Let Π = {t₀ = 0 < t₁ < . . . < tₙ = T} be a partition of [0, T]. Using the identity

f(t_j)B_{t_j} − f(t_{j−1})B_{t_{j−1}} = f(t_{j−1})(B_{t_j} − B_{t_{j−1}}) + B_{t_{j−1}}(f(t_j) − f(t_{j−1})) + (B_{t_j} − B_{t_{j−1}})(f(t_j) − f(t_{j−1}))

and summing over j = 1, . . . , n, we get

f(T)B_T − f(0)B₀ = ∑_{j=1}^n f(t_{j−1})(B_{t_j} − B_{t_{j−1}}) + ∑_{j=1}^n B_{t_{j−1}}(f(t_j) − f(t_{j−1})) + ∑_{j=1}^n (B_{t_j} − B_{t_{j−1}})(f(t_j) − f(t_{j−1}))
= I₁ + I₂ + I₃.

Clearly,

I₁ ÐÐ→ ∫₀ᵀ f(s) dB_s in L²  (stochastic integral),
I₂ ÐÐ→ ∫₀ᵀ B_s df(s) a.s.  (Riemann–Stieltjes integral),

and if we can show that I₃ → 0 in L², then we are done (as this also implies the L²-convergence of I₂). Now we have

E[ ( ∑_{j=1}^n (B_{t_j} − B_{t_{j−1}})(f(t_j) − f(t_{j−1})) )² ]
= E[ ∑_{j=1}^n ∑_{k=1}^n (B_{t_j} − B_{t_{j−1}})(f(t_j) − f(t_{j−1}))(B_{t_k} − B_{t_{k−1}})(f(t_k) − f(t_{k−1})) ];

the mixed terms break away because of the independent increments property of Brownian motion, so this equals

= ∑_{j=1}^n (f(t_j) − f(t_{j−1}))² E[ (B_{t_j} − B_{t_{j−1}})² ]
= ∑_{j=1}^n (t_j − t_{j−1})(f(t_j) − f(t_{j−1}))²
⩽ 2 ∣Π∣ ⋅ ∥f∥_∞ ∑_{j=1}^n ∣f(t_j) − f(t_{j−1})∣


Solution Manual. Last update June 12, 2017

ÐÐ→ 0 as ∣Π∣ → 0, since ∑_{j=1}^n ∣f(t_j) − f(t_{j−1})∣ ⩽ VAR₁(f; [0, T]) < ∞. Here we also use that f is bounded:

∣f(t)∣ ⩽ ∣f(t) − f(0)∣ + ∣f(0)∣ ⩽ VAR₁(f; [0, t]) + VAR₁(f; {0}) ⩽ 2 VAR₁(f; [0, T]).

Problem 14.12 (Solution) Replace, starting in the fourth line of the proof of Proposition 14.16,

the argument as follows:

E[ sup_{t⩽T} ∣ ∫₀ᵗ [f(s) − f^Π(s)] dB_s ∣² ]
⩽ 4 ∫₀ᵀ E[ ∣f(s) − f^Π(s)∣² ] ds
= 4 ∑_{j=1}^n ∫_{s_{j−1}}^{s_j} E[ ∣f(s) − f(s_{j−1})∣² ] ds
⩽ 4 ∑_{j=1}^n ∫_{s_{j−1}}^{s_j} sup_{u,v∈[s_{j−1},s_j]} E[ ∣f(u) − f(v)∣² ] ds ÐÐÐ→ 0  as ∣Π∣ → 0,

since each supremum tends to 0 as ∣Π∣ → 0.

Problem 14.13 (Solution) To simplify notation, we drop the n in Πn and write only 0 = t0 <

t1 < . . . < tk = T and

θ_j = θ_{n,j}^α = α t_j + (1 − α) t_{j−1}.

We get

L_T(α) ∶= L²(P)-lim_{∣Π∣→0} ∑_{j=1}^k B_{θ_j}(B_{t_j} − B_{t_{j−1}}) = ∫₀ᵀ B_s dB_s + αT.

Indeed, we have

k

∑ Bθj (Btj − Btj−1 )

j=1

k k

= ∑ Btj−1 (Btj − Btj−1 ) + ∑ (Bθj − Btj−1 )(Btj − Btj−1 )

j=1 j=1

k k k

= ∑ Btj−1 (Btj − Btj−1 ) + ∑ (Bθj − Btj−1 )2 + ∑ (Btj − Bθj )(Bθj − Btj−1 )

j=1 j=1 j=1

= X + Y + Z.

L2 T

We know already that X ÐÐÐ→ ∫0 Bs dBs . Moreover,

∣Π∣→0

⎛k ⎞

V Z = V ∑ (Btj − Bθj )(Bθj − Btj−1 )

⎝j=1 ⎠

k

= ∑ V [(Btj − Bθj )(Bθj − Btj−1 )]

j=1


k

= ∑ E [(Btj − Bθj )2 (Bθj − Btj−1 )2 ]

j=1

k

= ∑ E [(Btj − Bθj )2 ] E [(Bθj − Btj−1 )2 ]

j=1

k

= ∑_{j=1}^k (t_j − θ_j)(θ_j − t_{j−1})
= α(1 − α) ∑_{j=1}^k (t_j − t_{j−1})² ÐÐÐÐ→ 0  as ∣Π∣ → 0  (as in Theorem 9.1).

Finally,

E Y = E( ∑_{j=1}^k (B_{θ_j} − B_{t_{j−1}})² ) = ∑_{j=1}^k E(B_{θ_j} − B_{t_{j−1}})² = ∑_{j=1}^k (θ_j − t_{j−1}) = α ∑_{j=1}^k (t_j − t_{j−1}) = αT.

Consequence: L_T(α) = ½ (B_T² + (2α − 1)T), and this stochastic integral is a martingale if, and only if, α = 0, i.e. if θ_j = t_{j−1} is the left endpoint of the interval.

For α = ½ we get the so-called Stratonovich or mid-point stochastic integral. This will obey the usual calculus rules (instead of Itô's rule). A first sign is the fact that

L_T(½) = ½ B_T² = ∫₀ᵀ B_s ○ dB_s.
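The dependence of the limit on the evaluation point α shows up clearly in simulation. This is a sketch (assuming NumPy; on a fine mesh we approximate B_{θ_j} by the convex combination (1 − α)B_{t_{j−1}} + αB_{t_j}, which has the same limit): the sample means should be close to E L_T(α) = αT.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, n_paths = 1.0, 1_000, 20_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B = np.concatenate([np.zeros((n_paths, 1)), dB.cumsum(axis=1)], axis=1)

def alpha_sum(alpha):
    # Riemann-Stieltjes sum with evaluation point theta_j = alpha*t_j + (1-alpha)*t_{j-1}
    B_theta = (1 - alpha) * B[:, :-1] + alpha * B[:, 1:]
    return (B_theta * dB).sum(axis=1)

for alpha in (0.0, 0.5, 1.0):
    S = alpha_sum(alpha)
    # limit: L_T(alpha) = B_T^2/2 + (alpha - 1/2)*T, which has mean alpha*T
    print(alpha, S.mean())
```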

Problem 14.14 (Solution) a) Let τk be a sequence of stopping times with countably many,

discrete values such that τk ↓ τ . For example, τk ∶= (⌊2k τ ⌋ + 1)/2k , see Lemma A.15

in the appendix. Write s1 < . . . < sK for the values of τk . In particular,

j

And so

j


k→∞

the integral. If you want to show it nevertheless, have a look at Problem 15.2 below.

j

j

= BT ∧τk .

= E(T ∧ τk − T ∧ τ ) ÐÐÐ→ 0

k→∞

by dominated convergence. Therefore,

∫ 1_{[0,T∧τ)}(s) dB_s = L²-lim_k ∫ 1_{[0,T∧τ_k)}(s) dB_s = L²-lim_k B_{T∧τ_k} = B_{T∧τ},

using parts d) and c) for the two limits.

integrable.

f) The result is, in the light of the localization principle of Theorem 14.13 not unex-

pected.

• Clearly, ∅, [0, T ] × Ω ∈ P.

• Let Γ ∈ P. Then

Γᶜ ∩ ([0, t] × Ω) = ([0, t] × Ω) ∖ (Γ ∩ ([0, t] × Ω)) ∈ B[0, t] ⊗ F_t,

since both [0, t] × Ω and Γ ∩ ([0, t] × Ω) are in B[0, t] ⊗ F_t; thus Γᶜ ∈ P.

• Let Γn ∈ P. By definition

Γn ∩ ([0, t] × Ω) ∈ B[0, t] ⊗ Ft


and so (⋃_n Γ_n) ∩ ([0, t] × Ω) = ⋃_n (Γ_n ∩ ([0, t] × Ω)) ∈ B[0, t] ⊗ F_t, i.e. ⋃_n Γ_n ∈ P.

Problem 14.16 (Solution) Let f (t, ω) be right-continuous on the interval [0, T ]. (We consider

only T < ∞ since the case of the infinite interval [0, ∞) is actually easier.)

Set

f_n^T(s, ω) ∶= f( (⌊2ⁿs⌋ + 1)/2ⁿ ∧ T, ω);

then

f_n^T(s, ω) = ∑_k f( (k+1)/2ⁿ ∧ T, ω) 1_{[k2⁻ⁿ, (k+1)2⁻ⁿ)}(s)    (s ⩽ T)

and, since (⌊2n s⌋ + 1)/2n ↓ s, we find by right-continuity that fn → f as n → ∞. This

means that it is enough to consider the P-measurability of the step-function fn .

Fix n ⩾ 0, write tj = j2−n . Then t0 = 0 < t1 < . . . tN ⩽ T for some suitable N . Observe that

for any x ∈ R

N

{(s, ω) ∶ f (s, ω) ⩽ x} = {T } × {ω ∶ f (T, ω) ⩽ x} ∪ ⋃ [tj−1 , tj ) × {ω ∶ f (tj , ω) ⩽ x}

j=1

and each set appearing in the union set on the right is in B[0, T ] ⊗ FT .

This shows that fnT and f are B[0, T ] ⊗ FT measurable.

Now consider fnt and f (t)1[0,t] . We conclude, with the same reasoning, that both are

B[0, t] ⊗ Ft measurable.

This shows that a right-continuous f is progressive.

If f is left-continuous, we use ⌊2ⁿs⌋/2ⁿ ↑ s and define the approximating function as f_n^T(s, ω) ∶= f(⌊2ⁿs⌋/2ⁿ ∧ T, ω); the rest of the argument is the same.

processes of the form

fn (s, ω) = ∑ φj−1 (s)1[tj−1 ,tj ) (s)

j

where φj−1 is Ftj−1 measurable such that fn → f in L2 (µT ⊗ P). In particular, there is a

subsequence such that

t t

lim ∫ ∣fn(k) (s)∣2 dAs = ∫ ∣f (s)∣2 dAs a.s.

k→∞ 0 0

t

so that it is enough to check that the integrals ∫0 ∣fn(j) (s)∣2 dAs are adapted. By defintion

t

∫ ∣fn(j) (s)∣2 dAs = ∑ φ2j−1 (Atj ∧t − Atj−1 ∧t )

0 j

and from this it is clear that the integral is Ft measurable for each t.


15 Stochastic Integrals: Beyond L2T

Problem 15.1 (Solution) We know from the proof of Lemma 15.2 that for f ∈ L2T and any

approximating sequence (fn )n⩾0 ⊂ ET we have

t t

∀ t ∈ [0, T ] ∃(n(j, t))j⩾1 ∶ lim ∫ ∣fn(j,t) (s, ⋅)∣2 ds = ∫ ∣f (s, ⋅)∣2 ds almost surely.

j→∞ 0 0

In particular the subsequence may depend on t. Since the rational numbers Q+ ∩ [0, T ]

are dense in [0, T ] we can construct, by a diagonal procedure, a subsequence m(j) such

that

q q

∃(m(j))j⩾1 ∀ q ∈ [0, T ] ∩ Q ∶ lim ∫ ∣fm(j) (s, ⋅)∣2 ds = ∫ ∣f (s, ⋅)∣2 ds almost surely.

j→∞ 0 0

Observe that for any t ∈ (0, T ) there are rational numbers q, r ∈ Q ∩ [0, T ] such that

0 ⩽ r ⩽ t ⩽ q ⩽ T . Then

r t q

∫ ∣fm(j) (s, ⋅)∣2 ds ⩽ ∫ ∣fm(j) (s, ⋅)∣2 ds ⩽ ∫ ∣fm(j) (s, ⋅)∣2 ds

0 0 0

and

r t

lim ∫ ∣fm(j) (s, ⋅)∣2 ds ⩽ lim ∫ ∣fm(j) (s, ⋅)∣2 ds

j→∞ 0 j→∞ 0

t q

⩽ lim ∫ ∣fm(j) (s, ⋅)∣2 ds ⩽ lim ∫ ∣fm(j) (s, ⋅)∣2 ds

j→∞ 0 j→∞ 0

hence

∫₀ʳ ∣f(s, ⋅)∣² ds ⩽ lim inf_j ∫₀ᵗ ∣f_{m(j)}(s, ⋅)∣² ds ⩽ lim sup_j ∫₀ᵗ ∣f_{m(j)}(s, ⋅)∣² ds ⩽ ∫₀^q ∣f(s, ⋅)∣² ds.

Letting r ↑ t and q ↓ t along rational numbers and using the continuity of t ↦ ∫₀ᵗ ∣f(s, ⋅)∣² ds, we get

∫₀ᵗ ∣f(s, ⋅)∣² ds ⩽ lim inf_j ∫₀ᵗ ∣f_{m(j)}(s, ⋅)∣² ds ⩽ lim sup_j ∫₀ᵗ ∣f_{m(j)}(s, ⋅)∣² ds ⩽ ∫₀ᵗ ∣f(s, ⋅)∣² ds.

Alternative Solution: As in the proof of Lemma 15.2 there exists a sequence (fn )n⩾0 ⊂ ET

which converges to f in L2 (λT ⊗ P). There is a subsequence (fn(j) )j⩾0 such that

T

∫ ∣fn(j) (s, ⋅) − f (s, ⋅)∣2 ds → 0 almost surely.

0

By the triangle inequality in L²[0, t],

∣ ( ∫₀ᵗ ∣f_{n(j)}(s, ⋅)∣² ds )^{1/2} − ( ∫₀ᵗ ∣f(s, ⋅)∣² ds )^{1/2} ∣
⩽ ( ∫₀ᵗ ∣f_{n(j)}(s, ⋅) − f(s, ⋅)∣² ds )^{1/2}
⩽ ( ∫₀ᵀ ∣f_{n(j)}(s, ⋅) − f(s, ⋅)∣² ds )^{1/2} ÐÐÐ→ 0  almost surely as j → ∞.

Problem 15.2 (Solution) Solution 1: We have that the process t ↦ 1_{[0,τ(ω))}(t) is adapted and left-continuous. Alternatively, set

I_n^t(s, ω) ∶= 1_{[0,τ(ω))}( ⌊2ⁿs⌋/2ⁿ ∧ t ) = ∑_j 1_{[0,τ(ω))}(t_{j+1} ∧ t) 1_{[t_j, t_{j+1})}(s ∧ t)

and check that I_n^t is B[0, t] ⊗ F_t-measurable. But this is obvious from the form of I_n^t.

Problem 15.3 (Solution) Assume that σn are stopping times such that (Mtσn 1{σn >0} )t is a

martingale. Clearly,

• τn ∶= σn ∧ n ↑ ∞ almost surely as n → ∞;

• M^{σ_n}_{t∧n} 1_{σ_n>0} = M^{σ_n∧n}_t 1_{σ_n>0} = M^{σ_n∧n}_t 1_{σ_n∧n>0} = M^{τ_n}_t 1_{τ_n>0};

• by Doob's maximal inequality,

E[ sup_{s⩽T} ∣M(s ∧ τ_n)∣² ] ⩽ 4 E[∣M(τ_n)∣²] ⩽ 4 E[∣M(n)∣²].


16 Itô’s Formula

Problem 16.1 (Solution) We try to identify the bits and pieces as parts of Itô’s formula. For

f (x) = ex we get f ′ (x) = f ′′ (x) = ex and so

t 1 t Bs

eBt − 1 = ∫ eBs dBs + ∫ e ds.

0 2 0

Thus,

1 t Bs

Xt = eBt − 1 − ∫ e ds.

2 0

With the same trick we try to find f(x) such that f′(x) = x e^{x²}. A moment's thought reveals that f(x) = ½ e^{x²} will do. Moreover, f″(x) = e^{x²} + 2x² e^{x²}. This then gives

½ e^{B_t²} − ½ = ∫₀ᵗ B_s e^{B_s²} dB_s + ½ ∫₀ᵗ ( e^{B_s²} + 2B_s² e^{B_s²} ) ds,

and we see that

Y_t = ½ ( e^{B_t²} − 1 − ∫₀ᵗ ( e^{B_s²} + 2B_s² e^{B_s²} ) ds ).

2

Note: the integrand Bs2 eBs is not of class L2T , thus we have to use a stopping technique

(as in step 4o of the proof of Itô’s formula or as in Chapter 15).
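The first identity is easy to sanity-check numerically (a sketch, assuming NumPy): X_t = e^{B_t} − 1 − ½ ∫₀ᵗ e^{B_s} ds is a stochastic integral, so it has mean 0; the time integral is approximated by a left-point Riemann sum.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, n_paths = 1.0, 500, 50_000
dt = T / n
B = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n)).cumsum(axis=1)
B_left = np.concatenate([np.zeros((n_paths, 1)), B[:, :-1]], axis=1)  # B at t_0,...,t_{n-1}
integral = np.exp(B_left).sum(axis=1) * dt   # left-point sum for int_0^T e^{B_s} ds
X_T = np.exp(B[:, -1]) - 1.0 - 0.5 * integral
print(X_T.mean())  # should be close to 0, since X is a mean-zero stochastic integral
```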

Set F(x, y) ∶= xy and G(t) ∶= (f(t), g(t)). Then f(t)g(t) = F ○ G(t). If we differentiate this using the chain rule we get

(d/dt)(F ○ G)(t) = ∂_x F ○ G(t) ⋅ f′(t) + ∂_y F ○ G(t) ⋅ g′(t) = g(t) ⋅ f′(t) + f(t) ⋅ g′(t)

(surprised?) and if we integrate this up we see

t t

F ○ G(t) − F ○ G(0) = ∫ f (s)g ′ (s) ds + ∫ g(s)f ′ (s) ds

0 0

t t

=∫ f (s) dg(s) + ∫ g(s) df (s).

0 0

Note: For the first equality we have to assume that f ′ , g ′ exist Lebesgue a.e. and

that their primitives are f and g, respectively. This is tantamount to saying that f, g

are absolutely continuous with respect to Lebesgue measure.

For the product b_t β_t take f(x, y) = xy; then ∂_x f = y, ∂_y f = x, ∂_x∂_y f = 1, and ∂_x² f(x, y) = ∂_y² f(x, y) = 0. Thus, the 2-dimensional Itô formula yields

t t

bt βt = ∫ bs dβs + ∫ βs dbs +

0 0


1 t 2 1 t 2 t

+ ∫ ∂x f (bs , βs ) ds + ∫ ∂y f (bs , βs ) ds + ∫ ∂x ∂y f (bs , βs ) d⟨b, β⟩s

2 0 2 0 0

t t

=∫ bs dβs + ∫ βs dbs + ⟨b, β⟩t .

0 0

If b á β we have ⟨b, β⟩ ≡ 0 (note our Itô formula has no mixed second derivatives!)

and we get the formula as in the statement. Otherwise we have to take care of

⟨b, β⟩. This is not so easy to calculate since we need more information on the joint

distribution. In general, we have

⟨b, β⟩_t = lim_{∣Π∣→0} ∑_j (b_{t_j} − b_{t_{j−1}})(β_{t_j} − β_{t_{j−1}})   (limit in probability, Π a partition of [0, t]).

Problem 16.3 (Solution) Consider the two-dimensional Itô process X_t = (t, B_t) with parameters

σ ≡ (0, 1)ᵀ and b ≡ (1, 0)ᵀ.

The two-dimensional Itô formula yields

f(X_t) − f(X₀)
= ∫₀ᵗ ( ∂₁f(X_s)σ₁₁ + ∂₂f(X_s)σ₂₁ ) dB_s + ∫₀ᵗ ( ∂₁f(X_s)b₁ + ∂₂f(X_s)b₂ + ½ ∂₂∂₂f(X_s)σ₂₁² ) ds
= ∫₀ᵗ ∂₂f(X_s) dB_s + ∫₀ᵗ ( ∂₁f(X_s) + ½ ∂₂∂₂f(X_s) ) ds
= ∫₀ᵗ (∂f/∂x)(s, B_s) dB_s + ∫₀ᵗ ( (∂f/∂t)(s, B_s) + ½ (∂²f/∂x²)(s, B_s) ) ds.

Let (B_t¹, . . . , B_tᵈ)_{t⩾0} be a BMᵈ and f ∶ [0, ∞) × ℝᵈ → ℝ be a function of class C^{1,2}. Consider the (d+1)-dimensional Itô process X_t = (t, B_t¹, . . . , B_tᵈ) with parameters

σ ∈ ℝ^{(d+1)×d},  σ_{ik} = 1 if i = k + 1 and 0 otherwise,  and  b = (1, 0, . . . , 0)ᵀ.

Then

f(X_t) − f(X₀)
= ∑_{k=1}^d ∫₀ᵗ [ ∑_{j=1}^{d+1} ∂_j f(X_s) σ_{jk} ] dB_s^k + ∑_{j=1}^{d+1} ∫₀ᵗ ∂_j f(X_s) b_j ds + ½ ∑_{i,j=1}^{d+1} ∫₀ᵗ ∂_i∂_j f(X_s) ∑_{k=1}^d σ_{ik}σ_{jk} ds
= ∑_{k=1}^d ∫₀ᵗ ∂_{k+1} f(X_s) dB_s^k + ∫₀ᵗ ∂₁ f(X_s) ds + ½ ∑_{j=2}^{d+1} ∫₀ᵗ ∂_j∂_j f(X_s) ds
= ∑_{k=1}^d ∫₀ᵗ (∂f/∂x_k)(s, B_s¹, . . . , B_sᵈ) dB_s^k + ∫₀ᵗ ( (∂f/∂t)(s, B_s¹, . . . , B_sᵈ) + ½ ∑_{k=1}^d (∂²f/∂x_k²)(s, B_s¹, . . . , B_sᵈ) ) ds.

Problem 16.4 (Solution) Let Bt = (Bt1 , . . . , Btd ) be a BMd and f ∈ C1,2 ((0, ∞) × Rd , R) as in

Theorem 5.6. Then the multidimensional time-dependent Itô’s formula shown in Problem

16.3 yields

M_t^f = f(t, B_t) − f(0, B₀) − ∫₀ᵗ L f(s, B_s) ds
= f(t, B_t) − f(0, B₀) − ∫₀ᵗ ( (∂/∂t) f(s, B_s) + ½ Δ_x f(s, B_s) ) ds
= ∑_{k=1}^d ∫₀ᵗ (∂f/∂x_k)(s, B_s¹, . . . , B_sᵈ) dB_s^k.

By Theorem 14.13 it follows that Mtf is a martingale (note that the assumption (5.5)

guarantees that the integrand is of class L2T !)

Problem 16.5 (Solution) First we show that Xt = et/2 cos Bt is a martingale. We use the time-

dependent Itô’s formula from Problem 16.3. Therefore, we set f (t, x) = et/2 cos x. Then

(∂f/∂t)(t, x) = ½ e^{t/2} cos x,  (∂f/∂x)(t, x) = −e^{t/2} sin x,  (∂²f/∂x²)(t, x) = −e^{t/2} cos x.

Hence we obtain

X_t = f(t, B_t)
= ∫₀ᵗ (∂f/∂x)(s, B_s) dB_s + ∫₀ᵗ ( (∂f/∂t)(s, B_s) + ½ (∂²f/∂x²)(s, B_s) ) ds + 1
= −∫₀ᵗ e^{s/2} sin B_s dB_s + ∫₀ᵗ ( ½ e^{s/2} cos B_s − ½ e^{s/2} cos B_s ) ds + 1
= 1 − ∫₀ᵗ e^{s/2} sin B_s dB_s,

so e^{t/2} cos B_t is a martingale (Theorem 14.13). Next consider Y_t = (B_t + t)e^{−B_t−t/2} and set f(t, x) = (x + t)e^{−x−t/2}. Then

(∂f/∂t)(t, x) = e^{−x−t/2} − ½ (x + t) e^{−x−t/2},
(∂f/∂x)(t, x) = e^{−x−t/2} − (x + t) e^{−x−t/2},
(∂²f/∂x²)(t, x) = −2 e^{−x−t/2} + (x + t) e^{−x−t/2}.

By the time-dependent Itô’s formula we have

Y_t = f(t, B_t) − f(0, 0)


t

=∫ (e−Bs −s/2 − (Bs + s)e−Bs −s/2 ) dBs +

0

t 1 1

+∫ (e−Bs −s/2 − (Bs + s)e−Bs −s/2 + (−2e−Bs −s/2 + (Bs + s)e−Bs −s/2 )) ds

0 2 2

t

=∫ (e−Bs −s/2 − (Bs + s)e−Bs −s/2 ) dBs .

0
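The martingale property of X_t = e^{t/2} cos B_t implies E X_t = X₀ = 1 for every t; since B_t ∼ N(0, t), this can be checked directly by Monte Carlo (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
for t in (0.5, 1.0, 2.0):
    B_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)
    m = np.exp(t / 2) * np.cos(B_t).mean()
    print(t, m)  # E[e^{t/2} cos B_t] = 1 for every t
```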

Problem 16.6 (Solution) a) The stochastic integrals exist if bs /rs and βs /rs are in L2T . As

∣bs /rs ∣ ⩽ 1 we get

T T

∥b/r∥2L2 (λT ⊗P) = ∫ [E (∣bs /rs ∣2 )] ds ⩽ ∫ 1ds = T < ∞.

0 0

Since bs /rs is adapted and has continuous sample paths, it is progressive and so an

element of L2T . Analogously, ∣βs /rs ∣ ⩽ 1 implies βs /rs ∈ L2T .

b) From Theorem 14.13 it follows that

t t

• t ↦ ∫0 bs /rs dbs , t ↦ ∫0 βs /rs dβs are continuous; thus t ↦ Wt is a continuous

process.

t t

• ∫0 bs /rs dbs , ∫0 βs /rs dβs are square integrable martingales, and so is Wt .

• the quadratic variation is

⟨W⟩_t = ∫₀ᵗ b_s²/r_s² ds + ∫₀ᵗ β_s²/r_s² ds = ∫₀ᵗ (b_s² + β_s²)/r_s² ds = ∫₀ᵗ ds = t.

Therefore, Wt is a BM1 .

Note, that the above processes can be used to calculate Lévy’s stochastic area formula,

see Protter [7, Chapter II, Theorem 43]

Problem 16.7 (Solution) The function f = u+iv is analytic, and as such it satisfies the Cauchy–

Riemann equations, see e.g. Rudin [10, Theorem 11.2],

ux = vy and uy = −vx .

u(b_t, β_t) − u(b₀, β₀)
= ∫₀ᵗ u_x(b_s, β_s) db_s + ∫₀ᵗ u_y(b_s, β_s) dβ_s + ½ ∫₀ᵗ ( u_{xx}(b_s, β_s) + u_{yy}(b_s, β_s) ) ds
= ∫₀ᵗ u_x(b_s, β_s) db_s + ∫₀ᵗ u_y(b_s, β_s) dβ_s,

where the last term cancels as uxx = vyx and uyy = −vxy . Theorem 14.13 implies

t t

• t ↦ u(bt , βt ) = ∫0 ux (bs , βs ) dbs + ∫0 uy (bs , βs ) dβs is a continuous process.

t t

• ∫0 ux (bs , βs ) dbs , ∫0 uy (bs , βs ) dβs are square integrable martingales, and so u(bt , βt )

is a square integrable martingale.

• the quadratic variation is given by

⟨u(b, β)⟩_t = ∫₀ᵗ u_x²(b_s, β_s) ds + ∫₀ᵗ u_y²(b_s, β_s) ds = ∫₀ᵗ 1 ds = t,

using the assumption u_x² + u_y² = 1.

Due to Lévy’s characterization of a BM1 , Theorem 9.12 or 17.5, we know that u(bt , βt )

is a BM1 . Analogously, we see that v(bt , βt ) is also a BM1 . Just note that, due to the

Cauchy–Riemann equations we get from u2x + u2y = 1 also vy2 + vx2 = 1.

The quadratic covariation is (we drop the arguments, for brevity):

1

⟨u, v⟩t = (⟨u + v⟩t − ⟨u − v⟩t )

4

1 t t t t

= (∫ (ux + vx )2 ds + ∫ (uy + vy )2 ds − ∫ (ux − vx )2 ds − ∫ (uy − vy )2 ds)

4 0 0 0 0

t

= ∫ (ux vx + uy vy ) ds

0

t

= ∫ (−vy uy + uy vy ) ds = 0.

0

Applying the two-dimensional Itô formula to the function g(u, v) = e^{i(ξu+ηv)} and s < t yields

g(u_t, v_t) − g(u_s, v_s) = iξ ∫ₛᵗ g(u_r, v_r) du_r + iη ∫ₛᵗ g(u_r, v_r) dv_r − ½ (ξ² + η²) ∫ₛᵗ g(u_r, v_r) dr,

as the quadratic covariation ⟨u, v⟩t = 0. Since ∣g∣ ⩽ 1 and since g(ut , vt ) is progressive,

the integrand is in L2T and the above stochastic integrals exist. From Theorem 14.13 we

deduce that

t t

E (∫ g(ur , vr ) dur 1F ) = 0 and E (∫ g(ur , vr ) dvr 1F ) = 0.

s s

for all F ∈ σ(ur , vr ∶ r ⩽ s) =∶ Fs . If we multiply the above equality by e−i(ξus +ηvs ) 1F and

take expectations, we get

E( g(u_t − u_s, v_t − v_s) 1_F ) = P(F) − ½ (ξ² + η²) ∫ₛᵗ E( g(u_r − u_s, v_r − v_s) 1_F ) dr,

i.e. Φ(t) = P(F) − ½(ξ² + η²) ∫ₛᵗ Φ(r) dr with Φ(t) ∶= E( g(u_t − u_s, v_t − v_s) 1_F ). Since this integral equation has a unique solution (use Gronwall's lemma, Theorem A.43), we get

E( e^{i(ξ(u_t−u_s) + η(v_t−v_s))} 1_F ) = Φ(t) = P(F) e^{−½(ξ²+η²)(t−s)}.

From this we deduce with Lemma 5.4 that (u(b_t, β_t), v(b_t, β_t)) is a BM².

Note that the above calculation is essentially the proof of Lévy’s characterization theorem.

Only a few modifications are necessary for the proof of the multidimensional version, see

e.g. Karatzas, Shreve [5, Theorem 3.3.16].

t t

Problem 16.8 (Solution) Let Xt = ∫0 σ(s) dBs + ∫0 b(s) ds be an d-dimensional Itô process.

Assuming that f = u + iv, and thus u = Re f = ½ f + ½ f̄ and v = Im f = (1/(2i)) f − (1/(2i)) f̄, are C²-functions, we may apply the real d-dimensional Itô formula (16.9) to the functions u, v ∶ ℝᵈ → ℝ,

f (Xt ) − f (X0 )

= u(Xt ) − u(X0 ) + i(v(Xt ) − v(X0 ))

t t 1 t

=∫ ∇u(Xs )⊺ σ(s) dBs + ∫ ∇u(Xs )⊺ b(s) ds + ⊺ 2

∫ trace(σ(s) D u(Xs )σ(s)) ds

0 0 2 0

t t 1 t

+ i (∫ ∇v(Xs )⊺ σ(s) dBs + ∫ ∇v(Xs )⊺ b(s) ds + ∫ trace(σ(s)⊺ D2 v(Xs )σ(s)) ds)

0 0 2 0

t t 1 t

= ∫ ∇f (Xs ) σ(s) dBs + ∫ ∇f (Xs ) b(s) ds + ∫ trace(σ(s)⊺ D2 f (Xs )σ(s)) ds,

⊺ ⊺

0 0 2 0

by the linearity of the differential operators and the (stochastic) integral.

Problem 16.9 (Solution) a) By definition we have supp χ ⊂ [−1, 1] hence it is obvious that

for χn (x) ∶= nχ(nx) we have supp χn ⊂ [−1/n, 1/n]. Substituting y = nx we get

1/n 1/n 1

∫ χn (x) dx = ∫ nχ(nx) dx = ∫ χ(y) dy = 1

−1/n −1/n −1

b) We have

∣∂^k f_n(x)∣ = ∣f ⋆ (∂^k χ_n)(x)∣ = ∣ ∫_{B(x,1/n)} f(y) ∂^k χ_n(x − y) dy ∣ ⩽ sup_{y∈B(x,1/n)} ∣f(y)∣ ⋅ ∫_ℝ ∣∂^k χ_n(y)∣ dy < ∞.

d) For x ∈ ℝ we have

∣f ⋆ χ_n(x) − f(x)∣ = ∣ ∫_ℝ ( f(x − y) − f(x) ) χ_n(y) dy ∣ ⩽ sup_{∣y∣⩽1/n} ∣f(x − y) − f(x)∣.

This shows that lim_{n→∞} ∣f ⋆ χ_n(x) − f(x)∣ = 0, i.e. lim_{n→∞} f ⋆ χ_n(x) = f(x), at all x where f is continuous.

c) Using the above result and taking the supremum over all x ∈ ℝ we get

sup_{x∈ℝ} ∣f ⋆ χ_n(x) − f(x)∣ ⩽ sup_{x∈ℝ} sup_{∣y∣⩽1/n} ∣f(x − y) − f(x)∣ ÐÐ→ 0

if f is uniformly continuous.
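The mollifier estimates can be tried out numerically. This is a sketch (assuming NumPy; the quadrature resolutions are arbitrary choices) with the standard bump function χ and the 1-Lipschitz function f(x) = ∣x∣, for which the sup-norm error of f ⋆ χ_n is at most 1/n.

```python
import numpy as np

def chi(x):
    """The standard bump function, supported in [-1, 1] (not yet normalized)."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

y0 = np.linspace(-1.0, 1.0, 100_001)
Z = chi(y0).sum() * (y0[1] - y0[0])        # normalization so that int chi = 1

def mollify(f, x, n, m=2_001):
    """(f * chi_n)(x) with chi_n(y) = n*chi(n*y)/Z, via a Riemann-sum quadrature."""
    y = np.linspace(-1.0 / n, 1.0 / n, m)
    w = n * chi(n * y) / Z * (y[1] - y[0])
    return f(x[:, None] - y[None, :]) @ w

f = np.abs                                  # continuous, not differentiable at 0
x = np.linspace(-2.0, 2.0, 401)
err = np.max(np.abs(mollify(f, x, 50) - f(x)))
print(err)  # bounded by sup_{|y|<=1/n} |f(x-y)-f(x)| <= 1/50, as f is 1-Lipschitz
```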

Problem 16.10 (Solution) We follow the hint and use Lévy’s characterization of a BM1 , The-

orem 9.12 or 17.5.

• t ↦ βt is a continuous process.

• β is a square integrable martingale (Theorem 14.13), and its quadratic variation is

⟨β⟩_t = ⟨ ∫₀^⋅ sgn(B_s) dB_s ⟩_t = ∫₀ᵗ (sgn(B_s))² ds = ∫₀ᵗ ds = t.

Thus, β is a BM1 .
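A simulation supports this conclusion (a sketch assuming NumPy; we use left-point sums for ∫ sgn(B_s) dB_s, and the value chosen for sgn(0) is immaterial in the limit): β_T should have mean 0 and variance T, like a Brownian motion at time T.

```python
import numpy as np

rng = np.random.default_rng(11)
T, n, n_paths = 1.0, 2_000, 20_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B = np.concatenate([np.zeros((n_paths, 1)), dB.cumsum(axis=1)], axis=1)
sgn = np.where(B[:, :-1] > 0, 1.0, -1.0)    # sgn(B_s) at the left endpoints
beta_T = (sgn * dB).sum(axis=1)             # left-point sums for int_0^T sgn(B_s) dB_s
print(beta_T.mean(), beta_T.var())          # should match N(0, T): mean 0, variance 1
```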


17 Applications of Itô’s Formula

Problem 17.1 (Solution) Lemma. Let (Bt , Ft )t⩾0 be a BMd , f = (f1 , . . . , fd ), fj ∈ L2P (λT ⊗ P)

for all T > 0, and assume that ∣fj (s, ω)∣ ⩽ C for some C > 0 and all s ⩾ 0, 1 ⩽ j ⩽ d, and

ω ∈ Ω. Then

exp( ∑_{j=1}^d ∫₀ᵗ f_j(s) dB_s^j − ½ ∑_{j=1}^d ∫₀ᵗ f_j²(s) ds ),  t ⩾ 0,    (17.1)

is a martingale for the filtration (Ft )t⩾0 .

Proof. Set X_t = ∑_{j=1}^d ∫₀ᵗ f_j(s) dB_s^j − ½ ∑_{j=1}^d ∫₀ᵗ f_j²(s) ds. Itô's formula, Theorem 16.5, yields

e^{X_t} − 1 = ∑_{j=1}^d ∫₀ᵗ e^{X_s} f_j(s) dB_s^j − ½ ∑_{j=1}^d ∫₀ᵗ e^{X_s} f_j²(s) ds + ½ ∑_{j=1}^d ∫₀ᵗ e^{X_s} f_j²(s) ds
= ∑_{j=1}^d ∫₀ᵗ exp( ∑_{k=1}^d ∫₀ˢ f_k(r) dB_r^k − ½ ∑_{k=1}^d ∫₀ˢ f_k²(r) dr ) f_j(s) dB_s^j
= ∑_{j=1}^d ∫₀ᵗ ∏_{k=1}^d exp( ∫₀ˢ f_k(r) dB_r^k − ½ ∫₀ˢ f_k²(r) dr ) f_j(s) dB_s^j.

If we can show that the integrand is in L2P (λT ⊗ P) for every T > 0, then Theorem 14.13

applies and shows that the stochastic integral, hence eXt , is a martingale.

We will see that we can reduce the d-dimensional setting to a one-dimensional setting.

The essential step in the proof is the analogue of the estimate on page 250, line 6 from

above. In the d-dimensional setting we have for each k = 1, . . . , d

E[ ∣ e^{∑_{j=1}^d ∫₀ᵀ f_j(r) dB_r^j − ½ ∑_{j=1}^d ∫₀ᵀ f_j²(r) dr} f_k(T) ∣² ] ⩽ C² E[ e^{2 ∑_{j=1}^d ∫₀ᵀ f_j(r) dB_r^j} ]
= C² E[ ∏_{j=1}^d e^{2 ∫₀ᵀ f_j(r) dB_r^j} ]
⩽ C² ∏_{j=1}^d ( E[ e^{2d ∫₀ᵀ f_j(r) dB_r^j} ] )^{1/d}.

In the last step we used the generalized Hölder inequality

∫ ∏_{k=1}^n φ_k dµ ⩽ ∏_{k=1}^n ( ∫ ∣φ_k∣^{p_k} dµ )^{1/p_k}  for all (p₁, . . . , p_n) ∈ [1, ∞)ⁿ with ∑_{k=1}^n 1/p_k = 1,

with n = d and p₁ = . . . = p_d = d. Now the one-dimensional argument with d f_j playing the role of f shows (cf. page 250, line 9 from above)

E[ ∣ e^{∑_{j=1}^d ∫₀ᵀ f_j(r) dB_r^j − ½ ∑_{j=1}^d ∫₀ᵀ f_j²(r) dr} f_k(T) ∣² ] ⩽ C² ∏_{j=1}^d ( E[ e^{2d ∫₀ᵀ f_j(r) dB_r^j} ] )^{1/d} ⩽ C² e^{2dC²T} < ∞.

Problem 17.2 (Solution) As for a Brownian motion one can see that the independent incre-

ments property of a Poisson process is equivalent to saying that Nt − Ns á FsN for all s ⩽ t,

cf. Lemma 2.10 or Section 5.1. Thus, we have for s ⩽ t

E( N_t − t ∣ F_s^N ) = E( N_t − N_s − (t − s) ∣ F_s^N ) + N_s − s    (pull out)
= E( N_t − N_s ) − (t − s) + N_s − s    (N_t − N_s á F_s^N)
= E( N_{t−s} ) − (t − s) + N_s − s    (N_t − N_s ∼ N_{t−s})
= N_s − s.

Observe that

(N_t − t)² − t = ( N_t − N_s − (t − s) + (N_s − s) )² − t
= (N_t − N_s − (t − s))² + (N_s − s)² + 2(N_s − s)(N_t − N_s − t + s) − t.

Thus,

(N_t − t)² − t − [ (N_s − s)² − s ] = (N_t − N_s − (t − s))² + 2(N_s − s)(N_t − N_s − t + s) − (t − s).

Now take E(⋯ ∣ F_s^N) in the last equality and observe that N_t − N_s á F_s^N. Then

E( (N_t − t)² − t − [(N_s − s)² − s] ∣ F_s^N )
= E[ (N_t − N_s − (t − s))² ] + 2(N_s − s) E( N_t − N_s − t + s ) − (t − s)
= V N_{t−s} + 2(N_s − s) ⋅ 0 − (t − s)
= t − s − (t − s) = 0.
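Both martingale statements imply E(N_t − t) = 0 and E[(N_t − t)² − t] = 0 for every t. Using N_t ∼ Poisson(t) for a unit-rate Poisson process, this is easy to check by simulation (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n_paths = 3.0, 1_000_000
N_t = rng.poisson(t, size=n_paths)      # N_t ~ Poisson(t) for a unit-rate process
M = N_t - t                             # compensated Poisson process at time t
print(M.mean(), (M ** 2 - t).mean())    # both sample means should be close to 0
```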

Problem 17.3 (Solution) Solution 1: As a first step one reduces, using the martingale property of the density e^{ξB(t)−½ξ²t}, to the case T = tₙ. Then

Q( W(t_j) ∈ A_j, ∀j = 1, . . . , n ) = ∫ ∏_{j=1}^n 1_{A_j}(W(t_j)) dQ = ∫ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(T) − ½ξ²T} dP.

By the tower property and the fact that e^{ξB(t) − ½ξ²t} is a martingale we get

∫ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(T)−½ξ²T} dP
= E[ E( ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(T)−½ξ²T} ∣ F_{t_n} ) ]
= E[ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) E( e^{ξB(T)−½ξ²T} ∣ F_{t_n} ) ]
= E[ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(t_n)−½ξ²t_n} ]
= E[ E( ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(t_n)−½ξ²t_n} ∣ F_{t_{n−1}} ) ]
= E[ ∏_{j=1}^{n−1} 1_{A_j}(B(t_j) − ξt_j) e^{ξB(t_{n−1})−½ξ²t_{n−1}} × E( 1_{A_n}(B(t_n) − ξt_n) e^{ξ(B(t_n)−B(t_{n−1})) − ½ξ²(t_n−t_{n−1})} ∣ F_{t_{n−1}} ) ].

Now, since B(t_n) − B(t_{n−1}) á F_{t_{n−1}}, we get, with y ∶= B(t_{n−1}) − ξt_{n−1} and Δ ∶= t_n − t_{n−1},

E( 1_{A_n}(B(t_n) − ξt_n) e^{ξ(B(t_n)−B(t_{n−1})) − ½ξ²Δ} ∣ F_{t_{n−1}} )
= E( 1_{A_n}( (B(t_n) − B(t_{n−1})) − ξΔ + y ) e^{ξ(B(t_n)−B(t_{n−1})) − ½ξ²Δ} ) ∣_{y = B(t_{n−1}) − ξt_{n−1}}
= (1/√(2πΔ)) ∫ 1_{A_n}(x − ξΔ + y) e^{ξx − ½ξ²Δ} e^{−x²/(2Δ)} dx
= (1/√(2πΔ)) ∫ 1_{A_n}(x − ξΔ + y) e^{−(x − ξΔ)²/(2Δ)} dx
= (1/√(2πΔ)) ∫ 1_{A_n}(z + y) e^{−z²/(2Δ)} dz
= E( 1_{A_n}( B(t_n) − B(t_{n−1}) + y ) ).


Iterating this argument, we find

Q( W(t_j) ∈ A_j, ∀j = 1, . . . , n ) = E ∏_{j=1}^n 1_{A_j}( ∑_{k=1}^j (B(t_k) − B(t_{k−1})) ) = P( B(t_j) ∈ A_j, ∀j = 1, . . . , n ).

Solution 2: As in the first part of Solution 1 we see that we can assume that T = tn . Since

we know the joint distribution of (B(t1 ), . . . , B(tn )), cf. (2.10b), we get (using x0 = t0 = 0)

∫ ∏_{j=1}^n 1_{A_j}(B(t_j) − ξt_j) e^{ξB(t_n) − ½ξ²t_n} dP
= ∫⋯∫ ∏_{j=1}^n 1_{A_j}(x_j − ξt_j) e^{ξx_n − ½ξ²t_n} e^{−½ ∑_{j=1}^n (x_j − x_{j−1})²/(t_j − t_{j−1})} dµ(x)
= ∫⋯∫ ∏_{j=1}^n [ 1_{A_j}(x_j − ξt_j) e^{−½ (x_j − x_{j−1})²/(t_j − t_{j−1})} ] e^{∑_{j=1}^n ( ξ(x_j − x_{j−1}) − ½ξ²(t_j − t_{j−1}) )} dµ(x)
= ∫⋯∫ ∏_{j=1}^n [ 1_{A_j}(x_j − ξt_j) e^{−( (x_j − x_{j−1}) − ξ(t_j − t_{j−1}) )²/(2(t_j − t_{j−1}))} ] dµ(x)
= ∫⋯∫ ∏_{j=1}^n [ 1_{A_j}(z_j) e^{−(z_j − z_{j−1})²/(2(t_j − t_{j−1}))} ] dµ(z)    (z_j ∶= x_j − ξt_j, z₀ = 0)
= P( B(t₁) ∈ A₁, . . . , B(t_n) ∈ A_n ),

where dµ(x) ∶= dx₁⋯dx_n / ( (2π)^{n/2} ∏_{j=1}^n √(t_j − t_{j−1}) ); note that ξx_n − ½ξ²t_n = ∑_j ( ξ(x_j − x_{j−1}) − ½ξ²(t_j − t_{j−1}) ) and that completing the square gives −½(x_j − x_{j−1})²/(t_j − t_{j−1}) + ξ(x_j − x_{j−1}) − ½ξ²(t_j − t_{j−1}) = −((x_j − x_{j−1}) − ξ(t_j − t_{j−1}))²/(2(t_j − t_{j−1})).

Problem 17.4 (Solution) Let W_t = B_t + αt and let Q = β_t P be the measure from Girsanov's theorem under which (W_s)_{s⩽t} is a Brownian motion, with density β_t = e^{−αB_t − ½α²t}; note that 1/β_t = e^{αB_t + ½α²t} = e^{αW_t − ½α²t}. Then

P( B_t + αt ⩽ x, sup_{s⩽t}(B_s + αs) ⩽ y )
= ∫ 1_{(−∞,x]}(W_t) 1_{(−∞,y]}( sup_{s⩽t} W_s ) (1/β_t) dQ
= e^{−½α²t} ∫_ℝ 1_{(−∞,x]}(ξ) e^{αξ} Q( W_t ∈ dξ, sup_{s⩽t} W_s ⩽ y ).

By the reflection principle,

Q( sup_{s⩽t} W_s < y, W_t ∈ dξ ) = lim_{a→−∞} Q( inf_{s⩽t} W_s > a, sup_{s⩽t} W_s < y, W_t ∈ dξ )
= (dξ/√(2πt)) [ e^{−ξ²/(2t)} − e^{−(ξ−2y)²/(2t)} ]    (6.19)

and therefore

P( B_t + αt ⩽ x, sup_{s⩽t}(B_s + αs) ⩽ y )
= ∫_{−∞}^x e^{αξ} e^{−½α²t} (1/√(2πt)) ( e^{−ξ²/(2t)} − e^{−(ξ−2y)²/(2t)} ) dξ
= (1/√(2πt)) ∫_{−∞}^x ( e^{−(ξ−αt)²/(2t)} − e^{2αy} e^{−(ξ−2y−αt)²/(2t)} ) dξ
= Φ( (x − αt)/√t ) − e^{2αy} Φ( (x − 2y − αt)/√t ).
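The joint distribution formula can be verified by Monte Carlo (a sketch, assuming NumPy; the time discretization slightly overestimates the probability, since the grid maximum underestimates the true supremum):

```python
import numpy as np
from math import erf, sqrt, exp

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

alpha, t, x, y = 0.5, 1.0, 0.3, 0.8
exact = Phi((x - alpha * t) / sqrt(t)) - exp(2 * alpha * y) * Phi((x - 2 * y - alpha * t) / sqrt(t))

rng = np.random.default_rng(9)
n = 2_000
dt = t / n
hits, total = 0, 0
for _ in range(20):                        # batches to keep memory moderate
    X = rng.normal(alpha * dt, np.sqrt(dt), size=(10_000, n)).cumsum(axis=1)
    hits += ((X.max(axis=1) <= y) & (X[:, -1] <= x)).sum()
    total += 10_000
print(exact, hits / total)
```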

Problem 17.5 (Solution) a) Since X_t has continuous sample paths we find that

τ̂_b = inf{ t ⩾ 0 ∶ X_t ⩾ b }.

Moreover, we have

{ τ̂_b ⩽ t } = { sup_{s⩽t} X_s ⩾ b }.

Indeed,

ω ∈ { sup_{s⩽t} X_s ⩾ b } Ô⇒ ∃ s ⩽ t ∶ X_s(ω) ⩾ b Ô⇒ τ̂_b(ω) ⩽ t Ô⇒ ω ∈ { τ̂_b ⩽ t },

and so { τ̂_b ⩽ t } ⊃ { sup_{s⩽t} X_s ⩾ b }. Conversely,

ω ∈ { τ̂_b ⩽ t } Ô⇒ τ̂_b(ω) ⩽ t Ô⇒ X_{τ̂_b(ω)}(ω) ⩾ b and τ̂_b(ω) ⩽ t Ô⇒ sup_{s⩽t} X_s(ω) ⩾ b,

and so { τ̂_b ⩽ t } ⊂ { sup_{s⩽t} X_s ⩾ b }.

We have, using part a) and the fact that sup_{s⩽t} X_s has a continuous distribution,

P( τ̂_b > t ) = P( sup_{s⩽t} X_s < b ) = P( sup_{s⩽t} X_s ⩽ b ) = P( X_t ⩽ b, sup_{s⩽t} X_s ⩽ b )
= Φ( (b − αt)/√t ) − e^{2αb} Φ( (−b − αt)/√t )    (by Problem 17.4)
= Φ( b/√t − α√t ) − e^{2αb} Φ( −b/√t − α√t ).


Differentiating in t yields

−(d/dt) P( τ̂_b > t ) = e^{2αb} ( b/(2t√t) − α/(2√t) ) Φ′( −b/√t − α√t ) + ( b/(2t√t) + α/(2√t) ) Φ′( b/√t − α√t )
= (1/√(2π)) ( e^{2αb} ( b/(2t√t) − α/(2√t) ) e^{−(b+αt)²/(2t)} + ( b/(2t√t) + α/(2√t) ) e^{−(b−αt)²/(2t)} )
= (1/√(2π)) ( ( b/(2t√t) − α/(2√t) ) e^{−(b−αt)²/(2t)} + ( b/(2t√t) + α/(2√t) ) e^{−(b−αt)²/(2t)} )
= (1/√(2π)) ( 2b/(2t√t) ) e^{−(b−αt)²/(2t)}
= ( b/(t√(2πt)) ) e^{−(b−αt)²/(2t)},

where we used e^{2αb} e^{−(b+αt)²/(2t)} = e^{−(b−αt)²/(2t)}; this is the density of τ̂_b.

Moreover,

P( τ̂_b > t ) = Φ( (b − αt)/√t ) − e^{2αb} Φ( (−b − αt)/√t )
ÐÐ→ Φ(−∞) − e^{2αb} Φ(−∞) = 0          if α > 0,
     Φ(0) − e⁰ Φ(0) = 0                 if α = 0,
     Φ(∞) − e^{2αb} Φ(∞) = 1 − e^{2αb}  if α < 0,

as t → ∞. Therefore, we get

P( τ̂_b < ∞ ) = 1  if α ⩾ 0,  and  P( τ̂_b < ∞ ) = e^{2αb}  if α < 0.
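The limits can be double-checked by evaluating P(τ̂_b > t) at a large t (a sketch using only the standard library; Φ is expressed via math.erf):

```python
from math import erf, sqrt, exp

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def surv(b, alpha, t):
    """P(tau_b > t) for X_s = B_s + alpha*s, from the formula above."""
    return Phi(b / sqrt(t) - alpha * sqrt(t)) - exp(2 * alpha * b) * Phi(-b / sqrt(t) - alpha * sqrt(t))

b = 1.0
for alpha in (0.5, 0.0, -0.5):
    # 1 - surv(b, alpha, t) approaches P(tau_b < infinity) as t grows
    print(alpha, 1.0 - surv(b, alpha, 10_000.0))
# alpha >= 0: tends to 1;  alpha < 0: tends to e^{2*alpha*b}
```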

Problem 17.6 (Solution) Basically, this is done on page 260, first few lines. If you want to

be a bit more careful, you should treat the real and imaginary parts of exp[iξBT ] =

cos(ξBT ) + i sin(ξBt ) separately. Let us do this for the real part.

We apply the 2-dimensional Itô formula to the process X_t = (t, B_t) and with f(t, x) = cos(ξx)e^{tξ²/2} (see also Problem 16.3). Since

∂_t f(t, x) = (ξ²/2) cos(ξx) e^{tξ²/2},  ∂_x f(t, x) = −ξ sin(ξx) e^{tξ²/2},  ∂_x² f(t, x) = −ξ² cos(ξx) e^{tξ²/2},

we get

cos(ξB_T) e^{Tξ²/2} − 1 = (ξ²/2) ∫₀ᵀ cos(ξB_s) e^{sξ²/2} ds − ξ ∫₀ᵀ sin(ξB_s) e^{sξ²/2} dB_s − (ξ²/2) ∫₀ᵀ cos(ξB_s) e^{sξ²/2} ds
= −ξ ∫₀ᵀ sin(ξB_s) e^{sξ²/2} dB_s.

Thus,

cos(ξB_T) = e^{−Tξ²/2} − ξ ∫₀ᵀ sin(ξB_s) e^{(s−T)ξ²/2} dB_s.

Since the integrand of the stochastic integral is continuous and bounded, it is clear that it is in L²_P(λ_T ⊗ P). Hence cos(ξB_t) ∈ H_T².

Problem 17.7 (Solution) Because of the properties of conditional expectations we have for s ⩽ t

M áG∞

E (Mt ∣ Hs ) = E (Mt ∣ σ(Fs , Gs )) = E (Mt ∣ Fs ) = Ms .

Thus, (Mt , Ht )t⩾0 is still a martingale; (Bt , Ht )t⩾0 is treated in a similar way.

we get

inf{ t ∶ a(t) ⩾ s } ⩾ inf{ t ∶ a(t) > s − ε } ⩾ inf{ t ∶ a(t) ⩾ s − ε }

and hence

inf{ t ∶ a(t) ⩾ s } ⩾ lim_{ε↓0} inf{ t ∶ a(t) > s − ε } ⩾ lim_{ε↓0} inf{ t ∶ a(t) ⩾ s − ε } = lim_{ε↓0} τ(s − ε) = τ(s−).

Thus, inf{ t ∶ a(t) ⩾ s } ⩾ τ(s−). Assume that inf{ t ∶ a(t) ⩾ s } > τ(s−). Then

Assume that τ(s) ⩾ t. Then a(t−) = inf{ s ⩾ 0 ∶ τ(s) ⩾ t } ⩽ s. On the other hand, by 17.14 d),

a(t−) ⩽ s Ô⇒ ∀ε > 0 ∶ a(t − ε) ⩽ s Ô⇒ ∀ε > 0 ∶ τ(s) > t − ε Ô⇒ τ(s) ⩾ t.

n⩾1 n⩾1

17.14 A.15

c) s+1/n

n⩾1 n⩾1

time.


Problem 17.10 (Solution) Solution 1: Assume that f ∈ C2 . Then we can apply Itô’s formula.

Use Itô’s formula for the deterministic process Xt = f (t) and apply it to the function xa

(we assume that f ⩾ 0 to make sure that f a is defined for all a > 0):

f^a(t) − f^a(0) = ∫₀ᵗ [ (d/dx) x^a ]_{x=f(s)} df(s) = ∫₀ᵗ a f^{a−1}(s) df(s).

This proves that the primitive ∫ f a−1 df = f a /a. The rest is an approximation argument

(f ∈ C1 is pretty immediate).

Solution 2: Any absolutely continuous function has a Lebesgue a.e. defined derivative f′ and f(t) = f(0) + ∫₀ᵗ f′(s) ds. Thus,

∫₀ᵗ f^{a−1}(s) df(s) = ∫₀ᵗ f^{a−1}(s) f′(s) ds = ∫₀ᵗ (1/a) (d/ds) f^a(s) ds = [ f^a(s)/a ]₀ᵗ = ( f^a(t) − f^a(0) )/a.

(s)f (s) ds = ∫ f (s) ds = [ ] = .

0 0 0 a ds a 0 a

Lemma. Let (B_t)_{t⩾0} be a d-dimensional Brownian motion and f₁, . . . , f_d ∈ L²_P(λ_T ⊗ P) for all T > 0. Then, we have for 2 ⩽ p < ∞

E[ ( ∫₀ᵀ ∑_{k=1}^d ∣f_k(s)∣² ds )^{p/2} ] ≍ E[ sup_{t⩽T} ∣ ∑_k ∫₀ᵗ f_k(s) dB_s^k ∣^p ].    (17.2)

Proof. Let X_t = ∑_k ∫₀ᵗ f_k(s) dB_s^k. Then we have

⟨X⟩_t = ⟨ ∑_k ∫₀^⋅ f_k(s) dB_s^k, ∑_l ∫₀^⋅ f_l(s) dB_s^l ⟩_t = ∑_{k,l} ∫₀ᵗ f_k(s) f_l(s) d⟨B^k, B^l⟩_s = ∑_k ∫₀ᵗ f_k²(s) ds,

since ⟨B^k, B^l⟩_t = δ_{kl} t.

With these notations, the proof of Theorem 17.16 goes through almost unchanged and we get the inequalities for p ⩾ 2.

Remark: Often one needs only one direction (as we do later in the book) and one can use 17.18 directly, without going through the proof again. Note that

∣ ∑_{k=1}^d ∫₀ᵗ f_k(s) dB_s^k ∣^p ⩽ ( ∑_{k=1}^d ∣ ∫₀ᵗ f_k(s) dB_s^k ∣ )^p ⩽ c_{d,p} ∑_{k=1}^d ∣ ∫₀ᵗ f_k(s) dB_s^k ∣^p.

Thus, by (17.18)

E[ sup_{t⩽T} ∣ ∑_{k=1}^d ∫₀ᵗ f_k(s) dB_s^k ∣^p ] ⩽ c_{d,p} ∑_{k=1}^d E[ sup_{t⩽T} ∣ ∫₀ᵗ f_k(s) dB_s^k ∣^p ]
≍ c_{d,p} ∑_{k=1}^d E[ ( ∫₀ᵀ ∣f_k(s)∣² ds )^{p/2} ]
≍ c_{d,p} E[ ( ∫₀ᵀ ∑_{k=1}^d ∣f_k(s)∣² ds )^{p/2} ].


18 Stochastic Differential Equations

Problem 18.1 (Solution) Consider the Itô process dX_t = b(t) dt + σ(t) dB_t, where b, σ are non-random coefficients such that the corresponding (stochastic) integrals exist. Obviously,

(dX_t)² = σ²(t) (dB_t)² = σ²(t) dt,

and Itô's formula yields

e^{iξX_t} − e^{iξX_s} = ∫ₛᵗ iξ e^{iξX_r} b(r) dr + ∫ₛᵗ iξ e^{iξX_r} σ(r) dB_r − ½ ∫ₛᵗ ξ² e^{iξX_r} σ²(r) dr.

Now take any F ∈ F_s and multiply both sides of the above formula by e^{−iξX_s} 1_F. We get

t t

eiξ(Xt −Xs ) 1F − 1F = ∫ iξeiξ(Xr −Xs ) 1F b(r) dr + ∫ iξeiξ(Xr −Xs ) 1F σ(r) dBr

s s

1 t 2 iξ(Xr −Xs )

− ∫ ξ e 1F σ 2 (r) dr.

2 s

Taking expectations gives

t

E (eiξ(Xt −Xs ) 1F ) = P(F ) + ∫ iξ E (eiξ(Xr −Xs ) 1F ) b(r) dr

s

1 t 2 iξ(Xr −Xs )

− ∫ ξ E (e 1F ) σ 2 (r) dr

2 s

t 1

= P(F ) + ∫ (iξb(r) − ξ 2 σ 2 (r)) E (eiξ(Xr −Xs ) 1F ) dr.

s 2

Define φs,t (ξ) ∶= E (eiξ(Xt −Xs ) 1F ). Then the integral equation

φ_{s,t}(ξ) = P(F) + ∫ₛᵗ ( iξ b(r) − ½ ξ² σ²(r) ) φ_{s,r}(ξ) dr

has the unique solution (use Gronwall’s lemma, cf. also the proof of Theorem 17.5)

φ_{s,t}(ξ) = P(F) e^{ iξ ∫ₛᵗ b(r) dr − ½ ξ² ∫ₛᵗ σ²(r) dr }

and so

E( e^{iξ(X_t − X_s)} 1_F ) = P(F) e^{ iξ ∫ₛᵗ b(r) dr − ½ ξ² ∫ₛᵗ σ²(r) dr }.    (*)

Taking s = 0 and F = Ω shows

X_t ∼ N(µ_t, σ_t²),  µ_t = ∫₀ᵗ b(r) dr,  σ_t² = ∫₀ᵗ σ²(r) dr.


Since F ∈ F_s was arbitrary, (*) shows that X_t − X_s á F_s. Iterating this argument yields

E e^{ i ∑_{j=1}^n ξ_j (X_{t_j} − X_{t_{j−1}}) } = ∏_{j=1}^n exp( iξ_j ∫_{t_{j−1}}^{t_j} b(r) dr − ½ ξ_j² ∫_{t_{j−1}}^{t_j} σ²(r) dr ),

i.e. (X_{t₁}, X_{t₂} − X_{t₁}, . . . , X_{t_n} − X_{t_{n−1}}) is a Gaussian random vector with independent components. Since X_{t_k} = ∑_{j=1}^k (X_{t_j} − X_{t_{j−1}}) we see that (X_{t₁}, . . . , X_{t_n}) is a Gaussian random vector.

For s ⩽ t,

E(X_s X_t) = E( X_s(X_t − X_s) ) + E(X_s²) = E(X_s²) + E X_s E(X_t − X_s)
= E(X_s²) + E X_s E X_t − (E X_s)²
= V X_s + E X_s E X_t
= ∫₀ˢ σ²(r) dr + ∫₀ˢ b(r) dr ∫₀ᵗ b(r) dr.

In fact, since the mean is not zero, it would have been more elegant to compute the

covariance

s

Cov(Xs , Xt ) = E(Xs − µs )(Xt − µt ) = E(Xs Xt ) − E Xs E Xt = V Xs = ∫ σ 2 (r) dr.

0

= E (eiξX ) − E (eiξX ) P(F )

= E (eiξX ) P(F c )


Problem 18.2 (Solution) a) With t_j = j 2⁻ⁿ, the recursion gives

X_n(t_k) = (1 − 2⁻ⁿ⁻¹) X_n(t_{k−1}) + B(t_k) − B(t_{k−1})
= (1 − 2⁻ⁿ⁻¹)[ (1 − 2⁻ⁿ⁻¹) X_n(t_{k−2}) + B(t_{k−1}) − B(t_{k−2}) ] + [B(t_k) − B(t_{k−1})]
⋮
= (1 − 2⁻ⁿ⁻¹)^k X_n(t₀) + (1 − 2⁻ⁿ⁻¹)^{k−1} [B(t₁) − B(t₀)] + . . . + (1 − 2⁻ⁿ⁻¹)[B(t_{k−1}) − B(t_{k−2})] + [B(t_k) − B(t_{k−1})]
= (1 − 2⁻ⁿ⁻¹)^k A + ∑_{j=0}^{k−1} (1 − 2⁻ⁿ⁻¹)^j [B(t_{k−j}) − B(t_{k−j−1})].

Observe that B(t_j) − B(t_{j−1}) ∼ N(0, 2⁻ⁿ) for all j and A ∼ N(0, 1). Because of the independence we get, with k = 2ⁿ⁻¹ (i.e. t_k = ½),

X_n(½) ∼ N( 0, (1 − 2⁻ⁿ⁻¹)^{2ⁿ} + ∑_{j=0}^{2ⁿ⁻¹−1} (1 − 2⁻ⁿ⁻¹)^{2j} ⋅ 2⁻ⁿ ).

Using

lim_{n→∞} (1 − 2⁻ⁿ⁻¹)^{2ⁿ} = e^{−1/2}

and

∑_{j=0}^{2ⁿ⁻¹−1} (1 − 2⁻ⁿ⁻¹)^{2j} ⋅ 2⁻ⁿ = ( 1 − (1 − 2⁻ⁿ⁻¹)^{2ⁿ} ) / ( 1 − (1 − 2⁻ⁿ⁻¹)² ) ⋅ 2⁻ⁿ = ( 1 − (1 − 2⁻ⁿ⁻¹)^{2ⁿ} ) / ( 1 − 2⁻ⁿ⁻² ) ÐÐÐ→ 1 − e^{−1/2}

finally shows that X_n(½) converges in distribution to X ∼ N(0, 1) as n → ∞.

b) The solution of this SDE follows along the lines of Example 18.4 where α(t) ≡ 0,

β(t) ≡ −½, δ(t) ≡ 0 and γ(t) ≡ 1:

dX_t° = ½ X_t° dt Ô⇒ X_t° = e^{t/2}

Zt = et/2 Xt , Z0 = X0

t

dZt = et/2 dBt Ô⇒ Zt = Z0 + ∫ es/2 dBs

0

t

Xt = e−t/2 A + e−t/2 ∫ es/2 dBs .

0

For t = ½ we get

X_{1/2} = A e^{−1/4} + e^{−1/4} ∫₀^{1/2} e^{s/2} dB_s ∼ N( 0, e^{−1/2} + e^{−1/2} ∫₀^{1/2} eˢ ds ) = N(0, 1).

Moreover, for s ⩽ t,

C(s, t) = E(X_s X_t) = e^{−s/2} e^{−t/2} E A² + e^{−s/2} e^{−t/2} E( ∫₀ˢ e^{r/2} dB_r ∫₀ᵗ e^{u/2} dB_u )
= e^{−(s+t)/2} + e^{−(s+t)/2} ∫₀ˢ eʳ dr
= e^{−(s+t)/2} ( 1 + eˢ − 1 )
= e^{−(t−s)/2}.

Problem 18.3 (Solution) Since $X_t^\circ$ is such that $1/X_t^\circ$ solves the homogeneous SDE from Example 18.3, we see that

\[
X_t^\circ = \exp\Big( -\int_0^t \big(\beta(s) - \tfrac12 \delta^2(s)\big)\, ds \Big) \exp\Big( -\int_0^t \delta(s)\, dB_s \Big).
\]

Observe that $X_t^\circ = f(I_t^1, I_t^2)$ with $f(x,y) = e^{x+y}$, where $I_t^1, I_t^2$ are the Itô processes

\[
I_t^1 = -\int_0^t \big(\beta(s) - \tfrac12 \delta^2(s)\big)\, ds, \qquad
I_t^2 = -\int_0^t \delta(s)\, dB_s.
\]

Itô's formula yields

\[
\begin{aligned}
dX_t^\circ
&= \partial_1 f(I_t^1, I_t^2)\, dI_t^1 + \partial_2 f(I_t^1, I_t^2)\, dI_t^2 + \frac12 \sum_{j,k=1}^2 \partial_j \partial_k f(I_t^1, I_t^2)\, dI_t^j\, dI_t^k \\
&= X_t^\circ\big( dI_t^1 + dI_t^2 + \tfrac12\, dI_t^2\, dI_t^2 \big) \\
&= X_t^\circ\big( -\beta(t)\, dt + \tfrac12 \delta^2(t)\, dt - \delta(t)\, dB_t + \tfrac12 \delta^2(t)\, dt \big) \\
&= X_t^\circ\big( -\beta(t) + \delta^2(t) \big)\, dt - X_t^\circ\, \delta(t)\, dB_t.
\end{aligned}
\]

Remark:

1. we used here the two-dimensional Itô formula (16.6) but we could have equally well used the one-dimensional version (16.5) with the Itô process $I_t^1 + I_t^2$.

2. observe that Itô's multiplication table gives us exactly the second-order term in (16.6): only $dI_t^2\, dI_t^2 = \delta^2(t)\, dt$ survives.

Since

\[
dZ_t = \big(\alpha(t) - \gamma(t)\delta(t)\big) X_t^\circ\, dt + \gamma(t) X_t^\circ\, dB_t
\quad\text{and}\quad
X_t = Z_t / X_t^\circ,
\]

we get

\[
X_t = \frac{1}{X_t^\circ} \Big( X_0 + \int_0^t \big(\alpha(s) - \gamma(s)\delta(s)\big) X_s^\circ\, ds + \int_0^t \gamma(s) X_s^\circ\, dB_s \Big).
\]


Problem 18.4 (Solution) a) We verify that

\[
X_t = e^{-\beta t} \xi + \sigma e^{-\beta t} \int_0^t e^{\beta s}\, dB_s
\]

solves the SDE $dX_t = -\beta X_t\, dt + \sigma\, dB_t$, $X_0 = \xi$. This can be done in three ways:

Solution 1: you guess the right result and use Itô's formula (16.5) to verify that the above $X_t$ is indeed a solution to the SDE. For this rewrite the above solution as

\[
e^{\beta t} X_t = X_0 + \int_0^t \sigma e^{\beta s}\, dB_s \implies d\big(e^{\beta t} X_t\big) = \sigma e^{\beta t}\, dB_t.
\]

Now with the two-dimensional Itô formula for $f(x,y) = xy$ and the two-dimensional Itô process $(e^{\beta t}, X_t)$ we get

\[
d\big(e^{\beta t} X_t\big) = \beta e^{\beta t} X_t\, dt + e^{\beta t}\, dX_t,
\]

so that

\[
\beta X_t e^{\beta t}\, dt + e^{\beta t}\, dX_t = \sigma e^{\beta t}\, dB_t \iff dX_t = -\beta X_t\, dt + \sigma\, dB_t.
\]

Admittedly, this is unfair as one has to know the solution beforehand. On the other hand, this is exactly the way one verifies that the solution one has found is the correct one.

Solution 2: you apply the time-dependent Itô formula from Problem 16.3 or the 2-dimensional Itô formula, Theorem 16.6, to

\[
X_t = u(t, I_t), \qquad I_t = \int_0^t e^{\beta s}\, dB_s, \qquad u(t,x) = e^{-\beta t} X_0 + \sigma e^{-\beta t} x,
\]

to get—as $dt\, dB_t = 0$—

\[
dX_t = \partial_t u(t, I_t)\, dt + \partial_x u(t, I_t)\, dI_t
= -\beta X_t\, dt + \sigma e^{-\beta t} e^{\beta t}\, dB_t
= -\beta X_t\, dt + \sigma\, dB_t.
\]

Again, this is best for the verification of the solution since you need to know its form beforehand.

Solution 3: you use Example 18.4 with $\alpha(t) \equiv 0$, $\beta(t) \equiv -\beta$, $\gamma(t) \equiv \sigma$ and $\delta(t) \equiv 0$. But, honestly, you will have to look up the formula in the book. We get

\[
\begin{aligned}
Z_t &= e^{\beta t} X_t, \quad Z_0 = X_0 = \xi = \text{const.}; \\
dZ_t &= \sigma e^{\beta t}\, dB_t; \\
Z_t &= \sigma \int_0^t e^{\beta s}\, dB_s + Z_0; \\
X_t &= e^{-\beta t} \xi + e^{-\beta t} \sigma \int_0^t e^{\beta s}\, dB_s, \quad t \geqslant 0.
\end{aligned}
\]

Solution 4: by bare hands and with Itô's formula! Consider first the deterministic ODE

\[
x_t = x_0 - \beta \int_0^t x_s\, ds
\]


which has the solution $x_t = x_0 e^{-\beta t}$, i.e. $e^{\beta t} x_t = x_0 = \text{const}$. This indicates that the transformation

\[
Y_t := e^{\beta t} X_t
\]

should turn the SDE into something simpler. Indeed, with $f(t,x) = e^{\beta t} x$ and the time-dependent Itô formula,

\[
\begin{aligned}
Y_t - Y_0
&= \int_0^t \Big( \underbrace{f_t(s, X_s) - \beta X_s f_x(s, X_s)}_{=0} + \underbrace{\tfrac12 \sigma^2 f_{xx}(s, X_s)}_{=0} \Big)\, ds + \int_0^t \sigma f_x(s, X_s)\, dB_s \\
&= \int_0^t \sigma f_x(s, X_s)\, dB_s = \sigma \int_0^t e^{\beta s}\, dB_s.
\end{aligned}
\]

So we have the solution, but we still have to go through the procedure in Solution 1 or 2 in order to verify our result.

b) Since $X_t$ is the limit of normally distributed random variables, it is itself Gaussian (see also part d))—if $\xi$ is non-random or itself Gaussian and independent of everything else. In particular, if $X_0 = \xi = \text{const.}$,

\[
X_t \sim \mathsf{N}\Big( \xi e^{-\beta t},\; \frac{\sigma^2}{2\beta}\big(1 - e^{-2\beta t}\big) \Big).
\]

c) Now

\[
C(s,t) = \mathbb{E}\, X_s X_t = e^{-\beta(t+s)} \xi^2 + \frac{\sigma^2}{2\beta}\, e^{-\beta(t+s)} \big(e^{2\beta s} - 1\big), \quad t \geqslant s \geqslant 0,
\]

and, therefore,

\[
C(s,t) = e^{-\beta(t+s)} \xi^2 + \frac{\sigma^2}{2\beta} \big( e^{-\beta|t-s|} - e^{-\beta(t+s)} \big) \quad \text{for all } s, t \geqslant 0.
\]
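These Ornstein–Uhlenbeck moments can be cross-checked by simulation; the following sketch (with arbitrarily chosen parameters, not part of the manual) compares an Euler–Maruyama estimate with the explicit law:

```python
import numpy as np

# Euler-Maruyama for dX_t = -beta X_t dt + sigma dB_t, X_0 = xi, compared with
# the explicit law X_t ~ N(xi e^{-beta t}, sigma^2/(2 beta) (1 - e^{-2 beta t})).
rng = np.random.default_rng(0)
beta, sigma, xi, T = 0.7, 1.3, 2.0, 1.0
n_steps, n_paths = 1000, 50_000
dt = T / n_steps

X = np.full(n_paths, xi)
for _ in range(n_steps):
    X = X - beta * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_exact = xi * np.exp(-beta * T)
var_exact = sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * T))
print(X.mean(), mean_exact)
print(X.var(), var_exact)
```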

d) We have (recall that here $\xi \sim \mathsf{N}\big(0, \frac{\sigma^2}{2\beta}\big)$ is independent of the Brownian motion)

\[
\begin{aligned}
\mathbb{E} \exp\Big( i \sum_{j=1}^n \lambda_j X_{t_j} \Big)
&= \mathbb{E} \exp\Big( i \sum_{j=1}^n \lambda_j e^{-\beta t_j} \xi + i\sigma \sum_{j=1}^n \lambda_j e^{-\beta t_j} \int_0^{t_j} e^{\beta s}\, dB_s \Big) \\
&= \exp\Big( -\frac{\sigma^2}{4\beta} \Big[ \sum_{j=1}^n \lambda_j e^{-\beta t_j} \Big]^2 \Big)\, \mathbb{E} \exp\Big( i\sigma \sum_{j=1}^n \eta_j Y_j \Big)
\end{aligned}
\]


where

\[
\eta_j = \lambda_j e^{-\beta t_j}, \qquad Y_j = \int_0^{t_j} e^{\beta s}\, dB_s, \qquad t_0 = 0, \quad Y_0 = 0.
\]

Moreover,

\[
\sum_{j=1}^n \eta_j Y_j = \sum_{k=1}^n (Y_k - Y_{k-1}) \sum_{j=k}^n \eta_j
\]

and the increments

\[
Y_k - Y_{k-1} = \int_{t_{k-1}}^{t_k} e^{\beta s}\, dB_s \sim \mathsf{N}\big(0, (2\beta)^{-1}(e^{2\beta t_k} - e^{2\beta t_{k-1}})\big)
\]

are independent.

Therefore,

\[
\begin{aligned}
\mathbb{E} \exp\Big( i \sum_{j=1}^n \lambda_j X_{t_j} \Big)
&= \exp\Big[ -\frac{\sigma^2}{4\beta} \Big( \sum_{j=1}^n \lambda_j e^{-\beta t_j} \Big)^2 \Big]
\prod_{k=1}^n \exp\Big[ -\frac{\sigma^2}{4\beta} \big( e^{2\beta t_k} - e^{2\beta t_{k-1}} \big) \Big( \sum_{j=k}^n \lambda_j e^{-\beta t_j} \Big)^2 \Big] \\
&= \exp\Big[ -\frac{\sigma^2}{4\beta} \Big( \sum_{j=1}^n \lambda_j e^{-\beta t_j} \Big)^2 \big\{ 1 + e^{2\beta t_1} - 1 \big\} \Big] \times \\
&\qquad \times \prod_{k=2}^n \exp\Big[ -\frac{\sigma^2}{4\beta} \big( 1 - e^{-2\beta(t_k - t_{k-1})} \big) \Big( \sum_{j=k}^n \lambda_j e^{-\beta(t_j - t_k)} \Big)^2 \Big] \\
&= \exp\Big[ -\frac{\sigma^2}{4\beta} \Big( \sum_{j=1}^n \lambda_j e^{-\beta(t_j - t_1)} \Big)^2 \Big] \times \\
&\qquad \times \prod_{k=2}^n \exp\Big[ -\frac{\sigma^2}{4\beta} \big( 1 - e^{-2\beta(t_k - t_{k-1})} \big) \Big( \sum_{j=k}^n \lambda_j e^{-\beta(t_j - t_k)} \Big)^2 \Big].
\end{aligned}
\]

Note: the distribution of $(X_{t_1}, \dots, X_{t_n})$ depends only on the differences of the consecutive epochs $t_1 < \dots < t_n$.

e) Set

\[
\tilde X_t = e^{\beta t} X_t \quad \text{and} \quad \tilde U_t = e^{\beta t} U_t,
\]

and we show that both processes have the same finite-dimensional distributions. Clearly, both processes are Gaussian and both have independent increments. Moreover, for $s \leqslant t$,

\[
\tilde X_t - \tilde X_s = \sigma \int_s^t e^{\beta r}\, dB_r \sim \mathsf{N}\Big( 0,\; \frac{\sigma^2}{2\beta} \big( e^{2\beta t} - e^{2\beta s} \big) \Big)
\]

and

\[
\tilde U_t - \tilde U_s = \frac{\sigma}{\sqrt{2\beta}} \Big( B\big(e^{2\beta t} - 1\big) - B\big(e^{2\beta s} - 1\big) \Big)
\sim \frac{\sigma}{\sqrt{2\beta}}\, B\big( e^{2\beta t} - e^{2\beta s} \big)
\sim \mathsf{N}\Big( 0,\; \frac{\sigma^2}{2\beta} \big( e^{2\beta t} - e^{2\beta s} \big) \Big).
\]


Problem 18.5 (Solution) We use the time-dependent Itô formula from Problem 16.3 (or the 2-dimensional Itô formula for the process $(t, X_t)$) with $f(t,x) = e^{ct} \int_0^x \frac{dy}{\sigma(y)}$. Note that the parameter $c$ is still a free parameter. Thus,

\[
\begin{aligned}
df(t, X_t)
&= c e^{ct} \int_0^{X_t} \frac{dy}{\sigma(y)}\, dt + e^{ct} \frac{1}{\sigma(X_t)}\, dX_t - \frac12\, e^{ct} \frac{\sigma'(X_t)}{\sigma^2(X_t)}\, \sigma^2(X_t)\, dt \\
&= c e^{ct} \int_0^{X_t} \frac{dy}{\sigma(y)}\, dt + e^{ct} \frac{b(X_t)}{\sigma(X_t)}\, dt + e^{ct}\, dB_t - \frac12\, e^{ct} \sigma'(X_t)\, dt \\
&= e^{ct} \Big[ c \int_0^{X_t} \frac{dy}{\sigma(y)} - \frac12 \sigma'(X_t) + \frac{b(X_t)}{\sigma(X_t)} \Big]\, dt + e^{ct}\, dB_t.
\end{aligned}
\]

Let us show that the expression in the brackets [⋯] is constant if we choose c appropriately.

For this we differentiate this expression:

\[
\begin{aligned}
\frac{d}{dx} \Big[ c \int_0^x \frac{dy}{\sigma(y)} - \frac12 \sigma'(x) + \frac{b(x)}{\sigma(x)} \Big]
&= \frac{c}{\sigma(x)} - \frac{d}{dx} \Big[ \frac12 \sigma'(x) - \frac{b(x)}{\sigma(x)} \Big] \\
&= \frac{c}{\sigma(x)} - \Big[ \frac12 \sigma''(x) - \frac{d}{dx} \frac{b(x)}{\sigma(x)} \Big] \\
&= \frac{1}{\sigma(x)} \Big( c - \underbrace{\sigma(x) \Big[ \frac12 \sigma''(x) - \frac{d}{dx} \frac{b(x)}{\sigma(x)} \Big]}_{=\text{const. by assumption}} \Big).
\end{aligned}
\]

This shows that we should choose $c$ in such a way that the expression $c - \sigma(x) \cdot [\cdots]$ becomes zero, i.e.

\[
c = \sigma(x) \Big[ \frac12 \sigma''(x) - \frac{d}{dx} \frac{b(x)}{\sigma(x)} \Big].
\]
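This criterion can be verified symbolically. Here is a small sketch using sympy with, as a concrete example, the coefficient pair σ(x) = √(1+x²), b(x) = √(1+x²) + x/2 (the pair appearing in Problem 18.9), for which the constant turns out to be 0:

```python
import sympy as sp

# Check that c = sigma(x) * (sigma''(x)/2 - d/dx (b(x)/sigma(x))) is indeed
# constant for a concrete pair of coefficients; for this pair the constant is 0.
x = sp.symbols('x', real=True)
sigma = sp.sqrt(1 + x**2)
b = sp.sqrt(1 + x**2) + x / 2

c = sp.simplify(sigma * (sp.diff(sigma, x, 2) / 2 - sp.diff(b / sigma, x)))
print(c)  # 0
```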

Problem 18.6 (Solution) Using the time-dependent Itô formula (cf. Problem 16.3) or the 2-dimensional Itô formula (cf. Theorem 16.6) for the process $(t, B_t)$ we get

\[
d(t B_t) = B_t\, dt + t\, dB_t = \frac{X_t}{t}\, dt + t\, dB_t.
\]

Together with the initial condition $X_0 = 0$ this is the SDE which has $X_t = t B_t$ as solution.


The trouble is that the solution is not unique! To see this, assume that $X_t$ and $Y_t$ are any two solutions. Then

\[
dZ_t := d(X_t - Y_t) = dX_t - dY_t = \Big( \frac{X_t}{t} - \frac{Y_t}{t} \Big)\, dt = \frac{Z_t}{t}\, dt, \qquad Z_0 = 0.
\]

This is an ODE and all (deterministic) processes $Z_t = ct$ are solutions with initial condition $Z_0 = 0$. If we want to enforce uniqueness, we need a condition on $Z_0'$. So

\[
dX_t = \frac{X_t}{t}\, dt + t\, dB_t \qquad \text{and} \qquad \frac{d}{dt} X_t \Big|_{t=0} = x_0'.
\]

Problem 18.7 (Solution) a) With the argument from Problem 18.6, i.e. Itô's formula, we get for $f(t,x) = x/(1+t)$

\[
\partial_t f(t,x) = -\frac{x}{(1+t)^2}, \qquad \partial_x f(t,x) = \frac{1}{1+t}, \qquad \partial_x^2 f(t,x) = 0.
\]

And so

\[
dU_t = -\frac{B_t}{(1+t)^2}\, dt + \frac{1}{1+t}\, dB_t = -\frac{U_t}{1+t}\, dt + \frac{1}{1+t}\, dB_t.
\]
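A quick pathwise sanity check (an illustration, not part of the manual): driving an Euler scheme for this SDE with the same Brownian increments that build $B_t$ should reproduce $U_t = B_t/(1+t)$ up to discretization error:

```python
import numpy as np

# Euler scheme for dU_t = -U_t/(1+t) dt + dB_t/(1+t), U_0 = 0, versus the
# explicit expression U_t = B_t/(1+t), both driven by the same increments.
rng = np.random.default_rng(1)
T, n = 1.0, 50_000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)

U, t = 0.0, 0.0
for i in range(n):
    U += -U / (1 + t) * dt + dB[i] / (1 + t)
    t += dt

B_T = dB.sum()
print(U, B_T / (1 + T))
```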

b) Using Itô’s formula for f (x) = sin x we get, because of sin2 x + cos2 x = 1, that

√

= 1 − sin2 Bt dBt − 21 sin Bt dt

√

= 1 − Vt2 dBt − 12 Vt dt

Xt −a sin Bt 1 −a cos Bt

d( ) = ( ) dBt + ( ) dt

Yt b cos Bt 2 −b sin Bt

− a b sin Bt 1 a cos Bt

= ( bb ) dBt − ( ) dt

a a cos Bt

2 b sin Bt

− a Yt 1 Xt

= ( bb ) dBt − ( ) dt.

a Xt

2 Yt

Problem 18.8 (Solution) a) We use Example 18.4 (and 18.3). Then we get

\[
\begin{aligned}
dZ_t &= b X_t^\circ\, dt, \\
X_t^\circ &= X_0^\circ \exp\Big( \int_0^t \big( \sigma^2 - \tfrac12 \sigma^2 \big)\, ds - \int_0^t \sigma\, dB_s \Big)
= X_0^\circ \exp\big( \tfrac12 \sigma^2 t - \sigma B_t \big), \\
Z_t &= \int_0^t b X_s^\circ\, ds.
\end{aligned}
\]

Thus, with $X_0^\circ = 1$,

\[
Z_t = \int_0^t b\, e^{\frac12 \sigma^2 s - \sigma B_s}\, ds
\quad\Longrightarrow\quad
\frac{Z_t}{X_t^\circ} = b\, e^{-\frac12 \sigma^2 t + \sigma B_t} \int_0^t e^{\frac12 \sigma^2 s - \sigma B_s}\, ds.
\]

If $X_0 \neq 0$, we add the solution $X_0 / X_t^\circ$ of the homogeneous equation to the solution just found:

\[
X_t = X_0\, e^{-\frac12 \sigma^2 t + \sigma B_t} + b\, e^{-\frac12 \sigma^2 t + \sigma B_t} \int_0^t e^{\frac12 \sigma^2 s - \sigma B_s}\, ds.
\]

b) Similarly, we get

\[
\begin{aligned}
dX_t^\circ &= X_t^\circ\, dt, \\
dZ_t &= m X_t^\circ\, dt + \sigma X_t^\circ\, dB_t.
\end{aligned}
\]

Thus,

\[
\begin{aligned}
X_t^\circ &= X_0^\circ e^t, \\
Z_t &= \int_0^t m e^s\, ds + \sigma \int_0^t e^s\, dB_s = m (e^t - 1) + \sigma \int_0^t e^s\, dB_s, \\
\frac{Z_t}{X_t^\circ} &= m (1 - e^{-t}) + \sigma \int_0^t e^{s-t}\, dB_s
\end{aligned}
\]

(taking $X_0^\circ = 1$), and adding the solution $x_0 e^{-t}$ of the homogeneous equation,

\[
X_t = x_0 e^{-t} + m (1 - e^{-t}) + \sigma \int_0^t e^{s-t}\, dB_s.
\]

Problem 18.9 (Solution) Here

\[
b(x) = \sqrt{1 + x^2} + \tfrac12 x \qquad \text{and} \qquad \sigma(x) = \sqrt{1 + x^2},
\]

so that

\[
\sigma'(x) = \frac{x}{\sqrt{1 + x^2}} \qquad \text{and} \qquad \kappa(x) = \frac{b(x)}{\sigma(x)} - \frac12 \sigma'(x) = 1.
\]

Set

\[
d(x) = \int_0^x \frac{dy}{\sigma(y)} = \operatorname{arsinh} x \qquad \text{and} \qquad Z_t = f(X_t) = d(X_t).
\]

Then, by Itô's formula,

\[
\begin{aligned}
dZ_t
&= \frac{1}{\sigma(X_t)}\, dX_t + \frac12 \Big( \frac{1}{\sigma} \Big)'(X_t)\, \sigma^2(X_t)\, dt \\
&= \Big( 1 + \frac{X_t}{2\sqrt{1 + X_t^2}} \Big)\, dt + dB_t + \frac12 \Big( -\frac{X_t}{(1 + X_t^2)^{3/2}} \Big) (1 + X_t^2)\, dt \\
&= dt + dB_t,
\end{aligned}
\]

and so $Z_t = Z_0 + t + B_t$. Finally,

\[
X_t = \sinh\big( \operatorname{arsinh} X_0 + t + B_t \big).
\]
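The closed-form solution $X_t = \sinh(\operatorname{arsinh} X_0 + t + B_t)$ can be checked against a direct discretization of the SDE (a sketch with an arbitrary initial value; the Milstein correction term uses σ(x)σ′(x) = x):

```python
import numpy as np

# Milstein scheme for dX_t = (sqrt(1+X^2) + X/2) dt + sqrt(1+X^2) dB_t,
# X_0 = x0, compared pathwise with X_t = sinh(arsinh(x0) + t + B_t)
# driven by the same Brownian increments.
rng = np.random.default_rng(2)
x0, T, n = 0.5, 1.0, 100_000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)

X = x0
for i in range(n):
    s = np.sqrt(1 + X * X)
    X += (s + X / 2) * dt + s * dB[i] + 0.5 * X * (dB[i] ** 2 - dt)

X_exact = np.sinh(np.arcsinh(x0) + T + dB.sum())
print(X, X_exact)
```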

Problem 18.10 (Solution) Set $b = b(t,x)$, $b_0 = b(t,0)$ etc. Observe that $\|b\| = \big( \sum_j |b_j(t,x)|^2 \big)^{1/2}$ and $\|\sigma\| = \big( \sum_{j,k} |\sigma_{jk}(t,x)|^2 \big)^{1/2}$ are norms; therefore, we get, using the triangle inequality and the elementary inequality $(a+b)^2 \leqslant 2(a^2 + b^2)$,

\[
\begin{aligned}
\|b\|^2 + \|\sigma\|^2
&= \|b - b_0 + b_0\|^2 + \|\sigma - \sigma_0 + \sigma_0\|^2 \\
&\leqslant 2\|b - b_0\|^2 + 2\|\sigma - \sigma_0\|^2 + 2\|b_0\|^2 + 2\|\sigma_0\|^2 \\
&\leqslant 2L^2 |x|^2 + 2\|b_0\|^2 + 2\|\sigma_0\|^2 \\
&\leqslant 2L^2 (1 + |x|)^2 + 2\big( \|b_0\|^2 + \|\sigma_0\|^2 \big)(1 + |x|)^2 \\
&\leqslant 2\big( L^2 + \|b_0\|^2 + \|\sigma_0\|^2 \big)(1 + |x|)^2.
\end{aligned}
\]
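As a numerical illustration of this linear-growth bound (with an arbitrarily chosen Lipschitz pair, not from the manual):

```python
import numpy as np

# b(x) = sin(3x) and sigma(x) = cos(x) are Lipschitz with constant L = 3, so
# ||b||^2 + ||sigma||^2 <= 2 (L^2 + ||b(0)||^2 + ||sigma(0)||^2) (1+|x|)^2.
rng = np.random.default_rng(3)
L = 3.0
x = rng.uniform(-50.0, 50.0, size=10_000)

lhs = np.sin(3 * x) ** 2 + np.cos(x) ** 2
rhs = 2 * (L**2 + np.sin(0.0) ** 2 + np.cos(0.0) ** 2) * (1 + np.abs(x)) ** 2
print(bool(np.all(lhs <= rhs)))  # True
```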

Problem 18.11 (Solution) a) If $b(x) = -e^x$ and $X_0^x = x$, we have to solve the following ODE/integral equation

\[
X_t^x = x - \int_0^t e^{X_s^x}\, ds,
\]

which has the solution

\[
X_t^x = \log\Big( \frac{1}{t + e^{-x}} \Big).
\]

This shows that

\[
\lim_{x\to\infty} X_t^x = \lim_{x\to\infty} \log\Big( \frac{1}{t + e^{-x}} \Big) = \log \frac{1}{t} = -\log t.
\]

This means that Corollary 18.21 fails in this case since the coefficient of the ODE grows too fast.
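A short numerical check of this explicit solution (an illustration only):

```python
import math

# Explicit Euler for x'(t) = -exp(x(t)), x(0) = x0, against the closed form
# x(t) = -log(t + e^{-x0}); plus a look at the limit -log(t) as x0 -> infinity.
def euler(x, T, n=100_000):
    h = T / n
    for _ in range(n):
        x -= math.exp(x) * h
    return x

x0, T = 1.0, 0.5
approx = euler(x0, T)
exact = -math.log(T + math.exp(-x0))
print(approx, exact)
print(-math.log(T + math.exp(-30.0)), -math.log(T))  # nearly equal
```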


b) If $|b(x)|, |\sigma(x)| \leqslant M$ for all $x$, then

\[
\Big| \int_0^t b(X_s)\, ds \Big| \leqslant M t
\]

and, by Itô's isometry,

\[
\mathbb{E}\Big[ \Big| \int_0^t \sigma(X_s^x)\, dB_s \Big|^2 \Big] = \mathbb{E}\Big[ \int_0^t |\sigma^2(X_s^x)|\, ds \Big] \leqslant M^2 t.
\]

Hence,

\[
\mathbb{E}\big( |X_t^x - x|^2 \big)
\leqslant 2\, \mathbb{E}\Big[ \Big| \int_0^t b(X_s)\, ds \Big|^2 \Big] + 2\, \mathbb{E}\Big[ \Big| \int_0^t \sigma(X_s^x)\, dB_s \Big|^2 \Big]
\leqslant 2(Mt)^2 + 2M^2 t = 2M^2 t (t+1).
\]

By Fatou's lemma,

\[
\mathbb{E}\Big( \varliminf_{|x|\to\infty} |X_t^x - x|^2 \Big) \leqslant \varliminf_{|x|\to\infty} \mathbb{E}\big( |X_t^x - x|^2 \big) \leqslant 2M^2 t (t+1).
\]

c) Assume now that $b(x)$ and $\sigma(x)$ grow like $|x|^{p/2}$ for some $p \in (0,2)$. A calculation as above yields, by the Cauchy–Schwarz inequality,

\[
\Big| \int_0^t b(X_s)\, ds \Big|^2 \leqslant t \int_0^t |b(X_s)|^2\, ds \leqslant c_p\, t \int_0^t \big( 1 + |X_s|^p \big)\, ds
\]

and

\[
\mathbb{E}\Big[ \Big| \int_0^t \sigma(X_s^x)\, dB_s \Big|^2 \Big] = \mathbb{E}\Big[ \int_0^t |\sigma^2(X_s^x)|\, ds \Big] \leqslant c' \int_0^t \mathbb{E}\big( 1 + |X_s|^p \big)\, ds.
\]

Therefore,

\[
\mathbb{E}\, |X_t^x - x|^2
\leqslant 2 c_p\, t \int_0^t \big( 1 + \mathbb{E}(|X_s|^p) \big)\, ds + 2 c' \int_0^t \big( 1 + \mathbb{E}(|X_s|^p) \big)\, ds
\leqslant c_{t,p} + c_{t,p}' \int_0^t |x|^p\, ds = c_{t,p} + t\, c_{t,p}'\, |x|^p.
\]

Again by Fatou's theorem we see that the left-hand side grows like $|x|^2$ (if $X_t^x$ is unbounded) while the (larger!) right-hand side grows like $|x|^p$, $p < 2$, and this is impossible.

We still have to verify the estimate

\[
\frac{|x - y|}{(1 + |x|)(1 + |y|)} \overset{!}{\leqslant} \Big| \frac{x}{|x|^2} - \frac{y}{|y|^2} \Big|.
\]

Since both sides are nonnegative, this is equivalent to

\[
\frac{|x - y|^2}{(1 + |x|)^2 (1 + |y|)^2} \leqslant \Big| \frac{x}{|x|^2} - \frac{y}{|y|^2} \Big|^2
\]

\[
\iff\quad
\frac{|x|^2 - 2\langle x, y\rangle + |y|^2}{(1 + |x|)^2 (1 + |y|)^2}
\leqslant \frac{|x|^2}{|x|^4} - \frac{2\langle x, y\rangle}{|x|^2 |y|^2} + \frac{|y|^2}{|y|^4}
\]

\[
\iff\quad
2\langle x, y\rangle \Big( \frac{1}{|x|^2 |y|^2} - \frac{1}{(1 + |x|)^2 (1 + |y|)^2} \Big)
\leqslant \frac{1}{|x|^2} + \frac{1}{|y|^2} - \frac{|x|^2 + |y|^2}{(1 + |x|)^2 (1 + |y|)^2}
\]

\[
\iff\quad
2\langle x, y\rangle \Big( \frac{1}{|x|^2 |y|^2} - \frac{1}{(1 + |x|)^2 (1 + |y|)^2} \Big)
\leqslant \big( |x|^2 + |y|^2 \big) \Big( \frac{1}{|x|^2 |y|^2} - \frac{1}{(1 + |x|)^2 (1 + |y|)^2} \Big).
\]

By the Cauchy–Schwarz inequality we get $2\langle x, y\rangle \leqslant 2|x| \cdot |y| \leqslant |x|^2 + |y|^2$, and since $|x||y| \leqslant (1 + |x|)(1 + |y|)$ the common factor in parentheses is nonnegative; this shows that the last estimate is correct.
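The inequality can also be probed numerically for random vectors (an illustration only):

```python
import numpy as np

# Check |x - y| / ((1+|x|)(1+|y|)) <= | x/|x|^2 - y/|y|^2 | for random
# nonzero vectors x, y in R^3.
rng = np.random.default_rng(4)
ok = True
for _ in range(10_000):
    x = rng.normal(size=3) * rng.uniform(0.1, 10.0)
    y = rng.normal(size=3) * rng.uniform(0.1, 10.0)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    lhs = np.linalg.norm(x - y) / ((1 + nx) * (1 + ny))
    rhs = np.linalg.norm(x / nx**2 - y / ny**2)
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True
```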


19 On Diffusions

Problem 19.1 (Solution) By assumption,

\[
A u = L u = \frac12 \sum_{i,j=1}^d a_{ij}\, \partial_i \partial_j u + \sum_{i=1}^d b_i\, \partial_i u
\]

maps $C_c^\infty$ into $C$. Fix $R > 0$ and $i, j \in \{1, \dots, d\}$, write $x = (x_1, \dots, x_d) \in \mathbb{R}^d$, and pick $\chi \in C_c^\infty(\mathbb{R}^d)$ such that $\chi|_{B(0,R)} \equiv 1$. For smooth $u, \varphi$ we have, by Leibniz's rule,

\[
L(\varphi u)
= \frac12 \sum_{i,j} a_{ij}\, \partial_i \partial_j (\varphi u) + \sum_i b_i\, \partial_i (\varphi u)
= \frac12 \sum_{i,j} a_{ij} \big( u\, \partial_i \partial_j \varphi + \varphi\, \partial_i \partial_j u + \partial_i \varphi\, \partial_j u + \partial_i u\, \partial_j \varphi \big) + \sum_i b_i \big( u\, \partial_i \varphi + \varphi\, \partial_i u \big).
\]

Now use $u(x) = x_i$ and $\varphi(x) = \chi(x)$. Then $u\chi \in C_c^\infty$, $L(u\chi) \in C$ and $L(u\chi)(x) = b_i(x)$ for all $|x| < R$, so $b_i|_{B(0,R)}$ is continuous. Next use $u(x) = x_i x_j$ and $\varphi(x) = \chi(x)$. Then $u\chi \in C_c^\infty$, $L(u\chi) \in C$ and so

\[
L(u\chi)(x) = a_{ij}(x) + x_j\, b_i(x) + x_i\, b_j(x) \quad \text{for all } |x| < R
\quad\Longrightarrow\quad a_{ij}|_{B(0,R)} \text{ continuous.}
\]

Problem 19.2 (Solution) We use the differentiation lemma which is familiar from measure and integration theory, cf. Schilling [11, Theorem 11.5, pp. 92–93]: observe that, by our assumptions,

\[
\Big| \frac{\partial^2 p(t,x,y)}{\partial x_j \partial x_k} \Big| \leqslant C(t) \quad \text{for all } x, y \in \mathbb{R}^d,
\]

so that

\[
\Big| \frac{\partial^2 p(t,x,y)}{\partial x_j \partial x_k}\, u(y) \Big| \leqslant C(t)\, |u(y)| \in L^1(\mathbb{R}^d) \tag{*}
\]

for each $t > 0$. Thus we get

\[
\frac{\partial^2}{\partial x_j \partial x_k} \int p(t,x,y)\, u(y)\, dy = \int \frac{\partial^2}{\partial x_j \partial x_k} p(t,x,y)\, u(y)\, dy.
\]


Moreover, (*) and the fact that $p(t, \cdot, y) \in C_\infty(\mathbb{R}^d)$ allow us to interchange limits and integrals. For $x \to x_0$:

\[
\lim_{x \to x_0} \int \frac{\partial^2}{\partial x_j \partial x_k} p(t,x,y)\, u(y)\, dy
= \int \lim_{x \to x_0} \frac{\partial^2}{\partial x_j \partial x_k} p(t,x,y)\, u(y)\, dy
= \int \frac{\partial^2}{\partial x_j \partial x_k} p(t,x_0,y)\, u(y)\, dy
\]

$\Longrightarrow T_t$ maps $C_c^\infty(\mathbb{R}^d)$ into $C(\mathbb{R}^d)$; and for $|x| \to \infty$:

\[
\lim_{|x| \to \infty} \int \frac{\partial^2}{\partial x_j \partial x_k} p(t,x,y)\, u(y)\, dy
= \int \underbrace{\lim_{|x| \to \infty} \frac{\partial^2}{\partial x_j \partial x_k} p(t,x,y)}_{=0}\, u(y)\, dy = 0
\]

$\Longrightarrow T_t$ maps $C_c^\infty(\mathbb{R}^d)$ into $C_\infty(\mathbb{R}^d)$.

Addition: With a standard uniform boundedness and density argument we can show that $T_t$ maps $C_\infty$ into $C_\infty$: fix $u \in C_\infty(\mathbb{R}^d)$ and pick a sequence $(u_n)_n \subset C_c^\infty(\mathbb{R}^d)$ such that $\lim_{n\to\infty} \|u - u_n\|_\infty = 0$. Then we get

\[
\|T_t u - T_t u_n\|_\infty \leqslant \|u - u_n\|_\infty \xrightarrow[n\to\infty]{} 0,
\]

so $T_t u$, being the uniform limit of the sequence $(T_t u_n)_n \subset C_\infty(\mathbb{R}^d)$, is itself in $C_\infty(\mathbb{R}^d)$.

Problem 19.3 (Solution) Let $u \in C_\infty^2$. Then there is a sequence of test functions $(u_n)_n \subset C_c^\infty$ such that $\|u_n - u\|_{(2)} \to 0$. Thus, $u_n \to u$ uniformly and $A(u_n - u_m) \to 0$ uniformly as $n, m \to \infty$. The closedness now gives $u \in \mathcal{D}(A)$.

Problem 19.4 (Solution) Let $u \in C^2(\mathbb{R}^d)$ and $\varphi \in C_c^\infty(\mathbb{R}^d)$. Then, integrating by parts,

\[
\begin{aligned}
\langle L u, \varphi \rangle_{L^2}
&= \frac12 \sum_{i,j} \int_{\mathbb{R}^d} \partial_i \partial_j u \cdot a_{ij} \varphi\, dx + \sum_j \int_{\mathbb{R}^d} \partial_j u \cdot b_j \varphi\, dx + \int_{\mathbb{R}^d} u \cdot c \varphi\, dx \\
&= \frac12 \sum_{i,j} \int_{\mathbb{R}^d} u \cdot \partial_i \partial_j (a_{ij} \varphi)\, dx - \sum_j \int_{\mathbb{R}^d} u \cdot \partial_j (b_j \varphi)\, dx + \int_{\mathbb{R}^d} u \cdot c \varphi\, dx \\
&= \langle u, L^* \varphi \rangle_{L^2},
\end{aligned}
\]

where

\[
L^*(x, D_x) \varphi(x) = \frac12 \sum_{i,j} \partial_i \partial_j \big( a_{ij}(x) \varphi(x) \big) - \sum_j \partial_j \big( b_j(x) \varphi(x) \big) + c(x) \varphi(x).
\]

Now assume that we are in $(t,x) \in [0,\infty) \times \mathbb{R}^d$—the case $\mathbb{R} \times \mathbb{R}^d$ is easier, as we have no boundary term. Consider $L + \partial_t = L(x, D_x) + \partial_t$ for sufficiently smooth $u = u(t,x)$ and $\varphi = \varphi(t,x)$ with compact support in $[0,\infty) \times \mathbb{R}^d$. We find

\[
\begin{aligned}
&\int_0^\infty \int_{\mathbb{R}^d} (L + \partial_t) u(t,x) \cdot \varphi(t,x)\, dx\, dt \\
&\quad= \int_0^\infty \int_{\mathbb{R}^d} L u(t,x) \cdot \varphi(t,x)\, dx\, dt + \int_{\mathbb{R}^d} \int_0^\infty \partial_t u(t,x) \cdot \varphi(t,x)\, dt\, dx \\
&\quad= \int_0^\infty \int_{\mathbb{R}^d} u(t,x) \cdot L^* \varphi(t,x)\, dx\, dt + \int_{\mathbb{R}^d} \Big( u(t,x) \varphi(t,x) \Big|_{t=0}^\infty - \int_0^\infty u(t,x) \cdot \partial_t \varphi(t,x)\, dt \Big)\, dx \\
&\quad= \int_0^\infty \int_{\mathbb{R}^d} u(t,x) \cdot L^* \varphi(t,x)\, dx\, dt - \int_{\mathbb{R}^d} \Big( u(0,x) \varphi(0,x) + \int_0^\infty u(t,x) \cdot \partial_t \varphi(t,x)\, dt \Big)\, dx.
\end{aligned}
\]

Problem 19.5 (Solution) For $u \in C_c^\infty(\mathbb{R}^d)$ we have

\[
\frac{d}{dt} T_t u(x) = T_t L(\cdot, D) u(x),
\quad\text{i.e.}\quad
\frac{d}{dt} \int p(t,x,y)\, u(y)\, dy = \int p(t,x,y)\, L(y, D_y) u(y)\, dy.
\]

The change of differentiation and integration can easily be justified by a routine application of the differentiation lemma (e.g. Schilling [11, Theorem 11.5, pp. 92–93]): under our assumptions we have, for all $\epsilon \in (0,1)$ and $R > 0$,

\[
\sup_{t \in [\epsilon, 1/\epsilon]}\, \sup_{|x| \leqslant R} \Big| \frac{d}{dt} p(t,x,y)\, u(y) \Big| \leqslant C(\epsilon, R)\, |u(y)| \in L^1(\mathbb{R}^d).
\]

Inserting the expression for the differential operator $L(y, D_y)$, we find for the right-hand side

\[
\begin{aligned}
\int p(t,x,y)\, L(y, D_y) u(y)\, dy
&= \frac12 \sum_{j,k=1}^d \int p(t,x,y)\, a_{jk}(y)\, \frac{\partial^2 u(y)}{\partial y_j \partial y_k}\, dy + \sum_{j=1}^d \int p(t,x,y)\, b_j(y)\, \frac{\partial u(y)}{\partial y_j}\, dy \\
&= \frac12 \sum_{j,k=1}^d \int \frac{\partial^2}{\partial y_j \partial y_k} \big( a_{jk}(y)\, p(t,x,y) \big)\, u(y)\, dy - \sum_{j=1}^d \int \frac{\partial}{\partial y_j} \big( b_j(y)\, p(t,x,y) \big)\, u(y)\, dy
\end{aligned}
\]

(integration by parts). Since $u \in C_c^\infty(\mathbb{R}^d)$ is arbitrary, we conclude that $\frac{\partial}{\partial t} p(t,x,y) = L^*(y, D_y)\, p(t,x,y)$.

Problem 19.6 (Solution) Problem 6.2 shows that $X_t$ is a Markov process. The continuity of the sample paths is obvious, and so is the Feller property (using the form of the transition function found in the solution of Problem 6.2).

Let us calculate the generator. Set $I_t = \int_0^t B_s\, ds$. The semigroup is given by

\[
T_t u(x,y) = \mathbb{E}^{x,y} u(B_t, I_t) = \mathbb{E}\, u\Big( B_t + x, \int_0^t (B_s + x)\, ds + y \Big) = \mathbb{E}\, u(B_t + x, I_t + tx + y).
\]

If we differentiate the expression under the expectation with respect to $t$, we get with the help of Itô's formula

\[
\begin{aligned}
du(B_t + x, I_t + tx + y)
&= \partial_x u(B_t + x, I_t + tx + y)\, dB_t \\
&\qquad + \partial_y u(B_t + x, I_t + tx + y)\, d(I_t + tx) \\
&\qquad + \tfrac12\, \partial_x^2 u(B_t + x, I_t + tx + y)\, dt \\
&= \partial_x u(B_t + x, I_t + tx + y)\, dB_t \\
&\qquad + \partial_y u(B_t + x, I_t + tx + y)\, (B_t + x)\, dt \\
&\qquad + \tfrac12\, \partial_x^2 u(B_t + x, I_t + tx + y)\, dt,
\end{aligned}
\]

since $dB_s\, dI_s = 0$. So,

\[
\mathbb{E}\, u(B_t + x, I_t + tx + y) - u(x,y)
= \int_0^t \mathbb{E}\big[ \partial_y u(B_s + x, I_s + sx + y)(B_s + x) \big]\, ds
+ \frac12 \int_0^t \mathbb{E}\big[ \partial_x^2 u(B_s + x, I_s + sx + y) \big]\, ds.
\]

Dividing by $t$ and letting $t \to 0$ we get

\[
L u(x,y) = x\, \partial_y u(x,y) + \frac12\, \partial_x^2 u(x,y).
\]
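A Monte Carlo sanity check of this generator (an illustration with the hypothetical test function u(x,y) = xy, for which Lu(x,y) = x²):

```python
import numpy as np

# (B_t, I_t) is Gaussian with Var B_t = t, Cov(B_t, I_t) = t^2/2 and
# Var I_t = t^3/3, so the pair can be sampled exactly; we then estimate
# (T_t u(x,y) - u(x,y)) / t for small t and compare with L u(x,y) = x^2.
rng = np.random.default_rng(5)
x, y, t, n = 1.0, 0.5, 0.01, 400_000

cov = [[t, t**2 / 2], [t**2 / 2, t**3 / 3]]
BI = rng.multivariate_normal([0.0, 0.0], cov, size=n)
B, I = BI[:, 0], BI[:, 1]

Ttu = np.mean((B + x) * (I + t * x + y))
approx_Lu = (Ttu - x * y) / t
print(approx_Lu)  # close to x**2 = 1
```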

Problem 19.7 (Solution) We assume for a) and b) that the operator $L$ is more general than written in (19.1), namely

\[
L u(x) = \frac12 \sum_{i,j=1}^d a_{ij}(x)\, \frac{\partial^2 u(x)}{\partial x_i \partial x_j} + \sum_{j=1}^d b_j(x)\, \frac{\partial u(x)}{\partial x_j} + c(x)\, u(x).
\]

a) If $u$ has compact support, then $Lu$ has compact support. Since, by assumption, the coefficients of $L$ are continuous, $Lu$ is bounded, hence $M_t^u$ is square integrable. Obviously, $M_t^u$ is $\mathcal{F}_t$ measurable. Let us establish the martingale property. For this we fix $s \leqslant t$. Then

\[
\begin{aligned}
\mathbb{E}^x \big( M_t^u \,\big|\, \mathcal{F}_s \big)
&= \mathbb{E}^x \Big( u(X_t) - u(X_0) - \int_0^t L u(X_r)\, dr \,\Big|\, \mathcal{F}_s \Big) \\
&= \mathbb{E}^x \Big( u(X_t) - u(X_s) - \int_s^t L u(X_r)\, dr \,\Big|\, \mathcal{F}_s \Big) + u(X_s) - u(X_0) - \int_0^s L u(X_r)\, dr \\
&= \mathbb{E}^x \Big( u(X_t) - u(X_s) - \int_0^{t-s} L u(X_{r+s})\, dr \,\Big|\, \mathcal{F}_s \Big) + M_s^u \\
&= \mathbb{E}^{X_s} \Big( u(X_{t-s}) - u(X_0) - \int_0^{t-s} L u(X_r)\, dr \Big) + M_s^u,
\end{aligned}
\]

using the Markov property in the last step. Observe that $T_t u(y) = \mathbb{E}^y u(X_t)$ is the semigroup associated with the Markov process. Then

\[
\mathbb{E}^y \Big( u(X_{t-s}) - u(X_0) - \int_0^{t-s} L u(X_r)\, dr \Big)
= T_{t-s} u(y) - u(y) - \int_0^{t-s} \mathbb{E}^y \big( L u(X_r) \big)\, dr = 0
\]

by Lemma 7.10, see also Theorem 7.21. This shows that $\mathbb{E}^x(M_t^u \,|\, \mathcal{F}_s) = M_s^u$, and we are done.


b) Fix $R > 0$ and pick $\chi \in C_c^\infty(\mathbb{R}^d)$ such that $\chi|_{B(x,R)} \equiv 1$. Then for all $f \in C^2(\mathbb{R}^d)$ we have $\chi f \in C_c^2(\mathbb{R}^d)$, and it is not hard to see that the calculation in part a) still holds for such functions.

Set $\tau = \tau_R^x = \inf\{ t > 0 : |X_t - x| \geqslant R \}$. This is a stopping time, and since $\chi f = f$ along the path up to time $\tau$ we have $M_{t \wedge \tau}^{\chi f} = M_{t \wedge \tau}^f$. Moreover,

\[
\begin{aligned}
L(\chi f)
&= \frac12 \sum_{i,j} a_{ij}\, \partial_i \partial_j (\chi f) + \sum_i b_i\, \partial_i (\chi f) + c \chi f \\
&= \frac12 \sum_{i,j} a_{ij} \big( f\, \partial_i \partial_j \chi + \chi\, \partial_i \partial_j f + \partial_i \chi\, \partial_j f + \partial_i f\, \partial_j \chi \big) + \sum_i b_i \big( f\, \partial_i \chi + \chi\, \partial_i f \big) + c \chi f.
\end{aligned}
\]

By optional stopping, $(M_{t \wedge \tau_R}^{\chi f}, \mathcal{F}_t)_{t \geqslant 0}$ is a martingale. Moreover, we get for $s \leqslant t$

\[
\mathbb{E}^x \big( M_{t \wedge \tau_R}^f \,\big|\, \mathcal{F}_s \big)
= \mathbb{E}^x \big( M_{t \wedge \tau_R}^{\chi f} \,\big|\, \mathcal{F}_s \big)
= M_{s \wedge \tau_R}^{\chi f}
= M_{s \wedge \tau_R}^f.
\]

c) A diffusion operator $L$ satisfies $c = 0$. Thus, the calculation for $L(\chi f)$ in part b) shows that

\[
L(\chi f) = \frac12 \sum_{i,j} a_{ij}\, \partial_i \partial_j (\chi f) + \sum_i b_i\, \partial_i (\chi f).
\]

For the bracket we note that $d\langle M^u, M^\varphi \rangle_t = dM_t^u\, dM_t^\varphi$ (by the definition of the bracket process), and the latter we can calculate with the rules for Itô differentials. We have

\[
dM_t^u = \sum_{j,k} \partial_j u(X_t)\, \sigma_{jk}(X_t)\, dB_t^k, \qquad
dM_t^\varphi = \sum_{l,m} \partial_l \varphi(X_t)\, \sigma_{lm}(X_t)\, dB_t^m.
\]

Thus, using that all terms containing $(dt)^2$ and $dB_t^k\, dt$ are zero, we get

\[
\begin{aligned}
dM_t^u\, dM_t^\varphi
&= \sum_{j,k} \sum_{l,m} \partial_j u(X_t)\, \partial_l \varphi(X_t)\, \sigma_{jk}(X_t)\, \sigma_{lm}(X_t)\, dB_t^k\, dB_t^m \\
&= \sum_{j,l} \partial_j u(X_t)\, \partial_l \varphi(X_t) \Big( \sum_k \sigma_{jk}(X_t)\, \sigma_{lk}(X_t) \Big)\, dt \\
&= \sum_{j,l} a_{jl}(X_t)\, \partial_j u(X_t)\, \partial_l \varphi(X_t)\, dt,
\end{aligned}
\]

where $a_{jl} = \sum_k \sigma_{jk}(X_t)\, \sigma_{lk}(X_t) = (\sigma \sigma^\top)_{jl}$. ($x \cdot y$ denotes the Euclidean scalar product and $\nabla = (\partial_1, \dots, \partial_d)^\top$, so the last line reads $\nabla u \cdot a \nabla \varphi\, dt$.)


Bibliography

[1] Chung, K.L.: Lectures from Markov Processes to Brownian Motion. Springer, New York

1982. Enlarged and reprinted as [2].

[2] Chung, K.L., Walsh, J.B.: Markov Processes, Brownian Motion, and Time Symmetry

(Second Edition). Springer, Berlin 2005. This is the 2nd edn. of [1].

indépendants et stationnaires. In: Séminaire de Probabilités, Springer, Lecture Notes in Mathematics 381, Berlin 1974, 40–77.

[4] Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products (4th corrected and

enlarged edn). Academic Press, San Diego 1980.

[5] Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer, New York

1988.

[6] Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equa-

tions. Springer, New York 1983.

[7] Protter, P.E.: Stochastic Integration and Differential Equations (Second Edn.). Springer,

Berlin 2004.

[8] A. Rényi: Probability Theory. Dover, Mineola (NY) 2007. (Reprint of the 1970 edition published by North-Holland and Akadémiai Kiadó.)

[9] Rudin, W.: Principles of Mathematical Analysis. McGraw-Hill, Auckland 1976 (3rd edn).

[10] Rudin, W.: Real and Complex Analysis. McGraw-Hill, New York 1986 (3rd edn).

[11] Schilling, R.L.: Measures, Integrals and Martingales. Cambridge University Press, Cam-

bridge 2011 (3rd printing with corrections).

[12] Schilling, R.L. and Partzsch, L.: Brownian Motion. An Introduction to Stochastic Pro-

cesses. De Gruyter, Berlin 2012.

[13] Trèves, F.: Topological Vector Spaces, Distributions and Kernels. Academic Press, San

Diego (CA) 1990.

