——–
An introduction
to option pricing with martingales
——–
Christophe Giraud
Contents

1 Conditional expectation
  1.1 Discrete setting
    1.1.1 Conditioning against a random variable
    1.1.2 Conditioning against several random variables
  1.2 General case
    1.2.1 Extension to the case where there exists a joint density
    1.2.2 General case
  1.3 Exercises
2 Martingales
  2.1 Information, filtration, stopping time
    2.1.1 Information, filtration
    2.1.2 Stopping time
  2.2 Martingales
    2.2.1 Definition
    2.2.2 Optional stopping theorem
  2.3 Exercises
    2.3.1 Galton-Watson genealogy
    2.3.2 With the optional stopping theorem
6 Brownian motion
  6.1 Continuous-time processes
  6.2 Brownian motion
    6.2.1 Gaussian law
    6.2.2 Definition of Brownian motion
    6.2.3 Properties of Brownian motion
  6.3 Continuous-time martingales
  6.4 Exercises
    6.4.1 Basic properties
    6.4.2 Quadratic variation
7 Itô calculus
  7.1 Problem
  7.2 Itô's integral
  7.3 Itô processes
  7.4 Girsanov formula
    7.4.1 Stochastic exponentials
    7.4.2 Girsanov formula
  7.5 Exercises
8 Black-Scholes model
  8.1 Setting
    8.1.1 The Black-Scholes model
    8.1.2 Portfolios
    8.1.3 Risk-neutral probability
  8.2 Price of a European option in the Black-Scholes model
9 Appendix
  9.1 Convergence of random variables
    9.1.1 Convergence a.s.
    9.1.2 Convergence in L²
    9.1.3 Convergence in probability
    9.1.4 Convergence in distribution
    9.1.5 Relationships
  9.2 Construction of Itô's integral
    9.2.1 Setting
    9.2.2 Integration of elementary processes
    9.2.3 Extension to L²_T
E(X)    expectation of X
X ⊥ Y   X is independent of Y
i.e.    that is
i.i.d.  independent and identically distributed
a.s.    almost surely (with probability 1)
P∗      risk-neutral probability
iff     if and only if
s.t.    stopping time
r.v.    random variable
Chapter 0

Goal of the lecture
The goal: to give a comprehensive introduction to option pricing with martingales. The first two chapters provide the mathematical background (conditional expectation, martingales). Chapters 3, 4 and 5 focus on the completely discrete setting. We use the risk-neutral probability approach to compute the price of European and American options.
1. A European company manages its activity in euros, but signs its contracts in dollars, payable on receipt. Between today and the receipt, the euro/dollar exchange rate may fluctuate: the company thus has to face an exchange risk. If it does not want to bear this risk, the company will sign a contract that protects it against it.
2. The market price of copper fluctuates dramatically. A copper mine may wish to protect itself against these fluctuations. The manager of the mine will then sign a contract that guarantees him a minimum price for his copper.
• when S_N < K: the owner of the option has the right to buy at price K an asset that he could buy for less in the market. This right is worthless; he does not exercise it, and nothing happens.
• when S_N ≥ K: the owner of the call can buy an asset for less than the market price, which is interesting. The seller of the option must then buy a share at price S_N and sell it at price K to the owner of the option. Everything happens as if the seller were giving S_N − K to the owner.
In short: at time n = 0 the owner gives C to the seller of the call. At time N, he receives the maximum of S_N − K and 0, which we denote (S_N − K)+. The function f = (S_N − K)+ is called the payoff.
European put: a contract that gives its owner the right (but not the obligation) to sell an asset at the fixed price K (strike) at time N (maturity). This contract has a price C (the premium). In this case the payoff is f = (K − S_N)+.
American call (resp. put): a contract that gives the right (without obligation) to buy (resp. sell) an asset at any time before time N (maturity) at the fixed price K. This contract has a price.
Exotic options: there exist many other options, called exotic options, for example the collar option, with payoff f = min(max(K_1, S_N), K_2), or the Boston option, with payoff f = (S_N − K_1)+ − (K_2 − K_1), where K_1 < K_2, etc.
XN ≥ g(SN ).
Assume that, either when S_N = S+ or when S_N = S−, the value of the portfolio is strictly larger than the payoff g(S_N). Then the seller could earn money with positive probability and with no risk of losing money. We assume that this is impossible (the market is at equilibrium); we say in this case that the market is arbitrage-free. As a consequence, we must have X_N = g(S_N), which in turn forces (β, γ) to satisfy the equations
  β e^{rN} + γ S+ = g(S+)
  β e^{rN} + γ S− = g(S−).
Fundamental remark: neither the price C nor the portfolio (β, γ) depends on the probability that the asset S takes the value S+ or S−!
Why? Because our hedging strategy works whatever the evolution of the market, not only on average. If we want to meet our obligation in every case, the probability of a rise or a fall does not matter: our hedging must work both when the price rises and when it falls. There is no probabilistic argument here (≠ average hedging via diversification).
Nevertheless, the price C of the option can be viewed as the expected value of the payoff under some artificial probability, the risk-neutral probability. This remark is the cornerstone of the pricing methods developed below.
Let us focus on a European call. We define the risk-neutral probability P∗ by
  P∗(S_N = S+) = (e^{rN} S_0 − S−)/(S+ − S−) =: p∗ and P∗(S_N = S−) = 1 − p∗.
We then have
  C = e^{−rN} E∗((S_N − K)+),
i.e. the price is the discounted¹ expected payoff under P∗.
¹ also called the "present value"
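A minimal numerical sketch of the one-period computation above. The figures (S_0 = 100, S+ = 120, S− = 90, r = 0.05, N = 1, K = 100) are illustrative choices, not values from the text; the code solves the two replication equations for (β, γ) and checks that the cost of the hedge coincides with the discounted expected payoff under P∗:

```python
import math

def one_period_call(S0, S_up, S_down, r, N, K):
    """Two-state, one-period model: replicate a European call and
    compare the cost of the hedge with the risk-neutral price."""
    g = lambda s: max(s - K, 0.0)  # call payoff

    # Solve beta*e^{rN} + gamma*S_up = g(S_up), beta*e^{rN} + gamma*S_down = g(S_down)
    gamma = (g(S_up) - g(S_down)) / (S_up - S_down)
    beta = (g(S_down) - gamma * S_down) / math.exp(r * N)
    cost = beta + gamma * S0  # initial value of the hedging portfolio

    # Risk-neutral probability p* = (e^{rN} S0 - S-) / (S+ - S-)
    p_star = (math.exp(r * N) * S0 - S_down) / (S_up - S_down)
    price = math.exp(-r * N) * (p_star * g(S_up) + (1 - p_star) * g(S_down))
    return cost, price, p_star

cost, price, p_star = one_period_call(S0=100, S_up=120, S_down=90, r=0.05, N=1, K=100)
assert abs(cost - price) < 1e-9  # hedging cost = discounted expectation under P*
```

Note that changing the real-world probability of a rise leaves both `cost` and `price` unchanged, in line with the fundamental remark above.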
Chapter 1
Conditional expectation
The goal: to formalize the notion of conditional expectation given some information I.
Informally, the conditional expectation of a random variable (r.v.) X given I represents
the ’average value expected for X when one knows the information I’.
Example: throw two dice. Write X1 and X2 for the values of the first and the second die, and set S = X1 + X2. If you have no information, the average value expected for S is
E(S) = E(X1 ) + E(X2 ) = 7.
Assume now that you know the value of X1 . Then, the average value expected for S
knowing the value of X1 (our information) is
X1 + E(X2 ) = X1 + 3.5 .
The latter quantity is what we call the ’conditional expectation of S knowing X1 ’.
Notation: Ω will represent the set of all possible outcomes; we will write F for the set of all possible events (F is a so-called σ-algebra) and P(A) for the probability that the event A ∈ F occurs.
This quantity is a real number that represents ”the average value expected for X knowing
that the event B has occurred”.
- Example: roll a six-sided die and write X for the obtained value. Write B for the event "X is not smaller than 3". Then the expected value of X knowing B is E(X | B) = Σ_{i=1}^{6} i P(X = i | B). Since P(B) = 2/3, we have
    P(X = i | B) = (3/2) P({X = i} ∩ B) = 0 if i = 1 or 2, and 1/4 otherwise.
- Example: Let Z be a r.v. with uniform law on {1, . . . , n} (i.e. P(Z = i) = 1/n for i = 1, . . . , n) and ε be a r.v. independent of Z, such that P(ε = 1) = p and P(ε = −1) = 1 − p. We set X = εZ. Then X is a r.v. with values in {−n, . . . , n}. Let us compute E(X | Z). For j ∈ {1, . . . , n},
    E(X | Z = j) = Σ_{i=−n}^{n} i P(X = i | Z = j)
                 = −j P(X = −j | Z = j) + j P(X = j | Z = j)
                 = −j P(ε = −1 | Z = j) + j P(ε = 1 | Z = j)
                 = −j(1 − p) + jp = j(2p − 1),
  so that E(X | Z) = (2p − 1) Z.
——————
Remark: we usually write {Z = z_j} := {ω ∈ Ω : Z(ω) = z_j} for the event "Z takes the value z_j". We also usually write P(X = x_i, Z = z_j) for P({X = x_i} ∩ {Z = z_j}).
We will extend the previous definition to the case where we deal with several random variables. In this case, formula (1.1) reads
  E(X | Z_1 = z_{j_1}, . . . , Z_n = z_{j_n}) = Σ_{i=1}^{m} x_i P(X = x_i | Z_1 = z_{j_1}, . . . , Z_n = z_{j_n}).
Recall that this quantity is a real number.
Definition 1.2 Conditional expectation of X given Z1 , . . . , Zn .
We call conditional expectation of X given Z1 , . . . , Zn the random variable defined by:
E (X | Z1 , . . . , Zn ) (ω) := h(Z1 (ω), . . . , Zn (ω)),
where h : Rn → R is defined by h(zj1 , . . . , zjn ) = E (X | Z1 = zj1 , . . . , Zn = zjn ) .
In particular,
P(X ∈ dx, Z ∈ dz) := P(X ∈ [x, x + dx], Z ∈ [z, z + dz]) = f(X,Z) (x, z) dx dz .
Note that the marginal laws can be computed in the following way:
  P(X ∈ dx) = ( ∫_{z∈R} f_{(X,Z)}(x, z) dz ) dx and P(Z ∈ dz) = ( ∫_{x∈R} f_{(X,Z)}(x, z) dx ) dz.
——————
When we deal with random variables that have a density, the probabilities P(X = x) and P(Z = z) are equal to zero, so we cannot define the quantity P(X = x | Z = z). To bypass this difficulty we will define another quantity P(X ∈ dx | Z = z) that will represent "the probability that X ∈ [x, x + dx] given Z = z".
In view of P(Z ∈ dz) = ( ∫_{x∈R} f_{(X,Z)}(x, z) dx ) dz and P(X ∈ dx, Z ∈ dz) = f_{(X,Z)}(x, z) dx dz, it is natural to define P(X ∈ dx | Z = z) by
  P(X ∈ dx | Z = z) := P(X ∈ dx, Z ∈ dz) / P(Z ∈ dz) = [ f_{(X,Z)}(x, z) / ∫_{y∈R} f_{(X,Z)}(y, z) dy ] dx.
Definition 1.3 We call conditional expectation of X given Z the random variable defined by
  E(X | Z)(ω) = h(Z(ω)),
where h : R → R is defined by h(z) = ∫_{x∈R} x P(X ∈ dx | Z = z).
This is a "continuous-space" version of the formula of Section 1.1.1.
Let us check, for example, that we recover the formula of Section 1.1.1. For X : Ω → {x_1, . . . , x_m} and Z : Ω → {z_1, . . . , z_k}, we define h by
  h(z_1) = E(X | Z = z_1)
  ...
  h(z_k) = E(X | Z = z_k)
  h(z) = 0 if z ∉ {z_1, . . . , z_k}.
The formula of Section 1.1.1 defines the conditional expectation of X given Z by E(X | Z)(ω) := h(Z(ω)). Does this definition fit with the one given above (1.2.2)? Let us check that h(Z) fulfills conditions a') and b') (which implies that the two definitions fit, since there exists a unique random variable fulfilling these two conditions).
Condition a') is clearly satisfied by h(Z)!
Let us check condition b'): do we have E(h(Z)f(Z)) = E(Xf(Z)) for any function f ?
Check that
  h(Z(ω)) = Σ_{j=1}^{k} 1_{{Z(ω)=z_j}} E(X | Z = z_j),
where each 1_{{Z=z_j}} is a random variable and each E(X | Z = z_j) is a non-random real number. Therefore
  E(h(Z)f(Z)) = E( Σ_{j=1}^{k} 1_{{Z=z_j}} E(X | Z = z_j) f(Z) )
              = Σ_{j=1}^{k} f(z_j) E(X | Z = z_j) E(1_{{Z=z_j}})   [since 1_{{Z=z_j}} f(Z) = 1_{{Z=z_j}} f(z_j)]
              = Σ_{j=1}^{k} f(z_j) E(X 1_{{Z=z_j}})                [since E(X | Z = z_j) P(Z = z_j) = E(X 1_{{Z=z_j}})]
              = E( Σ_{j=1}^{k} f(z_j) X 1_{{Z=z_j}} ).
Note that Σ_{j=1}^{k} f(z_j) 1_{{Z(ω)=z_j}} = f(Z(ω)), which leads to E(h(Z)f(Z)) = E(Xf(Z)). Condition b') is then satisfied. To conclude, we have checked that the two definitions fit.
——————
Informally, E (X | Fn ) represents the average value expected for X when one knows the
values of Z1 , . . . , Zn . Let us come back to the example of the introduction.
Example: We throw two dice:
  X1 = value of the first die,
  X2 = value of the second die,
  S = total value = X1 + X2.
Since X1 and X2 are independent, we expect that E(S | X1) = X1 + E(X2), since "X1 gives no information on X2".
Write h(x) = x + E(X2), and check that h(X1) fulfills conditions a) and b).
• Condition b): let us check that E(h(X1)f(X1)) = E(Sf(X1)):
  E(h(X1)f(X1)) = E((X1 + E(X2)) f(X1)) = E(X1 f(X1)) + E(X2) E(f(X1)) = E(X1 f(X1)) + E(X2 f(X1)),
where the last equality holds since X1 ⊥ X2. Recall that if X1 and X2 are independent, E(f(X1)g(X2)) = E(f(X1)) E(g(X2)).
Conclusion: E(h(X1)f(X1)) = E((X1 + X2)f(X1)) = E(Sf(X1)), so condition b) is fulfilled and thus
  E(S | X1) = h(X1) = X1 + E(X2) = X1 + 7/2.
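The identity E(S | X1) = X1 + 7/2 can also be verified by brute force, averaging S over the 36 equally likely outcomes with a fixed value of X1; a minimal sketch:

```python
from itertools import product
from statistics import mean

# All 36 equally likely outcomes of two dice.
outcomes = list(product(range(1, 7), repeat=2))

# E(S | X1 = x) is the average of S over the outcomes with X1 = x.
for x in range(1, 7):
    cond_exp = mean(x1 + x2 for x1, x2 in outcomes if x1 == x)
    assert cond_exp == x + 3.5  # matches E(S | X1) = X1 + 7/2
```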
2. If X ≤ Y , E (X | Fn ) ≤ E (Y | Fn ).
1.3 Exercises
1. We throw a die and write N for the result (between 1 and 6). Then we throw the die N² times, and write S for the sum of the results (including the first throw). Compute E(S|N) and then E(S).
2. Assume that the random variables (X, Y) have a density
     f(x, y) := n(n − 1)(y − x)^{n−2} 1_{(x,y)∈A},
   where A := {(x, y) ∈ R² : 0 ≤ x ≤ y ≤ 1}. Check that
     E(Y | X) = (n − 1 + X)/n.
3. Let (Xn ; n ∈ N) be a sequence of independent random variables. We focus on the
random walk Sn := X1 + . . . + Xn and set Fn = σ(S1 , . . . , Sn ).
a) Compute E(Sn+1 |Fn ).
b) For any z ∈ C, check that
     E( z^{S_{n+1}} | F_n ) = z^{S_n} E( z^{X_{n+1}} ).
Chapter 2

Martingales
• We write X_n for the fortune of the gambler after time n; X_0 is then his initial fortune. Since "not staking" is considered as a possible play, we have
    X_{n−1} = Σ_{j∈J} M_j^{(n)}.
  After time n his fortune is X_n = Σ_{j∈J} M_j^{(n)} R_j^{(n)}.
• The information that the gambler has after time n is the value of the returns R^{(1)}, . . . , R^{(n)}. We write F_n := σ(R^{(1)}, . . . , R^{(n)}) for this information.
The problem:
1. Is there a way for the gambler to choose his stakes so that he wins money on average, i.e. so that E(X_n) > X_0?
2. Is there a way to "stop gambling" so that, if T is the (random) time at which one stops, E(X_T) > X_0?
Answer to the first question:
Let us compute the conditional expectation of the gambler's fortune after time n, given F_{n−1}:
  E(X_n | F_{n−1}) = E( Σ_{j∈J} M_j^{(n)} R_j^{(n)} | F_{n−1} )
                   = Σ_{j∈J} E( M_j^{(n)} R_j^{(n)} | F_{n−1} )
                   = Σ_{j∈J} M_j^{(n)} E( R_j^{(n)} | F_{n−1} )   since M_j^{(n)} is F_{n−1}-measurable
                   = Σ_{j∈J} M_j^{(n)} E( R_j^{(n)} )             since R_j^{(n)} is independent of F_{n−1}
                   ≤ Σ_{j∈J} M_j^{(n)} = X_{n−1}                  since E( R_j^{(n)} ) ≤ 1.
1_A : Ω → {0, 1}
  ω ↦ 1_A(ω) = 1 if ω ∈ A, and 0 if ω ∉ A.
In the same way, if X : Ω → R is a random variable and B is a subset of R, we write 1_{{X∈B}} for the random variable:
1_{{X∈B}} : Ω → {0, 1}
  ω ↦ 1 if X(ω) ∈ B, and 0 if X(ω) ∉ B.
Proposition 2.2
Proof :
1. We shall check that X_T : Ω → R is F_T-measurable, i.e. that {X_T ∈ A} ∈ F_T for any A ⊂ R:
     {X_T ∈ A} ∈ F_T ⇔ {X_T ∈ A} ∩ {T = n} ∈ F_n for all n ∈ N
                     ⇔ {X_n ∈ A} ∩ {T = n} ∈ F_n for all n ∈ N,
   which is true since both {X_n ∈ A} and {T = n} belong to F_n.
2. and 3.: exercise. □
2.2 Martingales
2.2.1 Definition
We observe a random process (Xn )n∈N . We write Fn for our information at time n. This
information contains σ(X0 , X1 , . . . , Xn ), given by the values X0 , X1 , . . . , Xn , that is to
say σ(X0 , X1 , . . . , Xn ) ⊂ Fn . Usually, our only information is X0 , X1 , . . . , Xn so that
Fn = σ(X0 , X1 , . . . , Xn ).
Exercise: Let (Mn )n∈N be a martingale. Check that for any positive n and p
E(Mn+p |Fn ) = Mn .
Example:
• The gambler's fortune in a casino is a supermartingale. In this case, σ(X_1, . . . , X_n) ⊂ F_n but σ(X_1, . . . , X_n) ≠ F_n.
• The random walk Sn = Y1 + . . . + Yn − nm, where the (Yi )’s are iid and m = E (Y1 ).
The process (Sn )n∈N is a martingale, with Fn = σ(S0 , S1 , . . . , Sn ). Indeed,
E (Sn+1 | Fn ) = E (Y1 + . . . + Yn+1 − (n + 1)m | Fn )
= E (Sn + Yn+1 − m | Fn ) ,
with Sn Fn -measurable, and Yn+1 independent of Fn . Therefore
  E(S_{n+1} | F_n) = S_n + E(Y_{n+1}) − m = S_n,  since E(Y_{n+1}) = m.
• Assume that X is a random variable fulfilling E(|X|) < ∞ and consider a given filtration (F_n)_{n∈N} (e.g. F_n = σ(Z_0, . . . , Z_n)). Then the process defined by M_n = E(X | F_n) is a martingale, since E(M_{n+1} | F_n) = E( E(X | F_{n+1}) | F_n ) = E(X | F_n) = M_n.
One may wonder if equality (2.2), which is true at any fixed time n, still holds at a random time T, i.e. do we have
  E(X_T) = E(X_0)?
This is certainly not true at an arbitrary random time T. For example, when T is the time at which X_n reaches its maximum, we cannot have E(X_T) = E(X_0) unless X_0 = X_T a.s., since X_T ≥ X_0. Nevertheless, we shall see below that this equality turns out to be true when T is a stopping time (modulo some restrictions).
2. When T is bounded a.s., i.e. when there exists N ∈ N such that P(T ≤ N ) = 1,
3. If P(T < ∞) = 1 and if there exists Y such that |Xn∧T | ≤ Y for any n ∈ N, with
E(Y ) < ∞
then E (XT ) = E (X0 ) (resp. ≤) .
Remarks:
Write p_n ∈ {tail, head} for the result at time n and X_n for our fortune. We have
  X_{n+1} = 2X_n if p_{n+1} = tail, and 0 if p_{n+1} = head,
i.e. X_{n+1} = 2X_n 1_{{p_{n+1}=tail}}. Since X_0 = 1, iterating the previous equality gives X_n = 2^n 1_{{p_1=tail, . . . , p_n=tail}}. Let us check that (X_n) is a martingale:
  E(X_{n+1} | F_n) = E( 2X_n 1_{{p_{n+1}=tail}} | F_n )
                   = 2X_n E( 1_{{p_{n+1}=tail}} )   since 2X_n is F_n-measurable and 1_{{p_{n+1}=tail}} is independent of F_n
                   = X_n                            since P(p_{n+1} = tail) = 1/2.
The time T = inf{n ≥ 0 : X_n = 0} is a stopping time fulfilling P(T < ∞) = 1. Since X_{T(ω)}(ω) = 0, we have E(X_T) = 0, so that
  E(X_T) = 0 ≠ E(X_0) = 1.
Why? The preceding result cannot be applied, since |X_{n∧T}| cannot be dominated by any Y with finite mean.
Exercise: With the help of the first part of the optional stopping theorem and Fatou's lemma, prove that when (X_n) is a positive (super)martingale and T a stopping time, we always have E(X_T) ≤ E(X_0) (without any condition on T).
——————
In practice: how is it used?
Usually, we choose some suitable martingale (X_n) and stopping time T, and then apply the equality E(X_T) = E(X_0) (warning: check the hypotheses of the optional stopping theorem!).
——————
Example: the bankruptcy problem. Here is a simple example of use of the optional stopping theorem. Two persons A and B stake 1$ on a coin at each toss. At the initial time, A has a$ and B has b$. They play until one of them is bankrupt.
A's fortune: X_n = a + ε_1 + . . . + ε_n, with (ε_i) i.i.d. and
  ε_i = 1 if A wins at time i, and −1 if A loses at time i.
(X_n) is a martingale, and the time T of bankruptcy, namely T = inf{n ≥ 0 : X_n = 0 or a + b}, is a stopping time. Furthermore, |X_{n∧T}| ≤ a + b, so we can apply the optional stopping theorem, which ensures that E(X_T) = E(X_0) = a. As a consequence, since E(X_T) = (a + b) P(X_T = a + b),
  P(A wins B's fortune) = a/(a + b).
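The conclusion P(A wins B's fortune) = a/(a + b) is easy to check by simulation. The sketch below uses illustrative values a = 3 and b = 2:

```python
import random

def ruin_probability(a, b, trials=20000, seed=0):
    """Estimate P(A wins B's fortune): A starts with a dollars and the
    fair game stops when A's fortune hits 0 or a + b."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = a
        while 0 < x < a + b:
            x += 1 if rng.random() < 0.5 else -1
        wins += (x == a + b)
    return wins / trials

est = ruin_probability(a=3, b=2)
assert abs(est - 3 / 5) < 0.02  # optional stopping predicts a/(a+b) = 0.6
```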
2.3 Exercises
2.3.1 Galton-Watson genealogy
We focus henceforth on a simple population model. At the initial time n = 0, there is one individual (the ancestor). We write Z_n for the number of individuals at the n-th generation. We assume that each individual gives birth to children independently of the others. We also assume that the number of children of an individual follows some "universal" law µ; universal means that it is the same for everybody.
Let us formalize this situation. We write X_k^{(n)} for the number of children of the individual k present at generation n. The previous hypothesis says that the random variables
  ( X_k^{(n)} ; k ∈ N, n ∈ N )
are independent and that their law is µ, i.e. P(X_k^{(n)} = i) = µ(i). Furthermore, note that we have the formula
  Z_{n+1} = X_1^{(n)} + · · · + X_{Z_n}^{(n)}.
Let X be a random variable distributed according to µ (i.e. P(X = i) = µ(i)). We set m := E(X), and write G for its generating function, defined by G(s) := E(s^X).
1. Express G(s) in terms of the µ(k), k ∈ N. What are the values of G(1) and G(0)?
2. Check that G is non-decreasing and convex. What is the value of G′(1)? Draw the functions x ↦ G(x) and x ↦ x in the cases m ≤ 1 and m > 1. Pay attention to the behavior of G around 1.
3. We set F_n := σ( X_k^{(1)}, . . . , X_k^{(n−1)} ; k ∈ N ), which represents the information contained in the genealogical tree up to generation n − 1. Compute E(s^{Z_{n+1}} | F_n). Deduce that
     G_n(s) := E( s^{Z_n} ) = G ◦ · · · ◦ G(s)   (n times).
4. Express in words what G_n(0) corresponds to. We now focus on the probability of extinction, i.e. on p := P(∃ n ∈ N : Z_n = 0). Express p in terms of the G_n(0)'s.
5. With the help of the figure of question 2, check that p = 1 when m ≤ 1, and p = G(p) < 1 when m > 1.
7. We admit that the limit M_∞(ω) := lim_{n→∞} M_n(ω) exists a.s. When m ≤ 1, what is the value of M_∞? Of E(M_∞)? Compare E(M_∞) and lim_{n→∞} E(M_n), and give a heuristic explanation of this result.
9. For any λ > 0, prove with the theorem of dominated convergence that
L(λ) = G(L(λ/m)).
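Questions 4 and 5 characterize the extinction probability as p = lim_{n→∞} G_n(0), the limit of the iterates of G started at 0. A minimal numerical sketch; the offspring law µ below is an illustrative choice, not taken from the text:

```python
def extinction_probability(mu, tol=1e-12, max_iter=10000):
    """Extinction probability of a Galton-Watson tree as lim G_n(0),
    for an offspring law mu given as a dict {children: probability}."""
    G = lambda s: sum(prob * s ** k for k, prob in mu.items())
    p = 0.0
    for _ in range(max_iter):
        p_next = G(p)
        if abs(p_next - p) < tol:
            break
        p = p_next
    return p

# Supercritical law: m = 0.25*0 + 0.25*1 + 0.5*2 = 1.25 > 1, and the
# smallest root of G(p) = p is 1/2, so extinction is not certain.
mu = {0: 0.25, 1: 0.25, 2: 0.5}
assert abs(extinction_probability(mu) - 0.5) < 1e-6
```

As expected from question 5, a law with m ≤ 1 (for example {0: 0.6, 2: 0.4}, with m = 0.8) gives p = 1.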
- Wald identity
We focus on the random walk Sn = X1 + · · · + Xn , with (Xn )n∈N i.i.d. We set m := E(Xn )
and T := inf{n ∈ N : Sn ≥ a} for a given a > 0.
a) What can you say about T ?
b) Assume that m > 0. What can we say about (S_n − nm)_{n∈N}? Derive the equality E(S_{n∧T}) = m E(n ∧ T).
c) Prove that E(S_n 1_{{T>n}}) ≤ a P(T > n), and then justify with the law of large numbers the equality (the so-called Wald identity)
     E(T) = (1/m) E(S_T)   (focus only on the case where X_n ≥ 0 for all n).
d) We assume now that m = 0. Prove that (S_n) is a martingale. Do we have E(S_T) = E(S_0)? Why? Fix ε > 0 and set T_ε = inf{k ∈ N : S_k + εk ≥ a}. Prove that T ≥ T_ε and (use the Wald identity)
     E(T) ≥ E(T_ε) = (1/ε) E(S_{T_ε} + ε T_ε) ≥ a/ε.
   What is the value of E(T)?
Chapter 3

The B-S market

Goals: to define the B-S market and to relate the mathematical concept of "risk-neutral probability" to the economic concepts of "arbitrage" and "completeness".
We write ∆B_n := B_n − B_{n−1} and ∆S_n := S_n − S_{n−1} for the variations of B and S between times n − 1 and n. The assets evolve according to the dynamics
  ∆B_n = r_n B_{n−1}
  ∆S_n = ρ_n S_{n−1}    (3.1)
with r_n (interest rate) and ρ_n (return) random in general. We write F_n for the information we have at time n. Since S_0, . . . , S_n are known at time n, we have σ(S_0, . . . , S_n) ⊂ F_n.
The asset B is said to be non-risky, because its evolution is predictable: at time n − 1 we know the value of the interest rate r_n for time n. The variable r_n is thus F_{n−1}-measurable. By contrast, S is a risky asset: at time n − 1 we do not know the value of ρ_n. The random variable ρ_n is thus F_n-measurable, but not F_{n−1}-measurable.
Remark: here the time n is discrete. Note that we can see ∆X_n = X_n − X_{n−1} as the derivative of X in discrete time. We also assume henceforth that B and S can only take a finite number of values; in other words, we are in a completely discrete setting. Note that this matches reality: first, the quotations are given with only finite precision; second, the quotations occur in discrete time.
——————
We can rewrite the dynamics as follows: equation (3.1) gives B_n − B_{n−1} = r_n B_{n−1}, so that B_n = (1 + r_n) B_{n−1}. Iterating this formula gives B_n = (1 + r_n) · · · (1 + r_1) B_0. Setting
  U_n = Σ_{k=1}^{n} r_k and V_n = Σ_{k=1}^{n} ρ_k,
we obtain
  B_n = B_0 ε_n(U)
  S_n = S_0 ε_n(V)
with ε_n(U) = (1 + ∆U_n) · · · (1 + ∆U_1). In analogy with the continuous-time setting, we will call the random variable ε_n(U) a "stochastic exponential".
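In this discrete setting the stochastic exponential is a finite product, so it is straightforward to compute. A minimal sketch, with illustrative rates, checking B_n = B_0 ε_n(U) against the recursion B_n = (1 + r_n)B_{n−1}, together with the inverse identity ε_n(X) ε_n(−X∗) = 1 used below (where ∆X∗_k = ∆X_k/(1 + ∆X_k)):

```python
def stochastic_exponential(increments):
    """eps_n(U) = (1 + dU_1) * ... * (1 + dU_n) for the given increments dU_k."""
    eps = 1.0
    for du in increments:
        eps *= 1.0 + du
    return eps

# B_n = B_0 * eps_n(U) matches the recursion B_n = (1 + r_n) * B_{n-1}.
B0, rates = 100.0, [0.01, 0.02, 0.015]
B = B0
for r in rates:
    B *= 1.0 + r
assert abs(B - B0 * stochastic_exponential(rates)) < 1e-9

# Inverse identity: eps_n(X) * eps_n(-X*) = 1, with dX*_k = dX_k / (1 + dX_k).
dX = [0.10, -0.05, 0.20]
dX_star = [d / (1.0 + d) for d in dX]
assert abs(stochastic_exponential(dX) * stochastic_exponential([-d for d in dX_star]) - 1.0) < 1e-12
```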
3.1.2 Portfolio
Let us consider a portfolio Π = (βn , γn )n≤N made of βn units of B and γn units of S at
time n. Its value at time n is
XnΠ = βn Bn + γn Sn .
We manage the portfolio in the following way. At time n, we have βn units of asset B
and γn units of asset S. We then decide to reinvest for the next quotation, namely we
choose βn+1 and γn+1 . This choice occurs at time n. This means that βn+1 and γn+1 are
Fn -measurable (or equivalently βn and γn are Fn−1 -measurable).
At time n, the value of the portfolio is X_n^Π = β_n B_n + γ_n S_n. After the reinvestment, its value is β_{n+1} B_n + γ_{n+1} S_n. It is natural to assume that when we reinvest, no value is added or lost, i.e.
  β_n B_n + γ_n S_n = β_{n+1} B_n + γ_{n+1} S_n.
Such a trading strategy is said to be self-financed. In the following definition, the self-financing condition is expressed at time n − 1 instead of n.
Warning: the value of the portfolio does not change during the reinvestment, but it changes between two consecutive times, due to the fluctuations of B and S. As an exercise, check that Π is self-financed if and only if the variation of X^Π between times n − 1 and n is
  ∆X_n^Π = β_n ∆B_n + γ_n ∆S_n.
The second condition is equivalent to "(S_n/B_n)_{n≤N} is a martingale under P∗". The next proposition gives a condition on U_n and V_n that ensures that a probability is risk-neutral. To prove this result we need the following properties of the stochastic exponential.
Besides,
  ∆(X − X∗ − [X, X∗])_n = ∆X_n − ∆X∗_n − ∆X_n ∆X∗_n
                        = ∆X_n − ∆X_n/(1 + ∆X_n) − (∆X_n)²/(1 + ∆X_n)
                        = 0.
Since for any process Z we have Z_n = Z_0 + Σ_{k=1}^{n} ∆Z_k, it follows that (X − X∗ − [X, X∗])_n = 0, and finally ε_n(X) ε_n(−X∗) = ε_n(0) = 1.
3. Using that εn−1 (X) is Fn−1 -measurable:
  S_n/ε_n(U) = S_0 ε_n(V)/ε_n(U)
             = S_0 ε_n(V) ε_n(−U∗)          (Lemma 2)
             = S_0 ε_n(V − U∗ − [V, U∗])    (Lemma 1)
so, according to Lemma 3.1.3, the process (S_n/ε_n(U))_{n≤N} is a martingale under P∗ iff (ε_n(V − U∗ − [V, U∗]))_{n≤N} is a martingale under P∗. A computation gives
  V_n − U∗_n − [V, U∗]_n = Σ_{k=1}^{n} (∆V_k − ∆U_k)/(1 + ∆U_k),
and (ε_n(V − U∗ − [V, U∗]))_{n≤N} is a martingale iff E∗(∆V_n − ∆U_n | F_{n−1}) = 0, namely iff V − U is a martingale under P∗. Putting the pieces together leads to the claimed result. □
——————
The next proposition investigates the statistical evolution, under a risk-neutral probability, of the value of a portfolio based on a self-financed strategy.
Proposition 3.2 Assume that there exists a risk-neutral probability P∗. Then, if Π is a self-financed portfolio, its so-called discounted value (or present value) ε_n(U)^{−1} X_n^Π is a martingale under P∗.
3.2.2 Arbitrage
Henceforth, we focus on the evolution of the market until a fixed time N .
In economics, we say that there is an arbitrage opportunity if there is an opportunity to make a profit without any risk of losing money. In mathematical words this becomes:
Definition 3.3 There exists an arbitrage opportunity when there exists a self-financed strategy Π such that X_0^Π = 0, X_N^Π(ω) ≥ 0 for all ω ∈ Ω, and P(X_N^Π > 0) > 0.
The next fundamental theorem relates the economic notion of "arbitrage opportunity" to the mathematical notion of "risk-neutral probability".
Theorem 3.1
There exists no arbitrage opportunity ⇐⇒ There exists at least one risk neutral probability.
The next theorem links this concept to the uniqueness of the risk-neutral probability.
Theorem 3.2 Let us consider an arbitrage-free B-S market. Then
Proof : (⇒) Assume that the market is complete and that there exist two different risk-neutral probabilities P∗ and P0. Since P∗ ≠ P0, there exists an event A such that
  P∗(A) ≠ P0(A).   (3.2)
Set f(ω) = ε_N(U)(ω) 1_{A}(ω). Due to the completeness of the market, there exists a trading strategy Π such that X_N^Π(ω) = f(ω) for every ω ∈ Ω. According to Proposition 3.2, the discounted value of the portfolio Π is a martingale under both P∗ and P0. As a consequence,
  E∗( ε_N(U)^{−1} X_N^Π ) = X_0^Π = E0( ε_N(U)^{−1} X_N^Π ).
Now, on the one hand ε_0(U)^{−1} = 1, and on the other hand ε_N(U)^{−1} X_N^Π = ε_N(U)^{−1} f = 1_{A}. Therefore
  E∗(1_{A}) = X_0^Π = E0(1_{A}),
i.e. P∗(A) = P0(A), which contradicts (3.2). Hence there cannot exist two different risk-neutral probabilities. Besides, since the market is arbitrage-free, there exists at least one risk-neutral probability.
(⇐) Admitted; see e.g. [5], Chap. V, Section 4. □
is a martingale under P0. See the exercises below for a proof. The next example shows how to use this formula.
Example: assume that the interest rates r_n are constant, i.e. r_n = r > 0 for all n. Assume also that the ρ_n's are i.i.d. and set
  m = E(ρ_n) ≥ 0 and σ² = E((ρ_n − m)²) > 0.
3.5 Exercises
3.5.1 Change of probability: Girsanov lemma
Let us prove the Girsanov formula. First note that for any F_p-measurable random variable W, we have
  E0(W) = E(Z_N W) = E(Z_p W).
2. We set G_n := ((r − m)/σ²) M_n and Z_N = ε_N(G). Check that (G_n)_{n≤N} and (ε_n(G))_{n≤N} are martingales under P.
4. We define P∗ according to (3.4). Check (with Girsanov formula) that (Vn − Un )n≤N
is a martingale under P∗ .
Chapter 4

European option pricing

4.1 Problem
We want to compute the fair price of a European option with payoff f at maturity N. Examples of European options:
• European call: f(ω) = (S_N(ω) − K)+
Definition 4.1 (Hedging portfolio) A hedging portfolio (or strategy) Π is a portfolio whose value at maturity is larger than the payoff f, namely X_N^Π(ω) ≥ f(ω) for all ω ∈ Ω.
The fair price of the option then corresponds to the minimal initial value X_0^Π of a self-financed hedging portfolio. In short:
Definition 4.2 (Price of an option) The fair price of a European option with payoff f and maturity N is
  C := inf{ X_0^Π : Π is self-financed and X_N^Π(ω) ≥ f(ω) for all ω ∈ Ω }.
  C = E∗( ε_N(U)^{−1} f ).
2. There exists a self-financed hedging strategy Π∗ with initial value C. The value at time n of the portfolio Π∗ is
     X_n^{Π∗} = E∗( ε_N(U)^{−1} ε_n(U) f | F_n ).
Remarks:
• There exist theoretical formulas giving the composition (β_n∗, γ_n∗)_{n≤N} of the portfolio Π∗, but they are seldom useful.
• Usually, the hard task is to compute P∗. This may be done with the Girsanov formula. Once P∗ is known, we can compute C, at least numerically (using Monte-Carlo methods).
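Once P∗ is known, C = E∗(ε_N(U)^{−1} f) can indeed be estimated by Monte-Carlo, as the remark suggests. A minimal sketch in a binomial model with constant rate r, where under P∗ the returns are i.i.d. with P∗(ρ_n = b) = (r − a)/(b − a) (as in the Cox-Ross-Rubinstein exercise below); all numerical parameters are illustrative:

```python
import random

def mc_european_call(S0, K, r, a, b, N, n_paths=50000, seed=1):
    """Monte-Carlo estimate of C = E*[(1 + r)^(-N) (S_N - K)+] when,
    under P*, each return rho_n equals b with probability (r - a)/(b - a)."""
    p_star = (r - a) / (b - a)
    rng = random.Random(seed)
    disc = (1.0 + r) ** (-N)
    total = 0.0
    for _ in range(n_paths):
        S = S0
        for _ in range(N):
            S *= (1.0 + b) if rng.random() < p_star else (1.0 + a)
        total += max(S - K, 0.0)
    return disc * total / n_paths

price = mc_european_call(S0=100.0, K=100.0, r=0.02, a=-0.10, b=0.15, N=5)
```

The estimate can be checked against the exact expectation obtained by summing over the N + 1 terminal values of S_N.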
and therefore
  C := inf{ X_0^Π : Π is self-financed and X_N^Π(ω) ≥ f(ω) for all ω ∈ Ω } ≥ E∗( ε_N(U)^{−1} f ).   (4.1)
Besides, the market is complete, so there exists a self-financed strategy Π∗ such that X_N^{Π∗}(ω) = f(ω) for any ω ∈ Ω. Since Π∗ is self-financed, (X_n^{Π∗}/ε_n(U))_{n≤N} is a martingale under P∗. As a consequence,
  X_0^{Π∗} = E∗( ε_N(U)^{−1} X_N^{Π∗} ) = E∗( ε_N(U)^{−1} f ).
Now, Π∗ is a self-financed hedging portfolio, so by the very definition of C we have X_0^{Π∗} ≥ C. It follows that
  C ≤ E∗( ε_N(U)^{−1} f ).   (4.2)
Combining (4.1) and (4.2) we get
  C = E∗( ε_N(U)^{−1} f ).
We have proved the first part of the theorem. Concerning the second part, note that Π∗ suits. Moreover, since (X_n^{Π∗}/ε_n(U))_{n≤N} is a martingale under P∗, we have
  X_n^{Π∗}/ε_n(U) = E∗( ε_N(U)^{−1} X_N^{Π∗} | F_n ) = E∗( ε_N(U)^{−1} f | F_n ).
We get the value of Π∗ at time n by multiplying this equality by ε_n(U) and letting ε_n(U) enter the conditional expectation (ε_n(U) is F_n-measurable). □
• Imagine an option with price x > C+: the seller can then earn money without any risk. Indeed, at the end, he will have at least (x − C+) × E∗(ε_N(U)) > 0.
• Imagine an option with price x < C−: the seller is sure to lose money. Indeed, he will lose at least (C− − x) × E∗(ε_N(U)) > 0.
We thus conclude that [C−, C+] is the range of fair prices.
It is actually interesting to develop other strategies in incomplete markets; see for example [3, 4].
4.4 Exercises
4.4.1 Cox-Ross-Rubinstein model
We assume that the interest rates are constant, i.e. r_n = r ≥ 0, and that the returns (ρ_n)_{n≤N} are i.i.d. We also assume that they can only take two values a and b. Our aim is to compute the price and the hedging of an option with payoff f := g(S_N) and maturity N.
1. Prove that

P* is risk neutral ⟺ E*(ρ_n) = r ⟺ p* := P*(ρ_n = b) = (r − a)/(b − a).
3. Assume that π* is the optimal hedging strategy. Check that its value at time n is

X_n^{π*} = (1 + r)^{-(N-n)} F*_{N-n}(S_n).
4. Assume that we are at time n − 1. We know S_0, S_1, . . . , S_{n-1} and we must choose β_n^* and γ_n^*. Check that they must satisfy:

β_n^* B_0 (1 + r)^n + γ_n^* S_{n-1}(1 + a) = (1 + r)^{-(N-n)} F*_{N-n}(S_{n-1}(1 + a))
β_n^* B_0 (1 + r)^n + γ_n^* S_{n-1}(1 + b) = (1 + r)^{-(N-n)} F*_{N-n}(S_{n-1}(1 + b)).
5. Derive that

γ_n^* = (1 + r)^{-(N-n)} [ F*_{N-n}(S_{n-1}(1 + b)) − F*_{N-n}(S_{n-1}(1 + a)) ] / [ (b − a) S_{n-1} ]

and

β_n^* = ( X_{n-1}^{π*} − γ_n^* S_{n-1} ) / ( B_0 (1 + r)^{n-1} ) = ( (1 + r)^{-(N-n+1)} F*_{N-n+1}(S_{n-1}) − γ_n^* S_{n-1} ) / ( B_0 (1 + r)^{n-1} ).
6. In the case of a european call, viz. when g(S_N) = (S_N − K)+, check that

C = S_0 B(k_0, N, p′) − (1 + r)^{-N} K B(k_0, N, p*)

where p′ = p* (1 + b)/(1 + r) and B(k_0, N, p) = Σ_{k=k_0}^N C_N^k p^k (1 − p)^{N-k}.
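A numerical sanity check of this closed formula (a sketch with illustrative parameter values, not part of the exercise): the sums B(k_0, N, p) can be evaluated directly and compared with the price obtained by backward induction on the binomial tree, V_n = (p* V_{n+1}^{up} + (1 − p*) V_{n+1}^{down})/(1 + r).

```python
from math import comb

def crr_call_closed(S0, K, r, a, b, N):
    """Closed-form CRR call price C = S0 B(k0,N,p') - (1+r)^(-N) K B(k0,N,p*),
    with p* = (r-a)/(b-a), p' = p*(1+b)/(1+r) and k0 the smallest number of
    up-moves making the call end in the money."""
    p_star = (r - a) / (b - a)
    p_prime = p_star * (1 + b) / (1 + r)
    # smallest k with S0 (1+b)^k (1+a)^(N-k) > K (N+1 if the call is never in the money)
    k0 = next((k for k in range(N + 1)
               if S0 * (1 + b)**k * (1 + a)**(N - k) > K), N + 1)
    B = lambda k0, N, p: sum(comb(N, k) * p**k * (1 - p)**(N - k)
                             for k in range(k0, N + 1))
    return S0 * B(k0, N, p_prime) - (1 + r)**(-N) * K * B(k0, N, p_star)

def crr_call_induction(S0, K, r, a, b, N):
    """Same price by backward induction on the binomial tree."""
    p_star = (r - a) / (b - a)
    # terminal payoffs indexed by the number k of up-moves
    V = [max(S0 * (1 + b)**k * (1 + a)**(N - k) - K, 0.0) for k in range(N + 1)]
    for n in range(N, 0, -1):
        V = [(p_star * V[k + 1] + (1 - p_star) * V[k]) / (1 + r)
             for k in range(n)]
    return V[0]
```

Both functions agree up to floating-point error, which is exactly the content of the exercise.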
3. Conclude that, as ∆ → 0,

C → S_0 Φ( [T(ρ + σ²/2) + log(S_0/K)] / (σ√T) ) − K e^{-ρT} Φ( [T(ρ − σ²/2) + log(S_0/K)] / (σ√T) ).

We find here the Black-Scholes formula (see Chapter 8).
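The passage to the limit can be observed numerically. The sketch below assumes the usual scaling ∆ = T/N, r = ρ∆, a = ρ∆ − σ√∆, b = ρ∆ + σ√∆ (the precise scaling set in the elided part of the exercise may differ), and compares the binomial price for large N with the limit formula above.

```python
from math import exp, log, sqrt, erf

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S0, K, rho, sigma, T):
    """Limit price S0 Phi(d_plus) - K e^(-rho T) Phi(d_minus)."""
    d_plus = (T * (rho + sigma**2 / 2) + log(S0 / K)) / (sigma * sqrt(T))
    d_minus = (T * (rho - sigma**2 / 2) + log(S0 / K)) / (sigma * sqrt(T))
    return S0 * Phi(d_plus) - K * exp(-rho * T) * Phi(d_minus)

def crr_call(S0, K, rho, sigma, T, N):
    """Binomial price with Delta = T/N, r = rho*Delta, a,b = r -/+ sigma*sqrt(Delta)."""
    dt = T / N
    r = rho * dt
    a, b = r - sigma * sqrt(dt), r + sigma * sqrt(dt)
    p = (r - a) / (b - a)  # equals 1/2 with this scaling
    V = [max(S0 * (1 + b)**k * (1 + a)**(N - k) - K, 0.0) for k in range(N + 1)]
    for n in range(N, 0, -1):
        V = [(p * V[k + 1] + (1 - p) * V[k]) / (1 + r) for k in range(n)]
    return V[0]
```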
When one takes a = −σ∆ and b constant, the limit model is the Merton model based on a Poisson process. In this case C → S_0 P_1 − K e^{-ρT} P_2, where

P_1 = exp[ −(1 + b)(ρ + σ)T/b ] Σ_{i=k_0}^∞ [ (1 + b)(ρ + σ)T/b ]^i / i!
P_2 = exp[ −(ρ + σ)T/b ] Σ_{i=k_0}^∞ [ (ρ + σ)T/b ]^i / i!
Chapter 5
American option pricing
The goal: to compute the price of an american option and the optimal time of exercise.
5.1 Problem setting
In this chapter, we will consider an arbitrage-free and complete B-S market. According to
the results of Chapter 3, this implies the existence of a unique risk neutral probability P∗ .
We focus henceforth on an american option with maturity N and payoff fn at time n ≤ N .
Example:
——————
The owner of the option can exercise his option at any time n before time N. He then receives f_n. For hedging, the seller thus must have a portfolio Π such that X_n^Π(ω) ≥ f_n(ω) for all n ≤ N and all ω ∈ Ω.
As for european options, the fair price of an american option corresponds to the minimal initial value that a self-financed hedging portfolio can have. In short:
CHAPTER 5. AMERICAN OPTION PRICING 46
Write τ^exc for the exercise time. The time τ^exc depends on the evolution of the market, so it is a random variable. Besides, the owner of the option cannot predict the future: when he decides to exercise his option, he bases his decision only on the information available at that time. This means exactly that τ^exc is a stopping time.
Since this must be true for any exercise time τ^exc, we must have

X_0^Π ≥ sup_{τ s.t., τ ≤ N} E*[ ε_τ(U)^{-1} f_τ ],

where "s.t." means "stopping time". As a consequence the price C defined by (5.2) admits the lower bound

C ≥ sup_{τ s.t., τ ≤ N} E*[ ε_τ(U)^{-1} f_τ ].   (5.3)
C = sup_{τ s.t., τ ≤ N} E*[ ε_τ(U)^{-1} f_τ ],

where "s.t." means "stopping time" and E* represents the expectation under P*.
C ≥ sup_{τ s.t., τ ≤ N} E*[ ε_τ(U)^{-1} f_τ ].

Thus, to prove the theorem, all we need is to find a self-financed hedging portfolio Π* with initial value

X_0^{Π*} = sup_{τ s.t., τ ≤ N} E*[ ε_τ(U)^{-1} f_τ ].

This will be done in Section 5.5. Before that, we will try to better understand the optimisation problem appearing in Theorem 5.1.
We also set

T_n := inf{ k ∈ [n, N] such that X_k = Y_k }.

Then, for any stopping time τ with values between n and N
In particular,

Y_0 = E*(X_{T_0}) = sup_{τ s.t., τ ≤ N} E*(X_τ).   (5.5)
Proof :
a) We first prove that (5.4) implies (5.5). Taking the expectation of (5.4) gives
Furthermore, we have assumed that (5.4) holds true for n = k + 1, so that E*(X_{τ′} | F_{k+1}) ≤ Y_{k+1}. Putting pieces together gives
We thus have checked that (5.4) holds true for n = k. A descending iteration ensures that
(5.4) holds true for any n. 2
2. ∆An (ω) = An (ω) − An−1 (ω) ≥ 0, for all n ≤ N , ω ∈ Ω, i.e. (An ) is non-decreasing.
The first result gives the decomposition of a supermartingale as the difference of a martingale and a non-decreasing predictable process.
Theorem 5.3 Doob’s decomposition.
Assume that (Yn )n≤N is a supermartingale (i.e. Yn ≥ E(Yn+1 | Fn )). Then, there exists a
unique martingale (Mn )n≤N and a unique non-decreasing predictable process (An )n≤N such
that
Yn = Mn − An and Y0 = M0 .
Proof :
Proof : Set f = εN (U )MN . Since the market is complete, there exists a self-financed
portfolio Π = (βn , γn )n≤N such that XNΠ = f . Moreover (Mn ) is a martingale under P∗ as
well as (εn (U )−1 XnΠ ) (Proposition 3.2), so
Lemma 5.1 Assume that Π = (β_n, γ_n)_{n≤N} is a (generic) self-financed portfolio. Then

X_n^Π / ε_n(U) = X_0^Π + Σ_{k=1}^n γ_k ∆( S_k / ε_k(U) ).
As a consequence

∆( X_n^Π / ε_n(U) ) = X_n^Π / ε_n(U) − X_{n-1}^Π / ε_{n-1}(U)
                    = β_n B_0 + γ_n S_n / ε_n(U) − β_n B_0 − γ_n S_{n-1} / ε_{n-1}(U)
                    = γ_n ( S_n / ε_n(U) − S_{n-1} / ε_{n-1}(U) ).

This proves the lemma. □
All we need to prove Theorem 5.1 is to find a self-financed hedging portfolio with initial value Λ.
Let us consider the sequence (Y_n) of the dynamic programming principle. In view of

Y_n := max( X_n, E*(Y_{n+1} | F_n) ),

we have Y_n ≥ E*(Y_{n+1} | F_n), so (Y_n)_{n≤N} is a supermartingale. We write Y_n = M_n − A_n for its Doob decomposition and

M_n = M_0 + Σ_{k=1}^n α_k ∆( ε_k(U)^{-1} S_k ).   (5.7)
The sequence (β_n^*) is thus defined by iteration. By construction, the portfolio Π* is self-financed and its initial value is Λ. It remains to check that it is also a hedging portfolio.
In view of Lemma 5.1,

ε_n(U)^{-1} X_n^{Π*} = Λ + Σ_{k=1}^n γ_k^* ∆( ε_k(U)^{-1} S_k ).
Keep in mind that γ_k^* = α_k and compare the previous equality to (5.7). It turns out that M_n = ε_n(U)^{-1} X_n^{Π*}. In particular

X_n^{Π*} = ε_n(U) M_n ≥ ε_n(U) Y_n, since M_n = Y_n + A_n with A_n ≥ 0.
Now, according to the dynamic programming principle (5.4),

ε_n(U) Y_n = sup_{τ s.t., n ≤ τ ≤ N} E*[ ε_τ(U)^{-1} ε_n(U) f_τ | F_n ] ≥ f_n

(the latter inequality comes from f_n = E*[ ε_τ(U)^{-1} ε_n(U) f_τ | F_n ] for τ = n). Putting pieces together, we conclude that X_n^{Π*} ≥ f_n for all n ≤ N. The portfolio Π* is then a hedging portfolio.
Remark: T_0 = inf{ k ≤ N such that ε_k(U)^{-1} f_k = Y_k } is the optimal time of exercise. Indeed, according to (5.5), for every stopping time τ bounded by N,

E*[ ε_τ(U)^{-1} f_τ ] ≤ E*[ ε_{T_0}(U)^{-1} f_{T_0} ].
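The dynamic programming principle can be run directly on a binomial tree: apply Y_n = max(f̃_n, E*(Y_{n+1} | F_n)) to the discounted payoff f̃_n = (1+r)^{-n} f_n; the price is Y_0. A sketch for an american put in an assumed Cox-Ross-Rubinstein market (the parameter values are illustrative):

```python
def american_put_crr(S0, K, r, a, b, N):
    """Price of an american put by backward dynamic programming on a
    binomial tree: Y_N = discounted payoff, Y_n = max(f_n~, E*[Y_{n+1}|F_n]),
    everything in discounted units; the price is Y_0."""
    p = (r - a) / (b - a)  # risk-neutral up-probability
    # discounted intrinsic payoff at node (n, k), k = number of up-moves
    f = lambda n, k: max(K - S0 * (1 + b)**k * (1 + a)**(n - k), 0.0) / (1 + r)**n
    Y = [f(N, k) for k in range(N + 1)]
    for n in range(N - 1, -1, -1):
        cont = [p * Y[k + 1] + (1 - p) * Y[k] for k in range(n + 1)]
        Y = [max(f(n, k), cont[k]) for k in range(n + 1)]
    return Y[0]
```

By construction the american price dominates both the european price and the immediate-exercise value, which gives a cheap consistency check.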
5.6 Exercises
Assume that ε_n(U)^{-1} f_n = g(Y_n) M_n, with (M_n) a martingale under P*, M_0 = 1, and g such that g(y) ≤ g(y*) for all y ∈ R.
1. Check that ε_n(U)^{-1} f_n ≤ g(y*) M_n. Derive that for any stopping time τ ≤ N
Remark: we usually do not have "with probability 1, Y_n takes the value y* before time N". Nevertheless, when N goes to infinity, the probability that Y_n takes the value y* before time N tends to 1. Therefore, for large values of N, g(y*) is a good approximation of C and τ* is a good approximation of the optimal exercise time.
Chapter 6
Brownian motion
F0 ⊂ · · · ⊂ Ft ⊂ · · ·
Definition 6.2 A process Y is said to be F_t-adapted if, for any t ≥ 0, the random variable Y_t is F_t-measurable.
CHAPTER 6. BROWNIAN MOTION 54
where W is a Brownian motion, [x] represents the integer part of x, and "→^(d)" means "convergence in distribution" (see the Appendix for a reminder on the various types of convergence).
2. Scaling: for any c > 0, the process (c^{-1/2} W_{ct})_{t≥0} is again a Brownian motion.
4. Markov property: for any u > 0, the process (Wu+t − Wu )t≥0 is again a Brownian
motion, independent of Fu .
(Admitted)
Some properties of the Brownian motion are rather unusual. Let us give an example. Fix a > 0 and write T_a for the first time the Brownian motion W hits the value a. Then, whatever ε > 0, the Brownian motion W will hit the value a infinitely many times during the little time interval [T_a, T_a + ε]!
E(Mt+s |Ft ) = Mt .
When a martingale is continuous, the properties we have seen in the discrete setting (fun-
damental property, optional stopping theorem, maximal inequality, Doob’s decomposition,
etc) still hold true after replacing ”n ∈ N” by ”t ∈ [0, ∞[”.
6.4 Exercises
6.4.1 Basic properties
1. Assume that N follows a N(0, 1) law and set X_t = √t · N for any t ≥ 0. Check that the process X fulfills the first two properties of Brownian motion, but not the third one.
2. Check that a Brownian motion W is a martingale.
3. Set Ft = σ (Ws , s ≤ t) and Lt = Wt2 − t. Prove that L is a Ft -martingale.
4. Set Et = exp(Wt − t/2). Is Et a martingale?
5. Assume that M is a martingale such that E(M_t²) < +∞. Check that for any s ≤ t:

E[ (M_t − M_s)² | F_s ] = E[ M_t² − M_s² | F_s ].
3. Write |τ| = max_{k=1,...,n} ∆t_k and ⟨W⟩_t^τ(ω) = Σ_{k=1}^n (∆W_k(ω))². Check that

E[ (⟨W⟩_t^τ − t)² ] = Σ_{j=1}^n Σ_{k=1}^n E[ ((∆W_j)² − ∆t_j)((∆W_k)² − ∆t_k) ]
                    = 2 Σ_{k=1}^n (∆t_k)²
                    ≤ 2 |τ| × t → 0 as |τ| → 0.
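The conclusion of the computation (the quadratic variation of the Brownian path over [0, t] concentrates around t as the mesh |τ| goes to 0) is easy to observe on simulated paths. A sketch with a uniform mesh and an arbitrary seed:

```python
import random
from math import sqrt

def quadratic_variation(t=1.0, n=100_000, seed=0):
    """Simulate a Brownian path on a uniform mesh of [0, t] and return
    <W>_t^tau = sum of the squared increments; it should be close to t."""
    rng = random.Random(seed)
    dt = t / n
    # increments W_{t_k} - W_{t_{k-1}} are independent N(0, dt)
    return sum(rng.gauss(0.0, sqrt(dt)) ** 2 for _ in range(n))
```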
Chapter 7
Itô calculus
The goal: to give a short introduction to Itô calculus and its special rules.
7.1 Problem setting
Let us discuss briefly and informally the motivation for introducing the Itô calculus. We want to model a "continuous-time noisy signal" X = (X_t)_{t≥0}. In analogy with the discrete time, we want the evolution on a short time interval δt to be given by

"δX_t(ω) = a(t, ω) δt + σ(t, ω) ε_t(ω)"   (7.1)

where ε_t represents some "noise" fulfilling the properties:
• ε_t is independent of ε_s for s ≠ t, and has the same law,
• E(ε_t) = 0.

Therefore, setting "W_t = Σ_{s≤t} ε_s", it is natural, in view of Donsker's principle, to assume that W_t is a Brownian motion. Equation (7.1) then turns into

δX_t(ω) = a(t, ω) δt + σ(t, ω) δW_t(ω).
Unfortunately, the Brownian motion is nowhere differentiable, so we cannot give a meaning to this equality in terms of differentials. We will try instead to give a meaning to it in terms of integrals:

X_t(ω) = X_0 + ∫_0^t a(s, ω) ds + ∫_0^t σ(s, ω) dW_s(ω).   (7.2)

The first integral enters into the classical field of integration theory and is well-defined. But what is the meaning of the second integral?
If W had bounded variation, we could define the second integral as a Stieltjes integral. Unfortunately, we have seen in Exercise 6.4.2 that a Brownian motion does not have bounded variation. The goal of Itô's integration theory is precisely to give a mathematical meaning to (7.2).
CHAPTER 7. ITÔ CALCULUS 58
Σ_{i=1}^n H_{(i-1)t/n} ( W_{it/n} − W_{(i-1)t/n} ) →^P ∫_0^t H_s dW_s for any t > 0,   (7.3)

where "→^P" means "convergence in probability" (see the Appendix for a reminder).
This process fulfills the equality

E[ ( ∫_0^t H_s dW_s )² ] = E[ ∫_0^t H_s² ds ],

where in the right-hand side the integral is a classic Riemann (or Lebesgue) integral. Note also that, by (7.3) and the quadratic variation of W,

Σ_{i=1}^n W_{it/n} ( W_{it/n} − W_{(i-1)t/n} ) →^P t + ∫_0^t W_s dW_s.
2. The last term in the right-hand side is unusual. This odd rule of calculus is due to the fact that Brownian motion has a non-vanishing quadratic variation. Let us inspect this.
Proof : We only sketch the main lines of the proof of Itô’s formula: our goal is just to
understand the appearance of the last integral.
Set t_i = it/n and, for any process Y, write ∆Y_{t_i} = Y_{t_i} − Y_{t_{i-1}}. The Taylor expansion ensures that

g(t, X_t) = g(0, X_0) + Σ_i ∆g(t_i, X_{t_i})
          = g(0, X_0) + Σ_i [ ∂g/∂t(t_{i-1}, X_{t_{i-1}}) ∆t_i + ∂g/∂x(t_{i-1}, X_{t_{i-1}}) ∆X_{t_i} ]
            + (1/2) Σ_i ∂²g/∂x²(t_{i-1}, X_{t_{i-1}}) (∆X_{t_i})² + residue,
where the residue goes to 0 when n goes to infinity. Furthermore, note that (∆X_{t_i})² = H_{t_{i-1}}² (∆W_{t_i})² + "residue", and according to Exercise 6.4.2, (∆W_{t_i})² ≈ ∆t_i. So the convergence of Riemann sums to Riemann integrals gives

(1/2) Σ_i ∂²g/∂x²(t_{i-1}, X_{t_{i-1}}) (∆X_{t_i})² ≈ (1/2) ∫_0^t H_s² ∂²g/∂x²(s, X_s) ds.
Using again the convergence of Riemann sums to Riemann integrals for the first term of the Taylor expansion, and the approximation formula (7.3) for the second term, leads to the claimed result:

g(t, X_t) = g(0, X_0) + ∫_0^t ∂g/∂x(s, X_s) dX_s + ∫_0^t ∂g/∂t(s, X_s) ds + (1/2) ∫_0^t H_s² ∂²g/∂x²(s, X_s) ds. □
Remark: The condition g ∈ C² is necessary. Indeed, set g(t, x) = |x|, which is not C² at 0. Let us check that Itô's formula cannot hold true for this function. Otherwise, we would have |W_t| = ∫_0^t sgn(W_s) dW_s, which is impossible since the left-hand side is a submartingale (Jensen's inequality), whereas the right-hand side is a martingale. Indeed, the right-hand side is a stochastic integral, with the integrand fulfilling ∫_0^t E[ sgn(W_s)² ] ds = t < ∞.
Examples:

1. Check that W_t² = 2 ∫_0^t W_s dW_s + t and W_t³ = 3 ∫_0^t W_s² dW_s + 3 ∫_0^t W_s ds.
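The first identity can be checked by simulation: the left-point Riemann sums Σ_i W_{t_{i-1}} (W_{t_i} − W_{t_{i-1}}) approximate ∫_0^t W_s dW_s, so along each simulated path they should be close to (W_t² − t)/2. A sketch (mesh size and seed are arbitrary choices):

```python
import random
from math import sqrt

def ito_riemann_sum(t=1.0, n=100_000, seed=0):
    """Return (sum_i W_{t_{i-1}} (W_{t_i} - W_{t_{i-1}}), W_t) for one
    simulated Brownian path on a uniform mesh of [0, t]."""
    rng = random.Random(seed)
    dt = t / n
    W, total = 0.0, 0.0
    for _ in range(n):
        dW = rng.gauss(0.0, sqrt(dt))
        total += W * dW  # left-point evaluation, as in the Ito integral
        W += dW
    return total, W
```

Comparing `total` with (W² − t)/2 illustrates the extra "+ t" term of Itô calculus: the naive calculus answer would be W²/2.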
dX_t = ∂g/∂x(t, W_t) dW_t + [ ∂g/∂t(t, W_t) + (1/2) ∂²g/∂x²(t, W_t) ] dt.
In conclusion, we find

X_t = X_0 exp( σW_t + (µ − σ²/2) t ),
Exercises:
1. Use Itô's formula to check that ε_t(X) is solution of the stochastic differential equation

dY_t / Y_t = dX_t.   (7.4)

Warning! The solution of (7.4) is Y_t = ε_t(X) and not Y_t = exp(X_t), as you may have expected. This is due to the special rules of Itô calculus.
Theorem 7.2 - Girsanov formula - Assume that X fulfills the hypotheses of Novikov's criterion (for any positive t) and define Q by (7.5). Then the process W̃_t := W_t − ∫_0^t H_s ds, t ≤ T, is a Brownian motion under Q.
As a consequence, for any measurable function g : R^[0,T] → R, we have the formula

E[ g(W̃_t ; t ≤ T) ] = E[ g(W_t ; t ≤ T) exp( −∫_0^T H_s dW_s − (1/2) ∫_0^T H_s² ds ) ],
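For a constant integrand H_s ≡ h the formula specializes to E[ g(W_T − hT) ] = E[ g(W_T) exp(−h W_T − h²T/2) ], since ∫_0^T H_s dW_s = h W_T. This can be tested by Monte-Carlo (a sketch; the payoff g, the value of h and the sample size are arbitrary choices):

```python
import random
from math import exp, sqrt

def girsanov_check(g, h, T=1.0, n=200_000, seed=0):
    """Monte-Carlo estimates of both sides of
    E[g(W_T - h T)] = E[g(W_T) exp(-h W_T - h^2 T / 2)]
    for a constant integrand H_s = h, using common random numbers."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n):
        WT = rng.gauss(0.0, sqrt(T))  # W_T ~ N(0, T)
        lhs += g(WT - h * T)
        rhs += g(WT) * exp(-h * WT - h * h * T / 2.0)
    return lhs / n, rhs / n
```

This is exactly the change-of-measure trick used to compute P* in option pricing: a shifted Brownian expectation is rewritten as a weighted expectation under the original measure.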
dYt = Kt Ht dt + Kt dWt ,
7.5 Exercises
7.5.1 Scaling functions
We consider a diffusion dX_t = σ(X_t) dW_t + b(X_t) dt, with b and σ continuous. We assume that b is bounded and that there exist ε and M such that 0 < ε ≤ σ(x) ≤ M < ∞ for any x ∈ R. We also set F_t = σ(W_s, s ≤ t).
1. Find a function s ∈ C² such that (s(X_t))_{t≥0} is a martingale. Such a function is called a "scaling function".
P(T_b < T_a) = ( s(X_0) − s(a) ) / ( s(b) − s(a) ).
3. Assume that limx→∞ s(x) < ∞. Show that in this case limb→∞ P (Tb < Ta ) > 0.
Derive that with positive probability the diffusion X will never reach a.
3. Replace µt by µ_t = ∫_0^t m_s ds. Find Q such that (X_t, t ≤ T) is a Brownian motion under Q.
Chapter 8
Black-Scholes model
The goal: to define the Black-Scholes model, and price a european option in this setting.
8.1 Setting
8.1.1 The Black-Scholes model
As in the discrete setting, we focus on a market made of two assets:
• a non-risky asset B (bond),
• a risky asset S (stock).
The parameter r corresponds to the interest rate of the bond B, whereas µ and σ correspond to the trend and volatility of the asset S.
We can give a closed form for the value of Bt and St . The evolution of B is driven by an
ordinary differential equation, solved by
Bt = B0 ert .
S_t = S_0 exp( σW_t + (µ − σ²/2) t ).
Throughout this chapter, Ft refers to σ(Wu , u ≤ t). Note that according to the previous
formula, we also have Ft = σ(Su , u ≤ t).
CHAPTER 8. BLACK-SCHOLES MODEL 65
8.1.2 Portfolios
A portfolio Π = (βt , γt )t≥0 made of βt units of B and γt units of S has value
XtΠ = βt Bt + γt St .
We call discounted value (or present value) of a portfolio Π the value X̃_t^Π := e^{-rt} X_t^Π. The next lemma gives a characterization of self-financed portfolios (compare with Lemma 5.1).
Lemma 8.1 A portfolio Π is self-financed if and only if its discounted value X̃tΠ fulfills
dX̃tΠ = γt dS̃t , with S̃t := e−rt St .
Proof : Itô’s formula ensures that dX̃tΠ = −re−rt XtΠ dt + e−rt dXtΠ . Therefore, a portfolio
is self-financed if and only if
Now, according to the Girsanov-Cameron-Martin formula (see Exercise 7.5.2), the process
(Wt∗ )t≤T is a Brownian motion under the probability P∗ defined by
P*(A) := E(Z_T 1_A), with Z_T := exp( −((µ − r)/σ) W_T − ((µ − r)/σ)² T/2 ).
Since the discounted value of the stock (S̃_t)_{t≤T} is a stochastic exponential under P*, it is a martingale under P* (as already checked in Exercise 6.4.1.4).
Conclusion: the probability P∗ is a risk neutral probability.
Comment: Compare the probability P∗ with the risk neutral probability computed at the
end of Section 3.4.
Definition 8.2 A portfolio Π is called a hedging portfolio when X_T^Π ≥ g(S_T) and

∫_0^T E*[ (X̃_t^Π)² ] dt < +∞.   (8.2)
The first condition is the same as in the discrete setting, while the second condition is
technical.
The price C of an option will again correspond to the minimal initial value X_0^Π that a self-financed hedging portfolio can have, namely

C := inf { X_0^Π such that: − Π is self-financed; − Π is hedging }.
Again, we will solve this minimization problem with martingale methods. First, we will
bound C from below by using that the discounted value of a self-financed hedging portfolio
is a martingale under P∗ . Second, we will construct a self-financed hedging portfolio Π∗ ,
whose initial value fits with the lower bound found at the first step.
In the next result, we assume that g is piecewise C 1 and fulfills E (g(ST )2 ) < ∞.
2. There exists a self-financed hedging portfolio Π* with initial value C. The value at time t of the portfolio Π* is

X_t^{Π*} = e^{-r(T-t)} E*( g(S_T) | F_t ) = G(t, S_t),

and its composition is given by

γ_t^* = ∂G/∂x(t, S_t) and β_t^* = ( G(t, S_t) − γ_t^* S_t ) / B_t.
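For the call g(x) = (x − K)+, the function G has the standard Black-Scholes closed form G(t, x) = x Φ(d_+) − K e^{−r(T−t)} Φ(d_−), and the hedge ratio γ_t* = ∂G/∂x(t, S_t) equals Φ(d_+). A sketch (the closed form is classical, though its derivation is not reproduced here):

```python
from math import exp, log, sqrt, erf

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G_call(t, x, K, r, sigma, T):
    """G(t,x) = e^(-r(T-t)) E*[g(S_T) | S_t = x] for the call g(x) = (x - K)^+."""
    tau = T - t
    d_plus = (log(x / K) + (r + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    d_minus = d_plus - sigma * sqrt(tau)
    return x * Phi(d_plus) - K * exp(-r * tau) * Phi(d_minus)

def gamma_star(t, x, K, r, sigma, T):
    """Hedge ratio gamma*_t = dG/dx(t, S_t); for the call this is Phi(d_plus)."""
    tau = T - t
    d_plus = (log(x / K) + (r + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    return Phi(d_plus)
```

A finite-difference derivative of G_call recovers gamma_star, which mirrors the statement γ_t* = ∂G/∂x(t, S_t) above.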
with W* a Brownian motion under P*. The discounted price X̃_t^Π of Π is therefore a stochastic integral, and since we have assumed that ∫_0^T E*[ (X̃_t^Π)² ] dt < ∞, it is a martingale under P*. In particular, we have

X̃_0^Π = E*( X̃_T^Π ) ≥ E*( e^{-rT} g(S_T) ),

where the last inequality comes from the condition X_T^Π ≥ g(S_T). It follows that C ≥ E*( e^{-rT} g(S_T) ).
Conversely, we will show that there exists a self-financed hedging portfolio Π* with initial value E*( e^{-rT} g(S_T) ). Set M_t = E*( e^{-rT} g(S_T) | F_t ).
Lemma 8.2 When M is defined by the above formula, its value is given by
Proof : (of the lemma). The second equality is straightforward. Let us prove the first
one. Due to the very definition of conditional expectation, all we need is to check that:
1. the random variable e−rt G(t, St ) is Ft -measurable (which is obvious!)
2. the equality E∗ (e−rT g(ST )h(St )) = E∗ (e−rt G(t, St )h(St )) holds for any measurable h.
For the second point, note that

g(S_T) = g( S_t exp( (r − σ²/2)(T − t) + σ(W_T^* − W_t^*) ) )
Chapter 9
Appendix
9.1.2 Convergence in L2
X_n is said to converge in L² to X when E(|X_n − X|²) → 0 as n → ∞.
9.1.5 Relationships
Convergence a.s. =⇒ Convergence in probability =⇒ Convergence in distribution
Convergence in L² =⇒ Convergence in probability
CHAPTER 9. APPENDIX 70
9.2.1 Setting
Henceforth, W is a Brownian motion and Ft = σ(Ws , s ≤ t). Fix T ∈ [0, ∞] and write
MT for the space of Ft -martingales which are continuous and start from 0 (i.e. M0 = 0)
with probability one, and fulfill E(M_T²) < ∞.
We write L_T² for the closure of this space endowed with the scalar product

(H|K)_L = E[ ∫_0^T H_s K_s ds ].
where the hi ’s are bounded Fti−1 -measurable random variables and 0 ≤ t0 < . . . < tn ≤ T .
We write henceforth ET for the space of elementary processes endowed with the scalar
product (·|·)L . For t ≤ T and a process H of the previous form, we define the stochastic
integral as follows
∫_0^t H_s dW_s (ω) := Σ_{i=1}^n h_i(ω) ( W_{t∧t_i}(ω) − W_{t∧t_{i-1}}(ω) ),
Proof : First, note that the process ( Σ_{i=1}^n h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) )_{t≤T} is continuous and takes value 0 at t = 0. Second, it is a martingale. Indeed, fix 0 ≤ t_k ∧ t ≤ s < t_{k+1} ∧ t. Using the martingale property of Brownian motions (see Exercise 6.4.1.2), we have
E[ Σ_{i=1}^n h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) | F_s ]
  = Σ_{i=1}^k E[ h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) | F_s ] + E[ h_{k+1} ( W_{t∧t_{k+1}} − W_{t∧t_k} ) | F_s ]
    + Σ_{i=k+2}^n E[ h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) | F_s ]
  = Σ_{i=1}^k h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) + h_{k+1} ( E[ W_{t∧t_{k+1}} | F_s ] − W_{t∧t_k} )
    + Σ_{i=k+2}^n E[ E[ h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) | F_{t_{i-1}∧t} ] | F_s ]
  = Σ_{i=1}^k h_i ( W_{t∧t_i} − W_{t∧t_{i-1}} ) + h_{k+1} ( W_s − W_{t∧t_k} ) + 0
  = Σ_{i=1}^n h_i ( W_{s∧t_i} − W_{s∧t_{i-1}} ).
We thus have checked that ( ∫_0^t H_s dW_s )_{t≤T} belongs to M_T; it remains to prove that the map H ↦ ∫_0^· H_s dW_s is an isometry.
Let us compute the norm of ∫_0^· H_s dW_s:
‖ ∫_0^· H_s dW_s ‖_M² = E[ ( ∫_0^T H_s dW_s )² ]
                      = Σ_i Σ_j E[ h_i h_j ( W_{t_i} − W_{t_{i-1}} )( W_{t_j} − W_{t_{j-1}} ) ].
For i < j, the variable h_i h_j ( W_{t_i} − W_{t_{i-1}} ) is F_{t_{j-1}}-measurable. Therefore, according to the third property of Brownian motion, h_i h_j ( W_{t_i} − W_{t_{i-1}} ) is independent of W_{t_j} − W_{t_{j-1}}. It follows that

E[ h_i h_j ( W_{t_i} − W_{t_{i-1}} )( W_{t_j} − W_{t_{j-1}} ) ] = E[ h_i h_j ( W_{t_i} − W_{t_{i-1}} ) ] E[ W_{t_j} − W_{t_{j-1}} ] = 0,

since E[ W_{t_j} − W_{t_{j-1}} ] = 0.
We have checked that the map H ↦ ∫_0^· H_s dW_s is an isometry. □
E_T → M_T
H ↦ ∫_0^· H_s dW_s
Corollary 9.1 When H ∈ L_T² and is left-continuous, the following convergence holds:

Σ_{i=1}^n H_{(i-1)t/n} ( W_{it/n} − W_{(i-1)t/n} ) →^P ∫_0^t H_s dW_s, for any t ≤ T.
Proof : Set H_s^{(n)} = Σ_{i=1}^n H_{(i-1)t/n} 1_{](i-1)t/n, it/n]}(s). On the one hand,

∫_0^t H_s^{(n)} dW_s = Σ_{i=1}^n H_{(i-1)t/n} ( W_{it/n} − W_{(i-1)t/n} ).

On the other hand, H^{(n)} converges to H in L_t²: according to the isometry property, ∫_0^t H_s^{(n)} dW_s converges to ∫_0^t H_s dW_s in M_t and therefore in probability. □
Remark: the Itô integral ∫_0^· H_s dW_s constructed above belongs to the space M_T and is therefore continuous with probability one. Since we can modify a process on a set of probability 0 without changing its law, we can choose a version of ∫_0^· H_s dW_s which is continuous everywhere.
Bibliography
[1] Baxter M., Rennie A. Financial calculus: an introduction to derivative pricing. Cambridge University Press 1999, Cambridge.
[2] Karatzas I., Shreve S.E. Brownian Motion and Stochastic Calculus. Springer-Verlag, New York, 1988.
[3] Mel’nikov A.V. Financial Markets: stochastic analysis and the pricing of derivative
securities. Translations of Mathematical Monographs 1999, AMS edition, Providence.
[5] Shiryaev A.N. Essentials of stochastic finance: facts, models, theory. Advanced Series
on Statistical Science & Applied Probability 1999, World Scientific, Singapore.
Index

∆X_n, 30
ε_n(U), 31
arbitrage, 34
ask, 41
bankruptcy problem, 27
bid, 41
bond, 30
Brownian motion: definition, 54; properties, 55
conditional expectation: continuous setting, 13; discrete setting, 11; general setting, 14
Cox-Ross-Rubinstein model, 42
discounted value, 34, 65
Donsker's principle, 54
Doob's decomposition, 49
dynamic programming principle, 47
exercise time, 46
Galton-Watson, 27
gambler in a casino, 19
Gaussian law, 54
Girsanov formula, 36, 62
inequality: maximal, 28; of Jensen, 17
Itô's formula, 59
Itô's integral, 58
market: B-S, 30; complete, 35; incomplete, 41
martingale, 23, 55
measurable, 14
model: Black-Scholes, 43; Merton, 43
non-decreasing predictable process, 49
normal law, 54
Novikov's criterion, 61
optional stopping theorem, 25
portfolio: hedging, 39, 66; self-financed, 31, 65
predictable representation, 50
present value, 34, 65
price of an option: american: calculus, 46; definition, 45; european: calculus, 40, 67; definition, 39
process: adapted, 53; continuous, 53; elementary, 70; Itô, 59; left-continuous, 53
risk neutral probability, 32, 65
spread, 41
stochastic exponential, 61: definition, 31; properties, 32
stock, 30
stopping time, 22
strategy: hedging, 39; self-financed, 31
submartingale, 23
supermartingale, 23
Wald identity, 29
Wiener process, 54