A Weak Existence Result with
Application to the Financial
Engineer’s Calibration Problem
Gerard Brunick
Advisor: Steven E. Shreve
Given an initial Itô process, Krylov and Gyöngy have shown that it is often possible to construct a diffusion process with the same one-dimensional marginal distributions. As the one-dimensional marginal distributions of a price process under a pricing measure essentially determine the prices of European options written on that price process, this result has found wide application in Mathematical Finance. In this dissertation, we extend the result of Krylov and Gyöngy in two directions: we relax the technical conditions which must be imposed on the initial Itô process, and we clarify the relationship between the stochastic differential equation that is solved by the mimicking process and the properties of the initial process that are preserved.
Acknowledgments
Contents

1 Introduction
1.1 Introduction
1.2 Definitions and Notation

2 Statement of Results
2.1 Updating Functions
2.2 Main Result
2.3 Applications to Mixture Models

3 A Cross Product Construction

4 Main Theorem
4.1 Conditional Expectation Lemmas
4.2 Approximation Lemmas
4.3 Main Theorem

A Galmarino’s Test

F Convergence of Characteristics

References
Chapter 1
Introduction
1.1 Introduction
Emanuel Derman [Der01] neatly summarizes the way in which many market
participants make use of financial models as follows:
and track the current value of the primary security, the running maximum,
the running minimum, and the historical average. An example of an auxiliary
process that may not be updated using only the changes in the price of the
primary security would be the current price of the primary security and the
price one week prior. Of course, if we instead decided to track the price of the
primary security over the entire previous week, this auxiliary process could
be updated using only the changes in the price of the primary security.
We now assume that we have fixed some auxiliary process that satisfies the
updating condition sketched above, and that we have prices for a collection of
European-style derivative securities. In this context, European-style means
that the holder of the derivative security receives a single payment at a fixed
maturity and makes no decisions prior to that date. The main result of
this dissertation essentially asserts that when the payoff of each derivative
security can be expressed as a function of the auxiliary process evaluated
at the derivative’s maturity, then it is possible to construct a model with a
price process that satisfies a “simple” stochastic differential equation (SDE)
in such a way that the model prices for all derivative securities agree with
the given prices. Moreover, the structure of this SDE is determined by the
structure of the auxiliary process. As an example of such a situation, we
note that the payoff of most barrier options can be expressed as a function of
the current value, the running maximum, and the running minimum of the
underlying security’s price at the maturity of the option.
To see how such a result might be useful, consider the simplest case where
the auxiliary process is simply the underlying security’s price. In this case,
the payoffs of European puts and calls are functions of the auxiliary pro-
cess at each option’s maturity. This case corresponds to the notion of “local
volatility models” and has been studied extensively, starting with the work
of Dupire [Dup94] and Derman and Kani [DK94]. (Rubinstein [Rub94] has
also given similar results in a binomial tree model.) The forward equation
of Fokker [Fok13], Planck [Pla17], and Kolmogorov [Kol31] expresses the re-
lationship between the drift and volatility coefficients of a diffusion process
and the one-dimensional marginal distributions of that process. Breeden and
Litzenberger [BL78] have shown the equivalence between European option
prices and the one-dimensional marginal distributions of the underlying price
process under any pricing measure. Connecting these two results and observ-
ing that the drift of a price process under any pricing measure is determined
by no-arbitrage, Dupire argued that the volatility coefficient in a diffusion
model for a price process may be implied directly from the set of European
(1.1) dŜt = σ̂(Ŝt , t) Ŝt dWt

will agree with the market prices. It was clear to Dupire from the start that (1.1) may not be a very good model for the price process and he advocated a hedging strategy that is robust to violations of the dynamics given in (1.1). Indeed, empirical work such as [DFW98] indicates that σ̂ must often be modified to refit market prices. As the model assumes that σ̂ is a fixed function, this is inconsistent.
Nevertheless, (1.1) is still useful because it can be used to characterize the
models that are consistent with a given set of European option prices. We
will say that a model is an Itô model if the price processes for the primary
securities are modeled as Itô processes. Consider a general Itô model where
the price process solves the SDE

(1.2) dSt = σt St dWt

for some adapted process σ under some pricing measure. Given such a model, we could compute the European option prices, take these prices as inputs to Dupire’s approach, and then choose σ̂ such that the prices for European options written on price processes which solve (1.1) and (1.2) agree. It turns out that the process σ and the function σ̂ that we imply are related by the rather intuitive formula

(1.3) σ̂²(x, t) = E[σt² | St = x].
Derman and Kani give such a formula in [DK98]. As the local volatility function σ̂ essentially characterizes the European option prices, (1.3) essentially characterizes the Itô models that are consistent with a given collection of European option prices. Gatheral [Gat06] argues that one should understand local volatilities as an “effective theory,” and (1.3) connects a local volatility model with a stochastic volatility model in a way that is consistent, at least with respect to pricing European options.
The relationship given in (1.3) has found a wide range of applications.
Brigo and Mercurio [BM01] [BM02] use (1.3) to produce local volatility models
where the prices for European options are given as simple mixtures of Black-
Scholes prices. To do this, they fix a finite number of deterministic volatility
scenarios and build a stochastic volatility model by randomly choosing a
volatility scenario at the initial time. It is clear that the price for any option
in such a model is given as a mixture of the prices that are computed in
each scenario. They then compute σ̂ explicitly using (1.3) and conclude that
the corresponding local volatility model has European option prices that are
given as mixtures of the prices computed in each scenario. We note that
the prices for non-European options in the initial mixture model and the
mimicking local volatility model may differ. Piterbarg stresses this point
in the working paper [Pit03a] and argues against the use of such mixture
models. We will briefly revisit this point at the end of Chapter 2.
Avellaneda et al. [ABOBF02] use (1.3) for pricing European options on
baskets of securities. In this case, the volatility of the basket is given as
the sum of the volatilities of the underlying securities and ideas from Varad-
han [Var67] are used to compute the most likely configuration of the basket
and approximate σ̂. Combining (1.3) with parameter averaging techniques,
Antonov, Misirpashaev, and Piterbarg [Pit03b] [Pit05] [Pit06] [AM06] [Pit07]
[AMP07] have developed pricing approximations for a range of markets.
Inspired by the success of the HJM methodology, some authors have
attempted to use σ̂ as the state of a process. Unlike parameterized models,
this approach has the advantage that essentially any set of European option
prices may be matched with an appropriate choice of state. Derman and
Kani [DK98] initiate such an approach for trinomial trees, but comment
that the no-arbitrage drift conditions for such a model in continuous-time
are rather involved. More recently, Carmona and Nadtochiy [Car07] [CN07]
have provided a rigorous development of this approach in continuous-time.
They use results of Kunita [Kun90] on stochastic flows to ensure that σ̂t(·, ·), which is now a random field, remains regular enough that the pricing PDE
may be solved, and they derive the drift restrictions, correcting a mistake
in [DK98]. Formula (1.3) actually provides some clue as to why the drift restrictions in such a model are so difficult to deal with. To see how a perturbation to σ̂ affects P[St ∈ dx], one must re-solve the forward equation using the perturbed value of σ̂, so the drift restrictions that enforce (1.3) are not available in closed form.
Formula (1.3) is also useful because it suggests a way to adjust an initial
model to fit a set of European option prices. In particular, given a model
of the form (1.2), equation (1.3) suggests that we might attempt to choose a
dXt = µt dt + σt dWt ,

(1.6) dX̂t = µ̂(X̂t , t) dt + σ̂(X̂t , t) dŴt
so the integral takes values in the set Rd when f takes values in Rd . In particular, the integral is always defined. With this definition, one should interpret
lim_{n→∞} En f (X n ) = E∞ f (X ∞ )
Spaces of functions
If E1 is a topological space, then the Borel σ-field on E1 is the σ-field generated
by the open subsets of E1 . If E2 is another topological space, then C(E1 ; E2 )
denotes the set of continuous maps from E1 to E2 . If E2 has a metric d2 , then
we will always endow C(R+ ; E2 ) with the locally uniform topology and the compatible distance

d(x, y) , Σ_{n=1}^∞ 2^{−n} ( 1 ∧ sup_{s≤n} d2(x(s), y(s)) ).
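As a concrete illustration, this distance can be approximated for discretely sampled paths. The sketch below is not from the dissertation: truncating the infinite sum at a finite n_max and working on a finite sampling grid are implementation choices, and the helper name is hypothetical.

```python
import numpy as np

def loc_unif_dist(x, y, t_grid, n_max=20):
    """Approximate d(x, y) = sum_{n>=1} 2^{-n} (1 ∧ sup_{s<=n} |x(s) - y(s)|)
    for real-valued paths sampled on t_grid, truncating the sum at n_max."""
    diff = np.abs(np.asarray(x) - np.asarray(y))
    total = 0.0
    for n in range(1, n_max + 1):
        sup_n = diff[t_grid <= n].max()       # sup over [0, n] on the grid
        total += 2.0 ** (-n) * min(1.0, sup_n)
    return total

t = np.linspace(0.0, 20.0, 2001)
x = np.sin(t)
y = np.sin(t) + 0.1                            # uniform distance exactly 0.1
d = loc_unif_dist(x, y, t)
# Each term contributes 2^{-n} * 0.1, so d ≈ 0.1 * (1 - 2^{-20}) ≈ 0.1.
assert abs(d - 0.1) < 1e-3
```

The capping at 1 in each term is what makes the sum converge for any pair of paths, so the metric registers locally uniform convergence without requiring global boundedness.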
ture, so E has a zero element, and we may define the space of paths that
start at zero
C0 (R+ ; E) , { x ∈ C(R+ ; E) : x(0) = 0 }.
This is a closed subset of C(R+ ; E), so C0 (R+ ; E) is a Polish space in the
relative topology. In this situation, we can also define the difference operator ∆ : C(R+ ; E)×R+ → C0 (R+ ; E) by ∆(x, t) , x(t + ·) − x(t).
We will slightly abuse the notation and use the same symbols for these oper-
ators as we vary the space E. The domain and range of the operators should
be clear from the context. The maps Θ, ∇, and ∆ are continuous, and they
are linear in x, for fixed t, when E is a vector space.
If X : Ω → C(R+ ; E), then we use the notation Xt : Ω → E to denote the
map ω 7→ X(ω)(t), and we use the standard “stopped process” notation. In
particular, if T is an R+ -valued random
variable, then X T : Ω → C(R+ ; E)
denotes the map ω 7→ ∇(X(ω), T (ω)). Notice that if t and u are nonnegative,
then we have
and a similar chain of equalities for ∆(X t+u , t) when E is a vector space.
If X is a random variable, then σ(X) denotes the σ-field generated by
X. If G and H are σ-fields, and X and Y are random variables, then
σ(G , H , X, Y ) = G ∨ H ∨ σ(X) ∨ σ(Y ).
Processes
We denote by BV([a, b]; Rd ) the class of Rd -valued functions that are of bounded variation when restricted to the interval [a, b]. Similarly, AC([a, b]; Rd ) denotes the class of Rd -valued functions that are absolutely continuous when restricted to the interval [a, b]. One may consult Appendix C for further
details.
(1.15) Xt = X0 + Mt + Bt ,
Chapter 2
Statement of Results
In this chapter, we present the main result of the dissertation and we give a
few corollaries to illustrate potential applications. To state the main result,
we need to first define the notion of an updating function. We give this
definition and present a few examples of updating functions in Section 2.1.
In Section 2.2, we state the main result of the dissertation and give corollaries.
In Section 2.3, we show how these results can be used to give an answer to a
question raised by Piterbarg about the prices for barrier options in mixture
models.
so property (a) of Def. 2.1 holds. If we know the value of X at time t, and
we know how the process X changes after time t, then we may reconstruct
In Example 2.4, the only information that we decided to record about the
process X was the current location. In the next example, we choose to track
both the current location and the running maximum. We restrict ourselves
to the one-dimensional case for simplicity.
Notice that

Φt (Z0 , ∆(X, 0)) = Φt ([X0 ; X0 ], ∆(X, 0)) = [X0 + ∆t (X, 0); X0 + ψt ◦ ∆(X, 0)] = Zt ,
As this is true for all s ≤ t, we conclude that property (a) of Def. 2.1 holds.
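The updating property of the (current value, running maximum) pair can also be checked numerically: its value at a later time is a function of its value at time s and the increments of X after s alone. The following sketch is illustrative and not from the dissertation; the random-walk path and the helper name `update` are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# A discretely sampled path of X (here a Gaussian random walk).
X = np.cumsum(rng.normal(size=100))

s = 40                                  # split time
Z_s = (X[s], X[: s + 1].max())          # recorded state: (X_s, running max at s)

# The increment path Delta(X, s): changes of X after time s only.
increments = X[s:] - X[s]

def update(Z, incr):
    """Rebuild (X_t, running max at t) for t >= s from Z_s and increments."""
    x_s, m_s = Z
    running = x_s + incr                            # reconstructed path after s
    new_max = np.maximum.accumulate(np.maximum(running, m_s))
    return running, new_max

x_after, m_after = update(Z_s, increments)

# Agreement with the state computed directly from the whole path.
assert np.isclose(x_after[-1], X[99])
assert np.isclose(m_after[-1], X.max())
```

By contrast, a state such as (current price, price one week prior) cannot be rebuilt this way, since the lagged value at times shortly after s depends on the path before s and not only on Z_s.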
We check that

Φt (Z0 , ∆(X, 0)) = Φt ([X0 ; X0 ], ∆(X, 0)) = [X0 + ∆t (X, 0); X0 + ∆Tt (X, 0)] = Zt .
[ e1 + ∇s (x, t); e2 + ∇s (∇(x, t), (T − e3 )+ ); e3 + s ] = Φs (e, ∇(x, t)) = Φts (e, ∇(x, t)).
As this is true for any s ≤ t, we again conclude that property (a) of Def. 2.1
holds.
To see that property (b) holds, we first note that for any path y and times s, t ∈ R+ , we have

(2.7) ∇(Θ(y, t), s) = Θ(∇(y, (s − t)+ ), t).
∇t+u (x, (T − e3 )+ )
= ∇t+u (∇(x, t) + Θ(∆(x, t), t), (T − e3 )+ )
= ∇t+u (∇(x, t), (T − e3 )+ ) + ∇t+u (Θ(y, t), (T − e3 )+ )
(2.8) = x(t ∧ (T − e3 )+ ) + Θt+u (∇(y, (T − e3 − t)+ ), t)
= ∇t (x, (T − e3 )+ ) + ∇u (∆(x, t), (T − e3 − t)+ ),
This shows that property (b) of Def. 2.1 holds, so Φ is an updating function.
As is quickly becoming clear, the hardest part of checking that a function
is an updating function is working through the notation. This is particularly
true of the last example that we present. In this last example, we use Z to
record the entire trajectory of X up until the current time. The updating
function removes an initial segment of path from the front end of ∆(X, t)
and appends it to the end of the initial path segment stored in Z. As we now
have a path-valued process, this situation is somewhat unpleasant to deal
with notationally, and the reader will not lose much by omitting the details.
2.9 Example. Let X be a continuous, Rd -valued process, and set Zt =
(X t, t), so Zt records the entire trajectory of X up until the time t. Then Z
may be updated using only the changes in X. This example is extremal in
the sense that we choose to record the most information about the path of
X that is possible without violating property (a) of Def. 2.1.
We take E , C(R+ ; Rd )×R+ , and we write a typical point in E as e = [e1 ; e2 ].
We map a segment of path to a point in E, using the second coordinate to
record the length of the segment. Let Ψ : E → C(R+ ; E) denote the map such that

Ψt (e) = [∇(e1 , e2 + t); e2 + t].
We might describe Ψ as a map that reveals more and more of the path
e1 over time. In particular, we give Ψ a path e1 and an initial time e2 ,
and Ψt shows us the piece of e1 that lives on the interval [0, e2 + t]. Let
Recall that to compute Θ(x, −e2 ), we slide the path x to the right by the amount e2 , and we have Θt (x, −e2 ) = x(0) = 0 for t ∈ [0, e2 ]. Φ appends the path x to the initial path segment e and then hands the newly constructed path over to Ψ, which slowly reveals information about the path in an adapted way. As Z0 = [∇(X, 0); 0], we have

Φt (Z0 , ∆(X, 0)) = Ψt (∇(X, 0) + ∆(X, 0), 0) = Ψt (X, 0) = Zt .
The first and last equality state that sliding the path x or ∇(x, t) to the right
by e2 and then stopping it at time e2 +s is the same as first stopping the path
at time s and then sliding it to the right by e2 . The second equality follows
from the fact that stopping a path at two deterministic times is equivalent
to stopping the path once at the earlier time. Using this observation and the
fact that ∇(x + y, t) = ∇(x, t) + ∇(y, t), we may write
∇(∇(e1 , e2 ) + Θ(x, −e2 ), e2 + s)
= ∇(∇(e1 , e2 ), e2 + s) + ∇(Θ(x, −e2 ), e2 + s)
= ∇(∇(e1 , e2 ), e2 + s) + ∇(Θ(∇(x, t), −e2 ), e2 + s)
= ∇(∇(e1 , e2 ) + Θ(∇(x, t), −e2 ), e2 + s).
To conclude, we write
We have now shown that property (b) of Def. 2.1 holds, so Φ is an updating
function.
and set

(2.13) Yt , ∫0t µs ds + ∫0t σs dWs .
2.14 Remark. Cor. 4.5 asserts that we may find deterministic functions µ̂ : E×R+ → Rd and ν̂ : E×R+ → S+d and a Lebesgue-null set N ⊂ R+ such that µ̂(Zt , t) = E[µt | Zt ] a.s. and ν̂(Zt , t) = E[σt σtT | Zt ] a.s. when t ∉ N . If we take r2 = d, then Lem. D.6 asserts that we may take the positive square root of ν̂ to get a function σ̂ taking values in S+d and satisfying σ̂² = ν̂. As a result, we can always find functions satisfying the requirements of the previous theorem; however, in applications we can often compute versions of µ̂ and σ̂ explicitly, so we formulate the theorem to take these functions as inputs.
dXt = µt dt + σt dWt ,

(2.17) dX̂t = µ̂(X̂t , t) dt + σ̂(X̂t , t) dŴt
the class of financial models in which the price processes solve an SDE of the
form (2.17). Given the equivalence between the one-dimensional marginal
distributions of a price process under a pricing measure and the prices of
European options, Cor. 2.16 admits the following financial interpretation: If
there exists any model in M which is consistent with a given set of market
prices for European options, then there also exists a model in D which is
consistent with that set of market prices.
So, taking µ̂ = 0 and

(2.19) σ̂(x, t) = √[ (c1 η(x, tc1 ) + c2 η(x, tc2 )) / (η(x, tc1 ) + η(x, tc2 )) ],

Cor. 2.16 asserts the existence of a solution to the SDE (2.17) with the same one-dimensional marginal distributions as Ỹ .
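To make (2.19) concrete, the following sketch evaluates σ̂ under the assumption that η(x, v) denotes the centered Gaussian density with variance v, which is consistent with the two-scenario construction of Example 2.10; the specific variance levels c1 and c2 are illustrative, and the code is a sketch rather than anything from the dissertation.

```python
import math

def eta(x, v):
    # Centered Gaussian density with variance v (assumed form of eta).
    return math.exp(-x * x / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def sigma_hat(x, t, c1=0.01, c2=0.09):
    # Formula (2.19): a density-weighted mixture of the two variance levels.
    w1, w2 = eta(x, t * c1), eta(x, t * c2)
    return math.sqrt((c1 * w1 + c2 * w2) / (w1 + w2))

# sigma_hat interpolates strictly between the two scenario volatilities:
lo, hi = math.sqrt(0.01), math.sqrt(0.09)
v = sigma_hat(0.5, 1.0)
assert lo < v < hi
# Far from the origin the high-variance scenario dominates the weights:
assert abs(sigma_hat(5.0, 1.0) - hi) < 1e-6
```

The weights η(x, tci) are exactly the likelihoods of the level x under each scenario, so σ̂²(x, t) is the conditional expectation of the scenario variance given the observed level, in line with the mimicking relationship.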
time interval.
The first corollary that we presented corresponds to the diffusion case
where the only information that we choose to track about the process X is
the current location. At the other extreme, we might choose to remember
the entire history of X.
2.20 Corollary. Let W be an Rr1 -valued Wiener process, and let X be an
Rd -valued Itô process with stochastic differential
dXt = µt dt + σt dWt ,

(2.21) dX̂t = µ̂(X̂ t , t) dt + σ̂(X̂ t , t) dŴt ,
Liptser and Shiryaev refer to a process that solves an SDE of the form (2.21) as a process of diffusion-type, and they give Cor. 2.20 under the additional assumptions that d = r1 = 1 and σ = 1 (see [LS01] Thm. 7.12), although it is not clear that these assumptions are necessary for their approach to work. Liptser and Shiryaev provide an explicit formula for the Radon-Nikodym derivative of the law of a process of diffusion-type with respect to the law of a Wiener process. To apply this result to a general Itô process like X, they must show that X solves some SDE of the form (2.21). Their approach is to filter the drift from the path of the process X. They subtract the filtered drift from X, and they show that what remains is a Wiener process, although it may differ from the Wiener process that was initially used to define X.
2.22 Example. Assume that we are in the setting of Example 2.10. For each fixed n, define the sequence of functions ξ^n_m : C(R+ ; R) → R+ by

ξ^n_m (x) , Σ_{i=1}^m ( x(i/(nm)) − x((i−1)/(nm)) )².

For each fixed n, n ξ^n_m (Ỹ ) converges to σ̃0² in probability as m → ∞. Moving to a subsequence {a(n, m)}_m that converges a.s., we define ξ^n , lim inf_{m→∞} n ξ^n_{a(n,m)} , so ξ^n (Ỹ ) = ξ^n (Ỹ^{1/n} ) = σ̃0² , P̃-a.s., where Ỹ^{1/n} denotes the process stopped at time 1/n (not the nth root). Define σ̂ : C(R+ , R)×R+ → R+ by

σ̂(x, t) , Σ_{n=1}^∞ √(ξ^n (x)) 1_{[1/n, 1/(n−1))} (t),
σ̂(Ỹ^t , t) = σ̃0 , P̃-a.s., for t > 0. In this simple case, it is clear without even applying Cor. 2.20 that Ỹ solves dỸt = σ̂(Ỹ^t , t) dW̃t .
It seems that the results that fall between Cor. 2.16 and Cor. 2.20 are
new. For example, we have the following corollary.
(2.24) dX̂t = µ̂(X̂t , M̂t , t) dt + σ̂(X̂t , M̂t , t) dŴt ,

(2.25) dX̂t = µ̂(X̂t , N̂t , t) dt + σ̂(X̂t , N̂t , t) dŴt .
L (X̂t , M̂t ) = L (X̂t , N̂t ) = L (Ẑt ) = L (Zt ) = L (Xt , Mt ) ∀t ∈ R+ ,
so we are done.
2.26 Example. Assume that we are in the setting of Example 2.10, set Ñt , max{ W̃s : s ∈ [0, t] }, and define

p(x, m; t) , (2(2m − x) / √(2πt³)) exp( −(2m − x)² / (2t) ) 1{x≤m, m≥0} .
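The function p(x, m; t) is the classical reflection-principle joint density of a Brownian motion and its running maximum at time t, so as a sanity check it should integrate to one. The numerical sketch below is illustrative (the grid bounds and spacing are arbitrary choices, and the Riemann-sum integration is deliberately crude).

```python
import numpy as np

def p(x, m, t):
    # Joint density of (W_t, max_{s<=t} W_s) from the reflection principle.
    z = 2.0 * m - x
    dens = 2.0 * z / np.sqrt(2.0 * np.pi * t**3) * np.exp(-z * z / (2.0 * t))
    return np.where((x <= m) & (m >= 0), dens, 0.0)

t = 1.0
m = np.linspace(0.0, 6.0, 601)
x = np.linspace(-6.0, 6.0, 1201)
X, M = np.meshgrid(x, m)
dx, dm = x[1] - x[0], m[1] - m[0]

# Crude Riemann sum over the support (truncated at +/- 6 standard deviations).
total = p(X, M, t).sum() * dx * dm
assert abs(total - 1.0) < 0.05
```

Integrating out m recovers the Gaussian density of W_t, while integrating out x gives the well-known density of the running maximum, which is what makes p usable as the weight η-analogue in the barrier-option mixture formula that follows.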
So taking µ̂(u, v, t) = 0 and

σ̂(x, m, t) = √[ (c1 p(x, m; tc1 ) + c2 p(x, m; tc2 )) / (p(x, m; tc1 ) + p(x, m; tc2 )) ],

Cor. 2.23 asserts the existence of a solution to (2.24) such that the one-dimensional marginal distributions of the process (X̂, M̂ ) agree with the one-dimensional marginal distributions of (Ỹ , M̃ ), where M̂t = max{ X̂s : s ∈ [0, t] }.
2.27 Corollary. Let W be a Wiener process, fix some time T ∈ R+ , and let
X have stochastic differential
dXt = µt dt + σt dWt ,
where µ and σ are adapted processes that satisfy (2.12). Further assume that
N ⊂ R+ is a Lebesgue-null set and that µ̂ : R2 ×R+ → R and σ̂ : R2 ×R+ → R are functions such that µ̂(Xt , XtT ; t) = E[µt | Xt , XtT ] a.s. and σ̂²(Xt , XtT ; t) = E[σt² | Xt , XtT ] a.s. when t ∉ N .
Then there exists a stochastic basis (Ω̂, F̂ , F̂, P̂) that supports processes Ŵ and X̂ such that Ŵ is a Wiener process and X̂ solves the SDE:

(2.28) dX̂t = µ̂(X̂t , X̂tT ; t) dt + σ̂(X̂t , X̂tT ; t) dŴt ,
(2.29) dX̂t = µ̂(X̂t , Ẑ^2_t ; t) dt + σ̂(X̂t , Ẑ^2_t ; t) dŴt .

(2.30) P̂[ Ẑ^2_t = X̂^T_t ∀t ] ≥ P̂[ Ẑ^2_0 = Ẑ^1_0 ] = 1,

so we are done.
2.31 Example. Assume that we are in the setting of Example 2.10. Taking X = Ỹ in Cor. 2.27, we may compute µ̂ and σ̂ explicitly. It is clear that µ̂ = 0. When t ≤ T , σ̂(e, t) is a.s. only evaluated at the points with e1 = e2 , and we may use the formula given in (2.19). We now assume that t > T . Write a typical point in R2 as x = (x1 , x2 ). Set Ã , (Ỹt , Ỹ^T_t ) and define
Cor. 2.27 asserts the existence of a solution to (2.28) such that the one-dimensional marginal distributions of the process (X̂, X̂^T ) agree with the one-dimensional marginal distributions of the process (Ỹ , Ỹ^T ).
we conclude that
(2.33) B̃(T, K, L) , p1 B1 (T, K, L) + p2 B2 (T, K, L)
gives an arbitrage-free pricing rule for up-and-out call options in the model P̃, where B̃(T, K, L) is the price for the option with maturity T , strike K, and barrier L. Note that we are not making any effort to justify the pricing formula (2.32); instead, we are simply observing that the existence of a martingale measure is sufficient to ensure the absence of arbitrage. Piterbarg conjectures that this “coin-flip” model is essentially the only model in which the pricing rule (2.33) is arbitrage-free. In particular, Piterbarg writes:
Does there exist a “real” and “reasonable” dynamic model, in which uncertainty is revealed over time, and not in an instant explosion of information as in [the coin-flip model], such that all European options and all barriers are priced using (2.33)? The answer is most likely no, but we do not have a formal proof.
To produce another model in which the pricing rule (2.33) is arbitrage-free, we apply Cor. 2.23 to the process (S, M ) under the measure P̃ to produce a measure P̂ and processes Ŝ and M̂ such that L (Ŝ, M̂ | P̂) = L (S, M | P̃). This is the geometric version of Example 2.26. It then follows that

Ê[ e^{−rT} 1{M̂T ≤L} (ŜT − K)+ ] = Ẽ[ e^{−rT} 1{MT ≤L} (ST − K)+ ] = B̃(T, K, L),

so the pricing rule (2.33) is also arbitrage-free for the model P̂. We should
also observe that the prices computed by discounting cash flows under the pricing measure P̂ will no longer be given as mixtures of Black-Scholes prices after the initial time. By also including the running minimum in the auxiliary process, it is possible to construct a model which is distinct from the coin-flip model, and in which all options with both upper and lower barriers may be priced as simple mixtures of Black-Scholes prices without introducing arbitrage. While we make no claims about the extent to which the model P̂ is “real” or “reasonable”, it is fully specified and dynamically consistent.
Chapter 3
A Cross Product Construction
3.1 Setting. Let (E, E ) be a Polish space with its Borel σ-field, and set
Ω , E×C0 (R+ ; Rd ) with typical point ω = (e, x). Define the random variable
E(e, x) , e, the process X(e, x) , x, and the filtration F0 , {Ft0 }t∈R+ where
Ft0 , E ⊗σ(X t ) = E ⊗σ(Xs : s ≤ t). Then Ω is a Polish space under the
In this chapter, we will always assume that we are in Setting 3.1. Notice
that by taking E = Rd and defining Zt (e, x) , e + x(t), we recover the
standard Wiener space with canonical process Z.
One might be concerned that F0 does not satisfy the usual conditions;
however, Lem. F.8 asserts that every right-continuous F0 -martingale remains
a martingale when we move to the smallest filtration generated by F0 that
satisfies the usual conditions, so we can move to a filtration that satisfies
the usual conditions if we need to invoke results from the general theory of
processes. Moreover, F0 -stopping times have a number of useful properties
that are lost when we move to the right-continuous filtration generated by F0 .
In particular, if T is an F0 -stopping time, then the events in the σ-field FT0
have a nice characterization (e.g., Lem. A.1), and FT0 is countably generated.
These result are developed in Appendix A.
The following notion will be fundamental.
(b) Gi ⊂ Hi for 0 ≤ i ≤ n.
tion. Notice that in this example, we do not have Cti ∈ Hi when i > 0.
Let Y solve dYt = σt (Y ) dWt with Y0 = 0, and set Ct = ∫0t σs²(Y ) ds. In prose, we flip a coin to choose a volatility at the initial time, and we use this volatility over the time interval [0, t1 ). At each time ti , we flip again to reset the volatility level, but the odds are adjusted so that the conditional distribution of the volatility chosen at time ti given Yti = y is the same as the conditional distribution of σ̃0 = σ̃ti in Example 2.10 given Ỹti = y. Let Ω and Π be defined as in Example 3.4, and set P , L (Ỹ , C̃) where Ỹ and C̃ are defined as in Example 2.10. In particular, P is a measure on Ω. In this case, we have P ⊗ Π = L (Y, C).
(e.g., Lem. A.3), uniqueness follows from the standard π-system argument.
If we let ψ : C0 (R+ ; Rd ) → Ω denote the map y 7→ (e0 , ∇(x0 , t) + Θ(y, −t)), then

ψ −1 (A ∩ {∆(X, t) ∈ B })
(a) P[T = T (ω′ )] = 1,
= P[A ∩ {∆(X, T (ω′ )) ∈ C }]
= 1A (ω′ ) Q[C ],
so (b) follows.
Finally, take A ∈ FT0 and let F = B ∩ {∆(X, T ) ∈ C} with B ∈ FT0 and
C ∈ C0 . In this case, we have
As F = σ(FT0 , ∆(X, T )) (e.g., Lem. A.3), (c) then follows from the standard
π-system argument.
We can now patch together a fixed initial point in Ω and a single prob-
ability measure on C0 (R+ ; Rd ). We use this construction to glue together a
probability measure P on Ω and a probability kernel Q on C0 (R+ ; Rd ).
3.11 Definition. Let (Ω′ , F ′ ) and (Ω″ , F ″ ) be measurable spaces and fix some G ′ ⊂ F ′ . We say that Q : Ω′ ×F ″ → [0, 1] is a G ′ -measurable probability kernel from (Ω′ , F ′ ) to (Ω″ , F ″ ) if
(a) Q[A] is a G ′ -measurable random variable for fixed A ∈ F ″ , and
(b) Qω′ is a probability measure on (Ω″ , F ″ ) for fixed ω′ ∈ Ω′ .
3.12 Theorem. Let P be a probability measure on (Ω, F ), T be an F0 -
stopping time, and Q be an FT0 -measurable probability kernel from (Ω, F ) to
(C0 (R+ ; Rd ), C0 ). Then there exists a unique probability measure on (Ω, F ),
denoted P⊕T Q, such that
(a) P⊕T Q[A] = P[A] for all A ∈ FT0 , and
(b) the map ω 7→ δω ⊕T (ω) Qω [B] is a version of P⊕T Q[B | FT0 ] for each
B ∈ F.
Proof. Let Q̂ : Ω×F → [0, 1] denote the map (ω, A) 7→ δω ⊕T (ω) Qω [A]. We first show that Q̂ is an FT0 -measurable probability kernel from (Ω, F ) to (Ω, F ).
Q[A] = EP [ Q̂[A] ] = EQ [1A ],

where we use the second property in Lem. 3.10. Therefore Q has property (a). If B ∈ F , then

Q[A ∩ B] = EP [ Q̂[A ∩ B] ] = EP [ 1A Q̂[B] ] = EQ [ 1A Q̂[B] ],

where we have used the last property in Lem. 3.10, the fact that Q̂[B] is FT0 -measurable, and the fact that Q and P agree on FT0 . As A ∈ FT0 was arbitrary, we have now shown that Q has property (b) of Thm. 3.12.
The uniqueness is evident, as any other measure R with these properties must assign measure

R[B] = ER [ R[B | FT0 ] ] = EP [ Q̂[B] ]

to any set B ∈ F .
In addition, we say that Q is regular if there exists a P-null set N such that Qω′ [G] = 1G (ω′ ) for all G ∈ G ′ and ω′ ∉ N .
3.14 Theorem. Let (Ω′ , F ′ ) be a Polish space with its Borel σ-field, and let P be a probability measure on (Ω′ , F ′ ). If we fix some G ′ ⊂ F ′ , then a conditional probability distribution for P given G ′ exists. Moreover, if G ′ is countably generated, then we may choose a regular version.
For proof, one may consult [SV79] Thm 1.1.6 and Thm 1.1.8.
B ∈ σ(G , ∆(X, T )). One may read (b) as saying that X has a strong Markov-like property at the stopping time T under the measure P1 ⊗T,G P2 .
where these equalities hold for all ω and we have used (b) of Lem. 3.10. We can then conclude that 1G Q̃[∆(X, T ) ∈ C ] is a version of P̂[B | FT0 ], and we already know that 1G Q̃[∆(X, T ) ∈ C ] is a version of P2 [G ∩ {∆(X, T ) ∈ C } | G ] = P2 [B | G ], so B ⊂ A . But A is a σ-field and B is closed with respect to intersection, so σ(B) = σ(G , ∆(X, T )) ⊂ A .
We now have the existence of a common version, but we still need to show that every version of P2 [A | G ] actually works. Fix A ∈ σ(G , ∆(X, T )), let Y be any version of P2 [A | G ], and let Z be a version of P2 [A | G ] which is also a version of P̂[A | FT0 ]. So P2 [Y ≠ Z] = 0 ⇒ P1 [Y ≠ Z] = 0 as P1 |G ≪ P2 |G , but then P̂[Y ≠ Z] = 0 by (a), so Y is a version of P̂[A | FT0 ] and we are done.
representations

  Y_t = √c_1 W_t                          for t ≤ t_1 and U_0 < 1/2,
        √c_2 W_t                          for t ≤ t_1 and U_0 ≥ 1/2,
        Y_{t_1} + √c_1 (W_t − W_{t_1})    for t > t_1 and U_1 < η(Y_{t_1}, t_1 c_1) / (η(Y_{t_1}, t_1 c_1) + η(Y_{t_1}, t_1 c_2)), and
        Y_{t_1} + √c_2 (W_t − W_{t_1})    for t > t_1 and U_1 ≥ η(Y_{t_1}, t_1 c_1) / (η(Y_{t_1}, t_1 c_1) + η(Y_{t_1}, t_1 c_2)),

and

  C_t = t c_1                             for t ≤ t_1 and U_0 < 1/2,
        t c_2                             for t ≤ t_1 and U_0 ≥ 1/2,
        C_{t_1} + (t − t_1) c_1           for t > t_1 and U_1 < η(Y_{t_1}, t_1 c_1) / (η(Y_{t_1}, t_1 c_1) + η(Y_{t_1}, t_1 c_2)), and
        C_{t_1} + (t − t_1) c_2           for t > t_1 and U_1 ≥ η(Y_{t_1}, t_1 c_1) / (η(Y_{t_1}, t_1 c_1) + η(Y_{t_1}, t_1 c_2)).
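This two-regime representation is concrete enough to simulate directly. The sketch below is a minimal numerical illustration, not part of the text: it assumes, hypothetically, that η(y, v) is the centered Gaussian density with variance v, and the rates c_1, c_2 and the reset time t_1 are arbitrary illustrative choices.

```python
import numpy as np

def eta(y, v):
    # Hypothetical weight function: centered Gaussian density with variance v.
    return np.exp(-y * y / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def simulate_Y_C(c1, c2, t1, T, dt, rng):
    """One path of (Y, C): the variance rate starts at c1 or c2 by a fair
    coin flip (U0), and is re-drawn at the reset time t1 with the
    eta-weighted probabilities from the representation above (U1)."""
    n = int(round(T / dt))
    t = np.linspace(0.0, T, n + 1)
    c = c1 if rng.random() < 0.5 else c2          # U0 < 1/2 picks c1
    Y = np.zeros(n + 1)
    C = np.zeros(n + 1)
    reflipped = False
    for k in range(n):
        if not reflipped and t[k] >= t1:          # re-flip once, at time t1
            w1 = eta(Y[k], t1 * c1)
            p = w1 / (w1 + eta(Y[k], t1 * c2))
            c = c1 if rng.random() < p else c2    # U1 coin flip
            reflipped = True
        Y[k + 1] = Y[k] + np.sqrt(c) * rng.normal(0.0, np.sqrt(dt))
        C[k + 1] = C[k] + c * dt                  # C accumulates variance at rate c
    return t, Y, C

rng = np.random.default_rng(0)
t, Y, C = simulate_Y_C(c1=1.0, c2=4.0, t1=0.5, T=1.0, dt=0.001, rng=rng)
```

In particular, the simulated C is piecewise linear with slope c_1 or c_2 on each regime, matching the second display above.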
Proof. These are both Borel measurable subsets of C_0(R+; R^d) (e.g., Cor. C.9 and Cor. C.11). Fixing any ω = (e, x) ∈ Ω, we see that ∇(x, T(ω)) ∈ BV([0, t]; R^d) and ∆(x, T(ω)) ∈ BV([0, (t − T(ω)) ∨ 0]; R^d) together imply that x ∈ BV([0, t]; R^d), so

  {A^T ∈ FV^d} ∩ {∆(A, T) ∈ FV^d} ⊂ {A ∈ FV^d},

and hence P^12[A ∈ FV^d] = 1. This is (a).
Similarly, if ω = (e, x) ∈ Ω, ∇(x, T(ω)) ∈ AC([0, t]; R^d), and ∆(x, T(ω)) ∈ AC([0, (t − T(ω)) ∨ 0]; R^d), then x ∈ AC([0, t]; R^d), so we may replace FV^d with AC^d in the previous argument to get (b).
The following corollary is often easier to use.
3.21 Corollary. In addition to the assumptions of 3.19, let A be an Rd -
valued, continuous process such that ∆(A, T ) is σ G , ∆(X, T ) -measurable.
Then the following two implications hold.
(a) If A is Pi -a.s. of finite variation for i ∈ {1, 2}, then A is P12 -a.s. of
finite variation.
(b) If A is Pi -a.s. absolutely continuous for i ∈ {1, 2}, then A is P12 -a.s.
absolutely continuous.
Proof. We have {A ∈ F V d } ⊂ {AT ∈ F V d }, so P1 [A ∈ F V d ] = 1 implies
P1 [AT ∈ F V d ] = 1. We also have {A ∈ F V d } ⊂ {∆(A, T ) ∈ F V d }, so
P2 [A ∈ F V d ] = 1 implies P2 [∆(A, T ) ∈ F V d ] = 1. Assertion (a) then
follows from (a) of Lem. 3.20, and essentially the same argument shows that
(b) follows from (b) of Lem. 3.20.
Proof. It follows from the previous corollary that A is P^12-a.s. absolutely continuous. As a(ω) is a version of the derivative for each ω, we must have P^12[A_t = ∫_0^t a_u du for all t] = 1.
By taking divided differences of the process A^T, we may find a σ(A^T)⊗R+-measurable process a^1 such that a^1_t(ω) = (∂/∂t) A^T_t(ω) whenever this derivative exists. Similarly, there exists a σ(∆(A, T))⊗R+-measurable process a^2 such that a^2_t(ω) = (∂/∂t) ∆_t(A, T) whenever this derivative exists (e.g., take F'_t = σ(A^T) or F'_t = σ(∆(A, T)) for all t in Lem. C.10).
Now define the sets

  B^1(ω) ≜ { t ∈ R+ : (∂/∂t) A^T_t(ω) does not exist },   and
  B^2(ω) ≜ { t ∈ R+ : (∂/∂t) ∆_t(A(ω), T(ω)) does not exist }.
If 0 < t < T(ω) and t ∉ B^1(ω), then (∂/∂t) A^T_t(ω) exists and A^T(ω) and A(ω) agree in some neighborhood of t, so (∂/∂t) A_t(ω) exists and agrees with (∂/∂t) A^T_t(ω).
In particular, if 0 < t < T(ω) and t ∉ B(ω) ∪ B^1(ω), then a_t(ω) = a^1_t(ω). If ω ∈ {A^T ∈ FV^d}, then B(ω) ∪ B^1(ω) has Lebesgue measure zero, so a_t(ω) and a^1_t(ω) agree for Lebesgue-a.e. t ∈ (0, T(ω)). This means that

  (3.26)   1_{A^T ∈ FV^d} ∫_0^{T∧S} f(a_u) du = 1_{A^T ∈ FV^d} ∫_0^{T∧S} f(a^1_u) du

for all ω, where we use the extended integral of Rem. 1.7. The process a^1 1_{[0,T∧S]} is F'_T ⊗R+-measurable, so ∫_0^{T∧S} f(a^1_u) du is F'_T-measurable by Fubini's Theorem, and the same holds for the left-hand side of (3.26).
Similar pathwise arguments show that a_{T(ω)+t}(ω) = a^2_t(ω) if t > 0, T(ω) + t ∉ B(ω), and t ∉ B^2(ω). If ω ∈ {∆(A, T) ∈ FV^d}, then a_{T(ω)+t}(ω) and a^2_t(ω) agree for Lebesgue-a.e. t, so

  (3.27)   1_{∆(A,T) ∈ FV^d} ∫_T^{T∨S} f(a_u) du = 1_{∆(A,T) ∈ FV^d} ∫_0^{(S−T)^+} f(a^2_u) du
  {S ∨ T − T ≤ t} = {S ≤ T + t} ∈ F'_{T+t} = F̂'_t.
  A ∩ {S ∨ T − T ≤ t} = A ∩ {S ≤ T + t} ∈ F'_{T+t} = F̂'_t.
As this is the hitting time of a closed set, T^n_2 is an F'-stopping time. Notice that in this case, T̂^n_2 = inf{t ≥ 0 : |∆_t(M, T)| ≥ n}, so T̂^n_2 is σ(∆(M, T))-measurable, as is M̂^n ≜ M̂^{T̂^n_2} = ∇(∆(M, T), T̂^n_2). Also notice that |M̂^n| ≤ n, so M̂^n is an (F̂', P^2)-martingale. We write Z ∈ bF to mean that Z is a bounded F-measurable random variable. For 0 ≤ s ≤ t < ∞, let
where the second equality follows from (b) of Cor. 3.15. This means that
B ⊂ A , but A is a monotone class (by bounded convergence) and B is
closed with respect to forming finite products, so σ(B) ⊂ A by a monotone
class argument. As
  E^12[Z(M^n_t − M^n_s)] = E^12[Z 1_{s≤T}(M^n_t − M^n_T)] + E^12[Z 1_{s≤T}(M^n_T − M^n_s)]
                           + E^12[Z 1_{T<s}(M^n_t − M^n_s)]
                         ≜ A + B + C.

That A = 0 follows from the fact that M̂^n is an (F̂', P^12)-martingale.
To see that B = 0, notice that Z 1_{s≤T}(M^n_T − M^n_s) ∈ F'_T, so we may apply (a) of Cor. 3.15 and write
where we have also used the fact that M^{T^n∧T} is a bounded (F', P^1)-local martingale, so it is in fact a martingale.
Finally, we show that C = 0. Let ŝ ≜ s ∨ T − T and t̂ ≜ t ∨ T − T. Notice that Z 1_{T<s} ∈ F'_s ⊂ F̂'_ŝ. Then

  Z 1_{T<s}(M^n_t − M^n_s) = Z 1_{T<s∧T^n_1}(M^{T^n_2}_{t∨T} − M^{T^n_2}_{s∨T})
                           = Z 1_{T<s∧T^n_1}(M̂^n_{t̂} − M̂^n_{ŝ}).

Finally notice that {T < T^n_1} ∈ F'_T = F̂'_0, so Z 1_{T<s∧T^n_1} ∈ F̂'_ŝ. We then write
Proof. Take F̂', T^n_1, T^n_2, T̂^n_2, and T^n as in the proof of the previous lemma. Let M̂^n ≜ M̂^{T̂^n_2}, M^n ≜ M^{T^n_2}, and M^{n,m} ≜ M^{T^n_2 ∧ T^m}. M̂^n is bounded by n, and M^{n,m} is bounded by m + (n ∧ m). In particular, M^{n,m} is an (F', P)-martingale. Notice that if m ≥ n, then {T^m ≥ T^n_2} = {T^m_1 > T} ∈ F'_T. In prose, when m ≥ n, the only way for T^m to happen strictly before T^n_2 is for the process to make a move of size at least m before time T. Also notice that if m ≥ n, then M^n = M^{n,m} on the set {T^m ≥ T^n_2}.
We are now ready to show that M̂^n is a (P, F̂')-martingale. Fix s < t and
bounded Z ∈ F̂'_s = F'_{T+s}. Notice that Z 1_{T^m ≥ T^n_2} ∈ F̂'_s = F'_{T+s}, and write

  E[Z(M̂^n_t − M̂^n_s)] = E[Z(M^n_{T+t} − M^n_{T+s})]
                        = lim_m E[Z 1_{T^m ≥ T^n_2}(M^{n,m}_{T+t} − M^{n,m}_{T+s})]
                        = 0,

where we have used bounded convergence and the fact that M^{n,m} is a (P, F')-martingale. This means that {T̂^n}_n is a localizing sequence, and M̂ is a (P, F̂')-local martingale.
Combining Cor. 3.30 and Lem. 3.29 yields the following corollary.
3.33 Remark. We often apply this result with Z = YT for some process
Y . In this situation, we have (MU ∧S − MT ∧S ) YT = (MU ∧S − MT ∧S ) YT ∧S as
MU ∧S − MT ∧S is only nonzero if T < S.
Proof. We write

  E[(M_U − M_T) Z | F'_S]
    = 1_{S≤T} E[(M_U − M_T) Z | F'_S] + 1_{T<S≤U} E[(M_U − M_T) Z | F'_S] + 1_{U<S} E[(M_U − M_T) Z | F'_S]
    = 1_{S≤T} E[ E[M_U − M_T | F'_T] Z | F'_S ] + 1_{T<S≤U} ( E[M_U | F'_S] − M_T ) Z + 1_{U<S} (M_U − M_T) Z
    = 1_{T<S≤U} (M_S − M_T) Z + 1_{U<S} (M_U − M_T) Z
    = (M_{U∧S} − M_{T∧S}) Z,

where the first term in the second-to-last step vanishes because E[M_U − M_T | F'_T] = 0.
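The displayed identity can be sanity-checked exhaustively in a finite setting. The sketch below (illustrative, not from the text) verifies it on a three-step symmetric random walk, with deterministic times T ≤ U, an intermediate S, and Z = M_T, by averaging over each atom of F_S.

```python
from itertools import product

# Check E[(M_U - M_T) Z | F_S] = (M_{U∧S} - M_{T∧S}) Z on a 3-step
# symmetric ±1 random walk, with deterministic T = 1 <= U = 3, S = 2,
# and the bounded F_T-measurable variable Z = M_T.
T, U, S, N = 1, 3, 2, 3

def walk(steps):
    M = [0]
    for s in steps:
        M.append(M[-1] + s)
    return M

paths = list(product([-1, 1], repeat=N))

# The atoms of F_S are the possible first S increments; on each atom the
# conditional expectation is a plain average over the compatible paths.
for prefix in product([-1, 1], repeat=S):
    group = [p for p in paths if p[:S] == prefix]
    M0 = walk(group[0])
    Z = M0[T]                      # Z is constant on the atom since T <= S
    lhs = sum(walk(p)[U] - walk(p)[T] for p in group) * Z / len(group)
    rhs = (M0[min(U, S)] - M0[min(T, S)]) * Z
    assert abs(lhs - rhs) < 1e-12
```

Here the {T < S ≤ U} branch of the identity is exercised; other orderings of S can be checked the same way.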
  Y_t ≜ M^1_{t∧T} M^2_{t∧T} + (M^1_t − M^1_{t∧T})(M^2_t − M^2_{t∧T}) − C_t.
let

  T^n ≜ inf{ t ≥ 0 : |M^i_t| ≥ n for any i ∈ {1, 2, 3} or |C_t| ≥ n },

and define M^{i,n} ≜ (M^i)^{T^n} for i ∈ {1, 2, 3}, C^n ≜ C^{T^n}, and Y^n ≜ Y^{T^n}. Notice that M^{i,n} is bounded by n for i ∈ {1, 2, 3} and Y^n is bounded by 5n² + n.
We now show that Y n is a martingale under P1 and P2 . For 0 ≤ s ≤ t,
we write
  (3.35)   E^i[(M^{1,n}_t − M^{1,n}_{t∧T}) M^{2,n}_{t∧T} | F'_s] = (M^{1,n}_s − M^{1,n}_{s∧T}) M^{2,n}_{s∧T},

where we have applied the previous lemma with M = M^{1,n}, Z = M^{2,n}_{t∧T}, S = s, T = t ∧ T, and U = t, and used the fact that (M^{1,n}_s − M^{1,n}_{s∧T}) is zero
if T ≥ s. Clearly the same equality holds if we reverse the roles of M 1,n and
M 2,n . Now we use the fact that M 3,n is a martingale to write
  E^12[M^{3,n}_t | F'_s] = E^12[Y^n_t | F'_s] + E^12[(M^{1,n}_t − M^{1,n}_{t∧T}) M^{2,n}_{t∧T} | F'_s]
                                              + E^12[(M^{2,n}_t − M^{2,n}_{t∧T}) M^{1,n}_{t∧T} | F'_s]
                         = Y^n_s + (M^{1,n}_s − M^{1,n}_{s∧T}) M^{2,n}_{s∧T} + (M^{2,n}_s − M^{2,n}_{s∧T}) M^{1,n}_{s∧T}
                         = M^{3,n}_s
∆(Y, t_1) and ∆(C, t_1) are both σ(G, ∆(X, t_1))-measurable. Notice that under both P ≜ L((Ỹ, C̃)) and Q ≜ P ⊗_{t_1, σ(Y_{t_1})} P = L((Y, C)), Y is a continuous semimartingale with the characteristics (0, C).
(a) P^1|G ≪ (P^2 ⊗_{T,H} P^3)|G,

(b) (P^1 ⊗_{S,G} P^2)|H ≪ P^3|H, and

(c) (P^1 ⊗_{S,G} P^2) ⊗_{T,H} P^3 = P^1 ⊗_{S,G} (P^2 ⊗_{T,H} P^3).

  P^12 ≜ P^1 ⊗_{S,G} P^2,
  P^23 ≜ P^2 ⊗_{T,H} P^3,
  P^{12,3} ≜ (P^1 ⊗_{S,G} P^2) ⊗_{T,H} P^3, and
  P^{1,23} ≜ P^1 ⊗_{S,G} (P^2 ⊗_{T,H} P^3).

(a) Fix A ∈ G with P^1[A] > 0. A ∈ F'_T and (a) of Cor. 3.15 imply P^23[A] = P^2[A], and P^2[A] > 0 as P^1|G ≪ P^2|G.
  A ∈ σ(G, ∆(X^T, S)) ⊂ σ(G, ∆(X, S)),

so 0 is also a version of P^12[A | F'_S] by (b) of Cor. 3.15, and P^12[A] = 0.
(c) Fix G ∈ G, B ∈ σ(∆(X^T, S)), and C ∈ σ(∆(X, T)). Let Z be any

  E^23[1_G Y] = E^2[1_G Y] = E^2[1_{G∩B} Z]

  = E^{1,23}[ 1_A E^2[ 1_B E^3[1_C | H] | G ] ]
  = E^{1,23}[ 1_A E^23[ 1_{B∩C} | G ] ]
  = P^{1,23}[A ∩ B ∩ C].
As we have
We now have everything that we need for the proof of Thm. 3.6. We
restate the result below for the reader’s convenience.
  E^{m+1}[1_A Z] = E^m[1_A Z] = P^m[A ∩ B] = P^{m+1}[A ∩ B],

then (b) of Cor. 3.15 says that any version of P[B | G_{m+1}] is a version of P^{m+1}[B | F'_{T_{m+1}}], so P^{m+1} satisfies the inductive assumption (b) for all 0 ≤ i ≤ m + 1.
It is then clear that P^n satisfies properties (a) and (b). To see that this measure is unique, fix any A_0 ∈ σ(E) = F'_0 and A_i ∈ σ(∆(X^{T_i}, T_{i−1})) for 1 ≤ i ≤ n + 1. Then

  P^n[∩_{i=0}^{n+1} A_i] = E^n[ 1_{A_0} E^n[ 1_{A_1} · · · E^n[1_{A_{n+1}} | F'_{T_n}] · · · | F'_0 ] ]
                         = E^n[ 1_{A_0} E^n[ 1_{A_1} · · · E[1_{A_{n+1}} | G_n] · · · | F'_0 ] ]
                         = E[ 1_{A_0} E[ 1_{A_1} · · · E[1_{A_{n+1}} | G_n] · · · | G_0 ] ],

so the probability assigned to the event ∩_{i=0}^{n+1} A_i is fully determined by P and the properties (a) and (b). As such sets form a π-system, any two measures which agree on sets of this form must agree on all of F by the π-λ theorem.
Now we quickly check that the properties of the original measure which
were preserved by the binary construction are also preserved by the gen-
eral construction. All of these proofs are essentially the same, and we use
induction to reduce to the binary case.
3.39 Lemma. Let P be a measure on Ω and Π = {(Ti , Gi )}0≤i≤n be an
extended partition. Let A be a continuous, Rd -valued process, and assume
that ∆(A, Ti ) is σ Gi , ∆(X, Ti ) -measurable for each i ∈ {1, . . . , n}. Then
the following two implications hold.
(a) If A is P-a.s. of finite variation, then A is P^{⊗Π}-a.s. of finite variation.

(b) If A is P-a.s. absolutely continuous, then A is P^{⊗Π}-a.s. absolutely continuous.
Proof. Assume that A is P-a.s. of finite variation, and set P0 , P ⊗T0 ,G0 P.
It then follows from Cor. 3.21 that A is P0 -a.s. of finite variation. We now
proceed by induction, so assuming that A is Pi -a.s. of finite variation and
setting P^{i+1} = P^i ⊗_{T_{i+1},G_{i+1}} P, it again follows from Cor. 3.21 that A is P^{i+1}-a.s. of finite variation. As P^n = P^{⊗Π}, we have (a). Assertion (b) follows in the same way.
Proof. Set P^0 ≜ P ⊗_{T_0,G_0} P. It then follows from Lem. 3.22 that

  E^0[∫_0^S f(a_u) du] = E^0[∫_0^{T_0∧S} f(a_u) du] + E^0[∫_{T_0}^{T_0∨S} f(a_u) du]
                       = E[∫_0^S f(a_u) du].

As P^n = P^{⊗Π}, we are done.
i ∈ {0, . . . , N(n)}. If

  (3.43)   E^P[∫_0^t ‖a_u‖ du] < ∞   ∀t ∈ R+,

using the integrability assumption (3.43). Applying the previous lemma with f(x) = (‖x‖ − M)^+ and S = t, we have

  E^n[∫_0^t (‖a_u‖ − M)^+ du] = E^P[∫_0^t (‖a_u‖ − M)^+ du] < ε.
Proof. By taking limits of divided differences on the left (e.g., Lem. C.10), we may find an F'-predictable process a such that a_t(ω) = (∂/∂t) A_t(ω) whenever this derivative exists. As A is P-a.s. absolutely continuous, pathwise
Combining (3.45) and (3.46), we see that we may apply the previous lemma to conclude that a is uniformly integrable with respect to {P^n×λ_{[0,t]}}_n for each t.
As A_0 = 0, we only need to check that we can control the modulus of continuity on each compact interval [0, t] to conclude that {L(A | P^n)}_n is tight. Fix some T > 0. Using the uniform integrability, we may choose M so large that

  sup_n E^n[∫_0^t (‖a_u‖ − M)^+ du] < ε²/2,

so {L(A | P^n)}_n is tight.
Proof. Setting P0 , P⊗T0 ,G0 P, we may apply Cor. 3.36 to conclude that X has
characteristics (B, C) under P0 . We then proceed inductively, setting Pi+1 =
Pi ⊗Ti+1 ,Gi+1 P and applying Cor. 3.36 to conclude that X has characteristics
⊗Π
(B, C) under Pi+1 for each i < n. As Pn = P , we are done.
Chapter 4
Main Theorem
for all t ∈ R+ and all bounded f : E×R+ → R^d that are E⊗R+-measurable. Moreover, in this case we have

  (4.4)   E[∫_0^t ‖ẑ(Y_u, u)‖ du] < ∞   ∀t ∈ R+.
  = Σ_{i=1}^d e_i ∫_{E×[0,t]} g(y, u) ν^i(dy, du)
  = E[∫_0^t z_u g(Y_u, u) du].
But then

  α_N E[∫_0^T 1_{z̃(Y_s,s) ∈ H_N^c} ds] < E[∫_0^T (z̃(Y_s, s), y') 1_{z̃(Y_s,s) ∈ H_N^c} ds]
                                        = E[∫_0^T (z_s, y') 1_{z̃(Y_s,s) ∈ H_N^c} ds]
                                        ≤ α_N E[∫_0^T 1_{z̃(Y_s,s) ∈ H_N^c} ds],
We have

  E[∫_0^t z̃(Y_u, u) g(Y_u, u) du] = E[∫_0^t ẑ(Y_u, u) g(Y_u, u) du]
                                   = E[∫_0^t z_u g(Y_u, u) du],
σ(X)⊗R+ . Indeed, if this were true, it would imply that every adapted
process is progressive without modification. If the sample paths of X have
enough regularity that we may write X as the pointwise limit of simple
functions, then it is of course true that X ∈ σ(X)⊗R+ . We give the previous
definition so that we may handle the situation where we do not assume any
sample path regularity.
If this expression is finite, then (4.11) follows in the same way by dominated convergence in each coordinate. If this expression is infinite, then both sides of (4.11) are defined to be ∞ (recall Rem. 1.7). Either way, (4.11) holds.
where the first and last equalities follow from Lem. 4.10.
While we do not repeat the proof, it essentially results from the fact that we can approximate f arbitrarily well in L¹(L(Y)) with bounded, continuous functions. The point is that we get a stronger kind of convergence from
the fact that the Yn share a common law. This is related to the notion of
weak-strong convergence as developed by Jacod and Memin in [JM81b] and
[JM81a]. In the following theorem, it is the assumption of common one-
dimensional marginal distributions that allows us to conclude that we have
weak convergence even though f is only assumed to be measurable.
4.15 Lemma. Let E be a Polish space and let {Y^n}_{n≤∞} be a sequence of continuous E-valued processes, possibly defined on different spaces. Let f : E×R+ → R^d be a measurable function and define F^n_t ≜ ∫_0^t f(Y^n_s, s) ds. If

(a) L(Y^n_t) = L(Y^1_t) for all t ∈ R+ and n ∈ N,

(b) Y^n ⇒ Y^∞, and

(c) E^1[∫_0^t ‖f(Y^1_u, u)‖ du] < ∞ ∀t ∈ R+,

then

(d) {(f(Y^n, ·), P^n×λ_{[0,t]})}_{n∈N} is uniformly integrable ∀t ∈ R+,

(e) P^n[F^n ∈ C(R+; R^d)] = 1 for each n ∈ N, and

(f) (Y^n, F^n) ⇒ (Y^∞, F^∞).
Using the integrability assumption (c), we may make this last expression arbitrarily small by choosing M sufficiently large. This implies (d), and also implies that

  P^n[ ∫_0^t ‖f(Y^n_u, u)‖ du < ∞ ∀t ∈ R+ ] = 1   ∀n ∈ N,
In particular,

  inf_{m∈N} sup_{n∈N} P^n[ d(F^n, Z^{n,m}) > δ ] = 0
The next result will show that we may approximate an integrable process
in L1 (P×λ[0,t] ) using step functions if we randomize the partition that we use
to generate the step functions. First we will need to present a lemma from
analysis.
If f : R → R is a function which is integrable over [0, T] and we set φ_n(t) ≜ n 1_{[0,1/n]}(t), then φ_n converges to the Dirac mass at 0 in some sense, so we might expect f ∗ φ_n to converge to f in L¹([0, T], λ_{[0,T]}). This observation motivates, but is not quite equivalent to, the following lemma.
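The heuristic is easy to check numerically. In the sketch below (an illustration with an arbitrary discontinuous test function, not part of the text), f ∗ φ_n is a trailing average of f over a window of length 1/n, and the L¹ error on [0, T] shrinks roughly like 1/n.

```python
import numpy as np

# f * phi_n with phi_n = n 1_{[0,1/n]} is a trailing average of f over a
# window of length 1/n; we approximate it on a grid and measure the
# L^1([0, T]) distance to f for a discontinuous f.
T = 1.0
m = 100_000
s = np.linspace(0.0, T, m, endpoint=False)
ds = T / m
f = ((s >= 0.3) & (s < 0.7)).astype(float)   # f = 1_{[0.3, 0.7)}

def l1_error(n):
    w = max(int(m / (n * T)), 1)             # window of length 1/n in grid steps
    c = np.cumsum(f)
    conv = np.empty(m)
    conv[:w] = c[:w] / w                     # partial windows near t = 0
    conv[w:] = (c[w:] - c[:-w]) / w          # trailing moving average
    return float(np.sum(np.abs(conv - f)) * ds)

errs = [l1_error(n) for n in (10, 100, 1000)]
```

The error is concentrated in the 1/n-neighborhoods of the two jumps of f, which is why it decays at rate 1/n here.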
4.19 Lemma. Let f : R+ → R^d be a function with ∫_0^t ‖f(s)‖ ds < ∞ for all t ∈ R+. Define the sets

  I^n_i ≜ { (t, u) ∈ R+×[0, 1] : (u + i − 1)/n ≤ t < (u + i)/n },
Proof. Fix any t and ε > 0. Then choose g ∈ C([0, t + 1]; R^d) with

  ∫_0^{t+1} ‖f(s) − g(s)‖ ds < ε/4.

Set m ≜ ⌈t⌉ ∈ [t, t + 1) ∩ N and set g_n(s, u) ≜ Σ_{i=1}^{mn} g((u + i − 1)/n) 1_{I^n_i}(s, u). We have

  ∫_0^1 ∫_0^t ‖f_n(s, u) − g_n(s, u)‖ ds du
    ≤ Σ_{i=1}^{mn} ∫_0^1 ∫_{(u+i−1)/n}^{(u+i)/n} ‖f((u + i − 1)/n) − g((u + i − 1)/n)‖ ds du
    = Σ_{i=1}^{mn} ∫_0^1 ‖f((u + i − 1)/n) − g((u + i − 1)/n)‖ (du/n)
    = Σ_{i=1}^{mn} ∫_{(i−1)/n}^{i/n} ‖f(v) − g(v)‖ dv ≤ ε/4.
when n ≥ N.
Set Ω ≜ [0, 1]×Ω' with typical point ω = (u, ω'), and define U(u, ω') ≜ u. Letting R_{[0,1]} denote the Borel σ-field on [0, 1], set F ≜ R_{[0,1]}⊗G, P ≜ λ_{[0,1]}×Q, and a(u, ω') ≜ a'(ω'). Finally, define the random times T^n_0 ≜ 0, T^n_i ≜ (U + i − 1)/n for i ∈ {1, . . . , n²}, and T^n_{n²+1} ≜ ∞, and the sampled processes a^n_t ≜ Σ_{i=1}^{n²} a_{T^n_i} 1_{[T^n_i, T^n_{i+1})}(t). Then

  (4.22)   lim_{n→∞} E^P[∫_0^t ‖a_s − a^n_s‖ ds] = 0   ∀t ∈ R+.
Proof. We will first show that the collection of processes {an }n is uniformly
integrable with respect to P×λ[0,t] for each t ∈ R+ . To see this, fix some
t ∈ R+ and set m ≜ ⌈t⌉ ∈ [t, t + 1) ∩ N, so t ≤ T^n_{mn+1}. We then write

  E^P[∫_0^t ‖a^n_s‖ 1_{‖a^n_s‖>M} ds]
    ≤ E^P[∫_0^{T^n_{mn+1}} ‖a^n_s‖ 1_{‖a^n_s‖>M} ds]
    = (1/n) Σ_{i=1}^{mn} E^P[ ‖a_{T^n_i}‖ 1_{‖a_{T^n_i}‖>M} ]
    = Σ_{i=1}^{mn} ∫_0^1 E^Q[ ‖a'_{(u+i−1)/n}‖ 1_{‖a'_{(u+i−1)/n}‖>M} ] (du/n)
    = Σ_{i=1}^{mn} ∫_{(i−1)/n}^{i/n} E^Q[ ‖a'_s‖ 1_{‖a'_s‖>M} ] ds
    ≤ E^Q[∫_0^m ‖a'_s‖ 1_{‖a'_s‖>M} ds],
where the third relation follows from Fubini's Theorem applied to the product measure P = λ_{[0,1]}×Q. As a' is integrable over the interval [0, m] under Q, we may make this last expression arbitrarily small by choosing M large. We have now shown that {a^n}_n is uniformly integrable with respect to P×λ_{[0,t]}. As a result, {‖a − a^n‖}_n is also uniformly integrable with respect to P×λ_{[0,t]}.
Define

  (4.23)   A'_{t,n}(ω') ≜ ∫_0^1 ∫_0^t ‖a'_s(ω') − a^n_s(u, ω')‖ ds du,
where both inequalities follow from Jensen's inequality. The uniform integrability of {A'_{t,n}}_n with respect to Q then follows from the uniform integrability of {‖a_s − a^n_s‖}_n with respect to P×λ_{[0,t]}.
If we fix an ω' such that ∫_0^t ‖a'_s(ω')‖ ds < ∞ for all t ∈ R+, then we may apply the previous lemma to the right-hand side of (4.23) to conclude that lim_n A'_{t,n}(ω') = 0. (4.21) implies that Q[∫_0^t ‖a'_s‖ ds < ∞ ∀t ∈ R+] = 1, so we may conclude that lim_n A'_{t,n} = 0 Q-a.e. Combining this with the uniform integrability of {A'_{t,n}}_n, we conclude that lim_n E^Q[A'_{t,n}] = 0. As t is arbitrary, we have shown (4.22).
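The convergence (4.22) can be illustrated by a small Monte Carlo sketch (with illustrative stand-ins, not the text's objects): a deterministic rough path plays the role of a', we average over draws of U, and we sample on the randomized grid T_i = (U + i − 1)/n, holding each sample on [T_i, T_{i+1}).

```python
import numpy as np

rng = np.random.default_rng(1)
t_max = 1.0

def a_prime(s):
    # A rough but integrable "path" standing in for a' (illustrative only).
    return np.sign(np.sin(12.0 * s)) + s

def mean_l1_error(n, n_u=200, m=20_000):
    """E over U of the L^1([0, t_max]) distance between a' and its
    randomized-grid step approximation a^n."""
    s = np.linspace(0.0, t_max, m, endpoint=False)
    ds = t_max / m
    total = 0.0
    for u in rng.random(n_u):
        Ti = (u + np.arange(0, n + 2)) / n   # T_i = (U + i - 1)/n for i >= 1
        idx = np.searchsorted(Ti, s, side="right") - 1
        an = np.zeros(m)                      # a^n vanishes on [0, T_1)
        good = idx >= 0
        an[good] = a_prime(Ti[idx[good]])
        total += np.sum(np.abs(a_prime(s) - an)) * ds
    return total / n_u

errs = [mean_l1_error(n) for n in (5, 20, 80)]
```

The randomization of the grid is what lets the argument above avoid any regularity assumption on a' beyond integrability; the error here still decays because the jump locations are hit by a vanishing fraction of the random cells.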
sume that on each space there is defined a process x^n and a random partition
and F^n_k ≜ σ(Y^n_j, T^n_j : j ≤ k) for k ∈ {0, . . . , N(n)}, and assume that {Y^n_k, F^n_k}_{0≤k≤N(n)} is a martingale for each n. Then

  (4.26)   lim_{n→∞} E^n[ sup_{s∈[0,t]} ‖∫_0^s x^n_u du‖ ] = 0   ∀t ∈ R+.
  π = {0 = T_0 ≤ T_1 ≤ . . . ≤ T_N}

is a random partition with T_N > t and |π| ≤ 1. Set X_t ≜ ∫_0^t x_s ds, Y_k ≜ X_{T_k}, and F_k ≜ σ(Y_j, T_j : j ≤ k). We show below that if {Y_k, F_k}_{0≤k≤N} is a martingale, then

  (4.27)   E[ sup_{s∈[0,t]} ‖X_s‖ ] ≤ M E[|π|] + E[∫_0^{t+1} (‖x_s‖ − M)^+ ds]
                                    + d C_1 ( M E[|π|] + E[∫_0^{t+1} (‖x_u‖ − M)^+ du] )^{1/2}
                                      × ( E[∫_0^{t+1} ‖x_u‖ du] )^{1/2},
and X, we write

  E[ max_{1≤n≤N∧S} ‖Y_n‖ ]
    ≤ Σ_{1≤i≤d} E[ max_{0≤n≤N∧S} |Y^i_n| ]
    ≤ Σ_{1≤i≤d} C_1 E[ ( Σ_{0≤n≤N∧S} (Y^i_n − Y^i_{n−1})² )^{1/2} ]
    ≤ d C_1 E[ ( Σ_{0≤n≤N∧S} ‖X_{T_n} − X_{T_{n−1}}‖² )^{1/2} ]
    ≤ d C_1 E[ ( max_{0≤n≤N∧S} ‖X_{T_n} − X_{T_{n−1}}‖ ∫_0^{t+1} ‖x_u‖ du )^{1/2} ]
    ≤ d C_1 ( E[ max_{1≤n≤N∧S} ‖X_{T_n} − X_{T_{n−1}}‖ ] )^{1/2} ( E[∫_0^{t+1} ‖x_u‖ du] )^{1/2}
(4.28)
    ≤ d C_1 ( M E[|π|] + E[∫_0^{t+1} (‖x_u‖ − M)^+ du] )^{1/2} × ( E[∫_0^{t+1} ‖x_u‖ du] )^{1/2},

We now take the supremum over s ∈ [0, t] on the left-hand side of (4.29) and take expectations
to give
  E[ sup_{s∈[0,t]} ‖X_s‖ ] ≤ M E[|π|] + E[∫_0^t (‖x_u‖ − M)^+ du] + E[ max_{1≤n≤N∧S} ‖Y_n‖ ].
Set

  δ = δ(ε, C_2, M_1) ≜ ε/(2 M_1) ∧ ε²/(8 M_1 C_1² C_2 d²),

and choose M_2(δ) so large that |π(n)| ≤ δ ∧ 1 and T^n_{N(n)} > t for all n > M_2, using the fact that {π(n)}_n converges uniformly to the identity. Putting this all together and applying the estimate (4.27) to X^n then gives
  E^n[ sup_{s∈[0,t]} ‖X^n_s‖ ]
    ≤ M_1 E^n[|π(n)|] + E^n[∫_0^{t+1} (‖x^n_s‖ − M_1)^+ ds]
      + d C_1 ( M_1 E^n[|π(n)|] + E^n[∫_0^{t+1} (‖x^n_s‖ − M_1)^+ ds] )^{1/2} × ( E^n[∫_0^{t+1} ‖x^n_s‖ ds] )^{1/2}
    ≤ M_1 δ + ε/4 + d C_1 ( M_1 δ + ε²/(8 C_1² C_2 d²) )^{1/2} √C_2
    ≤ ε/2 + d C_1 · ε/(2 C_1 √C_2 d) · √C_2
    ≤ ε

for all n ≥ N. As t and ε were arbitrary, we have shown that (4.26) holds.
We first give a version of the main theorem stated in terms of the characteristics of an Itô process.
We first give a version of the main theorem stated in terms of the char-
acteristics of an Itô process.
∫_0^t c_s ds, where b_t ∈ R^d and c_t ∈ S^d_+ are F-adapted processes with

  (4.31)   E[∫_0^t (‖b_s‖ + ‖c_s‖) ds] < ∞   ∀t ∈ R+.
and set G^n_i ≜ σ(Y_{t^n_i}). In Example 3.4, we showed that Π(n) ≜ {(G^n_i, t^n_i)}_{0≤i≤n} is an extended partition, so we may define the sequence of measures P^n ≜ P^{⊗Π(n)}. Recall that we interpreted these extended partitions as filtration-like
objects in which we choose to forget everything about the process X at time
tni except the current location of Y . We also showed in Example 3.4 that, in
this case, we have
  H^n_i ≜ σ( G^n_i, ∆(X^{t^n_i}, t^n_{i−1}) ) = σ( Θ(Y^{t^n_i}, t^n_{i−1}), ∆(C^{t^n_i}, t^n_{i−1}) ).
  (4.33)   Q ≜ L(Z̃_0, Ỹ, B̃, C̃),

  (4.34)   L(E', Y', Z', B', C' | Q) = L(Z̃_0, Ỹ, Z̃, B̃, C̃).
By taking divided differences on the left (e.g., Lem. C.10), we may find G'-predictable processes b' and c' such that, for each ω' ∈ Ω', we have b'_t(ω') = (∂/∂t) B'_t(ω') and c'_t(ω') = (∂/∂t) C'_t(ω') whenever these derivatives exist. As B̃ and C̃ are P̃-a.s. absolutely continuous, (4.34) implies that B' and C' are Q-a.s.
We will now show that b̂(Z_t, t) is still a version of E^P[b_t | Z_t] for Lebesgue-a.e. t. Fixing any t ∈ R+ and any bounded, E⊗R+/R^d-measurable f : E×R+ → R^d, we write

  E^P[∫_0^t b_s f(Z_s, s) ds] = Ẽ[∫_0^t b̃_s f(Z̃_s, s) ds]
                              = Ẽ[∫_0^t b̂(Z̃_s, s) f(Z̃_s, s) ds]
                              = E^P[∫_0^t b̂(Z_s, s) f(Z_s, s) ds].

The first and last equalities follow from the fact that L(B, Z | P) = L(B̃, Z̃) (e.g., Cor. C.18). The middle equality follows from our assumption that
Notice that each Tin is trivially an F0 -stopping time as Tin is F00 -measurable,
and notice that the sequence of partitions {π(n)}n converges uniformly to
the identity.
We now define the additional objects that we need to specify a generalized
partition. For each n ∈ N, let
Intuitively, this structure means that the only historical information that we
keep at the reset time Tin is the value of U and the current location of Z.
Notice that T^n_1 − T^n_0 = U/n and T^n_i − T^n_{i−1} = 1/n for i > 1, so T^n_i − T^n_{i−1} is always G^n_{i−1}-measurable. As Z may be updated using only the changes in Y, we have

  (4.41)   Θ(Z^{T^n_i}, T^n_{i−1}) ∈ H^n_i   ∀i ∈ {1, . . . , n² + 1}.
where we use property (b) of Def. 2.1 at (4.42) and property (a) of Def. 2.1 at (4.43). Because T^n_i − T^n_{i−1}, Z_{T^n_{i−1}}, and ∆(Y^{T^n_i}, T^n_{i−1}) are all H^n_i-measurable, so are these random variables.
We now set Π(n) ≜ {(T^n_i, G^n_i)}_{0≤i≤n²} for n ≥ 1, and we show that each Π(n) is an extended partition. Specifically, we need to check that

(a) each T^n_i is a finite F'-stopping time,

(c) G^n_i ⊂ H^n_i.

Claim (a) holds as π(n) is uniformly bounded by n and T^n_i ∈ F'_0 for all i, and we have already shown (b). Writing Z_{T^n_i} = Θ_{T^n_i − T^n_{i−1}}(Z^{T^n_i}, T^n_{i−1}), we see that (4.41) and T^n_i − T^n_{i−1} ∈ G^n_{i−1} imply that Z_{T^n_i} ∈ H^n_i for each i, so (c)
  (4.44)   L(E, Y, Z | P^n) ⇒ L(Ê, Ŷ, Ẑ),

where we have used the fact that P^n agrees with P on each H^n_{i+1} (e.g., (a) of Thm. 3.6). It then follows from (4.44) that L(Ẑ_t) = L(Z_t | P) = L(Z̃_t) for all t ∈ R+.
To complete the proof, we need to characterize the limit. We will show that Ŷ has the characteristics (B̂, Ĉ) with respect to (F̂, P̂) by showing that
As we have (4.39), (4.40), and (4.44), and we have shown that Z has the same one-dimensional marginal distributions under each P^n, we may apply Lem. 4.15 and Rem. 4.18 to conclude that (4.46) holds.
If we show that lim_n P^n[d(B, B̄) > ε] = 0 and lim_n P^n[d(C, C̄) > ε] = 0 for each ε, then (4.45) follows from (4.46) (e.g., Lem. B.2). We will actually do slightly more. We will show that

  (4.49)   lim_{n→∞} E^n[ sup_{s≤t} ‖B_s − B̄_s‖ ] = 0   ∀t ∈ R+, and

  (4.50)   lim_{n→∞} E^n[ sup_{s≤t} ‖C_s − C̄_s‖ ] = 0   ∀t ∈ R+.
We now show that (4.49) holds by approximating B and B̄ with step functions. As a first step, we show that there exist random variables {ξ^n_i}_{1≤i≤n²} such that P[ξ^n_i = b_{T^n_i}] = 1 and ξ^n_i is H^n_{i+1}-measurable. Recall that T^n_i ≜ (U + i − 1)/n for i ∈ {1, . . . , n²}, and define the R^d-valued random variables

In prose, ξ^n_i is the right derivative of B at the time T^n_i (when it exists), so ξ^n_i is fully determined by the changes in B just after T^n_i. For each ω' ∈ Ω', we define the sets

  A^{n,i}_{ω'} ≜ { u ∈ [0, 1] : ξ^n_i(u, ω') = b'_{(u+i−1)/n}(ω') },   and
  B^{n,i}_{ω'} ≜ { u ∈ [0, 1] : (∂/∂t) B'_t(ω') exists at t = (u + i − 1)/n }.

Recall that b'_t(ω') = (∂/∂t) B'_t(ω') whenever this derivative exists. It is clear from the construction of ξ^n_i that ξ^n_i(u, ω') = (∂/∂t) B'_t(ω') at t = (u + i − 1)/n whenever this derivative exists. Combining these two observations, we see
only apply Fubini's Theorem to the set {ξ^n_i = b_{T^n_i}}, so the potential lack of Borel measurability of the set B^n_i(ω') is not a problem.
We now define some sequences of step functions which we will use to approximate b and b̄. Let

  b^n_t ≜ Σ_{i=1}^{n²} ξ^n_i 1_{[T^n_i, T^n_{i+1})}(t),      B^n_t ≜ ∫_0^t b^n_s ds,

  b̄^n_t ≜ Σ_{i=1}^{n²} b̄_{T^n_i} 1_{[T^n_i, T^n_{i+1})}(t),   B̄^n_t ≜ ∫_0^t b̄^n_s ds,   and

  b^{Π(n)}_t ≜ Σ_{i=1}^{n²} b_{T^n_i} 1_{[T^n_i, T^n_{i+1})}(t).

conclude that ∆(B^n, T^n_i) is σ(G^n_i, ∆(X, T^n_i))-measurable. This means that we may apply Lem. 3.40 to the R^d×R^d-valued process (B, B^n). In particular, taking f : R^d×R^d → R to be the function f(x, y) = ‖x − y‖, we may apply
Once we have reduced the estimate to a calculation under P, we may use the fact that U is strongly independent of Z, so U is strongly independent of b̄, and we may again apply Lem. 4.20 to conclude that

  lim_{n→∞} E^P[∫_0^t ‖b̄_s − b̄^n_s‖ ds] = 0.
We are now almost done. We only need to estimate the difference between B^n and B̄^n. To do this, we define

  Ψ^n_t ≜ B^n_t − B̄^n_t = ∫_0^t (b^n_s − b̄^n_s) ds.
We have already shown that b̄^n converges to b̄ in L¹(P×λ_{[0,t]}), which implies that {(b̄^n, P×λ_{[0,t]})}_n is uniformly integrable. In particular, we may make (4.57) arbitrarily small by choosing sufficiently large M. But this means that {(b̄^n, P^n×λ_{[0,t]})}_n is uniformly integrable, so we have (4.56).
Set δ^n_i ≜ T^n_i − T^n_{i−1} = (U/n) 1_{i=1} + (1/n) 1_{i>1}. We then write

  E^n[ Ψ^n_{T^n_i} − Ψ^n_{T^n_{i−1}} | F'_{T^n_{i−1}} ]
    = δ^n_i E^n[ ξ^n_{i−1} − b̂(Z_{T^n_{i−1}}, T^n_{i−1}) | F'_{T^n_{i−1}} ]
    = δ^n_i E^P[ ξ^n_{i−1} − b̂(Z_{T^n_{i−1}}, T^n_{i−1}) | G^n_{i−1} ]
    = δ^n_i E^P[ b_{T^n_{i−1}} | Z_{T^n_{i−1}}, T^n_{i−1} ] − δ^n_i b̂(Z_{T^n_{i−1}}, T^n_{i−1})
    = 0.

The first equality follows from the F'_0-measurability of δ^n_i. The second equality follows from the H^n_i-measurability of ξ^n_{i−1} − b̂(Z_{T^n_{i−1}}, T^n_{i−1}) and property (b) of Thm. 3.6. The third equality follows from the P-equivalence of ξ^n_{i−1} and b_{T^n_{i−1}} and the definition of G^n_{i−1}. The final equality follows from Cor. 4.12 and the fact that U is strongly independent of b and Z under P. This means that {Ψ^n_{T^n_i}}_{0≤i≤n²} is a discrete-time martingale under P^n, and we may apply Lem. 4.25 to conclude that

  (4.58)   lim_{n→∞} E^n[ sup_{s≤t} ‖B^n_s − B̄^n_s‖ ] = lim_{n→∞} E^n[ sup_{s≤t} ‖Ψ^n_s‖ ] = 0   ∀t ∈ R+.
4.59 Remark. In this proof, we construct ξ^n_i such that P[ξ^n_i = b_{T^n_i}] = 1 for each i by taking the right derivative of B at T^n_i. In this remark, we want to emphasize that this does not imply that ξ^n_i and b_{T^n_i} are P^n-indistinguishable.
In general, the measures in the sequence {Pn }n are not equivalent to P. The
reason that ξin and bTin agree under P is that U is (strongly) independent of
B under P, and B is absolutely continuous. As a result, Tin is P-a.s. a point
at which B is differentiable, and the left and right derivatives agree at such
a point. Once we start constructing new measures, U and B are no longer
independent. In fact, we would expect that the characteristics quite often
have “kinks” at reset times as we reset the dynamics of the process at these
times, so we should not expect the left and right derivatives to agree at these
points.
In particular, if we resume the setting of Remark 4.32, we see that C
is P-a.s. linear, so C is P-a.s. differentiable for all t > 0; however, C has a
“kink” at each reset time under each Pn whenever we “reflip” and change
the variance accumulation rate. Also notice that the right derivative of C at the reset time t^n_i is equal to the derivative of C over the interval (t_i, t_{i+1}), while the left derivative of C at the reset time t^n_i is equal to the derivative of C over the previous interval (t_{i−1}, t_i).
To get the theorem announced in Section 2.2, we must show that we can
add a Wiener process to the stochastic basis produced in Thm. 4.30. This
involves moving to an extension, so we make the following definition.
4.60 Definition. Let X denote the canonical process on the space C(R+; R^r), let C denote the Borel σ-field on C(R+; R^r), let C⁰ = {σ(X^t)}_{t∈R+} denote the filtration generated by X, and let W denote Wiener's measure on C(R+; R^r). We refer to W ≜ (C(R+; R^r), C, C⁰, W) as Wiener's basis on C(R+; R^r).
and set
  (2.13)   Y_t ≜ ∫_0^t µ_s ds + ∫_0^t σ_s dW_s.
M̃, Ỹ, and Z̃ from Ω̃ to Ω̂ ≜ Ω̃×C(R+; R^{r_2}). Moving to the extension, we still have B̂_t = ∫_0^t b̂(Ẑ_s, s) ds and Ẑ = Φ(Ẑ_0, Ŷ), and Ẑ still has the same one-dimensional marginal distributions as Z. Thm. D.9 asserts the existence of an F̂-adapted, continuous, R^{r_2}-valued Wiener process Ŵ defined on Ω̂ such that

  M̂_t = ∫_0^t σ̂(Ẑ_s, s) dŴ_s.

As Ŷ = M̂ + B̂, we see that Ŷ satisfies (b), and we are done.
Appendix A
Galmarino’s Test
A.2 Remark. If T is the last time that X leaves an open set G, then X_T ∉ G. This means that T is also the last time that the process stopped at T leaves the set G. In particular, T = T(E, X^T), but T is clearly not a stopping time, as you must look into the future to determine if you will enter the set G again later. In particular, the property which must be checked in (b) is strictly stronger than the property which must be checked in (c).
⇐ Assume that property (b) holds. We need to show that this implies {T ≤ t} ∈ F'_t. By the previous case, it is sufficient to show that

Fix ω ∈ {T ≤ t} and set ω^t ≜ (E(ω), X^t(ω)). Then E(ω^t) = E(ω) and X_u(ω^t) = X_u(ω) for 0 ≤ u ≤ T(ω) ≤ t. Using the assumption, we see that T(ω^t) = T(ω), so T(ω^t) ≤ t and ω^t ∈ {T ≤ t}.
  A ≜ {T = t and Z = z} ∈ F'_t.

So ω ∈ A ⇒ (E(ω), X^t(ω)) ∈ A, but then Z(E(ω), X^t(ω)) = z. As ω was arbitrary, we conclude that Z = Z(E, X^t).
⇐ Suppose that Z is F'-measurable and that Z = Z(E, X^T). Fixing an arbitrary constant z, we need to show that A ≜ {Z ≤ z and T ≤ t} ∈ F'_t. Fix some ω' ∈ A and set ω ≜ (E(ω'), X^t(ω')). Then E(ω) = E(ω') and X_u(ω) = X_u(ω') for 0 ≤ u ≤ T(ω') ≤ t, so T(ω) = T(ω') by the previous equivalence. Then

  X^{T(ω)}(ω) = X^{T(ω')}(ω) = X^{T(ω')}(ω').
  X^T = X^S + Θ(∆(X^T, S), −S),

and observing that E, X^S, and S are all F'_S-measurable, we conclude that F'_T ⊂ σ(F'_S, ∆(X^T, S)). On the other hand, F'_S, X^T, and S are all F'_T-measurable. This means that we also have σ(F'_S, ∆(X^T, S)) ⊂ F'_T, completing the proof.
  Z(ω) = Z(ω^t)
       = g( E(ω^t), X^S(ω^t); ∆(X^{U_1}(ω^t), S(ω^t)) )
       = g( E(ω), X^S(ω); ∆(X^{U_2}(ω), S(ω)) ).

As ω is arbitrary, we have Z = g(E, X^S; ∆(X^{U_2}, S)), and the characterization given in the preceding lemma implies that Z ∈ σ(G, ∆(X^{U_2}, S)). We have then

  = σ(G, ∆(X^{U_2}, S)) ∩ F'_T ∈ σ(G, ∆(X^{U_2}, S)),

by the previous lemma. In particular, we know that T − S is σ(G, ∆(X^{U_2}, S))-measurable, so if we then write ∆(X^T, S) = ∇(∆(X^{U_2}, S), T − S), then it is clear that
for A.8. The opposite inclusion follows immediately from the previous lemma as

and use the fact that T − S ∈ σ(G, ∆(X^{U_2}, S)). To show that σ(G, ∆(X^{U_2}, S)) ⊂ σ(G, ∆(X^T, S), ∆(X^{U_2}, T)), we write

and use the fact that T − S ∈ σ(G, ∆(X^T, S)). We have now shown (A.9)
Appendix B
  lim_{n→∞} E^n[f(X^n)] = E^∞[f(X^∞)]

(a) X^n ⇒ X^∞, and
APPENDIX B. METRIC SPACE-VALUED RANDOM VARIABLES.
B.2 Lemma. Let (E, d) be a metric space and let {X n }n∈N and {Y n }n∈N be
collections of E-valued random variables. If X n ⇒ X ∞ and d(X n , Y n ) ⇒ 0
then Y n ⇒ X ∞ .
Proof. Fix any bounded uniformly continuous f : E → R, and write
|E^∞[f(X^∞)] − E^n[f(Y^n)]|
  ≤ |E^∞[f(X^∞)] − E^n[f(X^n)]| + E^n[|f(X^n) − f(Y^n)|].
as x(s) and x_n(s) are both in B(x^*(t) + 1). In particular, F(x_n) → F(x). The previous proof used the fact that closed, bounded subsets of R^d are compact.
B.7 Theorem (Lusin's Theorem). Let E be a metric space, let µ be a finite measure on E, and let f be a real-valued measurable function on E. Given any ε > 0, there exists a continuous function g such that µ({x : f(x) ≠ g(x)}) < ε.

Proof. See [Kec95], Thm. 17.12.
B.8 Lemma. Let (E, d) be a metric space, and let µ be a finite measure on that space. Then the collection of bounded, Lipschitz continuous functions on E is dense in L^p(E, µ) for any p ≥ 1.
Proof. Let f : E → R with 0 ≤ f ≤ M for some finite constant M. Fix any ε > 0 and choose continuous g with µ({x : f(x) ≠ g(x)}) < ε 2^{−p−1} M^{−p} using the last theorem. Without loss of generality, we may assume that 0 ≤ g ≤ M; otherwise, replace g with (0 ∨ g) ∧ M. Let g_n(x) ≜ inf_{y∈E} [g(y) + n d(y, x)], so we have 0 ≤ g_n ≤ g, g_n(x) → g(x) as n → ∞, and each g_n is Lipschitz continuous with constant n. Using bounded convergence, we may choose N so large that ∫_E |g − g_N|^p dµ < ε/2, and then

∫_E |f − g_N|^p dµ ≤ ∫_E |f − g|^p dµ + ∫_E |g_N − g|^p dµ ≤ ε.
The result follows for arbitrary f ∈ Lp (E, µ) by first truncating, and then
approximating the positive and negative parts.
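The infimal convolution g_n(x) = inf_y [g(y) + n d(y, x)] used in the proof can be checked numerically. A small sketch (the grid, the metric, and the particular g are our choices):

```python
# Lipschitz regularization g_n(x) = inf_y [g(y) + n*d(y,x)]: each g_n is
# n-Lipschitz, 0 <= g_n <= g, and g_n increases to g pointwise.
# Illustration on a finite grid for E = [0,1] with d(x,y) = |x - y|.

grid = [i / 100.0 for i in range(101)]
g = lambda x: 1.0 if x >= 0.5 else 0.0        # a bounded measurable g (our choice)

def g_n(n, x):
    return min(g(y) + n * abs(y - x) for y in grid)

for n in (1, 10, 100):
    vals = [g_n(n, x) for x in grid]
    # 0 <= g_n <= g (take y = x in the infimum):
    assert all(0.0 <= v <= g(x) + 1e-12 for v, x in zip(vals, grid))
    # Lipschitz constant at most n, checked on adjacent grid points:
    assert all(abs(a - b) <= n * 0.01 + 1e-12 for a, b in zip(vals, vals[1:]))

# pointwise convergence g_n(x) -> g(x) away from the discontinuity of g:
assert abs(g_n(100, 0.75) - 1.0) < 1e-9
```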
B.9 Corollary. Let E be a metric space, let µ be a finite measure on E, and let f : E → R^d be a measurable function. Then there exists a sequence of bounded, Lipschitz continuous functions {f_n} such that ∫ ‖f − f_n‖ dµ → 0.
Appendix C
FV and AC Processes
(C.4)   f(t) = f(a) + ∫_a^t f′(u) du   ∀ t ∈ [a, b].
then f ∈ AC([a, b]; R^d), f′ exists for Lebesgue-a.e. t ∈ [a, b], and f′ = g Lebesgue-a.e. on [a, b].
C.6 Lemma. Let f : [a, b] → R^d⊗R^d. If f(t) − f(s) ∈ S_+^d for all s, t ∈ [a, b] with s ≤ t, then f ∈ BV([a, b]; R^d⊗R^d).
Proof. It is clear that f^{ii} is nondecreasing and, therefore, of finite variation for each i ∈ {1, . . . , d}. Letting {e_i} denote the canonical basis on R^d, and
Taking the supremum over all such partitions, we see that f^{ij} ∈ BV([a, b]; R).
We have now shown that each component of f is of bounded variation on the
interval [a, b], so f must be of bounded variation on the interval [a, b].
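The diagonal-controls-off-diagonal mechanism behind this proof rests on the pointwise fact that a positive semidefinite increment D satisfies |D_ij| ≤ (D_ii + D_jj)/2, since (e_i ± e_j)^T D (e_i ± e_j) ≥ 0. A quick seeded check (the random PSD matrices are our choice):

```python
# For a PSD matrix D, |D_ij| <= (D_ii + D_jj)/2, which is why the diagonal
# variations control the off-diagonal ones in the lemma above.
# Seeded random check on matrices of the form D = A A^T (our choice).
import random

random.seed(0)
d = 4
violations = 0
for _ in range(100):
    A = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)]
    # D = A A^T is symmetric positive semidefinite
    D = [[sum(A[i][k] * A[j][k] for k in range(d)) for j in range(d)]
         for i in range(d)]
    for i in range(d):
        for j in range(d):
            if abs(D[i][j]) > (D[i][i] + D[j][j]) / 2.0 + 1e-9:
                violations += 1
assert violations == 0
```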
and set f_n(t) ≜ f(nt)/n, so f_n → 0 uniformly, but Var_t(f_n) = t for all n.
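The example can be made concrete by taking f to be the 2-periodic triangle wave with slope ±1 (an assumption on our part; any unit-slope zigzag works): then f_n(t) = f(nt)/n → 0 uniformly while the variation on [0, 1] stays equal to 1.

```python
# f_n(t) = f(nt)/n for the 2-periodic triangle wave f: f_n -> 0 uniformly,
# but Var_1(f_n) = 1 for every n (the choice of f is ours).

def tri(u):                       # triangle wave: slope +1 on [0,1], -1 on [1,2]
    u = u % 2.0
    return u if u <= 1.0 else 2.0 - u

def variation(h, pts):
    """Total variation of h over the partition pts."""
    return sum(abs(h(b) - h(a)) for a, b in zip(pts, pts[1:]))

for n in (4, 16, 64):
    f_n = lambda t, n=n: tri(n * t) / n
    pts = [k / (2.0 * n) for k in range(2 * n + 1)]   # grid containing all kinks
    assert max(abs(f_n(t)) for t in pts) <= 1.0 / n   # uniform convergence to 0
    assert abs(variation(f_n, pts) - 1.0) < 1e-12     # Var_1(f_n) = 1 for all n
```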
C.8 Corollary. If X is a continuous process, then Vart (X) is a (measurable)
random variable.
Proof. The composition of a measurable map and a lower semicontinuous
map is measurable.
C.9 Corollary. The set
If X and Y are two absolutely continuous processes which share the same law, then the derivatives of X and Y should share the same law in some sense. To make this precise, one must address the fact that the derivatives are only specified up to equivalence with respect to Lebesgue measure, and the following lemma gives one possible approach.
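The proof below recovers the derivative x from X through difference quotients. A toy scalar check of this device (X, x, and f are our own choices): the functional built from difference quotients of X converges to Z^f = ∫_0^1 f(x_s) ds.

```python
# Difference quotients n*(X(s+1/n) - X(s)) recover the derivative x_s, and the
# corresponding integral functionals converge to Z^f = \int_0^1 f(x_s) ds.
# Toy scalar example: X_t = sin(t), so x_t = cos(t); f(a) = a^2 (our choices).
import math

X = lambda t: math.sin(t)
f = lambda a: a * a

def Z_approx(n, m=2000):
    # left Riemann sum of f(n*(X(s+1/n) - X(s))) over [0,1]
    h = 1.0 / m
    return sum(f(n * (X(s + 1.0 / n) - X(s))) * h
               for s in (k * h for k in range(m)))

Z_exact = 0.5 + math.sin(2.0) / 4.0   # \int_0^1 cos^2(s) ds = 1/2 + sin(2)/4
errs = [abs(Z_approx(n) - Z_exact) for n in (4, 64, 1024)]
assert errs[2] < errs[0] and errs[2] < 1e-3
```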
C.12 Lemma. Let (E, E) be a metric space with its Borel σ-field, and let S^1 and S^2 be probability spaces with S^i = (Ω^i, F^i, P^i). Let S^i support a continuous, R^d-valued process X^i, a measurable, R^d-valued process x^i, and a continuous, E-valued process Y^i. Let f : R^d × E → R_+ be an R^d⊗E-measurable function, and define the R_+-valued random variables

(C.13)   Z^{f,i} ≜ ∫_0^∞ f(x_s^i, Y_s^i) ds   for i ∈ {1, 2}.

If

P^i[ X_t^i = ∫_0^t x_s^i ds ∀ t ∈ R_+ ] = 1   for i ∈ {1, 2},

and L(X^1, Y^1) = L(X^2, Y^2), then

(C.14)   L(X^1, Y^1, Z^{f,1}) = L(X^2, Y^2, Z^{f,2}).
Proof. Set

(C.15)   A^i ≜ { X_t^i = ∫_0^t x_s^i ds ∀ t ∈ R_+ }.

X^i is continuous and ∫_0^· x_u^i du is left-continuous (e.g., Rem. 1.7), so we may replace R_+ with Q_+ in (C.15) to see that A^i is measurable.
We first show that the lemma holds when f is of the form f(a, b) = e^{−t} g(a, b) for some bounded, R^d⊗E-measurable g, and we then show that the lemma holds as stated using monotone convergence.

Assume that f(a, b) = e^{−t} g(a, b) for some bounded, continuous g. Define φ^n : C(R_+; R^d) × R_+ → R^d by taking difference quotients, and set Z_n^{f,i} ≜ ∫_0^∞ f(φ_s^n ◦ X^i, Y_s^i) ds. As L(X^1, Y^1) = L(X^2, Y^2), we have
Set
where lim_n z_n ≠ z_∞ means that either the limit doesn't exist, or that the limit exists and differs from z_∞. If (∂/∂t) X_t^i(ω^i) exists and agrees with x_t^i(ω^i), then the difference quotients used to define φ_t^n ◦ X^i(ω^i) must converge to this value. In particular,

B^i(ω^i) ⊂ { t ∈ R_+ : (∂/∂t) X_t^i(ω^i) ≠ x_t^i(ω^i) }.

If ω^i ∈ A^i, then Thm. C.5 asserts that (∂/∂t) X_t^i(ω^i) exists and agrees with x_t^i(ω^i) for Lebesgue-a.e. t. In particular, λ(B^i(ω^i)) = 0 and
We now show that C is a monotone class. Assume that {g_n}_{n∈N} is a uniformly bounded sequence of functions in C that converge to some limiting function g pointwise on R^d × E. Setting f_n(a, b) ≜ e^{−t} g_n(a, b) for n ∈ N and f ≜ e^{−t} g(a, b), we have lim_n f_n(x_t^i, Y_t^i) = f(x_t^i, Y_t^i) for each t ∈ R_+, so we may apply dominated convergence to conclude that

lim_{n→∞} Z^{f_n,i} = lim_{n→∞} ∫_0^∞ f_n(x_u^i, Y_u^i) du = ∫_0^∞ f(x_u^i, Y_u^i) du = Z^{f,i}.

We have

(C.17)   L(X^1, Y^1, Z^{f_n,1}) = L(X^2, Y^2, Z^{f_n,2})   ∀ n ∈ N

from the definition of C, so we may conclude that (C.14) holds for this case.
Finally, we show that the result holds for nonnegative f. Setting f_n =
C.18 Corollary. Let (E, E) be a metric space with its Borel σ-field, and let S^1 and S^2 be probability spaces with S^i = (Ω^i, F^i, P^i). Let S^i support a continuous, R^d-valued process X^i, a measurable, R^d-valued process x^i, and a continuous, E-valued process Y^i. Let f : R^d × E → R^r be an R^d⊗E/R^r-measurable function, and define the R^r-valued random variables

(C.19)   Z^{f,i} ≜ ∫_0^∞ f(x_s^i, Y_s^i) ds   for i ∈ {1, 2}.

If

P^i[ X_t^i = ∫_0^t x_s^i ds ∀ t ∈ R_+ ] = 1   for i ∈ {1, 2},

and L(X^1, Y^1) = L(X^2, Y^2), then L(X^1, Y^1, Z^{f,1}) = L(X^2, Y^2, Z^{f,2}).
Proof. Write f = (f_i)_{1≤i≤r}, and let f_i^+ and f_i^− denote the positive and negative parts of f_i. We may apply Lem. C.12 to conclude that

L(X^1, Y^1, Z^{f_1^+,1}) = L(X^2, Y^2, Z^{f_1^+,2}),
Appendix D
Semimartingale Characteristics
(1.15)   X_t = X_0 + M_t + B_t,
Proof. Set B_t ≜ ∫_0^t µ_s ds and M_t ≜ ∫_0^t σ_s dW_s. It is then clear that X has the canonical decomposition X = X_0 + M + B. As ⟨M⟩_t = ∫_0^t σ_s σ_s^T ds, it is clear that B and ⟨M⟩ are both a.s. absolutely continuous, so X is an Itô process.
Going in the other direction, we will show that we can construct a Wiener
process W such that (D.1) holds. The first step is to find good versions of
the characteristics.
We now need to modify c so that it only takes values in S_+^d, and we follow
While the map A ↦ A^+ is measurable, it is not continuous. To see this, consider

A_n = ( 1 0 ; 0 1/n ) → A_∞ = ( 1 0 ; 0 0 );   then   A_n^+ = ( 1 0 ; 0 n )   but   A_∞^+ = A_∞.

[Con98] shows that the map A ↦ A^+ is in fact analytic when restricted to matrices of a common rank. Notice that AA^+ and A^+A are idempotent and self-adjoint, so they are orthogonal projections.
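The example can be checked directly with numpy's pseudo-inverse (the value n = 1000 is arbitrary):

```python
# Discontinuity of A -> A^+: A_n = diag(1, 1/n) -> A = diag(1, 0), but
# pinv(A_n) = diag(1, n) blows up, while pinv(A) = A.  AA^+ is an orthogonal
# projection (idempotent, symmetric), so its squared Frobenius norm is its rank.
import numpy as np

n = 1000
A_n = np.diag([1.0, 1.0 / n])
A = np.diag([1.0, 0.0])
assert np.allclose(np.linalg.pinv(A_n), np.diag([1.0, float(n)]))
assert np.allclose(np.linalg.pinv(A), A)

P = A @ np.linalg.pinv(A)                      # orthogonal projection AA^+
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
assert abs(np.linalg.norm(P, "fro") ** 2 - np.linalg.matrix_rank(A)) < 1e-9
```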
We recall the following definition from Section 4.3.
4.60 Definition. Let X denote the canonical process on the space C(R_+; R^r), let C denote the Borel σ-field on C(R_+; R^r), let C^0 = {σ(X^t)}_{t∈R_+} denote the filtration generated by X, and let W denote Wiener's measure on C(R_+; R^r). We refer to W ≜ (C(R_+; R^r), C, C^0, W) as Wiener's basis on C(R_+; R^r).
D.9 Theorem. Let B = (Ω, F, F^0, P) be a stochastic basis which supports an R^d-valued, P-a.s. continuous local martingale M and an adapted, R^d⊗R^r-valued process σ with

⟨M⟩_t = ∫_0^t σ_s σ_s^T ds.
As σ̂_s^+ σ̂_s is an orthogonal projection, we have ‖σ̂_s^+ σ̂_s‖ ≤ r and ‖I − σ̂_s^+ σ̂_s‖ ≤ r. Recall that we use the Frobenius norm on R^r⊗R^r rather than the operator norm, so ‖I‖ = r. This means that
‖ ∫_0^t σ̂_s^+ ⊗ σ̂_s^+ d⟨M̂⟩_s ‖ ≤ ∫_0^t ‖σ̂_s^+ σ̂_s‖ ds ≤ t r,   and

‖ ∫_0^t (I − σ̂_s^+ σ̂_s) ⊗ (I − σ̂_s^+ σ̂_s) d⟨X̂⟩_s ‖ ≤ ∫_0^t ‖I − σ̂_s^+ σ̂_s‖ ds ≤ t r,

so

Ŵ_t ≜ ∫_0^t σ̂_s^+ dM̂_s + ∫_0^t (I − σ̂_s^+ σ̂_s) dX̂_s
is well-defined. We have

⟨Ŵ⟩_t = ∫_0^t σ̂_s^+ ⊗ σ̂_s^+ d⟨M̂⟩_s + ∫_0^t (I − σ̂_s^+ σ̂_s) ⊗ (I − σ̂_s^+ σ̂_s) d⟨X̂⟩_s
      = ∫_0^t ( σ̂_s^+ σ̂_s + I − σ̂_s^+ σ̂_s ) ds
      = t I,

so Ŵ is a Wiener process by Lévy's characterization. Setting L̂_t ≜ ∫_0^t σ̂_s dŴ_s, we have

⟨L̂ − M̂⟩_t = ⟨L̂⟩_t + ⟨M̂⟩_t − 2⟨L̂, M̂⟩_t
  = 2 ∫_0^t σ̂_s σ̂_s^T ds − 2 ∫_0^t σ̂_s ⊗ I_d d⟨Ŵ, M̂⟩_s
  = 2 ∫_0^t σ̂_s σ̂_s^T ds − 2 ∫_0^t σ̂_s σ̂_s^+ ⊗ I_d d⟨M̂⟩_s
  = 2 ∫_0^t σ̂_s σ̂_s^T ds − 2 ∫_0^t σ̂_s σ̂_s^+ σ̂_s σ̂_s^T ds
  = 0.

In particular, L̂ = M̂, so we are done.
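The algebraic identities driving this computation, P + (I − P) = I for the projection P = σ̂^+σ̂ and σσ^+σ = σ, can be verified for a concrete rank-deficient matrix (our example) with numpy:

```python
# Why <W>_t = tI in the proof above: with P = sigma^+ sigma an orthogonal
# projection, P + (I - P)(I - P)^T = I, and sigma sigma^+ sigma = sigma kills
# the cross term in <L - M>.  Rank-deficient example matrix is our choice.
import numpy as np

sigma = np.array([[1.0, 2.0, 0.0],
                  [2.0, 4.0, 0.0],
                  [0.0, 0.0, 3.0]])           # rank 2
P = np.linalg.pinv(sigma) @ sigma             # sigma^+ sigma
assert np.allclose(P @ P, P) and np.allclose(P, P.T)

I = np.eye(3)
assert np.allclose(P + (I - P) @ (I - P).T, I)
# Moore-Penrose property used in the last line of the computation:
assert np.allclose(sigma @ np.linalg.pinv(sigma) @ sigma, sigma)
```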
such that X has the characteristics (B, C), where B_t ≜ ∫_0^t b_s ds and C_t ≜ ∫_0^t c_s ds. Define σ_t ≜ c_t^{1/2}. Lem. D.6 asserts that the map A ↦ A^{1/2} is measurable, so σ is also F-predictable. Defining M ≜ X − X_0 − B, we see that ⟨M⟩_t = ∫_0^t σ_s σ_s^T ds. The previous theorem asserts the existence of an F̂-adapted, R^d-valued, continuous Wiener process Ŵ such that M̂_t = ∫_0^t σ̂_s dŴ_s. Setting µ̂ ≜ b̂, we see that X̂ solves (D.11).
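The square root σ_t ≜ c_t^{1/2} can be sketched via the spectral decomposition, which is one measurable construction consistent with Lem. D.6 (the example matrix is ours):

```python
# PSD square root via eigendecomposition: for symmetric PSD c, the matrix
# s = V diag(sqrt(w)) V^T satisfies s s^T = c, as the definition of sigma
# above requires.
import numpy as np

def psd_sqrt(c):
    """Symmetric PSD square root of a symmetric PSD matrix."""
    w, V = np.linalg.eigh(c)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

c = np.array([[2.0, 1.0], [1.0, 2.0]])        # example PSD matrix (ours)
s = psd_sqrt(c)
assert np.allclose(s @ s, c)                  # sigma sigma^T = c
assert np.allclose(s, s.T)
```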
Appendix E
Rebolledo’s Criterion
[C2] For each t and ε > 0 there exists a δ > 0 such that

P^α[ sup_{s_1, s_2 ∈ A_{t,δ}} ‖X_{s_2}^α − X_{s_1}^α‖ ≥ ε ] ≤ ε.
[C4] For each t and ε > 0 there exists a δ > 0 such that

P^α[ ‖X_{T^α+u}^α − X_{T^α}^α‖ ≥ ε ] ≤ ε.
[C5] For each t and ε > 0 there exists a δ > 0 such that

P^α[ ‖X_{T^α}^α − X_{S^α}^α‖ ≥ ε ] ≤ ε.
E.2 Remark. We do not need to control the size of the first or the last
interval.
Proof. If u, v ∈ [t_{i−1}, t_i] for some i, the result is immediate. The only other possibility is that t_{i−1} ≤ u < t_i ≤ v < t_{i+1} for some i, but then |x(v) − x(u)| ≤ |x(v) − x(t_i)| + |x(t_i) − x(u)| ≤ 2ε.
We also observe that if [C5] holds, then we can bound the probability that the process makes a large number of large moves in a given time interval. This is the content of the following lemma.
E.3 Lemma. Suppose that P[|X_T − X_S| ≥ ε] ≤ ε for all stopping times S ≤ T ≤ t with T − S ≤ δ. If we define the stopping times T_0 ≜ 0 and T_i ≜ inf{ t > T_{i−1} : |X_t − X_{T_{i−1}}| ≥ ε }, then

(E.4)   (1 − t/(δn)) P[T_n ≤ t] ≤ ε.
so we have

n P[T_n ≤ t] ≤ Σ_{i=1}^n P[T_n ≤ t and T_i − T_{i−1} > δ] + nε
  ≤ Σ_{i=1}^n E[1_{T_n ≤ t} (T_i − T_{i−1})/δ] + nε
  = E[1_{T_n ≤ t} T_n/δ] + nε
  ≤ t P[T_n ≤ t]/δ + nε.
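The recursion T_0 ≜ 0, T_i ≜ inf{t > T_{i−1} : |X_t − X_{T_{i−1}}| ≥ ε} is easy to realize for a discrete path, together with the telescoping identity E[1_{T_n≤t} Σ(T_i − T_{i−1})] = E[1_{T_n≤t} T_n] used above (the path is our toy example):

```python
# Stopping times of Lem. E.3 for a discrete path: T_i is the first time the
# path moves by >= eps since T_{i-1}.  (Toy discrete-time version; our path.)

def move_times(path, eps):
    times = [0]
    for k in range(1, len(path)):
        if abs(path[k] - path[times[-1]]) >= eps:
            times.append(k)
    return times

path = [0.0, 0.2, 0.5, 0.6, 1.1, 1.0, 0.3]
T = move_times(path, 0.5)
assert T == [0, 2, 4, 6]
# telescoping identity: T_n is the sum of the increments T_i - T_{i-1}
assert T[-1] == sum(b - a for a, b in zip(T, T[1:]))
```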
s, we have

{|X_{T^α}^α − X_{S^α}^α| ≥ ε} ⊆ {|X_{T^α}^α − X_s^α| ≥ ε/2} ∪ {|X_s^α − X_{S^α}^α| ≥ ε/2}.

Then

δ P[|X_{T^α}^α − X_{S^α}^α| ≥ ε]
  = E[ 1{|X_{T^α}^α − X_{S^α}^α| ≥ ε} ∫_{T^α}^{T^α+δ} ds ]
  = ∫_0^∞ P[ |X_{T^α}^α − X_{S^α}^α| ≥ ε and s ∈ [T^α, T^α + δ] ] ds
  ≤ ∫_0^∞ ( P[ |X_{T^α}^α − X_s^α| ≥ ε/2 and s ∈ [T^α, T^α + δ] ]
          + P[ |X_s^α − X_{S^α}^α| ≥ ε/2 and s ∈ [S^α, S^α + 2δ] ] ) ds
  = ∫_0^δ P[ |X_{T^α+u}^α − X_{T^α}^α| ≥ ε/2 ] du + ∫_0^{2δ} P[ |X_{S^α+u}^α − X_{S^α}^α| ≥ ε/2 ] du
  ≤ δ ε,

so [C5] holds.
Finally, we will show that [C5] implies [C2], so assume [C5], fix some t and ε > 0, and define the stopping times T_0^α ≜ 0 and

B^α ≜ { T_i^α − T_{i−1}^α ≥ δ_2/2 for all i ≥ 1 with T_i^α ≤ t },
then we may apply Lemma E.1 to conclude that |X_{s_2}^α(ω^α) − X_{s_1}^α(ω^α)| ≤ ε for all s_1 ≤ s_2 ≤ t with s_2 − s_1 ≤ δ_2. In particular, we are done if P^α[B^α] ≥ 1 − ε.
Define the sets

C_i^α ≜ { T_i^α − T_{i−1}^α < δ_2/2 and T_i^α ≤ t }
      = { |X^α_{T_i^α ∧ (T_{i−1}^α + δ_2/2) ∧ t} − X^α_{T_{i−1}^α ∧ t}| ≥ ε/2 },

so that

(B^α)^c ⊂ ∪_{i=1}^∞ C_i^α ⊂ ∪_{i=1}^n C_i^α ∪ {T_n^α ≤ t},

and we have

P^α[(B^α)^c] ≤ Σ_{i=1}^n P^α[C_i^α] + P^α[T_n^α ≤ t] ≤ ε,
Using Chebyshev's inequality, the domination property, and the fact that A is increasing, we see that

P[X_t^* ≥ x] ≤ P[X_{S∧U∧t} ≥ x] + P[A_t ≥ a]
  ≤ (1/x) E[A_{S∧U∧t}] + P[A_∞ ≥ a]
  ≤ a/x + P[A_∞ ≥ a]

holds for all t. Letting t → ∞ through some sequence and noting that {X_∞^* > x} ⊂ {X_t^* ≥ x for some t}, we have

P[X_∞^* > x] ≤ a/x + P[A_∞ ≥ a].

But the right-hand side is continuous in x, so we really have

P[X_∞^* ≥ x] = lim_n P[X_∞^* > x − 1/n] ≤ lim_n ( a/(x − 1/n) + P[A_∞ ≥ a] ) = a/x + P[A_∞ ≥ a].
and
As this holds for all u ∈ [0, δ], we conclude that M satisfies condition [C4].
Appendix F
Convergence of Characteristics
x_0 ≜ −∞, and

x_i ≜ lim inf_{m→∞} inf( B^m ∩ (x_{i−1}, ∞) )   for i ≥ 1.

Then inf{x_i}_{i∈N} > −∞, x_i > x_{i−1} when x_{i−1} < ∞, and A = {x_i : x_i < ∞}.
Proof. We first show that if y ∈ R with f (y+) − f (y−) < ε, then we may
choose δ = δ(y) > 0 and M = M (y) ∈ N such that B m ∩ (y − δ, y + δ) = ∅
for all m ≥ M . Choose η so small that f (y+) − f (y−) < ε − η, and then
choose δ so small that f (y−) − f (y − δ) < η/2 and f (y + 2δ) − f (y+) < η/2.
Finally, choose M so large that 1/M < δ. If m ≥ M and q ∈ (y − δ, y + δ),
then {q, q + 1/m} ⊂ (y − δ, y + 2δ), so
In particular, B m ∩ (y − δ, y + δ) = ∅.
As f is bounded from below, the set A ∩ (−∞, n] contains a finite number
of points for each n. In particular, A contains a least element, and we may
linearly order the jumps of size at least ε as {yn }n<N = A with yi−1 < yi for
some N ∈ N. We now show that the inductive assumption xi−1 = yi−1 < ∞
implies that xi = yi for i < N .
We will argue by contradiction that xi ≥ yi , so assume that xi < yi , and
then choose qm ∈ B m ∩ (xi−1 , ∞) with qm → xi . If xi = xi−1 , then we may
129
APPENDIX F. CONVERGENCE OF CHARACTERISTICS
choose δ > xi−1 so close to xi−1 that we have f (z) − f (xi−1 +) < ε/2 when
z ∈ (xi−1 , δ). For sufficiently large m, we have {qm , qm + 1/m} ⊂ (xi−1 , δ),
but this contradicts the fact that f (qm + 1/m) − f (qm ) ≥ ε. Recall that
f is bounded below, so this argument is valid when xi−1 = −∞. On the
other hand, if xi ∈ (xi−1 , yi ), then we have f (xi +) − f (xi −) < ε, so we
may choose δ > 0 and M with B m ∩ (xi − δ, xi + δ) = ∅ for all m ≥ M .
This is again a contradiction. We have now shown that xi ≥ yi . Choosing
qm ∈ Q ∩ (xi−1 , ∞) with yi ∈ (qm , qm + 1/m), we see that
so T a (z0 ) ≤ t∞ . On the other hand, if s < t∞ , then there exists some n such
that tn ∈ (s, t∞ ), and this implies that z0 (s) ≤ an < a and T a (z0 ) > s. In
particular, T a (z0 ) = t∞ . We have now shown that the map a 7→ T a (z0 ) is
left continuous.
We now assume that the map a 7→ T a (z0 ) is continuous at the point
a = c, and we show that this implies that the map z 7→ T c (z) is continuous
130
APPENDIX F. CONVERGENCE OF CHARACTERISTICS
at the point z = z0 . Notice that this assumption implies that z0 does not
have a local max at t = T c (z0 ) and prevents the situation in Example F.2.
Let zn → z0 , fix ε > 0, and choose b < c < d with T c (z0 ) − T b (z0 ) <
ε and T d (z0 ) − T c (z0 ) < ε using the continuity of the map a 7→ T a (z0 ).
Set δ , min{c − b, d − c}/2, set t = T d (z0 ), and choose N so large that
sups≤t |z0 (s) − zn (s)| ≤ δ for all n ≥ N . Notice that this implies that zn (s) ≤
z_0(s) + δ ≤ b + δ < c for s ∈ [0, T^b(z_0)] and z_n(t) ≥ z_0(t) − δ = d − δ > c. In particular, T^c(z_n) ∈ [T^b(z_0), T^d(z_0)], so |T^c(z_0) − T^c(z_n)| ≤ ε.
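For a sampled path, the level-passage map a ↦ T^a(z) of this argument can be realized as a first-index search; it is nondecreasing in the level, and a small sup-norm perturbation of a strictly increasing path leaves T^c unchanged, mirroring the continuity statement (the path and levels are our toy choices):

```python
# Level-passage times for a sampled path: T^a(z) = first grid index with
# z >= a.  Monotone in a; stable under small uniform perturbations when the
# path is strictly increasing (no flat spots or local maxima at the level).

def T(z, a):
    return next((k for k, v in enumerate(z) if v >= a), len(z))

z0 = [0.0, 0.5, 1.0, 1.5, 2.0]               # strictly increasing path
zn = [v + 0.01 for v in z0]                  # sup-norm perturbation of size 0.01
assert T(z0, 0.4) <= T(z0, 0.9) <= T(z0, 1.4)   # nondecreasing in the level a
assert T(zn, 0.9) == T(z0, 0.9) == 2            # T^c unchanged by the perturbation
```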
Recursively define a sequence of functions ξ_i^n : C(R_+; R) → R by setting ξ_0^n(x) ≜ −∞, and then defining
for each i > 0. For fixed x, the map a 7→ T a (x) is left-continuous, nonde-
creasing, and nonnegative, so we may apply Lem. F.4 to conclude that
Defining

A ≜ ∪_{i,n} { a : P[ξ_i^n(Z) = a] > 0 },
This is again a contradiction, so we conclude that f (t) − f (s) ∈ S+d for all
s, t ∈ R+ with s ≤ t.
We now have everything that we need to prove the main theorem of this
subsection.
( X^n, B^n, C^n, X^n − B^n, (X^n − B^n)⊗(X^n − B^n) − C^n )
  ⇒ ( X, B, C, X − B, (X − B)⊗(X − B) − C )
Bibliography
[MQR07] Dilip Madan, Michael Qian Qian, and Yong Ren. Calibrating and pricing with embedded local volatility models. Risk, 20(9):138–143, 2007.

[Roy88] H. L. Royden. Real Analysis, 3rd edition. Macmillan, New York, 1988.