Digital Object Identifier (DOI) 10.1007/s004400000102
Probab. Theory Relat. Fields 119, 70–98 (2001)

Jean-Marc Azaïs · Mario Wschebor

On the regularity of the distribution of the maximum of one-parameter Gaussian processes

Received: 14 May 1999 / Revised version: 18 October 1999 / Published online: 14 December 2000 – © Springer-Verlag 2001
Abstract. The main result in this paper states that if a one-parameter Gaussian process has $C^{2k}$ paths and satisfies a non-degeneracy condition, then the distribution of its maximum on a compact interval is of class $C^k$. The methods leading to this theorem also permit giving bounds on the successive derivatives of the distribution of the maximum and studying their asymptotic behaviour as the level tends to infinity.
1. Introduction and main results
Let $X = \{X_t : t \in [0,1]\}$ be a stochastic process with real values and continuous paths, defined on a probability space $(\Omega, \mathcal{A}, P)$. The aim of this paper is to study the regularity of the distribution function of the random variable $M := \max\{X_t : t \in [0,1]\}$.

$X$ is said to satisfy the hypothesis $H_k$, $k$ a positive integer, if:
(1) $X$ is Gaussian;
(2) a.s. $X$ has $C^k$ sample paths;
(3) for every integer $n \geq 1$ and any set $t_1, \dots, t_n$ of pairwise different parameter values, the distribution of the random vector
$$\big(X_{t_1}, \dots, X_{t_n},\ X'_{t_1}, \dots, X'_{t_n},\ \dots,\ X^{(k)}_{t_1}, \dots, X^{(k)}_{t_n}\big)$$
is non-degenerate.
We denote by $m(t)$ and $r(s,t)$ the mean and covariance functions of $X$, that is,
$$m(t) := E(X_t), \qquad r(s,t) := E\big[(X_s - m(s))(X_t - m(t))\big],$$
and by $r_{ij} := \frac{\partial^{i+j}}{\partial s^i\,\partial t^j}\,r$ ($i, j = 0, 1, \dots$) the partial derivatives of $r$, whenever they exist.
Our main results are the following:
J.-M. Azaïs, M. Wschebor: Laboratoire de Statistique et Probabilités, UMR-CNRS C55830, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex 4, France. e-mail: azais@cict.fr

M. Wschebor: Centro de Matemática, Facultad de Ciencias, Universidad de la República, Calle Igua 4225, 11400 Montevideo, Uruguay. e-mail: wscheb@fcien.edu.uy

Mathematics Subject Classification (2000): 60G15, 60Gxx, 60E05

Key words or phrases: Extreme values – Distribution of the maximum
Theorem 1.1. Let $X = \{X_t : t \in [0,1]\}$ be a stochastic process satisfying $H_{2k}$. Denote by $F(u) = P(M \leq u)$ the distribution function of $M$. Then $F$ is of class $C^k$ and its successive derivatives can be computed by repeated application of Lemma 3.3.

Corollary 1.1. Let $X$ be a stochastic process verifying $H_{2k}$ and assume also that $E(X_t) = 0$ and $\operatorname{Var}(X_t) = 1$. Then, as $u \to +\infty$, $F^{(k)}(u)$ is equivalent to
$$(-1)^{k-1}\,\frac{u^k}{2\pi}\,e^{-u^2/2}\int_0^1 \sqrt{r_{11}(t,t)}\;dt. \tag{1}$$
The regularity of the distribution of $M$ has been the object of a number of papers. For general results when $X$ is Gaussian, one can mention Ylvisaker (1968), Tsirelson (1975), Weber (1985), Lifshits (1995), Diebolt and Posse (1996) and the references therein.

Theorem 1.1 appears to be a considerable extension, in the context of one-parameter Gaussian processes, of existing results on the regularity of the distribution of the maximum, which, as far as the authors know, do not go beyond a Lipschitz condition for the first derivative. For example, it implies that if the process is Gaussian with $C^\infty$ paths and satisfies the non-degeneracy condition for every $k = 1, 2, \dots$, then the distribution of the maximum is $C^\infty$. The same methods provide bounds for the successive derivatives, as well as their asymptotic behaviour as their argument tends to $+\infty$ (Corollary 1.1).

Except in Theorem 3.1, which contains a first upper bound for the density of $M$, we will assume $X$ to be Gaussian.

The proof of Theorem 1.1 is based upon the main Lemma 3.3. Before giving the proofs we have stated Theorem 3.2, which presents the result of this Lemma in the special case leading to the first derivative of the distribution function of $M$. As applications one gets upper and lower bounds for the density of $M$ under conditions that seem to be clearer and more general than in previous work (Diebolt and Posse, 1996). Some extra work is needed to extend the implicit formula (9) to non-Gaussian processes, but this seems to be feasible.

As for Theorem 1.1 for derivatives of order greater than 1, its statement and its proof rely heavily on the Gaussian character of the process.

The main result of this paper has been announced in the note by Azaïs and Wschebor (1999).
2. Crossings

Our methods are based on well-known formulae for the moments of the crossings of the paths of stochastic processes with fixed levels; these have been obtained by a variety of authors, starting from the fundamental work of S. O. Rice (1944–1945). In this section we review without proofs some of these and related results.

Let $f : I \to \mathbb{R}$ be a function defined on an interval $I$ of the real numbers, and let
$$C_u(f; I) := \{t \in I : f(t) = u\}, \qquad N_u(f; I) = \#\big(C_u(f; I)\big)$$
denote respectively the set of roots of the equation $f(t) = u$ on the interval $I$ and the number of these roots, with the convention $N_u(f; I) = +\infty$ if the set $C_u$ is infinite. $N_u(f; I)$ is called the number of "crossings" of $f$ with the level $u$ on the interval $I$.

In the same way, if $f$ is a differentiable function, the numbers of upcrossings and downcrossings of $f$ are defined by means of
$$U_u(f; I) := \#\big(\{t \in I : f(t) = u,\ f'(t) > 0\}\big), \qquad D_u(f; I) := \#\big(\{t \in I : f(t) = u,\ f'(t) < 0\}\big).$$
For a more general definition of these quantities see Cramér and Leadbetter (1967).

In what follows, $\|f\|_p$ is the norm of $f$ in $L^p(I, \lambda)$, $1 \leq p \leq +\infty$, $\lambda$ denoting Lebesgue measure. The joint density of a finite set of real-valued random variables $X_1, \dots, X_n$ at the point $(x_1, \dots, x_n)$ will be denoted $p_{X_1,\dots,X_n}(x_1, \dots, x_n)$ whenever it exists. $\varphi(t) := (2\pi)^{-1/2}\exp(-t^2/2)$ is the density of the standard normal distribution, and $\Phi(t) := \int_{-\infty}^t \varphi(u)\,du$ its distribution function.

The following proposition (sometimes called Kac's formula) is a common tool to count crossings.
Proposition 2.1. Let $f : I = [a,b] \to \mathbb{R}$ be of class $C^1$ with $f(a), f(b) \neq u$. If $f$ does not have local extrema with value $u$ on the interval $I$, then
$$N_u(f; I) = \lim_{\delta \to 0} \frac{1}{2\delta} \int_I \mathbb{1}_{\{|f(t) - u| < \delta\}}\,|f'(t)|\;dt.$$
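To make Proposition 2.1 concrete, here is a minimal numerical sketch (added for illustration; the function, the level and the grid are arbitrary choices, not taken from the paper). It approximates the right-hand side of Kac's formula for $f(t) = \sin 2\pi t$ and $u = 0.3$, which has exactly two crossings on $[0,1]$:

```python
import numpy as np

# Kac's formula: N_u(f; I) = lim_{delta->0} (1/(2*delta)) * int_I 1{|f-u|<delta} |f'(t)| dt.
# Illustrative choices: f(t) = sin(2*pi*t), u = 0.3, so N_u(f; [0,1]) = 2.
f  = lambda t: np.sin(2 * np.pi * t)
fp = lambda t: 2 * np.pi * np.cos(2 * np.pi * t)
u  = 0.3

t  = np.linspace(0.0, 1.0, 2_000_001)   # fine grid for the Riemann sum
dt = t[1] - t[0]
for delta in [0.1, 0.01, 0.001]:
    kac = np.sum((np.abs(f(t) - u) < delta) * np.abs(fp(t))) * dt / (2 * delta)
    print(delta, kac)                    # tends to 2 as delta decreases
```

Each crossing contributes mass $2\delta$ to the integral of $|f'|$ over the band $\{|f - u| < \delta\}$, which is why the normalization $1/(2\delta)$ recovers the count.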
For $m$ and $k$ positive integers, $k \leq m$, define the factorial $k$-th power of $m$ by
$$m^{[k]} := m(m-1)\cdots(m-k+1).$$
For other real values of $m$ and $k$ we put $m^{[k]} := 0$. If $k$ is an integer, $k \geq 1$, and $I$ an interval in the real line, the diagonal of $I^k$ is the set
$$D_k(I) := \{(t_1, \dots, t_k) \in I^k : t_j = t_h \ \text{for some pair } (j,h),\ j \neq h\}.$$
Finally, assume that $X = \{X_t : t \in \mathbb{R}\}$ is a real-valued stochastic process with $C^1$ paths. We set, for $(t_1, \dots, t_k) \in I^k \setminus D_k(I)$ and $x_j \in \mathbb{R}$ ($j = 1, \dots, k$):
$$A_{t_1,\dots,t_k}(x_1, \dots, x_k) := \int_{\mathbb{R}^k} \Big(\prod_{j=1}^k |x'_j|\Big)\,p_{X_{t_1},\dots,X_{t_k},X'_{t_1},\dots,X'_{t_k}}(x_1, \dots, x_k, x'_1, \dots, x'_k)\;dx'_1 \cdots dx'_k$$
and
$$I_k(x_1, \dots, x_k) := \int_{I^k} A_{t_1,\dots,t_k}(x_1, \dots, x_k)\;dt_1 \cdots dt_k,$$
where it is understood that the density in the integrand of the definition of $A_{t_1,\dots,t_k}(x_1, \dots, x_k)$ exists almost everywhere and that the integrals above can take the value $+\infty$.
Proposition 2.2. Let $k$ be a positive integer, $u$ a real number and $I$ a bounded interval in the line. With the above notations and conditions, let us assume that the process $X$ also satisfies the following conditions:

1. the density
$$p_{X_{t_1},\dots,X_{t_k},X'_{s_1},\dots,X'_{s_k}}(x_1, \dots, x_k, x'_1, \dots, x'_k)$$
exists for $(t_1, \dots, t_k), (s_1, \dots, s_k) \in I^k \setminus D_k(I)$ and is a continuous function of $(t_1, \dots, t_k)$ and of $x_1, \dots, x_k$ at the point $(u, \dots, u)$;

2. the function
$$(t_1, \dots, t_k, x_1, \dots, x_k) \rightsquigarrow A_{t_1,\dots,t_k}(x_1, \dots, x_k)$$
is continuous for $(t_1, \dots, t_k) \in I^k \setminus D_k(I)$ and $x_1, \dots, x_k$ belonging to a neighbourhood of $u$;

3. (additional technical condition)
$$\int_{\mathbb{R}^3} |x'_1|^{k-1}\,|x'_2\,x'_3|\;p_{X_{t_1},\dots,X_{t_k},X'_{s_1},X'_{s_2},X'_{t_1}}(x_1, \dots, x_k, x'_1, x'_2, x'_3)\;dx'_1\,dx'_2\,dx'_3 \to 0$$
as $|s_2 - t_1| \to 0$, uniformly as $(t_1, \dots, t_k)$ varies in a compact subset of $I^k \setminus D_k(I)$ and $x_1, \dots, x_k$ in a fixed neighbourhood of $u$.

Then
$$E\big((N_u(X; I))^{[k]}\big) = I_k(u, \dots, u). \tag{2}$$
Both members in (2) may be $+\infty$.
Remarks. (a) For $k = 1$, formula (2) becomes
$$E[N_u(X; I)] = \int_I dt \int_{-\infty}^{+\infty} |x'|\,p_{X_t,X'_t}(u, x')\;dx'. \tag{3}$$

(b) Simple variations of (3), valid under the same hypotheses, are
$$E[U_u(X; I)] = \int_I dt \int_0^{+\infty} x'\,p_{X_t,X'_t}(u, x')\;dx', \tag{4}$$
$$E[D_u(X; I)] = \int_I dt \int_{-\infty}^0 |x'|\,p_{X_t,X'_t}(u, x')\;dx'. \tag{5}$$
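Formula (4) can be checked by simulation in the stationary case. The following sketch is only an illustration (the covariance $r(\tau) = e^{-\tau^2/2}$, the grid and the sample sizes are our choices): for a centered stationary Gaussian process with unit variance, (4) gives $E[U_u(X; [0,T])] = \frac{T}{2\pi}\sqrt{-r''(0)/r(0)}\,e^{-u^2/2}$, and here $r(0) = -r''(0) = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, u, reps = 1.0, 400, 0.5, 10_000
t = np.linspace(0.0, T, n)
cov = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)        # r(s,t) = exp(-(s-t)^2/2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))            # small jitter for stability

paths = L @ rng.standard_normal((n, reps))                 # one column per sample path
ups = np.sum((paths[:-1] < u) & (paths[1:] >= u), axis=0)  # discretized upcrossings

theory = T / (2 * np.pi) * np.exp(-u ** 2 / 2)             # Rice's formula (4)
print(ups.mean(), theory)                                  # agree up to Monte Carlo error
```

The grid step must be small relative to the correlation length, otherwise the discrete count misses crossings.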
In the same way one can obtain formulae for the factorial moments of "marked" crossings, that is, crossings such that some additional condition holds true. For example, suppose $Y = \{Y_t : t \in \mathbb{R}\}$ is some other stochastic process with real values such that for every $t$ the triple $(Y_t, X_t, X'_t)$ admits a joint density, $-\infty \leq a < b \leq +\infty$, and
$$N^{a,b}_u(X; I) := \#\{t : t \in I,\ X_t = u,\ a < Y_t < b\}.$$
Then
$$E[N^{a,b}_u(X; I)] = \int_a^b dy \int_I dt \int_{-\infty}^{+\infty} |x'|\,p_{Y_t,X_t,X'_t}(y, u, x')\;dx'. \tag{6}$$
In particular, if $M^+_{a,b}$ is the number of strict local maxima of $X_{(\cdot)}$ on the interval $I$ such that the value of $X_{(\cdot)}$ lies in the interval $(a,b)$, then $M^+_{a,b} = D^{a,b}_0(X'; I)$ and
$$E[M^+_{a,b}] = \int_a^b dx \int_I dt \int_{-\infty}^0 |x''|\,p_{X_t,X'_t,X''_t}(x, 0, x'')\;dx''. \tag{7}$$
Sufficient conditions for the validity of (6) and (7) are similar to those of condition 3 above.
(c) Proofs of (2) for Gaussian processes satisfying certain conditions can be found in Belyaev (1966) and Cramér–Leadbetter (1967). Marcus (1977) contains various extensions. The present statement of Proposition 2.2 is from Wschebor (1985).

(d) It may be non-trivial to verify the hypotheses of Proposition 2.2. However, some general criteria are available. For example, if $X$ is a Gaussian process with $C^1$ paths and the densities
$$p_{X_{t_1},\dots,X_{t_k},X'_{s_1},\dots,X'_{s_k}}$$
are non-degenerate for $(t_1, \dots, t_k), (s_1, \dots, s_k) \in I^k \setminus D_k$, then conditions 1, 2, 3 of Proposition 2.2 hold true (cf. Wschebor, 1985, p. 37, for a proof and also for some manageable sufficient conditions in non-Gaussian cases).

(e) Another point related to Rice formulae is the non-existence of local extrema at a given level. We mention here two well-known results:
Proposition 2.3 (Bulinskaya, 1961). Suppose that $X$ has $C^1$ paths and that for every $t \in I$, $X_t$ has a density $p_{X_t}(x)$ bounded for $x$ in a neighbourhood of $u$. Then, almost surely, $X$ has no tangencies at the level $u$, in the sense that if
$$T^X_u := \{t \in I : X_t = u,\ X'_t = 0\},$$
then $P(T^X_u = \emptyset) = 1$.
Proposition 2.4 (Ylvisaker's Theorem, 1968). Suppose that $\{X_t : t \in T\}$ is a real-valued Gaussian process with continuous paths, defined on a compact separable topological space $T$, and that $\operatorname{Var}(X_t) > 0$ for every $t \in T$. Then, for each $u \in \mathbb{R}$, with probability 1 the function $t \rightsquigarrow X_t$ does not have any local extrema with value $u$.
3. Proofs and related results

Let $\xi$ be a random variable with values in $\mathbb{R}^k$ with a distribution that admits a density with respect to Lebesgue measure $\lambda$; the density will be denoted by $p_\xi(\cdot)$. Further, suppose $E$ is an event. It is clear that the measure
$$\mu_\xi(B; E) := P(\{\xi \in B\} \cap E),$$
defined on the Borel sets $B$ of $\mathbb{R}^k$, is also absolutely continuous with respect to $\lambda$. We will call the density of $\xi$ related to the event $E$ the Radon derivative
$$p_\xi(x; E) := \frac{d\mu_\xi(\cdot\,; E)}{d\lambda}(x).$$
It is obvious that $p_\xi(x; E) \leq p_\xi(x)$ for $\lambda$-almost every $x \in \mathbb{R}^k$.
Theorem 3.1. Suppose that $X$ has $C^2$ paths, that $X_t, X'_t, X''_t$ admit a joint density at every time $t$, that for every $t$ the derivative $X'_t$ has a bounded density $p_{X'_t}(\cdot)$, and that the function
$$I(x,z) := \int_0^1 dt \int_{-\infty}^0 |x''|\,p_{X_t,X'_t,X''_t}(x, z, x'')\;dx''$$
is uniformly continuous in $z$ for $(x,z)$ in some neighbourhood of $(u,0)$. Then the distribution of $M$ admits a density $p_M(\cdot)$ satisfying a.e.
$$p_M(u) \leq p_{X_0}(u; X'_0 < 0) + p_{X_1}(u; X'_1 > 0) + \int_0^1 dt \int_{-\infty}^0 |x''|\,p_{X_t,X'_t,X''_t}(u, 0, x'')\;dx''. \tag{8}$$
Proof. Let $u \in \mathbb{R}$ and $h > 0$. We have
$$P(M \leq u) - P(M \leq u-h) = P(u-h < M \leq u)$$
$$\leq P(u-h < X_0 \leq u,\ X'_0 < 0) + P(u-h < X_1 \leq u,\ X'_1 > 0) + P\big(M^+_{u-h,u} > 0\big),$$
where $M^+_{u-h,u} = M^+_{u-h,u}(0,1)$, since if $u-h < M \leq u$, then either the maximum occurs in the interior of the interval $[0,1]$, or at $0$ or $1$ with the derivative taking the indicated sign. Note that
$$P\big(M^+_{u-h,u} > 0\big) \leq E\big(M^+_{u-h,u}\big).$$
Using Proposition 2.3, with probability 1, $X'_{(\cdot)}$ has no tangencies at the level $0$; thus an upper bound for this expectation follows from Kac's formula:
$$M^+_{u-h,u} = \lim_{\delta \to 0} \frac{1}{2\delta} \int_0^1 \mathbb{1}_{\{X(t) \in [u-h,u]\}}\,\mathbb{1}_{\{X'(t) \in [-\delta,\delta]\}}\,\mathbb{1}_{\{X''(t) < 0\}}\,|X''(t)|\;dt \quad \text{a.s.},$$
which together with Fatou's lemma implies
$$E\big(M^+_{u-h,u}\big) \leq \liminf_{\delta \to 0} \frac{1}{2\delta} \int_{-\delta}^{\delta} dz \int_{u-h}^u I(x,z)\,dx = \int_{u-h}^u I(x,0)\,dx.$$
Combining this bound with the preceding one, we get
$$P(M \leq u) - P(M \leq u-h) \leq \int_{u-h}^u \big[p_{X_0}(x; X'_0 < 0) + p_{X_1}(x; X'_1 > 0) + I(x,0)\big]\;dx,$$
which gives the result.
In spite of the simplicity of its proof, this theorem provides the best known upper bound for Gaussian processes. In fact, in this case formula (8) is a simpler expression of the bound of Diebolt and Posse (1996). More precisely, if we use their parametrization by putting
$$m(t) = 0, \qquad r(s,t) = \frac{\Gamma(s,t)}{\sigma(s)\,\sigma(t)},$$
with
$$\Gamma(t,t) = 1, \quad \Gamma_{11}(t,t) = 1, \quad \Gamma_{10}(t,t) = 0, \quad \Gamma_{12}(t,t) = 0, \quad \Gamma_{02}(t,t) = -1,$$
then after some calculations we get exactly their bound $M(u)$ (their formula (9)) for the density of the maximum.
Let us illustrate formula (8) explicitly when the process is Gaussian, centered, and with unit variance. By means of a deterministic time change one can also assume that the process has "unit speed" ($\operatorname{Var}(X'_t) \equiv 1$). Let $L$ be the length of the new time interval. Clearly, for all $t$: $m(t) = 0$, $r(t,t) = 1$, $r_{11}(t,t) = 1$, $r_{10}(t,t) = 0$, $r_{12}(t,t) = 0$, $r_{02}(t,t) = -1$. Note that
$$Z \sim N(\mu, \sigma^2) \implies E(Z^-) = \sigma\,\varphi(\mu/\sigma) - \mu\,\Phi(-\mu/\sigma).$$
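This identity for the expected negative part $E(Z^-)$ is used repeatedly below; here is a quick Monte Carlo check (the parameter values are arbitrary illustrations):

```python
import numpy as np
from scipy.stats import norm

# Check E(Z^-) = sigma*phi(mu/sigma) - mu*Phi(-mu/sigma) for Z ~ N(mu, sigma^2).
rng = np.random.default_rng(1)
mu, sigma = 0.7, 1.3
z = rng.normal(mu, sigma, 1_000_000)
mc    = np.maximum(-z, 0.0).mean()                        # Monte Carlo estimate of E(Z^-)
exact = sigma * norm.pdf(mu / sigma) - mu * norm.cdf(-mu / sigma)
print(mc, exact)    # agree to about three decimals
```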
The formulae for regression imply that, conditionally on $X_t = u$, $X'_t = 0$, the second derivative $X''_t$ has expectation $-u$ and variance $r_{22}(t,t) - 1$. Formula (8) reduces to
$$p_M(u) \leq p^+(u) := \varphi(u)\Big[1 + (2\pi)^{-1/2} \int_0^L \big(C_g(t)\,\varphi\big(u/C_g(t)\big) + u\,\Phi\big(u/C_g(t)\big)\big)\;dt\Big],$$
with $C_g(t) := \sqrt{r_{22}(t,t) - 1}$.
As $x \to +\infty$, $\Phi(x) = 1 - \frac{\varphi(x)}{x} + \frac{\varphi(x)}{x^3} + O\Big(\frac{\varphi(x)}{x^5}\Big)$. This implies that
$$p^+(u) = \varphi(u)\Big[1 + L\,u\,(2\pi)^{-1/2} + (2\pi)^{-1/2}\,u^{-2} \int_0^L C_g^3(t)\,\varphi\big(u/C_g(t)\big)\;dt\Big] + O\big(u^{-4}\,\varphi(u/C^+)\big),$$
with $C^+ := \sup_{t \in [0,L]} C_g(t)$.

Furthermore, the exact equivalent of $p_M(u)$ as $u \to +\infty$ is
$$(2\pi)^{-1}\,u\,L\,\exp(-u^2/2),$$
as we will see in Corollary 1.1.
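The bound $p^+(u)$ is easy to evaluate by numerical quadrature. The sketch below assumes, purely for illustration, the constant value $r_{22}(t,t) = 3$, hence $C_g(t) \equiv \sqrt{2}$ (this holds, for example, for the stationary covariance $r(\tau) = e^{-\tau^2/2}$, which also has unit speed), and compares $p^+(u)$ with the asymptotic equivalent just stated:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

L_len = 1.0
C_g = lambda t: np.sqrt(2.0)          # assumed: sqrt(r22(t,t) - 1) with r22(t,t) = 3

def p_plus(u):
    # p+(u) = phi(u) * [1 + (2*pi)^(-1/2) * int_0^L (C_g*phi(u/C_g) + u*Phi(u/C_g)) dt]
    integrand = lambda t: C_g(t) * norm.pdf(u / C_g(t)) + u * norm.cdf(u / C_g(t))
    val, _ = quad(integrand, 0.0, L_len)
    return norm.pdf(u) * (1.0 + val / np.sqrt(2.0 * np.pi))

for u in [1.0, 2.0, 4.0, 6.0]:
    equiv = u * L_len * np.exp(-u ** 2 / 2) / (2 * np.pi)   # exact equivalent of p_M(u)
    print(u, p_plus(u), equiv)         # their ratio tends to 1 as u grows
```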
The following theorem is a special case of Lemma 3.3. We state it separately since we use it below to compare the results that follow from it with known results.

Theorem 3.2. Suppose that $X$ is a Gaussian process satisfying $H_2$. Then $M$ has a continuous density $p_M$ given for every $u$ by
$$p_M(u) = p_{X_0}(u^-; M \leq u) + p_{X_1}(u^-; M \leq u) + \int_0^1 dt \int_{-\infty}^0 |x''|\,p_{X_t,X'_t,X''_t}(u^-, 0, x''; M \leq u)\;dx'', \tag{9}$$
where $p_{X_0}(u^-; M \leq u) = \lim_{x \uparrow u} p_{X_0}(x; M \leq u)$ exists and is a continuous function of $u$, as are $p_{X_1}(u^-; M \leq u)$ and $p_{X_t,X'_t,X''_t}(u^-, 0, x''; M \leq u)$.
Again, we obtain a simpler version of the expression of Diebolt and Posse (1996).

In fact, Theorem 3.2 remains true if $X$ is Gaussian with $C^2$ paths and one requires only that $X_s, X_t, X'_t, X''_t$ admit a joint density for all $s, t \in [0,1]$, $s \neq t$.

If in formula (9) we replace the event $\{M \leq u\}$ by $\{X'_0 < 0\}$, $\{X'_1 > 0\}$ and $\Omega$, respectively, in each of the three terms of the right-hand member, we get the general upper bound given by (8).
To obtain lower bounds for $p_M(u)$, we use the following immediate inequalities:
$$P(M \leq u \mid X_0 = u) = P(M \leq u,\ X'_0 < 0 \mid X_0 = u) \geq P\big(X'_0 < 0 \mid X_0 = u\big) - E\big(U_u[0,1]\,\mathbb{1}_{\{X'_0 < 0\}} \mid X_0 = u\big).$$
In the same way,
$$P(M \leq u \mid X_1 = u) = P(M \leq u,\ X'_1 > 0 \mid X_1 = u) \geq P\big(X'_1 > 0 \mid X_1 = u\big) - E\big(D_u[0,1]\,\mathbb{1}_{\{X'_1 > 0\}} \mid X_1 = u\big),$$
and if $x'' < 0$:
$$P\big(M \leq u \mid X_t = u,\ X'_t = 0,\ X''_t = x''\big) \geq 1 - E\big(\big[D_u([0,t]) + U_u([t,1])\big] \mid X_t = u,\ X'_t = 0,\ X''_t = x''\big).$$
If we plug these lower bounds into formula (9) and replace the expectations of upcrossings and downcrossings by means of integral formulae of the type (4), (5), we obtain the lower bound:
$$p_M(u) \geq p_{X_0}(u; X'_0 < 0) + p_{X_1}(u; X'_1 > 0) + \int_0^1 dt \int_{-\infty}^0 |x''|\,p_{X_t,X'_t,X''_t}(u, 0, x'')\,dx''$$
$$-\ \int_0^1 ds \int_{-\infty}^0 dx' \int_0^{+\infty} x'_s\,p_{X_s,X'_s,X_0,X'_0}(u, x'_s, u, x')\,dx'_s$$
$$-\ \int_0^1 dt \int_{-\infty}^0 |x''| \Big[\int_0^t ds \int_{-\infty}^0 |x'|\,p_{X_s,X'_s,X_t,X'_t,X''_t}(u, x', u, 0, x'')\,dx'$$
$$\qquad\qquad +\ \int_t^1 ds \int_0^{+\infty} x'\,p_{X_s,X'_s,X_t,X'_t,X''_t}(u, x', u, 0, x'')\,dx'\Big]\,dx''. \tag{10}$$
Simpler expressions for (10), also adapted to numerical computation, can be found in Cierco (1996).

Finally, some sharper upper bounds for $p_M(u)$ are obtained when replacing the event $\{M > u\}$ by $\{X_0 + X_1 > 2u\}$, the probability of which can be expressed using the conditional expectation and variance of $X_0 + X_1$; we are only able to express these bounds in integral form.
We now turn to the proofs of our main results.
Lemma 3.1. (a) Let $Z$ be a stochastic process satisfying $H_k$ ($k \geq 2$) and $t$ a point in $[0,1]$. Define the Gaussian processes $Z^\nearrow, Z^\searrow, Z^t$ by means of the orthogonal decompositions:
$$Z_s = a^\nearrow(s)\,Z_0 + s\,Z^\nearrow_s, \qquad s \in (0,1], \tag{11}$$
$$Z_s = a^\searrow(s)\,Z_1 + (1-s)\,Z^\searrow_s, \qquad s \in [0,1), \tag{12}$$
$$Z_s = b_t(s)\,Z_t + c_t(s)\,Z'_t + \frac{(s-t)^2}{2}\,Z^t_s, \qquad s \in [0,1],\ s \neq t. \tag{13}$$
Then the processes $Z^\nearrow, Z^\searrow, Z^t$ can be extended (defined at $s = 0$, $s = 1$, $s = t$ respectively) so that they become pathwise continuous and satisfy $H_{k-1}$, $H_{k-1}$, $H_{k-2}$ respectively.

(b) Let $f$ be any function of class $C^k$. When there is no ambiguity on the process $Z$, we will define $f^\nearrow, f^\searrow, f^t$ in the same manner, putting $f$ instead of $Z$ in (11), (12), (13), but still keeping the regression coefficients corresponding to $Z$. Then $f^\nearrow, f^\searrow, f^t$ can be extended by continuity in the same way to functions in $C^{k-1}$, $C^{k-1}$, $C^{k-2}$ respectively.
(c) Let $m$ be a positive integer, suppose $Z$ satisfies $H_{2m+1}$, and let $t_1, \dots, t_m$ belong to $[0,1] \cup \{\nearrow, \searrow\}$. Denote by $Z^{t_1,\dots,t_m}$ the process obtained by repeated application of the operation of part (a) of this Lemma, that is,
$$Z^{t_1,\dots,t_m}_s = \big(Z^{t_1,\dots,t_{m-1}}\big)^{t_m}_s.$$
Denote by $s_1, \dots, s_p$ ($p \leq m$) the ordered $p$-tuple of the elements of $t_1, \dots, t_m$ that belong to $[0,1]$ (i.e. that are neither $\nearrow$ nor $\searrow$). Then, a.s., for fixed values of the symbols $\nearrow, \searrow$, the map
$$(s_1, \dots, s_p, s) \rightsquigarrow \Big(Z^{t_1,\dots,t_m}_s,\ \big(Z^{t_1,\dots,t_m}\big)'_s\Big)$$
is continuous.
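The regression coefficients in (13) solve a $2 \times 2$ linear system built from the covariance function and its derivatives: for a centered process, $\big(b_t(s), c_t(s)\big)$ is the solution of $\operatorname{Var}(Z_t, Z'_t)\,\theta = \big(r(s,t), r_{01}(s,t)\big)^{\top}$. The sketch below is an illustration with the Gaussian covariance (our choice, not a construction from the paper):

```python
import numpy as np

# Covariance r(s,t) = exp(-(s-t)^2/2) and the derivatives needed for (13).
r   = lambda s, t: np.exp(-0.5 * (s - t) ** 2)
r01 = lambda s, t: (s - t) * np.exp(-0.5 * (s - t) ** 2)               # dr/dt
r11 = lambda s, t: (1.0 - (s - t) ** 2) * np.exp(-0.5 * (s - t) ** 2)  # d2r/(ds dt)

def regression_coeffs(s, t):
    gram = np.array([[r(t, t),   r01(t, t)],
                     [r01(t, t), r11(t, t)]])      # Var(Z_t, Z'_t)
    rhs = np.array([r(s, t), r01(s, t)])           # Cov(Z_s, (Z_t, Z'_t))
    return np.linalg.solve(gram, rhs)              # (b_t(s), c_t(s))

b, c = regression_coeffs(0.6, 0.5)
print(b, c)   # near s = t: b ~ 1 and c ~ s - t, consistent with a Taylor expansion
```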
Proof. (a) and (b) follow in a direct way, computing the regression coefficients $a^\nearrow(s)$, $a^\searrow(s)$, $b_t(s)$, $c_t(s)$ and substituting into formulae (11), (12), (13). Note that (b) also follows from (a) by applying it to $Z + f$ and to $Z$. We now prove (c), which is a consequence of the following.

Suppose $Z(t_1, \dots, t_k)$ is a Gaussian field with $C^p$ sample paths ($p \geq 2$) defined on $[0,1]^k$, with no degeneracy in the same sense as in condition (3) of the definition of hypothesis $H_k$ for one-parameter processes. Then the Gaussian fields defined by means of
$$Z^\nearrow(t_1, \dots, t_k) = t_k^{-1}\big[Z(t_1, \dots, t_{k-1}, t_k) - a^\nearrow(t_1, \dots, t_k)\,Z(t_1, \dots, t_{k-1}, 0)\big] \quad \text{for } t_k \neq 0,$$
$$Z^\searrow(t_1, \dots, t_k) = (1-t_k)^{-1}\big[Z(t_1, \dots, t_{k-1}, t_k) - a^\searrow(t_1, \dots, t_k)\,Z(t_1, \dots, t_{k-1}, 1)\big] \quad \text{for } t_k \neq 1,$$
$$\widetilde{Z}(t_1, \dots, t_k, t_{k+1}) = 2\,(t_{k+1} - t_k)^{-2}\Big(Z(t_1, \dots, t_{k-1}, t_{k+1}) - b(t_1, \dots, t_k, t_{k+1})\,Z(t_1, \dots, t_k) - c(t_1, \dots, t_k, t_{k+1})\,\frac{\partial Z}{\partial t_k}(t_1, \dots, t_k)\Big) \quad \text{for } t_{k+1} \neq t_k,$$
can be extended to $[0,1]^k$ (respectively $[0,1]^k$, $[0,1]^{k+1}$) into fields with paths in $C^{p-1}$ (respectively $C^{p-1}$, $C^{p-2}$). In the above formulae:

- $a^\nearrow(t_1, \dots, t_k)$ is the regression coefficient of $Z(t_1, \dots, t_k)$ on $Z(t_1, \dots, t_{k-1}, 0)$;
- $a^\searrow(t_1, \dots, t_k)$ is the regression coefficient of $Z(t_1, \dots, t_k)$ on $Z(t_1, \dots, t_{k-1}, 1)$;
- $b(t_1, \dots, t_k, t_{k+1})$ and $c(t_1, \dots, t_k, t_{k+1})$ are the regression coefficients of $Z(t_1, \dots, t_{k-1}, t_{k+1})$ on the pair $\big(Z(t_1, \dots, t_k),\ \frac{\partial Z}{\partial t_k}(t_1, \dots, t_k)\big)$.
Let us prove the statement on $\widetilde{Z}$; the other two are simpler. Denote by $V$ the subspace of $L^2(\Omega, \mathcal{A}, P)$ generated by the pair $\big(Z(t_1, \dots, t_k), \frac{\partial Z}{\partial t_k}(t_1, \dots, t_k)\big)$, and by $\Pi_{V^\perp}$ the version of the orthogonal projection of $L^2(\Omega, \mathcal{A}, P)$ onto the orthogonal complement of $V$ defined by means of
$$\Pi_{V^\perp}(Y) := Y - \Big[b\,Z(t_1, \dots, t_k) + c\,\frac{\partial Z}{\partial t_k}(t_1, \dots, t_k)\Big],$$
where $b$ and $c$ are the regression coefficients of $Y$ on the pair $\big(Z(t_1, \dots, t_k), \frac{\partial Z}{\partial t_k}(t_1, \dots, t_k)\big)$.

Note that if $\{Y_\alpha : \alpha \in A\}$ is a random field with continuous paths such that $\alpha \rightsquigarrow Y_\alpha$ is continuous in $L^2(\Omega, \mathcal{A}, P)$, then a.s. the map $(\alpha, t_1, \dots, t_k) \rightsquigarrow \Pi_{V^\perp}(Y_\alpha)$ is continuous.

From the definition:
$$\widetilde{Z}(t_1, \dots, t_k, t_{k+1}) = 2\,(t_{k+1} - t_k)^{-2}\,\Pi_{V^\perp}\big(Z(t_1, \dots, t_{k-1}, t_{k+1})\big).$$
On the other hand, by Taylor's formula,
$$Z(t_1, \dots, t_{k-1}, t_{k+1}) = Z(t_1, \dots, t_k) + (t_{k+1} - t_k)\,\frac{\partial Z}{\partial t_k}(t_1, \dots, t_k) + R_2(t_1, \dots, t_k, t_{k+1})$$
with
$$R_2(t_1, \dots, t_k, t_{k+1}) = \int_{t_k}^{t_{k+1}} \frac{\partial^2 Z}{\partial t_k^2}(t_1, \dots, t_{k-1}, \tau)\,(t_{k+1} - \tau)\;d\tau,$$
so that
$$\widetilde{Z}(t_1, \dots, t_k, t_{k+1}) = \Pi_{V^\perp}\Big(2\,(t_{k+1} - t_k)^{-2}\,R_2(t_1, \dots, t_k, t_{k+1})\Big). \tag{14}$$
It is clear that the paths of the random field $\widetilde{Z}$ are $p-1$ times continuously differentiable for $t_{k+1} \neq t_k$. Relation (14) shows that they have a continuous extension to $[0,1]^{k+1}$ with $\widetilde{Z}(t_1, \dots, t_k, t_k) = \Pi_{V^\perp}\big(\frac{\partial^2 Z}{\partial t_k^2}(t_1, \dots, t_k)\big)$. In fact,
$$\Pi_{V^\perp}\Big(2\,(s_{k+1} - s_k)^{-2}\,R_2(s_1, \dots, s_k, s_{k+1})\Big) = 2\,(s_{k+1} - s_k)^{-2} \int_{s_k}^{s_{k+1}} \Pi_{V^\perp}\Big(\frac{\partial^2 Z}{\partial t_k^2}(s_1, \dots, s_{k-1}, \tau)\Big)\,(s_{k+1} - \tau)\;d\tau.$$
According to our choice of the version of the orthogonal projection $\Pi_{V^\perp}$, a.s. the integrand is a continuous function of the parameters therein, so that a.s.
$$\widetilde{Z}(s_1, \dots, s_k, s_{k+1}) \to \Pi_{V^\perp}\Big(\frac{\partial^2 Z}{\partial t_k^2}(t_1, \dots, t_k)\Big) \quad \text{as } (s_1, \dots, s_k, s_{k+1}) \to (t_1, \dots, t_k, t_k).$$
This proves (c). In the same way, when $p \geq 3$, we obtain the continuity of the partial derivatives of $\widetilde{Z}$ up to the order $p-2$.
The following lemma has its own interest besides being required in our proof of Lemma 3.3. It is a slight improvement, in the case of one-parameter processes, of Lemma 4.3, p. 76, in Piterbarg (1996).

Lemma 3.2. Suppose that $X$ is a Gaussian process with $C^3$ paths and that for all $s \neq t$ the distributions of $(X_s, X'_s, X_t, X'_t)$ and of $(X_t, X'_t, X^{(2)}_t, X^{(3)}_t)$ do not degenerate. Then there exists a constant $K$ (depending on the process) such that
$$p_{X_s,X_t,X'_s,X'_t}(x_1, x_2, x'_1, x'_2) \leq K\,(t-s)^{-4}$$
for all $x_1, x_2, x'_1, x'_2 \in \mathbb{R}$ and all $s, t \in [0,1]$, $s \neq t$.
Proof.
$$p_{X_s,X_t,X'_s,X'_t}(x_1, x_2, x'_1, x'_2) \leq (2\pi)^{-2}\big[\operatorname{Det}\operatorname{Var}(X_s, X_t, X'_s, X'_t)\big]^{-1/2},$$
where $\operatorname{Det}\operatorname{Var}$ stands for the determinant of the variance matrix. Since by hypothesis the distribution does not degenerate outside the diagonal $s = t$, the conclusion of the lemma is trivially true on a set of the form $\{|s-t| > \delta\}$, $\delta > 0$. By a compactness argument, it is sufficient to prove it for $s, t$ in a neighbourhood of $(t_0, t_0)$ for each $t_0 \in [0,1]$. For this last purpose we use a generalization of a technique employed by Belyaev (1966). Since the determinant is invariant under adding linear combinations of rows (resp. columns) to another row (resp. column),
$$\operatorname{Det}\operatorname{Var}(X_s, X_t, X'_s, X'_t) = \operatorname{Det}\operatorname{Var}\big(X_s, X'_s, \widetilde{X}^{(2)}_s, \widetilde{X}^{(3)}_s\big),$$
with
$$\widetilde{X}^{(2)}_s = X_t - X_s - (t-s)\,X'_s \simeq \frac{(t-s)^2}{2}\,X^{(2)}_{t_0}, \qquad \widetilde{X}^{(3)}_s = X'_t - X'_s - \frac{2}{t-s}\,\widetilde{X}^{(2)}_s \simeq \frac{(t-s)^2}{6}\,X^{(3)}_{t_0}.$$
The equivalences refer to $(s,t) \to (t_0, t_0)$. Since the paths of $X$ are of class $C^3$, the vector
$$\big(X_s,\ X'_s,\ 2\,(t-s)^{-2}\,\widetilde{X}^{(2)}_s,\ 6\,(t-s)^{-2}\,\widetilde{X}^{(3)}_s\big)$$
tends almost surely to $\big(X_{t_0}, X'_{t_0}, X^{(2)}_{t_0}, X^{(3)}_{t_0}\big)$ as $(s,t) \to (t_0, t_0)$. This implies the convergence of the variance matrices. Hence
$$\operatorname{Det}\operatorname{Var}(X_s, X_t, X'_s, X'_t) \simeq \frac{(t-s)^8}{144}\,\operatorname{Det}\operatorname{Var}\big(X_{t_0}, X'_{t_0}, X^{(2)}_{t_0}, X^{(3)}_{t_0}\big),$$
which ends the proof.
Remark. The proof of Lemma 3.2 shows that the density of $(X_s, X'_s, X_t, X'_t)$ exists for $|s-t|$ sufficiently small as soon as the process has $C^3$ paths and for every $t$ the distribution of $(X_t, X'_t, X''_t, X^{(3)}_t)$ does not degenerate. Hence, under this hypothesis alone, the conclusion of the lemma holds true for $0 < |s-t| < \delta$ and some $\delta > 0$.
Lemma 3.3. Suppose $Z = \{Z_t : t \in [0,1]\}$ is a stochastic process that verifies $H_2$. Define
$$F_v(u) = E\big(\xi_v\,\mathbb{1}_{A_u}\big),$$
where
$$A_u = A_u(Z, \beta) = \{Z_t \leq \beta(t)\,u \ \text{for all } t \in [0,1]\},$$
$\beta(\cdot)$ is a real-valued $C^2$ function defined on $[0,1]$,
$$\xi_v = G\big(Z_{t_1} - \beta(t_1)\,v,\ \dots,\ Z_{t_m} - \beta(t_m)\,v\big)$$
for some positive integer $m$, $t_1, \dots, t_m \in [0,1]$, $v \in \mathbb{R}$, and some $C^\infty$ function $G : \mathbb{R}^m \to \mathbb{R}$ having at most polynomial growth at $\infty$, that is, $|G(x)| \leq C(1 + \|x\|^p)$ for some positive constants $C, p$ and all $x \in \mathbb{R}^m$ ($\|\cdot\|$ stands for the Euclidean norm).
Then, for each $v \in \mathbb{R}$, $F_v$ is of class $C^1$ and its derivative is a continuous function of the pair $(u,v)$ that can be written in the form:
$$F'_v(u) = \beta(0)\,E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big)\,p_{Z_0}\big(\beta(0)\,u\big) + \beta(1)\,E\big(\xi^\searrow_{v,u}\,\mathbb{1}_{A_u(Z^\searrow,\beta^\searrow)}\big)\,p_{Z_1}\big(\beta(1)\,u\big)$$
$$-\ \int_0^1 \beta(t)\,E\Big(\xi^t_{v,u}\,\big(Z^t_t - \beta^t(t)\,u\big)\,\mathbb{1}_{A_u(Z^t,\beta^t)}\Big)\,p_{Z_t,Z'_t}\big(\beta(t)\,u,\ \beta'(t)\,u\big)\;dt, \tag{15}$$
where the processes $Z^\nearrow, Z^\searrow, Z^t$ and the functions $\beta^\nearrow, \beta^\searrow, \beta^t$ are as in Lemma 3.1 and the random variables $\xi^\nearrow_{v,u}, \xi^\searrow_{v,u}, \xi^t_{v,u}$ are given by:
$$\xi^\nearrow_{v,u} = G\Big(t_1\big(Z^\nearrow_{t_1} - \beta^\nearrow(t_1)\,u\big) + \beta(t_1)(u-v),\ \dots,\ t_m\big(Z^\nearrow_{t_m} - \beta^\nearrow(t_m)\,u\big) + \beta(t_m)(u-v)\Big),$$
$$\xi^\searrow_{v,u} = G\Big((1-t_1)\big(Z^\searrow_{t_1} - \beta^\searrow(t_1)\,u\big) + \beta(t_1)(u-v),\ \dots,\ (1-t_m)\big(Z^\searrow_{t_m} - \beta^\searrow(t_m)\,u\big) + \beta(t_m)(u-v)\Big),$$
$$\xi^t_{v,u} = G\Big(\frac{(t_1-t)^2}{2}\big(Z^t_{t_1} - \beta^t(t_1)\,u\big) + \beta(t_1)(u-v),\ \dots,\ \frac{(t_m-t)^2}{2}\big(Z^t_{t_m} - \beta^t(t_m)\,u\big) + \beta(t_m)(u-v)\Big).$$
Proof. We start by showing that the arguments of Theorem 3.1 can be extended to our present case to establish that $F_v$ is absolutely continuous. This proof already contains a first approximation to the main ideas leading to the proof of the lemma.

Step 1. Assume, with no loss of generality, that $u \geq 0$, and write for $h > 0$:
$$F_v(u) - F_v(u-h) = E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big) - E\big(\xi_v\,\mathbb{1}_{A_{u-h} \setminus A_u}\big). \tag{16}$$
Note that
$$A_u \setminus A_{u-h} \subset \{\beta(0)(u-h) < Z_0 \leq \beta(0)\,u,\ \beta(0) > 0\} \cup \{\beta(1)(u-h) < Z_1 \leq \beta(1)\,u,\ \beta(1) > 0\} \cup \big\{M^{(1)}_{u-h,u} \geq 1\big\}, \tag{17}$$
where
$$M^{(1)}_{u-h,u} = \#\big\{t \in (0,1) : \beta(t) \geq 0,\ \text{the function } Z_{(\cdot)} - \beta(\cdot)(u-h) \text{ has a local maximum at } t \text{ with value falling in the interval } [0, \beta(t)h]\big\}.$$
Using the Markov inequality,
$$P\big(M^{(1)}_{u-h,u} \geq 1\big) \leq E\big(M^{(1)}_{u-h,u}\big),$$
and the formula for the expectation of the number of local maxima applied to the process $t \rightsquigarrow Z_t - \beta(t)(u-h)$ implies
$$\big|E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big)\big| \leq \mathbb{1}_{\{\beta(0)>0\}} \int_{\beta(0)(u-h)}^{\beta(0)u} E\big(|\xi_v| \mid Z_0 = x\big)\,p_{Z_0}(x)\,dx + \mathbb{1}_{\{\beta(1)>0\}} \int_{\beta(1)(u-h)}^{\beta(1)u} E\big(|\xi_v| \mid Z_1 = x\big)\,p_{Z_1}(x)\,dx$$
$$+\ \int_0^1 \mathbb{1}_{\{\beta(t)>0\}}\,dt \int_0^{\beta(t)h} E\Big(|\xi_v|\,\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, V_2 = (x,0)\Big)\,p_{V_2}(x,0)\,dx, \tag{18}$$
where $V_2$ is the random vector $\big(Z_t - \beta(t)(u-h),\ Z'_t - \beta'(t)(u-h)\big)$.

Now, the usual regression formulae and the form of $\xi_v$ imply that
$$\big|E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big)\big| \leq (\text{const})\cdot h,$$
where the constant may depend on $u$ but is locally bounded as a function of $u$.
An analogous computation, replacing $M^{(1)}_{u-h,u}$ by
$$M^{(2)}_{u-h,u} = \#\big\{t \in (0,1) : \beta(t) \leq 0,\ \text{the function } Z_{(\cdot)} - \beta(\cdot)\,u \text{ has a local maximum at } t,\ Z_t - \beta(t)\,u \in [0, -\beta(t)h]\big\},$$
leads to a similar bound for the second term in (16). It follows that
$$|F_v(u) - F_v(u-h)| \leq (\text{const})\cdot h,$$
where the constant is locally bounded as a function of $u$. This shows that $F_v$ is absolutely continuous.

The proof of the Lemma is in fact a refinement of this type of argument. We will replace the rough inclusion (17) and its consequence (18) by an equality. In the two following steps we will assume the additional hypothesis that $Z$ verifies $H_k$ for every $k$ and that $\beta(\cdot)$ is a $C^\infty$ function.
Step 2. Notice that
$$A_u \setminus A_{u-h} = A_u \cap \Big[\{\beta(0)(u-h) < Z_0 \leq \beta(0)\,u,\ \beta(0) > 0\} \cup \{\beta(1)(u-h) < Z_1 \leq \beta(1)\,u,\ \beta(1) > 0\} \cup \big\{M^{(1)}_{u-h,u} \geq 1\big\}\Big]. \tag{19}$$
We use the obvious inequality, valid for any three events $F_1, F_2, F_3$:
$$\sum_{j=1}^3 \mathbb{1}_{F_j} - \mathbb{1}_{\bigcup_{j=1}^3 F_j} \leq \mathbb{1}_{F_1 \cap F_2} + \mathbb{1}_{F_2 \cap F_3} + \mathbb{1}_{F_3 \cap F_1},$$
to write the first term in (16) as
$$E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big) = E\big(\xi_v\,\mathbb{1}_{A_u}\,\mathbb{1}_{\{\beta(0)(u-h) < Z_0 \leq \beta(0)u\}}\big)\,\mathbb{1}_{\{\beta(0)>0\}} + E\big(\xi_v\,\mathbb{1}_{A_u}\,\mathbb{1}_{\{\beta(1)(u-h) < Z_1 \leq \beta(1)u\}}\big)\,\mathbb{1}_{\{\beta(1)>0\}} + E\big(\xi_v\,\mathbb{1}_{A_u}\,M^{(1)}_{u-h,u}\big) + R_1(h), \tag{20}$$
where
$$|R_1(h)| \leq E\big(|\xi_v|\,\mathbb{1}_{\{\beta(0)(u-h) < Z_0 \leq \beta(0)u,\ \beta(1)(u-h) < Z_1 \leq \beta(1)u\}}\big)\,\mathbb{1}_{\{\beta(0)>0,\,\beta(1)>0\}}$$
$$+\ E\big(|\xi_v|\,\mathbb{1}_{\{\beta(0)(u-h) < Z_0 \leq \beta(0)u,\ M^{(1)}_{u-h,u} \geq 1\}}\big)\,\mathbb{1}_{\{\beta(0)>0\}} + E\big(|\xi_v|\,\mathbb{1}_{\{\beta(1)(u-h) < Z_1 \leq \beta(1)u,\ M^{(1)}_{u-h,u} \geq 1\}}\big)\,\mathbb{1}_{\{\beta(1)>0\}}$$
$$+\ E\Big(|\xi_v|\,\big(M^{(1)}_{u-h,u} - \mathbb{1}_{\{M^{(1)}_{u-h,u} \geq 1\}}\big)\Big) = T_1(h) + T_2(h) + T_3(h) + T_4(h).$$
Our first aim is to prove that $R_1(h) = o(h)$ as $h \downarrow 0$. It is clear that $T_1(h) = O(h^2)$.
Let us consider $T_2(h)$. Using the integral formula for the expectation of the number of local maxima:
$$T_2(h) \leq \mathbb{1}_{\{\beta(0)>0\}} \int_0^1 \mathbb{1}_{\{\beta(t) \geq 0\}}\,dt \int_0^{\beta(0)h} dz_0 \int_0^{\beta(t)h} dz\; E\Big(|\xi_v|\,\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, V_3 = v_3\Big)\,p_{V_3}(v_3),$$
where $V_3$ is the random vector
$$\big(Z_0 - \beta(0)(u-h),\ Z_t - \beta(t)(u-h),\ Z'_t - \beta'(t)(u-h)\big)$$
and $v_3 = (z_0, z, 0)$.

We divide the integral in the right-hand member into two terms, respectively the integrals on $[0, \delta]$ and $[\delta, 1]$ in the $t$-variable, where $0 < \delta < 1$. The first integral can be bounded by
$$\int_0^\delta \mathbb{1}_{\{\beta(t) \geq 0\}}\,dt \int_0^{\beta(t)h} dz\; E\Big(|\xi_v|\,\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, V_2 = (z,0)\Big)\,p_{V_2}(z,0),$$
where the random vector $V_2$ is the same as in (18). Since the conditional expectation as well as the density are bounded for $u$ in a bounded set and $0 < h < 1$, this expression is bounded by $(\text{const})\,\delta\,h$.

As for the second integral, when $t$ is between $\delta$ and $1$ the Gaussian vector
$$\big(Z_0 - \beta(0)(u-h),\ Z_t - \beta(t)(u-h),\ Z'_t - \beta'(t)(u-h)\big)$$
has a bounded density, so that the integral is bounded by $C_\delta\,h^2$, where $C_\delta$ is a constant depending on $\delta$.

Since $\delta > 0$ is arbitrarily small, this proves that $T_2(h) = o(h)$. $T_3(h)$ is similar to $T_2(h)$.
We now consider $T_4(h)$. Put
$$E_h = \Big\{\big\|Z^{(4)}_{(\cdot)} - \beta^{(4)}(\cdot)(u-h)\big\|_\infty \leq h^{-1/4}\Big\} \cap \Big\{|\xi_v| \leq h^{-1/4}\Big\},$$
where $\|\cdot\|_\infty$ stands for the sup-norm on $[0,1]$. So
$$T_4(h) \leq E\Big(|\xi_v|\,\mathbb{1}_{E_h}\,M^{(1)}_{u-h,u}\big(M^{(1)}_{u-h,u} - 1\big)\Big) + E\Big(|\xi_v|\,\mathbb{1}_{E^C_h}\,M^{(1)}_{u-h,u}\Big) \tag{21}$$
($E^C$ denotes the complement of the event $E$).

The second term in (21) is bounded as follows:
$$E\Big(|\xi_v|\,\mathbb{1}_{E^C_h}\,M^{(1)}_{u-h,u}\Big) \leq \Big[E\big(|\xi_v|^4\big)\,E\Big(\big(M^{(1)}_{u-h,u}\big)^4\Big)\Big]^{1/4}\,\big[P(E^C_h)\big]^{1/2}.$$
The polynomial bound on $G$, plus the fact that $\|Z\|_\infty$ has finite moments of all orders, imply that $E\big(|\xi_v|^4\big)$ is uniformly bounded.

Also, $M^{(1)}_{u-h,u} \leq D_0\big(Z'_{(\cdot)} - \beta'(\cdot)(u-h);\,[0,1]\big) = D$ (recall that $D_0(g; I)$ denotes the number of downcrossings of the level $0$ by the function $g$). A bound for $E(D^4)$ can be obtained on applying Lemma 1.2 in Nualart–Wschebor (1991). In fact, the Gaussian process $Z'_{(\cdot)} - \beta'(\cdot)(u-h)$ has uniformly bounded one-dimensional marginal densities, and for every positive integer $p$ the maximum over $[0,1]$ of its $p$-th derivative has finite moments of all orders. From that Lemma it follows that $E(D^4)$ is bounded independently of $h$, $0 < h < 1$.

Hence,
$$E\Big(|\xi_v|\,\mathbb{1}_{E^C_h}\,M^{(1)}_{u-h,u}\Big) \leq (\text{const})\Big[P\big(\|Z^{(4)}_{(\cdot)} - \beta^{(4)}(\cdot)(u-h)\|_\infty > h^{-1/4}\big) + P\big(|\xi_v| > h^{-1/4}\big)\Big]^{1/2}$$
$$\leq (\text{const})\Big[C_1\,e^{-C_2\,h^{-1/2}} + h^{q/4}\,E\big(|\xi_v|^q\big)\Big]^{1/2},$$
where $C_1, C_2$ are positive constants and $q$ is any positive number. The bound on the first term follows from the Landau–Shepp (1971) inequality (see also Fernique, 1974): even though the process depends on $h$, it is easy to see that the bound is uniform in $h$, $0 < h < 1$. The bound on the second term is simply the Markov inequality. Choosing $q > 8$, we see that the second term in (21) is $o(h)$.
For the first term in (21) one can use the formula for the second factorial moment of $M^{(1)}_{u-h,u}$ to write it in the form
$$\int_0^1\!\!\int_0^1 \mathbb{1}_{\{\beta(s) \geq 0,\,\beta(t) \geq 0\}}\,ds\,dt \int_0^{\beta(s)h}\! dz_1 \int_0^{\beta(t)h}\! dz_2\; E\Big(|\xi_v|\,\mathbb{1}_{E_h}\,\big(Z''_s - \beta''(s)(u-h)\big)^-\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, V_4 = v_4\Big)\,p_{V_4}(v_4), \tag{22}$$
where $V_4$ is the random vector
$$\big(Z_s - \beta(s)(u-h),\ Z_t - \beta(t)(u-h),\ Z'_s - \beta'(s)(u-h),\ Z'_t - \beta'(t)(u-h)\big)$$
and $v_4 = (z_1, z_2, 0, 0)$.
Let $s \neq t$ and let $Q$ be the (unique) polynomial of degree 3 such that $Q(s) = z_1$, $Q(t) = z_2$, $Q'(s) = 0$, $Q'(t) = 0$. One checks that
$$Q(y) = z_1 + (z_2 - z_1)\,(y-s)^2\,(3t - 2y - s)\,(t-s)^{-3},$$
$$Q''(t) = 6\,(z_1 - z_2)\,(t-s)^{-2}, \qquad Q''(s) = -6\,(z_1 - z_2)\,(t-s)^{-2}.$$
Denote, for each positive $h$,
$$\theta(y) := Z_y - \beta(y)(u-h) - Q(y).$$
Under the conditioning $V_4 = v_4$ in the integrand of (22), the $C^\infty$ function $\theta(\cdot)$ verifies $\theta(s) = \theta(t) = \theta'(s) = \theta'(t) = 0$. So, by repeated use of Rolle's theorem, there exist $t_1, t_2 \in (s,t)$ such that $\theta''(t_1) = \theta'''(t_2) = 0$, and for $y \in [s,t]$:
$$|\theta''(y)| = \Big|\int_{t_1}^y \theta'''(\tau)\,d\tau\Big| = \Big|\int_{t_1}^y d\tau \int_{t_2}^\tau \theta^{(4)}(\sigma)\,d\sigma\Big| \leq \frac{(t-s)^2}{2}\,\big\|\theta^{(4)}\big\|_\infty.$$
Noting that $a^-\,b^- \leq \big(\frac{a+b}{2}\big)^2$ for any pair of real numbers $a, b$, it follows that the conditional expectation in the integrand of (22) is bounded by
$$E\Big(|\xi_v|\,\mathbb{1}_{E_h}\,(t-s)^4\,\big\|Z^{(4)}_{(\cdot)} - \beta^{(4)}(\cdot)(u-h)\big\|_\infty^2 \,\Big|\, V_4 = v_4\Big) \leq (t-s)^4\,h^{-1/2}\,h^{-1/4} = (t-s)^4\,h^{-3/4}. \tag{23}$$
On the other hand, applying Lemma 3.2, we have the inequality
$$p_{V_4}(z_1, z_2, 0, 0) \leq (\text{const})\,(t-s)^{-4},$$
the constant depending on the process but not on $s, t$. Summing up, the expression in (22) is bounded by
$$(\text{const})\cdot h^2 \cdot h^{-3/4} = o(h).$$
Replacing now in (20) the expectation $E\big(\xi_v\,\mathbb{1}_{A_u}\,M^{(1)}_{u-h,u}\big)$ by the corresponding integral formula:
$$E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big) = \mathbb{1}_{\{\beta(0)>0\}}\,\beta(0) \int_{u-h}^u E\big(\xi_v\,\mathbb{1}_{A_u} \mid Z_0 = \beta(0)\,x\big)\,p_{Z_0}\big(\beta(0)\,x\big)\,dx$$
$$+\ \mathbb{1}_{\{\beta(1)>0\}}\,\beta(1) \int_{u-h}^u E\big(\xi_v\,\mathbb{1}_{A_u} \mid Z_1 = \beta(1)\,x\big)\,p_{Z_1}\big(\beta(1)\,x\big)\,dx$$
$$+\ \int_0^1 \mathbb{1}_{\{\beta(t) \geq 0\}}\,dt \int_0^{\beta(t)h} dz\; E\Big(\xi_v\,\mathbb{1}_{A_u}\,\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, V_2 = (z,0)\Big)\,p_{V_2}(z,0) + o(h)$$
$$=\ \int_{u-h}^u H_1(x,h)\,dx + o(h), \tag{24}$$
where
$$H_1(x,h) = \mathbb{1}_{\{\beta(0)>0\}}\,\beta(0)\,E\big(\xi_v\,\mathbb{1}_{A_u} \mid Z_0 = \beta(0)\,x\big)\,p_{Z_0}\big(\beta(0)\,x\big) + \mathbb{1}_{\{\beta(1)>0\}}\,\beta(1)\,E\big(\xi_v\,\mathbb{1}_{A_u} \mid Z_1 = \beta(1)\,x\big)\,p_{Z_1}\big(\beta(1)\,x\big)$$
$$+\ \int_0^1 \mathbb{1}_{\{\beta(t) \geq 0\}}\,E\Big(\xi_v\,\mathbb{1}_{A_u}\,\big(Z''_t - \beta''(t)(u-h)\big)^- \,\Big|\, Z_t = \beta(t)\,x,\ Z'_t = \beta'(t)(u-h)\Big)\,p_{Z_t,Z'_t}\big(\beta(t)\,x,\ \beta'(t)(u-h)\big)\,\beta(t)\;dt. \tag{25}$$
Step 3. Our next aim is to prove that for each $u$ the limit
$$\lim_{h \downarrow 0} \frac{F_v(u) - F_v(u-h)}{h}$$
exists and admits the representation (15) in the statement of the Lemma. For that purpose, we will prove the existence of the limit
$$\lim_{h \downarrow 0} \frac{1}{h}\,E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big). \tag{26}$$
This will follow from the existence of the limit
$$\lim_{h \downarrow 0,\ u-h < x < u} H_1(x,h).$$
Consider the first term in expression (25). We apply Lemma 3.1(a) and, with the same notations as therein,
$$Z_t = a^\nearrow(t)\,Z_0 + t\,Z^\nearrow_t, \qquad \beta(t) = a^\nearrow(t)\,\beta(0) + t\,\beta^\nearrow(t), \qquad t \in [0,1].$$
For $u-h < x < u$, replacing in (25) we have
$$E\big(\xi_v\,\mathbb{1}_{A_u} \mid Z_0 = \beta(0)\,x\big) = E\Big(G\big(t_1(Z^\nearrow_{t_1} - \beta^\nearrow(t_1)\,x) + \beta(t_1)(x-v),\ \dots,\ t_m(Z^\nearrow_{t_m} - \beta^\nearrow(t_m)\,x) + \beta(t_m)(x-v)\big)\,\mathbb{1}_{B(u,x)}\Big) = E\big(\xi^\nearrow_{v,x}\,\mathbb{1}_{B(u,x)}\big), \tag{27}$$
where $\xi^\nearrow_{v,x}$ is defined in the statement and
$$B(u,x) = \big\{t\,Z^\nearrow_t \leq \beta(t)\,u - a^\nearrow(t)\,\beta(0)\,x \ \text{for all } t \in [0,1]\big\}.$$
For each $\gamma$ such that $0 < \gamma \leq 1$ and $a^\nearrow(s) > 0$ if $0 \leq s \leq \gamma$, we define
$$B_\gamma(u,x) = \big\{t\,Z^\nearrow_t \leq \beta(t)\,u - a^\nearrow(t)\,\beta(0)\,x \ \text{for all } t \in [\gamma,1]\big\} = \Big\{Z^\nearrow_t \leq \beta^\nearrow(t)\,u + \frac{a^\nearrow(t)\,\beta(0)\,(u-x)}{t} \ \text{for all } t \in [\gamma,1]\Big\}.$$
It is clear that, since we consider the case $\beta(0) > 0$,
$$B(u,x) = B_{0+}(u,x) := \lim_{\gamma \downarrow 0} B_\gamma(u,x).$$
Introduce also the notations:
$$M_{[s,t]} = \sup\big\{Z^\nearrow_\tau - \beta^\nearrow(\tau)\,u : \tau \in [s,t]\big\}, \qquad \varepsilon_\gamma(x) = |u-x|\,\sup\Big\{\frac{|a^\nearrow(t)\,\beta(0)|}{t} : t \in [\gamma,1]\Big\}.$$
We prove that, as $x \uparrow u$,
$$E\big(\xi^\nearrow_{v,x}\,\mathbb{1}_{B(u,x)}\big) \to E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{B(u,u)}\big). \tag{28}$$
We have
$$\big|E\big(\xi^\nearrow_{v,x}\,\mathbb{1}_{B(u,x)}\big) - E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{B(u,u)}\big)\big| \leq E\big(\big|\xi^\nearrow_{v,x} - \xi^\nearrow_{v,u}\big|\big) + \big|E\big(\xi^\nearrow_{v,u}\,(\mathbb{1}_{B(u,x)} - \mathbb{1}_{B(u,u)})\big)\big|. \tag{29}$$
From the definition of $\xi^\nearrow_{v,x}$ it is immediate that the first term tends to $0$ as $x \uparrow u$. For the second term it suffices to prove that
$$P\big(B(u,x)\,\Delta\,B(u,u)\big) \to 0 \quad \text{as } x \uparrow u. \tag{30}$$
P(B(u, x)B(u, u)) 0 as x u. (30)
Check the inclusion:
B(u, x)B

(u, u)
_

(x) M
[,1]

(x)
_

_
M
[,1]
0, M
[0,]
> 0
_
which implies that
P(B(u, x)B(u, u)) P(B(u, x)B

(u, u)) +P(B

(u, u)B(u, u))


P(|M
[,1]
|

(x)) +2.P(M
[,1]
0, M
[0,]
> 0).
Let x u for xed . Since

(x) 0, we get:
limsup
xu
P(B(u, x)B(u, u)) P(M
[,1]
= 0) +2.P(M
[,1]
0, M
[0,]
> 0).
The rst term is equal to zero because of Proposition 2.4. The second term
decreases to zero as 0 since
_
M
[,1]
0, M
[0,]
> 0
_
decreases to the empty
set.
It is easy to prove that the function
$$(u,v) \rightsquigarrow E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big)$$
is continuous. The only difficulty comes from the indicator function $\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}$, although again the fact that the distribution function of the maximum of the process $Z^\nearrow_{(\cdot)} - \beta^\nearrow(\cdot)\,u$ has no atoms implies the continuity in $u$ in much the same way as above.

So, the first term in the right-hand member of (25) has the continuous limit
$$\mathbb{1}_{\{\beta(0)>0\}}\,\beta(0)\,E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big)\,p_{Z_0}\big(\beta(0)\,u\big).$$
With minor changes, we obtain for the second term the limit
$$\mathbb{1}_{\{\beta(1)>0\}}\,\beta(1)\,E\big(\xi^\searrow_{v,u}\,\mathbb{1}_{A_u(Z^\searrow,\beta^\searrow)}\big)\,p_{Z_1}\big(\beta(1)\,u\big),$$
where $Z^\searrow, \beta^\searrow$ are as in Lemma 3.1 and $\xi^\searrow_{v,u}$ is as in the statement of Lemma 3.3.

The third term can be treated in a similar way. The only difference is that the regression must be performed on the pair $(Z_t, Z'_t)$ for each $t \in [0,1]$, applying again Lemma 3.1 (a), (b), (c). The passage to the limit presents no further difficulties, even though the integrand depends on $h$.
Finally, note that conditionally on $Z_t = \beta(t)\,u$, $Z'_t = \beta'(t)\,u$, one has
$$Z''_t - \beta''(t)\,u = Z^t_t - \beta^t(t)\,u$$
and
$$\big(Z''_t - \beta''(t)\,u\big)^-\,\mathbb{1}_{A_u(Z,\beta)} = -\big(Z''_t - \beta''(t)\,u\big)\,\mathbb{1}_{A_u(Z,\beta)}.$$
Adding up the various parts, we get:
$$\lim_{h \downarrow 0} \frac{1}{h}\,E\big(\xi_v\,\mathbb{1}_{A_u \setminus A_{u-h}}\big) = \mathbb{1}_{\{\beta(0)>0\}}\,\beta(0)\,E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big)\,p_{Z_0}\big(\beta(0)\,u\big) + \mathbb{1}_{\{\beta(1)>0\}}\,\beta(1)\,E\big(\xi^\searrow_{v,u}\,\mathbb{1}_{A_u(Z^\searrow,\beta^\searrow)}\big)\,p_{Z_1}\big(\beta(1)\,u\big)$$
$$-\ \int_0^1 \beta(t)\,\mathbb{1}_{\{\beta(t)>0\}}\,E\Big(\xi^t_{v,u}\,\big(Z^t_t - \beta^t(t)\,u\big)\,\mathbb{1}_{A_u(Z^t,\beta^t)}\Big)\,p_{Z_t,Z'_t}\big(\beta(t)\,u,\ \beta'(t)\,u\big)\;dt.$$
Similar computations, which we will not repeat here, give an analogous result for
$$\lim_{h \downarrow 0} \frac{1}{h}\,E\big(\xi_v\,\mathbb{1}_{A_{u-h} \setminus A_u}\big),$$
and replacing into (16) we have the result for processes $Z$ with $C^\infty$ paths.
Step 4. Suppose now that $Z$ and $\beta(\cdot)$ satisfy the hypotheses of the Lemma, and define
$$Z^\varepsilon(t) = (\psi_\varepsilon * Z)(t) + \varepsilon\,Y(t), \qquad \beta^\varepsilon(t) = (\psi_\varepsilon * \beta)(t),$$
where $\varepsilon > 0$, $\psi_\varepsilon(t) = \varepsilon^{-1}\psi(\varepsilon^{-1}t)$, $\psi$ is a non-negative $C^\infty$ function with compact support satisfying $\int_{-\infty}^{+\infty} \psi(t)\,dt = 1$, and $Y$ is a Gaussian centered stationary process with $C^\infty$ paths and non-purely-atomic spectrum, independent of $Z$. Proceeding as in Section 10.6 of Cramér–Leadbetter (1967), one can see that $Y$ verifies $H_k$ for every $k$. The definition of $Z^\varepsilon$ implies that $Z^\varepsilon$ inherits this property. Thus for each positive $\varepsilon$, $Z^\varepsilon$ meets the conditions for the validity of Steps 2 and 3, so that the function
$$F^\varepsilon_v(u) = E\big(\xi^\varepsilon_v\,\mathbb{1}_{A_u(Z^\varepsilon,\beta^\varepsilon)}\big), \quad \text{where } \xi^\varepsilon_v = G\big(Z^\varepsilon_{t_1} - \beta^\varepsilon(t_1)\,v,\ \dots,\ Z^\varepsilon_{t_m} - \beta^\varepsilon(t_m)\,v\big),$$
is continuously differentiable and its derivative verifies (15) with the obvious changes, that is:
$$\big(F^\varepsilon_v\big)'(u) = \beta^\varepsilon(0)\,E\Big(\big(\xi^\varepsilon_{v,u}\big)^\nearrow\,\mathbb{1}_{A_u\left((Z^\varepsilon)^\nearrow,\,(\beta^\varepsilon)^\nearrow\right)}\Big)\,p_{Z^\varepsilon_0}\big(\beta^\varepsilon(0)\,u\big) + \beta^\varepsilon(1)\,E\Big(\big(\xi^\varepsilon_{v,u}\big)^\searrow\,\mathbb{1}_{A_u\left((Z^\varepsilon)^\searrow,\,(\beta^\varepsilon)^\searrow\right)}\Big)\,p_{Z^\varepsilon_1}\big(\beta^\varepsilon(1)\,u\big)$$
$$-\ \int_0^1 \beta^\varepsilon(t)\,E\Big(\big(\xi^\varepsilon_{v,u}\big)^t\Big(\big(Z^\varepsilon\big)^t_t - \big(\beta^\varepsilon\big)^t(t)\,u\Big)\,\mathbb{1}_{A_u\left((Z^\varepsilon)^t,\,(\beta^\varepsilon)^t\right)}\Big)\,p_{Z^\varepsilon_t,\,(Z^\varepsilon)'_t}\big(\beta^\varepsilon(t)\,u,\ (\beta^\varepsilon)'(t)\,u\big)\;dt. \tag{31}$$
Let $\varepsilon \downarrow 0$. We prove next that $(F^\varepsilon_v)'(u)$ converges, for fixed $(u,v)$, to a limit function $\widetilde{F}_v(u)$ that is continuous in $(u,v)$. On the other hand, it is easy to see that for fixed $(u,v)$, $F^\varepsilon_v(u) \to F_v(u)$. Also, from (31) it is clear that for each $v$ there exists $\varepsilon_0 > 0$ such that if $\varepsilon \in (0, \varepsilon_0)$, then $(F^\varepsilon_v)'(u)$ is bounded by a fixed constant when $u$ varies in a bounded set, because of the hypotheses on the functions $G$ and $\beta$ and the non-degeneracy of the one- and two-dimensional distributions of the process $Z$.

So, it follows that $F'_v(u) = \widetilde{F}_v(u)$, and the same computation implies that $F'_v(u)$ satisfies (15).
Let us show how to proceed with the first term in the right-hand member of (31); the remaining terms are similar.

Clearly, almost surely, as $\varepsilon \downarrow 0$ one has $Z^\varepsilon_t \to Z_t$, $(Z^\varepsilon)'_t \to Z'_t$, $(Z^\varepsilon)''_t \to Z''_t$ uniformly for $t \in [0,1]$, so that the definition of $Z^\nearrow$ in (11) implies that $(Z^\varepsilon)^\nearrow_t \to Z^\nearrow_t$ uniformly for $t \in [0,1]$, since the regression coefficient $(a^\varepsilon)^\nearrow(t)$ converges to $a^\nearrow(t)$ uniformly for $t \in [0,1]$ (with the obvious notation).
Similarly, for fixed $(u,v)$:
$$(\beta^\varepsilon)^t \to \beta^t, \qquad \big(\xi^\varepsilon_{v,u}\big)^\nearrow \to \xi^\nearrow_{v,u} \quad \text{uniformly for } t \in [0,1].$$
Let us prove that
$$E\Big(\big(\xi^\varepsilon_{v,u}\big)^\nearrow\,\mathbb{1}_{A_u\left((Z^\varepsilon)^\nearrow,\,(\beta^\varepsilon)^\nearrow\right)}\Big) \to E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big).$$
This is implied by
$$P\Big(A_u\big((Z^\varepsilon)^\nearrow,(\beta^\varepsilon)^\nearrow\big)\,\Delta\,A_u\big(Z^\nearrow,\beta^\nearrow\big)\Big) \to 0 \tag{32}$$
as $\varepsilon \downarrow 0$. Denote, for $\eta > 0$, $\varepsilon \geq 0$:
$$C_{u,\varepsilon} = A_u\big((Z^\varepsilon)^\nearrow,(\beta^\varepsilon)^\nearrow\big) = \big\{(Z^\varepsilon)^\nearrow_t \leq (\beta^\varepsilon)^\nearrow(t)\,u \ \text{for every } t \in [0,1]\big\},$$
$$E_{u,\eta} = \big\{Z^\nearrow_t \leq \beta^\nearrow(t)\,u + \eta \ \text{for all } t \in [0,1]\big\}.$$
One has:
$$P\big(C_{u,\varepsilon}\,\Delta\,E_{u,0}\big) \leq P\big(C_{u,\varepsilon} \setminus E_{u,\eta}\big) + P\big(E_{u,\eta} \setminus C_{u,\varepsilon}\big) + P\big(E_{u,\eta} \setminus E_{u,0}\big).$$
Let $K$ be a compact subset of the real line and suppose $u \in K$. We denote
$$D_{\varepsilon,\eta} = \Big\{\sup_{u \in K,\ t \in [0,1]} \big|\big((Z^\varepsilon)^\nearrow_t - (\beta^\varepsilon)^\nearrow(t)\,u\big) - \big(Z^\nearrow_t - \beta^\nearrow(t)\,u\big)\big| > \eta\Big\}$$
and
$$F_{u,\eta} = \Big\{\sup_{t \in [0,1]} \big(Z^\nearrow_t - \beta^\nearrow(t)\,u\big) \in [-\eta, \eta]\Big\}.$$
Fix $\eta > 0$ and choose $\varepsilon$ small enough so that
$$P\big(D_{\varepsilon,\eta}\big) < \eta.$$
Check the following inclusions:
$$C_{u,\varepsilon} \setminus E_{u,\eta} \subset D_{\varepsilon,\eta}, \qquad \big(E_{u,\eta} \setminus C_{u,\varepsilon}\big) \cap D^c_{\varepsilon,\eta} \subset F_{u,\eta}, \qquad E_{u,\eta} \setminus E_{u,0} \subset F_{u,\eta},$$
which imply that if $\varepsilon$ is small enough:
$$P\big(C_{u,\varepsilon}\,\Delta\,E_{u,0}\big) \leq 2\eta + 2\,P\big(F_{u,\eta}\big).$$
For each $u$, as $\eta \downarrow 0$ one has
$$P\big(F_{u,\eta}\big) \to P\Big(\sup_{t \in [0,1]} \big(Z^\nearrow_t - \beta^\nearrow(t)\,u\big) = 0\Big) = 0,$$
where the second equality follows again on applying Proposition 2.4.

This proves that as $\varepsilon \downarrow 0$ the first term in the right-hand member of (31) tends to the limit
$$\beta(0)\,E\big(\xi^\nearrow_{v,u}\,\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big)\,p_{Z_0}\big(\beta(0)\,u\big).$$
It remains to prove that this is a continuous function of $(u,v)$. It suffices to prove the continuity of the function
$$E\big(\mathbb{1}_{A_u(Z^\nearrow,\beta^\nearrow)}\big) = P\big(A_u(Z^\nearrow,\beta^\nearrow)\big)$$
as a function of $u$. For that purpose we use the inequality
$$\big|P\big(A_{u+h}(Z^\nearrow,\beta^\nearrow)\big) - P\big(A_u(Z^\nearrow,\beta^\nearrow)\big)\big| \leq P\Big(\Big|\sup_{t \in [0,1]} \big(Z^\nearrow_t - \beta^\nearrow(t)\,u\big)\Big| \leq \|\beta^\nearrow\|_\infty\,|h|\Big),$$
and as $h \to 0$ the right-hand member tends to $P\big(\big|\sup_{t \in [0,1]} \big(Z^\nearrow_t - \beta^\nearrow(t)\,u\big)\big| = 0\big)$, which is equal to zero by Proposition 2.4.
Proof of Theorem 1.1. We proceed by induction on $k$. We will give some details for the first two derivatives, including some implicit formulae that will illustrate the procedure for general $k$.

We introduce the following additional notations. Put $Y_t := X_t - \beta(t)\,u$ and define, on the interval $[0,1]$, the processes $X^\nearrow, X^\searrow, X^t, Y^\nearrow, Y^\searrow, Y^t$ and the functions $\beta^\nearrow, \beta^\searrow, \beta^t$ as in Lemma 3.1. Note that the regression coefficients corresponding to the processes $X$ and $Y$ are the same, so that either of them may be used to define the functions $\beta^\nearrow, \beta^\searrow, \beta^t$. One can easily check that
$$Y^\nearrow_s = X^\nearrow_s - \beta^\nearrow(s)\,u, \qquad Y^\searrow_s = X^\searrow_s - \beta^\searrow(s)\,u, \qquad Y^t_s = X^t_s - \beta^t(s)\,u.$$
For $t_1, \dots, t_m \in [0,1] \cup \{\nearrow, \searrow\}$, $m \geq 2$, we define by induction the stochastic processes $X^{t_1,\dots,t_m} = \big(X^{t_1,\dots,t_{m-1}}\big)^{t_m}$, $Y^{t_1,\dots,t_m} = \big(Y^{t_1,\dots,t_{m-1}}\big)^{t_m}$ and the function $\beta^{t_1,\dots,t_m} = \big(\beta^{t_1,\dots,t_{m-1}}\big)^{t_m}$, applying Lemma 3.1 for the computations at each stage.

With the aim of somewhat reducing the size of the formulae, we will express the successive derivatives in terms of the processes $Y^{t_1,\dots,t_m}$ instead of $X^{t_1,\dots,t_m}$. The reader must keep in mind that for each $m$-tuple $t_1, \dots, t_m$ the results depend on $u$ through the expectation of the stochastic process $Y^{t_1,\dots,t_m}$. Also, for a stochastic process $Z$ we will use the notation
$$A(Z) = A_0(Z, \cdot) = \{Z_t \leq 0 \ \text{for all } t \in [0,1]\}.$$
First derivative. Suppose that $X$ satisfies $H_2$. We apply formula (15) in Lemma 3.3 with $\xi \equiv 1$, $Z = X$ and $\beta(\cdot) \equiv 1$, obtaining for the first derivative:
$$F'(u) = E\big(\mathbb{1}_{A(Y^\nearrow)}\big)\,p_{Y_0}(0) + E\big(\mathbb{1}_{A(Y^\searrow)}\big)\,p_{Y_1}(0) - \int_0^1 E\Big(Y^{t_1}_{t_1}\,\mathbb{1}_{A(Y^{t_1})}\Big)\,p_{Y_{t_1},Y'_{t_1}}(0,0)\;dt_1. \tag{33}$$
This expression is exactly the expression in (9) with the indicated notational changes, after taking advantage of the fact that the process is Gaussian, via the regression on the conditioning in each term. Note that, according to the definition of the $Y$-process:
$$E\big(\mathbb{1}_{A(Y^\nearrow)}\big) = E\big(\mathbb{1}_{A_u(X^\nearrow,\beta^\nearrow)}\big), \qquad E\big(\mathbb{1}_{A(Y^\searrow)}\big) = E\big(\mathbb{1}_{A_u(X^\searrow,\beta^\searrow)}\big), \qquad E\Big(Y^{t_1}_{t_1}\,\mathbb{1}_{A(Y^{t_1})}\Big) = E\Big(Y^{t_1}_{t_1}\,\mathbb{1}_{A_u(X^{t_1},\beta^{t_1})}\Big).$$
Second derivative. Suppose that $X$ satisfies $H_4$. Then $X^\nearrow, X^\searrow, X^{t_1}$ satisfy $H_3$, $H_3$, $H_2$ respectively. Therefore Lemma 3.3 applied to these processes can be used to show the existence of $F''(u)$ and to compute a similar formula, except for the necessity of justifying differentiation under the integral sign in the third term. We get the expression:
$$F''(u) = E\big(\mathbb{1}_{A(Y^\nearrow)}\big)\,p^{(1)}_{Y_0}(0) + E\big(\mathbb{1}_{A(Y^\searrow)}\big)\,p^{(1)}_{Y_1}(0) - \int_0^1 E\Big(Y^{t_1}_{t_1}\,\mathbb{1}_{A(Y^{t_1})}\Big)\,p^{(1,0)}_{Y_{t_1},Y'_{t_1}}(0,0)\,dt_1$$
$$+\ p_{Y_0}(0)\Big[\beta^\nearrow(0)\,E\big(\mathbb{1}_{A(Y^{\nearrow,\nearrow})}\big)\,p_{Y^\nearrow_0}(0) + \beta^\nearrow(1)\,E\big(\mathbb{1}_{A(Y^{\nearrow,\searrow})}\big)\,p_{Y^\nearrow_1}(0) - \int_0^1 \beta^\nearrow(t_2)\,E\Big(Y^{\nearrow,t_2}_{t_2}\,\mathbb{1}_{A(Y^{\nearrow,t_2})}\Big)\,p_{Y^\nearrow_{t_2},(Y^\nearrow)'_{t_2}}(0,0)\,dt_2\Big]$$
$$+\ p_{Y_1}(0)\Big[\beta^\searrow(0)\,E\big(\mathbb{1}_{A(Y^{\searrow,\nearrow})}\big)\,p_{Y^\searrow_0}(0) + \beta^\searrow(1)\,E\big(\mathbb{1}_{A(Y^{\searrow,\searrow})}\big)\,p_{Y^\searrow_1}(0) - \int_0^1 \beta^\searrow(t_2)\,E\Big(Y^{\searrow,t_2}_{t_2}\,\mathbb{1}_{A(Y^{\searrow,t_2})}\Big)\,p_{Y^\searrow_{t_2},(Y^\searrow)'_{t_2}}(0,0)\,dt_2\Big]$$
$$-\ \int_0^1 p_{Y_{t_1},Y'_{t_1}}(0,0)\Big[-\beta^{t_1}(t_1)\,E\big(\mathbb{1}_{A(Y^{t_1})}\big) + \beta^{t_1}(0)\,E\Big(t_1\,Y^{t_1,\nearrow}_{t_1}\,\mathbb{1}_{A(Y^{t_1,\nearrow})}\Big)\,p_{Y^{t_1}_0}(0) + \beta^{t_1}(1)\,E\Big((1-t_1)\,Y^{t_1,\searrow}_{t_1}\,\mathbb{1}_{A(Y^{t_1,\searrow})}\Big)\,p_{Y^{t_1}_1}(0)$$
$$\qquad -\ \int_0^1 \beta^{t_1}(t_2)\,\frac{(t_1-t_2)^2}{2}\,E\Big(Y^{t_1,t_2}_{t_1}\,Y^{t_1,t_2}_{t_2}\,\mathbb{1}_{A(Y^{t_1,t_2})}\Big)\,p_{Y^{t_1}_{t_2},(Y^{t_1})'_{t_2}}(0,0)\,dt_2\Big]\,dt_1. \tag{34}$$
In this formula, $p^{(1)}_{Y_0}$, $p^{(1)}_{Y_1}$ and $p^{(1,0)}_{Y_{t_1},Y'_{t_1}}(0,0)$ stand respectively for the derivative of $p_{Y_0}(\cdot)$, the derivative of $p_{Y_1}(\cdot)$, and the derivative with respect to the first variable of $p_{Y_{t_1},Y'_{t_1}}(\cdot,\cdot)$.
To validate the above formula, note that:

- The first line is obtained by differentiating with respect to $u$ the densities $p_{Y_0}(0) = p_{X_0}(u)$, $p_{Y_1}(0) = p_{X_1}(u)$, $p_{Y_{t_1},Y'_{t_1}}(0,0) = p_{X_{t_1},X'_{t_1}}(u,0)$.
- The second line comes from the application of Lemma 3.3 to differentiate $E\big(\mathbb{1}_{A(Y^\nearrow)}\big)$; the lemma is applied with $Z = X^\nearrow$, $\beta = \beta^\nearrow$, $\xi \equiv 1$.
- Similarly, the third line contains the derivative of $E\big(\mathbb{1}_{A(Y^\searrow)}\big)$.
- The rest corresponds to differentiating the function
$$E\Big(Y^{t_1}_{t_1}\,\mathbb{1}_{A(Y^{t_1})}\Big) = E\Big(\big(X^{t_1}_{t_1} - \beta^{t_1}(t_1)\,u\big)\,\mathbb{1}_{A_u(X^{t_1},\beta^{t_1})}\Big)$$
in the integrand of the third term of (33). The first term inside the bracket comes from the simple derivative
$$\frac{\partial}{\partial v}\,E\Big(\big(X^{t_1}_{t_1} - \beta^{t_1}(t_1)\,v\big)\,\mathbb{1}_{A_u(X^{t_1},\beta^{t_1})}\Big) = -\beta^{t_1}(t_1)\,E\big(\mathbb{1}_{A(Y^{t_1})}\big).$$
The other terms are obtained by applying Lemma 3.3 to compute
$$\frac{\partial}{\partial u}\,E\Big(\big(X^{t_1}_{t_1} - \beta^{t_1}(t_1)\,v\big)\,\mathbb{1}_{A_u(X^{t_1},\beta^{t_1})}\Big),$$
putting $Z = X^{t_1}$, $\beta = \beta^{t_1}$, $\xi_v = X^{t_1}_{t_1} - \beta^{t_1}(t_1)\,v$.
- Finally, differentiation under the integral sign is valid since, because of Lemma 3.1, the derivative of the integrand is a continuous function of $(t_1, t_2, u)$, due to the regularity and non-degeneracy of the Gaussian distributions involved and to Proposition 2.4.
General case. With the above notation, given the $m$-tuple $t_1, \dots, t_m$ of elements of $[0,1] \cup \{\nearrow, \searrow\}$, we will call the processes $Y, Y^{t_1}, Y^{t_1,t_2}, \dots, Y^{t_1,\dots,t_{m-1}}$ the ancestors of $Y^{t_1,\dots,t_m}$. In the same way we define the ancestors of the function $\beta^{t_1,\dots,t_m}$.

Assume the following induction hypothesis: if $X$ satisfies $H_{2k}$, then $F$ is $k$ times continuously differentiable and $F^{(k)}$ is the sum of a finite number of terms belonging to the class $\mathcal{D}_k$, which consists of all expressions of the form
$$\int_0^1 \!\!\cdots\! \int_0^1 ds_1 \cdots ds_p\;Q(s_1, \dots, s_p)\,E\big(\eta\,\mathbb{1}_{A(Y^{t_1,\dots,t_m})}\big)\,K_1(s_1, \dots, s_p)\,K_2(s_1, \dots, s_p), \tag{35}$$
where:

- $1 \leq m \leq k$;
- $t_1, \dots, t_m \in [0,1] \cup \{\nearrow, \searrow\}$, $m \geq 1$;
- $s_1, \dots, s_p$, $0 \leq p \leq m$, are the elements in $\{t_1, \dots, t_m\}$ that belong to $[0,1]$ (that is, which are neither $\nearrow$ nor $\searrow$); when $p = 0$ no integral sign is present;
- $Q(s_1, \dots, s_p)$ is a polynomial in the variables $s_1, \dots, s_p$;
- $\eta$ is a product of values of $Y^{t_1,\dots,t_m}$ at some locations belonging to $\{s_1, \dots, s_p\}$;
- $K_1(s_1, \dots, s_p)$ is a product of values of some ancestors of $\beta^{t_1,\dots,t_m}$ at some locations belonging to the set $\{s_1, \dots, s_p\} \cup \{0, 1\}$;
- $K_2(s_1, \dots, s_p)$ is a sum of products of densities and derivatives of densities of the random variables $Z_\tau$ at the point $0$, or of the pairs $(Z_\tau, Z'_\tau)$ at the point $(0,0)$, where $\tau \in \{s_1, \dots, s_p\} \cup \{0, 1\}$ and the process $Z$ is some ancestor of $Y^{t_1,\dots,t_m}$.

Note that $K_1$ does not depend on $u$, but $K_2$ is a function of $u$.
It is clear that the induction hypothesis is verified for $k = 1$. Assume that it is true up to the integer $k$ and that $X$ satisfies $H_{2k+2}$. Then $F^{(k)}$ can be written as a sum of terms of the form (35). Consider a term of this form and note that the variable $u$ may appear in three locations:

1. In $\eta$, where differentiation is simple given its product form, the fact that
$$\frac{\partial}{\partial u}\,Y^{t_1,\dots,t_q}_s = -\beta^{t_1,\dots,t_q}(s), \qquad q \leq m,\ s \in \{s_1, \dots, s_p\},$$
and the boundedness of moments allowing differentiation under the integral and expectation signs.
2. In $K_2(s_1, \dots, s_p)$, which is clearly $C^\infty$ as a function of $u$. Its derivative with respect to $u$ takes the form of a product of functions of the types $K_1(s_1, \dots, s_p)$ and $K_2(s_1, \dots, s_p)$ defined above.
3. In $\mathbb{1}_{A(Y^{t_1,\dots,t_m})}$. Lemma 3.3 shows that differentiation produces three terms depending upon the processes $Y^{t_1,\dots,t_m,t_{m+1}}$, with $t_{m+1}$ belonging to $[0,1] \cup \{\nearrow, \searrow\}$.

Each term obtained in this way belongs to $\mathcal{D}_{k+1}$.

The proof is achieved by noting that, as in the computation of the second derivative, Lemma 3.1 implies that the derivatives of the integrands are continuous functions of $u$ that are bounded as functions of $(s_1, \dots, s_p, t_{m+1}, u)$ if $u$ varies in a bounded set.
The statement and proof of Theorem 1.1 cannot, of course, be used to obtain explicit expressions for the derivatives of the distribution function $F$. However, the implicit formula for $F^{(k)}(u)$ as a sum of elements of $\mathcal{D}_k$ can be transformed into explicit upper bounds if one replaces everywhere the indicator functions $\mathbb{1}_{A(Y^{t_1,\dots,t_m})}$ by $1$ and the functions $\beta^{t_1,\dots,t_m}(\cdot)$ by their absolute value.

On the other hand, Theorem 1.1 permits one to obtain the exact asymptotic behaviour of $F^{(k)}(u)$ as $u \to +\infty$ in case $\operatorname{Var}(X_t)$ is constant. Even though the number of terms in the formula increases rapidly with $k$, there is exactly one term that is dominant. It turns out that, as $u \to +\infty$, $F^{(k)}(u)$ is equivalent to the $k$-th derivative of the equivalent of $F(u)$. This is Corollary 1.1.
Proof of Corollary 1.1. To prove the result for $k = 1$, note that under the hypotheses of the Corollary one has $r(t,t) = 1$, $r_{01}(t,t) = 0$, $r_{02}(t,t) = -r_{11}(t,t)$, and an elementary computation of the regression (13), replacing $Z$ by $X$, shows that
$$b_t(s) = r(s,t), \qquad c_t(s) = \frac{r_{01}(s,t)}{r_{11}(t,t)}, \qquad \beta^t(s) = 2\,\frac{1 - r(s,t)}{(t-s)^2},$$
since we start with $\beta(t) \equiv 1$.

This shows that for every $t \in [0,1]$ one has $\inf_{s \in [0,1]} \beta^t(s) > 0$, because of the non-degeneracy condition, and $\beta^t(t) = -r_{02}(t,t) = r_{11}(t,t) > 0$. The expression for $F'$ becomes
$$F'(u) = \varphi(u)\,L(u), \tag{36}$$
where
$$L(u) = L_1(u) + L_2(u) + L_3(u),$$
$$L_1(u) = P\big(A_u(X^\nearrow, \beta^\nearrow)\big), \qquad L_2(u) = P\big(A_u(X^\searrow, \beta^\searrow)\big),$$
$$L_3(u) = \int_0^1 E\Big(\big(X^t_t - \beta^t(t)\,u\big)^-\,\mathbb{1}_{A_u(X^t,\beta^t)}\Big)\,\frac{dt}{(2\pi\,r_{11}(t,t))^{1/2}}.$$
Since for each $t \in [0,1]$ the process $X^t$ is bounded, it follows that a.s. $\mathbb{1}_{A_u(X^t,\beta^t)} \to 1$ as $u \to +\infty$.
A dominated convergence argument now shows that $L_3(u)$ is equivalent to
$$-\frac{u}{(2\pi)^{1/2}} \int_0^1 \frac{r_{02}(t,t)}{(r_{11}(t,t))^{1/2}}\;dt = \frac{u}{(2\pi)^{1/2}} \int_0^1 \sqrt{r_{11}(t,t)}\;dt.$$
Since $L_1(u), L_2(u)$ are bounded by $1$, (1) follows for $k = 1$.
For $k \geq 2$, write
$$F^{(k)}(u) = \varphi^{(k-1)}(u)\,L(u) + \sum_{h=2}^{k} \binom{k-1}{h-1}\,\varphi^{(k-h)}(u)\,L^{(h-1)}(u). \tag{37}$$
As $u \to +\infty$, for each $j = 0, 1, \dots, k-1$, $\varphi^{(j)}(u) \sim (-1)^j\,u^j\,\varphi(u)$, so that the first term in (37) is equivalent to the expression in (1). Hence, to prove the Corollary it suffices to show that the successive derivatives of the function $L$ are bounded. In fact, we prove the stronger inequality
$$\big|L^{(j)}(u)\big| \leq l_j\,\varphi\Big(\frac{u}{a_j}\Big), \qquad j = 1, \dots, k-1, \tag{38}$$
for some positive constants $l_j, a_j$, $j = 1, \dots, k-1$.
We first consider the function $L_1$. One has
$$\beta^\nearrow(s) = \frac{1 - r(s,0)}{s} \ \text{for } 0 < s \leq 1, \qquad \beta^\nearrow(0) = 0,$$
$$(\beta^\nearrow)'(s) = \frac{-1 + r(s,0) - s\,r_{10}(s,0)}{s^2} \ \text{for } 0 < s \leq 1, \qquad (\beta^\nearrow)'(0) = \frac{1}{2}\,r_{11}(0,0).$$
The derivative $L'_1(u)$ becomes
$$L'_1(u) = \beta^\nearrow(1)\,E\big(\mathbb{1}_{A_u(X^{\nearrow,\searrow},\,\beta^{\nearrow,\searrow})}\big)\,p_{X^\nearrow_1}\big(\beta^\nearrow(1)\,u\big)$$
$$-\ \int_0^1 \beta^\nearrow(t)\,E\Big(\big(X^{\nearrow,t}_t - \beta^{\nearrow,t}(t)\,u\big)\,\mathbb{1}_{A_u(X^{\nearrow,t},\,\beta^{\nearrow,t})}\Big)\,p_{X^\nearrow_t,\,(X^\nearrow)'_t}\big(\beta^\nearrow(t)\,u,\ (\beta^\nearrow)'(t)\,u\big)\;dt.$$
Notice that $\beta^\nearrow(1)$ is non-zero, so that the first term is bounded by a constant times a non-degenerate Gaussian density. Even though $\beta^\nearrow(0) = 0$, the second term is also bounded by a constant times a non-degenerate Gaussian density, because the joint distribution of the pair $\big(X^\nearrow_t, (X^\nearrow)'_t\big)$ is non-degenerate and the pair $\big(\beta^\nearrow(t), (\beta^\nearrow)'(t)\big) \neq (0,0)$ for every $t \in [0,1]$.
Applying a similar argument to the successive derivatives, we obtain (38) with $L_1$ instead of $L$. The same follows with no changes for $L_2(u) = P\big(A_u(X^\searrow, \beta^\searrow)\big)$. For the third term,
$$L_3(u) = \int_0^1 E\Big(\big(X^t_t - \beta^t(t)\,u\big)^-\,\mathbb{1}_{A_u(X^t,\beta^t)}\Big)\,\frac{dt}{(2\pi\,r_{11}(t,t))^{1/2}},$$
we proceed similarly, taking into account that $\beta^t(s) \neq 0$ for every $s \in [0,1]$. So (38) follows and we are done.
Remark. Suppose that $X$ satisfies the hypotheses of the Corollary with $k \geq 2$. Then it is possible to refine the result as follows. For $j = 1, \dots, k$:
$$F^{(j)}(u) = (-1)^{j-1}\,(j-1)!\;h_{j-1}(u)\,\Big[1 + (2\pi)^{-1/2}\,u \int_0^1 (r_{11}(t,t))^{1/2}\,dt\Big]\,\varphi(u) + \rho_j(u)\,\varphi(u), \tag{39}$$
where $h_j(u) = \frac{(-1)^j}{j!}\,(\varphi(u))^{-1}\,\varphi^{(j)}(u)$ is the standard $j$-th Hermite polynomial ($j = 0, 1, 2, \dots$) and
$$|\rho_j(u)| \leq C_j\,\exp(-\delta\,u^2),$$
where $C_1, C_2, \dots$ are positive constants and $\delta > 0$ does not depend on $j$.
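With this normalization, $h_j$ is the $j$-th probabilists' Hermite polynomial divided by $j!$ and satisfies $h_0 = 1$, $h_1(u) = u$, $h_{j+1}(u) = \big(u\,h_j(u) - h_{j-1}(u)\big)/(j+1)$. A short numerical check of the defining identity (the recurrence and the finite-difference step are standard tools, not taken from the paper):

```python
import numpy as np

def h(j, u):
    # h_j via the normalized three-term recurrence h_{n+1} = (u*h_n - h_{n-1})/(n+1)
    h_prev, h_curr = 1.0, u
    if j == 0:
        return h_prev
    for n in range(1, j):
        h_prev, h_curr = h_curr, (u * h_curr - h_prev) / (n + 1)
    return h_curr

phi = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
u, eps, j = 0.7, 1e-2, 3
# third derivative of phi by central differences
d3 = (phi(u + 2*eps) - 2*phi(u + eps) + 2*phi(u - eps) - phi(u - 2*eps)) / (2 * eps**3)
print(h(j, u), (-1) ** j / 6.0 * d3 / phi(u))   # the two values agree to several decimals
```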
The proof of (39) consists of a slight modification of the proof of the Corollary. Note first that from the above computation of $\beta^\nearrow(s)$ it follows that: 1) if $X^\nearrow_0 < 0$, then if $u$ is large enough, $X^\nearrow_s - \beta^\nearrow(s)\,u \leq 0$ for all $s \in [0,1]$; and 2) if $X^\nearrow_0 > 0$, then $X^\nearrow_0 - \beta^\nearrow(0)\,u > 0$; so that
$$L_1(u) = P\big(X^\nearrow_s - \beta^\nearrow(s)\,u \leq 0 \ \text{for all } s \in [0,1]\big) \to \frac{1}{2} \quad \text{as } u \to +\infty.$$
On account of (38), this implies that if $u \geq 0$:
$$0 \leq \frac{1}{2} - L_1(u) = \int_u^{+\infty} L'_1(v)\,dv \leq D_1\,\exp(-\delta_1\,u^2),$$
with $D_1, \delta_1$ positive constants. $L_2(u)$ is similar. Finally:
$$L_3(u) = \int_0^1 E\Big(\big(X^t_t - \beta^t(t)\,u\big)^-\Big)\,\frac{dt}{(2\pi\,r_{11}(t,t))^{1/2}} - \int_0^1 E\Big(\big(X^t_t - \beta^t(t)\,u\big)^-\,\mathbb{1}_{\left(A_u(X^t,\beta^t)\right)^C}\Big)\,\frac{dt}{(2\pi\,r_{11}(t,t))^{1/2}}. \tag{40}$$
The first term in (40) is equal, up to a remainder bounded by $(\text{const})\exp(-\delta' u^2)$ for some $\delta' > 0$, to
$$(2\pi)^{-1/2}\,u \int_0^1 (r_{11}(t,t))^{1/2}\;dt.$$
As for the second term in (40), denote $\beta^{\#} = \inf_{s,t \in [0,1]} \beta^t(s) > 0$ and let $u > 0$. Then
$$P\Big(\big(A_u(X^t, \beta^t)\big)^C\Big) \leq P\big(\exists\,s \in [0,1] \ \text{such that} \ X^t_s > \beta^{\#}\,u\big) \leq D_3\,\exp(-\delta_3\,u^2),$$
with $D_3, \delta_3$ positive constants, the last inequality being a consequence of the Landau–Shepp–Fernique inequality.

The remainder follows in the same way as the proof of the Corollary.
Acknowledgements. This work received support from CONICYT-BID-Uruguay, grant 91/94, and from the ECOS program U97E02.
References

1. Adler, R.J.: An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes. IMS, Hayward, CA (1990)
2. Azaïs, J.-M., Wschebor, M.: Régularité de la loi du maximum de processus gaussiens réguliers. C. R. Acad. Sci. Paris, t. 328, série I, 333–336 (1999)
3. Belyaev, Yu.: On the number of intersections of a level by a Gaussian stochastic process. Theory Probab. Appl., 11, 106–113 (1966)
4. Berman, S.M.: Sojourns and Extremes of Stochastic Processes. Wadsworth and Brooks, Probability Series (1992)
5. Bulinskaya, E.V.: On the mean number of crossings of a level by a stationary Gaussian stochastic process. Theory Probab. Appl., 6, 435–438 (1961)
6. Cierco, C.: Problèmes statistiques liés à la détection et à la localisation d'un gène à effet quantitatif. PhD dissertation, University of Toulouse, France (1996)
7. Cramér, H., Leadbetter, M.R.: Stationary and Related Stochastic Processes. J. Wiley & Sons, New York (1967)
8. Diebolt, J., Posse, C.: On the density of the maximum of smooth Gaussian processes. Ann. Probab., 24, 1104–1129 (1996)
9. Fernique, X.: Régularité des trajectoires des fonctions aléatoires gaussiennes. École d'Été de Probabilités de Saint-Flour, Lecture Notes in Mathematics, 480, Springer-Verlag, New York (1974)
10. Landau, H.J., Shepp, L.A.: On the supremum of a Gaussian process. Sankhyā Ser. A, 32, 369–378 (1971)
11. Leadbetter, M.R., Lindgren, G., Rootzén, H.: Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York (1983)
12. Lifshits, M.A.: Gaussian Random Functions. Kluwer, The Netherlands (1995)
13. Marcus, M.B.: Level crossings of a stochastic process with absolutely continuous sample paths. Ann. Probab., 5, 52–71 (1977)
14. Nualart, D., Vives, J.: Continuité absolue de la loi du maximum d'un processus continu. C. R. Acad. Sci. Paris, 307, 349–354 (1988)
15. Nualart, D., Wschebor, M.: Intégration par parties dans l'espace de Wiener et approximation du temps local. Probab. Theory Relat. Fields, 90, 83–109 (1991)
16. Piterbarg, V.I.: Asymptotic Methods in the Theory of Gaussian Processes and Fields. American Mathematical Society, Providence, RI (1996)
17. Rice, S.O.: Mathematical analysis of random noise. Bell System Technical J., 23, 282–332 (1944); 24, 45–156 (1945)
18. Tsirelson, V.S.: The density of the maximum of a Gaussian process. Theory Probab. Appl., 20, 817–856 (1975)
19. Weber, M.: Sur la densité du maximum d'un processus gaussien. J. Math. Kyoto Univ., 25, 515–521 (1985)
20. Wschebor, M.: Surfaces aléatoires. Mesure géométrique des ensembles de niveau. Lecture Notes in Mathematics, 1147, Springer-Verlag (1985)
21. Ylvisaker, D.: A note on the absence of tangencies in Gaussian sample paths. Ann. Math. Statist., 39, 261–262 (1968)