
Preface

There is a long history of the application of monotone methods and comparison arguments in deterministic dynamical systems (see, e.g., Smith [102] and
the literature quoted there). Monotonicity methods are now fully integrated
within the framework of deterministic dynamical systems theory. The situ-
ation is quite different for random systems and stochastic differential equa-
tions. Monotonicity arguments have mainly been used for one-dimensional
random or stochastic differential equations, relying on well-known compar-
ison theorems for solutions of one-dimensional ordinary random (see, e.g.,
Ladde/Lakshmikantham [75]) or stochastic (see, e.g., Ikeda/Watanabe
[57]) equations. In particular, these theorems and also the analysis of some
explicitly solvable models make it possible to give a complete description of
random attractors and bifurcation scenarios for several rather complicated
situations (see, e.g., Arnold [3, Chap.9]).
Let us also mention that products of positive random matrices have been
the subject of numerous studies (comprising, in particular, a random version
of Perron-Frobenius theory) with applications notably in economics and bi-
ology (for a survey, see Arnold/Demetrius/Gundlach [8]). Kellerer
[65] found that independent identically distributed iterations of monotone
random mappings on R+ are a model ideally suited for extending discrete
Markov chain theory to uncountable state spaces.
Our main goal in this book is to present the basic ideas and methods for
order-preserving (or monotone) random dynamical systems that have been
developed over the past few years. We focus on the qualitative behaviour of
these systems and our main objects are equilibria and attractors.
There is a deep analogy between the theory of random dynamical sys-
tems and the classical theory of dynamical systems. This analogy makes it
possible to develop qualitative theory for stochastic systems relying on ideas
of classical dynamical systems. In this book we try to expose this analogy in
a clear and transparent way. We hope it makes the book accessible not only
to experts in stochastic analysis but also to people working in the field of
deterministic dynamical systems. It provides a bridge from classical theory
to stochastic dynamics and it can be also used as an introductory textbook
on random dynamical systems at the graduate level.
Our main application is to the so-called cooperative random and stochastic ordinary differential equations. These systems arise naturally from math-
ematical models in the field of ecology, epidemiology, economics and bioche-
mistry (see, e.g., the literature quoted in Smith [102]). Deterministic co-
operative differential equations have been studied by many authors (see,
e.g., Smith [102] and the references therein). The books by Krasnoselskii
[68, 69] and the series of papers by Hirsch [52, 53, 54] (see also the references
in Smith [102]) lay the groundwork for the qualitative theory of deterministic
cooperative systems. Monotone methods and comparison arguments are of
prime importance in the study of these systems.
The results presented in this book rely on ideas and methods developed
in collaboration with Ludwig Arnold (see Arnold/Chueshov [5], [6] and
[7]). The author is extremely grateful to him for very stimulating and fruitful
discussions on the subject. Warmest thanks are also due to Gunter Ochs,
James Robinson and Björn Schmalfuss for their comments and suggestions,
all of which improved the book.
The book was written while the author was spending the 2000/2001 aca-
demic year at the Institut für Dynamische Systeme, Universität Bremen. He
would like to thank the people at that institution for their very kind hospital-
ity during this period. He also gratefully acknowledges the financial support
of the Deutsche Forschungsgemeinschaft.
September 2001 Igor Chueshov
Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1. General Facts about Random Dynamical Systems . . . . . . . . 9


1.1 Metric Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Concept of RDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 Random Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4 Dissipative, Compact and Asymptotically Compact RDS . . . . 24
1.5 Trajectories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.6 Omega-limit Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.7 Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.8 Random Attractors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.9 Dissipative Linear and Affine RDS . . . . . . . . . . . . . . . . . . . . . . . . 45
1.10 Connection Between Attractors and Invariant Measures . . . . . 49

2. Generation of Random Dynamical Systems . . . . . . . . . . . . . . . 55


2.1 RDS Generated by Random Differential Equations . . . . . . . . . . 55
2.2 Deterministic Invariant Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3 The Itô and Stratonovich Stochastic Integrals . . . . . . . . . . . . . . 65
2.4 RDS Generated by Stochastic Differential Equations . . . . . . . . 70
2.5 Relations Between RDE and SDE . . . . . . . . . . . . . . . . . . . . . . . . 76

3. Order-Preserving Random Dynamical Systems . . . . . . . . . . . 83


3.1 Partially Ordered Banach Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2 Random Sets in Partially Ordered Spaces . . . . . . . . . . . . . . . . . . 88
3.3 Definition of Order-Preserving RDS . . . . . . . . . . . . . . . . . . . . . . . 93
3.4 Sub-Equilibria and Super-Equilibria . . . . . . . . . . . . . . . . . . . . . . 95
3.5 Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.6 Properties of Invariant Sets of Order-Preserving RDS . . . . . . . 105
3.7 Comparison Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

4. Sublinear Random Dynamical Systems . . . . . . . . . . . . . . . . . . . 113


4.1 Sublinear and Concave RDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.2 Equilibria and Semi-Equilibria for Sublinear RDS . . . . . . . . . . . 116
4.3 Almost Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

4.4 Limit Set Trichotomy for Sublinear RDS . . . . . . . . . . . . . . . . . . 125


4.5 Random Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.6 Positive Affine RDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

5. Cooperative Random Differential Equations . . . . . . . . . . . . . . 143


5.1 Basic Assumptions and the Existence Theorem . . . . . . . . . . . . . 143
5.2 Generation of RDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3 Random Comparison Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.4 Equilibria, Semi-Equilibria and Attractors . . . . . . . . . . . . . . . . . 156
5.5 Random Equations with Concavity Properties . . . . . . . . . . . . . . 160
5.6 One-Dimensional Explicitly Solvable Random Equations . . . . . 166
5.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.7.1 Random Biochemical Control Circuit . . . . . . . . . . . . . . . 171
5.7.2 Random Gonorrhea Model . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.7.3 Random Model of Symbiotic Interaction . . . . . . . . . . . . . 176
5.7.4 Random Gross-Substitute System . . . . . . . . . . . . . . . . . . 178
5.8 Order-Preserving RDE with Non-Standard Cone . . . . . . . . . . . 180

6. Cooperative Stochastic Differential Equations . . . . . . . . . . . . 185


6.1 Main Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.2 Generation of Order-Preserving RDS . . . . . . . . . . . . . . . . . . . . . . 186
6.3 Conjugacy with Random Differential Equations . . . . . . . . . . . . 188
6.4 Stochastic Comparison Principle . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.5 Equilibria and Attractors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.6 One-Dimensional Stochastic Equations . . . . . . . . . . . . . . . . . . . . 199
6.6.1 Stochastic Equations on R+ . . . . . . . . . . . . . . . . . . . . . . . 199
6.6.2 Stochastic Equations on a Bounded Interval . . . . . . . . . 206
6.7 Stochastic Equations with Concavity Properties . . . . . . . . . . . . 214
6.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.8.1 Stochastic Biochemical Control Circuit . . . . . . . . . . . . . . 219
6.8.2 Stochastic Gonorrhea Model . . . . . . . . . . . . . . . . . . . . . . . 221
6.8.3 Stochastic Model of Symbiotic Interaction . . . . . . . . . . . 222
6.8.4 Lattice Models of Statistical Mechanics . . . . . . . . . . . . . 223

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Introduction

The state of many physical, chemical and biological systems can be described
by a single time-dependent variable x(t) which satisfies the ordinary differ-
ential equation
ẋ(t) = f (λ, x(t)) . (1)
This equation depends on parameters λ = (λ1 , . . . λm ) which characterize the
properties of the environment and are usually called external parameters. For
example, the equation
ẋ = αx − x3 (2)
can be used to describe the growth of a biological population. It contains the
parameter α ∈ R which takes into account the properties of the environment.
If there is an existence and uniqueness theorem for (1), then we can de-
fine an evolution operator St in R by the formula St x0 = x(t; x0 ), where
x(t; x0 ) is the solution to (1) with x(0; x0 ) = x0 . The uniqueness theorem for
(1) and the fact that R is a totally ordered set imply that one-dimensional
equations generate monotone (or order-preserving) dynamical systems, i.e.
St x1 ≥ St x2 provided x1 ≥ x2 . This property drastically simplifies the dy-
namics. For example, for equation (2) we have either one or three equilibrium
points depending on the parameter α and every solution is attracted by an
equilibrium in a monotone way. Indeed, it is easy to see that any solution to
(2) with initial data x0 has the form

x(t) = x0 e^{αt} · (1 + x0^2 · α^{−1} · (e^{2αt} − 1))^{−1/2} .

Therefore we have that (i) if α < 0, then x(t) → 0 as t → +∞ for any initial
data x0 ; (ii) if α > 0 and x0 > 0, then x(t) → √α as t → +∞; and (iii) if
α > 0 and x0 < 0, then x(t) → −√α as t → +∞. Thus for the case α < 0
we have a unique globally asymptotically stable equilibrium and in the case
α > 0 we have two stable equilibria and one unstable. In the latter case the
global attractor is the interval [−√α, √α] (by definition, the global attractor
is a strictly invariant set which uniformly attracts every bounded set). Thus
equilibria and their stability properties completely determine the long-time
dynamics of the system.
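Let us also note that the formula for x(t) given above can be checked by the standard Bernoulli-type substitution u = x^{−2} : it gives u̇ = −2x^{−3} ẋ = −2αu + 2, hence u(t) = x0^{−2} e^{−2αt} + α^{−1}(1 − e^{−2αt}), which is exactly the reciprocal of the square of the expression above (for x0 ≠ 0).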

Similar behaviour is observed for one-dimensional systems with discrete time which are generated from a nondecreasing continuous mapping f : R →
R via the formula
xn+1 = f (xn ), n = 0, 1, . . . .
The situation becomes more complicated in the d-dimensional case. The phase
space Rd is a partially ordered set with respect to the natural order relation
(x = (x1 , . . . , xd ) ≥ 0 if and only if xi ≥ 0 for all i) and there is no mono-
tonicity in general. For example, it is easy to see that the linear system

ẋ1 = a11 x1 + a12 x2 ,
ẋ2 = a21 x1 + a22 x2 ,

produces solutions which are monotone with respect to initial data if and only
if a12 ≥ 0 and a21 ≥ 0. Nevertheless monotone multi-dimensional ordinary
differential equations cover important classes of mathematical models arising
in modern natural science (see discussion in Smith [102]). The mathematical
theory of deterministic monotone (order-preserving) systems is presently well-
developed due to the efforts of many authors (see, e.g., Krasnoselskii [69],
Hirsch [52, 53, 54] and also Smith [102] and the references therein). A well-
posed autonomous system of ordinary differential equations

ẋi = fi (x1 , . . . , xd ), i = 1, . . . , d ,

generates an order-preserving (with respect to the natural order relation) dynamical system in Rd if and only if the mapping x → (f1 (x), . . . , fd (x))
from Rd into itself is cooperative (quasi-monotone), i.e.

fi (x1 , . . . , xd ) ≤ fi (y1 , . . . , yd )

for all (x1 , . . . , xd ) and (y1 , . . . , yd ) from Rd such that xi = yi and xj ≤ yj for
j ≠ i, where i = 1, . . . , d. For example, this relation holds for the following
system of differential equations

ẋ1 (t) = g(xd (t)) − α1 x1 (t) ,
ẋj (t) = xj−1 (t) − αj xj (t), j = 2, . . . , d ,

where αj > 0 for j = 1, . . . , d and g(xd ) is a nondecreasing function. A system of this type provides a simple model for positive feedback in a biochemical control circuit (see, e.g., Selgrade [96] and Smith [102] and the references
therein). The variables xj , j = 1, . . . , d−1, could represent the concentrations
of a sequence of enzymes and xd , the concentrations of their substrate.
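Indeed, for this system the cooperativity condition above is straightforward to check: the right-hand side of the first equation, g(xd ) − α1 x1 , is nondecreasing in xd because g is nondecreasing; the right-hand side of the j-th equation, xj−1 − αj xj , is nondecreasing in xj−1 ; and no right-hand side depends on the remaining off-diagonal variables.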
It was shown by Hirsch [52] that generic solutions to some classes of
monotone systems converge to the set of equilibria. Thus, as in the one-
dimensional case, we observe some simplification in the long-time dynam-
ics. However an important construction due to Smale (see, e.g., Smith [102,
Chap.4]) shows that any (complicated) dynamics can occur on unstable in-
variant sets for monotone systems of sufficiently large dimension.
If the system is coupled to a fluctuating environment, then external
parameters can become stochastic quantities. In many cases these quanti-
ties can be presented as stationary random processes. We refer to Hors-
themke/Lefever [55, Chap.1] for a detailed discussion on the nature and
sources of randomness in dynamical systems.
Thus taking into account random fluctuations of the environment for the
system described by (1) leads to the equation

ẋ(t) = f (λ0 + ξ(t, ω), x(t)) ,

where λ0 corresponds to the mean state of the environment and the station-
ary process ξ(t, ω) with zero expectation on some probability space (Ω, F, P)
describes environmental fluctuations around this mean state. For example,
equation (2) turns into

ẋ = (α + ξ(t, ω)) · x − x3 . (3)

As above we can show that the process x(t; ω, x0 ) which solves (3) with initial
data x0 has the form

x(t; ω, x0 ) = x0 exp{αt + η(t, ω)} · (1 + 2x0^2 ∫_0^t exp{2αs + 2η(s, ω)} ds)^{−1/2} ,

where η(t, ω) = ∫_0^t ξ(τ, ω) dτ . It is clear that the solutions x(t; ω, x0 ) depend
on x0 in a monotone way, i.e. the relation x0 ≥ x∗0 implies that x(t; ω, x0 ) ≥
x(t; ω, x∗0 ).
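A minimal numerical sketch of this monotone dependence (our own illustration; the Euler discretization, the concrete Ornstein-Uhlenbeck-type choice of the noise and all variable names below are assumptions made for the example only, not part of the text) can be written as follows:

    import numpy as np

    # Euler scheme for dx/dt = (alpha + xi(t))*x - x**3 with two ordered
    # initial data; xi is a crude stationary noise with zero mean.
    rng = np.random.default_rng(0)
    alpha, dt, n_steps = 1.0, 1e-3, 10_000

    def drift(x, xi):
        return (alpha + xi) * x - x ** 3

    xi = 0.0
    x_lo, x_hi = 0.1, 2.0                 # ordered initial data: x_lo <= x_hi
    for _ in range(n_steps):
        xi += -xi * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
        x_lo += dt * drift(x_lo, xi)
        x_hi += dt * drift(x_hi, xi)
        assert x_lo <= x_hi               # the order of the solutions is preserved

    print(x_lo, x_hi)                     # both settle near sqrt(alpha) = 1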
Assume that the strong law of large numbers is valid for the process
ξ(t, ω), i.e. t−1 η(t, ω) → 0 almost surely as t → +∞. Then it is easy to see
that all solutions x(t; ω, x0 ) tend to 0 almost surely as t → +∞ in the case
α < 0. In the case α > 0 the situation is a bit more complicated. How-
ever it is possible to prove (see Arnold [3, Chap.9]) that there exists a
stationary process ζ(t, ω) > 0 which solves equation (3) and such that the
interval [−ζ(t, ω), ζ(t, ω)] is a globally attracting set in some sense. Thus sta-
tionary solutions to equation (3) play the role of equilibria and the interval
[−ζ(t, ω), ζ(t, ω)] should be treated as a global attractor. As we will see in
Chap.3, a similar picture is inherent in some classes of multi-dimensional
monotone systems with both continuous and discrete time. However it is
well to bear in mind that random monotone systems may display the long-
time behaviour which is impossible in deterministic (autonomous or periodic)
order-preserving systems. As an example we can consider the following dif-
ferential equation
ẋ = ξ(t, ω) · x(1 − x)
in the interval [0, 1] ⊂ R. Under some conditions concerning the stationary random process ξ(t, ω) the omega-limit set for any point from the open inter-
val (0, 1) is a non-trivial completely ordered set. We refer to Example 3.6.1
and Sects.5.6 and 6.6 for details. This phenomenon does not take place in
deterministic strongly order-preserving systems (see Smith [102]) and this is
one of the obstacles which prevent the direct extension of the results available
for deterministic monotone systems.
To make the analogy with the deterministic case more precise it is con-
venient to involve the modern concept (see Arnold [3] and also Sect.1.2
below) of a random dynamical system. This concept covers the most im-
portant families of dynamical systems with randomness, including random
and stochastic ordinary and partial differential equations and random differ-
ence equations, and makes it possible to study randomness in the framework
of classical dynamical systems theory with all its powerful machinery. Ran-
domness could describe environmental or parametric perturbations, internal
fluctuations, measurement errors, or just lack of knowledge. The theory of
random dynamical systems has been developed intensively in recent years
and contains a lot of interesting and deep results. From a probabilistic point
of view this theory offers a new approach to the study of qualitative proper-
ties of stochastic differential equations. It became possible due to important
results on two-parameter flows generated by stochastic equations (see, e.g.,
Belopolskaya/Dalecky [15], Elworthy [43], Kunita [74] and the liter-
ature quoted there). For a detailed discussion of the theory and applications
of random dynamical systems we refer to the monograph Arnold [3].
To present a clear explanation of the general concept of a random dynam-
ical system (see Sect.1.2 for the formal definition) we consider the following
simple discrete dynamical system.
Assume that f0 and f1 are continuous mappings of a metric space X into
itself. Let us consider X as the state space of some system that evolves as
follows: if x is the state of the system at time k then its state at time k + 1 is
either f0 (x) or f1 (x) with probability 1/2 and the choice of f0 or f1 does not
depend on time and the previous states. We can find the state of the system
after a number of steps in time if we flip a coin and write down the sequence
of events from right to left using 0 and 1. Assume, for example, that after 7
flips we get the following set of outcomes: 1001101. Here 1 corresponds to the coin landing heads and 0 to tails. Then the state of the system at time 7 will be written in the form

y = (f1 ◦ f0 ◦ f0 ◦ f1 ◦ f1 ◦ f0 ◦ f1 )(x).

This construction can be formalized as follows. Let Ω be the set of two-sided sequences ω = {ωi | i ∈ Z} consisting of zeros and ones. On the set Ω there
is a probability measure P such that

P(Ci1 ...im ) = P0 (C1 ) · . . . · P0 (Cm )



for any “cylindrical” set

Ci1 ...im = {ω | ωik ∈ Ck , k = 1, . . . , m} ,

where Ck is one of the sets ∅, {0}, {1}, {0, 1} and P0 (∅) = 0, P0 ({0}) =
P0 ({1}) = 1/2, P0 ({0, 1}) = 1. Here {i1 , . . . , im } is an arbitrary m-tuple of
integers. For every n ∈ Z we denote by θn the left shift operator in Ω, i.e.

θn {ωi | i ∈ Z} = {ωi+n | i ∈ Z}, n∈Z.

It is clear that the shift operator preserves probabilities of sets from Ω.


For each n ∈ Z+ we define the mapping πn of Ω × X into itself by the formula

πn = π1 ◦ πn−1 , n ∈ N, π0 = id ,

where π1 (ω, x) = (θ1 ω, fω0 (x)). This mapping πn can be written in the form

πn (ω, x) = (θn ω, ϕ(n, ω)x) , (4)

where ϕ(n, ω) is defined by the formula

ϕ(n, ω) = fωn−1 ◦ fωn−2 ◦ . . . ◦ fω1 ◦ fω0 , ω = {ωi | i ∈ Z}, n∈N,

and satisfies the cocycle property

ϕ(0, ω) = id, ϕ(n + m, ω) = ϕ(n, θm ω) ◦ ϕ(m, ω)

for all n, m ∈ Z+ and ω ∈ Ω. The pair (θn , ϕ(n, ω)) is called a random
dynamical system with discrete time. The mapping θn models the evolution of
some random environment and ϕ(n, ω) describes the dynamics of the system.
If X = R and f0 and f1 are nondecreasing functions, then the mappings
ϕ(n, ω) are order preserving, i.e. the relation x1 ≥ x2 implies that ϕ(n, ω)x1 ≥
ϕ(n, ω)x2 for all n ∈ Z+ and ω ∈ Ω.
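The construction above is easy to experiment with; the following sketch (a purely illustrative example of ours, with two arbitrarily chosen nondecreasing maps f0 and f1 and made-up names) builds ϕ(n, ω) from a finite 0-1 sample path and checks the cocycle property numerically:

    import numpy as np

    # Two nondecreasing maps on R and a sample path of coin flips.
    f = {0: lambda x: 0.5 * x,      # f0 (nondecreasing)
         1: lambda x: x + 1.0}      # f1 (nondecreasing)
    rng = np.random.default_rng(1)
    omega = rng.integers(0, 2, size=64)    # omega_0, omega_1, ... (one-sided part)

    def phi(n, omega, x):
        """phi(n, omega) x = f_{omega_{n-1}} o ... o f_{omega_1} o f_{omega_0} (x)."""
        for i in range(n):
            x = f[omega[i]](x)
        return x

    def theta(m, omega):
        """The shift theta_m: drop the first m symbols of the sequence."""
        return omega[m:]

    n, m, x = 7, 5, 0.3
    lhs = phi(n + m, omega, x)
    rhs = phi(n, theta(m, omega), phi(m, omega, x))
    assert np.isclose(lhs, rhs)            # phi(n+m, omega) = phi(n, theta_m omega) o phi(m, omega)
    # Order preservation: x1 >= x2 implies phi(n, omega) x1 >= phi(n, omega) x2.
    assert phi(10, omega, 2.0) >= phi(10, omega, -1.0)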
It is easy to see that ϕ satisfies the cocycle property if and only if πn given
by (4) is a semigroup, i.e. πn ◦ πm = πn+m for n, m ∈ Z+ . Thus we obtain a
dynamical system in the classical sense (i.e. a semiflow of mappings from some
space into itself). We note that semiflows of a similar structure (see (4)) arise
in the theory of nonautonomous (deterministic) differential equations and
they are known as skew-product flows (see, e.g., Chicone/Latushkin [19]
and the references therein). This observation is important in the study of the
long-time behaviour of random dynamical systems.
The aim of this book is to present a recently developed approach which is
suitable for investigating a variety of qualitative aspects of order-preserving
random dynamical systems and to give the background for further develop-
ment of the theory. We try to demonstrate the effectiveness of this approach
by analyzing the long-time behaviour of some classes of random and stochastic ordinary differential equations which arise in many applications.
Although most of the general results in this book are proved for infinite-
dimensional phase spaces, our examples and applications deal with finite-
dimensional systems only. The book does not treat order-preserving random
dynamical systems generated by random and stochastic partial differential
equations. We refer to the papers Chueshov [21] and Chueshov/Vuiller-
mot [24, 25, 26], which are devoted to the application of monotone methods
and comparison arguments to the study of long-time behaviour of random
and stochastic parabolic PDEs (see also Chueshov [22] and Shen/Yi [99],
where similar approaches are used for nonautonomous parabolic equations).
Now we describe the structure of the book.
We start with the preliminary Chapter 1 devoted to a description of some
background material from the general theory of random dynamical systems
and to a discussion of the simplest examples. Some results presented in this
chapter are given without proofs. However for the sake of completeness we
prove the theorem on the existence of a random (pull back) attractor. We
also prove here several auxiliary facts which are important in our subsequent
considerations. They are mainly concerned with measurability of trajectories
and invariant sets. For a more detailed presentation on random dynamical
systems we refer to the book Arnold [3].
In Chapter 2 we describe results on the generation of random dynamical
systems by random and stochastic ordinary differential equations. We mainly
follow the presentation given in Arnold [3, Chap.2] and invoke some clas-
sical results on stochastic equations (see, e.g., Ikeda/Watanabe [57] and
Kunita [74]). We also prove a theorem on the existence of invariant deter-
ministic domains for these equations and consider relations between random
and stochastic differential equations. The reader who is primarily interested
in the general theory of order-preserving random dynamical systems can omit
this chapter on first reading.
Chapter 3 is central to the book. We develop here the general theory of
order-preserving random dynamical systems. We first consider properties of
partially ordered vector spaces and prove some auxiliary results concerning
random sets in these spaces. After that we introduce the concept of an order-
preserving random dynamical system and study properties of sub- and super-
equilibria for these systems. We prove a theorem on the existence of equilibria
between two ordered sub- and super-equilibria. These semi-equilibria are also
proved to be very useful in the description of random attractors for the sys-
tems considered.
In Chapter 4 we study the asymptotic behavior of order-preserving ran-
dom systems which have an additional concavity property called sublinearity
(or subhomogeneity), frequently encountered in applications. Sublinear ran-
dom systems are contractive with respect to some metric which is defined
on parts of the cone. This implies that random equilibria are unique and
asymptotically stable in each part of the cone. Our main result here is a
random limit set trichotomy, stating that in a given part either (i) all orbits
are unbounded, (ii) all orbits are bounded but their closure reaches out to
the boundary of the part, or (iii) there exists a unique, globally attracting
equilibrium. Several examples, including Markov chains and affine systems,
are given.
In Chapters 5 and 6 we apply the results of Chapters 3 and 4 to study the
qualitative behaviour of random and stochastic perturbations of cooperative
ordinary differential equations. These applications are the main motivations
for the development of the general theory presented in Chapters 3 and 4
and we believe that random and stochastic cooperative differential equations
merit a detailed study of their own.
In Chapter 5 we consider random cooperative differential equations in
Rd+ (real noise case). We first give conditions under which these equations
generate order-preserving random dynamical systems in Rd+ and then study
monotonicity properties of these systems. We prove several theorems on the
existence of equilibria and random attractors. Systems with concavity prop-
erties are also considered. We apply general results from Chapters 3 and 4 to
study the long-time behaviour of these systems and to obtain the limit set
trichotomy theorem for random cooperative differential equations. We con-
clude Chapter 5 with a series of examples including a class of one-dimensional
explicitly solvable equations to show possible scenarios of the long-time be-
haviour in monotone systems.
Chapter 6 is devoted to stochastic cooperative differential equations
(white noise case). The hypotheses that guarantee order-preserving prop-
erties for this case lead to a special structure of the diffusion terms. In fact
we consider here some class of stochastic perturbations of deterministic au-
tonomous cooperative differential equations. We prove several assertions on
the long-time behaviour, investigate properties of systems that possess con-
cavity properties and establish a stochastic version of the limit set trichotomy
theorem. We study the long-time dynamics of one-dimensional equations in detail. We also discuss the stochastic versions of certain examples consid-
ered in the previous chapter. Although the results for the stochastic case are
similar to the random case, Chapter 6 is not at all a duplication of Chapter 5
because the methods of proof are quite different.
1. General Facts about Random Dynamical
Systems

In this chapter we recall some basic definitions and facts about random dy-
namical systems. For a more detailed discussion of the theory and applica-
tions of random dynamical systems we refer to the monograph Arnold [3].
We pay particular attention to dissipative systems and their random (pull
back) attractors. These attractors were studied by many authors (see, e.g.,
Arnold [3], Crauel/Debussche/Flandoli [35], Crauel/Flandoli [36],
Schenk-Hoppé [89], Schmalfuss [92, 93] and the references therein). The
ideas that lead to the concept of a random attractor have their roots
in the theory of deterministic dissipative systems which has been suc-
cessfully developed in the last few decades ( see, e.g., the monographs
Babin/Vishik [13], Chueshov [20], Hale [50], Temam [104] and the liter-
ature quoted therein). The proof of the existence of random attractors given
below follows almost step-by-step the corresponding deterministic argument
(see, e.g., Chueshov [20], Temam [104]).
Throughout this book we will be concerned with a probability space by
which we mean a triple (Ω, F, P), where Ω is a space, F is a σ-algebra of
sets in Ω, and P is a nonnegative σ-additive measure on F with P(Ω) = 1.
We do not assume in general that the σ-algebra is complete. Below we will
also use the symbol T for either R or Z and we will denote by T+ all non-
negative elements of T. We will denote by B(X) the Borel σ-algebra of sets
in a topological space X. By definition B(X) is the σ-algebra generated by
the collection of open subsets of X. If (X1 , F1 ) and (X2 , F2 ) are measurable
spaces, we denote by F1 × F2 the product σ-algebra of subsets in X1 × X2
which is defined as the σ-algebra generated by the cylinder sets A = A1 × A2 ,
Ai ∈ Fi . We refer to Cohn [30] for basic definitions and facts from the
measure theory.

1.1 Metric Dynamical Systems

A random dynamical system is an object consisting of a metric dynamical system and a cocycle over this system. We need a metric dynamical system for modeling random perturbations.


Definition 1.1.1. A metric dynamical system (MDS) θ ≡ (Ω, F, P, {θt , t ∈ T}) with (two-sided) time T is a probability space (Ω, F, P) with a family of transformations {θt : Ω → Ω, t ∈ T} such that
1. it is a one-parameter group, i.e.

θ0 = id, θt ◦ θs = θt+s for all t, s ∈ T ;

2. (t, ω) → θt ω is measurable;
3. θt P = P for all t ∈ T, i.e. P(θt B) = P(B) for all B ∈ F and all t ∈ T.
A set B ∈ F is called θ-invariant if θt B = B for all t ∈ T. A metric dynamical
system θ is said to be ergodic under P if for any θ-invariant set B ∈ F we
have either P(B) = 0 or P(B) = 1.

We refer to Cornfeld/Fomin/Sinai [29], Mañé [79], Rudolph [88], Sinai [100] and Walters [106] for the references and presentation of MDS
and ergodic theory.
From an applied point of view the use of metric dynamical systems to
model external perturbations assumes implicitly that the external influence
is stationary in some sense (see examples below). This means that we do
not consider possible transient (random) processes in the environment, i.e.
we assume that all these processes are finished before we start to observe
the dynamics of our system. This is also the reason why we consider MDS
with two-sided time. We note that any one-sided MDS (with time T+ ) pos-
sesses a natural two-sided extension (see, e.g., Cornfeld/Fomin/Sinai [29,
Sect.10.4] or Arnold [3, Appendix A]).
Now we give several important examples of metric dynamical systems.
They show what kind of time dependence we can allow in the equations
considered in Chaps.5 and 6.

Example 1.1.1 (Periodic Case). Consider the probability space (Ω, F, P),
where Ω is a circle of unit circumference, F is its σ-algebra of Borel sets and P
is the Lebesgue measure on Ω. Let {θt , t ∈ R} be the group of rotations of the
circle. It is easy to see that we obtain an ergodic MDS (Ω, F, P, {θt , t ∈ R})
with continuous time.

Example 1.1.2 (Quasi-Periodic Case). Let Ω be the d-dimensional torus, Ω = Tord . Assume that its points are written as x = (x1 , . . . , xd ) with xi ∈ [0, 1).
Let F be the σ-algebra of Borel sets of Tord and P be the Lebesgue measure
on Tord . We define transformations {θt , t ∈ T} by the formula

θt x = (x1 + t · a1 (mod 1), . . . , xd + t · ad (mod 1)), t∈T,

for a given a = (a1 , . . . , ad ). Thus we obtain an MDS. If the numbers a1 , . . . , ad , 1 are rationally independent, then this MDS is ergodic (see, e.g.,
Rudolph [88]).

Example 1.1.3 (Almost Periodic Case). Let f (x) be a Bohr almost periodic
function on R. We define the hull H(f ) of the function f as the closure of
the set {f (x + t), t ∈ R} in the norm ‖f ‖ = supx∈R |f (x)|. The hull H(f ) is
a compact metric space, and it has a natural commutative group structure.
Therefore it possesses a Haar measure which, if normalized to unity, makes
H(f ) into a probability space. If we define transformations {θt , t ∈ T} as shifts:
(θt g)(x) = g(x + t), g ∈ H(f ), we obtain an ergodic MDS with continuous
time. For details we refer to Ellis [42] and Levitan/Zhikov [77].

Example 1.1.4 (Ordinary Differential Equations). An MDS can also be generated by ordinary differential equations (ODE). Let us consider a system of
ODEs in Rn :
dxi /dt = fi (x1 , . . . , xn ), i = 1, . . . , n . (1.1)
Assume that the Cauchy problem for this system is well-posed. We define
{θt , t ∈ R} by the formula θt x = x(t), where x(t) is the solution of (1.1) with
x(0) = x. Assume that a nonnegative smooth function ρ(x1 , . . . , xn ) satisfies
the stationary Liouville equation
Σ_{i=1}^{n} ∂/∂xi ( ρ(x1 , . . . , xn ) · fi (x1 , . . . , xn ) ) = 0 (1.2)

and possesses the property ∫_{Rn} ρ(x) dx = 1. Then ρ(x) is a density of a
probability measure on Rn . By Liouville’s theorem
 
∫_{Rn} f (θt x)ρ(x) dx = ∫_{Rn} f (x)ρ(x) dx

for any bounded continuous function f (x) on Rn and therefore in this situ-
ation an MDS arises with Ω = Rn , F = B(Rn ) and P(dx) = ρ(x)dx. Here
B(Rn ) is the Borel σ-algebra of sets in Rn . Sometimes it is also possible
to construct an MDS connected with the system (1.1), when the solution
ρ to (1.2) is not integrable but the problem (1.1) possesses a first integral
(e.g., if (1.1) is a Hamiltonian system) with appropriate properties (see, e.g.,
Mañé [79] or Sinai [100] for details).
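A simple illustration: for the planar system ẋ1 = x2 , ẋ2 = −x1 (the harmonic oscillator) the Gaussian density ρ(x1 , x2 ) = (2π)^{−1} exp{−(x1^2 + x2^2 )/2} satisfies (1.2), since ∂/∂x1 (ρ x2 ) + ∂/∂x2 (−ρ x1 ) = −x1 x2 ρ + x1 x2 ρ = 0, and it integrates to one; hence it generates an MDS of the type just described.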

Example 1.1.5 (Bernoulli Shifts). Let (Ω0 , F0 , P0 ) be a probability space and (Ω, F, P) be the probability space of infinite sequences ω = {ωi }, where
ωi ∈ Ω0 , i ∈ Z. Here F is the σ-algebra generated by finite-dimensional
cylinders
Ci1 ...im = {ω | ωik ∈ Ck , k = 1, . . . , m} ,
where Ck ∈ F0 and {i1 , . . . , im } is an arbitrary m-tuple of integers. The
probability measure P is defined such that P(Ci1 ...im ) = P0 (C1 ) · . . . · P0 (Cm ).
We define transformations {θt , t ∈ Z} by the formula θt ω = ω ∗ , where ω = {ωi } and ω ∗ = {ωi+t }. Since
θt Ci1 ...im = {ω | ωik −t ∈ Ck , k = 1, . . . , m} ,
the probability measure P is invariant under θt . Thus we obtain an MDS.
In the particular case when Ω0 = {0, 1} is a two-point set and P0 ({0}) =
P0 ({1}) = 1/2, we have the standard Bernoulli shift. In the general case we
can interpret this MDS as one generated by an infinite sequence of indepen-
dent identically distributed random variables.
Example 1.1.6 (Stationary Random Process). Let ξ = {ξ(t), t ∈ T} be a
stationary random process on a probability space (Ω, F, P), where F is the
σ-algebra generated by ξ. Assume that in the continuous case (T = R) the
process ξ possesses the càdlàg property: all trajectories are right-continuous
and have limits from the left. Then the shifts ξ(t) → (θτ ξ)(t) = ξ(t + τ )
generate an MDS. See Arnold [3] and the references therein for details.
In the framework of stochastic equations the following example of an MDS
is of importance.
Example 1.1.7 (Wiener Process). Let Wt = (Wt1 , . . . , Wtd ) be a Wiener pro-
cess with values in Rd and two-sided time R. Let (Ω, F, P) be the corre-
sponding canonical Wiener space. More precisely, let C0 (R, Rd ) be the space
of continuous functions ω from R into Rd such that ω(0) = 0 endowed with
the compact-open topology, i.e. with the topology generated by the metric
ϱ(ω, ω ∗ ) := Σ_{n=1}^{∞} 2^{−n} · ϱn (ω, ω ∗ )/(1 + ϱn (ω, ω ∗ )) , ϱn (ω, ω ∗ ) = max_{t∈[−n,n]} |ω(t) − ω ∗ (t)| .

Let F̃ be the corresponding Borel σ-algebra of C0 (R, Rd ), and let P be the Wiener measure on F̃. We suppose Ω is the subset in C0 (R, Rd ) consisting of
the functions that have a growth rate less than linear for t → ±∞ and F is
the restriction of F̃ to Ω. In this realization Wt (ω) = ω(t), where ω(·) ∈ Ω,
i.e. the elements of Ω are identified with the paths of the Wiener process.
We define a metric dynamical system θ by θt ω(·) := ω(t + ·) − ω(t). These
transformations preserve the Wiener measure and are ergodic. Thus we have
an ergodic MDS. The flow {θt } is called the Wiener shift. We note that the σ-
algebra F is not complete with respect to P and we cannot use its completion
F̄P to construct MDS because (t, ω) → θt ω is not a measurable mapping
from (R × Ω, B(R) × F̄P ) into (Ω, F̄P ). This is one of the reasons why the
completeness of F is not assumed in the basic definitions. See Arnold [3]
for details. We also note that this realization of a Wiener process makes it
possible to introduce the white noise process as the derivative Ẇt of Wt with
respect to t in the sense of generalized functions. From an applied point of
view white noise processes correspond to an extremely short memory of the
environment in comparison with the memory of the system (see the discussion
in Horsthemke/Lefever [55], for instance).

1.2 Concept of RDS

Let X be a Polish space, i.e. a separable complete metric space. We equip X with the Borel σ-algebra B = B(X) generated by open sets of X. We
need the following concept of a (continuous) random dynamical system (cf.
Arnold [3]).
Definition 1.2.1 (Random Dynamical System). A random dynamical
system (RDS) with (one-sided) time T+ and state (phase) space X is a pair
(θ, ϕ) consisting of a metric dynamical system θ ≡ (Ω, F, P, {θt , t ∈ T}) and a
cocycle ϕ over θ of continuous mappings of X with time T+ , i.e. a measurable
mapping
ϕ : T+ × Ω × X → X, (t, ω, x) → ϕ(t, ω, x) ,
such that
(i) the mapping x → ϕ(t, ω, x) ≡ ϕ(t, ω)x is continuous for every t ≥ 0 and
ω ∈ Ω,
(ii) the mappings ϕ(t, ω) := ϕ(t, ω, ·) satisfy the cocycle property:

ϕ(0, ω) = id, ϕ(t + s, ω) = ϕ(t, θs ω) ◦ ϕ(s, ω)

for all t, s ∈ T+ and ω ∈ Ω. Here ◦ means composition of mappings.


We emphasize the following peculiarities of this definition.

Remark 1.2.1. (i) While the metric dynamical system (modeling the random
perturbations) is assumed to have two-sided time T = R or Z, the cocycle is
only required to have one-sided time T+ = R+ or Z+ . This reflects the fact
that evolution operators are often non-invertible. However this set-up allows
us to consider ϕ(t, θs ω) for t ∈ T+ , but starting at an arbitrary (possibly
negative) time s ∈ T which will be crucial for the construction of equilibria
and attractors. In the case of continuous time (T = R) the standard definition
of a continuous RDS requires the continuity of the mappings (t, x) → ϕ(t, ω)x
for all ω ∈ Ω (see Arnold [3, Sect.1.1]). This property is usually true for
the RDS generated by finite-dimensional random and stochastic equations.
However, as we will see, many general results on the long-time behaviour
can be proved under a weaker assumption of the continuity of the mapping
x → ϕ(t, ω)x for each t ≥ 0 and ω ∈ Ω. We also note that the cocycle
property reduces to the classical semiflow property if ϕ is independent of ω.
Hence deterministic dynamical systems are particular cases of RDS.
(ii) If in Definition 1.2.1 the cocycle is defined on a θ-invariant set Ω ∗ of
full measure, then we can extend it to the whole Ω by the formula

ϕ̃(t, ω) := ϕ(t, ω) for ω ∈ Ω ∗ and ϕ̃(t, ω) := id for ω ∉ Ω ∗ . (1.3)

Thus we obtain the cocycle ϕ̃(t, ω) which is indistinguishable from ϕ(t, ω).
We recall that by definition the indistinguishability of ϕ(t, ω) and ϕ̃(t, ω)
means that there exists a set N ∈ F such that P(N ) = 0 and

{ω : ϕ(t, ω) ≠ ϕ̃(t, ω) for some t ∈ R+ } ⊂ N .

In our case the cocycles coincide on the θ-invariant set Ω ∗ and we can set
N = Ω \ Ω ∗ . In further considerations we do not distinguish cocycles which
coincide on θ-invariant sets of full measure.
(iii) In the definition of an RDS we require some properties to be valid for
all ω ∈ Ω. However the stochastic analysis deals usually with almost all ele-
mentary events ω. Solutions to stochastic differential equations are defined al-
most surely, for example. Therefore to construct RDS connected with stochas-
tic equations we need to extend the corresponding evolution operator to all
ω ∈ Ω and prove the cocycle property for this extension. This can be done for
many cases which are important from the point of view of applications. This
procedure is usually referred to as perfection. Roughly speaking the perfection
of cocycles (or other objects) can be done in the following way. First we prove
a property for some θ-invariant set Ω ∗ of full measure. After that we define the
cocycle on Ω \ Ω ∗ in an appropriate way (cf. (1.3)). Perfection theorems have
been shown in various different cases, see, e.g., Arnold/Scheutzow [10],
Scheutzow [90], Kager/Scheutzow [61], Sharpe [98] and also the dis-
cussion in Arnold [3].

We also recall the following definitions from Arnold [3].


Definition 1.2.2 (Smooth RDS). Let X be an open subset of a Banach
space. A random dynamical system (θ, ϕ) is said to be a smooth RDS of class
C k or a C k RDS, where 1 ≤ k ≤ ∞, if it satisfies the following property: for
each (t, ω) ∈ T+ × Ω the mapping x → ϕ(t, ω)x from X into itself is k times
Frechet differentiable with respect to x and the derivatives are continuous with
respect to x.

Definition 1.2.3 (Affine RDS). Let X be a linear Polish space. The RDS
(θ, ϕ) is said to be affine if the cocycle ϕ is of the form

ϕ(t, ω)x = Φ(t, ω)x + ψ(t, ω) , (1.4)

where Φ(t, ω) is a cocycle over θ consisting of bounded linear operators of X,


and ψ : T+ × Ω → X is a measurable function. If ψ(t, ω) ≡ 0 then the affine
RDS is said to be linear.

If (θ, Φ) is a linear RDS, then the cocycle property for the mapping ϕ defined
by (1.4) is equivalent to the relation

ψ(t + s, ω) = Φ(t, θs ω)ψ(s, ω) + ψ(t, θs ω), t, s ≥ 0 . (1.5)
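Indeed, ϕ(t + s, ω)x = Φ(t + s, ω)x + ψ(t + s, ω), while ϕ(t, θs ω)ϕ(s, ω)x = Φ(t, θs ω)Φ(s, ω)x + Φ(t, θs ω)ψ(s, ω) + ψ(t, θs ω); since Φ(t + s, ω) = Φ(t, θs ω)Φ(s, ω), the two expressions coincide for all x exactly when (1.5) holds.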


A thorough treatment of affine RDS in Rd can be found in Sect. 5.6 of Arnold [3].
Any RDS (θ, ϕ) generates a skew-product semiflow {πt , t ∈ T+ } on Ω × X by
the formula
πt (ω, x) = (θt ω, ϕ(t, ω)x), t ∈ T+ . (1.6)
Since (ω, x) → πt (ω, x) is an (F × B)-measurable mapping from Ω × X into
itself, we obtain a measurable dynamical system on (Ω × X, F × B). Here B
is the σ-algebra of Borel sets in X. The cocycle property for ϕ is equivalent
to the semigroup property for π. We note that the standard theory of skew-
product flows (see, e.g., Shen/Yi [99], Chicone/Latushkin [19] and the
references therein) usually requires that both Ω and X are topological spaces
and {θt } are continuous mappings. In the RDS case we have no topology on
Ω in general.
The simplest examples of RDS are described below.

Example 1.2.1 (Markov Chain). This is a generalization of the example considered in the Introduction. Let (Ω0 , F0 , P0 ) be a probability space and X be a
Polish space. Assume that f (α, x) is a measurable mapping from Ω0 × X into
X which is continuous with respect to x for every fixed α ∈ Ω0 . Let (Ω, F, P)
be the probability space of infinite sequences ω = {ωi }, where ωi ∈ Ω0 , i ∈ Z,
and θ = (Ω, F, P, {θt , t ∈ Z}) be the metric dynamical system constructed in
Example 1.1.5. For every ω = {ωi : i ∈ Z} ∈ Ω we introduce the function
fω : X → X by the formula fω (x) = f (ω0 , x) and for each n ∈ Z+ and
ω ∈ Ω we define the mapping ϕ(n, ω) by the formula

ϕ(n, ω) = fθn−1 ω ◦ fθn−2 ω ◦ . . . ◦ fθ1 ω ◦ fω , ω ∈ Ω, n ∈ N . (1.7)

We also suppose ϕ(0, ω) = id. It is easy to see that the sequence ϕ(n, ω)x
solves the difference equation

xn+1 = fθn ω (xn ), n ∈ Z+ , x0 = x ,

and the mappings ϕ(n, ω) possess the cocycle property. Thus we obtain a
discrete RDS. It is a C k -RDS, if X ⊂ Rd and f (α, ·) ∈ C k (X, X). If X is a
linear Polish space and f (α, ·) are affine mappings, i.e. f (α, x) = Kα x + hα ,
where Kα are continuous linear operators in X and hα are elements from X,
then the RDS constructed above is affine. It is a linear RDS when hα = 0 for
α ∈ Ω0 .
Since all random mappings fθn ω , n ∈ Z, are independent and identically
distributed (i.i.d.), the RDS constructed above generates (see Arnold [3,
p.53]) the homogeneous Markov chain

{Φxn := ϕ(n, ω)x : n ∈ Z+ , x ∈ X}

with state space X and transition probability


P (x, B) := P{Φn+1 ∈ B | Φn = x} = P{ω : fω (x) ∈ B} ≡ P0 {α : f (α, x) ∈ B}, B ∈ B(X) .

For a detailed presentation of the theory of Markov chains we refer to Gihman/Skorohod [48, Chap.2], for example. We note that the inverse prob-
lem of constructing an RDS of i.i.d. mappings with a prescribed transition
probability is not unique in general and so far largely unsolved. We refer to
Arnold [3] and Kifer [66] for discussions of this problem.
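As a simple illustration (ours, not from the text; the particular affine maps and the uniform distribution of α are assumptions of this toy example), iterating i.i.d. random monotone maps produces such a Markov chain directly:

    import numpy as np

    # i.i.d. iterations of the random monotone affine maps f(alpha, x) = alpha*x + 1
    # with alpha ~ Uniform(0, 1/2); the orbit phi(n, omega) x is a homogeneous
    # Markov chain on R.
    rng = np.random.default_rng(2)

    def step(x):
        alpha = rng.uniform(0.0, 0.5)   # the n-th coordinate omega_n of the noise
        return alpha * x + 1.0          # f(alpha, x), nondecreasing in x

    x = 0.0
    for _ in range(1000):
        x = step(x)
    print(x)   # after many steps the chain essentially forgets its initial state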

Example 1.2.2 (Kick Model). Let {ξk : k ∈ Z} be a stationary random process (chain) in X on a probability space (Ω, F, P) and θ be the corre-
sponding metric dynamical system such that ξk (ω) = ξ0 (θk ω) for all k ∈ Z
(cf. Example 1.1.6). Suppose that mappings fω : X → X have the form

fω (x) = g(x, ξ1 (ω)), ω∈Ω,

where g is a continuous function from X × X into X. In this case the cocycle ϕ defined by (1.7) generates the sequence xn = ϕ(n, ω)x which solves the
difference equation

xn+1 = g(xn , ξn+1 (ω)), n ∈ Z+ , x0 = x .

If X is a Banach space and g(x, ξ) = g(x) + ξ, then this equation has the
form
xn+1 = g(xn ) + ξn+1 (ω), n ∈ Z+ , x0 = x . (1.8)
A kick force model corresponds to the case when the mapping g : X → X has
the form g(x) = y(T ; x), where T > 0 is a fixed number and y(t) := y(t; x)
solves the equation

ẏ(t) = h(y(t)), t > 0, y(0) = x . (1.9)

Here h is a mapping from X into itself such that equation (1.9) generates a
(deterministic) continuous dynamical system. In this case

ϕ(n, ω)x = z(n · T + 0, ω; x), n ∈ Z+ .

Here z(t) := z(t, ω; x) is a generalized solution to the problem



ż(t) = h(z(t)) + Σ_{k∈Z} ξk (ω) · δ(t − k · T ), z(+0) = x ,

where δ(t) is a Dirac δ-function of time. Thus the kick model describes the
situation when the deterministic system (1.9) gets random kicks with some
period T and evolves freely between kicks. We note that kick models are
quite popular in the study of turbulence phenomena.
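A minimal sketch of such a kick model (our own toy choice h(y) = −y, period T = 1 and Gaussian kicks; none of this is prescribed in the text) reads:

    import numpy as np

    # The free flow of dy/dt = -y over one period is y(T; x) = exp(-T) * x,
    # so the kick model reduces to the difference equation x_{n+1} = g(x_n) + xi_{n+1}.
    rng = np.random.default_rng(3)
    T = 1.0
    g = lambda x: np.exp(-T) * x

    x = 0.0
    for _ in range(20):
        x = g(x) + rng.standard_normal()   # random kick at the end of each period
    print(x)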

The next examples present the simplest versions of the RDS considered in detail in Chaps. 2, 5 and 6.
Example 1.2.3 (1D Random Equation). Let θ = (Ω, F, P, {θt , t ∈ R}) be a
metric dynamical system. Consider the pathwise ordinary differential equa-
tion
ẋ(t) = f (θt ω, x(t)) . (1.10)
Under some natural conditions (see Sect. 2.1 below) on the function f :
Ω × R → R this equation generates an RDS with state space R and with the
cocycle given by the formula ϕ(t, ω)x = x(t), where x(t) is the solution to
(1.10) with x(0) = x. This RDS is affine if f (ω, x) = a(ω) · x + b(ω) for some
random variables a(ω) and b(ω).

Example 1.2.4 (Binary Biochemical Model). Consider the system of ordinary differential equations

ẋ1 = g(x2 ) − α1 (θt ω)x1 ,
ẋ2 = x1 − α2 (θt ω)x2 , (1.11)

over a metric dynamical system θ. This is a two-dimensional version of the deterministic model considered in the Introduction. If we assume that g(x)
is a globally Lipschitz function and αi (ω) is a random variable such that
αi (θt ω) ∈ L1loc (R) for i = 1, 2 and ω ∈ Ω, then equations (1.11) generate an
RDS in R2 with ϕ(t, ω)x = x(t), where x(t) = (x1 (t), x2 (t)) is the solution
to (1.11) with x(0) = x.

Example 1.2.5 (1D Stochastic Equation). Let {Wt } be the one-dimensional Wiener process (see Example 1.1.7). Then the Itô stochastic differential equa-
tion in R
dx(t) = b(x(t))dt + σ(x(t))dWt , (1.12)
where the scalar functions b(x) and σ(x) possess some regularity properties
(see Sect. 2.4 below), also generates an RDS. Of course, the same conclu-
sion remains true, if we understand the stochastic equation (1.12) in the
Stratonovich sense. We note that formally equation (1.12) can be written in
the form
ẋ(t) = b(x(t)) + σ(x(t))Ẇt
and the corresponding RDS can be interpreted as a system in a white noise
environment.

A more detailed presentation of the last three examples and their generalizations can be found in Chaps.5 and 6. We also refer to Sects. 2.1 and 2.4 in Chap.2 for a description of the basic properties of random and stochastic differential equations.
As in the deterministic case the following concept of topological equivalence
(or conjugacy) of two random dynamical systems is of importance in our
study. In particular below we will use equivalence between some classes of
random and stochastic differential equations.
Definition 1.2.4 (Equivalence of RDS). Let (θ, ϕ1 ) and (θ, ϕ2 ) be two
RDS over the same MDS θ with phase spaces X1 and X2 resp. These RDS
(θ, ϕ1 ) and (θ, ϕ2 ) are said to be (topologically) equivalent (or conjugate) if
there exists a mapping T : Ω × X1 → X2 with the properties:
(i) the mapping x → T (ω, x) is a homeomorphism from X1 onto X2 for
every ω ∈ Ω;
(ii) the mappings ω → T (ω, x1 ) and ω → T −1 (ω, x2 ) are measurable for every
x1 ∈ X1 and x2 ∈ X2 ;
(iii) the cocycles ϕ1 and ϕ2 are cohomologous, i.e.

ϕ2 (t, ω, T (ω, x)) = T (θt ω, ϕ1 (t, ω, x)) for any x ∈ X1 . (1.13)

We refer to Arnold [3], Keller/Schmalfuss [63] and also to the recent


papers Imkeller/Lederer [58] and Imkeller/Schmalfuss [59] for more
details concerning equivalence of RDS.

1.3 Random Sets

One of the goals in this book is to describe the long-time behaviour of RDS
and the limit regimes of these systems. These limit regimes typically depend
on an event ω and therefore to characterize their attractivity properties we
should at least be able to calculate the distance between (random) trajecto-
ries and (random) limit objects and treat this distance as a random variable.
It is also crucial to decide whether the limit regimes contain a random vari-
able representing the different states of the system. These circumstances lead
to a notion of a random set which is stronger than simply a collection of sets
depending on ω. We introduce this notion of a random set following Castaing/Valadier [18] and Hu/Papageorgiou [56] (see also Crauel [32]
and Arnold [3]).
Below any mapping from Ω into the collection of all subsets of X is said
to be a multifunction (or a set valued mapping) from Ω into X.
Definition 1.3.1 (Random Set). Let X be a metric space with a metric ϱ. The multifunction ω → D(ω) ≠ ∅ is said to be a random set if the mapping
ω → distX (x, D(ω)) is measurable for any x ∈ X, where distX (x, B) is the
distance in X between the element x and the set B ⊂ X. If D(ω) is closed for
each ω ∈ Ω then D is called a random closed set. If D(ω) are compact sets
for all ω ∈ Ω then D is called a random compact set. A random set {D(ω)}
is said to be bounded if there exist x0 ∈ X and a random variable r(ω) > 0
such that

D(ω) ⊂ {x ∈ X : ϱ(x, x0 ) ≤ r(ω)} for all ω ∈ Ω .

For ease of notation we denote the random set ω → D(ω) by D or {D(ω)}.

Remark 1.3.1. (i) The property of D being a random closed set is slightly
stronger than

graph(D) = {(ω, x) ∈ Ω × X : x ∈ D(ω)}

being F × B(X)-measurable and D(ω) being closed; the two properties are
equivalent if F is P-complete, i.e. if for any set A ∈ F with zero probability
all subsets of A also belong to F (see Castaing/Valadier [18]).
(ii) For any x ∈ X and bounded sets A and B from X we have the relation

|distX (x, A) − distX (x, B)| ≤ h(A|B) ,

where h(A|B) is the Hausdorff distance defined by the formula

h(A|B) = sup_{a∈A} distX (a, B) + sup_{b∈B} distX (b, A) .

Therefore, if for a multifunction ω → D(ω) there exists a sequence {Dn } of random bounded sets such that

lim_{n→∞} h(Dn (ω)|D(ω)) = 0 for all ω ∈ Ω ,

then D(ω) = ⋂_{n≥0} cl( ⋃_{k≥n} Dk (ω) ) for every ω ∈ Ω and ω → D(ω) is a random bounded set (here cl(D) denotes the closure of D in X).

Example 1.3.1 (Random Ball). Let X = Rd . Suppose that r(ω) ≥ 0 is a random variable and a(ω) is a random vector from Rd . Then the multifunction

ω → B(ω) = {x : |x − a(ω)| ≤ r(ω)}

is a random compact set. Here | · | is the Euclidean distance in Rd . This fact follows from the formula

distX (y, B(ω)) = 0 if y ∈ B(ω) , and distX (y, B(ω)) = |y − a(ω)| − r(ω) if y ∉ B(ω) ,

which implies that distX (y, B(ω)) = max {0, |y − a(ω)| − r(ω)}. It is also
clear that intB(ω) = {x : |x − a(ω)| < r(ω)} is a random (open) set.
More general examples are described in Proposition 1.3.1(vi) and in Proposition 1.3.6.
We need the following properties of random sets (for the proofs we refer
to Hu/Papageorgiou [56, Chap.2], see also Castaing/Valadier [18],
Crauel [32] and Arnold [3]).
Proposition 1.3.1. Let X be a Polish space. The following assertions hold:
(i) D is a random set in X if and only if the set {ω : D(ω) ∩ U ≠ ∅} is measurable for any open set U ⊂ X;
(ii) D is a random set in X if and only if {D(ω)} is a random closed set
(D(ω) denotes the closure of D(ω) in X);
(iii) D is a random compact set in X if and only if D(ω) is compact for every
ω ∈ Ω and the set {ω : D(ω) ∩ C ≠ ∅} is measurable for any closed set
C ⊂ X;
(iv) if {Dn , n ∈ N} is a sequence of random closed sets with non-void inter-
section and there exists n0 ∈ N such that Dn0 is a random compact set,
then ∩n∈N Dn is a random compact set in X;
(v) if {Dn , n ∈ N} is a sequence of random sets, then D = ∪n∈N Dn is also a
random set in X;
(vi) if f : Ω × X → X is a mapping such that f (ω, ·) is continuous for all ω
and f (·, x) is measurable for all x, then ω → f (ω, D(ω)) is a random set
in X provided D is a random set in X; similarly, ω → f (ω, D(ω)) is a
random compact set in X provided D is a random compact set.
The following representation theorem (see Ioffe [60]) provides us with a
convenient description of random closed sets.
Theorem 1.3.1. Let D be a random closed set in a Polish space X. Then
there exist a Polish space Y and a mapping g(ω, y) : Ω × Y → X such that
(i) g(ω, ·) is continuous for all ω ∈ Ω and g(·, y) is measurable for all y ∈ Y ;
(ii) for all ω ∈ Ω and y1 , y2 ∈ Y one has
ϱ(g(ω, y1 ), g(ω, y2 )) ≤ (1 + ϱ(g(ω, y1 ), g(ω, y2 ))) · r(y1 , y2 ) ,
where ϱ(·, ·) and r(·, ·) are distances in X and Y ;
(iii) for all ω ∈ Ω one has D(ω) = g(ω, Y ), the range of g(ω, ·).
This theorem immediately implies the following assertion.
Proposition 1.3.2 (Measurable Selection Theorem). Let a multifunc-
tion ω → D(ω) take values in the subspace of closed non-void subsets of a
Polish space X. Then {D(ω)} is a random closed set if and only if there
exists a sequence {vn : n ∈ N} of measurable maps vn : Ω → X such that vn (ω) ∈ D(ω) and D(ω) is the closure of the set {vn (ω), n ∈ N} for all ω ∈ Ω .
In particular if {D(ω)} is a random closed set, then there exists a measurable
selection, i.e. a measurable map v : Ω → X such that v(ω) ∈ D(ω) for all
ω ∈ Ω.
Below we also need the following assertion on the measurability of projections (see, e.g., Castaing/Valadier [18, p.75]). It deals with the σ-algebra Fu
of universally measurable sets associated with the measurable space (Ω, F)
which is defined by the formula

Fu = ⋂_ν F̄ν ,

where the intersection is taken over all probability measures ν on (Ω, F) and F̄ν
denotes the completion of the σ-algebra F with respect to the measure ν. We
call Fu the universal σ-algebra and F̄ν the ν-completion of F for shortness.
Recall that the P-completion F̄P is the σ-algebra consisting of all subsets A
of Ω for which there are sets U and V in F such that U ⊂ A ⊂ V and
P(U ) = P(V ). The probability measure P can be extended from F to F̄P such
that F̄P is a complete σ-algebra with respect to the extended probability
measure. For details we refer to Cohn [30], for instance. We also note that
θt F̄P = F̄P for any fixed t ∈ R. This property follows from the relation
P(θt U ) = P(U ) for any U ∈ F and t ∈ R.
Proposition 1.3.3 (Projection Theorem). Let X be a Polish space and
M ⊂ Ω × X be a set which is measurable with respect to the product σ-algebra
F × B(X). Then the set

projΩ M = {ω ∈ Ω : (ω, x) ∈ M for some x ∈ X}

is universally measurable, i.e. belongs to Fu . In particular it is measurable


with respect to the P-completion F̄P of F.
Now we introduce the following set valued analog of a separable process (cf.
Gihman/Skorohod [48, p.165]).
Definition 1.3.2. Let I be a set in R. A collection {Ct : t ∈ I} of random
sets is said to be separable if there exists an everywhere dense countable set
Q in I such that

Ct (ω) ⊂ ⋂_{n∈N} closure( ⋃ {Cτ (ω) : τ ∈ [t − n−1 , t + n−1 ] ∩ Q} ) (1.14)

for all t ∈ I and ω ∈ Ω. The set Q is called the separability set of the
collection {Ct }. A process {v(t, ω) : t ∈ I} is said to be separable if the
collection of random sets Ct (ω) = {v(t, ω)} is separable.
It is easy to see that {Ct : t ∈ I} is a separable collection with a separability
set Q if and only if for any t ∈ I and x ∈ Ct (ω) there exist sequences {tn } ⊂ Q
and {xn } ⊂ X such that tn → t and xn → x as n → ∞ and xn ∈ Ctn (ω).
The following proposition gives examples of separable collections of ran-
dom closed sets.

Proposition 1.3.4. Let D be a random closed set and I = (α, β) ⊂ R.


Assume that the function h(t, ω, x) : I × Ω × X → X satisfies
(i) for each t ∈ I the function h(t, ω, ·) is continuous for all ω ∈ Ω and
h(t, ·, x) is measurable for all x ∈ X;
(ii) h(·, ω, x) is a right continuous function for all ω ∈ Ω and x ∈ X.
Then ω → h(t, ω, D(ω)) is a separable collection of random closed sets whose
separability set Q is an arbitrary everywhere dense countable set from (α, β).
The same conclusion holds if h(·, ω, x) is a left continuous function.

Proof. Proposition 1.3.1(vi) implies that ω → h(t, ω, D(ω)) is a random


closed set for every t. From Theorem 1.3.1 we have that

h(t, ω, D(ω)) = {h(t, ω, g(ω, y)) : y ∈ Y }

Thus by (ii) for any t ∈ I there exists a sequence {tk } ⊂ Q such that tk > t
and
h(t, ω, g(ω, y)) = limtk→t h(tk , ω, g(ω, y))

for every y ∈ Y and ω ∈ Ω. This property easily implies



h(t, ω, g(ω, y)) ∈ ∩n∈N cl {Cτ (ω) : τ ∈ [t − n−1 , t + n−1 ] ∩ Q}

for all y ∈ Y and ω ∈ Ω, where Ct (ω) = h(t, ω, D(ω)). This relation gives
the separability of {h(t, ω, D(ω))}. □

The main property of separable collections of random closed sets which is


important in the considerations below is given in the following proposition.
Proposition 1.3.5. Let {Ct : t ∈ I} be a separable collection of random
sets. Then the multifunction

ω → C(ω) = cl ( ∪t∈I Ct (ω) )

is a random closed set.

Proof. It follows from (1.14) that cl ( ∪t∈I Ct (ω) ) = cl ( ∪t∈I∩Q Ct (ω) ). Therefore we
can apply Proposition 1.3.1(v). □

Below we also need the following assertion.


Proposition 1.3.6. Let V : X → R be a continuous function on a Polish
space X and R(ω) be a random variable. If the set VR (ω) := {x : V (x) ≤
R(ω)} is non-empty for any ω ∈ Ω, then it is a random closed set.

Proof. The idea of the proof is borrowed from Schenk-Hoppé [89]. It is clear
that VR (ω) is closed for any ω ∈ Ω. Due to Proposition 1.3.1(i) it is sufficient
to prove that {ω : VR (ω) ∩ U ≠ ∅} is measurable for every open set U ⊂ X.
This is equivalent to measurability of the set

{ω : VR (ω) ∩ U = ∅} ≡ {ω : U ⊂ X \ VR (ω)} .

This measurability follows from the relation

{ω : U ⊂ X \ VR (ω)} = {ω : R(ω) < s for any s ∈ V (U )} (1.15)

which we now prove. Since

X \ VR (ω) = V −1 (R) \ V −1 ((−∞, R(ω)]) = V −1 ((R(ω), +∞)) ,

we have that U ⊂ X \ VR (ω) if and only if V (U ) ⊂ (R(ω), +∞). This implies


(1.15) and therefore

{ω : U ⊂ X \ VR (ω)} = ∩n∈N {ω : R(ω) < sn },

where sn ∈ V (U ) and sn → inf V (U ) as n → ∞. □

The following notions of random tempered sets and variables play an impor-
tant role in applications of the general theory of RDS connected with ran-
dom and stochastic equations (cf. Chaps. 4 and 5). Roughly speaking, temperedness
of a random variable which describes the influence of the random environment
means that this environment evolves in a non-explosive way.
Definition 1.3.3 (Tempered Random Set). A random set {D(ω)} is
said to be tempered with respect to MDS θ = (Ω, F, P, {θt , t ∈ T}) if there
exist a random variable r(ω) and an element y ∈ X such that

D(ω) ⊂ {x | distX (x, y) ≤ r(ω)} for all ω∈Ω

and r(ω) is a tempered random variable with respect to θ, i.e.

supt∈T e−γ|t| |r(θt ω)| < ∞ for all ω ∈ Ω and γ > 0 . (1.16)

A random variable v(ω) with values in X is said to be tempered if the one-


point random set {v(ω)} is tempered.
It is clear that every deterministic set is tempered. We note that non-
tempered random variables exist on any standard probability space with
ergodic and aperiodic θ (see Arnold/Cong/Oseledets [9]). Sometimes
(see, e.g., Arnold [3, p.164]) the definition of a tempered random variable
is based on the relation

lim|t|→∞ (1/|t|) log {1 + |r(θt ω)|} = 0 for all ω ∈ Ω ,

which is weaker than (1.16). However we prefer to use (1.16) because it allows
us to simplify some calculations in the applications below. We also note that
if θ is ergodic, the only alternative to property (1.16) is that
lim|t|→∞ (1/|t|) log {1 + |r(θt ω)|} = +∞ for almost all ω ∈ Ω ,

see Arnold [3, p.165].


As in the deterministic case we need a notion of an invariant set for the
description of qualitative properties of RDS. It is convenient to introduce this
notion for multifunctions to cover all types of random sets.
Definition 1.3.4 (Invariance Property). Let (θ, ϕ) be a random dynam-
ical system. A multifunction ω → D(ω) is said to be
(i) forward invariant with respect to (θ, ϕ) if ϕ(t, ω)D(ω) ⊆ D(θt ω) for all
t > 0 and ω ∈ Ω, i.e. if x ∈ D(ω) implies ϕ(t, ω)x ∈ D(θt ω) for all t ≥ 0
and ω ∈ Ω;
(ii) backward invariant with respect to (θ, ϕ) if ϕ(t, ω)D(ω) ⊇ D(θt ω) for all
t > 0 and ω ∈ Ω, i.e. for every t > 0, ω ∈ Ω and y ∈ D(θt ω) there exists
x ∈ D(ω) such that ϕ(t, ω)x = y;
(iii) invariant with respect to (θ, ϕ) if ϕ(t, ω)D(ω) = D(θt ω) for all t > 0 and
ω ∈ Ω, i.e. if it is both forward and backward invariant.
We note that the forward invariance of the multifunction ω → D(ω) means
that
graph(D) = {(ω, x) ∈ Ω × X : x ∈ D(ω)}
is a forward invariant set in Ω × X with respect to the semiflow {πt } defined
by (1.6), i.e. πt graph(D) ⊂ graph(D) for all t > 0. The same is true for the
property of invariance.

1.4 Dissipative, Compact and Asymptotically Compact RDS

In this section we start to develop methods for studying the qualitative be-
haviour of random dynamical systems. Our main goal is to investigate the
behaviour of expressions of the form x(t) = ϕ(t, θ−t ω)x when t → +∞. At
first sight this object looks a bit strange. However there are at least three
reasons to study the limiting structure of ϕ(t, θ−t ω)x.
The first one is connected with the question of what limiting dynamics
we want to observe. The point is that in many applications RDS are generated

by equations whose coefficients depend on θt ω. These coefficients describe


the internal evolution of the environment and θ−t ω represents the state of
the environment at time −t which transforms into the “real” state (ω) at
the time of observation (time 0, after a time t has elapsed). Furthermore the
two-parameter mapping U (τ, s) := ϕ(τ − s, θs ω) describes the evolution of
the system from moment s to time τ , τ > s. Therefore the limiting structure
of U (0, −t)x = ϕ(t, θ−t ω)x when t → +∞ can be interpreted as the state of
our system which we observe now (t = 0) provided it was in the state x in the
infinitely distant past (t = −∞). Thus the union of all these limits provides
us with the real picture of the present state of the system.
The second reason is that the asymptotic behaviour of ϕ(t, θ−t ω)x pro-
vides us with some information about the long-time future. Indeed, since {θt }
are measure preserving, we have that
P {ω : ϕ(t, ω)x ∈ D} = P {ω : ϕ(t, θ−t ω)x ∈ D}
for any x ∈ X and D ∈ B(X). Therefore
limt→+∞ P {ω : ϕ(t, ω)x ∈ D} = limt→+∞ P {ω : ϕ(t, θ−t ω)x ∈ D} ,

if the limit on the right hand side exists. Thus the limiting behaviour of
ϕ(t, θ−t ω)x for all ω determines the long-time behaviour of ϕ(t, ω)x with
respect to convergence in probability.
The third reason is purely mathematical. If on the set of random variables
a(ω) with values in X we define the operators Tt by the formula
(Tt a)(ω) = ϕ(t, θ−t ω)a(θ−t ω), t ∈ R+ ,
then the family {Tt , t ∈ R+ } is a one-parameter semigroup. Indeed, using
the cocycle property we have
(Ts [Tt a])(ω) = ϕ(s, θ−s ω)(Tt a)(θ−s ω) = ϕ(s, θ−s ω)ϕ(t, θ−t−s ω)a(θ−t−s ω)

= ϕ(t + s, θ−t−s ω)a(θ−t−s ω) = (Tt+s a)(ω) .


Thus it becomes possible to use ideas from the theory of deterministic (au-
tonomous) dynamical systems for which the semigroup structure of the evo-
lution operator is crucial. Below we introduce several important dynamical
notions and study the qualitative behaviour of RDS relying on this observa-
tion.
Let D be a family of random closed sets which is closed with respect to
inclusions (i.e. if D1 ∈ D and a random closed set {D2 (ω)} possesses the
property D2 (ω) ⊂ D1 (ω) for all ω ∈ Ω, then D2 ∈ D). Sometimes the
collection D is called a universe of sets (see, e.g., Schenk-Hoppé [89]) or
an IC-system (see Flandoli/Schmalfuss [44]). The simplest example of a
universe is the collection of all one-point subsets of X. However the concept

of a universe allows us to include the consideration of local regimes of the


system into the theory in a natural way. We refer to Schenk-Hoppé [89] for
a further discussion of this concept. In the applications presented in Chaps.5
and 6 we deal with the universe of all tempered subsets of the phase space.
Definition 1.4.1 (Absorbing Set). A random closed set {B(ω)} is said
to be absorbing for the RDS (θ, ϕ) in the universe D, if for any D ∈ D and
for any ω there exists t0 (ω) such that

ϕ(t, θ−t ω)D(θ−t ω) ⊂ B(ω) for all t ≥ t0 (ω) and ω∈Ω.

Definition 1.4.2 (Dissipative RDS). An RDS (θ, ϕ) is said to be dissi-


pative in the universe D, if there exists an absorbing set B for the RDS (θ, ϕ)
in the universe D such that

B(ω) ⊂ Br(ω) (x0 ) ≡ {x : distX (x, x0 ) ≤ r(ω)}, (1.17)

for some x0 ∈ X and random variable r(ω) and for all ω ∈ Ω. If X is a linear
space and x0 = 0, then the variable r(ω) is said to be a radius of dissipativity
of the RDS (θ, ϕ) in the universe D.
The simplest examples of dissipative RDS are the following ones.
Example 1.4.1 (Discrete Dissipative RDS). Let us consider the RDS con-
structed in Example 1.2.1. Let X = R and Ω0 = {0, 1} be a two-point set.
Assume that the continuous functions f0 and f1 possess the property

|fi (x)| ≤ a|x| + b with some 0 ≤ a < 1, b ≥ 0 .

In this case Ω is the set of two-sided sequences ω = {ωi | i ∈ Z} consisting of


zeros and ones and

ϕ(n, ω) = fωn−1 ◦ fωn−2 ◦ . . . ◦ fω1 ◦ fω0 , ω = {ωi | i ∈ Z}, n∈N.

Using the cocycle property it is easy to see that

|ϕ(n + 1, ω)x| ≤ a · |ϕ(n, ω)x| + b, n ∈ Z+ . (1.18)

Therefore after n iterations we obtain

|ϕ(n, ω)x| ≤ an · |x| + b · (1 − a)−1 , n ∈ Z+ . (1.19)

Let D be the family of all tempered (with respect to θ) random closed sets
in R. Let D ∈ D and D(ω) ⊂ {x : |x| ≤ r(ω)}, where r(ω) possesses the
property (1.16) (i.e. is a tempered random variable). Then (1.19) implies that

|ϕ(n, θ−n ω)x(θ−n ω)| ≤ an r(θ−n ω) + b · (1 − a)−1 , for all x(ω) ∈ D(ω) .

Since 0 ≤ a < 1, it follows from (1.16) that an r(θ−n ω) → 0 as n → +∞.


Therefore for every ω ∈ Ω there exists n0 (ω) such that an r(θ−n ω) ≤ 1 for
n ≥ n0 (ω). Consequently we have

ϕ(n, θ−n ω)D(θ−n ω) ⊂ B := [−1 − b · (1 − a)−1 , 1 + b · (1 − a)−1 ]

for n ≥ n0 (ω). Thus the RDS considered is dissipative in the universe D of


all tempered random closed sets from R. Using (1.18) with n = 0 one can
easily see that B is a forward invariant set from D.
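To make the pull back absorption in this example concrete, here is a small Python sketch (not part of the original text). The particular contractions f0 (x) = x/2 − 1 and f1 (x) = x/2 + 1, for which a = 1/2 and b = 1, are an illustrative assumption:

import random

def f(i, x):
    # hypothetical choice of the two maps: |f_i(x)| <= 0.5|x| + 1, so a = 0.5, b = 1
    return 0.5 * x - 1.0 if i == 0 else 0.5 * x + 1.0

def pullback(omega, n, x):
    """Compute phi(n, theta_{-n} omega) x = f_{omega_{-1}} o ... o f_{omega_{-n}} (x)."""
    for k in range(-n, 0):           # apply f_{omega_{-n}} first and f_{omega_{-1}} last
        x = f(omega[k], x)
    return x

random.seed(0)
omega = {k: random.randint(0, 1) for k in range(-200, 0)}   # one sample path of the noise

for x0 in (-50.0, 0.0, 7.0):
    print(x0, [round(pullback(omega, n, x0), 4) for n in (1, 5, 20, 100)])
# For n large the pull back images of every tempered initial state lie in the
# absorbing interval B = [-1 - b/(1-a), 1 + b/(1-a)] = [-3, 3]; in fact they
# converge to one and the same random point (cf. Remark 1.7.1 below).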

Example 1.4.2 (Kick Model). Let X be a Banach space and g : X → X be


a continuous mapping such that

‖g(x)‖ ≤ a ‖x‖ + b, 0 ≤ a < 1, b ≥ 0 . (1.20)

Consider the RDS (θ, ϕ) generated by the difference equation

xn+1 = g(xn ) + ξ(θn+1 ω), n ∈ Z+ , (1.21)

over a metric dynamical system (Ω, F, P, {θn , n ∈ Z}), where ξ(ω) is a tem-
pered random variable in X. Using (1.20) and (1.21) we have

‖ϕ(n, ω)x‖ ≤ an ‖x‖ + R(θn ω), n ∈ Z+ ,

where


R(ω) = b(1 − a)−1 + Σ_{k=0}^{∞} ak ‖ξ(θ−k ω)‖

is a tempered random variable. It is easy to see that for every δ > 0 the ball
B δ (ω) = {x : ‖x‖ ≤ (1 + δ)R(ω)} is a forward invariant absorbing set for
(θ, ϕ) in the universe D of all tempered random closed sets from X.
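The following Python sketch illustrates the kick model numerically. The choices X = R, g(x) = 0.5x + 0.1 cos x (so a = 0.5, b = 0.1) and i.i.d. Gaussian kicks ξ are assumptions made only for this experiment:

import math, random

a, b = 0.5, 0.1
def g(x):
    return 0.5 * x + 0.1 * math.cos(x)          # |g(x)| <= a|x| + b

random.seed(1)
xi = {k: random.gauss(0.0, 1.0) for k in range(-300, 1)}    # kicks xi(theta_k omega), k <= 0

# truncated tempered radius R(omega) = b/(1-a) + sum_{k>=0} a**k |xi(theta_{-k} omega)|
R = b / (1 - a) + sum(a ** k * abs(xi[-k]) for k in range(250))

def pullback(n, x):
    """phi(n, theta_{-n} omega) x, driven by the kicks xi(theta_{-n+1} omega), ..., xi(omega)."""
    for j in range(-n + 1, 1):
        x = g(x) + xi[j]
    return x

for x0 in (-30.0, 0.0, 12.0):
    y = pullback(40, x0)
    print(x0, "->", round(y, 4), "| inside the absorbing ball:", abs(y) <= 1.05 * R)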

Example 1.4.3 (Continuous Dissipative RDS). Let (θ, ϕ) be the RDS consid-
ered in Example 1.2.3 from the random ODE ẋ = f (θt ω, x). Assume addi-
tionally that the function f (ω, x) possesses the property

xf (ω, x) ≤ −α|x|2 + β, for all ω ∈ Ω ,

where α > 0 and β ≥ 0 are nonrandom constants. Then it is easy to see that
(1/2) (d/dt) |x(t)|² ≤ −α|x(t)|² + β, t > 0 ,
for any solution to (1.10). Therefore, since ϕ(t, ω)x = x(t), we have

|ϕ(t, ω)x|² ≤ e−2αt |x|² + (β/α) · (1 − e−2αt ), t > 0 .
As in Example 1.4.1 this property implies that (θ, ϕ) is dissipative in the
universe D of all tempered (with respect to θ) random closed sets from R.
Moreover the absorbing set B = {x : |x| ≤ 1 + β/α} is a forward invariant
set from D.
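A pull back simulation of this example can be sketched as follows (illustrative only; we assume f (θt ω, x) = −x + ξ(θt ω) with a bounded, piecewise constant stationary noise ξ, so that xf (ω, x) ≤ −x²/2 + 1/2 and one may take α = β = 1/2):

import random

random.seed(2)
noise = {k: random.uniform(-1.0, 1.0) for k in range(-500, 1)}   # xi(theta_k omega) on integer cells

def f(t, x):
    return -x + noise[int(t // 1)]      # piecewise constant realization of xi(theta_t omega)

def pullback(T, x, dt=1e-2):
    """Euler approximation of phi(T, theta_{-T} omega) x: integrate from time -T to 0."""
    t = -float(T)
    while t < 0.0:
        x += dt * f(t, x)
        t += dt
    return x

for x0 in (-40.0, 0.0, 25.0):
    print(x0, [round(pullback(T, x0), 4) for T in (1, 5, 20, 60)])
# For T large all trajectories are absorbed by the ball {|x| <= 1 + beta/alpha = 2}
# and approach one and the same random point, the pull back limit at time 0.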

The situation described in Example 1.4.3 admits the following generalization


which can be also considered as an extension of well-known deterministic
results (see, e.g., Babin/Vishik [13], Chueshov [20] or Hale [50]) to the
random case.
Proposition 1.4.1. Assume that the phase space X of RDS (θ, ϕ) is a sepa-
rable Banach space with the norm · and there exists a continuous function
V : X → R with the properties:
(i) V (ϕ(t, ω)x) is absolutely continuous with respect to t for any (ω, x) ∈
Ω × X;
(ii) there exists a constant α > 0 and a tempered random variable β(ω) ≥ 0
such that for every (ω, x) ∈ Ω × X we have the inequality

(d/dt) V (ϕ(t, ω)x) + (α + η(θt ω)) · V (ϕ(t, ω)x) ≤ β(θt ω) (1.22)

for almost all t > 0, where η(ω) is a random variable such that η(θt ω)
lies in L1loc (R) for every ω ∈ Ω and

limt→+∞ (1/t) ∫_0^t η(θτ ω) dτ = limt→+∞ (1/t) ∫_{−t}^0 η(θτ ω) dτ = 0 (1.23)

for all ω ∈ Ω;
(iii) there exist positive constants b1 , b2 , δ1 , δ2 and nonnegative numbers c1 and
c2 such that

b1 ‖x‖δ1 − c1 ≤ V (x) ≤ b2 ‖x‖δ2 + c2 , x ∈ X . (1.24)

Then the RDS (θ, ϕ) is dissipative in the universe D of all tempered random
closed sets in X. Moreover there exists a tempered random variable R(ω) ≥ 0
such that for any positive ϵ the set

Bϵ (ω) = {x : V (x) ≤ (1 + ϵ)R(ω)} (1.25)

is a forward invariant absorbing tempered random closed set.

Proof. Let D ∈ D and x(ω) ∈ D(ω) for all ω ∈ Ω. From (1.22) we have that

V (ϕ(t, ω)x(ω)) ≤ V (x(ω)) · exp{ −αt − ∫_0^t η(θτ ω) dτ }
+ ∫_0^t β(θs ω) · exp{ −α(t − s) − ∫_s^t η(θτ ω) dτ } ds .

Therefore

V (ϕ(t, θ−t ω)x(θ−t ω)) ≤ V (x(θ−t ω)) · exp{ −αt − ∫_{−t}^0 η(θτ ω) dτ }
+ ∫_{−t}^0 β(θs ω) · exp{ αs − ∫_s^0 η(θτ ω) dτ } ds . (1.26)

It follows from (1.23) that for any ε > 0 and ω ∈ Ω there exists c(ω) > 0
such that

| ∫_0^t η(θτ ω) dτ | ≤ ε|t| + c(ω), t ∈ R, ω ∈ Ω . (1.27)

Therefore, since β(ω) is tempered, for all ω ∈ Ω the integral

R(ω) = ∫_{−∞}^0 β(θs ω) · exp{ αs − ∫_s^0 η(θτ ω) dτ } ds (1.28)

exists. It follows from (1.27) that


 0

R(θt ω) ≤ C(ω)e ε|t|


eαs e(γ+ε)|t+s| ds · sup e−γ|τ | β(θτ ω)
−∞ τ

≤ e(γ+2ε)|t| sup e−γ|τ | β(θτ ω)


α−γ−ε τ

for all ε > 0 and γ > 0 such that γ + ε < α. This implies that R(ω) is a
tempered random variable. Proposition 1.3.6 and relation (1.24) imply that
Bϵ (ω) given by (1.25) is a tempered random closed set. Let

e(t, ω) = exp{ αt − ∫_t^0 η(θτ ω) dτ } .

Then from (1.26) for any x(ω) ∈ Bϵ (ω) we have that

V (ϕ(t, θ−t ω)x(θ−t ω)) ≤ (1 + ϵ)R(θ−t ω) · e(−t, ω) + ∫_{−t}^0 β(θs ω) · e(s, ω) ds .

Since e(−t, ω) · e(s, θ−t ω) = e(s − t, ω), it follows from (1.28) that

R(θ−t ω) · e(−t, ω) = ∫_{−∞}^{−t} β(θs ω) · e(s, ω) ds .

Therefore
V (ϕ(t, θ−t ω)x(θ−t ω)) ≤ (1 + ϵ)R(ω) .

Thus Bϵ (ω) is forward invariant. It follows from (1.24) and (1.26) that

V (ϕ(t, θ−t ω)x(θ−t ω)) ≤ (b2 ‖x(θ−t ω)‖δ2 + c2 ) · e(−t, ω) + R(ω) .

Since e(−t, ω) decays exponentially as t → +∞ by (1.27) and the sets in D are tempered,
the first term on the right hand side tends to zero. This relation implies that Bϵ (ω) is absorbing in the universe D. □

Remark 1.4.1. If θ is an ergodic metric dynamical system, assumption (ii) in


Proposition 1.4.1 can be replaced by the inequality
(d/dt) V (ϕ(t, ω)x) + α̃(θt ω) · V (ϕ(t, ω)x) ≤ β(θt ω) , (1.29)

where β(ω) ≥ 0 is a tempered random variable and α̃(ω) ∈ L1 (Ω, F, P) is
a random variable such that Eα̃ > 0. Indeed, it follows from the Birkhoff-
Khintchin ergodic theorem (see, e.g., Arnold [3, Appendix]) that

lim|t|→∞ (1/t) ∫_0^t α̃(θτ ω) dτ = Eα̃, ω ∈ Ω∗ ,


where Ω∗ ⊆ Ω is a θ-invariant set of full measure. Without loss of generality
we can suppose that Ω∗ = Ω (see Remark 1.2.1(ii)). Therefore we can apply
Proposition 1.4.1 with α = Eα̃ and η(ω) = α̃(ω) − Eα̃.

Example 1.4.4 (Binary Biochemical Model). Consider the RDS (θ, ϕ) gen-
erated in R2 by equations (1.11) over an ergodic metric dynamical system
θ. Let the hypotheses concerning g and αi listed in Example 1.2.4 hold. We
assume in addition that

αmin (ω) = min{α1 (ω), α2 (ω)} ∈ L1 (Ω, F, P) and α0 = Eαmin > 0 .

If

x1 · (x2 + g(x2 )) ≤ (α0 /2) · (x1² + x2²) + β0 , (x1 , x2 ) ∈ R²+ ,

where β0 ≥ 0 is a constant, then (1.29) holds with V (x) = x1² + x2² , α̃ =
2αmin (ω) − α0 and β(ω) ≡ 2β0 . Thus the RDS (θ, ϕ) is dissipative in the
universe of all tempered random closed sets from R2 .

The following concepts are useful when the phase space X is infinite-
dimensional.
Definition 1.4.3 (Compact RDS). An RDS (θ, ϕ) is said to be compact
in the universe D, if it is dissipative in D and the absorbing set B is a random
compact set.
If the phase space X of an RDS (θ, ϕ) is compact, then (θ, ϕ) is a compact
RDS. If X is a finite-dimensional space, then any dissipative RDS is compact.

Example 1.4.5 (Kick Model). Let (θ, ϕ) be the RDS considered in Exam-
ple 1.4.2. Assume additionally that g is a compact mapping, i.e. g(B) is a
compact set for every bounded set B from X. The set
C(ω) = ϕ(1, θ−1 ω)B δ (θ−1 ω) = g(B δ (θ−1 ω)) + ξ(ω)
is an absorbing forward invariant random compact set for (θ, ϕ) in the uni-
verse D of all tempered random closed sets from X.
Definition 1.4.4 (Asymptotically Compact RDS). An RDS (θ, ϕ) is
said to be asymptotically compact in the universe D, if there exists an at-
tracting random compact set {B0 (ω)}, i.e. for any D ∈ D and for any ω ∈ Ω
we have
limt→+∞ dX {ϕ(t, θ−t ω)D(θ−t ω) | B0 (ω)} = 0 , (1.30)

where dX {A|B} = supx∈A distX (x, B).


It is clear that any compact RDS is asymptotically compact. Deterministic
examples of asymptotically compact systems which are not compact can be
found in Babin/Vishik [13], Chueshov [20], Hale [50] and Temam [104].
The following assertion shows that every asymptotically compact RDS is
dissipative.
Proposition 1.4.2. Let (θ, ϕ) be an asymptotically compact RDS in D with
an attracting random compact set {B0 (ω)}. Then it is dissipative in D.
Proof. For any x0 ∈ X we can find a random variable r(ω) ∈ (0, +∞) such
that
B0 (ω) ⊂ {x : distX (x, x0 ) ≤ r(ω)} for all ω ∈ Ω . (1.31)
To prove this we note that by Theorem 1.3.1
B0 (ω) = {g(ω, y) : y ∈ Y } for all ω ∈ Ω ,
where Y is a Polish space and the mapping g(ω, y) : Ω × Y → X is such
that g(ω, ·) is continuous for all ω ∈ Ω and g(·, y) is measurable for all y ∈ Y .
Since B0 (ω) is a compact set and Y is separable, r(ω) defined by
r(ω) := supy∈Y distX (x0 , g(ω, y)) ∈ (0, +∞), ω ∈ Ω ,

is a random variable and (1.31) holds.


It follows from (1.30) that for any D ∈ D and for any ω there exists a
t0 (ω) such that
ϕ(t, θ−t ω)D(θ−t ω) ⊂ B ∗ (ω) := {x : distX (x, x0 ) ≤ 1+r(ω)} for t ≥ t0 (ω) .
Thus (θ, ϕ) is dissipative. □
The notions of dissipative, compact and asymptotically compact random sys-
tems differ only in infinite-dimensional phase spaces.

1.5 Trajectories

In this section we describe some measurable properties of the trajectories of


RDS.
Definition 1.5.1. Let D : ω → D(ω) be a multifunction. We call the mul-
tifunction

ω → γDt (ω) := ∪τ≥t ϕ(τ, θ−τ ω)D(θ−τ ω)

the tail (from the moment t) of the pull back trajectories emanating from D.
If D(ω) = {v(ω)} is a single valued function, then ω → γv (ω) ≡ γD0 (ω) is
said to be the (pull back) trajectory (or orbit) emanating from v.
In the deterministic case Ω is a one-point set and ϕ(t, ω) = ϕ(t) is a semigroup
of continuous mappings. Therefore in this case the tail γDt has the form

γDt = ∪τ≥t ϕ(τ )D = ∪τ≥0 ϕ(τ )(ϕ(t)D) = γ_{ϕ(t)D}^0 ,

i.e. γDt is a collection of the “normal” trajectories emanating from ϕ(t)D.
We note that any tail is a forward invariant multifunction. It also follows
from Proposition 1.3.1(v) that in the case of discrete time (T = Z) the closure
of any tail γDt (ω) is a random closed set. For continuous time we have
the following proposition.
Proposition 1.5.1. For any random closed set {D(ω)} the closure of any
tail γDt (ω) of the pull back trajectories emanating from D is a random
closed set with respect to the σ-algebra Fu of universally measurable sets.

Proof. The idea of the proof is borrowed from Crauel/Flandoli [36]. The
Representation Theorem 1.3.1 gives that D(ω) = g(ω, Y ), where Y is a Polish
space, g(ω, ·) is continuous for all ω ∈ Ω and g(·, y) is measurable for all y ∈ Y .
Therefore for every x ∈ X we have

d(t, ω) := distX (x, ϕ(t, θ−t ω)D(θ−t ω)) = infk distX (x, ϕ(t, θ−t ω)g(θ−t ω, yk )) ,

where {yk } is a dense sequence in Y . Since (t, ω) → (t, θ−t ω) is a measurable


mapping and (t, ω) → dk (t, ω) := distX (x, ϕ(t, ω)g(ω, yk )) is a measurable
function, the function (t, ω) → dk (t, θ−t ω) is also measurable. Consequently
the function (t, ω) → d(t, ω) is B(R+ ) × F-measurable. It is also clear that
dist(x, cl γDt (ω)) = dist(x, γDt (ω)) = infτ≥t d(τ, ω) .

For any a ∈ R+ we have



{ω : infτ≥t d(τ, ω) < a} = projΩ {(τ, ω) : d(τ, ω) < a, τ ≥ t} ,

where projΩ is the canonical projection of R+ × Ω on Ω defined by

projΩ M = {ω ∈ Ω : (t, ω) ∈ M for some t ∈ R+ }.

Hence Proposition 1.3.3 implies that {ω : infτ≥t d(τ, ω) < a} is a universally
measurable set and therefore ω → cl γDt (ω) is a random closed set with respect
to Fu . □

As a direct consequence of Proposition 1.3.5 we also have the following as-


sertions.
Proposition 1.5.2. Let a(ω) be a random variable in X. Assume that t →
ϕ(t, θ−t ω)a(θ−t ω) is a separable process, t ∈ R+ . Then ω → cl γat (ω) is a
forward invariant random closed set with respect to F. In particular, if for
some x ∈ X the mapping t → ϕ(t, θ−t ω)x is a right continuous function for
all t > 0 and ω ∈ Ω, then ω → cl γxt (ω) is a forward invariant random closed
set with respect to F.

Proof. It is clear that



cl γat (ω) = cl {ϕ(τ, θ−τ ω)a(θ−τ ω) : τ ≥ t, τ ∈ Q} ,

where Q is a separability set of the process t → ϕ(t, θ−t ω)a(θ−t ω). Therefore
we can apply Proposition 1.3.1(v). □

Proposition 1.5.3. Let (θ, ϕ) be an RDS such that the function

(t, x) → ϕ(t, θ−t ω)x is a continuous mapping (1.32)

from R+ × X into X. Assume that D is a random closed set such that


{D(θt ω) : t ≤ 0} is a separable collection. Then the closure of the tail γDt
is a forward invariant random closed set with respect to F for every t ≥ 0. In
particular, the closure of γDt possesses this property for every deterministic D.

Proof. Since {D(θt ω) : t ≤ 0} is a separable collection, we can find an


everywhere dense countable set Q such that for any t ≥ 0 and x ∈ D(θ−t ω)
there exist tn ∈ Q and xn ∈ D(θ−tn ω) such that xn → x and tn → t
as n → ∞. Property (1.32) implies that ϕ(tn , θ−tn ω)xn → ϕ(t, θ−t ω)x as
n → ∞. Therefore {ϕ(t, θ−t ω)D(θ−t ω) : t ≥ t0 } is a separable collection for
any t0 ≥ 0. Thus we can apply Propositions 1.3.5 and 1.3.1(v). □

Remark 1.5.1. Assume that the mappings ϕ(t, ω) are restrictions to R+ of


mappings ϕ̃(t, ω) which satisfy the conditions listed in Definition 1.2.1 for
all t, s ∈ R and such that (t, x) → ϕ̃(t, ω)x is a continuous mapping from
R × X into X for every ω ∈ Ω. This situation is typical for RDS generated by
finite-dimensional random and stochastic differential equations (for instance,
this is true for the RDS considered in Examples 1.2.4 and 1.4.4). The cocycle
property for ϕ̃ implies that

ϕ̃(t, θ−t ω) ◦ ϕ̃(−t, ω) = ϕ̃(−t, ω) ◦ ϕ̃(t, θ−t ω) = id, t ∈ R, ω ∈ Ω .

Hence (t, x) → (t, ϕ̃(−t, ω)x) is a bijective mapping from R × X into itself and

ϕ(t, θ−t ω) = ϕ̃(t, θ−t ω) = [ϕ̃(−t, ω)]−1 , t ≥ 0 .

Therefore by Proposition 1.1.6 (Arnold [3]) (t, x) → ϕ(t, θ−t ω)x is a continu-


ous mapping from R × X into X for every ω ∈ Ω provided that X is either a
compact Hausdorff space or a finite-dimensional topological manifold. There-
fore in this case by Proposition 1.5.2 {cl γat (ω)} is a forward invariant random
closed set with respect to F for every a(ω) such that the mapping t → a(θt ω)
is continuous for all ω ∈ Ω. By Proposition 1.5.3 the same is true for cl γDt ,
where D is a deterministic subset in X.


We note that if X is a separable Banach space, then the set of random
variables v(ω) such that t → v(θt ω) is a C ∞ -function for every ω is dense
in the set of all random variables with respect to convergence in probability
(see the argument given in the proof of Proposition 8.3.8 Arnold [3]). We
also note that in the case considered the function t → ϕ(t, θ−t ω)a(θ−t ω) is
a stochastically continuous process (i.e. it is continuous with respect to con-
vergence in probability) for any random variable a(ω). This property follows
from the stochastic continuity of the process t → a(θt ω) (see Arnold [3,
Appendix A.1]).

1.6 Omega-limit Sets

To describe the asymptotic behaviour of RDS as in the deterministic case (cf.


Hartman [51] and also Hale [50], Temam [104], Chueshov [20], for exam-
ple) we use the concept of an omega-limit set. As in Crauel/Flandoli [36]
our definition concerns pull back trajectories.
Definition 1.6.1. Let D : ω → D(ω) be a multifunction. We call the mul-
tifunction

ω → ΓD (ω) := ∩t>0 cl γDt (ω) = ∩t>0 cl { ∪τ≥t ϕ(τ, θ−τ ω)D(θ−τ ω) }

the (pull back) omega-limit set of the trajectories emanating from D.



The following assertion gives another description of omega-limit sets.


Proposition 1.6.1. Let ΓD (ω) be the omega-limit set of the trajectories em-
anating from a multifunction D. Then x ∈ ΓD (ω) if and only if there exist
sequences tn → +∞ and yn ∈ D(θ−tn ω) such that

x = limn→+∞ ϕ(tn , θ−tn ω)yn . (1.33)

Proof. Let x ∈ ΓD (ω). Then we have



x ∈ cl { ∪τ≥n ϕ(τ, θ−τ ω)D(θ−τ ω) } for all n = 1, 2, . . . .

Therefore there exists an element bn such that



bn ∈ ∪τ≥n ϕ(τ, θ−τ ω)D(θ−τ ω) (1.34)

and dist(x, bn ) ≤ 1/n, n = 1, 2, . . .. It follows from (1.34) that there exist


tn ≥ n and yn ∈ D(θ−tn ω) such that bn = ϕ(tn , θ−tn ω)yn . It is clear that we
have (1.33) for these tn and yn .
Vice versa, assume that an element x possesses property (1.33). It is
obvious that for any t > 0 there exists tn such that
 
ϕ(tn , θ−tn ω)yn ∈ ∪τ≥t ϕ(τ, θ−τ ω)D(θ−τ ω) ⊂ cl { ∪τ≥t ϕ(τ, θ−τ ω)D(θ−τ ω) } .

Therefore 
x ∈ cl { ∪τ≥t ϕ(τ, θ−τ ω)D(θ−τ ω) } for all t > 0.

This implies that x ∈ ΓD (ω). □

We note that Proposition 1.6.1 provides us with a description of omega-


limit sets. But it does not guarantee that they are nonempty. The following
assertion gives us conditions under which ΓD (ω) is nonempty.
Proposition 1.6.2. Assume that the RDS (θ, ϕ) is asymptotically compact
in a universe D with the attracting random compact set {B0 (ω)}. Then for
any D ∈ D and for all ω ∈ Ω the omega-limit set ΓD (ω) is a nonempty
compact set and ΓD (ω) ⊂ B0 (ω). The multifunction ω → ΓD (ω) is invariant
and it is a random compact set with respect to the σ-algebra Fu of universally
measurable sets (with respect to F, in the case of discrete time).

Proof. Let tn → ∞ and yn ∈ D(θ−tn ω) be arbitrary sequences. From (1.30)


we have that

ϕ(tn , θ−tn ω)yn → B0 (ω) when n → +∞ ,

i.e. there exists a sequence {bn } ⊂ B0 (ω) such that

distX (ϕ(tn , θ−tn ω)yn , bn ) → 0 when n → +∞ .

The compactness of B0 (ω) implies that for some subsequence {nk } and some
b ∈ B0 (ω) we have that bnk → b. This implies that

ϕ(tnk , θ−tnk ω)ynk → b ∈ B0 (ω) when k → +∞ .

Thus ΓD (ω) is nonempty. It is clear from (1.30) that any element of the form
(1.33) belongs to B0 (ω). Therefore we have ΓD (ω) ⊂ B0 (ω) and, since ΓD (ω)
is closed, ΓD (ω) is a compact set.
Let us prove that ω → ΓD (ω) is invariant. Using the cocycle property we
have

ϕ(t, ω)x = limn→∞ ϕ(t, ω) ◦ ϕ(tn , θ−tn ω)yn = limn→∞ ϕ(t + tn , θ−t−tn ◦ θt ω)yn

for any x ∈ ΓD (ω) of the form (1.33). Due to Proposition 1.6.1 this implies
that ϕ(t, ω)x ∈ ΓD (θt ω). Thus ϕ(t, ω)ΓD (ω) ⊂ ΓD (θt ω) for all t > 0 and
ω ∈ Ω.
Assume that x ∈ ΓD (θt ω) for some t > 0 and ω ∈ Ω. Proposition 1.6.1
implies that
x = limn→∞ ϕ(tn , θ−tn ◦ θt ω)yn , (1.35)

where yn ∈ D(θ−tn ◦ θt ω) and tn → ∞. The cocycle property gives that

x = limn→∞ ϕ(t, ω)zn with zn = ϕ(tn − t, θ−tn +t ω)yn . (1.36)

From (1.30) we have that zn → B0 (ω) as n → ∞. Since B0 (ω) is compact,


there exist {nk } and b ∈ B0 (ω) such that znk → b as k → ∞. Moreover
Proposition 1.6.1 implies that b ∈ ΓD (ω). From (1.36) we obtain that x =
ϕ(t, ω)b. Therefore ΓD (θt ω) ⊂ ϕ(t, ω)ΓD (ω) for all t > 0 and ω ∈ Ω. Thus
{ΓD (ω)} is invariant.
To prove that {ΓD (ω)} is a random compact set with respect to Fu we
use Proposition 1.5.1 and the obvious formula ΓD (ω) = ∩n∈Z+ cl γDn (ω) which
implies in our case that

dist(x, ΓD (ω)) = limn→∞ dist(x, cl γDn (ω)), ω ∈ Ω . (1.37)

Indeed, since ΓD (ω) ⊂ cl γDn+1 (ω) ⊂ cl γDn (ω), we have that

dist(x, cl γDn (ω)) ≤ dist(x, cl γDn+1 (ω)) ≤ dist(x, ΓD (ω))

for any x ∈ X. Therefore the limit in (1.37) exists and

dist(x, ΓD (ω)) ≥ limn→∞ dist(x, cl γDn (ω)), ω ∈ Ω .

Let xn ∈ γDn (ω) be such that

dist(x, xn ) ≤ dist(x, γDn (ω)) + 1/n , n = 1, 2, . . .
Since γDn
(ω) → B0 (ω) as n → ∞ for all ω ∈ Ω, there exist a subsequence
nk = nk (ω) and b ∈ B0 (ω) such that xnk → b. By Proposition 1.6.1 b ∈
ΓD (ω). Therefore

dist(x, ΓD (ω)) ≤ dist(x, b) = limk→∞ dist(x, xnk ) ≤ limn→∞ dist(x, γDn (ω)) .

Thus we obtain (1.37). By Proposition 1.5.1 ω → dist(x, cl γDn (ω)) is Fu -
measurable. Therefore ω → dist(x, ΓD (ω)) is also Fu -measurable. Hence ΓD
is a random set with respect to the universal σ-algebra Fu . □

Remark 1.6.1. The existence and measurability of omega-limit sets with re-
spect to the universal σ-algebra can be proved under a weaker property than
the asymptotic compactness of RDS (θ, ϕ). Assume that {D(ω)} is a random
closed set and for every ω ∈ Ω there exists a compact set BD (ω) ⊂ X such
that
limt→+∞ dX {ϕ(t, θ−t ω)D(θ−t ω) | BD (ω)} = 0 ,

where dX {A|B} = supx∈A distX (x, B). Then, as in the proof of Proposi-
tion 1.6.2, it follows from Proposition 1.5.1 that ΓD exists and ω → ΓD (ω)
is an invariant random compact set with respect to the universal σ-algebra
Fu . If we additionally assume that the closure of the tail γDt (ω) is a
random closed set for every t ≥ 0 (cf. Proposition 1.5.3 and Remark 1.5.1),
then ΓD is a random compact set with respect to F. We refer to Crauel [33]
for other results concerning the measurability of omega-limit sets.

The following two assertions provide us with conditions which guarantee that
{ΓD (ω)} is a random compact set with respect to the σ-algebra F.
Proposition 1.6.3. If {D(ω)} is a forward invariant random compact set
for the RDS (θ, ϕ), then the multifunction ω → ΓD (ω) is an invariant random
compact set with respect to F and ΓD (ω) ⊂ D(ω).

Proof. Since {D(ω)} is a forward invariant set, we have


 
ΓD (ω) = ∩t>0 ϕ(t, θ−t ω)D(θ−t ω) = ∩n∈Z+ ϕ(n, θ−n ω)D(θ−n ω) . (1.38)

Proposition 1.3.1(vi) implies that ω → Dn (ω) := ϕ(n, ω)D(ω) is a random


compact set. Therefore ω → Dn (θ−n ω) is also a random compact set. Conse-
quently it follows from Proposition 1.3.1(iv) that ΓD (ω) is a random compact
set. It is clear from (1.38) that ΓD (ω) is a forward invariant set. Let us prove
its backward invariance. Let x ∈ ΓD (θt ω) for some t > 0 and ω ∈ Ω. Then as
above by Proposition 1.6.1 we have (1.35) and (1.36) with zn ∈ D(ω). Since
D(ω) is compact, we can choose a convergent subsequence {znk } and apply
the same argument as in the proof of Proposition 1.6.2. □

Proposition 1.6.4. Let a(ω) be a random variable in X. Assume that the


process t → ϕ(t, θ−t ω)a(θ−t ω) is separable for t ∈ R+ and for each ω ∈ Ω
there exists t∗ = t∗ (ω) such that γat∗ (ω) is a compact set. Then the omega-
limit set ω → Γa (ω) is a random compact set with respect to F.

Proof. The compactness of γat∗ (ω) implies that Γa (ω) is a nonempty com-
pact set for all ω ∈ Ω. Therefore we can use Proposition 1.5.2, the for-
mula ΓD (ω) = ∩n∈Z+ cl γDn (ω) and the argument given in the proof of Proposi-
tion 1.6.2. □

1.7 Equilibria

A special case of omega-limit sets are random equilibria. They are the random
analog of deterministic fixed points and generate stationary stochastic orbits
(cf. Arnold [3], Arnold/Schmalfuss [11] and Schmalfuss [94]).
Definition 1.7.1. A random variable u : Ω → X is said to be an equilibrium
(or fixed point, or stationary solution) of the RDS (θ, ϕ) if it is invariant
under ϕ, i.e. if

ϕ(t, ω)u(ω) = u(θt ω) for all t≥0 and all ω∈Ω.

It is clear that if u = u(ω) is an equilibrium, then Γu (ω) = u(ω).


Example 1.7.1 (Kick Model). If in Example 1.4.2 we additionally assume
that g is a linear mapping such that ‖g‖ ≤ a < 1, then it is easy to see that

u(ω) = Σ_{k=0}^{∞} gk (ξ(θ−k ω))

is an equilibrium for the RDS generated by (1.21).
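A quick numerical check of this equilibrium property can be carried out in Python; the data X = R, g(x) = 0.5x and i.i.d. Gaussian kicks ξ are illustrative assumptions:

import random

random.seed(3)
xi = {k: random.gauss(0.0, 1.0) for k in range(-400, 2)}    # xi(theta_k omega)

def u(shift, terms=300):
    """Truncated series u(theta_shift omega) = sum_{k>=0} 0.5**k * xi(theta_{shift-k} omega)."""
    return sum(0.5 ** k * xi[shift - k] for k in range(terms))

lhs = 0.5 * u(0) + xi[1]        # phi(1, omega) u(omega) = g(u(omega)) + xi(theta_1 omega)
rhs = u(1)                      # u(theta_1 omega)
print(lhs, rhs, abs(lhs - rhs) < 1e-9)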



Remark 1.7.1. The problem of the construction of equilibria for general RDS
is rather complicated. The following example demonstrates the difficulties in
the construction of equilibria. Let us consider the RDS on R+ constructed
in the Introduction (cf. also Example 1.4.1) with f0 (x) = (1/2) x and f1 (x) =
1/2 + f0 (x) = (1/2)(1 + x). Both functions f0 (x) and f1 (x) have a fixed point:
f0 (0) = 0 and f1 (1) = 1. To obtain an equilibrium we should look for a
solution to the equation fω0 (u(ω)) = u(θ1 ω), where ω = {ωi | i ∈ Z} is a two-
sided sequence consisting of zeros and ones and θ1 is the left one-symbol shift
operator. It is clear that an equilibrium u(ω) is not simply a random variable
which takes as its values the fixed points 0 and 1 of the mappings f0 (x) and
f1 (x). The variable u(ω) can really depend on the sequence ω = {ωi | i ∈ Z} in
a very complicated way. However we prove in Chap.3 that this RDS possesses
a unique globally asymptotically stable equilibrium in R+ with its values
inside the interval (0, 1).
We also note that the results by Ochs/Oseledets [87] and Ochs [85]
show that it is impossible to generalize topological fixed point theorems to
the case of random dynamical systems. However, as we will see in Chaps.3–6,
there are more simple approaches which allow us to construct equilibria for
monotone RDS.
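The pull back construction of such an equilibrium can be illustrated numerically. In the following Python sketch (not part of the original text) the limit u(ω) is approximated by ϕ(n, θ−n ω)x for large n and the equilibrium relation fω0 (u(ω)) = u(θ1 ω) is verified for one sample path:

import random

def f(i, x):
    return 0.5 * x if i == 0 else 0.5 * (1.0 + x)    # the maps f0, f1 of Remark 1.7.1

random.seed(4)
omega = {k: random.randint(0, 1) for k in range(-200, 2)}    # one sample path

def u(shift, n=120, x=0.5):
    """Approximate u(theta_shift omega) by phi(n, theta_{shift-n} omega) x."""
    for k in range(shift - n, shift):
        x = f(omega[k], x)
    return x

u0, u1 = u(0), u(1)
print("u(omega) =", u0, "| inside (0,1):", 0.0 < u0 < 1.0)
print("equilibrium relation holds:", abs(f(omega[0], u0) - u1) < 1e-12)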

The following simple assertion makes it possible to prove the uniqueness of


equilibria, if they exist, in several important cases (see, e.g., Sect. 4.2 below).
Proposition 1.7.1. Let ω → D(ω) be a forward invariant multifunction for
the RDS (θ, ϕ). Assume that on the set

G = {(ω, u, v) : u, v ∈ D(ω), ω ∈ Ω} ⊂ Ω × X × X

there exists a function V : G → R satisfying


(i) V (ω, u(ω), v(ω)) is measurable for any random variables u(ω) and v(ω)
from D(ω);
(ii) for any u and v from D(ω) we have

V (θt ω, ϕ(t, ω)u, ϕ(t, ω)v) ≤ V (ω, u, v) for all t > 0, ω ∈ Ω ; (1.39)

(iii) we have strict inequality in (1.39), if u ≠ v.


Then any two equilibria u1 (ω) and u2 (ω) with the property u1 (ω), u2 (ω) ∈
D(ω) for all ω ∈ Ω are equal on the set of full measure which is invariant
with respect to θ.

Proof. Assume that the RDS (θ, ϕ) has two equilibria u1 and u2 in D such
that u1 (ω) ≠ u2 (ω) on a measurable set U ⊂ Ω with P(U ) > 0. It follows
from condition (iii) that

V (θt ω, ϕ(t, ω)u1 (ω), ϕ(t, ω)u2 (ω)) < V (ω, u1 (ω), u2 (ω)) < ∞ (1.40)

for all ω ∈ U and t > 0. Since u1 and u2 are equilibria, (1.40) is equivalent
to
V (θt ω, u1 (θt ω), u2 (θt ω)) < V (ω, u1 (ω), u2 (ω)) < ∞
for all ω ∈ U and t > 0. From (1.39) we also have

V (θt ω, u1 (θt ω), u2 (θt ω)) ≤ V (ω, u1 (ω), u2 (ω)) < ∞

for all ω ∈ Ω and t > 0. However the functions

ft (ω) := V (θt ω, u1 (θt ω), u2 (θt ω)) and f (ω) := V (ω, u1 (ω), u2 (ω))

have the same probability distribution for every t > 0, but satisfy ft (ω) ≤
f (ω) for ω ∈ Ω and ft (ω) < f (ω) for ω ∈ U . This contradicts the assumption
P(U ) > 0. Thus for any fixed t > 0 we have f (θt ω) = f (ω) on a set of full
measure. Let
Ωn = {ω : f (θn ω) = f (ω)}, n ∈ Z+ .
The sets Ωn are F-measurable and P(Ωn ) = 1. Property (1.39) implies that
f (θt ω) = f (ω) for all t ∈ [0, n] and ω ∈ Ωn . Therefore f (θn−k θs ω) = f (θs ω)
for all s ∈ [0, k] and ω ∈ Ωn , where k ≤ n. Thus

θs Ωn ⊂ Ωn−k for all 0≤s≤k≤n. (1.41)

Let Ω ∗ = ∩n≥1 Ωn . It is clear that P(Ω ∗ ) = 1 and f (θt ω) = f (ω) for all t ≥ 0
and ω ∈ Ω ∗ . From (1.41) we also easily have that θs Ω ∗ ⊆ Ω ∗ for all s ≥ 0.
Therefore Ω̃ = ∩s≥0 θs Ω ∗ = ∩n∈Z+ θn Ω ∗ is an F-measurable θ-invariant set such
that P(Ω̃) = 1. Since Ω̃ ⊂ Ω ∗ , we have that u1 (ω) = u2 (ω) for all ω ∈ Ω̃. □

We note that Proposition 1.7.1 is false without assumption (iii). Indeed,
the identity mapping f (x) = x in R possesses the property |f (x) − f (y)| =
|x − y| and every point x ∈ R is an equilibrium for f . See also the example
of an RDS given in Remark 4.2.1 in Chap.4.
Example 1.7.2. Consider the one-dimensional random differential equation

ẋ(t) = (g(x(t)) + ξ(θt ω)) · h(x(t))

over some metric dynamical system θ. Here ξ is a random variable and g, h :


R → R are smooth functions. Assume that this equation generates RDS in
some interval (a, b) ⊆ R and h(x) > 0 for all x ∈ (a, b). If g(x) is strictly
decreasing on (a, b), then the function

V (u, v) = | ∫_v^u ds/h(s) | , u, v ∈ (a, b) ,

satisfies the hypotheses of Proposition 1.7.1. The same is true for V ∗ (u, v) :=
−V (u, v) provided that g(x) is strictly increasing.
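To see why this V fits the hypotheses of Proposition 1.7.1, here is a short verification (not contained in the original text; we assume that the solutions exist and stay in (a, b)). Let x1 (t) and x2 (t) be the solutions with x1 (0) = u > v = x2 (0); then x1 (t) > x2 (t) for all t ≥ 0 by uniqueness and

(d/dt) ∫_{x2(t)}^{x1(t)} ds/h(s) = ẋ1 (t)/h(x1 (t)) − ẋ2 (t)/h(x2 (t)) = g(x1 (t)) − g(x2 (t)) < 0 ,

because the noise terms ξ(θt ω) cancel and g is strictly decreasing. Hence V (ϕ(t, ω)u, ϕ(t, ω)v) is nonincreasing in t and strictly decreasing whenever u ≠ v, which is exactly hypotheses (ii) and (iii) of Proposition 1.7.1; hypothesis (i) holds since V is continuous.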

1.8 Random Attractors


Below we also need the following concept of a random attractor of an RDS
(see, e.g., Arnold [3], Crauel/Debussche/Flandoli [35], Crauel/Fla-
ndoli [36], Schenk-Hoppé [89], Schmalfuss [92, 93] and the references
therein). The appearance of this concept is motivated by the correspond-
ing definition of a global attractor (cf. Babin/Vishik [13], Chueshov [20],
Hale [50], Ladyzhenskaya [76], Temam [104], for example).
Definition 1.8.1. Let D be a universe. A random closed set {A(ω)} from D
is said to be a random pull back attractor of the RDS (θ, ϕ) in D if A(ω) ≠ X
for every ω ∈ Ω and the following properties hold:
(i) A is an invariant set, i.e. ϕ(t, ω)A(ω) = A(θt ω) for t ≥ 0 and ω ∈ Ω;
(ii) A is attracting in D, i.e. for all D ∈ D

limt→+∞ dX {ϕ(t, θ−t ω)D(θ−t ω) | A(ω)} = 0, ω ∈ Ω , (1.42)

where dX {A|B} = supx∈A distX (x, B).


Below for brevity we sometimes say “random attractor” instead of “random
pull back attractor”.
Remark 1.8.1. (i) If A is a random attractor, then the convergence in (1.42)
and the invariance of the measure P with respect to θ imply that

dX {ϕ(t, ω)D(ω) | A(θt ω)} → 0, D∈D,

in probability as t → ∞, i.e.

limt→+∞ P {ω : dX {ϕ(t, ω)D(ω) | A(θt ω)} > δ} = 0 , D ∈ D ,

for any δ > 0. Thus any pull back attractor is a forward attractor with respect
to convergence in probability. We refer to Ochs [86] for some discussion of
the theory of attractors based on convergence in probability. We note that
an example given in Arnold [3] shows that pull back convergence (1.42)
does not imply forward convergence, i.e. the closeness of ϕ(t, ω)D(ω) and
A(θt ω) in the topology of the space X for every ω ∈ Ω. We also refer to
Scheutzow [91] for a short survey of other (non-equivalent) definitions of a
random attractor.
(ii) An attractor depends crucially on a choice of universe D. Indeed, the
deterministic dynamical system in R generated by the equation

ẋ = x − x³

has one-point attractor A = {1} in the universe of all compact subsets of


R+ \ {0} (see the formula for solutions given in the Introduction). The same
formula implies that the interval [−1, 1] is the attractor in the universe of all

bounded subsets of R and the set {−1, 0, 1} is the attractor in the universe of
all one-point subsets of R (cf. the numerical sketch following this remark). We also note that there exists some classification
of random attractors (see, e.g., Crauel [34]) depending on the choice of
families of sets which are attracted (set attractors, point attractors, etc.).
(iii) Sometimes it is convenient to consider random attractors which do not
belong to the corresponding universe (see Crauel [33, 34], Crauel/Debus-
sche/Flandoli [35], Crauel/Flandoli [36]).
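The dependence of the attractor on the universe mentioned in part (ii) of this remark can be seen in a simple numerical experiment (a sketch under the stated assumptions; the Euler discretization of ẋ = x − x³ only approximates the true flow):

def flow(x, t=30.0, dt=1e-3):
    # crude Euler approximation of the solution of dx/dt = x - x**3 at time t
    for _ in range(int(t / dt)):
        x += dt * (x - x ** 3)
    return x

# one-point sets: each point converges to one of the equilibria -1, 0, 1
print([round(flow(x), 4) for x in (-2.0, -0.3, 0.0, 0.4, 3.0)])

# a compact set in R+ \ {0} (here [0.1, 10]) is attracted by the single point {1}
print([round(flow(x), 4) for x in (0.1, 1.0, 10.0)])

# a bounded set containing a neighbourhood of 0 is attracted only by the whole
# interval [-1, 1]: points close to 0 leave small neighbourhoods of {-1, 0, 1}
# arbitrarily late, as the values at the moderate time t = 5 show
print([round(flow(x, t=5.0), 4) for x in (-1e-6, -1e-3, 1e-3, 1e-6)])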

Proposition 1.8.1. If the RDS (θ, ϕ) possesses a random attractor in the


universe D, then this attractor is unique in D.

Proof. Assume that there exist two random attractors A1 (ω) and A2 (ω) in
the universe D. Since ϕ(t, ω)A1 (ω) = A1 (θt ω), we have

dX {A1 (ω) | A2 (ω)} = dX {ϕ(t, θ−t ω)A1 (θ−t ω) | A2 (ω)}

for all t > 0. Therefore the attraction property (1.42) implies that

dX {A1 (ω) | A2 (ω)} = 0 .

Thus A1 (ω) ⊂ A2 (ω). The same argument gives A2 (ω) ⊂ A1 (ω). □

In a similar way we can prove the following assertion.


Proposition 1.8.2. If the RDS (θ, ϕ) possesses a random attractor in the
universe D, then any backward invariant random closed set from D lies in
the attractor. In particular the attractor contains every equilibrium u(ω) with
the property {u(ω)} ∈ D.
Now we prove a theorem on the existence of random attractors.
Theorem 1.8.1. Let (θ, ϕ) be an asymptotically compact RDS in the uni-
verse D with an attracting random compact set B0 ∈ D. Then this RDS
possesses a unique random compact pull back attractor {A(ω)} in the uni-
verse D, and A(ω) ⊂ B0 (ω) for all ω ∈ Ω. This attractor has the form
 
A(ω) = ΓB0 (ω) ≡ ∩t>0 cl { ∪τ≥t ϕ(τ, θ−τ ω)B0 (θ−τ ω) } for all ω ∈ Ω . (1.43)

We also have the relation



A(ω) = ∩n≥N ϕ(n, θ−n ω)B0 (θ−n ω) for all ω ∈ Ω, N ∈ Z+ . (1.44)

Proof. We follow the line of arguments given for the deterministic case (see,
e.g., Temam [104] or Chueshov [20]).
Let A(ω) be defined by (1.43). Proposition 1.6.2 implies that A(ω) is a
nonempty invariant set and is a compact subset of B0 (ω) for all ω ∈ Ω.

Let us prove the attraction property (1.42). Let D ∈ D. Proposition 1.6.2


shows that ΓD (ω) is a nonempty compact set and ΓD (ω) ⊂ B0 (ω) for every
ω ∈ Ω. It is also easy to see from the invariance of ΓD (ω) that

ΓD (ω) ⊂ ΓB0 (ω) = A(ω) for all ω ∈ Ω . (1.45)

Assume now that property (1.42) is not true for some D ∈ D. Then there
exist ε > 0 and sequences tn → ∞ and yn ∈ D(θ−tn ω) such that

distX (ϕ(tn , θ−tn ω)yn , A(ω)) ≥ ε, n = 1, 2, . . . , (1.46)

for some ω ∈ Ω. It follows from (1.30) that there exists a sequence {bn } ⊂
B0 (ω) such that

limn→+∞ distX (ϕ(tn , θ−tn ω)yn , bn ) = 0 .

Therefore the compactness of B0 (ω) implies that the limit

z = limk→+∞ ϕ(tnk , θ−tnk ω)ynk

exists for some subsequence {nk }. Proposition 1.6.1 and relation (1.45) imply
that z ∈ ΓD (ω) ⊂ A(ω). Thus we have

limk→+∞ distX (ϕ(tnk , θ−tnk ω)ynk , A(ω)) = 0 ,

contradicting equation (1.46).


Now we prove (1.44). Let

ΓN∗ (ω) = ∩n≥N ϕ(n, θ−n ω)B0 (θ−n ω) with N ∈ Z+ .

Since {A(ω)} is invariant and A(ω) ⊂ B0 (ω) for all ω ∈ Ω, we have

A(ω) = ϕ(n, θ−n ω)A(θ−n ω) ⊂ ϕ(n, θ−n ω)B0 (θ−n ω) for all n ∈ Z+ .

Therefore A(ω) ⊂ ΓN∗ (ω) for any ω ∈ Ω and N ∈ Z+ . On the other hand, it
is clear from (1.43) that ΓN∗ (ω) ⊂ A(ω). Thus (1.44) is proved.
To prove that {A(ω)} is a random compact set we use Proposition 1.3.1(iv)
and relation (1.44). □

Remark 1.8.2. (i) It is clear that if the RDS (θ, ϕ) has a random compact
attractor, then (θ, ϕ) is asymptotically compact. Thus Theorem 1.8.1 implies
that (θ, ϕ) possesses a random compact attractor in D if and only if this RDS
is asymptotically compact in D with an attracting set from D.

(ii) Under the hypotheses of Theorem 1.8.1 similarly to the deterministic case
(see, e.g., Chueshov [20, Sect.1.5.2]) we can prove that

limt→+∞ h (A(ω) | ϕ(t, θ−t ω)B(θ−t ω)) = 0, ω ∈ Ω ,

for any absorbing set B ∈ D of the RDS (θ, ϕ), where h(A|B) is the Hausdorff
distance defined by the equality

h(A|B) = dX {A|B} + dX {B|A} with dX {A|B} = supx∈A distX (x, B) .

This property means that the set AB t (ω) := ϕ(t, θ−t ω)B(θ−t ω) provides us
with an approximate image of the random attractor A(ω) for t large enough.
We also refer to Arnold/Schmalfuss [12] for the study of stability prop-
erties of random attractors for finite-dimensional RDS.
Corollary 1.8.1. Let (θ, ϕ) be a dissipative RDS in the universe D with an
absorbing set from D. Assume that the phase space X is locally compact. Then
the RDS (θ, ϕ) possesses a unique global random attractor in the universe D.
Proof. In this case any closed bounded set is compact. Therefore (θ, ϕ) is a
compact RDS and we can apply Theorem 1.8.1. □
Corollary 1.8.2. Assume that for the RDS (θ, ϕ) the hypotheses of Proposi-
tion 1.4.1 on the dissipativity of an RDS possessing a Lyapunov type function
hold. Let the phase space X be finite-dimensional. Then the RDS (θ, ϕ) pos-
sesses a unique random attractor in the universe D of all tempered random
closed sets in X.
Proof. Since X is finite-dimensional, Proposition 1.4.1 implies that (θ, ϕ) is
a compact RDS. Thus we can apply Theorem 1.8.1. □
Theorem 1.8.1 and Corollaries 1.8.1 and 1.8.2 imply the existence of random
attractors for the RDS considered in Examples 1.4.1, 1.4.3, 1.4.4 and 1.4.5.
Below we also need the following simple assertion concerning attractors of
equivalent RDS (cf. Keller/Schmalfuss [63] and Imkeller/Schmalfuss
[59]).
Proposition 1.8.3. Let (θ, ϕ1 ) and (θ, ϕ2 ) be two RDS over the same MDS
θ with phase spaces X1 and X2 resp. Assume that the systems (θ, ϕ1 ) and
(θ, ϕ2 ) are conjugate by a random homeomorphism T from X1 onto X2 (see
Definition 1.2.4) and there exists a compact random attractor A1 for the
RDS (θ, ϕ1 ) in the universe D1 . Then the RDS (θ, ϕ2 ) possesses a random
attractor A2 in the universe
 
D2 = { {T (ω, D(ω))} : {D(ω)} ∈ D1 } .

The attractors A1 and A2 are conjugated by the random homeomorphism T ,


i.e T (ω, A1 (ω)) = A2 (ω) for all ω ∈ Ω.

Proof. Since T is a homeomorphism, Proposition 1.3.1(vi) implies that


A2 (ω) := T (ω, A1 (ω)) is an invariant random compact set. From (1.13) we
also have that
d2 (ω, t) := dX2 {ϕ2 (t, θ−t ω)D2 (θ−t ω) | A2 (ω)}

= dX2 {T (ω, ϕ1 (t, θ−t ω)D1 (θ−t ω)) | T (ω, A1 (ω))} ,

where D2 (ω) = T (ω, D1 (ω)) and dX {A|B} = supx∈A distX (x, B). If d2 (ω, t)
does not tend to 0 as t → ∞ for some ω, then there exist tn → ∞ and
bn ∈ D1 (θ−tn ω) such that

distX2 (T (ω, xn (ω)) , T (ω, A1 (ω))) ≥ ε, n ∈ Z+ , (1.47)

for some ε > 0, where xn (ω) = ϕ1 (tn , θ−tn ω)bn . Since A1 (ω) is an attractor
for (θ, ϕ1 ), there exists a sequence {an } ⊂ A1 (ω) such that

distX1 (xn (ω), an ) → 0 as n → ∞.

The compactness of A1 (ω) implies that xnk (ω) → a for some subsequence
{nk } and a ∈ A1 (ω). Therefore distX2 (T (ω, xnk (ω)) , T (ω, a)) → 0. This
contradicts (1.47). Thus A2 is a random attractor for (θ, ϕ2 ). □

1.9 Dissipative Linear and Affine RDS


In this section we prove several results on global attractors for dissipative
linear and affine random dynamical systems in a real separable Banach space
X. By Definition 1.2.3 the cocycle ϕ of an affine RDS has the form

ϕ(t, ω)x = Φ(t, ω)x + ψ(t, ω) , (1.48)

where Φ(t, ω) is a cocycle over θ consisting of bounded linear operators of X,


and ψ : T+ × Ω → X satisfies

ψ(t + s, ω) = Φ(t, θs ω)ψ(s, ω) + ψ(t, θs ω), t, s ≥ 0 . (1.49)

If ψ(t, ω) ≡ 0 we obtain a linear RDS (θ, Φ).


Our first result gives a criterion for dissipativity of linear RDS.
Proposition 1.9.1. Assume that D is a universe of subsets of X such that
for any D ∈ D and for any λ > 0 the set ω → λD(ω) := {x : xλ−1 ∈ D(ω)}
belongs to D. Then the linear RDS (θ, Φ) is dissipative in D if and only if

limt→+∞ supv∈D(θ−t ω) ‖Φ(t, θ−t ω)v‖ = 0 (1.50)

for any D ∈ D.

Proof. Let r(ω) be a radius of dissipativity of (θ, Φ). Then for any D ∈ D
and for any λ > 0 there exists a time tλ,D (ω) > 0 such that

‖Φ(t, θ−t ω)v‖ ≤ r(ω), v ∈ λD(θ−t ω), t ≥ tλ,D (ω) .

Therefore

supv∈D(θ−t ω) ‖Φ(t, θ−t ω)v‖ ≤ r(ω)/λ , t ≥ tλ,D (ω) .
Hence

lim supt→+∞ supv∈D(θ−t ω) ‖Φ(t, θ−t ω)v‖ ≤ r(ω)/λ
for all λ > 0. This implies (1.50).
Vice versa, (1.50) implies that the deterministic ball {x : ‖x‖ ≤ 1} is an
absorbing set for (θ, Φ). □

From Proposition 1.9.1 we easily have the following assertion.


Corollary 1.9.1. Let D be the universe consisting of one-point subsets of
X. Then
limt→+∞ Φ(t, θ−t ω)x = 0 for any x ∈ X

if and only if the RDS (θ, Φ) is dissipative in D.

Remark 1.9.1. Let D be a universe such that {0} ∈ D. It is easy to see that
the dissipativity of the affine RDS (θ, ϕ) implies the dissipativity of its linear
part (θ, Φ).

Now we consider asymptotically compact affine RDS.


Proposition 1.9.2. Assume that D is a universe of subsets of X such that
{0} ∈ D and for any D ∈ D and λ > 0 the set ω → λD(ω) := {x :
xλ−1 ∈ D(ω)} belongs to D. Let (θ, ϕ) be an asymptotically compact affine
RDS with the cocycle given by (1.48) and with an attracting random compact
set B0 ∈ D. Then the limit

u(ω) := limt→+∞ ψ(t, θ−t ω) (1.51)

exists for all ω ∈ Ω and is an equilibrium for the RDS (θ, ϕ). This equilibrium
is globally asymptotically (pull back) stable in D, i.e.

limt→+∞ supv∈D(θ−t ω) ‖ϕ(t, θ−t ω)v − u(ω)‖ = 0 (1.52)

for any D ∈ D. Moreover {u(ω)} ∈ D and the RDS (θ, ϕ) possesses a unique
equilibrium with this property.

Proof. From (1.49) we get

ψ(τ, θ−τ ω) = Φ(t, θ−t ω)ψ(τ − t, θ−τ ω) + ψ(t, θ−t ω), τ >t≥0. (1.53)

Since {0} ∈ D, we have that

ψ(τ, θ−τ ω) = ϕ(τ, θ−τ ω)0 → B0 (ω) as τ → ∞. (1.54)

Hence there exist τn = τn (ω) → ∞ and b ∈ B0 (ω) such that

ψ(τn , θ−τn ω) → b as n→∞.

Since

ψ(τ − t, θ−τ ω) = ϕ(τ − t, θ−τ ω)0 → B0 (θ−t ω) as τ →∞,

we can choose a subsequence {τnk } and an element b1 (t) ∈ B0 (θ−t ω) such


that ψ(τnk − t, θ−τnk ω) → b1 (t) as k → ∞. Consequently from (1.53) we have

b = Φ(t, θ−t ω)b1 (t) + ψ(t, θ−t ω) . (1.55)

Relation (1.54) implies that (θ, Φ) is asymptotically compact in D. Therefore


by Proposition 1.4.2 (θ, Φ) is dissipative in D. Since B0 ∈ D, Proposition 1.9.1
implies that Φ(t, θ−t ω)b1 (t) → 0 as t → ∞. Therefore the limit in (1.51)
exists. It is easy to see that u(ω) is an equilibrium and u(ω) ∈ B0 (ω). Thus
{u(ω)} ∈ D. Using the relation

ϕ(t, θ−t ω)v − u(ω) = Φ(t, θ−t ω)v − Φ(t, θ−t ω)u(θ−t ω) (1.56)

and Proposition 1.9.1 we obtain (1.52). Finally, if there exists another equi-
librium v(ω) with the property {v(ω)} ∈ D, then we have

v(ω) = Φ(t, θ−t ω)v(θ−t ω) + ψ(t, θ−t ω).

In the limit t → ∞ we obtain v(ω) = u(ω). □

Remark 1.9.2. If in Proposition 1.9.2 the universe D contains all bounded de-
terministic sets, then any equilibrium v(ω) coincides with u(ω) almost surely.
Indeed, since (θ, Φ) is dissipative, from Proposition 1.9.1 we have that

limt→∞ P(UNδ ) = 0

for every δ > 0 and N ∈ N, where

UNδ := {ω : sup‖v‖≤N ‖Φ(t, ω)v‖ > δ} .

It is clear that

{ω : ‖Φ(t, ω)v(ω)‖ > δ} ⊂ {ω : ‖v(ω)‖ > N } ∪ UNδ .

Hence

lim supt→∞ P {ω : ‖Φ(t, ω)v(ω)‖ > δ} ≤ P {ω : ‖v(ω)‖ > N }

for every δ > 0 and N ∈ N. Thus

limt→∞ P {ω : ‖Φ(t, ω)v(ω)‖ > δ} = 0 .

Since v(ω) is an equilibrium, this implies that

limt→∞ P {ω : ‖v(ω) − ψ(t, θ−t ω)‖ > δ} = 0 .

Therefore it follows from (1.51) that v(ω) = u(ω) almost surely.

To obtain a result on the exponential stability of an equilibrium we need the


following concept.
Definition 1.9.1 (Top Lyapunov Exponent). The top Lyapunov expo-
nent for a linear RDS (θ, Φ) in a separable Banach space X is the minimal real
number λ with the following property: there exists a θ-invariant set Ω ∗ ⊂ Ω
of full measure such that

‖Φ(t, ω)x‖ ≤ Rε (ω)e(λ+ε)t ‖x‖ , ω ∈ Ω∗ , t ≥ 0 , (1.57)

for every ε > 0 and all x ∈ X, where Rε (ω) > 0 is a tempered random
variable.
We refer to Arnold [3, Part II] for conditions which guarantee the existence
of the top Lyapunov exponent and for a comprehensive presentation of the
theory of Lyapunov exponents for finite-dimensional RDS.
Following the line of argument given in the proof of Proposition 1.9.2 we
can easily prove the next assertion.
Proposition 1.9.3. Let (θ, ϕ) be an affine RDS with the cocycle given by
(1.48). Assume that the linear RDS (θ, Φ) has top Lyapunov exponent λ < 0
and for every ω ∈ Ω there exists a tempered random compact set B0 (ω) such
that
limt→∞ distX (ψ(t, θ−t ω), B0 (ω)) = 0 .

Then the limit in (1.51) exists and belongs to B0 (ω) for all ω ∈ Ω ∗ . It is
an equilibrium on Ω ∗ , i.e. the property ϕ(t, ω)u(ω) = u(θt ω) holds for all
ω ∈ Ω ∗ . Moreover this equilibrium is unique almost surely and

 
lim e γt
sup ϕ(t, θ−t ω)v − u(ω) = 0, ω ∈ Ω∗ , (1.58)
t→+∞ v∈D(θ−t ω)

for any tempered random closed set D ⊂ X and γ < −λ (Ω ∗ is described in


Definition 1.9.1).

Proof. As in the proof of Proposition 1.9.2 using (1.53) we find that for any
t > 0 there exist b ∈ B0 (ω) and b1 (t) ∈ B0 (θ−t ω) such that (1.55) holds.
Since B0 (ω) is tempered, there exists a tempered random variable r(ω) > 0
such that ‖b1 (t)‖ ≤ r(θ−t ω). Therefore it follows from (1.57) that

Φ(t, θ−t ω)b1 (t) → 0, t → ∞, ω ∈ Ω∗ ,

provided λ + ε < 0. Thus (1.55) implies that the limit in (1.51) exists for
ω ∈ Ω ∗ . It is clear that u(ω) ∈ B0 (ω) for all ω ∈ Ω ∗ and it is an equilibrium
on Ω ∗ . Using relation (1.56) with an arbitrary v ∈ X, we find that

‖ϕ(t, θ−t ω)v − u(ω)‖ ≤ Rε (θ−t ω)e(λ+ε)t (‖v‖ + r(θ−t ω)) .

Since Rε (ω), {D(ω)} and r(ω) are tempered, we obtain (1.58).


To prove the uniqueness of u(ω) we assume that for some random variable
w(ω) we have ϕ(t, ω)w(ω) = w(θt ω) almost surely. Therefore

w(ω) − ψ(t, θ−t ω) = Φ(t, θ−t ω)w(θ−t ω)

almost surely. Since

P {ω : ‖Φ(t, θ−t ω)w(θ−t ω)‖ ≥ δ} = P {ω : ‖Φ(t, ω)w(ω)‖ ≥ δ} → 0 ,

as t → ∞, we obtain P {ω : ‖w(ω) − u(ω)‖ ≥ δ} = 0 for any δ > 0. Thus


w(ω) = u(ω) almost surely. □

To conclude this section we refer to Arnold [3, Sect.5.6] for a more detailed
study of the asymptotic properties of affine systems with general hyperbolic
linear parts in finite-dimensional spaces.

1.10 Connection Between Attractors and Invariant Measures

A number of interesting properties follow from the fact that the RDS (θ, ϕ)
has a random attractor. One of them is the existence of an invariant measure
of (θ, ϕ) in the sense of the theory of RDS. In this section we introduce the
corresponding notions and briefly discuss the properties of these measures.

For details we refer to Crauel [31, 32], Crauel/Flandoli [36], Arnold


[3], Schmalfuss [95] and the references therein.
As above we consider an RDS on a Polish space X and denote by B the
Borel σ-algebra on X.
To explain the main idea of introducing of invariant measures we start
with a discrete time RDS which generates a Markov chain (cf. Example 1.2.1).
Let θ = (Ω, F, P, {θt , t ∈ Z}) be a discrete metric dynamical system and
ψn (ω) := ψ(θn ω, ·) be independent identically distributed (i.i.d.) random
continuous mappings from X into itself. In this case we can construct an
RDS by defining the cocycle ϕ by the formula

ϕ(n, ω)x = ψn−1 (ω) ◦ ψn−2 (ω) ◦ . . . ◦ ψ0 (ω)x, x ∈ X.

One can prove (see Arnold [3, p.53]) that the family of sequences

{Φxn := ϕ(n, ω)x : n ∈ Z+ , x ∈ X}

is a homogeneous Markov chain with state space X and transition probability

P (x, B) := P{Φn+1 ∈ B | Φn = x} = P{ω : ϕ(1, ω)x ∈ B}, B ∈ B(X) .

For detailed presentation of the theory of Markov chains we refer to Gih-


man/Skorohod [48] and Meyn/Tweedie [83], for example. The central
topic of Markov chain theory is the existence of a stationary (invariant) prob-
ability measure (we denote it by ν) which is defined as a measure on (X, B)
satisfying the relation

ν(B) = (P ∗ ν)(B) := ∫X P (x, B) ν(dx), B ∈ B .

The main consequence of the existence of a stationary probability measure is


the possibility of producing a stationary process from the Markov chain. If Φ0
is a random variable with distribution ν, then {Φn = ϕ(n, ω)Φ0 : n ∈ Z+ } is a
stationary process, i.e. all variables Φn have the same distribution. Stationary
measures are also important because they define the long term and ergodic
behaviour of the chain (Meyn/Tweedie [83]).
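As a simple illustration of this construction (a sketch, not part of the original text), consider the i.i.d. random affine maps ψ(ω, x) = 0.5x + ξ with Gaussian ξ; the induced Markov chain on R has a Gaussian stationary measure ν, which can be checked by simulation:

import random

random.seed(5)

def step(x):
    return 0.5 * x + random.gauss(0.0, 1.0)    # one application of a random map

# long run of the chain started at x = 0: time averages approximate nu
chain, x = [], 0.0
for n in range(200000):
    x = step(x)
    if n > 1000:                               # discard a burn-in segment
        chain.append(x)

mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
# for this affine model nu is Gaussian with mean 0 and variance sum_k 0.25**k = 4/3
print("sample mean:", round(mean, 3), "sample variance:", round(var, 3),
      "theoretical variance:", round(4.0 / 3.0, 3))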
One can prove (see Arnold [3, Chap.2]) that in the above case a proba-
bility measure ν on (X, B) is stationary for the Markov chain {Φn } if and only
if the measure P × ν is invariant with respect to the skew-product semiflow
πn defined by (1.6), i.e.
 
∫Ω×X f (ω, x) P(dω)ν(dx) = ∫Ω×X f (θn ω, ϕ(n, ω)x) P(dω)ν(dx)

for any bounded measurable function f on Ω × X. This observation is the


basis for the following general definition.

Definition 1.10.1 (Invariant Measure for RDS). Let (θ, ϕ) be an RDS


with phase space X. A probability measure µ on (Ω × X, F × B) is said to be
an invariant measure for RDS (θ, ϕ) (or ϕ-invariant, for short) if
(i) it is invariant with respect to the skew-product semiflow πt (see (1.6)),
i.e. πt µ = µ which means that
 
∫_{Ω×X} f (ω, x)µ(dω, dx) = ∫_{Ω×X} f (θt ω, ϕ(t, ω)x)µ(dω, dx)

for all t ∈ T+ and f ∈ L1 (Ω × X, µ);


(ii) the basic probability measure P is the Ω-marginal of µ on (Ω, F), i.e.
µ(A × X) = P(A) for any A ∈ F.
The measure µ is said to be ϕ-ergodic if for any C ∈ F × B with the property
that πt−1 C = C for all t ≥ 0, we have either µ(C) = 0 or µ(C) = 1.
It is known (see, e.g., Arnold [3] and the references therein) that any proba-
bility measure µ on (Ω×X, F×B) possesses a disintegration (or factorization),
i.e. there exists a function (ω, B) → µω (B) from Ω × B into the interval [0, 1]
such that
(i) ω → µω (B) is F-measurable for any B ∈ B;
(ii) there exists a measurable set Qµ in Ω such that P(Qµ ) = 1 and B →
µω (B) is a probability measure on (X, B) for all ω ∈ Qµ ;
(iii) for all f ∈ L1 (Ω × X, µ) we have
   
∫_{Ω×X} f (ω, x)µ(dω, dx) = ∫_Ω ( ∫_X f (ω, x)µω (dx) ) P(dω) .

The disintegration µω is unique P-almost surely.


Example 1.10.1. It follows directly from Definition 1.7.1 that any equilibrium
u(ω) for the RDS (θ, ϕ) generates an invariant measure by the formula
 
∫_{Ω×X} f (ω, x)µ(dω, dx) = ∫_Ω f (ω, u(ω))P(dω) . (1.59)

The factorization µω of this invariant measure is a random Dirac measure,


i.e. µω = δu(ω) , where δu(ω) is defined by the formula

∫_X f (x)δu(ω) (dx) = f (u(ω)), f ∈ Cb (X) ,

with Cb (X) the space of bounded continuous functions on X. We also note


that if θ is an ergodic metric dynamical system, then every equilibrium u(ω)
generates a ϕ-ergodic invariant measure by the formula (1.59). Indeed, let
C ∈ F × B be an invariant set, i.e. πt−1 C = C for all t ≥ 0. Then

A := {ω : (ω, u(ω)) ∈ C} = {ω : (ω, u(ω)) ∈ πt−1 C}

= {ω : (θt ω, u(θt ω)) ∈ C} = θ−t A

for all t ≥ 0. Since θ−t = θt^{−1} , we have θt A = A for all t ∈ R. The ergodicity
of θ implies that we have either P(A) = 0 or P(A) = 1. It is clear from (1.59)
that µ(C) = P(A). Thus µ is ϕ-ergodic.
The following assertion (see, e.g., Crauel/Flandoli [36], Crauel [32, 33]
and Arnold [3]) describes the relation between invariant measures and for-
ward invariant random sets.
Proposition 1.10.1. A probability measure µ on (Ω ×X, F ×B) is invariant
for (θ, ϕ) if and only if its disintegration µω possesses property ϕ(t, ω)µω =
µθt ω P-almost surely, i.e. for any f ∈ Cb (X) we have
 
∫_X f (ϕ(t, ω)x)µω (dx) = ∫_X f (x)µ_{θt ω} (dx) P-almost surely .

Moreover there exists a forward invariant random closed set {C(ω)} such that
µω (C(ω)) = 1 for almost all ω ∈ Ω.
On the other hand for any forward invariant random compact set {C(ω)}
there exists an invariant measure µ concentrated on {C(ω)}, i.e. µ{(ω, x) :
x ∈ C(ω)} = 1. In particular if the RDS (θ, ϕ) possesses a random compact
attractor {A(ω)} in the universe D which contains all bounded determinis-
tic sets, then there exists an invariant measure µ concentrated on {A(ω)}.
Moreover in the last case every invariant probability measure is concentrated
on {A(ω)}.

Remark 1.10.1. We note that if the cocycle ϕ can be extended to a cocycle


ϕ̃ with two-sided time T, then in Proposition 1.10.1 we can choose a perfect
version of disintegration µω , i.e. the invariant measure µ possesses a disinte-
gration µ̃ω such that

ϕ(t, ω)µ̃ω = µ̃θt ω for all t ≥ 0, ω ∈ Ω .

We refer to Scheutzow [90] for the proof of this result. We also refer to
Schenk-Hoppé [89] for additional properties of invariant measures in the
case of invertible cocycles, i.e. for RDS with time T (not T+ ).

Let us define the future F+ and the past F− σ-algebras for RDS (θ, ϕ) by
the formulas
F+ = σ{ω → ϕ(τ, θt ω) : t, τ ≥ 0}

and
F− = σ{ω → ϕ(τ, θ−t ω) : 0 ≤ τ ≤ t} ,
where σ{fα (ω) : α ∈ Λ} denotes the σ-algebra generated by the mappings
{fα }, where α ∈ Λ.
Definition 1.10.2 (Markov Measure). A probability measure µ on (Ω ×
X, F×B) is said to be a Markov measure if its disintegration µω is measurable
with respect to the past σ-algebra F− .
The following theorem (see, e.g. Crauel [31, 32], Crauel/Flandoli [36]
and Arnold [3]) shows that invariant Markov measures supported by the
random attractor for the RDS (θ, ϕ) generate stationary probability measures
in the phase space of this RDS.
Theorem 1.10.1. Assume that the RDS (θ, ϕ) possesses a random compact
attractor {A(ω)} in the universe D which contains all bounded determin-
istic sets. Then there exists an invariant Markov measure µ supported by
{A(ω)}, i.e. µ{(ω, x) : x ∈ A(ω)} = 1. Assume additionally that the pro-
cesses {ϕ(t, ω)x : x ∈ X} form a Markov family, i.e. the stochastic kernels
Pt (x, B) := P{ω : ϕ(t, ω)x ∈ B} satisfy the Chapman-Kolmogorov equation

Pt+s (x, B) = ∫_X Pt (y, B)Ps (x, dy), t, s ≥ 0, B ∈ B .

If the σ-algebras F− and F+ are independent, then for any invariant Markov
measure µ supported by {A(ω)} the measure ρ on (X, B) defined by the formula

ρ(B) = ∫_Ω µω (B)P(dω), B ∈ B ,

is a stationary probability measure for the Markov semigroup associated with
the family {ϕ(t, ω)x : x ∈ X}, i.e.

ρ(B) = ∫_X Pt (x, B) ρ(dx), B ∈ B ,

or, in equivalent form,


 
∫_X g(x) ρ(dx) = ∫_X Eg(ϕ(t, ·)x) ρ(dx), g ∈ Cb (X) .

In particular under the conditions of this theorem every F− -measurable equi-


librium u(ω) with the property {u(ω)} ∈ D generates a stationary measure
on (X, B) by the formula ρ(B) = EχB (u), where χB (x) = 1 for x ∈ B and
χB (x) = 0 otherwise, i.e. by the formula ρ(B) = P{ω : u(ω) ∈ B}, B ∈ B.
We also note that if F− and F+ are independent, then every invariant
Markov measure is supported by the attractor of (θ, ϕ) in the universe con-
sisting of all finite subsets of X (see Crauel [34]). Random systems generated
by stochastic differential equations give examples of RDS where the future
and past σ-algebras are independent (see Arnold [3, Sect.2.3]).
2. Generation of Random Dynamical Systems

In this chapter we collect some results concerning those random dynamical


systems generated by random and stochastic ordinary differential equations.
Most of them are well-known (see Arnold [3, Chap.2] and the references
therein) and we include them here mainly for the sake of completeness. We
also prove several assertions on the existence of deterministic invariant do-
mains for these systems and consider relations between random and stochastic
ordinary differential equations. These results are important in the study of
monotone dynamical systems connected with random and stochastic differ-
ential equations.

2.1 RDS Generated by Random Differential


Equations

In this section we consider a class of ordinary differential equations (ODE)


whose right-hand sides contain ω as a parameter. For every fixed ω these equa-
tions can be solved as deterministic nonautonomous ODEs. They model the
so-called “real noise case” and also include periodic and almost-periodic equa-
tions as particular cases. We also refer to Ladde/Lakshmikantham [75] for
another approach to random differential equations.
Let θ = (Ω, F, P, {θt , t ∈ R}) be a metric dynamical system. We assume that
f = (f1 , . . . , fd ) : Ω × Rd → Rd is a measurable function which is locally
bounded and locally Lipschitz continuous with respect to x for every ω ∈ Ω.
More precisely, we assume that for any compact set K ⊂ Rd there exists a
random variable CK (ω) ≥ 0 such that
∫_a^{a+1} CK (θt ω) dt < ∞ for all a ∈ R, ω ∈ Ω , (2.1)

and
|f (ω, x)| ≤ CK (ω), |f (ω, x) − f (ω, y)| ≤ CK (ω) · |x − y| (2.2)
for any x, y ∈ K and ω ∈ Ω. Here and below | · | is the Euclidean distance in
Rd .


We emphasize that assumptions (2.1) and (2.2) are stated here for all
ω ∈ Ω. This does not spoil generality because we can apply the following
simple perfection procedure. Assume that (2.1) and (2.2) hold almost surely
and consider the sets
  b 
ΩN = { ω : ∫_a^b CKN (θt ω) dt < ∞ for all a < b } ,

where KN = {x ∈ Rd : |x| ≤ N }. It is clear that ΩN is a θ-invariant subset


of Ω and P(ΩN ) = 1. The set Ω ∗ = ∩N ∈N ΩN possesses the same properties.
Relations (2.1) and (2.2) are valid for all ω ∈ Ω ∗ . Therefore instead of θ we
can consider the metric dynamical system θ∗ = (Ω ∗ , F∗ , P, {θt , t ∈ R}), where
F∗ is the σ-algebra induced by F on Ω ∗ . Another way to obtain a perfect
version of relations (2.1) and (2.2) would be to redefine f (ω, x) on Ω \ Ω ∗ in
an appropriate way.
We consider the random differential equation (RDE) in Rd

ẋ(t) = f (θt ω, x(t)), x(0) = x0 ∈ Rd , (2.3)

driven by the metric dynamical system θ. In applications random differential


equations usually arise in the following way. Assume that g(·, ·) : Rm × Rd →
Rd is a continuous function such that for any compact set K ⊂ Rd there
exists a constant CK ≥ 0 such that

|g(λ, x)| ≤ CK · (1 + |λ|p ), λ ∈ Rm , x ∈ K ,

and

|g(λ, x) − g(λ, y)| ≤ CK · (1 + |λ|p ) · |x − y|, λ ∈ Rm , x, y ∈ K ,

for some p ≥ 1. Let {ξt (ω) : t ∈ R} be a stationary random process in


Rm with continuous trajectories on a probability space (Ω, F, P). For this
process there is the standard realization such that the functions t → ξt (ω)
are continuous for all ω ∈ Ω (see Arnold [3, Appendix]). Let θ be the
metric dynamical system generated by ξt (ω). In this case ξt (ω) = ξ0 (θt ω).
If E|ξ0 |p < ∞, then the random function f (ω, x) = g(ξ0 (ω), x) satisfies (2.1)
and (2.2) and equation (2.3) turns into RDE

ẋ(t) = g(ξt (ω), x(t)), x(0) = x0 ∈ Rd . (2.4)

This equation can be interpreted as a model for the description of dynamics


of a system governed by the equation ẋ = g(λ, x) which takes into account
random fluctuations of the external parameter λ around λ0 = Eξ (cf. the
discussion in the Introduction). We also note that in the case of RDE of the
form (2.4) the function t → f (θt ω, x) ≡ g(ξt (ω), x) is continuous for all ω ∈ Ω
and x ∈ Rd . Several results of Chap.5 rely on this (or a weaker) continuity
property.

Definition 2.1.1. A function x(t, ω) = (x1 (t, ω), . . . , xd (t, ω)) is said to
be a local solution to problem (2.3) if for every ω ∈ Ω there exists t0 =
t0 (ω, x0 ) > 0 such that x(t, ω) is continuous with respect to t from the inter-
val (0, t0 (ω, x0 )) into Rd for each ω ∈ Ω and satisfies the equation
x(t, ω) = x0 + ∫_0^t f (θτ ω, x(τ, ω)) dτ, 0 < t < t0 (ω, x0 ), ω ∈ Ω . (2.5)

If t0 (ω, x0 ) = ∞ for all ω ∈ Ω, then x(t, ω) is said to be a global solution to


(2.3).
Remark 2.1.1. It is clear from (2.5) that x(t, ω) is absolutely continuous on
the segment [0, (1 − δ) · t0 (ω, x0 )] for any ω ∈ Ω and 0 < δ < 1 and for each
ω ∈ Ω it satisfies differential equation (2.3) for almost all t ∈ (0, t0 (ω, x0 )).
Proposition 2.1.1. Under conditions (2.1) and (2.2) problem (2.3) has a
unique local solution x(t, ω) ≡ x(t, ω; x0 ) for any initial data x0 ∈ Rd . This
solution depends continuously on x0 for every ω ∈ Ω. If we assume addition-
ally that f (ω, ·) ∈ C 1 (Rd ) for each ω ∈ Ω, then x(t, ω; x0 ) is continuously
differentiable with respect to the initial data x0 and the Jacobian
Dx0 x(t, ω) ≡ Dx0 x(t, ω; x0 ) = [ ∂xi (t, ω; x0 ) / ∂x0j ]_{i,j=1}^d

solves the variational equation


Dx0 x(t, ω) = I + ∫_0^t Dx f (θτ ω, x(τ, ω)) Dx0 x(τ, ω) dτ (2.6)

for all t from the interval (0, t0 (ω, x0 )) of the existence of the solution x(t, ω).
Moreover the determinant detDx0 x(t, ω) satisfies Liouville’s equation
det Dx0 x(t, ω) = exp ( ∫_0^t tr{Dx f (θτ ω, x(τ, ω))} dτ ) (2.7)

for all t ∈ (0, t0 (ω, x0 )).


Proof. This is a direct ω-wise adaptation of the corresponding deterministic
proof (see, e.g., Coddington/Levinson [28] and Amann [2]). 2
Theorem 2.1.1. Let (2.1) and (2.2) be valid. Assume that the solution
x(t, ω; x0 ) to problem (2.3) given by Proposition 2.1.1 is global for all x0 ∈ Rd
and ω ∈ Ω (see Definition 2.1.1). Then the RDE (2.3) generates an RDS
(θ, ϕ) with the cocycle ϕ defined by the formula

ϕ(t, ω, x0 ) = x(t, ω; x0 ), t > 0, ω ∈ Ω, x0 ∈ Rd ,

where x(t, ω; x0 ) is the global solution to problem (2.3) for the initial data
x0 ∈ Rd . Moreover the mapping (t, x) → ϕ(t, ω, x) is continuous for all

ω ∈ Ω. If we assume additionally that f (ω, ·) ∈ C 1 (Rd ) for each ω ∈ Ω,


then (θ, ϕ) is a C 1 RDS and the Jacobian
Dx ϕ(t, ω, x) = [ ∂[ϕ(t, ω, x)]i / ∂xj ]_{i,j=1}^d

uniquely solves the variational equation


Dx ϕ(t, ω, x) = I + ∫_0^t Dx f (θτ ω, ϕ(τ, ω, x)) Dx ϕ(τ, ω, x) dτ, t > 0, ω ∈ Ω . (2.8)
Moreover the determinant detDx ϕ(t, ω, x) satisfies Liouville’s equation
det Dx ϕ(t, ω, x) = exp ( ∫_0^t tr{Dx f (θτ ω, ϕ(τ, ω, x))} dτ ) , t > 0 . (2.9)

Proof. This follows from Proposition 2.1.1. We refer to Arnold [3] for de-
tails. 2
Corollary 2.1.1. Assume that f (ω, x) satisfies (2.1) and (2.2) and there
exist random variables c1 (ω) and c2 (ω) such that t → cj (θt ω) is locally inte-
grable and
⟨x, f (ω, x)⟩ ≤ c1 (ω)|x|2 + c2 (ω) , (2.10)
where ⟨·, ·⟩ is the inner product in Rd generated by the Euclidean norm | · |.
Then the conclusions of Theorem 2.1.1 are true.
Proof. Under condition (2.10) we obviously have (cf. Remark 2.1.1) that
(1/2) · (d/dt) |x(t, ω)|2 ≤ c1 (θt ω)|x(t, ω)|2 + c2 (θt ω)
on the existence semi-interval [0, t0 (ω, x0 )). Consequently the Gronwall lemma
gives
|x(t, ω)|2 ≤ ( |x0 |2 + 2 ∫_0^t |c2 (θτ ω)| dτ ) · exp ( 2 ∫_0^t |c1 (θτ ω)| dτ ) . (2.11)

Therefore the standard result on continuation of solutions (see, e.g., Hart-


man [51] or Hale [49]) gives that the solution x(t, ω) can be continued to
the whole semi-axis R+ . 2
Example 2.1.1 (Binary Biochemical Model). The equations
ẋ1 = g(x2 ) − α1 (θt ω)x1 ,
(2.12)
ẋ2 = x1 − α2 (θt ω)x2 ,
generate an RDS in R2 provided that g(x) ∈ C 1 (R), g′(x) is bounded and
the function t → αi (θt ω) is locally integrable for each ω ∈ Ω and i = 1, 2. The
cocycle of this RDS has the form ϕ(t, ω)x = x(t), where x(t) = (x1 (t), x2 (t))
is the solution to (2.12) with x(0) = x.
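As a purely illustrative numerical sketch (not from the original text), equations (2.12) can be integrated pathwise for one fixed ω. Below the stationary coefficients αi (θt ω) are modelled by randomly phased oscillations and g(x) = 1/(1 + x^2) is one admissible choice with bounded derivative; every concrete value is an assumption made only for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # one sample point omega: random phases define stationary paths t -> alpha_i(theta_t omega)
    phi1, phi2 = rng.uniform(0, 2*np.pi, size=2)
    alpha1 = lambda t: 1.0 + 0.3*np.sin(t + phi1)
    alpha2 = lambda t: 1.2 + 0.3*np.cos(t + phi2)
    g = lambda x: 1.0/(1.0 + x**2)        # g in C^1 with bounded derivative, g(0) >= 0

    def rhs(t, x):
        x1, x2 = x
        return np.array([g(x2) - alpha1(t)*x1, x1 - alpha2(t)*x2])

    # explicit Euler integration of t -> phi(t, omega)x for this omega
    dt, T = 1e-3, 50.0
    x = np.array([0.5, 0.5])
    for k in range(int(T/dt)):
        x = x + dt*rhs(k*dt, x)

    print("phi(T, omega)x =", x)   # the trajectory stays in R^2_+ (cf. Example 2.2.2)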

Remark 2.1.2. (i) Under the hypotheses of Corollary 2.1.1 the cocycle ϕ(t, ω)
possesses the following property which is important in the study of pull back
trajectories (cf. Proposition 1.5.2): for every x ∈ Rd and ω ∈ Ω the function
t → ϕ(t, θ−t ω)x is right continuous on R+ . To prove this we note that (2.5)
implies the relation
ϕ(t, θ−t ω)x − x = ∫_{−t}^0 f (θs ω, ϕ(t + s, θ−t ω)x) ds .

It follows from (2.11) that for any T > 0 there exists CT (ω) > 0 such that

|ϕ(t + s, θ−t ω)x| ≤ CT (ω) for all − t ≤ s ≤ 0, 0 ≤ t ≤ T .

Therefore from (2.2) we have


|ϕ(t, θ−t ω)x − x| ≤ ∫_{−t}^0 CK(ω) (θs ω) ds ,

where K(ω) = {x ∈ Rd : |x| ≤ CT (ω)}. Therefore (2.1) implies that

lim_{t→+0} |ϕ(t, θ−t ω)x − x| = 0 for any x ∈ Rd , ω ∈ Ω . (2.13)

By the cocycle property we have ϕ(s + t, θ−s−t ω) = ϕ(s, θ−s ω)ϕ(t, θ−s−t ω)
for any t, s ≥ 0. Therefore (2.13) implies that

lim_{t→s+0} |ϕ(t, θ−t ω)x − ϕ(s, θ−s ω)x| = 0 for any s > 0, x ∈ Rd , ω ∈ Ω .

We note it is also possible to prove the continuity of the mapping (t, x) →


ϕ(t, θ−t ω)x for every ω ∈ Ω (see Arnold [3, Part I]). However we do not
use this in what follows.
(ii) Assume that in (2.10) the random variable c1 (ω) satisfies the condition
lim_{t→+∞} (1/t) ∫_0^t c1 (θτ ω) dτ = lim_{t→+∞} (1/t) ∫_{−t}^0 c1 (θτ ω) dτ = α

for all ω ∈ Ω with a negative constant α and that the variable max{0, c2 (ω)}
is tempered. Then under the hypotheses of Corollary 2.1.1 we can apply
Proposition 1.4.1 (see also Remark 1.4.1) with the function V (x) = |x|2
to prove that the RDS generated by (2.3) is dissipative in the universe of all
tempered subsets of Rd and possesses a random attractor in this universe (see
Corollary 1.8.2). In particular (2.12) generates a dissipative RDS provided
that the assumptions of Example 1.4.4 hold.

Now we consider the affine (linear nonhomogeneous) RDE in Rd

ẋ(t) = A(θt ω)x(t) + b(θt ω), x(0) = x0 ∈ Rd , (2.14)



driven by the metric dynamical system θ. Here A(ω) = {aij (ω)}_{i,j=1}^d is a


random matrix and b(ω) = (b1 (ω), . . . , bd (ω)) is a random vector in Rd .
If ‖A(θt ω)‖ and |b(θt ω)| belong to the space L1loc (R) for all ω ∈ Ω, then
we can apply Corollary 2.1.1 to construct an affine RDS (θ, ϕ) in Rd . It is
easy to see that the cocycle ϕ can be represented in the form
ϕ(t, ω)x = Φ(t, ω)x + ∫_0^t Φ(t − s, θs ω)b(θs ω) ds , (2.15)

where Φ(t, ω) is the linear cocycle in Rd generated by the linear RDE

ẋ(t) = A(θt ω)x(t), x(0) = x0 ∈ Rd . (2.16)

The following result contains useful information on the top Lyapunov expo-
nent of the linear RDS (θ, Φ). It is an easy consequence of the multiplicative
ergodic theorem (see, e.g., Arnold [3, Chaps.3,4]).
Theorem 2.1.2. Assume that the matrix A(ω) satisfies ‖A(·)‖ ∈ L1 (Ω, F, P)
and t → ‖A(θt ω)‖ ∈ L1loc (R) for all ω ∈ Ω. Let Φ(t, ω) be the linear cocycle in
Rd generated by (2.16). Then there exists a θ-invariant set Ω ∗ ⊂ Ω of full
measure such that for each x ∈ Rd \ {0} the Lyapunov exponent
λ(ω, x) := lim_{t→+∞} (1/t) log |Φ(t, ω)x| (2.17)
exists for all ω ∈ Ω ∗ . For every ω ∈ Ω ∗ the image of the function x →
λ(ω, x) is a finite set. If θ is an ergodic metric dynamical system, then λ :=
maxx∈Rd \{0} λ(ω, x) is a constant on Ω ∗ and it is the top Lyapunov exponent
in the sense of Definition 1.9.1. Moreover in this case we have

Eµmin ≤ λ(ω, x) ≤ Eµmax , x ∈ Rd \ {0}, ω ∈ Ω∗ , (2.18)

where µmin (ω) and µmax (ω) are the least and the greatest eigenvalues of the
Hermitian part of the matrix A(ω). In particular the top Lyapunov exponent
λ belongs to the interval [Eµmin , Eµmax ].
Proof. This follows directly from Arnold [3, Theorem 3.4.1] (see also
Arnold [3, Example 3.4.15]). Relation (2.18) follows from the Birkhoff–
Khinchin ergodic theorem (see, e.g., Sinai Ya. G. [100]) and the argument
given in Hartman [51, p.56], see also Arnold [3, Theorem 6.2.8]. 2
We also note that if Eµmax < 0 and b(ω) is tempered, then the affine RDS
(θ, ϕ) generated by (2.14) over an ergodic θ is dissipative in the universe of
all tempered subsets of Rd (see Remark 2.1.2(ii)) and both Propositions 1.9.2
and 1.9.3 can be applied here.
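One rough way to see the limit (2.17) numerically is to integrate (2.16) for a single fixed ω and to renormalize periodically. The sketch below does this for an illustrative 2×2 stationary matrix path A(θt ω); the particular matrix, the explicit Euler scheme and all step sizes are assumptions of the example and not part of the theory.

    import numpy as np

    rng = np.random.default_rng(3)
    ph = rng.uniform(0, 2*np.pi, size=4)

    def A(t):
        # an illustrative stationary matrix path t -> A(theta_t omega)
        return np.array([[-1.0 + 0.4*np.sin(t + ph[0]), 0.3*np.cos(t + ph[1])],
                         [0.3*np.sin(t + ph[2]),       -0.8 + 0.4*np.cos(t + ph[3])]])

    dt, T = 1e-2, 2000.0
    x, log_norm = np.array([1.0, 0.0]), 0.0
    for k in range(int(T/dt)):
        x = x + dt*(A(k*dt) @ x)
        if (k + 1) % 1000 == 0:            # renormalize to avoid under/overflow
            r = np.linalg.norm(x)
            log_norm += np.log(r)
            x /= r
    log_norm += np.log(np.linalg.norm(x))
    print("estimate of the top Lyapunov exponent:", log_norm/T)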
Example 2.1.2 (1D Affine RDE). Consider the one-dimensional RDE

ẋ = α(θt ω)x + β(θt ω)



over an ergodic metric dynamical system θ, where α(ω) and β(ω) are random
variables such that t → α(θt ω) and t → β(θt ω) are locally integrable. This
equation generates an affine RDS in R. The cocycle ϕ has the form (2.15)
with Φ(t, ω)x = x · exp ( ∫_0^t α(θτ ω)dτ ). If α ∈ L1 (Ω, F, P), then the Birkhoff–
Khinchin ergodic theorem implies that the (top) Lyapunov exponent for (θ, Φ)
is λ = Eα (see Remark 1.4.1). The RDS (θ, ϕ) is dissipative in the universe
D of all tempered subsets of R provided that Eα < 0 and β(ω) is a tempered
random variable. In this case
ψ(t, θ−t ω) = ∫_{−t}^0 β(θs ω) exp ( ∫_s^0 α(θτ ω)dτ ) ds

in representation (1.48) and therefore (see Propositions 1.9.2 and 1.9.3 and
also Remark 1.9.2) the RDS (θ, ϕ) possesses a unique exponentially stable
equilibrium
u(ω) = ∫_{−∞}^0 β(θs ω) exp ( ∫_s^0 α(θτ ω)dτ ) ds .

In the case Eα > 0 the RDS (θ, ϕ) is not dissipative in D (see Remark 1.9.1).
Nevertheless a simple calculation shows that
v(ω) = − ∫_0^∞ β(θs ω) exp ( − ∫_0^s α(θτ ω)dτ ) ds

is an equilibrium for (θ, ϕ) provided that Eα > 0.
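The pullback construction behind u(ω) can be observed numerically. In the hedged sketch below α(θt ω) and β(θt ω) are modelled by randomly phased oscillations with time average Eα = −1 < 0 (an assumption made only for the illustration), and ϕ(t, θ−t ω)x is obtained by integrating the equation over [−t, 0]; for different initial points x the values approach one and the same limit, a realization of u(ω).

    import numpy as np

    rng = np.random.default_rng(2)
    p1, p2 = rng.uniform(0, 2*np.pi, size=2)
    alpha = lambda t: -1.0 + 0.5*np.sin(t + p1)   # time average -1 < 0
    beta  = lambda t:  1.0 + 0.5*np.cos(t + p2)

    def pullback(t, x0, dt=1e-3):
        # integrate x' = alpha(theta_s omega) x + beta(theta_s omega) over s in [-t, 0]
        x, s = x0, -t
        while s < 0.0:
            x += dt*(alpha(s)*x + beta(s))
            s += dt
        return x

    for t in (5.0, 10.0, 20.0, 40.0):
        print(t, pullback(t, 0.0), pullback(t, 10.0))
    # both columns approach the same value: the random equilibrium u(omega)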

2.2 Deterministic Invariant Sets


In this section we give a result concerning deterministic invariant sets for
RDS generated by random differential equations. We will use it in Chap.5 to
prove positivity of solutions to problem (2.3) under some conditions concern-
ing f (ω, x). We note that there are many results concerning these invariance
properties for nonautonomous ODE (see, e.g., Martin [81] and Deimling
[41] and the references therein). However all of them assume continuous de-
pendence of the right hand sides on t and x. This assumption looks rather
restrictive for random ODE. Below we show that it can be avoided.
We do not assume the smoothness of invariant sets and we use the fol-
lowing definition of an outer normal vector.
Definition 2.2.1. Let D be a closed set in Rd . Assume that x0 belongs to
the boundary ∂D of the set D. A unit vector ν is said to be an outer normal
to D at the point x0 , if there exists a ball B(x1 ) with center at x1 such that
B(x1 ) ∩ D = {x0 } and ν = λ · (x1 − x0 ) for some positive λ.
We use the following concept of an invariant set for an RDE (2.3).

Definition 2.2.2. The set F is said to be a deterministic forward invariant


set for the RDE (2.3) if its local solution x(t, ω; x0 ) lies in F for every x0 ∈ F,
t ∈ (0, t(ω, x0 )) and ω ∈ Ω. Here (0, t(ω, x0 )) is the maximal interval of the
existence of the solution x(t, ω; x0 ).
We have the following result on the existence of invariant sets.
Theorem 2.2.1. Assume that (2.1) and (2.2) are valid. Let D be a closed
set in Rd possessing the properties: (i) the set D has an outer normal at every
point of the boundary ∂D and (ii) for any x ∈ ∂D we have the relation

⟨f (ω, x), νx ⟩ ≤ 0, ω ∈ Ω, (2.19)

for every outer normal νx at x. Then the set D is a deterministic forward


invariant set for the RDE (2.3).
This theorem immediately implies the following assertion.
Corollary 2.2.1. Under the conditions of Theorems 2.1.1 and 2.2.1 the set
D is a deterministic forward invariant set for the RDS (θ, ϕ) generated by
problem (2.3).
The argument given in the proof of Corollary 2.1.1 makes it possible to obtain
the following result.
Corollary 2.2.2. Let the hypotheses of Theorem 2.2.1 hold. Assume that
there exist random variables c1 (ω) and c2 (ω) such that t → cj (θt ω) is locally
integrable and inequality (2.10) holds for any x ∈ D and for all ω ∈ Ω. Then
for any x0 ∈ D problem (2.3) possesses a unique global solution x(t, ω; x)
such that x(t, ω; x) ∈ D for all t ≥ 0 and ω ∈ Ω. This solution generates an
RDS with phase space D.

Example 2.2.1. The one-dimensional RDE

ẋ = α + β(θt ω) · x − x3

satisfies the hypotheses of Corollary 2.2.2 with D = R+ if α ≥ 0 and the


function t → β(θt ω) is locally integrable for every ω ∈ Ω. We can also apply
Corollary 2.2.2 with D = [0, 1] to the equation

ẋ = β(θt ω) · x(1 − x) .

Example 2.2.2 (Binary Biochemical Model). Let (θ, ϕ) be the RDS consid-
ered in Example 2.1.1. If g(0) ≥ 0, then R2+ = {x = (x1 , x2 ) : xi ≥ 0} is a
forward invariant set for (θ, ϕ). This property is important because x1 and
x2 represent concentrations of macro-molecules.

Theorem 2.2.1 is a particular case of the following assertion which is also


important in what follows.

Theorem 2.2.2. Assume that (2.1) and (2.2) hold. Let O ⊆ Rd be a deter-
ministic forward invariant open set for the RDE (2.3) and D be a closed set
in Rd such that (i) D ∩ O ≠ ∅, (ii) D has an outer normal at every point of
the set ∂D ∩ O and (iii) relation (2.19) holds for any x ∈ ∂D ∩ O. Then the
set D ∩ O is a deterministic forward invariant set for the RDE (2.3).
In the proof of Theorem 2.2.2 we rely on some ideas presented in Bony [17].
We start with the following deterministic lemma.
Lemma 2.2.1. Assume that f (t) is a continuous function on the segment
[a, b] such that
lim inf_{h→0, h<0} (1/|h|) (f (t + h) − f (t)) ≥ −m(t)

for almost all t ∈ (a, b), where m(t) ∈ L1 (a, b). Then
f (t2 ) − f (t1 ) ≤ ∫_{t1}^{t2} m(τ ) dτ for all a ≤ t1 < t2 ≤ b . (2.20)

Proof. It is clear that


g(t) ≡ f (t) − ∫_a^t m(τ ) dτ ∈ C[a, b]

and satisfies the relation


lim inf_{h→0, h<0} (1/|h|) (g(t + h) − g(t)) ≥ 0 (2.21)

for all t ∈ B, where B is a measurable set of full measure in (a, b). To obtain
(2.20) we should prove that g(t) is a nonincreasing function on [a, b]. It is
sufficient to prove that the function Φ(t) = g(t) − γt is nonincreasing for any
γ > 0. From (2.21) we have
lim inf_{h→0, h<0} (1/|h|) (Φ(t + h) − Φ(t)) ≥ γ > 0, t ∈ B . (2.22)

This implies that for every t ∈ B there exists h(t) > 0 such that

Φ(t − τ ) ≥ Φ(t), 0 ≤ τ < h(t), t ∈ B . (2.23)

Let t1 < t2 be points from B. Consider the covering of the segment [t1 , t2 ]
by intervals (t − min{h(t2 ), h(t)}, t), where t ∈ B. It is clear that there exists
a finite subcovering. Moreover we can choose the points τ1 < τ2 < . . . < τN
from B ∩ (t1 , t2 ) such that

t1 ∈ (τ1 − h(τ1 ), τ1 ), τN ∈ (t2 − h(t2 ), t2 )



and
τk ∈ (τk+1 − h(τk+1 ), τk+1 ), k = 1, . . . N − 1 .
Therefore from (2.23) we have

Φ(t1 ) ≥ Φ(τ1 ); Φ(τk ) ≥ Φ(τk+1 ), k = 1, . . . N − 1; Φ(τN ) ≥ Φ(t2 ) .

This implies that Φ(t1 ) ≥ Φ(t2 ). 2

Proof of Theorem 2.2.2. Let x(t) be a local solution to (2.3) for some fixed
ω with initial data from D ∩ O. Assume that this solution may leave the set
D ∩ O. Since O is forward invariant, there exist a point x∗ ∈ ∂D ∩ O and a
semiinterval (t0 , t1 ] such that x(t0 ) = x∗ , x(t) ∈ Br (x∗ ) and x(t) ∉ D ∩ O
for t ∈ (t0 , t1 ]. Here Br (x∗ ) is an open ball with center x∗ and with radius r
chosen such that B2r (x∗ ) ⊂ O.
Let hn < 0 and hn → 0. Assume that t, t + hn ∈ (t0 , t1 ] and denote
x = x(t) and xn = x(t + hn ) for short. Let δ(t) = dist(x(t), D ∩ B2r (x∗ )).
Since xn → x, it is clear that we can suppose that there exists a sequence
{yn } ⊂ ∂D ∩ B2r (x∗ ) which converges to some element y ∈ ∂D ∩ B2r (x∗ ) such
that δ(t + hn ) = |xn − yn | and δ(t) = |x − y|. Therefore we obtain the relation

δ(t + hn ) − δ(t) ≥ |xn − yn | − |x − yn | = ( |xn − yn |2 − |x − yn |2 ) / ( |xn − yn | + |x − yn | ) .

Thus we have

δ(t + hn ) − δ(t) ≥ ( |wn |2 − |wn + vn |2 ) / ( |wn | + |wn + vn | ) = ( −2⟨wn , vn ⟩ − |vn |2 ) / ( |wn | + |wn + vn | ) ,
where wn = xn − yn and vn = x − xn . We have that wn → x − y ≠ 0 and
vn · hn^{−1} → −f (θt ω, x) for almost all t ∈ (t0 , t1 ). Therefore

lim inf_{n→∞} (1/|hn |) {δ(t + hn ) − δ(t)} ≥ − lim_{n→∞} ⟨ wn /|wn | , vn /|hn | ⟩ = − ⟨ (x − y)/|x − y| , f (θt ω, x) ⟩

for almost all t ∈ (t0 , t1 ]. It is clear that the vector (x − y)/|x − y| is an outer normal
to D at y ∈ ∂D ∩ O. Consequently from (2.19) we obtain that
lim inf_{h→0, h<0} (1/|h|) {δ(t + h) − δ(t)} ≥ − ⟨ (x − y)/|x − y| , f (θt ω, x) − f (θt ω, y) ⟩ .

It follows from (2.2) that there exists a constant C(ω, t) > 0 such that
∫_{t0}^{t1} C(ω, t)dt < ∞, and |f (θt ω, x) − f (θt ω, y)| ≤ C(ω, t)|x − y| .

Therefore
lim inf_{h→0, h<0} (1/|h|) {δ(t + h) − δ(t)} ≥ −C(ω, t) · δ(t) .
Using Lemma 2.2.1 we obtain
δ(t) − δ(s) ≤ ∫_s^t C(ω, τ )δ(τ ) dτ for all t0 ≤ s < t ≤ t1 .

Since δ(t0 ) = 0, Gronwall’s lemma implies that δ(t) = 0 for all t0 ≤ t ≤
t1 . This contradicts the assumption x(t) ∉ D ∩ O for t ∈ (t0 , t1 ]. Thus
Theorem 2.2.2 is proved. 2
It is clear from the proof of Theorem 2.2.2 that the following assertion con-
cerning deterministic nonautonomous equations holds.
Proposition 2.2.1. Suppose that a measurable function f (t, x) : R+ ×Rd →
Rd possesses the following property: for every compact set K ⊂ Rd there exists
a nonnegative function CK (t) ∈ L1loc (R+ ) such that

|f (t, x)| ≤ CK (t) and |f (t, x) − f (t, y)| ≤ CK (t) · |x − y|

for any x, y ∈ K. Let D be a closed set in Rd which possesses an outer normal


at every point of the boundary ∂D such that for any x ∈ ∂D and t ∈ R+ we
have the relation
⟨f (t, x), νx ⟩ ≤ 0
for every outer normal νx at x. Then the set D is a forward invariant set for
the ordinary differential equation

ẋ(t) = f (t, x(t)), x(0) = x0 , (2.24)

i.e. for any local solution x(t; x0 ) to problem (2.24) the property x0 ∈ D im-
plies that x(t; x0 ) ∈ D for all t ∈ (0, t(x0 )), where (0, t(x0 )) is the maximal
interval of the existence of x(t; x0 ).

We use Proposition 2.2.1 in Chap.5 to study monotonicity properties of RDS


generated by cooperative equations.

2.3 The Itô and Stratonovich Stochastic Integrals

In this section we recall several standard definitions and facts from stochastic
analysis and give a short description of stochastic integration. We need this
to construct RDS generated by Itô and Stratonovich stochastic differential
equations in the next section. For details concerning stochastic integration

we refer to Chung/Williams [27], Ikeda/Watanabe [57], Kunita [74],


McKean [82], for instance.
Let Wt (ω) = (Wt1 (ω), . . . , Wtm (ω)) be a Wiener process with values in
Rm , m ≥ 1, for which we take two-sided time t ∈ R. The realization of
this process in the space C(R; Rm ) of continuous functions was discussed in


Example 1.1.7. Let (Ω, F, P) be the corresponding canonical Wiener space
and let {Fst , −∞ ≤ s < t ≤ ∞} be the filtration of the σ-algebras defined by
the formula
Fst = σ{Wτ1 − Wτ2 : s ≤ τ1 , τ2 ≤ t} ∨ N ,
where σ{ξ} denotes σ-algebra generated by the random variable ξ and N is
the collection of null sets of F. Below we denote Ft = F_{−∞}^t . The process Wt
satisfies
(i) Wti is an Ft -measurable Gaussian variable, i = 1, . . . , m;
(ii) Wt+s − Wt is independent of Ft for s > 0;
(iii) EWt = 0 and E(Wti − Wsi )(Wtj − Wsj ) = δij · |t − s|.
Below we consider random dynamical systems over the ergodic metric dy-
namical system θ = (Ω, F, P, {θt , t ∈ R}) connected with this Wiener process
(see Example 1.1.7). The transformations θt are defined such that

Wt (θτ ω) = Wt+τ (ω) − Wτ (ω), t, τ ∈ R, ω ∈ Ω . (2.25)

Definition 2.3.1 (Continuous, Adapted and Predictable Processes).


A mapping f from [a, b]×Ω into Rm , where [a, b] ⊆ R, is said to be a random
process on [a, b] (with values in Rm ) if ω → f (t, ω) is measurable for every
t ∈ [a, b]. This process is said to be
(i) a continuous random process on [a, b] if f (t, ω) is a continuous function
with respect to t ∈ [a, b] for almost all ω, i.e.
 
  
P ( ⋃_{t∈[a,b]} { ω : lim_{h→0} |f (t + h, ω) − f (t, ω)| ≠ 0 } ) = 0 ;

(ii) an Ft -adapted random process on [a, b] if f (t, ω) is Ft -measurable for


every fixed t ∈ [a, b];
(iii) a predictable random process on [a, b] if f (t, ω) is measurable with respect
to σ-algebra generated in [a, b] × Ω by all Ft -adapted continuous random
processes on [a, b].
Below we denote by L2 [a, b] the set of all predictable processes f (t, ω) with
the property
∫_a^b |f (t, ω)|2 dt < ∞ almost surely .

We note that the set of all Ft -adapted continuous random processes on [a, b]
is dense in L2 [a, b], i.e. for any f ∈ L2 [a, b] there exists a sequence {fn } of
Ft -adapted continuous processes such that
lim_{n→∞} ∫_a^b |f (t, ω) − fn (t, ω)|2 dt = 0 almost surely .

For any f ∈ L2 [a, b] we can uniquely define the Itô stochastic integral
Iat (f ) = ∫_a^t ⟨f (τ, ω), dWτ (ω)⟩ = Σ_{i=1}^m ∫_a^t fi (τ, ω) dWτ^i (ω), t ∈ [a, b] ,

as an Ft -adapted continuous process on [a, b] with the properties:


(i) If f (t, ω) is a continuous Ft -adapted process, then


n

Iat (f ) = (P)- lim f (t ∧ tk ), Wt∧tk+1 − Wt∧tk (2.26)
|∆|→0
k=1

uniformly with respect to t ∈ [a, b] for any partition

∆ = {a = t1 < t2 . . . < tn+1 = b}

with diameter |∆| → 0. As usual u ∧ v = min{u, v} and the symbol


(P)-lim denotes the limit in probability.
(ii) The relations
EIat (f ) = 0 and E|Iat (f )|2 = ∫_a^t E|f (τ, ·)|2 dτ

are valid, if the integral on the right-hand side exists.

In many situations it is convenient to use the Stratonovich stochastic inte-


gral. To define this we have to assume that the integrand f is a continuous
semimartingale. Let us introduce the following concepts.
Definition 2.3.2 (Stopping Time). A random variable τ (ω) with values
in R+ is said to be a stopping time on [a, b] if {ω : τ (ω) ≤ t} ∈ Ft for all
t ∈ [a, b].

Definition 2.3.3 (Martingale). An Ft -adapted random process m(t, ω)


with values in R is said to be a martingale on [a, b] if E|m(t)|2 < ∞ for all
t ∈ [a, b] and E{m(t) | Fs } = m(s) almost surely for any a ≤ s < t ≤ b. Here
E{m | Fs } denotes the conditional expectation of the random variable m with
respect to σ-algebra Fs .

Definition 2.3.4 (Local Martingale). A random process m(t, ω) in R is


said to be a local martingale on [a, b] if there exists an increasing sequence
of stopping times {τn (ω)} such that P{τn < b} → 1, n → ∞, and mn (t, ω) =
m(t ∧ τn (ω), ω) are martingales for each n, where u ∧ v = min{u, v}.

Definition 2.3.5 (Continuous Semimartingale). An Ft -adapted contin-


uous random process f (t, ω) on [a, b] is said to be a continuous semimartingale
on [a, b] if it has a decomposition f (t, ω) = m(t, ω) + a(t, ω), where m(t, ω) is
a local martingale on [a, b] and a(t, ω) is a process of bounded variation, i.e.
sup { Σ_{k=1}^n |a(tk+1 , ω) − a(tk , ω)| : a = t1 < t2 < . . . < tn+1 = b, n ≥ 1 } < ∞

almost surely. A process f (t, ω) = (f1 , . . . , fd ) with values in Rd is called


continuous semimartingale, if its components fi possess this property.

As an example of a continuous semimartingale we can consider the process


X(t, ω) = c0 (ω) + ∫_a^t h0 (τ, ω)dτ + Σ_{i=1}^m ∫_a^t hi (τ, ω) dWτ^i (ω) ,

where hk ∈ L2 [a, b], k = 0, 1, . . . , m, and c0 (ω) is a random Fa -measurable


variable.
Suppose that X(t, ω) is a continuous semimartingale on [a, b] with values
in Rd and G(x) is C 2 -mapping from Rd into Rm . Let g(t, ω) = G(X(t, ω)).
Then (see, e.g., Ikeda/Watanabe [57] or Kunita [74]) the limit (cf. (2.26))
Sat (g) = (P)- lim_{|∆|→0} Σ_{k=1}^n ⟨ ( g(t ∧ tk+1 ) + g(t ∧ tk ) ) / 2 , Wt∧tk+1 − Wt∧tk ⟩

exists, uniformly with respect to t ∈ [a, b] for any partition ∆ = {a = t1 <


t2 . . . < tn+1 = b} with diameter |∆| → 0. This random process Sat (g) is
called the Stratonovich stochastic integral of g by ◦dWt and it is denoted by
Sat (g) = ∫_a^t ⟨g(τ, ω), ◦dWτ (ω)⟩ = Σ_{i=1}^m ∫_a^t Gi (X(τ, ω)) ◦ dWτ^i (ω), t ∈ [a, b] .

The Stratonovich integral can be defined for more general mappings G (see,
e.g., Kunita [74]). However this will not be necessary in our subsequent
considerations.
We have the following relation between the Stratonovich and Itô integrals
(see, e.g., Kunita [74]). Let G(x) be a C 2 -mapping from Rd into Rm and let
X(t, ω) = (X1 (t, ω), . . . , Xd (t, ω)) be a continuous semimartingale with values
in Rd . Then

 
∫_a^t ⟨G(X(τ, ω)), ◦dWτ (ω)⟩ = ∫_a^t ⟨G(X(τ, ω)), dWτ (ω)⟩ + (1/2) Σ_{i=1}^m Σ_{j=1}^d ∫_a^t (∂Gi /∂xj )(X(τ, ω)) d{W^i , Xj }_τ ,

where {M, N }t is the joint quadratic variation of continuous semimartingales


Mt and Nt which is defined by the formula

{M, N }t = (P)- lim_{|∆|→0} Σ_{k=1}^n ( Mt∧tk+1 − Mt∧tk ) · ( Nt∧tk+1 − Nt∧tk ) ,

where ∆ = {a = t1 < t2 . . . < tn+1 = b} is a partition with diameter |∆| → 0.


In many cases the value {M, N }t can be calculated. For instance, if
Xi (t, ω) = ci (ω) + ∫_a^t hi0 (τ, ω)dτ + Σ_{j=1}^m ∫_a^t hij (τ, ω) dWτ^j (ω) , (2.27)

where hij ∈ L2 [a, b] for all i = 1, . . . , d and j = 0, 1, . . . , m and ci (ω) are


random Fa -measurable variables, then
{Xi , Xj }t = Σ_{k=1}^m ∫_a^t hik (τ, ω)hjk (τ, ω)dτ .

Thus in this case we have


∫_a^t ⟨G(X(τ, ω)), ◦dWτ (ω)⟩ = ∫_a^t ⟨G(X(τ, ω)), dWτ (ω)⟩ + (1/2) Σ_{i=1}^m Σ_{j=1}^d ∫_a^t (∂Gi /∂xj )(X(τ, ω)) hji (τ, ω)dτ .

In particular both integrals coincide if Xi (t, ω) are absolutely continuous


functions for almost all ω and the derivatives Ẋi (t, ω) belong to L2 [a, b] (the
case hij ≡ 0 for j = 1, . . . , m).
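The difference between the two integrals is easy to check numerically on the classical example ∫ W dW (i.e. G(x) = x and X = W ). The sketch below, which is only an illustration, approximates the Itô integral by the left-point sums of (2.26) and the Stratonovich integral by the corresponding midpoint sums; their limits are (W_T^2 − T)/2 and W_T^2/2 respectively.

    import numpy as np

    rng = np.random.default_rng(4)
    T, n = 1.0, 200_000
    dt = T/n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    W = np.concatenate(([0.0], np.cumsum(dW)))

    ito   = np.sum(W[:-1]*dW)                    # left-point sums, cf. (2.26)
    strat = np.sum(0.5*(W[:-1] + W[1:])*dW)      # midpoint sums (Stratonovich)

    print("Ito   sum:", ito,   "  theory:", 0.5*W[-1]**2 - 0.5*T)
    print("Strat sum:", strat, "  theory:", 0.5*W[-1]**2)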
The main advantage of the Stratonovich integral in comparison with the
Itô integral is connected with the differentiation rule which is stated in the
following well-known assertion (see, e.g., Ikeda/Watanabe [57] or Kunita
[74]).
Theorem 2.3.1 (Itô’s Formula). Let G(x) ∈ C 2 (Rd ) and X(t, ω) =
(X1 (t, ω), . . . , Xd (t, ω)) be given by (2.27). Then G(X(t, ω)) is a continuous
semimartingale and it satisfies the formula

G(X(t, ω)) − G(X(a, ω)) = Σ_{i=1}^d ∫_a^t (∂G/∂xi )(X(τ )) dXi (τ )
+ (1/2) Σ_{i,j=1}^d Σ_{l=1}^m ∫_a^t (∂^2 G/∂xi ∂xj )(X(τ, ω)) hil (τ, ω)hjl (τ, ω)dτ .

If G(x) ∈ C 3 (Rd ) and


Xi (t, ω) = ci (ω) + ∫_a^t hi0 (τ, ω)dτ + Σ_{j=1}^m ∫_a^t hij (τ, ω) ◦ dWτ^j (ω) ,

where hij and ci (ω) are the same as above, then we have
G(X(t, ω)) − G(X(a, ω)) = Σ_{i=1}^d ∫_a^t (∂G/∂xi )(X(τ )) ◦ dXi (τ ) .

Here we have used the notation


∫_a^t g(τ )dXi (τ ) = ∫_a^t g(τ )hi0 (τ )dτ + Σ_{j=1}^m ∫_a^t g(τ )hij (τ )dWτ^j ,

similarly for the Stratonovich integral ∫_a^t g(τ ) ◦ dXi (τ ).

2.4 RDS Generated by Stochastic Differential


Equations

In this section we consider stochastic differential equations (SDE) in the


sense of Itô and Stratonovich. The main attention is paid to the case of
Stratonovich SDEs because in Chap.6 we deal mainly with applications of
the general theory to Stratonovich equations. Since there is a simple relation
between the Itô and Stratonovich cases (see Theorem 2.4.2 below), this means
that we assume some additional smoothness properties concerning the coef-
ficients in comparison with the standard theory of Itô differential equations.
We prefer to deal with the Stratonovich case because it leads to some sim-
plifications in formulas. We also refer to Horsthemke/Lefever [55] for a
discussion of the relation between Itô and Stratonovich SDEs from an applied
point of view.
We need the following functional spaces (cf., e.g., Arnold [3] and Kunita
[74]).

Definition 2.4.1 (Spaces Cbk,δ ). For any k ∈ Z+ and 0 < δ ≤ 1 we intro-


duce the space Cbk,δ ≡ Cbk,δ (Rd ) as the set of continuous functions f (x) from
Rd into R such that ‖f ‖k,δ < ∞, where

‖f ‖k,0 = sup_{x∈Rd} |f (x)|/(1 + |x|) + Σ_{1≤|α|≤k} sup_{x∈Rd} |Dα f (x)| ,

‖f ‖k,δ = ‖f ‖k,0 + Σ_{|α|=k} sup_{x≠y} |Dα f (x) − Dα f (y)| / |x − y|^δ , 0 < δ ≤ 1 .

Here α = (α1 , . . . , αd ), |α| = Σ αi and Dα = ∂^{|α|} / (∂x1^{α1} · · · ∂xd^{αd}). For a closed set
D ⊂ Rd we denote by Cbk,δ (D) the space of restrictions to D of functions from
Cbk,δ (Rd ).
Now we consider the following system of Itô SDEs

dxi = hi (x1 , . . . , xd )dt + Σ_{j=1}^m σij (x1 , . . . , xd ) dWt^j , i = 1, . . . , d . (2.28)

We assume that hi (x) and σij (x) are functions from Cb0,1 (Rd ).
Definition 2.4.2 (Solutions to Itô SDE). A random process x(t, ω) on
R+ with values in Rd is said to be a solution to the system of Itô SDEs (2.28)
with initial data x∗ (ω) = (x∗1 (ω), . . . , x∗d (ω)) if it is an Ft -adapted continuous
process on R+ satisfying the integral equation
xi (t, ω) = x∗i (ω) + ∫_0^t hi (x1 (τ, ω), . . . , xd (τ, ω))dτ
+ Σ_{j=1}^m ∫_0^t σij (x1 (τ, ω), . . . , xd (τ, ω)) dWτ^j (ω) (2.29)

almost surely for all i = 1, . . . , d and t > 0.


The following theorem shows that solutions to (2.28) generate an RDS in Rd
(for the proof and discussions we refer to Arnold [3, Chap.2]).
Theorem 2.4.1 (Generation by Itô SDE). Let hi (x) and σij (x) be func-
tions from Cb0,1 (Rd ). Then there exists a unique (up to indistinguishabil-
ity) continuous RDS (θ, ϕ) over the metric dynamical system θ connected
with the Wiener process Wt such that x(t, ω) = ϕ(t, ω, x∗ (ω)) is a solu-
tion to the system of Itô SDEs (2.28) for every F0 -measurable initial data
x∗ (ω) ∈ L2 (Ω, F, P). Moreover sup[0,T ] E|x(t, ·)|2 < ∞ for every T > 0, the
function (t, x) → ϕ(t, ω, x) is a continuous mapping from R+ ×Rd into Rd for
every ω ∈ Ω, and the processes {ϕ(t, ω)x : x ∈ Rd } form a Markov family.
If hi (x) and σij (x) are from Cbk,δ (Rd ) for some k ≥ 1 and δ > 0, then (θ, ϕ)
is C k RDS.

Example 2.4.1 (Binary Biochemical Model). Let (Wt1 , Wt2 ) be a Wiener pro-
cess in R2 . Consider the system of Itô ordinary differential equations

dx1 = (g(x2 ) − α1 x1 ) dt + σ1 x1 dWt1 ,


(2.30)
dx2 = (x1 − α2 x2 ) dt + σ2 x2 dWt2 ,

where αi and σi are constants and g(x) is a Lipschitz continuous function.


Equations (2.30) generate an RDS in R2 . This is a C k RDS provided that
g(x) ∈ Cbk,1 (R).
Now we consider the following system of Stratonovich SDEs

dxi = fi (x1 , . . . , xd )dt + Σ_{j=1}^m σij (x1 , . . . , xd ) ◦ dWt^j , i = 1, . . . , d . (2.31)

We assume that fi (x) ∈ Cb1,δ (Rd ) and σij (x) ∈ Cb2,δ (Rd ).
Definition 2.4.3 (Solutions to Stratonovich SDE). A continuous se-
mimartingale x(t, ω) on R+ with values in Rd is said to be a solution to the
system (2.31) with initial data x∗ (ω) = (x∗1 (ω), . . . , x∗d (ω)) if it satisfies the
integral equation
xi (t, ω) = x∗i (ω) + ∫_0^t fi (x1 (τ, ω), . . . , xd (τ, ω))dτ
+ Σ_{j=1}^m ∫_0^t σij (x1 (τ, ω), . . . , xd (τ, ω)) ◦ dWτ^j (ω) (2.32)

almost surely for all i = 1, . . . , d and t > 0.


The proof of the following theorem can be found in Kunita [74, Chap.3].
Theorem 2.4.2 (Existence for Stratonovich SDE). Let fi (x) ∈ Cb1,δ (Rd )
and σij (x) ∈ Cb2,δ (Rd ) for some 0 < δ ≤ 1 and for all i = 1, . . . , d,
j = 1, . . . , m. Assume that

ck (x) ≡ (1/2) Σ_{i=1}^d Σ_{j=1}^m σij (x) · ∂σkj (x)/∂xi ∈ Cb1,δ (Rd ), k = 1, . . . , d . (2.33)

Then the system of Stratonovich SDEs (2.31) has a unique solution for every
F0 -measurable initial data x∗ (ω) ∈ L2 (Ω, F, P). Further this solution satisfies
Itô’s SDEs (2.28) with hi (x) = fi (x) + ci (x), where ci (x) is given by (2.33),
i = 1, . . . , d.
Conversely, if x(t, ω) is a solution to Itô SDEs (2.28) with hi (x) ∈
Cb1,δ (Rd ) and σij (x) as above, then x(t, ω) solves the system of Stratonovich
SDEs (2.31) with fi (x) = hi (x) − ci (x), i = 1, . . . , d.

Example 2.4.2 (Binary Biochemical Model). Let (θ, ϕ) be the RDS generated
by (2.30). If g(x) ∈ Cb1,δ (R), then x(t) = ϕ(t, ω, x) solves the Stratonovich
equations
   
dx1 = ( g(x2 ) − (α1 + σ1^2 /2) x1 ) dt + σ1 x1 ◦ dWt1 ,

dx2 = ( x1 − (α2 + σ2^2 /2) x2 ) dt + σ2 x2 ◦ dWt2 .
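The correction term (2.33) is easy to evaluate symbolically. The hedged Python/SymPy sketch below (not part of the original text) computes ck (x) for the diagonal diffusion σij (x) = δij σi xi of the binary biochemical model; the output σi^2 xi /2 reproduces the shift between the Itô equations (2.30) and the Stratonovich form written above.

    import sympy as sp

    x1, x2, sigma1, sigma2 = sp.symbols('x1 x2 sigma1 sigma2')
    x = [x1, x2]
    # diffusion matrix of (2.30): sigma_{ij}(x) = delta_{ij} * sigma_i * x_i
    sigma = [[sigma1*x1, 0], [0, sigma2*x2]]

    d, m = 2, 2
    # c_k(x) = (1/2) sum_{i,j} sigma_{ij}(x) * d sigma_{kj}(x) / d x_i, cf. (2.33)
    c = [sp.simplify(sp.Rational(1, 2)*sum(sigma[i][j]*sp.diff(sigma[k][j], x[i])
                                           for i in range(d) for j in range(m)))
         for k in range(d)]
    print(c)   # expected: [sigma1**2*x1/2, sigma2**2*x2/2]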

The following theorem shows that solutions to (2.31) generate an RDS in Rd


(for the proof and discussions we refer to Arnold [3, Chap.2]).
Theorem 2.4.3 (Generation by Stratonovich SDE). Assume that the
functions fi (x), σij (x) and ci (x) satisfy the hypotheses of Theorem 2.4.2.
Then there exists a unique (up to indistinguishability) continuous C 1 RDS
(θ, ϕ) over the metric dynamical system θ connected with the Wiener process
Wt such that x(t, ω) = ϕ(t, ω)x∗ is a solution to the system of Stratonovich
SDEs (2.31) with initial data x∗ ∈ Rd and the function (t, x) → ϕ(t, ω, x) is
a continuous mapping from R+ × Rd into Rd for every ω ∈ Ω. The Jacobian
Dx ϕ(t, ω, x) = [ ∂[ϕ(t, ω, x)]i / ∂xj ]_{i,j=1}^d

uniquely solves the variational equation


Dx ϕ(t, ω, x) = I + ∫_0^t Dx f (ϕ(τ, ω, x)) Dx ϕ(τ, ω, x) dτ
+ Σ_{j=1}^m ∫_0^t Dx σj (ϕ(τ, ω, x)) Dx ϕ(τ, ω, x) ◦ dWτ^j (2.34)

for all t > 0. Here f = (f1 . . . , fd ) and σj = (σ1j . . . , σdj ) are mappings from
Rd into itself. Moreover for every t > 0 the determinant detDx ϕ(t, ω, x)
satisfies Liouville’s equation
det Dx ϕ(t, ω, x) = exp ( ∫_0^t tr{Dx f (ϕ(τ, ω, x))}dτ
+ Σ_{j=1}^m ∫_0^t tr{Dx σj (ϕ(τ, ω, x))} ◦ dWτ^j ) . (2.35)

Furthermore for all t0 ≤ t1 ≤ . . . ≤ tn the random variables


 
{ ϕ(tk − tk−1 , θtk−1 ω) : k = 1, . . . , n }

are independent (in particular, the past and future σ-algebras are indepen-
dent) and the processes {ϕ(t, ω)x : x ∈ Rd } form a Markov family.
Example 2.4.3 (Binary Biochemical Model). Consider the following Strato-
novich version of equations (2.30)

dx1 = (g(x2 ) − α1 x1 ) dt + σ1 x1 ◦ dWt1 ,


(2.36)
dx2 = (x1 − α2 x2 ) dt + σ2 x2 ◦ dWt2 .

If g(x) ∈ Cb1,δ (R), 0 < δ ≤ 1, then these equations generate a C 1 RDS in R2 .


By Theorem 2.4.2 equations (2.36) can be rewritten in the Itô form
   
dx1 = ( g(x2 ) − (α1 − σ1^2 /2) x1 ) dt + σ1 x1 dWt1 ,

dx2 = ( x1 − (α2 − σ2^2 /2) x2 ) dt + σ2 x2 dWt2 .

Remark 2.4.1. It is possible (see Arnold [3, Chap.2]) to prove in the two
cases described in Theorems 2.4.1 and 2.4.3 that the cocycle ϕ(t, ω) can
be extended to a cocycle ϕ̃ with two-sided time. In particular this implies
(see Arnold [3, Theorem 1.1.6]) that the function (t, x) → ϕ(t, θ−t ω)x =
[ϕ̃(−t, ω)]−1 x is continuous for every ω ∈ Ω. Therefore the mapping t →
ϕ(t, θ−t ω)v(θ−t ω) is continuous for a dense (with respect to convergence in
probability) set of random variables v(ω) (cf. Remark 1.5.1).
Theorem 2.4.3 can be applied to the affine (linear nonhomogeneous) Stratono-
vich SDE

dx(t) = (A0 x(t) + b0 ) dt + Σ_{j=1}^m (Aj x(t) + bj ) ◦ dWt^j , (2.37)

where Aj = {a_{ik}^j }_{i,k=1}^d is a d × d matrix and bj is a vector in Rd , j =


0, 1, . . . , m. Thus (2.37) generates an affine RDS (θ, ϕ) in Rd (see Arnold [3,
Sect.2.3]). The cocycle ϕ can be represented in the form

ϕ(t, ω)x = Φ(t, ω)x + ψ(t, ω) , (2.38)

where ψ(t, ω) = ϕ(t, ω)0 and Φ(t, ω) is the linear cocycle in Rd generated by
the linear SDE
dx(t) = A0 x(t)dt + Σ_{j=1}^m Aj x(t) ◦ dWt^j . (2.39)

As in the random case (cf. Theorem 2.1.2) from the multiplicative ergodic
theorem (see Arnold [3, Chaps.3,4]) we can derive the following assertion
on the top Lyapunov exponent of the linear RDS (θ, Φ).

Theorem 2.4.4. Let Φ(t, ω) be the linear cocycle in Rd generated by (2.39).


Then there exists a θ-invariant set Ω ∗ ⊂ Ω of full measure such that for each
x ∈ Rd \ {0} the Lyapunov exponent
λ(x) := lim_{t→+∞} (1/t) log |Φ(t, ω)x| (2.40)
exists for all ω ∈ Ω ∗ and is independent of ω ∈ Ω ∗ . The image of the function
x → λ(x) is a finite set and λ := maxx∈Rd \{0} λ(x) is the top Lyapunov
exponent in the sense of Definition 1.9.1. Moreover there exists a probability
measure ρ on S d−1 := {x : |x| = 1} such that

λ = ∫_{S d−1} ( ⟨A0 s, s⟩ + (1/2) Σ_{j=1}^m [ ⟨(Aj + A∗j )s, Aj s⟩ − 2⟨Aj s, s⟩^2 ] ) ρ(ds) . (2.41)

Proof. The existence of Lyapunov exponents follows directly from Arnold


[3, Theorem 3.4.1] (see also Arnold [3, Example 3.4.19]). To obtain (2.41)
we apply Proposition 6.2.11 and Remark 6.2.4 from Arnold [3]. 2

Remark 2.4.2. Relation (2.41) is known as the Furstenberg-Khasminskii for-


mula. Under some generic conditions the measure ρ can be found as a solution
of the Fokker–Planck equation (Arnold [3, Sect.6.2]). We also refer to Khas-
minskii [64] and Mao [80] for examples of calculations of bounds for the top
Lyapunov exponent.
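As a hedged illustration of Theorem 2.4.4 (replacing the evaluation of (2.41) by a direct simulation), the sketch below estimates λ for an example in which A1 = σI commutes with A0 , so that the top exponent is simply the largest real part of the eigenvalues of A0 ; the stochastic Heun scheme is used because it is consistent with the Stratonovich interpretation, and all numerical choices are assumptions of the example.

    import numpy as np

    rng = np.random.default_rng(7)
    A0 = np.array([[-0.5, 1.0], [0.0, -1.5]])
    A1 = 0.3*np.eye(2)    # commutes with A0, so the top exponent equals max Re eig(A0) = -0.5

    def drift(y): return A0 @ y
    def diff(y):  return A1 @ y

    dt, T = 2e-3, 1000.0
    x, log_norm = np.array([1.0, 1.0]), 0.0
    for k in range(int(T/dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        xp = x + drift(x)*dt + diff(x)*dW                      # Heun predictor
        x  = x + 0.5*(drift(x) + drift(xp))*dt + 0.5*(diff(x) + diff(xp))*dW
        if (k + 1) % 1000 == 0:                                # renormalize
            r = np.linalg.norm(x); log_norm += np.log(r); x /= r
    log_norm += np.log(np.linalg.norm(x))
    print("estimated top Lyapunov exponent:", log_norm/T)      # should be close to -0.5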

Example 2.4.4 (1D Affine SDE). Consider the following one-dimensional


Stratonovich SDE
dx = (λx + β)dt + σ · x ◦ dWt ,
where Wt is a Wiener process in R and λ, β, σ are constants. This equation
generates an affine RDS (θ, ϕ) in R. The cocycle ϕ has the form (2.38), where
Φ(t, ω)x = x exp {λt + σWt (ω)} and
ψ(t, ω) = β ∫_0^t exp {λ(t − τ ) + σ(Wt (ω) − Wτ (ω))} dτ . (2.42)

It is clear that the number λ is the (top) Lyapunov exponent for (θ, Φ). If
λ < 0, then the RDS (θ, ϕ) is dissipative in the universe of all tempered
subsets of R. By Proposition 1.9.2 and Remark 1.9.2 the RDS (θ, ϕ) has a
unique equilibrium u(ω). Relations (1.51) and (2.42) imply that
u(ω) = β ∫_{−∞}^0 exp {−λτ − σWτ (ω)} dτ .

This equilibrium is measurable with respect to the past σ-algebra F− (see


the definition in Sect.1.10) and exponentially stable (see Proposition 1.9.3).

If λ > 0, then the RDS (θ, ϕ) possesses the equilibrium


v(ω) = −β ∫_0^∞ exp {−λτ − σWτ (ω)} dτ ,

which is measurable with respect to the future σ-algebra F+ .


A similar picture is observed for the Ornstein-Uhlenbeck type equation

dx = λxdt + σdWt ,

which generates an RDS with the cocycle


ϕ(t, ω)x = e^{λt} x + σ ∫_0^t e^{λ(t−τ )} dWτ , x ∈ R .

This RDS has an exponentially stable F− -measurable equilibrium


u(ω) = σ ∫_{−∞}^0 e^{−λτ } dWτ

provided that λ < 0. In the case λ > 0 it possesses an unstable F+ -measurable


equilibrium

v(ω) = −σ ∫_0^∞ e^{−λτ } dWτ .
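The pullback nature of the equilibrium u(ω) = σ ∫_{−∞}^0 e^{−λτ } dWτ can be illustrated numerically. In the sketch below (an illustration only: the Euler–Maruyama scheme, the parameters and the finite horizon replacing −∞ are assumptions) one noise path on [−T, 0] is fixed and ϕ(t, θ−t ω)x is computed for several t and two initial points; both columns approach the same number, a sample of u(ω).

    import numpy as np

    rng = np.random.default_rng(5)
    lam, sigma = -1.0, 0.5
    dt, T = 1e-3, 30.0
    n = int(T/dt)
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # one noise path on [-T, 0]

    def pullback(t, x):
        # Euler-Maruyama for dx = lam*x dt + sigma dW on [-t, 0] along the fixed path
        for k in range(n - int(t/dt), n):
            x = x + lam*x*dt + sigma*dW[k]
        return x

    for t in (5.0, 10.0, 20.0, 30.0):
        print(t, pullback(t, -3.0), pullback(t, 3.0))
    # both columns converge to the same number: a realization of u(omega)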

2.5 Relations Between Random and Stochastic


Differential Equations

In this section we first consider approximations of solutions to Stratonovich


stochastic differential equations by solutions to random differential equations.
Apparently the first result in this direction was obtained by Wong/Zakai
[109] (see also Wong [108]). There are now many expansions and general-
izations of the Wong–Zakai theorem (see, e.g., Belopolskaya/Dalecky
[15], Ikeda/Watanabe [57], Kunita [74] and also the survey Twardowska
[105] and the references therein).
We introduce the following smooth approximation of the Wiener process
Wt = (Wt1 , . . . , Wtm ). Let φ(t) be a nonnegative function on R with the properties

φ(t) ∈ C 1 (R), supp φ(t) ⊂ [0, 1], ∫_0^1 φ(t) dt = 1 .

We set φε (t) = ε−1 φ(t/ε) for ε > 0 and


Wt^{j,ε} (ω) = ∫_{−∞}^∞ φε (τ − t)Wτ^j (ω) dτ = ∫_0^ε φε (τ )W_{τ+t}^j (ω) dτ . (2.43)

Using the relation Wτ (θt ω) = Wt+τ (ω) − Wt (ω), it is easy to see that
(d/dt) Wt^{j,ε} (ω) = ηj^ε (θt ω), where

ηj^ε (ω) = − ∫_0^ε φ̇ε (τ )Wτ^j (ω) dτ .

We consider the following approximation of the Stratonovich SDE (2.31):

dxi /dt = fi (x1 , . . . , xd ) + Σ_{j=1}^m σij (x1 , . . . , xd ) · ηj^ε (θt ω), i = 1, . . . , d . (2.44)

If fi (x) ∈ Cb1,δ (Rd ) and σij (x) ∈ Cb2,δ (Rd ), then Corollary 2.1.1 implies that
the random differential equations (2.44) generate an RDS in Rd . In particular
equations (2.44) are uniquely solved on Rd for any initial data x∗ ∈ Rd .
We have the following Wong–Zakai type approximation theorem (see, e.g.,
Ikeda/Watanabe [57] and Kunita [74]).
Theorem 2.5.1. Let the hypotheses of Theorem 2.4.2 hold. Let a function
x(t, ω; x∗ ) be the solution to the Stratonovich SDE (2.31) with initial data
x∗ ∈ Rd and xε (t, ω; x∗ ) be the solution to the RDE (2.44) with the same
initial data. Then we have
 
lim_{ε→0} E { sup_{[0,T ]} sup_{|x∗ |≤R} |x(t; x∗ ) − xε (t; x∗ )|2 } = 0 (2.45)

for any positive T and R.
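A hedged numerical illustration of this convergence is sketched below. Instead of the mollified process (2.43) it uses the piecewise linear (polygonal) interpolation of W on a coarse grid, which is a classical variant of the Wong–Zakai approximation; the reference Stratonovich solution is produced by the stochastic Heun scheme, and the equation, its coefficients and all step sizes are assumptions made only for this example.

    import numpy as np

    rng = np.random.default_rng(6)
    f   = lambda x: -x            # illustrative drift
    sig = lambda x: np.sin(x)     # illustrative smooth bounded diffusion

    T, x0 = 1.0, 1.0
    n_fine = 2**18
    dt = T/n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=n_fine)
    W = np.concatenate(([0.0], np.cumsum(dW)))

    # reference: stochastic Heun scheme (consistent with the Stratonovich solution)
    x = x0
    for k in range(n_fine):
        xp = x + f(x)*dt + sig(x)*dW[k]
        x  = x + 0.5*(f(x) + f(xp))*dt + 0.5*(sig(x) + sig(xp))*dW[k]
    x_strat = x

    # ODE driven by the polygonal approximation of W on a coarse grid of mesh eps
    for n_coarse in (2**6, 2**8, 2**10):
        step, eps, y = n_fine // n_coarse, T/n_coarse, x0
        for k in range(n_fine):
            slope = (W[(k//step + 1)*step] - W[(k//step)*step]) / eps
            y = y + dt*(f(y) + sig(y)*slope)
        print("eps = %.4f   |x_eps(T) - x(T)| = %.3e" % (eps, abs(y - x_strat)))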

Remark 2.5.1. Assume that the hypotheses of Theorem 2.4.2 hold. Let ϕ(t, ω)
and ϕε (t, ω) be the cocycles of the RDS generated by (2.31) and (2.44) re-
spectively. From (2.45) we have
  
lim_{ε→0} E ∫_{−m}^m dτ sup_{t∈[0,l]} sup_{|x∗ |≤r} |ϕ(t, θτ ω)x∗ − ϕε (t, θτ ω)x∗ |2 = 0

for all m, l, r ∈ N. This implies that there exists a sequence εn → 0 such that
the set Ω ∗ of all ω ∈ Ω satisfying
  
lim_{n→∞} ∫_α^β dτ sup_{[0,T ]} sup_{|x∗ |≤R} |ϕ(t, θτ ω)x∗ − ϕεn (t, θτ ω)x∗ |2 = 0 (2.46)

for all α < β, T > 0 and R > 0, has full measure, i.e. P(Ω ∗ ) = 1. Moreover
the set Ω ∗ is θ-invariant, i.e. θt Ω ∗ = Ω ∗ for all t ∈ R.

This remark along with Theorems 2.2.1 and 2.5.1 allows us to obtain the fol-
lowing result concerning deterministic invariant domains for RDS generated
by Stratonovich SDE (2.31).

Corollary 2.5.1. Assume that the hypotheses of Theorem 2.4.2 hold. Let D
be a closed set in Rd such that (i) D has an outer normal at every point of
the boundary ∂D and (ii) for any x ∈ ∂D we have

⟨f (x), νx ⟩ ≤ 0 and ⟨σj (x), νx ⟩ = 0, j = 1, . . . , m , (2.47)

for every outer normal νx to D at x (see Definition 2.2.1), where f =


(f1 . . . , fd ) and σj = (σ1j . . . , σdj ). Then the set D is a deterministic forward
invariant set for the Stratonovich SDE (2.31), i.e. the property x(0) = x∗ ∈ D
implies that x(t, ω; x∗ ) ∈ D for all t > 0 and for almost all ω ∈ Ω. Moreover
there exists a measurable set Ω ∗ ⊂ Ω such that P(Ω ∗ ) = 1, θt Ω ∗ = Ω ∗ for
all t ∈ R and

ϕ(t, ω)x ∈ D for all t ∈ R+ , x ∈ D, ω ∈ Ω ∗ , (2.48)

where ϕ(t, ω) is the cocycle of RDS generated by (2.31).

Proof. It is sufficient to prove (2.48). Let Ω ∗ be the set of ω ∈ Ω satisfying


(2.46). Then it is clear from (2.46) that
lim_{n→∞} ∫_0^β dt sup_{|x∗ |≤R} |ϕ(t, θ−t ω)x∗ − ϕεn (t, θ−t ω)x∗ | = 0, ω ∈ Ω∗ ,

for all positive β and R. Consequently for every ω ∈ Ω ∗ and x∗ ∈ Rd there


exists a subsequence {εn(k) } such that

ϕεn(k) (t, θ−t ω)x∗ → ϕ(t, θ−t ω)x∗ , k→∞, (2.49)

for almost all t from the interval [0, β]. If x∗ ∈ D, Theorem 2.2.1 implies that
ϕεn(k) (t, θ−t ω)x∗ ∈ D. Therefore it follows from (2.49) that ϕ(t, θ−t ω)x∗ ∈ D
for almost all t from [0, β]. Since the function t → ϕ(t, θ−t ω)x is continuous for
every ω ∈ Ω and x ∈ Rd (see Remark 2.4.1), we have that ϕ(t, θ−t ω)x∗ ∈ D
for all t ∈ [0, β], x∗ ∈ D and ω ∈ Ω ∗ with arbitrary β > 0. Now the invariance
of Ω ∗ implies (2.48). 2

Corollary 2.5.1 makes it possible to redefine the cocycle ϕ to obtain the


following assertion.
Corollary 2.5.2. Assume that the hypotheses of Corollary 2.5.1 hold. Then
there exists a unique (up to indistinguishability) continuous C 1 RDS (θ, ϕ)
over the metric dynamical system θ connected with the Wiener process Wt
such that the conclusions of Theorem 2.4.3 are valid and ϕ(t, ω)D ⊂ D for
all t ∈ R+ and ω ∈ Ω. This means that equations (2.31) generate a C 1 RDS
in D.

Proof. Let Ω ∗ be the set given by Corollary 2.5.1. If we redefine the cocycle
ϕ of the RDS generated by (2.31) (see Theorem 2.4.3) by the formula

ϕ̃(t, ω) := { ϕ(t, ω) if ω ∈ Ω ∗ ; id if ω ∉ Ω ∗ } , (2.50)

then we obtain a cocycle which is indistinguishable from ϕ(t, ω). Obviously


ϕ̃(t, ω)D ⊂ D for all t ∈ R+ and ω ∈ Ω and the conclusions of Theorem 2.4.3
are valid for ϕ̃(t, ω). 2

Example 2.5.1 (1D Stochastic Equation). Let f (x) ∈ Cb1,1 (R), σ(x) ∈ Cb2,1 (R)
and σ(x) · σ′(x) ∈ Cb1,1 (R). If f (0) ≥ 0 and σ(0) = 0, then the equation

dx(t) = f (x(t))dt + σ(x(t)) ◦ dWt

generates an RDS in R+ .

Example 2.5.2 (Binary Biochemical Model). If g(x) ∈ Cb1,δ (R), 0 < δ ≤ 1,


and g(0) ≥ 0, then equations (2.36) generate an RDS in R2+ = {(x1 , x2 ) :
xi ≥ 0}.

Now we give a result by Imkeller/Schmalfuss [59] (see also Imkeller/Le-


derer [58]) on the conjugacy of stochastic and random differential equations.
To construct a random equation which is equivalent to the stochas-
tic equation (2.31) we involve the stationary Ornstein-Uhlenbeck process
z(t, ω) = (z1 (t, ω), . . . , zm (t, ω)) in Rm which solves the equations

dzk = −µzk dt + dWtk , k = 1, . . . , m , (2.51)

for some µ > 0. For existence and properties of solutions to (2.51) we refer
to Ikeda/Watanabe [57] or McKean [82], for instance. The stationary
solution {zk (t, ω)} can be written in the form
zk (t, ω) = ∫_{−∞}^t e^{−µ(t−τ )} dWτ^k (ω) almost surely .

However to produce an RDE for every ω ∈ Ω we need a perfect version


of this process. The existence and properties of this version are stated in
the following assertion which is a direct corollary of the infinite-dimensional
result proved by Chueshov/Scheutzow [23, Proposition 3.1].
Lemma 2.5.1. On the probability space (Ω, F, P) there exists a tempered
random variable z(ω) = (z1 (ω), . . . , zm (ω)) which maps Ω into Rm and pos-
sesses the properties:
(i) {zi (ω)} are independent Gaussian variables with Ezi = 0 and Ezi2 =
(2µ)−1 ;
(ii) t → z(θt ω) is continuous from R into Rm for all ω ∈ Ω;

(iii) the process z(t, ω) ≡ z(θt ω) solves equations (2.51);


(iv) the relation
lim_{t→+∞} (1/t) ∫_0^t zi (θτ ω) dτ = lim_{t→+∞} (1/t) ∫_{−t}^0 zi (θτ ω) dτ = 0 (2.52)

holds for all i = 1, . . . , m and ω ∈ Ω.

Proof. The existence of the variable z(ω) with properties (i)-(iii) follows from
Chueshov/Scheutzow [23, Proposition 3.1]. To obtain (iv) we note that
the ergodic theorem for stationary processes (see, e.g., Gihman/Skoro-
hod [48, p.140]) implies that (2.52) holds almost surely. Let Ω ∗ be the set
of all ω ∈ Ω such that (2.52) holds. It is clear that Ω ∗ is a θ-invariant set of
full measure. Therefore we can redefine z(ω) by zero outside of Ω ∗ . 2

Now we can state the conjugacy theorem (see Imkeller/Schmalfuss [59]


for the proof).
Theorem 2.5.2. Let the hypotheses of Theorem 2.4.2 hold. Assume addi-
tionally that fi and σij belong to C ∞ (Rd ) and the diffusion terms σij (x) in
(2.31) satisfy
[σk , σl ]j ≡ Σ_{i=1}^d ( σik (x) ∂σjl (x)/∂xi − σil (x) ∂σjk (x)/∂xi ) = 0 (2.53)

for all k, l = 1, . . . , m and j = 1, . . . , d. Let u = (u1 , . . . , ud ) : Rm × Rd → Rd


be a solution to the equations

∂uj (z, x)/∂zi = σji (u(z, x)), u(0, x) = x ∈ Rd , i = 1, . . . , m, j = 1, . . . , d , (2.54)
and let z(ω) be the random variable given by Lemma 2.5.1. Then u(z(ω), x)
is a tempered random variable in Rd for every x ∈ Rd and the mapping
x → T (ω, x) ≡ u(z(ω), x) is a diffeomorphism of Rd for each ω ∈ Ω. Further,
if (θ, ϕ) is the RDS generated by (2.31), then

ϕ(t, ω, x) = T (θt ω, ψ(t, ω, T −1 (ω, x))), t > 0, x ∈ Rd , ω ∈ Ω ,

where ψ is the cocycle corresponding to the random equation


 
dy/dt = [Dx T (θt ω, y)]^{−1} ( f (T (θt ω, y)) + µ Σ_{j=1}^m σj (T (θt ω, y)) zj (θt ω) ) .

Example 2.5.3. Let us assume that m = d and σij (x) = δij · σi (xi ), i.e. we
consider the following system of Stratonovich SDEs

dxi = fi (x1 , . . . , xd )dt + σi (xi ) ◦ dWti , i = 1, . . . , d . (2.55)

Simple calculation shows that condition (2.53) is satisfied and the equations
in (2.54) have the form

∂uj /∂zi = 0 if i ≠ j, and ∂ui /∂zi = σi (ui ); ui (0, x) = xi . (2.56)

Here i, j = 1, . . . , d. Suppose that |σi (u)| > 0 for all u ∈ R and i = 1, . . . , d.


Then it is easy to see that

ui (z, x) = Hi^{−1} (zi + Hi (xi )), i = 1, . . . , d ,

where Hi (u) is a primitive of 1/σi (u). In this case

T (ω, x) = ( H1^{−1} (z1 (ω) + H1 (x1 )), . . . , Hd^{−1} (zd (ω) + Hd (xd )) ) ,

where every zi (ω) generates the stationary solution to the Ornstein-Uhlenbeck


equation dzi + µzi dt = dWti via the formula zi (θt ω). In this case the cocycle
ψ is generated by the random equation

ẏi = σi (yi )gi (θt ω, y1 , . . . , yd ), i = 1, . . . , d ,

where

gi (ω, y) = fi ( H1^{−1} (z1 (ω) + H1 (y1 )), . . . , Hd^{−1} (zd (ω) + Hd (yd )) ) / σi (Hi^{−1} (zi (ω) + Hi (yi ))) + µzi (ω)

for i = 1, . . . , d.
Equations (2.56) can be also solved in the case when σi (u) = σi · u. In
this case we have ui (z, x) = xi exp{σi zi } for i = 1, . . . , d. In particular, this
observation means that the SDE (2.36) which arises in a stochastic binary
biochemical model is conjugate with the RDE

ẏ1 = e1 (θt ω)−1 g(e2 (θt ω)y2 ) − (α1 − µσ1 z1 (θt ω))y1 ,

ẏ2 = e2 (θt ω)−1 e1 (θt ω)y1 − (α2 − µσ2 z2 (θt ω))y2 ,

where ei (ω) = exp{σi zi (ω)} for i = 1, 2.
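For the linear case σi(u) = σi · u the change of variables can be checked directly. The following minimal Python sketch (an illustration only; the numerical values of σi, z and x are made up) verifies that ui(z, x) = xi exp{σi zi}, i.e. Hi(u) = σi^{−1} log u, solves equations (2.56) by comparing a finite-difference derivative in zi with σi ui.

```python
import numpy as np

# Linear diffusion sigma_i(u) = sigma_i * u (illustrative values)
sigma = np.array([0.5, 1.2])

def H(u, i):            # a primitive of 1/sigma_i(u) = 1/(sigma_i * u)
    return np.log(u) / sigma[i]

def H_inv(z, i):        # the inverse of H_i
    return np.exp(sigma[i] * z)

def u_map(z, x):        # u_i(z, x) = H_i^{-1}(z_i + H_i(x_i)) = x_i * exp(sigma_i * z_i)
    return np.array([H_inv(z[i] + H(x[i], i), i) for i in range(len(x))])

z = np.array([0.3, -0.7])
x = np.array([2.0, 0.5])
h = 1e-6
for i in range(2):
    dz = np.zeros(2); dz[i] = h
    du_dzi = (u_map(z + dz, x) - u_map(z, x))[i] / h   # finite-difference d u_i / d z_i
    print(i, du_dzi, sigma[i] * u_map(z, x)[i])        # the two numbers agree, as in (2.56)
```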

Other examples of the conjugacy of SDE and RDE can be found in Kel-
ler/Schmalfuss [63], Imkeller/Schmalfuss [59] and in Chap.6 below.
3. Order-Preserving Random Dynamical
Systems

In this chapter we first consider properties of partially ordered Banach spaces


and prove some auxiliary results concerning random sets in these spaces. In
Sect.3.3 we introduce a general concept of order-preserving (monotone) ran-
dom dynamical systems and consider several examples. We also define sub-
and super-equilibria and prove a theorem on the existence of an equilibrium
between them. On the one hand this theorem is a random version of the
well-known deterministic assertion (see, e.g., Hirsch [54] and Smith [102]).
On the other hand it generalizes the statements on the existence of random fixed
points presented in Schmalfuss [94] and Arnold/Schmalfuss [11]. We consider
the simplest examples and we give a counterexample which shows that ω-limit
sets for monotone random dynamical systems can contain a non-trivial ordered
subset of elements. This phenomenon does not take place in the deterministic
case. We also prove that a global attractor of a monotone random dynamical
system must lie between two equilibria. We conclude this chapter with a
discussion of the comparison principle for order-preserving RDS in Sect.3.7.

3.1 Partially Ordered Banach Spaces

In this section we describe some well-known results concerning cones and


partially ordered spaces. We mainly follow Krasnoselskii [68] and Kras-
noselskii/Lifshits/Sobolev [71].
Let V be a real Banach space with a closed convex cone V+ ⊂ V such
that V+ ∩ (−V+ ) = {0}. This cone defines a partial order relation on V via
x ≤ y if y − x ∈ V+ which is compatible with the vector space structure of
V . We write x < y when x ≤ y and x ≠ y. If V+ has nonempty interior intV+
we say that the cone V+ is solid and V is strongly ordered. We write x ≪ y
if y − x ∈ intV+ . For any elements a and b from V such that a ≤ b we define
the (conic) interval [a, b] as the set of the form

[a, b] = {x ∈ V : a ≤ x ≤ b} .

If the cone V+ is solid, then any bounded set B ⊂ V is contained in some


interval.


The cones of nonnegative elements in Rd and in C(D), where D is compact


in Rd , are solid. The corresponding cone in Lp (D) is not solid.
Definition 3.1.1 (Upper and Lower Bounds). An element v ∈ V is said
to be an upper bound for a subset B ⊂ V if x ≤ v for each x ∈ B. Similarly,
u ∈ V is called a lower bound for a subset B ⊂ V if x ≥ u for each x ∈ B.
An upper bound v0 is said to be the least upper bound (or supremum) and
denoted by v0 = sup B, if any other upper bound v satisfies v ≥ v0 . Similarly,
a lower bound u0 is said to be the greatest lower bound (or infimum) and
denoted by u0 = inf B, if u0 ≥ u for any other lower bound u. If the set B
has an upper bound, it is said to be bounded from above. If it has a lower
bound, it is said to be bounded from below. Finally, a set which is bounded
from both above and below is said to be order-bounded.
We note that suprema and infima, if they exist, are unique. Simple examples
on the plane R2 show that sup B and inf B, if they exist, do not belong to
the closure of B in general.
Definition 3.1.2 (Maximal and Minimal Elements). An element v ∈ B
is said to be maximal (minimal) in B if the property x ≥ v (x ≤ v) for some
x ∈ B implies that x = v.
A maximal element need not be an upper bound, nor a minimal element a
lower bound.
Definition 3.1.3 (u-norm). Let u ∈ V+ . An element x ∈ V is said to be
u-subordinate if we have the inequality −αu ≤ x ≤ αu for some α ≥ 0. The
smallest such α is denoted by ‖x‖_u and called the u-norm of x.
It is easy to see that the functional ‖x‖_u is a norm on the linear set Vu of all
u-subordinate elements from V . If the interval [−u, u] is bounded in the norm
of the space V , then ‖x‖ ≤ R ‖x‖_u for any x ∈ Vu , where R is the radius of
a ball containing [−u, u]. Moreover the space Vu is complete with respect to
the u-norm if and only if the interval [−u, u] is bounded in the norm of V .
Definition 3.1.4 (Part (Birkhoff ) Metric). (i) The equivalence classes
under the equivalence relation defined by x ∼ y if there exists α > 0 such that
α−1 x ≤ y ≤ αx on the cone V+ are called the parts of V+ .
(ii) Let C be a part of V+ . Then

p(x, y) := inf{log α : α−1 x ≤ y ≤ αx}, x, y ∈ C ,

defines a metric on C called the part metric (or Birkhoff metric) of C.


Clearly intV+ is a part and every part is also a convex cone in V . For a proof
of the fact that p is a metric on C and for other properties of the part met-
ric we refer to Krasnoselskii/Lifshits/Sobolev [71] and Bauer/Bear
[14]. The concept of the part metric plays an important role in the study of

sublinear RDS. In Sect.4.1 we prove that these RDS are nonexpansive with
respect to p.
Let u ∈ V+ \ {0} and Cu be the part of the cone which contains u. Then it
is easy to prove the following relations between the part metric and the u-norm:

‖x − y‖_u ≤ ( e^{p(x,y)} − 1 ) · max{ e^{p(x,u)}, e^{p(y,u)} } ,    x, y ∈ Cu ,    (3.1)

and

p(x, y) ≤ log { 1 + ‖x − y‖_u · ( ‖u‖_x + ‖u‖_y ) } ,    x, y ∈ Cu .    (3.2)

Therefore Cu is a complete space with respect to the part metric provided


the interval [−u, u] is bounded in the norm of V . If the cone V+ is solid, then
 
p(x, y) ≤ log ( 1 + r^{−1} · ‖x − y‖ ) ,    x, y ∈ intV+ ,    (3.3)

where r = min {dist(x, ∂V+ ), dist(y, ∂V+ )} (see Krasnoselskii/Burd/Ko-


lesov [70, p.136] or Krause/Nussbaum [72, Lemma 2.3]). We also note
that in Rd with the standard cone

Rd+ = {x = (x1 , . . . , xd ) ∈ Rd : xi ≥ 0, i = 1, . . . , d}

every set

{x = (x1 , . . . , xd ) ∈ Rd+ : xi > 0 for i ∈ I; xi = 0 for i ∉ I},    I ⊂ {1, . . . , d},

is a part and the part metric in intRd+ has the form


 
p(x, y) = log max{ xi/yi , yi/xi : i = 1, . . . , d } = max_i | log(xi/yi) | .    (3.4)

In particular this formula shows that the part metric is not strictly convex,
i.e. for some points a and b from intRd+ the set

{x : p(a, x) ≤ αp(a, b)} ∩ {x : p(b, x) ≤ (1 − α)p(a, b)}, 0<α<1,

may consist of more than one point.
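The following minimal Python sketch (purely illustrative; the points a, b and the value α = 1/2 are made up) computes the part metric (3.4) on intR2+ and exhibits two distinct points lying in the above intersection, confirming that the part metric is not strictly convex.

```python
import numpy as np

def part_metric(x, y):
    """Part (Birkhoff) metric on the interior of R^d_+, formula (3.4)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.max(np.abs(np.log(x / y))))

a, b = np.array([1.0, 1.0]), np.array([np.e ** 2, np.e])   # p(a, b) = 2
alpha = 0.5
# two different points whose distances to a and b both satisfy the constraints
for x in (np.array([np.e, 1.0]), np.array([np.e, np.e])):
    print(x,
          part_metric(a, x) <= alpha * part_metric(a, b),        # True
          part_metric(b, x) <= (1 - alpha) * part_metric(a, b))  # True
```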


Below we also use the following monotonicity property of the part metric.
Lemma 3.1.1. Assume that a ≤ b are elements from a part C of the cone
V+ . Then [a, b] ⊂ C and p(v1 , v2 ) ≤ p(a, b) for any v1 , v2 ∈ [a, b].

Proof. The relation a ≤ b ≤ λa with λ ≥ 1 implies that v1 ≤ b ≤ λa ≤ λv2


and v2 ≤ b ≤ λa ≤ λv1 for any v1 , v2 ∈ [a, b]. Thus λ−1 v1 ≤ v2 ≤ λv1 and
therefore p(v1 , v2 ) ≤ p(a, b). 2

The following concept of a normal cone is also important in applications.



Definition 3.1.5 (Normal Cone). Let V be a real Banach space. A cone


V+ is said to be normal if the norm ‖ · ‖ in V is semi-monotone, i.e., there
exists a constant c such that the property 0 ≤ x ≤ y implies that ‖x‖ ≤ c · ‖y‖ .
The following proposition (see, e.g., Krasnoselskii/Lifshits/Sobolev
[71]) characterizes the normality property of cones.
Proposition 3.1.1. The cone V+ is normal if and only if one of the follow-
ing assertions is valid:
(i) the relations un ≤ xn ≤ vn for all n ∈ Z+ and convergences un → z and
vn → z imply that xn → z;
(ii) the original norm ‖ · ‖ in V is equivalent to the norm

‖x‖_∗ = max{ inf_{u≤x} ‖u‖ , inf_{u≥x} ‖u‖ } ;    (3.5)

(iii) every interval [a, b] = {x ∈ V : a ≤ x ≤ b} is bounded in the norm of V ;


(iv) for any u ∈ V+ the space Vu of all u-subordinate elements from V is
complete with respect to u-norm;
(v) every part of the cone V+ is a complete space with respect to the part
metric.

Remark 3.1.1. Let V+ be a normal cone. Then it is easy to see that the norm
‖ · ‖_∗ defined by (3.5) is monotone, i.e. the relation 0 ≤ x ≤ y implies that
‖x‖_∗ ≤ ‖y‖_∗ . For any monotone norm ‖ · ‖ we have the inequality

‖x − y‖ ≤ ( 2e^{p(x,y)} − e^{−p(x,y)} − 1 ) · min{ ‖x‖ , ‖y‖ } ,    x, y ∈ V+ \ {0} ,

where p(x, y) is the part metric and we suppose p(x, y) = ∞ if x and y belong
to different parts (for the proof see Krause/Nussbaum [72]).

In general the monotonicity and boundedness of a sequence do not imply its
convergence. Counterexamples can be easily constructed in the space C[0, 1]
of continuous functions on the interval [0, 1] ⊂ R with the cone of nonnegative
functions. Therefore the following concept is useful in applications.
Definition 3.1.6 (Regular Cone). Let V be a real Banach space. A cone
V+ is said to be regular if every monotone sequence

x1 ≤ x2 ≤ . . . ≤ xn ≤ . . .

which is bounded from above, converges in the norm of the space V .


Every cone in Rd is regular. The cone of nonnegative functions in Lp (Q) is
regular, 1 ≤ p < ∞. The same is true for the space ℓ^p of sequences {ai}_{i=1}^∞
of real numbers with the property Σ_i |ai|^p < ∞, 1 ≤ p < ∞. One can prove
(see, e.g., Krasnoselskii/Lifshits/Sobolev [71]) that every regular cone
is normal.

Definition 3.1.7 (Minihedral Cone). A cone V+ is said to be minihedral


if every finite set M in V which is order-bounded has a supremum. A cone
V+ is called strongly minihedral if every set M in V which is order-bounded
has a supremum.
Minihedrality means that the nonempty intersection of any finite number of
sets xi + V+ has the form u + V+ . The cones of nonnegative elements in Rd
and in C(D), where D is compact in Rd , are minihedral. Moreover they are
solid and normal. The cone of nonnegative elements in Lp (D) is strongly
minihedral. The corresponding cone in C 1 (D) is not minihedral.
The following theorem shows a relation between minihedrality and regu-
larity of the cone.
Theorem 3.1.1. A solid minihedral cone cannot be regular in an infinite-
dimensional space. In a separable Banach space every regular minihedral cone
is strongly minihedral.
Below we use the following result on the existence of suprema for compact
sets.
Theorem 3.1.2. Let V+ be a solid normal minihedral cone in the real Ba-
nach space V . Then every compact set B ⊂ V has a supremum.
The next assertion on the representation of spaces with cones is also impor-
tant in our subsequent considerations.
Theorem 3.1.3. Let V+ be a solid normal minihedral cone in the real Ba-
nach space V . Then there exists a linear homeomorphism of V onto the space
C(Q) of continuous functions on some compact topological Hausdorff space
such that the image of the cone V+ is the cone of nonnegative functions in
C(Q).
For the proofs of Theorems 3.1.1, 3.1.2 and 3.1.3 and comments we refer to
the monograph Krasnoselskii/Lifshits/Sobolev [71].
Theorem 3.1.3 allows us to establish the following properties of normal
solid minihedral cones.
Corollary 3.1.1. Let V be a real Banach space with a solid normal mini-
hedral cone V+ . Suppose that Φ is the linear homeomorphism of V onto the
space C(Q) given by Theorem 3.1.3. Then
(i) for any u, v ∈ V the supremum u ∨ v := sup{u, v} and the infimum
u ∧ v := inf{u, v} exist and

Φ(u ∨ v) = Φ(u) ∨ Φ(v), Φ(u ∧ v) = Φ(u) ∧ Φ(v) ;

(ii) the functions {u, v} → u ∨ v and {u, v} → u ∧ v are continuous mappings


from V × V into V .

Proof. (i) It is easy to check that the elements

zsup = Φ−1 (Φ(u) ∨ Φ(v)) and zinf = Φ−1 (Φ(u) ∧ Φ(v))

are the supremum and infimum for the pair u and v.


(ii) Simple calculation gives that

u ∨ v = (1/2) · ( u + v + (u − v)+ + (u − v)− )    (3.6)

and

u ∧ v = (1/2) · ( u + v − (u − v)+ − (u − v)− ) ,    (3.7)
where w+ = w ∨ 0 and w− = −(w ∧ 0) = (−w)+ . Therefore it is sufficient to
prove that w → w+ is a continuous mapping from V into V . We obviously
have
‖w+ − w∗+‖ ≤ C1 ‖Φ(w+) − Φ(w∗+)‖_{C(Q)} ≤ C1 ‖Φ(w) − Φ(w∗)‖_{C(Q)} ≤ C1 C2 ‖w − w∗‖    (3.8)

for any w and w∗ from V , where C1 and C2 are the operator norms of Φ−1
and Φ. 2

3.2 Random Sets in Partially Ordered Spaces

In this section we assume that V is a real separable Banach space with a


closed convex cone V+ and establish several properties of random sets in V .
Below a mapping ω → v(ω) from Ω into V is called a random variable in
V if it is measurable with respect to the Borel σ-algebra B(V ) generated by
the open sets of V . We also note that the sets V+ , V+ \ {0} and intV+ are
elements of B(V ).
Proposition 3.2.1. Let a(ω) and b(ω) be random variables in V . Then the
semiintervals
Ia (ω) = {x : x ≥ a(ω)} ≡ a(ω) + V+
and
I b (ω) = {x : x ≤ b(ω)} ≡ b(ω) − V+
are random closed sets. If the cone V+ is solid and a(ω)  b(ω) for all ω ∈ Ω,
then the interval
[a, b](ω) = {x : a(ω) ≤ x ≤ b(ω)}
is a random closed set. If a(ω) ≤ b(ω) for all ω ∈ Ω, then [a, b](ω) is a
random closed set provided that either V+ is a solid normal minihedral cone
or V is a finite-dimensional space.

Proof. Since dist(y, Ia (ω)) = dist(−y + a(ω), V+ ) and dist(z, V+ ) is a contin-


uous function with respect to z, we have that dist(y, Ia (ω)) is measurable for
any y ∈ V . Therefore by Definition 1.3.1 the semiinterval Ia (ω) is a random
closed set. The same is true for I b (ω).
Since [a, b] = a + [0, b − a], by Proposition 1.3.2 it is sufficient to consider
the case a(ω) ≡ 0.
We first assume that the cone V+ is solid and b(ω) ≫ 0. In this case for
any open set U ⊂ V we have

U ∩ {x : 0 ≪ x ≤ b(ω)} = (U ∩ intV+ ) ∩ I^b(ω) .

Hence by Proposition 1.3.1(i)

{ω : U ∩ {x : 0 ≪ x ≤ b(ω)} ≠ ∅} ∈ F

for all open sets U ⊂ V and therefore {x : 0 ≪ x ≤ b(ω)} is a random set.
By Proposition 1.3.1(ii) the interval [0, b](ω), which is the closure of the set
{x : 0 ≪ x ≤ b(ω)}, is a random closed set.
Assume now that V+ is a solid normal minihedral cone and b(ω) ≥ 0. Since
I b (ω) is a random closed set, by Proposition 1.3.2 there exists a sequence
{vn (ω)} ⊂ I^b(ω) of random variables such that the closure of {vn (ω)} coincides
with I^b(ω) for every ω ∈ Ω. Let vn+ (ω) = 0 ∨ vn (ω). Corollary 3.1.1(ii) implies that
the closure of {vn+ (ω)} coincides with [0, b](ω). Thus by Proposition 1.3.2 [0, b](ω) is a random closed set.
If V is finite-dimensional, then we use the representation

[0, b](ω) = ⋃_{n∈N} Dn (ω)    with    Dn (ω) = I^b(ω) ∩ {x ∈ V+ : ‖x‖ ≤ n}

and Proposition 1.3.1(iv,v). 2

Proposition 3.2.2. Let V+ be a solid normal cone. Then a random set


{D(ω)} is bounded if and only if there exists a random variable v(ω) ≫ 0
such that
D(ω) ⊂ [−v(ω), v(ω)] for all ω ∈ Ω . (3.9)
The set {D(ω)} is tempered (see Definition 1.3.3) if and only if the random
variable v(ω) in (3.9) is tempered.

Proof. Let D(ω) be bounded. Then there exists a random variable r(ω) ∈ R+
such that

D(ω) ⊂ {x ∈ V : ‖x‖ ≤ r(ω)} for all ω ∈ Ω . (3.10)

Since V+ is solid, there exists u ∈ intV+ such that the interval [−u, u] contains
the unit ball of V . Therefore we have (3.9) with v(ω) = r(ω) · u. If {D(ω)} is
tempered, then r(ω) is a tempered random variable. Therefore v(ω) = r(ω)·u
is a tempered element in V+ .

Assume that (3.9) is valid with v(ω) ≥ 0. Then for any x ∈ D(ω) we have
0 ≤ x + v(ω) ≤ 2v(ω). Therefore the normality of the cone implies that

‖x‖ ≤ ‖v(ω)‖ + ‖x + v(ω)‖ ≤ (1 + 2c) ‖v(ω)‖ ,    x ∈ D(ω) .

Thus we have (3.10) with r(ω) = (1 + 2c) ‖v(ω)‖. Here c is the constant
from Definition 3.1.5 and r(ω) is tempered if v(ω) is a tempered element in
V+ . 2

Lemma 3.2.1. Let a(ω) and b(ω) be random variables in V . If V+ is a solid


normal minihedral cone, then

u(ω) = a(ω) ∨ b(ω) and v(ω) = a(ω) ∧ b(ω)

are random variables in V . If we additionally assume that a(ω) and b(ω) are
tempered random variables in V , then the same property is true for u(ω) and
v(ω).

Proof. The first part follows easily from Corollary 3.1.1(ii). For the second
part, due to (3.6) and (3.7) it is sufficient to prove that w+ (ω) = w(ω) ∨ 0 is
tempered for any tempered variable w(ω) ∈ V . Using (3.8) with w∗ = 0 we
have ‖w+ (ω)‖ ≤ C1 C2 ‖w(ω)‖ . Therefore the temperedness of w(ω) implies
the temperedness of w+ (ω). 2

Theorem 3.2.1. Assume that the cone V+ is a solid normal minihedral cone.
Let {D(ω)} be a random compact set in V . Then sup D(ω) and inf D(ω) are
random variables in V . If we assume additionally that D(ω) is tempered, then
sup D(ω) and inf D(ω) are tempered random elements in V .

Proof. Since inf D = − sup{−D}, it is sufficient to consider sup D(ω) only.


Proposition 1.3.2 implies that there exists a sequence {vn (ω) : n ∈ N} of
random variables in V such that

vn (ω) ∈ D(ω) and D(ω) coincides with the closure of {vn (ω), n ∈ N} for all ω ∈ Ω.

Let
wN (ω) = sup{v1 (ω), . . . , vN (ω)}, N = 1, 2, . . . .
Lemma 3.2.1 implies that wN (ω) is a random variable in V for every N =
1, 2, . . .. It is also clear that

w1 (ω) ≤ w2 (ω) ≤ . . . ≤ wN (ω) ≤ . . . (3.11)

Let us prove that for every fixed ω ∈ Ω the limit

w(ω) = lim_{N→∞} wN (ω)    (3.12)

exists. Due to property (3.11) it is sufficient to prove that there exists a


convergent subsequence wNk (ω). Since D(ω) is a compact set, for any k ∈ N
there exists Nk ∈ N such that

D(ω) ⊂ ⋃_{n=1}^{Nk} ( vn (ω) + 2^{−k} · B ),    k = 1, 2, . . . ,    (3.13)

where B is the unit ball of V . Since V+ is solid, there exists an interval [−u, u]
which contains the ball B. It follows from (3.13) that for any m ∈ N there
exists nm ≤ Nk such that

vm (ω) ∈ vnm (ω) + 2−k · B ⊂ [vnm (ω) − 2−k u, vnm (ω) + 2−k u]

and therefore

vm (ω) ≤ vnm (ω) + 2−k u ≤ wNk + 2−k u, m∈N.

Consequently

wNk (ω) ≤ wNk+1 (ω) ≤ wNk (ω) + 2−k u, k∈N.

The normality of the cone V+ implies that

‖wNk+1 (ω) − wNk (ω)‖ ≤ C · 2^{−k} ‖u‖ ,    k ∈ N .

Therefore the sequence {wNk } is a Cauchy sequence. This implies the ex-
istence of the limit in (3.12). It is clear that w(ω) = sup D(ω). The tem-
peredness of sup D(ω) and inf D(ω) for tempered D(ω) follows from Propo-
sition 3.2.2. 2

The following uniform randomization of the definition of the part of a cone


will turn out to be useful.
Definition 3.2.1. For every random variable v : Ω → V+ we define the part
Cv of v (the part generated by v in the cone V+ ) to be the collection of random
variables w : Ω → V+ possessing the property

α−1 v(ω) ≤ w(ω) ≤ αv(ω) for all ω∈Ω

for some nonrandom number α ≥ 1.


Note that w ∈ Cv if and only if there is a deterministic r such that w(ω) ∈
Br (v(ω)) for all ω ∈ Ω, where Br (v) is the ball in the part metric centered
at v with radius r. Hence Cv consists of those random variables which can
be included in some ball around v with deterministic radius. Note that for
any w ∈ Cv we have Cw = Cv .
If α in the above definition were allowed to depend on ω, Cv would just
be a part-valued random variable, assigning to each ω the part of v(ω).

Proposition 3.2.3. Let V+ be a normal cone and v : Ω → V+ \ {0} be a


random variable. Then the part Cv generated by v is a complete metric space
with respect to the metric

ϱ(u, w) = sup_{ω∈Ω} p(u(ω), w(ω)),    u, w ∈ Cv ,    (3.14)

where p is the part metric (see Definition 3.1.4).

Proof. We only need to prove the completeness of Cv with respect to the


metric (3.14). Let {um (ω)} be a Cauchy sequence in Cv . Then ϱ(um , v) is
bounded and therefore there exists λ > 1 such that

λ−1 v(ω) ≤ um (ω) ≤ λv(ω) for all m ∈ Z+ , ω ∈ Ω .

Using (3.1) we have

lim_{n,m→∞} sup_{ω∈Ω} ‖un (ω) − um (ω)‖_{v(ω)} = 0 ,

where ‖ · ‖_v is the v-norm. Since the cone V+ is normal, we have that ‖w‖ ≤
Cλ(ω) ‖w‖_{v(ω)} for any w ∈ [−λv(ω), λv(ω)]. Therefore {um (ω)} is a Cauchy
sequence in V for each ω ∈ Ω. Thus there exists a random variable u(ω) such
that
lim_{n→∞} ‖un (ω) − u(ω)‖ = 0 for all ω ∈ Ω    (3.15)

and
λ−1 v(ω) ≤ u(ω) ≤ λv(ω), ω∈Ω.
In particular u(ω) ∈ Cv . Since {um (ω)} is a Cauchy sequence in Cv , for any
ε > 0 there exists Nε such that

p(um (ω), un (ω)) ≤ ε for all m, n ≥ Nε , ω ∈ Ω .

Therefore by Definition 3.1.4

e−2ε um (ω) ≤ un (ω) ≤ e2ε um (ω) for all m, n ≥ Nε , ω ∈ Ω .

If we let n → ∞, then using (3.15) we obtain the inequality

e−2ε um (ω) ≤ u(ω) ≤ e2ε um (ω) for all m ≥ Nε , ω ∈ Ω .

This implies that ϱ(u, um ) ≤ 2ε for m ≥ Nε . Thus ϱ(u, um ) → 0 as m → ∞.


2

Below we also need the following property of the part metric p(u, v).

Proposition 3.2.4. Let u(ω) and v(ω) be random variables in V+ such that
p(u(ω), v(ω)) exists for every ω ∈ Ω. Then the function ω → p(u(ω), v(ω)) is
a random variable.
Proof. From Definition 3.1.4 we have
Ac := {ω : p(u(ω), v(ω)) < c} = {ω : e−c u(ω) < v(ω) < ec u(ω)}

= {ω : v(ω) − e−c u(ω) > 0} ∩ {ω : ec u(ω) − v(ω) > 0}


for every c > 0. Since V+ \ {0} is a Borel set, we have that Ac ∈ F. 2

3.3 Definition of Order-Preserving RDS


Let X be a subset of a real separable Banach space V with a closed convex
cone V+ .
Definition 3.3.1. An RDS (θ, ϕ) with phase space X is said to be
(i) order-preserving if
x≤y implies ϕ(t, ω)x ≤ ϕ(t, ω)y for all t≥0 and ω ∈ Ω;
(ii) strictly order-preserving if it is order-preserving and
x<y implies ϕ(t, ω)x < ϕ(t, ω)y for all t≥0 and ω ∈ Ω;
(iii) strongly order-preserving if it is order-preserving and
x<y implies ϕ(t, ω)x  ϕ(t, ω)y for all t≥0 and ω ∈ Ω.
We now give several simple examples. For more complicated examples of
order-preserving RDS we refer to Chaps.5 and 6 below.
Example 3.3.1 (Markov Chain). Let (Ω0 , F0 , P0 ) be a probability space. As-
sume that f (α, x) is a measurable mapping from Ω0 × R into R which is
continuous and nondecreasing with respect to x for every fixed α ∈ Ω0 . Then
the RDS (θ, ϕ) constructed in Example 1.2.1 with X = R is order-preserving.
If f (α, x) is an increasing function for every α, then (θ, ϕ) is a strongly order-
preserving RDS.
Example 3.3.2 (Kick Model). Let V be a Banach space with a cone V+ and
g : V → V be an order-preserving (deterministic) mapping. Consider the
RDS (θ, ϕ) generated by the difference equation
xn+1 = g(xn ) + ξ(θn+1 ω), n ∈ Z+ ,
over a metric dynamical system θ = (Ω, F, P, {θn , n ∈ Z}), where ξ is a
random variable in V . It is clear that (θ, ϕ) is an order-preserving RDS. If
g(x) is a strictly (strongly) order-preserving mapping, then (θ, ϕ) is a strictly
(strongly) order-preserving RDS.
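A minimal Python sketch of a kick model (for illustration only; the map g, the noise and the seed are made up) shows the pathwise order preservation: two ordered initial points stay ordered along every sample path of the kicks.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                        # an order-preserving (nondecreasing) deterministic map
    return 0.5 * np.arctan(x)

def kick_orbit(x0, xi):          # x_{n+1} = g(x_n) + xi_{n+1}
    xs = [x0]
    for kick in xi:
        xs.append(g(xs[-1]) + kick)
    return np.array(xs)

xi = rng.normal(size=50)         # one fixed sample path of the kicks
orbit_low  = kick_orbit(-1.0, xi)
orbit_high = kick_orbit( 2.0, xi)
print(np.all(orbit_low <= orbit_high))   # True: the cocycle is order-preserving
```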

Example 3.3.3 (1D Random Equation). Let θ = (Ω, F, P, {θt , t ∈ R}) be a


metric dynamical system. Consider the random ordinary differential equation

ẋ(t) = f (θt ω, x(t)). (3.16)

Assume that the function f : Ω ×R → R possesses properties (2.1), (2.2) and


(2.10) which guarantee a well-posedness of the Cauchy problem for (3.16).
Then Corollary 2.1.1 implies that equation (3.16) generates an RDS with
state space R. For any two solutions to equation (3.16) we obviously have the
relation

x(t) − y(t) = (x(0) − y(0)) · exp{ ∫_0^t ξ(τ, ω) dτ } ,

where the function


ξ(t, ω) = [ f(θt ω, x(t)) − f(θt ω, y(t)) ] / [ x(t) − y(t) ]

is locally integrable with respect to t for every fixed ω ∈ Ω. Therefore the


RDS generated by (3.16) is strongly order-preserving with respect to the cone
R+ .
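As a numerical illustration of this argument (a minimal Python sketch; the right-hand side f(θtω, x) = a(t) − x with a piecewise-constant random path a is an assumption made for the example, not taken from the text), two ordered initial values remain ordered, and since here ξ ≡ −1 the difference of the two solutions equals (x(0) − y(0))e^{−t} up to the Euler discretization error.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.001, 5000
t = dt * np.arange(n + 1)
# a piecewise-constant sample path playing the role of t -> f(theta_t omega, .)
a = np.repeat(rng.normal(size=n // 100 + 1), 100)[:n]

def solve(x0):                    # Euler scheme for dx/dt = a(t) - x
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (a[k] - x[k])
    return x

x, y = solve(2.0), solve(-1.0)
print(np.all(x >= y))                               # order is preserved
print(np.max(np.abs((x - y) - 3.0 * np.exp(-t))))   # small: the exp-formula with xi = -1
```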

Example 3.3.4 (1D Stochastic Equation). Let {Wt } be the one-dimensional


Wiener process (see Example 1.1.7 and Section 2.3), and take two scalar
functions h(x) and σ(x) which belong to Cb0,1 (R) (see Definition 2.4.1). Then
Theorem 2.4.1 implies that the Itô stochastic differential equation

dx(t) = h(x(t))dt + σ(x(t))dWt (3.17)

generates an RDS in R. Due to the comparison principle (see Ikeda/Wata-


nabe [57]) we can assert that this RDS is strictly order-preserving with the
cone R+ . Of course, the same conclusion remains true if we understand the
stochastic equation (3.17) in the Stratonovich sense and assume that h(x) ∈
Cb1,δ (R), σ(x) ∈ Cb2,δ (R) and σ(x) · σ′(x) ∈ Cb1,δ (R) for some δ ∈ (0, 1] (see
Theorem 2.4.3).

Example 3.3.5 (Binary Biochemical Model). It follows from results presented


in Chaps. 5 and 6 that both random (2.12) and stochastic (2.36) equations
generate order-preserving random dynamical systems in R2+ provided that
g(0) ≥ 0 and g′(x) ≥ 0 for x > 0.

Example 3.3.6 (Affine Order-Preserving RDS). Let V be a Banach space


with a cone V+ . Let us consider an affine RDS (θ, ϕ) with state space V (see
Definition 1.2.3). This system is an order-preserving RDS if and only if in
the representation (cf. (1.4))

ϕ(t, ω)x = Φ(t, ω)x + ψ(t, ω), x ∈ V, ω ∈ Ω ,



the linear operator Φ(t, ω) maps V+ into itself for any t ≥ 0 and ω ∈ Ω. We
obtain particular cases of affine order-preserving RDS if in Examples 3.3.1 –
3.3.4 we additionally assume that f (α, x), f (ω, x), g(x), h(x) and σ(x) are lin-
ear functions with respect to x. We study properties of affine order-preserving
RDS in detail in Sect. 4.6.

3.4 Sub-Equilibria and Super-Equilibria

The following concepts of sub- and super-equilibria turn out to be of prime


importance for the study of order-preserving RDS. They are the stochastic
analog of the corresponding deterministic notions (see, e.g., Hirsch [54] and
Smith [102]).
Let (θ, ϕ) be an order-preserving RDS on a subset X of a real separable
Banach space V with a closed convex cone V+ .
Definition 3.4.1. A random variable u : Ω → X is said to be
(i) a sub-equilibrium if

ϕ(t, ω)u(ω) ≥ u(θt ω) for all t≥0 and all ω∈Ω; (3.18)

(ii) a super-equilibrium if

ϕ(t, ω)u(ω) ≤ u(θt ω) for all t≥0 and all ω∈Ω. (3.19)

It is clear that any equilibrium (see Definition 1.7.1) is both a sub- and
super-equilibrium. Below we will refer to sub- and super-equilibria as semi-
equilibria.
We note that inequality (3.18) is equivalent to

ϕ(t − s, θs ω)u(θs ω) ≥ u(θt ω) for all t ≥ s and ω ∈ Ω . (3.20)

Indeed, if we let s = 0 in (3.20) we obtain (3.18). On the other hand (3.18)


implies that

ϕ(τ, θs ω)u(θs ω) ≥ u(θτ +s ω) for any τ ≥ 0 and s ∈ T .

Therefore after substituting τ = t − s we have (3.20). In the same way the


inequality (3.19) is equivalent to

ϕ(t − s, θs ω)u(θs ω) ≤ u(θt ω) for all t ≥ s and ω ∈ Ω . (3.21)

Remark 3.4.1. Assume that (θ, ϕ) is an order-preserving RDS with state


space X = V . Then for any sub-equilibrium a(ω) and for any super-
equilibrium b(ω) the random semiintervals

Ia (ω) = {x : x ≥ a(ω)} ≡ a(ω) + V+


and
I b (ω) = {x : x ≤ b(ω)} ≡ b(ω) − V+
are forward invariant (see Definition 1.3.4) random closed sets. This follows
from Proposition 3.2.1, definitions (3.18) and (3.19) and from the order-
preserving property of ϕ. In particular, if a(ω) and b(ω) are sub- and super-
equilibria such that a(ω) ≤ b(ω), then the random interval

[a, b](ω) = {x : a(ω) ≤ x ≤ b(ω)} (3.22)

is forward invariant. Moreover an interval of the type (3.22) is forward invari-


ant if and only if a(ω) is a sub-equilibrium and b(ω) is a super-equilibrium.
A similar fact is also valid for the semiintervals Ia and I b . However we em-
phasize that in general the interval (3.22) is not backward invariant even for
the case when a(ω) and b(ω) are equilibria. Indeed, let us consider the linear
mapping Tλ : R2 → R2 given by the formula
 
Tλ (x1 , x2 ) = ( (1+λ)/2 · x1 + (1−λ)/2 · x2 ,  (1−λ)/2 · x1 + (1+λ)/2 · x2 )

with λ ∈ (0, 1). This mapping is a contraction along the line l = {(s, −s) :
s ∈ R} and it is order-preserving with respect to R2+ with equilibria a =
(−1, −1) and b = (1, 1). A simple calculation shows that any element (α, −α)
with α ∈ (λ, 1] belongs to [a, b] and it does not belong to Tλ [a, b].

Example 3.4.1 (Semi-equilibria for 1D Random Equation). Let (θ, ϕ) be the


order-preserving RDS generated in R by random differential equation (3.16)
under the conditions given in Example 3.3.3. Assume that there exists a ∈ R
such that f (ω, a) ≥ 0 for all ω ∈ Ω. Then a(ω) ≡ a is a sub-equilibrium for
(θ, ϕ). Indeed, Corollary 2.2.1 implies that [a, +∞) is a deterministic forward
invariant set for the RDS (θ, ϕ). Thus (see Remark 3.4.1) a(ω) ≡ a is a sub-
equilibrium. The same argument gives that b(ω) ≡ b is a super-equilibrium
provided b ∈ R satisfies the inequality f (ω, b) ≤ 0 for all ω ∈ Ω.
Assume now that the RDS (θ, ϕ) generated by (3.16) has a (random)
sub-equilibrium u(ω). Since

ϕ(t − s, θs ω)x = x + ∫_s^t f(θτ ω, ϕ(τ − s, θs ω)x) dτ

for any t > s and x ∈ R, it follows from (3.20) that

u(θt ω) ≤ u(θs ω) + ∫_s^t f(θτ ω, ϕ(τ − s, θs ω)u(θs ω)) dτ,    t > s, ω ∈ Ω .

If we assume that the mapping (t, x) → f (θt ω, x) is continuous, then we


obtain

D+ u(θt ω) := limsup_{h→+0} [ u(θ_{t+h} ω) − u(θt ω) ] / h ≤ f(θt ω, u(θt ω)),    t ∈ R, ω ∈ Ω .

Thus the sub-equilibrium u(ω) is a random variable such that the stationary
process u(t) := u(θt ω) solves (in some sense) the differential inequality

u̇(t) ≤ f (θt ω, u(t)) for all t ∈ R, ω ∈ Ω .

Similarly, if u(ω) is a super-equilibrium for (θ, ϕ), then the process u(t) =
u(θt ω) solves the inequality

u̇(t) ≥ f (θt ω, u(t)) for all t ∈ R, ω ∈ Ω .

Moreover using the comparison principle (cf. Theorem 5.3.1 below) one can
prove that a random variable u(ω) is a semi-equilibrium for (θ, ϕ) if and only
if the process u(t) = u(θt ω) solves one of these differential inequalities. In
particular a number c is a sub-equilibrium (resp. super-equilibrium) if and
only if f (ω, c) ≥ 0 (resp. f (ω, c) ≤ 0) for all ω ∈ Ω. Similar results remain
true for order-preserving RDS generated by systems of random or stochastic
differential equations.

Example 3.4.2 (Semi-equilibria for Binary Biochemical Model). Consider the


RDS presented in Example 2.1.1. If g(0) ≥ 0 and g′(x) ≥ 0 for x > 0, then
a = 0 is a sub-equilibrium (cf. Example 3.3.5). Assume that αi (ω) ≥ αi0 > 0
for i = 1, 2 and ω ∈ Ω. If there exists r > 0 such that g(r) − α10 α20 r ≤ 0,
then b = (α20 r, r) is a super-equilibrium. To prove these results we can use
Corollary 2.2.1.

Example 3.4.3 (Semi-equilibria for 1D Stochastic Equation). Let (θ, ϕ) be


the order-preserving RDS on R constructed in Example 3.3.4. Assume that
there exists a ∈ R such that h(a) ≥ 0 and σ(a) = 0. Then Corollary 2.5.1
implies that there exists a θ-invariant set Ω ∗ ⊂ Ω of full measure such that

ϕ(t, ω)[a, +∞) ⊂ [a, +∞) for ω ∈ Ω ∗ .

Therefore there exists a version of the cocycle ϕ (see Remark 1.2.1(ii)) such
that ϕ(t, ω)a ≥ a for all ω ∈ Ω and t ≥ 0. Thus a(ω) ≡ a is a sub-equilibrium
for (θ, ϕ). The same argument gives that b(ω) ≡ b is a super-equilibrium
provided b ∈ R satisfies the conditions h(b) ≤ 0 and σ(b) = 0.

The following assertion contains some monotonicity properties of sub- and


super-equilibria.
Proposition 3.4.1. Let a(ω) be a sub-equilibrium of the order-preserving
RDS (θ, ϕ) on X ⊂ V . Then

as (ω) := ϕ(s, θ−s ω)a(θ−s ω) (3.23)



is a sub-equilibrium for any s > 0. These sub-equilibria possess the property

as (ω) ≥ aσ (ω) ≥ a(ω) (3.24)

for any s ≥ σ ≥ 0 and ω ∈ Ω. The same assertion is valid for super-equilibria


with the reversed inequality signs in (3.24).

Proof. The cocycle property gives

ϕ(t, ω)as (ω) = ϕ(t, ω)ϕ(s, θ−s ω)a(θ−s ω) = ϕ(t + s, θ−s ω)a(θ−s ω)
(3.25)
= ϕ(s, θt−s ω)ϕ(t, θ−s ω)a(θ−s ω) .

Using the inequality (cf. (3.18))

ϕ(t, θ−s ω)a(θ−s ω) ≥ a(θt−s ω) (3.26)

and the order-preserving property we have

ϕ(s, θt−s ω)ϕ(t, θ−s ω)a(θ−s ω) ≥ ϕ(s, θt−s ω)a(θt−s ω) = as (θt ω) .

Consequently (3.25) implies that as (ω) is a sub-equilibrium.


It follows from (3.26) with t = s that as (ω) ≥ a(ω) for any s ≥ 0 and
ω ∈ Ω. Since as (ω) is a sub-equilibrium, the last inequality gives

(as )σ (ω) = ϕ(σ, θ−σ ω)as (θ−σ ω) ≥ as (ω) for any s, σ ≥ 0 . (3.27)

From the definition of as (ω) we have

(as )σ (ω) = ϕ(σ, θ−σ ω)as (θ−σ ω) = ϕ(σ, θ−σ ω)ϕ(s, θ−s−σ ω)a(θ−s−σ ω) .

Therefore the cocycle property gives

(as )σ (ω) = ϕ(s + σ, θ−s−σ ω)a(θ−s−σ ω) = as+σ (ω), s, σ ≥ 0 . (3.28)

Consequently inequality (3.27) implies (3.24).


The assertion for super-equilibria can be proved in a similar way. 2

For minihedral cones we also have the following property of semi-equilibria.


Proposition 3.4.2. Let the cone V+ be minihedral. Then sup{u1 (ω), u2 (ω)}
is a sub-equilibrium provided that u1 (ω) and u2 (ω) are sub-equilibria and also

sup{u1 (ω), u2 (ω)} ∈ X for all ω∈Ω.

Similarly, if u1 (ω) and u2 (ω) are super-equilibria and inf{u1 (ω), u2 (ω)} ∈ X
for all ω ∈ Ω, then inf{u1 (ω), u2 (ω)} is a super-equilibrium.

Proof. For any two sub-equilibria u1 (ω) and u2 (ω) we have

ϕ(t, ω)(sup{u1 (ω), u2 (ω)}) ≥ ϕ(t, ω)uj (ω) ≥ uj (θt ω), j = 1, 2 .

Therefore ϕ(t, ω)(sup{u1 (ω), u2 (ω)}) ≥ sup{u1 (θt ω), u2 (θt ω)}. The same ar-
gument applies for super-equilibria. 2

The following simple assertion on the existence of sub- and super-equilibria


is useful in what follows.
Lemma 3.4.1. Let V+ be a normal solid minihedral cone. Assume that an
order-preserving RDS (θ, ϕ) possesses a backward invariant random compact
set A(ω) ⊂ X, i.e. ϕ(t, ω)A(ω) ⊇ A(θt ω). Then b(ω) := sup A(ω) is a sub-
equilibrium and a(ω) := inf A(ω) is a super-equilibrium for the RDS (θ, ϕ)
provided that a(ω) and b(ω) belong to X for all ω ∈ Ω.

Proof. Since (θ, ϕ) is order-preserving, the equation

a(ω) ≤ w(ω) ≤ b(ω) for all w(ω) ∈ A(ω)

implies that

ϕ(t, ω)a(ω) ≤ ϕ(t, ω)w(ω) ≤ ϕ(t, ω)b(ω) for all w(ω) ∈ A(ω) .

The invariance property ϕ(t, ω)A(ω) ⊇ A(θt ω) now gives

ϕ(t, θ−t ω)a(θ−t ω) ≤ w(ω) ≤ ϕ(t, θ−t ω)b(θ−t ω) for all w(ω) ∈ A(ω) .

Since a(ω) = inf A(ω) and b(ω) = sup A(ω), the last relation implies that

ϕ(t, θ−t ω)a(θ−t ω) ≤ a(ω) and ϕ(t, θ−t ω)b(θ−t ω) ≥ b(ω)

for all t ≥ 0 and ω ∈ Ω. 2

Remark 3.4.2. (i) If the cone V+ is normal, solid and minihedral and if for an
RDS (θ, ϕ) on X = V there exists a random element x ∈ V such that the
closure γxτ (ω) of the orbit

γ_x^τ(ω) = ⋃_{t≥τ} ϕ(t, θ−t ω)x(θ−t ω)

emanating from ϕ(τ, θ−τ ω)x(θ−τ ω) is a random compact set for some τ > 0,
then there exist a sub-equilibrium b(ω) and a super-equilibrium a(ω) for (θ, ϕ)
such that a(ω) ≤ b(ω). In fact the compactness of γxτ (ω) implies (see Sect.1.6)
that the corresponding omega-limit set
 
Γx (ω) = ⋂_{t>0} closure{ ⋃_{τ≥t} ϕ(τ, θ−τ ω)x(θ−τ ω) }

is an invariant random compact set. Therefore we can apply Lemma 3.4.1


with A(ω) = Γx (ω).
(ii) Assume in addition to the hypotheses of Lemma 3.4.1 that A(ω) is an
invariant set and b(ω) = sup A(ω) ∈ A(ω) (resp. a(ω) = inf A(ω) ∈ A(ω)) for
all ω ∈ Ω. The second assumption is always true for one-dimensional RDS.
Then it is easy to see that b(ω) (resp. a(ω)) is an equilibrium. This property is
not true in general without the assumption sup A(ω) ∈ A(ω). As an example
we can consider the following two-dimensional discrete deterministic system
with X = R2+ and
ϕ(1; x1 , x2 ) = ( √x1 + x1 x2 , √x2 + x1 x2 ) .

It is clear that the set A = {(1, 0), (0, 1)} is invariant and sup A = (1, 1) is a
strict sub-equilibrium.
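A quick numerical check of this counterexample (an illustrative Python sketch):

```python
import numpy as np

def phi(x):                      # one step of the deterministic map from Remark 3.4.2(ii)
    x1, x2 = x
    return np.array([np.sqrt(x1) + x1 * x2, np.sqrt(x2) + x1 * x2])

A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print([phi(a) for a in A])          # each point of A is fixed, so A is invariant
print(phi(np.array([1.0, 1.0])))    # sup A = (1,1) is mapped to (2,2) > (1,1): a strict sub-equilibrium
```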

3.5 Equilibria

The monotonicity properties of semi-equilibria given by Proposition 3.4.1


allow us to establish a result on the existence of equilibria.
Theorem 3.5.1. Let (θ, ϕ) be an order-preserving RDS. Assume that there
exist a sub-equilibrium a(ω) and a super-equilibrium b(ω) such that a(ω) ≤
b(ω) and the interval [a, b](ω) defined by (3.22) belongs to X. Assume also
that the set ϕ(t0 , ω)[a, b](ω) is relatively compact in X for some t0 > 0 and
for every ω ∈ Ω. Then the limits

u(ω) = lim_{s→+∞} a_s(ω) = sup_{s>0} a_s(ω)    (3.29)

and

ū(ω) = lim_{s→+∞} b_s(ω) = inf_{s>0} b_s(ω)    (3.30)

exist, where as (ω) and bs (ω) are defined as in (3.23). These limits are equi-
libria of (θ, ϕ) such that

a(ω) ≤ u(ω) ≤ ū(ω) ≤ b(ω) . (3.31)

Proof. Let us consider the discrete (T = Z) RDS (θ̂, ϕ̂), where θ̂n = θnt0
and ϕ̂(n, ω) = ϕ(nt0 , ω). It is clear that a(ω) and b(ω) are sub- and super-
equilibria of ϕ̂. Let ân (ω) and b̂n (ω) be defined for (θ̂, ϕ̂) as in (3.23). From
Proposition 3.4.1 one can see that

a(ω) ≤ â_n(ω) ≤ â_{n′}(ω) ≤ b̂_{m′}(ω) ≤ b̂_m(ω) ≤ b(ω)    (3.32)

for any n, n′, m, m′ ∈ Z+ such that n ≤ n′ and m ≤ m′. From (3.28) we have

ân+1 (ω) = (ân )1 (ω) = ϕ̂(1, θ̂−1 ω)ân (θ̂−1 ω) = ϕ(t0 , θ−t0 ω)ant0 (θ−t0 ω) .

Consequently ân+1 (ω) ∈ ϕ(t0 , θ−t0 ω)[a, b](θ−t0 ω) for any n ∈ Z+ . Thus the
sequence {ân (ω)} is relatively compact for every ω. The monotonicity prop-
erty (3.32) implies that this sequence has a unique limit point. Indeed, let us
assume that there exist two points v(ω) and w(ω) such that

v(ω) = lim ânk (ω), ânk (ω) ≤ v(ω), k = 1, 2, . . .


k→∞

and
w(ω) = lim âmk (ω), âmk (ω) ≤ w(ω), k = 1, 2, . . .
k→∞

for some sequences {nk } and {mk }. However for any k there exists l such
that nk ≤ ml and, therefore, ânk (ω) ≤ âml (ω) ≤ w(ω). Hence

v(ω) = lim ânk (ω) ≤ w(ω) .


k→∞

In the same way we have w(ω) ≤ v(ω). Hence v(ω) = w(ω). Thus the limit

u(ω) = lim ân (ω) (3.33)


n→∞

exists. Since for any s ∈ T+ there exists n ∈ Z+ such that (n − 1)t0 < s ≤ nt0
we have ân (ω) ≤ as (ω) ≤ ân+1 (ω). Therefore (3.33) implies that the element
u(ω) possesses property (3.29).
It remains to prove that u(ω) is an equilibrium. The continuity of the
cocycle ϕ and the structure of as (ω) imply that

ϕ(t, θ−t ω)u(θ−t ω) = lim ϕ(t, θ−t ω)φ(s, θ−s−t ω)a(θ−s−t ω) .


s→∞

Therefore the cocycle property gives

ϕ(t, θ−t ω)u(θ−t ω) = lim ϕ(t + s, θ−s−t ω)a(θ−s−t ω)


s→∞

= lim as+t (ω) = u(ω) .


s→∞

This relation means that u(ω) is an equilibrium. In the same way one can
prove the existence of an equilibrium ū(ω) possessing property (3.30). The
inequalities (3.32) imply (3.31). 2

Remark 3.5.1. For regular cones (see Definition 3.1.6) Theorem 3.5.1 is
valid without the assumption concerning the relative compactness of the set
φ(t0 , ω)[a, b](ω).

Corollary 3.5.1. Assume that the hypotheses of Theorem 3.5.1 hold. If a(ω)
is measurable with respect to the past σ-algebra F− (see the definition in
Sect.1.10), then the equilibrium u(ω) is also F− -measurable and the random
Dirac measure δu(ω) is a disintegration of the invariant Markov measure µ
on (Ω × X, F × B(X)) which has the form

µ(A) = P {ω : (ω, u(ω)) ∈ A} , A ∈ F × B(X) . (3.34)

The same is true concerning b(ω) and ū(ω).

Proof. Since
F− = σ{ω → ϕ(τ, θ−t ω) : 0 ≤ τ ≤ t} ,
the mapping ω → ϕ(s, θ−s ω)x is F− -measurable for any x ∈ X. If a(ω) is
F− -measurable, then a(θ−s ω) is also F− -measurable for s ≥ 0. Therefore
as (ω) = ϕ(s, θ−s ω)a(θ−s ω) possesses the same property and (3.29) implies
that the equilibrium u(ω) is F− -measurable. It is clear that the measure
defined by (3.34) is invariant for the RDS (θ, ϕ) (see Example 1.10.1). Its
disintegration µω has the form µω (B) = χB (u(ω)), where χB (x) = 1 for
x ∈ B and χB (x) = 0 if x ∉ B. Therefore ω → µω (B) is F− -measurable for
any B ∈ B. Thus µ is a Markov measure by Definition 1.10.2. 2

Example 3.5.1 (Equilibria for 1D Random Equations). Let (θ, ϕ) be the


order-preserving RDS on R constructed in Example 3.3.3. Assume that there
exist a < b such that f (ω, a) ≥ 0 and f (ω, b) ≤ 0 for all ω ∈ Ω. Then
a(ω) ≡ a is a sub-equilibrium and b(ω) ≡ b is a super-equilibrium for (θ, ϕ)
(see Example 3.4.1). Thus Theorem 3.5.1 implies the existence of equilibria
u(ω) and ū(ω) such that

a ≤ u(ω) ≤ ū(ω) ≤ b .

By Corollary 3.5.1 these equilibria are F− -measurable and therefore they gen-
erate invariant Markov measures for the RDS (θ, ϕ) connected with equation
(3.16).
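The construction behind these equilibria can be made visible numerically. The sketch below (illustrative only; the drift f(θtω, x) = c(t) − x with a random path c taking values in [0.5, 1.5] is an assumed example, with a = 0 and b = 2) performs the pull-back iteration a_s(ω) = ϕ(s, θ−sω)a over one sample path: the values increase in s and stabilize at the equilibrium u(ω).

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 40.0
m = int(T / dt)
# one sample path of the random coefficient on [-T, 0], piecewise constant in [0.5, 1.5]
c = np.repeat(0.5 + rng.random(m // 50 + 1), 50)[:m]   # c[k] ~ coefficient at time -T + k*dt

def pullback(s):
    """a_s(omega): value at time 0 of the solution of x' = c(t) - x started from a = 0 at time -s."""
    x, k0 = 0.0, m - int(s / dt)
    for k in range(k0, m):
        x += dt * (c[k] - x)
    return x

for s in (1.0, 5.0, 10.0, 20.0, 40.0):
    print(s, pullback(s))   # increasing in s and stabilizing: the limit is the equilibrium u(omega)
```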

Example 3.5.2 (Equilibria for Binary Biochemical Model). Consider the sit-
uation described in Example 3.4.2. Theorem 3.5.1 and Corollary 3.5.1 im-
ply that the RDS (θ, ϕ) generated in R2+ by equations (2.12) possesses F− -
measurable equilibria u(ω) and ū(ω) such that

0 ≤ u(ω) ≤ ū(ω) ≤ (α20 r, r),    ω ∈ Ω .

It is clear that u(ω) > 0 provided that g(0) > 0.

The following assertion shows that for a one-dimensional order-preserving


RDS every ergodic invariant measure is generated by some equilibrium.

Proposition 3.5.1. Let (θ, ϕ) be an order-preserving RDS whose state space


X = [c1 , c2 ] is an interval (which need not be finite) in R and let µ be an
invariant measure for (θ, ϕ). Assume that µ possesses a disintegration µω
such that ϕ(t, ω)µω = µθt ω for all t ≥ 0 and ω ∈ Ω (cf. Remark 1.10.1).
Then (θ, ϕ) has at least one equilibrium u(ω) ∈ X with

µω {(c1 , u(ω)]} ≥ 1/2 and µω {[u(ω), c2 )} ≥ 1/2 . (3.35)

If (θ, ϕ) is strictly order-preserving and µ is a ϕ-ergodic measure, then its


disintegration µω is a random Dirac measure, µω = δu(ω) , where u(ω) is an
equilibrium.

Proof. The main idea of the proof is due to Hans Crauel (see Arnold [3,
Sect.1.8], where the second assertion of this proposition is proved for one-
dimensional RDS with continuous two-sided time). For the sake of simplicity
we consider the case X = R only. The proofs for other cases are similar.
We start with some preliminary observations. Let ν be a probability mea-
sure on R and

I− = {a : ν{(−∞, a]} ≥ 1/2} and I+ = {b : ν{[b, ∞)} ≥ 1/2} .

Since g− (x) := ν{(−∞, x]} is a right continuous function and g+ (x) :=


ν{[x, ∞)} is a left continuous function, the sets I− and I+ have the form
I− = [α, ∞) and I+ = (−∞, β] for some α, β ∈ R. It is also easy to see that
I− ∩ I+ ≠ ∅, i.e. β ≥ α. Thus for any probability measure ν on R we have
the relation
ν{(−∞, x]} ≥ 1/2 and ν{[x, ∞)} ≥ 1/2
for all x ∈ [α, β], where

α = min{a : ν{(−∞, a]} ≥ 1/2} and β = max{b : ν{[b, ∞)} ≥ 1/2} .

Let µω be the disintegration of µ and

α(ω) = min{a : µω {(−∞, a]} ≥ 1/2} ,

β(ω) = max{b : µω {[b, ∞)} ≥ 1/2} .


Since
{ω : α(ω) > c} = {ω : µω {(−∞, c]} < 1/2}
and
{ω : β(ω) < c} = {ω : µω {[c, ∞)} < 1/2} ,

the values α(ω) and β(ω) are random variables. Since ϕ is order-preserving,
we have
(−∞, x] ⊆ ϕ(t, ω)−1 (−∞, ϕ(t, ω)x] .
Thus by the invariance of µω , we obtain the relation

µω {(−∞, x]} ≤ µω {ϕ(t, ω)−1 (−∞, ϕ(t, ω)x]} = µθt ω {(−∞, ϕ(t, ω)x]} .

Similarly,
µω {[x, ∞)} ≤ µθt ω {[ϕ(t, ω)x, ∞)} .
These properties imply that α(ω) is a sub-equilibrium and β(ω) is a super-
equilibrium for (θ, ϕ) such that α(ω) ≤ β(ω). Therefore we can apply The-
orem 3.5.1 to conclude that there exists at least one equilibrium u(ω) ∈
[α(ω), β(ω)] with properties (3.35).
The random semiinterval I u (ω) := (−∞, u(ω)] is a forward invariant
random closed set (see Remark 3.4.1), i.e. I u (ω) ⊂ ϕ(t, ω)−1 I u (θt ω). If for
some ω ∈ Ω there exists x ∈ ϕ(t, ω)−1 I u (θt ω) such that x ∉ I u (ω), then
ϕ(t, ω)x ≤ u(θt ω) and x > u(ω) which is impossible if ϕ is strictly order-
preserving. Thus I u (ω) = ϕ(t, ω)−1 I u (θt ω). This relation implies that the
set
M− = {(ω, x) : x ≤ u(ω)} ∈ F × B(R)
satisfies πt−1 M− = M− , where πt is the skew-product semiflow corresponding
to (θ, ϕ) (see (1.6)). We also have

µ(M− ) = ∫_Ω µω {I u (ω)} P(dω) ≥ 1/2 .

Thus the ϕ-ergodicity of µ implies that µ(M− ) = 1. Therefore

µω {(−∞, u(ω)]} = 1 almost surely .

In a similar way we obtain that µω {[u(ω), ∞)} = 1 almost surely. Conse-


quently µω {{u(ω)}} = 1 for almost all ω ∈ Ω. This implies that δu(ω) is a
disintegration for µ. 2

The following proposition shows that the pull back omega-limit set (see Def-
inition 1.6.1) emanating from a semi-equilibrium consists of a single equilib-
rium.
Proposition 3.5.2. Assume c is either a sub- or a super-equilibrium and
for any ω ∈ Ω there exists t0 = t0(ω) such that the closure of the tail

γ_c^{t0(ω)}(ω) = ⋃_{t ≥ t0(ω)} ϕ(t, θ−t ω)c(θ−t ω)

of the orbit emanating from c is a compact set in X. Then the omega-limit set of c,

Γc (ω) = ⋂_{t>0} closure{ ⋃_{τ≥t} ϕ(τ, θ−τ ω)c(θ−τ ω) } ,

consists of a single equilibrium u and

lim_{t→∞} ϕ(t, θ−t ω)c(θ−t ω) = u(ω)    for all ω ∈ Ω monotonically .

Proof. Using Proposition 3.4.1 we obviously have that for any ω ∈ Ω the
sequence cn (ω) = ϕ(n, θ−n ω)c(θ−n ω) is monotone and relatively compact for
each ω. Therefore we can repeat the argument given in the proof of Theo-
rem 3.5.1. 2

Below we also need the following assertion.


Lemma 3.5.1. Let (θ, ϕ) be an order-preserving RDS over an ergodic metric
dynamical system θ with phase space X = V+ . Assume that (θ, ϕ) is strongly
positive, i.e. ϕ(t, ω)(V+ \ {0}) ⊂ intV+ . Then for any equilibrium u(ω) there
exists a θ-invariant subset B ∈ F of full P-measure such that either u(ω) = 0
for all ω ∈ B or u(ω) ≫ 0 for all ω ∈ B.

Proof. Let B0 = {ω : u(ω) > 0}, where u(ω) is an equilibrium. The set B0 is


F-measurable because V+ \{0} is a Borel set in V . Since u(θt ω) = ϕ(t, ω)u(ω)
for all t ≥ 0, the strong positivity assumption implies that θt B0 ⊂ B0 for all
t ≥ 0. It is clear that B := ∩n∈Z+ θn B0 ⊂ B0 is invariant with respect to θ,
i.e. θt B = B for any t ∈ R, and P(B) = P(B0 ). Moreover u(ω) ≫ 0 for ω ∈ B.
The ergodicity implies that P(B) is equal to either 1 or 0. If P(B) = 1, then
the lemma is proved. If P(B) = 0 then we have P(B0 ) = 0 and u(ω) = 0 for
ω ∈ A0 := Ω \B0 , where P(A0 ) = 1. Since u(ω) is an equilibrium, it is easy to
see that A0 ⊂ θt A0 for any t ≥ 0. This implies that A = ∩n∈Z+ θ−n A0 ⊂ A0
possesses the properties (a) P(A) = P(A0 ) = 1, (b) A is invariant with respect
to θ, (c) u(ω) = 0 for ω ∈ A. 2

3.6 Properties of Invariant Sets of Order-Preserving


RDS

In this section we prove a theorem on the structure of the random pull back
attractor (see Definition 1.8.1) of an order-preserving RDS. We obtain this
result as a corollary of the following general assertion which can also be useful
to prove the existence of equilibria. We consider an order-preserving RDS
(θ, ϕ) on a subset X of a real separable Banach space V with a closed convex

cone V+ such that V+ ∩ {−V+ } = {0}. We do not assume any additional


properties of the cone V+ here.
Below for an element v ∈ X and a subset A ⊂ X we write v ≥ A if v ≥ a
for any a ∈ A. We understand the relation v ≤ A similarly.
Theorem 3.6.1. Let A(ω) be an invariant random compact set for an order-
preserving RDS (θ, ϕ). Then the following assertions are valid:
(i) if there exists a random variable v(ω) such that v(ω) ≥ A(ω) for all
ω ∈ Ω and

lim distX (ϕ(t, θ−t ω)v(θ−t ω), A(ω)) = 0 , ω∈Ω, (3.36)


t→+∞

then there exists an equilibrium u(ω) ∈ A(ω) such that u(ω) = sup A(ω)
and

lim ϕ(t, θ−t ω)v(θ−t ω) = u(ω) for all ω∈Ω; (3.37)


t→+∞

(ii) if there exists a random variable v(ω) satisfying (3.36) and such that
v(ω) ≤ A(ω) for all ω ∈ Ω , then there exists an equilibrium u(ω) ∈ A(ω)
such that u(ω) = inf A(ω) and

lim ϕ(t, θ−t ω)v(θ−t ω) = u(ω) for all ω∈Ω. (3.38)


t→+∞

Proof. We prove assertion (i) only. Since A(ω) ≤ v(ω), the invariance prop-
erty of A(ω) implies that

a(ω) ≤ ϕ(t, θ−t ω)v(θ−t ω) for any a(ω) ∈ A(ω) . (3.39)

The compactness of A(ω) and property (3.36) imply that for any ω ∈ Ω there
exist a sequence tn = tn (ω) → ∞ and an element u(ω) ∈ A(ω) such that

ϕ(tn , θ−tn ω)v(θ−tn ω) → u(ω) when n → ∞ .

From (3.39) we have that a(ω) ≤ u(ω) for any a(ω) ∈ A(ω), i.e. u(ω) is the
least upper bound for A(ω). Let us prove (3.37). Assume that this relation
is not true for some ω ∈ Ω. Then there exists a sequence τk → ∞ such that

‖ϕ(τk , θ−τk ω)v(θ−τk ω) − u(ω)‖ ≥ δ    for all k ∈ Z+    (3.40)

with some positive δ. As above the compactness of A(ω) and property (3.36)
allow us to extract a subsequence {τkm } and to find an element u∗ ∈ A(ω)
such that

ϕ(τkm , θ−τkm ω)v(θ−τkm ω) → u∗ when m → ∞ .

It is also clear that u∗ ≥ A(ω) and therefore u∗ = sup A(ω). Consequently


u∗ = u(ω) which contradicts (3.40). Thus we have (3.37). Property (3.37)

implies that u(ω) is a random variable in X. Since u(ω) = sup A(ω), the
invariance of A(ω) implies

a(ω) ≤ ϕ(t, θ−t ω)u(θ−t ω) for any a(ω) ∈ A(ω), t ≥ 0 .

Therefore

ϕ(t, θ−t ω)u(θ−t ω) ≥ u(ω) for all ω ∈ Ω, t ≥ 0 .

Hence using the property u(ω) ∈ A(ω) we find that

ϕ(t, θ−t ω)u(θ−t ω) = u(ω) ,

i.e. u(ω) is an equilibrium. 2

The main corollary of Theorem 3.6.1 is the following result concerning the
structure of the global attractor for an order-preserving RDS.
Theorem 3.6.2. Assume that the order-preserving RDS (θ, ϕ) on X has
a random compact pull back attractor A(ω) in some universe D and that
this attractor is order-bounded in the following sense: there exists a random
interval
[b, c](ω) = {x : b(ω) ≤ x ≤ c(ω)}
such that {b(ω)}, {c(ω)} ∈ D and A(ω) ⊂ [b, c](ω). Then there exist two
equilibria u(ω) and ū(ω) in A(ω) such that u(ω) ≤ ū(ω) and the random
attractor A(ω) belongs to the interval [u, ū](ω), i.e.

u(ω) ≤ a(ω) ≤ ū(ω) for any a(ω) ∈ A(ω) . (3.41)

These equilibria u(ω) and ū(ω) are globally asymptotically stable from below
and from above respectively, i.e.

lim ϕ(t, θ−t ω)w(θ−t ω) = u(ω) (3.42)


t→+∞

and
lim ϕ(t, θ−t ω)v(θ−t ω) = ū(ω) (3.43)
t→+∞

for any w(ω) ≤ u(ω) and for any v(ω) ≥ ū(ω) such that {w(ω)} and {v(ω)}
belong to D.

Proof. The application of Theorem 3.6.1 gives us the existence of the equi-
libria u(ω) and ū(ω). To prove (3.42) we note that

ϕ(t, θ−t ω)w(θ−t ω) ≤ u(ω) and ϕ(t, θ−t ω)w(θ−t ω) → A(ω)

for any w(ω) ≤ u(ω) such that {w(ω)} ∈ D. Now the compactness of A(ω)
and an argument similar to that used in the proof of Theorem 3.6.1 give
(3.42). The same argument can be applied to prove (3.43). 2

Remark 3.6.1. The theorem on the existence of a random attractor (see


Sect. 1.8) implies that the conditions of Theorem 3.6.2 hold, for example, if we
assume that the order-preserving RDS (θ, ϕ) is asymptotically compact and
possesses an absorbing interval in D, i.e. there exists an interval [b, c](ω) ∈ D
with the property: for every D ∈ D there exists a time t0 (ω, D) > 0 such
that
ϕ(t, θ−t ω)D(θ−t ω) ⊂ [b, c](ω) for all t ≥ t0 (ω, D) .

This remark allows us to derive from Theorem 3.6.2 the following corollary.
Corollary 3.6.1. Let the order-preserving RDS (θ, ϕ) be asymptotically com-
pact in some universe D. Assume that D contains an absorbing interval for
this RDS. If (θ, ϕ) has a unique equilibrium point u(ω) in D, then {u(ω)} is
a random attractor for this RDS in D.
In connection with Remark 3.6.1 and Corollary 3.6.1 it is convenient to
introduce the concept of an absorbing semi-equilibrium.
Definition 3.6.1. A super-equilibrium u(ω) is said to be absorbing in the
universe D if for any B ∈ D there exists tB (ω) > 0 such that

ϕ(t, θ−t ω)B(θ−t ω) ⊂ I u (ω) = u(ω) − V+ , ω∈Ω,

for all t ≥ tB (ω), i.e. we have

ϕ(t, θ−t ω)v(θ−t ω) ≤ u(ω), t ≥ tB (ω), ω∈Ω, (3.44)

for all v(ω) ∈ B(ω). Similarly a sub-equilibrium w(ω) is said to be absorbing


in the universe D if instead of (3.44) we have

ϕ(t, θ−t ω)v(θ−t ω) ≥ w(ω), t ≥ tB (ω), ω∈Ω,

for all v(ω) ∈ B(ω).

Proposition 3.6.1. Assume that an order-preserving RDS (θ, ϕ) possesses


an absorbing super-equilibrium u(ω) and an absorbing sub-equilibrium w(ω)
in some universe D such that {u(ω)}, {w(ω)} ∈ D. Then w(ω) ≤ u(ω) and
the interval [w(ω), u(ω)] is absorbing and forward invariant for RDS (θ, ϕ)
in the universe D.

Proof. From Proposition 3.4.1 we have that w(ω) ≤ ϕ(t, θ−t ω)w(θ−t ω) for all
t > 0. Therefore (3.44) implies that w(ω) ≤ u(ω) and the interval [w(ω), u(ω)]
is absorbing by Definition 3.6.1. It is forward invariant by Remark 3.4.1. 2

We conclude this section with the following example which demonstrates a


phenomenon which is impossible in deterministic order-preserving dynamical
systems.

Example 3.6.1. Let us consider the following scalar Stratonovich equation

dx(t) = g(x(t)) ◦ dWt , t > 0, (3.45)

where g(x) is a smooth function on R possessing the properties g(u0 ) =


g(u1 ) = 0 and g(x) > 0 for x ∈ (u0 , u1 ). Here u0 < u1 are real numbers. It is
easy to see that the cocycle for RDS generated by (3.45) in [u0 , u1 ] has the
form
ϕ(t, ω)x = G−1 (G(x) + Wt ),    t > 0, x ∈ (u0 , u1 ) ,

where G(x) is a primitive of 1/g(u) on the interval (u0 , u1 ) and G−1 is the
inverse mapping for G : (u0 , u1 ) → R. It is clear that this RDS is strongly
order-preserving. A simple calculation shows that for this case there are no
equilibria except u0 and u1 and that the ω-limit set of any point x from (u0 , u1 )
coincides with the whole interval [u0 , u1 ]: a non-trivial completely ordered
set. This phenomenon does not take place in deterministic (autonomous or
periodic) strongly order-preserving systems (cf. Smith [102]). Furthermore,
in this example all the trajectories oscillate between the two equilibria u0 and
u1 and there is no equilibrium inside the interval.
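A minimal simulation of this example (illustrative only; the choice g(x) = x(1 − x) on (u0, u1) = (0, 1), for which G is the logit function and G−1 the logistic function, is an assumption made for the sketch) shows that all interior trajectories are driven by the same Brownian path and sweep back and forth over essentially the whole interval.

```python
import numpy as np

rng = np.random.default_rng(3)

def G(x):        # a primitive of 1/g(u) for g(u) = u*(1-u) on (0, 1): the logit function
    return np.log(x / (1.0 - x))

def G_inv(y):    # the inverse of G: the logistic function
    return 1.0 / (1.0 + np.exp(-y))

dt, n = 0.001, 20000
W = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.normal(size=n))))  # Brownian path W_t

orbits = [G_inv(G(x0) + W) for x0 in (0.1, 0.5, 0.9)]   # phi(t, omega) x0 = G^{-1}(G(x0) + W_t)
# the orbits stay ordered and each of them ranges over a large part of (0, 1)
print([(float(o.min()), float(o.max())) for o in orbits])
```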

3.7 Comparison Principle

All the above considerations give ample proof of the crucial role played by sub-
and super-equilibria in the study of qualitative properties of order-preserving
RDS. One of the methods of proving their existence relies on the comparison
principle. Let V be a Banach space with a cone V+ and let X1 and X2 be
subsets of V . Let (θ, ϕ1 ) and (θ, ϕ2 ) be two RDS on X1 and X2 over the same
metric dynamical system θ and take Y ⊂ X1 ∩ X2 . The system (θ, ϕ2 ) is said
to dominate (θ, ϕ1 ) from above on Y (or (θ, ϕ1 ) dominates (θ, ϕ2 ) from below
on Y ) if

ϕ1 (t, ω, x) ≤ ϕ2 (t, ω, x) for any t > 0, x ∈ Y, ω∈Ω. (3.46)

Clearly (3.46) implies that any super-equilibrium v(ω) for (θ, ϕ2 ) such that
v(ω) ∈ Y for all ω ∈ Ω is a super-equilibrium for (θ, ϕ1 ) and any sub-
equilibrium w(ω) for (θ, ϕ1 ) with the property w(ω) ∈ Y for all ω ∈ Ω is a
sub-equilibrium for (θ, ϕ2 ).
In many applications, e.g. for the construction of random attractors (see
Chaps.5 and 6 below), a nonlinear RDS can be shown to be dominated by
an affine RDS (see Definition 1.2.3), whose equilibrium then becomes a sub-
or super-equilibrium of the corresponding nonlinear RDS.
As an example of an application of the comparison principle we prove the
following assertion.

Proposition 3.7.1. Let (θ, ϕ) be an order-preserving RDS in a cone V+ of a


separable Banach space V . Assume that the system (θ, ϕ) is dominated from
above on the cone V+ by an affine RDS (θ, ϕaff ). Suppose that RDS (θ, ϕaff )
satisfies the hypotheses of Proposition 1.9.2, i.e. (θ, ϕaff ) is asymptotically
compact in a universe D with the properties (a) {0} ∈ D, (b) for any D ∈ D
and λ > 0 the set ω → λD(ω) belongs to D and (c) an attracting random
compact set B0 belongs to D. Let u(ω) be the unique equilibrium for (θ, ϕaff )
in D. Then u(ω) ≥ 0 and for any µ ≥ 1 the random variable vµ (ω) := µu(ω)
is a super-equilibrium for (θ, ϕ). If the cone V+ is solid, then the interval
[0, e(ω) + u(ω)] with arbitrary e(ω) ∈ intV+ is absorbing for (θ, ϕ) in the
universe
D∗ = {D ∈ D : D(ω) ⊂ V+ for all ω ∈ Ω} .
If u(ω) ≫ 0, then vµ (ω) is an absorbing super-equilibrium for (θ, ϕ) in D∗
for any µ > 1. In this case, if V is a finite-dimensional space, then the
RDS (θ, ϕ) possesses a random attractor in the universe D̃ consisting of all
random closed sets {B(ω)} such that B(ω) ⊂ [0, αu(ω)] for some α > 0 and
the conclusions of Theorem 3.6.2 hold.

Proof. The cocycle ϕaff has the form

ϕaff (t, ω)x = Φ(t, ω)x + ψ(t, ω), x∈V .

Since ϕ(t, ω)0 ≥ 0 and ϕ(t, ω)x ≤ ϕaff (t, ω)x for all x ∈ V+ , we have

ψ(t, ω) = ϕaff (t, ω)0 ≥ ϕ(t, ω)0 ≥ 0 .

Therefore (1.51) implies that u(ω) ≥ 0. We also have

ϕ(t, ω)vµ (ω) ≤ µΦ(t, ω)u(ω) + ψ(t, ω)

= µϕaff (t, ω)u(ω) + (1 − µ)ψ(t, ω) = µu(θt ω) + (1 − µ)ψ(t, ω) .

Hence vµ (ω) is a super-equilibrium for (θ, ϕ) for any µ ≥ 1. Relation (1.52)


implies that [0, e(ω) + u(ω)] is an absorbing interval for (θ, ϕ). If u(ω) ≫ 0
we can choose e(ω) = (µ − 1) · u(ω) and therefore vµ (ω) is an absorbing
super-equilibrium, µ > 1. Thus [0, µu(ω)] is an absorbing set for (θ, ϕ) in
D̃. Hence (θ, ϕ) is dissipative. If V is finite-dimensional, then Corollary 1.8.1
implies that a random attractor exists in the universe D̃ and we can apply
Theorem 3.6.2. 2

Example 3.7.1 (Binary Biochemical Model). Consider the random differen-


tial equations

ẋ1 = g(x2 ) − α1 (θt ω)x1 ,


(3.47)
ẋ2 = x1 − α2 (θt ω)x2 ,

over an ergodic metric dynamical system θ. Assume that g(x) is a C 1 function


such that

g(0) ≥ 0,    0 ≤ g(x) ≤ g0 ,    g′(x) ≥ 0 for all x ∈ R+ ,

where g0 is a constant. Let αi (ω) ∈ L1 (Ω, F, P) be a random variable such


that αi (θt ω) ∈ L1loc (R) and Eαi > 0 for i = 1, 2. Equations (3.47) generate
a strictly order-preserving RDS in R2+ (see Example 3.3.5). It is easy to see
that (θ, ϕ) is dominated from above by the affine RDS (θ, ϕaff ) generated by
the equations

ẋ1 = g0 − α1 (θt ω)x1 ,


ẋ2 = x1 − α2 (θt ω)x2 .

Let u(ω) = (u1 (ω), u2 (ω)), where

u1 (ω) = g0 ∫_{−∞}^{0} exp{ − ∫_s^0 α1 (θτ ω) dτ } ds

and

u2 (ω) = ∫_{−∞}^{0} u1 (θs ω) exp{ − ∫_s^0 α2 (θτ ω) dτ } ds .

A simple calculation shows that u(ω) ≫ 0 is an equilibrium for (θ, ϕaff ). Thus
u(ω) is a super-equilibrium for (θ, ϕ). Proposition 3.7.1 can be applied here
with the universe D consisting of all tempered sets from R2 .
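To make the pullback construction of the equilibrium concrete, here is a small numerical sketch in Python (an illustration only, not part of the original exposition). It assumes a hypothetical stationary driver: Ω = [0, 2π) with the shift θt ω = ω + t and α1 (ω) = a0 + a1 cos ω, so that Eα1 = a0 > 0; the integral defining u1 (ω) is truncated at a finite horizon T.

import numpy as np

# hypothetical stationary driver (an assumption for illustration, not from the text):
# theta_t omega = omega + t, alpha1(omega) = a0 + a1*cos(omega), E[alpha1] = a0 > 0
a0, a1, g0 = 1.0, 0.5, 2.0
alpha1 = lambda w: a0 + a1 * np.cos(w)

def u1(omega, T=60.0, n=20000):
    # truncated pullback integral  g0 * int_{-T}^0 exp(-int_s^0 alpha1(theta_tau omega) dtau) ds
    s = np.linspace(-T, 0.0, n)
    ds = s[1] - s[0]
    inner = np.cumsum(alpha1(omega + s)[::-1])[::-1] * ds   # int_s^0 alpha1(theta_tau omega) dtau
    return g0 * np.sum(np.exp(-inner)) * ds

omega, h = 0.7, 1e-3
u = u1(omega)
# u1 should solve the first equation of the dominating affine system:
# d/dt u1(theta_t omega) = g0 - alpha1(theta_t omega) * u1(theta_t omega)
residual = (u1(omega + h) - u1(omega - h)) / (2 * h) - (g0 - alpha1(omega) * u)
print(u, residual)   # the residual should be close to zero

The same scheme applied once more, with α2 and with u1 in place of g0, yields u2 (ω).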
4. Sublinear Random Dynamical Systems

In this chapter we consider a class of order-preserving RDS which possess


certain concavity properties. The deterministic versions of these properties
play an important role in many studies and applications, see Krasnoselskii
[68, 69], Krause et al. [72, 73], Smith [101], Takač [103] and the references
therein. For the sake of simplicity we assume that the state space X is equal
to a solid cone V+ of a real Banach space V ,

X = V+ = {x ∈ V : x ≥ 0} ,

i.e. we consider random dynamical systems (θ, ϕ) which possess the positivity
property: ϕ(t, ω)V+ ⊂ V+ for all t and ω. For order-preserving systems this
property is equivalent to the relation ϕ(t, ω)0 ≥ 0. Our main result in this
chapter is a random limit set trichotomy which describes all possible types
of long-time behaviour in sublinear random systems. This result is a clear
manifestation of the general experience that monotonicity, and even more
so sublinearity, drastically simplifies the possible long-term behaviour of a
dynamical system.

4.1 Sublinear and Concave RDS


We start with the most general concavity property which we call sublinearity
(sometimes also named subhomogeneity). Sublinearity means concavity for
the particular case in which one of the reference points is 0, hence asks less
(and is thus more general) than classical concavity.
Definition 4.1.1 (Sublinear RDS). An order-preserving RDS (θ, ϕ) on
X = V+ is said to be sublinear if for any x ∈ V+ and for any λ ∈ (0, 1) we
have
λϕ(t, ω, x) ≤ ϕ(t, ω, λx) for all t > 0 and ω ∈ Ω . (4.1)
The RDS is said to be (i) strictly sublinear if we have in addition for any
x ∈ intV+ the strict inequality

λϕ(t, ω, x) < ϕ(t, ω, λx) for all t>0 and ω∈Ω, (4.2)

and (ii) strongly sublinear if in addition to (4.1) we have


λϕ(t, ω, x)  ϕ(t, ω, λx) for all t > 0, x ∈ intV+ , and ω ∈ Ω , (4.3)

i.e. ϕ(t, ω, λx) − λϕ(t, ω, x) ∈ intV+ .


Equation (4.1) holds automatically for t = 0 and for λ = 0 and 1. In the one-dimensional case the properties of strict sublinearity (4.2) and strong sublin-
earity (4.3) coincide. Property (4.1) can be equivalently rewritten as follows:
For any x ∈ V+ and for any λ > 1 we have

ϕ(t, ω, λx) ≤ λϕ(t, ω, x) for all t > 0 and ω ∈ Ω . (4.4)

Similarly for (4.2) and (4.3).


Using conditions (4.1) and (4.4) it is easy to see that if u ≥ 0 is
(i) a sub-equilibrium, then λu(·) is a sub-equilibrium for any λ ∈ [0, 1];
(ii) a super-equilibrium, then λu(·) is a super-equilibrium for any λ ≥ 1;
(iii) an equilibrium, then λu(·) is a sub-equilibrium for any λ ∈ [0, 1] and a
super-equilibrium for any λ ≥ 1.

Example 4.1.1 (Binary Biochemical Model). Let (θ, ϕ) be the RDS in R2+
generated by the equations

ẋ1 = g(x2 ) − α1 (θt ω)x1 ,


ẋ2 = x1 − α2 (θt ω)x2 ,

over a metric dynamical system θ. We assume that the function g(x) and the
random variables α1 (ω) and α2 (ω) satisfy the assumptions of Examples 2.1.1
and 3.3.5. In Sect.5.7 we prove that (θ, ϕ) is a strictly sublinear RDS if g(x)
is a sublinear mapping from R+ into R, i.e. if λg(x) ≤ g(λx) for all x ≥ 0 and
0 < λ < 1. This system is strongly sublinear if we assume additionally that
g′(x) > 0 for x > 0. A similar result remains true in the stochastic case (cf.
Example 2.4.3 and Sect.6.8). We note that sublinear functions g(x) appear
in the Griffith (g(x) = x · (1 + x)−1 for x > 0) and in the Othmer-Tyson
(g(x) = (1 + x) · (k + x)−1 , x > 0, k > 1) biochemical models. We refer to
Selgrade [96] for a discussion and for the references.
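As a quick sanity check, the following Python sketch (added for illustration; the value k = 2 is an arbitrary admissible choice) verifies the sublinearity of the Griffith and Othmer–Tyson nonlinearities both directly from the definition λg(x) ≤ g(λx) and via the scalar criterion g(u)/u nonincreasing noted below.

import numpy as np

griffith = lambda x: x / (1.0 + x)
k = 2.0                                    # any k > 1 is admissible in the Othmer-Tyson model
othmer_tyson = lambda x: (1.0 + x) / (k + x)

rng = np.random.default_rng(0)
x   = rng.uniform(0.0, 50.0, size=10_000)
lam = rng.uniform(0.0, 1.0,  size=10_000)
u   = np.sort(rng.uniform(1e-3, 50.0, size=10_000))

for name, g in [("Griffith", griffith), ("Othmer-Tyson", othmer_tyson)]:
    sublinear = np.all(lam * g(x) <= g(lam * x) + 1e-12)    # lambda*g(x) <= g(lambda*x)
    monotone  = np.all(np.diff(g(u) / u) <= 1e-12)          # g(u)/u nonincreasing
    print(name, sublinear, monotone)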

We note that an order-preserving affine (see Definition 1.2.3 and Exam-


ple 3.3.6) RDS which maps V+ into itself is automatically sublinear. It is
strictly (resp. strongly) sublinear if ψ(t, ω) > 0 (resp. ψ(t, ω)  0) in the
representation (1.4) for t > 0. It is also easy to see that a scalar function
g : R+ → R is sublinear if and only if g(u)/u is nonincreasing.
Definition 4.1.2 (Concave RDS). An order-preserving RDS (θ, ϕ) on
X = V+ is said to be concave if for any 0 ≤ x ≤ y and for any λ ∈ (0, 1) we
have
λϕ(t, ω, x) + (1 − λ)ϕ(t, ω, y) ≤ ϕ(t, ω, λx + (1 − λ)y) (4.5)

for all t > 0 and ω ∈ Ω. The RDS is said to be strictly concave if in addition
we have strict inequality in (4.5) for all 0  x  y and it is strongly concave
if

λϕ(t, ω, x) + (1 − λ)ϕ(t, ω, y)  ϕ(t, ω, λx + (1 − λ)y), 0xy.

It is clear that (strict, strong) concavity implies (strict, strong) sublinearity.


A simple one-dimensional example f (x) = (1 + x)−1 , x ∈ V+ = R+ shows
that the converse is not valid.
If (θ, ϕ) is a C 1 -smooth RDS we can establish the following necessary and
sufficient conditions for sublinearity and concavity. Below we denote by Dx
the Frechet derivative with respect to x.
Proposition 4.1.1. Assume that (θ, ϕ) is a C 1 -smooth order-preserving
RDS in V+ . Then
(i) it is sublinear if and only if

Dx ϕ(t, ω, x)x ≤ ϕ(t, ω, x) for all t ≥ 0, ω ∈ Ω, x ∈ V+ \ {0} , (4.6)

and it is strictly (strongly) sublinear provided that in (4.6) we have strict


(strong) inequality for x ∈ intV+ ;
(ii) the system (θ, ϕ) is concave if and only if for any x, z ∈ V+ \ {0} we
have

Dx ϕ(t, ω, x + z)z ≤ Dx ϕ(t, ω, x)z for all t > 0, and ω ∈ Ω , (4.7)

and it is strictly (strongly) concave if in (4.7) the inequality is strict (strong)


for all x and z from intV+ .

Proof. Since

d/dλ [ (1/λ) · ϕ(t, ω, λx) ] = − (1/λ²) {ϕ(t, ω, λx) − Dx ϕ(t, ω, λx)λx} ,

we have

(1/ν) · ϕ(t, ω, νx) − (1/λ) · ϕ(t, ω, λx) = − ∫_λ^ν (1/µ²) {ϕ(t, ω, µx) − Dx ϕ(t, ω, µx)µx} dµ ,

for any 0 < λ < ν ≤ 1, ω ∈ Ω and x ∈ V+ \ {0}. This implies (i).


To prove (ii) we first note that

d/dλ { (1/λ) · [ϕ(t, ω, x + λz) − ϕ(t, ω, x)] } = − (1/λ²) ( ∫_0^λ [Dx ϕ(t, ω, x + µz) − Dx ϕ(t, ω, x + λz)] dµ ) z

for any x and z from V+ \ {0}. Therefore for any λ2 > λ1 > 0 we have

ϕ(t, ω, x + λ1 z) − (λ1 /λ2 ) ϕ(t, ω, x + λ2 z) − (1 − λ1 /λ2 ) ϕ(t, ω, x)

= λ1 ∫_{λ1}^{λ2} (1/λ²) ( ∫_0^λ [Dx ϕ(t, ω, x + µz)z − Dx ϕ(t, ω, x + λz)z] dµ ) dλ . (4.8)

Consequently the (strict, strong) concavity in the differential form (4.7) im-
plies (strict, strong) concavity in the sense of Definition 4.1.2. It is clear from
(4.5) that

Dx ϕ(t, ω, x + z)z ≤ ϕ(t, ω, x + z) − ϕ(t, ω, x) ≤ Dx ϕ(t, ω, x)z (4.9)

for all x and z from V+ \ {0}. Consequently (4.5) implies (4.7). 2

Remark 4.1.1. A simple example of the C 1 -mapping



f (x) = 2x + x(x − 1)²  if 0 ≤ x ≤ 1 ,   and   f (x) = 2 + x − 1/x  if x > 1 ,

from R+ into itself shows that strict (strong) sublinearity does not imply
strict (strong) inequality in (4.6).

Below we also use the following concept of concavity for a C 1 -smooth RDS
which was introduced by Smith [101] in the deterministic case.
Definition 4.1.3 (S-Concave RDS). A C 1 -smooth order-preserving RDS
(θ, ϕ) on X = V+ is said to be s-concave if for any 0  x  y and z ∈ intV+
we have

Dx ϕ(t, ω, y)z < Dx ϕ(t, ω, x)z for all t > 0, and ω ∈ Ω . (4.10)

It is clear from Proposition 4.1.1 that any s-concave RDS is strictly concave.

4.2 Equilibria and Semi-Equilibria for Sublinear RDS

In this section we prove a uniqueness theorem for equilibria of strongly sub-


linear RDS and study their stability properties.
We start with the following important lemma. We recall (see Defini-
tion 3.1.4) that any equivalence class C under the equivalence relation on
the cone V+ defined by
 
{x ∼ y} ⇐⇒ ∃ α ∈ R+ \ {0}, α−1 x ≤ y ≤ αx (4.11)

is called a part of V+ and every part C is a metric space with respect to the
part (Birkhoff) metric defined by

p(x, y) := inf{log α : α−1 x ≤ y ≤ αx}, x, y ∈ C . (4.12)
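For the standard cone V+ = Rd+ the part metric on int Rd+ has the explicit form p(x, y) = max_i |log(xi /yi )|. The following short Python sketch (added for illustration only) implements this and checks the nonexpansiveness property of Lemma 4.2.1 below for the sublinear order-preserving map x → √x taken componentwise.

import numpy as np

def part_metric(x, y):
    # p(x, y) = inf{log a : a^{-1} x <= y <= a x} = max_i |log(x_i / y_i)|  on int R^d_+
    x, y = np.asarray(x, float), np.asarray(y, float)
    assert np.all(x > 0) and np.all(y > 0), "x and y must belong to the same part (here int R^d_+)"
    return float(np.max(np.abs(np.log(x) - np.log(y))))

x = np.array([1.0, 2.0, 0.5])
y = np.array([2.0, 2.0, 1.0])
print(part_metric(x, y))                      # log 2
print(part_metric(x, x))                      # 0
f = np.sqrt                                   # componentwise square root: order-preserving and sublinear
print(part_metric(f(x), f(y)), part_metric(x, y))   # the map halves the part distance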

Lemma 4.2.1. Let (θ, ϕ) be a sublinear order-preserving RDS on V+ . Then


(i) ϕ preserves the equivalence relation (4.11) and is nonexpansive under
the part metric on every part C of V+ , i.e. for all x, y ∈ C

p(ϕ(t, ω)x, ϕ(t, ω)y) ≤ p(x, y) for all t≥0 and ω∈Ω.

(ii) (θ, ϕ) is strongly sublinear if and only if it is contractive under the


part metric, i.e. for all x, y ∈ intV+ , x = y,

p(ϕ(t, ω)x, ϕ(t, ω)y) < p(x, y) for all t > 0, ω∈Ω, (4.13)

and ϕ(t, ω)intV+ ⊂ intV+ for t > 0 and ω ∈ Ω.

Proof. (i) It follows from (4.1) and (4.4) that if for x, y ∈ V+ and some α ≥ 1

α−1 x ≤ y ≤ αx

then also

α−1 ϕ(t, ω)x ≤ ϕ(t, ω)y ≤ αϕ(t, ω)x for all t ≥ 0 and ω ∈ Ω

and hence by (4.12) we have p(ϕ(t, ω)x, ϕ(t, ω)y) ≤ p(x, y) for all t ≥ 0 and
ω ∈ Ω, proving (i).
(ii) Assume that x, y ∈ intV+ and there is no λ > 0 such that y = λx. In
this case p(x, y) > 0 and

e−p(x,y) x < y < xep(x,y) .

Thus (4.3) implies that

e−p(x,y) ϕ(t, ω)x  ϕ(t, ω)y  ep(x,y) ϕ(t, ω)x for t > 0 and ω ∈ Ω .

It is clear that for every t > 0 and ω ∈ Ω there exists µ := µ(t, ω, x, y) > 0
such that

eµ e−p(x,y) ϕ(t, ω)x  ϕ(t, ω)y  e−µ ep(x,y) ϕ(t, ω)x .

Therefore
p(ϕ(t, ω)x, ϕ(t, ω)y) ≤ p(x, y) − µ < p(x, y) .

Thus we obtain (4.13) for these x and y. If y = λx for some λ > 1, then
p(x, y) = log λ and

ϕ(t, ω)x ≤ ϕ(t, ω)y  λϕ(t, ω)x for t > 0 and ω ∈ Ω . (4.14)

As above this implies (4.13). The case y = λx with 0 < λ < 1 is similar. Thus
a strongly sublinear RDS possesses property (4.13). It is also clear from (4.14)
that ϕ(t, ω)intV+ ⊂ intV+ .
If (4.13) holds for some order-preserving RDS, then for any x ∈ intV+
and 0 < λ < 1 we have

p(ϕ(t, ω)[λx], ϕ(t, ω)x) ≤ log(1/λ) − µ
with some positive µ. Hence eµ λϕ(t, ω)x ≤ ϕ(t, ω)[λx]. This property and
the invariance of intV+ imply (4.3). 2

Lemma 4.2.1 is a motivation for the following definition.

Definition 4.2.1. A sublinear order-preserving RDS (θ, ϕ) is said to be


strongly sublinear on a part C of V+ if ϕ(t, ω)C ⊂ C for t > 0 and ω ∈ Ω
and it is contractive under the part metric, i.e. (4.13) holds for all x, y ∈ C,
x = y.

Theorem 4.2.1 (Uniqueness of Equilibrium). If a sublinear order-pre-


serving RDS (θ, ϕ) is strictly sublinear on some part C of the cone V+ , then
any two equilibria in C are equal on a set of full measure in Ω which is
invariant with respect to θ.

Proof. Consider the function V (ω, u, v) = p(u, v) on Ω ×C ×C, where p is the


part metric. Proposition 3.2.4 implies that the function ω → V (ω, u(ω), v(ω))
is measurable for any random variables u(ω) and v(ω) from C. Lemma 4.2.1
gives that for any u and v from C we have

V (θt ω, ϕ(t, ω)u, ϕ(t, ω)v) ≤ V (ω, u, v) for all t > 0, ω ∈ Ω ,

with strict inequality, if u = v. Thus the function V (ω, u, v) = p(u, v) satisfies


the hypotheses of Proposition 1.7.1 which gives the assertion. 2

Remark 4.2.1. (i) Theorem 4.2.1 is wrong without the assumption of strong
sublinearity. Consider for example ẋ = a(θt ω)x on X = R+ , where a(θt ω) =
db(θt ω)/dt is the derivative of a stationary process t → b(θt ω) with absolutely
continuous trajectories. Then the sublinear (but not strongly or strictly sub-
linear) solution is
ϕ(t, ω)x = xe−b(ω) eb(θt ω) ,
meaning that ϕ is a coboundary, i.e. is a cocycle which is cohomologous to
the trivial cocycle ψ(t, ω) ≡ 1 (Arnold [3, Chap.5]), and any x(ω) = ceb(ω) ,

c ∈ R+ , is an equilibrium. It is also easy to see that we cannot replace the


strong sublinearity by property (4.2). The deterministic mapping f (x1 , x2 ) = (√x1 , x2 ) of R2+ into itself provides an example.
(ii) If two equilibria coincide on a set of full measure in Ω, then they
generate the same ϕ-invariant measure on Ω × V+ by equation (1.59). Thus
Theorem 4.2.1 means that for any part C of the cone V+ a strongly sublinear
RDS (θ, ϕ) has at most one invariant measure generated by a random Dirac
measure supported by C. We also note that every part of the cone can contain
a positive equilibrium which is stable in its part (see Remark 4.5.1(ii) below).

Proposition 4.2.1. Let v : Ω → V+ . Let the order-preserving RDS (θ, ϕ)


be strongly sublinear on the part Cv generated by v(ω) (see Definition 3.2.1).
Then any two equilibria in Cv are equal on a θ-invariant set in Ω of full
measure.

The idea of the proof is the same as for Theorem 4.2.1 and Proposition 1.7.1.
We next make a statement about the global asymptotic stability of an
equilibrium u in its own part Cu .
Theorem 4.2.2. Let (θ, ϕ) be a strongly sublinear order-preserving RDS in
V+ . Assume that it has an equilibrium u : Ω → intV+ . Suppose that there
exists a constant µ0 ≥ 1 such that for all µ > µ0 the orbits emanating from
µu and µ−1 u are relatively compact in V+ . Then u is globally asymptotically
stable in Cu , i.e. there exists a θ-invariant set Ω ∗ ∈ F of full measure such
that for any w ∈ Cu

lim ϕ(t, θ−t ω)w(θ−t ω) = u(ω) for all ω ∈ Ω∗ . (4.15)


t→+∞

Proof. Let w ∈ Cu . Then there exists an integer µ = µ(w) > µ0 ≥ 1 such


that
µ−1 u(ω) ≤ w(ω) ≤ µu(ω) . (4.16)
Since µu is a super-equilibrium, Proposition 3.5.2 and our assumption ensure
the existence of an equilibrium wµ such that

lim ϕ(t, θ−t ω)µu(θ−t ω) = wµ (ω) for all ω ∈ Ω . (4.17)


t→+∞

Similarly, there exists an equilibrium wµ such that

lim ϕ(t, θ−t ω)µ−1 u(θ−t ω) = wµ (ω) for all ω ∈ Ω . (4.18)


t→+∞

By Theorem 3.5.1

µ−1 u(ω) ≤ wµ (ω) ≤ wµ (ω) ≤ µu(ω).



Hence wµ , wµ ∈ Cu , and by Proposition 4.2.1 there exists a θ-invariant set


Ωµ∗ ∈ F of full measure such that

wµ (ω) = wµ (ω) = u(ω) for all ω ∈ Ωµ∗ , µ ∈ N, µ > µ0 .

Therefore

wµ (ω) = wµ (ω) = u(ω) for all ω ∈ Ω ∗ := ∩µ∈N,µ>µ0 Ωµ∗ . (4.19)

It is clear that Ω ∗ is a θ-invariant set of full measure. By sublinearity, (4.16)


implies

ϕ(t, θ−t ω)µ−1 u(θ−t ω) ≤ ϕ(t, θ−t ω)w(θ−t ω) ≤ ϕ(t, θ−t ω)µu(θ−t ω) .

Consequently (4.17) to (4.19) imply (4.15). 2

We next present a criterion for the existence and half-sided attractivity of an


equilibrium.
Theorem 4.2.3. Let (θ, ϕ) be a strongly sublinear order-preserving RDS in
V+ .
(i) Let a : Ω → intV+ be a sub-equilibrium such that the orbits emanating
from elements λa are relatively compact in V+ for all 0 < λ ≤ 1 and ω ∈ Ω.
Then there exists an equilibrium u such that u(ω) ≥ a(ω) for all ω ∈ Ω and

lim ϕ(t, θ−t ω)w(θ−t ω) = u(ω) (4.20)


t→+∞

on a θ-invariant set in Ω of full measure for any w possessing the property

α−1 a(ω) ≤ w(ω) ≤ u(ω) for some number α ≥ 1 . (4.21)

(ii) Let b : Ω → intV+ be a super-equilibrium such that the orbits emanat-


ing from elements λb are relatively compact in V+ for all λ ≥ 1 and ω ∈ Ω.
Then there exists an equilibrium v such that v(ω) ≤ b(ω) for all ω ∈ Ω. If
v(ω)  0 for all ω ∈ Ω, then

lim ϕ(t, θ−t ω)w(θ−t ω) = v(ω) (4.22)


t→+∞

on a θ-invariant set in Ω of full measure for any w possessing the property

v(ω) ≤ w(ω) ≤ βb(ω) for some number β ≥ 1 .

Proof. We only prove (i). Since λa is a sub-equilibrium for every 0 < λ ≤ 1,


by Proposition 3.5.2 there exists an equilibrium uλ such that

lim ϕ(t, θ−t ω)λa(θ−t ω) = uλ (ω).


t→+∞

It is clear that uλ (ω) ≤ u1 (ω) for 0 < λ ≤ 1. By (4.1)

λϕ(t, θ−t ω)a(θ−t ω) ≤ ϕ(t, θ−t ω)λa(θ−t ω) for all t > 0 and ω ∈ Ω ,

hence
λu1 (ω) ≤ uλ (ω) ≤ u1 (ω) for all 0<λ≤1.
This means that uλ ∈ Cu1 for all λ ∈ (0, 1], thus by Proposition 4.2.1,
uλ (ω) = u1 (ω) on a θ-invariant set Ω ∗ of full measure. As in the proof
of Theorem 4.2.2 we can choose Ω ∗ independent of λ. By Theorem 3.5.1,
u1 (ω) ≥ a(ω).
For all w satisfying (4.21) with u(ω) := u1 (ω) clearly

ϕ(t, θ−t ω)[α−1 a(θ−t ω)] ≤ ϕ(t, θ−t ω)w(θ−t ω) ≤ ϕ(t, θ−t ω)u1 (θ−t ω) = u1 (ω) .

Letting t → ∞ gives (4.20) with u(ω) = u1 (ω). 2


Using Uniqueness Theorem 4.2.1 it is easy to derive from Theorem 4.2.3 the
following assertion.
Corollary 4.2.1. Let (θ, ϕ) be a strongly sublinear order-preserving RDS in
V+ . Assume that there exist a sub-equilibrium a(ω) and a super-equilibrium
b(ω) such that a(ω) ≤ b(ω) and the hypotheses of Theorem 4.2.3 concerning
a and b hold. Then there exists an equilibrium u(ω) ∈ [a(ω), b(ω)] such that
(4.20) holds for any w(ω) satisfying

β −1 a(ω) ≤ w(ω) ≤ βb(ω) for some β ≥ 1 .

Remark 4.2.2. The compactness assumptions in Theorems 4.2.2 and 4.2.3(ii)


can be omitted if the cone V+ is regular (see Definition 3.1.6). Moreover, in
this case relation (4.22) holds for all w(ω) satisfying

β −1 v(ω) ≤ w(ω) ≤ βb(ω) for some β ≥ 1 .

As for the first statement of Theorem 4.2.3, the regularity of the cone V+ and
the compactness of the trajectory γa imply (4.20) for all w(ω) satisfying

α−1 a(ω) ≤ w(ω) ≤ αu(ω) for some α ≥ 1 .

In the case when 0 is an equilibrium we have the following assertion for


concave systems.
Proposition 4.2.2. Let (θ, ϕ) be a concave order-preserving C 1 RDS in a
normal cone V+ and v ≡ 0 be an equilibrium. Let (θ, Φ) be the linearization
of (θ, ϕ) at 0. If the top Lyapunov exponent λ of (θ, Φ) is negative, then there
exists a θ-invariant set Ω ∗ of full measure such that

lim_{t→∞} e^{γt} ‖ϕ(t, θ−t ω)x‖ = 0, x ∈ V+ , ω ∈ Ω ∗ , (4.23)

for every γ < −λ.



Proof. From (4.7) we have Dx ϕ(t, ω, z)z ≤ Dx ϕ(t, ω, 0)z for any z > 0.
Therefore from  1
ϕ(t, ω)x = Dx ϕ(t, ω, sx)x ds
0
we get that

ϕ(t, ω)x ≤ Dx ϕ(t, ω, 0)x ≡ Φ(t, ω)x, x ∈ V+ .

Therefore (4.23) follows from Definition 1.9.1 of the top Lyapunov exponent.
2

4.3 Almost Equilibria


In this section we introduce the notion of an almost equilibrium and prove
a theorem which gives a description of the long-time behaviour of strongly
sublinear RDS with a strongly positive sub-equilibrium.
Definition 4.3.1. A random variable u(ω) in V+ is said to be an almost
equilibrium of an RDS (θ, ϕ) if it is invariant under ϕ for almost all ω ∈ Ω,
i.e. if there exists a set Ω ∗ ∈ F such that P(Ω ∗ ) = 1 and

ϕ(t, ω)u(ω) = u(θt ω) for all t≥0 and all ω ∈ Ω∗ . (4.24)

The following assertion shows that we can choose the set Ω ∗ in (4.24) to be
θ-invariant.
Proposition 4.3.1. If u(ω) ≥ 0 is an almost equilibrium of an RDS (θ, ϕ),
then there exists a θ-invariant set Ω ∗∗ ∈ F̄P of full measure such that (4.24)
holds.
Proof. Let

Ω̃ := {ω : ϕ(t, ω)u(ω) = u(θt ω) for all t ≥ 0} .

Since Ω ∗ ⊆ Ω̃ and P(Ω ∗ ) = 1, we have that θs Ω̃ ∈ F̄P and P̄(θs Ω̃) = 1 for
every fixed s ∈ R. Here P̄ is the extension of P on F̄P . Using the cocycle
property we get

ϕ(t, θs ω)u(θs ω) = ϕ(t, θs ω)ϕ(s, ω)u(ω) = ϕ(t + s, ω)u(ω) = u(θt+s ω)

for all t, s ≥ 0 and ω ∈ Ω̃. Hence θs Ω̃ ⊂ Ω̃ for all s ≥ 0. Let Ω ∗∗ := ∩n≥0 θn Ω̃.
It is clear that θt Ω ∗∗ ⊂ Ω ∗∗ for t ≥ 0, P̄(Ω ∗∗ ) = 1 and (4.24) holds for
ω ∈ Ω ∗∗ . Let k − 1 ≤ t < k for k ∈ N. Then
  
θ−t Ω ∗∗ ⊂ ∩_{n≥0} θn−t Ω̃ = ∩_{n≥0} θn−k (θk−t Ω̃) ⊂ ∩_{m≥0} θm (θk−t Ω̃)

Since θk−t Ω̃ ⊂ Ω̃, we obtain θ−t Ω ∗∗ ⊂ Ω ∗∗ . Thus Ω ∗∗ is a θ-invariant set. 2



Remark 4.3.1. If (θ, ϕ) is an RDS with discrete time, then in the proof of
Proposition 4.3.1 we have Ω̃ ∈ F and therefore Ω ∗∗ ∈ F. Under this condition
it is possible (cf. Remark 1.2.1(ii)) to find a version ϕ̃ of the cocycle ϕ such
that u(ω) is an equilibrium for (θ, ϕ̃). We also refer to Scheutzow [90],
where the perfection problem of crudely invariant elements is discussed for
invertible cocycles.

For the main result of this section we need the following definitions.
Definition 4.3.2. Let U ∈ F. The orbit γa (ω) = ∪t≥0 ϕ(t, θ−t ω)a(θ−t ω) of
the RDS (θ, ϕ) in X = V+ emanating from a is said to be bounded on U if
there exists a random variable C on U such that

‖ϕ(t, θ−t ω)a(θ−t ω)‖ ≤ C(ω) for all t ≥ 0 and ω ∈ U .

The orbit γa is said to be bounded, if it is bounded on the whole Ω. We say


that the orbit γa is unbounded if it is not bounded.

Definition 4.3.3. An RDS (θ, ϕ) in V+ is said to be conditionally compact


if for any U ∈ F and for any orbit γa (ω) which is bounded on U there exists
a family of compact sets K(ω) such that

lim dist (ϕ(t, θ−t ω)a(θ−t ω), K(ω)) = 0 ω∈U . (4.25)


t→∞

We note that an RDS (θ, ϕ) in V+ is conditionally compact if any orbit γa (ω)


which is bounded on some U ∈ F is a relatively compact set for any ω ∈ U .
Theorem 4.3.1. Let V be a separable Banach space with a normal solid
cone V+ . Assume that (θ, ϕ) is a strongly sublinear conditionally compact
order-preserving RDS over an ergodic metric dynamical system θ. Suppose
that there exists a sub-equilibrium a(ω) ∈ int V+ . Then either
(i) we have ‖ϕ(t, θ−t ω)v(θ−t ω)‖ → ∞ almost surely as t → ∞ for every
v(ω) ∈ int V+ such that v(ω) ≥ αa(ω) for some nonrandom α > 0 and for
every ω ∈ Ω or
(ii) there exists a unique almost equilibrium u(ω)  0 defined on a θ-
invariant set Ω ∗ ∈ F of full measure such that

lim ϕ(t, θ−t ω)v(θ−t ω) = u(ω), ω ∈ Ω∗ , (4.26)


t→+∞

for any random variable v(ω) possessing the property αa(ω) ≤ v(ω) ≤ λu(ω)
for all ω ∈ Ω ∗ and for some nonrandom positive α and λ.

Proof. From Proposition 3.4.1 we get that {as (ω) := ϕ(s, θ−s ω)a(θ−s ω), s >
0} is a monotone family of sub-equilibria. Since the cone V+ is normal, there
exists an equivalent norm ‖·‖∗ on V such that s → ‖as (ω)‖∗ is a monotone
nondecreasing function for every ω ∈ Ω (see Remark 3.1.1) and therefore the
limit lims→∞ ‖as (ω)‖∗ exists (finite or infinite). Thus if (i) is not true, then

there exists v(ω) ∈ V+ such that v(ω) ≥ αa(ω) for some 0 < α < 1 and ‖ϕ(t, θ−t ω)v(θ−t ω)‖∗ does not tend to ∞ for ω ∈ U , where U ∈ F and P(U ) > 0. Therefore for any ω ∈ U there exists a sequence {tn (ω)} such that tn (ω) → ∞ as n → ∞ and

sup_n ‖ϕ(tn , θ−tn ω)v(θ−tn ω)‖∗ < ∞, ω ∈ U .

Since (θ, ϕ) is sublinear, we have sup_n ‖atn (ω) (ω)‖∗ < ∞ for ω ∈ U . This implies that

sup_{s≥0} ‖as (ω)‖∗ < ∞, ω ∈ U ,

because for any ω ∈ U and s > 0 there exists tn (ω) such that ‖as (ω)‖∗ ≤ ‖atn (ω) (ω)‖∗ . Consider the set

Ũ := {ω : sup_{s≥0} ‖as (ω)‖∗ < ∞} .

The monotonicity of ‖as (ω)‖∗ implies that

Ũ = {ω : sup_{k∈N} ‖ak (ω)‖∗ < ∞} = ∪_{N ∈N} ∩_{k∈N} {ω : ‖ak (ω)‖∗ < N } .

Thus Ũ ∈ F. Let us prove that Ũ is θ-invariant. Indeed, using the cocycle


property for 0 ≤ t ≤ s we have

as (θt ω) = ϕ(s, θ−s+t ω)a(θ−s+t ω)

= ϕ(t, ω)ϕ(s − t, θ−s+t ω)a(θ−s+t ω) = ϕ(t, ω)as−t (ω) .

Since {as−t (ω) : s ≥ t} is a bounded set for every t ≥ 0 and ω ∈ Ũ , it


belongs to some interval [0, bt (ω)] for all ω ∈ Ũ and t ≥ 0. Therefore

as (θt ω) ∈ [0, ϕ(t, ω)bt (ω)], ω ∈ Ũ .

Thus sup_{s≥t} ‖as (θt ω)‖∗ < ∞ for ω ∈ Ũ . Since as (θt ω) ≤ at (θt ω) for 0 ≤ s ≤ t, we have

sup_{s≥0} ‖as (θt ω)‖∗ < ∞ for all t ≥ 0, ω ∈ Ũ .

Consequently θt Ũ ⊂ Ũ for t ≥ 0 and therefore U ∗ := ∩t≥0 θt Ũ = ∩n∈Z+ θn Ũ


is a θ-invariant set such that P(U ∗ ) = P(Ũ ) > 0. By the ergodicity of θ we
have P(U ∗ ) = 1. Thus sup_{s≥0} ‖as (ω)‖∗ < ∞ on the θ-invariant set U ∗ of full
measure.

Now we restrict the RDS (θ, ϕ) to U ∗ . Since (θ, ϕ) is conditionally compact,


the limit
u(ω) = lim as (ω), ω ∈ U ∗ ,
s→∞

exists, and this is a strongly positive equilibrium by Proposition 3.5.2. Since


nu(ω) is a super-equilibrium for every n ∈ N, ϕ(t, θ−t ω)[nu(θ−t ω)] converges
to a strongly positive equilibrium which coincides with u(ω) on a θ-invariant
set Ω ∗ ⊂ U ∗ of full measure (see Theorem 4.2.1). The set Ω ∗ can be chosen
independent of n. Therefore using Theorem 4.2.3 we obtain (4.26). 2

Corollary 4.3.1. Let V be a separable Banach space with a normal solid


cone V+ . Assume that (θ, ϕ) is a strongly sublinear conditionally compact
order-preserving RDS over an ergodic metric dynamical system θ. Suppose
that ϕ(t, ω)0  0 for all t > 0 and ω ∈ Ω. Then either
(i) for any x ∈ V+ we have ‖ϕ(t, θ−t ω)x‖ → ∞ almost surely as t → ∞
or
(ii) there exists a unique almost equilibrium u(ω)  0 defined on a θ-
invariant set Ω ∗ ∈ F of full measure such that (4.26) holds for any random
variable v(ω) possessing the property 0 ≤ v(ω) ≤ λu(ω) for all ω ∈ Ω ∗ and
for some nonrandom λ > 0.

Proof. Proposition 3.4.1 implies that aε (ω) := ϕ(ε, θ−ε ω)0  0 is a sub-
equilibrium for every ε > 0. It is also clear that ϕ(t, θ−t ω)x ≥ aε (ω) for all
x ∈ V+ , ω ∈ Ω and t ≥ ε. Thus we can apply Theorem 4.3.1. 2

Remark 4.3.2. We note that the uniqueness results stated in Theorem 4.2.1
and Propositions 1.7.1 and 4.2.1 remain true for almost equilibria because
the proof of Proposition 1.7.1 invokes only monotonicity arguments for scalar
measurable functions and properties of probability distributions.

4.4 Limit Set Trichotomy for Sublinear RDS

In this section we prove the limit set trichotomy theorem which describes the
only three possible asymptotic scenarios for sublinear systems. We do not
assume the existence of a strongly positive sub-equilibrium here.
In the deterministic discrete time case a limit set trichotomy was discov-
ered (and so named) by Krause/Ranft [73] and generalized by Krause/
Nussbaum [72].
Below we say that a multifunction {F (ω)} belongs to the part Cv gener-
ated by a random variable v(ω) (see Definition 3.2.1) if there exists a non-
random number λ > 1 such that F (ω) ⊂ [λ−1 v(ω), λv(ω)] for all ω ∈ Ω.
Theorem 4.4.1 (Limit Set Trichotomy). Let V be a separable Banach
space with a normal solid minihedral cone V+ . Assume that (θ, ϕ) is a strongly
sublinear conditionally compact order-preserving RDS in V+ . Let v : Ω →

int V+ be a random variable, and denote by Cv its part in V+ . Assume there


exists a ∈ Cv such that the orbit emanating from a does not leave Cv , i.e.

at (ω) := ϕ(t, θ−t ω)a(θ−t ω) ∈ Cv for all t≥0. (4.27)

Then Cv is a forward invariant set, i.e. (4.27) holds for any a ∈ Cv , and
precisely one of the following three cases applies:
(i) for all b ∈ Cv , the orbit γb emanating from b is unbounded;
(ii) for all b ∈ Cv , the orbit γb emanating from b is bounded, but the
closure of γb does not belong to Cv ;
(iii) there exists a unique almost equilibrium u ∈ Cv measurable with
respect to the universal σ-algebra Fu , and for all b ∈ Cv the orbit emanating
from b converges to u, i.e.

lim ϕ(t, θ−t ω)b(θ−t ω) = u(ω) for almost all ω∈Ω. (4.28)
t→+∞

The proof of this theorem relies on the following three lemmas.


Lemma 4.4.1. Let (θ, ϕ) be a sublinear order-preserving RDS in V+ and let
v : Ω → V+ .
(i) Assume that there exist a ∈ Cv and t0 ≥ 0 such that

ϕ(t0 , θ−t0 ω)a(θ−t0 ω) ∈ Cv . (4.29)

Then for any b ∈ Cv we have

ϕ(t0 , θ−t0 ω)b(θ−t0 ω) ∈ Cv . (4.30)

(ii) Assume that there exists a ∈ Cv for which (4.27) holds. Then for
any b ∈ Cv the orbit emanating from b does not leave Cv , i.e. Cv is forward
invariant.

Proof. (i) Since a, b ∈ Cv , there exists a nonrandom number λ ≥ 1 such that

λ−1 a(ω) ≤ b(ω) ≤ λa(ω) for all ω ∈ Ω . (4.31)

Therefore sublinearity and monotonicity give the inequality

λ−1 ϕ(t0 , ω)a(ω) ≤ ϕ(t0 , ω)b(ω) ≤ λϕ(t0 , ω)a(ω) for all ω ∈ Ω . (4.32)

Hence (4.29) implies (4.30).


Assertion (ii) follows immediately from (i). 2

Lemma 4.4.2. Let (θ, ϕ) be a sublinear order-preserving RDS in V+ and let


v : Ω → V+ . Assume that for some a ∈ Cv the orbit γa emanating from a
does not leave Cv and is bounded. Then for any b ∈ Cv the orbit γb emanating
from b is also bounded.

Proof. If γa ⊂ Cv ⊂ V+ is bounded, by Proposition 3.2.2 there exists a


random element w(ω) ∈ intV+ such that

0 ≤ ϕ(t, θ−t ω)a(θ−t ω) ≤ w(ω) for all t > 0, ω ∈ Ω .

Hence (4.32) implies that

0 ≤ ϕ(t, θ−t ω)b(θ−t ω) ≤ λw(ω) for all t > 0, ω ∈ Ω ,

where b is an arbitrary element with property (4.31). The normality of the


cone V+ implies that γb is bounded. 2

Lemma 4.4.3. Let (θ, ϕ) be a sublinear order-preserving RDS in V+ and let


v : Ω → V+ . Assume that for some a ∈ Cv the orbit γa is bounded and its
closure γa (·) belongs to Cv . Then this property is valid for any b ∈ Cv .

Proof. Let b(ω) ∈ γb (ω). Then there exists a sequence {tn (ω)} such that

ϕ(tn , θ−tn ω)b(θ−tn ω) → b(ω) .

By (4.31) and (4.32)

λ−1 ϕ(tn , θ−tn ω)a(θ−tn ω) ≤ ϕ(tn , θ−tn ω)b(θ−tn ω) ≤ λϕ(tn , θ−tn ω)a(θ−tn ω) .
(4.33)
Since γa (·) ⊂ Cv , there exists µ > 1 such that

µ−1 v(ω) ≤ ϕ(t, θ−t ω)a(θ−t ω) ≤ µv(ω) for all t ≥ 0, ω ∈ Ω .

Therefore (4.33) implies that

µ−1 λ−1 v(ω) ≤ ϕ(tn , θ−tn ω)b(θ−tn ω) ≤ µλv(ω) .

Consequently b ∈ [µ−1 λ−1 v(ω), µλv(ω)] ⊂ Cv . 2

We are now in a position to prove the limit set trichotomy theorem.


Proof of Theorem 4.4.1. By Lemma 4.4.1(ii), Cv is forward invariant.
We now consider the trichotomy. If (i) is not true, then by Lemma 4.4.2
all orbits are bounded. We hence have either (ii), or there exists an orbit
whose closure belongs to Cv . If the latter is the case, Lemma 4.4.3 implies
that the closure of each orbit belongs to Cv . Therefore the omega-limit set of
each element of Cv belongs to Cv . We will now prove that all omega-limit sets
coincide with a one-point set consisting of the unique equilibrium u ∈ Cv .
Let Γa be the omega-limit set of a ∈ Cv . Since Γa = ∩n∈Z+ γan (ω), where
γan (ω) is the tail of the orbit γa (ω), we have from (4.25) and Proposition 1.5.1
that Γa is a random compact set with respect to the universal σ-algebra Fu

(cf. Remark 1.6.1). Since Γa ⊂ Cv , there exists a number α > 1 such that

α−1 v(ω) ≤ w(ω) ≤ αv(ω) for all w(ω) ∈ Γa (ω) .

Hence by Theorem 3.2.1

w̄(ω) := sup Γa (ω) ≫ 0 ,

exists and it is an Fu -measurable variable. We also have

α−1 v(ω) ≤ w̄(ω) ≤ αv(ω) . (4.34)

The invariance of Γa , i.e.

Γa (θt ω) = ϕ(t, ω)Γa (ω) for all t ≥ 0 ,

(cf. Lemma 3.4.1) implies that w̄ is a sub-equilibrium. It is clear that the multifunction ω → Γa (θt ω) is a random compact set with respect to Fu for any fixed t ∈ R+ . Therefore w̄(θt ω) is an Fu -measurable variable for any fixed t ∈ R+ .
Similarly,
w̲(ω) := inf Γa (ω)
is an Fu -measurable super-equilibrium such that w̲(θt ω) is an Fu -measurable variable for any fixed t ∈ R+ and

α−1 v(ω) ≤ w̲(ω) ≤ αv(ω) . (4.35)

It follows from (4.34) and (4.35) that

p(w̄(ω), w̲(ω)) < 2 log α for all ω ∈ Ω ,

where p(·, ·) stands for the part metric.


Clearly w̲(ω) ≤ w̄(ω) for all ω ∈ Ω. Since w̲(ω) and w̄(ω) are super- and sub-equilibria, respectively, we have

ϕ(t, ω)w̲(ω) ≤ w̲(θt ω) ≤ w̄(θt ω) ≤ ϕ(t, ω)w̄(ω) (4.36)

for all ω ∈ Ω and t ≥ 0. This inequality and Lemma 3.1.1 imply

p(ϕ(t, ω)w̄(ω), ϕ(t, ω)w̲(ω)) ≥ p(w̄(θt ω), w̲(θt ω)) (4.37)

for all ω ∈ Ω and t ≥ 0. On the other hand, since ϕ(t, ω) is sublinear,

p(ϕ(t, ω)w̄(ω), ϕ(t, ω)w̲(ω)) ≤ p(w̄(ω), w̲(ω)) for all ω ∈ Ω and t ≥ 0 ,

implying

p(w̄(θt ω), w̲(θt ω)) ≤ p(w̄(ω), w̲(ω)) for all ω ∈ Ω and t ≥ 0 .



Proposition 3.2.4 implies that ft (ω) := p(w̄(θt ω), w̲(θt ω)) is an Fu -measurable
variable for any fixed t ∈ R+ . Let us prove that ft has the same distribution
for each t ∈ R+ . Let Uct = {ω : ft (ω) ≤ c}. Since Uct ∈ Fu , there exists
Ũct ∈ F such that Ũct ⊆ Uct and P(Ũct ) = P̄(Uct ), where P̄ is the extension of P
to Fu . It is clear that θ−t Ũc0 ⊂ Uct . Therefore

P̄(Uct ) ≥ P(θ−t Ũc0 ) = P(Ũc0 ) = P̄(Uc0 ) .

In a similar way the relation θt Ũct ⊂ Uc0 implies P̄(Uc0 ) ≥ P̄(Uct ). Thus all
variables ft have the same distribution.
Suppose now that w̲(ω) = w̄(ω) is not true on a set of positive probability, i.e. there exists a measurable set U ⊂ Ω with P(U ) > 0 such that

w̲(ω) < w̄(ω) for ω ∈ U . (4.38)

Property (4.38) and strong sublinearity imply

p(ϕ(t, ω)w̄(ω), ϕ(t, ω)w̲(ω)) < p(w̄(ω), w̲(ω)) for ω ∈ U and t > 0 ,

hence

p(w̄(θt ω), w̲(θt ω)) < p(w̄(ω), w̲(ω)) for ω ∈ U and t > 0 .

However, both sides of the last inequality have the same distribution, leading, as in the proof of Proposition 1.7.1, to a contradiction of the assumption P(U ) > 0. Thus w̄(ω) = w̲(ω) almost surely, and (4.36) implies that u(ω) ≡ w̄(ω) is an almost equilibrium. Moreover Γa = {u(ω)} almost surely. It finally
follows from (4.35) that u ∈ Cv . Proposition 4.2.1 and Remark 4.3.2 imply
that this equilibrium is unique almost surely in Cv . In particular Γb = {u(ω)}
almost surely for any b ∈ Cv which gives the asymptotic stability (4.28). This
completes the proof of Theorem 4.4.1. 2
Remark 4.4.1. (i) It is clear from the proof that if the cases (i) and (ii) of
Theorem 4.4.1 do not apply and if there exists an element a ∈ Cv such that
γa (ω) is a random compact set with respect to F, then case (iii) holds with
the equilibrium measurable with respect to F.
(ii) For a discrete RDS (T = Z) the equilibrium given by Theorem 4.4.1 in
case (iii) is measurable with respect to F because the closure of any trajectory
is F-measurable (see Sect. 1.5).
(iii) Theorem 4.4.1 is wrong without the assumption of strong sublinearity,
see Remark 4.2.1(i).
By slightly strengthening hypothesis (4.27) we can also prove another version
of the trichotomy theorem.
Corollary 4.4.1. Assume that the assumptions of Theorem 4.4.1 hold and
property (4.27) is valid in a strengthened form: there exists an a ∈ Cv such

that the orbit emanating from a does not leave Cv and for any T > 0 there
exists λT > 1 such that

(λT )−1 v(ω) ≤ ϕ(t, θ−t ω)a(θ−t ω) ≤ λT v(ω) for all t ∈ [0, T ] . (4.39)

Then property (4.39) holds for any a ∈ Cv , and precisely one of the following
three cases applies:
(i) for all b ∈ Cv , the orbit γb emanating from b is unbounded;
(ii) for all b ∈ Cv , the orbit γb emanating from b is bounded, but

lim sup_{t→∞} sup_{ω∈Ω} p(ϕ(t, θ−t ω)b(θ−t ω), v(ω)) = ∞ ; (4.40)

(iii) there exists a unique almost equilibrium u ∈ Cv measurable with


respect to the universal σ-algebra Fu , and for all b ∈ Cv the orbit emanating
from b converges to u, i.e. (4.28) holds.

Proof. Theorem 4.4.1 is applicable here. We need only prove (4.40) in case
(ii). If (4.40) is not true, then (4.39) implies that

(λ∞ )−1 v(ω) ≤ ϕ(t, θ−t ω)b(θ−t ω) ≤ λ∞ v(ω) for all t ≥ 0, ω ∈ Ω

with some constant λ∞ > 1. This implies that

γb (ω) ⊂ [(λ∞ )−1 v(ω), λ∞ v(ω)] ⊂ Cv

which is impossible in case (ii) of Theorem 4.4.1. 2

For one-dimensional sublinear RDS we have the following version of the


trichotomy theorem which requires the continuity of the mapping t →
ϕ(t, θ−t ω)x.
Theorem 4.4.2. Let (θ, ϕ) be a strongly sublinear order-preserving RDS
in R+ over an ergodic metric dynamical system θ. Assume that the function
t → ϕ(t, θ−t ω)x is continuous for all x ∈ R+ and ω ∈ Ω. Then precisely one
of the following three cases applies:
(i) lim supt→+∞ ϕ(t, θ−t ω)x = ∞ almost surely for all x > 0;
(ii) limt→+∞ ϕ(t, θ−t ω)x = 0 almost surely for all x ≥ 0;
(iii) there exists a unique F-measurable almost equilibrium u(ω) > 0 de-
fined on a θ-invariant set Ω ∗ of full measure such that

lim ϕ(t, θ−t ω)b(θ−t ω) = u(ω), ω ∈ Ω∗ , (4.41)


t→+∞

for any b(ω) with the property λ−1 u(ω) ≤ b(ω) ≤ λu(ω) for all ω ∈ Ω ∗ and
for some λ > 1.

Proof. If (i) is not true, then there exist x0 > 0 and a set U ∈ F such that
P(U ) > 0 and supt∈R+ ϕ(t, θ−t ω)x0 < ∞ for ω ∈ U . Let

Ũ := {ω : sup ϕ(t, θ−t ω)x0 < ∞} .


t∈R+

Since
 
Ũ = {ω : sup_{t∈Q∩R+} ϕ(t, θ−t ω)x0 < ∞} = ∪_{N ∈N} ∩_{t∈Q∩R+} {ω : ϕ(t, θ−t ω)x0 < N } ,

the set Ũ is measurable. Thus as in the proof of Theorem 4.3.1 we can obtain
that there exists a θ-invariant set Ω ∗ of full measure such that

lim sup ϕ(t, θ−t ω)x0 < ∞ for all ω ∈ Ω ∗ . (4.42)


t→∞

Therefore by Remark 1.6.1 and Proposition 1.6.4 the omega-limit set Γx0 (ω)
emanating from x0 is a nonempty invariant compact random set measurable
with respect to the σ-algebra F. Since sup B ∈ B for any compact set B ⊂ R+ ,
Lemma 3.4.1 and Remark 3.4.2(ii) imply that u(ω) := sup Γx0 (ω) ≥ 0 is an
F-measurable equilibrium on Ω ∗ . By Lemma 3.5.1 we have either u(ω) = 0
or u(ω) > 0 almost surely.
If u(ω) = 0 almost surely, then ϕ(t, θ−t ω)x → 0 almost surely for all
0 ≤ x ≤ x0 . The sublinearity implies that
 
ϕ(t, ω)x = ϕ(t, ω)[(x/x0 ) · x0 ] ≤ (x/x0 ) · ϕ(t, ω)x0 for all x ≥ x0 . (4.43)

Thus ϕ(t, θ−t ω)x → 0 almost surely for all x ∈ R+ .


If u(ω) > 0 almost surely, then from (4.42) and (4.43) we have

lim sup ϕ(t, θ−t ω)x < ∞ for all x ∈ R+ , ω ∈ Ω ∗ .


t→∞

Therefore by the same argument ux (ω) := sup Γx (ω) is an F-measurable


positive equilibrium for every x > 0. By Theorem 4.2.1 we have that ux (ω) =
u(ω) on a θ-invariant set of full measure. Thus Theorem 4.2.2 implies (4.41).
2

The following two simple examples of discrete systems show that all three
cases of the limit set trichotomy can actually occur. The corresponding ex-
amples of RDS with continuous time are discussed in Chaps.5 and 6.
We start with the deterministic case.
Example 4.4.1. Let us consider the scalar function fα (x) = αx + x/(1 + x) on R+ .
It is easy to see that for every α ∈ R+ the mapping x → fα (x) generates a
strongly sublinear dynamical system in R+ . The hypotheses of Theorem 4.4.1
hold for this system with v = 1. If α ≥ 1, then fαn (x) → ∞ for any x > 0, i.e.

any orbit γx emanating from x > 0 is unbounded. If α = 0, then fαn (x) → 0


for any x ≥ 0, i.e. any orbit γx is bounded, but the closure of γx contains
elements (namely 0) which do not belong to any part Cv ⊂ intR+ . Finally
for α ∈ (0, 1) there exists a unique globally asymptotically stable positive
equilibrium. To produce more complicated limit behaviour we can consider
the mapping f = (fα1 , . . . , fαd ) from Rd+ into itself with appropriate choices
of αi .
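The three regimes of this example are easy to observe numerically; the following Python sketch (for illustration only) iterates fα for the three ranges of α.

import numpy as np

def f(alpha, x):
    # the scalar map f_alpha(x) = alpha*x + x/(1+x) from Example 4.4.1
    return alpha * x + x / (1.0 + x)

def orbit_end(alpha, x0, n=200):
    x = x0
    for _ in range(n):
        x = f(alpha, x)
    return x

print(orbit_end(1.2, 0.1))                         # alpha >= 1: orbits are unbounded
print(orbit_end(0.0, 5.0))                         # alpha = 0: orbits converge to 0, i.e. leave every part of int R_+
print(orbit_end(0.5, 0.1), orbit_end(0.5, 50.0))   # alpha in (0,1): both orbits approach the fixed point x = 1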
Now using the properties of the functions fα we can easily construct a
random example.
Example 4.4.2. Let us consider the RDS on R+ constructed in the example
given in the Introduction with f0 (x) = α0 x + x/(1 + x) and f1 (x) = α1 x + x/(2 + 2x), where 0 ≤ α0 ≤ α1 . As in the previous example it is clear that these two mappings generate a strongly sublinear RDS in R+ . Since α0 < fi (1) ≤ α1 + 1 for i = 0, 1, the random part Cv generated by v(ω) ≡ 1 is forward invariant.
Therefore the trichotomy theorem applies. As in the previous example it is
easy to see that (i) if α1 ≥ α0 ≥ 1, then any orbit γx emanating from x > 0 is
unbounded; (ii) if α0 = α1 = 0, then any orbit γx is bounded, but the closure
of γx contains elements (namely 0) which do not belong to the part Cv ;
(iii) if α0 , α1 ∈ (0, 1), then there exists a unique globally asymptotically
stable positive equilibrium. As above, using these properties we can produce
more complicated limit behaviour.

4.5 Random Mappings

In this section we consider a sublinear order-preserving RDS generated by


random mappings in Rd+ .
Let θ = (Ω, F, P, {θn , n ∈ Z}) be a metric dynamical system with discrete
time T = Z. Assume that the function f : Ω × Rd+ → Rd+ is measurable and
has the following properties:
(i) f (ω, ·) is continuous for every ω ∈ Ω,
(ii) f (ω, ·) is order-preserving, i.e. f (ω, x) ≤ f (ω, y) for all 0 ≤ x ≤ y and all
ω ∈ Ω,
(iii) f (ω, ·) is sublinear, i.e. λf (ω, x) ≤ f (ω, λx) for 0 < λ < 1, all x ∈ Rd+
and ω ∈ Ω.
Under (i) to (iii) the random difference equation

xn+1 = f (θn ω, xn ) (4.44)

generates a sublinear order-preserving RDS in Rd+ .



We note that assumptions (i) to (iii) are fulfilled, for instance, for the function

f (ω, x) = Σ_{k=1}^{N} Ak (ω) x^{αk} + b(ω) , (4.45)

where Ak (ω) are d × d matrices with nonnegative entries, b(ω) ∈ Rd+ , αk = (αk^1 , . . . , αk^d ) are multi-indices with 0 < αk^j ≤ 1, and x^{αk} := (x1^{αk^1} , . . . , xd^{αk^d} ). It
can be easily checked that the sublinearity condition (iii) is valid in the form

λ^α f (ω, x) ≤ f (ω, λx), 0 < λ < 1, x ∈ Rd+ , ω ∈ Ω ,

where α = max_{j,k} αk^j ≤ 1. Consequently

p(f (ω, x), f (ω, y)) ≤ αp(x, y), x, y ∈ C ⊂ Rd+ ,

where p is the part metric and C is any part of Rd+ .


Hence if α < 1 then f is uniformly contractive with respect to p. This
makes it possible to use standard fixed point methods to prove the existence
of equilibria for this case.
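The contraction in the part metric can be observed directly. The sketch below (a Python illustration with an arbitrary toy choice of A(ω) and b(ω), not taken from the text) iterates a single random map of the form (4.45) with N = 1 and exponent α = 1/2 on two initial points driven by the same noise, and compares the part distance with the bound α^n p(x0 , y0 ).

import numpy as np

rng = np.random.default_rng(1)
d, alpha = 2, 0.5                          # alpha = max_{j,k} alpha_k^j < 1

def random_map():
    # one draw of a map of the form (4.45): x -> A(omega) x^alpha + b(omega)
    A = rng.uniform(0.0, 1.0, size=(d, d))     # nonnegative entries
    b = rng.uniform(0.5, 1.5, size=d)          # strictly positive shift
    return lambda x: A @ (x ** alpha) + b

part = lambda x, y: np.max(np.abs(np.log(x) - np.log(y)))

x, y = np.array([10.0, 0.1]), np.array([0.2, 7.0])
p0 = part(x, y)
for n in range(1, 11):
    f = random_map()                 # the same map is applied to both points (one cocycle, two initial data)
    x, y = f(x), f(y)
    print(n, part(x, y), alpha ** n * p0)      # observed distance vs. the bound alpha^n * p(x0, y0)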
Proposition 4.5.1. Assume that f (ω, x) has the form (4.45) with the pa-
rameters αkj possessing the property α := maxj,k αkj < 1. Let v(ω) > 0 for all
ω ∈ Ω. If the part Cv generated by v is invariant for the RDS (θ, ϕ) defined
by (4.44), then there exists a unique equilibrium u(ω) in Cv for (θ, ϕ) and

sup_{ω∈Ω} p(ϕ(n, θ−n ω)w(θ−n ω), u(ω)) ≤ α^n sup_{ω∈Ω} p(w(ω), u(ω)) (4.46)

for all w ∈ Cv and n ∈ Z+ .

Proof. We define the mapping T : Cv → Cv by the formula

(T w)(ω) := ϕ(1, θ−1 ω)w(θ−1 ω) = f (θ−1 ω, w(θ−1 ω)), ω∈Ω.

It is easy to see that

ϱ(T w1 , T w2 ) ≤ α ϱ(w1 , w2 ) for all w1 , w2 ∈ Cv , (4.47)

where ϱ(w1 , w2 ) = sup_{ω∈Ω} p(w1 (ω), w2 (ω)). By Proposition 3.2.3 Cv is a complete metric space with respect to ϱ. Therefore we can apply the contraction principle and conclude that the mapping T has a unique stationary point u(ω)
in Cv . Relation (4.46) easily follows from (4.47). 2

The following assertion gives a sufficient condition for the existence of an


invariant part Cv .

Corollary 4.5.1. Assume that the entries of the matrices Ak (ω) are bounded
from above by a nonrandom constant and α = maxj,k αkj < 1. Let b(ω) =
b0 (ω) · v, where v ∈ intRd+ and b0 (ω) > 0 is a scalar random variable such
that
0 < β0 ≤ β1 b0 (ω) ≤ b0 (θ1 ω) ≤ β2 b0 (ω), ω ∈ Ω ,
for some nonrandom βi . Then the part Cb generated by b is invariant for the
RDS (θ, ϕ) generated by (4.44) with f given by (4.45) and the conclusions of
Proposition 4.5.1 hold.

Proof. A simple calculation shows that b(ω) ≤ f (ω, b(ω)) ≤ Cb(ω) for some
constant C > 0. This implies that the part Cb is invariant and therefore we
can apply Proposition 4.5.1. 2

Remark 4.5.1. (i) In the situation of Corollary 4.5.1 the equilibrium u(ω) is
globally stable not only in Cb . It is easy to see that

p(ϕ(n, ω)w(ω), u(θn ω)) ≤ α^n p(w(ω), u(ω)) for all ω ∈ Ω (4.48)

provided that p(w(ω), u(ω)) is finite for each ω ∈ Ω. Therefore for each ω ∈ Ω
we have that

p(ϕ(n, θ−n ω)w(θ−n ω), u(ω)) → 0 as n→∞

with exponential rate provided that p(w(ω), u(ω)) is a tempered random


variable. We also note that relation (4.48) means that un (ω) := u(θn ω) is a
forward exponentially attracting stationary process.
(ii) The deterministic example f (x) = (√x1 , . . . , √xd ) shows that every
part of the cone Rd+ can contain an equilibrium which is exponentially stable
in this part.
(iii) Assertions similar to Proposition 4.5.1 and Corollary 4.5.1 can be
proved for more general mappings. Assume that f : Ω × Rd+ → Rd+ is a
measurable function such that f (ω, ·) ∈ C 1 (intRd+ ) for every ω ∈ Ω. Then
the property


d  ∂f (ω, x) 
 i 
xj   < α(ω)fi (ω, x), i = 1, . . . , d, x ∈ intRd+ , ω ∈ Ω ,
j=1
∂x j

where α(ω) is a positive random variable, implies that

p(f (ω, x), f (ω, y)) < α(ω)p(x, y), x, y ∈ intRd+ , ω ∈ Ω ,

where p is the part metric (see Krause/Nussbaum [72, Theorem 4.1]). Thus
under the condition α(ω) ≤ α0 < 1 we can obtain the same results as for the
mapping (4.45).

The following assertion deals with another class of mappings and is an appli-
cation of the limit set trichotomy theorem.
Proposition 4.5.2. Assume that the measurable mapping f : Ω × Rd+ → Rd+
possesses properties (i) and (ii) and also
(iii∗ ) for each ω ∈ Ω the function f (ω, ·) is strongly sublinear, i.e.

λf (ω, x)  f (ω, λx) for all 0<λ<1 and x ∈ intRd+ ;

(iv) there exist points x̲ and x̄ in intRd+ such that

f (ω, x̲) ≥ x̲ and f (ω, x̄) ≤ x̄ for all ω ∈ Ω . (4.49)

Then there exists a unique strongly positive equilibrium u(ω) for the RDS
generated by (4.44). This equilibrium is uniformly separated from 0 and from
∞, i.e. there exist positive constants α and β such that αe ≤ u(ω) ≤ βe for
all ω ∈ Ω, where e = (1, . . . , 1). Moreover the equilibrium u(ω) is globally
asymptotically stable in intRd+ , i.e. for every x ∈ intRd+ we have

lim ϕ(n, θ−n ω)x = u(ω) for almost all ω ∈ Ω. (4.50)


n→+∞

Proof. Relations (4.49) imply that x̲m := m−1 x̲ is a sub-equilibrium and x̄m := m x̄ is a super-equilibrium for each m ∈ N. Therefore every deterministic interval [x̲m , x̄m ] with m large enough is an invariant set (see Remark 3.4.1).
Hence the part Ce is invariant and option (iii) of Theorem 4.4.1 is the only
possible one. 2

Proposition 4.5.2 allows us to obtain the following assertion which slightly


strengthens a result by Bhattacharya/Lee [16, Sect.4] concerning asymp-
totic behaviour of a class of Markov chains generated by families of random
mappings as described in Example 1.2.1.
Let [a, b] be an interval in intRd+ . In a similar way to Bhattacharya/Lee
[16] we introduce the class Aa,b of sets in [a, b] of the form Ac = {x : h(x) ≤
c}, where h varies over the class of all continuous nondecreasing functions
from [a, b] into itself and we define the semidistances

da,b (ν1 , ν2 ) = sup{|ν1 (A) − ν2 (A)| : A ∈ Aa,b }

on the space of all probability measures on (intRd+ , B(intRd+ )). The function
da,b (·, ·) is a distance if we restrict ourselves to measures with support in
[a, b].
Theorem 4.5.1. Assume that the hypotheses of Proposition 4.5.2 hold and
that the random mappings f (θn ω, ·), n ∈ Z, are independent and identically
distributed (i.i.d.). Let u(ω) be the equilibrium given by Proposition 4.5.2 for
the RDS (θ, ϕ) generated by (4.44). Then

(i) the family of the random sequences

{Φxn := ϕ(n, ω)x : n ∈ Z+ , x ∈ Rd+ }

is a homogeneous Markov chain with state space Rd+ and transition prob-
ability

P (x, B) := P{Φn+1 ∈ B | Φn = x} = P{ω : f (ω, x) ∈ B}, B ∈ B(Rd+ ) ;

(ii) the measure ν on (Rd+ , B(Rd+ )) defined by the formula

ν(A) := P{ω : u(ω) ∈ A}, A ∈ B(Rd+ ) ,

has compact support in intRd+ and is an invariant probability measure for


the Markov chain {Φxn }, i.e.

ν(A) = (P ∗ ν)(A) := ∫_{Rd+} P (x, A)ν(dx) for all A ∈ B(Rd+ ) ;

(iii) for every compact set K ⊂ intRd+ we have that

lim_{n→∞} sup_{x∈K} |p(n) (x, [a, b]) − ν([a, b])| = 0 (4.51)

for any interval [a, b] ⊂ intRd+ , where p(n) (x, A) = P{ω : ϕ(n, ω)x ∈ A}
for A ∈ B(Rd+ );
(iv) (P ∗n λ)(A) → ν(A) as n → ∞ for all A ∈ B(Rd+ ) and for any probability
measure λ on (Rd+ , B(Rd+ )) with compact support in intRd+ .
If the equilibrium u(ω) possesses the property

P{ω : u(ω) ≤ a} > 0 and P{ω : u(ω) ≥ a} > 0 (4.52)

for some a ∈ intRd+ , then for any m ∈ N large enough we have


 

sup { dx̲m ,x̄m (p(n) (x, ·), ν) : x ∈ [x̲m , x̄m ] } → 0 (4.53)

exponentially fast as n → ∞. Here x̲m = m−1 x̲ and x̄m = m x̄, where x̲ and x̄ satisfy (4.49).

Proof. Items (i) and (ii) follow from the general assertion proved by Arnold
[3, Theorem 2.1.4] (see also the discussion in Sect.1.10). The support of ν is
a compact set in intRd+ because u(ω) ∈ [αe, βe] for all ω ∈ Ω.
(iii) Any compact K belongs to the interval [x̲m , x̄m ] with m large enough. Therefore the relation

ϕ(n, ω)x̲m ≤ ϕ(n, ω)x ≤ ϕ(n, ω)x̄m , x ∈ K , (4.54)



implies
P{ω : ϕ(n, ω)x̄m ≤ b} ≤ p(n) (x, [0, b]) ≤ P{ω : ϕ(n, ω)x̲m ≤ b}
for every x ∈ K ⊂ [x̲m , x̄m ]. From (4.50) we have
P{ω : ϕ(n, ω)z ≤ b} = P{ω : ϕ(n, θ−n ω)z ≤ b} → P{ω : u(ω) ≤ b}
as n → ∞ for any z ∈ intRd+ . Hence

p(n) (x, [0, b]) → ν([0, b]), n → ∞, b ∈ intRd+ ,


uniformly with respect to x ∈ K. This implies (4.51).
(iv) Since (P ∗n λ)(A) = ∫_{Rd+} p(n) (x, A)λ(dx) and supp λ ⊂ [x̲m , x̄m ] for some m, it follows from (4.51) that (P ∗n λ)([a, b]) → ν([a, b]) for any interval
[a, b] ⊂ Rd+ . This implies the weak convergence of (P ∗n λ) to ν as n → ∞.
To prove (4.53) under condition (4.52) we use a result from Bhat-
tacharya/Lee [16]. Relation (4.51) implies that
P{ω : ϕ(n, ω)x̄m ≤ a} → P{ω : u(ω) ≤ a}, n → ∞ ,
and
P{ω : ϕ(n, ω)x̲m ≥ a} → P{ω : u(ω) ≥ a}, n → ∞ ,
for any fixed m. Therefore it follows from (4.52) that there exists n0 = n0 (m) with m large enough such that
P{ω : ϕ(n0 , ω)x̄m ≤ a} > 0 and P{ω : ϕ(n0 , ω)x̲m ≥ a} > 0 .
Hence using (4.54) we have
P{ω : ϕ(n0 , ω)x ≤ a, ∀x ∈ [x̲m , x̄m ]} ≥ P{ω : ϕ(n0 , ω)x̄m ≤ a} > 0 (4.55)
and
P{ω : ϕ(n0 , ω)x ≥ a, ∀x ∈ [x̲m , x̄m ]} ≥ P{ω : ϕ(n0 , ω)x̲m ≥ a} > 0 . (4.56)
Since the interval [x̲m , x̄m ] is forward invariant with respect to ϕ(n, ω), we
can apply Theorem 2.1 from Bhattacharya/Lee [16], which gives the con-
vergence in (4.53) under conditions (4.55) and (4.56). 2
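Statement (iii) can also be checked by Monte-Carlo simulation. The following Python sketch (an illustration using the same toy map as in the previous sketch; the chain length and sample size are arbitrary) estimates p(n) (x, [0, b]) for two different starting points and shows that both estimates approach the same value ν([0, b]).

import numpy as np

rng = np.random.default_rng(2)
d, alpha, n_steps, n_samples = 2, 0.5, 30, 5000

def step(x):
    # one i.i.d. draw of the toy map x -> A x^alpha + b (an assumption, not the text's model)
    A = rng.uniform(0.0, 1.0, size=(d, d))
    b = rng.uniform(0.5, 1.5, size=d)
    return A @ (x ** alpha) + b

def chain_endpoint(x0):
    x = np.array(x0, float)
    for _ in range(n_steps):
        x = step(x)
    return x

b = np.array([2.5, 2.5])
for x0 in ([0.1, 0.1], [20.0, 20.0]):
    hits = sum(bool(np.all(chain_endpoint(x0) <= b)) for _ in range(n_samples))
    print(x0, hits / n_samples)      # both frequencies estimate nu([0, b]) = P{u(omega) <= b}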
Remark 4.5.2. Instead of assumption (iv) in Proposition 4.5.2 we can assume
that
lim sup_{x→0} fi (ω, x)/xi > 1 and lim inf_{min_j xj →∞} fi (ω, x)/xi < 1
for each i = 1, . . . , d uniformly with respect to ω ∈ Ω, where fi (ω, x) are
the components of the mapping f = (f1 , . . . , fd ). This observation makes it
possible to relax the hypotheses concerning the function f (ω, x) in Bhat-
tacharya/Lee [16, Sect.4]. It was assumed there that for each ω the map-
ping f (ω, x) is a strictly concave continuously differentiable function with
some properties of the derivatives near 0 and infinity.

4.6 Positive Affine RDS

In this section we consider affine and linear order-preserving RDS which leave
a cone to be invariant. The results given below show that the order-preserving
property provides us with an alternative approach to the study of affine RDS
and makes it possible to obtain additional information in contrast with the
more general affine RDS studied in Sect.1.9.
Let V be a real Banach space with closed convex cone V+ . Recall (see
Definition 1.2.3) that RDS (θ, ϕ) in V is affine if the cocycle ϕ is of the form

ϕ(t, ω)x = Φ(t, ω)x + ψ(t, ω), (4.57)

where Φ(t, ω) is a cocycle over θ consisting of bounded linear operators of V .


The function ψ : T+ × Ω → V satisfies the relation

ψ(t + s, ω) = Φ(t, θs ω)ψ(s, ω) + ψ(t, θs ω), t, s ≥ 0. (4.58)

The affine RDS (θ, ϕ) is said to be positive (with respect to the cone V+ ) if
ϕ(t, ω)V+ ⊂ V+ for all t > 0 and ω ∈ Ω. If ψ(t, ω) ≡ 0 then the affine RDS is
said to be linear. The simplest properties of positive affine RDS are collected
in the following assertion.
Proposition 4.6.1. The affine RDS (θ, ϕ) with the cocycle ϕ of the form
(4.57) is positive with respect to the cone V+ if and only if Φ(t, ω) is positive,
i.e. maps V+ to itself, and ψ : T+ × Ω → V+ . Any positive affine RDS is a
sublinear order-preserving system. It is strongly sublinear if ψ(t, ω)  0 for
t > 0. Furthermore

ψ(t, θ−t ω) ≥ ψ(τ, θ−τ ω) ≥ 0 for all t≥τ ≥0 (4.59)

and at (ω) := ψ(t, θ−t ω) is a sub-equilibrium for any t ≥ 0.

Proof. If Φ(t, ω) is positive and ψ(t, ω) ≥ 0, then the RDS (θ, ϕ) is obviously
positive and order-preserving. On the other hand, if (θ, ϕ) is a positive RDS,
then ψ(t, ω) = ϕ(t, ω)0 ≥ 0. Since

Φ(t, ω)x + (1/λ) · ψ(t, ω) = (1/λ) · ϕ(t, ω)[λx] ≥ 0
for any x ≥ 0 and λ > 0, letting λ → +∞ we obtain Φ(t, ω)x ≥ 0 for
x ≥ 0. Since w = 0 is a sub-equilibrium, Proposition 3.4.1 implies that
at (ω) = ψ(t, θ−t ω) = ϕ(t, θ−t ω)0 is also a sub-equilibrium for any t ≥ 0.
From (4.57) we have

ϕ(t, ω)[λx] − λϕ(t, ω)x = (1 − λ)ψ(t, ω), 0 < λ < 1, x ∈ V+ .

This relation implies the sublinear properties of (θ, ϕ). 2



Example 4.6.1 (1D Positive Affine RDE). Consider one-dimensional RDE

ẋ = α(θt ω)x + β(θt ω) (4.60)

over a metric dynamical system θ, where α(ω) and β(ω) are random vari-
ables such that t → α(θt ω) and t → β(θt ω) are locally integrable. Equation
an affine RDS
in R. The cocycle ϕ has the form (4.57) with
(4.60) generates
t
Φ(t, ω)x = x exp 0 α(θτ ω)dτ and
 t  t 
ψ(t, ω) = β(θs ω) exp α(θτ ω)dτ ds.
0 s

If β(ω) ≥ 0 for all ω ∈ Ω, then (θ, ϕ) is a positive affine RDS. It is strongly


sublinear provided that β(ω) > 0.
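The explicit formulas for Φ and ψ can be checked against the cocycle relation (4.58) numerically. The Python sketch below (an illustration only) uses the hypothetical drivers α(ω) = −1 + 0.5 sin ω and β(ω) = 1 + 0.3 cos ω on Ω = R with the shift θt ω = ω + t, and simple Riemann sums for the integrals.

import numpy as np

alpha = lambda w: -1.0 + 0.5 * np.sin(w)     # a hypothetical driver with E[alpha] = -1 < 0
beta  = lambda w:  1.0 + 0.3 * np.cos(w)     # beta >= 0, so the RDS is positive

def Phi(t, w, n=4000):
    s = np.linspace(0.0, t, n)
    return np.exp(np.sum(alpha(w + s)) * (s[1] - s[0]))

def psi(t, w, n=4000):
    s = np.linspace(0.0, t, n)
    ds = s[1] - s[0]
    inner = np.cumsum(alpha(w + s)[::-1])[::-1] * ds    # int_s^t alpha(theta_tau omega) dtau
    return np.sum(beta(w + s) * np.exp(inner)) * ds

# cocycle relation (4.58): psi(t+s, omega) = Phi(t, theta_s omega) psi(s, omega) + psi(t, theta_s omega)
w, t, s = 0.3, 1.2, 0.7
print(psi(t + s, w), Phi(t, w + s) * psi(s, w) + psi(t, w + s))   # equal up to discretization error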

Theorem 4.6.1. Let (θ, ϕ) be a positive affine RDS with the cocycle ϕ rep-
resented in the form (4.57). Assume that there exists t0 = t0 (ω) > 0 such
that {ψ(t, θ−t ω) : t ≥ t0 } is a relatively compact set for each ω ∈ Ω. Then

u(ω) := lim ψ(t, θ−t ω) = sup ψ(t, θ−t ω) (4.61)


t→∞ t>0

exists and is an equilibrium for (θ, ϕ). Furthermore,


(i) if there are no non-zero equilibria for (θ, Φ), i.e. if the equation
w(θt ω) = Φ(t, ω)w(ω) has no non-trivial solution, then the equilibrium is
unique;
(ii) if ψ(t, ω)  0 for t > 0, then u(ω)  0, and u is the unique (up to
indistinguishability) equilibrium in V+ . It is attracting in the sense that there
exists a θ-invariant set Ω ∗ ∈ F of full measure such that

lim ϕ(t, θ−t ω)w(θ−t ω) = u(ω) for all ω ∈ Ω∗ (4.62)


t→+∞

for any random variable w possessing the property 0 ≤ w(ω) ≤ λu(ω) for all
ω ∈ Ω ∗ with some nonrandom constant λ > 0.

Proof. By the compactness condition and the monotonicity property (4.59)


the limit (4.61) exists. Equation (4.58) implies

ψ(t + s, θ−t−s ω) = Φ(t, θ−t ω)ψ(s, θ−s θ−t ω) + ψ(t, θ−t ω), t, s ≥ 0 .

Letting s → ∞ and using (4.61)

u(ω) = Φ(t, θ−t ω)u(θ−t ω) + ψ(t, θ−t ω) ,

hence u(ω) is an equilibrium.


(i) For the uniqueness just note that the difference of two equilibria sat-
isfies w(θt ω) = Φ(t, ω)w(ω).

(ii) Since ψ(t, ω)  0, equation (4.61) implies u(ω)  0. Assume now


that there is a second equilibrium v(ω) ≥ 0. Then a simple calculation shows
that
wβ (ω) = βv(ω) + (1 − β)u(ω)
is also an equilibrium for any 0 ≤ β ≤ 1. It is clear that wβ (ω) is strongly
positive for any 0 < β < 1. Therefore Uniqueness Theorem 4.2.1 implies that

(1/2)(v(ω) + u(ω)) ≡ w1/2 (ω) = u(ω), ω ∈ Ω ∗ ,
where Ω ∗ ∈ F is a θ-invariant set of full measure. This is only possible if
v(ω) = u(ω), ω ∈ Ω ∗ .
Since 0 is a sub-equilibrium and ψ(t, θ−t ω) = ϕ(t, θ−t ω)0, (4.62) follows
from (4.61) and Theorem 4.2.2. We use the relation

ϕ(t, θ−t ω)[λu(θ−t ω)] = λu(ω) + (1 − λ)ψ(t, θ−t ω)

to prove that the orbit emanating from λu is relatively compact for any λ. 2

Remark 4.6.1. We note that the assumption on the compactness of

{ψ(t, θ−t ω) : t ≥ t0 (ω)}

can be replaced by the condition: there exists a random element v(ω) ∈ V+


such that ψ(t, θ−t ω) ≤ v(ω) for all ω ∈ Ω and t > 0 provided that the cone
V+ is regular (see Definition 3.1.6 and Remark 4.2.2).

Example 4.6.2 (1D Positive Affine RDE). Consider the RDS (θ, ϕ) described
in Example 4.6.1. We additionally assume that θ is ergodic, α ∈ L1 (Ω, F, P),
and β(ω) ≥ 0 is a tempered random variable. If Eα < 0, then

ψ(t, θ−t ω) = ∫_{−t}^0 β(θs ω) exp( ∫_s^0 α(θτ ω) dτ ) ds ≤ u(ω)

for all t ≥ 0, where

u(ω) := ∫_{−∞}^0 β(θs ω) exp( ∫_s^0 α(θτ ω) dτ ) ds . (4.63)

The finiteness of u(ω) follows from the Birkhoff–Khinchin ergodic theorem


(cf. Remark 1.4.1). Thus Theorem 4.6.1 is applicable here. It is clear that
u(ω) given by (4.63) is an equilibrium for (θ, ϕ) and (4.61) holds.
If β(ω) ≥ δ > 0 and Eα > 0, then the integral in (4.63) diverges on a set
of positive probability and we cannot apply Theorem 4.6.1. Nevertheless in
this case the RDS (θ, ϕ) possesses an equilibrium (see Example 2.1.2).
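For a concrete picture, the following Python sketch (illustration only, with the same hypothetical drivers as in the previous sketch) approximates u(ω) by truncating (4.63) and checks that ψ(T, θ−T ω) increases to this value and that u(θt ω) solves the random equation (4.60).

import numpy as np

alpha = lambda w: -1.0 + 0.5 * np.sin(w)     # E[alpha] = -1 < 0
beta  = lambda w:  1.0 + 0.3 * np.cos(w)     # beta >= 0 and bounded (hence tempered)

def u_pullback(w, T, n=40000):
    # truncation of (4.63): int_{-T}^0 beta(theta_s w) exp( int_s^0 alpha(theta_tau w) dtau ) ds,
    # which equals psi(T, theta_{-T} w)
    s = np.linspace(-T, 0.0, n)
    ds = s[1] - s[0]
    inner = np.cumsum(alpha(w + s)[::-1])[::-1] * ds
    return np.sum(beta(w + s) * np.exp(inner)) * ds

w = 0.3
print([round(u_pullback(w, T), 6) for T in (5.0, 10.0, 20.0, 40.0)])   # monotone convergence as T grows
# equilibrium check:  d/dt u(theta_t w) = alpha(theta_t w) u(theta_t w) + beta(theta_t w)
h, T = 1e-3, 40.0
lhs = (u_pullback(w + h, T) - u_pullback(w - h, T)) / (2 * h)
print(lhs, alpha(w) * u_pullback(w, T) + beta(w))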

As an example of an application of the comparison principle (see Sect.3.7)


and Theorem 4.6.1 we have the following assertion.

Proposition 4.6.2. Assume that a system (θ, ϕ) on the solid normal cone
V+ is dominated from above by a positive affine RDS (θ, ϕaff ). Suppose that
the RDS (θ, ϕaff ) satisfies the hypotheses of Theorem 4.6.1 with ψ(t, ω)  0
for t > 0 and ω ∈ Ω. Let u(ω) be the strongly positive equilibrium for (θ, ϕaff ).
Then for any µ > 1 the random variable vµ (ω) = µu(ω) is an absorbing super-
equilibrium for (θ, ϕ) in the universe D consisting of all random closed sets
{B(ω)} such that B(ω) ⊂ [0, αu(ω)] for some α > 0. Moreover if V is a
finite-dimensional space, the RDS (θ, ϕ) possesses a random attractor in the
universe D and the conclusions of Theorem 3.6.2 hold.

Proof. Since ϕ(t, ω)x ≤ ϕaff (t, ω)x for all x ∈ V+ , we have

ϕ(t, θ−t ω)B(θ−t ω) ⊂ [0, ϕaff (t, θ−t ω)[αu(θ−t ω)]]

for every B(ω) ⊂ [0, αu(ω)]. Theorem 4.6.1 implies

ϕaff (t, θ−t ω)[αu(θ−t ω)] ≤ µu(ω), t ≥ t0 (ω), µ>1.

Thus [0, µu(ω)] is an absorbing set for (θ, ϕ) and therefore (θ, ϕ) is dissipa-
tive. If V is finite-dimensional, then Corollary 1.8.1 implies that a random
attractor exists in the universe D and we can apply Theorem 3.6.2. 2

The following assertion characterizes the linear part of positive affine RDS
with a strongly positive equilibrium.
Corollary 4.6.1. Let (θ, ϕ) be a positive affine RDS in V+ . Assume that
ψ(t, ω)  0 for t > 0 and that there exists a strongly positive equilibrium
u(ω). Then for any random element w(ω) such that 0 ≤ w(ω) ≤ αu(ω) for
all ω ∈ Ω with some nonrandom constant α > 0 we have

lim Φ(t, θ−t ω)w(θ−t ω) = 0, (4.64)


t→+∞

where Φ(t, ω) is the linear part of the affine cocycle (θ, ϕ).

Proof. It is clear from (4.57) that (θ, Φ) is dominated from above by (θ, ϕ).
Therefore Proposition 4.6.2 implies that µu(ω) is an absorbing super-equilib-
rium for (θ, Φ) in the universe D consisting of all random closed sets {B(ω)}
such that B(ω) ⊂ [0, αu(ω)] for some α > 0. Hence (4.64) follows from
Proposition 1.9.1. 2
5. Cooperative Random Differential Equations

In this chapter we consider cooperative random differential equations. For


every fixed ω these equations can be solved as deterministic nonautonomous
ODEs and they generate order-preserving random systems under the stan-
dard (deterministic) cooperativity condition which appears in the nonau-
tonomous case (see, e.g., Krasnoselskii [69] or Smith [102] and the refer-
ences therein). We also note that cooperative ODEs with periodic and almost-
periodic right-hand sides are naturally included in the class of cooperative
random ODEs.
Deterministic cooperative differential equations are one of the main ap-
plications of monotone methods and comparison arguments and have been
studied by numerous authors, see Krasnoselskii [69], Hirsch [52, 53, 54],
Smith [102] and the references therein. The term cooperative system came
from the population biology literature.
Here we restrict ourselves to random equations with phase space Rd+ ,
where
Rd+ = {x = (x1 , . . . , xd ) ∈ Rd : xi ≥ 0, i = 1, . . . , d}
is the standard cone in Rd , for the following reasons: (a) this class of equations
appears naturally in many applications (see examples below) and (b) most of
the results given here can be easily extended to other choices of state space.

5.1 Basic Assumptions and the Existence Theorem

Let θ = (Ω, F, P, {θt , t ∈ R}) be a metric dynamical system. We consider in


Rd+ the pathwise ordinary differential equation

ẋ(t) = f (θt ω, x(t)) . (5.1)

We assume that f = (f1 , . . . , fd ) : Ω × Rd+ → Rd is a measurable function


such that f (ω, ·) possesses the following properties for all ω ∈ Ω:
(R1) f (ω, ·) is continuously differentiable and fi (ω, ·) and ∂fi (ω, ·)/∂xj ,
i, j = 1, . . . , d, are bounded on compact sets K ⊂ Rd+ by CK (ω) such
that t → CK (θt ω) is locally integrable;


(R2) there exist random variables C1 and C2 such that t → Cj (θt ω) is locally
integrable and

⟨x, f (ω, x)⟩ ≤ C1 (ω)|x|² + C2 (ω) ,

where ⟨·, ·⟩ is the standard inner product in Rd and |x|² = ⟨x, x⟩;
(R3) f (ω, ·) is weakly positive, i.e.

fi (ω, x) ≥ 0, for all x ∈ Γi , ω ∈ Ω, i = 1, . . . , d,

where
Γi = {x = (x1 , . . . , xd ) ∈ Rd+ : xi = 0} .
We note that condition (R3) is satisfied if and only if

⟨f (ω, x), y⟩ ≥ 0 whenever x ∈ ∂Rd+ , y ≥ 0, ⟨x, y⟩ = 0 .    (5.2)

Sometimes instead of (R3) we will assume that


(R3∗ ) f (ω, ·) is strongly positive, i.e.

fi (ω, x) > 0, for all x ∈ Γi , x ≠ 0, ω ∈ Ω, i = 1, . . . , d .

Proposition 5.1.1. Assume that conditions (R1), (R2) and (R3) hold.
Then for any initial data x0 ∈ Rd+ at the moment t = 0 problem (5.1) has
a unique global solution x(t, ω) ≡ x(t, ω; x0 ) (see Definition 2.1.1) such that
x(t, ω) ∈ Rd+ for all t ≥ 0 and ω ∈ Ω. This solution is continuously dif-
ferentiable with respect to the initial data x0 and relations (2.6) and (2.7)
concerning the evolution of the Jacobian and its determinant hold.

Proof. We first extend the function f (ω, x) from Rd+ to Rd such that the
extended function f˜(ω, x) belongs to C 1 (Rd ) for all ω ∈ Ω and possesses
properties (2.1) and (2.2), i.e. for any compact set K ⊂ Rd there exists a
random variable CK (ω) ≥ 0 such that

∫_{a}^{a+1} CK (θt ω) dt < ∞ for all a ∈ R, ω ∈ Ω ,    (5.3)

and
|f˜(ω, x)| ≤ CK (ω), |f˜(ω, x) − f˜(ω, y)| ≤ CK (ω) · |x − y|
for any x, y ∈ K and ω ∈ Ω. It is clear that this extension exists. Now we
can apply Proposition 2.1.1 to prove that the problem

ẋ(t) = f˜(θt ω, x(t)), x(0) = x0 ,



has a unique local solution x̃(t, ω; x0 ) which is continuously differentiable with


respect to the initial data x0 and possesses properties (2.6) and (2.7). The
weak positivity condition (R3) in the form (5.2) implies that

⟨f̃(ω, x), νx ⟩ = ⟨f (ω, x), νx ⟩ ≤ 0, x ∈ ∂Rd+ , ω ∈ Ω ,

where νx is an outer normal to ∂Rd+ at the point x ∈ ∂Rd+ (see Definition


2.2.1). Hence it follows from Theorem 2.2.1 that for any x0 ∈ Rd+ the solution
x̃(t, ω; x0 ) does not leave Rd+ and therefore it gives a unique local solution
x(t, ω; x0 ) to problem (5.1). Property (R2) and Corollary 2.2.2 imply that
the solution x(t, ω; x0 ) can be extended to the whole time semi-axis R+ . 2

5.2 Generation of RDS

The following assertion shows that equation (5.1) generates an RDS in Rd+ .
Proposition 5.2.1. Assume that conditions (R1)–(R3) hold. Then equation
(5.1) generates a C 1 RDS (θ, ϕ) in Rd+ with the cocycle ϕ(t, ω) defined by the
formula ϕ(t, ω)x = x(t), where x(t) is an absolutely continuous solution to
the equation

x(t) = x + ∫_{0}^{t} f (θτ ω, x(τ )) dτ .    (5.4)

Moreover the Jacobian Dx ϕ(t, ω, x) satisfies equations (2.8) and (2.9). We


also have the relations

ϕ(t, ω)(Rd+ \ {0}) ⊂ Rd+ \ {0} for all t>0 and ω∈Ω (5.5)

and
ϕ(t, ω)intRd+ ⊂ intRd+ for all t>0 and ω∈Ω. (5.6)

If we additionally assume that (R3∗ ) holds, then (θ, ϕ) is strongly positive,
i.e.

ϕ(t, ω)(Rd+ \ {0}) ⊂ intRd+ for all t>0 and ω∈Ω. (5.7)

Proof. It follows from Proposition 5.1.1 that (5.1) generates a global C 1 RDS
(θ, ϕ) in Rd+ with properties (2.8) and (2.9).
To prove (5.5) we assume that for some fixed ω ∈ Ω, x ∈ Rd+ and t0 =
t0 (ω, x) > 0 we have x(t0 ) = ϕ(t0 , ω)x = 0. Since f (θτ ω, 0) ≥ 0 for all τ ∈ R,
equation (5.4) implies that

0 ≤ x(t) ≤ − ∫_{t}^{t0} (f (θτ ω, x(τ )) − f (θτ ω, 0)) dτ, 0 ≤ t ≤ t0 .

Since supτ ∈[0,t0 ] |x(τ, ω)| ≤ r(ω) with some r(ω) > 0, property (R1) gives

|f (θτ ω, x(τ )) − f (θτ ω, 0)| ≤ C(τ, ω) · |x(τ )| ,

where

C(τ, ω) ≡ sup_{|x|≤r(ω)} |Dx f (θτ ω, x)| ∈ L1loc (R) for each ω ∈ Ω .

Therefore we have

|x(t)| ≤ ∫_{t}^{t0} C(τ, ω) · |x(τ )| dτ, 0 ≤ t ≤ t0 .

This implies that x(t) ≡ 0 for all 0 ≤ t ≤ t0 and therefore x = 0. Thus we


have (5.5).
Let us prove (5.6) and (5.7). Suppose that for some ω ∈ Ω there exist a
solution x(t) to equation (5.1) such that x(0) = x ≥ 0, a time t0 > 0 and an
element z ∈ Rd+ \ {0} such that x(t0 ) = z and z ∈ Γi for some i ∈ {1, . . . , d}.
In this case we have

xi (t) + ∫_{t}^{t0} fi (θτ ω, x(τ )) dτ = 0, 0 ≤ t ≤ t0 .    (5.8)

Using (R3) we get

xi (t) + ∫_{t}^{t0} [fi (θτ ω, x(τ )) − fi (θτ ω, x̂(τ ))] dτ ≤ 0

for 0 ≤ t ≤ t0 , where

x̂(τ ) = (x1 (τ ), . . . , xi−1 (τ ), 0, xi+1 (τ ), . . . , xd (τ )) .

Therefore, as above, (R1) implies that

0 ≤ xi (t) ≤ ∫_{t}^{t0} C(τ, ω) · xi (τ ) dτ, 0 ≤ t ≤ t0 .

Consequently xi (t) ≡ 0 for all 0 ≤ t ≤ t0 . This is impossible if x(0) = x  0


and therefore we obtain (5.6). Further, if xi (t) ≡ 0 for all 0 ≤ t ≤ t0 , we have
from (5.8) that

∫_{t}^{t0} fi (θτ ω, x̂(τ )) dτ = 0, 0 ≤ t ≤ t0 ,

which is impossible under condition (R3∗ ) provided that x(0) = x > 0. Thus
ϕ(t, ω)x  0 for all x > 0, i.e. we have (5.7). 2

Now we introduce assumptions that guarantee that the RDS generated by


(5.1) in Rd+ is order-preserving. We assume that

(R4) the function f (ω, ·) is cooperative, i.e.

fi (ω, x) ≤ fi (ω, y), i = 1, . . . , d, ω∈Ω, (5.9)

for all x, y ∈ Rd+ such that xi = yi and xj ≤ yj for j ≠ i.


It is easy to see that a function f (ω, x) satisfies condition (R4) if and only if

⟨f (ω, y) − f (ω, x), z⟩ ≥ 0 whenever 0 ≤ x ≤ y, z ≥ 0, ⟨y − x, z⟩ = 0 .

We note that the cooperativity condition (R4) is also known as quasi-


monotonicity (see Walter [107]) and it can be written (see, e.g., Smith
[102]) in the differential form as
(R4∗ ) for each ω ∈ Ω we have

∂fi (ω, x)/∂xj ≥ 0 when i ≠ j, x = (x1 , . . . , xd ) ∈ Rd+ .    (5.10)

As in the deterministic case (see, e.g., Hirsch [52], Krasnoselskii [68, 69],
Smith [102] and the references therein) we need the concept of irreducibility.
Recall the following definition.
Definition 5.2.1. A matrix A = {aij }di,j=1 is called irreducible if for every
nonempty, proper subset I of the set N = {1, 2, . . . , d}, there is an i ∈ I and
j ∈ N \ I such that aij ≠ 0.
One can show that a matrix A is irreducible if and only if no nonzero proper
subspace spanned by a subset of the standard basis in Rd is mapped by A
into itself.
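As a computational aside (our own sketch, not part of the text), the definition can be tested mechanically: a matrix is irreducible exactly when every nonempty proper index set has an entry leading out of it, which amounts to strong connectivity of the directed graph with an edge i → j whenever aij ≠ 0. The helper name below is ours.

    import numpy as np

    def is_irreducible(A):
        """Test Definition 5.2.1: the digraph with an edge i -> j whenever a_ij != 0
        must be strongly connected (every index reaches every other index)."""
        A = np.asarray(A, dtype=float)
        d = A.shape[0]
        adj = np.abs(A) > 0.0
        np.fill_diagonal(adj, False)            # diagonal entries play no role here

        def reaches_all(start):
            seen, stack = {start}, [start]
            while stack:
                i = stack.pop()
                for j in range(d):
                    if adj[i, j] and j not in seen:
                        seen.add(j)
                        stack.append(j)
            return len(seen) == d

        return all(reaches_all(i) for i in range(d))

    # The cyclic pattern of the biochemical control circuit of Sect. 5.7.1 is irreducible,
    # while its upper-triangular truncation is not.
    A = np.array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]])
    print(is_irreducible(A), is_irreducible(np.triu(A)))      # True False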
Theorem 5.2.1. Let (R1)–(R4) hold. Then equation (5.1) generates a stric-
tly order-preserving RDS (θ, ϕ) in Rd+ and

ϕ(t, ω)intRd+ ⊂ intRd+ for any t ≥ 0, ω ∈ Ω . (5.11)

If the matrix

Dx f (ω, x) ≡ { ∂fi (ω, x)/∂xj }_{i,j=1}^{d}    (5.12)

is irreducible for all x ∈ intRd+ and ω ∈ Ω, then

ϕ(t, ω)x  ϕ(t, ω)y for all 0x<y and ω∈Ω, (5.13)

i.e. equation (5.1) generates a strongly order-preserving RDS in intRd+ . If the


matrix (5.12) is irreducible for all positive x from Rd+ and ω ∈ Ω, then the
RDS (θ, ϕ) is strongly order-preserving in Rd+ .

The proof of this theorem follows the line of argument for the deterministic
case (see, e.g., Hirsch [52] or Krasnoselskii [68]) and relies on the following
assertion.
Lemma 5.2.1. Let (R1)–(R4) be valid. Let ϕ(t, ω) be the cocycle generated
by (5.1). Then for any x ∈ Rd+ the linear mapping ψx (t, ω) ≡ Dx ϕ(t, ω, x)
possesses the properties

ψx (t, ω)Rd+ ⊂ Rd+ for all t>0 and ω∈Ω; (5.14)

ψx (t, ω)(Rd+ \ {0}) ⊂ Rd+ \ {0} for all t>0 and ω∈Ω; (5.15)

ψx (t, ω)intRd+ ⊂ intRd+ for all t>0 and ω∈Ω. (5.16)

If additionally the matrix Dx f (θt ω, ϕ(t, ω, x)) is irreducible for all t ≥ 0 and
ω ∈ Ω, then ψx (t, ω) possesses the property

ψx (t, ω)(Rd+ \ {0}) ⊂ intRd+ for all t>0 and ω∈Ω. (5.17)

Proof. Proposition 5.1.1 implies that

y(t) = ψx (t, ω)y0 ≡ Dx ϕ(t, ω, x)y0

is a solution to the problem

ẏ = Dx f (θt ω, x(t))y, y(0) = y0 , (5.18)

where x(t) = ϕ(t, ω)x. Assumption (R4) implies (R4∗ ) and therefore the
right-hand side of the equation (5.18) is weakly positive (see (R3)). Conse-
quently (5.14) follows from Proposition 2.2.1. Relation (5.15) can be proved
in the same way as (5.5).
To prove (5.16) let us assume that for some ω there exist t0 > 0,
z  0 and i ∈ {1, . . . , d} such that we have yi (t0 ) = 0 for the solution
y(t) = (y1 (t), . . . , yd (t)) to problem (5.18) with y0 = z. Since ψx (t, ω) is a
linear order-preserving operator, equation (5.14) implies that yi (t0 ) = 0 for
a solution to problem (5.18) with arbitrary initial data y0 ∈ Rd . This implies
that Detψx (t0 , ω) = 0 which is impossible because of Liouville’s equation
(2.9).
To obtain the last assertion of the lemma we apply the same method as in
the proof of property (5.7). Assume that for some ω ∈ Ω there exist a solution
y(t) = (y1 (t), . . . , yd (t)) ≥ 0 to (5.18) with nonzero initial data and a moment
t0 > 0 such that yi (t0 ) = 0 for i ∈ I and yi (t0 ) > 0 when i ∉ I, where I
is a proper subset of {1, . . . , d}. We note that the relation yi (t0 ) = 0 for all

i ∈ {1, . . . , d} is impossible because of (5.15). Let aij (t, ω) be the entries of


the matrix Dx f (θt ω, ϕ(t, ω, x)), i.e.

aij (t, ω) = (∂fi /∂xj )(θt ω, ϕ(t, ω, x)), t ≥ 0, ω ∈ Ω, i, j = 1, . . . , d .

Since {aij (t, ω)} is irreducible, there exists a pair {k, l} such that k ∈ I, l ∉ I
and akl (t, ω) > 0. These k and l can depend on t and ω. It follows from (5.18)
that

yi (t) + ∫_{t}^{t0} Σ_{j≠i} aij (s, ω)yj (s) · exp( − ∫_{t}^{s} aii (τ, ω)dτ ) ds = 0

for t ∈ [0, t0 ] and i ∈ I. Therefore

Σ_{i∈I} yi (t) + ∫_{t}^{t0} FI (t, s, ω) ds ≤ 0    (5.19)

for t ∈ [0, t0 ], where

FI (t, s, ω) = Σ_{i∈I} Σ_{j∉I} aij (s, ω)yj (s) · exp( − ∫_{t}^{s} aii (τ, ω)dτ ) .

Since y(t) is continuous, we have yj (t) > 0 for j ∉ I and for all t ∈ [t0 − δ, t0 ]
with some δ = δ(ω) > 0. Therefore the irreducibility of {aij (t, ω)} implies
that FI (t, s, ω) > 0 for all s ∈ [t, t0 ] with t ∈ [t0 − δ, t0 ]. Thus from (5.19) we
have Σ_{i∈I} yi (t) < 0 for t ∈ [t0 − δ, t0 ), which is impossible. 2
Proof of Theorem 5.2.1. We make use of the equation

ϕ(t, ω, y) = ϕ(t, ω, x) + ( ∫_{0}^{1} Dx ϕ(t, ω, sy + (1 − s)x) ds ) (y − x)    (5.20)

valid for all t ≥ 0, ω ∈ Ω and x, y ∈ Rd+ . If 0 ≤ x < y, then from


(5.15) and (5.20) we have that ϕ(t, ω, y) > ϕ(t, ω, x), i.e. (θ, ϕ) is strictly
order-preserving in Rd+ . If 0 ≡ x  y, then from (5.16) and (5.20) we
have that ϕ(t, ω, y)  ϕ(t, ω, 0) ≥ 0, i.e. (5.11) is valid. Moreover if for
all x ∈ intRd+ and ω ∈ Ω the matrix (5.12) is irreducible, then (5.11) implies
that Dx f (θt ω, ϕ(t, ω, x)) is irreducible for all t ∈ R+ and ω ∈ Ω. Therefore
(5.17) and (5.20) give (5.13). In a similar way we obtain the last assertion of
Theorem 5.2.1 and conclude the proof. 2

Remark 5.2.1. Let (θ, ϕ) be the RDS in Rd+ generated by (5.1). Since t →
ϕ(t, θ−t ω)x is a right continuous function (see Remark 2.1.2(i)), it follows
from Proposition 1.5.2 that the closure of any pull back orbit γx (ω)
emanating from x ∈ Rd+ is a random closed set with respect to the σ-algebra
F.

5.3 Random Comparison Principle

The following comparison theorem is of importance in what follows. In the


deterministic case it is known as the Kamke theorem (see, e.g., Smith [102],
Walter [107] or the references in Krasnoselskii [68]). We also refer to
Ladde/Lakshmikantham [75] for a random comparison principle for an-
other class of RDE in Rd .
Let us consider in Rd+ the system of random ordinary differential equations

ẏi (t) = gi (θt ω, y1 (t), . . . , yd (t)), i = 1, . . . , d , (5.21)

with the function g = (g1 , . . . , gd ) : Ω × Rd+ → Rd possessing properties (2.1)


and (2.2), i.e. for any compact set K ⊂ Rd+ there exists a random variable
CK (ω) ≥ 0 such that (5.3) holds and

|g(ω, x)| ≤ CK (ω), and |g(ω, x) − g(ω, y)| ≤ CK (ω) · |x − y|

for any x, y ∈ K and ω ∈ Ω. We denote by y(t, ω; x) the local solution to


problem (5.21) with initial data x ∈ Rd+ at the time t = 0 with the property
y(t, ω; x) ∈ Rd+ for t ∈ [0, t0 (ω, x)), where t0 (ω, x) is a positive number. The
existence of this solution follows from Proposition 2.1.1 at least for initial
data from intRd+ .
Theorem 5.3.1 (Random Comparison Principle). Assume that (R1)-
(R4) hold for the function f . Let ϕ(t, ω) be the cocycle of the RDS in Rd+
generated by (5.1). Then the condition

f (ω, x) ≤ g(ω, x) for all x ∈ Rd+ , ω ∈ Ω , (5.22)

implies that

ϕ(t, ω)x ≤ y(t, ω; x) for all t ∈ [0, t0 (ω, x)), ω ∈ Ω, x ∈ Rd+ . (5.23)

If
f (ω, x) ≥ g(ω, x) for all x ∈ Rd+ , ω ∈ Ω , (5.24)
then

ϕ(t, ω)x ≥ y(t, ω; x) for all t ∈ [0, t0 (ω, x)), ω ∈ Ω, x ∈ Rd+ . (5.25)

Proof. Assume (5.22). Then the function z(t) = y(t, ω; x) − ϕ(t, ω)x is a local
solution to problem

żi (t) = hi (t, ω, z1 (t), . . . , zd (t)), zi (0) = 0, i = 1, . . . , d ,

where
h(t, ω, z) = g(θt ω, ϕ(t, ω)x + z) − f (θt ω, ϕ(t, ω)x) .

From (5.22) and (R4) we have

hi (t, ω, z) ≥ fi (θt ω, ϕ(t, ω)x + z) − fi (θt ω, ϕ(t, ω)x) ≥ 0

for every z = (z1 , . . . , zd ) ∈ Rd+ with zi = 0. This implies that

⟨h(t, ω, z), νz ⟩ ≤ 0, t > 0, z ∈ ∂Rd+ , ω ∈ Ω ,

where νz is an outer normal to ∂Rd+ at z. Therefore Proposition 2.2.1 implies


that z(t) ≥ 0 on the interval [0, t0 (ω, x)), and we have (5.23).
Assume now that (5.24) holds. Then the function z ∗ (t) = −z(t) satisfies
the equation
ż ∗ (t) = h∗ (t, ω, z ∗ (t)), z ∗ (0) = 0 ,
where h∗ (t, ω, z) = −h(t, ω, −z). Since y(t, ω; x) ∈ Rd+ for t ∈ [0, t0 (ω, x)), we
also have z ∗ (t) ≤ ϕ(t, ω)x. From (5.24) and (R4) we obtain

h∗i (t, ω, z) ≥ fi (θt ω, ϕ(t, ω)x) − fi (θt ω, ϕ(t, ω)x − z) ≥ 0

for every z = (z1 , . . . , zd ) ∈ Rd+ with zi = 0 such that z ≤ ϕ(t, ω)x. Therefore
as above we can conclude that z ∗ (t) ≥ 0. Thus we obtain (5.25). 2

From Theorem 5.3.1 we easily have the following assertion.


Corollary 5.3.1. Assume that f satisfies (R1)-(R4) and g satisfies (R1)-
(R3). Let ϕ(t, ω) and ψ(t, ω) be the cocycles of the RDS in Rd+ generated by
(5.1) and by (5.21). Then
(i) condition (5.22) implies that

ϕ(t, ω)x ≤ ψ(t, ω)x for all t ∈ R+ , ω ∈ Ω, x ∈ Rd+ ;

(ii) if we have strict inequality in (5.22) for every x ∈ intRd+ and ω ∈ Ω,


then

ϕ(t, ω)x < ψ(t, ω)x for all t > 0, ω ∈ Ω, x ∈ intRd+ ;

(iii) condition (5.24) implies that

ϕ(t, ω)x ≥ ψ(t, ω)x for all t ∈ R+ , ω ∈ Ω, x ∈ Rd+ ;

(iv) if we have strict inequality in (5.24) for every x ∈ intRd+ and ω ∈ Ω,


then

ϕ(t, ω)x > ψ(t, ω)x for all t > 0, ω ∈ Ω, x ∈ intRd+ .



Proof. It is necessary to prove (ii) and (iv) only. Assume that (5.22) with
a strict inequality is valid. Suppose that for some ω ∈ Ω, x ∈ intRd+ and
t0 > 0 we have that ϕ(t0 , ω)x = ψ(t0 , ω)x. Denote x(t) = ϕ(t, ω)x and
y(t) = ψ(t, ω)x. Then from (5.22) and from assertion (i) of the corollary we
have

0 ≤ y(t) − x(t) ≤ − ∫_{t}^{t0} (g(θτ ω, y(τ )) − g(θτ ω, x(τ ))) dτ

for all t ∈ [0, t0 ]. This inequality allows us to conclude that y(t) = x(t) for all
t ∈ [0, t0 ] (cf. the argument given in the proof of Proposition 5.2.1). Therefore
from (5.1) and (5.21) we have the equality

∫_{t}^{t0} (g(θτ ω, x(τ )) − f (θτ ω, x(τ ))) dτ = 0, t ∈ [0, t0 ] .

By Theorem 5.2.1 x(τ ) ∈ intRd+ and therefore the last equality contradicts
the strict inequality in (5.22) for x ∈ intRd+ .
The proof of (iv) is similar. 2
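A quick numerical illustration of the comparison principle (our own sketch; the two-dimensional right-hand sides below are made up, but satisfy (R1)–(R4) and f ≤ g):

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(1)
    U = rng.uniform(0.0, 2.0 * np.pi)
    a = lambda t: 1.0 + 0.5 * np.sin(t + U)              # a sample path of a coefficient a(θ_t ω) > 0

    def f(t, x):                                          # cooperative, weakly positive on R^2_+
        return [a(t) * (1.0 - x[0]) + 0.5 * x[1],
                x[0] - 2.0 * x[1]]

    def g(t, x):                                          # g = f + nonnegative perturbation, so f <= g
        return np.asarray(f(t, x)) + np.array([0.3, 0.1 * x[0]])

    T = np.linspace(0.0, 10.0, 201)
    sol_f = solve_ivp(f, (0.0, 10.0), [0.2, 0.4], t_eval=T, rtol=1e-8, atol=1e-10)
    sol_g = solve_ivp(g, (0.0, 10.0), [0.2, 0.4], t_eval=T, rtol=1e-8, atol=1e-10)

    # Theorem 5.3.1 predicts ϕ(t, ω)x <= y(t, ω; x) componentwise for all t:
    print(np.all(sol_f.y <= sol_g.y + 1e-7))              # expected: True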

Below we also need a stronger version of the comparison principle.


Theorem 5.3.2 (Strong Random Comparison Principle). Assume
that f satisfies (R1)-(R4) and g satisfies (R1)-(R3). Let (θ, ϕ) and (θ, ψ)
be the RDS in Rd+ generated by (5.1) and by (5.21). The following assertions
hold.
(i) If

fi (ω, x) < gi (ω, x) for all i = 1, . . . , d, x ∈ intRd+ , ω ∈ Ω , (5.26)

then

ϕ(t, ω)x  ψ(t, ω)x for all t > 0, x ∈ intRd+ , ω ∈ Ω . (5.27)

(ii) If

fi (ω, x) > gi (ω, x) for all i = 1, . . . , d, x ∈ intRd+ , ω ∈ Ω , (5.28)

then

ϕ(t, ω)x  ψ(t, ω)x for all t > 0, x ∈ intRd+ , ω ∈ Ω . (5.29)

(iii) If matrix (5.12) is irreducible for all x ∈ intRd+ and ω ∈ Ω, then


(a) property (5.22) with strict inequality for every x ∈ intRd+ implies
(5.27);

(b) property (5.24) with strict inequality for every x ∈ intRd+ implies
(5.29).

Proof. Let x(t) = ϕ(t, ω)x and y(t) = ψ(t, ω)x with x ∈ intRd+ .
(i) We obviously have

y(t) − x(t) = y(s) − x(s) + ∫_{s}^{t} [g(θτ ω, y(τ )) − f (θτ ω, x(τ ))] dτ .

Corollary 5.3.1 and Theorem 5.2.1 imply that y(t) > x(t)  0 for all t > 0.
Therefore it follows from (5.26) that

yi (t) − xi (t) > yi (s) − xi (s) + ∫_{s}^{t} [fi (θτ ω, y(τ )) − fi (θτ ω, x(τ ))] dτ    (5.30)

for all t > s ≥ 0 and i = 1, . . . , d. Hence

yi (t) − xi (t) ≥ yi (s) − xi (s) + Σ_{j=1}^{d} ∫_{s}^{t} aij (τ )(yj (τ ) − xj (τ ))dτ ,    (5.31)

where

aij (τ ) = ∫_{0}^{1} (∂fi /∂xj )(θτ ω, x(τ ) + λ(y(τ ) − x(τ ))) dλ .
Cooperativity condition (R4∗ ) and the relation yj (τ ) − xj (τ ) ≥ 0 imply that

yi (t) − xi (t) ≥ yi (s) − xi (s) + ∫_{s}^{t} aii (τ )(yi (τ ) − xi (τ ))dτ

for all t > s ≥ 0 and i = 1, . . . , d. If we suppose that

yi (t0 ) = xi (t0 ) for some t0 > 0 and i ∈ {1, . . . , d} , (5.32)

then we obtain that

0 ≤ yi (s) − xi (s) ≤ ∫_{s}^{t0} |aii (τ )|(yi (τ ) − xi (τ ))dτ

for all 0 ≤ s ≤ t0 . This implies that yi (s) = xi (s) for all s ∈ [0, t0 ]. Therefore
we can apply (R4) in (5.30) and obtain the inequality

yi (t) − xi (t) > yi (s) − xi (s) ≥ 0 for all 0 ≤ s < t ≤ t0

which contradicts (5.32).


(ii) In this case Corollary 5.3.1 implies that x(t) ≥ y(t). Since (R1)–(R3)
hold for g, it follows from (5.6) that y(t)  0. Therefore as above from (5.28)
we can obtain the inequality

xi (t) − yi (t) > xi (s) − yi (s) + ∫_{s}^{t} [fi (θτ ω, x(τ )) − fi (θτ ω, y(τ ))] dτ

and derive (5.29).


(iii-a) We use the idea presented in the proof of Lemma 5.2.1.
By Corollary 5.3.1 and Theorem 5.2.1 we obviously have that y(t) >
x(t)  0 for all t > 0. Assume that for some ω ∈ Ω there exist x ∈ intRd+
and t0 > 0 such that

yi (t0 ) = xi (t0 ), i ∈ I, and yi (t0 ) > xi (t0 ), i ∈ I , (5.33)

where I is a proper subset of {1, . . . , d}. We note that the relation yi (t0 ) =
xi (t0 ) for all i = 1, . . . , d is impossible because of Corollary 5.3.1(ii). An
argument similar to the one given above makes it possible to obtain (5.31) for this
case. Hence as above we can prove that yi (s) = xi (s) for all s ∈ [0, t0 ] and
i ∈ I. Therefore (5.31) implies that

Σ_{i∈I} ∫_{s}^{t0} Fi (τ )dτ ≤ 0 for all s ∈ [0, t0 ] ,    (5.34)

where

Fi (τ ) = Σ_{j∉I} aij (τ )(yj (τ ) − xj (τ )) ≥ 0, i ∈ I .

Since yj (t) − xj (t) > 0 for j ∉ I and for all t ∈ [t0 − δ, t0 ] with some
δ = δ(ω) > 0, from the irreducibility condition we get that Σ_{i∈I} Fi (t) > 0
for t ∈ [t0 − δ, t0 ]. This contradicts (5.34).
To prove (iii-b) we use arguments similar to the ones given in the proofs of
(ii) and (iii-a). 2
As an application of the random comparison principle we prove the following
assertion.
Theorem 5.3.3. Assume that f satisfies (R1)–(R4) and

f (ω, x) ≤ A(ω)x + b(ω), x ∈ Rd+ , ω ∈ Ω , (5.35)

where A(ω) = {aij (ω)}di,j=1 and b(ω) = (b1 (ω), . . . , bd (ω)) ≥ 0 are tempered
random variables such that t → A(θt ω) and t → |b(θt ω)| are locally in-
tegrable. Assume also that θ is an ergodic metric dynamical system and the
linear RDS (θ, Φ) generated by

ẋ(t) = A(θt ω)x(t)

has a negative top Lyapunov exponent. Then there exist a θ-invariant set Ω ∗
of full measure and a version (θ, ϕ̃) of the RDS (θ, ϕ) generated by (5.1) such
that ϕ̃(t, ω) = ϕ(t, ω) for all ω ∈ Ω ∗ and t ≥ 0, and the RDS (θ, ϕ̃) possesses
an absorbing super-equilibrium v(ω)  0 in the universe D of all tempered
subsets of Rd+ .

Proof. The comparison principle implies that the RDS (θ, ϕc ) generated by
the equation
ẋ(t) = A(θt ω)x(t) + b(θt ω) + c · e ,
where e := (1, . . . , 1) ∈ Rd+ , dominates the system (θ, ϕ) from above for any
c ∈ R+ . Condition (5.35) and the weak positivity property (R3) of f imply
that aij (ω) ≥ 0, i = j. Therefore (θ, ϕc ) is a strictly order-preserving RDS in
Rd+ (see Theorem 5.2.1). By (2.15) the cocycle ϕc (t, ω) has the form
ϕc (t, ω)x = Φ(t, ω)x + ψc (t, ω) ,
where

ψc (t, ω) := ∫_{0}^{t} Φ(t − s, θs ω) (b(θs ω) + c · e) ds .
Property (5.11) implies that ψc (t, ω)  0 for all ω ∈ Ω, t > 0 and c > 0. By
Theorem 2.1.2 and Definition 1.9.1 we also have

|ψc (t, θ−t ω)| ≤ ∫_{−t}^{0} Rε (θs ω) e^{−(λ+ε)s} ( |b(θs ω)| + c · √d ) ds, ω ∈ Ω ∗ ,

where ε > 0 is arbitrary and Ω ∗ is the θ-invariant set of full measure given
by Theorem 2.1.2. Therefore we can choose ε > 0 such that {|ψc (t, θ−t ω)| :
t ≥ 0} is bounded for every ω ∈ Ω ∗ . Consequently by Proposition 1.9.3 for
any c > 0 the system (θ, ϕc ) possesses an equilibrium wc (ω) on Ω ∗ such that

lim_{t→+∞} e^{γt} sup_{v∈D(θ−t ω)} |ϕc (t, θ−t ω)v − wc (ω)| = 0, ω ∈ Ω ∗ ,

for any tempered random closed set D ⊂ Rd+ and γ < −λ. It is also clear
from Theorem 4.6.1 that wc (ω) ≫ wc′ (ω) when c > c′ , ω ∈ Ω ∗ . Therefore, if
we redefine f by f (ω, x) = −x + e on Ω \ Ω ∗ , then

vc (ω) = wc (ω) if ω ∈ Ω ∗ ,   vc (ω) = (1 + c)e if ω ∈ Ω \ Ω ∗ ,

is a D-absorbing super-equilibrium for the RDS (θ, ϕ̃) with the cocycle ϕ̃
defined by the formula

ϕ̃(t, ω)x = ϕ(t, ω)x if ω ∈ Ω ∗ ,   ϕ̃(t, ω)x = e^{−t} x + (1 − e^{−t})e if ω ∈ Ω \ Ω ∗ .
2
Corollary 5.3.2. Under the hypotheses of Theorem 5.3.3 the RDS (θ, ϕ̃)
generated by (5.1) has a global random attractor in the universe D of all tem-
pered subsets of Rd+ and it possesses the properties stated in Theorem 3.6.2.
Proof. This follows directly from Theorem 5.3.3, Corollary 1.8.1 and Theo-
rem 3.6.2. 2
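The hypothesis of Theorem 5.3.3 on the top Lyapunov exponent of the linear part can be probed numerically. The sketch below (our own illustration, with an arbitrarily chosen stationary cooperative matrix) integrates ẋ = A(θt ω)x in renormalised chunks and time-averages the logarithmic growth of |x(t)|.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(2)
    U = rng.uniform(0.0, 2.0 * np.pi, size=2)

    def A(t):
        # illustrative stationary matrix with nonnegative off-diagonal entries (assumption)
        return np.array([[-1.0 + 0.3 * np.cos(t + U[0]), 0.2],
                         [0.4, -1.2 + 0.3 * np.sin(t + U[1])]])

    rhs = lambda t, x: A(t) @ x

    x = np.array([1.0, 1.0])
    log_growth, chunk, n_chunks = 0.0, 10.0, 200
    for k in range(n_chunks):
        sol = solve_ivp(rhs, (k * chunk, (k + 1) * chunk), x, rtol=1e-8, atol=1e-12)
        x = sol.y[:, -1]
        r = np.linalg.norm(x)
        log_growth += np.log(r)
        x /= r                                   # renormalise to avoid underflow
    print(f"estimated top Lyapunov exponent ≈ {log_growth / (chunk * n_chunks):.3f}")   # negative here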

5.4 Equilibria, Semi-Equilibria and Attractors


Now we give several results on the existence of equilibria and attractors for
the systems considered. We first note that under assumptions (R1)–(R3)
Proposition 5.2.1 implies that the element x ≡ 0 is a sub-equilibrium for
the RDS (θ, ϕ) generated by (5.1) in Rd+ . The following assertion gives some
properties of this sub-equilibrium.
Proposition 5.4.1. Let assumptions (R1)–(R4) hold. If f (ω, 0) > 0 for all
ω ∈ Ω, then ϕ(t, ω)0 > 0 for all t > 0 and ω ∈ Ω. If

fi (ω, 0) > 0, for all ω ∈ Ω, i = 1, . . . , d , (5.36)

then ϕ(t, ω)0  0.

Proof. Let f (ω, 0) > 0. Assume that there exists t0 > 0 such that ϕ(t0 , ω)0 =
0 for some ω ∈ Ω. Then the same argument as in the proof of property (5.5)
gives us that ϕ(t, ω)0 = 0 for all t ∈ [0, t0 ]. Thus x(t) ≡ 0 is a stationary
solution to (5.1) which is impossible because f (ω, 0) > 0.
Assume now that (5.36) holds. Denote x(t) = ϕ(t, ω)0. Suppose that there
exists t0 > 0 such that xi (t0 ) = 0 for some i and ω ∈ Ω. Then

−xi (t) = ∫_{t}^{t0} fi (θτ ω, 0, . . . , 0, xi (τ ), 0, . . . , 0) dτ
          + ∫_{t}^{t0} ( fi (θτ ω, x(τ )) − fi (θτ ω, 0, . . . , 0, xi (τ ), 0, . . . , 0) ) dτ

for all t ∈ [0, t0 ]. The cooperativity condition implies that

xi (t) + ∫_{t}^{t0} fi (θτ ω, 0, . . . , 0, xi (τ ), 0, . . . , 0) dτ ≤ 0 .    (5.37)

Therefore it follows from (5.36) that

xi (t) + ∫_{t}^{t0} [ fi (θτ ω, 0, . . . , 0, xi (τ ), 0, . . . , 0) − fi (θτ ω, 0) ] dτ ≤ 0 .

Thus as in the proof of Proposition 5.2.1 we find that xi (t) ≡ 0 for t ∈ [0, t0 ].
Hence (5.37) implies ∫_{t}^{t0} fi (θτ ω, 0)dτ ≤ 0 which is impossible. 2

The following assertion contains a sufficient condition for the existence of an


equilibrium.
Proposition 5.4.2. Let assumptions (R1)–(R4) be valid. Assume that for
some x ∈ Rd+ and for any ω ∈ Ω there exists t0 = t0 (ω) such that the closure
of the tail γx^{t0 (ω)}(ω) of the orbit emanating from x,

γx^{t0 (ω)}(ω) = ⋃_{t≥t0 (ω)} ϕ(t, θ−t ω)x ,

is a compact set in Rd+ . Then there exists an equilibrium u(ω) for the RDS
(θ, ϕ) generated by (5.1). This equilibrium is positive when f (ω, 0) > 0.

Proof. Since t → ϕ(t, θ−t ω)x is a right continuous function (see Remark 2.1.2)
and therefore a separable process, Remark 1.6.1 and Proposition 1.6.4 imply
that the omega-limit set Γx (ω) exists and is an invariant random compact
set. Therefore we can apply Lemma 3.4.1. Hence v(ω) := inf Γx (ω) is a super-
equilibrium such that v(ω) ≥ 0. Since 0 is a sub-equilibrium, the existence of
an equilibrium u(ω) ∈ [0, v(ω)] now follows from Theorem 3.5.1. 2

In applications below we also use the following assertion on the existence of


equilibria.
Theorem 5.4.1. Let assumptions (R1)–(R4) be valid and assume that there
exists a C 1 function W (x) from intRd+ into R+ such that for some nonrandom
R we have

|∇W (x)| ≠ 0,   ⟨f (ω, x), ∇W (x)⟩ ≤ 0 for all ω ∈ Ω    (5.38)

provided that W (x) = R and x ∈ intRd+ . If the set

B = {x ∈ intRd+ : W (x) ≤ R} (5.39)

is bounded, then there exists an equilibrium u(ω) for the RDS (θ, ϕ) generated
by (5.1) in Rd+ such that 0 ≤ u(ω) ≤ sup B. In this case there also exists a sub-
equilibrium w(ω) with the properties w(ω) ≥ u(ω) and inf B ≤ w(ω) ≤ sup B.
We also have u(ω) > 0 provided that f (ω, 0) > 0 for all ω ∈ Ω and u(ω)  0
if (5.36) holds.
The proof of this theorem relies on the following lemma.
Lemma 5.4.1. Let assumptions (R1)–(R3) be valid and assume that there
exists a C 1 function W (x) from intRd+ into R+ such that for some nonrandom
R we have (5.38) provided that W (x) = R. Then the set B given by (5.39)
is a forward invariant set with respect to ϕ(t, ω), i.e. ϕ(t, ω)B ⊂ B for all
ω ∈ Ω. Here ϕ(t, ω) is the cocycle generated by (5.1).

Proof. For every x from the set

intRd+ ∩ ∂B = {x ∈ intRd+ : W (x) = R}


an outer normal has the form νx = ∇W (x)/|∇W (x)|. Since intRd+ is an open forward
invariant set for (θ, ϕ) (see (5.6)), we can apply Theorem 2.2.2 with O =
intRd+ and D = B. It shows that B = intRd+ ∩ B is a deterministic forward
invariant set for the RDS (θ, ϕ). 2

Proof of Theorem 5.4.1. Lemma 5.4.1 shows that B is a forward invari-


ant compact set for RDS (θ, ϕ) in Rd+ . Therefore from Proposition 1.6.3 we
have that the omega-limit set ΓB (ω) is an invariant random compact set.
Consequently, as in Proposition 5.4.2, we can apply Lemma 3.4.1. Therefore
there exist a sub-equilibrium inf B ≤ w(ω) ≤ sup B and a super-equilibrium
0 ≤ v(ω) ≤ sup B such that v(ω) ≤ w(ω). Since 0 is a sub-equilibrium, the
existence of an equilibrium u(ω) ∈ [0, v(ω)] such that

u(ω) ≥ sup_{t>0} ϕ(t, θ−t ω)0 = lim_{t→∞} ϕ(t, θ−t ω)0

now follows from Theorem 3.5.1. This last relation and Proposition 5.4.1
imply the positivity properties of u(ω). 2
Corollary 5.4.1. Let (R1)–(R4) hold. Assume that there exist positive num-
bers R and α such that

Σ_{i=1}^{d} xi^{α−1} fi (ω, x1 , . . . , xd ) ≤ 0

provided that Σ_{i=1}^{d} xi^{α} = R and x ∈ intRd+ . Then there exists an equilibrium
u(ω) lying in the interval [0, R^{1/α} e] for the RDS generated by (5.1) in Rd+ .
Here e is the element from intRd+ given by the formula e = (1, . . . , 1).

Proof. We apply Theorem 5.4.1 with W (x) = Σ_{i=1}^{d} xi^{α} . 2

Below we make use of the following simple sufficient condition for the exis-
tence of super- and sub-equilibria for problem (5.1).
Proposition 5.4.3. Let (R1)–(R3) be valid. Assume that there exists a non-
random element w ∈ Rd+ such that f (ω, x) satisfies (R4) for all x ∈ [0, w]
and
fi (ω, w) ≤ 0, for all i = 1, . . . , d and ω ∈ Ω . (5.40)
Then w(ω) ≡ w is a super-equilibrium for the RDS (θ, ϕ) generated by (5.1).
If the inequality in the formula above is reversed and (R4) holds for all x ≥ w,
then w(ω) ≡ w is a sub-equilibrium.

Proof. If w = 0, then the weak positivity (R3) and equation (5.40) imply
that f (ω, 0) = 0 and therefore w is an equilibrium.
Assume that there exists w ∈ intRd+ such that fi (ω, w) ≤ 0 for all ω ∈ Ω
and i = 1, . . . , d. The cooperativity condition implies that

fi (ω, w − y) ≤ fi (ω, w) ≤ 0 for all y ∈ Γi ∩ [0, w], ω ∈ Ω , (5.41)

where i = 1, . . . , d. We apply Theorem 2.2.2 with O = intRd+ and D = [0, w].


If x ∈ ∂D ∩ intRd+ , then there exist a subset I ⊂ {1, . . . , d} and an element

y ∈ ∩i∈I Γi such that x = w − y, y ≪ w, and yi > 0 for i ∉ I. Any outer


normal νx at x has the form

νx = Σ_{i∈I} αi ei with αi ≥ 0, Σ_{i∈I} αi² = 1 ,

where {ei } is the standard basis in Rd . Therefore from (5.41) we have that

⟨f (ω, x), νx ⟩ = Σ_{i∈I} αi fi (ω, x) ≤ 0 .

Thus Theorem 2.2.2 implies that the set [0, w]∩intRd+ is invariant with respect
to (θ, ϕ). Hence [0, w] is also an invariant set and w is a super-equilibrium
(see Remark 3.4.1).
Assume now that w ∈ ∂Rd+ \ {0}. For the sake of simplicity we consider
the case w1 = 0 and wj > 0, j = 2, . . . , d (for other cases the proof is similar).
The weak positivity condition (R3) and (5.40) imply that

f1 (ω, 0, w2 , . . . , wd ) = 0 and f1 (ω, 0, x2 , . . . , xd ) ≥ 0, xj ≥ 0 .

Therefore using the cooperativity condition (R4) it is easy to see that

f1 (ω, 0, x2 , . . . , xd ) = 0 for 0 ≤ xj ≤ wj , j = 2, . . . , d . (5.42)

Applying the argument given above we can conclude that (w2 , . . . , wd ) is a


super-equilibrium for the RDS generated by the RDE

ẋj (t) = fj (θt ω, 0, x2 (t), . . . , xd (t)), j = 2, . . . , d .

Therefore it follows from (5.42) that w = (0, w2 , . . . , wd ) is a super-equilib-


rium for the RDS (θ, ϕ).
Assume that (5.40) holds with the inequality reversed, i.e

fi (ω, w) ≥ 0, for all i = 1, . . . , d, ω ∈ Ω .

The cooperativity condition implies that

fi (ω, w + y) ≥ fi (ω, w) ≥ 0 for all y ∈ Γi , ω ∈ Ω ,

where i = 1, . . . , d. Thus the mapping x → f (ω, w + x) is weakly positive.


Therefore as in the proof of Proposition 5.2.1 we can conclude that the set
w + Rd+ is invariant with respect to (θ, ϕ). This implies that w is a sub-
equilibrium. 2
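In practice, condition (5.40) is easy to check along a sample path. A small sketch follows (the two-dimensional field and the stationary coefficient below are assumptions made only for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    U = rng.uniform(0.0, 2.0 * np.pi)
    alpha = lambda t: 1.5 + 0.5 * np.cos(t + U)          # stationary coefficient α(θ_t ω) ∈ [1, 2]

    def f(t, x):                                          # cooperative toy field on R^2_+ (assumption)
        return np.array([1.0 + 0.5 * x[1] - alpha(t) * x[0],
                         x[0] - 2.0 * x[1]])

    def is_constant_super_equilibrium(w, times):
        """Check (5.40): f_i(θ_t ω, w) <= 0 for all sampled times t."""
        return all(np.all(f(t, w) <= 0.0) for t in times)

    times = np.linspace(0.0, 200.0, 4001)
    print(is_constant_super_equilibrium(np.array([2.0, 1.5]), times))   # True:  w = (2, 1.5) works
    print(is_constant_super_equilibrium(np.array([0.5, 0.5]), times))   # False: f_1(ω, w) > 0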

Now we prove an assertion on the existence of a random attractor.



Theorem 5.4.2. Let assumptions (R1)–(R4) be valid and assume that there
exists a C 1 function W (x) from Rd+ into R such that

a1 |x|α1 − b1 ≤ W (x) ≤ a2 |x|α2 + b2 , (5.43)

where aj , αj , bj are positive constants, and

⟨f (ω, x), ∇W (x)⟩ + (β + ε(ω)) · W (x) ≤ C(ω) for all ω ∈ Ω ,    (5.44)

where C(ω) ≥ 0 is a tempered random variable, β > 0 is a nonrandom
constant and ε(ω) is a random variable such that ε(θt ω) lies in L1loc (R) for
every ω ∈ Ω and

lim_{t→+∞} (1/t) ∫_{0}^{t} ε(θτ ω) dτ = lim_{t→+∞} (1/t) ∫_{−t}^{0} ε(θτ ω) dτ = 0

for all ω ∈ Ω. Then the RDS (θ, ϕ) possesses a random attractor A(ω) in
the universe D of all tempered random closed subsets of Rd+ . This attractor
is bounded from above and from below and there exist maximal and minimal
equilibria ū and u such that the random interval [u, ū] contains the attractor as
well as all other possible tempered equilibria. In particular, if the equilibrium
u is unique, then A = {u}.

Proof. From (5.1) and (5.44) we have

(d/dt) W (ϕ(t, ω)x) + (β + ε(θt ω)) · W (ϕ(t, ω)x) ≤ C(θt ω) .
Therefore we can apply Proposition 1.4.1 and Corollary 1.8.2 to conclude
that the RDS (θ, ϕ) generated in the space Rd+ by problem (5.1) possesses a
random global attractor A(ω) in the universe D. The existence of the maximal
and minimal equilibria ū and u and their properties follow from Theorem
3.6.2. 2

Remark 5.4.1. The hypotheses of Theorem 5.4.2 hold with W (x) = |x|2 , if
f (ω, x) satisfies (R1), (R3), (R4) and also the inequality

⟨x, f (ω, x)⟩ + (β + ε(ω)) · |x|² ≤ C(ω)

with β, ε(ω) and C(ω) possessing the properties listed in Theorem 5.4.2.
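For a concrete right-hand side the dissipativity condition of Remark 5.4.1 can be sampled directly; the field and the constants β, ε, C below are our own illustrative assumptions, not taken from the text.

    import numpy as np

    rng = np.random.default_rng(4)
    U = rng.uniform(0.0, 2.0 * np.pi)
    eps = lambda t: 0.3 * np.sin(t + U)                   # zero-average ε(θ_t ω)

    def f(t, x):                                           # toy cooperative field (assumption)
        return np.array([1.0 + 0.5 * x[1] - 2.0 * x[0],
                         x[0] - 2.0 * x[1]])

    beta, C = 0.5, 2.0                                     # candidate constants in (5.44) with W(x) = |x|^2
    worst = -np.inf
    for t in np.linspace(0.0, 100.0, 501):
        for _ in range(50):
            x = rng.uniform(0.0, 10.0, size=2)             # random sample points in R^2_+
            lhs = np.dot(x, f(t, x)) + (beta + eps(t)) * np.dot(x, x)
            worst = max(worst, lhs)
    print(f"max of <x,f> + (beta+eps)|x|^2 over samples: {worst:.3f}  (stays below C = {C})")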

5.5 Random Equations with Concavity Properties


Here we study the qualitative behavior of random cooperative differential
equations possessing some concavity properties. We rely on general results
presented in Chap.4 for random sublinear systems. We start with the follow-
ing assertion.

Lemma 5.5.1. Assume that conditions (R1)–(R4) hold and for any ω ∈ Ω
the function f (ω, ·) is a sublinear mapping from Rd+ into Rd , i.e.

λf (ω, x) ≤ f (ω, λx) (5.45)

for 0 < λ < 1 and for all x ∈ Rd+ and ω ∈ Ω. Then the RDS (θ, ϕ) gener-
ated by (5.1) is sublinear. Moreover (θ, ϕ) is strongly sublinear if one of the
following conditions is satisfied:
(i) λfi (ω, x) < fi (ω, λx) for all i = 1, . . . , d, 0 < λ < 1, x ∈ intRd+ and
ω ∈ Ω;
(ii) the matrix (5.12) is irreducible for all x ∈ intRd+ and ω ∈ Ω and prop-
erty (5.45) holds with strict inequality for every x ∈ intRd+ .
Proof. The function xλ (t) = λ · ϕ(t, ω)x is the solution to the problem

ẋλ (t) = fλ (θt ω, xλ ), xλ (0) = λx ,

where fλ (ω, x) = λf (ω, λ−1 x). From (5.45) we have fλ (ω, x) ≤ f (ω, x).
Therefore the comparison principle (see Corollary 5.3.1) gives

λ · ϕ(t, ω)x ≡ xλ (t) ≤ x(t) ≡ ϕ(t, ω)[λx] .

Thus (θ, ϕ) is sublinear. The strong sublinearity of (θ, ϕ) under condition


either (i) or (ii) follows from Theorem 5.3.2. 2

The simplest examples of sublinear mappings are f (ω, x) = a(ω) · x and
f (ω, x) = a(ω) · (1 + x)−1 , where x ∈ R+ and a(ω) ≥ 0. They are strictly
(and strongly) sublinear if a(ω) > 0.
Thus we can apply here the results presented in Chap.4 for sublinear
systems. For instance an application of Corollary 4.3.1 leads to the following
result.
Theorem 5.5.1. Assume that conditions (R1)–(R4) and relation (5.36)
hold. Assume also that the function f satisfies the condition either (i) or (ii)
of Lemma 5.5.1. Let (θ, ϕ) be the RDS generated by (5.1) over the ergodic
metric dynamical system θ. Then either
(i) for any x ∈ Rd+ we have |ϕ(t, θ−t ω)x| → ∞ almost surely as t → ∞ or
(ii) there exists a unique almost equilibrium u(ω)  0 defined on a θ-
invariant set Ω ∗ of full measure such that

lim ϕ(t, θ−t ω)v(θ−t ω) = u(ω), ω ∈ Ω∗ ,


t→+∞

for any random variable v(ω) possessing the property 0 ≤ v(ω) ≤ λu(ω)
for all ω ∈ Ω ∗ and for some nonrandom λ > 0.
Proof. Proposition 5.4.1 implies that ϕ(t, ω)0  0 for all t > 0 and ω ∈ Ω.
It is also clear that any finite-dimensional RDS is conditionally compact.
Therefore we can apply Corollary 4.3.1. 2

We can also apply the trichotomy theorem (see Sect.4.4) in our situation.
Theorem 5.5.2 (Limit Set Trichotomy). Let conditions (R1), (R3),
(R4) and either (i) or (ii) of Lemma 5.5.1 be valid. Instead of (R2) we assume
that there exist positive nonrandom constants a and b such that

−a · |x|1 ≤ fj (ω, x) ≤ b · (1 + |x|1 ) for x ∈ Rd+ , ω ∈ Ω, j = 1, . . . , d ,    (5.46)

where | · |1 is the l1 -norm in Rd , i.e. |x|1 = Σ_{i=1}^{d} |xi | for x = (x1 , . . . , xd ) ∈ Rd .

Let e := (1, . . . , 1) ∈ Rd+ and let Ce be the collection of random variables


w : Ω → Rd+ possessing the property

α−1 · e ≤ w(ω) ≤ α · e for all ω∈Ω

for some nonrandom number α ≥ 1. Then any orbit emanating from a ∈ Ce


does not leave Ce , i.e.

ϕ(t, ω)a(ω) ∈ Ce for all a ∈ Ce , t ≥ 0 (5.47)

and precisely one of the following three cases applies:


(i) for all b ∈ Ce , the orbit γb emanating from b is unbounded;
(ii) for all b ∈ Ce , the orbit γb emanating from b is bounded, but the
closure of γb does not belong to Ce ;
(iii) there exists a unique F-measurable almost equilibrium u ∈ Ce , and
for all b ∈ Ce the orbit emanating from b converges to u, i.e.

lim ϕ(t, θ−t ω)b(θ−t ω) = u(ω) for almost all ω∈Ω. (5.48)
t→+∞

Proof. It follows from (5.46) that

−a · |x|1 · e ≤ f (ω, x) ≤ b · (1 + |x|1 ) · e for all x ∈ Rd+ , ω ∈ Ω .

Therefore Comparison Theorem 5.3.1 implies that

y1 (t) ≤ ϕ(t, ω)e ≤ y2 (t), t ∈ [0, t0 ) ,

where y1 (t) and y2 (t) are solutions to the nonrandom problems

ẏ1 (t) = −a · |y1 (t)|1 · e and ẏ2 (t) = b · (1 + |y2 (t)|1 ) · e (5.49)

with initial data y1,2 (0) = e and t0 := sup{s : y1 (t) ≥ 0, t ∈ [0, s)}. It is easy
to see that

|y1 (t)|1 = d · exp{−adt} and 1 + |y2 (t)|1 = (d + 1) · exp{bdt} (5.50)

for t ∈ [0, t0 ). Therefore from (5.49) and (5.50) we have



y1 (t) = exp{−adt} · e and y2 (t) = e + (1 + d−1 ) · (exp{bdt} − 1) · e


for t ∈ [0, t0 ). However these relations give solutions to (5.49) for each t ≥ 0.
Consequently t0 = ∞ and

exp{−adt} · e ≤ ϕ(t, ω)e ≤ (1 + d−1 ) · exp{bdt} · e . (5.51)

Thus the orbit emanating from e does not leave Ce and therefore we can
apply Theorem 4.4.1. By Remark 5.2.1 the trajectory emanating from e is a
random set with respect to F. Therefore by Remark 4.4.1(i) the equilibrium
u(ω) is F-measurable. This completes the proof. 2

Remark 5.5.1. Relation (5.51) shows that under the conditions of Theo-
rem 5.5.2 property (4.39) holds with a(ω) ≡ e. Therefore by Corollary 4.4.1
statement (ii) in Theorem 5.5.2 is valid in the form: for all b ∈ Ce , the orbit
γb emanating from b is bounded, but

lim sup_{t→∞} sup_{ω∈Ω} p(ϕ(t, θ−t ω)b(θ−t ω), v(ω)) = ∞ ,

where p is the part metric in intRd+ (see (3.4)).

Under additional assumptions we can obtain a more detailed description of


the behaviour of trajectories than that given by Theorem 5.5.2. For example
using Theorem 4.2.3 we can prove the following assertion.
Proposition 5.5.1. Let the conditions of Theorem 5.5.2 hold.
(i) If there exists a sub-equilibrium w(ω) such that w(ω) ∈ Ce then either
(a) for all v ∈ Ce , the orbit γv emanating from v is unbounded; or (b) there
exists a unique equilibrium u ≥ w such that for all v ∈ Ce the orbit emanating
from v converges to u, i.e.

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = u(ω) for all ω ∈ Ω ∗ ,    (5.52)

where Ω ∗ is a θ-invariant set of full measure.


(ii) Assume that θ is ergodic and there exists a super-equilibrium w(ω) ∈
Ce . If
ϕ(t, ω)(Rd+ \ {0}) ⊂ intRd+ for all ω ∈ Ω , (5.53)
then there exists a θ-invariant set Ω ∗ of full measure such that either (a) for
all v ∈ Ce , the orbit γv emanating from v converges to zero for ω ∈ Ω ∗ , i.e.

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = 0, ω ∈ Ω ∗ ,    (5.54)

or (b) there exists a unique equilibrium u(ω) such that 0  u(ω) ≤ α · e for
ω ∈ Ω ∗ and we have (5.52) for all v ∈ Ce .

Proof. If under the condition in (i) option (a) is not valid, then the orbit
emanating from w is bounded. Therefore (b) follows from the first part of
Theorem 4.2.3 (see also Remark 4.2.2).
As for assertion (ii), the second part of Theorem 4.2.3 implies the existence
of an equilibrium 0 ≤ u(ω) ≤ w(ω) ≤ α · e. Lemma 3.5.1 and (5.53) give that
either u(ω) = 0 or u(ω)  0 on a θ-invariant set of full measure. In the first
case we obtain (5.54). In the second case we obtain (b). 2

The construction of sub- or super-equilibria in Proposition 5.5.1 usually relies


on the comparison principle (see examples below). We also note that by
Proposition 5.2.1 property (5.53) holds under assumption (R3∗ ). The result
given below provides other conditions that guarantee (5.53).
In the next proposition we use the ordering relation between d × d matri-
ces which arises from viewing them as vectors from the space R^{d²} with the
standard cone R^{d²}_+ .
Proposition 5.5.2. Assume that assumptions (R1)–(R4) are met and that
the matrix Dx f (ω, x) is irreducible for x ∈ intRd+ and ω ∈ Ω. Let the function
f (ω, x) be s-concave, i.e.

Dx f (ω, x) < Dx f (ω, y) for x  y  0, and ω ∈ Ω . (5.55)

Then (θ, ϕ) is s-concave (see Definition 4.1.3) and strongly order-preserving


in Rd+ . In particular (5.53) holds.

Proof. We first note that y(t) = Dx ϕ(t, ω, x)z is the solution to the problem

ẏ(t) = Dx f (θt ω, ϕ(t, ω, x))y(t), y(0) = z .

Since Dx f (ω, x) is irreducible in intRd+ , we have ϕ(t, ω, x)  ϕ(t, ω, y)  0


for all x > y  0 by Theorem 5.2.1. Therefore property (5.55) gives that

Dx f (θt ω, ϕ(t, ω, x)) < Dx f (θt ω, ϕ(t, ω, y))

for any x > y  0. Consequently the comparison principle implies that

Dx ϕ(t, ω, x)z < Dx ϕ(t, ω, y)z for x  y  0, z ∈ intRd+ and ω ∈ Ω .

Therefore (θ, ϕ) is an s-concave RDS.


In order to apply Theorem 5.2.1 to prove that (θ, ϕ) is strongly order-
preserving in Rd+ we need to verify relation (5.53) only. For any x ∈ Rd+ \ {0}
and 0 < λ < 1 we have

ϕ(t, ω, x) ≥ ϕ(t, ω, λx) = ϕ(t, ω, 0) + ( ∫_{0}^{λ} Dx ϕ(t, ω, sx) ds ) x .

Hence
ϕ(t, ω, x) ≥ λ · Dx ϕ(t, ω, 0)x + λ · b(t, ω, x, λ) , (5.56)

where

b(t, ω, x, λ) = λ^{−1} ∫_{0}^{λ} ( Dx ϕ(t, ω, sx) − Dx ϕ(t, ω, 0) ) x ds

is a random variable from Rd such that limλ→0 b(t, ω, x, λ) = 0 for all ω ∈ Ω.


From the s-concavity of (θ, ϕ) we obtain

Dx ϕ(t, ω, 0) ≥ Dx ϕ(t, ω, y/2) > Dx ϕ(t, ω, y) if y  0 and ω ∈ Ω .

Therefore from Lemma 5.2.1 we have

Dx ϕ(t, ω, 0)  0 for all ω ∈ Ω .

Hence for every ω ∈ Ω there exists λ0 (ω) > 0 such that

Dx ϕ(t, ω, 0)x + b(t, ω, x, λ)  0 for all λ < λ0 (ω) .

Thus (5.56) implies that ϕ(t, ω, x)  0 for every x > 0, whence (5.53). 2

The following theorem describes possible scenarios in s-concave systems.


Theorem 5.5.3. Let (R1)–(R4) be valid. Assume that the function f (ω, x)
is s-concave and that the matrix Dx f (ω, x) is irreducible for all x ∈ intRd+
and ω ∈ Ω. If f (ω, 0) ∈ Rd+ \ {0} for all ω ∈ Ω, then either
(a) the orbit γv emanating from v is unbounded for all v(ω) ≥ 0, or
(b) there exists a unique equilibrium u ≫ 0 such that for every v(ω) possessing
the property 0 ≤ v(ω) ≤ α · u(ω) with some α > 0 the orbit emanating
from v converges to u, i.e.

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = u(ω) for all ω ∈ Ω ∗ ,    (5.57)

where Ω ∗ is a θ-invariant set of full measure.


If we additionally assume that the affine RDS generated by the equation

ẏ = f (θt ω, 0) + Dx f (θt ω, 0)y (5.58)

possesses a super-equilibrium w(ω) ∈ intRd+ , then there exist bounded orbits


and (b) holds. If f (ω, 0) ≡ 0, θ is an ergodic metric dynamical system and
the top Lyapunov exponent of the linear RDS (5.58) is less than zero, then
we have
lim_{t→∞} ϕ(t, θ−t ω)x = 0 for all x ∈ Rd+

on a θ-invariant set of full measure.



Proof. If f (ω, 0) ∈ Rd+ \ {0} for all ω ∈ Ω, then by Proposition 5.4.1 we


have that ϕ(t, ω, 0) > 0 for all t > 0 and ω ∈ Ω. Therefore Proposition 5.5.2
implies that ϕ(t, ω, 0)  0 for t > 0. Consequently vs (ω) = ϕ(s, θ−s ω, 0)  0
is a sub-equilibrium for every s > 0 (see Proposition 3.4.1). By Proposition
5.5.2 (θ, ϕ) is s-concave. Therefore Proposition 4.1.1 implies that (θ, ϕ) is
concave, i.e. it satisfies (4.5). In particular

(1 − λ)ϕ(t, ω, 0) + λϕ(t, ω, y) ≤ ϕ(t, ω, λy), y>0.

Hence λϕ(t, ω, y)  ϕ(t, ω, λy) for all t > 0 and y > 0. Thus (θ, ϕ) is strongly
sublinear. Therefore we can apply the same argument as in the proof of
Proposition 5.5.1 to obtain (a)/(b) dichotomy.
Since

f (ω, x) = f (ω, 0) + ( ∫_{0}^{1} Dx f (ω, sx)ds ) x ,

under s-concavity condition (5.55) we have

f (ω, x) ≤ f (ω, 0) + Dx f (ω, 0)x .

Therefore the affine RDS generated by (5.58) dominates (θ, ϕ) from above.
Hence w(ω) ∈ intRd+ is a super-equilibrium for (θ, ϕ). Thus the orbit γw
emanating from w is bounded, whence (b).
The proof of the last assertion follows from Proposition 4.2.2 (cf. Theo-
rem 2.1.2). 2

5.6 One-Dimensional Explicitly Solvable Random Equations

In this section we consider a class of RDS generated by one-dimensional RDE


of the form
ẋ = α(θt ω) · f (x) + β(θt ω) · g(x), x ∈ R , (5.59)
over an ergodic metric dynamical system θ. We assume that α(ω) and β(ω)
are tempered random variables from L1 (Ω, F, P). In this case the Birkhoff-
Khintchin ergodic theorem (see, e.g., Arnold [3, Appendix]) gives the rela-
tions

lim_{|t|→∞} (1/t) ∫_{0}^{t} α(θτ ω)dτ = Eα,   lim_{|t|→∞} (1/t) ∫_{0}^{t} β(θτ ω)dτ = Eβ    (5.60)

on a θ-invariant set Ω ∗ ∈ F of full measure. Without loss of generality we


can suppose that Ω ∗ = Ω (see Remark 1.2.1(ii)). We also assume that f (x)
and g(x) are C 1 functions on R such that

α(ω) · f (0) + β(ω) · g(0) ≥ 0, ω∈Ω,


and equation (5.59) generates an RDS (θ, ϕ) in some interval [0, a] ⊆ R+ .
This is implied by the relation

α(ω) · f (a) + β(ω) · g(a) ≤ 0, ω∈Ω,

in the case 0 < a < ∞ (see Proposition 5.4.3) and by the relation

x · [α(ω) · f (x) + β(ω) · g(x)] ≤ C1 (ω) · x2 + C2 (ω), ω∈Ω,

in the case [0, a] = R+ , where C1 and C2 are random variables such that
t → Cj (θt ω) is locally integrable (see Proposition 5.2.1).
If

g(x) > 0 and f (x) = g(x) · (γ1 G(x) + γ2 ) for 0<x<a, (5.61)

where γ1 and γ2 are constants and G(x) is a primitive for [g(x)]−1 on the
interval (0, a) (i.e. G′(x) = [g(x)]^{−1} , x ∈ (0, a)), then the cocycle ϕ can be
represented in the explicit form. Indeed, if x(t) is a solution to (5.59), then
y(t) = G(x(t)) solves the linear equation

ẏ = γ1 α(θt ω) · y + β(θt ω) + γ2 α(θt ω) .

Therefore the cocycle ϕ has the form

ϕ(t, ω, x) = G^{−1}( G(x) exp( γ1 ∫_{0}^{t} α(θτ ω)dτ )
             + ∫_{0}^{t} (β(θs ω) + γ2 α(θs ω)) exp( γ1 ∫_{s}^{t} α(θτ ω)dτ ) ds ) .    (5.62)

This observation is proved to be useful in the study of bifurcation phenomena


in one-dimensional RDE (see Arnold [3, Chap.9] and also Xu [110]).
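To see (5.62) at work, the sketch below (our own illustration) takes the concrete choice g(x) = x², f(x) = x, so that G(x) = −1/x, G^{−1}(y) = −1/y, γ1 = −1, γ2 = 0 (this is Example 5.6.1 below with N = 1), evaluates the closed-form cocycle by quadrature and compares it with a direct numerical integration of (5.59); the stationary coefficients are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(5)
    U = rng.uniform(0.0, 2.0 * np.pi)
    alpha = lambda t: 0.4 + np.cos(t + U)                 # α(θ_t ω)
    beta  = lambda t: -(1.0 + 0.5 * np.sin(t + U))        # β(θ_t ω) <= 0

    G, G_inv = (lambda x: -1.0 / x), (lambda y: -1.0 / y)
    gamma1, gamma2 = -1.0, 0.0

    def phi_explicit(t, x, n=40_000):
        """Evaluate the closed-form cocycle (5.62) by simple quadrature."""
        s = np.linspace(0.0, t, n)
        ds = s[1] - s[0]
        int_0_to_s = np.cumsum(alpha(s)) * ds             # ≈ ∫_0^s α dτ
        int_s_to_t = int_0_to_s[-1] - int_0_to_s          # ≈ ∫_s^t α dτ
        y = G(x) * np.exp(gamma1 * int_0_to_s[-1]) \
            + np.sum((beta(s) + gamma2 * alpha(s)) * np.exp(gamma1 * int_s_to_t)) * ds
        return G_inv(y)

    def phi_numeric(t, x):
        sol = solve_ivp(lambda tt, xx: alpha(tt) * xx + beta(tt) * xx**2,
                        (0.0, t), [x], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    for t in (1.0, 5.0, 20.0):
        print(t, phi_explicit(t, 0.7), phi_numeric(t, 0.7))   # the two columns agree up to quadrature error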
Example 5.6.1. Consider the RDE

ẋ = α(θt ω) · x + β(θt ω) · xN +1 , x ∈ R+ , (5.63)

where N > 0. If we assume that β(ω) ≤ 0, then equation (5.63) generates an


RDS (θ, ϕ) in R+ . It is easy to see that this system is sublinear. We obviously
have relation (5.61) with γ1 = −N and γ2 = 0 and therefore

ϕ(t, ω, x) = x exp( ∫_{0}^{t} α(θτ ω)dτ ) · [ 1 + N x^N ∫_{0}^{t} |β(θs ω)| exp( N ∫_{0}^{s} α(θτ ω)dτ ) ds ]^{−1/N}    (5.64)

for x > 0 and ϕ(t, ω, 0) = 0. Since β(ω) ≤ 0, the RDS (θ, ϕ) is dominated
from above by the system generated by the linear equation ẋ = α(θt ω) · x.
168 5. Cooperative Random Differential Equations

By the ergodic theorem (see (5.60)) the Lyapunov exponent for this system
is λ = Eα. Therefore if Eα < 0, then limt→∞ ϕ(t, θ−t ω, u(θ−t ω)) = 0 for any
tempered u(ω) ≥ 0. Thus A(ω) = {0} is a random attractor for (θ, ϕ) in the
universe D of all tempered subsets of R+ .
Assume that Eα > 0 and β(ω) ≤ −β0 < 0 for all ω ∈ Ω. Then using
Theorem 5.4.2 (see Remark 5.4.1) we can prove that (θ, ϕ) possesses a random
attractor A(ω) = [0, u(ω)] in the universe D. Here u(ω) ≥ 0 is an equilibrium.
Using (5.64) it is easy to find that

u(ω) = [ N ∫_{−∞}^{0} |β(θs ω)| exp( −N ∫_{s}^{0} α(θτ ω)dτ ) ds ]^{−1/N} .

Moreover, there exists γ > 0 such that

lim_{t→∞} e^{γt} |ϕ(t, θ−t ω, x) − u(ω)| = 0 for all x > 0 and ω ∈ Ω .    (5.65)

If for some b > 0 we have α(ω) + β(ω) · bN ≤ 0 for all ω ∈ Ω, then the equi-
librium u(ω) belongs to the interval (0, b]. Indeed, in this case by Proposition
5.4.3 b is a super-equilibrium, i.e. ϕ(t, θ−t ω, b) ≤ b. Therefore (5.65) implies
that u(ω) ≤ b.

Example 5.6.2. We consider the RDE

ẋ = β(θt ω) · g(x), x ∈ [0, 1] , (5.66)

where g ∈ C 1 (R), g(x) > 0 for x ∈ (0, 1) and g(0) = g(1) = 0. This equation
generates an RDS (θ, ϕ) in [0, 1] with the cocycle

ϕ(t, ω, x) = G^{−1}( G(x) + ∫_{0}^{t} β(θτ ω)dτ ) ,    (5.67)

where G(x) is a primitive for [g(x)]−1 on (0, 1) (relation (5.61) holds with
γ1 = γ2 = 0). It is clear that G(x) is an increasing function such that G(x) →
+∞ as x → 1 and G(x) → −∞ as x → 0. Therefore using (5.60) and (5.67)
we observe the following behaviour of trajectories:

(i) if Eβ < 0, then

lim_{t→∞} ϕ(t, ω, x) = lim_{t→∞} ϕ(t, θ−t ω, x) = 0, x ∈ [0, 1) ;

(ii) if Eβ > 0, then

lim_{t→∞} ϕ(t, ω, x) = lim_{t→∞} ϕ(t, θ−t ω, x) = 1, x ∈ (0, 1] .

In the case Eβ = 0 the dynamics is more complicated. For instance, if


d
β(θt ω) = dt B(θt ω) is a derivative of a stationary process B(θt ω) with abso-
lutely continuous trajectories, then from (5.67) we have

ϕ(t, ω, x) = G−1 (G(x) + B(θt ω) − B(ω)) .

Therefore a random variable uc (ω) = G−1 (c + B(ω)) satisfies the relation


ϕ(t, ω, uc (ω)) = uc (θt ω) for any c ∈ R. Thus we have a continuous family
{uc (ω) : c ∈ R} of equilibria such that uc (ω) → 0 as c → −∞ and uc (ω) → 1
as c → ∞. If B0 := supω∈Ω |B(ω)| < ∞, then

0 < G−1 (G(x) − 2B0 ) ≤ ϕ(t, ω, x) ≤ G−1 (G(x) + 2B0 ) < 1

for all t > 0, ω ∈ Ω and x ∈ (0, 1), i.e. all trajectories emanating from points
x ∈ (0, 1) are separated from the equilibria 0 and 1 uniformly with respect
to t.
On the other hand, if we assume that β ∈ L2 (Ω, F, P), Eβ = 0, and the
process β(θt ω) satisfies the central limit theorem, i.e. the limit

lim_{t→∞} E( t^{−1/2} ∫_{0}^{t} β(θτ ·)dτ )² = σ > 0

exists and

lim_{t→∞} P{ ω : t^{−1/2} ∫_{0}^{t} β(θτ ω)dτ < a } = (1/√(2πσ)) ∫_{−∞}^{a} e^{−ξ²/2σ} dξ

for any a ∈ [−∞, +∞], then it follows from (5.67) that under the conditions
g′(0) > 0 and g′(1) < 0 we have

lim_{t→∞} P{ ω : e^{−√(t log t)} ≤ ϕ(t, ω, x) ≤ e^{−√(t/ log t)} } = 1/2

and

lim_{t→∞} P{ ω : e^{−√(t log t)} ≤ 1 − ϕ(t, ω, x) ≤ e^{−√(t/ log t)} } = 1/2
for all x ∈ (0, 1). We refer to Chueshov/Vuillermot [24] for the proof of
the last two formulas and for other facts on the long-time dynamics of the
RDS generated by (5.66). The last two relations imply that

lim P {ω : ϕ(t, ω, x) ∈ [δ, 1 − δ]} = 0


t→∞

for any δ > 0 and x ∈ (0, 1). This means that dist (ϕ(t, ω, x), {0, 1}) → 0 in
probability as t → ∞. Thus the two-point set A = {0, 1} is a weak point
attractor, i.e. an attractor with respect to convergence in probability.
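For a concrete illustration of (5.66)-(5.67), take g(x) = x(1 − x) (our own choice): then G(x) = log(x/(1 − x)) and G^{−1} is the logistic function, so the cocycle is an explicitly shifted sigmoid. The driving process in the sketch below is an assumption made only for this example.

    import numpy as np

    rng = np.random.default_rng(6)
    U = rng.uniform(0.0, 2.0 * np.pi)

    def phi(t, x, E_beta, n=50_000):
        """Cocycle (5.67) for g(x) = x(1 - x): sigmoid(logit(x) + ∫_0^t β(θ_τ ω) dτ)."""
        beta = lambda s: E_beta + np.cos(s + U)           # only the sign of Eβ matters in the long run
        s = np.linspace(0.0, t, n)
        drift = np.sum(beta(s)) * (s[1] - s[0])           # ≈ ∫_0^t β(θ_τ ω) dτ
        return 1.0 / (1.0 + np.exp(-(np.log(x / (1.0 - x)) + drift)))

    for E_beta in (-0.3, 0.3):
        print(E_beta, round(phi(50.0, 0.5, E_beta), 5))
    # Eβ < 0 drives the trajectory towards 0 and Eβ > 0 towards 1, as in (i)-(ii) above.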

Example 5.6.3. Consider a one-dimensional RDE of the form

ẋ = α(θt ω) · sin x + β(θt ω) · (1 − cos x) (5.68)

over an ergodic metric dynamical system θ, where α(ω) and β(ω) are random
variables such that t → α(θt ω) and t → β(θt ω) are locally integrable. This
equation generates an RDS in R. Since any interval [2πk, 2π(k + 1)] is an
invariant set and equation (5.68) is invariant with respect to the change
x → x+2πk, we consider the problem on the unit circle C which is interpreted
as the interval [0, 2π] with identified end-points.
A simple calculation shows that relation (5.61) holds with γ1 = −1, γ2 = 0
and G(x) = − cot(x/2). Therefore equation (5.68) generates an RDS (θ, ϕ) in C
with the cocycle

ϕ(t, ω)x = 2arccot (−y(t, ω; − cot(x/2))) , x ∈ (0, 2π) ,

where y(t, ω; y0 ) solves the affine equation

ẏ = −α(θt ω) · y + β(θt ω), y(0) = y0 .

Therefore ϕ(t, ω)0 = 0 and

ϕ(t, ω)x = 2 arccot( cot(x/2) · e^{−∫_{0}^{t} α(θτ ω)dτ} − ∫_{0}^{t} β(θs ω) e^{−∫_{s}^{t} α(θτ ω)dτ} ds )

for x ∈ C \ {0}. Assume that α ∈ L1 (Ω, F, P). It follows from the considera-
tions presented in Example 2.1.2 that in both cases Eα > 0 and Eα < 0 the
RDS (θ, ϕ) on the circle C has two equilibria u0 ≡ 0 and either

u+ (ω) = 2 arccot( − ∫_{−∞}^{0} β(θs ω) e^{−∫_{s}^{0} α(θτ ω)dτ} ds ) if Eα > 0 ,

or

u− (ω) = 2 arccot( ∫_{0}^{∞} β(θs ω) e^{∫_{0}^{s} α(θτ ω)dτ} ds ) if Eα < 0 .

If Eα > 0, then Proposition 1.9.3 implies that

ϕ(t, θ−t ω)v(θ−t ω) → u+ (ω) as t→∞

for any random variable v(ω) from the interval [ε, 2π − ε], where ε > 0 is
arbitrary.
In the case Eα < 0 using the representation

y(t, ω; y0 ) = − cot( u− (θt ω)/2 ) + ( y0 + cot( u− (ω)/2 ) ) · e^{−∫_{0}^{t} α(θτ ω)dτ}

we obtain that
ϕ(t, θ−t ω)v(θ−t ω) = 2arccot z(t, ω)
for any v(ω) ∈ C \ {0}, where

z(t, ω) = − cot( u− (ω)/2 ) − ( cot( v(θ−t ω)/2 ) − cot( u− (θ−t ω)/2 ) ) · e^{−∫_{−t}^{0} α(θτ ω)dτ} .

Hence in the circle C we have the relation

ϕ(t, θ−t ω)v(θ−t ω) → 0 as t→∞

provided that

| cot( v(ω)/2 ) − cot( u− (ω)/2 ) | ≥ δ(ω) > 0 ,
where δ(ω)−1 is a tempered random variable.
Thus we observe the exchange of stability between the equilibria u0 (ω)
and u± (ω) when Eα changes sign in the RDS (θ, ϕ) generated by (5.68) in
the circle C.
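A numerical check of this picture (our own sketch; the stationary coefficients are assumptions): for Eα > 0 the integral formula for u+(ω) can be evaluated by quadrature and tested against the equilibrium identity ϕ(t, ω)u+(ω) = u+(θt ω).

    import numpy as np

    rng = np.random.default_rng(7)
    U = rng.uniform(0.0, 2.0 * np.pi)
    alpha = lambda t: 0.6 + np.cos(t + U)                 # Eα = 0.6 > 0
    beta  = lambda t: 0.5 * np.sin(2.0 * (t + U))

    def u_plus(shift, T=200.0, n=400_000):
        """u_+(θ_shift ω) via its pull-back integral formula, by Riemann sums."""
        s = np.linspace(-T, 0.0, n)
        ds = s[1] - s[0]
        int_s_to_0 = (np.cumsum(alpha(s + shift)[::-1]) * ds)[::-1]
        y = np.sum(beta(s + shift) * np.exp(-int_s_to_0)) * ds
        return (2.0 * np.arctan(-1.0 / y)) % (2.0 * np.pi)   # = 2 arccot(-y) read on the circle

    def phi(t, x, n=200_000):
        """Cocycle of (5.68) through the substitution y = -cot(x/2)."""
        s = np.linspace(0.0, t, n)
        ds = s[1] - s[0]
        I = np.cumsum(alpha(s)) * ds                          # ≈ ∫_0^s α dτ
        y = -np.exp(-I[-1]) / np.tan(x / 2.0) \
            + np.sum(beta(s) * np.exp(-(I[-1] - I))) * ds
        return (2.0 * np.arctan(-1.0 / y)) % (2.0 * np.pi)

    t = 3.0
    print(phi(t, u_plus(0.0)), u_plus(t))   # equilibrium property: the two values nearly coincide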

5.7 Applications

As the main example of an application of the theory developed we consider a


random model of the control protein synthesis in the cell (for the deterministic
case see, e.g., Smith [102] and the references therein).

5.7.1 Random Biochemical Control Circuit

Consider the system of random differential equations

ẋ1 (t) = g(θt ω, xd (t)) − α1 (θt ω)x1 (t) , (5.69)

ẋj (t) = xj−1 (t) − αj (θt ω)xj (t), j = 2, . . . , d . (5.70)


Here {αj (ω)} are random variables for which t → αj (θt ω) is locally integrable.
We assume that g : Ω ×R → R is measurable and x → g(ω, x) is continuously
differentiable for every ω ∈ Ω. Moreover there exist positive random variables
b(ω) and c(ω) and a deterministic constant a > 0 possessing the properties

0 ≤ g(ω, x) ≤ a · x + b(ω) and 0 < g′(ω, x) ≤ c(ω)    (5.71)

for every ω ∈ Ω and for every x > 0. We assume also that t → b(θt ω) and
t → c(θt ω) are locally integrable. We use g′ to denote the derivative with
respect to the space variable.

The values xj represent concentrations of various macro-molecules in the


cell and therefore must be nonnegative. It is easy to see that assumptions
(R1)–(R4) are valid here. Hence equations (5.69) and (5.70) generate a strictly
order-preserving RDS (θ, ϕ) in the cone Rd+ . We note that the above equations
reduce to the standard deterministic equations of a biochemical control circuit
(see Smith [102]), when g(x) and αj are nonrandom.
Assume now that there exist positive nonrandom constants αj , j =
1, . . . , d, such that αj (ω) ≥ αj > 0, j = 1, . . . , d, and consider the follow-
ing affine RDE
ẋ1 (t) = a · xd (t) − α1 x1 (t) + b(θt ω) , (5.72)
ẋj (t) = xj−1 (t) − αj xj (t), j = 2, . . . , d . (5.73)
It is clear that equations (5.72) and (5.73) generate a strictly order-preserving
RDS (θ, ψ) in the cone Rd+ . Comparison Theorem 5.3.1 implies that this RDS
dominates (θ, ϕ) from above. The cocycle ψ of this system has the form

ψ(t, ω)x = e^{tA} x + ξ(t, ω),   ξ(t, ω) := ∫_{0}^{t} e^{(t−τ )A} B(θτ ω) dτ ,    (5.74)

where A is the matrix with all entries equal to zero, except ajj = −αj ,
aj,j−1 = 1 and a1d = a, B(ω) is the column whose only nonzero element is
b1 (ω) = b(ω). Since the eigenvalues λ of the matrix A satisfy the equation
∏_{j=1}^{d} (λ + αj ) = a, it is easy to see that Reλ < 0 provided that ∏_{j=1}^{d} αj > a,
which we assume to make A stable.
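The stability claim for A is easy to verify numerically for sample values of the αj (the numbers below are arbitrary illustrative choices, not from the text):

    import numpy as np

    alphas = np.array([1.2, 0.9, 1.5])             # illustrative values of α_1, ..., α_d
    a = 1.0                                        # feedback gain; note prod(alphas) = 1.62 > a
    d = len(alphas)

    A = np.diag(-alphas) + np.diag(np.ones(d - 1), k=-1)   # a_jj = -α_j, a_{j,j-1} = 1
    A[0, d - 1] += a                                        # a_{1d} = a
    eigs = np.linalg.eigvals(A)
    print(np.prod(alphas) > a, eigs.real.max())             # True, and the spectral abscissa is negative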
Assume also that b(ω) is tempered. Then it is easy to see that t →
ξ(t, θ−t ω) is bounded for all ω ∈ Ω. Therefore by Theorem 4.6.1 and Re-
mark 4.6.1

w(ω) := lim_{t→∞} ξ(t, θ−t ω) = ∫_{−∞}^{0} e^{−τ A} B(θτ ω) dτ    (5.75)

exists and is a tempered equilibrium for (θ, ψ). Since

ψ(t, ω)y − w(θt ω) = etA (y − w(ω)) ,

this equilibrium uniformly attracts all tempered random sets with exponential
speed, i.e. there exists γ > 0 such that

lim eγt sup |ψ(t, θ−t ω)y − w(ω)| = 0


t→+∞ y∈D(θ−t ω)

for any D(ω) ∈ D, where D is the universe of all random tempered sets in
Rd+ .
By the comparison principle the random variable µw(ω) is a super-
equilibrium for the RDS (θ, ϕ) generated by (5.69) and (5.70) for any µ ≥ 1.

Consequently by Proposition 3.5.2 the RDS (θ, ϕ) possesses an equilibrium


u(ω) such that 0 ≤ u(ω) ≤ w(ω). By Proposition 3.7.1 (θ, ϕ) is dissipative
in D. Therefore it has a random attractor A in D, and since v(ω) ≡ 0 is
evidently a sub-equilibrium for (θ, ϕ), this attractor belongs to the interval
[0, w](ω) := {v : 0 ≤ v ≤ w(ω)} and the conclusions of Theorem 3.6.2 on
the structure of random attractors are valid.
If we assume in addition that g(ω, 0) > 0 for all ω ∈ Ω, then the affine
system
ẋ1 (t) = −α1 (θt ω)x1 (t) + g(θt ω, 0) ,
ẋj (t) = xj−1 (t) − αj (θt ω)xj (t), j = 2, . . . , d .
dominates (θ, ϕ) from below. It possesses a unique globally asymptotically
stable equilibrium v(ω) = (v1 (ω), . . . , vd (ω)) with

v1 (ω) = ∫_{−∞}^{0} g(θτ ω, 0) · exp( − ∫_{τ}^{0} α1 (θs ω)ds ) dτ ,

and

vj (ω) = ∫_{−∞}^{0} vj−1 (θτ ω) · exp( − ∫_{τ}^{0} αj (θs ω)ds ) dτ, j = 2, . . . , d .

For every ω the equilibrium v(ω) belongs to intRd+ and v(ω) ≤ w(ω), where
w(ω) is given by (5.75). The comparison principle gives that µv(ω) is a sub-
equilibrium of (θ, ϕ) for any 0 ≤ µ ≤ 1. Therefore the random attractor of
(θ, ϕ) is contained in the interval [v(ω), w(ω)]. According to Theorem 3.6.2
this attractor lies between two equilibria u(ω) and u(ω) such that 0  v(ω) ≤
u(ω) ≤ u(ω) ≤ w(ω). Moreover, u(ω) = u(ω) almost surely provided that the
function g(ω, x) possesses the property

λg(ω, x) < g(ω, λx) for 0 < λ < 1, ω ∈ Ω .

In this case the condition (ii) of Lemma 5.5.1 holds. This implies that the
system (θ, ϕ) is strongly sublinear and therefore we can apply Theorem 4.2.1
on the uniqueness of strongly positive equilibria.
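A simulation sketch of the circuit (5.69)-(5.70) follows (our own illustration; the dimension d = 3, the Hill-type nonlinearity and the driving processes are assumptions): two ordered initial states stay ordered and merge, in line with the uniqueness of the strongly positive equilibrium in the sublinear case.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(8)
    U = rng.uniform(0.0, 2.0 * np.pi, size=4)
    alphas = lambda t: 1.0 + 0.3 * np.cos(t + U[:3])       # α_j(θ_t ω), j = 1, 2, 3
    b      = lambda t: 0.5 + 0.2 * np.sin(t + U[3])        # b(θ_t ω) >= 0.3

    def rhs(t, x):
        a = alphas(t)
        g = b(t) + x[2] / (1.0 + x[2])                     # sublinear production term g(θ_t ω, x_d)
        return [g - a[0] * x[0],
                x[0] - a[1] * x[1],
                x[1] - a[2] * x[2]]

    T = np.linspace(0.0, 60.0, 601)
    lo = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, 0.0], t_eval=T, rtol=1e-8)
    hi = solve_ivp(rhs, (0.0, 60.0), [5.0, 5.0, 5.0], t_eval=T, rtol=1e-8)

    print(np.all(lo.y <= hi.y + 1e-7))                     # order is preserved along the whole run
    print(np.abs(lo.y[:, -1] - hi.y[:, -1]).max())         # the two solutions have (almost) merged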
We now consider the case when g(ω, 0) = 0 for all ω ∈ Ω. In this case
the attractor A lies between the two equilibria u(ω) ≡ 0 and ū(ω) ≥ 0. We
can guarantee that ū(ω) ≫ 0 if we assume, for instance, that there exist
nonrandom constants αj∗ , j = 1, . . . , d, and a function g0 (x) such that

αj (ω) ≤ αj∗ , j = 1, . . . , d, g0 (x) ≤ g(ω, x) for all ω ∈ Ω

and

∏_{j=1}^{d} αj∗ < lim sup_{x→+0} g0 (x)/x ≤ +∞ .    (5.76)

Indeed, let

v(n) = (v1n , . . . , vdn ) := εn ( ∏_{j=2}^{d} αj∗ , ∏_{j=3}^{d} αj∗ , . . . , αd∗ , 1 ) ≫ 0 ,

where εn are positive numbers. Since


 
g(ω, v_d^n) − α_1(ω)v_1^n ≥ g_0(ε_n) − ε_n ∏_{j=1}^{d} α_j^∗ = ε_n ( g_0(ε_n)/ε_n − ∏_{j=1}^{d} α_j^∗ ) ,

equation (5.76) implies that there exists a sequence {εn }, εn > 0, εn → 0,


such that
g(ω, vdn ) − α1 (ω)v1n > 0 .
We also have the relation
v_{i−1}^n − α_i(ω)v_i^n ≥ v_{i−1}^n − α_i^∗ v_i^n = 0 .

Thus by Proposition 5.4.3 v (n) is a sub-equilibrium for any n = 1, 2, . . .. This


implies the instability of u(ω) ≡ 0 and the strong positivity of ū(ω). Since
ū(ω) is the maximal equilibrium in the attractor, Theorem 3.6.2 implies that
ū(ω) is stable from above. Its stability from below in the strongly sublinear
case is equivalent to the property

ϕ(t, θ_{−t}ω)v^{(n)} → ū(ω)   as   t → ∞   (5.77)

for every n almost surely. If (5.77) is not true, then there exists another
strongly positive equilibrium w(ω) such that ϕ(t, θ−t ω)v (n) → w(ω). This
contradicts the uniqueness of strongly positive equilibria for strongly sublin-
ear RDS. Thus the equilibrium ū(ω) is stable.
If g′(ω, x) is a strictly decreasing function of x for every ω ∈ Ω, then Propo-
sition 5.5.2 implies that the RDS (θ, ϕ) is s-concave. Therefore if for some
nonrandom g_0^∗

g′(ω, 0) ≤ g_0^∗ < ∏_{j=1}^{d} α_j ,   (5.78)

the system (θ, ϕ) is dominated from above by the RDS generated by the
linear equations
ẋ1 (t) = g0∗ · xd (t) − α1 x1 (t) , (5.79)
ẋj (t) = xj−1 (t) − αj xj (t), j = 2, . . . , d . (5.80)
However, assumption (5.78) means that all the eigenvalues of problem (5.79)
and (5.80) possess the property Re λ_j < 0. Thus the zero equilibrium of
(5.79) and (5.80) is exponentially stable. This implies that the random at-
tractor A(ω) of (θ, ϕ) is trivial, i.e. A(ω) = {0}.

5.7.2 Random Gonorrhea Model

Let us consider a system of random differential equations of the following


form

ẋj (t) = fj (θt ω, x1 (t), . . . , xd (t), p1 − x1 (t), . . . , pd − xd (t)),


j = 1, . . . , d .
(5.81)
Here p = (p1 , . . . , pd ) is a fixed point from int Rd+ and fj (ω, x, y) are mea-
surable functions on Ω × [0, p] × [0, p], where [0, p] is the interval in Rd+ with
end-points 0 and p. We also assume that for every ω ∈ Ω
(a) fj (ω, x, y) is a continuously differentiable function on [0, p] × [0, p] such
that t → fj (θt ω, 0, p) is locally integrable and the partial derivatives of
fj (ω, ·, ·) are bounded by C(ω) such that C(θt ω) ∈ L1loc (R), j = 1, . . . , d;
(b) we have
fj (ω, x, p − x) ≥ 0, j = 1, . . . , d ,
for all x ∈ [0, p] of the form x = (x1 , . . . , xj−1 , 0, xj+1 , . . . , xd ) and

fj (ω, x, p − x) ≤ 0, j = 1, . . . , d ,

for all x ∈ [0, p] of the form x = (x1 , . . . , xj−1 , pj , xj+1 , . . . , xd );


(c) the function

f (ω, x, p − x) = (f1 (ω, x, p − x), . . . , fd (ω, x, p − x))

satisfies the cooperativity condition, i.e.


 
( ∂f_i(ω, x, y)/∂x_j − ∂f_i(ω, x, y)/∂y_j )|_{(x, p−x)} ≥ 0   if i ≠ j, 0 < x < p .

It is easy to see that equations (5.81) possess a local solution for any initial
data from {x : 0 ≤ x ≤ p}. Assumption (b) and Proposition 5.4.3 imply
that the interval [0, p] is a forward invariant set. This makes it possible to
guarantee the global existence of solutions to (5.81) with initial data from
[0, p] for every ω and therefore equations (5.81) generate an RDS with state
space X = [0, p] ⊂ Rd+ . Assumption (c) implies that this RDS is strictly
order-preserving (see Theorem 5.2.1). It is clear that w(ω) ≡ p is a super-
equilibrium and v(ω) ≡ 0 is a sub-equilibrium. Therefore Theorem 3.5.1
implies the existence of an equilibrium u(ω) with the property

0 ≤ u(ω) ≤ p for all ω ∈ Ω .

If f (ω, 0, p) = (f1 (ω, 0, p), . . . , fd (ω, 0, p)) > 0 then u(ω) > 0 by Proposi-
tion 5.4.1.
We note that the deterministic version of the equations (5.81) first appeared in
Hirsch [53] as a generalization of gonorrhea transmission models previously
considered. The time-periodic version of (5.81) was studied in Smith [101],


Takač [103], see also Krause/Ranft [73].
The assumptions above are met, if we suppose, for instance,


f_j(ω, x, p − x) = −α_j(ω)x_j + (p_j − x_j) ∑_{i=1}^{d} β_{ji}(ω)x_i .   (5.82)

Here αj (ω) and βji (ω) are random variables such that t → αj (θt ω) and
t → βji (θt ω) are locally integrable and satisfy the inequalities

α_j(ω) ≥ α_j^0 > 0,   β_{ji}(ω) ≥ β_{ji}^0 > 0   for all ω ∈ Ω ,

where α_j^0 and β_{ji}^0 are nonrandom constants. Biologically, equations (5.81)
with fj of the type (5.82) correspond to the population divided into d groups,
each of constant population size pj . The variable xj denotes the number
infected with gonorrhea in the j-th group, pj −xj is the number of susceptibles
in the j-th group, β_{ji} is the rate at which group i infects the j-th group
and α_j is the cure rate. The randomness of α_j and β_{ji} can be interpreted as
seasonal fluctuations.
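The following Python sketch integrates (5.81) with the right-hand side (5.82) by the Euler scheme; the group sizes, cure rates and infection rates are hypothetical and are chosen only to satisfy the assumptions above (in particular α_j ≥ α_j^0 > 0 and β_{ji} > 0). It illustrates numerically that the interval [0, p] is forward invariant.

import numpy as np

# Numerical sketch of the random gonorrhea model (5.81)-(5.82); all parameters
# below are hypothetical, chosen only to satisfy the stated assumptions.

d = 2
p = np.array([100.0, 80.0])                       # group sizes p_j
rng = np.random.default_rng(1)
ph_a = rng.uniform(0, 2*np.pi, size=d)            # random phases playing the role of omega
ph_b = rng.uniform(0, 2*np.pi, size=(d, d))

def alpha(t):                                     # cure rates, >= alpha_j^0 > 0
    return 0.2 + 0.05*np.sin(t + ph_a)

def beta(t):                                      # infection rates beta_ji > 0
    return 2e-3*(1.0 + 0.3*np.sin(t + ph_b))

def f(t, x):                                      # right-hand side (5.82)
    return -alpha(t)*x + (p - x)*(beta(t) @ x)

x, dt = np.array([5.0, 1.0]), 1e-3                # initial infections in [0, p]
for k in range(200_000):                          # Euler steps up to t = 200
    x = x + dt*f(k*dt, x)

print("x(200) =", x, " stays in [0, p]:", bool(np.all(x >= 0) and np.all(x <= p)))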
The mapping f = (f_1 , . . . , f_d) given by (5.82) is s-concave (cf. (5.55)).
Therefore Proposition 5.5.2 implies that the RDS generated by (5.81) with
(5.82) is s-concave. Moreover the function (5.82) admits the estimate


−α_j(ω)x_j + β_{jj}(ω)(p_j − x_j)x_j ≤ f_j(ω, x, p − x) ≤ −α_j(ω)x_j + p_j ∑_{i=1}^{d} β_{ji}(ω)x_i

for x ∈ [0, p]. Using the Comparison Theorem 5.3.1 we find that the RDS
generated by (5.81) and (5.82) is dominated from above by a linear system
and from below by the direct product of one-dimensional RDS. These prop-
erties make it possible (see Theorem 5.5.3 and Example 5.6.1) to give
conditions on α_j(ω) and β_{ji}(ω) which ensure one of the following cases:
(a) u(ω) ≡ 0 is globally stable and A = {0};
(b) there exists a strictly positive equilibrium ū(ω) and A ⊂ [0, ū], where
u(ω) ≡ 0 is unstable and ū(ω) is stable.

5.7.3 Random Model of Symbiotic Interaction

We consider the RDE

ẋj = xj hj (θt ω, x1 , . . . , xd ), j = 1, . . . , d . (5.83)

These equations arise in the model of symbiosis between d populations, for


instance. Here xj is the size of the j-th population. For a deterministic ver-
sion of this model see, e.g., Smith [102] and the references therein and also
Krause/Ranft [73], where the periodic case is discussed.

We assume that the function h = (h1 , . . . , hd ) satisfies (R1), (R3), (R4)


and is bounded for every ω ∈ Ω (hence (R2) holds). Under these conditions
equations (5.83) generate an order-preserving RDS (θ, ϕ) in Rd+ possessing
the following invariance property: for every subset I of N = {1, . . . , d} the
set
KI = {x = (x1 , . . . , xd ) ∈ Rd+ : xj = 0, j ∈ N \ I}
is invariant with respect to (θ, ϕ). The restriction of (θ, ϕ) to KI is described
by the RDE
ẋj = xj hj (θt ω, pI x), j ∈ I .
where pI is a projector in Rd defined by (pI x)j = xj for j ∈ I and (pI x)j = 0
if j ∈ N \ I.
This example demonstrates a typical symbiotic interaction, i.e. interac-
tion that can result in benefits for several populations as far as their size is
concerned. To see this assume that

h_j(ω, x) = α_j(ω, x_j) + ∑_{i≠j} g_i(ω, x_i) ,   (5.84)

where αj (ω, x) and gi (ω, x) are functions on Ω × R+ such that the conditions
on hj mentioned above are valid. We also assume that

0 ≤ g_i(ω, 0) ≤ g_i(ω, x) ≤ M,   and   g_i(ω, x) > 0 for every x > 0 .   (5.85)
We choose αj (ω, x) such that the RDS generated in R+ by the equation

ẋ = x · αj (θt ω, x) (5.86)

has a positive equilibrium for every j = 1, . . . , d. For this we can assume, for
example, that
αj (ω, x) ≥ αj0 > 0 for 0 ≤ x ≤ δ
and
αj (ω, x) ≤ −βj0 < 0 for x ≥ R ,
where αj0 , δ, βj0 and R are positive nonrandom constants. We note that equa-
tion (5.86) describes the evolution of the j-th population independent of
the others. Denote the positive equilibrium for (5.86) by vj (ω). Assumptions
(5.85) ensure that the RDS generated in Rd+ by

ẋj = xj · αj (θt ω, xj ), j = 1, . . . , d ,

dominates the RDS (θ, ϕ) generated by (5.83) with (5.84) from below. There-
fore v(ω) = (v1 (ω), . . . , vd (ω)) is a positive sub-equilibrium for (θ, ϕ). On the
other hand under condition (5.85) the system generated by

ẋj = xj · (αj (θt ω, xj ) + (d − 1)M ), j = 1, . . . , d ,



dominates (θ, ϕ) from above. If βj0 > (d − 1)M this RDS has a super-
equilibrium w(ω) such that w(ω) > v(ω). Therefore Theorem 3.5.1 implies the
existence of an equilibrium u(ω) such that v(ω) ≤ u(ω) ≤ w(ω). This equi-
librium attracts (from below) the collection of equilibria (v1 (ω), . . . , vd (ω))
which correspond to the isolated dynamics of each population. We hence
observe that the interaction results in a benefit for all populations.
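A deterministic caricature of this effect can be computed directly. In the Python sketch below (all coefficients hypothetical) we take d = 2, α_j(ω, x) = a_j − x and g_i(ω, x) = M x/(1 + x) ≤ M, so that (5.85) holds; the equilibrium of the coupled system exceeds the isolated equilibria (a_1, a_2) in every component.

import numpy as np

# Sketch with hypothetical coefficients: symbiosis model (5.83)-(5.84), d = 2,
# logistic self-limitation and bounded coupling.  Isolated equilibria are a_j;
# with the interaction switched on, both components become larger.

a = np.array([1.0, 1.5])
M = 0.4

def rhs(x, coupled):
    g = M*x/(1.0 + x)
    interaction = (g.sum() - g) if coupled else 0.0     # sum of g_i over i != j
    return x*((a - x) + interaction)

def equilibrium(coupled, T=300.0, dt=1e-3):
    x = np.array([0.1, 0.1])
    for _ in range(int(T/dt)):
        x = x + dt*rhs(x, coupled)
    return x

print("isolated equilibria :", equilibrium(False))      # ~ (1.0, 1.5)
print("coupled equilibrium :", equilibrium(True))       # componentwise larger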

5.7.4 Random Gross-Substitute System

The deterministic gross-substitute system represents the law of supply and


demand in economics (see, e.g., Nakajima [84] and Sell/Nakajima [97]
and the references therein). Here we consider a random version of this system.
Such a generalization seems to be natural because it reflects changes due to
random impacts. We consider the RDE

ẋi = fi (θt ω, x1 , . . . , xd ), i = 1, . . . , d , (5.87)

where the function f : Ω × Rd+ → Rd satisfies conditions (R1)–(R4) and also


Walras’ law
∑_{i=1}^{d} f_i(ω, x_1 , . . . , x_d) = 0   (5.88)

for all ω ∈ Ω and x ∈ intRd+ . The simplest example of a system satisfying


(R1)–(R4) and (5.88) is given by the following RDE

ẋ_i = ∑_{j=1}^{d} a_{ij}(θ_t ω) · h_j(x_j),   i = 1, . . . , d,

where the matrix aij (ω) satisfies the cooperativity condition, i.e. aij (ω) ≥ 0
for all i ≠ j and ω ∈ Ω and


∑_{i=1}^{d} a_{ij}(ω) = 0   for all ω ∈ Ω, j = 1, . . . , d ,

and hj : R+ → R+ are nondecreasing functions such that hj (0) = 0.
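The conservation law behind Walras' law (5.88) is easy to observe numerically. The Python sketch below (with hypothetical a_{ij} and h_j = tanh) builds a random cooperative matrix with zero column sums and checks that ∑_i x_i(t) stays constant along an Euler approximation of the solution.

import numpy as np

# Toy numerical check (hypothetical a_ij and h_j): zero column sums of a_ij
# give Walras' law (5.88), so sum_i x_i is a first integral of the flow.

rng = np.random.default_rng(2)
phases = rng.uniform(0, 2*np.pi, size=(3, 3))

def a_matrix(t):
    A = 0.5 + 0.2*np.sin(t + phases)          # positive off-diagonal entries
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=0))       # make every column sum to zero
    return A

h = np.tanh                                   # nondecreasing with h(0) = 0

x, dt = np.array([0.5, 1.0, 1.5]), 1e-3
total0 = x.sum()
for k in range(100_000):                      # Euler steps up to t = 100
    x = x + dt*(a_matrix(k*dt) @ h(x))
print("sum at t = 0  :", total0)
print("sum at t = 100:", x.sum())             # conserved up to the O(dt) Euler error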


Theorem 5.2.1 implies that (5.87) generates a strictly order-preserving
RDS in Rd+ . It is easy to see (cf. Corollary 5.4.1) that for any β ≥ 0 the set
 

Σ_β = { x ∈ int R^d_+ : ∑_{i=1}^{d} x_i = β }

is a forward invariant set for the RDS (θ, ϕ) generated by (5.87). Therefore
the closure Σ β of Σβ is also a forward invariant set.

Let us prove that the RDS (θ, ϕ) is nonexpansive with respect to the
l_1-norm defined by the formula |x|_1 = ∑_{i=1}^{d} |x_i| for x = (x_1 , . . . , x_d), i.e.

|ϕ(t, ω)x − ϕ(t, ω)y|1 ≤ |x − y|1 , x, y ∈ Rd+ , t > 0, ω ∈ Ω . (5.89)

Indeed, let

x(t) = ϕ(t, ω)x, y(t) = ϕ(t, ω)y and z(t) = ϕ(t, ω)z ,

where z = x ∨ y ≡ sup{x, y}. It is clear that z(t) ≥ x(t) ∨ y(t) for all t > 0.
Therefore

|x(t) − y(t)|_1 = ∑_{i=1}^{d} (2 max{x_i(t), y_i(t)} − x_i(t) − y_i(t))
               ≤ ∑_{i=1}^{d} (2z_i(t) − x_i(t) − y_i(t)) .   (5.90)

Consequently the invariance of the closure of Σ_β and the relation z_i = max{x_i , y_i} imply
that

|x(t) − y(t)|_1 ≤ ∑_{i=1}^{d} (2z_i − x_i − y_i) = ∑_{i=1}^{d} |x_i − y_i| = |x − y|_1

for any x, y ∈ R^d_+. Thus we have (5.89).
If the matrix {∂f_i/∂x_j (ω, x)} is irreducible for every x ≫ 0 and ω ∈ Ω, then
by Theorem 5.2.1 the RDS (θ, ϕ) is strongly order-preserving in int Rd+ . In
this case the restriction (θ, ϕβ ) of (θ, ϕ) to Σβ is contractive for each β > 0.
Indeed, if x, y ∈ Σ_β for some β > 0 and x ≠ y, then z > x and z > y. Since
(θ, ϕ) is strongly order-preserving, the last relation implies that

z(t) = ϕ(t, ω)z ≫ x(t)   and   z(t) = ϕ(t, ω)z ≫ y(t) .

Consequently z(t) > x(t) ∨ y(t) for t > 0. Therefore we have strict inequality
in (5.90) and obtain (5.89) with strict inequality provided that x, y ∈ Σ_β
and x ≠ y. Thus (θ, ϕ_β) is contractive for each β > 0. Proposition 1.7.1
implies that every set Σ_β contains at most one (up to indistinguishability)
equilibrium.
Let us attempt to prove the existence of these equilibria. Since Σ β is
a compact set, the RDS (θ, ϕβ ) possesses a random attractor Aβ ⊆ Σ β .
Lemma 3.4.1 implies that

wβ (ω) = (w1β (ω), . . . , wdβ (ω)) := sup Aβ (ω)



is a sub-equilibrium such that β ≤ |wβ (ω)|1 ≤ d · β. Moreover 0 ≤ wβ (ω) ≤


β · e, where e = (1, . . . , 1). It is clear that ϕ(t, θ−t ω)(βe) ≤ βd · e. Therefore,
since the cone Rd+ is regular, Proposition 3.5.2 implies that

u^β(ω) := lim_{t→+∞} ϕ(t, θ_{−t}ω)w^β(θ_{−t}ω) = sup_{t>0} ϕ(t, θ_{−t}ω)w^β(θ_{−t}ω)

exists for any β > 0 and is an equilibrium such that


 
u^β(ω) ∈ { x ∈ R^d_+ : β ≤ ∑_{i=1}^{d} x_i ≤ d^2 β } ∩ [0, βd e] .

It follows from the last relation that the RDS (θ, ϕ) possesses infinitely many
equilibria. Since x(t) = uβ (θt ω) solves (5.87), we have that


∑_{i=1}^{d} u_i^β(θ_t ω) = ∑_{i=1}^{d} u_i^β(ω),   t > 0, ω ∈ Ω .

Therefore if θ is an ergodic metric dynamical system, then there exist β ∗ ∈


[β, d^2 β] and a θ-invariant set Ω^∗ ⊂ Ω of full measure such that u^β(ω) belongs to the closure of Σ_{β^∗} for
all ω ∈ Ω ∗ . Moreover in the ergodic case the invariance of Σβ implies that
any semi-equilibrium is an equilibrium on a θ-invariant set of full measure.
We conjecture that every set Σ β contains an equilibrium. However we cannot
prove it now.

5.8 Order-Preserving RDE with Non-Standard Cone

In previous sections we have considered RDE with order-preserving properties


with respect to the standard cone Rd+ . However by considering other cones
besides Rd+ we can enlarge the area of possible applications of monotone
methods to the study of random equations.
Here we restrict our attention to the case that the cone is one of the
orthants of Rd . Let m = (m1 , . . . , md ), where mi ∈ {0, 1}. Consider the cone
 
K_m := { x = (x_1 , . . . , x_d) ∈ R^d : (−1)^{m_i} x_i ≥ 0, i = 1, . . . , d } .

This cone generates a partial order ≤m defined by x ≤m y if and only if


y − x ∈ Km . Let P be the diagonal matrix defined by

P = diag {(−1)m1 , . . . , (−1)md } .

We note that x ≤m y if and only if P x ≤ P y, where ≤ is the order relation


generated by Rd+ .

Assume that (θ, ϕ) is an RDS in some domain D ⊆ Rd generated by the


random equation
ẋ(t) = f (θt ω, x(t)) . (5.91)
It is easy to see that the cocycle ϕ can be represented in the form

ϕ(t, ω)x = P −1 ψ(t, ω)P x, x∈D,

where ψ is the cocycle of the RDS in P D generated by the RDE

ẏ(t) = g(θt ω, y(t)), with g(ω, y) = P f (ω, P −1 y) . (5.92)

Thus the random systems generated by (5.91) and (5.92) are conjugate. Since
P = P −1 is an order isomorphism, the RDS (θ, ϕ) is order-preserving with
respect to the cone Km if and only if (θ, ψ) is order-preserving with respect
to the cone Rd+ . Hence after simple calculations we find that the condition

(−1)^{m_i+m_j} ∂f_i(ω, x)/∂x_j ≥ 0,   i ≠ j,  x = (x_1 , . . . , x_d) ∈ D,  ω ∈ Ω ,   (5.93)

implies that (θ, ϕ) generated by (5.91) is order-preserving with respect to the


cone Km .
As an example we consider the random version of the model of two
competing populations occupying an environment consisting of two discrete
patches between which they can migrate. This model of competition and
migration is described by the following system of random equations

ẋ1 =ε(x2 − x1 ) + x1 (a1 − b1 (θt ω)x1 − c1 (θt ω)x3 ) ,


ẋ2 =ε(x1 − x2 ) + x2 (a2 − b2 (θt ω)x2 − c2 (θt ω)x4 ) ,
(5.94)
ẋ3 =δ(x4 − x3 ) + x3 (a3 − b3 (θt ω)x3 − c3 (θt ω)x1 ) ,
ẋ4 =δ(x3 − x4 ) + x4 (a4 − b4 (θt ω)x4 − c4 (θt ω)x2 ) .

Here x1 and x2 denote the population density of one population in patches


1 and 2 respectively and x3 and x4 denote the population density of the
second population in patches 1 and 2. No mortality is suffered during migra-
tion between patches. We assume that ε, δ and aj are positive deterministic
parameters. The parameters ε and δ are the migration coefficients and aj
is the intrinsic growth rate of the corresponding population in patch 1 or
2. The terms bj (θt ω) and cj (θt ω) describe randomly fluctuating interaction
rates between the populations. This randomness can occur under seasonal
fluctuations, for instance. For a deterministic version of this model we refer to
Smith [102] and to the references therein.
We assume that bj (ω) and cj (ω) are nonnegative random variables such
that t → bj (θt ω) and t → cj (θt ω) are locally integrable for every ω ∈ Ω. These
assumptions imply conditions (R1)–(R3) and therefore by Proposition 5.2.1
equations (5.94) generate a C^1 RDS (θ, ϕ) in R^4_+. It is clear that (5.93) holds

with m = (0, 0, 1, 1). Therefore (θ, ϕ) is an order-preserving RDS in R4+ with


respect to the cone
 
K_{(0,0,1,1)} := { x = (x_1 , x_2 , x_3 , x_4) ∈ R^4 : x_1 , x_2 ≥ 0, x_3 , x_4 ≤ 0 } .
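The order-preserving property with respect to K_{(0,0,1,1)} can also be observed numerically. In the Python sketch below (rates b_j, c_j and all parameters hypothetical) two initial states ordered in the sense ≤_m are propagated by the Euler scheme for (5.94), and the relation ≤_m is checked along the trajectories.

import numpy as np

# Sketch (hypothetical rates): check that the flow of (5.94) preserves the
# order <=_m generated by K_{(0,0,1,1)}, i.e. x <=_m y iff x1<=y1, x2<=y2,
# x3>=y3, x4>=y4.

eps, delta = 0.5, 0.7
a = np.array([1.0, 1.1, 0.9, 1.2])
rng = np.random.default_rng(3)
ph = rng.uniform(0, 2*np.pi, size=(2, 4))
b = lambda t: 1.0 + 0.2*np.sin(t + ph[0])     # b_j(theta_t omega) >= b_{j0} > 0
c = lambda t: 0.5 + 0.1*np.sin(t + ph[1])     # c_j(theta_t omega) >= 0

def f(t, x):
    x1, x2, x3, x4 = x
    bt, ct = b(t), c(t)
    return np.array([
        eps*(x2 - x1) + x1*(a[0] - bt[0]*x1 - ct[0]*x3),
        eps*(x1 - x2) + x2*(a[1] - bt[1]*x2 - ct[1]*x4),
        delta*(x4 - x3) + x3*(a[2] - bt[2]*x3 - ct[2]*x1),
        delta*(x3 - x4) + x4*(a[3] - bt[3]*x4 - ct[3]*x2),
    ])

def leq_m(x, y):
    return x[0] <= y[0] and x[1] <= y[1] and x[2] >= y[2] and x[3] >= y[3]

x = np.array([0.2, 0.3, 1.0, 0.8])            # x <=_m y at t = 0
y = np.array([0.4, 0.5, 0.6, 0.5])
dt, ordered = 1e-3, True
for k in range(20_000):                       # Euler steps up to t = 20
    x, y = x + dt*f(k*dt, x), y + dt*f(k*dt, y)
    ordered = ordered and leq_m(x, y)
print("order <=_m preserved up to t = 20:", ordered)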

Assume additionally that

bj (ω) ≥ bj0 > 0 for all ω ∈ Ω, j = 1, 2, 3, 4 . (5.95)

Then from (5.94) we have that

(d/dt) |x|^2 ≤ 2 ∑_{i=1}^{4} (a_i − b_{i0} x_i) x_i^2 ≤ −|x|^2 + C

with some constant C > 0. Consequently

|ϕ(t, ω)x|2 ≤ |x|2 e−t + C(1 − e−t ), x ∈ R4+ , ω ∈ Ω .

Therefore (θ, ϕ) is a dissipative RDS in R^4_+ in the universe D of all tempered


random sets of R4+ and

B = {x ∈ R4+ : |x| ≤ R}, R = (1 + C)1/2 ,

is a forward invariant absorbing set. Hence by Corollary 1.8.1 (θ, ϕ) possesses


a random pull back attractor A(ω) in D.
To study the attractor A(ω) we first note that the structure of equations
(5.94) implies that the sets

V1 = {x ∈ R4+ : x3 = x4 = 0} and V2 = {x ∈ R4+ : x1 = x2 = 0}

are forward invariant for (θ, ϕ). The restriction (θ, ϕ1 ) of (θ, ϕ) to V1 is gen-
erated by the equations

ẋ1 =ε(x2 − x1 ) + x1 (a1 − b1 (θt ω)x1 ) ,

ẋ2 =ε(x1 − x2 ) + x2 (a2 − b2 (θt ω)x2 ) ,

for which the cooperativity condition (R4∗ ) holds. Hence (θ, ϕ1 ) is order-
preserving in V1 with respect to the standard cone. It follows from (5.95) and
from Proposition 5.4.3 that there exists a super-equilibrium w̄ = (w_1 , w_2 , 0, 0)
with w_i ≥ R for (θ, ϕ_1) in V_1.
Similarly, the restriction (θ, ϕ_2) of (θ, ϕ) to V_2 is an order-preserving RDS
with respect to the standard cone and there exists a super-equilibrium w =
(0, 0, w_3 , w_4) with w_i ≥ R for (θ, ϕ_2) in V_2. We obviously have

ϕ(t, ω)w̄ = (ϕ_1(t, ω)(w_1 , w_2), 0, 0) ≤_m (w_1 , w_2 , 0, 0) = w̄



and
ϕ(t, ω)w = (0, 0, ϕ_2(t, ω)(w_3 , w_4)) ≥_m (0, 0, w_3 , w_4) = w .
Thus w̄ is a super-equilibrium and w is a sub-equilibrium for (θ, ϕ) with
respect to the cone K_{(0,0,1,1)}. Moreover w̄ ≥_m w and the absorbing set B
belongs to the interval {x : w ≤_m x ≤_m w̄}. Consequently by Theorem 3.6.2
the attractor A(ω) possesses the property

A(ω) ⊂ {x ∈ R^4_+ : u(ω) ≤_m x ≤_m ū(ω)} ,

where u(ω) and ū(ω) are equilibria. It is also easy to find that ū(ω) ∈ V_1 and
u(ω) ∈ V_2. Moreover ū (resp. u) is globally asymptotically stable from above
in V1 (resp. V2 ) with respect to the standard cone.
6. Cooperative Stochastic Differential
Equations

In this chapter we consider order-preserving RDS generated in Rd+ by stochas-


tic differential equations (SDE). The order-preserving property for stochastic
equations requires a special form of the diffusion terms. The corresponding
assumptions concerning the drift terms are the same as in the deterministic
case. In fact we deal here with some classes of stochastic perturbations of
deterministic order-preserving systems.
In this chapter we rely essentially on the Wong-Zakai type approximation
theorem and on the results on conjugacy of random and stochastic equations
(see Sect.2.5). We deal with Stratonovich equations only. However most re-
sults given below remain true for the Itô case, at least after obvious minor
changes.
We refer to Chap.2 for a description of the basic definitions and results
on stochastic differential equations.

6.1 Main Assumptions

We consider the following system of Stratonovich stochastic differential equa-


tions

dx_i = f_i(x_1 , . . . , x_d) dt + ∑_{j=1}^{m} σ_{ij}(x_i) ◦ dW_t^j ,   i = 1, . . . , d ,   (6.1)

where Wt (ω) = (Wt1 (ω), . . . , Wtm (ω)) is a Wiener process with values in Rm
and two-sided time R, m ≥ 1. Below we denote by θ the metric dynamical
system corresponding to this process (see Example 1.1.7).
In this chapter our main assumptions are as follows:
(S1) every function fi : Rd+ → R belongs to the class Cb1,δ (Rd+ ) (see Def-
inition 2.4.1), i.e. fi (x) is a continuously differentiable function, with
derivatives bounded and globally δ-Hölder continuous:
 
| ∂f_i/∂x_j (x) − ∂f_i/∂x_j (y) | ≤ C|x − y|^δ ,   0 < δ ≤ 1,  i, j = 1, . . . , d ;


(S2) for every i = 1, . . . , d and j = 1, . . . , m the functions σij : R+ → R


are twice continuously differentiable, with first derivative bounded and
second derivative bounded and globally δ-Hölder continuous, 0 < δ ≤ 1,
such that σ_{ij} · σ′_{ij} ∈ C_b^{1,δ}(R_+);
(S3) the property of weak positivity holds, i.e. σij (0) = 0 for all i = 1, . . . , d,
j = 1, . . . , m and
fi (x) ≥ 0, i = 1, . . . , d ,
for all x ∈ Rd+ of the form x = (x1 , . . . , xi−1 , 0, xi+1 , . . . , xd );
(S4) the function f = (f1 , . . . , fd ) is cooperative, i.e.

fi (x) ≤ fi (y), i = 1, . . . , d ,

for all x, y ∈ R^d_+ such that x_i = y_i and x_j ≤ y_j for j ≠ i or, in equivalent
differential form,

∂f_i(x)/∂x_j ≥ 0,   i ≠ j,  x ∈ R^d_+ .

6.2 Generation of Order-Preserving RDS

Proposition 6.2.1. Assume that conditions (S1)–(S3) hold. Then equation


(6.1) generates a C 1 RDS (θ, ϕ) in Rd+ such that the conclusions of Theo-
rem 2.4.3 hold in Rd+ .

Proof. Let f˜(x) = (f˜1 (x), . . . , f˜d (x)) be a function from Rd into itself such
that f̃_i(x) ∈ C_b^{1,δ}(R^d) and f̃_i(x) = f_i(x) for all x ∈ R^d_+ , i = 1, . . . , d. Let
σ̃_{ij}(x) ∈ C^{2,δ}(R) be an extension of σ_{ij}(x) from R_+ to R such that σ̃_{ij} · σ̃′_{ij}
belongs to C_b^{1,δ}(R), i = 1, . . . , d, j = 1, . . . , m. It follows from Theorem 2.4.3
that the stochastic equations

dx_i = f̃_i(x_1 , . . . , x_d) dt + ∑_{j=1}^{m} σ̃_{ij}(x_i) ◦ dW_t^j ,   i = 1, . . . , d ,

generate a C 1 RDS in Rd . Property (S3) implies that the set D = Rd+ satisfies
the assumptions of Corollaries 2.5.1 and 2.5.2. Therefore from Corollary 2.5.2
there exists a unique (up to indistinguishability) continuous C 1 RDS (θ, ϕ)
in Rd+ generated by the system of Stratonovich SDEs (6.1) in the sense of
Theorem 2.4.3. 2

Proposition 6.2.2. Let (S1)–(S4) be valid. Then equation (6.1) generates


a strictly order-preserving C 1 RDS (θ, ϕ) in Rd+ and

ϕ(t, ω)(Rd+ \ {0}) ⊂ Rd+ \ {0} for any t ≥ 0, ω ∈ Ω . (6.2)

Proof. We first approximate (6.1) by the system of random differential equa-


tions

ẋ_i = f_i(x_1 , . . . , x_d) + ∑_{j=1}^{m} σ_{ij}(x_i) · η_ε^j(θ_t ω),   i = 1, . . . , d ,   (6.3)

where the random variables η_ε^j(ω) are defined as in Sect.2.5:

η_ε^j(ω) = − (1/ε^2) ∫_0^ε φ̇(τ/ε) W_τ^j(ω) dτ

with a nonnegative function φ(t) ∈ C 1 (R) such that


supp φ ⊂ [0, 1],   ∫_0^1 φ(t) dt = 1 .

Theorem 5.2.1 implies that for every ε > 0 equations (6.3) generate an order-
preserving C 1 RDS (θ, ϕε ) in Rd+ . Therefore
∫_α^β l(ϕ^ε(t, θ_{−t}ω)y − ϕ^ε(t, θ_{−t}ω)x) dt ≥ 0,   0 ≤ x ≤ y ,

for all 0 ≤ α < β, where l is a positive (l(x) ≥ 0 whenever x ≥ 0) linear


functional on Rd . Consequently from (2.46) as in the proof of Corollary 2.5.1
we can conclude that
∫_α^β l(ϕ(t, θ_{−t}ω)y − ϕ(t, θ_{−t}ω)x) dt ≥ 0,   0 ≤ x ≤ y,  ω ∈ Ω^∗ ,

for all 0 ≤ α < β, where Ω ∗ is the θ-invariant set of full measure defined
in Remark 2.5.1. From this relation and from the continuity of the function
t → ϕ(t, θ−t ω)x (see Remark 2.4.1) we obtain that the inequality x ≤ y
implies ϕ(t, ω)x ≤ ϕ(t, ω)y for all t ≥ 0 and ω ∈ Ω ∗ . The invertibility of the
cocycle ϕ(t, ω) of the RDS generated by (6.1) (see [3, Theorem 2.3.32]) implies
that ϕ(t, ω)x < ϕ(t, ω)y for all 0 ≤ x < y, t ≥ 0 and ω ∈ Ω ∗ . Therefore after
redefining the cocycle ϕ by formula (2.50) (cf. Corollary 2.5.2) we obtain a
strictly order-preserving RDS.
If for some x0 > 0, t0 > 0 and ω ∈ Ω we have ϕ(t0 , ω)x0 = 0, then
ϕ(t0 , ω)y = 0 for all 0 ≤ y ≤ x0 , which is impossible because of the invert-
ibility of the cocycle ϕ(t, ω). Thus we have (6.2). 2

As for the random case (see Proposition 5.4.3) we have the following simple
condition for the existence of semi-equilibria for the RDS generated by (6.1).
Proposition 6.2.3. Let (S1)–(S3) be valid. Assume that there exists an el-
ement c = (c1 , . . . , cd ) in Rd+ such that σij (cj ) = 0 for each i = 1, . . . , d and
j = 1, . . . , m. If f (x) satisfies (S4) for all x ∈ [0, c] and

fi (c) ≤ 0 for all i = 1, . . . , d , (6.4)

then v(ω) ≡ c is a super-equilibrium for the RDS (θ, ϕ) generated by (6.1)


and the restriction of (θ, ϕ) to the interval [0, c] is a strictly order-preserving
RDS. If we have the reversed inequality sign in (6.4) and (S4) holds for all
x ≥ c, then w(ω) ≡ c is a sub-equilibrium and the restriction of (θ, ϕ) to the
set Ic = {x ∈ Rd+ : x ≥ c} is a strictly order-preserving RDS.

Proof. Applying Proposition 5.4.3 to the RDS (θ, ϕε ) generated by the ap-
proximate equation (6.3) we obtain

ϕε (t, θ−t ω)c ≤ c for all t ≥ 0, ω ∈ Ω ,

under condition (6.4). Therefore as in the proof of Proposition 6.2.2 transition


to the limit gives the relation

ϕ(t, θ−t ω)c ≤ c for all t ≥ 0, ω ∈ Ω ∗ ,

where Ω ∗ is a θ-invariant set of full measure. Therefore redefining the cocycle


ϕ by the formula (2.50) we obtain that c = (c1 , . . . , cd ) is a super-equilibrium
and the interval [0, c] is a forward invariant set for (θ, ϕ) (see Remark 3.4.1).
Consequently by Proposition 6.2.2 (θ, ϕ) is a strictly order-preserving RDS
on the interval [0, c].
The proof of the second part of the proposition is similar. 2

6.3 Conjugacy with Random Differential Equations

In this section we describe several situations in which the RDS generated by


(6.1) is equivalent to an RDS generated by a random differential equation.
In some sense the theorems given below are particular cases of the result by
Imkeller/Schmalfuss [59] presented in Theorem 2.5.2. However we do not
assume C ∞ -smoothness of the coefficients in (6.1).
As in Sect.2.5 we denote by z(ω) the random variable in Rm such
that z(t, ω) := z(θt ω) = (z1 (θt ω), . . . , zm (θt ω)) is the stationary Ornstein-
Uhlenbeck process in Rm which solves the equations

dzk = −µzk dt + dWtk , k = 1, . . . , m ,

for some µ > 0 and possesses the properties described in Lemma 2.5.1.
To present the main idea clearly we start with the simplest case of linear
diffusion terms.
Theorem 6.3.1. Assume that (S1)–(S3) hold. Let (θ, ϕ) be the RDS gener-
ated in Rd+ by (6.1). If σij (xi ) = sij · xi are linear functions, then (θ, ϕ) is
equivalent to the RDS (θ, ψ) generated in Rd+ by the RDE:

ẏi (t) = gi (θt ω, y1 (t), . . . , yd (t)), i = 1, . . . , d , (6.5)

with

gi (ω, y1 , . . . , yd ) = esi (ω)−1 · fi (y1 · es1 (ω), . . . , yd · esd (ω)) + µyi zis (ω) . (6.6)

Here esi (ω) = exp {zis (ω)} and


z_i^s(ω) = ∑_{j=1}^{m} s_{ij} z_j(ω),   i = 1, . . . , d ,   (6.7)

where the random variables zj (ω) are given by Lemma 2.5.1. Moreover we
have the relation

ϕ(t, ω, x) = T (θt ω, ψ(t, ω, T −1 (ω, x))), t > 0, x ∈ Rd+ , ω ∈ Ω , (6.8)

where the diffeomorphism T (ω, ·) : Rd+ → Rd+ is a linear mapping given by


the formula

T (ω, y) = (y1 · es1 (ω), . . . , yd · esd (ω)), ω∈Ω.

Proof. The functions gi (ω, y) given by (6.6) satisfy conditions (R1)-(R3) of


Chap.5. Therefore Proposition 5.2.1 implies that the RDE (6.5) generates an
RDS (θ, ψ) in Rd+ . If we apply Itô’s formula (see Theorem 2.3.1) to the value
xi (t, ω) := yi (t, ω)·esi (θt ω), then we find that x(t, ω) = (x1 (t, ω), . . . , xd (t, ω))
satisfies (6.1). Therefore using (6.8) we can define the perfect cocycle ϕ which
satisfies in Rd+ the conclusions of Theorem 2.4.3. 2

Now we consider diffusion coefficients σij of a slightly more general form. We


assume that

σ_{ij}(x_i) = σ_i(x_i) · s_{ij} ,   σ_i(x) > 0 for x > 0,   σ_i′(0) > 0 ,   (6.9)

where sij are constants. To obtain a theorem on conjugacy for this case we
need the following results.
Lemma 6.3.1. Suppose that Hi (x) is a primitive for σi (x)−1 on R+ \ {0}
and zis (ω) is defined by (6.7). Let Ti (ω, ·) : R+ → R+ be the random mapping
given by the formula

Ti (ω, y) = Hi−1 (zis (ω) + Hi (y)), y > 0 and Ti (ω, 0) = 0 .

Then Ti (ω, y) ∈ C 3 (R+ \{0})∩C 1 (R+ ) for all ω ∈ Ω and the random mapping
T (ω, ·) : Rd → Rd+ defined by the relation

T (ω, y) = (T1 (ω, y1 ), . . . , Td (ω, yd )) (6.10)

is a strictly order-preserving diffeomorphism. Moreover the relations


exp{−a|z_i^s(ω)|} ≤ T_i(ω, y)/y ≤ exp{a|z_i^s(ω)|} ,   y > 0 ,   (6.11)

and

exp{−a|z_i^s(ω)|} ≤ σ_i(y)/σ_i(T_i(ω, y)) ≤ exp{a|z_i^s(ω)|} ,   y > 0 ,   (6.12)

hold for every i = 1, . . . , d and ω ∈ Ω. Here a = max_i sup_{x∈R_+} |σ_i′(x)|.

Proof. It is clear that Hi (·) : R+ \ {0} → R is an increasing C 3 -function


such that
H_i(x) = c_i(x) + (1/σ_i′(0)) log x ,   x > 0 ,
where ci (x) belongs to the class C 3 (R+ \ {0}) ∩ C 1 (R+ ). This implies the cor-
responding smoothness of Ti (ω, y). It is clear that T (ω, ·) is a diffeomorphism
of intRd+ . A simple calculation shows that

lim_{y→0} T_i′(ω, y) = lim_{y→0} σ_i(T_i(ω, y))/σ_i(y) = lim_{y→0} T_i(ω, y)/y > 0 .

Therefore every mapping y → Ti (ω, y) is a diffeomorphism on R+ . It is a


strictly order-preserving mapping because every function Hi (x) is strictly
monotone.
To prove (6.11) we note that 0 < σ_i(x) ≤ ax for x > 0. Therefore

H_i(y_2) − H_i(y_1) ≥ (1/a) log(y_2/y_1) ,   y_2 > y_1 > 0 .
This relation implies that

Hi (y) ≤ z + Hi (y) ≤ Hi (yeaz ), y > 0, z ≥ 0 ,

and
Hi (yeaz ) ≤ z + Hi (y) ≤ Hi (y), y > 0, z < 0 .
From the monotonicity of Hi−1 we get (6.11).
Let η_i(x) = σ_i(H_i^{−1}(x)). Since (d/dx) η_i(x) = σ_i′(H_i^{−1}(x)) · η_i(x), we obtain

σ_i(y)/σ_i(T_i(ω, y)) = η_i(H_i(y))/η_i(z_i^s(ω) + H_i(y)) = exp{ ∫_{z_i^s(ω)+H_i(y)}^{H_i(y)} σ_i′(H_i^{−1}(ξ)) dξ } .

This implies (6.12). 2
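For a single hypothetical diffusion coefficient the objects of Lemma 6.3.1 can be computed explicitly and the bound (6.11) checked numerically. The Python sketch below takes σ(x) = x/(1 + x), for which H(x) = log x + x and a = sup|σ′| = 1; the inverse H^{-1} is evaluated by a root finder.

import numpy as np
from scipy.optimize import brentq

# Numerical illustration of Lemma 6.3.1 for the hypothetical coefficient
# sigma(x) = x/(1+x): H(x) = log x + x is a primitive of 1/sigma and a = 1.

sigma = lambda x: x/(1.0 + x)
H = lambda x: np.log(x) + x

def H_inv(v):                                  # inverse of the increasing map H
    return brentq(lambda x: H(x) - v, 1e-12, 1e12)

def T(z, y):                                   # T(omega, y) = H^{-1}(z + H(y)), z ~ z_i^s(omega)
    return H_inv(z + H(y))

a, ok = 1.0, True
for z in (-1.5, -0.3, 0.7, 2.0):
    for y in (0.01, 0.5, 3.0, 50.0):
        r = T(z, y)/y
        ok = ok and np.exp(-a*abs(z)) - 1e-9 <= r <= np.exp(a*abs(z)) + 1e-9
print("bound (6.11) verified on the sample grid:", ok)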



Lemma 6.3.2. If the functions fi and σij satisfy (S1)-(S3) and (6.9) holds,
then the functions

g_i(ω, y) = σ_i(y_i) · f_i(T(ω, y))/σ_i(T_i(ω, y_i)) + µ σ_i(y_i) · z_i^s(ω)   (6.13)

satisfy conditions (R1)-(R3) of Chap.5. If f = (f1 , . . . , fd ) is cooperative,


then g = (g1 , . . . , gd ) satisfies (R4) of Chap.5.

Proof. This is rather simple and relies on the properties of the functions
Ti (ω, x) described in Lemma 6.3.1. We leave the details to the reader. 2

Theorem 6.3.2. Assume that the functions fi and σij satisfy (S1)-(S3) and
(6.9) holds. Let (θ, ϕ) be the RDS generated in Rd+ by (6.1). Then (θ, ϕ) is
equivalent to the RDS (θ, ψ) generated in Rd+ by the RDE (6.5) with gi (ω, y)
given by (6.13). Relation (6.8) holds with the diffeomorphism T (ω, ·) : Rd+ →
Rd+ given by the formula (6.10). Moreover we have the relations (6.2) and

ϕ(t, ω)intRd+ ⊂ intRd+ for any t≥0 and ω∈Ω. (6.14)

If in addition (S4) holds, then (θ, ϕ) is strictly order preserving.

Proof. Lemma 6.3.2 and Proposition 5.2.1 imply that the RDE (6.5) with
gi (ω, y) given by (6.13) generates a strictly positive RDS (θ, ψ) in Rd+ such
that
ψ(t, ω)intRd+ ⊂ intRd+ for any t ≥ 0, ω ∈ Ω . (6.15)
Let y(t, ω) = ψ(t, ω)T −1 (ω, x) with x ∈ intRd+ . Then we can apply Itô’s
formula (see Theorem 2.3.1) to xi (t, ω) := Ti (θt ω, yi (t, ω)) and find that
x(t, ω) = (x1 (t, ω), . . . , xd (t, ω)) satisfies (6.1) with initial data x(0, ω) = x for
every x ∈ intRd+ . Now using the continuous dependence of solutions to (6.1)
on initial data we obtain that T (θt ω, ψ(t, ω)T −1 (ω, x)) is also a solution to
this equation with initial data x from Rd+ . Therefore using (6.8) we can define
the perfect cocycle ϕ which satisfies in Rd+ the conclusions of Theorem 2.4.3.
2

Corollary 6.3.1. Assume that fi and σij satisfy (S1)-(S3) and (6.9) holds.
Let (θ, ϕ) be the RDS generated in Rd+ by (6.1). If f = (f1 , . . . , fd ) is strongly
positive, i.e. if
fi (x) > 0, i = 1, . . . , d , (6.16)
for all x ∈ Rd+ \ {0} of the form x = (x1 , . . . , xi−1 , 0, xi+1 , . . . , xd ), then (θ, ϕ)
is a strongly positive RDS, i.e. ϕ(t, ω)(Rd+ \ {0}) ⊂ intRd+ for any t ≥ 0 and
ω ∈ Ω.

Proof. In this case the function g given by (6.13) satisfies (R3∗ ). Therefore
we can apply Proposition 5.2.1. 2

Theorems 6.3.2 and 5.2.1 imply the following assertion.


Corollary 6.3.2. Assume that the functions fi and σij satisfy (S1)-(S4) and
(6.9) holds. Let (θ, ϕ) be the RDS generated in Rd+ by (6.1). If the matrix
D_x f(x) ≡ { ∂f_i(x)/∂x_j }_{i,j=1}^{d}   (6.17)

is irreducible (see Definition 5.2.1) for all x ∈ intRd+ , then

ϕ(t, ω, x) ≪ ϕ(t, ω, y)   for all 0 ≪ x < y and ω ∈ Ω ,   (6.18)

i.e. the equation (6.1) generates a strongly order-preserving RDS in intRd+ .


If the matrix (6.17) is irreducible for all positive x from Rd+ and ω ∈ Ω, then
RDS (θ, ϕ) is strongly order-preserving in Rd+ .

Proof. Theorem 5.2.1 is applied to (6.5) with gi given by (6.13). 2

Similar to Theorem 6.3.2 we can also prove the following assertion.


Theorem 6.3.3. Assume that the functions fi and σij satisfy (S1)-(S4) and
σij (xi ) = σi (xi )·sij , where sij are constants and σi (xi ) possess the properties
(a) there exists c_i > 0 such that σ_i(c_i) = 0, (b) σ_i(x_i) > 0 for all x_i ∈ (0, c_i),
(c) σ_i′(0) > 0 and (d) σ_i′(c_i) < 0. Assume that f_i fulfills (6.4). Let (θ, ϕ) be
the RDS generated in Rd+ by (6.1). Then the restriction (θ, ϕc ) of (θ, ϕ) to
[0, c] is equivalent to the order-preserving RDS (θ, ψ c ) generated in [0, c] by
the RDE (6.5) with gi given by (6.13), where T (ω, ·) : [0, c] → [0, c] is given
by the formula (6.10) with

Ti (ω, y) = Hi−1 (zis (ω) + Hi (y)), y ∈ (0, ci )

with Ti (ω, 0) = 0 and Ti (ω, ci ) = ci . Here Hi (x) is a primitive for σi (u)−1


on the interval (0, ci ). Moreover

ϕc (t, ω, x) = T (θt ω, ψ c (t, ω, T −1 (ω, x))), t > 0, x ∈ [0, c], ω ∈ Ω .

6.4 Stochastic Comparison Principle

Analogously to the Random Comparison Theorem 5.3.1 we can prove the cor-
responding stochastic version (for the one-dimensional case see Ikeda/Wata-
nabe [57] and Karatzas/Shreve [62], for the Rd case see Ladde/Lakshmi-
kantham [75] and also Geiss/Manthey [46] and the references therein).
Here we consider the simplest case assuming (S1)–(S4) for one of the systems.
We refer to Geiss/Manthey [46] for more general comparison theorems.

Theorem 6.4.1 (Stochastic Comparison Principle). Assume that con-


ditions (S1)–(S4) for (6.1) hold. Consider in Rd+ the system of Stratonovich
stochastic equations

dx_i = g_i(x_1 , . . . , x_d) dt + ∑_{j=1}^{m} σ_{ij}(x_i) ◦ dW_t^j ,   i = 1, . . . , d .   (6.19)

with g = (g1 , . . . , gd ) : Rd+ → Rd satisfying (S1)–(S3). Let ψ(t, ω) denote the


corresponding cocycle generated by (6.19). Then
(i) the condition

fi (x) ≤ gi (x) for all x ∈ Rd+ , i = 1, . . . , d , (6.20)

implies that

ϕ(t, ω)x ≤ ψ(t, ω)x for all t > 0, ω ∈ Ω and x ∈ Rd+ ; (6.21)

(ii) if
fi (x) ≥ gi (x) for all x ∈ Rd+ , (6.22)
then

ϕ(t, ω)x ≥ ψ(t, ω)x for all t > 0, ω ∈ Ω and x ∈ Rd+ . (6.23)

Proof. Under the conditions listed above, equations (6.19) generate an RDS
in Rd+ (see Proposition 6.2.1). Let us consider together with (6.3) the system
of random differential equations

ẋ_i = g_i(x_1 , . . . , x_d) + ∑_{j=1}^{m} σ_{ij}(x_i) · η_ε^j(θ_t ω),   i = 1, . . . , d ,   (6.24)

where ηεj (θt ω) is defined as in (6.3). By Proposition 5.2.1 both equations (6.3)
and (6.24) generate RDS in Rd+ . Let ϕε and ψ ε be the corresponding cocycles.
Let (6.20) hold. The random comparison principle (see Corollary 5.3.1(i))
implies that ϕε (t, ω)x ≤ ψ ε (t, ω)x for all t > 0, ω ∈ Ω and x ∈ Rd+ . Hence
∫_α^β l(ψ^ε(t, θ_{−t}ω)x − ϕ^ε(t, θ_{−t}ω)x) dt ≥ 0,   x ≥ 0,  ω ∈ Ω ,

for all 0 ≤ α < β, where l is a positive (x ≥ 0 implies l(x) ≥ 0) linear func-


tional on Rd . Therefore as in the proof of Proposition 6.2.2 after transition
to the limit we can obtain (6.21) for all ω ∈ Ω̃ = Ω_ϕ^∗ ∩ Ω_ψ^∗ , where Ω_ϕ^∗ (resp. Ω_ψ^∗)
is the θ-invariant set of full measure defined by the cocycle ϕ (resp. ψ) as
in Remark 2.5.1. Therefore after modifications of the cocycles ϕ and ψ we
obtain (6.21). The same argument proves (ii). 2

Remark 6.4.1. (i) If the diffusion coefficients σij (x) satisfy (6.9), then using
Theorem 6.3.2 we can also prove assertions which are similar to Corollary
5.3.1 and Theorem 5.3.2 for RDS generated by stochastic equations.
(ii) The result of this section remains true if we interpret the equations
(6.1) in the Itô sense. The point is that by Theorem 2.4.2 the system of Itô
stochastic equations

dx_i = f_i(x_1 , . . . , x_d) dt + ∑_{j=1}^{m} σ_{ij}(x_i) · dW_t^j ,   i = 1, . . . , d ,   (6.25)

is equivalent to the system of Stratonovich equations


 
dx_i = ( f_i(x_1 , . . . , x_d) − (1/2) ∑_{j=1}^{m} σ′_{ij}(x_i) · σ_{ij}(x_i) ) dt + ∑_{j=1}^{m} σ_{ij}(x_i) ◦ dW_t^j ,
(6.26)
where i = 1, . . . , d. It is clear that the assumptions on f_i(x) immediately
imply the corresponding properties of the functions f_i(x) − (1/2) ∑_{j=1}^{m} σ′_{ij}(x_i) ·
σ_{ij}(x_i) and vice versa. This observation makes it also possible to find the
corresponding Itô versions of the results presented in Sect.6.2 and 6.3.

6.5 Equilibria and Attractors

Now we give a result on the existence of equilibria and attractors for the
stochastic systems considered. As in the random case we note that under
assumptions (S1)–(S4) Proposition 6.2.1 implies that the element x ≡ 0 is a
sub-equilibrium for the RDS (θ, ϕ) generated by (6.1) in Rd+ .
Throughout this section we assume that the diffusion terms in (6.1) have
the following particular form:

σij (xi ) = σi (xi ) · sj for all i = 1, . . . , d, j = 1, . . . , m , (6.27)

where sj are constants.


We start with a Lyapunov function type theorem giving sufficient condi-
tions for the existence of random attractors and equilibria.
Theorem 6.5.1. Let conditions (S1)–(S3) and (6.27) hold. Assume that
there exists a function V (x) ∈ C(Rd+ ) ∩ C 1 (intRd+ ) possessing the proper-
ties
⟨f(x), ∇V(x)⟩ + α · V(x) ≤ β   for all x ∈ int R^d_+   (6.28)

and

∑_{i=1}^{d} ∂V(x)/∂x_i · σ_i(x_i) = γ · V(x)   for all x ∈ int R^d_+ ,   (6.29)
where α > 0, β > 0 and γ ∈ R are constants. Let
R(ω) := β · ∫_{−∞}^0 exp{ατ − γ W_τ^{(s)}(ω)} dτ ,

where W_t^{(s)} = ∑_{j=1}^{m} s_j W_t^j. Let (θ, ϕ) be the RDS in R^d_+ generated by (6.1).
Then the random set
 
B(ω) := { x ∈ R^d_+ : V(x) ≤ 2β/α + 2R(ω) }

absorbs every deterministic bounded set, i.e. for any bounded set B from Rd+
there exists t0 = t0 (ω, B) > 0 such that ϕ(t, ω)B ⊂ B(θt ω) for t ≥ t0 . If we
additionally assume that

a_1 |x|^{α_1} − b_1 ≤ V(x) ≤ a_2 |x|^{α_2} + b_2   for all x ∈ int R^d_+ ,   (6.30)

where aj , αj , bj are positive constants, then the RDS (θ, ϕ) possesses a random
attractor A(ω) in the universe D of all tempered subsets of Rd+ . This attractor
is measurable with respect to the past σ-algebra F− . If in addition (S4) holds,
then the attractor A(ω) is bounded from above and from below and there exist
maximal and minimal equilibria ū and u such that the random interval [u, ū]
contains the attractor as well as all other possible tempered equilibria. In
particular, if the equilibrium u is unique, then A = {u}.

Proof. Let us consider the RDE

ẋi = fi (x1 , . . . , xd ) + σi (xi ) · ηε(s) (θt ω), i = 1, . . . , d , (6.31)


where η_ε^{(s)}(ω) = ∑_{j=1}^{m} s_j η_ε^j(ω) and η_ε^j(ω) is defined as in (6.3). Let ϕ^ε(t, ω) be
the corresponding cocycle generated by (6.31) and xε (t) = ϕε (t, ω)x, where
x ∈ intRd+ . Using (6.31), (6.28) and (6.29) we find that the function Vε (t) =
V (xε (t)) satisfies the inequality

(d/dt) V_ε(t) ≤ ( −α + γ · η_ε^{(s)}(θ_t ω) ) V_ε(t) + β .
Therefore we have
V_ε(t) ≤ V(x) exp{ −αt + γ ∫_0^t η_ε^{(s)}(θ_τ ω) dτ }
       + β · ∫_0^t exp{ −α(t − ξ) + γ ∫_ξ^t η_ε^{(s)}(θ_τ ω) dτ } dξ .
Hence

V(ϕ^ε(t, θ_{−t}ω)x) ≤ V(x) exp{ −αt + γ ∫_{−t}^0 η_ε^{(s)}(θ_τ ω) dτ }
                    + β · ∫_{−t}^0 exp{ αξ + γ ∫_ξ^0 η_ε^{(s)}(θ_τ ω) dτ } dξ .   (6.32)

Since (see Sect.2.5)

η_ε^{(s)}(θ_t ω) = (d/dt) W_t^{(s),ε}(ω) := ∑_{j=1}^{m} s_j (d/dt) W_t^{j,ε}(ω) ,

where Wtj,ε (ω) is defined by (2.43), using (2.25) we have


∫_ξ^0 η_ε^{(s)}(θ_τ ω) dτ = − ∑_{j=1}^{m} s_j ∫_0^ε φ_ε(τ) W_ξ^j(θ_τ ω) dτ .

Since τ → Wξj (θτ ω) is a continuous function for every ξ and ω (see (2.25)),
we have that
lim_{ε→0} ∫_ξ^0 η_ε^{(s)}(θ_τ ω) dτ = − ∑_{j=1}^{m} s_j W_ξ^j(ω) = −W_ξ^{(s)}(ω) .   (6.33)

Let Ω ∗ be a θ-invariant set of full measure such that (2.46) holds. As in the
proof of Corollary 2.5.1 (cf.(2.49)) we conclude that for any ω ∈ Ω ∗ and
x ∈ Rd+ there exists εk → 0 such that

ϕεk (t, θ−t ω)x → ϕ(t, θ−t ω)x for almost all t > 0 .

Therefore from (6.32) and (6.33) we have


V(ϕ(t, θ_{−t}ω)x) ≤ V(x) exp{ −αt − γ W_{−t}^{(s)}(ω) } + β · ∫_{−t}^0 exp{ αξ − γ W_ξ^{(s)}(ω) } dξ   (6.34)

for almost all t > 0. Since t → ϕ(t, θ−t ω)x is continuous (see Remark 2.4.1),
we have inequality (6.34) for all ω ∈ Ω ∗ , t > 0 and x ∈ Rd+ which implies
that

V(ϕ(t, θ_{−t}ω)x) ≤ V(x) exp{ −αt − γ W_{−t}^{(s)}(ω) } + R(ω)   (6.35)

for all ω ∈ Ω ∗ , t > 0 and x ∈ Rd+ .


For ω ∉ Ω^∗ we redefine the cocycle ϕ(t, ω) by the formula ϕ(t, ω)x =
y(t; x), where y(t) = y(t; x) is the solution to the problem

ẏ_i = f_i(y_1 , . . . , y_d),   y(0) = x,   i = 1, . . . , d .

It is clear from (6.28) that

V(ϕ(t, θ_{−t}ω)x) ≤ V(x)e^{−αt} + (β/α)(1 − e^{−αt}),   t > 0,  ω ∉ Ω^∗ .   (6.36)
Inequalities (6.35) and (6.36) imply that the random set B(ω) absorbs every
deterministic bounded set from Rd+ .
It is clear that R(ω) is a tempered random variable. Therefore under
condition (6.30) we have B ∈ D. From (6.35) and (6.36) we also have that B is
D-absorbing for the RDS (θ, ϕ) (cf. the proof of Proposition 1.4.1). Therefore
we can apply Theorem 1.8.1 on the existence of random attractors and assert
that the RDS (θ, ϕ) generated in the space Rd+ by problem (6.1) possesses
a random global D-attractor A(ω). Since R(ω) is F− -measurable, it follows
from (1.44) that A(ω) is F− -measurable. The existence of the maximal and
minimal equilibria ū and u and their properties under condition (S4) follow
from Theorem 3.6.2. 2

Corollary 6.5.1. Let assumptions (S1)–(S3) and (6.27) with σi (xi ) ≡ xi be


valid. Assume that there exist positive numbers κ, α and β such that


∑_{i=1}^{d} x_i^{κ−1} · f_i(x) ≤ −α · ∑_{i=1}^{d} x_i^κ + β,   x ∈ R^d_+ .   (6.37)

Then the RDS (θ, ϕ) generated by (6.1) possesses a random attractor A(ω)
in the universe D of all tempered subsets of Rd+ and all the conclusions of
Theorem 6.5.1 hold.
Proof. It is easy to see that the function V(x) = ∑_{i=1}^{d} x_i^κ satisfies all the
hypotheses of Theorem 6.5.1 (with γ = κ in (6.29)). 2

Corollary 6.5.2. Let assumptions (S1)–(S3) and (6.27) hold. Assume ad-
ditionally that

σ_i(x) > 0 for x > 0,   σ_i′(0) > 0,   i = 1, . . . , d ,

and

σ_i(x)/x = λ_i + O(x^{−γ_i})   when x → ∞,   i = 1, . . . , d ,   (6.38)
where λi > 0 and γi > 0 are constants, and there exist positive numbers α, β
and 0 ≤ κ < (min λi ) · (max λi )−1 such that
f_i(x_1 , . . . , x_d) ≤ −α · x_i + g_i(x_1 , . . . , x_d),   x = (x_1 , . . . , x_d) ∈ R^d_+ ,   (6.39)

where the function g_i(x_1 , . . . , x_d) possesses the property

|g_i(x_1 , . . . , x_d)| ≤ β · ( 1 + ∑_{j=1}^{d} x_j^κ ) ,   x = (x_1 , . . . , x_d) ∈ R^d_+ .   (6.40)

Then the RDS (θ, ϕ) generated by (6.1) possesses a random attractor A(ω)
in the universe D of all tempered subsets of Rd+ and all the conclusions of
Theorem 6.5.1 hold.

Proof. Let


V(x) = ∑_{i=1}^{d} V_i(x_i)   with   V_i(x) = exp{ δ ∫_1^x dξ/σ_i(ξ) } ,

where δ is a positive parameter. If we set Vi (0) = 0, then V (x) ∈ C(Rd+ ) ∩


C 3 (intRd+ ). It is clear that V (x) satisfies (6.29) with γ = δ and it follows
from (6.39) that


d
fi (x1 . . . , xd )
f (x), ∇V (x) = δ Vi (xi ) ·
i=1
σi (xi )


d
xi d
gi (x1 . . . , xd )
≤ −αδ · Vi (xi ) + δ Vi (xi ) · . (6.41)
i=1
σi (xi ) i=1
σi (xi )

It is clear that every function x ↦ (x/σ_i(x)) · V_i(x) is continuous on R_+. Therefore
(6.38) implies that

(x_i/σ_i(x_i)) · V_i(x_i) ≥ (1/(2λ)) V_i(x_i) − C,   λ := max_i λ_i ,

for some constant C. Therefore from (6.41) we obtain

⟨f(x), ∇V(x)⟩ ≤ − (αδ/(2λ)) · V(x) + δ ∑_{i=1}^{d} V_i(x_i) · g_i(x_1 , . . . , x_d)/σ_i(x_i) + C .   (6.42)

A simple calculation shows that


C_1 · x_i^{δ/λ_i} ≤ V_i(x_i) ≤ C_2 · x_i^{δ/λ_i}   if x_i ≥ 1,  i = 1, . . . , d ,

and

C_1 · x_i^{δ/σ_i′(0)} ≤ V_i(x_i) ≤ C_2 · x_i^{δ/σ_i′(0)}   if 0 < x_i < 1,  i = 1, . . . , d .

These inequalities imply that under the condition δ ≥ max_i σ_i′(0) we have

V_i(x_i)/σ_i(x_i) ≤ C · ( 1 + V(x)^{1−λ_i/δ} )   and   x_i ≤ C · ( 1 + V(x)^{λ_i/δ} ) ,

where x = (x1 , . . . , xd ) ∈ intRd+ and C is a constant. Therefore it is easy to


see from (6.40) that


∑_{i=1}^{d} V_i(x_i) · g_i(x_1 , . . . , x_d)/σ_i(x_i) ≤ C · ( 1 + V(x)^{κ^∗} ) ,   (6.43)

where κ∗ = 1 − (min λi − κ · max λi ) · δ −1 and δ is large enough. Since κ∗ < 1


for all δ > 0, relations (6.42) and (6.43) imply (6.28) with appropriate choice
of parameters. 2

6.6 One-Dimensional Stochastic Equations

In this section we consider the properties of the RDS generated by a single


Stratonovich SDE
dx = f (x)dt + σ(x) ◦ dWt . (6.44)
There are many results concerning this equation (see, e.g., Arnold [3],
Gihman/Skorohod [47], Ikeda/Watanabe [57], Karatzas/Shreve [62],
Khasminskii [64] among others). We include this section for the sake of com-
pleteness.
Below for any closed interval I ⊆ R (finite or not) and k ∈ Z+ , δ ∈ (0, 1]
we denote by C k,δ (I) the space of k times continuously differentiable functions
f (x) on I such that the derivative f (k) (x) satisfies the Hölder condition with
the exponent δ in a vicinity of every point from I. We also denote by Cbk,δ (I)
the space of restrictions of functions from Cbk,δ (R) to the interval I.

6.6.1 Stochastic Equations on R+

We start with the following assertion on the generation of RDS in R+ by


(6.44). It admits a slightly more general class of drift terms in comparison
with the result given by Proposition 6.2.2.
Proposition 6.6.1. Assume that

f (x) ∈ C 1,δ (R+ ), f (x) ≤ ax + b, f (0) ≥ 0 , (6.45)

and
σ(x) ∈ C_b^{2,δ}(R_+),   σ(x) · σ′(x) ∈ C_b^{1,δ}(R_+) ,
σ(0) = 0,   |σ′(0)| > 0,   |σ(x)| > 0 if x > 0 .   (6.46)
Here a, b ∈ R_+ and δ ∈ (0, 1]. Then (6.44) generates a strictly order-
preserving RDS in R+ .

Proof. We suppose that σ′(0) > 0 and σ(x) > 0 if x > 0 for definiteness.
We first assume that −α ≤ f(x) ≤ ax + b for some α > 0. Denote by
χ_N(z) a function from C^2(R) with the properties (i) χ_N(z) = N + 1/2 for
z ≥ N + 1, (ii) χ_N(z) = z for z ∈ (−∞, N] and (iii) 0 ≤ χ′_N(z) ≤ 1 for
all z ∈ R. Then f_N(x) := χ_N(f(x)) ∈ C_b^{1,δ}(R_+). Since condition (S4) holds
automatically for the one-dimensional case, Proposition 6.2.2 implies that the
equation
dx = fN (x)dt + σ(x) ◦ dWt (6.47)
generates a strictly order-preserving C 1 RDS (θ, ϕN ) in R+ . Since fN (x) ≤
fN +1 (x) ≤ ax + b, Comparison Theorem 6.4.1 implies that

ϕN (t, ω)x ≤ ϕN +1 (t, ω)x ≤ ϕ̄(t, ω)x, t > 0, ω ∈ Ω, x ∈ R+ , (6.48)

where (θ, ϕ̄) is the RDS generated by (6.44) with f (x) = ax + b. Relation
(6.48) implies that the limit

ϕ(t, ω)x := lim_{N→∞} ϕ_N(t, ω)x,   t > 0, ω ∈ Ω, x ∈ R_+ ,   (6.49)

exists. By Theorem 6.3.2 the RDS (θ, ϕN ) is equivalent to the RDS (θ, ψN )
generated by the RDE

ẏ = σ(y) · f_N(T(θ_t ω, y))/σ(T(θ_t ω, y)) + µσ(y) · z(θ_t ω) ,

where T(ω, y) = H^{−1}(z(ω) + H(y)), y > 0 and T(ω, 0) = 0. Here H(x) is
a primitive for σ(x)^{−1} on R_+ \ {0} and z(ω) = ∫_{−∞}^0 e^{µτ} dW_τ is a Gaussian
random variable which generates a stationary Ornstein-Uhlenbeck process in
R. We also have the relation

ϕN (t, ω, x) = T (θt ω, ψN (t, ω, T −1 (ω, x))), t > 0, x ∈ R+ , ω ∈ Ω .

Therefore (6.49) implies that

y(t, ω) = ψ(t, ω, y0 ) := T −1 (θt ω, ϕ(t, ω, T (ω, y0 ))), t > 0, ω ∈ Ω , (6.50)

is a local solution to the problem

ẏ = σ(y) · f(T(θ_t ω, y))/σ(T(θ_t ω, y)) + µσ(y) · z(θ_t ω),   y(0) = y_0 ∈ R_+ .

Thus (t, x) → ϕ(t, ω, x) is a continuous function for every ω ∈ Ω. Hence


by (6.48) the limit in (6.49) is uniform with respect to (t, x) from compact
subsets of R+ × R+ . This implies that (θ, ϕ) is an order-preserving RDS.
It is strictly order-preserving because of (6.50). It is also easy to see that
x(t) = ϕ(t, ω)x solves (6.44).
Now we consider the case of general f satisfying (6.45). Let χ̄N (z) be
a function from C 2 (R) with the properties (i) χ̄N (z) = z for z ≥ −N ,
(ii) χ̄_N(z) = −N − 1/2 for z ≤ −N − 1 and (iii) 0 ≤ χ̄′_N(z) ≤ 1 for all
z ∈ R. Let fN (x) := χ̄N (f (x)). Since fN satisfies (6.45) and −N − 1 ≤
fN (x) ≤ ax + b, equation (6.47) generates a strictly order-preserving RDS
(θ, ϕ∗N ) in R+ such that

ϕ∗N (t, ω)x ≥ ϕ∗N +1 (t, ω)x ≥ 0, t > 0, ω ∈ Ω, x ∈ R+ .

Therefore the limit

ϕ(t, ω)x := lim_{N→∞} ϕ^∗_N(t, ω)x,   t > 0, ω ∈ Ω, x ∈ R_+ ,

exists. The same argument as above leads to the conclusion. 2

Remark 6.6.1. Using Feller’s test for non-explosion (see, e.g., Karatzas/
Shreve [62, p.348]) it is also possible to give the sufficient and necessary
conditions on the functions f (x) ∈ C 1,δ (R+ ) and σ(x) ∈ Cb2,δ (R+ ) for gener-
ation of a C 1 RDS by equation (6.44) (cf. Arnold [3, p.96] and Kunita [74,
p.181-184]).

Proposition 6.6.1 and Corollary 6.5.2 imply the following assertion.


Corollary 6.6.1. Assume in addition to (6.45) and (6.46) that

lim sup_{x→∞} f(x)/x < 0   and   |σ(x)|/x = λ + O(x^{−γ}),  x → ∞ ,   (6.51)

where λ > 0 and γ > 0 are constants. Then the RDS (θ, ϕ) generated by (6.44)
possesses a random attractor A(ω) in the universe D of all tempered subsets
of R+ . This attractor is measurable with respect to the past σ-algebra F− .
Moreover A(ω) = [u(ω), ū(ω)], where ū and u are F− -measurable tempered
equilibria such that 0 ≤ u(ω) ≤ ū(ω).

Example 6.6.1. We consider an RDS generated in R+ by the SDE

dx = (αx − βxN +1 )dt + σx ◦ dWt , (6.52)

where β > 0, α, σ ∈ R \ {0} and N > 0. By Proposition 6.6.1 this equation


generates a strictly order-preserving RDS (θ, ϕ) in R+ . By Corollary 6.6.1
this RDS has a random attractor A(ω) = [0, u(ω)] in the universe D of all
tempered subsets of R+ , where u(ω) ≥ 0 is an F− -measurable equilibrium.

We note that as in the random case (see Example 5.6.1) the cocycle ϕ can
be represented in the form
ϕ(t, ω, x) = x exp{αt + σW_t(ω)} / ( 1 + βN x^N ∫_0^t exp{N(ατ + σW_τ(ω))} dτ )^{1/N}

for x > 0 and ϕ(t, ω, 0) = 0. Therefore if α < 0, then A(ω) = {0}. In the case
α > 0 we have A(ω) = [0, uα,β,N (ω)], where
u_{α,β,N}(ω) := ( βN ∫_{−∞}^0 exp{N(ατ + σW_τ(ω))} dτ )^{−1/N} .   (6.53)

Moreover a simple calculation relying on Proposition 1.9.3 shows that there


exists γ > 0 such that
lim_{t→∞} e^{γt} |ϕ(t, θ_{−t}ω, x) − u_{α,β,N}(ω)| = 0   for all x > 0 and ω ∈ Ω .   (6.54)

If N = 2m+1 is odd, m ≥ 1, then equation (6.52) is invariant with respect to


the transformation x → −x. Therefore Proposition 6.6.1 implies that (6.52)
generates a strictly order-preserving RDS (θ, ϕ̄) in R. This RDS has a random
attractor A(ω) in the universe D of all tempered subsets of R. We have that
A(ω) = {0} if α < 0 and A(ω) = [−uα,β,N (ω), uα,β,N (ω)] when α > 0. In
the last case the equilibrium uα,β,N (ω) (resp. −uα,β,N (ω)) is globally stable
in R+ \ {0} (resp. R− \ {0}) and u0 ≡ 0 is an unstable equilibrium. Thus
we observe here a pitchfork bifurcation as α increases through 0. We refer to
Arnold [3, Chap.9] for a detailed discussion of the bifurcation phenomena for
RDS (see also Crauel/Flandoli [37], Crauel et al. [38], Arnold [4] and
the references therein). We also note that other explicitly solvable SDE can
be found in Horsthemke/Lefever [55, pp.139ff] and Kloeden/Platen
[67, pp.117ff] (see also Example 6.6.3 below).
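Because the cocycle of (6.52) is explicit, the pullback convergence (6.54) can be observed directly. The Python sketch below (with assumed values of α, β, N and σ) samples a Brownian path on [−T, 0], evaluates u_{α,β,N}(ω) from (6.53) by a Riemann sum, and compares it with ϕ(t, θ_{−t}ω, x) computed from the cocycle formula above, using W_s(θ_{−t}ω) = W_{s−t}(ω) − W_{−t}(ω).

import numpy as np

# Sketch (assumed parameter values): pullback convergence (6.54) for the
# explicitly solvable equation (6.52).

alpha, beta, N, sigma = 1.0, 1.0, 2, 0.5
T, dt = 40.0, 1e-3
rng = np.random.default_rng(4)

n = int(T/dt)
taus = -dt*np.arange(n + 1)                                  # 0, -dt, ..., -T
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt)*rng.standard_normal(n))])

vals = np.exp(N*(alpha*taus + sigma*W))                      # integrand in (6.53)
u_omega = (beta*N*np.sum(vals)*dt)**(-1.0/N)

def pullback(x, t):
    m = int(round(t/dt))
    Wsh = W[m::-1] - W[m]                                    # W_s(theta_{-t} omega), s = 0..t
    s = dt*np.arange(m + 1)
    I = np.sum(np.exp(N*(alpha*s + sigma*Wsh)))*dt
    return x*np.exp(alpha*t + sigma*Wsh[-1])/(1.0 + beta*N*x**N*I)**(1.0/N)

for t in (5.0, 10.0, 20.0, 40.0):
    print(f"t = {t:5.1f}:  phi(t, theta_-t omega, 1) = {pullback(1.0, t):.6f}   u(omega) = {u_omega:.6f}")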
The next assertion shows that the behaviour of trajectories presented in Ex-
ample 6.6.1 is typical for a dissipative RDS generated in R+ by an equation
of the form (6.44).
To describe possible scenarios of long-time dynamics we introduce the
speed measure m(dx) on R+ by the formula (see, e.g., Karatzas/Shreve
[62])
m(A) = ∫_A exp{ 2 ∫_1^x f(ξ)/σ^2(ξ) dξ } dx/|σ(x)| ,   A ∈ B(R_+) .   (6.55)
It is easy to see that m([1, +∞)) < ∞ under conditions (6.45), (6.46) and
(6.51). Below we also use that, by Theorem 2.4.2, equation (6.44) can be
written in Itô’s form
 
dx = ( f(x) + (1/2) σ′(x)σ(x) ) dt + σ(x) · dW_t .   (6.56)

Theorem 6.6.1. Assume that hypotheses (6.45), (6.46) and (6.51) hold. Let
A(ω) be the random attractor in the universe D of all tempered subsets of R+
for the RDS generated by (6.44).
(i) If f (0) = 0 and m((0, 1]) = ∞, then A(ω) = {0} almost surely.
(ii) If f (0) = 0 and m((0, 1]) < ∞, then A(ω) = [0, u(ω)] for some F− -
measurable equilibrium u(ω) such that u(ω) > 0 almost surely. There
are no other (up to indistinguishability) F− -measurable equilibria in the
set (0, u(ω)] and

m(B)
lim P{ω : ϕ(t, ω)x ∈ B} = P{ω : u(ω) ∈ B} ≡ (6.57)
t→∞ m(R+ )

for all x > 0 and B ∈ B(R+ ).


(iii) If f (0) > 0, then there exists an F− -measurable equilibrium u(ω) such
that u(ω) > 0 and A(ω) = {u(ω)} almost surely. Moreover (6.57) holds
for any x ∈ R+ .

Proof. For definiteness we assume that σ(x) > 0 for x > 0.


(i) As in Karatzas/Shreve [62] (see also Ikeda/Watanabe [57]) we
define the scale function s : [0, +∞] → R ∪ {±∞} (for equation (6.56)) by
the formula

s(x) = ∫_1^x exp{ −2 ∫_1^y f(ξ)/σ^2(ξ) dξ } dy/|σ(y)| .
Since m([1, +∞)) < ∞, it is easy to prove that s(+∞) = +∞ (see Crauel
et al. [38, Lemma 2.4]). Hence as in Scheutzow [91] we can conclude that
ϕ(t, ω)x → 0 in probability for every x ∈ R+ , i.e.

lim_{t→∞} P{ω : ϕ(t, ω)x ≥ ε} = 0,   x ∈ R_+ ,   (6.58)

for any ε > 0. Indeed, choose 0 < γ < ε such that ε·m((γ, ε]) > m([ε, ∞)) and
consider a function f˜ ∈ C 1,δ (R+ ) possessing the properties (a) f˜(x) ≥ f (x)
for x ∈ R+ , (b) f˜(x) = f (x) if x ≥ γ and (c) m̃((0, 1]) < ∞, where m̃(dx) is
defined by (6.55) with f̃ instead of f. It is easy to construct such a function
f̃ by choosing f̃(x) = k · x with k > 0 in a vicinity of 0. Let (θ, ϕ̃) be the RDS
generated by (6.44) with f˜ instead of f . Comparison Theorem 6.4.1 implies
that ϕ(t, ω)x ≤ ϕ̃(t, ω)x for all t ∈ R, ω ∈ Ω and x > 0. We also have
s̃(0) = −∞ and s̃(+∞) = ∞, where s̃(x) is the scale function for the RDS
(θ, ϕ̃). Therefore by [78, Theorem 7, Chap.4] we have

lim_{t→∞} P{ω : ϕ̃(t, ω)x ≥ ε} = m̃([ε, ∞))/m̃([0, ∞)) ≤ m̃([ε, ∞))/m̃([γ, ∞)) .

Since m̃([a, ∞)) = m([a, ∞)) for any a ≥ γ, we obtain that

m̃([ε, ∞)) < ε · m((γ, ε]) ≤ ε · m([γ, ∞)) ≤ εm̃([γ, ∞)) .


Thus

lim sup_{t→∞} P{ω : ϕ(t, ω)x ≥ ε} ≤ lim_{t→∞} P{ω : ϕ̃(t, ω)x ≥ ε} ≤ ε

for any ε > 0. This implies (6.58).


Since

{ω : ϕ(t, ω)v(ω) ≥ ε} ⊂ {ω : ϕ(t, ω)N ≥ ε} ∪ {ω : v(ω) ≥ N }

for any random variable v(ω) and N ∈ N, it follows from (6.58) that

lim sup_{t→∞} P{ω : ϕ(t, ω)v(ω) ≥ ε} ≤ P{ω : v(ω) ≥ N}

for every N ∈ N. Therefore

lim_{t→∞} P{ω : ϕ(t, ω)v(ω) ≥ ε} = 0   (6.59)

for every {v(ω)} ∈ D and ε > 0. If A(ω) = {0} is not true almost surely,
then by Corollary 6.6.1 there exists an equilibrium u(ω) ≥ 0 such that P{ω :
u(ω) ≥ ε} > 0 for some ε > 0. However the relation

P{ω : ϕ(t, ω)u(ω) ≥ ε} = P{ω : u(θt ω) ≥ ε} = P{ω : u(ω) ≥ ε} > 0

contradicts (6.59). Thus A(ω) = {0} almost surely.


(ii) In this case we have m(R+ ) < ∞. Therefore the function
ρ(x) = (N/σ(x)) exp{ 2 ∫_1^x f(ξ)/σ^2(ξ) dξ } ,   x > 0 ,   (6.60)

where N = [m(R_+)]^{−1}, is a stationary solution to the Fokker–Planck equation

∂ρ/∂t = (1/2) ∂^2/∂x^2 ( σ^2(x) ρ ) − ∂/∂x ( ( f(x) + (1/2) σ′(x)σ(x) ) ρ ) ,   x > 0 ,

possessing the property ∫_{R_+} ρ(x) dx = 1. Thus ρ(x) is a density of a sta-
tionary measure for the Markov family {ϕ(t, ω)x, x > 0}. Since in the case
considered the stationary measure on R_+ \ {0} is unique (see, e.g., Hors-
themke/Lefever [55]), the measure ρ(B) := ∫_B ρ(x) dx, B ∈ B(R_+), is
ergodic. Moreover the transition probabilities P_t(x, ·) converge weakly to the sta-
tionary measure ρ, i.e.

P_t(x, B) = P{ω : ϕ(t, ω)x ∈ B} → ρ(B),   as t → ∞ ,

for any x > 0 and B ∈ B(R+ ) (see, e.g., Mandl [78, Theorem 7, Chap.4]). In
particular, this implies that A(ω) = [0, ū(ω)], where ū(ω) is an F− -measurable
equilibrium such that ū(ω) > 0 almost surely. Indeed, if ū(ω) = 0 on a set of
positive measure, then by Lemma 3.5.1 ū(ω) = 0 almost surely. In this case
A(ω) = {0} almost surely and therefore P{ω : ϕ(t, ω)x ≥ δ} → 0 for any
x > 0 and δ > 0, which is impossible because ρ([δ, ∞)) > 0.
The uniqueness of the stationary measure ρ and Theorem 1.10.1 imply
that

P{ω : ũ(ω) ∈ B} = P{ω : ū(ω) ∈ B} = ρ(B),   B ∈ B(R_+) ,

for any F− -measurable positive equilibrium ũ(ω). Therefore the inequality


ũ(ω) < ū(ω) is impossible on a set of positive measure. Thus there are no
other positive F− -measurable equilibria in A(ω).
(iii) The assumption f (0) > 0 implies that ϕ(t, ω)0 > 0 for all t > 0 and
ω ∈ Ω. Thus by Proposition 3.5.2 there exists a positive F− -measurable
equilibrium u(ω) = lim_{t→∞} ϕ(t, θ_{−t}ω)0 and by Corollary 6.6.1 A(ω) =
[u(ω), ū(ω)], where 0 < u(ω) ≤ ū(ω) are F_−-measurable equilibria. Since
the property f(0) > 0 implies that m((0, 1]) < ∞, as in case (ii) we can
conclude that u(ω) = ū(ω) almost surely. 2

Remark 6.6.2. Assume that f (x) satisfies a Lipschitz condition on each com-
pact subset of R+ , f (0) = 0 and |f (x)| ≤ ax+b for some positive a and b. Let
σ(x) ∈ C 2 with bounded first and second derivatives and σ(0) = 0. It was
proved by Scheutzow [91] that A(ω) = {0} is a random attractor for the
RDS generated by (6.44) if and only if m((0, 1]) = ∞ and m([1, ∞)) < ∞.
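For Example 6.6.1 the speed-measure criterion can be made completely explicit: with f(x) = αx − βx^{N+1} and σ(x) = σx the density of m(dx) in (6.55) is proportional to x^{2α/σ^2−1} exp{−2βx^N/(σ^2 N)}, so m((0, 1]) < ∞ exactly when α > 0. The short Python sketch below (assumed parameter values) evaluates the truncated integrals m((ε, 1]) and makes this dichotomy visible.

import numpy as np

# Sketch: truncated speed measure m((eps, 1]) for Example 6.6.1; the integral
# stabilises for alpha > 0 and blows up as eps -> 0 for alpha < 0.

def speed_density(x, alpha, beta=1.0, N=2, sigma=0.5):
    return x**(2*alpha/sigma**2 - 1.0)*np.exp(-2*beta*x**N/(sigma**2*N))

for alpha in (0.5, -0.5):
    for eps in (1e-2, 1e-4, 1e-6):
        xs = np.linspace(eps, 1.0, 400_000)
        m = np.sum(speed_density(xs, alpha))*(xs[1] - xs[0])
        print(f"alpha = {alpha:+.1f}   m(({eps:.0e}, 1]) ~ {m:.4e}")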

Example 6.6.2. Consider the following generalization of equation (6.52)

dx = (αx − g(x))dt + σx ◦ dWt , (6.61)

where α, σ ∈ R \ {0} and g(x) ∈ C 2 (R+ ) satisfies

β1 xN +1 ≤ g(x) ≤ β2 xN +1 , x ≥ 0,

where β_1, β_2 > 0 and N > 0. Proposition 6.6.1 and Corollary 6.6.1 apply
here. Therefore (6.61) generates a strictly order-preserving RDS (θ, ϕ) in
R+ which has a random attractor A(ω) = [0, u(ω)] in the universe D of all
tempered subsets of R+ . Using the comparison principle it is easy to see that

ϕα,β2 (t, ω, x) ≤ ϕ(t, ω, x) ≤ ϕα,β1 (t, ω, x), x≥0, (6.62)

where (θ, ϕα,β ) is the RDS generated by (6.52). Therefore A(ω) = {0} if α < 0
and A(ω) = [0, u(ω)] when α > 0, where the F− -measurable equilibrium u(ω)
satisfies the inequality

0 < uα,β2 ,N (ω) ≤ u(ω) ≤ uα,β1 ,N (ω), ω∈Ω.

Here uα,β,N (ω) is given by (6.53). It follows from (6.57) that u(ω) attracts
every trajectory ϕ(t, ω)x with x > 0 with respect to convergence in distribu-
tion. However using properties of the RDS generated by (6.52) and relation
(6.62) we can prove that

lim_{t→∞} ϕ(t, θ−t ω, x) = u(ω)   for all   x > 0                            (6.63)

almost surely. Indeed, (6.54) and (6.62) imply that the omega-limit set Γx (ω)
emanating from x > 0 (see Definition 1.6.1) possesses the property

Γx (ω) ⊆ [uα,β2 ,N (ω), uα,β1 ,N (ω)], ω∈Ω.

Thus by Lemma 3.4.1 and Remark 3.4.2(ii) u̲(ω) := inf Γx (ω) and ū(ω) :=
sup Γx (ω) are F− -measurable equilibria such that

0 < uα,β2 ,N (ω) ≤ u̲(ω) ≤ ū(ω) ≤ uα,β1 ,N (ω),   ω ∈ Ω .

It is clear that u̲(ω), ū(ω) ∈ (0, u(ω)]. Hence, applying Theorem 6.6.1(ii) we
obtain that u̲(ω) = ū(ω) = u(ω) almost surely. Thus Γx (ω) = {u(ω)} almost
surely and (6.63) holds. As in Arnold [3, Theorem 9.3.3] it is also possible
to prove the convergence property (6.63) for a random variable x = x(ω) > 0
such that x(ω) and x(ω)−1 are tempered.

6.6.2 Stochastic Equations on a Bounded Interval

Now we consider an SDE of the form (6.44) inside a bounded deterministic


interval [l, r]. We assume that

f (x) ∈ C 1,δ ([l, r]), f (l) ≥ 0, f (r) ≤ 0 , (6.64)

and

σ(x) ∈ Cb2,δ ([l, r]), σ(l) = σ(r) = 0, |σ(x)| > 0 if l < x < r . (6.65)

Here and above δ ∈ (0, 1]. Under these conditions by Proposition 6.2.3 equation
(6.44) generates a strictly order-preserving RDS (θ, ϕ) in the interval [l, r].
As above we introduce the speed measure m(dx) on [l, r] and the scale
function s : [l, r] → R ∪ {±∞} by the formulas (see, e.g., Karatzas/Shreve [62])

m(A) = ∫_A exp( 2 ∫_c^x f (ξ)/σ²(ξ) dξ ) dx/|σ(x)| ,   A ∈ B([l, r]) ,        (6.66)

and

s(x) = ∫_c^x exp( −2 ∫_c^y f (ξ)/σ²(ξ) dξ ) dy/|σ(y)| ,   x ∈ [l, r] ,
where c is a fixed point from (l, r).
Similar to Theorem 6.6.1 we can prove the following result.
Theorem 6.6.2. Assume that (6.64) and (6.65) hold. Let A(ω) be the ran-
dom attractor for the RDS (θ, ϕ) generated by (6.44) in the interval [l, r].
(i) If f (l) = f (r) = 0, then A(ω) = [l, r].
(ii) If f (l) = 0 and f (r) < 0, then

(a) A(ω) = {l} almost surely provided that m((l, c]) = ∞;


(b) the property m((l, c]) < ∞ implies that A(ω) = [l, u(ω)] for some F− -
measurable equilibrium u(ω) such that l < u(ω) < r almost surely. There
are no other F− -measurable equilibria in the set (l, u(ω)] and

lim_{t→∞} P{ω : ϕ(t, ω)x ∈ B} = P{ω : u(ω) ∈ B} ≡ m(B)/m([l, r])             (6.67)

for all x ∈ (l, r] and B ∈ B([l, r]).


(iii) If f (l) > 0 and f (r) = 0, then
(a) A(ω) = {r} almost surely provided that m([c, r)) = ∞;
(b) A(ω) = [u(ω), r] provided that m([c, r)) < ∞, where u(ω) is an F− -
measurable equilibrium such that l < u(ω) < r almost surely. There
are no other F− -measurable equilibria in the set [u(ω), r) and (6.67) holds
for all x ∈ [l, r) and B ∈ B([l, r]).
(iv) If f (l) > 0 and f (r) < 0, then there exists an F− -measurable equilibrium
u(ω) such that l < u(ω) < r and A(ω) = {u(ω)} almost surely. Moreover
(6.67) holds for all x ∈ [l, r].

Proof. Assertion (i) follows from the property ϕ(t, ω)[l, r] = [l, r] for all t > 0
and ω ∈ Ω. To prove the other assertions of the theorem we note that f (l) > 0
(resp. f (r) < 0) implies that m((l, c]) < ∞ (resp. m([c, r)) < ∞). Therefore
we can apply the same argument as in the proof of Theorem 6.6.1. □

Now we consider the case f (l) = f (r) = 0 in detail. We are interested in the
description of the long-time behaviour of trajectories inside the attractor.
The following assertion is an almost direct consequence of the well-known
theorems on the boundary behaviour of one-dimensional diffusion processes
(see, e.g., Ikeda/Watanabe [57] or Karatzas/Shreve [62]).
Theorem 6.6.3. Assume that (6.64) and (6.65) hold. Let f (l) = f (r) = 0.
Denote by (θ, ϕ) the RDS generated by (6.44) in the interval [l, r].
(i) If m([l, r]) < ∞, then there exists an F− -measurable equilibrium u(ω)
such that l < u(ω) < r almost surely and (6.67) holds for all x ∈ (l, r)
and B ∈ B([l, r]). Moreover


P{ω : lim inf_{t→∞} ϕ(t, ω)x = l} = P{ω : lim sup_{t→∞} ϕ(t, ω)x = r} = 1     (6.68)
for any x ∈ (l, r) and the process ϕ(t, ω)x is recurrent, i.e. for any
y ∈ (l, r) we have

P {ω : ϕ(t, ω)x = y for some t ∈ R+ } = 1. (6.69)



(ii) If m((l, c]) = ∞ and m([c, r]) < ∞, then

lim_{t→∞} P{ω : ϕ(t, ω)x ≥ l + ε} = 0,   x ∈ [l, r) ,

for any ε > 0 and


 

P{ω : lim_{t→∞} ϕ(t, ω)x = l} = P{ω : sup_{t∈R+} ϕ(t, ω)x < r} = 1            (6.70)

for any x ∈ [l, r) provided that s(l) > −∞.


(iii) If m((l, c]) < ∞ and m([c, r]) = ∞, then

lim_{t→∞} P{ω : ϕ(t, ω)x ≤ r − ε} = 0,   x ∈ (l, r] ,

for any ε > 0 and




P{ω : lim_{t→∞} ϕ(t, ω)x = r} = P{ω : inf_{t∈R+} ϕ(t, ω)x > l} = 1            (6.71)

for any x ∈ (l, r] provided that s(r) < ∞.


(iv) Let m((l, c]) = ∞ and m([c, r]) = ∞. Then

lim_{t→∞} P{ω : ϕ(t, ω)x ∈ [l + ε, r − ε]} = 0,   x ∈ (l, r) ,                (6.72)

for any ε > 0. Moreover


(a) if s(l) > −∞ and s(r) < ∞, then

P{ω : lim_{t→∞} ϕ(t, ω)x = l} = 1 − P{ω : lim_{t→∞} ϕ(t, ω)x = r}
                              = (s(r) − s(x)) / (s(r) − s(l)) ,   x ∈ [l, r] ;        (6.73)

(b) if s(l) > −∞ and s(r) = ∞, then (6.70) holds;
(c) if s(l) = −∞ and s(r) < ∞, then (6.71) holds;
(d) if s(l) = −∞ and s(r) = ∞, then (6.68) and (6.69) hold.

Proof. (i) As in the proof of Theorem 6.6.1(ii) it is easy to see that ρ(x) :=
m((l, x])/m([l, r]) solves the stationary Fokker-Planck equation on (l, r) with
∫_l^r ρ(x)dx = 1, and ρ(B) = ∫_B ρ(x)dx, B ∈ B((l, r)), is a unique ergodic
stationary measure.
Therefore using Theorem 2.3.45 in Arnold [3] we can prove that the limit

∫_l^r h(x)µω (dx) := lim_{t→∞} ∫_l^r h(ϕ(t, θ−t ω)x)ρ(x)dx

exists almost surely for all h ∈ Cb([l, r]). Moreover µω is a disintegration
of a Markov invariant measure and ∫_Ω µω (B)P(dω) = ρ(B), B ∈ B([l, r]). By
Remarks 1.10.1 and 2.4.1 there exists a version of µω such that

∫_l^r h(x)µθt ω (dx) = ∫_l^r h(ϕ(t, ω)x)µω (dx)   for all   ω ∈ Ω .

Therefore by Proposition 3.5.1 there exists an F− -measurable equilibrium
u(ω) such that µω = δu(ω) . Since ρ((l, r)) = 1 and

P{ω : l < α ≤ u(ω) ≤ β < r} = ∫_Ω µω ([α, β])P(dω) = ρ([α, β]) ,

we have that l < u(ω) < r almost surely and (6.67) holds for x ∈ (l, r).
Since the property m([l, r]) < ∞ implies that s(l) = −∞ and s(r) = ∞
(see, e.g., Crauel et al. [38, Lemma 2.4]), we can apply Proposition 5.5.22
in Karatzas/Shreve [62] (see also Ikeda/Watanabe [57, Theorem 6.3.1])
to obtain (6.68) and (6.69).
To prove (ii) and (iii) we can repeat, with a slight modification, the
argument given in the proof of Theorem 6.6.1(i) and apply Proposition 5.5.22
in Karatzas/Shreve [62].
(iv) To prove (6.72) we consider the process

y(t, ω; z) := G( ϕ(t, ω)G−1 (z) ) ,   z ∈ R ,

where G(x) is a primitive for [σ(x)]−1 on the interval (l, r). This process
solves the SDE

dy = ( f (G−1 (y)) / σ(G−1 (y)) ) · dt + dWt ,   y(0) = z .                   (6.74)

It follows from Friedman [45, Chaps.5 and 15] that {y(t, ω; z) : z ∈ R} is a
Feller process with transition probability

P̃t (z, B) := P{ω : y(t, ω; z) ∈ B} = ∫_B p(t, z, y)dy ,

where p(t, z, y) is a continuous strictly positive function on (R+ \{0})×R×R.


It is also easy to see that the speed measure m̃ for equation (6.74) possesses
the property m̃(R) = ∞. Therefore it follows from Kunita [74, Theorem
1.3.10] that

lim_{t→∞} P̃t (z, [a, b]) = 0   for any   −∞ < a < b < ∞ .

Now using the relation

P {ω : ϕ(t, ω)x ∈ C} = P {ω : y(t, ω; G(x)) ∈ G(C)} = P̃t (G(x), G(C)) ,

where C ∈ B((l, r)), we obtain (6.72).


Assertions (iv-a)-(iv-d) are direct consequences of Proposition 5.5.22 in
Karatzas/Shreve [62]. □
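Which case of Theorem 6.6.3 applies can be checked numerically from (6.66) by estimating m and s near the endpoints. The sketch below is only an illustration under assumptions: f and σ are hypothetical choices (in fact f = 0.1 σ, i.e. (6.75) with α = 0.1, so case (iii) and (6.71) are expected), and finiteness is judged from the behaviour of truncated integrals.

```python
# Rough numerical sketch: approximate m((l+eps, c]), m([c, r-eps]) and the
# scale-function mass near each endpoint for decreasing eps; growth without
# bound indicates an infinite value.  f and sigma are hypothetical choices.
import numpy as np
from scipy.integrate import quad

l, r, c = 0.0, 1.0, 0.5
f = lambda x: 0.1 * x * (1.0 - x)          # f(l) = f(r) = 0
sigma = lambda x: x * (1.0 - x)            # sigma(l) = sigma(r) = 0, > 0 inside

def F(x):                                   # 2 * int_c^x f/sigma^2
    val, _ = quad(lambda xi: f(xi) / sigma(xi)**2, c, x)
    return 2.0 * val

speed = lambda x: np.exp(F(x)) / abs(sigma(x))     # density of m, cf. (6.66)
scale = lambda y: np.exp(-F(y)) / abs(sigma(y))    # density of s

for eps in (1e-2, 1e-3, 1e-4):
    m_left, _ = quad(speed, l + eps, c)
    m_right, _ = quad(speed, c, r - eps)
    s_left, _ = quad(scale, l + eps, c)
    s_right, _ = quad(scale, c, r - eps)
    print(f"eps={eps:.0e}: m_left={m_left:.3g} m_right={m_right:.3g} "
          f"s_left={s_left:.3g} s_right={s_right:.3g}")
# Columns that keep growing as eps decreases correspond to infinite quantities;
# here m_right and s_left diverge, selecting case (iii) of Theorem 6.6.3.
```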

Theorem 6.6.3 implies the following result on a random attractor with respect
to the convergence in probability (see Ochs [86] for the theory of attractors
based on this type of convergence).
Corollary 6.6.2. Assume that (6.64) and (6.65) hold. Let f (l) = f (r) = 0
and m((l, r)) = ∞. Then the two-point set A := {l, r} is a weak point random
attractor for the RDS (θ, ϕ) generated by (6.44) in the interval [l, r]. This
means that (i) ϕ(t, ω)A = A for all t ≥ 0 and ω ∈ Ω; (ii) ϕ(t, ω)x converges
to A in probability for every x ∈ [l, r], i.e.

lim_{t→∞} P{ω : dist(ϕ(t, ω)x, A) ≥ ε} = 0,   x ∈ [l, r] ,

for any ε > 0; (iii) A is a minimal set possessing the properties (i) and (ii).

Proof. It follows from Theorem 6.6.3(ii-iv) and the relation

P{ω : ϕ(t, ω)x ∈ [l + ε, r − ε]} = P {ω : dist (ϕ(t, ω)x, A) ≥ ε} .

Example 6.6.3. Consider a Stratonovich SDE of the form

dx = ασ(x)dt + σ(x) ◦ dWt , (6.75)

where α ∈ R is a parameter and

σ(x) ∈ Cb2,δ (R), σ(l) = σ(r) = 0, σ(x) > 0 if x ∈ (l, r) , (6.76)

for some bounded interval [l, r] ⊂ R and δ ∈ (0, 1]. In this case the speed
measure m has the form

m([a, b]) = ∫_a^b dx/σ(x) ,   [a, b] ⊂ [l, r],   if α = 0 ,

and

m([a, b]) = (1/2α) · [ exp( 2α ∫_c^b dξ/σ(ξ) ) − exp( 2α ∫_c^a dξ/σ(ξ) ) ] ,   if α ≠ 0 ,

where c is a fixed point from (l, r). For the scale function s we have the
representation

s(x) = ∫_c^x dξ/σ(ξ) ,   x ∈ [l, r],   if α = 0 ,

and

s(x) = (1/2α) · [ 1 − exp( −2α ∫_c^x dξ/σ(ξ) ) ] ,   if α ≠ 0 .

Therefore the application of Theorem 6.6.3 gives the following result.



(i) If α < 0, then (6.70) holds.


(ii) If α = 0, then we have (6.68), (6.69) and (6.72).
(iii) If α > 0, then (6.71) holds.
From Corollary 6.6.2 we also have that {l, r} is a weak point random attractor
for all α ∈ R. On the other hand, relying on the representation

ϕ(t, ω)x = G−1 (G(x) + αt + Wt (ω)), x ∈ (l, r) , (6.77)

where G(x) is a primitive for [σ(x)]−1 on the interval (l, r), and using the law
of the iterated logarithm for the one-dimensional Wiener process (see, e.g.,
Friedman [45, p.40]) it is easy to prove that in the case α = 0 the interval
[l, r] is the pull back omega-limit set for the trajectory emanating from any
point x ∈ (l, r).
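The representation (6.77) can be simulated directly once σ is fixed. The following sketch assumes the hypothetical choice σ(x) = (x − l)(r − x) on [l, r] = [0, 1], for which G(x) = log((x − l)/(r − x))/(r − l) has an explicit inverse; it is an illustration only.

```python
# Sketch: simulate phi(t, omega)x = G^{-1}(G(x) + alpha*t + W_t) from (6.77)
# for the assumed diffusion sigma(x) = (x - l)(r - x) on [l, r] = [0, 1].
import numpy as np

rng = np.random.default_rng(0)
l, r, alpha = 0.0, 1.0, 0.0                # alpha = 0: the recurrent case

def G(x):                                   # primitive of 1/sigma on (l, r)
    return np.log((x - l) / (r - x)) / (r - l)

def G_inv(y):
    u = np.exp((r - l) * y)
    return (l + r * u) / (1.0 + u)

T, n = 200.0, 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

path = G_inv(G(0.3) + alpha * t + W)        # exact in x; only W is discretized
print("range visited:", path.min(), path.max())
# For alpha = 0 the law of the iterated logarithm drives the path arbitrarily
# close to both endpoints, in line with (6.68); for alpha != 0 it eventually
# settles near one endpoint, in line with (6.70) or (6.71).
```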
If in addition we assume that σ′(l) > 0 and σ′(r) < 0, then relying on
(6.77) after a simple calculation (see Chueshov/Vuillermot [25]) we find
that

lim_{t→∞} (1/t) log |ϕ(t, ω)x − l| = ασ′(l),   P-a.s.,   x ∈ (l, r),   α < 0 ,

and

lim_{t→∞} (1/t) log |ϕ(t, ω)x − r| = ασ′(r),   P-a.s.,   x ∈ (l, r),   α > 0 .
To interpret these results we recall (see, e.g., Khasminskii [64]) that (a) an
equilibrium u (either l or r) is said to be stable in probability if the relation

lim_{x→u} P{ω ∈ Ω : sup_{t>0} |ϕ(t, ω)x − u| > ε} = 0                         (6.78)

holds for every ε > 0; (b) u is globally asymptotically stable in probability if


relation (6.78) holds and if we have

P{ω ∈ Ω : lim_{t→∞} |ϕ(t, ω)x − u| = 0} = 1

for every x ∈ (l, r); (c) u is unstable in probability if (6.78) does not hold.
Thus in this example, on the one hand, we have the weak point attractor
A = {l, r}, which does not depend on α; on the other hand, we observe the
following bifurcation picture (with respect to the parameter α):
(i) if α < 0, l is globally asymptotically stable, whereas r is unstable in
probability;
(ii) if α = 0, both l and r are unstable in probability;
(iii) if α > 0, r is globally asymptotically stable, whereas l is unstable in
probability.

We note that a similar character of the exchange of stability between the


equilibria l and r can be seen in the random case (see Example 5.6.2) and also
in the case of equation (6.44) with σ satisfying (6.76) and with f ∈ C 1,δ ([l, r])
possessing the property

β1 σ(x) ≤ f (x) ≤ β2 σ(x), x ∈ [l, r] ,

where β1 and β2 are positive constants.

Example 6.6.4. Consider now an Itô SDE on the interval (l, r) of the same
form as (6.75):
dx = ασ(x)dt + σ(x)dWt , (6.79)
where α ∈ R is a parameter and σ(x) satisfies (6.76). Assume that σ′(l) > 0
and σ′(r) < 0 hold. This equation can be written as a Stratonovich SDE of
the form

dx = ( α − (1/2)σ′(x) ) σ(x)dt + σ(x) ◦ dWt .
The speed measure m and the scale function s for this equation are repre-
sented by the formulas

m(A) = σ(c) ∫_A exp( 2α ∫_c^x dξ/σ(ξ) ) dx/σ²(x) ,   A ∈ B([l, r]) ,

and

s(x) = (1/σ(c)) ∫_c^x exp( −2α ∫_c^y dξ/σ(ξ) ) dy ,   x ∈ [l, r] ,
where c is a fixed point from (l, r). It is easy to see that m((l, r)) = ∞ for
every α ∈ R. Therefore, as in Example 6.6.3, Corollary 6.6.2 implies that the
two-point set {l, r} is a weak point random attractor for the RDS in [l, r]
generated by (6.79) for every α ∈ R. Applying Theorem 6.6.3 we obtain the
following more precise information on the dynamics:
(i) if α ≤ (1/2)σ′(r), then (6.70) holds;
(ii) if (1/2)σ′(r) < α < (1/2)σ′(l), then we have (6.73);
(iii) if α ≥ (1/2)σ′(l), then (6.71) holds.
Thus we observe that in contrast with the RDS generated by (6.75) the long-
time behaviour of ϕ(t, ω)x is characterized by the absence of any recurrent
and oscillatory behavior for all values of α. Moreover, it is also possible to
prove (see Chueshov/Vuillermot [26]) that the bifurcation picture in this
example is the following:
(i) if α ≤ (1/2)σ′(r), l is globally asymptotically stable, whereas r is unstable
in probability;

(ii) if (1/2)σ′(r) < α < (1/2)σ′(l), both l and r are stable in probability;
(iii) if α ≥ (1/2)σ′(l), r is globally asymptotically stable, whereas l is unstable
in probability.
Thus the exchange of stability between the equilibria l and r, when the pa-
rameter α varies from minus infinity to plus infinity, is slower (softer) in this
example in contrast with the picture that we can see in Example 6.6.3. How-
ever in both examples the weak point random attractor A = {l, r} is the
same for all α ∈ R.
The following example shows that the idea outlined in Sect.5.6 can also be
used in the stochastic case.
Example 6.6.5. Consider the Stratonovich SDE
dx = (α1 sin x + α2 (1 − cos x)) dt
(6.80)
+ σ1 sin x ◦ dWt1 + σ2 (1 − cos x) ◦ dWt2 ,
where αi and σi are parameters and Wt = (Wt1 , Wt2 ) is a Wiener process
in R2 . This equation generates an RDS in R. However, as in Example 5.6.3,
it is natural to consider equation (6.80) on the unit circle C which is inter-
preted as the interval [0, 2π] with identified end-points. Using Itô’s formula
for the Stratonovich integrals (see Theorem 2.3.1) it is easy to see that (6.80)
generates an RDS (θ, ϕ) in C with the cocycle
ϕ(t, ω)x = 2arccot (−ψ(t, ω)[− cot(x/2)]) , 0 < x < 2π ,
where ψ(t, ω) is the cocycle in R generated by the affine Stratonovich equation
dy = (−α1 y + α2 ) dt − σ1 y ◦ dWt1 + σ2 ◦ dWt2 .
In the case α2 = σ1 = 0 we obtain the SDE

dy = −α1 y dt + σ2 dWt2 .

Therefore the equation

dx = α1 sin x dt + σ2 (1 − cos x) ◦ dWt2

generates an RDS (θ, ϕ̃) with the cocycle

ϕ̃(t, ω)x = 2 arccot( e−α1 t · cot(x/2) − σ2 ∫_0^t e−α1 (t−s) dWs2 ) ,   0 < x < 2π .
2 0

This formula implies that for any α1 ≠ 0 the RDS (θ, ϕ̃) has two equilibria
u0 ≡ 0 and u(ω) ∈ (0, 2π) whose stability properties can be easily derived
from the results presented in Example 2.4.4. The case α1 = 0 is covered by
Example 6.6.3 with α = 0.
Remark 6.6.3. We also note that it is possible to give examples which show
that all the cases listed in Theorem 6.6.3 really occur. We refer to
Scheutzow [91], where the corresponding examples are presented in the
case [l, r) = R+ .

6.7 Stochastic Equations with Concavity Properties

We start with conditions that ensure that the RDS generated by the system
of stochastic cooperative differential equations (6.1) is sublinear.
Lemma 6.7.1. Assume that conditions (S1), (S3) and (S4) hold. Let σij (x)
be linear functions, i.e.

σij (x) = σij · x for all x ∈ R+ , (6.81)

where σij are constants. If the function f (·) is a sublinear mapping from Rd+
into Rd , i.e. if

λf (x) ≤ f (λx) for all 0<λ<1 and x ∈ Rd+ , (6.82)

then the RDS (θ, ϕ) generated by (6.1) is sublinear. Moreover (θ, ϕ) is strongly
sublinear if one of the following conditions is satisfied:
(i) f is a strongly sublinear mapping, i.e. λf (x) ≪ f (λx) for all 0 < λ < 1
and x ∈ intRd+ ;
(ii) the matrix [∂fi /∂xj ] is irreducible for all x ∈ intRd+ and λf (x) < f (λx)
for all 0 < λ < 1 and x ∈ intRd+ .

Proof. The function xλ (t) = λ · ϕ(t, ω)x is the solution to the problem

dxλi (t) = fiλ (xλ (t))dt + Σ_{j=1}^m σij · xλi (t) ◦ dWtj ,   i = 1, . . . , d ,

with initial data xλ (0) = λ · x, where f λ (x) = λf (λ−1 x). From (6.82) we
have f λ (x) ≤ f (x). Therefore the stochastic comparison principle (see The-
orem 6.4.1(i)) gives

λ · ϕ(t, ω)x ≡ xλ (t) ≤ x(t) ≡ ϕ(t, ω)[λx] .

To prove that (θ, ϕ) is strongly sublinear under either condition (i) or (ii)
we note that by Theorem 6.3.1 (θ, ϕ) is equivalent to the RDS (θ, ψ)
generated by (6.5) with g given by (6.6). Therefore we can use Lemma 5.5.1.
□

We note that Lemma 6.7.1 remains true if we understand (6.1) in the Itô
sense (cf. Remark 6.4.1). It is also easy to see that the RDS generated by
equation (6.52) gives an example of a strongly sublinear RDS.
Now we apply the results presented in Chap.4 for sublinear systems. The
application of Corollary 4.3.1 gives the following assertion.

Theorem 6.7.1. Assume that conditions (S1), (S3), (S4) and (6.81) hold
and f (0) ≫ 0. Let either assumption (i) or (ii) of Lemma 6.7.1 be valid.
Then either
(i) for any x ∈ Rd+ we have |ϕ(t, θ−t ω)x| → ∞ almost surely as t → ∞
or
(ii) there exists a unique almost equilibrium u(ω) ≫ 0 defined on a θ-
invariant set Ω ∗ of full measure such that

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = u(ω),   ω ∈ Ω∗ ,

for any random variable v(ω) possessing the property 0 ≤ v(ω) ≤ λu(ω) for
all ω ∈ Ω ∗ and for some nonrandom λ > 0.

Proof. The property f (0) ≫ 0 implies that the function g(ω, x) given by
formula (6.6) satisfies (5.36). Therefore it follows from Proposition 5.4.1 and
Theorem 6.3.1 that ϕ(t, ω)0 ≫ 0 for all t > 0 and ω ∈ Ω. Thus (θ, ϕ) is
strongly positive. It is also clear that any finite-dimensional RDS is condi-
tionally compact. Therefore we can apply Corollary 4.3.1. □

Now we prove a stochastic version of the trichotomy theorem (for the random
case see Theorem 5.5.2).
Theorem 6.7.2 (Limit Set Trichotomy). Let conditions (S1), (S3), (S4),
(6.81) and either (i) or (ii) of Lemma 6.7.1 hold. Assume that the coefficients
σij ≡ σj are independent of i and there exist positive constants a and b such that

−a · xj ≤ fj (x) ≤ b · (1 + |x|1 ) for all x ∈ Rd+ , j = 1, . . . , d , (6.83)


where | · |1 is the l1 -norm in Rd , i.e. |x|1 = Σ_{j=1}^d |xj | for x ∈ Rd . Let e =
(1, . . . , 1) ∈ Rd+ and
η(ω) = ∫_{−∞}^{∞} exp( −ν|τ | − Wτ(σ) (ω) ) dτ ,                              (6.84)

where ν is a positive constant and Wt(σ) = Σ_{j=1}^m σj Wtj . Denote by Cη =
Cη (ω) the collection of random variables w : Ω → Rd+ possessing the property

α−1 η(ω)e ≤ w(ω) ≤ αη(ω)e   for all   ω ∈ Ω

for some nonrandom number α ≥ 1. Let (θ, ϕ) be the RDS generated by the
equation
dxi (t) = fi (x1 (t), . . . , xd (t))dt + xi (t) ◦ dWt(σ) ,   i = 1, . . . , d .

Then any orbit of (θ, ϕ) emanating from a ∈ Cη does not leave Cη , i.e.

ϕ(t, ω)a(ω) ∈ Cη (θt ω) for all a ∈ Cη (ω), t ≥ 0 , (6.85)



and precisely one of the following three cases applies:


(i) for all b ∈ Cη (ω), the orbit γb emanating from b is unbounded;
(ii) for all b ∈ Cη (ω), the orbit γb emanating from b is bounded, but the
closure of γb does not belong to Cη (ω) and

lim sup_{t→∞} sup_{ω∈Ω} p(ϕ(t, θ−t ω)b(θ−t ω), η(ω)) = ∞ ,

where p is the part metric in Rd+ ;


(iii) there exists a unique F-measurable almost equilibrium u ∈ Cη (ω),
and for all b ∈ Cη (ω) the orbit emanating from b converges to u, i.e.

lim_{t→+∞} ϕ(t, θ−t ω)b(θ−t ω) = u(ω)   for almost all   ω ∈ Ω .              (6.86)

Proof. As in the proof of Theorem 5.5.2 it suffices to check the invari-
ance property of Cη (ω). It follows from (6.83) that

−a · x ≤ f (x) ≤ b · (1 + |x|1 ) · e for all x ∈ Rd+ .

Therefore Theorem 6.4.1 implies that

y (1) (t) ≤ ϕ(t, ω)η(ω)e ≤ y (2) (t) , (6.87)

where y (1) (t) and y (2) (t) are solutions to the problems

dy (1) (t) = −a · y (1) (t) · dt + y (1) (t) ◦ dWt(σ) ,   y (1) (0) = η(ω)e ,        (6.88)

and

dy (2) (t) = b · (1 + |y (2) (t)|1 ) · e · dt + y (2) (t) ◦ dWt(σ) ,   y (2) (0) = η(ω)e .   (6.89)

Using (6.88) it is easy to find that


yi(1) (t) = η(ω) · exp( −at + Wt(σ) ) ,   i = 1, . . . , d .                     (6.90)

From (6.89) we have that


d|y (2) (t)|1 = bd · (1 + |y (2) (t)|1 ) · dt + |y (2) (t)|1 ◦ dWt(σ) .           (6.91)

Therefore

yi(2) (t) ≤ |y (2) (t)|1 = d · η(ω) · exp( bd · t + Wt(σ) )
                        + bd · ∫_0^t exp( bd · (t − τ ) + Wt(σ) − Wτ(σ) ) dτ .     (6.92)

Using the equality


Wt(σ) (ω) − Wτ(σ) (ω) = −Wτ−t(σ) (θt ω)

and relations (6.84) and (6.87), it is easy to see that

e−(a+ν)t η(θt ω) · e ≤ ϕ(t, ω) [η(ω) · e] ≤ d(1 + b)e(ν+bd)t η(θt ω) · e .

This implies the invariance of Cη (ω). Thus we can apply Theorem 4.4.1 and
Corollary 4.4.1. □

In order to show that all three cases of the limit set trichotomy can actually
occur we consider the following example.
Example 6.7.1. Consider the Stratonovich stochastic differential equation on
X = R+ given by

dx = f (x)dt + σx ◦ dWt , σ∈R, (6.93)

where
f (x) = αx + x/(1 + x) ,   α ∈ R ,
which is strongly sublinear for any α ∈ R, hence by Lemma 6.7.1 the RDS
(θ, ϕ) generated by (6.93) is strongly sublinear for any α, σ ∈ R. Theo-
rem 6.7.2 is applicable here for any α and σ.
The point x = 0 is always an equilibrium.
Consider first the case α > 0. Since f (x) ≥ αx, the comparison principle
yields
ϕ(t, ω)b(ω) ≥ b(ω)eαt+σWt (ω) ,
hence
ϕ(t, θ−t ω)b(θ−t ω) ≥ b(θ−t ω)eαt−σW−t (ω) .
Consequently, for any initial random variable b such that 1/b(ω) is tempered,
the orbit γb of ϕ emanating from b is unbounded, in fact converges to infinity
with probability one.
Now consider the case −1 < α < 0. Then f (x) < 1 + αx and Corol-
lary 6.6.1 implies that (θ, ϕ) possesses a random attractor A(ω) = [0, u(ω)],
where u(ω) ≥ 0 is an F− -measurable tempered equilibrium. A simple calcula-
tion shows that the speed measure m given by (6.55) satisfies m((0, 1]) < ∞
if α > −1. Therefore by Theorem 6.6.1 u(ω) > 0 almost surely and it follows
from Theorem 4.2.2 that u(ω) is attractive in the part Cu .
Here we can also clearly see why the initial values have to be restricted
somehow, e.g. to those in Cu . Take, for example, a random variable a(ω) > 0

which is not tempered, i.e. for which lim sup_{t→∞} (1/t) log a(θ−t ω) = +∞. Non-
tempered random variables exist on any standard probability space and for
any ergodic and aperiodic θ (see Arnold/Cong/Oseledets [9, Lemma 8.6]),
which includes the present situation. Then a ∉ Cu and since

ϕ(t, θ−t ω)a(θ−t ω) ≥ a(θ−t ω)eαt−σW−t (ω) ,

the right-hand side tends to +∞ along some sequence tn → ∞. Hence the
orbit emanating from a does not converge to u.
Finally, if α < −1 then ϕ is dominated by the linear cocycle generated by
dx = (α + 1)xdt + σx ◦ dWt , hence

ϕ(t, θ−t ω)b(θ−t ω) ≤ b(θ−t ω)e(α+1)t−σW−t (ω) .

Thus whenever b is tempered

lim_{t→∞} ϕ(t, θ−t ω)b(θ−t ω) = 0

almost surely. If b is even ε-slowly varying, i.e. if b(θt ω) ≤ b(ω)eε|t| for some
ε > 0 such that ε + (α + 1) < 0, then the orbit γb is bounded, but the
closure of γb contains elements (namely 0) which do not belong to any part
Cv ⊂ intR+ .
We can combine several of the equations (6.93) to produce more compli-
cated limit behaviour.
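The three regimes of (6.93) can also be illustrated by a direct simulation. The sketch below applies an Euler scheme to the Itô form of the equation (the Stratonovich correction σ²x/2 is added to the drift); the parameter values are illustrative assumptions.

```python
# Sketch: Euler-Maruyama for the Ito form of (6.93),
#   dx = (alpha*x + x/(1+x) + sigma^2*x/2) dt + sigma*x dW,
# where sigma^2*x/2 is the Stratonovich-to-Ito correction.  Parameters are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

def run(alpha, sigma=0.3, x0=0.5, T=20.0, n=20_000):
    dt = T / n
    x = x0
    for _ in range(n):
        drift = alpha * x + x / (1.0 + x) + 0.5 * sigma**2 * x
        x += drift * dt + sigma * x * rng.normal(0.0, np.sqrt(dt))
        x = max(x, 0.0)                     # keep the scheme on R_+
    return x

for a in (0.5, -0.5, -1.5):                 # alpha > 0, -1 < alpha < 0, alpha < -1
    print(f"alpha = {a:+.1f}:  x(T) ≈ {run(a):.4g}")
# Expected qualitative behaviour: growth to +infinity, fluctuation around a
# positive random equilibrium, and decay towards the equilibrium 0.
```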
The following assertion gives conditions under which the RDS generated by
(6.1) is s-concave (see Definition 4.1.3).
Proposition 6.7.1. Assume that assumptions (S1), (S3), (S4) and (6.81)
are met and that the matrix Dx f (x) is irreducible in intRd+ . Let the function
f (x) be s-concave, i.e.

Dx f (x) < Dx f (y)   for   x ≫ y ≫ 0 .                                        (6.94)

Then (θ, ϕ) is s-concave and strongly order-preserving. In particular we have

ϕ(t, ω)(Rd+ \ {0}) ⊂ intRd+ for all ω∈Ω. (6.95)

Proof. This follows from Theorem 6.3.1 and Proposition 5.5.2. □

As in the random case (see Theorem 5.5.3) we can also prove the following
assertion.
Theorem 6.7.3. Let (S1), (S3), (S4) and (6.81) hold. Assume that the
function f (x) is s-concave and that the matrix Dx f (x) is irreducible for all
x ∈ intRd+ . If f (0) > 0, then either (a) for all v(ω) ≥ 0, the orbit γv ema-
nating from v is unbounded; or (b) there exists a unique equilibrium u ≫ 0
such that for every v(ω) possessing the property 0 ≤ v(ω) ≤ α · u(ω) with some

α > 0 the orbit emanating from v converges to u on a θ-invariant set Ω ∗ of


full measure, i.e.

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = u(ω)   for all   ω ∈ Ω∗ .                    (6.96)

If we additionally assume that the affine RDS generated by the equation


 

dy = (f (0) + Dx f (0)y) · dt + y ◦ d( Σ_{j=1}^m σj Wtj )                      (6.97)

possesses a super-equilibrium w(ω) ∈ intRd+ , then there exist bounded orbits


and assertion (b) holds. If f (0) ≡ 0 and the top Lyapunov exponent of the
linear SDE (6.97) is negative (which is the case if and only if the eigenvalues
of Dx f (0) have negative real parts), then we have

lim_{t→∞} ϕ(t, θ−t ω)x = 0   for all   x ∈ Rd+

on a θ-invariant set Ω ∗ of full measure.

6.8 Applications

We start with a stochastic version of the model of control of protein synthesis


in the cell (cf. Subsect.5.7.1).

6.8.1 Stochastic Biochemical Control Circuit

Consider the following system of Stratonovich stochastic equations

dx1 (t) = (g(xd (t)) − α1 x1 (t))dt + σ1 · x1 (t) ◦ dWt1 , (6.98)

dxj (t) = (xj−1 (t) − αj xj (t))dt + σj · xj (t) ◦ dWtj , j = 2, . . . , d . (6.99)


Here σj are nonnegative and αj are positive constants, j = 1, . . . , d and
g : R+ → R+ is a C 1 function such that

0 < g(u) ≤ au + b,   and   g′(u) ≥ 0   for every u > 0                         (6.100)

for some constants a and b. Proposition 6.2.2 implies that equations (6.98)
and (6.99) generate a strictly order-preserving RDS (θ, ϕ) in the cone Rd+ .
To construct sub- and super-equilibria for (6.98) and (6.99) we consider the
following auxiliary affine problem

dy1 (t) = (b + ayd (t) − α1 y1 (t))dt + σ1 · y1 (t) ◦ dWt1 , (6.101)



dyj (t) = (yj−1 (t) − αj yj (t))dt + σj · yj (t) ◦ dWtj , j = 2, . . . , d . (6.102)


It is clear (see Proposition 6.2.2 and Corollary 6.3.2) that equations (6.101)
and (6.102) generate a strongly order-preserving affine RDS for every b ≥ 0.
We denote the corresponding cocycle by ϕaff (t, ω). It is clear that
ϕaff (t, ω)x = Φ(t, ω)x + ∫_0^t Φ(t − τ, θτ ω)b dτ,   t > 0, ω ∈ Ω ,

where Φ(t, ω) is the cocycle generated by (6.101) and (6.102) with b = 0 and
b = (b, 0, . . . , 0) ∈ Rd+ . If a is small enough, then the cocycle Φ(t, ω) has a
negative top Lyapunov exponent, i.e. there exists λ < 0 such that

|Φ(t, ω)x| ≤ Rε (ω)e(λ+ε)t |x|, ω ∈ Ω∗, t ≥ 0, x ∈ Rd ,

for every ε > 0, where Rε (ω) > 0 is a tempered random variable and
Ω ∗ ∈ F is a θ-invariant set of full measure. We can suppose Ω ∗ = Ω (see Re-
mark 1.2.1(iii)). By Theorem 5.6.5 (Arnold [3]) the RDS (θ, ϕaff ) possesses
a unique tempered equilibrium
v(ω) ≡ ∫_0^∞ Φ(τ, θ−τ ω)b dτ,   ω ∈ Ω .

It is clear that this equilibrium is strongly positive and measurable with


respect to the past σ-algebra F− . Theorem 6.4.1 implies that the RDS (θ, ϕaff )
dominates (θ, ϕ) from above. Therefore µ · v(ω) is a super-equilibrium for
(θ, ϕ) for any µ ≥ 1.
Let D be the universe of all tempered random closed sets D(ω) of Rd+ .
Applying Proposition 1.9.3 we have that v(ω) uniformly attracts all random
sets D(ω) from D with exponential speed, i.e. there exists γ > 0 such that

lim_{t→+∞} eγt sup_{y∈D(θ−t ω)} |ϕaff (t, θ−t ω)y − v(ω)| = 0

for any D ∈ D. This means that for any µ > 1 the super-equilibrium µv(ω)
is absorbing for the RDS (θ, ϕ). On the other hand the affine RDS generated
by
dy1 (t) = (g(0) − α1 y1 (t))dt + σ1 · y1 (t) ◦ dWt1 ,
dyj (t) = (yj−1 (t) − αj yj (t))dt + σj · yj (t) ◦ dWtj , j = 2, . . . , d ,
dominates (θ, ϕ) from below. This system possesses a uniformly attracting
equilibrium w(ω) such that 0 ≤ w(ω) ≤ v(ω). It is easy to find for w(ω) =
(w1 (ω), . . . , wd (ω)) the recurrence formulae
w1 (ω) = g(0) ∫_{−∞}^0 eα1 t−σ1 Wt1 dt                                          (6.103)

and
wj (ω) = ∫_{−∞}^0 wj−1 (θt ω)eαj t−σj Wtj dt,   j = 2, . . . , d .               (6.104)

In particular we have w(ω) ≫ 0 when g(0) > 0 and w(ω) ≡ 0 if g(0) = 0. It


is also clear that µ−1 w(ω) is a sub-equilibrium for (θ, ϕ) for any µ ≥ 1. Thus
any interval of the form

[µ−1 w(ω), µv(ω)] = {u : µ−1 w(ω) ≤ u ≤ µv(ω)}

with µ > 1 is an absorbing invariant set for (θ, ϕ) and it belongs to D. By


Corollary 1.8.1 the RDS (θ, ϕ) generated by (6.98) and (6.99) in Rd+ possesses
a random attractor A(ω) ∈ D. This attractor belongs to the interval [w, v](ω).
Therefore applying Theorem 3.6.2 we obtain the existence of two equilibria
u(ω) and ū(ω) in A(ω) such that u(ω) ≤ ū(ω) and A(ω) ⊂ [u, ū](ω). These
equilibria are F− -measurable and they are globally asymptotically stable from
below and from above respectively, i.e.

lim_{t→+∞} ϕ(t, θ−t ω)w(θ−t ω) = u(ω)

and

lim_{t→+∞} ϕ(t, θ−t ω)v(θ−t ω) = ū(ω)

for any tempered w(ω) and v(ω) such that w(ω) ≤ u(ω) and v(ω) ≥ ū(ω).
Assume in addition to (6.100) that g(0) > 0, g′(x) > 0 for x > 0, and
g(x) is strictly sublinear, i.e. λg(x) < g(λx) for any 0 < λ < 1 and x > 0.
Lemma 6.7.1 implies that (θ, ϕ) is a strongly sublinear RDS. In this case
the sub-equilibrium w(ω) given by (6.103) and (6.104) is strongly positive.
Therefore we can apply Theorem 4.2.1 and Corollary 3.6.1 to prove that
the random attractor A(ω) is a one-point set consisting of a unique globally
asymptotically stable equilibrium.
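The sub-equilibrium w(ω) in (6.103) and (6.104) can be approximated along a sampled noise path by truncating the integrals over (−∞, 0] to a window [−T, 0]. The sketch below does this for d = 2; the parameter values, the choice g(0) = 1 and the truncation length are assumptions made only for illustration.

```python
# Sketch: approximate w_1(omega), w_2(omega) from (6.103)-(6.104) with the
# integrals over (-infinity, 0] truncated to [-T, 0].  Using
# w_1(theta_t omega) = g(0) * int_{-inf}^{t} e^{alpha1(u-t) - sigma1(W1(u)-W1(t))} du,
# both quantities reduce to cumulative sums along one sampled Wiener path.
import numpy as np

rng = np.random.default_rng(2)
alpha1, alpha2, sigma1, sigma2, g0 = 1.0, 0.8, 0.3, 0.2, 1.0   # assumed values
T, n = 40.0, 40_000
dt = T / n

t = np.linspace(-T, 0.0, n + 1)
def wiener():                                 # path on [-T, 0] with W(0) = 0
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    return W - W[-1]
W1, W2 = wiener(), wiener()

A = np.cumsum(np.exp(alpha1 * t - sigma1 * W1)) * dt     # int_{-T}^{t} e^{alpha1 u - sigma1 W1(u)} du
w1 = g0 * np.exp(-alpha1 * t + sigma1 * W1) * A          # w_1(theta_t omega), t in [-T, 0]
w2 = np.sum(w1 * np.exp(alpha2 * t - sigma2 * W2)) * dt  # w_2(omega) from (6.104)

print("w_1(omega) ≈", w1[-1], "   w_2(omega) ≈", w2)
```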

6.8.2 Stochastic Gonorrhea Model

We consider the following stochastic version of the problem (5.81) and (5.82):

dxj = ( −αj xj + (pj − xj ) Σ_{i=1}^d βji xi ) dt + σj xj (pj − xj ) ◦ dWtj .   (6.105)

Here j = 1, . . . , d and αj , βji and pj are positive numbers, σj ≠ 0. The
diffusion term in (6.105) models stochastic fluctuations of the rate βjj at
which the j-th group infects itself.

By Propositions 6.2.2 and 6.2.3 equations (6.105) generate a global strictly


order-preserving RDS with the state space

X = [0, p] = {u ∈ Rd : 0 ≤ u ≤ p} ,

where p = (p1 , . . . , pd ). It is clear that w(ω) ≡ p is a super-equilibrium and


v(ω) ≡ 0 is an equilibrium. Therefore Theorem 3.5.1 implies the existence of
an equilibrium u(ω) with the property

0 ≤ u(ω) < p for all ω ∈ Ω.

In order to prove the existence of a strictly positive equilibrium let us consider


the following auxiliary problem

dyj (t) = hj (yj )dt + gj (yj ) ◦ dWtj , j = 1, 2, . . . , d , (6.106)

where

hj (x) = −αj x + βjj x(pj − x) and gj (x) = σj x(pj − x) .

The system in (6.106) is decoupled, hence for each j equation (6.106) gener-
ates a strictly order-preserving RDS (θ, ψj ) in the one-dimensional interval
[0, pj ]. Theorem 6.4.1 implies that the direct product of these systems domi-
nates the system (θ, ϕ) generated by (6.105) from below.
Under the condition αj < pj βjj the speed measure (6.66) for problem
(6.106) on [0, pj ] possesses the property m((0, cj ]) < ∞ for any cj ∈ (0, pj ).
Therefore by Theorem 6.6.2(ii-b) there exists a unique F− -measurable equilib-
rium vj (ω) ∈ (0, pj ) having a distribution with the density

ρj (x) = ( Nj / gj (x) ) · exp( ∫_{cj}^x 2hj (v)/gj (v)² dv ) ,   0 < x < pj ,

where Nj is the normalizing factor and cj ∈ (0, pj ). Thus the system gener-
ated by (6.105) has a strongly positive sub-equilibrium

v(ω) = (v1 (ω), . . . , vd (ω)) ∈ [0, p]

provided that αj < pj βjj for all j = 1, 2, . . . , d. Therefore there exists a


strongly positive equilibrium u(ω) which is globally asymptotically stable
from above.
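A small simulation illustrates the strongly positive equilibrium under the condition αj < pj βjj. The sketch below takes d = 2 with hypothetical parameter values and applies an Euler scheme to the Itô form of (6.105) (with the Stratonovich correction added to the drift); it is an illustration only.

```python
# Sketch: Euler-Maruyama for the Ito form of (6.105) with d = 2.  The
# Stratonovich correction is (1/2)*sigma_j^2 * x_j(p_j - x_j)(p_j - 2 x_j).
# All parameter values are illustrative assumptions with alpha_j < p_j*beta_jj.
import numpy as np

rng = np.random.default_rng(3)
alpha = np.array([0.3, 0.4])
p = np.array([1.0, 1.0])
beta = np.array([[1.0, 0.2],
                 [0.3, 1.2]])
sigma = np.array([0.2, 0.2])

def step(x, dt):
    g = sigma * x * (p - x)                              # diffusion coefficient
    drift = -alpha * x + (p - x) * (beta @ x) \
            + 0.5 * sigma * (p - 2.0 * x) * g            # Ito drift
    x = x + drift * dt + g * rng.normal(0.0, np.sqrt(dt), 2)
    return np.clip(x, 0.0, p)                            # [0, p] is invariant

x, dt = np.array([0.5, 0.5]), 1e-3
for _ in range(50_000):
    x = step(x, dt)
print("state after T = 50:", x)   # fluctuates around a strongly positive equilibrium
```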

6.8.3 Stochastic Model of Symbiotic Interaction

We consider a stochastic version of the RDE (5.83) with a right-hand side of


the form (5.84):

 

dxj = xj αj (xj ) + gi (xi ) · dt + σj xj ◦ dWtj , j = 1, . . . , d , (6.107)
i =j

where σj are constants and αj (x) and gi (x) are smooth functions on R+ with
properties like (S1) and such that

0 ≤ gi (0) ≤ gi (x) ≤ M,   and   gi′ (x) > 0 for every x > 0 .                  (6.108)

Under these conditions equations (6.107) generate an order-preserving RDS


(θ, ϕ) in Rd+ possessing the following invariance property: for every subset I
of integers from N = {1, . . . , d} the set

KI = {x = (x1 , . . . , xd ) ∈ Rd+ : xj = 0, j ∈ N \ I}

is invariant with respect to (θ, ϕ). The restriction of (θ, ϕ) to KI is described


by the system of stochastic equations
 

dxj = xj αj (xj ) + gi (xi ) · dt + σj xj ◦ dWtj , j ∈ I .
i =j,i∈I

Consider the equations

dx = x · αj (x) dt + σj x ◦ dWtj ,   j = 1, . . . , d ,                          (6.109)

that describe the existence of each population independent of the others. Let
αj (x) be chosen such that the RDS generated by (6.109) in R+ has a positive
equilibrium for every j = 1, . . . , d. This is, for example, implied by

αj (0) > 0,   lim sup_{x→+∞} αj (x) ≤ −βj < 0,   j = 1, . . . , d ,

(see Theorem 6.6.1(ii)). Denote by vj (ω) the positive equilibrium for (6.109).
As in Subsect.5.7.3 under the condition βj > (d − 1)M we can prove the
existence of an equilibrium u(ω) = (u1 (ω), . . . , ud (ω)) for the RDS (θ, ϕ)
generated by (6.107) such that uj (ω) > vj (ω) and which attracts (from below)
the collection (v1 (ω), . . . , vd (ω)) of equilibria that correspond to the isolated
dynamics of each population. Thus, as in the random case, we observe that
the interaction results in a benefit for all populations.

6.8.4 Lattice Models of Statistical Mechanics

In this subsection we briefly consider an order-preserving RDS generated by


an infinite family of coupled stochastic differential equations. We note that
formally the results of the previous sections do not apply here. However the
methods developed allow us to study this RDS.

One of the approaches to the investigation of lattice systems in Statistical
Mechanics relies on the study of infinite systems of stochastic differential
equations (see, e.g., Albeverio et al. [1] and Da Prato/Zabczyk [39, 40]
and the references therein). For example in the theory of classical anharmonic
crystals the following infinite system of stochastic equations

dxi = ( Σ_{j∈Zd} aij · xj + f (xi ) ) · dt + dWti ,   i ∈ Zd ,                  (6.110)

arises in RZd . Here {Wti : i ∈ Zd } are independent standard real-valued
Wiener processes. As in Da Prato/Zabczyk [39, 40] we also assume that


the coefficients aij possess the properties (i) aij = aji ≥ 0 for all i ≠ j;
(ii) there exists r > 0 such that aij = 0 if |i − j|1 ≡ Σ_{k=1}^d |ik − jk | > r;
(iii) |aij | ≤ M for all i, j ∈ Zd . In this case
(iii) |aij | ≤ M for all i, j ∈ Zd . In this case

β := sup_{i∈Zd} Σ_{j∈Zd} |aij | < ∞ .

The discrete Laplace operator on Zd corresponds to the choice

aii = −2d, aij = 1 if |i − j|1 = 1, aij = 0 if |i − j|1 > 1 ,

and the assumptions above are true with β = 4d.


For the function f : R → R we require that f is a globally Lipschitz
function such that
αx + γ1 ≤ f (x) ≤ αx + γ2 , (6.111)
where β + α < 0 and γ1 ≤ γ2 are constants.
Let V = l2 (Zd ) be the space of sequences {xi : i ∈ Zd } such that

‖x‖2l2 (Zd ) := Σ_{i∈Zd} |xi |2 · ρ(i) < ∞ ,

where ρ(i) = exp{−δ|i|1 } for i ∈ Zd with some δ > 0. Let V+ := l+2 (Zd )
be the cone of nonnegative elements in l2 (Zd ), i.e. {xi : i ∈ Zd } ∈ V+ if
and only if xi ≥ 0 for all i ∈ Zd . The matrix {aij } and the function f define
mappings of l2 (Zd ) into itself via the formulae

(Ax)i = Σ_{j∈Zd} aij · xj ,   (F (x))i = f (xi ),   i ∈ Zd ,   x ∈ l2 (Zd ) .

The mapping A is a bounded linear operator in l2 (Zd ) (see Da Prato/Zab-


czyk [39, Proposition 3.2]) and F is a globally Lipschitz mapping in this
space. The system of equations (6.110) can be written in operator form as

dx = (Ax + F (x)) · dt + dWt .



It follows from Da Prato/Zabczyk [39, Theorem 3.4] that equation (6.110)


generates an RDS (θ, ϕ) in the space l2 (Zd ). Moreover the cocycle ϕ(t, ω) can
be represented in the form

ϕ(t, ω)x = z(θt ω) + ψ(t, ω)[x − z(ω)] .

Here (θ, ψ) is the RDS generated by the random equation


 
ẏi = Σ_{j∈Zd} aij · yj + f (yi + zi (θt ω)) + Σ_{j∈Zd} aij · zj (θt ω) + µzi (θt ω) ,   (6.112)

where i ∈ Zd and z(θt ω) = {zi (θt ω) : i ∈ Zd } is the stationary Ornstein-


Uhlenbeck process in l2 (Zd ) which solves the equations

dzi = −µzi dt + dWti , i ∈ Zd ,

for some µ > 0 (cf. Sect.2.5). The method used in the proof of Theorem 5.3.1
can be applied here. Therefore the RDS (θ, ψ) is order-preserving. Thus (θ, ϕ)
is also an order-preserving RDS.
Let us consider the affine system
 

dxi = ( Σ_{j∈Zd} aij · xj + αxi + γ ) · dt + dWti ,   i ∈ Zd .                  (6.113)

It follows from Da Prato/Zabczyk [39, Theorem 3.4] again that for any γ
this equation generates an RDS (θ, ϕγ ) in the space l2 (Zd ). The comparison
principle (cf. Theorem 6.4.1) and property (6.111) give

ϕγ1 (t, ω)x ≤ ϕ(t, ω)x ≤ ϕγ2 (t, ω)x for all x ∈ l2 (Zd ) , (6.114)

where t > 0 and ω ∈ Ω. Since β + α < 0, it is clear (see Da Prato/Zabczyk
[39, Sect.3]) that we can choose δ > 0 in the definition of ρ(i) such that

⟨Ax, x⟩l2 (Zd ) ≤ −(α + ε) · ‖x‖2l2 (Zd )

for some ε > 0. Therefore for any γ ∈ R the affine RDS (θ, ϕγ ) possesses an
F− -measurable equilibrium wγ (ω) ∈ l2 (Zd ). Inequality (6.114) implies that
wγ1 (ω) is a sub-equilibrium and wγ2 (ω) is a super-equilibrium for (θ, ϕ).
We obviously have the relation wγ1 (ω) ≤ wγ2 (ω). Since the cone l+2 (Zd )
is regular, Theorem 3.5.1 and Remark 3.5.1 imply the existence of an F− -
measurable equilibrium u(ω) in l2 (Zd ) for (θ, ϕ). This equilibrium generates a
Markov invariant measure for the stochastic equation (6.110) (see Sect.1.10).
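A finite truncation of (6.110) can be simulated directly; since the noise is additive, the Itô and Stratonovich interpretations coincide. The sketch below assumes a hypothetical truncation to {1, . . . , n} with the one-dimensional discrete Laplacian and f(x) = αx + sin x, α = −5 (so (6.111) holds with γ1 = −1, γ2 = 1 and β + α = 4 − 5 < 0); two solutions driven by the same Wiener increments approach each other, which reflects the dissipativity behind the equilibrium u(ω).

```python
# Sketch: Euler scheme for a finite truncation of (6.110) on {1,...,n} with the
# 1-d discrete Laplacian and f(x) = alpha*x + sin(x) (so (6.111) holds with
# gamma_1 = -1, gamma_2 = 1 and beta + alpha = 4 - 5 < 0).  Two solutions with
# different initial data but the same Wiener increments converge to each other.
import numpy as np

rng = np.random.default_rng(4)
n, alpha, T, steps = 50, -5.0, 10.0, 10_000
dt = T / steps

A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # truncated discrete Laplacian
f = lambda x: alpha * x + np.sin(x)

x = rng.uniform(-2.0, 2.0, n)
y = rng.uniform(-2.0, 2.0, n)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n)   # additive noise: same increments for both
    x = x + (A @ x + f(x)) * dt + dW
    y = y + (A @ y + f(y)) * dt + dW
print("sup-norm distance after T = 10:", np.max(np.abs(x - y)))  # essentially zero
```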
References

1. Albeverio S., Daletskii A., Kondratiev Y., Röckner M. (1999) Fluctua-


tions and their Glauber Dynamics in Lattice Systems. J Funct Anal 166(1):148-
167
2. Amann H. (1983) Gewöhnliche Differentialgleichungen. Walter de Gruyter,
Berlin
3. Arnold L. (1998) Random Dynamical Systems. Springer, Berlin Heidelberg
New York
4. Arnold L. (1999) Recent Progress in Stochastic Bifurcation Theory. Institut
für Dynamische Systeme, Universität Bremen. Report 439
5. Arnold L., Chueshov I. (1998) Order-Preserving Random Dynamical Sys-
tems: Equilibria, Attractors, Applications. Dynamics and Stability of Systems
13:265–280
6. Arnold L., Chueshov I. (2001) A Limit Set Trichotomy for Order-Preserving
Random Systems. Positivity 5(2):95–114
7. Arnold L., Chueshov I. (2001) Cooperative Random and Stochastic Differ-
ential Equations. Discrete and Continuous Dynamical Systems 7(1):1-33
8. Arnold L., Demetrius L., Gundlach M. (1994) Evolutionary Formalism for
Products of Positive Random Matrices. Ann of Appl Probab 4:859–901
9. Arnold L., Nguyen Dinh Cong, Oseledets V. (1999) Jordan Normal Form
for Linear Cocycles. Random Operators and Stochastic Equations 7:301–356
10. Arnold L., Scheutzow M. (1995) Perfect Cocycles through Stochastic Dif-
ferential Equations. Probab Theory Relat Fields 101:65–88
11. Arnold L., Schmalfuss B. (1996) Fixed Points and Attractors for Random
Dynamical Systems. In: Naess A., Krenk S. (Eds.) IUTAM Symposium on Ad-
vances in Nonlinear Stochastic Mechanics. Kluver, Dordrecht, 19–28
12. Arnold L., Schmalfuss B. (2001) Lyapunov Second Method for Random
Dynamical Systems. J Diff Equations 177:235–265
13. Babin A., Vishik M. (1992) Attractors of Evolution Equations. North-Holland,
Amsterdam
14. Bauer H., Bear H. S. (1969) The Part Metric in Convex Sets. Pacif J Math
30:15–33
15. Belopolskaya Ja.I., Dalecky Yu.L. (1990) Stochastic Equations and Dif-
ferential Geometry. Kluver Academic Publishers, Dordrecht
16. Bhattacharya R., Lee O. (1988, 1997) Asymptotics of a Class of Markov
Processes which are not in General Irreducible. Ann of Probab 16:1333–1347;
Correction, Ann of Probab 25:1541–1543
17. Bony J.-M. (1969) Principe du Maximum, Inegalite de Harnack et Unicite
du Probleme de Cauchy pour les Operateurs Elliptiques Degeneres. Ann Inst
Fourier 19:277–304
18. Castaing C., Valadier M. (1977) Convex Analysis and Measurable Multi-
functions, Lect Notes in Math 580. Springer, Berlin

19. Chicone C., Latushkin Yu. (1999) Evolution Semigroups in Dynamical Sys-
tems and Differential Equations. Amer Math Soc, Providence, Rhode Island
20. Chueshov I.D. (1999) Introduction to the Theory of Infinite-Dimensional Dis-
sipative Systems. Acta, Kharkov (in Russian)
21. Chueshov I.D. (2000) Order-Preserving Random Dynamical Systems Gen-
erated by a Class of Coupled Stochastic Semilinear Parabolic Equations. In:
Fiedler B, Gröger K., Sprekels J. (Eds.) International Conference on Differen-
tial Equations, EQUADIF 99, Berlin, Aug 1–7, 1999, vol 1. World Scientific,
Singapore, 711–716
22. Chueshov I.D. (2001) Order-Preserving Skew-Product Flows and Nonau-
tonomous Parabolic Systems. Acta Appl Math 65:185–205
23. Chueshov I.D., Scheutzow M. (2001) Inertial Manifolds and Forms for
Stochastically Perturbed Retarded Semilinear Parabolic Equations. J Dyn Diff
Equations 13:355–380
24. Chueshov I.D., Vuillermot P.-A. (1998) Long-Time Behavior of Solutions
to a Class of Quasilinear Parabolic Equations with Random Coefficients. Ann
Inst Henri Poincaré, Analyse non Linéaire 15:191–232
25. Chueshov I.D., Vuillermot P.-A. (1998) Long-Time Behavior of Solutions
to a Class of Stochastic Parabolic Equations with White Noise: Stratonovitch’s
Case. Probab Theory and Relat Fields 112:149–202
26. Chueshov I.D., Vuillermot P.-A. (2000) Long-Time Behavior of Solutions
to a Class of Stochastic Parabolic Equations with White Noise: Ito’s Case. Stoch
Anal Appl 18:581–615
27. Chung K.L., Williams R.J. (1983) Introduction to Stochastic Integration.
Birkhäuser, Boston Basel Stuttgart
28. Coddington E.A., Levinson N. (1955) Theory of Ordinary Differential Equa-
tions. Springer, New York
29. Cornfeld I.P., Fomin S.V., Sinai Ya. G. (1982) Ergodic Theory. Springer,
New York
30. Cohn D.L. (1980) Measure Theory. Birkhäuser, Boston Basel Stuttgart
31. Crauel H. (1991) Markov Measures for Random Dynamical Systems. Stochas-
tics and Stoch Reports 37:153–173
32. Crauel H. (1995) Random Probability Measures on Polish Spaces. Habilita-
tionsschrift, Universität Bremen
33. Crauel H. (1999) Global Random Attractors are Uniquely Determined by
Attracting Deterministic Compact Sets. Ann Mat Pura Appl, Ser IV 176:57–72
34. Crauel H. (2001) Random Point Attractors versus Random Set Attractors. J
London Math Society (2) 63:413–427
35. Crauel H., Debussche A., Flandoli F. (1997) Random Attractors. J Dyn
Diff Equations 9:307–341
36. Crauel H., Flandoli F. (1994) Attractors for Random Dynamical Systems.
Probab Theory Relat Fields 100:365–393
37. Crauel H., Flandoli F. (1998) Additive Noise Destroys a Pitchfork Bifurca-
tion. J Dyn Diff Equations 10:259–274
38. Crauel H., Imkeller P., Steinkamp M. (1999) Bifurcations of One-
Dimensional Stochastic Differential Equations. In: Crauel H., Gundlach M.
(Eds.) Stochastic Dynamics. Conference on Random Dynamical Systems, Bre-
men, Germany, April 28 - May 2, 1997. Springer. Berlin Heidelberg New York,
27–47
39. Da Prato G., Zabczyk J. (1995) Convergence to Equilibrium for Classical
and Quantum Spin Systems. Probab Theory Relat Fields 103:529–552
40. Da Prato G., Zabczyk J. (1996) Ergodicity for Infinite Dimensional Systems.
Cambridge University Press, Cambridge

41. Deimling K. (1977) Ordinary Differential Equations in Banach Spaces, Lect


Notes Math 596. Springer, Berlin New York
42. Ellis R. (1969) Lectures on Topological Dynamics. W.A. Benjamin Inc, New
York
43. Elworthy K. D. (1982) Stochastic Differential Equations on Manifolds. Cam-
bridge University Press, Cambridge
44. Flandoli F., Schmalfuss B. (1996) Random Attractors for the 3D Stochas-
tic Navier-Stokes Equations with Multiplicative White Noise. Stoch and Stoch
Reports 59:21–45
45. Friedman A. (1975, 1976) Stochastic Differential Equations and Applications,
Vol 1, 2. Academic Press, New York
46. Geiss C., Manthey R. (1994) Comparison Theorems for Stochastic Differen-
tial Equations in Finite and Infinite Dimensions. Stoch Processes Appl 53:23–35
47. Gihman I.I., Skorohod A.V. (1972) Stochastic Differential Equations. Sprin-
ger, Berlin Heidelberg New York
48. Gihman I.I., Skorohod A.V. (1974) The Theory of Stochastic Processes,
vol I. Springer, Berlin Heidelberg New York
49. Hale J. K. (1980) Ordinary Differential Equations. Krieger, Malabar Florida
50. Hale J. K. (1988) Asymptotic Behavior of Dissipative Systems. Amer Math
Soc, Providence, Rhode Island
51. Hartman P. (1982) Ordinary Differential Equations, 2nd edn. Birkhäuser,
Boston Basel Stuttgart
52. Hirsch M. W. (1982, 1985) Systems of Differential Equations that are Com-
petitive or Cooperative, I: Limit Sets. SIAM J Math Anal 13:167–179; II: Con-
vergence Almost Everywhere. SIAM J Math Anal 16:423–439
53. Hirsch M. W. (1984) The Dynamical System Approach to Differential Equa-
tions. Bull Amer Math Soc 11:1–64
54. Hirsch M. W. (1988) Stability and Convergence in Strongly Monotone Dy-
namical Systems. J reine Angew Math 383:1–53
55. Horsthemke W., Lefever R. (1984) Noise-Induced Transitions. Springer,
Berlin Heidelberg New York
56. Hu S., Papageorgiou N. S. (1997) Handbook of Multivalued Analysis, vol 1:
Theory. Kluver Academic Publishers, Dordrecht
57. Ikeda N., Watanabe S. ( 1981) Stochastic Differential Equations and Diffu-
sion Processes. North-Holland, Amsterdam
58. Imkeller P., Lederer C. (2001) On the Cohomology of Flows of Stochastic
and Random Differential Equations. Probab Theory Relat Fields 120:209–235.
59. Imkeller P., Schmalfuss B. (2001) The Conjugacy of Stochastic and Ran-
dom Differential Equations and the Existence of Global Attractors. J Dyn Diff
Equations 13:215–249
60. Ioffe A.D. (1979) Single-Valued Representation of Set-Valued Mappings.
Trans Amer Math Soc 252:133–145
61. Kager G., Scheutzow M. (1997) Generation of One-sided Random Dynam-
ical Systems by Stochastic differential Equations. Electronic J Probab 2, paper
8
62. Karatzas I., Shreve S.E. (1988) Brownian Motion and Stochastic Calculus.
Springer, Berlin Heidelberg New York
63. Keller H., Schmalfuss B. (1998) Attractors for Stochastic Differential Equa-
tions with Nontrivial Noise. Izvestiya Akad Nauk R Moldova 26(1):43–54
64. Khasminskii R. Z. (1980) Stochastic Stability of Differential Equations. Si-
jthoff and Noorhoff, Alphen
65. Kellerer H. (1995) Order-Preserving Random Dynamical Systems. Univer-
sität München. Preprint

66. Kifer Y. (1986) Ergodic Theory of Random Transformations. Birkhäuser,


Boston Basel Stuttgart
67. Kloeden P.E., Platen E. (1992) Numerical Solutions of Stochastic Differen-
tial Equations. Springer, Berlin Heidelberg New York
68. Krasnoselskii M. A. (1964) Positive Solutions of Operator Equations. No-
ordhoff, Groningen
69. Krasnoselskii M. A. (1968) The Operator of Translation Along Trajecto-
ries of Differential Equations. Transl Math Monographs 19. Amer Math Soc,
Providence, Rhode Island
70. Krasnoselskii M. A., Burd V.S., Kolesov Yu.S. (1973) Nonlinear Almost
Periodic Oscillations. John Wiley, New York
71. Krasnoselskii M. A., Lifshits E.A., Sobolev A.V. (1989) Positive Lin-
ear Systems – Method of Positive Operators. Sigma Series in Appl Math 5.
Heldermann, Berlin
72. Krause U., Nussbaum R. G. (1993) A Limit Set Trichotomy for Self-
Mappings of Normal Cones in Banach Spaces. Nonlin Anal, Theory, Methods
& Appl 20:855–870
73. Krause U., Ranft P. (1992) A Limit Set Trichotomy for Monotone Nonlinear
Dynamical Systems. Nonlin Anal, Theory, Methods & Appl 19:375–392
74. Kunita H. (1990) Stochastic Flows and Stochastic Differential Equations.
Cambridge University Press, Cambridge
75. Ladde G.S., Lakshmikantham V. (1980) Random Differential Inequalities.
Academic Press, New York
76. Ladyzhenskaya O. (1991) Attractors for Semigroups and Evolution Equa-
tions. Cambridge University Press, Cambridge
77. Levitan B., Zhikov V. (1982) Almost Periodic Functions and Differential
Equations. Cambridge University Press, Cambridge
78. Mandl P. (1968) Analytical Treatment of One-Dimensional Markov Processes.
Springer, Berlin Heidelberg New York
79. Mañé R. (1987) Ergodic Theory and Differentiable Dynamical Systems.
Springer, Berlin Heidelberg New York
80. Mao X. (1994) Exponential Stability of Stochastic Differential Equations. Mar-
cel Dekker, New York Basel Hong Kong
81. Martin R.H. (1976) Nonlinear Operators and Differential Equations in Banach
Spaces. Wiley, New York
82. McKean H.P. (1969) Stochastic Integrals. Academic Press, New York
83. Meyn S.P., Tweedie R.L. (1993) Markov Chains and Stochastic Stability.
Springer, London
84. Nakajima F. (1979) Periodic Time Dependent Gross-Substitute Systems.
SIAM J Appl Math 36:421–427
85. Ochs G. (1998) Examples of Random Dynamical Systems without Random
Fixed Points. Univ Iagellonicae Acta Math 36:133–141
86. Ochs G. (1999) Weak Random Attractors. Institut für Dynamische Systeme,
Universität Bremen. Report 449
87. Ochs G., Oseledets V.I. (1999) Topological Fixed Point Theorems do not
hold for Random Dynamical Systems. J Dyn Diff Equations 11(4):583–593.
88. Rudolph D. J., (1990) Fundamentals of Measurable Dynamics. Oxford Uni-
versity Press, Oxford
89. Schenk-Hoppé K. R. (1998) Random Attractors - General Properties, Exis-
tence and Applications to Stochastic Bifurcation Theory. Discrete and Contin-
uous Dynamical Systems 4(1):99-130
90. Scheutzow M. (1996) On the Perfection of Crude Cocycles. Random & Comp.
Dynamics, 4:235–255

91. Scheutzow M. (2001) Comparison of Various Concepts of a Random Attrac-


tor: A Case Study. To be published in Archiv der Mathematik
92. Schmalfuss B. (1992) Backward cocycles and attractors for stochastic differ-
ential equations. In: Reitmann V., Riedrich T., Koksch N. (Eds.), International
Seminar on Applied Mathematics - Nonlinear Dynamics: Attractor Approxima-
tion and Global Behaviour. Teubner, Leipzig, 185–192.
93. Schmalfuss B. (1997) The Attractor of the Stochastic Lorenz System. Z
Angew Math Phys 48:951–975
94. Schmalfuss B. (1998) A Random Fixed Point Theorem and the Random
Graph Transformation. J Math Anal Appl 225:91–113
95. Schmalfuss B. (1999) Measure Attractors and Random Attractors for
Stochastic Partial Differential Equations. Stoch Anal Appl 17(6):1075–1101
96. Selgrade J. (1980) Asymptotic Behavior of Solutions to Single Loop Positive
Feedback Systems. J Diff Equations 38:80–103
97. Sell G.R., Nakajima F. (1980) Almost Periodic Time Dependent Gross-
Substitute Dynamical Systems. Tôhoku Math J 32:255–263
98. Sharpe M. (1988) General Theory of Markov Processes. Academic Press,
Boston
99. Shen W., Yi Y. (1998) Almost Automorphic and Almost Periodic Dynamics
in Skew Product Semiflows. Memoirs Amer Math Soc 136(647). Amer Math
Soc, Providence Rhode Island
100. Sinai Ya. G. (1994) Topics in Ergodic Theory. Princeton University Press,
Princeton
101. Smith H. L. (1986) Cooperative Systems of Differential Equations with Con-
cave Nonlinearities. Nonlin Anal, Theory, Methods & Appl 10:1037–1052
102. Smith H. L. (1996) Monotone Dynamical Systems. An Introduction to the
Theory of Competitive and Cooperative Systems. Amer Math Soc, Providence
Rhode Island
103. Takač P. (1990) Asymptotic Behavior of Discrete-Time Semigroups of Sub-
linear Strongly Increasing Mappings with Applications to Biology. Nonlin Anal,
Theory, Methods & Appl 14:35–42
104. Temam R. (1988) Infinite–Dimensional Dynamical Systems in Mechanics and
Physics. Springer New York
105. Twardowska K. (1996) Wong – Zakaı̈ Approximations for Stochastic Differ-
ential Equations. Acta Appl Math 43:317–359
106. Walters P. (1982) An Introduction to Ergodic Theory. Springer, Berlin Hei-
delberg New York
107. Walter W. (1970) Differential and Integral Inequalities. Springer, Berlin
Heidelberg New York
108. Wong E. (1971) Stochastic Processes in Information and Dynamical Systems.
Mc Graw-Hill, New York San Francisco
109. Wong E., Zakai M., (1969) Riemann-Stieltjes approximations of stochastic
integrals. Z Wahrscheinlichkeitstheorie verw Geb 12:87–97
110. Xu Kedai (1993) Bifurcations of Random Differential Equations in Dimension
One. Rand Comput Dynamics 1:277–305
Index

P-complete σ-algebra, 19 Fokker-Planck equation, 204


P-completion of σ-algebra, 21 function
ϕ-ergodic measure, 51 – cooperative, 147
ϕ-invariant measure, 51 Furstenberg–Khasminskii formula, 75
u-norm, 84 future σ-algebra, 52
u-subordination, 84
gonorrhea model, 175, 221
almost equilibrium, 112 gross-substitute system, 178
Bernoulli shifts, 11
binary biochemical model interval, 83
– random, 17, 30, 58, 62, 94, 97, 102, – absorbing, 108
110, 114 invariant measure, 51
– stochastic, 72–74, 79, 94 irreducible matrix, 147
biochemical control circuit, 2 Itô stochastic equation, 71
– random, 171 Itô stochastic integral, 67
– stochastic, 219 Itô’s formula, 69
Borel σ-algebra, 9
kick model, 16, 27, 31, 38, 93
Chapman-Kolmogorov equation, 53
cocycle, 13 lattice model, 223
comparison principle, 109 Liouville’s equation, 58, 73
– random, 150 Lyapunov exponent, 60, 75
– stochastic, 192
competition and migration model, 181
mapping
cone
– strongly positive, 144
– minihedral, 87
– sublinear, 161, 214
– normal, 86
– part of, 84, 91 – weakly positive, 144
– regular, 86 Markov chain, 15
– solid, 83 Markov family, 53
conjugacy of RDS, 18 Markov measure, 53
cooperativity condition, 147 martingale, 67
MDS, see metric dynamical system
disintegration, 51 – ergodic, 10
measurable selection theorem, 20
equilibrium, 38 metric dynamical system, 10
– stable multifunction, 18
– – from above, 107
– – from below, 107 orbit, see trajectory
– – in probability, 211 Ornstein-Uhlenbeck process, 79
equivalence of RDS, 18 outer normal, 61

part (Birkhoff) metric, 84 – – below, 84


past σ-algebra, 52 – infimum of, 84
perfection procedure, 14 – invariant, 24
Polish space, 13 – – backward, 24
probability space, 9 – – forward, 24
process – lower bound, 84
– adapted, 66 – maximal element of, 84
– continuous, 66 – minimal element of, 84
– predictable, 66 – omega-limit, 34
product σ-algebra, 9 – order-bounded, 84
projection theorem, 21 – random, 18
– – bounded, 19
radius of dissipativity, 26 – – closed, 18
random attractor, 41 – – compact, 19
– weak point, 210 – – tempered, 23
random differential equation, 56 – supremum of, 84
random Dirac measure, 51 – universally measurable, 21
random dynamical system, 13 – upper bound, 84
RDE, see random differential equation skew-product semiflow, 15
RDS, see random dynamical system spaces C k,δ (I), 199
– C k -smooth, 14 spaces Cbk,δ , 70
– s-concave, 116 spaces Cbk,δ (I), 199
– affine, 14, 46, 138 speed measure, 202, 206
– asymptotically compact, 31 stationary measure, 50, 53
– compact, 30 stochastic differential equation, 70
– concave, 114 stopping time, 67
– conditionally compact, 123 Stratonovich stochastic equation, 72
– dissipative, 26 Stratonovich stochastic integral, 68
– linear, 14, 45 sub-equilibrium, 95
– order-preserving, 93 – absorbing, 108
– strictly sublinear, 113 super-equilibrium, 95
– strongly positive, 105, 145 – absorbing, 108
– strongly sublinear, 114 symbiotic interaction, 176, 222
– sublinear, 113
tail of trajectory, 32
scale function, 203, 206 tempered random variable, 23
SDE, see stochastic differential top Lyapunov exponent, 48, 60, 75
equation trajectory, 32
semi-equilibria, 95
semimartingale, 68 universal σ-algebra, 21
separability set, 21 universe, 25
separable collection of random sets, 21
set Walras’ law, 178
– θ-invariant, 10 white noise process, 12
– absorbing, 26 Wiener process, 12, 66
– bounded from Wiener shift, 12
– – above, 84 Wong-Zakaı̈ type theorem, 77
