CLAUDIA TIMOFTE
ORDINARY DIFFERENTIAL EQUATIONS
PROBLEMS
To my students
Preface
Claudia Timofte
Contents

1 Introduction
Bibliography
Chapter 1
Introduction
The theory of ordinary differential equations represents a major tool for modeling
and investigating important physical phenomena. The demands on the mathemati-
cal training of a modern physicist are constantly increasing. In this problem book,
we intend to provide the reader with some basic concepts and results about the
elementary theory of ordinary differential equations. Many areas of modern physics
lead naturally to the study of such equations. The material contained in this book
is based on the textbook [15]. For detailed proofs, more advanced topics and more
exercises, we refer to [2], [3], [6], [7], [8], [9], [10], [11], [14], [15], and [17].
When dealing with physical phenomena, we are often unable to find directly
the laws relating the quantities that characterize a given phenomenon. However, a
relationship between these quantities and certain derivatives of them can be easily
obtained. In this way, we are led to equations containing the unknown functions
under the sign of the derivative or the differential. Roughly speaking, such equations
in which the unknown function appears under the sign of the derivative or of the
differential are called differential equations. If in a differential equation the unknown
function depends only on one variable, the differential equation is called ordinary.
If the unknown function depends on two or more independent variables, then we
get a so-called partial differential equation. We shall deal here only with ordinary
differential equations and, so, the term ordinary will be often omitted. In the second
volume of these notes, we shall address the case of partial differential equations.
The order of a differential equation is the highest order of the derivative (or
differential) of the unknown function involved in the equation. Roughly speaking,
a solution of a differential equation is a function which, when substituted into the
equation, makes it an identity. The procedure of finding the solutions of a differen-
tial equation is called integration of the differential equation. For simple differential
equations, it is, sometimes, possible to obtain an exact solution, but in more com-
plicated cases it is often necessary to apply approximate methods.
Let I ⊆ R be an open interval and G ⊆ R^n, n ≥ 1, be a domain. Let f : I × G → R^n be a given function and consider the equation

    dy/dx = f(x, y).    (1.1)

In this case, we shall say that the function f defines an explicit differential equation on I × G. Also, f will be called the right-hand side of the differential equation (1.1).
The variable x is called the independent variable, while y is the dependent variable
or the unknown function.
A differential equation not depending on x is called autonomous, and one with
no terms depending only on x is called homogeneous.
Remark 1.3 In many physical problems, the independent variable x represents the
time, so it will be convenient to denote it by t.
Under quite general conditions, we can prove local existence results for a solution
of the differential equation (1.1). Moreover, for a given point (x0 , y0 ) ∈ I × G, we
shall be interested in finding a solution ϕ : I1 ⊆ I → G of the differential equation
(1.1) such that x0 ∈ I1 and
ϕ(x0 ) = y0 . (1.3)
Such a problem, in which we are looking for the solution of a differential equation if
the value of the unknown function at some point is known, is called an initial-value
problem or a Cauchy problem.
In general, a differential equation has an infinite number of solutions. So, we
have to impose further conditions to individualize them. Unfortunately, very often
it is difficult or quite impossible to find explicitly or even implicitly the solutions of
a differential equation.
Finally, let us mention that the class of equations that are integrable by quadra-
tures is extremely narrow. But even for such equations, very often it is impossible
to find explicitly the solutions. Hence, we shall usually obtain our solutions in an
implicit form, i.e. we shall get a function Ψ such that the solution ϕ : I1 → G
verifies the identity
Ψ(x, ϕ(x)) ≡ 0, x ∈ I1 . (1.4)
As we shall see, the general solution of equation (1.1) will be of the form Φ(x, y, C) =
0, where C is an arbitrary constant in J ⊆ Rn . By assigning specific values to the
arbitrary constant C in the general solution, we get the so-called particular solutions.
Also, there are cases in which the solution can be given only in a parametric form, i.e.

    x = α(p, C),
    y = β(p, C).    (1.5)

Let us consider now the general solution

    Φ(x, y, C) = 0

of the equation (1.1). Eliminating C from this equation and from the equation

    ∂Φ/∂C (x, y, C) = 0,

we get φ(x, y) = 0. If this function satisfies our original differential equation, but does not belong to the family Φ(x, y, C) = 0, it is called a singular solution. The uniqueness condition is violated at each point of such a singular integral curve, which consists only of singular points.
Let us note that the points on the boundary of the domain of existence of solutions are also called singular points. On the contrary, any interior point of the domain of existence through which a single integral curve passes is called an ordinary point.
Remark 1.7 Let us notice that singular solutions cannot be obtained from the gen-
eral solution by giving particular admissible values to the constants.
As we shall see in what follows, for modeling many important physical phenomena, we shall need to use higher-order differential equations. The general form of an nth-order differential equation is the following one:

    F(x, y, y', . . . , y^(n)) = 0.    (1.6)

Its general solution is a family of functions y = ϕ(x, C1, C2, . . . , Cn), where the function ϕ is supposed to have, on some interval (a, b), continuous derivatives up to the order n and to satisfy the equation, i.e.

    F(x, ϕ(x), ϕ'(x), . . . , ϕ^(n)(x)) = 0, x ∈ (a, b).    (1.7)

The arbitrary constants Ci must be independent, i.e. their number cannot be reduced by introducing other arbitrary constants depending continuously on the given ones.
We remark that very often the general solution of the equation (1.6) can be given
only implicitly, i.e. in the form
Φ(x, y, C1 , C2 , . . . , Cn ) = 0, Ci ∈ R, i = 1, 2, . . . , n. (1.9)
In this case, this solution is called the general integral of equation (1.6). Any solution
obtained from the general solution by giving particular admissible values to the
constants Ci is called a particular solution.
Chapter 2
First-Order Differential
Equations
Geometrically, the general solution of the equation (2.1), y = ϕ(x, C), where C is
an arbitrary constant, represents a family of integral curves, i.e. a set of curves
corresponding to different values of the constant C.
    dy/dx = f(x, y),
    y(x0) = y0, (x0, y0) ∈ I × G.    (2.2)
Under quite reasonable conditions, one can prove that for any point (x0 , y0 ) ∈
I × G, there exists a solution ϕ : I1 ⊆ I → G of the differential equation (2.1) such
that x0 ∈ I1 and ϕ(x0 ) = y0 . Hence, we shall also address the question of finding a
solution for the following problem:
    dy/dx = f(x, y),
    y(x0) = y0.    (2.4)
Such a problem, in which we look for the solution of a differential equation if the
value of the unknown function at some point is known, is called an initial-value
problem or a Cauchy problem. Geometrically, Cauchy’s problem can be formulated
as follows: find the integral curve of the differential equation (2.1) passing through
a given point P0 (x0 , y0 ).
In general, we cannot solve this kind of equation for an arbitrary right-hand side.
However, there are many forms of f for which we can solve the first-order differential
equation (2.1) explicitly. In what follows, we shall consider various forms of f (x, y)
for which we are able to generate a solution to equation (2.1). For other forms of
the right-hand side f , see [9] and [15].
We focus now on the case n = 1 and we prove that under mild conditions imposed
on the right-hand side of a differential equation we can ensure the local existence
and uniqueness of its solution.
For a given point (x0, y0), let us consider the rectangle D = {(x, y) : |x − x0| ≤ a, |y − y0| ≤ b}. The unique solution of the above problem can be determined as the limit of a uniformly convergent sequence of functions, called the sequence of successive approximations. This sequence is defined by the following recurrence formula:
    y0(x) = y0,
    yn(x) = y0 + ∫_{x0}^{x} f(t, yn−1(t)) dt, n ≥ 1.    (2.8)
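As a numerical illustration (ours, not part of the text), the recurrence (2.8) can be implemented directly, with the integral evaluated by the trapezoidal rule on a fixed grid; for y' = y, y(0) = 1, the iterates converge to the exact solution e^x. All names below are our own choices.

```python
import math

def picard(f, x0, y0, x_end, n_iter=8, n_grid=200):
    """Successive approximations y_n(x) = y0 + int_{x0}^{x} f(t, y_{n-1}(t)) dt,
    with each integral evaluated by the trapezoidal rule on a uniform grid."""
    h = (x_end - x0) / n_grid
    xs = [x0 + i * h for i in range(n_grid + 1)]
    ys = [y0] * (n_grid + 1)          # y_0(x) = y0, the constant initial guess
    for _ in range(n_iter):
        vals = [f(x, y) for x, y in zip(xs, ys)]
        new = [y0]
        acc = 0.0
        for i in range(1, n_grid + 1):
            acc += 0.5 * h * (vals[i - 1] + vals[i])   # trapezoidal increment
            new.append(y0 + acc)
        ys = new
    return xs, ys

# y' = y, y(0) = 1 on [0, 1]; the exact solution is e^x
xs, ys = picard(lambda x, y: y, 0.0, 1.0, 1.0)
err = max(abs(y - math.exp(x)) for x, y in zip(xs, ys))
```

After eight iterations the maximal deviation from e^x on [0, 1] is already far below 10^−3, in line with the uniform convergence asserted above.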
Remark 2.6 Let us remark that this is a local existence theorem. In fact, we can prove the existence of the desired solution on the interval x0 − H ≤ x ≤ x0 + H, where H = min(a, b/M). Also, we notice that instead of imposing the Lipschitz condition, we may ask the existence and the boundedness, in absolute value, in D, of the partial derivative ∂f/∂y (x, y).
Remark 2.7 (G. Peano) The local existence of a solution can be proven by a different method if we assume only the continuity of the function f. However, this assumption is not enough to ensure the uniqueness of the solution.
Remark 2.8 Let us suppose that the right-hand side of the equation (2.1) depends
also on a parameter λ:
    dy/dx = f(x, y, λ).    (2.9)
Remark 2.9 Under similar conditions, it is possible to prove the continuous depen-
dence of the solution y = y(x, x0 , y0 ) of the equation (2.4) on the initial values x0
and y0 .
We consider now a system of n first-order differential equations, written in vector form as the Cauchy problem

    dy/dx = f(x, y),
    y(x0) = y0,    (2.10)

where y = (y1, y2, . . . , yn) and f = (f1, f2, . . . , fn), on the parallelepiped

    D = {(x, y1, . . . , yn) : |x − x0| ≤ a, |yi − yi0| ≤ b, i = 1, 2, . . . , n}.
Theorem 2.11 We assume that all the functions fi(x, y1, y2, . . . , yn) are continuous on D and satisfy, in D, a Lipschitz condition with respect to all their arguments starting with the second one, i.e. there exists L > 0 such that for any (x, y1, y2, . . . , yn) and (x, z1, z2, . . . , zn) ∈ D we have

    |fi(x, y1, y2, . . . , yn) − fi(x, z1, z2, . . . , zn)| ≤ L Σ_{j=1}^{n} |yj − zj|.    (2.13)

Then the Cauchy problem (2.10) has a unique solution, defined on the interval |x − x0| ≤ H, with H = min(a, b/M), where

    M = max_{D, i} |fi(x, y1, . . . , yn)|.
Then, f possesses the property of global existence of solutions, i.e. for any (x0, y0) ∈ I × R^n, there exists a solution ϕ : I → R^n of the Cauchy problem

    dy/dx = f(x, y),
    y(x0) = y0.
Remark 2.13 Similar questions regarding the existence and uniqueness of solutions
arise for first-order differential equations not solved for the derivative, i.e. equations
in the implicit form:
    F(x, y, y') = 0.    (2.14)
Remark 2.14 We emphasize here that the above functions are not the only solutions of equation (2.15). Indeed, it is not difficult to see that if we have two solutions ϕ1(·) : (a, b) → I2 and ϕ2(·) : (b, c) → I2 such that
ϕ(x0 ) = y0 . (2.19)
Remark 2.16 It might happen that at the points (x0 , y0 ) ∈ I1 × I2 \ J the solutions
of equation (2.15) verifying the initial condition (2.19) are not unique.
together with combined solutions of the above form. We remark that if g(y) ≠ 0, ∀y ∈ I2, then we do not have stationary solutions for the separable equation (2.15).
Thus,

    (T(t1) − S)/(T(t2) − S) = e^{−k(t1 − t2)},

which leads to

    k(t1 − t2) = − ln [(T(t1) − S)/(T(t2) − S)].

Hence, one can determine the constant k if the time interval t1 − t2 is known (and vice versa).
A practical application of such a model is the determination of the so-called time of death. Let us suppose that a human body was discovered in a room at 10:00 p.m. and its temperature was 27°C. The temperature of the room is kept constant at 15°C. Two hours later, the temperature of the body had dropped to 24°C. Let us find the time of death. First, let us determine the constant k. We have

    k = −(1/2) ln [(24 − 15)/(27 − 15)] ≈ 0.14.

In order to get the time of death, we need to remember that the temperature of a normal (healthy) person at the time of death is 36.6°C. Then, we get

    td = −(1/k) ln [(36.6 − 15)/(27 − 15)] ≈ −4.21 hours,

i.e. the death occurred about four hours before the discovery of the body.
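The two formulas above are easy to check numerically. The sketch below (our addition) redoes the computation with k kept at full precision, which gives td ≈ −4.09 h; rounding k to 0.14 first, as in the text, reproduces the ≈ −4.2 h quoted above.

```python
import math

S = 15.0        # room temperature, in degrees Celsius
T0 = 27.0       # body temperature at discovery (10:00 p.m.)
T2 = 24.0       # body temperature two hours later
Td = 36.6       # normal body temperature at the time of death

# k from  T(t2) - S = (T(t1) - S) e^{-k (t2 - t1)}  with t2 - t1 = 2 hours
k = -0.5 * math.log((T2 - S) / (T0 - S))

# time of death td, measured from 10:00 p.m.:
#   Td - S = (T0 - S) e^{-k td}
td = -(1.0 / k) * math.log((Td - S) / (T0 - S))
```

With these values, death occurred roughly four hours before 10:00 p.m., i.e. shortly before 6:00 p.m.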
    dy/dx = y/x, x ∈ I ⊆ R*₊,

with the general solution

    y(x) = Cx, C ∈ R*.
    y(x) = C sin x, C ∈ R*.
    ln(1 + y²) = − cos x + C, C ∈ R.

Then

    y² = Ce^{−cos x} − 1.

This means that the general solution is given by either the positive or the negative square root of the right-hand side.
The interval of existence is the whole real line, because the expression under the
square root sign is never negative (the minimum under the square root sign is 1).
    du/dx = (f(u) − u)/x,    (2.21)
which is a separable equation.
Using the algorithm for solving separable equations, we get the general solution
u = ψ(x, C), where C is an arbitrary real constant and the stationary solutions
ui (x) ≡ ui , with ui solutions of the algebraic equation f (u) = u.
Hence, the general integral of (2.20) is:

    u(x) = ψ(x, C),

together with the stationary solutions ui(x) = ui.
    x² + 2xy − y² = C, C ∈ R*.
    dy/dx = a(x) y + b(x),    (2.23)
where a, b : I ⊆ R → R are given continuous functions.
a) If b(x) ≡ 0, the equation (2.23) is called homogeneous. In this case, we get

    dy/dx = a(x) y,    (2.24)

which is a particular case of a separable equation. Then, (2.24) will have the stationary solution y = 0. For y ≠ 0, separating the variables and integrating, we get

    ∫ dy/y = ∫ a(x) dx + C1,
where C1 is an arbitrary real constant. Computing the integral, we obtain the general solution of (2.24):

    y = C exp(∫ a(x) dx),

together with y = 0.
    ϕ(x0) = y0.    (2.25)

This solution is

    ϕ(x) = y0 e^{∫_{x0}^{x} a(s) ds}, x ∈ I.
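For a concrete instance (our example, not the book's), take a(x) = cos x and x0 = 0.5, so that ϕ(x) = y0 e^{sin x − sin x0}; a centered finite difference confirms ϕ' = a(x) ϕ:

```python
import math

x0, y0 = 0.5, 2.0
a = math.cos
phi = lambda x: y0 * math.exp(math.sin(x) - math.sin(x0))  # y0 * e^{int_{x0}^x a}

# residual of phi' = a(x) * phi at a few points, via centered differences
h = 1e-6
max_res = 0.0
for x in [0.0, 0.5, 1.0, 2.0]:
    dphi = (phi(x + h) - phi(x - h)) / (2 * h)
    max_res = max(max_res, abs(dphi - a(x) * phi(x)))
```

The initial condition ϕ(x0) = y0 holds exactly, and the residual of the equation is at the level of the finite-difference error.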
b) Let us deal now with the nonhomogeneous linear equation (2.23). This equation can be integrated by the so-called method of variation of parameters. Using this method, which is based, as we shall see, on the existence and uniqueness theorem for such equations, we try to satisfy the nonhomogeneous equation (2.23) by considering C as being a derivable function of the independent variable. So, the solution of equation (2.23) is sought in the form

    y(x) = C(x) e^{∫ a(x) dx}.    (2.26)

Hence,

    C'(x) = b(x) e^{−∫ a(x) dx},

which, by integration, yields

    C(x) = ∫ b(x) e^{−∫ a(s) ds} dx + K.

In summary, the algorithm is the following one:
1) We solve first the associated homogeneous equation (2.24), with the general solution y = C e^{∫ a(x) dx}.
2) Using the method of variation of parameters, we try to satisfy the nonhomogeneous equation (2.23) by looking for a solution of the form:

    y = C(x) e^{∫ a(x) dx}.
Remark 2.30 If we know two particular solutions y1(x) and y2(x) of the nonhomogeneous equation (2.23), then its general solution can be obtained without performing any quadrature and we have:

    y(x) = y1(x) + C (y2(x) − y1(x)), C ∈ R.
    y = C(x² − 1), C ∈ R*.

    y = C(x) x.
where K is an arbitrary real constant. Hence, the general solution of our equation is:

    y(x) = K e^{−x}/x + x² e^{−x}.
    dy/dx = −(3/x) y + sin x / x³, x ∈ (0, π),
    y(π/2) = −2.

Solution. Using the method of variation of parameters, it is not difficult to see that

    y(x) = (− cos x + C)/x³, C ∈ R.

Now, we have to match the initial condition to get the particular solution

    y(x) = −(cos x + π³/4)/x³.
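This particular solution can be verified numerically (our addition): we check the initial condition and the residual of the equation dy/dx = −(3/x) y + sin x / x³ by centered differences.

```python
import math

y = lambda x: -(math.cos(x) + math.pi**3 / 4) / x**3
rhs = lambda x: -3.0 / x * y(x) + math.sin(x) / x**3

# initial condition: y(pi/2) should equal -2
ic = y(math.pi / 2)

# residual of the ODE at interior points of (0, pi), via centered differences
h = 1e-6
res = max(abs((y(x + h) - y(x - h)) / (2 * h) - rhs(x))
          for x in [0.5, 1.0, 2.0, 3.0])
```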
    dy/dx = a(x) y + b(x) y^α,    (2.28)
where a, b : I ⊆ R → R are given continuous functions and α ∈ R \ {0, 1}.
Let us remark that for α = 0, the equation (2.28) is a nonhomogeneous linear
equation, while for α = 1, (2.28) becomes a homogeneous linear one.
Performing the change of variables z = y 1−α , we get the nonhomogeneous linear
equation
    dz/dx = (1 − α) a(x) z(x) + (1 − α) b(x).    (2.29)
Using the method of variation of parameters, we obtain the general solution of (2.29),
i.e. z(x) = ψ(x, C), where C is an arbitrary real constant. Thus, coming back to
the variable y, we get the general solution of (2.28):
    y(x) = (ψ(x, C))^{1/(1−α)}.
Remark 2.37 Let us remark that it is possible to find the general solution of equa-
tion (2.28) by using a kind of a method of variation of parameters. More precisely,
the solution of equation (2.28) is sought to be of the form
    y(x) = C(x) e^{∫ a(x) dx},    (2.30)
where C is a derivable function of x which remains to be determined. Computing
the derivative of y and substituting in (2.28), we get a new differential equation
with separable variables for the unknown function C. Solving it, we obtain C(x) =
ϕ(x, K), where K is an arbitrary real constant. Finally, the general solution of
equation (2.28) is

    y(x) = ϕ(x, K) e^{∫ a(x) dx}.
Hence, using our previous notation, the general form of Riccati’s equation is the
following one:
    dy/dx = a(x) + b(x) y + c(x) y²,    (2.31)
where a, b, c : I ⊆ R → R are given continuous functions. Such an equation is
nonlinear and, in general, it is not integrable by quadratures. However, it may be
transformed into a Bernoulli’s equation, by means of a suitable change of variables,
provided that a particular solution y0 (x) of this equation is known. Indeed, the
change of variables
z(x) = y(x) − y0 (x) (2.32)
leads to a Bernoulli’s equation:
    dz/dx = (b(x) + 2 y0(x) c(x)) z + c(x) z².    (2.33)
By integration, the general solution of (2.33) will be of the form z(x) = ψ(x, C), with C ∈ R. Therefore, the general solution of (2.31) will be y(x) = y0(x) + ψ(x, C).
Remark 2.41 If we know two distinct particular solutions y1 (x) and y2 (x) of the
equation (2.31), then, the change of variables
    z(x) = (y(x) − y1(x))/(y(x) − y2(x))
leads us directly to a homogeneous linear equation.
Remark 2.42 If we know three distinct particular solutions y1 (x), y2 (x) and y3 (x)
of the equation (2.31), then its general solution can be obtained without performing
any quadrature and we have:
    [(y(x) − y1(x))/(y(x) − y2(x))] : [(y3(x) − y1(x))/(y3(x) − y2(x))] = C,
where C is an arbitrary real constant.
    y(x) = 1/cos x + 3 cos²x/(3C − cos³x).
    z(x) = Cx, C ∈ R*,

    y(x) = (1/x²) (1 + Kx)/(1 − Kx), K ∈ R.
    dy/dx = y² − y tan x + 1/cos²x, x ∈ (0, π/2),

which admits the particular solution y0(x) = tan x. The change of variables

    y(x) = tan x + 1/z(x)

leads to the linear equation

    dz/dx = −z tan x − 1,

with the general solution z(x) = cos x (C + ln |(tan(x/2) − 1)/(tan(x/2) + 1)|), C ∈ R. Hence,

    y(x) = tan x + 1 / [cos x (C + ln |(tan(x/2) − 1)/(tan(x/2) + 1)|)].
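The reduction to the linear equation dz/dx = −z tan x − 1 can be sanity-checked numerically (our sketch): we integrate z with a classical Runge–Kutta step from an arbitrary initial value and verify that y = tan x + 1/z satisfies the Riccati equation.

```python
import math

def rk4(f, x, z, h):
    """One classical fourth-order Runge-Kutta step for z' = f(x, z)."""
    k1 = f(x, z)
    k2 = f(x + h / 2, z + h * k1 / 2)
    k3 = f(x + h / 2, z + h * k2 / 2)
    k4 = f(x + h, z + h * k3)
    return z + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, z: -z * math.tan(x) - 1.0     # the reduced linear equation

# integrate z on [0.3, 1.2] from an arbitrary initial value z(0.3) = 3
xs, zs = [0.3], [3.0]
h = 1e-3
for _ in range(900):
    zs.append(rk4(f, xs[-1], zs[-1], h))
    xs.append(xs[-1] + h)

# check the Riccati equation y' = y^2 - y tan x + 1/cos^2 x for y = tan x + 1/z
def riccati_residual(i):
    x, z = xs[i], zs[i]
    y = math.tan(x) + 1.0 / z
    dy = (math.tan(xs[i + 1]) + 1 / zs[i + 1]
          - math.tan(xs[i - 1]) - 1 / zs[i - 1]) / (2 * h)
    return abs(dy - (y**2 - y * math.tan(x) + 1.0 / math.cos(x)**2))

res = max(riccati_residual(i) for i in range(1, 900, 50))
```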
Remark 2.48 Let D = {(x, y) | a < x < b, c < y < d} ⊆ R² and P, Q ∈ C¹(D), Q(x, y) ≠ 0. The left-hand side of (2.35) is the total differential of some function F if and only if the following condition, called Euler's condition, is fulfilled:

    ∂P/∂y (x, y) = ∂Q/∂x (x, y), ∀(x, y) ∈ D.    (2.42)
It remains to see how we can determine a first integral F for the exact equation
(2.35). In fact, we have to determine F from its total differential dF (x, y) =
P (x, y)dx+Q(x, y)dy. Let us fix an arbitrary point (x0 , y0 ). We can determine F by
taking the line integral from dF between the fixed point (x0 , y0 ) and a point with
variable coordinates (x, y), over any path, since the line integral is path-independent.
Using the Leibniz formula, we have:

    ∫_{(x0,y0)}^{(x,y)} dF = F(x, y) − F(x0, y0).    (2.43)

Choosing the path formed by the segments from (x0, y0) to (x, y0) and from (x, y0) to (x, y), we obtain

    F(x, y) − F(x0, y0) = ∫_{(x0,y0)}^{(x,y0)} P(x, y0) dx + ∫_{(x,y0)}^{(x,y)} Q(x, y) dy.    (2.44)
Therefore, F being determined, the solution of the exact equation (2.35) is given
implicitly by
F (x, y) = C. (2.46)
Remark 2.49 First integrals F for the differential equation (2.35) can also be obtained if the domain D is not a rectangle, but a so-called star-shaped domain, i.e. there exists a point (x0, y0) ∈ D such that t(x0, y0) + (1 − t)(x, y) ∈ D, for any t ∈ [0, 1] and any (x, y) ∈ D.
    x² − xy + y² − y + x = C.
Solution. Since
we have
    ∂P/∂y (x, y) = ∂Q/∂x (x, y), ∀(x, y) ∈ D.
Hence, our equation is an exact one and there exists a function F : D → R,
F ∈ C 1 (D) such that
F (x, y(x)) = C, C ∈ R.
    x² + x³y − y³ = C.
Solution. We first note that the equation is exact and there exists a function F :
D → R, F ∈ C 1 (D) such that
F (x, y(x)) = C, C ∈ R.
    F(x, y) = y sin x + x² exp(y) − y + C, C ∈ R,
In some cases, when the left-hand side of the equation (2.35) is not a total differential and we are not allowed to apply the above procedure, we can still solve it by choosing a function µ = µ(x, y), called an integrating factor, such that, after multiplying the left-hand side of the equation (2.35) by µ, it becomes a total differential, i.e. there exists a function F such that

    ∂/∂y (µ(x, y) P(x, y)) = ∂/∂x (µ(x, y) Q(x, y)).    (2.49)

Hence,

    ∂(ln µ)/∂y · P(x, y) − ∂(ln µ)/∂x · Q(x, y) = ∂Q/∂x − ∂P/∂y.    (2.50)
For results of existence and non-uniqueness of such a factor, the interested reader is
referred to [9]. In general, it is not easy to find, for a given equation, an integrating
factor, but we usually try to find such a multiplier by considering it as being a
function depending only on one argument (for example, only of x, or of y, or of xy,
and so forth).
Exercise 2.53 Integrate the following equation, looking for an integrating factor of the form µ = µ(x):

    (x² − y² + 1) dx + 2xy dy = 0.

Solution. Assuming that µ = µ(x), from (2.49) we obtain the integrating factor

    µ(x) = 1/x².

Now, multiplying our equation by µ, we get

    [(x² − y² + 1)/x²] dx + (2y/x) dy = 0,

which is an exact equation. By integration, it is not difficult to compute its first integral. Hence, the general solution of our initial equation is given, implicitly, by

    x² − Cx + y² − 1 = 0, C ∈ R.
Exercise 2.54 Solve the following equation, looking for an integrating factor of the form µ = µ(y):

    (2xy² − 3y³) dx + (7 − 3xy²) dy = 0.

Solution. Assuming that µ = µ(y), from (2.49) we obtain the integrating factor

    µ(y) = 1/y².

Now, multiplying our equation by µ, we get

    (2x − 3y) dx + (7/y² − 3x) dy = 0,

which is an exact equation. Hence, the general solution of our initial equation is given, implicitly, by

    x² − 3xy − 7/y = C, C ∈ R.
Exercise 2.55 Integrate the following equation, knowing that it possesses an integrating factor of the form µ = µ(x + y²):

    (3y² − x) dx + (2y³ − 6xy) dy = 0.

Solution. Assuming that µ = µ(x + y²), from (2.49) we obtain the integrating factor

    µ(x + y²) = 1/(x + y²)³.

Now, multiplying our equation by µ, we get

    [(3y² − x)/(x + y²)³] dx + [(2y³ − 6xy)/(x + y²)³] dy = 0,

which is an exact equation. By integration, it is not difficult to compute its first integral. Hence, the general solution of our initial equation is given, implicitly, by

    (x − y²)/(x + y²)² = C, C ∈ R.
    F(x, y, y') = 0,    (2.51)

where F : D ⊆ R³ → R, F ∈ C¹(D).

A derivable function ϕ : I ⊆ R → R is called a solution of equation (2.51) if (x, ϕ(x), ϕ'(x)) ∈ D, ∀x ∈ I, and

    F(x, ϕ(x), ϕ'(x)) = 0, ∀x ∈ I.

If, using the theorem on implicit functions, we can solve equation (2.51) for the derivative y', then we obtain one or several equations

    y' = fi(x, y), i = 1, 2, . . . .

Integrating these equations, which are solved for the derivative, we get the solutions of equation (2.51).
We consider here only two important particular cases of implicit equations: La-
grange’s equation and Clairaut’s equation.
Lagrange's Equation. The general form of such an equation is the following one:

    y = x a(y') + b(y'),    (2.53)

where a and b are given derivable functions.

a) If p − a(p) ≠ 0, denoting y' = p and considering x as a function of p, we obtain a linear equation; its general solution x = x(p, C), together with y(p) = x(p, C) a(p) + b(p), gives the general solution of (2.53) in parametric form.

b) If p − a(p) = 0 and pi are the real roots of this algebraic equation, then y(x) = x a(pi) + b(pi) are singular solutions of equation (2.53) (they are straight lines).
Exercise 2.56 Solve the following Lagrange's equation:

    y = 2x y' + ln y'.

Solution. Denoting y' = p and differentiating with respect to x, we get

    p = 2p + (2x + 1/p) dp/dx, p > 0.

Hence, considering x = x(p), we get the nonhomogeneous equation

    dx/dp = −(2/p) x − 1/p²,

having the general solution

    x(p) = C/p² − 1/p, C ∈ R.
Using the equation, we get the parametric equations of the complete integral of our Lagrange's equation:

    x(p) = C/p² − 1/p,
    y(p) = ln p + 2C/p − 2.
Exercise 2.57 Solve the following Lagrange's equation:

    y = 2x y' − (y')².
    y = x y' + g(y'),    (2.57)

a) If dp/dx = 0, then p = C and y(x) = Cx + g(C), with C ∈ I, is the general solution of equation (2.57) (a one-parameter family of integral curves which are straight lines).

b) If x + g'(p) = 0, we get the parametric equations of the singular solution of (2.57):

    x(p) = −g'(p),
    y(p) = −p g'(p) + g(p).    (2.58)

Notice that the integral curve defined by (2.58) is the envelope of the family of integral curves (2.57).
Problems on Chapter 2
But this equality cannot hold true, because the function in the left-hand side of (2.120) has the Darboux property, while Heaviside's function does not have this property. This is a contradiction and, hence, y = 0 is the unique solution of our equation.
Solution. Our equation is a Clairaut's equation, y = x y' − (y')². Denoting y' = p and differentiating with respect to x, we get

    (x − 2p) dp/dx = 0.

a) If dp/dx = 0, then p = C and

    y(x) = Cx − C², C ∈ R,

is the general solution of the given equation (a one-parameter family of straight lines).

b) If x − 2p = 0, then we get the parametric equations of the singular solution of our Clairaut's equation:

    x(p) = 2p,
    y(p) = p².
Eliminating p, we get

    y(x) = x²/4.
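The envelope interpretation can be illustrated numerically (our sketch): for each x, the upper envelope of the family of lines y = Cx − C² is attained at C = x/2 and equals exactly the singular solution y = x²/4.

```python
# family of lines y = C x - C^2 (the general solution of the Clairaut equation)
line = lambda C, x: C * x - C * C

xs = [-2.0, -0.5, 0.0, 1.0, 3.0]
Cs = [i * 0.001 for i in range(-4000, 4001)]   # a fine sweep of slopes C

# at each x, the maximum over C of the family equals the parabola x^2/4
env_gap = max(abs(max(line(C, x) for C in Cs) - x * x / 4) for x in xs)
```

The gap is bounded by the square of the C-grid spacing, since C x − C² = x²/4 − (C − x/2)².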
Exercise 2.65 Solve the initial value problem

    dy/dx = (y² − 1)/x, x ∈ (0, √3),
    y(1) = 2.

Solution. Since the equation is separable, the solution of the problem is straightforward. Indeed, since y = 1 and y = −1 are not solutions of our initial value problem, the equation separates to

    dy/(y² − 1) = dx/x.

Integrating both sides gives

    (y − 1)/(y + 1) = Cx², C > 0.

If we plug in the condition y(1) = 2, we get

    y(x) = (3 + x²)/(3 − x²).
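The separated form dy/(y² − 1) = dx/x identifies the equation as y' = (y² − 1)/x, and the answer can be verified numerically (our addition):

```python
y = lambda x: (3 + x * x) / (3 - x * x)

# initial condition: y(1) should equal 2
ic = y(1.0)

# residual of dy/dx = (y^2 - 1)/x on (0, sqrt(3)), via centered differences
h = 1e-6
res = max(abs((y(x + h) - y(x - h)) / (2 * h) - (y(x)**2 - 1) / x)
          for x in [0.3, 0.8, 1.0, 1.5])
```

Note that the solution blows up as x approaches √3, which explains the interval of existence.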
y(0) = 2.
Solution. Since the equation is linear, using the method of variation of parameters,
it is not difficult to see that the general solution of our equation is
In order to find the particular solution to the given IVP, we use the initial condition
y(0) = 2. We obtain C = 2. Therefore, the solution is
y(1) = 1.
Solution. Since our equation is not exact, looking for an integrating factor µ = µ(y), we easily get

    µ(y) = 1/y².
Notice that y = 0 is not a solution of our problem. Now, multiplying our equation
by µ, we get
    (1/y + x) dx − (x/y²) dy = 0,
y y
which is an exact equation. By integration, it is not difficult to compute its first
integral. Hence, the general solution of our initial equation is given, implicitly, by
    x/y + x²/2 + C = 0, C ∈ R.
If we plug in the initial condition, we get C = −3/2. Hence, the solution of our
Cauchy problem is given by
    x/y + x²/2 − 3/2 = 0
or
    y(x) = −2x/(x² − 3).
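As a quick numerical confirmation (ours), the first integral F(x, y) = x/y + x²/2 is constant along the solution, and the initial condition holds:

```python
y = lambda x: -2 * x / (x * x - 3)        # solution of the Cauchy problem
F = lambda x: x / y(x) + x * x / 2        # first integral x/y + x^2/2

# F should stay equal to 3/2 along the solution curve
vals = [F(x) for x in [0.5, 1.0, 1.3]]
spread = max(vals) - min(vals)
```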
Solution. It is not difficult to see that the right-hand side of our equation satisfies
the assumptions of the uniqueness theorem. Also, y(x) ≡ −2 and y(x) ≡ 2 are two
constant solutions of our differential equation. Since the solution of the given initial value problem starts between these two solutions, it has to remain between them. So,

    −2 < y(x) < 2, ∀x,

i.e. |y(x)| < 2, ∀x.
Chapter 3
Higher-Order Differential
Equations
The general form of a differential equation of the n-th order is the following one:

    F(x, y, y', . . . , y^(n)) = 0,    (3.1)

where F : G ⊆ R^{n+2} → R. If this equation can be solved for the highest derivative, we get

    y^(n) = f(x, y, y', . . . , y^(n−1)),    (3.2)

where f : D ⊆ R × R^n → R.
A function ϕ(·) : I ⊆ R → R, where I is an interval in R, is a solution of the differential equation (3.2) if ϕ(·) is n-times derivable, (x, ϕ(x), ϕ'(x), . . . , ϕ^(n−1)(x)) ∈ D, ∀x ∈ I, and ϕ(·) satisfies the equation, i.e.

    ϕ^(n)(x) = f(x, ϕ(x), ϕ'(x), . . . , ϕ^(n−1)(x)), ∀x ∈ I.
Remark 3.1 It is not difficult to transform the n-th order equation (3.1) to a system
of n first-order equations, for which we already have an existence and uniqueness
result.
If in a neighbourhood of the initial values (x0, y0, y0', . . . , y0^(n−1)) the function f is continuous with respect to all its arguments and satisfies a Lipschitz condition with respect to all its arguments beginning with the second one, then there exists a unique solution of the Cauchy problem associated with (3.2).

Remark 3.3 The general solution of the differential equation (3.1) depends on n parameters C1, C2, . . . , Cn, i.e. y = y(x, C1, C2, . . . , Cn). These n parameters can be, for instance, the initial values of the sought-for function and its derivatives, y0, y0', . . . , y0^(n−1).
By performing the change of variables z(x) = y^(k)(x), the equation (3.3) becomes an equation of order (n − k) in z. Integrating it and then integrating z(x) k times, we obtain the general solution of (3.3), depending on n parameters C1, C2, . . . , Cn.
Of course, for equation (3.5) we may also obtain some singular solutions zi ,
which, by k-fold integration, will give the desired singular solutions yi of equation
(3.3).
Solution. Since the function y does not enter explicitly into our equation, setting z(x) = y'(x), we obtain

    x z' + z = −x² z²,
    z(1) = 1.

Solving this Bernoulli's equation, together with its initial condition, we have z(x) = 1/x². Therefore, the solution of our initial Cauchy problem is

    y(x) = −1/x + 2.
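A direct numerical check (our addition): z = 1/x² satisfies the Bernoulli problem x z' + z = −x² z² with z(1) = 1, and y = −1/x + 2 has derivative z.

```python
z = lambda x: 1.0 / (x * x)       # z = y'
y = lambda x: -1.0 / x + 2.0      # solution of the original Cauchy problem

# residual of x z' + z + x^2 z^2 = 0, via centered differences
h = 1e-6
res = max(abs(x * (z(x + h) - z(x - h)) / (2 * h) + z(x) + x * x * z(x)**2)
          for x in [0.5, 1.0, 2.0])

# y' at x = 1 should match z(1) = 1
dy = (y(1.0 + h) - y(1.0 - h)) / (2 * h)
```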
x
Solution. Since the function y does not enter explicitly into our equation, setting z(x) = y'(x), we get

    z = x z' − (1/4)(z')²,

which is a Clairaut's equation. Denoting z' = p and differentiating with respect to x, we get

    (x − p/2) dp/dx = 0.
a) If dp/dx = 0, then p = C1 and

    z(x) = C1 x − C1²/4, C1 ∈ R,

is the general solution of this Clairaut's equation. Taking into account our change of variables, the general solution of the original equation is

    y(x) = C1 x²/2 − C1² x/4 + C2, C1, C2 ∈ R.

b) If x − p/2 = 0, then we get the singular solution of Clairaut's equation:

    z(x) = x²,
Solution. Setting z(x) = y''(x), we get the nonhomogeneous linear equation

    dz/dx = −z/x + (1 + x)/x,

with the general solution

    z(x) = x/2 + 1 + C1/x, C1 ∈ R.

Taking into account our change of variables, we obtain

    y(x) = x³/12 + x²/2 + K1 x ln|x| + K2 x + K3, K1, K2, K3 ∈ R.
II. Let the left-hand side of equation (3.1) be a homogeneous function of degree zero with respect to the arguments y, y', . . . , y^(n), i.e. let the equation have the form

    F(x, y'/y, y''/y, . . . , y^(n)/y) = 0, F : D ⊆ R^{n+1} → R.    (3.8)
    2y y'' − 3(y')² − 4y² = 0,
    y(0) = 1, y'(0) = 0, x ∈ (−π/2, π/2).

Solution. Since the equation is homogeneous in y, y', y'', setting z(x) = y'(x)/y(x), we obtain

    2z' = z² + 4,
    z(0) = 0.

Solving this equation with separable variables and taking into account its initial condition, we have z(x) = 2 tan x. Therefore, the solution of the initial Cauchy problem is

    y(x) = 1/cos²x.
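The answer can be verified by finite differences (our addition): y = 1/cos²x satisfies 2y y'' − 3(y')² − 4y² = 0 together with the initial data.

```python
import math

y = lambda x: 1.0 / math.cos(x)**2

# residual of 2 y y'' - 3 (y')^2 - 4 y^2 = 0, via centered differences
h = 1e-4
def residual(x):
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return abs(2 * y(x) * ypp - 3 * yp**2 - 4 * y(x)**2)

res = max(residual(x) for x in [0.0, 0.4, 1.0])

# initial data: (y(0), y'(0)) should be (1, 0)
ic = (y(0.0), (y(h) - y(-h)) / (2 * h))
```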
Exercise 3.8 Integrate the following equation:

    y y'' − (y')² = x² y².
Solution. Since the equation is homogeneous in y, y', y'', setting z(x) = y'(x)/y(x), we get z' = x², with the general solution

    z(x) = x³/3 + C1, C1 ∈ R.

Then, taking into account our change of variables, it remains to integrate the equation

    y' = y (x³/3 + C1).

Hence, the general solution of our initial equation is

    y(x) = C2 e^{x⁴/12 + C1 x}, C1 ∈ R, C2 ∈ R*.
III. Let us assume that the left-hand side of equation (3.1) does not involve explicitly the independent variable x, i.e. the equation has the form

    F(y, y', y'', . . . , y^(n)) = 0, F : D ⊆ R^{n+1} → R.    (3.13)

The order of such an equation can be reduced by unity if we regard y in this equation as an independent variable and y' as an unknown function. Therefore, let us perform the change of variables

    z(y(x)) = y'(x).    (3.14)

Then, y'' = z' z, y''' = z'' z² + z (z')² and, by induction, y^(n) = ϕ(z, z', . . . , z^(n−1)). So, equation (3.13) becomes

    Φ(y, z, z', . . . , z^(n−1)) = 0.    (3.15)

Integrating (3.15), we obtain its general solution as being of the form

    z(y) = ψ(y, C1, C2, . . . , Cn−1),    (3.16)

depending on (n − 1) parameters C1, C2, . . . , Cn−1 and, possibly, the singular solutions zi. By integration in y'(x) = ψ(y, C1, C2, . . . , Cn−1), we get the general solution of (3.13), i.e. a family of functions

    y(x) = ϕ(x, C1, C2, . . . , Cn)    (3.17)

depending on n parameters C1, C2, . . . , Cn. Also, corresponding to the singular solutions zi, by integration, we get the singular solutions yi of the original equation (3.13).
    2y''' − 3(y')² = 0,
    y(0) = −3, y'(0) = 1, y''(0) = −1.

Solution. Since the function y does not enter explicitly into our equation, setting u(x) = y'(x), we get

    2u'' − 3u² = 0,
    u(0) = 1, u'(0) = −1.

Since in this last equation the independent variable x is not involved explicitly, setting z(u(x)) = u'(x), we get

    2z dz/du − 3u² = 0,
    z(1) = −1.

It is not difficult to see that the solution of this Cauchy problem is z² = u³. Taking into account that z(u(x)) = u'(x), u(x) > 0 and u(0) = 1, we obtain

    u(x) = 4/(x + 2)².

Hence, to get the solution of our initial Cauchy problem, it remains to integrate the equation

    dy/dx = 4/(x + 2)².

We obtain

    y(x) = −4/(x + 2) + C, C ∈ R.

Since y(0) = −3, we get C = −1 and the solution of the initial Cauchy problem is

    y(x) = −(x + 6)/(x + 2), x ∈ I.
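The closed-form answer can be checked exactly (our sketch; the derivatives below are computed by hand from y = −1 − 4/(x + 2)):

```python
y = lambda x: -(x + 6.0) / (x + 2.0)

# hand-computed derivatives of y = -1 - 4/(x + 2)
yp = lambda x: 4.0 / (x + 2.0)**2        # y'
yppp = lambda x: 24.0 / (x + 2.0)**4     # y'''

# residual of 2 y''' - 3 (y')^2 = 0 at several points, plus the initial data
res = max(abs(2 * yppp(x) - 3 * yp(x)**2) for x in [0.0, 1.0, 5.0])
ic = (y(0.0), yp(0.0))                   # should be (-3, 1)
```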
    y'' = 2y y',
    y(0) = 0, y'(0) = 1, x ∈ (−π/2, π/2).
Solution. Since in our equation the independent variable x is not involved explicitly, setting z(y) = y'(x), we get

    z dz/dy = 2yz,
    z(0) = 1.

For z ≠ 0, we obtain dz/dy = 2y and, hence,

    z(y) = y² + 1.

Therefore, to get the solution of our initial Cauchy problem, it remains to integrate the equation

    dy/dx = y² + 1.

We obtain

    y(x) = tan(x + C).

Since y(0) = 0 and x ∈ (−π/2, π/2), the solution of the initial Cauchy problem is

    y(x) = tan x.
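Since y'' = z dz/dy, the reduced equation z dz/dy = 2yz corresponds to the second-order equation y'' = 2y y'; a finite-difference check (our addition) confirms that y = tan x solves it with the given initial data.

```python
import math

y = lambda x: math.tan(x)

# residual of y'' = 2 y y' on (-pi/2, pi/2), via centered differences
h = 1e-4
def residual(x):
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return abs(ypp - 2 * y(x) * yp)

res = max(residual(x) for x in [-1.0, 0.0, 0.5, 1.2])

# initial data: (y(0), y'(0)) should be (0, 1)
ic = (y(0.0), (y(h) - y(-h)) / (2 * h))
```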
    1 + (y')² = 2y y'',
    y(2) = 1, y'(2) = 0.

Solution. Since the independent variable x is not involved explicitly, setting z(y) = y'(x), we get

    1 + z² = 2yz dz/dy,
    z(1) = 0.

It is not difficult to see that

    z²(y) = y − 1.
The order of such an equation can be reduced by unity if we perform the change of variables

    |x| = e^s.    (3.19)

Without loss of generality, we can assume that x > 0. So, the change of variables (3.19) defines a new function z, by the following formula:

    z(s) = y(e^s).
For instance, if after the substitution x = e^s we get the Cauchy problem
z″ + z = 0,
z(0) = 1, z′(0) = 1,
then z(s) = cos s + sin s and, coming back to the variable x, y(x) = cos(ln x) + sin(ln x).
If, instead, the substitution x = e^s leads to
z z″ − 2z z′ + (z′)² = 0,
z(0) = 1, z′(0) = 1,
then, since the variable s is not involved explicitly, setting u(z(s)) = z′(s), we obtain
u(z u′ − 2z + u) = 0,
u(1) = 1.
The general form of a linear differential equation of order n is
a0(x) y^(n)(x) + a1(x) y^(n−1)(x) + · · · + an−1(x) y′(x) + an(x) y(x) = f(x). (3.25)
If the coefficient a0(x) is not equal to zero on the interval (a, b), then, by dividing (3.25) by a0(x), we can always write the equation in normalized form.
L(y) = 0, (3.27)
Indeed, it is not difficult to see that L is linear. Using its linearity, we get
L(C1 y1 + · · · + Cm ym) = C1 L(y1) + · · · + Cm L(ym),
Definition 3.17 The functions y1 (x),. . . ,yn (x) are called linearly dependent over
(a, b) if there exist n constants α1 , . . . , αn , at least one of which is not equal to zero,
such that
α1 y1 (x) + · · · + αn yn (x) ≡ 0, x ∈ (a, b). (3.29)
If identity (3.29) is fulfilled only for α1 = · · · = αn = 0, then the functions
y1 (x), . . . , yn (x) are called linearly independent over the interval (a, b).
Example 3.20 The functions ekx , x ekx , x2 ekx , . . . , xp ekx , with k ∈ R and p ∈ N,
are linearly independent on the interval (a, b).
Theorem 3.21 If the functions y1(x), y2(x), . . . , yn(x) are linearly dependent on the interval (a, b) and have derivatives up to the order (n − 1), then the determinant
W(x) ≡ W(y1, . . . , yn) =
| y1(x)        y2(x)        . . .  yn(x)       |
| y1′(x)       y2′(x)       . . .  yn′(x)      |
| . . .                                        |
| y1^(n−1)(x)  y2^(n−1)(x)  . . .  yn^(n−1)(x) |
vanishes identically on (a, b).
Using the linearity of L, we easily get the following result (see [15]):
Theorem 3.26 (i) If yhom is a solution of the homogeneous linear equation (3.30)
and yp is a solution of the nonhomogeneous linear equation (3.32), then yhom + yp
is also a solution of equation (3.32).
(ii) If yi , i = 1, 2, . . . , m, are solutions of the equations
L[y] = fi (x), i = 1, 2, . . . , m,
then y = α1 y1 + · · · + αm ym, where αi are given constants, is a solution of the equation
L(y) = α1 f1(x) + · · · + αm fm(x) (the principle of superposition).
(iii) If the equation L(y) = U(x) + iV(x), with real coefficients pi(x) and real U and V, has a complex solution y(x) = u(x) + i v(x), then the real part u and the imaginary part v are, respectively, solutions of the equations L(y) = U(x) and L(y) = V(x).
y^(n)(x) + p1(x) y^(n−1)(x) + · · · + pn(x) y(x) = 0, x ∈ (a, b), (3.33)
Let us now consider the nonhomogeneous linear equation with constant coefficients
a0 y^(n)(x) + a1 y^(n−1)(x) + · · · + an−1 y′(x) + an y(x) = f(x), x ∈ (a, b), (3.37)
where the function f is continuous on the interval (a, b) and ai, i = 0, 1, . . . , n, are real constants.
If the right-hand side of equation (3.37) is identically zero, the linear equation (3.37) is called homogeneous. Thus, its general form is
a0 y^(n)(x) + a1 y^(n−1)(x) + · · · + an−1 y′(x) + an y(x) = 0, x ∈ (a, b). (3.38)
Let us deal first with the homogeneous case. The first method of integrating linear ordinary differential equations with constant coefficients is due to Euler. He thought of solving a linear homogeneous differential equation with constant coefficients of the form (3.38) by looking for solutions of the form y = e^{rx}, where r is a constant to be determined. Substituting y = e^{rx} into (3.38) and dividing by the nonzero factor e^{rx}, we obtain the algebraic equation
a0 r^n + a1 r^{n−1} + · · · + an−1 r + an = 0. (3.39)
Thus, y = e^{rx} is a solution of (3.38) if and only if r satisfies the equation (3.39), called the characteristic equation of the differential equation (3.38). The roots of this characteristic equation, called characteristic values or eigenvalues, will reveal to us the nature of the solutions of (3.38). Several cases must be considered.
Case 1. If all the roots r1, r2, . . . , rn of equation (3.39) are real and distinct, then
e^{r1 x}, e^{r2 x}, . . . , e^{rn x}
are n linearly independent solutions of equation (3.38), i.e. they form a fundamental system of solutions. So, in this case, the general solution of (3.38) is
y = C1 e^{r1 x} + · · · + Cn e^{rn x}, Ci ∈ R.
Case 2. If ri is a real root of equation (3.39) of multiplicity αi, then
e^{ri x}, x e^{ri x}, . . . , x^{αi−1} e^{ri x}
are solutions of equation (3.38), forming a linearly independent system on any interval (a, b). So, putting together all the solutions corresponding to the roots ri of multiplicity αi, with i = 1, . . . , k, we get the needed fundamental system of solutions of equation (3.38),
and the general solution will be their linear combination with n arbitrary real constants Ci, i = 1, . . . , n.
Case 3. If the equation (3.39) with real coefficients has a complex root α + iβ, β > 0, then among the remaining roots there must be its conjugate root α − iβ. For such a pair of complex eigenvalues, the two corresponding solutions of differential equation (3.38) are e^{(α+iβ)x} and e^{(α−iβ)x}. Then, the functions
(1/2) (e^{(α+iβ)x} + e^{(α−iβ)x}) = e^{αx} cos(βx) = Re(e^{(α+iβ)x}), (3.42)
(1/(2i)) (e^{(α+iβ)x} − e^{(α−iβ)x}) = e^{αx} sin(βx) = Im(e^{(α+iβ)x}), (3.43)
which are real functions, are linearly independent over any interval (a, b).
Reasoning in the same manner for each root ri of equation (3.39), we get the
needed fundamental system of solutions of equation (3.38), consisting of n linearly
independent functions y1 , . . . , yn and the general solution will be their linear combi-
nation with n arbitrary real constants Ci , i = 1, . . . , n.
Case 4. If the equation (3.39) has a complex root α + iβ of multiplicity m, then
α − iβ is also a root of multiplicity m. For such complex eigenvalues, using again
Euler’s formula, we get the 2m corresponding solutions of the differential equation
(3.38) as being of the following form:
e^{αx} cos(βx), x e^{αx} cos(βx), . . . , x^{m−1} e^{αx} cos(βx),
e^{αx} sin(βx), x e^{αx} sin(βx), . . . , x^{m−1} e^{αx} sin(βx). (3.44)
It is not difficult to prove that these functions form a linearly independent system
y1 , . . . , y2m of solutions of equation (3.38) over any interval (a, b).
Reasoning in the same manner for each root ri of equation (3.39), we get the
needed fundamental system of solutions of equation (3.38), consisting of n linearly
independent functions y1 , . . . , yn and the general solution will be their linear combi-
nation with n arbitrary real constants Ci , i = 1, . . . , n.
Ci ∈ R, i = 1, 2, 3.
Exercise 3.31 Find the general solution of the equation y (4) −4y 000 +8y 00 −8y 0 +4y =
0.
Solution. The characteristic equation is r⁴ − 4r³ + 8r² − 8r + 4 = 0, i.e. (r² − 2r + 2)² = 0. Thus, r = 1 ± i are complex roots of multiplicity 2 and the fundamental system of solutions is
y1 = ex cos x, y2 = xex cos x, y3 = ex sin x, y4 = xex sin x.
The general solution is
y = C1 e^x cos x + C2 x e^x cos x + C3 e^x sin x + C4 x e^x sin x, Ci ∈ R, i = 1, . . . , 4.
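Both the factorization of the characteristic polynomial and the fundamental system can be confirmed symbolically; a minimal sympy sketch of Exercise 3.31:

```python
import sympy as sp

x, r = sp.symbols('x r')

# characteristic polynomial of y'''' - 4y''' + 8y'' - 8y' + 4y = 0
charpoly = r**4 - 4*r**3 + 8*r**2 - 8*r + 4
factored = sp.factor(charpoly)  # should equal (r**2 - 2*r + 2)**2

def residual(y):
    # residual of the fourth-order equation for a candidate solution y
    return sp.simplify(sp.diff(y, x, 4) - 4*sp.diff(y, x, 3)
                       + 8*sp.diff(y, x, 2) - 8*sp.diff(y, x) + 4*y)

sols = [sp.exp(x)*sp.cos(x), x*sp.exp(x)*sp.cos(x),
        sp.exp(x)*sp.sin(x), x*sp.exp(x)*sp.sin(x)]
residuals = [residual(y) for y in sols]
```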
Theorem 3.33 The general solution on the interval (a, b) of the nonhomogeneous linear equation (3.37) with constant coefficients ai and continuous right-hand side f is equal to the sum of the general solution C1 y1 + · · · + Cn yn of the corresponding homogeneous equation (3.38) and of some particular solution yp of the nonhomogeneous equation (3.37), i.e.
y(x) = C1 y1(x) + · · · + Cn yn(x) + yp(x). (3.45)
Another method for solving (3.37) is the general method of variation of parameters. More precisely, if we know the general solution of equation (3.38), we look for the solution of (3.37) in the same form, with the constants replaced by functions C1(x), . . . , Cn(x) determined from the following system:
C1′(x) y1(x) + · · · + Cn′(x) yn(x) = 0,
. . .
C1′(x) y1^(n−1)(x) + · · · + Cn′(x) yn^(n−1)(x) = f(x).
This system of n linear equations with a nonzero determinant (the determinant is the Wronskian of the fundamental system of solutions y1, . . . , yn) has a unique solution C1′(x) = ϕ1(x), . . . , Cn′(x) = ϕn(x). By integration, we obtain the general solution of (3.37):
y(x) = (∫ϕ1(x) dx + C1) y1(x) + · · · + (∫ϕn(x) dx + Cn) yn(x), C1, . . . , Cn ∈ R. (3.51)
For a Cauchy problem, using the initial conditions, we determine the n unknown constants Ci and we obtain the unique solution of our initial-value problem.
yhom = C1 e^x + C2 e^{−x}, C1, C2 ∈ R.
Following the method of variation of parameters, let us look for the general solution of the given nonhomogeneous equation in the form y(x) = C1(x) e^x + C2(x) e^{−x}. Solving the linear system
C1′(x) e^x + C2′(x) e^{−x} = 0,
C1′(x) e^x − C2′(x) e^{−x} = x²,
we get
C1′(x) = (x²/2) e^{−x}, C2′(x) = −(x²/2) e^x.
By integration, we have
C1(x) = −(x²/2 + x + 1) e^{−x} + K1, C2(x) = −(x²/2 − x + 1) e^x + K2, K1, K2 ∈ R.
Hence, the general solution of our initial equation is
y(x) = K1 e^x + K2 e^{−x} − x² − 2, K1, K2 ∈ R.
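The general solution found by variation of parameters can be sanity-checked with sympy. The right-hand side x² is inferred here from the system solved for C1′, C2′ and from the particular part −x² − 2, so treat it as an assumption:

```python
import sympy as sp

x, K1, K2 = sp.symbols('x K1 K2')
y = K1*sp.exp(x) + K2*sp.exp(-x) - x**2 - 2  # claimed general solution

# residual of y'' - y - x**2 (equation inferred from the worked steps)
residual = sp.simplify(sp.diff(y, x, 2) - y - x**2)
```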
r² + 9 = 0,
with the roots r = ±3i, so that yhom = C1 cos 3x + C2 sin 3x. Following the method of variation of parameters, let us look for the general solution of the given nonhomogeneous equation in the form
y(x) = C1(x) cos 3x + C2(x) sin 3x.
We get
C1′(x) = −x sin 3x,
C2′(x) = x cos 3x.
By integration, we have
C1(x) = (x/3) cos 3x − (1/9) sin 3x + K1,
C2(x) = (x/3) sin 3x + (1/9) cos 3x + K2, K1, K2 ∈ R.
yp = B0 x^s + B1 x^{s−1} + · · · + Bs. (3.52)
If 0 is a root of multiplicity k of the characteristic equation associated to (3.37), then the particular solution is to be sought in the form
yp = x^k (B0 x^s + B1 x^{s−1} + · · · + Bs). (3.53)
Case b). Let us consider that the right-hand member of equation (3.37) is of the form f(x) = e^{px} (A0 x^s + A1 x^{s−1} + · · · + As), where p and Ai, i = 0, . . . , s, are real constants.
If p is not a root of the characteristic equation associated to (3.37), then the particular solution can be sought as being of the same form as the right-hand side, i.e.
yp = e^{px} (B0 x^s + B1 x^{s−1} + · · · + Bs). (3.54)
If p is a root of multiplicity m of the characteristic equation, then the particular solution is to be sought in the form
yp = x^m e^{px} (B0 x^s + B1 x^{s−1} + · · · + Bs). (3.55)
Case c). Let us consider that the right-hand member of equation (3.37) is of the form f(x) = e^{px} [P0(x) cos(qx) + Q0(x) sin(qx)], where p and q are real constants and P0 and Q0 are polynomials in x, with real coefficients.
If p ± iq are not roots of the characteristic equation associated to (3.37), then the particular solution can be found as being of the form
yp = e^{px} [P(x) cos(qx) + Q(x) sin(qx)],
where P and Q are polynomials of degree equal to the larger of the degrees of P0 and Q0.
r² + 1 = 0,
with the roots r = ±i, so that yhom = C1 cos x + C2 sin x. Since 1 is not a characteristic root, looking for a particular solution of the nonhomogeneous equation of the form
yp = e^x (B0 x + B1),
we get
yp = e^x (x − 1/2).
Hence, the general solution of our nonhomogeneous equation is
y = C1 cos x + C2 sin x + e^x (x − 1/2), C1, C2 ∈ R.
Exercise 3.39 Solve the nonhomogeneous equation
y″ − y = e^x (x² − 1).
Solution. The characteristic equation of the corresponding homogeneous equation is r² − 1 = 0, with the roots r = ±1, so that
yhom = C1 e^x + C2 e^{−x}, C1, C2 ∈ R.
Since 1 is a simple characteristic root, looking for a particular solution of the nonhomogeneous equation of the form
yp = x e^x (B0 x² + B1 x + B2),
we get
yp = x e^x (x²/6 − x/4 − 1/4).
Hence, the general solution of our nonhomogeneous equation is
y = C1 e^x + C2 e^{−x} + x e^x (x²/6 − x/4 − 1/4), C1, C2 ∈ R.
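The coefficients B0 = 1/6, B1 = −1/4, B2 = −1/4 can be confirmed symbolically; a short sympy check for Exercise 3.39:

```python
import sympy as sp

x = sp.symbols('x')
yp = x*sp.exp(x)*(x**2/6 - x/4 - sp.Rational(1, 4))

# residual of y'' - y - exp(x)*(x**2 - 1) for the particular solution
residual = sp.simplify(sp.diff(yp, x, 2) - yp - sp.exp(x)*(x**2 - 1))
```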
Exercise 3.40 Integrate the nonhomogeneous equation
y″ + 4y′ + 4y = cos 2x.
Solution. The characteristic equation is r² + 4r + 4 = 0, with the double root r = −2. Since ±2i are not characteristic roots, looking for a particular solution of the nonhomogeneous equation of the form
yp = A cos 2x + B sin 2x,
we get
yp = (1/8) sin 2x.
Hence, the general solution of our nonhomogeneous equation is
y = C1 e^{−2x} + C2 x e^{−2x} + (1/8) sin 2x, C1, C2 ∈ R.
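A one-line symbolic check of the particular solution of Exercise 3.40:

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.sin(2*x)/8  # claimed particular solution

# residual of y'' + 4y' + 4y - cos(2x)
residual = sp.simplify(sp.diff(yp, x, 2) + 4*sp.diff(yp, x) + 4*yp - sp.cos(2*x))
```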
Exercise 3.41 Find the general solution of the nonhomogeneous equation
y″ + 4y = cos 2x.
Solution. The characteristic equation is r² + 4 = 0, with the roots r = ±2i. Since ±2i are simple characteristic roots, looking for a particular solution of the nonhomogeneous equation of the form
yp = x (A cos 2x + B sin 2x),
we get
yp = (x/4) sin 2x.
Hence, the general solution of our nonhomogeneous equation is
y = C1 cos 2x + C2 sin 2x + (x/4) sin 2x, C1, C2 ∈ R.
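The resonant particular solution of Exercise 3.41 (note the extra factor x, since ±2i are characteristic roots) can be verified the same way:

```python
import sympy as sp

x = sp.symbols('x')
yp = x*sp.sin(2*x)/4  # resonant particular solution

# residual of y'' + 4y - cos(2x)
residual = sp.simplify(sp.diff(yp, x, 2) + 4*yp - sp.cos(2*x))
```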
Exercise 3.42 Find the general solution of the nonhomogeneous equation
y″ + 2y′ + 2y = e^{−x} (x cos x + 3 sin x).
Solution. The characteristic equation is r² + 2r + 2 = 0, with the roots r = −1 ± i. Since −1 ± i are simple characteristic roots, looking for a particular solution of the nonhomogeneous equation of the form
yp = x e^{−x} [(A0 x + A1) cos x + (B0 x + B1) sin x],
we get
yp = x e^{−x} (−(5/4) cos x + (x/4) sin x).
Hence, the general solution of our nonhomogeneous equation is
y = C1 e^{−x} cos x + C2 e^{−x} sin x + x e^{−x} (−(5/4) cos x + (x/4) sin x), C1, C2 ∈ R.
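The particular solution of Exercise 3.42 is easy to get wrong by hand, so a symbolic check is worthwhile:

```python
import sympy as sp

x = sp.symbols('x')
yp = x*sp.exp(-x)*(-sp.Rational(5, 4)*sp.cos(x) + x/4*sp.sin(x))

# residual of y'' + 2y' + 2y - exp(-x)*(x*cos x + 3*sin x)
residual = sp.simplify(sp.diff(yp, x, 2) + 2*sp.diff(yp, x) + 2*yp
                       - sp.exp(-x)*(x*sp.cos(x) + 3*sp.sin(x)))
```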
Exercise 3.43 Find the law of motion of a material point of mass m, which is
attracted to a fixed center O with a force proportional to the distance x of the point
to the attracting center O (an elastic force), ignoring the resistance of the medium.
Solution. If the point is also acted upon by a periodic external force F0 cos(λt), Newton's second law gives
m x″ = −k x + F0 cos(λt),
where k > 0 is the proportionality factor. If we denote ω = √(k/m) and a = F0/m, we obtain
x″ + ω² x = a cos(λt). (3.59)
So, we have to solve a nonhomogeneous second-order linear equation with constant coefficients. Since the characteristic equation associated to the corresponding homogeneous equation is r² + ω² = 0, with the complex roots r = ±ωi, the general solution of the associated homogeneous equation is
xhom(t) = C1 cos(ωt) + C2 sin(ωt), C1, C2 ∈ R.
Case 2. If λ = ω, i.e. the frequency of the external force is equal to the frequency of the free oscillations, a particular solution of equation (3.59) is to be sought as being of the form xp(t) = t (A cos(ωt) + B sin(ωt)), where A and B are real coefficients to be determined. We obtain A = 0 and B = a/(2ω). Therefore,
xp(t) = (at/(2ω)) sin(ωt)
and the general solution of equation (3.59) is
x(t) = C1 cos(ωt) + C2 sin(ωt) + (at/(2ω)) sin(ωt), C1, C2 ∈ R.
We remark that in this case the amplitude of the oscillations of our solution increases without bound as the time t goes to infinity. This phenomenon, called resonance, can be very dangerous and can even lead to the destruction of the elastic system. However, we point out that resonance can sometimes be a friendly phenomenon (see [7]).
If we impose the initial conditions x(0) = 0 and x′(0) = 0, we get C1 = 0 and C2 = 0. Therefore,
x(t) = (at/(2ω)) sin(ωt).
The fact that the solution grows to infinity at the resonant frequency is highly
idealized, since, in practice, any physical system is damped due to friction, air
resistance, etc.
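The resonant solution above can be verified symbolically for arbitrary a and ω; a minimal sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
a, w = sp.symbols('a omega', positive=True)

xres = a*t*sp.sin(w*t)/(2*w)  # resonant solution with x(0) = x'(0) = 0

# residual of x'' + omega**2 * x - a*cos(omega*t)
residual = sp.simplify(sp.diff(xres, t, 2) + w**2*xres - a*sp.cos(w*t))
ic1 = xres.subs(t, 0)
ic2 = sp.diff(xres, t).subs(t, 0)
```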
where
α = γ/(2m), β = √(k/m − γ²/(4m²)).
The particular solution of equation (3.60) is
xp(t) = [a(ω² − λ²) / ((ω² − λ²)² + γ0² λ²)] cos(λt) + [γ0 λ a / ((ω² − λ²)² + γ0² λ²)] sin(λt),
where
γ0 = γ/m, ω = √(k/m), a = F0/m.
Hence, the general solution of equation (3.60) is
x(t) = e^{−αt} (C1 cos(βt) + C2 sin(βt)) + [a(ω² − λ²) / ((ω² − λ²)² + γ0² λ²)] cos(λt) + [γ0 λ a / ((ω² − λ²)² + γ0² λ²)] sin(λt).
Let us notice that in this case the presence of the damping prevents the solution from blowing up when ω = λ.
Case 2. If
γ²/(4m²) − k/m > 0,
the characteristic equation associated to (3.60) has the real roots
r1,2 = −γ/(2m) ± √(γ²/(4m²) − k/m).
Therefore, the general solution of the homogeneous equation is
xhom(t) = C1 e^{r1 t} + C2 e^{r2 t}, C1, C2 ∈ R,
while a particular solution of (3.60) is again
xp(t) = [a(ω² − λ²) / ((ω² − λ²)² + γ0² λ²)] cos(λt) + [γ0 λ a / ((ω² − λ²)² + γ0² λ²)] sin(λt).
Case 3. If
γ²/(4m²) − k/m = 0,
the characteristic equation has the double real root
r1,2 = −γ/(2m).
Therefore, the general solution of the homogeneous equation is
xhom(t) = e^{−γt/(2m)} (C1 + C2 t), C1, C2 ∈ R.
Problems on Chapter 3
r2 − 4r + 13 = 0
r² − 1 = 0
we get
A = −1, B = 0, C = −2.
y(x) = C1 e^x + C2 e^{−x} − x² − 2, C1, C2 ∈ R.
Imposing the initial conditions y(0) = −2 and y′(0) = 1, we have
C1 = 1/2, C2 = −1/2.
Hence, the solution of our IVP is
y(x) = (1/2)(e^x − e^{−x}) − x² − 2.
r2 + 8r + 25 = 0.
r2 + 1 = 0,
y1 = cos x, y2 = sin x.
y = C1 cos x + C2 sin x, C1 , C2 ∈ R.
Due to the special form of the right-hand side of our equation, we shall look for a
particular solution as being of the form
i.e.
(r − 1)² (r + 1)² (r + 5) = 0,
has five roots: r = 1 with multiplicity 2, r = −1 with multiplicity 2 and r = −5 with multiplicity 1.
Therefore, the general solution is
y = (C1 + C2 x) e^x + (C3 + C4 x) e^{−x} + C5 e^{−5x}, Ci ∈ R, i = 1, . . . , 5.
r² + 2r + 2 = 0,
In order to find the solution of the given Cauchy problem, we have to use the initial conditions to determine C1 and C2. We get
C1 = C2 = √2 e^{π/4},
which implies
y(x) = √2 e^{π/4} (e^{−x} cos x + e^{−x} sin x).
Solution. Looking for particular solutions of the form y = erx , we obtain the char-
acteristic equation
r2 + 9 = 0,
y(x) = (sin 3x)/3.
Solution. Looking for particular solutions of the form y = erx , we obtain the char-
acteristic equation
r2 + 3r = 0,
y = C1 + C2 e−3x , C1 , C2 ∈ R.
Solve the Cauchy problem y″ = (y′)³, y(0) = 0, y′(0) = 1, x ∈ (−1/2, 1/2).
Solution. Since in our equation the independent variable x is not involved explicitly, setting
z(y) = y′(x),
we get
z (dz/dy − z²) = 0,
z(0) = 1.
Since z ≠ 0, it is not difficult to see that the solution of this Cauchy problem is
z(y) = 1/(1 − y).
Notice that y ≠ 1. Therefore, to get the solution of our initial Cauchy problem, it remains to integrate the equation
dy/dx = 1/(1 − y).
We obtain
y(x) − y²(x)/2 = x + C, C ∈ R.
Since y(0) = 0 and x ∈ (−1/2, 1/2), the solution of the initial Cauchy problem is
y(x) = 1 − √(1 − 2x).
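The closed-form solution can be checked with sympy. The equation y″ = (y′)³ is the one recovered from the reduction z(dz/dy − z²) = 0 above:

```python
import sympy as sp

x = sp.symbols('x')
y = 1 - sp.sqrt(1 - 2*x)  # claimed solution on (-1/2, 1/2)

# residual of y'' - (y')**3
residual = sp.simplify(sp.diff(y, x, 2) - sp.diff(y, x)**3)
y0 = y.subs(x, 0)
y1 = sp.diff(y, x).subs(x, 0)
```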
y(0) = 1, y′(0) = 1.
Solution. Since y = 0 is not a solution of our IVP, dividing the equation by y² and performing the change of variables
z(x) = y′(x)/y(x),
we get
z′(x) = x²,
z(0) = 1.
leads to
x z′ + 2z + x = 1.
This is a first-order linear differential equation. Its resolution gives
z(x) = C/x² − x/3 + 1/2.
Since z(1) = 1, we get C = 5/6. Consequently, we have
z(x) = 5/(6x²) − x/3 + 1/2.
By integration, we obtain
y(x) = K − 5/(6x) − x²/6 + x/2.
The condition y(1) = 2 gives K = 5/2. Therefore, we have
y(x) = 5/2 − 5/(6x) − x²/6 + x/2.
Note that this solution is defined for x > 0.
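The intermediate function z can be verified against the linear equation and the condition z(1) = 1:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
z = sp.Rational(5, 6)/x**2 - x/3 + sp.Rational(1, 2)

# residual of x*z' + 2z + x - 1 and the condition z(1) = 1
residual = sp.simplify(x*sp.diff(z, x) + 2*z + x - 1)
z1 = z.subs(x, 1)
```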
leads to
z z′ + z³ y = 0.
For the characteristic equation
r³ − 4r = 0,
with the roots
r1 = 0, r2 = 2, r3 = −2,
and the equation
y‴ − 4y′ = 3 cos x,
the guessed form for the particular solution of the first equation is
yp = A cos x + B sin x,
since ±i are not characteristic roots.
The general form of a system of n linear differential equations of the first order is the following one:
dy1/dx = a11(x) y1 + a12(x) y2 + · · · + a1n(x) yn + f1(x),
dy2/dx = a21(x) y1 + a22(x) y2 + · · · + a2n(x) yn + f2(x),
. . .
dyn/dx = an1(x) y1 + an2(x) y2 + · · · + ann(x) yn + fn(x). (4.1)
where Y = (y1, y2, . . . , yn)^T and F = (f1, f2, . . . , fn)^T are column vectors and A = (aij)_{i,j=1,...,n} is the matrix of the coefficients. (4.4)
L(Y ) = 0. (4.7)
It is not difficult to see that the operator L is linear. So, we obtain that
L(C1 Y1 + · · · + Cm Ym) = C1 L(Y1) + · · · + Cm L(Ym),
where Ci are arbitrary constants. Therefore, we get immediately the following result:
m
P
Theorem 4.1 (1) A linear combination Ci Yi , with arbitrary constant coeffi-
i=1
cients Ci , of solutions Yi of the homogeneous system (4.7) is also a solution of the
same system.
(2) If the homogeneous linear system (4.7) with real coefficients aij (x) has a
complex solution Y (x) = U (x) + iV (x), then the real part U and the imaginary part
V are also solutions of system (4.7).
Definition 4.2 The vectors Y1 (x), . . . ,Yn (x) are called linearly dependent over
(a, b) if there exist n constants α1 , . . . , αn , at least one of which is not equal to zero,
such that
α1 Y1(x) + · · · + αn Yn(x) ≡ 0, x ∈ (a, b). (4.9)
If the vector identity (4.9) is fulfilled only for α1 = · · · = αn = 0, then the vectors
Y1 (x), . . . , Yn (x) are called linearly independent over the interval (a, b).
It is not difficult to see that if the vectors Y1 (x), Y2 (x), . . . , Yn (x) are linearly depen-
dent on the interval (a, b), then the Wronskian W (x) ≡ W (Y1 , . . . , Yn ) is identically
zero on (a, b).
Remark 4.3 If the linearly independent vectors Y1, . . . , Yn are solutions of the homogeneous linear system (4.7) with continuous coefficients aij on (a, b), then the Wronskian W(x) = W(Y1, . . . , Yn) cannot vanish at any point x ∈ (a, b).
It is not difficult to prove the following result (see, for detailed proofs, [15]).
Theorem 4.5 The general solution of the homogeneous system (4.7) with continuous coefficients aij on (a, b) is the linear combination
Y(x) = C1 Y1(x) + · · · + Cn Yn(x), Ci ∈ R,
where {Y1, . . . , Yn} is a fundamental system of solutions of (4.7).
Theorem 4.7 (1) If Yhom is a solution of the homogeneous linear system (4.7)
and Yp is a solution of the nonhomogeneous linear system (4.6), then Yhom + Yp is
also a solution of system (4.6).
L(Y) = Fi(x), i = 1, 2, . . . , m,
where Fi = (f1i, f2i, . . . , fni)^T, (4.11)
then Y = α1 Y1 + · · · + αm Ym, where αi are given constants, is a solution of the system
L(Y) = α1 F1(x) + · · · + αm Fm(x) (the principle of superposition).
(3) If the system L(Y) = U(x) + iV(x), with real coefficients aij(x) and real column vectors
U = (u1, u2, . . . , un)^T, V = (v1, v2, . . . , vn)^T, (4.12)
has a complex solution, then the real part and the imaginary part of this solution are, respectively, solutions of the systems
L(Y) = U(x), L(Y) = V(x).
to the sum of the general solution of the corresponding homogeneous system and of
some particular solution Yp of the nonhomogeneous system, i.e.
Y(x) = C1 Y1(x) + · · · + Cn Yn(x) + Yp(x). (4.14)
Y = eλx U, (4.18)
where λ ∈ C and U = (u1, u2, . . . , un)^T, U ≠ 0. (4.19)
Therefore, from (4.17) we get
(A − λI) U = 0, (4.20)
where I is the unit matrix. So, U ≠ 0 will be a solution of (4.20) if and only if
det(A − λI) = 0. (4.21)
Equation (4.21) is called the characteristic equation associated to the system (4.17),
λ is called an eigenvalue of the matrix A and U an eigenvector corresponding to λ.
It is not difficult to see that the map λ 7→ det (A − λI) = KA (λ) is a polynomial of
degree n, called the characteristic polynomial of the linear map A:
The set of all the eigenvalues of the matrix A is called the spectrum of A:
the set of all the eigenvectors (proper vectors) corresponding to the eigenvalue λ.
Since equation (4.21) is a polynomial equation of degree n, using the fundamental
theorem of algebra, we see that (4.21) will have n solutions, not necessarily distinct.
Hence, the spectrum of A will be
Also, it is not difficult to see that if λ ∈ σ(A) and U ∈ P VA (λ), then c U ∈ P VA (λ),
∀c ∈ C \ {0}. Therefore, a proper vector corresponding to a given eigenvalue is not
uniquely determined.
We shall call the multiplicity of the eigenvalue λi the largest number m with the property that (λ − λi)^m divides the determinant det(A − λI). We shall denote the multiplicity of λi by m(λi). Sometimes, we may refer to m(λi) as the algebraic multiplicity of λi. Also, we shall call the geometric multiplicity of an eigenvalue λi the number of linearly independent eigenvectors corresponding to this eigenvalue (or the dimension of the eigenspace). Equivalently, the geometric multiplicity may be defined as the number of degrees of freedom in the eigenvector equation (4.20).
Now, let us see how, depending on the nature of the eigenvalues λ, we can construct a fundamental system of solutions for the homogeneous system (4.17). We have to distinguish between four cases.
Case 1. Let us assume that all the eigenvalues λi , i = 1, 2, . . . , n, are real and
distinct. For each λi , we determine, from (4.20), an eigenvector Ui ∈ Rn , Ui 6= 0.
Then, the vectors
Yi = eλi x Ui , i = 1, 2, . . . , n (4.26)
are linearly independent solutions of system (4.17), i.e. {Y1 , . . . , Yn } is a fundamental
system of solutions for this system. Therefore, the general solution of (4.17) will be
Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.27)
Case 2. If λ = α + iβ, β ≠ 0, is a simple complex eigenvalue of A and U ∈ Cⁿ, U ≠ 0, is a corresponding eigenvector, then the real vectors
Re(e^{λx} U), Im(e^{λx} U)
are linearly independent solutions of system (4.17). Reasoning in the same manner for all the eigenvalues λi, we get a fundamental system of solutions {Y1, . . . , Yn}. Hence, the general solution of (4.17) will be
Y = C1 Y1 + · · · + Cn Yn, Ci ∈ R, i = 1, 2, . . . , n. (4.29)
Case 3. Let us assume that λ is a real eigenvalue of multiplicity m(λ) > 1. For
such a λ, we shall look for a solution of system (4.17) of the form
Y = [P0 + P1 x + · · · + Pm(λ)−1 xm(λ)−1 ] eλx , (4.30)
Therefore,
(A − λI)m(λ) P0 = 0. (4.32)
We can choose m(λ) linearly independent vectors P0i ∈ Rⁿ, P0i ≠ 0 (a basis of the subspace Ker(A − λI)^{m(λ)} ⊆ Rⁿ, which is of dimension m(λ)). Then, corresponding to these vectors, we can determine by recurrence all Pji, for j = 1, 2, . . . , m(λ) − 1.
Therefore, we get m(λ) linearly independent solutions of system (4.17). Reasoning
in the same manner for all the eigenvalues λ of the matrix A, we get a fundamental
system of solutions {Y1 , . . . , Yn } for our system. Hence, its general solution will be
Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.33)
(A − λI)^{m(λ)} P0 = 0 (4.36)
and
Pj = (1/j!) (A − λI)^j P0, j = 1, 2, . . . , m(λ) − 1. (4.37)
We can choose m(λ) linearly independent vectors P0i ∈ Cⁿ, P0i ≠ 0. Then, corresponding to these vectors, we can determine by recurrence all Pji, j = 1, 2, . . . , m(λ) − 1. Therefore, we obtain m(λ) vectors
Yi = [P0i + P1i x + · · · + P(m(λ)−1)i x^{m(λ)−1}] e^{λx}, i = 1, . . . , m(λ). (4.38)
Then, the vectors Re (Yi ) and Im (Yi ) are 2m(λ) independent solutions of system
(4.17). Reasoning in the same manner for all the eigenvalues λ of the matrix A,
we get a fundamental system of solutions {Y1 , . . . , Yn } for our system. Hence, its
general solution will be
Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.39)
Solution. It is not difficult to see that the characteristic equation associated to our linear system, det(A − λI) = 0, has the roots λ1 = 0, λ2 = 1, with m(λ2) = 2. So, corresponding to the simple real eigenvalue λ1, we can determine a proper vector U1 ≠ 0. Indeed, we have
(2 −1 −1; 3 −2 −3; −1 1 2) (u1, u2, u3)^T = (0, 0, 0)^T.
This gives
u2 = 3u1, u3 = −u1.
Therefore, corresponding to the first eigenvalue, we get
U1 = (1, 3, −1)^T, Y1 = e^{0·x} U1 = (1, 3, −1)^T.
For the double real root λ2 , we shall look for a solution of our system of the form
Y = [P0 + P1 x] eλx ,
Let us note that the dimension of the space of the eigenvectors corresponding to the
double eigenvalue λ2,3 is 2 and this justifies the special form of the above fundamental
solutions Y2 and Y3 .
Hence, we get a fundamental system of solutions {Y1 , Y2 , Y3 } for our system and
the general solution is
Y = C1 (1, 3, −1)^T + C2 e^x (1, 1, 0)^T + C3 e^x (1, 0, 1)^T, Ci ∈ R, i = 1, 2, 3,
or, by components,
y1 = C1 + (C2 + C3) e^x,
y2 = 3C1 + C2 e^x,
y3 = −C1 + C3 e^x.
dy1/dx = y1 + y2, dy2/dx = 4y1 + y2.
Solution. Since the matrix of this homogeneous system is
A = (1 1; 4 1),
the characteristic equation
det(A − λI) = 0
has the roots
λ1 = 3, λ2 = −1.
Corresponding to the simple real eigenvalue λ1, we can determine a proper vector U1 ≠ 0. Indeed, we have
(−2 1; 4 −2) (u1, u2)^T = (0, 0)^T.
This gives
−2u1 + u2 = 0.
Therefore, a proper vector corresponding to the first eigenvalue is
U1 = (1, 2)^T.
The second eigenvector U2 ≠ 0, for the second eigenvalue λ2, is determined from
(2 1; 4 2) (v1, v2)^T = (0, 0)^T,
which, upon solving, gives
U2 = (1, −2)^T.
Hence, by components,
y1 = C1 e^{3x} + C2 e^{−x},
y2 = 2C1 e^{3x} − 2C2 e^{−x}.
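The eigenvalue solution can be verified directly against the system. Here the second component y2 = 2C1 e^{3x} − 2C2 e^{−x} follows from the eigenvectors U1 = (1, 2)^T and U2 = (1, −2)^T:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y1 = C1*sp.exp(3*x) + C2*sp.exp(-x)
y2 = 2*C1*sp.exp(3*x) - 2*C2*sp.exp(-x)

# residuals of the system y1' = y1 + y2, y2' = 4*y1 + y2
r1 = sp.simplify(sp.diff(y1, x) - (y1 + y2))
r2 = sp.simplify(sp.diff(y2, x) - (4*y1 + y2))
```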
Solution. The characteristic equation det(A − λI) = 0 has the complex conjugate roots λ = ±i. For the complex eigenvalue λ = i, we can determine a proper vector U ∈ C², U ≠ 0. Indeed, we have
(−i −1; 1 −i) (u1, u2)^T = (0, 0)^T.
It follows that Re(e^{ix} U) and Im(e^{ix} U) are linearly independent solutions of our system. The general solution is then given by
Y = C1 (cos x, sin x)^T + C2 (sin x, −cos x)^T, C1, C2 ∈ R.
Hence, by components,
y1 = C1 cos x + C2 sin x,
y2 = C1 sin x − C2 cos x.
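As a check, the matrix A = (0 −1; 1 0) recovered from the displayed matrix A − iI (an inference worth flagging) gives the system y1′ = −y2, y2′ = y1, which the components satisfy:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y1 = C1*sp.cos(x) + C2*sp.sin(x)
y2 = C1*sp.sin(x) - C2*sp.cos(x)

# residuals of the system y1' = -y2, y2' = y1 (A inferred from A - i*I)
r1 = sp.simplify(sp.diff(y1, x) + y2)
r2 = sp.simplify(sp.diff(y2, x) - y1)
```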
and
mj = m(αj + iβj) if αj + iβj is an eigenvalue of A, and mj = 0 otherwise. (4.44)
where the real constants a, b, c, d will be determined using the method of undetermined coefficients. By doing this, one easily gets a = −2, b = 0, c = 1, d = 3. Hence, the general solution of our nonhomogeneous system is
y1 = C1 cos x + C2 sin x − 2x,
y2 = C1 sin x − C2 cos x + x + 3.
y1 = C1 e^{−3x} + C2 e^{−x},
using the superposition principle, we shall look for the general solution of our non-
homogeneous system as being of the form
Since finding a particular solution for the nonhomogeneous system (4.15) is not always easy, in order to find its general solution we can use the method of variation of parameters. So, if we know the general solution of system (4.17),
where {Y1, . . . , Yn} is a fundamental system of solutions, then we shall look for the general solution of system (4.15) as being of the form Y(x) = C1(x) Y1(x) + · · · + Cn(x) Yn(x). We get
C1′(x) y11 + · · · + Cn′(x) y1n = f1(x),
C1′(x) y21 + · · · + Cn′(x) y2n = f2(x),
. . .
C1′(x) yn1 + · · · + Cn′(x) ynn = fn(x), (4.47)
where yki denotes the k-th component of Yi.
y2 = C1 + C2 e−2x ,
where C1 , C2 ∈ R. Using the method of variation of parameters and looking for the
general solution of our system as being of the form
y1 = C1(x) − C2(x) e^{−2x},
y2 = C1(x) + C2(x) e^{−2x},
Problems on Chapter 4
In a similar manner, we get the second eigenvector U2 ≠ 0, for the second eigenvalue λ2, and the third eigenvector U3 ≠ 0, for the third eigenvalue λ3:
U2 = (1, 1, 1)^T, U3 = (1, −2, 1)^T.
Y2 = Re(eλ2 x U2 )
and
Y3 = Im(eλ2 x U2 ),
dy2/dx = y1 − 2y2.
Hence, by components,
y1 = 4C1 e^{2x} + C2 e^{−x},
y2 = C1 e^{2x} + C2 e^{−x}.
λ = −6 ± i.
Hence, by components,
y1 = e−6x (C1 cos x + C2 sin x),
For the double real eigenvalue λ2,3, looking for fundamental solutions of the form
Y(x) = (P0 + P1 x) e^{−x}, P0, P1 ∈ R³,
we get
Y2 = (1, −1, 1)^T e^{−x}, Y3 = (1, 0, −1)^T e^{−x}.
Let us note that the dimension of the space of the eigenvectors corresponding to the double eigenvalue λ2,3 is 2 and this justifies the special form of the above fundamental solutions Y2 and Y3. The general solution is then given by
Y = C1 (1, 1, 1)^T e^{2x} + C2 (1, −1, 1)^T e^{−x} + C3 (1, 0, −1)^T e^{−x}, C1, C2, C3 ∈ R.
Hence, by components,
y1 = C1 e^{2x} + (C2 + C3) e^{−x},
y2 = C1 e^{2x} − C2 e^{−x},
y3 = C1 e^{2x} + (C2 − C3) e^{−x}.
λ1 = 2, λ2,3 = 4.
It is not difficult to see that, corresponding to the simple real eigenvalue λ1, a proper vector U1 ≠ 0 is
U1 = (1, 1, 1)^T.
Hence, the first fundamental solution of our system is
Y1 = (1, 1, 1)^T e^{2x}.
For the double real eigenvalue λ2,3, looking for fundamental solutions of the form
Y(x) = (P0 + P1 x) e^{4x}, P0, P1 ∈ R³,
we get
Y2 = (1 + 3x, 1 − 3x, 3x)^T e^{4x}, Y3 = (2 + 3x, −3x, 1 + 3x)^T e^{4x}.
Let us note that the dimension of the space of the eigenvectors corresponding to the double eigenvalue λ2,3 is 1 and this justifies the special form of the above fundamental solutions Y2 and Y3. The general solution is then given by
Y = C1 (1, 1, 1)^T e^{2x} + C2 (1 + 3x, 1 − 3x, 3x)^T e^{4x} + C3 (2 + 3x, −3x, 1 + 3x)^T e^{4x}, C1, C2, C3 ∈ R.
Hence, by components,
y1 = C1 e^{2x} + C2 (1 + 3x) e^{4x} + C3 (2 + 3x) e^{4x},
y2 = C1 e^{2x} + C2 (1 − 3x) e^{4x} − 3C3 x e^{4x},
y3 = C1 e^{2x} + 3C2 x e^{4x} + C3 (1 + 3x) e^{4x}.
λ1,2 = −2.
For this double real eigenvalue λ1,2, looking for fundamental solutions of the form
Y(x) = (P0 + P1 x) e^{−2x}, P0, P1 ∈ R²,
we get
Y1 = (1 + x, −x)^T e^{−2x}, Y2 = (x, 1 − x)^T e^{−2x}.
Let us point out that the dimension of the space of the eigenvectors corresponding to the double eigenvalue λ1,2 is 1 and this justifies the special form of the above fundamental solutions Y1 and Y2. The general solution is given by
Y = C1 (1 + x, −x)^T e^{−2x} + C2 (x, 1 − x)^T e^{−2x}, C1, C2 ∈ R.
Hence, by components,
y1 = C1 (1 + x) e^{−2x} + C2 x e^{−2x},
y2 = −C1 x e^{−2x} + C2 (1 − x) e^{−2x}.
dy2/dx = −9y1.
Solution. It is not difficult to see that, using the elimination method, the general
solution is given by
y1 = C1 e3x + C2 e−3x ,
dy2/dx = y1 − y2.
Solution. The matrix of this homogeneous system is
A = (1 −2; 1 −1)
and, therefore, the characteristic equation
det(A − λI) = 0
are linearly independent solutions of our system. The general solution is then given by
Y = C1 (cos x, (sin x + cos x)/2)^T + C2 (sin x, (sin x − cos x)/2)^T, C1, C2 ∈ R.
Hence, by components,
y1 = C1 cos x + C2 sin x,
y2 = C1 (sin x + cos x)/2 + C2 (sin x − cos x)/2.
[2] V.I. Arnold, Ordinary Differential Equations, Editura Ştiinţifică şi Enciclo-
pedică, Bucharest, 1978 (in Romanian).
[3] W.E. Boyce, R.C. DiPrima, Elementary Differential Equations, Wiley, New
York, 1986.
[7] A.B. Dickinson, Differential Equations: Theory and Use in Time and Motion, Addison-Wesley, Reading, Mass., 1972.
[9] Şt. Mirică, Differential and Integral Equations, Vol. I-III, Editura Univer-
sităţii, Bucharest, 1999-2002 (in Romanian).
[13] W. Rudin, Real and Complex Analysis, Editura Theta, Bucharest, 1999 (in
Romanian).