
ORDINARY DIFFERENTIAL EQUATIONS

PROBLEMS

CLAUDIA TIMOFTE

To my students
Preface

The present book is a collection of problems in ordinary differential equations.


The book is based on some lectures I delivered for a number of years at the Faculty
of Physics of the University of Bucharest and covers the curriculum on ordinary
differential equations for the students of the first year of this faculty.
The material follows the textbook [15]. Each chapter contains a brief review of
the corresponding theoretical results, worked out examples and proposed problems.
Since the "learning-by-doing" method is a successful one, the student is encouraged
to solve as many exercises as possible. The basic prerequisites for studying ordinary
differential equations using this book are undergraduate courses in linear algebra
and one-variable calculus.
It is my hope that this book will serve as a useful guide for the students of the
first year of the Faculty of Physics of the University of Bucharest.
I would like to thank my students for their continuous questions, comments
and suggestions, which helped me to improve the content of these notes.

Claudia Timofte

Contents

1 Introduction

2 First-Order Differential Equations
2.1 Separable Equations
2.2 First-Order Homogeneous Differential Equations
2.3 Linear Equations of the First Order
2.4 Bernoulli’s Equation
2.5 Riccati’s Equation
2.6 Exact Differential Equations
2.7 Elementary Types of Implicit Differential Equations

3 Higher-Order Differential Equations
3.1 Nonlinear Differential Equation of Order n
3.2 Higher-Order Linear Differential Equations

4 Systems of Linear Differential Equations of the First Order

Bibliography

Chapter 1

Introduction

"How can it be that mathematics, being after all a
product of human thought independent of experience,
is so admirably adapted to the objects of reality?"
(Albert Einstein)

The theory of ordinary differential equations represents a major tool for modeling
and investigating important physical phenomena. The demands on the mathemati-
cal training of a modern physicist are constantly increasing. In this problem book,
we intend to provide the reader with some basic concepts and results about the
elementary theory of ordinary differential equations. Many areas of modern physics
lead naturally to the study of such equations. The material contained in this book
is based on the textbook [15]. For detailed proofs, more advanced topics and more
exercises, we refer to [2], [3], [6], [7], [?], [9], [10], [11], [14], [15], and [17].
When dealing with physical phenomena, we are often unable to find directly
the laws relating the quantities that characterize a given phenomenon. However, a
relationship between these quantities and certain derivatives of them can be easily
obtained. In this way, we are led to equations containing the unknown functions
under the sign of the derivative or the differential. Roughly speaking, such equations
in which the unknown function appears under the sign of the derivative or of the
differential are called differential equations. If in a differential equation the unknown


function depends only on one variable, the differential equation is called ordinary.
If the unknown function depends on two or more independent variables, then we
get a so-called partial differential equation. We shall deal here only with ordinary
differential equations and, so, the term ordinary will be often omitted. In the second
volume of these notes, we shall address the case of partial differential equations.
The order of a differential equation is the highest order of the derivative (or
differential) of the unknown function involved in the equation. Roughly speaking,
a solution of a differential equation is a function which, when substituted into the
equation, makes it an identity. The procedure of finding the solutions of a differen-
tial equation is called integration of the differential equation. For simple differential
equations, it is, sometimes, possible to obtain an exact solution, but in more com-
plicated cases it is often necessary to apply approximate methods.
Let I ⊆ R be an open interval and G ⊆ Rn , n ≥ 1, be a domain.

Definition 1.1 Let us consider a function f : I × G → Rn . The general form of an


ordinary first-order differential equation (solved for the derivative) is as follows:
dy/dx = f(x, y).        (1.1)

We shall also use the notation y′ = f(x, y).

In this case, we shall say that the function f defines an explicit differential equation
on I × G. Also, f will be called the right-hand side of the differential equation (1.1).
The variable x is called the independent variable, while y is the dependent variable
or the unknown function.
A differential equation not depending on x is called autonomous, and one with
no terms depending only on x is called homogeneous.

Definition 1.2 A function ϕ : I1 ⊆ I → G is called a solution of the differential


equation (1.1) if it is differentiable and satisfies the relationship

dϕ/dx (x) = f(x, ϕ(x)), ∀x ∈ I1.
If fi are the components of the vector field f in the canonical basis of Rn , then the
differential equation (1.1) can be written as
dyi/dx = fi(x, y1, . . . , yn), i = 1, 2, . . . , n.        (1.2)

Hence, we get a system of differential equations of dimension n on R. A solution of


such a system is a differentiable map, ϕ(·) = (ϕ1(·), . . . , ϕn(·)) : I1 ⊆ I → G, verifying
the identities

dϕi/dx (x) = fi(x, ϕ1(x), . . . , ϕn(x)), ∀x ∈ I1, i = 1, . . . , n.
Any solution of the equation (1.1) is also called an integral curve of the differential
equation or of the field f . Geometrically, the general solution of the equation (1.1)
represents a family of integral curves.

Remark 1.3 In many physical problems, the independent variable x represents the
time, so it will be convenient to denote it by t.

Under quite general conditions, we can prove local existence results for a solution
of the differential equation (1.1). Moreover, for a given point (x0 , y0 ) ∈ I × G, we
shall be interested in finding a solution ϕ : I1 ⊆ I → G of the differential equation
(1.1) such that x0 ∈ I1 and
ϕ(x0 ) = y0 . (1.3)

Such a problem, in which we are looking for the solution of a differential equation if
the value of the unknown function at some point is known, is called an initial-value
problem or a Cauchy problem.
In general, a differential equation has an infinite number of solutions. So, we
have to impose further conditions to individualize them. Unfortunately, very often
it is difficult or quite impossible to find explicitly or even implicitly the solutions of
a differential equation.
Finally, let us mention that the class of equations that are integrable by quadra-
tures is extremely narrow. But even for such equations, very often it is impossible
to find explicitly the solutions. Hence, we shall usually obtain our solutions in an
implicit form, i.e. we shall get a function Ψ such that the solution ϕ : I1 → G
verifies the identity
Ψ(x, ϕ(x)) ≡ 0, x ∈ I1 . (1.4)

As we shall see, the general solution of equation (1.1) will be of the form Φ(x, y, C) =
0, where C is an arbitrary constant in J ⊆ Rn . By assigning specific values to the
arbitrary constant C in the general solution, we get the so-called particular solutions.

Also, there are cases in which the solution can be given only in a parametric
form, i.e.

x = α(p, C),
y = β(p, C),        (1.5)

for p ∈ J0 ⊆ R. If we are able to eliminate the parameter p between the equations


(1.5), then we can get the explicit form of our solution.

Remark 1.4 In the elementary theory of ordinary differential equations, to enlarge


the class of equations that are integrable by quadratures, we can also use proper
changes of variables.

Remark 1.5 It is useful to notice here that if f(·, ·) : D ⊆ R² → R∗ is continuous,
then ϕ(·) : I ⊆ R → R∗ is a solution of the equation (1.1) in R if and only if ϕ(·)
is a diffeomorphism (strictly monotone) and its inverse ϕ⁻¹(·) is a solution of the
equation

dx/dy = 1/f(x, y).

Let us consider now the differential equation (1.1) for f (·, ·) : D ⊆ R2 → R.


If in the neighbourhood of the initial point (x0 , y0 ) the conditions of the existence
and uniqueness theorem are fulfilled, then there is only one integral curve passing
through this point. If the conditions of the existence and uniqueness theorem are
not fulfilled, various situations can appear. Through the point (x0 , y0 ) may pass
one integral curve, several curves, an infinite number of integral curves or there is
no integral curve that passes through this point. The points at which at least one
of the conditions of this uniqueness result is violated are called singular points. A
curve that consists entirely of singular points is called singular. If the graph of
a certain solution consists entirely of singular points, then this solution is called
singular. Notice that not every point at which the conditions of the uniqueness
result are violated is a singular one, since these conditions are only sufficient, but
not necessary.
A solution of the differential equation (1.1) is singular if the corresponding inte-
gral curve has the following property: through any of its points, there passes another
integral curve of the given equation which is tangent to the first curve. In fact, the
graph of a singular solution is just the envelope of the family of curves represent-
ing the general solution. More precisely, let us suppose that we know the general

solution
Φ(x, y, C) = 0
of the equation (1.1). Eliminating C from this equation and from the equation

∂Φ/∂C (x, y, C) = 0,
we get φ(x, y) = 0. If this function satisfies our original differential equation, but
it doesn’t belong to the family Φ(x, y, C) = 0, this function will be the so-called
singular solution. The uniqueness condition is violated at each point of such a
singular integral and this singular solution consists only of singular points.

Example 1.6 For the differential equation


dy/dx = √(1 − y²)/y,

the general solution is (x + C)² + y² = 1, for C ∈ R, and the singular solutions are
y = ±1.
Thus, the family of integral curves consists of circles of radius 1 centered on the Ox-
axis and the envelope of this family of circles consists exactly of the straight lines
y = ±1.

Let us note that the points on the boundary of the domain of existence of solution
are also called singular points. In contrast, any point belonging to the interior of
the domain of existence through which a single integral curve passes is called an
ordinary point.

Remark 1.7 Let us notice that singular solutions cannot be obtained from the gen-
eral solution by giving particular admissible values to the constants.

As we shall see in what follows, for modeling many important physical phenom-
ena, we shall need to use higher-order differential equations. The general form of an
nth order differential equation is the following one:

F(x, y, y′, y′′, . . . , y^(n)) = 0,        (1.6)

where F is a given function of (n+2) variables, usually satisfying certain conditions of


continuity and differentiability. As we shall see, the general solution of the equation
(1.6) depends on n arbitrary constants, i.e.

y(x) = ϕ(x, C1 , C2 , . . . , Cn ), Ci ∈ R, i = 1, 2, . . . , n, (1.7)



where the function ϕ is supposed to have, on some interval (a, b), continuous
derivatives up to the order n and to satisfy the equation, i.e.

F(x, ϕ(x), ϕ′(x), ϕ′′(x), . . . , ϕ^(n)(x)) = 0, x ∈ (a, b).        (1.8)

The arbitrary constants Ci must be independent, i.e. their number cannot be re-
duced by introducing other arbitrary constants depending continuously on the given
ones.
We remark that very often the general solution of the equation (1.6) can be given
only implicitly, i.e. in the form

Φ(x, y, C1 , C2 , . . . , Cn ) = 0, Ci ∈ R, i = 1, 2, . . . , n. (1.9)

In this case, this solution is called the general integral of equation (1.6). Any solution
obtained from the general solution by giving particular admissible values to the
constants Ci is called a particular solution.
Chapter 2

First-Order Differential Equations

Let I ⊆ R be an open interval and G ⊆ Rn , n ≥ 1, be a domain.

Definition 2.1 Let us consider a function f : I × G → Rn . The general form of an


ordinary first-order differential equation is the following one:
dy
= f (x, y). (2.1)
dx
In this case, we shall say that the function f defines an explicit differential equation
on I × G. Also, f will often be called the right-hand side of the differential equation
(2.1). The variable x is called the independent variable, while y is the dependent
variable or the unknown function.

Definition 2.2 A function ϕ : I1 ⊆ I → G is called a solution of the differential


equation (2.1) if it is differentiable and verifies the relationship:

dϕ/dx (x) = f(x, ϕ(x)), ∀x ∈ I1.
Definition 2.3 Let f : I × G → Rn be a function defining the differential equation (2.1). Any
solution of this equation is called an integral curve of the differential equation (2.1)
or of the field f .

Geometrically, the general solution of the equation (2.1), y = ϕ(x, C), where C is
an arbitrary constant, represents a family of integral curves, i.e. a set of curves
corresponding to different values of the constant C.


Proposition 2.4 Let f : I × G → Rn be a continuous function and let us consider


the Cauchy problem:

dy/dx = f(x, y),
y(x0) = y0, (x0, y0) ∈ I × G.        (2.2)

Also, let I0 ⊆ I be a neighbourhood of x0 . Then, the function ϕ : I0 ⊆ I → G is a


solution of the Cauchy problem (2.2) if and only if ϕ is continuous and satisfies the
relationship:

ϕ(x) = y0 + ∫_{x0}^{x} f(s, ϕ(s)) ds, ∀x ∈ I0,        (2.3)
called the integral equation associated to problem (2.2).

Under quite reasonable conditions, one can prove that for any point (x0 , y0 ) ∈
I × G, there exists a solution ϕ : I1 ⊆ I → G of the differential equation (2.1) such
that x0 ∈ I1 and ϕ(x0 ) = y0 . Hence, we shall also address the question of finding a
solution for the following problem:

dy/dx = f(x, y),
y(x0) = y0.        (2.4)

Such a problem, in which we look for the solution of a differential equation if the
value of the unknown function at some point is known, is called an initial-value
problem or a Cauchy problem. Geometrically, Cauchy’s problem can be formulated
as follows: find the integral curve of the differential equation (2.1) passing through
a given point P0 (x0 , y0 ).
In general, we cannot solve equations of this kind for an arbitrary right-hand side.
However, there are many forms of f for which we can solve the first-order differential
equation (2.1) explicitly. In what follows, we shall consider various forms of f (x, y)
for which we are able to generate a solution to equation (2.1). For other forms of
the right-hand side f , see [9] and [15].
We focus now on the case n = 1 and we prove that under mild conditions imposed
on the right-hand side of a differential equation we can ensure the local existence
and uniqueness of its solution.
For a given point (x0 , y0 ), let us consider the rectangle

D = {(x, y) ∈ R2 | x0 − a ≤ x ≤ x0 + a, y0 − b ≤ y ≤ y0 + b}. (2.5)



Theorem 2.5 Let f : D → R be a continuous function on D, which satisfies, in


D, a Lipschitz condition with respect to its second argument, i.e. there exists L > 0
such that for any (x, y1 ), (x, y2 ) ∈ D we have

|f (x, y1 ) − f (x, y2 )| ≤ L |y1 − y2 |. (2.6)

Then, for the equation


dy/dx = f(x, y),        (2.7)
there exists a unique solution y = y(x), defined for x0 − H ≤ x ≤ x0 + H, that
satisfies the condition y(x0 ) = y0 . Here,
 
H < min{ a, b/M, 1/L },  with M = max_D |f(x, y)|.

The unique solution of the above problem can be determined as the limit of a uni-
formly convergent sequence of functions, called the sequence of successive approxi-
mations. This sequence is defined by the following recurrence formula:

y0(x) = y0,
yn(x) = y0 + ∫_{x0}^{x} f(t, yn−1(t)) dt, n ≥ 1.        (2.8)
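The successive-approximations scheme is easy to experiment with numerically. The sketch below is our own illustration, not part of the text: it applies the recurrence to dy/dx = y, y(0) = 1, whose exact solution is e^x, evaluating each integral with the composite trapezoidal rule on a uniform grid (the grid size and the number of iterations are arbitrary choices).

```python
# Sketch of the successive-approximations scheme for dy/dx = y, y(0) = 1,
# whose exact solution is e^x.  Each integral is approximated with the
# composite trapezoidal rule on a fixed grid.

import math

def picard(f, x0, y0, x_grid, n_iter):
    """Return the n_iter-th successive approximation, sampled on x_grid."""
    y = [y0 for _ in x_grid]               # y_0(x) = y0
    for _ in range(n_iter):
        y_new = [y0]
        acc = 0.0
        for i in range(1, len(x_grid)):
            h = x_grid[i] - x_grid[i - 1]
            # trapezoidal step for the integral of f(t, y_{n-1}(t))
            acc += 0.5 * h * (f(x_grid[i - 1], y[i - 1]) + f(x_grid[i], y[i]))
            y_new.append(y0 + acc)
        y = y_new
    return y

xs = [i / 1000 for i in range(1001)]       # uniform grid on [0, 1]
approx = picard(lambda x, y: y, 0.0, 1.0, xs, 20)
print(abs(approx[-1] - math.e))            # error at x = 1 after 20 iterations
```

After one iteration the approximation is 1 + x, after two it is 1 + x + x²/2, and so on: the iterates reproduce the partial sums of the exponential series, as the theory predicts.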

Remark 2.6 Let us remark that this is a local existence theorem. In fact, we can
prove the existence of the desired solution on the interval x0 − H ≤ x ≤ x0 + H,
where H = min{ a, b/M }. Also, we notice that instead of imposing the Lipschitz
condition (2.6), we may ask for the existence and the boundedness, in absolute value,
in D, of the partial derivative ∂f/∂y (x, y).

Remark 2.7 (G. Peano) The local existence of a solution of (2.7) can be proven
by a different method if we assume only the continuity of the function f . However,
this assumption is not enough to ensure the uniqueness of the solution.

Remark 2.8 Let us suppose that the right-hand side of the equation (2.1) depends
also on a parameter λ:
dy/dx = f(x, y, λ).        (2.9)

If f is continuous with respect to λ for λ0 ≤ λ ≤ λ1 , satisfies the conditions of the


existence and uniqueness theorem (see Theorem 2.5) and if the Lipschitz constant L
is independent of λ, then the solution y = y(x, λ) of the equation (2.9) that satisfies
the initial condition y(x0 ) = y0 depends continuously on the parameter λ.

Remark 2.9 Under similar conditions, it is possible to prove the continuous depen-
dence of the solution y = y(x, x0 , y0 ) of the equation (2.4) on the initial values x0
and y0 .

Remark 2.10 Let us consider the Cauchy problem

dy/dx = f(x, y),
y(x0) = y0.        (2.10)

If in a neighbourhood of the point (x0 , y0 ) the function f has continuous derivatives


up to the order k, then, in some neighbourhood of the point (x0 , y0 ), the solution of
problem (2.10) has continuous derivatives up to the order (k + 1).

For more details concerning existence theorems, continuation of solutions, maximal


solutions, local and global continuity of solutions on parameters, etc., the interested
reader is referred to [9].
We can deal in a similar manner with the case n > 1, i.e. the case of a system of
equations

dyi/dx = fi(x, y1, y2, . . . , yn),
yi(x0) = yi0, i = 1, 2, . . . , n.        (2.11)

We can replace this Cauchy problem by the equivalent integral equations


yi(x) = yi0 + ∫_{x0}^{x} fi(s, y1(s), y2(s), . . . , yn(s)) ds, i = 1, 2, . . . , n.        (2.12)

Now, let us consider the rectangle

D = { (x, y1, y2, . . . , yn) ∈ R^{n+1} | x0 − a ≤ x ≤ x0 + a,
      yi0 − bi ≤ yi ≤ yi0 + bi, i = 1, 2, . . . , n }.

Theorem 2.11 We assume that all the functions fi (x, y1 , y2 , . . . , yn ) are continuous
on D and satisfy, in D, a Lipschitz condition with respect to all their arguments start-
ing with the second one, i.e. there exists L > 0 such that for any (x, y1 , y2 , . . . , yn )
and (x, z1, z2, . . . , zn) ∈ D we have

|fi(x, y1, y2, . . . , yn) − fi(x, z1, z2, . . . , zn)| ≤ L Σ_{k=1}^{n} |yk − zk|, i = 1, . . . , n.        (2.13)

Then, for the system (2.11), there exists a unique solution

Y (x) = (y1 (x), y2 (x), . . . , yn (x)),

defined for x0 − H ≤ x ≤ x0 + H, that satisfies the initial condition Y (x0 ) = Y0 ,


with Y0 = (y10, y20, . . . , yn0). Here,

H < min{ a, b1/M, b2/M, . . . , bn/M, 1/L },

where

M = max_D max_i |fi(x, y1, . . . , yn)|.

Remark 2.12 Let f : I × Rn → Rn . We shall suppose that f is continuous and


there exist two continuous functions a, b : I → R+ and a real number r > 0 such that

‖f(x, y)‖ ≤ a(x)‖y‖ + b(x), ∀(x, y) ∈ I × Rn, ‖y‖ > r.

Then, f possesses the property of global existence of solutions, i.e. for any (x0, y0) ∈
I × Rn, there exists a solution ϕ : I → Rn of the Cauchy problem

dy/dx = f(x, y),
y(x0) = y0.

Remark 2.13 Similar questions regarding the existence and uniqueness of solutions
arise for first-order differential equations not solved for the derivative, i.e. equations
in the implicit form:
F(x, y, y′) = 0,        (2.14)

where F : D ⊆ R³ → R, F = F(u, v, w) ∈ C¹(D).



2.1 Separable Equations


We shall focus now on the case n = 1 and we shall discuss some elementary cases of
first-order differential equations that are integrable by quadratures.
Let I1 and I2 be open intervals in R. The general form of a separable equation
is the following one:
dy/dx = f(x) g(y),        (2.15)
where f : I1 → R and g : I2 → R are continuous functions.
We start by remarking that if y1 , y2 , . . . are the solutions of the algebraic equation
g(y) = 0, then ϕi (x) ≡ yi , for x ∈ I1 , are obviously solutions of the equation (2.15),
called stationary solutions.
Let J = {y | g(y) ≠ 0}. If G(·) is a primitive of 1/g on J and F is a primitive
of f , then it is not difficult to see that a continuous function ϕ(·) : I0 ⊆ I1 → J is a
solution of equation (2.15) if and only if there exists a constant C ∈ R such that

G(ϕ(x)) = F (x) + C, x ∈ I0 . (2.16)

Hence, (2.16) gives exactly the general solution y in an implicit form.


Since G′(y) ≡ 1/g(y) ≠ 0, for any y ∈ J, the map G(·) is invertible and we get

y(x) = G−1 (F (x) + C), C ∈ R, (2.17)

which is the general solution, on J, in an explicit form.

Remark 2.14 We emphasize here that the above functions are not the only solu-
tions of equation (2.15). Indeed, it is not difficult to see that if we have two solutions
ϕ1 (·) : (a, b) → I2 and ϕ2 (·) : (b, c) → I2 such that

lim_{x→b−} ϕ1(x) = lim_{x→b+} ϕ2(x) = y0 ∈ I2,

then we also have the combined solution

         ϕ1(x), x ∈ (a, b),
ϕ(x) =   y0,    x = b,        (2.18)
         ϕ2(x), x ∈ (b, c).

It is not difficult to see that we have the following result.



Proposition 2.15 If the functions f : I1 → R and g : I2 → R are continuous, then,


for any (x0 , y0 ) ∈ I1 × I2 , there exists a neighbourhood Ix0 of x0 in I1 and there
exists ϕ(·) : Ix0 → I2 solution of equation (2.15) such that

ϕ(x0 ) = y0 . (2.19)

Moreover, for any (x0 , y0 ) ∈ I1 × J, there exists a neighbourhood Ix0 of x0 in I1 and


there exists a unique ϕ(·) : Ix0 → I2 solution of equation (2.15) verifying (2.19).

Remark 2.16 It might happen that at the points (x0, y0) ∈ I1 × (I2 \ J) the solutions
of equation (2.15) verifying the initial condition (2.19) are not unique.

To summarize, we can write down an algorithm for solving a separable equation.


Algorithm for integrating separable equations. Consider the separable equation (2.15).
1) If y1 , y2 , . . . are the solutions of the equation g(y) = 0, then ϕi (x) ≡ yi , for
x ∈ I1 , are solutions of the equation (2.15), called stationary solutions.
2) Let J = {y | g(y) ≠ 0}. Separating the variables and integrating, we get:

∫ dy/g(y) = ∫ f(x) dx + C,
where C is an arbitrary real constant. If we denote by G a primitive of 1/g and by
F a primitive of f , then G(y) = F (x) + C, which gives exactly the general solution
y in an implicit form. When it is possible, we get the general solution in an explicit
form, y = ψ(x, C).
Hence, the complete integral of the equation (2.15) is:

y = ψ(x, C),
yi(x) = yi,

together with combined solutions of the form (2.18). We remark that if g(y) ≠ 0
for all y ∈ I2, then we do not have stationary solutions for the separable equation
(2.15).

Example 2.17 (Population Dynamics) Let us consider a simple mathematical model


governing the population dynamics of a certain species, commonly called the expo-
nential model. In such a model, the rate of change of the population is proportional

to its existing size. So, if P (t) denotes the population, we have


dP/dt = kP,
where the rate k is supposed to be constant. It is easy to see that if k > 0, we have
growth, and if k < 0, we have decay.
If we impose the initial condition P (0) = P0 , we get P (t) = P0 ekt . Thus, for
k > 0, the population grows and continues to expand to infinity, the rate of growth
being determined by the parameter k. For k < 0, the population will tend to 0, and,
so, we are facing extinction.
A more elaborate model was proposed in 1838 by the Belgian mathematician
Verhulst, to remedy this flaw in the exponential model. This model, called the
logistic model or the Verhulst-Pearl model, is given by
 
dP/dt = kP (1 − P/L),
where L is a limiting size for the population (the carrying capacity). When P is
small compared to L, this equation reduces to the exponential one. Obviously, we
have the stationary solutions P = 0 and P = L. For P ≠ 0 and P ≠ L, separating
the variables and integrating, we obtain

P(t) = L C e^{kt} / (L + C e^{kt}), C ∈ R∗.
If we consider the initial condition P(0) = P0, where P0 ≠ 0 and P0 ≠ L, we get

P(t) = L P0 / (P0 + (L − P0) e^{−kt}).
Thus, lim_{t→∞} P(t) = L. We remark that this model is still not quite satisfactory, since
it does not allow a population to face extinction. Indeed, even starting with a small
population, this will always tend to the carrying capacity L.
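The formula for P(t) can be checked directly. The sketch below (our illustration; the values of k, L and P0 are arbitrary choices, not from the text) verifies that the logistic solution satisfies the equation and the initial condition, and approaches the carrying capacity L for large t:

```python
# Numerical check of the logistic solution
# P(t) = L*P0 / (P0 + (L - P0)*exp(-k*t)):
# it should satisfy dP/dt = k*P*(1 - P/L), P(0) = P0, and tend to L.

import math

def P(t, k=0.5, L=100.0, P0=5.0):
    return L * P0 / (P0 + (L - P0) * math.exp(-k * t))

def residual(t, k=0.5, L=100.0, P0=5.0, h=1e-6):
    """dP/dt - k*P*(1 - P/L), with dP/dt from a central difference."""
    dPdt = (P(t + h, k, L, P0) - P(t - h, k, L, P0)) / (2 * h)
    p = P(t, k, L, P0)
    return dPdt - k * p * (1 - p / L)

print(P(0.0))              # the initial value P0
print(abs(residual(2.0)))  # ~0: the ODE is satisfied
print(P(50.0))             # close to the carrying capacity L
```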

Example 2.18 (Newton’s Law of Cooling) Experimental facts show that, up to a


quite satisfactory approximation, the surface temperature of an object changes at a
rate which is proportional to its relative temperature, i.e. proportional to the differ-
ence between its temperature and the temperature of the surrounding environment.
This is the so-called Newton’s law of cooling. Thus, if T is the temperature of the
object at time t, we get
dT/dt = −k (T − S), k > 0,

where S is the temperature of the surrounding environment. This is a first order


separable differential equation. The solution, satisfying the initial condition T (0) =
T0 , is given by
T(t) = S + (T0 − S) e^{−kt}.

Thus,

(T(t1) − S)/(T(t2) − S) = e^{−k(t1 − t2)},

which leads to

k(t1 − t2) = − ln[ (T(t1) − S)/(T(t2) − S) ].
Hence, one can determine the constant k if the time interval t1 − t2 is known (and
vice-versa, as well).
A practical application of such a model is the determination of the so-called time
of death. Let us suppose that a human body was discovered in a room at 10:00 p.m.
and its temperature was 27°C. The temperature of the room is kept constant at
15°C. Two hours later, the temperature of the body dropped to 24°C. Let us find
the time of death. First, let us determine the constant k. We have

k = −(1/2) ln( (24 − 15)/(27 − 15) ) ≈ 0.14.

In order to get the time of death we need to remember that the temperature of a
normal person (not sick!) at the time of death is 36.6°C. Then, we get

td = −(1/k) ln( (36.6 − 15)/(27 − 15) ) ≈ −4.2 hours,

which means that the death happened around 5:48 p.m.
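The same computation can be reproduced in a few lines of code (our sketch, measuring time from the moment of discovery). Note that the text rounds k to 0.14 before computing td, which gives about −4.2 hours; keeping the unrounded k gives about −4.09 hours, i.e. a time of death closer to 5:55 p.m.

```python
# Newton's law of cooling: estimate k from two readings, then the time
# of death, with t = 0 taken at the moment of discovery (10:00 p.m.).

import math

S = 15.0         # room temperature (deg C)
T_found = 27.0   # body temperature at discovery
T_later = 24.0   # temperature two hours later
T_death = 36.6   # normal body temperature at the time of death

k = -0.5 * math.log((T_later - S) / (T_found - S))
td = -math.log((T_death - S) / (T_found - S)) / k

print(round(k, 2))   # about 0.14
print(round(td, 2))  # about -4.09 hours with the unrounded k
```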

Exercise 2.19 Solve the following separable equation:

dy/dx = y/x, x ∈ I ⊆ R∗+.

Solution. Obviously, y = 0 is a stationary solution of our equation. For y ≠ 0,


separating the variables and integrating, we get

y(x) = Cx, C ∈ R∗ .

Exercise 2.20 Solve the following separable equation:


dy/dx = y cot x, x ∈ (0, π/2).
Solution. It is easy to see that y = 0 is a stationary solution of our equation. For
y ≠ 0, separating the variables and integrating, we get

y(x) = C sin x, C ∈ R∗.

Exercise 2.21 Solve the following equation:


x²y² (dy/dx) + 1 = y, x ∈ I ⊆ R∗+.
Solution. The equation can be written in the normal form:
dy/dx = (y − 1)/(x²y²).
Obviously, y = 1 is a stationary solution of our equation. For y ≠ 1, separating the
variables and integrating, we get the general solution of our equation in an implicit
form:
y²/2 + y + ln|y − 1| = −1/x + C, C ∈ R.
Exercise 2.22 Consider the differential equation:
dy/dx = (1 + y²) sin x / (2y).

a) Find its general solution.


b) Find a particular solution which satisfies the initial condition y(0) = 1 and
identify its interval of existence.

Solution. a) Separating the variables and integrating, we get

ln(1 + y²) = − cos x + C, C ∈ R.

Then
y² = C e^{− cos x} − 1.
This means that the general solution can be the positive or negative square root of
the right-hand side.

b) Matching up the initial condition gives us


y(x) = √( 2 e^{1 − cos x} − 1 ).

The interval of existence is the whole real line, because the expression under the
square root sign is never negative (the minimum under the square root sign is 1).
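The particular solution found in part b) is easy to double-check numerically (a sketch of ours; the test point x = 1.3 is arbitrary):

```python
# Check of the particular solution y(x) = sqrt(2*exp(1 - cos x) - 1):
# it should satisfy y(0) = 1 and dy/dx = (1 + y**2)*sin(x)/(2*y).

import math

def y(x):
    return math.sqrt(2.0 * math.exp(1.0 - math.cos(x)) - 1.0)

def residual(x, h=1e-6):
    """dy/dx - (1 + y^2) sin(x)/(2y), dy/dx from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (1 + y(x) ** 2) * math.sin(x) / (2 * y(x))

print(y(0.0))              # the initial condition: 1.0
print(abs(residual(1.3)))  # ~0: the equation is satisfied
```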

2.2 First-Order Homogeneous Differential Equations


Let f : I ⊆ R → R be a continuous function. The general form of a homogeneous
differential equation is the following one:
dy/dx = f(y/x).        (2.20)
Notice that (2.20) is defined by a homogeneous function of degree zero. The homo-
geneous equation (2.20) can be transformed into a separable equation by performing
the change of variables u = y/x. Therefore, making use of the chain rule, equation
(2.20) becomes, for x in an interval J ⊆ R \ {0},

du/dx = (f(u) − u)/x,        (2.21)
which is a separable equation.
Using the algorithm for solving separable equations, we get the general solution
u = ψ(x, C), where C is an arbitrary real constant and the stationary solutions
ui (x) ≡ ui , with ui solutions of the algebraic equation f (u) = u.
Hence, the general integral of (2.20) is:

u(x) = ψ(x, C),
ui(x) = ui,

and, consequently, the complete integral of the equation (2.20) is:

y(x) = x ψ(x, C),
yi(x) = x ui.

Note that if f(u) ≠ u for all u, we do not have singular solutions.


To summarize, let us write down an algorithm for solving homogeneous equa-
tions.

Algorithm for integrating homogeneous equations. Consider the equation


(2.20).
1) Performing the change of variables u = y/x, equation (2.20) becomes
du/dx = (f(u) − u)/x,        (2.22)
which is a separable equation.
2) We solve the separable equation (2.22) and we get the general solution u =
ψ(x, C), where C is an arbitrary real constant, and the stationary solutions ui(x) ≡
ui, where ui are solutions of the algebraic equation f(u) = u. Thus, the complete
integral of (2.22) is:

u(x) = ψ(x, C),
ui(x) = ui.

3) The complete integral of the equation (2.20) is then:

y(x) = x ψ(x, C),
yi(x) = x ui.

Exercise 2.23 Find the general solution of the following equation:


dy/dx = (x + y)/(y − x).
Solution. Performing the change of variables u = y/x, we get
x (u − 1) du/dx = 1 + 2u − u².

Obviously, u = 1 + √2 and u = 1 − √2 are stationary solutions. Hence, y = x(1 + √2)
and y = x(1 − √2) will be solutions of our initial equation. For u ≠ 1 + √2 and
u ≠ 1 − √2, separating the variables and integrating, we get x²(1 + 2u − u²) = C,
with C ∈ R∗ . Hence, the general solution of our initial equation, given implicitly, is

x² + 2xy − y² = C, C ∈ R∗.
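The implicit solution x² + 2xy − y² = C can be verified in code. Solving the quadratic for y gives the explicit branch y = x + √(2x² − C) (our choice of branch and of C; the other branch works the same way), which should satisfy the equation wherever y ≠ x:

```python
# Check of the implicit general solution x**2 + 2*x*y - y**2 = C via the
# explicit branch y = x + sqrt(2*x**2 - C).

import math

C = 1.0

def y(x):
    return x + math.sqrt(2.0 * x * x - C)

def residual(x, h=1e-6):
    """dy/dx - (x + y)/(y - x), dy/dx from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (x + y(x)) / (y(x) - x)

def invariant(x):
    """x^2 + 2xy - y^2, which should stay equal to C along the curve."""
    return x * x + 2 * x * y(x) - y(x) ** 2

print(abs(residual(2.0)))              # ~0: the ODE is satisfied
print(invariant(1.0), invariant(3.0))  # both equal to C
```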

Exercise 2.24 Find the solution of the following Cauchy problem:

dy/dx = (x² + y²)/(xy), x ∈ (0, ∞),

y(1) = 0.


Solution. Performing the change of variables


z(x) = y(x)/x,

our equation becomes

dz/dx = 1/(xz),

which has the general solution

z²(x) = 2 ln x + C.

Therefore,

y²(x) = 2x² ln x + C x².

Taking into account the initial condition y(1) = 0, we get C = 0. Hence, the solution
of our Cauchy problem is:

y²(x) = 2x² ln x.
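A numerical check of this result (our sketch): taking the non-negative branch y = x√(2 ln x), defined for x ≥ 1, we verify the initial condition and the equation at a point x > 1 (at x = 1 the right-hand side is singular, since y = 0 there).

```python
# Verification of the Cauchy-problem solution y**2 = 2*x**2*log(x),
# via the non-negative branch y = x*sqrt(2*log(x)).

import math

def y(x):
    return x * math.sqrt(2.0 * math.log(x))

def residual(x, h=1e-6):
    """dy/dx - (x^2 + y^2)/(x*y), dy/dx from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (x * x + y(x) ** 2) / (x * y(x))

print(y(1.0))              # the initial condition: 0.0
print(abs(residual(2.0)))  # ~0: the equation is satisfied
```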
Exercise 2.25 Solve
dy/dx = (y² + 2xy)/x², x ∈ (0, ∞).
Solution. We notice that this equation can be written as
dy/dx = y²/x² + 2(y/x).
Thus, it is in homogeneous form and if we perform the change of variables
z(x) = y(x)/x,

we have

x dz/dx = z² + z.
Obviously, z = 0 and z = −1 are stationary solutions, which gives the solutions
y = 0 and y = −x. For z ≠ 0 and z ≠ −1, we get

dz/dx = (z² + z)/x.
Integrating, we find that
Cx = z/(z + 1), C ∈ R∗,

which gives

y = C x²/(1 − Cx),
where the constant C may be determined by the initial conditions.
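As before, the general solution can be sanity-checked numerically (our sketch; the constant C = 0.3 and the test point are arbitrary choices away from the singularity at x = 1/C):

```python
# Check of the general solution y = C*x**2/(1 - C*x) of
# dy/dx = (y**2 + 2*x*y)/x**2.

C = 0.3

def y(x):
    return C * x * x / (1.0 - C * x)

def residual(x, h=1e-6):
    """dy/dx - (y^2 + 2xy)/x^2, dy/dx from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (y(x) ** 2 + 2 * x * y(x)) / (x * x)

print(abs(residual(1.5)))  # ~0: the equation is satisfied
```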

2.3 Linear Equations of the First Order


A first-order linear equation is an equation that is linear in the unknown function
and in its derivative. The general form of such an equation is the following one:

    dy/dx = a(x) y + b(x),                                 (2.23)
where a, b : I ⊆ R → R are given continuous functions.
a) If b(x) ≡ 0, the equation (2.23) is called homogeneous. In this case, we get

    dy/dx = a(x) y,                                        (2.24)
which is a particular case of a separable equation. Then, (2.24) will have the sta-
tionary solution y = 0. For y ≠ 0, separating the variables and integrating, we get

    ∫ dy/y = ∫ a(x) dx + C1,

where C1 is an arbitrary real constant. Computing the integral, we obtain the
general solution of (2.24):

    y = C exp(∫ a(x) dx),

where C ∈ R*. Hence, the complete integral of (2.24) is:

    y = C exp(∫ a(x) dx), C ∈ R*,
    y = 0.

In fact, it is not difficult to see that we have the following result.

Proposition 2.26 If the function a : I → R is continuous, then, for any (x0, y0) ∈
I × R, there exists a unique solution ϕ(·) : I → R of equation (2.24) such that

    ϕ(x0) = y0.                                            (2.25)

This solution is

    ϕ(x) = y0 exp(∫_{x0}^{x} a(s) ds), x ∈ I.

b) Let us deal now with the nonhomogeneous linear equation (2.23). This equation
can be integrated by the so-called method of variation of parameters. Using this
method, which is based, as we shall see, on the existence and uniqueness theorem for
such equations, we try to satisfy the nonhomogeneous equation (2.23) by considering
C as being a derivable function of the independent variable. So, the solution of
equation (2.23) is sought to be of the form

    y(x) = C(x) exp(∫ a(x) dx).                            (2.26)

Computing the derivative of y and substituting in (2.23), we get

    C′(x) exp(∫ a(x) dx) + C(x) exp(∫ a(x) dx) a(x) = a(x) C(x) exp(∫ a(x) dx) + b(x).

Hence,

    C′(x) = b(x) exp(−∫ a(x) dx),

which, by integration, yields

    C(x) = ∫ b(x) exp(−∫ a(s) ds) dx + K,

where K ∈ R. Consequently, the general solution of (2.23) is:

    y(x) = (∫ b(x) exp(−∫ a(s) ds) dx + K) exp(∫ a(x) dx).

Moreover, it is easy to see that we have the following result:

Proposition 2.27 If a, b : I → R are continuous functions, then, for any (x0, y0) ∈
I × R, there exists a unique solution ϕ(·) : I → R of equation (2.23) such that
ϕ(x0) = y0. This solution is

    ϕ(x) = y0 exp(∫_{x0}^{x} a(s) ds) + ∫_{x0}^{x} b(s) exp(∫_{s}^{x} a(t) dt) ds, x ∈ I.

Algorithm for integrating nonhomogeneous linear equations. The method
of variation of parameters.
Consider the nonhomogeneous linear equation (2.23).
1) We associate to (2.23) the corresponding homogeneous equation:

    dy/dx = a(x) y,

with the general solution y = C exp(∫ a(x) dx).
2) Using the method of variation of parameters, we try to satisfy the nonhomoge-
neous equation (2.23) by looking for a solution of the form:

    y = C(x) exp(∫ a(x) dx).

Substituting in (2.23), we get:

    dC/dx = b(x) exp(−∫ a(x) dx)

and, integrating, we obtain:

    C(x) = ∫ b(x) exp(−∫ a(s) ds) dx + K,

where K is an arbitrary real constant. The general solution of (2.23) is:

    y(x) = (∫ b(x) exp(−∫ a(s) ds) dx + K) exp(∫ a(x) dx).
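The steps of the algorithm can be sketched in a few lines of sympy. The coefficients a(x) = 1/x and b(x) = x below are an illustrative choice of mine, not taken from the text:

```python
import sympy as sp

x, K = sp.symbols('x K')

a = 1/x   # illustrative coefficient functions (assumptions, not from the text)
b = x

A = sp.integrate(a, x)                   # ∫ a(x) dx
C = sp.integrate(b*sp.exp(-A), x) + K    # C(x) = ∫ b(x) e^(-∫a) dx + K
y = C*sp.exp(A)                          # general solution by variation of parameters

residual = sp.simplify(sp.diff(y, x) - (a*y + b))
print(residual)  # → 0
```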

Remark 2.28 The general solution of a nonhomogeneous linear equation is the


sum of the general solution of the corresponding homogeneous equation and a
particular solution of the nonhomogeneous equation.

Remark 2.29 If we know a particular solution yp (x) of the nonhomogeneous equa-


tion (2.23), then its general solution can be obtained by performing only one
quadrature and we have:

    y(x) = yp(x) + C exp(∫ a(x) dx),                       (2.27)

where C is an arbitrary real constant.

Remark 2.30 If we know two particular solutions y1 (x) and y2 (x) of the nonhomo-
geneous equation (2.23), then its general solution can be obtained without performing
any quadrature and we have:

y(x) = y1 (x) + C(y1 (x) − y2 (x)),

where C is an arbitrary real constant.

Exercise 2.31 Solve the following homogeneous linear equation:


    dy/dx = (1/3) y sin x.

Solution. Obviously, y = 0 is a stationary solution. For y ≠ 0, separating the
variables and integrating, we get the general solution

    y = C e^(−(1/3) cos x), C ∈ R*.

Exercise 2.32 Integrate the homogeneous linear equation:

    dy/dx = 2xy/(x² − 1), x ∈ (1, ∞).

Solution. Obviously, y = 0 is a stationary solution. For y ≠ 0, separating the
variables and integrating, we get the general solution

    y = C(x² − 1), C ∈ R*.

Exercise 2.33 Solve the following linear equation:

    dy/dx = (2y + ln x)/(x ln x), x ∈ I ⊆ R*₊.

Solution. The associated homogeneous equation

    dy/dx = 2y/(x ln x)

has the general solution y = C ln²x. Using the method of variation of parameters,
we try to satisfy our nonhomogeneous equation by looking for a solution of the form
y = C(x) ln²x. Substituting in our initial equation, we get:

    dC/dx = 1/(x ln²x)

and, integrating, we obtain:

    C(x) = K − 1/ln x,

where K is an arbitrary real constant. Consequently, the general solution of our
equation is:

    y(x) = −ln x + K ln²x.

Exercise 2.34 Integrate the linear equation


    dy/dx = y/x + x cos x.

Solution. It is not difficult to see that the associated homogeneous equation

    dy/dx = y/x

has the general solution

    y = Cx.

Using the method of variation of parameters, we try to satisfy our nonhomogeneous
equation by looking for a solution of the form:

    y = C(x) x.

Substituting in our initial equation, we get:

    dC/dx = cos x

and, integrating, we obtain:

    C(x) = sin x + K,

where K is an arbitrary real constant. Therefore, the general solution of our equation
is:

    y(x) = x(sin x + K).
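A quick symbolic verification (added here, not in the original):

```python
import sympy as sp

x, K = sp.symbols('x K')
y = x*(sp.sin(x) + K)  # general solution found above

residual = sp.simplify(sp.diff(y, x) - (y/x + x*sp.cos(x)))
print(residual)  # → 0
```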

Exercise 2.35 Integrate the linear equation

    x dy/dx + (x + 1)y = 3x² e^(−x), x ∈ (0, ∞).

Solution. Since the associated homogeneous equation

    dy/dx = −(x + 1)y/x

has the general solution

    y = C e^(−x)/x,

we look for the solution of our equation as being of the form:

    y = C(x) e^(−x)/x.

It is not difficult to see that

    C(x) = x³ + K,

where K is an arbitrary real constant. Hence, the general solution of our equation
is:

    y(x) = K e^(−x)/x + x² e^(−x).

Exercise 2.36 Find the solution of the initial value problem

    dy/dx = −(3/x) y + sin x / x³, x ∈ (0, π),
    y(π/2) = −2.

Solution. Using the method of variation of parameters, it is not difficult to see that

    y(x) = (−cos x + C)/x³, C ∈ R.

Now, we have to match up the initial condition to get the particular solution

    y(x) = −(cos x + π³/4)/x³.
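Both the equation and the initial condition can be checked with sympy (an added sanity check):

```python
import sympy as sp

x = sp.symbols('x')
y = -(sp.cos(x) + sp.pi**3/4)/x**3  # particular solution found above

residual = sp.simplify(sp.diff(y, x) + 3*y/x - sp.sin(x)/x**3)
print(residual)               # → 0
print(y.subs(x, sp.pi/2))     # → -2, i.e. y(π/2) = -2
```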

2.4 Bernoulli’s Equation


Many equations can be reduced to linear equations by means of suitable changes
of variables. An example is offered by the so-called Bernoulli’s equation, which, for
instance, can be encountered in problems related to the motion of material points
when the resistance of the medium depends on the velocity. The general form of
such an equation is the following one:

    dy/dx = a(x) y + b(x) y^α,                             (2.28)
where a, b : I ⊆ R → R are given continuous functions and α ∈ R \ {0, 1}.
Let us remark that for α = 0, the equation (2.28) is a nonhomogeneous linear
equation, while for α = 1, (2.28) becomes a homogeneous linear one.
Performing the change of variables z = y 1−α , we get the nonhomogeneous linear
equation
    dz/dx = (1 − α) a(x) z(x) + (1 − α) b(x).              (2.29)

Using the method of variation of parameters, we obtain the general solution of (2.29),
i.e. z(x) = ψ(x, C), where C is an arbitrary real constant. Thus, coming back to
the variable y, we get the general solution of (2.28):
    y(x) = (ψ(x, C))^(1/(1−α)).

Remark 2.37 Let us remark that it is possible to find the general solution of equa-
tion (2.28) by using a kind of method of variation of parameters. More precisely,
the solution of equation (2.28) is sought to be of the form

    y(x) = C(x) exp(∫ a(x) dx),                            (2.30)

where C is a derivable function of x which remains to be determined. Computing
the derivative of y and substituting in (2.28), we get a new differential equation
with separable variables for the unknown function C. Solving it, we obtain C(x) =
ϕ(x, K), where K is an arbitrary real constant. Finally, the general solution of
equation (2.28) is

    y(x) = ϕ(x, K) exp(∫ a(x) dx).

Exercise 2.38 Integrate the following equation:

    3y² dy/dx − y³ = x + 1.

Solution. Obviously, our equation can be written as

    dy/dx = (1/3) y + ((x + 1)/3) y^(−2),

i.e. we have a Bernoulli's equation with α = −2. Therefore, the change of variables

    z = y³

leads to the nonhomogeneous equation

    dz/dx = z + x + 1,

having the general solution

    z(x) = C e^x − x − 2, C ∈ R.

Hence, the general solution of our equation is

    y(x) = (C e^x − x − 2)^(1/3),

where C is an arbitrary constant in R.
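Substituting the cube-root formula back into the original form of the equation gives a direct check (my addition, not part of the text):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = (C*sp.exp(x) - x - 2)**sp.Rational(1, 3)  # general solution found above

# Residual of the original equation 3 y^2 y' - y^3 = x + 1
residual = sp.simplify(3*y**2*sp.diff(y, x) - y**3 - (x + 1))
print(residual)  # → 0
```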

Exercise 2.39 Solve the following Cauchy problem:

    dy/dx = 2y tan x + y²,
    y(π/4) = 2.

Solution. This is a Bernoulli's equation with α = 2 and, since y = 0 is not a solution
of our problem, performing the change of variables z = 1/y we are led to the linear
equation

    dz/dx = −2z tan x − 1.

The general solution of this linear equation is

    z(x) = C cos²x − (sin 2x)/2, C ∈ R.

Thus, the general solution of our Bernoulli's equation is

    y(x) = 1/(C cos²x − (sin 2x)/2).

Imposing the initial condition y(π/4) = 2, the solution of our Cauchy problem is

    y(x) = 2/(4 cos²x − sin 2x).

2.5 Riccati’s Equation


Before giving the definition of Riccati's equation, let us look again at the first-
order differential equation

    dy/dx = f(x, y).

If we expand f(x, y) in powers of y, with x kept constant, we get

    dy/dx = P(x) + Q(x) y + R(x) y² + . . .

If we stop at the term linear in y, we get a linear equation. If we keep the
second-order term as well, we are led to equations of the type

    dy/dx = P(x) + Q(x) y + R(x) y².

Hence, using our previous notation, the general form of Riccati's equation is the
following one:

    dy/dx = a(x) + b(x) y + c(x) y²,                       (2.31)

where a, b, c : I ⊆ R → R are given continuous functions. Such an equation is
nonlinear and, in general, it is not integrable by quadratures. However, it may be
transformed into a Bernoulli's equation, by means of a suitable change of variables,
provided that a particular solution y0(x) of this equation is known. Indeed, the
change of variables

    z(x) = y(x) − y0(x)                                    (2.32)

leads to a Bernoulli's equation:

    dz/dx = (b(x) + 2y0(x)c(x))z + c(x) z².                (2.33)

By integration, the general solution of (2.33) will be of the form z(x) = ψ(x, C),
with C ∈ R. Therefore, the general solution of (2.31) will be

    y(x) = y0(x) + ψ(x, C).                                (2.34)

Remark 2.40 If we know a particular solution y0 (x) of (2.31), then, by performing


the change of variables
    y(x) = y0(x) + 1/z(x),
we get directly a nonhomogeneous linear equation which can be easily solved.

Remark 2.41 If we know two distinct particular solutions y1 (x) and y2 (x) of the
equation (2.31), then, the change of variables
    z(x) = (y(x) − y1(x))/(y(x) − y2(x))
leads us directly to a homogeneous linear equation.

Remark 2.42 If we know three distinct particular solutions y1 (x), y2 (x) and y3 (x)
of the equation (2.31), then its general solution can be obtained without performing
any quadrature and we have:
    (y(x) − y1(x))/(y(x) − y2(x)) : (y3(x) − y1(x))/(y3(x) − y2(x)) = C,
where C is an arbitrary real constant.

Exercise 2.43 Solve the following Riccati's equation:

    dy/dx = −y² sin x + 2 sin x / cos²x, x ∈ (0, π/2),

knowing that it admits the particular solution y0(x) = 1/cos x.

Solution. Performing the change of variables

    y(x) = 1/cos x + 1/z(x),

we get the nonhomogeneous linear equation

    dz/dx = 2 (sin x / cos x) z + sin x,

having the general solution

    z(x) = C/cos²x − (cos x)/3, C ∈ R.

Therefore, the general solution of Riccati's equation is

    y(x) = 1/cos x + 3 cos²x/(3C − cos³x).
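The general solution can be substituted back into the Riccati equation with sympy (an added verification, not in the text):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = 1/sp.cos(x) + 3*sp.cos(x)**2/(3*C - sp.cos(x)**3)  # general solution found above

residual = sp.simplify(sp.diff(y, x) + y**2*sp.sin(x) - 2*sp.sin(x)/sp.cos(x)**2)
print(residual)  # → 0
```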

Exercise 2.44 Integrate the following Riccati's equation:

    dy/dx = (x/2) y² − (2/x) y − 1/(2x³), x ∈ (0, ∞),

knowing that it admits the following two particular solutions: y1(x) = 1/x² and
y2(x) = −1/x².

Solution. Performing the change of variables

    z(x) = (y(x) − y1(x))/(y(x) − y2(x)),

we get the homogeneous linear equation

    x dz/dx = z,

having the general solution

z(x) = Cx, C ∈ R∗

and the stationary solution


z = 0.

Hence, the general solution of our Riccati’s equation is

1 1 + Kx
y(x) = , K ∈ R.
x2 1 − Kx

Exercise 2.45 Solve the following equation:

    dy/dx = y² − y tan x + 1/cos²x, x ∈ (0, π/2),

knowing that it admits the particular solution y0(x) = tan x.

Solution. Performing the change of variables

    y(x) = tan x + 1/z(x),

we get the nonhomogeneous linear equation

    dz/dx = −z tan x − 1,

with the general solution

    z(x) = (ln |(tan(x/2) − 1)/(tan(x/2) + 1)| + C) cos x, C ∈ R.

Thus, the general solution of our Riccati's equation is

    y(x) = tan x + 1/((ln |(tan(x/2) − 1)/(tan(x/2) + 1)| + C) cos x).


2.6 Exact Differential Equations


Let D ⊆ R² be a domain and let us consider the following equation:

    P(x, y) dx + Q(x, y) dy = 0,                           (2.35)

where P, Q : D → R are given continuous functions and Q(x, y) ≠ 0.
Equation (2.35) is said to be an exact differential equation if there exists a func-
tion F : D → R, F ∈ C¹(D), such that

    dF(x, y) = P(x, y) dx + Q(x, y) dy.                    (2.36)

Notice that F is called a first integral of (2.35) and we have:

    ∂F/∂x (x, y) = P(x, y),
    ∂F/∂y (x, y) = Q(x, y).                                (2.37)

In this case, equation (2.35) takes the form

    dF(x, y) = 0.                                          (2.38)
If the function y(x) is a solution of (2.38), then
dF (x, y(x)) = 0 (2.39)
and, consequently,
F (x, y(x)) = C, C ∈ R. (2.40)
Conversely, if (2.40) holds true for some function y(x), then, by differentiation, we
get dF (x, y(x)) = 0. Hence, F (x, y) = C, where C is an arbitrary real constant, is
the complete integral of the equation (2.35). Thus, we can formulate the following
result.
Proposition 2.46 If (2.35) is an exact equation and F is a first integral for it,
then ϕ(·) is a solution of (2.35) ⇐⇒ ∃ C ∈ R such that F (x, ϕ(x)) = C.

Remark 2.47 If we impose an initial condition y(x0 ) = y0 , then the constant C


can be determined from (2.40) and we get C = F (x0 , y0 ). Thus,
F (x, y) = F (x0 , y0 ) (2.41)
defines the solution of our Cauchy problem as an implicit function of x.

Remark 2.48 Let D = {(x, y) | a < x < b, c < y < d} ⊆ R² and P, Q ∈ C¹(D),
Q(x, y) ≠ 0. The left-hand side of (2.35) is the total differential of some function F
if and only if the following condition, called Euler's condition, is fulfilled:

    ∂P/∂y (x, y) = ∂Q/∂x (x, y), ∀(x, y) ∈ D.              (2.42)

It remains to see how we can determine a first integral F for the exact equation
(2.35). In fact, we have to determine F from its total differential dF(x, y) =
P(x, y)dx + Q(x, y)dy. Let us fix an arbitrary point (x0, y0). We can determine F by
taking the line integral of dF between the fixed point (x0, y0) and a point with
variable coordinates (x, y), over any path, since the line integral is path-independent.
Using Leibniz's formula, we have:

    ∫_{(x0,y0)}^{(x,y)} dF = F(x, y) − F(x0, y0).          (2.43)

On the other hand, based on the path-independence of such an integral, it is conve-
nient to take as a path of integration a polygonal line consisting of two line segments
which are parallel to the coordinate axes. We have

    ∫_{(x0,y0)}^{(x,y)} dF = ∫_{(x0,y0)}^{(x,y)} P(x, y)dx + Q(x, y) dy
                           = ∫_{(x0,y0)}^{(x,y0)} P(x, y0) dx + ∫_{(x,y0)}^{(x,y)} Q(x, y) dy.   (2.44)

Hence, from (2.43) and (2.44), we get

    F(x, y) = F(x0, y0) + ∫_{(x0,y0)}^{(x,y0)} P(x, y0) dx + ∫_{(x,y0)}^{(x,y)} Q(x, y) dy.      (2.45)

Therefore, F being determined, the solution of the exact equation (2.35) is given
implicitly by
F (x, y) = C. (2.46)

Remark 2.49 First integrals F for the differential equation (2.35) can also be ob-
tained if the domain D is not a rectangle, but a so-called star-shaped domain, i.e.
there exists a certain point (x0, y0) ∈ D such that t(x0, y0) + (1 − t)(x, y) ∈ D, for any
t ∈ [0, 1] and any (x, y) ∈ D.

Algorithm for integrating exact equations.
1) Check if the equation P(x, y) dx + Q(x, y) dy = 0 is exact.
2) Write down the system

    ∂F/∂x (x, y) = P(x, y),
    ∂F/∂y (x, y) = Q(x, y).

3) Integrate the above system to find F(x, y).
4) The general solution is given by the implicit equation F(x, y) = C, where C is
an arbitrary real constant.
5) If an initial-value problem is given, plug in the initial condition to find the
constant C.

Exercise 2.50 Integrate the following equation:

(2x − y + 1) dx + (−x + 2y − 1) dy = 0, (x, y) ∈ D.

Solution. Since P(x, y) = 2x − y + 1 and Q(x, y) = −x + 2y − 1, we have

    ∂P/∂y (x, y) = ∂Q/∂x (x, y), ∀(x, y) ∈ D.

Hence, our equation is an exact one and there exists a function F : D → R, F ∈
C¹(D), with dF(x, y) = P(x, y) dx + Q(x, y) dy. Thus, the general solution is given
implicitly by

    F(x, y(x)) = C, C ∈ R.

To determine F, let us choose an arbitrary fixed point (x0, y0). Then,

    F(x, y) = F(x0, y0) + ∫_{(x0,y0)}^{(x,y0)} (2x − y0 + 1) dx + ∫_{(x,y0)}^{(x,y)} (−x + 2y − 1) dy.

So, the general solution is given by

    x² − xy + y² − y + x = C.
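Euler's condition and the first integral can be checked mechanically (an added example, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
P = 2*x - y + 1
Q = -x + 2*y - 1
F = x**2 - x*y + y**2 - y + x  # first integral found above

euler = sp.simplify(sp.diff(P, y) - sp.diff(Q, x))  # Euler's condition (2.42)
dFx = sp.simplify(sp.diff(F, x) - P)
dFy = sp.simplify(sp.diff(F, y) - Q)
print(euler, dFx, dFy)  # → 0 0 0
```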

Exercise 2.51 Solve the following equation:

    (2x + 3x²y)dx + (x³ − 3y²)dy = 0, (x, y) ∈ D.



Solution. Since

    P(x, y) = 2x + 3x²y, Q(x, y) = x³ − 3y²,

we have

    ∂P/∂y (x, y) = ∂Q/∂x (x, y), ∀(x, y) ∈ D.

Hence, our equation is an exact one and there exists a function F : D → R,
F ∈ C¹(D) such that

    dF(x, y) = P(x, y)dx + Q(x, y)dy.

Therefore, the general solution is given implicitly by

    F(x, y(x)) = C, C ∈ R.

To determine F, let us choose an arbitrary (fixed) point (x0, y0). Then,

    F(x, y) = F(x0, y0) + ∫_{(x0,y0)}^{(x,y0)} (2x + 3x²y0)dx + ∫_{(x,y0)}^{(x,y)} (x³ − 3y²)dy,

which means that the general solution is given by

    x² + x³y − y³ = C.

Exercise 2.52 Solve

    (y cos x + 2x exp(y))dx + (sin x + x² exp(y) − 1)dy = 0.

Solution. We first note that the equation is exact and there exists a function F :
D → R, F ∈ C¹(D) such that

dF (x, y) = P (x, y)dx + Q(x, y)dy.

Therefore, the general solution is given implicitly by

F (x, y(x)) = C, C ∈ R.

It is not difficult to see that the general solution is given by

    y sin x + x² exp(y) − y = C, C ∈ R,

where the constant C can be determined by initial conditions.



In some cases, when the left-hand side of the equation (2.35) is not a total
differential and we are not allowed to apply the above procedure, we can still solve
it by choosing a function

    µ : D → R \ {0}, µ ∈ C¹(D),                            (2.47)

such that, after multiplying the left-hand side of the equation (2.35) by µ, this
becomes a total differential, i.e. there exists a function F such that

dF (x, y) = µ(x, y) (P (x, y) dx + Q(x, y) dy) . (2.48)

Such a function µ is called an integrating factor or a multiplier. From (2.42), we see
that µ should verify

    ∂/∂y (µ(x, y)P(x, y)) = ∂/∂x (µ(x, y)Q(x, y)).         (2.49)

Hence,

    ∂(ln µ)/∂y · P(x, y) − ∂(ln µ)/∂x · Q(x, y) = ∂Q/∂x − ∂P/∂y.    (2.50)
For results of existence and non-uniqueness of such a factor, the interested reader is
referred to [9]. In general, it is not easy to find, for a given equation, an integrating
factor, but we usually try to find such a multiplier by considering it as being a
function depending only on one argument (for example, only of x, or of y, or of xy,
and so forth).

Exercise 2.53 Integrate the following equation, looking for an integrating factor
of the form µ = µ(x):

    (x² − y² + 1)dx + 2xy dy = 0, (x, y) ∈ D.

Solution. Assuming that µ = µ(x), from

    ∂/∂y (µ(x)P(x, y)) = ∂/∂x (µ(x)Q(x, y))

we easily get the integrating factor

    µ(x) = 1/x².

Now, multiplying our equation by µ, we get

    (x² − y² + 1)/x² dx + (2y/x) dy = 0,

which is an exact equation. By integration, it is not difficult to compute its first
integral. Hence, the general solution of our initial equation is given, implicitly, by

    x² − Cx + y² − 1 = 0, C ∈ R.
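The effect of the integrating factor is easy to see symbolically (an added check): Euler's condition fails for the original pair (P, Q) but holds after multiplication by µ = 1/x²:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = x**2 - y**2 + 1
Q = 2*x*y
mu = 1/x**2  # integrating factor found above

before = sp.simplify(sp.diff(P, y) - sp.diff(Q, x))
after = sp.simplify(sp.diff(mu*P, y) - sp.diff(mu*Q, x))
print(before)  # → -4*y (the equation is not exact)
print(after)   # → 0   (after multiplying by µ it is exact)
```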

Exercise 2.54 Solve the following equation, looking for an integrating factor of the
form µ = µ(y):

    (2xy² − 3y³)dx + (7 − 3xy²)dy = 0, (x, y) ∈ D.

Solution. Assuming that µ = µ(y), from

    ∂/∂y (µ(y)P(x, y)) = ∂/∂x (µ(y)Q(x, y))

we easily get the integrating factor

    µ(y) = 1/y².

Now, multiplying our equation by µ, we get

    (2x − 3y)dx + (7/y² − 3x)dy = 0,

which is an exact equation. By integration, it is not difficult to compute its complete
integral. Hence, the general solution of our initial equation is given, implicitly, by

    x² − 3xy − 7/y = C, C ∈ R.

Exercise 2.55 Integrate the following equation, knowing that it possesses an inte-
grating factor of the form µ = µ(x + y²):

    (3y² − x) dx + (2y³ − 6xy) dy = 0, (x, y) ∈ D.



Solution. Assuming that µ = µ(x + y²), from (2.49) we obtain the integrating factor

    µ(x + y²) = 1/(x + y²)³.

Now, multiplying our equation by µ, we get

    (3y² − x)/(x + y²)³ dx + (2y³ − 6xy)/(x + y²)³ dy = 0,

which is an exact equation. By integration, it is not difficult to compute its first
integral. Hence, the general solution of our initial equation is given, implicitly, by

    (x − y²)/(x + y²)² = C, C ∈ R.

2.7 Elementary Types of Implicit Differential Equations


The general form of an implicit first-order differential equation (i.e. not solved for
the derivative) is the following one:

    F(x, y, y′) = 0,                                       (2.51)

where F : D ⊆ R³ → R, F ∈ C¹(D).
A derivable function ϕ : I ⊆ R → R is called a solution of equation (2.51) if
(x, ϕ(x), ϕ′(x)) ∈ D, ∀x ∈ I, and

    F(x, ϕ(x), ϕ′(x)) = 0, x ∈ I.                          (2.52)

If, using the theorem on implicit functions, we can solve equation (2.51) for the
derivative y′, then we obtain one or several equations

    y′ = f_i(x, y), i = 1, 2, . . . .

Integrating these equations, which are solved for the derivative, we get the solutions
of equation (2.51).
We consider here only two important particular cases of implicit equations: La-
grange’s equation and Clairaut’s equation.
Lagrange’s Equation. The general form of such an equation is the following one:

y = x a(y 0 ) + b(y 0 ), (2.53)



where a, b : I ⊆ R → R and a, b ∈ C¹(I).
If we denote y′ = p and we differentiate (2.53) with respect to x, we get

    (x a′(p) + b′(p)) dp/dx = p − a(p).                    (2.54)

a) If p − a(p) ≠ 0, then, if we consider x = x(p), we get

    dx/dp = a′(p)/(p − a(p)) x + b′(p)/(p − a(p)).         (2.55)

This equation is a nonhomogeneous linear one and, hence, is readily integrable by
the method of variation of parameters. After solving it, we get x = x(p, C), C ∈ R.
From (2.53), we also have y = x a(p) + b(p). Therefore, we get the parametric
equations defining the general solution of equation (2.53):

    x = x(p, C),
    y = y(p, C).                                           (2.56)

b) If p − a(p) = 0 and if p_i are the real roots of this algebraic equation, then
y(x) = x a(p_i) + b(p_i) are singular solutions of equation (2.53) (they are straight
lines).

Exercise 2.56 Integrate the following Lagrange's equation:

    y = 2xy′ + ln y′.

Solution. Denoting y′ = p and differentiating with respect to x, we get

    p = 2p + (2x + 1/p) dp/dx, p > 0.

Hence, considering x = x(p), we get the nonhomogeneous equation

    dx/dp = −(2/p) x − 1/p²,

having the general solution

    x(p) = C/p² − 1/p, C ∈ R.

Using the equation, we get the parametric equations of the complete integral of our
Lagrange's equation:

    x(p) = C/p² − 1/p,
    y(p) = ln p + 2C/p − 2.
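Along the parametric curve one must have dy/dx = p, and the original equation y = 2xy′ + ln y′ must hold identically; both can be verified with sympy (an added check):

```python
import sympy as sp

p, C = sp.symbols('p C', positive=True)
xp = C/p**2 - 1/p            # parametric general solution found above
yp = sp.log(p) + 2*C/p - 2

dydx = sp.simplify(sp.diff(yp, p)/sp.diff(xp, p))
check_eq = sp.simplify(yp - (2*xp*p + sp.log(p)))
print(dydx)      # → p
print(check_eq)  # → 0
```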
Exercise 2.57 Solve the following Lagrange's equation:

    y = 2xy′ − y′².

Solution. Denoting y′ = p and differentiating with respect to x, we get

    −p = (2x − 2p) dp/dx.

For p ≠ 0, considering x = x(p), we get the nonhomogeneous equation

    dx/dp = −(2/p) x + 2,

with the general solution

    x(p) = C/p² + 2p/3, C ∈ R.

Using the equation, we get the parametric equations of the general solution of our
Lagrange's equation:

    x(p) = C/p² + 2p/3,
    y(p) = 2C/p + p²/3.

For p = 0, we get the solution y = 0.

Clairaut’s Equation. The general form of such an equation is:

y = x y 0 + g (y 0 ), (2.57)

where g : I ⊆ R → R and g ∈ C 1 (I).


If we denote y 0 = p and we differentiate (5.68) with respect to x, we get
dp
(x + g 0 (p)) = 0.
dx

a) If dp/dx = 0, then p = C and y(x) = Cx + g(C), with C ∈ I, is the general
solution of equation (2.57) (a one-parameter family of integral curves which are
straight lines).
b) If x + g′(p) = 0, we get the parametric equations of the singular solution of
(2.57):

    x(p) = −g′(p),
    y(p) = −p g′(p) + g(p).                                (2.58)

Notice that the integral curve defined by (2.58) is the envelope of the family of
integral curves (2.57).

Exercise 2.58 Solve the following Clairaut's equation:

    y = xy′ + 1/(2y′).

Solution. Denoting y′ = p, we have p ≠ 0. Differentiating with respect to x, we get

    (x − 1/(2p²)) dp/dx = 0.

a) If dp/dx = 0, then p = C and

    y(x) = Cx + 1/(2C), C ∈ I ⊆ R*,

is the general solution of the given equation (a one-parameter family of straight
lines).
b) If x − 1/(2p²) = 0, then we get the parametric equations of the singular solution
of our Clairaut's equation:

    x(p) = 1/(2p²),
    y(p) = 1/p.

Eliminating p, we get

    y²(x) = 2x.
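Both families can be substituted back into the equation y = xy′ + 1/(2y′) (an added check, not part of the text):

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)

y_gen = C*x + 1/(2*C)   # one-parameter family of lines
y_sing = sp.sqrt(2*x)   # singular solution y^2 = 2x (positive branch)

def residual(y):
    # residual of y - (x*y' + 1/(2*y')) for the Clairaut equation above
    yp = sp.diff(y, x)
    return sp.simplify(y - (x*yp + 1/(2*yp)))

res_gen = residual(y_gen)
res_sing = residual(y_sing)
print(res_gen, res_sing)  # → 0 0
```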

Exercise 2.59 Solve the following differential equation:

    y = xy′ + ay′/√(1 + y′²), a > 0.

Solution. The given equation has the general solution

    y = Cx + Ca/√(1 + C²), C ∈ R,

and the singular solution, in parametric form,

    x(p) = −a/(1 + p²)^(3/2),
    y(p) = ap³/(1 + p²)^(3/2).

Eliminating the parameter p, we obtain

    x^(2/3) + y^(2/3) = a^(2/3),

which represents the equation of an astroid.

Problems on Chapter 2

Exercise 2.60 Consider the equation

    dy/dx = H(x)y(x),

where H is Heaviside's function

    H(x) = 0 for x < 0,    H(x) = 1 for x ≥ 0.

Prove that this equation has, on R, only the solution y = 0.

Solution. Obviously, y = 0 is a solution of our equation. Arguing by contradiction,
let us suppose that there is another solution y ≠ 0 on R. This means that there
exists x0 ∈ R such that y(x0) ≠ 0. Therefore, there exists an open interval I ∈ V(x0)
such that y(x) ≠ 0 on I and

    y′(x)/y(x) = H(x) on I.                                (2.59)

But this equality cannot hold true, because the function in the left-hand side of
(2.59) has the Darboux property, while Heaviside's function does not have this property.
This is a contradiction and, hence, y = 0 is the unique solution of our equation.

Exercise 2.61 Solve the following Cauchy problem:

    dy/dx = y(1 − y),
    y(0) = 2.

Solution. Since the problem is separable, the solution is straightforward. Indeed,
since y = 0 and y = 1 are not solutions of our initial value problem, the equation
can be separated to

    dy/(y(1 − y)) = dx.

Integrating both sides gives

    y/(1 − y) = C e^x, C ∈ R*.

Insertion of the initial condition yields

    y(x) = 2/(2 − e^(−x)).
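A symbolic check of this logistic-type solution (my addition, not in the text):

```python
import sympy as sp

x = sp.symbols('x')
y = 2/(2 - sp.exp(-x))  # solution found above

residual = sp.simplify(sp.diff(y, x) - y*(1 - y))
print(residual)       # → 0
print(y.subs(x, 0))   # → 2, i.e. y(0) = 2
```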
Exercise 2.62 Solve the initial value problem

    dy/dx = (2 cos²x − sin²x + y²)/(2 cos x), x ∈ (0, π/2),
    y(0) = −1,

knowing that y0 = sin x is a particular solution of the given equation.

Solution. Since we have a Riccati's equation, performing the change of variables

    y(x) = sin x + 1/z(x),

we get the nonhomogeneous linear equation

    dz/dx = −(sin x / cos x) z − 1/(2 cos x),

having the general solution

    z(x) = −(1/2) sin x + C cos x, C ∈ R.

Therefore, the general solution of Riccati's equation is

    y(x) = sin x + 1/(−(1/2) sin x + C cos x).

The initial condition y(0) = −1 implies 1/C = −1, or C = −1. Therefore, the
solution to the IVP is

    y(x) = sin x + 1/(−(1/2) sin x − cos x).
Exercise 2.63 Integrate the equation

    dy/dx = −2 − y + y²,

knowing that y0 = 2 is a particular solution.

Solution. Since we have a Riccati equation, performing the change of variables

    y(x) = 2 + 1/u(x),

we get the nonhomogeneous linear equation

    du/dx = −3u − 1,

having the general solution

    u(x) = C e^(−3x) − 1/3, C ∈ R.

Therefore, the general solution of the given Riccati's equation is

    y(x) = 2 + 1/(C e^(−3x) − 1/3).

Exercise 2.64 Integrate the following Clairaut's equation:

    y = xy′ − y′².

Solution. Denoting y′ = p and differentiating with respect to x, we get

    (x − 2p) dp/dx = 0.

a) If dp/dx = 0, then p = C and

    y(x) = Cx − C², C ∈ R,

is the general solution of the given equation (a one-parameter family of straight
lines).
b) If x − 2p = 0, then we get the parametric equations of the singular solution
of our Clairaut's equation:

    x(p) = 2p,
    y(p) = p².

Eliminating p, we get

    y(x) = x²/4.
Exercise 2.65 Solve the initial value problem

    dy/dx = (y² − 1)/x, x ∈ (0, 3),
    y(1) = 2.

Solution. Since the problem is separable, the solution to the problem is straightfor-
ward. Indeed, since y = 1 and y = −1 are not solutions of our initial value problem,
the equation is separated to

    dy/(y² − 1) = dx/x.

Integrating both sides gives

    (y − 1)/(y + 1) = Cx², C > 0.

If we plug in the condition y(1) = 2, we get

    y(x) = (3 + x²)/(3 − x²).
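As an added verification of the solution and of the initial condition:

```python
import sympy as sp

x = sp.symbols('x')
y = (3 + x**2)/(3 - x**2)  # solution found above

residual = sp.simplify(sp.diff(y, x) - (y**2 - 1)/x)
print(residual)       # → 0
print(y.subs(x, 1))   # → 2, i.e. y(1) = 2
```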

Exercise 2.66 Find the solution of the problem

    y′ + y tan x = cos²x,
    y(0) = 2.

Solution. Since the equation is linear, using the method of variation of parameters,
it is not difficult to see that the general solution of our equation is

    y(x) = (sin x + C) cos x, C ∈ R.

In order to find the particular solution to the given IVP, we use the initial condition
y(0) = 2. We obtain C = 2. Therefore, the solution is

    y(x) = (sin x + 2) cos x.

Exercise 2.67 Find the solution of the problem

    (y + xy²)dx − x dy = 0, x > 0,
    y(1) = 1.

Solution. Since our equation is not exact, looking for an integrating factor µ = µ(y),
we easily get

    µ(y) = 1/y².

Notice that y = 0 is not a solution of our problem. Now, multiplying our equation
by µ, we get

    (1/y + x)dx − (x/y²)dy = 0,

which is an exact equation. By integration, it is not difficult to compute its first
integral. Hence, the general solution of our initial equation is given, implicitly, by

    x/y + x²/2 + C = 0, C ∈ R.

If we plug in the initial condition, we get C = −3/2. Hence, the solution of our
Cauchy problem is given by

    x/y + x²/2 − 3/2 = 0,

or

    y(x) = −2x/(x² − 3).

Exercise 2.68 Consider the initial value problem

    dy/dx = (y² − 4) cos(x e^y),
    y(0) = 1.

Show that the solution of this problem verifies

    |y(x)| < 2, ∀x.

Solution. It is not difficult to see that the right-hand side of our equation satisfies
the assumptions of the uniqueness theorem. Also, y(x) ≡ −2 and y(x) ≡ 2 are two
constant solutions of our differential equation. Since the solution of the given initial
value problem starts between these solutions, it has to remain in the same interval.
So,

    −2 < y(x) < 2, ∀x,

i.e.

    |y(x)| < 2, ∀x.
Chapter 3

Higher-Order Differential
Equations

The general form of a differential equation of the n-th order is the following one:

    F(x, y, y′, . . . , y^(n)) = 0,                        (3.1)

where F : G ⊆ R^(n+2) → R. If this equation can be solved for the highest derivative,
we get

    y^(n) = f(x, y, y′, . . . , y^(n−1)),                  (3.2)

where f : D ⊆ R × R^n → R.
A function ϕ(·) : I ⊆ R → R, where I is an interval in R, is a solution of the dif-
ferential equation (3.2) if ϕ(·) is n-times derivable, (x, ϕ(x), ϕ′(x), . . . , ϕ^(n−1)(x)) ∈
D, ∀x ∈ I, and ϕ(·) satisfies the equation, i.e.

    ϕ^(n)(x) = f(x, ϕ(x), . . . , ϕ^(n−1)(x)), x ∈ I.

Remark 3.1 It is not difficult to transform the n-th order equation (3.1) to a system
of n first-order equations, for which we already have an existence and uniqueness
result.

Theorem 3.2 Let us consider the Cauchy problem

    y^(n)(x) = f(x, y, y′, . . . , y^(n−1)),
    y(x0) = y0, y′(x0) = y0′, . . . , y^(n−1)(x0) = y0^(n−1).    (3.3)


If in a neighbourhood of the initial values (x0, y0, y0′, . . . , y0^(n−1)) the function f is
continuous with respect to all its arguments and satisfies a Lipschitz condition with
respect to all its arguments beginning with the second one, then there exists a unique
solution of the Cauchy problem (3.3).

Remark 3.3 The general solution of the differential equation (3.1) depends on n
parameters C1, C2, . . . , Cn, i.e. y = y(x, C1, C2, . . . , Cn). These n parameters can
be, for instance, the initial values of the sought-for function and its derivatives,
y0, y0′, . . . , y0^(n−1).

A relationship of the form Φ(x, y, C1, C2, . . . , Cn) = 0, defining implicitly the general
solution, is called the general integral of the differential equation (3.1). By assigning
particular values to the constants C1, C2, . . . , Cn, we obtain particular solutions for
equation (3.1).

3.1 Nonlinear Differential Equation of Order n


Let us consider now some types of nonlinear differential equations of order n which allow for a reduction of the order.
I. Let us assume that the left-hand side of equation (3.1) does not contain explicitly the unknown function and its derivatives up to the order (k − 1), inclusive, i.e. the equation has the form

F(x, y^(k), y^(k+1), . . . , y^(n)) = 0, 1 ≤ k ≤ n.   (3.4)

By performing the change of variables z(x) = y^(k)(x), equation (3.4) becomes

F (x, z, z 0 , . . . , z (n−k) ) = 0. (3.5)

Integrating (3.5), we get the general solution of the form

z(x) = ψ(x, C1 , C2 , . . . , Cn−k ), (3.6)

depending on (n − k) parameters C1, C2, . . . , Cn−k. By k-fold integration, we obtain the general solution of (3.4), i.e. a family of functions

y(x) = ϕ(x, C1 , C2 , . . . , Cn ) (3.7)

depending on n parameters C1 , C2 , . . . , Cn .

Of course, for equation (3.5) we may also obtain some singular solutions zi,
which, by k-fold integration, will give the desired singular solutions yi of equation
(3.4).

Exercise 3.4 Solve the Cauchy problem



xy'' + y' = −x²(y')²,
y(1) = y'(1) = 1.

Solution. Since the function y does not enter explicitly into our equation, setting
z(x) = y 0 (x), we obtain
xz' + z = −x²z²,
z(1) = 1.

Solving this Bernoulli’s equation, together with its initial condition, we have z(x) =
1/x2 . Therefore, the solution of our initial Cauchy problem is

y(x) = −1/x + 2.
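As a quick sanity check (a sketch, not part of the original text), the claimed solution and its hand-computed derivatives can be substituted back into the equation at a few sample points:

```python
# Verify numerically that y(x) = 2 - 1/x satisfies
# x*y'' + y' = -x^2*(y')^2 together with y(1) = y'(1) = 1.
def y(x):   return 2 - 1/x
def dy(x):  return 1 / x**2     # y', computed by hand
def d2y(x): return -2 / x**3    # y''

for x in (0.5, 1.0, 2.0, 5.0):
    residual = x * d2y(x) + dy(x) + x**2 * dy(x)**2
    assert abs(residual) < 1e-12
assert y(1) == 1 and dy(1) == 1   # initial conditions
```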

Exercise 3.5 Integrate the equation


xy'' − (1/4)(y'')² − y' = 0.

Solution. Since the function y does not enter explicitly into our equation, setting
z(x) = y'(x),

we get
z = xz' − (1/4)(z')²,
which is Clairaut’s equation. Denoting
z' = p

and differentiating with respect to x, we get

(x − (1/2)p) dp/dx = 0.

a) If dp/dx = 0, then p = C1 and

z(x) = C1 x − (1/4)C1², C1 ∈ R
is the general solution of Clairaut's equation. Taking into account our change of variables, the general solution of the original equation is
y(x) = C1 x²/2 − (1/4)C1² x + C2, C1, C2 ∈ R.
b) If x − (1/2)p = 0, then we get the singular solution of Clairaut's equation:

z(x) = x²,

which gives, for the original equation, the singular solution


y(x) = x³/3 + K1, K1 ∈ R.
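Both branches can be checked numerically (a sketch, not part of the original text): the general solution with an arbitrary C1 and the singular solution both annihilate the left-hand side of the equation.

```python
# Residual of x*y'' - (1/4)*(y'')**2 - y' for a given pair (y', y'').
def residual(dy, d2y, x):
    return x * d2y(x) - 0.25 * d2y(x)**2 - dy(x)

C1 = 3.0
for x in (0.5, 2.0, 4.0):
    # general solution: y' = C1*x - C1**2/4, y'' = C1
    assert abs(residual(lambda t: C1*t - C1**2/4, lambda t: C1, x)) < 1e-12
    # singular solution y = x**3/3 + K1: y' = x**2, y'' = 2*x
    assert abs(residual(lambda t: t**2, lambda t: 2*t, x)) < 1e-12
```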
Exercise 3.6 Integrate the following equation:
xy''' + y'' = 1 + x, x ∈ I ⊆ R∗.

Solution. Setting
z(x) = y''(x),
we get the nonhomogeneous linear equation
dz/dx = −z/x + (1 + x)/x,
with the general solution
z(x) = x/2 + 1 + C1/x, C1 ∈ R.
Taking into account our change of variables, we obtain
y(x) = x³/12 + x²/2 + K1 x ln|x| + K2 x + K3, K1, K2, K3 ∈ R.
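A quick check (not part of the original text): K2 and K3 disappear from y'' and y''', so it suffices to verify the residual of xy''' + y'' = 1 + x for an arbitrary K1.

```python
K1 = -2.5   # arbitrary constant of the general solution
def d2y(x): return x/2 + 1 + K1/x      # y'' of the general solution
def d3y(x): return 0.5 - K1/x**2       # y'''

for x in (0.3, 1.0, 4.0):
    assert abs(x*d3y(x) + d2y(x) - (1 + x)) < 1e-12
```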
12 2
II. Let the left-hand side of equation (3.1) be a homogeneous function of degree
zero with respect to the arguments y, y 0 , . . . , y (n) , i.e. the equation has the form
F(x, y'/y, y''/y, . . . , y^(n)/y) = 0, F : D ⊆ R^(n+1) → R.   (3.8)

Performing the change of variables


z(x) = y'(x)/y(x),   (3.9)
by induction, we see that equation (3.8) becomes

Φ(x, z, z 0 , . . . , z (n−1) ) = 0. (3.10)

Integrating it, we get the general solution of the form

z(x) = ψ(x, C1 , C2 , . . . , Cn−1 ), (3.11)

depending on (n − 1) parameters C1, C2, . . . , Cn−1. Integrating y'(x) = y ψ(x, C1, C2, . . . , Cn−1), we obtain the general solution of (3.8), i.e. a family of
functions
y(x) = ϕ(x, C1 , C2 , . . . , Cn ) (3.12)
depending on n parameters C1 , C2 , . . . , Cn .
Of course, for equation (3.10) we may also have some singular solutions zi , which,
by integration, will give the desired singular solutions yi of equation (3.8).

Exercise 3.7 Solve the Cauchy problem

2yy'' − 3(y')² − 4y² = 0,
y(0) = 1, y'(0) = 0, x ∈ (−π/2, π/2).
Solution. Since the equation is homogeneous in y, y', y'', setting z(x) = y'(x)/y(x),
we obtain
2z' = z² + 4,
z(0) = 0.

Solving this equation with separable variables and taking into account its initial
condition, we have z(x) = 2 tan x. Therefore, the solution of the initial Cauchy
problem is
y(x) = 1/cos²x.
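A numerical sanity check of this result (a sketch, not part of the original text), using hand-computed derivatives of y = 1/cos²x:

```python
import math
# Verify that y(x) = 1/cos(x)**2 solves 2*y*y'' - 3*(y')**2 - 4*y**2 = 0
# on (-pi/2, pi/2), with y(0) = 1 and y'(0) = 0.
def y(x):   return 1 / math.cos(x)**2
def dy(x):  return 2 * math.tan(x) / math.cos(x)**2
def d2y(x): return 4*math.tan(x)**2/math.cos(x)**2 + 2/math.cos(x)**4

for x in (-1.0, 0.0, 0.7, 1.2):
    assert abs(2*y(x)*d2y(x) - 3*dy(x)**2 - 4*y(x)**2) < 1e-8
assert y(0.0) == 1.0 and dy(0.0) == 0.0
```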
Exercise 3.8 Integrate the following equation:
yy'' − (y')² = x²y².

Solution. Obviously, y = 0 is a solution of our equation. For y ≠ 0, dividing the equation by y², we get

y''/y − (y'/y)² = x².
Since this equation is homogeneous in y, y', y'', setting

z(x) = y'(x)/y(x),

we get a first-order differential equation


z' = x²,

having the general solution

z(x) = x³/3 + C1, C1 ∈ R.
Then, taking into account our change of variables, it remains to integrate the equa-
tion
y' = y (x³/3 + C1).
Hence, the general solution of our initial equation is

y(x) = C2 e^(x⁴/12 + C1 x), C1 ∈ R, C2 ∈ R∗.

Exercise 3.9 Solve the equation:


2yy'' + (y')² = 0.

Solution. Obviously, y = 0 is a solution of our equation. For y ≠ 0, dividing the equation by y², we get

2y''/y + (y'/y)² = 0.
Since this equation is homogeneous in y, y', y'', setting

z(x) = y'(x)/y(x),

we get the first-order differential equation


2z' + 3z² = 0.
Obviously, z = 0 is a solution. Hence, y = C, C ∈ R is a solution of the original
equation. For z 6= 0, separating the variables and integrating, we obtain
z(x) = (2/3)(x + C1)⁻¹.
Therefore, integrating
y' = (2/3) y (x + C1)⁻¹,
we get the general solution of our initial equation
y(x) = C2 (x + C1)^(2/3), C1 ∈ R, C2 ∈ R∗.

III. Let us assume that the left-hand side of equation (3.1) does not involve
explicitly the independent variable x, i.e. the equation has the form
F (y, y 0 , y 00 , . . . , y (n) ) = 0, F : D ⊆ Rn+1 → R. (3.13)
The order of such an equation can be reduced by unity if we regard y in this equation
as an independent variable and y 0 as an unknown function. Therefore, let us perform
the change of variables
z(y(x)) = y 0 (x). (3.14)
Then, y'' = z'z, y''' = z''z² + (z')²z and, by induction, y^(n) = ϕ(z, z', . . . , z^(n−1)).
So, equation (3.13) becomes
Φ(y, z, z', . . . , z^(n−1)) = 0.   (3.15)
Integrating (3.15), we obtain its general solution as being of the form
z(y) = ψ(y, C1 , C2 , . . . , Cn−1 ), (3.16)
depending on (n − 1) parameters C1 , C2 , . . . , Cn−1 and, possibly, the singular so-
lutions zi. Integrating y'(x) = ψ(y, C1, C2, . . . , Cn−1), we get the general
solution of (3.13), i.e. a family of functions
y(x) = ϕ(x, C1 , C2 , . . . , Cn ) (3.17)
depending on n parameters C1 , C2 , . . . , Cn . Also, corresponding to the singular
solutions zi , by integration, we get the singular solutions yi to the original equation
(3.13).

Exercise 3.10 Find the solution of the Cauchy problem

2y''' − 3(y')² = 0,
y(0) = −3, y'(0) = 1, y''(0) = −1, x ∈ I ⊆ R \ {−2}.

Solution. If we perform the change of variables y'(x) = u(x), we obtain

2u'' − 3u² = 0,
u(0) = 1, u'(0) = −1.

Since in this last equation the independent variable x is not involved explicitly,
setting z(u(x)) = u'(x), we get

2z (dz/du) − 3u² = 0,
z(1) = −1.

It is not difficult to see that the solution of this Cauchy problem is z² = u³. Taking into account that z(u(x)) = u'(x), u(x) > 0 and u(0) = 1, we obtain
u(x) = 4/(x + 2)².
Hence, to get the solution of our initial Cauchy value problem, it remains to integrate
the equation
dy/dx = 4/(x + 2)².
We obtain
y(x) = −4/(x + 2) + C, C ∈ R.
Since y(0) = −3, we get C = −1 and the solution of the initial Cauchy problem is
y(x) = −(x + 6)/(x + 2), x ∈ I.
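A quick sanity check (a sketch, not part of the original text) with the derivatives of y = −(x + 6)/(x + 2) computed by hand:

```python
# Verify that y(x) = -(x + 6)/(x + 2) solves 2*y''' - 3*(y')**2 = 0
# with y(0) = -3, y'(0) = 1, y''(0) = -1.
def y(x):   return -(x + 6) / (x + 2)
def dy(x):  return 4 / (x + 2)**2
def d2y(x): return -8 / (x + 2)**3
def d3y(x): return 24 / (x + 2)**4

for x in (-1.0, 0.0, 3.0):
    assert abs(2*d3y(x) - 3*dy(x)**2) < 1e-12
assert (y(0), dy(0), d2y(0)) == (-3.0, 1.0, -1.0)
```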

Exercise 3.11 Solve the Cauchy problem


2yy' = y'',
y(0) = 0, y'(0) = 1, x ∈ (−π/2, π/2).

Solution. Since in our equation the independent variable x is not involved explicitly,
setting
z(y) = y'(x),
we get

z (dz/dy) = 2yz,
z(0) = 1.

It is not difficult to see that the solution of this Cauchy problem is

z(y) = y 2 + 1.

Therefore, to get the solution of our initial Cauchy value problem, it remains to
integrate the equation
dy/dx = y² + 1.
We obtain
y(x) = tan (x + C).
Since y(0) = 0 and x ∈ (−π/2, π/2), the solution of the initial Cauchy problem is
y(x) = tan x.

Exercise 3.12 Find the solution of the Cauchy problem


1 + (y')² = 2yy'',
y(2) = 1, y'(2) = 0.

Solution. Performing the change of variables


z(y(x)) = y'(x),

we get

1 + z² = 2yz (dz/dy),
z(1) = 0.

It is not difficult to see that the solution of this Cauchy problem is

z²(y) = y − 1.

Therefore, y − 1 has to be positive. Integrating the equation


dy/dx = ±√(y − 1)
and taking into account the initial condition y(2) = 1, we get
y(x) = 1 + (x − 2)²/4.
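A quick sanity check (a sketch, not part of the original text): y'' is constant here, so the verification is immediate.

```python
# Verify that y(x) = 1 + (x - 2)**2/4 solves 1 + (y')**2 = 2*y*y''
# with y(2) = 1, y'(2) = 0.
def y(x):  return 1 + (x - 2)**2 / 4
def dy(x): return (x - 2) / 2
d2y = 0.5                      # y'' is constant

for x in (0.0, 2.0, 5.0):
    assert abs(1 + dy(x)**2 - 2*y(x)*d2y) < 1e-12
assert y(2) == 1 and dy(2) == 0
```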

IV. Let us consider the following nonlinear Euler’s equation:


F(y, xy', x²y'', . . . , xⁿy^(n)) = 0, F : D ⊆ R^(n+1) → R, x ∈ I ⊆ R∗.   (3.18)

The order of such an equation can be reduced by unity if we perform the change of
variables
|x| = e^s.   (3.19)
Without loss of generality, we can assume that x > 0. So, the change of variables
(3.19) defines a new function z, by the following formula:

z(s) = y(e^s), s = ln x.   (3.20)

Then, y' = z'/x, i.e. xy' = z'. In a similar manner, we get x²y'' = z'' − z'. By induction, xⁿy^(n) = ϕ(z, z', z'', . . . , z^(n)). Thus, equation (3.18) becomes

Φ(z, z 0 , . . . , z (n) ) = 0. (3.21)

Integrating (3.21), we get the general solution of the form

z(s) = ψ(s, C1 , C2 , . . . , Cn ), (3.22)

depending on n parameters C1 , C2 , . . . , Cn . Hence, the general solution of (3.18) is

y(x) = ψ(ln x, C1 , C2 , . . . , Cn ), (3.23)

i.e. a family of functions depending on n parameters C1 , C2 , . . . , Cn . Of course, for


equation (3.21) we may also have some singular solutions zi , which will give the
desired singular solutions yi of equation (3.18).

Exercise 3.13 Find the solution of the Cauchy problem


x²yy'' + x²(y')² − xyy' = 0,
y(1) = y'(1) = 1, x > 0.



Solution. Performing the change of variables x = es , we get

zz'' − 2zz' + (z')² = 0,
z(0) = 1, z'(0) = 1.

Setting u(z) = z', we obtain

u(zu' − 2z + u) = 0,
u(1) = 1,

with the solution u(z) = z. Thus, z(s) = e^s, which leads to y(x) = x.

Exercise 3.14 Solve the Cauchy problem


 2 00 0
 x y + xy + y = 0,
0
y(1) = y (1) = 1, x > 0.

Solution. Performing the change of variables

x = e^s,

we get

z'' + z = 0,
z(0) = 1, z'(0) = 1.

It is not difficult to see that the solution of this Cauchy problem is

z(s) = cos s + sin s.

Hence, the solution of our initial Cauchy problem is

y(x) = cos(ln x) + sin(ln x).
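A quick sanity check (a sketch, not part of the original text), with the derivatives of y = cos(ln x) + sin(ln x) computed by hand:

```python
import math
# Verify that y(x) = cos(ln x) + sin(ln x) solves x**2*y'' + x*y' + y = 0
# for x > 0, with y(1) = y'(1) = 1.
def y(x):   s = math.log(x); return math.cos(s) + math.sin(s)
def dy(x):  s = math.log(x); return (math.cos(s) - math.sin(s)) / x
def d2y(x): return -2 * math.cos(math.log(x)) / x**2

for x in (0.5, 1.0, 3.0):
    assert abs(x**2*d2y(x) + x*dy(x) + y(x)) < 1e-12
assert y(1.0) == 1.0 and dy(1.0) == 1.0
```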

Exercise 3.15 Find the solution of the Cauchy problem



x²yy'' + x²(y')² − xyy' = 0,
y(1) = y'(1) = 1, x > 0.




Solution. Performing the change of variables

x = e^s,

we get

zz'' − 2zz' + (z')² = 0,
z(0) = 1, z'(0) = 1.

If we perform a new change of variables,


u(z) = z',

we obtain

u(zu' − 2z + u) = 0,
u(1) = 1,

with the solution


u(z) = z.
Therefore,
z(s) = e^s,
which gives
y(x) = x.

3.2 Higher-Order Linear Differential Equations


The general form of a linear differential equation of order n is the following one:

a0 (x) y (n) (x) + a1 (x) y (n−1) (x) + · · · + an−1 (x) y 0 (x) + an (x) y(x) =

= f (x), x ∈ (a, b) (3.24)


where the functions f, a0 , . . . , an are continuous functions defined on the interval
(a, b). If the right-hand side of equation (3.24) is identically zero, the linear equation
(3.24) is called homogeneous. So, its general form is

a0 (x) y (n) (x) + a1 (x) y (n−1) (x) + · · · + an−1 (x) y 0 (x) + an (x) y(x) =

= 0, x ∈ (a, b). (3.25)



If the coefficient a0 (x) is not equal to zero on the interval (a, b), then, by dividing
(3.25) by a0 (x), we get

y (n) (x) + p1 (x)y (n−1) (x) + · · · + pn−1 (x)y 0 (x) + pn (x)y(x) =

= 0, x ∈ (a, b). (3.26)


If the coefficients p1 (x), p2 (x), . . . , pn (x) are continuous on the interval (a, b), then
the differential equation (3.26) has a unique solution, defined on the same interval
(a, b), satisfying, for any x0 ∈ (a, b), the initial conditions
y(x0) = y0, y'(x0) = y0', . . . , y^(n−1)(x0) = y0^(n−1).

Equation (3.26) can be written as:

L(y) = 0, (3.27)

where the linear differential operator of the n-th order L is defined by

L(y) = y (n) (x) + p1 (x) y (n−1) (x) + · · · + pn (x) y(x). (3.28)

Indeed, it is not difficult to see that L is linear. Using its linearity, we get
L(C1 y1 + · · · + Cm ym) = C1 L(y1) + · · · + Cm L(ym),

where Ci are arbitrary constants. Thus, we obtain the following result:


Theorem 3.16 (i) A linear combination C1 y1 + · · · + Cm ym, with arbitrary constant coefficients Ci, of solutions yi of the homogeneous linear equation (3.26) is also a solution
of this equation.
(ii) If the homogeneous linear equation (3.26) with real coefficients pi (x) has a
complex solution y(x) = u(x) + iv(x), then the real part u and the imaginary part v
are also solutions of equation (3.26).

Definition 3.17 The functions y1 (x),. . . ,yn (x) are called linearly dependent over
(a, b) if there exist n constants α1 , . . . , αn , at least one of which is not equal to zero,
such that
α1 y1 (x) + · · · + αn yn (x) ≡ 0, x ∈ (a, b). (3.29)
If identity (3.29) is fulfilled only for α1 = · · · = αn = 0, then the functions
y1 (x), . . . , yn (x) are called linearly independent over the interval (a, b).

Example 3.18 The functions e^(k1 x), e^(k2 x), . . . , e^(kn x), with ki ≠ kj, ∀i ≠ j, ki ∈ R, are


linearly independent on the interval (a, b).

Example 3.19 The functions 1, x, x2 , . . . , xn are linearly independent on the inter-


val (a, b).

Example 3.20 The functions e^(kx), x e^(kx), x² e^(kx), . . . , x^p e^(kx), with k ∈ R and p ∈ N,
are linearly independent on the interval (a, b).

It is not difficult to see that the following theorem holds true.

Theorem 3.21 If the functions y1 (x), y2 (x), . . . , yn (x) are linearly dependent on the
interval (a, b) and have derivatives up to the order (n − 1), then the determinant

y1 (x) y2 (x) ........................ yn (x)
0
y (x) y 0 (x) ........................ y 0 (x)
1 2 n
W (x) ≡ W (y1 , . . . , yn ) =
...........................................................

(n−1) (n−1) (n−1)
y1 (x) y2 (x) ...... yn (x)

called the Wronskian, is identically zero.

Remark 3.22 If the linearly independent functions y1 , . . . , yn are solutions of the


homogeneous linear equation (3.26) with continuous coefficients p1 , . . . , pn on (a, b),
then the Wronskian W (x) = W (y1 , . . . , yn ) cannot vanish at any point x ∈ (a, b).

Definition 3.23 A set of n linearly independent solutions y1 , . . . , yn of the homo-


geneous equation (3.26), with continuous coefficients p1 , . . . , pn , is called a funda-
mental system of solutions of this equation.

It is not difficult to see that the following theorem holds true:

Theorem 3.24 The general solution of the homogeneous linear equation

y (n) (x) + p1 (x)y (n−1) (x) + · · · + pn (x)y(x) = 0 (3.30)

with continuous coefficients pi on (a, b) is the linear combination

y(x) = C1 y1 (x) + · · · + Cn yn (x) (3.31)

of n linearly independent solutions y1 , . . . , yn on (a, b), with n arbitrary coefficients


C1 , . . . , C n .

Remark 3.25 The maximum number of linearly independent solutions of a homo-


geneous linear equation with continuous coefficients is equal to its order.

If y1 , . . . , yn is a fundamental set of solutions for (3.30), it is not difficult to prove
that we get the following formula, called Ostrogradsky-Liouville-Abel’s formula:
W(x) = W(x0) exp( −∫_{x0}^{x} p1(t) dt ),

for a given x0 ∈ (a, b).
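As a concrete numerical illustration (an example not taken from the text), consider the constant-coefficient equation y'' − 3y' + 2y = 0, so p1 = −3, with the fundamental system y1 = e^x, y2 = e^(2x); the formula then predicts W(x) = W(x0) e^(3(x − x0)).

```python
import math
# Wronskian of y1 = exp(x), y2 = exp(2x) for y'' - 3y' + 2y = 0.
def W(x):
    y1, dy1 = math.exp(x), math.exp(x)
    y2, dy2 = math.exp(2*x), 2*math.exp(2*x)
    return y1*dy2 - y2*dy1          # here W(x) = exp(3x)

x0 = 0.0
for x in (0.0, 0.5, 2.0):
    predicted = W(x0) * math.exp(3*(x - x0))   # exp(-∫ p1 dt) with p1 = -3
    assert abs(W(x) - predicted) <= 1e-9 * predicted
```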


Let us consider now the nonhomogeneous linear equation

y (n) (x) + p1 (x) y (n−1) (x) + · · · + pn (x) y(x) = f (x). (3.32)

Using the linearity of L, we easily get the following result (see [15]):

Theorem 3.26 (i) If yhom is a solution of the homogeneous linear equation (3.30)
and yp is a solution of the nonhomogeneous linear equation (3.32), then yhom + yp
is also a solution of equation (3.32).
(ii) If yi , i = 1, 2, . . . , m, are solutions of the equations

L[y] = fi (x), i = 1, 2, . . . , m,
then y = α1 y1 + · · · + αm ym, where αi are given constants, is a solution of the equation

L(y) = α1 f1(x) + · · · + αm fm(x) (the principle of superposition).

(iii) If the equation L(y) = U (x) + iV (x), with real coefficients pi (x) and real
U and V has a complex solution y(x) = u(x) + i v(x), then the real part u and the
imaginary part v are, respectively, solutions of the equations

L(y) = U (x), L(y) = V (x).

Hence, if {y1 , y2 , . . . , yn } is a fundamental system of solutions for the homogeneous


linear equation

y (n) (x) + p1 (x) y (n−1) (x) + · · · + pn (x) y(x) = 0, x ∈ (a, b), (3.33)

and C1 , C2 , . . . , Cn are arbitrary constants, we are led to the following result.



Theorem 3.27 The general solution on the interval (a, b) of the nonhomogeneous
linear equation

y (n) (x) + p1 (x) y (n−1) (x) + · · · + pn (x) y(x) = f (x) (3.34)

with continuous coefficients pi and continuous right-hand side f on (a, b) is equal to


Pn
the sum of the general solution Ci yi of the corresponding homogeneous equation
i=1

y (n) (x) + p1 (x) y (n−1) (x) + · · · + pn (x) y(x) = 0 (3.35)

and of some particular solution yp of the nonhomogeneous equation (3.34), i.e.


y(x) = C1 y1(x) + · · · + Cn yn(x) + yp(x).   (3.36)

Hence, we shall be interested in methods of finding particular solutions of equation


(3.34). We shall deal here only with the case of linear differential equations with
constant coefficients. For the general case, see [15].
The general form of a linear differential equation of the order n, with constant
coefficients, is the following one:

a0 y (n) (x) + a1 y (n−1) (x) + · · · + an−1 y 0 (x) + an y(x) = f (x), x ∈ (a, b), (3.37)

where the function f is a continuous function defined on the interval (a, b) and
ai , i = 0, 1, . . . , n are real constant coefficients.
If the right-hand side of equation (3.37) is identically zero, the linear equation
(3.37) is called homogeneous. Thus, its general form is

a0 y (n) (x) + a1 y (n−1) (x) + · · · + an−1 y 0 (x) + an y(x) = 0, x ∈ (a, b). (3.38)

Let us deal first with the homogeneous case. The first method of integrating
linear ordinary differential equations with constant coefficients is due to Euler. He
thought of solving a linear homogeneous differential equation with constant coeffi-
cients of the form (3.38) by looking for solutions of the form y = erx , where r is a
constant to be determined. If y = erx , then

y' = r e^(rx), y'' = r² e^(rx), . . . , y^(n) = r^n e^(rx).

Substituting these derivatives into (3.38), we get

e^(rx) (a0 r^n + a1 r^(n−1) + · · · + an−1 r + an) = 0,




which implies that

a0 rn + a1 rn−1 + · · · + an−1 r + an = 0. (3.39)

Thus, y = e^(rx) is a solution of (3.38) if and only if r satisfies the equation (3.39),
called the characteristic equation of the differential equation (3.38). The roots of
this characteristic equation, called characteristic values or eigenvalues, will reveal
to us the nature of the solutions of (3.38). Several cases must be considered.
Case 1. If all the roots r1 , r2 , . . . , rn of equation (3.39) are real and distinct, then

y1 = er1 x , y2 = er2 x , . . . , yn = ern x , (3.40)

are n linearly independent solutions of equation (3.38), i.e. they form a fundamental
system of solutions. So, in this case, the general solution of (3.38) is

y = C1 er1 x + C2 er2 x + · · · + Cn ern x , (3.41)

where C1 , C2 , . . . , Cn are arbitrary constants.


Case 2. Let us suppose that equation (3.39) has real roots ri , with multiplicity αi ,
i.e. the characteristic polynomial is of the form P(r) = a0 (r − r1)^(α1) · · · (r − rk)^(αk),
with α1 + · · · + αk = n. Then, it turns out that in this case, corresponding to the
root ri , of multiplicity αi , the functions

e^(ri x), x e^(ri x), . . . , x^(αi − 1) e^(ri x)

are solutions of equation (3.38), forming a linearly independent system on any in-
terval (a, b). So, putting together all the solutions corresponding to the roots ri of multiplicity αi, with i = 1, . . . , k, we get the needed fundamental system of solutions of equation (3.38):

e^(r1 x), xe^(r1 x), . . . , x^(α1 − 1) e^(r1 x), . . . , e^(rk x), xe^(rk x), . . . , x^(αk − 1) e^(rk x)

and the general solution will be their linear combination with n arbitrary real con-
stants Ci , i = 1, . . . , n.
Case 3. If the equation (3.39) with real coefficients has a complex root α+iβ, β > 0,
then among the remaining roots there must be its conjugate root α − iβ. For such a
pair of complex eigenvalues, the two corresponding solutions of differential equation
(3.38) are e^((α+iβ)x), e^((α−iβ)x). Then, the function

(1/2)( e^((α+iβ)x) + e^((α−iβ)x) ) = e^(αx) cos(βx) = Re( e^((α+iβ)x) ),   (3.42)

as well as the function


(1/(2i))( e^((α+iβ)x) − e^((α−iβ)x) ) = e^(αx) sin(βx) = Im( e^((α+iβ)x) )   (3.43)
are solutions of equation (3.38). We have used here Euler’s formula,

e^(α+iβ) = e^α (cos β + i sin β).

It is not difficult to prove that the solutions

y1 = e^(αx) cos(βx), y2 = e^(αx) sin(βx),

which are real functions, are linearly independent over any interval (a, b).
Reasoning in the same manner for each root ri of equation (3.39), we get the
needed fundamental system of solutions of equation (3.38), consisting of n linearly
independent functions y1 , . . . , yn and the general solution will be their linear combi-
nation with n arbitrary real constants Ci , i = 1, . . . , n.
Case 4. If the equation (3.39) has a complex root α + iβ of multiplicity m, then
α − iβ is also a root of multiplicity m. For such complex eigenvalues, using again
Euler’s formula, we get the 2m corresponding solutions of the differential equation
(3.38) as being of the following form:
e^(αx) cos(βx), xe^(αx) cos(βx), . . . , x^(m−1) e^(αx) cos(βx),
e^(αx) sin(βx), xe^(αx) sin(βx), . . . , x^(m−1) e^(αx) sin(βx).   (3.44)

It is not difficult to prove that these functions form a linearly independent system
y1 , . . . , y2m of solutions of equation (3.38) over any interval (a, b).
Reasoning in the same manner for each root ri of equation (3.39), we get the
needed fundamental system of solutions of equation (3.38), consisting of n linearly
independent functions y1 , . . . , yn and the general solution will be their linear combi-
nation with n arbitrary real constants Ci , i = 1, . . . , n.

Exercise 3.28 Solve the linear equation y 00 − 5y 0 + 4y = 0.

Solution. The characteristic equation is r2 −5r +4 = 0, with simple real roots r1 = 4


and r2 = 1. Therefore, the general solution is y = C1 e4x + C2 ex , where C1 , C2 are
arbitrary constants.

Exercise 3.29 Solve the equation y 000 + 3y 00 + 3y 0 + y = 0.



Solution. The associated characteristic equation r3 + 3r2 + 3r + 1 = 0 has the triple


real root r = −1. So, the general solution is y = e−x (C1 + C2 x + C3 x²), with Ci ∈ R, i = 1, 2, 3.

Exercise 3.30 Find the general solution of the equation y 00 − 4y 0 + 5y = 0.

Solution. The characteristic equation r2 − 4r + 5 = 0 has the conjugate complex


roots r = 2 ± i. So, a fundamental system of solutions is given by y1 = e2x cos x,
y2 = e2x sin x and the general solution is y(x) = C1 e2x cos x + C2 e2x sin x, with
C1 , C2 ∈ R.

Exercise 3.31 Find the general solution of the equation y (4) −4y 000 +8y 00 −8y 0 +4y =
0.

Solution. The characteristic equation is r4 −4r3 +8r2 −8r+4 = 0, i.e. (r2 −2r+2)2 =
0. Thus, r = 1 ± i are complex roots of multiplicity 2 and the fundamental system
of solutions is
y1 = ex cos x, y2 = xex cos x, y3 = ex sin x, y4 = xex sin x.
The general solution is
y = C1 ex cos x + C2 xex cos x + C3 ex sin x + C4 xex sin x, Ci ∈ R, i = 1, 4.

Exercise 3.32 Solve the equation


y^(4) + 2y'' + y = 0.

Solution. The characteristic equation is


r4 + 2r2 + 1 = 0,
i.e.
(r2 + 1)2 = 0.
Hence, r = ±i are complex roots of multiplicity 2. Therefore, the fundamental
system of solutions is
y1 = cos x, y2 = x cos x, y3 = sin x, y4 = x sin x
and the general solution is
y = C1 cos x + C2 x cos x + C3 sin x + C4 x sin x, Ci ∈ R.
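A spot check (not part of the original text): the member x sin x of this fundamental system, produced by the multiplicity of the roots r = ±i, indeed solves the equation.

```python
import math
# Verify that y(x) = x*sin(x) solves y'''' + 2*y'' + y = 0.
def y(x):   return x * math.sin(x)
def d2y(x): return 2*math.cos(x) - x*math.sin(x)
def d4y(x): return -4*math.cos(x) + x*math.sin(x)

for x in (-2.0, 0.0, 1.0, 3.0):
    assert abs(d4y(x) + 2*d2y(x) + y(x)) < 1e-12
```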

Let us come back now to the nonhomogeneous case.



Theorem 3.33 The general solution on the interval (a, b) of the nonhomogeneous
linear equation (3.37) with constant coefficients ai and continuous right-hand side f
is equal to the sum of the general solution C1 y1 + · · · + Cn yn of the corresponding homogeneous equation (3.38) and of some particular solution yp of the nonhomogeneous equation (3.37), i.e.

y(x) = C1 y1(x) + · · · + Cn yn(x) + yp(x).   (3.45)

Another method for solving (3.37) is the general method of variation of para-
meters. More precisely, if we know the general solution of equation (3.38),

y(x) = C1 y1 (x) + · · · + Cn yn (x), (3.46)

where y1 , . . . , yn is a fundamental system of solutions, then we shall look for the


general solution of equation (3.37) as being of the form

y(x) = C1 (x) y1 (x) + · · · + Cn (x) yn (x), (3.47)

with the functions C1 (x), . . . , Cn (x) determined from the following system:

C1'(x) y1(x) + C2'(x) y2(x) + · · · + Cn'(x) yn(x) = 0,
C1'(x) y1'(x) + C2'(x) y2'(x) + · · · + Cn'(x) yn'(x) = 0,
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
C1'(x) y1^(n−1)(x) + C2'(x) y2^(n−1)(x) + · · · + Cn'(x) yn^(n−1)(x) = f(x).   (3.48)

This system of n linear equations with a nonzero determinant (the determinant is the
Wronskian of the fundamental system of solutions y1 , . . . , yn ) has a unique solution

Ci'(x) = ϕi(x), i = 1, 2, . . . , n,   (3.49)

where ϕi are continuous functions on (a, b). Thus,


Ci(x) = ∫ ϕi(x) dx + Ci, Ci ∈ R, i = 1, 2, . . . , n.   (3.50)

Consequently, the general solution of equation (3.37) is


y(x) = ( ∫ϕ1(x) dx + C1 ) y1(x) + ( ∫ϕ2(x) dx + C2 ) y2(x) + · · · + ( ∫ϕn(x) dx + Cn ) yn(x), C1, . . . , Cn ∈ R.   (3.51)

For a Cauchy problem, using the initial conditions, we determine the n unknown
constants Ci and we obtain the unique solution of our initial-value problem.

Exercise 3.34 Solve the nonhomogeneous equation y 00 − y = x2 .

Solution. The characteristic equation is r2 − 1 = 0, with real roots r1 = 1 and


r2 = −1. Therefore, the general solution of the homogeneous equation is

yhom = C1 ex + C2 e−x , C1 , C2 ∈ R.

Following the method of variation of parameters, let us look for the general solution
of the given nonhomogeneous equation in the form y(x) = C1 (x) ex + C2 (x) e−x .
Solving the linear system
C1'(x) e^x + C2'(x) e^(−x) = 0,
C1'(x) e^x − C2'(x) e^(−x) = x²,

we get
C1'(x) = (x²/2) e^(−x), C2'(x) = −(x²/2) e^x.
By integration, we have
C1(x) = (−x²/2 − x − 1) e^(−x) + K1, C2(x) = (−x²/2 + x − 1) e^x + K2, K1, K2 ∈ R.
Hence, the general solution of our initial equation is

y(x) = K1 ex + K2 e−x − x2 − 2, K1 , K2 ∈ R.
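A quick check (not part of the original text) of the particular part produced by variation of parameters:

```python
# Verify that y_p(x) = -x**2 - 2 satisfies y'' - y = x**2
# (y_p'' = -2 is constant).
def yp(x): return -x**2 - 2
d2yp = -2.0

for x in (-1.0, 0.0, 2.0, 10.0):
    assert abs(d2yp - yp(x) - x**2) < 1e-12
```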

Exercise 3.35 Find the general solution of the nonhomogeneous equation


y'' + 9y = 3x.

Solution. The characteristic equation

r2 + 9 = 0,

has the complex roots


r = ±3i.

Thus, the general solution of the homogeneous equation is

yhom = C1 cos 3x + C2 sin 3x, C1 , C2 ∈ R.

Following the method of variation of parameters, let us look for the general solution
of the given nonhomogeneous equation in the form

y(x) = C1 (x) cos 3x + C2 (x) sin 3x.

Solving the linear system


 0 0
 C1 (x) cos 3x + C2 (x) sin 3x = 0,
0 0
−3C1 (x) sin 3x + 3C2 (x) cos 3x = 3x,

we get

C1'(x) = −x sin 3x,
C2'(x) = x cos 3x.

By integration, we have
C1(x) = (x/3) cos 3x − (1/9) sin 3x + K1,
C2(x) = (x/3) sin 3x + (1/9) cos 3x + K2, K1, K2 ∈ R.
Hence, the general solution of our initial equation is
y(x) = K1 cos 3x + K2 sin 3x + x/3, K1, K2 ∈ R.
For the case of linear equations with constant coefficients, there are some im-
portant particular cases of right-hand sides of equation (3.37) for which a particular
solution can be found easily. We shall mention them here briefly.
Case a). Let us consider the case in which the right-hand member of equation
(3.37) is a polynomial of degree s in the variable x, with real constant coefficients,
i.e. f(x) = A0 x^s + A1 x^(s−1) + · · · + As.
If an ≠ 0, then the required particular solution may be sought as being of the same
form as the right-hand side, i.e.

yp = B0 x^s + B1 x^(s−1) + · · · + Bs.   (3.52)

If an = an−1 = · · · = an−k+1 = 0, but an−k ≠ 0, then the particular solution may be sought as being of the form

yp = x^k (B0 x^s + B1 x^(s−1) + · · · + Bs).   (3.53)

Case b). Let us consider that the right-hand member of equation (3.37) is of the
form f(x) = e^(px) (A0 x^s + A1 x^(s−1) + · · · + As), where p and Ai, i = 0, . . . , s, are real constants.
If p is not a root of the characteristic equation associated to (3.37), then the par-
ticular solution can be sought as being of the same form as the right-hand side,
i.e.
yp = e^(px) (B0 x^s + B1 x^(s−1) + · · · + Bs).   (3.54)

If p is a root of multiplicity m of the characteristic equation associated to (3.37),


then the particular solution has to be sought as being of the form

yp = x^m e^(px) (B0 x^s + B1 x^(s−1) + · · · + Bs).   (3.55)

Case c). Let us consider that the right-hand member of equation (3.37) is of the
form f(x) = e^(px) [P0(x) cos(qx) + Q0(x) sin(qx)], where p and q are real constants,
P0 and Q0 are polynomials in x, with real coefficients.
If p ± iq are not roots of the characteristic equation associated to (3.37), then the
particular solution can be found as being of the form

yp = e^(px) [P̄(x) cos(qx) + Q̄(x) sin(qx)],   (3.56)

where P̄(x), Q̄(x) are polynomials in x such that

max( deg P̄(x), deg Q̄(x) ) ≤ max( deg P0(x), deg Q0(x) ).   (3.57)

If p ± iq are roots of multiplicity m of the characteristic equation associated to (3.37), then a particular solution can be found in the form

yp = x^m e^(px) [P̄(x) cos(qx) + Q̄(x) sin(qx)].   (3.58)

Exercise 3.36 Solve the nonhomogeneous equation


y'' + y = x² + 2x.

Solution. The characteristic equation is


r2 + 1 = 0,
with the complex roots
r = ±i.
Therefore, the general solution of the homogeneous equation is
yhom = C1 cos x + C2 sin x, C1 , C2 ∈ R.
Since a2 6= 0, looking for a particular solution of the nonhomogeneous equation of
the form
yp = B0 x² + B1 x + B2,
we get
yp = x2 + 2x − 2.
Hence, the general solution of our nonhomogeneous equation is
y = C1 cos x + C2 sin x + x2 + 2x − 2, C1 , C2 ∈ R.

Exercise 3.37 Solve the nonhomogeneous equation


y'' + y' = 2x + 3.

Solution. The characteristic equation is


r2 + r = 0,
with the real distinct roots
r1 = 0, r2 = −1.
Therefore, the general solution of the homogeneous equation is
yhom = C1 + C2 e−x , C1 , C2 ∈ R.
Since a2 = 0, looking for a particular solution of the nonhomogeneous equation of
the form
yp = x(B0 x + B1 ),
we get
yp = x(x + 1).
Hence, the general solution of our nonhomogeneous equation is
y = C1 + C2 e−x + x(x + 1), C1 , C2 ∈ R.

Exercise 3.38 Integrate the nonhomogeneous equation


y'' + y = e^x (2x + 1).

Solution. The characteristic equation is

r2 + 1 = 0,

with the complex roots


r = ±i.
Therefore, the general solution of the homogeneous equation is

yhom = C1 cos x + C2 sin x, C1 , C2 ∈ R.

Since 1 is not a characteristic root, looking for a particular solution of the nonho-
mogeneous equation of the form

yp = ex (B0 x + B1 ),

we get
yp = e^x (x − 1/2).
Hence, the general solution of our nonhomogeneous equation is
y = C1 cos x + C2 sin x + e^x (x − 1/2), C1, C2 ∈ R.
Exercise 3.39 Solve the nonhomogeneous equation

y'' − y = e^x (x² − 1).

Solution. The characteristic equation is

r2 − 1 = 0,

with the roots r1 = −1, r2 = 1. Therefore, the general solution of the homogeneous equation is

yhom = C1 e^(−x) + C2 e^x, C1, C2 ∈ R.

Since 1 is a simple characteristic root, looking for a particular solution of the non-
homogeneous equation of the form

yp = x e^x (B0 x^2 + B1 x + B2 ),

we get
yp = x e^x (x^2/6 − x/4 − 1/4).
Hence, the general solution of our nonhomogeneous equation is
y = C1 e^x + C2 e^{−x} + x e^x (x^2/6 − x/4 − 1/4), C1 , C2 ∈ R.
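Particular solutions obtained by the method of undetermined coefficients are easy to sanity-check numerically. The short Python sketch below (an illustration added here; the helper name `ode_residual` is ours) approximates y'' by a central difference and checks that the yp found above satisfies y'' − y = e^x (x^2 − 1) up to discretization error:

```python
import math

def ode_residual(y, x, h=1e-5):
    # central-difference approximation of y''(x), minus y(x) and the right-hand side
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp - y(x) - math.exp(x) * (x ** 2 - 1)

yp = lambda x: x * math.exp(x) * (x ** 2 / 6 - x / 4 - 1 / 4)

for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(ode_residual(yp, x)) < 1e-3
```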
Exercise 3.40 Integrate the nonhomogeneous equation
y'' + 4y' + 4y = cos 2x.

Solution. The characteristic equation is

r2 + 4r + 4 = 0,

with the double real root


r = −2.
Therefore, the general solution of the homogeneous equation is

yhom = C1 e−2x + C2 xe−2x , C1 , C2 ∈ R.

Since ±2i are not characteristic roots, looking for a particular solution of the non-
homogeneous equation of the form

yp = A cos 2x + B sin 2x,

we get
yp = (1/8) sin 2x.
Hence, the general solution of our nonhomogeneous equation is
y = C1 e−2x + C2 xe−2x + (1/8) sin 2x, C1 , C2 ∈ R.
Exercise 3.41 Find the general solution of the nonhomogeneous equation
y'' + 4y = cos 2x.

Solution. The characteristic equation is

r2 + 4 = 0,

with the complex roots


r = ±2i.
Therefore, the general solution of the homogeneous equation is

yhom = C1 cos 2x + C2 sin 2x, C1 , C2 ∈ R.

Since ±2i are simple characteristic roots, looking for a particular solution of the
nonhomogeneous equation of the form

yp = x(A cos 2x + B sin 2x),

we get
yp = (x/4) sin 2x.
Hence, the general solution of our nonhomogeneous equation is
y = C1 cos 2x + C2 sin 2x + (x/4) sin 2x, C1 , C2 ∈ R.
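The same kind of numerical check works in the resonant case, where the particular solution carries the extra factor x. The following Python sketch (an added illustration; the function names are ours) confirms that yp = (x/4) sin 2x satisfies y'' + 4y = cos 2x:

```python
import math

def residual(y, x, h=1e-5):
    # central-difference y''(x), plus 4 y(x), minus cos 2x
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp + 4 * y(x) - math.cos(2 * x)

yp = lambda x: x / 4 * math.sin(2 * x)

for x in (0.0, 0.7, 1.5, 3.0):
    assert abs(residual(yp, x)) < 1e-3
```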
Exercise 3.42 Find the general solution of the nonhomogeneous equation
y'' + 2y' + 2y = e^{−x} (x cos x + 3 sin x).

Solution. The characteristic equation is

r2 + 2r + 2 = 0,

with the complex roots


r = −1 ± i.
Therefore, the general solution of the homogeneous equation is

yhom = C1 e−x cos x + C2 e−x sin x, C1 , C2 ∈ R.

Since −1 ± i are simple characteristic roots, looking for a particular solution of the
nonhomogeneous equation of the form

yp = xe−x [(A0 x + A1 ) cos x + (B0 x + B1 ) sin x],



we get
yp = x e^{−x} (−(5/4) cos x + (1/4) x sin x).
Hence, the general solution of our nonhomogeneous equation is
y = C1 e^{−x} cos x + C2 e^{−x} sin x + x e^{−x} (−(5/4) cos x + (1/4) x sin x), C1 , C2 ∈ R.
Exercise 3.43 Find the law of motion of a material point of mass m, which is
attracted to a fixed center O with a force proportional to the distance x of the point
to the attracting center O (an elastic force), ignoring the resistance of the medium.

Solution. According to Newton’s law, we have


m d^2x/dt^2 = −k x,
where k > 0 is the proportionality factor. Let us note that the minus sign indicates
that the force acts in the direction opposite to the displacement x. Therefore, we get
d^2x/dt^2 + ω^2 x = 0, ω^2 = k/m.
So, we have to solve a second-order linear equation with constant coefficients. Since
the associated characteristic equation is r2 +ω 2 = 0, with the complex roots r = ±ω i,
the general solution of our equation is x(t) = C1 cos (ωt) + C2 sin (ωt), C1 , C2 ∈ R.
If we take C1 = A sin ϕ, C2 = A cos ϕ, where A and ϕ are arbitrary real constants,
we have x(t) = A sin (ωt + ϕ). This means that the material point performs periodic
harmonic oscillations about the attracting center O, with amplitude A and initial
phase ϕ.
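The passage from the pair (C1 , C2 ) to the amplitude–phase form can be checked numerically. The following Python sketch (an added illustration, with arbitrarily chosen constants) verifies the identity C1 cos(ωt) + C2 sin(ωt) = A sin(ωt + ϕ), where A = √(C1^2 + C2^2), C1 = A sin ϕ and C2 = A cos ϕ:

```python
import math

# pick arbitrary constants C1, C2 and recover the amplitude A and phase phi
C1, C2, omega = 0.3, -1.2, 2.0
A = math.hypot(C1, C2)      # A = sqrt(C1^2 + C2^2)
phi = math.atan2(C1, C2)    # so that C1 = A sin(phi) and C2 = A cos(phi)

for t in (0.0, 0.4, 1.1, 2.5):
    lhs = C1 * math.cos(omega * t) + C2 * math.sin(omega * t)
    rhs = A * math.sin(omega * t + phi)
    assert abs(lhs - rhs) < 1e-12
```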

Exercise 3.44 Determine the oscillations of a material point of mass m, subject to


the action of an elastic force whose magnitude is proportional to the deviation x of
the point from the equilibrium position, assuming that another perturbing periodic
force F = F0 cos λt is acting on the material point and the resistance of the medium
is neglected.

Solution. According to Newton’s law, we have


m d^2x/dt^2 = −k x + F0 cos λt,

where k > 0 is the proportionality factor. If we denote ω = √(k/m) and a = F0/m,
we obtain

d^2x/dt^2 + ω^2 x = a cos (λt). (3.59)
So, we have to solve a nonhomogeneous second-order linear equation with constant
coefficients. Since the characteristic equation associated to the corresponding ho-
mogeneous equation is r2 + ω 2 = 0, with the complex roots r = ±ω i, the general
solution of the associated homogeneous equation is

xhom (t) = C1 cos (ωt) + C2 sin (ωt), C1 , C2 ∈ R.

Now, we have to find a particular solution of our nonhomogeneous equation (3.59).


We have to distinguish between the following two cases:
Case 1. If λ ≠ ω, i.e. the frequency of the external force is different from the
frequency of the free oscillations, a particular solution of equation is to be sought as
being of the form xp (t) = A cos (λt) + B sin (λt), where A and B are real coefficients
to be determined. Substituting in (3.59), we get A = a/(ω^2 − λ^2), B = 0. Therefore,

xp (t) = (a/(ω^2 − λ^2)) cos (λt)
and the general solution of equation (3.59) is
x(t) = C1 cos (ωt) + C2 sin (ωt) + (a/(ω^2 − λ^2)) cos (λt), C1 , C2 ∈ R.
We remark that this solution is the superposition of two bounded oscillations with
different frequencies.
Moreover, if we impose the natural initial conditions x(0) = 0 and x'(0) = 0, we
get C2 = 0 and C1 = −a/(ω^2 − λ^2). Thus,

x(t) = (a/(ω^2 − λ^2)) (cos(λt) − cos(ωt)).
In fact, we have

x(t) = (2a/(ω^2 − λ^2)) sin(((ω − λ)/2) t) sin(((ω + λ)/2) t).

Thus, our solution is the product of two oscillations with the distinct frequencies
(ω − λ)/2 and (ω + λ)/2; their superposition produces beats.
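The product form above follows from the identity cos α − cos β = 2 sin((β − α)/2) sin((β + α)/2). A quick Python sketch (an added illustration, with arbitrary frequencies) confirms it:

```python
import math

# cos(λt) − cos(ωt) = 2 sin(((ω − λ)/2) t) sin(((ω + λ)/2) t)
lam, omega = 9.0, 10.0
for t in (0.0, 0.3, 1.7, 6.2):
    lhs = math.cos(lam * t) - math.cos(omega * t)
    rhs = 2 * math.sin((omega - lam) / 2 * t) * math.sin((omega + lam) / 2 * t)
    assert abs(lhs - rhs) < 1e-12
```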

Case 2. If λ = ω, i.e. the frequency of the external force is equal to the frequency of
the free oscillations, a particular solution of equation (3.59) is to be sought as being
of the form xp (t) = t (A cos (ωt) + B sin (ωt)), where A and B are real coefficients to
be determined. We obtain A = 0 and B = a/(2ω). Therefore,

xp (t) = (a t/(2ω)) sin (ωt)

and the general solution of equation (3.59) is

x(t) = C1 cos (ωt) + C2 sin (ωt) + (a t/(2ω)) sin(ωt), C1 , C2 ∈ R.

We remark that in this case the amplitude of the oscillations of our solution increases
infinitely when the time t goes to infinity. This phenomenon, called resonance, could
be very dangerous and could even lead to the destruction of the elastic system.
However, we point out that resonance could be sometimes a friendly phenomenon
(see [7]).
If we impose the initial conditions x(0) = 0 and x'(0) = 0, we get C1 = 0 and
C2 = 0. Therefore,

x(t) = (a t/(2ω)) sin (ωt).

The fact that the solution grows to infinity at the resonant frequency is highly
idealized, since, in practice, any physical system is damped due to friction, air
resistance, etc.
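Although the resonant solution is idealized, it is straightforward to confirm that it really solves the equation. The Python sketch below (an added illustration) checks that xp (t) = (a t/(2ω)) sin (ωt) satisfies x'' + ω^2 x = a cos (ωt), approximating x'' by a central difference:

```python
import math

a, omega = 1.5, 3.0
xp = lambda t: a * t / (2 * omega) * math.sin(omega * t)

def second_derivative(f, t, h=1e-5):
    # central-difference approximation of f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

for t in (0.0, 0.5, 2.0, 7.0):
    residual = second_derivative(xp, t) + omega ** 2 * xp(t) - a * math.cos(omega * t)
    assert abs(residual) < 1e-3
```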

Exercise 3.45 Solve the following differential equation


m d^2x/dt^2 + γ x' + k x = F0 cos (λt), (3.60)
where γ > 0 measures the damping force.

Solution. The characteristic equation associated to the corresponding homogeneous


equation is m r2 + γ r + k = 0. We have to distinguish between three cases.
Case 1. If

γ^2/(4m^2) − k/m < 0,

then the characteristic equation has the complex roots

r1,2 = −γ/(2m) ± i √(k/m − γ^2/(4m^2)).

Therefore, the general solution of the homogeneous equation is

xhom (t) = C1 e−αt cos (βt) + C2 e−αt sin (βt), C1 , C2 ∈ R,

where

α = γ/(2m), β = √(k/m − γ^2/(4m^2)).
The particular solution of equation (3.60) is

xp (t) = [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt),

where
γ0 = γ/m, ω = √(k/m), a = F0/m.
Hence, the general solution of equation (3.60) is

x(t) = C1 e^{−αt} cos (βt) + C2 e^{−αt} sin (βt) +
+ [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt).
Let us notice that in this case the presence of the damping prevents the solution
from blowing up when ω = λ.
Case 2. If

γ^2/(4m^2) − k/m > 0,

the characteristic equation has the real roots

r1,2 = −γ/(2m) ± √(γ^2/(4m^2) − k/m).
Therefore, the general solution of the homogeneous equation is

xhom (t) = C1 er1 t + C2 er2 t , C1 , C2 ∈ R.

The particular solution of equation (3.60) is

xp (t) = [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt).

Hence, the general solution is

x(t) = C1 e^{r1 t} + C2 e^{r2 t} + [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt).

Case 3. If

γ^2/(4m^2) − k/m = 0,

the characteristic equation has the double real root

r1,2 = −γ/(2m).
Therefore, the general solution of the homogeneous equation is
xhom (t) = e^{−γt/(2m)} (C1 + C2 t), C1 , C2 ∈ R.

Since the particular solution is

xp (t) = [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt),

the general solution of equation (3.60) is

x(t) = e^{−γt/(2m)} (C1 + C2 t) + [a(ω^2 − λ^2)/((ω^2 − λ^2)^2 + γ0^2 λ^2)] cos (λt) + [γ0 λ a/((ω^2 − λ^2)^2 + γ0^2 λ^2)] sin (λt).
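In all three cases the steady-state part xp is the same, and its coefficients can be verified exactly, since xp'' = −λ^2 xp. The following Python sketch (an added illustration, with arbitrarily chosen positive parameters) checks that m xp'' + γ xp' + k xp = F0 cos (λt) to machine precision:

```python
import math

m, gamma, k = 2.0, 0.8, 5.0   # arbitrary positive parameters
F0, lam = 1.3, 1.1
g0, omega, a = gamma / m, math.sqrt(k / m), F0 / m

D = (omega ** 2 - lam ** 2) ** 2 + g0 ** 2 * lam ** 2
A = a * (omega ** 2 - lam ** 2) / D
B = g0 * lam * a / D

for t in (0.0, 0.9, 3.3):
    xp = A * math.cos(lam * t) + B * math.sin(lam * t)
    dxp = -A * lam * math.sin(lam * t) + B * lam * math.cos(lam * t)
    d2xp = -lam ** 2 * xp          # analytic second derivative
    residual = m * d2xp + gamma * dxp + k * xp - F0 * math.cos(lam * t)
    assert abs(residual) < 1e-12
```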

Problems on Chapter 3

Exercise 3.46 Integrate the following equation


y'' − 4y' + 13y = x + 1.

Solution. Let us consider first the corresponding homogeneous equation, i.e.


y'' − 4y' + 13y = 0.

Since the characteristic equation

r2 − 4r + 13 = 0

has the complex roots


r1,2 = 2 ± 3i,
the general solution of the associated homogeneous equation is

yhom (x) = e2x (C1 cos 3x + C2 sin 3x), C1 , C2 ∈ R.

Looking for a particular solution of the nonhomogeneous equation as being of the


form
yp = Ax + B,
we get
A = 1/13, B = 17/169.
Therefore, the general solution of the initial equation will be

y(x) = e^{2x} (C1 cos 3x + C2 sin 3x) + x/13 + 17/169, C1 , C2 ∈ R.
Exercise 3.47 Solve the following Cauchy problem:
 00
 y = x2 + y,
0
y(0) = −2, y (0) = 1.

Solution. Let us consider first the corresponding homogeneous equation, i.e.


y'' − y = 0.

Since the characteristic equation

r2 − 1 = 0

has the real roots


r1 = 1, r2 = −1,
the general solution of the associated homogeneous equation is

yhom (x) = C1 ex + C2 e−x , C1 , C2 ∈ R.

Looking for a particular solution of the nonhomogeneous equation as being of the


form
yp = Ax2 + Bx + C,

we get
A = −1, B = 0, C = −2.

Therefore, the general solution of the initial equation will be

y(x) = C1 ex + C2 e−x − x2 − 2, C1 , C2 ∈ R.
Imposing the initial conditions y(0) = −2 and y'(0) = 1, we have
C1 = 1/2, C2 = −1/2.
Hence, the solution of our IVP is

y(x) = (1/2)(e^x − e^{−x}) − x^2 − 2.
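Since (1/2)(e^x − e^{−x}) = sinh x, the solution can be checked with a few lines of Python (an added illustration using the analytic derivatives of the closed form):

```python
import math

y = lambda x: math.sinh(x) - x ** 2 - 2       # (1/2)(e^x − e^{−x}) = sinh x
yprime = lambda x: math.cosh(x) - 2 * x       # analytic first derivative
ypp = lambda x: math.sinh(x) - 2              # analytic second derivative

assert y(0) == -2 and yprime(0) == 1          # initial conditions
for x in (-2.0, 0.0, 1.3):
    assert abs(ypp(x) - (x ** 2 + y(x))) < 1e-12   # y'' = x^2 + y
```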

Exercise 3.48 Solve the following Cauchy problem:


 00 0
 y + 8y + 25y = 0,
0
y(0) = 0, y (0) = 4.

Solution. The characteristic equation is

r2 + 8r + 25 = 0.

The quadratic formula gives


r1,2 = −4 ± 3i.

Hence, the general solution of our equation is

y(t) = C1 e−4t cos(3t) + C2 e−4t sin(3t), C1 , C2 ∈ R.


Imposing the initial conditions y(0) = 0, y'(0) = 4, we get
C1 = 0, C2 = 4/3.
Therefore,
y(t) = (4/3) e^{−4t} sin(3t),
which shows that the motion is dying out in time, i.e. y → 0, as t → ∞.

Exercise 3.49 Find the solution of the following equation:


y'' + y = sin 2x + x cos 2x.

Solution. We begin by finding the general solution of the homogeneous equation,


i.e.
y'' + y = 0.
Since the characteristic equation is

r2 + 1 = 0,

with the complex roots


r = ±i,
a fundamental system of solutions for the homogeneous equation is given by

y1 = cos x, y2 = sin x.

Therefore, the general solution of the homogeneous equation is

y = C1 cos x + C2 sin x, C1 , C2 ∈ R.

Due to the special form of the right-hand side of our equation, we shall look for a
particular solution as being of the form

yp = (ax + b) cos 2x + (cx + d) sin 2x, a, b, c, d ∈ R.


Computing y' and y'', plugging into the original equation and identifying equal
powers of x, we get
a = −1/3, b = c = 0, d = 1/9.
Hence,
yp = −(x/3) cos 2x + (1/9) sin 2x
and the general solution of our initial equation is
y = C1 cos x + C2 sin x − (x/3) cos 2x + (1/9) sin 2x.
Exercise 3.50 Find the general solution of the following equation:
y^{(5)} + 5y^{(4)} − 2y''' − 10y'' + y' + 5y = 0.

Solution. The characteristic equation

r5 + 5r4 − 2r3 − 10r2 + r + 5 = 0,

i.e.
(r − 1)2 (r + 1)2 (r + 5) = 0,
has five roots: r = 1 with multiplicity 2, r = −1 with multiplicity 2 and r = −5
with multiplicity 1.
Therefore, the general solution is

y = C1 e^x + C2 x e^x + C3 e^{−x} + C4 x e^{−x} + C5 e^{−5x} , Ci ∈ R, i = 1, . . . , 5.
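The factorization of the characteristic polynomial can be confirmed by multiplying the factors back together. The Python sketch below (an added illustration, with a small helper `poly_mul` of our own) represents polynomials as coefficient lists, highest degree first:

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists, highest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# (r − 1)^2 (r + 1)^2 (r + 5)
factors = [[1, -1], [1, -1], [1, 1], [1, 1], [1, 5]]
poly = [1]
for f in factors:
    poly = poly_mul(poly, f)

# coefficients of r^5 + 5r^4 − 2r^3 − 10r^2 + r + 5
assert poly == [1, 5, -2, -10, 1, 5]
```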

Exercise 3.51 Solve the following Cauchy problem:


 00 0
 y + 2y + 2y = 0,
0
y(π/4) = 2, y (π/4) = −2.

Solution. The characteristic equation is

r2 + 2r + 2 = 0,

with the complex roots


r = −1 ± i.
Therefore, the general solution of our linear equation is

y = C1 e−x cos x + C2 e−x sin x, C1 , C2 ∈ R.

In order to find the solution of the given Cauchy problem, we have to use the initial
conditions to determine C1 and C2 .
We get
C1 = C2 = √2 e^{π/4} ,
which implies
y(x) = √2 e^{π/4} (e^{−x} cos x + e^{−x} sin x).

Exercise 3.52 Find the solution of the following Cauchy problem:


 00
 y + 9y = 0,
0
y(0) = 0, y (0) = 1.


Solution. Looking for particular solutions of the form y = e^{rx} , we obtain the char-
acteristic equation
r2 + 9 = 0,

with the complex roots


r = ±3i.

Thus, the general solution of our equation is

y = C1 cos 3x + C2 sin 3x, C1 , C2 ∈ R.

The initial conditions give

C1 = 0, C2 = 1/3.

So, the solution of the initial-value problem is

y(x) = (1/3) sin 3x.

Exercise 3.53 Solve


y'' + 3y' = 0.

Solution. Looking for particular solutions of the form y = e^{rx} , we obtain the char-
acteristic equation
r2 + 3r = 0,

with real simple roots


r1 = 0, r2 = −3.

Thus, the general solution of our equation is

y = C1 + C2 e−3x , C1 , C2 ∈ R.

Exercise 3.54 Solve the Cauchy problem



y'' − (y')^3 = 0,
y(0) = 0, y'(0) = 1, x ∈ (−1/2, 1/2).

Solution. Since in our equation the independent variable x is not involved explicitly,
setting
z(y) = y'(x),
we get

z (dz/dy − z^2 ) = 0,
z(0) = 1.

Since z ≠ 0, it is not difficult to see that the solution of this Cauchy problem is

z(y) = 1/(1 − y).

Notice that y ≠ 1. Therefore, to get the solution of our initial Cauchy value problem,
it remains to integrate the equation
dy/dx = 1/(1 − y).
We obtain
y(x) − y^2 (x)/2 = x + C, C ∈ R.
Since y(0) = 0 and x ∈ (−1/2, 1/2), the solution of the initial Cauchy problem is

y(x) = 1 − √(1 − 2x).
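The closed form is easy to check, since y' = (1 − 2x)^{−1/2} and y'' = (1 − 2x)^{−3/2} = (y')^3. In Python (an added illustration):

```python
y = lambda x: 1 - (1 - 2 * x) ** 0.5
yp = lambda x: (1 - 2 * x) ** (-0.5)     # analytic first derivative
ypp = lambda x: (1 - 2 * x) ** (-1.5)    # analytic second derivative

assert y(0) == 0 and yp(0) == 1          # initial conditions
for x in (-0.4, 0.0, 0.3, 0.45):
    assert abs(ypp(x) - yp(x) ** 3) < 1e-9   # y'' = (y')^3
```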

Exercise 3.55 Find the solution of the Cauchy problem



y y'' − (y')^2 = x^2 y^2 ,
y(0) = 1, y'(0) = 1.

Solution. Since y = 0 is not a solution of our IVP, dividing the equation by y^2 and
performing the change of variables

z(x) = y'(x)/y(x),

we get

z'(x) = x^2 ,
z(0) = 1.


It is not difficult to see that the solution of this Cauchy problem is


z(x) = x^3/3 + 1.
Therefore, to get the solution of our initial Cauchy value problem, it remains to
integrate the equation
dy/dx = y (x^3/3 + 1).
We obtain
y(x) = K e^{x^4/12 + x} , K ∈ R∗ .
Since y(0) = 1, we get K = 1.
Hence, the solution of the initial Cauchy problem is
y(x) = e^{x^4/12 + x} .

Exercise 3.56 Solve the initial value problem


 00 0

 xy + 2y + x = 1, x ∈ (0, ∞),



y(1) = 2,



 0

y (1) = 1.

Solution. Since y is missing, the change of variables


z(x) = y'(x)

leads to
x z' + 2z + x = 1.
This is a first order linear differential equation. Its resolution gives
z(x) = C/x^2 − x/3 + 1/2.
Since z(1) = 1, we get C = 5/6. Consequently, we have

z(x) = 5/(6x^2) − x/3 + 1/2.

Since y' = z, we obtain, after integration,

y(x) = K − 5/(6x) − x^2/6 + x/2.
The condition y(1) = 2 gives K = 5/2.
Therefore, we have

y(x) = 5/2 − 5/(6x) − x^2/6 + x/2.
Note that this solution is defined for x > 0.
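As a check (an added illustration), the analytic derivatives of this solution can be plugged back into the equation in a few lines of Python:

```python
y = lambda x: 5 / 2 - 5 / (6 * x) - x ** 2 / 6 + x / 2
yp = lambda x: 5 / (6 * x ** 2) - x / 3 + 1 / 2    # y'
ypp = lambda x: -5 / (3 * x ** 3) - 1 / 3          # y''

assert abs(y(1) - 2) < 1e-12 and abs(yp(1) - 1) < 1e-12   # initial conditions
for x in (0.5, 1.0, 4.0):
    assert abs(x * ypp(x) + 2 * yp(x) + x - 1) < 1e-12    # x y'' + 2y' + x = 1
```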

Exercise 3.57 Find the general solution of the equation


y'' + (y')^3 y = 0.

Solution. Since the independent variable x is missing, the change of variables


z(y) = y'(x)

leads to
z z' + z^3 y = 0.

This is a first order separable differential equation. Its resolution gives




z = 0 (stationary solution),
z(y) = 2/(y^2 + 2C1 ), C1 ∈ R (general solution).

Since z(y) = y'(x), we get, after integration, the complete integral of our equation:


y = K, K ∈ R (stationary solution),
y^3/3 + 2C1 y = 2x + C2 , C1 , C2 ∈ R (general solution).

Exercise 3.58 Find the general solution of the equation


y''' − 4y' = x + 3 cos x.

Solution. Due to the fact that the characteristic equation is

r3 − 4r = 0,

with the simple real roots

r1 = 0, r2 = 2, r3 = −2,

the general solution of the homogeneous equation is

yhom (x) = C1 + C2 e2x + C3 e−2x , C1 , C2 , C3 ∈ R.

If we split the equation into the following two equations:


y''' − 4y' = x

and
y''' − 4y' = 3 cos x,
the guessed form for the particular solution of the first equation is

yp,1 (x) = x(Ax + B),

where A and B are to be determined.


After some easy computations, we get A = −1/8 and B = 0. Therefore, we have
yp,1 (x) = −(1/8) x^2 .
The guessed form for the particular solution of the second equation is

yp,2 (x) = C cos x + D sin x,

where C and D are to be determined. After simple computations, we get C = 0 and


D = −3/5. Therefore, we have
yp,2 (x) = −(3/5) sin x.
Hence, the particular solution of the original equation is given by
yp = −(1/8) x^2 − (3/5) sin x

and, therefore, its general solution is

y = C1 + C2 e^{2x} + C3 e^{−2x} − (1/8) x^2 − (3/5) sin x.
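Since y''' − 4y' is linear, the residual of the combined particular solution can be checked directly from the analytic derivatives y' = −x/4 − (3/5) cos x and y''' = (3/5) cos x. An illustrative Python sketch (an addition, not part of the original solution):

```python
import math

yp1 = lambda x: -x / 4 - 3 / 5 * math.cos(x)   # y' of the particular solution
yp3 = lambda x: 3 / 5 * math.cos(x)            # y''' of the particular solution

for x in (-1.0, 0.0, 2.2):
    # y''' − 4y' should reproduce the right-hand side x + 3 cos x
    assert abs(yp3(x) - 4 * yp1(x) - (x + 3 * math.cos(x))) < 1e-12
```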
Chapter 4

Systems of Linear Differential


Equations of the First Order

The general form of a system of n linear differential equations of the first order is
the following one:
dy1/dx = a11 (x) y1 + a12 (x) y2 + · · · + a1n (x) yn + f1 (x),
dy2/dx = a21 (x) y1 + a22 (x) y2 + · · · + a2n (x) yn + f2 (x),
.................................................................,
dyn/dx = an1 (x) y1 + an2 (x) y2 + · · · + ann (x) yn + fn (x),        (4.1)

where aij : I ⊆ R → R, aij ∈ C 0 (I), i, j = 1, 2, . . . , n and fi : I ⊆ R → R, fi ∈


C 0 (I), i = 1, 2, . . . , n.
Let us recall that if all the functions aij and fi are continuous on the interval
I = (a, b), there exists a unique solution of system (4.1), satisfying the given initial
conditions
yi (x0 ) = yi0 , i = 1, 2, . . . , n. (4.2)
Moreover, for this particular case of linear systems, we have a global existence and
uniqueness theorem for the solution of system (4.1) which satisfies the initial condi-
tions (4.2) (see [9]).
System (4.1) can be written in the abridged vector form
dY/dx = AY + F, (4.3)


where
Y = (y1 , y2 , . . . , yn )^T , A = (aij (x)), 1 ≤ i, j ≤ n, F = (f1 , f2 , . . . , fn )^T . (4.4)

If we define the operator L by


L(Y ) = dY/dx − AY, (4.5)
then (4.1) becomes
L(Y ) = F. (4.6)
If fi ≡ 0, i = 1, 2, . . . , n, the system (4.6) is said to be homogeneous. Hence, the
general form of such a linear system is

L(Y ) = 0. (4.7)

It is not difficult to see that the operator L is linear. So, we obtain that
L(C1 Y1 + · · · + Cm Ym ) = C1 L(Y1 ) + · · · + Cm L(Ym ),

where Ci are arbitrary constants. Therefore, we get immediately the following result:
Theorem 4.1 (1) A linear combination C1 Y1 + · · · + Cm Ym , with arbitrary constant coeffi-
cients Ci , of solutions Yi of the homogeneous system (4.7) is also a solution of the
same system.
(2) If the homogeneous linear system (4.7) with real coefficients aij (x) has a
complex solution Y (x) = U (x) + iV (x), then the real part U and the imaginary part
V are also solutions of system (4.7).

Let us consider the vectors Y1 , Y2 , . . . , Yn , where

Yi = (y1i , y2i , . . . , yni )^T , (4.8)
for i = 1, 2, . . . , n.

Definition 4.2 The vectors Y1 (x), . . . , Yn (x) are called linearly dependent over
(a, b) if there exist n constants α1 , . . . , αn , at least one of which is not equal to zero,
such that
α1 Y1 (x) + · · · + αn Yn (x) ≡ 0, x ∈ (a, b). (4.9)
If the vector identity (4.9) is fulfilled only for α1 = · · · = αn = 0, then the vectors
Y1 (x), . . . , Yn (x) are called linearly independent over the interval (a, b).

It is not difficult to see that if the vectors Y1 (x), Y2 (x), . . . , Yn (x) are linearly depen-
dent on the interval (a, b), then the Wronskian W (x) ≡ W (Y1 , . . . , Yn ) is identically
zero on (a, b).

Remark 4.3 If the linearly independent vectors Y1 , . . . , Yn are solutions of the ho-
mogeneous linear system (4.7) with continuous coefficients aij on (a, b), then the
Wronskian W (x) = W (Y1 , . . . , Yn ) cannot vanish at any point x ∈ (a, b).

Definition 4.4 A set of n linearly independent solutions Y1 , . . . , Yn of the homoge-


neous system (4.7) with continuous coefficients aij , is called a fundamental system
of solutions of this system.

It is not difficult to prove the following result (see, for detailed proofs, [15]).

Theorem 4.5 The general solution of the homogeneous system (4.7) with continu-
ous coefficients aij on (a, b) is the linear combination

Y (x) = C1 Y1 (x) + · · · + Cn Yn (x) (4.10)

of n linearly independent solutions Y1 , . . . , Yn on (a, b), with n arbitrary coefficients


C1 , . . . , C n .

Remark 4.6 The maximum number of linearly independent solutions of a homo-


geneous system of n equations with continuous coefficients is equal to n. In other
words, the set of all the maximal solutions of equation (4.7) is a linear space which
is isomorphic with Rn . In fact, this set is also a vector subspace of dimension n of
the infinite dimensional vector space C 1 (I, Rn ).

Theorem 4.7 (1) If Yhom is a solution of the homogeneous linear system (4.7)
and Yp is a solution of the nonhomogeneous linear system (4.6), then Yhom + Yp is
also a solution of system (4.6).

(2) If Yi , i = 1, 2, . . . , m, are solutions of the systems

L(Y ) = Fi (x), i = 1, 2, . . . , m,

where
Fi = (f1i , f2i , . . . , fni )^T , (4.11)
then Y = α1 Y1 + · · · + αm Ym , where αi are given constants, is a solution of the system

L(Y ) = α1 F1 (x) + · · · + αm Fm (x) (the principle of superposition).

(3) If the system L(Y ) = U (x) + iV (x), with real coefficients aij (x) and real
right-hand sides

U = (u1 , u2 , . . . , un )^T , V = (v1 , v2 , . . . , vn )^T , (4.12)

has a complex solution Y (x) = P (x) + iQ(x), with real P and Q, then P and Q are,
respectively, solutions of the systems

L(Y ) = U (x), L(Y ) = V (x).

Theorem 4.8 If {Y1 , Y2 , . . . , Yn } is a fundamental system of solutions for the homo-


geneous linear system L(Y ) = 0 and C1 , C2 , . . . , Cn are arbitrary constants, then the
general solution on the interval (a, b) of the nonhomogeneous linear system L(Y ) = F
with continuous coefficients aij and continuous right-hand side F on (a, b) is equal

to the sum of the general solution of the corresponding homogeneous system and of
some particular solution Yp of the nonhomogeneous system, i.e.
Y (x) = C1 Y1 (x) + · · · + Cn Yn (x) + Yp (x). (4.14)

As we have already mentioned, since finding a particular solution of a nonhomo-


geneous system is not an easy task, the general solution of such a nonhomogeneous
system can be determined using the method of variation of parameters, provided
that the general solution of the corresponding homogeneous system is known (see
[5], [9], [15]).
We shall now deal with the case of systems of n linear differential equations with
constant coefficients. The general form of such a system is the following one:
dy1/dx = a11 y1 + a12 y2 + · · · + a1n yn + f1 (x),
dy2/dx = a21 y1 + a22 y2 + · · · + a2n yn + f2 (x),
.....................................................................,
dyn/dx = an1 y1 + an2 y2 + · · · + ann yn + fn (x),        (4.15)
where aij ∈ R, i, j = 1, 2, . . . , n and fi : I ⊆ R → R, fi ∈ C 0 (I), i = 1, 2 . . . , n.
Let us recall that there exists a unique solution of system (4.15), which satisfies
the given initial conditions

yi (x0 ) = yi0 , i = 1, 2, . . . , n. (4.16)

We shall consider first the corresponding homogeneous system


dy1/dx = a11 y1 + a12 y2 + · · · + a1n yn ,
dy2/dx = a21 y1 + a22 y2 + · · · + a2n yn ,
........................................................,
dyn/dx = an1 y1 + an2 y2 + · · · + ann yn .        (4.17)
The set SA of all the maximal solutions of the homogeneous system (4.17) is a
vector subspace of dimension n of the infinite dimensional vector space C 1 (R, Rn ).
Moreover, by induction, it follows that SA ⊂ C ∞ (R, Rn ).

In order to solve (4.17), it will be enough to indicate a simple procedure for


finding a fundamental system of solutions for such homogeneous linear systems. We
shall try to look for a solution of this system as being of the following form:

Y = eλx U, (4.18)

where λ ∈ C and
U = (u1 , u2 , . . . , un )^T , U ≠ 0. (4.19)
Therefore, from (4.17) we get

(A − λI) U = 0, (4.20)

where I is the unit matrix. So, U ≠ 0 will be a solution of (4.17) if and only if

det (A − λI) = 0. (4.21)

Equation (4.21) is called the characteristic equation associated to the system (4.17),
λ is called an eigenvalue of the matrix A and U an eigenvector corresponding to λ.
It is not difficult to see that the map λ 7→ det (A − λI) = KA (λ) is a polynomial of
degree n, called the characteristic polynomial of the linear map A:

KA (λ) = (−1)^n λ^n + (−1)^{n−1} λ^{n−1} Tr(A) + · · · + det(A). (4.22)

The set of all the eigenvalues of the matrix A is called the spectrum of A:

σ(A) = {λ ∈ C | det (A − λI) = 0}. (4.23)

Also, for each λ ∈ σ(A), we shall denote by

P VA (λ) = {U ∈ Cn \ {0} | (A − λI) U = 0} (4.24)

the set of all the eigenvectors (proper vectors) corresponding to the eigenvalue λ.
Since equation (4.21) is a polynomial equation of degree n, using the fundamental
theorem of algebra, we see that (4.21) will have n solutions, not necessarily distinct.
Hence, the spectrum of A will be

σ(A) = {λ1 , . . . , λn }. (4.25)



Also, it is not difficult to see that if λ ∈ σ(A) and U ∈ P VA (λ), then c U ∈ P VA (λ),
∀c ∈ C \ {0}. Therefore, a proper vector corresponding to a given eigenvalue is not
uniquely determined.
We shall call the multiplicity of the eigenvalue λi the biggest number m with the
property that (λ − λi )m divides the determinant det (A − λI). We shall denote the
multiplicity of λi by m(λi ). Sometimes, we may refer at m(λi ) as being the algebraic
multiplicity of λi . Also, we shall call the geometric multiplicity of an eigenvalue λi
the number of linearly independent eigenvectors corresponding to this eigenvalue
(or the dimension of the eigenspace). Equivalently, the geometric multiplicity may
be defined as being the number of degrees of freedom in the eigenvector equation
(4.20).
Now, let us see how, depending on the nature of the eigenvalues λ, we can con-
struct a fundamental system of solutions for the homogeneous system (4.17). We
have to distinguish between four cases.
Case 1. Let us assume that all the eigenvalues λi , i = 1, 2, . . . , n, are real and
distinct. For each λi , we determine, from (4.20), an eigenvector Ui ∈ R^n , Ui ≠ 0.
Then, the vectors
Yi = eλi x Ui , i = 1, 2, . . . , n (4.26)
are linearly independent solutions of system (4.17), i.e. {Y1 , . . . , Yn } is a fundamental
system of solutions for this system. Therefore, the general solution of (4.17) will be
Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.27)

Case 2. Let us assume that λ = α ± iβ, with β > 0, is a complex eigenvalue


of A. For such an eigenvalue, we determine, from equation (4.20), an eigenvector
U ∈ C^n , U ≠ 0. Then, the vectors
   
Y1 = Re (e^{λx} U ), Y2 = Im (e^{λx} U ) (4.28)

are linearly independent solutions of system (4.17). Reasoning in the same manner
for all the eigenvalues λi , we get a fundamental system of solutions {Y1 , . . . , Yn }.
Hence, the general solution of (4.17) will be
Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, ..., n. (4.29)

Case 3. Let us assume that λ is a real eigenvalue of multiplicity m(λ) > 1. For
such a λ, we shall look for a solution of system (4.17) of the form
Y = [P0 + P1 x + · · · + Pm(λ)−1 xm(λ)−1 ] eλx , (4.30)

with P0 , P1 , . . . , Pm(λ)−1 ∈ Rn . Substituting in (4.17) and identifying the coefficients


of equal powers of x, we get the relationships:
(A − λI) Pm(λ)−1 = 0,
(A − λI) Pj−1 = j Pj , j = 1, 2, . . . , m(λ) − 1. (4.31)

Therefore,
(A − λI)m(λ) P0 = 0. (4.32)
We can choose m(λ) linearly independent vectors P0i ∈ R^n , P0i ≠ 0 (a basis of the
subspace Ker(A − λI)^{m(λ)} ⊆ R^n , which is of dimension m(λ)). Then, corresponding
to these vectors, we can determine by recurrence all Pji , for j = 1, 2, . . . , m(λ) − 1.
Therefore, we get m(λ) linearly independent solutions of system (4.17). Reasoning
in the same manner for all the eigenvalues λ of the matrix A, we get a fundamental
system of solutions {Y1 , . . . , Yn } for our system. Hence, its general solution will be

Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.33)

Case 4. Let us assume that λ = α ± i β, β > 0, is a complex eigenvalue of


multiplicity m(λ) > 1. For such a λ, we shall look for a solution of system (4.17) of
the form
Y = [P0 + P1 x + · · · + Pm(λ)−1 xm(λ)−1 ] eλx , (4.34)
with P0 , P1 , . . . , Pm(λ)−1 ∈ Cn . Then, we get:
(A − λI) Pm(λ)−1 = 0,
(A − λI) Pj−1 = j Pj , j = 1, 2, . . . , m(λ) − 1. (4.35)

Therefore, by induction, we have

(A − λI)m(λ) P0 = 0 (4.36)

and
Pj = (1/j!) (A − λI)^j P0 , j = 1, 2, . . . , m(λ) − 1. (4.37)
We can choose m(λ) linearly independent vectors P0i ∈ C^n , P0i ≠ 0. Then, corre-
sponding to these vectors, we can determine by recurrence all Pji , j = 1, 2, . . . , m(λ)−
1. Therefore, we obtain m(λ) vectors

Yi = [P0^i + P1^i x + · · · + P_{m(λ)−1}^i x^{m(λ)−1} ] e^{λx} , i = 1, 2, . . . , m(λ). (4.38)

Then, the vectors Re (Yi ) and Im (Yi ) are 2m(λ) independent solutions of system
(4.17). Reasoning in the same manner for all the eigenvalues λ of the matrix A,
we get a fundamental system of solutions {Y1 , . . . , Yn } for our system. Hence, its
general solution will be

Y = C1 Y1 + · · · + Cn Yn , Ci ∈ R, i = 1, 2, . . . , n. (4.39)

Exercise 4.9 Solve the following system:


dy1/dx = 2y1 − y2 − y3 ,
dy2/dx = 3y1 − 2y2 − 3y3 ,
dy3/dx = −y1 + y2 + 2y3 .

Solution. It is not difficult to see that the characteristic equation associated to our
linear system, det (A − λI) = 0, has the roots λ1 = 0, λ2 = 1, with m(λ2 ) = 2. So,
corresponding to the simple real eigenvalue λ1 , we can determine a proper vector
U1 ≠ 0. Indeed, we have

[ 2  −1  −1 ] [ u1 ]   [ 0 ]
[ 3  −2  −3 ] [ u2 ] = [ 0 ] .
[ −1  1   2 ] [ u3 ]   [ 0 ]
This gives
u2 = 3u1 , u3 = −u1 .
Therefore, corresponding to the first eigenvalue, we get

U1 = (1, 3, −1)^T , Y1 = (1, 3, −1)^T .
For the double real root λ2 , we shall look for a solution of our system of the form

Y = [P0 + P1 x] eλx ,

with P0 , P1 ∈ R^3 . From Y' = AY , identifying the coefficients of equal powers of x,


we get the relationships
(A − λI) P1 = 0,
(A − λI) P0 = P1 .

Therefore, (A − λI)^2 P0 = 0, i.e.

[ −1  1  1 ] [ u1 ]   [ 0 ]
[ −3  3  3 ] [ u2 ] = [ 0 ] ,
[  1 −1 −1 ] [ u3 ]   [ 0 ]

which gives u1 = u2 + u3 . We can choose two linearly independent vectors P0^i ∈
R^3 , P0^i ≠ 0,

P0^1 = (1, 1, 0)^T , P0^2 = (1, 0, 1)^T .

Then, corresponding to these vectors, we can determine the corresponding vectors
P1^i , i = 1, 2. We have:

P1^1 = (0, 0, 0)^T , P1^2 = (0, 0, 0)^T .

Therefore, we get the solutions

Y2 = e^x (1, 1, 0)^T , Y3 = e^x (1, 0, 1)^T .

Let us note that the dimension of the space of the eigenvectors corresponding to the
double eigenvalue λ2 is 2 and this justifies the special form of the above fundamental
solutions Y2 and Y3 .
Hence, we get a fundamental system of solutions {Y1 , Y2 , Y3 } for our system and
the general solution is

Y = C1 (1, 3, −1)^T + C2 e^x (1, 1, 0)^T + C3 e^x (1, 0, 1)^T , Ci ∈ R, i = 1, 2, 3,

or, by components,

y1 = C1 + (C2 + C3 ) e^x ,
y2 = 3C1 + C2 e^x ,
y3 = −C1 + C3 e^x .
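Because Y1 is constant and Y2 , Y3 are of the form e^x U , verifying Y' = AY reduces to the algebraic identities A U1 = 0 and A U = U . The Python sketch below (an added illustration, with a small `matvec` helper of our own) checks them:

```python
A = [[2, -1, -1],
     [3, -2, -3],
     [-1, 1, 2]]

def matvec(M, v):
    # multiply a 3x3 matrix by a length-3 vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Y1 = (1, 3, -1)^T is constant, so Y1' = 0 requires A Y1 = 0
assert matvec(A, [1, 3, -1]) == [0, 0, 0]
# Y2 = e^x (1, 1, 0)^T and Y3 = e^x (1, 0, 1)^T satisfy Y' = Y, so A U = U
assert matvec(A, [1, 1, 0]) == [1, 1, 0]
assert matvec(A, [1, 0, 1]) == [1, 0, 1]
```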

Exercise 4.10 Solve the following system:



dy1/dx = y1 + y2 ,
dy2/dx = 4y1 + y2 .
Solution. Since the matrix of this homogeneous system is

A = [ 1  1 ]
    [ 4  1 ] ,
the characteristic equation
det(A − λI) = 0
has the roots
λ1 = 3, λ2 = −1.
Corresponding to the simple real eigenvalue λ1 , we can determine a proper vector
U1 ≠ 0. Indeed, we have

[ −2  1 ] [ u1 ]   [ 0 ]
[  4 −2 ] [ u2 ] = [ 0 ] .
This gives
−2u1 + u2 = 0.
Therefore, a proper vector corresponding to the first eigenvalue is
\[
U_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
\]
The second eigenvector U2 ≠ 0, for the second eigenvalue λ2, is determined from
\[
\begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
which, upon solving, gives
\[
U_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
\]

The general solution is then given by
\[
Y = C_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{3x} + C_2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} e^{-x}, \quad C_1, C_2 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = C_1\, e^{3x} + C_2\, e^{-x}, \\ y_2 = 2C_1\, e^{3x} - 2C_2\, e^{-x}. \end{cases}
\]
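For a 2×2 matrix the characteristic equation is just the quadratic λ² − (tr A)λ + det A = 0, so the roots above can be recomputed directly (a quick plain-Python check, not part of the book's text):

```python
import math

# A = [[1, 1], [4, 1]], the matrix of this exercise.
a, b, c, d = 1.0, 1.0, 4.0, 1.0
tr = a + d                 # trace
det = a * d - b * c        # determinant
disc = tr * tr - 4.0 * det # discriminant; positive, so real roots
lam1 = (tr + math.sqrt(disc)) / 2.0   # expected: 3
lam2 = (tr - math.sqrt(disc)) / 2.0   # expected: -1

# The proper vector U1 = (1, 2) should satisfy (A - lam1*I) U1 = 0.
r1 = (a - lam1) * 1.0 + b * 2.0
r2 = c * 1.0 + (d - lam1) * 2.0
```

Both residual components `r1`, `r2` come out zero, confirming the eigenpair (3, (1, 2)ᵀ).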


Exercise 4.11 Find the solution of the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -y_2, \\[2mm] \dfrac{dy_2}{dx} = y_1. \end{cases}
\]

Solution. The characteristic equation det(A − λI) = 0 has the complex conjugate roots λ = ±i. For the complex eigenvalue λ = i, we can determine a proper vector U ∈ C², U ≠ 0. Indeed, we have
\[
\begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]

This gives u1 = iu2. Therefore, a proper complex vector corresponding to this eigenvalue is
\[
U = \begin{pmatrix} 1 \\ -i \end{pmatrix}.
\]
Then, the vectors
\[
Y_1 = \mathrm{Re}\,(e^{\lambda x} U), \qquad Y_2 = \mathrm{Im}\,(e^{\lambda x} U),
\]
i.e. the vectors
\[
Y_1 = \begin{pmatrix} \cos x \\ \sin x \end{pmatrix}, \qquad Y_2 = \begin{pmatrix} \sin x \\ -\cos x \end{pmatrix},
\]

are linearly independent solutions of our system. The general solution is then given by
\[
Y = C_1 \begin{pmatrix} \cos x \\ \sin x \end{pmatrix} + C_2 \begin{pmatrix} \sin x \\ -\cos x \end{pmatrix}, \quad C_1, C_2 \in \mathbb{R}.
\]

Hence, by components,
\[
\begin{cases} y_1 = C_1 \cos x + C_2 \sin x, \\ y_2 = C_1 \sin x - C_2 \cos x. \end{cases}
\]

Exercise 4.12 Find the solution of the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = 4y_1 - y_2, \\[2mm] \dfrac{dy_2}{dx} = y_1 + 2y_2. \end{cases}
\]

Exercise 4.13 Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = y_1 + y_2, \\[2mm] \dfrac{dy_2}{dx} = 4y_1 + y_2. \end{cases}
\]

Let us consider now the nonhomogeneous system (4.15). If {Y1, Y2, . . . , Yn} is a fundamental system of solutions for the homogeneous linear system (4.17) and C1, C2, . . . , Cn are arbitrary constants, we are led immediately to the following result.

Theorem 4.14 The general solution on the interval (a, b) of the nonhomogeneous linear system (4.15) with constant coefficients aij and continuous right-hand side F on (a, b) is equal to the sum of the general solution \(\sum_{i=1}^{n} C_i Y_i\) of the corresponding homogeneous system (4.17) and of some particular solution Yp of the nonhomogeneous system (4.15), i.e.
\[
Y(x) = \sum_{i=1}^{n} C_i Y_i(x) + Y_p(x). \tag{4.40}
\]

If the right-hand side of system (4.15) is a quasi-polynomial of the form
\[
F(x) = \sum_{j=1}^{k} e^{\alpha_j x} \left( P_j(x) \cos(\beta_j x) + Q_j(x) \sin(\beta_j x) \right), \tag{4.41}
\]
where αj, βj ∈ R, j = 1, 2, . . . , k and Pj(x), Qj(x) are polynomials in x, then a particular solution of the nonhomogeneous system (4.15) is sought to be of the form
\[
Y_p(x) = \sum_{j=1}^{k} e^{\alpha_j x}\, x^{m_j} \left( \overline{P}_j(x) \cos(\beta_j x) + \overline{Q}_j(x) \sin(\beta_j x) \right), \tag{4.42}
\]

where \(\overline{P}_j(x)\), \(\overline{Q}_j(x)\) are polynomials in x such that
\[
\max\left( \operatorname{grad} \overline{P}_j(x),\ \operatorname{grad} \overline{Q}_j(x) \right) \leq \max\left( \operatorname{grad} P_j(x),\ \operatorname{grad} Q_j(x) \right) \tag{4.43}
\]
and
\[
m_j = \begin{cases} m(\alpha_j + i\beta_j), & \text{if } \alpha_j + i\beta_j \text{ is an eigenvalue of } A, \\ 0, & \text{if } \alpha_j + i\beta_j \text{ is not an eigenvalue of } A. \end{cases} \tag{4.44}
\]

Substituting in (4.15) and using the method of undetermined coefficients, we obtain the needed polynomials \(\overline{P}_j(x)\), \(\overline{Q}_j(x)\).

Exercise 4.15 Find the solution of the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -y_2 + x + 1, \\[2mm] \dfrac{dy_2}{dx} = y_1 + 2x + 1. \end{cases}
\]

Solution. Since the general solution of the associated homogeneous system is
\[
\begin{cases} y_1 = C_1 \cos x + C_2 \sin x, \\ y_2 = C_1 \sin x - C_2 \cos x, \end{cases}
\]
where C1, C2 ∈ R, we shall look for the general solution of our nonhomogeneous system as being of the form
\[
\begin{cases} y_1 = C_1 \cos x + C_2 \sin x + ax + b, \\ y_2 = C_1 \sin x - C_2 \cos x + cx + d, \end{cases}
\]
where the real constants a, b, c, d will be determined using the method of undetermined coefficients. By doing this, one easily gets a = −2, b = 0, c = 1, d = 3. Hence, the general solution of our nonhomogeneous system is
\[
\begin{cases} y_1 = C_1 \cos x + C_2 \sin x - 2x, \\ y_2 = C_1 \sin x - C_2 \cos x + x + 3. \end{cases}
\]
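The constants a = −2, b = 0, c = 1, d = 3 can be verified by substituting the particular solution y1p = −2x, y2p = x + 3 back into the system; since its derivatives are constants, the residuals below should vanish identically (a plain-Python check with our own helper name):

```python
def residuals(x):
    # Particular solution found above and its exact derivatives.
    y1, dy1 = -2.0 * x, -2.0
    y2, dy2 = x + 3.0, 1.0
    r1 = dy1 - (-y2 + x + 1.0)       # first equation of the system
    r2 = dy2 - (y1 + 2.0 * x + 1.0)  # second equation
    return r1, r2
```

Evaluating `residuals` at a few sample points returns (0, 0) each time.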

Exercise 4.16 Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -2y_1 + y_2 + e^{-x}, \\[2mm] \dfrac{dy_2}{dx} = y_1 - 2y_2 + x. \end{cases}
\]


Solution. The general solution of the associated homogeneous system is
\[
\begin{cases} y_1 = C_1\, e^{-3x} + C_2\, e^{-x}, \\ y_2 = -C_1\, e^{-3x} + C_2\, e^{-x}, \end{cases}
\]
where C1, C2 ∈ R. Since −1 is an eigenvalue of the matrix A associated to our system and the right-hand side of this system can be written as
\[
F = \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{-x} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} x,
\]

using the superposition principle, we shall look for the general solution of our nonhomogeneous system as being of the form
\[
\begin{cases} y_1 = C_1\, e^{-3x} + C_2\, e^{-x} + (ax + b)\, e^{-x} + cx + d, \\ y_2 = -C_1\, e^{-3x} + C_2\, e^{-x} + (\alpha x + \beta)\, e^{-x} + \gamma x + \delta, \end{cases}
\]
where the real constants a, b, c, d, α, β, γ, δ will be determined using the method of undetermined coefficients. By doing this, we obtain the general solution of our nonhomogeneous system:
\[
\begin{cases} y_1(x) = C_1\, e^{-3x} + C_2\, e^{-x} + \dfrac{1}{2}\, x\, e^{-x} + \dfrac{1}{3}\, x - \dfrac{4}{9}, \\[2mm] y_2(x) = -C_1\, e^{-3x} + C_2\, e^{-x} + \left( \dfrac{1}{2}\, x - \dfrac{1}{2} \right) e^{-x} + \dfrac{2}{3}\, x - \dfrac{5}{9}. \end{cases}
\]
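The extra factor x in the e^{−x} terms is forced by resonance: α + iβ = −1 must be compared with the spectrum of A = [[−2, 1], [1, −2]]. A one-line check of the characteristic polynomial λ² + 4λ + 3 (our own sketch, stdlib only) confirms that −1 and −3 are eigenvalues, so m_j = 1 for the e^{−x} forcing term:

```python
def char_poly(lam):
    # det(A - lam*I) for A = [[-2, 1], [1, -2]]
    return (-2.0 - lam) * (-2.0 - lam) - 1.0

resonant = abs(char_poly(-1.0)) < 1e-12   # -1 IS an eigenvalue: resonance
```

By contrast, `char_poly(0.0)` is 3 ≠ 0, so the polynomial forcing term x needs no extra factor (m_j = 0 there).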

Since finding a particular solution for the nonhomogeneous system (4.15) is not easy, in order to find its general solution we can use the method of variation of parameters. So, if we know the general solution of system (4.17),
\[
Y(x) = C_1 Y_1(x) + \cdots + C_n Y_n(x), \tag{4.45}
\]
where {Y1, . . . , Yn} is a fundamental system of solutions, then we shall look for the general solution of system (4.15) as being of the form
\[
Y(x) = C_1(x)\, Y_1(x) + \cdots + C_n(x)\, Y_n(x), \tag{4.46}
\]
where C1(x), . . . , Cn(x) are some continuously differentiable functions to be found.



We get
\[
\begin{cases} \displaystyle\sum_{i=1}^{n} C_i'(x)\, y_{1i} = f_1(x), \\[2mm] \displaystyle\sum_{i=1}^{n} C_i'(x)\, y_{2i} = f_2(x), \\ \quad\ \vdots \\ \displaystyle\sum_{i=1}^{n} C_i'(x)\, y_{ni} = f_n(x). \end{cases} \tag{4.47}
\]

But (4.47) is a system of n linear equations with a nonzero determinant, since this determinant is exactly the Wronskian for the fundamental system of solutions {Y1, . . . , Yn}. Therefore, this system has a unique solution
\[
C_i'(x) = \varphi_i(x), \quad i = 1, 2, \ldots, n, \tag{4.48}
\]
where φi are continuous functions on (a, b). Hence,
\[
C_i(x) = \int \varphi_i(x)\, dx + C_i, \quad C_i \in \mathbb{R},\ i = 1, 2, \ldots, n. \tag{4.49}
\]

Consequently, the general solution of system (4.15) is
\[
Y(x) = \left( \int \varphi_1(x)\, dx + C_1 \right) Y_1(x) + \left( \int \varphi_2(x)\, dx + C_2 \right) Y_2(x) + \cdots + \left( \int \varphi_n(x)\, dx + C_n \right) Y_n(x), \quad C_1, \ldots, C_n \in \mathbb{R}. \tag{4.50}
\]

If we have a Cauchy problem, then, from the initial conditions, we determine


the n unknown constants Ci and we obtain the unique solution of our initial-value
problem.
Remark 4.17 If we are able to find a particular solution Yp (x) of system (4.15),
for instance using the method of undetermined coefficients, then the general solution
of this system can be written immediately as
Y (x) = C1 Y1 (x) + C2 Y2 (x) + · · · + Cn Yn (x) + Yp (x), C1 , . . . , Cn ∈ R. (4.51)
Exercise 4.18 Find the solution of the following Cauchy problem:
\[
\begin{cases} \dfrac{dy_1}{dx} = -y_1 + y_2 + x, \\[2mm] \dfrac{dy_2}{dx} = y_1 - y_2 - x, \\[2mm] y_1(0) = y_2(0) = 1. \end{cases}
\]


Solution. The general solution of the associated homogeneous system is
\[
\begin{cases} y_1 = C_1 - C_2\, e^{-2x}, \\ y_2 = C_1 + C_2\, e^{-2x}, \end{cases}
\]
where C1, C2 ∈ R. Using the method of variation of parameters and looking for the general solution of our system as being of the form
\[
\begin{cases} y_1 = C_1(x) - C_2(x)\, e^{-2x}, \\ y_2 = C_1(x) + C_2(x)\, e^{-2x}, \end{cases}
\]
we get
\[
\begin{cases} C_1(x) = K_1, \\[1mm] C_2(x) = -\dfrac{x}{2}\, e^{2x} + \dfrac{1}{4}\, e^{2x} + K_2, \end{cases}
\]
where K1, K2 ∈ R. Therefore, the general solution of our nonhomogeneous system is, by components,
\[
\begin{cases} y_1 = K_1 - K_2\, e^{-2x} + \dfrac{x}{2} - \dfrac{1}{4}, \\[2mm] y_2 = K_1 + K_2\, e^{-2x} - \dfrac{x}{2} + \dfrac{1}{4}. \end{cases}
\]
Imposing the given initial conditions, we get K1 = 1 and K2 = −1/4. Thus, the unique solution of our Cauchy problem is
\[
\begin{cases} y_1 = \dfrac{3}{4} + \dfrac{1}{4}\, e^{-2x} + \dfrac{x}{2}, \\[2mm] y_2 = \dfrac{5}{4} - \dfrac{1}{4}\, e^{-2x} - \dfrac{x}{2}. \end{cases}
\]
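The quadrature step of the method can also be checked numerically. Below, C2′(x) = −x e^{2x} (obtained from the variation-of-parameters system above) is integrated by the composite trapezoid rule and compared against the closed-form antiderivative; the helper names and the step count are our own choices for illustration:

```python
import math

def phi2(t):
    # C2'(t) = -t * e^{2t}
    return -t * math.exp(2.0 * t)

def trapezoid(f, a, b, n=4000):
    # Composite trapezoid rule on [a, b] with n subintervals.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

x = 1.0
numeric = trapezoid(phi2, 0.0, x)
# Antiderivative -(t/2)e^{2t} + (1/4)e^{2t}, evaluated between 0 and x:
exact = (-x / 2.0 + 0.25) * math.exp(2.0 * x) - 0.25
```

The two values agree to well within the trapezoid rule's O(h²) error, confirming the integration carried out above.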

Problems on Chapter 4

Exercise 4.19 Integrate the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = 3y_1 - y_2 + y_3, \\[2mm] \dfrac{dy_2}{dx} = -y_1 + 5y_2 - y_3, \\[2mm] \dfrac{dy_3}{dx} = y_1 - y_2 + 3y_3. \end{cases}
\]

Solution. Since the matrix of this homogeneous system is
\[
A = \begin{pmatrix} 3 & -1 & 1 \\ -1 & 5 & -1 \\ 1 & -1 & 3 \end{pmatrix},
\]
the characteristic equation
det(A − λI) = 0
has the roots
λ1 = 2, λ2 = 3, λ3 = 6.
Corresponding to the simple real eigenvalue λ1, we can determine a proper vector U1 ≠ 0. Indeed, we have
\[
\begin{pmatrix} 1 & -1 & 1 \\ -1 & 3 & -1 \\ 1 & -1 & 1 \end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]

Therefore, a proper vector corresponding to the first eigenvalue is
\[
U_1 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
\]
In a similar manner, we get the second eigenvector U2 ≠ 0, for the second eigenvalue λ2, and the third eigenvector U3 ≠ 0, for the third eigenvalue λ3:
\[
U_2 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad U_3 = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}.
\]

The general solution is then given by
\[
Y = C_1 \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} e^{2x} + C_2 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{3x} + C_3 \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} e^{6x}, \quad C_1, C_2, C_3 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = -C_1\, e^{2x} + C_2\, e^{3x} + C_3\, e^{6x}, \\ y_2 = C_2\, e^{3x} - 2C_3\, e^{6x}, \\ y_3 = C_1\, e^{2x} + C_2\, e^{3x} + C_3\, e^{6x}. \end{cases}
\]
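Each eigenpair used above can be confirmed by a direct matrix-vector product, A Ui = λi Ui (a stdlib-only sketch; `matvec` is our own helper):

```python
A = [[3.0, -1.0, 1.0],
     [-1.0, 5.0, -1.0],
     [1.0, -1.0, 3.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

pairs = [  # (eigenvalue, eigenvector), as determined above
    (2.0, [-1.0, 0.0, 1.0]),
    (3.0, [1.0, 1.0, 1.0]),
    (6.0, [1.0, -2.0, 1.0]),
]
max_err = max(abs(matvec(A, u)[i] - lam * u[i])
              for lam, u in pairs for i in range(3))
```

The maximal componentwise error is zero, so all three eigenpairs check out.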

Exercise 4.20 Find the solution of the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -2y_1 + 2y_2 + 2y_3, \\[2mm] \dfrac{dy_2}{dx} = -10y_1 + 6y_2 + 8y_3, \\[2mm] \dfrac{dy_3}{dx} = 3y_1 - y_2 - 2y_3. \end{cases}
\]
Solution. Since the matrix of this homogeneous system is
\[
A = \begin{pmatrix} -2 & 2 & 2 \\ -10 & 6 & 8 \\ 3 & -1 & -2 \end{pmatrix},
\]
the characteristic equation
det(A − λI) = 0
has the real root λ1 = 0 and the complex conjugate roots λ2 = 1 + i and λ3 = 1 − i. For the real root λ1, a proper vector is
\[
U_1 = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}.
\]

For the complex eigenvalue λ2 = 1 + i, a proper vector U2 ∈ C³, U2 ≠ 0, is
\[
U_2 = \begin{pmatrix} 1 + i \\ 2i \\ 1 \end{pmatrix}.
\]

Then, the vectors
\[
Y_1 = U_1, \qquad Y_2 = \mathrm{Re}\,(e^{\lambda_2 x} U_2), \qquad Y_3 = \mathrm{Im}\,(e^{\lambda_2 x} U_2),
\]
i.e. the vectors
\[
Y_1 = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}, \qquad
Y_2 = e^{x} \begin{pmatrix} \cos x - \sin x \\ -2 \sin x \\ \cos x \end{pmatrix}, \qquad
Y_3 = e^{x} \begin{pmatrix} \cos x + \sin x \\ 2 \cos x \\ \sin x \end{pmatrix},
\]
are linearly independent solutions of our system. The general solution is then given by
\[
Y = C_1 \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + C_2\, e^{x} \begin{pmatrix} \cos x - \sin x \\ -2 \sin x \\ \cos x \end{pmatrix} + C_3\, e^{x} \begin{pmatrix} \cos x + \sin x \\ 2 \cos x \\ \sin x \end{pmatrix},
\]
where C1, C2, C3 ∈ R. Hence, by components,
\[
\begin{cases} y_1 = C_1 + C_2 (\cos x - \sin x)\, e^{x} + C_3 (\cos x + \sin x)\, e^{x}, \\ y_2 = -C_1 - 2C_2\, e^{x} \sin x + 2C_3\, e^{x} \cos x, \\ y_3 = 2C_1 + C_2\, e^{x} \cos x + C_3\, e^{x} \sin x. \end{cases}
\]
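Python's built-in complex arithmetic makes the complex eigenpair easy to verify: (A − λ2 I) U2 must be the zero vector, and the real eigenvector can be checked the same way (a quick sketch of ours):

```python
# Matrix of the system and the complex eigenpair found above.
A = [[-2, 2, 2],
     [-10, 6, 8],
     [3, -1, -2]]
lam = 1 + 1j
U2 = [1 + 1j, 2j, 1]

# (A - lam*I) U2, componentwise; should be (0, 0, 0).
residual = [sum(A[i][j] * U2[j] for j in range(3)) - lam * U2[i]
            for i in range(3)]

# The real eigenvector U1 = (1, -1, 2) for lam1 = 0 satisfies A U1 = 0.
U1 = [1, -1, 2]
residual0 = [sum(A[i][j] * U1[j] for j in range(3)) for i in range(3)]
```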


IV.3. Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = 3y_1 - 4y_2, \\[2mm] \dfrac{dy_2}{dx} = y_1 - 2y_2. \end{cases}
\]

Since the matrix of this homogeneous system is
\[
A = \begin{pmatrix} 3 & -4 \\ 1 & -2 \end{pmatrix},
\]

the characteristic equation


det(A − λI) = 0

has the roots


λ1 = 2, λ2 = −1.

Corresponding to the simple real eigenvalue λ1, we can determine a proper vector U1 ≠ 0. Indeed, we have
\[
U_1 = \begin{pmatrix} 4 \\ 1 \end{pmatrix}.
\]
The second eigenvector U2 ≠ 0, for the second eigenvalue λ2, is
\[
U_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]

The general solution is then given by
\[
Y = C_1 \begin{pmatrix} 4 \\ 1 \end{pmatrix} e^{2x} + C_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-x}, \quad C_1, C_2 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = 4C_1\, e^{2x} + C_2\, e^{-x}, \\ y_2 = C_1\, e^{2x} + C_2\, e^{-x}. \end{cases}
\]

Exercise 4.21 Integrate the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -7y_1 + y_2, \\[2mm] \dfrac{dy_2}{dx} = -2y_1 - 5y_2. \end{cases}
\]
Solution. Since the matrix of this homogeneous system is
\[
A = \begin{pmatrix} -7 & 1 \\ -2 & -5 \end{pmatrix},
\]
the characteristic equation
det(A − λI) = 0
has the complex conjugate roots

λ = −6 ± i.

For the complex eigenvalue λ = −6 + i, a proper vector U ∈ C², U ≠ 0, is
\[
U = \begin{pmatrix} 1 \\ 1 + i \end{pmatrix}.
\]
Then, the vectors
\[
Y_1 = \mathrm{Re}\,(e^{\lambda x} U), \qquad Y_2 = \mathrm{Im}\,(e^{\lambda x} U),
\]
i.e. the vectors
\[
Y_1 = e^{-6x} \begin{pmatrix} \cos x \\ \cos x - \sin x \end{pmatrix}, \qquad Y_2 = e^{-6x} \begin{pmatrix} \sin x \\ \sin x + \cos x \end{pmatrix},
\]
are linearly independent solutions of our system. The general solution is then given by
\[
Y = C_1\, e^{-6x} \begin{pmatrix} \cos x \\ \cos x - \sin x \end{pmatrix} + C_2\, e^{-6x} \begin{pmatrix} \sin x \\ \sin x + \cos x \end{pmatrix}, \quad C_1, C_2 \in \mathbb{R}.
\]

Hence, by components,
\[
\begin{cases} y_1 = e^{-6x} (C_1 \cos x + C_2 \sin x), \\ y_2 = e^{-6x} \left( C_1 (\cos x - \sin x) + C_2 (\cos x + \sin x) \right). \end{cases}
\]
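The passage from the complex solution e^{λx}U to the real solutions Y1, Y2 can be checked with `cmath`: the real and imaginary parts computed numerically must match the closed forms above (the sample point x = 0.3 is our own choice):

```python
import cmath, math

lam = -6 + 1j
U = [1, 1 + 1j]

x = 0.3
w = [cmath.exp(lam * x) * u for u in U]   # e^{lam x} U, componentwise
y1 = [z.real for z in w]                  # Y1(x)
y2 = [z.imag for z in w]                  # Y2(x)

# Closed forms printed in the text:
e = math.exp(-6.0 * x)
expected_y1 = [e * math.cos(x), e * (math.cos(x) - math.sin(x))]
expected_y2 = [e * math.sin(x), e * (math.sin(x) + math.cos(x))]
```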


Exercise 4.22 Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = y_2 + y_3, \\[2mm] \dfrac{dy_2}{dx} = y_1 + y_3, \\[2mm] \dfrac{dy_3}{dx} = y_1 + y_2. \end{cases}
\]
Solution. Due to the fact that the matrix of this homogeneous system is
\[
A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix},
\]
the characteristic equation
det(A − λI) = 0
has the roots
λ1 = 2, λ2,3 = −1.
It is not difficult to see that, corresponding to the simple real eigenvalue λ1, a proper vector U1 ≠ 0 is
\[
U_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
\]
Hence, the first fundamental solution of our system is
\[
Y_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2x}.
\]

For the double real eigenvalue λ2,3, looking for fundamental solutions of the form
\[
Y(x) = (P_0 + P_1 x)\, e^{-x}, \quad P_0, P_1 \in \mathbb{R}^3,
\]
we get
\[
Y_2 = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} e^{-x}, \qquad Y_3 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} e^{-x}.
\]
Let us note that the dimension of the space of the eigenvectors corresponding to the
double eigenvalue λ2,3 is 2 and this justifies the special form of the above fundamental
solutions Y2 and Y3. The general solution is then given by
\[
Y = C_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2x} + C_2 \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} e^{-x} + C_3 \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} e^{-x}, \quad C_1, C_2, C_3 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = C_1\, e^{2x} + C_2\, e^{-x} + C_3\, e^{-x}, \\ y_2 = C_1\, e^{2x} - C_2\, e^{-x}, \\ y_3 = C_1\, e^{2x} - C_3\, e^{-x}. \end{cases}
\]

Exercise 4.23 Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = 8y_1 - y_2 - 5y_3, \\[2mm] \dfrac{dy_2}{dx} = -2y_1 + 3y_2 + y_3, \\[2mm] \dfrac{dy_3}{dx} = 4y_1 - y_2 - y_3. \end{cases}
\]
Solution. The characteristic equation of this system has the roots

λ1 = 2, λ2,3 = 4.

It is not difficult to see that, corresponding to the simple real eigenvalue λ1, a proper vector U1 ≠ 0 is
\[
U_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
\]
Hence, the first fundamental solution of our system is
\[
Y_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2x}.
\]
For the double real eigenvalue λ2,3, looking for fundamental solutions of the form
\[
Y(x) = (P_0 + P_1 x)\, e^{4x}, \quad P_0, P_1 \in \mathbb{R}^3,
\]
we get
\[
Y_2 = \begin{pmatrix} 1 + 3x \\ 1 - 3x \\ 3x \end{pmatrix} e^{4x}, \qquad Y_3 = \begin{pmatrix} 2 + 3x \\ -3x \\ 1 + 3x \end{pmatrix} e^{4x}.
\]
Let us note that the dimension of the space of the eigenvectors corresponding to the
double eigenvalue λ2,3 is 1 and this justifies the special form of the above fundamental
solutions Y2 and Y3. The general solution is then given by
\[
Y = C_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2x} + C_2 \begin{pmatrix} 1 + 3x \\ 1 - 3x \\ 3x \end{pmatrix} e^{4x} + C_3 \begin{pmatrix} 2 + 3x \\ -3x \\ 1 + 3x \end{pmatrix} e^{4x}, \quad C_1, C_2, C_3 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = C_1\, e^{2x} + C_2 (1 + 3x)\, e^{4x} + C_3 (2 + 3x)\, e^{4x}, \\ y_2 = C_1\, e^{2x} + C_2 (1 - 3x)\, e^{4x} - 3C_3\, x\, e^{4x}, \\ y_3 = C_1\, e^{2x} + 3C_2\, x\, e^{4x} + C_3 (1 + 3x)\, e^{4x}. \end{cases}
\]
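Because the eigenspace for λ = 4 is one-dimensional, Y2 and Y3 rest on generalized eigenvectors: writing Y2 = (P0 + P1 x)e^{4x}, one needs (A − 4I)P0 = P1 and (A − 4I)P1 = 0. Both chain conditions can be checked directly (stdlib sketch; `shifted_matvec` is our own helper):

```python
A = [[8, -1, -5],
     [-2, 3, 1],
     [4, -1, -1]]

def shifted_matvec(v, lam=4):
    # (A - lam*I) v
    return [sum(A[i][j] * v[j] for j in range(3)) - lam * v[i]
            for i in range(3)]

P0 = [1, 1, 0]    # constant part of Y2
P1 = [3, -3, 3]   # coefficient of x in Y2 (an eigenvector for lam = 4)

err_chain = max(abs(shifted_matvec(P0)[i] - P1[i]) for i in range(3))
err_eig = max(abs(c) for c in shifted_matvec(P1))
```

Both errors vanish, confirming the Jordan chain behind Y2; the pair (P0, P1) of Y3 passes the same test.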


Exercise 4.24 Integrate the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -y_1 + y_2, \\[2mm] \dfrac{dy_2}{dx} = -y_1 - 3y_2. \end{cases}
\]

Solution. The characteristic equation of this system has the roots

λ1,2 = −2.

For this double real eigenvalue λ1,2, looking for fundamental solutions of the form
\[
Y(x) = (P_0 + P_1 x)\, e^{-2x}, \quad P_0, P_1 \in \mathbb{R}^2,
\]
we get
\[
Y_1 = \begin{pmatrix} 1 + x \\ -x \end{pmatrix} e^{-2x}, \qquad Y_2 = \begin{pmatrix} x \\ 1 - x \end{pmatrix} e^{-2x}.
\]
Let us point out that the dimension of the space of the eigenvectors corresponding to the double eigenvalue λ1,2 is 1 and this justifies the special form of the above fundamental solutions Y1 and Y2. The general solution is given by
\[
Y = C_1 \begin{pmatrix} 1 + x \\ -x \end{pmatrix} e^{-2x} + C_2 \begin{pmatrix} x \\ 1 - x \end{pmatrix} e^{-2x}, \quad C_1, C_2 \in \mathbb{R}.
\]

Hence, by components,
\[
\begin{cases} y_1 = [C_1 + (C_1 + C_2)\, x]\, e^{-2x}, \\ y_2 = [C_2 - (C_1 + C_2)\, x]\, e^{-2x}. \end{cases}
\]


Exercise 4.25 Solve the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = -y_2, \\[2mm] \dfrac{dy_2}{dx} = -9y_1. \end{cases}
\]

Solution. It is not difficult to see that, using the elimination method, the general solution is given by
\[
\begin{cases} y_1 = C_1\, e^{3x} + C_2\, e^{-3x}, \\ y_2 = -3 (C_1\, e^{3x} - C_2\, e^{-3x}), \end{cases} \quad C_1, C_2 \in \mathbb{R}.
\]


Exercise 4.26 Find the solution of the following system:
\[
\begin{cases} \dfrac{dy_1}{dx} = y_1 - 2y_2, \\[2mm] \dfrac{dy_2}{dx} = y_1 - y_2. \end{cases}
\]
Solution. The matrix of this homogeneous system is
\[
A = \begin{pmatrix} 1 & -2 \\ 1 & -1 \end{pmatrix}
\]
and, therefore, the characteristic equation
det(A − λI) = 0
has the complex conjugate roots
λ = ±i.
For the complex eigenvalue λ = i, we can determine a proper vector U ∈ C², U ≠ 0. Indeed, we get
\[
U = \begin{pmatrix} 1 \\ (1 - i)/2 \end{pmatrix}.
\]

Then, the vectors
\[
Y_1 = \mathrm{Re}\,(e^{\lambda x} U), \qquad Y_2 = \mathrm{Im}\,(e^{\lambda x} U),
\]
i.e. the vectors
\[
Y_1 = \begin{pmatrix} \cos x \\ (\sin x + \cos x)/2 \end{pmatrix}, \qquad Y_2 = \begin{pmatrix} \sin x \\ (\sin x - \cos x)/2 \end{pmatrix},
\]

are linearly independent solutions of our system. The general solution is then given by
\[
Y = C_1 \begin{pmatrix} \cos x \\ (\sin x + \cos x)/2 \end{pmatrix} + C_2 \begin{pmatrix} \sin x \\ (\sin x - \cos x)/2 \end{pmatrix}, \quad C_1, C_2 \in \mathbb{R}.
\]
Hence, by components,
\[
\begin{cases} y_1 = C_1 \cos x + C_2 \sin x, \\ y_2 = C_1 (\sin x + \cos x)/2 + C_2 (\sin x - \cos x)/2. \end{cases}
\]



Bibliography

[1] G. Arfken, H. Weber, Mathematical Methods for Physicists, Elsevier Aca-


demic Press, 2005.

[2] V.I. Arnold, Ordinary Differential Equations, Editura Ştiinţifică şi Enciclo-
pedică, Bucharest, 1978 (in Romanian).

[3] W.E. Boyce, R.C. DiPrima, Elementary Differential Equations, Wiley, New
York, 1986.

[4] A. M. Bruckner, J. B. Bruckner, B. S. Thomson, Real Analysis, Prentice-Hall, 1997.

[5] N. Cotfas, L. A. Cotfas, Complements of Mathematics, Editura Univer-


sităţii din Bucureşti, 2009 (in Romanian).

[6] R. Courant, D. Hilbert, Methods of Mathematical Physics, Wiley, New


York, 1989.

[7] A.B. Dickinson, Differential Equations: Theory and Use in Time and Mo-
tion, Reading, Mass., Addison-Wesley, 1972.

[8] J.K. Hunter, B. Nachtergaele, Applied Analysis, World Scientific, 2000.

[9] Şt. Mirică, Differential and Integral Equations, Vol. I-III, Editura Univer-
sităţii, Bucharest, 1999-2002 (in Romanian).

[10] Gh. Moroşanu, Differential Equations. Applications, Editura Academiei,


Bucharest, 1989 (in Romanian).

[11] M. E. Piticu, M. Vraciu, C. Timofte, G. Pop, Differential Equations.


Problems, Editura Universităţii din Bucureşti, 1995 (in Romanian).


[12] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York,


1964.

[13] W. Rudin, Real and Complex Analysis, Editura Theta, Bucharest, 1999 (in
Romanian).

[14] P. Szekeres, A Course in Modern Mathematical Physics, Cambridge Uni-


versity Press, 2006.

[15] C. Timofte, Ordinary Differential Equations, Editura Universităţii din Bu-


cureşti, 2006.

[16] C. Timofte, Differential Calculus, Editura Universităţii din Bucureşti, 2009.

[17] D. Zwillinger, Handbook of Differential Equations, Academic Press, Boston,


1997.
