Section: Computer Science (English), Course: Dynamical Systems, Year: 2015/2016
\[ x' = f(t, x) \tag{1} \]
When first presenting differential equations, one can say, roughly speaking, that a
differential equation is a relation involving the derivatives of some unknown function
up to a given order. This means that it is a scalar equation of the form
\[ x^{(n)} = g(t, x, x', \ldots, x^{(n-1)}). \tag{2} \]
© 2015 Adriana Buică, Differential Equations
\[ X' = f(t, X) \tag{3} \]
This also clarifies that we need to introduce (n − 1) scalar unknowns beside the scalar unknown x of our equation (2). These new unknowns will be the derivatives of x up to order (n − 1), that is
\[ X_1 = x, \quad X_2 = x', \quad X_3 = x'', \quad \ldots, \quad X_n = x^{(n-1)}. \tag{4} \]
Then the nth order equation (2) becomes the first order system
\[
\begin{aligned}
X_1' &= X_2,\\
X_2' &= X_3,\\
&\;\;\vdots\\
X_{n-1}' &= X_n,\\
X_n' &= g(t, X_1, X_2, \ldots, X_n).
\end{aligned}
\]
The final step in seeing that this system can be put into the vectorial form (3) is to
identify the components of the vectorial function f . Of course, these are
\[ f_1(t, X_1, \ldots, X_n) = X_2, \quad f_2(t, X_1, \ldots, X_n) = X_3, \quad \ldots, \quad f_n(t, X_1, \ldots, X_n) = g(t, X_1, \ldots, X_n). \]
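Because this reduction is completely mechanical, it can be written as a short program. The sketch below is my own illustration, not from the notes (the helper name `to_first_order` is hypothetical); it assumes g is given as a Python callable.

```python
# Illustrative sketch (not from the lecture): reduce x^(n) = g(t, x, x', ..., x^(n-1))
# to the first order system X' = f(t, X) of equation (3).
def to_first_order(g, n):
    """Return the vector field f of the equivalent first order system."""
    def f(t, X):
        # X = (X1, ..., Xn) = (x, x', ..., x^(n-1)); see (4)
        return [X[k + 1] for k in range(n - 1)] + [g(t, *X)]
    return f

# Example: x'' = -x, i.e. g(t, x, x') = -x.
f = to_first_order(lambda t, x, xp: -x, 2)
print(f(0.0, [1.0, 0.0]))  # [0.0, -1.0]
```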
Solutions. Now we present the precise notion of solution for a differential equation, together with some examples.
Definition 1 We say that a vectorial function φ : I → Rⁿ is a solution of the differential equation (1) if
(i) I ⊆ R is an open interval, φ ∈ C¹(I, Rⁿ),
(ii) (t, φ(t)) ∈ D, for all t ∈ I,
(iii) φ′(t) = f(t, φ(t)), for all t ∈ I.
In particular, for the nth order differential equation (2) the notion of solution is as follows.
Definition 2 We say that a scalar function φ : I → R is a solution of the nth order differential equation (2) if
(i) I ⊆ R is an open interval, φ ∈ Cⁿ(I),
(ii) (t, φ(t), φ′(t), ..., φ⁽ⁿ⁻¹⁾(t)) ∈ D, for all t ∈ I,
(iii) φ⁽ⁿ⁾(t) = g(t, φ(t), φ′(t), ..., φ⁽ⁿ⁻¹⁾(t)), for all t ∈ I.
We present now some examples in the form of exercises (this means that you have to check them).
1) The function φ : R → R, φ(t) = t² − cos t + 3/4 is a solution of the scalar first order differential equation x′ = 2t + sin t.
2) The function φ : R → R, φ(t) = (2/3) eᵗ is a solution of the scalar first order differential equation x′ = x.
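Such checks can also be done numerically. The following throwaway sketch (my own, not part of the notes) compares a central-difference approximation of φ′ from exercise 1 with the right-hand side 2t + sin t:

```python
# Throwaway numerical check (my own) that phi(t) = t^2 - cos t + 3/4 satisfies
# x' = 2t + sin t: compare a central difference of phi with the right-hand side.
import math

def phi(t):
    return t**2 - math.cos(t) + 0.75

def rhs(t):
    return 2*t + math.sin(t)

h = 1e-6
for t in [-2.0, 0.0, 1.5, 3.0]:
    dphi = (phi(t + h) - phi(t - h)) / (2*h)
    assert abs(dphi - rhs(t)) < 1e-6
print("phi solves x' = 2t + sin t at the sampled points")
```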
The initial value problem (IVP). The IVP for (1) is
\[ x' = f(t, x), \qquad x(t_0) = \eta, \]
where f : D → Rⁿ, continuous on the open subset D ⊆ R × Rⁿ, and (t₀, η) ∈ D are all given. Note that t₀ is called the initial time while η is called the initial value or the initial position. In the particular case n = 2 we have
\[
\begin{aligned}
x_1' &= f_1(t, x),\\
x_2' &= f_2(t, x),\\
x_1(t_0) &= \eta_1,\\
x_2(t_0) &= \eta_2.
\end{aligned}
\]
The IVP for (2) is
\[ x^{(n)} = g(t, x, x', \ldots, x^{(n-1)}), \qquad x(t_0) = \eta_1, \quad x'(t_0) = \eta_2, \quad \ldots, \quad x^{(n-1)}(t_0) = \eta_n, \]
where g : D → R, continuous on the open subset D ⊆ R × Rⁿ, and (t₀, η₁, ..., ηₙ) ∈ D are all given. In the particular case n = 2 we have
\[ x'' = g(t, x, x'), \qquad x(t_0) = \eta_1, \quad x'(t_0) = \eta_2. \]
In this case η₁ is called the initial position while η₂ is the initial velocity.
It is worth saying that, in general, an IVP has a unique solution.
2) Knowing that the initial value problem x0 = 3x, x(0) = 1 has a unique
solution, find it among the following functions:
(a) x = eᵗ;
(b) x = 1;
(c) x = 1/3;
(d) x = t;
(e) x = e³ᵗ.
3) Knowing that the initial value problem x′ = x − 3, x(0) = 1 has a unique solution, find it among the functions of the form x = 3 + c eᵗ, with c ∈ R.
4) Show that the initial value problem
\[ x' = x^{1/3}, \qquad x(0) = 0 \]
has more than one solution.
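Assuming the point of this last exercise is the failure of uniqueness (the surrounding text suggests it), here is a quick numerical sketch of my own: both φ₁ ≡ 0 and φ₂(t) = (2t/3)^{3/2}, for t ≥ 0, pass through (0, 0) and satisfy the equation.

```python
# Numerical sketch (my own; it assumes the exercise is about non-uniqueness):
# both phi1(t) = 0 and phi2(t) = (2t/3)^(3/2), t >= 0, solve x' = x^(1/3), x(0) = 0.
import math

def phi2(t):
    return (2.0 * t / 3.0) ** 1.5

assert phi2(0.0) == 0.0          # phi2 starts at the initial value 0, like phi1 = 0
h = 1e-6
for t in [0.5, 1.0, 2.0]:
    dphi2 = (phi2(t + h) - phi2(t - h)) / (2*h)
    assert abs(dphi2 - phi2(t) ** (1.0/3.0)) < 1e-6
print("two distinct solutions pass through (0, 0)")
```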
The form and the Existence and Uniqueness Theorem for the IVP. In the
previous lecture we saw the general form of an nth order scalar differential equation.
In this lecture we begin the study of a particular case of such equations, namely the
class of nth order scalar linear differential equations which have the form
\[ x^{(n)} + a_1(t)\,x^{(n-1)} + \cdots + a_n(t)\,x = f(t), \tag{5} \]

© 2015 Adriana Buică, Linear Differential Equations

where a₁, ..., aₙ, f ∈ C(I). The IVP for (5) consists of (5) together with the initial conditions
\[ x(t_0) = \eta_1, \quad x'(t_0) = \eta_2, \quad \ldots, \quad x^{(n-1)}(t_0) = \eta_n. \tag{6} \]
For further reference we write below the form of a linear homogeneous differential equation,
\[ x^{(n)} + a_1(t)\,x^{(n-1)} + \cdots + a_n(t)\,x = 0. \tag{7} \]
When equation (5) is linear nonhomogeneous, we say that (7) is the linear homogeneous differential equation associated to it.
The fundamental theorems for linear differential equations. These theorems give the structure of the set of solutions of such equations. Their proofs rely on Linear Algebra. The key that opens the door of this theory is to associate a linear map to a linear differential equation. First we note that the set of n-times continuously differentiable functions on an open interval, Cⁿ(I), has a linear structure when considering the usual operations of addition of functions and multiplication of a function by a real number. For each function x ∈ Cⁿ(I) we define a new function, denoted Lx, as
\[ Lx(t) = x^{(n)}(t) + a_1(t)\,x^{(n-1)}(t) + \cdots + a_n(t)\,x(t), \quad \text{for all } t \in I. \]
It is not difficult to see that Lx ∈ C(I). In this way we obtain a map between the linear spaces Cⁿ(I) and C(I), i.e.
\[ L : C^n(I) \to C(I). \]
Proposition 1 (i) The map L is linear, that is, for any x, y ∈ Cⁿ(I) and any α, β ∈ R we have
\[ L(\alpha x + \beta y) = \alpha\,Lx + \beta\,Ly. \]
(ii) The linear homogeneous differential equation (7) can be written equivalently
Lx = 0,
while the linear nonhomogeneous differential equation (5) can be written equivalently
Lx = f.
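As a concrete illustration of Proposition 1(i), here is a small numerical check of my own, with the hypothetical coefficients a₁(t) = t and a₂(t) = 1 for n = 2; sin and exp are used because their derivatives are known exactly.

```python
# Small numerical check (my own illustration; the coefficients a1(t) = t and
# a2(t) = 1 are hypothetical) that Lx = x'' + a1(t) x' + a2(t) x is linear.
import math

def L(x, d1x, d2x):
    # apply L to a function given together with its exact first and second derivatives
    return lambda t: d2x(t) + t * d1x(t) + 1.0 * x(t)

Lsin = L(math.sin, math.cos, lambda t: -math.sin(t))
Lexp = L(math.exp, math.exp, math.exp)
a, b = 2.0, -3.0
Lcomb = L(lambda t: a*math.sin(t) + b*math.exp(t),
          lambda t: a*math.cos(t) + b*math.exp(t),
          lambda t: -a*math.sin(t) + b*math.exp(t))
for t in [0.0, 0.7, 2.0]:
    assert abs(Lcomb(t) - (a*Lsin(t) + b*Lexp(t))) < 1e-9
print("L(a*x + b*y) = a*Lx + b*Ly at the sampled points")
```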
The fundamental theorem for linear homogeneous differential equations follows.
Theorem 2 Let x₁, ..., xₙ be n linearly independent solutions of (7). Then the general solution of (7) is
\[ x = c_1 x_1 + \cdots + c_n x_n, \qquad c_1, \ldots, c_n \in \mathbb{R}. \]
Proof. Applying the previous Proposition, we obtain that the set of solutions of the linear homogeneous differential equation (7) is ker L which, further, using Linear Algebra, is a linear subspace of Cⁿ(I). With all these in mind, note that, in order to complete the proof of our theorem, it remains to prove that the linear space ker L has dimension n. We know that the Euclidean space Rⁿ has dimension n. We intend to find an isomorphism between Rⁿ and ker L because, as we know from Linear Algebra, an isomorphism between linear spaces preserves the dimension. Let us introduce a notation first. Let t₀ ∈ I be fixed. For any η ∈ Rⁿ, whose components are denoted η₁, ..., ηₙ, we know by Theorem 1 that the IVP (6) has a unique solution. Denote this solution by φ(·, η). It is not difficult to see that φ(·, η) ∈ ker L. In this way we defined the bijective map
\[ \Phi : \mathbb{R}^n \to \ker L, \qquad \Phi(\eta) = \varphi(\cdot, \eta). \]
It remains to prove that the map Φ : Rⁿ → ker L, Φ(η) = φ(·, η), is linear. Let η, ζ ∈ Rⁿ and α, β ∈ R, and denote φ₁ = Φ(η), φ₂ = Φ(ζ), φ₃ = Φ(αη + βζ) and φ₄ = αφ₁ + βφ₂. By the definition of Φ we have
\[ L\varphi_1 = 0, \quad (\varphi_1(t_0), \varphi_1'(t_0), \ldots, \varphi_1^{(n-1)}(t_0)) = \eta, \tag{8} \]
\[ L\varphi_2 = 0, \quad (\varphi_2(t_0), \varphi_2'(t_0), \ldots, \varphi_2^{(n-1)}(t_0)) = \zeta, \tag{9} \]
\[ L\varphi_3 = 0, \quad (\varphi_3(t_0), \varphi_3'(t_0), \ldots, \varphi_3^{(n-1)}(t_0)) = \alpha\eta + \beta\zeta. \tag{10} \]
Since L is linear, Lφ₄ = αLφ₁ + βLφ₂ = 0 and, using (8) and (9),
\[ (\varphi_4(t_0), \varphi_4'(t_0), \ldots, \varphi_4^{(n-1)}(t_0)) = \alpha\eta + \beta\zeta. \]
If we compare this last relation with (10) we see that both functions φ₃ and φ₄ are solutions of the same IVP, which, by Theorem 1, has a unique solution. Hence φ₃ = φ₄, that is,
\[ \Phi(\alpha\eta + \beta\zeta) = \alpha\,\Phi(\eta) + \beta\,\Phi(\zeta). \]
With this we finished proving that Φ is a linear map.
As we discussed in the beginning of this proof, we can conclude now that the
set of solutions of the linear homogeneous differential equation (7), which coincides
with ker L, is a linear space of dimension n. The hypothesis of our theorem is that
x1 , ..., xn are linearly independent solutions of (7), which, in other words, means that
{x1 , ..., xn } is a basis of ker L. Then
\[ \ker L = \{\, c_1 x_1 + \cdots + c_n x_n : c_1, \ldots, c_n \in \mathbb{R} \,\}, \]
which is exactly the conclusion of the theorem.
First order linear differential equations. Consider the first order linear nonhomogeneous differential equation
\[ x' + a(t)\,x = f(t), \tag{11} \]
where a, f ∈ C(I), and write also the associated linear homogeneous equation,
\[ x' + a(t)\,x = 0. \tag{12} \]
As one can easily see, the null function is also a solution of this IVP which, by Theorem 1, has a unique solution. Hence φ ≡ 0.
(ii) Since a is a continuous function on I, the hypothesis a(t) ≠ 0 for all t ∈ I assures that either a(t) > 0 for all t ∈ I or a(t) < 0 for all t ∈ I. Applying (i) we deduce that a similar result holds for φ. Hence, denoting by A an antiderivative of a on I, the general solution of (12) is
\[ x(t) = c\,e^{-A(t)}, \qquad c \in \mathbb{R}. \tag{13} \]
Step 4. We write the solution explicitly. We have |x(t)| = e^{−A(t)+c}, hence x(t) = ±e^c e^{−A(t)} for an arbitrary constant c ∈ R. Now we note that {±e^c : c ∈ R} = R* (the nonzero reals). Then we can write equivalently x(t) = c e^{−A(t)} for an arbitrary constant c ∈ R*.
Step 5. The solution x = 0 found at Step 1 and the family of solutions x(t) = c e^{−A(t)}, c ∈ R*, found at Step 4 can be written together in the formula
\[ x(t) = c\,e^{-A(t)}, \qquad c \in \mathbb{R}. \]
Now we present the Lagrange method, also called the variation of the constant method, used to find a particular solution of the first order linear nonhomogeneous differential equation (11). This consists in looking for some function φ ∈ C¹(I) with the property that
\[ x_p = \varphi(t)\,e^{-A(t)} \]
is a solution of (11). After replacing this form in (11) we obtain φ′(t) = e^{A(t)} f(t). Hence such a function can be written as
\[ \varphi(t) = \int_{t_0}^{t} e^{A(s)} f(s)\,ds. \]
Consequently, we found a particular solution of (11),
\[ x_p = \int_{t_0}^{t} e^{-A(t)+A(s)} f(s)\,ds. \]
The next result now follows by applying the Fundamental Theorem for linear nonhomogeneous differential equations.
Proposition 4 The general solution of the first order linear nonhomogeneous differential equation (11) is
\[ x(t) = c\,e^{-A(t)} + \int_{t_0}^{t} e^{-A(t)+A(s)} f(s)\,ds, \qquad c \in \mathbb{R}. \]
Alternatively, multiplying (11) by the integrating factor e^{A(t)} and integrating, we obtain
\[ x(t)\,e^{A(t)} = \int_{t_0}^{t} e^{A(s)} f(s)\,ds + c, \qquad c \in \mathbb{R}. \]
Writing explicitly the unknown x(t) we obtain the same expression of the general solution as in Proposition 4.
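A quick sanity check of Proposition 4 on an example of my own: take a(t) = 1 and f(t) = 1 on I = R with t₀ = 0, so that A(t) = t and the formula gives x(t) = c e^{−t} + 1 − e^{−t}; we verify numerically that x′ + x = 1.

```python
# Sanity check (my own example) of Proposition 4 with a(t) = 1, f(t) = 1, t0 = 0.
# Then A(t) = t and the general solution formula gives
#   x(t) = c e^{-t} + integral_0^t e^{-t+s} ds = c e^{-t} + 1 - e^{-t},
# which should satisfy x' + x = 1; we test this with a central difference.
import math

c = 5.0
def x(t):
    return c * math.exp(-t) + 1.0 - math.exp(-t)

h = 1e-6
for t in [-1.0, 0.0, 2.0]:
    dx = (x(t + h) - x(t - h)) / (2*h)
    assert abs(dx + x(t) - 1.0) < 1e-6
print("x' + x = 1 holds at the sampled points")
```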
Linear homogeneous differential equations with constant coefficients. Consider
\[ x^{(n)} + a_1 x^{(n-1)} + \cdots + a_n x = 0, \tag{14} \]
where a₁, ..., aₙ ∈ R are constants, and consider again the linear map L (defined in the beginning) corresponding to (14).
We start by noticing that, when looking for solutions of (14) of the form
\[ x = e^{rt} \]
(with r ∈ R that has to be found), we obtain that r must be a root of the nth degree algebraic equation
\[ r^n + a_1 r^{n-1} + \cdots + a_{n-1} r + a_n = 0. \tag{15} \]
Consider now the complex-valued function e^{(α+iβ)t}, t ∈ R, where α, β ∈ R are fixed real numbers. Using Euler's formula we know that its real and, respectively, imaginary parts are
\[ u(t) = e^{\alpha t}\cos\beta t, \qquad v(t) = e^{\alpha t}\sin\beta t. \]
Moreover, one can check that
\[ (e^{rt})' = r\,e^{rt}, \qquad t \in \mathbb{R}. \]
This last formula tells us that the derivatives of the function e^{rt}, where r ∈ C is fixed, are computed using the same rules as when r ∈ R. Hence, in the case that r = α + iβ is a root of (15), the complex-valued function e^{rt} verifies (14). We thus have
Proposition 6 If r = α + iβ, with β ≠ 0, is a root of (15), then e^{αt} cos βt and e^{αt} sin βt are solutions of (14).
We notice that, since the polynomial l(r) has real coefficients, in the case that r = α + iβ with β ≠ 0 is a root of l, its conjugate r̄ = α − iβ is a root, too. According to the previous proposition, this gives that e^{αt} cos βt and e^{αt} sin βt are solutions of (14). But this is no new information. In fact, it is usually said that the two solutions indicated in the proposition come from the two roots α ± iβ.
Hence we have seen that any complex root provides a solution of (14). But still
there is the possibility that the solutions obtained are not enough, since we know
by the Fundamental Theorem of Algebra that a polynomial of degree n has indeed
n roots, but counted with their multiplicity. We will show the following.
Proposition 7 If r ∈ C is a root of multiplicity m of the polynomial l, then tᵏ e^{rt} verifies (14) for any k ∈ {0, 1, 2, ..., m − 1}.
Proof. We remind first that r ∈ C is a root of multiplicity m of the polynomial l if and only if
\[ l(r) = l'(r) = \cdots = l^{(m-1)}(r) = 0. \]
By direct calculations we obtain, for each k ∈ {0, 1, 2, ..., m − 1}, that
\[ L(t^k e^{rt}) = e^{rt} \sum_{j=0}^{k} \binom{k}{j}\, l^{(j)}(r)\, t^{k-j}, \]
which, in the case that r ∈ C is a root of multiplicity m, gives that L(tᵏ e^{rt}) = 0.
We describe now The characteristic equation method for the linear homogeneous differential equation with constant coefficients (14).
Step 1. Write the characteristic equation (15). Note that it is an algebraic equation of degree n (equal to the order of the differential equation) and with the same
coefficients as the differential equation.
Step 2. Find all the n roots in C of (15), counted with their multiplicity.
Step 3. Associate n functions obeying the following rules.
For r = λ a real root of order m we take the m functions
\[ e^{\lambda t},\ t e^{\lambda t},\ \ldots,\ t^{m-1} e^{\lambda t}. \]
For r = α + iβ and r̄ = α − iβ roots of order m we take the 2m functions
\[ e^{\alpha t}\cos\beta t,\ e^{\alpha t}\sin\beta t,\ \ldots,\ t^{m-1}e^{\alpha t}\cos\beta t,\ t^{m-1}e^{\alpha t}\sin\beta t. \]
The following useful result holds true: the n functions associated in Step 3 are linearly independent solutions of (14), so, by Theorem 2, the general solution of (14) is the set of their linear combinations.
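As a worked instance of Steps 1-3 (the concrete equation is my own choice), consider x″ − 2x′ + 5x = 0: the characteristic equation is r² − 2r + 5 = 0, with roots 1 ± 2i, so Step 3 yields eᵗ cos 2t and eᵗ sin 2t. A numerical confirmation:

```python
# Worked instance (my own example) of the characteristic equation method for
#   x'' - 2x' + 5x = 0.
# Step 1: r^2 - 2r + 5 = 0.  Step 2: roots r = 1 +/- 2i.  Step 3: the associated
# functions are e^t cos 2t and e^t sin 2t.
import cmath, math

r1 = (2 + cmath.sqrt(4 - 20)) / 2
assert abs(r1 - (1 + 2j)) < 1e-12

def u(t):
    # candidate solution e^t cos 2t
    return math.exp(t) * math.cos(2*t)

# check u'' - 2u' + 5u = 0 with finite differences
h = 1e-4
for t in [0.0, 0.3, 1.0]:
    d1 = (u(t + h) - u(t - h)) / (2*h)
    d2 = (u(t + h) - 2*u(t) + u(t - h)) / h**2
    assert abs(d2 - 2*d1 + 5*u(t)) < 1e-4
print("e^t cos 2t solves x'' - 2x' + 5x = 0")
```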
The method of undetermined coefficients. Consider now the linear nonhomogeneous differential equation
\[ x^{(n)} + a_1 x^{(n-1)} + \cdots + a_n x = f(t), \tag{16} \]
with constant coefficients a₁, ..., aₙ and with f ∈ C(R) of special form. Denote again the characteristic polynomial l(r) = rⁿ + a₁r^{n−1} + ... + aₙ.
We consider those functions f which can be solutions to some linear homogeneous differential equation with constant coefficients. More exactly, the function f can be either of the form Pₖ(t)e^{αt} or Pₖ(t)e^{αt} cos βt + P̃ₖ(t)e^{αt} sin βt, where Pₖ(t) and P̃ₖ(t) denote some polynomials in t of degree at most k. Roughly speaking, the idea behind this method is that (16) has a particular solution of the same form as f(t). The following rules apply.
Assume that f(t) = Pₖ(t)e^{αt}.
In the case that r = α is not a root of the characteristic polynomial l(r), then xₚ = Qₖ(t)e^{αt} for some polynomial Qₖ(t) of degree at most k whose coefficients have to be determined.
In the case that r = α is a root of multiplicity m of the characteristic polynomial l(r), then xₚ = t^m Qₖ(t)e^{αt} for some polynomial Qₖ(t) of degree at most k whose coefficients have to be determined.
Assume now that f(t) = Pₖ(t)e^{αt} cos βt + P̃ₖ(t)e^{αt} sin βt.
In the case that r = α + iβ is not a root of the characteristic polynomial l(r), then xₚ = Qₖ(t)e^{αt} cos βt + Q̃ₖ(t)e^{αt} sin βt for some polynomials Qₖ(t) and Q̃ₖ(t) of degree at most k whose coefficients have to be determined.
In the case that r = α + iβ is a root of multiplicity m of the characteristic polynomial l(r), then xₚ = t^m [Qₖ(t)e^{αt} cos βt + Q̃ₖ(t)e^{αt} sin βt] for some polynomials Qₖ(t) and Q̃ₖ(t) of degree at most k whose coefficients have to be determined.
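A worked instance of the first rule (the equation is my own example): for x″ − x = e^{2t} we have l(r) = r² − 1 and r = 2 is not a root, so xₚ = q e^{2t} with a single undetermined coefficient, and substitution gives q = 1/l(2) = 1/3.

```python
# Worked instance (my own example) of the first rule: for x'' - x = e^{2t},
# l(r) = r^2 - 1 and r = 2 is not a root, so we try xp = q e^{2t}.
# Substituting gives (4q - q) e^{2t} = e^{2t}, hence q = 1/l(2) = 1/3.
import math

q = 1.0 / (2**2 - 1)

def xp(t):
    return q * math.exp(2*t)

h = 1e-5
for t in [0.0, 0.5, 1.0]:
    d2 = (xp(t + h) - 2*xp(t) + xp(t - h)) / h**2
    assert abs(d2 - xp(t) - math.exp(2*t)) < 1e-3
print("xp = e^{2t}/3 is a particular solution of x'' - x = e^{2t}")
```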
Chapter 3
The dynamical system generated by a differential equation

Consider the autonomous differential equation
\[ \dot{x} = f(x) \tag{17} \]
together with the associated IVP
\[ \dot{x} = f(x), \qquad x(0) = \eta. \tag{18} \]
It is easy to see that (i) holds true. In order to prove (ii), let s and η be fixed and let x₁, x₂ be the two functions given by
\[ x_1(t) = \varphi(t+s, \eta) \quad \text{and} \quad x_2(t) = \varphi(t, \varphi(s, \eta)). \]
By the definition of the flow, x₂ is a solution of the IVP
\[ \dot{x} = f(x), \qquad x(0) = \varphi(s, \eta). \tag{19} \]
At the same time we have that x₁(0) = φ(s, η) and
\[ \dot{x}_1(t) = \frac{d}{dt}\big(\varphi(t+s, \eta)\big) = \dot{\varphi}(t+s, \eta) = f(\varphi(t+s, \eta)) = f(x_1(t)). \]
Hence x₁ is also a solution of the IVP (19). As a consequence of Theorem 6, this IVP has a unique solution, thus the two functions x₁ and x₂ must be equal. The proof of (iii) is beyond the aim of these lectures.
When working with the flow, η is said to be the initial state of the dynamical system generated by equation (17), while φ(t, η) is said to be the state at time t. Accordingly, the space Rⁿ to which the states belong is called the state space of the dynamical system generated by (17). It is also called the phase space.
We say that η* ∈ Rⁿ is an equilibrium state/point (or critical point, or stationary point, or steady-state solution) of the dynamical system generated by (17) when
\[ \varphi(t, \eta^*) = \eta^* \quad \text{for any } t \in \mathbb{R}. \]
It is important to notice that the equilibria of (17) can be found by solving in Rⁿ the equation
\[ f(x) = 0. \]
The orbit of the initial state η is
\[ \gamma(\eta) = \{\, \varphi(t, \eta) : t \in I_\eta \,\}. \]
The positive orbit of the initial state η is
\[ \gamma^+(\eta) = \{\, \varphi(t, \eta) : t \in I_\eta,\ t > 0 \,\}. \]
The particular case n = 1. In this case the state space is R and we say that equation (17) is scalar. We start with an example.
Example 1. Study the dynamical system generated by the scalar equation ẋ = x.
The state space is R. The flow is
\[ \varphi : \mathbb{R}\times\mathbb{R} \to \mathbb{R}, \qquad \varphi(t, \eta) = \eta\,e^{t}. \]
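For this explicit flow, the property (ii) proved above, φ(t + s, η) = φ(t, φ(s, η)), can be checked directly (a small sketch of my own):

```python
# Direct check (my own) for the flow phi(t, eta) = eta e^t of Example 1:
# phi(t + s, eta) = phi(t, phi(s, eta)), and phi(0, eta) = eta.
import math

def phi(t, eta):
    return eta * math.exp(t)

assert phi(0.0, 7.0) == 7.0
for (t, s, eta) in [(0.3, -1.2, 2.0), (1.0, 1.0, -0.5)]:
    assert abs(phi(t + s, eta) - phi(t, phi(s, eta))) < 1e-12
print("flow properties hold for phi(t, eta) = eta e^t")
```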
Proof. (i) We prove first the second statement. Since η is not an equilibrium point, we have that either f(η) > 0 or f(η) < 0. We consider the case when f(η) > 0 (the other case is similar). Then we have that (d/dt)φ(0, η) = f(η) > 0. We assume by contradiction that there exists t₁ such that (d/dt)φ(t₁, η) ≤ 0. We denote η₁ = φ(t₁, η). Since f(η) > 0 and f(η₁) ≤ 0, it follows that there exists η* between η and η₁ such that f(η*) = 0. But the function φ(·, η) is continuous on the open interval (0, t₁), or (t₁, 0). Hence it takes all the values between η and η₁. This means that there exists t₂ such that φ(t₂, η) = η*. We consider now the IVP
\[ \dot{x} = f(x), \qquad x(t_2) = \eta^* \]
and see that it has two solutions: φ(·, η) and the constant function η*. This fact contradicts the unicity property.
The first statement follows from the fact that γ(η) is the image of the continuous and strictly monotone function φ(·, η) which, by Theorem 6, is defined on an open interval.
(ii) In these hypotheses we have that φ(0, η) − φ(0, ζ) < 0. Assume by contradiction that there exists t₁ such that φ(t₁, η) − φ(t₁, ζ) ≥ 0. From here, using the continuity of the function φ(t, η) − φ(t, ζ), we deduce that there exists t₂ such that φ(t₂, η) − φ(t₂, ζ) = 0. We consider now the IVP
\[ \dot{x} = f(x), \qquad x(t_2) = \varphi(t_2, \eta) \]
and see that it has two different solutions: φ(·, η) and φ(·, ζ). This fact contradicts the unicity property.
(iii) The function φ(·, η) is a solution of ẋ = f(x), hence
\[ \frac{d}{dt}\varphi(t, \eta) = f(\varphi(t, \eta)). \tag{20} \]
Suppose now that
\[ \lim_{t\to\infty} \varphi(t, \eta) = \eta^* \quad \text{and} \quad \lim_{t\to\infty} \frac{d}{dt}\varphi(t, \eta) = 0. \tag{21} \]
Passing to the limit as t → ∞ in (20) and taking into account equations (21), we obtain that
\[ 0 = f(\eta^*), \]
which means that η* must be an equilibrium point. The proof of (iv) is similar.
As a consequence of the above result we give the following procedure, useful to represent the phase portrait of any scalar dynamical system ẋ = f(x).
Step 1. Find all the equilibria, i.e. solve f(x) = 0.
Step 2. Represent the equilibria on the state space, R. The orbits are the ones corresponding to the equilibria and the open intervals of R delimited by the equilibria.
Step 3. Determine the sign of f on each orbit. According to this sign, insert an arrow on each orbit. If the sign is +, the arrow must indicate that x increases, while if the sign is −, the arrow must indicate that x decreases.
Example 2. Consider the differential equation ẋ = x − x³.
The state space is R.
The equilibrium points are −1, 0, 1.
The orbits are (−∞, −1), {−1}, (−1, 0), {0}, (0, 1), {1}, (1, ∞).
The function f(x) = x − x³ is positive on (−∞, −1), negative on (−1, 0), positive on (0, 1) and negative on (1, ∞).
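The three-step procedure can be organized in a few lines of code. In this sketch of my own, Step 1 is done by hand for f(x) = x − x³, and the code only carries out the sign analysis of Steps 2-3:

```python
# A sketch (my own) of the three-step procedure for x' = x - x^3. Step 1 is done
# by hand; the code carries out the sign analysis of Steps 2-3.
def f(x):
    return x - x**3

equilibria = [-1.0, 0.0, 1.0]            # Step 1: solutions of f(x) = 0
samples = [-2.0, -0.5, 0.5, 2.0]         # one point in each open interval (Step 2)
signs = ['+' if f(p) > 0 else '-' for p in samples]
print(signs)  # ['+', '-', '+', '-']: arrows point right, left, right, left
```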
Example 3. How to read a phase portrait? Assume that we see a phase portrait of some scalar differential equation ẋ = f(x) and note that, for example, the open bounded interval (a, b) is an orbit such that the arrow on it indicates to the right. With this information alone we can deduce some important properties of the flow of the differential equation having this phase portrait.
Let η ∈ (a, b) be a fixed initial state. Then γ(η) = (a, b), which means that the image of the function φ(·, η) is the open bounded interval (a, b) (we used only the definition of the orbit). By Theorem 6, since φ(·, η) is bounded, we must have that its interval of definition is R. The fact that the arrow indicates to the right provides the information that the function φ(·, η) is strictly increasing. We know that a continuous increasing function defined on the interval (−∞, ∞) whose image is the interval (a, b) must have limit a as t → −∞ and limit b as t → ∞, hence lim_{t→−∞} φ(t, η) = a and lim_{t→∞} φ(t, η) = b. By Lemma 1 we deduce that a and b must be equilibrium points.
Linear systems. Consider the linear system
\[ \dot{x} = Ax, \tag{22} \]
where the matrix A ∈ Mₙ(R) is called the matrix of the coefficients of the linear system (22). We assume that
\[ \det A \neq 0, \]
so that the only equilibrium point of (22) is η* = 0 (here 0 denotes the null vector of Rⁿ).
Exercise. The damped pendulum is described by the equation
\[ \ddot{\theta} + \frac{\lambda}{m}\,\dot{\theta} + \frac{g}{L}\,\sin\theta = 0, \]
where λ > 0 is the damping coefficient, m is the mass of the bob, L > 0 is the length of the rod and g > 0 is the gravity constant. What happens when λ = 0?
Example 1. Consider the system
ẋ = −x,
ẏ = −2y.
The matrix of the system is A = diag(−1, −2), which has the eigenvalues λ₁ = −1 and λ₂ = −2. Hence the equilibrium point at the origin is a stable node. In order to find the flow we have to consider the IVP
\[ \dot{x} = -x, \quad \dot{y} = -2y, \qquad x(0) = \eta_1, \quad y(0) = \eta_2, \]
whose solution gives the flow φ(t, η) = (η₁ e⁻ᵗ, η₂ e⁻²ᵗ), t ∈ R. In other words, the orbit is the curve in the plane xOy of parametric equations
\[ x = \eta_1 e^{-t}, \quad y = \eta_2 e^{-2t}, \quad t \in \mathbb{R}. \]
Note that the parameter t can be eliminated, obtaining the cartesian equation
\[ \eta_1^2\, y = \eta_2\, x^2, \]
which, in general, is an equation of a parabola with the vertex at the origin. In the special case η₁ = 0 this is the equation x = 0, that is the Oy axis, while in the special case η₂ = 0 this is the equation y = 0, that is the Ox axis. Note that each orbit lies on one of these planar curves, but it is not the whole parabola or the whole line. More precisely, we have
γ(η) = {(x, y) ∈ R² : η₁² y = η₂ x², η₁x > 0, η₂y > 0} when η₁η₂ ≠ 0,
γ(η) = {(0, y) ∈ R² : η₂y > 0} when η₁ = 0, η₂ ≠ 0,
γ(η) = {(x, 0) ∈ R² : η₁x > 0} when η₁ ≠ 0, η₂ = 0,
γ(0) = {0}.
On each orbit the arrows must point toward the origin.
Example 2. Consider the system
ẋ = x,
ẏ = −y.
The matrix of the system is
\[ A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \]
which has the eigenvalues λ₁ = 1 and λ₂ = −1. Hence the equilibrium point at the origin is a saddle, which is unstable.
In order to find the flow we have to consider the IVP
\[ \dot{x} = x, \quad \dot{y} = -y, \qquad x(0) = \eta_1, \quad y(0) = \eta_2, \]
whose solution gives the flow φ(t, η) = (η₁ eᵗ, η₂ e⁻ᵗ), t ∈ R. In other words, the orbit is the curve in the plane xOy of parametric equations
\[ x = \eta_1 e^{t}, \quad y = \eta_2 e^{-t}, \quad t \in \mathbb{R}. \]
Note that the parameter t can be eliminated, obtaining the cartesian equation
\[ xy = \eta_1 \eta_2, \]
which, in general, is an equation of a hyperbola. More precisely, we have
γ(η) = {(x, y) ∈ R² : xy = η₁η₂, η₁x > 0, η₂y > 0} when η₁η₂ ≠ 0,
γ(η) = {(0, y) ∈ R² : η₂y > 0} when η₁ = 0, η₂ ≠ 0,
γ(η) = {(x, 0) ∈ R² : η₁x > 0} when η₁ ≠ 0, η₂ = 0,
γ(0) = {0}.
On each orbit the arrows must point such that x moves away from 0, while y moves toward 0.
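The eigenvalue computations in these examples follow one pattern, which can be packaged as a small classifier (my own sketch; the labels are simplifications that ignore borderline cases such as repeated eigenvalues):

```python
# A compact classifier (my own sketch) for the equilibrium at the origin of a
# planar linear system x' = Ax, A = ((a, b), (c, d)), det A != 0. The labels are
# simplifications: borderline cases such as repeated eigenvalues are not split off.
import cmath

def classify(a, b, c, d):
    tr, det = a + d, a*d - b*c
    assert det != 0
    disc = cmath.sqrt(tr*tr - 4*det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:                       # complex eigenvalues
        if abs(tr) < 1e-12:
            return "center"
        return "unstable focus" if tr > 0 else "stable focus"
    r1, r2 = l1.real, l2.real                      # real eigenvalues
    if r1 * r2 < 0:
        return "saddle"
    return "unstable node" if r1 > 0 else "stable node"

print(classify(1, 0, 0, -1))   # saddle          (Example 2)
print(classify(0, -1, 1, 0))   # center          (Example 3)
print(classify(1, -1, 1, 1))   # unstable focus  (Example 4)
```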
Example 3. Consider the system
ẋ = −y,
ẏ = x.
The matrix of the system is
\[ A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \]
which has the eigenvalues λ₁,₂ = ±i. Hence the equilibrium point at the origin is a center, which is stable but not asymptotically stable.
In order to find the flow we have to consider the IVP
\[ \dot{x} = -y, \quad \dot{y} = x, \qquad x(0) = \eta_1, \quad y(0) = \eta_2, \]
whose solution gives the flow φ(t, η) = (η₁ cos t − η₂ sin t, η₁ sin t + η₂ cos t), t ∈ R. In other words, the orbit is the curve in the plane xOy of parametric equations
\[ x = \eta_1\cos t - \eta_2\sin t, \quad y = \eta_1\sin t + \eta_2\cos t, \quad t \in \mathbb{R}. \]
Note that the parameter t can be eliminated, obtaining the cartesian equation
\[ x^2 + y^2 = \eta_1^2 + \eta_2^2, \]
which, in general, is an equation of a circle with the center at the origin and radius √(η₁² + η₂²). More precisely, we have
γ(η) = {(x, y) ∈ R² : x² + y² = η₁² + η₂²} when η₁² + η₂² ≠ 0,
γ(0) = {0}.
On each orbit the arrows must point in the trigonometric sense.
Example 4. Consider the system
ẋ = x − y,
ẏ = x + y.
The matrix of the system is
\[ A = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \]
which has the eigenvalues λ₁,₂ = 1 ± i. Hence the equilibrium point at the origin is an unstable focus.
In order to find the flow we have to consider the IVP
\[ \dot{x} = x - y, \quad \dot{y} = x + y, \qquad x(0) = \eta_1, \quad y(0) = \eta_2. \]
The solution can be conveniently described in polar coordinates (ρ, θ), writing
\[ x(t) = \rho(t)\cos\theta(t), \qquad y(t) = \rho(t)\sin\theta(t), \tag{23} \]
where ρ(t) > 0 for any t ∈ R. We can write equivalently
\[ \rho^2(t) = x^2(t) + y^2(t), \qquad \tan\theta(t) = \frac{y(t)}{x(t)}. \tag{24} \]
Our aim is to find a system satisfied by the new unknowns ρ and θ; in this system their derivatives will be involved. We will show two ways to find this new system.
Method 1. We take the derivatives in the equalities (23) and obtain
\[ \dot{x} = \dot{\rho}\cos\theta - \rho\,\dot{\theta}\sin\theta, \qquad \dot{y} = \dot{\rho}\sin\theta + \rho\,\dot{\theta}\cos\theta. \]
After we replace ẋ = x − y, ẏ = x + y, we obtain
\[ \dot{\rho} = \rho, \qquad \dot{\theta} = 1, \]
hence ρ(t) = ρ₀ eᵗ and θ(t) = θ₀ + t.
Method 2. We take the derivatives in the equalities (24) and obtain
\[ \dot{\theta}\,\frac{1}{\cos^2\theta} = \frac{\dot{y}x - y\dot{x}}{x^2}, \qquad 2\rho\dot{\rho} = 2x\dot{x} + 2y\dot{y}. \]
After we replace ẋ = x − y, ẏ = x + y, we obtain
\[ \rho\dot{\rho} = x^2 - xy + xy + y^2 = \rho^2, \qquad \dot{\theta}\,\frac{1}{\cos^2\theta} = \frac{x^2 + xy - xy + y^2}{x^2} = \frac{\rho^2}{\rho^2\cos^2\theta}, \]
hence again ρ̇ = ρ and θ̇ = 1.
First integrals. Consider the planar system
\[ \dot{x} = f_1(x, y), \qquad \dot{y} = f_2(x, y). \tag{25} \]
New questions arise: How to check that a given function is a first integral? How to find a first integral? The answer to the first question is given by the following result.
Proposition 8 A nonconstant C¹ function H : U → R is a first integral in U of (25) if and only if it satisfies the first order linear partial differential equation
\[ f_1(x, y)\,\frac{\partial H}{\partial x}(x, y) + f_2(x, y)\,\frac{\partial H}{\partial y}(x, y) = 0, \quad \text{for any } (x, y) \in U. \tag{26} \]
For instance, for the system ẋ = y, ẏ = −2x, condition (26) reads
\[ y\,\frac{\partial H}{\partial x}(x, y) - 2x\,\frac{\partial H}{\partial y}(x, y) = 0. \]
For the second question, on a region where f₁(x, y) ≠ 0 we consider
\[ \frac{dy}{dx} = \frac{f_2(x, y)}{f_1(x, y)}, \tag{27} \]
which is called the cartesian differential equation of the orbits of (25). After the integration of (27) we look for a function of two variables H such that we can write the general solution of (27) as H(x, y) = c, c ∈ R. This H is a first integral of (25).
Example 5. We come back to the system ẋ = y, ẏ = −2x. This time we want to find a first integral. The previous statement says that we need to integrate the equation
\[ \frac{dy}{dx} = \frac{-2x}{y}. \]
This is separable, and it can be written as y dy = −2x dx. After integration we obtain y²/2 = −x² + c, c ∈ R. Hence H(x, y) = y²/2 + x² is a first integral in R².
As the previous example shows, the first integral is not unique. Having one first integral, we can find many more, for example by multiplying it by any nonzero constant.
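Proposition 8 makes the verification of Example 5 a one-line computation. A direct check of my own that H(x, y) = y²/2 + x² satisfies (26) for ẋ = y, ẏ = −2x:

```python
# Direct check (my own, following Proposition 8) that H(x, y) = y^2/2 + x^2 is a
# first integral of x' = y, y' = -2x: evaluate f1*H_x + f2*H_y at sample points.
def f1(x, y): return y
def f2(x, y): return -2*x
def Hx(x, y): return 2*x        # dH/dx
def Hy(x, y): return y          # dH/dy

for (x, y) in [(1.0, 0.0), (0.5, -2.0), (-3.0, 1.5)]:
    assert f1(x, y)*Hx(x, y) + f2(x, y)*Hy(x, y) == 0.0
print("H satisfies (26), so it is constant along the orbits")
```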
Consider the IVP
\[ y' = f(x, y), \qquad y(x_0) = y_0. \tag{28} \]

© 2015 Adriana Buică, Numerical methods for differential equations
Fix x* > x₀ and n ∈ N*, and divide the interval [x₀, x*] into n subintervals of equal length by the points xₖ = x₀ + kh, k = 0, ..., n, where the constant step size is
\[ h = \frac{x^* - x_0}{n}. \]
The Euler method formula for the IVP (28) with constant step size h is
\[ y_{k+1} = y_k + h\,f(x_k, y_k), \qquad k = 0, \ldots, n-1. \]
The Runge–Kutta method formula for the IVP (28) with constant step size h is
\[ y_{k+1} = y_k + \frac{h}{2}\,f(x_k, y_k) + \frac{h}{2}\,f\big(x_{k+1},\, y_k + h\,f(x_k, y_k)\big), \qquad k = 0, \ldots, n-1. \]
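Both formulas are only a few lines of code. A minimal implementation of my own, tried on y′ = y, y(0) = 1 over [0, 1], where the exact value is e:

```python
# Minimal implementation (my own) of the two formulas above, tried on
# y' = y, y(0) = 1 over [0, 1], where the exact value is e.
import math

def euler(f, x0, y0, xstar, n):
    h = (xstar - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

def rk2(f, x0, y0, xstar, n):
    h = (xstar - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = y + (h/2)*f(x, y) + (h/2)*f(x + h, y + h*f(x, y))
        x = x + h
    return y

f = lambda x, y: y
print(abs(euler(f, 0.0, 1.0, 1.0, 100) - math.e))  # error around 1e-2
print(abs(rk2(f, 0.0, 1.0, 1.0, 100) - math.e))    # error around 5e-5
```

With the same number of steps the Runge–Kutta formula is markedly more accurate, which is why it is usually preferred despite needing two evaluations of f per step.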
Note that the starting point y₀ is the one that appears in the initial condition in (28). Hence y₀ = φ(x₀) is an exact value. In fact, only in theory is y₀ an exact value since, practically, when y₀ has too many decimals (for example, when it is an irrational number), a human or even a computer uses only a truncation of it. When applying a numerical method, the errors are due to the formula itself and to the truncations made. Moreover, the errors accumulate at each step; thus, in general, the errors are larger as the interval [x₀, x*] is larger.
When the step size h is smaller, the partition of the interval [x₀, x*] is finer. In general, the errors are smaller as the step size h is smaller.
Exercise 1. We consider the IVP y′ = y, y(0) = 1, whose solution we know is φ : R → R, φ(x) = eˣ. Apply the Euler numerical method with a constant step size h > 0 on the interval [0, x*], where x* > 0 is fixed. Prove that
\[ y_k = (1+h)^k, \qquad k = 0, \ldots, n, \quad \text{where } h = \frac{x^*}{n}. \]
Prove that yₙ → φ(x*) = e^{x*} as n → ∞.
In the rest of the lecture we present two ideas on how the Euler numerical formula can be derived.
The first idea uses the notion of Taylor polynomial. We know that the Taylor polynomial around a point a of some function φ is a good approximation of it at least in a small neighborhood of a. The higher the degree of the Taylor polynomial, the better the approximation. But we consider only the Taylor polynomial of degree 1, that is,
\[ \varphi(a) + (x-a)\,\varphi'(a). \]
With this we approximate φ(x) for x sufficiently close to a. Now consider that φ is the exact solution of the IVP (28). Remind that this implies
\[ \varphi'(x) = f(x, \varphi(x)). \]
Instead of a we take a point xₖ from the partition of the interval [x₀, x*] and instead of x we take x_{k+1}, which must be close to xₖ. Denote, as before, an approximation of φ(xₖ) by yₖ. Then
\[ \varphi(x_k) + (x_{k+1} - x_k)\,f(x_k, \varphi(x_k)) \]
is an approximation for φ(x_{k+1}). But this formula is not practical, since it uses the exact value φ(xₖ), which is not known. That is why it is replaced by the approximation yₖ. After this we obtain
\[ y_{k+1} = y_k + (x_{k+1} - x_k)\,f(x_k, y_k). \]
The usefulness of the direction field comes from the following property: the slope of the direction field at some given point is the slope of the solution curve that passes through that point. More precisely, let the point (x₁, y₁) be given and let φ(x) be a solution of y′ = f(x, y) whose graph passes through this point. We know that the slope of the direction field is f(x₁, y₁) and that the slope of the solution curve is φ′(x₁). We have to prove that
\[ \varphi'(x_1) = f(x_1, y_1). \]
Indeed, since the graph of φ passes through (x₁, y₁) we have that φ(x₁) = y₁, and since φ is a solution of y′ = f(x, y) we have that φ′(x₁) = f(x₁, φ(x₁)). The proof is done.
Now we come back to the Euler numerical method to find an approximate solution of the IVP y′ = f(x, y), y(x₀) = y₀. The geometrical idea behind it is the following. We start at (x₀, y₀) and follow the vector of slope f(x₀, y₀) until it intersects the vertical line x = x₁ in a point (x₁, y₁). Remind that x₀, y₀, x₁ are given, and deduce that y₁ satisfies
\[ y_1 - y_0 = f(x_0, y_0)(x_1 - x_0). \]
Thus
\[ y_1 = y_0 + (x_1 - x_0)\,f(x_0, y_0). \]
Once we are at (x₁, y₁) we follow the vector of slope f(x₁, y₁) until it intersects the vertical line x = x₂ in a point (x₂, y₂) with
\[ y_2 = y_1 + f(x_1, y_1)(x_2 - x_1). \]
We proceed in the same way until the end of the interval, x*, obtaining
\[ y_{k+1} = y_k + f(x_k, y_k)(x_{k+1} - x_k). \]
Now we continue our study of the direction field with a second example, where we consider the differential equation
\[ y' = -\frac{x}{y}. \]
We will find the shape of the solution curves using the direction field. First we notice that, given m, the isocline for the slope m is the line
\[ y = -\frac{1}{m}\,x. \]
We notice that the vectors of the direction field are orthogonal to the corresponding isocline. Hence, any solution curve is orthogonal to all the lines that pass through the origin of coordinates. We deduce that a solution curve must be a circle centered at the origin.
The direction field in the phase space R² of a planar dynamical system is defined in a similar way, and we will see that it is tangent to its orbits. More precisely, let f₁, f₂ : R² → R be continuous functions and let
\[ \dot{x} = f_1(x, y), \qquad \dot{y} = f_2(x, y). \]
At each point (x, y) the direction field has slope
\[ \frac{f_2(x, y)}{f_1(x, y)}. \]
We have the following useful property: the slope of the direction field at some given point is the slope of the orbit that passes through that point. Indeed, let the point (x₁, y₁) be given and let (φ₁(t), φ₂(t)) be a solution of the system whose orbit passes through this point. We know that the vector (φ₁′(t), φ₂′(t)) is tangent to the orbit for any t. Take now t₁ such that (φ₁(t₁), φ₂(t₁)) = (x₁, y₁). Note that (φ₁′(t₁), φ₂′(t₁)) = (f₁(x₁, y₁), f₂(x₁, y₁)) and that this vector is tangent to the orbit at (x₁, y₁). Hence the slope of the orbit at (x₁, y₁) is f₂(x₁, y₁)/f₁(x₁, y₁). The proof is done.