
NOTES ON THE EXISTENCE AND UNIQUENESS THEOREM

FOR FIRST ORDER DIFFERENTIAL EQUATIONS

I. Statement of the theorem.


We consider the initial value problem

(1.1)
\[
y'(x) = F(x, y(x)), \qquad y(x_0) = y_0.
\]

Here we assume that $F$ is a function of the two variables $(x,y)$, defined in a rectangle

(1.2)
\[
R = \{(x,y) : x_0 - a \le x \le x_0 + a,\ \ y_0 - b \le y \le y_0 + b\},
\]

and we assume that $F$ is continuous and has a continuous $y$-derivative $\partial F/\partial y$ in $R$. Note that then $F$ and $\partial F/\partial y$ are bounded in $R$; that is, there are nonnegative constants $M$ and $K$ so that

(1.3)
\[
|F(x,y)| \le M,
\]

(1.4)
\[
\Big|\frac{\partial F}{\partial y}(x,y)\Big| \le K
\]

for all $(x,y)$ in $R$.

Theorem 1. Suppose that $F$ satisfies the assumptions above. Let

(1.5)
\[
I_h(x_0) = [x_0 - h, x_0 + h], \quad\text{where } h \le \min\Big\{a, \frac{b}{M}\Big\}.
\]

Then there is a unique function $x \mapsto y(x)$, defined for $x$ in $I_h(x_0)$, with continuous first derivative, such that
\[
y'(x) = F(x, y(x)) \quad\text{for all } x \in I_h(x_0)
\]
and such that
\[
y(x_0) = y_0.
\]
The theorem makes a prediction on the length of the interval of existence: the half-length $h$ can be taken to be the smaller of the numbers $a$ and $b/M$. In most cases this lower bound is not very good, in the sense that the interval on which the solution exists may be much larger than the interval predicted by the Theorem.

Math 319 notes, A.S.

II. How to apply the theorem: An example.


Suppose you are given the problem

(2.1)
\[
y'(x) = \frac{e^{y(x)^2 - 1}}{1 - x^2 y(x)^2}, \qquad y(-2) = 1,
\]

which we likely cannot solve explicitly.

We want to find an interval on which a solution surely exists. Here our function $F$ is defined by $F(x,y) = e^{y^2-1}(1 - x^2 y^2)^{-1}$ and $x_0$, $y_0$ are given by $x_0 = -2$, $y_0 = 1$. Thus we need to pick a rectangle $R$ which is centered at $(-2, 1)$. In this rectangle we need to have good control on $F$ and $\partial F/\partial y$ (cf. (1.3), (1.4)), and so we certainly have to choose $R$ so small that it contains no points at which the denominator $1 - x^2 y^2$ vanishes. The exact choice of the rectangle is up to you, but the properties of $F$ and $\partial F/\partial y$, as required in the theorem, must be satisfied.

Let's pick $a$, $b$ small in the definition of $R$; say, let's choose $a = 1/2$ and $b = 1/4$, so that we work in the rectangle
\[
R = \{(x,y) : -5/2 \le x \le -3/2,\ \ 3/4 \le y \le 5/4\}.
\]
Notice that then for $(x,y)$ in $R$ we have $x^2 \ge 9/4$, $y^2 \ge 9/16$ and therefore $x^2 y^2 \ge 81/64$, so
\[
|1 - x^2 y^2| \ge \tfrac{81}{64} - 1 = 17/64 > 1/4 \quad\text{for } (x,y) \text{ in } R.
\]
Thus we get $|(1 - x^2 y^2)^{-1}| < 4$ and $e^{y^2 - 1} \le e^{25/16 - 1} = e^{9/16} < 3$, which implies
\[
|F(x,y)| = \Big|\frac{e^{y^2-1}}{1 - x^2 y^2}\Big| \le 3 \cdot 4 = 12 \quad\text{for } (x,y) \text{ in } R.
\]

Thus a legitimate (but non-optimal) choice for $M$ in (1.3) is $M = 12$.


To verify also (1.4) we compute
\[
\frac{\partial F}{\partial y}(x,y) = \frac{2y\,e^{y^2-1}}{1 - x^2 y^2} + \frac{e^{y^2-1}}{(1 - x^2 y^2)^2}\,2y x^2.
\]
Observe that $|2y| \le 5/2$ and $|2y x^2| \le (5/2)^3$ in $R$, and using the bounds above we can estimate, for all $(x,y)$ in $R$,
\[
\Big|\frac{\partial F}{\partial y}(x,y)\Big|
\le \Big|\frac{2y\,e^{y^2-1}}{1 - x^2 y^2}\Big| + \Big|\frac{e^{y^2-1}}{(1 - x^2 y^2)^2}\Big|\,|2y x^2|
\le \frac52 \cdot 12 + 3 \cdot 4^2 \cdot (5/2)^3 = 780.
\]
Thus we see that condition (1.4) is also satisfied, with $K = 780$.
Now if we take
\[
h \le \min\Big\{a, \frac{b}{M}\Big\} = \min\{1/2,\ (1/4)/12\} = 1/48,
\]
then by the Theorem we can be sure that the problem has exactly one solution in the interval $[-2 - h, -2 + h]$. So for example if we chose $h = .02$ (which is less than $1/48$), we would deduce that there is a unique solution in the interval $[-2.02, -1.98]$.
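As a quick sanity check on the hand computation above, one can tabulate $F$ and $\partial F/\partial y$ on a fine grid over $R$ and confirm that the chosen constants $M = 12$ and $K = 780$ are indeed not exceeded. A minimal Python sketch (the grid resolution and the helper names are our own choices, not part of the notes):

\begin{verbatim}
import numpy as np

# F(x, y) = exp(y**2 - 1) / (1 - x**2 * y**2) from the example above,
# together with the y-derivative computed by hand.
def F(x, y):
    return np.exp(y**2 - 1) / (1 - x**2 * y**2)

def dF_dy(x, y):
    return (2*y*np.exp(y**2 - 1) / (1 - x**2 * y**2)
            + np.exp(y**2 - 1) * 2*y*x**2 / (1 - x**2 * y**2)**2)

# Grid over the rectangle R = [-5/2, -3/2] x [3/4, 5/4].
xs = np.linspace(-2.5, -1.5, 401)
ys = np.linspace(0.75, 1.25, 401)
X, Y = np.meshgrid(xs, ys)

print("max |F| on grid:    ", np.abs(F(X, Y)).max())      # stays below M = 12
print("max |dF/dy| on grid:", np.abs(dF_dy(X, Y)).max())   # stays below K = 780
\end{verbatim}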
III. The initial value problem (1.1) is equivalent to an integral equation. For the proof of
existence and uniqueness one first shows the equivalence of the problem (1.1) to a seemingly more
difficult, but in fact more manageable problem of solving an integral equation.
We have

Lemma 3.1.¹ Let $x \mapsto \phi(x)$ be a function with continuous derivative, defined in the interval $I_h(x_0) = [x_0 - h, x_0 + h]$, with values in $[y_0 - b, y_0 + b]$. Then $\phi$ satisfies the initial value problem

(3.1)
\[
\phi'(x) = F(x, \phi(x)), \qquad \phi(x_0) = y_0
\]

if and only if it satisfies the integral equation

(3.2)
\[
\phi(x) = y_0 + \int_{x_0}^{x} F(t, \phi(t))\,dt.
\]

Proof. Let us first assume that $\phi$ has a continuous derivative and satisfies (3.1). Then we integrate and deduce that for $|x - x_0| \le h$
\[
\int_{x_0}^{x} \phi'(t)\,dt = \int_{x_0}^{x} F(t, \phi(t))\,dt;
\]
however the left hand side is equal to $\phi(x) - \phi(x_0)$ by the fundamental theorem of calculus. Thus, since $\phi(x_0) = y_0$, we get
\[
\phi(x) - y_0 = \int_{x_0}^{x} F(t, \phi(t))\,dt,
\]
which is equivalent to (3.2).


Vice versa, assume that $\phi$ is continuous and satisfies the integral equation (3.2). Then the integrand $F(t, \phi(t))$ is also a continuous function of $t$ and thus, by the fundamental theorem of calculus, the integral $\int_{x_0}^{x} F(t, \phi(t))\,dt$ is a differentiable function of $x$ with derivative $F(x, \phi(x))$. Thus the right hand side of (3.2) is a differentiable function with derivative $F(x, \phi(x))$, and by (3.2) $\phi$ is differentiable with $\phi'(x) = F(x, \phi(x))$, which is one part of (3.1). It remains to check that $\phi(x_0) = y_0$, but this immediately follows from (3.2), since $\int_{x_0}^{x_0} \cdots\,dt = 0$.

Thus we have established the equivalence of the two problems, and now in order to prove the existence and uniqueness theorem for (1.1) we just have to establish that the integral equation (3.2) has a unique solution in $[x_0 - h, x_0 + h]$.
IV. Proof of the uniqueness part of the theorem. Here we show that the problem (3.1) (and thus (1.1)) has at most one solution (we have not yet proved that it has a solution at all).

Let $\phi$, $\psi$ be two functions with values in $[y_0 - b, y_0 + b]$ satisfying the integral equation (3.2), thus
\[
\phi(x) = y_0 + \int_{x_0}^{x} F(t, \phi(t))\,dt, \qquad
\psi(x) = y_0 + \int_{x_0}^{x} F(t, \psi(t))\,dt,
\]
say for $x \in [x_0, x_0 + h]$.

We wish to establish that $\phi(x) = \psi(x)$ for $x$ in $I_h(x_0)$. We shall show that $\phi(x) = \psi(x)$ for $x$ in $[x_0, x_0 + h]$, but an analogous argument shows $\phi(x) = \psi(x)$ for $x \in [x_0 - h, x_0]$. (Carry out this modification of the argument yourself!)
¹A lemma is an auxiliary theorem.

To see $\phi(x) = \psi(x)$ for $x \ge x_0$ observe that

(4.1)
\[
|\phi(x) - \psi(x)|
= \Big|\int_{x_0}^{x} \big(F(t, \phi(t)) - F(t, \psi(t))\big)\,dt\Big|
\le \int_{x_0}^{x} |F(t, \phi(t)) - F(t, \psi(t))|\,dt
\le K \int_{x_0}^{x} |\phi(t) - \psi(t)|\,dt,
\]

where for the last inequality we used the mean value theorem for derivatives.² Indeed
\[
F(t, \phi(t)) - F(t, \psi(t)) = \frac{\partial F}{\partial y}(t, \eta)\,\big(\phi(t) - \psi(t)\big)
\]
where $\eta$ is between $\phi(t)$ and $\psi(t)$, and if we use the bound (1.4) we obtain
\[
|F(t, \phi(t)) - F(t, \psi(t))| \le K\,|\phi(t) - \psi(t)|
\]
for all $t$ between $x_0$ and $x$, and thus (4.1).
Now consider the function $U$ defined by
\[
U(x) = \int_{x_0}^{x} |\phi(t) - \psi(t)|\,dt.
\]
Then clearly

(4.2)
\[
U(x) \ge 0 \quad\text{for } x \ge x_0.
\]

Now (4.1) is rewritten as $U'(x) \le K\,U(x)$, or equivalently $U'(x) - K\,U(x) \le 0$, which is also equivalent to $[U'(x) - K\,U(x)]\,e^{-K(x - x_0)} \le 0$. But the last inequality just says that
\[
\frac{d}{dx}\Big[U(x)\,e^{-K(x - x_0)}\Big] \le 0
\]
for $x_0 \le x \le x_0 + h$.
Integrating from $x_0$ to $x$ yields
\[
U(x)\,e^{-K(x - x_0)} - U(x_0)\,e^{-K(x_0 - x_0)}
= \underbrace{\int_{x_0}^{x} \frac{d}{dt}\Big[U(t)\,e^{-K(t - x_0)}\Big]\,dt}_{\le\, 0} \le 0,
\]
but $U(x_0) = 0$, so we get $U(x)\,e^{-K(x - x_0)} \le 0$ for $x \ge x_0$. Therefore $U(x) \le 0$ for $x \ge x_0$. Now we have that $U(x) \ge 0$ and $U(x) \le 0$ at the same time, which of course implies $U(x) = 0$ for $x \ge x_0$. Therefore
\[
\int_{x_0}^{x} |\phi(t) - \psi(t)|\,dt = 0 \quad\text{for all } x \in [x_0, x_0 + h].
\]
But the integrand is continuous and nonnegative, and one can show that therefore $\phi(x) = \psi(x)$ (for $x \ge x_0$).

A similar argument shows that also $\phi(x) = \psi(x)$ for $x \le x_0$.
²The mean value theorem for derivatives says that for a differentiable function $g$ we have $g(s_1) - g(s_2) = g'(\eta)(s_1 - s_2)$ where $\eta$ lies between $s_1$ and $s_2$. We apply this to $g(s) = F(t, s)$ (for fixed $t$).

V. Existence of the solution via iterations. Let $g$ be a continuous function in $[x_0 - h, x_0 + h]$. Define the function $Tg$ by

(5.1)
\[
Tg(x) = y_0 + \int_{x_0}^{x} F(t, g(t))\,dt.
\]

The following Lemma is absolutely essential since it allows an iterative application of T .


Lemma 5.1. The following is true: if $g$ satisfies $|g(x) - y_0| \le b$ for all $x$ in $[x_0 - h, x_0 + h]$, then $Tg$ also satisfies

(5.2)
\[
|Tg(x) - y_0| \le b \quad\text{for all } x \in [x_0 - h, x_0 + h].
\]

Proof. It is here where the crucial condition $h \le b/M$ is used. If $|g(x) - y_0| \le b$ and $|x - x_0| \le h \le \min\{a, b/M\}$, then for $x \ge x_0$
\[
|Tg(x) - y_0| \le \Big|\int_{x_0}^{x} F(t, g(t))\,dt\Big|
\le \int_{x_0}^{x} |F(t, g(t))|\,dt
\le \int_{x_0}^{x} M\,dt = M|x - x_0| \le Mh \le b,
\]
and a similar argument goes for $x \le x_0$ (just write $\int_{x}^{x_0} |\cdots|\,dt$ instead of $\int_{x_0}^{x} |\cdots|\,dt$ in this case). □
Now for the iteration alluded to above we set
\[
\begin{aligned}
\phi_0(x) &= y_0,\\
\phi_1(x) &= T\phi_0(x) = y_0 + \int_{x_0}^{x} F(t, y_0)\,dt,\\
\phi_2(x) &= T\phi_1(x) = y_0 + \int_{x_0}^{x} F(t, \phi_1(t))\,dt,\\
\phi_3(x) &= T\phi_2(x) = y_0 + \int_{x_0}^{x} F(t, \phi_2(t))\,dt,\\
&\ \ \vdots
\end{aligned}
\]
Thus if $\phi_{n-1}$ is already defined, we set
\[
\phi_n(x) = y_0 + \int_{x_0}^{x} F(t, \phi_{n-1}(t))\,dt.
\]
Lemma 5.1 says that if $(t, \phi_{n-1}(t))$ belongs to the rectangle $R$ where $F$ was defined, and if $|t - x_0| \le h$, then $(t, \phi_n(t))$ will also belong to the rectangle $R$. Since $(t, \phi_0(t)) = (t, y_0)$ belongs to $R$, then by Lemma 5.1 $(t, \phi_1(t))$ belongs to $R$ (if $|t - x_0| \le h$), and then by Lemma 5.1 again $(t, \phi_2(t))$ belongs to $R$ (if $|t - x_0| \le h$), and so on. Thus the definition of $\phi_n$ indeed makes sense for all $n$, provided that always $|t - x_0| \le h$.
We repeat: by using the Lemma we can define the sequence $\phi_n$ recursively by

(5.3.1)
\[
\phi_0(x) = y_0
\]

and

(5.3.2)
\[
\phi_n(x) = T\phi_{n-1}(x), \qquad n = 1, 2, \dots,
\]

if $|x - x_0| \le h$.
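The recursion (5.3) can also be carried out numerically if one replaces the integral in each step by, say, a trapezoidal sum on a fixed grid. A minimal Python sketch (the grid size, the helper names and the illustrative choice $F(x,y) = y$ are ours, not part of the notes):

\begin{verbatim}
import numpy as np

def picard_iterates(F, x0, y0, h, n_iter, n_grid=2001):
    """Compute phi_0, ..., phi_{n_iter} from (5.3) on [x0-h, x0+h],
    approximating the integral by the trapezoidal rule on a fixed grid."""
    x = np.linspace(x0 - h, x0 + h, n_grid)
    dx = x[1] - x[0]
    i0 = n_grid // 2                       # index of the midpoint x0
    phis = [np.full_like(x, y0)]           # phi_0(x) = y0
    for _ in range(n_iter):
        v = F(x, phis[-1])
        cum = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) * dx / 2)))
        phis.append(y0 + cum - cum[i0])    # y0 + integral from x0 to x
    return x, phis

# Illustration with F(x, y) = y, y(0) = 1: the iterates are (up to the small
# quadrature error) the Taylor polynomials of e^x.
x, phis = picard_iterates(lambda x, y: y, x0=0.0, y0=1.0, h=1.0, n_iter=5)
print(np.max(np.abs(phis[5] - np.exp(x))))   # small
\end{verbatim}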
We want the sequence $\phi_n$ to converge to the solution of the initial value problem.
The main statement in the proof of convergence is contained in the following Lemma 5.2 which
is also important for applications (together with Lemma 5.3 below).

Lemma 5.2. Let $M_0 = \max_{|t - x_0| \le h} |F(t, y_0)|$. The sequence $\phi_n$ defined in (5.3) satisfies the estimate

(5.4)
\[
|\phi_{n+1}(x) - \phi_n(x)| \le M_0 K^n\,\frac{|x - x_0|^{n+1}}{(n+1)!} \quad\text{for all } x \in [x_0 - h, x_0 + h].
\]

One can deduce that the sequence $\phi_n$ converges to a limit which is a solution of (3.1) and therefore a solution of the initial value problem (1.1). The precise information is contained in

Lemma 5.3. The sequence $\phi_n$ defined in (5.3) converges to a limit function $\phi$, for all $|x - x_0| \le h$. That is, $\phi$ is defined in $[x_0 - h, x_0 + h]$; moreover, $\phi$ has a continuous derivative and satisfies the initial value problem (1.1). The following error estimate is true for $|x - x_0| \le h$:

(5.5)
\[
|\phi(x) - \phi_n(x)| \le M_0 K^n\,\frac{|x - x_0|^{n+1}}{(n+1)!}\,e^{K|x - x_0|}.
\]

Here $M_0$ is as in Lemma 5.2.
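For applications it is convenient to evaluate the right-hand side of (5.5) numerically, to see how many iterations guarantee a prescribed accuracy. A tiny Python sketch (the sample values of $M_0$, $K$ and $h$ below are placeholders chosen by us, not numbers from the notes):

\begin{verbatim}
from math import exp, factorial

def picard_error_bound(M0, K, h, n):
    """Right-hand side of (5.5) at |x - x0| = h, the worst case on I_h(x0)."""
    return M0 * K**n * h**(n + 1) / factorial(n + 1) * exp(K * h)

# Illustrative placeholder values: M0 = 1, K = 2, h = 0.1.
for n in range(5):
    print(n, picard_error_bound(1.0, 2.0, 0.1, n))
\end{verbatim}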


VI. Proof of Lemma 5.2.
We shall prove the inequality for $x_0 \le x \le x_0 + h$; after simple modifications the argument also yields the desired estimate for $x_0 - h \le x \le x_0$.³

We show the inequality by iteration, i.e. mathematical induction. That is, we first prove the estimate for $n = 0$. Then we show that, for all $n$, the estimate for $|\phi_n(x) - \phi_{n-1}(x)|$ implies the estimate for $|\phi_{n+1}(x) - \phi_n(x)|$. This means: because we have shown it for $n = 0$, it then follows for $n = 1$. Since the estimate for $n = 1$ implies the estimate for $n = 2$, the estimate for $n = 2$ is also true. Since the estimate for $n = 2$ implies the estimate for $n = 3$, the estimate for $n = 3$ is also true, and so on.

First step. We show the assertion for $n = 0$, that is we show that
\[
|\phi_1(x) - \phi_0(x)| \le M_0 |x - x_0| \quad\text{if } |x - x_0| \le h.
\]
Indeed $\phi_1 = T\phi_0$ and therefore
\[
|\phi_1(x) - \phi_0(x)| = \Big|\int_{x_0}^{x} F(t, y_0)\,dt\Big| \le \int_{x_0}^{x} |F(t, y_0)|\,dt \le M_0 |x - x_0|.
\]

Second step. Now we show that for all $n$ the inequality

$(\ast_{n-1})$
\[
|\phi_n(x) - \phi_{n-1}(x)| \le M_0 K^{n-1}\,\frac{|x - x_0|^{n}}{n!}
\]

implies the inequality

$(\ast_n)$
\[
|\phi_{n+1}(x) - \phi_n(x)| \le M_0 K^{n}\,\frac{|x - x_0|^{n+1}}{(n+1)!}.
\]

³Carry this out yourselves. If $x < x_0$, write $\int_{x}^{x_0} |\cdots|\,dt$ instead of $\int_{x_0}^{x} |\cdots|\,dt$ in the argument below.

We write
\[
|\phi_{n+1}(x) - \phi_n(x)| = |T\phi_n(x) - T\phi_{n-1}(x)|
= \Big|\int_{x_0}^{x} \big(F(t, \phi_n(t)) - F(t, \phi_{n-1}(t))\big)\,dt\Big|.
\]
By the mean value theorem of differential calculus (with respect to the $y$-variable, as in IV)
\[
F(t, \phi_n(t)) - F(t, \phi_{n-1}(t)) = \frac{\partial F}{\partial y}(t, s)\,\big[\phi_n(t) - \phi_{n-1}(t)\big]
\]
where $s$ is some value between $\phi_n(t)$ and $\phi_{n-1}(t)$. Therefore
\[
|F(t, \phi_n(t)) - F(t, \phi_{n-1}(t))| \le K\,|\phi_n(t) - \phi_{n-1}(t)|,
\]
but since we assume the validity of $(\ast_{n-1})$ we have⁴
\[
|\phi_n(t) - \phi_{n-1}(t)| \le M_0 K^{n-1}\,\frac{|t - x_0|^{n}}{n!}.
\]
Thus we get
\[
\begin{aligned}
\Big|\int_{x_0}^{x} \big(F(t, \phi_n(t)) - F(t, \phi_{n-1}(t))\big)\,dt\Big|
&\le \int_{x_0}^{x} |F(t, \phi_n(t)) - F(t, \phi_{n-1}(t))|\,dt\\
&\le K \int_{x_0}^{x} M_0 K^{n-1}\,\frac{(t - x_0)^{n}}{n!}\,dt
= M_0 K^{n-1} K \int_{x_0}^{x} \frac{(t - x_0)^{n}}{n!}\,dt\\
&= M_0 K^{n}\,\frac{(x - x_0)^{n+1}}{(n+1)!}.
\end{aligned}
\]
Putting these inequalities together we get $(\ast_n)$, which is what we wanted to verify.

VII. Discussion of Lemma 5.3. The proof here has to be somewhat incomplete as we have to
use various results from advanced calculus to justify some convergence results. However you can
still understand how the argument goes if you take those results for granted.
If $m > n$ then
\[
\begin{aligned}
|\phi_m(x) - \phi_n(x)| &= |\phi_m(x) - \phi_{m-1}(x) + \phi_{m-1}(x) - \phi_{m-2}(x) + \dots + \phi_{n+1}(x) - \phi_n(x)|\\
&\le |\phi_m(x) - \phi_{m-1}(x)| + |\phi_{m-1}(x) - \phi_{m-2}(x)| + \dots + |\phi_{n+1}(x) - \phi_n(x)|
\end{aligned}
\]
and from Lemma 5.2 we get

(7.1)
\[
\begin{aligned}
|\phi_m(x) - \phi_n(x)|
&\le M_0 K^{m-1}\frac{|x - x_0|^{m}}{m!} + M_0 K^{m-2}\frac{|x - x_0|^{m-1}}{(m-1)!} + \dots + M_0 K^{n}\frac{|x - x_0|^{n+1}}{(n+1)!}\\
&= M_0 K^{n}\frac{|x - x_0|^{n+1}}{(n+1)!}\Big[1 + \frac{K|x - x_0|}{n+2} + \frac{K^{2}|x - x_0|^{2}}{(n+2)(n+3)} + \dots + \frac{K^{m-n-1}|x - x_0|^{m-n-1}}{(n+2)\cdots m}\Big]\\
&\le M_0 K^{n}\frac{|x - x_0|^{n+1}}{(n+1)!}\Big[1 + \frac{K|x - x_0|}{1} + \frac{K^{2}|x - x_0|^{2}}{2!} + \frac{K^{3}|x - x_0|^{3}}{3!} + \dots\Big]\\
&= M_0 K^{n}\frac{|x - x_0|^{n+1}}{(n+1)!}\,e^{K|x - x_0|}.
\end{aligned}
\]

⁴In other words, from assuming $(\ast_{n-1})$ we have to show that $(\ast_n)$ holds too, and this implication has to be valid for all $n$.

The last formula follows from the power series expansion of the exponential function. By a theorem from advanced calculus the estimate (7.1) shows the convergence of $\phi_n$ to a limiting function $\phi$; in fact we get
\[
\max_{|x - x_0| \le h} |\phi_m(x) - \phi(x)| \to 0 \quad\text{as } m \to \infty.
\]
Passing to the limit as $m \to \infty$ in (7.1) we get the estimate
\[
|\phi(x) - \phi_n(x)| \le M_0 K^{n}\,\frac{|x - x_0|^{n+1}}{(n+1)!}\,e^{K|x - x_0|},
\]
which is estimate (5.5).


Also, theorems on uniform convergence from advanced calculus show that $\phi$ is continuous in $[x_0 - h, x_0 + h]$ and that $\phi$ satisfies the integral equation (3.2), which was
\[
\phi(x) = y_0 + \int_{x_0}^{x} F(t, \phi(t))\,dt, \quad\text{for } |x - x_0| \le h.
\]
Therefore, by Lemma 3.1, $\phi$ is also differentiable with continuous derivative and solves (3.1), which is our original initial value problem.
References for this last argument, and more on uniform convergence:
W. Rudin, Principles of Mathematical Analysis, Ch. 7.
S. Lang, Undergraduate Analysis.
Buck, Advanced Calculus. (This is the most elementary of these books.)
On reserve in the Mathematics Library (Room B-224 Van Vleck)!
Exercise: For the example in II above, discuss the error estimate in Lemma 5.3. Give some explicit bounds for the maximal error in the interval $[-2.001, -1.999]$.
More examples.

(i). Consider the problem

(7.2)
\[
y'(x) = 2\sin(3x\,y(x)), \qquad y(0) = y_0.
\]

We use our theorem to show that this problem has a unique solution in $(-\infty, \infty)$. To do this it suffices to show that it has a unique solution on every interval $[-L, L]$: if we have a unique solution on the interval $[-L_1, L_1]$ and a unique solution on the interval $[-L_2, L_2]$ with $L_2 > L_1$, then by the uniqueness part the two solutions have to agree on the smaller interval $[-L_1, L_1]$.

Now fix $L$. Define $R = \{(x,y) : -L \le x \le L,\ y_0 - b \le y \le y_0 + b\}$ for very large $b$. Note that the function $F$ defined by $F(x,y) = 2\sin(3xy)$ satisfies $|F(x,y)| \le 2$ and $|\partial F/\partial y| \le 6L$ for $(x,y)$ in $R$; in particular observe that these bounds are independent of $b$. By the existence and uniqueness theorem there is a unique solution for the problem on the interval $[-h, h]$ where $h = \min\{L, b/2\}$. Since our bounds are independent of $b$ we may choose $b$ large, in particular we may choose $b$ larger than $2L$, so that $h = L$. Thus we get a unique solution on $[-L, L]$.
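Even though (7.2) has no elementary closed-form solution, the argument just given guarantees a unique solution on as long an interval as we like, so running a numerical solver far out is legitimate. A hedged illustration with SciPy (the solver, the tolerances and the choice $y_0 = 1$ are our own, purely for illustration):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# y'(x) = 2*sin(3*x*y(x)), y(0) = y0: a unique solution exists on every [-L, L].
def f(x, y):
    return 2.0 * np.sin(3.0 * x * y)

L = 20.0
right = solve_ivp(f, (0.0, L), [1.0], rtol=1e-8, atol=1e-10)
left = solve_ivp(f, (0.0, -L), [1.0], rtol=1e-8, atol=1e-10)
print(right.status, left.status)        # 0 means both runs reached x = L and x = -L
print(right.y[0, -1], left.y[0, -1])    # values of the solution at x = L and x = -L
\end{verbatim}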

(ii). The next example is really a counterexample, to show what happens if a hypothesis in Theorem 1 does not hold. Consider the problem
\[
y'(x) = \sqrt{|y(x)|}, \qquad y(0) = 0.
\]
Clearly
\[
y(x) \equiv 0
\]
is a solution. Another solution is given by
\[
Y(x) =
\begin{cases}
x^2/4 & \text{if } x > 0,\\
0 & \text{if } x \le 0.
\end{cases}
\]
(Check that this is indeed a function which satisfies the equation and the initial value condition!)

Yet another solution of the initial value problem is
\[
\widetilde Y(x) =
\begin{cases}
x^2/4 & \text{if } x > 0,\\
-x^2/4 & \text{if } x \le 0.
\end{cases}
\]
So, clearly, uniqueness does not hold. Which hypothesis of Theorem 1 is not satisfied?

(iii). If instead you consider the initial value problem
\[
y'(x) = \sqrt{|y(x)|}, \qquad y(0) = y_0,
\]
then for $y_0 \ne 0$ you can apply Theorem 1. Show from Theorem 1 that there is a unique solution in some interval containing $x_0 = 0$. For this problem also find the explicit solution, by the usual methods for first order separable equations.
(iv). Consider the initial value problem
\[
y'(x) = 1 + B^2 y(x)^2, \qquad y(0) = 0.
\]
Here $B$ is a parameter and we are interested in what happens when $B$ gets large.

Use Theorem 1 to show that there is an interval containing $0$ where the problem has a solution. What is the length of this interval (predicted by your application of Theorem 1)?

Then find the explicit solution and determine the maximal interval containing $0$ for which a solution exists.
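Before doing this exercise it can be instructive to watch the solution numerically: for large $B$ it grows extremely fast, and the interval of existence shrinks. A rough sketch (a fixed-step RK4 integrator with a blow-up cutoff; the step size and the cutoff are arbitrary choices of ours, and the output is only meant to be compared with the interval predicted by Theorem 1 and with the exact answer you find):

\begin{verbatim}
def blowup_x(B, dx=1e-5, cutoff=1e6):
    """Integrate y' = 1 + B**2 * y**2, y(0) = 0, with fixed-step RK4
    until |y| exceeds the cutoff; return the x reached at that point."""
    f = lambda y: 1.0 + B * B * y * y
    x, y = 0.0, 0.0
    while abs(y) < cutoff:
        k1 = f(y)
        k2 = f(y + 0.5 * dx * k1)
        k3 = f(y + 0.5 * dx * k2)
        k4 = f(y + dx * k3)
        y += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        x += dx
    return x

for B in (1, 2, 5, 10):
    print(B, blowup_x(B))   # the observed right endpoint of the existence interval
\end{verbatim}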
VIII. A global variant of the existence and uniqueness theorem.
We consider again an initial value problem

(8.1)
\[
y'(x) = F(x, y(x)), \qquad y(x_0) = y_0,
\]

but now assume that $F$ is a function of $(x,y)$ defined in an entire strip

(8.2)
\[
S = \{(x,y) : x_0 - a \le x \le x_0 + a,\ -\infty < y < \infty\},
\]

and we assume that $F$ is continuous and has a continuous and bounded $y$-derivative $\partial F/\partial y$ in $S$. In particular we assume that

(8.3)
\[
\Big|\frac{\partial F}{\partial y}(x,y)\Big| \le K \quad\text{for all } (x,y) \in S.
\]

Theorem 8.1. Suppose that $F$ satisfies the assumptions above. Then there is a unique function $x \mapsto y(x)$, defined in $[x_0 - a, x_0 + a]$ with continuous first derivative, such that
\[
y'(x) = F(x, y(x)) \quad\text{for all } x \in [x_0 - a, x_0 + a], \qquad y(x_0) = y_0.
\]


The conclusion is somewhat stronger than in Theorem 1, since we get a unique solution on the full interval $[x_0 - a, x_0 + a]$; moreover there is no analogue of the assumption (1.3). However the assumption on $\partial F/\partial y$ is much more restrictive, as boundedness is required on the entire infinite strip (such an assumption fails for $F(x,y) = y^2$, for example).

The assumption (1.3) on the size of $F$ is not needed since the iteration in Section V above always makes sense (as $F$ is defined on the entire strip). For the proof of the convergence result we need only the boundedness of $F$ at height $y_0$ (i.e. an inequality $|F(t, y_0)| \le M_0$, which holds since the function $F$ is continuous) and, most importantly, the bound for $\partial F/\partial y$ on the entire strip (i.e. (8.3)).
Examples: (i) The example in (7.2) is also covered here; that is, Theorem 8.1 can be applied. Check this.

(ii) Another example is
\[
y'(x) = \frac{y(x)^3}{1 + x^2 + y(x)^2}, \qquad y(0) = y_0,
\]
which has a unique solution on every interval $[-L, L]$. Check the hypotheses of Theorem 8.1 for the function $F(x,y) = y^3 (1 + x^2 + y^2)^{-1}$.
IX. A global variant of the existence and uniqueness theorem for systems.
It is very useful to have a variant of Theorem 8.1 for systems. The proof requires perhaps more mathematical maturity but is not really harder.

Let's formulate this result first for systems with two equations. We now consider an initial value problem for two unknown functions $y_1$, $y_2$:

(9.1)
\[
\begin{aligned}
y_1'(x) &= F_1(x, y_1(x), y_2(x)),\\
y_2'(x) &= F_2(x, y_1(x), y_2(x)),\\
y_1(x_0) &= y_{1,0},\\
y_2(x_0) &= y_{2,0},
\end{aligned}
\]
and assume that $F_1$, $F_2$ are functions of $(x, y_1, y_2)$ defined in

(9.2)
\[
S = \{(x, y_1, y_2) : x_0 - a \le x \le x_0 + a,\ -\infty < y_1 < \infty,\ -\infty < y_2 < \infty\},
\]

and we assume that $F_1$, $F_2$ are continuous and have continuous and bounded $y_1$- and $y_2$-derivatives $\partial F_i/\partial y_1$, $\partial F_i/\partial y_2$ in $S$; in particular we assume that for all $(x, y_1, y_2) \in S$

(9.3)
\[
\Big|\frac{\partial F_i}{\partial y_1}(x, y_1, y_2)\Big| \le K, \qquad
\Big|\frac{\partial F_i}{\partial y_2}(x, y_1, y_2)\Big| \le K, \qquad i = 1, 2.
\]

Theorem 9.1. Suppose that $F_1$, $F_2$ satisfy the assumptions above. Then there is a unique pair of functions $y_1$, $y_2$ defined in $[x_0 - a, x_0 + a]$ with continuous first derivatives, such that (9.1) holds for all $x$ in $[x_0 - a, x_0 + a]$.

The proof of Theorem 9.1 is very similar to the proofs of Theorems 1 and 8.1. We just write out the iteration procedure. This time we have to compute approximations for $y_1$ and $y_2$ simultaneously, and we let $\phi_{n,1}$ and $\phi_{n,2}$ be the $n$-th iterates. Then we set
\[
\phi_{0,1}(x) = y_{1,0}, \qquad \phi_{0,2}(x) = y_{2,0},
\]
and if $\phi_{n-1,1}$, $\phi_{n-1,2}$ are already computed we get $\phi_{n,1}$, $\phi_{n,2}$ from
\[
\phi_{n,1}(x) = y_{1,0} + \int_{x_0}^{x} F_1(t, \phi_{n-1,1}(t), \phi_{n-1,2}(t))\,dt,
\qquad
\phi_{n,2}(x) = y_{2,0} + \int_{x_0}^{x} F_2(t, \phi_{n-1,1}(t), \phi_{n-1,2}(t))\,dt.
\]
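The simultaneous iteration can be implemented exactly like the scalar one from Section V. A brief numerical sketch (the example system, grid and helper names below are our own choices for illustration; the iterates of $y_1' = y_2$, $y_2' = -y_1$ with $y_1(0) = 0$, $y_2(0) = 1$ should approach $(\sin x, \cos x)$):

\begin{verbatim}
import numpy as np

def picard_system(F1, F2, x0, y10, y20, h, n_iter, n_grid=2001):
    """Simultaneous Picard iterates (phi_{n,1}, phi_{n,2}) on [x0-h, x0+h],
    with the integrals replaced by trapezoidal sums on a fixed grid."""
    x = np.linspace(x0 - h, x0 + h, n_grid)
    dx = x[1] - x[0]
    i0 = n_grid // 2                      # index of x0 on the grid
    def cumint(v):                        # trapezoidal integral from x0 to x
        c = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) * dx / 2)))
        return c - c[i0]
    p1 = np.full_like(x, y10)             # phi_{0,1}
    p2 = np.full_like(x, y20)             # phi_{0,2}
    for _ in range(n_iter):
        # both right-hand sides are evaluated with the previous iterates only
        p1, p2 = y10 + cumint(F1(x, p1, p2)), y20 + cumint(F2(x, p1, p2))
    return x, p1, p2

x, p1, p2 = picard_system(lambda x, y1, y2: y2, lambda x, y1, y2: -y1,
                          0.0, 0.0, 1.0, h=1.0, n_iter=12)
print(np.max(np.abs(p1 - np.sin(x))), np.max(np.abs(p2 - np.cos(x))))   # both small
\end{verbatim}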

Extensions to larger systems. The extension to systems with $m$ equations and $m$ unknown functions is also possible. Then we are working with $m$ functions $F_1, \dots, F_m$ of the variables $x$ and $y_1, \dots, y_m$, and we are trying to determine unknown functions $y_1(x), \dots, y_m(x)$ so that

(9.4)
\[
\begin{aligned}
y_1'(x) &= F_1(x, y_1(x), y_2(x), \dots, y_m(x)),\\
y_2'(x) &= F_2(x, y_1(x), y_2(x), \dots, y_m(x)),\\
&\ \ \vdots\\
y_m'(x) &= F_m(x, y_1(x), y_2(x), \dots, y_m(x)),
\end{aligned}
\]

with an initial condition for each function

(9.5)
\[
y_i(x_0) = y_{0,i} \quad\text{for } i = 1, \dots, m.
\]

Here we assume, besides the continuity of the $F_i$, the boundedness of all partial $y$-derivatives, i.e. the boundedness of all functions $\partial F_i/\partial y_k$, for $i = 1, \dots, m$ and $k = 1, \dots, m$; all of this is supposed to hold in a region where $x_0 - a \le x \le x_0 + a$ and $-\infty < y_i < \infty$, $i = 1, \dots, m$. The straightforward generalization of Theorem 9.1 to systems with $m$ equations holds true:

Theorem 9.2. Under the assumptions just stated the problem (9.4), (9.5) has a unique solution $(y_1(x), y_2(x), \dots, y_m(x))$ in $[x_0 - a, x_0 + a]$.
Linear systems.
What will be important in our class is the example of linear systems

(9.6)
\[
\begin{aligned}
y_1'(x) &= a_{11}(x)y_1(x) + a_{12}(x)y_2(x) + \dots + a_{1m}(x)y_m(x) + g_1(x),\\
y_2'(x) &= a_{21}(x)y_1(x) + a_{22}(x)y_2(x) + \dots + a_{2m}(x)y_m(x) + g_2(x),\\
&\ \ \vdots\\
y_m'(x) &= a_{m1}(x)y_1(x) + a_{m2}(x)y_2(x) + \dots + a_{mm}(x)y_m(x) + g_m(x),
\end{aligned}
\]

which we consider subject to the initial conditions

(9.7)
\[
y_i(x_0) = y_{0,i} \quad\text{for } i = 1, \dots, m.
\]

Theorem 9.3. Suppose the coefficient functions $a_{ij}$ and the functions $g_i$ are continuous on the interval $[x_0 - a, x_0 + a]$. Then the problem (9.6), (9.7) has a unique solution $(y_1(x), y_2(x), \dots, y_m(x))$ in $[x_0 - a, x_0 + a]$.

Indeed this is just a special case of Theorem 9.2 with
\[
F_i(x, y_1, \dots, y_m) = a_{i1}(x)y_1 + a_{i2}(x)y_2 + \dots + a_{im}(x)y_m + g_i(x).
\]
The coefficient functions $a_{ij}$ and also the $g_i$ are bounded on the interval $[x_0 - a, x_0 + a]$, and
\[
\frac{\partial F_i}{\partial y_j}(x, y_1, \dots, y_m) = a_{ij}(x).
\]

We shall see later in class that the problem of solving equations with higher derivatives is
equivalent to a problem of solving first order equations (see the following section).
X. Linear higher order differential equations. Consider the equation

(10.1)
\[
y^{(m)}(x) + a_{m-1}(x)y^{(m-1)}(x) + \dots + a_1(x)y'(x) + a_0(x)y(x) = g(x),
\]

subject to the $m$ initial value conditions

(10.2)
\[
y(x_0) = \alpha_0, \quad y'(x_0) = \alpha_1, \quad \dots, \quad y^{(m-1)}(x_0) = \alpha_{m-1},
\]

and we assume that the functions $a_0, \dots, a_{m-1}$ and $g$ are continuous functions on the interval $[x_0 - a, x_0 + a]$.

Theorem 10.1. There is exactly one function $y$ which has continuous derivatives up to order $m$ in $[x_0 - a, x_0 + a]$ so that (10.1) and (10.2) are satisfied.
This follows from Theorem 9.3 and the following Lemma which is not hard to check.
Lemma 10.2.
(i) Suppose that $y$ solves (10.1) and (10.2). Then set $y_1(x) = y(x)$, $y_2(x) = y'(x)$, ..., $y_m(x) = y^{(m-1)}(x)$. Then the functions $y_1, \dots, y_m$ satisfy the first order system

(10.3)
\[
\begin{aligned}
y_1'(x) &= y_2(x),\\
y_2'(x) &= y_3(x),\\
&\ \ \vdots\\
y_{m-1}'(x) &= y_m(x),\\
y_m'(x) &= -a_0(x)y_1(x) - a_1(x)y_2(x) - \dots - a_{m-1}(x)y_m(x) + g(x),
\end{aligned}
\]

with the initial value conditions

(10.4)
\[
y_1(x_0) = \alpha_0, \quad y_2(x_0) = \alpha_1, \quad \dots, \quad y_m(x_0) = \alpha_{m-1}.
\]

(ii) Now suppose, vice versa, that functions $y_1, \dots, y_m$ are given with continuous derivatives and that (10.3) and (10.4) are satisfied in $[x_0 - a, x_0 + a]$. Set $y(x) = y_1(x)$. Then $y$ has $m$ continuous derivatives (from (10.3)), and both the equation (10.1) and the initial value condition (10.2) are satisfied.
In other words the problem of solving the initial value problem for the higher order equation
(10.1), (10.2) is equivalent to solving the initial value problem for the first order system (10.3),
(10.4).
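Lemma 10.2 is also how higher order equations are handled in practice: convert to the first order system (10.3) and hand that system to any first order method. A short sketch for a hypothetical third order example (the equation, its coefficients and the solver settings are our own choices for illustration, not taken from the notes):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example with m = 3:
#   y''' + x*y'' + y' + 2*y = cos(x),  y(0) = 1, y'(0) = 0, y''(0) = 0,
# i.e. a2(x) = x, a1(x) = 1, a0(x) = 2, g(x) = cos(x).
def rhs(x, Y):
    y1, y2, y3 = Y            # y1 = y, y2 = y', y3 = y''  (as in (10.3))
    return [y2,
            y3,
            -2.0*y1 - 1.0*y2 - x*y3 + np.cos(x)]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[0, -1])   # the first component y1 = y at the endpoint x = 5
\end{verbatim}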

