
Math 3410, Fall 2009

Second semester differential equations


1 Some review
1.1 First order equations
We will need to know how to do separable equations and linear equations.

A separable equation is one like
\[ \frac{dy}{dt} = ty^2. \]
We can rewrite this as
\[ \frac{dy}{y^2} = t\,dt. \]
Integrating both sides,
\[ -\frac{1}{y} = \frac{1}{2}t^2 + c, \]
so
\[ y = -\frac{1}{\frac{1}{2}t^2 + c}. \]

A linear equation is one like
\[ y' + \frac{2}{t}y = t. \]
One multiplies by an integrating factor
\[ p = e^{\int \frac{2}{t}\,dt} = e^{2\ln t} = t^2 \]
to get
\[ t^2y' + 2ty = t^3, \]
or
\[ (t^2y)' = t^3, \]
which leads to
\[ t^2y = \frac{1}{4}t^4 + c, \]
and then
\[ y = \frac{1}{4}t^2 + \frac{c}{t^2}. \]
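As a quick check (not part of the original notes), both review examples can be verified with a computer algebra system. A minimal sketch, assuming SymPy is available; the arbitrary constant will be called C1 rather than $c$, and the separable answer may come back in an algebraically equivalent form.

```python
import sympy as sp

t = sp.symbols("t", positive=True)
y = sp.Function("y")

# Separable: dy/dt = t*y^2; the answer should be the family -1/(t^2/2 + c).
print(sp.dsolve(sp.Eq(y(t).diff(t), t * y(t) ** 2), y(t)))

# Linear: y' + (2/t) y = t; the answer should be t^2/4 + c/t^2.
print(sp.dsolve(sp.Eq(y(t).diff(t) + 2 * y(t) / t, t), y(t)))
```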
When the linear equation has constant coefficients and is homogeneous
(i.e., the right hand side is 0), things are much easier. To solve
\[ y' - 4y = 0, \]
we guess a solution of the form $y = e^{rt}$, so $y' = re^{rt}$. Then
\[ re^{rt} - 4e^{rt} = 0, \]
or $r = 4$, and therefore the solution is
\[ y = ce^{4t}. \]
To identify $c$, one needs an initial condition, e.g., $y(0) = 2$. Then
\[ 2 = ce^{4\cdot 0} = c, \]
so we then have
\[ y = 2e^{4t}. \]

For non-homogeneous equations, such as
\[ y' - 4y = e^{3t}, \]
one way to solve it is to solve the homogeneous equation $y' - 4y = 0$, and
then use
\[ y = ce^{4t} + y_p, \]
where $y_p$ is a particular solution. One way to find a particular solution is to
make an educated guess. If we guess $y_p = Ae^{3t}$, then we have
\[ y_p' - 4y_p = 3Ae^{3t} - 4Ae^{3t} = -Ae^{3t}, \]
and this will equal $e^{3t}$ if $A = -1$. We conclude that the solution to the
non-homogeneous equation is
\[ y = ce^{4t} - e^{3t}. \]
1.2 Series
From calculus we have the Taylor series
\[ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots, \]
\[ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots, \]
and
\[ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots. \]
If $i = \sqrt{-1}$, substituting and doing some algebra shows that
\[ e^{ix} = \cos x + i\sin x. \]
2 Second order linear
2.1 Applications
First consider a spring hung from the ceiling with a weight hanging from it.
Let $u$ be the distance the weight is below equilibrium. There is a restoring
force upwards of amount $ku$ by Hooke's law. There is damping resistance
against the motion, which is $Ru'$. And the net force is related to acceleration
by Newton's laws, so
\[ -ku - Ru' = F = mu''. \]
This leads to
\[ mu'' + Ru' + ku = 0. \]
If there is an external force acting on the spring, then the right hand side is
replaced by $F(t)$.

The second example is that of a circuit with a resistor, inductance coil,
and capacitor hooked up in series. Let $I$ be the current, $Q$ the charge, $R$
the resistance, $L$ the inductance, and $C$ the capacitance. We know that
$I = dQ/dt$. The voltage drop across the resistor is $IR$, across the capacitor
$Q/C$, and across the inductance coil $L\,\frac{dI}{dt}$. So if $E(t)$ is the potential put into
the circuit,
\[ E(t) = LQ'' + RQ' + \frac{1}{C}Q. \]
Sometimes this is differentiated to give
\[ E'(t) = LI'' + RI' + \frac{1}{C}I. \]
2.2 Linear, constant coefficients, homogeneous

Let's look at an example:
\[ y'' - 5y' + 4y = 0. \]
From Math 211, we know a way of solving this. Let $v = y'$, and this one
equation becomes a system
\[ y' = v; \qquad v' = 5v - 4y. \]
We then set up matrices, where
\[ X = \begin{pmatrix} y \\ v \end{pmatrix}, \qquad
   A = \begin{pmatrix} 0 & 1 \\ -4 & 5 \end{pmatrix}, \]
and the equation is
\[ X' = AX. \]
We assume
\[ W = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} \]
and that our solution is of the form $X = We^{rt}$ for some $r$, $w_1$, and $w_2$. We
will review this method later when we want to generalize it, but let's look at
an easier method.

Let's guess that our solution is of the form $y = e^{rt}$ for some $r$. Then
\[ y' = re^{rt}, \qquad y'' = r^2e^{rt}. \]
So our equation becomes
\[ e^{rt}(r^2 - 5r + 4) = 0. \]
This factors as $(r-4)(r-1) = 0$, or $r = 1, 4$. The general solution to our
equation is then
\[ y = c_1e^{4t} + c_2e^{t}. \]

This works except for two cases. If $r_1 = r_2$, we do not get two solutions.
Thus, for example, the equation
\[ y'' - 4y' + 4y = 0 \]
leads to $r = 2$. In this case we have as a solution
\[ y = c_1e^{2t} + c_2te^{2t}. \]
We differentiate to check that this works.

Secondly we might have $r_1 = a + bi$, $r_2 = a - bi$. Actually in this case we
proceed and everything works out eventually. It goes like this.
\[ e^{(a+bi)t} = e^{at}e^{bti} = e^{at}(\cos bt + i\sin bt), \]
and similarly for $e^{(a-bi)t}$. Then
\begin{align*}
y &= c_1e^{(a+bi)t} + c_2e^{(a-bi)t} \\
  &= c_1e^{at}\cos bt + c_1ie^{at}\sin bt + c_2e^{at}\cos bt - c_2ie^{at}\sin bt \\
  &= (c_1 + c_2)e^{at}\cos bt + (c_1i - c_2i)e^{at}\sin bt \\
  &= d_1e^{at}\cos bt + d_2e^{at}\sin bt.
\end{align*}
Once we see how this goes, we immediately can go to the last line without
going through the derivation.
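A quick numerical check (an illustration, not part of the original notes): the roots of the characteristic polynomial, and equivalently the eigenvalues of the companion matrix $A$ above, can be computed with NumPy.

```python
import numpy as np

# Characteristic polynomial of y'' - 5y' + 4y = 0 is r^2 - 5r + 4.
print(np.roots([1, -5, 4]))        # roots 4 and 1, so y = c1*exp(4t) + c2*exp(t)

# Same roots appear as eigenvalues of the companion matrix from y' = v, v' = 5v - 4y.
A = np.array([[0.0, 1.0],
              [-4.0, 5.0]])
print(np.linalg.eigvals(A))        # the same two numbers (order may differ)
```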
2.3 Constants
To get the values of $c_1$ and $c_2$ we need extra information. In an initial value
problem, we are given $y(t_0)$ and $y'(t_0)$ for some $t_0$. For example, consider
\[ y'' + 4y = 0, \qquad y(0) = 3, \quad y'(0) = 4. \]
We solve $r^2 + 4 = 0$, or $r = \pm 2i$. So
\[ y = c_1\cos 2t + c_2\sin 2t. \]
Then
\[ 3 = y(0) = c_1\cdot 1 + c_2\cdot 0, \]
or $c_1 = 3$. Differentiating,
\[ y'(t) = -2c_1\sin 2t + 2c_2\cos 2t, \]
and substituting in $t = 0$, we get $c_2 = 2$.

One could also look at the same ODE but instead of initial values, suppose
we are given boundary values: $y(0) = 0$, $y(1) = 3$. Then as before $c_1 = 0$,
which leads to
\[ y = c_2\sin 2t. \]
Putting in $t = 1$, we get $c_2 = 3/\sin 2$.

There is no analog of boundary value problems for first order equations.
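Finding the constants from the initial conditions is just a small linear solve; here is a sketch (not from the notes) for the example above.

```python
import numpy as np

# y = c1*cos(2t) + c2*sin(2t) with y(0) = 3 and y'(0) = 4:
#   y(0)  = 1*c1 + 0*c2
#   y'(0) = 0*c1 + 2*c2
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(np.linalg.solve(M, np.array([3.0, 4.0])))   # [3. 2.], so y = 3cos(2t) + 2sin(2t)
```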
2.4 Method of undetermined coefficients

When solving
\[ y'' - 5y' + 4y = e^{3t}, \]
the general solution is $y_h + y_p$, where $y_h$ is the general solution to the
homogeneous equation and $y_p$ is a particular solution to the non-homogeneous
equation. One way to get a particular solution is to make an educated guess
(the method of undetermined coefficients): try $y = Ae^{3t}$. Then
\[ y' = 3Ae^{3t}, \qquad y'' = 9Ae^{3t}, \]
and substituting gives
\[ 9Ae^{3t} - 15Ae^{3t} + 4Ae^{3t} = e^{3t}. \]
So $A = -\frac{1}{2}$, and $y_p = -\frac{1}{2}e^{3t}$, and thus
\[ y = c_1e^{4t} + c_2e^{t} - \frac{1}{2}e^{3t}. \]
To find $c_1$, $c_2$, one waits until one has the most general solution before
using initial or boundary values.

If the right hand side were $e^{4t}$, this guess doesn't work, but one could try
\[ y_p = Ate^{4t} \]
to get a particular solution. If on the right hand side there was $\cos t$, one
needs to try $A\cos t + B\sin t$. One could also have $e^{4t} + \cos t + 2e^{2t}$ on the
right hand side, for example, and one finds a particular solution for each
piece, and then adds.
2.5 Euler equation
An equation like
\[ x^2y'' + 4xy' - 4y = 0 \]
is called an Euler equation. Try $y = x^r$ as a trial solution. Then
\[ y' = rx^{r-1}, \qquad y'' = r(r-1)x^{r-2}. \]
Substituting,
\[ r(r-1)x^2x^{r-2} + 4x\cdot rx^{r-1} - 4x^r = 0, \]
or
\[ r(r-1) + 4r - 4 = 0, \]
or $r = -4, 1$. Then the general solution is
\[ y = c_1x^{-4} + c_2x. \]
Again there are variations when $r_1 = r_2$ or when $r$ is complex. If $r_1 = r_2$,
the solution is
\[ c_1x^r + c_2x^r\ln x. \]
If $r = a \pm bi$, the solution is
\[ y = c_1x^a\cos(b\ln x) + c_2x^a\sin(b\ln x). \]
3 Series solutions
We show how to use Taylor series to solve equations such as
\[ y'' + xy - 3y = 0, \]
which does not have constant coefficients, nor is it Euler's equation.

Let's start with a simpler equation
\[ y'' + y = 0. \]
Suppose
\[ y = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n + a_{n+1}x^{n+1} + \cdots. \]
Then
\[ y' = a_1 + 2a_2x + 3a_3x^2 + \cdots \]
and
\[ y'' = 2a_2 + 3\cdot 2\,a_3x + 4\cdot 3\,a_4x^2 + \cdots. \]
Since $y'' + y = 0$, we have
\[ 0 = (2a_2 + a_0) + (3\cdot 2\,a_3 + a_1)x + (4\cdot 3\,a_4 + a_2)x^2 + \cdots
     + ((n+2)(n+1)a_{n+2} + a_n)x^n + \cdots, \]
which leads to
\[ 2a_2 + a_0 = 0, \quad 3\cdot 2\,a_3 + a_1 = 0, \quad \text{etc.}, \]
or
\[ a_2 = -\frac{a_0}{2}, \quad a_3 = -\frac{a_1}{3\cdot 2}, \quad
   a_4 = -\frac{a_2}{4\cdot 3} = \frac{a_0}{4\cdot 3\cdot 2}, \]
and
\[ a_5 = -\frac{a_3}{5\cdot 4} = \frac{a_1}{5\cdot 4\cdot 3\cdot 2}. \]
Substituting in the equation for $y$,
\begin{align*}
y &= a_0 + a_1x - \frac{a_0}{2!}x^2 - \frac{a_1}{3!}x^3 + \frac{a_0}{4!}x^4 + \cdots \\
  &= a_0\left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right)
   + a_1\left(x - \frac{x^3}{3!} + \cdots\right).
\end{align*}
We conclude
\[ y = a_0\cos x + a_1\sin x. \]
Note
\[ y(0) = a_0, \qquad y'(0) = a_1. \]
Typically, one gets $a_0(\cdots) + a_1(\cdots)$, and one does not recognize the Taylor
series. Nevertheless the Taylor series is good for calculations.

If we are interested in the behavior of $y$ near 3 instead of near 0, we use
a Taylor series about 3 instead.
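The recurrence $(n+2)(n+1)a_{n+2} + a_n = 0$ is also easy to run numerically. A small sketch (not part of the notes), with the initial data $a_0 = 2$, $a_1 = -1$ chosen arbitrarily for illustration:

```python
import math

a0, a1 = 2.0, -1.0                 # y(0) and y'(0), chosen for illustration
a = [a0, a1]
for n in range(30):
    a.append(-a[n] / ((n + 2) * (n + 1)))   # a_{n+2} = -a_n / ((n+2)(n+1))

x = 0.7
partial_sum = sum(a[n] * x**n for n in range(len(a)))
print(partial_sum, a0 * math.cos(x) + a1 * math.sin(x))   # the two values agree to many digits
```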
3.1 Regular singular points
Look at the more general equation
\[ P(x)y'' + Q(x)y' + R(x)y = 0. \]
If $P(x_0) \neq 0$, then $x_0$ is an ordinary point, and the above theory works. If
$P(x_0) = 0$, then $x_0$ is called a singular point.

If $x_0$ is a singular point and
\[ \lim_{x\to x_0} \frac{(x-x_0)Q(x)}{P(x)} \quad\text{and}\quad
   \lim_{x\to x_0} \frac{(x-x_0)^2R(x)}{P(x)} \]
both exist, then $x_0$ is called a regular singular point.

Let's look at an example:
\[ x^2y'' + xy' + (x - \tfrac{1}{9})y = 0. \]
If the $(x - \tfrac{1}{9})$ coefficient were instead $-\tfrac{1}{9}$, this would be the Euler equation
with solution
\[ c_1x^{1/3} + c_2x^{-1/3}. \]
Since neither $x^{1/3}$ nor $x^{-1/3}$ has a Taylor series expansion about 0, a modification
of the power series method is needed.

To handle the above differential equation near 0, we assume $y$ is of the
form
\[ y = a_0x^r + a_1x^{r+1} + a_2x^{r+2} + \cdots. \]
Then
\[ y' = ra_0x^{r-1} + (r+1)a_1x^r + (r+2)a_2x^{r+1} + \cdots \]
and
\[ y'' = r(r-1)a_0x^{r-2} + (r+1)ra_1x^{r-1} + (r+2)(r+1)a_2x^r + \cdots. \]
Substituting in the differential equation, we get
\begin{align*}
x^2y'' + xy' + (x - \tfrac{1}{9})y
&= [r(r-1)a_0 + ra_0 - \tfrac{1}{9}a_0]x^r \\
&\quad + [(r+1)ra_1 + (r+1)a_1 + a_0 - \tfrac{1}{9}a_1]x^{r+1} + \cdots.
\end{align*}
We set each of the coefficients equal to 0. If we don't want $a_0$ to be 0, we
must have
\[ r(r-1) + r - \tfrac{1}{9} = 0, \]
or $r = \tfrac{1}{3}, -\tfrac{1}{3}$. We get two solutions, one, say $y_1$, for $r = \tfrac{1}{3}$ and one, say $y_2$,
for $r = -\tfrac{1}{3}$. The general solution is then $y = c_1y_1 + c_2y_2$.

To see how $y_1$ goes, from the coefficient of $x^{r+1}$ we get
\[ (r+1)ra_1 + (r+1)a_1 + a_0 - \tfrac{1}{9}a_1 = 0, \]
or
\[ \tfrac{4}{3}\cdot\tfrac{1}{3}a_1 + \tfrac{4}{3}a_1 + a_0 - \tfrac{1}{9}a_1 = 0, \]
and we solve for $a_1$ in terms of $a_0$; it turns out $a_1 = -\tfrac{3}{5}a_0$. Using the
coefficient of $x^{r+2}$, we get an equation that we can solve for $a_2$ in terms of
$a_1$, and hence in terms of $a_0$. We continue, getting all the $a_i$'s in terms of $a_0$.
If we then substitute back in the formula for $y$, we get
\[ y = a_0\left(x^{1/3} - \tfrac{3}{5}x^{4/3} + \cdots\right). \]
The expression inside the parentheses is $y_1$.

To get $y_2$, we do the same, but with $r = -\tfrac{1}{3}$.

When one solves these equations, one gets out 2 values of $r$. If the two
values of $r$ are not equal and do not differ by an integer, everything is fine.
If they are equal or differ by an integer, one gets one solution, but one has
to work hard to get another.

As an example, look at Bessel's equation of order 0:
\[ x^2y'' + xy' + x^2y = 0. \]
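Matching the coefficient of $x^{r+n}$ for $n \ge 1$ in the example above gives the recurrence $a_n = -a_{n-1}/\bigl((r+n)^2 - \tfrac{1}{9}\bigr)$, which reproduces $a_1 = -\tfrac{3}{5}a_0$. A short sketch (not from the notes) that grinds out a few coefficients exactly, with $a_0 = 1$ assumed:

```python
from fractions import Fraction

def frobenius_coeffs(r, terms=5):
    # a_0 = 1; a_n = -a_{n-1} / ((r + n)^2 - 1/9) for n >= 1
    a = [Fraction(1)]
    for n in range(1, terms):
        a.append(-a[n - 1] / ((r + n) ** 2 - Fraction(1, 9)))
    return a

print(frobenius_coeffs(Fraction(1, 3)))    # starts 1, -3/5, ... (the y_1 series)
print(frobenius_coeffs(Fraction(-1, 3)))   # the r = -1/3 coefficients, giving y_2
```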
4 Boundary value problems
A boundary value problem is one like
\[ y'' + 2y = 0, \quad y(0) = 1, \quad y(\pi) = 0 \]
or
\[ y'' + y = 0, \quad y(0) = 1, \quad y(\pi) = a. \]
Some of these have no solutions, some have infinitely many solutions.

Let us look at
\[ y'' + \lambda y = 0, \quad y(0) = 0, \quad y(L) = 0, \]
and see for what values of $\lambda$ there is a solution. We have $r^2 + \lambda = 0$, so $r = \pm\sqrt{\lambda}\,i$,
so
\[ y = c_1\cos\sqrt{\lambda}\,t + c_2\sin\sqrt{\lambda}\,t. \]
Since $y(0) = 0$, then $c_1 = 0$, and the solution is a multiple of $\sin(\sqrt{\lambda}\,t)$. In
order for $y(L) = 0$, we must have $\sqrt{\lambda}\,L$ be a multiple of $\pi$, which says that
\[ \lambda = \frac{n^2\pi^2}{L^2}. \]
5 Fourier series
Suppose $f$ is defined on an interval $[-L, L]$ and suppose we can write
\[ f = \frac{a_0}{2} + \sum_{m=1}^{\infty}\left(a_m\cos\frac{m\pi x}{L} + b_m\sin\frac{m\pi x}{L}\right). \]
If this is the case, how do we find $a_m$, $b_m$?

Recall
\[ \cos(A+B) = \cos A\cos B - \sin A\sin B, \]
\[ \cos(A-B) = \cos A\cos B + \sin A\sin B, \]
so
\[ \cos A\cos B = \tfrac{1}{2}[\cos(A+B) + \cos(A-B)]. \]
Then (for $A$ and $B$ integer multiples of $\pi/L$)
\[ \int_{-L}^{L}\cos Ax\cos Bx\,dx = 0 \]
unless $A = B$ or $A = -B$. Similarly
\[ \int\sin Ax\cos Bx = 0 \]
and
\[ \int\sin Ax\sin Bx = 0 \]
unless $A = B$ or $A = -B$.

Therefore multiplying $f$ by $\cos\frac{m\pi x}{L}$ and integrating over $[-L, L]$ gives the
$a_m$'s, and multiplying by $\sin$ instead of $\cos$ gives the $b_m$'s.
The formulas we get are
\[ a_m = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{m\pi x}{L}\,dx \]
and
\[ b_m = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{m\pi x}{L}\,dx. \]

An example is
\[ f(x) = \begin{cases} -x & -2 \le x < 0 \\ x & 0 \le x < 2. \end{cases} \]
Here we get $b_m = 0$, $a_0 = 2$, and
\[ a_m = \begin{cases} -8/(m\pi)^2 & m \text{ odd} \\ 0 & m \text{ even.} \end{cases} \]
Another is
\[ f(x) = \begin{cases} 0 & -3 < x < -1 \\ 1 & -1 < x < 1 \\ 0 & 1 < x < 3. \end{cases} \]
Here $b_m = 0$, $a_0 = 2/3$, and
\[ a_n = \frac{2}{n\pi}\sin\frac{n\pi}{3}. \]
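The coefficient formulas are easy to check numerically. A sketch (not in the notes) that approximates $a_m$ for the first example by a midpoint-rule integral and compares it with $-8/(m\pi)^2$:

```python
import numpy as np

L = 2.0
N = 200000
dx = 2 * L / N
x = -L + (np.arange(N) + 0.5) * dx        # midpoints of [-L, L]
f = np.abs(x)                             # f(x) = -x on [-2,0), x on [0,2)

for m in range(1, 6):
    a_m = np.sum(f * np.cos(m * np.pi * x / L)) * dx / L
    exact = -8 / (m * np.pi) ** 2 if m % 2 == 1 else 0.0
    print(m, round(a_m, 6), round(exact, 6))
```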
5.1 Related information
From Euler's identities, $e^{ix} = \cos x + i\sin x$ and $e^{-ix} = \cos x - i\sin x$. Adding
and dividing by 2 gives the formula for cosine, and subtracting and dividing
by $2i$ gives the formula for sine:
\[ \cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i}. \]
If we use this in the Fourier series expansion of a function, and collect terms,
we get
\[ f(x) = \sum_{n=-\infty}^{\infty}c_ne^{inx}. \]
Note the sum goes over negative $n$'s as well as positive ones. This form of
Fourier series is quite common.

Conversely, given a series in terms of sums of the $e^{inx}$, we can use Euler's
formula to write it in terms of sines and cosines.

Parseval's identity is a way of expressing $\int_{-L}^{L}f(x)^2\,dx$ in terms of sums of
the squares of the coefficients. This gives rise to some very pretty formulas,
and also has theoretical importance, but not so much practical importance.

Recall from linear algebra that an orthonormal basis $\{v_1, \ldots, v_n\}$ is a
collection of vectors such that every vector $v$ in $\mathbb{R}^n$ can be written as
$v = \sum_{i=1}^{n}c_iv_i$ for some constants $c_i$, with $v_i\cdot v_j = 0$ if $i \neq j$, and the inner
product equal to 1 if $i = j$. To find the $c_j$'s, we dot $v$ with $v_j$ to get
\[ v\cdot v_j = c_1v_1\cdot v_j + \cdots + c_nv_n\cdot v_j = c_j. \]
This is what we are doing in Fourier series. Let $L = 1$ for simplicity, define
$f\cdot g = \int_{-L}^{L}f(x)g(x)\,dx$, and let the $v_j$'s be the $\cos(n\pi x/L)$'s and $\sin(n\pi x/L)$'s.
The coefficients are given by taking the inner product of $f$ with the $v_j$'s.

The Fourier series expansion works when $f$ is piecewise differentiable; $f$
does not need to be continuous.

$f$ is even if $f(-x) = f(x)$ and odd if $f(-x) = -f(x)$. When $f$ is even,
we get a Fourier cosine series, and if $f$ is odd, a Fourier sine series.

If we have a function on $[0, L]$, we can extend it to be even on $[-L, L]$ to
get a Fourier cosine series, or odd to get a Fourier sine series.
6 Separation of variables
Let us look at the equation
\[ \alpha^2\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}, \]
with boundary conditions
\[ u(x, 0) = f(x) \]
for all $x$ and also
\[ u(0, t) = 0, \qquad u(L, t) = 0. \]
This is a PDE, called the heat equation. Let us assume the solution is of
the form
\[ u(x, t) = X(x)T(t), \]
where $X$ is a function only of $x$ and $T$ is a function only of $t$. We then have
\[ \alpha^2X''(x)T(t) = X(x)T'(t), \]
or
\[ \frac{X''}{X} = \frac{1}{\alpha^2}\frac{T'}{T}. \]
Since the left hand side is a function only of $x$ and the right hand side is a
function only of $t$, they must be equal to a constant, say $-\lambda$. So
\[ X'' + \lambda X = 0 \]
and
\[ T' + \alpha^2\lambda T = 0. \]
The boundary values translate to $X(0) = X(L) = 0$. The solutions then are
\[ X = \sin\frac{n\pi x}{L}, \qquad \lambda = \frac{n^2\pi^2}{L^2}, \]
and then
\[ T = e^{-n^2\pi^2\alpha^2 t/L^2}. \]
So
\[ u_n = e^{-n^2\pi^2\alpha^2 t/L^2}\sin\frac{n\pi x}{L} \]
solves the PDE. Also,
\[ u = \sum_n c_nu_n \]
is a solution. Since $u(x, 0) = f(x)$, we have
\[ f(x) = \sum_n c_n\sin\frac{n\pi x}{L}, \]
and we determine the coefficients $c_n$ by expanding $f$ in a Fourier sine series.
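Here is a small numerical sketch of that recipe (not part of the notes); the initial temperature $f(x) = x(L-x)$ and the values $\alpha = L = 1$ are assumptions chosen only for illustration.

```python
import numpy as np

alpha, L = 1.0, 1.0
N = 2000
dx = L / N
x = (np.arange(N) + 0.5) * dx            # midpoints of [0, L]
f = x * (L - x)                          # assumed initial condition

def c(n):
    # Fourier sine coefficient: c_n = (2/L) * integral of f(x) sin(n pi x / L) dx
    return 2.0 / L * np.sum(f * np.sin(n * np.pi * x / L)) * dx

def u(xx, t, terms=20):
    total = np.zeros_like(xx)
    for n in range(1, terms + 1):
        total += c(n) * np.exp(-(n * np.pi * alpha / L) ** 2 * t) * np.sin(n * np.pi * xx / L)
    return total

print(np.max(np.abs(u(x, 0.0) - f)))     # small: the sine series reproduces f at t = 0
print(u(np.array([0.5]), 0.1))           # temperature at the midpoint at t = 0.1
```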
7 Other boundary conditions
If we have the boundary conditions
\[ u(0, t) = T_1, \qquad u(L, t) = T_2, \]
look at
\[ u - \left[(T_2 - T_1)\frac{x}{L} + T_1\right]. \]
If we have insulated ends:
\[ u_x(0, t) = 0, \qquad u_x(L, t) = 0, \]
we proceed as above, but get
\[ X = c_1\sin\sqrt{\lambda}\,x + c_2\cos\sqrt{\lambda}\,x. \]
From the boundary values, we get $c_1 = 0$ and
\[ u(x, t) = \frac{c_0}{2} + \sum c_ne^{-n^2\pi^2\alpha^2 t/L^2}\cos\frac{n\pi x}{L}. \]
More general boundary value conditions are things like
\[ u(0, t) = 0, \qquad u_x(L, t) = 0 \]
or $u_x(0, t) - h_1u(0, t) = 0$.
8 Wave equation
The wave equation is
\[ \alpha^2u_{xx} = u_{tt} \]
with boundary conditions
\[ u(0, t) = 0, \qquad u(L, t) = 0, \]
and
\[ u(x, 0) = f(x), \qquad u_t(x, 0) = g(x), \]
where $f(0) = f(L) = 0$ and $g(0) = g(L) = 0$.

We first assume $g$ is equal to 0 for all $x$. Supposing $u = XT$, we have
\[ \frac{X''}{X} = \frac{1}{\alpha^2}\frac{T''}{T} = -\lambda, \]
which gives us
\[ X'' + \lambda X = 0 \]
as before and
\[ T'' + \alpha^2\lambda T = 0, \]
so that
\[ T = k_1\cos\frac{n\pi\alpha t}{L} + k_2\sin\frac{n\pi\alpha t}{L}. \]
Since $u_t(x, 0) = 0$, then $k_2$ must be 0.
Then
\[ u = \sum c_n\sin\frac{n\pi x}{L}\cos\frac{n\pi\alpha t}{L}, \]
where
\[ f(x) = u(x, 0) = \sum c_n\sin\frac{n\pi x}{L}. \]
Next we assume $f$ is identically 0, and similarly get
\[ u = \sum k_n\sin\frac{n\pi x}{L}\sin\frac{n\pi\alpha t}{L}. \]
We differentiate with respect to $t$, set $t = 0$, and use Fourier series to get $k_n$.
In the case where neither $f$ nor $g$ is identically 0, we do $f$ and $g$ separately
(set $g = 0$, then set $f = 0$) and add.
9 Laplace equation
The Laplace equation is
\[ u_{xx} + u_{yy} = 0. \]
We suppose we are in a rectangle $[0, a]\times[0, b]$ with boundary conditions 0 on
the top, left, and bottom, and $u(a, y) = f(y)$ on the right.

We write
\[ u = XY, \]
which leads to
\[ \frac{X''}{X} = -\frac{Y''}{Y} = \lambda. \]
The boundary conditions become $X(0) = 0$, $Y(0) = Y(b) = 0$. We get
\[ Y = \sin\frac{n\pi y}{b} \]
and
\[ X = c_1e^{n\pi x/b} + c_2e^{-n\pi x/b}. \]
The boundary condition $X(0) = 0$ implies $c_2 = -c_1$, or (after renaming the
constant)
\[ X = c_1\sinh\frac{n\pi x}{b}. \]
Our solution becomes
\[ u = \sum c_n\sinh\frac{n\pi x}{b}\sin\frac{n\pi y}{b}. \]
If we put in $x = a$ and let
\[ b_n = c_n\sinh\frac{n\pi a}{b}, \]
then
\[ f(y) = u(a, y) = \sum_n b_n\sin\frac{n\pi y}{b}. \]

We can also look at the Laplace equation in a circle of radius 1. In this
case we make a change of variables and get
\[ u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0 \]
with boundary condition
\[ u(1, \theta) = f(\theta). \]
We write
\[ u = R\Theta \]
and get
\[ R''\Theta + \frac{1}{r}R'\Theta + \frac{1}{r^2}R\Theta'' = 0, \]
which leads to
\[ r^2\frac{R''}{R} + r\frac{R'}{R} = -\frac{\Theta''}{\Theta} = \lambda. \]
If $\lambda < 0$, then
\[ \Theta = c_1e^{\sqrt{-\lambda}\,\theta} + c_2e^{-\sqrt{-\lambda}\,\theta}, \]
which is not periodic. If $\lambda = 0$, we have
\[ \Theta = c_1 + c_2\theta. \]
To be periodic, we have to have $c_2 = 0$. But then
\[ r^2R'' + rR' = 0 \]
implies
\[ R = k_1 + k_2\ln r, \]
and either $u$ is not bounded, or else $u$ is constant.

So to get anything interesting, we need $\lambda > 0$. Let $\mu = \sqrt{\lambda}$, so that
$\lambda = \mu^2$. We get
\[ \Theta = c_1\sin\mu\theta + c_2\cos\mu\theta \]
and
\[ r^2R'' + rR' - \mu^2R = 0, \]
which implies
\[ R = k_1r^{\mu} + k_2r^{-\mu}. \]
To be periodic, we must have $\mu = n$. To keep $u$ bounded, we must have
$k_2 = 0$. So our solution is
\[ u(r, \theta) = \frac{c_0}{2} + \sum r^n(c_n\cos n\theta + k_n\sin n\theta). \]
Putting in $r = 1$, we expand $f(\theta)$ in a Fourier series to get the coefficients
$c_n$, $k_n$.
10 Sturm-Liouville theory
One might want to solve more general PDEs, such as
\[ r(x)u_t = [p(x)u_x]_x - q(x)u + F(x, t), \]
with general boundary conditions such as
\[ u_x(0, t) - h_1u(0, t) = 0, \qquad u_x(L, t) + h_2u(L, t) = 0. \]
One could also look at more general regions.

If one wanted to solve
\[ ru_t = [pu_x]_x - qu, \]
one would try $u = XT$ and get
\[ rXT' = (pX')'T - qXT, \]
or
\[ \frac{(pX')' - qX}{rX} = \frac{T'}{T} = -\lambda, \]
and one is led to solving
\[ (py')' - qy + \lambda ry = 0. \]
Recall the notions of eigenvalues, eigenvectors, and orthogonality.

Look at
\[ (py')' - qy + \lambda ry = 0, \qquad 0 < x < 1, \]
with boundary conditions
\[ \alpha_1y(0) + \alpha_2y'(0) = 0, \qquad \beta_1y(1) + \beta_2y'(1) = 0. \]
Define the operator
\[ L(y) = -(p(x)y')' + qy, \]
and we look at
\[ Ly = \lambda ry, \]
that is, eigenfunctions for the operator $L$.

The operator $L$ is symmetric in the following sense:
\begin{align*}
\int g(Lf) &= \int gqf - \int (pf')'g \\
&= \int gqf + \int (pf')g' - (pf')g\Big|_0^1 \\
&= \int gqf - \int f(pg')' + \Big[fpg' - (pf')g\Big]_0^1,
\end{align*}
and
\[ \Big[pfg' - pf'g\Big]_0^1
   = p(1)f(1)g'(1) - p(1)f'(1)g(1) - p(0)f(0)g'(0) + p(0)f'(0)g(0). \]
If $f$ and $g$ both satisfy the boundary conditions, then this is 0, and we are
led to
\[ \int g(Lf) = \int f(Lg). \]
The eigenfunctions are orthogonal:
\[ \int \phi_1(L\phi_2) = \int \lambda_2r\phi_1\phi_2
   \quad\text{and}\quad
   \int (L\phi_1)\phi_2 = \int \lambda_1r\phi_1\phi_2. \]
So if $\lambda_1 \neq \lambda_2$, then
\[ \int r\phi_1\phi_2 = 0. \]
The eigenvalues are real, the first one is simple, and
\[ \lambda_1 < \lambda_2 < \lambda_3 < \cdots. \]
An example is $p = 1$, $r = 1$, and the eigenfunctions are $\sin n\pi x$, with
eigenvalues $n^2\pi^2$.

We say the eigenfunctions are normalized if
\[ \int_0^1 r\phi_m^2 = 1. \]
Suppose we can write $f = \sum c_n\phi_n$. Then $fr = \sum c_nr\phi_n$, and
\[ \int fr\phi_m = \sum_n c_n\int r\phi_n\phi_m = c_m\int r\phi_m^2. \]
So
\[ c_m = \int fr\phi_m. \]
As an example, look at
\[ y'' + \lambda y = 0, \qquad y'(0) = 0, \quad y'(1) + y(1) = 0. \]
The equation the eigenvalues must satisfy is
\[ \frac{1}{\sqrt{\lambda}} = \tan\sqrt{\lambda}. \]
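That eigenvalue equation has to be solved numerically. A sketch (not from the notes): write $s = \sqrt{\lambda}$ and locate the roots of $g(s) = \tan s - 1/s$ by bisection, one root in each interval $(n\pi,\, n\pi + \pi/2)$.

```python
import math

def g(s):
    return math.tan(s) - 1.0 / s

def bisect(a, b, tol=1e-12):
    # assumes g changes sign on [a, b]
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

for n in range(3):
    s = bisect(n * math.pi + 1e-6, n * math.pi + math.pi / 2 - 1e-6)
    print(n, s, s * s)      # s*s is the eigenvalue lambda_n
```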
10.1 Nonhomogeneous equations
Suppose we are looking at
\[ Ly = \lambda ry + f. \]
Let $y_n$ be the normalized eigenfunctions, which means
\[ \int_0^1 y_n^2r = 1. \]
We assume $y = \sum_n b_ny_n$. Suppose $f/r = \sum c_ny_n$. Then
\[ Ly = \sum b_nLy_n = \sum_n b_n\lambda_nry_n, \]
and we get
\[ \sum b_n\lambda_nry_n = \sum \lambda rb_ny_n + \sum c_nry_n. \]
So
\[ \sum [b_n(\lambda_n - \lambda) - c_n]ry_n = 0, \]
which leads to
\[ b_n = \frac{c_n}{\lambda_n - \lambda} \]
and
\[ y = \sum \frac{c_n}{\lambda_n - \lambda}y_n. \]
As an example, consider
\[ y'' + 2y = \sin 3\pi x - 4\sin 5\pi x, \]
with boundary conditions $y(0) = 0$, $y(1) = 0$. The first thing we do is rewrite
the equation in the form
\[ -(py')' + qy = \lambda ry + f, \]
or
\[ py'' + p'y' - qy + \lambda ry + f = 0. \]
Our equation fits into this form if we let $p = 1$, $q = 0$, $r = 1$, $\lambda = 2$,
and $f = -\sin 3\pi x + 4\sin 5\pi x$. We find the eigenfunctions are $\sin n\pi x$ with
corresponding eigenvalues $n^2\pi^2$. So the normalized eigenfunctions are
\[ y_n = \sqrt{2}\sin n\pi x. \]
We expand $f$ as $\sum c_ny_n$, and taking into account the normalization,
\[ c_3 = -\sqrt{2}/2, \qquad c_5 = 2\sqrt{2}, \]
and all the other $c_n$ are 0. So
\[ b_3 = \frac{-\sqrt{2}/2}{9\pi^2 - 2}, \qquad b_5 = \frac{2\sqrt{2}}{25\pi^2 - 2}, \]
and all the other $b_n$ are 0. Hence the solution is
\[ y = \frac{-\sqrt{2}/2}{9\pi^2 - 2}\sqrt{2}\sin 3\pi x + \frac{2\sqrt{2}}{25\pi^2 - 2}\sqrt{2}\sin 5\pi x
     = -\frac{\sin 3\pi x}{9\pi^2 - 2} + \frac{4\sin 5\pi x}{25\pi^2 - 2}. \]
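A quick symbolic check of that answer (a sketch assuming SymPy; not part of the notes):

```python
import sympy as sp

x = sp.symbols("x")
y = -sp.sin(3*sp.pi*x)/(9*sp.pi**2 - 2) + 4*sp.sin(5*sp.pi*x)/(25*sp.pi**2 - 2)

# The residual y'' + 2y - (sin 3*pi*x - 4 sin 5*pi*x) should simplify to 0,
# and y should vanish at both endpoints.
print(sp.simplify(sp.diff(y, x, 2) + 2*y - (sp.sin(3*sp.pi*x) - 4*sp.sin(5*sp.pi*x))))
print(y.subs(x, 0), y.subs(x, 1))
```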
There are some classical Sturm-Liouville operators:

1) Hermite: $y'' - 2xy' + \lambda y = 0$;

2) Bessel: $x^2y'' + xy' + (x^2 - \nu^2)y = 0$;

3) Chebyshev: $(1 - x^2)y'' - xy' + \alpha^2y = 0$.
11 Linear systems
Let $X = \begin{pmatrix} x \\ y \end{pmatrix}$, $A$ a $2\times 2$ matrix, and suppose we want to solve
\[ X' = AX. \]
We assume the solution is of the form $X = We^{rt}$, where $W$ is a $2\times 1$ matrix.
Then
\[ X' = rWe^{rt}, \]
so
\[ rWe^{rt} = AWe^{rt}, \]
or
\[ AW = rW. \]
Thus the $r$ are the eigenvalues of $A$ and the $W$ are the corresponding eigenvectors.
The solution becomes
\[ X = c_1W_1e^{r_1t} + c_2W_2e^{r_2t}. \]
This is fine if $r_1 \neq r_2$ are real. If $r = a \pm bi$, then we get
\begin{align*}
X &= W_1e^{at}\cos bt + iW_1e^{at}\sin bt + W_2e^{at}\cos bt - iW_2e^{at}\sin bt \\
  &= Z_1e^{at}\cos bt + Z_2e^{at}\sin bt.
\end{align*}
An example is where $X' = AX$ and
\[ A = \begin{pmatrix} -\tfrac{1}{2} & 1 \\ -1 & -\tfrac{1}{2} \end{pmatrix}. \]
Then $r = -\tfrac{1}{2} \pm i$,
\[ X_1 = \begin{pmatrix} 1 \\ i \end{pmatrix}e^{(-\frac{1}{2}+i)t}, \]
and $X_2$ is similar. We get the two solutions by looking at the real and
imaginary parts of $X_1$.
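NumPy can produce the eigenvalues and eigenvectors used in $X = We^{rt}$ directly; a sketch (not from the notes) for the complex-root example above:

```python
import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
vals, vecs = np.linalg.eig(A)
print(vals)          # -0.5 + 1j and -0.5 - 1j (in some order)
print(vecs[:, 0])    # a (complex) multiple of (1, i), matching W_1 in the text
```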
When we have repeated roots, one solution will be $X = W_1e^{rt}$. For the
other, we try
\[ X = Z_1te^{rt} + Z_2e^{rt}. \]
Then
\[ X' = Z_1e^{rt} + Z_1rte^{rt} + rZ_2e^{rt}, \]
and $X' = AX$ implies
\[ rZ_2e^{rt} + Z_1e^{rt} + Z_1rte^{rt} = AZ_1te^{rt} + AZ_2e^{rt}, \]
or
\[ rZ_1 = AZ_1 \]
and
\[ Z_1 + rZ_2 = AZ_2. \]
For $Z_1$ we get what we had before. We then solve $Z_1 + rZ_2 = AZ_2$. For
example, if
\[ A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}, \]
then $r = 2$ and $Z_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Then
\[ X_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{2t} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{2t}. \]
12 Nonlinear systems
12.1 Phase plane
If $r_1 \neq r_2$ and both are negative, we get Figure 1. Our solution is
\[ W_1e^{r_1t} + W_2e^{r_2t} = e^{r_1t}\left(W_1 + W_2e^{(r_2-r_1)t}\right). \]
We get a similar picture if both are positive.

If one of $r_1$, $r_2$ is positive and the other negative, we get something like
Figure 2.

If the $r$'s are complex, our solution is of the form
\[ W_1e^{at}\cos bt + W_2e^{at}\sin bt; \]
with $a > 0$, the picture looks like Figure 3. If $a < 0$, the arrows point inward.
12.2 Autonomous equations
We'll look at
\[ x' = F(x, y); \qquad y' = G(x, y). \]
Since there is no dependence on $t$ on the right hand side, these are called
autonomous equations.

Let's start by looking at an example:
\[ x' = 4x + 2y + y^2; \qquad y' = 2x - y + x^2. \]
We compare this to
\[ x' = 4x + 2y; \qquad y' = 2x - y. \]
When $x, y = 0$, we get $x', y' = 0$, so $(0, 0)$ is a critical point. When we solve
this linear system, we get
\[ r = \frac{3 \pm \sqrt{41}}{2}. \]
So one value of $r$ is positive and one negative, and $(0, 0)$ is an unstable
equilibrium. Now when $x$ and $y$ are small, the $x^2$ and $y^2$ are negligible, so
the nonlinear system also has an unstable equilibrium at $(0, 0)$.

Another example is
\[ x' = x + y + 1 + y^2, \qquad y' = x - y + 4. \]
We solve $y^2 + x + y + 1 = 0$, $x - y + 4 = 0$ to find the critical points. We
end up with $(-7, -3)$ and $(-3, 1)$. For the second one, we do a transformation
$u = x + 3$, $v = y - 1$, and our equation becomes
\begin{align*}
u' &= (u - 3) + (v + 1) + 1 + (v + 1)^2 = u + v + v^2 + 2v, \\
v' &= (u - 3) - (v + 1) + 4 = u - v.
\end{align*}
We now look at the linear approximation to see the behavior near these
critical points.
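The linear approximation at a critical point is the Jacobian matrix of $(F, G)$ evaluated there; here is a sketch (assuming SymPy, not part of the notes) for the first example at the origin.

```python
import sympy as sp

x, y = sp.symbols("x y")
F = 4*x + 2*y + y**2
G = 2*x - y + x**2

J = sp.Matrix([[sp.diff(F, x), sp.diff(F, y)],
               [sp.diff(G, x), sp.diff(G, y)]])
J0 = J.subs({x: 0, y: 0})
print(J0)                 # Matrix([[4, 2], [2, -1]])
print(J0.eigenvals())     # eigenvalues (3 +- sqrt(41))/2, one positive and one negative
```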
12.3 Competing species
Here is an example.
\[ x' = x(1 - x - y), \qquad y' = y(2 - y - 3x). \]
To interpret this, $x'$ is approximately $x$ when there is plenty of food, it is
approximately $x(1 - x)$ when $x$ gets near the maximum the environment can
support, and it is $x(1 - x - y)$ when both species compete for the food. The
critical points are $(0, 0)$, $(\tfrac{1}{2}, \tfrac{1}{2})$, $(0, 2)$, and $(1, 0)$.

Another example:
\[ x' = x(1 - x - y), \qquad y' = y(2 - 2x - y). \]
Here $(1, 0)$ is a nontrivial equilibrium point.

Also:
\[ x' = x(1 - x - y), \qquad y' = y(\tfrac{3}{4} - y - \tfrac{1}{2}x), \]
where $(\tfrac{1}{2}, \tfrac{1}{2})$ is the equilibrium point.
12.4 Predator-prey
Let x be the prey, y the predator.
\[ x' = x(1 - 2y), \qquad y' = y(-1 + 4x). \]
The way to interpret this is $x' = x - 2xy$, and the $2xy$ is the rate at which the
prey is killed off, proportional to the number of encounters.

Another example:
\[ x' = x(1 - \tfrac{1}{2}y), \qquad y' = y(-\tfrac{3}{4} + \tfrac{1}{4}x). \]
The pattern is elliptical. Two properties: the period is independent of the
initial condition, and the prey leads the predator by a quarter cycle.
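A numerical sketch of the second system (assuming SciPy is available; not part of the notes), just to see the bounded, periodic oscillation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y = z
    return [x * (1 - 0.5 * y), y * (-0.75 + 0.25 * x)]

sol = solve_ivp(rhs, (0.0, 40.0), [5.0, 1.0], max_step=0.01)
x, y = sol.y
# Both populations oscillate between fixed bounds and stay positive.
print(x.min(), x.max(), y.min(), y.max())
```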
13 Numerical solutions
13.1 Euler's method

We solve $y' = f(t, y)$ with step size $h$. We start with $t_0$, $y_0$.
\begin{align*}
d_0 &= y_0' = f(t_0, y_0) \\
t_1 &= t_0 + h \\
y_1 &= y_0 + d_0h \\
d_1 &= f(t_1, y_1) \\
t_2 &= t_1 + h \\
y_2 &= y_1 + d_1h
\end{align*}
and so on.

An example is worked out in the sketch below.
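Here is that example as code (a sketch, not from the notes), applied to $y' = y$, $y(0) = 1$, whose exact solution is $e^t$:

```python
import math

def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        d = f(t, y)        # d_n = f(t_n, y_n)
        y = y + d * h      # y_{n+1} = y_n + d_n * h
        t = t + h          # t_{n+1} = t_n + h
    return t, y

t, y = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(t, y, math.exp(1.0))   # t = 1.0, y is about 2.594, versus e = 2.71828...
```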
To analyze the error, let $y = \phi(t)$ be the solution. By Taylor's expansion,
\[ \phi(t_n + h) = \phi(t_n) + \phi'(t_n)h + \phi''(t_n)\frac{h^2}{2} + \cdots. \]
We have $y_{n+1} \approx \phi(t_n + h)$, $y_n = \phi(t_n)$, and $d_nh = \phi'(t_n)h$. So the error in
one step is of order
\[ \phi''\frac{h^2}{2}. \]
There are two types of error: truncation error and round-off error.
13.2 Improved Euler method
The following equation is exact:
\[ y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}}f(t, y)\,dt. \]
Euler's method is
\[ y_{n+1} = y_n + f(t_n, y_n)(t_{n+1} - t_n). \]
Better would be
\[ y_{n+1} = y_n + \frac{f(t_n, y_n) + f(t_{n+1}, y_{n+1})}{2}h, \]
but $f(t_{n+1}, y_{n+1})$ is unknown. The improved Euler method uses
\[ y_{n+1} = y_n + \frac{f(t_n, y_n) + f(t_{n+1}, y_n + hd_n)}{2}h. \]
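The same test problem with an improved Euler step, as a sketch (not from the notes); with $h = 0.1$ the error at $t = 1$ drops from roughly $10^{-1}$ to roughly $4\times 10^{-3}$:

```python
import math

def improved_euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        d = f(t, y)
        y = y + 0.5 * (d + f(t + h, y + h * d)) * h
        t = t + h
    return t, y

print(improved_euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)[1], math.exp(1.0))
```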
13.3 Runge-Kutta
The error per step for Euler is of order $h^2$, the error for improved Euler is $h^3$,
and for Runge-Kutta $h^5$. The formula for Runge-Kutta is
\[ y_{n+1} = y_n + h\left(\frac{k_{n1} + 2k_{n2} + 2k_{n3} + k_{n4}}{6}\right), \]
where
\begin{align*}
k_{n1} &= f(t_n, y_n) \\
k_{n2} &= f(t_n + \tfrac{1}{2}h,\ y_n + \tfrac{1}{2}hk_{n1}) \\
k_{n3} &= f(t_n + \tfrac{1}{2}h,\ y_n + \tfrac{1}{2}hk_{n2}) \\
k_{n4} &= f(t_n + h,\ y_n + hk_{n3}).
\end{align*}
An example is sketched below.
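The example, filled in as code (a sketch, not from the notes), again on $y' = y$, $y(0) = 1$:

```python
import math

def rk4(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t + h
    return t, y

print(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)[1], math.exp(1.0))
# With h = 0.1 the two numbers agree to about six significant figures.
```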