where the coefficients aij and the terms gi are arbitrary functions of t. If every
term gi is identically zero, then the system is said to be homogeneous.
Otherwise, it is a nonhomogeneous system if even one of the gi is nonzero.
In matrix-vector notation, the system is written as

$$\mathbf{x}' = A\mathbf{x} + \mathbf{g},$$

where

$$\mathbf{x}' = \begin{pmatrix} x_1' \\ x_2' \\ x_3' \\ \vdots \\ x_n' \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}, \qquad \mathbf{g} = \begin{pmatrix} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{pmatrix}.$$
For a homogeneous system, g is the zero vector. Hence it has the form
x′ = Ax.
Examples:
x1′ = x2
x2′ = −(k/m) x1 − (γ/m) x2 + F(t)/m
x1 ′ = x2
x2 ′ = x3
x3 ′ = 4 x1 − 3 x2 + 2 x3
x1′ = x2
x2′ = x3
x3′ = x4
⋮
xn−1′ = xn
xn′ = −(a0/an) x1 − (a1/an) x2 − (a2/an) x3 − ... − (an−1/an) xn + g(t)/an
* The exceptions are systems whose coefficient matrices are diagonal matrices. However, our
eigenvector method will nevertheless be able to solve them without any modification.
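The conversion of an n-th order linear equation to a first-order system is mechanical enough to automate. Below is a small illustrative sketch (the helper name `companion` is our own, not from the text), checked against the third order equation y″′ + 6y″ + y′ − 2y = 0 from Exercise 5 below.

```python
import numpy as np

def companion(coeffs):
    """Coefficient matrix of the first-order system equivalent to
    a_n y^(n) + ... + a_1 y' + a_0 y = 0, given coeffs = [a_0, ..., a_n]
    and the usual substitutions x_1 = y, x_2 = y', ..., x_n = y^(n-1)."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)      # x_i' = x_{i+1} for i = 1, ..., n-1
    A[-1, :] = -a[:-1] / a[-1]      # last row: solve for y^(n)
    return A

# y''' + 6y'' + y' - 2y = 0  ->  coeffs [a0, a1, a2, a3] = [-2, 1, 6, 1]
A = companion([-2, 1, 6, 1])
print(A)
```

The last row reads x3′ = 2x1 − x2 − 6x3, matching the pattern of the general formula above.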
4. Rewrite the system you found in (a) Exercise 1, and (b) Exercise 2, into a
matrix-vector equation.
5. Convert the third order linear equation below into a system of 3 first
order equations using (a) the usual substitutions, and (b) substitutions in the
reverse order: x1 = y″, x2 = y′, x3 = y. Deduce that there are multiple
ways to rewrite each n-th order linear equation into a linear system of n
equations.
y″′ + 6y″ + y′ − 2y = 0
Answers D-1.1:
1. x1′ = x2
   x2′ = −5x1 + 4x2
2. x1′ = x2
   x2′ = x3
   x3′ = −9x1 + 5x3 + t cos 2t
3. x1 ′ = x2
x2 ′ = x3
x3 ′ = x4
x4 ′ = 6x1 − 2πx2 + πx3 − 3x4 + 11
4. (a) $\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -5 & 4 \end{pmatrix}\mathbf{x}$  (b) $\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -9 & 0 & 5 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 0 \\ 0 \\ t\cos 2t \end{pmatrix}$
Review topics:
$$I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \text{etc.}$$
AI = IA = A
In x = x, x = any n × 1 vector
Properties:
A+0=0+A=A
A0 = 0 = 0A
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \pm \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a \pm e & b \pm f \\ c \pm g & d \pm h \end{pmatrix}$$

$$k\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ka & kb \\ kc & kd \end{pmatrix}, \quad \text{for any scalar } k.$$

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{pmatrix}$$

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix}.$$

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
Note: The determinant is a function whose domain is the set of all
square matrices of a certain size, and whose range is the set of all real
(or complex) numbers.
Suppose A is an n × n matrix. If there exists an n × n matrix B such that
AB = BA = In,
then the matrix B is called the inverse matrix of A, denoted A−1. The
inverse matrix, if it exists, is unique for each A. A matrix is called
invertible if it has an inverse matrix.
Theorem: For any 2 × 2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$,
its inverse, if it exists, is given by

$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
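The formula above is easy to code and sanity-check; this is a small illustrative sketch (the function name `inv2` is our own), verified against the worked example that follows.

```python
import numpy as np

def inv2(A):
    """Inverse of a 2x2 matrix via the (1/(ad - bc)) [[d, -b], [-c, a]] formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible (ad - bc = 0)")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, -2.0], [5.0, 2.0]])
print(inv2(A))   # [[1/6, 1/6], [-5/12, 1/12]]
```

Multiplying the result by A on either side returns the identity matrix, as the definition of the inverse requires.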
Example: Let $A = \begin{pmatrix} 1 & -2 \\ 5 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix}$.

(i) $2A - B = 2\begin{pmatrix} 1 & -2 \\ 5 & 2 \end{pmatrix} - \begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix} = \begin{pmatrix} 2-2 & -4-(-3) \\ 10-(-1) & 4-4 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 11 & 0 \end{pmatrix}$

(ii) $AB = \begin{pmatrix} 1 & -2 \\ 5 & 2 \end{pmatrix}\begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix} = \begin{pmatrix} 2+2 & -3-8 \\ 10-2 & -15+8 \end{pmatrix} = \begin{pmatrix} 4 & -11 \\ 8 & -7 \end{pmatrix}$

On the other hand:

$BA = \begin{pmatrix} 2 & -3 \\ -1 & 4 \end{pmatrix}\begin{pmatrix} 1 & -2 \\ 5 & 2 \end{pmatrix} = \begin{pmatrix} 2-15 & -4-6 \\ -1+20 & 2+8 \end{pmatrix} = \begin{pmatrix} -13 & -10 \\ 19 & 10 \end{pmatrix}$

(iv) $A^{-1} = \frac{1}{2-(-10)}\begin{pmatrix} 2 & 2 \\ -5 & 1 \end{pmatrix} = \frac{1}{12}\begin{pmatrix} 2 & 2 \\ -5 & 1 \end{pmatrix} = \begin{pmatrix} 1/6 & 1/6 \\ -5/12 & 1/12 \end{pmatrix}$
If the vector b on the right-hand side is the zero vector, then the
system is called homogeneous. A homogeneous linear system always
has a solution, namely the all-zero solution (that is, the origin). This
solution is called the trivial solution of the system. Therefore, a
homogeneous linear system Ax = 0 could have either exactly one
solution or infinitely many solutions. There is no other possibility,
since it always has, at least, the trivial solution. If such a system has n
equations and exactly the same number of unknowns, then the number
of solutions the system has can be determined, without having to
solve the system, by the determinant of its coefficient matrix: the
trivial solution is the only solution precisely when det(A) ≠ 0, and
there are infinitely many solutions when det(A) = 0.
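This determinant test is a one-liner in code. The following small sketch (the helper name `trivial_only` is our own) illustrates both cases.

```python
import numpy as np

def trivial_only(A):
    """For the homogeneous system Ax = 0 with square A: True exactly when
    det(A) != 0, i.e. the trivial solution x = 0 is the only solution."""
    return bool(not np.isclose(np.linalg.det(A), 0.0))

# det = 2*3 - 3*4 = -6 != 0: only the trivial solution
print(trivial_only(np.array([[2.0, 3.0], [4.0, 3.0]])))
# det = 1*4 - 2*2 = 0: infinitely many solutions
print(trivial_only(np.array([[1.0, 2.0], [2.0, 4.0]])))
```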
(A − r I) x = 0.
$$A - rI = \begin{pmatrix} a & b \\ c & d \end{pmatrix} - r\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a-r & b \\ c & d-r \end{pmatrix}.$$

$$\det\begin{pmatrix} a-r & b \\ c & d-r \end{pmatrix} = (a-r)(d-r) - bc = r^2 - (a+d)r + (ad-bc) = 0$$
$$A - rI = \begin{pmatrix} 2 & 3 \\ 4 & 3 \end{pmatrix} - r\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2-r & 3 \\ 4 & 3-r \end{pmatrix}.$$

$$\det\begin{pmatrix} 2-r & 3 \\ 4 & 3-r \end{pmatrix} = (2-r)(3-r) - 12 = r^2 - 5r - 6 = (r+1)(r-6) = 0$$
For r = −1:

$$(A - rI)\mathbf{x} = (A + I)\mathbf{x} = \begin{pmatrix} 2+1 & 3 \\ 4 & 3+1 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 3 & 3 \\ 4 & 4 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k}_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
For r = 6:

$$(A - rI)\mathbf{x} = (A - 6I)\mathbf{x} = \begin{pmatrix} 2-6 & 3 \\ 4 & 3-6 \end{pmatrix}\mathbf{x} = \begin{pmatrix} -4 & 3 \\ 4 & -3 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k}_2 = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.$$
When A is a diagonal or triangular matrix, such as

$$\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} a & 0 \\ c & d \end{pmatrix},$$

then the eigenvalues are just the main diagonal entries, r = a and d in all 3
examples above.
$$\det\begin{pmatrix} a-r & b \\ c & d-r \end{pmatrix} = r^2 - (a+d)r + (ad-bc) = 0,$$

that is,

$$r^2 - \mathrm{Trace}(A)\, r + \det(A) = 0.$$
Note: For any square matrix A, Trace(A) = [sum of all entries on the main
diagonal (running from top-left to bottom-right)]. For a 2 × 2 matrix A,
Trace(A) = a + d.
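The trace/determinant form of the characteristic equation gives a direct way to compute 2 × 2 eigenvalues by the quadratic formula. Below is a small illustrative sketch (the helper name `eig2_via_trace_det` is our own), checked on the matrix from the example above.

```python
import numpy as np

def eig2_via_trace_det(A):
    """Roots of r^2 - Trace(A) r + det(A) = 0 for a 2x2 matrix A,
    via the quadratic formula; a negative discriminant yields a complex pair."""
    tr = np.trace(A)
    det = np.linalg.det(A)
    sq = np.sqrt(complex(tr * tr - 4 * det))
    return (tr + sq) / 2, (tr - sq) / 2

A = np.array([[2.0, 3.0], [4.0, 3.0]])
print(eig2_via_trace_det(A))   # roots r = 6 and r = -1
```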
We first find the eigenvalue(s), and then write down, for each eigenvalue, the
matrix (A − r I) as usual. Then we take any row of (A − r I) that does not
consist entirely of zero entries, say the row vector (α, β). We put a
minus sign in front of one of the entries, for example, (α, −β). Then an
eigenvector of the matrix A is found by switching the two entries of the
above vector, that is, k = (−β, α).
Example: Previously, we have seen $A = \begin{pmatrix} 2 & 3 \\ 4 & 3 \end{pmatrix}$.

The characteristic equation is
r² − Trace(A) r + det(A) = r² − 5r − 6 = (r + 1)(r − 6) = 0,
which has roots r = −1 and 6. For r = −1, the matrix (A − r I) is $\begin{pmatrix} 3 & 3 \\ 4 & 4 \end{pmatrix}$.
Take the first row, (3, 3), which is a nonzero vector; put a minus sign in front of the
first entry to get (−3, 3); then switch the entries, and we now have k1 = (3, −3). It
is indeed an eigenvector, since it is a nonzero constant multiple of the vector
we found earlier.
On very rare occasions, both rows of the matrix (A − r I) have all zero
entries. If so, the above algorithm will not be able to find an eigenvector.
Instead, under this circumstance any non-zero vector will be an eigenvector.
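This shortcut is easy to automate. The following is a small illustrative sketch (the helper name `eigvec_shortcut` is our own), including the rare all-zero-rows case, checked on the matrix from the example above.

```python
import numpy as np

def eigvec_shortcut(A, r):
    """Eigenvector of a 2x2 matrix A for eigenvalue r via the row trick:
    take a nonzero row (alpha, beta) of A - rI and return (-beta, alpha).
    If A - rI is the zero matrix, any nonzero vector works; return (1, 0)."""
    M = A - r * np.eye(2)
    for row in M:
        if not np.allclose(row, 0):
            alpha, beta = row
            return np.array([-beta, alpha])
    return np.array([1.0, 0.0])

A = np.array([[2.0, 3.0], [4.0, 3.0]])
k = eigvec_shortcut(A, -1.0)
print(k)   # a multiple of (1, -1)
```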
Let $C = \begin{pmatrix} -5 & -1 \\ 7 & 3 \end{pmatrix}$ and $D = \begin{pmatrix} 2 & 0 \\ -2 & -1 \end{pmatrix}$.
1. Compute: (i) C + 2D and (ii) 3C – 5D.
4. Compute: (i) C −1, (ii) D −1, (iii) (CD) −1, (iv) show (CD) −1 = D −1C −1.
Answers D-1.2:
1. (i) $\begin{pmatrix} -1 & -1 \\ 3 & 1 \end{pmatrix}$, (ii) $\begin{pmatrix} -25 & -3 \\ 31 & 14 \end{pmatrix}$
2. (i) $\begin{pmatrix} -8 & 1 \\ 8 & -3 \end{pmatrix}$, (ii) $\begin{pmatrix} -10 & -2 \\ 3 & -1 \end{pmatrix}$
3. (i) −8, (ii) −2, (iii) 16, (iv) 16
4. (i) $\begin{pmatrix} -3/8 & -1/8 \\ 7/8 & 5/8 \end{pmatrix}$, (ii) $\begin{pmatrix} 1/2 & 0 \\ -1 & -1 \end{pmatrix}$, (iii) $\begin{pmatrix} -3/16 & -1/16 \\ -1/2 & -1/2 \end{pmatrix}$, (iv) The
equality is not a coincidence. In general, for any pair of invertible matrices
C and D, (CD)−1 = D−1C−1.
5. (i) r1 = 2, $\mathbf{k}_1 = \begin{pmatrix} s \\ -7s \end{pmatrix}$; r2 = −4, $\mathbf{k}_2 = \begin{pmatrix} s \\ -s \end{pmatrix}$; s = any nonzero number
(ii) r1 = 2, $\mathbf{k}_1 = \begin{pmatrix} s \\ -2s/3 \end{pmatrix}$; r2 = −1, $\mathbf{k}_2 = \begin{pmatrix} 0 \\ s \end{pmatrix}$; s = any nonzero number
x1′ = a x1 + b x2
x2′ = c x1 + d x2

In matrix-vector form:

$$\mathbf{x}' = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\mathbf{x}.$$

Or, in shorthand, x′ = Ax, if A is already known from context.
Substituting the trial solution x = k e^{rt} into the system x′ = Ax gives
r k e^{rt} = A k e^{rt}.
Since e^{rt} is never zero, we can always divide both sides by e^{rt} and get
r k = A k.
We see that this new equation is exactly the relation that defines eigenvalues
and eigenvectors of the coefficient matrix A. In other words, in order for a
function x = k e rt to satisfy our system of differential equations, the number r
must be an eigenvalue of A, and the vector k must be an eigenvector of A
corresponding to r. Just like the solution of a second order homogeneous
linear equation, there are three possibilities, depending on the number of
distinct, and the type of, eigenvalues the coefficient matrix A has.
A related note from linear algebra: eigenvectors that correspond to different
eigenvalues are always linearly independent of each other. Consequently,
if r1 and r2 are two different eigenvalues, then
their respective eigenvectors k1 and k2, and therefore the corresponding
solutions, are always linearly independent.
Suppose the coefficient matrix A has two distinct real eigenvalues r1 and r2,
with respective eigenvectors k1 and k2. Then the 2 × 2 system x′ = Ax
has a general solution
x = C1 k1 e^{r1 t} + C2 k2 e^{r2 t}.
Example: $\mathbf{x}' = \begin{pmatrix} 2 & 3 \\ 4 & 3 \end{pmatrix}\mathbf{x}$.

We have already found that the coefficient matrix has eigenvalues
r = −1 and 6, each with a corresponding eigenvector

$$\mathbf{k}_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \mathbf{k}_2 = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.$$

Therefore, a general solution of this system of differential equations is

$$\mathbf{x} = C_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-t} + C_2 \begin{pmatrix} 3 \\ 4 \end{pmatrix} e^{6t}.$$
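As a quick numerical check (not part of the text's method, just NumPy arithmetic), we can confirm the two eigenpairs and verify that each mode k e^{rt} satisfies x′ = Ax at a sample time.

```python
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 3.0]])
k1, r1 = np.array([1.0, -1.0]), -1.0
k2, r2 = np.array([3.0, 4.0]), 6.0

# each eigenpair should satisfy A k = r k ...
ok_pairs = np.allclose(A @ k1, r1 * k1) and np.allclose(A @ k2, r2 * k2)

# ... and each mode x = k e^{rt}, with derivative x' = r k e^{rt},
# should satisfy x' = A x (checked here at t = 0.7)
t = 0.7
ok_modes = all(
    np.allclose(r * k * np.exp(r * t), A @ (k * np.exp(r * t)))
    for k, r in [(k1, r1), (k2, r2)]
)
print(ok_pairs, ok_modes)
```

By linearity, any combination C1 k1 e^{−t} + C2 k2 e^{6t} then satisfies the system as well.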
For r = −1, the system is

$$(A - rI)\mathbf{x} = (A + I)\mathbf{x} = \begin{pmatrix} 3+1 & -2 \\ 2 & -2+1 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 4 & -2 \\ 2 & -1 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix},$$
For r = 2, the system is

$$(A - rI)\mathbf{x} = (A - 2I)\mathbf{x} = \begin{pmatrix} 3-2 & -2 \\ 2 & -2-2 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 1 & -2 \\ 2 & -4 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k}_2 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$
$$\mathbf{x} = C_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + C_2 \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t}.$$

That is,
C1 + 2C2 = 1
2C1 + C2 = −1,
which gives C1 = −1 and C2 = 1. Therefore,

$$\mathbf{x} = -\begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t} = \begin{pmatrix} -e^{-t} + 2e^{2t} \\ -2e^{-t} + e^{2t} \end{pmatrix}.$$
A little detail: Similar to what we have done before, first there was the
complex-valued general solution in the form
x = C1 k1 e^{(λ + μi) t} + C2 k2 e^{(λ − μi) t}.
We “filter out” the imaginary parts by carefully choosing two sets of
coefficients to obtain two corresponding real-valued solutions that are also
linearly independent:
u = e^{λt} ( a cos(μt) − b sin(μt) )
v = e^{λt} ( a sin(μt) + b cos(μt) )
The real-valued general solution above is just x = C1 u + C2 v. In particular,
it might be useful to know how u and v could be derived by expanding the
following complex-valued expression (the front half of the complex-valued
general solution):

$$\mathbf{k}_1 e^{(\lambda + \mu i)t} = (\mathbf{a} + \mathbf{b}i)\, e^{\lambda t}\left(\cos(\mu t) + i \sin(\mu t)\right).$$

Then, u is just the real part of this complex-valued function, and v is its
imaginary part.
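The real/imaginary split can be checked directly with complex arithmetic. This is a small illustrative sketch (the helper name `real_solutions` is our own), using the eigenpair r = 2 + 3i, k = (−1 + i, 1) that appears in an example later in this section.

```python
import numpy as np

def real_solutions(k, lam, mu, t):
    """u(t), v(t): real and imaginary parts of k e^{(lam + i mu) t}."""
    z = k * np.exp((lam + 1j * mu) * t)
    return z.real, z.imag

# eigenpair r = 2 + 3i, k = (-1 + i, 1) = a + b i with a = (-1, 1), b = (1, 0)
k = np.array([-1 + 1j, 1 + 0j])
u0, v0 = real_solutions(k, 2.0, 3.0, 0.0)
print(u0, v0)   # at t = 0: u = a = (-1, 1) and v = b = (1, 0)
```

At any t the results agree with the formulas u = e^{λt}(a cos μt − b sin μt) and v = e^{λt}(a sin μt + b cos μt).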
Take the first (the one with positive imaginary part) eigenvalue r = i,
and find one of its eigenvectors:
$$(A - rI)\mathbf{x} = \begin{pmatrix} 2-i & -5 \\ 1 & -2-i \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k} = \begin{pmatrix} 5 \\ 2-i \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \end{pmatrix} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} i = \mathbf{a} + \mathbf{b}i$$
$$(A - rI)\mathbf{x} = \begin{pmatrix} -1-(2+3i) & -6 \\ 3 & 5-(2+3i) \end{pmatrix}\mathbf{x} = \begin{pmatrix} -3-3i & -6 \\ 3 & 3-3i \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

$$\mathbf{k} = \begin{pmatrix} -1+i \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} i = \mathbf{a} + \mathbf{b}i$$

$$\mathbf{x} = C_1 e^{2t}\left(\begin{pmatrix} -1 \\ 1 \end{pmatrix}\cos(3t) - \begin{pmatrix} 1 \\ 0 \end{pmatrix}\sin(3t)\right) + C_2 e^{2t}\left(\begin{pmatrix} -1 \\ 1 \end{pmatrix}\sin(3t) + \begin{pmatrix} 1 \\ 0 \end{pmatrix}\cos(3t)\right)$$

$$= C_1 e^{2t}\begin{pmatrix} -\cos(3t) - \sin(3t) \\ \cos(3t) \end{pmatrix} + C_2 e^{2t}\begin{pmatrix} \cos(3t) - \sin(3t) \\ \sin(3t) \end{pmatrix}$$

Applying the initial condition:

$$\mathbf{x}(0) = C_1 e^{0}\left(\begin{pmatrix} -1 \\ 1 \end{pmatrix}\cos(0) - \begin{pmatrix} 1 \\ 0 \end{pmatrix}\sin(0)\right) + C_2 e^{0}\left(\begin{pmatrix} -1 \\ 1 \end{pmatrix}\sin(0) + \begin{pmatrix} 1 \\ 0 \end{pmatrix}\cos(0)\right)$$

$$= C_1 \begin{pmatrix} -1 \\ 1 \end{pmatrix} + C_2 \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -C_1 + C_2 \\ C_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}$$
Suppose the coefficient matrix A has a repeated real eigenvalue r; there are
2 sub-cases.
(i) If r has two linearly independent eigenvectors k1 and k2, then the 2 × 2
system x′ = Ax has a general solution
x = C1 k1 e^{rt} + C2 k2 e^{rt}.
Note: For 2 × 2 matrices, this possibility only occurs when the coefficient
matrix A is a scalar multiple of the identity matrix. That is, A has the form

$$\alpha \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix}, \quad \text{for any constant } \alpha.$$
Example: $\mathbf{x}' = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}\mathbf{x}$.

$$\mathbf{x} = C_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{2t} + C_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{2t} = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} e^{2t}.$$
In general, for the system

$$\mathbf{x}' = \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix}\mathbf{x},$$

a general solution is

$$\mathbf{x} = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} e^{\alpha t}.$$
Now, why haven't we seen an analogous solution during our study of second
order linear equations? (Recall that every general solution of such an
equation with a repeated real root r of the characteristic equation necessarily has
a te^{rt}-term in it, which is absent here.) The reason is simple: this
system is not equivalent to any single second order linear equation. Writing
out the two equations within the system explicitly, we can see that they are
actually two independent first order linear equations, completely
unrelated to each other:
x1 ′ = α x1
x2 ′ = α x2
Each equation can be easily solved, either as a first order linear equation or
as a separable equation. Their solutions are, respectively, x1(t) = C1 e^{αt}
and x2(t) = C2 e^{αt}. Write them back into a vector function and we obtain
the same general solution, namely

$$\mathbf{x} = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} e^{\alpha t}.$$
(ii) If r has only one linearly independent eigenvector k, then a general
solution is
x = C1 k e^{rt} + C2 (k t e^{rt} + η e^{rt}),
where the vector η satisfies
(A − r I) η = k.
Comment: Without getting too deeply into a linear algebra discussion, the
required second vector η is a generalized eigenvector (of order 2) of A. It is
a nonzero vector that satisfies the following system of linear equations
(A − r I )2 η = 0 .
That is, (A − r I)[(A − r I) η] = 0, so the vector v = (A − r I) η is either
the zero vector or an eigenvector of A. Our choice is the eigenvector we
have already found:
(A − r I) η = v = k.
For r = −3, the system is

$$(A - rI)\mathbf{x} = \begin{pmatrix} 1+3 & -4 \\ 4 & -7+3 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 4 & -4 \\ 4 & -4 \end{pmatrix}\mathbf{x} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

Both equations of the system reduce to 4x1 − 4x2 = 0, giving the same
relation x1 = x2. Hence, there is only one linearly independent
eigenvector:

$$\mathbf{k} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Next, solve for η:

$$\begin{pmatrix} 4 & -4 \\ 4 & -4 \end{pmatrix}\boldsymbol{\eta} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

It has solutions of the form

$$\boldsymbol{\eta} = \begin{pmatrix} \tfrac{1}{4} + \eta_2 \\ \eta_2 \end{pmatrix}.$$

Choosing η2 = 0, we get

$$\boldsymbol{\eta} = \begin{pmatrix} 1/4 \\ 0 \end{pmatrix}.$$
Therefore, a general solution is

$$\mathbf{x} = C_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-3t} + C_2 \left( \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{-3t} + \begin{pmatrix} 1/4 \\ 0 \end{pmatrix} e^{-3t} \right)$$
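The generalized eigenvector can also be obtained numerically. This sketch (our own, not part of the text's method) uses NumPy's least-squares solver, which returns the minimum-norm solution (1/8, −1/8); that differs from the text's choice by a multiple of the eigenvector k, and any such solution works.

```python
import numpy as np

A = np.array([[1.0, -4.0], [4.0, -7.0]])
r = -3.0
k = np.array([1.0, 1.0])
M = A - r * np.eye(2)

# M = A - rI is singular, so pick one solution of M eta = k by least squares
eta, *_ = np.linalg.lstsq(M, k, rcond=None)
print(eta)   # minimum-norm choice; differs from (1/4, 0) by a multiple of k

# eta is a generalized eigenvector of order 2: M eta = k and M^2 eta = 0
print(np.allclose(M @ eta, k), np.allclose(M @ M @ eta, np.zeros(2)))
```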
Summary: Given a 2 × 2 system
x′ = Ax.
When A has two distinct real eigenvalues r1 and r2, with eigenvectors k1 and k2 –
x = C1 k1 e^{r1 t} + C2 k2 e^{r2 t}.
When A has a repeated real eigenvalue r:
(i) When two linearly independent eigenvectors k1 and k2 exist –
x = C1 k1 e^{rt} + C2 k2 e^{rt}.
(ii) When only one linearly independent eigenvector k exists –
x = C1 k e^{rt} + C2 (k t e^{rt} + η e^{rt}).
Note: Solve the system (A − r I) η = k to find the vector η.
1. Rewrite the following second order linear equation into a system of two
equations.
y″ + 5y′ − 6y = 0
Then: (a) show that both the given equation and the new system have the
same characteristic equation. (b) Find the system’s general solution.
4. $\mathbf{x}' = \begin{pmatrix} 8 & -4 \\ 1 & 4 \end{pmatrix}\mathbf{x}$.  5. $\mathbf{x}' = \begin{pmatrix} -3 & 2 \\ -1 & -5 \end{pmatrix}\mathbf{x}$.
6. $\mathbf{x}' = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\mathbf{x}$.  7. $\mathbf{x}' = \begin{pmatrix} 2 & -5 \\ 1 & -4 \end{pmatrix}\mathbf{x}$.
9. $\mathbf{x}' = \begin{pmatrix} -4 & 0 \\ 0 & -4 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(3) = \begin{pmatrix} 5 \\ -2 \end{pmatrix}$.
10. $\mathbf{x}' = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(1) = \begin{pmatrix} 0 \\ 3 \end{pmatrix}$.
11. $\mathbf{x}' = \begin{pmatrix} 6 & 8 \\ 2 & 6 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(0) = \begin{pmatrix} 8 \\ 0 \end{pmatrix}$.
12. $\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(0) = \begin{pmatrix} 6 \\ 4 \end{pmatrix}$.
14. $\mathbf{x}' = \begin{pmatrix} 6 & 3 \\ -2 & 1 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(20) = \begin{pmatrix} -1 \\ -1 \end{pmatrix}$.
15. $\mathbf{x}' = \begin{pmatrix} 3 & 5 \\ -2 & 1 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(243) = \begin{pmatrix} 5 \\ 14 \end{pmatrix}$.
16. For each of the initial value problems #8 through #15, how does the
solution behave as t → ∞?
17. Find the general solution of the system below, and determine the
possible values of α and β such that the initial value problem has a solution
that tends to the zero vector as t → ∞.
$$\mathbf{x}' = \begin{pmatrix} -5 & -1 \\ 7 & 3 \end{pmatrix}\mathbf{x}, \qquad \mathbf{x}(0) = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.$$
5. $\mathbf{x} = C_1 e^{-4t}\begin{pmatrix} \cos t - \sin t \\ -\cos t \end{pmatrix} + C_2 e^{-4t}\begin{pmatrix} \cos t + \sin t \\ -\sin t \end{pmatrix}$
6. $\mathbf{x} = C_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{t} + C_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{3t}$
7. $\mathbf{x} = C_1 \begin{pmatrix} 5 \\ 1 \end{pmatrix} e^{t} + C_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-3t}$
8. $\mathbf{x} = e^{-t}\begin{pmatrix} -4\cos t - 2\sin t \\ 2\cos t - 4\sin t \end{pmatrix}$  9. $\mathbf{x} = \begin{pmatrix} 5e^{-4t+12} \\ -2e^{-4t+12} \end{pmatrix}$
10. $\mathbf{x} = \begin{pmatrix} 2e^{5t-5} - 2e^{-t+1} \\ 2e^{5t-5} + e^{-t+1} \end{pmatrix}$  11. $\mathbf{x} = \begin{pmatrix} 4e^{2t} + 4e^{10t} \\ -2e^{2t} + 2e^{10t} \end{pmatrix}$
12. $\mathbf{x} = \begin{pmatrix} 5 + e^{2t} \\ 5 - e^{2t} \end{pmatrix}$  13. $\mathbf{x} = \begin{pmatrix} 9 - 6e^{-6t+330} \\ 3 + 2e^{-6t+330} \end{pmatrix}$
14. $\mathbf{x} = \begin{pmatrix} 5e^{3t-60} - 6e^{4t-80} \\ -5e^{3t-60} + 4e^{4t-80} \end{pmatrix}$
15. $\mathbf{x} = e^{2t-486}\begin{pmatrix} 5\cos(3t-729) + 25\sin(3t-729) \\ 14\cos(3t-729) - 8\sin(3t-729) \end{pmatrix}$
18. $\mathbf{x} = \begin{pmatrix} a \\ b \end{pmatrix} e^{\alpha(t-t_0)}$
19. $\mathbf{x} = \begin{pmatrix} a\, e^{\alpha(t-t_0)} \\ b\, e^{\beta(t-t_0)} \end{pmatrix}$
Example: $\mathbf{x}' = \begin{pmatrix} 1 & -4 \\ 4 & -7 \end{pmatrix}\mathbf{x}$, $\mathbf{x}(0) = \begin{pmatrix} -2 \\ 1 \end{pmatrix}$.
Before we start, let us rewrite the problem into the explicit form of
individual linear equations:
x1 ′ = x1 − 4 x2 x1(0) = −2
x2 ′ = 4 x1 − 7 x2 x2(0) = 1
[(s − 1)(s + 7) − (−16)] L{x2} = s − 9
(s² + 6s + 9) L{x2} = s − 9
Therefore,

$$\mathcal{L}\{x_2\} = \frac{s-9}{(s+3)^2} = \frac{1}{s+3} - \frac{12}{(s+3)^2}$$

$$\mathcal{L}\{x_1\} = \frac{-2s-18}{(s+3)^2} = \frac{-2}{s+3} - \frac{12}{(s+3)^2}$$
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -12te^{-3t} - 2e^{-3t} \\ -12te^{-3t} + e^{-3t} \end{pmatrix} = \begin{pmatrix} -12t - 2 \\ -12t + 1 \end{pmatrix} e^{-3t}.$$
This agrees with the solution we have found earlier using the
eigenvector method.
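The agreement can also be confirmed numerically. This small sketch (our own check, not part of the text's method) verifies that the closed-form solution satisfies both the initial condition and x′ = Ax at several sample times, using the product rule for the exact derivative.

```python
import numpy as np

A = np.array([[1.0, -4.0], [4.0, -7.0]])

def x(t):
    # the solution found above: x = (-12t - 2, -12t + 1)^T e^{-3t}
    return np.array([-12.0 * t - 2.0, -12.0 * t + 1.0]) * np.exp(-3.0 * t)

def xprime(t):
    # product rule: d/dt [p(t) e^{-3t}] = (p'(t) - 3 p(t)) e^{-3t}
    p = np.array([-12.0 * t - 2.0, -12.0 * t + 1.0])
    return (np.array([-12.0, -12.0]) - 3.0 * p) * np.exp(-3.0 * t)

ic_ok = np.allclose(x(0.0), [-2.0, 1.0])
ode_ok = all(np.allclose(xprime(t), A @ x(t)) for t in [0.0, 0.5, 1.3])
print(ic_ok, ode_ok)
```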
The method above can also, without any modification, be used to solve
nonhomogeneous systems of linear differential equations. It gives us a way
to solve nonhomogeneous linear systems without having to learn a separate
technique. In addition, it also allows us to tackle linear systems with
discontinuous forcing functions, if necessary.
Example: $\mathbf{x}' = \begin{pmatrix} -4 & -2 \\ 3 & 1 \end{pmatrix}\mathbf{x} + \begin{pmatrix} -t \\ 2t-1 \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} 3 \\ -5 \end{pmatrix}$.
Rewrite the problem explicitly and transform:

$$s\mathcal{L}\{x_1\} - 3 = -4\mathcal{L}\{x_1\} - 2\mathcal{L}\{x_2\} - \frac{1}{s^2} \qquad (5)$$

$$s\mathcal{L}\{x_2\} + 5 = 3\mathcal{L}\{x_1\} + \mathcal{L}\{x_2\} + \frac{2}{s^2} - \frac{1}{s} \qquad (6)$$
Simplify:

$$(s + 4)\mathcal{L}\{x_1\} + 2\mathcal{L}\{x_2\} = 3 - \frac{1}{s^2} = \frac{3s^2 - 1}{s^2} \qquad (5^*)$$

$$-3\mathcal{L}\{x_1\} + (s - 1)\mathcal{L}\{x_2\} = \frac{2}{s^2} - \frac{1}{s} - 5 = \frac{-5s^2 - s + 2}{s^2} \qquad (6^*)$$
Multiplying eq. (5*) by s − 1 and eq. (6*) by 2, then subtracting the latter
from the former, we eliminate L{x2} and find L{x1}:

$$(s^2 + 3s + 2)\mathcal{L}\{x_1\} = \frac{3s^3 + 7s^2 + s - 3}{s^2}$$
Therefore,

$$\mathcal{L}\{x_1\} = \frac{3s^3 + 7s^2 + s - 3}{s^2(s+1)(s+2)} = \frac{1}{4(s+2)} - \frac{3}{2s^2} + \frac{11}{4s}$$

$$\Rightarrow \quad x_1 = \frac{1}{4}e^{-2t} - \frac{3}{2}t + \frac{11}{4}$$
Similarly,

$$(s^2 + 3s + 2)\mathcal{L}\{x_2\} = \frac{-5s^3 - 12s^2 - 2s + 5}{s^2}$$

Therefore,

$$\mathcal{L}\{x_2\} = \frac{-5s^3 - 12s^2 - 2s + 5}{s^2(s+1)(s+2)} = \frac{-1}{4(s+2)} + \frac{5}{2s^2} - \frac{19}{4s}$$

$$\Rightarrow \quad x_2 = -\frac{1}{4}e^{-2t} + \frac{5}{2}t - \frac{19}{4}$$
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{4}e^{-2t} - \tfrac{3}{2}t + \tfrac{11}{4} \\[4pt] -\tfrac{1}{4}e^{-2t} + \tfrac{5}{2}t - \tfrac{19}{4} \end{pmatrix} = \frac{1}{4}\begin{pmatrix} e^{-2t} - 6t + 11 \\ -e^{-2t} + 10t - 19 \end{pmatrix}.$$
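As with the previous example, the result can be verified numerically. This sketch (our own check) confirms that the solution satisfies the initial condition and the nonhomogeneous system x′ = Ax + g at several sample times.

```python
import numpy as np

A = np.array([[-4.0, -2.0], [3.0, 1.0]])

def g(t):
    # the forcing vector g(t) = (-t, 2t - 1)^T
    return np.array([-t, 2.0 * t - 1.0])

def x(t):
    # the solution found above: x = (1/4)(e^{-2t} - 6t + 11, -e^{-2t} + 10t - 19)^T
    e = np.exp(-2.0 * t)
    return 0.25 * np.array([e - 6.0 * t + 11.0, -e + 10.0 * t - 19.0])

def xprime(t):
    # exact derivative of x(t)
    e = np.exp(-2.0 * t)
    return 0.25 * np.array([-2.0 * e - 6.0, 2.0 * e + 10.0])

ic_ok = np.allclose(x(0.0), [3.0, -5.0])
ode_ok = all(np.allclose(xprime(t), A @ x(t) + g(t)) for t in [0.0, 0.4, 2.0])
print(ic_ok, ode_ok)
```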
Exercise D-1.4:
2. $\mathbf{x}' = \begin{pmatrix} 2 & 1 \\ -1 & 2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} -1 \\ 8 \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
3. $\mathbf{x}' = \begin{pmatrix} 3 & 0 \\ 5 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} -4\sin 2t \\ \cos 2t \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$.
4. $\mathbf{x}' = \begin{pmatrix} 2 & -8 \\ -1 & 4 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 3t+1 \\ -6t-2 \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} 0 \\ 4 \end{pmatrix}$.