Paweł Polak (Stevens Institute of Technology) MA 232: Linear Algebra - Lec. 7
Content:
Finding Eigenvalues and Eigenvectors
Difference Equations
Stability of 2 × 2 Matrices
Example
Let A = [4 −2; −1 3]. Find A^100.

Suppose P^−1 A P = D for some invertible P and diagonal D. Then
P(P^−1 A P)P^−1 = P D P^−1
(P P^−1) A (P P^−1) = P D P^−1
I A I = P D P^−1
A = P D P^−1,
and so
Example (continued)
Now,
D^100 = [2 0; 0 5]^100 = [2^100 0; 0 5^100].
Therefore,
A^100 = P D^100 P^−1
= [1 −2; 1 1] [2^100 0; 0 5^100] · (1/3)[1 2; −1 1].
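A quick NumPy check of this diagonalization (not part of the original slides; P and D are the matrices found above):

```python
import numpy as np

# A and its diagonalization from the slide: the columns of P are the
# eigenvectors for lambda = 2 and lambda = 5.
A = np.array([[4.0, -2.0], [-1.0, 3.0]])
P = np.array([[1.0, -2.0], [1.0, 1.0]])
D = np.diag([2.0, 5.0])

# Check A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^100 = P D^100 P^{-1}; powers of a diagonal matrix are entrywise powers.
A100 = P @ np.diag([2.0**100, 5.0**100]) @ np.linalg.inv(P)
```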
Theorem
If A is an n × n matrix and P is an invertible n × n matrix such that A = P D P^−1, then A^k = P D^k P^−1 for each k = 1, 2, 3, . . .

Questions
When is it possible to diagonalize a matrix?
How do we find a diagonalizing matrix?

Answer
Eigenvalues and eigenvectors.
Eigenvalues and Eigenvectors

Definition
Let A be an n × n matrix, λ a real number, and u ≠ 0 an n-vector. If Au = λu, then λ is an eigenvalue of A, and u is an eigenvector of A corresponding to λ, or a λ-eigenvector.

Example
Let A = [1 2; 1 2] and u = [1; 1]. Then
Au = [1 2; 1 2][1; 1] = [3; 3] = 3[1; 1] = 3u.
This means that 3 is an eigenvalue of A, and [1; 1] is an eigenvector of A corresponding to 3 (or a 3-eigenvector of A).
Examples

Example
Projection matrices, P = A(A^T A)^−1 A^T, have all eigenvalues equal to 0 or 1.
If u ∈ C(P), then Pu = u, so λ = 1.
If u ∈ N(P), then Pu = 0u, so λ = 0.
By the Fundamental Theorem of Linear Algebra, dim(C(P)) + dim(N(P)) = m. Hence, there are no other eigenvalues.
The reflection matrix R = 2P − I has eigenvalues 1 and −1, and its eigenvectors are the same as those of the matrix P.
(Intuition: every reflection matrix can be written as R = 2P − I, where P is the projection matrix onto the “mirror”, i.e., reflection = 2(projection) − I. So λ_R = 2λ_P − 1.)
What about rotation matrices? They are not so easy and intuitive. Example: let Q be a 2 × 2 rotation matrix which rotates vectors by 90°. Then Q^2 u = −u, so λ^2 = −1, i.e., the eigenvalues are the complex numbers i and −i.
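The 90° rotation claim can be verified numerically (a small check, not part of the slides):

```python
import numpy as np

# Rotation by 90 degrees: Q^2 = -I, so the eigenvalues satisfy lambda^2 = -1.
Q = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(Q @ Q, -np.eye(2))

eigvals = np.linalg.eigvals(Q)   # the complex pair i and -i
```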
What do an eigenvalue and eigenvector tell us about a matrix?
Suppose that A is an n × n matrix, with eigenvalue λ and corresponding eigenvector u. Then u ≠ 0 is an n-vector, λ ∈ R, and Au = λu.
It follows that
Au − λu = 0
Au − λI u = 0
(A − λI)u = 0.
Since this homogeneous system has the nontrivial solution u, the matrix A − λI is not invertible, so
det(A − λI) = 0.
The Characteristic Polynomial

Definition
The characteristic polynomial of an n × n matrix A is defined to be
cA(x) = det(A − xI).

Example
The characteristic polynomial of A = [4 −2; −1 3] is
cA(x) = det(A − xI)
= det([4 −2; −1 3] − [x 0; 0 x])
= det [4−x −2; −1 3−x]
= (4 − x)(3 − x) − 2
= x^2 − 7x + 10.
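NumPy can produce these coefficients directly; `np.poly` returns the coefficients of det(xI − A), which for this 2 × 2 matrix equals cA(x) (a check, not part of the slides):

```python
import numpy as np

A = np.array([[4.0, -2.0], [-1.0, 3.0]])
coeffs = np.poly(A)   # coefficients of det(xI - A), highest power first
# expect x^2 - 7x + 10, whose roots 2 and 5 are the eigenvalues
```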
Finding Eigenvalues and Eigenvectors

Theorem
Let A be an n × n matrix.
1. The eigenvalues of A are the roots of cA(x).
2. The λ-eigenvectors u are the nontrivial solutions to (A − λI)u = 0.

Procedure:
Let A be an n × n matrix.
Eigenvalues: find λ by solving the equation cA(x) = det(A − xI) = 0.
Eigenvectors: for each eigenvalue λ, find the nontrivial solutions u of (A − λI)u = 0.
Example (continued)
To find the 2-eigenvectors, solve the system (A − 2I)u = 0 in the standard way, by putting the augmented matrix of the system in reduced row-echelon form:
[2 −2 0; −1 1 0] → [1 −1 0; 0 0 0].
Therefore the 2-eigenvectors of A are the vectors u = s[1; 1] where s ∈ R and s ≠ 0.
Example (continued)
To find the 5-eigenvectors of A = [4 −2; −1 3], solve the homogeneous system (A − 5I)u = 0, with coefficient matrix
A − 5I = [4 −2; −1 3] − 5[1 0; 0 1] = [−1 −2; −1 −2].
[−1 −2 0; −1 −2 0] → [1 2 0; 0 0 0].
Therefore the 5-eigenvectors of A are the vectors
u = [−2s; s] = s[−2; 1] where s ∈ R and s ≠ 0.
Basic Eigenvectors

Definition
A basic eigenvector of an n × n matrix A is any nonzero multiple of a basic solution to (A − λI)u = 0, where λ is an eigenvalue of A.

Basic eigenvectors of A = [4 −2; −1 3]:
u = [1; 1] is a basic eigenvector of A corresponding to the eigenvalue 2. u = [−2; 1] is a basic eigenvector of A corresponding to the eigenvalue 5.
Eigenvalues with multiplicity greater than one

Problem
Find the characteristic polynomial and eigenvalues of the matrix
A = [4 1 2; 0 3 −2; 0 −1 2].

Solution
cA(x) = det(A − xI) = det [4−x 1 2; 0 3−x −2; 0 −1 2−x]
= (4 − x)[(3 − x)(2 − x) − 2]
= (4 − x)(x^2 − 5x + 4)
= −(x − 4)(x − 4)(x − 1)
= −(x − 4)^2 (x − 1).
Therefore, A has eigenvalues 1 and 4, with 4 being an eigenvalue of multiplicity two.
Definition
The multiplicity of an eigenvalue λ of A is the number of times λ occurs as a root of cA(x).

Example
We have seen that A = [4 1 2; 0 3 −2; 0 −1 2] has eigenvalues λ1 = 1 and λ2 = 4 of multiplicity two. To find an eigenvector of A corresponding to λ1 = 1, solve the homogeneous system (A − I)u = 0:
[3 1 2 0; 0 2 −2 0; 0 −1 1 0] → [1 0 1 0; 0 1 −1 0; 0 0 0 0].
The general solution is u = [−s; s; s] where s ∈ R. We get a basic eigenvector by choosing s = 1 (in fact, any nonzero value of s gives us a basic eigenvector).
Example (continued)
Therefore, u = [−1; 1; 1] is a (basic) eigenvector of A corresponding to λ1 = 1.
To find an eigenvector of A = [4 1 2; 0 3 −2; 0 −1 2] corresponding to λ2 = 4, solve the system (A − 4I)u = 0:
[0 1 2 0; 0 −1 −2 0; 0 −1 −2 0] → [0 1 2 0; 0 0 0 0; 0 0 0 0].
The general solution is u = [s; −2t; t] where s, t ∈ R.
Example (continued)
In this case, the general solution has two parameters, which leads to two basic eigenvectors that are not scalar multiples of each other: since
u = [s; −2t; t] = s[1; 0; 0] + t[0; −2; 1] where s, t ∈ R,
the basic eigenvectors corresponding to λ2 = 4 are [1; 0; 0] and [0; −2; 1].
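A NumPy sanity check of the repeated eigenvalue and its two basic eigenvectors (not part of the slides):

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0], [0.0, 3.0, -2.0], [0.0, -1.0, 2.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)   # expect 1, 4, 4

# The two basic 4-eigenvectors found above span the 4-eigenspace.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, -2.0, 1.0])
```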
Problem
For
A = [3 −4 2; 1 −2 2; 1 −5 5],
find cA(x), the eigenvalues of A, and basic eigenvector(s) for each eigenvalue.

Solution
Subtract row 3 from row 2, then add column 2 to column 3 (neither operation changes the determinant):
det(A − xI) = det [3−x −4 2; 1 −2−x 2; 1 −5 5−x] = det [3−x −4 2; 0 3−x x−3; 1 −5 5−x]
= det [3−x −4 −2; 0 3−x 0; 1 −5 −x] = (3 − x) det [3−x −2; 1 −x]
= (3 − x)(x^2 − 3x + 2) = (3 − x)(x − 2)(x − 1),
so the eigenvalues are λ1 = 3, λ2 = 2, and λ3 = 1.
To find a basic eigenvector corresponding to λ1 = 3, solve (A − 3I)u = 0:
[0 −4 2 0; 1 −5 2 0; 1 −5 2 0] → ··· → [1 0 −1/2 0; 0 1 −1/2 0; 0 0 0 0]
Thus u = [t/2; t/2; t] = t[1/2; 1/2; 1], t ∈ R. Choosing t = 2 gives us
u1 = [1; 1; 2]
as an eigenvector corresponding to λ1 = 3.
Solution (continued)
Finally, to find a basic eigenvector corresponding to λ3 = 1, solve (A − I)u = 0:
[2 −4 2 0; 1 −3 2 0; 1 −5 4 0] → ··· → [1 0 −1 0; 0 1 −1 0; 0 0 0 0]
Thus u = [r; r; r] = r[1; 1; 1], r ∈ R. Choosing r = 1 gives us
u3 = [1; 1; 1]
as an eigenvector corresponding to λ3 = 1.
Solution (continued)
Summarizing, for A = [3 −4 2; 1 −2 2; 1 −5 5], we have found three eigenvalues, and a corresponding eigenvector for each, as follows:
λ1 = 3 and X1 = [1; 1; 2]; λ2 = 2 and X2 = [2; 1; 1]; λ3 = 1 and X3 = [1; 1; 1].
An easy way to check your work: compute Au1 and see if you get 3u1.
Au1 = [3 −4 2; 1 −2 2; 1 −5 5][1; 1; 2] = [3; 3; 6] = 3[1; 1; 2] = 3u1.
You should check that Au2 = 2u2 and that Au3 = 1u3 = u3.
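The same check can be automated (a small sketch, not part of the slides):

```python
import numpy as np

A = np.array([[3.0, -4.0, 2.0], [1.0, -2.0, 2.0], [1.0, -5.0, 5.0]])
pairs = [(3.0, np.array([1.0, 1.0, 2.0])),   # lambda1, X1
         (2.0, np.array([2.0, 1.0, 1.0])),   # lambda2, X2
         (1.0, np.array([1.0, 1.0, 1.0]))]   # lambda3, X3

# Verify A x = lambda x for each pair found in the solution.
checks = [bool(np.allclose(A @ x, lam * x)) for lam, x in pairs]
```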
Eigenvalues and eigenvectors (review)
Let A be an n × n matrix.
1. Compute the characteristic polynomial of A, cA(x) = det(A − xI).
2. The eigenvalues of A are the roots of cA(x).
3. For each eigenvalue λ, the λ-eigenvectors are the nontrivial solutions of (A − λI)u = 0.
Example (Triangular Matrices)
Consider the matrix
A = [2 −1 0 3; 0 5 1 −2; 0 0 0 7; 0 0 0 −4].
The characteristic polynomial of A is
cA(x) = det(A − xI) = det [2−x −1 0 3; 0 5−x 1 −2; 0 0 −x 7; 0 0 0 −4−x] = −(2−x)(5−x)x(−x−4).
The determinant of a triangular matrix is the product of its diagonal entries, so the eigenvalues of A are the diagonal entries 2, 5, 0, and −4.
Diagonalizing Matrix

Theorem
Suppose the n × n matrix A has n linearly independent eigenvectors u1, . . . , un. Put them into the columns of an eigenvector matrix U. Then U^−1 A U is the eigenvalue matrix Λ:
U^−1 A U = Λ = diag(λ1, . . . , λn)

Proof.
For all eigenvectors uk and the corresponding eigenvalues λk we have Auk = λk uk. If we put all the eigenvectors as columns of the matrix U, then
AU = UΛ,
where Λ is a diagonal matrix with the corresponding eigenvalues on the diagonal. Since the eigenvectors are assumed to be linearly independent, U^−1 exists, and U^−1 A U = Λ.
Diagonalizing Matrix

Remark
If A = UΛU^−1, then A^2 = UΛ^2 U^−1.

More generally,

Remark
A^k = UΛ^k U^−1 for every integer k ≥ 1 (and for every integer k if A is invertible).
Similar Matrices

Definition
Let A and B be n × n matrices. Suppose there exists an invertible matrix P such that
A = P^−1 B P.
Then A and B are called similar matrices.

Theorem
Let A and B be similar matrices, so that A = P^−1 B P where A, B are n × n matrices and P is invertible. Then A and B have the same eigenvalues.

Proof
Assume Bu = λu with u ≠ 0. Let v = P^−1 u; then v ≠ 0 since P^−1 is invertible, and
Av = (P^−1 B P)P^−1 u = P^−1 Bu = P^−1 λu = λv.
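The theorem is easy to test numerically on random matrices (a sketch with an arbitrary seed, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # generically invertible
A = np.linalg.inv(P) @ B @ P         # A = P^{-1} B P, similar to B

# Similar matrices share eigenvalues (compare as sorted complex lists).
same = np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))
```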
Using Similar and Elementary Matrices

Problem
Find the eigenvalues for the matrix
A = [33 105 105; 10 28 30; −20 −60 −62]

Solution
We will use elementary matrices to simplify A before finding the eigenvalues. Left multiply A by E, and right multiply by the inverse of E:
[1 0 0; 0 1 0; 0 2 1] [33 105 105; 10 28 30; −20 −60 −62] [1 0 0; 0 1 0; 0 −2 1] = [33 −105 105; 10 −32 30; 0 0 −2]
Notice that the resulting matrix and A are similar matrices (with E playing the role of P), so they have the same eigenvalues.

Solution (continued)
We do this step again, on the resulting matrix above:
[1 −3 0; 0 1 0; 0 0 1] [33 −105 105; 10 −32 30; 0 0 −2] [1 3 0; 0 1 0; 0 0 1] = [3 0 15; 10 −2 30; 0 0 −2] = B
Nondiagonalizable Matrices

Remark
Write GM for the geometric multiplicity of an eigenvalue λ (the number of independent λ-eigenvectors) and AM for its algebraic multiplicity (the number of times λ occurs as a root of cA(x)).
(i) We always have that GM ≤ AM.
(ii) Whenever GM < AM, the corresponding matrix A is not diagonalizable.
Nondiagonalizable Matrices

Example
A = [0 1; 0 0] has det(A − λI) = λ^2,
so λ = 0 is an eigenvalue of multiplicity 2. But there is only one eigenvector, (1, 0) (check it!).
These three matrices all have the same shortage of eigenvectors, and their repeated eigenvalue is λ = 5:
A = [5 1; 0 5], A = [6 −1; 1 4], A = [7 2; −2 3]
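The eigenvector shortage can be measured as the rank deficiency of A − 5I (a check, not from the slides):

```python
import numpy as np

# Each matrix has the repeated eigenvalue 5, but A - 5I has rank 1, so the
# 5-eigenspace is one-dimensional: GM = 1 < 2 = AM.
matrices = [np.array([[5.0, 1.0], [0.0, 5.0]]),
            np.array([[6.0, -1.0], [1.0, 4.0]]),
            np.array([[7.0, 2.0], [-2.0, 3.0]])]
gms = [2 - np.linalg.matrix_rank(M - 5 * np.eye(2)) for M in matrices]
```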
Fibonacci Numbers

Definition
The Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, . . . comes from
F_{k+2} = F_{k+1} + F_k.
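The body of this slide is missing from the extraction; the standard connection to this lecture's theme (an assumption here, not taken from the slides) is that powers of the matrix M = [1 1; 1 0] generate Fibonacci numbers, since [F_{k+2}; F_{k+1}] = M [F_{k+1}; F_k]:

```python
import numpy as np

# M^k = [[F_{k+1}, F_k], [F_k, F_{k-1}]] for the Fibonacci matrix M.
M = np.array([[1, 1], [1, 0]])
M10 = np.linalg.matrix_power(M, 10)   # contains F_11 = 89, F_10 = 55, F_9 = 34

# The dominant eigenvalue is the golden ratio (1 + sqrt(5))/2, which
# controls the growth rate of the ratios F_{k+1}/F_k.
phi = np.linalg.eigvals(M.astype(float)).max()
```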
Example
Note that the derivative of e^{λt} is λe^{λt}. The whole point of the next few slides is to convert constant-coefficient differential equations into linear algebra.
The ordinary equations du(t)/dt = u and du(t)/dt = λu(t) are solved by exponentials:
du(t)/dt = u(t) produces u(t) = Ce^t, and
du(t)/dt = λu(t) produces u(t) = Ce^{λt},
where C = u(0) is the starting value.
Systems of Differential Equations
These differential equations are linear, i.e., if u(t) and v(t) are solutions, so is Cu(t) + Dv(t). We will need n constants like C and D to match the n components of u(0).
Key idea: look for pure exponential solutions u = e^{λt} x by using Ax = λx; then du/dt = λe^{λt} x = e^{λt} Ax = Au.
Example
Solve du/dt = Au, where
A = [0 1; 1 0],
starting from u(0) = [4; 2]. This is a vector equation for u. It contains two scalar equations for the components y and z. They are “coupled together” because the matrix A is not diagonal.
d/dt [y; z] = [0 1; 1 0][y; z] means that dy(t)/dt = z(t) and dz(t)/dt = y(t).
The eigenvalues of A are 1 and −1, with eigenvectors x1 = [1; 1] and x2 = [1; −1], so the pure exponential solutions are u1(t) = e^t [1; 1] and u2(t) = e^{−t} [1; −1].
Notice: these u's satisfy Au1 = u1 and Au2 = −u2, just like x1 and x2. The factors e^t and e^{−t} change with time. Those factors give
du1/dt = u1 = Au1 and du2/dt = −u2 = Au2
We have two solutions to du/dt = Au. To find all other solutions, multiply these special solutions by any numbers C and D and add:
Complete solution: u(t) = Ce^t [1; 1] + De^{−t} [1; −1] = [Ce^t + De^{−t}; Ce^t − De^{−t}]
With these two constants we can match the starting vector. Here, u(0) = (4, 2), so C = 3 and D = 1.
System of Differential Equations
To solve du(t)/dt = Au:
(1) Write u(0) as a combination c1 x1 + · · · + cn xn of the eigenvectors of A.
(2) Multiply each eigenvector xi by its growth factor e^{λi t}.
(3) The solution is the same combination of those pure solutions e^{λt} x:
u(t) = c1 e^{λ1 t} x1 + · · · + cn e^{λn t} xn
Not included: If two λ's are equal, with only one eigenvector, another solution is needed. It will be te^{λt} x. Step (1) needs to diagonalize A = XΛX^−1: a basis of n eigenvectors.
Example
du/dt = [1 1 1; 0 2 1; 0 0 3] u starting from u(0) = [9; 7; 4]
The eigenvalues are λ = 1, 2, 3 and the corresponding eigenvectors are x1 = (1, 0, 0), x2 = (1, 1, 0), and x3 = (1, 1, 1).
(1) The vector u(0) = (9, 7, 4) is 2x1 + 3x2 + 4x3. Thus (c1, c2, c3) = (2, 3, 4).
(2) The factors e^{λt} give exponential solutions e^t x1, e^{2t} x2, and e^{3t} x3.
(3) The combination that starts from u(0) is
u(t) = 2e^t x1 + 3e^{2t} x2 + 4e^{3t} x3
The coefficients 2, 3, 4 came from solving the linear equation
c1 x1 + c2 x2 + c3 x3 = u(0):
[x1 x2 x3][c1; c2; c3] = [1 1 1; 0 1 1; 0 0 1][2; 3; 4] = [9; 7; 4], which is Xc = u(0).
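The step "solve Xc = u(0)" is one call in NumPy (a check, not part of the slides):

```python
import numpy as np

X = np.array([[1.0, 1.0, 1.0],   # eigenvectors x1, x2, x3 as columns
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
u0 = np.array([9.0, 7.0, 4.0])
c = np.linalg.solve(X, u0)       # expect (2, 3, 4)
```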
Second Order Equations
Consider a linear second order differential equation with constant coefficients
y'' + by' + ky = 0
We can turn it into a first order vector differential equation for y and y':
Define u = [y; y'], then du/dt = Au where A = [0 1; −k −b].
A is called a companion matrix to the 2nd order equation with y''. The first equation is trivial (but true). The second equation connects y'', y' and y. So we can solve du/dt = Au by the eigenvalues of A:
A − λI = [−λ 1; −k −b−λ] and cA(λ) = det(A − λI) = λ^2 + bλ + k = 0
The eigenvectors and the solution are
x1 = [1; λ1], x2 = [1; λ2], and u(t) = c1 e^{λ1 t} [1; λ1] + c2 e^{λ2 t} [1; λ2]
Example: y'' = −y can be written as du/dt = Au, where u = [y; y'] and A = [0 1; −1 0]. Starting from u(0) = (1, 0), the solution is y(t) = cos(t), y'(t) = −sin(t). So, indeed, the vector u goes around a circle with radius 1, because ‖u(t)‖^2 = cos^2(t) + sin^2(t) = 1 for all t.
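A small check of the companion-matrix construction; the coefficients b = 3, k = 2 are my example, not from the slides:

```python
import numpy as np

b, k = 3.0, 2.0                          # y'' + 3y' + 2y = 0
A = np.array([[0.0, 1.0], [-k, -b]])     # companion matrix

# Its characteristic polynomial is lambda^2 + b*lambda + k,
# so the eigenvalues are the roots -1 and -2.
coeffs = np.poly(A)
lams = np.sort(np.linalg.eigvals(A).real)
```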
Difference Equations
To display a circle on a screen, replace y'' = −y by a difference equation. Here are three choices using
Y(t + ∆t) − 2Y(t) + Y(t − ∆t).
Divide by (∆t)^2 to approximate y'' by one of the formulas
(F) Forward from n − 1: (Y_{n+1} − 2Y_n + Y_{n−1})/(∆t)^2 = −Y_{n−1}
(C) Centered at time n: (Y_{n+1} − 2Y_n + Y_{n−1})/(∆t)^2 = −Y_n
(B) Backward from n + 1: (Y_{n+1} − 2Y_n + Y_{n−1})/(∆t)^2 = −Y_{n+1}
The two-step equations above reduce to 1-step systems
U_{n+1} = AU_n
with U_n = (Y_n, Z_n), where Z is like y' above. The corresponding matrices A are
A_F = [1 ∆t; −∆t 1], A_C = [1 0; ∆t 1]^−1 [1 ∆t; 0 1], and A_B = [1 −∆t; ∆t 1]^−1
with two eigenvalues for each case given by
λ_F = 1 ± i∆t, |λ_C| = 1, and λ_B = 1/(1 ± i∆t)
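The three amplification factors can be confirmed numerically (a check, not from the slides):

```python
import numpy as np

dt = 0.1
AF = np.array([[1.0, dt], [-dt, 1.0]])                       # forward
AC = np.linalg.inv(np.array([[1.0, 0.0], [dt, 1.0]])) @ \
     np.array([[1.0, dt], [0.0, 1.0]])                       # centered (leapfrog)
AB = np.linalg.inv(np.array([[1.0, -dt], [dt, 1.0]]))        # backward

mF = np.abs(np.linalg.eigvals(AF))   # sqrt(1 + dt^2) > 1: spirals out
mC = np.abs(np.linalg.eigvals(AC))   # exactly 1: stays on the circle
mB = np.abs(np.linalg.eigvals(AB))   # 1/sqrt(1 + dt^2) < 1: spirals in
```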
Stability of Matrices
Write λ = r + is. Then |e^{λt}| = e^{rt}, since |e^{ist}| = 1. So the real part of λ controls the growth (r > 0) or the decay (r < 0) of the system.
Stability of 2 × 2 Matrices

Theorem
A is stable and u(t) → 0 as t → ∞ when all eigenvalues λ have negative real parts. The 2 × 2 matrix A = [a b; c d] must pass the test:
The trace T = a + d must be negative.
The determinant D = ad − bc must be positive.

Proof.
The trace is always equal to the sum of the eigenvalues, and the determinant is always equal to their product: T = λ1 + λ2 and D = λ1 λ2. Hence,
For real eigenvalues, the latter condition is equivalent to the eigenvalues having the same sign; then the former implies that the eigenvalues are negative.
In the case of complex eigenvalues, they must have the form r + is and r − is; otherwise T and D would not be real. The determinant is automatically positive, since D = (r + is)(r − is) = r^2 + s^2, and the trace is T = (r + is) + (r − is) = 2r. So the negative trace means r < 0 and the matrix is stable.
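The trace/determinant test as code, with my own example matrices (not from the slides):

```python
import numpy as np

def is_stable_2x2(A):
    """Stability test from the slide: negative trace, positive determinant."""
    T = A[0, 0] + A[1, 1]
    D = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    return bool(T < 0 and D > 0)

A_stable = np.array([[-2.0, 1.0], [0.0, -1.0]])    # eigenvalues -2, -1
A_unstable = np.array([[1.0, 2.0], [0.0, -3.0]])   # eigenvalue +1 present
```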
Direction Field with Trajectories
The Exponential of a Matrix
One way to define e^x for numbers is by the infinite series
e^x = 1 + x + (1/2!)x^2 + (1/3!)x^3 + (1/4!)x^4 + . . .
When we change x to a square matrix At, this series defines the matrix exponential e^{At}.

Definition
Matrix exponential e^{At}:
e^{At} = I + At + (1/2)(At)^2 + (1/3!)(At)^3 + (1/4!)(At)^4 + · · ·
Its t-derivative is Ae^{At}:
d/dt e^{At} = A + A^2 t + (1/2)A^3 t^2 + (1/3!)A^4 t^3 + · · · = Ae^{At}
Applying the series to an eigenvector x with Ax = λx gives
(I + At + (1/2)(At)^2 + (1/3!)(At)^3 + (1/4!)(At)^4 + · · ·)x = (1 + λt + (1/2)(λt)^2 + (1/3!)(λt)^3 + (1/4!)(λt)^4 + · · ·)x,
i.e., e^{At} x = e^{λt} x.
The Exponential of a Matrix
We have
du/dt = Au, where u(0) is a given initial condition.
We want to write the solution u(t) in a new form e^{At} u(0).
We found u(t) = e^{At} u(0) by diagonalizing the matrix A. Assume A has n independent eigenvectors, so it is diagonalizable. So
e^{At} = I + At + (1/2)(At)^2 + (1/3!)(At)^3 + (1/4!)(At)^4 + · · ·
= I + (XΛX^−1)t + (1/2)(XΛX^−1 t)^2 + (1/3!)(XΛX^−1 t)^3 + (1/4!)(XΛX^−1 t)^4 + · · ·
= I + (XΛX^−1)t + (1/2)XΛ^2 X^−1 t^2 + (1/3!)XΛ^3 X^−1 t^3 + (1/4!)XΛ^4 X^−1 t^4 + · · ·
= X(I + Λt + (1/2)Λ^2 t^2 + (1/3!)Λ^3 t^3 + (1/4!)Λ^4 t^4 + · · ·)X^−1
= Xe^{Λt} X^−1
The Exponential of a Matrix
So if A is diagonalizable (i.e., A = XΛX^−1), then e^{At} is diagonalizable too, and
e^{At} = Xe^{Λt} X^−1
Do the multiplication to recognize that u(t) = e^{At} u(0) corresponds to u(t) = Xe^{Λt} X^−1 u(0), i.e.,
u(t) = [x1 · · · xn] diag(e^{λ1 t}, . . . , e^{λn t}) [c1; . . . ; cn]
where
[c1; . . . ; cn] = [x1 · · · xn]^−1 u(0).
This solution is exactly the same answer that came earlier from three steps:
u(t) = c1 e^{λ1 t} x1 + · · · + cn e^{λn t} xn
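The identity e^{At} = Xe^{Λt}X^{-1} can be checked against the power series directly (a sketch for diagonalizable A, not from the slides):

```python
import numpy as np

def expm_eig(A, t):
    """e^{At} = X e^{Lambda t} X^{-1} for a diagonalizable A."""
    lam, X = np.linalg.eig(A)
    return (X @ np.diag(np.exp(lam * t)) @ np.linalg.inv(X)).real

def expm_series(A, t, terms=30):
    """Truncated series I + At + (At)^2/2! + ... (converges for any A)."""
    M, out, term = A * t, np.eye(A.shape[0]), np.eye(A.shape[0])
    for kk in range(1, terms):
        term = term @ M / kk     # (At)^k / k!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.7
```

For this A, the exact answer is e^{At} = cosh(t) I + sinh(t) A, which both routines reproduce.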
Repeated Roots Case with GM < AM

Example
Consider
y'' − 2y' + y = 0
Linear algebra reduces this second order differential equation to a vector equation for u = (y, y'):
d/dt [y; y'] = [y'; 2y' − y], i.e., du/dt = Au = [0 1; −1 2] u
Here cA(λ) = λ^2 − 2λ + 1 = (λ − 1)^2, so λ = 1 is repeated with only one eigenvector. Since (A − I)^2 = 0, the series for e^{(A−I)t} stops after two terms:
e^{At} = e^{It} e^{(A−I)t} = e^t [I + (A − I)t]
Repeated Roots Case for the Rotation Matrix

Example
Consider again the motion around the circle:
y'' + y = 0
Linear algebra reduces this second order differential equation to a vector equation for u = (y, y'):
d/dt [y; y'] = [y'; −y], i.e., du/dt = Au = [0 1; −1 0] u
Notice that
A = [0 1; −1 0], A^2 = [−1 0; 0 −1], A^3 = [0 −1; 1 0], A^4 = [1 0; 0 1], and A^5 = A^4 A = A
e^{At} = I + At + (1/2)A^2 t^2 + (1/3!)A^3 t^3 + (1/4!)A^4 t^4 + · · · = [1 − t^2/2 + · · · , t − t^3/3! + · · · ; −t + t^3/3! − · · · , 1 − t^2/2 + · · ·]
So for A = [0 1; −1 0], e^{At} = [cos(t) sin(t); −sin(t) cos(t)],
i.e.,
(e^{At})^−1 = (e^{At})^T = e^{−At}

Remark
Antisymmetric is the same as “skew-symmetric”. Those matrices have pure imaginary eigenvalues like i and −i. Then e^{At} has eigenvalues like e^{it} and e^{−it}. Their absolute value is 1: neutral stability, pure oscillation, energy conserved. So ‖u(t)‖ = ‖u(0)‖, i.e., the dynamics stay on the circle.
Direction Field with Trajectories
Symmetric Matrices
S = XΛX^−1 = S^T = (XΛX^−1)^T = (X^−1)^T ΛX^T
S = S^T and S = XΛX^−1
⇓ (with the eigenvectors chosen orthonormal)
X^−1 = X^T and S = XΛX^T
Eigenvalues of Real Symmetric Matrices

Theorem
All the eigenvalues of a real symmetric matrix are real.

Proof.
Suppose Sx = λx. Until we know otherwise, λ can be a complex number a + ib (a and b are real). Its complex conjugate is λ̄ = a − ib.
Similarly, the components of x can be complex numbers, and switching the signs of their imaginary parts gives x̄.
The good thing is that the product of conjugates is the conjugate of the product, i.e., the conjugate of λx is λ̄x̄.
So taking conjugates of Sx = λx leads to Sx̄ = λ̄x̄ (since S is real). Transpose to x̄^T S = x̄^T λ̄ (since S is symmetric).
Now take the dot product of the first equation with x̄ and the last equation with x:
x̄^T Sx = x̄^T λx and also x̄^T Sx = x̄^T λ̄x.
The left sides are the same, so the right sides are equal. One equation has λ, the other has λ̄. They multiply x̄^T x = ‖x‖^2, which is not zero because x is an eigenvector.
Therefore λ must equal λ̄, and a + ib = a − ib. So b = 0, and λ = a is real.
Eigenvectors of Symmetric Matrices

Theorem
Eigenvectors of a symmetric matrix (when they correspond to different λ's) are always perpendicular.

Proof.
Suppose
Sx = λ1 x and Sy = λ2 y.
We are assuming here that λ1 ≠ λ2. Take the dot product of the first equation with y and the second with x, and use S = S^T:
(λ1 x)^T y = (Sx)^T y = x^T S^T y = x^T Sy = x^T λ2 y.
The left side is x^T λ1 y, the right side is x^T λ2 y. Since λ1 ≠ λ2, this proves that x^T y = 0. The eigenvector x for λ1 is perpendicular to the eigenvector y for λ2.
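Both theorems (real eigenvalues, perpendicular eigenvectors) in one NumPy check; the random symmetric matrix is my example, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
S = M + M.T                          # real symmetric

lam, Q = np.linalg.eigh(S)           # eigh: real eigenvalues, orthonormal Q
orthonormal = np.allclose(Q.T @ Q, np.eye(4))
rebuilt = np.allclose(S, Q @ np.diag(lam) @ Q.T)
```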
Eigenvectors of Symmetric Matrices
In general, we know that a repeated eigenvalue can produce a shortage of eigenvectors (the GM < AM case). That NEVER happens for real symmetric matrices.

Remark (S = S^T ⇒ GM = AM)
If S = S^T, then there are enough eigenvectors to diagonalize S.

Proof.
This uses Schur's Theorem, another decomposition of any square matrix, given below.
Schur's S = QTQ^−1 means that T = Q^T SQ. The transpose of T is again Q^T SQ, so the triangular matrix T is symmetric when S = S^T.
Then T must be diagonal, so T = Λ.
This proves that S = QΛQ^−1 = QΛQ^T. The symmetric S has n orthonormal eigenvectors in Q.

Schur's Theorem: every square A factors into QTQ^−1, where T is upper triangular and Q̄^T = Q^−1. If A has real eigenvalues, then Q and T can be chosen real: Q^T Q = I.

The following (the Spectral Theorem) summarizes the results for symmetric matrices:
S = QΛQ^T

Remark
Q^T Q = QQ^T = I
Geometry of Spectral Theorem
Geometry:
S = QΛQ^T = [q1 · · · qn] diag(λ1, . . . , λn) [q1^T; . . . ; qn^T]
Spectral Decomposition
S = QΛQ^T = λ1 q1 q1^T + · · · + λn qn qn^T,
where the qj qj^T are n × n matrices with rank 1, and (using qj^T qi = 0 for j ≠ i and qi^T qi = 1)
Sqi = QΛQ^T qi = λ1 q1 q1^T qi + · · · + λn qn qn^T qi = λi qi
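The rank-one pieces can be built explicitly; S = [2 1; 1 2] is my example, not from the slides:

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3
lam, Q = np.linalg.eigh(S)

# Spectral pieces lambda_i q_i q_i^T, each an n x n matrix of rank 1.
pieces = [lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(2)]
S_rebuilt = pieces[0] + pieces[1]
```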
Complex Eigenvalues of Real Matrices
Note that for any real matrix S,
Sx = λx ⇒ Sx̄ = λ̄x̄.
For symmetric matrices, λ and x turn out to be real, and those two equations become the same.
But a NONsymmetric matrix can easily produce λ and x that are complex. Then Ax̄ = λ̄x̄ is true but different from Ax = λx. We get another complex eigenvalue, and a new eigenvector.

Example
A = [cos θ −sin θ; sin θ cos θ] has λ1 = cos θ + i sin θ and λ2 = cos θ − i sin θ

Remark
product of pivots = determinant = product of eigenvalues

Remark
If S = S^T, then the number of positive eigenvalues = the number of positive pivots.
Positive Definite Matrices

Remark
If S and T are symmetric positive definite matrices, then S + T is also a symmetric positive definite matrix, since x^T (S + T)x = x^T Sx + x^T Tx > 0 for every x ≠ 0.
Positive Semidefinite Matrices

Theorem
Positive semidefinite matrices have:
1. All n pivots of S are non-negative.
2. All principal minors of S are non-negative. (A principal submatrix of a square matrix S is the matrix obtained by deleting any k rows and the corresponding k columns; the determinant of a principal submatrix is called a principal minor of S.)
3. All n eigenvalues of S are greater than or equal to zero (λi ≥ 0).
4. x^T Sx ≥ 0 for all vectors x.
5. S equals A^T A for some matrix A (possibly with dependent columns).
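Property 5 is easy to check in the other direction: any A^T A is positive semidefinite (a sketch with a random A, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
S = A.T @ A                          # symmetric positive semidefinite

lam = np.linalg.eigvalsh(S)          # real eigenvalues in ascending order
nonneg = bool(np.all(lam >= -1e-10)) # allow tiny numerical negatives
```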
Important Application: Test for a (Local) Minimum
Consider a function of two variables F(x, y).
(i) Saddle point: F(x, y) = x^2 − y^2; (ii) saddle point between two maxima; (iii) monkey saddle surface: F(x, y) = x^3 − 3xy^2.