
LINEAR MAPS

While studying any class of mathematical objects, the maps between objects of that
class that preserve the defining structure play a crucial role. In this chapter we shall be
concerned with the maps between vector spaces that preserve the linear structure.
1. Linear maps
Definition 1.1 (Linear map). Let V and W be vector spaces over the same field F. A
function T : V → W is called a linear map or linear operator if
i) T(v + w) = Tv + Tw (v, w ∈ V);
ii) T(λv) = λT(v) (v ∈ V, λ ∈ F).
Remark 1.2. In what follows, to simplify, whenever we say T : V → W is a linear map, it
will be implicit that V and W are vector spaces over the same field.
Remark 1.3. Note that if T : V → W is a linear map, v₁, v₂, ..., vₙ are vectors in V and
λ₁, λ₂, ..., λₙ are scalars in the field, then
T(λ₁v₁ + λ₂v₂ + ··· + λₙvₙ) = λ₁T(v₁) + λ₂T(v₂) + ··· + λₙT(vₙ).
Example: Let V be a vector space. The identity map id_V : V → V, v ↦ v, and the zero
map O_V : V → V, v ↦ 0, are trivial examples of linear maps.
Example: T : R³ → R², (x, y, z) ↦ (x, z), is a linear map, for
T((x₁, y₁, z₁) + (x₂, y₂, z₂)) = T(x₁ + x₂, y₁ + y₂, z₁ + z₂) = (x₁ + x₂, z₁ + z₂)
= (x₁, z₁) + (x₂, z₂) = T(x₁, y₁, z₁) + T(x₂, y₂, z₂)
for every pair (x₁, y₁, z₁), (x₂, y₂, z₂) ∈ R³, and
T(λ(x, y, z)) = T(λx, λy, λz) = (λx, λz) = λ(x, z) = λT(x, y, z)
for every λ ∈ R and every (x, y, z) ∈ R³.
More generally, let A ∈ M_{m,n}(F) and define T : Fⁿ → Fᵐ, x ↦ Ax (where we are
realizing x ∈ Fⁿ as a matrix in M_{n,1}(F)). Then
(i) T(x + y) = A(x + y) = Ax + Ay = Tx + Ty (x, y ∈ Fⁿ), and
(ii) T(λx) = A(λx) = λAx = λTx (x ∈ Fⁿ, λ ∈ F),
so T is a linear map.
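In numerical work these two identities can be checked directly. A minimal Python sketch (the matrix and vectors below are arbitrary choices, not data from the text):

    import numpy as np

    # A hypothetical 2x3 real matrix, realizing a linear map T : R^3 -> R^2.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, -1.0, 3.0]])
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([-1.0, 0.0, 5.0])
    lam = 7.0

    # (i) additivity and (ii) homogeneity of x |-> Ax:
    assert np.allclose(A @ (x + y), A @ x + A @ y)
    assert np.allclose(A @ (lam * x), lam * (A @ x))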
Example: Let V be a vector space and let X and Y be subspaces of V so that V = X ⊕ Y.
Every v ∈ V can be written in a unique way as a sum xᵥ + yᵥ with xᵥ ∈ X and yᵥ ∈ Y.
Thus, there are well defined maps P_X : V → X, v ↦ xᵥ, and P_Y : V → Y, v ↦ yᵥ. Both
P_X and P_Y are linear, for
P_X(u + v) = P_X(xᵤ + yᵤ + xᵥ + yᵥ) = P_X((xᵤ + xᵥ) + (yᵤ + yᵥ)) = xᵤ + xᵥ = P_X(u) + P_X(v) (u, v ∈ V)
(the first summand lies in X, the second in Y), and
P_X(λu) = P_X(λ(xᵤ + yᵤ)) = P_X(λxᵤ + λyᵤ) = λxᵤ = λP_X(u) (u ∈ V, λ ∈ F)
(here λxᵤ ∈ X and λyᵤ ∈ Y), and likewise for P_Y. The maps P_X and P_Y are called the projection along Y onto X
and the projection along X onto Y, respectively.
Example: T : C_R[0, 1] → R, f ↦ f(0). (The Dirac delta.)
Example: T : C_R[0, 1] → C_R[0, 1], f ↦ F, where F is defined by F(x) := ∫₀ˣ f(t) dt
(x ∈ [0, 1]). (The primitive of a continuous function.)
Example: F : C_C(R) → C_C(R), f ↦ F(f), where F(f)(x) := ∫_{-∞}^{∞} f(t)e^{-ixt} dt (x ∈ R).
(The Fourier transform.)
Example: Let Ω be an open subset of R and let f ∈ C_R(Ω). Recall that f is said
to be differentiable at t₀ ∈ Ω if lim_{t→t₀} (f(t) - f(t₀))/(t - t₀) exists. The function f is called
differentiable if it is differentiable at each point of Ω. When f ∈ C_R(Ω) is differentiable,
the function f′ : Ω → R, t ↦ f′(t), is called its differential. If, in addition, the latter is
continuous, then f is said to be continuously differentiable. The set of all continuously
differentiable real-valued functions on Ω, denoted C¹_R(Ω), is a subspace of C_R(Ω) and the
map D : C¹_R(Ω) → C_R(Ω), f ↦ f′, is a linear map.
The composition of linear maps has the following properties (compare with the properties
of the product of matrices).
Proposition 1.4 (Properties of the composition of linear maps).
i) The composition of linear maps is again linear;
ii) For every λ in the field, (λR)S = λ(RS) = R(λS);
iii) (RS)T = R(ST);
iv) S(T₁ + T₂) = ST₁ + ST₂;
v) (S₁ + S₂)T = S₁T + S₂T;
where R, S, S₁, S₂, T, T₁ and T₂ are linear maps such that all the above compositions
make sense.
Proof. Tutorial.
Theorem 1.5 (Unique mapping theorem). Let V and W be vector spaces over the same
field and suppose V has a basis B. Let f : B → W be any function. Then there exists a
unique linear map T : V → W with the property that T(b) = f(b) (b ∈ B).
Proof. We give the proof only for finite-dimensional V.
Let V, W, B and f be as in the hypotheses of the theorem. Furthermore, let B =
{v₁, ..., vₙ}. Let v ∈ V be arbitrary. Since B is a basis, v can be expressed in a unique way
as a linear combination of elements in B. Say v = Σᵢ₌₁ⁿ λᵢvᵢ. Then define Tv := Σᵢ₌₁ⁿ λᵢf(vᵢ).
(Notice that there is no other way of defining T, so a fortiori T is unique!) The map T is
linear (Exercise!) and Tvᵢ = f(vᵢ) (1 ≤ i ≤ n). (Why?)
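The proof is constructive, and for V = Fⁿ it can be carried out numerically: if the basis vectors are the columns of an invertible matrix B and the prescribed images f(vᵢ) are the columns of a matrix C, then the standard matrix M of the unique T is forced by M B = C. A small sketch, with hypothetical bases and images:

    import numpy as np

    # Columns of B: a basis {v1, v2} of R^2 (any invertible matrix works).
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    # Columns of C: freely prescribed images f(v1), f(v2) in R^3.
    C = np.array([[2.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 3.0]])

    # T(v_i) = f(v_i) for all i reads, column by column, as M @ B = C.
    M = C @ np.linalg.inv(B)
    assert np.allclose(M @ B, C)  # T agrees with f on the basis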
Definition 1.6 (Linear isomorphism). A bijective linear map is called a linear isomorphism.
Two vector spaces V and W are said to be linearly isomorphic, denoted
V ≅ W, if there exists a linear isomorphism T : V → W.
It can be shown that if T : V → W is a linear isomorphism then T⁻¹ : W → V is also
a linear isomorphism. So, the order of V and W in the last definition is irrelevant.
Theorem 1.7. Let T : V → W be a linear map and suppose V has a basis B. Let
T(B) = {T(b) : b ∈ B}. Then
i) T is one-to-one if and only if T(B) is a linearly independent set.
ii) T is onto if and only if ⟨T(B)⟩ = W.
iii) T is an isomorphism if and only if T(B) is a basis of W.
Proof. We give the proof only for finite-dimensional V, so, throughout, we assume B =
{v₁, ..., vₙ}.
i) Suppose T is one-to-one and let λ₁, ..., λₙ ∈ F be such that λ₁Tv₁ + ··· + λₙTvₙ = 0.
Then T(Σᵢ₌₁ⁿ λᵢvᵢ) = 0 = T(0) ⟹ Σᵢ₌₁ⁿ λᵢvᵢ = 0 ⟹ λ₁ = ··· = λₙ = 0. So, T(B) is a
linearly independent set.
Conversely, suppose T(B) is a linearly independent set and let u, v ∈ V be such that
T(u) = T(v). Let u = Σᵢ₌₁ⁿ λᵢvᵢ and v = Σᵢ₌₁ⁿ μᵢvᵢ. Then Σᵢ₌₁ⁿ λᵢTvᵢ = Σᵢ₌₁ⁿ μᵢTvᵢ ⟹
λᵢ = μᵢ (1 ≤ i ≤ n) (because the Tvᵢ's are linearly independent), i.e., u = v.
ii) Suppose T is onto and let w ∈ W be arbitrary. There exists v ∈ V such that Tv = w.
Let v = Σᵢ₌₁ⁿ λᵢvᵢ. Then w = Tv = T(Σᵢ₌₁ⁿ λᵢvᵢ) = Σᵢ₌₁ⁿ λᵢTvᵢ ∈ ⟨T(B)⟩. This shows
W ⊆ ⟨T(B)⟩. The reverse inclusion is obvious.
Conversely, suppose ⟨T(B)⟩ = W. Let w ∈ W be arbitrary. There are scalars λ₁, ..., λₙ
such that w = Σᵢ₌₁ⁿ λᵢTvᵢ = T(Σᵢ₌₁ⁿ λᵢvᵢ). So, T is onto.
iii) Immediate from parts (i) and (ii).
Corollary 1.8. Let V be a vector space of dimension n over a field F. Then V ≅ Fⁿ.
Proof. Let {v₁, ..., vₙ} be a basis of V. There is a linear map T : V → Fⁿ such that
Tvᵢ = eᵢ (1 ≤ i ≤ n). By part (iii) of the previous theorem, T is an isomorphism.
Corollary 1.9. Two finite-dimensional vector spaces over the same field are isomorphic
if and only if they have the same dimension.
Proof. Exercise.
Thus, up to linear isomorphism, there is, for each dimension, precisely one vector space
over a given field.
2. Kernel, image and rank of a linear map
Definition 2.1. Let T : V → W be a linear map.
The kernel of T, denoted ker T, is defined to be the set {v ∈ V : T(v) = 0}.
The image of T, denoted im T, is defined to be the set T(V) := {w ∈ W : w = T(v)
for some v ∈ V} (or equivalently, the set {Tv : v ∈ V}).
The rank of T, denoted rank T, is the dimension of im T.
Example: Let A ∈ M_{m,n}(F) and let T : Fⁿ → Fᵐ, x ↦ Ax. Then rank T =
dim{Ax : x ∈ Fⁿ} = dim⟨Ae₁, Ae₂, ..., Aeₙ⟩ = rank A.
Proposition 2.2. Let T : V → W be a linear map. Then
a) ker T is a subspace of V.
b) im T is a subspace of W.
c) T is onto if and only if T(V) = W.
d) T is one-to-one if and only if ker T = {0}.
Proof. (a) and (b) are left as exercises. (c) is clear. We prove (d).
(d) Suppose T is one-to-one and let v ∈ ker T. Then T(v) = 0 = T(0) ⟹ v = 0.
Conversely, suppose ker T = {0} and let u, v ∈ V be such that T(u) = T(v). Then T(u - v) =
0 ⟹ u - v ∈ ker T ⟹ u = v.
Example: Let T : R³ → R², (x, y, z) ↦ (x - y, y + z). Then
ker T = {(x, y, z) ∈ R³ : T(x, y, z) = (0, 0)}
= {(x, y, z) ∈ R³ : (x - y, y + z) = (0, 0)}
= {(x, y, z) ∈ R³ : x - y = 0 and y + z = 0},
and
im T = {T(x, y, z) : (x, y, z) ∈ R³} = {(x - y, y + z) : x, y, z ∈ R}.
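For maps between coordinate spaces, both sets can be computed from the standard matrix of T. A small sympy sketch for this very map:

    from sympy import Matrix

    # Standard matrix of T(x, y, z) = (x - y, y + z).
    A = Matrix([[1, -1, 0],
                [0,  1, 1]])

    print(A.nullspace())    # one vector spanning ker T (a multiple of (1, 1, -1))
    print(A.columnspace())  # two vectors spanning im T, so im T = R^2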
Theorem 2.3 (The dimension theorem). Let V be a finite-dimensional vector space and
let T : V → W be a linear map. Then dim V = rank T + dim(ker T).
Proof. Choose a basis v₁, ..., vₘ for ker T, and extend it to a basis v₁, ..., vₘ, v_{m+1}, ..., vₙ
for V. Then dim V = dim(ker T) + (n - m), so, to finish the proof, it will be enough
to show that B = {Tv_{m+1}, ..., Tvₙ} is a basis for im T.
First we show B spans im T. For this, let w ∈ im T be arbitrary and let v ∈ V be so that
w = Tv. Then v = Σᵢ₌₁ⁿ λᵢvᵢ, so w = Σᵢ₌₁ⁿ λᵢTvᵢ = Σ_{i=m+1}ⁿ λᵢTvᵢ ∈ ⟨Tv_{m+1}, ..., Tvₙ⟩
(the terms with i ≤ m vanish since Tvᵢ = 0 for vᵢ ∈ ker T), i.e., im T ⊆ ⟨B⟩. The reverse
inclusion is obvious, so im T = ⟨B⟩.
It remains to show that B is linearly independent. For this, let λ_{m+1}, ..., λₙ be scalars
such that λ_{m+1}Tv_{m+1} + ··· + λₙTvₙ = 0. Then T(λ_{m+1}v_{m+1} + ··· + λₙvₙ) = 0 ⟹
λ_{m+1}v_{m+1} + ··· + λₙvₙ ∈ ker T ⟹ there are scalars λ₁, ..., λₘ such that λ_{m+1}v_{m+1} +
··· + λₙvₙ = λ₁v₁ + ··· + λₘvₘ ⟹ -λ₁v₁ - ··· - λₘvₘ + λ_{m+1}v_{m+1} + ··· + λₙvₙ = 0
⟹ λ₁ = ··· = λₘ = λ_{m+1} = ··· = λₙ = 0. Done!
Example: Let T : R³ → R² be the map from the previous example. Then ker T =
{(x, x, -x) : x ∈ R} = ⟨(1, 1, -1)⟩, so dim(ker T) = 1. By the dimension theorem,
rank T = dim R³ - dim(ker T) = 3 - 1 = 2.
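The same computation, done by machine (a one-line check of the dimension theorem for this T):

    from sympy import Matrix

    A = Matrix([[1, -1, 0],
                [0,  1, 1]])                        # the same T : R^3 -> R^2 as above
    assert A.cols == A.rank() + len(A.nullspace())  # 3 = 2 + 1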
Corollary 2.4. Let V and W be finite-dimensional vector spaces of the same dimension.
Then for a linear map T : V → W the following are equivalent:
i) T is an isomorphism.
ii) T is one-to-one.
iii) T is onto.
Proof. Tutorial. (Hint: Use the dimension theorem.)
3. Vector spaces of linear maps
Let V and W be vector spaces over F. The collection of all linear maps from V to W,
denoted L(V, W), forms a vector space over F if for every T₁, T₂, T ∈ L(V, W) and every
λ ∈ F we define
(T₁ + T₂)(v) := T₁(v) + T₂(v) (v ∈ V),
and
(λ · T)(v) := λT(v) (v ∈ V).
A special case of this last construction arises when W = F. Then a linear map f : V →
W (= F) is called a linear functional and L(V, W) is called the dual space of V, and
denoted by V*.
Proposition 3.1. Let V be a finite-dimensional vector space. Then V* ≅ V.
Proof. Tutorial.
4. On the correspondence between matrices
and linear maps
Let V and W be finite-dimensional vector spaces over the same field F. Let β =
{v₁, ..., vₘ} be a basis for V, and let γ = {w₁, ..., wₙ} be a basis for W. Let T : V → W
be a linear map, let v ∈ V be arbitrary and let w = Tv. Since β is a basis of V we have that
v = x₁v₁ + x₂v₂ + ··· + xₘvₘ.
On the other hand, as γ is a basis of W, we have that
T(v₁) = a₁₁w₁ + a₂₁w₂ + ··· + aₙ₁wₙ,
T(v₂) = a₁₂w₁ + a₂₂w₂ + ··· + aₙ₂wₙ,
...
T(vₘ) = a₁ₘw₁ + a₂ₘw₂ + ··· + aₙₘwₙ,
and also
w = y₁w₁ + y₂w₂ + ··· + yₙwₙ.
Combining all the above identities we obtain
Σᵢ₌₁ⁿ yᵢwᵢ = w = Tv = T(Σⱼ₌₁ᵐ xⱼvⱼ) = Σⱼ₌₁ᵐ xⱼTvⱼ = Σⱼ₌₁ᵐ xⱼ(Σᵢ₌₁ⁿ aᵢⱼwᵢ) = Σᵢ₌₁ⁿ (Σⱼ₌₁ᵐ aᵢⱼxⱼ)wᵢ,
so
yᵢ = Σⱼ₌₁ᵐ aᵢⱼxⱼ (1 ≤ i ≤ n),
or, in matrix terms,

    [ y₁ ]   [ a₁₁ a₁₂ ··· a₁ₘ ] [ x₁ ]
    [ y₂ ]   [ a₂₁ a₂₂ ··· a₂ₘ ] [ x₂ ]
    [ ⋮  ] = [  ⋮   ⋮        ⋮  ] [ ⋮  ]
    [ yₙ ]   [ aₙ₁ aₙ₂ ··· aₙₘ ] [ xₘ ]

where the n × m matrix in the middle is denoted Φ_{γ,β}(T).
In this way we have associated with T ∈ L(V, W) a matrix Φ_{γ,β}(T) ∈ M_{n,m}(F), i.e., we
have defined a map from L(V, W) into M_{n,m}(F).
Example: Let A ∈ M_{m,n}(F) and let T : Fⁿ → Fᵐ, x ↦ Ax. Let β = {e₁, ..., eₙ} and
γ = {e₁, ..., eₘ}. Then Φ_{γ,β}(T) = A.
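Concretely, when V = Fᵐ and W = Fⁿ, the matrix Φ_{γ,β}(T) can be computed by solving linear systems: if the bases β and γ are the columns of matrices B and C and M is the standard matrix of T, the defining relations T(vⱼ) = Σᵢ aᵢⱼwᵢ read, column by column, M B = C Φ_{γ,β}(T). A sketch with arbitrarily chosen data:

    import numpy as np

    # A hypothetical T : R^2 -> R^2 given by its standard matrix M.
    M = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 1.0],    # columns: the basis beta of the domain
                  [0.0, 1.0]])
    C = np.array([[1.0, 0.0],    # columns: the basis gamma of the codomain
                  [1.0, 1.0]])

    Phi = np.linalg.inv(C) @ M @ B   # the matrix Phi_{gamma,beta}(T)
    assert np.allclose(C @ Phi, M @ B)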
Example: A linear map T : R³ → R³ has as image the plane x + y + z = 0 and as kernel
the line x = y = z. If
T(1, 1, 2) = (a, 0, 1),  T(2, 1, 1) = (-3, b, 5)  and  T(1, 2, 1) = (1, -2, c),
find a, b and c, and find the matrix representation of T with respect to the standard basis
of R³.
Solution: Since every point (x, y, z) ∈ im T must satisfy x + y + z = 0, one immediately
deduces that a = -1, b = -2 and c = 1. As for the matrix representation of T with
respect to the standard basis, first note that, by hypothesis, T(1, 1, 1) = (0, 0, 0). Hence,
T(1, 0, 0) = T((2, 1, 1) - (1, 1, 1)) = T(2, 1, 1) = (-3, -2, 5),
and likewise,
T(0, 1, 0) = T((1, 2, 1) - (1, 1, 1)) = T(1, 2, 1) = (1, -2, 1)  and
T(0, 0, 1) = T((1, 1, 2) - (1, 1, 1)) = T(1, 1, 2) = (-1, 0, 1).
Lastly, the matrix representation of T with respect to the standard basis is

    [ -3   1  -1 ]
    [ -2  -2   0 ]
    [  5   1   1 ].
Theorem 4.1. Let V, W, β and γ be as above. Then the map Φ_{γ,β} : L(V, W) → M_{n,m}(F),
T ↦ Φ_{γ,β}(T), is a linear isomorphism.
Proof. First we show Φ_{γ,β} is linear. Let T₁, T₂ ∈ L(V, W) be arbitrary, and let Φ_{γ,β}(T₁) =
(aᵢⱼ), Φ_{γ,β}(T₂) = (bᵢⱼ) and Φ_{γ,β}(T₁ + T₂) = (cᵢⱼ). For every 1 ≤ j ≤ m, one has that
Σᵢ₌₁ⁿ cᵢⱼwᵢ = (T₁ + T₂)(vⱼ) = T₁(vⱼ) + T₂(vⱼ) = Σᵢ₌₁ⁿ aᵢⱼwᵢ + Σᵢ₌₁ⁿ bᵢⱼwᵢ = Σᵢ₌₁ⁿ (aᵢⱼ + bᵢⱼ)wᵢ,
so cᵢⱼ = aᵢⱼ + bᵢⱼ (1 ≤ i ≤ n, 1 ≤ j ≤ m), or equivalently, Φ_{γ,β}(T₁ + T₂) = Φ_{γ,β}(T₁) +
Φ_{γ,β}(T₂). Similarly, one shows that Φ_{γ,β}(λT) = λΦ_{γ,β}(T) (λ ∈ F, T ∈ L(V, W)).
To see that Φ_{γ,β} is one-to-one, let T ∈ L(V, W) be such that Φ_{γ,β}(T) = 0 (0 here stands
for the zero matrix!). Then T(vⱼ) = 0 (1 ≤ j ≤ m) ⟹ T(v) = 0 (v ∈ V) (Why?) ⟹ T = 0.
So ker Φ_{γ,β} = {0} and Φ_{γ,β} is one-to-one.
Lastly, let A = (aᵢⱼ) ∈ M_{n,m}(F) and let T : V → W be the unique linear map that
satisfies T(vⱼ) := a₁ⱼw₁ + ··· + aₙⱼwₙ (1 ≤ j ≤ m). Then Φ_{γ,β}(T) = A. This shows
Φ_{γ,β} is onto and concludes the proof.
Note that the linear isomorphism of the last theorem is not canonical, i.e., it depends
on the bases β and γ chosen. If we change the bases (even the order of their elements!) then,
in general, the matrix will change.
Another important property of the maps Φ_{γ,β} is the following.
Theorem 4.2. Let U, V and W be finite-dimensional vector spaces over the same scalar
field F. Let α, β and γ be bases for U, V and W, respectively. Let T : U → V and
S : V → W be linear maps. Then Φ_{γ,β}(S) Φ_{β,α}(T) = Φ_{γ,α}(ST).
Proof. Let α = {u₁, ..., uₘ}, let β = {v₁, ..., vₗ} and let γ = {w₁, ..., wₙ}. Also, let
Φ_{γ,β}(S) = (bᵢⱼ), Φ_{β,α}(T) = (aᵢⱼ) and Φ_{γ,α}(ST) = (cᵢⱼ). For every 1 ≤ j ≤ m, one has that
Σᵢ₌₁ⁿ cᵢⱼwᵢ = (ST)(uⱼ) = S(T(uⱼ)) = S(Σₖ₌₁ˡ aₖⱼvₖ) = Σₖ₌₁ˡ aₖⱼS(vₖ) = Σₖ₌₁ˡ aₖⱼ(Σᵢ₌₁ⁿ bᵢₖwᵢ) = Σᵢ₌₁ⁿ (Σₖ₌₁ˡ bᵢₖaₖⱼ)wᵢ.
It follows that cᵢⱼ = Σₖ₌₁ˡ bᵢₖaₖⱼ (1 ≤ i ≤ n, 1 ≤ j ≤ m), whence the desired result.
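In coordinates this is the familiar statement that composing maps multiplies their matrices. A quick numerical illustration with standard bases (random matrices stand in for the Φ's):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))   # matrix of some T : R^3 -> R^4
    B = rng.standard_normal((2, 4))   # matrix of some S : R^4 -> R^2
    x = rng.standard_normal(3)

    # S(T(x)) computed two ways: composing the maps, or multiplying the matrices.
    assert np.allclose(B @ (A @ x), (B @ A) @ x)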
Corollary 4.3. Let V be a vector space and let β and γ be two bases of it. Then T ∈ L(V)
is a linear isomorphism if and only if Φ_{γ,β}(T) is invertible. Moreover, if T is a linear
isomorphism then Φ_{γ,β}(T)⁻¹ = Φ_{β,γ}(T⁻¹).
Proof. Let T ∈ L(V) be an isomorphism. Then there is T⁻¹ ∈ L(V) so that TT⁻¹ = id =
T⁻¹T. By Theorem 4.2, Φ_{γ,β}(T) Φ_{β,γ}(T⁻¹) = Φ_{γ,γ}(id) = I = Φ_{β,β}(id) = Φ_{β,γ}(T⁻¹) Φ_{γ,β}(T).
So, Φ_{γ,β}(T) is invertible and Φ_{γ,β}(T)⁻¹ = Φ_{β,γ}(T⁻¹).
Conversely, suppose Φ_{γ,β}(T) is invertible. Then there is a matrix A so that A Φ_{γ,β}(T) =
I = Φ_{γ,β}(T) A. As Φ_{β,γ} is onto, there is R ∈ L(V) such that Φ_{β,γ}(R) = A. By Theorem 4.2,
Φ_{β,β}(RT) = Φ_{β,γ}(R) Φ_{γ,β}(T) = I = Φ_{β,β}(id) and Φ_{γ,γ}(TR) = Φ_{γ,β}(T) Φ_{β,γ}(R) = I =
Φ_{γ,γ}(id). This last implies that RT = id = TR, and so, T must be an isomorphism. (Can
you explain why?)
When V = W it makes more sense to choose β = γ. In this case, we write Φ_β(T)
instead of Φ_{β,β}(T).
How are matrices representing the same linear map related?
Corollary 4.4. Let T ∈ L(V), and let β and γ be bases for V. There is an invertible
matrix Q so that Q⁻¹ Φ_β(T) Q = Φ_γ(T).
Proof. Since T = id ∘ T ∘ id, we have, by Theorem 4.2, that Φ_γ(T) = Φ_{γ,β}(id) Φ_β(T) Φ_{β,γ}(id)
and, by Corollary 4.3, that Φ_{β,γ}(id)⁻¹ = Φ_{γ,β}(id). Set Q = Φ_{β,γ}(id).
Two matrices A and B for which there exists an invertible matrix Q so that B =
Q⁻¹AQ are said to be similar. Similarity is an equivalence relation.
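A small numerical illustration of Corollary 4.4 (the matrices here are our own choices, not from the text): conjugating a matrix representation by a change-of-basis matrix Q produces a similar, in this case diagonal, representation.

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # Phi_beta(T) in the standard basis
    Q = np.array([[1.0, 1.0],
                  [1.0, -1.0]])         # columns: a second basis (eigenvectors of M)

    M_new = np.linalg.inv(Q) @ M @ Q    # a similar matrix, Phi_gamma(T)
    print(M_new)                        # diag(3, 1)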
Now that we are aware of the relationship between matrices and linear maps we can take
advantage of it. Although a linear map may have infinitely many matrix representations,
it is apparent that some of them will behave better than others in computations. For
instance, diagonal ones. It is thus natural to ask: when does a linear operator have a
diagonal (or close to diagonal) matrix representation? We shall look at this question in
the following sections.
5. Eigenvalues and eigenvectors
It follows easily from the definition of the maps Φ_β that a linear map T acting on a
finite-dimensional vector space V has a diagonal matrix representation if and only if
there exists a basis {v₁, ..., vₙ} of V so that Tvᵢ ∈ ⟨vᵢ⟩ (1 ≤ i ≤ n). This motivates the
following.
Definition 5.1 (Eigenvalues and eigenvectors). Let V be a vector space over F, and let
T ∈ L(V). We say that λ ∈ F is an eigenvalue of T if there exists v_λ ∈ V \ {0} such
that Tv_λ = λv_λ. The vector v_λ is called an eigenvector associated to λ.
Furthermore, we shall say that λ is an eigenvalue of A ∈ Mₙ(F) if it is an eigenvalue
of the linear map x ↦ Ax (x ∈ Fⁿ), i.e., if there exists x ∈ Fⁿ \ {0} such that Ax = λx.
If this last happens, we shall also say that x is an eigenvector of A.
Example: Let V be a vector space. Then 1 (resp. 0) is the only eigenvalue of id_V (resp. O_V),
and every non-zero vector in V is an eigenvector for it.
More generally, if X ⊕ Y = V and P_X is the projection along Y onto X, then 0 and 1 are
the only possible eigenvalues of P_X. Indeed, if λ is an eigenvalue and v = xᵥ + yᵥ ∈ V \ {0}
is an eigenvector for λ, then xᵥ = P_X(xᵥ + yᵥ) = λ(xᵥ + yᵥ) ⟹ (1 - λ)xᵥ = λyᵥ ⟹
(1 - λ)xᵥ = 0 and λyᵥ = 0 ⟹ either xᵥ ≠ 0, λ = 1 and yᵥ = 0, or yᵥ ≠ 0, λ = 0 and
xᵥ = 0.
Example: Let T : R² → R², (x, y) ↦ (2x + y, x + 2y). One easily verifies that T(1, 1) =
3(1, 1). So 3 is an eigenvalue of T with eigenvector (1, 1).
Example: The map f : R² → R² defined by f(x, y) := (-y, x) has no eigenvalues, for
f(x, y) = λ(x, y) ⟹ (-y, x) = (λx, λy) ⟹ -y = λx and x = λy ⟹ x = -λ²x and
y = -λ²y ⟹ (1 + λ²)x = 0 and (1 + λ²)y = 0 ⟹ x = 0 and y = 0 (because λ ∈ R).
Definition 5.2 (Spectrum). Let V be a vector space over a field F and let T ∈ L(V).
The set Sp(T) := {λ ∈ F : T - λ·id is not an isomorphism} is called the spectrum of T.
When V is finite-dimensional one has that Sp(T) = {eigenvalues of T}.
A field F is said to be algebraically closed if every non-constant polynomial with
coefficients in F has a root in F.
Examples:
• The polynomial x² + 1 ∈ R[x] has no roots in R, so R is not algebraically closed.
• C is algebraically closed. (This deep result is known as the Fundamental Theorem of Algebra.)
Theorem 5.3. Let V be a finite-dimensional vector space over an algebraically closed field
F. Then every T ∈ L(V) has at least one eigenvalue.
Proof. We need to show that there exist λ ∈ F and v ≠ 0 such that Tv = λv. Now,
Tv = λv for some v ≠ 0 ⟺ (T - λ·id)v = 0 for some v ≠ 0 ⟺ ker(T - λ·id) ≠ {0}
⟺ T - λ·id is not one-to-one ⟺ T - λ·id is not a linear isomorphism ⟺ Φ_β(T - λ·id)
is not invertible (β any basis for V) ⟺ 0 = det(Φ_β(T - λ·id)) = det(Φ_β(T) - λI). So λ
is an eigenvalue of T if and only if it is a root of the polynomial det(Φ_β(T) - λI). As
F is algebraically closed, the last polynomial has at least one root in F, so T has at least
one eigenvalue.
Theorem 5.3 does not hold if F is not algebraically closed (see the third example of this
section) or if V is infinite-dimensional! For instance, let T : C^N → C^N, (x₁, x₂, ...) ↦
(0, x₁, x₂, ...). Then T((x₁, x₂, ...)) = λ(x₁, x₂, ...) ⟹ (0, x₁, x₂, ...) = (λx₁, λx₂, ...) ⟹
x₁ = x₂ = ··· = 0.
Note that the proof of Theorem 5.3 also shows that the eigenvalues of a linear operator
T acting on a finite-dimensional vector space are precisely the roots of the characteristic
polynomial of any one of its matrix representations.
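This gives a practical recipe: form det(Φ_β(T) - tI) and find its roots. A sympy sketch for the map (x, y) ↦ (2x + y, x + 2y) from the earlier example:

    from sympy import Matrix, eye, symbols, factor

    t = symbols('t')
    A = Matrix([[2, 1],
                [1, 2]])          # the map (x, y) -> (2x + y, x + 2y)

    p = (A - t * eye(2)).det()    # characteristic polynomial
    print(factor(p))              # (t - 1)*(t - 3): the eigenvalues are 1 and 3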
Example: Once again, let V = X ⊕ Y and let P_X ∈ L(V) be the projection along Y
onto X. Choose bases {x₁, ..., xₘ} and {y₁, ..., yₙ} for X and Y, respectively. Then
β = {x₁, ..., xₘ, y₁, ..., yₙ} is a basis for V and, by the definition of P_X, Φ_β(P_X) is a
diagonal matrix with 1 in the first m diagonal entries and 0 in the remaining ones. It
follows that det(Φ_β(P_X) - λI) = (-λ)ⁿ(1 - λ)ᵐ, so 0 and 1 are the only eigenvalues
of P_X.
Example: Let

    A = [  2  0  0 ]
        [  1  2  1 ]
        [ -1  0  1 ].

Find the eigenvalues of A, and find an invertible matrix P such that P⁻¹AP is diagonal.
By the previous remark, the eigenvalues of A are the roots of

    p_A(λ) = det(A - λI) = det [ 2-λ   0    0  ]
                               [  1   2-λ   1  ] = (2 - λ)²(1 - λ),
                               [ -1    0   1-λ ]

i.e., λ = 1 and λ = 2.
Next, we look for linearly independent vectors v₁, v₂, v₃ ∈ R³ so that
(i) Av₁ = v₁, (ii) Av₂ = 2v₂, (iii) Av₃ = 2v₃.
Remark 5.4. Note that if λ is an eigenvalue of A ∈ Mₙ(F) then the number of linearly
independent solutions one can find for the system of linear equations (A - λI)x = 0 cannot
be greater than the multiplicity of λ as a root of p_A. Indeed, let x₁, ..., xₖ ∈ Fⁿ be linearly
independent vectors so that (A - λI)xᵢ = 0 (1 ≤ i ≤ k), let β = {x₁, ..., xₖ, x_{k+1}, ..., xₙ}
be a basis for Fⁿ and let T : Fⁿ → Fⁿ, x ↦ Ax. Then, since A and Φ_β(T) must be similar
(see Corollary 4.4), we have that p_A(t) = det(A - tI) = det(Φ_β(T) - tI) = (λ - t)ᵏ q(t),
where q is a polynomial of degree n - k.
Now, finding v₁ amounts to finding a basis for {(x, y, z) ∈ R³ : x = 0 and y + z = 0} =
{(0, y, -y) : y ∈ R} = ⟨(0, 1, -1)⟩, while finding v₂ and v₃ amounts to finding a basis for
{(x, y, z) ∈ R³ : x + z = 0} = {(x, y, -x) : x, y ∈ R} = ⟨(0, 1, 0), (1, 0, -1)⟩.
We take v₁ = (0, 1, -1), v₂ = (0, 1, 0) and v₃ = (1, 0, -1), which are linearly independent. Set

    P = (v₁ v₂ v₃) = [  0  0  1 ]        [ 1 0 0 ]
                     [  1  1  0 ]  and D = [ 0 2 0 ].
                     [ -1  0 -1 ]        [ 0 0 2 ]

Then P is invertible and one can easily verify that AP = PD, or equivalently, that
P⁻¹AP = D.
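A quick numerical check of this computation (A, P and D exactly as above):

    import numpy as np

    A = np.array([[ 2.0, 0.0, 0.0],
                  [ 1.0, 2.0, 1.0],
                  [-1.0, 0.0, 1.0]])
    P = np.array([[ 0.0, 0.0,  1.0],
                  [ 1.0, 1.0,  0.0],
                  [-1.0, 0.0, -1.0]])
    D = np.diag([1.0, 2.0, 2.0])

    assert np.allclose(A @ P, P @ D)                 # AP = PD
    assert np.allclose(np.linalg.inv(P) @ A @ P, D)  # P^{-1} A P = D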
The fact that we were able to find v₁ linearly independent from v₂ and v₃ in the last
example was not just fortune. Indeed, the following holds.
Theorem 5.5. Let T ∈ L(V), let λ₁, λ₂, ..., λₘ be distinct eigenvalues of T and let
v₁, v₂, ..., vₘ be associated eigenvectors (one per each eigenvalue listed!). Then v₁, ..., vₘ
are linearly independent.
Proof. Clearly, {v₁} is linearly independent. Suppose it has been shown that v₁, ..., vₖ are
linearly independent for some k < m. Then v₁, ..., vₖ, v_{k+1} are linearly independent. To
see this, let α₁, ..., α_{k+1} be scalars such that
(1) α₁v₁ + ··· + αₖvₖ + α_{k+1}v_{k+1} = 0.
Applying T on both sides of this last identity we find that
(2) α₁λ₁v₁ + ··· + αₖλₖvₖ + α_{k+1}λ_{k+1}v_{k+1} = 0.
Multiplying (1) by λ_{k+1} and subtracting the result from (2) we obtain
α₁(λ₁ - λ_{k+1})v₁ + ··· + αₖ(λₖ - λ_{k+1})vₖ = 0.
By the induction hypothesis, αᵢ(λᵢ - λ_{k+1}) = 0 (1 ≤ i ≤ k). But the λᵢ's are all distinct,
so αᵢ = 0 (1 ≤ i ≤ k). Going back to (1), it follows that α_{k+1} = 0 too, and hence the
desired result.
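Numerically, the theorem says that collecting one eigenvector per distinct eigenvalue always produces a matrix of full column rank. A tiny sketch with an arbitrary example:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])        # distinct eigenvalues 1 and 3

    evals, evecs = np.linalg.eig(A)   # columns of evecs are eigenvectors
    assert np.linalg.matrix_rank(evecs) == evecs.shape[1]  # independent columns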
6. Invariant Subspaces
A subspace W of a vector space V is said to be an invariant subspace of T ∈ L(V)
if T(W) ⊆ W.
Examples: Let V be a vector space.
• Let T ∈ L(V), let λ be an eigenvalue of T, and let v be an eigenvector for λ. Then
⟨v⟩ is an invariant subspace of T.
• Let T ∈ L(V). Then any linear subspace W contained in ker T is an invariant
subspace of T, for T(W) ⊆ T(ker T) = {0} ⊆ W. Note that the previous example
can be seen as a particular case of this one, for if v satisfies Tv = λv then
(T - λ·id)v = 0, and so, ⟨v⟩ is a linear subspace of ker(T - λ·id).
• Let T ∈ L(V). Then im T is an invariant subspace of T because T(im T) ⊆ T(V) =
im T.
• Let V = X ⊕ Y and let P_X be the projection along Y onto X. Then both X and
Y are invariant subspaces of P_X.
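Invariance T(W) ⊆ W can also be tested numerically: appending the images of a spanning set of W must not enlarge the span. A sketch (matrix and subspace are arbitrary choices):

    import numpy as np

    def is_invariant(T, W):
        # W is T-invariant iff adding the columns of T @ W does not raise the rank.
        return np.linalg.matrix_rank(np.hstack([W, T @ W])) == np.linalg.matrix_rank(W)

    T = np.array([[2.0, 1.0],
                  [0.0, 2.0]])
    W = np.array([[1.0],
                  [0.0]])              # W = <e1>, an eigenvector line of T

    print(is_invariant(T, W))          # True: T(e1) = 2*e1 lies in W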
The importance of invariant subspaces in the study of linear maps stems from the
following fact.
Proposition 6.1. Let V be a finite-dimensional vector space, let T ∈ L(V), and let X be
an invariant subspace of T. Furthermore, let Y be a linear complement for X, that is, a
linear subspace of V such that V = X ⊕ Y, and let β be a basis for V formed by a basis of
X and a basis of Y, taken in this order. Then Φ_β(T) has the block form

    [ ∗  ∗ ]
    [ 0  ∗ ]

where the diagonal blocks are square of sizes dim X and dim Y, respectively. If Y is also an
invariant subspace of T then Φ_β(T) becomes

    [ ∗  0 ]
    [ 0  ∗ ].
Proof. Let β = {x₁, ..., xₘ, y₁, ..., yₙ}, where {x₁, ..., xₘ} is a basis for X and {y₁, ..., yₙ}
is a basis for Y. Since X is invariant under T, Txᵢ ∈ X and so Txᵢ = α_{1,i}x₁ + ··· +
α_{m,i}xₘ (1 ≤ i ≤ m). That Φ_β(T) has the desired form follows readily from this and the
definition of Φ_β(T).
The proof of the second part is similar and we leave it to you.
Remark 6.2. The second part of the last proposition can be easily extended to direct sums
of any finite number of invariant subspaces. Precisely, let V be finite-dimensional, let
T ∈ L(V), and let V₁, ..., Vₖ be invariant subspaces of T such that V = V₁ ⊕ ··· ⊕ Vₖ. For
each 1 ≤ i ≤ k let βᵢ be a basis for Vᵢ, and let β be the basis for V formed by gluing together
the bases β₁, ..., βₖ, in this order (see the Chapter Vector spaces, Proposition 4.1). Then

    Φ_β(T) = [ B₁            ]
             [     B₂        ]
             [        ⋱      ]
             [           Bₖ ],

where each Bᵢ in the diagonal stands for a square matrix of size dim Vᵢ and all the entries
outside the Bᵢ's are zero.
7. The Jordan form
Not every linear map on a finite-dimensional vector space can be represented by a
diagonal matrix.
Example: Let T : C² → C² be given by the matrix

    [ 2 1 ]
    [ 0 2 ],

i.e., (x, y) ↦ (2x + y, 2y). Then Sp(T) = {2}. Moreover,
(2x + y, 2y) = 2(x, y) ⟺ 2x + y = 2x and 2y = 2y ⟺ y = 0.
Thus, the solution set of the last system of linear equations is the subspace {(x, 0) : x ∈ C} =
⟨(1, 0)⟩ of C². This subspace has dimension 1, so there is no basis of eigenvectors for T,
and in turn, T cannot have a diagonal representation.
What can we do if T is not diagonalizable? The answer to this question is given by
Theorem 7.2 below.
First, we introduce the following.
Definition 7.1 (Jordan cell). A Jordan cell (or Jordan block) of dimension n with
entries in a field F is a matrix in Mₙ(F) of the form

    [ λ  1          ]
    [    λ  1       ]
    [       ⋱  ⋱    ]
    [          λ  1 ]
    [             λ ],

with a fixed λ ∈ F on the main diagonal, 1 immediately above it, and 0 everywhere else.
Theorem 7.2 (Jordan Form Theorem). Let V be a finite-dimensional vector space over
an algebraically closed field F, and let T ∈ L(V). Then there exists a basis β of V so that

    Φ_β(T) = [ J₁            ]
             [     J₂        ]
             [        ⋱      ]
             [           J_N ],

where each Jᵢ is a Jordan cell (and all entries outside the cells are zero). This last
representation is unique except for the order of the cells and is usually referred to as the
Jordan Form of T.
We omit the proof.
Remark 7.3. Given a matrix A ∈ Mₙ(F), we shall refer to the Jordan form of the linear
map x ↦ Ax (x ∈ Fⁿ) as the Jordan form of the matrix A. Moreover, a matrix like
the one figuring in Theorem 7.2 will be said to be in Jordan form.
Example: Let

    A = [ 11/4  -1/4  -1/4  -1/4   1 ]
        [  1/4   9/4   1/4  -3/4  -1 ]
        [ -1/4   3/4   7/4  -1/4  -1 ]
        [  1/4   1/4  -3/4   9/4   1 ]
        [   0     0     0     0    3 ],

and let T : C⁵ → C⁵, x ↦ Ax. Then, for the basis β = {(1, 1, 1, 1, 0), (1, 1, -1, -1, 0),
(1, -1, 1, -1, 0), (1, -1, -1, 1, 0), (0, 0, 0, 0, 1)}, we have that

    Φ_β(T) = [ 2 1 0 0 0 ]
             [ 0 2 1 0 0 ]
             [ 0 0 2 0 0 ]   (the Jordan Form).
             [ 0 0 0 3 1 ]
             [ 0 0 0 0 3 ]

In this case the Jordan form of A has two Jordan cells, one of size 3, corresponding to the
eigenvalue 2, and one of size 2, corresponding to the eigenvalue 3. Moreover, A = Q J Q⁻¹,
where J = Φ_β(T) is as above and

    Q = [ 1  1  1  1  0 ]        Q⁻¹ = [ 1/4  1/4  1/4  1/4  0 ]
        [ 1  1 -1 -1  0 ]               [ 1/4  1/4 -1/4 -1/4  0 ]
        [ 1 -1  1 -1  0 ]               [ 1/4 -1/4  1/4 -1/4  0 ]
        [ 1 -1 -1  1  0 ]               [ 1/4 -1/4 -1/4  1/4  0 ]
        [ 0  0  0  0  1 ]               [  0    0    0    0   1 ].

Note that Q = Φ_{e,β}(id), where e denotes the standard basis of C⁵ (see the proof of
Corollary 4.4).
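Examples like this one can be reproduced mechanically: sympy computes Jordan forms of exact (rational) matrices. A sketch using the matrix A above:

    from sympy import Matrix, Rational

    q = Rational(1, 4)
    A = Matrix([[11*q,   -q,   -q,   -q,  1],
                [    q,  9*q,    q, -3*q, -1],
                [   -q,  3*q,  7*q,   -q, -1],
                [    q,    q, -3*q,  9*q,  1],
                [    0,    0,    0,    0,  3]])

    P, J = A.jordan_form()        # sympy's convention: A = P * J * P**(-1)
    print(J)                      # a 3x3 cell for eigenvalue 2, a 2x2 cell for 3
    assert A == P * J * P.inv()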
Example: Determine, up to permutation of cells, all possible Jordan Forms of a matrix A
whose characteristic polynomial is p_A(t) = (t - 2)³(t - 3)².
From the form of p_A we know that Sp(A) = {2, 3}. Moreover, by Corollary 4.4, if J
is the Jordan Form of A then J = Q⁻¹AQ for some invertible matrix Q. It follows that
p_J(t) = det(J - tI) = det(Q⁻¹AQ - tI) = det(Q⁻¹) det(A - tI) det(Q) = p_A(t). Since
J - tI is a triangular matrix and the determinant of a triangular matrix is precisely the
product of its diagonal entries, 2 must figure 3 times in the diagonal of J while 3 must
figure twice. So, the possible Jordan Forms for A are:

    [ 2 0 0 0 0 ]   [ 2 1 0 0 0 ]   [ 2 1 0 0 0 ]
    [ 0 2 0 0 0 ]   [ 0 2 0 0 0 ]   [ 0 2 1 0 0 ]
    [ 0 0 2 0 0 ]   [ 0 0 2 0 0 ]   [ 0 0 2 0 0 ]
    [ 0 0 0 3 0 ]   [ 0 0 0 3 0 ]   [ 0 0 0 3 0 ]
    [ 0 0 0 0 3 ]   [ 0 0 0 0 3 ]   [ 0 0 0 0 3 ]

    [ 2 0 0 0 0 ]   [ 2 1 0 0 0 ]   [ 2 1 0 0 0 ]
    [ 0 2 0 0 0 ]   [ 0 2 0 0 0 ]   [ 0 2 1 0 0 ]
    [ 0 0 2 0 0 ]   [ 0 0 2 0 0 ]   [ 0 0 2 0 0 ]
    [ 0 0 0 3 1 ]   [ 0 0 0 3 1 ]   [ 0 0 0 3 1 ]
    [ 0 0 0 0 3 ]   [ 0 0 0 0 3 ]   [ 0 0 0 0 3 ]
Definition 7.4. A linear map T (resp. a matrix A) is said to be nilpotent if there is
k ∈ N such that Tᵏ = 0 (resp. Aᵏ = 0). The smallest k with this property is called the
nilpotency index of T (resp. A).
Example: Let V be the vector space of all polynomials of degree ≤ n with coefficients in
a field F. Then D : V → V, p ↦ p′, where p′ stands for the derivative of p, is a nilpotent
linear map with nilpotency index n + 1.
Example: Any matrix in M₄(F) of the form

    [ 0 a b c ]
    [ 0 0 d e ]
    [ 0 0 0 f ]
    [ 0 0 0 0 ]

is nilpotent: its fourth power is always 0 (and its index is exactly 4 when a, d and f are
all non-zero).
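A quick check of the index for one instance (taking a = b = ··· = f = 1):

    import numpy as np

    N = np.array([[0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])

    powers = [np.linalg.matrix_power(N, k) for k in range(1, 5)]
    print([int(P.any()) for P in powers])   # [1, 1, 1, 0]: N^3 != 0 but N^4 = 0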
Remark 7.5. It follows easily from Theorems 4.1 and 4.2 that if V is finite-dimensional
and T ∈ L(V) then T is nilpotent (resp. idempotent) ⟺ Φ_β(T) is nilpotent (resp.
idempotent) for some basis β ⟺ Φ_β(T) is nilpotent (resp. idempotent) for every basis β.
(A square matrix A is called idempotent if A² = A. For the definition of an idempotent
map, see Exercise 1 from Tutorial 7.)
An immediate consequence of Theorem 7.2 is the following.
Corollary 7.6. Let V be a finite-dimensional vector space. Then every T ∈ L(V) can be
written as the sum of a diagonalizable map and a nilpotent one.
Proof. Let T ∈ L(V) be arbitrary. By Theorem 7.2, there is a basis β for V such that
Φ_β(T) = J is in Jordan form. Write J as D + N, where D is a diagonal matrix and N is
a nilpotent one. Then T = Φ_β⁻¹(D) + Φ_β⁻¹(N). The map Φ_β⁻¹(D) is clearly diagonalizable
while Φ_β⁻¹(N) is nilpotent.
Example: Computing e^A (Sketch). Let A ∈ Mₙ(C), let J be the Jordan Form of A and
let Q be an invertible matrix such that A = QJQ⁻¹. Then
e^A = e^{QJQ⁻¹} = Σᵢ₌₀^∞ (1/i!)(QJQ⁻¹)ⁱ = Σᵢ₌₀^∞ (1/i!) QJⁱQ⁻¹ = Q(Σᵢ₌₀^∞ (1/i!)Jⁱ)Q⁻¹ = Q e^J Q⁻¹.
Suppose

    J = [ J₁           ]
        [    J₂        ]
        [       ⋱      ]
        [          Jₖ ],

where each Jₗ (1 ≤ l ≤ k) is a Jordan cell. Then, since powers of a block-diagonal matrix
are computed block by block,

    e^J = Σᵢ₌₀^∞ (1/i!) Jⁱ = [ e^{J₁}              ]
                             [       e^{J₂}        ]
                             [             ⋱       ]
                             [               e^{Jₖ} ].
Each Jₗ can be written as λₗIₗ + Nₗ, where Iₗ is the identity matrix of size nₗ = size of Jₗ and
Nₗ is the nilpotent matrix with all the entries immediately above the main diagonal equal
to 1 and zero everywhere else. It can be shown that e^{A+B} = e^A e^B whenever AB = BA.
Thus,
e^{Jₗ} = e^{λₗIₗ + Nₗ} = e^{λₗIₗ} e^{Nₗ} = e^{λₗ} (Σᵢ₌₀^{nₗ-1} (1/i!) Nₗⁱ) = e^{λₗ} Zₗ,
where

    Zₗ = [ 1  1  1/2!  1/3!  ···  1/(nₗ-1)! ]
         [    1   1    1/2!  ···  1/(nₗ-2)! ]
         [        1     1    ···  1/(nₗ-3)! ]
         [              ⋱     ⋱       ⋮     ]
         [                    1       1     ]
         [                            1     ]

(zeros below the diagonal).
e
A
= Q
_
_
_
_
_
e

1
Z
1
e

2
Z
2
.
.
.
e

k
Z
k
_
_
_
_
_
Q
1
.
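A minimal numerical check of the cell formula e^{Jₗ} = e^{λₗ}Zₗ against a general-purpose matrix exponential (the cell size and eigenvalue below are arbitrary choices):

    import numpy as np
    from scipy.linalg import expm

    lam, n = 2.0, 2
    N = np.diag(np.ones(n - 1), 1)     # ones immediately above the diagonal
    J = lam * np.eye(n) + N            # a single Jordan cell

    Z = np.eye(n) + N                  # I + N/1!; higher powers of N vanish here
    assert np.allclose(np.exp(lam) * Z, expm(J))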