Chapter 1
Vector Spaces
Throughout these notes F will denote a field. The reader unfamiliar with fields
should think of F as the rational numbers Q or the real numbers R or the complex
numbers C. Elements of F are called scalars.
6. (distributive law for addition in V ) For all x, y ∈ V and α ∈ F,
α(x + y) = αx + αy.
7. (distributive law for addition in F) For all x ∈ V and α, β ∈ F,
(α + β)x = αx + βx.
8. (unit law) For all x ∈ V ,
1x = x.
2. Let A be an m × n matrix with entries in F. The set
{x ∈ Fn | Ax = 0}
is a subspace of Fn , called the null space of A and denoted N (A). The set
{x ∈ Fm | x = Ay for some y ∈ Fn }
is a subspace of Fm , called the column space of A and denoted C(A).
The row space of A, denoted R(A), is defined to be the subspace
{xt | x ∈ C(At )}
of Fn (now treated as row vectors).
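For instance, let A be the 2 × 2 real matrix with rows (1, 1) and (2, 2). Then N (A) consists of all scalar multiples of (1, −1)t , C(A) of all scalar multiples of (1, 2)t , and R(A) of all scalar multiples of the row vector (1, 1).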
3. Let V = Mm×n (F) denote the set of all m × n matrices with entries in F.
Then V is a vector space over F under usual matrix addition and multiplication of a matrix by a scalar. We set Mn (F) = Mn×n (F).
Recall that, for a square matrix A, tr(A) denotes the sum of the diagonal
entries of A. The set W = {A ∈ Mn (F) | tr(A) = 0} is a subspace of
Mn (F).
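Indeed, tr(A + B) = tr(A) + tr(B) and tr(αA) = α tr(A), so W is closed under addition and scalar multiplication and contains the zero matrix. For instance, for n = 2, W consists of all matrices with rows (a, b) and (c, −a), where a, b, c ∈ F.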
4. Let S be a set. The set of all functions V = {f : S → F} is a vector space
over F under pointwise addition and scalar multiplication, i.e., for f, g ∈ V
(f + g)(s) = f (s) + g(s), s ∈ S,
(αf )(s) = αf (s), α ∈ F, s ∈ S.
5. Let a < b be real numbers and set S = [a, b]. Let V be the real vector space
of all real valued functions on S as in the example above. The set C[a, b] of
continuous real valued functions on S is a subspace of V . Another subspace
of V is the set of all functions in V differentiable at a fixed s ∈ S.
6. A function f : F → F is a polynomial with coefficients in F if there exist
scalars a0 , . . . , ak ∈ F (called the coefficients of f ) such that
f (s) = a0 + a1 s + a2 s2 + · · · + ak sk , s ∈ F.
If the function f is the zero function then all the coefficients ai are zero (see
the exercises). This result implies that the coefficients of a polynomial are
uniquely determined (why?).
If a polynomial f can be written in the form above with ak ≠ 0 then we say that f has degree k. We define the degree of the zero polynomial to be −∞ (a convention under which deg(f g) = deg f + deg g holds for all polynomials f and g).
The set
P (F) = {a0 + a1 s + a2 s2 + · · · | a0 , a1 , a2 , . . . ∈ F, with only finitely many ai nonzero}
of all polynomials is a vector space over F, under usual addition of polynomials and multiplication of polynomials with scalars, and the set
Pn (F) = {a0 + a1 s + · · · + an sn | a0 , a1 , . . . , an ∈ F}
of all polynomials with degree at most n is a subspace of P (F).
7. The set of all solutions to the differential equation y′′ + ay′ + by = 0, where a, b ∈ R, forms a real vector space. More generally, in this example we can take a = a(x), b = b(x) to be suitable functions of x.
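For instance, the solution space of y′′ + y = 0 consists of all functions α cos x + β sin x with α, β ∈ R.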
Definition 1.2.1. Let S be a subset of V . The span of S, denoted L(S), is the set of all linear combinations α1 v1 + · · · + αk vk , where v1 , . . . , vk ∈ S and α1 , . . . , αk ∈ F. The empty sum of vectors is the zero vector; thus L(∅) = {0}. We say that L(S) is spanned by S.
Lemma 1.2.2. L(S) is the smallest subspace of V containing S.
Proof. Note that L(S) is a subspace (why?). Now, if S ⊂ W ⊂ V and W is a subspace of V then L(S) ⊂ W (why?). The result follows. □
Example 1.2.3. 1. Different sets may span the same subspace. For example, let ei , i = 1, . . . , n denote the standard vectors in Fn (i.e., ei is the column vector with 1 in the ith coordinate and 0's elsewhere). Then (check this, using ei = (e1 + · · · + ei ) − (e1 + · · · + ei−1 ))
L({e1 , e2 , . . . , en }) = L({e1 , e1 + e2 , . . . , e1 + e2 + · · · + en }) = Fn .
Definition 1.2.4. Vectors v1 , v2 , . . . , vn ∈ V are linearly dependent if there exist scalars α1 , α2 , . . . , αn ∈ F, not all zero, such that
α1 v1 + α2 v2 + · · · + αn vn = 0.
Otherwise v1 , . . . , vn are linearly independent. An infinite set is linearly independent if each of its finite subsets is linearly independent.
Remark 1.2.5. (i) Any subset of V containing a linearly dependent set is linearly
dependent.
(ii) Any subset of a linearly independent set in V is linearly independent.
Example 1.2.6. (i) If a set S contains the zero vector 0 then S is dependent since 1 · 0 = 0.
(ii) Consider the vector space Fn and let S = {e1 , e2 , . . . , en }. Then S is
linearly independent. Indeed, if α1 e1 + α2 e2 + . . . + αn en = 0 for some scalars
α1 , α2 , . . . , αn then (α1 , α2 , . . . , αn )t = 0. Thus each αi = 0 and S is linearly
independent.
Check that the set S = {e1 , e1 + e2 , . . . , e1 + e2 + · · · + en } is linearly independent.
(iii) Let V be the vector space of all continuous functions from R to R. Let
S = {1, cos2 t, sin2 t}. Then the relation cos2 t + sin2 t − 1 = 0 shows that S is
linearly dependent.
(iv) Let α1 < α2 < · · · < αn be real numbers. Let V = {f : R → R | f is continuous}. Consider the set S = {eα1 x , eα2 x , . . . , eαn x }. We show that S is linearly independent by induction on n. Let n = 1 and βeα1 x = 0. Since eα1 x ≠ 0 for any x, we get β = 0. Now assume that the assertion is true for n − 1 and
β1 eα1 x + · · · + βn eαn x = 0.
Multiplying by e−αn x gives
β1 e(α1 −αn )x + · · · + βn−1 e(αn−1 −αn )x + βn = 0.
Since αi − αn < 0 for i < n, letting x → ∞ gives βn = 0. Now apply the induction hypothesis to get β1 = · · · = βn−1 = 0.
(v) In the vector space P (R) of all polynomials p(t) with real coefficients the set S = {1, t, t2 , . . .} is linearly independent. Suppose that 0 ≤ n1 < n2 < · · · < nr and
α1 tn1 + α2 tn2 + · · · + αr tnr = 0
for certain real numbers α1 , α2 , . . . , αr . Differentiate n1 times and set t = 0: every term except the first vanishes, leaving α1 n1 ! = 0, so α1 = 0. Continuing this way we see that all α1 , α2 , . . . , αr are zero.
Lemma 1.2.7. (Exchange lemma) Let S = {v1 , v2 , . . . , vk } be a subset of a
vector space V. Then any k + 1 elements in L(S) are linearly dependent.
Proof. Let w1 , w2 , . . . , wk+1 ∈ L(S) and suppose, to get a contradiction, that they are linearly independent. We construct a sequence of subsets S = S0 , S1 , . . . , Sn of V , with n = k, such that
(i) each Si spans L(S), i = 0, 1, . . . , n.
(ii) |Si | = k, i = 0, 1, . . . , n.
(iii) {w1 , . . . , wi } ⊆ Si , i = 0, 1, . . . , n.
We shall produce this sequence of sets inductively, the base case i = 0 being
clear. Now suppose we have sets S0 , . . . , Sj satisfying (i), (ii), (iii) above, for
some j < n.
Since Sj spans L(S) we can write
wj+1 = Σs∈Sj cs s.
If cs = 0 for every s ∈ Sj \ {w1 , . . . , wj } then wj+1 would be a linear combination of w1 , . . . , wj , contradicting linear independence. So choose t ∈ Sj \ {w1 , . . . , wj } with ct ≠ 0 and set Sj+1 = (Sj − {t}) ∪ {wj+1 }. Solving the displayed equation for t shows that t ∈ L(Sj+1 ), so Sj+1 spans L(S); clearly |Sj+1 | = k and {w1 , . . . , wj+1 } ⊆ Sj+1 . Hence Sj+1 satisfies conditions (i), (ii), and (iii) above for i = j + 1.
Finally, Sn has k elements and contains {w1 , . . . , wk }, so Sn = {w1 , . . . , wk }. Since Sn spans L(S), wk+1 is a linear combination of w1 , . . . , wk , contradicting linear independence. That completes the proof. □
Example 1.2.8. 1. Let A be an m × n matrix with n > m. Then there exists a nonzero column vector x such that Ax = 0. That is, homogeneous linear equations with more variables than equations always have a nontrivial solution.
Indeed, let a1 , a2 , . . . , an ∈ Fm be the columns of A. Since Fm is spanned
by {e1 , . . . , em } and n > m we have from the Theorem above that a1 , . . . , an
are linearly dependent. So there are scalars x1 , . . . , xn , not all zero, such that
x1 a1 + x2 a2 + · · · + xn an = 0. Thus Ax = 0 where x = (x1 , . . . , xn )t .
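For instance, the 2 × 3 matrix A with rows (1, 2, 3) and (4, 5, 6) satisfies Ax = 0 for x = (1, −2, 1)t : indeed 1 − 4 + 3 = 0 and 4 − 10 + 6 = 0.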
2. Let A, B be matrices of order 9 × 7 and 4 × 3 respectively. We claim that there is a nonzero 7 × 4 matrix X such that AXB = 0 (the zero matrix). Think of the entries of X as variables, so there are 28 variables. Since AXB is a 9 × 3 matrix, the equation AXB = 0 is equivalent to a system of 27 homogeneous linear equations in these 28 variables. Thus there is a nonzero solution.
Definition 1.2.9. A subset S of a vector space V is called a basis of V if the elements of S are linearly independent and V = L(S). A vector space V possessing a finite basis is called finite dimensional. Otherwise V is called infinite dimensional.
Theorem 1.2.10. Any two bases of a finite dimensional vector space have the same number of elements.
Proof. Suppose S and T are bases of a finite dimensional vector space V and that |S| < |T |. Since T ⊂ L(S) = V and T has more than |S| elements, the Exchange Lemma implies that T is linearly dependent. This is a contradiction. □
Definition 1.2.11. The number of elements in a basis of a finite-dimensional vector space V is called the dimension of V. It is denoted by dim V.
Example 1.2.12. (1) The n coordinate vectors e1 , e2 , . . . , en in Fn form a basis of
Fn .
(2) Let A be an n × n matrix. Then the columns of A form a basis of Fn iff A is invertible (why?).
(3) Pn (F) = {a0 + a1 t + · · · + an tn | a0 , a1 , . . . , an ∈ F} is spanned by S = {1, t, t2 , . . . , tn }. Since S is independent, dim Pn (F) = n + 1.
(4) Consider the vector space Mm×n (F). Let Eij denote the m × n matrix with 1 in the (i, j) position and 0 elsewhere. If A = (aij ) ∈ Mm×n (F) then A = Σi,j aij Eij , summing over 1 ≤ i ≤ m and 1 ≤ j ≤ n. It is easy to see that the mn matrices Eij are linearly independent. Hence Mm×n (F) is an mn-dimensional vector space.
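For instance, in M2 (F) the matrix with rows (a, b) and (c, d) equals aE11 + bE12 + cE21 + dE22 , so {E11 , E12 , E21 , E22 } is a basis and dim M2 (F) = 4.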
Lemma 1.2.13. Suppose V is a finite dimensional vector space. Let S be a linearly independent subset of V . Then S can be enlarged to a basis of V .
Proof. Suppose that dim V = n. If L(S) ≠ V , let v ∈ V \ L(S). Then S ∪ {v} is a linearly independent subset of V (why?). By the Exchange Lemma a linearly independent subset of V has at most n elements, so continuing this way we enlarge S to a basis of V in finitely many steps. □
Theorem 1.2.14. Let A be an m × n matrix. Then dim R(A) = dim C(A), and this number is called the rank of A, denoted rank(A).
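For example, the 2 × 2 matrix A with rows (1, 1) and (2, 2) considered earlier has C(A) spanned by (1, 2)t and R(A) spanned by (1, 1), so dim R(A) = dim C(A) = rank(A) = 1.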
Let V1 , V2 , . . . , Vk be subspaces of a vector space V and set V1 + · · · + Vk = {v1 + · · · + vk | vi ∈ Vi for all i}. It is easy to check that V1 + · · · + Vk is a subspace of V , called the sum of V1 , V2 , . . . , Vk .
If each vector v ∈ V1 + · · · + Vk can be written uniquely as v = v1 + · · · + vk , vi ∈ Vi for all i, then we say that the sum is direct and write
V = V1 ⊕ V2 ⊕ · · · ⊕ Vk .
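For two subspaces the sum V1 + V2 is direct iff V1 ∩ V2 = {0} (why?). For instance, F2 = L({e1 }) ⊕ L({e2 }), since every (x1 , x2 )t is uniquely x1 e1 + x2 e2 .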
Example 1.2.15. (i) Let V1 be the subspace of Fn consisting of all vectors (x1 , . . . , xn )t
with x1 +· · ·+xn = 0 and let V2 be the subspace spanned by the vector (1, 1, . . . , 1)t .
Then Fn = V1 ⊕ V2 .
Clearly V1 ∩ V2 = {0} (assuming n ≠ 0 in F, as for F = Q, R, C), so it is enough to show that Fn = V1 + V2 . This follows from (why?)
x = (x − (α/n)(1, 1, . . . , 1)t ) + (α/n)(1, 1, . . . , 1)t ,
where α = x1 + · · · + xn .
(ii) Let V1 be the subspace of Mn (F) consisting of all matrices of trace zero
and let V2 be the subspace spanned by the identity matrix. Then (why?)
Mn (F) = V1 ⊕ V2 .
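Indeed, every A ∈ Mn (F) decomposes as A = (A − (tr(A)/n)I) + (tr(A)/n)I, where the first summand has trace zero and the second lies in V2 ; and a scalar matrix αI with trace nα = 0 must be 0 (again assuming n ≠ 0 in F).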