
Chapter 4
Vector Spaces Associated with Matrices

Chapter 4 Vector Spaces Associated with Matrices

Section 4.1
Row Spaces and Column Spaces
Row spaces and column spaces (Definition 4.1.1)

Let A = (aij) be an m × n matrix,

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.

Write A = \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{pmatrix}, where r_i = \begin{pmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{pmatrix} is the i-th row of A.

The row space of A is the subspace of ℝⁿ spanned by the rows of A,
i.e. the row space of A = span{ r1, r2, ..., rm } ⊆ ℝⁿ.
Row spaces and column spaces (Definition 4.1.1 & Remark 4.1.2)

On the other hand, write A = \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix}, where c_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix} is the j-th column of A.

The column space of A is the subspace of ℝᵐ spanned by the columns of A,
i.e. the column space of A = span{ c1, c2, ..., cn } ⊆ ℝᵐ.

Note that the column space of A = the row space of Aᵀ
and the row space of A = the column space of Aᵀ.
(Here, we identify row vectors with column vectors.)
Row and column vectors (Notation 4.1.5)

Recall that a vector in ℝⁿ can be identified with a matrix.

When a vector in ℝⁿ is written as (u1, u2, ..., un), it is a row vector and is identified with the 1 × n matrix \begin{pmatrix} u_1 & u_2 & \cdots & u_n \end{pmatrix};

and if it is written as (u1, u2, ..., un)ᵀ, it is a column vector and is identified with the n × 1 matrix \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix}.
An example (Example 4.1.4.1)

Let A = \begin{pmatrix} 2 & -1 & 0 \\ 1 & -1 & 3 \\ -5 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}. Write r1 = (2, −1, 0), r2 = (1, −1, 3), r3 = (−5, 1, 0), r4 = (1, 0, 1).

The row space of A
= span{ r1, r2, r3, r4 }
= { a(2, −1, 0) + b(1, −1, 3) + c(−5, 1, 0) + d(1, 0, 1) | a, b, c, d ∈ ℝ }
= { (2a + b − 5c + d, −a − b + c, 3b + d) | a, b, c, d ∈ ℝ }.
An example (Example 4.1.4.1)

Write c1 = (2, 1, −5, 1)ᵀ, c2 = (−1, −1, 1, 0)ᵀ, c3 = (0, 3, 0, 1)ᵀ.

The column space of A
= span{ c1, c2, c3 }
= { a(2, 1, −5, 1)ᵀ + b(−1, −1, 1, 0)ᵀ + c(0, 3, 0, 1)ᵀ | a, b, c ∈ ℝ }
= { (2a − b, a − b + 3c, −5a + b, a + c)ᵀ | a, b, c ∈ ℝ }.
Row equivalent matrices (Discussion 4.1.6.1)

Recall that a matrix B is row equivalent to a matrix A if
B can be obtained from A through a series of elementary
row operations.
Row equivalence is an equivalence relation on matrices:
(a) Any matrix is row equivalent to itself.
(b) If a matrix B is row equivalent to a matrix A,
then A is also row equivalent to B.
(c) If a matrix C is row equivalent to a matrix B
and B is row equivalent to another matrix A,
then C is also row equivalent to A.
Row equivalent matrices (Discussion 4.1.6.2)

A matrix is row equivalent to its row-echelon form.


In particular, if two matrices have a common row-echelon
form, then they are row equivalent.
Since every matrix has a unique reduced row-echelon
form, two matrices are row equivalent if and only if they
have the same reduced row-echelon form.
Row spaces (Theorem 4.1.7)

If a matrix A is row equivalent to a matrix B, then
the row space of A = the row space of B.

Proof:
Write A = \begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{pmatrix}, where r_i = \begin{pmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{pmatrix} is the i-th row of A.

We need to show that each of the three types of elementary
row operations preserves the row space of A.
Row spaces (Theorem 4.1.7)

Here, we only prove that the first type of elementary row
operation (multiplying the i-th row by a nonzero constant k)
preserves the row space of A; the other two types are similar.

A = \begin{pmatrix} r_1 \\ \vdots \\ r_i \\ \vdots \\ r_m \end{pmatrix} \xrightarrow{\,kR_i\,} B = \begin{pmatrix} r_1 \\ \vdots \\ r_{i-1} \\ kr_i \\ r_{i+1} \\ \vdots \\ r_m \end{pmatrix}

Then the row space of A = span{ r1, r2, ..., rm } and
the row space of B = span{ r1, ..., ri −1, kri, ri +1, ..., rm }.
Row spaces (Theorem 4.1.7)

Theorem 3.2.10: span(u1, u2, ..., uk) ⊆ span(v1, v2, ..., vm) if
and only if each ui is a linear combination of v1, v2, ..., vm .

Since kri ∈ span{ r1, r2, ..., rm }, (by Theorem 3.2.10)


span{ r1, ..., ri −1, kri, ri +1, ..., rm } ⊆ span{ r1, r2, ..., rm }.
Since ri = (1/k)(kri) ∈ span{ r1, ..., ri −1, kri, ri +1, ..., rm } (note k ≠ 0),
(by Theorem 3.2.10)
span{ r1, r2, ..., rm } ⊆ span{ r1, ..., ri −1, kri, ri +1, ..., rm }.
So we have shown that
span{ r1, r2, ..., rm } = span{ r1, ..., ri −1, kri, ri +1, ..., rm }
i.e. the row space of A = the row space of B.
An example (Example 4.1.8)

A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 4 \\ 1/2 & 1 & 2 \end{pmatrix} \xrightarrow{R_1 \leftrightarrow R_3} \xrightarrow{2R_1} \xrightarrow{R_1 - R_2} D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 4 \\ 0 & 0 & 1 \end{pmatrix}

Since A and D are row equivalent,
the row space of A = the row space of D,
i.e. span{ (0, 0, 1), (0, 2, 4), (1/2, 1, 2) }
= span{ (1, 0, 0), (0, 2, 4), (0, 0, 1) }.
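As a quick computational cross-check of this example, the following sketch uses Python with the sympy library (an assumption on my part; the notes themselves do not use any software). Row-equivalent matrices share the same reduced row-echelon form, so comparing RREFs confirms that A and D above are row equivalent and hence have the same row space.

```python
from sympy import Matrix, Rational

# Matrices A and D from Example 4.1.8
A = Matrix([[0, 0, 1], [0, 2, 4], [Rational(1, 2), 1, 2]])
D = Matrix([[1, 0, 0], [0, 2, 4], [0, 0, 1]])

# Row-equivalent matrices have the same reduced row-echelon form
# (here both reduce to the 3x3 identity), hence the same row space.
print(A.rref()[0] == D.rref()[0])   # True
```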
Row-echelon forms (Remark 4.1.9 & Example 4.1.8.2)

Let A be a matrix and R a row-echelon form of A.
Since the nonzero rows of R are linearly independent
(see Question 3.26), the nonzero rows of R form a basis for
the row space of R and hence form a basis for the row
space of A.

For example,
A = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & -2 & 0 & -1 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ 0 & 0 & 3/2 & -3 & 3/2 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

So { (2, 2, −1, 0, 1), (0, 0, 3/2, −3, 3/2), (0, 0, 0, 3, 0) }
is a basis for the row space of A.
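The same basis can be read off computationally. A minimal sketch, again assuming Python with sympy (not part of the notes): sympy's rowspace() returns the nonzero rows of an echelon form of A, which by Remark 4.1.9 form a basis for the row space; the particular rows returned may differ from the ones above by the scaling used during elimination.

```python
from sympy import Matrix

A = Matrix([[2, 2, -1, 0, 1],
            [-1, -1, 2, -3, 1],
            [0, 0, 1, 1, 1],
            [1, 1, -2, 0, -1]])

# Nonzero rows of an echelon form of A: a basis for the row space of A.
for r in A.rowspace():
    print(r)
```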
Column spaces (Discussion 4.1.10)

Warning: Even if a matrix A is row equivalent to a matrix B, in general
the column space of A ≠ the column space of B.

For example,
A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \xrightarrow{R_1 \leftrightarrow R_2} B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}

A and B are row equivalent
but the column space of A = { (0, y)ᵀ | y ∈ ℝ }
while the column space of B = { (x, 0)ᵀ | x ∈ ℝ }.
Column vectors (Theorem 4.1.11)

Let A = \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix} and B = \begin{pmatrix} b_1 & \cdots & b_n \end{pmatrix} be row equivalent matrices, where a_j and b_j denote the j-th columns of A and B respectively.

For any column indices i1 < i2 < ··· < ik, let
S = { ai1, ai2, ..., aik } and T = { bi1, bi2, ..., bik }.

Then:
S is linearly independent ⇔ T is linearly independent.
S is a basis for the column space of A ⇔ T is a basis for the column space of B.
An example (Example 4.1.12.1)

A = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & -2 & 0 & -1 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ 0 & 0 & 3/2 & -3 & 3/2 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

If three columns of R are linearly dependent, then the three
corresponding columns of A are linearly dependent, and vice versa.
Row-echelon forms (Remark 4.1.13 & Example 4.1.12.2)

Let A be a matrix and R a row-echelon form of A.
A basis for the column space of A can be obtained by
taking the columns of A that correspond to the pivot
columns in R.

A = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & -2 & 0 & -1 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ 0 & 0 & 3/2 & -3 & 3/2 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

The pivot columns of R (the 1st, 3rd and 4th columns) form a
basis for the column space of R, so the corresponding columns
of A, namely (2, −1, 0, 1)ᵀ, (−1, 2, 1, −2)ᵀ and (0, −3, 1, 0)ᵀ,
form a basis for the column space of A.
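A short computational sketch of Remark 4.1.13, again assuming Python with sympy (my own illustration): rref() returns the reduced row-echelon form together with the indices of the pivot columns, and picking those columns out of A itself gives a basis for the column space of A.

```python
from sympy import Matrix

A = Matrix([[2, 2, -1, 0, 1],
            [-1, -1, 2, -3, 1],
            [0, 0, 1, 1, 1],
            [1, 1, -2, 0, -1]])

_, pivots = A.rref()                 # indices of the pivot columns (0-based)
print(pivots)                        # (0, 2, 3) -> the 1st, 3rd and 4th columns
basis = [A.col(j) for j in pivots]   # the corresponding columns of A
for c in basis:
    print(c.T)                       # printed as rows for readability
```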
Finding bases (Example 4.1.14.1)

Let u1 = (1, 2, 2, 1), u2 = (3, 6, 6, 3), u3 = (4, 9, 9, 5),
u4 = (−2, −1, −1, 1), u5 = (5, 8, 9, 4), u6 = (4, 2, 7, 3).
Find a basis for W = span{ u1, u2, u3, u4, u5, u6 }.

Method 1: Place the six vectors as row vectors to form a 6 × 4 matrix:

\begin{pmatrix} 1 & 2 & 2 & 1 \\ 3 & 6 & 6 & 3 \\ 4 & 9 & 9 & 5 \\ -2 & -1 & -1 & 1 \\ 5 & 8 & 9 & 4 \\ 4 & 2 & 7 & 3 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} \begin{pmatrix} 1 & 2 & 2 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}

So { (1, 2, 2, 1), (0, 1, 1, 1), (0, 0, 1, 1) } is a basis for W.
Finding bases (Example 4.1.14.1)

Method 2: Place the six vectors as column vectors to form a 4 × 6 matrix:

\begin{pmatrix} 1 & 3 & 4 & -2 & 5 & 4 \\ 2 & 6 & 9 & -1 & 8 & 2 \\ 2 & 6 & 9 & -1 & 9 & 7 \\ 1 & 3 & 5 & 1 & 4 & 3 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} \begin{pmatrix} 1 & 3 & 4 & -2 & 5 & 4 \\ 0 & 0 & 1 & 3 & -2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}

The pivot columns are the 1st, 3rd and 5th columns.
So { (1, 2, 2, 1), (4, 9, 9, 5), (5, 8, 9, 4) } is a basis for W.
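Both methods can be reproduced mechanically. The following is a minimal sketch assuming Python with sympy (my own illustration, not part of the notes): Method 1 keeps the nonzero rows of an echelon form of the 6 × 4 matrix, while Method 2 keeps the columns of the 4 × 6 matrix that sit over the pivot columns of its reduced row-echelon form.

```python
from sympy import Matrix

vectors = [(1, 2, 2, 1), (3, 6, 6, 3), (4, 9, 9, 5),
           (-2, -1, -1, 1), (5, 8, 9, 4), (4, 2, 7, 3)]

# Method 1: vectors as rows; nonzero rows of an echelon form span W.
A_rows = Matrix(vectors)
print(A_rows.rowspace())        # a basis for W (possibly scaled differently)

# Method 2: vectors as columns; take the columns of the original matrix
# located at the pivot columns of its RREF.
A_cols = Matrix(vectors).T
_, pivots = A_cols.rref()
print(pivots)                   # (0, 2, 4) -> u1, u3, u5
print([A_cols.col(j).T for j in pivots])
```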
Finding bases (Example 4.1.14.2)

Let S = { (1, 4, −2, 5, 1), (2, 9, −1, 8, 2), (2, 9, −1, 9, 3) }.


S is linearly independent.
Extend S to a basis for ℝ⁵.

Solution:
In the following, we present an algorithm that extends a
linearly independent subset S of ℝⁿ to a basis for ℝⁿ.
Finding bases (Example 4.1.14.2)

A = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 2 & 9 & -1 & 8 & 2 \\ 2 & 9 & -1 & 9 & 3 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 0 & 1 & 3 & -2 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}

Step 1: Form a matrix A using the vectors in S as rows.
Step 2: Reduce A to a row-echelon form R.
Step 3: Identify the non-pivot columns in R.
For this example, the 3rd and 5th columns are non-pivot columns.
Finding bases (Example 4.1.14.2)

A = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 2 & 9 & -1 & 8 & 2 \\ 2 & 9 & -1 & 9 & 3 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 0 & 1 & 3 & -2 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}

Step 4: For each non-pivot column, take a vector whose leading entry is at that column.
For this example, we need two vectors of the form
(0, 0, x, *, *) and (0, 0, 0, 0, y) where x and y are nonzero.
In particular, we take (0, 0, 1, 0, 0) and (0, 0, 0, 0, 1).
Finding bases (Example 4.1.14.2)

A = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 2 & 9 & -1 & 8 & 2 \\ 2 & 9 & -1 & 9 & 3 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} R = \begin{pmatrix} 1 & 4 & -2 & 5 & 1 \\ 0 & 1 & 3 & -2 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}

Step 5: Finally,
S ∪ ( the set of vectors chosen in Step 4 )
is a basis for ℝⁿ.
For this example,
{ (1, 4, −2, 5, 1), (2, 9, −1, 8, 2), (2, 9, −1, 9, 3),
(0, 0, 1, 0, 0), (0, 0, 0, 0, 1) }
is a basis for ℝ⁵.
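The five steps above are easy to package as a small routine. A sketch assuming Python with sympy (the function name extend_to_basis is my own, hypothetical): put the vectors of S as rows, locate the non-pivot columns of a row-reduced form, and adjoin the standard basis vector for each non-pivot column.

```python
from sympy import Matrix, eye

def extend_to_basis(S):
    """Extend a linearly independent list S of vectors in R^n to a basis of R^n
    by adjoining one standard basis vector per non-pivot column (Steps 1-5)."""
    A = Matrix(S)                      # Step 1: vectors of S as rows
    _, pivots = A.rref()               # Steps 2-3: pivot columns of a reduced form
    n = A.cols
    extra = [eye(n).row(j) for j in range(n) if j not in pivots]   # Step 4
    return [A.row(i) for i in range(A.rows)] + extra               # Step 5

S = [(1, 4, -2, 5, 1), (2, 9, -1, 8, 2), (2, 9, -1, 9, 3)]
for v in extend_to_basis(S):
    print(v)
# the two adjoined vectors are (0, 0, 1, 0, 0) and (0, 0, 0, 0, 1)
```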
Linear Systems (Discussion 4.1.15)

Consider the linear system
2x − y = −1
x − y + 3z = 4
−5x + y = −2
x + z = 3
i.e. \begin{pmatrix} 2 & -1 & 0 \\ 1 & -1 & 3 \\ -5 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -1 \\ 4 \\ -2 \\ 3 \end{pmatrix}.

Rewrite the system as
x \begin{pmatrix} 2 \\ 1 \\ -5 \\ 1 \end{pmatrix} + y \begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \end{pmatrix} + z \begin{pmatrix} 0 \\ 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 4 \\ -2 \\ 3 \end{pmatrix}.
Linear Systems (Discussion 4.1.15)

For example, (x, y, z) = (1, 3, 2) is a solution to the system:
1 \begin{pmatrix} 2 \\ 1 \\ -5 \\ 1 \end{pmatrix} + 3 \begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\ 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 4 \\ -2 \\ 3 \end{pmatrix}.

Let A = \begin{pmatrix} 2 & -1 & 0 \\ 1 & -1 & 3 \\ -5 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} and b = \begin{pmatrix} -1 \\ 4 \\ -2 \\ 3 \end{pmatrix}.

Then a solution to the system is a way of writing the
vector b as a linear combination of the column vectors of A.
Linear systems (Theorem 4.1.16)

Let A be an m × n matrix.
The column space of A = { Au | u ∈ ℝⁿ }.
A linear system Ax = b is consistent if and only if b lies
in the column space of A.

Proof:
Write A = \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix}, where cj is the j-th column of A.
Linear systems (Theorem 4.1.16)

For any u = (u1, u2, ..., un)ᵀ ∈ ℝⁿ,
Au = \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} = u1c1 + u2c2 + ··· + uncn ∈ span{ c1, c2, ..., cn } = the column space of A.

Thus { Au | u ∈ ℝⁿ } ⊆ the column space of A.
Linear systems (Theorem 4.1.16)

On the other hand, suppose
b ∈ span{ c1, c2, ..., cn } = the column space of A.
This means that there exist u1, u2, ..., un ∈ ℝ such that
b = u1c1 + u2c2 + ··· + uncn = \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} = Au ∈ { Au | u ∈ ℝⁿ }, where u = (u1, u2, ..., un)ᵀ.

Thus the column space of A ⊆ { Au | u ∈ ℝⁿ }
and hence we have shown that
the column space of A = { Au | u ∈ ℝⁿ }.
Linear systems (Theorem 4.1.16)

Finally,
the linear system Ax = b is consistent
⇔ there exists u ∈ ℝⁿ such that Au = b
⇔ b ∈ { Au | u ∈ ℝⁿ } = span{ c1, c2, ..., cn }
= the column space of A.
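To see the identity { Au | u ∈ ℝⁿ } = span{ c1, ..., cn } concretely, here is a small symbolic sketch assuming Python with sympy (reusing the 4 × 3 matrix from Discussion 4.1.15 purely for illustration): Au minus the linear combination u1c1 + u2c2 + u3c3 simplifies to the zero vector for every choice of u.

```python
from sympy import Matrix, symbols, simplify

A = Matrix([[2, -1, 0], [1, -1, 3], [-5, 1, 0], [1, 0, 1]])
u1, u2, u3 = symbols('u1 u2 u3')
u = Matrix([u1, u2, u3])

# A*u equals u1*c1 + u2*c2 + u3*c3, the linear combination of the columns of A.
combo = u1 * A.col(0) + u2 * A.col(1) + u3 * A.col(2)
print(simplify(A * u - combo))   # zero vector, for every u1, u2, u3
```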
Chapter 4 Vector Spaces Associated with Matrices

Section 4.2
Ranks
Row spaces and column spaces (Theorem 4.2.1)

The row space and the column space of a matrix have the
same dimension.

Proof: Let A be a matrix and R a row-echelon form of A.
In a row-echelon form such as
R = \begin{pmatrix} 0 & \otimes & * & \cdots & \cdots & \cdots & * \\ 0 & 0 & \otimes & * & \cdots & \cdots & * \\ \vdots & & & & & & \vdots \\ 0 & \cdots & \cdots & 0 & \otimes & * & * \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \end{pmatrix}
(where ⊗ marks the leading entries, sitting in the pivot columns),
the number of nonzero rows
= the number of leading entries
= the number of pivot columns.
Row spaces and column spaces (Theorem 4.2.1)

The nonzero rows in R form a basis for the row space of


A (see Remark 4.1.9).
So dim( the row space of A )
= the number of nonzero rows in R.
The columns of A corresponding to the pivot columns in
R form a basis for the column space of A (see Remark
4.1.13).
So dim( the column space of A )
= the number of pivot columns in R
= the number of nonzero rows in R (see the previous slide)
= dim( the row space of A ).
An example (Example 4.2.2)

C = \begin{pmatrix} 2 & 0 & 3 & -1 & 8 \\ 2 & 1 & 1 & -2 & 5 \\ -4 & -3 & 0 & 5 & -7 \end{pmatrix} \xrightarrow{\text{Gaussian Elimination}} \begin{pmatrix} 2 & 0 & 3 & -1 & 8 \\ 0 & 1 & -2 & -1 & -3 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

{ (2, 0, 3, −1, 8), (0, 1, −2, −1, −3) } is a basis for the row
space of C.
{ (2, 2, −4)ᵀ, (0, 1, −3)ᵀ } is a basis for the column space
of C.
The dimension of both the row space and the column space
of C is 2.
Ranks (Definition 4.2.3 & Example 4.2.4.1 & Remark 4.2.5)

The rank of a matrix is the dimension of its row space


(and its column space).
We denote the rank of a matrix A by rank(A).
• rank(0) = 0 and rank(In) = n.
• For an m × n matrix A, rank(A) ≤ min{ m, n }.
If rank(A) = min{ m, n }, then A is said to have full
rank.
• A square matrix A is of full rank if and only if
det(A) ≠ 0.
• rank(A) = rank(Aᵀ).
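These properties are easy to experiment with. A small sketch assuming Python with sympy (an illustration, not part of the notes):

```python
from sympy import Matrix, eye, zeros

print(zeros(3, 4).rank(), eye(4).rank())   # 0 and 4: rank(0) = 0, rank(I_n) = n

C = Matrix([[2, 0, 3, -1, 8],
            [2, 1, 1, -2, 5],
            [-4, -3, 0, 5, -7]])           # the 3 x 5 matrix C of Example 4.2.2
print(C.rank())                            # 2 <= min{3, 5}, so C does not have full rank
print(C.rank() == C.T.rank())              # True: rank(C) = rank(C^T)

S = Matrix([[1, 2], [3, 4]])               # a square matrix of full rank
print(S.det() != 0, S.rank() == 2)         # True True
```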
Linear systems (Remark 4.2.6 & Example 4.2.7)

A linear system Ax = b is consistent if and only if
A and ( A | b ) have the same rank.

For example, consider
\begin{pmatrix} 2 & -1 & 0 \\ 1 & -1 & 3 \\ -5 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix},
i.e. A = \begin{pmatrix} 2 & -1 & 0 \\ 1 & -1 & 3 \\ -5 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} and b = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

Gaussian elimination on the augmented matrix gives
( A | b ) \xrightarrow{\text{Gaussian Elimination}} \begin{pmatrix} 2 & -1 & 0 & 1 \\ 0 & -1/2 & 3 & -1/2 \\ 0 & 0 & -9 & 4 \\ 0 & 0 & 0 & 7/9 \end{pmatrix}

For this example, rank(A) = 3 while rank( ( A | b ) ) = 4.
The system is inconsistent.
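The same consistency test is easy to run by machine. A minimal sketch assuming Python with sympy (not part of the notes): compare the rank of A with the rank of the augmented matrix ( A | b ).

```python
from sympy import Matrix

A = Matrix([[2, -1, 0], [1, -1, 3], [-5, 1, 0], [1, 0, 1]])
b = Matrix([1, 0, 0, 0])
aug = A.row_join(b)                  # the augmented matrix ( A | b )

print(A.rank(), aug.rank())          # 3 4
print(A.rank() == aug.rank())        # False -> the system Ax = b is inconsistent
```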
Ranks of product of matrices (Theorem 4.2.8)

Let A and B be m × n and n × p matrices
respectively. Then
rank(AB) ≤ min{ rank(A), rank(B) }.
(Please read our textbook for a proof of the result.)
Chapter 4 Vector Spaces Associated with Matrices

Section 4.3
Nullspaces and Nullities
Nullspaces (Definition 4.3.1 & Notation 4.3.2)

Let A be an m × n matrix.
The solution space of the homogeneous linear system
Ax = 0 is known as the nullspace of A.
The dimension of the nullspace of A is called the nullity
of A and is denoted by nullity(A).
Since the nullspace is a subspace of ℝⁿ,
nullity(A) = dim( the nullspace of A )
≤ dim( ℝⁿ ) = n.
From now on, vectors in nullspaces, as well as solution
sets of linear systems, will always be written as column
vectors. (See Notation 4.1.5.)
Examples (Example 4.3.3.1)

Find a basis for the nullspace of A = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & -2 & 0 & -1 \end{pmatrix}.

Solution:
\left(\begin{array}{ccccc|c} 2 & 2 & -1 & 0 & 1 & 0 \\ -1 & -1 & 2 & -3 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ 1 & 1 & -2 & 0 & -1 & 0 \end{array}\right) \xrightarrow{\text{Gauss-Jordan Elimination}} \left(\begin{array}{ccccc|c} 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right)
Examples (Example 4.3.3.1)

From the reduced row-echelon form
\left(\begin{array}{ccccc|c} 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right),
the general solution to the homogeneous linear system Ax = 0 is
x = \begin{pmatrix} -s - t \\ s \\ -t \\ 0 \\ t \end{pmatrix} = s \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}, where s, t are arbitrary parameters.

Thus { (−1, 1, 0, 0, 0)ᵀ, (−1, 0, −1, 0, 1)ᵀ } is a basis for
the nullspace of A.
Here nullity(A) = 2.
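For comparison, here is a minimal computational sketch assuming Python with sympy (my own addition): nullspace() returns a basis of the solution space of Ax = 0, so its length is nullity(A); the basis it returns should agree with the one above, possibly up to ordering and scaling.

```python
from sympy import Matrix

A = Matrix([[2, 2, -1, 0, 1],
            [-1, -1, 2, -3, 1],
            [0, 0, 1, 1, 1],
            [1, 1, -2, 0, -1]])

basis = A.nullspace()        # basis vectors of the solution space of Ax = 0
for v in basis:
    print(v.T)               # (-1, 1, 0, 0, 0) and (-1, 0, -1, 0, 1), printed as rows
print(len(basis))            # 2 = nullity(A)
```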
Examples (Example 4.3.3.2)

Determine the nullity of B = \begin{pmatrix} 2 & 1 & -5 & 1 \\ -1 & -1 & 1 & 0 \\ 0 & 3 & 0 & 1 \end{pmatrix}.

Solution:
\left(\begin{array}{cccc|c} 2 & 1 & -5 & 1 & 0 \\ -1 & -1 & 1 & 0 & 0 \\ 0 & 3 & 0 & 1 & 0 \end{array}\right) \xrightarrow{\text{Gauss-Jordan Elimination}} \left(\begin{array}{cccc|c} 1 & 0 & 0 & -7/9 & 0 \\ 0 & 1 & 0 & 1/3 & 0 \\ 0 & 0 & 1 & -4/9 & 0 \end{array}\right)
Examples (Example 4.3.3.2)

From the reduced row-echelon form
\left(\begin{array}{cccc|c} 1 & 0 & 0 & -7/9 & 0 \\ 0 & 1 & 0 & 1/3 & 0 \\ 0 & 0 & 1 & -4/9 & 0 \end{array}\right),
the general solution to the homogeneous linear system Bx = 0 is
x = \begin{pmatrix} \tfrac{7}{9}t \\ -\tfrac{1}{3}t \\ \tfrac{4}{9}t \\ t \end{pmatrix} = t \begin{pmatrix} 7/9 \\ -1/3 \\ 4/9 \\ 1 \end{pmatrix}, where t is an arbitrary parameter.

Thus { (7, −3, 4, 9)ᵀ } (after scaling the vector by 9) is a basis
for the nullspace of B and nullity(B) = 1.
Dimension Theorem for matrices (Theorem 4.3.4)

Let A be a matrix.
Then rank(A) + nullity(A) = the number of columns of A.

Proof: Let R be a row-echelon form of A.
The columns of R can be classified into two types:
pivot columns and non-pivot columns, e.g.
R = \begin{pmatrix} 0 & \otimes & * & \cdots & \cdots & \cdots & * \\ 0 & 0 & \otimes & * & \cdots & \cdots & * \\ \vdots & & & & & & \vdots \\ 0 & \cdots & \cdots & 0 & \otimes & * & * \\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \end{pmatrix}
where ⊗ marks the leading entries in the pivot columns.
Dimension Theorem for matrices (Theorem 4.3.4)

Then (by Remark 4.1.13)


rank(A) = dim( the column space of A )
= the number of pivot-columns in R
and (by Discussion 3.6.5)
nullity(A) = dim( the solution space of Ax = 0 )
= the number of non-pivot-columns in R.
So rank(A) + nullity(A)
= the number of pivot-columns in R
+ the number of non-pivot-columns in R
= the number of columns of R
= the number of columns of A.
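A quick numeric check of the Dimension Theorem, assuming Python with sympy (an illustration only), using the matrix B of Example 4.3.3.2:

```python
from sympy import Matrix

B = Matrix([[2, 1, -5, 1],
            [-1, -1, 1, 0],
            [0, 3, 0, 1]])

rank = B.rank()
nullity = len(B.nullspace())         # dimension of the nullspace of B
print(rank, nullity, B.cols)         # 3 1 4
print(rank + nullity == B.cols)      # True: rank(B) + nullity(B) = number of columns
```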
Examples (Example 4.3.5.2)

In each of the following cases, find rank(A), nullity(A)


and nullity(Aᵀ). (Recall that rank(A) = rank(Aᵀ).)
(a) A is a 3 × 4 matrix and rank(A) = 3.
Answer: nullity(A) = 4 − rank(A) = 1,
nullity(Aᵀ) = 3 − rank(Aᵀ) = 3 − rank(A) = 0.
(b) A is a 7 × 5 matrix and nullity(A) = 3.
Answer: rank(A) = 5 − nullity(A) = 2,
nullity(Aᵀ) = 7 − rank(Aᵀ) = 7 − rank(A) = 5.
(c) A is a 3 × 2 matrix and nullity(Aᵀ) = 3.
Answer: rank(A) = rank(Aᵀ) = 3 − nullity(Aᵀ) = 0,
nullity(A) = 2 − rank(A) = 2.
Linear systems (Theorem 4.3.6 & Remark 4.3.7)

Suppose x = v is a solution to a linear system Ax = b.


Then the solution set of the system Ax = b is given by
{ u + v | u ∈ the nullspace of A },
i.e. the system Ax = b has a general solution
x = (a general solution for Ax = 0)
+ (one particular solution to Ax = b).

By this result, a consistent linear system Ax = b has only


one solution if and only if the nullspace of A is { 0 }.
Proof of the theorem (Theorem 4.3.6)

Proof: Let M = { u + v | u ∈ the nullspace of A }.


Since x = v is a solution to the system Ax = b, Av = b.
Let x = w be any solution to the system Ax = b,
i.e. Aw = b.
Write u = w − v.
Then Au = A(w − v) = Aw − Av = b − b = 0.
This means that u ∈ the nullspace of A
and hence w = u + v ∈ M.
We have shown that
the solution set of the system ⊆ M.
Proof of the theorem (Theorem 4.3.6)

On the other hand, take any w ∈ M,


i.e. w = u + v for some u ∈ the nullspace of A.
(Recall that Au = 0 and Av = b.)
Then Aw = A(u + v) = Au + Av = 0 + b = b.
Thus x = w is a solution to the system Ax = b.
We have shown that
M ⊆ the solution set of the system.

So the solution set of the system = M.


An example (Example 4.3.8)

Consider the linear system Ax = b where
A = \begin{pmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & -2 & 0 & -1 \end{pmatrix} and b = \begin{pmatrix} 0 \\ 0 \\ 3 \\ -3 \end{pmatrix}.

We know that (by Example 4.3.3.1)
the nullspace of A = \left\{ s \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} \;\middle|\; s, t ∈ ℝ \right\}.
An example (Example 4.3.8)

It can be checked easily that (1, −1, 1, 1, 1)ᵀ is a
solution to Ax = b.
Thus the system has a general solution
x = s \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, where s, t are arbitrary parameters.
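The structure "general solution = nullspace part + particular solution" can be verified symbolically. A sketch assuming Python with sympy (my own illustration): substitute the general solution above into Ax and check that Ax − b vanishes for all values of the parameters s and t.

```python
from sympy import Matrix, symbols

A = Matrix([[2, 2, -1, 0, 1],
            [-1, -1, 2, -3, 1],
            [0, 0, 1, 1, 1],
            [1, 1, -2, 0, -1]])
b = Matrix([0, 0, 3, -3])
v = Matrix([1, -1, 1, 1, 1])          # the particular solution from the example

s, t = symbols('s t')
n1, n2 = A.nullspace()                # nullspace basis, as in Example 4.3.3.1
x = s * n1 + t * n2 + v               # general solution of Ax = b

print((A * x - b).expand())           # zero vector for every s and t
```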
A related result (Remark 4.3.9)

Consider the ordinary differential equation
a \frac{d^2 y}{dx^2} + b \frac{dy}{dx} + cy = f(x),
where a, b, c are real constants.
The general solution of the equation is usually written as
y = (a general solution for a \frac{d^2 y}{dx^2} + b \frac{dy}{dx} + cy = 0)
+ (one particular solution for a \frac{d^2 y}{dx^2} + b \frac{dy}{dx} + cy = f(x)).
A related result (Remark 4.3.9)

The form looks similar to the solutions to Ax = b


discussed in Theorem 4.3.6.
This is not a coincidence. To explain the relation between
these two different types of equations, we need the
concept of "abstract" vector spaces that will be covered in
MA2101 Linear Algebra II.
