
Definitions of Linear Algebra Terms

In order to learn and understand mathematics, it is necessary to understand the meanings of the terms (vocabulary words) that are used. This document contains definitions of some of the important terms used in linear algebra. All of these definitions should be memorized (and not just memorized but understood). In addition to the definitions that are given here, we also include statements of some of the most important results (theorems) of linear algebra. The proofs of these theorems are not included here but will be given in class.

There will be a question on each of the exams that asks you to state some definitions. It will be required that you state these definitions very precisely in order to receive credit for the exam question. (Seemingly very small things, such as writing the word "or" instead of the word "and", can render a definition incorrect.) The definitions given here are organized according to the section number in which they appear in David Lay's textbook.
Here is a suggestion for how to study linear algebra (and other subjects in higher level mathematics):

1. Carefully read a section in the textbook and also read your class notes. Pay attention to definitions of terms and examples and try to understand each concept along the way as you read it. You don't have to memorize all of the definitions upon first reading. You will memorize them gradually as you work the homework problems.

2. Work on the homework problems. As you work on the homework problems, you should find that you will need to continuously refer back to the definitions and to the examples given in the textbook. (This is how you will naturally come to memorize the definitions.) You will find that you have to spend a considerable amount of time working on homework. When you are done, you will probably have read through the section and your course notes several times, and you will probably have memorized the definitions. (There are some very good True/False questions among the homework problems that are good tests of your understanding of the basic terms and concepts. Such True/False questions will also appear on the exams given in this course.)

3. Test yourself. After you are satisfied that you have a good understanding of the concepts and that you can apply those concepts in solving problems, try to write out the definitions without referring to your book or notes. You might then also try to work some of the supplementary problems that appear at the end of each chapter of the textbook.

4. One other tip is that it is a good idea to read a section in the textbook, and perhaps even get started on the homework problems, before we go over that section in class. This will better prepare you to take notes and ask questions when we do go over the material in class. It will alert you to certain aspects of the material that you might not quite understand upon first reading, and you can then ask questions about those particular aspects during class.
1.1: Systems of Linear Equations
Definition 1 A linear equation in the variables x_1, x_2, ..., x_n is an equation of the form

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = b

where a_1, a_2, ..., a_n and b are real or complex numbers (that are often given in advance).

Example 2

    4x + 4y - 7z + 8w = 8

is a linear equation in the variables x, y, z, and w.

    3x + 5y^2 = 6

is not a linear equation (because of the appearance of y^2).
Definition 3 Suppose that m and n are positive integers. A system of linear equations in the variables x_1, x_2, ..., x_n is a set of equations of the form

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

where the a_ij (i = 1, ..., m, j = 1, ..., n) and the b_i (i = 1, ..., m) are real or complex numbers (that are often given in advance). This system of linear equations is said to consist of m equations in n unknowns.

Example 4

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

is a system of three linear equations in five unknowns. (The unknowns or variables of the system are x_1, x_2, x_3, x_4, and x_5.)
Definition 5 A solution of the linear system

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

is an ordered n-tuple of numbers (s_1, s_2, ..., s_n) such that all of the equations in the system are true when the substitution x_1 = s_1, x_2 = s_2, ..., x_n = s_n is made.

Example 6 The 5-tuple (0, -1, -12/5, 0, 3) is a solution of the linear system

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

because all equations in the system are true when we make the substitutions

    x_1 = 0
    x_2 = -1
    x_3 = -12/5
    x_4 = 0
    x_5 = 3.

The 5-tuple (1, 3, 3/5, 0, 1/2) is also a solution of the system (as the reader can check).
Definition 7 The solution set of a system of linear equations is the set of all solutions of the system.

Definition 8 Two linear systems are said to be equivalent to each other if they have the same solution set.

Definition 9 A linear system is said to be consistent if it has at least one solution and is otherwise said to be inconsistent.
Example 10 The linear system

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

is consistent because it has at least one solution. (It in fact has infinitely many solutions.) However, the linear system

    x + 2y = 5
    x + 2y = 9

is obviously inconsistent.
Definition 11 Suppose that m and n are positive integers. An m × n matrix is a rectangular array of numbers with m rows and n columns. Matrices are usually denoted by capital letters. If m = 1 or n = 1, then the matrix is referred to as a vector. In particular, if m = 1, then the matrix is a row vector of dimension n, and if n = 1, then the matrix is a column vector of dimension m. Vectors are usually denoted by lower case boldface letters or (when handwritten) as lower case letters with arrows above them.

Example 12

    A = [ 8  10   6   8
          5   7   6   6
          0   5  10   1
          1   3  10   5
          4   8   1  10 ]

is a 5 × 4 matrix.

    r = [ 2  3  1  9  8 ]

is a row vector of dimension 5.

    c = [  4
           5
          10 ]

is a column vector of dimension 3.
Remark 13 If A is an m × n matrix whose columns are made up of the column vectors c_1, c_2, ..., c_n, then A can be written as

    A = [ c_1  c_2  ...  c_n ].

Likewise, if A is made up of the row vectors r_1, r_2, ..., r_m, then A can be written as

    A = [ r_1
          r_2
          ...
          r_m ].
Example 14 The 5 × 4 matrix

    A = [ 8  10   6   8
          5   7   6   6
          0   5  10   1
          1   3  10   5
          4   8   1  10 ]

can be written as

    A = [ c_1  c_2  c_3  c_4 ]

where

    c_1 = [ 8 ]   c_2 = [ 10 ]   c_3 = [  6 ]   c_4 = [  8 ]
          [ 5 ]         [  7 ]         [  6 ]         [  6 ]
          [ 0 ]         [  5 ]         [ 10 ]         [  1 ]
          [ 1 ]         [  3 ]         [ 10 ]         [  5 ]
          [ 4 ]         [  8 ]         [  1 ]         [ 10 ]

or can be written as

    A = [ r_1
          r_2
          r_3
          r_4
          r_5 ]

where

    r_1 = [ 8  10   6   8 ]
    r_2 = [ 5   7   6   6 ]
    r_3 = [ 0   5  10   1 ]
    r_4 = [ 1   3  10   5 ]
    r_5 = [ 4   8   1  10 ].
Definition 15 For a given linear system of equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m,

the m × n matrix

    A = [ a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
          ...
          a_m1  a_m2  ...  a_mn ]

is called the coefficient matrix of the system, and the m × (n + 1) matrix

    [A b] = [ a_11  a_12  ...  a_1n  b_1
              a_21  a_22  ...  a_2n  b_2
              ...
              a_m1  a_m2  ...  a_mn  b_m ]

is called the augmented matrix of the system.

Example 16 The coefficient matrix of the linear system

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

is

    A = [  2   2  -5   4  -2
          -4   1   0  -5   0
           3  -2   0   4  -2 ]

and the augmented matrix of this system is

    [A b] = [  2   2  -5   4  -2   4
              -4   1   0  -5   0  -1
               3  -2   0   4  -2  -4 ].
Definition 17 An elementary row operation is performed on a matrix by changing the matrix in one of the following ways:

1. Interchange: Interchange two rows of the matrix. If rows i and j are interchanged, then we use the notation r_i ↔ r_j.

2. Scaling: Multiply all entries in a single row by a nonzero constant, k. If the ith row is multiplied by k, then we use the notation k r_i → r_i.

3. Replacement: Replace a row with the sum of itself and a nonzero multiple of another row. If the ith row is replaced with the sum of itself and k times the jth row, then we use the notation r_i + k r_j → r_i.
Example 18 Here are some examples of elementary row operations performed on the matrix

    A = [  2   5   7   7
          -7   4   8   2
           8   7  -8  -2 ].

1. Interchange rows 1 and 3:

    [  2   5   7   7                [  8   7  -8  -2
      -7   4   8   2    r_1 ↔ r_3    -7   4   8   2
       8   7  -8  -2 ]  --------->    2   5   7   7 ]

2. Scale row 1 by a factor of 5:

    [  2   5   7   7                 [ 10  25  35  35
      -7   4   8   2    5r_1 → r_1    -7   4   8   2
       8   7  -8  -2 ]  ---------->    8   7  -8  -2 ]

3. Replace row 2 with the sum of itself and 2 times row 3:

    [  2   5   7   7                      [  2   5   7   7
      -7   4   8   2    r_2 + 2r_3 → r_2     9  18  -8  -2
       8   7  -8  -2 ]  --------------->     8   7  -8  -2 ]
Definition 19 An m × n matrix, A, is said to be row equivalent (or simply equivalent) to an m × n matrix, B, if A can be transformed into B by a series of elementary row operations. If A is equivalent to B, then we write A ~ B.

Example 20 As can be seen by studying the previous example,

    [  2   5   7   7        [  8   7  -8  -2
      -7   4   8   2    ~     -7   4   8   2
       8   7  -8  -2 ]         2   5   7   7 ],

    [  2   5   7   7        [ 10  25  35  35
      -7   4   8   2    ~     -7   4   8   2
       8   7  -8  -2 ]         8   7  -8  -2 ],

and

    [  2   5   7   7        [  2   5   7   7
      -7   4   8   2    ~      9  18  -8  -2
       8   7  -8  -2 ]         8   7  -8  -2 ].
Example 21 The sequence of row operations

    [ 3   6                [ 6  12                [ 6  12               [ 28   4
      1   6    2r_1 → r_1    1   6    4r_3 → r_3    1   6    r_1 ↔ r_3    1   6
      7   1 ]  ---------->   7   1 ]  ---------->  28   4 ]  --------->   6  12 ]

shows that

    [ 3   6        [ 28   4
      1   6    ~     1   6
      7   1 ]        6  12 ].
Remark 22 (optional) Row equivalence of matrices is an example of what is called an equivalence relation. An equivalence relation on a nonempty set of objects, X, is a relation, ~, that is reflexive, symmetric, and transitive. The reflexive property is the property that x ~ x for all x ∈ X. The symmetric property is the property that if x ∈ X and y ∈ X with x ~ y, then y ~ x. The transitive property is the property that if x ∈ X, y ∈ X, and z ∈ X with x ~ y and y ~ z, then x ~ z. If we take X to be the set of all m × n matrices, then row equivalence is an equivalence relation on X because

1. A ~ A for all A ∈ X.

2. If A ∈ X and B ∈ X and A ~ B, then B ~ A.

3. If A ∈ X, B ∈ X, and C ∈ X and A ~ B and B ~ C, then A ~ C.
1.2: Row Reduction and Echelon Forms
Definition 23 The leading entry in a row of a matrix is the first (from the left) nonzero entry in that row.

Example 24 For the matrix

    [ 0  1  4   8
      0  0  4   3
      5  0  0  10
      0  0  0   1
      9  7  2   8 ],

the leading entry in row 1 is 1, the leading entry in row 2 is 4, the leading entry in row 3 is 5, the leading entry in row 4 is 1, and the leading entry in row 5 is 9. (Note that if a matrix happens to have a row consisting of all zeros, then that row does not have a leading entry.)

Definition 25 A zero row of a matrix is a row consisting entirely of zeros. A nonzero row is a row that is not a zero row.

Definition 26 An echelon form matrix is a matrix for which

1. No zero row has a nonzero row below it.

2. Each leading entry is in a column to the right of the leading entry in the row above it.

Definition 27 A reduced echelon form matrix is an echelon form matrix which also has the properties:

3. The leading entry in each nonzero row is 1.

4. Each leading 1 is the only nonzero entry in its column.
Example 28 The matrix

    [ 6   4  10
      1   2  10
      4   4   2
      0  10   1 ]

is not an echelon form matrix.

The matrix

    [ 0  0  0  4  8
      0  0  0  0  3 ]

is an echelon form matrix but not a reduced echelon form matrix.

The matrix

    [ 0  0  1  0
      0  0  0  1
      0  0  0  0
      0  0  0  0
      0  0  0  0 ]

is a reduced echelon form matrix.
Theorem 29 (Uniqueness of the Reduced Echelon Form) Each matrix, A, is equivalent to one and only one reduced echelon form matrix. This unique matrix is denoted by rref(A).
Example 30 For

    A = [ 6   4  10
          1   2  10
          4   4   2
          0  10   1 ],

we can see (after some work) that

    rref(A) = [ 1  0  0
                0  1  0
                0  0  1
                0  0  0 ].

For

    A = [ 0  0  0  4  8
          0  0  0  0  3 ],

we have

    rref(A) = [ 0  0  0  1  0
                0  0  0  0  1 ].

For

    A = [ 0  0  1  0
          0  0  0  1
          0  0  0  0
          0  0  0  0
          0  0  0  0 ],

we have rref(A) = A, since this matrix is already in reduced echelon form.
Definition 31 A pivot position in a matrix, A, is a position in A that corresponds to a leading 1 in rref(A). A pivot column of A is a column of A that contains a pivot position.
Example 32 By studying the preceding example, we can see that the pivot columns of

    A = [ 6   4  10
          1   2  10
          4   4   2
          0  10   1 ]

are columns 1, 2, and 3. (Thus all columns of A are pivot columns.)

The pivot columns of

    A = [ 0  0  0  4  8
          0  0  0  0  3 ]

are columns 4 and 5.

The pivot columns of

    A = [ 0  0  1  0
          0  0  0  1
          0  0  0  0
          0  0  0  0
          0  0  0  0 ]

are columns 3 and 4.
Definition 33 Let A be the coefficient matrix of the linear system of equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

Each variable that corresponds to a pivot column of A is called a basic variable and the other variables are called free variables.
Example 34 The coefficient matrix of the linear system

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

is

    A = [  2   2  -5   4  -2
          -4   1   0  -5   0
           3  -2   0   4  -2 ].

Since

    rref(A) = [ 1  0  0   6/5   2/5
                0  1  0  -1/5   8/5
                0  0  1  -2/5   6/5 ],

we see that the pivot columns of A are columns 1, 2, and 3. Thus the basic variables of this linear system are x_1, x_2, and x_3. The free variables are x_4 and x_5.
Theorem 35 (Existence and Uniqueness Theorem) A linear system of equations is consistent if and only if the rightmost column of the augmented matrix for the system is not a pivot column. If a linear system is consistent, then the solution set of the linear system contains either a unique solution (when there are no free variables) or infinitely many solutions (when there is at least one free variable).
Example 36 The augmented matrix of the system

    2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 = 4
    -4x_1 + x_2 - 5x_4 = -1
    3x_1 - 2x_2 + 4x_4 - 2x_5 = -4

is

    [A b] = [  2   2  -5   4  -2   4
              -4   1   0  -5   0  -1
               3  -2   0   4  -2  -4 ].

Since

    rref([A b]) = [ 1  0  0   6/5   2/5   6/5
                    0  1  0  -1/5   8/5  19/5
                    0  0  1  -2/5   6/5   6/5 ],

we see that the rightmost column of [A b] is not a pivot column. This means that the system is consistent. Furthermore, since the system does have free variables, there are infinitely many solutions to the system.

Here is a general procedure for determining whether a given linear system of equations is consistent or not and, if it is consistent, whether it has infinitely many solutions or a unique solution:

1. Compute rref([A b]) to determine whether or not the rightmost column of [A b] is a pivot column. If the rightmost column of [A b] is a pivot column, then the system is inconsistent.

2. Assuming that the rightmost column of [A b] is not a pivot column, take a look at rref(A). (This requires no extra computation.) If all columns of A are pivot columns, then the linear system has a unique solution. If not all columns of A are pivot columns, then the system has infinitely many solutions.
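The two-step procedure above translates directly into code. This is our own sketch (the helper and function names are ours, and the sample system uses one consistent choice of signs); `pivot_columns` performs plain forward elimination, which is enough to locate pivot columns without computing the full reduced form.

```python
from fractions import Fraction

def pivot_columns(M):
    """0-based pivot columns of M, found by forward Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, pr = [], 0
    for c in range(len(M[0])):
        r = next((r for r in range(pr, len(M)) if M[r][c] != 0), None)
        if r is None:
            continue
        M[pr], M[r] = M[r], M[pr]
        for r2 in range(pr + 1, len(M)):
            k = M[r2][c] / M[pr][c]
            M[r2] = [x - k * y for x, y in zip(M[r2], M[pr])]
        pivots.append(c)
        pr += 1
    return pivots

def classify(A, b):
    Ab = [row + [bi] for row, bi in zip(A, b)]   # augmented matrix [A b]
    n = len(A[0])
    piv = pivot_columns(Ab)
    if n in piv:                  # rightmost column of [A b] is a pivot column
        return "inconsistent"
    return "unique solution" if len(piv) == n else "infinitely many solutions"

A = [[2, 2, -5, 4, -2], [-4, 1, 0, -5, 0], [3, -2, 0, 4, -2]]
print(classify(A, [4, -1, -4]))             # infinitely many solutions
print(classify([[1, 2], [1, 2]], [5, 9]))   # inconsistent
```

Note that step 2 of the procedure really does need no extra work: the pivot columns of A are exactly the pivot columns of [A b] other than the last one.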
1.3: Vector Equations
Definition 37 Suppose that

    u = [ u_1        v = [ v_1
          u_2              v_2
          ...              ...
          u_n ]            v_n ]

are column vectors of dimension n. We define the sum of u and v to be the vector

    u + v = [ u_1 + v_1
              u_2 + v_2
              ...
              u_n + v_n ].

Definition 38 Suppose that

    u = [ u_1
          u_2
          ...
          u_n ]

is a column vector of dimension n and suppose that c is a real number (also called a scalar). We define the scalar multiple cu to be

    cu = [ c u_1
           c u_2
           ...
           c u_n ].
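Definitions 37 and 38 are componentwise operations, so they translate directly into code. A small sketch with plain Python lists standing in for column vectors (helper names are ours):

```python
def vec_add(u, v):
    assert len(u) == len(v)           # both must have the same dimension n
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(c, u):
    return [c * ui for ui in u]

print(vec_add([1, 2], [0, -5]))       # [1, -3]
print(scalar_mul(4, [1, 2]))          # [4, 8]
```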
Definition 39 The symbol R^n denotes the set of all ordered n-tuples of numbers. The symbol V_n denotes the set of all column vectors of dimension n. (The textbook author does not make this distinction. He uses R^n to denote both things. However, there really is a difference. Elements of R^n should be regarded as points, whereas elements of V_n are matrices with one column.) If x = (x_1, x_2, ..., x_n) is a point in R^n, then the position vector of x is the vector

    x = [ x_1
          x_2
          ...
          x_n ].

Example 40 x = (1, 2) is a point in R^2. The position vector of this point (which is an element of V_2) is

    x = [ 1
          2 ].
Definition 41 Suppose that u_1, u_2, ..., u_n are vectors in V_m. (Note that this is a set of n vectors, each of which has dimension m.) Also suppose that c_1, c_2, ..., c_n are scalars. Then the vector

    c_1 u_1 + c_2 u_2 + ... + c_n u_n

is called a linear combination of the vectors u_1, u_2, ..., u_n.

Example 42 Suppose that

    u_1 = [ 1      u_2 = [  0      u_3 = [ 1
            2 ],          -5 ],            4 ].

Then the vector

    4u_1 - 2u_2 + 3u_3 = 4 [ 1    - 2 [  0    + 3 [ 1    =  [  7
                             2 ]       -5 ]         4 ]        30 ]

is a linear combination of u_1, u_2, and u_3.
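A linear combination as in Definition 41 is just repeated scalar multiplication and vector addition. A small sketch with sample vectors (the helper name is ours):

```python
def lin_comb(coeffs, vectors):
    """Return c_1 u_1 + ... + c_n u_n for lists of scalars and vectors."""
    out = [0] * len(vectors[0])
    for c, v in zip(coeffs, vectors):
        out = [o + c * vi for o, vi in zip(out, v)]
    return out

u1, u2, u3 = [1, 2], [0, -5], [1, 4]
print(lin_comb([4, -2, 3], [u1, u2, u3]))   # [7, 30]
```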
Definition 43 Suppose that u_1, u_2, ..., u_n are vectors in V_m. The span of this set of vectors, denoted by Span{u_1, u_2, ..., u_n}, is the set of all vectors that are linear combinations of the vectors u_1, u_2, ..., u_n. Formally,

    Span{u_1, u_2, ..., u_n}
    = { v ∈ V_m | there exist scalars c_1, c_2, ..., c_n such that v = c_1 u_1 + c_2 u_2 + ... + c_n u_n }.

Definition 44 The zero vector in V_m is the vector

    0_m = [ 0
            0
            ...
            0 ].

Remark 45 If {u_1, u_2, ..., u_n} is any set of vectors in V_m, then 0_m ∈ Span{u_1, u_2, ..., u_n} because

    0_m = 0u_1 + 0u_2 + ... + 0u_n.
Definition 46 A vector equation in the variables x_1, x_2, ..., x_n is an equation of the form

    x_1 a_1 + x_2 a_2 + ... + x_n a_n = b

where a_1, a_2, ..., a_n and b are vectors in V_m (which are usually known beforehand). An ordered n-tuple (s_1, s_2, ..., s_n) is called a solution of this vector equation if the equation is satisfied when we set x_1 = s_1, x_2 = s_2, ..., x_n = s_n. The solution set of the vector equation is the set of all of its solutions.

Example 47 An example of a vector equation is

    x_1 [ 1    + x_2 [  0    + x_3 [ 1    =  [ 4
          2 ]          -5 ]          4 ]       2 ].

The ordered triple (x_1, x_2, x_3) = (9, -4/5, -5) is a solution of this vector equation because

    9 [ 1    - (4/5) [  0    - 5 [ 1    =  [ 4
        2 ]            -5 ]        4 ]       2 ].

Remark 48 The vector equation

    x_1 a_1 + x_2 a_2 + ... + x_n a_n = b

is consistent if and only if b ∈ Span{a_1, a_2, ..., a_n}.
Remark 49 The vector equation

    x_1 a_1 + x_2 a_2 + ... + x_n a_n = b

where

    a_1 = [ a_11     a_2 = [ a_12     ...   a_n = [ a_1n
            a_21             a_22                   a_2n
            ...              ...                    ...
            a_m1 ],          a_m2 ],                a_mn ]

and

    b = [ b_1
          b_2
          ...
          b_m ]

is equivalent to the system of linear equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

This means that the vector equation and the system have the same solution sets. Each can be solved (or determined to be inconsistent) by row-reducing the augmented matrix

    [ a_1  a_2  ...  a_n  b ].
Example 50 Let us solve the vector equation

    x_1 [ 1    + x_2 [  0    + x_3 [ 1    =  [ 4
          2 ]          -5 ]          4 ]       2 ].

This equation is equivalent to the linear system

    x_1 + x_3 = 4
    2x_1 - 5x_2 + 4x_3 = 2.

Since

    [A b] = [ 1   0  1  4
              2  -5  4  2 ]

and

    rref([A b]) = [ 1  0   1    4
                    0  1  -2/5  6/5 ],

we see that the vector equation is consistent (because the rightmost column of [A b] is not a pivot column) and that it in fact has infinitely many solutions (because not all columns of A are pivot columns). We also see that the general solution is

    x_1 = 4 - x_3
    x_2 = 6/5 + (2/5)x_3
    x_3 = free.

Note that if we set x_3 = -5, then we get the particular solution

    x_1 = 9
    x_2 = -4/5
    x_3 = -5

that was identified in Example 47.
1.4: The Matrix Equation Ax = b
Definition 51 For a matrix of size m × n,

    A = [ a_1  a_2  ...  a_n ] = [ a_11  a_12  ...  a_1n
                                   a_21  a_22  ...  a_2n
                                   ...
                                   a_m1  a_m2  ...  a_mn ],

and a vector of dimension n,

    x = [ x_1
          x_2
          ...
          x_n ],

we define the product Ax to be

    Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n.

Example 52 Suppose

    A = [ 2  -3  4
          4  -3  3 ]

and

    x = [  4
          -1
           0 ].

Then

    Ax = 4 [ 2    - 1 [ -3    + 0 [ 4    =  [ 11
             4 ]        -3 ]        3 ]       19 ].
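Definition 51 defines Ax as a linear combination of the columns of A; the familiar row-times-column rule gives the same answer. A sketch comparing the two (small matrix and names of our own):

```python
def matvec_by_columns(A, x):
    """Ax computed as x_1 a_1 + ... + x_n a_n (Definition 51)."""
    out = [0] * len(A)
    for j, xj in enumerate(x):
        out = [o + xj * A[i][j] for i, o in enumerate(out)]
    return out

def matvec_by_rows(A, x):
    """Ax computed row by row (dot product of each row with x)."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[2, 0, 1], [1, 3, 0]]
x = [1, 2, -1]
print(matvec_by_columns(A, x))   # [1, 7]
print(matvec_by_rows(A, x))      # [1, 7]
```

That both computations agree is exactly the content of the definition: the ith entry of the column-combination is the dot product of row i with x.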
Definition 53 Suppose that

    A = [ a_1  a_2  ...  a_n ] = [ a_11  a_12  ...  a_1n
                                   a_21  a_22  ...  a_2n
                                   ...
                                   a_m1  a_m2  ...  a_mn ]

is an m × n matrix and that

    b = [ b_1
          b_2
          ...
          b_m ]  ∈  V_m.

Then

    Ax = b

is called a matrix equation.

Remark 54 The system of linear equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m,

the vector equation

    x_1 a_1 + x_2 a_2 + ... + x_n a_n = b,

and the matrix equation

    Ax = b

are all equivalent to each other! The only subtle difference is that solutions of the system and of the vector equation are ordered n-tuples

    (x_1, x_2, ..., x_n) = (s_1, s_2, ..., s_n) ∈ R^n,

whereas solutions of the matrix equation are the corresponding vectors

    [ x_1          [ s_1
      x_2            s_2
      ...     =      ...     ∈  V_n.
      x_n ]          s_n ]
1.5: Solution Sets of Linear Systems
Definition 55 A homogeneous linear system is one of the form

    Ax = 0_m

where A is an m × n matrix. A nonhomogeneous linear system is a linear system that is not homogeneous.

Remark 56 Homogeneous systems are consistent! That's because they have the solution x = 0_n.

Definition 57 The solution x = 0_n is called the trivial solution of the homogeneous system Ax = 0_m. A nontrivial solution of the homogeneous system Ax = 0_m is a solution x ∈ V_n such that x ≠ 0_n.

Example 58 The system

    5x + 2y - 3z + w = 0
    3x + y - 5z + w = 0
    x - 3y - 5z + 5w = 0

is homogeneous. A nontrivial solution of this system (as the reader can check) is (x, y, z, w) = (18, -34, -1, -25).
Theorem 59 If every column of A is a pivot column, then the homogeneous system Ax = 0_m has only the trivial solution. If not every column of A is a pivot column, then the homogeneous system Ax = 0_m has infinitely many nontrivial solutions (in addition to the trivial solution).

Example 60 The system given in Example 58 has coefficient matrix

    A = [ 5   2  -3  1
          3   1  -5  1
          1  -3  -5  5 ].

Without doing any computations, we can see that it is not possible that every column of A can be a pivot column. (Why?) Therefore the homogeneous system Ax = 0_m has infinitely many nontrivial solutions.

Theorem 61 Suppose that A is an m × n matrix and suppose that b ∈ V_m. Also suppose that v ∈ V_n is a solution of the homogeneous system Ax = 0_m and that w ∈ V_n is a solution of the system Ax = b. Then v + w is also a solution of Ax = b.
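Theorem 61 can be spot-checked numerically. The sketch below uses a small matrix of our own choosing (not one from the text): v is picked in the null space of A, b is defined as Aw for an arbitrary w, and A(v + w) is compared with b.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 3], [0, 1, 1]]
v = [-1, -1, 1]                   # solves Ax = 0: (-1) + 2(-1) + 3 = 0, (-1) + 1 = 0
w = [1, 1, 1]                     # any vector; define b := Aw so w solves Ax = b
b = matvec(A, w)
vw = [vi + wi for vi, wi in zip(v, w)]
print(matvec(A, v))               # [0, 0]
print(matvec(A, vw) == b)         # True
```

The check works because matrix-vector multiplication is linear: A(v + w) = Av + Aw = 0 + b = b, which is exactly the proof of the theorem.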
1.7: Linear Independence
Definition 62 A set of vectors {v_1, v_2, ..., v_n} in V_m is said to be linearly independent if the vector equation

    x_1 v_1 + x_2 v_2 + ... + x_n v_n = 0_m

has only the trivial solution. Otherwise the set of vectors is said to be linearly dependent.

Example 63 Let v_1 and v_2 be the vectors

    v_1 = [ 4      v_2 = [ 1
            1              1
            4 ],           4 ].

We solve the vector equation

    x_1 v_1 + x_2 v_2 = 0_3

by observing that

    A = [ 4  1        [ 1  0
          1  1    ~     0  1
          4  4 ]        0  0 ].

Since each column of A is a pivot column, the vector equation has only the trivial solution. Therefore the set of vectors {v_1, v_2} is linearly independent.
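For a set of just two vectors, as in Example 63, linear dependence simply means that one vector is a scalar multiple of the other, so no row reduction is needed: it can be tested directly (helper name is ours; this shortcut does not generalize to three or more vectors).

```python
from fractions import Fraction

def two_vectors_independent(v1, v2):
    """True iff neither of v1, v2 is a scalar multiple of the other."""
    # the zero vector makes any set containing it linearly dependent
    if all(a == 0 for a in v1) or all(b == 0 for b in v2):
        return False
    pairs = list(zip(v1, v2))
    # dependent iff v1 = c*v2 for a single scalar c across all components
    ratios = {Fraction(a, b) for a, b in pairs if b != 0}
    mismatch = any(b == 0 and a != 0 for a, b in pairs)
    return mismatch or len(ratios) > 1

print(two_vectors_independent([4, 1, 4], [1, 1, 4]))   # True
print(two_vectors_independent([2, 4], [1, 2]))         # False
```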
Definition 64 Suppose that {v_1, v_2, ..., v_n} is a linearly dependent set of vectors and suppose that (c_1, c_2, ..., c_n) is a nontrivial solution of the vector equation

    x_1 v_1 + x_2 v_2 + ... + x_n v_n = 0_m.

Then the equation

    c_1 v_1 + c_2 v_2 + ... + c_n v_n = 0_m

is called a linear dependence relation for the set of vectors {v_1, v_2, ..., v_n}.

Example 65 Consider the set of vectors

    v_1 = [ 4      v_2 = [  1      v_3 = [ 2      v_4 = [ -4
            1             -1              3              -3
            4 ],          -4 ],           4 ],           -3 ].

We solve the vector equation

    x_1 v_1 + x_2 v_2 + x_3 v_3 + x_4 v_4 = 0_3

by observing that

    [ 4   1  2  -4        [ 1  0  0  -11/40
      1  -1  3  -3    ~     0  1  0  -13/20
      4  -4  4  -3 ]        0  0  1   -9/8  ].

Our conclusion is that the vector equation has nontrivial solutions and that the general solution is

    x_1 = 11t
    x_2 = 26t
    x_3 = 45t
    x_4 = 40t

where t can be any real number. By choosing t = 1, we see that a linear dependence relation for the set of vectors v_1, v_2, v_3, v_4 is

    11v_1 + 26v_2 + 45v_3 + 40v_4 = 0_3.
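A linear dependence relation as in Definition 64 is easy to verify once the weights are known. The sketch below uses vectors of our own (not the ones in Example 65) in which v3 = v1 + 2 v2 by construction, so the weights (1, 2, -1) must give a dependence relation:

```python
# v3 = v1 + 2*v2 by construction, so 1*v1 + 2*v2 + (-1)*v3 = 0.
v1, v2 = [1, 2, 3], [0, 1, 1]
v3 = [a + 2 * b for a, b in zip(v1, v2)]          # [1, 4, 5]
weights = [1, 2, -1]
total = [0, 0, 0]
for c, v in zip(weights, [v1, v2, v3]):
    total = [t + c * vi for t, vi in zip(total, v)]
print(total)   # [0, 0, 0]
```

The same loop applied to the weights and vectors of any worked example is a quick sanity check on a hand computation.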
1.8: Introduction to Linear Transformations
Definition 66 A transformation (or function or mapping), T, from V_n to V_m is a rule that assigns to each vector x ∈ V_n a vector T(x) ∈ V_m. The set V_n is called the domain of T and the set V_m is called the codomain (or target set) of T. If T is a transformation from V_n to V_m, then we write T : V_n → V_m.

Definition 67 Suppose that T : V_n → V_m. For each x ∈ V_n, the vector T(x) ∈ V_m is called the image of x under the action of T. The set of all images of vectors in V_n under the action of T is called the range of T. In set notation, the range of T is

    { y ∈ V_m | y = T(x) for at least one x ∈ V_n }.

Example 68 This example illustrates the terms defined in the preceding two definitions: Consider the transformation T : V_2 → V_3 defined by

    T( [ x    ) = [ x^2
         y ]        4
                    y - x ].

The domain of T is V_2 and the codomain of T is V_3.

The image of the vector

    x = [  4
          -1 ]

is

    T(x) = [ 16
              4
             -5 ].

The range of T consists of all vectors in V_3 that have the form

    [ s
      4
      t ]

where s can be any nonnegative real number and t can be any real number. For example, the vector

    y = [ 6
          4
          9 ]

is in the range of T because

    y = T( [ √6
             9 + √6 ] ).

Note that it is also true that

    y = T( [ -√6
             9 - √6 ] ).

Are there any other vectors x ∈ V_2 such that T(x) = y? The answer to this does not matter as far as determining whether or not y is in the range of T is concerned. We know that y is in the range of T because we know that there exists at least one vector x ∈ V_2 such that T(x) = y.
Definition 69 A transformation T : V_n → V_m is said to be a linear transformation if both of the following conditions are satisfied:

1. T(u + v) = T(u) + T(v) for all vectors u and v in V_n.

2. T(cu) = cT(u) for all vectors u in V_n and for all scalars c.

Example 70 Let T : V_3 → V_2 be defined by

    T( [ x      ) = [ x
         y            x + y + 2z ].
         z ]

We will show that T is a linear transformation: Let

    u = [ x_1        v = [ x_2
          y_1              y_2
          z_1 ]            z_2 ]

be two vectors in V_3. Then

    T(u + v) = T( [ x_1 + x_2      )
                    y_1 + y_2
                    z_1 + z_2 ]

             = [ x_1 + x_2
                 (x_1 + x_2) + (y_1 + y_2) + 2(z_1 + z_2) ]

             = [ x_1 + x_2
                 (x_1 + y_1 + 2z_1) + (x_2 + y_2 + 2z_2) ]

             = [ x_1                  + [ x_2
                 x_1 + y_1 + 2z_1 ]       x_2 + y_2 + 2z_2 ]

             = T(u) + T(v).

Now let

    u = [ x
          y
          z ]

be a vector in V_3 and let c be a scalar. Then

    T(cu) = T( [ cx      ) = [ cx                  = c [ x              = cT(u).
                 cy            cx + cy + 2(cz) ]         x + y + 2z ]
                 cz ]

We have shown that T is a linear transformation.

Definition 71 The zero transformation is the linear transformation T : V_n → V_m defined by T(u) = 0_m for all u ∈ V_n.

Definition 72 The identity transformation is the linear transformation T : V_n → V_n defined by T(u) = u for all u ∈ V_n.

We leave it to the reader to prove that the zero transformation and the identity transformation are both linear transformations.

Example 73 The transformation T : V_2 → V_3 defined by

    T( [ x    ) = [ x^2
         y ]        4
                    y - x ]

is not a linear transformation. We leave it to the reader to explain why not.
1.9: The Matrix of a Linear Transformation
Definition 74 A transformation T : V_n → V_m is said to be onto V_m if the range of T is V_m.

Definition 75 (Alternate Definition) A transformation T : V_n → V_m is said to be onto V_m if each vector y ∈ V_m is the image of at least one vector x ∈ V_n.

Definition 76 A transformation T : V_n → V_m is said to be one-to-one if each vector y ∈ V_m is the image of at most one vector x ∈ V_n.

Definition 77 A transformation T : V_n → V_m that is both onto V_m and one-to-one is called a bijection.

Remark 78 In reference to the dart analogy that we have been using in class: Supposing that V_n is our box of darts and V_m is our dartboard, a transformation T : V_n → V_m is onto V_m if every point on the dartboard gets hit by at least one dart; whereas T is one-to-one if every point on the dartboard either does not get hit or gets hit by exactly one dart. If T is a bijection, then every point on the dartboard gets hit by exactly one dart.
2.2: The Inverse of a Matrix
Definition 79 Suppose that

    A = [ a  b
          c  d ]

is a 2 × 2 matrix. The quantity ad - bc is called the determinant of A. It is denoted either by det(A) or by |A|.

Definition 80 The n × n identity matrix, denoted by I_n, is the matrix that has entries of 1 at all positions along its main diagonal and entries of 0 elsewhere.

Example 81 The 3 × 3 identity matrix is

    I_3 = [ 1  0  0
            0  1  0
            0  0  1 ].

Definition 82 An n × n matrix, A, is said to be invertible if there exists an n × n matrix, B, such that BA = AB = I_n.

Proposition 83 If A is an invertible n × n matrix and B and C are n × n matrices such that BA = AB = I_n and CA = AC = I_n, then B = C. In other words, if A has an inverse, then this inverse is unique.

Definition 84 Suppose that A is an invertible n × n matrix. The inverse of A is defined to be the unique matrix, A^(-1), such that A^(-1) A = A A^(-1) = I_n.

Example 85 The inverse of the matrix

    A = [ 3  0
          1  3 ]

is

    A^(-1) = [  1/3  0
               -1/9  1/3 ].

(The reader should perform the computations to check that A^(-1) A = A A^(-1) = I_2.)
Definition 86 An elementary matrix is a matrix that can be obtained by performing a single elementary row operation on an identity matrix.

Example 87 The matrix

    E = [  1  0  0
           0  1  0
          -9  0  1 ]

is an elementary matrix because it is obtained from I_3 by the elementary row operation -9r_1 + r_3 → r_3.
2.3: Characterizations of Invertible Matrices
Definition 88 A transformation T : V_n → V_n is said to be invertible if there exists a transformation S : V_n → V_n such that S(T(x)) = x for all x ∈ V_n and T(S(x)) = x for all x ∈ V_n.

Proposition 89 If T : V_n → V_n is invertible, then there is a unique transformation S : V_n → V_n such that S(T(x)) = x for all x ∈ V_n and T(S(x)) = x for all x ∈ V_n. This unique transformation is called the inverse of T and is denoted by T^(-1).

Example 90 If T : V_2 → V_2 is the linear transformation that rotates all vectors in V_2 counterclockwise about the origin through an angle of 90°, then T is invertible and T^(-1) : V_2 → V_2 is the linear transformation that rotates all vectors in V_2 clockwise about the origin through an angle of 90°.

Theorem 91 Suppose that A is an n × n matrix and let T : V_n → V_n be the linear transformation defined by T(x) = Ax for all x ∈ V_n. Then the following statements are all equivalent to each other (meaning that all of the statements are true or all of them are false).

1. A is invertible.

2. T is invertible.

3. T is onto V_n.

4. T is one-to-one.

5. A ~ I_n.

6. The columns of A span V_n.

7. The homogeneous equation Ax = 0_n has only the trivial solution.

8. For every b ∈ V_n, the equation Ax = b has a unique solution.
4.1: Vector Spaces and Subspaces
Definition 92 A vector space is a non-empty set of objects, V, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the ten axioms listed below. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d.

1. The sum of u and v, denoted by u + v, is in V.

2. The scalar multiple of u by c, denoted by cu, is in V.

3. u + v = v + u.

4. (u + v) + w = u + (v + w).

5. There is a zero vector, 0, in V such that u + 0 = u.

6. There is a vector -u in V such that u + (-u) = 0.

7. c(u + v) = cu + cv.

8. (c + d)u = cu + du.

9. c(du) = (cd)u.

10. 1u = u.

Definition 93 If V is a vector space and W is a nonempty subset of V, then W is said to be a subspace of V if W is itself a vector space under the same operations of addition and multiplication by scalars that hold in V.

Definition 94 Suppose that v_1, v_2, ..., v_n are vectors in a vector space, V, and suppose that c_1, c_2, ..., c_n are scalars. Then

    c_1 v_1 + c_2 v_2 + ... + c_n v_n

is said to be a linear combination of the vectors v_1, v_2, ..., v_n.
Definition 95 Suppose that {v_1, v_2, ..., v_n} is a set of vectors in a vector space, V. This set of vectors is said to be linearly independent if the equation

    c_1 v_1 + c_2 v_2 + ... + c_n v_n = 0

has only the trivial solution (c_1 = c_2 = ... = c_n = 0). Otherwise the set of vectors is said to be linearly dependent.

Definition 96 Suppose that S = {v_1, v_2, ..., v_n} is a set of vectors in a vector space, V. The span of S, denoted by Span(S), is defined to be the set of all possible linear combinations of the vectors in S.
4.2: Linear Transformations
Definition 97 Suppose that V and W are vector spaces. A transformation T : V → W is said to be a linear transformation if
T(u + v) = T(u) + T(v) for all u and v in V
and
T(cu) = cT(u) for all u in V and all scalars c.
Remark 98 As in our previous discussions of transformations, the vector
space V is called the domain of T and the vector space W is called the
codomain (or target set) of T.
Remark 99 In the above definition, since V and W might be two different vector spaces and hence might have two different operations of addition and multiplication by scalars, we might want to use different symbols to denote these different operations. For example, we might use + and · to denote addition and scalar multiplication in V and use ⊕ and ⊙ to denote addition and scalar multiplication in W. With these notations, the conditions for T to be a linear transformation would appear as
T(u + v) = T(u) ⊕ T(v) for all u and v in V
and
T(c · u) = c ⊙ T(u) for all u in V and all scalars c.
In most cases it is not necessary to use different notations such as this since the operations that are intended are clear from the context of whatever is being discussed.
Definition 100 The nullspace (also called the kernel) of a linear transformation T : V → W is the set of all vectors in V that are transformed by T into the zero vector in W. This space (which is a subspace of V) is denoted by ker(T). Thus
ker(T) = {v ∈ V | T(v) = 0_W}.
Definition 101 The range of a linear transformation T : V → W is the set of all vectors in W that are images under T of at least one vector in V. This space (which is a subspace of W) is denoted by range(T). Thus
range(T) = {w ∈ W | T(v) = w for at least one v ∈ V}.
Another (shorter) way to write this is
range(T) = {T(v) | v ∈ V}.
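For a concrete illustration (an example of our own, not from the text), take T : V_3 → V_2 defined by T(x, y, z) = (x + y, y + z). A vector is in ker(T) when both output components vanish, and a vector of V_2 is in range(T) when it is the image of at least one input:

```python
def T(v):
    """A sample linear transformation from V_3 to V_2."""
    x, y, z = v
    return (x + y, y + z)

# (1, -1, 1) lies in ker(T): it is sent to the zero vector of V_2.
print(T((1, -1, 1)))   # (0, 0)
# (5, 7) lies in range(T): it is the image of, for instance, (5, 0, 7).
print(T((5, 0, 7)))    # (5, 7)
```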
Definition 102 In the case of a linear transformation T : V_n → V_m whose standard matrix is the m × n matrix A, the nullspace of A, denoted by Nul(A), is defined to be the nullspace (or kernel) of T. Thus Nul(A) = ker(T).
Remark 103 Nul(A) is the solution set of the matrix equation Ax = 0_m.
Definition 104 In the case of a linear transformation T : V_n → V_m whose standard matrix is the m × n matrix A, the column space of A, denoted by Col(A), is defined to be the range of T. Thus Col(A) = range(T).
Remark 105 If A = [a_1 a_2 ... a_n], then Col(A) = Span{a_1, a_2, ..., a_n}.
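The remark above reflects the fact that Ax is exactly the linear combination x_1 a_1 + ... + x_n a_n of the columns of A, so every vector of the form Ax lies in the span of the columns. A small check in Python (the matrix and vector are arbitrary illustrative values):

```python
def matvec(A, x):
    """Row-wise matrix-vector product Ax."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def column_combination(A, x):
    """The same product computed as x_1*a_1 + ... + x_n*a_n over A's columns."""
    m, n = len(A), len(A[0])
    return [sum(x[j] * A[i][j] for j in range(n)) for i in range(m)]

A = [[1, 2], [3, 4], [5, 6]]
x = [10, -1]
print(matvec(A, x), column_combination(A, x))  # the two computations agree
```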
4.3: Bases
Definition 106 A basis for a vector space, V, is a finite indexed set of vectors, B = {v_1, v_2, ..., v_n}, in V that is linearly independent and also spans V (meaning that Span(B) = V).
Remark 107 Some vector spaces have "standard bases", meaning bases that are rather obvious without having to do too much checking. For example, the standard basis for V_n is B = {e_1, e_2, ..., e_n} where
e_1 = (1, 0, ..., 0), e_2 = (0, 1, ..., 0), ..., e_n = (0, 0, ..., 1)
(each written as a column vector).
The standard basis for P_n(R) (the set of all polynomial functions with domain R and degree less than or equal to n) is B = {p_0, p_1, p_2, ..., p_n} where
p_0(x) = 1 for all x ∈ R
p_1(x) = x for all x ∈ R
p_2(x) = x^2 for all x ∈ R
...
p_n(x) = x^n for all x ∈ R.
The vector space F(R) (the set of all functions with domain R) does not have a basis because no finite set of functions spans this space.
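Relative to this standard basis, a polynomial in P_n(R) is determined by its list of coefficients. A small sketch (a helper of our own) evaluating p = 2·p_0 + 0·p_1 + 3·p_2, that is, p(x) = 2 + 3x^2:

```python
def poly_eval(coeffs, x):
    """Evaluate c_0 + c_1*x + c_2*x^2 + ..., i.e. a combination of p_0, p_1, p_2, ..."""
    return sum(c * x**k for k, c in enumerate(coeffs))

print(poly_eval([2, 0, 3], 2))   # 2 + 0*2 + 3*4 = 14
```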
4.4: Coordinate Systems
Definition 108 Suppose that V is a vector space, that B = {v_1, v_2, ..., v_n} is a basis for V, and that v is a vector in V. The coordinates of v relative to the basis B are the unique scalars c_1, c_2, ..., c_n such that c_1 v_1 + c_2 v_2 + ... + c_n v_n = v. The coordinate vector of v relative to the basis B is the vector
[v]_B = (c_1, c_2, ..., c_n) ∈ V_n
(written as a column vector). The transformation T : V → V_n defined by T(v) = [v]_B is a linear transformation that is called the coordinate transformation (or coordinate mapping) on V determined by the basis B.
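Finding [v]_B amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. For V_2 the system is 2 × 2 and can be solved directly by Cramer's rule; a sketch (the function name and sample basis are our own):

```python
def coords_2d(b1, b2, v):
    """Coordinates of v relative to the basis {b1, b2} of V_2, via Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1 and b2 are linearly dependent, so {b1, b2} is not a basis")
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (c1, c2)

# [v]_B for B = {(1, 1), (1, -1)} and v = (3, 1): indeed v = 2*(1,1) + 1*(1,-1)
print(coords_2d((1, 1), (1, -1), (3, 1)))   # (2.0, 1.0)
```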
Definition 109 If V and W are vector spaces and T : V → W is a linear transformation that is one-to-one and maps V onto W, then T is said to be an isomorphism of V and W.
Definition 110 If V and W are vector spaces and if there exists an isomorphism T : V → W, then V is said to be isomorphic to W.
Remark 111 Isomorphism is an equivalence relation on the set of all vector
spaces. This is because:
1. Every vector space is isomorphic to itself. (Why?)
2. If V is isomorphic to W, then W is isomorphic to V . (Why?)
3. If V is isomorphic to W and W is isomorphic to X, then V is isomor-
phic to X. (Why?)
4.5: The Dimension of a Vector Space
Definition 112 If V is a vector space that is spanned by a finite set of vectors, then V is said to be finite-dimensional and the dimension of V, denoted by dim(V), is defined to be the number of vectors in a basis for V. The dimension of the zero vector space, V = {0}, is defined to be 0. (Note that the zero vector space has no basis.) If V ≠ {0} and V is not spanned by any finite set of vectors, then we say that V is infinite-dimensional.
4.6: The Rank of a Matrix
Definition 113 If A is an m × n matrix, then the rank of A, denoted by rank(A), is defined to be the dimension of the column space of A. Thus rank(A) = dim(Col(A)).
Remark 114 Since we know that the column space and the row space of any matrix have the same dimension, it is also true that rank(A) = dim(row(A)).
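The rank can be computed by Gaussian elimination: it equals the number of pivots after row reduction. The sketch below (a helper of our own, with a tolerance for floating-point zeros) also illustrates the remark, since it finds the same rank for A and for A^T:

```python
def rank(A, tol=1e-9):
    """Rank of A (given as a list of rows), via elimination with partial pivoting."""
    m = [row[:] for row in A]
    rows, cols = len(m), len(m[0])
    r = 0
    for col in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue  # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][col] / m[r][col]
            for c in range(col, cols):
                m[i][c] -= f * m[r][c]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # the second row is twice the first
At = [list(col) for col in zip(*A)]     # transpose of A
print(rank(A), rank(At))                # both are 2: dim(Col(A)) = dim(row(A))
```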
Direct Sums of Subspaces
Definition 115 If V is a vector space and H and K are subspaces of V such that H ∩ K = {0}, then the direct sum of H and K, which is denoted by H ⊕ K and which is also a subspace of V, is defined to be
H ⊕ K = {u + v | u ∈ H and v ∈ K}.
Remark 116 In order for the direct sum, H ⊕ K, to be defined, it is necessary that H ∩ K = {0}. (This is part of the definition of direct sum.) Whether or not H ∩ K = {0}, the set
H + K = {u + v | u ∈ H and v ∈ K}
is called the sum of H and K. H + K is a subspace of V whether or not H ∩ K = {0}. The difference is that when H ∩ K = {0} (and hence H + K = H ⊕ K), each vector, w, in H ⊕ K can be written uniquely in the form w = u + v where u ∈ H and v ∈ K; whereas this unique representation might not be possible for all vectors in H + K if H ∩ K ≠ {0}.
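The uniqueness of the decomposition can be seen concretely in V_2 with H the x-axis and K the y-axis, whose intersection is {0}. The sketch below (our own example) splits a vector into its H-component and K-component:

```python
def decompose(w):
    """Unique splitting of w in V_2 over H = x-axis and K = y-axis (H ∩ K = {0})."""
    u = (w[0], 0.0)   # the component in H
    v = (0.0, w[1])   # the component in K
    return u, v

u, v = decompose((3.0, -7.0))
print(u, v)   # (3.0, 0.0) (0.0, -7.0); u + v recovers w, and no other split works
```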
Definition 117 Two subspaces, H and K, of a vector space V are said to be complementary to each other if H ⊕ K = V.
Definition 118 Two vectors, u and v, in V_n are said to be orthogonal to each other if u^T v = v^T u = [0].
Remark 119 If u and v are any two vectors in V_n, then u^T v = v^T u. Thus we could simply state the above definition by saying that u and v are orthogonal to each other if u^T v = [0].
Remark 120 If [a] is a 1 × 1 matrix, then we often adopt the convention to just view [a] as a scalar and hence just write a instead of [a]. With this convention, we can say that u and v are orthogonal to each other if u^T v = 0 (without writing the brackets around 0). Also, with this convention, u^T v (for any two vectors, u and v, in V_n) is called the dot product of u and v and is denoted by u · v. For example, if
u = (4, 1, 4) and v = (-1, 1, 4)
(written as column vectors), then
u · v = u^T v = (4)(-1) + (1)(1) + (4)(4) = 13.
Definition 121 Two subspaces, H and K, of V_n are said to be orthogonal to each other if u^T v = 0 for all u ∈ H and v ∈ K.
Definition 122 Two subspaces, H and K, of V_n are said to be orthogonal complements of each other if H and K are orthogonal to each other and H ⊕ K = V_n.
Fundamental Subspaces of a Matrix
Let
A = [a_1 a_2 ... a_n]
be an m × n matrix, where a_1, a_2, ..., a_n are the column vectors of A (each of which has size m × 1) and b_1, b_2, ..., b_m are the row vectors of A (each of which has size 1 × n). The following four subspaces (two of which are subspaces of V_n and the other two of which are subspaces of V_m) are called the fundamental subspaces of the matrix A:
1. The row space of A,
row(A) = Span{b_1^T, b_2^T, ..., b_m^T},
is a subspace of V_n.
2. The null space of A,
Nul(A) = {x ∈ V_n | Ax = 0_m},
is a subspace of V_n.
3. The column space of A,
Col(A) = Span{a_1, a_2, ..., a_n},
is a subspace of V_m.
4. The null space of A^T,
Nul(A^T) = {x ∈ V_m | A^T x = 0_n},
is a subspace of V_m.