
2

Operations with Matrices


We shall now introduce a number of standard operations
that can be performed on matrices, among them addition,
scalar multiplication and multiplication. We shall then
describe the principal properties of these operations. Our
object in so doing is to develop a systematic means of
performing calculations with matrices.

Addition and Subtraction


Let A and B be two 𝑚 × 𝑛 matrices; as usual write 𝑎𝑖𝑗 and
𝑏𝑖𝑗 for their respective (𝑖, 𝑗) entries. Define the sum A+B to
be the 𝑚 × 𝑛 matrix whose (𝑖, 𝑗) entry is 𝑎𝑖𝑗 + 𝑏𝑖𝑗 ; thus to
form the matrix 𝑨 + 𝑩 we simply add corresponding entries
of A and B. Similarly, the difference 𝑨 − 𝑩 is the 𝑚 × 𝑛
matrix whose (𝑖, 𝑗) entry is 𝑎𝑖𝑗 − 𝑏𝑖𝑗 . However 𝑨 + 𝑩 and
𝑨 − 𝑩 are not defined if A and B do not have the same
numbers of rows and columns.
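As a quick numerical check (a sketch assuming the NumPy library is available; the matrices are illustrative), entrywise addition and subtraction look like this, and the shape test mirrors the requirement that A and B have the same numbers of rows and columns:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [-1, 0, 1]])
B = np.array([[1, 1, 1],
              [0, -3, 1]])

assert A.shape == B.shape  # A + B and A - B are defined only for equal shapes
S = A + B  # (i, j) entry is a_ij + b_ij
D = A - B  # (i, j) entry is a_ij - b_ij
```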

Scalar Multiplication
By a scalar we shall mean a number, as opposed to a
matrix or array of numbers. Let c be a scalar and A be an
𝑚 × 𝑛 matrix. The scalar multiple cA is the 𝑚 × 𝑛 matrix
whose (𝑖, 𝑗) entry is 𝑐𝑎𝑖𝑗 . Thus to form cA we multiply every
entry of A by the scalar c. The matrix (−1)𝑨 is usually
written –A; it is called the negative of A since it has the
property that 𝑨 + (−𝑨) = 𝟎.
Example
If
$$A = \begin{bmatrix} 1 & 2 & 0 \\ -1 & 0 & 1 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 1 & 1 \\ 0 & -3 & 1 \end{bmatrix},$$
then
$$2A + 3B = \begin{bmatrix} 5 & 7 & 3 \\ -2 & -9 & 5 \end{bmatrix} \quad\text{and}\quad 2A - 3B = \begin{bmatrix} -1 & 1 & -3 \\ -2 & 9 & -1 \end{bmatrix}.$$
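This kind of scalar combination is a one-liner in NumPy terms (a sketch, assuming the NumPy library; scalar multiplication scales every entry, so expressions such as 2A + 3B follow directly from the two operations):

```python
import numpy as np

A = np.array([[1, 2, 0],
              [-1, 0, 1]])
B = np.array([[1, 1, 1],
              [0, -3, 1]])

# scalar multiplication scales every entry; combine with addition/subtraction
C1 = 2 * A + 3 * B
C2 = 2 * A - 3 * B
```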

Matrix Multiplication
It is less obvious what the “natural” definition of the
product of two matrices should be. Let us start with the
simplest interesting case, and consider a pair of 2 × 2
matrices
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}.$$
In order to motivate the definition of the matrix product 𝑨𝑩
we consider two sets of linear equations
$$\begin{aligned} a_{11}y_1 + a_{12}y_2 &= x_1 \\ a_{21}y_1 + a_{22}y_2 &= x_2 \end{aligned} \qquad\text{and}\qquad \begin{aligned} b_{11}z_1 + b_{12}z_2 &= y_1 \\ b_{21}z_1 + b_{22}z_2 &= y_2 \end{aligned}$$
Observe that the coefficient matrices of these linear
systems are 𝑨 and 𝑩 respectively. We shall think of these
equations as representing changes of variables from 𝑦1 , 𝑦2
to 𝑥1 , 𝑥2 , and from 𝑧1 , 𝑧2 to 𝑦1 , 𝑦2 respectively.
Suppose that we replace 𝑦1 and 𝑦2 in the first set of
equations by the values specified in the second set. After
simplification we obtain a new set of equations.
$$(a_{11}b_{11} + a_{12}b_{21})z_1 + (a_{11}b_{12} + a_{12}b_{22})z_2 = x_1$$
$$(a_{21}b_{11} + a_{22}b_{21})z_1 + (a_{21}b_{12} + a_{22}b_{22})z_2 = x_2$$
This has coefficient matrix
$$\begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$
and represents a change of variables from 𝑧1 , 𝑧2 to 𝑥1 , 𝑥2
which may be thought of as the composite of the original
changes of variables.
At first sight this new matrix looks formidable. However, it
is in fact obtained from A and B in quite a simple fashion,
namely by the row-times-column rule. For example, the
(1, 2) entry arises from multiplying corresponding entries of
row 1 of A and column 2 of B, and then adding the
resulting numbers; thus
$$\begin{bmatrix} a_{11} & a_{12} \end{bmatrix} \begin{bmatrix} b_{12} \\ b_{22} \end{bmatrix} \longrightarrow a_{11}b_{12} + a_{12}b_{22}.$$
Other entries arise in a similar fashion from a row of A and
a column of B.
Having made this observation, we are now ready to define
the product AB where A is an 𝑚 × 𝑛 matrix and B is an 𝑛 × 𝑝
matrix. The rule is that the 𝑖, 𝑗 entry of AB is obtained by
multiplying corresponding entries of row 𝑖 of 𝑨 and column
𝑗 of B, and then adding up the resulting products. This is
the row-times-column rule. Now row 𝑖 of A and column 𝑗 of
B are
$$\begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{nj} \end{bmatrix}.$$
Hence the $(i, j)$ entry of AB is
$$a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj},$$
which can be written more concisely using the summation notation as
$$\sum_{k=1}^{n} a_{ik}b_{kj}.$$

Notice that the rule only makes sense if the number of columns of A equals the number of rows of B. Also the product of an 𝑚 × 𝑛 matrix and an 𝑛 × 𝑝 matrix is an 𝑚 × 𝑝 matrix.
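The row-times-column rule above can be transcribed directly into code (a sketch in plain Python; `mat_mul` is an illustrative helper, not part of the text, and practical code would use a library routine instead):

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows, by the row-times-column rule."""
    m, n = len(A), len(A[0])
    p = len(B[0])
    # the rule only makes sense if columns of A match rows of B
    assert n == len(B), "number of columns of A must equal number of rows of B"
    # (i, j) entry of AB is a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

The result of multiplying an m × n by an n × p matrix is then an m × p list of rows.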

Example
Let
$$A = \begin{bmatrix} 2 & 1 & -1 \\ 3 & 0 & 2 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 0 & 2 & 0 \\ 1 & 1 & 1 \\ 1 & 5 & -1 \end{bmatrix}.$$
Since A is 2 × 3 and B is 3 × 3, we see that AB is defined and is a 2 × 3 matrix. However BA is not defined. Using the row-times-column rule, we quickly find that
$$AB = \begin{bmatrix} 0 & 0 & 2 \\ 2 & 16 & -2 \end{bmatrix}.$$

Example
Let
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}.$$
In this case both AB and BA are defined, but these
matrices are different:
$$AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = 0_{2 \times 2} \quad\text{and}\quad BA = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
Thus already we recognize some interesting features of
matrix multiplication. The matrix product is not
commutative, that is, AB and BA may be different when
both are defined; also the product of two non-zero matrices
can be zero, a phenomenon which indicates that any theory
of division by matrices will face considerable difficulties.
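Both phenomena are easy to reproduce numerically (a sketch assuming the NumPy library; `@` is NumPy's matrix-product operator):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 1],
              [0, 0]])

AB = A @ B  # the zero matrix, even though A and B are both non-zero
BA = B @ A  # a different, non-zero matrix, so AB != BA
```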
Conformable Matrices: The matrices A and B are
conformable if their product AB is defined, that is, if the
number of columns of A is equal to the number of rows of
B.

Powers of a Matrix
Once matrix products have been defined, it is clear how to
define a non-negative power of a square matrix. Let 𝑨 be an 𝑛 × 𝑛 matrix. Then the 𝑚th power of 𝑨, where 𝑚 is a non-negative integer, is defined by the equations
$$A^0 = I_n \quad\text{and}\quad A^{m+1} = A^m A.$$
This is an example of a recursive definition: the first equation specifies $A^0$, while the second shows how to define $A^{m+1}$ under the assumption that $A^m$ has already been defined. Thus $A^1 = A$, $A^2 = AA$, $A^3 = A^2A$, etc. We do not attempt to define negative powers at this juncture.
Example
Let $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$. Then
$$A^2 = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad A^3 = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \quad\text{and}\quad A^4 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
The reader can verify that higher powers of 𝑨 do not lead to
new matrices in this example. Therefore 𝑨 has just four
distinct powers, 𝑨0 = 𝐼2 , 𝑨1 = 𝑨, 𝑨2 and 𝑨3 .
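The recursive definition translates line for line into code (a sketch assuming the NumPy library; `mat_pow` is an illustrative helper, not part of the text), and the cycle of four powers above can be checked directly:

```python
import numpy as np

def mat_pow(A, m):
    """Non-negative power by the recursive definition A^0 = I, A^(m+1) = A^m A."""
    if m == 0:
        return np.eye(A.shape[0], dtype=A.dtype)
    return mat_pow(A, m - 1) @ A

A = np.array([[0, 1],
              [-1, 0]])
```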

The Transpose of a Matrix


If 𝑨 is an 𝑚 × 𝑛 matrix, the transpose of 𝑨, denoted by 𝑨𝑇 ,
is the 𝑛 × 𝑚 matrix whose 𝑖, 𝑗 entry equals the 𝑗, 𝑖 entry
of 𝑨. Thus the columns of 𝑨 become the rows of 𝑨𝑇 . For
example, if
$$A = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix},$$
then the transpose of 𝑨 is
$$A^T = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix}.$$
A matrix which equals its transpose is called symmetric. On the other hand, if 𝑨𝑇 equals −𝑨, then 𝑨 is said to be skew-symmetric. For example, the matrices
$$\begin{bmatrix} a & b \\ b & c \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix}$$
are symmetric and skew-symmetric respectively. Clearly
symmetric matrices and skew-symmetric matrices must be
square. We shall see in the coming module that symmetric
matrices can be in a real sense reduced to diagonal
matrices.
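Numerically, the transpose and the two symmetry conditions are easy to test (a sketch assuming the NumPy library; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])   # 3 x 2
At = A.T                 # 2 x 3: the columns of A become the rows of A^T

S = np.array([[1, 2],
              [2, 3]])   # symmetric: S^T == S
K = np.array([[0, -5],
              [5, 0]])   # skew-symmetric: K^T == -K
```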

Determinant of a Matrix: The determinant of the 𝑚 × 𝑚 matrix 𝑨 = [𝑎𝑖𝑗] is defined as
$$|A| = \det A \equiv \sum (-1)^p \, a_{1i_1} a_{2i_2} \cdots a_{m i_m},$$
where the sum is taken over all products consisting of precisely one element from each row and each column of A, multiplied by −1 or 1 according as the permutation $(i_1, \ldots, i_m)$ is odd or even, respectively.
Singular Matrix: An 𝑚×𝑚 matrix A is singular if
det 𝑨 = 0.
Non-singular Matrix: An 𝑚 × 𝑚 matrix A is non-singular
if det 𝑨 ≠ 0.
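The permutation-sum definition can be transcribed directly (a sketch in plain Python; `det` and `perm_sign` are illustrative helpers, and practical code would use an LU factorization instead, since this sum has m! terms):

```python
from itertools import permutations

def perm_sign(p):
    """+1 for an even permutation, -1 for an odd one (counted by inversions)."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant as the signed sum over all products with one element
    taken from each row and each column."""
    m = len(A)
    total = 0
    for p in permutations(range(m)):
        term = perm_sign(p)
        for row in range(m):
            term *= A[row][p[row]]
        total += term
    return total
```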
Adjoint of a Matrix: For 𝑚 ≥ 2, the 𝑚 × 𝑚 matrix $A^{adj} = [\operatorname{cof}(a_{ij})]^T$ is the adjoint of the 𝑚 × 𝑚 matrix 𝐴 = [𝑎𝑖𝑗]. Here $\operatorname{cof}(a_{ij})$ is the cofactor of $a_{ij}$. For instance, for 𝑚 = 3,
$$A^{adj} = \begin{bmatrix}
\det\begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} &
-\det\begin{bmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{bmatrix} &
\det\begin{bmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{bmatrix} \\[4pt]
-\det\begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix} &
\det\begin{bmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{bmatrix} &
-\det\begin{bmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{bmatrix} \\[4pt]
\det\begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} &
-\det\begin{bmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{bmatrix} &
\det\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\end{bmatrix}$$

For 𝑚 = 1, $A^{adj}$ is defined to be 1.
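A numerical sketch of the adjoint (assuming the NumPy library; `adjoint` is an illustrative helper that uses `np.linalg.det` for the minors) also checks the characteristic identity $A \, A^{adj} = (\det A) I$:

```python
import numpy as np

def adjoint(A):
    """Transpose of the cofactor matrix; cof(a_ij) = (-1)^(i+j) * minor(a_ij)."""
    m = A.shape[0]
    C = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            # minor: determinant after deleting row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
adj = adjoint(A)  # satisfies A @ adj = det(A) * I
```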

The Inverse of a Square Matrix


An 𝑛 × 𝑛 matrix 𝑨 is said to be invertible if there is an 𝑛 × 𝑛
matrix 𝑩 such that
𝑨𝑩 = 𝑰𝑛 = 𝑩𝑨
Then 𝑩 is called an inverse of 𝑨.

Invertible Matrix: An (𝑚 × 𝑚) matrix A is invertible if $A^{-1}$ exists.
Example
Show that the matrix
$$\begin{bmatrix} 1 & 3 \\ 3 & 9 \end{bmatrix}$$
is not invertible.
Solution: If $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ were an inverse of the matrix, then we should have
$$\begin{bmatrix} 1 & 3 \\ 3 & 9 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$$
which leads to a set of linear equations with no solutions,
𝑎 + 3𝑐 = 1
𝑏 + 3𝑑 = 0
3𝑎 + 9𝑐 = 0
3𝑏 + 9𝑑 = 1
Indeed the first and third equations clearly contradict each
other. Hence the matrix is not invertible.
Example
Show that the matrix
$$A = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}$$
is invertible and find an inverse for it.
Solution: Suppose that $B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is an inverse of 𝑨. Write
out the product 𝑨𝑩 and set it equal to 𝑰2 , just as in the
previous example. This time we get a set of linear equations
that has a solution,
𝑎 − 2𝑐 = 1
𝑏 − 2𝑑 = 0
𝑐=0
𝑑=1
Indeed there is a unique solution: 𝑎 = 1, 𝑏 = 2, 𝑐 = 0, 𝑑 = 1. Thus
$$B = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.$$
To make sure that 𝑩 is really an inverse of 𝑨, we need to
verify that 𝑩𝑨 is also equal to 𝑰2 ; this is in fact true, as the
reader should check.
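Both checks are immediate with a library inverse (a sketch assuming the NumPy library, whose `np.linalg.inv` computes the inverse of a nonsingular matrix):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [0.0, 1.0]])
B = np.linalg.inv(A)

# an inverse must work on both sides: AB = I = BA
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
```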
We now present some important facts about inverses of
matrices.
Theorem
A square matrix has at most one inverse.
Proof: Suppose that a square matrix 𝑨 has two inverses 𝑩1
and 𝑩2 . Then
$$AB_1 = AB_2 = I = B_1A = B_2A.$$
The idea of the proof is to consider the product $(B_1A)B_2$; since $B_1A = I$, this equals $IB_2 = B_2$. On the other hand, by the associative law (which we will discuss in the next module) it also equals $B_1(AB_2)$, which equals $B_1I = B_1$. Therefore $B_1 = B_2$.
Note: From now on we shall write $A^{-1}$ for the unique inverse of an invertible matrix 𝑨.
Theorem
a. If 𝑨 is an invertible matrix, then $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$.
b. If 𝑨 and 𝑩 are invertible matrices of the same size, then 𝑨𝑩 is invertible and $(AB)^{-1} = B^{-1}A^{-1}$.
Proof:
a. Certainly we have 𝑨𝑨−1 = 𝑰 = 𝑨−1 𝑨, equations which
can be viewed as saying that 𝑨 is an inverse of 𝑨−1 .
Therefore, since 𝑨−1 cannot have more than one
inverse, its inverse must be 𝑨.
b. To prove the assertion we have only to check that $B^{-1}A^{-1}$ is an inverse of 𝑨𝑩. This is easily done: $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1}$, by two applications of the associative law (which we will discuss in the next module); the latter matrix equals $AIA^{-1} = AA^{-1} = I$. Similarly $(B^{-1}A^{-1})(AB) = I$. Since inverses are unique, $(AB)^{-1} = B^{-1}A^{-1}$.

Generalized Inverse Matrix: An 𝑛 × 𝑚 matrix $A^-$ is a generalized inverse of the 𝑚 × 𝑛 matrix A if it satisfies $AA^-A = A$.
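Even a singular matrix has generalized inverses. As a sketch (assuming the NumPy library), the Moore-Penrose pseudoinverse computed by `np.linalg.pinv` is one particular choice satisfying the defining property:

```python
import numpy as np

# this matrix is singular, so no ordinary inverse exists
A = np.array([[1.0, 3.0],
              [3.0, 9.0]])
A_g = np.linalg.pinv(A)  # the Moore-Penrose pseudoinverse is one generalized inverse

assert np.allclose(A @ A_g @ A, A)  # defining property: A A^- A = A
```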

Trace of a Matrix: The trace of an 𝑚 × 𝑚 matrix 𝑨 = [𝑎𝑖𝑗] is
$$\operatorname{tr} A \equiv a_{11} + \cdots + a_{mm} = \sum_{i=1}^{m} a_{ii}.$$

Unitary Matrix: An (𝑚 × 𝑚) matrix A is unitary if $A^* = A^{-1}$, where $A^*$ is the conjugate transpose (i.e., $\overline{A}^T$) of A.
Similar Matrices: The 𝑚 × 𝑚 matrices A and B are similar if an invertible matrix P exists such that $B = PAP^{-1}$.
Minor of an Element of a Matrix: For an (𝑚 × 𝑚) matrix 𝑨 = [𝑎𝑖𝑗], the minor of an element $a_{ij}$ is the determinant of the matrix obtained by deleting the $i$th row and $j$th column from A:
$$\operatorname{minor}(a_{ij}) \equiv \det\begin{bmatrix}
a_{1,1} & \cdots & a_{1,j-1} & a_{1,j+1} & \cdots & a_{1,m} \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{i-1,1} & \cdots & a_{i-1,j-1} & a_{i-1,j+1} & \cdots & a_{i-1,m} \\
a_{i+1,1} & \cdots & a_{i+1,j-1} & a_{i+1,j+1} & \cdots & a_{i+1,m} \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{m,1} & \cdots & a_{m,j-1} & a_{m,j+1} & \cdots & a_{m,m}
\end{bmatrix}$$

Cofactor of an Element of a Matrix: For an 𝑚 × 𝑚 matrix 𝑨 = [𝑎𝑖𝑗], the cofactor of the $ij$th element $a_{ij}$ is
$$\operatorname{cof}(a_{ij}) \equiv (-1)^{i+j}\operatorname{minor}(a_{ij}).$$

Cofactor Matrix: The (𝑚 × 𝑚) matrix $[\operatorname{cof}(a_{ij})]$ is the cofactor matrix of the (𝑚 × 𝑚) matrix 𝐴 = [𝑎𝑖𝑗].

Nilpotent Matrix: An (𝑚 × 𝑚) matrix 𝑨 is nilpotent if there exists a positive integer $i$ such that $A^i = 0$. The smallest positive integer $i$ such that $A^i = 0$ is called the index of 𝑨.

Idempotent Matrix: An 𝑚 × 𝑚 matrix 𝑨 is idempotent if $A^2 = A$.
Normal Matrix: An (𝑚 × 𝑚) matrix 𝑨 is normal if $A^*A = AA^*$.
Orthogonal Matrix: An (𝑚 × 𝑚) matrix 𝑨 is orthogonal if $A^T = A^{-1}$.
Imaginary Matrix: An 𝑚 × 𝑚 matrix A is an imaginary matrix if it can be represented as 𝑨 = 𝑖𝑩, where B is a real matrix and $i = \sqrt{-1}$. That is, all elements of A are imaginary numbers or zero.

Involutory Matrix: An (𝑚 × 𝑚) matrix 𝑨 = [𝑎𝑖𝑗] is called involutory if $A^2 = I_m$. For example,
$$\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$$
is involutory.

Kronecker Matrix: An (𝑚 × 𝑚) matrix 𝑨 = [𝑎𝑖𝑗] with
$$a_{ij} = \begin{cases} 0 & \text{for } i \neq j, \\ c & \text{for } i = j = k, \\ 1 & \text{for } i = j \neq k \end{cases}$$
for some $k \in \{1, \ldots, m\}$ is a Kronecker matrix. For instance,
$$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & c & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}$$
is a Kronecker matrix (here $k = 2$).
Kronecker Product: The Kronecker product or direct product of two matrices $A = [a_{ij}]$ (𝑚 × 𝑛) and $B = [b_{ij}]$ (𝑝 × 𝑞) is defined as
$$A \otimes B \equiv \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix} \quad (mp \times nq).$$
Direct Sum: The direct sum of two matrices $A = [a_{ij}]$ (𝑚 × 𝑚) and $B = [b_{ij}]$ (𝑛 × 𝑛) is
$$A \oplus B \equiv \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} \quad ((m+n) \times (m+n)).$$
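Both constructions are short in NumPy terms (a sketch assuming the NumPy library; `np.kron` computes the Kronecker product, and `direct_sum` is an illustrative helper for the block-diagonal direct sum):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)  # block (i, j) is a_ij * B, giving an (mp x nq) matrix

def direct_sum(A, B):
    """Block-diagonal matrix [[A, 0], [0, B]] for square A (m x m) and B (n x n)."""
    m, n = A.shape[0], B.shape[0]
    out = np.zeros((m + n, m + n), dtype=A.dtype)
    out[:m, :m] = A
    out[m:, m:] = B
    return out

D = direct_sum(A, B)
```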

Diagonalizable Matrix: An 𝑚 × 𝑚 matrix A is diagonalizable if a nonsingular 𝑚 × 𝑚 matrix P exists such that $PAP^{-1}$ is a diagonal matrix.

Echelon Form: An 𝑚 × 𝑛 matrix 𝑨 = [𝑎𝑖𝑗] is in echelon form if for any row $i$, either $a_{ij} = 0$ for $j = 1, \ldots, n$, or there exists a $k \in \{1, \ldots, n\}$ such that $a_{ik} = 1$, $a_{ij} = 0$ for $j < k$, and $a_{lk} = 0$ for $l \neq i$. For example,
$$\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & a_{25} \\ 0 & 0 & 0 & 1 & a_{35} \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
is in echelon form.
Elementary Matrix Operations: The following
modifications of a matrix are called elementary operations:
i. Interchanging two rows or two columns,
ii. Multiplying any row or column by a nonzero
number,
iii. Adding a multiple of one row to another row,
iv. Adding a multiple of one column to another column.
Elementary Matrix: An 𝑚 × 𝑚 matrix is called elementary if it is obtained by applying a single elementary matrix operation to the 𝑚 × 𝑚 identity matrix $I_m$.
Hadamard Matrix: A Hadamard matrix $H_k$, $k = 0, 1, \ldots$, is a $2^k \times 2^k$ matrix which has only elements 1 and −1 and which is obtained recursively as
$$H_k = \begin{bmatrix} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes H_{k-1}, \quad k = 1, 2, \ldots,$$
starting with $H_0 = 1$. For instance,
$$H_2 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}.$$
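The recursion is exactly a repeated Kronecker product, so it is a short loop in code (a sketch assuming the NumPy library; `hadamard` is an illustrative helper following the recursion above):

```python
import numpy as np

def hadamard(k):
    """Build H_k recursively: H_0 = [1], H_k = [[1, 1], [1, -1]] kron H_(k-1)."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return H

H2 = hadamard(2)
```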

Hadamard Product: The Hadamard product (or Schur product, or elementwise product) of the two matrices $A = [a_{ij}]$ (𝑚 × 𝑛) and $B = [b_{ij}]$ (𝑚 × 𝑛) is defined as
$$A \odot B \equiv [a_{ij}b_{ij}] \quad (m \times n).$$

Absolute Value of a Matrix: The absolute value of an 𝑚 × 𝑛 matrix 𝑨 = [𝑎𝑖𝑗] is defined as
$$A_{\mathrm{abs}} \equiv [(a_{ij})_{\mathrm{abs}}] = \begin{bmatrix} (a_{11})_{\mathrm{abs}} & (a_{12})_{\mathrm{abs}} & \cdots & (a_{1n})_{\mathrm{abs}} \\ (a_{21})_{\mathrm{abs}} & (a_{22})_{\mathrm{abs}} & \cdots & (a_{2n})_{\mathrm{abs}} \\ \vdots & \vdots & & \vdots \\ (a_{m1})_{\mathrm{abs}} & (a_{m2})_{\mathrm{abs}} & \cdots & (a_{mn})_{\mathrm{abs}} \end{bmatrix} \quad (m \times n),$$
where $(c)_{\mathrm{abs}}$ denotes the modulus of the complex number $c = c_1 + ic_2$, defined as $(c)_{\mathrm{abs}} = \sqrt{c_1^2 + c_2^2} = \sqrt{c\bar{c}}$, where $\bar{c}$ is the complex conjugate of c.
