
Appendix C

Vector-Matrix Algebra

C-1 INTRODUCTION

In deriving mathematical models of modern dynamic systems, one finds that the differential equations involved may become very complicated due to the multiplicity of inputs and outputs. To simplify the mathematical expressions of the system equations, it is advantageous to use vector-matrix notation, such as that used in the state-space representation of dynamic systems. For theoretical work, the notational simplicity gained by using vector-matrix operations is most convenient and is, in fact, essential for the analysis and design of modern dynamic systems. With vector-matrix notation, one can handle large, complex problems with ease by following the systematic format of representing the system equations and dealing with them mathematically by computer.

The principal objective of this appendix is to present definitions associated with matrices and the basic matrix algebra necessary for the analysis of dynamic systems.

C-2 DEFINITIONS ASSOCIATED WITH MATRICES

Matrix. A matrix is defined as a rectangular array of elements that may be real numbers, complex numbers, functions, or operators. The number of columns, in general, is not necessarily the same as the number of rows. Consider the matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$

where $a_{ij}$ denotes the (i, j)th element of A. This matrix has n rows and m columns and is called an n × m matrix. The first index represents the row number, the second index the column number. The matrix A is sometimes written $(a_{ij})$.

Equality of two matrices. Two matrices are said to be equal if and only if their corresponding elements are equal. Note that equal matrices must have the same number of rows and the same number of columns.

Vector. A matrix having only one column, such as

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

is called a column vector. A column vector having n elements is called an n-vector or n-dimensional vector.

A matrix having only one row, such as

$$\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}$$

is called a row vector.

Square matrix. A square matrix is a matrix in which the number of rows is equal to the number of columns. A square matrix is sometimes called a matrix of order n, where n is the number of rows (or columns).

Diagonal matrix. If all the elements of a square matrix A other than the main-diagonal elements are zero, A is called a diagonal matrix and is written as

$$A = \begin{bmatrix} a_{11} & & & 0 \\ & a_{22} & & \\ & & \ddots & \\ 0 & & & a_{nn} \end{bmatrix} = (a_{ij}\,\delta_{ij})$$

where the $\delta_{ij}$ are the Kronecker deltas, defined by

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

Note that all of the elements that are not explicitly written in the foregoing matrix are zero. The diagonal matrix is sometimes written

$$\mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})$$

Identity matrix or unity matrix. The identity matrix or unity matrix I is a matrix whose elements on the main diagonal are equal to unity and whose other elements are equal to zero; that is,

$$I = \begin{bmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{bmatrix} = \mathrm{diag}(1, 1, \ldots, 1)$$

Zero matrix. A zero matrix is a matrix whose elements are all zero.

Determinant of a matrix. For each square matrix, there exists a determinant. The determinant has the following properties:

1. If any two consecutive rows or columns are interchanged, the determinant changes its sign.
2. If any row or any column consists only of zeros, then the value of the determinant is zero.
3. If the elements of any row (or any column) are exactly k times those of another row (or another column), then the value of the determinant is zero.
4. If, to any row (or any column), any constant times another row (or column) is added, the value of the determinant remains unchanged.
5. If a determinant is multiplied by a constant, then only one row (or one column) is multiplied by that constant. Note, however, that the determinant of k times an n × n matrix A is $k^n$ times the determinant of A, or

   $$|kA| = k^n|A|$$

6. The determinant of the product of two square matrices A and B is the product of the determinants, or

   $$|AB| = |A|\,|B|$$
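Properties 5 and 6 are easy to check numerically. The following sketch uses NumPy (our choice of tool; the appendix itself is tool-agnostic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 2.5

# Property 5: |kA| = k^n |A| for an n x n matrix A
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))

# Property 6: |AB| = |A||B|
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```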
Singular matrix. A square matrix is called singular if the associated determinant is zero. In a singular matrix, not all the rows (or not all the columns) are independent of each other.

Nonsingular matrix. A square matrix is called nonsingular if the associated determinant is nonzero.

Transpose. If the rows and columns of an n × m matrix A are interchanged, the resulting m × n matrix is called the transpose of A. The transpose of the matrix A is denoted by A'. That is, if

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$

then

$$A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$$

Note that (A')' = A.

Symmetric matrix. If a square matrix A is equal to its transpose, or


A = A'
then the matrix A is called a symmetric matrix.

Skew-symmetric matrix. If a square matrix A is equal to the negative of


its transpose, or

A = -A'
then the matrix A is called a skew-symmetric matrix.

Conjugate matrix. If the complex elements of a matrix A are replaced by their respective conjugates, then the resulting matrix is called the conjugate of A and is denoted by $\bar{A} = (\bar{a}_{ij})$, where $\bar{a}_{ij}$ is the complex conjugate of $a_{ij}$. For example, if

$$A = \begin{bmatrix} 0 & -1+j & -3-j3 & -1+j4 \\ -1+j & 1 & -1 & -2+j3 \end{bmatrix}$$

then

$$\bar{A} = \begin{bmatrix} 0 & -1-j & -3+j3 & -1-j4 \\ -1-j & 1 & -1 & -2-j3 \end{bmatrix}$$
Conjugate transpose. The conjugate transpose is the conjugate of the transpose of a matrix. Given a matrix A, the conjugate transpose is denoted by $\bar{A}'$ or $A^*$; that is,

$$A^* = \overline{(A')} = (\bar{A})'$$

Note that $(A^*)^* = A$. If A is a real matrix (i.e., a matrix whose elements are real), the conjugate transpose $A^*$ is the same as the transpose $A'$.

Hermitian matrix. A matrix whose elements are complex quantities is called a complex matrix. If a complex matrix A satisfies the relationship

$$A = A^* \quad \text{or} \quad a_{ij} = \bar{a}_{ji}$$

where $\bar{a}_{ji}$ is the complex conjugate of $a_{ji}$, then A is called a Hermitian matrix. An example is

$$A = \begin{bmatrix} 1 & 4-j3 \\ 4+j3 & 2 \end{bmatrix}$$

If a Hermitian matrix A is written as A = B + jC, where B and C are real matrices, then

$$B = B' \quad \text{and} \quad C = -C'$$

In the preceding example,

$$A = B + jC = \begin{bmatrix} 1 & 4 \\ 4 & 2 \end{bmatrix} + j\begin{bmatrix} 0 & -3 \\ 3 & 0 \end{bmatrix}$$

Skew-Hermitian matrix. If a matrix A satisfies the relationship

$$A = -A^*$$

then A is called a skew-Hermitian matrix. An example is

$$A = \begin{bmatrix} j5 & -2+j3 \\ 2+j3 & 0 \end{bmatrix}$$

If a skew-Hermitian matrix A is written as A = B + jC, where B and C are real matrices, then

$$B = -B' \quad \text{and} \quad C = C'$$

In the present example,

$$A = B + jC = \begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix} + j\begin{bmatrix} 5 & 3 \\ 3 & 0 \end{bmatrix}$$
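Both decompositions can be checked numerically. The sketch below uses NumPy (an assumption on our part) with the Hermitian matrix from the text and a skew-Hermitian matrix built as M − M*, a construction that is always skew-Hermitian:

```python
import numpy as np

# Hermitian example from the text: A = A*, so B = B' and C = -C'
A = np.array([[1, 4 - 3j],
              [4 + 3j, 2]])
assert np.allclose(A, A.conj().T)      # A is Hermitian
B, C = A.real, A.imag                  # A = B + jC
assert np.allclose(B, B.T)             # B is symmetric
assert np.allclose(C, -C.T)            # C is skew-symmetric

# A skew-Hermitian matrix: S = M - M* for any square M
M = np.array([[1j, 2],
              [3, 4j]])
S = M - M.conj().T
assert np.allclose(S, -S.conj().T)     # S is skew-Hermitian
Bs, Cs = S.real, S.imag                # S = Bs + jCs
assert np.allclose(Bs, -Bs.T)          # Bs is skew-symmetric
assert np.allclose(Cs, Cs.T)           # Cs is symmetric
```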

C-3 MATRIX ALGEBRA

This section presents the essentials of matrix algebra, as well as additional defini­
tions. It is important to remember that some matrix operations obey the same rules
as those in ordinary algebra, but others do not.

Addition and subtraction of matrices. Two matrices A and B can be added if they have the same number of rows and the same number of columns. If A = $(a_{ij})$ and B = $(b_{ij})$, then

$$A + B = (a_{ij} + b_{ij})$$

Thus, each element of A is added to the corresponding element of B. Similarly, subtraction of matrices is defined by

$$A - B = (a_{ij} - b_{ij})$$

As an example, consider

$$A = \begin{bmatrix} 0 & 3 & 3 \\ 4 & 5 & 6 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 4 & 3 & 3 \\ 1 & 4 & 1 \end{bmatrix}$$

Then

$$A + B = \begin{bmatrix} 4 & 6 & 6 \\ 5 & 9 & 7 \end{bmatrix} \quad \text{and} \quad A - B = \begin{bmatrix} -4 & 0 & 0 \\ 3 & 1 & 5 \end{bmatrix}$$

Multiplication of a matrix by a scalar. The product of a matrix and a scalar is a matrix in which each element is multiplied by the scalar; that is, for a matrix A = $(a_{ij})$ and a scalar k,

$$kA = (ka_{ij})$$

Multiplication of a matrix by a matrix. Multiplication of a matrix by a matrix is possible between conformable matrices (matrices such that the number of columns of the first matrix equals the number of rows of the second). Otherwise, multiplication of two matrices is not defined.

Let A be an n × m matrix and B be an m × p matrix. Then the product AB = C, which we read "A postmultiplied by B" or "B premultiplied by A," is defined as follows:

$$c_{ij} = \sum_{k=1}^{m} a_{ik}b_{kj}, \qquad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, p$$

The product matrix C has the same number of rows as A and the same number of columns as B. Thus, the matrix C is an n × p matrix.

Note that even if A and B are conformable for AB, they may not be conformable for BA, in which case BA is not defined.

The associative and distributive laws hold for matrix multiplication; that is,

$$(AB)C = A(BC)$$
$$(A + B)C = AC + BC$$
$$C(A + B) = CA + CB$$

If AB = BA, then A and B are said to commute. Note that, in general,

$$AB \neq BA$$

To show this, let

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

Then

$$AB = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} \quad \text{and} \quad BA = \begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}$$

Clearly, AB ≠ BA. As another example, let

$$A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$$

Then

$$AB = \begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix} \quad \text{and} \quad BA = \begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}$$

Again, AB ≠ BA.

Because matrix multiplication is, in general, not commutative, we must preserve the order of the matrices when we multiply one matrix by another. (This is the reason why we often use the terms "premultiplication" and "postmultiplication" to indicate whether the matrix is multiplied from the left or from the right.)

An example of the case where AB = BA is given next. Let

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}$$

Then AB and BA are given by

$$AB = BA = \begin{bmatrix} 3 & 0 \\ 0 & 8 \end{bmatrix}$$

Clearly, A and B commute in this case.
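A quick numerical illustration of these facts, using NumPy and matrices of our own choosing:

```python
import numpy as np

# In general, matrix multiplication does not commute
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
assert not np.array_equal(A @ B, B @ A)

# Diagonal matrices, however, always commute with each other
D1 = np.diag([1, 2])
D2 = np.diag([3, 4])
assert np.array_equal(D1 @ D2, D2 @ D1)
```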

Power of a matrix. The kth power of a square matrix A is defined as

$$A^k = \underbrace{AA \cdots A}_{k \text{ times}}$$

Note that, for a diagonal matrix A = diag$(a_{11}, a_{22}, \ldots, a_{nn})$,

$$A^k = \begin{bmatrix} a_{11}^k & & 0 \\ & \ddots & \\ 0 & & a_{nn}^k \end{bmatrix} = \mathrm{diag}(a_{11}^k, a_{22}^k, \ldots, a_{nn}^k)$$
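The diagonal-power rule can be checked with NumPy's `matrix_power` (our choice of tool, not the text's):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.diag([2.0, 3.0, 5.0])
k = 4

# For a diagonal matrix, A^k = diag(a11^k, ..., ann^k)
assert np.allclose(matrix_power(A, k),
                   np.diag([2.0**k, 3.0**k, 5.0**k]))
```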

Further properties of matrices. The transposes of A + B and of AB are respectively given by

$$(A + B)' = A' + B'$$
$$(AB)' = B'A'$$

To prove the last relationship, note that the (i, j)th element of AB is

$$\sum_{k=1}^{m} a_{ik}b_{kj} = c_{ij}$$

The (i, j)th element of B'A' is

$$\sum_{k=1}^{m} b_{ki}a_{jk} = \sum_{k=1}^{m} a_{jk}b_{ki} = c_{ji}$$

which is the (j, i)th element of AB, or the (i, j)th element of (AB)'. Hence, (AB)' = B'A'. As an example, consider

$$A = \begin{bmatrix} 1 & 3 \\ 2 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 9 & 8 \\ 5 & 6 \end{bmatrix}$$

Then

$$AB = \begin{bmatrix} 24 & 26 \\ 23 & 22 \end{bmatrix}$$

and

$$B'A' = \begin{bmatrix} 9 & 5 \\ 8 & 6 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 24 & 23 \\ 26 & 22 \end{bmatrix}$$

Clearly, (AB)' = B'A'.
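A numerical check of (AB)' = B'A', using NumPy with illustrative matrices:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 1]])
B = np.array([[9, 8],
              [5, 6]])

# (AB)' = B'A' (note the reversed order)
assert np.array_equal((A @ B).T, B.T @ A.T)
```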



In a similar way, we obtain for the conjugate transposes of A + B and AB,

$$(A + B)^* = A^* + B^*$$
$$(AB)^* = B^*A^*$$

Rank of a matrix. A matrix A is said to have rank m if there exists an m × m submatrix M of A such that the determinant of M is nonzero and the determinant of every r × r submatrix of A with r ≥ m + 1 is zero.


As an example, consider the matrix

$$A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 \end{bmatrix}$$

Note that |A| = 0, since the fourth row is the sum of the first three rows. One of a number of largest submatrices whose determinant is not equal to zero is

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Hence, the rank of the matrix A is 3.
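Rank is conveniently computed numerically. A NumPy sketch with a 4 × 4 matrix of our own choosing, whose fourth row is the sum of the first three (so |A| = 0) while its top-left 3 × 3 submatrix is nonsingular:

```python
import numpy as np

A = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 1, 1, 3]])

# |A| = 0 because the rows are linearly dependent
assert np.isclose(np.linalg.det(A), 0.0)
# but a 3 x 3 nonsingular submatrix exists, so the rank is 3
assert np.linalg.matrix_rank(A) == 3
```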

C-4 MATRIX INVERSION

Minor $M_{ij}$. If the ith row and the jth column are deleted from an n × n matrix A, the resulting matrix is an (n − 1) × (n − 1) matrix. The determinant of this (n − 1) × (n − 1) matrix is called the minor $M_{ij}$ of the matrix A.

Cofactor $A_{ij}$. The cofactor $A_{ij}$ of the element $a_{ij}$ of the n × n matrix A is defined by the equation

$$A_{ij} = (-1)^{i+j}M_{ij}$$

That is, the cofactor $A_{ij}$ of the element $a_{ij}$ is $(-1)^{i+j}$ times the determinant of the matrix formed by deleting the ith row and the jth column from A. Note that the cofactor $A_{ij}$ of the element $a_{ij}$ is the coefficient of the term $a_{ij}$ in the expansion of the determinant |A|, since it can be shown that

$$a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in} = |A|$$

If the elements of the ith row are replaced by those of another row, then

$$a_{j1}A_{i1} + a_{j2}A_{i2} + \cdots + a_{jn}A_{in} = 0, \qquad j \neq i$$

because the determinant of A in this case possesses two identical rows. Hence, we obtain

$$\sum_{k=1}^{n} a_{jk}A_{ik} = \delta_{ji}|A|$$

Similarly,

$$\sum_{k=1}^{n} a_{ki}A_{kj} = \delta_{ij}|A|$$
Adjoint matrix. The matrix B whose element in the ith row and jth column equals $A_{ji}$ is called the adjoint of A and is denoted by adj A, or

$$B = (b_{ij}) = (A_{ji}) = \mathrm{adj}\,A$$

That is, the adjoint of A is the transpose of the matrix whose elements are the cofactors of A, or

$$\mathrm{adj}\,A = \begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}$$

Note that the element in the jth row and ith column of the product A(adj A) is

$$\sum_{k=1}^{n} a_{jk}b_{ki} = \sum_{k=1}^{n} a_{jk}A_{ik} = \delta_{ji}|A|$$

Hence, A(adj A) is a diagonal matrix with diagonal elements equal to |A|, or

$$A(\mathrm{adj}\,A) = |A|I$$

Similarly, the element in the jth row and ith column of the product (adj A)A is

$$\sum_{k=1}^{n} b_{jk}a_{ki} = \sum_{k=1}^{n} A_{kj}a_{ki} = \delta_{ij}|A|$$

Hence, we have the relationship

$$A(\mathrm{adj}\,A) = (\mathrm{adj}\,A)A = |A|I \tag{C-1}$$

For example, given the matrix

$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & -2 & -1 \\ 0 & 1 & -3 \end{bmatrix}$$

we find that the determinant of A is 17 and that

$$\mathrm{adj}\,A = \begin{bmatrix} \begin{vmatrix} -2 & -1 \\ 1 & -3 \end{vmatrix} & -\begin{vmatrix} -1 & 0 \\ 1 & -3 \end{vmatrix} & \begin{vmatrix} -1 & 0 \\ -2 & -1 \end{vmatrix} \\ -\begin{vmatrix} -1 & -1 \\ 0 & -3 \end{vmatrix} & \begin{vmatrix} 2 & 0 \\ 0 & -3 \end{vmatrix} & -\begin{vmatrix} 2 & 0 \\ -1 & -1 \end{vmatrix} \\ \begin{vmatrix} -1 & -2 \\ 0 & 1 \end{vmatrix} & -\begin{vmatrix} 2 & -1 \\ 0 & 1 \end{vmatrix} & \begin{vmatrix} 2 & -1 \\ -1 & -2 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 7 & -3 & 1 \\ -3 & -6 & 2 \\ -1 & -2 & -5 \end{bmatrix}$$

Thus,

$$A(\mathrm{adj}\,A) = \begin{bmatrix} 2 & -1 & 0 \\ -1 & -2 & -1 \\ 0 & 1 & -3 \end{bmatrix}\begin{bmatrix} 7 & -3 & 1 \\ -3 & -6 & 2 \\ -1 & -2 & -5 \end{bmatrix} = \begin{bmatrix} 17 & 0 & 0 \\ 0 & 17 & 0 \\ 0 & 0 & 17 \end{bmatrix} = |A|I$$
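Relation (C-1) can be verified by building the adjoint directly from cofactors. The sketch below uses NumPy; `adjugate` is a helper of our own, not a library function, and the test matrix is one whose determinant is 17, as in the example:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj A = (cofactor matrix)'

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, -2.0, -1.0],
              [0.0, 1.0, -3.0]])

adjA = adjugate(A)
detA = np.linalg.det(A)

# Equation (C-1): A (adj A) = (adj A) A = |A| I
assert np.allclose(A @ adjA, detA * np.eye(3))
assert np.allclose(adjA @ A, detA * np.eye(3))
assert np.isclose(detA, 17.0)
```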

Inverse of a matrix. If, for a square matrix A, a matrix B exists such that BA = AB = I, then B is denoted by $A^{-1}$ and is called the inverse of A. The inverse of a matrix A exists if the determinant of A is nonzero, that is, if A is nonsingular.

By definition, the inverse matrix $A^{-1}$ has the property that

$$AA^{-1} = A^{-1}A = I$$

where I is the identity matrix. If A is nonsingular and AB = C, then B = $A^{-1}$C. This can be seen from the equation

$$A^{-1}AB = IB = B = A^{-1}C$$

If A and B are nonsingular matrices, then the product AB is a nonsingular matrix. Moreover,

$$(AB)^{-1} = B^{-1}A^{-1}$$

The preceding equation may be proved as follows:

$$(B^{-1}A^{-1})AB = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I$$

Similarly,

$$AB(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I$$

Note that

$$(A^{-1})^{-1} = A$$
$$(A^{-1})' = (A')^{-1}$$
$$(A^{-1})^* = (A^*)^{-1}$$

From Equation (C-1) and the definition of the inverse matrix, we have

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|}$$

Hence, the inverse of a matrix is the transpose of the matrix of its cofactors, divided by the determinant of the matrix. That is, if

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

then

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|} = \begin{bmatrix} \dfrac{A_{11}}{|A|} & \dfrac{A_{21}}{|A|} & \cdots & \dfrac{A_{n1}}{|A|} \\ \dfrac{A_{12}}{|A|} & \dfrac{A_{22}}{|A|} & \cdots & \dfrac{A_{n2}}{|A|} \\ \vdots & \vdots & & \vdots \\ \dfrac{A_{1n}}{|A|} & \dfrac{A_{2n}}{|A|} & \cdots & \dfrac{A_{nn}}{|A|} \end{bmatrix}$$
where $A_{ij}$ is the cofactor of $a_{ij}$ of the matrix A. Thus, the terms in the ith column of $A^{-1}$ are 1/|A| times the cofactors of the ith row of the original matrix A. For example, if

$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & -2 & -1 \\ 0 & 1 & -3 \end{bmatrix}$$

then the adjoint of A and the determinant |A| are respectively found to be

$$\mathrm{adj}\,A = \begin{bmatrix} 7 & -3 & 1 \\ -3 & -6 & 2 \\ -1 & -2 & -5 \end{bmatrix} \quad \text{and} \quad |A| = 17$$

Hence, the inverse of A is

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|} = \begin{bmatrix} \frac{7}{17} & -\frac{3}{17} & \frac{1}{17} \\ -\frac{3}{17} & -\frac{6}{17} & \frac{2}{17} \\ -\frac{1}{17} & -\frac{2}{17} & -\frac{5}{17} \end{bmatrix}$$
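The defining property $AA^{-1} = I$ and the reversal rule $(AB)^{-1} = B^{-1}A^{-1}$ can be confirmed numerically (a NumPy sketch with matrices of our own choosing):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, -2.0, -1.0],
              [0.0, 1.0, -3.0]])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])   # nonsingular: |B| = -1

# AA^{-1} = I
assert np.allclose(A @ np.linalg.inv(A), np.eye(3))

# (AB)^{-1} = B^{-1} A^{-1} (note the reversed order)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```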

In what follows, we give formulas for finding the inverse matrices of the 2 × 2 matrix and the 3 × 3 matrix. For the 2 × 2 matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad \text{where } ad - bc \neq 0$$

the inverse matrix is given by

$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

For the 3 × 3 matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \qquad \text{where } |A| \neq 0$$

the inverse matrix is given by

$$A^{-1} = \frac{1}{|A|}\begin{bmatrix} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} & \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \\ -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} & \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} \\ \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} & \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \end{bmatrix}$$

Remarks on cancellation of matrices. Cancellation of matrices is not valid in matrix algebra. Consider, for example, the product of the two singular matrices

$$A = \begin{bmatrix} 2 & 1 \\ 6 & 3 \end{bmatrix} \neq 0 \quad \text{and} \quad B = \begin{bmatrix} 1 & -2 \\ -2 & 4 \end{bmatrix} \neq 0$$

Then

$$AB = \begin{bmatrix} 2 & 1 \\ 6 & 3 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = 0$$

Clearly, AB = 0 implies neither that A = 0 nor that B = 0. In fact, AB = 0 implies one of the following three statements:

1. A = 0
2. B = 0
3. Both A and B are singular.

We can easily prove that, if both A and B are nonzero matrices and AB = 0, then both A and B are singular. Assume that A is not singular. Then a matrix $A^{-1}$ exists with the property that

$$A^{-1}AB = B = A^{-1}0 = 0$$

which contradicts the assumption that B is a nonzero matrix. Thus, we conclude that both A and B must be singular if AB = 0 while A ≠ 0 and B ≠ 0.

Similarly, notice that if A is singular, then neither AB = AC nor BA = CA implies that B = C. If, however, A is a nonsingular matrix, then AB = AC implies that B = C, and BA = CA also implies that B = C.
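The failure of cancellation is easy to reproduce numerically, using two nonzero singular matrices (a NumPy sketch):

```python
import numpy as np

# Two nonzero but singular matrices whose product is the zero matrix
A = np.array([[2, 1],
              [6, 3]])
B = np.array([[1, -2],
              [-2, 4]])

assert np.any(A != 0) and np.any(B != 0)          # neither factor is zero
assert np.array_equal(A @ B, np.zeros((2, 2)))    # yet AB = 0
assert np.isclose(np.linalg.det(A), 0.0)          # because A is singular
assert np.isclose(np.linalg.det(B), 0.0)          # and B is singular
```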

C-5 DIFFERENTIATION AND INTEGRATION OF MATRICES

The derivative of an n × m matrix A(t) is defined to be the n × m matrix each element of which is the derivative of the corresponding element of the original matrix, provided that all the elements $a_{ij}(t)$ have derivatives with respect to t. That is,

$$\frac{d}{dt}A(t) = \left(\frac{d}{dt}a_{ij}(t)\right) = \begin{bmatrix} \dfrac{d}{dt}a_{11}(t) & \dfrac{d}{dt}a_{12}(t) & \cdots & \dfrac{d}{dt}a_{1m}(t) \\ \dfrac{d}{dt}a_{21}(t) & \dfrac{d}{dt}a_{22}(t) & \cdots & \dfrac{d}{dt}a_{2m}(t) \\ \vdots & \vdots & & \vdots \\ \dfrac{d}{dt}a_{n1}(t) & \dfrac{d}{dt}a_{n2}(t) & \cdots & \dfrac{d}{dt}a_{nm}(t) \end{bmatrix}$$

Similarly, the integral of an n × m matrix A(t) is defined to be

$$\int A(t)\,dt = \left(\int a_{ij}(t)\,dt\right) = \begin{bmatrix} \int a_{11}(t)\,dt & \int a_{12}(t)\,dt & \cdots & \int a_{1m}(t)\,dt \\ \int a_{21}(t)\,dt & \int a_{22}(t)\,dt & \cdots & \int a_{2m}(t)\,dt \\ \vdots & \vdots & & \vdots \\ \int a_{n1}(t)\,dt & \int a_{n2}(t)\,dt & \cdots & \int a_{nm}(t)\,dt \end{bmatrix}$$


Differentiation of the product of two matrices. If the matrices A(t) and B(t) can be differentiated with respect to t, then

$$\frac{d}{dt}\left[A(t)B(t)\right] = \frac{dA(t)}{dt}B(t) + A(t)\frac{dB(t)}{dt}$$

Here again the multiplication of A(t) and dB(t)/dt [or of dA(t)/dt and B(t)] is, in general, not commutative.

Differentiation of $A^{-1}(t)$. If a matrix A(t) and its inverse $A^{-1}(t)$ are differentiable with respect to t, then the derivative of $A^{-1}(t)$ is given by

$$\frac{dA^{-1}(t)}{dt} = -A^{-1}(t)\frac{dA(t)}{dt}A^{-1}(t)$$

The derivative may be obtained by differentiating $A(t)A^{-1}(t)$ with respect to t. Since

$$\frac{d}{dt}\left[A(t)A^{-1}(t)\right] = \frac{dA(t)}{dt}A^{-1}(t) + A(t)\frac{dA^{-1}(t)}{dt}$$

and

$$\frac{d}{dt}\left[A(t)A^{-1}(t)\right] = \frac{d}{dt}I = 0$$

we obtain

$$A(t)\frac{dA^{-1}(t)}{dt} = -\frac{dA(t)}{dt}A^{-1}(t)$$

or

$$\frac{dA^{-1}(t)}{dt} = -A^{-1}(t)\frac{dA(t)}{dt}A^{-1}(t)$$
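The formula for $dA^{-1}(t)/dt$ can be checked against a central finite-difference approximation (a NumPy sketch; the time-varying matrix A(t) below is a hypothetical example of our own):

```python
import numpy as np

def A_of_t(t):
    # A hypothetical time-varying, nonsingular matrix
    return np.array([[2.0 + t, 1.0],
                     [0.0, 1.0 + t**2]])

def dA_of_t(t):
    # Elementwise derivative of A(t)
    return np.array([[1.0, 0.0],
                     [0.0, 2.0 * t]])

t, h = 0.7, 1e-6
Ainv = np.linalg.inv(A_of_t(t))

# Finite-difference estimate of d(A^{-1})/dt
fd = (np.linalg.inv(A_of_t(t + h)) - np.linalg.inv(A_of_t(t - h))) / (2 * h)

# Closed form: -A^{-1} (dA/dt) A^{-1}
closed = -Ainv @ dA_of_t(t) @ Ainv

assert np.allclose(fd, closed, atol=1e-5)
```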
