C-1 INTRODUCTION
In deriving mathematical models of modern dynamic systems, one finds that the differential equations involved may become very complicated due to the multiplicity of inputs and outputs. To simplify the mathematical expressions of the system equations, it is advantageous to use vector-matrix notation, such as that used in the state-space representation of dynamic systems. For theoretical work, the notational simplicity gained by using vector-matrix operations is most convenient and is, in fact, essential for the analysis and design of modern dynamic systems. With vector-matrix notation, one can handle large, complex problems with ease by following a systematic format of representation. This appendix presents definitions of matrices and the basic matrix algebra necessary for the analysis of dynamic systems.
Appendix C
Matrices. A matrix is a rectangular array of elements. The number of columns, in general, is not necessarily the same as the number of rows. Consider the matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$

where $a_{ij}$ denotes the (i, j)th element of A. This matrix has n rows and m columns and is called an n × m matrix. The first index represents the row number, the second index the column number. The matrix A is sometimes written $(a_{ij})$.
Equality of two matrices. Two matrices are said to be equal if and only if their corresponding elements are equal. Note that equal matrices must have the same number of rows and the same number of columns.
Diagonal matrix. A square matrix whose elements $a_{ij}$ are zero for all i ≠ j is called a diagonal matrix:

$$A = \begin{bmatrix} a_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} \end{bmatrix} = (a_{ij}\,\delta_{ij})$$

where $\delta_{ij}$ is the Kronecker delta:

$$\delta_{ij} = 1 \quad \text{if } i = j$$
$$\delta_{ij} = 0 \quad \text{if } i \neq j$$

Note that all of the elements that are not explicitly written in the foregoing matrix are zero. The diagonal matrix is sometimes written $A = \mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})$.

Zero matrix. A zero matrix is a matrix whose elements are all zero.

Identity matrix. The identity matrix I is a diagonal matrix whose diagonal elements are all unity:

$$I = \begin{bmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{bmatrix} = \mathrm{diag}(1, 1, \ldots, 1)$$
4. If, to any row (or any column), any constant times another row (or another column) is added, then the value of the determinant remains unchanged.
5. If A is an n × n matrix and k is a scalar, then

$$|kA| = k^{n}|A|$$

6. The determinant of the product of two square matrices A and B is the product of the determinants, or

$$|AB| = |A|\,|B|$$
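These determinant properties are easy to check numerically. The following sketch uses NumPy; the matrices are arbitrary illustrations, not examples from the text:

```python
import numpy as np

# Two arbitrary 3 x 3 matrices chosen for illustration.
A = np.array([[2.0, 0.0, 1.0],
              [-1.0, -2.0, 3.0],
              [0.0, -3.0, 1.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [2.0, 1.0, 1.0]])
n = A.shape[0]
k = 3.0

# Property 4: adding a multiple of one row to another leaves |A| unchanged.
A2 = A.copy()
A2[1] += 5.0 * A2[0]
assert np.isclose(np.linalg.det(A2), np.linalg.det(A))

# Property 5: |kA| = k^n |A| for an n x n matrix A.
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))

# Property 6: |AB| = |A| |B|.
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```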
Singular matrix. A square matrix is called singular if the associated determinant is zero. In a singular matrix, not all the rows (or not all the columns) are independent of each other.
Transpose. If the rows and columns of an n × m matrix A are interchanged, the resulting m × n matrix is called the transpose of A, denoted by A'. That is, if

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$

then

$$A' = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$$

Note that (A')' = A.
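In NumPy the transpose is the `.T` attribute; a quick sketch of the two properties above, with an arbitrary illustrative matrix:

```python
import numpy as np

# An arbitrary 2 x 3 matrix used for illustration.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Interchanging rows and columns turns an n x m matrix into an m x n matrix.
assert A.shape == (2, 3)
assert A.T.shape == (3, 2)

# (A')' = A
assert np.array_equal(A.T.T, A)
```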
Symmetric and skew-symmetric matrices. If a square matrix A is equal to its transpose, that is, if A = A', it is called a symmetric matrix. If

A = -A'

then the matrix A is called a skew-symmetric matrix.
Conjugate matrix. The conjugate of a matrix A, denoted by $\bar{A}$, is the matrix obtained by replacing each element of A with its complex conjugate. For example, if

$$A = \begin{bmatrix} -1+j & -3-j3 \\ -1 & -2+j3 \end{bmatrix}$$

then

$$\bar{A} = \begin{bmatrix} -1-j & -3+j3 \\ -1 & -2-j3 \end{bmatrix}$$
Conjugate transpose. The conjugate transpose is the conjugate of the transpose of a matrix. Given a matrix A, the conjugate transpose is denoted by $\bar{A}'$ or A*; that is,

$$A^{*} = \overline{A'}$$
For example, if

$$A = \begin{bmatrix} 1+j5 & 3-j & 1+j3 \end{bmatrix}$$

then

$$A^{*} = \begin{bmatrix} 1-j5 \\ 3+j \\ 1-j3 \end{bmatrix}$$

Note that (A*)* = A. If A is a real matrix (i.e., a matrix whose elements are real), the conjugate transpose A* is the same as the transpose A'.
Hermitian matrix. If a complex square matrix A is equal to its conjugate transpose, that is, if A = A*, it is called a Hermitian matrix. For example, the matrix

$$A = \begin{bmatrix} 1 & 4-j3 \\ 4+j3 & 2 \end{bmatrix}$$

is Hermitian. If a Hermitian matrix A is written as A = B + jC, where B and C are real matrices, then

B = B'  and  C = -C'

That is, B is symmetric and C is skew-symmetric. In the preceding example,

$$A = B + jC = \begin{bmatrix} 1 & 4 \\ 4 & 2 \end{bmatrix} + j\begin{bmatrix} 0 & -3 \\ 3 & 0 \end{bmatrix}$$
Skew-Hermitian matrix. If a complex square matrix A satisfies A = -A*, it is called a skew-Hermitian matrix. If a skew-Hermitian matrix A is written as A = B + jC, where B and C are real matrices, then

B = -B'  and  C = C'
This section presents the essentials of matrix algebra, as well as additional defini
tions. It is important to remember that some matrix operations obey the same rules
as those in ordinary algebra, but others do not.
Addition and subtraction of matrices. Two matrices A and B can be added if they have the same number of rows and the same number of columns; the sum A + B is obtained by adding corresponding elements, and similarly for the difference A - B. As an example, consider

$$A = \begin{bmatrix} 0 & 3 & 3 \\ 4 & 5 & 6 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 4 & 3 & 3 \\ 1 & 4 & 1 \end{bmatrix}$$

Then

$$A + B = \begin{bmatrix} 4 & 6 & 6 \\ 5 & 9 & 7 \end{bmatrix} \quad\text{and}\quad A - B = \begin{bmatrix} -4 & 0 & 0 \\ 3 & 1 & 5 \end{bmatrix}$$
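The elementwise rule can be checked with NumPy. A and B below are a reconstruction consistent with the displayed sum and difference (A recovered as ((A+B)+(A−B))/2 and B as ((A+B)−(A−B))/2):

```python
import numpy as np

A = np.array([[0, 3, 3],
              [4, 5, 6]])
B = np.array([[4, 3, 3],
              [1, 4, 1]])

# Addition and subtraction are elementwise and require equal shapes.
assert np.array_equal(A + B, [[4, 6, 6], [5, 9, 7]])
assert np.array_equal(A - B, [[-4, 0, 0], [3, 1, 5]])
```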
Multiplication of a matrix by a matrix. The product of an n × m matrix A and an m × p matrix B is the n × p matrix C = AB whose elements are

$$c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}, \qquad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, p$$

Note that even if A and B are conformable for AB, they may not be conformable for BA, in which case BA is not defined.
The associative and distributive laws hold for matrix multiplication; that is,
(AB)C = A(BC)
( A + B)C = AC + BC
C(A + B ) = CA + CB
In general, matrix multiplication is not commutative; that is, AB ≠ BA except in special cases. For example, if A is a 3 × 2 matrix and B is a 2 × 3 matrix, then AB is a 3 × 3 matrix while BA is a 2 × 2 matrix, so clearly AB ≠ BA. Even when A and B are both square, AB and BA are generally different. Thus, the order of the matrices must be preserved when one matrix is multiplied by another. (This is the reason why we often use the terms "premultiplication" and "postmultiplication" to indicate the order of the matrices in a product.) If two square matrices A and B satisfy AB = BA, then A and B are said to commute.
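A small NumPy sketch of these facts, with arbitrary illustrative matrices:

```python
import numpy as np

# A 3 x 2 and a 2 x 3 matrix: both products exist but have different sizes.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
B = np.array([[1, 0, 2],
              [0, 1, 1]])
assert (A @ B).shape == (3, 3)
assert (B @ A).shape == (2, 2)

# Square matrices generally do not commute ...
P = np.array([[1, 2],
              [3, 4]])
Q = np.array([[0, 1],
              [1, 0]])
assert not np.array_equal(P @ Q, Q @ P)

# ... but special pairs do; for example, P commutes with its own square.
assert np.array_equal(P @ (P @ P), (P @ P) @ P)
```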
Note that the kth power of a diagonal matrix is again diagonal:

$$\bigl[\mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})\bigr]^{k} = \mathrm{diag}(a_{11}^{k}, a_{22}^{k}, \ldots, a_{nn}^{k})$$
The transpose of a product is the product of the transposes taken in reverse order; that is, (AB)' = B'A'. To verify this, note that the (i, j)th element of AB is

$$c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}$$

The (i, j)th element of B'A' is

$$\sum_{k=1}^{m} b_{ki} a_{jk} = \sum_{k=1}^{m} a_{jk} b_{ki} = c_{ji}$$

which is the (j, i)th element of AB. Hence (AB)' = B'A'.
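The rule (AB)' = B'A' can be verified numerically for any conformable pair, for example with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 2))
B = rng.integers(-5, 5, size=(2, 4))

# The transpose of a product reverses the order of the factors.
assert np.array_equal((A @ B).T, B.T @ A.T)
assert (A @ B).T.shape == (4, 3)
```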
Minor M_ij. If the ith row and jth column are deleted from an n × n matrix A, the resulting matrix is an (n - 1) × (n - 1) matrix. The determinant of this (n - 1) × (n - 1) matrix is called the minor $M_{ij}$ of the matrix A.
Cofactor A_ij. The cofactor $A_{ij}$ of the element $a_{ij}$ of the n × n matrix A is defined by the equation

$$A_{ij} = (-1)^{i+j} M_{ij}$$

That is, the cofactor $A_{ij}$ of the element $a_{ij}$ is $(-1)^{i+j}$ times the determinant of the matrix formed by deleting the ith row and the jth column from A. Note that the cofactor $A_{ij}$ of the element $a_{ij}$ is the coefficient of the term $a_{ij}$ in the expansion of the determinant |A|, since it can be shown that

$$\sum_{k=1}^{n} a_{ik} A_{jk} = \delta_{ij} |A|$$

where $\delta_{ij}$ is the Kronecker delta. Similarly,

$$\sum_{k=1}^{n} a_{ki} A_{kj} = \delta_{ij} |A|$$
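Minors and cofactors translate directly into code. The sketch below uses 0-based indices (which leaves the sign $(-1)^{i+j}$ unchanged) and, as an illustration, a 3 × 3 matrix with determinant 17 consistent with the worked example later in this appendix:

```python
import numpy as np

def minor(A, i, j):
    """Determinant M_ij of A with row i and column j deleted (0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Cofactor A_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[2.0, 0.0, 1.0],
              [-1.0, -2.0, 3.0],
              [0.0, -3.0, 1.0]])

# Expanding row 0 with its own cofactors reproduces |A| (the i = j case).
det_by_expansion = sum(A[0, k] * cofactor(A, 0, k) for k in range(3))
assert np.isclose(det_by_expansion, np.linalg.det(A))

# Expanding row 0 with the cofactors of row 1 gives zero (the i != j case).
assert np.isclose(sum(A[0, k] * cofactor(A, 1, k) for k in range(3)), 0.0)
```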
Adjoint matrix. The matrix B whose element in the ith row and jth column equals $A_{ji}$ is called the adjoint of A and is denoted by adj A, or

$$B = (b_{ij}) = (A_{ji}) = \mathrm{adj}\,A$$

That is, the adjoint of A is the transpose of the matrix whose elements are the cofactors of A, or

$$\mathrm{adj}\,A = \begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}$$

Note that the element in the jth row and ith column of the product A(adj A) is

$$\sum_{k=1}^{n} a_{jk} b_{ki} = \sum_{k=1}^{n} a_{jk} A_{ik} = \delta_{ji} |A|$$

Hence, A(adj A) is a diagonal matrix with diagonal elements equal to |A|, or

$$A(\mathrm{adj}\,A) = |A| I$$

Similarly, the element in the jth row and ith column of the product (adj A)A is

$$\sum_{k=1}^{n} b_{jk} a_{ki} = \sum_{k=1}^{n} A_{kj} a_{ki} = \delta_{ji} |A|$$

Hence, we have the relationship

$$A(\mathrm{adj}\,A) = (\mathrm{adj}\,A)A = |A| I \tag{C-1}$$
For example, given the matrix

$$A = \begin{bmatrix} 2 & 0 & 1 \\ -1 & -2 & 3 \\ 0 & -3 & 1 \end{bmatrix}$$

we find that the determinant of A is 17 and that

$$\mathrm{adj}\,A = \begin{bmatrix} \begin{vmatrix} -2 & 3 \\ -3 & 1 \end{vmatrix} & -\begin{vmatrix} 0 & 1 \\ -3 & 1 \end{vmatrix} & \begin{vmatrix} 0 & 1 \\ -2 & 3 \end{vmatrix} \\ -\begin{vmatrix} -1 & 3 \\ 0 & 1 \end{vmatrix} & \begin{vmatrix} 2 & 1 \\ 0 & 1 \end{vmatrix} & -\begin{vmatrix} 2 & 1 \\ -1 & 3 \end{vmatrix} \\ \begin{vmatrix} -1 & -2 \\ 0 & -3 \end{vmatrix} & -\begin{vmatrix} 2 & 0 \\ 0 & -3 \end{vmatrix} & \begin{vmatrix} 2 & 0 \\ -1 & -2 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 7 & -3 & 2 \\ 1 & 2 & -7 \\ 3 & 6 & -4 \end{bmatrix}$$

Thus,

$$A(\mathrm{adj}\,A) = \begin{bmatrix} 2 & 0 & 1 \\ -1 & -2 & 3 \\ 0 & -3 & 1 \end{bmatrix} \begin{bmatrix} 7 & -3 & 2 \\ 1 & 2 & -7 \\ 3 & 6 & -4 \end{bmatrix} = \begin{bmatrix} 17 & 0 & 0 \\ 0 & 17 & 0 \\ 0 & 0 & 17 \end{bmatrix} = |A| I$$
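Equation (C-1) can be verified in code by building the adjoint from cofactors; the matrix below is a 3 × 3 example with |A| = 17, consistent with the worked example above:

```python
import numpy as np

def adjoint(A):
    """adj A: the transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return cof.T

A = np.array([[2.0, 0.0, 1.0],
              [-1.0, -2.0, 3.0],
              [0.0, -3.0, 1.0]])

adjA = adjoint(A)
detA = np.linalg.det(A)

# A(adj A) = (adj A)A = |A| I   -- Equation (C-1)
assert np.allclose(A @ adjA, detA * np.eye(3))
assert np.allclose(adjA @ A, detA * np.eye(3))
```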
Inverse of a matrix. If, for a square matrix A, a matrix B exists such that BA = AB = I, then B is denoted by A⁻¹ and is called the inverse of A. The inverse of a matrix A exists if and only if the determinant of A is nonzero, that is, if and only if A is nonsingular.

By definition, the inverse matrix A⁻¹ has the property that

$$AA^{-1} = A^{-1}A = I$$

where I is the identity matrix. If A is nonsingular and AB = C, then B = A⁻¹C. This can be seen from the equation

$$A^{-1}AB = IB = B = A^{-1}C$$

If A and B are nonsingular matrices, then the product AB is a nonsingular matrix. Moreover,

$$(AB)^{-1} = B^{-1}A^{-1}$$

The preceding equation may be proved as follows:

$$(B^{-1}A^{-1})AB = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I$$

Similarly, $AB(B^{-1}A^{-1}) = I$. Note that

$$(A^{-1})^{-1} = A$$
$$(A^{-1})' = (A')^{-1}$$
$$\overline{A^{-1}} = (\bar{A})^{-1}$$

From Equation (C-1) and the definition of the inverse matrix, we have

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|}$$
Hence, the inverse of a matrix is the transpose of the matrix of its cofactors, divided by the determinant of the matrix. That is, if

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

then

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|} = \begin{bmatrix} \dfrac{A_{11}}{|A|} & \dfrac{A_{21}}{|A|} & \cdots & \dfrac{A_{n1}}{|A|} \\ \dfrac{A_{12}}{|A|} & \dfrac{A_{22}}{|A|} & \cdots & \dfrac{A_{n2}}{|A|} \\ \vdots & \vdots & & \vdots \\ \dfrac{A_{1n}}{|A|} & \dfrac{A_{2n}}{|A|} & \cdots & \dfrac{A_{nn}}{|A|} \end{bmatrix}$$
where $A_{ij}$ is the cofactor of $a_{ij}$ of the matrix A. Thus, the terms in the ith column of A⁻¹ are 1/|A| times the cofactors of the ith row of the original matrix A. For example, if

$$A = \begin{bmatrix} 2 & 0 & 1 \\ -1 & -2 & 3 \\ 0 & -3 & 1 \end{bmatrix}$$

then the adjoint of A and the determinant |A| are, respectively, found to be

$$\mathrm{adj}\,A = \begin{bmatrix} 7 & -3 & 2 \\ 1 & 2 & -7 \\ 3 & 6 & -4 \end{bmatrix} \quad\text{and}\quad |A| = 17$$

Hence, the inverse of A is

$$A^{-1} = \frac{\mathrm{adj}\,A}{|A|} = \begin{bmatrix} \frac{7}{17} & -\frac{3}{17} & \frac{2}{17} \\ \frac{1}{17} & \frac{2}{17} & -\frac{7}{17} \\ \frac{3}{17} & \frac{6}{17} & -\frac{4}{17} \end{bmatrix}$$
In what follows, we give formulas for finding the inverse of a 2 × 2 matrix and a 3 × 3 matrix. For the 2 × 2 matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad \text{where } ad - bc \neq 0$$

the inverse is

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
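The closed-form 2 × 2 inverse is a one-liner in code; the example matrix here is an arbitrary nonsingular illustration, and the result is compared against `numpy.linalg.inv`:

```python
import numpy as np

def inv2(A):
    """Inverse of a 2 x 2 matrix via the cofactor formula."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # ad - bc = 10, so A is nonsingular

assert np.allclose(inv2(A) @ A, np.eye(2))
assert np.allclose(inv2(A), np.linalg.inv(A))
```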
For the 3 × 3 matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \qquad \text{where } |A| \neq 0$$

the inverse is

$$A^{-1} = \frac{1}{|A|} \begin{bmatrix} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} & \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \\ -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} & \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} \\ \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} & \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \end{bmatrix}$$
Note that AB = 0 does not necessarily imply that A = 0 or B = 0. For example, let

$$A = \begin{bmatrix} 2 & 1 \\ 6 & 3 \end{bmatrix} \neq 0 \quad\text{and}\quad B = \begin{bmatrix} 1 & -2 \\ -2 & 4 \end{bmatrix} \neq 0$$

Then

$$AB = \begin{bmatrix} 2 & 1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} 1 & -2 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

Here both A and B are singular. Indeed, if A were nonsingular, premultiplying AB = 0 by A⁻¹ would give B = 0, which contradicts the assumption that B is a nonzero matrix. Thus, we conclude that both A and B must be singular if AB = 0 with A ≠ 0 and B ≠ 0.
Similarly, notice that if A is singular, then neither AB = AC nor BA = CA implies that B = C. If, however, A is a nonsingular matrix, then AB = AC implies that B = C, and BA = CA also implies that B = C.
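The failure of cancellation with a singular A is easy to exhibit; B and C below are illustrative matrices chosen so that A(B − C) = 0 even though B ≠ C:

```python
import numpy as np

A = np.array([[2, 1],
              [6, 3]])     # singular: |A| = 2*3 - 1*6 = 0
B = np.array([[1, 0],
              [-2, 0]])    # columns of B - C lie in the null space of A
C = np.zeros((2, 2), dtype=int)

assert np.array_equal(A @ B, A @ C)   # both products are the zero matrix
assert not np.array_equal(B, C)       # yet B != C
```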
Differentiation of a matrix. The derivative of an n × m matrix A(t) is defined as the matrix whose (i, j)th element is the derivative of the (i, j)th element of A(t), provided that all the elements $a_{ij}(t)$ have derivatives with respect to t. That is,

$$\frac{d}{dt}A(t) = \left(\frac{d}{dt}a_{ij}(t)\right) = \begin{bmatrix} \dfrac{d}{dt}a_{11}(t) & \dfrac{d}{dt}a_{12}(t) & \cdots & \dfrac{d}{dt}a_{1m}(t) \\ \dfrac{d}{dt}a_{21}(t) & \dfrac{d}{dt}a_{22}(t) & \cdots & \dfrac{d}{dt}a_{2m}(t) \\ \vdots & \vdots & & \vdots \\ \dfrac{d}{dt}a_{n1}(t) & \dfrac{d}{dt}a_{n2}(t) & \cdots & \dfrac{d}{dt}a_{nm}(t) \end{bmatrix}$$
Differentiation of A⁻¹(t). If a matrix A(t) and its inverse A⁻¹(t) are differentiable with respect to t, then the derivative of A⁻¹(t) is given by

$$\frac{dA^{-1}(t)}{dt} = -A^{-1}(t)\,\frac{dA(t)}{dt}\,A^{-1}(t)$$

This can be derived as follows: differentiating $A(t)A^{-1}(t) = I$ with respect to t gives

$$\frac{d}{dt}\bigl[A(t)A^{-1}(t)\bigr] = \frac{dA(t)}{dt}A^{-1}(t) + A(t)\frac{dA^{-1}(t)}{dt} = \frac{d}{dt}I = 0$$

from which we obtain

$$A(t)\frac{dA^{-1}(t)}{dt} = -\frac{dA(t)}{dt}A^{-1}(t)$$

or

$$\frac{dA^{-1}(t)}{dt} = -A^{-1}(t)\frac{dA(t)}{dt}A^{-1}(t)$$
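The formula can be checked numerically against a central finite difference of A⁻¹(t); A(t) below is an arbitrary invertible, differentiable matrix function chosen for illustration:

```python
import numpy as np

def A_of_t(t):
    # An arbitrary invertible, differentiable matrix function.
    return np.array([[1.0 + t, t],
                     [t**2, 2.0]])

def dA_of_t(t):
    # Elementwise derivative dA(t)/dt.
    return np.array([[1.0, 1.0],
                     [2.0 * t, 0.0]])

t, h = 0.5, 1e-6
Ainv = np.linalg.inv(A_of_t(t))

# dA^{-1}/dt = -A^{-1} (dA/dt) A^{-1}
formula = -Ainv @ dA_of_t(t) @ Ainv

# Central finite difference of A^{-1}(t) for comparison.
numeric = (np.linalg.inv(A_of_t(t + h)) - np.linalg.inv(A_of_t(t - h))) / (2 * h)

assert np.allclose(formula, numeric, atol=1e-5)
```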