JEROME J. CONNOR, Sc.D., Massachusetts Institute of Technology, is Professor of Civil Engineering at Massachusetts Institute of Technology. He has been active in teaching and research in structural analysis and mechanics at the U.S. Army Materials and Mechanics Research Agency and for some years at M.I.T. His primary interest is in computer-based analysis methods, and his current research is concerned with the dynamic analysis of prestressed concrete reactor vessels and the development of finite element models for fluid flow problems. Dr. Connor is one of the original developers of ICES STRUDL, and has published extensively in the structural field.
ANALYSIS OF STRUCTURAL MEMBER SYSTEMS

JEROME J. CONNOR
Massachusetts Institute of Technology

THE RONALD PRESS COMPANY • NEW YORK
Copyright © 1976 by
THE RONALD PRESS COMPANY

All Rights Reserved

No part of this book may be reproduced in any form without permission in writing from the publisher.

Library of Congress Catalog Card Number: 74-22535

PRINTED IN THE UNITED STATES OF AMERICA
Preface

With the development over the past decade of computer-based analysis methods, the teaching of structural analysis subjects has been revolutionized. The traditional division between structural analysis and structural mechanics became no longer necessary, and instead of teaching a preponderance of solution details it is now possible to focus on the underlying theory.

What has been done here is to integrate analysis and mechanics in a systematic presentation which includes the mechanics of a member, the matrix formulation of the equations for a system of members, and solution techniques. The three fundamental steps in formulating a problem in solid mechanics—enforcing equilibrium, relating deformations and displacements, and relating forces and deformations—form the basis of the development, and the central theme is to establish the equations for each step and then discuss how the complete set of equations is solved. In this way, a reader obtains a more unified view of a problem, sees more clearly where the various simplifying assumptions are introduced, and is better prepared to extend the theory.
The chapters of Part I contain the relevant topics for an essential background in linear algebra, differential geometry, and matrix transformations. Collecting this material in the first part of the book is convenient for the continuity of the mathematics presentation as well as for the continuity in the following development.
Part II treats the analysis of an ideal truss. The governing equations for small strain but arbitrary displacement are established and then cast into matrix form. Next, we deduce the principles of virtual displacements and virtual forces by manipulating the governing equations, introduce a criterion for evaluating the stability of an equilibrium position, and interpret the governing equations as stationary requirements for certain variational principles. These concepts are essential for an appreciation of the solution schemes described in the following two chapters.

Part III is concerned with the behavior of an isolated member. For completeness, first are presented the governing equations for a deformable elastic solid allowing for arbitrary displacements, the continuous form of the principles of virtual displacements and virtual forces, and the stability criterion. Unrestrained torsion-flexure of a prismatic member is examined in detail and then an approximate engineering theory is developed. We move on to restrained torsion-flexure of a prismatic member, discussing various approaches for including warping restraint and illustrating its influence for thin-walled open and closed sections. The concluding chapters treat the behavior of planar and arbitrary curved members.

How one assembles and solves the governing equations for a member system is discussed in Part IV. First, the direct stiffness method is outlined; then a general formulation of the governing equations is described. Geometrically nonlinear behavior is considered in the last chapter, which discusses member force-displacement relations, including torsional-flexural coupling, solution schemes, and linearized stability analysis.

The objective has been a text suitable for the teaching of modern structural member system analysis, and what is offered is an outgrowth of lecture notes developed in recent years at the Massachusetts Institute of Technology. To the many students who have provided the occasion of that development, I am deeply appreciative. Particular thanks go to Mrs. Jane Malinofsky for her patience in typing the manuscript, and to Professor Charles Miller for his encouragement.
Cambridge, Mass.
January, 1976
JEROME J. CONNOR
Contents

I—MATHEMATICAL PRELIMINARIES

1 Introduction to Matrix Algebra 3
1-1 Definition of a Matrix 3
1-2 Equality, Addition, and Subtraction of Matrices 5
1-3 Matrix Multiplication 5
1-4 Transpose of a Matrix 8
1-5 Special Square Matrices 10
1-6 Operations on Partitioned Matrices 12
1-7 Definition and Properties of a Determinant 16
1-8 Cofactor Expansion Formula 19
1-9 Cramer's Rule 21
1-10 Adjoint and Inverse Matrices 22
1-11 Elementary Operations on a Matrix 24
1-12 Rank of a Matrix 27
1-13 Solvability of Linear Algebraic Equations 30

2 Characteristic-Value Problems and Quadratic Forms 46
2-1 Introduction 46
2-2 Second-Order Characteristic-Value Problem 48
2-3 Similarity and Orthogonal Transformations 52
2-4 The nth-Order Symmetrical Characteristic-Value Problem 55
2-5 Quadratic Forms 57

3 Relative Extrema for a Function 66
3-1 Relative Extrema for a Function of One Variable 66
3-2 Relative Extrema for a Function of n Independent Variables 71
3-3 Lagrange Multipliers 75

4 Differential Geometry of a Member Element 81
4-1 Parametric Representation of a Space Curve 81
4-2 Arc Length 82
4-3 Unit Tangent Vector 85
4-4 Principal Normal and Binormal Vectors 86
4-5 Curvature, Torsion, and the Frenet Equations 88
4-6 Summary of the Geometrical Relations for a Space Curve 91
4-7 Local Reference Frame for a Member Element 92
4-8 Curvilinear Coordinates for a Member Element 94

5 Matrix Transformations for a Member Element 100
5-1 Rotation Transformation 100
5-2 Three-Dimensional Force Transformations 103
5-3 Three-Dimensional Displacement Transformations 109

II—ANALYSIS OF AN IDEAL TRUSS

6 Governing Equations for an Ideal Truss 115
6-1 General 115
6-2 Elongation—Joint Displacement Relation for a Bar 116
6-3 General Elongation—Joint Displacement Relation 120
6-4 Force-Elongation Relation for a Bar 125
6-5 General Bar Force—Joint Displacement Relation 130
6-6 Joint Force-Equilibrium Equations 130
6-7 Introduction of Displacement Restraints; Governing Equations 132
6-8 Arbitrary Restraint Direction 134
6-9 Initial Instability 137

7 Variational Principles for an Ideal Truss 152
7-1 General 152
7-2 Principle of Virtual Displacements 153
7-3 Principle of Virtual Forces 159
7-4 Strain Energy; Principle of Stationary Potential Energy 162
7-5 Complementary Energy; Principle of Stationary Complementary Energy 165
7-6 Stability Criteria 169

8 Displacement Method—Ideal Truss 178
8-1 General 178
8-2 178
8-4 Incremental Formulation; Classical Stability Criterion 191
8-5 Linearized Stability Analysis 200

9 Force Method—Ideal Truss 210
9-1 General 210
9-2 Governing Equations—Algebraic Approach 211
9-3 Governing Equations—Variational Approach 216
9-4 Comparison of the Force and Mesh Methods 217

III—ANALYSIS OF A MEMBER ELEMENT

10 Governing Equations for a Deformable Solid 229
10-1 General 229
10-2 Summation Convention; Cartesian Tensors 230
10-3 Analysis of Deformation; Cartesian Strains 232
10-4 Analysis of Stress 240
10-5 Elastic Stress-Strain Relations 248
10-6 Principle of Virtual Displacements; Principle of Stationary Potential Energy; Classical Stability Criteria 253
10-7 Principle of Virtual Forces; Principle of Stationary Complementary Energy 257

11 St. Venant Theory of Torsion-Flexure of Prismatic Members 271
11-1 Introduction and Notation 271
11-2 The Pure-Torsion Problem 273
11-3 Approximate Solution of the Torsion Problem for Thin-Walled Open Cross Sections 281
11-4 Approximate Solution of the Torsion Problem for Thin-Walled Closed Cross Sections 286
11-5 Torsion-Flexure with Unrestrained Warping 293
11-6 Exact Flexural Shear Stress Distribution for a Rectangular Cross Section 303
11-7 Engineering Theory of Flexural Shear Stress Distribution in Thin-Walled Cross Sections 306

12 Engineering Theory of Prismatic Members 330
12-1 Introduction 330
12-3 Force-Displacement Relations; Principle of Virtual Forces 333
12-4 Summary of the Governing Equations 339
12-5 Displacement Method of Solution—Prismatic Member 340
12-6 Force Method of Solution 349

13 Restrained Torsion-Flexure of a Prismatic Member 371
13-1 Introduction 371
13-2 Displacement Expansions; Equilibrium Equations 372
13-3 Force-Displacement Relations—Displacement Model 375
13-4 Solution for Restrained Torsion—Displacement Model 379
13-5 Force-Displacement Relations—Mixed Formulation 383
13-6 Solution for Restrained Torsion—Mixed Formulation 389
13-7 Application to Thin-Walled Open Cross Sections 395
13-8 Application to Thin-Walled Closed Cross Sections 405
13-9 Governing Equations—Geometrically Nonlinear Restrained Torsion 414

14 Planar Deformation of a Planar Member 425
14-1 Introduction; Geometrical Relations 425
14-2 Force-Equilibrium Equations 427
14-3 Force-Displacement Relations; Principle of Virtual Forces 429
14-4 Force-Displacement Relations—Displacement Expansion Approach; Principle of Virtual Displacements 435
14-5 Cartesian Formulation 445
14-6 Displacement Method of Solution—Circular Member 449
14-7 Force Method of Solution 458
14-8 Numerical Integration Procedures 473

15 Engineering Theory of an Arbitrary Member 485
15-1 Introduction; Geometrical Relations 485
15-2 Force-Equilibrium Equations 488
15-3 Force-Displacement Relations—Negligible Warping Restraint; Principle of Virtual Forces 490
15-4 Displacement Method—Circular Planar Member 493
15-5 Force Method—Examples 499
15-6 Restrained Warping Formulation 507
15-7 Member Force-Displacement Relations—Complete End Restraint 511
15-9 Member Matrices—Prismatic Member 520
15-10 Member Matrices—Thin Planar Circular Member 524
15-11 Flexibility Matrix—Circular Helix 531
15-12 Member Force-Displacement Relations—Partial End Restraint 535

IV—ANALYSIS OF A MEMBER SYSTEM

16 Direct Stiffness Method—Linear System 545
16-1 Introduction 545
16-2 Member Force-Displacement Relations 546
16-3 System Equilibrium Equations 547
16-4 Introduction of Joint Displacement Restraints 548

17 General Formulation—Linear System 554
17-1 Introduction 554
17-2 Member Equations 555
17-3 System Force-Displacement Relations 557
17-4 System Equilibrium Equations 559
17-5 Introduction of Joint Displacement Restraints; Governing Equations 560
17-6 Network Formulation 562
17-7 Displacement Method 565
17-8 Force Method 567
17-9 Variational Principles 570
17-10 Introduction of Member Deformation Constraints 573

18 Analysis of Geometrically Nonlinear Systems 585
18-1 Introduction 585
18-2 Member Equations—Planar Deformation 585
18-3 Member Equations—Arbitrary Deformation 591
18-4 Solution Techniques; Stability Analysis 597

Index 605
Part I
MATHEMATICAL PRELIMINARIES

1
Introduction to Matrix Algebra
1-1. DEFINITION OF A MATRIX

An ordered set of quantities may be a one-dimensional array, such as

$$a_1, a_2, \ldots, a_n$$

or a two-dimensional array, such as

$$\begin{matrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{matrix}$$

In a two-dimensional array, the first subscript defines the row location of an element and the second subscript its column location. A two-dimensional array having m rows and n columns is called a matrix of order m by n if certain arithmetic operations (addition, subtraction, multiplication) associated with it are defined. The array is usually enclosed in square brackets and written as*
$$a = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

Note that the first term in the order pertains to the number of rows and the second term to the number of columns. For convenience, we refer to the order of a matrix as simply m × n rather than of order m by n.
* In print, a matrix is represented by a boldfaced letter.
A matrix having only one row is called a row matrix. Similarly, a matrix having only one column is called a column matrix or column vector.* Braces instead of brackets are commonly used to denote a column matrix, and the column subscript is eliminated. Also, the elements are arranged horizontally instead of vertically, to save space. The various column-matrix notations are:

$$c = \{c_{i1}\} = \{c_i\} = \begin{Bmatrix} c_1 \\ c_2 \\ \vdots \end{Bmatrix} = \{c_1, c_2, \ldots\}$$
If the number of rows and the number of columns are equal, the matrix is said
to be square. (Special types of square matrices are discussed in a later section.)
Finally, if all the elements are zero, the matrix is called a null matrix, and is
represented by 0 (boldface, as in the previous case).
Example 1-1

3 × 4 Matrix
$$\begin{bmatrix} 4 & 2 & -1 & 2 \\ 3 & -7 & 1 & -8 \\ 2 & 4 & -3 & 1 \end{bmatrix}$$

1 × 3 Row Matrix
$$[3 \quad 4 \quad 2]$$

3 × 1 Column Matrix
$$\begin{Bmatrix} 3 \\ 4 \\ 2 \end{Bmatrix} \quad \text{or} \quad \{3, 4, 2\}$$

2 × 2 Square Matrix
$$\begin{bmatrix} 2 & 5 \\ 7 & 0 \end{bmatrix}$$

2 × 2 Null Matrix
$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
* This is the mathematical definition of a vector. In mechanics, a vector is defined as a quantity having both magnitude and direction. We will denote a mechanics vector quantity, such as force,
1-2. EQUALITY, ADDITION, AND SUBTRACTION OF MATRICES
Two matrices, a and b, are equal if they are of the same order and if corresponding elements are equal:

$$a = b \quad \text{when } a_{ij} = b_{ij}$$

If a is of order m × n, the matrix equation a = b corresponds to mn equations:

$$a_{ij} = b_{ij} \qquad i = 1, 2, \ldots, m; \quad j = 1, 2, \ldots, n$$

Addition and subtraction operations are defined only for matrices of the same order. The sum of two m × n matrices, a and b, is defined to be the m × n matrix

$$a + b = [a_{ij} + b_{ij}]$$

Similarly,

$$a - b = [a_{ij} - b_{ij}]$$

For example, if

$$a = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \qquad b = \begin{bmatrix} 0 & -1 \\ 3 & 1 \end{bmatrix}$$

then

$$a + b = \begin{bmatrix} 1 & 1 \\ 3 & 2 \end{bmatrix} \qquad a - b = \begin{bmatrix} 1 & 3 \\ -3 & 0 \end{bmatrix}$$
It is obvious from the example that addition is commutative and associative:

$$a + b = b + a \tag{1-6}$$
$$a + (b + c) = (a + b) + c \tag{1-7}$$
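The element-wise definitions above translate directly into code. A minimal sketch using plain Python lists (the function names are our own, not from the text):

```python
def same_order(a, b):
    # Matrices are conformable for addition only when their orders match.
    return len(a) == len(b) and all(len(ra) == len(rb) for ra, rb in zip(a, b))

def add(a, b):
    # The sum of two m x n matrices is the m x n matrix [a_ij + b_ij].
    assert same_order(a, b), "addition is defined only for matrices of the same order"
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def sub(a, b):
    # Element-wise difference, [a_ij - b_ij].
    assert same_order(a, b)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

a = [[1, 2], [0, 1]]
b = [[0, -1], [3, 1]]
print(add(a, b))                    # [[1, 1], [3, 2]]
print(add(a, b) == add(b, a))       # True -- addition is commutative (1-6)
```

Commutativity and associativity, (1-6) and (1-7), follow immediately because scalar addition has those properties element by element.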
1-3. MATRIX MULTIPLICATION
The product of a scalar k and a matrix a is defined to be the matrix [ka_{ij}], in which each element of a is multiplied by k. For example, if

$$k = 5 \qquad a = \begin{Bmatrix} -2 \\ 7 \end{Bmatrix}$$

then

$$ka = \begin{Bmatrix} -10 \\ +35 \end{Bmatrix}$$

Scalar multiplication is commutative. That is,

$$ka = ak = [ka_{ij}] \tag{1-8}$$
To establish the definition of a matrix multiplied by a column matrix, we consider a system of m linear algebraic equations in n unknowns, x_1, x_2, ..., x_n:

$$\begin{matrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = c_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = c_2 \\ \cdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = c_m \end{matrix} \tag{1-9}$$

This set can be written as

$$\sum_{k=1}^{n} a_{ik} x_k = c_i \qquad i = 1, 2, \ldots, m \tag{1-10}$$

where k is a dummy index. Using column matrix notation, (1-9) takes the form

$$\left\{ \sum_{k=1}^{n} a_{ik} x_k \right\} = \{c_i\} \qquad i = 1, 2, \ldots, m$$

Now, we write (1-9) as a matrix product:

$$[a_{ij}]\{x_j\} = \{c_i\} \qquad i = 1, 2, \ldots, m; \quad j = 1, 2, \ldots, n \tag{1-11}$$

Since (1-10) and (1-11) must be equivalent, it follows that the definition equation for a matrix multiplied by a column matrix is

$$ax = [a_{ij}]\{x_j\} = \left\{ \sum_{k=1}^{n} a_{ik} x_k \right\} \qquad i = 1, 2, \ldots, m \tag{1-12}$$

This product is defined only when the column order of a is equal to the row order of x. The result is a column matrix, the row order of which is equal to that of a. In general, if a is of order r × s, and x of order s × 1, the product ax is of order r × 1.
Example 1-2

$$a = \begin{bmatrix} 1 & -1 \\ 2 & 0 \\ 0 & 3 \end{bmatrix} \qquad x = \begin{Bmatrix} 2 \\ 3 \end{Bmatrix}$$

$$ax = \begin{Bmatrix} (1)(2) + (-1)(3) \\ (2)(2) + (0)(3) \\ (0)(2) + (3)(3) \end{Bmatrix} = \begin{Bmatrix} -1 \\ 4 \\ 9 \end{Bmatrix}$$
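Definition (1-12) can be sketched directly in code. A minimal version using plain Python lists (names ours), run on a small 3 × 2 instance:

```python
def matvec(a, x):
    # (1-12): ax = {sum_k a_ik x_k}.  Defined only when the column order
    # of a equals the row order of x; the result has the row order of a.
    assert all(len(row) == len(x) for row in a)
    return [sum(a_ik * x_k for a_ik, x_k in zip(row, x)) for row in a]

a = [[1, -1], [2, 0], [0, 3]]   # 3 x 2
x = [2, 3]                      # 2 x 1 column, stored as a flat list
print(matvec(a, x))             # [-1, 4, 9]
```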
We consider next the product of two matrices. This product is associated with a linear transformation of variables. Suppose that the n original variables x_1, ..., x_n in (1-9) are expressed as a linear combination of s new variables y_1, y_2, ..., y_s:

$$x_k = \sum_{j=1}^{s} b_{kj} y_j \qquad k = 1, 2, \ldots, n \tag{1-13}$$

Substituting for x_k in (1-10),

$$\sum_{k=1}^{n} a_{ik} \sum_{j=1}^{s} b_{kj} y_j = c_i \qquad i = 1, 2, \ldots, m \tag{1-14}$$

Interchanging the order of summation, and letting

$$p_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} \qquad i = 1, 2, \ldots, m; \quad j = 1, 2, \ldots, s$$

the transformed equations take the form

$$\sum_{j=1}^{s} p_{ij} y_j = c_i \qquad i = 1, 2, \ldots, m \tag{1-15}$$

Noting (1-12), we can write (1-15) as

$$py = c \tag{1-16}$$

where p is m × s and y is s × 1. Now, we also express the transformation of variables in matrix form,

$$x = by \tag{1-17}$$

where b is n × s. Substituting for x in (1-11),

$$aby = c \tag{1-18}$$

and requiring (1-16) and (1-18) to be equivalent results in the following definition equation for the product, ab:

$$ab = [a_{ik}][b_{kj}] = [p_{ij}] = \left[\sum_{k=1}^{n} a_{ik} b_{kj}\right] \tag{1-19}$$

This product is defined only when the column order of a is equal to the row order of b. In general, if a is of order r × n, and b of order n × q, the product ab is of order r × q. The element at the ith row and jth column of the product is obtained by multiplying corresponding elements in the ith row of the first matrix by those in the jth column of the second and summing the products.
Example 1-3

$$a = \begin{bmatrix} 1 & 0 \\ -1 & 1 \\ 0 & 2 \end{bmatrix} \qquad b = \begin{bmatrix} 1 & 1 & 0 & -1 \\ 0 & 1 & -1 & 3 \end{bmatrix}$$

$$ab = \begin{bmatrix} (1)(1)+(0)(0) & (1)(1)+(0)(1) & (1)(0)+(0)(-1) & (1)(-1)+(0)(3) \\ (-1)(1)+(1)(0) & (-1)(1)+(1)(1) & (-1)(0)+(1)(-1) & (-1)(-1)+(1)(3) \\ (0)(1)+(2)(0) & (0)(1)+(2)(1) & (0)(0)+(2)(-1) & (0)(-1)+(2)(3) \end{bmatrix} = \begin{bmatrix} +1 & +1 & 0 & -1 \\ -1 & 0 & -1 & +4 \\ 0 & +2 & -2 & +6 \end{bmatrix}$$

If the product ab is defined, a and b are said to be conformable in the order stated. One should note that a and b will be conformable in either order only when a is m × n and b is n × m. In the previous example, a and b are conformable but b and a are not, since the product ba is not defined.
When the relevant products are defined, multiplication of matrices is associative,

$$a(bc) = (ab)c \tag{1-20}$$

and distributive,

$$a(b + c) = ab + ac \qquad (b + c)a = ba + ca \tag{1-21}$$

but, in general, not commutative,

$$ab \ne ba \tag{1-22}$$

Therefore, in multiplying b by a, one should distinguish premultiplication, ab, from postmultiplication, ba. For example, if a and b are square matrices of order 2, the products are

$$ab = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

$$ba = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} b_{11}a_{11} + b_{12}a_{21} & b_{11}a_{12} + b_{12}a_{22} \\ b_{21}a_{11} + b_{22}a_{21} & b_{21}a_{12} + b_{22}a_{22} \end{bmatrix}$$

When ab = ba, the matrices are said to commute or to be permutable.
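The definition (1-19) and the failure of commutativity are easy to check numerically. A minimal sketch with plain Python lists (names ours):

```python
def matmul(a, b):
    # (1-19): p_ij = sum_k a_ik b_kj; defined only when the column order
    # of a equals the row order of b.
    assert all(len(row) == len(b) for row in a)
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
print(matmul(a, b))   # [[2, 1], [4, 3]] -- premultiplication of b by a
print(matmul(b, a))   # [[3, 4], [1, 2]] -- postmultiplication differs
```

Here premultiplying by b swaps the rows of a, while postmultiplying swaps its columns, so ab ≠ ba, illustrating (1-22).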
1-4. TRANSPOSE OF A MATRIX

The transpose of a = [a_{ij}] is defined as the matrix obtained from a by interchanging the rows and columns, and is written aᵀ = [aᵀ_{ij}]:

$$a = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \qquad a^{T} = [a^{T}_{ij}] = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix} \tag{1-23}$$

The element, aᵀ_{ij}, at the ith row and jth column of aᵀ, where now i varies from 1 to n and j from 1 to m, is given by

$$a^{T}_{ij} = a_{ji} \tag{1-24}$$

where a_{ji} is the element at the jth row and ith column of a. For example,

$$a = \begin{bmatrix} 3 & 2 & 1 \\ 7 & 5 & 4 \end{bmatrix} \qquad a^{T} = \begin{bmatrix} 3 & 7 \\ 2 & 5 \\ 1 & 4 \end{bmatrix}$$

Since the transpose of a column matrix is a row matrix, an alternate notation for a row matrix is

$$[a_1, a_2, \ldots, a_n] = a^{T} \tag{1-25}$$
We consider next the transpose matrix associated with the product of two matrices. Let

$$p = ab \tag{a}$$

where a is m × n and b is n × s. The product, p, is m × s and the element, p_{ij}, is

$$p_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} \qquad i = 1, \ldots, m; \quad j = 1, \ldots, s \tag{b}$$

The transpose of p will be of order s × m and the typical element is

$$p^{T}_{ij} = p_{ji} = \sum_{k=1}^{n} a_{jk} b_{ki} \tag{c}$$

where now i = 1, 2, ..., s and j = 1, 2, ..., m. Using (1-24) and (b), we can write (c) as

$$p^{T}_{ij} = \sum_{k=1}^{n} b^{T}_{ik} a^{T}_{kj} \tag{d}$$

It follows from (d) that

$$p^{T} = (ab)^{T} = b^{T} a^{T} \tag{1-26}$$
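The reversal rule (1-26) can be verified numerically for any conformable pair. A minimal sketch (plain Python, names ours):

```python
def transpose(a):
    # (1-24): the (i, j) element of the transpose is a_ji.
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    # (1-19): p_ij = sum_k a_ik b_kj.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[1, 0], [2, 3]]
b = [[4, 1], [0, 5]]
# (1-26): the transpose of a product equals the product of the
# transposed matrices in reversed order.
print(transpose(matmul(a, b)) == matmul(transpose(b), transpose(a)))  # True
```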
The transpose of a product is thus equal to the product of the transposed matrices in reversed order. This rule is also applicable to multiple products. For example, the transpose of abc is

$$(abc)^{T} = c^{T}(ab)^{T} = c^{T} b^{T} a^{T} \tag{1-27}$$

Example 1-4

$$a = \begin{bmatrix} 2 & 0 \\ 7 & 1 \\ 4 & 2 \end{bmatrix} \qquad b = \begin{Bmatrix} 2 \\ -1 \end{Bmatrix} \qquad ab = \begin{Bmatrix} 4 \\ 13 \\ 6 \end{Bmatrix} \qquad (ab)^{T} = [4 \quad 13 \quad 6]$$

Alternatively,

$$(ab)^{T} = b^{T} a^{T} = [2 \quad -1]\begin{bmatrix} 2 & 7 & 4 \\ 0 & 1 & 2 \end{bmatrix} = [4 \quad 13 \quad 6]$$

1-5. SPECIAL SQUARE MATRICES
If the numbers of rows and of columns are equal, the matrix is said to be square and of order n, where n is the number of rows. The elements a_{ii} (i = 1, 2, ..., n) lie on the principal diagonal. If all the elements except the principal-diagonal elements are zero, the matrix is called a diagonal matrix. We will use d for diagonal matrices. If the elements of a diagonal matrix are all unity, the diagonal matrix is referred to as a unit matrix. A unit matrix is usually indicated by I_n, where n is the order of the matrix.

Example 1-5

Square Matrix, Order 2
$$\begin{bmatrix} 1 & 7 \\ 3 & 2 \end{bmatrix}$$

Diagonal Matrix, Order 3
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Unit Matrix, Order 2
$$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
We introduce the Kronecker delta notation:

$$\delta_{ij} = 0 \quad i \ne j \qquad \delta_{ij} = +1 \quad i = j \tag{1-28}$$

With this notation, the unit matrix can be written as

$$I_n = [\delta_{ij}] \qquad i, j = 1, 2, \ldots, n \tag{1-29}$$

Also, the diagonal matrix, d, takes the form

$$d = [d_i \, \delta_{ij}] \tag{1-30}$$

where d_1, ..., d_n are the principal elements. If the principal diagonal elements are all equal to k, the matrix reduces to

$$d = k[\delta_{ij}] = kI_n \tag{1-31}$$

and is called a scalar matrix.

Let a be of order m × n. One can easily show that multiplication of a by a conformable unit matrix does not change a:

$$aI_n = a \qquad I_m a = a \tag{1-32}$$

A unit matrix is commutative with any square matrix of the same order. Similarly, two diagonal matrices of order n are commutative, and the product is a diagonal matrix of order n. Premultiplication of a by a conformable diagonal matrix d multiplies the ith row of a by d_i, and postmultiplication multiplies the jth column by d_j.
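The row- and column-scaling effect of diagonal matrices is easy to see in code. A minimal sketch (plain Python, names ours) using the same matrices as Example 1-6 below:

```python
def diag(entries):
    # Build a diagonal matrix from its principal elements, as in (1-30).
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[3, 1], [2, 7]]
d = diag([2, -1])
print(matmul(d, a))   # [[6, 2], [-2, -7]] -- row i of a scaled by d_i
print(matmul(a, d))   # [[6, -1], [4, -7]] -- column j of a scaled by d_j
```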
Example 1-6

$$\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 6 & 0 \\ 0 & -5 \end{bmatrix}$$

$$\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 2 & 7 \end{bmatrix} = \begin{bmatrix} 6 & 2 \\ -2 & -7 \end{bmatrix}$$

$$\begin{bmatrix} 3 & 1 \\ 2 & 7 \end{bmatrix}\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 6 & -1 \\ 4 & -7 \end{bmatrix}$$
A square matrix a for which a_{ij} = a_{ji} is called symmetrical and has the property that a = aᵀ. If a_{ij} = −a_{ji} (i ≠ j) and the principal diagonal elements all equal zero, the matrix is said to be skew-symmetrical. In this case, aᵀ = −a. Any square matrix can be reduced to the sum of a symmetrical matrix and a skew-symmetrical matrix:

$$a = b + c \qquad b_{ij} = \tfrac{1}{2}(a_{ij} + a_{ji}) \qquad c_{ij} = \tfrac{1}{2}(a_{ij} - a_{ji})$$

The product of two symmetrical matrices is symmetrical only when the matrices are commutative.* Finally, one can easily show that products of the type

$$(a^{T}a) \qquad (aa^{T}) \qquad (a^{T}ba)$$

where a is an arbitrary matrix and b a symmetrical matrix, result in symmetrical matrices.
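The symmetrical/skew-symmetrical decomposition above can be sketched in a few lines (plain Python, names ours):

```python
def sym_skew_split(a):
    # b_ij = (a_ij + a_ji)/2 is symmetrical, c_ij = (a_ij - a_ji)/2 is
    # skew-symmetrical, and a = b + c by construction.
    n = len(a)
    b = [[(a[i][j] + a[j][i]) / 2 for j in range(n)] for i in range(n)]
    c = [[(a[i][j] - a[j][i]) / 2 for j in range(n)] for i in range(n)]
    return b, c

a = [[1, 4], [2, 3]]
b, c = sym_skew_split(a)
print(b)   # [[1.0, 3.0], [3.0, 3.0]] -- equals its own transpose
print(c)   # [[0.0, 1.0], [-1.0, 0.0]] -- zero diagonal, c^T = -c
```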
A square matrix having zero elements to the left (right) of the principal diagonal is called an upper (lower) triangular matrix. Examples are:

Upper Triangular Matrix
$$\begin{bmatrix} 3 & 5 & 2 \\ 0 & 7 & 1 \\ 0 & 0 & 4 \end{bmatrix}$$

Lower Triangular Matrix
$$\begin{bmatrix} 2 & 0 & 0 \\ 1 & 7 & 0 \\ 4 & 5 & 3 \end{bmatrix}$$

Triangular matrices are encountered in many of the computational procedures developed for linear systems. Some important properties of triangular matrices are:

1. The transpose of an upper triangular matrix is a lower triangular matrix and vice versa.
2. The product of two triangular matrices of like structure is a triangular matrix of the same structure:

$$\begin{bmatrix} a_{11} & 0 \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & 0 \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & 0 \\ a_{21}b_{11} + a_{22}b_{21} & a_{22}b_{22} \end{bmatrix}$$

1-6. OPERATIONS ON PARTITIONED MATRICES
Operations on a matrix of high order can be simplified by considering the matrix to be divided into smaller matrices, called submatrices or cells. The partitioning is usually indicated by dashed lines. A matrix can be partitioned in a number of ways. For example,

$$a = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right] = \left[\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right]$$

Note that the partition lines are always straight and extend across the entire matrix. To reduce the amount of writing, the submatrices are represented by a single symbol. We will use upper case letters to denote the submatrices whenever possible and omit the partition lines.

* See Prob. 1-7.
Example 1-7

We represent

$$a = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right]$$

as

$$a = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$

where

$$A_{11} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \qquad A_{12} = \begin{bmatrix} a_{13} \\ a_{23} \end{bmatrix} \qquad A_{21} = [a_{31} \quad a_{32}] \qquad A_{22} = [a_{33}]$$
If two matrices of the same order are identically partitioned, the rules of matrix addition are applicable to the submatrices. Let

$$a = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \qquad b = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \tag{1-34}$$

where B_{ij} and A_{ij} are of the same order. The sum is

$$a + b = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} \\ A_{21} + B_{21} & A_{22} + B_{22} \end{bmatrix} \tag{1-35}$$
The rules of matrix multiplication are applicable to partitioned matrices provided that the partitioned matrices are conformable for multiplication. In general, two partitioned matrices are conformable for multiplication if the partitioning of the rows of the second matrix is identical to the partitioning of the columns of the first matrix. This restriction allows us to treat the various submatrices as single elements provided that we preserve the order of multiplication. Let a and b be two partitioned matrices:

$$a = [A_{ij}] \qquad i = 1, 2, \ldots, M; \quad j = 1, 2, \ldots, N$$
$$b = [B_{jk}] \qquad j = 1, 2, \ldots, N; \quad k = 1, 2, \ldots, S$$

We can write the product as

$$c = ab = [C_{ik}] \qquad i = 1, 2, \ldots, M; \quad k = 1, 2, \ldots, S$$

where

$$C_{ik} = \sum_{j=1}^{N} A_{ij} B_{jk} \tag{1-36}$$
As an illustration, we consider the product

$$ab = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}$$

Suppose we partition a with a vertical partition between the second and third columns,

$$a = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right] = [A_{11} \quad A_{12}]$$

For the rules of matrix multiplication to be applicable to the submatrices of a, we must partition b with a horizontal partition between the second and third rows. Taking

$$b = \begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix}$$

the product has the form

$$ab = [A_{11} \quad A_{12}]\begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix} = A_{11}B_{11} + A_{12}B_{21}$$
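This block identity can be checked numerically. A minimal sketch (plain Python, names ours) partitioning a 3 × 3 matrix by columns and a column matrix by rows:

```python
def matmul(a, b):
    # p_ij = sum_k a_ik b_kj.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def madd(p, q):
    return [[x + y for x, y in zip(rp, rq)] for rp, rq in zip(p, q)]

a = [[1, 2, 5], [0, 1, 4], [3, 0, 2]]
b = [[1], [2], [3]]            # a column matrix, stored 3 x 1

# Vertical partition of a between columns 2 and 3, horizontal partition
# of b between rows 2 and 3, as in the illustration above:
A11 = [row[:2] for row in a]   # 3 x 2
A12 = [row[2:] for row in a]   # 3 x 1
B11 = b[:2]                    # 2 x 1
B21 = b[2:]                    # 1 x 1

# ab = A11 B11 + A12 B21 agrees with the unpartitioned product.
print(madd(matmul(A11, B11), matmul(A12, B21)) == matmul(a, b))  # True
```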
The conformability of two partitioned matrices does not depend on the horizontal partitioning of the first matrix or the vertical partitioning of the second matrix. To show this, we consider the product

$$ab = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix}$$

Suppose we partition a with a horizontal partition between the second and third rows:

$$a = \left[\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right] = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix}$$

Since the column order of A_{11} and A_{21} is equal to the row order of b, no partitioning of b is required. The product is

$$ab = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} b = \begin{bmatrix} A_{11}b \\ A_{21}b \end{bmatrix}$$

As an alternative, we partition b with a vertical partition:

$$b = \left[\begin{array}{c|c} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{array}\right] = [B_{11} \quad B_{12}]$$

Since the row order of B_{11} and B_{12} is equal to the column order of a, no partitioning of a is necessary and the product has the form

$$ab = a[B_{11} \quad B_{12}] = [aB_{11} \quad aB_{12}]$$
To transpose a partitioned matrix, one first interchanges the off-diagonal submatrices and then transposes each submatrix. If

$$a = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{bmatrix}$$

then

$$a^{T} = \begin{bmatrix} A^{T}_{11} & A^{T}_{21} & \cdots & A^{T}_{m1} \\ A^{T}_{12} & A^{T}_{22} & \cdots & A^{T}_{m2} \\ \vdots & & & \vdots \\ A^{T}_{1n} & A^{T}_{2n} & \cdots & A^{T}_{mn} \end{bmatrix}$$
A particular type of matrix encountered frequently is the quasi-diagonal matrix. This is a partitioned matrix whose diagonal submatrices are square of various orders, and whose off-diagonal submatrices are null matrices. An example is

$$a = \left[\begin{array}{c|cc} a_{11} & 0 & 0 \\ \hline 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{array}\right]$$

which can be written in partitioned form as

$$a = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}$$

where

$$A_1 = [a_{11}] \qquad A_2 = \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}$$

and 0 denotes a null matrix. The product of two quasi-diagonal matrices of like structure (corresponding diagonal submatrices are of the same order) is a quasi-diagonal matrix of the same structure.
$$\begin{bmatrix} A_1 & 0 & 0 \\ 0 & A_2 & 0 \\ 0 & 0 & A_3 \end{bmatrix}\begin{bmatrix} B_1 & 0 & 0 \\ 0 & B_2 & 0 \\ 0 & 0 & B_3 \end{bmatrix} = \begin{bmatrix} A_1B_1 & 0 & 0 \\ 0 & A_2B_2 & 0 \\ 0 & 0 & A_3B_3 \end{bmatrix} \tag{1-40}$$

where A_i and B_i are of the same order. We use the term quasi to distinguish between partitioned and unpartitioned matrices having the same form.
1-7. DEFINITION AND PROPERTIES OF A DETERMINANT
The concept of a determinant was originally developed in connection with the solution of square systems of linear algebraic equations. To illustrate how this concept evolved, we consider the simple case of two equations:

$$\begin{matrix} a_{11}x_1 + a_{12}x_2 = c_1 \\ a_{21}x_1 + a_{22}x_2 = c_2 \end{matrix} \tag{a}$$

Solving (a) for x_1 and x_2, we obtain

$$(a_{11}a_{22} - a_{12}a_{21})x_1 = c_1a_{22} - c_2a_{12}$$
$$(a_{11}a_{22} - a_{12}a_{21})x_2 = -c_1a_{21} + c_2a_{11}$$

The scalar quantity, a_{11}a_{22} − a_{12}a_{21}, is defined as the determinant of the second-order square array a_{ij} (i, j = 1, 2). The determinant of an array (or matrix) is usually indicated by enclosing the array (or matrix) with vertical lines:

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = |a| = a_{11}a_{22} - a_{12}a_{21} \tag{1-41}$$

We use the terms array and matrix interchangeably, since they are synonymous. Also, we refer to the determinant of an nth-order array as an nth-order determinant. It should be noted that determinants are associated only with square arrays, that is, with square matrices.
The determinant of a third-order array is defined as

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{matrix} +a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} \\ -a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} \\ +a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} \end{matrix} \tag{1-42}$$

This number is the coefficient of x_1, x_2, and x_3 obtained when the third-order system ax = c is solved successively for x_1, x_2, and x_3. Comparing (1-41) and (1-42), we see that both expansions involve products which have the following properties:

1. Each product contains only one element from any row or column and no element occurs twice in the same product. The products differ only in the column subscripts.
2. The sign of a product depends on the order of the column subscripts, e.g., +a_{11}a_{22}a_{33} and −a_{11}a_{23}a_{32}.

These properties are associated with the arrangement of the column subscripts and can be conveniently described using the concept of a permutation, which is discussed below.
A set of distinct integers is considered to be in natural order if each integer
is followed only by larger integers. A rearrangement of the natural order is
_{S}_{E}_{C}_{.} _{1}_{—}_{7}_{.}
DEFINITION AND PROPERTIES OF A DETERMINANT
_{(}_{1}_{,}_{5}_{,}_{3}_{)} _{i}_{s} _{a} _{p}_{e}_{r}_{m}_{u}_{t}_{a}_{t}_{i}_{o}_{n} _{o}_{f}_{(}_{1}_{,} _{3}_{,}_{5}_{)}_{.} _{I}_{f} an integer is followed by a smaller integer, the pair is said to form an inversion. The number of inversions for a set is defined
as the sum of the inversions for each integer. As an illustration, we consider
the set (3, 1, 4, 2). Working from left to right, the integer inversions are:

    Integer     Inversions      Total
       3        (3, 1)(3, 2)      2
       1        None              0
       4        (4, 2)            1
       2        None              0
                                 ---
                                  3
This set has three inversions. A permutation is classified as even (odd) if the total number of inversions for the set is an even (odd) integer. According to this convention, (1, 2, 3) and (3, 1, 2) are even permutations and (1, 3, 2) is an odd permutation. Instead of counting the inversions, we can determine the number of integer interchanges required to rearrange the set in its natural order, since an even (odd) number of interchanges corresponds to an even (odd) number of inversions. For example, (3, 2, 1) has three inversions and requires one interchange. Working with interchanges rather than inversions is practical only when the set is small.
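The inversion count and the even/odd classification are easy to check numerically. A minimal Python sketch (illustrative only; the function names are our own):

```python
from itertools import combinations

def inversions(perm):
    """Number of pairs (i, j), i < j, with perm[i] > perm[j]."""
    return sum(1 for i, j in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

def parity(perm):
    """+1 for an even permutation, -1 for an odd one."""
    return -1 if inversions(perm) % 2 else 1

print(inversions((3, 1, 4, 2)))   # 3, as in the table above
print(parity((1, 2, 3)), parity((3, 1, 2)), parity((1, 3, 2)))   # 1 1 -1
```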
Referring back to (1—41) and (1—42), we see that each product is a permutation of the set of column subscripts, and the sign is negative when the permutation is odd. The number of products is equal to the number of possible permutations of the column subscripts that can be formed. One can easily show that there are n! possible permutations for a set of n distinct integers.

We let (α1, α2, . . . , αn) be a permutation of the set (1, 2, . . . , n) and define eα1α2···αn as

    eα1α2···αn  =  +1  when (α1, α2, . . . , αn) is an even permutation
                =  −1  when (α1, α2, . . . , αn) is an odd permutation      (1—43)

Using (1—43), the definition equation for an nth-order determinant can be written as

    | a11  a12  . . .  a1n |
    | a21  a22  . . .  a2n |
    |  .    .           .  |  =  Σ eα1α2···αn a1α1 a2α2 · · · anαn      (1—44)
    | an1  an2  . . .  ann |

where the summation is taken over all possible permutations of (1, 2, . . . , n).
_{1}_{8}
INTRODUCTION TO MATRIX ALGEBRA
_{C}_{H}_{A}_{P}_{.} _{1}
Example 1—8
The permutations for n = 3 are

    (α1, α2, α3) = (1, 2, 3)      e123 = +1
    (α1, α2, α3) = (1, 3, 2)      e132 = −1
    (α1, α2, α3) = (2, 1, 3)      e213 = −1
    (α1, α2, α3) = (2, 3, 1)      e231 = +1
    (α1, α2, α3) = (3, 1, 2)      e312 = +1
    (α1, α2, α3) = (3, 2, 1)      e321 = −1

Using (1—44), we obtain

    | a11  a12  a13 |      a11a22a33 − a11a23a32
    | a21  a22  a23 |  =  −a12a21a33 + a12a23a31
    | a31  a32  a33 |      +a13a21a32 − a13a22a31

This result coincides with (1—42).
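Definition (1—44) translates directly into a short program: sum, over all permutations of the column subscripts, the signed products. A Python sketch (the sign is computed from the inversion count, as in Sec. 1—7):

```python
from itertools import permutations

def inversions(perm):
    return sum(1 for i in range(len(perm))
               for j in range(i + 1, len(perm)) if perm[i] > perm[j])

def det_leibniz(a):
    """Determinant per (1-44): signed sum over all column-subscript permutations."""
    n = len(a)
    total = 0
    for alpha in permutations(range(n)):
        e = -1 if inversions(alpha) % 2 else 1   # e_{alpha1 ... alphan}
        prod = 1
        for i in range(n):
            prod *= a[i][alpha[i]]               # a_{1,alpha1} ... a_{n,alphan}
        total += e * prod
    return total

print(det_leibniz([[1, 3], [0, 5]]))   # 5
print(det_leibniz([[1, 2, 3], [2, 3, 1], [4, 1, 2]]))   # -25
```

The n! terms make this impractical for large n, which motivates the expansion and reduction methods discussed later in the chapter.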
The following properties of determinants can be established* from (1—44):
1. If all elements of any row (or column) are zero, the determinant is zero.
2. The value of the determinant is unchanged if the rows and columns are interchanged; that is, |aᵀ| = |a|.
_{3}_{.} If two successive rows (or two successive columns) are interchanged, the
sign of the determinant is changed.
4. If all elements of one row (or one column) are multiplied by a number k,
_{t}_{h}_{e} determinant is multiplied by k.
_{5}_{.} If corresponding elements of two rows (or two columns) are equal or in
a constant ratio, then the determinant is zero.
_{6}_{.} If each element in one row (or one column) is expressed as the sum of
two terms, then the determinant is equal to the sum of two determinants,
in each of which one of the two terms is deleted in each element of that
row (or column).
_{7}_{.} If to the elements of any row (column) are added k times the cor
responding elements of any other row (column), the determinant is
unchanged.
We demonstrate these properties for the case of a second-order matrix. Let

    a = [ a11  a12 ]
        [ a21  a22 ]

The determinant is

    |a| = a11a22 − a12a21      (b)

Properties 1 and 2 are obvious. It follows from property 2 that |aᵀ| = |a|. We
^{*} _{S}_{e}_{e} Probs. 1—17, 1—18, 1—19.
illustrate the third by interchanging the rows of a:

    a′ = [ a21  a22 ]
         [ a11  a12 ]

    |a′| = a21a12 − a11a22 = −|a|
Property 4 is also obvious from (b). To demonstrate the fifth, we take

    a21 = ka11      a22 = ka12

Then

    |a| = a11(ka12) − a12(ka11) = 0

Next, let

    a11 = b11 + c11      a12 = b12 + c12

According to property 6,

    |a| = |b| + |c|

where

    |b| = | b11  b12 |      |c| = | c11  c12 |
          | a21  a22 |            | a21  a22 |

This result can be obtained by substituting for a11 and a12 in (b). Finally, to illustrate property 7, we take

    b11 = a11 + ka21      b12 = a12 + ka22      b21 = a21      b22 = a22

Then,

    |b| = (a11 + ka21)a22 − (a12 + ka22)a21 = |a|
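These demonstrations can also be checked numerically. A small Python sketch verifying properties 3, 4, and 7 for a particular second-order matrix (the entries and the value of k are arbitrary choices):

```python
def det2(m):
    """Determinant of a 2 x 2 matrix, per (b)."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a = [[3, 7], [2, 5]]   # |a| = 15 - 14 = 1
k = 4

# Property 3: interchanging the two rows changes the sign.
assert det2([a[1], a[0]]) == -det2(a)

# Property 4: multiplying one row by k multiplies the determinant by k.
assert det2([[k * a[0][0], k * a[0][1]], a[1]]) == k * det2(a)

# Property 7: adding k times row 2 to row 1 leaves the determinant unchanged.
b = [[a[0][0] + k * a[1][0], a[0][1] + k * a[1][1]], a[1]]
assert det2(b) == det2(a)

print("properties 3, 4, 7 verified")
```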
1—8. COFACTOR EXPANSION FORMULA
If the row and column containing an element aij in the square matrix a are deleted, the determinant of the remaining square array is called the minor of aij and is denoted by Mij. The cofactor of aij, denoted by Aij, is related to the minor by

    Aij = (−1)^(i+j) Mij      (1—45)

As an illustration, we take

    a = [ 3  2  8 ]
        [ 1  7  4 ]
        [ 5  3  1 ]

The minors and cofactors associated with a23 and a22 are

    M23 = | 3  2 |  = −1      A23 = (−1)^(2+3) M23 = +1
          | 5  3 |

    M22 = | 3  8 |  = −37      A22 = (−1)^(2+2) M22 = −37
          | 5  1 |
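The minors and cofactors above can be reproduced with a few lines of Python (indices are 0-based in the code, so a23 corresponds to i = 1, j = 2):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(a, i, j):
    """Submatrix obtained by deleting row i and column j (0-based)."""
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def cofactor(a, i, j):
    return (-1) ** (i + j) * det2(minor(a, i, j))   # (1-45)

a = [[3, 2, 8], [1, 7, 4], [5, 3, 1]]
print(det2(minor(a, 1, 2)), cofactor(a, 1, 2))   # M23 = -1, A23 = +1
print(det2(minor(a, 1, 1)), cofactor(a, 1, 1))   # M22 = -37, A22 = -37
```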
Cofactors occur naturally when (1—44) is expanded* in terms of the elements of a row or column. This leads to the following expansion formula, called Laplace's expansion by cofactors or simply Laplace's expansion:

    |a| = Σ(k = 1 to n) aik Aik = Σ(k = 1 to n) akj Akj      (1—46)

Equation (1—46) states that the determinant is equal to the sum of the products of the elements of any single row or column by their cofactors. Since the determinant is zero if two rows or columns are identical, it follows that

    Σ(k = 1 to n) aik Ajk = 0      i ≠ j
    Σ(k = 1 to n) aki Akj = 0      i ≠ j      (1—47)
The above identities are used to establish Cramer's rule in the following section.
Example 1—9
(1) We apply (1—46) to a third-order array and expand with respect to the first row:

    | a11  a12  a13 |
    | a21  a22  a23 |  =  a11(−1)² | a22  a23 |  +  a12(−1)³ | a21  a23 |  +  a13(−1)⁴ | a21  a22 |
    | a31  a32  a33 |              | a32  a33 |               | a31  a33 |               | a31  a32 |

    = a11(a22a33 − a23a32) + a12(−a21a33 + a23a31) + a13(a21a32 − a22a31)

To illustrate (1—47), we take the cofactors for the first row and the elements of the second row:

    Σ(k = 1 to 3) a2k A1k = a21(a22a33 − a23a32) + a22(−a21a33 + a23a31) + a23(a21a32 − a22a31) = 0

(2) Suppose the array is triangular in form, for example, lower triangular. Expanding with respect to the first row, we have

    | a11   0    0  |
    | a21  a22   0  |  =  a11 | a22   0  |  =  (a11)(a22a33) = a11a22a33
    | a31  a32  a33 |         | a32  a33 |
Generalizing this result, we find that the determinant of a triangular matrix is equal to
the product of the diagonal elements. This result is quite useful.
^{*} _{S}_{e}_{e} Probs. 1—20, 1—21.
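Laplace's expansion (1—46) leads naturally to a recursive determinant routine. A Python sketch, applied to a lower triangular matrix to confirm the product-of-diagonals result (the test matrix is our own choice):

```python
def det(a):
    """Determinant by cofactor expansion (1-46) along the first row."""
    if len(a) == 1:
        return a[0][0]
    total = 0
    for k in range(len(a)):
        sub = [row[:k] + row[k + 1:] for row in a[1:]]   # minor of a[0][k]
        total += (-1) ** k * a[0][k] * det(sub)
    return total

# Lower triangular: the determinant is the product of the diagonal elements.
t = [[2, 0, 0], [7, 3, 0], [1, 4, 5]]
print(det(t))   # 30 = 2 * 3 * 5
```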
The evaluation of a determinant using the definition equation (1—44) or the cofactor expansion formula (1—46) is quite tedious, particularly when the array is large. A number of alternate and more efficient numerical procedures for evaluating determinants have been developed. These procedures are described in References 9—13.
Suppose a square matrix, say c, is expressed as the product of two square matrices,

    c = ab

and we want |c|. It can be shown* that the determinant of the product of two square matrices is equal to the product of the determinants:

    |c| = |a| |b|      (1—48)
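Equation (1—48) can be spot-checked with two small matrices (chosen arbitrarily here). A Python sketch:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 3], [0, 5]]
b = [[2, 0], [1, 4]]
c = matmul2(a, b)
print(det2(a), det2(b), det2(c))   # 5 8 40
assert det2(c) == det2(a) * det2(b)   # (1-48)
```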
Whether we use (1—48) or first multiply a and b and then determine |ab| depends on the form and order of a and b. If they are diagonal or triangular, (1—48) is quite efficient.

Example 1—10

    a = [ 1  3 ]      b = [ 2  0 ]
        [ 0  5 ]          [ 1  4 ]

    |a| = 5      |b| = 8

Determining c first, we obtain

    c = ab = [ 5  12 ]      |c| = +40
             [ 5  20 ]

Alternatively, applying (1—48),

    |c| = |a| |b| = (5)(8) = +40

1—9. CRAMER'S RULE

We consider next a set of n equations in n unknowns:

    Σ(k = 1 to n) ajk xk = cj      j = 1, 2, . . . , n      (a)

* See Ref. 4, section 3—16.
Multiplying both sides of (a) by Ajr, where r is an arbitrary integer from 1 to n, and summing with respect to j, we obtain (after interchanging the order of summation)

    Σ(k = 1 to n) xk [ Σ(j = 1 to n) ajk Ajr ]  =  Σ(j = 1 to n) cj Ajr      (b)

Now, the inner sum vanishes when r ≠ k and equals |a| when r = k. This follows from (1—47). Then, (b) reduces to

    |a| xr = Σ(j = 1 to n) cj Ajr      (c)

The expansion on the right side of (c) differs from the expansion

    |a| = Σ(j = 1 to n) ajr Ajr

only in that the rth column of a is replaced by c. Equation (c) leads to Cramer's rule, which can be stated as follows:
A set of n linear algebraic equations in n unknowns, ax = c, has a unique solution when |a| ≠ 0. The expression for xr (r = 1, 2, . . . , n) is the ratio of two determinants; the denominator is |a| and the numerator is the determinant of the matrix obtained from a by replacing the rth column by c.
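Cramer's rule as stated translates directly into code: replace the rth column of a by c and take the ratio of determinants. A Python sketch using exact rational arithmetic (the matrix and right-hand side are our own choices):

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row, per (1-46)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k]
               * det([row[:k] + row[k + 1:] for row in a[1:]])
               for k in range(len(a)))

def cramer(a, c):
    """Solve ax = c by Cramer's rule; requires |a| != 0."""
    d = det(a)
    if d == 0:
        raise ValueError("a is singular; Cramer's rule does not apply")
    x = []
    for r in range(len(a)):
        # numerator: a with its rth column replaced by c
        ar = [row[:r] + [c[i]] + row[r + 1:] for i, row in enumerate(a)]
        x.append(Fraction(det(ar), d))
    return x

a = [[1, 2, 3], [2, 3, 1], [4, 1, 2]]   # |a| = -25
print(cramer(a, [1, 0, 0]))
```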
If |a| = 0, a is said to be singular. Whether a solution exists in this case will depend on c. All we can conclude from Cramer's rule is that the solution, if it exists, will not be unique. Singular matrices and the question of solvability are discussed in Sec. 1—13.
1—10. ADJOINT AND INVERSE MATRICES
We have shown in the previous section that the solution to a system of n equations in n unknowns,

    Σ(j = 1 to n) aij xj = ci      i = 1, 2, . . . , n      (a)

can be expressed as

    xi = (1/|a|) Σ(j = 1 to n) Aji cj      i = 1, 2, . . . , n      (b)

(note that we have taken r = i in Eq. c of Sec. 1—9). Using matrix notation, (b) takes the form

    x = (1/|a|) [Aij]ᵀ c      (c)
We define the adjoint and inverse matrices for the square matrix a of order n as

    adjoint a = Adj a = [Aij]ᵀ      (1—49)

    inverse a = a⁻¹ = (1/|a|) Adj a      (1—50)
Note that the inverse matrix is defined only for a nonsingular square matrix.
Example 1—11
We determine the adjoint and inverse matrices for

    a = [ 1  2  3 ]
        [ 2  3  1 ]
        [ 4  1  2 ]

The matrix of cofactors is

    [Aij] = [  5     0   −10 ]
            [ −1   −10    +7 ]
            [ −7    +5    −1 ]

Also, |a| = −25. Then

    Adj a = [   5   −1   −7 ]
            [   0  −10   +5 ]
            [ −10   +7   −1 ]

    a⁻¹ = (1/|a|) Adj a = [ −1/5   +1/25   +7/25 ]
                          [   0    +2/5    −1/5  ]
                          [ +2/5   −7/25   +1/25 ]
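The cofactor, adjoint, and inverse computations of this example can be reproduced exactly with rational arithmetic. A Python sketch built on definitions (1—45), (1—49), and (1—50):

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k]
               * det([row[:k] + row[k + 1:] for row in a[1:]])
               for k in range(len(a)))

def cofactor(a, i, j):
    sub = [row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i]
    return (-1) ** (i + j) * det(sub)   # (1-45)

def adjoint(a):
    """Transpose of the cofactor matrix, per (1-49)."""
    n = len(a)
    return [[cofactor(a, j, i) for j in range(n)] for i in range(n)]

def inverse(a):
    d = det(a)   # must be nonzero, per (1-50)
    return [[Fraction(e, d) for e in row] for row in adjoint(a)]

a = [[1, 2, 3], [2, 3, 1], [4, 1, 2]]
ainv = inverse(a)
# a^-1 a = a a^-1 = I, anticipating (1-51)
prod = [[sum(a[i][k] * ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(ainv[0])                                        # first row of a^-1
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])      # True
```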
Using the inverse-matrix notation, we can write the solution of (a) as

    x = a⁻¹c      (d)

Substituting for x in (a) and c in (d), we see that a⁻¹ has the property that

    a⁻¹a = aa⁻¹ = I      (1—51)

Equation (1—51) is frequently taken as the definition of the inverse matrix instead of (1—50). Applying (1—48) to (1—51), we obtain

    |a⁻¹| |a| = 1

It follows that (1—51) is valid only when |a| ≠ 0. Multiplication by the inverse matrix is analogous to division in ordinary algebra.

If a is symmetrical, then a⁻¹ is also symmetrical. To show this, we take the transpose of (1—51), and use the fact that a = aᵀ:

    a(a⁻¹)ᵀ = I
_{2}_{4}
INTRODUCTION TO MATRIX ALGEBRA
_{C}_{H}_{A}_{P}_{.} _{1}
Premultiplication by a⁻¹ results in

    (a⁻¹)ᵀ = a⁻¹

and therefore a⁻¹ is also symmetrical. One can also show* that, for any nonsingular square matrix, the inverse and transpose operations can be interchanged:

    (aᵀ)⁻¹ = (a⁻¹)ᵀ      (1—52)
We consider next the inverse matrix associated with the product of two square matrices. Let

    c = ab

where a and b are both of order n × n and nonsingular. Premultiplication by a⁻¹ and then by b⁻¹ results in

    a⁻¹c = b
    (b⁻¹a⁻¹)c = I

It follows from the definition of the inverse matrix that

    (ab)⁻¹ = b⁻¹a⁻¹      (1—53)

In general, the inverse of a multiple matrix product is equal to the product of the inverse matrices in reverse order. For example,

    (abc)⁻¹ = c⁻¹b⁻¹a⁻¹
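The reversal rule (1—53) is easy to confirm numerically for a pair of second-order matrices (chosen arbitrarily):

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2 x 2 matrix via the adjoint, per (1-50)."""
    d = det2(m)
    return [[Fraction(m[1][1], d), Fraction(-m[0][1], d)],
            [Fraction(-m[1][0], d), Fraction(m[0][0], d)]]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 3], [0, 5]]
b = [[2, 0], [1, 4]]
assert inv2(mul2(a, b)) == mul2(inv2(b), inv2(a))   # (ab)^-1 = b^-1 a^-1
print("(1-53) verified")
```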
The determination of the inverse matrix using the definition equation (1—50) is too laborious when the order is large. A number of inversion procedures based on (1—51) have been developed. These methods are described in Refs. 9—13.
1—11. ELEMENTARY OPERATIONS ON A MATRIX

The elementary operations on a matrix are:
1. The interchange of two rows or of two columns.
_{2}_{.} The multiplication of the elements of a row or a column by a number
other than zero.
_{3}_{.} The addition, to the elements of a row or column, of k times the cor
responding element of another row or column.
These operations can be effected by premultiplying (for row operation) or
postmultiplying (for column operation) the matrix by an appropriate matrix,
called an elementary operation matrix.
We consider a matrix a of order m × n. Suppose that we want to interchange rows j and k. Then, we premultiply a by an m × m matrix obtained by modifying the mth-order unit matrix, Im, in the following way:
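For a row interchange, the standard construction is to interchange rows j and k of Im itself. A Python sketch (0-based indices; the example matrix is our own choice):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def row_swap_matrix(m, j, k):
    """I_m with rows j and k interchanged (0-based); premultiplication
    by this matrix interchanges rows j and k of the operand."""
    e = identity(m)
    e[j], e[k] = e[k], e[j]
    return e

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a = [[1, 2], [3, 4], [5, 6]]        # a 3 x 2 matrix
e = row_swap_matrix(3, 0, 2)        # interchange rows 1 and 3
print(matmul(e, a))                 # [[5, 6], [3, 4], [1, 2]]
```

Column operations work the same way, except that the modified unit matrix is of order n and postmultiplies a.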