
JEROME J. CONNOR, Sc.D., Massachusetts Institute of Technology, is Professor of Civil Engineering at Massachusetts Institute of Technology. He has been active in teaching and research in structural analysis and mechanics at the U.S. Army Materials and Mechanics Research Agency and for some years at M.I.T. His primary interest is in computer-based analysis methods, and his current research is concerned with the dynamic analysis of prestressed concrete reactor vessels and the development of finite element models for fluid flow problems. Dr. Connor is one of the original developers of ICES-STRUDL, and has published extensively in the structural field.

ANALYSIS OF
STRUCTURAL MEMBER
SYSTEMS

JEROME J. CONNOR
Massachusetts Institute of Technology

THE RONALD PRESS COMPANY • NEW YORK

Copyright © 1976 by
THE RONALD PRESS COMPANY

All Rights Reserved

No part of this book may be reproduced in any form without permission in writing from the publisher.

Library of Congress Catalog Card Number: 74—22535

PRINTED IN THE UNITED STATES OF AMERICA

Preface

With the development over the past decade of computer-based analysis methods, the teaching of structural analysis subjects has been revolutionized. The traditional division between structural analysis and structural mechanics became no longer necessary, and instead of teaching a preponderance of solution details it is now possible to focus on the underlying theory.

What has been done here is to integrate analysis and mechanics in a systematic presentation which includes the mechanics of a member, the matrix formulation of the equations for a system of members, and solution techniques. The three fundamental steps in formulating a problem in solid mechanics—enforcing equilibrium, relating deformations and displacements, and relating forces and deformations—form the basis of the development, and the central theme is to establish the equations for each step and then discuss how the complete set of equations is solved. In this way, a reader obtains a more unified view of a problem, sees more clearly where the various simplifying assumptions are introduced, and is better prepared to extend the theory.

The chapters of Part I contain the relevant topics for an essential background in linear algebra, differential geometry, and matrix transformations. Collecting this material in the first part of the book is convenient for the continuity of the mathematics presentation as well as for the continuity in the following development.

Part II treats the analysis of an ideal truss. The governing equations for small strain but arbitrary displacement are established and then cast into matrix form. Next, we deduce the principles of virtual displacements and virtual forces by manipulating the governing equations, introduce a criterion for evaluating the stability of an equilibrium position, and interpret the governing equations as stationary requirements for certain variational principles. These concepts are essential for an appreciation of the solution schemes described in the following two chapters.

Part III is concerned with the behavior of an isolated member. For completeness, first are presented the governing equations for a deformable elastic solid allowing for arbitrary displacements, the continuous form of the principles of virtual displacements and virtual forces, and the stability criterion. Unrestrained torsion-flexure of a prismatic member is examined in detail and then an approximate engineering theory is developed. We move on to restrained torsion-flexure of a prismatic member, discussing various approaches for including warping restraint and illustrating its influence for thin-walled open and closed sections. The concluding chapters treat the behavior of planar and arbitrary curved members.

How one assembles and solves the governing equations for a member system is discussed in Part IV. First, the direct stiffness method is outlined; then a general formulation of the governing equations is described. Geometrically nonlinear behavior is considered in the last chapter, which discusses member force-displacement relations, including torsional-flexural coupling, solution schemes, and linearized stability analysis.

The objective has been a text suitable for the teaching of modern structural member system analysis, and what is offered is an outgrowth of lecture notes developed in recent years at the Massachusetts Institute of Technology. To the many students who have provided the occasion of that development, I am deeply appreciative. Particular thanks go to Mrs. Jane Malinofsky for her patience in typing the manuscript, and to Professor Charles Miller for his encouragement.

Cambridge, Mass.
January, 1976

JEROME J. CONNOR

Contents

I—MATHEMATICAL PRELIMINARIES

1  Introduction to Matrix Algebra  3
   1—1  Definition of a Matrix  3
   1—2  Equality, Addition, and Subtraction of Matrices  5
   1—3  Matrix Multiplication  5
   1—4  Transpose of a Matrix  8
   1—5  Special Square Matrices  10
   1—6  Operations on Partitioned Matrices  12
   1—7  Definition and Properties of a Determinant  16
   1—8  Cofactor Expansion Formula  19
   1—9  Cramer's Rule  21
   1—10  Adjoint and Inverse Matrices  22
   1—11  Elementary Operations on a Matrix  24
   1—12  Rank of a Matrix  27
   1—13  Solvability of Linear Algebraic Equations  30

2  Characteristic-Value Problems and Quadratic Forms  46
   2—1  Introduction  46
   2—2  Second-Order Characteristic-Value Problem  48
   2—3  Similarity and Orthogonal Transformations  52
   2—4  The nth-Order Symmetrical Characteristic-Value Problem  55
   2—5  Quadratic Forms  57

3  Relative Extrema for a Function  66
   3—1  Relative Extrema for a Function of One Variable  66
   3—2  Relative Extrema for a Function of n Independent Variables  71
   3—3  Lagrange Multipliers  75

4  Differential Geometry of a Member Element  81
   4—1  Parametric Representation of a Space Curve  81
   4—2  Arc Length  82
   4—3  Unit Tangent Vector  85
   4—4  Principal Normal and Binormal Vectors  86
   4—5  Curvature, Torsion, and the Frenet Equations  88
   4—6  Summary of the Geometrical Relations for a Space Curve  91
   4—7  Local Reference Frame for a Member Element  92
   4—8  Curvilinear Coordinates for a Member Element  94

5  Matrix Transformations for a Member Element  100
   5—1  Rotation Transformation  100
   5—2  Three-Dimensional Force Transformations  103
   5—3  Three-Dimensional Displacement Transformations  109

II—ANALYSIS OF AN IDEAL TRUSS

6  Governing Equations for an Ideal Truss  115
   6—1  General  115
   6—2  Elongation—Joint Displacement Relation for a Bar  116
   6—3  General Elongation—Joint Displacement Relation  120
   6—4  Force-Elongation Relation for a Bar  125
   6—5  General Bar Force—Joint Displacement Relation  130
   6—6  Joint Force-Equilibrium Equations  130
   6—7  Introduction of Displacement Restraints; Governing Equations  132
   6—8  Arbitrary Restraint Direction  134
   6—9  Initial Instability  137

7  Variational Principles for an Ideal Truss  152
   7—1  General  152
   7—2  Principle of Virtual Displacements  153
   7—3  Principle of Virtual Forces  159
   7—4  Strain Energy; Principle of Stationary Potential Energy  162
   7—5  Complementary Energy; Principle of Stationary Complementary Energy  165
   7—6  Stability Criteria  169

8  Displacement Method—Ideal Truss  178
   8—1  General  178
   8—2  Operation on the Partitioned Equations  178
   8—4  Incremental Formulation; Classical Stability Criterion  191
   8—5  Linearized Stability Analysis  200

9  Force Method—Ideal Truss  210
   9—1  General  210
   9—2  Governing Equations—Algebraic Approach  211
   9—3  Governing Equations—Variational Approach  216
   9—4  Comparison of the Force and Mesh Methods  217

III—ANALYSIS OF A MEMBER ELEMENT

10  Governing Equations for a Deformable Solid  229
   10—1  General  229
   10—2  Summation Convention; Cartesian Tensors  230
   10—3  Analysis of Deformation; Cartesian Strains  232
   10—4  Analysis of Stress  240
   10—5  Elastic Stress-Strain Relations  248
   10—6  Principle of Virtual Displacements; Principle of Stationary Potential Energy; Classical Stability Criteria  253
   10—7  Principle of Virtual Forces; Principle of Stationary Complementary Energy  257

11  St. Venant Theory of Torsion-Flexure of Prismatic Members  271
   11—1  Introduction and Notation  271
   11—2  The Pure-Torsion Problem  273
   11—3  Approximate Solution of the Torsion Problem for Thin-Walled Open Cross Sections  281
   11—4  Approximate Solution of the Torsion Problem for Thin-Walled Closed Cross Sections  286
   11—5  Torsion-Flexure with Unrestrained Warping  293
   11—6  Exact Flexural Shear Stress Distribution for a Rectangular Cross Section  303
   11—7  Engineering Theory of Flexural Shear Stress Distribution in Thin-Walled Cross Sections  306

12  Engineering Theory of Prismatic Members  330
   12—1  Introduction  330
   12—3  Force-Displacement Relations; Principle of Virtual Forces  333
   12—4  Summary of the Governing Equations  339
   12—5  Displacement Method of Solution—Prismatic Member  340
   12—6  Force Method of Solution  349

13  Restrained Torsion-Flexure of a Prismatic Member  371
   13—1  Introduction  371
   13—2  Displacement Expansions; Equilibrium Equations  372
   13—3  Force-Displacement Relations—Displacement Model  375
   13—4  Solution for Restrained Torsion—Displacement Model  379
   13—5  Force-Displacement Relations—Mixed Formulation  383
   13—6  Solution for Restrained Torsion—Mixed Formulation  389
   13—7  Application to Thin-Walled Open Cross Sections  395
   13—8  Application to Thin-Walled Closed Cross Sections  405
   13—9  Governing Equations—Geometrically Nonlinear Restrained Torsion  414

14  Planar Deformation of a Planar Member  425
   14—1  Introduction; Geometrical Relations  425
   14—2  Force-Equilibrium Equations  427
   14—3  Force-Displacement Relations; Principle of Virtual Forces  429
   14—4  Force-Displacement Relations—Displacement Expansion Approach; Principle of Virtual Displacements  435
   14—5  Cartesian Formulation  445
   14—6  Displacement Method of Solution—Circular Member  449
   14—7  Force Method of Solution  458
   14—8  Numerical Integration Procedures  473

15  Engineering Theory of an Arbitrary Member  485
   15—1  Introduction; Geometrical Relations  485
   15—2  Force-Equilibrium Equations  488
   15—3  Force-Displacement Relations—Negligible Warping Restraint; Principle of Virtual Forces  490
   15—4  Displacement Method—Circular Planar Member  493
   15—5  Force Method—Examples  499
   15—6  Restrained Warping Formulation  507
   15—7  Member Force-Displacement Relations—Complete End Restraint  511
   15—9  Member Matrices—Prismatic Member  520
   15—10  Member Matrices—Thin Planar Circular Member  524
   15—11  Flexibility Matrix—Circular Helix  531
   15—12  Member Force-Displacement Relations—Partial End Restraint  535

IV—ANALYSIS OF A MEMBER SYSTEM

16  Direct Stiffness Method—Linear System  545
   16—1  Introduction  545
   16—2  Member Force-Displacement Relations  546
   16—3  System Equilibrium Equations  547
   16—4  Introduction of Joint Displacement Restraints  548

17  General Formulation—Linear System  554
   17—1  Introduction  554
   17—2  Member Equations  555
   17—3  System Force-Displacement Relations  557
   17—4  System Equilibrium Equations  559
   17—5  Introduction of Joint Displacement Restraints; Governing Equations  560
   17—6  Network Formulation  562
   17—7  Displacement Method  565
   17—8  Force Method  567
   17—9  Variational Principles  570
   17—10  Introduction of Member Deformation Constraints  573

18  Analysis of Geometrically Nonlinear Systems  585
   18—1  Introduction  585
   18—2  Member Equations—Planar Deformation  585
   18—3  Member Equations—Arbitrary Deformation  591
   18—4  Solution Techniques; Stability Analysis  597

Index  605

Part I

MATHEMATICAL PRELIMINARIES

1
Introduction to Matrix Algebra

1—1. DEFINITION OF A MATRIX

An ordered set of quantities may be a one-dimensional array, such as

$$a_1, a_2, \ldots, a_i, \ldots, a_n$$

or a two-dimensional array, such as

$$\begin{array}{cccc} a_{11}, & a_{12}, & \ldots, & a_{1n} \\ a_{21}, & a_{22}, & \ldots, & a_{2n} \\ \vdots & & & \vdots \\ a_{m1}, & a_{m2}, & \ldots, & a_{mn} \end{array}$$

In a two-dimensional array, the first subscript defines the row location of an element and the second subscript its column location. A two-dimensional array having m rows and n columns is called a matrix of order m by n if certain arithmetic operations (addition, subtraction, multiplication) associated with it are defined. The array is usually enclosed in square brackets and written as*

$$\mathbf{a} = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

Note that the first term in the order pertains to the number of rows and the second term to the number of columns. For convenience, we refer to the order of a matrix as simply m × n rather than of order m by n.

* In print, a matrix is represented by a boldfaced letter.


A matrix having only one row is called a row matrix. Similarly, a matrix having only one column is called a column matrix or column vector.* Braces instead of brackets are commonly used to denote a column matrix, and the column subscript is eliminated. Also, the elements are arranged horizontally instead of vertically, to save space. The various column-matrix notations are:

$$\begin{Bmatrix} c_{11} \\ c_{21} \\ \vdots \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ \vdots \end{Bmatrix} = \{c_1, c_2, \ldots\} = \{c_i\} = \mathbf{c}$$

If the number of rows and the number of columns are equal, the matrix is said to be square. (Special types of square matrices are discussed in a later section.) Finally, if all the elements are zero, the matrix is called a null matrix, and is represented by 0 (boldface, as in the previous case).

Example 1—1

3 × 4 Matrix

$$\begin{bmatrix} 4 & 2 & -1 & 2 \\ 3 & -7 & 1 & -8 \\ 2 & 4 & -3 & 1 \end{bmatrix}$$

1 × 3 Row Matrix

$$[3 \;\; 4 \;\; 2]$$

3 × 1 Column Matrix

$$\begin{Bmatrix} 3 \\ 4 \\ 2 \end{Bmatrix} \quad \text{or} \quad \{3, 4, 2\}$$

2 × 2 Square Matrix

$$\begin{bmatrix} 2 & 5 \\ 0 & 7 \end{bmatrix}$$

2 × 2 Null Matrix

$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

* This is the mathematical definition of a vector. In mechanics, a vector is defined as a quantity having both magnitude and direction. We will denote a mechanics vector quantity, such as force, with distinct notation; the mathematical definition is assumed in this text.


1—2. EQUALITY, ADDITION, AND SUBTRACTION OF MATRICES

Two matrices, a and b, are equal if they are of the same order and if corresponding elements are equal:

$$\mathbf{a} = \mathbf{b} \qquad \text{when } a_{ij} = b_{ij}$$

If a is of order m × n, the matrix equation a = b corresponds to mn equations:

$$a_{ij} = b_{ij} \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n$$

Addition and subtraction operations are defined only for matrices of the same order. The sum of two m × n matrices, a and b, is defined to be the m × n matrix

$$\mathbf{c} = \mathbf{a} + \mathbf{b} = [a_{ij} + b_{ij}]$$

Similarly,

$$\mathbf{d} = \mathbf{a} - \mathbf{b} = [a_{ij} - b_{ij}]$$

For example, if

$$\mathbf{a} = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} 3 & -1 \\ 1 & -1 \end{bmatrix}$$

then

$$\mathbf{a} + \mathbf{b} = \begin{bmatrix} 4 & 1 \\ 1 & 0 \end{bmatrix}$$

and

$$\mathbf{a} - \mathbf{b} = \begin{bmatrix} -2 & 3 \\ -1 & 2 \end{bmatrix}$$

It is obvious from the example that addition is commutative and associative:

$$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a} \tag{1—6}$$

$$\mathbf{a} + (\mathbf{b} + \mathbf{c}) = (\mathbf{a} + \mathbf{b}) + \mathbf{c} \tag{1—7}$$
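Because these operations are defined element by element, they translate directly into a few lines of code. The following is a minimal Python sketch of the definitions above (an illustration, with matrices stored as lists of rows; the numerical values are the ones used in the example):

```python
def order(a):
    # (number of rows, number of columns) of a matrix stored as a list of rows
    return (len(a), len(a[0]))

def add(a, b):
    # c_ij = a_ij + b_ij; defined only for matrices of the same order
    assert order(a) == order(b), "addition requires matrices of the same order"
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def sub(a, b):
    # d_ij = a_ij - b_ij
    assert order(a) == order(b), "subtraction requires matrices of the same order"
    return [[a[i][j] - b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

a = [[1, 2], [0, 1]]
b = [[3, -1], [1, -1]]
print(add(a, b))               # [[4, 1], [1, 0]]
print(sub(a, b))               # [[-2, 3], [-1, 2]]
print(add(a, b) == add(b, a))  # True: addition is commutative, per (1-6)
```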

1—3. MATRIX MULTIPLICATION

The product of a scalar k and a matrix a is defined to be the matrix ka = [ka_{ij}], in which each element of a is multiplied by k. For example, if

$$k = 5 \qquad \mathbf{a} = \begin{bmatrix} -2 & 2 \\ 7 & 1 \end{bmatrix}$$

then

$$k\mathbf{a} = \begin{bmatrix} -10 & 10 \\ +35 & 5 \end{bmatrix}$$


Scalar multiplication is commutative. That is,

$$k\mathbf{a} = \mathbf{a}k = [ka_{ij}]$$

To establish the definition of a matrix multiplied by a column matrix, we consider a system of m linear algebraic equations in n unknowns, x_1, x_2, ..., x_n:

$$\begin{array}{c} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = c_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = c_2 \\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = c_m \end{array} \tag{1—9}$$

This set can be written as

$$\sum_{k=1}^{n} a_{ik}x_k = c_i \qquad i = 1, 2, \ldots, m$$

where k is a dummy index. Using column matrix notation, (1—9) takes the form

$$\left\{\sum_{k=1}^{n} a_{ik}x_k\right\} = \{c_i\} \qquad i = 1, 2, \ldots, m \tag{1—10}$$

Now, we write (1—9) as a matrix product:

$$[a_{ij}]\{x_j\} = \{c_i\} \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n \tag{1—11}$$

Since (1—10) and (1—11) must be equivalent, it follows that the definition equation for a matrix multiplied by a column matrix is

$$\mathbf{ax} = [a_{ij}]\{x_j\} = \left\{\sum_{k=1}^{n} a_{ik}x_k\right\} \qquad i = 1, 2, \ldots, m \tag{1—12}$$

This product is defined only when the column order of a is equal to the row order of x. The result is a column matrix, the row order of which is equal to that of a. In general, if a is of order r × s, and x of order s × 1, the product ax is of order r × 1.

Example 1—2

$$\mathbf{a} = \begin{bmatrix} 1 & -1 \\ 8 & -4 \\ 0 & 3 \end{bmatrix} \qquad \mathbf{x} = \begin{Bmatrix} 2 \\ 3 \end{Bmatrix}$$

$$\mathbf{ax} = \begin{Bmatrix} (1)(2) + (-1)(3) \\ (8)(2) + (-4)(3) \\ (0)(2) + (3)(3) \end{Bmatrix} = \begin{Bmatrix} -1 \\ 4 \\ 9 \end{Bmatrix}$$
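Definition (1—12) is itself an algorithm: a loop over the rows of a, with an inner sum over the dummy index k. A minimal Python sketch (illustrative, using the matrices of Example 1—2):

```python
def mat_vec(a, x):
    # ax = {sum_k a_ik * x_k}, per (1-12);
    # requires the column order of a to equal the row order of x
    m, n = len(a), len(a[0])
    assert n == len(x), "column order of a must equal row order of x"
    return [sum(a[i][k] * x[k] for k in range(n)) for i in range(m)]

a = [[1, -1], [8, -4], [0, 3]]   # 3 x 2
x = [2, 3]                       # 2 x 1
print(mat_vec(a, x))             # [-1, 4, 9], a 3 x 1 column
```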


We consider next the product of two matrices. This product is associated with a linear transformation of variables. Suppose that the n original variables x_1, ..., x_n in (1—9) are expressed as a linear combination of s new variables y_1, y_2, ..., y_s:

$$x_k = \sum_{j=1}^{s} b_{kj}y_j \qquad k = 1, 2, \ldots, n \tag{1—13}$$

Substituting for x_k in (1—10),

$$\sum_{k=1}^{n} a_{ik}\left(\sum_{j=1}^{s} b_{kj}y_j\right) = c_i \qquad i = 1, 2, \ldots, m$$

Interchanging the order of summation, and letting

$$p_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, s \tag{1—14}$$

the transformed equations take the form

$$\sum_{j=1}^{s} p_{ij}y_j = c_i \qquad i = 1, 2, \ldots, m \tag{1—15}$$

Noting (1—12), we can write (1—15) as

$$\mathbf{py} = \mathbf{c} \tag{1—16}$$

where p is m × s and y is s × 1. Now, we also express the transformation of variables, in matrix form,

$$\mathbf{x} = \mathbf{by} \tag{1—17}$$

where b is n × s. Substituting for x in (1—11),

$$\mathbf{aby} = \mathbf{c} \tag{1—18}$$

and requiring (1—16) and (1—18) to be equivalent results in the following definition equation for the product, ab:

$$\mathbf{ab} = [a_{ik}][b_{kj}] = [p_{ij}] \qquad p_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \tag{1—19}$$

This product is defined only when the column order of a is equal to the row order of b. In general, if a is of order r × n, and b of order n × q, the product ab is of order r × q. The element at the ith row and jth column of the product is obtained by multiplying corresponding elements in the ith row of the first matrix and the jth column of the second, and summing the products.


Example 1—3

$$\mathbf{a} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \\ 0 & 2 \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} 1 & 1 & 0 & -1 \\ 0 & 1 & -1 & 3 \end{bmatrix}$$

$$\mathbf{ab} = \begin{bmatrix} (1)(1) + (0)(0) & (1)(1) + (0)(1) & (1)(0) + (0)(-1) & (1)(-1) + (0)(3) \\ (-1)(1) + (1)(0) & (-1)(1) + (1)(1) & (-1)(0) + (1)(-1) & (-1)(-1) + (1)(3) \\ (0)(1) + (2)(0) & (0)(1) + (2)(1) & (0)(0) + (2)(-1) & (0)(-1) + (2)(3) \end{bmatrix}$$

$$\mathbf{ab} = \begin{bmatrix} +1 & +1 & 0 & -1 \\ -1 & 0 & -1 & +4 \\ 0 & +2 & -2 & +6 \end{bmatrix}$$

If the product ab is defined, a and b are said to be conformable in the order stated. One should note that a and b will be conformable in either order only when a is m × n and b is n × m. In the previous example, a and b are conformable but b and a are not, since the product ba is not defined.

When the relevant products are defined, multiplication of matrices is associative,

$$\mathbf{a}(\mathbf{bc}) = (\mathbf{ab})\mathbf{c} \tag{1—20}$$

and distributive,

$$\mathbf{a}(\mathbf{b} + \mathbf{c}) = \mathbf{ab} + \mathbf{ac} \qquad (\mathbf{b} + \mathbf{c})\mathbf{a} = \mathbf{ba} + \mathbf{ca} \tag{1—21}$$

but, in general, not commutative,

$$\mathbf{ab} \neq \mathbf{ba} \tag{1—22}$$

Therefore, in multiplying b by a, one should distinguish premultiplication, ab, from postmultiplication, ba. For example, if a and b are square matrices of order 2, the products are

$$\mathbf{ab} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

$$\mathbf{ba} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} b_{11}a_{11} + b_{12}a_{21} & b_{11}a_{12} + b_{12}a_{22} \\ b_{21}a_{11} + b_{22}a_{21} & b_{21}a_{12} + b_{22}a_{22} \end{bmatrix}$$

When ab = ba, the matrices are said to commute or to be permutable.
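The definition (1—19), the conformability requirement, and the failure of commutativity can all be checked numerically. A minimal Python sketch (an illustration; the first pair of matrices is that of Example 1—3, the square pair is chosen here for illustration):

```python
def mat_mul(a, b):
    # ab = [p_ij], p_ij = sum_k a_ik * b_kj, per (1-19);
    # the column order of a must equal the row order of b
    n = len(a[0])
    assert n == len(b), "a and b are not conformable in this order"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1, 0], [-1, 1], [0, 2]]           # 3 x 2
b = [[1, 1, 0, -1], [0, 1, -1, 3]]      # 2 x 4
print(mat_mul(a, b))                    # the 3 x 4 result of Example 1-3
# mat_mul(b, a) would fail the assertion: ba is not defined here.

# Square matrices show that multiplication is generally not commutative:
p = [[2, 0], [0, -1]]
q = [[3, 1], [2, 7]]
print(mat_mul(p, q))   # [[6, 2], [-2, -7]]
print(mat_mul(q, p))   # [[6, -1], [4, -7]]
```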

1—4. TRANSPOSE OF A MATRIX

The transpose of a = [a_{ij}] is defined as the matrix obtained from a by interchanging the rows and columns, and is written aT = [a_{ij}^T]:

$$\mathbf{a} = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \qquad \mathbf{a}^T = [a_{ij}^T] = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix} \tag{1—23}$$

The element a_{ij}^T at the ith row and jth column of aT, where now i varies from 1 to n and j from 1 to m, is given by

$$a_{ij}^T = a_{ji} \tag{1—24}$$

where a_{ji} is the element at the jth row and ith column of a. For example,

$$\mathbf{a} = \begin{bmatrix} 3 & 2 \\ 7 & 1 \\ 5 & 4 \end{bmatrix} \qquad \mathbf{a}^T = \begin{bmatrix} 3 & 7 & 5 \\ 2 & 1 & 4 \end{bmatrix}$$

Since the transpose of a column matrix is a row matrix, an alternate notation for a row matrix is

$$[a_1, a_2, \ldots, a_n] = \mathbf{a}^T \tag{1—25}$$

We consider next the transpose matrix associated with the product of two matrices. Let

$$\mathbf{p} = \mathbf{ab} \tag{a}$$

where a is m × n and b is n × s. The product, p, is m × s and the element p_{ij} is

$$p_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, s \tag{b}$$

The transpose of p will be of order s × m and the typical element is

$$p_{ij}^T = p_{ji} \tag{c}$$

where now i = 1, 2, ..., s and j = 1, 2, ..., m. Using (1—24) and (b), we can write (c) as

$$p_{ij}^T = p_{ji} = \sum_{k=1}^{n} a_{jk}b_{ki} = \sum_{k=1}^{n} b_{ik}^T a_{kj}^T \tag{d}$$

It follows from (d) that

$$\mathbf{p}^T = (\mathbf{ab})^T = \mathbf{b}^T\mathbf{a}^T \tag{1—26}$$


In words, the transpose of a product is equal to the product of the transposed matrices in reversed order. This rule is also applicable to multiple products. For example, the transpose of abc is

$$(\mathbf{abc})^T = \mathbf{c}^T(\mathbf{ab})^T = \mathbf{c}^T\mathbf{b}^T\mathbf{a}^T \tag{1—27}$$

Example 1—4

$$\mathbf{ab} = \begin{Bmatrix} 4 \\ 13 \\ 6 \end{Bmatrix} \qquad (\mathbf{ab})^T = [4 \;\; 13 \;\; 6]$$

Alternatively, with b the column matrix such that b^T = [2  −1], carrying out the product

$$(\mathbf{ab})^T = \mathbf{b}^T\mathbf{a}^T = [2 \;\; -1]\,\mathbf{a}^T$$

term by term reproduces the same row matrix, [4  13  6].
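The reversal rule (1—26) is easy to verify numerically. A minimal Python sketch (an illustration; the matrix a is the 3 × 2 example of this section, and b here is an arbitrary conformable matrix chosen for the check):

```python
def transpose(a):
    # aT_ij = a_ji, per (1-24)
    return [[a[j][i] for j in range(len(a))] for i in range(len(a[0]))]

def mat_mul(a, b):
    # product rule (1-19)
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a = [[3, 2], [7, 1], [5, 4]]   # the 3 x 2 matrix transposed above
b = [[1, 0], [2, -1]]          # an arbitrary conformable 2 x 2 matrix (illustrative)

lhs = transpose(mat_mul(a, b))             # (ab)T
rhs = mat_mul(transpose(b), transpose(a))  # bT aT
print(lhs == rhs)                          # True, per (1-26)
```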

1—5. SPECIAL SQUARE MATRICES

If the numbers of rows and of columns are equal, the matrix is said to be square and of order n, where n is the number of rows. The elements a_{ii} (i = 1, 2, ..., n) lie on the principal diagonal. If all the elements except the principal-diagonal elements are zero, the matrix is called a diagonal matrix. We will use d for diagonal matrices. If the elements of a diagonal matrix are all unity, the diagonal matrix is referred to as a unit matrix. A unit matrix is usually indicated by I_n, where n is the order of the matrix.

Example 1—5

Square Matrix, Order 2

$$\begin{bmatrix} 1 & 7 \\ 3 & 2 \end{bmatrix}$$

Diagonal Matrix, Order 3

$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Unit Matrix, Order 2

$$\mathbf{I}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$


We introduce the Kronecker delta notation:

$$\delta_{ij} = 0 \quad i \neq j \qquad \delta_{ij} = +1 \quad i = j \tag{1—28}$$

With this notation, the unit matrix can be written as

$$\mathbf{I}_n = [\delta_{ij}] \qquad i, j = 1, 2, \ldots, n \tag{1—29}$$

Also, the diagonal matrix, d, takes the form

$$\mathbf{d} = [d_i\,\delta_{ij}] \tag{1—30}$$

where d_1, d_2, ..., d_n are the principal diagonal elements. If the principal diagonal elements are all equal to k, the matrix reduces to

$$\mathbf{d} = k[\delta_{ij}] = k\mathbf{I}_n \tag{1—31}$$

and is called a scalar matrix.

Let a be of order m × n. One can easily show that multiplication of a by a conformable unit matrix does not change a:

$$\mathbf{a}\mathbf{I}_n = \mathbf{I}_m\mathbf{a} = \mathbf{a} \tag{1—32}$$

A unit matrix is commutative with any square matrix of the same order. Similarly, two diagonal matrices of order n are commutative, and the product is a diagonal matrix of order n. Premultiplication of a by a conformable diagonal matrix d multiplies the ith row of a by d_i, and postmultiplication multiplies the jth column by d_j.

Example 1—6

$$\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 6 & 0 \\ 0 & -5 \end{bmatrix}$$

$$\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 2 & 7 \end{bmatrix} = \begin{bmatrix} 6 & 2 \\ -2 & -7 \end{bmatrix}$$

$$\begin{bmatrix} 3 & 1 \\ 2 & 7 \end{bmatrix}\begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 6 & -1 \\ 4 & -7 \end{bmatrix}$$
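The row- and column-scaling behavior of a diagonal matrix is a convenient numerical check on these statements. A minimal NumPy sketch (an illustration, reproducing the matrices of Example 1—6):

```python
import numpy as np

d = np.diag([2, -1])            # diagonal matrix with principal elements 2, -1
a = np.array([[3, 1], [2, 7]])

print(d @ a)   # premultiplication scales the rows:     [[ 6  2] [-2 -7]]
print(a @ d)   # postmultiplication scales the columns: [[ 6 -1] [ 4 -7]]
```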

A square matrix a for which a_{ij} = a_{ji} is called symmetrical and has the property that a = aT. If a_{ij} = −a_{ji} (i ≠ j) and the principal diagonal elements all equal zero, the matrix is said to be skew-symmetrical. In this case, aT = −a. Any square matrix can be reduced to the sum of a symmetrical matrix and a skew-symmetrical matrix:

$$\mathbf{a} = \mathbf{b} + \mathbf{c} \qquad b_{ij} = \tfrac{1}{2}(a_{ij} + a_{ji}) \qquad c_{ij} = \tfrac{1}{2}(a_{ij} - a_{ji})$$
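In matrix form the decomposition reads b = (a + aT)/2 and c = (a − aT)/2, and it is easily computed. A minimal NumPy sketch (an illustration, with an arbitrary square matrix):

```python
import numpy as np

a = np.array([[1., 4., 2.],
              [0., 3., 5.],
              [6., 1., 7.]])

b = 0.5 * (a + a.T)   # symmetrical part:      b == b.T
c = 0.5 * (a - a.T)   # skew-symmetrical part: c == -c.T, zero diagonal
print(np.allclose(a, b + c))   # True: a = b + c
print(np.allclose(c, -c.T))    # True: c is skew-symmetrical
```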


The product of two symmetrical matrices is symmetrical only when the matrices are commutative.* Finally, one can easily show that products of the type

$$\mathbf{a}^T\mathbf{a} \qquad \mathbf{a}\mathbf{a}^T \qquad \mathbf{a}^T\mathbf{b}\mathbf{a}$$

where a is an arbitrary matrix and b a symmetrical matrix, result in symmetrical matrices.

A square matrix having zero elements to the left (right) of the principal diagonal is called an upper (lower) triangular matrix. Examples are:

Upper Triangular Matrix

$$\begin{bmatrix} 3 & 5 & 2 \\ 0 & 7 & 1 \\ 0 & 0 & 4 \end{bmatrix}$$

Lower Triangular Matrix

$$\begin{bmatrix} 3 & 0 & 0 \\ 5 & 7 & 0 \\ 2 & 1 & 4 \end{bmatrix}$$

Triangular matrices are encountered in many of the computational procedures developed for linear systems. Some important properties of triangular matrices are:

1. The transpose of an upper triangular matrix is a lower triangular matrix, and vice versa.
2. The product of two triangular matrices of like structure is a triangular matrix of the same structure:

$$\begin{bmatrix} a_{11} & 0 \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & 0 \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & 0 \\ a_{21}b_{11} + a_{22}b_{21} & a_{22}b_{22} \end{bmatrix}$$

1—6. OPERATIONS ON PARTITIONED MATRICES

Operations on a matrix of high order can be simplified by considering the matrix to be divided into smaller matrices, called submatrices or cells. The partitioning is usually indicated by dashed lines. A matrix can be partitioned in a number of ways. For example,

$$\mathbf{a} = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right] \qquad \text{or} \qquad \mathbf{a} = \left[\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \end{array}\right]$$

Note that the partition lines are always straight and extend across the entire matrix. To reduce the amount of writing, the submatrices are represented by a single symbol. We will use uppercase letters to denote the submatrices whenever possible and omit the partition lines.

Example 1—7

We represent

$$\mathbf{a} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

as

$$\mathbf{a} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}$$

where

$$\mathbf{A}_{11} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \qquad \mathbf{A}_{12} = \begin{bmatrix} a_{13} \\ a_{23} \end{bmatrix} \qquad \mathbf{A}_{21} = [a_{31} \;\; a_{32}] \qquad \mathbf{A}_{22} = [a_{33}]$$

If two matrices of the same order are identically partitioned, the rules of matrix addition are applicable to the submatrices. Let

$$\mathbf{a} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} \mathbf{B}_{11} & \mathbf{B}_{12} \\ \mathbf{B}_{21} & \mathbf{B}_{22} \end{bmatrix} \tag{1—34}$$

where B_{ij} and A_{ij} are of the same order. The sum is

$$\mathbf{a} + \mathbf{b} = \begin{bmatrix} \mathbf{A}_{11} + \mathbf{B}_{11} & \mathbf{A}_{12} + \mathbf{B}_{12} \\ \mathbf{A}_{21} + \mathbf{B}_{21} & \mathbf{A}_{22} + \mathbf{B}_{22} \end{bmatrix} \tag{1—35}$$

The rules of matrix multiplication are applicable to partitioned matrices provided that the partitioned matrices are conformable for multiplication. In general, two partitioned matrices are conformable for multiplication if the partitioning of the rows of the second matrix is identical to the partitioning of the columns of the first matrix. This restriction allows us to treat the various submatrices as single elements provided that we preserve the order of multiplication. Let a and b be two partitioned matrices:

$$\mathbf{a} = [\mathbf{A}_{ij}] \qquad i = 1, 2, \ldots, M; \; j = 1, 2, \ldots, N$$
$$\mathbf{b} = [\mathbf{B}_{jk}] \qquad j = 1, 2, \ldots, N; \; k = 1, 2, \ldots, S$$

We can write the product as

$$\mathbf{c} = \mathbf{ab} = [\mathbf{C}_{ik}] \qquad \mathbf{C}_{ik} = \sum_{j=1}^{N} \mathbf{A}_{ij}\mathbf{B}_{jk} \qquad i = 1, \ldots, M; \; k = 1, \ldots, S \tag{1—36}$$


As an illustration, we consider the product

$$\mathbf{ab} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}$$

Suppose we partition a with a vertical partition between the second and third columns,

$$\mathbf{a} = \left[\begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right] = [\mathbf{A}_{11} \;\; \mathbf{A}_{12}]$$

For the rules of matrix multiplication to be applicable to the submatrices of a, we must partition b with a horizontal partition between the second and third rows. Taking

$$\mathbf{b} = \begin{bmatrix} \mathbf{B}_{11} \\ \mathbf{B}_{21} \end{bmatrix} \qquad \mathbf{B}_{11} = \begin{Bmatrix} b_1 \\ b_2 \end{Bmatrix} \quad \mathbf{B}_{21} = \{b_3\}$$

the product has the form

$$\mathbf{ab} = [\mathbf{A}_{11} \;\; \mathbf{A}_{12}]\begin{bmatrix} \mathbf{B}_{11} \\ \mathbf{B}_{21} \end{bmatrix} = \mathbf{A}_{11}\mathbf{B}_{11} + \mathbf{A}_{12}\mathbf{B}_{21}$$

The conformability of two partitioned matrices does not depend on the horizontal partitioning of the first matrix or the vertical partitioning of the second matrix. To show this, we consider the product

$$\mathbf{ab} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix}$$

Suppose we partition a with a horizontal partition between the second and third rows:

$$\mathbf{a} = \begin{bmatrix} \mathbf{A}_{11} \\ \mathbf{A}_{21} \end{bmatrix}$$

Since the column order of A11 and A21 is equal to the row order of b, no partitioning of b is required. The product is

$$\mathbf{ab} = \begin{bmatrix} \mathbf{A}_{11} \\ \mathbf{A}_{21} \end{bmatrix}\mathbf{b} = \begin{bmatrix} \mathbf{A}_{11}\mathbf{b} \\ \mathbf{A}_{21}\mathbf{b} \end{bmatrix}$$

As an alternative, we partition b with a vertical partition:

$$\mathbf{b} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} = [\mathbf{B}_{11} \;\; \mathbf{B}_{12}]$$

Since the row order of B11 and B12 is equal to the column order of a, no partitioning of a is necessary and the product has the form

$$\mathbf{ab} = \mathbf{a}[\mathbf{B}_{11} \;\; \mathbf{B}_{12}] = [\mathbf{a}\mathbf{B}_{11} \;\;\; \mathbf{a}\mathbf{B}_{12}]$$
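Relation (1—36) states that conformably partitioned matrices multiply block by block. A minimal NumPy sketch of the first illustration above (the particular 3 × 3 matrix is an arbitrary choice for the check):

```python
import numpy as np

a = np.arange(1., 10.).reshape(3, 3)   # an arbitrary 3 x 3 matrix
b = np.array([[1.], [2.], [3.]])       # a 3 x 1 column

A11, A12 = a[:, :2], a[:, 2:]          # vertical partition of a after column 2
B11, B21 = b[:2, :], b[2:, :]          # matching horizontal partition of b

full  = a @ b                          # unpartitioned product
block = A11 @ B11 + A12 @ B21          # A11 B11 + A12 B21, per (1-36)
print(np.allclose(full, block))        # True
```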

To transpose a partitioned matrix, one first interchanges the off-diagonal submatrices and then transposes each submatrix. If

$$\mathbf{a} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} & \cdots & \mathbf{A}_{1n} \\ \mathbf{A}_{21} & \mathbf{A}_{22} & \cdots & \mathbf{A}_{2n} \\ \vdots & & & \vdots \\ \mathbf{A}_{m1} & \mathbf{A}_{m2} & \cdots & \mathbf{A}_{mn} \end{bmatrix}$$

then

$$\mathbf{a}^T = \begin{bmatrix} \mathbf{A}_{11}^T & \mathbf{A}_{21}^T & \cdots & \mathbf{A}_{m1}^T \\ \mathbf{A}_{12}^T & \mathbf{A}_{22}^T & \cdots & \mathbf{A}_{m2}^T \\ \vdots & & & \vdots \\ \mathbf{A}_{1n}^T & \mathbf{A}_{2n}^T & \cdots & \mathbf{A}_{mn}^T \end{bmatrix}$$

A particular type of matrix encountered frequently is the quasi-diagonal matrix. This is a partitioned matrix whose diagonal submatrices are square of various orders, and whose off-diagonal submatrices are null matrices. An example is

$$\mathbf{a} = \begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{bmatrix}$$

which can be written in partitioned form as

$$\mathbf{a} = \begin{bmatrix} \mathbf{A}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{A}_2 \end{bmatrix}$$

where

$$\mathbf{A}_1 = [a_{11}] \qquad \mathbf{A}_2 = \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}$$

and 0 denotes a null matrix. The product of two quasi-diagonal matrices of like structure (corresponding diagonal submatrices are of the same order) is a quasi-diagonal matrix of the same structure:

$$\begin{bmatrix} \mathbf{A}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{A}_2 \end{bmatrix}\begin{bmatrix} \mathbf{B}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{A}_1\mathbf{B}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{A}_2\mathbf{B}_2 \end{bmatrix} \tag{1—40}$$

where A_i and B_i are of the same order. We use the term quasi to distinguish between partitioned and unpartitioned matrices having the same form.


1—7. DEFINITION AND PROPERTIES OF A DETERMINANT

The concept of a determinant was originally developed in connection with the solution of square systems of linear algebraic equations. To illustrate how this concept evolved, we consider the simple case of two equations:

$$a_{11}x_1 + a_{12}x_2 = c_1$$
$$a_{21}x_1 + a_{22}x_2 = c_2 \tag{a}$$

Solving (a) for x_1 and x_2, we obtain

$$(a_{11}a_{22} - a_{12}a_{21})x_1 = c_1a_{22} - c_2a_{12}$$
$$(a_{11}a_{22} - a_{12}a_{21})x_2 = -c_1a_{21} + c_2a_{11}$$

The scalar quantity a_{11}a_{22} − a_{12}a_{21} is defined as the determinant of the second-order square array a_{ij} (i, j = 1, 2). The determinant of an array (or matrix) is usually indicated by enclosing the array (or matrix) with vertical lines:

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = |\mathbf{a}| = a_{11}a_{22} - a_{12}a_{21} \tag{1—41}$$

We use the terms array and matrix interchangeably, since they are synonymous. Also, we refer to the determinant of an nth-order array as an nth-order determinant. It should be noted that determinants are associated only with square arrays, that is, with square matrices.

The determinant of a third-order array is defined as

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{array}{l} +a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} \\ -a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} \\ +a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} \end{array} \tag{1—42}$$

This number is the coefficient of x_1, x_2, and x_3 obtained when the third-order system ax = c is solved successively for x_1, x_2, and x_3. Comparing (1—41) and (1—42), we see that both expansions involve products which have the following properties:

1. Each product contains only one element from any row or column, and no element occurs twice in the same product. The products differ only in the column subscripts.
2. The sign of a product depends on the order of the column subscripts, e.g., +a_{11}a_{22}a_{33} and −a_{11}a_{23}a_{32}.

These properties are associated with the arrangement of the column subscripts and can be conveniently described using the concept of a permutation, which is discussed below.

A set of distinct integers is considered to be in natural order if each integer is followed only by larger integers. A rearrangement of the natural order is called a permutation; for example,


(1, 5, 3) is a permutation of (1, 3, 5). If an integer is followed by a smaller integer, the pair is said to form an inversion. The number of inversions for a set is defined as the sum of the inversions for each integer. As an illustration, we consider the set (3, 1, 4, 2). Working from left to right, the integer inversions are:

Integer     Inversions        Total
3           (3, 1), (3, 2)    2
1           None              0
4           (4, 2)            1
2           None              0
                              3

This set has three inversions. A permutation is classified as even (odd) if the total number of inversions for the set is an even (odd) integer. According to this convention, (1, 2, 3) and (3, 1, 2) are even permutations and (1, 3, 2) is an odd permutation. Instead of counting the inversions, we can determine the number of integer interchanges required to rearrange the set in its natural order, since an even (odd) number of interchanges corresponds to an even (odd) number of inversions. For example, (3, 2, 1) has three inversions and requires one interchange. Working with interchanges rather than inversions is practical only when the set is small.

Referring back to (1—41) and (1—42), we see that the column subscripts in each product are a permutation of the set of column subscripts and the sign is negative when the permutation is odd. The number of products is equal to the number of possible permutations of the column subscripts that can be formed. One can easily show that there are n! possible permutations for a set of n distinct integers.

We let (α_1, α_2, ..., α_n) be a permutation of the set (1, 2, ..., n) and define e_{α_1α_2...α_n} as

$$e_{\alpha_1\alpha_2\ldots\alpha_n} = \begin{cases} +1 & \text{when } (\alpha_1, \alpha_2, \ldots, \alpha_n) \text{ is an even permutation} \\ -1 & \text{when } (\alpha_1, \alpha_2, \ldots, \alpha_n) \text{ is an odd permutation} \end{cases} \tag{1—43}$$

Using (1—43), the definition equation for an nth-order determinant can be written as

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = \sum e_{\alpha_1\alpha_2\ldots\alpha_n}\, a_{1\alpha_1}a_{2\alpha_2}\cdots a_{n\alpha_n} \tag{1—44}$$

where the summation is taken over all possible permutations of (1, 2, ..., n).
18

INTRODUCTION TO MATRIX ALGEBRA

CHAP. 1

Example 1—8

The permutations for n = 3 are

a1=1

cxi1

= 2

z1=2

x23

1

a33

a32

= 3

a3=1

a32

a3=1

e123=+1

e132=—1

e231=+1

e312=+1

e321—-—1

Using (1—44), we obtain

a11

a21

a12

a22

a32

a13

a11a22a33 — a11a23a32

a23 = —a12a21a33 + a12a23a31

a33

+a13a21a32 — a13a22a31

This result coincides with (1—42).

The following properties of determinants can be established* from (1—44):

1. If all elements of any row (or column) are zero, the determinant is zero.

2. The value of the determinant is unchanged if the rows and columns are

interchanged; that is, aT! =

a!.

3. If two successive rows (or two successive columns) are interchanged, the

sign of the determinant is changed.

4. If all elements of one row (or one column) are multiplied by a number k,

the determinant is multiplied by k.

5. If corresponding elements of two rows (or two columns) are equal or in

a constant ratio, then the determinant is zero.

6. If each element in one row (or one column) is expressed as the sum of

two terms, then the determinant is equal to the sum of two determinants,

in each of which one of the two terms is deleted in each element of that

row (or column).

7. If to the elements of any row (column) are added k times the cor-

responding elements of any other row (column), the determinant is

unchanged.

We demonstrate these properties for the case of a second-order matrix. Let

a

The determinant is

= [a31

[a21

a22

a! = a11a22 — a12a21

Properties 1 and 2 are obvious. It follows from property 2 that laTl

a!.

We

SEC. 18.

COFACTOR EXPANSION FORMULA

illustrate the third by interchanging the rows of a:

a' = [a21

a22

a12

a'! = a21a12 — a11a22 = —Ia!

Property 4 is also obvious from (b). To demonstrate the fifth, we take

Then

Next, let

a21 = ka11

a! = a11(kaj2)

a11

+ c11

According to property 6,

where

ibi

b11

a21

al

b12

a22

a22 = ka12

a12(ka1j) = 0

a12 = b12 + c12

hi + ci

ci

=

a21

a22

This result can be obtained by substituting for O.ii and a12 in (b). illustrate property 7, we take

Finally, to

Then,

b12 = a12 + ka22 b21 = a21 b22 = a7,

ibi

=

(a11

+ ka21)a22 — (a12 + ka22)a21 =

a!

1-8.

COFACTOR EXPANSION FORMULA

If the row and column containing an element,

in the square matrix, a,

are deleted, the determinant of the remaining square array is called the minor

of

and is denoted by

the minor of

by

As an illustration, we take

The values of

and

The cofactor of

= (

328

a=

1

7

4

531

denoted by

associated with a23 and a22 are

is related to

(1—45)

M23. =

= —1

A23 =

( 1)5M23

=

+ 1

20

INTRODUCTION TO MATRIX ALGEBRA

CHAP. 1

Cofactors occur naturally when (.1 —44) is expanded9 in terms of the elements

of a row or column. This leads to the following expansion formula, called

Laplace's expansion by cofactors or simply Laplace's expansion:

=

a1kAIk

akJAkJ

(1 —46)

Equation (1—46) states that the determinant is equal to the sum of the products of the elements of any single row or column by their cofactors.

Since the determinant is zero if two rows or columns are identical, if follows

that

k1

k

I

= 0

0

s

(147)

The above identities are used to establish Cramer's rule in the following section.

Example 1—9

(1)

a11

(121

a31

We apply (1—46) to a third-order array and expand with respect to the first row:

a12

a23

a32

a13

a23

a33

=

2

a22

023

023

a22

a33

+

a31

(133

+

0j3(—

1)

035

a32

a11(a22a33 — a23a32) + a52(—a21a33 + a23a31) + a53(a21a32 022035)

To illustrate (1 —47), we take the cofactors for the first row and the elements of the second

row:

= a21(a22a33

a23a32) + a22(—a21a33 + a23a31) + a23(a21a32 a22a31)

0

(2)

Suppose the array is triangular in form, for example, lower triangular. Expanding

with respect to the first row, we have

 

0

0

a21

a22

0

= a11

031

a32

033

(122

032

0

033

= (a51)(a22a33) = a11a22a33

Generalizing this result, we find that the determinant of a triangular matrix is equal to

the product of the diagonal elements. This result is quite useful.

* See Probs. 1—20, 1—21.

f See Ref. 4, sect. 3

SEC. 1—9.

CRAMER'S RULE

The evaluation of a determinant, using the definition equation (1—44) or the cofactor expansion formula (1—46) is quite tedious, particularly when the array

is large. A number of alternate and more efficient numerical procedures for

evaluating determinants have been developed. These procedures are described

in References 9—13.

Suppose a square matrix, say c, is expressed as the product of two square

matrices,

c='ab

and we want cJ.

square matrices is equal to the product of the determinants:

It can be shown* that the determinant of the product of two

ci =

a!

hi

(1—48)

Whether we use (1—48) or first multiply a and b and then determine lab! depends

on the form and order of a and b.

is quite efficient. t

Example 110

Alternatively,

a! =

[1

31

5]

hi

c [11 [

29J

[1

a=[0

a! = 5 Determining c first, we obtain

19.

rs

= [5

121

20]

CRAMER'S RULE

If they are diagonal or triangular, (1—48)

=

and

31

5]

bi = 8

b

and

r2

Ic!

3

4

=

—20

cj =

r2

[1

0

—20

Ic! = +40

ci =

+40

We consider next a set of n equations in n unknowns:

=

j = 1, 2,

.

,

ii

(a)

22

INTRODUCTION TO MATRIX ALGEBRA

CHAP. 1

Multiplying both sides of (a) by Air, where r is an arbitrary integer from 1 to n,

and summing with respect to j, we obtain (after interchanging the order of

summation)

k1

Xk

Now, the inner sum vanishes when r

follows from (1—47). Then, (b) reduces to

lalxr =

=

j=1

k and equals al when r = k.

This

The expansion on the right side of (c) differs from the expansion

al

=

ajrAj.

only in that the rth column of a is replaced by c. Equation (c) leads to Cramer's rule, which can be stated as follows:

A set of n linear algebraic equations in n unknowns, ax = c, has a

unique solution when

0. The expression for Xr (r = 1, 2

n) is

the ratio of two determinants; the denominator is al and the numerator

is the determinant of the matrix obtained from a by replacing the rth

column by c.

If jaf = 0, a is said to be singular. Whether a solution exists in this ease will

depend on c.

it exists, will not be unique. Singular matrices and the question of solvability

All we can conclude from Cramer's rule is that the solution, if

are discussed in Sec. 1 —13.

110.

ADJOINT AND INVERSE MATRICES

We have shown in the previous section that the solution to a system of n

equations in n unknowns,

can be expressed as

i,j

1

1, 2,

1,

,

n

., ii

(note that we have taken r = I in Eq. c of Sec. 1—9). Using matrix notation,

(b) takes the form

[Au]T{cj}

SEC. 110.

ADJOINT AND INVERSE MATRICES

23

We define the adjoint and inverse matrices for the square matrix a of order n as

adjoint a = Adj a =

(1—49)

inverse a = a1

Adj a

(1—50)

Note that the inverse matrix is defined only for a nonsingular square matrix.

Example 1—11

We determine the adjoint and inverse matrices for

The matrix of cofactors is

Also, al = —25.

Then

123

a= 2

412

3

1

5

0

—10

—1

—10

+7

—7

+5

—1

 

5

—i

—7

Adja

0

—10 +5

10

+7 —1

=

 

1/5

+ 1/25

+7/25

—-- Adj a =

0

+ 2/5

1/5

a

+2/5

—7/25

+ 1/25

Using the inverse-matrix notation, we can write the solution of (a) as

x =

Substituting for x in (a) and c in (d), we see that a1 has the property that

a1a = aa' =

Equation (1—51) is frequently taken as the definition of the inverse matrix

instead of (1—50). Applying (1—48) to (i—Si), we obtain

It follows that (1—Si) is valid only when

matrix is analogous to division in ordinary algebra.

0. Multiplication by the inverse

If a is symmetrical,, then a

1 is also symmetrical. To show this, we take the

transpose of (1—5 1), and use the fact that a =. aT:

24

INTRODUCTION TO MATRIX ALGEBRA

CHAP. 1

Premultiplication by a' 1 results in

a"'

and therefore a1 is also symmetrical. One can also show* that, for any

nonsingular square matrix, the inverse and transpose operations can be inter-

changed:

bT,_t =

(1—52)

We consider next the inverse matrix associated with the product of two square

matrices. Let

c = ab

where a and b are both of order n x n and nonsingular. Premultiplication

by

and then b1 results in

a'c = b

(b'a'')c =

It follows from the definition of the inverse matrix that

(ab)1 =

(1—53)

In general, the inverse of a multiple matrix product is equal to the product of

the inverse matrices in reverse order. For example,

=

The determination of the inverse matrix using the definition equation (1 —50)

is too laborious when the order is large. A number of inversion procedures

based on (1—51) have been developed. These methods are described in Ref. 9—13.

1—11.

ELEMENTARY OPERATIONS ON A MATRIX

The elementary operations on a matrix are:'

1. The interchange of two rows or of two columns.

2. The multiplication of the elements of a row or a column by a number

other than zero.

3. The addition, to the elements of a row or column, of k times the cor-

responding element of another row or column.

These operations can be effected by premultiplying (for row operation) or

postmultiplying (for column operation) the matrix by an appropriate matrix,

called an elementary operation matrix.

We consider a matrix a of order

x n. Suppose that we want to interchange

rowsj and k. Then, we premultiply a by an rn x in matrix obtained by modifying

the mth-order unit matrix, I,,,, in the following way:

1.

<
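For instance, the elementary operation matrix for a row interchange is simply the unit matrix with the two rows interchanged, and premultiplying by it performs the interchange. A minimal NumPy sketch (an illustration; rows are numbered from zero here):

```python
import numpy as np

def row_interchange_matrix(m, j, k):
    # I_m with rows j and k interchanged
    e = np.eye(m)
    e[[j, k]] = e[[k, j]]
    return e

a = np.array([[3, 5, 2],
              [0, 7, 1],
              [0, 0, 4]])
e = row_interchange_matrix(3, 0, 2)
print(e @ a)   # premultiplication: rows 1 and 3 of a interchanged
print(a @ e)   # postmultiplication: columns 1 and 3 of a interchanged
```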