TOPIC 1
SYSTEMS OF LINEAR EQUATIONS AND MATRICES
Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces
(or linear spaces), linear transformations, and systems of linear equations in finite dimensions.
Vector spaces are a central theme in modern mathematics; thus, linear algebra is widely used in
both abstract algebra and functional analysis. Linear algebra also has a concrete representation
in analytic geometry and it is generalized in operator theory. It has extensive applications in the
natural sciences and social sciences, since nonlinear models can often be approximated by a
linear model.
In mathematics and linear algebra, a system of linear equations is a set of linear equations such
as
x1 + 4x2 + x3 = 6
3x1 + x2 + 2x3 = 7
2x1 + 2x2 + 3x3 = 3
A standard problem is to decide if any assignment of values for the unknowns can satisfy all
three equations simultaneously, and to find such an assignment if it exists. The existence of a
solution depends on the equations, and also on the available values (whether integers, real
numbers, and so on).
There are many different ways to solve systems of linear equations, such as substitution, elimination, matrix inversion and determinants. However, one of the most efficient methods is Gaussian elimination, carried out on the matrix of the system.
In general, a system of m linear equations in n unknowns can be written as
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm
where x1, x2, ..., xn are the unknowns and the numbers a11, a12, ..., amn are the coefficients of the system. In matrix form, the system is written as
Ax = b
where A is an m x n matrix, x is a column vector with n entries, and b is a column vector with m
entries. Gauss-Jordan elimination applies to all these systems, even if the coefficients come from
an arbitrary field.
If the field is infinite (as in the case of the real or complex numbers), then only the following three cases are possible (exactly one will be true): the system has no solution, the system has exactly one solution, or the system has infinitely many solutions.
A system of equations that has at least one solution is called consistent; if there is no solution it
is said to be inconsistent.
A system of the form Ax = 0 is called a homogeneous system of linear equations. The set of all solutions of such a homogeneous system is called the nullspace of the matrix A.
Example 1
If the system is homogeneous and at least one xi ≠ 0, then we have a nontrivial solution.
Because a homogeneous linear system always has the trivial solution, there are only two possibilities for its solutions:
(a) The system has only the trivial solution.
(b) The system has infinitely many solutions in addition to the trivial solution.
(Anton, 2005)
Theorem
A homogeneous system of linear equations with more unknowns than equations has
infinitely many solutions.
Augmented Matrices
Elementary row operations are used to reduce an augmented matrix (or any matrix) to row-echelon form (REF) or reduced row-echelon form (RREF). A matrix is in row-echelon form if it has the following properties:
1. All zero rows (consisting entirely of zeros) are at the bottom of the matrix.
2. The first nonzero number in each nonzero row is a 1, called the leading 1 for that row.
3. Each leading 1 is to the right of all leading 1s in the rows above it.
A row-echelon matrix is said to be in reduced row-echelon form if, in addition, it satisfies the following condition:
4. Each column that contains a leading 1 has zeros everywhere else.
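The row-echelon conditions above can be stated as a short checker. The helper below is our own illustrative sketch, not part of the notes; it tests the three REF conditions for a matrix given as a list of rows.

```python
# Hypothetical helper (not from the notes): checks the three row-echelon
# conditions for a matrix given as a list of rows.
def is_row_echelon(rows):
    last_lead = -1          # column of the previous row's leading 1
    seen_zero_row = False
    for row in rows:
        nz = [j for j, v in enumerate(row) if v != 0]
        if not nz:                      # condition 1: zero rows at the bottom
            seen_zero_row = True
            continue
        if seen_zero_row:
            return False                # nonzero row found below a zero row
        lead = nz[0]
        if row[lead] != 1:              # condition 2: leading entry must be 1
            return False
        if lead <= last_lead:           # condition 3: leading 1 moves right
            return False
        last_lead = lead
    return True

print(is_row_echelon([[1, 2, 1, 3], [0, 1, 1, 2], [0, 0, 1, 1]]))  # True
print(is_row_echelon([[0, 0, 1, 4], [1, 0, 0, 3]]))                # False
```

Adding a check of condition 4 (pivot columns otherwise zero) would extend this to a reduced row-echelon test.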
Example 2
In each part, determine whether the matrix is in row echelon form, reduced row echelon form or
neither.
(a) [1 2 1 3; 0 1 1 2; 0 0 1 1]    (b) [1 0 0 3; 0 0 0 0; 0 0 1 4]    (c) [0 1 0 3; 0 0 1 1; 0 0 0 1]
(d), (e), (f): [matrices not legible in this copy]
(g) [1 1 0 0; 0 0 1 0; 0 0 0 0]    (h) [1 0 0 3; 0 1 0 2; 0 0 1 1]    (i) [1 0 0 3; 0 1 0 2; 0 1 1 1]
Example 3
Suppose that the augmented matrix for a system of linear equations has been reduced by row
operations to the given reduced row-echelon form. Then solve the system.
(a) [1 0 0 |  5]
    [0 1 0 | -2]
    [0 0 1 |  4]
x1 = 5, x2 = -2, x3 = 4

(b) [1 0 0 4 | -1]
    [0 1 0 2 |  6]
    [0 0 1 3 |  2]
x1 + 4x4 = -1, x2 + 2x4 = 6, x3 + 3x4 = 2, so x3 = 2 - 3x4, x2 = 6 - 2x4, x1 = -1 - 4x4.
Since the 4th column has no leading 1, x4 is called a free variable, which can be assigned a parameter; parameters take arbitrary values. The remaining variables (x1, x2, x3) are called leading variables.
Thus, with x4 = t: x3 = 2 - 3t, x2 = 6 - 2t, x1 = -1 - 4t.
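As a numerical spot-check of the parametric solution in part (b) (rows and signs as reconstructed above), every choice of the parameter t should satisfy the reduced system:

```python
import numpy as np

# RREF coefficient rows and right-hand side from part (b), as reconstructed.
R = np.array([[1., 0, 0, 4],
              [0, 1, 0, 2],
              [0, 0, 1, 3]])
b = np.array([-1., 6, 2])

# The one-parameter family of solutions: x4 = t is free.
for t in (-2.0, 0.0, 3.5):
    x = np.array([-1 - 4*t, 6 - 2*t, 2 - 3*t, t])
    assert np.allclose(R @ x, b)       # every t gives a solution
print("parametric solution verified")
```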
(c) [1 6 0 0 4 | -2]
    [0 0 1 0 3 |  1]
    [0 0 0 1 5 |  2]
    [0 0 0 0 0 |  0]
Use the first 3 rows to solve the system of linear equations.
The leading 1s occur in positions (row 1, column 1), (row 2, column 3) and (row 3, column 4); these are the pivot positions. A column that contains a pivot position is called a pivot column. Thus the pivot columns are columns 1, 3 and 4.
(d) [1 0 0 | 3]
    [0 1 0 | 2]
    [0 0 0 | 1]
The last row reads 0 = 1, so the system has no solution.
Example 4
Solve the system by Gaussian Elimination
1.  x1 + 2x2 + 3x3 = 5
    2x1 + 5x2 + 3x3 = 3
    x1 + 8x3 = 17
Solution
Gaussian Elimination
[1 2 3 |  5]
[2 5 3 |  3]
[1 0 8 | 17]

R2 -> R2 - 2R1, R3 -> R3 - R1:
[1  2  3 |  5]
[0  1 -3 | -7]
[0 -2  5 | 12]

R3 -> R3 + 2R2:
[1 2  3 |  5]
[0 1 -3 | -7]
[0 0 -1 | -2]

R3 -> -R3:
[1 2  3 |  5]
[0 1 -3 | -7]
[0 0  1 |  2]

x1 + 2x2 + 3x3 = 5,  x2 - 3x3 = -7,  x3 = 2

Back Substitution:
x2 = -7 + 3(2) = -1
x1 = 5 - 2(-1) - 3(2) = 1
x1 = 1, x2 = -1, x3 = 2
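As a cross-check (coefficients and signs as reconstructed above), numpy's built-in solver gives the same answer for system 1:

```python
import numpy as np

# Coefficient matrix and right-hand side of Example 4.1, as reconstructed.
A = np.array([[1., 2, 3],
              [2., 5, 3],
              [1., 0, 8]])
b = np.array([5., 3, 17])

x = np.linalg.solve(A, b)   # library routine; same answer as hand elimination
print(x)                    # approximately [ 1. -1.  2.]
```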
2.  2x1 + 2x2 + 2x3 = 4
    3x1 - 2x2 - x3 = 3
    4x1 - 3x2 - 2x3 = 3
Solution
[2  2  2 | 4]
[3 -2 -1 | 3]
[4 -3 -2 | 3]

R1 -> (1/2)R1:
[1  1  1 | 2]
[3 -2 -1 | 3]
[4 -3 -2 | 3]

R2 -> R2 - 3R1, R3 -> R3 - 4R1:
[1  1  1 |  2]
[0 -5 -4 | -3]
[0 -7 -6 | -5]

R2 -> (-1/5)R2, R3 -> (-1/7)R3:
[1 1   1 |   2]
[0 1 4/5 | 3/5]
[0 1 6/7 | 5/7]

R3 -> R3 - R2:
[1 1    1 |    2]
[0 1  4/5 |  3/5]
[0 0 2/35 | 4/35]

R3 -> (35/2)R3:
[1 1   1 |   2]
[0 1 4/5 | 3/5]
[0 0   1 |   2]

x1 + x2 + x3 = 2,  x2 + (4/5)x3 = 3/5,  x3 = 2

Back-Substitution:
x2 = 3/5 - (4/5)(2) = -1
x1 = 2 - (-1) - 2 = 1
x1 = 1, x2 = -1, x3 = 2
Example 5
Solve the systems by Gauss-Jordan elimination.
(a)  x1 + 2x2 + 3x3 = 5
     2x1 + 5x2 + 3x3 = 3
     x1 + 8x3 = 17
Solution
[1 2 3 |  5]
[2 5 3 |  3]
[1 0 8 | 17]

R2 -> R2 - 2R1, R3 -> R3 - R1, then R3 -> R3 + 2R2 and R3 -> -R3 (as in Example 4):
[1 2  3 |  5]
[0 1 -3 | -7]
[0 0  1 |  2]

R1 -> R1 - 3R3, R2 -> R2 + 3R3:
[1 2 0 | -1]
[0 1 0 | -1]
[0 0 1 |  2]

R1 -> R1 - 2R2:
[1 0 0 |  1]
[0 1 0 | -1]
[0 0 1 |  2]

x1 = 1, x2 = -1, x3 = 2
(b)  2x1 + 2x2 + 2x3 = 4
     3x1 - 2x2 - x3 = 3
     4x1 - 3x2 - 2x3 = 3
Solution
As in Example 4, the operations R1 -> (1/2)R1; R2 -> R2 - 3R1, R3 -> R3 - 4R1; R2 -> (-1/5)R2, R3 -> (-1/7)R3; R3 -> R3 - R2; R3 -> (35/2)R3 give
[1 1   1 |   2]
[0 1 4/5 | 3/5]
[0 0   1 |   2]

R1 -> R1 - R3, R2 -> R2 - (4/5)R3:
[1 1 0 |  0]
[0 1 0 | -1]
[0 0 1 |  2]

R1 -> R1 - R2:
[1 0 0 |  1]
[0 1 0 | -1]
[0 0 1 |  2]

x1 = 1, x2 = -1, x3 = 2
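The Gauss-Jordan procedure used in Examples 4 and 5 can be sketched as a small routine. This is an illustrative implementation of our own (it adds partial pivoting, which the hand computations above do not use), not code from the notes:

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form."""
    M = aug.astype(float).copy()
    m, n = M.shape
    r = 0                                      # current pivot row
    for c in range(n - 1):                     # last column is the RHS
        p = r + np.argmax(np.abs(M[r:, c]))    # choose a pivot (partial pivoting)
        if np.isclose(M[p, c], 0):
            continue                           # no pivot in this column
        M[[r, p]] = M[[p, r]]                  # interchange rows
        M[r] /= M[r, c]                        # scale the pivot to a leading 1
        for i in range(m):
            if i != r:
                M[i] -= M[i, c] * M[r]         # clear the rest of the column
        r += 1
        if r == m:
            break
    return M

# Example 5(a)'s augmented matrix (as reconstructed above):
aug = np.array([[1, 2, 3, 5],
                [2, 5, 3, 3],
                [1, 0, 8, 17]])
print(gauss_jordan(aug))    # last column is approximately [1, -1, 2]
```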
Example 6
(a) Solve  x + y + z = 0
           x + 2y - z = 0
           3x + 4y + z = 0
Solution
[1 1  1 | 0]
[1 2 -1 | 0]
[3 4  1 | 0]

R2 -> R2 - R1, R3 -> R3 - 3R1:
[1 1  1 | 0]
[0 1 -2 | 0]
[0 1 -2 | 0]

R3 -> R3 - R2:
[1 1  1 | 0]
[0 1 -2 | 0]
[0 0  0 | 0]

Since the last row consists entirely of zeros, the homogeneous system has nontrivial solutions (infinitely many solutions):
x1 + x2 + x3 = 0 and x2 - 2x3 = 0
Let x3 = t:
x2 - 2t = 0, so x2 = 2t
x1 + 2t + t = 0, so x1 = -3t
x1 = -3t, x2 = 2t, x3 = t
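The solution family lies in the nullspace of the coefficient matrix, which can be confirmed numerically (coefficients and signs as reconstructed above):

```python
import numpy as np

# Coefficient matrix of the homogeneous system in part (a), as reconstructed.
A = np.array([[1., 1, 1],
              [1., 2, -1],
              [3., 4, 1]])

# Every multiple of (-3, 2, 1) should be sent to the zero vector.
for t in (1.0, -2.0, 0.5):
    x = t * np.array([-3., 2, 1])
    assert np.allclose(A @ x, 0)
print("nullspace vectors verified")
```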
(b) Solve  x + 2y + 5z = 0
           3x + 2y + 2z = 0
           4x + 4y + 5z = 0
Solution
[1 2 5 | 0]
[3 2 2 | 0]
[4 4 5 | 0]

R2 -> R2 - 3R1, R3 -> R3 - 4R1:
[1  2   5 | 0]
[0 -4 -13 | 0]
[0 -4 -15 | 0]

R3 -> R3 - R2:
[1  2   5 | 0]
[0 -4 -13 | 0]
[0  0  -2 | 0]

R2 -> (-1/4)R2, R3 -> (-1/2)R3:
[1 2    5 | 0]
[0 1 13/4 | 0]
[0 0    1 | 0]

R2 -> R2 - (13/4)R3, R1 -> R1 - 5R3, then R1 -> R1 - 2R2:
[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]

The coefficient matrix reduces to the identity, so this homogeneous system has only the trivial solution x = y = z = 0.
Summary
Consider a general 3x3 system with augmented matrix
[a1 b1 c1 | d1]
[a2 b2 c2 | d2]
[a3 b3 c3 | d3]
for variables x, y and z. Reduce the form using the elementary row operations to the form
[a b c | d]
[0 e f | g]
[0 0 h | i]
In this form, the system can be easily solved as follows:
If h ≠ 0, we can determine z = i/h uniquely, and likewise y and x from the other two rows. Thus, we arrive at a unique solution.
If h = 0 and i ≠ 0, the last row reads 0z = i, which is absurd. Thus, there is no solution.
If h = 0 and i = 0, the last row is all zeros. Consequently, there are infinitely many solutions, which can be written in terms of a free variable called a parameter. For example, letting z = t, we can write y and x in terms of t, where t is an arbitrary real number.
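This trichotomy can also be detected numerically by comparing the ranks of the coefficient and augmented matrices. The helper below is our own sketch, not part of the notes:

```python
import numpy as np

# Our own helper: classify a system Ax = b into the three cases of the Summary.
def classify(A, b):
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    rA = np.linalg.matrix_rank(A)
    rAug = np.linalg.matrix_rank(aug)
    if rA < rAug:
        return "no solution"          # a row reads 0 = i with i nonzero
    if rA == A.shape[1]:
        return "unique solution"      # every variable is pinned down (h != 0 case)
    return "infinitely many"          # free variables remain (h = 0, i = 0 case)

print(classify([[1, 2, 3], [2, 5, 3], [1, 0, 8]], [5, 3, 17]))  # unique solution
print(classify([[1, 1], [2, 2]], [1, 3]))                        # no solution
print(classify([[1, 1], [2, 2]], [1, 2]))                        # infinitely many
```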
Matrix
Definition
A Matrix is a rectangular array of numbers. The numbers in the array are called the entries
in the matrix.
A 1 x n matrix (one row and n columns) is called a row vector, and an m x 1 matrix (one column and m rows) is called a column vector. Row and column vectors are of special importance, and they are denoted by lowercase boldface letters.
Example 7
a) The row vector a = [2 4 1] is a 1 x 3 matrix, and the column vector b with entries 1, 2, 0, 4 (top to bottom) is a 4 x 1 matrix.
b) Matrices: A = [2 0 1; 5 1 3] (a 2 x 3 matrix) or B = [7 6; 0 1; 2 3] (a 3 x 2 matrix)
A square matrix is a matrix which has the same number of rows and columns.
It is called a square matrix of order n, and the entries a11, a22, ..., ann form the main diagonal of A.
Operations on Matrices
Definition
Two matrices are defined to be equal if they have the same size and their corresponding
entries are equal.
In matrix notation, if A = [aij] and B = [bij] have the same size, then A = B if and only if (A)ij = (B)ij for all i and j.
Example 9
[a 0 2]   [1 c 2]
[3 1 b] = [d 1 8]
[2 1 5]   [e f 5]
gives a = 1, b = 8, c = 0, d = 3, e = 2 and f = 1.
Definition
If A and B are matrices of equal sizes, then the sum A+ B is the matrix obtained by adding
their corresponding entries, and the difference A B is the matrix obtained by subtracting
their corresponding entries. Matrices of different sizes cannot be added or subtracted.
(A + B)ij = (A)ij + (B)ij = aij + bij        (A - B)ij = (A)ij - (B)ij = aij - bij
Example 10
(a) Addition
[ 1  0 -2]   [1 2 3]   [2 2  1]
[ 3  1 -3] + [4 5 6] = [7 6  3]
[-2 -1  5]   [7 8 9]   [5 7 14]
(b) Subtraction
[ 1  0 -2]   [1 2 3]   [ 0 -2 -5]
[ 3  1 -3] - [4 5 6] = [-1 -4 -9]
[-2 -1  5]   [7 8 9]   [-9 -9 -4]
Scalar Multiples
Definition
If A is any matrix and c is any scalar, then the product cA is the matrix obtained by
multiplying each entry of the matrix A by c. The matrix cA is said to be a scalar multiple of
A.
(cA)ij = c(A)ij = caij
For example, using the matrix A = [1 0 -2; 3 1 -3; -2 -1 5] from Example 10:
2A = [2 0 -4; 6 2 -6; -4 -2 10]  and  (-1)A = [-1 0 2; -3 -1 3; 2 1 -5]
Multiplying Matrices
Definition
If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix (m rows, p columns) whose (i, j) entry is given by
(AB)ij = ai1 b1j + ai2 b2j + ... + ain bnj
Example 12
Let A = [3 1; -1 1] and B = [1 2; 4 3]. Compute AB and BA. Hence comment on the order of the matrix multiplication.
Solution:
AB = [3 1; -1 1][1 2; 4 3] = [7 9; 3 1]  whilst  BA = [1 2; 4 3][3 1; -1 1] = [1 3; 9 7]
Since AB ≠ BA, the order of multiplication matters: matrix multiplication is not commutative in general.
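Non-commutativity is easy to confirm numerically; the 2 x 2 matrices below are the ones reconstructed for Example 12:

```python
import numpy as np

# Matrix multiplication is not commutative in general.
A = np.array([[3, 1], [-1, 1]])
B = np.array([[1, 2], [4, 3]])

print(A @ B)    # [[7 9]
                #  [3 1]]
print(B @ A)    # [[1 3]
                #  [9 7]]
assert not np.array_equal(A @ B, B @ A)
```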
(a) A+0=0+A=A
(b) A-A=0
(c) 0 - A = -A
(d) A0 = 0 ; 0A =0
Identity Matrices:
I2 = [1 0; 0 1],   I3 = [1 0 0; 0 1 0; 0 0 1],   I4 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
Theorem
If R is the reduced row-echelon form of an n x n matrix A, then either R has a row of zeros
or R is the identity matrix In.
Inverse of A
Definition
If A is a square matrix, and if a matrix B of the same size can be found such that
AB = BA= I, then A is said to be invertible and B is called an inverse of A.
Example 14
Consider A = [2 3; 5 8] and B = [8 -3; -5 2]. Find AB and BA.
Solution
AB = [2 3; 5 8][8 -3; -5 2] = [1 0; 0 1]
BA = [8 -3; -5 2][2 3; 5 8] = [1 0; 0 1]
Since AB = BA = I, the matrix B is an inverse of A.
Theorem
The matrix A = [a b; c d] is invertible if ad - bc ≠ 0, in which case the inverse is given by the formula
A^-1 = 1/(ad - bc) [d -b; -c a]
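The 2 x 2 formula can be written as a small helper (the function name is ours) and checked against the matrix from Example 14:

```python
import numpy as np

# The 2x2 inverse formula from the theorem above (helper name is ours).
def inv2x2(a, b, c, d):
    det = a*d - b*c
    if det == 0:
        raise ValueError("matrix is not invertible")   # ad - bc must be nonzero
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2, 3], [5, 8]])       # matrix from Example 14
Ainv = inv2x2(2, 3, 5, 8)
print(Ainv)                          # [[ 8. -3.]
                                     #  [-5.  2.]]
assert np.allclose(A @ Ainv, np.eye(2))
```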
Properties of Inverses
Example 15
Find the inverse of A = [1 2; 4 3].
Solution:
A^-1 = 1/(3 - 8) [3 -2; -4 1] = -(1/5) [3 -2; -4 1] = [-3/5 2/5; 4/5 -1/5]
Power of a Matrix
Definition
If A is a square matrix, then we define the nonnegative integer powers of A to be
a. A^0 = I
b. A^n = A A ... A (n factors), for n > 0
c. A^-n = (A^-1)^n = A^-1 A^-1 ... A^-1 (n factors)
Laws of Exponents
a. If A is a square matrix and r and s are integers, then A^r A^s = A^(r+s).
b. (A^r)^s = A^(rs)
c. If A is an invertible matrix, then A^-1 is invertible and (A^-1)^-1 = A.
d. If A is an invertible matrix, then A^n is invertible and (A^n)^-1 = (A^-1)^n for n = 1, 2, ...
e. For any nonzero scalar k, the matrix kA is invertible and (kA)^-1 = (1/k) A^-1.
Example 16
With A = [1 2; 4 3] as in Example 15, find A^-2.
Solution:
A^-2 = (A^-1)^2 = [-(1/5) [3 -2; -4 1]]^2 = (1/25) [3 -2; -4 1]^2 = (1/25) [17 -8; -16 9]
Definition
If A is any n x m matrix, then the transpose of A, denoted by AT, is defined to be the m x n
matrix that results from interchanging the rows and columns of A; that is, the first column of
AT is the first row of A, the second column of AT is the second row of A, and so forth.
Example 17
1. [1 0 1; 2 3 0; 0 4 1]^T = [1 2 0; 0 3 4; 1 0 1]
2. [1 2 3; 4 5 6]^T = [1 4; 2 5; 3 6]
Observe that AT for a square matrix A can also be obtained by reflecting A about its main
diagonal.
If the sizes of the matrices are such that the stated operations can be performed, then
(a) (A^T)^T = A
(b) (A + B)^T = A^T + B^T and (A - B)^T = A^T - B^T
(c) (kA)^T = k A^T, for any scalar k
(d) (AB)^T = B^T A^T
Invertibility of a Transpose
Theorem
If A is an invertible matrix, then A^T is also invertible and (A^T)^-1 = (A^-1)^T.
Example 18
Let A = [3 4; 2 3] and B = [2 3; 1 1]. Find
(a) A^-1
(b) (A^T)^-1
(c) (2B)^-1
(d) (3B)^T
Solution:
(a) A^-1 = [3 -4; -2 3]
(b) (A^T)^-1 = (A^-1)^T = [3 -2; -4 3]
(c) (2B)^-1 = (1/2) B^-1 = (1/2) [-1 3; 1 -2] = [-1/2 3/2; 1/2 -1]
(d) (3B)^T = 3 B^T = 3 [2 1; 3 1] = [6 3; 9 3]
Elementary Matrices
Definition
An n x n matrix is called an elementary matrix if it can be obtained from the n x n identity
matrix In by performing a single elementary row operation
Example 19
Performing the single row operation R3 -> R3 + R1 on I gives an elementary matrix E:
I = [1 0 0; 0 1 0; 0 0 1],   E = [1 0 0; 0 1 0; 1 0 1]
Row operation on I giving E                        Inverse operation giving E^-1
Multiply row i by c ≠ 0 (e.g. R1 -> 3R1)           Multiply row i by 1/c (e.g. R1 -> (1/3)R1)
Interchange rows i and j (e.g. R2 <-> R3)          Interchange rows i and j (e.g. R2 <-> R3)
Add c times row i to row j (e.g. R1 -> R1 + 2R3)   Add -c times row i to row j (e.g. R1 -> R1 - 2R3)
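The pairing of each elementary matrix with its inverse can be checked numerically; this is an illustrative sketch of our own, not code from the notes:

```python
import numpy as np

# Applying a row operation to I gives an elementary matrix E; applying the
# inverse operation gives E^-1 (illustrating the table above).
I = np.eye(3)

E = I.copy()
E[0] += 2 * E[2]                 # R1 -> R1 + 2*R3 applied to I

Einv = np.eye(3)
Einv[0] -= 2 * Einv[2]           # inverse operation: R1 -> R1 - 2*R3

assert np.allclose(E @ Einv, np.eye(3))

# Left-multiplying by E performs the same row operation on any matrix A:
A = np.arange(9.0).reshape(3, 3)
assert np.allclose((E @ A)[0], A[0] + 2 * A[2])
print("elementary matrix checks passed")
```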
Example 20
(a) Consider the matrices A = [1 4 6; 0 0 1; 2 10 9] and B = [1 4 6; 0 0 1; 0 2 -3].
Find an elementary matrix E satisfying EA = B.
(b) Consider the matrices A = [1 -3 0; 0 1 3; 2 -10 2] and B = [1 -3 0; 0 1 3; 0 0 14].
Find elementary matrices E1 and E2 satisfying E2 E1 A = B.
Solution
(a) A is carried to B by the row operation R3 -> R3 - 2R1. Applying the same operation to I gives
E = [1 0 0; 0 1 0; -2 0 1]
(b)
A = [1 -3 0; 0 1 3; 2 -10 2] --(R3 -> R3 - 2R1)--> [1 -3 0; 0 1 3; 0 -4 2] --(R3 -> R3 + 4R2)--> [1 -3 0; 0 1 3; 0 0 14] = B
Thus,
I = [1 0 0; 0 1 0; 0 0 1] --(R3 -> R3 - 2R1)--> E1 = [1 0 0; 0 1 0; -2 0 1]
I = [1 0 0; 0 1 0; 0 0 1] --(R3 -> R3 + 4R2)--> E2 = [1 0 0; 0 1 0; 0 4 1]
Theorem
Every elementary matrix is invertible, and the inverse is also an elementary matrix
Inversion Algorithm: to find the inverse of an invertible matrix A, row reduce the augmented matrix [A | I] until the left side becomes the identity; the right side is then A^-1.
Example 21
Find the inverse of A = [1 2 3; 4 5 6; 3 1 -2].
Solution
[1 2  3 | 1 0 0]
[4 5  6 | 0 1 0]
[3 1 -2 | 0 0 1]

R2 -> R2 - 4R1, R3 -> R3 - 3R1:
[1  2   3 |  1 0 0]
[0 -3  -6 | -4 1 0]
[0 -5 -11 | -3 0 1]

R3 -> 3R3 - 5R2:
[1  2  3 |  1  0 0]
[0 -3 -6 | -4  1 0]
[0  0 -3 | 11 -5 3]

R1 -> R1 + R3, R2 -> R2 - 2R3:
[1  2  0 |  12  -5  3]
[0 -3  0 | -26  11 -6]
[0  0 -3 |  11  -5  3]

R2 -> (-1/3)R2, R3 -> (-1/3)R3:
[1 2 0 |    12    -5  3]
[0 1 0 |  26/3 -11/3  2]
[0 0 1 | -11/3   5/3 -1]

R1 -> R1 - 2R2:
[1 0 0 | -16/3   7/3 -1]
[0 1 0 |  26/3 -11/3  2]
[0 0 1 | -11/3   5/3 -1]

Thus,
A^-1 = (1/3) [-16 7 -3; 26 -11 6; -11 5 -3]
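The inversion algorithm lends itself to a direct numerical sketch: row reduce [A | I] and read off A^-1 from the right half. The matrix entries are as reconstructed for Example 21, and the code (with partial pivoting added) is ours:

```python
import numpy as np

# Row reduce [A | I] to [I | A^-1], using the matrix of Example 21.
A = np.array([[1., 2, 3],
              [4., 5, 6],
              [3., 1, -2]])
aug = np.hstack([A, np.eye(3)])

n = len(A)
for c in range(n):
    p = c + np.argmax(np.abs(aug[c:, c]))   # partial pivoting for stability
    aug[[c, p]] = aug[[p, c]]               # interchange rows
    aug[c] /= aug[c, c]                     # scale pivot to 1
    for i in range(n):
        if i != c:
            aug[i] -= aug[i, c] * aug[c]    # clear the column

Ainv = aug[:, n:]
assert np.allclose(A @ Ainv, np.eye(3))
print("A inverse recovered")
```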
Example 22
Show that A = [1 6 4; 2 4 -1; -1 2 5] is not invertible.
Solution
[ 1 6  4 | 1 0 0]
[ 2 4 -1 | 0 1 0]
[-1 2  5 | 0 0 1]

R2 -> R2 - 2R1, R3 -> R3 + R1:
[1  6  4 |  1 0 0]
[0 -8 -9 | -2 1 0]
[0  8  9 |  1 0 1]

R3 -> R3 + R2:
[1  6  4 |  1 0 0]
[0 -8 -9 | -2 1 0]
[0  0  0 | -1 1 1]

A row of zeros has appeared on the left side, so A cannot be reduced to the identity; hence A is not invertible.
If Ek ... E2 E1 A = In, then A^-1 = Ek ... E2 E1 In = Ek ... E2 E1, and
A = E1^-1 E2^-1 ... Ek^-1 In = E1^-1 E2^-1 ... Ek^-1
Example 23
Let A = [1 -1 0; 2 0 1; 0 1 0]. Find row operations that reduce A to I3, and use the corresponding elementary matrices to compute A^-1.
Solution
I3 = [1 0 0; 0 1 0; 0 0 1] --(R1 -> R1 + R3)--> E1 = [1 0 1; 0 1 0; 0 0 1],   E1^-1 = [1 0 -1; 0 1 0; 0 0 1]
I3 = [1 0 0; 0 1 0; 0 0 1] --(R2 -> R2 - 2R1)--> E2 = [1 0 0; -2 1 0; 0 0 1],   E2^-1 = [1 0 0; 2 1 0; 0 0 1]
I3 = [1 0 0; 0 1 0; 0 0 1] --(R2 <-> R3)--> E3 = [1 0 0; 0 0 1; 0 1 0],   E3^-1 = [1 0 0; 0 0 1; 0 1 0]
Given E3 E2 E1 A = I3:
E3 E2 E1 A A^-1 = I3 A^-1 yields A^-1 = E3 E2 E1
A^-1 = E3 E2 E1 = [1 0 0; 0 0 1; 0 1 0] [1 0 0; -2 1 0; 0 0 1] [1 0 1; 0 1 0; 0 0 1] = [1 0 1; 0 0 1; -2 1 -2]
Let A = [1 2 5; 3 2 2; 4 4 5]. If A is invertible, then the homogeneous system Ax = 0 has only the trivial solution (compare Example 6(b)).
Basic Theorem
Theorem
Every system of linear equations has no solutions, or has exactly one solution, or has
infinitely many solutions.
Theorem
If A is an invertible n x n matrix, then for each n x 1 matrix b, the system of equations Ax = b has exactly one solution, namely x = A^-1 b.
Example 24
Solve the given system of linear equations using matrix inversion:
x - y = 1
2x + z = 10
y = 2
Solution:
Ax = b is written as
[1 -1 0] [x]   [ 1]
[2  0 1] [y] = [10]
[0  1 0] [z]   [ 2]
From Example 23, A^-1 is found as A^-1 = [1 0 1; 0 0 1; -2 1 -2], so
x = A^-1 b = [1 0 1; 0 0 1; -2 1 -2] [1; 10; 2] = [3; 2; 4], i.e. x = 3, y = 2, z = 4.
Example 25
Solve the systems
x + 2y - z = -2         x + 2y - z = -1
2x - 5y - z = 1   and   2x - 5y - z = 1
3x + 7y - 2z = -1       3x + 7y - 2z = 0
Solution
Linear systems with a common coefficient matrix
[1  2 -1 | -2 -1]
[2 -5 -1 |  1  1]
[3  7 -2 | -1  0]

R2 -> R2 - 2R1, R3 -> R3 - 3R1:
[1  2 -1 | -2 -1]
[0 -9  1 |  5  3]
[0  1  1 |  5  3]

R3 -> 9R3 + R2:
[1  2 -1 | -2 -1]
[0 -9  1 |  5  3]
[0  0 10 | 50 30]

R3 -> (1/10)R3, then R1 -> R1 + R3 and R2 -> R2 - R3:
[1  2 0 | 3 2]
[0 -9 0 | 0 0]
[0  0 1 | 5 3]

R2 -> (-1/9)R2, then R1 -> R1 - 2R2:
[1 0 0 | 3 2]
[0 1 0 | 0 0]
[0 0 1 | 5 3]

The first system has solution x = 3, y = 0, z = 5 and the second x = 2, y = 0, z = 3.
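Sharing one coefficient matrix means both systems can be solved in a single call by stacking the right-hand sides as columns (coefficients and signs as reconstructed above):

```python
import numpy as np

# Common coefficient matrix of Example 25, as reconstructed.
A = np.array([[1., 2, -1],
              [2., -5, -1],
              [3., 7, -2]])

# The two right-hand sides, as the columns of one matrix.
B = np.array([[-2., -1],
              [1., 1],
              [-1., 0]])

X = np.linalg.solve(A, B)   # solves both systems at once, column by column
print(X)                    # columns approximately (3, 0, 5) and (2, 0, 3)
```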
Theorem (Optional)
Let A be a square matrix,
(a) If B is a square matrix satisfying BA = I, then B = A^-1.
(b) If B is a square matrix satisfying AB = I, then B = A^-1.
i.e. A A^-1 = A^-1 A = I
Theorem
Let A and B be square matrices of the same size. If AB is invertible, then A and B must also
be invertible.
A Fundamental Problem:
Let A be a fixed m x n matrix. Find all m x 1 matrices b such that the system of equations
Ax = b is consistent.
Example 26
Find conditions on b1, b2, b3 (if any) under which each system is consistent.
(a) 2x1 + 2x2 + 2x3 = b1
    3x1 + 5x2 - x3 = b2
    4x1 + 7x2 - 2x3 = b3
(b) x1 + x2 - 2x3 = b1
    x1 + 2x2 + 3x3 = b2
    3x1 - 7x2 - 4x3 = b3
Solution
(a)
[2 2  2 | b1]
[3 5 -1 | b2]
[4 7 -2 | b3]

R1 -> (1/2)R1, then R2 -> R2 - 3R1 and R3 -> R3 - 4R1:
[1 1  1 | b1/2]
[0 2 -4 | (2b2 - 3b1)/2]
[0 3 -6 | b3 - 2b1]

R2 -> (1/2)R2, then R3 -> R3 - 3R2:
[1 1  1 | b1/2]
[0 1 -2 | (2b2 - 3b1)/4]
[0 0  0 | b3 - 2b1 - (3/4)(2b2 - 3b1)]

The system is consistent only if b3 - 2b1 - (3/4)(2b2 - 3b1) = 0, i.e. b1 - 6b2 + 4b3 = 0.
Equivalently, b1 = 6b2 - 4b3, or b2 = (b1 + 4b3)/6, or b3 = (6b2 - b1)/4.
Thus, the condition for consistency is that b has the form b = [b1; b2; (6b2 - b1)/4].
(b)
[1  1 -2 | b1]
[1  2  3 | b2]
[3 -7 -4 | b3]

R2 -> R2 - R1, R3 -> R3 - 3R1:
[1   1 -2 | b1]
[0   1  5 | b2 - b1]
[0 -10  2 | b3 - 3b1]

R3 -> R3 + 10R2, then R3 -> (1/52)R3:
[1 1 -2 | b1]
[0 1  5 | b2 - b1]
[0 0  1 | (10(b2 - b1) + b3 - 3b1)/52]
Therefore there are no conditions on the bs for the linear system to be consistent.
I. A diagonal matrix is a square matrix in which all of the entries off the main diagonal are zero.
1. A general n x n diagonal matrix D can be written as
D = [d1 0 0 ... 0; 0 d2 0 ... 0; 0 0 d3 ... 0; ...; 0 0 0 ... dn]
2. A diagonal matrix is invertible iff all of its diagonal entries are nonzero; the inverse of D is
D^-1 = [1/d1 0 0 ... 0; 0 1/d2 0 ... 0; 0 0 1/d3 ... 0; ...; 0 0 0 ... 1/dn]
such that D D^-1 = D^-1 D = I.
Example 27
Find the inverse of A = [3 0 0; 0 2 0; 0 0 1].
Solution
A^-1 = [1/3 0 0; 0 1/2 0; 0 0 1]
II. A square matrix in which all the entries above the main diagonal are zero is called a lower triangular matrix. A square matrix in which all the entries below the main diagonal are zero is called an upper triangular matrix. A square matrix that is either upper or lower triangular is called a triangular matrix.
Example 28
A = [1 2 3; 0 5 6; 0 0 2]       B = [1 0 0; 4 5 0; 3 1 2]
Upper triangular                 Lower triangular
Directed Graphs
- Pi <-> Pj denotes that both Pi -> Pj and Pj -> Pi.
- A directed graph may have separate components of vertices that are connected only among themselves, or vertices that are not connected to any other vertex.
A directed graph having n vertices can be represented by an n x n matrix M = [mij], called the vertex matrix, where
mij = 1 if Pi -> Pj, and mij = 0 otherwise.
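The definition translates directly into code; the helper below (name and edge-list format are ours, for illustration) builds a vertex matrix from a list of directed edges:

```python
import numpy as np

# Build the vertex matrix of a directed graph from its edge list.
# Vertices are numbered 1..n; an edge (i, j) means Pi -> Pj.
def vertex_matrix(n, edges):
    M = np.zeros((n, n), dtype=int)
    for i, j in edges:
        M[i - 1, j - 1] = 1      # mij = 1 exactly when Pi -> Pj
    return M

# e.g. the edges P1 -> P2, P2 -> P3, P3 -> P2:
print(vertex_matrix(3, [(1, 2), (2, 3), (3, 2)]))
# [[0 1 0]
#  [0 0 1]
#  [0 1 0]]
```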
Example 1
(a) The directed graph with vertices P1, P2, P3 and edges P1 -> P2, P2 -> P3, P3 -> P2 (diagram omitted) has vertex matrix
M = [0 1 0; 0 0 1; 0 1 0]
(b) The directed graph on the vertices P1, ..., P5 (diagram omitted) has vertex matrix
M = [0 1 0 0 0; 0 0 1 1 1; 0 0 0 1 0; 1 1 1 0 0; 1 0 0 1 0]
Example 2: Draw the directed graph for the following vertex matrix.
(a) [0 1 1 0; 1 0 0 1; 0 1 0 0; 1 0 1 0]
(b) [0 1 0 1 0; 0 1 0 0 1; 0 0 0 0 1; 1 1 0 0 1; 0 0 1 1 0]
r-Step Connections
Theorem
Let M be the vertex matrix of a directed graph and let mij^(r) be the (i, j)-th element of M^r. Then mij^(r) is equal to the number of r-step connections from Pi to Pj.
The number of 1-step connections is mij (either 0 or 1). The number of 2-step connections is calculated from the square of the vertex matrix, M^2.
1-step connection: Pi -> Pj
2-step connection: Pi -> Pj -> Pm
3-step connection: Pi -> Pj -> Pm -> Pn
CLIQUES
Definition
A subset S of a directed graph is called a clique if it satisfies the following three conditions:
1. S contains at least three vertices.
2. For each pair of vertices Pi and Pj in S, both Pi -> Pj and Pj -> Pi.
3. S is as large as possible; that is, S is not a proper subset of a larger set satisfying conditions 1 and 2.
Definition
A dominance-directed graph is a directed graph such that for any distinct pair of vertices Pi and Pj, either Pi -> Pj or Pj -> Pi, but not both. (One-way directed.)
Theorem
In any dominance-directed graph, there is at least one vertex from which there is a 1-step
or 2-step connection to any other vertex.
Definition
The power of a vertex Pi is the sum of the entries of the i-th row of the matrix A = M + M^2.
Example 5
Consider M =
[0 0 1 1 0]
[1 0 1 0 1]
[0 0 0 1 0]
[0 1 0 0 0]
[1 0 1 1 0]
Find the power of all vertices Pi.
Solution
M^2 =
[0 1 0 1 0]
[1 0 2 3 0]
[0 1 0 0 0]
[1 0 1 0 1]
[0 1 1 2 0]
A = M + M^2 =
[0 1 1 2 0]
[2 0 3 3 1]
[0 1 0 1 0]
[1 1 1 0 1]
[1 1 2 3 0]
Thus, summing the rows of A (e.g. for P3: 0 + 1 + 0 + 1 + 0 = 2):
Power of P1 = 4      Power of P4 = 4
Power of P2 = 9      Power of P5 = 7
Power of P3 = 2
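Example 5 can be reproduced in a few lines, since the power of Pi is just the i-th row sum of M + M^2:

```python
import numpy as np

# Vertex matrix from Example 5.
M = np.array([[0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0],
              [1, 0, 1, 1, 0]])

A = M + M @ M            # A = M + M^2: 1-step plus 2-step connections
print(A.sum(axis=1))     # powers of P1..P5: [4 9 2 4 7]
```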