
Target AA | RECALL

READ | REDO

Strictly as per GTU syllabus...

Linear Algebra
and
Vector Calculus
LAVC (GTU Subject Code - 2110015)
B.E. Semester II
Version 1.0

Powered by

Prof. (Dr.) Rajesh M. Darji


B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107658)
IMS, AMS
Department of Mathematics
Sarvajanik College of Engg. & Tech. (SCET) SURAT

Latest version available at www.rmdarji.ijaamm.com

Dear Readers,
For any query regarding this subject,
feel free to ask or WhatsApp on (+91) 9427 80 9779
Dedicated to
My Beloved Students
Contents

1 Review of Matrices 1
1.1 Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Types of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Row and Column Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.2 Zero or Null Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.4 Transpose of Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.5 Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.6 Skew Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2.7 Diagonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.8 Scalar Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.9 Unit Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.10 Upper Triangle Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.11 Lower Triangle Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Determinant of Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Minor of an Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Cofactor of an Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Adjoint of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Singular and Non-singular Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8 Operations of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.1 Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.2 Multiplication of Matrix by a Scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.3 Addition and Substation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.4 Multiplication of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.9 Inverse of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.10 Elementary Transformations on Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.11 Equivalent Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.12 Gauss-Jordan Method to find Inverse Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.13 Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.14 Rank by Row Echelon Method: (Elementary Transformation Method) . . . . . . . . . . . . . . . 9
1.14.1 Row-Echelon or Canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.15 Reduced Row Echelon Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2 System of Linear Algebraic Equations 12


2.1 System of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Augmented Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Non-Homogeneous System of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Homogeneous System of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Conditions for the Consistency of Non-Homogeneous System of Equations . . . . . . . . . . . 13
2.6 Conditions for the Consistency of the System of Homogeneous Equations . . . . . . . . . . . . 17


3 Notions of Vectors in Rn 20
3.1 Euclidean Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Linear Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Linearly Independent Vectors (LI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Linearly Dependent Vectors (LD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.5 Euclidean Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Normalized Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Euclidean Distance and Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.8 Cauchy-Schwarz’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.9 Minkowski’s Triangular Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

4 Vector Space 26
4.1 Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2 Vector Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Some Standard Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 Linear Combination and Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.6 Linearly Independent Vectors (LI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.7 Linearly Dependent Vectors (LD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.8 Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.9 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.10 Some Standard Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.11 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.12 Ordered Basis and Coordinate Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.13 Translation Matrix (Change of Basis Matrix) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.14 Fundamental Spaces: Row Space, Column Space, Null Space . . . . . . . . . . . . . . . . . . . . 45
4.15 Rank and Nullity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.16 Rank-Nullity Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Linear Transformation (Linear Mapping) 51
5.1 Linear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Particular Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.3 Matrix Linear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.4 Composition of Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.5 Onto (Surjective) Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.6 One-one (Injective) Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.7 Range (Image) and Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.8 Inverse Linear Transformation (Isomorphism) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6 Eigenvalues and Eigenvectors 61


6.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 Method of Finding Eigenvalue and Eigenvector . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Properties of Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.4 Properties of Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.5 Algebraic and Geometric Multiplicity of an eigenvalue . . . . . . . . . . . . . . . . . . . . . . . . 70
6.6 Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.7 Similar Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.8 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.9 Orthogonally Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74


7 Quadratic Forms and Complex Matrices 78


7.1 Quadratic Form (QF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.2 Matrix of Quadratic Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.3 Index, Signature and Rank of Quadratic Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.4 Definiteness of Quadratic Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.5 Complex Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.6 Conjugate Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.7 Conjugate Transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.8 Hermitian, Skew-Hermitian, Unitary and Normal Matrices . . . . . . . . . . . . . . . . . . . . . 83
7.9 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

8 Inner Product Space and Orthogonal Basis 86


8.1 Inner Product Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.2 Properties of Inner Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.3 Some Standard Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.4 Norm, Distance and Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
8.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
8.6 Orthogonal Complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
8.7 Properties of W ⊥ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
8.8 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
8.9 Orthogonal Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.10 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
8.11 Orthogonal and Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

8.12 Coordinate Relative to Orthonormal Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.13 Gram-Schmidt Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.14 Least Square Approximate Solution for Linear System . . . . . . . . . . . . . . . . . . . . . . . . 100
8.15 Orthogonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

9 Vector Calculus I: Vector Differentiation 104
9.1 Scalar and Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.2 Algebraic Operations of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.3 Point Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.4 Vector Differential Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.5 Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.6 Divergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.7 Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
9.8 Directional Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.9 Angle between two Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

10 Vector Calculus II: Vector Integration 114


10.1 Line Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
10.2 Surface Integral (Normal Surface Integral) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.3 Volume Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
10.4 Integral Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121



Chapter 1
Review of Matrices

1.1 Matrix

A matrix is a rectangular arrangement of certain numbers (called elements or entries) in an array of m rows
and n columns, such as

    A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}

which is called an m × n matrix and is generally denoted by A_{m×n}. Here m × n is known as the order of the matrix.
More generally, a matrix can be denoted by A_{m×n} = [a_{ij}], where i = 1, 2, 3, ..., m and j = 1, 2, 3, ..., n.
Here a_{ij} denotes the element in the i-th row and j-th column, which may be real or complex.
e. g.

    A_{3×2} = \begin{bmatrix} 1 & 2 \\ 2 & 6 \\ 3 & -4 \end{bmatrix},   A_{3×3} = \begin{bmatrix} 3 & 0 & 4 \\ -1 & 0.8 & 5 \\ 2 & 2 & 3+7i \end{bmatrix}
Remark:

1. Distinct notations used for enclosing the elements of a matrix are [ ], ( ), { }, ‖ ‖.

2. Elements a_{11}, a_{22}, a_{33}, ... are said to be the leading diagonal or principal diagonal elements of the
matrix.
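The order and entries of a matrix map directly onto array objects in code. The following is a minimal sketch (assuming the NumPy library) for the 3 × 3 example above; note that code indices start at 0 rather than 1.

```python
import numpy as np

# The 3x3 example matrix from above; complex entries such as 3+7i are allowed.
A = np.array([[3, 0, 4],
              [-1, 0.8, 5],
              [2, 2, 3 + 7j]])

m, n = A.shape      # order of the matrix (m rows, n columns)
print(m, n)         # 3 3
print(A[0, 2])      # element a_13 = (4+0j)  (row 1, column 3; 0-based indices in code)
```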

1.2 Types of Matrices

1.2.1 Row and Column Matrix

A matrix of order 1 × n, having only one row and n columns, is known as a row matrix or row vector.
That is, A_{1×n} = [ a_{11}  a_{12}  a_{13}  . . . .  a_{1n} ]
e. g.
A_{1×4} = [ −2  1  0  3 ]
Similarly, a matrix of order m × 1, having m rows and only one column, is known as a column matrix or
column vector.

That is, A_{m×1} = \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{m1} \end{bmatrix}   e. g.   A_{3×1} = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}


1.2.2 Zero or Null Matrix

A matrix containing all zero elements is said to be zero matrix or null matrix and is denoted by Z or O.
e. g.
    Z_{3×4} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = O

1.2.3 Square Matrix

A matrix having the same number of rows and columns, i.e. m = n, is said to be a square matrix. If A is a square
matrix of order n then it is also denoted by A_n.
e. g.

    A_{2×2} = \begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix} = A_2 ;   A_{3×3} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -5 & 1 \\ 1 & 2 & -2 \end{bmatrix} = A_3

1.2.4 Transpose of Matrix

The matrix obtained by interchanging the rows and columns of a given matrix A is called the transpose of A and is
denoted by the symbol A′ or A^T.
e. g.

    A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 0 & 1 & 2 \\ 1 & 1 & -1 \end{bmatrix}  ⇒  A^T = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 2 & 1 & 1 & 1 \\ 3 & 4 & 2 & -1 \end{bmatrix}

1.2.5 Symmetric Matrix

A square matrix A is said to be a symmetric matrix if A = A^T.
e. g.

    A = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 5 & 3 \\ -1 & 3 & 7 \end{bmatrix}   and   A = \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix}

Thus, in a symmetric matrix a_{ij} = a_{ji}   ∀ i, j
1.2.6 Skew Symmetric Matrix

A square matrix A is said to be a skew symmetric matrix if A = −A^T.

Thus, in a skew symmetric matrix a_{ij} = −a_{ji}  ∀ i, j. Note that the diagonal elements of a skew symmetric
matrix are always zero, because a_{ii} = −a_{ii} ⇒ a_{ii} = 0.
e. g.

    A = \begin{bmatrix} 0 & 1 & -2 \\ -1 & 0 & 3 \\ 2 & -3 & 0 \end{bmatrix}

1.2.7 Diagonal Matrix

If in a square matrix all the non-diagonal elements are zero, then it is called a diagonal matrix.
e. g.

    A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}


1.2.8 Scalar Matrix

If a diagonal matrix has all diagonal elements equal, i.e. a_{11} = a_{22} = a_{33} = ..., then it is called a scalar matrix.
e. g.

    A = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 7 \end{bmatrix}

1.2.9 Unit Matrix

A diagonal matrix of order n in which all the diagonal elements are unity (one) is called the unit matrix of order
n and is denoted by I_n. A unit matrix is also called an identity matrix.
e. g.

    I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}   and   I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

1.2.10 Upper Triangle Matrix

It is a square matrix in which all the elements below the principal diagonal are zero.
e. g.

    A = \begin{bmatrix} 3 & 1 & -2 \\ 0 & 7 & 4 \\ 0 & 0 & 1 \end{bmatrix}

1.2.11 Lower Triangle Matrix

It is a square matrix in which all the elements above the principal diagonal are zero.
e. g.

    A = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 3 & 0 \\ 2 & 1 & 5 \end{bmatrix}

1.3 Determinant of Matrix


If A is a square matrix then the determinant of A is denoted by |A| or det(A).
e. g.

    A = \begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & 3 \\ -1 & 1 & 0 \end{bmatrix}  ⇒  |A| = det(A) = \begin{vmatrix} 2 & 3 & 1 \\ 1 & 2 & 3 \\ -1 & 1 & 0 \end{vmatrix} = −12.

1.4 Minor of an Element

The minor of an element of |A| is the determinant obtained by omitting the row and the column in which the
element is present. In general the minor of an element a_{ij} is denoted by M_{ij}.

e. g. If |A| = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} then M_{11} = \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix},  M_{21} = \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix},  M_{22} = \begin{vmatrix} a_1 & c_1 \\ a_3 & c_3 \end{vmatrix},  M_{33} = \begin{vmatrix} a_1 & b_1 \\ a_3 & b_3 \end{vmatrix}

1.5 Cofactor of an Element

The cofactor of an element a_{ij} of |A| is denoted by A_{ij} and is defined as A_{ij} = (−1)^{i+j} M_{ij}.

e. g. If |A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} then A_{11} = (−1)^{1+1} M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},  A_{21} = (−1)^{2+1} M_{21} = −\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}


1.6 Adjoint of a Matrix

The adjoint of a square matrix A is the transpose of the matrix formed by the cofactors of the elements of the given
matrix A, and is denoted by adj(A). That is, if A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} then adj(A) = \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix}
e. g.

    A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -1 & -3 \\ 4 & 5 & -4 \end{bmatrix}  ⇒  adj(A) = \begin{bmatrix} 19 & 23 & -3 \\ -12 & -16 & 3 \\ 4 & 3 & -1 \end{bmatrix}   (Verify!)
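The cofactor-and-transpose recipe above translates directly into a short routine. The following is a sketch only (assuming NumPy; the helper name `adjoint` is ours, not a library function), and it also checks the identity A adj(A) = |A| I used later in the remarks:

```python
import numpy as np

def adjoint(A):
    """Return the adjoint (adjugate) of a square matrix A via cofactors."""
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i and column j
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)       # cofactor A_ij
    return cof.T                                                     # adjoint = transpose of the cofactor matrix

A = np.array([[1, 2, 3], [0, -1, -3], [4, 5, -4]], dtype=float)
adjA = adjoint(A)
print(np.round(adjA))                                          # [[ 19. 23. -3.] [-12. -16. 3.] [ 4. 3. -1.]]
print(np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3)))     # True: A adj(A) = |A| I
```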

1.7 Singular and Non-singular Matrix

For a square matrix A, if |A| = 0 then it is called singular, and if |A| ≠ 0 then it is called a non-singular matrix.

1.8 Operations of Matrices

1.8.1 Equality

Two matrices A and B of the same order are said to be equal if all the elements of A and B in the correspond-
ing position are equal.
e. g.
    A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},  B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}  ⇒  A = B

1.8.2 Multiplication of Matrix by a Scalar

For any scalar k, if A = [a_{ij}] then kA = [k a_{ij}], 1 ≤ i ≤ m, 1 ≤ j ≤ n.
e. g.

    If A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & -1 & 2 \end{bmatrix} then 2A = \begin{bmatrix} 2 & 4 & 6 \\ 6 & -2 & 4 \end{bmatrix} and (−1)A = −A = \begin{bmatrix} -1 & -2 & -3 \\ -3 & 1 & -2 \end{bmatrix}

1.8.3 Addition and Subtraction

Let A = [a_{ij}] and B = [b_{ij}], 1 ≤ i ≤ m, 1 ≤ j ≤ n; then A ± B = [a_{ij} ± b_{ij}].

e. g. Let A = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix},  B = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}  then  A + B = \begin{bmatrix} 3 & 5 \\ 5 & 7 \end{bmatrix}  and  A − B = \begin{bmatrix} -1 & -1 \\ -3 & -4 \end{bmatrix}

1.8.4 Multiplication of Matrices


Let A = [a_{ij}] and B = [b_{jk}] be matrices of order m × n and n × p respectively; then the product AB exists and is an
m × p matrix defined as AB = [c_{ik}], where c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}.

â In order to find product of two matrices A and B , take row from first matrix A and column from second
matrix B .

â Find the product of respective entries of row and column, and then add them.

â It gives the entry on corresponding row and column of the product matrix (AB ).

â For example, in following matrices A and B , if we consider first row (R 1 ) from matrix A and second
column (C 2 ) from matrix B then corresponding entry of product matrix AB lies on first row and sec-
ond column and is given by (1 × 1) + (2 × 2) + (3 × 4) = 17.

â Similarly, we can find all other entries in product matrix AB .


    A_{2×3} = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \end{bmatrix},   B_{3×3} = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 2 & 1 \\ 3 & 4 & 1 \end{bmatrix}  ⇒  (AB)_{2×3} = \begin{bmatrix} 9 & 17 & 6 \\ 13 & 24 & 9 \end{bmatrix}
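The same product can be cross-checked numerically; a quick sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 3, 4]])
B = np.array([[2, 1, 1], [-1, 2, 1], [3, 4, 1]])
print(A @ B)   # [[ 9 17  6]
               #  [13 24  9]]
```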
Remark:

1. In general AB 6= B A.

2. AB = O does not imply A = O or B = O


e. g.
    A = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix},  B = \begin{bmatrix} 2 & 6 \\ -1 & -3 \end{bmatrix}  ⇒  AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = O

3. AB = AC does not imply B = C.

4. A(BC) = (AB)C [Associative Law]

5. A(B + C) = AB + AC and (A + B)C = AC + BC [Distributive Law]

6. k(AB) = (kA)B = A(kB)

7. If A is a square matrix then A² = AA, A³ = A²A. In general A^{m+n} = A^m A^n, A^0 = I and (A^m)^n = A^{mn}.

8. If A is a square matrix of order n then A I_n = I_n A = A.

9. (A′)′ = A, (A + B)′ = A′ + B′ and (AB)′ = B′ A′

10. |AB| = |A| |B|

11. A adj(A) = adj(A) A = |A| I

1.9 Inverse of a Matrix

For a non-singular square matrix A, if there exists another non-singular matrix B such that AB = BA = I, then
matrix A is called invertible and matrix B is called the inverse of A. It is denoted by B = A^{-1} and is given by

    A^{-1} = (1/|A|) adj(A)

Thus a square matrix A is invertible if |A| ≠ 0, that is, A is non-singular. In this case A A^{-1} = A^{-1} A = I.
Remark:

1. (AB)^{-1} = B^{-1} A^{-1}        2. (A^{-1})^{-1} = A,  (A′)^{-1} = (A^{-1})′,  I_n^{-1} = I_n

1.10 Elementary Transformations on Matrix

1. The interchange of the i-th and j-th rows is denoted by R_{ij} or R_i ↔ R_j.

   The interchange of the i-th and j-th columns is denoted by C_{ij} or C_i ↔ C_j.

2. The multiplication of each element of the i-th row by a nonzero scalar k is denoted by kR_i.

   The multiplication of each element of the i-th column by a nonzero scalar k is denoted by kC_i.

3. Multiplication of each element of the i-th row by a nonzero scalar k and adding the corresponding element to
   the j-th row is denoted by R_{ij}(k) or R_j → R_j + kR_i.

   Multiplication of each element of the i-th column by a nonzero scalar k and adding the corresponding element to
   the j-th column is denoted by C_{ij}(k) or C_j → C_j + kC_i.

1.11 Equivalent Matrices

Two matrices A and B are said to be equivalent if one can be obtained from other one by applying the finite
numbers of elementary operations and they are denoted by A ∼ B or A → B .


1.12 Gauss-Jordan Method to find Inverse Matrix

The method of finding the inverse of a given matrix by elementary row transformations is called the Gauss-
Jordan method and is applied as follows:

    [A : I]  ⇒  [I : A^{-1}]

Illustration 1.1 Prove that any square matrix can be expressed as the sum of a symmetric and a skew symmetric
matrix. Hence express the matrix \begin{bmatrix} 4 & 2 & -3 \\ 1 & 3 & -6 \\ -5 & 0 & 7 \end{bmatrix} as such a sum of symmetric and skew symmetric
matrices.

Solution: Let A be a square matrix of order n. Hence,

    2A = (A + A^T) + (A − A^T)

    ∴ A = (1/2)(A + A^T) + (1/2)(A − A^T) = P + Q   (say)                                        (1.1)

Now we will see that P and Q are symmetric and skew-symmetric matrices respectively.

    P = (1/2)(A + A^T)

    P^T = [(1/2)(A + A^T)]^T = (1/2)(A + A^T)^T              [∵ (kA)^T = k A^T]
        = (1/2)(A^T + (A^T)^T) = (1/2)(A^T + A) = P

    ∴ P^T = P

Therefore, P is a symmetric matrix.
Also,

    Q = (1/2)(A − A^T)

    ∴ Q^T = [(1/2)(A − A^T)]^T = (1/2)(A − A^T)^T
          = (1/2)(A^T − (A^T)^T) = (1/2)(A^T − A)
          = −(1/2)(A − A^T) = −Q

    ∴ Q^T = −Q

Therefore, Q is a skew-symmetric matrix.

Hence from (1.1), it is proved that every square matrix can be expressed as the sum of a symmetric and a skew
symmetric matrix.

Now let A = \begin{bmatrix} 4 & 2 & -3 \\ 1 & 3 & -6 \\ -5 & 0 & 7 \end{bmatrix} = P + Q, where

    P = (1/2)(A + A^T) = (1/2)\left( \begin{bmatrix} 4 & 2 & -3 \\ 1 & 3 & -6 \\ -5 & 0 & 7 \end{bmatrix} + \begin{bmatrix} 4 & 1 & -5 \\ 2 & 3 & 0 \\ -3 & -6 & 7 \end{bmatrix} \right) = (1/2)\begin{bmatrix} 8 & 3 & -8 \\ 3 & 6 & -6 \\ -8 & -6 & 14 \end{bmatrix}

    ∴ P = \begin{bmatrix} 4 & 3/2 & -4 \\ 3/2 & 3 & -3 \\ -4 & -3 & 7 \end{bmatrix}

and

    Q = (1/2)(A − A^T) = (1/2)\left( \begin{bmatrix} 4 & 2 & -3 \\ 1 & 3 & -6 \\ -5 & 0 & 7 \end{bmatrix} − \begin{bmatrix} 4 & 1 & -5 \\ 2 & 3 & 0 \\ -3 & -6 & 7 \end{bmatrix} \right) = (1/2)\begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & -6 \\ -2 & 6 & 0 \end{bmatrix}

    ∴ Q = \begin{bmatrix} 0 & 1/2 & 1 \\ -1/2 & 0 & -3 \\ -1 & 3 & 0 \end{bmatrix}

Hence,

    \begin{bmatrix} 4 & 2 & -3 \\ 1 & 3 & -6 \\ -5 & 0 & 7 \end{bmatrix} = \begin{bmatrix} 4 & 3/2 & -4 \\ 3/2 & 3 & -3 \\ -4 & -3 & 7 \end{bmatrix} + \begin{bmatrix} 0 & 1/2 & 1 \\ -1/2 & 0 & -3 \\ -1 & 3 & 0 \end{bmatrix}
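The decomposition of Illustration 1.1 is easy to reproduce numerically. A small sketch (assuming NumPy):

```python
import numpy as np

A = np.array([[4, 2, -3], [1, 3, -6], [-5, 0, 7]], dtype=float)
P = (A + A.T) / 2        # symmetric part
Q = (A - A.T) / 2        # skew-symmetric part
assert np.allclose(P, P.T) and np.allclose(Q, -Q.T) and np.allclose(P + Q, A)
print(P)   # [[ 4.   1.5 -4. ] [ 1.5  3.  -3. ] [-4.  -3.   7. ]]
print(Q)   # [[ 0.   0.5  1. ] [-0.5  0.  -3. ] [-1.   3.   0. ]]
```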

Illustration 1.2 Find the inverse of the matrix \begin{bmatrix} 0 & 1 & 2 \\ 1 & 2 & 3 \\ 3 & 1 & 1 \end{bmatrix} by using the Gauss-Jordan method.

Solution: Consider the matrix [A : I]. In order to find the inverse of the given matrix, we transform A to I by
applying only row operations successively, as follows:

    [A : I] = \begin{bmatrix} 0 & 1 & 2 & 1 & 0 & 0 \\ 1 & 2 & 3 & 0 & 1 & 0 \\ 3 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}  → R_{12}

    ∼ \begin{bmatrix} 1 & 2 & 3 & 0 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 3 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}  → R_{13}(−3)

    ∼ \begin{bmatrix} 1 & 2 & 3 & 0 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & -5 & -8 & 0 & -3 & 1 \end{bmatrix}  → R_{21}(−2), R_{23}(5)

    ∼ \begin{bmatrix} 1 & 0 & -1 & -2 & 1 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 0 & 2 & 5 & -3 & 1 \end{bmatrix}  → R_{31}(1/2), R_{32}(−1)

    ∼ \begin{bmatrix} 1 & 0 & 0 & 1/2 & -1/2 & 1/2 \\ 0 & 1 & 0 & -4 & 3 & -1 \\ 0 & 0 & 2 & 5 & -3 & 1 \end{bmatrix}  → (1/2) R_3

    ∼ \begin{bmatrix} 1 & 0 & 0 & 1/2 & -1/2 & 1/2 \\ 0 & 1 & 0 & -4 & 3 & -1 \\ 0 & 0 & 1 & 5/2 & -3/2 & 1/2 \end{bmatrix} = [I : A^{-1}]

Hence,

    A^{-1} = \begin{bmatrix} 1/2 & -1/2 & 1/2 \\ -4 & 3 & -1 \\ 5/2 & -3/2 & 1/2 \end{bmatrix}
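The row operations used above can be automated. Below is a rough Gauss-Jordan sketch (assuming NumPy; `gauss_jordan_inverse` is an illustrative helper with simple partial pivoting, not a standard routine), applied to the matrix of Illustration 1.2:

```python
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])       # augmented matrix [A : I]
    for col in range(n):
        pivot = np.argmax(np.abs(M[col:, col])) + col  # pick a usable pivot row
        M[[col, pivot]] = M[[pivot, col]]              # row interchange R_ij
        M[col] /= M[col, col]                          # scale pivot row (kR_i)
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # eliminate: R_j -> R_j + kR_i
    return M[:, n:]                                    # right half is A^{-1}

A = np.array([[0, 1, 2], [1, 2, 3], [3, 1, 1]])
print(gauss_jordan_inverse(A))   # [[ 0.5 -0.5  0.5] [-4.   3.  -1. ] [ 2.5 -1.5  0.5]]
```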

Exercise 1.1
1. Express the matrix \begin{bmatrix} 1 & 2 & 4 \\ -2 & 5 & 3 \\ -1 & 6 & 3 \end{bmatrix} as a sum of symmetric and skew symmetric matrices.


2. Prove that inverse of a matrix is unique.

3. Find the adjoint of the matrix A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 2 & 0 & 1 \end{bmatrix}. Also verify that A adj(A) = adj(A) A = |A| I.
4. Show that \begin{bmatrix} λ & 1 \\ 0 & λ \end{bmatrix}^n = \begin{bmatrix} λ^n & nλ^{n-1} \\ 0 & λ^n \end{bmatrix}, where n is a positive integer.

5. Show that |adj(A)| = |A|^{n−1}, where A is a square matrix of order n. Hence deduce that |adj(adj A)| = |A|^{(n−1)²}.

6. Find the inverse of the following matrices by using Gauss-Jordan method (using row operations):

   a. \begin{bmatrix} 2 & 1 & 3 \\ 3 & 1 & 2 \\ 1 & 2 & 3 \end{bmatrix}   b. \begin{bmatrix} 1 & 3 & 3 \\ 1 & 4 & 3 \\ 1 & 3 & 4 \end{bmatrix}   c. \begin{bmatrix} -1 & -3 & 3 & -1 \\ 1 & 1 & -1 & 0 \\ 2 & -5 & 2 & -3 \\ -1 & 1 & 0 & 1 \end{bmatrix}

7. Find the inverse of the following matrices by using adjoint method if exist:

   a. \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{bmatrix}   b. \begin{bmatrix} 1 & 2 & -2 \\ -1 & 3 & 0 \\ 0 & -2 & 1 \end{bmatrix}

8. If A and B are symmetric matrices then prove that AB is symmetric, provided A and B commute.

9. If A is a non-singular matrix of order n, then show that adj(adj A) = |A|^{n−2} A. Hence find adj(adj A), if

   A = (1/9) \begin{bmatrix} -1 & -8 & 4 \\ -4 & 4 & 7 \\ -8 & -1 & -4 \end{bmatrix}

Answers
1. \begin{bmatrix} 1 & 0 & 3/2 \\ 0 & 5 & 9/2 \\ 3/2 & 9/2 & 3 \end{bmatrix} + \begin{bmatrix} 0 & 2 & 5/2 \\ -2 & 0 & -3/2 \\ -5/2 & 3/2 & 0 \end{bmatrix}    3. adj(A) = \begin{bmatrix} 1 & -2 & 1 \\ 4 & -5 & -2 \\ -2 & 4 & 1 \end{bmatrix}

4. Hint: Use the principle of mathematical induction.

6. a. \begin{bmatrix} -1/6 & 1/2 & -1/6 \\ -7/6 & 1/2 & 5/6 \\ 5/6 & -1/2 & -1/6 \end{bmatrix}   b. \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}   c. \begin{bmatrix} 0 & 2 & 1 & 3 \\ 1 & 1 & -1 & -2 \\ 1 & 2 & 0 & 1 \\ -1 & 1 & 2 & 6 \end{bmatrix}

7. a. \begin{bmatrix} 3 & -5/2 & 1/2 \\ -3 & 4 & -1 \\ 1 & -3/2 & 1/2 \end{bmatrix}   b. \begin{bmatrix} 3 & 2 & 6 \\ 1 & 1 & 2 \\ 2 & 2 & 5 \end{bmatrix}    9. A

E E E


1.13 Rank of a Matrix

The matrix is said to be of rank r if there is,

1. At least one minor of the order r which is not equal to zero and

2. Every minor of order (r + 1) is equal to zero.

The rank of a matrix A is denoted by ρ (A) = r


Remark:

1. The rank of a matrix A is the maximum order of its non-vanishing minor.

2. If the matrix A has a non-zero minor of order r then ρ(A) ≥ r.

3. If all the minors of order (r + 1) of the matrix A are zero then ρ(A) ≤ r.

4. If A is an m × n matrix then ρ(A) ≤ min(m, n).

5. Elementary transformations do not alter the order and the rank of the matrix.

1.14 Rank by Row Echelon Method: (Elementary Transformation Method)

1.14.1 Row-Echelon or Canonical form

â Let A be the matrix of order m × n i. e. A m×n .

â The row-echelon or canonical form of the matrix A m×n is a matrix in which one or more elements in
each of the first r rows are non-zero and all the elements in remaining rows are zeroes.

â Any matrix A can always be reduced to the echelon form by applying only row transformations.

â In this case the rank of the matrix is given by ρ(A) = m − k, where m denotes the total number of rows and k
denotes the total number of zero rows. Note that if k = 0 then ρ(A) = m.

READ | R E DO
* Important:

â Pivot: In echelon form of matrix


Powered Prof. (Dr.) Rajesh M. Darji
by the first non-zero element of non-zero row from left is called Pivot of
the corresponding row and the corresponding row is known as pivot row, as show in below matrix.
 
1 3 5 4
 0 6 −1 4 
 
 
 0 0 0 2 
0 0 0 0

â Observe that R 1 , R 2 , R 3 are pivot rows whose pivots are enclosed by the rectangular box.

â Further the column on which the pivot of row exist is known as pivot column. In above matrix
C 1 ,C 2 ,C 4 are pivot columns.

1.15 Reduced Row Echelon Form

Reduced row echelon form is a row echelon form in which every pivot is unity (1) and all the elements above each
pivot are zero.
e. g.

    \begin{bmatrix} 1 & 0 & -3 & 0 & 2 \\ 0 & 1 & 2 & 0 & 8 \\ 0 & 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}


Illustration 1.3 Find the rank of the matrix \begin{bmatrix} 1 & 2 & 3 \\ 1 & 4 & 2 \\ 2 & 6 & 5 \end{bmatrix}, using the determinant method.

Solution: Let A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 4 & 2 \\ 2 & 6 & 5 \end{bmatrix}

Obviously, the highest ordered minor of A is of 3rd order and it is det(A) itself.

    ∴ det(A) = \begin{vmatrix} 1 & 2 & 3 \\ 1 & 4 & 2 \\ 2 & 6 & 5 \end{vmatrix} = 1(20 − 12) − 2(5 − 4) + 3(6 − 8) = 8 − 2 − 6 = 0.

So the rank of A is not 3.
Now, consider the 2nd order minor of A formed by its 1st and 2nd rows and columns: \begin{vmatrix} 1 & 2 \\ 1 & 4 \end{vmatrix} = 4 − 2 = 2 ≠ 0.
Hence, the rank of matrix A is 2. ∴ ρ(A) = 2.
 
Illustration 1.4 Find the rank of the matrix \begin{bmatrix} 0 & 1 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ 3 & 1 & 0 & 2 \\ 1 & 1 & -2 & 0 \end{bmatrix} by reducing it to echelon form.

Solution: Let A = \begin{bmatrix} 0 & 1 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ 3 & 1 & 0 & 2 \\ 1 & 1 & -2 & 0 \end{bmatrix}

To reduce A to its row-echelon form, use only row transformations and bring 0 (zero) below the first non-zero
(pivot) element of each row, starting from the first row.

    A = \begin{bmatrix} 0 & 1 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ 3 & 1 & 0 & 2 \\ 1 & 1 & -2 & 0 \end{bmatrix}   R_3 → R_3 − R_1 ;  R_4 → R_4 − R_1

    ∼ \begin{bmatrix} 0 & 1 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ 3 & 0 & 3 & 3 \\ 1 & 0 & 1 & 1 \end{bmatrix}   R_3 → R_3 − 3R_2 ;  R_4 → R_4 − R_2

    ∼ \begin{bmatrix} 0 & 1 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

Hence, ρ(A) = m − k = 4 − 2 = 2 (the number of non-zero rows).
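For a quick numerical cross-check of such hand computations, the rank can also be obtained directly (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0, 1, -3, -1], [1, 0, 1, 1], [3, 1, 0, 2], [1, 1, -2, 0]])
print(np.linalg.matrix_rank(A))   # 2, matching the echelon-form count of non-zero rows
```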
Exercise 1.2
1. Find the rank of the matrix \begin{bmatrix} 1 & 2 & -1 & 3 \\ 3 & 4 & 0 & -1 \\ -1 & 0 & -2 & 7 \end{bmatrix} using the determinant method.

2. Find the rank of the following matrices by reducing to the echelon form:

   a. \begin{bmatrix} 2 & 3 & -1 & -1 \\ 1 & -1 & -2 & -4 \\ 3 & 1 & 3 & -2 \\ 6 & 3 & 0 & -7 \end{bmatrix}   b. \begin{bmatrix} 1 & 2 & -2 & 3 \\ 2 & 5 & -4 & 6 \\ -1 & -3 & 2 & -2 \\ 2 & 4 & -1 & 6 \end{bmatrix}

3. Convert the matrix \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{bmatrix} into reduced row echelon form and hence find the rank of the matrix.


Answers
1. 2    2. a. 3   b. 4    3. \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}, Rank = 2

E E E



Chapter 2
System of Linear Algebraic Equations

2.1 System of Equations

The collection of more than one linear equation is called system of equations.
Consider the system of m equations in n unknown x 1 , x 2 , x 3 .....x n as follow:

a 11 x 1 + a 12 x 2 + a 13 x 3 + . . . . . . + a 1n x n = b 1
a 21 x 1 + a 22 x 2 + a 23 x 3 + . . . . . . + a 2n x n = b 2
a 31 x 1 + a 32 x 2 + a 33 x 3 + . . . . . . + a 3n x n = b 3
··· ··· ··· ··· ··· ··· ··· ··· ···

Target AA
··· ··· ··· ··· ··· ··· ··· ··· ···
a m1 x 1 + a m2 x 2 + a m3 x 3 + . . . . . . + a mn x n = b m

The solution of the above system means the values of the unknowns x_1, x_2, x_3, ..., x_n that satisfy the given
system; such values may or may not exist.

â The above system can be rewritten in compact form using matrix notation as AX = B, where

    A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix} ;   X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} ;   B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_m \end{bmatrix}

* Important:

When the system AX = B has the solution then system is said to be consistent otherwise the system is called
inconsistent.


2.2 Augmented Matrix

If AX = B is the system of m equations in n unknowns, then the matrix written as [A : B] or (A, B) is called the
augmented matrix of the given system. Hence

    [A : B] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} & b_1 \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} & b_2 \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} & b_3 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} & b_m \end{bmatrix}

2.3 Non-Homogeneous System of Equations

For the system of equation AX = B , if matrix B is not a null matrix (non-zero matrix) then the system of
equation AX = B is known as non-homogeneous system of equations.

2.4 Homogeneous System of Equations

For the system of equation AX = B , if matrix B is a null matrix (zero matrix) then the system of equation
AX = B (AX = Z ) is known as Homogeneous system of equations.
e. g.

Target AA
 
    x + y + z = 3,  x − y + 2z = 4,  2x + 3y − z = 0      (non-homogeneous equations);
    x + y + z = 0,  2x + 3y − 4z = 0,  x − y + 2z = 0      (homogeneous equations).

2.5 Conditions for the Consistency of Non-Homogeneous System of Equations


Consider the non-homogeneous system of m equations in n unknowns as AX = B.
For the augmented matrix [A : B], if

1. ρ(A) ≠ ρ(A : B), then the system is inconsistent and possesses no solution.
2. ρ (A) = ρ (A : B ), then the system is consistent and possesses solution.
In this case,

â If ρ (A) = ρ (A : B ) = n (number of unknown) then solution is unique, and


â If ρ (A) = ρ (A : B ) = r < n, then system possess infinite numbers of solutions, and that can be
represented parametrically in terms of (n − r ) parameters.

3. In particular let m = n = 3, that is three equations in three unknown, then

â If |A| ≠ 0 then the system has a unique solution, given by AX = B ⇒ X = A^{-1} B.


â If | A | = 0 then the system is inconsistent or has infinite numbers of solutions, which can be
followed by reduction method.

* Important:

â To find rank of the augmented matrix we shall reduce the matrix in to the row echelon form.

â This method is known as Reduction Method or Gauss Elimination Method. Also in this method we
apply only row elementary transformations.

â Gauss Jordan Elimination Method: In this method convert the matrix system [A : B ] in to reduced
row echelon form and then apply back substitution.


Illustration 2.1 Examine the consistency of following system of equations, and solve if consistent by Gauss
elimination method:

x_1 + x_2 + 2x_3 = 9,   2x_1 + 4x_2 − 3x_3 = 1,   3x_1 + 6x_2 − 5x_3 = 0.

Solution: Here the given system has three equations in three unknowns x_1, x_2 and x_3, that is, an (m × n = 3 × 3)
system.
In Gauss elimination method, we reduce augmented matrix [A : B ] to row echelon form using row oper-
ations only, as follow:

    [A : B] = \begin{bmatrix} 1 & 1 & 2 & 9 \\ 2 & 4 & -3 & 1 \\ 3 & 6 & -5 & 0 \end{bmatrix}   R_2 → R_2 − 2R_1 ;  R_3 → R_3 − 3R_1

            ∼ \begin{bmatrix} 1 & 1 & 2 & 9 \\ 0 & 2 & -7 & -17 \\ 0 & 3 & -11 & -27 \end{bmatrix}   R_3 → R_3 − (3/2) R_2

            ∼ \begin{bmatrix} 1 & 1 & 2 & 9 \\ 0 & 2 & -7 & -17 \\ 0 & 0 & -1/2 & -3/2 \end{bmatrix}                                             (2.1)

Observe that, ρ(A) = ρ(A : B ) = 3 = n (number of unknown x 1 , x 2 , x 3 ).


∴ Given system of equation is consistent and has unique solution, and is given by making back substitu-
tion from (2.1), as
    R_3 : −(1/2) x_3 = −3/2
    R_2 : 2x_2 − 7x_3 = −17        ⇒   x_1 = 1,  x_2 = 2,  x_3 = 3.
    R_1 : x_1 + x_2 + 2x_3 = 9
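The consistency test and the solution of Illustration 2.1 can be reproduced in a few lines (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1, 1, 2], [2, 4, -3], [3, 6, -5]], dtype=float)
B = np.array([9, 1, 0], dtype=float)

aug = np.hstack([A, B.reshape(-1, 1)])                  # augmented matrix [A : B]
rA, rAB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
if rA == rAB == A.shape[1]:                             # rho(A) = rho(A:B) = n  ->  unique solution
    print(np.linalg.solve(A, B))                        # [1. 2. 3.]
```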

Illustration 2.2 Test the consistency of the following equations and solve them if they are consistent, by Gauss
elimination method:

    2x + 2y + 2z = 0,   −2x + 5y + 2z = 1,   8x + y + 4z = −1.

Solution: The augmented matrix for the given (m × n = 3 × 3) system is

    [A : B] = \begin{bmatrix} 2 & 2 & 2 & 0 \\ -2 & 5 & 2 & 1 \\ 8 & 1 & 4 & -1 \end{bmatrix}   R_2 → R_2 + R_1 ;  R_3 → R_3 − 4R_1

            ∼ \begin{bmatrix} 2 & 2 & 2 & 0 \\ 0 & 7 & 4 & 1 \\ 0 & -7 & -4 & -1 \end{bmatrix}   R_3 → R_3 + R_2

            ∼ \begin{bmatrix} 2 & 2 & 2 & 0 \\ 0 & 7 & 4 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}                                             (2.2)

Observe that, ρ(A) = ρ(A : B ) = 2 < (n = 3).


∴ Given system is consistent and has infinite solutions, which can be expressed in terms of one parameter
(n − r = 3 − 2 = 1). Such solutions are known as 1-parametric solution and can be obtained by assuming
arbitrary parameter say t ∈ R, to unknown corresponding to non-pivot column. Such unknown is called free
variable. In (2.2), z is a free variable. Hence, making back substitution from (2.2), we get

    z = t,   t ∈ R
    R_2 : 7y + 4z = 1    ∴ y = (1 − 4t)/7,
    R_1 : 2x + 2y + 2z = 0    ∴ x = −(1 + 3t)/7


Thus required solution is


    x = −(1 + 3t)/7,   y = (1 − 4t)/7,   z = t,   t ∈ R

Illustration 2.3 Solve the following system of equations by Gauss-Jordan elimination:

x 1 + 3x 2 − 2x 3 + 2x 5 = 0
2x_1 + 6x_2 − 5x_3 − 2x_4 + 4x_5 − 3x_6 = −1
5x 3 + 10x 4 + 15x 6 = 5
2x 1 + 6x 2 + 8x 4 + 4x 5 + 18x 6 = 6

Solution: The given system has four equations in six unknowns x_1, x_2, x_3, x_4, x_5, x_6; that is, it is a (4 × 6) system.

In the Gauss-Jordan elimination method, we reduce the augmented matrix to reduced row echelon form
using only row operations, as

    [A : B] = \begin{bmatrix} 1 & 3 & -2 & 0 & 2 & 0 & 0 \\ 2 & 6 & -5 & -2 & 4 & -3 & -1 \\ 0 & 0 & 5 & 10 & 0 & 15 & 5 \\ 2 & 6 & 0 & 8 & 4 & 18 & 6 \end{bmatrix}   R_2 → R_2 − 2R_1 ;  R_4 → R_4 − 2R_1

    ∼ \begin{bmatrix} 1 & 3 & -2 & 0 & 2 & 0 & 0 \\ 0 & 0 & -1 & -2 & 0 & -3 & -1 \\ 0 & 0 & 5 & 10 & 0 & 15 & 5 \\ 0 & 0 & 4 & 8 & 0 & 18 & 6 \end{bmatrix}   R_3 → R_3 + 5R_2 ;  R_4 → R_4 + 4R_2

    ∼ \begin{bmatrix} 1 & 3 & -2 & 0 & 2 & 0 & 0 \\ 0 & 0 & -1 & -2 & 0 & -3 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 6 & 2 \end{bmatrix}   R_2 → (−1)R_2 ;  R_3 ↔ R_4

    ∼ \begin{bmatrix} 1 & 3 & -2 & 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & 6 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}   R_3 → (1/6) R_3

    ∼ \begin{bmatrix} 1 & 3 & -2 & 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1/3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}   R_1 → R_1 + 2R_2 ;  R_2 → R_2 − 3R_3 ;  R_1 → R_1 − 6R_3

    ∼ \begin{bmatrix} 1 & 3 & 0 & 4 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1/3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}   (Reduced row echelon form)                    (2.3)

∴ ρ (A) = ρ (A, B ) = 3 < 6 (= Number of unknowns)


Therefore, the system has (6 − 3 = 3) 3-parametric infinite solutions, and three parameters, say r, s, t, are assigned
to the free variables x_2, x_4, x_5 respectively, that is x_2 = r, x_4 = s, x_5 = t. Thus from (2.3), the required solution is

    x_1 = −3r − 4s − 2t,  x_2 = r,  x_3 = −2s,  x_4 = s,  x_5 = t,  x_6 = 1/3,   r, s, t ∈ R

Illustration 2.4 For what values of λ and µ do the system of equations

x + y + z = 6, x + 2y + 3z = 10, x + 2y + λz = µ,

has (i) a unique solution, (ii) an infinite numbers of solutions and (iii) no solution.


Solution: Consider the augmented matrix [A : B ] for given equations and apply reduction method, as

    [A : B] = \begin{bmatrix} 1 & 1 & 1 & 6 \\ 1 & 2 & 3 & 10 \\ 1 & 2 & λ & µ \end{bmatrix}   R_2 → R_2 − R_1 ;  R_3 → R_3 − R_1

            ∼ \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 4 \\ 0 & 1 & λ-1 & µ-6 \end{bmatrix}   R_3 → R_3 − R_2

            ∼ \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & λ-3 & µ-10 \end{bmatrix}

Observe that,

i. If λ ≠ 3 then ρ(A) = ρ(A : B) = 3 ∀ µ ∈ R. Hence the system has a unique solution.

ii. If λ = 3 and µ = 10, then ρ(A) = ρ(A : B) = 2 < 3. Hence the system has 1-parametric infinite solutions.

iii. If λ = 3 and µ ≠ 10, then ρ(A) = 2, ρ(A : B) = 3, that is ρ(A) ≠ ρ(A : B). Hence the system is inconsistent
and has no solution.

Exercise 2.1
1. Examine the following system of equations for consistency and if consistent then solve it, using Gauss
elimination method:

   a. 2x_1 + x_2 − x_3 + 3x_4 = 8,        b. x + y + z = 6,            c. 4x − 2y + 6z = 8,
      x_1 + x_2 + x_3 − x_4 = −2,            x − y + 2z = 3,              x + y − 3z = −1,
      3x_1 + 2x_2 − x_3 = 6,                 3x + y + z = 8,              15x − 3y + 9z = 21.
      4x_2 + 3x_3 + 2x_4 = −8.               2x − 2y + 3z = 7.

2. Solve the following equations using the Gauss-Jordan elimination method, if they are consistent:

   2x_1 + x_2 − x_3 + 3x_4 = 11,   x_1 − 2x_2 + x_3 + x_4 = 8,   4x_1 + 7x_2 + 2x_3 − x_4 = 0,   3x_1 + 5x_2 + 4x_3 + 4x_4 = 17.

3. Solve the following system of equations for x, y, z:

   −1/x + 3/y + 4/z = 30,   3/x + 2/y − 1/z = 9,   2/x − 1/y + 2/z = 10.

   [Hint: Put 1/x = u, 1/y = v, 1/z = w.]
4. Find for what value of λ the set of equations,

2x − 3y + 6z − 5w = 3, y − 6z + w = 1, 4x − 5y + 6z − 9w = λ,

has (i) no solution, (ii) infinite numbers of the solutions.

5. Show that if µ 6= 0 then the system of equations,

2x + y = a, x + µy − z = b, y + 2z = c

has unique solution for all a, b, c. Also if µ = 0 then determine the relation satisfied by a, b, c such that
system is inconsistent. Find the general solution by taking µ = 0, a = 1, b = 1, c = −1.

6. Investigate for what values of a and b the system of simultaneous equations:

2x − y + 3z = 6, x + y + 2z = 2, 5x − y + az = b,

has (i) no solution, (ii) a unique solution and (iii) an infinite solutions.


7. Solve the following system by matrix inversion method:

x + y + z = 0, 2x + 3y − z = −5, x − y + z = 4.

[Hint: AX = B ⇒ X = A −1 B ]

8. Use matrix inversion method to determine the value of λ for which following system is consistent:

x + 2y + z = 3, x + y + z = λ, 3x + y + 3z = λ2 ,

Answers

1. a. x 1 = 2, x 2 = −1, x 3 = −2, x 4 = 1 b. Inconsistent c. x = 1, y = 3t − 2, z = t , t ∈ R


1 1 1
2. x 1 = 2, x 2 = −1, x 3 = 1, x 4 = 3 3. x = , y = , z = 4. (i) λ 6= 7 (ii) λ = 7
2 4 5
5. a = 2b + c;  x = 1 + t, y = −1 − 2t, z = t, t ∈ R    6. (i) a = 8, b ≠ 14  (ii) a ≠ 8, b ∈ R  (iii) a = 8, b = 14

7. x = 1, y = −2, z = 1 8. λ = 2, 3

E E E

2.6 Conditions for the Consistency of the System of Homogeneous Equations

Consider the homogeneous system of m equations in n unknowns as AX = Z, where Z is a null matrix of
order (m × 1).
For the augmented matrix [A : Z], we note that ρ(A) = ρ(A : Z). Hence a homogeneous system is always
consistent and has either a unique solution or infinitely many solutions. This can be stated as follows:

1. If ρ(A) = ρ(A : Z) = n (number of unknowns), then the system possesses a unique solution, given by
x_1 = x_2 = x_3 = .... = x_n = 0, which is also known as the trivial solution.

2. If ρ(A) = ρ(A : Z) = r < n, then the system possesses infinitely many solutions which can be repre-
sented parametrically in terms of (n − r) parameters. These solutions are also known as non-trivial
solutions.

3. In particular let m = n = 3, that is three equations in three unknown, then

â If |A| ≠ 0, the system has a unique (trivial) solution, given by x_1 = x_2 = x_3 = 0.


â If |A| = 0, the system has infinite numbers of parametric solutions (non-trivial) which can be
followed by reduction method.

Illustration 2.5 Solve the equations:

x_1 + 3x_2 + 2x_3 = 0,   2x_1 − x_2 + 3x_3 = 0,   3x_1 − 5x_2 + 4x_3 = 0,   x_1 + 17x_2 + 4x_3 = 0.

Solution: The augmented matrix is


 
    [A : Z] = \begin{bmatrix} 1 & 3 & 2 & 0 \\ 2 & -1 & 3 & 0 \\ 3 & -5 & 4 & 0 \\ 1 & 17 & 4 & 0 \end{bmatrix}   R_2 → R_2 − 2R_1 ;  R_3 → R_3 − 3R_1 ;  R_4 → R_4 − R_1

            ∼ \begin{bmatrix} 1 & 3 & 2 & 0 \\ 0 & -7 & -1 & 0 \\ 0 & -14 & -2 & 0 \\ 0 & 14 & 2 & 0 \end{bmatrix}   R_3 → R_3 − 2R_2 ;  R_4 → R_4 + 2R_2

            ∼ \begin{bmatrix} 1 & 3 & 2 & 0 \\ 0 & -7 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

∴ ρ(A) = ρ(A : Z) = 2 < 3 (number of unknowns)

Hence the system has infinitely many non-trivial (1-parametric) solutions, given by

    x_1 = −(11t)/7,   x_2 = −t/7,   x_3 = t,   t ∈ R
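A homogeneous system AX = Z always admits the trivial solution; non-trivial solutions exist exactly when ρ(A) < n. A quick numerical check of Illustration 2.5 (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1, 3, 2], [2, -1, 3], [3, -5, 4], [1, 17, 4]], dtype=float)
n = A.shape[1]
print(np.linalg.matrix_rank(A) < n)        # True -> non-trivial solutions exist

x = np.array([-11.0, -1.0, 7.0])           # the 1-parametric solution above with t = 7
print(np.allclose(A @ x, 0))               # True: AX = 0
```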

Illustration 2.6 Find the values of k for which the system of equations (3k − 8)x + 3y + 3z = 0, 3x + (3k − 8)y +
3z = 0, 3x + 3y + (3k − 8)z = 0 has a non-trivial solution.

Solution: For the given system of equations to have a non-trivial solution, the determinant of the coefficient
matrix should be zero. That is,

    \begin{vmatrix} 3k-8 & 3 & 3 \\ 3 & 3k-8 & 3 \\ 3 & 3 & 3k-8 \end{vmatrix} = 0

    ⇒ \begin{vmatrix} 3k-2 & 3k-2 & 3k-2 \\ 3 & 3k-8 & 3 \\ 3 & 3 & 3k-8 \end{vmatrix} = 0         [Operating R_1 → R_1 + R_2 + R_3]

    ⇒ (3k − 2) \begin{vmatrix} 1 & 1 & 1 \\ 3 & 3k-8 & 3 \\ 3 & 3 & 3k-8 \end{vmatrix} = 0

    ⇒ (3k − 2) \begin{vmatrix} 1 & 0 & 0 \\ 3 & 3k-11 & 0 \\ 3 & 0 & 3k-11 \end{vmatrix} = 0         [Operating C_2 → C_2 − C_1 ;  C_3 → C_3 − C_1]

    ⇒ (3k − 2)(3k − 11)² = 0

    ∴ k = 2/3, 11/3, 11/3

Exercise 2.2

1. Solve the following system of equations:

a. 2x 1 + x 2 +3x 3 +6x 4 = 0, 3x 1 − x 2 + x 3 +3x 4 = 0, −x 1 −2x 2 +3x 3 = 0, −x 1 −4x 2 −3x 3 −3x 4 = 0.


b. x + y + 2z = 0, x + 2y + 3z = 0, x + 3y + 4z = 0, x + 4y + 7z = 0.
c. x + 2y + 3z = 0, 2x + 3y + z = 0, 4x + 5y + 4z = 0.

2. Examine the following system for the non-trivial solution:

5x + 2y − 3z = 0, 3x + y + z = 0, 2x + y + 6z = 0.

3. For the different values of k discuss the nature of the solutions of the following system:

x + 2y − z = 0, 3x + (k + 7) y − 3z = 0, 2x + 4y + (k − 3) z = 0.

4. Show that the system of equations, ax +b y +c z = 0, bx +c y +az = 0, c x +a y +bz = 0, has a non-trivial


solution if a + b + c = 0 or a = b = c.


5. Show that the system of equations, x + 2y + 3z = λx, 3x + y + 2z = λy, 2x + 3y + z = λz, can possess a
non-trivial solution only if λ = 6. Obtain the non-trivial solution for real value of λ.

6. Find the value of λ for which the equations,

(λ − 1) x + (3λ + 1) y + 2λz = 0, (λ − 1) x + (4λ − 2) y + (λ + 3) z = 0, 2x + (2λ + 1) y + 3 (λ − 1) z = 0,

are consistent, and find the ratio x : y : z when λ has smallest of these values. What happens when λ
has the greatest of these values.

Answers

1. a. x 1 = x 2 = x 3 = x 4 = 0 b. x = y = t , z = −t , t ∈ R c. det(A) 6= 0, x = y = z = 0

2. system has non-trivial solution. 3. For k = 1, x = y = z = 0 ; For k 6= 1, x = −2t , y = t , z = 0, t ∈ R

5. x = y = z = t , t ∈ R 6. λ = 0, 3

E E E



Chapter 3
Notions of Vectors in Rn

3.1 Euclidean Space 1

â Let R denotes the set of real numbers then, an n times Cartesian product of R with itself is denoted by
Rn . That is
Rn = R × R × R × ... × R (n times)

â In particular, R2 = R × R, R3 = R × R × R

â Elements of Rn are (x_1, x_2, ..., x_n), x_i ∈ R, and are known as ordered n-tuples. Thus

      Rn = {(x_1, x_2, x_3, ..., x_n) : x_i ∈ R, 1 ≤ i ≤ n}

â Elements of R² are called ordered pairs and elements of R³ are called ordered triplets.

      R² = {(x, y) : x, y ∈ R},   R³ = {(x, y, z) : x, y, z ∈ R}

â The elements of Rn are also referred to as vectors or points and can be represented by means of a column
matrix as

      x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = [ x_1  x_2  x_3  ...  x_n ]^T = X

â Here Rn is known as Real Euclidean n dimensional space. For example, R2 and R3 are two and three
dimensional spaces respectively.

â The elements x 1 , x 2 , x 3 , ....x n are called components of the vector.

â The standard arithmetic addition, subtraction, scalar multiplication, zero (null) vector, etc. in Rn are
the same as defined for matrices.

â The multiplication between two vectors in Rn is defined as follows:


1
Euclid or Father of Geometry; Greek, Mid-4th century BCE-Mid-3rd century BCE.


   
Let x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix},  y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} ∈ Rn; then

      y^T x = [ y_1  y_2  y_3  ⋯  y_n ] \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = x_1 y_1 + x_2 y_2 + x_3 y_3 + ..... + x_n y_n

â This product is known as the Euclidean inner product or dot product and is denoted by x · y. Hence,

      x · y = \sum_{i=1}^{n} x_i y_i,   1 ≤ i ≤ n.

e. g. Let x = (1, 4, −2), y = (2, −1, 3) ∈ R³  ⇒  x · y = (1)(2) + (4)(−1) + (−2)(3) = 2 − 4 − 6 = −8

3.2 Linear Combination

Let x 1 , x 2 , x 3 , ...x n ∈ Rn and c 1 , c 2 , c 3 , ...c n ∈ R then the vector,

      c_1 x_1 + c_2 x_2 + c_3 x_3 + ..... + c_n x_n = \sum_{i=1}^{n} c_i x_i

is called the linear combination of the vectors.


e. g. Let x 1 = (1, 2, −1) , x 2 = (2, 1, 1) , x 3 = (1, 0, 3) ∈ R3 then linear combination of x 1 , x 2 , x 3 is
      c_1 x_1 + c_2 x_2 + c_3 x_3 = c_1 (1, 2, −1) + c_2 (2, 1, 1) + c_3 (1, 0, 3)
                                  = (c_1 + 2c_2 + c_3 , 2c_1 + c_2 , −c_1 + c_2 + 3c_3),   c_1, c_2, c_3 ∈ R

Powered
3.3 Linearly Independent by (LI)
Vectors Prof. (Dr.) Rajesh M. Darji
The vectors x 1 , x 2 , x 3 , ...x n ∈ Rn are said to be linearly independent vectors if, whenever

c 1 x 1 + c 2 x 2 + c 3 x 3 + ..... + c n x n = 0 ⇒ c 1 = c 2 = c 3 = ..... = c n = 0.

3.4 Linearly Dependent Vectors (LD)

The vectors x 1 , x 2 , x 3 , ...x n ∈ Rn are said to be linearly dependent vectors if, whenever

c 1 x 1 + c 2 x 2 + c 3 x 3 + ..... + c n x n = 0 ⇒ Not all c 1 , c 2 , c 3 , .....c n are 0.

That is at least one constant is non-zero.


â In this case at least one vector can always be expressed as a linear combination of rest of the vectors.

3.5 Euclidean Norm

Let x = (x_1, x_2, x_3, ..., x_n) ∈ Rn; then the norm or magnitude of the vector x is denoted by ‖x‖ and is defined as

      ‖x‖ = √(x · x) = √(x_1² + x_2² + x_3² + ..... + x_n²)

â The norm of a vector is also called the length of the vector.

e. g.  x = (1, 2, −2, 3) ∈ R⁴  ⇒  ‖x‖ = √((1)² + (2)² + (−2)² + (3)²) = √(1 + 4 + 4 + 9) = √18 = 3√2


3.6 Normalized Vector

A vector of unit norm is called a unit vector or normalized vector and is denoted by x̂.

â If ‖x‖ ≠ 1 then x can be converted to a normalized vector by dividing it by ‖x‖. Thus, x̂ = x / ‖x‖ is always a
normalized vector.

e. g.  \begin{bmatrix} cos θ \\ sin θ \end{bmatrix},  \begin{bmatrix} 1 \\ 0 \end{bmatrix},  \begin{bmatrix} 0 \\ -1 \end{bmatrix}  are normalized (unit) vectors in R².

3.7 Euclidean Distance and Angle

Let x, y ∈ Rn.

â Distance: The distance between two vectors is defined as d(x, y) = ‖x − y‖.

â Angle: The angle θ between two vectors is defined as cos θ = (x · y) / (‖x‖ ‖y‖).

Also, x, y ∈ Rn are called orthogonal (perpendicular) vectors if cos θ = 0, i.e. x · y = 0.
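The dot product, norm, distance and angle defined above correspond to one-liners in numerical code. A sketch (assuming NumPy), using the vectors of the example in Section 3.1:

```python
import numpy as np

x = np.array([1.0, 4.0, -2.0])
y = np.array([2.0, -1.0, 3.0])

dot = np.dot(x, y)                           # x . y = -8
norm_x, norm_y = np.linalg.norm(x), np.linalg.norm(y)
x_hat = x / norm_x                           # normalized (unit) vector
distance = np.linalg.norm(x - y)             # d(x, y) = ||x - y||
theta = np.arccos(dot / (norm_x * norm_y))   # angle in radians
print(dot, distance, np.degrees(theta))
```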

3.8 Cauchy-Schwarz’s inequality 2

For x, y ∈ Rn,

      |x · y| ≤ ‖x‖ ‖y‖

Proof: The angle between two vectors x, y ∈ Rn is defined as

      cos θ = (x · y) / (‖x‖ ‖y‖)   ⇒   |cos θ| = |(x · y) / (‖x‖ ‖y‖)| = |x · y| / (‖x‖ ‖y‖)

But |cos θ| ≤ 1,

      ∴ |x · y| / (‖x‖ ‖y‖) ≤ 1.

Hence,

      |x · y| ≤ ‖x‖ ‖y‖

3.9 Minkowski’s Triangular Inequality 3

For x, y ∈ Rn,

      ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Proof: By the definition of the norm, for x, y ∈ Rn we have

      ‖x + y‖² = (x + y) · (x + y)
               = x · x + x · y + y · x + y · y
               = x · x + 2 x · y + y · y               [∵ x · y = y · x]
               ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖²              [∵ |x · y| ≤ ‖x‖ ‖y‖]

      ∴ ‖x + y‖² ≤ (‖x‖ + ‖y‖)²
         ‖x + y‖ ≤ ‖x‖ + ‖y‖

2
Augustin-Louis Cauchy; French, 1789 and Karl Hermann Amandus Schwarz; German, 1843-1921.
3
Hermann Minkowski; German, 1864-1909.


Illustration 3.1 Find the constant k such that the vectors (1, k, −3) and (2, −5, 4) are orthogonal.

Solution: Let x = (1, k, −3) and y = (2, −5, 4). For orthogonal vectors,

x·y =0 ⇒ (1) (2) + (k) (−5) + (−3) (4) = 0


∴ 2 − 5k − 12 = 0 ⇒ k = −2

Illustration 3.2 Verify the Cauchy-Schwarz and triangle inequalities for the vectors x = (1, −3, 2) and y =
(1, 1, −1). Also find the distance and angle between them.

Solution:

      x = (1, −3, 2),  y = (1, 1, −1)  ⇒  x · y = 1 − 3 − 2 = −4,  ‖x‖ = √(1 + 9 + 4) = √14,  ‖y‖ = √(1 + 1 + 1) = √3

Therefore,

      |x · y| = |−4| = 4  and  ‖x‖ ‖y‖ = √14 √3 = √42   ⇒   |x · y| ≤ ‖x‖ ‖y‖                                   (3.1)

      ‖x + y‖ = ‖(2, −2, 1)‖ = √(4 + 4 + 1) = 3  and  ‖x‖ + ‖y‖ = √14 + √3   ⇒   ‖x + y‖ ≤ ‖x‖ + ‖y‖          (3.2)

Hence from (3.1) and (3.2), the Cauchy-Schwarz and triangle inequalities are verified.
The distance and angle between them are given by

      d(x, y) = ‖x − y‖ = ‖(0, −4, 3)‖ = √(0 + 16 + 9)
      ∴ d(x, y) = 5

and

      cos θ = (x · y) / (‖x‖ ‖y‖) = (−4) / (√14 √3) = −4/√42
      ∴ θ = cos⁻¹(−4/√42)
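The two inequalities verified above can also be checked numerically for the same vectors (a sketch assuming NumPy):

```python
import numpy as np

x = np.array([1.0, -3.0, 2.0])
y = np.array([1.0, 1.0, -1.0])

# Cauchy-Schwarz:  |x.y| <= ||x|| ||y||
print(abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y))      # True (4 <= sqrt(42))
# Triangle inequality:  ||x + y|| <= ||x|| + ||y||
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))  # True (3 <= sqrt(14) + sqrt(3))
```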

Illustration 3.3 Show that x 1 , x 2 , x 3 are linearly independent and x 4 depends on them, where x 1 = (1, 2, 4) , x 2 =
(2, −1, 3), x_3 = (0, 1, 2), x_4 = (−3, 4, 7).

Solution: For x 1 , x 2 , x 3 consider the linear combination,

c1 x 1 + c2 x 2 + c3 x 3 = 0
∴ c 1 (1, 2, 4) + c 2 (2, −1, 3) + c 3 (0, 1, 2) = (0, 0, 0)
⇒ c 1 + 2c 2 = 0, 2c 1 − c 2 + c 3 = 0, 4c 1 + 3c 2 + 2c 3 = 0. (3.3)

Now x 1 , x 2 , x 3 are linearly independent if c 1 = c 2 = c 3 = 0, that is the homogeneous system (3.3) should have
trivial solution. The determinant of coefficient matrix is
      \begin{vmatrix} 1 & 2 & 0 \\ 2 & -1 & 1 \\ 4 & 3 & 2 \end{vmatrix} = 1(−2 − 3) − 2(4 − 4) + 0 = −5 ≠ 0

∴ system (3.3) has trivial solution, hence x 1 , x 2 , x 3 are linearly independent.


Now including fourth vector x 4 = (−3, 4, 7) in above linear combination, we get

c1 x 1 + c2 x 2 + c3 x 3 + c4 x 4 = 0
∴ c_1 (1, 2, 4) + c_2 (2, −1, 3) + c_3 (0, 1, 2) + c_4 (−3, 4, 7) = (0, 0, 0)

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 24

⇒ c 1 + 2c 2 − 3c 4 = 0, 2c 1 − c 2 + c 3 + 7c 4 = 0, 4c 1 + 3c 2 + 2c 3 + 4c 4 = 0. (3.4)

Now we solve homogeneous system (3.4) for unknown c 1 , c 2 , c 3 and c 4 . The augmented matrix is

1 2 0 −3 0
 

[A : Z ] =  2 −1 1 4 0  → R 2 − 2R 1 ; R 3 − 4R 1
4 3 2 7 0
1 2 0 −3 0
 

∼  0 −5 1 10 0  → R3 − R2
0 −5 2 19 0
 
1 2 0 −3 0
∼ 0 −5 1 10 0  (3.5)
 

0 0 1 9 0
∴ ρ (A) =ρ (A : Z ) = 3 < 4

System has one-parametric non-trivial (non-zero) solution, that is we are not getting all c 1 , c 2 , c 3 and c 4 are
zero. So x 4 depends on x 1 , x 2 , x 3 .
â In this case to find relation among them, solving (3.5) for c 1 , c 2 , c 3 , c 4 , we get
13t t
c1 = , c2 = , c 3 = −9t , c4 = t , t ∈R
5 5
Substitute in linear combination of (3.4),
13t t
t ∈R

Target AA
x 1 , + x 2 − 9t x 3 + t x 4 = 0, ⇒ 13x 1 , +x 2 − 45x 3 + 5x 4 = 0
5 5

Exercise 3.1
1. Examine for linear dependence or independence the following system of vectors. If dependence, find
CALL
relation among them: (1-4) O | RE
| RED
a. −
(1, RE
→ = −1,
x
ADx→ = (2, 1, 1) , −x→ = (3, 0, 2)
1) , −
1 2 3
b. −
x 1 = (2, 2, 7, −1) , x 2 = (3, −1, 2, 4) , −
→ −
→ → = 1, 3, 1)
x 3 (1,
c. −
→ Powered

→ by Prof. (Dr.) Rajesh M. Darji


x 1 = (3, 1, −4) , x 2 = (2, 2, −3) , x 3 = (0, −4, 1)
d. → = £ 1 2 4 ¤T , −

x → = £ 3 7 10 ¤T
x
1 2

2. Which pair of the following vectors are orthogonal:

x = (5, 4, 1) , y = (3, −4, 1) , z = (1, −2, 3)

[Hint: For orthogonal vector, dot product is zero.]

3. Find the constant k such that the vectors (2, 3k, −4, 1, 5) and (6, −1, 3, 7, 2k) are orthogonal.

4. Discuss and find the relation of linear dependence among the row vectors of the matrix,

1 1 −1 1
 
 1 −1 2 −1 
3 1 0 1

5. If a and b are unit vectors such that a +2b and 5a −4b are perpendicular to each other, then find angle
between a and b.

6. Pythagoras4 theorem: For the orthogonal vectors x, y ∈ Rn , prove that


° x + y °2 = ° x °2 + ° y °2 .
° ° ° ° ° °

4
Pythagoras, Greek; 570-495 BC

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 25

7. Parallelogram Law: For any vectors x, y ∈ Rn , prove that

° x + y °2 + ° x − y °2 = 2° x °2 + 2° y °2 .
° ° ° ° ° ° ° °

8. For any vectors x, y ∈ Rn , prove that


¯° ° ° °¯ ° °
¯ ° x ° − ° y ° ¯ É ° x − y°.

Answers

1. a. L.D., −
→+−
x → − →
1 x2 = x3 b. L.I. c. L.D., 2−
→ = 3−
x 1
→+−
x →
2 x3 d. L.I. 2. x, y and y, z 3. −1

4. L.D., 2R 1 + R 2 = R 3 5. 60◦

E E E

Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)

Target AA
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779

| RE CALL
READ | R E DO

Powered by Prof. (Dr.) Rajesh M. Darji

LAVC (GTU-2110015) B.E. Semester II


Chapter 4
Vector Space

4.1 Field

A non empty set F is said to be a field if it satisfies the following properties:

i. ∀ x, y ∈ F, x + y ∈ F and x · y ∈ F. [Closed under addition and multiplication]


1
ii. ∀x ∈ F ∃ − x ∈ F and ∀x 6= 0 ∈ F ∃ ∈F. [Existence of additive and multiplicative inverse]
x
iii. 0, 1 ∈ F. [Existence of additive and multiplicative identities]

Target AA
e. g.

â The set of rational numbers, Q and the set of real numbers, R are real fields.

â The set of complex numbers, C is a complex field.


O | R ECALL 1
â | R
The set of natural numbers, NEisDnot a field because 0 ∉ N and ∀x ∈ N, −x, ∉ N.
READ x
1
â The set of integers, Z is also not a field because ∀x ∈ Z, ∉ Z.
Powered by Prof. (Dr.) Rajesh M. Darji
x

4.2 Vector Space

A non empty set V is said to be a vector space or a linear space over the field F if there exist two maps
+ : V ×V → V as + u, v = u + v, called vector addition (VA), and · : F ×V → V as · α, v = α·· u, called scalar
¡ ¢ ¡ ¢

multiplication (SM), satisfying the following properties:

∀ u, v, w ∈ V and α, β ∈ F

1. u + v ∈ V. [Closed under VA]


¡ ¢ ¡ ¢
2. u + v + w = u + v + w. [Associative law for VA]

3. u + v = v + u. [Commutative property for VA]

4. There exist an element 0 ∈ V, such that, ∀ u ∈ V


u + 0 = 0 + u = u. [Additive identity]

5. ∀u ∈ V there exist an element −u ∈ V such that,


¡ ¢ ¡ ¢
u + −u = −u + u = 0. [Additive inverse]

6. α · u ∈ V. [Closed under SM]

7. α · u + v = α · u + α · v.
¡ ¢

26
Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 27

8. α + β · u = α · u + β · u.
¡ ¢

9. αβ · u = α β · u .
¡ ¢ ¡ ¢

10. 1 · u = u.

Remark:

1. The elements of V are called vectors even though they are any objects like matrices, polynomials,
functions, n − t upl es etc. The vector space is also known as Abstract vector space.

2. Sometimes vector addition and scalar multiplication are also denoted by ⊕ and ∗ respectively.

3. Instead of field F if we take R, the set of real numbers then V is called the real vector space or real
linear space or real vector linear space overR. Generally we consider always F = R unless given.

4. The scalar multiplication can simply denoted by αu instead of α · u


¡ ¢ ¡ ¢

4.3 Some Standard Vector Spaces

1. The n dimensional space Rn is a vector space over R under usual addition and scalar multiplication
in Rn .
Let x = (x 1 , x 2 .....x 3 ) , y = y 1 , y 2 .....y 3 ∈ Rn and α ∈ R then the usual vector addition and scalar multi-
¡ ¢

plication in Rn are defined as

Target AA
¡ ¢ ¡ ¢
x + y = (x 1 , x 2 , .....x n ) + y 1 , y 2 , .....y n = x 1 + y 1 , x 2 + y 2 , .....x n + y n

and
α · x = α (x 1 , x 2 , .....x n ) = (αx 1 , αx 2 , .....αx n )

L is a vector space under the matrix addition


ECALentries,
2. The set of all (2 × 2) matrices that is M22 with real
D O |
R
R
D |¸ RE
and matrix scalar multiplication over .
R½·EA a b
¾
Here, M22 = : a, b, c, d ∈ R
c d

Let u =
·
u 1 u 2 Powered
¸
,v =
·
v 1 byv 2
¸
Prof. (Dr.) Rajesh M. Darji
∈ M22 and α ∈ R then the matrix addition and scalar multiplica-
u3 u4 v3 v4
tion in M22 are defined as
· ¸
u1 + v 1 u2 + v 2
u+v =
u3 + v 3 u4 + v 4

and
αu 1 αu 2
· ¸
α·u =
αu 3 αu 4

3. The set of all polynomial with real coefficients, of degree É n that is P n (R) is a vector space over R.
Here, P n (R) = p : p = p (x) / deg p (x) É n
© ª

Let p, q ∈ P n and α ∈ R then

p = p (x) = a 0 + a 1 x + a 2 x 2 + ..... + a n x n , q = q (x) = b 0 + b 1 x + b 2 x 2 + ..... + b n x n where a i ,b i ∈ R.

The vector addition and scalar multiplication in P n (R) are defined as

p + q = (a 0 + b 0 ) + (a 1 + b 1 ) x + (a 2 + b 2 ) x 2 + ..... + (a n + b n ) x n

and
α · p = (αa 0 ) + (αa 1 ) x + (αa 2 ) x 2 + ..... + (αa n ) x n

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 28

4. The class of all functions f : R → R that is A is vector spaces under the functions addition and function
scalar multiplication over R.
Let f , g ∈ A and α ∈ R the vector addition and scalar multiplication in A are defined as
¡ ¢
f + g x = f (x) + g (x)

and
α · f x = α f (x) ∀x ∈ R
¡ ¢

¡ ¢ ¡ ¢
Illustration 4.1¢ Show ¡that the set of all pairs, of real numbers of the form 1, y with the operation 1, y +
0 0
1, y = 1, y + y and k 1, k y , where k ∈ R is a vector space.
¡ ¢ ¡ ¢

Solution: Let V = 1, y : y ∈ R . In order to prove V is a vector space, we have to show that all ten condi-
©¡ ¢ ª

tions for vector space listed in definition of vector space are satisfied.
Let u, v, w ∈ V and k, m ∈ R ∴ u = (1, x) , v = 1, y , w = (1, z) for some x, y, z ∈ R
¡ ¢

¡ ¢
1. u + v = (1, x) + 1, y
∵ x, y ∈ R ⇒ x + y ∈ R
¡ ¢ £ ¤
= 1, x + y ∈ V
∴ V is closed under vector addition.
¡ ¢ £¡ ¢ ¤
2. u + v + w = (1, x) + 1, y + (1, z)
£¡ ¢¤
= (1, x) + 1, y + z
¡ ¢
= (1, x) + 1, y + z

Target AA
¡ ¢
= 1, x + y + z
¡ ¢ £ ¡ ¢¤
u + v + w = (1, x) + 1, y + (1, z)
£¡ ¢¤
= 1, x + y + (1, z)
¡ ¢
= 1, x + y + (1, z)
L
RECAL
¡ ¢
= 1, x + y + z
u + v + w = u + v + w| REDO
¡ ¢ ¡ ¢ |

READ
Vector addition is associative in V .

Prof. (Dr.) Rajesh M. Darji


¡ ¢
3. u + v = (1, x) + 1, y Powered by
¡ ¢
= 1, x + y
∵ x + y = y + x for x, y ∈ R
¡ ¢ £ ¤
= 1, y + x
¡ ¢
= 1, y + (1, x)
u +v = v +u
∴ Vector addition is commutative in V .

4. For additive identity, we need to find an element say 0 = (1, θ) ∈ V for some θ ∈ R such that, ∀u ∈
V, u + 0 = 0 + u = u, That is

(1, x) + (1, θ) = (1, θ) + (1, x) = (1, x) ⇒ (1, x + θ) = (1, θ + x) = (1, x)

Observe that, above condition holds if θ = 0. Hence 0 = (1, 0) ∈ V is the additive identity.
∴ Additive identity exist for vector addition in V.

5. For additive inverse, we need to find an element say −u = (1, λ) ∈ V for some λ ∈ R such that ∀u ∈
¡ ¢ ¡ ¢
V, u + −u = −u + u = 0. That is

(1, x) + (1, λ) = (1, λ) + (1, x) = (1, 0) ⇒ (1, x + λ) = (1, λ + x) = (1, 0)

Observe that, above condition holds if λ = −x. Hence −u = (1, −x) ∈ V is the additive inverse of u =
(1, x).
∴ Additive inverse exist for vector addition in V.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 29

6. ku = k (1, x) = (1, kx) ∈ V [∵ k, x ∈ R ⇒ kx ∈ R]


∴ V is closed under scalar multiplication.
¡ ¢ £ ¡ ¢¤
7. k u + v = k (1, x) + 1, y
£¡ ¢¤
= k 1, x + y
¡ ¢
= k 1, x + y
¡ ¢ £ ¤
= 1, kx + k y ∵ By definition of SM
¡ ¢
ku + kv = k (1, x) + k 1, y
¡ ¢
= (1, kx) + 1, k y
¡ ¢ £ ¤
= 1, kx + k y ∵ By definition of VA
¡ ¢
∴ k u + v = ku + kv

8. (k + m) u = (k + m) (1, x)
= [1, (k + m) x]
= (1, kx + mx)
ku + mu = k (1, x) + m (1, x)
= (1, kx) + (1, mx)
= (1, kx + mx)
∴ (k + m) u = ku + mu

Target AA
9. (km) u = (km) (1, x)
= [1, (km) x]
= (1, kmx)
¡ ¢
k mu = k [m (1, x)]

CALL
= k [(1, mx)]
| RE
= k (1, mx)
RE AD | R E DO
= (1, kmx)
¡ ¢
∴ (km) u = k mu
Powered by Prof. (Dr.) Rajesh M. Darji
10. 1u = 1 (1, x)
= (1, 1x)
= (1, x) [∵ 1x = x]
∴ 1u = u

Thus, all ten conditions for vector space are hold true for given vector addition and scalar multiplication in
V . Therefor V is a vector space.

whether the set V = x, y : x, y ∈ R , under the addition x 1 , y 1 ⊕ x 2 , y 2 = x 1 + x 2 , y 1 + y 2


©¡ ¢ ª ¡ ¢ ¡ ¢ ¡ ¢
Illustration 4.2 Check
and multiplication α ∗ x, y = α x, α2 y , is a vector space over the field R or not?
¡ ¢ ¡ 2 ¢

Solution: Given that V = R2 and the defined addition is the usual vector addition of R2 , hence all five
conditions for vector space are satisfied evidently. So it is sufficent to check remaining five conditions for
scalar multiplication.
Let u = x, y ∈ V and α, β ∈ R.
¡ ¢

1. αu = α x, y
¡ ¢

= α2 x, α2 y ∈ V ∵ α, x, y ∈ R ⇒ α2 x, α2 y ∈ R
¡ ¢ £ ¤

∴ V is closed under scalar multiplication.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 30

α + β u = α + β x, y
¡ ¢ ¡ ¢¡ ¢
2.
h¡ ¢2 ¡ ¢2 i £
= α + β x, α + β y
¤
∵ By definition of SM
= α2 + 2αβ + β2 x, α2 + 2αβ + β2 y
£¡ ¢ ¡ ¢ ¤

= α2 x + 2αβx + β2 x, α2 y + 2αβy + β2 y
¡ ¢

αu + βu = α x, y + β x, y
¡ ¢ ¡ ¢

= α2 x, α2 y + β2 x, β2 y
¡ ¢ ¡ ¢

= α2 x + β2 x, α2 y + β2 y
¡ ¢

∴ α + β u 6= αu + βu
¡ ¢

That is scalar multiplication is not distributive over scalar addition. Hence V is not a vector space. (Reader
can verify that all other remaining conditions for scalar multiplications are hold)

4.4 Subspace

A non empty subset W of the vector space V over R, is said to be a subspace of V if, W itself vector space
over R, udder the same vector addition and scalar multiplication of V .

Theorem 4.1 A non empty subset W of a vector space V over R, is subspace of V if and only if,

i. ∀ u, v ∈ W then u + v ∈ W.

ii. ∀ u ∈ W, α ∈ R then αu ∈ W.

Target AA
That is, W should be closed under vector addition and scalar multiplication.

Note: Every vector spcae V has two precise subspace like, singleton set {0} and vector space V itself. These
subspces are called trivial subspace.

ECAL
Illustration 4.3 Check whether the following subsets WLof vector space V are subspaces or not?
DO | R
A D | RRE
3
x, y, z : x 2 + y 2 + z 2 É 1 ;V = R3 .
E R
©¡ ¢ ª
a. R
W = {(x, 3x, 2x) : x ∈ } ;V = . b. W =

c. W = The set of all points lying on the line passing through the origin and V = R2 .
Powered by Prof. (Dr.) Rajesh M. Darji
Solution: In order to check subspace, first of all we show that given set W is a non empty subset of V (that
is to show atleast one element exist in W ), and then we check two conditions of Theorem 4.1.

a. Here W = {(x, 3x, 2x) : x ∈ R} ;V = R3 .


Obviously W ⊂ V , and for 0 ∈ R, 0 = (0, 0, 0) = (0, 3 (0) , 2 (0)) ∈ W . Therefor, W is non empty.
Let u, v ∈ W, α ∈ R, therefor u = (x, 3x, 2x) , v = y, 3y, 2y for some x, y ∈ R.
¡ ¢

¡ ¢
i. u + v = (x, 3x, 2x) + y, 3y, 2y
¡ ¢
= x + y, 3x + 3y, 2x + 2y
∵ x, y ∈ R ⇒ x + y ∈ R
¡ ¡ ¢ ¡ ¢¢ £ ¤
= x + y, 3 x + y , 2 x + y ∈ W
ii. αu = α (x, 3x, 2x)
= (αx, 3αx, 2αx)
= (αx, 3 (αx) , 2 (αx)) ∈ W [∵ α, x ∈ R ⇒ αx ∈ R]

∴ W is closed under vector addition and scalar multiplication.


∴ W is a subspace of V .

x, y, z : x 2 + y 2 + z 2 É 1 ;V = R3 .
©¡ ¢ ª
b. Here W =
Obviously W ⊂ V , and for 0 = (0, 0, 0) ∈ R3 , 02 + 02 + 02 É 1. Therefor, 0 ∈ W , so W is non empty.
Let u = x 1 , y 1 , z 1 , v = x 2 , y 2 , z 2 ∈ W ⇒ x 12 + y 12 + z 12 É 1, x 22 + y 22 + z 22 É 1 By definition of W
¡ ¢ ¡ ¢ £ ¤

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 31

¡ ¢
i. u + v = x 1 + x 2 , y 1 + y 2 , z 1 + z 2 .
Now,
¢2
(x 1 + x 2 )2 + y 1 + y 2 + (z 1 + z 2 )2 = x 12 + 2x 1 x 2 + x 22 + y 12 + 2y 1 y 2 + y 22 + z 12 + 2z 1 z 2 + z 22
¡ ¡ ¢ ¡ ¢ ¡ ¢

= x 12 + y 12 + z 12 + x 22 + y 22 + z 22 + 2x 1 x 2 + 2y 1 y 2 + 2z 1 z 2
¡ ¢ ¡ ¢ ¡ ¢
¡ ¢
É 1 + 1 + 2x 1 x 2 + 2y 1 y 2 + 2z 1 z 2
¡ ¢
É 2 + 2x 1 x 2 + 2y 1 y 2 + 2z 1 z 2
Ð 1 (always)

∴ u + v does not satisfy condition of W . ∴ u+v ∉W


∴ W is not closed under vector additon. hence W is not subspace of V.
Note: Observe that, geometrically W represent the interior of the unit sphere. i.e. x 2 + y 2 + z 2 É 1 .
¡ ¢

Hence the interior of the unit sphere is not a subspace of the whole space.

c. Equation of line passing through origin is given by y = mx, m ∈ R.


∴ W = x, y : y = mx, m ∈ R , V = R2 .
©¡ ¢ ª

Obviously W ⊂ V , and 0 = (0, 0) ∈ W . Therefor, W is non empty.


Let u = x 1 , y 1 , v = x 2 , y 2 ∈ W and α ∈ R ⇒ y 1 = mx 1 , y 2 = mx 2 , m ∈∈ R.
¡ ¢ ¡ ¢

¡ ¢ ¡ ¢ ¡ ¢
i. u + v = x 1 , y 1 + x 2 , y 2 = x 1 + x 2 , y 1 + y 2
y 1 + y 2 = mx 1 + mx 2 = m (x 1 + x 2 ) ⇒ u+v ∈W

Target AA
ii. αu = α x 1 , y 1 = αx 1 , αy 1
¡ ¢ ¡ ¢

αy 1 = α (mx 1 ) m (αx 1 ) ⇒ αu ∈ W

∴ W is closed under vector addition and scalar multiplication.


∴ W is a subspace of V .
| RE CALL
Exercise 4.1
READ | R E DO
1. Check whether the following sets are vector space over the field R or not?
Powered by Prof. (Dr.) Rajesh M. Darji
a. V = {x > 0 : x ∈ R} where x + y = x y and αx = x α .
b. V = x, y : x, y ∈ N where x 1 , y 1 + x 2 , y 2 = x 1 + x 2 , y 1 + y 2 and α x, y = αx, αy .
©¡ ¢ ª ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢

c. V = R2 where x 1 , y 1 ⊕ x 2 , y 2 = x 1 + x 2 + 1, y 1 + y 2 + 1 and k ¯ x, y = kx, k y


¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢

d. V = p ∈ P 2 : p (0) = 1 with usual operations.


© ª

e. V = R3 with usual vector addition and scalar multiplication defined by k x, y, z = (0, 0, kz) .
¡ ¢

f. V = R2 where (u 1 , u 2 )⊕(v 1 , v 2 ) = (u 1 + v 1 − 2, u 2 + v 2 − 3) and α¯(u 1 , u 2 ) = (αu 1 + 2α − 2, αu 2 − 3α + 3)

2. Explain why the set of all 2-by-2 matrices with rational entries is not a real vector space?
[Hint: Not closed under scalar multiplication.]

3. Check whether the following subsets W of vector space V , under usual operations, are subspace or
not?

a. W = (x, 0, 0) : x, y ∈ R ; V = R3 . b. W = x, y, z : x 2 + y 2 + z 2 = 1 ;V = R3 .
© ª ©¡ ¢ ª
 
 a 0


c. W =  0 b  : a, b, c ∈ R ;V = M32 . d. W = x, x 2 , 0 : x ∈ R ; V = R3 .
©¡ ¢ ª
 
c 0
e. W = The set of all 2-by-2 symmetric matrices; V = M22 .
f. W = x, y, z ∈ R3 : x + y + z = 1 ; V = R3 .
©¡ ¢ ª
© ª
g. W = f ∈ V : f (0) = 0 ; and V = the set of all real valued functions.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 32

h. W = The interior of the unit circle and V = R2 .


i. W = The set of all 2-by-2 skew-symmetric matrices and V = M22 . .
j. W = The set of all the lines not passing through origin and V = R2 .
k. W =The plane does not passing through origin and V = R3 .
l. W = f ∈ V : f (0) = 1 ; V = A .
© ª

4. Prove that the circular cylinder generated by unit circle is not a subspace of the space, under usual VA
and SM.
[Hint: W = x, y, z ∈ R3 : x 2 + y 2 = 1, z ∈ R ]
©¡ ¢ ª

5. Prove that if W1 and W2 are subspaces of the vector space V then W! ∪ W2 is also subspace of V . But
W! ∩ W2 may not be sub space of V .

Answers

1. a. Yes, all other no. 3. a, c, e, g, i are subspace, all other are not subspace.

E E E

4.5 Linear Combination and Span

â Linear combination of the vectors v 1 , v 2 , v 3 .....v n of a vector space V , is defined as

Target AA R E D
c 1 v 1 + c 2 v 2 , +c 3 v 3 + ..... + c n v n ,

O | R ECª A©LL
©
c 1 , c 2 , c 3 .....c n ∈ R

ª
â Set of all linear combinations of vectors of W = v 1 , v 2 , v 3 .....v n is called span of W and is denoted by
spanW , that is

R
|
© ª
spanW = span v , v , v .....v = c v + c v + c v + ..... + v : c ∈
READ
1 2 3 n 1 1 2 2 3 3 n i

e. g. v 1 = (1, −1, 2)Powered


, v 2 = (3, 2, Prof. (Dr.) Rajesh M. Darji
by1) then
span v 1 , v 2 = {c 1 (1, −1, 2) + c 2 (3, 2, 1) : c 1 , c 2 ∈ R} = {(c 1 + 3c 2 , −c 1 + 2c 2 , 2c 1 + c 2 ) : c 1 , c 2 ∈ R}
© ª

â We can say that a w vector of a vector space V is a linear combination of the vectors v 1 , v 2 , v 3 .....v n of
a vector space V , if there exist scalars c 1 , c 2 , c 3 .....c n ∈ R such that,

w = c 1 v 1 + c 2 v 2 , +c 3 v 3 + ..... + c n v n

â If all the vectors


©
of V are expressed as a linear combustion of the vectors v 1 , v 2 , v 3 .....v n then we can
ª © ª
say that set v 1 , v 2 , v 3 .....v n span V and is denoted by span v 1 , v 2 , v 3 .....v n = V.
© ª
â Let v 1 , v 2 , v 3 .....v n be the vectors of a vector space V and if v n+1 ∈ span v 1 , v 2 , v 3 .....v n , then
© ª © ª
span v 1 , v 2 , v 3 .....v n = span v 1 , v 2 , v 3 .....v n , v n+1

© ª
â For the subset W = v 1 , v 2 , v 3 .....v n of a vector space V , spanW is subspace of V .

4.6 Linearly Independent Vectors (LI)

The vectors v 1 , v 2 , v 3 .....v n ∈ V are said to be linearly independent if whenever,

c 1 v 1 + c 2 v 2 + c 3 v 3 + ..... + c n v n = 0 ⇒ c 1 = c 2 = c 3 = ..... = c n = 0.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 33

4.7 Linearly Dependent Vectors (LD)

The vectors v 1 , v 2 , v 3 .....v n ∈ V are said to be linearly dependent if whenever,

c 1 v 1 + c 2 v 2 + c 3 v 3 + ..... + c n v n = 0 ⇒ c 1 , c 2 , c 3 , .....c n not all zero


© ª
â In general the set v 1 , v 2 , v 3 .....v n ⊆ V is linearly dependent if and only if at least one vector can
always be expressed as a linear combustion of rest of the vectors.

â In particular, two vectors u and v are linearly dependent if and only if u = kv for some k ∈ R.
e. g.

i. u = (2, −1, 3) , v = (6, −3, 9) ∈ R3 are linearly dependent because v = 3u.


ii. sin 2x and sin x cos x are also linearly dependent because sin 2x = 2 sin x cos x.

â A finite set of vectors that contains a zero vector, 0 is always linearly dependent .
e. g. {(3, −1, 2) , (1, 2, −4) , (0, 0, 0)} is linearly dependent set because it contains zero vector.

â A singleton set (set containing only one vector) is linearly dependent if and only if it contains a zero
vector. That is 0.

e. g. {(1, 0, −1)} is linearly independent where as {(0, 0, 0)} is linearly dependent .

4.8 Wronskian1

Target AA
The Wronskian of the functions u, v or u, v, w is defined as a determinant
¯ ¯
¯ u
¯ 0 v0 w0
¯ ¯ ¯
¯ u v ¯ ¯
W = ¯¯ 0 or W = ¯ u v w
u v0 ¯
¯ ¯
¯ ¯
¯ u 00 v 00 w 00 ¯

L continuous functions) is LD OR LI according to


â The subset of functions {u, v} or {u, v, w} of C (setAofLall
EC
the corresponding Wroskian of theE DO | WR= 0 OR
functions
| R W 6= 0.
R E A D
Illustration 4.4 Show that vector w = (9, 2, 7) is a linear combination of u = (1, 2, −1) and v = (6, 4, 2) in R3 .
Solution: To show ←
− is a linear combination of ←
− and ←
v−, we have to find constants c 1 and c 2 such that
w
Powered by
u
Prof. (Dr.) Rajesh M. Darji
c1 u + c2 v = w ⇒ c 1 (1, 2, −1) + c 2 (6, 4, 2) = (9, 2, 7) (4.1)

To solve non-homogeneous system (4.1), consider the augmented matrix [A : B ], can be obtained by putting
the vectors in columns as
1 6 9
 

[A : B ] =  2 4 2  → R 2 − 2R 1 ; R 3 + R 1
−1 2 7
1 6 9
 

∼  0 −8 −16  → R 3 + R 2
0 8 16
1 6 9
 

∼  0 −8 −16 
0 0 0
∴ ρ (A) =ρ (A : B ) = 2 (number of unknowns)

∴ System has unique solution and is given by back substitution as


c 1 + 6c 2 = 9, −8c 2 = −16 ⇒ c 1 = −3, c 2 = 2

∴ From (4.1), w = −3u + 2v


1
Jòzef Maria Hoene-Wroński; Polishsh, 1776-1853.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 34

Illustration 4.5 Show that span {(1, 0, 1) , (−1, 2, 3) , (0, 1, −1)} = R3

Solution: To show the span, we have to show that every vector u = x, y, z ∈ R can be expressed as a linear
¡ ¢

combination of the given vector. That is there should exist c 1 , c 2 , c 3 such that
¡ ¢
c 1 (1, 0, 1) + c 2 (−1, 2, 3) + c 3 (0, 1, −1) = x, y, z (4.2)

Consider the augmented matrix of system (4.2)

1 −1 0 x
 

[A : B ] =  0 2 1 y  → R3 − R1
1 3 −1 z
1 −1 0 x
 

∼ 0 2
 1 y  → R 3 + 2R 2
0 −4 −1 z −x
1 −1 0 x
 

∼ 0 2 1
 y 
0 0 1 z − x + 2y
∴ ρ (A) =ρ (A : B ) = 3 (no of unknowns.)

∴ System is consistent and has unique solution.


∴ There exist c 1 , c 2 , c 3 , satisfying (4.2). Hence span {(1, 0, 1) , (−1, 2, 3) , (0, 1, −1)} = R3

Target AA
Note: To find linear combination for given vector of R3 , solve above system by back substitution, we get

1¡ ¢ 1¡ ¢
c1 = 3x − y − z , c2 = x −y −z , c 3 = −x + 2y + z
2 2

CALL
Hence from (4.2)
| RE
REyA−D
3x − z | R E DO
x −y −z ¡ ¢ ¡ ¢
(1, 0, 1) + (−1, 2, 3) + −x + 2y + z (0, 1, −1) = x, y, z
2 2

e. g. If u = (1, 1, 1) ⇒
Powered
1 by 1 Prof. (Dr.)
c1 = , c2 = − , c3 = 2
Rajesh1 M.1 Darji
∴ (1, 1, 1) = (1, 0, 1) − (−1, 2, 3) + 2 (0, 1, −1)
2 2 2 2
Illustration 4.6 Examine the following vectors for LI or LD:
· ¸ · ¸ · ¸
3 2 3 2 3 1 −1 −2 3 1 0
a. 1 − t + t , −2 + 3t + t + 2t , 1 + t + 5t in P 3 . b. , , in M22 .
1 1 1 2 1 0

c. cos2 x, sin2 x in R .

Solution:
a. Consider the linear combination of given vectors of P 3 :

c 1 1 − t + t 3 + c 2 −2 + 3t + t 2 + 2t 3 + c 3 1 + t 2 + 5t 3 = 0 = 0 + 0 · t + 0 · t 2 + 0 · t 3
¡ ¢ ¡ ¢ ¡ ¢
(4.3)

The augmented matrix for corresponding homogeneous system of (4.3) is, (obtained by putting coefficient
ascending powers of each polynomial in columns)

c1 c2 c3
 
1 −2 1 0 1
 −1 3 0 0  t
[A : Z ] = 
 
 2
 0 1 1 0  t
1 2 5 0 t3

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 35

Reducing to row echelon form:


 
1 −2 1 0
 −1 3 0 0 
[A : Z ] =   → R2 + R1 ; R4 − R1
 
 0 1 1 0 
1 2 5 0
 
1 −2 1 0
 0 1 1 0 
∼  → R3 − R2 ; R 4 − 4R 2
 
 0 1 1 0 
0 4 4 0
 
1 −2 1 0
 0 1 1 0 
∼
 
 0 0 0 0


0 0 0 0
∴ ρ (A) =ρ (A : Z ) = 2 < 3 (number of unknowns)

∴ System (4.3) has non-trivial (non zero) one parametric solution. Hence all c 1 , c 2 , c 3 can not be zero.
Hence given vector are linearly dependent.
Note: To find dependent relation, solving above system we get c 2 = −c 3 , c 1 = −3c 3 .
∴ From (4.3), −3 1 − t + t 3 − −2 + 3t + t 2 + 2t 3 + 1 + t 2 + 5t 3 = 0
¡ ¢ ¡ ¢ ¡ ¢

b. Consider the linear combination of given vectors of M2 2 :

Target AA
· ¸ · ¸ · ¸ · ¸
1 −1 −2 3 1 0 0 0
c1 + c2 + c3 =0= (4.4)
1 1 1 2 1 0 0 0

The augmented matrix for corresponding homogeneous system of (4.4) is, (obtained by putting row-entries
of each matrix in columns)

O | R ECALL
E0 D
 
AD | R 0 
1 −2 1
 −1E 3 0
R
[A : Z ] =   → R2 + R1 ; R3 − R1 ; R4 − R1
 1 1 1 0 


1 2 0
Powered 0
by

Prof. (Dr.) Rajesh M. Darji
1 −2 1 0
 0 1 0 0 
∼  → R 3 + 3R 2 ; R 4 − 4R 2
 
 0 −3 1 0 
0 4 −1 0
 
1 −2 1 0
 0 1 0 0 
∼  → R4 + R3
 
 0 0 1 0 
0 0 −1 0
 
1 −2 1 0
 0 1 0 0 
∼  → R4 + R3
 
 0 0 1 0 
0 0 0 0
∴ ρ (A) =ρ (A : Z ) = 3 (number of unknowns)

∴ System (4.4) has unique trivial (zero) solution, that is c 1 = c 2 = c 3 = 0. Hence given vectors are linearly
independent.

c. Given functions sin2 x and cos2 x are linearly in dependent because we can not write one function as
a constant multiple of another function. That is sin2 x 6= k · cos2 x, k ∈ R.
Alternate Method:

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 36

Let u = sin2 x, v = cos2 x, then Wronskian’s of u, v is

¯ u v ¯ ¯ sin2 x cos2 x 2
cos2 x ¯¯
¯ ¯ ¯ ¯ ¯ ¯
¯ = sin 2x ¯ sin x
¯ ¯
W = ¯¯ 0 = = − sin 2x 6= 0
u v 0 ¯ ¯ sin 2x − sin 2x
¯ ¯
¯ ¯ 1 −1 ¯

∴ u and v are linearly independent.

Illustration 4.7 Find the condition on parameter a such that the set {(1, −1, 1) , (1, 0, a) , (−1, −a, 0)} is lin-
early independent.

Solution: Three vectors of R3 are linearly independent if the determinant of vectors is not zero. That is
¯ ¯
¯ 1 1 −1 ¯
¯ = a 2 6= 0
¯ ¯
¯ −1
¯ 0 −a ¯ ⇒ a 6= 0
¯ 1 a 0 ¯

∴ Given vectors are linearly independent for all a ∈ R − {0} .

Exercise 4.2

1. Find the span of the vectors (1, 0, 0) and (0, 0, 1).

2. Express (5, −1, 9) as a linear combination of v 1 = (2, 9, 0) , v 2 = (3, 3, 4) , v 3 = (1, 2, 1) . [Summer-2016]

3. Is (4, 20) is linear combination of the vectors (2, 10) and (−3, −15) ?

Target AA
4. Show that in R4 the vector (1, 4, −2, 6) is a linear combination of the vectors (1, 2, 0, 4) and (1, 1, 1, 3)
where as (2, 6, 0, 9) is not a linear combination of given vectors.
· ¸ · ¸ · ¸ · ¸
1 0 1 1 1 1 1 1
5. Show that the matrices , , , span M22
0 0 0 0 1 0 1 1

6. Check the LI or LD for the set { x, | x | } in C EC


R(1, 1).ALL
|
A D | R E DO
E
If v 1 , v 2 , v 3 areRlinearly independent in V then prove that v 1 + v 2 , v 2 + v 3 , v 3 + v 1 are also linearly
¡ ¢ ¡ ¢ ¡ ¢
7.
independent.

8. Prove that every subset Powered Prof. (Dr.) Rajesh M. Darji


by is LI and every super set of LD set is LD.
of LI set

9. Let v 1 , v 2 , .....v n be LI subset of vector space V over R. If {x 1 , x 2 , .....x n } and y 1 , y 2 , .....y n be two
© ª © ª
n n
subsets of R such that
X X
xi v i = y i v i then prove that x i = y i ∀ 1 É i É n
i =1 i =1

Answers

1. R2 2. −308v 1 + 69v 2 + 179v 3 3. Yes, (4, 20) = 5 (2, 10) + 2 (−3, −15) 6. LD

E E E

4.9 Basis
© ª
A subset W = v 1 , v 2 , v 3 .....v n of a vector space V is said to be the basis of V if,

i. v 1 , v 2 , v 3 .....v n are linearly independent and

ii. W span V i.e. span W = V

â If W is basis for V then we can say that V is generated by W and W is called generator of V .

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 37

4.10 Some Standard Basis

1. The standard basis of R2 is {(1, 0) , (0, 1)}, of R3 is {(1, 0, 0) , (0, 1, 0) , (0, 0, 1)} and so on.
½· ¸ · ¸ · ¸ · ¸¾
1 0 0 1 0 0 0 0
2. The standard basis of M2×2 is , , , ,
0 0 0 0 1 0 0 1

3. The standard basis of P n (x) is 1, x, x 2 , x 3 , ......x n


© ª

Note: A vector space can have more than one basis. But every basis number of the vectors is same. (vectors
may be different)

4.11 Dimension

The number of the vectors in any basis of the vector space V is said to be the dimension of V and is denoted
by dimV . â If dimV is finite then V is called finite dimensional vector space.

dim Rn = n,
¡ ¢
e. g. dim (P n ) = (n + 1) , dim (M 22 ) = 4.

* Important:

If dim v = n, then

1. Every basis of V has exactly n number of vectors.

Target AA
2. A subset of less or more than n vectors could not be a basis of V.

3. Set of more than n vectors is always linearly dependent (LD)

4. Set of less than n vectors could not be spanV . (No span)

RELI
5. A subset of exactly n vectors, which is|either CA LL
or span V is always basis of V .
READ | R E DO
Illustration 4.8 Show that whether the following sets form basis for given vector space or not? Justify the
answers.
Powered by Prof. (Dr.) Rajesh M. Darji
a. {(1, 2) , (3, −1)} for R2 . b. {(1, 1, 0) , (−1, 0, 0)} for R3 .

c. {(1, −1, 1) , (−1, 2, −2) , (−1, 4, −4)} for R3 . d. {(1, 0, 1) , (1, 1, 0) , (0, 1, 1) , (2, 1, 1)} for R3 .

e. 3 + x 3 , 2 − x − x 2 , x + x 2 − x 3 , x + 2x 2 for P 3 (x) .
© ª

Solution:

a. We know that dim R2 = 2 and given set contain exactly two vectors. So it is sufficient to check weather
the set is linearly independent of not.
¯ ¯
2
¯ 1 3 ¯
For two vectors of R , the determinant of the vectors is ¯
¯ ¯ = −7 6= 0.
2 −1 ¯
∴ Given subset is linearly independent subset of R2 , hence it is basis.

b. Given subset has two vector and dim R3 = 3, hence it can not span R3 . So it is not basis.

c. Given subset has three vectors and dim R3 = 3. Hence for basis it sufficient to check linearly dependent
of given subset.
¯ ¯
¯ 1 −1 −1 ¯
For three vectors of R3 , the determinant of the vectors is ¯¯ −1 2
¯ ¯
4 ¯¯ = 0.
¯ 1 −2 −4 ¯

∴ Given subset is linearly dependent. So it is not basis.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 38

d. Given subset has four vector and dim R3 = 3, hence it linearly dependent subset. So it is not basis.

e. Given subset has four vectors (polynomials) and dim P 3 = 4, so it is sufficient to check linearly in
dependence of given polynomials. Consider the linear combination,

c 1 3 + x 3 + c 2 2 − x − x 2 + c 3 x + x 2 − x 3 + c 4 x + 2x 2 = 0
¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢

The augmented matrix for above system is


 
3 2 0 0 0
 0 −1 1 1 0 
[A : Z ] =   R1 ↔ R4
 
 0 −1 1 1 0 
1 0 −1 0 0
 
1 0 −1 0 0
 0 −1 1 1 0 
∼  → R 4 − 3R 1
 
 0 −1 1 1 0 
3 2 0 0 0
 
1 0 −1 0 0
 0 −1 1 1 0 
∼  → R 3 − R 2 ; R 4 + 2R 2
 
 0 −1 1 1 0 
0 2 3 0 0
 
1 0 −1 0 0
 0 −1 1 1 0 

Target AA
∼  R3 ↔ R4
 
 0 0 0 0 0 
0 0 5 2 0
 
1 0 −1 0 0
 0 −1 1 1 0 
 
∼
CALL

 0 0 5 2 0 
0 0ED0O0| R0E
∴ R=ρ
ρ (A)
D|R
EA(A : Z ) = 3 < 4 (number of unknowns)

∴ System has one Powered Prof. (Dr.) Rajesh M. Darji


bynon trivial (non zero) solution, that is not all c1 , c2 , c3 , c4 are zero.
parametric
∴ Given subset is linearly dependent and hence it is not basis.

Illustration 4.9 Reduce the following set {(1, 0, 0) , (0, 1, −1) , (0, 4, −3) , (0, 2, 0)} to obtain the basis for the
vector space R3 .

Solution: We know that dim R3 = 3 and given set has four vectors. So it is linearly dependent , and not a
¡ ¢

basis. To reduce to basis of R3 , we have to remove one vector from set which depends on other vectors. It
can be done as follow:

â Construct a matrix say A, by taking the vectors in column.

â Reduce matrix A to its equivalent row echelon form.

â The vectors corresponding to non pivot columns are linearly dependent vectors, that will be removed
from original set, and we required basis.

Let v 1 = (1, 0, 0) , v 2 = (0, 1, −1) , v 3 = (0, 4, −3) , v 4 = (0, 2,). Matrix of vectors,

v1 v2 v3 v4
1 0 0 0
 

A = 0 1 4 2  → R3 + R2
0 −1 −3 0

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 39

 
1 0 0 0
∼ 0 1 4 2 
 

0 0 1 2

Since the fourth column in echelon form is non pivoting, on removing corresponding v 4 = (0, 2, 0) from
given set we get required basis. That is {(1, 0, 0) , (0, 1, −1) , (0, 4, −3) ,}

Illustration 4.10 Find the basis for the solution space of the equation AX = 0, where
 
−1 0 1 2
 −1 1 0 −1 
A=
 
0 −1 1 3 


1 −2 1 4

Solution: To find basis for solution space of the equation AX = 0, first of all we obtain solution of the given
homogeneous system. Consider the augmented matrix for given equation is given by

x1 x2 x3 x4
 
−1 0 1 2 0
 −1 1 0 −1 0 
[A : Z ] =   → R2 − R1 ; R4 + R1
 
 0 −1 1 3 0 
1 −2 1 4 0
 
−1 0 1 2 0
 0 0 −1 −3 0 
∼ R2 ↔ R3
 

Target AA
0 −1 1 3 0

 
0 −2 2 6 0
 
−1 0 1 2 0
 0 −1 1 3 0 
∼  → R 4 − 2R 2
 
0 0 −1 −3 0
CALL
 
0 −2 2 6
| RE 0

EAD 0
−1 1 2 | R E DO 
0
R
0 −1 1 3 0

∼
 

0 0 −1 −3 0

0
Powered by
0 0 0
Prof. (Dr.) Rajesh M. Darji
0

∴ ρ (A) =ρ (A : Z ) = 3 < 4

∴ System has non trivial one parametric solution which is obtained by assuming parameter t ∈ R to free
variable x 4 , and is given by

x 1 = −t , x 2 = −2t x 3 = −3t , x4 = t , t ∈R

Now solution space is defined as set of all possible solutions, that is


n o
W = X : AX = 0
= {(x 1 , x 2 , x 3 , x 4 ) : x 1 = −t , x 2 = −2t , x 3 = −3t , x 4 = t , t ∈ R}
= {(−t , −2t , 3t , t ) : t ∈ R}
= {t (−1, −2, −3, 1) : t ∈ R}
=Linear combination of the vector (−1, −2, −3, 1) .
=span {(−1, −2, −3, 1)}

∴ Solution space is spaned by the set {(−1, −2, −3, 1)} and set is also linearly independent because it on-
tained only one non zero vector.
∴ {(−1, −2, −3, 1)} is required basis for solution space.
Note that, dimension of the solution space is 1, and it is same as number of non pivot column of row
echelon form of A.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 40

Illustration 4.11 Find the basis for the plane x + 2y + 3z = 0 in R3 .

Solution: Let W be the given plane x + 2y + 3z = 0 in R3 .

W = x, y, z : x + 2y + 3z = 0, x, y, z ∈ R
©¡ ¢ ª

= −y − 3z, y, z : y, z ∈ R
©¡ ¢ ª £ ¤
∵ x + 2y + 3z = 0 ⇒ x = −y − 3z
= −y, y, 0 + (−3z, 0, z) : y, z ∈ R
©¡ ¢ ª £ ¤
Separating y and z
= y (−1, 1, 0) + z (−3, 0, 1) : y, z ∈ R
© ª

∴ W = span {(−1, 1, 0) , (−3, 0, 1)}

∴ W is spanned by linearly independent set {(−1, 1, 0) , (−3, 0, 1)}.


∴ required basis for plane x + 2y + 3z = 0 is {(−1, 1, 0) , (−3, 0, 1)} and dimension 0f plane is 2.

Exercise 4.3

1. Which of the following sets of vectors are basis ?

a. 1 − 3x + 2x 2 , 1 + x + 4x 2 , 1 − 7x for P 2
½· ¸ · ¸ · ¸ · ¸¾
1 2 0 −1 0 2 0 0
b. , , , for M22 [Winter-2015]
1 −2 −1 0 3 1 −1 2

2. Let V be the space spanned by v 1 = cos 2x, v 2 = sin2 x, v 3 = cos2 x then show that S = v 1 , v 2 , v 3 is
© ª

not basis for V .


[Hint: cos 2x = cos2 x − sin2 x]

Target AA
3. For what real values of λ do the following vectors form a basis for R3 ?
µ
1 1

|
[Hint: Take determinant of vectors = 0]
¶ µ
1

RECAL
1
¶ µ
v 1 = λ, − , − , v 2 = − , λ, − , v 3 = − , − , λ
2 2 2 2
L
1 1
2 2

AD | R E DO
E1, 1) , (1, 1, −1) , (3, 1, −3) , (1, 2, 0) to basis for R3 .
R(0,
4. Reduce the set

5. Extend the set {(1, 1, 1, 1) , (1, 2, 1, 2)} to a basis for R4 .


Powered by Prof. (Dr.) Rajesh M. Darji
4
[Winter-2012]
[Hint: Take union with standard basis of R and then reduce new set of six vectors.]

6. Extend the set 1, x 2 to a basis for P 4 .


© ª

7. Reduce the following set to obtain the basis for the vector space P 2 :
p 0 = 2, p 1 = −4x, p 2 = x 2 + x + 1, p 3 = 2x + 7, p 4 = 5x 2 − 1.

8. In each part, determine whether the three vectors lie in a plane (linearly independent ) or on the same
line (linearly dependent ):

a. v 1 = (2, −2, 0) , v 2 = (6, 1, 4) , v 3 = (2, 0, −4) b. v 1 = (−6, 7, 2) , v 2 = (3, 2, 4) , v 3 = (4, −1, 2)

9. Show that M23 has dimension 6.


[Hint: Construct standard basis for M23 . ]

10. Determine a basis for and the dimension of the solution space of the following homogeneous system:

a. 2x 1 + 2x 2 − x 3 + x 5 = 0 b. x +y +z =0
−x 1 − x 2 + 2x 3 − 3x 4 + x 5 = 0 3x + 2y − 2z = 0
x 1 + x 2 − 2x 3 − x 5 = 0 4x + 3y + z = 0
x 3 + x 4 + x 5 = 0 [Winter-2017] 6x + 5y + z = 0

11. Determine basis for the following subspace of R3 :

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 41

a. The line x = 2t , y = −t , z = 4t .
b. All the vectors of the form (a, b, c) for which b = a + c.
c. The subspace x, y, z, w ∈ R4 : x + y − z = y + z = 0 .
©¡ ¢ ª

 
1 0 x
   
 
12. Is v 1 , v 2 form basis for H ? Where v 1 =  0  , v 2 =  1  , H =  x  : x ∈ R .
© ª
 
0 0 0

13. Under what condition is a set with one vector linearly independent ?

14. Show that every set with more then three vectors from P 2 is linearly dependent.

15. Prove that the space spanned by two vectors in R3 is a line through the origin, a plane through the
origin or the origin itself.

16. Use proportional identities, where required, to check which of following sets of vectors in F (−∞, ∞)
are linearly dependent.

a. 6, 3sin2 x, 2cos2 x b. x, cos x


c. 1, sin x, sin 2x d. (3 − x)2 , x 2 − 6x, 5

17. Given two linearly independent vectors (1, 0, 1, 0) and (0, −1, 1, 0) of R4 , find a basis for R4 that includes
these two vectors.

18. Determine whether the vectors v 2 = (1, 2, −1) , v 3 = (−3, 1, 0) , v 4 = (2, 11, −5) forms a basis for R3 or not

Target AA
? If not choose, construct a basis of R3 consisting the vectors out of the given vectors.

Answers
½ ¾
1
1. a. No b. Yes 3. λ ∈ R − − , 1 4. (0, 1, 1) , (1, 1, −1) , (3, 1, −3)
ECA© LL
2
D O | R
0)E
(1, 0,|0,R 6. 1, x, x 2 , x 3 , x 4
ª © ª
5. {(1, 1, 1, 1) , (1, 2, 1, 2)
A ,D , (0, 1, 0, 0)} 7. p0, p1, p2
R E
8. a. in plane. b. on line 10. a. {(−1, 1, 0, 0) , (−1, 0, −1, 0, 1)} , dim = 2 b. Null space, dim= 0

11. a. {(2, −1, 4)}


Powered by Prof.
b. {(1, 1, 0) , (0, 1, 1)}
(Dr.) Rajesh M. Darji
c. {(2, −1, 1, 0) , (0, 0, 0, 1)} , dim = 2 12. Yes

16. a. d. LD, b. ĻI 17. {(1, 0, 1, 0) , (0, −1, 1, 0) , (1, 0, 0, 0) , (0, 0, 0, 1)}

18. not basis, {(1, 2, −1) , (−3, 1, 0) , (1, 0, 0)}

E E E

4.12 Ordered Basis and Coordinate Vector


© ª
â An ordered basis is a basis S = v 1 , v 2 , v 3 .....v n along with the ordering of its vectors.
© ª
â Let S = v 1 , v 2 , v 3 .....v n be the ordered basis for the vector space V and u ∈ V .
If u = c 1 v 1 +c 2 v 2 +c 3 v 3 +.....+c n v n , then the coefficients c 1 , c 2 , c 3 , .....c n are called coordinate of vector
u with respect to the basis S.

â The corresponding vector (c 1 , c 2 , c 3 , .....c n ) of Rn is called coordinate and is denoted by u S . That is


¡ ¢

¡ ¢
u S = (c 1 , c 2 , c 3 , .....c n )

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 42

4.13 Translation Matrix (Change of Basis Matrix)


© ª © ª
â Let S = v 1 , v 2 , v 3 .....v n and T = w 1 , w 2 , w 3 .....w n are two basis for the vector space V.

â The translation matrix from basis T to S is defined as,


££ ¤ £ ¤ £ ¤ £ ¤ ¤
P= w 1 S , w 2 S , w 3 S ..... w n S

where i t h column of matrix P is the coordinate matrix of w i relative to the basis S


£ ¤ £ ¤
â The coordinate matrix of u relative to S, u S and relative to T , u T are related by equation,
£ ¤ £ ¤
u S =P u T

* Important:

1. Sometimes the translation matrix from T to S is denoted by PT →S .

2. Here P is always invertible and P−1 be a translation matrix from S to T , that is P−1
S→T
. Hence

PT →S ⇔ P−1
S→T

Illustration 4.12 If u = (10, 5, 0) ∈ R3 , Find the coordinate vector for u relative to,

a. The standard basis for R3 .

Target AA
b. The basis T = {(1, −1, 1) , (0, 1, 2) , (3, 0, −1)}

Solution:
a. The standard basis for R3 is S = {(1, 0, 0) , (0, 1, 0) , (0, 0, 1)} = e 1 , e 2 , e 3 (Say). To find coordinate vector
RE5,C
relative to standard basis S, we represent u = (10,
R E DO
0)A
| asLaL
© ª

linear combination of vectors S.

0)A
u = (10,R5,E
D|
= (10, 0, 0) + (0, 5, 0) + (0, 0, 0)
=10 (1, 0, 0)Powered
+ 5 (0, 1, 0)by Prof. (Dr.) Rajesh M. Darji
+ 0 (0, 1, 0)
u =10 · e 1 + 5 · e 2 + 0 · e 3
¡ ¢
∴ u S = (10, 5, 0) Required coordinate vector relative to standard basis.

â It is worth to note that, for any vector of R3 (in fact of Rn ), given vector it self represent a coordinate
vector relative to standard basis.
© ª
b. For coordinate vector relative to basis T = {(1, −1, 1) , (0, 1, 2) , (3, 0, −1)} = v 1 , v 2 , v 3 (say), we represent
u as a linear combination of vectors of T . Consider,

c1 v 1 + c2 v 2 + c3 v 3 = u ⇒ c 1 (1, −1, 1) + c 2 (0, 1, 2) + c 3 (3, 0, −1) = (10, 5, 0) (4.5)

To solve non-homogeneous system (4.5), consider augmented matrix: (Put coefficient vectors in column)

c1 c2 c3
1 0 3 10
 

[A : B ] = −1 1
 0 5  → R2 + R1 ; R3 − R1
1 2 −1 0
1 0 3 10
 

∼ 0 1 3 15  → R 3 − 2R 1
0 2 −4 −10

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 43

1 0 3 10
 

∼ 0 1
 3 15 
0 0 −10 −40
Back substitution ⇒ c 1 = −2, c 2 = 3, c3 = 4
¡ ¢
∴ Required coordinate vector relative to basis T is u T = (c 1 , c 2 , c 3 ) = (−2, 3, 4)

Illustration 4.13 Determine the coordinate vector of p = 4−2x+3x 2 relative to the basis B = 2, −4x, 5x 2 − 1
© ª

for P 2 .

Solution: Consider the lineal combination,

c 1 (2) + c 2 (−4x) + c 3 5x 2 − 1 = 4 − 3x + 4x 2
¡ ¢
c1 p 1 + c2 p 2 + c3 p 3 = p ⇒ (4.6)

To solve system (4.6), consider augmented matrix, (putting coefficients of polynomials in column)

c1 c2 c3
2 0 −1 4
 

[A : Z ] =  0 −4 0 −2 
0 0 5 3

23
Observe that above matrix is already in echelon form, so making back substitution we get c 1 = , c2 =
µ ¶ 10
1 3 ¡ ¢ 23 1 3
, c 3 = . Hence, Required coordinate is p B = , , .

Target AA
2 5 10 2 5

Illustration 4.14 Consider the standard basis for R3 i.e. S = e 1 , e 2 , e 3 and another basis T = v 1 , v 2 , v 3
© ª © ª

where v 1 = (1, −1, 1) , v 2 = (0, 1, 2) , v 3 = (3, 0, −1) of R3

a. Find the translation matrix P from T to S and Q from S to T.


| R E CALL
u TD O
D | £ ¤E
£ ¤ £ ¤
b. Compute u S , given that R = (9, −1, −8)
£ ¤ REA
c. Compute u T , given that u S = (−6, 7, 2)

Solution: Powered by Prof. (Dr.) Rajesh M. Darji


a. By definition of translation matrix P from T to S, we have
££ ¤ £ ¤ £ ¤ ¤
P= v1 S v2 S v3 S (4.7)
£ ¤ £ ¤ £ ¤
where v 1 S , v 2 S , v 3 S are column matrix of coordinate vectors, of v 1 , v 2 , v 3 relative to standard basis S
respectively.
Also know that the coordinate vector of any vector of R3 relative to standard basis is vector itself (See
Illustration 4.12). Hence

1 0 3
     
£ ¤ £ ¤ £ ¤
v 1 S =  −1  , v1 S =  1 , v1 S =  0 
1 2 −1

1 0 3
 

∴ From (4.7), the translation matrix P from T to S is, P =  −1 1 0 


1 2 −1
Also the translation matrix Q from S to T is given by
££ ¤ £ ¤ £ ¤ ¤
Q= e1 T e2 T e3 T , e 1 = (1, 0, 0) , e 2 = (0, 1, 0) , e 3 = (0, 0, 1) (4.8)
£ ¤ £ ¤ £ ¤
where e 1 T , e 2 T , e 3 T are column matrix of coordinate vectors, of e 1 , e 2 , ve 3 relative to the basis T re-
spectively.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 44

â To find these coordinate vectors, first we obtain coordinate for any vector say u = (a, b, c) ∈ R3 relative
to basis T. Let

c1 v 1 + c2 v 2 + c3 v 3 = u ⇒ c 1 (1, −1, 1) + c 2 (0, 1, 2) + c 3 (3, 0, −1) = (a, b, c) (4.9)

Augmented matrix os system (4.9) is

c1 c2 c3
1 0 3 a
 

[A : B ] =  −1 1 0 b  → R2 + R1 ; R3 − R1
1 2 −1 c
1 0 3 a
 

∼ 0 1 3 b + a  → R 3 − 2R 2
0 2 −4 c −a
1 0 3 a
 

∼ 0 1
 3 b+a 
0 0 −10 c − 3a − 2b

By back substitution, we get

1 1 1
c1 = (a − 6b + 3c) , c2 = (a + 4b + 3c) c3 = (3a + 2b − c)
10 10 10

Target AA
c1 a − 6b + 3c
   
£ ¤ 1
∴ u T = [(a, b, c)]T =  c 2  =  a + 4b + 3c 
10
c3 3a + 2b − c
1 3
     
−6
£ ¤ 1  £ ¤ 1  £ ¤ 1 
⇒ e 1 T = [(1, 0, 0)]T = 1  , e 2 T = [(0, 1, 0)]T = 4  , e 3 T = [(0, 0, 1)]T = 3 
10
E C A L L 10 10
DO | R
3 2 −1
| R E
READ 1

1 −6 3

∴ From (4.8), required translation matrix Q from S to T is, Q =  1 4 3 


10
Powered by Prof. (Dr.) Rajesh M. Darji 3 2 −1
Note: We know that if P is a translation matrix from T to S then P−1 is a translation matrix from S to T .
1 −6 3
1 
Hence if P is known, then Q = P−1 = 1 4 3  (Verify !)
10
3 2 −1
£ ¤ £ ¤
b. To find u S using u T = (9, −1, −8). That is to convert T coordinate into S coordinate. Hence we
use the translation matrix T to S, that is P, given by the relation

1 0 3 9
    
−15
£ ¤ £ ¤ ¡ ¢
u S = P u T =  −1 1 0   −1  =  −10  ∴ u S = (−15, −10, 17)
1 2 −1 −8 17

£ ¤ £ ¤
c. Similarly, to find u T using u S = (−6, 7, 2). That is to convert S coordinate into T coordinate. Hence
we use the translation matrix S to T , that is Q, given by the relation

1 −6 3
    
−6 −42 µ ¶
£ ¤ £ ¤ 1  1 ¡ ¢ 21 14 3
u T =Q u S = 1 4 3  7  =  28  ∴ u S = − , ,−
10 10 5 5 5
3 2 −1 2 −6

Exercise 4.4

1. Find the coordinate of the following vectors relative to the basis S, given that,

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 45

© ª
a. v = (2, −1, 3), S = v 1 , v 2 , v 3 , v 1 = (1, 0, 0) , v 2 = (2, 2, 0) , v 3 = (3, 3, 3)
b. p = 4 − 3x + x 2 , S = p 1 , p 2 , p 3 , p 1 = 1, p 2 = x, p 3 = x 2
© ª
· ¸ · ¸ · ¸ · ¸ · ¸
1 0 n o −1 1 1 1 0 0 0 0
c. A = , S = A1, A2, A3, A4 , A1 = , A2 = , A3 = , A4 =
−1 0 0 0 0 0 1 0 0 1
[Hint: c 1 A 1 + c 2 A 2 + c 3 A 3 + c 4 A 4 = 0]

2. Determine the coordinate vector of p = 4 − 2x + 3x 2 with respect to the standard basis of P 2 .

3. Consider the standard basis B for R3 and another basis C = {(1, 2, 1) , (1, −1, 1) , (1, 0, −1)}:

a. Find the translation matrix P from C to B.


b. Find the translation matrix Q from B to C.

4. Consider the standard basis for P 2 i.e. B = 1, x, x 2 and another basis C = 2, −4x, 5x 2 − 1
© ª © ª

a. Find the translation matrices from C to B and B to C .


¡ ¢
b. Determine the polynomial that has the coordinate vector p C = (−4, 3, 11)

Answers
µ ¶
1 1
1. a. (3, −2, 1) b. (4, −3, 1) c. − , , −1, 0 2. (4, −2, 3)
2 2

1 1 1 1/6 1/3 1/6


   

Target AA
3. a. P =  2 −1 0  b. Q =  1/3 −1/3 1/3 
1 1 −1 1/2 0 −1/2

2 0 −1 1/2 0 1/10
   

4. a. PC →B =  0 −4 0  , QB →c =  0 −1/4 0  b. p = −19 − 12x + 55x 2


0 0 5 0 0 L 1/5
| RE CAL
READ | R E DO E E E

Powered
4.14 Fundamental Spaces: Prof. (Dr.) Rajesh M. Darji
bySpace, Column Space, Null Space
Row

â Consider an m × n matrix with real entries as,


 
a 11 a 12 ... a 1n
a 21 a 22 a 2n
 
 ... 
A=
 
.. .. .. 

 . . ... . 

a m1 a m2 ... a mn

â The rows of above matrix are referred as row vectors of Rn and are denoted by r i , 1 É i É m

â The columns of above matrix are referred as column vectors of Rm and are denoted by c j , 1 É j É n

â Row space: The row space of the matrix A is defined as the span of row vectors of A and is denoted by
row (A). Hence,

row (A) = span r 1 , r 2 , r 3 .......r m = k 1 r 1 + k 2 r 2 + ... + k m r m : k 1 , k 2 ...k m ∈ R


© ª © ª

â Column space: The column space of of the matrix A is defined as the span of column vectors of A and
is denoted by col (A). Hence,

col (A) = span c 1 , c 2 , c 3 .......c n = k 1 c 1 + k 2 c 2 + ... + k n c n : k 1 , k 2 ...k n ∈ R


© ª © ª

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 46

â Null space: The solution space of the system of homogeneous system AX = 0 , is called the null space
of A and is denoted by nul (A) . Hence,
n o
nul (A) = X ∈ Rn : AX = 0

â Row space and Null space are subspace of of Rn , Column space is a subspace of Rm .

Theorem 4.2 (Basis for Row space and Column space)


Let A be a given matrix and B be its equivalent row-echelon matrix. Then

i. The set of pivot rows of matrix B forms a basis for the row space of A.

ii. The set of columns of A, corresponding to the pivot columns of B forms a basis for the column space
of A

4.15 Rank and Nullity

â Dimension of of row space of matrix A is called row rank of A.


â Dimension of of column space of matrix A is called column rank of A.
â The row rank and the column rank of a matrix are always same, and commonly it is known as rank of
matrix A.

â Rank of A is given by number of pivots in the row echelon form, and is denoted by ρ(A).

Target AA
â The dimension of the null space of A is called nullity of A and is denoted by µ (A) .
Alternatively, nullity is defined to be the number of non-pivot columns in the echelon form of matrix
A or numbers of free variables in the solution of AX = 0..

â For any matrix A, rank (A) = rank A T .


¡ ¢

| RE CALL
RE
4.16 Rank-Nullity AD
Theorem
| R E DO

Let A be an (m × n) matrix then,


Powered by Prof. (Dr.) Rajesh M. Darji
rank (A) + nullity (A) = Number of columns of A.
∴ ρ (A) + µ (A) = n

Note: Rank-Nullity theorem is also known as dimension theorem in context of ρ (A) = dim [row (A)] and
µ (A) = dim [nul (A)]. Hence

dim [row (A)] + dim [nul (A)] = n = number of columns

Illustration 4.15 Find row (A) , col (A) , nul (A) , row A T , col A T , nul A T , given
¡ ¢ ¡ ¢ ¡ ¢

 
1 −2 1 1 2
 −1 3 0 2 −2 
A=
 
0 1 1 3 4 


1 2 3 13 5

Solution: Since  
  1 −1 0 1
1 −2 1 1 2  −2 3 1 2 
 −1 3 0 2 −2  T
 
A= ⇔ A = 1 0 1 3
   
0 1 1 3 4 
 
 
1 2 3 13

 
1 2 3 13 5
2 −2 4 5

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 47

1. row (A) = span of row vectors of A


= {k 1 (1, −2, 1, 1, 2) + k 2 (−1, 3, 0, 2, −2) + k 3 (0, 1, 1, 3, 4) + k 4 (1, 2, 3, 13, 5) : k 1 , k 2 , k 3 , k 4 ∈ R}
= {(k − k 2 + k 4 , −2k 1 + 3k 2 + k 3 + 2k 4 , k 1 + k 3 + 3k 4 , k 1 + 2k 2 + 3k 3 + 13k 4 , 2k 1 − 2k 2 + 4k 3 + 5k 4 )}
= col A T ∵ Rows of A are columns of A T
¡ ¢ £ ¤

2. col (A) = span of column vectors of A


= {k 1 (1, −1, 0, 1) + k 2 (−2, 3, 1, 2) + k 3 (1, 0, 1, 3) + k 4 (1, 2, 3, 13) + k 5 (2, −2, 4, 5) : k 1 , ...k 5 ∈ R}
= {(k 1 − 2k 2 + k 3 + k 4 + 2k 5 , −k 1 + 3k 2 + 2k 4 − 2k 5 , k 2 + k 3 + 3k 4 + 4k 5 , k 1 + 2k 2 + 3k 3 + 13k 4 + 5k 5 )}
= row A T ∵ Columnss of A are rows of A T
¡ ¢ £ ¤

n o
3. nul (A) = X ∈ R5 : AX = 0 .
Consider the augmented matrix for homogeneous system AX = 0:

x1 x2 x3 x4 x5
 
1 −2 1 1 2 0
 −1 3 0 2 −2 0 
[A : Z ] = 
 
0 1 1 3 4 0

 
1 2 3 13 5 0
Reducing to row echelon form, we get

x1 x2 x3 x4 x5
 

Target AA
1 −2 1 1 2 0
 0 1 1 3 0 0 
[A : Z ] ∼ 
 

 0 0 −2 0 3 0 
0 0 0 0 4 0
∴ ρ (A) = ρ (A : Z ) = 4 < 5 (number of unknowns)
L
ECALgiven
O |
∴ System has non trivial one parametricRsolution by assigning parameter t to free variable x 4 ,
A D | RED as
R
and is given by E
back substitution,

x 1 = −7t , x 2 = −3t , x 3 = 0, x4 = t , x 5 = 0, t ∈R
Powered
∴ Xby Prof. (Dr.) Rajesh M. Darji
= (x 1 , x 2 , x 3 , x 4 , x 5 ) = (−7t , −3t , 0, t , 0) , t ∈ R
∴ nul (A) = {(−7t , −3t , 0, t , 0) : t ∈ R} = span {(−7, −3, 0, 1, 0)}
¡ T¢ n o
4. nul A = X ∈ R4 : A T X = 0 .
Consider the augmented matrix for homogeneous system A T X = 0:

x1 x2 x3 x4
 
1 −1 0 1 0

 −2 3 1 2 0 

AT : Z = 
£ ¤
1 0 1 3 0
 

 
 1 2 3 13 0 
2 −2 4 5 0
Reducing to row echelon form, we get

x1 x2 x3 x4
 
1 −1 0 1 0

 0 1 1 4 0 

T
£ ¤
A :Z ∼ 0 0 4 3 0
 

 
 0 0 0 −2 0 
0 0 0 0 0
∴ ρ (A) = ρ (A : Z ) = 4 (= number of unknowns)

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 48

∴ System has trivial solution, that is x 1 = x 2 = x 3 = x 4 = 0.


Hence, nul A T = {(0, 0, 0, 0)} = 0
¡ ¢ © ª

Illustration 4.16 Find the basis for the row space, column space and null space of the following matrix:
[Summer-2016]  
1 −3 4 −2 5 4
 1 −6 9 −1 8 2 
 
 2 −6 9 −1 9 7
 

−1 3 −4 2 −5 −4
 
1 −3 4 −2 5 4
 1 −6 9 −1 8 2 
Solution: Let A = 
 
2 −6 9 −1 9 7 


−1 3 −4 2 −5 −4
Reducing to row echelon form:
 
1 −3 4 −2 5 4
0 −3 5 1 3 −2 
 
A∼ =B (4.10)

 0 0 1 3 −1 −1 
0 0 0 0 0 0
By Theorem (4.2),
1. Basis for row space of A is given by the set pivot rows of B . Hence basis for row space of A is
{(1, −3, 4, −2, 5, 4) , (0, −3, 5, 1, 3, −2) , (0, 0, 1, 3, −1, −1)}

Target AA
2. Basis for column space of A is given by the set of columns of A corresponding to pivot columns of B .
Hence basis for row space of A is
{(1, 1, 2, −1) , (−3, −6, −6, 3) , (4, 9, 9, −4)}

3. To find basis for null space of A, first we find null L L of A.


space
RECA
DO |form of augmented matrix for system AX = 0 is
From equation (4.10), the REechelon
| row
READ x1 x2 x3 x4 x5 x6
 
Powered by  Prof. (Dr.) Rajesh M.
1
0
−3
−3
 Darji
4 −2
5 1
5 4
3 −2
0
0
[A : Z ] ∼ 
 
0

 0 0 1 3 −1 −1 
0 0 0 0 0 0 0
∴ ρ (A) = ρ (A : Z ) = 3 < 6 (number of unknowns)
∴ By back substitution, 3-parametric solution is given by
14 8
x 1 = −s − 5t , x 2 = − r + s + t , x 3 = −3r + s + t , x 4 = r, x 5 = s, x 6 = t , r, s, t ∈ R
3 3 µ ¶
14 8
∴ X = (x 1 , x 2 , x 3 , x 4 , x 5 , x 6 ) = −s − 5t , − r + s + t , −3r + s + t , r, s, t
3 3
Hence, ½µ ¶ ¾
14 8
nul (A) = −s − 5t , − r + s + t , −3r + s + t , r, s, t : r, s, t ∈ R
3 3
â To find basis, rewrite nul (A) as a span of coefficient vectors of r, s, t , as follow:
½ µ ¶ µ ¶ ¾
14 8
nul (A) = r 0, − , −3, 1, 0, 0 + s −1, , 1, 0, 1, 0 + t (−5, 1, 1, 0, 0, 1) : r, s, t ∈ R
3 3
½µ ¶ µ ¶ ¾
14 8
∴ nul (A) = span 0, − , −3, 1, 0, 0 , −1, , 1, 0, 1, 0 , (−5, 1, 1, 0, 0, 1)
3 3
½µ ¶ µ ¶ ¾
14 8
Thus, basis for null space of A is 0, − , −3, 1, 0, 0 , −1, , 1, 0, 1, 0 , (−5, 1, 1, 0, 0, 1) , because this
3 3
set is always linearly independent and it span nul (A) .

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 49

* Important:

From (4.10),

â Rank of A, ρ (A) = 3 = number of pivot columns.

â Nullity of A, that is dimension of null space of A, µ (A) = 3 = number of non-pivot columns.

â ρ (A) + µ (A) = 3 + 3 = 6 = total number of columns of A.


Hence, rank-nullity theorem is also verified.

Illustration 4.17 Find the basis for the vector space span {(1, −1, 2) , (0, 5, −8) , (3, 2, −2) , (8, 2, 0)} .

Solution: Let W = span {(1, −1, 2) , (0, 5, −8) , (3, 2, −2) , (8, 2, 0)} .
If we consider a matrix A by putting the vectors in column, then W becomes column space of A. (Alter-
nately, we can also put vectors in rows, then W becomes row space of S)

1 0 3 8
 

A =  −1 5 2 2  ⇒ W = col (A)
1 −8 −2 0

∴ Required basis for W , is the basis for column space of A.


Reducing A in to row echelon form, we get
 
1 0 3 8

Target AA
A∼ 0 5 5 10  = B
 

0 0 3 8

∴ Required basis is given by column vectors of A corresponding to non-pivot columns of B.


∴ Basis for given vector space is {(1, −1, 1) , (0, 5, −8) , (3, 2, −1)} .
Exercise 4.5
| RE CALL
1. Find the basisR Ethe
for | R E DO
ADrow space, column space and null space of the following matrices and verify the
rank-nullity theorem:
 

2 −4 Powered
1 2 −2 by−3

Prof. (Dr.) Rajesh M. Darji
1 3
 −1 −1 −1
2 0 1
1 0 
a.  −1 2 0 0 1 −1  b. 
 
 0 4 2 4 3 

10 −4 −2 4 −2 4
1 3 2 −2 0
½· ¸ · ¸ · ¸ · ¸¾
−1 1 2 −2 2 −1 −5 4
2. Find the basis for the vector space spanned by , , ,
−2 1 4 −2 3 1 −9 −1
[Hint: Consider matrix A by taking given matrices in column. Required basis is basis for col(A).]

3. Find basis for the space span 1 + x + x 2 + x 3 , 1 + x 2 , 1 + 2x + 2x 2 + x 3 , 1 + x 2 + 2x 3 ⊂ P 3 (x)


© ª

[Hint: Consider matrix A by taking coefficients in column. Required basis is basis for col(A).]

4. Let A be (3 × 4) matrix with ρ (A) = 3.


n o
a. What is the dimension of X : AX = 0 ?

b. Is AX = b consistent for all b ?


c. If AX = b is consistent, how many solution does it have ?

5. Prove that the null space of m × n matrix is a subspace of Rn .

2 0 −1
 

6. Find rank and nullity of the matrix A =  4 0 −2  and verify dimension theorem. [Summer-2015]
0 0 0

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 50

Answers
µ ¶
1 5
1. a. Row space basis: (2, −4, 1, 2, −2, −3) , (0, 16, −7, −6, 8, 19) , 0, 0, , 1, 0, − ,
2 2
Column space basis:
½µ {(2. − 1, 10) , (−4, 2,
¶ µ −4) , (1, 0, −2)}, ¶ ¾
1 1
Null space basis: −1, − , −2, 1, 0, 0 , 0, − , 0, 0, 1, 0 , (1, 1, 5, 0, 0, 1) , Rank 3, Nullity 3.
2 2
b. Row space basis: (1, 3, 2, 0, 1) , (0, 2, 1, 1, 1) , (0, 0, 0, 2, 1),
Column space basis:
½µ {(1, −1, 1, 0) ¶ ,µ(1, −1, 4, 3) , (0, 1,¶¾4, −2)},
1 1 1 1 1
Null space basis: − , − , 1, 0 , − , − , − , 1 , Rank 3, Nullity 2.
2 2 4 4 2
½· ¸ · ¸ · ¸¾
−1 1 2 −1 −5 4
3. 1 + x + x 2 + x 3 , 1 + x 2 , 1 + 2x + 2x 2 + x 3 , 1 + x 2 + 2x 3
© ª
2. , ,
−2 1 3 1 −9 −1

4. a. 1 b. No c. Infinite 6. Rank 1, Nullity 2

E E E

Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics

Target AA
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779

| RE CALL
READ | R E DO

Powered by Prof. (Dr.) Rajesh M. Darji

LAVC (GTU-2110015) B.E. Semester II


Chapter 5
Linear Transformation (Linear Mapping)

5.1 Linear transformation

Let V and W are real vector spaces then a mapping or a function or a transformation defined from V to W
that is T : V → W is said to be linear transformation if it will satisfies the following two conditions.
¡ ¢ ¡ ¢ ¡ ¢
i. ∀u, v ∈ V ; T u +v = T u +T v

ii. ∀u ∈ V, α ∈ R; T αu = αT u
¡ ¢ ¡ ¢

* Important:

1. T : V → W preserves the two basic operations of a vector space, namely vector addition and scalar multiplication.

2. For α = 0, T(0_V) = 0_W. Hence a linear transformation maps the zero vector of V to the zero vector of W.

3. For V = R^n and W = R^m, a linear transformation T : R^n → R^m is also known as a Euclidean linear transformation. If m = n, then T : R^n → R^n is called a linear operator on R^n.
5.2 Particular Transformations
1. Zero transformation: A linear transformation T : V → W is a zero transformation if T(v) = 0, ∀ v ∈ V.

2. Identity transformation: An operator T : V → V is said to be an identity operator if T(v) = v, ∀ v ∈ V.

Illustration 5.1 Check whether the following mappings are linear transformations or not.

a. T : R^2 → R^2, T(x, y) = (x + 2y, 3x − y) [Summer-2016]

b. T : R^3 → R^2, T(x, y, z) = (|x|, y + z)

c. T : M22 → R^2, T([a b; c d]) = (det[a b; c d], 0)

Solution:
a. Let u = x 1 , y 1 , v = x 2 , y 2 ∈ R2 , α ∈ R
¡ ¢ ¡ ¢

¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢
∴ T u = T x 1 , y 1 = x 1 + 2y 1 , 3x 1 − y 1 , T v = T x 2 , y 2 = x 2 + 2y 2 , 3x 2 − y 2
¡ ¢ ¡ ¢
i. T u + v =T x 1 + x 2 , y 1 + y 2
£ ¡ ¢ ¡ ¢¤
= (x 1 + x 2 ) + 2 y 1 + y 2 , 3 (x 1 + x 2 ) − y 1 + y 2
¡ ¢
= x 1 + x 2 + 2y 1 + 2y 2 , 3x 1 + 3x 2 − y 1 − y 2
¡ ¢ ¡ ¢
= x 1 + 2y 1 , 3x 1 − y 1 + x 2 + 2y 2 , 3x 2 − y 2
¡ ¢ ¡ ¢ ¡ ¢
∴ T u +v = T u +T v


T αu =T αx 1 , αy 1
¡ ¢ ¡ ¢
ii.
= (αx 1 ) + 2 αy 1 , 3 (αx 1 ) − αy 1
£ ¡ ¢ ¡ ¢¤

= (αx1 + 2αy1, 3αx1 − αy1)

= α x 1 + 2y 1 , 3x 1 − y 1
¡ ¢

∴ T αu = αT u
¡ ¢ ¡ ¢

∴ T preserves vector addition and scalar multiplication. Hence T is a linear transformation.

b. Let u = (x1, y1, z1), v = (x2, y2, z2) ∈ R^3, α ∈ R.

i. T(u + v) = T(x1 + x2, y1 + y2, z1 + z2)
 = (|x1 + x2|, (y1 + y2) + (z1 + z2))   [By the given definition]
 ≠ (|x1|, y1 + z1) + (|x2|, y2 + z2)   [∵ |x1 + x2| ≠ |x1| + |x2| in general]
∴ T(u + v) ≠ T(u) + T(v)
∴ T does not preserve vector addition. Hence T is not a linear transformation.
· ¸ µ¯ ¯ ¶
a b ¯ a b ¯¯
c. Given that T = ¯¯ , 0 = (ad − bc, 0)
c d c d ¯
· ¸ · ¸
a1 b1 a2 b2 ¡ ¢ ¡ ¢
∴ u= ;v = ∈ M 22 ⇒ T u = (a 1 d 1 − b 1 c 1 , 0) , T v = (a 2 d 2 − b 2 c 2 , 0)
c1 d1 c2 d2

Target AA
· ¸
¡ ¢ a1 + a2 b1 + b2
i. T u + v =T
c1 + c2 d1 + d2
µ¯ ¯ ¶
¯ a1 + a2 b1 + b2 ¯
= ¯¯ ¯,0
c1 + c2 d1 + d2 ¯

= ((a1 + a2)(d1 + d2) − (b1 + b2)(c1 + c2), 0)
= (a1d1 + a1d2 + a2d1 + a2d2 − b1c1 − b1c2 − b2c1 − b2c2, 0)
≠ (a1d1 − b1c1, 0) + (a2d2 − b2c2, 0)
∴ T(u + v) ≠ T(u) + T(v)
∴ T does not preserve vector addition. Hence T is not a linear transformation.

* Important:

â For a linear transformation, the formula of T must be linear, that is, each component must be of the form ax + by + cz; otherwise the mapping is not a linear transformation.

â If the formula of T contains some non-linear term such as a product, power, modulus, non-zero constant, or any other non-linear function, then T is never a linear transformation.

â For example, T : R^3 → R^2 defined by T(x, y, z) = (x − 2y, 4x + y + 3z) is a linear transformation, but T(x, y, z) = (xy, 4x + y + 3z), T(x, y, z) = (x − y, z^2 + 1), T(x, y, z) = (x + y − 2z, tan x) etc. are not linear transformations. (Verify! A quick numerical check is sketched below.)
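As a quick numerical sanity check of the two conditions (a supporting illustration, not a proof), one can test T(u + v) = T(u) + T(v) and T(αu) = αT(u) on random vectors. Below is a small sketch assuming Python with NumPy; the function names are chosen here for illustration only.

# Sketch: probe the two linearity conditions numerically on random vectors.
# A failed check disproves linearity; passing checks only support it.
import numpy as np

def T_linear(v):                       # T(x, y) = (x + 2y, 3x - y): linear
    x, y = v
    return np.array([x + 2*y, 3*x - y])

def T_modulus(v):                      # T(x, y, z) = (|x|, y + z): not linear
    x, y, z = v
    return np.array([abs(x), y + z])

def looks_linear(T, dim, trials=100):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        u, v, a = rng.normal(size=dim), rng.normal(size=dim), rng.normal()
        if not (np.allclose(T(u + v), T(u) + T(v)) and np.allclose(T(a*u), a*T(u))):
            return False
    return True

print(looks_linear(T_linear, 2))       # True
print(looks_linear(T_modulus, 3))      # False (the modulus breaks additivity)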

Illustration 5.2 Determine the linear transformation T : R^2 → R^3 such that T(1, 0) = (1, 2, 3) and T(1, 1) = (0, 1, 0). Also find T(2, 3). [Summer-2016]

Solution: To find a general formula for T : R^2 → R^3, we first represent an arbitrary vector (x, y) ∈ R^2 as a linear combination of (1, 0) and (1, 1). For this, consider

(x, y) = c1(1, 0) + c2(1, 1)  ⇒  c1 + c2 = x, c2 = y  ∴ c1 = x − y, c2 = y


Hence, (x, y) = (x − y)(1, 0) + y(1, 1)
⇒ T(x, y) = T[(x − y)(1, 0) + y(1, 1)]   [Applying T on both sides]
 = (x − y) T(1, 0) + y T(1, 1)   [∵ T is a linear transformation]
 = (x − y)(1, 2, 3) + y(0, 1, 0)   [∵ Given T(1, 0) = (1, 2, 3), T(1, 1) = (0, 1, 0)]
 = (x − y, 2x − 2y, 3x − 3y) + (0, y, 0)
∴ T(x, y) = (x − y, 2x − y, 3x − 3y)   Required formula.
Now put (x, y) = (2, 3): ∴ T(2, 3) = (−1, 1, −3)
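The coefficients c1, c2 above amount to solving a small linear system, so the same answer can be obtained numerically. A sketch assuming NumPy (the array names are illustrative):

# Sketch: write (2, 3) in terms of (1, 0) and (1, 1), then combine the given images.
import numpy as np

B = np.array([[1.0, 1.0],              # columns are the vectors (1, 0) and (1, 1)
              [0.0, 1.0]])
images = np.array([[1.0, 2.0, 3.0],    # T(1, 0)
                   [0.0, 1.0, 0.0]])   # T(1, 1)

c = np.linalg.solve(B, np.array([2.0, 3.0]))   # c = (x - y, y) = (-1, 3)
print(c @ images)                              # [-1.  1. -3.] = T(2, 3)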

5.3 Matrix Linear Transformation

â Let A be an m × n matrix. Its induced linear transformation T_A : R^n → R^m is defined as T_A(x) = Ax.

â Further, if T : R^n → R^m is a linear transformation, then there exists an m × n matrix A such that T = T_A, that is, T(x) = Ax.

â The matrix A is called the matrix of T or the standard matrix of T and is sometimes denoted by A = [T].

Illustration 5.3
a. Find the matrix of the linear transformation T : R^4 → R^3 defined by
T(w, x, y, z) = (w − 2x − y + 2z, −2w + 4x + 3y − z, −w + 2x + y − z).

b. Find the linear transformation induced by the matrix [−2 1 4; 3 5 7; 6 0 −1].

Solution:
a. Given T : R^4 → R^3, the induced (standard) matrix A is of order (3 × 4) and can be constructed by the following method:

â There are four unknowns w, x, y, z in the definition of T, so A has four columns.

â The formula of T gives three linear equations, so A has three rows.

â The induced matrix A is constructed by putting the coefficients of w, x, y, z in the rows respectively, as below:

A = [1 −2 −1 2; −2 4 3 −1; −1 2 1 −1] = [T]   (columns correspond to w, x, y, z)

b. Let A = [−2 1 4; 3 5 7; 6 0 −1].
Since A is a (3 × 3) matrix, the induced transformation is T_A : R^3 → R^3 and is defined by T_A(X) = AX, X ∈ R^3. Hence, for every X = (x, y, z) ∈ R^3,

T_A(x, y, z) = (−2x + y + 4z, 3x + 5y + 7z, 6x − z)
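Both directions of this illustration can be checked with a few lines of code. A sketch assuming NumPy (matrices as in parts a and b above):

# Sketch: the standard matrix acts by matrix-vector multiplication, T(X) = AX.
import numpy as np

A = np.array([[ 1, -2, -1,  2],        # coefficients of w, x, y, z, row by row
              [-2,  4,  3, -1],
              [-1,  2,  1, -1]])
print(A @ np.array([1, 0, 0, 0]))      # image of (1, 0, 0, 0) = first column of A

B = np.array([[-2, 1,  4],
              [ 3, 5,  7],
              [ 6, 0, -1]])
print(B @ np.array([1, 2, 3]))         # (-2x + y + 4z, 3x + 5y + 7z, 6x - z) at (1, 2, 3) -> [12 34 3]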

Exercise 5.1
1. Determine whether the following mappings are linear transformations or not:


a. T : R^3 → R^2, T(x, y, z) = (y, z)
b. T : R^2 → R^2, T(x, y) = (x + 2, y^2)
c. T : R^2 → R^2, T(x, y) = (x y, x)
d. T : R^2 → R^3, T(x, y) = (x + 3, 2y, x + y)
e. T : M22 → R^2, T([a b; c d]) = (ad + 1, b + c)
f. T : M22 → R^2, T([a b; c d]) = (2a − b − d, 0)

2. Determine the linear transformation T : R^2 → R^3 such that T(−1, 1) = (1, 1, 2) and T(3, −1) = (−2, 0, 1).

3. If S = {ê1, ê2, ê3} is the standard basis for R^3 and T : R^3 → R^3 is a linear transformation such that T(ê3) = 2ê1 + 3ê2 + 5ê3, T(ê2 + ê3) = ê1 and T(ê1 + ê2 + ê3) = ê2 − ê3, then find T(ê1 + 2ê2 + 3ê3).
[Hint: T(ê1 + 2ê2 + 3ê3) = T(ê3) + T(ê2 + ê3) + T(ê1 + ê2 + ê3)]

4. Let S = {u1, u2, u3}, where u1 = (1, 1, 1), u2 = (1, 1, 0), u3 = (1, 0, 0), be a basis for R^3, and let T : R^3 → R^2 be a linear transformation. Assume that T(u1) = (1, 2), T(u2) = (3, 4) and T(u3) = (5, 6). Find the formula for T(x1, x2, x3).

5. Find the standard matrix for the following linear transformations:

a. w1 = 2x1 − 3x2 + x3,  w2 = 3x1 + 5x2 − x3

b. w1 = x1,  w2 = x1 + x2,  w3 = x1 + x2 + x3,  w4 = x1 + x2 + x3 + x4

[Hint: Take W = T(X)]

Target AA
Answers

1. a. c. f. are LT.   2. T(x, y) = (1/2)(−x + y, x + 3y, 3x + 7y)   3. (3, 4, 4)

4. T(x, y, z) = (5x − 2y − 2z, 6x − 2y − 2z)   5. a. [2 −3 1; 3 5 −1]   b. [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1]

5.4 Composition of Linear Transformations

â If T : V → W and S : W → U are linear transformations, then the composition S ◦ T : V → U is also a linear transformation.

â The composition of two linear transformations is also denoted by ST.

â In particular, if T_A : R^n → R^k and T_B : R^k → R^m are induced by matrices A and B respectively, then (T_B ◦ T_A) : R^n → R^m is a composite linear transformation defined as (T_B ◦ T_A)(X) = (BA)X. Also the matrix of the composition is [T_B ◦ T_A] = [T_B][T_A] = BA.

Illustration 5.4 Show that T : R^2 → R^2 and S : R^2 → R^3 defined by T(a, b) = (a + b, b) and S(a, b) = (2a, b, a + 2b) are linear. Also find the formula for the composite transformation S ◦ T.

Solution: Given T(a, b) = (a + b, b) and S(a, b) = (2a, b, a + 2b). Clearly T and S are linear. (Verify!)
Consider the induced matrices of T and S, given by

[T] = [1 1; 0 1] = A,   [S] = [2 0; 0 1; 1 2] = B


The induced matrix of the composite transformation S ◦ T is

[S ◦ T] = [S][T] = BA = [2 0; 0 1; 1 2][1 1; 0 1] = [2 2; 0 1; 1 3] = C (say)

∴ The formula for S ◦ T : R^2 → R^3 is

(S ◦ T)(X) = CX, X ∈ R^2, that is, [2 2; 0 1; 1 3][a; b] = [2a + 2b; b; a + 3b]

∴ (S ◦ T)(a, b) = (2a + 2b, b, a + 3b)   Required formula.
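The composite formula can be cross-checked by multiplying the induced matrices. A sketch assuming NumPy:

# Sketch: [S o T] = [S][T] = BA, then apply it to a sample point (a, b).
import numpy as np

A = np.array([[1, 1],
              [0, 1]])                 # [T] for T(a, b) = (a + b, b)
B = np.array([[2, 0],
              [0, 1],
              [1, 2]])                 # [S] for S(a, b) = (2a, b, a + 2b)
C = B @ A
print(C)                               # [[2 2] [0 1] [1 3]]
print(C @ np.array([5, -1]))           # (2a + 2b, b, a + 3b) at (5, -1) -> [8 -1 2]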

5.5 Onto (Surjective) Linear Transformations

A linear transformation T : V → W is said to be onto (surjective) if for every w ∈ W there exists an element v ∈ V such that T(v) = w.

5.6 One-one (Injective) Linear Transformations

A linear transformation T : V → W is said to be one-one (injective) if ∀ u, v ∈ V : T(u) = T(v) ⇒ u = v, or equivalently, u ≠ v ⇒ T(u) ≠ T(v).

* Important:
Let T : R^n → R^m be a linear transformation and let A be its induced matrix. Then,

1. T is onto if AX = b has a solution for all b, that is, every row of the row echelon form of A contains a pivot.

2. T is one-one if AX = 0 has only the trivial solution, that is, every column of the row echelon form of A is a pivot column.

3. A linear transformation which is both one-one and onto is known as bijective.

Illustration 5.5 Check whether T : R^3 → R^3 defined by T(x, y, z) = (x + 3y, y, 2x + z) is linear. Is it one-one and onto? [Summer-2017]

Solution: Given T(x, y, z) = (x + 3y, y, 2x + z).
Let u = x 1 , y 1 , z 1 , v = x 2 , y 2 , z 2 ∈ R3 , α ∈ R
¡ ¢ ¡ ¢

¡ ¢ ¡ ¢
i. T u + v =T x 1 + x 2 , y 1 + y 2 , z 1 + z 2
¡ ¡ ¢ ¡ ¢ ¢ £ ¤
= (x 1 + x 2 ) + 3 y 1 + y 2 , y 1 + y 2 , 2 (x 1 + x 2 ) + (z 1 + z 2 ) By given definition
¡ ¢
= x 1 + x 2 + 3y 1 + 3y 2 , y 1 + y 2 , 2x 1 + 2x 2 + z 1 + z 2
¡ ¢ ¡ ¢
= x 1 + 3y 1 , y 1 , 2x 1 + z 1 + x 2 + 3y 2 , y 2 , 2x 2 + z 2
¡ ¢ ¡ ¢ ¡ ¢
∴ T u + v =T u + T v


ii. T(αu) = T(αx1, αy1, αz1)
 = (αx1 + 3αy1, αy1, 2αx1 + αz1)
 = α(x1 + 3y1, y1, 2x1 + z1)
∴ T(αu) = αT(u)

∴ T preserves vector addition and scalar multiplication. Hence T is a linear transformation.


Now for one-one and onto, we reduce the standard matrix of T to row echelon form:

A = [1 3 0; 0 1 0; 2 0 1] → R3 − 2R1
 ∼ [1 3 0; 0 1 0; 0 −6 1] → R3 + 6R2
 ∼ [1 3 0; 0 1 0; 0 0 1] = B

Observe that in the echelon form B every column and every row contains a pivot. Hence the given linear transformation is one-one and onto.
Note: Here T is both one-one and onto, hence it is bijective. (A rank-based check is sketched below.)
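Both tests reduce to counting pivots, i.e. the rank of the standard matrix. A sketch assuming NumPy:

# Sketch: T is one-one iff rank(A) = number of columns; onto iff rank(A) = number of rows.
import numpy as np

A = np.array([[1, 3, 0],
              [0, 1, 0],
              [2, 0, 1]])              # standard matrix of T(x, y, z) = (x + 3y, y, 2x + z)
r = np.linalg.matrix_rank(A)
print("one-one:", r == A.shape[1])     # True
print("onto   :", r == A.shape[0])     # True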

5.7 Range (Image) and Kernel

Let T : V → W be a linear transformation. Then

1. The range of T is defined as Range(T) = R(T) = {T(v) : v ∈ V} = Im(T).

2. The kernel of T is defined as ker(T) = {v ∈ V : T(v) = 0_W}.

* Important:
In particular, if T : R^n → R^m and A is its induced matrix, that is T(X) = AX, X ∈ R^n, then

1. Range of T: R(T) = {AX : X ∈ R^n} = col(A) = col(T)   [known as the column space of T]

2. Kernel of T: ker(T) = {X ∈ R^n : AX = 0} = nul(A) = nul(T)   [known as the null space of T]

Remark:

1. The range of T : V → W is a subspace of W and the kernel is a subspace of V.

2. For the zero linear transformation, that is T(v) = 0_W, ∀ v ∈ V: R(T) = {0_W} and ker(T) = V.

3. For the identity linear operator, that is T(v) = v, ∀ v ∈ V: R(T) = V and ker(T) = {0_V}.

Theorem 5.1 (Rank-Nullity theorem or Dimension theorem for linear transformations)
Let T : R^n → R^m be a linear transformation and let A be its induced matrix, that is T(X) = AX, X ∈ R^n. Then the rank and nullity of T are defined as the rank and nullity of A respectively. Hence,

rank(A) + nullity(A) = n = number of columns  ⇔  rank(T) + nullity(T) = dim R^n


Rank-Nullity theorem is also known as dimension theorem in context of rank (T ) = dim R (T ) and nullity (T ) =
dim [nul (T )]. Hence
dim [R (T )] + dim [nul (T )] = dim Rn

Illustration 5.6 Show that the mapping T : R^2 → R^3 defined by T(x, y) = (x + y, x − y, y) is a linear transformation. Find the range, null space (kernel), rank and nullity of T.

Solution: Given T(x, y) = (x + y, x − y, y). Clearly, T is a linear transformation. (Verify!)
The induced matrix of T is of order (3 × 2), given by A = [1 1; 1 −1; 0 1].
Reducing to row echelon form: A ∼ [1 1; 0 −2; 0 0].

1. Range: By definition, R(T) = {T(X) : X ∈ R^2} = {(x + y, x − y, y) : (x, y) ∈ R^2} = col(T) = col(A).

2. Null space (Kernel): By definition, ker(T) = {X ∈ R^2 : T(X) = 0} = {X ∈ R^2 : AX = 0}, that is, the kernel of T is the solution space of AX = 0. From the row echelon form of A, the system has only the trivial solution since ρ(A) = 2 = number of unknowns. Therefore x = 0, y = 0. Thus ker(T) = {0} = nul(A).

3. Rank: Rank of T = Rank of A = 2 (dimension of the column space of A, that is, the number of pivot columns).

4. Nullity: Nullity of T = Nullity of A = 0 (dimension of the null space of A, that is, the number of non-pivot columns).

Illustration 5.7 State the dimension theorem for linear transformations and find the rank and nullity of T_A, where T_A : R^6 → R^4 is multiplication by
A = [−1 2 0 4 5 −3; 3 −7 2 0 1 4; 2 −5 2 4 6 1; 4 −9 2 −4 −4 7]   [Winter-2017]

Solution: The statement of the dimension theorem for linear transformations is given in Theorem 5.1.
Now rank and nullity of T are defined to be the rank and nullity of A. Reducing A to row echelon form,
we get
 
−1 2 0 4 5 −3
 3 −7 2 0 1 4 
A =  → R 2 + 3R 1 ; R 3 + 2R 1 ; R 4 + 4R 1
 
 2 −5 2 4 6 1 
4 −9 2 −4 −4 7
 
−1 2 0 4 5 −3
 0 −1 2 12 16 −5 
∼  → R3 − R2 ; R4 − R2
 
 0 −1 2 12 16 5 
0 −1 2 12 16 5
 
−1 2 0 4 5 −3
0 −1 2 12 16 −5 
 
∼


 0 0 0 0 0 0 
0 0 0 0 0 0


â Rank of T = Rank of A = 2 (dimension of the column space of T).

â Nullity of T = Nullity of A = number of non-pivot columns = 4 (dimension of the null space of T).

â Observe that dim[col(T)] + dim[nul(T)] = 2 + 4 = 6 = dim R^6. Hence the dimension theorem for linear transformations is verified. (A one-line check is sketched below.)
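The rank-nullity count for this matrix is easy to confirm numerically. A sketch assuming NumPy:

# Sketch: rank + nullity = number of columns for the 4 x 6 matrix of Illustration 5.7.
import numpy as np

A = np.array([[-1,  2, 0,  4,  5, -3],
              [ 3, -7, 2,  0,  1,  4],
              [ 2, -5, 2,  4,  6,  1],
              [ 4, -9, 2, -4, -4,  7]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity, rank + nullity)   # 2 4 6, matching dim R^6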

Illustration 5.8 Let T : R3 → R3 be linearly transformation defined as T x, y, z = x + 2y − z, y + z, x + y − 2z .


¡ ¢ ¡ ¢

Find a basis and dimension of the range and kernel of T .

1 2 −1
 

Solution: Induced matrix for given linearly transformation is A =  0 1 1  .


1 1 −2
Basis for Range (Image) and Kernel of T , are the basis for column space and null space of A respectively.
Reducing A to row echelon form:

1 2 −1
 

A = 0 1 1  → R3 − R1
1 1 −2
1 2 −1
 

∼ 0 1 1  → R3 + R2
0 −1 −1
 
1 2 −1
∼ 0 1 1 =B
 

Target AA
0 0 0

1. Basis for Range of T is the set of columns of A corresponding to pivot columns of B, and is given by
{(1, 0, 1) , (2, 1, 1)} . Also dimension of range of T is 2.

2. A basis for the kernel (null space) of T is a basis for the null space of A, that is, the solution space of AX = 0, X ∈ R^3. From B, the system AX = 0 has a one-parameter non-trivial solution given by X = (3t, −t, t), t ∈ R. Hence a basis for the null space of T is {(3, −1, 1)}. Also the dimension of the kernel of T is 1.

Illustration 5.9 Let T : R3 → R3 be the LT defines as T x, y, z = (x + y − z, x − 2y + z, −2x − 2y + 2z)


¡ ¢
Powered by Prof. (Dr.) Rajesh M. Darji
a. Which of the vectors from the set {(1, 2, 3) , (1, 2, 1) , (−1, 1, 2)} belongs to ker (T ) ?

b. Which of the vectors from the set {(1, 2, −2) , (3, 5, 2) , (−2, 3, 4)} belongs to R (T ) ?

Exercise 5.2
1. Determine whether the given LT be one-onto-one (bijective) or not ?

a. T : R4 → R3 ; T (a, b, c, d ) = (a − 2b − c + 2d , −2a + 4b + 3c − d , −a + 2b + c − d )
b. T : R3 → R4 ; T (a, b, c) = (a + 3b + 2c, −a − b − c, 4b + 2c, a + 3b + 2c)
c. T : R4 → R4 ; T (a, b, c, d ) = (a + 2b − c + 2d , 2a + b + 3c + 2d , a − b + 2c + 2d , 2b + d )
d. T : R2 → R3 ; T (a, b) = (a + 2b, 2a + 3b, 3a + 4b)

2. Let T : R4 → R3 be the linearly transformation defined as


¡ ¢ ¡ ¢
T x, y, z, w = x − y + z + w, x + 2z − w, x + y + 3z − 3w

Find a basis and dimension of range and kernel of T.

3. Find the corresponding transformation and indicate the source (image) and the target (co-domain)
Euclidean spaces for the matrices:
2 0 2 1
 
£ ¤
a. 1 2 b.  1 1 −1 0 
0 1 −2 1


Answers

1. a. onto, not one-one b. Not one-one, Not onto c. one-one and onto d. one-one, not onto

2. Basis for Range: {(1, 1, 1) , (−1, 0, 1)} , dim = 2; Basis for kernel: {(−2, −1, 1, 0) , (1, 2, 0, 1)} , dim = 2.

3. a. T : R2 → R, T x, y = x + 2y, Source: R2 , Target: R


¡ ¢

b. T : R4 → R3 , T x, y, z, w = 2x + 2z + w, x + y − z, y − 2z + w , Source: R4 , Target: R3
¡ ¢ ¡ ¢

E E E

5.8 Inverse Linear Transformation (Isomorphism)

â A linearly transformation T : Rn → Rn is said to be invertible if there exist a linearly transforma-


tion T −1 : Rn → Rn such that T ◦ T −1 = T −1 ◦ T = I
where I : Rn → Rn is an identity linear operator.

â Here T and T −1 are called inverse linearly transformation of each other.


³ ´ ³ ´
â In this case, T X =Y ⇔ Y = T −1 X

â A linear transformation is invertible if and only if it is bijective (one-one and onto).

â If A is an induced matrix of T , then T is invertible if and only if matrix A is invertible.

Target AA
In this case formula for T −1 is given by
³ ´
T −1 X = A −1 X
£ −1 ¤
⇔ T = A −1

â An invertible (Bijective) transformation is


Rcalled LL
ECAIsomorphism
|
REAV D
â Two vector spaces
| R E DO
and W are said to be isomorphic if there exist an isomorphisam from v to W .

Illustration 5.10 Check whether T : R3 → R3 defined by T (x 1 , x 2 , x 3 ) = (3x 1 + x 3 , −2x 1 +x 2 , −x 1 +2x 2 +4x 3 )


Powered by Prof. (Dr.) Rajesh. M. Darji
is invertible (Isomorphism) or not. If invertible then find formula for T −1

Solution: Given T (x 1 , x 2 , x 3 ) = (3x 1 + x 3 , −2x 1 + x 2 , −x 1 + 2x 2 + 4x 3 ). The matrix for T is given by

3 0 1
 

A =  −2 1 0 
−1 2 4

We know that T is invertible if and only if A −1 exist, that is A should be non singular. Since
¯ ¯
¯ 3 0 1 ¯
¯ ¯
det (A) = ¯¯ −2 1 0 ¯¯ = 3 (4 − 0) − 0 + 1 (−4 + 1) = 9 6= 0
¯ −1 2 4 ¯

∴ A is non singular, that is A −1 exist. Hence T is invertible (Isomorphism) and formula for T −1 is
³ ´
T −1 X = A −1 X , X ∈ R3 . (5.1)

Now,
4 2 −1
 
1 1
A −1 = adj (A) =  8 13 −2 
|A| 9
−3 −6 3


∴ From (5.1),

T^(-1)(X) = (1/9)[4 2 −1; 8 13 −2; −3 −6 3][x1; x2; x3] = (1/9)[4x1 + 2x2 − x3; 8x1 + 13x2 − 2x3; −3x1 − 6x2 + 3x3]

∴ T^(-1)(x1, x2, x3) = ((4x1 + 2x2 − x3)/9, (8x1 + 13x2 − 2x3)/9, (−3x1 − 6x2 + 3x3)/9)
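The invertibility test and the inverse formula translate directly into code. A sketch assuming NumPy:

# Sketch: T is invertible iff det(A) != 0; then T^(-1)(X) = A^(-1) X.
import numpy as np

A = np.array([[ 3, 0, 1],
              [-2, 1, 0],
              [-1, 2, 4]], dtype=float)    # standard matrix of T in Illustration 5.10
print(np.linalg.det(A))                    # 9.0 (non-zero, so T is an isomorphism)

A_inv = np.linalg.inv(A)
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(A_inv @ (A @ x), x))     # True: T^(-1)(T(x)) = x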

Exercise 5.3
In each of the following case find T −1 , if exist ?

1. T : R2 → R2 , T x, y = (2y, 3x − y) 2. T : R3 → R3 ,
¡ ¢ ¡ ¢ ¡ ¢
T x, y, z = 2y + z, x − 4y, 3x

3. T : R3 → R3 , T (x 1 , x 2 , x 3 ) = (x 1 − x 2 , x 2 − x 1 , x 1 − x 3 )

Answers
x + 2y x ¢ ³z y z y z´
µ ¶
1. T −1 x, y = 2. T −1 x, y, z = , − + , x + − 3. T −1 does not exist.
¡ ¢ ¡
,
6 2 3 4 12 2 6


Chapter 6
Eigenvalues and Eigenvectors

6.1 Definition

â Let A be a square matrix of order n. A non-zero vector X ∈ R^n is said to be an eigenvector of A if there exists a scalar λ (real or complex) such that AX = λX.

â The scalar λ is called an eigenvalue or characteristic value of A, and the vector X is called an eigenvector or characteristic vector of A corresponding to the eigenvalue λ.

6.2 Method of Finding Eigenvalue and Eigenvector

Suppose λ is an eigenvalue corresponding to a non-zero eigenvector X. Then

AX = λX, X ≠ 0
⇒ AX = λIX, where I is the identity matrix.
⇒ AX − λIX = 0
⇒ (A − λI)X = 0, X ≠ 0   (6.1)

This is a homogeneous system of equations which must have a non-trivial solution. Hence

det(A − λI) = |A − λI| = 0   (6.2)

â Equation (6.2) is called the characteristic equation or characteristic polynomial of the matrix A.

â On solving equation (6.2), we get n eigenvalues λ1, λ2, λ3, ..., λn.

â Solving equation (6.1) for each value of λ, we get the corresponding eigenvectors.

6.3 Properties of Eigenvalues

1. The set of all eigenvalues of A is called the spectrum of A.

2. Trace of A = Sum of all eigenvalues of A.

3. If λ1 , λ2 , λ3 .....λn are the eigenvalues of A then λ1 × λ2 × λ3 × ..... × λn = | A | = det (A) .

4. The eigenvalues of upper or lower triangular matrix, hence the diagonal matrix are the elements of
its main diagonal.
1
5. If λ is an eigenvalues of a non singular matrix A then is an eigenvalue of A −1 .
λ
6. If λ is an eigenvalues of A then kλ is an eigenvalue of k A.


7. If λ is an eigenvalues of A then λm is an eigenvalue of A m where m ∈ N.

8. Spectral shift: If λ is an eigenvalue of A, then λ − k is eigenvalue of matrix (A − k I ) .

9. Matrices A and A T have same eigenvalues.

10. A matrix is a non singular if and only if λi 6= 0, ∀i = 1, 2, 3.....n. OR A square matrix A is invertible if
and only if λ = 0 is not an eigenvalue of A.
|A|
11. If λ is an eigenvalues of a non singular matrix A then is an eigenvalue of adj A.
λ
12. The eigenvalues of symmetric matrix are real.

6.4 Properties of Eigenvectors

1. The eigenvector corresponding to the eigenvalue is not unique. That is if X is an eigenvector corre-
sponding to the eigenvalue λ so is k X , for the scalark 6= 0 .

2. If λ1 , λ2 , λ3 .....λn are the distinct eigenvalues of an (n × n) matrix then the corresponding eigenvectors
X 1 , X 2 , X 3 .......X n are linearly independent.

3. When two or more eigenvalues are equal, it may or may not be possible to get linearly independent eigenvectors corresponding to the repeated eigenvalues.

4. eigenvalue may be zero but eigenvector can not be zero.

Target AA
5. All eigenvectors of a symmetric matrix are always linearly independent and orthogonal.
n o
6. Eigen space: Let λ be an eigenvalue of the matrix A then the set E λ = X : AX = λX is called the eigen
space of λ.

O | ECAL(AL− λI ) X = 0 or null space of the matrix transfor-


In other words, the solution space of theRsystem
ED space.
Reigen
mation (A − λI ) is called|the
READ
* Important:
1. If A is a square matrix of order (2 × 2), then its characteristic equation is given by

λ^2 − S1 λ + |A| = 0,

where S1 = trace(A) = sum of diagonal elements = a11 + a22, and |A| = det(A).

2. If A is a square matrix of order (3 × 3), then its characteristic equation is given by

λ^3 − S1 λ^2 + S2 λ − |A| = 0,

where S1 = trace(A) = sum of diagonal elements = a11 + a22 + a33,
S2 = sum of minors of the diagonal elements = M11 + M22 + M33,
|A| = det(A).

3. Cramer's Rule:

a1 x + b1 y + c1 z = 0 and a2 x + b2 y + c2 z = 0  ⇒  x / |b1 c1; b2 c2| = −y / |a1 c1; a2 c2| = z / |a1 b1; a2 b2| = t (say), t ∈ R


Illustration 6.1 Find the eigenvalues and eigenvectors of the matrix [14 −10; 5 −1].

Solution: Suppose A = [14 −10; 5 −1].


If λ be the eigenvalue of A corresponding to non zero eigenvector X , then
³ ´
AX = λX , X 6= 0 ∈ R2 ⇒ (A − λI ) X = 0 (6.3)

The characteristic polynomial of system (6.3) is

det (A − λI ) = 0
⇒ λ2 − S 1 λ + |A| = 0,
where S 1 = trace (A) = a 11 + a 22 = 14 − 1 = 13,
|A| = det (A) = −14 + 50 = 36
2
∴ λ − 13λ + 36 = 0 ⇒ (λ − 4) (λ − 9) = 0
∴ λ = 4, 9 = λ1 , λ2 (Say)
∴ eigenvalues of A are 4, 9.

Now, the eigenvectors of A are given by equation (6.3):

(A − λI)X = 0
∴ [14 − λ  −10; 5  −1 − λ][x; y] = [0; 0]   (6.4)

λ = λ1 = 4: Substituting λ = 4 in equation (6.4), we get [10 −10; 5 −5][x; y] = [0; 0]. This yields one equation

10x − 10y = 0  ⇒  x = y = t (say), t ∈ R

The corresponding eigenvector is obtained by taking any non-zero value of t. Without loss of generality (for simplicity), we take t = 1. Hence the eigenvector for λ1 = 4 is X1 = (1, 1).

λ = λ2 = 9: Substituting λ = 9 in equation (6.4), we get [5 −10; 5 −10][x; y] = [0; 0]  ⇒  5x − 10y = 0  ∴ x/2 = y = t (say), t ∈ R

For t = 1, the eigenvector corresponding to λ2 = 9 is X2 = (2, 1). Thus,

λ1 = 4 → X1 = (1, 1),   λ2 = 9 → X2 = (2, 1)
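The same eigenvalues and eigenvectors can be obtained numerically. A sketch assuming NumPy (the returned eigenvectors are unit vectors, hence scalar multiples of X1 and X2, and may appear in a different order):

# Sketch: eigenvalues/eigenvectors of [[14, -10], [5, -1]].
import numpy as np

A = np.array([[14.0, -10.0],
              [ 5.0,  -1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)     # 4 and 9
print(vecs)     # columns proportional to (1, 1) and (2, 1)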

Illustration 6.2 Find the eigenvalues and a basis for each eigenspace of the matrix A = [3 −1 0; −1 2 −1; 0 −1 3]. [Summer-2016]


Solution: If λ be the eigenvalue of A corresponding to eigenvector X ∈ R3 , then the characteristic equation


is,

det (A − λI ) = 0 ⇒ λ3 − S 1 λ2 + S 2 λ − |A| = 0 (6.5)

where S 1 =trace (A) = Sum of diagonal elements = a 11 + a 22 + a 33


= 3 + 2 + 3 = 8,
S 2 =Sum of minors diagonal elements = M 11 + M 22 + M 33
¯ ¯ ¯ ¯ ¯ ¯
¯ 2 −1 ¯ ¯ 3 0 ¯ ¯ 3 −1 ¯

¯ ¯ + ¯ ¯ + ¯ ¯ = 5 + 9 + 5 = 19,
−1 3 ¯ ¯ 0 3 ¯ ¯ −1 2 ¯
¯ ¯
¯ 3 −1 0 ¯
¯ ¯
|A| = ¯¯ −1 2 −1 ¯¯ = 3 (6 − 1) + 1 (−3 − 0) + 0 = 15 − 3 = 12.
¯ 0 −1 3 ¯
Substituting in (6.5), we get characteristic equation

λ3 − 8λ2 + 19λ − 12 = 0 (6.6)

Method to find roots of characteristic equation: Equation (6.6) has cubic polynomial, so it may has
three real roots. To find these roots one of the following method:

â Method 1: Put different values of λ say 0, 1, −1, 2, −2, 3, −3.... until equation is satisfied (that is LHS
becomes 0).

Target AA
Observe that for λ = 1, equation (6.6) is satisfied. Hence one root is λ = 1. That is one factor is (λ−1).

â To find other roots adjust second factor of characteristic polynomial as

λ3 − 8λ2 + 19λ − 12 = λ2 (λ − 1) − 7λ (λ − 1) + 12 (λ − 1)
C−A1)LL
¡ 2
λ − 7λ + 12
¢
|R E(λ
=

A D | R E DO = (λ − 1) (λ − 3) (λ − 4)
R E
â Method 2: Since one value of λ is 1 (i.e. one factor is (λ − 1)), second factor can be obtained using
Powered
Synthetic Division (click here)byas follow: Prof. (Dr.) Rajesh M. Darji
1 1 −8 19 −12
0 1 −7 12
1 −7 12 0

∴ Second factor is λ2 − 7λ + 12. Hence, λ3 − 8λ2 + 19λ − 12 = (λ − 1) λ2 − 7λ + 12 .


¡ ¢

From (6.6),

λ3 − 8λ2 + 19λ − 12 = 0
∴ (λ − 1) λ2 − 7λ + 12 = 0
¡ ¢

∴ (λ − 1) (λ − 3) (λ − 4) = 0
∴ λ = 1, 3, 4 = λ1 , λ2 , λ3
¡ ¢
Say
∴ eigenvalues of A are 1, 3, 4.

Now the eigenvector X 6= 0 ∈ R3 is given by homogeneous system,


³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3
3 −1 0 1 0 0 x 0
       

⇒  −1 2 −1  − λ  0 1 0   y  =  0 
0 −1 3 0 0 1 z 0


3 −1 0 λ 0 0 x 0
       

∴  −1 2 −1  −  0 λ 0   y  =  0 
0 −1 3 0 0 λ z 0
3 − λ −1 0 x 0
    

∴  −1 2 − λ −1   y = 0  (6.7)
0 −1 3 − λ z 0

2 −1 0 x 0
    

λ = λ1 = 1 : From (6.7),  −1 1 −1   y  =  0 .
0 −1 2 z 0
Since the system has non trivial solution, this yields exactly two different equations. Solution of this
system is obtained by considering any two different equations out of three rows. Considering 1st and 3rd
row, we get
y
2x − y = 0, −y + 2z = 0 ⇒ 2x = y = 2z ∴ x = = z = t , t ∈ R
2
1
 

For t = 1, eigenvector corresponding to eigenvalue λ = 1 is X 1 =  2  .


1
Eigen space: By definition, eigen space of λ1 = 1 is the solution space of the system (A − λ1 I ) X = 0. That is
the set of all one parametric solutions of the system (A − λ1 I ) X = 0. Hence
n o
E λ1 = X ∈ R3 : (A − λ1 I ) X = 0 = {(t , 2t , t ) : t ∈ R}
∴ E λ1 = span {(1, 2, 1)}

Target AA
Since the singleton set {(1, 2, 1)} is linearly independent (because vector in not zero) and it spans the eigen
space E λ1 , hence it is basis for eigen space E λ1 .
∴ Basis for eigen space E λ1 = {(1, 2, 1)}
â It is worth to note that the basis for eigen space is the set of corresponding eigenvector.
λ = λ2 = 3: From (6.7), [0 −1 0; −1 −1 −1; 0 −1 0][x; y; z] = [0; 0; 0]
∴ From 1st and 2nd equation,
Powered by Prof. (Dr.) Rajesh M. Darji
−y = 0, −x + y − z = 0 ⇒ y = 0, x = −z = t , t ∈R

1
 

For t = 1, eigenvector for λ2 = 3 is X 2 =  0 


−1
â Eigen space E λ2 = {(t , 0, −t ) : t ∈ R} and basis for eigen space is {(1, 0, −1)} .

−1 −1 0 x 0
    

λ = λ3 = 4 : From (6.7), ∴  −1 −2 −1   y  =  0 
0 −1 −1 z 0
∴ From 1st and 3rd equations,

−x − y = 0, −y − z = 0 ⇒ −x = y = −z = t , t ∈R
 
−1
For t = 1, eigenvector for λ3 = 4 is X 3 =  1 
−1
â Eigen space E λ2 = {(−t , t , −t ) : t ∈ R} and basis for eigen space is {(−1, 1, −1)} .

Important deductions:

Using properties of eigenvalue [See section 6.3], we have following important deductions:


1. Spectrum of A: {1, 3, 4}.

2. λ1 + λ2 + λ3 = 1 + 3 + 4 = 8 = Trace of A.

3. λ1 × λ2 × λ3 = 1 × 3 × 4 = 12 = det (A) .

4. Here λ = 1, 3, 4 are eigenvalues of A, so

i. λ = 0 is not an eigenvalue of A. Hence A is non singular matrix and eigenvalues of A −1 are


1 1
λ−1 = 1, , .
3 4
ii. eigenvalues of 3A are 3λ = 3, 9, 12, of −2A are −2λ = −2, −6, −8 and so on.
iii. eigenvalues of A 2 are λ2 = 1, 9, 16, of A 3 are λ3 = 1, 27, 64 and so on.
iv. eigenvalues of A − 3I are λ − 3 = −2, 0, 1, of A + 4I are λ + 4 = −5, 7, 8 and so on.
v. eigenvalues of A T are 1, 3, 4.
|A| 12 12 12
vi. eigenvalues of adj (A) are = , , = 12, 4, 3.
λ 1 3 4
5. Since all eigenvalues are distinct, corresponding eigenvectors X 1 , X 2 , X 3 are always linearly indepen-
dent.
¡ ¢ ¡ ¢ ¡ ¢
6. Dimensions of eigen spaces are, dim E λ1 = 1, dim E λ2 = 1, dim E λ3 = 1.
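A few of these deductions are easy to confirm numerically for the matrix of Illustration 6.2. A sketch assuming NumPy:

# Sketch: sum of eigenvalues = trace, product = determinant, eig(A^-1) = 1/eig(A).
import numpy as np

A = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  3.0]])
vals = np.linalg.eigvals(A)
print(np.sort(vals))                                    # [1. 3. 4.]
print(np.isclose(vals.sum(), np.trace(A)))              # True (both equal 8)
print(np.isclose(vals.prod(), np.linalg.det(A)))        # True (both equal 12)
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))     # [0.25, 0.333..., 1.0]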

1 2 2
 

Illustration 6.3 Find the eigenvalues and eigenvectors of the matrix  0 2 1  . [Summer-2017]
−1 2 2

where
Target AA
Solution: If λ be the eigenvalue of A corresponding to eigenvector X ∈ R3 , then the characteristic equation
is,

det (A − λI ) = 0 ⇒ λ3 − S 1 λ2 + S 2 λ − |A| = 0

S 1 =trace (A) = 1 + 2 + 2 = 5,
O
¯ ED ¯ ¯ | R ECALL
(6.8)

A D |R ¯ ¯ ¯
ME
¯ 2 1 ¯ ¯ 1 2 ¯ ¯ 1 2 ¯
S 2 =M 11 +R 22 + M 33 = ¯ 2 2 ¯ ¯ −1 2 ¯ ¯ 0 2 ¯ = 2 + 4 + 2 = 8,
¯ ¯ + ¯ ¯ + ¯ ¯

Prof. (Dr.) Rajesh M. Darji


¯ ¯
¯ 1 2 2 ¯
¯ Powered
¯ by
|A| = ¯¯ 0 2 1 ¯¯ = 1 (4 − 2) − 2 (0 + 1) + 2 (0 + 2) = 2 − 2 + 4 = 4.
¯ −1 2 2 ¯
Substituting in (6.8), we get

λ3 − 5λ2 + 8λ − 4 = 0
∴ λ = 1, λ2 − 4λ + 4 = 0
∴ λ = 1, (λ − 2)2 = 0
∴ λ = 1, 2, 2 = λ1 , λ2 , λ3
¡ ¢
Say
¡ ¢
∴ Egien values of A are 1,2,2 Repeated eigenvalues

Now the eigenvector X 6= 0 ∈ R3 is given by homogeneous system,


  0
1−λ 2 2 x
 

(A − λI ) X = 0 ⇒  0 2−λ 1   y  =  0 (6.9)
 
−1 2 2−λ z 0
  0
0 2 2 x


λ = λ1 = 1 : From (6.9),  0 1 1   y  =  0
 
−1 2 1 z 0
∴ From 2nd and 3rd equation,
y + z = 0, −x + 2y + z = 0 ⇒ x = y = −z = t t ∈R


1
 

For t = 1, eigenvector corresponding to λ1 = 1 is X 1 =  1  .


−1
λ = λ2 = λ3 = 2 :
(Note that for two equal eigenvalues, there may or may not exist two linearly independent eigenvectors.
That is for two same eigenvalues, corresponding eigenvectors are may be two or one.)
−1 2 2 x 0
    

From (6.9),  0 0 1   y  =  0 
−1 2 0 z 0
x
∴ From 2nd and 3rd equation, z = 0, −x + 2y = 0 ⇒ z = 0, = y = t, t ∈ R
2
2
 

For t = 1, we get one eigenvector X 2 =  1  .


0
Observe that if we take any other value of t , we obtain an eigenvector which is constant multiple of X 2 , that
is linearly dependent with X 2 . Thus there exist only one linearly independent eigenvector corresponding
two repeated eigenvalue λ2 = λ3 = 2. Hence,
2
 

λ2 = λ3 = 2 → X 2 =  1 
0

5 0 1
 

Illustration 6.4 Find eigenvalues and eigenvectors of the matrix A =  1 1 0  .

Target AA
−7 1 0
Solution: eigenvalue of A is given by characteristic equation,

det (A − λI ) = 0
∴ λ3 − 6λ2 + 12λ − 8 = 0 (λ − 2) λ2 − 4λ2 + 4L= 0
¡ ¢

¡ | RECA L
∴ (λ − 2)3 = 0 λ = 2, D O
¢
⇒ R E
2, 2 All eigenvalues are equal
READ |
Corresponding to three equal eigenvalues, the linearly independent eigenvectors may be one or two or
three, given by
Powered by Prof. (Dr.) Rajesh M. Darji

5−λ 0 1

x
 
0

³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  1 1−λ 0  y  =  0 
−7 1 0−λ z 0
3 0 1 x 0
    

For λ = 2,  1 −1 0   y  = 0 

−7 1 −2 z 0
z
From 1st and 2nd equation, 3x + z = 0, x−y =0 x = y = − = t , t ∈ R.

3
For t = 1, we have only one linearly independent eigenvector corresponding to three repeated eigenvalues
1
 

as X =  1 
−3

2 1 1
 

Illustration 6.5 Find eigenvalues and eigenvectors of the matrix A =  0 1 0  .


0 0 1
Solution: The given matrix is an upper triangular matrix, so its eigenvalues are its main diagonal elements. That is, λ = 2, 1, 1 are the eigenvalues of A. Now the eigenvectors are given by
2−λ 1 1 x 0
    
³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  0 1−λ 0  y  =  0 
0 0 1−λ z 0


0 1 1 x 0
    

λ = λ1 = 2 :  0 −1 0   y  =  0 .
0 0 −1 z 0
1
 

From 2nd and 3rd equation, y = z = 0, x = t, t ∈ R Hence, λ1 = 2 → X 1 =  0 


0
1 1 1 x 0
    

λ = λ2 = λ3 = 1 :  0 0 0   y  =  0 .
0 0 0 z 0
From 1st equation (2nd and 3rd columns are non pivot),

x + y + z = 0, y = t1 , z = t2 ⇒ x = −t 1 , −t 2 , y = t 1 , z = t 2 , t1 , t2 ∈ R

Since, for repeated eigenvalue (two same eigenvalues), we get two parametric solution. Hence we can find
two linearly independent eigenvectors, by assuming the values of parameters t 1 , t 2 as follow:
   
−1 −1
t 1 = 1, t 2 = 0 ⇒ X 2 =  1 , and t 1 = 0, t 2 = 1 ⇒ X3 = 0 
0 −1

Thus two linearly independent eigenvectors corresponding to repeated eigenvalues like


   
−1 −1
λ2 = λ3 = 1 → X 2 =  1  , X 3 =  0 
0 −1

Illustration 6.6 Find the eigenvalues and eigenvectors of the matrix A = [3 −1 0; −1 2 −1; 0 −1 3].

Solution: Here the given matrix is symmetric, so it always has three linearly independent eigenvectors irrespective of the eigenvalues. The characteristic equation is

det(A − λI) = 0  ∴ λ^3 − 8λ^2 + 19λ − 12 = 0  ⇒  λ = 1, 3, 4 = λ1, λ2, λ3

Now, eigenvectors are given by

3−λ 0 x 0
    
³ ´ −1
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  −1 2 − λ −1   y  =  0 
0 −1 3 − λ z 0

2 −1 0 x 0
    

λ = λ1 = 1 :  −1 1 −1   y  =  0 
0 −1 2 z 0
1
 
y
From 1st and 3rd equation, 2x = y = 2z ∴ x = = z = t , t ∈ R. Hence, λ1 = 1 → X 1 =  2 
2
1
0 −1 0 x 0
    

λ = λ2 = 3 :  −1 −1 −1   y  =  0  .
0 −1 0 z 0
1
 

From 1st and 2nd equation, y = 0, x = −z = t , t ∈ R. Hence, λ2 = 3 → X 1 =  0 


−1
0 x 0
    
−1 −1
λ = λ3 = 4 :  −1 −2 −1   y  =  0  .
0 −1 −1 z 0


1
 

From 1st and 3rd equation, x = −y = z = t , t ∈ R. Hence, λ3 = 4 → X 3 =  −1 


−1
Note: Since the given matrix is symmetric, its eigenvectors are always linearly independent and pairwise orthogonal. That is, X1 · X2 = X2 · X3 = X3 · X1 = 0. (Verify!)
0 1 1
 

Illustration 6.7 Find eigenvalues and eigenvectors of the matrix A =  1 0 1  .


1 1 0
Solution: Given matrix is symmetric. Hence it has three linearly independent pair wise orthogonal eigen-
vectors.
Characteristic equation of A:

det (A − λI ) = 0 ⇒ λ3 − 3λ − 2 = 0 ⇒ λ = 2, −1, −1

eigenvectors are given by,


1 1 x 0
    
³ ´ −λ
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  1 −λ 1  y  =  0 
1 1 −λ z 0

1 1 x 0
    
−2
λ = λ1 = 2 :  1 −2 1  y  =  0 . From 1st and 2nd equation,
1 1 −2 z 0
)
−2x + y + z = 0 x y z

Target AA
£ ¤
⇒ ¯ ¯ = −¯ ¯=¯ ¯ By Cramer’s rule
x − 2y + z = 0 ¯ 1 1 ¯
¯ ¯
¯ −2 1
¯
¯ ¯ −2 1
¯ ¯
¯
¯
¯ −2 1 ¯ ¯ 1 1 ¯ ¯ 1 −2 ¯

1
 
x y z
∴ = = ⇒ x = y = z = t, t ∈ R Hence, λ1 = 2 → X 1 =  1 
3 3 3
CA L L
| RE
1
E
R x D O
1 |1
RE1AD 0
    

λ = λ2 = λ3 = − 1 :  1 1 1   y  =  0 .
1 1 1
Powered by
z
Prof. (Dr.) Rajesh M. Darji
0
∴ x + y + z = 0, y = t 1 , z = t 2 ⇒ x = −t 1 − t 2 , y = t 1 , z = t 2 , t1 , t2 ∈ R

Since A is symmetric, it has three linearly independent and pairwise orthogonal eigenvectors. The second vector
 
−1
is given by taking t 1 = 1 and t 2 = 0 as X 2 =  1  .
0
â To find third vector X 3 , we select t 1 and t 2 such that X 2 · X 3 = 0. This can be achieve by taking general
−t 1 − t 2
 

form of eigenvector X 3 , that is X 3 =  t1 .


t2
Now, X2·X3 =0
∴ (−1, 1, 0) · (−t 1 − t 2 , t 1 , t 2 ) = 0
∴ (−1) (−t 1 − t 2 ) + (1) (t 1 ) + (0) (t 2 ) = 0
∴ 2t 1 + t 2 = 0 ⇒ t 2 = −2t 1
1
 

Thus for third vector if we take t 1 = 1 then t 2 = −2. Hence, X 3 =  1  .


−2

1
   
−1
∴ λ2 = λ3 = −1 → X 2 =  1  , X 3 =  1 
0 −2


6.5 Algebraic and Geometric Multiplicity of an eigenvalue

â The number of times an eigenvalue λ exist, is called an algebraic multiplicity of λ and is denoted by
multa (λ).

â The dimension of eigen space of λ is called geometric multiplicity of λ and is denoted by multg (λ).

0 1 1
 

Illustration 6.8 Determine the algebraic and geometric multiplicity of  1 0 1  . [Winter-2016]


1 1 0

Solution: To determine the algebraic and geometric multiplicity first find eigenvalues and eigenvectors.
For given matrix we have obtained eigenvalues and eigenvectors in Illustration 6.7, and are

1 1
     
−1
λ1 = 2 → X 1 =  1  λ2 = λ3 = −1 → X 2 =  1  , X 3 =  1 
1 0 −2

By definition,

1. eigenvalue λ = 2 exist one time and dimension of eigen space (number of corresponding eigenvector)
is also one.
∴ Algebraic multiplicity of 2 = multa (2) = 1 and Geometric multiplicity of 2 = multg (2) = 1.

2. eigenvalue λ = −1 exist two time and dimension of eigen space (number of corresponding eigenvec-

Target AA
tor) is also two.
∴ Algebraic multiplicity of −1 = multa (−1) = 2 and Geometric multiplicity of −1 = multg (−1) = 2.

Note:

1. In Illustration 6.1, =E
multa (4) = multa (9)R 1,CAL L g (4) = multg (9) = 1.
mult
|
A D | R E DO
2. In IllustrationR E
6.2, multa (1) = multa (3) = multa (4) = 1, multg (1) = multg (3) = multg (4) = 1.

3. In Illustration 6.3, multa (1) = 1, multa (2) = 2,


Powered by Prof. (Dr.)multRajesh
(1) = mult (2) = 1.
g
M. Darji g

4. In Illustration 6.4, multa (2) = 3, multg (2) = 1.

5. In Illustration 6.5, multa (2) = 1, multa (1) = 2, multg (2) = 1, multg (1) = 2.

6. In Illustration 6.6, multa (1) = multa (3) = multa (4) = 1, multg (1) = multg (3) = multg (4) = 1.

Exercise 6.1

1. Find the eigenvalues, eigenvectors and hence the basis for the eigen space for the following matrices:
· ¸ · ¸ · ¸
0 3 1 0 3 0
a. b. c.
4 0 0 1 8 −1

2. Non-symmetric matrix and non repeated eigenvalues:

1 0 −1 4 6 6
   

a.  1 2 1  [Winter-2015] b.  1 3 2 
2 2 3 −1 −4 −3

3. Non-symmetric matrix and repeated eigenvalues:

1 0 0 2 1 0 4 6 6
     

a.  2 0 1  b.  0 2 1  c.  1 3 2 
3 1 0 0 0 2 −1 −5 −2


4. Symmetric matrix and non repeated eigenvalues:

−2 5 4 5 0 1
   

a.  5 7 5  b.  0 −2 0 
4 5 −2 1 0 5

5. Symmetric matrix and repeated eigenvalues:

1 2 3 3 1 1
   

a.  2 4 6  b.  1 3 −1 
3 6 9 1 −1 3

−5 4 34
 

6. Find the eigenvalues of A =  0 0 4  . Is A invertible ? [Summer-2016]


0 0 4

0 1 0
 

7. Determine the algebraic and geometric multiplicity of A =  0 0 1 . [Winter-2015]


1 −3 3

1
8. If λ is an eigenvalues of an orthogonal matrix A, prove that is also an eigenvalue of A.
λ
[Hint: For orthogonal matrix A −1 = A T ]

9. Let A be a 6 × 6 matrix with the characteristic equation λ2 (λ − 1) (λ − 2)3 = 0. What are the possible

Target AA
dimensions for eigen spaces for A ?

Answers
· p ¸ · p ¸ · ¸ · ¸ · ¸ · ¸
2 2 3/2 − 3/2 1 0 3 0
1. a. p , − p → , b. 1, 1 → , c. −1, 3 → ,
1 1 0 1 0 4
3 3
CA  L L
 DO| RE
E
R −1
EA |
6 0 3
       
−1 D −2
2. a. 1, 2, 3 →  R 1  ,  1  ,  1  b. −1, 1, 4 →  2  ,  −1  ,  1 
0 2 2 −7 1 −1
 Powered
 by Prof. (Dr.)
 Rajesh
 M.
 Darji
0 0 1 4 3
   

3. a. −1 → −1 , 1, 1 → 1 
   b. 2, 2, 2 → 0 
 c. 1 →  1 , 2, 2 →  1 
1 1 0 −3 −2

1 1 0 1
           
−1 −1
4. a. −6, −3, 12 →  0  ,  −1  ,  2  b. 6, −2, 4 →  0  ,  1  ,  0 
1 1 1 1 0 1

1 1 1
           
−3 −1 −1
5. a. 0, 0, 14 →  0  ,  5  ,  2  b. 1, 4, 4 →  1  ,  1  ,  −1 
1 −3 3 1 0 2

1

6. λ = −5, 0, 4, not invertible as λ = 0 is eigenvalue. 7. 1, 1, 1 →  1  , multa (1) = 3, multg (1) = 1


1

9. λ = 0 → dim E λ = 1 or 2, λ = 1 → dim E λ = 1, λ = 2 → dim E λ = 1 or 2 or 3.


6.6 Cayley-Hamilton Theorem 1

Every square matrix satisfies its own characteristic equation.

â For (2 × 2) matrix A, λ2 − S 1 λ + |A| = 0 ⇒ A 2 − S 1 A + |A| I 2 = 0.

â For a (3 × 3) matrix A, λ^3 − S1 λ^2 + S2 λ − |A| = 0 ⇒ A^3 − S1 A^2 + S2 A − |A| I3 = 0.


where S 1 = trace (A) , S 2 = M 11 + M 22 + M 33 , |A| = det (A) .
Illustration 6.9 Verify the Cayley-Hamilton theorem for A = [1 3; 2 4]. Hence find A^3 and A^(-1).

Solution: The characteristic equation of given matrix is λ2 − 5λ − 2 = 0.


∴ By Cayley-Hemilton theorem we have

A 2 − 5A − 2I 2 = 0 (6.10)

Verification:
· ¸ · ¸· ¸ · ¸
1 3 2 1 3 1 3 7 15
A= ⇒ A = AA = =
2 4 2 4 2 4 10 22
· ¸ · ¸ · ¸
7 15 1 3 1 0
∴ A 2 − 5A − 2I 2 = −5 −2
10 22 2 4 0 1
· ¸ · ¸ · ¸
7 15 5 15 2 0

Target AA
= − −
10 22 10 20 0 2
· ¸ · ¸
7 − 5 − 2 15 − 15 − 0 0 0
= =
10 − 10 − 0 22 − 20 − 2 0 0
=0
Hence, 2
A − 5A − 2I 2 = 0 | RE
∴ CALLtheorem is verified.
Cayley - Hemilton

EAD | R E DO
R equation (6.10) by A, we get
To find A 3 : Multiplying

Powered A
3 2
Prof.
by − 5A − 2A = 0
·
(Dr.)
¸
⇒ ARajesh
·
= 5A + 2A M. Darji
¸ · ¸
3 2

7 15 1 3 37 81
∴ A3 = 5 +2 =
10 22 2 4 54 118

To find A −1 : Multiplying equation (6.10) by A, we get

1
A − 5I 2 − 2A −1 = 0 ⇒ A −1 = (A − 5I 2 )
2
µ· ¸ · ¸¶ · ¸
−1 1 1 3 1 0 1 −4 3
∴ A = −5 =
2 2 4 0 1 2 2 −1
· ¸
−2 3/2
∴ A −1 =
1 −1/2
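The verification and the A^(-1) formula can be cross-checked in a few lines. A sketch assuming NumPy (np.poly returns the characteristic-polynomial coefficients of a square matrix):

# Sketch: verify A^2 - 5A - 2I = 0 and A^(-1) = (A - 5I)/2 for A = [1 3; 2 4].
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 4.0]])
I2 = np.eye(2)
print(np.poly(A))                                     # [ 1. -5. -2.] -> lambda^2 - 5*lambda - 2
print(np.allclose(A @ A - 5*A - 2*I2, 0))             # True (Cayley-Hamilton)
print(np.allclose((A - 5*I2) / 2, np.linalg.inv(A)))  # True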

2 1 1
 

Illustration 6.10 Using Cayley-Hemilton theorem find A −1 if A =  0 1 0  . Hence find the matrix
1 1 2
represented by A 8 − 5A 7 + 7A 6 − 3A 5 + A 4 − 5A 3 + 8A 2 − 2A + I .
1
Arthur Cayley; British, 1821-1895 and William Rowan Hamilton; Irish, 1805–1865.


Solution: Characteristic equation of given matrix is λ3 − 5λ2 + 7λ − 3 = 0. Hence by Cayley-Hemilton the-


orem
A 3 − 5A 2 + 7A − 3I 3 = 0 (6.11)
Multiplying both sides by A −1 , we get
1¡ 2
A 2 − 5A + 7I 3 − 3A −1 = 0 ⇒ A −1 =
¢
A − 5A + 7I 3
3
2 1 1 2 1 1 2 1 1 1 0 0
      
−1 1
∴ A =   0 1 0   0 1 0  − 5  0 1 0  + 5  0 1 0 
3
1 1 2 1 1 2 1 1 2 0 0 1
5 4 4 10 5 5 5 0 0
     
1 
= 0 1 0 −   0 5 0 + 0 5 0 
 
3
4 4 5 5 5 10 0 0 5
2 −1 −1 2/3 −1/3 −1/3
   
1
∴ A −1 =  0 3 0 = 0 1 0 
3
−1 −1 2 −1/3 −1/3 2/3

Now, to find matrix repented by A 8 − 5A 7 + 7A 6 − 3A 5 + A 4 − 5A 3 + 8A 2 − 2A + I , first we split this expression


with one factor as LHS of equation (6.11), that is A 3 − 5A 2 + 5A − I 3 . This can be done using Long Division
Method for Polynomials (click here) as follow:

A 8 − 5A 7 + 7A 6 − 3A 5 + A 4 − 5A 3 + 8A 2 − 2A + I ¡ 5 ¢ A2 + A + I
= A + A + , where I = I 3
A 3 − 5A 2 + 7A − 3I A 3 − 5A 2 + 7A − 3I

Target AA
Multiply both the sides by A 3 − 5A 2 + 7A − 3I , we get
¡ ¢

A 8 − 5A 7 + 7A 6 − 3A 5 + A 4 − 5A 3 + 8A 2 − 2A + I = A 5 + A A 3 − 5A 2 + 7A − 3I + A 2 + A + I
¡ ¢¡ ¢ ¡ ¢

= A 5 + A (0) + A 2 + A + I
¡ ¢ ¡ ¢
[∵ (6.11)]

R E CA L=LA2 + A + I
RE DO |
READ |
5 4 4 2 1 1 1 0 0
     

=  0 1 0 + 0 1 0 + 0 1 0 
4 4 5 1 1 2 0 0 1
Powered by Prof. (Dr.) Rajesh M. Darji

8 5 5

∴ A 8 − 5A 7 + 7A 6 − 3A 5 + A 4 − 5A 3 + 8A 2 − 2A + I =  0 3 0 
5 5 8

Exercise 6.2
1. Verify Cayley-Hamilton theorem for the following matrix A and hence find A 3 and A −1 :
· ¸ · ¸
1 2 −1 1
a. [Winter-2015] b.
3 4 3 0

2. Verify Cayley-Hamilton theorem for the following matrix A and hence find A 4 and A −1 :

2 −1 1 6 −1 1
   

a.  −1 2 −1  [Winter-2014, 2015] b.  −2 5 −1  [Summer-2017]


1 −1 2 2 1 7
· ¸
1 4
3. If A = then simplify A 5 − 4A 4 − 7A 3 + 11A 2 − A − 10I .
2 3

1 3 2
 

4. Determine A −1 by using Cayley-Hamilton theorem for the matrix A =  0 −1 4  . Hence find


−2 1 5
8 7 6 5 4 3 2
the matrix represented by A − 5A − A + 37A + A − 5A − 3A + 41A + 3I . [Winter-2016]


Answers
· ¸ · ¸ · ¸ · ¸
37 54 −2 1 −7 4 0 1/3
1. a. , b. ,
81 118 3/2 −1/2 12 −3 1 1/3

86 −85 85 3/4 1/4 −1/4


   

2. a.  −85 86 −85 ,   1/4 3/4 1/4 


85 −85 86 −1/4 1/4 3/4
2176 −520 1400 3/16 1/24 −1/48
   
· ¸
6 4
b.  −1920 776 −1400  ,  1/16 5/24 1/48  3. A + 5I =
2 8
1920 520 2696 −1/16 −1/24 7/48

9 13 −14 13 8 −40
   
1 
4. 8 −9 4 , −2A 2 + 4A + 3I =  16 −11 −16 
37
2 7 1 16 8 −27

E E E

6.7 Similar Matrices

Two matrices A and B are said to be similar if there exist a non-singular matrix P such that B = P −1 AP . Also
similar matrices have same eigenvalues.

Target AA
6.8 Diagonalization

A matrix A is said to be diagonalizable (or can be diagonalizable) if it is similar to some diagonal matrix.
That is there exist a non-singular matrix P such that P −1 AP = D, where D is a diagonal matrix.

Theorem 6.1 An n × n matrix A is diagonalizable C


if A LL
and only if it has n linearly independent eigenvectors.
| RE
In this case,
A D | R E DO
R E
â A is similar to the diagonal matrix D = P −1 AP, where

Powered
â P is the matrix whose Prof. (Dr.) Rajesh M. Darji
by are the linearly independent eigenvectors and is known as Modal
columns
Matrix.

â D is the diagonal matrix whose diagonal elements are the eigenvalues of A and is known as a Spectral
Matrix.

6.9 Orthogonally Diagonalization

â A matrix A is said to be orthogonally diagonalizable (or can be diagonalizable orthogonally) if there


exist an orthogonal matrix M such that M T AM = D, where D is a diagonal matrix.

â A matrix A is orthogonally diagonalizable if and only if it is symmetric.

â We know that an n × n symmetric real matrix always has n linearly independent eigenvectors, even
through the eigenvalues are repeated.

â On normalizing each eigenvector we obtain the modal matrix M which is always orthogonal.

Illustration 6.11 Determine whether the following matrices are diagonalizable or not ? If so, diagonalize
them.
2 0 −2 1 2 1
   

a. A =  0 3 0  [Winter-2015] b. A =  2 0 −2 
0 0 3 −1 2 3


Solution: In order to digonalize, first of all we obtain eigenvalues and eigenvectors of given matrix.

a. Since given matrix is an upper triangle matrix, its eigenvalues are main diagonal elements. That is
λ = 2, 3, 3 are eigenvalues of A.
Now eigenvector is given by homogeneous system

2−λ 0 x 0
    
³ ´ −2
det (A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  0 3−λ 0  y  =  0 
0 0 3−λ z 0

0 0 −2 x 0
    

λ = λ1 = 2 :  0 1 0  y  =  0 
0 0 1 z 0
1
 

From 1st and 2nd equation, y = z = 0, x = t, t ∈ R. Hence, λ1 = 2 → X 1 =  0  .


0
−1 0 −2 x 0
    

λ = λ2 = λ3 = 3 :  0 0 0   y  =  0 
0 0 0 z 0
From 1st equation (2nd and 3rd columns are not pivot), x = −2z, y = t 1 , z = t 2 , t 1 , t 2 ∈ R.
0
   
−2
Hence, λ2 = λ3 = 3 → X 2 =  1  , X 3 =  0  .
0 1
Since A has three linearly independent eigenvectors, it is diagonalizable. The modal matrix P which diagonalizes A is obtained by taking the eigenvectors as columns, that is,

P = [X1 X2 X3] = [1 0 −2; 0 1 0; 0 0 1]

Also the diagonalization of A is P^(-1)AP = D, where D = [2 0 0; 0 3 0; 0 0 3] = Spectral Matrix. [See Theorem 6.1]
Powered
b. Characteristic equation of A is, by Prof. (Dr.) Rajesh M. Darji
λ3 − 4λ2 + 4λ = 0 ⇒ λ = 0, 2, 2

Eigen vectors are given by,

1−λ 2 1 x 0
    
³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  2 −λ −2   y  =  0 
−1 2 3−λ z 0

1 2 1 x 0
    

λ = λ1 = 0 :  2 0 −2   y  =  0 
−1 2 3 z 0
From 1st and 2nd equation, x + 2y + z = 0, 2x − 2z = 0 ⇒ x = −y = z = t , t ∈ R.
1
 

Hence, λ1 = 0 → X 1 =  −1  .
1
2 1 x 0
    
−1
λ = λ2 = λ3 = 2 :  2 −2 −2   y  =  0 
−1 2 1 z 0
From 1st and 2nd equation, −x + 2y + z = 0, 2x − 2y − 2z = 0 ⇒ y = 0, x = z = t , t ∈ R.
1
 

Hence, λ2 = λ3 = 2 → X 2 =  0  .
1


Since the given matrix A has only two linearly independent eigenvectors X1, X2 corresponding to the three eigenvalues λ = 0, 2, 2, it is not diagonalizable. [See Theorem 6.1]

2 0 1
 

Illustration 6.12 Find the normalized modal matrix M for the matrix A =  0 3 0  and diagonalize
1 0 2
orthogonally.

Solution: Observe that given matrix is symmetric because A = A T . Hence it is always orthogonally diago-
nalizable. [See section 6.9]
Characteristic equation of A : λ3 − 7λ2 + 15λ − 9 = 0 ⇒ λ = 1, 3, 3.
Eigen vectors are given by,

2−λ 0 1 x 0
    
³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  0 3−λ 0  y  =  0 
1 0 2−λ z 0

1 0 1 x 0
    

λ = λ1 = 1 :  0 2 0  y  =  0 
1 0 1 z 0
1
 

From 1st and 2nd equation, y = 0, x = −z = t , t ∈ R. ∴ λ1 = 1 → X 1 =  0  .


−1
  p 
1 1/ 2

Target AA
X1 1 
∴ Normalized eigenvector is, X 1 = ° ° = p 0 = 0
b   
p
° ° 
°X 1° 2 −1
−1/ 2
−1 0 1 x 0
    

λ = λ2 = λ3 = 3 :  0 0 0   y  =  0 
1 0 −1 z 0
From 1st or 3rd equation, x = z = t 1 , y = | t 2 ,RE t 2A
t 1 ,C
LL
∈ R. The first linearly independent vector is given by
D O
is D | RE
REA
taking t 1 = 1, t 2 = 0, that
  p 
1 1 1/ 2
  
Powered
  Prof. (Dr.) Rajesh M. Darji
X 2 = 0 by ⇒ X 2 = ° ° = p
b
X2 1 
0 = 0 
 
p

1 °X 2°
° ° 2 1
1/ 2

t1
 

â Second linearly independent vector X 3 =  t 2  is such that [See Illustration 6.7]


t1

X2·X3 =0 ⇒ 2t 1 = 0 ∴ t 1 = 0.

0 0
   
X 3
So we can take any value of t 2 , let t 2 = 1. X 3 =  1  ⇒ Xb3 = ° ° =  1  . Thus, required nor-
°X 3°
° °
0 0
malized modal matrix is  p p 
1/ 2 1/ 2 0
£ ¤ 
M= Xb1 Xb2 Xb3 =  0 0 1 

p p
−1/ 2 1/ 2 0
Also diagnonalization of A is defined as,

1 0 0
 

M T AM = D =  0 3 0  = Spectral matrix.
0 0 3

Note:


1. Here the normalized modal matrix is an orthoganal matrix, that is M T = M −1 .

2. Symmetric matrix is always orthoganally diagonalizable and hence diagonalizable.

3. For simply diadonalization, do not normalize the eigen vectors. In this case the modal matrix is

1 1 0 1 0 0
   

P −1 AP = D =  0 3 0 
£ ¤
P= X1 X2 X3 = 0 0 1  ⇒
−1 1 0 0 0 3
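The orthogonal diagonalization of this symmetric matrix can be reproduced numerically. A sketch assuming NumPy (eigh is the symmetric eigensolver, so the returned M is orthogonal and plays the role of the normalized modal matrix):

# Sketch: M^T A M = D for the symmetric matrix of Illustration 6.12.
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
vals, M = np.linalg.eigh(A)
print(vals)                                      # [1. 3. 3.]
print(np.allclose(M.T @ A @ M, np.diag(vals)))   # True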

Exercise 6.3
1. For the following matrix, find the non singular matrix P and the diagonal matrix D such that D =
P −1 AP.
· ¸ · ¸
−4 −6 5 3
a. [Winter-2016] b.
3 5 3 5
1 1 −2 1 1 3
   

c.  −1 2 1  d.  1 5 1 
0 1 −1 3 1 1

1 1 1
 

2. Show that the matrix  0 1 1  can not be diagonalizable.


0 0 1

3. Find the normalized modal matrix M and diagonalize orthogonally the following matrices:

Target AA
2 2 0
 
· ¸
2 1
a. b.  2 5 0 
1 2
0 0 3
· ¸
a b
4. Prove that if b 6= 0, then
0 | RE
a CALL
is not diagonalizable.

READ | R E DO
Answers

Prof. (Dr.) Rajesh M. Darji


· ¸ · ¸ · ¸ · ¸
−2 −1 −1 0 1 −1 8 0
1. a. P = Powered
,D = by b. P = ,D =
1 1 0 2 1 1 0 2
1 1 3 −1 0 0 1 1 −1 3 0 0
       

c. P =  0 3 2 ,D =  0 2 0  d. P =  −1 2 0 ,D =  0 6 0 
1 1 1 0 0 1 1 1 1 0 0 −2
 p p 
" p p # 0 −2/ 5 1/ 5

3 0 0

p p 
· ¸
−1/ 2 1/ 2 1 0
3. a. P = p p ,D = b. P =  0 1/ 5 2/ 5  , D = 

0 1 0 
1/ 2 1/ 2 0 3
1 0 0 0 0 6



Chapter 7
Quadratic Forms and Complex Matrices

7.1 Quadratic Form (QF)

â A homogeneous polynomial of degree two in n variables is called the quadratic form (QF) in n vari-
ables.

â General QF in two variable:


Q (x 1 , x 2 ) = a 11 x 12 + 2a 12 x 1 x 2 + a 22 x 22
For example, 5x 12 − 2x 22 + 4x 1 x 2 , x 2 − 2x y are QF in two variables. But x 2 − 6y 2 + x − 5y is not a QF
because all terms are not of degree two.

Target AA
â General QF in three variable:

Q (x 1 , x 2 , x) = a 11 x 12 + a 22 x 22 + a 33 x 32 + 2a 12 x 1 x 2 + 2a 23 x 2 x 3 + 2a 31 x 3 x 1

For example, 9x 12 −x 22 +4x 32 +6x 1 x 2 −8x 1 x 3 +x 2 x 3 , x 1 x 2 +x 2 x 3 +x 3 x 1 are QF but x 12 −7x 22 +x 32 +4x 1 x 2 x 3


is not a QF, as the last term is of degree 3.

7.2 Matrix of Quadratic Form

Quadratic forms in two and three variables can be expressed in matrix form as follows:

1. Q (x 1 , x 2 ) = a 11 x 12 + 2a 12 x 1 x 2 + a 22 x 22 = X T AX ,
· ¸ · ¸
x1 a 11 a 12
where X = and A = .
x2 a 12 a 22

2. Q (x 1 , x 2 , x 3 ) = a 11 x 12 + a 22 x 22 + a 33 x 32 + 2a 12 x 1 x 2 + 2a 23 x 2 x 3 + 2a 13 x 1 x 3 = X T AX ,
 
x1 a 11 a 12 a 13
 

where X =  x 2  and A =  a 12 a 22 a 23 .
 

x3 a 13 a 23 a 33

* Important:

Observe that matrix form of a quadratic form is X T AX , where

â X is a column matrix of variable and

â A is a symmetric matrix in which diagonal entries are the coefficients of variables having square and
other entries are half of the coefficients of cross multiplied variables, filled by symmetry in appropriate
columns.

â In both representation A is a symmetric matrix. See the following illustration.

Illustration 7.1


a. 4x1^2 − 9x2^2 − 6x1x2 = X^T AX, where X = [x1; x2], A = [4 −3; −3 −9].

b. (x − y)^2 = x^2 − 2xy + y^2 = X^T AX, where X = [x; y], A = [1 −1; −1 1].

c. 3x1^2 + 2x2^2 + 3x3^2 − 2x1x2 − 2x2x3 = X^T AX, where X = [x1; x2; x3], A = [3 −1 0; −1 2 −1; 0 −1 3].

d. 2x^2 + 5y^2 − 6z^2 − 2xy − yz + 8xz = X^T AX, where X = [x; y; z], A = [2 −1 4; −1 5 −1/2; 4 −1/2 −6].   [Summer-2015]
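The rule for building A (squared-term coefficients on the diagonal, half of each cross coefficient off the diagonal) is easy to spot-check. A sketch assuming NumPy, using example (d):

# Sketch: Q(x, y, z) = X^T A X for the symmetric matrix of the quadratic form in (d).
import numpy as np

A = np.array([[ 2.0, -1.0,  4.0],
              [-1.0,  5.0, -0.5],
              [ 4.0, -0.5, -6.0]])

def Q(x, y, z):
    return 2*x**2 + 5*y**2 - 6*z**2 - 2*x*y - y*z + 8*x*z

X = np.array([1.0, 2.0, -1.0])
print(Q(*X), X @ A @ X)                # both print 6.0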

7.3 Index, Signature and Rank of Quadratic Form

For quadratic form X T AX ,

â Number of positive eigenvalues of A is called index of QF.


â The difference between number of positive and negative eigenvalues of A is called signature of QF.
â Number of non zero eigen value is called rank of QF.

Target AA
7.4 Definiteness of Quadratic Form

A quadratic form Q (x 1 , x 2 , x 3 .....x n ) = X T AX is said to be

1. Positive definite if all eigenvalues of A are positive.


2. Negative definite if all eigenvalues of A are negative.

3. Positive semi-definite if all eigenvalues of A are non-negative and at least one eigenvalue is zero.

4. Negative semi-definite if all eigenvalues of A are non-positive and at least one eigenvalue is zero.

5. Indefinite if some eigenvalues of A are positive and some eigenvalues of A are negative.
are negative and atleast one eigenvalue is zero.

5. Infinite if some eigenvalues of A are positive and some eigenvalues of A are negative.

Illustration 7.2 Determine the index, signature, rank and definiteness of the quadratic form −3x 2 − 5y 2 −
3z 2 + 2x y + 2y z − 2xz.

Solution: The matrix form of the quadratic form is

−3x^2 − 5y^2 − 3z^2 + 2xy + 2yz − 2xz = X^T AX, where X = [x; y; z], A = [−3 1 −1; 1 −5 1; −1 1 −3]

The characteristic equation of A is

λ3 + 11λ2 + 36λ + 36 = 0 ⇒ λ = −2, −3, −6

1. Index = Number of positive eigenvalues = 0.

2. Signature = Difference of number of positive and number of negative eigen values.


Here A has no positive eigenvalue, so number of positive eigen value is 0. Also A has all three negative
eigenvalues, so number of negative eigenvalues is 3.
Hence difference between +ve and −ve eigenvalues is 3. (always consider difference in modulus)
∴ Signature = 3.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 80

3. Rank = Number of non zero eigen value = 3.

4. Since all eigen values of A are non zero and negative.


Hence given quadratic form is of negative definite.

Theorem 7.1 (Principal Axes Theorem)


Let Q (x 1 , x 2 , x 3 .....x n ) = X T AX be a quadratic form where A is an n×n symmetric matrix. Then Q (x 1 , x 2 , x 3 .....x n )
can be transformed into λ1 y 12 + λ2 y 22 + λ3 y 32 + ..... + λn y n2 by the orthogonal linear transformation X = P Y ,
where P is an orthogonal modal matrix of A and λ1 , λ2 , λ3 , .....λn are the eigenvalues of A.
â The reduced form of QF is known as canonical form (or sum of squares) of Q (x 1 , x 2 , x 3 .....x n ).

Illustration 7.3 Find the canonical form of the quadratic form 2x 12 + 3x 22 + 2x 32 + 2x 1 x 3 , using orthogonal
transformation. Also find index, rank and signature of the quadratic form. [Summer-2014]

Solution: Matrix form of given quadratic form is,

x1 3 0 1
  

2x 12 + 3x 22 + 2x 32 + 2x 1 x 3 = X T AX , where X =  x 2  , A= 0 3 0 
x3 1 0 2

Characteristic equation of A :

λ3 − 8λ2 + 20λ − 15 = 0 (λ − 3) λ2 − 5λ + 5 = 0
¡ ¢

∴ λ = 3, λ2 − 5λ + 5 = 0

Target AA
p " p #
5 ± 25 − 20 2 −b ± b 2 − 4ac
∴ λ = 3, λ = ∵ ax + bx + c = 0 ⇒ x=
2 2a
p p
5+ 5 5− 5
∴ λ = 3, , = λ1 , λ2 , λ3
2 2
O | R ECALL
RED 7.1], given quadratic form can be reduce to canonical form, under
Thus by Principal Axis Theorem [Theorem
READ |X = P Y as
the orthog0nal transformation
à p ! à p !
X T Powered
AX = λ1 yby
2 Prof. (Dr.) Rajesh M. Darji
2 2
1 + λ2 y 2 + λ3 y 3 = 3y 12 +
5+ 5 2
2
y2 +
5− 5 2
2
y3

x1 y1
   

where P is normalized modal matrix of A, X =  x 2  and Y =  y 2  . Also,


x3 y3

1. Index = Number of +ve eigenvalue = 3.

2. Rank = Number of non zero eigenvalue = 3.

3. Signature = Difference of +ve and −ve eigenvalue = 3.

Note: In order to find orthogonal transformation X = P Y that reduce given quadratic form to canonical
form (to verify principal axis theorem) it is essential to find normalized modal matrix P of symmetric matrix
A. See below illustration.

Illustration 7.4 Determine the orthogonal transformation which transform the quadratic form 5x 2 +5y 2 +
5z 2 + 4x y + 4y z + 4zx into canonical form.

Solution: Given quadratic form

x1 5 2 2
   

5x 12 + 5x 22 + 5x 32 + 4x 1 x 2 + 4x 2 x 3 + 4x 3 x 1 = X T AX , X =  x2  , A= 2 5 2 
x3 2 2 5

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 81

∴ Characteristic equation of A : λ3 − 15λ2 + 63λ − 81 = 0 ⇒ λ = 3, 3, 9. Now eigenvectors are given by

5−λ 2 2 x 0
    
³ ´
(A − λI ) X = 0, X 6= 0 ∈ R3 ⇒  2 5−λ 2  y  =  0 
2 2 5−λ z 0

2 2 2 x 0
    

λ = λ1 = λ2 = 3 :  2 2 2   y  =  0 
2 2 2 z 0
This yields, x + y + z = 0 ⇒ x = −t 1 − t 2 , y = t 1 , z = t2 , t 1 , t 2 ∈ R.
∴ Two linearly independent vectors are
  p 
−1/ 2
  
−1 −1
X1 1 p 
X1 = 1  ⇒ Xb1 = ° ° = p  1  =  1/ 2 

0 °X 1°
° ° 2 0 0

and   p 
1 1 1/ 6
  
X2 1 p 
⇒ Xb2 = ° ° = p  1  =  1/ 6 
X2 = 1 

6 −2 p
°X 2°
° °
−2 −2/ 6

−4 2 2 x 0
    

λ = λ1 = λ2 = 3 :  2 −4 2   y  =  0 
2 2 −4 z 0

Target AA
∴ From 1st and 2nd equation, −2x + y + z = 0, x − 2y + z = 0 ⇒ x = y = z = t , t ∈ R.
  p 
1 1 1/ 3
 
X3 1  p 
∴ X3 = 1  ⇒ Xb3 = ° ° = p  1  =  1/ 3 
3 1 p
°X 3°
° °
1 1/ 3

| RE CALL p p p 
| REDO£

−1/ 2 1/ 6 1/ 3
p p p 
The normalized modal AD of A is P =
REmatrix X1 X2 X3
¤ 
=  1/ 2 1/ 6 1/ 3  .
p p
0 −2/ 6 1/ 3
Hence required orthogonal transformation
Powered by is, Prof. (Dr.) Rajesh M. Darji
p p  p 
x1 −1/ 2 1/ 6 1/ 3 y1
  
p p p
X = P Y ⇒  x 2  =  1/ 2 1/ 6 1/ 3   y 2 
 
p p
x3 0 −2/ 6 1/ 3 y3
y1 y2 y3 y1 y2 y3 y2 y3
∴ x 1 = − p + p + p , x 2 = p + p + p , x 3 = −2 p + p
2 6 3 2 6 3 6 3

â Note that, if we substitute these values of x 1 , x 2 , x 3 in given quadratic form and simplify, we get the
canonical form of quadratic form as X T AX = 3y 12 + 3y 22 + 9y 32 . This is the statement principal axis theorem.
(Verify !)
Note: Recall from following table, some quadratic equations and its geometrical nature/name:

Equation Nature/Name

1. x2 + y 2 = a2 Circle

x2 y 2
2. + =1 Ellipse
a2 b2

x2 y 2
3. − =1 Hyperbola
a2 b2
LAVC (GTU-2110015) B.E. Semester II
Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 82

4. x2 + y 2 + z2 = a2 Sphere

x2 y 2 z2
5. + + =1 Ellipsoid
a2 b2 c 2

x2 y 2 z2 x2 y 2 z2
6. ± ∓ = 1 or − − =1 Hyperboloid
a2 b2 c 2 a2 b2 c 2

Illustration 7.5 Find the nature of the graph represented by the following equations:
a. x 2 + 4x y + y 2 = 16 b. 5x 2 + 5y 2 + 5z 2 + 4x y + 4xz + 4y z = 9

Solution:
a. The quadratic form corresponding to given eqution is
· ¸ · ¸
2 2 T x 1 2
x + 4x y + y = X AX , where X = , A=
y 2 1

Now characteristic equation of A : λ2 − 2λ − 3 = 0 ⇒ λ = 3, −1 = λ1 , λ2


Hence by principal axis theorem, using orthogonal transformation given quadratic form will be trans-
formed in to canonical form as

Target AA
X T AX = λ1 y 12 + λ2 y 22 ⇒ x 2 + 4x y + y 2 = 3y 12 − y 22 = 16 [∵ Given]

y 12 y 22
∴ 3y 12 − y 22 = 16 ⇒ − =1 → Hyperbola [See 3rd equation in above table]
16/3 16
∴ Given quadratic equation represent the curve hyperbola.
ECALL
| Rare λ = 3, 3, 9. [See Illustration 7.4]
E DOform
b. The eigenvalues for given quadratic
| R
∴ By principal AD
REaxis theorem, we have

5x 2 + 5y 2 + 5z 2 + 4x y + 4xz + 4y z = 3y 12 + 3y 22 + 9y 32 = 9

Powered by
3y 12 + 3y 22 + 9y 32 = 9
Prof. (Dr.) Rajesh M. Darji
y 12 y 22 y 32
⇒ + + =1 → Ellipsoid [See 5th equation in above table]
3 3 1
∴ Given quadratic equation represent the surface of ellipsoid.
Exercise 7.1
1. Which of the following forms are the quadratic form ? If so, express it as matrix form X T AX and
determine the its index, signature, rank and definiteness.

a. x 2 − 2x y b. 3x 12 + 7x 22
c. x y + y z + zx d. 4x 12 + x 22 + 15x 32 − 4x 1 x 2 2

2. Reduce the following quadratic form to the canonical form (sum of square) by using orthogonal linear
transformation and write the rank, index and signature:

a. 2 x 12 + x 1 x 2 + x 22 b. 2x 12 + 5x 22 + 3x 32 + 4x 1 x 2
¡ ¢

c. 2x 12 + x 22 − 3x 32 d. 3x 2 + 3z 2 + 8x y + 8xz + 8y z [Winter-2014]

3. Find the nature of the graph represented by the following equations: (Name the quadratic)

a. x 2 + 4x y + 3y 2 = 4 b. 2x 2 − 4x y + 2y 2 = 1
c. 5x 2 − 4x y + 8y 2 − 36 = 0 d. 5x 2 − 2y 2 + 5z 2 + 2xz = 1

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 83

Answers

1. a. Index 1, Signature 0, Rank 2, Infinite form. b. Index 2, Signature 2, Rank 2, Positive definite
form. c. Index 1, Signature 1, Rank 3, Infinite form. d. not quadratic form

2. a. 3y 12 + y 22 , Rank 2, Index 2, Signature 2 b. y 12 + 3y 22 + 6y 32 , Rank 3, index 3, Signature 3


c. 2y 12 + y 22 − 3y 32 , Rank 2, Index 2, Signature 1
1³ p ´ 1³ p ´
d. −y 12 + 7 − 177 y 22 + 7 + 177 y 32 , Rank 3, Index 1, Signature 1
2 2
3. a. Hyperbola b. Ellipse c. Ellipse d. Hyperboloid

E E E

7.5 Complex Matrix

A matrix is said to be complex matrix if it has at least one complex entry otherwise it is known as real matrix.

e. g.
1 2 −1
 
· ¸
1 2+i
A= , A= 0 i 3 , where i 2 = −1
−i 5
3 7 4

Target AA
7.6 Conjugate Matrix

Matrix obtained by replacing the elements of a complex matrix A by its complex conjugate numbers is said
to be conjugate matrix of A and is denoted by A.
e. g.
AL L 1 2 − i
A = DO | R E C ⇒ A =
· ¸ · ¸
1 2+i

READ | RE −i 5 i 5

7.7 Conjugate Transpose


Powered by Prof. (Dr.) Rajesh M. Darji

³ ´ A is denoted A and is define as conjugate of transpose (or
The conjugate transpose of a complex matrix
0
transpose of conjugate) of A. That is (A 0 ) = A = A ∗ .
e. g.
T 
1 i 3 + 2i 1 3 − 2i 1 0
   
³ ´T −i −3
A= 0 2 −4i  ⇒ A∗ = A =  0 2 4i  =  −i 2 1+i 
−3 1 − i 5 −3 1 + i 5 3 − 2i 4i 5

7.8 Hermitian, Skew-Hermitian, Unitary and Normal Matrices


£ ¤
A square matrix A = a i j of size n × n is said to be

1. Hermitian if A ∗ = A, that is a i j = a j i . (Elements of main diagonal are purely real)

2. Hermitian if A ∗ = −A, that is a i j = −a j i . (Elements of main diagonal are purely imaginary)

3. Unitary if A ∗ = A −1 that is A ∗ A = A A ∗ = I n .

4. Normal if A ∗ A = A A ∗

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 84

* Important:

â Observe that the real symmetric and skew-symmetric matrices are complex analogous of Hermition
and Skew-Hermition matrices.

â Every Hermitian matrix is normal since A ∗ A = A A = A A ∗ and every unitary matrix is normal matrix
since A ∗ A = I = A A ∗

â Eigenvalues of Hermitian matrix are real.

â Eigenvectors of normal matrix A corresponding to different eigen spaces are orthogonal.

7.9 Properties
¢∗
A∗ 2. (A ± B )∗ = A ∗ ± B ∗
¡
1. =A

3. (k A)∗ = k A ∗ 4. (AB )∗ = B ∗ A ∗
³ ´
5. A = A 6. AB = A B

7. (k A) = k A 8. (A ± B ) = A ± B
³ ´
9. det A = det (A)

k
 
−1 −i

Target AA
Illustration 7.6 Find k, l and m to make A a Hermitian matrix; where A =  3 − 5i 0 m .
l 2 + 4i 2

Solution: We know that A is Hermitian if A ∗ = A, that is a i j = a j i . Thus for given matrix,

k = a 12 = a 21 = (3 − 5i ) = 3 + 5i
| RE CALL ∴ k = 3 + 5i
l = a 31 = a 13 = (−i ) =D
REA
i | R E DO ∴ l =i
m = a 23 = a 32 = (2 + 4i ) = 2 − i 4 ∴ m = 2−i4

Powered by Prof. (Dr.) Rajesh M. Darji



¡ ¢
Illustration 7.7 Prove that det A = det (A).
³ ´T
Solution: We know that, A∗ = A
·³ ´ ¸
T
det A ∗ = det A
¡ ¢

³ ´ £
∵ det A = det A T
¤
= det A
£ ¤
= det A ∵ Property 9
det A ∗ = det A
¡ ¢
∴ Proved.

Exercise 7.2
1. In each part find A ∗ :

2i 1−i 2i 1−i −1 + i
   

a. A =  4 3+i  b. A =  4 5 − 7i −i 
5+i 0 i 3 1

2. Which of the following are Hermitian matrices ?


· ¸ · ¸
1 1+i i i
a. A = b. A =
1 − i −3 −i i

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 85

1−i −1 + i 1 0 0
   
−2
c. A =  1+i 0 3  d. A =  0 1 0 
−1 − i 3 5 0 0 1

[Hint: For Hermitian matrix A ∗ = A.]

3. Prove that,

a. If A is Hermitian then det (A) is real.


b. If A is Unitary then | det (A) | = 1.
c. The entries of main diagonal of the Hermitian matrix are real numbers.
d. If A is Unitary then A ∗ is also unitary.

Answers
4
 
· ¸ −2i −i
−2i 4 5−i
1. a. b.  1 + i 5 + 7i 3  2. a, c, d yes, b no.
1+i 3−i 0
−1 − i i 1

E E E

Powered by
Prof. (Dr.) Rajesh M. Darji

Target AA
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
| RE CALL IMS, AMS

READ | R E DO http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779

Powered by Prof. (Dr.) Rajesh M. Darji

LAVC (GTU-2110015) B.E. Semester II


Chapter 8
Inner Product Space and Orthogonal Basis

8.1 Inner Product Space

Let V be the real vector space then a mapping 〈 , 〉 : V ×V → R is said to be an inner product on V if it satisfies
the following axioms:

∀ u, v, w ∈ V and α ∈ R
­ ® ­ ®
1. u, v = v, u [Symmetry]
­ ® ­ ® ­ ®
2. u + v, w = u, w + v, w [Additive]

Target AA
3. αu, v = α u, v
­ ® ­ ®
[Homogeneity]
­ ®
4. u, u Ê 0 [Positivity]
­ ®
5. u, u = 0 ⇔ u = 0

O|R ECALL
ED
A vector space together with an inner product
R is called an inner product space.
READ |
8.2 Properties of Inner Product
Powered by Prof. (Dr.) Rajesh M. Darji
Let V be the real inner product space. For u, v, w ∈ V and α ∈ R
­ ® ­ ®
1. 0, u = u, 0 = 0
­ ® ­ ® ­ ®
2. u, v + w = u, v + u, w

3. u, αv = α u, v = 0
­ ® ­ ®

­ ® ­ ® ­ ®
4. u − v, +w = u, w − v, w
­ ® ­ ® ­ ®
5. u, v − w = u, v − u, w

8.3 Some Standard Inner Product Spaces

1. Euclidean inner product space Rn :


Let u = (u 1 , u 2 , u 3 .....u n ) and v = (v 1 , v 2 , v 3 .....v n ) are vectors of Rn then the formula
­ ®
u, v = u · v = u 1 v 1 + u 2 v 2 + u 3 v 3 + ..... + u n v n

defines inner product on Rn and hence Rn is an inner product space.


Here the inner product, define as above is known as standard inner product on Rn .
e. g. Let u = (1, −1, 2) , v = (2, 1, 3) ∈ R3 ⇒
­ ®
u, v = u · v = 2 − 1 + 6 = 7.

86
Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 87

2. Weighted inner product space Rn : Let u = (u 1 , u 2 , u 3 .....u n ) and v = (v 1 , v 2 , v 3 .....v n ) are vectors of
Rn and w 1 , w 2 , w 3 .....w n are positive numbers which we shall call weight, then the weighted inner
product is defined as
­ ®
u, v = w 1 u 1 v 1 + w 2 u 2 v 2 + w 3 u 3 v 3 + ..... + w n u n v n

1
e. g. u = (1, 2, −2) , v = (1, 1, 3) ∈ R3 w 1 = 2, w 2 = , w 3 = 1
2
­ ® 1
⇒ u, v = 2 (1) (1) + (2) (1) + 1 (−2) (3) = −3.
2
3. An Inner product generated by matrix:
Let u, v ∈ Rn and A be invertible n × n then an inner product generated by matrix A is defined by,
­ ®
u, v = Au · Av

· ¸
1 −1
e. g. Let u = (1, 2) , v = (2, −3) ∈ R2 , A=
4 2
· ¸· ¸ · ¸ · ¸· ¸ · ¸
1 −1 1 −1 1 −1 2 5
⇒ Au = = = (−1, 8) , Av = = = (5, 2)
4 2 2 8 4 2 −3 2
­ ®
∴ u, v = Au · Av = (−1, 8) · (5, 2) = −5 + 16 = 11
­ ®
∴ u, v = 11

Target AA
4. Inner product on M 22 :
· ¸ · ¸
a1 a2 b1 b2
Let A = ,B = ∈ M 22 , then the standard inner product on M22 is defined by
a3 a4 b3 b4

〈A, B 〉 = trace B T A aL
1 bL
¡ ¢
1 + a2 b2 + a3 b3 + a4 b4
DO | RECA
R·E1A3 D | RE ¸ · ¸
−2 3
e. g. Let A = , B= ∈ M22 ⇒ 〈A, B 〉 = −2 + 9 + 0 + 10 = 17.
4 2 0 5
Powered by Prof. (Dr.) Rajesh M. Darji
5. Inner product on P 2 (x) : Let p = a 0 + a 1 x + a 2 x 2 , q = b 0 + b 1 x + b 2 x 2 ∈ P 2 (x). Then the standard
inner product on P 2 (x) is defined by
­ ®
p, q = a 0 b 0 + a 1 b 1 + a 2 b 2

Similarly, we can extend the definition on P n (x).


e. g. Let p = 1 + 2x + x 2 , q = 2 − 4x + 5x 2 ∈ P 2 (x) ⇒ 2 − 8 + 5 = −1.

6. Inner product on C [a, b]:


Let f = f (x) , g = g (x) ∈ C [a, b] (Set of all continuous functions defined on [a, b]). The standard inner
product on C [a, b] is defined by
Z b
­ ®
f ,g = f (x) g (x) d x
a

e. g. Let f (x) = x + x 2 , g (x) = −x ∈ C [−1, 1]


Z 1 Z 1
¡ 2
−x − x 3 d x
­ ® ¢
⇒ f ,g = f (x) g (x) d x =
−1 −1
3 4 ¸1
x x
· ·µ ¶ µ ¶¸
1 1 1 1
=− + =− + − − +
3 4 −1 3 4 3 4
­ ® 2
∴ f ,g = −
3

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 88

Illustration
­
8.1 Let u = (u 1 , u 2 ) and v = (v 1 , v 2 ) be vectors in R2 . Verify that the weighted Euclidean inner
®
product u, v = 3u 1 v 1 + 2u 2 v 2 satisfies the inner product axioms. [Winter-2017]

Solution: Let u = (u 1 , u 2 ) , v = (v 1 , v 2 ) , w = (w 1 , w 2 ) ∈ R2 and α ∈ R.


­ ®
1. u, v = 3u 1 v 1 + 2u 2 v 2
= 3v 1 u 1 + 2v 2 u 2
­ ® ­ ®
∴ u, v = v, u
­ ®
2. u + v, w = 3 (u 1 + v 1 ) w 1 + 2 (u 2 + v 2 ) w 2
= 3 (u 1 w 1 + v 1 w 1 ) + 2 (u 2 w 2 + v 2 w 2 )
= (3u 1 w 1 + 2u 2 w 2 ) + (3v 1 w 1 + 2v 2 w 2 )
­ ® ­ ® ­ ®
∴ u + v, w = u, w + v, w

αu, v = 3 (αu 1 ) v 1 + 2 (αu 2 ) v 2


­ ®
3.
= α (3u 1 v 1 + 2u 2 v 2 )
u, v = α u, v
­ ® ­ ®

­ ®
4. u, u = 3u 1 u 1 + 2u 2 u 2
= 3u 12 + 2u 22 Ê 0
­ ®
∴ u, v Ê 0

Target AA
­ ®
5. u, u = 0 ⇔ 3u 1 u 1 + 2u 2 u 2 = 0
⇔ 3u 12 + 2u 22 = 0
⇔ u 1 = 2u 2 = 0
⇔ u = (u 1 , u 2 ) = (0, 0) = 0

CALL
­ ®
∴ u, v ⇔ u == 0
| RE
RE
Hence, given product satisfies | R E DO
AD all the inner product axioms.

8.4 Norm, Distance and Angleby


Powered Prof. (Dr.) Rajesh M. Darji
Let V be the real inner product space and u, v ∈ V , then norm, distance and angle are defined as
° ° q­ ®
1. Norm of vector: °u ° = u, u
¡ ¢ ° ° q­ ®
2. Distance between two vectors : d u, v = °u − v ° = u − v, u − v
­ ®
u, v
3. Angle θ between two vectors: cos θ = ° ° ° °
°u °°v °

Also u and v are said to be orthogonal to each other if u, v = 0 i.e. θ = 90◦ and is denoted by u⊥v.
­ ® ¡ ¢

Illustration 8.2 Let R4 have the Euclidean inner product. Find the cosine of the angle θ and distance
between the vectors u = (4, 3, 2, −1) and v = (−2, 1, 2, 3) . [Winter-2017]

Solution: Given u = (4, 3, 2, −1) , v = (−2, 1, 2, 3) . With respect to standard inner product in R4 , we have
­ ®
u, v = u · v = −8 + 3 + 4 − 3 = −4
° ° p p ° ° p p p
°u ° = 16 + 9 + 4 + 1 = 30, °v ° = 4 + 1 + 4 + 9 = 18 = 3 2

Angle the cosine of between two vectors is defined by


­®
u, v u·v −4 4 2
cos θ = ° ° ° ° = ° ° ° ° = ¡p ¢ ¡ p ¢ = − p ∴ cos θ = − p
°u ° °v ° °u ° °v ° 30 3 2 6 15 3 15

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 89

Also distance between two vectors is given by,


¡ ¢ ° °
d u, v = °u − v ° = k(4, 3, 2, −1) − (−2, 1, 2, 3)k = k(6, 2, 0, −4)k
¡ ¢ p p p ¡ ¢ p
∴ d u, v = 36 + 4 + 0 + 16 = 56 = 3 6 ∴ d u, v = 3 6

Illustration 8.3 Find ° p °, d p, q and cos θ using standard inner product on P 2 where p = −3 − x +
° ° ¡ ¢

x 2, q = 2 + x 2.
Solution: Standard inner product in P 2 is defined as p, q = a 0 b 0 + a 1 b 1 + a 2 b 2 .
­ ®

â Norm of p is
° ° q­ ® p q
°p ° = p, p = a 0 a 0 + a 1 a 1 + a 2 a 2 = a 02 + a 12 + a 22
° ° q p
∴ °p ° = (−3)2 + (−1)2 + (1)2 = 11 ∵ p = −3 − x + x 2
£ ¤

â Distance between p and q is


d p, q = °p − q ° = ° −3 − x + x 2 − 2 + x 2 ° = ° −5 − x 2 °
¡ ¢ ° ° °¡ ¢ ¡ ¢° °¡ ¢°
q p
d p, q = (−5)2 + (0)2 + (1)2 = 26
¡ ¢

â Cosine of angle θ between p and q is
­ ®
p, q (−3) (2) + (−1) (0) + (1) (1)
cos θ = ° ° ° ° = ³p ´ ³p ´
°p ° °q °
(−3)2 + (−1)2 + (1)2 (2)2 + (0)2 + (1)2

Target AA
−6 + 0 + 1 5
∴ cos θ = p p = − p
11 5 55

8.5 Results
° °
1. °u ° Ê 0 | RE CAL2.L °° αu °° = | α | °° u °°
AD | R E DO
RE inequality: Let V be the real inner product space and u, v ∈ V then
3. Cauchy-Schwarz’s

Prof. (Dr.) Rajesh M. Darji


¯­ ®¯ ° ° ° °
¯ u, v ¯ É ° u ° ° v °
Powered by
Proof: Angle between two vectors of an inner product space V is defined by,
­ ®
u, v
cos θ = ° ° ° °
°u ° °v °
¯­ ®¯
¯ u, v ¯
Since |cos θ| É 1 ⇒ ° ° ° ° É 1
¯­ ®¯ ° ° ° °
°u ° °v °
∴ ¯ u, v ¯ É °u ° °v ° Proved.

4. Triangle inequality: Let V be the real inner product space and u, v ∈ V then
° ° ° ° ° °
°u + v ° É °u ° + °v °

°u + v °2 = u + v, u + v
° ° ­ ® £ ¤
Proof: ∵ By definition of norm
­ ® ­ ® £ ¤
= u, u + v + v, u + v ∵ By definition of inner product
­ ® ­ ® ­ ® ­ ®
= u, u + u, v + v, u + v, v
° °2 ­ ® ° °2 £ ­ ® ­ ®¤
= °u ° + 2 u, v + °v ° ∵ u, v = v, u
° °2 ¯­ ®¯ ° °2 £ ­ ® ¯­ ®¯¤
É °u ° + 2 ¯ u, v ¯ + °v ° ∵ u, v É ¯ u, v ¯
° °2 ° ° ° ° ° °2 £ ¤
É °u ° + 2 °u ° °v ° + °v ° ∵ Cauchy - Schwarz’s inequality
°u + v °2 É °u ° + °v ° 2
° ° ¡° ° ° °¢
° ° ° ° ° °
∴ °u + v ° É °u ° + °v ° Proved.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 90

5. Generalized Pythagoras Theorem: If u and v are orthogonal vectors of the real inner product space
V then
° u + v °2 = ° u °2 + ° v °2
° ° ° ° ° °
­ ®
Proof: Since u and v are orthogonal vectors, we have u, v = 0.
°u + v °2 = u + v, u + v
° ° ­ ®
Now
­ ® ­ ®
= u, u + v + v, u + v
­ ® ­ ® ­ ® ­ ®
= u, u + u, v + v, u + v, v
° °2 ° °2 £ ­ ® ­ ® ¤
= °u ° + 0 + 0 + °v ° ∵ u, v = v, u = 0
° °2 ° °2 ° °2
∴ °u + v ° É °u ° + °v ° Proved.

Illustration
­ ®
8.4 Verify Cauchy-Schwarz’s inequality for u = (−2, 1) and v = (1, 0) , using the inner product
u, v = 4u 1 v 1 + 5u 2 v 2 .
­ ®
Solution: For given weighted ineer product u, v = 4u 1 v 1 + 5u 2 v 2 ,
° ° q­ ® p q
°u ° = u, u = 4u 1 u 1 + 5u 2 u 2 = 4u 12 + 5u 22
q p
⇒ °u ° = 4(−2)2 + 5(1)1 = 21
° ° £ ¤
∵ u = (−2, 1)
° ° q
°v ° = 4(1)2 + 5 (0) = 2
£ ¤
Similarly, ∵ v = (1, 0)
° °° ° p
∴ °u ° °v ° = 2 21 (8.1)
­ ®
Also, u, v = 4u 1 v 1 + 5u 2 v 2 = 4 (−2) (1) + 5 (1) (0) = −8

Target AA
¯­ ®¯
∴ ¯ u, v ¯ = 8 (8.2)
¯­ ®¯ ° ° ° °
∴ From (8.1) and (8.2), we have ¯ u, v ¯ É °u ° °v °
Hence, Cauchy-Schwarz’s inequality is satisfied.
­ ® ° °
Illustration 8.5 If u and v are unit vectors such that u, v = −1, evaluate ° 2u − v ° .
O|R ECALL
Solution: Using definitions of inner
| R E Dproduct and norm,
° °2 ­READ ®
° 2u − v ° = 2u − v , 2u − v
­ ® ­ ®
= 2u − v , 2u + 2u − v , −v
­ Powered
® ­ Prof. (Dr.) Rajesh M. Darji
by® ­ ® ­
= 2u , 2u + −v , 2u + 2u, −v + −v , −v
®
­ ® ­ ® ­ ® ­ ®
= 4 u ,u −2 v ,u −2 u ,v + v ,v [∵ Axiom 3 of definition]
­ ® ­ ® ­ ® £ ­ ® ­ ®¤
= 4 u ,u −4 u ,v + v ,v ∵ v ,u = u ,v
° °2 ­ ® ° °2
= 4°u ° − 4 u , v + °v ° [∵ Definition of norm]
= 4 (1) − 4 (−1) + (1) [∵ Given]
° °2 ° °
° 2u − v ° = 9 ⇒ ° 2u − v ° = 3

Illustration 8.6 Use Cauchy-Schwarz inequality to prove for all real values of a, b, θ,
(a cos θ + b sin θ)2 É a 2 + b 2

Solution: Let u = (a, b) , v = (cos θ, sin θ) ∈ R2 .


∴ By cauchy-Schwarz inequality in R2 , with standard inner product (dot product), we have
¯­ ®¯ ° ° ° ° ¯ ¯ ° °° °
¯ u, v ¯ É °u ° °v ° ⇒ ¯u · v ¯ É °u ° °v °
p p p p
∴ |a cos θ + b sin θ| É a 2 + b 2 cos2 θ + sin2 θ = a 2 + b 2 1
p
∴ |a cos θ + b sin θ| É a 2 + b 2
Hence, (a cos θ + b sin θ)2 É a 2 + b 2
£ ¤
∵ Squaring both the sides

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 91

Exercise 8.1
· ¸
2 3 0
1. Show that u, v = 9u 1 v 1 + 4u 2 v 2 is the inner product on R generated by the matrix A =
­ ®
.
0 2
­ ®
Also find u, v for u = (−3, 2) and v = (1, 7) .

2. For u = (u 1 , u 2 , u 3 ) , v = (v 1 , v 2 , v 3 ) ∈ R3 , show that u, v = u 12 v 12 +u 22 v 22 +u 32 v 32 . is not an inner product.


­ ®

3. Let p = p (x) , q = q (x) ∈ P 2 . Show that p, q = p (0) q (0) + p 21 q 12 + p (1) q (1) is an inner product
­ ® ¡ ¢ ¡ ¢

on P 2 . Is this inner product on P 3 ? Explain.

4. In each part use the given inner product on R2 to find ° w ° and d u, v , where w = (−1, 3) , u =
° ° ¡ ¢

(−1, 2) , v = (2, 5).

a. The standard Euclidean inner product.


­ ®
b. The weighted inner product u, v = 3u 1 v 1 + 2u 2 v 2 .
µ ¶
1 2
c. The inner product generated by A = .
−1 1
· ¸ · ¸
2 6 −4 7
5. For M 22 find k A k and d (A, B ) given that, A = ,B =
9 4 1 6

6. In each part, verify the Cauchy-Schwarz inequality:

a. p = −1 + 2x + x 2 , q = 2x using standard inner product in P 2 .


µ ¶ µ ¶
−1 2 1 0
using inner product 〈U ,V 〉 = trace VT U .
¡ ¢
b. U = , V=

Target AA
6 1 3 3
° °2 ° °2 ° °2 ° °2
7. Prove that ° u + v ° + ° u − v ° = 2° u ° + 2° v ° .

8. If u and v are an (n × 1) matrices and A be an (n × n) matrix then, prove that


´2 ³
v T A T Au É u T A T Au v T A T Av ECALL
³ ´³ ´
|R
A D | R E DO
R E
[Hint: Use Cauchy-Schwarz inequality for inner product u, v = Au · Av = v T A T Au]
­ ®

Prof. (Dr.) Rajesh M. Darji


9. Show that, equality holds in Cauchy-Schwarz inequality if and only if u and v are linearly dependent.
Powered by ¯­ ®¯
¯ u, v ¯
[Hint: u, v L.D. ⇔ u = kv ⇔ θ = 0 ⇔ cos θ = 1 ⇔ ° ° ° ° = 1 ⇔ ¯ u, v ¯ = °u ° °v °]
¯­ ®¯ ° ° ° °
°u ° °v °
­ ® ­ ® ­ ® ° ° ° ° ° °
10. If u, v = 2, v, w = −3, u, w = 5, ° u ° = 1, ° v ° = 2, ° w ° = 7, evaluate the following expressions:
­ ® ° °
a. u − v − 2w, 4u + v b. ° u − 2v + 4w °
­ ® 1° °2 1 ° °2
11. Prove that u, v = ° u + v ° − ° u − v ° .
4 4
³ p ´ ³ p ´
12. With respect to the Euclidean inner product, the vectors u = 1, 3 and v = −1, 3 have norm 2,
and the angle between them is 60◦ . Find the weighted Euclidean inner product with respect to which
u and v are orthogonal unit vectors.

Answers

8.6 Orthogonal Complement

â Let W be a subspace of an inner product space V . A vector u ∈ V is said to be orthogonal to W if it is


orthogonal to every vector of W .

â The set of all vectors of V that are orthogonal to W is called the orthogonal complement of W and is
denoted by W ⊥ . That is,
W ⊥ = u ∈ V : u, w = 0, ∀w ∈ W
© ­ ® ª

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 92

8.7 Properties of W ⊥

1. W ⊥ is a sunspace of the inner space V. 2. The only common vector in W and W ⊥ is 0.


¢⊥
W⊥ = W = W ⊥⊥ 4. dimW + dimW ⊥ = dimV
¡
3.

5. Let W be a subspace of an inner product space V. Then u ∈ W ⊥ (that is u is orthogonal to W ) if and


only if u is orthogonal to every vectors of spanning set of W .
Thus, if W = span w 1 , w 2 , w 3 , ...w n then ū ∈ W ⊥ ⇔
© ª ­ ®
u, w i = 0 ∀i = 1, 2, 3...n.

Illustration 8.7 Let R4 have the Euclidean inner product and let u = (−1, 1, 0, 2). Determine whether the
vector u is orthogonal to the subspace spanned by the vectors w 1 = (1, 0, 0, 0) , w 2 = (1, −1, 3, 0) and w 2 =
(4, 0, 9, 2) or not ? (or Is u ∈ W ⊥ ?)

Solution: In order to check whether given vector is orthogonal to subspce or not, it is sufficient to check
the orthogonality with each of the spanning vectors [See property 5 of section 8.7].
­ ®
Since u, w 1 = u·w 1 = (−1, 1, 0, 2)·(1, 0, 0, 0) = 1 6= 0, u is not orthogonal to w 1 . Hence u is not orthogonal
to given subspace W. That is u ∉ W ⊥ .

8.8 Results

Let A be an (m × n) matrix then

1. The null space of A and the row space of A are orthogonal complements in Rn with respect to Eu-

Target AA
clidean inner product. That is,

W = row (A) ⇔ W ⊥ = null (A)

2. The nullspace of A T and the column space of A are orthogonal complements in Rn with respect to
L
Euclidean inner product. That is RECAL |
R E DO
READ | W = col (A) ⇔ W ⊥ = null A T
¡ ¢

Illustration 8.8 Find thePowered


orthogonal Prof. (Dr.) Rajesh M. Darji
by complement of subspace of R3 spanned by the victors (1, −1, 3) , (5, −4, −4) , (7, −6, 2

Solution: Given that W = span {(1, −1, 3) , (5, −4, −4) , (7, −6, 2)} .
1 −1 3
 

Consider the matrix A by putting given vectors in row, that is A =  5 −4 −4  . Therefore, given sub-
7 −6 2
space is W = row (A) and hence its orthogonal complement is W ⊥ = nul (A) [See Result 1 of section 8.8].
Now, for null space of A, reducing A to row echelon form.

1 −1 3
 

A =  5 −4 −4  → R 2 − 5R 1 ; R 3 − 7R 1
7 −6 2
1 −1 3
 

∼  0 1 −19  → R 3 − R 2
0 1 −19
 
1 −1 3
∼ 0 1 −19  ⇒ x = 16t , y = 19t , z = t, t ∈R
 
0 0 0
n o
W ⊥ = nul (A) = X = x, y, z : AX = 0
¡ ¢
Hence,
W ⊥ = {(16t , 19t , t ) : t ∈ R} = span {(16, 19, 1)}
£ ¤
∴ Straight line

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 93

* Important:

Observe that,

1. Subspace W is the row space of A and hence dimension of W is number of pivot rows of echelon form.
Therefore dim (W ) = 2.

2. Orthogonal complement W ⊥ is the null space of A and hence dimension of W ⊥ is number of non
pivot columns of echelon form. That is dim W ⊥ = 1.
¡ ¢

3. The whole space V is R3 and hence dimension of V is 3. That is dim (V ) = 3.

dim (W ) + dim W ⊥ = dim (V )


¡ ¢
Thus,
Hence dimension theorem for orthogonal complement is verified.

Illustration 8.9 If subspace W is the intersection of two planes x + y + z = 0 and x − y + z = 0 in R3 , find its
orthogonal complement W ⊥ .

Solution: Given W = intersection of two planes x + y + z = 0 and x − y + z = 0 in R3 . This can be obtain by


somving system of two eaytions as
)
x +y +z =0
⇒ x = −z = t , y = 0, t ∈ R
x −y +z =0

x, y, z : x = −z = t , y = 0, t ∈ R
©¡ ¢ ª £ ¤
Hence, W= Straight line

Target AA
= {(t , 0, −t ) : t ∈ R} = t {}
£ ¤
∴ W = span {(1, 0, 1)} = row (A) where A = 1 0 1
⇒ W ⊥ = nul (A)
Now for null space of A, matrix A is already in echelon form (because it has only one row). Hence its
corresponding homogeneous system has two
O | R ECALL
parametric solution as x = −t 2 , y = t 1 , z = t 2 , t 1 , t 2 ∈ R.
Hence, W = nul (A) D | R
⊥ E D
REA
= {(−t 2 , t 1 , t 2 ) : t 1 , t 2 ∈ R}
= {t 1 (0, 1, 0) + t 2 (−1, 0, 1) : t 1 , t 2 ∈ R}

Powered by Prof. (Dr.) Rajesh M. Darji
∴ W = span {(0, 1, 0) , (−1, 0, 1)} [Plane]

8.9 Orthogonal Set

â A subset of an inner product space V is called an orthogonal set if all vectors are pairwise orthogonal.
That is all pairs of distinct vectors in the set are orthogonal.
© ª
Hence, if u 1 , u 2 , u 3 , ...u n is orthogonal set then,
­ ® ­ ®
u i , u j = 0 ∀i 6= j , and u i , u i 6= 0

â An orthogonal set in which each vector has unit norm (unit vector) is called an orthonormal set.

e. g. {(0, 1, 0) , (1, 0, 1) , (1, 0, −1)} is an orthogonal subset of R3 where as {(1, 0, 0) , (0, 1, 0) , (0, 0, 1)} is an
orthonormal subset of R3 .

â On normalizing each vectors of an orthogonal set we get an orthonormal set.

Note: Every orthogonal set (orthonomal set) is always linearly independent and hence, in particular an
orthogonal subset of 3 vectors of R3 is always basis for R3 . In general it is true for Rn .
e. g. {(0, 1, 0) , (1, 0, 1) , (1, 0, −1)} is an orthogonal subset of R3 and hence it is basis for R3 .

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 94

8.10 Orthogonal Projection

Let u and v be vectors of an inner product space V then the orthogo-


nal projection of u on v is defined as
­ ®
u, v
pr o j v u = ° °2 v
°v °

Similarly, orthogonal projection of v on u is defined as


­ ®
v, u
pr o j u v = ° °2 u
°u °

Illustration 8.10 Find orthogonal projection of u = (1, −1, 2) on v = (2, 0, 2) with respect to standard Eu-
clidean inner product in R3 .

Solution: By definition of orthogonal projection,


­ ®
u, v u·v £ ¤
projv u = ° °
°u ° kvk
v=° °
°u ° kvk
v ∵ Standard inner product
µ ¶
(1, −1, 2) · (2, 0, 2)
= p p (2, 0, 2)
1+1+4 4+0+4
µ ¶
2+0+4
= p p (2, 0, 2)
6 8

Target AA
6 6 p ³p p ´
= p (2, 0, 2) = p (2, 0, 2) = 3 (1, 0, 1) ⇒ projv u = 3, 0, 3
48 4 3

Exercise 8.2
ECALL
| R u = (2, k, 6) , v = (l , 5, 3) and w = (1, 2, 3) are mutually or-
Ovectors
| R E Dthe
1. Do there exist k and l such that
thogonal withR EADto the Euclidean inner product ?
respect
[Hint: Take u · v = v · w = w · u = 0.]
Powered by Prof. (Dr.) Rajesh M. Darji
2. Let R3 have the Euclidean inner product, and let u = (1, 1, −1) and v = (6, 7, −15) . If ° ku + v ° = 13,
° °

then what is k ?
° °2 ­ ®
[Hint: °ku + v ° = 169 ⇒ ku + v, ku + v = 169]

3. Show that p = 1 − x + 2x 2 and q = 2x + x 2 are orthogonal in P 2 .


­ ®
[Hint: For orthogonal polynomial p, q = 0.]

4. Let R4 have the Euclidean inner product. Find two unit vectors that are orthogonal to the three vectors
u = (2, 1, −4, 0) , v = (−1, −1, 2, 2) and w = (3, 2, 5, 4) .

5. If w is orthogonal to both u 1 and u 2 then prove that it is orthogonal to k 1 u 1 + k 2 u 2 , ∀ k 1 , k 2 ∈ R.


­ ® ­ ® ­ ®
[Hint: Given w, u 1 = w, u 2 = 0. Prove w, k 1 u 1 + k 2 u 2 = 0.]
­ ®
6. Verify that the set { (1, 0) , (0, 1) } is orthogonal with respect to the inner product u, v = 4u 1 v 1 + u 2 v 2 ,
then convert it to an orthonormal set by normalizing the vectors.

7. Find orthogonal complement ( W ⊥ ) and hence basis for W ⊥ . Also verify that dimW + dimW ⊥ =
dimV , given

a. W = span {(1, 4, −2) , (2, 1, −1)} in R3 . b. W = span {(1, −1, 0, 2) , (0, 1, 2, −1)} in R4 .
c. W = span {(1, −2, 1)} in R3 .

[Hint: See Illustration 8.8]

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 95

8. Find the equation of W ⊥ , for the each of the following subspace: [Hint: See Illustration 8.9]

a. W be the line in R2 with the equation y = 2x.


b. W be the plane in R3 with the equation x − 2y + 3z = 0.
c. W be the line in R3 with parametric equation x = 2t , y = −5t , z = 4t , t ∈ R.

[Hint: See Illustration 8.9]

Answers

E E E

8.11 Orthogonal and Orthonormal Bases


© ª
â A basis B = v 1 , v 2 , v 2 .....v n of an inner product space V is called an orthogonal basis if it is an or-
thogonal set.

â If each vector of orthogonal basis is unit vector then it is called an orthonormal basis of V.

e. g. {(1, 0, 0) , (0, 1, 0) , (0, 0, 1)} is an orthonormal basis for R3 whereas {(0, 2, 0) , (1, 0, −1) , (1, 0, 1)} is an
orthogonal basis for R3 and it can be reduce to orthonormal by normalizing each vector.

8.12 Coordinate Relative to Orthonormal Basis

Target AA
© ª
Let S = v 1 , v 2 , v 3 .....v n be an orthonormal basis for an inner product space V, and u is any vector in V,
then
­ ® ­ ® ­ ®
u = u, v 1 v 1 + u, v 2 v 2 + ... + u, v n v n

Hence the coordinate of u relative to S is given by


E CALL
| R E D¢ O ¡­| R ®
REA D ¡
u =
­ ® ­
u, v 1 , u, v 2 , ..... u, v n
®¢
S

Powered by Prof.µ (Dr.)


3 4
¶ Rajesh
µ
4 3
¶ M. Darji
Illustration 8.11 Verify that the vectors v 1 = − , , 0 , v 2 = , , 0 , v 3 = (0, 0, 1) form an orthonormal
5 5 5 5
basis for R3 with respect to the Euclidean inner product and hence express the vector u = (1, −1, 2) as linear
combinations of v 1 , v 2 and v 3 .

Solution: Observe that for given vectors v̄ 1 , v̄ 2 , v̄ 3 ,


µ ¶ µ ¶ µ ¶µ ¶ µ ¶µ ¶
­ ® 3 4 4 3 3 4 4 3 12 12
v 1, v 2 = v 1 · v 2 = − , , 0 · , , 0 = − + +0 = − + =0
5 5 5 5 5 5 5 5 25 25
µ ¶ µ ¶ µ ¶
­ ® 4 3 4 3
v 2 , v 3 = v 2 · v 3 = , , 0 · (0, 0, 1) = (0) + (0) + (0) (1) = 0
5 5 5 5
µ ¶ µ ¶ µ ¶
­ ® 3 4 3 4
v 3 , v 1 = v 3 · v 1 = (0, 0, 1) · − , , 0 = (0) − + (0) + (1) (0) = 0
5 5 5 5

Further,
s
µ ¶2 µ ¶2 r r
3 4 2 9 16 25 p
kv̄ 1 k = − + + (0) = + +0 = = 1=1
5 5 25 25 25
s
µ ¶2 µ ¶2 r r
4 3 2 16 9 25 p
kv̄ 2 k = + + (0) = + +0 = = 1=1
5 5 25 25 25
q p p
kv̄ 3 k = (0)2 + (0)2 + (1)2 = 0 + 0 + 1 = 1 = 1

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 96

∴ Given set is an orthonormal subset of R3 containing three vectors. So it is an orthonarmal basis for R3 .

Now for orthonormal basis, we have [See section 8.8]


­ ® ­ ® ­ ®
u = v 1, u v 1 + v 2, u v 2 + v 3, u v 3
¡ ¢ ¡ ¢ ¡ ¢ £ ¤
= v1 · u v1 + v2 · u v2 + v3 · u v3 ∵ Standard Euclidean inner product
·µ ¶ ¸ ·µ ¶ ¸
3 4 4 3
= − , , 0 · (1, −1, 2) v 1 + , − , 0 · (1, −1, 2) v 2 + [(1, 0, 0) · (1, −1, 2)] v 3 [∵ Given]
5 5 5 5
7 7
∴ u = − v 1 + v 2 + v 3 Required linear combination.
5 5
µ ¶
© ª ¡ ¢ 7 7
Also coordinate of u relative to given orthonormal basis S = v 1 , v 2 , v 3 is u S = − , , 1 .
5 5

8.13 Gram-Schmidt Process 1

â With the help of this process we can construct an orthogonal basis from the given basis and on nor-
malizing each vector we can obtain an orthonormal basis.
© ª
â Consider the basis S = u 1 , u 2 , u 3 .....u n of an inner product space V.
© ª
â The orthogonal basis of an inner product space V is given by B = w 1 , w 2 , w 3 .....w n , where

w 1 = u1

Target AA
w 2 = u 2 − projw 1 u 2
­ ®
w 1, u2
= u 2 − ° °2 w 1
°w 1 °
w 3 = u 3 − projw 1 u 3 − projw 2 u 3
­
w 1, u3
® ­
w 2O, u 3| REC
® AL L
= u 3 − ° °2 |wR 1−ED ° °2 w 2 and so on.
RE°AwD 1
° °w 2 °

Note that on normalizing Powered


each vector Prof. (Dr.) Rajesh M. Darji
by using wb = °w ° , we get an othonormal basis B = {wb 1 , wb 2 , wb 3 , ...wb n } .
°w °

Illustration 8.12 Let R3 have Euclidean inner product. Transform the basis S = u 1 , u 2 , u 3 into an or-
© ª

thonormal basis using Gram-Schmidt process, where u 1 = (1, 0, 0) , u 2 = (3, 7, −2) and u 3 = (0, 4, 1) . [Summer-
2017]
­ ®
Solution: Given inner product is standard Euclidean inner product, that is u, v = u · v.
By Gram-Schmidt process,

w1 (1, 0, 0) 1
w 1 = u 1 = (1, 0, 0) ⇒ w
b1 = ° ° = p
°w 1 °
= p (1, 0, 0) ∴ w
b 1 = (1, 0, 0)
1+0+0 1
w 2 = u 2 − projw 1 u 2
­ ®
w 1, u2 w 1 · u2 £ ­ ® ¤
= u 2 − ° °2 w 1 = u 2 − ° °2 w 1 ∵ w 1, u2 = w 1 · u2
°w 1 ° °w 1 °
(1, 0, 0) · (3, 7, −2)
= (3, 7, −2) − (1, 0, 0)
(1)
= (3, 7, −2) − 3 (1, 0, 0)
w2
µ ¶
(0, 7, −2) 1 7 2
∴ w 2 = (0, 7, −2) ⇒ w
b2 = ° ° = p
°w 2 °
= p (0, 7, −2) ∴ w
b 2 = 0, p , − p
0 + 49 + 4 53 53 53
1
Jorgen Pedersen Gram; Danish, 1850-1916 and Erhard Schmidt; Berlin, 1876-1959.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 97

w 3 = u 3 − projw 1 u 3 − projw 2 u 3
­ ® ­ ®
w 1, u3 w 2, u3 w 1 · u3 w 2 · u3
= u 3 − ° °2 w 1 − ° °2 w 2 = u 3 − ° °2 w 1 − ° °2 w 2
°w 1 ° °w 2 ° °w 1 ° °w 2 °
(1, 0, 0) · (0, 4, 1) (0, 7, −2) · (0, 4, 1)
= (0, 4, 1) − (1, 0, 0) − (0, 7, −2)
(1) 53
µ ¶
26 182 52
= (0, 4, 1) − 0 − (0, 7, −2) = (0, 4, 1) − 0, ,−
53 53 53
µ ¶
30 105 15
w 3 = 0, , − = (0, 2, 7)
53 53 53
w3
µ ¶
(0, 2, 7)1 2 7
⇒ w
b3 = ° ° = p
°w 3 °
= p (0, 2, 7) ∴ w
b 3 = 0, p , p
0 + 4 + 49 53 53 53
½ µ ¶ µ ¶¾
7 2 2 7
∴ Required orthonormal basis is B = {w
b1, w
b2, w
b 3 } = (1, 0, 0) , 0, p , − p , 0, p , p .
53 53 53 53
Note: In case ½of orthogonal basis ¶¾ to find normalized vector. Hence orthogonal basis is B =
µ we need not
© ª 30 105
w 1 , w 2 , w 3 = (1, 0, 0) , (0, 7, −2) , 0, , − .
53 53

Illustration 8.13 Let R3 have an Euclidean inner product, Find the orthonormal basis for the space spanned
by (0, 1, 2) , (−1, 0, 1) , (−1, 1, 3) .

Solution: Let W = span {(0, 1, 2) , (−1, 0, 1) , (−1, 1, 3)}

Target AA
In order to find orthogonal basis of W, first of all we find basis for W.
0 −1 −1
 

Observe that, for the matrix of column vectors of given set A =  1 0 1 , det (A) = 0. therefore
2 1 3
given set is linearly dependent and hence it is not a basis. Now to remove linearly dependent vector reducing
matrix A to row echelon form, we get
O | R ECALL
RED
READ |
  
0 −1 −1 1 0 1
A= 1 0 1  ∼  0 −1 −1 
 

Powered by 2 Prof. (Dr.) Rajesh M. Darji


1 3 0 0 0

Discarding third vector corresponding to non pivot column from given set we get basis for subspace W as
© ª
S = {(0, 1, 2) , (−1, 0, 1)} = u 1 , u 2 .

Now by Gram-Schmidt method,

w1
µ ¶
1 1 2
w 1 = u 1 = (0, 1, 2) ⇒ w b1 = = (0, 1, 2) ∴ w b 1 = 0, p , p
°w 1 ° p5
° °
5 5
­ ®
w 1, u2 w 1 · u2 £ ¤
w 2 = u 2 − projw 1 u 2 = u 2 − ° °2 w 1 = u 2 − ° °2 w 1 ∵ Eulidean inner product
°w 1 ° °w 1 °
(0, 1, 2) · (−1, 0, 1) 2
= (−1, 0, 1) − ¡ 2 2
¢ (0, 1, 2) = (−1, 0, 1) − (0, 1, 2)
0+1 +2 5
µ ¶ µ ¶
2 4 2 1
= (−1, 0, 1) − 0, , = −1, − ,
5 5 5 5
w2
µ ¶
1 1 −5 −2 1
w 2 = (−5, −2, 1) ⇒ w b 2 = ° ° = p (−5, −2, 1) ∴ w b2 = p , p , p
5 °w 2 ° 30 30 30 30
½µ ¶ µ ¶¾
1 2 −5 −2 1
∴ Required orthonormal basis for given subspace W is B = {w
b1, w
b 2 } = 0, p , p , p , p , p .
5 5 30 30 30

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 98

Exercise 8.3

1. In each part, an orthonormal basis relative to the Euclidean inner product is given, find coordinate of
w with respect to that basis:
µ ¶ µ ¶
1 1 1 1
a. w = (3, 7) , u 1 = p , − p , u 2 = p , p .
2 2 2 2
µ ¶ µ ¶ µ ¶
2 2 1 2 1 2 1 2 2
b. w = (−1, 0, 2) , u 1 = , − , , u 2 = , , − , u 3 = , , .
3 3 3 3 3 3 3 3 3
[See Illustration 8.11]

2. Use Gram-Schmidt process to transform the given basis in to an orthonormal basis:

a. u 1 = (1, −3) , u 2 = (2, 2) in R2 , with Euclidean inner product.


b. u 1 = (1, 1, 1) , u 2 = (−1, 1, 0) , u 3 = (1, 2, 1) in R3 , with Euclidean inner product.

[See Illustration 8.12]

3. Let R3 have an Euclidean inner product, Find the orthonormal basis for the space spanned by (1, −1, 2) ,
(1, 1, 0) , (1, 0, 1) .
[See Illustration 8.13]
© ª ° °2 ­ ®2
4. Let v 1 , v 2 , v 3 be the orthonormal basis for an inner product space V . Show that ° w ° = w, v 1 +
­ ®2 ­ ®2
w, v 2 + w, v 3 , ∀w ∈ V.

Target AA
¡ ¢
[Hint: For orthonormal basis S, w S = (〈w̄, v̄ 1 〉 , 〈w̄, v̄ 2 〉 , 〈w̄, v̄ 3 〉) . ]

Answers
p p ´
µ ¶
³ 4
1. a. −2 2, 5 2
b. 0, − , 1
ECA½µLL
3
O | R
3 ED
R
EpAD, |p , p
½µ ¶ µ ¶¾ ¶ µ ¶ µ ¶¾
1 3 1 1 1 1 1 1 1 1 2
2. a. B = p R ,− b. B = p , p , p , − p , p , 0 , p , p , − p
10 10 10 10 3 3 3 2 2 6 6 6

Prof. (Dr.) Rajesh M. Darji


½µ ¶ µ ¶¾
1 1 2 1 1
3. B = p , − p , p Powered
, p , pby, 0
6 6 6 2 2

E E E

Theorem 8.1 (Projection Theorem | Orthogonal Projection on a Subspace) © ª


Let W be a finite dimensional subspace of an inner product space V with an orthonormal basis v 1 , v 2 , v 3 , ...v n .
Then every vector u ∈ V can be uniquely expressed as

u = w1 + w2

where,

â w 1 is called orthogonal projection of u on W and is denote by projW u and is given by

w 1 = projW ū = 〈ū, v̄ 1 〉 v̄ 1 + 〈ū, v̄ 2 〉 v̄ 2 + 〈ū, v̄ 3 〉 v̄ 3 + .... + 〈ū, v̄ n 〉 v̄ n

â w 2 is called component of u orthogonal to W and is denote by pr o jW ⊥ u. that is w 2 = pr o jW ⊥ u ∈ W ⊥ .


Thus
ū = projW ū + projW ⊥ ū

Note: In case of orthogonal basis first transform in to orthonormal basis by normalizing each vector and
then proceed further.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 99

Projection on line
Projection on plane

µ ¶
4 3
Illustration 8.14 Let W be subspace R3 spanned by the orthogonal vectors v 1 = (0, 1, 0) and v 2 = − , 0, .
5 5
Find the projection of u = (1, 1, 1) on W. also obtain the component of u orthogonal to W.
µ ¶
4 3
Solution: For given orthogonal vectors v 1 = (0, 1, 0) and v 2 = − , 0, , observe that kv̄ 1 k = kv̄ 2 k = 1.
5 5
Hence {v̄ 1 , v̄ 2 } form an orthonormal basis for subspace W.
∴ Orthogonal projection of u = (1, 1, 1) on W is given by

projW ū = 〈ū, v̄ 1 〉 v̄ 1 + 〈ū, v̄ 2 〉 v̄ 2 [∵ Theorem 8.1]


£ ¤
= (v̄ 1 · ū) v̄ 1 + (v̄ 2 · ū) v̄ 2 ∵ Euclidean inner product
· µ ¶¸ µ ¶
4 3 4 3
= [(1, 1, 1) · (0, 1, 0)] (0, 1, 0) + (1, 1, 1) · − , 0, − , 0,
5 5 5 5
· ¸µ ¶ µ ¶
1 4 3 4 3
= [1] (0, 1, 0) + − − , 0, = (0, 1, 0) + , 0, −
5 5 5 25 25

Target AA
µ ¶
4 3
∴ projW ū = , 1, −
25 25

Also by Projection Theorem 8.1, we have

u = projW u + projW ⊥ u
∴ projW ⊥ u = u − projW u EDO | R
ECALL
R
R EAD |µ 4 3
¶ µ
4 3

= (1, 1, 1) − , 1, − = 1 − , 0, 1 +
25 25 25 25

∴ projW ⊥ u =
µ
21 Powered
, 0,
28

by
Required Prof. (Dr.) Rajesh M. Darji
component of u orthogonal to W.
25 25

Exercise 8.4

1. The subspace of R3 spanned by the vectors u 1 = 45 , 0, − 35 and u 2 = (0, 1, 0) is a plane passing through
¡ ¢

the origin. Express w = (1, 2, 3) in the form w = w 1 + w 2 , where w 1 lies in the plane and w 2 is perpen-
dicular to the plane.

2. Let W be The subspace of R4 spanned by the vectors u 1 = (−1, 0, 1, 0) and u 2 = (0, 1, 0, 1), Express w =
(−1, 2, 6, 0) in the form w = w 1 + w 2 , where w 1 lies in subspace W and w 2 is orthogonal to W.

Answers
µ ¶ µ ¶ µ ¶ µ ¶
4 3 9 12 7 7 5 5
1. w 1 = − , 2, , w 2 = , 0, 2. w 1 = − , 1, , 1 , w 2 = , 1, , −1
5 5 5 5 2 2 2 2

E E E

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 100

8.14 Least Square Approximate Solution for Linear System

â Let AX = b be the inconsistent system of linear equations. That is its exact solution does not exist.

â The beast approximate solution of AX = b is known as the least square solution and is given by the
normal system,
A T AX = A T b (8.3)

â Also if W denotes the column space of A and X be the least square solution then the orthogonal
projection of b on W is given by
projW b = AX (8.4)

4 0 2
  

Illustration 8.15 Find the least squares solution of the linear system Ax = b for A =  0 2  , b =  0  .
1 1 11
Also find projection of b on column space of A. [Winter-2015]

Solution: The least square solution is given by (8.3),


³ ´
A T AX = A T b, X ∈ R2
¸ 4 0 · 2
   
· ¸ · ¸
4 0 1  x 4 0 1
0 2  =  0 
0 2 1 y 0 2 1
1 1 11

Target AA
· ¸· ¸ · ¸
17 1 x 19
= ⇒ 17x + y = 19, x + 5y = 11
1 5 y 11

Solving above two equations, we get required least square solution as x = 1, y = 5.


LL
Also projection of b on column space of A is given by (8.4),
E D O | RECA
D |WR= col (A) and X is the least square solution.
RE,Awhere
projW b = AX
4 0 · 4
   
¸
1
∴ projW b =  0
1
2 
Powered
1
5 by
6
Prof. (Dr.) Rajesh M. Darji
=  10 

∴ projW b = (4, 10, 6)

Illustration 8.16 Find the orthogonal projection of the vector u = (−3, −3, 8, 9) on the subspace of R4
spanned by the vectors u 1 = (3, 1, 0, 1) , u 2 = (1, 2, 1, 1) , u 3 = (−1, 0, 2, −1) .

Solution: Let W = span { ū 1 , ū 2 , ū 3 } = span {(3, 1, 0, 1) , (1, 2, 1, 1) , (−1, 0, 2, −1)} .  


3 1 −1
 1 2 0 
Then W = col (A) , where A be matrix obtained by putting vectors in columns, that is A =  .
 
 0 1 2 
1 1 −1
∴ By least square method, required orthogonal projection is given by

projW u = AX (8.5)

where X is the least square solution of the system


   
3 1 −1  −3
x

 1 2 0   −3
  
AX = u ⇒   y =
 
 0 1 2   8

z

1 1 −1 9

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 101

The normal system for the least square solution:


A T AX = A T u

  
1 −1  3 −3
3 1 0 1  x 3 1 0 1 
    
2 0  1 −3 
 1 2 1 1   y = 1 2 1 1 
   
1 2  0 8

−1 0 2 −1 z −1 0 2 −1
  
1 −1 1 9
11 6 −4 x
    
−3
 6 7 0   y = 8  (8.6)
−4 0 6 z 10
To solve system (8.6), reducing corresponding augmented matrix to row echelon form we get
11 6 −4 11 6 −4
   
−3 −3
£ ¤
A:u = 6 7 0 8  ∼  0 41 24 106 
−4 0 6 10 0 0 1 1
Making back substitution, we get the least square solution as x = −1, y = 2, z = 1.
∴ From (8.5), required orthogonal projection is
   
3 1 −1   −2
 1 2 −1
0   3 
projW u =  2 = ∴ projW u = (−2, 3, 4, 0)
   
 0 1 2   4 
 
1
1 1 −1 0

Exercise 8.5

Target AA
2 −2 2
   

1. Find the least squares solution of the linear system Ax = b for A =  1 1  , b =  −1  . Also find
3 1 1
projection of b on column space of A. [Winter-2015]

R E4x LL2 = 12,


C1A−3x
2.
O |
Find the least square solution for the system 2x 1 +5x 2 = 32, 3x 1 +x 2 = 21. [Summer-
RED
2015]
READ |
3. Find the orthogonal projection of the vector u = (6, 3, 9, 6) onto the subspace of R4 spanned by the
vectors u 1 = (2, 1, 1, Powered Prof. (Dr.) Rajesh M. Darji
1) , u 2 = (1,by
0, 1, 1) , u 3 = (−2, −1, 0, −1) .

4. Find projW u, where u = (5, 6, 7, 2) and W is the solution space of the homogeneous system x 1 +x 2 +x 3 =
0, 2x 2 + x 3 + x 4 = 0.
½µ ¶ µ ¶¾
1 1 1 1
[Hint: Solution space: W = span − , − , 1, 0 , , − , 0, 1 ]
2 2 2 2

Answers
µ ¶
3 2 46 5 13 305 704
1. x = , y = − , projcol(A) b = ,− , 2. x = ,y = 3. projW u = (7, 2, 9, 5)
7 3 21 21 21 39 273
4. projW u = (0, −1, 1, 1)
E E E

8.15 Orthogonal Matrix

A square matrix A is said to be orthogonal matrix if, A −1 = A T , that is A A T = A T A = I .


e. g.
 
p1 p1
 
2 2  cos θ − sin θ
A= , A = 
 
− p1 p1 sin θ cos θ
2 2
are orthogonal matrices.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 102

* Important:

A matrix of order n is an orthogonal matrix if and only if its row (column) vectors form orthonormal subset
of Rn . ³ ´ ³ ´
e. g. Consider orthonormal subset u 1 , u 2 , u 3 of R3 where u 1 = p1 , 0, p1 , u 2 = (0, 1, 0) , u 3 = − p1 , 0, p1 ,
© ª
    2 2 2 2
1 1 1 1
p 0 p   2 0 − 2 
p p
 2 2
T
   
then A = 
 0 1 0   and A =  0 1
 0   are always orthogonal matrices.
   
− p1 0 p1 p1 0 p1
2 2 2 2

Note: In order to check weather given matrix is orthogonal or not it is sufficient to check orthonoamal-
ity of its row or column vectors.
 
1 2 2
 3 3 3 
 is orthogonal matrix and hence find A −1 .
 
2
Illustration 8.17 Show that the matrix A = 
 3 − 23 1
3 
 
− 32 − 13 2
3
µ ¶ µ ¶ µ ¶
1 2 2 2 2 1 2 1 2
Solution: Consider the column vectors u 1 = , , − , u 2 = , − , − , u 3 = , , .
µ ¶ µ ¶3 3 3 3 3 3 3 3 3
1 2 2 2 2 1 2 4 2
Observe that, u 1 · u 2 = , , − · , − , − = − + = 0
3 3 3 3 3 3 9 9 9
µ ¶ µ ¶
2 2 1 2 1 2 4 2 2
u2 · u3 = , − , − · , , = − − =0

Target AA
3 3 3 3 3 3 9 9 9
µ ¶ µ ¶
1 2 2 2 1 2 2 2 4
u1 · u3 = , , − · , , = + − =0
s 3 3 3 3 3 3 9 9 9
µ ¶2 µ ¶2 µ ¶2 r
° ° 1 2 2 1 4 4
Also, °u 1 ° = + + − = + + =1
3 3 3 9 9 9
R E C1ALL
s
|
µ ¶2 µ ¶2 µ ¶2 r
° °
°u 2 ° = 2 2
+ − |+ R
1
−ED= O 4 4
+ + =1
3READ 3 3 9 9 9
s
µ ¶2 µ ¶2 µ ¶2 r
° ° 2 1 2 4 1 4
°u 3 ° =
3
+ +
Powered
3
=
Prof. (Dr.) Rajesh M. Darji
3 by 9 9 9
+ + =1

∴ u 1 , u 2 , u 3 forms orthonormal subset of R3 . Hence A is an orthogonal matrix, and


© ª

 
1 2
 3 3 − 23 
A −1 = A T = 
 
2
 3 − 23 − 13 

 
2 1 2
3 3 3

 
 2 2 1 
 
Illustration 8.18 Is A =  −2 1 2 

 orthogonal matrix ? if not can it be converted in to orthogonal
 
1 −2 2
matrix ? [Summer-2015]

Solution: Consider the column ° °u 1 =° (2,°−2, 1) , u 2 = (2, 1, −2) , u 3 = (1, 2, 2) . Observe that u 1 ·
° °vectors
u 2 = u 2 · u 3 = u 1 · u 3 = 0 and °u 1 ° = °u 2 ° = °u 3 ° = 3. That is column vectors are orthogonal but not or-
thonormal (not unit vector). Hence, A is not orthogonal matrix.

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 103

Since column vectors are orthogonal, on normalizing each vector by dividing its magnitude, we obtain
orthonormal vectors. Thus given matrix A can be converted into orthogonal matrix, and is given by
 
 2/3 2/3 1/3 
 
 −2/3
 1/3 2/3 

 
1/3 −2/3 2/3

Illustration 8.19 If A is an orthogonal then prove that det (A) = ±1. Also show that converse may not be
true.

Solution: Suppose that A is an orthogonal matrix.


Therefore, A T = A −1
∴ A A T = I ⇒ det A A T = det (I )
¡ ¢
£ ¤
∴ det (A) det (A) = 1 ∵ det (I ) = 1 as I is an identity matrix
∴ [det (A)]2 = 1 ⇒ det (A) = ±1 Proved.
 
2 1
Now consider the matrix A =   . Here det (A) = 1 but matrix is not orthogonal because its column
1 1
vectors are not orthonormal. Thus converse of given statement may not be true.

Target AA
Exercise 8.6

1. Show that the matrix A is orthogonal then A T and A −1 are also orthogonal.
[Hint: Apply definition of orthogonal matrix.]

2. Find the normal system of Ax = b when A is orthogonal matrix.


O | R ECALL
3. Let A be a square matrix|such D A = I . Prove that A is symmetric if and only if A is orthogonal.
REthat 2

READ
4. Let A is an (n × n) orthogonal matrix with n is odd, then prove that A cannot be skew-symmetric.
Powered by
5. Let A is an (n × n) orthogonal Prof. (Dr.) Rajesh M. Darji
matrix such that| A | = −1. Prove that (A + I n ) is singular matrix.

E E E

Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779

LAVC (GTU-2110015) B.E. Semester II


Chapter 9
Vector calculus I: Vector differentiation

9.1 Scalar and Vector

A physical quantity for the representation of it only magnitude is sufficient is called scalar whereas a physical
quantity for the representation of it magnitude as well as direction is required, is called vector.
e. g. statistical data are scalars and velocity, acceleration, force etc are vectors.

General Remarks
→−
1. A scalar is generally denoted by α, β, a, b etc. whereas the vector is denoted by A , A, A etc.

Target AA
2. A vector having unit magnitude is called unit vector and is denoted by ab or Ab (read as cap or carat).
In particular the unit vectors along the positive direction of the coordinate axes, X -axis, Y -axis and
Z-axis, are denoted by ib, jb, kb or Ib, Jb, Kb respectively.

b that is →
3. Any vector A can always be expressed as a combination of ib, jb, k,

A = a 1 ib+ a 2 jb+ a 3 kb for some
a 1 , a 2 , a 3 ∈ R. This form of vector is known as
| R AL L
ECcomponent form and a 1 , a 2 , a 3 are called component
along respective coordinate ED
Raxes. O
READ | →

4. The magnitude
¯→ q of a vector A is a scalar and denoted by the symbol or A and is given by the formula
¯− ¯
¯
¯ A ¯ = A = + a 12 + aPowered
2 2
2 + a 3 . (only Prof. (Dr.) Rajesh M. Darji
by positive value)
5. Dividing the vector by its own magnitude we get a unit vector along the direction of given vector, that
→−
is unit vector along the direction of A is given by


A a 1 ib+ a 2 jb+ a 3 kb
ab = Ab = = q
A a 12 + a 22 + a 32


− 1 ¡ ¢
e. g. A = 2ib− jb+ 3kb ⇒ ab = p 2ib− jb+ 3kb
13

9.2 Algebraic Operations of Vectors


→− b →−
For A = a 1 ib+ a 2 jb+ a 3 k, B = b 1 ib+ b 2 jb+ b 3 kb and α ∈ R, the basic algebraic operations are defined as follow:

1. Scalar Multiplication:


α A = α a 1 ib+ a 2 jb+ a 3 kb = (αa 1 ) ib+ (αa 2 ) jb+ (αa 3 ) kb
¡ ¢

2. Vector Addition and subtraction:



− → − ¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢
A ± B = a 1 ib+ a 2 jb+ a 3 kb ± b 1 ib+ b 2 jb+ b 3 kb = a 1 ib+ a 2 jb+ a 3 kb ± b 1 ib+ b 2 jb+ b 3 kb

104
Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 105


− →

3. Vector Multiplications If θ denotes an angle between the directions of two vectors A and B then
→− →

there defined two types of product between A and B as

i. Dot Product (Scalar Product):



− → −
A · B = A B cos θ = a 1 b 1 + a 2 b 2 + a 3 b 3

ii. Cross Product (Vector Product):


¯ ¯
¯ ¯
¯
¯ ib jb k ¯¯
b

− → −
A × B = A B sin θ n
¯ ¯
b = ¯¯ a1 a2 a 3 ¯¯
¯ ¯
b1 b2 b3 ¯
¯ ¯
¯

− →

where n
b is the unit vector perpendicular to both A and B and in direction in which the right

− →

handed screw would advance when it rotate from A to B .

9.3 Point Functions

â A function whose value depends on the position of the point in space is called point function.
â If that function is scalar then it is called scalar point function.
â If that function is vector then the function is called vector point function.
e. g. Temperature in the medium is scalar point function and the velocity of particle in the moving

Target AA
fluid is the vector point function.


â In symbolic form, φ x, y, z = x y 2 + 7x y z 3 is scalar point function and V = 3xz ib− 5x y 2 jb+ x y z 3 kb is
¡ ¢
¡ ¢
the vector point function defined at the point p x, y, z .

9.4 Vector Differential Operator


CALL
∂ ∂ ED∂
|R O | RE
An operator of the R EA∂xDib+ ∂y jb+ ∂z kb is called the vector differential operator and is denoted by the
form
symbol ∇ (read as del or nabla). That is,
Powered by Prof.∂ (Dr.)

Rajesh M. Darji

∇= ib+ jb+ kb
∂x ∂y ∂z

9.5 Gradient

Let φ x, y, z be the scalar point function then the gradient of φ x, y, z is defined as grad φ = ∇φ.
¡ ¢ ¡ ¢

∂ ˆ ∂ ˆ ∂ ∂φ ˆ ∂φ ˆ ∂φ
µ ¶
∴ grad φ = ∇φ = i+ j + k̂ φ = i+ j+ k̂
∂x ∂y ∂z ∂x ∂y ∂z

Observe that gradient is defined for the scalar point function and it gives vector point function.

Geometrical Interpretation of Gradient

If p x, y, z be any point on the given surface φ x, y, z = c then


¡ ¢ ¡ ¢
¡ ¢
∇φ P defines normal vector to the surface, and hence the unit
normal vector to the surface φ = c at the point p is given by,
¡ ¢
∇φ p
n̂ = ¯¡ ¢ ¯
¯ ∇φ p ¯
¯ ¯

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 106

Illustration 9.1 Find grad φ if φ = log x 2 + y 2 + z 2 at the point (1, 2, 1) .


¡ ¢

Solution: By definition of gradient vector,

grad φ = ∇φ
∂φ ∂φ ∂φ b
= ib+ jb+ k
∂x ∂y ∂z
µ ¶ µ ¶ µ ¶
2x 2y 2z
φ = log x 2 + y 2 + z 2
£ ¡ ¢¤
= 2 i+ 2
b j+ 2
b kb ∵
x + y 2 + z2 x + y 2 + z2 x + y 2 + z2
2
grad φ = 2
¡ ¢
∴ 2 2
x ib+ y jb+ z kb
x +y +z
¡ ¢
At x, y, z = (1, 2, 1) ,
2¡ ¢ 1 2 1
grad φ = ib+ 2 jb+ kb = ib+ jb+
6 3 3 3

Illustration 9.2
If r = ¯ →
¯− ¯
r ¯ , where →
− b prove that∇ f (r ) = f 0 (r ) ∇r. Hence deduce that ∇¯ →
¯ − ¯2
r = x ib+ y jb+ z k, r ¯ = 2~
r.
q
r = x iˆ + y jˆ + z k̂ and r = ¯ →
¯− ¯
Solution: Given ~ r ¯ = x 2 + y 2 + z 2.

∂ ∂ ∂ b
µ ¶
∇ f (r ) = i+
b j + k f (r )
b
∂x ∂y ∂z

Target AA
∂ ∂ ∂
= f (r ) ib+ f (r ) jb+ f (r ) kb
∂x ∂y ∂z
∂r ∂r ∂r
= f 0 (r ) ib+ f 0 (r ) jb+ f 0 (r ) kb ∵ By chain rule for partial derivative
£ ¤
∂x ∂y ∂z
∂r ∂r ∂r
µ ¶
= f 0 (r )
∂x
ib+
∂y
jb+ kb
∂zO | REC
AL L (9.1)
D
R0 EA∂
µ
Db | ∂RbE ∂ b¶
= f (r ) i+ j+ k r
∂x ∂y ∂z
∂ ∂ ∂ b
Prof. (Dr.) Rajesh M. Darji
· µ ¶ ¸
0
∴ ∇ f (r ) = f (r ) ∇rPowered ∵ byi + b j + k =∇
b
∂x ∂y ∂z

∂r 2x x ∂r y ∂r z
q
Also, r= x2 + y 2 + z2 ⇒ = p = . Similarly, = and = .
∂x 2 x 2 + y 2 + z 2 r ∂y r ∂z r
Substituting in (9.1),
³x ´ ³y´ ³z ´
∇ f (r ) = f 0 (r ) ib+ f 0 (r ) jb+ f 0 (r ) kb
r r r
f 0 (r ) ¡ ¢ f 0 (r )
= x ib+ y jb+ z kb = r)
(~
r r
~
r
∴ ∇ f (r ) = f 0 (r ) (9.2)
r
~
r
f (r ) = ¯ →
¯ − ¯2
∇¯ →
¯ − ¯2
Put r ¯ =r2 ⇒ f 0 (r ) = 2r ∴ r ¯ = 2r = 2~
r
r
Illustration 9.3 Find the unit normal vector to the surface x y 3 z 2 = 4 at the point (−1, −1, 2) .
Solution: Let φ = x y 3 z 2 − 4 (Taking all terms of given surface x y 3 z 2 = 4 on one side) and given point
p (−1, −1, 2) .
Unit normal vector to given surface at a point p is [See section 9.5]
¡ ¢
∇φ p
n̂ = ¯¡ ¢ ¯ (9.3)
¯ ∇φ p ¯
¯ ¯

LAVC (GTU-2110015) B.E. Semester II


Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) 107

Now,
∂φ ∂φ ∂φ b
∇φ = ib+ jb+ k
∂x ∂y ∂z
= y z ib+ 3x y 2 z 2 jb+ 2x y 3 z kb
¡ 3 2¢
∵ φ = x y 3z2 − 4
¡ ¢ ¡ ¢ £ ¤

At the point p (−1, −1, 2) ,


¡ ¢ ¡ ¢
∇φ p = −4ib− 12 jb+ 4kb = 4 −ib− 3 jb+ kb (9.4)
q p
∴ ¯∇φ¯p = (−4)2 + (−12)2 + (4)2 = 4 11
¯ ¯
(9.5)

Substituting the values from (9.5) and (9.4) in (9.3), we get required unit normal vector,
¡ ¢
4 −ib− 3 jb+ kb 1 ¡ ¢
n
b= p ∴ nb = p −ib− 3 jb+ kb
4 11 11

9.6 Divergence

Let V⃗ = V₁ î + V₂ ĵ + V₃ k̂ be a vector point function; then the divergence of V⃗(x, y, z) is defined as div V⃗ = ∇ · V⃗.

    ∴ div V⃗ = ∇ · V⃗ = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) · (V₁ î + V₂ ĵ + V₃ k̂) = ∂V₁/∂x + ∂V₂/∂y + ∂V₃/∂z

Observe that divergence is defined for a vector point function and it gives a scalar.

Physical Interpretation of Divergence

• If V⃗ denotes the linear velocity of a particle of a moving fluid, then div V⃗ = ∇ · V⃗ defines the rate of increase of fluid per unit volume at a point P. Thus, we can say that div V⃗ = ∇ · V⃗ gives the rate at which the fluid is originating (diverging) from the point per unit volume.

• If the divergence of the velocity is zero, i.e. div V⃗ = ∇ · V⃗ = 0, then such a fluid is known as solenoidal or incompressible.

Illustration 9.4 Evaluate div(3x² î + 5xy² ĵ + xyz³ k̂) at (1, 2, 3).

Solution: Let V⃗ = 3x² î + 5xy² ĵ + xyz³ k̂ = V₁ î + V₂ ĵ + V₃ k̂.

    ∴ div V⃗ = ∂V₁/∂x + ∂V₂/∂y + ∂V₃/∂z        [∵ by definition]
            = 6x + 10xy + 3xyz²

At the point (1, 2, 3), div V⃗ = 6(1) + 10(2) + 3(18) = 80.

Illustration 9.5 Find the value of α such that the vector V⃗ = (αx²y + yz) î + (xy² − xz²) ĵ + (2xyz − 2x²y²) k̂ is solenoidal.

Solution: We know that a vector V⃗ is solenoidal (incompressible) if div V⃗ = 0.

    ∴ div[(αx²y + yz) î + (xy² − xz²) ĵ + (2xyz − 2x²y²) k̂] = 0
    ∂/∂x(αx²y + yz) + ∂/∂y(xy² − xz²) + ∂/∂z(2xyz − 2x²y²) = 0
    ∴ (2αxy + 0) + (2xy − 0) + (2xy − 0) = 0
    ∴ 2αxy + 4xy = 0  ⇒  2α + 4 = 0    ∴ α = −2
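Note (optional cross-check): the solenoidal condition of Illustration 9.5 can be solved symbolically; a minimal sketch assuming SymPy, with α kept as a symbol:

# Find alpha so that div V = 0 -- SymPy sketch for Illustration 9.5.
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha')
V = (a*x**2*y + y*z, x*y**2 - x*z**2, 2*x*y*z - 2*x**2*y**2)

div_V = sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div_V))              # -> 2*alpha*x*y + 4*x*y
print(sp.solve(sp.Eq(div_V, 0), a))    # -> [-2]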


9.7 Curl

Let V⃗ = V₁ î + V₂ ĵ + V₃ k̂ be a vector point function; then the curl of V⃗(x, y, z) is defined as curl V⃗ = ∇ × V⃗.

    ∴ curl V⃗ = ∇ × V⃗ = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) × (V₁ î + V₂ ĵ + V₃ k̂)
              = | î      ĵ      k̂    |
                | ∂/∂x   ∂/∂y   ∂/∂z |
                | V₁     V₂     V₃   |

Observe that curl is defined for a vector point function and it again gives a vector.

Physical Interpretation of Curl

Let P(x, y, z) be any particle of a rigid body rotating about some axis L. Suppose V⃗ is its linear velocity and Ω⃗ its angular velocity; then

    Ω⃗ = ½ curl V⃗

Thus, we can say that for a rotating body the angular velocity is half the curl of its linear velocity.

• If the angular velocity of the rotating body is zero then there is no rotation. We have

    Ω⃗ = 0⃗  ⇒  curl V⃗ = 0⃗.

• Such a motion is known as irrotational motion and the vector field is known as an irrotational field.

• For an irrotational field there always exists a scalar function (scalar potential function) φ such that

    V⃗ = grad φ = ∇φ

• Such a system is called a conservative system, that is, the work done does not depend on the path.


Illustration 9.6 Find curl F⃗, if F⃗ = (y² cos x + z²) î + (2y sin x − 4) ĵ + 3xz² k̂. Is F⃗ irrotational?   [Summer-2016]

Solution: Let F⃗ = (y² cos x + z²) î + (2y sin x − 4) ĵ + 3xz² k̂ = F₁ î + F₂ ĵ + F₃ k̂.
By definition of curl,

    curl F⃗ = ∇ × F⃗ = | î              ĵ              k̂    |
                      | ∂/∂x           ∂/∂y           ∂/∂z |
                      | y²cos x + z²   2y sin x − 4   3xz² |

            = î [∂/∂y(3xz²) − ∂/∂z(2y sin x − 4)] − ĵ [∂/∂x(3xz²) − ∂/∂z(y² cos x + z²)]
              + k̂ [∂/∂x(2y sin x − 4) − ∂/∂y(y² cos x + z²)]
            = î [0 − 0] − ĵ [3z² − 2z] + k̂ [2y cos x − 2y cos x]

    ∴ curl F⃗ = (2z − 3z²) ĵ

Since curl F⃗ ≠ 0⃗, F⃗ is not an irrotational vector field.

Illustration 9.7 Show that the vector field A⃗ = (x² − y² + x) î − (2xy + y) ĵ is irrotational. Also find a scalar function φ such that A⃗ = grad φ.

Solution: Given A⃗ = (x² − y² + x) î − (2xy + y) ĵ.

    curl A⃗ = ∇ × A⃗ = | î             ĵ          k̂    |      [∵ coefficient of k̂ is 0]
                      | ∂/∂x          ∂/∂y       ∂/∂z |
                      | x² − y² + x   −2xy − y   0    |

            = î [∂/∂y(0) − ∂/∂z(−2xy − y)] − ĵ [∂/∂x(0) − ∂/∂z(x² − y² + x)]
              + k̂ [∂/∂x(−2xy − y) − ∂/∂y(x² − y² + x)]
            = î [0 − 0] − ĵ [0 − 0] + k̂ [−2y − (−2y)] = (0) î − (0) ĵ + (0) k̂ = 0⃗

    ∴ curl A⃗ = 0⃗  ⇒  A⃗ is irrotational.

Since A⃗ is an irrotational field, there exists a scalar function (called the scalar potential function) φ such that

    A⃗ = grad φ = ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂        [∵ definition of gradient]

Equating with the components of A⃗, we get

    ∂φ/∂x = x² − y² + x,    ∂φ/∂y = −2xy − y,    ∂φ/∂z = 0

Integrating these equations partially with respect to x, y, z respectively, keeping the other variables constant, we get the following equations:

    φ = x³/3 − xy² + x²/2 + c₁(y, z)        [from ∂φ/∂x, keeping y, z constant]                    (9.6)
    φ = −xy² − y²/2 + c₂(x, z)              [from ∂φ/∂y, keeping x, z constant]                    (9.7)
    φ = c₃(x, y)                            [from ∂φ/∂z, keeping x, y constant]                    (9.8)

In (9.6), c₁(y, z) consists of the terms of φ not containing x, and it can be read off from (9.7) and (9.8); that is c₁(y, z) = −y²/2.
Hence from (9.6) the required scalar function for which A⃗ = grad φ is

    φ = x³/3 − xy² + x²/2 − y²/2

Note: Instead of c₁(y, z), if we find c₂(x, z) or c₃(x, y) from the other two equations using the same logic, we will get the same answer. More precisely, just add all the terms of φ (without c₁, c₂, c₃), taking each term exactly once. (Verify!)
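Note (optional cross-check): the irrotationality test and the scalar potential can be reproduced symbolically. The sketch below assumes SymPy; the helper names are illustrative only.

# Irrotationality check and scalar potential for A = (x^2 - y^2 + x, -(2xy + y), 0) -- SymPy sketch.
import sympy as sp

x, y, z = sp.symbols('x y z')
A = (x**2 - y**2 + x, -(2*x*y + y), sp.Integer(0))

# curl A = (dAz/dy - dAy/dz, dAx/dz - dAz/dx, dAy/dx - dAx/dy)
curl_A = (sp.diff(A[2], y) - sp.diff(A[1], z),
          sp.diff(A[0], z) - sp.diff(A[2], x),
          sp.diff(A[1], x) - sp.diff(A[0], y))
print(curl_A)                                   # -> (0, 0, 0), so A is irrotational

# Build phi from d(phi)/dx, then fix the x-independent part from d(phi)/dy.
phi = sp.integrate(A[0], x)                     # x^3/3 - x*y^2 + x^2/2 + c1(y)
phi += sp.integrate(A[1] - sp.diff(phi, y), y)  # adds -y^2/2
print(sp.expand(phi))                           # -> x**3/3 - x*y**2 + x**2/2 - y**2/2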


9.8 Directional Derivative

Let f(x, y, z) be a scalar point function and let PQ⃗ be a given direction. The directional derivative of f(x, y, z) at the point P in the direction of PQ⃗ is denoted by ∂f/∂r and is defined as

    ∂f/∂r = (∇f)_P · N̂                                                                             (9.9)

where N̂ is the unit vector along the given direction PQ⃗, that is N̂ = PQ⃗ / |PQ⃗|.

Also, if θ denotes the angle between (∇f)_P and N̂, then from (9.9) we have

    ∂f/∂r = |(∇f)_P| |N̂| cos θ = |∇f|_P cos θ        [∵ N̂ is a unit vector, so |N̂| = 1]
    ⇒ max ∂f/∂r = |∇f|_P   when cos θ = 1 (i.e. θ = 0)

Thus, we conclude that the maximum directional derivative (the maximum rate of change) of f(x, y, z) occurs along the direction of (∇f)_P and its magnitude equals |∇f|_P.

Illustration 9.8 Find the directional derivative of the function f = xy² + yz³ at the point (2, −1, 1) in the direction of the vector î + 2ĵ + 2k̂.   [Summer-2017]

Solution: Given f = xy² + yz³, P(2, −1, 1), PQ⃗ = î + 2ĵ + 2k̂.
By definition, the required directional derivative is

    ∂f/∂r = (∇f)_P · N̂                                                                            (9.10)

where N̂ = unit vector along the given direction PQ⃗ = PQ⃗/|PQ⃗| = (î + 2ĵ + 2k̂)/√(1 + 4 + 4)

    ∴ N̂ = (1/3)(î + 2ĵ + 2k̂)

Also,  ∇f = (∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂
          = ∂/∂x(xy² + yz³) î + ∂/∂y(xy² + yz³) ĵ + ∂/∂z(xy² + yz³) k̂
          = y² î + (2xy + z³) ĵ + 3yz² k̂
    ∴ (∇f)_P = î − 3ĵ − 3k̂        [∵ P(2, −1, 1)]

Substituting the values of (∇f)_P and N̂ in (9.10),

    ∂f/∂r = (î − 3ĵ − 3k̂) · (1/3)(î + 2ĵ + 2k̂) = (1/3)(1 − 6 − 6)    ∴ ∂f/∂r = −11/3

Note: The maximum directional derivative is |∇f|_P = √(1 + 9 + 9) = √19, and it occurs in the direction of the gradient vector (∇f)_P = î − 3ĵ − 3k̂. This direction may also be given by the unit vector (∇f)_P/|∇f|_P = (1/√19)(î − 3ĵ − 3k̂).
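Note (optional cross-check): the directional derivative above can be recomputed symbolically; a minimal sketch assuming SymPy, with illustrative names:

# Directional derivative of f = x*y**2 + y*z**3 at (2, -1, 1) along i + 2j + 2k -- SymPy sketch.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*y**2 + y*z**3

grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)]).subs({x: 2, y: -1, z: 1})
d = sp.Matrix([1, 2, 2])
N_hat = d / d.norm()                    # unit vector along the given direction

print(grad_f.T)                         # -> [1, -3, -3]
print(grad_f.dot(N_hat))                # -> -11/3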


9.9 Angle between two Surfaces

An angle θ between two surfaces (tangent planes) at a given point is defined as the acute angle (0 ≤ θ ≤ π/2) between the two normals at that point. The cosine of θ is given by

    cos θ = (N⃗₁ · N⃗₂) / (|N⃗₁| |N⃗₂|)        (cos θ ≥ 0)

Illustration 9.9 Find the angle between the surfaces x² + y² + z² = 9 and z = x² + y² − 3 at the point (2, −1, 2).

Solution: Let φ₁ = x² + y² + z² − 9, φ₂ = z − x² − y² + 3, P(2, −1, 2).

    ∇φ₁ = (∂φ₁/∂x) î + (∂φ₁/∂y) ĵ + (∂φ₁/∂z) k̂ = 2x î + 2y ĵ + 2z k̂
    ∇φ₂ = (∂φ₂/∂x) î + (∂φ₂/∂y) ĵ + (∂φ₂/∂z) k̂ = −2x î − 2y ĵ + k̂

At the point P(2, −1, 2), the normals to the given surfaces are

    N⃗₁ = (∇φ₁)_P = 4î − 2ĵ + 4k̂   and   N⃗₂ = (∇φ₂)_P = −4î + 2ĵ + k̂

Now, the angle between the two surfaces is given by

    cos θ = (N⃗₁ · N⃗₂) / (|N⃗₁| |N⃗₂|)        (cos θ ≥ 0)
          = (4î − 2ĵ + 4k̂) · (−4î + 2ĵ + k̂) / (√(16 + 4 + 16) √(16 + 4 + 1))
          = (−16 − 4 + 4) / (6√21)
    ∴ cos θ = −8/(3√21)

Since θ is taken as the acute angle, that is cos θ ≥ 0, the required angle between the given surfaces is θ = cos⁻¹(8/(3√21)).
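Note (optional cross-check): the arithmetic above can be verified numerically; a minimal NumPy sketch (names illustrative):

# Acute angle between x^2+y^2+z^2 = 9 and z = x^2+y^2-3 at (2, -1, 2) -- numerical sketch.
import numpy as np

px, py, pz = 2.0, -1.0, 2.0

# Gradients of phi1 = x^2+y^2+z^2-9 and phi2 = z-x^2-y^2+3 at the point.
N1 = np.array([2*px, 2*py, 2*pz])       # (4, -2, 4)
N2 = np.array([-2*px, -2*py, 1.0])      # (-4, 2, 1)

cos_t = abs(N1 @ N2) / (np.linalg.norm(N1) * np.linalg.norm(N2))
print(np.degrees(np.arccos(cos_t)))     # ~54.4 degrees, i.e. arccos(8/(3*sqrt(21)))
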
Illustration 9.10 Find the values of the constants λ and µ so that the surfaces λx² − µyz = (λ + 2)x and 4x²y + z³ = 4 intersect orthogonally at the point (1, −1, 2).

Solution: The given surfaces intersect orthogonally iff the angle between them at the point (1, −1, 2) is θ = 90°. Hence,

    cos θ = 0  ⇔  (N⃗₁ · N⃗₂)/(|N⃗₁| |N⃗₂|) = 0  ⇔  N⃗₁ · N⃗₂ = 0                                     (9.11)

Let φ₁ = λx² − µyz − (λ + 2)x, φ₂ = 4x²y + z³ − 4, P(1, −1, 2).

    ∇φ₁ = (∂φ₁/∂x) î + (∂φ₁/∂y) ĵ + (∂φ₁/∂z) k̂ = (2λx − λ − 2) î − µz ĵ − µy k̂
    ∇φ₂ = (∂φ₂/∂x) î + (∂φ₂/∂y) ĵ + (∂φ₂/∂z) k̂ = 8xy î + 4x² ĵ + 3z² k̂

At the point P(1, −1, 2), the normals to the given surfaces are

    N⃗₁ = (∇φ₁)_P = (λ − 2) î − 2µ ĵ + µ k̂   and   N⃗₂ = (∇φ₂)_P = −8î + 4ĵ + 12k̂

From (9.11),

    N⃗₁ · N⃗₂ = 0  ⇔  8λ − 4µ = 16                                                                  (9.12)

Also, the point P(1, −1, 2) lies on both surfaces, and hence it satisfies both surface equations. Putting the point in the surface equation λx² − µyz = (λ + 2)x, we get µ = 1. Substituting µ = 1 in (9.12), we get λ = 5/2.
Hence the required values are λ = 5/2, µ = 1.


Exercise 9.1

1. If r = |r⃗|, where r⃗ = x î + y ĵ + z k̂, prove that
   a. ∇(log r²) = (2/r²) r⃗        b. ∇(e^(r²)) = 2e^(r²) r⃗

2. Find the unit normal vector to the surface,
   a. x²y + 2xz = 4 at (2, −2, 3)        b. z = 4(x² + y²) at (1, 0, 2)        [Winter-2015] [Summer-2016]

3. If F⃗ = (y² − z² + 3yz − 2x) î + (3xz + 2xy) ĵ + (3xy − 2xz + 2z) k̂, then show that F⃗ is both solenoidal and irrotational.   [Summer-2016]

4. If F⃗ = (x + y + 1) î + ĵ − (x + y) k̂, show that F⃗ · curl F⃗ = 0.

5. If φ = xyz − 2y²z + x²z², find div(grad φ) at the point (2, 4, 1).   [Summer-2015]
   [Hint: div(grad φ) = ∇ · ∇φ = (∇ · ∇)φ = ∇²φ = (∂²/∂x² + ∂²/∂y² + ∂²/∂z²)φ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²]

6. Show that the vector field V⃗ = (sin y + z) î + (x cos y − z) ĵ + (x − y) k̂ is irrotational. Hence find its scalar potential function.

7. Find the constants a, b, c so that the vector A⃗ = (x + 2y + az) î + (bx − 3y − z) ĵ + (4x + cy + 2z) k̂ is irrotational. If A⃗ = grad φ, show that φ = x²/2 − 3y²/2 + z² + 2xy + 4xz − yz.

8. Find the directional derivative of the following functions:
   a. f = 2xy + z² at the point (1, −1, 3) in the direction of the vector î + 2ĵ + 2k̂.
   b. φ = 4xz³ − 3x²yz² at the point (2, −1, 2) along the Z-axis.

9. Find the directional derivative of the function f = x² − y² + 2z² at the point P(1, 2, 3) in the direction of the line PQ, where Q is the point (5, 0, 4).
   [Hint: PQ⃗ = Q − P = (5, 0, 4) − (1, 2, 3) = (4, −2, 1) = 4î − 2ĵ + k̂]

10. In what direction from (3, 1, −2) is the directional derivative of φ = x²y²z⁴ maximum, and what is its magnitude?

11. What is the greatest rate of increase of v = x² + yz² at the point (1, −1, 3)?

12. If θ is the acute angle between the surfaces xy²z = 3x + z² and 3x² − y² + 2z = 1 at the point (1, −2, 1), show that cos θ = 3/(7√6).

13. Calculate the angle between the normals to the surface xy = z² at the points (4, 1, 2) and (3, 3, −3).

14. Find the angle between the tangent planes to the surfaces x log z = y² − 1 and x²y = 2 − z at the point (1, 1, 1).

Answers
2. a. (1/3)(−î + 2ĵ + 2k̂)   b. (1/√65)(−8î + k̂)   5. 14   6. x sin y + xz − yz   8. a. 14/3   b. 144
9. 28/√21   10. (1/√19)(−î + 3ĵ − 3k̂), 96√19   11. 11   13. cos⁻¹(1/√22)   14. cos⁻¹(1/√30)



Chapter 10
Vector Calculus II: Vector Integration

10.1 Line Integral

• Any integral that is evaluated along a curve is called a line integral.

• Let F⃗ = F₁ î + F₂ ĵ + F₃ k̂ be a vector point function defined along a smooth curve C; then the integral of F⃗ along C is called the line integral and is defined as

    ∫_C F⃗ · dr⃗ = ∫_C (F₁ î + F₂ ĵ + F₃ k̂) · (dx î + dy ĵ + dz k̂)
    ∴ ∫_C F⃗ · dr⃗ = ∫_C (F₁ dx + F₂ dy + F₃ dz)

* Important:

1. If C is a closed curve then the line integral is denoted by the symbol ∮_C F⃗ · dr⃗.

2. Work: If F⃗ denotes the force acting on a particle moving along the curve C, then the work done by the force is given by W = ∫_C F⃗ · dr⃗.

3. Circulation: If V⃗ denotes the velocity of a particle moving around the closed curve C, then the circulation of the particle round the curve C is defined as ω = ∮_C V⃗ · dr⃗.
   Further, if the circulation is zero then the motion is called irrotational.

4. Path independence of the line integral: If F⃗ is an irrotational vector field, that is curl F⃗ = ∇ × F⃗ = 0⃗, then the value of the line integral ∫_C F⃗ · dr⃗ does not depend on the path (the line integral is independent of path); it depends only on the initial and final points.
   Also, since F⃗ is irrotational, there exists a scalar φ such that

       F⃗ = grad φ = ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂

   ∴ F⃗ · dr⃗ = ((∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂) · (dx î + dy ĵ + dz k̂)        [∵ r⃗ = x î + y ĵ + z k̂]
             = (∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz
             = dφ        [∵ total differential of φ]




   Hence the line integral of F⃗ along any path C joining the points A and B is

       ∫_C F⃗ · dr⃗ = ∫_A^B dφ = [φ]_A^B = φ(B) − φ(A)

5. Arc length: The arc length of the curve defined by the position vector r⃗(t) = f(t) î + g(t) ĵ + h(t) k̂ from t = a to t = b is given by

       L = ∫_a^b |r⃗′(t)| dt = ∫_a^b √([f′(t)]² + [g′(t)]² + [h′(t)]²) dt

Illustration 10.1 Evaluate the line integral ∮_C F⃗ · dr⃗, where F⃗ = (x² + xy) î + (x² + y²) ĵ and C is the square formed by the lines x = ±1, y = ±1.

Solution: Since

    ∮_C F⃗ · dr⃗ = ∮_C [(x² + xy) dx + (x² + y²) dy]                                                (10.1)

Here the given closed curve C is the square formed by x = ±1, y = ±1. It is bounded by four sub-curves C₁, C₂, C₃, C₄ as shown in the figure. In order to find the line integral (10.1), we find the line integral along each sub-curve, in the anti-clockwise (counter-clockwise) direction, and then add them.

Along C₁: x = 1, ∴ dx = 0 and y varies from −1 to 1. From (10.1),

    ∫_{C₁} F⃗ · dr⃗ = ∫_{−1}^{1} [(1 + y)(0) + (1 + y²) dy] = ∫_{−1}^{1} (1 + y²) dy
                  = [y + y³/3]_{−1}^{1} = (1 + 1/3) − (−1 − 1/3) = 2 + 2/3
    ∴ ∫_{C₁} F⃗ · dr⃗ = 8/3

Along C₂: y = 1, ∴ dy = 0 and x varies from 1 to −1. From (10.1),

    ∫_{C₂} F⃗ · dr⃗ = ∫_{1}^{−1} [(x² + x) dx + (x² + 1)(0)] = ∫_{1}^{−1} (x² + x) dx
                  = [x³/3 + x²/2]_{1}^{−1} = (−1/3 + 1/2) − (1/3 + 1/2) = −2/3
    ∴ ∫_{C₂} F⃗ · dr⃗ = −2/3

Along C₃: x = −1, ∴ dx = 0 and y varies from 1 to −1. From (10.1),

    ∫_{C₃} F⃗ · dr⃗ = ∫_{1}^{−1} [(1 − y)(0) + (1 + y²) dy] = ∫_{1}^{−1} (1 + y²) dy
                  = [y + y³/3]_{1}^{−1} = (−1 − 1/3) − (1 + 1/3) = −2 − 2/3
    ∴ ∫_{C₃} F⃗ · dr⃗ = −8/3

Along C₄: y = −1, ∴ dy = 0 and x varies from −1 to 1. From (10.1),

    ∫_{C₄} F⃗ · dr⃗ = ∫_{−1}^{1} [(x² − x) dx + (x² + 1)(0)] = ∫_{−1}^{1} (x² − x) dx
                  = [x³/3 − x²/2]_{−1}^{1} = (1/3 − 1/2) − (−1/3 − 1/2) = 2/3
    ∴ ∫_{C₄} F⃗ · dr⃗ = 2/3

Hence the required line integral,

    ∮_C F⃗ · dr⃗ = ∫_{C₁} F⃗ · dr⃗ + ∫_{C₂} F⃗ · dr⃗ + ∫_{C₃} F⃗ · dr⃗ + ∫_{C₄} F⃗ · dr⃗
                = 8/3 − 2/3 − 8/3 + 2/3
    ∴ ∮_C F⃗ · dr⃗ = 0



Illustration 10.2 Find the work done when the force F⃗ = (x² − y² + x) î − (2xy + y) ĵ moves a particle in the xy-plane from (0, 0) to (1, 1) along the parabola y² = x. Is the work done different when the path is the straight line y = x?   [Winter-2016]

Solution: The work done by the force F⃗ along the path C is given by

    W = ∫_C F⃗ · dr⃗ = ∫_C [(x² − y² + x) dx − (2xy + y) dy]                                        (10.2)

Along the parabola C₁: y² = x, ∴ dx = 2y dy and y varies from 0 to 1.
From (10.2),

    W = ∫_0^1 [(y⁴ − y² + y²)(2y dy) − (2y³ + y) dy]
      = ∫_0^1 (2y⁵ − 2y³ − y) dy
      = [2y⁶/6 − 2y⁴/4 − y²/2]_0^1 = 1/3 − 1/2 − 1/2    ∴ W = −2/3

The work done along the parabola y² = x is −2/3.

Along the line C₂: y = x, ∴ dx = dy and y varies from 0 to 1.
From (10.2),

    W = ∫_0^1 [(y² − y² + y) dy − (2y² + y) dy]
      = ∫_0^1 (−2y²) dy = −2[y³/3]_0^1 = −2/3    ∴ W = −2/3

The work done along the line y = x is also −2/3. Hence the work is not different for the given paths.
Note that here the line integral is independent of path. Such a system, in which the work does not depend on the path, is called a conservative system.
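Note (optional cross-check): both path computations can be automated by parametrizing each path; a minimal SymPy sketch (names illustrative) is given below.

# Work done by F = (x^2 - y^2 + x) i - (2xy + y) j along two paths from (0,0) to (1,1).
import sympy as sp

t = sp.symbols('t')

def work(x_t, y_t):
    """Line integral of F . dr along the parametrized path (x(t), y(t)), t in [0, 1]."""
    Fx = x_t**2 - y_t**2 + x_t
    Fy = -(2*x_t*y_t + y_t)
    return sp.integrate(Fx*sp.diff(x_t, t) + Fy*sp.diff(y_t, t), (t, 0, 1))

print(work(t**2, t))     # along the parabola y^2 = x  -> -2/3
print(work(t, t))        # along the line y = x        -> -2/3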


Illustration 10.3 If F⃗ = (2xy + z³) î + x² ĵ + 3xz² k̂, show that ∫_C F⃗ · dr⃗ is independent of the path of integration. Hence find the integral when C is any path joining (1, −2, 1) and (3, 1, 4).   [Summer-2016]

Solution: We know that the line integral of F⃗ is independent of path if the vector field is irrotational, that is curl F⃗ = 0⃗.
Given F⃗ = (2xy + z³) î + x² ĵ + 3xz² k̂,

    curl F⃗ = ∇ × F⃗ = | î          ĵ      k̂    |
                      | ∂/∂x       ∂/∂y   ∂/∂z |
                      | 2xy + z³   x²     3xz² |

            = î [∂/∂y(3xz²) − ∂/∂z(x²)] − ĵ [∂/∂x(3xz²) − ∂/∂z(2xy + z³)] + k̂ [∂/∂x(x²) − ∂/∂y(2xy + z³)]
            = î [0 − 0] − ĵ [3z² − 3z²] + k̂ [2x − 2x]    ∴ curl F⃗ = 0⃗

Now, in order to find the line integral, let

    F⃗ = grad φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂   ⇒   ∂φ/∂x = 2xy + z³,  ∂φ/∂y = x²,  ∂φ/∂z = 3xz²

Integrating these equations partially with respect to x, y, z respectively, keeping the other variables constant, we get the following equations: [See Illustration 9.7]

    φ = x²y + xz³ + c₁(y, z),    φ = x²y + c₂(x, z),    φ = xz³ + c₃(x, y)    ⇒    φ = x²y + xz³

Hence the required path-independent line integral joining the points A(1, −2, 1) and B(3, 1, 4) is given by

    ∫_C F⃗ · dr⃗ = ∫_A^B dφ = [φ]_A^B = φ(B) − φ(A)
                = [x²y + xz³]_(3,1,4) − [x²y + xz³]_(1,−2,1)
                = [201] − [−1] = 202

    ∴ ∫_C F⃗ · dr⃗ = 202                                                                            Ans.

Illustration 10.4 Find the arc length of the portion of the circular helix r⃗(t) = cos t î + sin t ĵ + t k̂ from t = 0 to t = π.   [Summer-2015]

Solution: Given r⃗(t) = cos t î + sin t ĵ + t k̂  ⇒  r⃗′(t) = dr⃗/dt = −sin t î + cos t ĵ + k̂,  0 ≤ t ≤ π.
By the definition of arc length,

    L = ∫_0^π |r⃗′(t)| dt = ∫_0^π √((−sin t)² + (cos t)² + 1) dt
      = ∫_0^π √(sin²t + cos²t + 1) dt = ∫_0^π √2 dt
      = [√2 t]_0^π = √2 π    ∴ L = √2 π                                                            Ans.
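Note (optional cross-check): the arc length can be verified directly from the definition; a minimal SymPy sketch:

# Arc length of r(t) = (cos t, sin t, t) for 0 <= t <= pi -- SymPy sketch.
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([sp.cos(t), sp.sin(t), t])

speed = sp.sqrt(sum(c**2 for c in r.diff(t)))    # |r'(t)|
L = sp.integrate(sp.simplify(speed), (t, 0, sp.pi))
print(L)                                         # -> sqrt(2)*pi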


10.2 Surface Integral (Normal Surface Integral)

• Any integral that is evaluated over a surface is called a surface integral.

• Let F⃗ = F₁ î + F₂ ĵ + F₃ k̂ be a vector point function defined over a smooth surface S and let R be the orthogonal projection of S on the xy-plane. If n̂ denotes the unit outward normal vector to the surface S, then the surface integral of F⃗ over S is defined as

    ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dy / |n̂ · k̂|)

Remark:

1. The surface integral is also denoted by the symbol ∬_S F⃗ · ds⃗ OR ∬_S F⃗ · n̂ ds.

2. Instead of the xy-plane, if we take the orthogonal projection of the surface S on the yz-plane or the zx-plane, then the surface integrals are ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dy dz / |n̂ · î|) or ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dz / |n̂ · ĵ|) respectively.

3. Flux: If F⃗ denotes the velocity of a fluid, then ∬_S F⃗ · n̂ ds gives the amount of fluid emerging from the surface area S per unit time, that is, it gives the flux.
   Further, if ∬_S F⃗ · n̂ ds = 0, then F⃗ is called solenoidal.
Illustration 10.5 Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = 18z î − 12 ĵ + 3y k̂ and S is the surface of the plane 2x + 3y + 6z = 12 in the positive octant.

Solution: The given surface S: 2x + 3y + 6z = 12, that is x/6 + y/4 + z/2 = 1, is a plane in the first octant with intercepts (6, 0, 0), (0, 4, 0) and (0, 0, 2) on the x, y and z axes respectively. Let R be its orthogonal projection on the xy-plane, as shown in the figure. By the definition of the surface integral,

    ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dy / |n̂ · k̂|)                                                   (10.3)

where n̂ is the unit outward normal vector to the given surface S: 2x + 3y + 6z = 12, and is given by [see Section 9.5]

    n̂ = ∇S/|∇S| = (2î + 3ĵ + 6k̂)/√(2² + 3² + 6²) = (1/7)(2î + 3ĵ + 6k̂)

    ∴ |n̂ · k̂| = 6/7   and   F⃗ · n̂ = (1/7)(2î + 3ĵ + 6k̂) · (18z î − 12 ĵ + 3y k̂) = (1/7)(36z − 36 + 18y)

Since R is the projection on the xy-plane, that is, the right-hand side of (10.3) is a double integral in x, y, we convert F⃗ · n̂ in terms of x, y by putting the value of z from the equation of the surface 2x + 3y + 6z = 12. That is, put z = (1/6)(12 − 2x − 3y).

    ∴ F⃗ · n̂ = (1/7)[6(12 − 2x − 3y) − 36 + 18y] = (6/7)(6 − 2x)

Substituting the values in (10.3), we get

    ∬_S F⃗ · n̂ ds = ∬_R (6/7)(6 − 2x) (dx dy/(6/7)) = ∬_R (6 − 2x) dx dy

where the projection R is a triangle in the xy-plane as shown in the figure. For the limits of the double integral, according to a Y-strip (parallel to the Y-axis), 0 ≤ y ≤ (1/3)(12 − 2x), 0 ≤ x ≤ 6. Hence the required surface integral,

    ∬_S F⃗ · n̂ ds = ∫_0^6 ∫_0^{(12−2x)/3} (6 − 2x) dy dx
                  = ∫_0^6 (6 − 2x) [y]_0^{(12−2x)/3} dx        [∵ integrating w.r.t. y keeping x constant]
                  = ∫_0^6 (6 − 2x) (1/3)(12 − 2x) dx
                  = (4/3) ∫_0^6 (3 − x)(6 − x) dx = (4/3) ∫_0^6 (18 − 9x + x²) dx
                  = (4/3) [18x − 9x²/2 + x³/3]_0^6 = (4/3)[18]
    ∴ ∬_S F⃗ · n̂ ds = 24                                                                           Ans.
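Note (optional cross-check): once the integrand and limits on the projection R are written down as above, the double integral is easy to verify; a minimal SymPy sketch (assumes the same reduction F⃗ · n̂ / |n̂ · k̂| = 6 − 2x derived above):

# Flux of F = 18z i - 12 j + 3y k through 2x + 3y + 6z = 12 (first octant) -- SymPy sketch.
import sympy as sp

x, y = sp.symbols('x y')

integrand = 6 - 2*x                      # F.n / |n.k| on the projected region R
# R: 0 <= y <= (12 - 2x)/3, 0 <= x <= 6
flux = sp.integrate(integrand, (y, 0, (12 - 2*x)/3), (x, 0, 6))
print(flux)                              # -> 24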

Illustration 10.6 Show that ∬_S F⃗ · n̂ ds = 3/2, where F⃗ = 4xz î − y² ĵ + yz k̂ and S is the surface of the cube bounded by the planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.

Solution: Given S is the closed surface of the unit cube. It is bounded by six sub-surfaces (squares) as shown in the figure:

    S₁: x = 0 (OCDE square in the yz-plane)
    S₂: x = 1 (ABGF square parallel to the yz-plane)
    S₃: y = 0 (OEFA square in the xz-plane)
    S₄: y = 1 (CDGB square parallel to the xz-plane)
    S₅: z = 0 (OABC square in the xy-plane)
    S₆: z = 1 (EFGD square parallel to the xy-plane)

In order to find the surface integral of F⃗ = 4xz î − y² ĵ + yz k̂ we find the surface integral over each sub-surface and then add them.

Over S₁: x = 0, and since S₁ lies in the yz-plane, the unit outward normal vector (pointing out of the cube, i.e. along the negative X-axis) is n̂ = −î.
    ∴ F⃗ · n̂ = (4xz î − y² ĵ + yz k̂) · (−î) = −4xz = 0   [∵ x = 0]   ⇒   ∬_{S₁} F⃗ · n̂ ds = 0

Over S₂: x = 1, and since S₂ is parallel to the yz-plane, the unit outward normal vector (along the positive X-axis) is n̂ = î.
    ∴ F⃗ · n̂ = (4xz î − y² ĵ + yz k̂) · î = 4xz = 4z   [∵ x = 1]   ⇒   ∬_{S₂} F⃗ · n̂ ds = ∬_{S₂} 4z ds

Since S₂ is parallel to the yz-plane, its orthogonal projection is on the yz-plane and it is the square OCDE (0 ≤ y ≤ 1, 0 ≤ z ≤ 1). Using the surface-integral formula for the yz-plane,

    ∬_{S₂} F⃗ · n̂ ds = ∬_{S₂} 4z ds = 4 ∫_0^1 ∫_0^1 z (dy dz / |n̂ · î|)
                     = 4 ∫_0^1 ∫_0^1 z dy dz        [∵ n̂ · î = 1]
                     = 4 [y]_0^1 [z²/2]_0^1 = 4(1)(1/2) = 2    ∴ ∬_{S₂} F⃗ · n̂ ds = 2

Similarly, we have the other integrals as follows:

Over S₃: y = 0, n̂ = −ĵ  ⇒  F⃗ · n̂ = y² = 0
    ∴ ∬_{S₃} F⃗ · n̂ ds = ∬_{S₃} (0) ds = 0

Over S₄: y = 1, n̂ = ĵ  ⇒  F⃗ · n̂ = −y² = −1
    ∴ ∬_{S₄} F⃗ · n̂ ds = ∬_{S₄} (−1) ds = −∫_0^1 ∫_0^1 dx dz = −1

Over S₅: z = 0, n̂ = −k̂  ⇒  F⃗ · n̂ = −yz = 0
    ∴ ∬_{S₅} F⃗ · n̂ ds = ∬_{S₅} (0) ds = 0

Over S₆: z = 1, n̂ = k̂  ⇒  F⃗ · n̂ = yz = y
    ∴ ∬_{S₆} F⃗ · n̂ ds = ∬_{S₆} y ds = ∫_0^1 ∫_0^1 y dx dy = 1/2

Hence,

    ∬_S F⃗ · n̂ ds = ∬_{S₁} + ∬_{S₂} + ∬_{S₃} + ∬_{S₄} + ∬_{S₅} + ∬_{S₆}
                  = 0 + 2 + 0 − 1 + 0 + 1/2
    ∴ ∬_S F⃗ · n̂ ds = 3/2                                                                          Proved.

10.3 Volume Integral

• Any integral that is evaluated through the volume of a solid is called a volume integral.

• Let F⃗ = F₁ î + F₂ ĵ + F₃ k̂ be a vector point function and φ(x, y, z) a scalar point function defined through the volume V of the solid; then the volume integrals are defined as

    ∭_V F⃗ dv   OR   ∭_V φ dv,    where dv = dx dy dz

Exercise 10.1

1. If F⃗ = 3xy î − y² ĵ, evaluate ∫_C F⃗ · dr⃗, where C is the curve y = 2x² from (0, 0) to (1, 2).

2. Evaluate ∮_C F⃗ · dr⃗, where F⃗ = eˣ sin y î + eˣ cos y ĵ and C is the rectangle whose vertices are (0, 0), (1, 0), (1, π/2) and (0, π/2).

3. Find the circulation of F⃗ round the curve C, that is ∮_C F⃗ · dr⃗, where F⃗ = y î + z ĵ + x k̂ and C is the circle x² + y² = 1, z = 0.
   [Hint: Let x = cos θ, y = sin θ, z = 0, 0 ≤ θ ≤ 2π  ⇒  dx = −sin θ dθ, dy = cos θ dθ, dz = 0]

4. Find the work done in moving a particle by the force field F⃗ = 3x² î + (2xz − y) ĵ + z k̂ along the straight line from (0, 0, 0) to (2, 1, 3).

5. Determine the length of the curve r⃗(t) = 2t î + 3 sin(2t) ĵ + 3 cos(2t) k̂ on the interval 0 ≤ t ≤ 2π.

6. If F⃗ = 2xyz³ î + x²z³ ĵ + 3x²yz² k̂, show that ∫_C F⃗ · dr⃗ is independent of the path of integration. Hence find the integral when C is any path joining (0, 0, 0) and (1, 2, 3).
   Further show that for any simple closed curve C, ∮_C F⃗ · dr⃗ = 0.

7. Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = x î + xz ĵ + 4xy k̂ and S is the triangular surface with vertices (2, 0, 0), (0, 2, 0) and (0, 0, 4).
   [Hint: S: x/2 + y/2 + z/4 = 1, i.e. S: 2x + 2y + z = 4. See Illustration 10.5]

8. Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = yz î + zx ĵ + xy k̂ and S is that part of the surface x² + y² + z² = 1 which lies in the positive octant.

9. Show that ∮_C r⃗ · dr⃗ = 0 independently of the origin of r⃗.

10. Evaluate ∫_C (y dx + x dy + z dz), where C is given by x = cos t, y = sin t, z = t², 0 ≤ t ≤ 2π.   [Winter-2015]

Answers
1. −7/6   2. 0   3. −π   4. 16   5. 4π√10   6. 54   7. 8   8. 3/8


10.4 Integral Theorems

Theorem 10.1 (Green's Theorem¹: Relation between Line Integral & Double Integral)
Let M(x, y) and N(x, y) be functions of two variables having continuous first-order partial derivatives in the region R of the xy-plane bounded by the closed curve C. Then

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy

where C is traversed in the anti-clockwise direction.

¹George Green; English, 1793-1841.

Illustration 10.7 Verify Green's theorem in the plane for ∮_C [(3x − 8y²) dx + (4y − 6xy) dy], where C is the boundary of the triangle with vertices (0, 0), (1, 0) and (0, 1).   [Summer-2017]

Solution: By Green's theorem,

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy

Here M = 3x − 8y², N = 4y − 6xy and C is the triangle with vertices (0, 0), (1, 0), (0, 1), that is, the triangle bounded by x = 0, y = 0, x + y = 1 as shown in the figure.

To find the line integral:

    ∮_C (M dx + N dy) = ∮_C [(3x − 8y²) dx + (4y − 6xy) dy]                                        (10.4)

Along C₁: y = 0 ⇒ dy = 0, x: 0 → 1
    From (10.4),  I₁ = ∫_0^1 [(3x − 0) dx + 0] = [3x²/2]_0^1 = 3/2

Along C₂: x + y = 1, ∴ y = 1 − x ⇒ dy = −dx; x: 1 → 0
    From (10.4),  I₂ = ∫_1^0 [{3x − 8(1 − x)²} dx + {4(1 − x) − 6x(1 − x)}(−dx)]
                     = −∫_0^1 [{3x − 8(1 − 2x + x²)} − {4(1 − x) − 6x(1 − x)}] dx
                     = −∫_0^1 [3x − 8 + 16x − 8x² − 4 + 4x + 6x − 6x²] dx
                     = −∫_0^1 (−14x² + 29x − 12) dx = −[−14x³/3 + 29x²/2 − 12x]_0^1
                     = −[−14/3 + 29/2 − 12]
    ∴ I₂ = 13/6

Along C₃: x = 0 ⇒ dx = 0; y: 1 → 0
    From (10.4),  I₃ = ∫_1^0 [0 + (4y − 0) dy] = ∫_1^0 4y dy = [2y²]_1^0 = −2

Hence,

    ∮_C (M dx + N dy) = I₁ + I₂ + I₃ = 3/2 + 13/6 − 2 = 5/3                                        (10.5)

To find the double integral: Here M = 3x − 8y², N = 4y − 6xy  ⇒  ∂M/∂y = −16y, ∂N/∂x = −6y

    ∴ ∂N/∂x − ∂M/∂y = −6y + 16y = 10y

    ∬_R (∂N/∂x − ∂M/∂y) dx dy = ∬_R 10y dx dy = 10 ∬_R y dx dy

where R is the triangular region shown in the figure. According to a Y-strip (parallel to the Y-axis), the limits of the double integral are 0 ≤ y ≤ 1 − x, 0 ≤ x ≤ 1.

    ∬_R (∂N/∂x − ∂M/∂y) dx dy = 10 ∫_0^1 ∫_0^{1−x} y dy dx
                               = 10 ∫_0^1 [y²/2]_0^{1−x} dx = 10 ∫_0^1 (1 − x)²/2 dx
                               = 5 ∫_0^1 (1 − x)² dx = 5 [(1 − x)³/(3(−1))]_0^1 = −(5/3)[0 − 1]
    ∴ ∬_R (∂N/∂x − ∂M/∂y) dx dy = 5/3                                                              (10.6)

Hence from (10.5) and (10.6), Green's theorem is verified.
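Note (optional cross-check): both sides of Green's theorem for this illustration can be evaluated mechanically; a SymPy sketch (the edge parametrizations and helper name are illustrative) reproduces 5/3 on each side.

# Green's theorem check for M = 3x - 8y^2, N = 4y - 6xy over the triangle (0,0),(1,0),(0,1).
import sympy as sp

x, y, t = sp.symbols('x y t')
M = 3*x - 8*y**2
N = 4*y - 6*x*y

# Right-hand side: double integral of (N_x - M_y) over 0 <= y <= 1 - x, 0 <= x <= 1.
rhs = sp.integrate(sp.diff(N, x) - sp.diff(M, y), (y, 0, 1 - x), (x, 0, 1))

# Left-hand side: line integral over the three edges, traversed anti-clockwise.
def edge(xt, yt):
    return sp.integrate((M*sp.diff(xt, t) + N*sp.diff(yt, t)).subs({x: xt, y: yt}), (t, 0, 1))

lhs = edge(t, 0*t) + edge(1 - t, t) + edge(0*t, 1 - t)
print(lhs, rhs)      # -> 5/3 5/3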


Illustration 10.8 Apply Green's theorem to evaluate ∮_C [(y − sin x) dx + cos x dy], where C is the plane triangle enclosed by the lines y = 0, x = π/2 and y = (2/π)x.

Solution: By Green's theorem,

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy                                                  (10.7)

Here M = y − sin x, N = cos x and C is the boundary of the triangular region R enclosed by the lines y = 0, x = π/2 and y = (2/π)x, as shown in the figure.

To evaluate the line integral using Green's theorem, we find the double integral. According to a Y-strip, the limits of the double integral are 0 ≤ y ≤ 2x/π, 0 ≤ x ≤ π/2.
From (10.7), the required line integral is

    ∮_C (M dx + N dy) = ∬_R (−sin x − 1) dx dy
                      = −∫_0^{π/2} ∫_0^{2x/π} (sin x + 1) dy dx
                      = −∫_0^{π/2} (sin x + 1) [y]_0^{2x/π} dx = −∫_0^{π/2} (sin x + 1) (2x/π) dx
                      = −(2/π) ∫_0^{π/2} (x sin x + x) dx
                      = −(2/π) [(x)(−cos x) − (1)(−sin x) + x²/2]_0^{π/2}        [∵ integrating by parts]
                      = −(2/π) [(−(π/2) cos(π/2) + sin(π/2) + (1/2)(π/2)²) − 0]
                      = −(2/π) [0 + 1 + π²/8]

    ∴ ∮_C [(y − sin x) dx + cos x dy] = −(2/π)(1 + π²/8)                                           Ans.

Illustration 10.9 Apply Green's theorem to prove that the area enclosed by a plane curve C is (1/2)∮_C (x dy − y dx). Hence find the area of an ellipse whose semi-major and semi-minor axes are of lengths a and b.

Solution: Let R be the region enclosed by the simple closed curve C; then by Green's theorem

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy

From the given line integral, M = −y, N = x  ⇒  ∂N/∂x − ∂M/∂y = 1 − (−1) = 2

    ∮_C (x dy − y dx) = 2 ∬_R dx dy = 2 (area enclosed by the closed curve C)
    ∴ Area enclosed by the closed curve C = (1/2) ∮_C (x dy − y dx)

Now the equation of the given ellipse is C: x²/a² + y²/b² = 1. To find the line integral, substitute the parametric equations of the ellipse in the above formula, that is

    x = a cos t,  y = b sin t,  0 ≤ t ≤ 2π   ⇒   dx = −a sin t dt,  dy = b cos t dt

    A = (1/2) ∮_C (x dy − y dx)
      = (1/2) ∫_0^{2π} [(a cos t)(b cos t dt) − (b sin t)(−a sin t dt)]
      = (1/2) ∫_0^{2π} (ab cos²t + ab sin²t) dt = (ab/2) ∫_0^{2π} (cos²t + sin²t) dt
      = (ab/2) ∫_0^{2π} dt = (ab/2)[t]_0^{2π} = (ab/2)(2π)
    ∴ A = πab                                                                                      Ans.

Note: To find the area of a circle of radius a, take a = b in the above illustration. (Verify!)
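Note (optional cross-check): the area formula A = (1/2)∮(x dy − y dx) is easy to test on the ellipse parametrization; a minimal SymPy sketch:

# Area of the ellipse x = a cos t, y = b sin t via (1/2) * closed line integral of (x dy - y dx).
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
x = a*sp.cos(t)
y = b*sp.sin(t)

A = sp.Rational(1, 2) * sp.integrate(x*sp.diff(y, t) - y*sp.diff(x, t), (t, 0, 2*sp.pi))
print(sp.simplify(A))        # -> pi*a*b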

Theorem 10.2 (Stokes' Theorem²: Relation between Line Integral & Surface Integral)
Let F⃗ be a differentiable vector point function defined over an open surface S bounded by the closed curve C. If n̂ denotes the unit outward normal vector to the surface S, then

    ∮_C F⃗ · dr⃗ = ∬_S curl F⃗ · n̂ ds = ∬_S (∇ × F⃗) · n̂ ds

where C is traversed in the anti-clockwise direction.

²Sir George Stokes; Irish, 1819-1903.

Illustration 10.10 Verify Stokes' theorem for the vector field F⃗ = (y − z + 2) î + (yz + 4) ĵ − xz k̂ over the box bounded by the planes x = 0, y = 0, z = 0, x = 2, y = 2, z = 2 above the xy-plane.   [Summer-2015]

Solution: Stokes' theorem:  ∮_C F⃗ · dr⃗ = ∬_S curl F⃗ · n̂ ds
where S is the surface of the box bounded by the planes x = 0, y = 0, z = 0, x = 2, y = 2, z = 2 above the xy-plane (that is, open at the bottom) and C is the boundary of the square in the xy-plane (z = 0 plane) as shown in the figure.

To find the surface integral:
Consider five sub-surfaces,

    S₁: x = 0 (OCDE square in the yz-plane)
    S₂: x = 2 (ABGF square parallel to the yz-plane)
    S₃: y = 0 (OEFA square in the xz-plane)
    S₄: y = 2 (CDGB square parallel to the xz-plane)
    S₅: z = 2 (EFGD square parallel to the xy-plane)

Here F⃗ = (y − z + 2) î + (yz + 4) ĵ − xz k̂

    ∴ curl F⃗ = ∇ × F⃗ = | î           ĵ        k̂    |
                        | ∂/∂x        ∂/∂y     ∂/∂z |
                        | y − z + 2   yz + 4   −xz  |

              = î [∂/∂y(−xz) − ∂/∂z(yz + 4)] − ĵ [∂/∂x(−xz) − ∂/∂z(y − z + 2)] + k̂ [∂/∂x(yz + 4) − ∂/∂y(y − z + 2)]
              = î [0 − y] − ĵ [−z − (−1)] + k̂ [0 − 1]

    ∴ curl F⃗ = −y î + (z − 1) ĵ − k̂

Over S₁: x = 0, n̂ = −î  ⇒  curl F⃗ · n̂ = y
    ∴ ∬_{S₁} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 y dy dz = 4

Over S₂: x = 2, n̂ = î  ⇒  curl F⃗ · n̂ = −y
    ∴ ∬_{S₂} curl F⃗ · n̂ ds = −∫_0^2 ∫_0^2 y dy dz = −4

Over S₃: y = 0, n̂ = −ĵ  ⇒  curl F⃗ · n̂ = −(z − 1)
    ∴ ∬_{S₃} curl F⃗ · n̂ ds = −∫_0^2 ∫_0^2 (z − 1) dx dz = 0

Over S₄: y = 2, n̂ = ĵ  ⇒  curl F⃗ · n̂ = (z − 1)
    ∴ ∬_{S₄} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 (z − 1) dx dz = 0

Over S₅: z = 2, n̂ = k̂  ⇒  curl F⃗ · n̂ = −1
    ∴ ∬_{S₅} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 (−1) dx dy = −4

Hence,

    ∬_S curl F⃗ · n̂ ds = ∬_{S₁} + ∬_{S₂} + ∬_{S₃} + ∬_{S₄} + ∬_{S₅} = 4 − 4 + 0 + 0 − 4
    ∴ ∬_S curl F⃗ · n̂ ds = −4                                                                      (10.8)

To find the line integral:

    ∮_C F⃗ · dr⃗ = ∮_C [(y − z + 2) dx + (yz + 4) dy − xz dz] = ∮_C [(y + 2) dx + 4 dy]     [∵ z = 0]   (10.9)

Along C₁: y = 0 ⇒ dy = 0, x: 0 → 2
    ∫_{C₁} F⃗ · dr⃗ = ∫_0^2 2 dx = 4

Along C₂: x = 2 ⇒ dx = 0, y: 0 → 2
    ∫_{C₂} F⃗ · dr⃗ = ∫_0^2 4 dy = 8

Along C₃: y = 2 ⇒ dy = 0, x: 2 → 0
    ∫_{C₃} F⃗ · dr⃗ = ∫_2^0 4 dx = −8

Along C₄: x = 0 ⇒ dx = 0, y: 2 → 0
    ∫_{C₄} F⃗ · dr⃗ = ∫_2^0 4 dy = −8

Hence from (10.9),

    ∮_C F⃗ · dr⃗ = ∫_{C₁} + ∫_{C₂} + ∫_{C₃} + ∫_{C₄} = 4 + 8 − 8 − 8
    ∴ ∮_C F⃗ · dr⃗ = −4                                                                             (10.10)

Thus from (10.8) and (10.10), Stokes' theorem is verified.

Illustration 10.11 Verify Stokes' theorem for the vector field F⃗ = (2x − y) î − yz² ĵ − y²z k̂ over the upper half surface of the sphere x² + y² + z² = 1, where C is its boundary.

Solution: Stokes' theorem:  ∮_C F⃗ · dr⃗ = ∬_S curl F⃗ · n̂ ds
where S is the surface of the unit sphere x² + y² + z² = 1 above the xy-plane (that is, open at the bottom) and C is the boundary circle x² + y² = 1 in the xy-plane (z = 0 plane) as shown in the figure.

To find the surface integral:

Here F⃗ = (2x − y) î − yz² ĵ − y²z k̂

    ∴ curl F⃗ = ∇ × F⃗ = | î        ĵ       k̂    |
                        | ∂/∂x     ∂/∂y    ∂/∂z |
                        | 2x − y   −yz²    −y²z |

              = (−2yz + 2yz) î − (0 − 0) ĵ + (0 + 1) k̂ = k̂

    ∴ curl F⃗ = k̂

Also  n̂ = ∇S/|∇S| = (2x î + 2y ĵ + 2z k̂)/√(4x² + 4y² + 4z²)        [∵ S: x² + y² + z² = 1]
        = (2x î + 2y ĵ + 2z k̂)/(2√(x² + y² + z²)) = (2x î + 2y ĵ + 2z k̂)/2

    ∴ n̂ = x î + y ĵ + z k̂
    ⇒ curl F⃗ · n̂ = k̂ · (x î + y ĵ + z k̂) = z,    |n̂ · k̂| = |z| = z

    ∬_S curl F⃗ · n̂ ds = ∬_R curl F⃗ · n̂ (dx dy / |n̂ · k̂|)
        where R is the region bounded by the circle x² + y² = 1 in the xy-plane
                       = ∬_R z (dx dy/z) = ∬_R dx dy
                       = area enclosed by R: x² + y² = 1        [∵ formula of area]
                       = π(radius)² = π(1)² = π
    ∴ ∬_S curl F⃗ · n̂ ds = π                                                                       (10.11)

To find the line integral:

    ∮_C F⃗ · dr⃗ = ∮_C [(2x − y) dx − yz² dy − y²z dz] = ∮_C (2x − y) dx        [∵ z = 0]

Since C is the circle x² + y² = 1, use the parametric substitution

    x = cos t,  y = sin t,  0 ≤ t ≤ 2π   ⇒   dx = −sin t dt,  dy = cos t dt

We get,

    ∮_C F⃗ · dr⃗ = ∫_0^{2π} (2 cos t − sin t)(−sin t dt)
                = ∫_0^{2π} (−2 sin t cos t + sin²t) dt = ∫_0^{2π} (−sin 2t + (1 − cos 2t)/2) dt
                = [cos 2t/2 + (1/2)(t − sin 2t/2)]_0^{2π}
                = [cos 4π/2 + (1/2)(2π) − (sin 4π)/4] − [1/2 + 0 − 0]
                = [1/2 + π] − [1/2] = π
    ∴ ∮_C F⃗ · dr⃗ = π                                                                              (10.12)

Thus from (10.11) and (10.12), Stokes' theorem is verified.
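Note (optional cross-check): for this illustration both sides of Stokes' theorem reduce to elementary integrals; a minimal SymPy sketch (it uses the reductions derived above: the boundary line integral, and curl F⃗ = k̂ so the surface side equals the area of the projected unit disc):

# Stokes check for F = (2x - y) i - y z^2 j - y^2 z k on the upper unit hemisphere.
import sympy as sp

t, x, y = sp.symbols('t x y')

# Line integral around C: x = cos t, y = sin t, z = 0 (only (2x - y) dx survives).
lhs = sp.integrate((2*sp.cos(t) - sp.sin(t)) * (-sp.sin(t)), (t, 0, 2*sp.pi))

# Surface integral: curl F = k, so it equals the area of the disc x^2 + y^2 <= 1.
rhs = sp.integrate(1, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2)), (x, -1, 1))
print(lhs, rhs)      # -> pi pi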

Theorem 10.3 (Gauss Divergence Theorem³: Relation between Surface Integral & Volume Integral)
Let F⃗ be a differentiable vector point function defined through the volume V enclosed by the closed surface S. If n̂ denotes the unit outward normal vector to the surface S, then

    ∬_S F⃗ · n̂ ds = ∭_V div F⃗ dv = ∭_V ∇ · F⃗ dv

³Johann Carl Friedrich Gauss; German, 1777-1855.

Illustration 10.12 Verify the divergence theorem for F⃗ = 4xz î − y² ĵ + yz k̂ taken over the cube bounded by the planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.

Solution: Divergence theorem:  ∬_S F⃗ · n̂ ds = ∭_V div F⃗ dv = ∭_V (∇ · F⃗) dx dy dz

To find the surface integral:
Here F⃗ = 4xz î − y² ĵ + yz k̂ and S is the closed surface of the unit cube. [See Illustration 10.6]

    ∬_S F⃗ · n̂ ds = 3/2                                                                            (10.13)

To find the volume integral:

    F⃗ = 4xz î − y² ĵ + yz k̂
    ∴ div F⃗ = ∇ · F⃗ = ∂/∂x(4xz) + ∂/∂y(−y²) + ∂/∂z(yz) = 4z − 2y + y = 4z − y

Also V is the volume of the unit cube, so the limits of the triple integral are 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1.

    ∭_V div F⃗ dv = ∫_0^1 ∫_0^1 ∫_0^1 (4z − y) dx dy dz
                  = ∫_0^1 ∫_0^1 (4z − y) [x]_0^1 dy dz = ∫_0^1 ∫_0^1 (4z − y) dy dz
                  = ∫_0^1 [4yz − y²/2]_0^1 dz = ∫_0^1 (4z − 1/2) dz
                  = [2z² − z/2]_0^1 = 2 − 1/2
    ∴ ∭_V div F⃗ dv = 3/2                                                                          (10.14)

Hence from (10.13) and (10.14), the Gauss divergence theorem is verified.
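Note (optional cross-check): the volume-integral side of Illustration 10.12 is a one-line symbolic computation; a minimal SymPy sketch, which also recomputes div F⃗:

# Divergence-theorem check for F = 4xz i - y^2 j + yz k over the unit cube (volume side).
import sympy as sp

x, y, z = sp.symbols('x y z')
F = (4*x*z, -y**2, y*z)

div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))      # -> 4z - y
vol_int = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))
print(div_F, vol_int)       # -> -y + 4*z, 3/2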

Illustration 10.13 Use the divergence theorem to evaluate ∬_S F⃗ · ds⃗, where F⃗ = x³ î + x²y ĵ + x²z k̂ and S is the surface bounding the region x² + y² = a², z = 0, z = b.   [Winter-2015]

Solution:
Here F⃗ = x³ î + x²y ĵ + x²z k̂

    ∴ div F⃗ = ∇ · F⃗ = ∂/∂x(x³) + ∂/∂y(x²y) + ∂/∂z(x²z) = 3x² + x² + x²
    ∴ div F⃗ = 5x²

By the divergence theorem,

    ∬_S F⃗ · n̂ ds = ∭_V div F⃗ dv = 5 ∭_V x² dx dy dz                                              (10.15)

Since V is the volume of the cylinder (as shown in the figure), for the triple integral we change the cartesian coordinates (x, y, z) into cylindrical coordinates (r, θ, z), as

    x = r cos θ,  y = r sin θ,  z = z,
    x² + y² = r²,  dx dy dz = r dr dθ dz,
    0 ≤ r ≤ a,  0 ≤ θ ≤ 2π,  0 ≤ z ≤ b        [limits for the whole cylinder]

Substituting in (10.15), we get

    ∬_S F⃗ · n̂ ds = 5 ∫_0^b ∫_0^{2π} ∫_0^a (r cos θ)² r dr dθ dz
                  = 5 (∫_0^b dz)(∫_0^{2π} cos²θ dθ)(∫_0^a r³ dr)        [∵ separating the integrals]
                  = 5 [z]_0^b × [∫_0^{2π} (1 + cos 2θ)/2 dθ] × [r⁴/4]_0^a
                  = (5a⁴b/8) [θ + sin 2θ/2]_0^{2π} = (5a⁴b/8)[(2π + (sin 4π)/2) − 0]
    ∴ ∬_S F⃗ · n̂ ds = 5πa⁴b/4                                                                      Ans.


Illustration 10.14 If S is any closed surface enclosing the volume V and F⃗ = x î + 2y ĵ + 3z k̂, prove that ∬_S F⃗ · n̂ ds = 6V.

Solution: If S is a closed surface enclosing the volume V, then by the divergence theorem we have

    ∬_S F⃗ · n̂ ds = ∭_V div F⃗ dv

Let F⃗ = x î + 2y ĵ + 3z k̂  ⇒  div F⃗ = ∇ · F⃗ = 1 + 2 + 3 = 6

    ∴ ∬_S F⃗ · n̂ ds = ∭_V (6) dv = 6 ∭_V dv = 6 (volume enclosed by S)
    ∴ ∬_S F⃗ · n̂ ds = 6V                                                                           Proved.

Exercise 10.2

1. Verify Green's theorem for the function F⃗ = (x + y) î + 2xy ĵ in the xy-plane for the region bounded by x = 0, y = 0, x = a and y = b.   [Summer-2016]

2. Verify Green's theorem in the plane for ∮_C [(3x² − 8y²) dx + (4y − 6xy) dy], where C is the boundary of the region bounded by y = √x, y = x².

3. Verify Green's theorem in the plane for ∮_C [(xy + y²) dx + x² dy], where C is the closed curve of the region bounded by y = x and y = x².

4. Apply Green's theorem to evaluate ∮_C (y² dx + x² dy), where C is the plane triangle enclosed by the lines x = 0, y = 0 and x + y = 1.   [Winter-2016]

5. Use Green's theorem to evaluate ∮_C F⃗ · dr⃗, where F⃗ = (−y î + x ĵ)/(x² + y²) and C is the circle x² + y² = 1 traversed in the counterclockwise direction.   [Winter-2015]

6. Verify Stokes' theorem for the vector field F⃗ = (x² + y²) î − 2xy ĵ integrated round the rectangle in the z = 0 plane bounded by the lines x = 0, y = 0, x = a and y = b.

7. Verify Stokes' theorem for the vector field F⃗ = (x² − y²) î + 2xy ĵ over the box bounded by the planes x = 0, x = a, y = 0, y = b, z = 0, z = c, if the face z = 0 is cut.

8. Use Stokes' theorem to evaluate ∬_S curl F⃗ · n̂ ds, where F⃗ = z² î − 3xy ĵ + x³y³ k̂ and S is the part of z = 5 − x² − y² above the plane z = 1. Assume that S is oriented upwards.
   [Hint: In this case the boundary curve C will be where the surface intersects the plane z = 1, and so will be the curve 1 = 5 − x² − y² ⇒ x² + y² = 4.]

9. Use Stokes' theorem to evaluate ∮_C F⃗ · dr⃗, where F⃗ = (x + y) î + (2x − z) ĵ − (y + z) k̂ and C is the triangle with vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1) with counter-clockwise rotation.

10. Use the divergence theorem to evaluate ∬_S F⃗ · ds⃗, where F⃗ = y î + x ĵ + z² k̂ and S is the cylindrical region bounded by x² + y² = a², z = 0 and z = h.

11. Use the divergence theorem to evaluate ∬_S F⃗ · ds⃗, where F⃗ = x³ î + y³ ĵ + z³ k̂ and S is the surface of the sphere x² + y² + z² = a².
    [Hint: Use spherical polar coordinates for the triple integral.]

12. For any closed surface S, prove that ∬_S [x(y − z) î + y(z − x) ĵ + z(x − y) k̂] · ds⃗ = 0.
    [Hint: div F⃗ = 0]

13. If F⃗ = ax î + by ĵ + cz k̂, where a, b, c are constants, then show that ∬_S F⃗ · n̂ ds = (4/3)π(a + b + c), where S is the surface of the unit sphere.

Answers
4. 0   5. 2π   8. 0   9. 1/2   10. πa²h²   11. 12πa⁵/5


References:

1. Introduction to Linear Algebra with Application, Jim Defranza, Daniel Gagliardi, Tata McGraw-Hill.

2. Elementary Linear Algebra, Applications version, Anton and Rorres, Wiley India Edition.

3. Advanced Engineering Mathematics, Erwin Kreysig, Wiley Publication.

4. Higher Engineering Mathematics, B. S. Grewal, Khanna Publishers.

5. A Textbook of Engineering Mathematics, N. P. Bali, Laxmi Publications.
