Linear Algebra
and
Vector Calculus
LAVC (GTU Subject Code - 2110015)
B.E. Semester II
Version 1.0
Powered by Prof. (Dr.) Rajesh M. Darji
Dear Readers,
For any query regarding this subject,
feel free to ask or WhatsApp on (+91) 9427 80 9779
Dedicated to
My Beloved Students
Contents
1 Review of Matrices 1
1.1 Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Types of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Row and Column Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.2 Zero or Null Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.4 Transpose of Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.5 Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.6 Skew Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.7 Diagonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.8 Scalar Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.9 Unit Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.10 Upper Triangle Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.11 Lower Triangle Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Determinant of Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Minor of an Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Cofactor of an Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Adjoint of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Singular and Non-singular Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8 Operations of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.1 Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.2 Multiplication of Matrix by a Scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.3 Addition and Substation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8.4 Multiplication of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.9 Inverse of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.10 Elementary Transformations on Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.11 Equivalent Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.12 Gauss-Jordan Method to find Inverse Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.13 Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.14 Rank by Row Echelon Method: (Elementary Transformation Method) . . . . . . . . . . . . . . . 9
1.14.1 Row-Echelon or Canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.15 Reduced Row Echelon Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Prof. (Dr.) Rajesh M. Darji (Tetra Gold Medalist in Mathematics) ii
3 Notions of Vectors in Rn 20
3.1 Euclidean Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Linear Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Linearly Independent Vectors (LI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Linearly Dependent Vectors (LD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.5 Euclidean Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Normalized Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Euclidean Distance and Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.8 Cauchy-Schwarz’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.9 Minkowski’s Triangular Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4 Vector Space 26
4.1 Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2 Vector Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Some Standard Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 Linear Combination and Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.6 Linearly Independent Vectors (LI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.7 Linearly Dependent Vectors (LD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.8 Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.9 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.10 Some Standard Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.11 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.12 Ordered Basis and Coordinate Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.13 Translation Matrix (Change of Basis Matrix) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.14 Fundamental Spaces: Row Space, Column Space, Null Space . . . . . . . . . . . . . . . . . . . . 45
4.15 Rank and Nullity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.16 Rank-Nullity Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Linear Transformation (Linear Mapping) 51
5.1 Linear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Particular Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.12 Coordinate Relative to Orthonormal Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.13 Gram-Schmidt Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.14 Least Square Approximate Solution for Linear System . . . . . . . . . . . . . . . . . . . . . . . . 100
8.15 Orthogonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
9 Vector Calculus I: Vector Differentiation 104
9.1 Scalar and Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.2 Algebraic Operations of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.3 Point Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.4 Vector Differential Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.5 Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.6 Divergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.7 Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
9.8 Directional Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.9 Angle between two Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1 Review of Matrices

1.1 Matrix
A matrix is a rectangular arrangement of numbers (called elements or entries) in an array of m rows and n columns, such as

        [ a11  a12  a13  . . .  a1n ]
        [ a21  a22  a23  . . .  a2n ]
    A = [ a31  a32  a33  . . .  a3n ]
        [  ⋮    ⋮    ⋮   . . .   ⋮  ]
        [ am1  am2  am3  . . .  amn ]

It is called an m × n matrix and is generally denoted by A_m×n. Here m × n is known as the order of the matrix. More generally, a matrix can be denoted by A_m×n = [a_ij], where i = 1, 2, 3, . . . , m and j = 1, 2, 3, . . . , n. Here a_ij denotes the element in the i-th row and the j-th column, which may be real or complex.
e. g.

        [ 1   2 ]           [  3    0      4   ]
A_3×2 = [ 2   6 ],  A_3×3 = [ −1   0.8     5   ]
        [ 3  −4 ]           [  2    2    3+7i  ]
Remark:
1. Distinct notations used for enclosing the elements of a matrix are [ ], ( ), { }, ‖ ‖.
2. The elements a11, a22, a33, . . . are said to be the leading diagonal or principal diagonal elements of the matrix.
1.2 Types of Matrices

1.2.1 Row and Column Matrix
A matrix of order 1 × n, having only one row and n columns, is known as a row matrix or row vector. That is,

A_1×n = [ a11  a12  a13  . . .  a1n ]

e. g.

A_1×4 = [ −2  1  0  3 ]

Similarly, a matrix of order m × 1, having m rows and only one column, is known as a column matrix or column vector. That is,

        [ a11 ]
        [ a21 ]
A_m×1 = [ a31 ]
        [  ⋮  ]
        [ am1 ]

e. g.

        [ 0 ]
A_3×1 = [ 1 ]
        [ 2 ]
1.2.2 Zero or Null Matrix
A matrix containing all zero elements is said to be a zero matrix or null matrix and is denoted by Z or O.
e. g.

        [ 0 0 0 0 ]
Z_3×4 = [ 0 0 0 0 ] = O
        [ 0 0 0 0 ]
1.2.3 Square Matrix
A matrix containing the same number of rows and columns, i.e. m = n, is said to be a square matrix. If A is a square matrix of order n then it is also denoted by A_n.
e. g.

        [  0  1 ]                [ 1  2  3 ]
A_2×2 = [ −1  2 ] = A_2;  A_3×3 = [ 0 −5  1 ] = A_3
                                 [ 1  2 −2 ]
1.2.4 Transpose of Matrix
The matrix obtained by interchanging the rows and columns of a given matrix A is called the transpose of A and is denoted by A′ or A^T.
e. g.

                             [ 1  2  3 ]
    [ 1 2 2? ]               [ 2  1  4 ]
A = [ 1 2 0  1 ]             [ 0  1  2 ]
    [ 2 1 1  1 ]  ⇒  A^T =  [ 1  1 −1 ]
    [ 3 4 2 −1 ]
1.2.5 Symmetric Matrix
A square matrix A is said to be symmetric if A^T = A.
e. g.

    [  1  2 −1 ]           [ a h g ]
A = [  2  5  3 ]  and  A = [ h b f ]
    [ −1  3  7 ]           [ g f c ]

Thus, in a symmetric matrix a_ij = a_ji ∀ i, j.
1.2.6 Skew Symmetric Matrix
A square matrix A is said to be skew symmetric if A^T = −A, that is, a_ij = −a_ji ∀ i, j (so all diagonal elements are zero).
e. g.

    [  0  1 −2 ]
A = [ −1  0  3 ]
    [  2 −3  0 ]
1.2.7 Diagonal Matrix
If all non-diagonal elements of a square matrix are zero, then it is called a diagonal matrix.
e. g.

    [ 1 0 0 ]
A = [ 0 2 0 ]
    [ 0 0 3 ]
1.2.8 Scalar Matrix
If a diagonal matrix has all diagonal elements equal, i.e. a11 = a22 = a33 = . . . , then it is called a scalar matrix.
e. g.

    [ 7 0 0 ]
A = [ 0 7 0 ]
    [ 0 0 7 ]
1.2.9 Unit Matrix
A diagonal matrix of order n in which all the diagonal elements are unity (one) is called the unit matrix of order n and is denoted by I_n. A unit matrix is also called an identity matrix.
e. g.

      [ 1 0 ]            [ 1 0 0 ]
I_2 = [ 0 1 ]  and  I_3 = [ 0 1 0 ]
                         [ 0 0 1 ]
1.2.10 Upper Triangle Matrix
It is a square matrix in which all the elements below the principal diagonal are zero.
e. g.

    [ 3 1 −2 ]
A = [ 0 7  4 ]
    [ 0 0  1 ]
1.2.11 Lower Triangle Matrix
It is a square matrix in which all the elements above the principal diagonal are zero.
e. g.

    [ 1 0 0 ]
A = [ 3 3 0 ]
    [ 2 1 5 ]
1.4 Minor of an Element
The minor of an element of |A| is the determinant obtained by omitting the row and the column in which the element is present. In general, the minor of an element a_ij is denoted by M_ij.
e. g. If

      | a1 b1 c1 |
|A| = | a2 b2 c2 |
      | a3 b3 c3 |

then

M11 = | b2 c2 | ,  M21 = | b1 c1 | ,  M22 = | a1 c1 | ,  M33 = | a1 b1 |
      | b3 c3 |          | b3 c3 |          | a3 c3 |          | a3 b3 |
1.6 Adjoint of a Matrix
The adjoint of a square matrix A is the transpose of the matrix formed by the cofactors of the elements of the given matrix. That is, if A = [a_ij] and A_ij denotes the cofactor of a_ij, then adj(A) = [A_ij]^T.
1.7 Singular and Non-singular Matrix
For a square matrix A, if |A| = 0 then A is called singular, and if |A| ≠ 0 then it is called non-singular.
1.8 Operations of Matrices

1.8.1 Equality
Two matrices A and B of the same order are said to be equal if all the elements of A and B in corresponding positions are equal.
e. g.

    [ 1 2 ]        [ 1 2 ]
A = [ 3 4 ] ,  B = [ 3 4 ]  ⇒  A = B
1.8.2 Multiplication of Matrix by a Scalar
For any scalar k, if A = [a_ij] then kA = [k a_ij], 1 ≤ i ≤ m, 1 ≤ j ≤ n.
e. g.

       [ 1  2 3 ]            [ 2  4 6 ]                       [ −1 −2 −3 ]
If A = [ 3 −1 2 ] then 2A = [ 6 −2 4 ] and (−1)A = −A = [ −3  1 −2 ]
1.8.4 Multiplication of Matrices
• In order to find the product of two matrices A and B, take a row from the first matrix A and a column from the second matrix B.
• Find the products of the respective entries of the row and the column, and then add them.
• This gives the entry in the corresponding row and column of the product matrix (AB).
• For example, in the following matrices A and B, if we consider the first row (R1) from matrix A and the second column (C2) from matrix B, then the corresponding entry of the product matrix AB lies in the first row and second column, and is given by (1 × 1) + (2 × 2) + (3 × 4) = 17.

        [ 1 2 3 ]           [  2 1 1 ]                [  9 17 6 ]
A_2×3 = [ 2 3 4 ] , B_3×3 = [ −1 2 1 ]  ⇒  (AB)_2×3 = [ 13 24 9 ]
                            [  3 4 1 ]
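The row-by-column rule above can be checked in plain Python; the following sketch (the function name `mat_mul` is my own, not from the notes) multiplies the two matrices from the example:

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows: (AB)[i][j] = sum_k A[i][k]*B[k][j]."""
    n = len(B)        # rows of B must equal columns of A
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2, 3], [2, 3, 4]]                 # 2x3, from the example above
B = [[2, 1, 1], [-1, 2, 1], [3, 4, 1]]     # 3x3
print(mat_mul(A, B))                       # [[9, 17, 6], [13, 24, 9]]
```

Note that `mat_mul(B, A)` would fail here, since B is 3 × 3 and A is 2 × 3 — the product BA is not even defined, which is the extreme case of the remark AB ≠ BA below.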
Remark:
1. In general, AB ≠ BA.
6. k (AB ) = (k A) B = A (kB )
7. If A is a square matrix then A² = AA, A³ = A²A. In general, A^(m+n) = A^m A^n, A⁰ = I, and (A^m)^n = A^(mn).
1.9 Inverse of a Matrix
For a non-singular square matrix A, if there exists another non-singular matrix B such that AB = BA = I, then matrix A is called invertible and matrix B is called the inverse of A. It is denoted by B = A⁻¹ and is given by
A⁻¹ = (1 / |A|) adj(A)

Thus a square matrix A is invertible if |A| ≠ 0, that is, A is non-singular. In this case A A⁻¹ = A⁻¹ A = I.
Remark:
1. (AB)⁻¹ = B⁻¹ A⁻¹
2. (A′)⁻¹ = (A⁻¹)′, (A⁻¹)⁻¹ = A, I_n⁻¹ = I_n
3. Multiplication of each element of the i-th row by a nonzero scalar k and adding the result to the corresponding element of the j-th row is denoted by R_ij(k) or (R_j → R_j + kR_i).
Similarly, multiplication of each element of the i-th column by a nonzero scalar k and adding the result to the corresponding element of the j-th column is denoted by C_ij(k) or (C_j → C_j + kC_i).
1.11 Equivalent Matrices
Two matrices A and B are said to be equivalent if one can be obtained from the other by applying a finite number of elementary operations; this is denoted by A ∼ B or A → B.
1.12 Gauss-Jordan Method to find Inverse Matrix
The method of finding the inverse of a given matrix by elementary row transformations is called the Gauss-Jordan method and is applied as follows:

[A : I] ⇒ [I : A⁻¹]
Illustration 1.1 Prove that any square matrix can be expressed as the sum of a symmetric and a skew symmetric matrix. Hence express the matrix

    [  4 2 −3 ]
A = [  1 3 −6 ]
    [ −5 0  7 ]

as such a sum of symmetric and skew symmetric matrices.
Solution: For any square matrix A,

2A = (A + A^T) + (A − A^T)

∴ A = (1/2)(A + A^T) + (1/2)(A − A^T) = P + Q, say.   (1.1)
Now we show that P and Q are symmetric and skew-symmetric matrices respectively.

P = (1/2)(A + A^T)

∴ P^T = [(1/2)(A + A^T)]^T = (1/2)(A + A^T)^T     [∵ (kA)^T = kA^T]
      = (1/2)(A^T + (A^T)^T) = (1/2)(A^T + A) = P

∴ P^T = P

Therefore, P is a symmetric matrix.
Also,

Q = (1/2)(A − A^T)

∴ Q^T = [(1/2)(A − A^T)]^T = (1/2)(A − A^T)^T
      = (1/2)(A^T − (A^T)^T) = (1/2)(A^T − A)
      = −(1/2)(A − A^T) = −Q

∴ Q^T = −Q

Therefore, Q is a skew-symmetric matrix.
Now,

                  ( [  4 2 −3 ]   [  4  1 −5 ] )         [  8  3 −8 ]
P = (1/2)(A + A^T) = (1/2) ( [  1 3 −6 ] + [  2  3  0 ] ) = (1/2) [  3  6 −6 ]
                  ( [ −5 0  7 ]   [ −3 −6  7 ] )         [ −8 −6 14 ]

       [  4   3/2  −4 ]
∴ P = [ 3/2   3   −3 ]
       [ −4   −3    7 ]

and

                  ( [  4 2 −3 ]   [  4  1 −5 ] )         [  0  1  2 ]
Q = (1/2)(A − A^T) = (1/2) ( [  1 3 −6 ] − [  2  3  0 ] ) = (1/2) [ −1  0 −6 ]
                  ( [ −5 0  7 ]   [ −3 −6  7 ] )         [ −2  6  0 ]

       [   0   1/2   1 ]
∴ Q = [ −1/2   0   −3 ]
       [  −1    3    0 ]

Hence,

[  4 2 −3 ]   [  4   3/2  −4 ]   [   0   1/2   1 ]
[  1 3 −6 ] = [ 3/2   3   −3 ] + [ −1/2   0   −3 ]
[ −5 0  7 ]   [ −4   −3    7 ]   [  −1    3    0 ]
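The decomposition A = P + Q worked out above can be verified numerically. A minimal sketch (helper names `transpose`, `P`, `Q` are mine) using exact rational arithmetic:

```python
from fractions import Fraction as F

def transpose(M):
    """Transpose a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

A = [[4, 2, -3], [1, 3, -6], [-5, 0, 7]]       # matrix from Illustration 1.1
At = transpose(A)

# P = (1/2)(A + A^T), Q = (1/2)(A - A^T)
P = [[F(A[i][j] + At[i][j], 2) for j in range(3)] for i in range(3)]
Q = [[F(A[i][j] - At[i][j], 2) for j in range(3)] for i in range(3)]

assert P == transpose(P)                                   # P^T = P  (symmetric)
assert Q == [[-q for q in row] for row in transpose(Q)]    # Q^T = -Q (skew-symmetric)
assert all(P[i][j] + Q[i][j] == A[i][j]
           for i in range(3) for j in range(3))            # A = P + Q
print(P[0])   # first row of P: 4, 3/2, -4, matching the hand computation
```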
Illustration 1.2 Using the Gauss-Jordan method, find the inverse of the matrix

    [ 0 1 2 ]
A = [ 1 2 3 ]
    [ 3 1 1 ]

Solution: Consider the augmented matrix

          [ 0 1 2 | 1 0 0 ]
[A : I] = [ 1 2 3 | 0 1 0 ]
          [ 3 1 1 | 0 0 1 ]
In order to find the inverse of the given matrix, we transform A to I in the above matrix by applying only row operations successively, as follows:

          [ 0 1 2 | 1 0 0 ]
[A : I] = [ 1 2 3 | 0 1 0 ]   → R12
          [ 3 1 1 | 0 0 1 ]

  [ 1 2 3 | 0 1 0 ]
∼ [ 0 1 2 | 1 0 0 ]   → R13(−3)
  [ 3 1 1 | 0 0 1 ]

  [ 1  2  3 | 0  1 0 ]
∼ [ 0  1  2 | 1  0 0 ]   → R21(−2), R23(5)
  [ 0 −5 −8 | 0 −3 1 ]

  [ 1 0 −1 | −2  1 0 ]
∼ [ 0 1  2 |  1  0 0 ]   → R31(1/2), R32(−1)
  [ 0 0  2 |  5 −3 1 ]

  [ 1 0 0 | 1/2 −1/2 1/2 ]
∼ [ 0 1 0 | −4    3   −1 ]   → (1/2)R3
  [ 0 0 2 |  5   −3    1 ]

  [ 1 0 0 | 1/2  −1/2  1/2 ]
∼ [ 0 1 0 | −4     3    −1 ] = [I : A⁻¹]
  [ 0 0 1 | 5/2  −3/2  1/2 ]

Hence,

       [ 1/2  −1/2  1/2 ]
A⁻¹ = [ −4     3    −1 ]
       [ 5/2  −3/2  1/2 ]
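The [A : I] → [I : A⁻¹] reduction above is mechanical enough to automate. A minimal sketch of the Gauss-Jordan method (function name mine; exact rationals via `fractions` so the result matches the hand computation digit for digit):

```python
from fractions import Fraction as F

def inverse_gauss_jordan(A):
    """Reduce [A : I] to [I : A^-1] by elementary row operations."""
    n = len(A)
    # Build the augmented matrix [A : I] with exact rational entries.
    M = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # find a pivot
        M[col], M[piv] = M[piv], M[col]                         # row interchange R_ij
        M[col] = [x / M[col][col] for x in M[col]]              # make the pivot 1
        for r in range(n):                                      # clear the column
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                               # right half is A^-1

A = [[0, 1, 2], [1, 2, 3], [3, 1, 1]]
print(inverse_gauss_jordan(A))   # rows: [1/2, -1/2, 1/2], [-4, 3, -1], [5/2, -3/2, 1/2]
```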
Exercise 1.1

1. Express the matrix
   [  1 2 4 ]
   [ −2 5 3 ]
   [ −1 6 3 ]
   as a sum of symmetric and skew symmetric matrices.
3. Find the adjoint of the matrix
       [ 1 2 3 ]
   A = [ 0 1 2 ]
       [ 2 0 1 ]
   Also verify that A adj(A) = adj(A) A = |A| I.
4. Show that
   [ λ 1 ]ⁿ   [ λⁿ  nλⁿ⁻¹ ]
   [ 0 λ ]  = [ 0     λⁿ  ]
   where n is a positive integer.
5. Show that |adj(A)| = |A|ⁿ⁻¹, where A is a square matrix of order n. Hence deduce that |adj(adj A)| = |A|^((n−1)²).
6. Find the inverse of the following matrices by using Gauss-Jordan method (using row operations):
       [ 2 1 3 ]       [ 1 3 3 ]
   a.  [ 3 1 2 ]   b.  [ 1 4 3 ]
       [ 1 2 3 ]       [ 1 3 4 ]

       [ −1 −3  3 −1 ]
   c.  [  1  1 −1  0 ]
       [  2 −5  2 −3 ]
       [ −1  1  0  1 ]
7. Find the inverse of the following matrices by using adjoint method if exist:
       [ 1 1 1 ]       [  1  2 −2 ]
   a.  [ 1 2 3 ]   b.  [ −1  3  0 ]
       [ 1 4 9 ]       [  0 −2  1 ]
8. If A and B are symmetric matrices then prove that AB is symmetric, provided A and B commute.
       [  3 −5/2  1/2 ]       [ 3 2 6 ]
7. a.  [ −3   4   −1  ]   b.  [ 1 1 2 ]   9. A
       [  1 −3/2  1/2 ]       [ 2 2 5 ]
E E E
1.13 Rank of a Matrix
The rank of a matrix A is the maximum order of its non-vanishing minor, and it is denoted by ρ(A). Thus ρ(A) = r if
1. at least one minor of order r is not equal to zero, and
2. all the minors of order (r + 1) are zero, in which case ρ(A) ≤ r.
Note that elementary transformations do not alter the order and the rank of a matrix.
1.14 Rank by Row Echelon Method (Elementary Transformation Method)
• The row-echelon or canonical form of a matrix A_m×n is a matrix in which one or more elements in each of the first r rows are non-zero and all the elements in the remaining rows are zero.
• Any matrix A can always be reduced to echelon form by applying only row transformations.
• In this case the rank of the matrix is given by ρ(A) = m − k, where m denotes the total number of rows and k denotes the total number of zero rows. Note that if k = 0 then ρ(A) = m.
* Important:
• Observe that R1, R2, R3 are pivot rows, whose pivots are enclosed by the rectangular box.
• Further, the column in which the pivot of a row lies is known as a pivot column. In the above matrix, C1, C2, C4 are pivot columns.
1.15 Reduced Row Echelon Form
The reduced row echelon form is a row echelon form in which all the pivots are unity and all the elements above each pivot are zeros.
e. g.

[ 1 0 −3 0 2 ]
[ 0 1  2 0 8 ]
[ 0 0  0 1 5 ]
[ 0 0  0 0 0 ]
Illustration 1.3 Find the rank of the matrix

[ 1 2 3 ]
[ 1 4 2 ]
[ 2 6 5 ]

using the determinant method.

Solution: Let

    [ 1 2 3 ]
A = [ 1 4 2 ]
    [ 2 6 5 ]
Obviously, the highest-order minor of A is of 3rd order, and it is det(A) itself.

           | 1 2 3 |
∴ det(A) = | 1 4 2 | = 1(20 − 12) − 2(5 − 4) + 3(6 − 8) = 8 − 2 − 6 = 0.
           | 2 6 5 |

So the rank of A is not 3. Now consider the 2nd order minor of A formed by its 1st and 2nd rows:

| 1 2 |
| 1 4 | = 4 − 2 = 2 ≠ 0.

Hence the rank of matrix A is 2. ∴ ρ(A) = 2.
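The determinant method — search for the largest order at which some minor is non-zero — can be sketched directly (function names are mine; this brute-force search is only practical for small matrices, which is exactly the textbook setting):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def rank_by_minors(A):
    """Rank = largest r for which some r x r minor has non-zero determinant."""
    m, n = len(A), len(A[0])
    for r in range(min(m, n), 0, -1):          # try the largest order first
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if det([[A[i][j] for j in cols] for i in rows]) != 0:
                    return r
    return 0

A = [[1, 2, 3], [1, 4, 2], [2, 6, 5]]          # matrix from Illustration 1.3
print(det(A), rank_by_minors(A))               # 0 2
```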
Illustration 1.4 Find the rank of the matrix

[ 0 1 −3 −1 ]
[ 1 0  1  1 ]
[ 3 1  0  2 ]
[ 1 1 −2  0 ]

by reducing it to echelon form.

Solution: Let

    [ 0 1 −3 −1 ]
A = [ 1 0  1  1 ]
    [ 3 1  0  2 ]
    [ 1 1 −2  0 ]
To reduce A to its row-echelon form, use only row transformations, bringing 0 (zero) under the first non-zero (pivot) element of each row, starting from the first row.

    [ 0 1 −3 −1 ]
A = [ 1 0  1  1 ]   R3 → R3 − R1; R4 → R4 − R1
    [ 3 1  0  2 ]
    [ 1 1 −2  0 ]

  [ 0 1 −3 −1 ]
∼ [ 1 0  1  1 ]   R3 → R3 − 3R2; R4 → R4 − R2
  [ 3 0  3  3 ]
  [ 1 0  1  1 ]

  [ 0 1 −3 −1 ]
∼ [ 1 0  1  1 ]
  [ 0 0  0  0 ]
  [ 0 0  0  0 ]

Hence ρ(A) = m − k = 4 − 2 = 2 (the number of non-zero rows).
Exercise 1.2
1 2 −1 3
1 2 3
3. Convert the matrix 2 3 4 in to reduced row echelon form and hence find the rank of matrix.
3 4 5
Answers

                          [ 1 0 −1 ]
1. 2   2. a. 3  b. 4   3. [ 0 1  2 ] , Rank = 2
                          [ 0 0  0 ]
E E E
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779
The collection of more than one linear equation is called a system of equations. Consider the system of m equations in n unknowns x1, x2, x3, . . . , xn, as follows:

a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + . . . + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + . . . + a3n xn = b3
· · ·
am1 x1 + am2 x2 + am3 x3 + . . . + amn xn = bm

A solution of the above system means values of the unknowns x1, x2, x3, . . . , xn that satisfy the given system; such values may or may not exist.
• The above system can be rewritten in compact form using matrix notation as AX = B, where

    [ a11 a12 a13 . . . a1n ]       [ x1 ]       [ b1 ]
    [ a21 a22 a23 . . . a2n ]       [ x2 ]       [ b2 ]
A = [ a31 a32 a33 . . . a3n ] , X = [ x3 ] , B = [ b3 ]
    [  ⋮   ⋮   ⋮  . . .  ⋮  ]       [  ⋮ ]       [  ⋮ ]
    [ am1 am2 am3 . . . amn ]       [ xn ]       [ bm ]
* Important:
When the system AX = B has a solution, the system is said to be consistent; otherwise the system is called inconsistent.
If AX = B is a system of m equations in n unknowns, then the matrix written as [A : B] or (A, B) is called the augmented matrix of the given system. Hence

          [ a11 a12 a13 . . . a1n | b1 ]
          [ a21 a22 a23 . . . a2n | b2 ]
[A : B] = [ a31 a32 a33 . . . a3n | b3 ]
          [  ⋮   ⋮   ⋮  . . .  ⋮  |  ⋮ ]
          [ am1 am2 am3 . . . amn | bm ]
For the system of equations AX = B, if B is not a null matrix (a non-zero matrix), then AX = B is known as a non-homogeneous system of equations. If B is a null matrix (zero matrix), then the system AX = B (AX = Z) is known as a homogeneous system of equations.
e. g.

x + y + z = 3                              x + y + z = 0
x − y + 2z = 4   (non-homogeneous);   2x + 3y − 4z = 0   (homogeneous).
2x + 3y − z = 0                            x − y + 2z = 0
* Important:
• To find the rank of the augmented matrix, we reduce the matrix into row echelon form.
• This method is known as the Reduction Method or Gauss Elimination Method. In this method we apply only elementary row transformations.
• Gauss-Jordan Elimination Method: In this method, convert the matrix system [A : B] into reduced row echelon form and then apply back substitution.
Illustration 2.1 Examine the consistency of the following system of equations, and solve if consistent, by the Gauss elimination method:
x1 + x2 + 2x3 = 9, 2x1 + 4x2 − 3x3 = 1, 3x1 + 6x2 − 5x3 = 0.

Solution: Here the given system has three equations in three unknowns x1, x2, and x3, that is, an (m × n = 3 × 3) system.
In the Gauss elimination method, we reduce the augmented matrix [A : B] to row echelon form using row operations only, as follows:

          [ 1 1  2 | 9 ]
[A : B] = [ 2 4 −3 | 1 ]   R2 → R2 − 2R1; R3 → R3 − 3R1
          [ 3 6 −5 | 0 ]

  [ 1 1   2 |   9 ]
∼ [ 0 2  −7 | −17 ]   R3 → R3 − (3/2)R2
  [ 0 3 −11 | −27 ]

  [ 1 1    2 |    9 ]
∼ [ 0 2   −7 |  −17 ]   (2.1)
  [ 0 0 −1/2 | −3/2 ]

Here ρ(A) = ρ(A : B) = 3 (the number of unknowns), so the system is consistent and has a unique solution. Back substitution gives:

R3 : −(1/2) x3 = −3/2
R2 : 2x2 − 7x3 = −17       ⇒ x1 = 1, x2 = 2, x3 = 3.
R1 : x1 + x2 + 2x3 = 9
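The forward-elimination-then-back-substitution procedure used above can be sketched as a short routine (function name mine; exact rationals keep the answer clean, and this sketch covers only the square, unique-solution case):

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Gauss elimination: reduce to row echelon form, then back substitute."""
    n = len(A)
    M = [[F(x) for x in row] + [F(bi)] for row, bi in zip(A, b)]  # augmented [A : B]
    for col in range(n):                                   # forward elimination
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):                         # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# System from Illustration 2.1
A = [[1, 1, 2], [2, 4, -3], [3, 6, -5]]
b = [9, 1, 0]
print(gauss_solve(A, b))   # x1 = 1, x2 = 2, x3 = 3, as in the text
```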
Illustration 2.2 Solve the following system, if consistent, by the Gauss elimination method:
2x + 2y + 2z = 0, −2x + 5y + 2z = 1, 8x + y + 4z = −1.

Solution: Reducing the augmented matrix to row echelon form:

          [  2 2 2 |  0 ]
[A : B] = [ −2 5 2 |  1 ]   R2 → R2 + R1; R3 → R3 − 4R1
          [  8 1 4 | −1 ]

  [ 2  2  2 |  0 ]
∼ [ 0  7  4 |  1 ]   R3 → R3 + R2
  [ 0 −7 −4 | −1 ]

  [ 2 2 2 | 0 ]
∼ [ 0 7 4 | 1 ]   (2.2)
  [ 0 0 0 | 0 ]

Here ρ(A) = ρ(A : B) = 2 < 3, so the system is consistent with a 1-parameter infinite number of solutions. Taking z = t, t ∈ R, back substitution gives:

R2 : 7y + 4z = 1          ∴ y = (1 − 4t)/7,
R1 : 2x + 2y + 2z = 0     ∴ x = −(1 + 3t)/7.
Illustration 2.3 Solve the following system by the Gauss-Jordan method:

x1 + 3x2 − 2x3 + 2x5 = 0
2x1 + 6x2 − 5x3 − 2x4 + 4x5 − 3x6 = −1
5x3 + 10x4 + 15x6 = 5
2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6
Solution: Reducing the augmented matrix [A : B] to reduced row echelon form:

          [ 1 3 −2  0 2  0 |  0 ]
[A : B] = [ 2 6 −5 −2 4 −3 | −1 ]   R2 → R2 − 2R1; R4 → R4 − 2R1
          [ 0 0  5 10 0 15 |  5 ]
          [ 2 6  0  8 4 18 |  6 ]

  [ 1 3 −2  0 2  0 |  0 ]
∼ [ 0 0 −1 −2 0 −3 | −1 ]   R3 → R3 + 5R2; R4 → R4 + 4R2
  [ 0 0  5 10 0 15 |  5 ]
  [ 0 0  4  8 0 18 |  6 ]

  [ 1 3 −2  0 2  0 |  0 ]
∼ [ 0 0 −1 −2 0 −3 | −1 ]   R2 → (−1)R2; R3 ↔ R4 (i.e. R34)
  [ 0 0  0  0 0  0 |  0 ]
  [ 0 0  0  0 0  6 |  2 ]

  [ 1 3 −2  0 2  0 |  0 ]
∼ [ 0 0  1  2 0  3 |  1 ]   R3 → (1/6)R3
  [ 0 0  0  0 0  6 |  2 ]
  [ 0 0  0  0 0  0 |  0 ]

  [ 1 3 −2  0 2  0 |  0  ]
∼ [ 0 0  1  2 0  3 |  1  ]   R2 → R2 − 3R3; R1 → R1 + 2R2
  [ 0 0  0  0 0  1 | 1/3 ]
  [ 0 0  0  0 0  0 |  0  ]

  [ 1 3 0 4 2 0 |  0  ]
∼ [ 0 0 1 2 0 0 |  0  ]   (reduced row echelon form) (2.3)
  [ 0 0 0 0 0 1 | 1/3 ]
  [ 0 0 0 0 0 0 |  0  ]
Illustration 2.4 Find the values of λ and µ for which the system of equations
x + y + z = 6, x + 2y + 3z = 10, x + 2y + λz = µ,
has (i) a unique solution, (ii) an infinite number of solutions, and (iii) no solution.
Solution: Consider the augmented matrix [A : B] for the given equations and apply the reduction method, as follows:

          [ 1 1 1 |  6 ]
[A : B] = [ 1 2 3 | 10 ]   R2 → R2 − R1; R3 → R3 − R1
          [ 1 2 λ |  µ ]

  [ 1 1   1   |   6   ]
∼ [ 0 1   2   |   4   ]   R3 → R3 − R2
  [ 0 1 λ − 1 | µ − 6 ]

  [ 1 1   1   |    6   ]
∼ [ 0 1   2   |    4   ]
  [ 0 0 λ − 3 | µ − 10 ]

Observe that:
i. If λ ≠ 3 (for any µ), then ρ(A) = ρ(A : B) = 3. Hence the system has a unique solution.
ii. If λ = 3 and µ = 10, then ρ(A) = ρ(A : B) = 2 < 3. Hence the system has a 1-parameter infinite number of solutions.
iii. If λ = 3 and µ ≠ 10, then ρ(A) = 2 and ρ(A : B) = 3, that is, ρ(A) ≠ ρ(A : B). Hence the system is inconsistent and has no solution.
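The three cases above rest entirely on comparing ρ(A) with ρ(A : B). A small sketch (function names `rank` and `classify` are mine) that re-derives the case analysis numerically for this system:

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix via row reduction with exact rationals."""
    M = [[F(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(lam, mu):
    """Consistency of x+y+z=6, x+2y+3z=10, x+2y+lam*z=mu by rank comparison."""
    A = [[1, 1, 1], [1, 2, 3], [1, 2, lam]]
    Ab = [row + [c] for row, c in zip(A, [6, 10, mu])]
    ra, rab = rank(A), rank(Ab)
    if ra < rab:
        return "no solution"
    return "unique solution" if ra == 3 else "infinite solutions"

print(classify(4, 0), classify(3, 10), classify(3, 5))
# unique solution / infinite solutions / no solution, matching cases i-iii
```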
Exercise 2.1

1. Examine the following systems of equations for consistency and, if consistent, solve, using the Gauss elimination method:
   a. 2x1 + x2 − x3 + 3x4 = 8, x1 + x2 + x3 − x4 = −2, 3x1 + 2x2 − x3 = 6
   b. x + y + z = 6, x − y + 2z = 3, 2x − 2y + 3z = 7
   c. 4x − 2y + 6z = 8, 3x + y + z = 8, x + y − 3z = −1, 15x − 3y + 9z = 21

2. Solve the following equations using the Gauss-Jordan method:
   2x1 + x2 − x3 + 3x4 = 11, x1 − 2x2 + x3 + x4 = 8, 4x1 + 7x2 + 2x3 − x4 = 0, 3x1 + 5x2 + 4x3 + 4x4 = 17.
3. Determine the value of λ for which the following system of equations is consistent:
   2x − 3y + 6z − 5w = 3, y − 6z + w = 1, 4x − 5y + 6z − 9w = λ.
4. Find the values of µ for which the system of equations
   2x + y = a, x + µy − z = b, y + 2z = c
   has a unique solution for all a, b, c. Also, if µ = 0, determine the relation satisfied by a, b, c such that the system is inconsistent. Find the general solution by taking µ = 0, a = 1, b = 1, c = −1.
5. Find the values of a and b for which the system of equations
   2x − y + 3z = 6, x + y + 2z = 2, 5x − y + az = b,
   has (i) no solution, (ii) a unique solution and (iii) infinitely many solutions.
7. Solve the following system using the matrix inversion method:
   x + y + z = 0, 2x + 3y − z = −5, x − y + z = 4.
   [Hint: AX = B ⇒ X = A⁻¹B]
8. Use the matrix inversion method to determine the values of λ for which the following system is consistent:
   x + 2y + z = 3, x + y + z = λ, 3x + y + 3z = λ².
Answers
7. x = 1, y = −2, z = 1 8. λ = 2, 3
E E E
Consider the homogeneous system of m equations in n unknowns as AX = Z, where Z is a null matrix of order (m × 1). For the augmented matrix [A : Z], we note that ρ(A) = ρ(A : Z). Hence a homogeneous system is always consistent and has either a unique solution or an infinite number of solutions. This can be written as follows:
1. If ρ(A) = n (the number of unknowns), the system has only the trivial solution x1 = x2 = . . . = xn = 0.
2. If ρ(A) = r < n, the system has an infinite number of solutions, which can be presented parametrically in terms of (n − r) parameters. These solutions are also known as non-trivial solutions.
Illustration 2.5 Solve the homogeneous system:
x1 + 3x2 + 2x3 = 0, 2x1 − x2 + 3x3 = 0, 3x1 − 5x2 + 4x3 = 0, x1 + 17x2 + 4x3 = 0.

Solution: Reducing the augmented matrix to row echelon form gives

  [ 1  3  2 | 0 ]
∼ [ 0 −7 −1 | 0 ]
  [ 0  0  0 | 0 ]
  [ 0  0  0 | 0 ]

∴ ρ(A) = ρ(A : Z) = 2 < 3 (the number of unknowns), so the system has a 1-parameter infinite number of non-trivial solutions. Taking x3 = t, back substitution gives

x1 = −(11t)/7,  x2 = −t/7,  x3 = t,  t ∈ R.
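The parametric family of non-trivial solutions can be checked against all four original equations; a minimal sketch (the helper `solution` is mine, and the parametric form is the one derived above):

```python
from fractions import Fraction as F

# Coefficient matrix of the homogeneous system above
A = [[1, 3, 2], [2, -1, 3], [3, -5, 4], [1, 17, 4]]

def solution(t):
    """Parametric non-trivial solution: x1 = -11t/7, x2 = -t/7, x3 = t."""
    return [F(-11 * t, 7), F(-t, 7), F(t)]

# Every member of the family satisfies A x = 0
for t in (7, -7, 14, 1):
    x = solution(t)
    assert all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

print(solution(7))   # an integer instance of the family: x1 = -11, x2 = -1, x3 = 7
```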
Illustration 2.6 Find the values of k for which the system of equations
(3k − 8)x + 3y + 3z = 0, 3x + (3k − 8)y + 3z = 0, 3x + 3y + (3k − 8)z = 0
has a non-trivial solution.

Solution: For the given system of equations to have a non-trivial solution, the determinant of the coefficient matrix should be zero. That is,
| 3k − 8    3       3    |
|   3     3k − 8    3    | = 0
|   3       3     3k − 8 |

    | 3k − 2  3k − 2  3k − 2 |
⇒  |   3     3k − 8    3    | = 0        [Operating R1 → R1 + R2 + R3]
    |   3       3     3k − 8 |

              | 1    1       1    |
⇒  (3k − 2) | 3  3k − 8    3    | = 0
              | 3    3     3k − 8 |

              | 1     0        0    |
⇒  (3k − 2) | 3  3k − 11     0    | = 0   [Operating C2 → C2 − C1; C3 → C3 − C1]
              | 3     0     3k − 11 |

⇒  (3k − 2)(3k − 11)² = 0

∴ k = 2/3, 11/3, 11/3.
Exercise 2.2
5x + 2y − 3z = 0, 3x + y + z = 0, 2x + y + 6z = 0.
3. For different values of k, discuss the nature of the solutions of the following system:
   x + 2y − z = 0, 3x + (k + 7)y − 3z = 0, 2x + 4y + (k − 3)z = 0.
5. Show that the system of equations x + 2y + 3z = λx, 3x + y + 2z = λy, 2x + 3y + z = λz can possess a non-trivial solution only if λ = 6. Obtain the non-trivial solution for this real value of λ.
are consistent, and find the ratio x : y : z when λ has the smallest of these values. What happens when λ has the greatest of these values?
Answers
1. a. x1 = x2 = x3 = x4 = 0   b. x = y = t, z = −t, t ∈ R   c. det(A) ≠ 0, x = y = z = 0
5. x = y = z = t, t ∈ R   6. λ = 0, 3
E E E
3.1 Euclidean Space
• Let R denote the set of real numbers; then the n-fold Cartesian product of R with itself is denoted by Rⁿ. That is,
  Rⁿ = R × R × R × . . . × R (n times)
• In particular, R² = R × R and R³ = R × R × R.
• Elements of Rⁿ are (x1, x2, . . . , xn), xi ∈ R, and are known as ordered n-tuples. Thus
  Rⁿ = {(x1, x2, x3, . . . , xn) : xi ∈ R, 1 ≤ i ≤ n}
• Elements of R² are called ordered pairs and elements of R³ are called ordered triplets:
  R² = {(x, y) : x, y ∈ R},  R³ = {(x, y, z) : x, y, z ∈ R}
• The elements of Rⁿ are also referred to as vectors or points, and can be represented by means of a column matrix as

       [ x1 ]
       [ x2 ]
   x = [ x3 ] = [ x1  x2  x3  . . .  xn ]^T = X
       [  ⋮ ]
       [ xn ]

• Here Rⁿ is known as real Euclidean n-dimensional space. For example, R² and R³ are two- and three-dimensional spaces respectively.
• The standard arithmetic of addition, subtraction, scalar multiplication, the zero (null) vector, etc. in Rⁿ is the same as defined for matrices.
Let

    [ x1 ]       [ y1 ]
    [ x2 ]       [ y2 ]
x = [ x3 ] , y = [ y3 ]  ∈ Rⁿ; then
    [  ⋮ ]       [  ⋮ ]
    [ xn ]       [ yn ]

                               [ x1 ]
                               [ x2 ]
y^T x = [ y1 y2 y3 · · · yn ] [ x3 ] = x1 y1 + x2 y2 + x3 y3 + . . . + xn yn
                               [  ⋮ ]
                               [ xn ]

• This product is known as the Euclidean inner product or dot product and is denoted by x · y. Hence,

  x · y = Σᵢ₌₁ⁿ xi yi,  1 ≤ i ≤ n.

e. g. Let x = (1, 4, −2), y = (2, −1, 3) ∈ R³ ⇒ x · y = (1)(2) + (4)(−1) + (−2)(3) = 2 − 4 − 6 = −8.
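The componentwise sum that defines the dot product translates directly into code; a one-line sketch (function name mine) reproducing the example:

```python
def dot(x, y):
    """Euclidean inner product: the sum of componentwise products."""
    return sum(xi * yi for xi, yi in zip(x, y))

print(dot((1, 4, -2), (2, -1, 3)))   # (1)(2) + (4)(-1) + (-2)(3) = -8
```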
3.2 Linear Combination
For vectors x1, x2, x3, . . . , xn and scalars c1, c2, c3, . . . , cn, a vector of the form

c1 x1 + c2 x2 + c3 x3 + . . . + cn xn = Σᵢ₌₁ⁿ ci xi

is called a linear combination of the vectors x1, x2, x3, . . . , xn.
3.3 Linearly Independent Vectors (LI)
The vectors x1, x2, x3, . . . , xn ∈ Rⁿ are said to be linearly independent if

c1 x1 + c2 x2 + c3 x3 + . . . + cn xn = 0 ⇒ c1 = c2 = c3 = . . . = cn = 0.
3.4 Linearly Dependent Vectors (LD)
The vectors x1, x2, x3, . . . , xn ∈ Rⁿ are said to be linearly dependent if there exist scalars c1, c2, . . . , cn, not all zero, such that c1 x1 + c2 x2 + . . . + cn xn = 0.
3.5 Euclidean Norm
Let x = (x1, x2, x3, . . . , xn) ∈ Rⁿ; then the norm or magnitude of the vector x is denoted by ‖x‖ and is defined as

‖x‖ = √(x · x) = √(x1² + x2² + x3² + . . . + xn²).
3.6 Normalized Vector
A vector of unit norm is called a unit vector or normalized vector and is denoted by x̂.
• If ‖x‖ ≠ 1 then x can be converted to a normalized vector by dividing it by ‖x‖. Thus, x̂ = x / ‖x‖ is always a normalized vector.
e. g.

[ cos θ ]   [ 1 ]   [  0 ]
[ sin θ ] , [ 0 ] , [ −1 ]   are normalized (unit) vectors in R².
3.7 Euclidean Distance and Angle
Let x, y ∈ Rⁿ.
• Distance: The distance between two vectors is defined as d(x, y) = ‖x − y‖.
• Angle: The angle θ between two vectors is defined by cos θ = (x · y) / (‖x‖ ‖y‖).
Also, x, y ∈ Rⁿ are called orthogonal (perpendicular) vectors if cos θ = 0, i.e. x · y = 0.
3.8 Cauchy-Schwarz's Inequality
For x, y ∈ Rⁿ,

|x · y| ≤ ‖x‖ ‖y‖.

Proof: Since

cos θ = (x · y) / (‖x‖ ‖y‖)  ⇒  |cos θ| = |x · y| / (‖x‖ ‖y‖).

But |cos θ| ≤ 1,

∴ |x · y| / (‖x‖ ‖y‖) ≤ 1.

Hence,

|x · y| ≤ ‖x‖ ‖y‖.
Triangle Inequality (Minkowski³): For x, y ∈ Rⁿ,
‖x + y‖ ≤ ‖x‖ + ‖y‖.
Proof:
‖x + y‖² = (x + y) · (x + y)
 = x · x + x · y + y · x + y · y
 = x · x + 2 x · y + y · y   [∵ x · y = y · x]
 ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖²   [∵ x · y ≤ |x · y| ≤ ‖x‖ ‖y‖]
∴ ‖x + y‖² ≤ (‖x‖ + ‖y‖)²
⇒ ‖x + y‖ ≤ ‖x‖ + ‖y‖
² Augustin-Louis Cauchy; French, 1789-1857, and Karl Hermann Amandus Schwarz; German, 1843-1921.
³ Hermann Minkowski; German, 1864-1909.
Illustration 3.1 Find the constant k such that the vectors (1, k, −3) and (2, −5, 4) are orthogonal.
Solution: Let x = (1, k, −3) and y = (2, −5, 4). For orthogonal vectors,
x · y = 0 ⇒ (1)(2) + (k)(−5) + (−3)(4) = 0 ⇒ 2 − 5k − 12 = 0 ⇒ k = −2
Illustration 3.2 Verify the Cauchy-Schwarz and triangle inequalities for the vectors x = (1, −3, 2) and y = (1, 1, −1). Also find the distance and angle between them.
Solution:
x = (1, −3, 2), y = (1, 1, −1) ⇒ x · y = 1 − 3 − 2 = −4, ‖x‖ = √(1 + 9 + 4) = √14, ‖y‖ = √(1 + 1 + 1) = √3
Therefore,
|x · y| = |−4| = 4 and ‖x‖ ‖y‖ = √14 √3 = √42 ⇒ |x · y| ≤ ‖x‖ ‖y‖ (3.1)
‖x + y‖ = ‖(2, −2, 1)‖ = √(4 + 4 + 1) = 3 and ‖x‖ + ‖y‖ = √14 + √3 ⇒ ‖x + y‖ ≤ ‖x‖ + ‖y‖ (3.2)
Hence from (3.1) and (3.2), the Cauchy-Schwarz and triangle inequalities are verified.
Also the distance and angle between them are given by
d(x, y) = ‖x − y‖ = ‖(0, −4, 3)‖ = √(0 + 16 + 9)
∴ d(x, y) = 5
and
cos θ = (x · y) / (‖x‖ ‖y‖) = −4 / (√14 √3) = −4/√42
∴ θ = cos⁻¹(−4/√42)
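The computations above are easy to check mechanically; a small sketch in Python for x = (1, −3, 2), y = (1, 1, −1):

```python
import math

x, y = [1, -3, 2], [1, 1, -1]
d = sum(a * b for a, b in zip(x, y))        # x . y = -4
nx = math.sqrt(sum(a * a for a in x))       # ||x|| = sqrt(14)
ny = math.sqrt(sum(b * b for b in y))       # ||y|| = sqrt(3)

assert abs(d) <= nx * ny                    # Cauchy-Schwarz: 4 <= sqrt(42)
s = [a + b for a, b in zip(x, y)]           # x + y = (2, -2, 1)
assert math.sqrt(sum(c * c for c in s)) <= nx + ny  # triangle inequality

dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
theta = math.acos(d / (nx * ny))            # angle between x and y
assert dist == 5.0                          # ||x - y|| = ||(0, -4, 3)||
```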
Illustration 3.3 Show that x₁, x₂, x₃ are linearly independent and x₄ depends on them, where x₁ = (1, 2, 4), x₂ = (2, −1, 3), x₃ = (0, 1, 2), x₄ = (−3, 4, 7).
Solution: Consider the linear combination
c₁x₁ + c₂x₂ + c₃x₃ = 0
∴ c₁(1, 2, 4) + c₂(2, −1, 3) + c₃(0, 1, 2) = (0, 0, 0)
⇒ c₁ + 2c₂ = 0, 2c₁ − c₂ + c₃ = 0, 4c₁ + 3c₂ + 2c₃ = 0. (3.3)
Now x₁, x₂, x₃ are linearly independent if c₁ = c₂ = c₃ = 0, that is, the homogeneous system (3.3) should have only the trivial solution. The determinant of the coefficient matrix is
| 1  2  0 |
| 2 −1  1 | = 1(−2 − 3) − 2(4 − 4) + 0 = −5 ≠ 0
| 4  3  2 |
Since the determinant is non-zero, the system has only the trivial solution, so x₁, x₂, x₃ are linearly independent. Next, for x₄, consider
c₁x₁ + c₂x₂ + c₃x₃ + c₄x₄ = 0
∴ c₁(1, 2, 4) + c₂(2, −1, 3) + c₃(0, 1, 2) + c₄(−3, 4, 7) = (0, 0, 0)
⇒ c₁ + 2c₂ − 3c₄ = 0, 2c₁ − c₂ + c₃ + 4c₄ = 0, 4c₁ + 3c₂ + 2c₃ + 7c₄ = 0. (3.4)
Now we solve the homogeneous system (3.4) for the unknowns c₁, c₂, c₃ and c₄. The augmented matrix is
[A : Z] =
[ 1  2  0 −3 | 0 ]
[ 2 −1  1  4 | 0 ]   → R₂ − 2R₁; R₃ − 4R₁
[ 4  3  2  7 | 0 ]
∼
[ 1  2  0 −3 | 0 ]
[ 0 −5  1 10 | 0 ]   → R₃ − R₂
[ 0 −5  2 19 | 0 ]
∼
[ 1  2  0 −3 | 0 ]
[ 0 −5  1 10 | 0 ]   (3.5)
[ 0  0  1  9 | 0 ]
∴ ρ(A) = ρ(A : Z) = 3 < 4
The system has a one-parameter non-trivial (non-zero) solution, that is, c₁, c₂, c₃ and c₄ are not all zero. So x₄ depends on x₁, x₂, x₃.
► In this case, to find the relation among them, solving (3.5) for c₁, c₂, c₃, c₄ we get
c₁ = 13t/5, c₂ = t/5, c₃ = −9t, c₄ = t, t ∈ R
Substituting in the linear combination of (3.4),
(13t/5) x₁ + (t/5) x₂ − 9t x₃ + t x₄ = 0 ⇒ 13x₁ + x₂ − 45x₃ + 5x₄ = 0
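The independence test can be automated by computing the rank of the vectors with exact rational arithmetic; a minimal sketch (the helper `rank` is ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of vectors via Gaussian elimination over exact rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

x1, x2, x3, x4 = [1, 2, 4], [2, -1, 3], [0, 1, 2], [-3, 4, 7]
print(rank([x1, x2, x3]))      # 3 -> x1, x2, x3 linearly independent
print(rank([x1, x2, x3, x4]))  # still 3 (< 4 vectors) -> x4 depends on the others
```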
Exercise 3.1
1. Examine the following systems of vectors for linear dependence or independence. If dependent, find the relation among them: (a-d)
a. x₁ = (1, −1, 1), x₂ = (2, 1, 1), x₃ = (3, 0, 2)
b. x₁ = (2, 2, 7, −1), x₂ = (3, −1, 2, 4), x₃ = (1, 1, 3, 1)
c. x₁ = (3, 1, −4), x₂ = (2, 2, −3), x₃ = (0, −4, 1)
d. x₁ = [1 2 4]ᵀ, x₂ = [3 7 10]ᵀ
3. Find the constant k such that the vectors (2, 3k, −4, 1, 5) and (6, −1, 3, 7, 2k) are orthogonal.
4. Discuss and find the relation of linear dependence among the row vectors of the matrix
[ 1  1 −1  1 ]
[ 1 −1  2 −1 ]
[ 3  1  0  1 ]
5. If a and b are unit vectors such that a + 2b and 5a − 4b are perpendicular to each other, then find the angle between a and b.
⁴ Pythagoras, Greek; 570-495 BC
‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².
Answers
1. a. L.D., x₁ + x₂ = x₃  b. L.I.  c. L.D., 2x₁ = 3x₂ + x₃  d. L.I.  2. x, y and y, z  3. −1
4. L.D., 2R₁ + R₂ = R₃  5. 60°
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779
4.1 Field

e.g.
► The set of rational numbers Q and the set of real numbers R are fields.

A non-empty set V is said to be a vector space or a linear space over the field F if there exist two maps, + : V × V → V with +(u, v) = u + v, called vector addition (VA), and · : F × V → V with ·(α, u) = α · u, called scalar multiplication (SM), satisfying the following conditions ∀ u, v, w ∈ V and α, β ∈ F:
7. α · (u + v) = α · u + α · v.
8. (α + β) · u = α · u + β · u.
9. (αβ) · u = α · (β · u).
10. 1 · u = u.
Remark:
1. The elements of V are called vectors even though they may be any objects, like matrices, polynomials, functions, n-tuples etc. Such a vector space is also known as an abstract vector space.
2. Sometimes vector addition and scalar multiplication are also denoted by ⊕ and ∗ respectively.
3. Instead of the field F, if we take R, the set of real numbers, then V is called a real vector space or real linear space over R. Generally we always take F = R unless stated otherwise.
1. The n-dimensional space Rⁿ is a vector space over R under the usual addition and scalar multiplication in Rⁿ.
Let x = (x₁, x₂, …, xₙ), y = (y₁, y₂, …, yₙ) ∈ Rⁿ and α ∈ R; the usual vector addition and scalar multiplication are defined as
x + y = (x₁, x₂, …, xₙ) + (y₁, y₂, …, yₙ) = (x₁ + y₁, x₂ + y₂, …, xₙ + yₙ)
and
α · x = α(x₁, x₂, …, xₙ) = (αx₁, αx₂, …, αxₙ)
2. Let u = [u₁ u₂; u₃ u₄], v = [v₁ v₂; v₃ v₄] ∈ M₂₂ and α ∈ R; the matrix addition and scalar multiplication in M₂₂ are defined as
u + v = [u₁ + v₁  u₂ + v₂; u₃ + v₃  u₄ + v₄]
and
α · u = [αu₁  αu₂; αu₃  αu₄]
3. The set of all polynomials with real coefficients of degree ≤ n, that is Pₙ(R), is a vector space over R.
Here, Pₙ(R) = {p : p = p(x), deg p(x) ≤ n}.
For p = a₀ + a₁x + … + aₙxⁿ and q = b₀ + b₁x + … + bₙxⁿ in Pₙ(R) and α ∈ R,
p + q = (a₀ + b₀) + (a₁ + b₁)x + (a₂ + b₂)x² + … + (aₙ + bₙ)xⁿ
and
α · p = (αa₀) + (αa₁)x + (αa₂)x² + … + (αaₙ)xⁿ
4. The class A of all functions f : R → R is a vector space over R under function addition and function scalar multiplication.
Let f, g ∈ A and α ∈ R; the vector addition and scalar multiplication in A are defined as
(f + g)(x) = f(x) + g(x)
and
(α · f)(x) = α f(x), ∀ x ∈ R
Illustration 4.1 Show that the set of all pairs of real numbers of the form (1, y), with the operations (1, y) + (1, y′) = (1, y + y′) and k(1, y) = (1, ky), where k ∈ R, is a vector space.
Solution: Let V = {(1, y) : y ∈ R}. In order to prove V is a vector space, we have to show that all ten conditions listed in the definition of a vector space are satisfied.
Let u, v, w ∈ V and k, m ∈ R, so u = (1, x), v = (1, y), w = (1, z) for some x, y, z ∈ R.
1. u + v = (1, x) + (1, y) = (1, x + y) ∈ V   [∵ x, y ∈ R ⇒ x + y ∈ R]
∴ V is closed under vector addition.
2. (u + v) + w = [(1, x) + (1, y)] + (1, z)
 = (1, x + y) + (1, z)
 = (1, x + y + z)
u + (v + w) = (1, x) + [(1, y) + (1, z)]
 = (1, x) + (1, y + z)
 = (1, x + y + z)
∴ (u + v) + w = u + (v + w)
∴ Vector addition is associative in V.
4. For the additive identity, we need to find an element, say 0 = (1, θ) ∈ V for some θ ∈ R, such that ∀ u ∈ V, u + 0 = 0 + u = u. That is,
(1, x) + (1, θ) = (1, x) ⇒ (1, x + θ) = (1, x)
Observe that the above condition holds if θ = 0. Hence 0 = (1, 0) ∈ V is the additive identity.
∴ The additive identity exists for vector addition in V.
5. For the additive inverse, we need to find an element, say −u = (1, λ) ∈ V for some λ ∈ R, such that ∀ u ∈ V, u + (−u) = (−u) + u = 0. That is,
(1, x) + (1, λ) = (1, 0) ⇒ (1, x + λ) = (1, 0)
Observe that the above condition holds if λ = −x. Hence −u = (1, −x) ∈ V is the additive inverse of u = (1, x).
∴ The additive inverse exists for vector addition in V.
8. (k + m)u = (k + m)(1, x)
 = [1, (k + m)x]
 = (1, kx + mx)
ku + mu = k(1, x) + m(1, x)
 = (1, kx) + (1, mx)
 = (1, kx + mx)
∴ (k + m)u = ku + mu
9. (km)u = (km)(1, x)
 = [1, (km)x]
 = (1, kmx)
k(mu) = k[m(1, x)]
 = k(1, mx)
 = (1, kmx)
∴ (km)u = k(mu)
10. 1u = 1(1, x)
 = (1, 1x)
 = (1, x)   [∵ 1x = x]
∴ 1u = u
Thus, all ten conditions for a vector space hold for the given vector addition and scalar multiplication in V. Therefore V is a vector space.
Illustration 4.2 Show that V = R², with the usual vector addition and scalar multiplication defined by α · (x, y) = (α²x, α²y), is not a vector space.
Solution: Given that V = R² and the defined addition is the usual vector addition of R², all five conditions for vector addition are satisfied evidently. So it is sufficient to check the remaining five conditions for scalar multiplication.
Let u = (x, y) ∈ V and α, β ∈ R.
1. αu = α(x, y) = (α²x, α²y) ∈ V   [∵ α, x, y ∈ R ⇒ α²x, α²y ∈ R]
2. (α + β)u = (α + β)(x, y)
 = ((α + β)²x, (α + β)²y)   [∵ by definition of SM]
 = ((α² + 2αβ + β²)x, (α² + 2αβ + β²)y)
 = (α²x + 2αβx + β²x, α²y + 2αβy + β²y)
αu + βu = α(x, y) + β(x, y)
 = (α²x, α²y) + (β²x, β²y)
 = (α²x + β²x, α²y + β²y)
∴ (α + β)u ≠ αu + βu
That is, scalar multiplication is not distributive over scalar addition. Hence V is not a vector space. (The reader can verify that all the other remaining conditions for scalar multiplication do hold.)
4.4 Subspace
A non-empty subset W of the vector space V over R is said to be a subspace of V if W is itself a vector space over R under the same vector addition and scalar multiplication as V.
Theorem 4.1 A non-empty subset W of a vector space V over R is a subspace of V if and only if,
i. ∀ u, v ∈ W, u + v ∈ W.
ii. ∀ u ∈ W, α ∈ R, αu ∈ W.
That is, W should be closed under vector addition and scalar multiplication.
Note: Every vector space V has two obvious subspaces: the singleton set {0} and the vector space V itself. These subspaces are called trivial subspaces.
Illustration 4.3 Check whether the following subsets W of the vector space V are subspaces or not?
a. W = {(x, 3x, 2x) : x ∈ R}; V = R³.
b. W = {(x, y, z) : x² + y² + z² ≤ 1}; V = R³.
c. W = the set of all points lying on a line passing through the origin; V = R².
Solution: In order to check for a subspace, first of all we show that the given set W is a non-empty subset of V (that is, we show at least one element exists in W), and then we check the two conditions of Theorem 4.1.
a. Here W = {(x, 3x, 2x) : x ∈ R} ⊂ R³, and 0 = (0, 0, 0) ∈ W, so W is non-empty. Let u = (x, 3x, 2x), v = (y, 3y, 2y) ∈ W and α ∈ R.
i. u + v = (x, 3x, 2x) + (y, 3y, 2y)
 = (x + y, 3x + 3y, 2x + 2y)
 = (x + y, 3(x + y), 2(x + y)) ∈ W   [∵ x, y ∈ R ⇒ x + y ∈ R]
ii. αu = α(x, 3x, 2x)
 = (αx, 3αx, 2αx)
 = (αx, 3(αx), 2(αx)) ∈ W   [∵ α, x ∈ R ⇒ αx ∈ R]
Hence W is a subspace of R³.
b. Here W = {(x, y, z) : x² + y² + z² ≤ 1}; V = R³.
Obviously W ⊂ V, and for 0 = (0, 0, 0) ∈ R³, 0² + 0² + 0² ≤ 1. Therefore 0 ∈ W, so W is non-empty.
Let u = (x₁, y₁, z₁), v = (x₂, y₂, z₂) ∈ W ⇒ x₁² + y₁² + z₁² ≤ 1, x₂² + y₂² + z₂² ≤ 1   [by definition of W]
i. u + v = (x₁ + x₂, y₁ + y₂, z₁ + z₂).
Now,
(x₁ + x₂)² + (y₁ + y₂)² + (z₁ + z₂)² = (x₁² + y₁² + z₁²) + (x₂² + y₂² + z₂²) + (2x₁x₂ + 2y₁y₂ + 2z₁z₂)
 ≤ 1 + 1 + (2x₁x₂ + 2y₁y₂ + 2z₁z₂)
 = 2 + (2x₁x₂ + 2y₁y₂ + 2z₁z₂)
≰ 1 (always)
For example, u = v = (1, 0, 0) ∈ W but u + v = (2, 0, 0) ∉ W since 2² > 1, so W is not closed under vector addition.
Hence the interior of the unit sphere is not a subspace of the whole space.
c. Here W is the set of all points on a line through the origin in R², say y = mx. Clearly (0, 0) ∈ W, so W is non-empty. Let u = (x₁, y₁), v = (x₂, y₂) ∈ W, so y₁ = mx₁ and y₂ = mx₂.
i. u + v = (x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂), and
y₁ + y₂ = mx₁ + mx₂ = m(x₁ + x₂) ⇒ u + v ∈ W
ii. αu = α(x₁, y₁) = (αx₁, αy₁), and
αy₁ = α(mx₁) = m(αx₁) ⇒ αu ∈ W
Hence W is a subspace of R².
e. V = R³ with usual vector addition and scalar multiplication defined by k(x, y, z) = (0, 0, kz).
2. Explain why the set of all 2-by-2 matrices with rational entries is not a real vector space.
[Hint: not closed under scalar multiplication.]
3. Check whether the following subsets W of the vector space V, under the usual operations, are subspaces or not?
a. W = {(x, 0, 0) : x ∈ R}; V = R³.
b. W = {(x, y, z) : x² + y² + z² = 1}; V = R³.
c. W = {[a 0; 0 b; c 0] : a, b, c ∈ R}; V = M₃₂.
d. W = {(x, x², 0) : x ∈ R}; V = R³.
e. W = the set of all 2-by-2 symmetric matrices; V = M₂₂.
f. W = {(x, y, z) ∈ R³ : x + y + z = 1}; V = R³.
g. W = {f ∈ V : f(0) = 0}; V = the set of all real valued functions.
4. Prove that the circular cylinder generated by the unit circle is not a subspace of the space, under usual VA and SM.
[Hint: W = {(x, y, z) ∈ R³ : x² + y² = 1, z ∈ R}]
5. Prove that if W₁ and W₂ are subspaces of the vector space V then W₁ ∩ W₂ is also a subspace of V, but W₁ ∪ W₂ may not be a subspace of V.
Answers
1. a. Yes; all others no. 3. a, c, e, g, i are subspaces; all others are not.
An expression of the form
c₁v₁ + c₂v₂ + c₃v₃ + … + cₙvₙ,  c₁, c₂, c₃, …, cₙ ∈ R
is called a linear combination of the vectors v₁, v₂, v₃, …, vₙ.
► The set of all linear combinations of the vectors of W = {v₁, v₂, v₃, …, vₙ} is called the span of W and is denoted by span W, that is,
span W = span{v₁, v₂, v₃, …, vₙ} = {c₁v₁ + c₂v₂ + c₃v₃ + … + cₙvₙ : cᵢ ∈ R}
► We say that a vector w of a vector space V is a linear combination of the vectors v₁, v₂, v₃, …, vₙ of V if there exist scalars c₁, c₂, c₃, …, cₙ ∈ R such that
w = c₁v₁ + c₂v₂ + c₃v₃ + … + cₙvₙ
► For the subset W = {v₁, v₂, v₃, …, vₙ} of a vector space V, span W is a subspace of V.
The vectors v₁, v₂, v₃, …, vₙ are linearly independent if
c₁v₁ + c₂v₂ + c₃v₃ + … + cₙvₙ = 0 ⇒ c₁ = c₂ = c₃ = … = cₙ = 0.
► In particular, two vectors u and v are linearly dependent if and only if u = kv for some k ∈ R.
► A finite set of vectors that contains the zero vector 0 is always linearly dependent.
e.g. {(3, −1, 2), (1, 2, −4), (0, 0, 0)} is a linearly dependent set because it contains the zero vector.
► A singleton set (a set containing only one vector) is linearly dependent if and only if it contains the zero vector 0.
4.8 Wronskian¹
The Wronskian of the functions u, v or u, v, w is defined as the determinant
W = | u   v  |        or        W = | u   v   w  |
    | u′  v′ |                      | u′  v′  w′ |
                                    | u″  v″  w″ |
To solve the non-homogeneous system (4.1), consider the augmented matrix [A : B], obtained by putting the vectors in columns:
[A : B] =
[ 1  6 |  9 ]
[ 2  4 |  2 ]   → R₂ − 2R₁; R₃ + R₁
[−1  2 |  7 ]
∼
[ 1  6 |  9 ]
[ 0 −8 |−16 ]   → R₃ + R₂
[ 0  8 | 16 ]
∼
[ 1  6 |  9 ]
[ 0 −8 |−16 ]
[ 0  0 |  0 ]
∴ ρ(A) = ρ(A : B) = 2 (number of unknowns)
Solution: To show the span, we have to show that every vector u = (x, y, z) ∈ R³ can be expressed as a linear combination of the given vectors. That is, there should exist c₁, c₂, c₃ such that
c₁(1, 0, 1) + c₂(−1, 2, 3) + c₃(0, 1, −1) = (x, y, z)   (4.2)
Putting the coefficient vectors in columns, the augmented matrix is
[A : B] =
[ 1 −1  0 | x ]
[ 0  2  1 | y ]   → R₃ − R₁
[ 1  3 −1 | z ]
∼
[ 1 −1  0 | x ]
[ 0  2  1 | y ]   → R₃ − 2R₂
[ 0  4 −1 | z − x ]
∼
[ 1 −1  0 | x ]
[ 0  2  1 | y ]
[ 0  0 −3 | z − x − 2y ]
∴ ρ(A) = ρ(A : B) = 3 (number of unknowns), so the system is consistent for every (x, y, z), and the given vectors span R³.
Note: To find the linear combination for a given vector of R³, solve the above system by back substitution; we get
c₁ = (5x + y + z)/6, c₂ = (−x + y + z)/6, c₃ = (x + 2y − z)/3
Hence from (4.2),
((5x + y + z)/6) (1, 0, 1) + ((−x + y + z)/6) (−1, 2, 3) + ((x + 2y − z)/3) (0, 1, −1) = (x, y, z)
e.g. If u = (1, 1, 1) ⇒ c₁ = 7/6, c₂ = 1/6, c₃ = 2/3
∴ (1, 1, 1) = (7/6)(1, 0, 1) + (1/6)(−1, 2, 3) + (2/3)(0, 1, −1)
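The coefficients for any particular u can be recomputed by solving the 3 × 3 system directly; a sketch using Gauss-Jordan elimination over exact rationals (helper names are ours):

```python
from fractions import Fraction

def solve3(cols, rhs):
    """Solve c1*v1 + c2*v2 + c3*v3 = rhs, where `cols` holds the three vectors."""
    A = [[Fraction(cols[j][i]) for j in range(3)] + [Fraction(rhs[i])]
         for i in range(3)]
    for c in range(3):
        piv = next(i for i in range(c, 3) if A[i][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [a / A[c][c] for a in A[c]]       # normalize the pivot row
        for i in range(3):
            if i != c:                           # clear the column elsewhere
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[c])]
    return [A[i][3] for i in range(3)]

v1, v2, v3 = [1, 0, 1], [-1, 2, 3], [0, 1, -1]
c = solve3([v1, v2, v3], [1, 1, 1])
# The combination must reproduce u = (1, 1, 1):
assert all(c[0] * a + c[1] * b + c[2] * d == e
           for a, b, d, e in zip(v1, v2, v3, [1, 1, 1]))
```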
Illustration 4.6 Examine the following vectors for LI or LD:
a. 1 − t + t³, −2 + 3t + t² + 2t³, 1 + t² + 5t³ in P₃.
b. [1 −1; 1 1], [−2 3; 1 2], [1 0; 1 0] in M₂₂.
c. cos²x, sin²x in R.
Solution:
a. Consider the linear combination of the given vectors of P₃:
c₁(1 − t + t³) + c₂(−2 + 3t + t² + 2t³) + c₃(1 + t² + 5t³) = 0 = 0 + 0·t + 0·t² + 0·t³   (4.3)
The augmented matrix for the corresponding homogeneous system of (4.3) is (obtained by putting the coefficients of ascending powers of each polynomial in columns)
          c₁  c₂  c₃
[A : Z] =
[ 1  −2   1 | 0 ]   (1)
[−1   3   0 | 0 ]   (t)
[ 0   1   1 | 0 ]   (t²)
[ 1   2   5 | 0 ]   (t³)
Reducing to row echelon form gives ρ(A) = ρ(A : Z) = 2 < 3 (number of unknowns).
∴ System (4.3) has a non-trivial one-parameter solution, hence c₁, c₂, c₃ cannot all be zero, and the given vectors are linearly dependent.
Note: To find the dependence relation, solving the above system we get c₂ = −c₃, c₁ = −3c₃.
∴ From (4.3), −3(1 − t + t³) − (−2 + 3t + t² + 2t³) + (1 + t² + 5t³) = 0
b. Consider the linear combination
c₁ [1 −1; 1 1] + c₂ [−2 3; 1 2] + c₃ [1 0; 1 0] = 0 = [0 0; 0 0]   (4.4)
The augmented matrix for the corresponding homogeneous system of (4.4) is (obtained by putting the entries of each matrix, read row-wise, in columns)
[A : Z] =
[ 1 −2  1 | 0 ]
[−1  3  0 | 0 ]   → R₂ + R₁; R₃ − R₁; R₄ − R₁
[ 1  1  1 | 0 ]
[ 1  2  0 | 0 ]
∼
[ 1 −2  1 | 0 ]
[ 0  1  1 | 0 ]   → R₃ − 3R₂; R₄ − 4R₂
[ 0  3  0 | 0 ]
[ 0  4 −1 | 0 ]
∼
[ 1 −2  1 | 0 ]
[ 0  1  1 | 0 ]   → R₄ − (5/3)R₃
[ 0  0 −3 | 0 ]
[ 0  0 −5 | 0 ]
∼
[ 1 −2  1 | 0 ]
[ 0  1  1 | 0 ]
[ 0  0 −3 | 0 ]
[ 0  0  0 | 0 ]
∴ ρ(A) = ρ(A : Z) = 3 (number of unknowns)
∴ System (4.4) has the unique trivial (zero) solution, that is c₁ = c₂ = c₃ = 0. Hence the given vectors are linearly independent.
c. The given functions sin²x and cos²x are linearly independent because we cannot write one function as a constant multiple of the other. That is, sin²x ≠ k · cos²x, k ∈ R.
Alternate method:
W = | u   v  | = | sin²x    cos²x  | = −sin²x sin 2x − cos²x sin 2x = −sin 2x ≠ 0
    | u′  v′ |   | sin 2x  −sin 2x |
Since the Wronskian is not identically zero, the functions are linearly independent.
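The Wronskian claim can be spot-checked numerically at a sample point (a check, not a proof; the helper name `wronskian` is ours):

```python
import math

# Wronskian of u = sin^2 x, v = cos^2 x, using the exact derivatives
# u' = sin 2x and v' = -sin 2x.
def wronskian(x):
    u, v = math.sin(x) ** 2, math.cos(x) ** 2
    du, dv = math.sin(2 * x), -math.sin(2 * x)
    return u * dv - v * du  # equals -sin 2x

x0 = 0.7
assert abs(wronskian(x0) - (-math.sin(2 * x0))) < 1e-12
assert wronskian(x0) != 0  # nonzero somewhere -> linearly independent
```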
Illustration 4.7 Find the condition on the parameter a such that the set {(1, −1, 1), (1, 0, a), (−1, −a, 0)} is linearly independent.
Solution: Three vectors of R³ are linearly independent if the determinant of the vectors is not zero. That is,
| 1  1 −1 |
|−1  0 −a | = a² ≠ 0 ⇒ a ≠ 0
| 1  a  0 |
Exercise 4.2
3. Is (4, 20) a linear combination of the vectors (2, 10) and (−3, −15)?
4. Show that in R⁴ the vector (1, 4, −2, 6) is a linear combination of the vectors (1, 2, 0, 4) and (1, 1, 1, 3), whereas (2, 6, 0, 9) is not a linear combination of the given vectors.
5. Show that the matrices [1 0; 0 0], [1 1; 0 0], [1 1; 1 0], [1 1; 1 1] span M₂₂.
9. Let {v₁, v₂, …, vₙ} be a LI subset of a vector space V over R. If {x₁, x₂, …, xₙ} and {y₁, y₂, …, yₙ} are two subsets of R such that Σᵢ₌₁ⁿ xᵢvᵢ = Σᵢ₌₁ⁿ yᵢvᵢ, then prove that xᵢ = yᵢ ∀ 1 ≤ i ≤ n.
Answers
1. R²  2. −308v₁ + 69v₂ + 179v₃  3. Yes, (4, 20) = 5(2, 10) + 2(−3, −15)  6. LD
4.9 Basis
A subset W = {v₁, v₂, v₃, …, vₙ} of a vector space V is said to be a basis of V if,
i. W is linearly independent, and
ii. span W = V.
► If W is a basis for V then we say that V is generated by W, and W is called a generator of V.
e.g.
1. The standard basis of R² is {(1, 0), (0, 1)}, of R³ is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, and so on.
2. The standard basis of M₂ₓ₂ is {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.
Note: A vector space can have more than one basis, but the number of vectors in every basis is the same (the vectors may be different).
4.11 Dimension
The number of vectors in any basis of the vector space V is said to be the dimension of V and is denoted by dim V.
► If dim V is finite then V is called a finite dimensional vector space.
e.g. dim(Rⁿ) = n, dim(Pₙ) = n + 1, dim(M₂₂) = 4.
* Important:
If dim V = n, then
2. A subset of fewer or more than n vectors cannot be a basis of V.
5. A subset of exactly n vectors which is either LI or spans V is always a basis of V.
Illustration 4.8 Show whether the following sets form a basis for the given vector space or not. Justify the answers.
a. {(1, 2), (3, −1)} for R².
b. {(1, 1, 0), (−1, 0, 0)} for R³.
c. {(1, −1, 1), (−1, 2, −2), (−1, 4, −4)} for R³.
d. {(1, 0, 1), (1, 1, 0), (0, 1, 1), (2, 1, 1)} for R³.
e. {3 + x³, 2 − x − x², x + x² − x³, x + 2x²} for P₃(x).
Solution:
a. We know that dim R² = 2 and the given set contains exactly two vectors, so it is sufficient to check whether the set is linearly independent or not.
For two vectors of R², the determinant of the vectors is
| 1  3 |
| 2 −1 | = −7 ≠ 0.
∴ The given subset is a linearly independent subset of R², hence it is a basis.
b. The given subset has two vectors and dim R³ = 3, hence it cannot span R³. So it is not a basis.
c. The given subset has three vectors and dim R³ = 3, hence for a basis it is sufficient to check the linear independence of the given subset.
For three vectors of R³, the determinant of the vectors is
| 1 −1 −1 |
|−1  2  4 | = 0.
| 1 −2 −4 |
∴ The set is linearly dependent, so it is not a basis.
d. The given subset has four vectors and dim R³ = 3, hence it is a linearly dependent subset. So it is not a basis.
e. The given subset has four vectors (polynomials) and dim P₃ = 4, so it is sufficient to check the linear independence of the given polynomials. Consider the linear combination
c₁(3 + x³) + c₂(2 − x − x²) + c₃(x + x² − x³) + c₄(x + 2x²) = 0
Comparing the coefficients of 1, x, x², x³ gives the homogeneous system
3c₁ + 2c₂ = 0, −c₂ + c₃ + c₄ = 0, −c₂ + c₃ + 2c₄ = 0, c₁ − c₃ = 0.
Subtracting the second equation from the third gives c₄ = 0; then c₃ = c₂ and c₁ = c₃ = c₂, so 3c₁ + 2c₂ = 5c₂ = 0 ⇒ c₂ = 0.
∴ c₁ = c₂ = c₃ = c₄ = 0 is the only solution, so the given polynomials are linearly independent, and since the set contains dim P₃ = 4 vectors, it is a basis of P₃.
Illustration 4.9 Reduce the following set {(1, 0, 0), (0, 1, −1), (0, 4, −3), (0, 2, 0)} to obtain a basis for the vector space R³.
Solution: We know that dim(R³) = 3 and the given set has four vectors, so it is linearly dependent and not a basis. To reduce it to a basis of R³, we have to remove from the set a vector which depends on the others. It can be done as follows:
► Put the vectors in the columns of a matrix and reduce it to row echelon form. The vectors corresponding to the non-pivot columns are the linearly dependent vectors; removing them from the original set gives the required basis.
Let v₁ = (1, 0, 0), v₂ = (0, 1, −1), v₃ = (0, 4, −3), v₄ = (0, 2, 0). Matrix of vectors,
     v₁ v₂ v₃ v₄
A = [ 1  0  0  0 ]
    [ 0  1  4  2 ]   → R₃ + R₂
    [ 0 −1 −3  0 ]
∼   [ 1  0  0  0 ]
    [ 0  1  4  2 ]
    [ 0  0  1  2 ]
Since the fourth column in the echelon form is non-pivot, removing the corresponding v₄ = (0, 2, 0) from the given set we get the required basis: {(1, 0, 0), (0, 1, −1), (0, 4, −3)}.
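The non-pivot-column procedure is easy to mechanize; a minimal sketch (helper names are ours):

```python
from fractions import Fraction

def pivot_columns(rows):
    """Indices of pivot columns of the row echelon form (forward elimination)."""
    m = [[Fraction(v) for v in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

vs = [(1, 0, 0), (0, 1, -1), (0, 4, -3), (0, 2, 0)]
A = [[v[i] for v in vs] for i in range(3)]   # vectors as columns
keep = pivot_columns(A)                      # [0, 1, 2] -> v4 is dropped
basis = [vs[j] for j in keep]
```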
Illustration 4.10 Find a basis for the solution space of the equation AX = 0, where
A = [−1  0  1  2 ]
    [−1  1  0 −1 ]
    [ 0 −1  1  3 ]
    [ 1 −2  1  4 ]
Solution: To find a basis for the solution space of the equation AX = 0, first of all we obtain the solution of the given homogeneous system. The augmented matrix for the given equation is
          x₁ x₂ x₃ x₄
[A : Z] =
[−1  0  1  2 | 0 ]
[−1  1  0 −1 | 0 ]   → R₂ − R₁; R₄ + R₁
[ 0 −1  1  3 | 0 ]
[ 1 −2  1  4 | 0 ]
∼
[−1  0  1  2 | 0 ]
[ 0  1 −1 −3 | 0 ]   → R₃ + R₂; R₄ + 2R₂
[ 0 −1  1  3 | 0 ]
[ 0 −2  2  6 | 0 ]
∼
[−1  0  1  2 | 0 ]
[ 0  1 −1 −3 | 0 ]
[ 0  0  0  0 | 0 ]
[ 0  0  0  0 | 0 ]
∴ ρ(A) = ρ(A : Z) = 2 < 4
∴ The system has a two-parameter non-trivial solution, obtained by assigning parameters s, t ∈ R to the free variables x₃ and x₄:
x₁ = s + 2t, x₂ = s + 3t, x₃ = s, x₄ = t,  s, t ∈ R
∴ The solution space is spanned by the set {(1, 1, 1, 0), (2, 3, 0, 1)}, and this set is also linearly independent because neither vector is a scalar multiple of the other.
∴ {(1, 1, 1, 0), (2, 3, 0, 1)} is the required basis for the solution space.
Note that the dimension of the solution space is 2, and it is the same as the number of non-pivot columns of the row echelon form of A.
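Whatever basis one derives, membership in the solution space is easy to verify: each basis vector must satisfy AX = 0. A minimal sketch (the helper `matvec` is ours):

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[-1, 0, 1, 2],
     [-1, 1, 0, -1],
     [0, -1, 1, 3],
     [1, -2, 1, 4]]

# Each spanning vector of the solution space is sent to zero by A:
for v in [(1, 1, 1, 0), (2, 3, 0, 1)]:
    assert matvec(A, v) == [0, 0, 0, 0]
```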
W = {(x, y, z) : x + 2y + 3z = 0, x, y, z ∈ R}
∴ W = {(−2y − 3z, y, z) : y, z ∈ R}   [∵ x + 2y + 3z = 0 ⇒ x = −2y − 3z]
 = {(−2y, y, 0) + (−3z, 0, z) : y, z ∈ R}   [separating y and z]
 = {y(−2, 1, 0) + z(−3, 0, 1) : y, z ∈ R}
∴ W = span{(−2, 1, 0), (−3, 0, 1)}.
Exercise 4.3
1. Check whether the following sets form a basis for the given vector space:
a. {1 − 3x + 2x², 1 + x + 4x², 1 − 7x} for P₂
b. {[1 2; 1 −2], [0 −1; −1 0], [0 2; 3 1], [0 0; −1 2]} for M₂₂ [Winter-2015]
2. Let V be the space spanned by v₁ = cos 2x, v₂ = sin²x, v₃ = cos²x; show that S = {v₁, v₂, v₃} is not a basis for V.
3. For what real values of λ do the following vectors form a basis for R³?
v₁ = (λ, −1/2, −1/2), v₂ = (−1/2, λ, −1/2), v₃ = (−1/2, −1/2, λ)
[Hint: the determinant of the vectors must be non-zero]
4. Reduce the set {(0, 1, 1), (1, 1, −1), (3, 1, −3), (1, 2, 0)} to a basis for R³.
7. Reduce the following set to obtain a basis for the vector space P₂:
p₀ = 2, p₁ = −4x, p₂ = x² + x + 1, p₃ = 2x + 7, p₄ = 5x² − 1.
8. In each part, determine whether the three vectors lie in a plane or on the same line:
10. Determine a basis for, and the dimension of, the solution space of the following homogeneous systems:
a. 2x₁ + 2x₂ − x₃ + x₅ = 0
   −x₁ − x₂ + 2x₃ − 3x₄ + x₅ = 0
   x₁ + x₂ − 2x₃ − x₅ = 0
   x₃ + x₄ + x₅ = 0   [Winter-2017]
b. x + y + z = 0
   3x + 2y − 2z = 0
   4x + 3y + z = 0
   6x + 5y + z = 0
11. Find a basis for, and the dimension of, the following subspaces:
a. The line x = 2t, y = −t, z = 4t.
b. All the vectors of the form (a, b, c) for which b = a + c.
c. The subspace {(x, y, z, w) ∈ R⁴ : x + y − z = 0, y + z = 0}.
12. Do v₁, v₂ form a basis for H? where v₁ = (1, 0, 0)ᵀ, v₂ = (0, 1, 0)ᵀ, H = {(x, x, 0)ᵀ : x ∈ R}.
13. Under what condition is a set with one vector linearly independent?
14. Show that every set with more than three vectors from P₂ is linearly dependent.
15. Prove that the space spanned by two vectors in R³ is a line through the origin, a plane through the origin, or the origin itself.
16. Use appropriate identities, where required, to check which of the following sets of vectors in F(−∞, ∞) are linearly dependent.
17. Given two linearly independent vectors (1, 0, 1, 0) and (0, −1, 1, 0) of R⁴, find a basis for R⁴ that includes these two vectors.
18. Determine whether the vectors v₂ = (1, 2, −1), v₃ = (−3, 1, 0), v₄ = (2, 11, −5) form a basis for R³ or not. If not, construct a basis of R³ consisting of vectors chosen from the given ones.
Answers
1. a. No  b. Yes  3. λ ∈ R − {−1/2, 1}  4. {(0, 1, 1), (1, 1, −1), (3, 1, −3)}
5. {(1, 1, 1, 1), (1, 2, 1, 2), (1, 0, 0, 0), (0, 1, 0, 0)}  6. {1, x, x², x³, x⁴}  7. {p₀, p₁, p₂}
8. a. in a plane  b. on a line  10. a. {(−1, 1, 0, 0, 0), (−1, 0, −1, 0, 1)}, dim = 2  b. null space {0}, dim = 0
16. a., d. LD; b. LI  17. {(1, 0, 1, 0), (0, −1, 1, 0), (1, 0, 0, 0), (0, 0, 0, 1)}
(u)_S = (c₁, c₂, c₃, …, cₙ)
* Important:
2. Here P is always invertible and P⁻¹ is the transition matrix from S to T, that is, P⁻¹ = P_{S→T}. Hence
(P_{T→S})⁻¹ = P_{S→T}
Illustration 4.12 If u = (10, 5, 0) ∈ R³, find the coordinate vector for u relative to,
a. The standard basis S.
b. The basis T = {(1, −1, 1), (0, 1, 2), (3, 0, −1)}.
Solution:
a. The standard basis for R³ is S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} = {e₁, e₂, e₃} (say). To find the coordinate vector relative to the standard basis S, we represent u = (10, 5, 0) as a linear combination:
u = (10, 5, 0)
 = (10, 0, 0) + (0, 5, 0) + (0, 0, 0)
 = 10(1, 0, 0) + 5(0, 1, 0) + 0(0, 0, 1)
u = 10·e₁ + 5·e₂ + 0·e₃
∴ (u)_S = (10, 5, 0) is the required coordinate vector relative to the standard basis.
► It is worth noting that for any vector of R³ (in fact of Rⁿ), the given vector itself represents its coordinate vector relative to the standard basis.
b. For the coordinate vector relative to the basis T = {(1, −1, 1), (0, 1, 2), (3, 0, −1)} = {v₁, v₂, v₃} (say), we represent u as a linear combination of the vectors of T. Consider
c₁v₁ + c₂v₂ + c₃v₃ = u   (4.5)
To solve the non-homogeneous system (4.5), consider the augmented matrix (put the coefficient vectors in columns):
          c₁  c₂  c₃
[A : B] =
[ 1  0  3 | 10 ]
[−1  1  0 |  5 ]   → R₂ + R₁; R₃ − R₁
[ 1  2 −1 |  0 ]
∼
[ 1  0  3 | 10 ]
[ 0  1  3 | 15 ]   → R₃ − 2R₂
[ 0  2 −4 |−10 ]
∼
[ 1  0  3  | 10 ]
[ 0  1  3  | 15 ]
[ 0  0 −10 |−40 ]
Back substitution ⇒ c₁ = −2, c₂ = 3, c₃ = 4
∴ The required coordinate vector relative to the basis T is (u)_T = (c₁, c₂, c₃) = (−2, 3, 4).
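The coordinate vector can be verified by reconstructing u from the basis T; a short sketch in plain Python (variable names are ours):

```python
# Verify (u)_T = (-2, 3, 4) by forming c1*v1 + c2*v2 + c3*v3:
T = [(1, -1, 1), (0, 1, 2), (3, 0, -1)]
c = (-2, 3, 4)
u = [sum(ci * vi[k] for ci, vi in zip(c, T)) for k in range(3)]
assert u == [10, 5, 0]   # the combination reproduces u
```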
Illustration 4.13 Determine the coordinate vector of p = 4 − 2x + 3x² relative to the basis B = {2, −4x, 5x² − 1} for P₂.
Solution: Let p₁ = 2, p₂ = −4x, p₃ = 5x² − 1 and consider
c₁p₁ + c₂p₂ + c₃p₃ = p ⇒ c₁(2) + c₂(−4x) + c₃(5x² − 1) = 4 − 2x + 3x²   (4.6)
To solve system (4.6), consider the augmented matrix (putting the coefficients of the polynomials in columns):
          c₁  c₂  c₃
[A : B] =
[ 2  0 −1 |  4 ]
[ 0 −4  0 | −2 ]
[ 0  0  5 |  3 ]
Observe that the above matrix is already in echelon form, so making back substitution we get c₁ = 23/10, c₂ = 1/2, c₃ = 3/5. Hence the required coordinate vector is (p)_B = (23/10, 1/2, 3/5).
Illustration 4.14 Consider the standard basis for R³, i.e. S = {e₁, e₂, e₃}, and another basis T = {v₁, v₂, v₃}, where v₁ = (1, −1, 1), v₂ = (0, 1, 2), v₃ = (3, 0, −1), so that
(v₁)_S = (1, −1, 1)ᵀ, (v₂)_S = (0, 1, 2)ᵀ, (v₃)_S = (3, 0, −1)ᵀ
► To find these coordinate vectors, first we obtain the coordinates of an arbitrary vector u = (a, b, c) ∈ R³ relative to the basis T. Let c₁v₁ + c₂v₂ + c₃v₃ = u.
          c₁  c₂  c₃
[A : B] =
[ 1  0  3 | a ]
[−1  1  0 | b ]   → R₂ + R₁; R₃ − R₁
[ 1  2 −1 | c ]
∼
[ 1  0  3 | a ]
[ 0  1  3 | b + a ]   → R₃ − 2R₂
[ 0  2 −4 | c − a ]
∼
[ 1  0  3  | a ]
[ 0  1  3  | b + a ]
[ 0  0 −10 | c − 3a − 2b ]
Back substitution gives
c₁ = (a − 6b + 3c)/10, c₂ = (a + 4b + 3c)/10, c₃ = (3a + 2b − c)/10
∴ (u)_T = [(a, b, c)]_T = (c₁, c₂, c₃)ᵀ = (1/10)(a − 6b + 3c, a + 4b + 3c, 3a + 2b − c)ᵀ
⇒ (e₁)_T = [(1, 0, 0)]_T = (1/10)(1, 1, 3)ᵀ, (e₂)_T = [(0, 1, 0)]_T = (1/10)(−6, 4, 2)ᵀ, (e₃)_T = [(0, 0, 1)]_T = (1/10)(3, 3, −1)ᵀ
Hence the transition matrix from T to S is P (columns (vᵢ)_S), and from S to T is Q = P⁻¹ (columns (eᵢ)_T):
P = [ 1  0  3 ]        Q = (1/10) [ 1 −6  3 ]
    [−1  1  0 ]                   [ 1  4  3 ]
    [ 1  2 −1 ]                   [ 3  2 −1 ]
b. If (u)_T = (9, −1, −8), then
(u)_S = P (u)_T = [ 1  0  3 ] [ 9 ]   [−15 ]
                  [−1  1  0 ] [−1 ] = [−10 ]   ∴ (u)_S = (−15, −10, 15)
                  [ 1  2 −1 ] [−8 ]   [ 15 ]
c. Similarly, to find u T using u S = (−6, 7, 2). That is to convert S coordinate into T coordinate. Hence
we use the translation matrix S to T , that is Q, given by the relation
1 −6 3
−6 −42 µ ¶
£ ¤ £ ¤ 1 1 ¡ ¢ 21 14 3
u T =Q u S = 1 4 3 7 = 28 ∴ u S = − , ,−
10 10 5 5 5
3 2 −1 2 −6
Exercise 4.4
1. Find the coordinates of the following vectors relative to the basis S, given that,
a. v = (2, −1, 3), S = {v₁, v₂, v₃}, v₁ = (1, 0, 0), v₂ = (2, 2, 0), v₃ = (3, 3, 3)
b. p = 4 − 3x + x², S = {p₁, p₂, p₃}, p₁ = 1, p₂ = x, p₃ = x²
c. A = [1 0; −1 0], S = {A₁, A₂, A₃, A₄}, A₁ = [−1 1; 0 0], A₂ = [1 1; 0 0], A₃ = [0 0; 1 0], A₄ = [0 0; 0 1]
[Hint: solve c₁A₁ + c₂A₂ + c₃A₃ + c₄A₄ = A]
3. Consider the standard basis B for R³ and another basis C = {(1, 2, 1), (1, −1, 1), (1, 0, −1)}.
4. Consider the standard basis for P₂, i.e. B = {1, x, x²}, and another basis C = {2, −4x, 5x² − 1}.
Answers
1. a. (3, −2, 1)  b. (4, −3, 1)  c. (−1/2, 1/2, −1, 0)  2. (4, −2, 3)
3. a. P = [ 1  1  1 ]   b. Q = [ 1/6  1/3  1/6 ]
          [ 2 −1  0 ]          [ 1/3 −1/3  1/3 ]
          [ 1  1 −1 ]          [ 1/2   0  −1/2 ]
4. a. P = [ 2  0 −1 ]   b. Q = [ 1/2   0  1/10 ]
          [ 0 −4  0 ]          [  0  −1/4   0  ]
          [ 0  0  5 ]          [  0    0   1/5 ]
4.14 Fundamental Spaces: Row Space, Column Space, Null Space
Consider an m × n matrix A.
► The rows of the matrix are referred to as row vectors of Rⁿ and are denoted by rᵢ, 1 ≤ i ≤ m.
► The columns of the matrix are referred to as column vectors of Rᵐ and are denoted by cⱼ, 1 ≤ j ≤ n.
► Row space: The row space of the matrix A is defined as the span of the row vectors of A and is denoted by row(A). Hence,
row(A) = span{r₁, r₂, …, r_m}
► Column space: The column space of the matrix A is defined as the span of the column vectors of A and is denoted by col(A). Hence,
col(A) = span{c₁, c₂, …, cₙ}
► Null space: The solution space of the homogeneous system AX = 0 is called the null space of A and is denoted by nul(A). Hence,
nul(A) = {X ∈ Rⁿ : AX = 0}
► The row space and null space are subspaces of Rⁿ; the column space is a subspace of Rᵐ.
Theorem 4.2 Let B be a row echelon form of the matrix A. Then:
i. The set of pivot (non-zero) rows of the matrix B forms a basis for the row space of A.
ii. The set of columns of A corresponding to the pivot columns of B forms a basis for the column space of A.
► The rank of A is given by the number of pivots in the row echelon form, and is denoted by ρ(A).
► The dimension of the null space of A is called the nullity of A and is denoted by µ(A).
Alternatively, the nullity is the number of non-pivot columns in the echelon form of the matrix A, or the number of free variables in the solution of AX = 0.
4.16 Rank-Nullity Theorem
For an m × n matrix A,
ρ(A) + µ(A) = n (the number of columns of A).
Note: The Rank-Nullity theorem is also known as the dimension theorem, in the context ρ(A) = dim[row(A)] and µ(A) = dim[nul(A)]. Hence
dim[row(A)] + dim[nul(A)] = n.
Illustration 4.15 Find row(A), col(A), nul(A), row(Aᵀ), col(Aᵀ), nul(Aᵀ), given
A = [ 1 −2 1  1  2 ]
    [−1  3 0  2 −2 ]
    [ 0  1 1  3  4 ]
    [ 1  2 3 13  5 ]
Solution: Since
A = [ 1 −2 1  1  2 ]         Aᵀ = [ 1 −1 0  1 ]
    [−1  3 0  2 −2 ]   ⇔          [−2  3 1  2 ]
    [ 0  1 1  3  4 ]              [ 1  0 1  3 ]
    [ 1  2 3 13  5 ]              [ 1  2 3 13 ]
                                  [ 2 −2 4  5 ]
n o
3. nul (A) = X ∈ R5 : AX = 0 .
Consider the augmented matrix for homogeneous system AX = 0:
x1 x2 x3 x4 x5
1 −2 1 1 2 0
−1 3 0 2 −2 0
[A : Z ] =
0 1 1 3 4 0
1 2 3 13 5 0
Reducing to row echelon form, we get

            x1  x2  x3  x4  x5
[A : Z] ~ [  1  −2   1   1   2 | 0 ]
          [  0   1   1   3   0 | 0 ]
          [  0   0  −2   0   3 | 0 ]
          [  0   0   0   0   4 | 0 ]

∴ ρ(A) = ρ(A : Z) = 4 < 5 (the number of unknowns)
∴ The system has a non-trivial one-parametric solution, obtained by assigning the parameter t to the free variable x4. Back substitution gives

x1 = −7t, x2 = −3t, x3 = 0, x4 = t, x5 = 0, t ∈ R

∴ X = (x1, x2, x3, x4, x5) = (−7t, −3t, 0, t, 0), t ∈ R
∴ nul(A) = {(−7t, −3t, 0, t, 0) : t ∈ R} = span{(−7, −3, 0, 1, 0)}
4. nul(Aᵀ) = { X ∈ R4 : AᵀX = 0 }.
Consider the augmented matrix of the homogeneous system AᵀX = 0:

             x1  x2  x3  x4
[Aᵀ : Z] = [  1  −1   0   1 | 0 ]
           [ −2   3   1   2 | 0 ]
           [  1   0   1   3 | 0 ]
           [  1   2   3  13 | 0 ]
           [  2  −2   4   5 | 0 ]

Reducing to row echelon form, we get

             x1  x2  x3  x4
[Aᵀ : Z] ~ [  1  −1   0   1 | 0 ]
           [  0   1   1   4 | 0 ]
           [  0   0   4   3 | 0 ]
           [  0   0   0  −2 | 0 ]
           [  0   0   0   0 | 0 ]

∴ ρ(Aᵀ) = ρ(Aᵀ : Z) = 4 (= the number of unknowns), so the system has only the trivial solution and nul(Aᵀ) = {0}.
Illustration 4.16 Find a basis for the row space, column space and null space of the following matrix: [Summer-2016]

A = [  1 −3  4 −2  5  4 ]
    [  1 −6  9 −1  8  2 ]
    [  2 −6  9 −1  9  7 ]
    [ −1  3 −4  2 −5 −4 ]

Solution: Reducing A to row echelon form:

A ~ [ 1 −3 4 −2  5  4 ]
    [ 0 −3 5  1  3 −2 ]  = B    (4.10)
    [ 0  0 1  3 −1 −1 ]
    [ 0  0 0  0  0  0 ]

By Theorem (4.2),
1. A basis for the row space of A is given by the set of pivot rows of B. Hence a basis for the row space of A is
{(1, −3, 4, −2, 5, 4), (0, −3, 5, 1, 3, −2), (0, 0, 1, 3, −1, −1)}
2. A basis for the column space of A is given by the columns of A corresponding to the pivot columns of B. Hence a basis for the column space of A is
{(1, 1, 2, −1), (−3, −6, −6, 3), (4, 9, 9, −4)}
* Important: From (4.10), ρ(A) = 3 (the number of pivots) and µ(A) = 6 − 3 = 3 (the number of non-pivot columns), so ρ(A) + µ(A) = 6.
Illustration 4.17 Find a basis for the vector space span{(1, −1, 2), (0, 5, −8), (3, 2, −2), (8, 2, 0)}.
Solution: Let W = span{(1, −1, 2), (0, 5, −8), (3, 2, −2), (8, 2, 0)}.
If we form a matrix A by placing the vectors in columns, then W becomes the column space of A. (Alternatively, we can place the vectors in rows, and W becomes the row space of A.)

A = [  1  0  3 8 ]
    [ −1  5  2 2 ]   ⇒ W = col(A)
    [  1 −8 −2 0 ]

Reducing to row echelon form:

A ~ [ 1 0 3  8 ]
    [ 0 5 5 10 ]  = B
    [ 0 0 3  8 ]
[Hint: Form the matrix A by taking the coefficients in columns. The required basis is a basis for col(A).]

6. Find the rank and nullity of the matrix A = [ 2 0 −1 ; 4 0 −2 ; 0 0 0 ] and verify the dimension theorem. [Summer-2015]
Answers
1. a. Row space basis: {(2, −4, 1, 2, −2, −3), (0, 16, −7, −6, 8, 19), (0, 0, 1/2, 1, 0, −5/2)},
   Column space basis: {(2, −1, 10), (−4, 2, −4), (1, 0, −2)},
   Null space basis: {(−1, −1/2, −2, 1, 0, 0), (0, −1/2, 0, 0, 1, 0), (1, 1, 5, 0, 0, 1)}, Rank 3, Nullity 3.
   b. Row space basis: {(1, 3, 2, 0, 1), (0, 2, 1, 1, 1), (0, 0, 0, 2, 1)},
   Column space basis: {(1, −1, 1, 0), (1, −1, 4, 3), (0, 1, 4, −2)},
   Null space basis: {(−1/2, −1/2, 1, 0), (−1/4, −1/4, −1/2, 1)}, Rank 3, Nullity 2.
2. { [ −1 1 ; −2 1 ], [ 2 −1 ; 3 1 ], [ −5 4 ; −9 −1 ] }
3. { 1 + x + x² + x³, 1 + x², 1 + 2x + 2x² + x³, 1 + x² + 2x³ }
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779
Let V and W be real vector spaces. A mapping (function, transformation) T : V → W is said to be a linear transformation if it satisfies the following two conditions:

i. ∀ u, v ∈ V : T(u + v) = T(u) + T(v)
ii. ∀ u ∈ V, α ∈ R : T(αu) = αT(u)

* Important:
1. T : V → W preserves the two basic operations of a vector space, namely vector addition and scalar multiplication.
2. For α = 0, T(0_V) = 0_W. Hence a linear transformation maps the zero vector of V to the zero vector of W.
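The two conditions can be probed numerically. The sketch below is an illustrative check of my own (the function name is an assumption, and passing random trials only suggests linearity, while one failed trial proves non-linearity):

```python
import numpy as np

def looks_linear(T, n, trials=50):
    """Test T : R^n -> R^m on random vectors for T(u+v) = T(u)+T(v)
    and T(a u) = a T(u)."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        a = rng.standard_normal()
        if not np.allclose(T(u + v), T(u) + T(v)):
            return False
        if not np.allclose(T(a * u), a * T(u)):
            return False
    return True

T1 = lambda p: np.array([p[0] + 2*p[1], 3*p[0] - p[1]])  # linear map
T2 = lambda p: np.array([abs(p[0]), p[1] + p[2]])        # |x| breaks additivity

print(looks_linear(T1, 2), looks_linear(T2, 3))          # True False
```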
Illustration 5.1 Check whether the following mappings are linear transformations or not.
a. T : R2 → R2, T(x, y) = (x + 2y, 3x − y)   [Summer-2016]
b. T : R3 → R2, T(x, y, z) = (|x|, y + z)
c. T : M22 → R2, T([ a b ; c d ]) = (| a b ; c d |, 0), where |·| denotes the determinant.
Solution:
a. Let u = (x1, y1), v = (x2, y2) ∈ R2, α ∈ R.
∴ T(u) = T(x1, y1) = (x1 + 2y1, 3x1 − y1),  T(v) = T(x2, y2) = (x2 + 2y2, 3x2 − y2)

i. T(u + v) = T(x1 + x2, y1 + y2)
           = ((x1 + x2) + 2(y1 + y2), 3(x1 + x2) − (y1 + y2))
           = (x1 + x2 + 2y1 + 2y2, 3x1 + 3x2 − y1 − y2)
           = (x1 + 2y1, 3x1 − y1) + (x2 + 2y2, 3x2 − y2)
∴ T(u + v) = T(u) + T(v)

ii. T(αu) = T(αx1, αy1)
          = ((αx1) + 2(αy1), 3(αx1) − (αy1))
          = (αx1 + 2αy1, 3αx1 − αy1)
          = α(x1 + 2y1, 3x1 − y1)
∴ T(αu) = αT(u)

Hence T preserves both operations, so T is a linear transformation.
b. Let u = (x1, y1, z1), v = (x2, y2, z2) ∈ R3, α ∈ R.

i. T(u + v) = T(x1 + x2, y1 + y2, z1 + z2)
           = (|x1 + x2|, (y1 + y2) + (z1 + z2))     [by the given definition]
           ≠ (|x1|, y1 + z1) + (|x2|, y2 + z2)      [∵ |x1 + x2| ≠ |x1| + |x2| in general]
∴ T(u + v) ≠ T(u) + T(v)
∴ T does not preserve vector addition. Hence T is not a linear transformation.
c. Given that T([ a b ; c d ]) = (| a b ; c d |, 0) = (ad − bc, 0).

Let u = [ a1 b1 ; c1 d1 ], v = [ a2 b2 ; c2 d2 ] ∈ M22 ⇒ T(u) = (a1d1 − b1c1, 0), T(v) = (a2d2 − b2c2, 0)

i. T(u + v) = T([ a1+a2  b1+b2 ; c1+c2  d1+d2 ])
           = (| a1+a2  b1+b2 ; c1+c2  d1+d2 |, 0)
           = ((a1 + a2)(d1 + d2) − (b1 + b2)(c1 + c2), 0)
           = (a1d1 + a1d2 + a2d1 + a2d2 − b1c1 − b1c2 − b2c1 − b2c2, 0)
           ≠ (a1d1 − b1c1, 0) + (a2d2 − b2c2, 0)
∴ T(u + v) ≠ T(u) + T(v)
∴ T does not preserve vector addition. Hence T is not a linear transformation.
* Important:
• For a linear transformation, each component of the formula of T must be linear, i.e. of the form ax + by + cz; otherwise the mapping is not a linear transformation.
• If the formula of T contains a non-linear term such as a product, power, modulus, non-zero constant, or any other non-linear function, then T is never a linear transformation.
Illustration 5.2 Determine the linear transformation T : R2 → R3 such that T(1, 0) = (1, 2, 3) and T(1, 1) = (0, 1, 0). Also find T(2, 3). [Summer-2016]

Solution: Since {(1, 0), (1, 1)} is a basis of R2, every (x, y) ∈ R2 can be written as
(x, y) = (x − y)(1, 0) + y(1, 1)
⇒ T(x, y) = T[(x − y)(1, 0) + y(1, 1)]       [applying T on both sides]
          = (x − y) T(1, 0) + y T(1, 1)      [∵ T is a linear transformation]
          = (x − y)(1, 2, 3) + y(0, 1, 0)    [∵ given T(1, 0) = (1, 2, 3), T(1, 1) = (0, 1, 0)]
          = (x − y, 2x − 2y, 3x − 3y) + (0, y, 0)
∴ T(x, y) = (x − y, 2x − y, 3x − 3y)   (required formula)
Now put (x, y) = (2, 3): ∴ T(2, 3) = (−1, 1, −3)
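The same computation can be done with matrices: if P holds the given input vectors as columns and B their images, the standard matrix of T satisfies AP = B, so A = BP⁻¹. A sketch of the illustration above:

```python
import numpy as np

# Columns of P: the given inputs (1,0), (1,1); columns of B: their images.
P = np.array([[1., 1.],
              [0., 1.]])
B = np.array([[1., 0.],
              [2., 1.],
              [3., 0.]])

A = B @ np.linalg.inv(P)           # standard matrix of T, since A P = B
print(A @ np.array([2., 3.]))      # T(2, 3) = (-1, 1, -3)
```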
• Let A be an m × n matrix. Its induced linear transformation T_A : Rn → Rm is defined by T_A(X) = AX.
• Conversely, if T : Rn → Rm is a linear transformation, then there exists an m × n matrix A such that T = T_A, that is, T(X) = AX.
Illustration 5.3
a. Find the matrix of the linear transformation T : R4 → R3 defined by
T(w, x, y, z) = (w − 2x − y + 2z, −2w + 4x + 3y − z, −w + 2x + y − z).

Solution:
a. Given T : R4 → R3, the induced (standard) matrix A is of order 3 × 4 and can be constructed as follows:
• There are four unknowns w, x, y, z in the definition of T, so A has four columns; the coefficients of each component of T form a row.

       w   x   y   z
A = [  1  −2  −1   2 ]  = [T]
    [ −2   4   3  −1 ]
    [ −1   2   1  −1 ]
b. Let A = [ −2 1 4 ; 3 5 7 ; 6 0 −1 ].

Since A is a 3 × 3 matrix, the induced transformation is T_A : R3 → R3, defined by T_A(X) = AX, X ∈ R3. Hence, for all X = (x, y, z) ∈ R3,

T_A [ x ]   [ −2 1  4 ] [ x ]   [ −2x + y + 4z ]
    [ y ] = [  3 5  7 ] [ y ] = [ 3x + 5y + 7z ]
    [ z ]   [  6 0 −1 ] [ z ]   [ 6x − z       ]

∴ T_A(x, y, z) = (−2x + y + 4z, 3x + 5y + 7z, 6x − z), (x, y, z) ∈ R3
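The induced transformation of part (b) is just a matrix-vector product, which can be sketched directly:

```python
import numpy as np

# Matrix of part (b)
A = np.array([[-2., 1.,  4.],
              [ 3., 5.,  7.],
              [ 6., 0., -1.]])

def T_A(x):
    return A @ x   # induced transformation: T_A(X) = AX

print(T_A(np.array([1., 1., 1.])))   # (-2+1+4, 3+5+7, 6-1) = (3, 15, 5)
```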
Exercise 5.1
1. Determine whether the following mappings are linear transformations or not:
a. T : R3 → R2, T(x, y, z) = (y, z)
b. T : R2 → R2, T(x, y) = (x + 2, y²)
c. T : R2 → R2, T(x, y) = (xy, x)
d. T : R2 → R3, T(x, y) = (x + 3, 2y, x + y)
e. T : M22 → R2, T([ a b ; c d ]) = (ad + 1, b + c)
f. T : M22 → R2, T([ a b ; c d ]) = (2a − b − d, 0)
2. Determine the linear transformation T : R2 → R3 such that T(−1, 1) = (1, 1, 2) and T(3, −1) = (−2, 0, 1).
3. If S = {ê1, ê2, ê3} is the standard basis for R3 and T : R3 → R3 is a linear transformation such that T(ê3) = 2ê1 + 3ê2 + 5ê3, T(ê2 + ê3) = ê1 and T(ê1 + ê2 + ê3) = ê2 − ê3, find T(ê1 + 2ê2 + 3ê3).
[Hint: T(ê1 + 2ê2 + 3ê3) = T(ê3) + T(ê2 + ê3) + T(ê1 + ê2 + ê3)]

a. w1 = 2x1 − 3x2 + x3      b. w1 = x1
   w2 = 3x1 + 5x2 − x3         w2 = x1 + x2
                               w3 = x1 + x2 + x3
                               w4 = x1 + x2 + x3 + x4
[Hint: Take W = T(X)]
Answers
1. a, c, f are linear transformations.
2. T(x, y) = ½(−x + y, x + 3y, 3x + 7y)
3. (3, 4, 4)
4. T(x, y, z) = (5x − 2y − 2z, 6x − 2y − 2z)
5. a. [ 2 −3  1 ]      b. [ 1 0 0 0 ]
      [ 3  5 −1 ]         [ 1 1 0 0 ]
                          [ 1 1 1 0 ]
                          [ 1 1 1 1 ]
5.4 Composition of Linear Transformations

Illustration 5.4 Show that T : R2 → R2 and S : R2 → R3 defined by T(a, b) = (a + b, b) and S(a, b) = (2a, b, a + 2b) are linear. Also find a formula for the composite transformation S ∘ T.

Solution: Given T(a, b) = (a + b, b) and S(a, b) = (2a, b, a + 2b). Clearly T and S are linear (verify!).
Consider the induced matrices of T and S:

[T] = [ 1 1 ] = A,    [S] = [ 2 0 ] = B
      [ 0 1 ]               [ 0 1 ]
                            [ 1 2 ]

[S ∘ T] = [S][T] = BA = [ 2 0 ]           [ 2 2 ]
                        [ 0 1 ] [ 1 1 ] = [ 0 1 ]
                        [ 1 2 ] [ 0 1 ]   [ 1 3 ]

∴ [S ∘ T] = [ 2 2 ; 0 1 ; 1 3 ] = C, say.
∴ The formula for S ∘ T : R2 → R3 is (S ∘ T)(X) = CX, X ∈ R2, that is,

(S ∘ T) [ a ]   [ 2 2 ] [ a ]   [ 2a + 2b ]
        [ b ] = [ 0 1 ] [ b ] = [ b       ]
                [ 1 3 ]         [ a + 3b  ]

∴ (S ∘ T)(a, b) = (2a + 2b, b, a + 3b)   (required formula)
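The computation above can be reproduced directly, since composition of linear maps corresponds to the matrix product [S ∘ T] = [S][T]:

```python
import numpy as np

T = np.array([[1., 1.],
              [0., 1.]])     # [T]
S = np.array([[2., 0.],
              [0., 1.],
              [1., 2.]])     # [S]

C = S @ T                    # [S o T] = [S][T]
print(C)                     # [[2. 2.] [0. 1.] [1. 3.]]
print(C @ np.array([1., 1.]))   # (S o T)(1, 1) = (4, 1, 4)
```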
5.5 Onto (Surjective) Linear Transformations
A linear transformation T : V → W is said to be onto (surjective) if for every w ∈ W there exists an element v ∈ V such that T(v) = w.

5.6 One-one (Injective) Linear Transformations
A linear transformation T : V → W is said to be one-one (injective) if ∀ u, v ∈ V : T(u) = T(v) ⇒ u = v,
or equivalently ∀ u, v ∈ V : u ≠ v ⇒ T(u) ≠ T(v).
* Important:
Let T : Rn → Rm be a linear transformation and A its induced matrix. Then,
1. T is onto iff AX = b has a solution for every b, that is, every row of the row echelon form of A contains a pivot.
2. T is one-one iff AX = 0 has only the trivial solution, that is, every column of the row echelon form of A contains a pivot.
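Both pivot criteria reduce to rank checks. A sketch (the helper names are my own; the matrix is the one used in the worked example below):

```python
import numpy as np

def is_one_one(A):
    # one-one <=> every column of the echelon form has a pivot <=> rank = n
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_onto(A):
    # onto <=> every row of the echelon form has a pivot <=> rank = m
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1., 3., 0.],
              [0., 1., 0.],
              [1., 0., 1.]])
print(is_one_one(A), is_onto(A))   # True True
```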
For example, consider T : R3 → R3, T(x, y, z) = (x + 3y, y, 2x + z). Let u = (x1, y1, z1), v = (x2, y2, z2) ∈ R3, α ∈ R.

i. T(u + v) = T(x1 + x2, y1 + y2, z1 + z2)
           = ((x1 + x2) + 3(y1 + y2), y1 + y2, 2(x1 + x2) + (z1 + z2))    [by the given definition]
           = (x1 + x2 + 3y1 + 3y2, y1 + y2, 2x1 + 2x2 + z1 + z2)
           = (x1 + 3y1, y1, 2x1 + z1) + (x2 + 3y2, y2, 2x2 + z2)
∴ T(u + v) = T(u) + T(v)

ii. T(αu) = T(αx1, αy1, αz1)
          = (αx1 + 3αy1, αy1, 2αx1 + αz1)
          = α(x1 + 3y1, y1, 2x1 + z1)
∴ T(αu) = αT(u)
The induced matrix is

A = [ 1 3 0 ]  R3 − R1   [ 1  3 0 ]  R3 + 3R2   [ 1 3 0 ]
    [ 0 1 0 ]    ~       [ 0  1 0 ]     ~       [ 0 1 0 ] = B
    [ 1 0 1 ]            [ 0 −3 1 ]             [ 0 0 1 ]

Observe that in the echelon form B every column and every row contains a pivot. Hence the given linear transformation is one-one and onto.
Note: Here T is both one-one and onto, hence it is bijective.
5.7 Range (Image) and Kernel

In particular, if T : Rn → Rm and A is its induced matrix, that is, T(X) = AX, X ∈ Rn, then
1. Range of T: R(T) = { AX : X ∈ Rn } = col(A) = col(T)   [known as the column space of T]
2. Kernel of T: ker(T) = { X ∈ Rn : AX = 0 } = nul(A) = nul(T)   [known as the null space of T]

Remark:
The Rank-Nullity theorem is also known as the dimension theorem, in the sense that rank(T) = dim[R(T)] and nullity(T) = dim[ker(T)]. Hence

dim[R(T)] + dim[ker(T)] = dim Rn = n
For the linear transformation T(x, y) = (x + y, x − y, y), the induced matrix is

A = [ 1  1 ]
    [ 1 −1 ]
    [ 0  1 ]

Reducing to row echelon form:

A ~ [ 1  1 ]
    [ 0 −2 ]
    [ 0  0 ]

1. Range: By definition,
R(T) = { T(X) : X ∈ R2 } = { (x + y, x − y, y) : (x, y) ∈ R2 } = col(T) = col(A)

2. Null space (Kernel): By definition, ker(T) = { X ∈ R2 : T(X) = 0 } = { X ∈ R2 : AX = 0 }, that is, the kernel of T is the solution space of AX = 0. From the row echelon form of A, the system has only the trivial solution, since ρ(A) = 2 = the number of unknowns. Therefore x = 0, y = 0. Thus,
ker(T) = {0} = nul(A)

3. Rank: Rank of T = rank of A = 2 (the dimension of the column space of A, i.e. the number of pivot columns).

4. Nullity: Nullity of T = nullity of A = 0 (the dimension of the null space of A, i.e. the number of non-pivot columns).
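Numerically, the rank and a null-space basis can be read off the singular value decomposition (a sketch; NumPy has no dedicated null-space routine, so the right-singular vectors beyond the rank are used):

```python
import numpy as np

# Induced matrix of T(x, y) = (x + y, x - y, y)
A = np.array([[1.,  1.],
              [1., -1.],
              [0.,  1.]])

rank = np.linalg.matrix_rank(A)
print("rank:", rank, "nullity:", A.shape[1] - rank)   # rank: 2 nullity: 0

# Right-singular vectors belonging to zero singular values span nul(A) = ker(T).
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T     # shape (2, 0) here: the kernel is {0}
```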
Solution: The statement of the dimension theorem for linear transformations is given by Theorem 5.1. The rank and nullity of T are defined to be the rank and nullity of A. Reducing A to row echelon form, we get

A = [ −1  2 0  4  5 −3 ]  R2 + 3R1; R3 + 2R1; R4 + 4R1
    [  3 −7 2  0  1  4 ]     ~
    [  2 −5 2  4  6  1 ]
    [  4 −9 2 −4 −4  7 ]

    [ −1  2 0  4  5 −3 ]  R3 − R2; R4 − R2   [ −1  2 0  4  5 −3 ]
    [  0 −1 2 12 16 −5 ]       ~             [  0 −1 2 12 16 −5 ]
    [  0 −1 2 12 16 −5 ]                     [  0  0 0  0  0  0 ]
    [  0 −1 2 12 16 −5 ]                     [  0  0 0  0  0  0 ]
A = [ 1 2 −1 ]  R3 − R1   [ 1  2 −1 ]  R3 + R2   [ 1 2 −1 ]
    [ 0 1  1 ]     ~      [ 0  1  1 ]     ~      [ 0 1  1 ] = B
    [ 1 1 −2 ]            [ 0 −1 −1 ]            [ 0 0  0 ]

1. A basis for the range of T is the set of columns of A corresponding to the pivot columns of B, and is given by {(1, 0, 1), (2, 1, 1)}. The dimension of the range of T is 2.
2. A basis for the kernel (null space) of T is a basis for the null space of A, that is, the solution space of AX = 0, X ∈ R3. From B, the system AX = 0 has a one-parametric non-trivial solution given by X = (3t, −t, t), t ∈ R. Hence a basis for the kernel of T is {(3, −1, 1)}. The dimension of the kernel of T is 1.
b. Which of the vectors from the set {(1, 2, −2), (3, 5, 2), (−2, 3, 4)} belong to R(T)?
Exercise 5.2
1. Determine whether the given linear transformations are one-one and onto (bijective) or not:
a. T : R4 → R3; T(a, b, c, d) = (a − 2b − c + 2d, −2a + 4b + 3c − d, −a + 2b + c − d)
b. T : R3 → R4; T(a, b, c) = (a + 3b + 2c, −a − b − c, 4b + 2c, a + 3b + 2c)
c. T : R4 → R4; T(a, b, c, d) = (a + 2b − c + 2d, 2a + b + 3c + 2d, a − b + 2c + 2d, 2b + d)
d. T : R2 → R3; T(a, b) = (a + 2b, 2a + 3b, 3a + 4b)
3. Find the corresponding transformation and indicate the source (domain) and the target (co-domain) Euclidean spaces for the matrices:
a. [ 1 2 ]      b. [ 2 0  2 1 ]
                   [ 1 1 −1 0 ]
                   [ 0 1 −2 1 ]
Answers
1. a. Onto, not one-one   b. Not one-one, not onto   c. One-one and onto   d. One-one, not onto
2. Basis for range: {(1, 1, 1), (−1, 0, 1)}, dim = 2; basis for kernel: {(−2, −1, 1, 0), (1, 2, 0, 1)}, dim = 2.
3. a. T : R2 → R, T(x, y) = x + 2y, Source: R2, Target: R
   b. T : R4 → R3, T(x, y, z, w) = (2x + 2z + w, x + y − z, y − 2z + w), Source: R4, Target: R3
In this case the formula for T⁻¹ is given by

T⁻¹(X) = A⁻¹X ⇔ [T⁻¹] = A⁻¹

Consider, for example, the transformation whose induced matrix is

A = [  3 0 1 ]
    [ −2 1 0 ]
    [ −1 2 4 ]

We know that T is invertible if and only if A⁻¹ exists, that is, A is non-singular. Since

det(A) = | 3 0 1 ; −2 1 0 ; −1 2 4 | = 3(4 − 0) − 0 + 1(−4 + 1) = 9 ≠ 0

∴ A is non-singular, so A⁻¹ exists. Hence T is invertible (an isomorphism) and the formula for T⁻¹ is

T⁻¹(X) = A⁻¹X, X ∈ R3.   (5.1)

Now,

A⁻¹ = (1/|A|) adj(A) = (1/9) [  4  2 −1 ]
                             [  8 13 −2 ]
                             [ −3 −6  3 ]

∴ From (5.1),

T⁻¹(X) = (1/9) [  4  2 −1 ] [ x1 ]         [ 4x1 + 2x2 − x3   ]
               [  8 13 −2 ] [ x2 ] = (1/9) [ 8x1 + 13x2 − 2x3 ]
               [ −3 −6  3 ] [ x3 ]         [ −3x1 − 6x2 + 3x3 ]

∴ T⁻¹(x1, x2, x3) = ( (4x1 + 2x2 − x3)/9, (8x1 + 13x2 − 2x3)/9, (−3x1 − 6x2 + 3x3)/9 )
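A quick numerical check of this result (the input vector is my own, chosen for easy arithmetic):

```python
import numpy as np

A = np.array([[ 3., 0., 1.],
              [-2., 1., 0.],
              [-1., 2., 4.]])

assert abs(np.linalg.det(A)) > 1e-12     # non-singular, so T is invertible
A_inv = np.linalg.inv(A)

x = np.array([9., 9., 9.])
print(A_inv @ x)                         # T^{-1}(9, 9, 9) = (5, 19, -6)
assert np.allclose(A @ (A_inv @ x), x)   # applying T recovers the vector
```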
Exercise 5.3
In each of the following cases find T⁻¹, if it exists:
1. T : R2 → R2, T(x, y) = (2y, 3x − y)
2. T : R3 → R3, T(x, y, z) = (2y + z, x − 4y, 3x)
3. T : R3 → R3, T(x1, x2, x3) = (x1 − x2, x2 − x1, x1 − x3)

Answers
1. T⁻¹(x, y) = ( (x + 2y)/6, x/2 )
2. T⁻¹(x, y, z) = ( z/3, −y/4 + z/12, x + y/2 − z/6 )
3. T⁻¹ does not exist.
6.1 Definition
• Let A be a square matrix of order n. A non-zero vector X ∈ Rn is said to be an eigenvector of A if there exists a scalar λ (real or complex) such that AX = λX.
• The scalar λ is called the eigenvalue (characteristic value) of A, and the vector X an eigenvector (characteristic vector) of A corresponding to the eigenvalue λ.

Suppose λ is the eigenvalue corresponding to the non-zero eigenvector X; then

AX = λX   (X ≠ 0)

4. The eigenvalues of an upper or lower triangular matrix, and hence of a diagonal matrix, are the elements of its main diagonal.
5. If λ is an eigenvalue of a non-singular matrix A, then 1/λ is an eigenvalue of A⁻¹.
6. If λ is an eigenvalue of A, then kλ is an eigenvalue of kA.
10. A matrix is non-singular if and only if λi ≠ 0 for all i = 1, 2, ..., n; equivalently, a square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
11. If λ is an eigenvalue of a non-singular matrix A, then |A|/λ is an eigenvalue of adj A.
12. The eigenvalues of a symmetric matrix are real.
1. The eigenvector corresponding to an eigenvalue is not unique: if X is an eigenvector corresponding to the eigenvalue λ, then so is kX for any scalar k ≠ 0.
2. If λ1, λ2, ..., λn are distinct eigenvalues of an n × n matrix, then the corresponding eigenvectors X1, X2, ..., Xn are linearly independent.
3. When two or more eigenvalues are equal, it may or may not be possible to get linearly independent eigenvectors corresponding to the repeated eigenvalues.
5. A symmetric n × n matrix always has n linearly independent eigenvectors, and they can be chosen pairwise orthogonal.
6. Eigen space: Let λ be an eigenvalue of the matrix A. Then the set E_λ = { X : AX = λX } is called the eigen space of λ.
1. If A is a square matrix of order 2 × 2, then its characteristic equation is

λ² − S1λ + |A| = 0,

where S1 = trace(A) = sum of the diagonal elements = a11 + a22, and |A| = det(A).

2. If A is a square matrix of order 3 × 3, then its characteristic equation is

λ³ − S1λ² + S2λ − |A| = 0,

where
S1 = trace(A) = sum of the diagonal elements = a11 + a22 + a33,
S2 = sum of the minors of the diagonal elements = M11 + M22 + M33,
|A| = det(A).
3. Cramer's Rule:

a1x + b1y + c1z = 0
a2x + b2y + c2z = 0

⇒ x / | b1 c1 ; b2 c2 | = −y / | a1 c1 ; a2 c2 | = z / | a1 b1 ; a2 b2 | = t (say), t ∈ R
Illustration 6.1 Find the eigenvalues and eigenvectors of the matrix [ 14 −10 ; 5 −1 ].

Solution: Suppose A = [ 14 −10 ; 5 −1 ].
If λ is an eigenvalue of A corresponding to a non-zero eigenvector X, then

AX = λX, X (≠ 0) ∈ R2 ⇒ (A − λI)X = 0   (6.3)

For a non-trivial solution,
det(A − λI) = 0
⇒ λ² − S1λ + |A| = 0,
where S1 = trace(A) = a11 + a22 = 14 − 1 = 13, |A| = det(A) = −14 + 50 = 36.
∴ λ² − 13λ + 36 = 0 ⇒ (λ − 4)(λ − 9) = 0
∴ λ = 4, 9 = λ1, λ2 (say)
∴ The eigenvalues of A are 4, 9.
The eigenvectors are given by (A − λI)X = 0:

∴ ( [ 14 −10 ; 5 −1 ] − λ [ 1 0 ; 0 1 ] ) [ x ; y ] = [ 0 ; 0 ]

∴ [ 14 − λ   −10  ] [ x ]   [ 0 ]
  [   5    −1 − λ ] [ y ] = [ 0 ]   (6.4)

λ = λ1 = 4: Substituting λ = 4 in equation (6.4), we get [ 10 −10 ; 5 −5 ] [ x ; y ] = [ 0 ; 0 ]. This yields the single equation

10x − 10y = 0 ⇒ x = y = t (say), t ∈ R

The corresponding eigenvector is obtained by taking any non-zero value of t. Without loss of generality (for simplicity), we take t = 1. Hence the eigenvector for λ1 = 4 is X1 = [ 1 ; 1 ].
λ = λ2 = 9: Substituting λ = 9 in equation (6.4), we get

[ 5 −10 ] [ x ]   [ 0 ]
[ 5 −10 ] [ y ] = [ 0 ]  ⇒ 5x − 10y = 0 ∴ x/2 = y = t (say), t ∈ R

For t = 1, the eigenvector corresponding to λ2 = 9 is X2 = [ 2 ; 1 ]. Thus,

λ1 = 4 → X1 = [ 1 ; 1 ],   λ2 = 9 → X2 = [ 2 ; 1 ]
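The same eigenvalues fall out of `np.linalg.eig`, which also returns unit eigenvectors as columns (scalar multiples of X1 and X2 above):

```python
import numpy as np

A = np.array([[14., -10.],
              [ 5.,  -1.]])

vals, vecs = np.linalg.eig(A)
print(np.round(np.sort(vals.real), 6))   # [4. 9.]

# Each column of `vecs` is a unit eigenvector; verify A X = lambda X:
for lam, X in zip(vals, vecs.T):
    assert np.allclose(A @ X, lam * X)
```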
Illustration 6.2 Find the eigenvalues and a basis for each eigen space of the matrix

A = [  3 −1  0 ]
    [ −1  2 −1 ]       [Summer-2016]
    [  0 −1  3 ]
Method to find the roots of the characteristic equation: Equation (6.6) is a cubic polynomial, so it may have three real roots. To find them, use one of the following methods:

• Method 1: Substitute trial values λ = 0, 1, −1, 2, −2, 3, −3, ... until the equation is satisfied (the LHS becomes 0). Observe that λ = 1 satisfies (6.6). Hence one root is λ = 1, that is, one factor is (λ − 1), and

λ³ − 8λ² + 19λ − 12 = λ²(λ − 1) − 7λ(λ − 1) + 12(λ − 1)
                    = (λ − 1)(λ² − 7λ + 12)
                    = (λ − 1)(λ − 3)(λ − 4)

• Method 2: Since one root is λ = 1 (one factor is (λ − 1)), the remaining quadratic factor can be obtained by synthetic division:

1 | 1  −8  19  −12
  |     1  −7   12
  -----------------
    1  −7  12 |  0
From (6.6),
λ³ − 8λ² + 19λ − 12 = 0
∴ (λ − 1)(λ² − 7λ + 12) = 0
∴ (λ − 1)(λ − 3)(λ − 4) = 0
∴ λ = 1, 3, 4 = λ1, λ2, λ3 (say)
∴ The eigenvalues of A are 1, 3, 4.
The eigenvectors are given by (A − λI)X = 0:

∴ [ 3 − λ   −1     0   ] [ x ]   [ 0 ]
  [  −1    2 − λ  −1   ] [ y ] = [ 0 ]   (6.7)
  [   0     −1   3 − λ ] [ z ]   [ 0 ]

λ = λ1 = 1: From (6.7), [ 2 −1 0 ; −1 1 −1 ; 0 −1 2 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ].
Since the system has a non-trivial solution, it yields exactly two independent equations; the solution is obtained from any two independent equations among the three rows. Taking the 1st and 3rd rows, we get

2x − y = 0, −y + 2z = 0 ⇒ 2x = y = 2z ∴ x = y/2 = z = t, t ∈ R

For t = 1, the eigenvector for λ1 = 1 is X1 = [ 1 ; 2 ; 1 ].
Since the singleton set {(1, 2, 1)} is linearly independent (the vector is non-zero) and it spans the eigen space E_λ1, it is a basis for E_λ1.
∴ Basis for eigen space E_λ1 = {(1, 2, 1)}
• It is worth noting that a basis for an eigen space is a set of corresponding linearly independent eigenvectors.
λ = λ2 = 3: From (6.7), [ 0 −1 0 ; −1 −1 −1 ; 0 −1 0 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
∴ From the 1st and 2nd equations,

−y = 0, −x − y − z = 0 ⇒ y = 0, x = −z = t, t ∈ R

For t = 1, the eigenvector for λ2 = 3 is X2 = [ 1 ; 0 ; −1 ].
• Eigen space E_λ2 = {(t, 0, −t) : t ∈ R} and a basis for it is {(1, 0, −1)}.

λ = λ3 = 4: From (6.7), [ −1 −1 0 ; −1 −2 −1 ; 0 −1 −1 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
∴ From the 1st and 3rd equations,

−x − y = 0, −y − z = 0 ⇒ −x = y = −z = t, t ∈ R

For t = 1, the eigenvector for λ3 = 4 is X3 = [ −1 ; 1 ; −1 ].
• Eigen space E_λ3 = {(−t, t, −t) : t ∈ R} and a basis for it is {(−1, 1, −1)}.
Important deductions:
Using the properties of eigenvalues [see Section 6.3], we have the following:
1. λ1 + λ2 + λ3 = 1 + 3 + 4 = 8 = trace of A.
2. λ1 × λ2 × λ3 = 1 × 3 × 4 = 12 = det(A).
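Both deductions are easy to verify numerically for the matrix of this illustration:

```python
import numpy as np

A = np.array([[ 3., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  3.]])

vals = np.linalg.eigvals(A)
print(np.round(np.sort(vals.real), 6))                   # [1. 3. 4.]
assert np.isclose(vals.sum().real, np.trace(A))          # sum = trace = 8
assert np.isclose(np.prod(vals).real, np.linalg.det(A))  # product = det = 12
```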
Illustration 6.3 Find the eigenvalues and eigenvectors of the matrix A = [ 1 2 2 ; 0 2 1 ; −1 2 2 ]. [Summer-2017]

Solution: If λ is an eigenvalue of A corresponding to an eigenvector X ∈ R3, the characteristic equation is

det(A − λI) = 0 ⇒ λ³ − S1λ² + S2λ − |A| = 0   (6.8)

where
S1 = trace(A) = 1 + 2 + 2 = 5,
S2 = M11 + M22 + M33 = | 2 1 ; 2 2 | + | 1 2 ; −1 2 | + | 1 2 ; 0 2 | = 2 + 4 + 2 = 8,
|A| = det(A) = 4.

∴ λ³ − 5λ² + 8λ − 4 = 0
∴ λ = 1, λ² − 4λ + 4 = 0
∴ λ = 1, (λ − 2)² = 0
∴ λ = 1, 2, 2 = λ1, λ2, λ3 (say)
∴ The eigenvalues of A are 1, 2, 2 (repeated eigenvalues).
The eigenvectors are given by

(A − λI)X = 0 ⇒ [ 1 − λ   2     2   ] [ x ]   [ 0 ]
                [   0   2 − λ   1   ] [ y ] = [ 0 ]   (6.9)
                [  −1     2   2 − λ ] [ z ]   [ 0 ]

λ = λ1 = 1: From (6.9), [ 0 2 2 ; 0 1 1 ; −1 2 1 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
∴ From the 2nd and 3rd equations,

y + z = 0, −x + 2y + z = 0 ⇒ x = y = −z = t, t ∈ R

For t = 1, λ1 = 1 → X1 = [ 1 ; 1 ; −1 ]

λ = λ2 = λ3 = 2: From (6.9), [ −1 2 2 ; 0 0 1 ; −1 2 0 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
∴ From the 2nd and 3rd equations, z = 0, −x + 2y = 0 ⇒ z = 0, x/2 = y = t, t ∈ R
For t = 1,
λ2 = λ3 = 2 → X2 = [ 2 ; 1 ; 0 ]
Illustration 6.4 Find the eigenvalues and eigenvectors of the matrix A = [ 5 0 1 ; 1 1 0 ; −7 1 0 ].

Solution: The eigenvalues of A are given by the characteristic equation

det(A − λI) = 0
∴ λ³ − 6λ² + 12λ − 8 = 0 ⇒ (λ − 2)(λ² − 4λ + 4) = 0
∴ (λ − 2)³ = 0 ⇒ λ = 2, 2, 2 (all eigenvalues are equal)

Corresponding to the three equal eigenvalues there may be one, two or three linearly independent eigenvectors, given by

(A − λI)X = 0, X (≠ 0) ∈ R3 ⇒ [ 5 − λ   0    1  ] [ x ]   [ 0 ]
                              [   1   1 − λ  0  ] [ y ] = [ 0 ]
                              [  −7     1   −λ  ] [ z ]   [ 0 ]

For λ = 2: [ 3 0 1 ; 1 −1 0 ; −7 1 −2 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]

From the 1st and 2nd equations, 3x + z = 0, x − y = 0 ⇒ x = y = −z/3 = t, t ∈ R.
For t = 1, we have only one linearly independent eigenvector corresponding to the three repeated eigenvalues:

X = [ 1 ; 1 ; −3 ]
Illustration 6.5 Consider the matrix A = [ 2 1 1 ; 0 1 0 ; 0 0 1 ], which is upper triangular, so its eigenvalues are λ = 2, 1, 1.

λ = λ1 = 2: [ 0 1 1 ; 0 −1 0 ; 0 0 −1 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ], giving y = 0, z = 0, x = t; for t = 1, X1 = [ 1 ; 0 ; 0 ].

λ = λ2 = λ3 = 1: [ 1 1 1 ; 0 0 0 ; 0 0 0 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
From the 1st equation (the 2nd and 3rd columns are non-pivot),

x + y + z = 0, y = t1, z = t2 ⇒ x = −t1 − t2, y = t1, z = t2, t1, t2 ∈ R

Since for the repeated eigenvalue (two equal eigenvalues) we get a two-parametric solution, we can find two linearly independent eigenvectors by choosing values of the parameters t1, t2 as follows:

t1 = 1, t2 = 0 ⇒ X2 = [ −1 ; 1 ; 0 ],  and  t1 = 0, t2 = 1 ⇒ X3 = [ −1 ; 0 ; 1 ]
Illustration 6.6 Find the eigenvalues and eigenvectors of the matrix A = [ 3 −1 0 ; −1 2 −1 ; 0 −1 3 ].

Solution: The given matrix is symmetric, so it always has three linearly independent eigenvectors, irrespective of repeated eigenvalues. The characteristic equation is

det(A − λI) = 0
∴ λ³ − 8λ² + 19λ − 12 = 0 ⇒ λ = 1, 3, 4 = λ1, λ2, λ3

(A − λI)X = 0, X (≠ 0) ∈ R3 ⇒ [ 3 − λ   −1     0   ] [ x ]   [ 0 ]
                              [  −1    2 − λ  −1   ] [ y ] = [ 0 ]
                              [   0     −1   3 − λ ] [ z ]   [ 0 ]

λ = λ1 = 1: [ 2 −1 0 ; −1 1 −1 ; 0 −1 2 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
From the 1st and 3rd equations, 2x = y = 2z ∴ x = y/2 = z = t, t ∈ R. Hence λ1 = 1 → X1 = [ 1 ; 2 ; 1 ]

λ = λ2 = 3: [ 0 −1 0 ; −1 −1 −1 ; 0 −1 0 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ], giving y = 0, x = −z = t, t ∈ R. For t = 1, λ2 = 3 → X2 = [ 1 ; 0 ; −1 ]
Illustration 6.7 Find the eigenvalues and eigenvectors of the symmetric matrix A = [ 0 1 1 ; 1 0 1 ; 1 1 0 ].

Solution: The characteristic equation is

det(A − λI) = 0 ⇒ λ³ − 3λ − 2 = 0 ⇒ λ = 2, −1, −1

λ = λ1 = 2: [ −2 1 1 ; 1 −2 1 ; 1 1 −2 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]. From the 1st and 2nd equations,

−2x + y + z = 0
 x − 2y + z = 0

⇒ x / | 1 1 ; −2 1 | = −y / | −2 1 ; 1 1 | = z / | −2 1 ; 1 −2 |   [by Cramer's rule]

∴ x/3 = y/3 = z/3 ⇒ x = y = z = t, t ∈ R. Hence λ1 = 2 → X1 = [ 1 ; 1 ; 1 ]

λ = λ2 = λ3 = −1: [ 1 1 1 ; 1 1 1 ; 1 1 1 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
∴ x + y + z = 0, y = t1, z = t2 ⇒ x = −t1 − t2, y = t1, z = t2, t1, t2 ∈ R

Since A is symmetric, it has three linearly independent, pairwise orthogonal eigenvectors. The second vector is given by taking t1 = 1, t2 = 0: X2 = [ −1 ; 1 ; 0 ].
• To find the third vector X3, we select t1 and t2 such that X2 · X3 = 0. This is achieved by taking the general solution X3 = (−t1 − t2, t1, t2):

X2 · X3 = 0 ⇒ (t1 + t2) + t1 = 0 ⇒ t2 = −2t1

Taking t1 = 1, t2 = −2:

∴ λ2 = λ3 = −1 → X2 = [ −1 ; 1 ; 0 ], X3 = [ 1 ; 1 ; −2 ]
• The number of times an eigenvalue λ occurs as a root of the characteristic equation is called the algebraic multiplicity of λ, denoted multa(λ).
• The dimension of the eigen space of λ is called the geometric multiplicity of λ, denoted multg(λ).

Illustration 6.8 Determine the algebraic and geometric multiplicity of each eigenvalue of the matrix A = [ 0 1 1 ; 1 0 1 ; 1 1 0 ].

Solution: To determine the algebraic and geometric multiplicities, first find the eigenvalues and eigenvectors. For the given matrix these were obtained in Illustration 6.7:

λ1 = 2 → X1 = [ 1 ; 1 ; 1 ],   λ2 = λ3 = −1 → X2 = [ −1 ; 1 ; 0 ], X3 = [ 1 ; 1 ; −2 ]
By definition,
1. The eigenvalue λ = 2 occurs once, and the dimension of its eigen space (the number of corresponding independent eigenvectors) is also one.
∴ Algebraic multiplicity of 2 = multa(2) = 1 and geometric multiplicity of 2 = multg(2) = 1.
2. The eigenvalue λ = −1 occurs twice, and the dimension of its eigen space is also two.
∴ Algebraic multiplicity of −1 = multa(−1) = 2 and geometric multiplicity of −1 = multg(−1) = 2.
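Both multiplicities can be computed mechanically (a sketch for the matrix of this illustration; the geometric multiplicity uses dim E_λ = n − rank(A − λI)):

```python
import numpy as np
from collections import Counter

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

vals = np.round(np.linalg.eigvals(A).real, 8)
alg = Counter(vals)                  # algebraic multiplicity of each eigenvalue
for lam in sorted(alg):
    # geometric multiplicity = dim of eigen space = n - rank(A - lam I)
    geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(3))
    print(lam, alg[lam], geo)
```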
Note:
1. In Illustration 6.1, multa(4) = multa(9) = 1 and multg(4) = multg(9) = 1.
2. In Illustration 6.2, multa(1) = multa(3) = multa(4) = 1 and multg(1) = multg(3) = multg(4) = 1.
6. In Illustration 6.6, multa(1) = multa(3) = multa(4) = 1 and multg(1) = multg(3) = multg(4) = 1.
Exercise 6.1
1. Find the eigenvalues, eigenvectors and hence a basis for the eigen space for the following matrices:
a. [ 0 3 ; 4 0 ]   b. [ 1 0 ; 0 1 ]   c. [ 3 0 ; 8 −1 ]
2. a. [ 1 0 −1 ; 1 2 1 ; 2 2 3 ] [Winter-2015]   b. [ 4 6 6 ; 1 3 2 ; −1 −4 −3 ]
3. a. [ 1 0 0 ; 2 0 1 ; 3 1 0 ]   b. [ 2 1 0 ; 0 2 1 ; 0 0 2 ]   c. [ 4 6 6 ; 1 3 2 ; −1 −5 −2 ]
4. a. [ −2 5 4 ; 5 7 5 ; 4 5 −2 ]   b. [ 5 0 1 ; 0 −2 0 ; 1 0 5 ]
5. a. [ 1 2 3 ; 2 4 6 ; 3 6 9 ]   b. [ 3 1 1 ; 1 3 −1 ; 1 −1 3 ]
−5 4 34
0 1 0
8. If λ is an eigenvalue of an orthogonal matrix A, prove that 1/λ is also an eigenvalue of A.
[Hint: For an orthogonal matrix, A⁻¹ = Aᵀ.]
9. Let A be a 6 × 6 matrix with characteristic equation λ²(λ − 1)(λ − 2)³ = 0. What are the possible dimensions of the eigen spaces of A?
Answers
1. a. 2√3, −2√3 → (√3/2, 1), (−√3/2, 1)   b. 1, 1 → (1, 0), (0, 1)   c. −1, 3 → (0, 1), (1, 2)
2. a. 1, 2, 3 → (−1, 1, 0), (−2, 1, 2), (−1, 1, 2)   b. −1, 1, 4 → (6, 2, −7), (0, −1, 1), (3, 1, −1)
3. a. −1 → (0, −1, 1); 1, 1 → (0, 1, 1)   b. 2, 2, 2 → (1, 0, 0)   c. 1 → (4, 1, −3); 2, 2 → (3, 1, −2)
4. a. −6, −3, 12 → (−1, 0, 1), (1, −1, 1), (1, 2, 1)   b. 6, −2, 4 → (1, 0, 1), (0, 1, 0), (−1, 0, 1)
5. a. 0, 0, 14 → (−3, 0, 1), (−1, 5, −3), (1, 2, 3)   b. 1, 4, 4 → (−1, 1, 1), (1, 1, 0), (1, −1, 2)
A² − 5A − 2I2 = 0   (6.10)

Verification:
A = [ 1 3 ; 2 4 ] ⇒ A² = AA = [ 1 3 ; 2 4 ][ 1 3 ; 2 4 ] = [ 7 15 ; 10 22 ]

∴ A² − 5A − 2I2 = [ 7 15 ; 10 22 ] − 5[ 1 3 ; 2 4 ] − 2[ 1 0 ; 0 1 ]
                = [ 7 15 ; 10 22 ] − [ 5 15 ; 10 20 ] − [ 2 0 ; 0 2 ]
                = [ 7 − 5 − 2, 15 − 15 − 0 ; 10 − 10 − 0, 22 − 20 − 2 ]
                = [ 0 0 ; 0 0 ] = 0
Hence A² − 5A − 2I2 = 0, and the Cayley-Hamilton theorem is verified.

To find A³: Multiplying equation (6.10) by A, we get

A³ − 5A² − 2A = 0 ⇒ A³ = 5A² + 2A

∴ A³ = 5[ 7 15 ; 10 22 ] + 2[ 1 3 ; 2 4 ] = [ 37 81 ; 54 118 ]
To find A⁻¹: Multiplying (6.10) by A⁻¹, we get

A − 5I2 − 2A⁻¹ = 0 ⇒ A⁻¹ = (1/2)(A − 5I2)

∴ A⁻¹ = (1/2)( [ 1 3 ; 2 4 ] − 5[ 1 0 ; 0 1 ] ) = (1/2)[ −4 3 ; 2 −1 ]

∴ A⁻¹ = [ −2 3/2 ; 1 −1/2 ]
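Both results can be confirmed numerically, following exactly the manipulations above:

```python
import numpy as np

A = np.array([[1., 3.],
              [2., 4.]])
I2 = np.eye(2)

# Characteristic equation: lambda^2 - 5 lambda - 2 = 0  ...(6.10)
assert np.allclose(A @ A - 5*A - 2*I2, 0)   # Cayley-Hamilton holds

A3 = 5*(A @ A) + 2*A        # multiply (6.10) by A:       A^3 = 5A^2 + 2A
A_inv = (A - 5*I2) / 2      # multiply (6.10) by A^{-1}:  A^{-1} = (A - 5I)/2
print(A3)                   # [[ 37.  81.] [ 54. 118.]]
assert np.allclose(A @ A_inv, I2)
```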
Illustration 6.10 Using the Cayley-Hamilton theorem, find A⁻¹ if A = [ 2 1 1 ; 0 1 0 ; 1 1 2 ]. Hence find the matrix represented by A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I.

¹Arthur Cayley, British, 1821-1895, and William Rowan Hamilton, Irish, 1805-1865.
Dividing the given polynomial by the characteristic polynomial A³ − 5A² + 7A − 3I, we get

A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I = (A⁵ + A)(A³ − 5A² + 7A − 3I) + (A² + A + I), where I = I3

= (A⁵ + A)(0) + (A² + A + I)   [∵ (6.11)]
= A² + A + I
= [ 5 4 4 ; 0 1 0 ; 4 4 5 ] + [ 2 1 1 ; 0 1 0 ; 1 1 2 ] + [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

∴ A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I = [ 8 5 5 ; 0 3 0 ; 5 5 8 ]
Exercise 6.2
1. Verify the Cayley-Hamilton theorem for the following matrix A and hence find A³ and A⁻¹:
a. [ 1 2 ; 3 4 ] [Winter-2015]   b. [ −1 1 ; 3 0 ]
2. Verify the Cayley-Hamilton theorem for the following matrix A and hence find A⁴ and A⁻¹:
2 −1 1 6 −1 1
1 3 2
Answers
1. a. A³ = [ 37 54 ; 81 118 ], A⁻¹ = [ −2 1 ; 3/2 −1/2 ]   b. A³ = [ −7 4 ; 12 −3 ], A⁻¹ = [ 0 1/3 ; 1 1/3 ]
4. (1/37) [ 9 13 −14 ; 8 −9 4 ; 2 7 1 ],   −2A² + 4A + 3I = [ 13 8 −40 ; 16 −11 −16 ; 16 8 −27 ]
Two matrices A and B are said to be similar if there exists a non-singular matrix P such that B = P⁻¹AP. Similar matrices have the same eigenvalues.

6.8 Diagonalization
A matrix A is said to be diagonalizable if it is similar to some diagonal matrix, that is, if there exists a non-singular matrix P such that P⁻¹AP = D, where D is a diagonal matrix.
• P is the matrix whose columns are linearly independent eigenvectors of A, and is known as the Modal Matrix.
• D is the diagonal matrix whose diagonal elements are the eigenvalues of A, and is known as the Spectral Matrix.
• An n × n real symmetric matrix always has n linearly independent eigenvectors, even when the eigenvalues are repeated.
• On normalizing each eigenvector we obtain the modal matrix M, which is then orthogonal.
Illustration 6.11 Determine whether the following matrices are diagonalizable or not. If so, diagonalize them.

a. A = [ 2 0 −2 ; 0 3 0 ; 0 0 3 ] [Winter-2015]   b. A = [ 1 2 1 ; 2 0 −2 ; −1 2 3 ]

Solution: To diagonalize, first obtain the eigenvalues and eigenvectors of the given matrix.
a. Since the given matrix is upper triangular, its eigenvalues are the main diagonal elements: λ = 2, 3, 3.
The eigenvectors are given by the homogeneous system

(A − λI)X = 0, X (≠ 0) ∈ R3 ⇒ [ 2 − λ   0    −2   ] [ x ]   [ 0 ]
                              [   0   3 − λ   0   ] [ y ] = [ 0 ]
                              [   0     0   3 − λ ] [ z ]   [ 0 ]

λ = λ1 = 2: [ 0 0 −2 ; 0 1 0 ; 0 0 1 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ] ⇒ y = 0, z = 0, x = t. For t = 1, X1 = [ 1 ; 0 ; 0 ].

λ = λ2 = λ3 = 3: [ −1 0 −2 ; 0 0 0 ; 0 0 0 ] [ x ; y ; z ] = [ 0 ; 0 ; 0 ]
From the 1st equation (the 2nd and 3rd columns are non-pivot), x = −2z, y = t1, z = t2, t1, t2 ∈ R.
Hence, λ2 = λ3 = 3 → X2 = [ 0 ; 1 ; 0 ], X3 = [ −2 ; 0 ; 1 ].
Since A has three linearly independent eigenvectors, it is diagonalizable. The modal matrix P which diagonalizes A is obtained by taking the eigenvectors as columns:

P = [ X1 X2 X3 ] = [ 1 0 −2 ]
                   [ 0 1  0 ]
                   [ 0 0  1 ]

Then P⁻¹AP = D = diag(2, 3, 3), the spectral matrix.
b. The eigenvalues of the given matrix are λ = 0, 2, 2, and the eigenvectors are given by
$$(A - \lambda I)X = 0,\; X \neq 0 \in \mathbb{R}^3 \;\Rightarrow\; \begin{bmatrix} 1-\lambda & 2 & 1 \\ 2 & -\lambda & -2 \\ -1 & 2 & 3-\lambda \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$\lambda = \lambda_1 = 0:\; \begin{bmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
From the 1st and 2nd equations, $x + 2y + z = 0$, $2x - 2z = 0 \Rightarrow x = -y = z = t$, $t \in \mathbb{R}$.
Hence, $\lambda_1 = 0 \rightarrow X_1 = (1, -1, 1)^T$.
$$\lambda = \lambda_2 = \lambda_3 = 2:\; \begin{bmatrix} -1 & 2 & 1 \\ 2 & -2 & -2 \\ -1 & 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
From the 1st and 2nd equations, $-x + 2y + z = 0$, $2x - 2y - 2z = 0 \Rightarrow y = 0$, $x = z = t$, $t \in \mathbb{R}$.
Hence, $\lambda_2 = \lambda_3 = 2 \rightarrow X_2 = (1, 0, 1)^T$.
The given matrix A has only two linearly independent eigenvectors $X_1, X_2$ corresponding to the three eigenvalues λ = 0, 2, 2. Hence, it is not diagonalizable. [See Theorem 6.1]
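As a numerical cross-check of Illustration 6.11 (a sketch assuming NumPy is available; the helper name is mine, not the book's), a matrix is diagonalizable exactly when its eigenvector matrix has full rank:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-6):
    # A is diagonalizable iff it has n linearly independent eigenvectors,
    # i.e. the matrix of eigenvectors returned by eig has full rank.
    _, P = np.linalg.eig(np.asarray(A, dtype=float))
    return np.linalg.matrix_rank(P, tol=tol) == A.shape[0]

A1 = np.array([[2, 0, -2], [0, 3, 0], [0, 0, 3]])   # part a
A2 = np.array([[1, 2, 1], [2, 0, -2], [-1, 2, 3]])  # part b

# Verify the modal matrix found in part a: P^{-1} A1 P = diag(2, 3, 3).
P = np.array([[1.0, 0.0, -2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
D = np.linalg.inv(P) @ A1 @ P
```

The loose tolerance is deliberate: for the defective matrix in part b, the two computed eigenvectors for λ = 2 are nearly parallel, which the rank test detects.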
Illustration 6.12 Find the normalized modal matrix M for the matrix $A = \begin{bmatrix} 2 & 0 & 1 \\ 0 & 3 & 0 \\ 1 & 0 & 2 \end{bmatrix}$ and diagonalize it orthogonally.
Solution: Observe that the given matrix is symmetric, since $A = A^T$. Hence it is orthogonally diagonalizable. [See Section 6.9]
Characteristic equation of A: $\lambda^3 - 7\lambda^2 + 15\lambda - 9 = 0 \;\Rightarrow\; \lambda = 1, 3, 3$.
The eigenvectors are given by
$$(A - \lambda I)X = 0,\; X \neq 0 \in \mathbb{R}^3 \;\Rightarrow\; \begin{bmatrix} 2-\lambda & 0 & 1 \\ 0 & 3-\lambda & 0 \\ 1 & 0 & 2-\lambda \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$\lambda = \lambda_1 = 1:\; \begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \;\Rightarrow\; y = 0,\; x = -z \;\Rightarrow\; X_1 = (1, 0, -1)^T$$
∴ The normalized eigenvector is $\widehat{X}_1 = \dfrac{X_1}{\|X_1\|} = \dfrac{1}{\sqrt{2}}(1, 0, -1)^T = \left(1/\sqrt{2},\; 0,\; -1/\sqrt{2}\right)^T$.
$$\lambda = \lambda_2 = \lambda_3 = 3:\; \begin{bmatrix} -1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
From the 1st or 3rd equation, $x = z = t_1$, $y = t_2$, $t_1, t_2 \in \mathbb{R}$. The first linearly independent vector is given by taking $t_1 = 1$, $t_2 = 0$:
$$X_2 = (1, 0, 1)^T \;\Rightarrow\; \widehat{X}_2 = \dfrac{X_2}{\|X_2\|} = \dfrac{1}{\sqrt{2}}(1, 0, 1)^T = \left(1/\sqrt{2},\; 0,\; 1/\sqrt{2}\right)^T$$
For the second vector $X_3 = (t_1, t_2, t_1)^T$ we require orthogonality to $X_2$:
$$X_2 \cdot X_3 = 0 \;\Rightarrow\; 2t_1 = 0 \;\Rightarrow\; t_1 = 0$$
So we can take any value of $t_2$; let $t_2 = 1$. Then $X_3 = (0, 1, 0)^T \Rightarrow \widehat{X}_3 = \dfrac{X_3}{\|X_3\|} = (0, 1, 0)^T$. Thus, the required normalized modal matrix is
$$M = \begin{bmatrix} \widehat{X}_1 & \widehat{X}_2 & \widehat{X}_3 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 0 & 0 & 1 \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \end{bmatrix}$$
Also the orthogonal diagonalization of A is given by
$$M^T A M = D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix} = \text{Spectral matrix.}$$
Note: For simple (non-orthogonal) diagonalization, the eigenvectors need not be normalized. In this case the modal matrix is
$$P = \begin{bmatrix} X_1 & X_2 & X_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 1 & 0 \end{bmatrix} \;\Rightarrow\; P^{-1}AP = D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$
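The same computation can be sketched in NumPy (an assumption of mine, not the book's tooling): `eigh` is the standard routine for symmetric matrices and already returns orthonormal eigenvectors, so it produces the normalized modal matrix directly.

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])

# eigh is for symmetric matrices: eigenvalues ascending, columns of M orthonormal.
eigenvalues, M = np.linalg.eigh(A)
D = M.T @ A @ M  # spectral matrix: M^T A M = D since M is orthogonal
```

Here `eigenvalues` comes out as [1, 3, 3], matching the hand computation above.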
Exercise 6.3
1. For each of the following matrices, find the non-singular matrix P and the diagonal matrix D such that $D = P^{-1}AP$:
a. $\begin{bmatrix} -4 & -6 \\ 3 & 5 \end{bmatrix}$ [Winter-2016]  b. $\begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix}$
c. $\begin{bmatrix} 1 & 1 & -2 \\ -1 & 2 & 1 \\ 0 & 1 & -1 \end{bmatrix}$  d. $\begin{bmatrix} 1 & 1 & 3 \\ 1 & 5 & 1 \\ 3 & 1 & 1 \end{bmatrix}$
3. Find the normalized modal matrix M and diagonalize orthogonally the following matrices:
a. $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$  b. $\begin{bmatrix} 2 & 2 & 0 \\ 2 & 5 & 0 \\ 0 & 0 & 3 \end{bmatrix}$
4. Prove that if $b \neq 0$, then $\begin{bmatrix} a & b \\ 0 & a \end{bmatrix}$ is not diagonalizable.
Answers
1. c. $P = \begin{bmatrix} 1 & 1 & 3 \\ 0 & 3 & 2 \\ 1 & 1 & 1 \end{bmatrix}$, $D = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$  d. $P = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 2 & 0 \\ 1 & 1 & 1 \end{bmatrix}$, $D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & -2 \end{bmatrix}$
3. a. $P = \begin{bmatrix} -1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}$, $D = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$  b. $P = \begin{bmatrix} 0 & -2/\sqrt{5} & 1/\sqrt{5} \\ 0 & 1/\sqrt{5} & 2/\sqrt{5} \\ 1 & 0 & 0 \end{bmatrix}$, $D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 6 \end{bmatrix}$
E E E
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779
â A homogeneous polynomial of degree two in n variables is called a quadratic form (QF) in n variables.
â General QF in three variables:
$$Q(x_1, x_2, x_3) = a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 + 2a_{12}x_1x_2 + 2a_{23}x_2x_3 + 2a_{31}x_3x_1$$
1. $Q(x_1, x_2) = a_{11}x_1^2 + 2a_{12}x_1x_2 + a_{22}x_2^2 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ and $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{bmatrix}$.
2. $Q(x_1, x_2, x_3) = a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 + 2a_{12}x_1x_2 + 2a_{23}x_2x_3 + 2a_{13}x_1x_3 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ and $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix}$.
* Important:
â A is a symmetric matrix whose diagonal entries are the coefficients of the squared variables, and whose remaining entries are half of the coefficients of the corresponding cross-product terms, filled in symmetrically.
Illustration 7.1
a. $4x_1^2 - 9x_2^2 - 6x_1x_2 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $A = \begin{bmatrix} 4 & -3 \\ -3 & -9 \end{bmatrix}$.
b. $(x - y)^2 = x^2 - 2xy + y^2 = X^TAX$, where $X = \begin{bmatrix} x \\ y \end{bmatrix}$, $A = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$.
c. $3x_1^2 + 2x_2^2 + 3x_3^2 - 2x_1x_2 - 2x_2x_3 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $A = \begin{bmatrix} 3 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 3 \end{bmatrix}$.
d. $2x^2 + 5y^2 - 6z^2 - 2xy - yz + 8xz = X^TAX$, where $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, $A = \begin{bmatrix} 2 & -1 & 4 \\ -1 & 5 & -1/2 \\ 4 & -1/2 & -6 \end{bmatrix}$. [Summer-2015]
7.4 Definiteness of Quadratic Form
The quadratic form $X^TAX$ is said to be:
1. Positive definite if all eigenvalues of A are positive.
2. Negative definite if all eigenvalues of A are negative.
3. Semi-positive definite if all eigenvalues of A are non-negative and at least one eigenvalue is zero.
4. Semi-negative definite if all eigenvalues of A are non-positive and at least one eigenvalue is zero.
5. Indefinite if some eigenvalues of A are positive and some are negative.
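This classification is easy to automate (a sketch assuming NumPy; the function name and labels are mine):

```python
import numpy as np

def definiteness(A, tol=1e-10):
    ev = np.linalg.eigvalsh(np.asarray(A, dtype=float))  # eigenvalues of symmetric A
    has_pos = bool(np.any(ev > tol))
    has_neg = bool(np.any(ev < -tol))
    has_zero = bool(np.any(np.abs(ev) <= tol))
    if has_pos and has_neg:
        return "indefinite"
    if has_pos:
        return "semi-positive definite" if has_zero else "positive definite"
    if has_neg:
        return "semi-negative definite" if has_zero else "negative definite"
    return "zero form"

# Matrix of -3x^2 - 5y^2 - 3z^2 + 2xy + 2yz - 2xz (Illustration 7.2 below)
A = np.array([[-3.0, 1.0, -1.0], [1.0, -5.0, 1.0], [-1.0, 1.0, -3.0]])
```

Its eigenvalues are -2, -3, -6, so the form of Illustration 7.2 is negative definite.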
Illustration 7.2 Determine the index, signature, rank and definiteness of the quadratic form $-3x^2 - 5y^2 - 3z^2 + 2xy + 2yz - 2xz$.
Illustration 7.3 Find the canonical form of the quadratic form $2x_1^2 + 3x_2^2 + 3x_3^2 + 2x_1x_3$ using an orthogonal transformation. Also find the index, rank and signature of the quadratic form. [Summer-2014]
Solution: $2x_1^2 + 3x_2^2 + 3x_3^2 + 2x_1x_3 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $A = \begin{bmatrix} 2 & 0 & 1 \\ 0 & 3 & 0 \\ 1 & 0 & 3 \end{bmatrix}$
Characteristic equation of A:
$$\lambda^3 - 8\lambda^2 + 20\lambda - 15 = 0 \;\Rightarrow\; (\lambda - 3)\left(\lambda^2 - 5\lambda + 5\right) = 0$$
$$\therefore\; \lambda = 3,\quad \lambda = \frac{5 \pm \sqrt{25 - 20}}{2} \qquad \left[\because\; ax^2 + bx + c = 0 \;\Rightarrow\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\right]$$
$$\therefore\; \lambda_1, \lambda_2, \lambda_3 = 3,\; \frac{5 + \sqrt{5}}{2},\; \frac{5 - \sqrt{5}}{2}$$
Thus by the Principal Axis Theorem [Theorem 7.1], the given quadratic form can be reduced to canonical form under the orthogonal transformation $X = PY$ as
$$X^TAX = \lambda_1 y_1^2 + \lambda_2 y_2^2 + \lambda_3 y_3^2 = 3y_1^2 + \left(\frac{5 + \sqrt{5}}{2}\right) y_2^2 + \left(\frac{5 - \sqrt{5}}{2}\right) y_3^2$$
Since all three eigenvalues are positive, the rank is 3, the index is 3 and the signature is 3.
Note: In order to find the orthogonal transformation $X = PY$ that reduces a given quadratic form to canonical form (that is, to verify the principal axis theorem), it is essential to find the normalized modal matrix P of the symmetric matrix A. See the illustration below.
Illustration 7.4 Determine the orthogonal transformation which transforms the quadratic form $5x^2 + 5y^2 + 5z^2 + 4xy + 4yz + 4zx$ into canonical form.
Solution: $5x_1^2 + 5x_2^2 + 5x_3^2 + 4x_1x_2 + 4x_2x_3 + 4x_3x_1 = X^TAX$, where $X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $A = \begin{bmatrix} 5 & 2 & 2 \\ 2 & 5 & 2 \\ 2 & 2 & 5 \end{bmatrix}$
The characteristic equation of A gives the eigenvalues λ = 3, 3, 9, and the eigenvectors are given by
$$(A - \lambda I)X = 0,\; X \neq 0 \in \mathbb{R}^3 \;\Rightarrow\; \begin{bmatrix} 5-\lambda & 2 & 2 \\ 2 & 5-\lambda & 2 \\ 2 & 2 & 5-\lambda \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$\lambda = \lambda_1 = \lambda_2 = 3:\; \begin{bmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
This yields $x + y + z = 0 \Rightarrow x = -t_1 - t_2$, $y = t_1$, $z = t_2$, $t_1, t_2 \in \mathbb{R}$.
∴ Two linearly independent vectors are
$$X_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \;\Rightarrow\; \widehat{X}_1 = \frac{X_1}{\|X_1\|} = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}$$
and (choosing the second vector orthogonal to the first)
$$X_2 = \begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix} \;\Rightarrow\; \widehat{X}_2 = \frac{X_2}{\|X_2\|} = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{6} \\ 1/\sqrt{6} \\ -2/\sqrt{6} \end{bmatrix}$$
$$\lambda = \lambda_3 = 9:\; \begin{bmatrix} -4 & 2 & 2 \\ 2 & -4 & 2 \\ 2 & 2 & -4 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
∴ From the 1st and 2nd equations, $-2x + y + z = 0$, $x - 2y + z = 0 \Rightarrow x = y = z = t$, $t \in \mathbb{R}$.
$$\therefore\; X_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \;\Rightarrow\; \widehat{X}_3 = \frac{X_3}{\|X_3\|} = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$$
The normalized modal matrix of A is
$$P = \begin{bmatrix} \widehat{X}_1 & \widehat{X}_2 & \widehat{X}_3 \end{bmatrix} = \begin{bmatrix} -1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \\ 0 & -2/\sqrt{6} & 1/\sqrt{3} \end{bmatrix}$$
Hence the required orthogonal transformation is
$$X = PY \;\Rightarrow\; \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \\ 0 & -2/\sqrt{6} & 1/\sqrt{3} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$
$$\therefore\; x_1 = -\frac{y_1}{\sqrt{2}} + \frac{y_2}{\sqrt{6}} + \frac{y_3}{\sqrt{3}},\quad x_2 = \frac{y_1}{\sqrt{2}} + \frac{y_2}{\sqrt{6}} + \frac{y_3}{\sqrt{3}},\quad x_3 = -\frac{2y_2}{\sqrt{6}} + \frac{y_3}{\sqrt{3}}$$
â Note that if we substitute these values of $x_1, x_2, x_3$ into the given quadratic form and simplify, we get the canonical form $X^TAX = 3y_1^2 + 3y_2^2 + 9y_3^2$. This is the statement of the principal axis theorem. (Verify!)
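Illustration 7.4 can be verified numerically (a sketch assuming NumPy): `eigh` produces a normalized modal matrix, and substituting X = PY reproduces the canonical form at any point.

```python
import numpy as np

A = np.array([[5.0, 2.0, 2.0], [2.0, 5.0, 2.0], [2.0, 2.0, 5.0]])
lam, P = np.linalg.eigh(A)  # lam ascending: [3, 3, 9]; columns of P orthonormal

# Principal axis theorem: X^T A X = sum(lam_i * y_i^2) under X = P Y.
rng = np.random.default_rng(0)
Y = rng.standard_normal(3)
X = P @ Y
lhs = X @ A @ X
rhs = np.sum(lam * Y**2)
```

Note that P need not equal the hand-computed modal matrix column for column (the λ = 3 eigenspace is two-dimensional, so any orthonormal pair spanning it works), but the canonical form is the same.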
Note: Recall the following quadratic equations and their geometrical nature/name:
1. $x^2 + y^2 = a^2$ (Circle)
2. $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$ (Ellipse)
3. $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 1$ (Hyperbola)
4. $x^2 + y^2 + z^2 = a^2$ (Sphere)
5. $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} = 1$ (Ellipsoid)
6. $\dfrac{x^2}{a^2} \pm \dfrac{y^2}{b^2} \mp \dfrac{z^2}{c^2} = 1$ or $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} - \dfrac{z^2}{c^2} = 1$ (Hyperboloid)
Illustration 7.5 Find the nature of the graph represented by the following equations:
a. $x^2 + 4xy + y^2 = 16$  b. $5x^2 + 5y^2 + 5z^2 + 4xy + 4xz + 4yz = 9$
Solution:
a. The quadratic form corresponding to the given equation is
$$x^2 + 4xy + y^2 = X^TAX, \quad\text{where } X = \begin{bmatrix} x \\ y \end{bmatrix},\; A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$$
The eigenvalues of A are λ = 3, -1, so by the principal axis theorem
$$X^TAX = \lambda_1 y_1^2 + \lambda_2 y_2^2 \;\Rightarrow\; x^2 + 4xy + y^2 = 3y_1^2 - y_2^2 = 16 \quad [\because \text{Given}]$$
$$\therefore\; 3y_1^2 - y_2^2 = 16 \;\Rightarrow\; \frac{y_1^2}{16/3} - \frac{y_2^2}{16} = 1 \;\rightarrow\; \text{Hyperbola} \quad [\text{See the 3rd equation in the table above}]$$
∴ The given quadratic equation represents a hyperbola.
b. The eigenvalues for the given quadratic form are λ = 3, 3, 9. [See Illustration 7.4]
∴ By the principal axis theorem, we have
$$5x^2 + 5y^2 + 5z^2 + 4xy + 4xz + 4yz = 3y_1^2 + 3y_2^2 + 9y_3^2 = 9$$
$$\therefore\; 3y_1^2 + 3y_2^2 + 9y_3^2 = 9 \;\Rightarrow\; \frac{y_1^2}{3} + \frac{y_2^2}{3} + \frac{y_3^2}{1} = 1 \;\rightarrow\; \text{Ellipsoid} \quad [\text{See the 5th equation in the table above}]$$
∴ The given quadratic equation represents the surface of an ellipsoid.
Exercise 7.1
1. Which of the following forms are quadratic forms? If so, express each in the matrix form $X^TAX$ and determine its index, signature, rank and definiteness.
a. $x^2 - 2xy$  b. $3x_1^2 + 7x_2^2$
c. $xy + yz + zx$  d. $4x_1^2 + x_2^2 + 15x_3^2 - 4x_1x_2^2$
2. Reduce the following quadratic forms to canonical form (sum of squares) by using an orthogonal linear transformation, and write the rank, index and signature:
a. $2\left(x_1^2 + x_1x_2 + x_2^2\right)$  b. $2x_1^2 + 5x_2^2 + 3x_3^2 + 4x_1x_2$
c. $2x_1^2 + x_2^2 - 3x_3^2$  d. $3x^2 + 3z^2 + 8xy + 8xz + 8yz$ [Winter-2014]
3. Find the nature of the graph represented by the following equations (name the quadratic):
a. $x^2 + 4xy + 3y^2 = 4$  b. $2x^2 - 4xy + 2y^2 = 1$
c. $5x^2 - 4xy + 8y^2 - 36 = 0$  d. $5x^2 - 2y^2 + 5z^2 + 2xz = 1$
Answers
1. a. Index 1, Signature 0, Rank 2, Indefinite form. b. Index 2, Signature 2, Rank 2, Positive definite form. c. Index 1, Signature 1, Rank 3, Indefinite form. d. Not a quadratic form.
E E E
A matrix is said to be a complex matrix if it has at least one complex entry; otherwise it is known as a real matrix.
e.g.
$$A = \begin{bmatrix} 1 & 2+i \\ -i & 5 \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & 2 & -1 \\ 0 & i & 3 \\ 3 & 7 & 4 \end{bmatrix}, \quad\text{where } i^2 = -1$$
7.6 Conjugate Matrix
The matrix obtained by replacing each element of a complex matrix A by its complex conjugate is said to be the conjugate matrix of A and is denoted by $\overline{A}$.
e.g.
$$A = \begin{bmatrix} 1 & 2+i \\ -i & 5 \end{bmatrix} \;\Rightarrow\; \overline{A} = \begin{bmatrix} 1 & 2-i \\ i & 5 \end{bmatrix}$$
A square matrix A is said to be:
1. Hermitian if $A^* = A$.
2. Skew-Hermitian if $A^* = -A$.
3. Unitary if $A^* = A^{-1}$, that is $A^*A = AA^* = I_n$.
4. Normal if $A^*A = AA^*$.
* Important:
â Observe that Hermitian and skew-Hermitian matrices are the complex analogues of real symmetric and skew-symmetric matrices.
â Every Hermitian matrix is normal, since $A^*A = AA = AA^*$, and every unitary matrix is normal, since $A^*A = I = AA^*$.
7.9 Properties
1. $\left(A^*\right)^* = A$  2. $(A \pm B)^* = A^* \pm B^*$
3. $(kA)^* = \bar{k}A^*$  4. $(AB)^* = B^*A^*$
5. $\overline{\overline{A}} = A$  6. $\overline{AB} = \overline{A}\,\overline{B}$
7. $\overline{kA} = \bar{k}\,\overline{A}$  8. $\overline{A \pm B} = \overline{A} \pm \overline{B}$
9. $\det\left(\overline{A}\right) = \overline{\det(A)}$
Illustration 7.6 Find k, l and m to make A a Hermitian matrix, where $A = \begin{bmatrix} -1 & k & -i \\ 3-5i & 0 & m \\ l & 2+4i & 2 \end{bmatrix}$.
Solution: For A to be Hermitian we need $a_{ij} = \overline{a_{ji}}$:
$$k = a_{12} = \overline{a_{21}} = \overline{3 - 5i} = 3 + 5i \qquad \therefore\; k = 3 + 5i$$
$$l = a_{31} = \overline{a_{13}} = \overline{-i} = i \qquad \therefore\; l = i$$
$$m = a_{23} = \overline{a_{32}} = \overline{2 + 4i} = 2 - 4i \qquad \therefore\; m = 2 - 4i$$
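These definitions are easy to check numerically (a sketch assuming NumPy; the helper names are mine):

```python
import numpy as np

def star(A):
    return A.conj().T  # A* = conjugate transpose

def is_hermitian(A):
    return np.allclose(star(A), A)

def is_unitary(A):
    return np.allclose(star(A) @ A, np.eye(A.shape[0]))

def is_normal(A):
    return np.allclose(star(A) @ A, A @ star(A))

# Illustration 7.6 with k = 3+5i, l = i, m = 2-4i substituted:
A = np.array([[-1, 3 + 5j, -1j],
              [3 - 5j, 0, 2 - 4j],
              [1j, 2 + 4j, 2]])
```

With the found values, A passes the Hermitian test, and (as noted above) every Hermitian matrix also passes the normality test.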
Exercise 7.2
1. In each part find $A^*$:
a. $A = \begin{bmatrix} 2i & 1-i \\ 4 & 3+i \\ 5+i & 0 \end{bmatrix}$  b. $A = \begin{bmatrix} 2i & 1-i & -1+i \\ 4 & 5-7i & -i \\ i & 3 & 1 \end{bmatrix}$
c. $A = \begin{bmatrix} -2 & 1-i & -1+i \\ 1+i & 0 & 3 \\ -1-i & 3 & 5 \end{bmatrix}$  d. $A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
3. Prove that,
Answers
1. a. $\begin{bmatrix} -2i & 4 & 5-i \\ 1+i & 3-i & 0 \end{bmatrix}$  b. $\begin{bmatrix} -2i & 4 & -i \\ 1+i & 5+7i & 3 \\ -1-i & i & 1 \end{bmatrix}$  2. a, c, d yes; b no.
E E E
Let V be a real vector space. A mapping $\langle\,,\,\rangle : V \times V \to \mathbb{R}$ is said to be an inner product on V if it satisfies the following axioms for all $u, v, w \in V$ and $\alpha \in \mathbb{R}$:
1. $\langle u, v \rangle = \langle v, u \rangle$ [Symmetry]
2. $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ [Additivity]
3. $\langle \alpha u, v \rangle = \alpha \langle u, v \rangle$ [Homogeneity]
4. $\langle u, u \rangle \geq 0$ [Positivity]
5. $\langle u, u \rangle = 0 \;\Leftrightarrow\; u = 0$
A vector space together with an inner product is called an inner product space.
8.2 Properties of Inner Product
Let V be a real inner product space. For $u, v, w \in V$ and $\alpha \in \mathbb{R}$:
1. $\langle 0, u \rangle = \langle u, 0 \rangle = 0$
2. $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$
3. $\langle u, \alpha v \rangle = \alpha \langle u, v \rangle$
4. $\langle u - v, w \rangle = \langle u, w \rangle - \langle v, w \rangle$
5. $\langle u, v - w \rangle = \langle u, v \rangle - \langle u, w \rangle$
2. Weighted inner product on $\mathbb{R}^n$: Let $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ be vectors in $\mathbb{R}^n$ and let $w_1, w_2, \ldots, w_n$ be positive numbers, which we shall call weights. Then the weighted inner product is defined as
$$\langle u, v \rangle = w_1u_1v_1 + w_2u_2v_2 + \cdots + w_nu_nv_n$$
e.g. For $u = (1, 2, -2)$, $v = (1, 1, 3) \in \mathbb{R}^3$ with $w_1 = 2$, $w_2 = \tfrac{1}{2}$, $w_3 = 1$:
$$\langle u, v \rangle = 2(1)(1) + \tfrac{1}{2}(2)(1) + 1(-2)(3) = 2 + 1 - 6 = -3$$
3. Inner product generated by a matrix: Let $u, v \in \mathbb{R}^n$ and let A be an invertible $n \times n$ matrix. Then the inner product generated by A is defined by
$$\langle u, v \rangle = Au \cdot Av$$
e.g. Let $u = (1, 2)$, $v = (2, -3) \in \mathbb{R}^2$ and $A = \begin{bmatrix} 1 & -1 \\ 4 & 2 \end{bmatrix}$.
$$Au = \begin{bmatrix} 1 & -1 \\ 4 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -1 \\ 8 \end{bmatrix}, \qquad Av = \begin{bmatrix} 1 & -1 \\ 4 & 2 \end{bmatrix}\begin{bmatrix} 2 \\ -3 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \end{bmatrix}$$
$$\therefore\; \langle u, v \rangle = Au \cdot Av = (-1, 8) \cdot (5, 2) = -5 + 16 = 11$$
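Both constructions can be sketched in code (assuming NumPy; the function names are mine):

```python
import numpy as np

def weighted_ip(u, v, w):
    # <u, v> = sum_i w_i * u_i * v_i
    return float(np.sum(np.asarray(w, float) * np.asarray(u, float) * np.asarray(v, float)))

def matrix_ip(u, v, A):
    # <u, v> = Au . Av for an invertible matrix A
    return float((A @ u) @ (A @ v))

ip1 = weighted_ip([1, 2, -2], [1, 1, 3], [2, 0.5, 1])  # weighted example above
A = np.array([[1.0, -1.0], [4.0, 2.0]])
ip2 = matrix_ip(np.array([1.0, 2.0]), np.array([2.0, -3.0]), A)  # matrix example above
```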
4. Inner product on $M_{22}$: Let $A = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}$, $B = \begin{bmatrix} b_1 & b_2 \\ b_3 & b_4 \end{bmatrix} \in M_{22}$. Then the standard inner product on $M_{22}$ is defined by
$$\langle A, B \rangle = \operatorname{trace}\left(B^TA\right) = a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4$$
e.g. Let $A = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix}$, $B = \begin{bmatrix} -2 & 3 \\ 0 & 5 \end{bmatrix} \in M_{22}$ $\Rightarrow$ $\langle A, B \rangle = -2 + 9 + 0 + 10 = 17$.
5. Inner product on $P_2(x)$: Let $p = a_0 + a_1x + a_2x^2$, $q = b_0 + b_1x + b_2x^2 \in P_2(x)$. Then the standard inner product on $P_2(x)$ is defined by
$$\langle p, q \rangle = a_0b_0 + a_1b_1 + a_2b_2$$
Illustration 8.1 Let $u = (u_1, u_2)$ and $v = (v_1, v_2)$ be vectors in $\mathbb{R}^2$. Verify that the weighted Euclidean inner product $\langle u, v \rangle = 3u_1v_1 + 2u_2v_2$ satisfies the inner product axioms. [Winter-2017]
Solution: Axioms 1-4 follow directly from the definition. For axiom 5:
$$\langle u, u \rangle = 0 \;\Leftrightarrow\; 3u_1^2 + 2u_2^2 = 0 \;\Leftrightarrow\; u_1 = u_2 = 0 \;\Leftrightarrow\; u = (u_1, u_2) = (0, 0) = 0$$
Hence, the given product satisfies all the inner product axioms.
Also, u and v are said to be orthogonal to each other if $\langle u, v \rangle = 0$ (i.e. $\theta = 90°$), and this is denoted by $u \perp v$.
Illustration 8.2 Let $\mathbb{R}^4$ have the Euclidean inner product. Find the cosine of the angle θ and the distance between the vectors $u = (4, 3, 2, -1)$ and $v = (-2, 1, 2, 3)$. [Winter-2017]
Solution: With respect to the standard inner product in $\mathbb{R}^4$, we have
$$\langle u, v \rangle = u \cdot v = -8 + 3 + 4 - 3 = -4$$
$$\|u\| = \sqrt{16 + 9 + 4 + 1} = \sqrt{30}, \qquad \|v\| = \sqrt{4 + 1 + 4 + 9} = \sqrt{18} = 3\sqrt{2}$$
$$\therefore\; \cos\theta = \frac{\langle u, v \rangle}{\|u\|\,\|v\|} = \frac{-4}{3\sqrt{60}} = -\frac{2}{3\sqrt{15}}, \qquad d(u, v) = \|u - v\| = \|(6, 2, 0, -4)\| = \sqrt{56} = 2\sqrt{14}$$
Illustration 8.3 Find $\|p\|$, $d(p, q)$ and $\cos\theta$ using the standard inner product on $P_2$, where $p = -3 - x + x^2$, $q = 2 + x^2$.
Solution: The standard inner product on $P_2$ is $\langle p, q \rangle = a_0b_0 + a_1b_1 + a_2b_2$.
â Norm of p:
$$\|p\| = \sqrt{\langle p, p \rangle} = \sqrt{a_0^2 + a_1^2 + a_2^2} = \sqrt{(-3)^2 + (-1)^2 + (1)^2} = \sqrt{11} \qquad \left[\because\; p = -3 - x + x^2\right]$$
â Distance: $d(p, q) = \|p - q\| = \|-5 - x\| = \sqrt{(-5)^2 + (-1)^2} = \sqrt{26}$
â Angle:
$$\cos\theta = \frac{\langle p, q \rangle}{\|p\|\,\|q\|} = \frac{-6 + 0 + 1}{\sqrt{11}\,\sqrt{5}} = -\frac{5}{\sqrt{55}}$$
8.5 Results
1. $\|u\| \geq 0$  2. $\|\alpha u\| = |\alpha|\,\|u\|$
3. Cauchy-Schwarz's inequality: Let V be a real inner product space and $u, v \in V$. Then $|\langle u, v \rangle| \leq \|u\|\,\|v\|$.
4. Triangle inequality: Let V be a real inner product space and $u, v \in V$. Then
$$\|u + v\| \leq \|u\| + \|v\|$$
Proof:
$$\|u + v\|^2 = \langle u + v, u + v \rangle \qquad [\because \text{Definition of norm}]$$
$$= \langle u, u + v \rangle + \langle v, u + v \rangle \qquad [\because \text{Additivity}]$$
$$= \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle$$
$$= \|u\|^2 + 2\langle u, v \rangle + \|v\|^2 \qquad [\because \langle u, v \rangle = \langle v, u \rangle]$$
$$\leq \|u\|^2 + 2\,|\langle u, v \rangle| + \|v\|^2 \qquad [\because \langle u, v \rangle \leq |\langle u, v \rangle|]$$
$$\leq \|u\|^2 + 2\,\|u\|\,\|v\| + \|v\|^2 \qquad [\because \text{Cauchy-Schwarz's inequality}]$$
$$\therefore\; \|u + v\|^2 \leq \left(\|u\| + \|v\|\right)^2 \;\Rightarrow\; \|u + v\| \leq \|u\| + \|v\|. \quad \text{Proved.}$$
5. Generalized Pythagoras Theorem: If u and v are orthogonal vectors of a real inner product space V, then
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$
Proof: Since u and v are orthogonal vectors, we have $\langle u, v \rangle = 0$. Now
$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + 0 + 0 + \|v\|^2$$
$$\therefore\; \|u + v\|^2 = \|u\|^2 + \|v\|^2. \quad \text{Proved.}$$
Illustration 8.4 Verify Cauchy-Schwarz's inequality for $u = (-2, 1)$ and $v = (1, 0)$, using the inner product $\langle u, v \rangle = 4u_1v_1 + 5u_2v_2$.
Solution: For the given weighted inner product,
$$\|u\| = \sqrt{\langle u, u \rangle} = \sqrt{4u_1^2 + 5u_2^2} = \sqrt{4(-2)^2 + 5(1)^2} = \sqrt{21} \qquad [\because u = (-2, 1)]$$
Similarly, $\|v\| = \sqrt{4(1)^2 + 5(0)^2} = 2$  $[\because v = (1, 0)]$
$$\therefore\; \|u\|\,\|v\| = 2\sqrt{21} \tag{8.1}$$
Also, $\langle u, v \rangle = 4(-2)(1) + 5(1)(0) = -8$
$$\therefore\; |\langle u, v \rangle| = 8 \tag{8.2}$$
From (8.1) and (8.2), since $8 \leq 2\sqrt{21}$, we have $|\langle u, v \rangle| \leq \|u\|\,\|v\|$.
Hence, Cauchy-Schwarz's inequality is satisfied.
Illustration 8.5 If u and v are unit vectors such that $\langle u, v \rangle = -1$, evaluate $\|2u - v\|$.
Solution: Using the definitions of inner product and norm,
$$\|2u - v\|^2 = \langle 2u - v, 2u - v \rangle$$
$$= \langle 2u - v, 2u \rangle + \langle 2u - v, -v \rangle$$
$$= \langle 2u, 2u \rangle + \langle -v, 2u \rangle + \langle 2u, -v \rangle + \langle -v, -v \rangle$$
$$= 4\langle u, u \rangle - 2\langle v, u \rangle - 2\langle u, v \rangle + \langle v, v \rangle \qquad [\because \text{Axiom 3 of the definition}]$$
$$= 4\langle u, u \rangle - 4\langle u, v \rangle + \langle v, v \rangle \qquad [\because \langle v, u \rangle = \langle u, v \rangle]$$
$$= 4\|u\|^2 - 4\langle u, v \rangle + \|v\|^2 \qquad [\because \text{Definition of norm}]$$
$$= 4(1) - 4(-1) + (1) \qquad [\because \text{Given}]$$
$$\therefore\; \|2u - v\|^2 = 9 \;\Rightarrow\; \|2u - v\| = 3$$
Illustration 8.6 Use the Cauchy-Schwarz inequality to prove that for all real values of a, b, θ,
$$(a\cos\theta + b\sin\theta)^2 \leq a^2 + b^2$$
Exercise 8.1
1. Show that $\langle u, v \rangle = 9u_1v_1 + 4u_2v_2$ is the inner product on $\mathbb{R}^2$ generated by the matrix $A = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$. Also find $\langle u, v \rangle$ for $u = (-3, 2)$ and $v = (1, 7)$.
3. Let $p = p(x)$, $q = q(x) \in P_2$. Show that $\langle p, q \rangle = p(0)q(0) + p\!\left(\tfrac{1}{2}\right)q\!\left(\tfrac{1}{2}\right) + p(1)q(1)$ is an inner product on $P_2$.
4. In each part use the given inner product on $\mathbb{R}^2$ to find $\|w\|$ and $d(u, v)$, where $w = (-1, 3)$.
7. Prove that $\|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2$.
Answers
â The set of all vectors of V that are orthogonal to W is called the orthogonal complement of W and is denoted by $W^\perp$. That is,
$$W^\perp = \left\{ u \in V : \langle u, w \rangle = 0,\; \forall\, w \in W \right\}$$
8.7 Properties of W ⊥
Illustration 8.7 Let $\mathbb{R}^4$ have the Euclidean inner product and let $u = (-1, 1, 0, 2)$. Determine whether the vector u is orthogonal to the subspace spanned by the vectors $w_1 = (1, 0, 0, 0)$, $w_2 = (1, -1, 3, 0)$ and $w_3 = (4, 0, 9, 2)$. (Or: is $u \in W^\perp$?)
Solution: In order to check whether the given vector is orthogonal to the subspace, it is sufficient to check orthogonality with each of the spanning vectors [See property 5 of Section 8.7].
Since $\langle u, w_1 \rangle = u \cdot w_1 = (-1, 1, 0, 2) \cdot (1, 0, 0, 0) = -1 \neq 0$, u is not orthogonal to $w_1$. Hence u is not orthogonal to the given subspace W; that is, $u \notin W^\perp$.
8.8 Results
1. The null space of A and the row space of A are orthogonal complements in $\mathbb{R}^n$ with respect to the Euclidean inner product. That is,
$$W = \operatorname{row}(A) \;\Leftrightarrow\; W^\perp = \operatorname{null}(A)$$
2. The null space of $A^T$ and the column space of A are orthogonal complements in $\mathbb{R}^n$ with respect to the Euclidean inner product. That is,
$$W = \operatorname{col}(A) \;\Leftrightarrow\; W^\perp = \operatorname{null}\left(A^T\right)$$
Illustration 8.8 Find the orthogonal complement $W^\perp$ of the subspace $W = \operatorname{span}\{(1, -1, 3), (5, -4, -4), (7, -6, 2)\}$ of $\mathbb{R}^3$.
Solution: Consider the matrix A obtained by putting the given vectors in rows, that is $A = \begin{bmatrix} 1 & -1 & 3 \\ 5 & -4 & -4 \\ 7 & -6 & 2 \end{bmatrix}$. Therefore the given subspace is $W = \operatorname{row}(A)$ and hence its orthogonal complement is $W^\perp = \operatorname{null}(A)$ [See Result 1 of Section 8.8].
Now, for the null space of A, reduce A to row echelon form:
$$A = \begin{bmatrix} 1 & -1 & 3 \\ 5 & -4 & -4 \\ 7 & -6 & 2 \end{bmatrix} \xrightarrow{R_2 - 5R_1,\; R_3 - 7R_1} \begin{bmatrix} 1 & -1 & 3 \\ 0 & 1 & -19 \\ 0 & 1 & -19 \end{bmatrix} \xrightarrow{R_3 - R_2} \begin{bmatrix} 1 & -1 & 3 \\ 0 & 1 & -19 \\ 0 & 0 & 0 \end{bmatrix}$$
$$\Rightarrow\; x = 16t,\; y = 19t,\; z = t,\; t \in \mathbb{R}$$
Hence,
$$W^\perp = \operatorname{null}(A) = \left\{ X = (x, y, z) : AX = 0 \right\} = \{(16t, 19t, t) : t \in \mathbb{R}\} = \operatorname{span}\{(16, 19, 1)\} \qquad [\text{Straight line}]$$
* Important:
Observe that,
1. The subspace W is the row space of A, and hence the dimension of W is the number of pivot rows of the echelon form. Therefore $\dim(W) = 2$.
2. The orthogonal complement $W^\perp$ is the null space of A, and hence the dimension of $W^\perp$ is the number of non-pivot columns of the echelon form. That is, $\dim\left(W^\perp\right) = 1$.
Illustration 8.9 If the subspace W is the intersection of the two planes $x + y + z = 0$ and $x - y + z = 0$ in $\mathbb{R}^3$, find its orthogonal complement $W^\perp$.
Solution: Solving the two plane equations simultaneously (subtracting gives $y = 0$, adding gives $x = -z$),
$$W = \left\{ (x, y, z) : x = -z = t,\; y = 0,\; t \in \mathbb{R} \right\} = \{(t, 0, -t) : t \in \mathbb{R}\} \qquad [\text{Straight line}]$$
$$\therefore\; W = \operatorname{span}\{(1, 0, -1)\} = \operatorname{row}(A), \;\text{where } A = \begin{bmatrix} 1 & 0 & -1 \end{bmatrix} \;\Rightarrow\; W^\perp = \operatorname{null}(A)$$
Now, for the null space of A, the matrix A is already in echelon form (it has only one row). Hence the homogeneous system $AX = 0$, i.e. $x - z = 0$, has the two-parameter solution $x = t_2$, $y = t_1$, $z = t_2$, $t_1, t_2 \in \mathbb{R}$.
Hence,
$$W^\perp = \operatorname{null}(A) = \{(t_2, t_1, t_2) : t_1, t_2 \in \mathbb{R}\} = \{t_1(0, 1, 0) + t_2(1, 0, 1) : t_1, t_2 \in \mathbb{R}\}$$
$$\therefore\; W^\perp = \operatorname{span}\{(0, 1, 0), (1, 0, 1)\} \qquad [\text{Plane}]$$
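Result 1 gives a direct way to compute $W^\perp$ numerically (a sketch assuming NumPy; the function name is mine). The null space is read off from the SVD: the right-singular vectors beyond the rank span null(A).

```python
import numpy as np

def orth_complement(spanning_rows, tol=1e-10):
    # Rows of A span W; right-singular vectors past rank(A) span null(A) = W-perp.
    A = np.atleast_2d(np.asarray(spanning_rows, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:]  # orthonormal basis of W-perp, one vector per row

# W = span{(1,-1,3), (5,-4,-4), (7,-6,2)}  (Illustration 8.8 above)
A = np.array([[1, -1, 3], [5, -4, -4], [7, -6, 2]], dtype=float)
B = orth_complement(A)  # one basis vector, proportional to (16, 19, 1)
```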
â A subset of an inner product space V is called an orthogonal set if its vectors are pairwise orthogonal, that is, all pairs of distinct vectors in the set are orthogonal.
Hence, if $\{u_1, u_2, u_3, \ldots, u_n\}$ is an orthogonal set, then
$$\langle u_i, u_j \rangle = 0 \;\;\forall\, i \neq j, \qquad \langle u_i, u_i \rangle \neq 0$$
â An orthogonal set in which each vector has unit norm (is a unit vector) is called an orthonormal set.
e.g. $\{(0,1,0), (1,0,1), (1,0,-1)\}$ is an orthogonal subset of $\mathbb{R}^3$, whereas $\{(1,0,0), (0,1,0), (0,0,1)\}$ is an orthonormal subset of $\mathbb{R}^3$.
Note: Every orthogonal (or orthonormal) set is linearly independent, and hence, in particular, an orthogonal subset of 3 vectors of $\mathbb{R}^3$ is always a basis for $\mathbb{R}^3$. In general this is true for $\mathbb{R}^n$.
e.g. $\{(0,1,0), (1,0,1), (1,0,-1)\}$ is an orthogonal subset of $\mathbb{R}^3$ and hence it is a basis for $\mathbb{R}^3$.
Illustration 8.10 Find the orthogonal projection of $u = (1, -1, 2)$ on $v = (2, 0, 2)$ with respect to the standard Euclidean inner product in $\mathbb{R}^3$.
Solution:
$$\operatorname{proj}_v u = \frac{\langle u, v \rangle}{\|v\|^2}\, v = \frac{2 + 0 + 4}{4 + 0 + 4}\,(2, 0, 2) = \frac{6}{8}\,(2, 0, 2) \;\Rightarrow\; \operatorname{proj}_v u = \left(\frac{3}{2}, 0, \frac{3}{2}\right)$$
Exercise 8.2
1. Do there exist k and l such that the vectors $u = (2, k, 6)$, $v = (l, 5, 3)$ and $w = (1, 2, 3)$ are mutually orthogonal with respect to the Euclidean inner product?
[Hint: Take $u \cdot v = v \cdot w = w \cdot u = 0$.]
2. Let $\mathbb{R}^3$ have the Euclidean inner product, and let $u = (1, 1, -1)$ and $v = (6, 7, -15)$. If $\|ku + v\| = 13$, then what is k?
[Hint: $\|ku + v\|^2 = 169 \Rightarrow \langle ku + v, ku + v \rangle = 169$]
4. Let R4 have the Euclidean inner product. Find two unit vectors that are orthogonal to the three vectors
u = (2, 1, −4, 0) , v = (−1, −1, 2, 2) and w = (3, 2, 5, 4) .
7. Find the orthogonal complement $W^\perp$ and hence a basis for $W^\perp$. Also verify that $\dim W + \dim W^\perp = \dim V$, given:
a. W = span {(1, 4, −2) , (2, 1, −1)} in R3 . b. W = span {(1, −1, 0, 2) , (0, 1, 2, −1)} in R4 .
c. W = span {(1, −2, 1)} in R3 .
8. Find the equation of W ⊥ , for the each of the following subspace: [Hint: See Illustration 8.9]
Answers
E E E
â If each vector of an orthogonal basis is a unit vector, then it is called an orthonormal basis of V.
e.g. $\{(1,0,0), (0,1,0), (0,0,1)\}$ is an orthonormal basis for $\mathbb{R}^3$, whereas $\{(0,2,0), (1,0,-1), (1,0,1)\}$ is an orthogonal basis for $\mathbb{R}^3$; it can be reduced to an orthonormal basis by normalizing each vector.
Let $S = \{v_1, v_2, v_3, \ldots, v_n\}$ be an orthonormal basis for an inner product space V, and let u be any vector in V. Then
$$u = \langle u, v_1 \rangle v_1 + \langle u, v_2 \rangle v_2 + \cdots + \langle u, v_n \rangle v_n$$
Illustration 8.11 Verify that the set $\left\{\left(-\tfrac{3}{5}, \tfrac{4}{5}, 0\right), \left(\tfrac{4}{5}, \tfrac{3}{5}, 0\right), (0, 0, 1)\right\}$ is an orthonormal basis for $\mathbb{R}^3$.
Solution: The vectors are pairwise orthogonal, and
$$\|v_1\| = \sqrt{\left(-\tfrac{3}{5}\right)^2 + \left(\tfrac{4}{5}\right)^2 + 0^2} = \sqrt{\tfrac{9}{25} + \tfrac{16}{25}} = 1, \quad \|v_2\| = \sqrt{\left(\tfrac{4}{5}\right)^2 + \left(\tfrac{3}{5}\right)^2 + 0^2} = 1, \quad \|v_3\| = \sqrt{0 + 0 + 1^2} = 1$$
∴ The given set is an orthonormal subset of $\mathbb{R}^3$ containing three vectors, so it is an orthonormal basis for $\mathbb{R}^3$.
â With the help of this process we can construct an orthogonal basis from a given basis, and on normalizing each vector we obtain an orthonormal basis.
â Consider the basis $S = \{u_1, u_2, u_3, \ldots, u_n\}$ of an inner product space V.
â The orthogonal basis of the inner product space V is given by $B = \{w_1, w_2, w_3, \ldots, w_n\}$, where
$$w_1 = u_1$$
$$w_2 = u_2 - \operatorname{proj}_{w_1} u_2 = u_2 - \frac{\langle w_1, u_2 \rangle}{\|w_1\|^2}\, w_1$$
$$w_3 = u_3 - \operatorname{proj}_{w_1} u_3 - \operatorname{proj}_{w_2} u_3 = u_3 - \frac{\langle w_1, u_3 \rangle}{\|w_1\|^2}\, w_1 - \frac{\langle w_2, u_3 \rangle}{\|w_2\|^2}\, w_2, \quad\text{and so on.}$$
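The recursion above translates directly into code (a sketch assuming NumPy and the Euclidean inner product; the function name is mine):

```python
import numpy as np

def gram_schmidt(vectors, normalize=True, tol=1e-12):
    basis = []
    for u in np.asarray(vectors, dtype=float):
        # subtract the projection of u on every vector found so far
        w = u - sum(((b @ u) / (b @ b)) * b for b in basis)
        if np.linalg.norm(w) > tol:  # drop vectors that are linearly dependent
            basis.append(w)
    if normalize:
        basis = [w / np.linalg.norm(w) for w in basis]
    return basis
```

Applied to the basis of Illustration 8.12 below, it returns (1, 0, 0), (0, 7, -2)/√53 and (0, 2, 7)/√53.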
Illustration 8.12 Let $\mathbb{R}^3$ have the Euclidean inner product. Transform the basis $S = \{u_1, u_2, u_3\}$ into an orthonormal basis using the Gram-Schmidt process, where $u_1 = (1, 0, 0)$, $u_2 = (3, 7, -2)$ and $u_3 = (0, 4, 1)$. [Summer-2017]
Solution: The given inner product is the standard Euclidean inner product, that is $\langle u, v \rangle = u \cdot v$. By the Gram-Schmidt process,
$$w_1 = u_1 = (1, 0, 0) \;\Rightarrow\; \widehat{w}_1 = \frac{w_1}{\|w_1\|} = \frac{(1, 0, 0)}{\sqrt{1 + 0 + 0}} \;\Rightarrow\; \widehat{w}_1 = (1, 0, 0)$$
$$w_2 = u_2 - \operatorname{proj}_{w_1} u_2 = u_2 - \frac{\langle w_1, u_2 \rangle}{\|w_1\|^2}\, w_1 = u_2 - \frac{w_1 \cdot u_2}{\|w_1\|^2}\, w_1 \qquad [\because \langle w_1, u_2 \rangle = w_1 \cdot u_2]$$
$$= (3, 7, -2) - \frac{(1, 0, 0) \cdot (3, 7, -2)}{1}\,(1, 0, 0) = (3, 7, -2) - 3(1, 0, 0)$$
$$\therefore\; w_2 = (0, 7, -2) \;\Rightarrow\; \widehat{w}_2 = \frac{w_2}{\|w_2\|} = \frac{(0, 7, -2)}{\sqrt{0 + 49 + 4}} \;\Rightarrow\; \widehat{w}_2 = \left(0, \frac{7}{\sqrt{53}}, -\frac{2}{\sqrt{53}}\right)$$
¹Jorgen Pedersen Gram; Danish, 1850-1916, and Erhard Schmidt; Berlin, 1876-1959.
$$w_3 = u_3 - \operatorname{proj}_{w_1} u_3 - \operatorname{proj}_{w_2} u_3 = u_3 - \frac{w_1 \cdot u_3}{\|w_1\|^2}\, w_1 - \frac{w_2 \cdot u_3}{\|w_2\|^2}\, w_2$$
$$= (0, 4, 1) - \frac{(1, 0, 0) \cdot (0, 4, 1)}{1}\,(1, 0, 0) - \frac{(0, 7, -2) \cdot (0, 4, 1)}{53}\,(0, 7, -2)$$
$$= (0, 4, 1) - 0 - \frac{26}{53}\,(0, 7, -2) = (0, 4, 1) - \left(0, \frac{182}{53}, -\frac{52}{53}\right)$$
$$w_3 = \left(0, \frac{30}{53}, \frac{105}{53}\right) = \frac{15}{53}\,(0, 2, 7)$$
$$\Rightarrow\; \widehat{w}_3 = \frac{w_3}{\|w_3\|} = \frac{(0, 2, 7)}{\sqrt{0 + 4 + 49}} \;\Rightarrow\; \widehat{w}_3 = \left(0, \frac{2}{\sqrt{53}}, \frac{7}{\sqrt{53}}\right)$$
∴ The required orthonormal basis is
$$B = \{\widehat{w}_1, \widehat{w}_2, \widehat{w}_3\} = \left\{ (1, 0, 0),\; \left(0, \tfrac{7}{\sqrt{53}}, -\tfrac{2}{\sqrt{53}}\right),\; \left(0, \tfrac{2}{\sqrt{53}}, \tfrac{7}{\sqrt{53}}\right) \right\}$$
Note: If only an orthogonal basis is required, the vectors need not be normalized. In that case the orthogonal basis is $B = \{w_1, w_2, w_3\} = \left\{ (1, 0, 0),\; (0, 7, -2),\; \left(0, \tfrac{30}{53}, \tfrac{105}{53}\right) \right\}$.
Illustration 8.13 Let $\mathbb{R}^3$ have the Euclidean inner product. Find an orthonormal basis for the space W spanned by $(0, 1, 2)$, $(-1, 0, 1)$, $(-1, 1, 3)$.
Solution: In order to find an orthogonal basis of W, first of all we find a basis for W.
Observe that, for the matrix of column vectors of the given set, $A = \begin{bmatrix} 0 & -1 & -1 \\ 1 & 0 & 1 \\ 2 & 1 & 3 \end{bmatrix}$, $\det(A) = 0$. Therefore the given set is linearly dependent and hence it is not a basis. To remove the linearly dependent vector, reduce A to row echelon form:
$$A = \begin{bmatrix} 0 & -1 & -1 \\ 1 & 0 & 1 \\ 2 & 1 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$$
Discarding the third vector (corresponding to the non-pivot column) gives the basis for the subspace W as $S = \{(0, 1, 2), (-1, 0, 1)\} = \{u_1, u_2\}$.
By the Gram-Schmidt process,
$$w_1 = u_1 = (0, 1, 2) \;\Rightarrow\; \widehat{w}_1 = \frac{w_1}{\|w_1\|} = \frac{1}{\sqrt{5}}(0, 1, 2) \;\Rightarrow\; \widehat{w}_1 = \left(0, \tfrac{1}{\sqrt{5}}, \tfrac{2}{\sqrt{5}}\right)$$
$$w_2 = u_2 - \operatorname{proj}_{w_1} u_2 = u_2 - \frac{\langle w_1, u_2 \rangle}{\|w_1\|^2}\, w_1 = u_2 - \frac{w_1 \cdot u_2}{\|w_1\|^2}\, w_1 \qquad [\because \text{Euclidean inner product}]$$
$$= (-1, 0, 1) - \frac{(0, 1, 2) \cdot (-1, 0, 1)}{0^2 + 1^2 + 2^2}\,(0, 1, 2) = (-1, 0, 1) - \frac{2}{5}\,(0, 1, 2) = \left(-1, -\tfrac{2}{5}, \tfrac{1}{5}\right)$$
$$w_2 = \frac{1}{5}(-5, -2, 1) \;\Rightarrow\; \widehat{w}_2 = \frac{w_2}{\|w_2\|} = \frac{1}{\sqrt{30}}(-5, -2, 1) = \left(\tfrac{-5}{\sqrt{30}}, \tfrac{-2}{\sqrt{30}}, \tfrac{1}{\sqrt{30}}\right)$$
∴ The required orthonormal basis for the given subspace W is $B = \{\widehat{w}_1, \widehat{w}_2\} = \left\{ \left(0, \tfrac{1}{\sqrt{5}}, \tfrac{2}{\sqrt{5}}\right), \left(\tfrac{-5}{\sqrt{30}}, \tfrac{-2}{\sqrt{30}}, \tfrac{1}{\sqrt{30}}\right) \right\}$.
Exercise 8.3
1. In each part, an orthonormal basis relative to the Euclidean inner product is given; find the coordinates of w with respect to that basis:
a. $w = (3, 7)$, $u_1 = \left(\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right)$, $u_2 = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$.
b. $w = (-1, 0, 2)$, $u_1 = \left(\tfrac{2}{3}, -\tfrac{2}{3}, \tfrac{1}{3}\right)$, $u_2 = \left(\tfrac{2}{3}, \tfrac{1}{3}, -\tfrac{2}{3}\right)$, $u_3 = \left(\tfrac{1}{3}, \tfrac{2}{3}, \tfrac{2}{3}\right)$.
[See Illustration 8.11]
3. Let R3 have an Euclidean inner product, Find the orthonormal basis for the space spanned by (1, −1, 2) ,
(1, 1, 0) , (1, 0, 1) .
[See Illustration 8.13]
4. Let $\{v_1, v_2, v_3\}$ be an orthonormal basis for an inner product space V. Show that
$$\|w\|^2 = \langle w, v_1 \rangle^2 + \langle w, v_2 \rangle^2 + \langle w, v_3 \rangle^2, \quad \forall\, w \in V.$$
[Hint: For an orthonormal basis S, $(w)_S = \left(\langle w, v_1 \rangle, \langle w, v_2 \rangle, \langle w, v_3 \rangle\right)$.]
Answers
1. a. $\left(-2\sqrt{2},\; 5\sqrt{2}\right)$  b. $(0, -2, 1)$
2. a. $B = \left\{ \left(\tfrac{1}{\sqrt{10}}, \tfrac{3}{\sqrt{10}}\right), \left(\tfrac{3}{\sqrt{10}}, -\tfrac{1}{\sqrt{10}}\right) \right\}$  b. $B = \left\{ \left(\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}\right), \left(-\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\right), \left(\tfrac{1}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}, -\tfrac{2}{\sqrt{6}}\right) \right\}$
E E E
If W is a subspace of an inner product space V, every $u \in V$ can be written uniquely as
$$u = w_1 + w_2,$$
where $w_1 = \operatorname{proj}_W u$ lies in W and $w_2 = \operatorname{proj}_{W^\perp} u = u - \operatorname{proj}_W u$ is orthogonal to W.
Note: In case of an orthogonal basis, first transform it into an orthonormal basis by normalizing each vector and then proceed further.
[Figures: projection on a line; projection on a plane]
Illustration 8.14 Let W be the subspace of $\mathbb{R}^3$ spanned by the orthogonal vectors $v_1 = (0, 1, 0)$ and $v_2 = \left(-\tfrac{4}{5}, 0, \tfrac{3}{5}\right)$. Find the projection of $u = (1, 1, 1)$ on W, and also obtain the component of u orthogonal to W.
Solution: For the given orthogonal vectors $v_1$ and $v_2$, observe that $\|v_1\| = \|v_2\| = 1$. Hence $\{v_1, v_2\}$ forms an orthonormal basis for the subspace W.
∴ The orthogonal projection of $u = (1, 1, 1)$ on W is given by
$$\operatorname{proj}_W u = \langle u, v_1 \rangle v_1 + \langle u, v_2 \rangle v_2 = (1)(0, 1, 0) + \left(-\tfrac{1}{5}\right)\left(-\tfrac{4}{5}, 0, \tfrac{3}{5}\right) \;\Rightarrow\; \operatorname{proj}_W u = \left(\tfrac{4}{25}, 1, -\tfrac{3}{25}\right)$$
$$u = \operatorname{proj}_W u + \operatorname{proj}_{W^\perp} u$$
$$\therefore\; \operatorname{proj}_{W^\perp} u = u - \operatorname{proj}_W u = (1, 1, 1) - \left(\tfrac{4}{25}, 1, -\tfrac{3}{25}\right) = \left(1 - \tfrac{4}{25},\; 0,\; 1 + \tfrac{3}{25}\right)$$
$$\therefore\; \operatorname{proj}_{W^\perp} u = \left(\tfrac{21}{25}, 0, \tfrac{28}{25}\right), \quad\text{the required component of } u \text{ orthogonal to } W.$$
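Illustration 8.14 in code (a sketch assuming NumPy):

```python
import numpy as np

v1 = np.array([0.0, 1.0, 0.0])   # orthonormal basis of W
v2 = np.array([-0.8, 0.0, 0.6])  # (-4/5, 0, 3/5)
u = np.array([1.0, 1.0, 1.0])

proj = (u @ v1) * v1 + (u @ v2) * v2  # proj_W u = <u,v1> v1 + <u,v2> v2
perp = u - proj                       # component of u orthogonal to W
```

The residual `perp` is orthogonal to both basis vectors, confirming the decomposition u = w1 + w2.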
Exercise 8.4
1. The subspace of $\mathbb{R}^3$ spanned by the vectors $u_1 = \left(\tfrac{4}{5}, 0, -\tfrac{3}{5}\right)$ and $u_2 = (0, 1, 0)$ is a plane passing through the origin. Express $w = (1, 2, 3)$ in the form $w = w_1 + w_2$, where $w_1$ lies in the plane and $w_2$ is perpendicular to the plane.
2. Let W be The subspace of R4 spanned by the vectors u 1 = (−1, 0, 1, 0) and u 2 = (0, 1, 0, 1), Express w =
(−1, 2, 6, 0) in the form w = w 1 + w 2 , where w 1 lies in subspace W and w 2 is orthogonal to W.
Answers
1. $w_1 = \left(-\tfrac{4}{5}, 2, \tfrac{3}{5}\right)$, $w_2 = \left(\tfrac{9}{5}, 0, \tfrac{12}{5}\right)$  2. $w_1 = \left(-\tfrac{7}{2}, 1, \tfrac{7}{2}, 1\right)$, $w_2 = \left(\tfrac{5}{2}, 1, \tfrac{5}{2}, -1\right)$
E E E
â Let $AX = b$ be an inconsistent system of linear equations; that is, its exact solution does not exist.
â The best approximate solution of $AX = b$ is known as the least squares solution and is given by the normal system
$$A^TAX = A^Tb \tag{8.3}$$
â Also, if W denotes the column space of A and X is the least squares solution, then the orthogonal projection of b on W is given by
$$\operatorname{proj}_W b = AX \tag{8.4}$$
Illustration 8.15 Find the least squares solution of the linear system $AX = b$ for $A = \begin{bmatrix} 4 & 0 \\ 0 & 2 \\ 1 & 1 \end{bmatrix}$, $b = \begin{bmatrix} 2 \\ 0 \\ 11 \end{bmatrix}$. Also find the projection of b on the column space of A. [Winter-2015]
Solution: The normal system $A^TAX = A^Tb$ gives
$$\begin{bmatrix} 17 & 1 \\ 1 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 19 \\ 11 \end{bmatrix} \;\Rightarrow\; 17x + y = 19,\quad x + 5y = 11$$
Solving, $x = 1$, $y = 2$; hence $\operatorname{proj}_{\operatorname{col}(A)} b = AX = (4, 4, 3)$.
Illustration 8.16 Find the orthogonal projection of the vector $u = (-3, -3, 8, 9)$ on the subspace of $\mathbb{R}^4$ spanned by the vectors $u_1 = (3, 1, 0, 1)$, $u_2 = (1, 2, 1, 1)$, $u_3 = (-1, 0, 2, -1)$.
Solution: With A the matrix having $u_1, u_2, u_3$ as columns and X the least squares solution of $AX = u$,
$$\operatorname{proj}_W u = AX \tag{8.5}$$
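Illustration 8.15 in code (a sketch assuming NumPy; `np.linalg.lstsq` would give the same X directly, without forming the normal equations):

```python
import numpy as np

A = np.array([[4.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([2.0, 0.0, 11.0])

X = np.linalg.solve(A.T @ A, A.T @ b)  # normal system: A^T A X = A^T b
proj = A @ X                           # projection of b on col(A)
```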
Exercise 8.5
1. Find the least squares solution of the linear system $AX = b$ for $A = \begin{bmatrix} 2 & -2 \\ 1 & 1 \\ 3 & 1 \end{bmatrix}$, $b = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}$. Also find the projection of b on the column space of A. [Winter-2015]
4. Find projW u, where u = (5, 6, 7, 2) and W is the solution space of the homogeneous system x 1 +x 2 +x 3 =
0, 2x 2 + x 3 + x 4 = 0.
[Hint: Solution space: $W = \operatorname{span}\left\{ \left(-\tfrac{1}{2}, -\tfrac{1}{2}, 1, 0\right), \left(\tfrac{1}{2}, -\tfrac{1}{2}, 0, 1\right) \right\}$]
Answers
1. $x = \tfrac{3}{7}$, $y = -\tfrac{2}{3}$, $\operatorname{proj}_{\operatorname{col}(A)} b = \left(\tfrac{46}{21}, -\tfrac{5}{21}, \tfrac{13}{21}\right)$  2. $x = \tfrac{305}{39}$, $y = \tfrac{704}{273}$  3. $\operatorname{proj}_W u = (7, 2, 9, 5)$
4. $\operatorname{proj}_W u = (0, -1, 1, 1)$
E E E
* Important:
A matrix of order n is an orthogonal matrix if and only if its row (column) vectors form an orthonormal subset of $\mathbb{R}^n$.
e.g. Consider the orthonormal subset $\{u_1, u_2, u_3\}$ of $\mathbb{R}^3$ where $u_1 = \left(\tfrac{1}{\sqrt{2}}, 0, \tfrac{1}{\sqrt{2}}\right)$, $u_2 = (0, 1, 0)$, $u_3 = \left(-\tfrac{1}{\sqrt{2}}, 0, \tfrac{1}{\sqrt{2}}\right)$. Then
$$A = \begin{bmatrix} 1/\sqrt{2} & 0 & 1/\sqrt{2} \\ 0 & 1 & 0 \\ -1/\sqrt{2} & 0 & 1/\sqrt{2} \end{bmatrix} \quad\text{and}\quad A^T = \begin{bmatrix} 1/\sqrt{2} & 0 & -1/\sqrt{2} \\ 0 & 1 & 0 \\ 1/\sqrt{2} & 0 & 1/\sqrt{2} \end{bmatrix}$$
are both orthogonal matrices.
Note: In order to check whether a given matrix is orthogonal or not, it is sufficient to check the orthonormality of its row or column vectors.
Illustration 8.17 Show that the matrix $A = \begin{bmatrix} 1/3 & 2/3 & 2/3 \\ 2/3 & -2/3 & 1/3 \\ -2/3 & -1/3 & 2/3 \end{bmatrix}$ is an orthogonal matrix and hence find $A^{-1}$.
Solution: Consider the column vectors $u_1 = \left(\tfrac{1}{3}, \tfrac{2}{3}, -\tfrac{2}{3}\right)$, $u_2 = \left(\tfrac{2}{3}, -\tfrac{2}{3}, -\tfrac{1}{3}\right)$, $u_3 = \left(\tfrac{2}{3}, \tfrac{1}{3}, \tfrac{2}{3}\right)$.
Observe that
$$u_1 \cdot u_2 = \tfrac{2}{9} - \tfrac{4}{9} + \tfrac{2}{9} = 0, \qquad u_2 \cdot u_3 = \tfrac{4}{9} - \tfrac{2}{9} - \tfrac{2}{9} = 0, \qquad u_1 \cdot u_3 = \tfrac{2}{9} + \tfrac{2}{9} - \tfrac{4}{9} = 0$$
µ ¶2 µ ¶2 µ ¶2 r
° ° 1 2 2 1 4 4
Also, °u 1 ° = + + − = + + =1
3 3 3 9 9 9
R E C1ALL
s
|
µ ¶2 µ ¶2 µ ¶2 r
° °
°u 2 ° = 2 2
+ − |+ R
1
−ED= O 4 4
+ + =1
3READ 3 3 9 9 9
s
µ ¶2 µ ¶2 µ ¶2 r
° ° 2 1 2 4 1 4
°u 3 ° =
3
+ +
Powered
3
=
Prof. (Dr.) Rajesh M. Darji
3 by 9 9 9
+ + =1
1 2
3 3 − 23
A −1 = A T =
2
3 − 23 − 13
2 1 2
3 3 3
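A quick numerical sanity check of the result above (a sketch: A is orthogonal exactly when AᵀA = I, in which case A⁻¹ = Aᵀ):

```python
import math

# Check of Illustration 8.17: verify A^T A = I within floating-point tolerance.
A = [[1/3,   2/3, 2/3],
     [2/3,  -2/3, 1/3],
     [-2/3, -1/3, 2/3]]

AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]
is_orthogonal = all(math.isclose(AtA[i][j], 1.0 if i == j else 0.0, abs_tol=1e-12)
                    for i in range(3) for j in range(3))
print(is_orthogonal)  # True, so A^{-1} = A^T
```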
Illustration 8.18 Is

    A = |  2   2   1 |
        | −2   1   2 |
        |  1  −2   2 |

an orthogonal matrix? If not, can it be converted into an orthogonal matrix? [Summer-2015]
Solution: Consider the column vectors u₁ = (2, −2, 1), u₂ = (2, 1, −2), u₃ = (1, 2, 2). Observe that u₁ · u₂ = u₂ · u₃ = u₁ · u₃ = 0 and ‖u₁‖ = ‖u₂‖ = ‖u₃‖ = 3. That is, the column vectors are orthogonal but not orthonormal (not unit vectors). Hence A is not an orthogonal matrix.
Since the column vectors are orthogonal, on normalizing each vector by dividing by its magnitude we obtain orthonormal vectors. Thus the given matrix A can be converted into an orthogonal matrix, given by

    |  2/3   2/3   1/3 |
    | −2/3   1/3   2/3 |
    |  1/3  −2/3   2/3 |
Illustration 8.19 If A is an orthogonal matrix then prove that det(A) = ±1. Also show that the converse may not be true.
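A one-line argument for the forward direction (a sketch, using det(AB) = det A · det B and det Aᵀ = det A):

```latex
A\,A^{T} = I \;\Rightarrow\; \det\!\left(A\,A^{T}\right) = \det I
\;\Rightarrow\; \det(A)\,\det\!\left(A^{T}\right) = 1
\;\Rightarrow\; \left[\det(A)\right]^{2} = 1 \;\Rightarrow\; \det(A) = \pm 1.
```

For the converse, the matrix with rows (1, 1) and (0, 1) has determinant 1, yet its rows are not orthonormal, so it is not orthogonal.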
Exercise 8.6
1. Show that if the matrix A is orthogonal then Aᵀ and A⁻¹ are also orthogonal.
   [Hint: Apply the definition of orthogonal matrix.]
4. Let A be an (n × n) orthogonal matrix with n odd; prove that A cannot be skew-symmetric.
5. Let A be an (n × n) orthogonal matrix such that |A| = −1. Prove that (A + Iₙ) is a singular matrix.
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779
A physical quantity for whose representation only magnitude is sufficient is called a scalar, whereas a physical quantity for whose representation magnitude as well as direction is required is called a vector.
e.g. Statistical data are scalars, while velocity, acceleration, force etc. are vectors.
General Remarks

1. A scalar is generally denoted by α, β, a, b etc., whereas a vector is denoted by A⃗, A, 𝐀 etc.

2. A vector having unit magnitude is called a unit vector and is denoted by â or Â (read as "cap" or "carat"). In particular, the unit vectors along the positive directions of the coordinate axes, X-axis, Y-axis and Z-axis, are denoted by î, ĵ, k̂ or Î, Ĵ, K̂ respectively.

3. Any vector A⃗ can always be expressed as a combination of î, ĵ, k̂, that is A⃗ = a₁î + a₂ĵ + a₃k̂ for some a₁, a₂, a₃ ∈ R. This form of the vector is known as the component form, and a₁, a₂, a₃ are called the components along the respective coordinate axes.

4. The magnitude of a vector A⃗ is a scalar, denoted by |A⃗| or A, and is given by the formula |A⃗| = A = +√(a₁² + a₂² + a₃²). (only positive value)

5. Dividing a vector by its own magnitude, we get a unit vector along the direction of the given vector; that is, the unit vector along the direction of A⃗ is given by

       â = Â = A⃗/A = (a₁î + a₂ĵ + a₃k̂)/√(a₁² + a₂² + a₃²)

   e.g. A⃗ = 2î − ĵ + 3k̂  ⇒  â = (1/√14)(2î − ĵ + 3k̂)
1. Scalar Multiplication:

       αA⃗ = α(a₁î + a₂ĵ + a₃k̂) = (αa₁)î + (αa₂)ĵ + (αa₃)k̂
3. Vector Multiplication: If θ denotes the angle between the directions of two vectors A⃗ and B⃗, then there are defined two types of products between A⃗ and B⃗.
â A function whose value depends on the position of a point in space is called a point function.
â If the value of that function is a scalar, it is called a scalar point function.
â If the value of that function is a vector, it is called a vector point function.
e.g. The temperature in a medium is a scalar point function, and the velocity of a particle in a moving fluid is a vector point function.
â In symbolic form, φ(x, y, z) = xy² + 7xyz³ is a scalar point function and V⃗ = 3xz î − 5xy² ĵ + xyz³ k̂ is a vector point function, defined at the point P(x, y, z).
9.5 Gradient
Let φ(x, y, z) be a scalar point function; then the gradient of φ(x, y, z) is defined as grad φ = ∇φ.

    ∴ grad φ = ∇φ = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂

Observe that the gradient is defined for a scalar point function and it gives a vector point function.
For φ = log(x² + y² + z²):

    grad φ = ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂
           = [2x/(x² + y² + z²)] î + [2y/(x² + y² + z²)] ĵ + [2z/(x² + y² + z²)] k̂
    ∴ grad φ = [2/(x² + y² + z²)] (x î + y ĵ + z k̂)

At (x, y, z) = (1, 2, 1),

    grad φ = (2/6)(î + 2ĵ + k̂) = (1/3) î + (2/3) ĵ + (1/3) k̂
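The worked gradient above can be spot-checked numerically (a sketch using central differences, not an exact method):

```python
import math

# Central-difference gradient of phi = log(x^2 + y^2 + z^2) at (1, 2, 1);
# should approximate (1/3, 2/3, 1/3).
def grad(phi, p, h=1e-6):
    g = []
    for i in range(3):
        pp = list(p); pm = list(p)
        pp[i] += h; pm[i] -= h
        g.append((phi(*pp) - phi(*pm)) / (2 * h))
    return g

phi = lambda x, y, z: math.log(x * x + y * y + z * z)
g = grad(phi, (1.0, 2.0, 1.0))
print(g)  # approx [1/3, 2/3, 1/3]
```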
Illustration 9.2 If r = |r⃗|, where r⃗ = x î + y ĵ + z k̂, prove that ∇f(r) = f′(r) ∇r. Hence deduce that ∇|r⃗|² = 2r⃗.
Solution: Given r⃗ = x î + y ĵ + z k̂ and r = |r⃗| = √(x² + y² + z²).

    ∇f(r) = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) f(r)
          = [∂f(r)/∂x] î + [∂f(r)/∂y] ĵ + [∂f(r)/∂z] k̂
          = f′(r) (∂r/∂x) î + f′(r) (∂r/∂y) ĵ + f′(r) (∂r/∂z) k̂   [∵ By the chain rule for partial derivatives]
          = f′(r) [(∂r/∂x) î + (∂r/∂y) ĵ + (∂r/∂z) k̂]      (9.1)
    ∴ ∇f(r) = f′(r) ∇r   [∵ î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z = ∇]

Also, r = √(x² + y² + z²) ⇒ ∂r/∂x = 2x/[2√(x² + y² + z²)] = x/r. Similarly, ∂r/∂y = y/r and ∂r/∂z = z/r.
Substituting in (9.1),

    ∇f(r) = f′(r) (x/r) î + f′(r) (y/r) ĵ + f′(r) (z/r) k̂
          = [f′(r)/r] (x î + y ĵ + z k̂) = [f′(r)/r] r⃗
    ∴ ∇f(r) = f′(r) r⃗/r      (9.2)

Put f(r) = |r⃗|² = r² ⇒ f′(r) = 2r.  ∴ ∇|r⃗|² = 2r (r⃗/r) = 2r⃗.
Illustration 9.3 Find the unit normal vector to the surface xy³z² = 4 at the point (−1, −1, 2).
Solution: Let φ = xy³z² − 4 (taking all terms of the given surface xy³z² = 4 on one side) and let the given point be p(−1, −1, 2).
The unit normal vector to the given surface at the point p is [See Section 9.5]

    n̂ = (∇φ)ₚ / |(∇φ)ₚ|      (9.3)

Now,

    ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂
       = (y³z²) î + (3xy²z²) ĵ + (2xy³z) k̂   [∵ φ = xy³z² − 4]      (9.4)

At p(−1, −1, 2),

    (∇φ)ₚ = −4î − 12ĵ + 4k̂ = 4(−î − 3ĵ + k̂),  |(∇φ)ₚ| = 4√11      (9.5)

Substituting the values from (9.5) and (9.4) in (9.3), we get the required unit normal vector,

    n̂ = 4(−î − 3ĵ + k̂)/(4√11)   ∴ n̂ = (1/√11)(−î − 3ĵ + k̂)
9.6 Divergence
Let V⃗ = V₁î + V₂ĵ + V₃k̂ be a vector point function; then the divergence of V⃗(x, y, z) is defined as div V⃗ = ∇ · V⃗.

    ∴ div V⃗ = ∇ · V⃗ = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) · (V₁î + V₂ĵ + V₃k̂) = ∂V₁/∂x + ∂V₂/∂y + ∂V₃/∂z

Observe that divergence is defined for a vector point function and it gives a scalar.

Physical Interpretation of Divergence
Thus, we can say that div V⃗ gives the rate at which the fluid is originating (diverging) from a point per unit volume.
â If the divergence of the velocity is zero, i.e. div V⃗ = ∇ · V⃗ = 0, then such a fluid is known as solenoidal or incompressible.

For V⃗ = 3x² î + 5xy² ĵ + xyz³ k̂ = V₁î + V₂ĵ + V₃k̂,

    div V⃗ = ∂(3x²)/∂x + ∂(5xy²)/∂y + ∂(xyz³)/∂z = 6x + 10xy + 3xyz²

At the point (1, 2, 3), div V⃗ = 6(1) + 10(2) + 3(18) = 6 + 20 + 54 = 80.
Illustration 9.5 Find the value of α such that the vector V⃗ = (αx²y + yz) î + (xy² − xz²) ĵ + (2xyz − 2x²y²) k̂ is solenoidal.
Solution: We know that a vector V⃗ is solenoidal (incompressible) if div V⃗ = 0.

    ∴ div[(αx²y + yz) î + (xy² − xz²) ĵ + (2xyz − 2x²y²) k̂] = 0
    ∴ ∂(αx²y + yz)/∂x + ∂(xy² − xz²)/∂y + ∂(2xyz − 2x²y²)/∂z = 0
    ∴ (2αxy + 0) + (2xy − 0) + (2xy − 0) = 0
    ∴ 2αxy + 4xy = 0 ⇒ 2α + 4 = 0 ∴ α = −2
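A numerical cross-check of Illustration 9.5 (a sketch: with α = −2 the central-difference divergence should vanish at any sample point):

```python
# Central-difference divergence dV1/dx + dV2/dy + dV3/dz.
def div(V, p, h=1e-6):
    s = 0.0
    for i in range(3):
        pp = list(p); pm = list(p)
        pp[i] += h; pm[i] -= h
        s += (V(*pp)[i] - V(*pm)[i]) / (2 * h)
    return s

alpha = -2.0
V = lambda x, y, z: (alpha * x * x * y + y * z,
                     x * y * y - x * z * z,
                     2 * x * y * z - 2 * x * x * y * y)
d = div(V, (1.3, -0.7, 2.1))
print(d)  # approx 0
```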
9.7 Curl
Let V⃗ = V₁î + V₂ĵ + V₃k̂ be a vector point function; then the curl of V⃗(x, y, z) is defined as curl V⃗ = ∇ × V⃗.

    ∴ curl V⃗ = ∇ × V⃗ = (î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z) × (V₁î + V₂ĵ + V₃k̂)
              = | î      ĵ      k̂    |
                | ∂/∂x   ∂/∂y   ∂/∂z |
                | V₁     V₂     V₃   |

Observe that curl is defined for a vector point function and it again gives a vector.
Target AA
â If the angular velocity of the rotating body is zero then there no rotation.
∴ We have,
→
− →
−
Ω= 0 ⇒
RED
→
− →−
Cul r V = 0 .
â Such a motion is known as irrotational motion and vector filed is known as irrotationa field.
O | R ECALL
there exist a scalar function (scalar potential function) φ such that
READ |
â For an irrotational field always
→
−
V = grad φ = ∇φ
Powered by Prof. (Dr.) Rajesh M. Darji
â Such a system is called conservative system, that is work does not depend on the path.
Illustration 9.6 Find curl F⃗, if F⃗ = (y² cos x + z²) î + (2y sin x − 4) ĵ + 3xz² k̂. Is F⃗ irrotational? [Summer-2016]
Solution: Let F⃗ = (y² cos x + z²) î + (2y sin x − 4) ĵ + 3xz² k̂ = F₁î + F₂ĵ + F₃k̂.
By definition of curl,

    curl F⃗ = ∇ × F⃗ = | î               ĵ              k̂    |
                      | ∂/∂x            ∂/∂y           ∂/∂z |
                      | y² cos x + z²   2y sin x − 4   3xz² |

    = î [∂(3xz²)/∂y − ∂(2y sin x − 4)/∂z] − ĵ [∂(3xz²)/∂x − ∂(y² cos x + z²)/∂z]
      + k̂ [∂(2y sin x − 4)/∂x − ∂(y² cos x + z²)/∂y]
    = î [0 − 0] − ĵ [3z² − 2z] + k̂ [2y cos x − 2y cos x]
    ∴ curl F⃗ = (2z − 3z²) ĵ

Since curl F⃗ ≠ 0⃗, F⃗ is not an irrotational vector field.
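The curl just computed can be verified numerically (a sketch with central differences; the result should approximate (0, 2z − 3z², 0)):

```python
import math

# Central-difference curl of F = (F1, F2, F3).
def curl(Fv, p, h=1e-6):
    def d(i, j):  # dF_i/dx_j at p
        pp = list(p); pm = list(p)
        pp[j] += h; pm[j] -= h
        return (Fv(*pp)[i] - Fv(*pm)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

Fv = lambda x, y, z: (y * y * math.cos(x) + z * z,
                      2 * y * math.sin(x) - 4,
                      3 * x * z * z)
x0, y0, z0 = 0.5, 1.2, 0.8
c = curl(Fv, (x0, y0, z0))
print(c)  # approx (0, 2*z0 - 3*z0**2, 0) = (0, -0.32, 0)
```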
Illustration 9.7 Show that the vector field A⃗ = (x² − y² + x) î − (2xy + 2y) ĵ is irrotational. Also find a scalar function φ such that A⃗ = grad φ.
Solution: Given A⃗ = (x² − y² + x) î − (2xy + 2y) ĵ.

    curl A⃗ = ∇ × A⃗ = | î             ĵ           k̂    |
                      | ∂/∂x          ∂/∂y        ∂/∂z |   [∵ Coefficient of k̂ is 0]
                      | x² − y² + x   −2xy − 2y   0    |

    = î [0 − ∂(−2xy − 2y)/∂z] − ĵ [0 − ∂(x² − y² + x)/∂z] + k̂ [∂(−2xy − 2y)/∂x − ∂(x² − y² + x)/∂y]
    = î [0 − 0] − ĵ [0 − 0] + k̂ [−2y − (−2y)] = (0) î − (0) ĵ + (0) k̂ = 0⃗

∴ curl A⃗ = 0⃗ ⇒ A⃗ is irrotational.
Since A⃗ is an irrotational field, there exists a scalar function (called a scalar potential function) φ such that

    A⃗ = grad φ = ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂   [∵ Definition of gradient]

Equating with the components of A⃗, we get

    ∂φ/∂x = x² − y² + x,  ∂φ/∂y = −2xy − 2y,  ∂φ/∂z = 0

Integrating the above equations partially with respect to x, y, z respectively, keeping the other variables constant, we get the following equations:

    φ = x³/3 − xy² + x²/2 + c₁(y, z)   [From ∂φ/∂x, keeping y, z constant]      (9.6)
    φ = −xy² − y² + c₂(x, z)           [From ∂φ/∂y, keeping x, z constant]      (9.7)
    φ = c₃(x, y)                       [From ∂φ/∂z, keeping x, y constant]      (9.8)

In (9.6), c₁(y, z) consists of the terms of φ not containing x, to be taken from (9.7) and (9.8); that is, c₁(y, z) = −y². Hence from (9.6) the required scalar function for which A⃗ = grad φ is

    φ = x³/3 − xy² + x²/2 − y²

Note: Instead of c₁(y, z), if we find c₂(x, z) or c₃(x, y) from the other two equations using the same logic, we will get the same answer. More precisely, just add all the terms of φ (without c₁, c₂, c₃), taking each term exactly once. (Verify!)
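The potential found above can be checked by differentiating it back numerically (a sketch: grad φ should reproduce the components of A⃗ at any sample point):

```python
# Central-difference gradient of phi = x^3/3 - x*y^2 + x^2/2 - y^2, compared
# against A = (x^2 - y^2 + x, -(2xy + 2y), 0) at a sample point.
def grad(phi, p, h=1e-6):
    g = []
    for i in range(3):
        pp = list(p); pm = list(p)
        pp[i] += h; pm[i] -= h
        g.append((phi(*pp) - phi(*pm)) / (2 * h))
    return g

phi = lambda x, y, z: x**3 / 3 - x * y * y + x * x / 2 - y * y
A = lambda x, y, z: (x * x - y * y + x, -(2 * x * y + 2 * y), 0.0)
p = (0.7, -1.1, 0.4)
g = grad(phi, p)
print(g, A(*p))  # the two triples should agree componentwise
```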
    ∂f/∂r = (∇f)ₚ · N̂      (9.9)

where N̂ is the unit vector along the given direction P⃗Q, that is N̂ = P⃗Q / |P⃗Q|.

    ∂f/∂r = |(∇f)ₚ| |N̂| cos θ = |(∇f)ₚ| cos θ   [∵ N̂ is a unit vector, so |N̂| = 1]
    ⇒ max(∂f/∂r) = |(∇f)ₚ|  when cos θ = 1 (i.e. θ = 0)

Thus, we conclude that the maximum directional derivative (the rate of change) of f(x, y, z) occurs along the direction of (∇f)ₚ and its magnitude is equal to |(∇f)ₚ|.
Illustration 9.8 Find the directional derivative of the function f = xy² + yz³ at the point (2, −1, 1) in the direction of the vector î + 2ĵ + 2k̂. [Summer-2017]
Solution: Given f = xy² + yz³, P(2, −1, 1), P⃗Q = î + 2ĵ + 2k̂.
By definition, the required directional derivative is

    ∂f/∂r = (∇f)ₚ · N̂      (9.10)

where N̂ = unit vector along the given direction P⃗Q:

    N̂ = P⃗Q/|P⃗Q| = (î + 2ĵ + 2k̂)/√(1 + 4 + 4)   ∴ N̂ = (1/3)(î + 2ĵ + 2k̂)

Also,

    ∇f = (∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂ = (y²) î + (2xy + z³) ĵ + (3yz²) k̂
    ∴ (∇f)ₚ = î − 3ĵ − 3k̂   [∵ P(2, −1, 1)]

Substituting the values of (∇f)ₚ and N̂ in (9.10),

    ∂f/∂r = (î − 3ĵ − 3k̂) · (1/3)(î + 2ĵ + 2k̂) = (1/3)(1 − 6 − 6)   ∴ ∂f/∂r = −11/3

Note: The maximum directional derivative is |(∇f)ₚ| = √(1 + 9 + 9) = √19 and it occurs in the direction of the gradient vector (∇f)ₚ = î − 3ĵ − 3k̂. The direction may be given by the unit vector (∇f)ₚ/|(∇f)ₚ| = (1/√19)(î − 3ĵ − 3k̂).
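A numerical cross-check of Illustration 9.8 (a sketch; note that ∂(yz³)/∂z = 3yz², which is −3 at P):

```python
# Central-difference gradient of f = x*y^2 + y*z^3 at P(2, -1, 1), dotted with
# the unit vector along i + 2j + 2k.
f = lambda x, y, z: x * y * y + y * z**3

def grad(f, p, h=1e-6):
    g = []
    for i in range(3):
        pp = list(p); pm = list(p)
        pp[i] += h; pm[i] -= h
        g.append((f(*pp) - f(*pm)) / (2 * h))
    return g

g = grad(f, (2.0, -1.0, 1.0))   # approx (1, -3, -3)
N = (1 / 3, 2 / 3, 2 / 3)       # unit vector along i + 2j + 2k
dd = sum(gi * ni for gi, ni in zip(g, N))
print(dd)  # approx -11/3
```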
Illustration 9.9 Find the angle between the surfaces x² + y² + z² = 9 and z = x² + y² − 3 at the point (2, −1, 2).
Solution: Let φ₁ = x² + y² + z² − 9, φ₂ = z − x² − y² + 3, p(2, −1, 2).
Normal vectors to the surfaces at p are N⃗₁ = (∇φ₁)ₚ = 4î − 2ĵ + 4k̂ and N⃗₂ = (∇φ₂)ₚ = −4î + 2ĵ + k̂. If θ is the angle between the surfaces, then

    cos θ = (N⃗₁ · N⃗₂)/(|N⃗₁| |N⃗₂|) = [(4î − 2ĵ + 4k̂) · (−4î + 2ĵ + k̂)] / [√(16 + 4 + 16) √(16 + 4 + 1)]
          = (−16 − 4 + 4)/(6√21)
    ∴ cos θ = −8/(3√21)

Taking θ as the acute angle between the surfaces, that is cos θ ≥ 0, the required angle between the given surfaces is θ = cos⁻¹[8/(3√21)].
Illustration 9.10 Find the values of the constants λ and μ so that the surfaces λx² − μyz = (λ + 2)x and 4x²y + z³ = 4 may intersect orthogonally at the point (1, −1, 2).
Solution: The given surfaces intersect orthogonally if the angle between them at the point (1, −1, 2) is θ = 90°. Hence,

    cos θ = 0 ⇔ (N⃗₁ · N⃗₂)/(|N⃗₁| |N⃗₂|) = 0 ⇔ N⃗₁ · N⃗₂ = 0      (9.11)

Here N⃗₁ = ∇[λx² − μyz − (λ + 2)x] = [2λx − (λ + 2)] î − μz ĵ − μy k̂, which at (1, −1, 2) is (λ − 2) î − 2μ ĵ + μ k̂, and N⃗₂ = ∇(4x²y + z³ − 4) = 8xy î + 4x² ĵ + 3z² k̂, which at (1, −1, 2) is −8î + 4ĵ + 12k̂.
From (9.11),

    N⃗₁ · N⃗₂ = 0 ⇔ −8(λ − 2) − 8μ + 12μ = 0 ⇔ 8λ − 4μ = 16      (9.12)

Also the point p(1, −1, 2) lies on both surfaces, and hence it satisfies both surface equations. Putting the point in the surface equation λx² − μyz = (λ + 2)x, we get μ = 1. Substituting μ = 1 in (9.12), we get λ = 5/2.
Hence the required values are λ = 5/2, μ = 1.
Exercise 9.1
1. If r = |r⃗|, where r⃗ = x î + y ĵ + z k̂, prove that
   a. ∇(log r²) = 2r⃗/r²    b. ∇e^(r²) = 2e^(r²) r⃗
2. Find the unit normal vector to the surface,
8. Find the directional derivative of the following functions:
   a. f = 2xy + z² at the point (1, −1, 3) in the direction of the vector î + 2ĵ + 2k̂.
11. What is the greatest rate of increase of v = x² + yz² at the point (1, −1, 3)?
12. If θ is the acute angle between the surfaces xy²z = 3x + z² and 3x² − y² + 2z = 1 at the point (1, −2, 1), show that cos θ = 3/(7√6).
13. Calculate the angle between the normals to the surface xy = z² at the points (4, 1, 2) and (3, 3, −3).
14. Find the angle between the tangent planes to the surfaces x log z = y² − 1 and x²y = 2 − z at the point (1, 1, 1).

Answers
2. a. (1/3)(−î + 2ĵ + 2k̂)  b. (1/√65)(−8î + k̂)  5. 14  6. x sin y + xz − yz  8. a. 14/3  b. 144
9. 28/√21  10. (1/√19)(−î + 3ĵ − 3k̂), 96√19  11. 11  13. cos⁻¹(1/√22)  14. cos⁻¹(1/√30)
â Any integral that is evaluated along a curve is called a line integral.
â Let F⃗ = F₁î + F₂ĵ + F₃k̂ be a vector point function defined along the smooth curve C; then the integral of F⃗ along C is called a line integral and is defined as

    ∫_C F⃗ · dr⃗ = ∫_C (F₁î + F₂ĵ + F₃k̂) · (dx î + dy ĵ + dz k̂)
    ∴ ∫_C F⃗ · dr⃗ = ∫_C (F₁ dx + F₂ dy + F₃ dz)
* Important:

1. If C is a closed curve then the line integral is denoted by the symbol ∮_C F⃗ · dr⃗.

2. Work: If F⃗ denotes the force acting on a particle moving along the curve C then the work done by the force is given by W = ∫_C F⃗ · dr⃗.

3. Circulation: If V⃗ denotes the velocity of a particle moving around the closed curve C then the circulation of the particle round the curve C is defined as ω = ∮_C V⃗ · dr⃗.
   Further, if the circulation is zero then the motion is called irrotational.

4. Path independence of the line integral: If F⃗ is an irrotational vector field, that is curl F⃗ = ∇ × F⃗ = 0⃗, then the value of the line integral ∫_C F⃗ · dr⃗ does not depend on the path (the line integral is independent of path); that is, it depends only on the initial and final points.
Also, since F⃗ is irrotational there exists a scalar φ such that

    F⃗ = grad φ = ∇φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂

    ∴ F⃗ · dr⃗ = [(∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂] · (dx î + dy ĵ + dz k̂)   [∵ r⃗ = x î + y ĵ + z k̂]
             = (∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz
             = dφ   [∵ Total differential of φ]

Hence the line integral of F⃗ along any path C joining the points A and B is

    ∫_C F⃗ · dr⃗ = ∫_A^B dφ = [φ]_A^B = φ(B) − φ(A)

5. Arc length: The arc length of the curve defined by the position vector r⃗(t) = f(t) î + g(t) ĵ + h(t) k̂ from t = a to t = b is given by

    L = ∫_a^b |r⃗′(t)| dt = ∫_a^b √([f′(t)]² + [g′(t)]² + [h′(t)]²) dt
Illustration 10.1 Evaluate the line integral ∮_C F⃗ · dr⃗, where F⃗ = (x² + xy) î + (x² + y²) ĵ and C is the square formed by the lines x = ±1, y = ±1.
Solution: Since

    ∮_C F⃗ · dr⃗ = ∮_C [(x² + xy) dx + (x² + y²) dy]      (10.1)

In order to find the line integral (10.1), we find the line integral along each side of the square, in the anti-clockwise (counter-clockwise) direction, and then add them.
Along C₁: x = 1 (dx = 0), y: −1 → 1:

    ∫_{C₁} F⃗ · dr⃗ = ∫_{−1}^{1} (1 + y²) dy = [y + y³/3]_{−1}^{1} = [1 + 1/3] − [−1 − 1/3] = 2 + 2/3
    ∴ ∫_{C₁} F⃗ · dr⃗ = 8/3

Along C₂: y = 1 (dy = 0), x: 1 → −1:

    ∫_{C₂} F⃗ · dr⃗ = ∫_{1}^{−1} (x² + x) dx = [x³/3 + x²/2]_{1}^{−1} = −2/3

Along C₃: x = −1 (dx = 0), y: 1 → −1:

    ∫_{C₃} F⃗ · dr⃗ = ∫_{1}^{−1} (1 + y²) dy   ∴ ∫_{C₃} F⃗ · dr⃗ = −8/3

Along C₄: y = −1 (dy = 0), x: −1 → 1:

    ∫_{C₄} F⃗ · dr⃗ = ∫_{−1}^{1} (x² − x) dx = [x³/3 − x²/2]_{−1}^{1} = 2/3

Hence,

    ∮_C F⃗ · dr⃗ = 8/3 − 2/3 − 8/3 + 2/3 = 0
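The side-by-side evaluation above can be sketched in code (exact rational arithmetic, using the antiderivative on each side of the square):

```python
from fractions import Fraction as F

def seg_y(x, y0, y1):
    """Integral of (x^2 + y^2) dy along a vertical side x = const."""
    anti = lambda y: x * x * y + y**3 / 3
    return anti(y1) - anti(y0)

def seg_x(y, x0, x1):
    """Integral of (x^2 + x*y) dx along a horizontal side y = const."""
    anti = lambda x: x**3 / 3 + y * x * x / 2
    return anti(x1) - anti(x0)

total = (seg_y(F(1), F(-1), F(1))      # C1: x = 1,  y: -1 -> 1   =  8/3
         + seg_x(F(1), F(1), F(-1))    # C2: y = 1,  x:  1 -> -1  = -2/3
         + seg_y(F(-1), F(1), F(-1))   # C3: x = -1, y:  1 -> -1  = -8/3
         + seg_x(F(-1), F(-1), F(1)))  # C4: y = -1, x: -1 -> 1   =  2/3
print(total)  # 0
```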
Illustration 10.2 Find the work done when the force F⃗ = (x² − y² + x) î − (2xy + y) ĵ moves a particle in the xy-plane from (0, 0) to (1, 1) along the parabola y² = x. Is the work done different when the path is the straight line y = x? [Winter-2016]
Solution: The work done by the force F⃗ along the path C is given by

    W = ∫_C F⃗ · dr⃗ = ∫_C [(x² − y² + x) dx − (2xy + y) dy]      (10.2)

Along the parabola C₁: y² = x, ∴ dx = 2y dy, and y varies from 0 to 1.
From (10.2),

    W = ∫_0^1 [(y⁴ − y² + y²)(2y dy) − (2y³ + y) dy]
      = ∫_0^1 (2y⁵ − 2y³ − y) dy
      = [2y⁶/6 − 2y⁴/4 − y²/2]_0^1 = 1/3 − 1/2 − 1/2   ∴ W = −2/3

The work done along the parabola y² = x is −2/3.
Along the line C₂: y = x, ∴ dx = dy, and y varies from 0 to 1.
From (10.2),

    W = ∫_0^1 [(y² − y² + y) dy − (2y² + y) dy]
      = ∫_0^1 (−2y²) dy = −2 [y³/3]_0^1 = −2/3   ∴ W = −2/3

The work done along the line y = x is also −2/3. Hence the work is not different for the given paths.
Note that here the line integral is independent of path. Such a system, in which the work does not depend on the path, is called a conservative system.
Illustration 10.3 If F⃗ = (2xy + z³) î + x² ĵ + 3xz² k̂, show that ∫_C F⃗ · dr⃗ is independent of the path of integration. Hence find the integral when C is any path joining (1, −2, 1) and (3, 1, 4). [Summer-2016]
Solution: We know that the line integral of F⃗ is independent of path if the vector field is irrotational, that is curl F⃗ = 0⃗.
Given F⃗ = (2xy + z³) î + x² ĵ + 3xz² k̂,

    curl F⃗ = ∇ × F⃗ = | î          ĵ      k̂    |
                      | ∂/∂x       ∂/∂y   ∂/∂z |
                      | 2xy + z³   x²     3xz² |
    = î [∂(3xz²)/∂y − ∂(x²)/∂z] − ĵ [∂(3xz²)/∂x − ∂(2xy + z³)/∂z] + k̂ [∂(x²)/∂x − ∂(2xy + z³)/∂y]
    = î [0 − 0] − ĵ [3z² − 3z²] + k̂ [2x − 2x] = 0⃗   ∴ curl F⃗ = 0⃗

Hence F⃗ is irrotational and the line integral is independent of path. Now there exists a scalar potential φ such that

    F⃗ = grad φ = (∂φ/∂x) î + (∂φ/∂y) ĵ + (∂φ/∂z) k̂ ⇒ ∂φ/∂x = 2xy + z³, ∂φ/∂y = x², ∂φ/∂z = 3xz²

Integrating the above equations partially with respect to x, y, z respectively, keeping the other variables constant, we get the following equations: [See Illustration 9.7]

    φ = x²y + xz³ + c₁(y, z),  φ = x²y + c₂(x, z),  φ = xz³ + c₃(x, y)  ⇒  φ = x²y + xz³

Hence the required path-independent line integral joining the points A(1, −2, 1) and B(3, 1, 4) is given by

    ∫_C F⃗ · dr⃗ = ∫_A^B dφ = [φ]_A^B = φ(B) − φ(A)
              = [x²y + xz³]_(3,1,4) − [x²y + xz³]_(1,−2,1) = [201] − [−1] = 202

    ∴ ∫_C F⃗ · dr⃗ = 202   Ans.
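The endpoint evaluation above can be sketched in a couple of lines:

```python
# Illustration 10.3: with the potential phi = x^2*y + x*z^3, the path-independent
# integral equals phi(B) - phi(A) for A(1, -2, 1) and B(3, 1, 4).
phi = lambda x, y, z: x * x * y + x * z**3

value = phi(3, 1, 4) - phi(1, -2, 1)
print(value)  # 201 - (-1) = 202
```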
Illustration 10.4 Find the arc length of the portion of the circular helix r⃗(t) = cos t î + sin t ĵ + t k̂ from t = 0 to t = π. [Summer-2015]
Solution: Given r⃗(t) = cos t î + sin t ĵ + t k̂ ⇒ r⃗′(t) = dr⃗/dt = −sin t î + cos t ĵ + k̂, 0 ≤ t ≤ π.
By definition of arc length,

    L = ∫_0^π |r⃗′(t)| dt = ∫_0^π √((−sin t)² + (cos t)² + 1) dt
      = ∫_0^π √(sin²t + cos²t + 1) dt = ∫_0^π √2 dt
      = [√2 t]_0^π = √2 π   ∴ L = √2 π   Ans.
â Any integral that is evaluated over a surface is called a surface integral.
â Let F⃗ = F₁î + F₂ĵ + F₃k̂ be a vector point function defined over the smooth surface S and let R be the orthogonal projection of S on the xy-plane. If n̂ denotes the unit outward normal vector to the surface S, then the surface integral of F⃗ over S is defined as

    ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dy)/|n̂ · k̂|

Remark:

1. The surface integral is denoted by the symbol ∬_S F⃗ · ds⃗ OR ∫∫_S F⃗ · ds⃗ OR ∬_S F⃗ · n̂ ds.

2. Instead of the xy-plane, if we take the orthogonal projection of the surface S on the yz-plane or the zx-plane, then the surface integrals are ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dy dz)/|n̂ · î| or ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dz)/|n̂ · ĵ| respectively.

3. Flux: If F⃗ denotes the velocity of the fluid then ∬_S F⃗ · n̂ ds gives the amount of fluid emerging from the surface area S per unit time, that is, it gives the flux.
   Further, if ∬_S F⃗ · n̂ ds = 0, then F⃗ is called solenoidal.
Illustration 10.5 Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = 18z î − 12 ĵ + 3y k̂ and S is the surface of the plane 2x + 3y + 6z = 12 in the first octant.
Solution: The given surface S: 2x + 3y + 6z = 12, that is x/6 + y/4 + z/2 = 1, is a plane in the first octant with intercepts (6, 0, 0), (0, 4, 0) and (0, 0, 2) on the x, y and z axes respectively. Let R be the orthogonal projection of S on the xy-plane, as shown in the figure. By definition of the surface integral,

    ∬_S F⃗ · n̂ ds = ∬_R F⃗ · n̂ (dx dy)/|n̂ · k̂|      (10.3)

Here n̂ = ∇S/|∇S| = (2î + 3ĵ + 6k̂)/√(4 + 9 + 36) = (2î + 3ĵ + 6k̂)/7, so |n̂ · k̂| = 6/7 and

    F⃗ · n̂ / |n̂ · k̂| = (36z − 36 + 18y)/6 = 6z − 6 + 3y = 6 − 2x   [∵ on S, 6z = 12 − 2x − 3y]

The projection R is a triangle in the xy-plane. For the limits of the double integral, according to the Y-strip (parallel to the Y-axis), 0 ≤ y ≤ (12 − 2x)/3, 0 ≤ x ≤ 6. Hence the required surface integral is

    ∬_S F⃗ · n̂ ds = ∫_0^6 ∫_0^{(12−2x)/3} (6 − 2x) dy dx
                 = ∫_0^6 (6 − 2x) [y]_0^{(12−2x)/3} dx   [∵ Integrating w.r.t. y keeping x constant]
                 = ∫_0^6 (6 − 2x) [(12 − 2x)/3 − 0] dx
                 = (4/3) ∫_0^6 (3 − x)(6 − x) dx = (4/3) ∫_0^6 (18 − 9x + x²) dx
                 = (4/3) [18x − 9x²/2 + x³/3]_0^6 = (4/3)[108 − 162 + 72] = (4/3)(18)

    ∴ ∬_S F⃗ · n̂ ds = 24   Ans.
Illustration 10.6 Show that ∬_S F⃗ · n̂ ds = 3/2, where F⃗ = 4xz î − y² ĵ + yz k̂ and S is the surface of the cube bounded by the planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.
Solution: Given S is the closed surface of the unit cube. It is bounded by six sub-surfaces (squares) as shown in the figure.
In order to find the surface integral of F⃗ = 4xz î − y² ĵ + yz k̂, we find the surface integral over each sub-surface and then add them.
Over S₁: x = 0; since S₁ lies in the yz-plane, the unit outward normal vector (in the outside direction of the cube, that is along the negative X-axis) is n̂ = −î.

    ∴ F⃗ · n̂ = (4xz î − y² ĵ + yz k̂) · (−î) = −4xz = 0 [∵ x = 0] ⇒ ∬_{S₁} F⃗ · n̂ ds = ∬_{S₁} (0) ds = 0

Over S₂: x = 1; since S₂ is parallel to the yz-plane, the unit outward normal vector (along the positive X-axis) is n̂ = î.

    ∴ F⃗ · n̂ = 4xz = 4z [∵ x = 1] ⇒ ∬_{S₂} F⃗ · n̂ ds = ∬_{S₂} 4z ds

Since S₂ is parallel to the yz-plane, its orthogonal projection is on the yz-plane and it is the square OCDE (0 ≤ y ≤ 1, 0 ≤ z ≤ 1). We apply the surface-integral formula for the yz-plane:

    ∬_{S₂} F⃗ · n̂ ds = 4 ∫_0^1 ∫_0^1 z (dy dz)/|n̂ · î| = 4 ∫_0^1 ∫_0^1 z dy dz   [∵ n̂ · î = 1]
                    = 4 [y]_0^1 [z²/2]_0^1 = 4(1)(1/2) = 2   ∴ ∬_{S₂} F⃗ · n̂ ds = 2

Over S₃: y = 0, n̂ = −ĵ ⇒ F⃗ · n̂ = y² = 0 ∴ ∬_{S₃} F⃗ · n̂ ds = 0
Over S₄: y = 1, n̂ = ĵ ⇒ F⃗ · n̂ = −y² = −1

    ∴ ∬_{S₄} F⃗ · n̂ ds = ∬_{S₄} (−1) ds = −∫_0^1 ∫_0^1 dx dz = −1   ∴ ∬_{S₄} F⃗ · n̂ ds = −1

Over S₅: z = 0, n̂ = −k̂ ⇒ F⃗ · n̂ = −yz = 0 ∴ ∬_{S₅} F⃗ · n̂ ds = ∬_{S₅} (0) ds = 0
Over S₆: z = 1, n̂ = k̂ ⇒ F⃗ · n̂ = yz = y

    ∴ ∬_{S₆} F⃗ · n̂ ds = ∬_{S₆} y ds = ∫_0^1 ∫_0^1 y dx dy = 1/2   ∴ ∬_{S₆} F⃗ · n̂ ds = 1/2

Hence,

    ∬_S F⃗ · n̂ ds = ∬_{S₁} + ∬_{S₂} + ∬_{S₃} + ∬_{S₄} + ∬_{S₅} + ∬_{S₆}
                 = 0 + 2 + 0 − 1 + 0 + 1/2

    ∴ ∬_S F⃗ · n̂ ds = 3/2   Proved.
If F⃗ is a vector point function and φ a scalar point function defined through the volume V of a solid, then the volume integrals are defined as

    ∭_V F⃗ dv  OR  ∭_V φ dv,  where dv = dx dy dz
Exercise 10.1
1. If F⃗ = 3xy î − y² ĵ, evaluate ∫_C F⃗ · dr⃗, where C is the curve y = 2x² from (0, 0) to (1, 2).
2. Evaluate ∮_C F⃗ · dr⃗, where F⃗ = eˣ sin y î + eˣ cos y ĵ and C is the rectangle whose vertices are (0, 0), (1, 0), (1, π/2), and (0, π/2).
3. Find the circulation of F⃗ round the curve C, that is ∮_C F⃗ · dr⃗, where F⃗ = y î + z ĵ + x k̂ and C is the circle x² + y² = 1, z = 0.
   [Hint: Let x = cos θ, y = sin θ, z = 0, 0 ≤ θ ≤ 2π ⇒ dx = −sin θ dθ, dy = cos θ dθ, dz = 0]
4. Find the work done in moving a particle by the force field F⃗ = 3x² î + (2xz − y) ĵ + z k̂ along the straight line from (0, 0, 0) to (2, 1, 3).
5. Determine the length of the curve r⃗(t) = 2t î + 3 sin(2t) ĵ + 3 cos(2t) k̂ on the interval 0 ≤ t ≤ 2π.
6. If F⃗ = 2xyz³ î + x²z³ ĵ + 3x²yz² k̂, show that ∫_C F⃗ · dr⃗ is independent of the path of integration. Hence find the integral when C is any path joining (0, 0, 0) and (1, 2, 3).
   Further show that for any simple closed curve C, ∮_C F⃗ · dr⃗ = 0.
7. Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = x î + xz ĵ + 4xy k̂ and S is the triangular surface with vertices (2, 0, 0), (0, 2, 0) and (0, 0, 4).
   [Hint: S: x/2 + y/2 + z/4 = 1, i.e. S: 2x + 2y + z = 4. See Illustration 10.5]
8. Evaluate ∬_S F⃗ · n̂ ds, where F⃗ = yz î + zx ĵ + xy k̂ and S is that part of the surface x² + y² + z² = 1 which lies in the first octant.
9. Show that ∮_C r⃗ · dr⃗ = 0 independently of the origin of r⃗.
10. Evaluate ∫_C (y dx + x dy + z dz), where C is given by x = cos t, y = sin t, z = t², 0 ≤ t ≤ 2π. [Winter-2015]

Answers
1. −7/6  2. 0  3. −π  4. 16  5. 4π√10  6. 54  7. 8  8. 3/8
10.4 Integral Theorems

Theorem 10.1 (Green's Theorem¹: Relation between Line Integral & Double Integral)
Let M(x, y) and N(x, y) be functions of two variables having continuous first-order partial derivatives in the region R of the xy-plane bounded by the closed curve C; then

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy
Illustration 10.7 Verify Green's theorem in the plane for ∮_C [(3x − 8y²) dx + (4y − 6xy) dy], where C is the boundary of the triangle with vertices (0, 0), (1, 0) and (0, 1). [Summer-2017]
Solution: By Green's theorem,

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy      (10.4)

Here M = 3x − 8y², N = 4y − 6xy and C is the triangle with vertices (0, 0), (1, 0), (0, 1).
To find the line integral:
Along C₁: y = 0 ⇒ dy = 0, x: 0 → 1.

    I₁ = ∫_0^1 [(3x − 0) dx + 0] = [3x²/2]_0^1 = 3/2

Adding the contributions of the three sides,

    ∮_C (M dx + N dy) = I₁ + I₂ + I₃ = 3/2 + 13/6 − 2 = 5/3      (10.5)

To find the double integral: Here M = 3x − 8y², N = 4y − 6xy ⇒ ∂M/∂y = −16y, ∂N/∂x = −6y.

    ∴ ∂N/∂x − ∂M/∂y = −6y + 16y = 10y
    ∬_R (∂N/∂x − ∂M/∂y) dx dy = ∬_R 10y dx dy

where R is the triangular region shown in the figure. According to the Y-strip (parallel to the Y-axis), the limits of the double integral are 0 ≤ y ≤ 1 − x, 0 ≤ x ≤ 1.

    ∬_R (∂N/∂x − ∂M/∂y) dx dy = 10 ∫_0^1 ∫_0^{1−x} y dy dx
        = 10 ∫_0^1 [y²/2]_0^{1−x} dx = 10 ∫_0^1 (1 − x)²/2 dx
        = 5 ∫_0^1 (1 − x)² dx = 5 [(1 − x)³/(3(−1))]_0^1 = −(5/3)[0 − 1]
    ∴ ∬_R (∂N/∂x − ∂M/∂y) dx dy = 5/3      (10.6)

From (10.5) and (10.6), ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy; hence Green's theorem is verified.
By Green's theorem,

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy      (10.7)

Here M = y − sin x, N = cos x, and C is the boundary of the triangular region R enclosed by the lines y = 0, x = π/2 and y = (2/π)x, as shown in the figure.
Since ∂N/∂x − ∂M/∂y = −sin x − 1,

    ∮_C (M dx + N dy) = ∬_R (−sin x − 1) dx dy
        = −∫_0^{π/2} ∫_0^{2x/π} (sin x + 1) dy dx
        = −∫_0^{π/2} (sin x + 1) [y]_0^{2x/π} dx = −∫_0^{π/2} (sin x + 1) (2x/π) dx
        = −(2/π) ∫_0^{π/2} x (sin x + 1) dx = −(2/π) ∫_0^{π/2} (x sin x + x) dx
        = −(2/π) [(x)(−cos x) − (1)(−sin x) + x²/2]_0^{π/2}   [∵ Integrating by parts]
        = −(2/π) [−(π/2) cos(π/2) + sin(π/2) + (1/2)(π/2)² − 0]
        = −(2/π) [0 + 1 + π²/8]

    ∴ ∮_C {(y − sin x) dx + cos x dy} = −(2/π)(1 + π²/8)   Ans.
Illustration 10.9 Apply Green's theorem to prove that the area enclosed by a plane curve is (1/2) ∮_C (x dy − y dx). Hence find the area of an ellipse whose semi-major and semi-minor axes are of lengths a and b.
Solution: Let R be the region enclosed by the simple closed curve C; then by Green's theorem

    ∮_C (M dx + N dy) = ∬_R (∂N/∂x − ∂M/∂y) dx dy

From the given line integral, M = −y, N = x ⇒ ∂N/∂x − ∂M/∂y = 1 − (−1) = 2.

    ∮_C (x dy − y dx) = 2 ∬_R dx dy = 2 (Area enclosed by the closed curve C)
    ∴ Area enclosed by the closed curve C = (1/2) ∮_C (x dy − y dx)

Now the equation of the given ellipse is C: x²/a² + y²/b² = 1. To find the line integral, substitute the parametric equations of the ellipse, x = a cos t, y = b sin t, 0 ≤ t ≤ 2π, in the above formula:

    A = (1/2) ∮_C (x dy − y dx)
      = (1/2) ∫_0^{2π} [(a cos t)(b cos t dt) − (b sin t)(−a sin t dt)]
      = (1/2) ∫_0^{2π} (ab cos²t + ab sin²t) dt = (ab/2) ∫_0^{2π} (cos²t + sin²t) dt
      = (ab/2) ∫_0^{2π} (1) dt = (ab/2) [t]_0^{2π} = (ab/2)(2π)
    ∴ A = πab   Ans.

Note: To find the area of a circle of radius a, take a = b in the above illustration. (Verify!)
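The area formula can be sketched numerically (a quadrature of (1/2)∮(x dy − y dx) over the parametrized ellipse; a = 3, b = 2 are sample values):

```python
import math

# Midpoint-rule evaluation of A = (1/2) * closed integral of (x dy - y dx)
# over x = a*cos(t), y = b*sin(t); should match pi*a*b.
a, b, n = 3.0, 2.0, 100_000
dt = 2 * math.pi / n
area = 0.0
for k in range(n):
    t = (k + 0.5) * dt
    x, y = a * math.cos(t), b * math.sin(t)
    dxdt, dydt = -a * math.sin(t), b * math.cos(t)
    area += 0.5 * (x * dydt - y * dxdt) * dt
print(area, math.pi * a * b)  # both approx 18.8496
```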
Theorem 10.2 (Stokes' Theorem²: Relation between Line Integral & Surface Integral)
Let F⃗ be a differentiable vector point function defined over an open surface S bounded by the closed curve C. If n̂ denotes the unit outward normal vector to the surface S, then

    ∮_C F⃗ · dr⃗ = ∬_S curl F⃗ · n̂ ds = ∬_S (∇ × F⃗) · n̂ ds
Here the surface S consists of the five faces of the cube above the xy-plane (that is, the cube open at the bottom), and C is the boundary of the square in the xy-plane (the z = 0 plane), as shown in the figure.
To find the surface integral:
Among the faces, S₃: y = 0 (the square OEFA in the xz-plane) and S₄: y = 2 (the square CDGB parallel to the xz-plane).
Here F⃗ = (y − z + 2) î + (yz + 4) ĵ − xz k̂.

    ∴ curl F⃗ = ∇ × F⃗ = | î          ĵ        k̂    |
                        | ∂/∂x       ∂/∂y     ∂/∂z |
                        | y − z + 2   yz + 4   −xz |
    = î [∂(−xz)/∂y − ∂(yz + 4)/∂z] − ĵ [∂(−xz)/∂x − ∂(y − z + 2)/∂z] + k̂ [∂(yz + 4)/∂x − ∂(y − z + 2)/∂y]
    = î [0 − y] − ĵ [−z − (−1)] + k̂ [0 − 1]
    ∴ curl F⃗ = −y î + (z − 1) ĵ − k̂

² Sir George Stokes; Irish, 1819-1903.
Over S₁: x = 0, n̂ = −î ⇒ curl F⃗ · n̂ = y

    ∴ ∬_{S₁} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 y dy dz = 4

Over S₂: x = 2, n̂ = î ⇒ curl F⃗ · n̂ = −y

    ∴ ∬_{S₂} curl F⃗ · n̂ ds = −∫_0^2 ∫_0^2 y dy dz = −4

Over S₃: y = 0, n̂ = −ĵ ⇒ curl F⃗ · n̂ = −(z − 1)

    ∴ ∬_{S₃} curl F⃗ · n̂ ds = −∫_0^2 ∫_0^2 (z − 1) dx dz = 0

Over S₄: y = 2, n̂ = ĵ ⇒ curl F⃗ · n̂ = (z − 1)

    ∴ ∬_{S₄} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 (z − 1) dx dz = 0

Over S₅: z = 2, n̂ = k̂ ⇒ curl F⃗ · n̂ = −1

    ∴ ∬_{S₅} curl F⃗ · n̂ ds = ∫_0^2 ∫_0^2 (−1) dx dy = −4

Hence,

    ∬_S curl F⃗ · n̂ ds = ∬_{S₁+S₂+S₃+S₄+S₅} curl F⃗ · n̂ ds = 4 − 4 + 0 + 0 − 4
    ∴ ∬_S curl F⃗ · n̂ ds = −4      (10.8)
Along $C_1:\ y=0\ \Rightarrow\ dy=0,\quad x:0\to 2$
\[
\therefore\ \int_{C_1}\vec F\cdot d\vec r=\int_0^2 2\,dx=4
\]
Along $C_2:\ x=2\ \Rightarrow\ dx=0,\quad y:0\to 2$
\[
\therefore\ \int_{C_2}\vec F\cdot d\vec r=\int_0^2 4\,dy=8
\]
Along $C_3:\ y=2\ \Rightarrow\ dy=0,\quad x:2\to 0$
\[
\therefore\ \int_{C_3}\vec F\cdot d\vec r=\int_2^0 4\,dx=-8
\]
Along $C_4:\ x=0\ \Rightarrow\ dx=0,\quad y:2\to 0$
\[
\therefore\ \int_{C_4}\vec F\cdot d\vec r=\int_2^0 4\,dy=-8
\]
Hence,
\[
\oint_C\vec F\cdot d\vec r=4+8-8-8=-4,
\]
which agrees with (10.8), so Stokes' theorem is verified.
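The four edge integrals can be cross-checked with a short numerical sketch (Python; not from the text, helper names are illustrative). It sums midpoint-rule approximations of $\int\vec F\cdot d\vec r$ over the straight edges $C_1,\dots,C_4$ of the square and should reproduce $4+8-8-8=-4$, the value of (10.8):

```python
def F(x, y, z):
    # The vector field of the illustration: F = (y - z + 2, y*z + 4, -x*z)
    return (y - z + 2, y * z + 4, -x * z)

def edge_work(p, q, n=10_000):
    """Midpoint-rule approximation of the line integral of F along the
    straight segment from point p to point q."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        z = p[2] + t * (q[2] - p[2])
        fx, fy, fz = F(x, y, z)
        total += (fx * (q[0] - p[0]) + fy * (q[1] - p[1]) + fz * (q[2] - p[2])) / n
    return total

# Corners of the square C in the z = 0 plane, traversed C1 -> C2 -> C3 -> C4
corners = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
work = sum(edge_work(corners[i], corners[(i + 1) % 4]) for i in range(4))
print(abs(work - (-4)) < 1e-9)  # True: matches 4 + 8 - 8 - 8 = -4
```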
Illustration 10.11 Verify Stokes' theorem for the vector field $\vec F=(2x-y)\,\hat i-yz^2\,\hat j-y^2z\,\hat k$ over the upper half of the sphere $x^2+y^2+z^2=1$, bounded by its projection on the $xy$-plane.
Solution: Here $\vec F=(2x-y)\,\hat i-yz^2\,\hat j-y^2z\,\hat k$
\[
\therefore\ \operatorname{curl}\vec F=\nabla\times\vec F=
\begin{vmatrix}
\hat i & \hat j & \hat k\\[2pt]
\dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[2pt]
2x-y & -yz^2 & -y^2z
\end{vmatrix}
=\left(-2yz+2yz\right)\hat i-(0-0)\,\hat j+(0+1)\,\hat k
\]
\[
\therefore\ \operatorname{curl}\vec F=\hat k
\]
Also,
\[
\hat n=\frac{\nabla S}{|\nabla S|}=\frac{2x\,\hat i+2y\,\hat j+2z\,\hat k}{\sqrt{4x^2+4y^2+4z^2}}
\qquad\left[\because\ S:\ x^2+y^2+z^2=1\right]
\]
\[
=\frac{2x\,\hat i+2y\,\hat j+2z\,\hat k}{2\sqrt{x^2+y^2+z^2}}
=\frac{2x\,\hat i+2y\,\hat j+2z\,\hat k}{2\sqrt 1}
\]
\[
\therefore\ \hat n=x\,\hat i+y\,\hat j+z\,\hat k
\]
\[
\Rightarrow\ \operatorname{curl}\vec F\cdot\hat n=\hat k\cdot\left(x\,\hat i+y\,\hat j+z\,\hat k\right)=z,
\qquad \left|\hat n\cdot\hat k\right|=|z|=z \quad\left[\because\ z\ge 0 \text{ on the upper half}\right]
\]
Projecting $S$ on the $xy$-plane,
\[
\iint_S\operatorname{curl}\vec F\cdot\hat n\,ds=\iint_R\operatorname{curl}\vec F\cdot\hat n\,\frac{dx\,dy}{\left|\hat n\cdot\hat k\right|}
\]
where $R$ is the region bounded by the circle $x^2+y^2=1$ in the $xy$-plane.
\[
=\iint_R z\,\frac{dx\,dy}{z}=\iint_R dx\,dy
=\text{Area enclosed by } R:\ x^2+y^2=1 \quad[\because\ \text{Formula of area}]
\]
\[
=\pi(\text{radius})^2=\pi(1)^2=\pi
\]
\[
\therefore\ \iint_S\operatorname{curl}\vec F\cdot\hat n\,ds=\pi \tag{10.11}
\]
To find the line integral: On $C$ (the circle $x^2+y^2=1$ in the plane $z=0$),
\[
\oint_C\vec F\cdot d\vec r=\oint_C\left[\left(2x-y\right)dx-yz^2\,dy-y^2z\,dz\right]=\oint_C\left(2x-y\right)dx \quad[\because\ z=0]
\]
Taking $x=\cos t,\ y=\sin t,\ 0\le t\le 2\pi$, we get
\[
\oint_C\vec F\cdot d\vec r=\int_0^{2\pi}(2\cos t-\sin t)(-\sin t\,dt)
\]
\[
=\int_0^{2\pi}\left(-2\sin t\cos t+\sin^2 t\right)dt
=\int_0^{2\pi}\left(-\sin 2t+\frac{1-\cos 2t}{2}\right)dt
\]
\[
=\left[\frac{\cos 2t}{2}+\frac{1}{2}\left(t-\frac{\sin 2t}{2}\right)\right]_0^{2\pi}
=\left[\frac{\cos 4\pi}{2}+\frac{1}{2}\left(2\pi-\frac{\sin 4\pi}{2}\right)\right]-\left[\frac{1}{2}+0-0\right]
\]
\[
=\left[\frac{1}{2}+\pi-0\right]-\left[\frac{1}{2}\right]=\pi
\]
\[
\therefore\ \oint_C\vec F\cdot d\vec r=\pi \tag{10.12}
\]
Hence from (10.11) and (10.12), Stokes' theorem is verified.
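The boundary integral reduces to a one-variable integral, so it is easy to spot-check numerically. A minimal Python sketch (illustrative, not from the text) evaluates $\oint_C(2x-y)\,dx$ around the unit circle by a midpoint rule:

```python
import math

def hemisphere_boundary_integral(n=100_000):
    """Midpoint-rule value of the closed integral of (2x - y) dx around the
    boundary circle x = cos t, y = sin t, z = 0 of the hemisphere."""
    dt = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        # (2x - y) dx with dx = -sin t dt
        total += (2 * math.cos(t) - math.sin(t)) * (-math.sin(t)) * dt
    return total

print(abs(hemisphere_boundary_integral() - math.pi) < 1e-6)  # True
```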
Theorem 10.3 (Gauss Divergence Theorem³: Relation between Surface Integral & Volume Integral)
Let $\vec F$ be a differentiable vector point function defined throughout the volume $V$ enclosed by the closed surface $S$. If $\hat n$ denotes the unit outward normal vector to the surface $S$, then
\[
\iint_S\vec F\cdot\hat n\,ds=\iiint_V\operatorname{div}\vec F\,dv=\iiint_V\nabla\cdot\vec F\,dv
\]
Illustration 10.12 Verify the divergence theorem for $\vec F=4xz\,\hat i-y^2\,\hat j+yz\,\hat k$ taken over the cube bounded by the planes $x=0,\ x=1,\ y=0,\ y=1,\ z=0,\ z=1$.
Solution: Divergence theorem:
\[
\iint_S\vec F\cdot\hat n\,ds=\iiint_V\operatorname{div}\vec F\,dv=\iiint_V\left(\nabla\cdot\vec F\right)dx\,dy\,dz
\]
To find the surface integral: Here $\vec F=4xz\,\hat i-y^2\,\hat j+yz\,\hat k$ and $S$ is the closed surface of the unit cube. [See Illustration 10.6]
\[
\therefore\ \iint_S\vec F\cdot\hat n\,ds=\frac{3}{2} \tag{10.13}
\]
To find the volume integral:
\[
\operatorname{div}\vec F=\frac{\partial}{\partial x}(4xz)+\frac{\partial}{\partial y}\left(-y^2\right)+\frac{\partial}{\partial z}(yz)=4z-2y+y=4z-y
\]
\[
\therefore\ \iiint_V\operatorname{div}\vec F\,dv=\int_0^1\!\!\int_0^1\!\!\int_0^1(4z-y)\,dx\,dy\,dz=4\left(\frac{1}{2}\right)-\frac{1}{2}=\frac{3}{2},
\]
which agrees with (10.13); hence the divergence theorem is verified.
³ Carl Friedrich Gauss; German, 1777-1855.
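Both sides of the theorem for this cube can be checked with a small numerical sketch (Python; not from the text). It evaluates the triple integral of $\operatorname{div}\vec F=4z-y$ by a midpoint rule and the outward flux face by face; both should give $3/2$:

```python
def div_F(x, y, z):
    # div F for F = (4xz, -y**2, y*z): 4z - 2y + y = 4z - y
    return 4 * z - y

n = 40
h = 1.0 / n
pts = [(k + 0.5) * h for k in range(n)]  # midpoints of n subintervals of [0, 1]

# Volume side: triple integral of div F over the unit cube
volume_side = sum(div_F(x, y, z) for x in pts for y in pts for z in pts) * h ** 3

# Surface side: outward flux, face by face
def face_sum(g):
    return sum(g(u, v) for u in pts for v in pts) * h ** 2

flux = (face_sum(lambda y, z: 4 * z)   # x = 1 face: F . i = 4xz = 4z
        + face_sum(lambda x, z: -1.0)  # y = 1 face: F . j = -y**2 = -1
        + face_sum(lambda x, y: y))    # z = 1 face: F . k = y*z = y
# Faces x = 0, y = 0, z = 0 contribute 0: the relevant component vanishes there.

print(abs(volume_side - 1.5) < 1e-9, abs(flux - 1.5) < 1e-9)  # True True
```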
Illustration 10.13 Use the divergence theorem to evaluate $\displaystyle\iint_S\vec F\cdot d\vec s$, where $\vec F=x^3\,\hat i+x^2y\,\hat j+x^2z\,\hat k$ and $S$ is the surface bounding the region $x^2+y^2=a^2,\ z=0,\ z=b$. [Winter-2015]
Solution: Here $\vec F=x^3\,\hat i+x^2y\,\hat j+x^2z\,\hat k$
\[
\therefore\ \operatorname{div}\vec F=\nabla\cdot\vec F
=\frac{\partial}{\partial x}\left(x^3\right)+\frac{\partial}{\partial y}\left(x^2y\right)+\frac{\partial}{\partial z}\left(x^2z\right)
=3x^2+x^2+x^2
\]
\[
\therefore\ \operatorname{div}\vec F=5x^2
\]
By the divergence theorem,
\[
\iint_S\vec F\cdot\hat n\,ds=\iiint_V\operatorname{div}\vec F\,dv=5\iiint_V x^2\,dx\,dy\,dz \tag{10.15}
\]
Since $V$ is the volume of the cylinder (as shown in the figure), for the triple integral we change the cartesian coordinates $(x,y,z)$ into cylindrical coordinates $(r,\theta,z)$, as
\[
x=r\cos\theta,\quad y=r\sin\theta,\quad z=z,
\qquad x^2+y^2=r^2,\quad dx\,dy\,dz=r\,dr\,d\theta\,dz
\]
\[
0\le r\le a,\quad 0\le\theta\le 2\pi,\quad 0\le z\le b \quad[\text{limits for the whole cylinder}]
\]
Substituting in (10.15), we get
\[
\iint_S\vec F\cdot\hat n\,ds=5\int_0^b\!\!\int_0^{2\pi}\!\!\int_0^a(r\cos\theta)^2\,r\,dr\,d\theta\,dz
\]
\[
=5\left(\int_0^b dz\right)\left(\int_0^{2\pi}\cos^2\theta\,d\theta\right)\left(\int_0^a r^3\,dr\right)\quad[\because\ \text{separating integrals}]
\]
\[
=5\,[z]_0^b\times\left[\int_0^{2\pi}\frac{1+\cos 2\theta}{2}\,d\theta\right]\times\left[\frac{r^4}{4}\right]_0^a
=\frac{5a^4b}{8}\left[\theta+\frac{\sin 2\theta}{2}\right]_0^{2\pi}
=\frac{5a^4b}{8}\left[\left(2\pi+\frac{\sin 4\pi}{2}\right)-0\right]
\]
\[
\therefore\ \iint_S\vec F\cdot\hat n\,ds=\frac{5\pi a^4b}{4} \quad\textbf{Ans.}
\]
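The same cylindrical-coordinate integral can be approximated numerically as a sanity check. Below is a minimal Python sketch (illustrative, not from the text) that sums $5\,x^2\,r\,dr\,d\theta$ on a midpoint grid (the $z$-integral contributes only a factor $b$) and compares with $5\pi a^4 b/4$:

```python
import math

def cylinder_flux(a, b, nr=600, nt=360):
    """Midpoint-rule value of 5 * triple integral of x**2 over the cylinder
    x**2 + y**2 <= a**2, 0 <= z <= b, in cylindrical coordinates
    x = r cos(theta), dV = r dr dtheta dz."""
    dr = a / nr
    dth = 2 * math.pi / nt
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nt):
            th = (j + 0.5) * dth
            total += (r * math.cos(th)) ** 2 * r * dr * dth
    return 5 * b * total

a, b = 2.0, 3.0
approx = cylinder_flux(a, b)
exact = 5 * math.pi * a ** 4 * b / 4
print(abs(approx - exact) / exact < 1e-4)  # True
```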
Illustration 10.14 If $S$ is any closed surface enclosing the volume $V$ and $\vec F=x\,\hat i+2y\,\hat j+3z\,\hat k$, prove that $\displaystyle\iint_S\vec F\cdot\hat n\,ds=6V$.
Solution: If $S$ is a closed surface enclosing the volume $V$, then by the divergence theorem we have
\[
\iint_S\vec F\cdot\hat n\,ds=\iiint_V\operatorname{div}\vec F\,dv
\]
Let $\vec F=x\,\hat i+2y\,\hat j+3z\,\hat k\ \Rightarrow\ \operatorname{div}\vec F=\nabla\cdot\vec F=6$
\[
\therefore\ \iint_S\vec F\cdot\hat n\,ds=\iiint_V(6)\,dv=6\iiint_V dv=6\,\left(\text{Volume enclosed by }S\right)
\]
\[
\therefore\ \iint_S\vec F\cdot\hat n\,ds=6V \quad\textbf{Proved.}
\]
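Since the identity holds for any closed surface, it can be spot-checked on a sphere, where the flux is computable directly. The sketch below (Python; illustrative, not from the text) integrates $\vec F\cdot\hat n=(x^2+2y^2+3z^2)/R$ over a sphere of radius $R$ and compares with $6V=6\cdot\frac{4}{3}\pi R^3$:

```python
import math

def sphere_flux(R, nph=300, nth=300):
    """Outward flux of F = (x, 2y, 3z) through the sphere of radius R,
    via F . n = (x**2 + 2*y**2 + 3*z**2)/R and ds = R**2 sin(phi) dphi dtheta."""
    dph = math.pi / nph
    dth = 2 * math.pi / nth
    total = 0.0
    for i in range(nph):
        ph = (i + 0.5) * dph
        sp, cp = math.sin(ph), math.cos(ph)
        for j in range(nth):
            th = (j + 0.5) * dth
            x, y, z = R * sp * math.cos(th), R * sp * math.sin(th), R * cp
            total += (x * x + 2 * y * y + 3 * z * z) / R * R * R * sp * dph * dth
    return total

R = 1.5
V = 4 / 3 * math.pi * R ** 3
print(abs(sphere_flux(R) - 6 * V) / (6 * V) < 1e-3)  # True
```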
Exercise 10.2

1. Verify Green's theorem for the function $\vec F=(x+y)\,\hat i+2xy\,\hat j$ in the $xy$-plane for the region bounded by $x=0,\ y=0,\ x=a$ and $y=b$. [Summer-2016]

2. Verify Green's theorem in the plane for $\displaystyle\oint_C\left[\left(3x^2-8y^2\right)dx+\left(4y-6xy\right)dy\right]$, where $C$ is the boundary of the region bounded by $y=\sqrt x,\ y=x^2$.

3. Verify Green's theorem in the plane for $\displaystyle\oint_C\left\{\left(xy+y^2\right)dx+x^2\,dy\right\}$, where $C$ is the closed curve of the region bounded by $y=x$ and $y=x^2$.

4. Apply Green's theorem to evaluate $\displaystyle\oint_C\left(y^2\,dx+x^2\,dy\right)$, where $C$ is the plane triangle enclosed by the lines $x=0,\ y=0$ and $x+y=1$. [Winter-2016]

5. Use Green's theorem to evaluate $\displaystyle\oint_C\vec F\cdot d\vec r$, where $\vec F=\dfrac{-y\,\hat i+x\,\hat j}{x^2+y^2}$ and $C$ is the circle $x^2+y^2=1$ traversed in the counterclockwise direction. [Winter-2015]

6. Verify Stokes' theorem for the vector field $\vec F=\left(x^2+y^2\right)\hat i-2xy\,\hat j$ integrated round the rectangle in the
… will be the curve $1=5-x^2-y^2\ \Rightarrow\ x^2+y^2=4$.]

9. Use Stokes' theorem to evaluate $\displaystyle\oint_C\vec F\cdot d\vec r$, where $\vec F=(x+y)\,\hat i+(2x-z)\,\hat j-(y+z)\,\hat k$ and $C$ is the triangle with vertices $(1,0,0),\ (0,1,0)$ and $(0,0,1)$ with counter-clockwise rotation.

10. Use the divergence theorem to evaluate $\displaystyle\iint_S\vec F\cdot d\vec s$, where $\vec F=y\,\hat i+x\,\hat j+z^2\,\hat k$ and $S$ is the cylindrical region bounded by $x^2+y^2=a^2,\ z=0$ and $z=h$.

11. Use the divergence theorem to evaluate $\displaystyle\iint_S\vec F\cdot d\vec s$, where $\vec F=x^3\,\hat i+y^3\,\hat j+z^3\,\hat k$ and $S$ is the surface of the sphere $x^2+y^2+z^2=a^2$. [Hint: Use spherical polar coordinates for the triple integral.]

12. For any closed surface $S$, prove that $\displaystyle\iint_S\left[x\left(y-z\right)\hat i+y\left(z-x\right)\hat j+z\left(x-y\right)\hat k\right]\cdot d\vec s=0$. [Hint: $\operatorname{div}\vec F=0$]

13. If $\vec F=ax\,\hat i+by\,\hat j+cz\,\hat k$, where $a,b,c$ are constants, then show that $\displaystyle\iint_S\vec F\cdot\hat n\,ds=\frac{4}{3}\pi\left(a+b+c\right)$, where $S$ is the surface of the unit sphere.

Answers
4. $0$ &nbsp; 5. $2\pi$ &nbsp; 8. $0$ &nbsp; 9. $\dfrac{1}{2}$ &nbsp; 10. $\pi a^2h^2$ &nbsp; 11. $\dfrac{12\pi a^5}{5}$
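Answer 5 is a classic and easy to spot-check numerically. The sketch below (Python; illustrative, not from the text) evaluates the line integral of $\vec F=\dfrac{-y\,\hat i+x\,\hat j}{x^2+y^2}$ around the unit circle by a midpoint rule and should return $2\pi$:

```python
import math

def exercise_5_integral(n=100_000):
    """Midpoint-rule value of the line integral of F = (-y i + x j)/(x**2 + y**2)
    around the unit circle x = cos t, y = sin t (counterclockwise)."""
    dt = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt
        total += (-y * dx + x * dy) / (x * x + y * y)
    return total

print(abs(exercise_5_integral() - 2 * math.pi) < 1e-9)  # True
```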
Powered by
Prof. (Dr.) Rajesh M. Darji
B. Sc. (Gold Medalist)
M. Sc. (Gold Medalist)
Tetra Gold Medalist in Mathematics
Ph. D. (Mathematics)
ISTE (Reg. 107058)
IMS, AMS
http://rmdarji.ijaamm.com/
Contact: (+91) 9427 80 9779