
Chapter: 03

R.C Matrix: Algebraic Structures and Eigen Properties
What you will find in this Chapter?

• This chapter will explore the Eigen space of the R.C matrix.

Prerequisites:

• Eigen properties of matrices in general.

3.1 Introduction:

An R.C matrix is a square matrix in which the i-th row, for every integer value of i, is orthogonal to the i-th column. Though the set of all R.C matrices is a sub-class of the square matrices, it refrains from obeying some of the basic tenets of matrix algebra. The member matrices of the set of R.C matrices, except the null matrix, are always non-singular, and they conceal many striking characteristics seemingly common in nature. The topic has opened many avenues for further research work.

We introduce the R.C matrix and try to unveil some of its salient features. These are the features which have become known to us on looking at its basic structure from different angles.

3.2 Defining Property and General Form:

What we call an R.C matrix is the outcome of an open discussion on the various properties of matrices. Just a thought: what can happen if each one of the n rows of a matrix is orthogonal to the corresponding column of the same matrix? In general the i-th row, for every integer value of i, is orthogonal to the i-th column, and hence the name 'R.C' matrix is truly justified.

We give an illustration:

A = [1  2  −3]
    [1  2  −6]  is an R.C matrix.
    [1  1   3]

(Each row is orthogonal to the like-numbered column; e.g. row 1 is (1, 2, −3) and column 1 is (1, 1, 1), whose dot product is 1 + 2 − 3 = 0.)
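The defining property can be checked mechanically. Below is a minimal sketch in Python (the function name `is_rc` is our own, not from the text) testing whether every i-th row of a square array is orthogonal to the i-th column, applied to a 3 × 3 matrix consistent with the illustration above:

```python
def is_rc(M):
    """Return True when every i-th row of M is orthogonal to the i-th column."""
    n = len(M)
    return all(
        sum(M[i][k] * M[k][i] for k in range(n)) == 0  # row_i . col_i
        for i in range(n)
    )

A = [[1, 2, -3],
     [1, 2, -6],
     [1, 1,  3]]
print(is_rc(A))  # True: A satisfies the R.C property
```

The identity matrix fails the test (row 1 · column 1 = 1), confirming that the property is genuinely restrictive.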

We write its general form. An R.C matrix A of order n × n from the set JJn is as follows:

A = [a_11  a_12  …  a_1n]
    [a_21  a_22  …  a_2n]
    [ …     …    …    … ]
    [a_n1  a_n2  …  a_nn],  a_ij ∈ ℝ

We have the R.C property:

(a_ii)² + Σ_{j≠i} a_ij a_ji = 0  for i = 1 to n ..(1)

[e.g. (a_11)² + a_12 a_21 + a_13 a_31 + a_14 a_41 + … = 0]

We have R.C matrices of higher order, and they can even be constructed by extending the R.C property from a given R.C matrix of order 3 × 3.

R.C matrices of order 2 × 2 are, for instance,

[−1  −1]   [−1   1]   [ 1   1]   [1  −1]
[ 1  −1],  [−1  −1],  [−1   1],  [1   1]
  (1)        (2)        (3)       (4)

These are parallel in constitutional nature to what are known as spin matrices in quantum physics. This connection can give our readers an insight marking their debut into that vast field.
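As a quick cross-check, one can enumerate every 2 × 2 matrix with entries ±1 and keep those satisfying the R.C property; the matrices displayed above belong to this family. A sketch (names of our own choosing):

```python
from itertools import product

def is_rc(M):
    n = len(M)
    return all(sum(M[i][k] * M[k][i] for k in range(n)) == 0 for i in range(n))

# Enumerate all 2x2 sign matrices and keep the R.C ones.
family = [
    ((a, b), (c, d))
    for a, b, c, d in product((-1, 1), repeat=4)
    if is_rc([[a, b], [c, d]])
]
print(len(family))  # 8 sign patterns satisfy a^2 + bc = 0 and bc + d^2 = 0
```

The conditions reduce to bc = −1 with a and d free, which is why exactly eight sign patterns qualify.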

3.3 Pair-Wise Graphical Presentation:

At this stage we prefer to draw facts from the graphical presentation of an R.C matrix and extend our imagination to search for such matrices of higher order than those we tackle on hand. We take the first matrix shown above.

Let A = [−1  −1]
        [ 1  −1];

it is an R.C matrix. In the two-dimensional rectangular frame, the vectors (−1, −1) and (−1, 1) are orthogonal to each other. In the same way, the vectors (1, −1) and (−1, −1) are orthogonal to each other.

We have developed an elegant method of extending an R.C matrix of order n × n to the next higher order (n + 1) × (n + 1). We shall discuss the same in the section to follow. Just for citation purposes, we write an R.C matrix of order 3 × 3,

A = [1/2  −1/2   1/4]
    [ 1     1   −1/2]
    [ 1     1    1/2],

and a specimen of order 4 × 4 can be obtained by one more application of the extension technique.
[The extension of an R.C matrix from 2 × 2 to an R.C matrix of the subsequent higher order is shown in the annexure. Readers are requested to go through the technical proceedings.]

3.4 General Format of n × n R.C Matrices:

We write its general form. An R.C matrix A of order n × n from the set JJn is as follows:

A = [a_11  a_12  …  a_1n]
    [a_21  a_22  …  a_2n]
    [ …     …    …    … ]
    [a_n1  a_n2  …  a_nn],  a_ij ∈ ℝ

We have the R.C property:

(a_ii)² + Σ_{j≠i} a_ij a_ji = 0  for i = 1 to n ..(1)

We will be treating a general format of the 3 × 3 R.C matrix and derive many theorems and properties. We will consider the matrix

A = [a  b  c]
    [x  y  z]        ..(2)
    [p  q  r]

as a standard matrix and treat it as an R.C matrix with all real entries.

[If all real entries are zero, then it is a null matrix, and hence the defining property of an R.C matrix permits the null matrix in the category of R.C matrices,

i.e. O = [0  0  0]
         [0  0  0]  : a null matrix is, by definition, an R.C matrix.] ..(3)
         [0  0  0]

We have, by the definition of an R.C matrix applied to the matrix A in (2), the following conditions:

a² + bx + cp = 0 ..(4)
bx + y² + qz = 0 ..(5)
cp + qz + r² = 0 ..(6)

We shall write A = bx, B = cp, and C = qz, ..(7)

i.e. a² + A + B = 0,  y² + A + C = 0,  and  r² + B + C = 0.

From the three equations written above, we derive

a² + y² − r² = −2A ⟹ A = −½(a² + y² − r²). In the same way we can write two more equations. All three are listed below.

A = bx = ½(−a² − y² + r²)
B = cp = ½(−a² + y² − r²)   ..(8)
C = qz = ½(a² − y² − r²)
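The closed forms in (8) can be verified numerically. The sketch below (variable names ours) compares the direct products bx, cp, qz with the closed forms, using the entries of the sample R.C matrix from Section 3.2:

```python
from fractions import Fraction

# Standard layout from (2):  [a b c; x y z; p q r]
a, b, c = 1, 2, -3
x, y, z = 1, 2, -6
p, q, r = 1, 1, 3

A1, B1, C1 = b * x, c * p, q * z           # direct products
half = Fraction(1, 2)
print(A1 == half * (-a**2 - y**2 + r**2))  # True
print(B1 == half * (-a**2 + y**2 - r**2))  # True
print(C1 == half * (a**2 - y**2 - r**2))   # True
```

Exact rational arithmetic avoids any floating-point doubt in the comparison.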
Using the above relations that we have derived, we will prove some important notions in terms of theorems.

Theorem 01: An R.C matrix with real entries, except the null matrix, cannot be a symmetric matrix.

Proof: Let the R.C matrix be A = [a b c; x y z; p q r] as defined by (2). If it is a symmetric matrix then x = b, p = c, and z = q, so each of bx = b², cp = c², and qz = q² is non-negative and can never be negative.

Using the relations in (8): bx ≥ 0 gives −a² − y² + r² ≥ 0; in the same way cp ≥ 0 gives −a² + y² − r² ≥ 0, and qz ≥ 0 gives a² − y² − r² ≥ 0.

Adding all the results obtained above, we have −(a² + y² + r²) ≥ 0, which is possible for real values of a, y, and r only when a = y = r = 0.

This in turn makes each of bx = b², cp = c², and qz = q² equal to 0, so b = c = q = 0 as well, and the matrix is null.

This proves that, except for the null matrix, an R.C matrix cannot be a symmetric one. This proves the theorem.
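Theorem 01 can be sanity-checked by brute force: among symmetric 3 × 3 integer matrices with small entries, only the null matrix satisfies the R.C conditions. A minimal sketch (the search range is an arbitrary choice of ours):

```python
from itertools import product

hits = []
for a, b, c, y, q, r in product(range(-2, 3), repeat=6):
    # Symmetric layout: [[a, b, c], [b, y, q], [c, q, r]]
    # The R.C conditions reduce to sums of squares:
    if a*a + b*b + c*c == 0 and b*b + y*y + q*q == 0 and c*c + q*q + r*r == 0:
        hits.append((a, b, c, y, q, r))
print(hits)  # [(0, 0, 0, 0, 0, 0)]: only the null matrix survives
```

Since each condition is a sum of squares, it vanishes only when every entry involved vanishes, exactly as the proof argues.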

Lemma: On the same lines, a non-null R.C matrix cannot be a skew-symmetric matrix.

Proof: For a skew-symmetric matrix each diagonal element vanishes, i.e. a = y = r = 0, and x = −b, p = −c, z = −q, so each of bx = −b², cp = −c², and qz = −q² is less than or equal to zero. Condition (4) then reads −b² − c² = 0, which forces b = c = 0; conditions (5) and (6) likewise force q = 0. By skew-symmetry x = p = z = 0 as well, so all entries are zero and the matrix is null. This proves the lemma.

In the next section we will prove some defining features of 3 × 3 R.C matrices and then establish that those can be extended to R.C matrices of higher order also.

Theorem 02: An R.C matrix, except the null matrix, is always a non-singular matrix.

Proof: By definition we accept that the null matrix is an R.C matrix, and in that case the R.C matrix is singular.

For the case when an R.C matrix is non-null, we claim its non-singularity by the defining property of the R.C matrix.

[By the definition of an R.C matrix, its i-th row vector is orthogonal to the i-th column vector only, making the result of their dot product/inner product equal to zero. Had it been orthogonal to any column other than the i-th one, the column vectors would become linearly dependent, which would result in singularity of the matrix.]

We conclude that an R.C matrix, except the null matrix, is always a non-singular matrix.
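Theorem 02 is consistent with the running example: the sample matrix of Section 3.2 is non-null and has a non-zero determinant. A quick check with a hand-rolled 3 × 3 determinant (names ours):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (x, y, z), (p, q, r) = M
    return a * (y*r - z*q) - b * (x*r - z*p) + c * (x*q - y*p)

A = [[1, 2, -3],
     [1, 2, -6],
     [1, 1,  3]]
print(det3(A))  # -3, hence A is non-singular
```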

Theorem 3: In a 3 × 3 R.C matrix with real entries, the products of at least two of the pairs of off-(principal-)diagonal entries are negative.

[This is an important clue for constructing a 3 × 3 R.C matrix. The same concept can be extended to R.C matrices of higher order.]

Proof: Let us consider the R.C matrix A = [a b c; x y z; p q r] as defined by (2).

As mentioned in the statement, we want to prove that at least two of the terms bx, cp, and qz must be negative; this is one of the most important salient features of an R.C matrix. The theorem targets establishing that at least two of bx, cp, and qz are strictly less than zero.

Being an R.C matrix, the entries follow the R.C property. We write relations (4), (5), and (6) as below:

a² + bx + cp = 0
bx + y² + qz = 0
cp + qz + r² = 0

We introduce the notations A = bx, B = cp, and C = qz.

With this we enjoin the notations with the above relations to get

a² + A + B = 0,  y² + A + C = 0,  and  r² + B + C = 0

From the three equations written above, we derive

a² + y² − r² = −2A ⟹ A = bx = ½(−a² − y² + r²). In the same way we can write two more equations. All three are listed below.

A = bx = ½(−a² − y² + r²)
B = cp = ½(−a² + y² − r²)
C = qz = ½(a² − y² − r²)

From this junction we discuss different cases for A = bx, B = cp, and C = qz.

Case 1: All of A, B, and C cannot be positive.

Say A = bx > 0. This implies that −a² − y² + r² > 0; in the same way B = cp > 0 implies that −a² + y² − r² > 0. Adding the two results, we have −2a² > 0, which is not possible.

[In the same way we can derive −2y² > 0 and −2r² > 0.]

This helps us conclude that A = bx, B = cp, and C = qz all > 0 is not possible for an R.C matrix.

Case 2: Any two of A, B, and C > 0 with the remaining one < 0 is not possible.

The proof is an immediate consequence of Case 1 above: the argument there already uses only two positive products.

Case 3: Any one of A, B, and C = 0 with the other two positive is not possible.

The proof is supported by Case 1 mentioned above.

Case 4: Only one of A, B, and C < 0, with the other two non-negative, is not possible.

Case 2 mentioned above supports the statement, and hence the proof.

Case 5: Any two of A, B, and C are < 0 and the remaining one is > 0.

Say A = bx < 0. This implies that −a² − y² + r² < 0,
and B = cp < 0. This implies that −a² + y² − r² < 0.

Adding them we get −2a² < 0, which is true. [∵ A is a real-entry matrix.] In addition to this, C = qz > 0 implies a² − y² − r² > 0. From the first inequality we have a² − r² > −y², so a² − y² − r² > 0 is compatible with both inequalities above; it holds whenever a² exceeds y² + r².

This shows that bx < 0 and cp < 0 together with C = qz > 0 is an admissible configuration.

Case 6: All three of A, B, and C are negative.

Say A = bx < 0. This implies that −a² − y² + r² < 0;
B = cp < 0. This implies that −a² + y² − r² < 0; and
C = qz < 0 implies that a² − y² − r² < 0.

As found in the above case, adding the first two relations we derive −2a² < 0; from the other pairs we can likewise derive −2y² < 0 and −2r² < 0. We get each of a², y², and r² > 0, which is true. This configuration is therefore also admissible.

In every admissible case, at least two of bx, cp, and qz are negative, which proves the statement.
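The sign pattern predicted by Theorem 3 shows up in both example matrices of this chapter. A small check (helper name ours) counting the negative products among bx, cp, and qz:

```python
def off_diagonal_products(M):
    """Return (bx, cp, qz) for the layout [a b c; x y z; p q r]."""
    (a, b, c), (x, y, z), (p, q, r) = M
    return (b * x, c * p, q * z)

for M in ([[1, 2, -3], [1, 2, -6], [1, 1, 3]],
          [[1, 5, -1], [2, 2, -2], [11, 7, 5]]):
    prods = off_diagonal_products(M)
    negatives = sum(1 for v in prods if v < 0)
    print(prods, negatives)  # at least two of the three products are negative
```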

3.5 Eigen Values:

This is the most important and useful notion in matrix algebra. For a given square matrix A there exists a non-zero vector X such that for some value λ the matrix equation AX = λX is satisfied; λ is called an Eigen value and X is called the Eigen vector. In this section we discuss salient features of Eigen values and Eigen vectors for the R.C matrix.
3.5.1 Introduction to Important Preliminaries:

Before we proceed to enunciate our findings, we would like to mention certain peripherals regarding the R.C matrix and Eigen values. This will simplify our proceedings; all of it will prove useful in the arguments proving the next theorems.

Let us initially focus our attention on A = [a b c; x y z; p q r], an R.C matrix with all real entries as defined by (2). Let λ₁, λ₂, and λ₃ be the Eigen values of A with corresponding non-zero Eigen vectors X₁, X₂, and X₃.

[1] As the matrix A is an R.C matrix, by the defining properties we have the following results. These results are already mentioned in earlier work, but just to abridge we cite them at this point:

a² + bx + cp = 0,  bx + y² + qz = 0,  and  cp + qz + r² = 0

We introduce the notations A = bx, B = cp, and C = qz and derive

A = bx = ½(−a² − y² + r²)
B = cp = ½(−a² + y² − r²)
C = qz = ½(a² − y² − r²)

∴ a² + y² + r² = −2(bx + cp + qz) ..(9)

[2] Also recalling the facts pertaining to the Eigen values λ₁, λ₂, and λ₃, we write

(a) Sum of Eigen values = trace of the matrix: λ₁ + λ₂ + λ₃ = a + y + r ..(10)

(b) Sum of products of Eigen values taken two at a time: λ₁λ₂ + λ₂λ₃ + λ₃λ₁ = (ay − bx) + (ar − cp) + (yr − qz) ..(11)

(c) Product of Eigen values: λ₁λ₂λ₃ = det A = |A| ..(12)

With this on hand we state some theorems.

Theorem 4: The Eigen values of an R.C matrix with real entries are either all zero or exactly one of them is real.

Proof: As the null matrix is also an R.C matrix, in that case all Eigen values are zero, and hence the proof. If the matrix A is not a null matrix, then we proceed as follows.

Using the facts mentioned, we have

(λ₁ + λ₂ + λ₃)² = λ₁² + λ₂² + λ₃² + 2(λ₁λ₂ + λ₂λ₃ + λ₃λ₁)

∴ (a + y + r)² = λ₁² + λ₂² + λ₃² + 2[(ay − bx) + (ar − cp) + (yr − qz)]

but a² + y² + r² = −2(bx + cp + qz) [as mentioned earlier],

so we have (a + y + r)² = −2(bx + cp + qz) + 2(ay + ar + yr).

On comparing the two results for (a + y + r)², we have

λ₁² + λ₂² + λ₃² + 2(ay + ar + yr) − 2(bx + cp + qz) = −2(bx + cp + qz) + 2(ay + ar + yr)

⟹ λ₁² + λ₂² + λ₃² = 0 ..(13)

For a real matrix the complex Eigen values occur in conjugate pairs; if all three Eigen values were real, (13) would force each of them to be zero. Hence either all are zero, or exactly one is real and non-zero while the other two form a complex-conjugate pair.

Hence the proof.
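Result (13) can be confirmed without computing any Eigen value explicitly: by Newton's identity, λ₁² + λ₂² + λ₃² = T² − 2M, the trace squared minus twice the sum of the principal minors. A sketch over both example matrices of this chapter (function name ours):

```python
def trace_and_minor_sum(M):
    (a, b, c), (x, y, z), (p, q, r) = M
    T = a + y + r                                # trace
    S = (y*r - z*q) + (a*r - c*p) + (a*y - b*x)  # sum of principal 2x2 minors
    return T, S

for M in ([[1, 2, -3], [1, 2, -6], [1, 1, 3]],
          [[1, 5, -1], [2, 2, -2], [11, 7, 5]]):
    T, S = trace_and_minor_sum(M)
    print(T*T - 2*S)  # 0: the sum of squared Eigen values vanishes
```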

Theorem 5: All the Eigen values of a non-null R.C matrix are non-zero.

Proof: If any one of the Eigen values were zero, it would imply that |A| = 0.

This would mean that the R.C matrix is a singular matrix, violating the defining property of the R.C matrix: except for the null matrix, it is always non-singular.

Note: We have the derived fact that the Eigen values λ₁, λ₂, and λ₃ are such that

λ₁² + λ₂² + λ₃² = 0 ..(14)

Then one is real and different from zero while the other two are complex conjugates of each other.

3.5.2 Important Derivation:

In tune with the above results we have, up till now, two important results: (13) and (14). From (13) we write λ₁² + λ₂² + λ₃² = 0 with no λ being zero (in the non-null case).

⟹ λ₂² + λ₃² = −λ₁². We conclude that λ₂ and λ₃ are complex conjugates of each other.

Let λ₂ = α + iβ and λ₃ = α − iβ. So we get λ₂² + λ₃² = 2(α² − β²) = −λ₁².

∴ −λ₁² = 2(α² − β²) ..(15)

This logically implies that |β| > |α|.

These Eigen values will satisfy λ₁² + λ₂² + λ₃² = 0.

Deductions: The results established above will help deduce the following relations for the R.C matrix A = [a b c; x y z; p q r].

[1] Trace: λ₁ + λ₂ + λ₃ = a + y + r = T (say)

[2] Sum of minors of the principal diagonal: λ₁λ₂ + λ₂λ₃ + λ₃λ₁ = (ay − bx) + (ar − cp) + (yr − qz) = M (say)

[3] Determinant: λ₁λ₂λ₃ = |A| = det(A) = Δ (say)

[4] λ₁² + λ₂² + λ₃² = 0

Using all these we derive:

[5] λ³ − Tλ² + Mλ − Δ = 0, the characteristic equation

[6] λ₂ + λ₃ = 2α = T − λ₁ and λ₂ − λ₃ = 2iβ, in connection with [1] above

[7] λ₁ = T − 2α, i.e. α = (T − λ₁)/2, and λ₁(2α) + λ₂λ₃ = M

[8] Using λ₂λ₃ = α² + β² = Δ/λ₁ together with [7], we have β² = Δ/λ₁ − α², so λ₂ = α + iβ and λ₃ = α − iβ.

This conveys that knowing only the real Eigen value is sufficient to write the remaining two complex-conjugate Eigen values.

[9] Fact: in a given R.C matrix there exists at least one column or row, say C, such that C = c·C′ for a non-zero real value c; i.e. the column or the row is a real-constant multiple of a simpler vector.

For A = [1 2 −3; 1 2 −6; 1 1 3], the third column is C₃ = (−3, −6, 3)ᵀ = 3(−1, −2, 1)ᵀ.
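Deductions [6] to [8] say the real Eigen value determines the complex pair: α = (T − λ₁)/2 and β² = Δ/λ₁ − α². The sketch below (bisection bracket and iteration count are our choices) finds the real root of the characteristic cubic of the running example and reconstructs λ₂ and λ₃:

```python
T, M, D = 6, 18, -3                      # trace, minor sum, determinant of A
f = lambda t: t**3 - T*t**2 + M*t - D    # characteristic polynomial, deduction [5]

# Bisection for the unique real root; here f(-1) < 0 < f(0).
lo, hi = -1.0, 0.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
lam1 = (lo + hi) / 2

alpha = (T - lam1) / 2                   # deduction [7]
beta = (D / lam1 - alpha**2) ** 0.5      # deduction [8]: beta^2 = D/lam1 - alpha^2
lam2 = complex(alpha, beta)
print(abs(f(lam2)) < 1e-6)               # True: alpha + i*beta is also a root
```

The check |β| > |α| from (15) also holds for the computed pair.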
3.5.3 Graphical Method of Approximating the Real Eigen Value:

By now it is well known that a non-null R.C matrix has only one non-zero real Eigen value while the remaining two are complex conjugates. [At this stage we reiterate that a real-entry R.C matrix can be neither symmetric nor skew-symmetric.]

The vision shaping this section is to locate graphically, and approximate algebraically, the real root of the characteristic equation of a given R.C matrix. Of the many properties inter-linking the different Eigen values of a given R.C matrix discussed above, we state here what we shall require: deduction [5], the characteristic equation

f(λ) = λ³ − Tλ² + Mλ − Δ = 0

For the real Eigen root λ, the graph of f(λ) on a set of perpendicular real axes will intersect the X-axis in a point, say x₁ = λ₁; its location is our objective. For λ = 0, f(λ) = −Δ, where, as said earlier, Δ = det A = λ₁λ₂λ₃ and λ₂, λ₃ are the complex Eigen values. Plotting this, we get the graph of a cubic curve. We parallel our work citing a real R.C matrix.
Let A = [1 2 −3; 1 2 −6; 1 1 3], with T = trace = 6, M = 18, and Δ = −3.

f(λ) = λ³ − 6λ² + 18λ + 3; for λ = 0, f(0) = 3. This situation is graphed below in Figure 3.5.1. Figure 3.5.2 shows its magnification on an interval about its intersection with the X-axis.

Figure 3.5.1: Characteristic Equation of an R.C Matrix

Figure 3.5.2: Magnified interval


Let f(0) = h. [In our case f(0) = 3.] As f has a real root, its graph crosses the X-axis. This implies that there exists x₂ < 0 such that f(x₂) = −h. [In our case f(x₂) = −h = −3.] We can always find such an x₂ algebraically.

As f(x₂ = −0.3015) = −3 and f(0) = 3, the root lies in the interval (x₂, 0) = (−0.3015, 0). Let x₃ = x₂/2 = −0.15075; now we find f(x₃) = f(−0.15075) = h₁ (say), and the next approximation follows in the same manner.

Figure 3.5.3: Iterative version


In this way, after a finite number of iterations, for a given ε > 0 we can find a real xₙ so that |f(xₙ)| < ε. This is an effective method for refining, with graphical guidance, the approximation to the real Eigen value.
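The location step described above can be sketched as follows: take h = f(0), solve f(x₂) = −h for its real solution (here by bisection rather than algebraically), and then halve the bracketing interval (x₂, 0). Bracket width and iteration counts below are our own choices:

```python
def f(t):  # characteristic polynomial of the running example
    return t**3 - 6*t**2 + 18*t + 3

h = f(0.0)                       # h = 3

# Find x2 with f(x2) = -h; f is strictly increasing, so bisect f(t) + h.
lo, hi = -10.0, 0.0              # f(-10) + h < 0 < f(0) + h
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) + h < 0 else (lo, mid)
x2 = (lo + hi) / 2               # about -0.3015

# The real Eigen value lies in (x2, 0); halve the interval as in the text.
lo, hi = x2, 0.0
for _ in range(60):
    mid = (lo + hi) / 2          # the first midpoint is x3 = x2 / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
print(x2, root)
```

The derivative 3λ² − 12λ + 18 has negative discriminant, so f is monotone and both bisections converge to unique points.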

While we derived and critically reviewed the characteristics of R.C matrices of dimensions 2 × 2 and onwards, we found many interesting features. We admit we have searched only a small area, and we continue our efforts, inspired by each new result we work upon. Excavating such unknown areas may engage mathematically ignited minds. All constructive suggestions are welcome.

Annexure: As discussed, in this section we elaborate the technique of extending a 2 × 2 R.C matrix to R.C matrices of higher order. We begin with a simple R.C matrix of order 2 × 2.

Let us consider A₁ = [1  −1]
                     [1   1].

Let us consider the column system as y′ = (1)x + 1 and y′ = (−1)x + 1 [which shows perpendicular lines in R² space].

Integrating each one with respect to x, we get

y = x²/2 + x + c  and  y = −x²/2 + x + c.

The matrix which corresponds to this system is

A₂ = [1/2  −1/2]
     [ 1     1 ].

We extend this matrix A₂ as

A₃ = [1/2  −1/2  c₁]
     [ 1     1   c₂]
     [ p     q   c₃]

where all the letters in the different positions are real values, so planned that they satisfy the R.C property. We have

(1/2)² + (−1/2)(1) + c₁p = 0,  (−1/2)(1) + 1² + c₂q = 0,  and  c₁p + c₂q + (c₃)² = 0

This gives us a free choice in the selection of the variables, remaining within the given equations. We select p = 1, c₁ = 1/4, q = 1, c₂ = −1/2, and we take c₃ = 1/2.

The extended version of the R.C matrix is now

A₃ = [1/2  −1/2   1/4]
     [ 1     1   −1/2]
     [ 1     1    1/2]

Again, on the same lines, this can be extended to an R.C matrix of size 4 × 4.
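The extension can be verified end-to-end with exact rational arithmetic; a sketch (checker name ours):

```python
from fractions import Fraction as F

def is_rc(M):
    n = len(M)
    return all(sum(M[i][k] * M[k][i] for k in range(n)) == 0 for i in range(n))

A3 = [[F(1, 2), F(-1, 2), F(1, 4)],
      [F(1),    F(1),     F(-1, 2)],
      [F(1),    F(1),     F(1, 2)]]
print(is_rc(A3))  # True: the bordered 3x3 matrix keeps the R.C property
```

The other admissible choice c₃ = −1/2 would pass the same check, since c₃ enters the conditions only through its square.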

3.5.4 Some Examples: Graphical Method

We take one more illustration. For example, if we consider the R.C matrix

B = [ 1  5  −1]
    [ 2  2  −2]
    [11  7   5]

then the Eigen values are obtained as

|B − λI| = 0 ⇒ p(λ) = λ³ − 8λ² + 32λ + 128 = 0

Step: 01
The initial approximation of the real Eigen value is λ₀ = 0. This initial approximation will always work, as we know that any real non-null R.C matrix has one real Eigen value and a pair of complex-conjugate Eigen values. Since the sum of squares of the Eigen values of an R.C matrix is always zero, all Eigen values of an R.C matrix are zero only if the real Eigen value is zero.

p(0) = 128

This gives the height 'h' of the characteristic curve at λ = 0 in the XY plane, which, as discussed above, is always non-zero for a non-null R.C matrix.

The characteristic equation of B is a cubic polynomial, and using the nature of its graph we can claim a point with exact height '−h' (same magnitude but opposite in sign). So the equation p(λ) = −128 is always consistent and gives a real solution:

p(λ) = −128 ⇒ λ³ − 8λ² + 32λ + 128 = −128

⇒ λ = −3.5247, λ = 5.7626 + 6.2786i, and λ = 5.7626 − 6.2786i

Considering the real root λ = −3.5247, the average of these two approximations gives the first approximation:

λ₁ = (λ₀ + λ)/2 = (0 + (−3.5247))/2 ⇒ λ₁ = −1.76235
Step: 02
Now we iterate the same process with the input λ₁ = −1.76235 into the characteristic equation p(λ) = 0:

p(−1.76235) = 41.28413657

So the equation p(λ) = −41.28413657 is always consistent and gives a real solution:

p(λ) = −41.28413657 ⇒ λ³ − 8λ² + 32λ + 128 = −41.28413657

⇒ λ = −2.74987, λ = 5.37493 − 5.71583i, and λ = 5.37493 + 5.71583i

Considering the real root λ = −2.74987, the average of these two approximations gives the second approximation:

λ₂ = (λ₁ + λ)/2 = ((−1.76235) + (−2.74987))/2 ⇒ λ₂ = −2.25611
Step: 03
Now we iterate the same process with the input λ₂ = −2.25611 into the characteristic equation p(λ) = 0:

p(−2.25611) = 3.60054850

So the equation p(λ) = −3.60054850 is always consistent and gives a real solution:

p(λ) = −3.60054850 ⇒ λ³ − 8λ² + 32λ + 128 = −3.60054850

⇒ λ = −2.34119, λ = 5.1705 − 5.42915i, and λ = 5.1705 + 5.42915i

Considering the real root λ = −2.34119, the average of these two approximations gives the third approximation:

λ₃ = (λ₂ + λ)/2 = ((−2.25611) + (−2.34119))/2 ⇒ λ₃ = −2.29865
Step: 04
Now we repeat our first step with the input λ₃ = −2.29865 into the characteristic equation p(λ) = 0:

p(−2.29865) = 0.02727735

So the equation p(λ) = −0.02727735 is always consistent and gives a real solution:

p(λ) = −0.02727735 ⇒ λ³ − 8λ² + 32λ + 128 = −0.02727735

⇒ λ = −2.29929, λ = 5.14964 − 5.4002i, and λ = 5.14964 + 5.4002i

Considering the real root λ = −2.29929, the average of these two approximations gives the fourth approximation:

λ₄ = (λ₃ + λ)/2 = ((−2.29865) + (−2.29929))/2 ⇒ λ₄ = −2.29897

Conclusion: λ₁ = −1.76235, λ₂ = −2.25611, λ₃ = −2.29865, and λ₄ = −2.29897.

Four approximations give a solution accurate to three decimal places. For further accuracy we can repeat the same steps.
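The four steps above can be automated. The sketch below (bracket and iteration counts are our choices) solves p(λ) = −p(λₖ) for its unique real root by bisection (p is strictly increasing, since p′(λ) = 3λ² − 16λ + 32 has negative discriminant) and then averages, as in Steps 01 to 04:

```python
def p(t):
    return t**3 - 8*t**2 + 32*t + 128   # characteristic polynomial of B

def real_solution(level):
    """Unique real t with p(t) = level (p is strictly increasing)."""
    lo, hi = -100.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p(mid) < level else (lo, mid)
    return (lo + hi) / 2

lam = 0.0                                # initial approximation, Step 01
for _ in range(4):
    t = real_solution(-p(lam))           # real root of p(t) = -p(lam)
    lam = (lam + t) / 2                  # average, as in Steps 01-04
print(lam)  # approximately -2.29897, matching the fourth approximation
```

At the fixed point of this map p(λ) = −p(λ) forces p(λ) = 0, so the iteration can only converge to the real Eigen value.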
