
A

TERM PAPER ON

TOPIC: HERMITIAN AND UNITARY MATRICES

SUBMITTED ON: 15th NOV, 2010

DEPARTMENT OF COMPUTER SCIENCE

LOVELY PROFESSIONAL UNIVERSITY

Submitted to: Miss Swati Aggarwal

Submitted by: Deepanshu Bansal

Registration No.: 11013005

Roll No.: RE1001A29


Email ID: diipanshu@gmail.com

Section: E1001

Subject Code: MTH 101


In mathematics, a Hermitian matrix (or self-adjoint matrix) is a
square matrix with complex entries that is equal to its own
conjugate transpose – that is, the element in the i-th row and
j-th column is equal to the complex conjugate of the element in
the j-th row and i-th column, for all indices i and j:

a_{ij} = \overline{a_{ji}}.

If the conjugate transpose of a matrix A is denoted by A^H, then
the Hermitian property can be written concisely as

A = A^H.

Hermitian matrices can be understood as the complex extension of
real symmetric matrices.
Hermitian matrices are named after Charles Hermite, who
demonstrated in 1855 that matrices of this form share with real
symmetric matrices the property of always having real eigenvalues.
Properties
The entries on the main diagonal (top left to bottom
right) of any Hermitian matrix are necessarily real.
A matrix that has only real entries is Hermitian if
and only if it is a symmetric matrix, i.e., if it is
symmetric with respect to the main diagonal. A real
and symmetric matrix is simply a special case of a
Hermitian matrix.
Every Hermitian matrix is normal, and the finite-dimensional
spectral theorem applies. It says that any Hermitian matrix can
be diagonalized by a unitary matrix, and that the resulting
diagonal matrix has only real entries. This means that all
eigenvalues of a Hermitian matrix are real, and, moreover,
eigenvectors with distinct eigenvalues are orthogonal. It is
possible to find an orthonormal basis of C^n consisting only of
eigenvectors.
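
As a quick numerical sketch of the spectral theorem (the 2×2
Hermitian matrix below is an arbitrary example of mine, not from
the text), numpy.linalg.eigh returns real eigenvalues and an
orthonormal basis of eigenvectors:

    import numpy as np

    # An arbitrary Hermitian matrix: real diagonal, conjugate off-diagonal pair
    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])

    # eigh is specialized for Hermitian matrices
    eigvals, U = np.linalg.eigh(A)

    print(eigvals)                                        # real eigenvalues
    print(np.allclose(U.conj().T @ U, np.eye(2)))         # True: columns orthonormal
    print(np.allclose(U @ np.diag(eigvals) @ U.conj().T, A))  # True: A = U D U^H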
The sum of any two Hermitian matrices is Hermitian, and the
inverse of an invertible Hermitian matrix is Hermitian as well.
However, the product of two Hermitian matrices A and B is
Hermitian only if they commute, i.e., if AB = BA. Thus A^n is
Hermitian if A is Hermitian and n is a positive integer (and for
any integer n if A is also invertible).

The Hermitian n-by-n matrices form a vector space over the real
numbers (but not over the complex numbers). The dimension of this
space is n^2 (one degree of freedom per main-diagonal element,
and two degrees of freedom per element above the main diagonal).
The eigenvectors of a Hermitian matrix are orthogonal, i.e., its
eigendecomposition is

A = U \Sigma U^H,  where  U U^H = I.

Since right- and left-inverses are the same, we also have
U^H = U^{-1}, and therefore

A = U \Sigma U^{-1} = \sum_i \sigma_i u_i u_i^H,

where \sigma_i are the eigenvalues and u_i the eigenvectors.
Additional properties of Hermitian matrices include:
The sum of a square matrix and its conjugate
transpose is Hermitian.
The difference of a square matrix and its conjugate
transpose is skew-Hermitian (also called
antihermitian).
An arbitrary square matrix C can be written as the sum of a
Hermitian matrix A and a skew-Hermitian matrix B:

C = A + B,  with  A = \frac{1}{2}(C + C^H),  B = \frac{1}{2}(C - C^H).

The determinant of a Hermitian matrix is real.

Hermitian and Unitary Matrices (7.3, 7.4)
Let A be a complex n×n matrix (in general).
Denote the complex conjugate of A by \bar{A}
(recall: z = x + iy ⇒ \bar{z} = x − iy).
Definition.
A is Hermitian if A = \bar{A}^T.

A is unitary if A^{-1} = \bar{A}^T.

(Compare Hermitian and unitary with symmetric and orthogonal in
the real case; they reduce to those cases when A is real.)
Example. Hermitian matrix:

A = \begin{pmatrix} 15 & 6-2i \\ 6+2i & 3 \end{pmatrix} ⇒
\bar{A} = \begin{pmatrix} 15 & 6+2i \\ 6-2i & 3 \end{pmatrix} ⇒
\bar{A}^T = \begin{pmatrix} 15 & 6-2i \\ 6+2i & 3 \end{pmatrix} = A.

Example. Unitary matrix:

A = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} ⇒
\bar{A}^T = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \\ \frac{-i}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} ⇒
A \bar{A}^T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Theorem (p. 386).


The eigenvalues of a Hermitian matrix are all real.
The eigenvalues of a unitary matrix all have |λ| = 1.

Fact: Write A as (c_1, c_2, ..., c_n) (a row of the columns of A).
Then, if A is a unitary matrix,

\bar{c}_j^T c_k = \delta_{jk},

where

\delta_{jk} = \begin{cases} 0, & j \ne k \\ 1, & j = k \end{cases}

is the Kronecker delta. (p. 390)


The column vectors are said to form a unitary system.
This gives a useful way of characterizing unitary (and, in
particular, orthogonal) matrices. Note that the formula

\bar{c}_j^T c_k = \delta_{jk}

is merely another way of saying that

(\bar{A}^T A)_{jk} = (I)_{jk}.
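
A direct column-by-column check of this characterization (a
sketch, reusing the unitary matrix from the example above):

    import numpy as np

    s = 1 / np.sqrt(2)
    A = np.array([[s, 1j * s],
                  [1j * s, s]])

    n = A.shape[0]
    for j in range(n):
        for k in range(n):
            inner = A[:, j].conj() @ A[:, k]    # conj(c_j)^T c_k
            expected = 1.0 if j == k else 0.0   # Kronecker delta
            assert np.isclose(inner, expected)
    print("columns form a unitary system")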
Similarity Transformations and Diagonalization (7.5)
Definition. A matrix A is said to be similar to a matrix
A if there exists a non-singular matrix P such that
.
P −1 AP = A

Example. Look back at the 3-masses-on-a-string example. We have

A = \begin{pmatrix} 3 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 3 \end{pmatrix},

\lambda_1 = 1, \lambda_2 = 3, \lambda_3 = 4,
x_1^T = (1, 2, 1),  x_2^T = (1, 0, -1),  x_3^T = (1, -1, 1).

Construct P = (x_1, x_2, x_3), the 3×3 matrix whose columns are
the eigenvectors of A. As A x_i = \lambda_i x_i for i = 1, 2, 3,
we can write these 3 equations together as

A P = P D,   (*)

where

D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}

is a diagonal matrix.


Since the 3 eigenvectors are linearly independent (3
distinct eigenvalues), P is non-singular (i.e..
invertible, having rank 3).
So, multiplying (*) on left by P-1 obtain:
P-1A P = D,
i.e., the matrix A is similar to the diagonal matrix D.
We can also write
A = PDP-1.
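
This particular diagonalization is easy to verify numerically
(a minimal sketch of the example above):

    import numpy as np

    A = np.array([[3, -1, 0],
                  [-1, 2, -1],
                  [0, -1, 3]])

    # Columns of P are the eigenvectors x1, x2, x3 from the text
    P = np.array([[1, 1, 1],
                  [2, 0, -1],
                  [1, -1, 1]])

    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 10))   # diag(1, 3, 4)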
Theorem 1. (p. 392) Similar matrices have the same
eigenvalues and the same determinant.

Proof. We have A x = \lambda x and \hat{A} = P^{-1} A P. Thus,

P^{-1} A x = P^{-1} (\lambda x) = \lambda (P^{-1} x),
P^{-1} A P P^{-1} x = \hat{A} (P^{-1} x),

so that

\hat{A} (P^{-1} x) = \lambda (P^{-1} x).

So the eigenvalues \lambda are the same, and P^{-1} x is an
eigenvector of \hat{A}.

Also,

\det(\hat{A}) = \det(P^{-1}) \det(A) \det(P) = \det(A),

since \det(P^{-1}) = \frac{1}{\det(P)}.
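
A numerical spot check of the theorem (a sketch; P here is an
arbitrary invertible matrix of mine):

    import numpy as np

    A = np.array([[3.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 3.0]])
    P = np.array([[1.0, 2.0, 0.0],   # arbitrary non-singular matrix
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    A_hat = np.linalg.inv(P) @ A @ P

    print(np.allclose(np.sort(np.linalg.eigvals(A_hat)),
                      np.sort(np.linalg.eigvals(A))))          # True: same eigenvalues
    print(np.isclose(np.linalg.det(A_hat), np.linalg.det(A)))  # True: same determinant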

Theorem (p. 394). Let A be n×n.
If A has n linearly independent eigenvectors (the eigenvalues
need not be distinct), then A is similar to a diagonal matrix
(and vice versa).
Specifically, we have

D = P^{-1} A P,

where

D = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix},  P = (x_1, x_2, ..., x_n)

is the matrix of eigenvectors.
Definition. A is a normal matrix if

A \bar{A}^T = \bar{A}^T A.

Examples of normal matrices:
all real symmetric matrices,
all real skew-symmetric matrices,
all Hermitian matrices,
all unitary matrices.
Fact. If A is normal, then A is similar to a diagonal matrix, and
then A has n linearly independent eigenvectors, and vice versa
(p. 393).
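
A small check of normality for these classes (a sketch; each
instance is an arbitrary example of mine):

    import numpy as np

    def is_normal(A):
        # A is normal if it commutes with its conjugate transpose
        return np.allclose(A @ A.conj().T, A.conj().T @ A)

    symmetric      = np.array([[3.0, 2.0], [2.0, 0.0]])
    skew_symmetric = np.array([[0.0, 1.0], [-1.0, 0.0]])
    hermitian      = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
    s = 1 / np.sqrt(2)
    unitary        = np.array([[s, 1j * s], [1j * s, s]])

    for M in (symmetric, skew_symmetric, hermitian, unitary):
        assert is_normal(M)
    print("all four matrices are normal")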
Let A be a normal n×n matrix. To diagonalize it:

find the n eigenvalues \lambda_j of A;

find n linearly independent eigenvectors x_j.

Then P and D follow.
Now, we can always arrange that the n eigenvectors of a normal
matrix form a unitary system (p. 393). It follows that P is
actually a unitary matrix, so that

D = \bar{P}^T A P.

In particular, if A is real symmetric, we can diagonalize it
using an orthogonal matrix P:

D = P^T A P.

Example. A symmetric (normal) matrix:

A = \begin{pmatrix} 3 & 2 \\ 2 & 0 \end{pmatrix};

\det(A - \lambda I) = \begin{vmatrix} 3-\lambda & 2 \\ 2 & -\lambda \end{vmatrix} = \lambda^2 - 3\lambda - 4 = 0
⇒ \lambda_1 = -1, \lambda_2 = 4.

Eigenvectors: solve

\begin{pmatrix} 3-\lambda & 2 \\ 2 & -\lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}:

\lambda_1 = -1 ⇒ 4x + 2y = 0 ⇒ x_1 = \alpha \begin{pmatrix} 1 \\ -2 \end{pmatrix},
\lambda_2 = 4 ⇒ -x + 2y = 0 ⇒ x_2 = \beta \begin{pmatrix} 2 \\ 1 \end{pmatrix}.

Let's take \alpha = \beta = 1. Then

P = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix} ⇒ P^{-1} = \frac{1}{5} \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}.

Check that

P^{-1} A P = \begin{pmatrix} -1 & 0 \\ 0 & 4 \end{pmatrix}.

(Normalizing each column of P by 1/\sqrt{5} would make P
orthogonal, as in the preceding remark.)

Example. To solve

A x = b,

multiply on the left by P^{-1}:

P^{-1} A P P^{-1} x = P^{-1} b,

i.e., solve

D y = f,  where  y = P^{-1} x  (so x = P y)  and  f = P^{-1} b.

This is easy to solve when D is diagonal.
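
As a sketch, solving A x = b this way with the example matrix
above (the right-hand side b is an arbitrary choice of mine):

    import numpy as np

    A = np.array([[3.0, 2.0],
                  [2.0, 0.0]])
    P = np.array([[1.0, 2.0],
                  [-2.0, 1.0]])
    D = np.diag([-1.0, 4.0])
    b = np.array([1.0, 2.0])           # arbitrary right-hand side

    f = np.linalg.solve(P, b)          # f = P^{-1} b
    y = f / np.diag(D)                 # solve D y = f componentwise
    x = P @ y                          # x = P y

    print(np.allclose(A @ x, b))       # True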


Example (differential equations):

\dot{x} = A x(t).

Setting y = P^{-1} x decouples the system:

\dot{y} = D y.
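
A sketch of the decoupling: each component of y evolves
independently as y_i(t) = y_i(0) e^{\lambda_i t}, and x(t)
follows from x = P y (the initial condition x0 is an arbitrary
choice of mine):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[3.0, 2.0],
                  [2.0, 0.0]])
    P = np.array([[1.0, 2.0],
                  [-2.0, 1.0]])
    lam = np.array([-1.0, 4.0])        # eigenvalues of A

    x0 = np.array([1.0, 1.0])          # arbitrary initial condition
    t = 0.5

    y0 = np.linalg.solve(P, x0)        # y(0) = P^{-1} x(0)
    y_t = y0 * np.exp(lam * t)         # decoupled scalar solutions
    x_t = P @ y_t                      # back to original coordinates

    # Compare with the matrix-exponential solution x(t) = expm(At) x0
    print(np.allclose(x_t, expm(A * t) @ x0))   # True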

Applications: principal axes and rotation of coordinates.
In a 3D world, let O x_1 x_2 x_3 be a fixed Cartesian coordinate
system, with unit vectors e_i along O x_i. Any vector c can be
represented as

c = c_1 e_1 + c_2 e_2 + c_3 e_3 = \sum_{j=1}^{3} c_j e_j.   (*)

As e_i \cdot e_j = \delta_{ij}, we have c_j = c \cdot e_j.
Now, consider a second Cartesian coordinate system
O x_1' x_2' x_3' with the same origin and unit vectors e_i'.
Thus, we also have

c = \sum_{j=1}^{3} c_j' e_j',  with  c_j' = c \cdot e_j'.
In particular, taking c = e_i gives

e_i = \sum_{j=1}^{3} q_{ij} e_j',

where

q_{ij} = e_i \cdot e_j'

are called direction cosines.

Alternatively, if we choose c = e_i' in (*), we get

e_i' = \sum_{j=1}^{3} q_{ji} e_j.

Let Q = (q_{ij}), a 3×3 matrix. We see that the j-th column gives
the components of e_j' in terms of O x_1 x_2 x_3, so that Q
specifies the orientation of O x_1' x_2' x_3' relative to
O x_1 x_2 x_3.
Fact: Q is an orthogonal matrix:

Q Q^T = Q^T Q = I.
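
For instance (a sketch; the rotation angle is an arbitrary choice
of mine), a rotation about the x_3-axis produces such a Q:

    import numpy as np

    theta = 0.3                        # arbitrary rotation angle about x3
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[c, -s, 0.0],        # columns: new unit vectors e1', e2', e3'
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

    print(np.allclose(Q @ Q.T, np.eye(3)))   # True: Q is orthogonal
    print(np.allclose(Q.T @ Q, np.eye(3)))   # True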

The matrix Q also relates vectors in rotated coordinate systems:

c' = Q^T c,

where c' gives the components of c in the rotated coordinate
system.
Suppose we have a problem leading to

t = A s

in the O x_1 x_2 x_3 system, where A is a 3×3 symmetric matrix.
Multiplying by Q^T:

Q^T t = (Q^T A Q)(Q^T s).

Then diagonalize: solve (A - \lambda I) x = 0 for the eigenvalues
and eigenvectors, and then define

D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix},  Q = (x_1, x_2, x_3).

From Q we can find the rotated coordinate system
O x_1' x_2' x_3', so that we get

t' = D s',  where  t' = Q^T t,  s' = Q^T s.

Explicitly, we have

t_i' = \lambda_i s_i',  i = 1, 2, 3.

The new axes are called principal axes. Their virtue is that if

s' = s_j' e_j'  for some j,

then

t' = t_j' e_j' = \lambda_j s_j' e_j'.

This makes it possible to simplify mechanical systems by working
in the principal axes (rotation of a rigid body, stress in an
elastic body, etc.).
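
A closing sketch of the whole procedure (the symmetric matrix A
below is an arbitrary example of mine): numpy.linalg.eigh
computes the principal axes directly, and its eigenvector columns
already form an orthogonal Q:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],     # arbitrary symmetric 3x3 matrix
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    lam, Q = np.linalg.eigh(A)         # eigenvalues, orthonormal eigenvectors

    s = np.array([1.0, 0.0, 2.0])      # arbitrary vector, original system
    t = A @ s

    s_p = Q.T @ s                      # components along the principal axes
    t_p = Q.T @ t

    print(np.allclose(t_p, lam * s_p)) # True: t'_i = lambda_i * s'_i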
