
# TERM PAPER

## SUB:- ENGINEERING MATHEMATICS

TOPIC:- For a given matrix A, prove that the trace of A equals the sum of its eigenvalues. What are the eigenvalues of A - kI? Also give examples.

## SUBMITTED BY: MAAN, ROLL NO: RF4002A50

SUBMITTED TO: Mr. GURPREET SINGH BHATIA

CERTIFICATE

THIS IS TO CERTIFY THAT THE TERM PAPER HAS BEEN PREPARED BY MAAN (ROLL NO: RF4002A50) UNDER OUR SUPERVISION AND GUIDANCE. HIS EFFORT IS SATISFACTORY AND WE APPRECIATE IT. Sig. of teacher:

ACKNOWLEDGEMENT

IT IS A MATTER OF GREAT PLEASURE FOR ME TO ACKNOWLEDGE MY INDEBTEDNESS TO MY TEACHER, Lect. Mr. GURPREET SINGH BHATIA (DEPTT. OF MATHEMATICS), LOVELY PROFESSIONAL UNIVERSITY, PHAGWARA (PB), FOR LENDING HIS HELPING HAND IN THE COMPLETION OF MY TERM PAPER. HE INSPIRED ME THROUGHOUT THE ACADEMIC SEMESTER AND HELPED ME IN FINDING A SUITABLE TERM PAPER TOPIC. THE VALUABLE GUIDANCE, ENCOURAGEMENT AND FACILITIES PROVIDED TO ME DURING THE PREPARATION OF MY TERM PAPER WERE VERY HELPFUL.

## Introduction to Eigenvalues

In mathematics, and in particular in linear algebra, an important tool for describing eigenvalues of square matrices is the characteristic polynomial: saying that λ is an eigenvalue of A is equivalent to stating that the system of linear equations (A - λI)v = 0 (where I is the identity matrix) has a non-zero solution v (namely an eigenvector), which in turn is equivalent to the determinant det(A - λI) being zero. The function p(λ) = det(A - λI) is a polynomial in λ, since determinants are defined as sums of products.

This is the characteristic polynomial of A: the eigenvalues of a matrix are the zeros of its characteristic polynomial. It follows that we can compute all the eigenvalues of a matrix A by solving the equation p_A(λ) = 0. If A is an n-by-n matrix, then p_A has degree n, and A can therefore have at most n eigenvalues. Conversely, the fundamental theorem of algebra says that this equation has exactly n complex roots (zeros), counted with multiplicity. Every real polynomial of odd degree has a real root, so for odd n every real matrix has at least one real eigenvalue. For a real matrix, whether n is even or odd, the non-real eigenvalues come in conjugate pairs.
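This fact can be checked numerically; the following sketch (NumPy and the particular matrix are my own illustrative choices, not part of the original text) computes the eigenvalues both as roots of the characteristic polynomial and with a direct eigenvalue routine:

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix chosen for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.poly(A) returns the coefficients of the monic characteristic
# polynomial det(xI - A), highest degree first; its roots are the
# eigenvalues of A.
roots = np.roots(np.poly(A))
direct = np.linalg.eigvals(A)

print(np.sort(roots.real), np.sort(direct.real))  # both approximately [1, 2, 4]
```

Both routes agree, as the argument above predicts: the zeros of p_A are exactly the eigenvalues.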

An example of a matrix with no real eigenvalues is the 90-degree rotation

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$

whose characteristic polynomial is λ^2 + 1, so its eigenvalues are the pair of complex conjugates i, -i. The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic polynomial, that is, p_A(A) = 0.

## Eigenvalues of 2×2 matrices

An analytic solution for the eigenvalues of a 2×2 matrix can be obtained directly from the quadratic formula: if

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

then the characteristic polynomial is

$$\det(A - \lambda I) = \lambda^2 - (a + d)\lambda + (ad - bc),$$

so the solutions (the eigenvalues) are

$$\lambda = \frac{(a + d) \pm \sqrt{(a + d)^2 - 4(ad - bc)}}{2}.$$

Notice that the characteristic polynomial of a 2×2 matrix can be written in terms of the trace tr(A) = a + d and determinant det(A) = ad - bc as

$$p(\lambda) = \lambda^2 - \lambda\,\mathrm{tr}(A) + \det(A) = \det(A - \lambda I_2),$$

where I_2 is the 2×2 identity matrix. The solutions for the eigenvalues of a 2×2 matrix can thus be written as

$$\lambda = \frac{\mathrm{tr}(A) \pm \sqrt{\mathrm{tr}(A)^2 - 4\det(A)}}{2}.$$

Thus, for the very special case where the 2×2 matrix has zero determinant but non-zero trace, the eigenvalues are zero and the trace (corresponding to the negative and positive roots, respectively). For example, the eigenvalues of the following matrix are 0 and (a^2 + b^2):

$$\begin{pmatrix} a^2 & ab \\ ab & b^2 \end{pmatrix}.$$
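The quadratic formula above is easy to implement and check; the sketch below (my own illustration, with arbitrarily chosen numbers a = 3, b = 4) applies it to the zero-determinant case and to the 90-degree rotation from the introduction:

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the quadratic formula:
    lambda = (tr +/- sqrt(tr^2 - 4*det)) / 2."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det                 # negative -> complex conjugate pair
    root = disc ** 0.5 if disc >= 0 else complex(0.0, math.sqrt(-disc))
    return (tr + root) / 2, (tr - root) / 2

# Zero-determinant case from the text: the eigenvalues are the trace and 0.
a, b = 3.0, 4.0
print(eig2x2(a * a, a * b, a * b, b * b))    # (25.0, 0.0), trace = a^2 + b^2 = 25

# The 90-degree rotation from the introduction: the complex pair i, -i.
print(eig2x2(0.0, -1.0, 1.0, 0.0))           # (1j, -1j)
```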

It cannot be stressed enough that this formula holds only for a 2×2 matrix.

## Eigenvalues of 3×3 matrices

If

$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix},$$

then the characteristic polynomial of A is

$$\det(A - \lambda I) = -\lambda^3 + (a + e + i)\lambda^2 - (ae + ai + ei - bd - cg - fh)\lambda + \det(A).$$

Alternatively, the characteristic polynomial of a 3×3 matrix can be written in terms of the trace tr(A) and determinant det(A) as

$$\det(A - \lambda I_3) = -\lambda^3 + \lambda^2\,\mathrm{tr}(A) - \frac{\lambda}{2}\left(\mathrm{tr}(A)^2 - \mathrm{tr}(A^2)\right) + \det(A),$$

where I_3 is the 3×3 identity matrix. The eigenvalues of the matrix are the roots of this polynomial, which can be found using the method for solving cubic equations. A formula for the eigenvalues of a 4×4 matrix could be derived in an analogous way, using the formulae for the solutions of the quartic equation.
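The trace-and-determinant form of the 3×3 characteristic polynomial can be verified numerically; the following sketch (NumPy and the sample matrix are my own choices) compares it against the determinant det(A - λI) at a few sample points:

```python
import numpy as np

# An arbitrary 3x3 matrix chosen for illustration.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])

trA, trA2, detA = np.trace(A), np.trace(A @ A), np.linalg.det(A)

def p(lam):
    """Characteristic polynomial written via tr(A), tr(A^2) and det(A)."""
    return -lam**3 + trA * lam**2 - 0.5 * (trA**2 - trA2) * lam + detA

# Agreement with det(A - lam*I) at a few sample points.
for lam in (-1.0, 0.0, 2.5):
    assert np.isclose(p(lam), np.linalg.det(A - lam * np.eye(3)))
```

Two cubic polynomials that agree at more than three points are identical, so these spot checks already pin down the formula for this matrix.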

## Trace (linear algebra)

In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.,

$$\mathrm{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii},$$

where a_ii denotes the entry on the ith row and ith column of A. Equivalently, the trace of a matrix is the sum of its eigenvalues, making it invariant with respect to a change of basis. This characterization can be used to define the trace for a linear operator in general. Note that the trace is only defined for a square matrix (i.e. n×n). Examples: Let T be a linear operator represented by a matrix whose diagonal entries are 2, 1 and -1 (the off-diagonal entries do not affect the trace).

Then tr(T) = 2 + 1 - 1 = 2. The trace of the identity matrix is the dimension of the space; this leads to generalizations of dimension using trace. The trace of a projection (i.e., a matrix P with P^2 = P) is the rank of the projection. The trace of a nilpotent matrix is zero.
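These three special cases are quick to confirm; the sketch below (NumPy and the concrete matrices are my own illustrative choices) checks the identity, a projection onto the x-axis, and a nilpotent matrix:

```python
import numpy as np

# The trace of the identity matrix is the dimension of the space.
assert np.trace(np.eye(5)) == 5.0

# The trace of a projection (P^2 = P) is its rank: here P projects onto the x-axis.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)                      # P is idempotent
assert np.trace(P) == np.linalg.matrix_rank(P) == 1

# The trace of a nilpotent matrix is zero.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(N @ N, 0.0) and np.trace(N) == 0.0
```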

The product of a symmetric matrix and a skew-symmetric matrix has zero trace. More generally, if $f(x) = (x - \lambda_1)^{d_1} \cdots (x - \lambda_k)^{d_k}$ is the characteristic polynomial of a matrix A, then

$$\mathrm{tr}(A) = d_1 \lambda_1 + \cdots + d_k \lambda_k.$$

If A and B are positive semi-definite matrices of the same order, then

$$0 \le \mathrm{tr}(AB) \le \mathrm{tr}(A)\,\mathrm{tr}(B).$$

## Properties

The trace is a linear map. That is,

$$\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B), \qquad \mathrm{tr}(cA) = c\,\mathrm{tr}(A),$$

for all square matrices A and B, and all scalars c. If A is an m×n matrix and B is an n×m matrix, then tr(AB) = tr(BA). Conversely, the above properties characterize the trace completely, in the following sense: let f be a linear

functional on the space of square matrices satisfying f(xy) = f(yx); then f and tr are proportional. The trace is similarity-invariant, which means that A and P^{-1}AP have the same trace. This is because

$$\mathrm{tr}(P^{-1}AP) = \mathrm{tr}\big(P^{-1}(AP)\big) = \mathrm{tr}\big((AP)P^{-1}\big) = \mathrm{tr}(A).$$
A matrix and its transpose have the same trace: tr(A) = tr(A^T). Let A be a symmetric matrix and B an antisymmetric matrix. Then tr(AB) = 0. When both A and B are n-by-n, the trace of the (ring-theoretic) commutator of A and B vanishes: tr([A, B]) = 0; one can state this as "the trace is a map of Lie algebras from operators to scalars", since the commutator of scalars is trivial (it is an abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.
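Several of these properties can be confirmed at once on random matrices; the sketch below (NumPy and the fixed seed are my own choices, used only for reproducibility) checks tr(AB) = tr(BA), similarity invariance, invariance under transpose, and the vanishing trace of a commutator:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # a random matrix is invertible with probability 1

assert np.isclose(np.trace(A @ B), np.trace(B @ A))                   # tr(AB) = tr(BA)
assert np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))    # similarity invariance
assert np.isclose(np.trace(A), np.trace(A.T))                         # tr(A) = tr(A^T)
assert np.isclose(np.trace(A @ B - B @ A), 0.0)                       # commutator has zero trace
```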

Conversely, any square matrix with zero trace is the commutator of some pair of matrices. Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros. The trace of any power of a nilpotent matrix is zero. When the characteristic of the base field is zero, the converse also holds: if tr(A^k) = 0 for all k, then A is nilpotent. Note that order does matter in taking traces: in general,

$$\mathrm{tr}(ABC) \ne \mathrm{tr}(ACB).$$

In other words, we can only interchange the two halves of the expression, albeit repeatedly. This means that the trace is invariant under cyclic permutations, i.e.,

$$\mathrm{tr}(ABCD) = \mathrm{tr}(BCDA) = \mathrm{tr}(CDAB) = \mathrm{tr}(DABC).$$

This is known as the cyclic property. However, if products of three symmetric (or, more generally, Hermitian) matrices are considered, any permutation is allowed. (Proof: $\mathrm{tr}(ABC) = \mathrm{tr}(A^T B^T C^T) = \mathrm{tr}((CBA)^T) = \mathrm{tr}(CBA)$.) For more than three factors this is not true. Unlike the determinant, the trace of a product is not the product of traces. What is true is that the trace of the tensor product of two matrices is the product of their traces:

$$\mathrm{tr}(A \otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B).$$
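The cyclic property, the Kronecker-product identity, and the entry-wise (Hadamard) expression of a trace of a product can all be checked on random matrices; this sketch (NumPy and the seed are my own choices) is only an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

# Cyclic property: cyclic rotations of the factors preserve the trace.
t = np.trace(A @ B @ C @ D)
assert np.isclose(t, np.trace(B @ C @ D @ A))
assert np.isclose(t, np.trace(C @ D @ A @ B))
assert np.isclose(t, np.trace(D @ A @ B @ C))

# Tensor (Kronecker) product: tr(A (x) B) = tr(A) tr(B).
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))

# tr(X^T Y) equals the sum of the entry-wise (Hadamard) product X * Y,
# so no square matrix product ever needs to be formed.
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((4, 3))
assert np.isclose(np.trace(X.T @ Y), np.sum(X * Y))
```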

The trace of a product can be rewritten as the sum of all elements of a Hadamard product (entry-wise product):

$$\mathrm{tr}(A^T B) = \sum_{i,j} A_{ij} B_{ij}.$$

This is more computationally efficient, since the matrix product of an n×m matrix with an m×n one (the first and last dimensions must match to give a square matrix for the trace) entails mn^2 multiplications and n + (m - 1)n^2 additions, whereas the Hadamard (entry-wise) version requires only mn multiplications followed by mn additions.

## The exponential trace

Expressions like exp(tr(A)), where A is a square matrix, occur so often in some fields (e.g. multivariate statistical theory) that a shorthand notation has become common:

$$\operatorname{etr}(A) := \exp(\mathrm{tr}(A)).$$

This is sometimes referred to as the exponential trace function.

## Trace of a linear operator

Given some linear map f : V → V (V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V, describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to similar matrices, allowing for a basis-independent definition of the trace of a linear map. Such a definition can be given using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V*, where V* is the dual space of V. Let v be in V and let g be in V*. Then the trace of the decomposable element v ⊗ g is defined to be g(v); the trace of a general element is defined by linearity. Using an explicit basis for V and the corresponding dual basis for V*, one can show that this gives the same definition of the trace as given above.

## Eigenvalue relationships

If A is a square n-by-n matrix with real or complex entries and λ_1, ..., λ_n are its (complex) eigenvalues, listed according to their algebraic multiplicities, then

$$\mathrm{tr}(A) = \sum_{i=1}^{n} \lambda_i.$$

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ_1, ..., λ_n on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; i.e.,

$$\det(A) = \prod_{i=1}^{n} \lambda_i.$$

More generally,

$$\mathrm{tr}(A^k) = \sum_{i=1}^{n} \lambda_i^k.$$
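All three eigenvalue relationships can be seen on a small concrete matrix; the sketch below (my own illustration, using NumPy and the matrix [[4, 1], [2, 3]], whose eigenvalues are 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)        # eigenvalues of A (here 5 and 2)

# trace = sum of eigenvalues, determinant = product of eigenvalues
assert np.isclose(np.trace(A), lam.sum())
assert np.isclose(np.linalg.det(A), lam.prod())

# More generally, tr(A^k) equals the sum of lam_i^k.
for k in (2, 3, 4):
    assert np.isclose(np.trace(np.linalg.matrix_power(A, k)), np.sum(lam**k))
```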

## Derivatives

The trace is the derivative of the determinant: it is the Lie algebra analog of the (Lie group) map of the determinant. This is made precise in Jacobi's formula for the derivative of the determinant (see under determinant). As a particular case,

$$\left.\frac{d}{dt}\det(I + tA)\right|_{t=0} = \mathrm{tr}(A):$$

the trace is the derivative of the determinant at the identity. From this (or from the connection between the trace and the eigenvalues), one can derive a connection between the trace function, the exponential map between a Lie algebra and its Lie group (or concretely, the matrix exponential function), and the determinant:

$$\det(\exp(A)) = \exp(\mathrm{tr}(A)).$$

For example, consider the one-parameter family of linear transformations given by rotation through angle θ,

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$

These transformations all have determinant 1, so they preserve area. The derivative of this family at θ = 0 is the antisymmetric matrix

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$

which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area. A related characterization of the trace applies to linear vector fields. Given a matrix A, define a vector field F on R^n by F(x) = Ax. The components of this vector field are linear functions (given by the rows of A). The divergence div F is a constant function, whose value is equal to tr(A). By the divergence theorem, one can interpret this in terms of flows: if F(x) represents the velocity of a fluid at the location x and U is a region in R^n, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U. The trace is a linear operator, hence its derivative is constant:

$$\frac{d\,\mathrm{tr}(X)}{dX} = I,$$
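The identity det(exp(A)) = exp(tr(A)) can be illustrated with the rotation example above; the sketch below (my own illustration: NumPy, an arbitrary angle θ = 0.7, and a truncated power series standing in for a proper matrix-exponential routine) exponentiates the trace-zero generator and recovers an area-preserving rotation:

```python
import math
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via its power series (adequate for small matrices)."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# The trace-zero rotation generator from the text.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
theta = 0.7

R = expm(theta * J)   # this is rotation by the angle theta
# det(exp(A)) = exp(tr(A)): here tr = 0, so det(R) = 1 and area is preserved.
print(np.linalg.det(R), math.exp(np.trace(theta * J)))  # both are 1, up to rounding
```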

## Examples

1. Any matrix A with eigenvalues a_1, a_2, a_3 is similar to its Jordan form, which is an upper triangular matrix:

$$\begin{pmatrix} a_1 & b_1 & c_1 \\ 0 & a_2 & c_2 \\ 0 & 0 & a_3 \end{pmatrix},$$

and since similar matrices have the same trace, the trace of A is a_1 + a_2 + a_3, the sum of its eigenvalues. Further, if λ is an eigenvalue of A with eigenvector v, then (A - kI)v = Av - kv = (λ - k)v, so the eigenvalues of A - kI are the eigenvalues of A shifted by k, namely λ_1 - k, ..., λ_n - k.
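Both claims of the topic statement can be verified together in one short numerical sketch (my own illustration: NumPy, a random matrix with a fixed seed, and an arbitrarily chosen shift k = 2.5):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))   # an arbitrary square matrix
k = 2.5

lam = np.linalg.eigvals(A)

# tr(A) equals the sum of the eigenvalues (imaginary parts cancel in conjugate pairs).
assert np.isclose(np.trace(A), lam.sum())

# (A - kI)v = Av - kv = (lambda - k)v, so A - kI has eigenvalues lambda_i - k.
shifted = np.linalg.eigvals(A - k * np.eye(4))
assert np.allclose(np.sort_complex(shifted), np.sort_complex(lam - k))
```

Note also that tr(A - kI) = tr(A) - nk, which is exactly the sum of the shifted eigenvalues, so the two results are consistent with each other.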