
Homework 2.1
1. For any tensor, show that … = …
2. Gurtin 2.6.1
3. Show that if the tensor T is invertible, then for any vector v, Tv = o automatically means that v = o.
4. Show that if the vectors a, b and c are independent and T is invertible, then the vectors Ta, Tb and Tc are also independent.
5. Show that … = … and that … = 2 …
6. Gurtin 2.8.5
7. Gurtin 2.9.1
8. Gurtin 2.9.2
9. Gurtin 2.9.4

Due January 28, 2013

Homework 2.2

11. Gurtin 2.11.1 d & e
12. Gurtin 2.11.3
13. Gurtin 2.11.4
14. Gurtin 2.11.5
15. Let Q be a rotation. For any pair of independent vectors u and v, show that Q(u × v) = (Qu) × (Qv).
16. For a proper orthogonal tensor Q, show that the eigenvalue equation always yields an eigenvalue of +1.
17. For an arbitrary unit vector e and the tensor R = cos θ 1 + (1 − cos θ) e ⊗ e + sin θ (e ×), where (e ×) is the skew tensor whose axial vector is e, show that R(1 − e ⊗ e) = cos θ (1 − e ⊗ e) + sin θ (e ×).
18. For an arbitrary unit vector e and the tensor R defined as above, show that for an arbitrary vector u, the vector v = Ru has the same magnitude as u.

Due Feb 4, 2013

Solutions 2.1
1. The easiest proof is to observe that the tensor in question is the identity tensor, so the result follows at once. Alternatively, a longer route is to obtain the components of the result on the base: recall that each component is simply the inner product of the tensor with the dual of the base, evaluated as a trace. Carrying this out for each pair of base vectors, we may express the result in component form.


2. The tensor = Similarly, = and = .


Clearly, = = = = = = + =


3. First note that if the tensor T is invertible, then for any vector v, Tv = o automatically means that v = o. The proof is easy: all we need to do is contract with the inverse of T: T⁻¹(Tv) = T⁻¹o = o, so that v = o, as required.


4. Next we can show that if the vectors a, b and c are independent and T is invertible, then the vectors Ta, Tb and Tc are also independent. We provide a reductio ad absurdum proof of this assertion: imagine that our conditions are satisfied, and yet Ta, Tb and Tc are not independent. This would mean that there are scalars α, β and γ, not all of which are zero, such that αTa + βTb + γTc = o. This means that T(αa + βb + γc) = o, where, in this case, αa + βb + γc must now necessarily vanish since T is invertible. Obviously, this shows that a, b and c are not independent. This contradicts our premise! In a similar way, we may prove that the independence of the vectors a and b implies the independence of the vectors Ta and Tb.

5. We show immediately that = 2 by writing one covariantly and the other in contravariant components: = = = = = = 2 We show that = = = = = On account of the symmetry and antisymmetry in and

6. Gurtin 2.8.5. Show that for any two vectors u and v, the inner product (u ×) : (v ×) = 2 u · v. Hence show that (u ×) : (u ×) = 2|u|².

With the components (u ×)_ij = −ε_ijk u_k and (v ×)_ij = −ε_ijl v_l,

(u ×) : (v ×) = ε_ijk u_k ε_ijl v_l = 2 δ_kl u_k v_l = 2 u · v.

The rest of the result follows by setting v = u.

7. Multiplying the two tensors and simplifying term by term, the product reduces to the identity. Since the product gives the identity, the two tensors are inverses of each other.


9. dev T ≡ T − ⅓(tr T) 1. But tr[⅓(tr T) 1] = ⅓(tr T)(tr 1) = tr T. We may therefore write that

tr(dev T) = tr T − tr[⅓(tr T) 1] = tr T − tr T = 0,

so that the deviatoric part of any tensor is traceless.



Homework 2.3
1. Gurtin 2.13.1
2. If the reciprocal relationship … = … is satisfied, what relationship is there between the tensor bases (1) … and …, and (2) … and …?
3. Gurtin 2.14.1
4. Gurtin 2.14.2
5. Gurtin 2.14.3
6. Gurtin 2.14.4
7. Gurtin 2.14.5
8. Gurtin 2.15 1-3a, 3b, 3c
9. Gurtin 2.16 1-8

Due Feb 11, 2013

Quiz
For a given tensor T and its transpose Tᵀ, write out expressions for the: 1. Symmetric part, 2. Skew part, 3. Spherical part, 4. Deviatoric part. What is the magnitude of T?

Tensor Algebra
Tensors as Linear Mappings
Jan 21, 28 to Feb 4, 2013

No  Topic                                                          From Slide
0   Home Work & Due Dates & Quiz                                   1
1   Definitions, Special Tensors                                   7
2   Scalar Functions or Invariants                                 17
3   Inner Product, Euclidean Tensors                               26
4   The Tensor Product                                             29
5   Tensor Basis & Component Representation                        40
6   The Vector Cross, Axial Vectors                                60
7   The Cofactor                                                   68
8   Orthogonal Tensors                                             88
9   Eigenvalue Problem, Spectral Decomposition & Cayley-Hamilton   100

Dates: 21-Jan, 28-Jan, 4-Feb

Second Order Tensor


A second order tensor is a linear mapping from a vector space to itself. Given the vector space V, the mapping T: V → V states that for every u ∈ V there is a v ∈ V such that v = Tu. Every other definition of a second order tensor can be derived from this simple definition. The tensor character of an object can be established by observing its action on a vector.
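A quick numerical sketch of this idea (assuming Python with NumPy, which is not part of the original slides; the matrix and vectors are arbitrary example values): a second order tensor is represented by a 3×3 component matrix, and its action on a vector is matrix–vector multiplication.

```python
import numpy as np

# A second order tensor represented by its 3x3 component matrix
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
a, b = 2.0, -3.0

# v = Tu: the tensor maps a vector to a vector
print(T @ u)

# Linearity: T(a u + b v) = a T(u) + b T(v)
print(np.allclose(T @ (a*u + b*v), a*(T @ u) + b*(T @ v)))  # True
```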

Linearity
The mapping is linear. This means that if we have two runs of the process, where we first input u and later input v, the outcomes T(u) and T(v), added together, would be the same as if we had added the inputs u and v first and supplied the sum of the vectors as input. More compactly, this means T(u + v) = T(u) + T(v).


Linearity
Linearity further means that, for any scalar α and tensor T, T(αu) = αT(u). The two properties can be combined so that, given α, β ∈ R and u, v ∈ V, T(αu + βv) = αT(u) + βT(v). Since we can think of a tensor as a process that takes an input and produces an output, two tensors are equal only if they produce the same outputs when supplied with the same input. The sum of two tensors is the tensor whose output is the sum of the outputs of the two tensors when each is given the same input.


Vector Space

In general, for α, β ∈ R, v ∈ V and S, T ∈ T, (αS + βT)v = α(Sv) + β(Tv).

With the definition above, the set of tensors constitutes a vector space under these rules of addition and multiplication by a scalar. It will become obvious later that it also constitutes a Euclidean vector space with its own rule of the inner product.

Special Tensors

Notation. It is customary to write the tensor mapping without the parentheses. Hence, we can write Tv ≡ T(v) for the mapping by the tensor T on the vector variable v, and dispense with the parentheses unless they are needed.


Zero Tensor or Annihilator

The annihilator 0 is defined as the tensor that maps every vector to the zero vector: 0v = o, for all v ∈ V.


The Identity

The identity tensor 1 is the tensor that leaves every vector unaltered: for all v ∈ V, 1v = v. Furthermore, for any α ∈ R, the tensor α1 is called a spherical tensor. The identity tensor induces the concept of the inverse of a tensor, given the fact that if T ∈ T and v ∈ V, the mapping Tv produces a vector.

The Inverse
Consider a linear mapping S that, operating on Tv, produces our original argument v, if such a mapping can be found: S(Tv) = v. As a linear mapping operating on a vector, S is clearly a tensor. It is called the inverse of T because the composition ST = 1, the identity mapping. For this reason, we write S = T⁻¹.

Inverse

It is easy to show that if ST = 1, then TS = 1 as well. HW: Show this. The set of invertible tensors is closed under composition. It is also closed under inversion. It forms a group with the identity tensor as the group's neutral element.


Transposition of Tensors
Given u, v ∈ V, the tensor Tᵀ satisfying u · (Tᵀv) = v · (Tu) is called the transpose of T. A tensor indistinguishable from its transpose is said to be symmetric.


Invariants

There are certain mappings from the space of tensors to the real space. Such mappings are called Invariants of the Tensor. Three of these, called Principal invariants play key roles in the application of tensors to continuum mechanics. We shall define them shortly. The definition given here is free of any association with a coordinate system. It is a good practice to derive any other definitions from these fundamental ones:

The Trace
If we write [a, b, c] ≡ a · (b × c), where a, b and c are arbitrary vectors, then for any second order tensor T and linearly independent a, b and c, the linear mapping I₁: T → R,

I₁(T) ≡ tr T = ([Ta, b, c] + [a, Tb, c] + [a, b, Tc]) / [a, b, c],

is independent of the choice of the basis vectors a, b and c. It is called the first principal invariant, or trace, of T.
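As a check on this coordinate-free definition, here is a minimal numerical sketch (NumPy assumed; the tensor and the independent triple a, b, c are arbitrary choices, not from the slides): the triple-product expression reproduces the familiar sum of the diagonal components.

```python
import numpy as np

def box(a, b, c):
    # Triple product [a, b, c] = a . (b x c)
    return np.dot(a, np.cross(b, c))

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# Any linearly independent triple works, not only an orthonormal one
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
c = np.array([1.0, 1.0, 3.0])

tr = (box(T @ a, b, c) + box(a, T @ b, c) + box(a, b, T @ c)) / box(a, b, c)
print(np.isclose(tr, np.trace(T)))  # True
```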

The Trace

The trace is a linear mapping: it is easily shown that for α, β ∈ R and S, T ∈ T, tr(αS + βT) = α tr(S) + β tr(T). HW: Show this by appealing to the linearity of the vector space. While the trace of a tensor is linear, the other two principal invariants are nonlinear. We now proceed to define them.

Square of the trace

The second principal invariant I₂ is related to the trace; in fact, you may come across books that define it that way. The most common definition is

I₂(T) = ½ [I₁²(T) − I₁(T²)] = ½ [(tr T)² − tr(T²)].

Independently of the trace, we can also define the second principal invariant as follows.


Second Principal Invariant


The second principal invariant I₂, using the same notation as above, is

I₂(T) = ([Ta, Tb, c] + [a, Tb, Tc] + [Ta, b, Tc]) / [a, b, c] = ½ [(tr T)² − tr(T²)],

that is, half of the square of the trace minus the trace of the square of T, which is the second principal invariant. This quantity remains unchanged for any arbitrary selection of the basis vectors a, b and c.

The Determinant

The third mapping from tensors to the real space underlying the tensor is the determinant of the tensor. While you may be familiar with that operation and can easily extract a determinant from a matrix, it is important to understand the definition for a tensor that is independent of the component expression. The latter remains relevant even when we have not expressed the tensor in terms of its components in a particular coordinate system.

The Determinant

As before, for any second order tensor T and any linearly independent vectors a, b and c, the determinant of the tensor is

det T = [Ta, Tb, Tc] / [a, b, c].

(In the special case when the basis vectors are orthonormal, the denominator is unity.)
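The same kind of numerical check works for the other two invariants (again a NumPy sketch with the same arbitrarily chosen tensor and basis as before):

```python
import numpy as np

def box(a, b, c):
    return np.dot(a, np.cross(b, c))

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
c = np.array([1.0, 1.0, 3.0])

# Second principal invariant from the triple-product definition
I2 = (box(T @ a, T @ b, c) + box(a, T @ b, T @ c) + box(T @ a, b, T @ c)) / box(a, b, c)
print(np.isclose(I2, 0.5 * (np.trace(T)**2 - np.trace(T @ T))))  # True

# Determinant from the triple-product definition
detT = box(T @ a, T @ b, T @ c) / box(a, b, c)
print(np.isclose(detT, np.linalg.det(T)))  # True
```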


Other Principal Invariants


It is good to note that there are other principal invariants that can be defined. The ones we defined here are the ones you are most likely to find in other texts. An invariant is a scalar derived from a tensor that remains unchanged in any coordinate system. Mathematically, it is a mapping from the tensor space to the real space. Or simply a scalar valued function of the tensor.


Deviatoric Tensors
When the trace of a tensor is zero, the tensor is said to be traceless. A traceless tensor is also called a deviatoric tensor. Given any tensor T, a deviatoric tensor may be created from it by the following process:

T₀ ≡ dev T = T − ⅓(tr T) 1,

so that ⅓(tr T) 1 is called the spherical part of T, and T₀ as defined here is called the deviatoric part of T. Every tensor thus admits the decomposition T = T₀ + ⅓(tr T) 1.
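A short numerical sketch of this decomposition (NumPy assumed, arbitrary example tensor):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

sph = (np.trace(T) / 3.0) * np.eye(3)   # spherical part
dev = T - sph                            # deviatoric part

print(np.isclose(np.trace(dev), 0.0))    # the deviatoric part is traceless
print(np.allclose(T, dev + sph))         # T = dev T + spherical part
```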

Inner Product of Tensors

The trace provides a simple way to define the inner product of two second-order tensors. Given S, T ∈ T, the trace tr(STᵀ) = tr(SᵀT) is a scalar, independent of the coordinate system chosen to represent the tensors. This is defined as the inner or scalar product of the tensors S and T. That is, S : T ≡ tr(STᵀ) = tr(SᵀT).

Attributes of a Euclidean Space

The inner product automatically induces the concept of the norm of a tensor. The square root of the scalar product of a tensor with itself is the norm, magnitude or length of the tensor (note: this is not the determinant!): |T| = √(tr(TTᵀ)) = √(T : T).


Distance and angles

Furthermore, the distance between two tensors, as well as the angle they contain, are now defined. The scalar distance d(S, T) between the tensors S and T is d(S, T) = |S − T| = √((S − T) : (S − T)), and the angle θ(S, T) satisfies cos θ = (S : T) / (|S| |T|).
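These definitions translate directly into a few lines of NumPy (a sketch; the tensors S and T are arbitrary examples):

```python
import numpy as np

def inner(S, T):
    # S : T = tr(S T^T)
    return np.trace(S @ T.T)

S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 2.0]])
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

normS = np.sqrt(inner(S, S))           # magnitude of S
dist = np.sqrt(inner(S - T, S - T))    # distance between S and T
angle = np.arccos(inner(S, T) / (normS * np.sqrt(inner(T, T))))

print(normS, dist, np.degrees(angle))
```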


The Tensor Product


A product mapping from two vector spaces to T is defined as the tensor product. It has the following properties: ⊗ : V × V → T, with (a ⊗ b)v = (b · v) a. It is an ordered pair of vectors. It acts on any other vector by creating a new vector in the direction of its first vector, as shown above. This product of two vectors is called a tensor product or a simple dyad.


Dyad Properties
It is very easily shown that the transposition of dyad is simply a reversal of its order. (Shown below). The tensor product is linear in its two factors. Based on the obvious fact that for any tensor and , , V , = = It is clear that = Show this neatly by operating either side on a vector Furthermore, the contraction, = A fact that can be established by operating each side on the same vector.

Transpose of a Dyad
Recall that for u, v ∈ V, the tensor Tᵀ satisfying u · (Tᵀv) = v · (Tu) is called the transpose of T. Now let T = a ⊗ b, a dyad. Then u · (Tᵀv) = v · (Tu) = v · ((a ⊗ b)u) = (b · u)(v · a) = u · ((b ⊗ a)v), so that (a ⊗ b)ᵀ = b ⊗ a, showing that the transpose of a dyad is simply a reversal of its factors.


If n is the unit normal to a given plane, show that the tensor (1 − n ⊗ n) is such that (1 − n ⊗ n)u is the projection of the vector u onto the plane in question. Consider the fact that (1 − n ⊗ n)u = u − (n · u)n. The above vector equation shows that (1 − n ⊗ n)u is what remains after we have subtracted from u its projection onto the normal. Obviously, this is the projection onto the plane itself. The tensor 1 − n ⊗ n, as we shall see later, is called a tensor projector.
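A small NumPy sketch of the projector (the normal n and the vector u below are arbitrary choices):

```python
import numpy as np

n = np.array([1.0, 2.0, 2.0])
n = n / np.linalg.norm(n)          # unit normal to the plane

P = np.eye(3) - np.outer(n, n)     # tensor projector 1 - n (x) n

u = np.array([3.0, -1.0, 4.0])
p = P @ u                          # projection of u onto the plane

print(np.isclose(np.dot(p, n), 0.0))          # the projection lies in the plane
print(np.allclose(u, p + np.dot(u, n) * n))   # u = in-plane part + normal part
```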

Substitution Operation
Consider a contravariant vector component v^j and take its product with the Kronecker delta, δ^i_j v^k, which gives us a third-order object. Let us now perform a contraction (by taking the superscript index from v and the subscript from δ) to arrive at δ^i_j v^j. Observe that the only free index remaining is the superscript i, as the other index has been contracted out (it is consequently a summation index) in the implied summation. Expanding the RHS, we find:

Substitution
δ^i_j v^j = δ^i_1 v^1 + δ^i_2 v^2 + δ^i_3 v^3

Note the following cases: if i = 1, we have δ^1_j v^j = v^1; if i = 2, we have δ^2_j v^j = v^2; if i = 3, we have δ^3_j v^j = v^3. This leads us to conclude that the contraction δ^i_j v^j = v^i, indicating that the Kronecker delta, in a contraction, merely substitutes its own free symbol for the symbol on the vector it was contracted with. The fact that the Kronecker delta does this in general earned it the alias of substitution operator.

Composition with Tensors


Consider the composition (a ⊗ b)T. Operate on the vector v and let u = Tv. On the LHS, (a ⊗ b)Tv = (a ⊗ b)u = (b · u)a = (b · Tv)a. On the RHS, (a ⊗ (Tᵀb))v = ((Tᵀb) · v)a, since the contents of both sides of the dot are vectors and the dot product of vectors is commutative. Clearly, b · (Tv) = (Tᵀb) · v follows from the definition of transposition. Hence, (a ⊗ b)T = a ⊗ (Tᵀb).


Dyad on Dyad Composition


For a, b, c, d ∈ V, we can show that the dyad composition (a ⊗ b)(c ⊗ d) = (b · c)(a ⊗ d). Again, the proof is to show that both sides produce the same result when they act on the same vector. Let v ∈ V; then the LHS acting on v yields (a ⊗ b)(c ⊗ d)v = (d · v)(a ⊗ b)c = (d · v)(b · c)a, which is obviously the result from the RHS also. This therefore makes it straightforward to contract dyads by breaking and joining, as seen above.

Trace of a Dyad

Show that the trace of the tensor product a ⊗ b is a · b. Given any three independent vectors (there is no loss of generality in letting the three independent vectors be the curvilinear basis vectors g_1, g_2 and g_3), and using the above definition of the trace, we can write:


Trace of a Dyad

tr(a ⊗ b) = ([(a ⊗ b)g_1, g_2, g_3] + [g_1, (a ⊗ b)g_2, g_3] + [g_1, g_2, (a ⊗ b)g_3]) / [g_1, g_2, g_3]
          = (b_1[a, g_2, g_3] + b_2[g_1, a, g_3] + b_3[g_1, g_2, a]) / [g_1, g_2, g_3]
          = a^1 b_1 + a^2 b_2 + a^3 b_3
          = a^i b_i = a · b


Other Invariants of a Dyad


It is easy to show that for a tensor product a ⊗ b, with a, b ∈ V, I₂(a ⊗ b) = I₃(a ⊗ b) = 0. HW: Show that this is so. We proved earlier that I₁(a ⊗ b) = a · b. Furthermore, if T ∈ T, then tr[(a ⊗ b)T] = tr[a ⊗ (Tᵀb)] = a · (Tᵀb) = (Ta) · b.


Tensor Bases & Component Representation


Given T ∈ T, for any basis vectors g_j ∈ V, j = 1, 2, 3, the images Tg_j ∈ V, j = 1, 2, 3, are vectors by the law of tensor mapping. We proceed to find the components of these images on this same basis. The covariant components, just like those of any other vector, are the scalars T_ij = g_i · (Tg_j). Specifically, these components are obtained by taking dot products with g_1, g_2 and g_3.


Tensor Components
We can dispense with the parentheses and write Tg_j for the image vector, so that

Tg_j = T_ij g^i.

The components can be found by taking the dot product of the above equation with g_i:

g_i · (Tg_j) = T_ij = tr[T(g_j ⊗ g_i)] = T : (g_i ⊗ g_j).

Tensor Components

The component T_ij is simply the result of the inner product of the tensor T with the tensor product g_i ⊗ g_j. These are the components of T on the dual of this particular product base. This is a general result and applies to all product bases. It is straightforward to prove the results in the following table:


Tensor Components

Components of T    Derivation                   Full Representation
T^ij               T^ij = T : (g^i ⊗ g^j)       T = T^ij g_i ⊗ g_j
T_ij               T_ij = T : (g_i ⊗ g_j)       T = T_ij g^i ⊗ g^j
T^i._j             T^i._j = T : (g^i ⊗ g_j)     T = T^i._j g_i ⊗ g^j
T_i.^j             T_i.^j = T : (g_i ⊗ g^j)     T = T_i.^j g^i ⊗ g_j

Identity Tensor Components

It is easily verified from the definition of the identity tensor and the inner product that (HW: verify this):

Components of 1    Derivation                    Full Representation
1^ij = g^ij        1 : (g^i ⊗ g^j) = g^ij        1 = g^ij g_i ⊗ g_j
1_ij = g_ij        1 : (g_i ⊗ g_j) = g_ij        1 = g_ij g^i ⊗ g^j
1^i._j = δ^i_j     1 : (g^i ⊗ g_j) = δ^i_j       1 = δ^i_j g_i ⊗ g^j = g_i ⊗ g^i
1_i.^j = δ_i^j     1 : (g_i ⊗ g^j) = δ_i^j       1 = δ_i^j g^i ⊗ g_j = g^i ⊗ g_i

showing that the Kronecker deltas are the components of the identity tensor in certain (not all) coordinate bases.

Kronecker and Metric Tensors


The above table shows the interesting relationship between the metric components and the Kronecker deltas: obviously, they are components of the same tensor under different basis vectors.


Component Representation
It is easy to show that the above tables of component representations are valid. For any V , and T, =
Expanding the vector in contravariant components, we have,

= =
= = = = =

Symmetry

The transpose of T = T_ij g^i ⊗ g^j is Tᵀ = T_ij g^j ⊗ g^i = T_ji g^i ⊗ g^j. If T is symmetric, then T_ij g^i ⊗ g^j = T_ji g^i ⊗ g^j. Clearly, in this case, T_ij = T_ji.

It is straightforward to establish the same for contravariant components. This result is impossible to establish for mixed tensor components:

Symmetry

For mixed tensor components, T = T^i._j g_i ⊗ g^j. The transpose is Tᵀ = T^i._j g^j ⊗ g_i = T^j._i g^i ⊗ g_j. While symmetry implies that T^i._j g_i ⊗ g^j = T^j._i g^i ⊗ g_j, we are not able to exploit the dummy variables to bring the two sides to a common product basis. Hence the symmetry is not expressible directly in terms of the mixed components.

AntiSymmetry
A tensor is antisymmetric if its transpose is its negative. In product bases that are either fully covariant or fully contravariant, antisymmetry, like symmetry, can be expressed in terms of the components. The transpose of T = T_ij g^i ⊗ g^j is Tᵀ = T_ji g^i ⊗ g^j.

If T is antisymmetric, then T_ij g^i ⊗ g^j = −T_ji g^i ⊗ g^j. Clearly, in this case, T_ij = −T_ji.

It is straightforward to establish the same for contravariant components. Antisymmetric tensors are also said to be skew-symmetric.

Symmetric & Skew Parts of Tensors


For any tensor T, define the symmetric and skew parts sym T and skw T. It is easy to show the following: T = sym T + skw T, skw(sym T) = sym(skw T) = 0, and sym S : skw T = 0. We can also write that

sym T = ½(T + Tᵀ)  and  skw T = ½(T − Tᵀ).

Composition
Composition of tensors in component form follows the rule of the composition of dyads. = , = = = = = . . = . .

Addition
Addition of two tensors of the same order is the addition of their components, provided they are referred to the same product basis.


Component Addition

Components of S + T

(S + T)^ij   = S^ij + T^ij
(S + T)_ij   = S_ij + T_ij
(S + T)^i._j = S^i._j + T^i._j
(S + T)_i.^j = S_i.^j + T_i.^j

Component Representation of Invariants


Invoking the definition of the three principal invariants, we now find expressions for these in terms of the components of tensors in various product bases. First note that for = , the triple product, 1 , 2 , 3 = 1 , 2 , 3 = 1 , 2 , 3 = 1 (231 1 ) = 1 1 231

Recall that =

The Trace

The trace of the tensor:

tr T = ([Tg_1, g_2, g_3] + [g_1, Tg_2, g_3] + [g_1, g_2, Tg_3]) / [g_1, g_2, g_3] = T^1._1 + T^2._2 + T^3._3 = T^i._i = T^ij g_ij = T_ij g^ij


Second Invariant
{Proof not very illuminating, Ignorable] , , = , , , ,
= =

Changing the roles of dummy variables, we can write, , , + , , + , , = + + = + + 1 = 2



Second Invariant
The last equality can be verified in the following way. Contracting the coefficient + + with + +
= +
+

= + +

= 3


Similarly, contracting with , we have, = 6. Hence + + =


3

I₂ = ½ [(tr T)² − tr(T²)], which is half the difference between the square of the trace and the trace of the square of the tensor T.

Determinant

The invariant [Ta, Tb, Tc] = det(T) [a, b, c]. Taking the basis vectors for a, b and c and writing T in mixed components,

det T = e_ijk T^i._1 T^j._2 T^k._3,

where e_ijk is the permutation symbol.


The Vector Cross


Given a vector u, the tensor (u ×) is called a vector cross. The following relation is easily established between the vector cross and its associated vector: for all v ∈ V, (u ×)v = u × v. The vector cross is traceless and antisymmetric (HW: show this). Traceless tensors are also called deviatoric or deviator tensors.

Axial Vector
For any antisymmetric tensor W, the vector w ∈ V such that Wv = w × v for every v, which can always be found, is called the axial vector of the skew tensor. It can be proved that w_k = −½ ε_ijk W_ij. (HW: Prove it by contracting both sides of W_ij = −ε_ijk w_k with the alternating tensor, noting that ε_ijk ε_ijl = 2 δ_kl.)
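A NumPy sketch of the vector cross and the recovery of its axial vector (the sign convention follows W_ij = −ε_ijk w_k used above; w and v are arbitrary choices):

```python
import numpy as np

def vector_cross(w):
    # Skew tensor (w x) such that (w x) v = w x v
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
W = vector_cross(w)

print(np.allclose(W @ v, np.cross(w, v)))   # (w x) v = w x v
print(np.isclose(np.trace(W), 0.0))         # traceless
print(np.allclose(W, -W.T))                 # antisymmetric

# Axial vector recovered from the skew tensor: w_k = -1/2 eps_ijk W_ij
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
print(np.allclose(-0.5 * np.einsum('ijk,ij->k', eps, W), w))  # True
```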

Examples
Gurtin 2.8.5. Show that for any two vectors u and v, the inner product (u ×) : (v ×) = 2 u · v. Hence show that (u ×) : (u ×) = 2|u|².

With the components (u ×)_ij = −ε_ijk u_k and (v ×)_ij = −ε_ijl v_l,

(u ×) : (v ×) = ε_ijk u_k ε_ijl v_l = 2 δ_kl u_k v_l = 2 u · v.

The rest of the result follows by setting v = u. HW: Redo this proof using the contravariant alternating tensor components ε^ijk and ε_ijk.

For vectors , and , show that = . The tensor = Similarly, = and = . Clearly, = = = = = = + =


Index Raising & Lowering

The quantities g_ij = g_i · g_j and g^ij = g^i · g^j turn out to be fundamentally important to any space that either of these two sets of basis vectors can span. They are called the covariant and contravariant metric components. They are the quantities that metrize the space, in the sense that any measurement of lengths, angles, areas, etc. depends on them.

Index Raising & Lowering


Now we start with the fact that the contravariant and covariant components of a vector v are v^i = v · g^i and v_i = v · g_i respectively. We can express the vector with respect to the reciprocal basis as v = v_j g^j. Consequently, v^i = v · g^i = v_j g^j · g^i = g^ij v_j. The effect of contracting with g^ij is to raise and substitute the index.

Index Raising & Lowering

With similar arguments, it is easily demonstrated that v_i = g_ij v^j, so that g_ij, in a contraction, lowers and substitutes the index. This rule is a general one: these two sets of components are able to raise or lower indices in tensors of higher order as well. They are called the index-raising and index-lowering operators.


Associated Tensors
Vector components such as v^i and v_i, related through the index-raising and index-lowering metric components as on the previous slide, are called associated vectors. For higher-order quantities, they are associated tensors. Note that associated tensors, so called, are merely components of the same tensor in different bases.

Cofactor Definition
We will define the cofactor of a tensor T as cof T ≡ Tᶜ ≡ det(T) T⁻ᵀ and proceed to show that, for any pair of independent vectors u and v, the cofactor satisfies (Tu) × (Tv) = Tᶜ(u × v). We will further find an invariant component representation for the cofactor tensor. Lastly, in this section, we will find an important relationship between the trace of the cofactor and the second invariant of the tensor itself: tr(Tᶜ) = I₂(T).

Transformed Basis
First note that if T is invertible, the independence of the vectors u and v implies the independence of the vectors Tu and Tv. Consequently we can define the non-vanishing vector (Tu) × (Tv) ≠ o. It follows that (Tu) × (Tv) must lie on the line perpendicular to both Tu and Tv. Therefore, [(Tu) × (Tv)] · Tu = [(Tu) × (Tv)] · Tv = 0. We can also take a transpose and write u · Tᵀ[(Tu) × (Tv)] = v · Tᵀ[(Tu) × (Tv)] = 0, showing that the vector Tᵀ[(Tu) × (Tv)] is perpendicular to both u and v. It follows that there is an α ∈ R such that Tᵀ[(Tu) × (Tv)] = α (u × v).

Cofactor Transformation
Therefore, Tᵀ[(Tu) × (Tv)] = α (u × v). Let w = u × v, so that u, v and w are linearly independent; then we can take a scalar product of the above equation with w and obtain Tᵀ[(Tu) × (Tv)] · w = α (u × v) · w. The LHS is also [(Tu) × (Tv)] · Tw = [Tu, Tv, Tw]. In the equation [Tu, Tv, Tw] = α [u, v, w], it is clear that α = det T. We therefore have that (Tu) × (Tv) = det(T) T⁻ᵀ(u × v).

Cofactor Tensor
We therefore have that (Tu) × (Tv) = det(T) T⁻ᵀ(u × v). This quantity, det(T) T⁻ᵀ, is the cofactor of T. If we write cof T ≡ Tᶜ ≡ det(T) T⁻ᵀ, we can see that the cofactor satisfies (Tu) × (Tv) = Tᶜ(u × v). We now proceed to express the cofactor in its general components.

Cofactor Components
The scalar in brackets is the component of u × v, namely ε_lmn u^m v^n. Inserting this above, we therefore have, in invariant component form,

(Tᶜ)^kl = ½ ε^kij ε^lmn T_im T_jn.

Trace of the Cofactor


For any invertible tensor, show that the trace of the cofactor is the second principal invariant of the original tensor: tr(Tᶜ) = I₂(T). Working in orthonormal components,

tr(Tᶜ) = ½ ε_kij ε_kmn T_im T_jn = ½ (δ_im δ_jn − δ_in δ_jm) T_im T_jn = ½ [(tr T)² − tr(T²)] = I₂(T).
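A sketch verifying the two cofactor properties numerically (NumPy assumed; T, u and v are arbitrary examples):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
u = np.array([1.0, -1.0, 2.0])
v = np.array([0.0, 2.0, 1.0])

cofT = np.linalg.det(T) * np.linalg.inv(T).T   # cof T = det(T) T^{-T}

# (Tu) x (Tv) = cof(T) (u x v)
print(np.allclose(np.cross(T @ u, T @ v), cofT @ np.cross(u, v)))  # True

# tr(cof T) = I2(T) = 1/2 [ (tr T)^2 - tr(T^2) ]
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
print(np.isclose(np.trace(cofT), I2))  # True
```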

Determinants
Show that the determinant of a product is the product of the determinants: det(ST) = det(S) det(T).

Using the triple-product definition of the determinant,

det(ST) = [STa, STb, STc] / [a, b, c] = ([S(Ta), S(Tb), S(Tc)] / [Ta, Tb, Tc]) ([Ta, Tb, Tc] / [a, b, c]) = det(S) det(T).

If S is the inverse of T, then ST becomes the identity tensor, whose determinant is one. Hence the above also proves that the determinant of an inverse is the inverse of the determinant.

For any invertible tensor T we show that det(Tᶜ) = (det T)². The cofactor of the tensor is Tᶜ = det(T) T⁻ᵀ; let the scalar α = det T. We can see clearly that Tᶜ = α T⁻ᵀ. Taking the determinant of this equation, we have det(Tᶜ) = α³ det(T⁻ᵀ) = α³ det(T⁻¹), as the transpose operation has no effect on the value of a determinant. Noting that the determinant of an inverse is the inverse of the determinant, we have det(Tᶜ) = α³ / det T = α² = (det T)².

Cofactor

Show that (αT)ᶜ = α² Tᶜ. Ans: (αT)ᶜ = det(αT)(αT)⁻ᵀ = α³ det(T) α⁻¹ T⁻ᵀ = α² det(T) T⁻ᵀ = α² Tᶜ.

Show that (T⁻¹)ᶜ = det(T⁻¹) Tᵀ. Ans: (T⁻¹)ᶜ = det(T⁻¹)(T⁻¹)⁻ᵀ = det(T⁻¹) Tᵀ.


(d) Show that (Tᶜ)⁻¹ = det(T⁻¹) Tᵀ.
Ans: Tᶜ = det(T) T⁻ᵀ, so that (Tᶜ)⁻¹ = [det(T) T⁻ᵀ]⁻¹ = (det T)⁻¹ Tᵀ = det(T⁻¹) Tᵀ.

(e) Show that (Tᶜ)ᶜ = det(T) T.
Ans: (Tᶜ)ᶜ = det(Tᶜ)(Tᶜ)⁻ᵀ. So that, using det(Tᶜ) = (det T)² and (Tᶜ)⁻¹ = (det T)⁻¹ Tᵀ, we obtain

(Tᶜ)ᶜ = (det T)² (det T)⁻¹ T = det(T) T

as required.

3. Show that for any invertible tensor and any vector , = C where C and are the cofactor and inverse of respectively. By definition, C = det T We are to prove that, = C = det T or that, T = det = C On the RHS, the contravariant component of is

which is exactly the same as writing, = in the invariant form.


Similarly, C

1 2

= 2 so that its transpose C . . We may therefore write,

T C

1 = 2 1 = 2 1 = 2 1 = 2 1 = 2 1 = = 2 =


We now turn to the LHS; = =


. Now, = . so that its transpose, T = = so that

T =

=
=

= = C


Show that C = T The LHS in component invariant form can be written as: C = C
= so that 2 1 C C = = 2 Consequently, 1 C = 2 1 = 2 1 = = 2 On the RHS, T = . We can therefore write, T = =

where


Which on a closer look is exactly the same as the LHS so that, C = T as required.

4. Let W be skew with axial vector w. Given vectors u and v, show that (Wu) × (Wv) = [w · (u × v)] w and hence conclude that Wᶜ = w ⊗ w.

(Wu) × (Wv) = (w × u) × (w × v) = [w · (u × v)] w = (w ⊗ w)(u × v). But by definition, the cofactor must satisfy (Wu) × (Wv) = Wᶜ(u × v), which, compared with the previous equation, yields the desired result that Wᶜ = w ⊗ w.

5. Show that the cofactor of a tensor can be written as

Tᶜ = [T² − I₁(T) T + I₂(T) 1]ᵀ,

even if T is not invertible. I₁ and I₂ are the first two invariants of T.

Ans: The above equation can be written more explicitly as

Tᶜ = [T² − (tr T) T + ½((tr T)² − tr(T²)) 1]ᵀ.

In invariant component form, this is easily seen to agree with the expression (Tᶜ)^kl = ½ ε^kij ε^lmn T_im T_jn obtained earlier.



But we know that the cofactor can be obtained directly from the component equation (Tᶜ)^kl = ½ ε^kij ε^lmn T_im T_jn; expanding the product of the two alternating tensors in terms of Kronecker deltas and collecting terms reproduces [T² − I₁T + I₂1]ᵀ, as required.


Using the above, show that the cofactor of a vector cross (u ×) is (u ×)ᶜ = u ⊗ u.

(u ×)² = u ⊗ u − (u · u) 1, so tr[(u ×)²] = (u · u) − 3(u · u) = −2(u · u), while tr(u ×) = 0. But from the previous result,

(u ×)ᶜ = [(u ×)² − I₁(u ×)(u ×) + I₂(u ×) 1]ᵀ, with I₁(u ×) = tr(u ×) = 0 and I₂(u ×) = ½[(tr(u ×))² − tr((u ×)²)] = ½[0 + 2(u · u)] = u · u.

Therefore (u ×)ᶜ = [u ⊗ u − (u · u) 1 + 0 + (u · u) 1]ᵀ = (u ⊗ u)ᵀ = u ⊗ u.

Show that C = In component form,

=
So that
2

= = = = =
C 2

Clearly, tr and tr
=
2

= , tr 2
2

=
1 2

tr +

tr 2 tr 1 + 2

T 2

=

Orthogonal Tensors
Given a Euclidean vector space E, a tensor Q is said to be orthogonal if, for all u, v ∈ E, (Qu) · (Qv) = u · v. Specifically, we can allow v = u, so that (Qu) · (Qu) = u · u, or |Qu| = |u|, in which case the mapping leaves the magnitude unaltered.

Orthogonal Tensors

Let u · v = (Qu) · (Qv). By definition of the transpose, we have that (Qu) · (Qv) = u · (QᵀQv), so that u · v = u · (QᵀQv) for all u and v. Clearly, QᵀQ = 1. A condition necessary and sufficient for a tensor to be orthogonal is that it be invertible and its inverse equal its transpose.

Orthogonal

Upon noting that the determinant of a product is the product of the determinants and that transposition does not alter a determinant, it is easy to conclude that det(QᵀQ) = det(Qᵀ) det(Q) = (det Q)² = det 1 = 1, which clearly shows that det Q = ±1. When the determinant of an orthogonal tensor is strictly positive (+1), it is called proper orthogonal.
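A quick numerical illustration (NumPy sketch; the tensor below is an arbitrary example, a rotation about the third axis):

```python
import numpy as np

t = 0.7  # some angle
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

u = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, 0.0, 2.0])

print(np.allclose(Q.T @ Q, np.eye(3)))                       # Q^T Q = 1
print(np.isclose(np.linalg.det(Q), 1.0))                     # proper orthogonal
print(np.isclose(np.dot(Q @ u, Q @ v), np.dot(u, v)))        # inner products preserved
print(np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u)))  # magnitudes preserved
```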

Rotation & Reflection

A rotation is a proper orthogonal tensor while a reflection is not.


Rotation
Let Q be a rotation. For any pair of vectors u, v, show that Q(u × v) = (Qu) × (Qv). This question is the same as showing that the cofactor of Q is Q itself, that is, that a rotation is its own cofactor. We can write that Qᶜ(u × v) = (Qu) × (Qv), where Qᶜ = cof Q = det(Q) Q⁻ᵀ. Now that Q is a rotation, det Q = 1, and Q⁻ᵀ = (Q⁻¹)ᵀ = (Qᵀ)ᵀ = Q. This implies that Qᶜ = Q, and consequently Q(u × v) = (Qu) × (Qv).

For a proper orthogonal tensor Q, show that the eigenvalue equation always yields an eigenvalue of +1. This means that there is always a solution to the equation Qu = u. For any invertible tensor, Qᶜ = det(Q) Q⁻ᵀ. For a proper orthogonal tensor Q, det Q = 1. It therefore follows that Qᶜ = det(Q) Q⁻ᵀ = Q⁻ᵀ = Q. It is easily shown that tr(Qᶜ) = I₂(Q) (HW: show this; Romano 26), so that I₂(Q) = tr Q = I₁(Q). The characteristic equation for Q is det(Q − λ1) = 0, that is, λ³ − I₁λ² + I₂λ − I₃ = 0, or λ³ − I₁λ² + I₁λ − 1 = 0, which is obviously satisfied by λ = 1.

If, for an arbitrary unit vector e, the tensor R = cos θ 1 + (1 − cos θ) e ⊗ e + sin θ (e ×), where (e ×) is the skew tensor whose axial vector is e, show that R(1 − e ⊗ e) = cos θ (1 − e ⊗ e) + sin θ (e ×).

R(e ⊗ e) = cos θ (e ⊗ e) + (1 − cos θ)(e ⊗ e)(e ⊗ e) + sin θ (e ×)(e ⊗ e). The last term vanishes immediately on account of the fact that (e ×)(e ⊗ e) = ((e ×)e) ⊗ e = (e × e) ⊗ e = 0. We therefore have R(e ⊗ e) = cos θ (e ⊗ e) + (1 − cos θ)(e ⊗ e) = e ⊗ e, which in turn means that R(1 − e ⊗ e) = R − R(e ⊗ e) = R − e ⊗ e, so that

R(1 − e ⊗ e) = cos θ 1 + (1 − cos θ)(e ⊗ e) + sin θ (e ×) − e ⊗ e = cos θ (1 − e ⊗ e) + sin θ (e ×)

as required.


If, for an arbitrary unit vector e, the tensor R = cos θ 1 + (1 − cos θ) e ⊗ e + sin θ (e ×), where (e ×) is the skew tensor whose axial vector is e, show for an arbitrary vector u that v = Ru has the same magnitude as u.

Given an arbitrary vector u, compute the vector v = Ru. Clearly,

v = cos θ u + (1 − cos θ)(e · u) e + sin θ (e × u).

The square of the magnitude of v is

v · v = cos²θ (u · u) + (1 − cos θ)²(e · u)² + sin²θ |e × u|² + 2 cos θ (1 − cos θ)(e · u)²
      = cos²θ (u · u) + (1 − cos θ)(1 − cos θ + 2 cos θ)(e · u)² + sin²θ [(u · u) − (e · u)²]
      = cos²θ (u · u) + (1 − cos²θ)(e · u)² + sin²θ (u · u) − sin²θ (e · u)²
      = (cos²θ + sin²θ)(u · u) = u · u.

The operation of the tensor on e itself is independent of θ and does not change its magnitude: Re = e. Furthermore, it is also easy to show that the projection of an arbitrary vector onto e, as well as that of its image, are the same. The axis of rotation is therefore in the direction of e.
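The rotation tensor just discussed can be built and checked directly; here is a NumPy sketch (e, θ and u are arbitrary choices, not from the slides):

```python
import numpy as np

def rodrigues(e, theta):
    # R = cos(theta) 1 + (1 - cos(theta)) e (x) e + sin(theta) (e x)
    E = np.array([[0.0, -e[2], e[1]],
                  [e[2], 0.0, -e[0]],
                  [-e[1], e[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(e, e)
            + np.sin(theta) * E)

e = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
R = rodrigues(e, 0.9)
u = np.array([2.0, -1.0, 0.5])

print(np.isclose(np.linalg.norm(R @ u), np.linalg.norm(u)))  # |Ru| = |u|
print(np.allclose(R @ e, e))                                  # the axis e is left unchanged
print(np.allclose(R.T @ R, np.eye(3)))                        # R is orthogonal
```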

If, for an arbitrary unit vector e, the tensor R(θ) = cos θ 1 + (1 − cos θ) e ⊗ e + sin θ (e ×), where (e ×) is the skew tensor whose axial vector is e, show for arbitrary α, β ∈ [0, 2π) that R(α)R(β) = R(α + β).

It is convenient to multiply the two tensors term by term, using (e ⊗ e)(e ⊗ e) = e ⊗ e, (e ⊗ e)(e ×) = (e ×)(e ⊗ e) = 0 and (e ×)(e ×) = e ⊗ e − 1. Collecting terms,

R(α)R(β) = (cos α cos β − sin α sin β) 1 + [1 − (cos α cos β − sin α sin β)] e ⊗ e + (sin α cos β + cos α sin β)(e ×)
         = cos(α + β) 1 + [1 − cos(α + β)] e ⊗ e + sin(α + β)(e ×) = R(α + β).

Use the results of 52 and 55 above to show that the tensor R(θ) = cos θ 1 + (1 − cos θ) e ⊗ e + sin θ (e ×) is periodic with a period of 2π. From 55 we can write that R(θ + 2π) = R(θ)R(2π). But from 52, R(0) = R(2π) = 1. We therefore have that R(θ + 2π) = R(θ)R(2π) = R(θ), which completes the proof. The above results show that R(θ) is a rotation about the unit vector e through an angle θ.


Define Lin⁺ as the set of all tensors with a positive determinant. Show that Lin⁺ is invariant under the proper orthogonal group G of all rotations, in the sense that for any tensor A ∈ Lin⁺ and any Q ∈ G, QAQᵀ ∈ Lin⁺. (G285)

Since we are given that A ∈ Lin⁺, the determinant of A is positive. Consider det(QAQᵀ). We observe that the determinant of a product of tensors is the product of their determinants (proved above). We see clearly that det(QAQᵀ) = det(Q) det(A) det(Qᵀ). Since Q is a rotation, det Q = det Qᵀ = 1. Consequently, det(QAQᵀ) = 1 · det(A) · 1 = det A. Hence the determinant of QAQᵀ is also positive, and therefore QAQᵀ ∈ Lin⁺.

Define Sym as the set of all symmetric tensors. Show that Sym is invariant under the proper orthogonal group G of all rotations, in the sense that for any tensor A ∈ Sym and every Q ∈ G, QAQᵀ ∈ Sym. (G285)

Since we are given that A ∈ Sym, we inspect the tensor QAQᵀ. Its transpose is (QAQᵀ)ᵀ = (Qᵀ)ᵀAᵀQᵀ = QAQᵀ, so that QAQᵀ is symmetric and therefore QAQᵀ ∈ Sym; the set is invariant under the transformation.


Central to the usefulness of tensors in Continuum Mechanics is the Eigenvalue Problem and its consequences. These issues lead to the mathematical representation of such physical properties as Principal stresses, Principal strains, Principal stretches, Principal planes, Natural frequencies, Normal modes, Characteristic values, resonance, equivalent stresses, theories of yielding, failure analyses, Von Mises stresses, etc. As we can see, these seeming unrelated issues are all centered around the eigenvalue problem of tensors. Symmetry groups, and many other constructs that simplify analyses cannot be understood outside a thorough understanding of the eigenvalue problem. At this stage of our study of Tensor Algebra, we shall go through a simplified study of the eigenvalue problem. This study will reward any diligent effort. The converse is also true. A superficial understanding of the Eigenvalue problem will cost you dearly.

The Eigenvalue Problem


Recall that a tensor is a linear transformation on V: T: V → V states that for every u ∈ V there is a v ∈ V such that v = Tu. Generally, u and its image Tu are independent vectors for an arbitrary tensor T. The eigenvalue problem considers the special case when there is a linear dependence between u and Tu.

Eigenvalue Problem
Here the image Tu = λu, where λ ∈ R. The vector u, if it can be found, that satisfies the above equation is called an eigenvector, while the scalar λ is its corresponding eigenvalue. The eigenvalue problem examines the existence of the eigenvalues and the corresponding eigenvectors, as well as their consequences.

In order to obtain such solutions, it is useful to write out this equation in component form: T^i_j u^j = λ u^i, so that (T^i_j − λ δ^i_j) u^j = 0, the zero vector. Each component must vanish identically, so that we can write the three homogeneous equations (T^i_j − λ δ^i_j) u^j = 0.


From the fundamental law of algebra, the above homogeneous equations can have a non-trivial solution u only if the determinant of the coefficients vanishes identically, which, when written out in full, yields

| T^1_1 − λ    T^1_2        T^1_3     |
| T^2_1        T^2_2 − λ    T^2_3     |  = 0.
| T^3_1        T^3_2        T^3_3 − λ |

Expanding the determinant and collecting powers of λ, the coefficient of λ² is the trace, the coefficient of λ is the second invariant, and the constant term is the determinant of T, so that the equation reduces to

λ³ − I₁λ² + I₂λ − I₃ = 0.

Principal Invariants Again


This is the characteristic equation for the tensor T. From here we are able, in the best cases, to find the three eigenvalues. Each of these can then be used to obtain the corresponding eigenvector. The above coefficients are the same invariants we have seen earlier!


Positive Definite Tensors


A tensor T is positive definite if, for all v ∈ V with v ≠ o, v · Tv > 0. It is easy to show that the eigenvalues of a symmetric, positive definite tensor are all greater than zero. (HW: Show this, and its converse, that if the eigenvalues of a symmetric tensor are all greater than zero, the tensor is positive definite. Hint: use the spectral decomposition.)


Cayley-Hamilton Theorem

We now state without proof (see Dill for a proof) the important Cayley-Hamilton theorem: every tensor satisfies its own characteristic equation. That is, the characteristic equation not only applies to the eigenvalues but must be satisfied by the tensor itself. This means that

T³ − I₁T² + I₂T − I₃1 = 0

is also valid. This fact is used in continuum mechanics to obtain the spectral decomposition of important material and spatial tensors.
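A direct numerical verification of the theorem (NumPy sketch, arbitrary example tensor):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# T^3 - I1 T^2 + I2 T - I3 1 = 0
lhs = T @ T @ T - I1 * (T @ T) + I2 * T - I3 * np.eye(3)
print(np.allclose(lhs, np.zeros((3, 3))))  # True

# The eigenvalues satisfy the same characteristic equation
lam = np.linalg.eigvals(T)
print(np.allclose(lam**3 - I1 * lam**2 + I2 * lam - I3, 0.0))  # True
```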

Spectral Decomposition
It is easy to show that when the tensor is symmetric, its three eigenvalues are all real. When they are distinct, the corresponding eigenvectors are orthogonal. It is therefore possible to create a basis for the tensor with an orthonormal system based on the normalized eigenvectors. This leads to what is called the spectral decomposition of a symmetric tensor in terms of a coordinate system formed by its eigenvectors:

T = Σ (i = 1 to 3) λ_i n_i ⊗ n_i,

where n_i is the normalized eigenvector corresponding to the eigenvalue λ_i.
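For a symmetric tensor this decomposition can be checked numerically (NumPy sketch; numpy.linalg.eigh returns orthonormal eigenvectors as columns):

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # a symmetric tensor

lam, N = np.linalg.eigh(S)        # eigenvalues and orthonormal eigenvectors (columns)

# Rebuild S from its spectral representation: sum_i lam_i n_i (x) n_i
S_rebuilt = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
print(np.allclose(S, S_rebuilt))  # True
```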



Multiplicity of Roots
The above spectral decomposition is a special case where the eigenbasis forms an Orthonormal Basis. Clearly, all symmetric tensors are diagonalizable. Multiplicity of roots, when it occurs robs this representation of its uniqueness because two or more coefficients of the eigenbasis are now the same. The uniqueness is recoverable by the ingenious device of eigenprojection.


Eigenprojectors

Case 1: All roots equal. The three orthonormal eigenvectors of an ONB obviously constitute an identity tensor: Σ (i = 1 to 3) n_i ⊗ n_i = 1. The unique spectral representation therefore becomes

T = Σ (i = 1 to 3) λ_i n_i ⊗ n_i = λ Σ (i = 1 to 3) n_i ⊗ n_i = λ1,

since λ_1 = λ_2 = λ_3 = λ in this case.



Eigenprojectors

Case 2: Two roots equal: λ_1 unique while λ_2 = λ_3. In this case,

T = λ_1 n_1 ⊗ n_1 + λ_2 (1 − n_1 ⊗ n_1),

since λ_2 = λ_3 in this case. The eigenspace of the tensor is made up of the projectors P_1 = n_1 ⊗ n_1 and P_2 = 1 − n_1 ⊗ n_1.

Eigenprojectors

The eigenprojectors in all cases are based on the normalized eigenvectors of the tensor. They constitute the eigenspace even in the case of repeated roots. They can easily be shown to be:

1. Idempotent: P_i P_i = P_i (no sum)
2. Orthogonal: P_i P_j = 0 for i ≠ j (the annihilator)
3. Complete: Σ_i P_i = 1 (the identity)


Tensor Functions
For symmetric tensors (with real eigenvalues and, consequently, a defined spectral form in all cases), the tensor equivalent of real functions can easily be defined. Transcendental as well as other functions of tensors are defined by the following map: f: Sym → Sym maps a symmetric tensor into a symmetric tensor. The latter takes the spectral form

f(T) = Σ (i = 1 to 3) f(λ_i) n_i ⊗ n_i.

Tensor functions
Here f(λ_i) is the relevant real function of the i-th eigenvalue of the tensor T. Whenever the tensor is symmetric, any map f: R → R induces a map f: Sym → Sym as defined above. The tensor function is defined uniquely through its spectral representation.
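A sketch of a tensor function defined this way (NumPy assumed), using the square root as the real function f applied to a symmetric positive-definite tensor:

```python
import numpy as np

def tensor_function(S, f):
    # f(S) = sum_i f(lam_i) n_i (x) n_i for a symmetric tensor S
    lam, N = np.linalg.eigh(S)
    return sum(f(lam[i]) * np.outer(N[:, i], N[:, i]) for i in range(3))

C = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive definite

U = tensor_function(C, np.sqrt)   # tensor square root
print(np.allclose(U @ U, C))      # U^2 = C
print(np.allclose(U, U.T))        # U is symmetric
```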


Show that the principal invariants of a tensor satisfy I_i(Tᵀ) = I_i(T), i = 1, 2, 3. (Rotations and orthogonal transformations do not change the invariants.)

I₁(Tᵀ) = tr(Tᵀ) = tr(T) = I₁(T).
I₂(Tᵀ) = ½ [(tr Tᵀ)² − tr(TᵀTᵀ)] = ½ [I₁²(T) − tr((T²)ᵀ)] = ½ [I₁²(T) − tr(T²)] = I₂(T).
I₃(Tᵀ) = det(Tᵀ) = det(T) = I₃(T).

Hence I_i(Tᵀ) = I_i(T), i = 1, 2, 3.

Show that, for any tensor T, tr(T²) = I₁² − 2I₂ and tr(T³) = I₁³ − 3I₁I₂ + 3I₃.

I₂ = ½ [(tr T)² − tr(T²)] = ½ [I₁² − tr(T²)], so that

tr(T²) = I₁² − 2I₂.

By the Cayley-Hamilton theorem, T³ − I₁T² + I₂T − I₃1 = 0. Taking the trace of this equation, we can write tr(T³) − I₁tr(T²) + I₂tr(T) − 3I₃ = 0, so that

tr(T³) = I₁tr(T²) − I₂tr(T) + 3I₃ = I₁(I₁² − 2I₂) − I₂I₁ + 3I₃ = I₁³ − 3I₁I₂ + 3I₃,

as required.

Suppose that C and U are symmetric, positive-definite tensors with C = U². Write the invariants of C in terms of those of U.

I₁(C) = tr(U²) = I₁²(U) − 2I₂(U).

By the Cayley-Hamilton theorem, U³ − I₁U² + I₂U − I₃1 = 0, which, contracted with U, gives U⁴ − I₁U³ + I₂U² − I₃U = 0, so that U⁴ = I₁U³ − I₂U² + I₃U and

tr(U⁴) = I₁tr(U³) − I₂tr(U²) + I₃tr(U) = I₁(I₁³ − 3I₁I₂ + 3I₃) − I₂(I₁² − 2I₂) + I₃I₁ = I₁⁴ − 4I₁²I₂ + 4I₁I₃ + 2I₂².

But

I₂(C) = ½ [(tr C)² − tr(C²)] = ½ [(tr U²)² − tr(U⁴)] = ½ [(I₁² − 2I₂)² − (I₁⁴ − 4I₁²I₂ + 4I₁I₃ + 2I₂²)].

The terms in I₁⁴ and I₁²I₂ cancel out, so that I₂(C) = I₂²(U) − 2I₁(U)I₃(U), as required. Finally,

I₃(C) = det C = det(U²) = (det U)² = I₃²(U).

