
Introductory Tensor Analysis

"Mathematics, rightly viewed, possesses not only truth, but supreme beauty - a beauty cold and austere, like that of sculpture." - Bertrand Russell

Dyadic Algebra

Consider two vectors, $\mathbf{a}$ and $\mathbf{b}$. As we saw in chapter 3 we can write them as follows:

$$\mathbf{a} = a_1\mathbf{u}_x + a_2\mathbf{u}_y + a_3\mathbf{u}_z$$
$$\mathbf{b} = b_1\mathbf{u}_x + b_2\mathbf{u}_y + b_3\mathbf{u}_z$$

where each vector has three components in our Cartesian space. If we multiply them in the 'normal' distributive fashion:

$$\mathbf{a}\,\mathbf{b} = (a_1\mathbf{u}_x + a_2\mathbf{u}_y + a_3\mathbf{u}_z)(b_1\mathbf{u}_x + b_2\mathbf{u}_y + b_3\mathbf{u}_z)$$
$$= a_1 b_1\,\mathbf{u}_x\mathbf{u}_x + a_1 b_2\,\mathbf{u}_x\mathbf{u}_y + a_1 b_3\,\mathbf{u}_x\mathbf{u}_z + a_2 b_1\,\mathbf{u}_y\mathbf{u}_x + a_2 b_2\,\mathbf{u}_y\mathbf{u}_y + a_2 b_3\,\mathbf{u}_y\mathbf{u}_z + a_3 b_1\,\mathbf{u}_z\mathbf{u}_x + a_3 b_2\,\mathbf{u}_z\mathbf{u}_y + a_3 b_3\,\mathbf{u}_z\mathbf{u}_z$$

This is the direct product of $\mathbf{a}$ and $\mathbf{b}$, referred to briefly in chapter 2, and the resulting object is called a dyad. Note that, in our Cartesian space, there are now nine scalar coefficients, $a_i b_j$, that is, 3×3 from the vectors $\mathbf{a}$ and $\mathbf{b}$. One represents this compactly as¹:

$$\underline{D} = \mathbf{a}\,\mathbf{b}$$

Just as $\mathbf{u}_x$ is termed a unit vector, $\mathbf{u}_x\mathbf{u}_x$ is a unit dyad. Is this product commutative, you ask? Let's see:

1 Do not confuse this notation with the outer product notation of chapter 3. Recall that the outer product of vectors a and b as we have defined it is the product of a and the transpose of b. Note also that we use an underscore here to represent dyadic (and higher) products.

$$\mathbf{b}\,\mathbf{a} = (b_1\mathbf{u}_x + b_2\mathbf{u}_y + b_3\mathbf{u}_z)(a_1\mathbf{u}_x + a_2\mathbf{u}_y + a_3\mathbf{u}_z)$$
$$= b_1 a_1\,\mathbf{u}_x\mathbf{u}_x + b_1 a_2\,\mathbf{u}_x\mathbf{u}_y + b_1 a_3\,\mathbf{u}_x\mathbf{u}_z + b_2 a_1\,\mathbf{u}_y\mathbf{u}_x + b_2 a_2\,\mathbf{u}_y\mathbf{u}_y + b_2 a_3\,\mathbf{u}_y\mathbf{u}_z + b_3 a_1\,\mathbf{u}_z\mathbf{u}_x + b_3 a_2\,\mathbf{u}_z\mathbf{u}_y + b_3 a_3\,\mathbf{u}_z\mathbf{u}_z$$

Now we subtract the dyadic products:

$$\mathbf{a}\,\mathbf{b} - \mathbf{b}\,\mathbf{a} = (a_1 b_1 - b_1 a_1)\,\mathbf{u}_x\mathbf{u}_x + (a_1 b_2 - b_1 a_2)\,\mathbf{u}_x\mathbf{u}_y + (a_1 b_3 - b_1 a_3)\,\mathbf{u}_x\mathbf{u}_z$$
$$+ (a_2 b_1 - b_2 a_1)\,\mathbf{u}_y\mathbf{u}_x + (a_2 b_2 - b_2 a_2)\,\mathbf{u}_y\mathbf{u}_y + (a_2 b_3 - b_2 a_3)\,\mathbf{u}_y\mathbf{u}_z$$
$$+ (a_3 b_1 - b_3 a_1)\,\mathbf{u}_z\mathbf{u}_x + (a_3 b_2 - b_3 a_2)\,\mathbf{u}_z\mathbf{u}_y + (a_3 b_3 - b_3 a_3)\,\mathbf{u}_z\mathbf{u}_z$$

The terms with the same subscripts are all zero; however, the terms with nonidentical subscripts are not necessarily equal. Therefore the dyadic product is not commutative in general.

Now, what would be the result of, say, the inner product of $\mathbf{c}$ with dyad $\underline{D}$? We define this operation by 'associating' the vector $\mathbf{c}$ with the vector 'beside' it in $\underline{D}$. Thus, if we premultiply by vector $\mathbf{c}$:

$$\mathbf{c}\cdot\underline{D} = (\mathbf{c}\cdot\mathbf{a})\,\mathbf{b} = \alpha\,\mathbf{b}$$

Postmultiplication gives:

$$\underline{D}\cdot\mathbf{c} = \mathbf{a}\,(\mathbf{b}\cdot\mathbf{c}) = \beta\,\mathbf{a}$$

Thus, this type of inner product is not commutative, i.e. $\mathbf{c}\cdot\underline{D} \neq \underline{D}\cdot\mathbf{c}$, unlike the inner product of two vectors. We see that the inner product of a vector with a dyad gives back one of the vectors that make up the dyad, multiplied by a constant. We will use this property to construct a set of very general Hamiltonian operators. The astute reader will by now be saying to herself "Huh? What is this?". And well so. To rationalize this in terms of previous discussions of vectors, let's switch to our matrix representations and construct our dyad again:

$$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

In order to construct the dyad such that taking an inner product of the dyad with a vector makes sense in terms of matrices, we must use the transpose of $\mathbf{b}$:

$$\mathbf{b}^T = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} \qquad \underline{D} = \mathbf{a}\,\mathbf{b}^T$$

This is, of course, the outer product of chapter 3. We could write this equivalently using the $\otimes$ operator, as was mentioned in chapter 3. The convention when writing a dyadic product of vectors is not to explicitly indicate that the second vector is actually a transposed vector.
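As a quick numerical sketch (an added aside, not part of the original text; the component values are arbitrary and NumPy is assumed to be available), the dyad is exactly NumPy's outer product, and its 3×3 matrix always has rank 1, a point we return to below when asking whether every square matrix is a dyad:

```python
import numpy as np

# Arbitrary example components for a and b (any values work).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The dyad D = a b^T: every coefficient a_i * b_j, a 3x3 array.
D = np.outer(a, b)
assert D.shape == (3, 3)        # nine scalar coefficients
assert D[0, 1] == a[0] * b[1]   # the a1*b2 coefficient of ux uy

# A matrix built this way always has rank 1, which is why an
# arbitrary 3x3 matrix generally cannot be factored into a dyad.
assert np.linalg.matrix_rank(D) == 1

# The dyadic product is not commutative: a b != b a in general.
assert not np.allclose(D, np.outer(b, a))
```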

Now, if we do an inner product of $\underline{D}$ with vector $\mathbf{c}$ explicitly in terms of matrices:

$$\underline{D}\cdot\mathbf{c} = (\mathbf{a}\,\mathbf{b}^T)\,\mathbf{c} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = (\mathbf{b}\cdot\mathbf{c}) \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$$

We see that the inner product of $\mathbf{b}$ with $\mathbf{c}$ is, in terms of matrices, a matrix product, as we have already seen in the chapter on matrices. We might perhaps make this a bit clearer using Dirac notation:

$$\underline{D} = |a\rangle\langle b| \qquad \underline{D}\cdot\mathbf{c} = |a\rangle\langle b|c\rangle = \beta\,|a\rangle$$

where $\beta = \langle b|c\rangle$ is a scalar.

What about premultiplication? A little thought will show that this must require the use of the transpose of vector $\mathbf{c}$. Again, in Dirac notation:

$$\underline{D} = |a\rangle\langle b| \qquad \mathbf{c}^T\underline{D} = \langle c|a\rangle\langle b| = \alpha\,\langle b|$$

where $\alpha = \langle c|a\rangle$.
We can do the same type of analysis with cross products:

$$\mathbf{c}\times\underline{D} = (\mathbf{c}\times\mathbf{a})\,\mathbf{b} = \mathbf{d}\,\mathbf{b} = \underline{N}$$
$$\underline{D}\times\mathbf{c} = \mathbf{a}\,(\mathbf{b}\times\mathbf{c}) = \mathbf{a}\,\mathbf{f} = \underline{O}$$

and again we find that the product is not commutative, $\mathbf{c}\times\underline{D} \neq \underline{D}\times\mathbf{c}$, but this time the result of the product of a dyad with a vector is a new dyad.

The third type of product that we will consider is the same type that we started with: the normal distributive multiplication. Thus we will multiply dyad $\underline{D}$ by vector $\mathbf{c}$:

$$\mathbf{c}\,\underline{D} = \mathbf{c}\,\mathbf{a}\,\mathbf{b}$$

Long multiplication will produce:

$$\mathbf{c}\,\mathbf{a}\,\mathbf{b} = (c_1\mathbf{u}_x + c_2\mathbf{u}_y + c_3\mathbf{u}_z)(a_1\mathbf{u}_x + a_2\mathbf{u}_y + a_3\mathbf{u}_z)(b_1\mathbf{u}_x + b_2\mathbf{u}_y + b_3\mathbf{u}_z)$$
$$= c_1 a_1 b_1\,\mathbf{u}_x\mathbf{u}_x\mathbf{u}_x + c_1 a_1 b_2\,\mathbf{u}_x\mathbf{u}_x\mathbf{u}_y + c_1 a_1 b_3\,\mathbf{u}_x\mathbf{u}_x\mathbf{u}_z + \cdots + c_3 a_3 b_3\,\mathbf{u}_z\mathbf{u}_z\mathbf{u}_z$$

in which there are now 27 (3×3×3) coefficients. This product of three vectors is called a triad. Hopefully, you can see that we can take this as far as we wish to produce tetrads, pentads, etc.

We can consider a way to calculate the number of terms or coefficients in each of these objects. Our vectors have three terms, our dyads have 9 terms and our triads have 27 terms. Let us say that a vector has a rank of 1, a dyad a rank of 2 and a triad a rank of 3. Using these numbers we can now say that the number of coefficients in one of these objects is:

$$n_{\text{coefficients}} = 3^{\text{rank}}$$
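The counting rule can be checked directly. In this added NumPy sketch (arbitrary example components, not from the original text), the triad is built as a 3×3×3 array of products $c_i a_j b_k$:

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])   # arbitrary example values
a = np.array([4.0, 5.0, 6.0])
b = np.array([7.0, 8.0, 9.0])

# Triad c a b: coefficients c_i * a_j * b_k, a 3x3x3 array.
T = np.einsum('i,j,k->ijk', c, a, b)
assert T.shape == (3, 3, 3)
assert T.size == 3 ** 3          # 27 coefficients: n = 3^rank with rank = 3

# The same counting rule for a vector (rank 1) and a dyad (rank 2):
assert a.size == 3 ** 1
assert np.outer(a, b).size == 3 ** 2
```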

Now, consider the inner product of a vector with a dyad that we just discussed. The same operation on a triad will produce a dyad times a constant (try it for yourself). Thus, the inner product reduces the rank by one to produce a lower ranked object. With this

in mind we can see that a scalar will be an object of rank 0, since the inner product of a rank-1 object with a vector (or simply the inner product of a vector with a vector) is a scalar, as we have seen in chapter 3. Our determination of the number of coefficients is a little artificial since we have been using Cartesian space for our deliberations. To be completely general we would write:

$$n_{\text{coefficients}} = d^{\text{rank}}$$

where $d$ represents the number of dimensions of the space under consideration. For clarity, however, we will continue to work with Cartesian space.

Let's take a closer look at the dyad, $\underline{D}$. Our longhand representation of it is:

$$\underline{D} = \mathbf{a}\,\mathbf{b} = (a_1\mathbf{u}_x + a_2\mathbf{u}_y + a_3\mathbf{u}_z)(b_1\mathbf{u}_x + b_2\mathbf{u}_y + b_3\mathbf{u}_z)$$
$$= a_1 b_1\,\mathbf{u}_x\mathbf{u}_x + a_1 b_2\,\mathbf{u}_x\mathbf{u}_y + a_1 b_3\,\mathbf{u}_x\mathbf{u}_z + a_2 b_1\,\mathbf{u}_y\mathbf{u}_x + a_2 b_2\,\mathbf{u}_y\mathbf{u}_y + a_2 b_3\,\mathbf{u}_y\mathbf{u}_z + a_3 b_1\,\mathbf{u}_z\mathbf{u}_x + a_3 b_2\,\mathbf{u}_z\mathbf{u}_y + a_3 b_3\,\mathbf{u}_z\mathbf{u}_z$$

with coefficients and unit dyads, very similar to the longhand representation of vectors. Recall from chapter 3 that a simple Euclidean vector can be represented using a 3×1 column matrix. Thus $\mathbf{a}$ has the components $a_1$, $a_2$ and $a_3$, which we include in a column matrix representing the vector:

$$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$$
In a very similar manner we can represent the dyad as a square matrix using the coefficients of the unit dyads:
$$\underline{D} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix} = \begin{bmatrix} d_{11} & d_{12} & d_{13} \\ d_{21} & d_{22} & d_{23} \\ d_{31} & d_{32} & d_{33} \end{bmatrix}$$

This makes sense, considering our earlier discussion of the formation of a dyad from two vectors using the Dirac notation. This involves an outer product which, as we have seen in chapter 3, results in a matrix. Since we can represent the dyad as a square matrix, we expect that the algebra of the dyad will be identical to that of matrices:

$$\underline{A} + \underline{B} = \underline{B} + \underline{A} \quad\text{(commutative)}$$
$$\underline{A} + (\underline{B} + \underline{C}) = (\underline{A} + \underline{B}) + \underline{C} \quad\text{(associative)}$$
$$\underline{A} + \underline{0} = \underline{A} \quad\text{(identity)}$$
$$\underline{A} + (-\underline{A}) = \underline{0} \quad\text{(additive inverse)}$$
$$\alpha(\underline{A} + \underline{B}) = \alpha\underline{A} + \alpha\underline{B} \quad\text{(scalar distributive)}$$
$$(\alpha + \beta)\underline{A} = \alpha\underline{A} + \beta\underline{A} \quad\text{(matrix distributive)}$$
$$\alpha(\beta\underline{A}) = (\alpha\beta)\underline{A} \quad\text{(associative law for scalar multiplication)}$$

The dyad $\underline{0}$ represents the zero dyad with all zero coefficients, as you no doubt already suspected. So, all dyads can be represented by matrices. How about the reverse? Are all square matrices representations of dyads? From our previous discussion this would require that we be able to factor a dyad into two vectors. In matrix notation this is:

$$\underline{D} = \mathbf{a}\,\mathbf{b}^T \quad\text{or}\quad \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}$$

Any matrix can have any values that we want to put into it, so if we have the matrix:

$$\begin{bmatrix} a_1 b_1 & a_2 b_2 & a_3 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix}$$

in which the first row differs from the previous matrix, we cannot construct this matrix from the direct product of vectors $\mathbf{a}$ and $\mathbf{b}$ (except in the trivial case of zero vectors), nor can it be factored into $\mathbf{a}$ and $\mathbf{b}$. Therefore we cannot say that, in general, all square matrices are dyads.

There is an operation that we can do on a dyad called contraction. As we have learned, the dyad can be constructed from the

direct product of two vectors. The dyad is said to be contracted if the inner product is taken of the two component vectors (using Dirac notation):

$$\underline{D} = \mathbf{a}\,\mathbf{b} \qquad D_{\text{contracted}} = \langle a|b\rangle = \mathbf{a}\cdot\mathbf{b}$$

This reduces the dyad, $\underline{D}$, to a scalar. Of course, for higher-rank objects there are multiple contractions; in general there will be (rank − 1) possible ways to do a contraction. Also, note that the contraction operation reduces the rank by two.

We must point out some potential notational problems before proceeding. First, in our discussion of matrices we distinguished between the matrix product of two matrices and the direct product of two matrices (equation [2-3]). We must also be careful to do so here for dyads. 'Regular' multiplication is the same as matrix multiplication:

$$\underline{A}\,\underline{B} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}$$
$$= \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32} & a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33} \\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32} & a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33} \\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32} & a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33} \end{bmatrix} = \underline{C}$$

The direct product of two dyads is:

$$\underline{A}\otimes\underline{B} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \otimes \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} = \begin{bmatrix} a_{11}\underline{B} & a_{12}\underline{B} & a_{13}\underline{B} \\ a_{21}\underline{B} & a_{22}\underline{B} & a_{23}\underline{B} \\ a_{31}\underline{B} & a_{32}\underline{B} & a_{33}\underline{B} \end{bmatrix}$$
$$= \begin{bmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{11}b_{13} & \cdots & a_{13}b_{13} \\ \vdots & & & & \vdots \\ a_{31}b_{11} & \cdots & & & a_{33}b_{33} \end{bmatrix} \quad (81\ \text{terms})$$
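The distinction between the two products can be sketched with NumPy (an added aside, not from the original text; the coefficients are random example values). Ordinary matrix multiplication keeps the 3×3 shape, while the direct (Kronecker) product produces all 81 products of coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # example dyad coefficients (arbitrary)
B = rng.standard_normal((3, 3))

# 'Regular' multiplication is ordinary matrix multiplication: 9 terms.
C = A @ B
assert C.shape == (3, 3)

# The direct product expands each a_ij into the block a_ij * B: 81 terms.
AB = np.kron(A, B)
assert AB.shape == (9, 9)
assert AB.size == 81
assert np.allclose(AB[0:3, 0:3], A[0, 0] * B)   # upper-left block is a11 * B
```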

Also, in linear algebra it is common to write the premultiplication of a vector by a matrix as:

$$M\,\mathbf{x} = \mathbf{y}$$

the result of which is a new vector. However, in the context of dyads this would produce a triad, increasing the rank of the object:

$$\underline{M}\,\mathbf{x} = \underline{O}$$

To produce a vector we must use the inner product notation:

$$\underline{M}\cdot\mathbf{x} = \mathbf{y}$$

We must take care not to confuse the two. Our notation here has been to use $M$ to denote a matrix and $\underline{M}$ to denote a dyad. Usually the context will tell us which is which; however, in other texts the distinction may not be so clear. Thus $A\,B$ is used in this text for standard matrix multiplication and $\underline{A}\,\underline{B}$ or $\underline{A}\otimes\underline{B}$ for dyad (or triad, tetrad, etc.) direct product multiplication.

So, to recap, we have some new mathematical objects developed from the application of the direct product of vectors with each

other. Each of these objects has a 'rank' associated with it, which is the power to which the dimensionality of the space of the vector(s) is raised in order to generate the number of coefficients of the object. Thus, the dyad results from the direct product of two 3D vectors and has rank 2, the power of 2 in 3². Three vectors give a triad of rank 3 and four vectors give a tetrad of rank 4. Scalars are ranked 0 since they consist of no vectors.

The Gradient of a Vector

In chapter 4 we alluded to the gradient calculation:

$$\nabla\mathbf{a} \quad\text{or}\quad \operatorname{grad}\,\mathbf{a}$$

and made the assertion that the result is a dyad. We now show that this is so via the direct product of $\nabla$ and $\mathbf{a}$:

$$\nabla\mathbf{a} = \left(\mathbf{u}_x\frac{\partial}{\partial x} + \mathbf{u}_y\frac{\partial}{\partial y} + \mathbf{u}_z\frac{\partial}{\partial z}\right)(a_x\mathbf{u}_x + a_y\mathbf{u}_y + a_z\mathbf{u}_z)$$
$$= \frac{\partial a_x}{\partial x}\mathbf{u}_x\mathbf{u}_x + \frac{\partial a_y}{\partial x}\mathbf{u}_x\mathbf{u}_y + \frac{\partial a_z}{\partial x}\mathbf{u}_x\mathbf{u}_z + \frac{\partial a_x}{\partial y}\mathbf{u}_y\mathbf{u}_x + \frac{\partial a_y}{\partial y}\mathbf{u}_y\mathbf{u}_y + \frac{\partial a_z}{\partial y}\mathbf{u}_y\mathbf{u}_z + \frac{\partial a_x}{\partial z}\mathbf{u}_z\mathbf{u}_x + \frac{\partial a_y}{\partial z}\mathbf{u}_z\mathbf{u}_y + \frac{\partial a_z}{\partial z}\mathbf{u}_z\mathbf{u}_z$$

The $\mathbf{u}_i\mathbf{u}_j$ are the unit dyads as above and the partial derivatives are the components of the dyad. We can compact this a bit using matrix notation:

$$\nabla\mathbf{a} = \underline{D} = \begin{bmatrix} \partial a_x/\partial x & \partial a_y/\partial x & \partial a_z/\partial x \\ \partial a_x/\partial y & \partial a_y/\partial y & \partial a_z/\partial y \\ \partial a_x/\partial z & \partial a_y/\partial z & \partial a_z/\partial z \end{bmatrix}$$
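The gradient dyad can be computed numerically for a concrete field. In this added sketch (the field $\mathbf{a}(\mathbf{r}) = (xy,\ yz,\ zx)$ is an assumed example, not from the text), a central finite difference builds the matrix row by row, with row $i$ holding the derivatives with respect to $x_i$ as in the display above:

```python
import numpy as np

# Example vector field (assumed for illustration): a(r) = (x*y, y*z, z*x).
def field(r):
    x, y, z = r
    return np.array([x * y, y * z, z * x])

def grad_vector(f, r, h=1e-6):
    """Finite-difference estimate of the dyad with entries d a_j / d x_i."""
    D = np.zeros((3, 3))
    for i in range(3):
        dr = np.zeros(3)
        dr[i] = h
        D[i] = (f(r + dr) - f(r - dr)) / (2 * h)  # row i: derivatives w.r.t. x_i
    return D

r0 = np.array([1.0, 2.0, 3.0])
D = grad_vector(field, r0)

# Analytic dyad at r0 for this particular field.
x, y, z = r0
expected = np.array([[y,   0.0, z],
                     [x,   z,   0.0],
                     [0.0, y,   x]])
assert np.allclose(D, expected, atol=1e-4)
```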

Transformations

We have seen in chapter three that the norm of a vector, or more generally the inner product of a pair of vectors, is invariant to rotations. Rotation operators are orthogonal, which in visual terms means that an operator and its transpose rotate in opposite directions by the same amount. Also, we have seen that the rotation operation may be considered a rotation of coordinates with a consequent change of basis set. One can also envision a change of coordinates involving translation, or perhaps both translation and rotation together. In magnetic resonance spectroscopy we are primarily concerned with rotations. Intuitively, we expect that the norm of the vector will remain the same in the new coordinate system as in the old one, as it did for rotations only. Thus, a vector in coordinate system A is considered to be the same vector in coordinate system B, assuming the beginning and ending points of the vector do not move with the coordinate system change. The components of the vector in each coordinate system will generally not be equal; however, we expect the norm (the length) to remain constant.

Let's suppose, then, that we have a 2D vector, $\mathbf{a} = a_1\mathbf{u}_x + a_2\mathbf{u}_y$, in coordinate system A. To transform to coordinate system B we use a function of some type:

$$b_1 = b_1(a_1, a_2) \qquad b_2 = b_2(a_1, a_2)$$

and our vector is now:

$$\mathbf{b} = b_1\mathbf{v}_x + b_2\mathbf{v}_y$$

However, we have just said that we expect the norm of the new vector to be the same as that of the old vector, since the vector itself doesn't change as a result of the transformation. An observer in coordinate system A must see the same vector that an observer in B will see. Thus, to indicate that these are the same vector we write:

$$\{\mathbf{a} = \mathbf{b}\} \quad\Longleftrightarrow\quad \langle a|a\rangle = \langle b|b\rangle$$

which is meant to indicate that (although their components are different) their norms are the same and that they are in reality the same vector. Are there any vectors to which this reasoning might not apply? Yes, the position vector that locates a point in space is one example. The head of the vector is at the point in space and the tail is located at the origin of the coordinate system. Moving the coordinate system (as in translational motion) will potentially move the origin and very likely change the length of the position vector. Thus, our condition for equivalence of vectors in different

coordinate systems is not, in general, satisfied for this type of vector. This will not, however, be a problem for us, as all of our considerations of coordinate changes will involve rotations in which the origins of the old and new coordinates are at the same point in the space.

Let's apply this idea to our higher-rank objects, starting with the dyad. Thus, we assert that in transforming from coordinate system A to coordinate system B, the dyad in question will remain the same dyad in both coordinate systems, much the same as is the case with vectors. An observer in A sees dyad $\underline{D}$ and an observer in B sees dyad $\underline{E}$. In the case of the vectors we used the inner product operation to reduce them to scalars that were invariant with respect to coordinate changes, so let's try to do the same type of thing with dyads. Our tool for doing so is the dyad contraction. We will contract dyads $\underline{D}$ and $\underline{E}$ to scalars $d$ and $e$ and compare them. We begin by supposing that the dyads are, in fact, equal²:

$$\underline{D} = \mathbf{a}\,\mathbf{b}, \quad \mathbf{a}\cdot\mathbf{b} = d \qquad \underline{E} = \mathbf{c}\,\mathbf{d}, \quad \mathbf{c}\cdot\mathbf{d} = e$$
$$\{\underline{D} = \underline{E}\} \;\Longrightarrow\; \mathbf{a}\,\mathbf{b} = \mathbf{c}\,\mathbf{d}$$

Taking the left inner product with $\mathbf{a}$:

$$\mathbf{a}\cdot\mathbf{a}\,\mathbf{b} = \mathbf{a}\cdot\mathbf{c}\,\mathbf{d}$$
$$a^2\,\mathbf{b} = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{d}$$
$$\mathbf{b} = \frac{(\mathbf{a}\cdot\mathbf{c})}{a^2}\,\mathbf{d}$$

The term in brackets is the scalar result of an inner product calculation, which is divided by $a^2$, another scalar. For convenience we replace this with a single scalar variable:

$$\text{let } \lambda = \frac{\mathbf{a}\cdot\mathbf{c}}{a^2} \qquad \mathbf{b} = \lambda\,\mathbf{d}$$

Now we do the right inner product with b:

2 This exposition is that of J.C. Kolecki. See the references.

$$\mathbf{a}\,\mathbf{b}\cdot\mathbf{b} = \mathbf{c}\,\mathbf{d}\cdot\mathbf{b}$$
$$b^2\,\mathbf{a} = \mathbf{c}\,(\mathbf{d}\cdot\mathbf{b})$$
$$\mathbf{a} = \frac{\mathbf{c}\,(\mathbf{d}\cdot\mathbf{b})}{b^2}$$

Using the result of our left inner product calculation, which gives $\mathbf{d} = \mathbf{b}/\lambda$:

$$\mathbf{a} = \frac{\mathbf{c}\,(\mathbf{d}\cdot\mathbf{b})}{b^2} = \frac{\mathbf{c}\,(\mathbf{b}\cdot\mathbf{b})}{\lambda\,b^2} = \frac{\mathbf{c}}{\lambda}$$

Now, we have:

$$\mathbf{a}\cdot\mathbf{b} = \frac{\mathbf{c}}{\lambda}\cdot\lambda\,\mathbf{d} = \mathbf{c}\cdot\mathbf{d} \qquad\text{or}\qquad d = e$$
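A quick numerical check of this chain of equalities (an added NumPy sketch; the vectors and the scalar $\lambda$ are arbitrary example values, not from the text): equal dyads yield equal contractions, and the contraction of $\mathbf{a}\,\mathbf{b}$ is the trace of the matrix $\mathbf{a}\,\mathbf{b}^T$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # arbitrary example values
b = np.array([4.0, 5.0, 6.0])

# Build E = c d from the same dyad using the lambda of the derivation:
lam = 2.0                       # any nonzero scalar (assumed)
c = lam * a                     # then c d reproduces a b exactly
d = b / lam

D = np.outer(a, b)
E = np.outer(c, d)
assert np.allclose(D, E)        # the dyads are equal

# Contraction of a b is <a|b> = a . b, which equals trace(a b^T).
assert np.isclose(np.trace(D), a @ b)
assert np.isclose(a @ b, c @ d)  # equal dyads give equal contractions: d = e
```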
Thus, if the dyads are equivalent, so are their associated scalars. Presumably the reverse is true as well: if the contractions of each dyad are equal to each other, then so are the dyads. We mean this in the same sense in which we discussed vectors. In other words, although the components of $\underline{D}$ and $\underline{E}$ may not be the same, they represent the same dyad if their contractions are equivalent.

A Tensor Definition

We can now define a tensor. We mean by this term a mathematical object which is invariant to transformation of basis. We have already seen that the scalar object is invariant to a change of basis, as are vectors and dyads. In other words, a scalar such as the temperature of a cup of tea is the same whether the coordinate system's origin is on the earth or on the moon. Formally, if the temperature in coordinate system A is $T$ and in coordinate system A' it is $T'$, then the transformation from $T$ to $T'$ is:

$$T' = aT$$

where $a$ is always unity; $T'$ is therefore equal to $T$ and is said to be a tensor of rank 0.

The vector object is also a tensor if it too can be said to be the same in any basis. In terms of the coefficients of a 3D vector, the transformation from coordinate basis $\mathbf{u}$ to $\mathbf{u}'$ is, for vector $\mathbf{a}$:

$$\mathbf{a} = a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + a_3\mathbf{u}_3 \qquad \mathbf{a}' = a_1'\mathbf{u}_1' + a_2'\mathbf{u}_2' + a_3'\mathbf{u}_3'$$
$$a_i' = \sum_{j}^{3} \cos(\mathbf{u}_i', \mathbf{u}_j)\,a_j$$
using equation 3-27. Intuitively, we know that the vector itself does not change even though its coefficients may do so. Thus a calculation of the norm of the vector will be the same in the new basis as in the old basis. If we have two vectors, $\mathbf{a}$ and $\mathbf{b}$, and $\mathbf{b}$ has been produced by a change of basis from $\mathbf{u}$ to $\mathbf{u}'$, then:

$$\mathbf{a} = a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + a_3\mathbf{u}_3 \qquad \mathbf{b} = b_1\mathbf{u}_1' + b_2\mathbf{u}_2' + b_3\mathbf{u}_3'$$
$$|\mathbf{a}|^2 = a_1^2 + a_2^2 + a_3^2 \qquad |\mathbf{b}|^2 = b_1^2 + b_2^2 + b_3^2 \qquad\text{and}\qquad |\mathbf{a}| = |\mathbf{b}|$$

or equivalently, as we showed in Dirac notation in the last section:

$$\{\mathbf{a} = \mathbf{b}\} \quad\Longleftrightarrow\quad \langle a|a\rangle = \langle b|b\rangle$$
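This invariance is easy to verify numerically. The added sketch below (not from the original text; the rotation angle and components are arbitrary) rotates a vector about the $z$ axis and checks that the components change while $\langle a|a\rangle = \langle b|b\rangle$:

```python
import numpy as np

theta = np.pi / 5               # an arbitrary rotation angle (assumed)
# Rotation about the z axis; orthogonal, so R^T R = I.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(R.T @ R, np.eye(3))

a = np.array([1.0, 2.0, 3.0])   # arbitrary example components
b = R @ a                       # components of the same vector in the new basis

# The components change, but the norm is invariant.
assert not np.allclose(a, b)
assert np.isclose(a @ a, b @ b)
```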

So, if two vectors are equivalent after a basis transformation, then they are said to be tensors of rank 1. We have seen that the dyad, as well, can be invariant to transformation, such that dyad $\underline{D}$ is equivalent to dyad $\underline{E}$ if $\underline{E}$ is produced by a change of basis from $\underline{D}$. The dyad $\underline{D}$ produced from a pair of 3D vectors can conveniently be represented by a 3×3 matrix with 9 (or 3²) components. In order to transform $\underline{D}$ to $\underline{E}$ we must perform a set of operations similar to the vector transformation. In the case of the vector (or rank-1 tensor in our present context) each new component of the vector is a linear combination of all of the old components. The case with tensors is much the same: each component of the new rank-2 tensor is a linear combination of all of the components of the old tensor, and is expressed in a similar fashion to vectors:

$$\underline{D} = \begin{bmatrix} d_{11} & d_{12} & d_{13} \\ d_{21} & d_{22} & d_{23} \\ d_{31} & d_{32} & d_{33} \end{bmatrix} \qquad \underline{E} = \begin{bmatrix} e_{11} & e_{12} & e_{13} \\ e_{21} & e_{22} & e_{23} \\ e_{31} & e_{32} & e_{33} \end{bmatrix}$$

To transform from $\underline{D}$ to $\underline{E}$:

$$e_{ij}' = \sum_k \sum_l \cos(\mathbf{u}_i', \mathbf{u}_k)\,\cos(\mathbf{u}_j', \mathbf{u}_l)\,d_{kl}$$

where, as with the vector case, $\cos(\mathbf{u}_i', \mathbf{u}_k)$ is the direction cosine of the angle between axis $i'$ in the new set of coordinates and axis $k$ in the old coordinates (see Appendix II). We can simplify the equation a bit:

$$e_{ij}' = \sum_k \sum_l \lambda_{i'k}\,\lambda_{j'l}\,d_{kl}$$

where the cosine terms have been replaced with $\lambda$ for brevity.

An alternate way to look at how we might define rank-2 tensors is to say that the action of the tensor $\underline{T}$ on a vector $\mathbf{v}$ is to produce a new vector times a scalar. We have already encountered this in the projection operators (see eq. 3-22). Recall that we defined the projection operator as:

$$\underline{P}_i = \mathbf{u}_i\,\mathbf{u}_i^T \qquad\text{or}\qquad P_i = |u_i\rangle\langle u_i|$$

and its action on vector $\mathbf{v}$ is:

$$\underline{P}_i\cdot\mathbf{v} = v_i\,\mathbf{u}_i \qquad\text{or}\qquad P_i\,\mathbf{v} = |u_i\rangle\langle u_i|v\rangle = v_i\,|u_i\rangle$$
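This action can be checked numerically (an added NumPy sketch; the components of $\mathbf{v}$ are arbitrary example values, not from the text):

```python
import numpy as np

# Unit basis vectors of Cartesian space: rows of the identity are u_1, u_2, u_3.
u = np.eye(3)

v = np.array([4.0, 5.0, 6.0])   # arbitrary example vector

for i in range(3):
    P_i = np.outer(u[i], u[i])                 # P_i = u_i u_i^T, a dyad
    assert np.allclose(P_i @ v, v[i] * u[i])   # P_i . v = v_i u_i
```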


We could say then that the projection operator is a tensor, and in terms of our notation for tensors:

$$\underline{P}_i = \mathbf{u}_i\,\mathbf{u}_i^T \qquad \underline{P}_i\cdot\mathbf{v} = v_i\,\mathbf{u}_i$$

where we emphasize the tensor character of the operator. What would happen if we were to change our basis from $\mathbf{u}$ to $\mathbf{u}'$? Would the new tensor still give the same results when it operates on the new vector? Intuitively, we would suspect that the answer is yes, but let us explore it a little.

We start with a vector $\mathbf{a}$ in coordinate system A and do a basis transformation to coordinate system B, in which we measure the vector as $\mathbf{b}$:

$$\mathbf{a} = a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + a_3\mathbf{u}_3 \qquad \mathbf{b} = b_1\mathbf{u}_1' + b_2\mathbf{u}_2' + b_3\mathbf{u}_3'$$

Since we are familiar with the effect of the projection operator/tensor on a vector, we will use it in our example. We apply the operator in coordinate system A and then in coordinate system B, remembering that the operator must be transformed as well as the vector:

$$\underline{P}_i\cdot\mathbf{a} = a_i\,\mathbf{u}_i \qquad \underline{P}_i'\cdot\mathbf{b} = b_i\,\mathbf{u}_i'$$

The transformations are:

$$b_i = \sum_j^3 \lambda_{ij}\,a_j \qquad p_{ij}' = \sum_k^3 \sum_l^3 \lambda_{i'k}\,\lambda_{j'l}\,p_{kl}$$

where, as before, the $\lambda$'s are the direction cosines for the indicated axes in the old and new coordinate systems. Let us look closely at one coordinate of the projection operator:

$$p_{11}' = \sum_k^3 \sum_l^3 \lambda_{1'k}\,\lambda_{1'l}\,p_{kl}$$
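The rank-2 transformation law can be verified numerically. In this added sketch (not from the original text; the angle and components are arbitrary example values), the direction-cosine matrix is taken from a rotation about the $z$ axis, the double sum is written with einsum and compared against the equivalent matrix form $\Lambda\,D\,\Lambda^T$, and the contraction (trace) is checked to be invariant:

```python
import numpy as np

theta = 0.3                     # arbitrary rotation angle (assumed)
# Direction-cosine matrix lambda_ij = cos(u_i', u_j) for a rotation about z.
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

a = np.array([1.0, 2.0, 3.0])   # arbitrary example vectors
b = np.array([4.0, 5.0, 6.0])
D = np.outer(a, b)              # a rank-2 tensor in the old basis

# e'_ij = sum_kl lambda_ik lambda_jl d_kl, i.e. E = L D L^T.
E = np.einsum('ik,jl,kl->ij', L, L, D)
assert np.allclose(E, L @ D @ L.T)

# The contraction (trace) is invariant under the transformation: d = e.
assert np.isclose(np.trace(E), np.trace(D))
```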

Problems

References

1. J.C. Kolecki, Foundations of Tensor Analysis for Students of Physics and Engineering with an Introduction to the Theory of Relativity, NASA Scientific and Technical Information, TP-2005-213115.
2. A.I. Borisenko and I.E. Tarapov, Vector and Tensor Analysis with Applications, Dover Publications Inc., 1968.
