Contents
Foreword

1 Vectors and tensors algebra
  1.1 Scalars and vectors
  1.2 Geometrical operations on vectors
  1.3 Fundamental properties
  1.4 Vector decomposition
  1.5 Dot product of vectors
  1.6 Index notation
  1.7 Tensors of second order
  1.8 Dyadic product
  1.9 Tensors of higher order
  1.10 Coordinate transformation
  1.11 Eigenvalues and eigenvectors
  1.12 Invariants
  1.13 Singular value decomposition
  1.14 Cross product
  1.15 Triple product
  1.16 Dot and cross product of vectors: miscellaneous formulas

2 Tensor calculus
  2.1 Tensor functions & tensor fields
  2.2 Derivative of tensor functions
  2.3 Derivatives of tensor fields, Gradient, Nabla operator
  2.4 Integrals of tensor functions
  2.5 Line integrals
  2.6 Path independence
  2.7 Surface integrals
  2.8 Divergence and curl operators
  2.9 Laplace's operator
  2.10 Gauss' and Stokes' Theorems
  2.11 Miscellaneous formulas and theorems involving nabla
  2.12 Orthogonal curvilinear coordinate systems
  2.13 Differentials
  2.14 Differential transformation
  2.15 Differential operators in curvilinear coordinates
Foreword
A quick review of vector and tensor algebra, geometry and analysis is presented. The reader is assumed to have sufficient familiarity with the subject; the material is included as an entry point as well as a reference for the subsequent chapters.
Chapter 1

Vectors and tensors algebra
Notation 1.
Vectors and tensors are denoted by bold-face letters like A and v, and the magnitude of a vector by |v| or simply by its normal-face letter v. In handwriting, an underbar is used to denote tensors and vectors, as in v̲.
Definition 3 (Vector summation). The sum u = v + w of two vectors v and w is the vector obtained by placing the tail of w at the head of v and drawing u from the tail of v to the head of w (triangle rule), or equivalently as the diagonal of the parallelogram spanned by v and w.

[Figure: vector summation u = v + w, shown by (a) the triangle rule and (c) the parallelogram rule.]
For any real number (scalar) α and vector v, their multiplication is a vector αv whose magnitude is |α| times the magnitude of v and whose direction is the same as v if α > 0 and opposite to v if α < 0. If α = 0 then αv = 0.
Property 5 (Vector summation).

1. Existence of additive identity: v + 0 = v
2. Existence of additive inverse: v + (−v) = 0
3. Commutative law: v + w = w + v
4. Associative law: (u + v) + w = u + (v + w)
Property 6 (Multiplication with scalars).

1. Distributive law: α (v + w) = α v + α w
2. Distributive law: (α + β) v = α v + β v
3. Associative law: α (β v) = (α β) v
4. Multiplication with scalar identity: 1 v = v
Definition 7 (Real vector space). The mathematical system (V, +, R, ·) is called a real vector space or simply a vector space, where V is the set of geometrical vectors, + is vector
summation with the four properties mentioned, R is the set of real numbers furnished with summation and multiplication of numbers, and · stands for multiplication of a vector with a real number (scalar) having the above-mentioned properties. The reader may wonder why this definition is needed. Remember that a quantity whose expression requires magnitude and direction is not necessarily a vector(!). A well-known example is finite rotation, which has both magnitude and direction but is not a vector because it does not follow the properties of vector summation as defined before. Therefore, having magnitude and direction is not sufficient for a quantity to be identified as a vector; it must also obey the rules of summation and multiplication with scalars.
A unit vector is a vector of unit magnitude. For any vector v, its unit vector is referred to by ev or v̂, and is equal to v̂ = v / |v|. One would say that the unit vector carries the information about direction. Therefore magnitude and direction, as constituents of a vector, are multiplicatively decomposed as v = |v| v̂.

Relative to a coordinate system, a vector v can be decomposed as
v = v1 e1 + v2 e2 + v3 e3 ,
(1.1)
where {e1, e2, e3} are unit basis vectors of the coordinate system, {v1, v2, v3} are components of v, and {v1 e1, v2 e2, v3 e3} are component vectors of v. Since a vector is specified by its components relative to a given coordinate system, it can be alternatively denoted by array notation

v ≡ (v1 v2 v3)^T .    (1.2)

[Fig. 1.3: Components of a vector.]

Using the Pythagorean theorem, the norm (magnitude) of a vector v in three-dimensional space can be written as

|v| = (v1² + v2² + v3²)^{1/2} .    (1.3)
The dot product (scalar product) of two vectors v and w is defined as

v · w = |v| |w| cos θ ,    (1.4)

where θ is the angle between v and w.
Property 10.

1. v · w = w · v
2. u · (v + w) = u · v + u · w
3. v · v ≥ 0 and v · v = 0 ⟺ v = 0
Writing each vector as its magnitude times its unit vector gives

v · w = (v v̂) · (w ŵ) = v (v̂ · w) = w (ŵ · v) = v w cos θ ,    (1.5)

which says the dot product of vectors v and w equals the projection of w on the v direction times the magnitude of v (see Fig. 1.1), or the other way around. Also, dividing the leftmost and rightmost terms of (1.5) by vw gives

v̂ · ŵ = cos θ ,    (1.6)
which says the dot product of unit vectors (standing for directions) determines the angle between them. A simple and yet interesting result is obtained for components of a vector v as
v1 = v · e1 ,  v2 = v · e2 ,  v3 = v · e3 .    (1.7)
Important results
1. In any orthonormal coordinate system, including Cartesian coordinates, it holds that

e1 · e1 = 1 ,  e1 · e2 = 0 ,  e1 · e3 = 0
e2 · e1 = 0 ,  e2 · e2 = 1 ,  e2 · e3 = 0
e3 · e1 = 0 ,  e3 · e2 = 0 ,  e3 · e3 = 1 ,    (1.8)

2. therefore

v · w = v1 w1 + v2 w2 + v3 w3 .    (1.9)

3. For any two nonzero vectors a and b, a · b = 0 iff a is normal to b.

4. The norm of a vector v can be obtained by

|v| = (v · v)^{1/2} .    (1.10)
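These results are easy to sanity-check numerically; below is a minimal sketch using NumPy (the vectors are arbitrary examples, not taken from the text):

```python
import numpy as np

v = np.array([3.0, 0.0, 4.0])
w = np.array([1.0, 2.0, 2.0])

dot = v @ w                                       # v . w = v1 w1 + v2 w2 + v3 w3, eq. (1.9)
norm_v = np.sqrt(v @ v)                           # |v| = (v . v)^(1/2), eq. (1.10)
cos_theta = dot / (norm_v * np.linalg.norm(w))    # from eq. (1.4)

# Components are projections onto the orthonormal basis vectors, eq. (1.7)
e1, e2, e3 = np.eye(3)
assert np.isclose(v @ e1, v[0]) and np.isclose(v @ e3, v[2])
print(dot, norm_v, np.degrees(np.arccos(cos_theta)))
```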
To verify (1.9), expand the dot product of two vectors in terms of their components:

v · w = (v1 e1 + v2 e2 + v3 e3) · (w1 e1 + w2 e2 + w3 e3)
      = v1 w1 e1 · e1 + v1 w2 e1 · e2 + v1 w3 e1 · e3
      + v2 w1 e2 · e1 + v2 w2 e2 · e2 + v2 w3 e2 · e3
      + v3 w1 e3 · e1 + v3 w2 e3 · e2 + v3 w3 e3 · e3 ,    (1.11)
which together with equation (1.8) yields the right-hand side of (1.9). As can be seen, even for a very basic expansion considerable effort is required. The so-called index notation was developed for simplification. Index notation uses a parametric index instead of an explicit index. If, for example, the letter i is used as an index in vi, it addresses any of the possible values i = 1, 2, 3 in three-dimensional space, therefore representing any of the components of the vector v. Then {vi} is equivalent to the set of all vi's, namely {v1, v2, v3}.
Notation 11. When an index appears twice in one term then that term is summed up over all possible values of the index. This is called Einstein's convention.
To clarify this, let's have a look at the trace of a 3 × 3 matrix, tr(A) = Σ_{i=1}^{3} Aii = A11 + A22 + A33. Based on Einstein's convention one could write tr(A) = Aii, because the index i appears twice and shall be summed over all possible values i = 1, 2, 3; then Aii = A11 + A22 + A33. As another example, a vector v can be written as v = vi ei, because the index i appears twice and we can write vi ei = v1 e1 + v2 e2 + v3 e3.
Remark 12. A repeating index (which is summed up over) can be freely renamed. For
example vi wi Amn can be rewritten as vk wk Amn because in fact vi wi Amn = Bmn and index i does not appear as one of the free indices which can vary among {1, 2, 3}. Note that renaming i to m or n would not be possible as it changes the meaning.
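Einstein's convention maps one-to-one onto numerical index notation; the sketch below illustrates it with NumPy's einsum (arbitrary example arrays):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

trace = np.einsum('ii->', A)        # index i appears twice: tr(A) = Aii
dot   = np.einsum('i,i->', v, w)    # vi wi, the dot product of eq. (1.9)

# A dummy (repeated) index can be freely renamed, cf. Remark 12:
assert np.isclose(dot, np.einsum('k,k->', v, w))
print(trace, dot)
```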
Another generalization concerns projection. Projection can be seen as a geometrical transformation defined on the basis of the inner product. Up to three dimensions vector projection is easily understood. Now if we allow vectors to be entities belonging to an arbitrary vector space endowed with an inner product, one can think of the inner product as the means of projection of a vector in the space onto another vector or direction. Note that we have put no restrictions on the vector space, such as its dimension. One may ask: what do we need all these cumbersome generalizations for? The answer is to understand more complex constructs based on the basic understanding we already have of components of a geometrical vector. So far we have seen that a scalar can be expressed by a single real number. This can be seen as: scalars have no components on the three coordinate axes, so they require 3⁰ = 1 real number to be specified. On the other hand a vector v has three components, which are its projections onto the three coordinate axes Xi, each obtained by vi = v · ei. Therefore, v requires 3¹ scalars to be specified. There are more complex constructs which are called second-order tensors. The three projections of a second-order tensor A onto the coordinate axes are obtained by the inner product of A with the basis vectors as Ai = A · ei. These projections are vectors (not scalars!), and since each vector is determined by three scalars, a second-order tensor requires 3 × 3 = 3² real numbers to be specified. In array form the three components of a tensor A are vectors denoted by
A = (A1  A2  A3) ,    (1.12)

where each vector Ai has three components,

Ai = A1i e1 + A2i e2 + A3i e3 ,    (1.13)

so that in matrix form

    ⎡A11 A12 A13⎤
A = ⎢A21 A22 A23⎥ ,    (1.14)
    ⎣A31 A32 A33⎦

where each vector Ai is written component-wise in one column of the matrix. In analogy to the notation
v = vi ei    (1.15)

for vectors (based on Einstein's summation convention), a tensor can be written as

A = Aij ei ej   or   A = Aij ei ⊗ ej ,    (1.16)

with ⊗ being the dyadic product. Note that we usually use the first notation for brevity.
The dyadic product of two vectors v and w is the second-order tensor A = v ⊗ w whose components are

Aij = vi wj .    (1.17)
Property 13.

1. v ⊗ w ≠ w ⊗ v in general
2. u ⊗ (v + w) = u ⊗ v + u ⊗ w ,  (u + v) ⊗ w = u ⊗ w + v ⊗ w
3. (u ⊗ v) · w = (v · w) u ,  u · (v ⊗ w) = (u · v) w
From equation (1.16), considering the above properties, we obtain

Aij = ei · A · ej ,    (1.18)

which is the analog of equation (1.7). In equation (1.16) the dyads ei ⊗ ej serve as basis tensors; in matrix notation ei ⊗ ej is the 3 × 3 matrix whose entry in row i and column j is 1 and whose other entries are 0, for example

          ⎡1 0 0⎤             ⎡0 0 0⎤
e1 ⊗ e1 = ⎢0 0 0⎥ , e2 ⊗ e3 = ⎢0 0 1⎥ .    (1.19)–(1.21)
          ⎣0 0 0⎦             ⎣0 0 0⎦
Note that not every second-order tensor can be expressed as the dyadic product of two vectors in general; however, every second-order tensor can be written as a linear combination of dyadic products of vectors, as in its most common form given by the decomposition onto a given coordinate system, A = Aij ei ⊗ ej.
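Numerically, the dyadic product is an outer product, and the decomposition A = Aij ei ⊗ ej can be verified directly; a small sketch with NumPy (arbitrary data):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, 5.0])

D = np.outer(v, w)                      # dyad v (x) w with components Dij = vi wj, eq. (1.17)
assert np.isclose(D[0, 2], v[0] * w[2])

# Any second-order tensor is a linear combination of the nine basis dyads:
A = np.array([[1.0, 2.0, 0.0], [0.0, 3.0, 1.0], [4.0, 0.0, 1.0]])
e = np.eye(3)
A_rebuilt = sum(A[i, j] * np.outer(e[i], e[j]) for i in range(3) for j in range(3))
assert np.allclose(A, A_rebuilt)
```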
The idea of the dyadic product can be generalized to more than two vectors: consider

T = v1 ⊗ v2 ⊗ ⋯ ⊗ vn ,    (1.22)

which is a tensor of the nth order. To make sense of it, let's take a look at the inner product of the above tensor with a vector w,

T · w = (v1 ⊗ v2 ⊗ ⋯ ⊗ vn) · w = (vn · w) (v1 ⊗ v2 ⊗ ⋯ ⊗ vn−1) ,    (1.23)
where (vn · w) is of course a scalar and v1 ⊗ ⋯ ⊗ vn−1 a tensor of (n − 1)th order. This means the tensor T maps a first-order tensor (vector) to a tensor of the (n − 1)th order. We re-emphasize that not all tensors can be written as dyadic products of vectors; however, tensors can be decomposed on the basis of a coordinate system. Decomposition of the nth-order tensor T is written as

T = Ti1 i2 ⋯ in ei1 ⊗ ei2 ⊗ ⋯ ⊗ ein ,    (1.24)

where i1, …, in ∈ {1, 2, 3} and ei1 ⊗ ei2 ⊗ ⋯ ⊗ ein is the basis tensor of the nth order. It should be clear that visualization of e.g. third-order tensors requires three-dimensional arrays, and so on.
Remark 14. A vector is a first-order tensor and a scalar is a zeroth-order tensor. The order of a tensor is equal to the number of its indices in index notation. Furthermore, a tensor of order n requires 3ⁿ scalars to be specified. We usually address a second-order tensor by simply calling it a tensor, and the meaning should be clear from the context.
If an mth-order tensor A is multiplied (dyadically) with an nth-order tensor B, we get an (m + n)th-order tensor C such that
C = A ⊗ B = (Ai1⋯im ei1 ⊗ ⋯ ⊗ eim) ⊗ (Bj1⋯jn ej1 ⊗ ⋯ ⊗ ejn)
  = Ai1⋯im Bj1⋯jn ei1 ⊗ ⋯ ⊗ eim ⊗ ej1 ⊗ ⋯ ⊗ ejn
  = Ck1⋯km+n ek1 ⊗ ⋯ ⊗ ekm+n ,    (1.25)
having properties similar to Properties 5 and 6. In fact tensors of order n are elements of a vector space of dimension 3ⁿ.
Exercise 1. Show that the dimension of the vector space consisting of all nth-order tensors is 3ⁿ. (Hint: find 3ⁿ independent tensors that span all elements of the space.)
The inner product of an mth-order tensor A and an nth-order tensor B contracts the innermost pair of indices,

A · B = (Ai1⋯im ei1 ⊗ ⋯ ⊗ eim) · (Bj1⋯jn ej1 ⊗ ⋯ ⊗ ejn)
      = Ai1⋯im−1 k Bk j2⋯jn ei1 ⊗ ⋯ ⊗ eim−1 ⊗ ej2 ⊗ ⋯ ⊗ ejn ,    (1.28)

which is a tensor of (m + n − 2)th order. Note that the innermost index k is common and summed up. A generalization of this idea follows.
Definition 15 (contraction product). For any two tensors A of the mth order and B of the nth order, their r-contraction product for r ≤ min{m, n} yields a tensor C = A ·ʳ B of order (m + n − 2r) such that

Ci1⋯im−r jr+1⋯jn = Ai1⋯im−r k1⋯kr Bkr⋯k1 jr+1⋯jn .    (1.29)

Note that the r innermost indices are common. A frequently used case is the double contraction (r = 2), denoted A : B.
Exercise 2. Write the identity σ = C : ε in index notation, where C is a fourth-order tensor and σ and ε are second-order tensors. Then expand the components σ12 and σ11 for the case of two-dimensional Cartesian coordinates.
Remark 16. The contraction product of two tensors is not symmetric in general, i.e. A ·ʳ B ≠ B ·ʳ A, except when r = 1 and A and B are first-order tensors (vectors), or when A and B have certain symmetry properties.
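Contraction products are conveniently expressed with einsum; the sketch below assumes the innermost-index pairing used above and shows that contraction is not symmetric in general (random example data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

single = np.einsum('ik,kj->ij', A, B)   # A . B: one common (innermost) index
double = np.einsum('ij,ji->', A, B)     # A : B: two common indices, innermost pairing
assert not np.allclose(single, np.einsum('ik,kj->ij', B, A))   # A . B != B . A

# sigma = C : eps with a fourth-order C, cf. Exercise 2 (pairing assumed as above)
C = rng.standard_normal((3, 3, 3, 3))
eps = rng.standard_normal((3, 3))
sigma = np.einsum('ijkl,lk->ij', C, eps)
```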
Kronecker delta
The second-order unity tensor or identity tensor, denoted by I, is defined by the following fundamental property.
Property 17.

I · v = v · I = v .    (1.30)

The representation of the unity tensor in index notation,

δij = 1 if i = j ,  δij = 0 if i ≠ j ,    (1.31)

is called Kronecker's delta. Its expansion for i, j ∈ {1, 2, 3} gives in matrix form

        ⎡1 0 0⎤
(δij) = ⎢0 1 0⎥ .    (1.32)
        ⎣0 0 1⎦

Some important properties of the Kronecker delta are
1. For an orthonormal basis,

ei · ej = δij .    (1.33)

2. The so-called index exchange rule:

ui δij = uj .    (1.34)

3. Trace of the Kronecker delta:

δii = 3 .    (1.35)
Exercise 3.

Permutation symbol

The permutation (Levi-Civita) symbol is defined as

εijk = +1 if (i, j, k) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 if any index is repeated.    (1.36)

Some important properties of the permutation symbol are

1. It is totally antisymmetric under the exchange of any two indices:

εijk = −εjik = −εikj = −εkji = εkij = εjki .    (1.37)

2. The determinant of a matrix A can be written as

det(A) = εijk A1i A2j A3k .    (1.38)
3. The general relationship between the Kronecker delta and the permutation symbol reads

                ⎡δil δim δin⎤
εijk εlmn = det ⎢δjl δjm δjn⎥ ,    (1.39)
                ⎣δkl δkm δkn⎦

and as special cases

εjkl εjmn = δkm δln − δkn δlm ,    (1.40)
εijk εijm = 2 δkm .    (1.41)
Exercise 4.
Definition 18.
Suppose that vector v is decomposed in Cartesian coordinates X1 X2 X3 with bases {e1, e2, e3} (Fig. 1.4),

v = v1 e1 + v2 e2 + v3 e3 .    (1.42)

[Fig. 1.4: Transformation of coordinates.]

Since a vector is completely determined by its components, we should be able to find the decomposition of vector v on a second coordinate system X1′ X2′ X3′ with bases {e1′, e2′, e3′} in terms of {v1, v2, v3}. Using equation (1.7) we have

v1′ = v · e1′ ,  v2′ = v · e2′ ,  v3′ = v · e3′ .    (1.43)
Substituting the decomposition (1.42) into (1.43) gives the transformation rule in index notation,

vj′ = (ej′ · ei) vi .    (1.46)

Then again, introducing the transformation tensor Q with components Qji = ej′ · ei, we can rewrite the transformation rule as

v′ = Q v .    (1.47)
Property 19 (Orthonormal tensor). Transformation Q is geometrically a rotation map with the property

Q^T Q = I   or   Q⁻¹ = Q^T ,    (1.48)

which is called orthonormality.
Transformation of tensors
Having the decomposition of an nth-order tensor A in X1 X2 X3 coordinates as

A = Ai1⋯in ei1 ⊗ ⋯ ⊗ ein ,    (1.49)

we look for its components in another coordinate system X1′ X2′ X3′. Following the same approach as for vectors, using the contraction product (1.29) with the primed basis tensors, the components in the new coordinates follow as

Aj1⋯jn′ = Ai1⋯in (ei1 · ej1′) ⋯ (ein · ejn′) .    (1.50)
For the special case when A is a second-order tensor we can write in matrix notation

A′ = Q A Q^T ,   Qij = ei′ · ej .    (1.51)
Remark 20. The above transformation rules are substantial. They are so fundamental that, as an alternative, we could have defined vectors and tensors based on their transformation properties instead of the geometric and algebraic approach.
Exercise 5. Find the transformation matrix for a rotation by angle θ about the X1 axis in three-dimensional Cartesian coordinates X1 X2 X3.
Exercise 6. For a symmetric second-order tensor A, derive the transformation formula with a transformation given as

        ⎡ cos θ  sin θ  0⎤
(Qij) = ⎢−sin θ  cos θ  0⎥ .
        ⎣   0      0    1⎦
Definition 21. Given a second-order tensor A, every nonzero vector n is called an eigenvector of A if there exists a real number λ such that

A · n = λ n ,    (1.52)

where λ is the corresponding eigenvalue of n.
In general, a tensor multiplied by a vector gives another vector with a different magnitude and direction; interestingly, however, there are vectors on which the tensor acts like a scalar, as laid down by equation (1.52). Now suppose that a coordinate system is chosen so that the X1 axis coincides with n. It is clear that
A · e1 = λ e1 ,    (1.53)

so that in this coordinate system the components Ai1 reduce to

A11 = λ ,  A21 = A31 = 0 .    (1.54)
It is possible to show that for non-degenerate symmetric tensors in R³ there are three orthonormal eigenvectors, which establish an orthonormal coordinate system called principal coordinates. Following equation (1.54), a symmetric tensor in principal coordinates takes the diagonal form

    ⎡λ1  0   0 ⎤
Λ = ⎢ 0  λ2  0 ⎥ ,    (1.55)
    ⎣ 0   0  λ3⎦

where its diagonal elements are the eigenvalues, corresponding to principal directions which are parallel to the eigenvectors. The following theorem generalizes these ideas to n dimensions.
Theorem 22. Let A be a second-order tensor on an n-dimensional vector space with n1, …, nn being its n linearly independent eigenvectors and λ1, …, λn their corresponding eigenvalues. Then A can be written as

A = N Λ N⁻¹ ,    (1.56)

where N is the n × n matrix having the ni's as its columns, N = (n1, …, nn), and Λ is an n × n diagonal matrix with ith diagonal element λi.

Remark 23. In the case of symmetric tensors, equation (1.56) takes the form

A = N Λ N^T ,    (1.57)

because for orthonormal eigenvectors the matrix N is an orthonormal matrix, which is in fact a rotation transformation.

Remark 24. The Λ is the representation of A in principal coordinates. Since it is diagonal, one can write

Λ = Σ_{i=1}^{n} λi ni ⊗ ni .    (1.58)
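For symmetric tensors, equation (1.57) is exactly the decomposition a numerical eigensolver returns; a minimal check with NumPy (arbitrary symmetric data):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # a symmetric second-order tensor

lam, N = np.linalg.eigh(A)             # eigenvalues and orthonormal eigenvectors
Lam = np.diag(lam)                     # the representation in principal coordinates

assert np.allclose(A, N @ Lam @ N.T)   # eq. (1.57)
# Spectral form corresponding to eq. (1.58): a sum of lam_i * ni (x) ni
A_spec = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
assert np.allclose(A, A_spec)
```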
Suppose that we want to derive some relationship or develop a model based on tensorial expressions. If the task is accomplished in principal coordinates it takes much less effort and, at the same time, the results are completely general, i.e. they hold in any other coordinate system. This is often done in material modeling, and in other branches of science as well. As the final step we would like to solve equation (1.52). It can be recast in the form
(A − λ I) · n = 0 .    (1.59)

This equation has nonzero solutions if

det(A − λ I) = 0 ,    (1.60)
which is called the characteristic equation of tensor A. Let us have a look at its expansion,

⎢A11 − λ   A12      A13   ⎥
⎢A21       A22 − λ  A23   ⎥ = 0 ,    (1.61)
⎢A31       A32      A33 − λ⎥

which expands to the cubic equation

−λ³ + I_A λ² − II_A λ + III_A = 0 .    (1.62)
1.12 Invariants
Components of a vector v change from one coordinate system to another; however, its length remains the same in all coordinates because length is a scalar. This is interesting because one can write the length of a vector in terms of its components in any coordinate system as |v| = (vi vi)^{1/2}, which means there is a combination of components that does not change under coordinate transformation, while the components themselves do. Any function of components f(v1, v2, v3) that does not change under coordinate transformation is called an invariant. This independence of the coordinate system makes invariants physically important, and the reason should become clear soon. Higher-order tensors also have invariants. For a second-order tensor A, looking back on equation (1.62), the values λ, λ² and λ³ are scalars and stay unchanged under coordinate transformation; therefore the coefficients of the equation must remain unchanged as well, and they are all invariants of tensor A, denoted by
I_A = A11 + A22 + A33 ,    (1.63)

       ⎢A22 A23⎥   ⎢A11 A13⎥   ⎢A11 A12⎥
II_A = ⎢A32 A33⎥ + ⎢A31 A33⎥ + ⎢A21 A22⎥ ,    (1.64)

        ⎢A11 A12 A13⎥
III_A = ⎢A21 A22 A23⎥ ,    (1.65)
        ⎢A31 A32 A33⎥

where I, II and III are called the first, second and third principal invariants of A. The above formulas look familiar: the first invariant is the trace, the second is the sum of principal minors and the third is the determinant. Of course, we could express the invariants in terms of the eigenvalues of the tensor,

I_A = λ1 + λ2 + λ3 ,    (1.66)
II_A = λ1 λ2 + λ2 λ3 + λ3 λ1 ,    (1.67)
III_A = λ1 λ2 λ3 .    (1.68)
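That the principal invariants are indeed unchanged by a coordinate transformation can be confirmed numerically; a sketch with a random symmetric tensor and an arbitrary rotation:

```python
import numpy as np

def invariants(A):
    I1 = np.trace(A)                                    # eq. (1.63)
    I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))       # sum of principal minors, eq. (1.64)
    I3 = np.linalg.det(A)                               # eq. (1.65)
    return np.array([I1, I2, I3])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A = 0.5 * (A + A.T)                                     # symmetrize

t = 0.7                                                 # arbitrary rotation angle
Q = np.array([[np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(invariants(A), invariants(Q @ A @ Q.T))
```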
In material modeling we often look for a scalar-valued function of a tensor in the form Φ(A). Since the function value is a scalar, it must be invariant under transformation of coordinates. To fulfill this requirement, the function is formulated in terms of the invariants of A; then it will be invariant itself. Therefore the most natural form of such a formulation would be

Φ = Φ(I_A, II_A, III_A) .    (1.69)

This will be used in the sequel when hyper-elastic material models are explained.
Definition 25 (Singular value decomposition). Every second-order tensor A can be decomposed as A = U Σ V^T, where U and V are orthonormal tensors and Σ is diagonal with non-negative diagonal elements, called the singular values of A.

When A is a symmetric tensor, the singular values of A equal the absolute values of its eigenvalues.
Definition 26 (Cross product). The cross product of two vectors v and w is the vector

u = v × w ,  |u| = v w sin θ ,  0 ≤ θ ≤ π ,    (1.70)

whose direction is normal to the plane of v and w according to the right-hand rule.

[Fig. 1.5: Cross product.]

Property (cross product).

1. v × w = −w × v   (anti-symmetry)
2. u × (v + w) = u × v + u × w   (linearity)
Important results
1. In any orthonormal coordinate system, including Cartesian coordinates, it holds that

e1 × e1 = 0 ,   e1 × e2 = e3 ,   e1 × e3 = −e2
e2 × e1 = −e3 ,  e2 × e2 = 0 ,   e2 × e3 = e1
e3 × e1 = e2 ,   e3 × e2 = −e1 ,  e3 × e3 = 0 ,    (1.71)

2. therefore

        ⎢e1 e2 e3⎥
v × w = ⎢v1 v2 v3⎥ ,    (1.72)
        ⎢w1 w2 w3⎥

3. which in index notation reads

v × w = εijk vj wk ei   or   [v × w]i = εijk vj wk .    (1.73)
Exercise 7. Show that the area of the triangle with sides v and w (Fig. 1.6) equals |v × w| / 2.

[Fig. 1.6: Area of a triangle.]

The cross product of a second-order tensor A and a vector v can also be formed; in index notation

A × v = εjmn Aim vn ei ⊗ ej    (1.74)

and

v × A = εimn vm Anj ei ⊗ ej .    (1.75)
Combining the dot and cross products of three vectors gives

u · (v × w) ,    (1.76)

the so-called triple product. Based on equation (1.72) it can be written in component form

              ⎢u1 u2 u3⎥
u · (v × w) = ⎢v1 v2 v3⎥ ,    (1.77)
              ⎢w1 w2 w3⎥

or in index notation

u · (v × w) = εijk ui vj wk .    (1.78)

[Fig. 1.7: Parallelepiped.]
Geometrically, the triple product of the three vectors equals the volume of the parallelepiped with sides u, v and w (Fig. 1.7).
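The index-notation formulas (1.73) and (1.78) can be spelled out with an explicit permutation symbol; a minimal sketch in NumPy:

```python
import numpy as np

# Permutation (Levi-Civita) symbol, eq. (1.36)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[j, i, k] = -1.0   # odd permutations

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
w = np.array([0.0, 0.0, 3.0])

cross = np.einsum('ijk,j,k->i', eps, v, w)       # [v x w]_i = eps_ijk vj wk, eq. (1.73)
assert np.allclose(cross, np.cross(v, w))

triple = np.einsum('ijk,i,j,k->', eps, u, v, w)  # u . (v x w) = eps_ijk ui vj wk, eq. (1.78)
assert np.isclose(triple, np.linalg.det(np.array([u, v, w])))  # parallelepiped volume
```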
Exercise 8.

Exercise 9. For a tensor given as T = I + v ⊗ w, with v and w being orthogonal vectors, i.e. v · w = 0, calculate T², T³, …, Tⁿ and finally e^T. (Hint: e^T = Σ_{n=0}^{∞} (1/n!) Tⁿ.)
Chapter 2
Tensor calculus
So far, vectors and tensors have been considered based on the operations by which we can combine them. In this chapter we study vector and tensor valued functions in three-dimensional space, together with their derivatives and integrals. It is important to keep in mind that we address both tensors and vectors by the term tensor; the reader should already know that a vector is a first-order tensor and a scalar a zeroth-order tensor.
A tensor function A(u) assigns a tensor to each value of a scalar argument u; a familiar example is the position vector x(t) of a moving point as a function of time t. The derivative of a tensor function is defined, as for real-valued functions, by

dA/du = lim_{Δu→0} [A(u + Δu) − A(u)] / Δu .    (2.3)

We assume that all derivatives exist unless otherwise stated. For the case of a vector function A(u) = Ai(u) ei in Cartesian coordinates, the above definition becomes

dA/du = (dAi/du) ei .    (2.4)
Exercise 10. For a given second-order tensor function A(u) = Aij(u) ei ⊗ ej in Cartesian coordinates, write down the derivatives.
Property 27. For tensor functions A(u), B(u), C(u), and a vector function a(u), we have

1. d/du (A · B) = A · dB/du + dA/du · B    (2.5)
2. d/du (A ⊗ B) = A ⊗ dB/du + dA/du ⊗ B    (2.6)
3. d/du [A · (B × C)] = dA/du · (B × C) + A · (dB/du × C) + A · (B × dC/du)    (2.7)
4. a · da/du = a da/du  (with normal-face a denoting the magnitude |a|)    (2.8)
5. a · da/du = 0 iff a = const, i.e. iff the magnitude of a is constant    (2.9)
Exercise 11.

A tensor field A(x) assigns a tensor to each point x in space. Its partial derivatives with respect to the coordinates xi are defined componentwise, or in compact form

∂A/∂xi = lim_{Δxi→0} [A(x + Δxi ei) − A(x)] / Δxi .    (2.11)

In general, dx is not parallel to the coordinate axes. Therefore, based on the chain rule of differentiation,

dA(x1, x2, x3) = (∂A/∂xi) dxi = (∂A/∂x1) dx1 + (∂A/∂x2) dx2 + (∂A/∂x3) dx3 ,    (2.12)

which can be written as

dA = (∂A/∂xi ⊗ ei) · dx .    (2.13)
In the latter formula the gradient of the tensor field, grad(A), appears, which is a tensor of one order higher than A itself. To highlight this fact, we write

grad(A) = ∂A/∂xi ⊗ ei .    (2.14)
A proper choice of notation in mathematics is substantial, and the introduction of operator notation in calculus is no exception. Since linear operators follow algebraic rules somewhat similar to the arithmetic of real numbers, writing relations in operator notation is invaluable. Here, we introduce the so-called nabla operator as

∇ = ei ∂/∂xi = e1 ∂/∂x1 + e2 ∂/∂x2 + e3 ∂/∂x3 ,    (2.15)

so that

grad(A) = A ⊗ ∇ = ∂A/∂xi ⊗ ei .    (2.16)
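As a numerical illustration of (2.13)–(2.16): the gradient of a scalar field (a zeroth-order tensor) is a vector field (a first-order tensor). A sketch with NumPy finite differences, on an example field φ = x1² + x2 x3:

```python
import numpy as np

n = np.linspace(-1.0, 1.0, 101)
X1, X2, X3 = np.meshgrid(n, n, n, indexing='ij')
phi = X1**2 + X2 * X3                     # scalar field

g1, g2, g3 = np.gradient(phi, n, n, n)    # grad(phi) = dphi/dxi ei, eq. (2.14)

i = 75                                    # grid index where x1 = x2 = x3 = 0.5
# Exact gradient is (2 x1, x3, x2) = (1.0, 0.5, 0.5) at that point
assert np.isclose(g1[i, i, i], 1.0, atol=1e-3)
assert np.isclose(g2[i, i, i], 0.5, atol=1e-3)
assert np.isclose(g3[i, i, i], 0.5, atol=1e-3)
```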
The integral of a tensor function is likewise understood componentwise; for instance, in Cartesian coordinates

∫ A(u) du = ( ∫ Aij(u) du ) ei ⊗ ej ,    (2.18)

where Aij is a real-valued function of real numbers. Therefore, integration of a tensor function comes down to integration of its components. In the recent equations the dyadic product is sometimes written explicitly for pedagogical reasons.
[Fig. 2.2: Space curve C from P1 to P2.]
The line integral of a tensor field A along a space curve C running from point P1 to point P2 (Fig. 2.2) is defined as the limit of a sum,

∫_C A · dx = ∫_{P1}^{P2} A · dx = lim_{n→∞} Σ_{i=1}^{n} A(xi) · Δxi ,  with  max_i |Δxi| → 0 .    (2.20)

Integrals of the forms ∫_C A ⊗ dx and ∫_C A × dx are defined analogously.    (2.21)
Property 28.

1. ∫_{P1}^{P2} A · dx = − ∫_{P2}^{P1} A · dx    (2.22)

2. ∫_{P1}^{P2} A · dx = ∫_{P1}^{P3} A · dx + ∫_{P3}^{P2} A · dx ,  where P3 is between P1 and P2 .    (2.23)
Parameterization
A curve in space is a one-dimensional entity which can be specified by a parameterization of the form

x = x(s) ,  s1 ≤ s ≤ s2 ,    (2.24)

which obviously means

∫_C A · dx = ∫_{s1}^{s2} A · (dx/ds) ds .    (2.25)
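Equation (2.25) is also the practical recipe for computing line integrals; a sketch integrating an example vector field along a parameterized curve:

```python
import numpy as np

def A(x):                                    # example field A = grad(x1 * x2)
    return np.array([x[1], x[0], 0.0])

s = np.linspace(0.0, 1.0, 2001)              # curve parameter
x = np.stack([s, s**2, np.zeros_like(s)], axis=1)               # x(s) = (s, s^2, 0)
dxds = np.stack([np.ones_like(s), 2 * s, np.zeros_like(s)], axis=1)

integrand = np.einsum('ij,ij->i', np.array([A(p) for p in x]), dxds)  # A . dx/ds
I = np.trapz(integrand, s)

# Since A is a gradient field, the result is path independent: it equals
# x1*x2 at the end point minus at the start, i.e. 1.0 (cf. Section 2.6)
assert np.isclose(I, 1.0, atol=1e-6)
```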
Suppose the line integral of a tensor field A from a fixed point P1 to an arbitrary point x is independent of the path taken between them. Then it defines the field

φ(x) = ∫_{P1}^{x} A · dx .    (2.26)
It follows that

dφ = A · dx ,    (2.27)

which according to the definition of the gradient (2.13) means

A = grad(φ) = ∇φ .    (2.28)

This argument can be reversed, which leads to the following theorem.

Theorem 29. The line integral ∫_C A · dx is path independent if and only if there exists a field φ(x) such that A = ∇φ.
Exercise 12. Prove the reverse argument of the above theorem. That is, starting from the existence of φ(x) such that A = ∇φ, prove that the line integral is path independent.
Remark 30. Note that in the particular case of C being a closed path, the line integral is path independent if and only if

∮_C A · dx = 0 ,    (2.29)

for an arbitrary closed path C. The circle on the integral sign shows that C is closed. This is another statement of path independence, equivalent to the ones mentioned before.
[Fig. 2.3: Surface integral.]

The surface integral of a tensor field A over a surface S is defined as the limit of a sum over small surface elements ΔSi with unit normals ni,

∫_S A · n dS = lim_{n→∞} Σ_{i=1}^{n} A(xi) · ni ΔSi ,  with  max_i ΔSi → 0 ,    (2.30)
and similarly

∫_S A × n dS = lim_{n→∞} Σ_{i=1}^{n} A(xi) × ni ΔSi .    (2.31)
Note that the surface integral (2.30) is a tensor of one order lower than A, due to the appearance of A · n in the integrand, while the surface integral (2.31) is a tensor of the same order as A, due to the term A × n.
The transformation between the area element dS and its projection dx1 dx2 onto the X1 X2 plane is given by

dS = dx1 dx2 / |n · e3| .    (2.32)
Exercise 13. Calculate the surface integral of the normal component of the vector field v = (1/r²) er over the unit sphere centered at the origin.
For a closed surface S enclosing a volume V, consider the averaged quantities

D = (1/V) ∫_S A · n dS   and   C = (1/V) ∫_S A × n dS ,    (2.33)
where the integrals are divided by the volume V. Now if the domain shrinks, that is V → 0, then these average values reflect the local intensity or concentration of sources and rotations of the tensor field. This is the physical analogue of density, which is the concentration of mass. These local values are called the divergence and curl of the tensor field A, denoted and formally defined by

div(A) = lim_{V→0} (1/V) ∫_S A · n dS ,    (2.34)

curl(A) = lim_{V→0} (1/V) ∫_S A × n dS .    (2.35)

It can be shown that

div(A) = A · ∇ ,   curl(A) = A × ∇ ,    (2.36, 2.37)

in operator notation. The proof is straightforward in Cartesian coordinates and can be found in most calculus books.
Exercise 14. Using a good calculus book, starting from (2.34) and (2.35), derive equations (2.36) and (2.37) in Cartesian coordinates for a vector field u(x) as

div(u) = u · ∇ = ui,i   and   curl(u) = u × ∇ = εkij ui,j ek ,

where the comma denotes partial differentiation, ui,j = ∂ui/∂xj.
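A symbolic check of the Cartesian formulas above; a sketch with SymPy for an example field u = (x1 x2, x2 x3, x3 x1). Note that the sketch uses the common left-applied ∇ × u sign convention; the right-applied u × ∇ of the text differs by a sign:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
u = sp.Matrix([x1 * x2, x2 * x3, x3 * x1])      # example vector field

div_u = sum(sp.diff(u[i], v) for i, v in enumerate((x1, x2, x3)))   # u_i,i
curl_u = sp.Matrix([sp.diff(u[2], x2) - sp.diff(u[1], x3),
                    sp.diff(u[0], x3) - sp.diff(u[2], x1),
                    sp.diff(u[1], x1) - sp.diff(u[0], x2)])

print(div_u)     # -> x1 + x2 + x3
print(curl_u.T)  # -> (-x2, -x3, -x1)
```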
Remark 31. So far the gradient, divergence and curl operators have been applied to their operand from the right-hand side, i.e. grad(A) = A ⊗ ∇, div(A) = A · ∇, curl(A) = A × ∇. This is not a strict requirement of their definition. In fact the direction from which the operator is applied depends on the context. If we exchange the order of the dyadic, inner and outer products in equations (2.16), (2.34) and (2.35) respectively, the direction of application of nabla is reversed, to wit
lim_{V→0} (1/V) ∫_S n · A dS = ∇ · A   and   lim_{V→0} (1/V) ∫_S n × A dS = ∇ × A ,    (2.38)

which are equally valid. However, the need for keeping a consistent convention shall not be overlooked.
Exercise 15.

For a scalar field φ, consider the combination

div(grad(φ)) = ∇ · (∇φ) = φ,kk .    (2.39)

This combination of gradient and divergence operators appears so often that it is given a distinct name, the so-called Laplace operator or Laplacian, which is denoted by

Δ = ∇ · ∇ = ∂k ∂k .    (2.40)
The Laplacian of a field A is sometimes denoted by ∇²A. Laplace's operator is also called the harmonic operator. That is why two consecutive applications of the Laplacian to a field are called the biharmonic operator, denoted by

∇⁴A = ΔΔA = Δ(ΔA) ,    (2.41)

which is expanded in Cartesian coordinates as

∇⁴ = ∂⁴/∂x1⁴ + ∂⁴/∂x2⁴ + ∂⁴/∂x3⁴ + 2 ∂⁴/(∂x1² ∂x2²) + 2 ∂⁴/(∂x2² ∂x3²) + 2 ∂⁴/(∂x3² ∂x1²) .    (2.42)
Gauss' theorem states that for a tensor field A defined over a volume V bounded by a closed surface S,

∫_V A · ∇ dV = ∮_S A · n dS .    (2.43)

This result is also called Green's or the divergence theorem. If we remember the physical interpretation of divergence, then Gauss' theorem means the flux of a tensor field through a closed surface equals the intensity of the sources bounded by the surface, which physically makes perfect sense.
Stokes' theorem relates the integral of a tensor field along a closed path C to a surface integral over any surface S bounded by the path,

∮_C A · dx = ∫_S (A × ∇) · n dS ,    (2.44)

where n is directed such that the path is seen as counter-clockwise when viewed from the n direction, as indicated in the figure.

[Fig. 2.5: Stokes' theorem.]

The physical interpretation of Stokes' theorem is much like that of Gauss' theorem, except that it involves a closed line integral and the enclosed surface, instead of a closed surface integral and the enclosed volume. Namely, the overall rotation of the tensor field along a closed path equals the intensity of the rotations enclosed by the path.
Two useful consequences of the integral theorems are

∮_S (A × ∇) · n dS = ∫_V (A × ∇) · ∇ dV = 0    (2.56)

and

∮_C (∇A) · dx = ∫_S ((∇A) × ∇) · n dS = 0 ,    (2.57)

expressing that the curl of a field is divergence-free and the gradient of a field is curl-free.
Exercise 16. Show that in general (A × ∇) · ∇ = 0 and (∇A) × ∇ = 0.

Theorem (Green's identities). For two scalar fields φ and ψ, the first Green's identity reads

∫_V [φ Δψ + (∇φ) · (∇ψ)] dV = ∮_S (φ ∇ψ) · n dS ,    (2.58)

and the second Green's identity reads

∫_V (φ Δψ − ψ Δφ) dV = ∮_S [φ ∇ψ − ψ ∇φ] · n dS .    (2.59)
In the same manner one can derive

∫_V A ⊗ ∇ dV = ∮_S A ⊗ n dS   and   ∮_C φ dx = ∫_S (∇φ) × n dS ,    (2.60)

together with the reversed forms

∫_V ∇ ⊗ A dV = ∮_S n ⊗ A dS   and   ∮_C φ dx = ∫_S n × (∇φ) dS .    (2.61)
[Fig. 2.6: Curvilinear coordinates, with coordinate surfaces u1 = c1, u2 = c2, u3 = c3 and unit tangent vectors e1, e2, e3.]
Consider general curvilinear coordinates (u1, u2, u3) related to Cartesian coordinates by

x = x(u)   and   u = u(x) ,    (2.62)

where the transformation functions are smooth and one-to-one. Smoothness means being differentiable, and being one-to-one requires the Jacobian determinant to be non-zero:
det ∂(u1, u2, u3)/∂(x1, x2, x3) =

⎢∂u1/∂x1  ∂u1/∂x2  ∂u1/∂x3⎥
⎢∂u2/∂x1  ∂u2/∂x2  ∂u2/∂x3⎥ ≠ 0 .    (2.63)
⎢∂u3/∂x1  ∂u3/∂x2  ∂u3/∂x3⎥
Definition 37 (Jacobian). For a differentiable function f(x) with f : Rⁿ → Rᵐ, its Jacobian is an m × n matrix defined as Jij = ∂fi/∂xj, with i = 1, …, m and j = 1, …, n.

At a point P there are three surfaces described by
u1 = const ,  u2 = const ,  u3 = const    (2.64)
passing through P, called coordinate surfaces. There are also three curves, each being the intersection of two coordinate surfaces; these are called coordinate curves. On each coordinate curve only one of the three coordinates ui varies and the other two are constant. If the position vector is denoted by r, then the vector ∂r/∂ui is tangent to the ui coordinate curve, and the unit tangent vectors (Fig. 2.6) are obtained from
e1 = (∂r/∂u1) / |∂r/∂u1| ,  e2 = (∂r/∂u2) / |∂r/∂u2| ,  e3 = (∂r/∂u3) / |∂r/∂u3| .    (2.65)

Introducing the scale factors hi = |∂r/∂ui|, these relations read

∂r/∂u1 = h1 e1 ,  ∂r/∂u2 = h2 e2 ,  ∂r/∂u3 = h3 e3 .    (2.66)
If e1 , e2 and e3 are mutually orthogonal then we call (u1 , u2 , u3 ) an orthogonal curvilinear coordinate system.
Remark 38. In rectangular coordinates the coordinate surfaces are flat planes and the coordinate curves are straight lines. Also, the scale factors hi are equal to one due to the straight coordinate curves. Furthermore, the unit vectors ei are the same at all points in space. These properties do not generally hold for curvilinear coordinate systems.
2.13 Differentials
In three-dimensional space, there are three types of geometrical objects (other than points) regarding dimension:

1. one-dimensional objects: lines
2. two-dimensional objects: surfaces
3. three-dimensional objects: volumes.

Each type has its own differential element, namely line elements, surface elements and volume elements. We assume that the reader has already dealt with the related ideas in rectangular (Cartesian) coordinates. Now we want to generalize those ideas to curvilinear coordinates.
Line elements
An infinitesimal line segment with an arbitrary direction in space can be projected onto the coordinate axes. Consider the position vector r expressed in a curvilinear coordinate system by r(u1, u2, u3). A line element is the differential of the position vector, obtained by the chain rule and using equation (2.66):

dr = (∂r/∂u1) du1 + (∂r/∂u2) du2 + (∂r/∂u3) du3 = h1 du1 e1 + h2 du2 e2 + h3 du3 e3 ,    (2.67)

where

dr1 = h1 du1 ,  dr2 = h2 du2 ,  dr3 = h3 du3    (2.68)
are the basis line elements. If the position vector sweeps a curve (which is typical for example in kinematics) the arc length element denoted by ds is obtained by
(ds)² = dr · dr = h1² (du1)² + h2² (du2)² + h3² (du3)² .    (2.69)
Note that the above formulas are completely general. For instance in Cartesian coordinates, where hi = 1, they reduce to the familiar forms dr = dx1 e1 + dx2 e2 + dx3 e3 and (ds)² = (dx1)² + (dx2)² + (dx3)².
Area elements
An area element is a vector described by its magnitude and direction,

dS = dS n ,    (2.70)

whose decomposition is obtained by

dS = dS1 e1 + dS2 e2 + dS3 e3 ,    (2.71)

where dSi ei is in fact the projection of dS on the coordinate surface normal to ei. On the other hand, having the coordinate line elements (2.68), the area elements on each coordinate surface are obtained by
dS1 e1 = dr2 e2 × dr3 e3 = h2 h3 du2 du3 e1
dS2 e2 = dr3 e3 × dr1 e1 = h3 h1 du3 du1 e2
dS3 e3 = dr1 e1 × dr2 e2 = h1 h2 du1 du2 e3 ,    (2.72)

which are called basis area elements. Note that an area element is a parallelogram with its sides being line elements, and its area is obtained by the cross product of the line element vectors. Comparing the two equations above gives

dS1 = h2 h3 du2 du3 ,  dS2 = h3 h1 du3 du1 ,  dS3 = h1 h2 du1 du2 .    (2.73)
As the simplest case, in Cartesian coordinates the latter formula gives the familiar forms dS1 = dx2 dx3, dS2 = dx3 dx1 and dS3 = dx1 dx2.
Volume element
A volume element dV is a parallelepiped built on the three coordinate line elements dr1 e1, dr2 e2 and dr3 e3. Since the volume of a parallelepiped is obtained by the triple product of its side vectors, as in equation (1.77), we have

dV = dr1 e1 · (dr2 e2 × dr3 e3) = h1 h2 h3 du1 du2 du3 .    (2.74)
2.14 Differential transformation

Suppose a second curvilinear coordinate system (q1, q2, q3) is related to (u1, u2, u3). By the chain rule,

dui = (∂ui/∂qj) dqj   and   ∂r/∂qj = (∂r/∂ui) (∂ui/∂qj) ,    (2.75)

so that the scale factors of the q system follow as

h′j = |∂r/∂qj| = [ Σ_i hi² (∂ui/∂qj)² ]^{1/2} .    (2.76)
The area and volume elements of the two systems are related through the corresponding Jacobian determinants,

dAq = det( ∂(q1, q2, q3)/∂(u1, u2, u3) ) dAu    (2.77)

and

dVq = det( ∂(q1, q2, q3)/∂(u1, u2, u3) ) dVu .    (2.78)
2.15 Differential operators in curvilinear coordinates

In an orthogonal curvilinear coordinate system with coordinates (u1, u2, u3) and scale factors (h1, h2, h3), the differential operators take, for a scalar field φ and a vector field a, the forms

∇φ = (1/h1) (∂φ/∂u1) e1 + (1/h2) (∂φ/∂u2) e2 + (1/h3) (∂φ/∂u3) e3 ,    (2.79)

∇ · a = [1/(h1 h2 h3)] [ ∂(h2 h3 a1)/∂u1 + ∂(h3 h1 a2)/∂u2 + ∂(h1 h2 a3)/∂u3 ] ,    (2.80)

Δφ = [1/(h1 h2 h3)] [ ∂/∂u1 ((h2 h3/h1) ∂φ/∂u1) + ∂/∂u2 ((h3 h1/h2) ∂φ/∂u2) + ∂/∂u3 ((h1 h2/h3) ∂φ/∂u3) ] ,    (2.81)

                         ⎢h1 e1   h2 e2   h3 e3 ⎥
∇ × a = [1/(h1 h2 h3)]   ⎢∂/∂u1   ∂/∂u2   ∂/∂u3 ⎥ .    (2.82)
                         ⎢h1 a1   h2 a2   h3 a3 ⎥

Let us apply these formulas to the two most commonly used curvilinear coordinate systems, namely cylindrical and spherical coordinates.
[Fig. 2.7: Cylindrical coordinates.]
Cylindrical coordinates
In cylindrical coordinates a point is determined by (r, θ, z) (Fig. 2.7). The transformation to rectangular coordinates is given by

x1 = r cos θ ,  x2 = r sin θ ,  x3 = z ,    (2.83)

with the inverse

r = (x1² + x2²)^{1/2} ,  θ = arctan(x2/x1) ,  z = x3 .    (2.84)

The scale factors are
hr = 1 ,  hθ = r ,  hz = 1 ,    (2.85)

and the differential elements
dr = dr er + r dθ eθ + dz ez
dS = r dθ dz er + dr dz eθ + r dr dθ ez
dV = r dr dθ dz .
Finally, the differential operators can be listed as

∇φ = (∂φ/∂r) er + (1/r)(∂φ/∂θ) eθ + (∂φ/∂z) ez

∇ · a = (1/r) ∂(r ar)/∂r + (1/r) ∂aθ/∂θ + ∂az/∂z

Δφ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z² = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z²

∇ × a = [ (1/r) ∂az/∂θ − ∂aθ/∂z ] er + [ ∂ar/∂z − ∂az/∂r ] eθ + (1/r) [ ∂(r aθ)/∂r − ∂ar/∂θ ] ez .
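The scale factors (2.85), and hence the operators above, can be recovered mechanically from the transformation (2.83); a sketch computing the scale factors symbolically with SymPy:

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x = sp.Matrix([r * sp.cos(th), r * sp.sin(th), z])    # eq. (2.83)

# Scale factors h_i = |dr/du_i|, cf. eq. (2.66)
h = [sp.sqrt(sum(c**2 for c in x.diff(u))).simplify() for u in (r, th, z)]
print(h)   # -> [1, r, 1], i.e. h_r = 1, h_theta = r, h_z = 1, eq. (2.85)
```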
[Fig. 2.8: Spherical coordinates.]
Spherical coordinates
In spherical coordinates a point is determined by (r, θ, φ) (Fig. 2.8). The transformation to rectangular coordinates is given by

x1 = r sin θ cos φ ,  x2 = r sin θ sin φ ,  x3 = r cos θ ,    (2.93)

with the inverse

r = (x1² + x2² + x3²)^{1/2} ,  θ = arccos(x3/r) ,  φ = arctan(x2/x1) .    (2.94)

The scale factors are

hr = 1 ,  hθ = r ,  hφ = r sin θ ,    (2.95)

and the differential elements

dr = dr er + r dθ eθ + r sin θ dφ eφ
dS = r² sin θ dθ dφ er + r sin θ dr dφ eθ + r dr dθ eφ
dV = r² sin θ dr dθ dφ .    (2.96)

Finally, for a scalar field f and a vector field a, the differential operators read

∇f = (∂f/∂r) er + (1/r)(∂f/∂θ) eθ + (1/(r sin θ))(∂f/∂φ) eφ

∇ · a = (1/r²) ∂(r² ar)/∂r + (1/(r sin θ)) ∂(sin θ aθ)/∂θ + (1/(r sin θ)) ∂aφ/∂φ

Δf = ∂²f/∂r² + (2/r) ∂f/∂r + (1/r²) ∂²f/∂θ² + (cos θ/(r² sin θ)) ∂f/∂θ + (1/(r² sin²θ)) ∂²f/∂φ²

∇ × a = (1/(r sin θ)) [ ∂(sin θ aφ)/∂θ − ∂aθ/∂φ ] er + [ (1/(r sin θ)) ∂ar/∂φ − (1/r) ∂(r aφ)/∂r ] eθ + (1/r) [ ∂(r aθ)/∂r − ∂ar/∂θ ] eφ .