

## Index Gymnastics and Einstein Convention

Part (i)
Let $g = \delta_{ij}\, e^i \otimes e^j$ be the metric. Then:

$$x \cdot x = g(x, x) = \delta_{ij}\, e^i(x^k e_k)\, e^j(x^l e_l) = \delta_{ij}\, x^k x^l\, e^i(e_k)\, e^j(e_l) = \delta_{ij}\, x^k x^l\, \delta^i_k\, \delta^j_l = \delta_{ij}\, x^i x^j = x^i x^i$$

Part (ii)
The same analysis proceeds:

$$x \cdot y = g(x, y) = \delta_{ij}\, e^i(x^k e_k)\, e^j(y^l e_l) = \delta_{ij}\, x^k y^l\, e^i(e_k)\, e^j(e_l) = \delta_{ij}\, x^i y^j = x^i y^i$$

Part (iii)
We wish to show that $\delta_{ij}\delta_{jk} = \delta_{ik}$. Explicitly, this is:

$$\delta_{ij}\delta_{jk} = \sum_{j=1}^{3} \delta_{ij}\delta_{jk} = \delta_{i1}\delta_{1k} + \delta_{i2}\delta_{2k} + \delta_{i3}\delta_{3k}$$

That this is true is directly verified on a case-by-case basis: only the $j = i$ term can survive, so if $i = k$ exactly one term survives and equals $1$, while if $i \neq k$ every term vanishes and the sum is $0$. In either case the sum reproduces $\delta_{ik}$.

Part (iv)
We wish to show that $\delta_{ij}\, a_j = a_i$. Explicitly, we have:

$$\delta_{ij}\, a_j = a_1 \delta_{i1} + a_2 \delta_{i2} + a_3 \delta_{i3}$$

Again, we can proceed on a case-by-case basis, and we will see that only the $j = i$ term of the summation survives; so

$$\delta_{ij}\, a_j = a_i$$

Part (v)
We wish to show that $\delta_{ii} = 3$ (in 3 dimensions). Explicitly, this is the statement that:

$$\delta_{ii} = \sum_{i=1}^{3} \delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 1 + 1 + 1 = 3$$

In general, we have that $\delta_{ii} = N$ for an $N$-dimensional space.

Part (a)
We wish to show that:

$$(A_i B_j)(C_i D_j) = (A_i C_i)(B_j D_j)$$

This can be verified directly:

$$A_i B_j C_i D_j = \sum_{i=1}^{3} \sum_{j=1}^{3} A_i B_j C_i D_j = \left(\sum_{i=1}^{3} A_i C_i\right)\left(\sum_{j=1}^{3} B_j D_j\right) = (A_i C_i)(B_j D_j)$$

Part (b)
If $A_{ij} = A_{ji}$ (symmetric) and $B_{ij} = -B_{ji}$ (antisymmetric), we wish to show that $A_{ij} B_{ij} = 0$. We can proceed case by case. Antisymmetry forces the diagonal entries $B_{11} = B_{22} = B_{33} = 0$, so the diagonal terms of the sum vanish. The off-diagonal terms pair up: for instance, $A_{12} B_{12} + A_{21} B_{21}$ (no summation) is equal to $A_{12} B_{12} - A_{12} B_{12} = 0$, and likewise for the pairs $A_{13} B_{13} + A_{31} B_{31}$ and $A_{23} B_{23} + A_{32} B_{32}$. Hence $A_{ij} B_{ij} = 0$.

Alternatively, we can simply write that:

$$A_{ij} B_{ij} = A_{ji} B_{ji} \;\; \text{(by changing dummy indices)} \;=\; -A_{ij} B_{ij} \;\; \text{(by symmetry / antisymmetry)}$$

So we end up with:

$$A_{ij} B_{ij} = -A_{ij} B_{ij}$$

This can be only true if

$$A_{ij} B_{ij} = 0.$$
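As a quick numerical sanity check (not part of the original solution), the identities above can be verified in a few lines of pure Python; the helper names `delta`, `a`, `A`, `B`, and `contraction` are ours, chosen for illustration.

```python
# Sanity checks for the Kronecker-delta identities and the
# symmetric-times-antisymmetric contraction (illustrative sketch).

def delta(i, j):
    """Kronecker delta."""
    return 1 if i == j else 0

# Part (v): delta_ii = 3 in 3 dimensions (sum over the repeated index)
assert sum(delta(i, i) for i in range(3)) == 3

# Part (iv): delta_ij a_j = a_i
a = [2.0, -1.0, 5.0]
for i in range(3):
    assert sum(delta(i, j) * a[j] for j in range(3)) == a[i]

# Part (b): A symmetric, B antisymmetric => A_ij B_ij = 0
A = [[1, 2, 3],
     [2, 4, 5],
     [3, 5, 6]]          # A_ij = A_ji
B = [[0, 7, -2],
     [-7, 0, 4],
     [2, -4, 0]]         # B_ij = -B_ji
contraction = sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))
assert contraction == 0
print("index-gymnastics checks passed")
```

Each assertion is just the corresponding summation written out explicitly, so a failure would pinpoint which identity was misstated.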

## Antisymmetry

Part (a)
Consider that in 3 dimensions, the even permutations of $\{1, 2, 3\}$ are:

$$(1, 2, 3),\; (2, 3, 1),\; (3, 1, 2)$$

while the odd permutations are:

$$(1, 3, 2),\; (3, 2, 1),\; (2, 1, 3)$$

Therefore we have that $\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1$ and $\epsilon_{132} = \epsilon_{321} = \epsilon_{213} = -1$.

In 4 dimensions, the even permutations of $\{1, 2, 3, 4\}$ are:

$$(1,2,3,4),\, (1,3,4,2),\, (1,4,2,3),\, (2,1,4,3),\, (2,3,1,4),\, (2,4,3,1),\, (3,1,2,4),\, (3,2,4,1),\, (3,4,1,2),\, (4,1,3,2),\, (4,2,1,3),\, (4,3,2,1)$$

while the odd permutations are:

$$(1,2,4,3),\, (1,3,2,4),\, (1,4,3,2),\, (2,1,3,4),\, (2,3,4,1),\, (2,4,1,3),\, (3,1,4,2),\, (3,2,1,4),\, (3,4,2,1),\, (4,1,2,3),\, (4,2,3,1),\, (4,3,1,2)$$

In this case then, it is instead that $\epsilon_{1234} = 1$ but $\epsilon_{2341} = -1$: a cyclic shift of four elements is an odd permutation.

Part (b)
We wish to derive the general expression:

$$\epsilon_{ijk}\epsilon_{lmn} = \det \begin{bmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \end{bmatrix}$$

Consider that $\epsilon_{ijk}$ is effectively defined as:

$$\epsilon_{ijk} = \operatorname{sgn}(\pi)$$

where $\pi$ is the permutation that takes $(1, 2, 3)$ to $(i, j, k)$. Since $\epsilon_{ijk}$ is the sign of the permutation from $(1, 2, 3)$ to $(i, j, k)$ and $\epsilon_{lmn}$ is the sign of the permutation from $(1, 2, 3)$ to $(l, m, n)$, then effectively $\epsilon_{ijk}\epsilon_{lmn}$ is the sign of the permutation $\pi'$ that takes $(i, j, k)$ to $(l, m, n)$. As such, we can define:

$$\epsilon_{ijk}\epsilon_{lmn} = \operatorname{sgn}(\pi')$$

Now, we can artificially extend the definition as:

$$\epsilon_{ijk}\epsilon_{lmn} = \sum_{\pi \in S_3} \operatorname{sgn}(\pi)\, \delta_{l\pi(i)}\, \delta_{m\pi(j)}\, \delta_{n\pi(k)}$$

since if $\pi \neq \pi'$, then $\delta_{l\pi(i)}\, \delta_{m\pi(j)}\, \delta_{n\pi(k)} = 0$, so only the $\pi = \pi'$ term survives. But this is the very definition of the determinant $\det(p) = \sum_{a,b,c} \epsilon_{abc}\, p_{1a} p_{2b} p_{3c}$; if we make the identification $p_{1a} = \delta_{l\pi(i)}$, $p_{2b} = \delta_{m\pi(j)}$, $p_{3c} = \delta_{n\pi(k)}$, then we have:

$$\epsilon_{ijk}\epsilon_{lmn} = \det \begin{bmatrix} \delta_{li} & \delta_{mi} & \delta_{ni} \\ \delta_{lj} & \delta_{mj} & \delta_{nj} \\ \delta_{lk} & \delta_{mk} & \delta_{nk} \end{bmatrix}$$

but since the determinant of a matrix and its transpose are the same, we have that:

$$\epsilon_{ijk}\epsilon_{lmn} = \det \begin{bmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \end{bmatrix}$$

Part (c)
We wish to show the contraction formula:

$$\epsilon_{ajk}\epsilon_{amn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}$$

Consider the expression from part (b), replacing $i \to a$, $l \to a$, and summing over $a$:

$$\epsilon_{ajk}\epsilon_{amn} = \det \begin{bmatrix} \delta_{aa} & \delta_{am} & \delta_{an} \\ \delta_{ja} & \delta_{jm} & \delta_{jn} \\ \delta_{ka} & \delta_{km} & \delta_{kn} \end{bmatrix}$$

Expanding this gives us:

$$\epsilon_{ajk}\epsilon_{amn} = \delta_{aa}\,(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}) + \delta_{am}\,(\delta_{jn}\delta_{ka} - \delta_{ja}\delta_{kn}) + \delta_{an}\,(\delta_{ja}\delta_{km} - \delta_{jm}\delta_{ka})$$

$$= 3\,(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}) + (\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}) + (\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}) = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}$$

Part (d)
Proceeding in a manner similar to part (b), we will have that:

$$\epsilon_{ijkl}\epsilon_{abcd} = \det \begin{bmatrix} \delta_{ia} & \delta_{ib} & \delta_{ic} & \delta_{id} \\ \delta_{ja} & \delta_{jb} & \delta_{jc} & \delta_{jd} \\ \delta_{ka} & \delta_{kb} & \delta_{kc} & \delta_{kd} \\ \delta_{la} & \delta_{lb} & \delta_{lc} & \delta_{ld} \end{bmatrix}$$

Expanding along the first row:

$$\epsilon_{ijkl}\epsilon_{abcd} = \delta_{ia} \det \begin{bmatrix} \delta_{jb} & \delta_{jc} & \delta_{jd} \\ \delta_{kb} & \delta_{kc} & \delta_{kd} \\ \delta_{lb} & \delta_{lc} & \delta_{ld} \end{bmatrix} - \delta_{ib} \det \begin{bmatrix} \delta_{ja} & \delta_{jc} & \delta_{jd} \\ \delta_{ka} & \delta_{kc} & \delta_{kd} \\ \delta_{la} & \delta_{lc} & \delta_{ld} \end{bmatrix} + \delta_{ic} \det \begin{bmatrix} \delta_{ja} & \delta_{jb} & \delta_{jd} \\ \delta_{ka} & \delta_{kb} & \delta_{kd} \\ \delta_{la} & \delta_{lb} & \delta_{ld} \end{bmatrix} - \delta_{id} \det \begin{bmatrix} \delta_{ja} & \delta_{jb} & \delta_{jc} \\ \delta_{ka} & \delta_{kb} & \delta_{kc} \\ \delta_{la} & \delta_{lb} & \delta_{lc} \end{bmatrix}$$

By part (b), each $3 \times 3$ determinant is itself a product of epsilons, so:

$$\epsilon_{ijkl}\epsilon_{abcd} = \delta_{ia}\,\epsilon_{jkl}\epsilon_{bcd} - \delta_{ib}\,\epsilon_{jkl}\epsilon_{acd} + \delta_{ic}\,\epsilon_{jkl}\epsilon_{abd} - \delta_{id}\,\epsilon_{jkl}\epsilon_{abc}$$

Taking $i \to s$, $a \to s$, and contracting:

$$\epsilon_{sjkl}\epsilon_{sbcd} = \delta_{ss}\,\epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} + \epsilon_{jkl}\epsilon_{cbd} - \epsilon_{jkl}\epsilon_{dbc} = 4\,\epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} - \epsilon_{jkl}\epsilon_{bcd} = \epsilon_{jkl}\epsilon_{bcd}$$

where we used $\epsilon_{cbd} = -\epsilon_{bcd}$ and $\epsilon_{dbc} = \epsilon_{bcd}$.
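The epsilon-delta machinery above is easy to check by brute force. The sketch below (our own illustration; the helpers `eps` and `delta` are not from the text) builds the Levi-Civita symbol from inversion counting and verifies the part (c) contraction over all index values, plus the 4-dimensional sign facts from part (a).

```python
# Brute-force verification of the epsilon-delta identities.

def eps(*idx):
    """Levi-Civita symbol: 0 on repeated indices, else the permutation sign
    computed by counting inversions."""
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0
    sign = 1
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if idx[a] > idx[b]:
                sign = -sign
    return sign

def delta(i, j):
    return 1 if i == j else 0

# Part (c): eps_ajk eps_amn = d_jm d_kn - d_jn d_km  (summed over a)
for j in range(3):
    for k in range(3):
        for m in range(3):
            for n in range(3):
                lhs = sum(eps(a, j, k) * eps(a, m, n) for a in range(3))
                rhs = delta(j, m) * delta(k, n) - delta(j, n) * delta(k, m)
                assert lhs == rhs

# Part (a), 4 dimensions: eps_1234 = 1, but a cyclic shift is odd
assert eps(0, 1, 2, 3) == 1
assert eps(1, 2, 3, 0) == -1
print("epsilon-delta checks passed")
```

The exhaustive loop over all $3^4$ index combinations is cheap here and catches any sign error in the contraction formula.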

## Vector Products

Part (i)
We wish to show that for vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, we have:

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b})$$

Note that this is really a consequence of the preservation of $\epsilon_{ijk}$ under cyclic permutations. Consider that $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$ is given as:

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \epsilon_{ijk}\, a_i b_j c_k$$

and likewise:

$$\mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \epsilon_{ijk}\, b_i c_j a_k = \epsilon_{jki}\, a_i b_j c_k = \epsilon_{ijk}\, a_i b_j c_k$$

where we first relabelled the dummy indices and then used the cyclic invariance $\epsilon_{jki} = \epsilon_{ijk}$. The same argument gives $\mathbf{c} \cdot (\mathbf{a} \times \mathbf{b})$.

Part (ii)
We wish to show that

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}$$

This is:

$$[\mathbf{a} \times (\mathbf{b} \times \mathbf{c})]_i = \epsilon_{ijk}\, a_j\, (\mathbf{b} \times \mathbf{c})_k = \epsilon_{ijk}\epsilon_{kab}\, a_j b_a c_b = (\delta_{ia}\delta_{jb} - \delta_{ib}\delta_{ja})\, a_j b_a c_b = a_j b_i c_j - a_j b_j c_i = (\mathbf{a} \cdot \mathbf{c})\, b_i - (\mathbf{a} \cdot \mathbf{b})\, c_i$$

Part (iii)
We wish to show that

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$

This can be obtained from parts (i) and (ii). Using (i), we have:

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = \mathbf{c} \cdot (\mathbf{d} \times (\mathbf{a} \times \mathbf{b}))$$

and from part (ii), we have that:

$$\mathbf{c} \cdot (\mathbf{d} \times (\mathbf{a} \times \mathbf{b})) = \mathbf{c} \cdot \big((\mathbf{d} \cdot \mathbf{b})\,\mathbf{a} - (\mathbf{d} \cdot \mathbf{a})\,\mathbf{b}\big) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})$$

Part (iv)
We wish to show that (i) yields the spherical sine law, and that (iii) yields the spherical cosine law.

First we show (i). The spherical sine law is the statement that:

$$\frac{\sin A}{\sin a} = \frac{\sin B}{\sin b} = \frac{\sin C}{\sin c}$$

where $a, b, c$ are the lengths of the sides opposite to angles $A, B, C$, respectively. Let $\hat{\mathbf{a}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}$ be unit vectors at the origin pointing to the vertices of the triangle, such that $A$ is the angle at the endpoint of $\hat{\mathbf{a}}$, $B$ at the endpoint of $\hat{\mathbf{b}}$, and $C$ at the endpoint of $\hat{\mathbf{c}}$. Then $a$ is the angle between $\hat{\mathbf{b}}$ and $\hat{\mathbf{c}}$, $b$ between $\hat{\mathbf{a}}$ and $\hat{\mathbf{c}}$, and $c$ between $\hat{\mathbf{a}}$ and $\hat{\mathbf{b}}$.

Now, suppose we can show that:

$$\hat{\mathbf{a}} \cdot \big((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})\big) = \hat{\mathbf{a}} \cdot (\hat{\mathbf{b}} \times \hat{\mathbf{c}})$$

Then we will have solved our problem; because by (i), $\hat{\mathbf{a}} \cdot (\hat{\mathbf{b}} \times \hat{\mathbf{c}})$ is invariant under cyclic permutations of $\hat{\mathbf{a}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}$, and the left-hand side will turn out to be $\sin b \sin c \sin A$, whose cyclic permutations must then all be equal.

Now, the quantity $\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}}))$ is given as $|\hat{\mathbf{a}}|\, |(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})| \cos\theta$, where $\theta$ is the angle between the vectors $\hat{\mathbf{a}}$ and $(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})$; however, by their construction, these two vectors are necessarily colinear, so that $\cos\theta = 1$; and since $\hat{\mathbf{a}}$ is a unit vector, $|\hat{\mathbf{a}}| = 1$. Therefore:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = |(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})|$$

Using the fact that $|\mathbf{u} \times \mathbf{v}| = |\mathbf{u}|\,|\mathbf{v}| \sin\theta_{uv}$, we obtain:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = |\hat{\mathbf{a}} \times \hat{\mathbf{b}}|\,|\hat{\mathbf{a}} \times \hat{\mathbf{c}}|\, \sin\theta_{\hat{\mathbf{a}} \times \hat{\mathbf{b}},\, \hat{\mathbf{a}} \times \hat{\mathbf{c}}}$$

Note again, however, that by construction the angle between $\hat{\mathbf{a}} \times \hat{\mathbf{b}}$ and $\hat{\mathbf{a}} \times \hat{\mathbf{c}}$ is the angle $A$ between the two planes meeting along $\hat{\mathbf{a}}$; and $|\hat{\mathbf{a}} \times \hat{\mathbf{b}}| = \sin c$, $|\hat{\mathbf{a}} \times \hat{\mathbf{c}}| = \sin b$. Therefore:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = \sin b \sin c \sin A$$

To actually show that $\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = \hat{\mathbf{a}} \cdot (\hat{\mathbf{b}} \times \hat{\mathbf{c}})$, we use part (ii):

$$(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}}) = \big((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{c}}\big)\, \hat{\mathbf{a}} - \big((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{a}}\big)\, \hat{\mathbf{c}}$$

so that:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = \big((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{c}}\big)\, |\hat{\mathbf{a}}|^2 - \big((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{a}}\big)\, (\hat{\mathbf{a}} \cdot \hat{\mathbf{c}})$$

Note however that $\hat{\mathbf{a}} \times \hat{\mathbf{b}}$ is perpendicular to $\hat{\mathbf{a}}$, so $(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{a}} = 0$; and $\hat{\mathbf{a}}$ is a unit vector, $|\hat{\mathbf{a}}|^2 = 1$. Therefore we obtain:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = (\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot \hat{\mathbf{c}} = \hat{\mathbf{c}} \cdot (\hat{\mathbf{a}} \times \hat{\mathbf{b}})$$

Using (i), we have:

$$\hat{\mathbf{a}} \cdot ((\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \times (\hat{\mathbf{a}} \times \hat{\mathbf{c}})) = \hat{\mathbf{a}} \cdot (\hat{\mathbf{b}} \times \hat{\mathbf{c}})$$

Finally, cyclically permuting the variables $\hat{\mathbf{a}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}$ (and repeating the same procedure) gives us:

$$\sin b \sin c \sin A = \sin c \sin a \sin B = \sin a \sin b \sin C$$

Now, dividing everything by $\sin a \sin b \sin c$ gives us:

$$\frac{\sin A}{\sin a} = \frac{\sin B}{\sin b} = \frac{\sin C}{\sin c}$$

This is the spherical sine law.

Now the cosine law. Using (iii), we have that:

$$(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot (\hat{\mathbf{c}} \times \hat{\mathbf{d}}) = (\hat{\mathbf{a}} \cdot \hat{\mathbf{c}})(\hat{\mathbf{b}} \cdot \hat{\mathbf{d}}) - (\hat{\mathbf{a}} \cdot \hat{\mathbf{d}})(\hat{\mathbf{b}} \cdot \hat{\mathbf{c}})$$

Taking $\hat{\mathbf{d}} = \hat{\mathbf{b}}$, we have:

$$(\hat{\mathbf{a}} \times \hat{\mathbf{b}}) \cdot (\hat{\mathbf{c}} \times \hat{\mathbf{b}}) = (\hat{\mathbf{a}} \cdot \hat{\mathbf{c}})(\hat{\mathbf{b}} \cdot \hat{\mathbf{b}}) - (\hat{\mathbf{a}} \cdot \hat{\mathbf{b}})(\hat{\mathbf{b}} \cdot \hat{\mathbf{c}})$$

Explicitly, this is:

$$|\hat{\mathbf{a}} \times \hat{\mathbf{b}}|\,|\hat{\mathbf{c}} \times \hat{\mathbf{b}}|\, \cos\theta_{\hat{\mathbf{a}} \times \hat{\mathbf{b}},\, \hat{\mathbf{c}} \times \hat{\mathbf{b}}} = \cos b - \cos c \cos a$$

Again, by our construction we have that the angle between $\hat{\mathbf{a}} \times \hat{\mathbf{b}}$ and $\hat{\mathbf{c}} \times \hat{\mathbf{b}}$ is $B$; and since $|\hat{\mathbf{a}} \times \hat{\mathbf{b}}| = \sin c$, $|\hat{\mathbf{c}} \times \hat{\mathbf{b}}| = \sin a$, we have:

$$\sin c \sin a \cos B = \cos b - \cos c \cos a$$

Rearranging terms gives us:

$$\cos b = \cos c \cos a + \sin c \sin a \cos B$$

This is the spherical cosine law.
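Parts (i) through (iii) are all finite algebraic identities, so they can be spot-checked on random vectors. The following sketch (our own illustration; `dot`, `cross`, and the random test vectors are not part of the original solution) does exactly that.

```python
# Numerical spot-checks of the vector-product identities (parts (i)-(iii)).
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

random.seed(0)
rv = lambda: [random.uniform(-1, 1) for _ in range(3)]
a, b, c, d = rv(), rv(), rv(), rv()

# (i) cyclic invariance of the scalar triple product
assert abs(dot(a, cross(b, c)) - dot(b, cross(c, a))) < 1e-12
assert abs(dot(a, cross(b, c)) - dot(c, cross(a, b))) < 1e-12

# (ii) a x (b x c) = (a.c) b - (a.b) c
lhs2 = cross(a, cross(b, c))
rhs2 = [dot(a, c) * b[i] - dot(a, b) * c[i] for i in range(3)]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs2, rhs2))

# (iii) (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
lhs3 = dot(cross(a, b), cross(c, d))
rhs3 = dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)
assert abs(lhs3 - rhs3) < 1e-12
print("vector-product identities verified")
```

Since the identities are polynomial in the components, passing on a generic random sample is strong evidence that the signs in the derivation are right.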

## Bernoulli and Vector Products

Given Euler's equation for fluid motion:

$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\nabla h$$

we wish to show that:

$$(\mathbf{v} \cdot \nabla)\mathbf{v} = \boldsymbol{\omega} \times \mathbf{v} + \nabla\left(\tfrac{1}{2}v^2\right)$$

where $\boldsymbol{\omega} = \nabla \times \mathbf{v}$ is identified as the vorticity. In Cartesian coordinates, this is:

$$v^s \partial_s v^i = -\epsilon_{ijk}\, v^j\, \epsilon_{kab}\, \partial_a v^b + \partial_i\left(\tfrac{1}{2}v^2\right)$$

On the RHS, we have that:

$$-\epsilon_{ijk}\epsilon_{kab}\, v^j \partial_a v^b = -(\delta_{ia}\delta_{jb} - \delta_{ib}\delta_{ja})\, v^j \partial_a v^b = -v^j \partial_i v^j + v^j \partial_j v^i$$

Note however that:

$$\partial_i\left(\tfrac{1}{2}v^2\right) = \partial_i\left(\tfrac{1}{2}v^s v^s\right) = \tfrac{1}{2}\, v^s\, \partial_i v^s + \tfrac{1}{2}\, (\partial_i v^s)\, v^s = v^s \partial_i v^s$$

Therefore:

$$\text{RHS} = -v^s \partial_i v^s + v^s \partial_s v^i + v^s \partial_i v^s = v^s \partial_s v^i$$

so we have proven our claim. Therefore Euler's equation becomes:

$$\frac{\partial \mathbf{v}}{\partial t} + \boldsymbol{\omega} \times \mathbf{v} + \nabla\left(\tfrac{1}{2}v^2 + h\right) = 0$$

As such, if $\partial \mathbf{v} / \partial t = 0$ (steady flow), then we are left with:

$$\boldsymbol{\omega} \times \mathbf{v} = -\nabla\left(\tfrac{1}{2}v^2 + h\right)$$

Along a streamline $\mathbf{r}(t)$, we have $\mathbf{v} = d\mathbf{r}/dt$. As such, we have:

$$\frac{d}{dt}\left(\tfrac{1}{2}v^2 + h\right) = \frac{d\mathbf{r}}{dt} \cdot \nabla\left(\tfrac{1}{2}v^2 + h\right) = -\mathbf{v} \cdot (\boldsymbol{\omega} \times \mathbf{v}) = 0$$

since $\boldsymbol{\omega} \times \mathbf{v}$ is perpendicular to $\mathbf{v}$, so that $\mathbf{v} \cdot (\boldsymbol{\omega} \times \mathbf{v}) = 0$. This gives us that $\tfrac{1}{2}v^2 + h$ is constant along a streamline. Therefore, we must have:

$$\tfrac{1}{2}v^2 + h = \text{const}$$
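The identity $(\mathbf{v} \cdot \nabla)\mathbf{v} = \boldsymbol{\omega} \times \mathbf{v} + \nabla(\tfrac{1}{2}v^2)$ can be checked without any numerical differentiation by choosing a linear velocity field $v_i = M_{ij} x_j$, for which all derivatives are exact: $\partial_a v_b = M_{ba}$. The sketch below is our own illustration; the matrix `M` and point `x` are arbitrary choices, not from the text.

```python
# Check of (v.grad)v = omega x v + grad(v^2/2) for a linear velocity
# field v_i = M_ij x_j, where d_a v_b = M[b][a] exactly.

M = [[0.3, -1.2, 0.7],
     [2.0, 0.5, -0.4],
     [-0.9, 1.1, 0.2]]
x = [1.0, -2.0, 0.5]

v = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def cross(u, w):
    return [u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0]]

# vorticity omega = curl v, e.g. omega_1 = d_2 v_3 - d_3 v_2 (0-indexed below)
omega = [M[2][1] - M[1][2],
         M[0][2] - M[2][0],
         M[1][0] - M[0][1]]

# LHS: (v.grad)v_i = v_s d_s v_i = sum_s v_s M[i][s]
lhs = [sum(v[s] * M[i][s] for s in range(3)) for i in range(3)]

# RHS: (omega x v)_i + d_i(v^2/2), with d_i(v^2/2) = v_s d_i v_s = sum_s v_s M[s][i]
wxv = cross(omega, v)
grad_half_v2 = [sum(v[s] * M[s][i] for s in range(3)) for i in range(3)]
rhs = [wxv[i] + grad_half_v2[i] for i in range(3)]

assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
print("(v.grad)v = omega x v + grad(v^2/2) verified")
```

Using a linear field sidesteps finite-difference error entirely, so any mismatch would have to come from a sign error in the identity itself.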

## Antisymmetry and Determinants

Part (a)
We begin with the classical definition of the determinant of $A$, an $n \times n$ matrix:

$$\det(A) = \epsilon_{i_1 i_2 \ldots i_n}\, A_{1 i_1} A_{2 i_2} \cdots A_{n i_n}$$

We wish to show that:

$$\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_1} A_{i_2 j_2} \cdots A_{i_n j_n}$$

We begin by using the definition of $\det(A)$ to obtain:

$$\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_1 \ldots j_n}\, A_{1 j_1} \cdots A_{n j_n}$$

Now, since the first index of each factor $A_{kl}$ in the product $A_{1 j_1} \cdots A_{n j_n}$ runs in the order $1, \ldots, n$, we can rearrange the product so that the first indices go in the order $i_1, \ldots, i_n$:

$$\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}$$

Reordering the indices of the second epsilon into the order $j_{i_1} \ldots j_{i_n}$ comes at the cost of a permutation sign:

$$\epsilon_{i_1 \ldots i_n} \det(A) = \operatorname{sgn}(\pi)\, \epsilon_{i_1 \ldots i_n}\, \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}$$

where $\pi$ is the permutation $1 \ldots n \to i_1 \ldots i_n$. However, this sign is exactly $\epsilon_{i_1 \ldots i_n}$; that is, $\operatorname{sgn}(\pi) = \epsilon_{i_1 \ldots i_n}$. So this gives us:

$$\epsilon_{i_1 \ldots i_n} \det(A) = (\operatorname{sgn}(\pi))^2\, \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}} = \epsilon_{j_{i_1} \ldots j_{i_n}}\, A_{i_1 j_{i_1}} \cdots A_{i_n j_{i_n}}$$

Finally, the $j_{i_k}$ are just dummy indices, so we may as well relabel $j_{i_k} \to j_k$:

$$\epsilon_{i_1 \ldots i_n} \det(A) = \epsilon_{j_1 \ldots j_n}\, A_{i_1 j_1} \cdots A_{i_n j_n}$$

Let us take, say, $n = 4$ and $ijkl = 4132$. Then:

$$\epsilon_{4132} \det(A) = \epsilon_{4132}\, \epsilon_{abcd}\, A_{1a} A_{2b} A_{3c} A_{4d} = \epsilon_{4132}\, \epsilon_{abcd}\, A_{4d} A_{1a} A_{3c} A_{2b} = \operatorname{sgn}(\pi)\, \epsilon_{4132}\, \epsilon_{dacb}\, A_{4d} A_{1a} A_{3c} A_{2b}$$

where $\pi: abcd \to dacb$; equivalently, this is $\pi: 1234 \to 4132$. So $\operatorname{sgn}(\pi) = \epsilon_{4132}$, and at the end of the day we have:

$$\epsilon_{4132} \det(A) = \epsilon_{dacb}\, A_{4d} A_{1a} A_{3c} A_{2b}$$

And then we can relabel the dummy indices as we wish.

From this, we now wish to show that $\det(A)\det(B) = \det(AB)$. We have:

$$\det(A)\det(B) = \det(B)\det(A) = \epsilon_{j_1 \ldots j_n}\, A_{1 j_1} \cdots A_{n j_n}\, \det(B) = A_{1 j_1} \cdots A_{n j_n}\, \epsilon_{i_1 \ldots i_n}\, B_{j_1 i_1} \cdots B_{j_n i_n}$$

using the result above (applied to $B$) in the last step. But:

$$\epsilon_{i_1 \ldots i_n}\, A_{1 j_1} B_{j_1 i_1} \cdots A_{n j_n} B_{j_n i_n} = \epsilon_{i_1 \ldots i_n}\, (AB)_{1 i_1} \cdots (AB)_{n i_n} = \det(AB)$$

Therefore $\det(A)\det(B) = \det(AB)$.

Part (b)
(i) Let $V$ be an $n$-dimensional vector space, and let $\omega: V^n \to \mathbb{C}$ be a completely antisymmetric multilinear form. First, we wish to show that, up to a multiplicative constant, there is only one such map; that is, the space of all such forms is one-dimensional.

Let $\{e_i\}_{i=1}^{n}$ be a basis for $V$; then for $x \in V$, we have $x = x^i e_i$. Consider then $\omega(x_{(1)}, x_{(2)}, \ldots, x_{(n)})$. We can rewrite this as:

$$\omega(x_{(1)}, \ldots, x_{(n)}) = \omega\big(x_{(1)}^{i_1} e_{i_1}, \ldots, x_{(n)}^{i_n} e_{i_n}\big) = \sum_{i_1 \ldots i_n} x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \omega(e_{i_1}, \ldots, e_{i_n})$$

However, due to the complete antisymmetry of $\omega$, any term with a repeated index vanishes, so we need not sum over all indices $i_1 \ldots i_n$ but only over those that are permutations of $1, \ldots, n$. Then we can equally write:

$$\omega(x_{(1)}, \ldots, x_{(n)}) = \sum_{(i_1 \ldots i_n) \in S_n} x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \omega(e_{i_1}, \ldots, e_{i_n})$$

For such a permutation, antisymmetry lets us reorder the arguments $e_{i_1}, \ldots, e_{i_n}$ into $e_1, \ldots, e_n$ at the cost of the sign of the permutation, which is exactly $\epsilon_{i_1 \ldots i_n}$. Therefore, we can more compactly write:

$$\omega(x_{(1)}, \ldots, x_{(n)}) = x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}\, \omega(e_1, \ldots, e_n) = D\, \omega(e_1, \ldots, e_n)$$

where $D = x_{(1)}^{i_1} \cdots x_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}$ is the proportionality factor (the letter $D$ suggestively chosen to stand for "determinant"). As such, we see that any such map $\omega$ is completely determined by its action on $(e_1, \ldots, e_n)$. If $\omega(e_1, \ldots, e_n) = 0$, then $\omega$ is identically zero. We can therefore consider, without loss of generality, only functions with $\omega(e_1, \ldots, e_n) \neq 0$.

Suppose now we have another such function $\tilde\omega$. By the same procedure, we will also arrive at:

$$\tilde\omega(x_{(1)}, \ldots, x_{(n)}) = D\, \tilde\omega(e_1, \ldots, e_n)$$

where $D$ is the same $D$ as before. In fact, $D$ makes no reference to $\omega$ or $\tilde\omega$, and depends only on the arguments $x_{(k)}$. As such:

$$\frac{\omega(x_{(1)}, \ldots, x_{(n)})}{\omega(e_1, \ldots, e_n)} = D = \frac{\tilde\omega(x_{(1)}, \ldots, x_{(n)})}{\tilde\omega(e_1, \ldots, e_n)}$$

so for all $x_{(1)}, \ldots, x_{(n)} \in V$ we have $\tilde\omega(x_{(1)}, \ldots, x_{(n)}) \propto \omega(x_{(1)}, \ldots, x_{(n)})$, with a constant of proportionality independent of the arguments. So every such function is proportional to every other, and the space of such forms is one-dimensional.

(ii) Now assume $\omega$ is not identically zero. We wish to show that, given a set of vectors $x_{(1)}, \ldots, x_{(n)}$, we will have $\omega(x_{(1)}, \ldots, x_{(n)}) \neq 0$ iff $x_{(1)}, \ldots, x_{(n)}$ is linearly independent.

First, we show that if $x_{(1)}, \ldots, x_{(n)}$ is linearly dependent, then $\omega(x_{(1)}, \ldots, x_{(n)}) = 0$. For if the set is linearly dependent, then we can without loss of generality write $x_{(n)} = \sum_{i=1}^{n-1} a_i\, x_{(i)}$. As such, we will have:

$$\omega\Big(x_{(1)}, \ldots, x_{(n-1)}, \sum_{i=1}^{n-1} a_i\, x_{(i)}\Big) = \sum_{i=1}^{n-1} a_i\, \omega(x_{(1)}, \ldots, x_{(n-1)}, x_{(i)})$$

However, throughout this summation, the last argument always coincides with another argument, and by antisymmetry a form with a repeated argument is zero. Therefore $\omega(x_{(1)}, \ldots, x_{(n)}) = 0$.

Next, we show that if $x_{(1)}, \ldots, x_{(n)}$ is linearly independent, then $\omega(x_{(1)}, \ldots, x_{(n)}) \neq 0$. The key point now is that since $x_{(1)}, \ldots, x_{(n)}$ is a set of $n$ linearly independent vectors in an $n$-dimensional vector space, this set forms a basis for $V$. Hence, any vector $y \in V$ can be expressed as $y = y^i x_{(i)}$. Consider now:

$$\omega(y_{(1)}, \ldots, y_{(n)}) = y_{(1)}^{i_1} \cdots y_{(n)}^{i_n}\, \epsilon_{i_1 \ldots i_n}\, \omega(x_{(1)}, \ldots, x_{(n)})$$

where the $y_{(k)}$ are arbitrary vectors in $V$. As such, we cannot have $\omega(x_{(1)}, \ldots, x_{(n)}) = 0$; otherwise $\omega(y_{(1)}, \ldots, y_{(n)}) = 0$ for arbitrary $y$'s, meaning that $\omega$ would be identically zero, contradicting our original assumption.

(iii) We now define the determinant of a linear map $A: V \to V$ as:

$$\det(A)\, \omega(x_{(1)}, \ldots, x_{(n)}) = \omega(Ax_{(1)}, \ldots, Ax_{(n)})$$

We must show that this coincides with the original definition. Define:

$$\omega_A(x_{(1)}, \ldots, x_{(n)}) = \omega(Ax_{(1)}, \ldots, Ax_{(n)})$$

It is easily seen that $\omega_A$ is itself a completely antisymmetric multilinear form, so by (i), $\omega_A$ is proportional to $\omega$. Let this constant of proportionality be $D_A$; we will write $\omega_A = D_A\, \omega$. This means that for $x_{(1)}, \ldots, x_{(n)} \in V^n$, we have $\omega(Ax_{(1)}, \ldots, Ax_{(n)}) = D_A\, \omega(x_{(1)}, \ldots, x_{(n)})$. If we can compute the value of $D_A$ and show that $D_A = \det(A)$ in the classical sense, then we are done.

Our choice of basis will be $\{e_i\}$, such that $A e_i = A_{ik} e_k$; here $A_{ik}$ are the entries of the matrix form of $A$. Then:

$$\omega_A(e_1, \ldots, e_n) = \omega(Ae_1, \ldots, Ae_n) = \omega(A_{1 i_1} e_{i_1}, \ldots, A_{n i_n} e_{i_n}) = \epsilon_{i_1 \ldots i_n}\, A_{1 i_1} \cdots A_{n i_n}\, \omega(e_1, \ldots, e_n) = \det(A)\, \omega(e_1, \ldots, e_n)$$

and:

$$\omega_A(e_1, \ldots, e_n) = D_A\, \omega(e_1, \ldots, e_n)$$

Consequently, we have that:

$$D_A\, \omega(e_1, \ldots, e_n) = \det(A)\, \omega(e_1, \ldots, e_n)$$

and since the $e_i$ are a basis, they are linearly independent, so $\omega(e_1, \ldots, e_n) \neq 0$. As such, we must have:

$$D_A = \det(A)$$

Hence our new definition coincides with our old one. Finally, we wish to show that $\det(AB) = \det(A)\det(B)$. Consider:

$$\omega(ABx_{(1)}, \ldots, ABx_{(n)}) = \omega\big(A(Bx_{(1)}), \ldots, A(Bx_{(n)})\big) = \det(A)\, \omega(Bx_{(1)}, \ldots, Bx_{(n)}) = \det(A)\det(B)\, \omega(x_{(1)}, \ldots, x_{(n)})$$

Therefore $\det(AB) = \det(A)\det(B)$.