
Relativistic Quantum Field Theory

Assessment Exercises #1
Instructions
You must hand in your solutions to the two hand-in questions that start on page 37 to the
teaching office by 9 am on Monday 20 October 2014 (week 6).
Some of the questions may look more challenging than they really are, so don't be put off if
you cannot follow all the details of the material preceding the exercises.
The solutions for all the other exercises will be posted on LEARN a week before the hand-in
deadline, so you can check your solutions: if you have any questions or want some more
feedback on your answers we will be happy to provide it, especially at the tutorials.
If you are asked to verify something then you do not need to exhaustively enumerate all cases,
it suffices to check a few typical cases.
It is most important that you demonstrate your understanding, so you must clearly explain
your reasoning and not just state the answer. While you can use textbooks and discuss the
topics in general with others, you must answer the exercises individually.

SO(n) and its Representations


Groups and Representations
A group is the natural mathematical structure to describe symmetry. A symmetry transformation leaves some property of the system it is applied to unchanged, and a group is an
abstraction of a set of such transformations.
A group $G$ is a set with a rule for assigning to every (ordered) pair of elements a third
element, satisfying

- if $f, g \in G$ then $h = fg \in G$ (closure);
- for $f, g, h \in G$, $f(gh) = (fg)h$ (associativity);
- there is an identity element $1$, such that $1f = f1 = f$ for all $f \in G$;
- every element $f \in G$ has an inverse $f^{-1}$ such that $ff^{-1} = f^{-1}f = 1$.


A representation of a group associates each group element with a matrix, or more generally
with a linear transformation (which we may think of as an infinite matrix). From the physical
point of view quantum mechanics is expressed in terms of linear operators acting on states,
so symmetry transformations naturally appear as matrices. From the mathematical point of
view linear algebras have more structure than groups, and so representations give us powerful
tools to analyze the structure and properties of groups.

All finite groups have a faithful regular representation (Cayley's theorem), but this is not
so useful for infinite groups, as the dimension of the linear space upon which the regular
representation acts is equal to the number of group elements.
Exercise 1: $S_3$ is the group of all six permutations of three objects, and the regular representation therefore acts on a six-dimensional vector space with basis vectors $e_h$ corresponding
to each permutation $h$. The action of the regular representation $R_g$ of permutation $g$ on the
basis vector $e_h$ is defined by $R_g e_h = e_{gh}$, so the matrix element $(R_g)_{gh,h} = 1$ and all other
entries in the same column vanish.

If we denote the permutation $p$ that takes objects $1 \to 3$, $2 \to 2$, and $3 \to 1$ by
$\begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$,
then the product $pq$ of $p$ with $q = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix}$ is the permutation that first applies $q$ and
then applies $p$, namely $1 \to 2 \to 2$, $2 \to 3 \to 1$, and $3 \to 1 \to 3$, which means that
$pq = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix}$.
Construct the regular representation for $S_3$, and verify that it is a representation by constructing the group multiplication table.
Solution 1. Order the six permutations lexicographically by their bottom rows,
\[
g_1 = \begin{pmatrix}1&2&3\\1&2&3\end{pmatrix},\quad
g_2 = \begin{pmatrix}1&2&3\\1&3&2\end{pmatrix},\quad
g_3 = \begin{pmatrix}1&2&3\\2&1&3\end{pmatrix},\quad
g_4 = \begin{pmatrix}1&2&3\\2&3&1\end{pmatrix},\quad
g_5 = \begin{pmatrix}1&2&3\\3&1&2\end{pmatrix},\quad
g_6 = \begin{pmatrix}1&2&3\\3&2&1\end{pmatrix}.
\]
Column $j$ of $R_g$ has its single non-zero entry in the row labelled by $g\,g_j$, which gives $R_{g_1} = \mathbb{1}$ and
\[
R_{g_2} = \begin{pmatrix}0&1&0&0&0&0\\1&0&0&0&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\\0&0&1&0&0&0\\0&0&0&1&0&0\end{pmatrix},\quad
R_{g_3} = \begin{pmatrix}0&0&1&0&0&0\\0&0&0&1&0&0\\1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&0&1\\0&0&0&0&1&0\end{pmatrix},\quad
R_{g_4} = \begin{pmatrix}0&0&0&0&1&0\\0&0&0&0&0&1\\0&1&0&0&0&0\\1&0&0&0&0&0\\0&0&0&1&0&0\\0&0&1&0&0&0\end{pmatrix},
\]
\[
R_{g_5} = \begin{pmatrix}0&0&0&1&0&0\\0&0&1&0&0&0\\0&0&0&0&0&1\\0&0&0&0&1&0\\1&0&0&0&0&0\\0&1&0&0&0&0\end{pmatrix},\quad
R_{g_6} = \begin{pmatrix}0&0&0&0&0&1\\0&0&0&0&1&0\\0&0&0&1&0&0\\0&0&1&0&0&0\\0&1&0&0&0&0\\1&0&0&0&0&0\end{pmatrix},
\]
where the ordering of the permutations is the same as that for the rows and columns.
The group multiplication table may be computed by multiplying permutations or by multiplying their representation matrices, and the answer is
\[
\begin{pmatrix}
1&2&3&4&5&6\\
2&1&5&6&3&4\\
3&4&1&2&6&5\\
4&3&6&5&1&2\\
5&6&2&1&4&3\\
6&5&4&3&2&1
\end{pmatrix}.
\]

For example, the permutation $p$ is sixth in our ordering, and $q$ is fourth, so $pq$ is the entry
in the bottom row and fourth column, namely the third permutation $pq = \begin{pmatrix}1&2&3\\2&1&3\end{pmatrix}$. The
corresponding calculation using the regular representation is
\[
R_p R_q =
\begin{pmatrix}
0&0&0&0&0&1\\0&0&0&0&1&0\\0&0&0&1&0&0\\0&0&1&0&0&0\\0&1&0&0&0&0\\1&0&0&0&0&0
\end{pmatrix}
\begin{pmatrix}
0&0&0&0&1&0\\0&0&0&0&0&1\\0&1&0&0&0&0\\1&0&0&0&0&0\\0&0&0&1&0&0\\0&0&1&0&0&0
\end{pmatrix}
=
\begin{pmatrix}
0&0&1&0&0&0\\0&0&0&1&0&0\\1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&0&1\\0&0&0&0&1&0
\end{pmatrix}
= R_{pq}.
\]

Representations are useful for many algebraic structures other than groups; for example we
consider representations of Lie and Clifford algebras below. In all cases a representation
is a map from the abstract algebra $A$ to the space $\operatorname{aut} X$ of non-singular linear operators
(matrices) acting on some vector space $X$, $\rho : A \to \operatorname{aut} X$, which preserves the relevant
algebraic structure. For a group this means that for all $a, b \in A$ the representation must
satisfy $\rho(a)\rho(b^{-1}) = \rho(ab^{-1})$, where $\rho(a)\rho(b^{-1})$ denotes multiplication of matrices and $ab^{-1}$
denotes multiplication of group elements.
Exercise 2: Show that the identity $\rho(a)\rho(b^{-1}) = \rho(ab^{-1})$ $\forall a, b \in A$ suffices to guarantee
that
1. $\rho(u)\rho(v) = \rho(uv)$ $\forall u, v \in A$;
2. $\rho(1) = \mathbb{1}$, where one of these is the identity element of the group and the other is the
unit matrix;
3. $\rho(u^{-1}) = \rho(u)^{-1}$ $\forall u \in A$, where one superscript $-1$ is a group inverse and the other
is a matrix inverse.
Solution 2.
1. The identity holds for all $a, b \in A$, and therefore in particular for $a = u$ and $b = v^{-1}$,
hence $\rho(u)\rho(v) = \rho(uv)$ since $(v^{-1})^{-1} = v$ (by definition $(v^{-1})^{-1} v^{-1} = 1$, so right
multiplication by $v$ together with the definition $v^{-1} v = 1$ and the associativity of group
multiplication gives $v = 1v = \bigl((v^{-1})^{-1} v^{-1}\bigr) v = (v^{-1})^{-1} (v^{-1} v) = (v^{-1})^{-1} 1 = (v^{-1})^{-1}$).
2. Take $b = 1$, for which $\rho(a)\rho(1) = \rho(a1) = \rho(a)$ $\forall a \in A$, which gives $\rho(1) = \mathbb{1}$
upon left multiplication by $\rho(a)^{-1}$.
3. Take $b = a$, for which $\rho(a)\rho(a^{-1}) = \rho(aa^{-1}) = \rho(1) = \mathbb{1}$, and multiply on the left by
$\rho(a)^{-1}$ using the associativity of matrix multiplication.

If $A$ is a Lie algebra then we require that $\rho$ is linear, $\rho(\alpha a + \beta b) = \alpha\rho(a) + \beta\rho(b)$ for all $a, b \in A$,
where $\alpha$ and $\beta$ are arbitrary scalars; and we also require that the abstract Lie product is
mapped to a commutator, $\rho([a, b]) = [\rho(a), \rho(b)]$. In this last identity $[a, b]$ is the abstract Lie
product and $[\rho(a), \rho(b)] = \rho(a)\rho(b) - \rho(b)\rho(a)$ is the commutator of two matrices defined in
terms of matrix multiplication. In an abstract Lie algebra $ab$ does not mean anything, as no
such abstract multiplication operation is defined in general.
If $X'$ is a linear subspace of $X$ and $\rho(a)x \in X'$ $\forall x \in X'$ and $a \in A$ (where $\rho(a)x$ denotes matrix-vector multiplication) then we say that $X'$ is an invariant subspace of $X$. This means we
can choose a basis for $X$ in which the basis vectors spanning $X'$ come first, and in
this basis all the matrices $\rho(a)$ are simultaneously of the block form
\[
\rho(a) = \begin{pmatrix} \rho_{11}(a) & \rho_{12}(a) \\ 0 & \rho_{22}(a) \end{pmatrix},
\]
and the map $\rho_{11} : A \to \operatorname{aut} X'$ is itself a representation by matrices in $\operatorname{aut} X'$ acting on the invariant
subspace $X'$. If a representation has a non-trivial invariant subspace it is called reducible.
For finite groups, semi-simple Lie algebras, and some other structures it can be shown that if
a representation is reducible then a basis can be found in which it is completely reducible, that
is, there is some definition of an orthogonal invariant subspace $X''$ such that $X = X' \oplus X''$.
This means that there is a basis in which $\rho_{12}(a) = 0$ and $\rho_{22} : A \to \operatorname{aut} X''$ is a representation.
In the case of groups this result is known as Maschke's theorem, whereas for Lie algebras it
was first proved by Hermann Weyl.
If $X$ has no non-trivial invariant subspaces then the representation is called an irreducible representation.
Two representations are called equivalent if they differ only by a change of basis.
Exercise 3: Change the basis in the regular representation of $S_3$ that we constructed in
Exercise 1 to the basis given by the following linear combinations of permutations, where
each basis vector $e_h$ is labelled by the bottom row of the two-line notation for $h$:
\[
v_1 = \sqrt{\tfrac{1}{6}}\,\bigl[(123) + (132) + (213) + (231) + (312) + (321)\bigr],
\]
\[
v_2 = \sqrt{\tfrac{1}{6}}\,\bigl[(123) - (132) - (213) + (231) + (312) - (321)\bigr],
\]
\[
v_3 = \tfrac{1}{2\sqrt{3}}\,\bigl[2(123) - (132) + 2(213) - (231) - (312) - (321)\bigr],
\]
\[
v_4 = \tfrac{1}{2}\,\bigl[-(132) + (231) - (312) + (321)\bigr],
\]
\[
v_5 = \tfrac{1}{2}\,\bigl[(123) + (132) - (213) - (312)\bigr],
\]
\[
v_6 = \tfrac{1}{2\sqrt{3}}\,\bigl[(123) - (132) - (213) - 2(231) + (312) + 2(321)\bigr].
\]
This completely reduces the representation into the sum of two inequivalent one-dimensional
irreducible representations plus two equivalent two-dimensional ones.
Solution 3. The matrix that takes the basis used in the solution given for Exercise 1 to
that given here has the new basis vectors as its columns,
\[
U = \begin{pmatrix}
\sqrt{\tfrac16} & \sqrt{\tfrac16} & \tfrac{1}{\sqrt3} & 0 & \tfrac12 & \tfrac{1}{2\sqrt3} \\[2pt]
\sqrt{\tfrac16} & -\sqrt{\tfrac16} & -\tfrac{1}{2\sqrt3} & -\tfrac12 & \tfrac12 & -\tfrac{1}{2\sqrt3} \\[2pt]
\sqrt{\tfrac16} & -\sqrt{\tfrac16} & \tfrac{1}{\sqrt3} & 0 & -\tfrac12 & -\tfrac{1}{2\sqrt3} \\[2pt]
\sqrt{\tfrac16} & \sqrt{\tfrac16} & -\tfrac{1}{2\sqrt3} & \tfrac12 & 0 & -\tfrac{1}{\sqrt3} \\[2pt]
\sqrt{\tfrac16} & \sqrt{\tfrac16} & -\tfrac{1}{2\sqrt3} & -\tfrac12 & -\tfrac12 & \tfrac{1}{2\sqrt3} \\[2pt]
\sqrt{\tfrac16} & -\sqrt{\tfrac16} & -\tfrac{1}{2\sqrt3} & \tfrac12 & 0 & \tfrac{1}{\sqrt3}
\end{pmatrix},
\]
with the rows ordered as in the solution of Exercise 1 and the columns being $v_1, \ldots, v_6$. The
matrix $U$ is orthogonal, and conjugating the regular representation gives, in block-diagonal
form,
\[
U^T R_{g_1} U = \mathbb{1},
\qquad
U^T R_{g_2} U = \operatorname{diag}\Bigl(1,\,-1,\,\begin{pmatrix}-\tfrac12 & -\tfrac{\sqrt3}{2}\\[1pt] -\tfrac{\sqrt3}{2} & \tfrac12\end{pmatrix},\,\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\Bigr),
\]
\[
U^T R_{g_3} U = \operatorname{diag}\Bigl(1,\,-1,\,\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},\,\begin{pmatrix}-\tfrac12 & -\tfrac{\sqrt3}{2}\\[1pt] -\tfrac{\sqrt3}{2} & \tfrac12\end{pmatrix}\Bigr),
\qquad
U^T R_{g_4} U = \operatorname{diag}\Bigl(1,\,1,\,\begin{pmatrix}-\tfrac12 & -\tfrac{\sqrt3}{2}\\[1pt] \tfrac{\sqrt3}{2} & -\tfrac12\end{pmatrix},\,\begin{pmatrix}-\tfrac12 & \tfrac{\sqrt3}{2}\\[1pt] -\tfrac{\sqrt3}{2} & -\tfrac12\end{pmatrix}\Bigr),
\]
\[
U^T R_{g_5} U = \operatorname{diag}\Bigl(1,\,1,\,\begin{pmatrix}-\tfrac12 & \tfrac{\sqrt3}{2}\\[1pt] -\tfrac{\sqrt3}{2} & -\tfrac12\end{pmatrix},\,\begin{pmatrix}-\tfrac12 & -\tfrac{\sqrt3}{2}\\[1pt] \tfrac{\sqrt3}{2} & -\tfrac12\end{pmatrix}\Bigr),
\qquad
U^T R_{g_6} U = \operatorname{diag}\Bigl(1,\,-1,\,\begin{pmatrix}-\tfrac12 & \tfrac{\sqrt3}{2}\\[1pt] \tfrac{\sqrt3}{2} & \tfrac12\end{pmatrix},\,\begin{pmatrix}-\tfrac12 & \tfrac{\sqrt3}{2}\\[1pt] \tfrac{\sqrt3}{2} & \tfrac12\end{pmatrix}\Bigr).
\]
The two two-dimensional representations are obviously irreducible, as their matrices do not all commute and therefore cannot be simultaneously diagonalised. That they are equivalent may
be shown explicitly by applying the change of basis
\[
V = \begin{pmatrix}-\tfrac12 & \tfrac{\sqrt3}{2}\\[1pt] \tfrac{\sqrt3}{2} & \tfrac12\end{pmatrix}
\]
to the matrices in the last block on the diagonal to obtain those in the preceding block.
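The shape of this decomposition can be verified without any change of basis, using the character theory introduced below (the character of a representation matrix is its trace). The class characters of $S_3$ quoted in the code are standard results, not taken from the text; this is an illustrative sketch.

```python
from itertools import permutations

perms = sorted(permutations((1, 2, 3)))

def compose(p, q):
    """The product pq: first apply q, then p."""
    return tuple(p[q[i] - 1] for i in range(3))

def fixed_points(p):
    return sum(1 for i in range(3) if p[i] == i + 1)

def chi_regular(g):
    """Character of the regular representation: tr R_g = #{h : gh = h}."""
    return sum(1 for h in perms if compose(g, h) == h)

# Characters of the three irreps of S3 by conjugacy class: a permutation
# with 3 fixed points is e, with 1 a transposition, with 0 a 3-cycle.
chi = {
    "trivial": lambda g: 1,
    "sign":    lambda g: {3: 1, 1: -1, 0: 1}[fixed_points(g)],
    "2-dim":   lambda g: {3: 2, 1: 0, 0: -1}[fixed_points(g)],
}

# Multiplicity of each irrep: m = (1/|G|) sum_g chi_irrep(g) chi_reg(g).
# Each irrep appears in the regular representation with multiplicity
# equal to its dimension: 1 + 1 + 2*2 = 6.
for name, ch in chi.items():
    m = sum(ch(g) * chi_regular(g) for g in perms) // 6
    print(name, m)   # trivial 1, sign 1, 2-dim 2
```

This reproduces the block structure found above: one trivial block, one sign block, and two copies of the two-dimensional irreducible representation.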
There are powerful techniques to find all the irreducible representations of finite groups using
the theory of characters due to G. Frobenius and I. Schur (the character of a representation
matrix is its trace).
If $M$ is an invariant matrix, that is if it commutes with all the matrices in an irreducible
representation, then Schur's lemma states that it must be a multiple of the unit matrix. If it
were not then it would necessarily have more than one eigenvalue, and the corresponding
eigenspaces would be non-trivial invariant subspaces.
Finally, we call a representation faithful if it is injective (a one-to-one map). In our symmetric
group example the one-dimensional irreducible representations are manifestly not faithful,
whereas the two-dimensional irreducible representation is faithful.
Lie Groups
In QFT we are especially interested in continuous symmetries, both internal symmetries such
as the group U(1) of phase transformations associated with electric charge conservation and
the SO(3, 1) space-time symmetry of special relativity. These symmetry groups are in fact
Lie groups, for which the group elements are not only continuous but analytic functions of a
set of parameters; this means that although there are an infinite number of group elements
each one may be specified by a finite number of parameters and may be expanded in a
Taylor series in these parameters in some neighbourhood of each group element. Verb. Sap.:
it turns out that all continuous groups are almost Lie groups (the Gleason–Montgomery–
Zippin theorem). Of course there are Lie groups with an infinite number of parameters, but
the Lie groups of interest in physics have only a small number of parameters. The number
of parameters is called the dimension of the group; for example, the group U(1) of phase
transformations has one parameter, the group of isospin transformations SU(2) has three
parameters, and the Lorentz group SO(3, 1) has six. In physics we are almost always interested
in groups with real-valued parameters, but it turns out that it is mathematically easier to
first consider the groups with complex parameters and then specialise them to their various
real forms: we will therefore investigate the complex orthogonal groups such as SO(4, $\mathbb{C}$)
and then consider what happens when we restrict them to their real forms such as SO(4, $\mathbb{R}$) or
SO(3, 1, $\mathbb{R}$) = SO(3, 1).

Abelian Groups, Generators, and the Exponential Map


Let's start with the simplest case of abelian (commutative) groups. Consider the transformation that translates a smooth function by an amount $a$, $G(a) : f(x) \mapsto f(x + a)$. This transformation generates the discrete infinite group of all transformations of the form $G(na) = G(a)^n$
where $n \in \mathbb{Z}$; in particular we have $G(0) = \mathbb{1}$ and $G(-a) = G(a)^{-1}$. This is clearly an abelian
group because $G(na)G(ma) = G\bigl((m + n)a\bigr) = G(ma)G(na)$. We want to consider the Lie
group of such translations, for which there is no smallest step size $a$ and therefore the group
is not generated by any single group element. Since we are considering translations of smooth
functions we can however consider what happens if we write $G(a) = G(a/N)^N$ and take the
limit as $N \to \infty$; we find
\[
G\Bigl(\frac{a}{N}\Bigr) : f(x) \mapsto f\Bigl(x + \frac{a}{N}\Bigr)
= f(x) + \frac{a}{N}\, f'(x) + \frac{1}{2} f''(x) \Bigl(\frac{a}{N}\Bigr)^2 + \cdots
= \Biggl[1 + \frac{a}{N}\frac{d}{dx} + \frac{1}{2}\Bigl(\frac{a}{N}\frac{d}{dx}\Bigr)^2 + \cdots\Biggr] f(x).
\]

Here we find something surprising: the group of transformations is determined by just the
linear term in this Taylor expansion. If we write $G(a/N) = 1 + \frac{a}{N}\frac{d}{dx} + O(N^{-2})$ then
\[
G(a) = \lim_{N\to\infty} \Bigl(1 + \frac{a}{N}\frac{d}{dx}\Bigr)^N
= \lim_{N\to\infty} \exp\biggl[N \ln\Bigl(1 + \frac{a}{N}\frac{d}{dx}\Bigr)\biggr]
= \lim_{N\to\infty} \exp\biggl[N\Bigl(\frac{a}{N}\frac{d}{dx} + O(N^{-2})\Bigr)\biggr]
= \exp\Bigl(a\frac{d}{dx}\Bigr).
\]


We call the linear operator d/dx the generator of the group, although it is not an element
of the group itself: in fact we shall show that it lives in a particular kind of linear algebra
called a Lie algebra.
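For polynomials the exponential series terminates, so the statement $\exp(a\,d/dx)f(x) = f(x+a)$ can be checked exactly. The following sketch (an illustration, not part of the notes) represents a polynomial by its coefficient list and applies the truncated exponential of the derivative operator:

```python
import math

def deriv(c):
    """Derivative of the polynomial with coefficients c[k] of x^k."""
    return [k * c[k] for k in range(1, len(c))]

def evaluate(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def translate(c, a, terms=10):
    """Apply exp(a d/dx) = sum_j (a^j / j!) (d/dx)^j to the polynomial c."""
    total = [0.0] * len(c)
    d = list(c)
    for j in range(terms):
        for k, dk in enumerate(d):
            total[k] += a**j / math.factorial(j) * dk
        d = deriv(d)
        if not d:          # series terminates for a polynomial
            break
    return total

f = [1.0, -2.0, 0.0, 3.0]          # f(x) = 1 - 2x + 3x^3
g = translate(f, 0.5)              # should be the polynomial f(x + 1/2)
print(abs(evaluate(g, 2.0) - evaluate(f, 2.5)) < 1e-12)  # True
```

The generator $d/dx$ never appears as a group element itself; only its exponential does.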
Direct and Semi-Direct Products
What happens if we consider translations in more than one dimension? For finite translations
in $n$ dimensions there are $n$ independent generators, $G(a_i) : f(x) \mapsto f(x + a_i)$, where $\{a_i\}$ is
a set of linearly independent (not necessarily orthogonal) vectors in an $n$-dimensional space.
The group of elements $G\bigl(\sum_{i=1}^n k_i a_i\bigr)$ with $k_i \in \mathbb{Z}$ is still an abelian group: in fact it is the
direct product of $n$ one-dimensional translation groups, where we consider an element of the
direct product as a tuple $\bigl(G(k_1 a_1), \ldots, G(k_n a_n)\bigr)$ of elements of the one-dimensional groups
together with the multiplication operation $(G_1, \ldots, G_n)(G'_1, \ldots, G'_n) = (G_1 G'_1, \ldots, G_n G'_n)$.
You can easily verify that the direct product group is abelian if and only if all the factor
groups are abelian.
This is the simplest way of combining groups, but it is certainly not the only way. Another
important type of product of groups is the semi-direct product; consider the product of one-dimensional translations with the finite discrete abelian group of rotations generated
by the rotation $R(2\pi/n)$ through angle $2\pi/n$ in two dimensions.

$R(\theta)$ has a natural action on vectors in $\mathbb{R}^2$, $R(\theta) : a \mapsto a'$; for example if $a = e_x$ is the unit
vector in the $x$-direction then $a' = \cos(\theta)\,e_x + \sin(\theta)\,e_y$. We define the semi-direct product
of such translations and rotations as the group of pairs $\bigl(R(\theta), G(b)\bigr)$ with the product
$(R, G) \cdot (R', G') = (R R', G \cdot RG')$, where $RG'$ denotes the translation whose translation
vector has been rotated by $R$. To be a little more explicit,
\[
\bigl(R(\theta_1), G(a_1)\bigr)\bigl(R(\theta_2), G(a_2)\bigr)
= \bigl(R(\theta_1)R(\theta_2),\, G(a_1)\,R(\theta_1)G(a_2)\bigr)
= \bigl(R(\theta_1 + \theta_2),\, G(a_1 + a_2')\bigr),
\]
where $a_2' = R(\theta_1)a_2$. An element of the infinite discrete product group consists of a rotation through some multiple
of $2\pi/n$ followed by a translation through some integer multiple of a unit vector in some
direction $2\pi k/n$ with $k \in \mathbb{Z}$. If we consider rotations in more than two dimensions then the
choice of angles is very much restricted if we require that the rotations form a finite discrete
group (such groups are closely related to Coxeter groups, which play an important role in
the theory of Lie algebras, but which will not be considered further here), but we can also
consider the semi-direct product of the Lie group SO(n) of continuous rotations in $\mathbb{R}^n$ with
continuous translations. In four dimensions the semi-direct product of the
Lorentz group SO(3, 1) with translations is the Poincaré group.
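The semi-direct product law $(R_1, b_1)(R_2, b_2) = (R_1 R_2, b_1 + R_1 b_2)$ is exactly what is needed for the pairs to act consistently on points ("first rotate, then translate"). A minimal numerical sketch of this for the two-dimensional Euclidean group (illustrative; not part of the notes):

```python
import math

def rot(theta, v):
    """Apply the 2D rotation R(theta) to the vector v."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def act(g, x):
    """Action of g = (theta, b) on a point x: rotate, then translate."""
    theta, b = g
    r = rot(theta, x)
    return (r[0] + b[0], r[1] + b[1])

def mul(g1, g2):
    """Semi-direct product: (R1, b1)(R2, b2) = (R1 R2, b1 + R1 b2)."""
    t1, b1 = g1
    t2, b2 = g2
    rb2 = rot(t1, b2)
    return (t1 + t2, (b1[0] + rb2[0], b1[1] + rb2[1]))

g1 = (math.pi / 3, (1.0, 0.0))
g2 = (math.pi / 4, (0.0, 2.0))
x = (0.7, -1.3)

# The product law guarantees (g1 g2) x = g1 (g2 x).
lhs = act(mul(g1, g2), x)
rhs = act(g1, act(g2, x))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

A naive componentwise product $(R_1 R_2, b_1 + b_2)$ would fail this check, which is why the rotation must act on the second translation vector.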

Non-Abelian Groups, Commutators, and Lie Algebras


We now turn our attention to non-abelian Lie groups. A group is non-abelian if its elements
do not all commute with each other; if $g$ and $h$ are a pair of such elements then the amount
by which they fail to commute is given by their commutator $ghg^{-1}h^{-1}$. For a Lie group we
can consider the case where $g = \exp(\alpha \cdot L) = \mathbb{1} + \alpha \cdot L + \frac12 (\alpha \cdot L)^2 + \cdots$ and $h = \exp(\beta \cdot L) =
\mathbb{1} + \beta \cdot L + \frac12 (\beta \cdot L)^2 + \cdots$ are very close to the identity, that is $|\alpha|$ and $|\beta|$ are small and
the components of $L$ are the generators of the Lie algebra. Ignoring cubic terms in these
parameters we obtain $ghg^{-1}h^{-1} = \mathbb{1} + (\alpha \cdot L)(\beta \cdot L) - (\beta \cdot L)(\alpha \cdot L) + \cdots = \mathbb{1} + \alpha^i \beta^j [L_i, L_j] + \cdots$,
where $[A, B] = AB - BA$ is the commutator of two elements in the Lie algebra. The
key observation is that the group commutator $ghg^{-1}h^{-1}$ is an element of the Lie group
near to the identity, and therefore there must be some vector of parameters $\gamma$ such that
$ghg^{-1}h^{-1} = \mathbb{1} + \gamma \cdot L + \cdots$, albeit with $|\gamma|$ of the same magnitude as $|\alpha|\,|\beta|$. This requires
that the generators $L$ of the Lie algebra must close under commutation, $[L_i, L_j] = c_{ij}{}^k L_k$, for
some coefficients $c_{ij}{}^k$ called structure constants. The structure constants completely specify
the Lie group not only in a small neighbourhood of the identity but for finite transformations
too, at least up to a discrete global subgroup which we will come across later.

Exercise 4: Verify the Jacobi identity [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0 for matrix
groups.

Solution 4.
\begin{align*}
[A, [B, C]] + [B, [C, A]] + [C, [A, B]]
&= A(BC - CB) - (BC - CB)A \\
&\quad + B(CA - AC) - (CA - AC)B \\
&\quad + C(AB - BA) - (AB - BA)C \\
&= ABC - ACB - BCA + CBA \\
&\quad + BCA - BAC - CAB + ACB \\
&\quad + CAB - CBA - ABC + BAC = 0.
\end{align*}
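Since the cancellation is purely algebraic, it holds for any matrices whatsoever; a quick exact check with integer matrices (an illustrative sketch, not part of the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return sub(matmul(A, B), matmul(B, A))

A = [[1, 2], [3, 4]]
B = [[0, -1], [1, 0]]
C = [[2, 0], [5, -3]]

# [A,[B,C]] + [B,[C,A]] + [C,[A,B]] vanishes identically.
J = add(add(comm(A, comm(B, C)), comm(B, comm(C, A))), comm(C, comm(A, B)))
print(J)  # [[0, 0], [0, 0]]
```

The same cancellation is what makes the adjoint representation, defined later, a genuine representation.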

Matrix Groups, Metric and Levi-Civita tensors


We do not have time to explore the classification of all possible finite-dimensional Lie algebras
by Killing and Cartan, so we shall just consider the classical matrix group SO(n), defined
as the group of all matrices that preserve lengths, angles, and volumes. The lengths and
angles are defined by a non-singular symmetric metric tensor $g_{\mu\nu}$ which specifies the inner
product of two vectors, $u \cdot v = u^\mu v^\nu g_{\mu\nu}$; we define $g_{\mu\nu} = \delta_{\mu\nu}$ to be the Kronecker tensor (unit
matrix), and $g^{\mu\nu} g_{\nu\rho} = \delta^\mu_\rho$, so that $g^{\mu\nu}$ is the inverse of the matrix $g_{\mu\nu}$. If $R \in SO(n)$ then
the fact that it does not change the metric properties requires that $(Ru) \cdot (Rv) = u \cdot v$,
or equivalently $R^\mu{}_\nu u^\nu R^\rho{}_\sigma v^\sigma g_{\mu\rho} = u^\nu v^\sigma g_{\nu\sigma} \Rightarrow R^\mu{}_\nu R^\rho{}_\sigma g_{\mu\rho} = g_{\nu\sigma}$. In matrix notation this is
$R^T R = \mathbb{1}$, or $R^T = R^{-1}$; such a matrix is called orthogonal.

Such a transformation does not necessarily conserve volume. Volume is defined by the
totally antisymmetric $n$-index Levi-Civita tensor $\epsilon_{\mu_1\mu_2\ldots\mu_n}$, the volume of the parallelepiped
defined by the $n$ vectors $u_1, \ldots, u_n$ being $u_1^{\mu_1} u_2^{\mu_2} \cdots u_n^{\mu_n}\, \epsilon_{\mu_1\mu_2\ldots\mu_n}$. The properties of the Levi-Civita tensor follow from two simple observations: firstly that it has only one independent
component, which we shall take to be $\epsilon_{1,2,\ldots,n} = 1$, and secondly that
\[
\epsilon_{\mu_1\ldots\mu_n}\, \epsilon^{\nu_1\ldots\nu_n}
= \sum_\pi \operatorname{sgn}(\pi)\, \delta^{\nu_1}_{\mu_{\pi 1}} \cdots \delta^{\nu_n}_{\mu_{\pi n}}
= n!\, \delta^{[\nu_1}_{\mu_1} \cdots \delta^{\nu_n]}_{\mu_n},
\tag{1}
\]
where the sum is over all permutations $\pi$ of the numbers $1, \ldots, n$ and $\operatorname{sgn}(\pi)$ is $+1$ if $\pi$ is an
even permutation and $-1$ if it is an odd one. The square brackets around the lower indices
in the last equality indicate antisymmetrization, which is defined by this equation.
We can also write equation (1) in diagrammatic notation as

[diagram for $n = 3$: two hatched boxes joined by three index lines on the left equal $3!$ times a single black box on the right]

where the hatched boxes on the left represent the Levi-Civita tensors and the black box on
the right represents the antisymmetrizer.
The antisymmetrizer $\delta^{[\nu_1}_{\mu_1} \cdots \delta^{\nu_k]}_{\mu_k}$ is a very useful operator that projects onto the totally antisymmetric $k$-index subspace: in other words it acts as the identity when applied to $k$ antisymmetric
indices,
\[
\delta^{[\nu_1}_{\mu_1} \cdots \delta^{\nu_k]}_{\mu_k}\, v^{[\mu_1\ldots\mu_k]} = v^{[\nu_1\ldots\nu_k]},
\]
where $v^{\mu_1\ldots\mu_i\ldots\mu_j\ldots\mu_k} = -v^{\mu_1\ldots\mu_j\ldots\mu_i\ldots\mu_k}$ for any $1 \le i < j \le k$, a special case of this being that
it is idempotent (it equals its square),
\[
\delta^{[\nu_1}_{\rho_1} \cdots \delta^{\nu_k]}_{\rho_k}\, \delta^{[\rho_1}_{\mu_1} \cdots \delta^{\rho_k]}_{\mu_k} = \delta^{[\nu_1}_{\mu_1} \cdots \delta^{\nu_k]}_{\mu_k}.
\]
What equation (1) tells us is that if k = n then this antisymmetric projector factors into a
product of two Levi-Civita tensors. The following exercise shows why this is useful.
Exercise 5:
1. Write the identity of equation (1) explicitly for $n = 3$.
2. Show that for $n = 3$
\[
\epsilon_{ijk}\, \epsilon^{ij'k'} = \delta^{j'}_j \delta^{k'}_k - \delta^{k'}_j \delta^{j'}_k,
\qquad
\epsilon_{ijk}\, \epsilon^{ijk'} = 2\delta^{k'}_k.
\]
3. Prove that
(a) a totally antisymmetric tensor with $n$ indices in an $n$-dimensional space has only
one independent component, and
(b) the Levi-Civita tensor satisfies
\[
\epsilon_{\mu_1\ldots\mu_n}\, \epsilon^{\nu_1\ldots\nu_n} = \sum_\pi \operatorname{sgn}(\pi)\, \delta^{\nu_1}_{\mu_{\pi 1}} \cdots \delta^{\nu_n}_{\mu_{\pi n}}
\]
(these are both just a consequence of antisymmetry).
4. The determinant of an $n \times n$ matrix $M$ is defined to be
\[
\det M = \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_n]}_{\nu_n}\, M^{\nu_1}_{\mu_1} \cdots M^{\nu_n}_{\mu_n}.
\]
(a) Evaluate $\det M$ as a polynomial in its components for $n = 3$, and
(b) also for $n = 3$ show that
\[
\det M = \frac{1}{6}\bigl[(\operatorname{tr} M)^3 - 3 \operatorname{tr} M^2 \operatorname{tr} M + 2 \operatorname{tr} M^3\bigr].
\]
(c) Show that $\det A \det B = \det AB$, and
(d) prove that $M \operatorname{Adj} M = \det M\; \mathbb{1}$, where the adjoint matrix $\operatorname{Adj} M$ is defined by
\[
(\operatorname{Adj} M)^{\mu_n}_{\nu_n} = n\, \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_n]}_{\nu_n}\, M^{\nu_1}_{\mu_1} \cdots M^{\nu_{n-1}}_{\mu_{n-1}}.
\]
[diagrammatic form of $\operatorname{Adj} M$ for $n = 3$: $\operatorname{Adj} M = \frac{1}{2!}\,\epsilon M M \epsilon$ with the free index pair left open]
Hint: You can use the identity
\[
\epsilon_{\mu_1\ldots\mu_{n-1}\mu}\, \epsilon^{\mu_1\ldots\mu_{n-1}\nu} = (n-1)!\, \delta^{\nu}_{\mu},
\]
that is a generalization of part 2 above.
Solution 5.
1. For $n = 3$ equation (1) is
\[
\epsilon_{ijk}\, \epsilon^{i'j'k'} =
\delta^{i'}_i\delta^{j'}_j\delta^{k'}_k - \delta^{i'}_i\delta^{k'}_j\delta^{j'}_k
+ \delta^{j'}_i\delta^{k'}_j\delta^{i'}_k - \delta^{j'}_i\delta^{i'}_j\delta^{k'}_k
+ \delta^{k'}_i\delta^{i'}_j\delta^{j'}_k - \delta^{k'}_i\delta^{j'}_j\delta^{i'}_k.
\]
2. Contract the first indices on the Levi-Civita tensors in the last expression, that is
set $i' = i$ and sum over $i$, to obtain
\[
\epsilon_{ijk}\, \epsilon^{ij'k'}
= 3\delta^{j'}_j\delta^{k'}_k - 3\delta^{k'}_j\delta^{j'}_k + \delta^{k'}_j\delta^{j'}_k - \delta^{j'}_j\delta^{k'}_k + \delta^{k'}_j\delta^{j'}_k - \delta^{j'}_j\delta^{k'}_k
= \delta^{j'}_j\delta^{k'}_k - \delta^{k'}_j\delta^{j'}_k.
\]
Further contracting the indices $j$ and $j'$ we obtain
\[
\epsilon_{ijk}\, \epsilon^{ijk'} = 3\delta^{k'}_k - \delta^{k'}_k = 2\delta^{k'}_k.
\]
3. The Levi-Civita tensor has exactly $n$ indices in an $n$-dimensional space, and therefore
either the indices are a permutation of $1, 2, \ldots, n$ or some pair of indices must have
the same value by the pigeonhole principle. If two indices have the same value then
interchanging them clearly cannot change the tensor's value, but by antisymmetry it
must also change the sign, and the only number which is equal to minus itself is zero.
We have therefore shown that all the non-zero components of the Levi-Civita tensor
must have indices that are permutations of $1, 2, \ldots, n$, but such a permutation can be
reached by starting with $1, 2, \ldots, n$ and swapping pairs of indices. If the permutation
is even then the component of the Levi-Civita tensor must equal $\epsilon_{12\ldots n}$, while if it is
odd then the component must be $-\epsilon_{12\ldots n}$. The only freedom is the choice of this single
independent component, which is chosen by convention to be $+1$.
Equation (1) is now readily established, since for the Levi-Civita tensors to be non-zero
both $\mu_1, \ldots, \mu_n$ and $\nu_1, \ldots, \nu_n$ must be permutations of $1, \ldots, n$: we may write
$\mu = \begin{pmatrix}1 & \cdots & n\\ \mu_1 & \cdots & \mu_n\end{pmatrix}$ and $\nu = \begin{pmatrix}1 & \cdots & n\\ \nu_1 & \cdots & \nu_n\end{pmatrix}$. There is therefore a unique permutation
$p = \nu\mu^{-1}$ such that $p\mu_j = \nu_j$ for $j = 1, \ldots, n$. This permutation is included in the sum
over all permutations $\pi$, and for that term all the Kronecker factors will equal one
and $\operatorname{sgn} p = \epsilon_{\mu_1\ldots\mu_n}\epsilon^{\nu_1\ldots\nu_n}$ is the appropriate sign. All the other terms vanish because at least two
of the factors must vanish.
4. The determinant of an $n \times n$ matrix $M$ is defined to be
\[
\det M = \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_n]}_{\nu_n}\, M^{\nu_1}_{\mu_1} \cdots M^{\nu_n}_{\mu_n}.
\]
(a) Using equation (1) we may write the determinant as a sum over permutations,
\[
\det M = \frac{1}{n!}\, \epsilon^{\mu_1\ldots\mu_n}\, \epsilon_{\nu_1\ldots\nu_n}\, M^{\nu_1}_{\mu_1} \cdots M^{\nu_n}_{\mu_n}.
\]
The Levi-Civita tensors vanish unless $(\mu_1, \ldots, \mu_n)$ and $(\nu_1, \ldots, \nu_n)$ are permutations of $(1, \ldots, n)$; sorting the $M^{\nu_j}_{\mu_j}$ (which of course commute) so that the lower
indices appear in the order $1, \ldots, n$, every choice of the lower-index permutation yields
the same result, and we obtain $n!$ copies of the sum over the upper-index permutation $\pi'$,
\[
\det M = \sum_{\pi'} \operatorname{sgn}(\pi')\, M^{\pi'(1)}_{1}\, M^{\pi'(2)}_{2} \cdots M^{\pi'(n)}_{n}.
\]
For the case where $n = 3$ this is
\begin{align*}
\det M &= M^1_1 M^2_2 M^3_3 - M^1_1 M^3_2 M^2_3 + M^3_1 M^1_2 M^2_3 - M^3_1 M^2_2 M^1_3 + M^2_1 M^3_2 M^1_3 - M^2_1 M^1_2 M^3_3 \\
&= M^1_1(M^2_2 M^3_3 - M^3_2 M^2_3) - M^1_2(M^2_1 M^3_3 - M^3_1 M^2_3) + M^1_3(M^2_1 M^3_2 - M^2_2 M^3_1).
\end{align*}
(b) We can also write a determinant as a polynomial of traces. Expanding the antisymmetrizer for $n = 3$ using part 1,
\[
\det M = \frac16\bigl(\delta^{i'}_i\delta^{j'}_j\delta^{k'}_k - \delta^{i'}_i\delta^{k'}_j\delta^{j'}_k
+ \delta^{j'}_i\delta^{k'}_j\delta^{i'}_k - \delta^{j'}_i\delta^{i'}_j\delta^{k'}_k
+ \delta^{k'}_i\delta^{i'}_j\delta^{j'}_k - \delta^{k'}_i\delta^{j'}_j\delta^{i'}_k\bigr)\, M^i_{i'} M^j_{j'} M^k_{k'};
\]
each chain of Kronecker deltas closes the $M$'s into traces, giving
\[
\det M = \frac16\bigl[(\operatorname{tr} M)^3 - \operatorname{tr} M \operatorname{tr} M^2 + \operatorname{tr} M^3 - \operatorname{tr} M^2 \operatorname{tr} M + \operatorname{tr} M^3 - \operatorname{tr} M \operatorname{tr} M^2\bigr]
= \frac16\bigl[(\operatorname{tr} M)^3 - 3 \operatorname{tr} M^2 \operatorname{tr} M + 2 \operatorname{tr} M^3\bigr].
\]
(c) The determinant of the product of two matrices $M = AB$ is
\[
\det AB = \frac{1}{n!}\, \epsilon^{\mu_1\ldots\mu_n}\, \epsilon_{\nu_1\ldots\nu_n}\, A^{\nu_1}_{\rho_1} B^{\rho_1}_{\mu_1} \cdots A^{\nu_n}_{\rho_n} B^{\rho_n}_{\mu_n}.
\]
The factor $\epsilon^{\mu_1\ldots\mu_n} B^{\rho_1}_{\mu_1} \cdots B^{\rho_n}_{\mu_n}$ is antisymmetric under interchange of any pair
$\mu_i \leftrightarrow \mu_j$, and therefore, since the factors $B^{\rho_i}_{\mu_i}$ commute, the expression must also be
antisymmetric under interchange of $\rho_i \leftrightarrow \rho_j$. We can therefore insert an antisymmetrizer
$\delta^{[\rho_1}_{\rho'_1} \cdots \delta^{\rho_n]}_{\rho'_n}$ on these indices without changing anything. Inserting equation (1) again to
replace this antisymmetrizer by $\frac{1}{n!}\epsilon^{\rho_1\ldots\rho_n}\epsilon_{\rho'_1\ldots\rho'_n}$ we obtain
\[
\det AB = \Bigl(\frac{1}{n!}\, \epsilon^{\rho_1\ldots\rho_n}\, \epsilon_{\nu_1\ldots\nu_n}\, A^{\nu_1}_{\rho_1} \cdots A^{\nu_n}_{\rho_n}\Bigr)
\Bigl(\frac{1}{n!}\, \epsilon^{\mu_1\ldots\mu_n}\, \epsilon_{\rho'_1\ldots\rho'_n}\, B^{\rho'_1}_{\mu_1} \cdots B^{\rho'_n}_{\mu_n}\Bigr)
= \det A \det B.
\]
[the same calculation can be drawn diagrammatically, with the antisymmetrizer inserted between the row of $A$'s and the row of $B$'s]
(d) From the definition of the adjoint matrix,
\[
(\operatorname{Adj} A)^{\mu}_{\nu}\, A^{\nu}_{\rho}
= n\, \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_{n-1}}_{\nu_{n-1}} \delta^{\mu]}_{\nu}\, A^{\nu_1}_{\mu_1} \cdots A^{\nu_{n-1}}_{\mu_{n-1}} A^{\nu}_{\rho}
= \frac{n}{n!}\, \epsilon^{\mu_1\ldots\mu_{n-1}\mu}\, \epsilon_{\nu_1\ldots\nu_{n-1}\nu}\, A^{\nu_1}_{\mu_1} \cdots A^{\nu_{n-1}}_{\mu_{n-1}} A^{\nu}_{\rho},
\]
where we have used equation (1). By the usual argument, since the $A$'s commute and the
expression is antisymmetric on the contracted upper indices it is totally antisymmetric in
$\mu_1, \ldots, \mu_{n-1}, \rho$, so the identity $\epsilon_{\nu_1\ldots\nu_n} A^{\nu_1}_{\mu_1} \cdots A^{\nu_n}_{\mu_n} = \det A\; \epsilon_{\mu_1\ldots\mu_n}$ gives
\[
(\operatorname{Adj} A)^{\mu}_{\nu}\, A^{\nu}_{\rho}
= \frac{n}{n!}\, \det A\; \epsilon^{\mu_1\ldots\mu_{n-1}\mu}\, \epsilon_{\mu_1\ldots\mu_{n-1}\rho}
= \frac{n}{n!}\, \det A\, (n-1)!\, \delta^{\mu}_{\rho} = \det A\; \delta^{\mu}_{\rho},
\]
using the identity
\[
\epsilon_{\mu_1\ldots\mu_{n-1}\mu}\, \epsilon^{\mu_1\ldots\mu_{n-1}\nu} = (n-1)!\, \delta^{\nu}_{\mu},
\]
which follows from the contraction rule
\[
\delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_{k-1}}_{\nu_{k-1}} \delta^{\mu_k]}_{\mu_k} = \frac{n-k+1}{k}\, \delta^{[\mu_1}_{\nu_1} \cdots \delta^{\mu_{k-1}]}_{\nu_{k-1}},
\]
which in turn can be established by induction.
In matrix notation this is $(\operatorname{Adj} A)A = (\det A)\,\mathbb{1}$. If $\det A \neq 0$ we obtain Cramer's
rule for the inverse, $A^{-1} = \operatorname{Adj} A / \det A$, by multiplying on the right by $A^{-1}/\det A$.
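All of the $n = 3$ identities in Exercise 5 can be spot-checked exactly with integer arithmetic. This sketch (not part of the notes; the index conventions in `adj3` are one concrete, equivalent way of contracting two Levi-Civita symbols with two copies of $M$) verifies the Leibniz formula, the trace formula, and Cramer's rule:

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation given as a tuple of 0..n-1."""
    s, q = 1, list(p)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] > q[j]:
                s = -s
    return s

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def det3(M):
    """Leibniz formula: det M = sum_pi sgn(pi) M[0][pi0] M[1][pi1] M[2][pi2]."""
    return sum(sgn(p) * M[0][p[0]] * M[1][p[1]] * M[2][p[2]]
               for p in permutations(range(3)))

def adj3(M):
    """Adjugate via (Adj M)_{ij} = (1/2) eps_{ipq} eps_{jrs} M_{rp} M_{sq}."""
    eps = {p: sgn(p) for p in permutations(range(3))}  # non-zero components
    A = [[0] * 3 for _ in range(3)]
    for (i, p, q), e1 in eps.items():
        for (j, r, s), e2 in eps.items():
            A[i][j] += e1 * e2 * M[r][p] * M[s][q]
    return [[x // 2 for x in row] for row in A]

M = [[2, 1, 0], [1, 3, -1], [0, 4, 1]]
M2, d = matmul(M, M), det3(M)
M3 = matmul(M2, M)

# Trace formula: 6 det M = (tr M)^3 - 3 tr(M^2) tr M + 2 tr(M^3).
assert 6 * d == trace(M)**3 - 3 * trace(M2) * trace(M) + 2 * trace(M3)

# Cramer: (Adj M) M = det M * identity.
identity = [[d if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(adj3(M), M) == identity
print(d)  # 13
```

Because everything is integer arithmetic, the assertions test the identities exactly rather than up to rounding error.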

The invariance of the Levi-Civita tensor gives the condition $\det R = 1$. Since $1 = \det \mathbb{1} =
\det R^T R = \det R \det R^T = (\det R)^2$ it follows that $\det R = \pm 1$, so the constraint of being
volume preserving or special (the S in SO(n)) does not have any effect on the Lie algebra
so(n), but it tells us that the group O(n) has two disconnected pieces corresponding to
matrices of determinant $\pm 1$. Verb. Sap.: this does not mean that SO(n) is necessarily
connected; for example, the Lorentz group SO(3, 1) has two disconnected pieces.
Generators for SO(n)
We have shown that the elements of SO(n) are orthogonal matrices with unit determinant,
but what is the nature of the elements of the corresponding Lie algebra so(n)? This is easily
discovered by considering the identity $R^T R = \mathbb{1}$ for $R = \mathbb{1} + \epsilon L + \cdots$ close to the identity,
\[
(\mathbb{1} + \epsilon L)(\mathbb{1} + \epsilon L^T) = \mathbb{1} + \epsilon(L + L^T) + \cdots = \mathbb{1},
\]
so the generators of the Lie algebra so(n) must be antisymmetric matrices, $L_i = -L_i^T$. The
identity $\det R = \exp \operatorname{tr} \ln R = 1$ tells us that $\operatorname{tr} L_i = 0$, but this gives nothing new since
antisymmetry forces the diagonal elements of the generators to vanish anyhow. This is
consistent with the observation we made before that volume preservation does not have any
effect on the Lie algebra.
Fundamental Representation of so(n) and SO(n)
We can choose the matrices $(L^{[\alpha\beta]})_{\mu\nu} = \eta(\delta^\alpha_\mu \delta^\beta_\nu - \delta^\beta_\mu \delta^\alpha_\nu) = 2\eta\, \delta^{[\alpha}_\mu \delta^{\beta]}_\nu$ as a basis for such
antisymmetric $n \times n$ matrices, where the index $i$ has been replaced by the pair of indices
$\alpha, \beta$ with $\alpha < \beta$, and we have introduced an arbitrary normalization constant $\eta$. How many
generators are there? There are $\binom{n}{2}$ ways of choosing the ordered pair $1 \le \alpha < \beta \le n$, so
the dimension of so(n) is $\frac12 n(n-1)$.
Exercise 6: For $n = 2, 3, 4$
1. write down the generators in the fundamental representation and check that the number of generators is $\frac12 n(n-1)$;
2. verify that the generators satisfy $\operatorname{tr} L^{[\alpha\beta]} L^{[\alpha\beta]} = -2\eta^2$ (no sum over $\alpha, \beta$).
Solution 6.
1. For $n = 2$ we have $\frac12 \cdot 2(2-1) = 1$ generator,
\[
L^{[12]} = \eta\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
For $n = 3$ we have $\frac12 \cdot 3(3-1) = 3$ generators,
\[
L^{[12]} = \eta\begin{pmatrix} 0&1&0\\-1&0&0\\0&0&0\end{pmatrix},\quad
L^{[13]} = \eta\begin{pmatrix} 0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\quad
L^{[23]} = \eta\begin{pmatrix} 0&0&0\\0&0&1\\0&-1&0\end{pmatrix}.
\]
For $n = 4$ we have $\frac12 \cdot 4(4-1) = 6$ generators,
\[
L^{[12]} = \eta\begin{pmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix},\quad
L^{[13]} = \eta\begin{pmatrix}0&0&1&0\\0&0&0&0\\-1&0&0&0\\0&0&0&0\end{pmatrix},\quad
L^{[14]} = \eta\begin{pmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\-1&0&0&0\end{pmatrix},
\]
\[
L^{[23]} = \eta\begin{pmatrix}0&0&0&0\\0&0&1&0\\0&-1&0&0\\0&0&0&0\end{pmatrix},\quad
L^{[24]} = \eta\begin{pmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&-1&0&0\end{pmatrix},\quad
L^{[34]} = \eta\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&0&1\\0&0&-1&0\end{pmatrix}.
\]
2. This just requires taking the trace of products of these matrices: each product $L^{[\alpha\beta]} L^{[\alpha\beta]}$ has diagonal entries $-\eta^2$ in the positions $\alpha$ and $\beta$ and zeros elsewhere, so the trace is $-2\eta^2$.
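Both parts of the exercise are easy to confirm mechanically for several $n$ at once. The sketch below (illustrative, with $\eta = 1$) builds each generator from its defining matrix elements and checks antisymmetry, the count $\frac12 n(n-1)$, and the trace:

```python
def generator(n, a, b, eta=1):
    """(L^{[ab]})_{mu nu} = eta (delta^a_mu delta^b_nu - delta^b_mu delta^a_nu),
    with indices a < b running from 0 to n-1 here."""
    L = [[0] * n for _ in range(n)]
    L[a][b] = eta
    L[b][a] = -eta
    return L

def trace_prod(A, B):
    n = len(A)
    return sum(A[i][k] * B[k][i] for i in range(n) for k in range(n))

for n in (2, 3, 4):
    gens = [generator(n, a, b) for a in range(n) for b in range(a + 1, n)]
    assert len(gens) == n * (n - 1) // 2          # dimension of so(n)
    for L in gens:
        # antisymmetric (hence traceless), and tr(L L) = -2 eta^2 = -2
        assert all(L[i][j] == -L[j][i] for i in range(n) for j in range(n))
        assert trace_prod(L, L) == -2
```

The negative trace is the statement that these real antisymmetric generators square to a negative semi-definite matrix.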

We have thus constructed a set of $n \times n$ matrices that represent the generators of so(n); this
is called the defining or fundamental representation. We can use it to construct a canonical
way of representing every element of the group SO(n), or at least those elements connected
to the identity, by using the exponential map.
Exercise 7: Carry this out explicitly for SO(3).
1. Evaluate the group element $G = \exp\bigl(\omega_{ij} L^{[ij]}\bigr)$ (sum over $i < j$).
Hint: Compute the characteristic polynomial $\Delta(\lambda) = \det(\lambda\mathbb{1} - X)$ for the matrix
\[
X = \omega_{ij} L^{[ij]} = \eta\begin{pmatrix} 0 & \omega_{12} & \omega_{13} \\ -\omega_{12} & 0 & \omega_{23} \\ -\omega_{13} & -\omega_{23} & 0 \end{pmatrix}
\]
and use the Cayley–Hamilton theorem, which states that a matrix satisfies its own
characteristic polynomial, $\Delta(X) = 0$. This should give you an identity of the form
$(X^2 + c^2)X = 0$ for some constant $c$, allowing you to replace $X^{2j}$ in the expansion of the
exponential function by $(-c^2)^{j-1} X^2$ for all but the leading term.
2. For what value of $\eta$ does $\exp X = \mathbb{1}$ if and only if $|\omega| \equiv \sqrt{\omega_{12}^2 + \omega_{13}^2 + \omega_{23}^2} = 2\pi k$ with
$k \in \mathbb{Z}$?
3. What is the corresponding value of $\operatorname{tr} L^{[12]} L^{[12]}$?

Solution 7.
1. The characteristic polynomial is $\Delta(\lambda) = \lambda^3 + \lambda\, \eta^2|\omega|^2$, where we have defined $|\omega|^2 = \omega_{12}^2 + \omega_{13}^2 +
\omega_{23}^2$, so the Cayley–Hamilton theorem tells us that $(X^2 + c^2)X = 0$ with $c = \eta|\omega|$.
Substituting this into the series expansion for the exponential we get
\begin{align*}
\exp X = \sum_{j=0}^\infty \frac{X^j}{j!}
&= \mathbb{1} + \sum_{j=0}^\infty \frac{X^{2j}}{(2j+1)!}\, X + \sum_{j=0}^\infty \frac{X^{2j}}{(2j+2)!}\, X^2
= \mathbb{1} + \sum_{j=0}^\infty \frac{(-c^2)^j}{(2j+1)!}\, X + \sum_{j=0}^\infty \frac{(-c^2)^j}{(2j+2)!}\, X^2 \\
&= \mathbb{1} + \frac{\sin c}{c}\, X + \frac{1 - \cos c}{c^2}\, X^2 \\
&= \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+ \frac{\sin(\eta|\omega|)}{|\omega|}\begin{pmatrix}0&\omega_{12}&\omega_{13}\\-\omega_{12}&0&\omega_{23}\\-\omega_{13}&-\omega_{23}&0\end{pmatrix} \\
&\quad + \frac{1-\cos(\eta|\omega|)}{|\omega|^2}\begin{pmatrix}
-\omega_{12}^2-\omega_{13}^2 & -\omega_{13}\omega_{23} & \omega_{12}\omega_{23}\\
-\omega_{13}\omega_{23} & -\omega_{12}^2-\omega_{23}^2 & -\omega_{12}\omega_{13}\\
\omega_{12}\omega_{23} & -\omega_{12}\omega_{13} & -\omega_{13}^2-\omega_{23}^2
\end{pmatrix}.
\end{align*}
2. $\exp X = \mathbb{1}$ if and only if $\eta|\omega| = 2\pi k$ with $k \in \mathbb{Z}$, so for this to occur exactly when $|\omega| = 2\pi k$ requires $\eta = \pm 1$.
3. $\operatorname{tr} L^{[12]} L^{[12]} = -2\eta^2 = -2$.

Observe that for this special case we have a three-index Levi-Civita tensor, so we can
express an antisymmetric rank-2 tensor in terms of its dual vector, namely $\omega_{ij} = \epsilon_{ijk}\theta_k$
and $L^{[ij]} = \epsilon^{ijk} L_k$, whence an arbitrary element of the Lie algebra so(3) may be written as
$\omega_{ij} L^{[ij]} = 2\theta_i L_i$.
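The closed form found in Solution 7 (essentially Rodrigues' rotation formula) can be checked against a brute-force truncation of the exponential series. A sketch with $\eta = 1$, not part of the notes (and `exp_closed` assumes $|\omega| \neq 0$):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_series(X, terms=40):
    """Truncated matrix exponential sum_j X^j / j!."""
    E = [[float(i == j) for j in range(3)] for i in range(3)]
    P = [[float(i == j) for j in range(3)] for i in range(3)]
    for j in range(1, terms):
        P = matmul(P, X)
        E = [[E[a][b] + P[a][b] / math.factorial(j) for b in range(3)]
             for a in range(3)]
    return E

def exp_closed(w12, w13, w23):
    """exp X = 1 + (sin|w|/|w|) X + ((1 - cos|w|)/|w|^2) X^2 for X in so(3)."""
    X = [[0.0, w12, w13], [-w12, 0.0, w23], [-w13, -w23, 0.0]]
    w = math.sqrt(w12**2 + w13**2 + w23**2)
    X2 = matmul(X, X)
    s, c = math.sin(w) / w, (1.0 - math.cos(w)) / w**2
    return [[float(a == b) + s * X[a][b] + c * X2[a][b] for b in range(3)]
            for a in range(3)]

X = [[0.0, 0.4, -0.3], [-0.4, 0.0, 1.1], [0.3, -1.1, 0.0]]
A, B = exp_series(X), exp_closed(0.4, -0.3, 1.1)
assert all(abs(A[i][j] - B[i][j]) < 1e-10 for i in range(3) for j in range(3))
```

With $|\omega| = 2\pi$ both the sine and versine coefficients vanish and the closed form returns the identity, which is the periodicity used in part 2.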
Structure Constants
Let us now find the structure constants for so(n) by evaluating the commutators of its
generators. The generators are $(L^{[\alpha\beta]})_{\mu\nu} = 2\eta\, \delta^{[\alpha}_\mu \delta^{\beta]}_\nu$, so the commutation relations are
\begin{align*}
[L^{[\alpha\beta]}, L^{[\alpha'\beta']}]_{\mu\nu}
&= (L^{[\alpha\beta]})_{\mu\lambda} (L^{[\alpha'\beta']})_{\lambda\nu} - (L^{[\alpha'\beta']})_{\mu\lambda} (L^{[\alpha\beta]})_{\lambda\nu} \\
&= \eta^2\bigl[(\delta^\alpha_\mu\delta^\beta_\lambda - \delta^\beta_\mu\delta^\alpha_\lambda)(\delta^{\alpha'}_\lambda\delta^{\beta'}_\nu - \delta^{\beta'}_\lambda\delta^{\alpha'}_\nu) - (\alpha\beta \leftrightarrow \alpha'\beta')\bigr] \\
&= \eta\bigl[\delta^{\beta\alpha'} L^{[\alpha\beta']} - \delta^{\alpha\alpha'} L^{[\beta\beta']} - \delta^{\beta\beta'} L^{[\alpha\alpha']} + \delta^{\alpha\beta'} L^{[\beta\alpha']}\bigr]_{\mu\nu},
\end{align*}
whence the structure constants may be read off from
\[
[L^{[\alpha\beta]}, L^{[\alpha'\beta']}] = \eta\bigl(\delta^{\beta\alpha'} L^{[\alpha\beta']} - \delta^{\alpha\alpha'} L^{[\beta\beta']} - \delta^{\beta\beta'} L^{[\alpha\alpha']} + \delta^{\alpha\beta'} L^{[\beta\alpha']}\bigr);
\]
they may be written more compactly as a single term using antisymmetrizers over the index
pairs $[\alpha\beta]$, $[\alpha'\beta']$ and $[\alpha''\beta'']$.
The essence of this calculation is manipulating indices and antisymmetrizers. If we draw lines
connecting dummy indices and a black box for each antisymmetrizer we can rewrite it in this
much more compact diagrammatic form.

[diagram: the commutator of two generators drawn with index lines and antisymmetrizer boxes]

Exercise 8: Write the structure constants for so(3) in terms of the dual generators L
where L[ij] = ijk Lk .
Hint: there are two ways to do this, you can expand everything out and collect terms, or
you can make explicit use of the symmetry or anti-symmetry under exchange of indices. The
latter is much easier once you are used to it.
Solution 8. Invert the implicit definition of $L_k$ by multiplying by $\epsilon_{ij\ell}$ to obtain
$\epsilon_{ij\ell} L^{[ij]} = \epsilon_{ij\ell}\epsilon_{ijk} L_k = 2\delta_{\ell k} L_k = 2L_\ell$, and hence
\[
[L_i, L_j]
= \tfrac{1}{4}\epsilon_{i'i''i}\,\epsilon_{j'j''j}\,[L^{[i'i'']}, L^{[j'j'']}]
= \tfrac{1}{4}\epsilon_{i'i''i}\,\epsilon_{j'j''j}\,c^{[i'i''][j'j'']}{}_{[k'k'']}\,L^{[k'k'']}
= \tfrac{1}{4}\epsilon_{i'i''i}\,\epsilon_{j'j''j}\,c^{[i'i''][j'j'']}{}_{[k'k'']}\,\epsilon_{k'k''k}\,L_k.
\]
Substituting the structure constants and contracting the Levi-Civita tensors pair by pair
(using $\epsilon_{abi}\epsilon_{abj} = 2\delta_{ij}$ and $\epsilon_{iab}\epsilon_{icd} = \delta_{ac}\delta_{bd} - \delta_{ad}\delta_{bc}$) collapses all the Kronecker deltas, leaving
\[
[L_i, L_j] = -\lambda\,\epsilon_{ijk} L_k,
\]
which may also be checked directly from the explicit matrices, e.g.
$[L_1, L_2] = [L^{[23]}, -L^{[13]}] = -\lambda L^{[12]} = -\lambda L_3$.
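The overall factor in the structure constants is easy to check by machine. A sketch assuming the $\lambda = 1$ normalization and 0-based indices:

```python
import numpy as np

def L(mu, nu):
    # fundamental so(3) generator L^[mu nu], lambda = 1 (assumption)
    M = np.zeros((3, 3))
    M[mu, nu], M[nu, mu] = 1.0, -1.0
    return M

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# dual generators L_k = (1/2) eps_{ijk} L^[ij]
Ld = [0.5 * sum(eps[i, j, k] * L(i, j) for i in range(3) for j in range(3))
      for k in range(3)]

# verify [L_i, L_j] = -eps_{ijk} L_k (lambda = 1)
for i in range(3):
    for j in range(3):
        comm = Ld[i] @ Ld[j] - Ld[j] @ Ld[i]
        expected = -sum(eps[i, j, k] * Ld[k] for k in range(3))
        assert np.allclose(comm, expected)
```

Running this for all nine index pairs confirms the coefficient in the dual-basis commutator.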

Adjoint Representation
Given any Lie algebra we can construct its adjoint representation as a set of linear
transformations acting on a linear space whose basis elements are labelled by the generators
of the Lie algebra. The action of $L_i$ on $L_j$ is defined to be $\mathrm{ad}(L_i)L_j = [L_i, L_j]$. Let's
verify that this actually provides a representation of the Lie algebra, in other words that the
matrices $\mathrm{ad}(L_i)$ satisfy the correct commutation relations:
\[
[\mathrm{ad}(L_i), \mathrm{ad}(L_j)]L_k
= \bigl(\mathrm{ad}(L_i)\,\mathrm{ad}(L_j) - \mathrm{ad}(L_j)\,\mathrm{ad}(L_i)\bigr)L_k
= \mathrm{ad}(L_i)[L_j, L_k] - \mathrm{ad}(L_j)[L_i, L_k]
\]
\[
= [L_i, [L_j, L_k]] - [L_j, [L_i, L_k]] = [[L_i, L_j], L_k] = \mathrm{ad}([L_i, L_j])L_k,
\]
where in the penultimate step we have made use of the Jacobi identity
\[
[L_i, [L_j, L_k]] + [L_j, [L_k, L_i]] + [L_k, [L_i, L_j]] = 0.
\]
Since the action of the adjoint commutator on $L_k$ holds for all $k$ we must have
$[\mathrm{ad}(L_i), \mathrm{ad}(L_j)] = \mathrm{ad}([L_i, L_j]) = \mathrm{ad}(c_{ij}{}^k L_k) = c_{ij}{}^k\,\mathrm{ad}(L_k)$ as required.
Another way of looking at this is that the adjoint representation of $L_i$ is the matrix $\mathrm{ad}(L_i)$
where $\mathrm{ad}(L_i)L_j = [L_i, L_j] = c_{ij}{}^k L_k$, hence the matrix element $\mathrm{ad}(L_i)^k{}_j = c_{ij}{}^k$.
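The matrix-element formula $\mathrm{ad}(L_i)^k{}_j = c_{ij}{}^k$ can be checked directly for so(3). A sketch, assuming $\lambda = 1$ and the dual basis $(L_k)_{ab} = \epsilon_{kab}$:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# dual-basis so(3) generators, for which [L_i, L_j] = -eps_{ijk} L_k
L = [eps[k] for k in range(3)]
c = -eps                       # structure constants c_{ij}^k = -eps_{ijk}

# adjoint matrices ad(L_i)^k_j = c_{ij}^k
ad = [np.array([[c[i, j, k] for j in range(3)] for k in range(3)]) for i in range(3)]

for i in range(3):
    for j in range(3):
        # the defining property ad(L_i) L_j = [L_i, L_j] ...
        assert np.allclose(L[i] @ L[j] - L[j] @ L[i],
                           sum(ad[i][k, j] * L[k] for k in range(3)))
        # ... and the representation property [ad(L_i), ad(L_j)] = c_{ij}^k ad(L_k)
        assert np.allclose(ad[i] @ ad[j] - ad[j] @ ad[i],
                           sum(c[i, j, k] * ad[k] for k in range(3)))
```

Both assertions pass for every index pair, exercising the Jacobi-identity argument above numerically.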
The adjoint representation of a Lie group is obtained by exponentiating the adjoint representation of the corresponding Lie algebra.
Exercise 9: Give an example of a physical field that transforms according to the adjoint
representation of the Lorentz group.
Hint: Think of a rank two antisymmetric tensor.

Solution 9. The electromagnetic field strength tensor $F^{\mu\nu}$ is a rank-2 antisymmetric tensor,
which means it transforms according to the adjoint representation of SO(3, 1). We usually
write such a transformation as $F^{\mu\nu} \mapsto \Lambda^{\mu}{}_{\mu'}\Lambda^{\nu}{}_{\nu'}F^{\mu'\nu'}$, where the antisymmetry in the Lorentz
transformation is implicit, but we could write the electric and magnetic fields E and B as
a 6-component vector, in which case the Lorentz transformation would be a $6\times6$ adjoint
matrix.

Exercise 10: Verify the commutation relations for the adjoint representation of so(n); in
other words verify that
\[
[\mathrm{ad}(L^{[\mu\nu]}), \mathrm{ad}(L^{[\mu'\nu']})]
= c^{[\mu\nu][\mu'\nu']}{}_{[\mu''\nu'']}\;\mathrm{ad}(L^{[\mu''\nu'']})
\]
where
\[
\mathrm{ad}(L^{[\mu\nu]})^{[\mu'\nu']}{}_{[\mu''\nu'']}
= c^{[\mu\nu][\mu'\nu']}{}_{[\mu''\nu'']}
= 8\lambda\,\delta^{[\mu'}_{[\mu''}\,\delta^{\nu']|\mu|}\,\delta_{|\nu|\nu'']}.
\]
The vertical bars around the $\mu$ and $\nu$ indicate that they are not included in the antisymmetrization of
the indices $\mu''$ and $\nu''$.
Solution 10. This is most easily carried out diagrammatically, manipulating the index lines
and antisymmetrizers as in the structure-constant calculation above. (The diagrams are not
reproduced here.)

In order to write the adjoint representation in terms of matrices we need to choose a
way of mapping pairs of indices $1 \le i < j \le n$ into a single integer $1 \le k \le \tfrac12 n(n-1)$.
We shall choose to do this by enumerating the pairs in lexicographical order, that
is $(1,2), (1,3), \ldots, (1,n), (2,3), (2,4), \ldots, (2,n), (3,4), \ldots, (3,n), \ldots, (n-1,n)$. The adjoint
representation of so(3) is isomorphic to the fundamental representation (in particular their
dimensions are the same, $3 = \tfrac12\cdot3(3-1)$), but this is not the case for $n > 3$.
Exercise 11: How would the electric and magnetic field components appear if arranged in
a 6-component adjoint vector using the lexicographical ordering?
Solution 11.
\[
F^{\mu\nu} = \begin{pmatrix}
0 & -E_x & -E_y & -E_z \\
E_x & 0 & -B_z & B_y \\
E_y & B_z & 0 & -B_x \\
E_z & -B_y & B_x & 0
\end{pmatrix}
\quad\mapsto\quad
\begin{pmatrix} F^{01}\\ F^{02}\\ F^{03}\\ F^{12}\\ F^{13}\\ F^{23} \end{pmatrix}
=
\begin{pmatrix} -E_x\\ -E_y\\ -E_z\\ -B_z\\ B_y\\ -B_x \end{pmatrix},
\]
where we have made a trivial modification to allow the fundamental indices to run from 0
to 3 rather than 1 to 4.
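The bookkeeping in this mapping is easy to get wrong, so here is a small sketch; the sign conventions for E and B in the matrix are an assumption matching the display above, and the helper names are our own:

```python
import numpy as np

def F_matrix(E, B):
    # field-strength matrix with F^{0i} = -E_i, F^{ij} = -eps_{ijk} B_k (assumed convention)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0, -Ex, -Ey, -Ez],
                     [Ex, 0, -Bz, By],
                     [Ey, Bz, 0, -Bx],
                     [Ez, -By, Bx, 0]])

def adjoint_vector(F):
    # lexicographic pairs (0,1), (0,2), (0,3), (1,2), (1,3), (2,3)
    pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)]
    return np.array([F[a, b] for a, b in pairs])

v = adjoint_vector(F_matrix((1, 2, 3), (4, 5, 6)))
assert np.allclose(v, [-1, -2, -3, -6, 5, -4])
```

The assertion spells out which component of E or B lands in each slot of the 6-vector.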

Exercise 12:
1. Write down the generators of SO(n) in the adjoint representation for n = 3 and n = 4.
2. Verify that the generators satisfy $\operatorname{tr} L^{[\mu\nu]} L^{[\mu\nu]} = -2(n-2)\lambda^2$ (no sum over $\mu$, $\nu$).
Solution 12.
1. For n = 3 we have the $\tfrac12\cdot3(3-1) = 3$ generators
\[
L^{[12]} = \lambda\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix},\quad
L^{[13]} = \lambda\begin{pmatrix}0&0&-1\\0&0&0\\1&0&0\end{pmatrix},\quad
L^{[23]} = \lambda\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}.
\]
These are just a permutation of the fundamental generators, but this is not the case
for $n > 3$. For n = 4 we have the $\tfrac12\cdot4(4-1) = 6$ generators, acting on the basis
$(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)$:
\[
L^{[12]} = \lambda\begin{pmatrix}0&0&0&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&-1&0&0&0&0\\0&0&-1&0&0&0\\0&0&0&0&0&0\end{pmatrix},\quad
L^{[13]} = \lambda\begin{pmatrix}0&0&0&-1&0&0\\0&0&0&0&0&0\\0&0&0&0&0&1\\1&0&0&0&0&0\\0&0&0&0&0&0\\0&0&-1&0&0&0\end{pmatrix},
\]
\[
L^{[14]} = \lambda\begin{pmatrix}0&0&0&0&-1&0\\0&0&0&0&0&-1\\0&0&0&0&0&0\\0&0&0&0&0&0\\1&0&0&0&0&0\\0&1&0&0&0&0\end{pmatrix},\quad
L^{[23]} = \lambda\begin{pmatrix}0&1&0&0&0&0\\-1&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&1\\0&0&0&0&-1&0\end{pmatrix},
\]
\[
L^{[24]} = \lambda\begin{pmatrix}0&0&1&0&0&0\\0&0&0&0&0&0\\-1&0&0&0&0&0\\0&0&0&0&0&-1\\0&0&0&0&0&0\\0&0&0&1&0&0\end{pmatrix},\quad
L^{[34]} = \lambda\begin{pmatrix}0&0&0&0&0&0\\0&0&1&0&0&0\\0&-1&0&0&0&0\\0&0&0&0&1&0\\0&0&0&-1&0&0\\0&0&0&0&0&0\end{pmatrix}.
\]
2. This just requires taking the trace of products of these matrices: each generator is an
antisymmetric matrix with $2(n-2)$ non-zero entries $\pm\lambda$, giving
$\operatorname{tr} L^{[\mu\nu]} L^{[\mu\nu]} = -2(n-2)\lambda^2$.
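Rather than transcribing matrices by hand, the adjoint generators can be built directly from the definition $\mathrm{ad}(L)X = [L, X]$ and the trace identity checked for several n at once. A sketch, assuming $\lambda = 1$ and 0-based index pairs:

```python
import numpy as np

def Lfund(mu, nu, n):
    # fundamental so(n) generator, lambda = 1 (assumption)
    M = np.zeros((n, n))
    M[mu, nu], M[nu, mu] = 1.0, -1.0
    return M

def adjoint_generators(n):
    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    gens = []
    for (mu, nu) in pairs:
        A = Lfund(mu, nu, n)
        ad = np.zeros((len(pairs), len(pairs)))
        for col, (c, d) in enumerate(pairs):
            comm = A @ Lfund(c, d, n) - Lfund(c, d, n) @ A
            # expand the commutator in the basis L^[ab], a < b
            for row, (a, b) in enumerate(pairs):
                ad[row, col] = comm[a, b]
        gens.append(ad)
    return gens

for n in (3, 4):
    gens = adjoint_generators(n)
    assert len(gens) == n * (n - 1) // 2
    for ad in gens:
        # tr L^[mu nu] L^[mu nu] = -2 (n - 2) lambda^2, with lambda = 1
        assert np.isclose(np.trace(ad @ ad), -2 * (n - 2))
```

The same helper works for any n, which makes the "not the case for n > 3" remark easy to explore.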

Exercise 13: Use these generators to show how the E and B fields transform under a finite
rotation in the y–z plane.
Solution 13. A rotation through an angle $\theta$ about the x axis is given by the adjoint group
element $R_{23}(\theta) = \exp(\theta L^{[23]}/\lambda)$, and since $(L^{[23]}/\lambda)^2 = -\mathrm{diag}(0,1,1,1,1,0)$ this is
\[
R_{23}(\theta) = I - (1-\cos\theta)\,\mathrm{diag}(0,1,1,1,1,0) + \sin\theta\;L^{[23]}/\lambda
= \begin{pmatrix}
1&0&0&0&0&0\\
0&\cos\theta&-\sin\theta&0&0&0\\
0&\sin\theta&\cos\theta&0&0&0\\
0&0&0&\cos\theta&-\sin\theta&0\\
0&0&0&\sin\theta&\cos\theta&0\\
0&0&0&0&0&1
\end{pmatrix}.
\]
The generator was normalized such that $R_{23}(2\pi) = I$. We thus find that
\[
R_{23}(\theta)
\begin{pmatrix}-E_x\\-E_y\\-E_z\\-B_z\\B_y\\-B_x\end{pmatrix}
= \begin{pmatrix}
-E_x\\
-(E_y\cos\theta - E_z\sin\theta)\\
-(E_y\sin\theta + E_z\cos\theta)\\
-(B_z\cos\theta + B_y\sin\theta)\\
B_y\cos\theta - B_z\sin\theta\\
-B_x
\end{pmatrix}
= \begin{pmatrix}-E_x'\\-E_y'\\-E_z'\\-B_z'\\B_y'\\-B_x'\end{pmatrix},
\]
where E′ and B′ are the 3-vectors E and B rotated through angle θ in the y–z plane.
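That both E and B rotate as ordinary 3-vectors can be verified without ever writing the 6×6 matrix, by acting on $F^{\mu\nu}$ with the fundamental transformation. A sketch (sign conventions for $F$ and the λ = 1 normalization are assumptions carried over from the previous exercise):

```python
import numpy as np

def expm(A, terms=40):
    # plain Taylor series; adequate for these small bounded generators
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def F_matrix(E, B):
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0, -Ex, -Ey, -Ez],
                     [Ex, 0, -Bz, By],
                     [Ey, Bz, 0, -Bx],
                     [Ez, -By, Bx, 0]])

theta = 0.7
G = np.zeros((4, 4))
G[2, 3], G[3, 2] = 1.0, -1.0          # rotation generator in the y-z plane
Lam = expm(theta * G)

E = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
Fp = Lam @ F_matrix(E, B) @ Lam.T     # rank-2 tensor (adjoint) action

c, s = np.cos(theta), np.sin(theta)
R = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])   # the same rotation on 3-vectors
assert np.allclose(Fp, F_matrix(R @ E, R @ B))     # E and B rotate as 3-vectors
```

Because the dual-vector map B ↦ F^{ij} uses the rotation-invariant Levi-Civita symbol, the equality holds for any angle.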

Clifford Algebras and Spinor Representation


so(n) also has a spinor representation. These were discovered by Élie Cartan in 1913, who
systematically constructed all the representations of Lie algebras using root and weight
spaces, but we shall follow the approach of Dirac (1927) which leads us to consider Clifford
algebras.

A Clifford algebra is generated by a set of n matrices $\gamma^\mu$ satisfying the anticommutation
relations $\{\gamma^\mu, \gamma^\nu\} = 2\delta^{\mu\nu}I$, and these relations are invariant under SO(n) because the Euclidean metric
tensor $g^{\mu\nu} = \delta^{\mu\nu}$ is invariant.
The first question is whether such matrices exist. For n = 2 the Pauli matrices
\[
\sigma_1 = \begin{pmatrix}0&1\\1&0\end{pmatrix}
\quad\text{and}\quad
\sigma_2 = \begin{pmatrix}0&-i\\i&0\end{pmatrix}
\]
satisfy the anticommutation relations. In four dimensions we may define the $4\times4$ matrices
\[
\gamma_0 = I\otimes\sigma_1,\qquad
\gamma_1 = I\otimes\sigma_2,\qquad
\gamma_2 = \sigma_1\otimes\sigma_3,\qquad
\gamma_3 = \sigma_2\otimes\sigma_3,
\]
where $\sigma_3 = -i\sigma_1\sigma_2$, $I$ is the $2\times2$ unit matrix, and $A\otimes B$ is the tensor product defined by
$(A\otimes B)_{ii',jj'} = A_{ij}B_{i'j'}$. In this case $i, i', j$, and $j'$ can each take the values 0 or 1, so we
can interpret the index pair $ii'$ as a binary representation of a number between 0 and 3. We
may therefore write these matrices in block form as
\[
\gamma_0 = \begin{pmatrix}\sigma_1&0\\0&\sigma_1\end{pmatrix},\quad
\gamma_1 = \begin{pmatrix}\sigma_2&0\\0&\sigma_2\end{pmatrix},\quad
\gamma_2 = \begin{pmatrix}0&\sigma_3\\\sigma_3&0\end{pmatrix},\quad
\gamma_3 = \begin{pmatrix}0&-i\sigma_3\\i\sigma_3&0\end{pmatrix}.
\]
Exercise 14: Verify that these satisfy the anticommutation relations.
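The verification is a mechanical matrix computation, so it is a natural candidate for a quick numerical check. A sketch using `numpy.kron` for the tensor product (whose block ordering matches the convention above, first factor giving the block structure):

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# the four Euclidean gamma matrices defined above
gammas = [np.kron(I2, s1), np.kron(I2, s2), np.kron(s1, s3), np.kron(s2, s3)]

for mu, gm in enumerate(gammas):
    for nu, gn in enumerate(gammas):
        anti = gm @ gn + gn @ gm
        assert np.allclose(anti, 2 * (mu == nu) * np.eye(4))   # {g^mu, g^nu} = 2 delta^{mu nu} I
```

All sixteen anticommutators come out as required, confirming the Clifford relations.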
In $n = 2\ell$ dimensions we can construct a suitable set of $2^\ell\times2^\ell$ matrices by defining
\[
\gamma_{2j} = I\otimes\cdots\otimes I\otimes\sigma_1\otimes\sigma_3\otimes\cdots\otimes\sigma_3,\qquad
\gamma_{2j+1} = I\otimes\cdots\otimes I\otimes\sigma_2\otimes\sigma_3\otimes\cdots\otimes\sigma_3,
\tag{2}
\]
where there are $\ell - j - 1$ identity matrices on the left and $j$ factors of $\sigma_3$ on the right,
for $j = 0, \ldots, \ell - 1$.
In order to construct the spinor representation $S$ we require the following identity
\[
S^{-1}(\theta)\,\gamma^\mu\,S(\theta) = R(\theta)^\mu{}_\nu\,\gamma^\nu
\tag{3}
\]
to hold for all group parameters $\theta$, where $R$ is the fundamental representation. If this holds
then we have
\[
\begin{aligned}
R(\theta'')^\mu{}_\nu\gamma^\nu
&= R(\theta)^\mu{}_\rho\left[R(\theta')^\rho{}_\nu\gamma^\nu\right]\\
&= R(\theta)^\mu{}_\rho\left[S^{-1}(\theta')\,\gamma^\rho\,S(\theta')\right]\\
&= S^{-1}(\theta')\left[R(\theta)^\mu{}_\rho\gamma^\rho\right]S(\theta')\\
&= S^{-1}(\theta')\left[S^{-1}(\theta)\,\gamma^\mu\,S(\theta)\right]S(\theta')\\
&= \left[S(\theta)S(\theta')\right]^{-1}\gamma^\mu\left[S(\theta)S(\theta')\right]\\
&= S(\theta'')^{-1}\,\gamma^\mu\,S(\theta''),
\end{aligned}
\]
where $\theta''$ are the parameters corresponding to a transformation by $\theta'$ followed by a transformation by $\theta$. We thus see that we can define $S(\theta'') = S(\theta)S(\theta')$, so $S$ is indeed a
representation (up to a sign).
The next question is to find an $S$ that satisfies equation (3). To do this we follow the same
strategy as we did before, namely we consider group elements $S(\theta) = I + \theta_{\mu\nu}\Sigma^{\mu\nu} + O(\theta^2)$
in the neighbourhood of the identity (remember that in so(n) the group parameters are
naturally labelled by a pair of indices $1 \le \mu < \nu \le n$).

Exercise 15: Show that $S(\theta)^{-1} = I - \theta_{\mu\nu}\Sigma^{\mu\nu} + O(\theta^2)$.

Hint: consider $S(\theta)^{-1}S(\theta) = I$.

Solution 15. $S(\theta)^{-1}$ is the spinor representation of a group element near the identity, and
therefore it corresponds to some parameters $\phi$ and can be expressed as
\[
S(\theta)^{-1} = S(\phi) = \exp\left(\phi_{\mu\nu}\Sigma^{\mu\nu}\right) = I + \phi_{\mu\nu}\Sigma^{\mu\nu} + O(\phi^2),
\]
hence
\[
S(\theta)^{-1}S(\theta) = S(\phi)S(\theta)
= \left(I + \phi_{\mu\nu}\Sigma^{\mu\nu} + O(\phi^2)\right)\left(I + \theta_{\mu\nu}\Sigma^{\mu\nu} + O(\theta^2)\right)
= I + \phi_{\mu\nu}\Sigma^{\mu\nu} + \theta_{\mu\nu}\Sigma^{\mu\nu} + O(\theta^2, \phi^2, \phi\theta).
\]
Since $S(\theta)^{-1}S(\theta) = I$ for all $\theta$ in some neighbourhood of the identity we must have
$\phi_{\mu\nu} = -\theta_{\mu\nu}$.

To leading order in $\theta$ equation (3) is
\[
\left(I - \theta_{\rho\sigma}\Sigma^{\rho\sigma}\right)\gamma^\mu\left(I + \theta_{\rho\sigma}\Sigma^{\rho\sigma}\right) + \cdots
= \gamma^\mu + \theta_{\rho\sigma}\,(L^{[\rho\sigma]})^\mu{}_\nu\,\gamma^\nu + \cdots,
\]
where $(L^{[\rho\sigma]})^\mu{}_\nu = 2\lambda\,\delta^{[\rho\mu}\delta^{\sigma]}{}_\nu$ are the generators in the fundamental representation. We thus
obtain
\[
[\gamma^\mu, \Sigma^{\rho\sigma}] = (L^{[\rho\sigma]})^\mu{}_\nu\,\gamma^\nu = \lambda\left(\delta^{\mu\rho}\gamma^\sigma - \delta^{\mu\sigma}\gamma^\rho\right).
\]

Exercise 16:
1. Verify that $\Sigma^{\rho\sigma} = \tfrac{\lambda}{4}[\gamma^\rho, \gamma^\sigma]$ is a solution of this equation.
2. Enumerate the spinor generators for n = 4 using the representation of the $\gamma$ matrices
above.
3. Verify the commutation relations for these generators.
Solution 16.
1. This just requires substituting the proposed solution into the equation,
\[
\left[\gamma^\mu, \tfrac{\lambda}{4}[\gamma^\rho, \gamma^\sigma]\right]
= \tfrac{\lambda}{4}\left(\gamma^\mu\gamma^\rho\gamma^\sigma - \gamma^\rho\gamma^\sigma\gamma^\mu\right)
- \tfrac{\lambda}{4}\left(\gamma^\mu\gamma^\sigma\gamma^\rho - \gamma^\sigma\gamma^\rho\gamma^\mu\right),
\]
and using
$\gamma^\mu\gamma^\rho\gamma^\sigma = \{\gamma^\mu,\gamma^\rho\}\gamma^\sigma - \gamma^\rho\{\gamma^\mu,\gamma^\sigma\} + \gamma^\rho\gamma^\sigma\gamma^\mu
= 2\delta^{\mu\rho}\gamma^\sigma - 2\delta^{\mu\sigma}\gamma^\rho + \gamma^\rho\gamma^\sigma\gamma^\mu$
in both brackets,
\[
= \tfrac{\lambda}{4}\left(2\delta^{\mu\rho}\gamma^\sigma - 2\delta^{\mu\sigma}\gamma^\rho\right)
- \tfrac{\lambda}{4}\left(2\delta^{\mu\sigma}\gamma^\rho - 2\delta^{\mu\rho}\gamma^\sigma\right)
= \lambda\left(\delta^{\mu\rho}\gamma^\sigma - \delta^{\mu\sigma}\gamma^\rho\right).
\]
2. Again, this is just an exercise in multiplying matrices. Taking $\lambda = 1$, so that
$\Sigma^{\mu\nu} = \tfrac12\gamma^\mu\gamma^\nu$ for $\mu\neq\nu$ (with the indices 1–4 labelling the four $\gamma$ matrices above),
\[
\Sigma^{12} = \tfrac{i}{2}\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&1&0\\0&0&0&-1\end{pmatrix},\qquad
\Sigma^{13} = \tfrac{i}{2}\begin{pmatrix}0&0&0&i\\0&0&-i&0\\0&i&0&0\\-i&0&0&0\end{pmatrix},
\]
\[
\Sigma^{14} = \tfrac{i}{2}\begin{pmatrix}0&0&0&1\\0&0&-1&0\\0&-1&0&0\\1&0&0&0\end{pmatrix},\qquad
\Sigma^{23} = \tfrac{i}{2}\begin{pmatrix}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{pmatrix},
\]
\[
\Sigma^{24} = \tfrac{i}{2}\begin{pmatrix}0&0&0&-i\\0&0&-i&0\\0&i&0&0\\i&0&0&0\end{pmatrix},\qquad
\Sigma^{34} = \tfrac{i}{2}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}.
\]
3. Another exercise in matrix multiplication.
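Both the defining relation and the enumeration above can be checked in a few lines. A sketch with $\lambda = 1$ and 0-based indices:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gam = [np.kron(I2, s1), np.kron(I2, s2), np.kron(s1, s3), np.kron(s2, s3)]

# spinor generators Sigma^{rho sigma} = (1/4)[gamma^rho, gamma^sigma]
Sig = {(r, s): 0.25 * (gam[r] @ gam[s] - gam[s] @ gam[r])
       for r in range(4) for s in range(4)}

# check [gamma^mu, Sigma^{rho sigma}] = delta^{mu rho} gamma^sigma - delta^{mu sigma} gamma^rho
for mu in range(4):
    for r in range(4):
        for s in range(4):
            lhs = gam[mu] @ Sig[r, s] - Sig[r, s] @ gam[mu]
            rhs = (mu == r) * gam[s] - (mu == s) * gam[r]
            assert np.allclose(lhs, rhs)
```

This exercises all 64 index combinations of the defining equation at once.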

Spinor Traces and Antisymmetric Basis


A powerful tool for manipulating expressions built out of $\gamma$ matrices is the trace operation. A
trace is linear, $\operatorname{tr}(\alpha A + \beta B) = \alpha\operatorname{tr} A + \beta\operatorname{tr} B$, and cyclic, $\operatorname{tr} AB = \operatorname{tr} BA$, so we immediately
have $2\delta^{\mu\nu}\operatorname{tr} I = \operatorname{tr}\{\gamma^\mu, \gamma^\nu\} = \operatorname{tr}\gamma^\mu\gamma^\nu + \operatorname{tr}\gamma^\nu\gamma^\mu = 2\operatorname{tr}\gamma^\mu\gamma^\nu$.

We may define $\gamma_5 = \gamma_1\cdots\gamma_n$, which anticommutes with all the other $\gamma^\mu$ if n is even. This
is easy to see because $\gamma^\mu$ anticommutes with the $n-1$ factors $\gamma^\nu$ with $\nu \neq \mu$ in
$\gamma_5$ and commutes with itself. The product of the n matrices in $\gamma_5 = \gamma_1\cdots\gamma_n$ can be
reversed by swapping $(n-1) + (n-2) + \cdots + 1 = n(n-1)/2$ adjacent matrices, giving
$\gamma_n\cdots\gamma_2\gamma_1 = (-1)^{n(n-1)/2}\gamma_5$, and
$\gamma_n\cdots\gamma_2\gamma_1\,\gamma_5 = \gamma_n\cdots\gamma_2\gamma_1\,\gamma_1\gamma_2\cdots\gamma_n = \gamma_n\cdots\gamma_2\,\gamma_2\cdots\gamma_n = I$
since $\gamma_\mu^2 = \tfrac12\{\gamma_\mu, \gamma_\mu\} = \delta_{\mu\mu}I = I$ (not summed over $\mu$). We thus deduce that
$\gamma_5^2 = (-1)^{n(n-1)/2}I = (-1)^{j(2j-1)}I = \pm I$ for any even value $n = 2j$ in Euclidean space; indeed it is
$+I$ if n is a multiple of four and $-I$ if it is an odd multiple of two. The terminology $\gamma_5$ is
rather unfortunate for $n > 4$, so caveat emptor.

Consider $\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2\ell+1}} = \pm\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2\ell+1}}\gamma_5^2 = \mp\operatorname{tr}\gamma_5\gamma^{\mu_1}\cdots\gamma^{\mu_{2\ell+1}}\gamma_5$, where we get a minus
sign from anticommuting $\gamma_5$ through an odd number of $\gamma$ matrices; this equals $\mp\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2\ell+1}}\gamma_5^2$
because the trace is cyclic, which is $-\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2\ell+1}}$, and thus we have established that the trace
of an odd number of $\gamma$ matrices vanishes for n even, and in particular $\operatorname{tr}\gamma^\mu = 0$.
We may evaluate the trace of a product of 2j $\gamma$ matrices using the following recursion relation
\[
\begin{aligned}
\operatorname{tr}\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}}
&= \operatorname{tr}\{\gamma^{\mu_1},\gamma^{\mu_2}\}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}} - \operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_1}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}}\\
&= 2\delta^{\mu_1\mu_2}\operatorname{tr}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}} - \operatorname{tr}\gamma^{\mu_2}\{\gamma^{\mu_1},\gamma^{\mu_3}\}\cdots\gamma^{\mu_{2j}} + \operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\gamma^{\mu_1}\cdots\gamma^{\mu_{2j}}\\
&= 2\delta^{\mu_1\mu_2}\operatorname{tr}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}} - 2\delta^{\mu_1\mu_3}\operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_4}\cdots\gamma^{\mu_{2j}} + \operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\gamma^{\mu_1}\cdots\gamma^{\mu_{2j}}\\
&= \sum_{\ell=2}^{2j}(-1)^\ell\,2\delta^{\mu_1\mu_\ell}\operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\widehat{\gamma^{\mu_\ell}}\cdots\gamma^{\mu_{2j}} - \operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}}\gamma^{\mu_1}\\
&= \sum_{\ell=2}^{2j}(-1)^\ell\,2\delta^{\mu_1\mu_\ell}\operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\widehat{\gamma^{\mu_\ell}}\cdots\gamma^{\mu_{2j}} - \operatorname{tr}\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}},
\end{aligned}
\]
whence
\[
\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2j}}
= \sum_{\ell=2}^{2j}(-1)^\ell\,\delta^{\mu_1\mu_\ell}\operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\widehat{\gamma^{\mu_\ell}}\cdots\gamma^{\mu_{2j}},
\]
where $\widehat{\gamma^{\mu_\ell}}$ indicates that $\gamma^{\mu_\ell}$ is omitted from the product.


Exercise 17: Show that for n even
1. The trace of four $\gamma$ matrices is given by
\[
\operatorname{tr}\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma = \left(\delta^{\mu\nu}\delta^{\rho\sigma} - \delta^{\mu\rho}\delta^{\nu\sigma} + \delta^{\mu\sigma}\delta^{\nu\rho}\right)\operatorname{tr} I.
\]
2. The trace of six $\gamma$ matrices is given by
\[
\begin{aligned}
\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_6} = \bigl(
&\phantom{+}\delta^{\mu_1\mu_2}\delta^{\mu_3\mu_4}\delta^{\mu_5\mu_6} - \delta^{\mu_1\mu_2}\delta^{\mu_3\mu_5}\delta^{\mu_4\mu_6} + \delta^{\mu_1\mu_2}\delta^{\mu_3\mu_6}\delta^{\mu_4\mu_5}
- \delta^{\mu_1\mu_3}\delta^{\mu_2\mu_4}\delta^{\mu_5\mu_6}\\
&+ \delta^{\mu_1\mu_3}\delta^{\mu_2\mu_5}\delta^{\mu_4\mu_6} - \delta^{\mu_1\mu_3}\delta^{\mu_2\mu_6}\delta^{\mu_4\mu_5}
+ \delta^{\mu_1\mu_4}\delta^{\mu_2\mu_3}\delta^{\mu_5\mu_6} - \delta^{\mu_1\mu_4}\delta^{\mu_2\mu_5}\delta^{\mu_3\mu_6}\\
&+ \delta^{\mu_1\mu_4}\delta^{\mu_2\mu_6}\delta^{\mu_3\mu_5} - \delta^{\mu_1\mu_5}\delta^{\mu_2\mu_3}\delta^{\mu_4\mu_6} + \delta^{\mu_1\mu_5}\delta^{\mu_2\mu_4}\delta^{\mu_3\mu_6} - \delta^{\mu_1\mu_5}\delta^{\mu_2\mu_6}\delta^{\mu_3\mu_4}\\
&+ \delta^{\mu_1\mu_6}\delta^{\mu_2\mu_3}\delta^{\mu_4\mu_5} - \delta^{\mu_1\mu_6}\delta^{\mu_2\mu_4}\delta^{\mu_3\mu_5} + \delta^{\mu_1\mu_6}\delta^{\mu_2\mu_5}\delta^{\mu_3\mu_4}
\bigr)\operatorname{tr} I.
\end{aligned}
\]
3. The trace of 2j $\gamma$ matrices is given by
\[
\operatorname{tr}\gamma^{\mu_1}\cdots\gamma^{\mu_{2j}} = \frac{1}{2^j\,j!}\sum_{\pi}\epsilon^0_\pi\,\delta^{\mu_{\pi_1}\mu_{\pi_2}}\cdots\delta^{\mu_{\pi_{2j-1}}\mu_{\pi_{2j}}}\,\operatorname{tr} I,
\]
where the sum is over all permutations $\pi$ of the numbers $1, \ldots, 2j$ and $\epsilon^0_\pi = \pm1$. If
you write the integers $1, \ldots, 2j$ in a row and draw lines connecting the pairs $(\pi_1, \pi_2)$,
$(\pi_3, \pi_4), \ldots, (\pi_{2j-1}, \pi_{2j})$ then $\epsilon^0_\pi = +1$ if the lines cross an even number of times and
$-1$ if they cross an odd number of times.
Hint: It may help to use the notation $\langle k\,\ell\rangle \equiv \delta^{\mu_k\mu_\ell}$; then our recursion relation becomes
\[
\operatorname{tr}\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\gamma^{\mu_{2j}}
= \sum_{\ell=2}^{2j}(-1)^\ell\,\langle1\,\ell\rangle\,\operatorname{tr}\gamma^{\mu_2}\gamma^{\mu_3}\cdots\widehat{\gamma^{\mu_\ell}}\cdots\gamma^{\mu_{2j}}.
\]

Solution 17.
1. Using the notation from the hint,
\[
\begin{aligned}
\operatorname{tr}\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma
&= \langle\mu\,\nu\rangle\operatorname{tr}\gamma^\rho\gamma^\sigma - \langle\mu\,\rho\rangle\operatorname{tr}\gamma^\nu\gamma^\sigma + \langle\mu\,\sigma\rangle\operatorname{tr}\gamma^\nu\gamma^\rho\\
&= \left(\delta^{\mu\nu}\delta^{\rho\sigma} - \delta^{\mu\rho}\delta^{\nu\sigma} + \delta^{\mu\sigma}\delta^{\nu\rho}\right)\operatorname{tr} I.
\end{aligned}
\]
2. Likewise, applying the recursion relation once to strip off $\gamma^{\mu_1}$ and then using the result
for four $\gamma$ matrices on each of the five remaining traces reproduces the fifteen terms quoted
in the exercise, with the signs determined by the crossing rule.
3. This formula follows from the recursion relation by making the observation that each term
in the result corresponds to a choice of j pairs of indices from 2j indices, and the number
of ways of choosing such pairs is $(2j)!/(2^j\,j!)$. The rule for the sign $\epsilon^0_\pi$ follows by looking
at the preceding examples.
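The four-matrix trace formula is a good one to confirm by brute force in the explicit n = 4 representation, where $\operatorname{tr} I = 4$. A sketch:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gam = [np.kron(I2, s1), np.kron(I2, s2), np.kron(s1, s3), np.kron(s2, s3)]

d = np.eye(4)   # delta^{mu nu} on the four index values
trI = 4.0

for m in range(4):
    for n in range(4):
        for r in range(4):
            for s in range(4):
                lhs = np.trace(gam[m] @ gam[n] @ gam[r] @ gam[s])
                rhs = (d[m, n] * d[r, s] - d[m, r] * d[n, s] + d[m, s] * d[n, r]) * trI
                assert np.isclose(lhs.real, rhs) and np.isclose(lhs.imag, 0.0)
```

All 256 index combinations agree with the Wick-like pairing formula.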

The fact that the trace of any number of $\gamma$ matrices can be expressed as a sum of products of
Kronecker tensors immediately tells us that the trace of an antisymmetrized product of one
or more $\gamma$ matrices must vanish (for n even). We therefore define the totally antisymmetric
tensors
\[
\gamma^{(j)}_{\mu_1\ldots\mu_j} = \gamma_{[\mu_1}\cdots\gamma_{\mu_j]}
\qquad\text{for } j = 0, \ldots, n,
\tag{4}
\]
for which $\operatorname{tr}\gamma^{(j)} = 0$ for $j > 0$. A similar argument shows that
\[
\operatorname{tr}\gamma^{(j)}_{\mu_1\ldots\mu_j}\,\gamma^{(k)\,\nu_1\ldots\nu_k}
= k!\,\delta^{[\nu_1}_{\mu_1}\cdots\delta^{\nu_k]}_{\mu_j}\operatorname{tr} I
\]
if $j = k$, and vanishes if $j \neq k$. A fairly simple formula also exists for
the trace of three $\gamma$ tensors $\operatorname{tr}\gamma^{(j)}\gamma^{(k)}\gamma^{(\ell)}$.

We can reduce any element X of the Clifford algebra into a sum of $\gamma$ tensors with coefficients
that are (tensor) multiples of the unit spinor matrix using the anticommutation relations,
$X = \sum_{j=0}^{n} c^{\mu_1\ldots\mu_j}_{(j)}\,\gamma^{(j)}_{\mu_1\ldots\mu_j}$, and the coefficients are easily evaluated as traces
\[
\operatorname{tr} X\,\gamma^{(k)\,\nu_1\ldots\nu_k}
= \sum_{j=0}^{n} c^{\mu_1\ldots\mu_j}_{(j)}\operatorname{tr}\gamma^{(j)}_{\mu_1\ldots\mu_j}\gamma^{(k)\,\nu_1\ldots\nu_k}
= c^{[\nu_1\ldots\nu_k]}_{(k)}\,k!\operatorname{tr} I.
\]

In four dimensions the $\gamma$ tensors are just the scalar $\gamma^{(0)} = I$, vector $\gamma^{(1)}_\mu = \gamma_\mu$, tensor
$\gamma^{(2)}_{\mu\nu} = \gamma_{[\mu}\gamma_{\nu]} = \tfrac12[\gamma_\mu, \gamma_\nu]$ (the generators of so(4) up to normalization), the pseudoscalar,
which is dual to the scalar with respect to the Levi-Civita tensor,
\[
\gamma^{(4)}_{\mu\nu\rho\sigma} = \gamma_{[\mu}\gamma_\nu\gamma_\rho\gamma_{\sigma]}
= \epsilon_{\mu\nu\rho\sigma}\,\frac{1}{4!}\,\epsilon^{\mu'\nu'\rho'\sigma'}\gamma_{\mu'}\gamma_{\nu'}\gamma_{\rho'}\gamma_{\sigma'}
= \epsilon_{\mu\nu\rho\sigma}\,\gamma_5,
\]
and the axial vector $\gamma_5\gamma_\mu$, which is dual to $\gamma^{(3)}$: writing $\gamma_5\gamma_\mu = c_{\mu\nu\rho\sigma}\gamma^{(3)\nu\rho\sigma}$ and taking the
trace with $\gamma^{(3)}$ fixes the coefficient, giving
\[
\gamma_5\gamma_\mu = -\frac{1}{3!}\,\epsilon_{\mu\nu\rho\sigma}\,\gamma^{(3)\nu\rho\sigma}.
\]
The number of independent components of the antisymmetric tensor $\gamma^{(j)}$ is $\binom{n}{j}$, so the
dimension of the Clifford algebra generated by $\gamma_1, \ldots, \gamma_n$ is
\[
\sum_{j=0}^{n}\binom{n}{j} = \sum_{j=0}^{n}\binom{n}{j}\,1^j\,1^{n-j} = (1+1)^n = 2^n.
\]
We therefore see that the $\gamma$ representation matrices have the same number of degrees of freedom
if the spinors have $2^{n/2}$ components.

Real Forms
So far we have been considering so(n) in Euclidean space, but we are really interested in
Minkowski space. Fortunately it is very easy to relate the two. If we start with the algebra
so(n, C), that is the Lie algebra with complex coefficients and Euclidean generators, then
we can construct its real form so(n, R) by restricting the coefficients to be real numbers.
Since the Euclidean generators are antihermitian matrices all the matrices in so(n, R) are
also antihermitian, and therefore their exponentials, the elements of a representation of
the group SO(n, R), are unitary.

Exercise 18: Show that the exponential of an antihermitian matrix is unitary.


Solution 18. If A is an antihermitian matrix, $A^\dagger = -A$, then it is normal, $[A, A^\dagger] = -[A, A] =
0$, so it can be fully diagonalized, and moreover all its eigenvalues are purely imaginary,
since $Au = \lambda u$ implies that $\lambda(u, u) = (u, Au) = (A^\dagger u, u) = -(Au, u) = -\bar\lambda(u, u)$ and
thus $\bar\lambda = -\lambda$. Furthermore its eigenvectors provide a unitary matrix that diagonalizes A,
because if $Av = \lambda' v$ then $\lambda(v, u) = (v, Au) = (A^\dagger v, u) = -(Av, u) = -\bar\lambda'(v, u)$, whence
$(\lambda + \bar\lambda')(v, u) = 0$. This means that $(v, u) = 0$ as required unless $\lambda' = \lambda$, in which case
we can use the Gram–Schmidt procedure to construct $u' = u - v(v, u)$, which belongs to
the eigenvalue $\lambda$ and satisfies $(v, u') = (v, u) - (v, v)(v, u) = 0$ since we may take v to be
normalized, $(v, v) = 1$. We may then normalize $u'' = u'/|u'|$ such that $|u''|^2 = (u'', u'') = 1$.

We may thus compute $\exp A$ by first diagonalizing $A = UDU^\dagger$ and then exponentiating
it: $\exp A = \exp(UDU^\dagger) = \sum_{j=0}^{\infty}\tfrac{1}{j!}(UDU^\dagger)^j = U\left(\sum_{j=0}^{\infty}\tfrac{1}{j!}D^j\right)U^\dagger = U(\exp D)U^\dagger$, where
we have noted that $(UDU^\dagger)^j = UD^jU^\dagger$ since the matrix U of its eigenvectors is unitary. Finally we observe that
$(\exp A)^\dagger\exp A = (U\exp D\,U^\dagger)^\dagger\,U\exp D\,U^\dagger = U\exp D^\dagger\,U^\dagger U\exp D\,U^\dagger = U\exp(-D)\exp D\,U^\dagger = UU^\dagger = I$
since $D^\dagger = -D$.
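The eigendecomposition argument translates directly into code. A sketch for a random antihermitian matrix (the construction of A is our own choice, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M - M.conj().T               # antihermitian by construction

# 1j*A is hermitian, so eigh applies; eigenvalues of A are then -1j*w (purely imaginary)
w, U = np.linalg.eigh(1j * A)
expA = U @ np.diag(np.exp(-1j * w)) @ U.conj().T

assert np.allclose(expA.conj().T @ expA, np.eye(4))   # exp(A) is unitary
```

The routine also implicitly checks the diagonalizability claim: `eigh` returns an exactly unitary eigenvector matrix for the hermitian input.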

What this means is that if we exponentiate an element $\theta A$ of the Lie algebra so(n, R) we
will return to the identity, $\exp(\theta A) = I$, for some finite values of $\theta$, since the
eigenvalues of A are imaginary. The group SO(n, R) is thus closed and bounded (compact),
and it is called the maximal compact form of the group.

so(n, R) is not the only real form of so(n, C), however. We can construct other real forms
using the Weyl unitary trick, which just replaces some antihermitian generators $L_j$ with
the hermitian generators $iL_j$. The idea is that we find some transformation $\star: \mathrm{so}(n, C) \to
\mathrm{so}(n, C)$ that maps the Lie algebra onto itself with the properties that

1. it is an automorphism of the Lie algebra, that is $(L_iL_j)^\star = L_i^\star L_j^\star$ as well as being linear,
$(\alpha L_i + \beta L_j)^\star = \alpha L_i^\star + \beta L_j^\star$; therefore it leaves the commutation relations unchanged,
$[L_i^\star, L_j^\star] = L_i^\star L_j^\star - L_j^\star L_i^\star = [L_i, L_j]^\star = c_{ij}{}^k L_k^\star$; and
2. it is involutive: applying it twice does nothing, $(L_i^\star)^\star = L_i$.

Any such involutive automorphism can play the role of complex conjugation as
follows: let $X \in \mathrm{so}(n, C)$ be an eigenvector of $\star$, $X^\star = \lambda X$ for some $\lambda$. Since
$(X^\star)^\star = (\lambda X)^\star = \lambda^2 X = X$ its eigenvalue must satisfy $\lambda^2 = 1$, so $\lambda = \pm1$. We can
therefore decompose the Lie algebra into two eigenspaces $\mathrm{so}(n, C)_\pm$ corresponding to these
two eigenvalues.
Exercise 19: Show that reflection in a mirror perpendicular to the [12]-direction in the
adjoint representation of so(3, C) is an involutive automorphism, and find its eigenspaces.
Note that such a reflection is not an element of SO(3, C).

Solution 19. Such a reflection corresponds to $X^\star = RXR$ where the matrix
\[
R = \begin{pmatrix}-1&0&0\\0&1&0\\0&0&1\end{pmatrix}
\]
satisfies $R^2 = I$. Under this operation the adjoint generators transform as
\[
L^{[12]} = \lambda\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix}
\mapsto L^{[12]\star} = RL^{[12]}R = \lambda\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix} = L^{[12]},
\]
\[
L^{[13]} = \lambda\begin{pmatrix}0&0&-1\\0&0&0\\1&0&0\end{pmatrix}
\mapsto L^{[13]\star} = RL^{[13]}R = \lambda\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix} = -L^{[13]},
\]
\[
L^{[23]} = \lambda\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}
\mapsto L^{[23]\star} = RL^{[23]}R = \lambda\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix} = -L^{[23]}.
\]
It is obviously an involutive automorphism since
\[
(L^{[ij]}L^{[k\ell]})^\star = R(L^{[ij]}L^{[k\ell]})R = RL^{[ij]}R\,RL^{[k\ell]}R = L^{[ij]\star}L^{[k\ell]\star},
\]
and $(L^{[ij]\star})^\star = L^{[ij]}$.

The subspace corresponding to eigenvalue $+1$ consists of all multiples of $L^{[12]}$, and that
corresponding to $-1$ is spanned by $L^{[13]}$ and $L^{[23]}$.
Using the convenient notation $[A, B] \subseteq C$ to mean that for all $a \in A$ and $b \in B$ the commutator $[a, b] = c \in C$, we observe that $[\mathrm{so}(n, R)_+, \mathrm{so}(n, R)_+] \subseteq \mathrm{so}(n, R)_+$, $[\mathrm{so}(n, R)_-, \mathrm{so}(n, R)_-]
\subseteq \mathrm{so}(n, R)_+$ and $[\mathrm{so}(n, R)_+, \mathrm{so}(n, R)_-] \subseteq \mathrm{so}(n, R)_-$, since both sides are eigenvectors of $\star$
and must therefore belong to the same eigenvalue. We may now apply the Weyl unitary trick, which is to restrict the coefficients to be real and multiply all the elements of
$\mathrm{so}(n, R)_-$ by i. The commutation relations then become $[\mathrm{so}(n, R)_+, \mathrm{so}(n, R)_+] \subseteq \mathrm{so}(n, R)_+$,
$[i\,\mathrm{so}(n, R)_-, i\,\mathrm{so}(n, R)_-] \subseteq \mathrm{so}(n, R)_+$ and $[\mathrm{so}(n, R)_+, i\,\mathrm{so}(n, R)_-] \subseteq i\,\mathrm{so}(n, R)_-$, which are
closed even with real coefficients.

If the generators of $\mathrm{so}(n, R)_-$ are $L_1, \ldots, L_k$ and those of $\mathrm{so}(n, R)_+$ are $L_{k+1}, \ldots, L_m$
(with $m = \tfrac12 n(n-1)$), we shall call the real Lie algebra with real coefficients and generators
$iL_1, \ldots, iL_k, L_{k+1}, \ldots, L_m$ so(k, n-k).

Of course all the generators in so(n, R) are antihermitian whereas k of those in so(k, n-k)
become hermitian, so under exponentiation we will obtain the non-compact group SO(k, n-k).

The commutation relations for so(k, n-k) differ from those of so(n, R) by some changes of
sign (they were invariant under $\star$, but the transformation from so(n, R) to so(k, n-k) is
not $\star$ itself but something like its square root). In particular this means that the adjoint representation will
also be slightly different.
Exercise 20:
1. Construct the adjoint representation of the Lie algebra so(2, 1) obtained by multiplying
the generators $L^{[13]}$ and $L^{[23]}$ by i in the previous example, where $\star$ was reflection in a
mirror perpendicular to the [12]-axis.
Hint: This does not mean that you just multiply some of the adjoint generators of
so(3, R) by i. Remember the definition of the adjoint representation.

2. Exponentiate an arbitrary element of the so(2, 1) Lie algebra.

Hint: The characteristic polynomial $\Delta(\mu) = \det(\mu I - X)$ for the matrix (taking $\lambda = 1$)
\[
X = \theta_{ij}L^{[ij]} = \begin{pmatrix}0&-\theta_{23}&\theta_{13}\\-\theta_{23}&0&\theta_{12}\\\theta_{13}&-\theta_{12}&0\end{pmatrix}
\]
is $\mu^3 + \mu\left(\theta_{12}^2 - \theta_{13}^2 - \theta_{23}^2\right)$ in this case.

3. Show that this takes the manifestly real form
\[
\begin{pmatrix}1&0&0\\0&\cos\theta&\sin\theta\\0&-\sin\theta&\cos\theta\end{pmatrix}
\]
for $\theta_{12} = \theta$, $\theta_{13} = \theta_{23} = 0$.
4. Show that this takes the manifestly real form
\[
\begin{pmatrix}\cosh\eta&-\sinh\eta&0\\-\sinh\eta&\cosh\eta&0\\0&0&1\end{pmatrix}
\]
for $\theta_{12} = \theta_{13} = 0$, $\theta_{23} = \eta$.
5. Verify that $\det\exp X = 1$ and $(\exp X)^T g\,\exp X = g$ where $g = \mathrm{diag}(1, 1, -1)$.
Solution 20.
1. The adjoint representation of so(2, 1) is
\[
L^{[12]} = \begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix},\quad
L^{[13]} = \begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix},\quad
L^{[23]} = \begin{pmatrix}0&-1&0\\-1&0&0\\0&0&0\end{pmatrix}.
\]
2. The characteristic polynomial is $\mu^3 + \mu\omega^2$, where we have defined
$\omega^2 = \theta_{12}^2 - \theta_{13}^2 - \theta_{23}^2$, so the Cayley–Hamilton theorem tells us that $(X^2 + c^2)X = 0$
with $c^2 = \omega^2$. Substituting this into the series expansion for the exponential we get
\[
\exp X = I + \sin c\,\frac{X}{c} + (1 - \cos c)\left(\frac{X}{c}\right)^2
\]
\[
= \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+ \frac{\sin\omega}{\omega}\begin{pmatrix}0&-\theta_{23}&\theta_{13}\\-\theta_{23}&0&\theta_{12}\\\theta_{13}&-\theta_{12}&0\end{pmatrix}
+ \frac{1-\cos\omega}{\omega^2}\begin{pmatrix}\theta_{13}^2+\theta_{23}^2&-\theta_{12}\theta_{13}&-\theta_{12}\theta_{23}\\\theta_{12}\theta_{13}&\theta_{23}^2-\theta_{12}^2&-\theta_{13}\theta_{23}\\\theta_{12}\theta_{23}&-\theta_{13}\theta_{23}&\theta_{13}^2-\theta_{12}^2\end{pmatrix}.
\]
3. For $\theta_{12} = \theta$, $\theta_{13} = \theta_{23} = 0$ we have $\omega = \theta$, hence
\[
\exp X = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+ \sin\theta\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix}
- (1-\cos\theta)\begin{pmatrix}0&0&0\\0&1&0\\0&0&1\end{pmatrix}
= \begin{pmatrix}1&0&0\\0&\cos\theta&\sin\theta\\0&-\sin\theta&\cos\theta\end{pmatrix}.
\]
4. For $\theta_{23} = \eta$, $\theta_{12} = \theta_{13} = 0$ we have $\omega^2 = -\eta^2$, so $\omega = i\eta$, and
$\sin(i\eta)/(i\eta) = \sinh\eta/\eta$, $1 - \cos(i\eta) = 1 - \cosh\eta$, hence
\[
\exp X = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+ \frac{\sinh\eta}{\eta}\begin{pmatrix}0&-\eta&0\\-\eta&0&0\\0&0&0\end{pmatrix}
+ \frac{1-\cosh\eta}{-\eta^2}\begin{pmatrix}\eta^2&0&0\\0&\eta^2&0\\0&0&0\end{pmatrix}
= \begin{pmatrix}\cosh\eta&-\sinh\eta&0\\-\sinh\eta&\cosh\eta&0\\0&0&1\end{pmatrix}.
\]
5. This is just an exercise in simple algebra.

Real Representations
Restriction to a real form of a Lie group does not mean that the representations have to
be real. Nevertheless, for the fundamental representation of so(k, n-k) (and all other
representations that can be constructed from it by reducing its tensor products, a topic we
shall not discuss here) we can choose a basis in which the vectors are real.
Exercise 21:
1. Consider so(2, 1) again, but this time consider its generators $L^{[12]}, iL^{[13]}, iL^{[23]}$ in the
fundamental representation; from Exercise 6 we have
\[
L^{[12]} = \lambda\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix},\quad
iL^{[13]} = i\lambda\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\quad
iL^{[23]} = i\lambda\begin{pmatrix}0&0&0\\0&0&1\\0&-1&0\end{pmatrix}.
\]
Find a matrix R such that the transformed generators $\tilde L^{[ij]} = R^{-1}L^{[ij]}R$ (with $L^{[ij]}$ each of
the three matrices above) are real and either hermitian or antihermitian.
2. Show that the $\tilde L^{[ij]}$ satisfy the commutation relations for so(2, 1).
3. What metric $g = \mathrm{diag}(g_1, g_2, g_3)$ is preserved by the group elements $\exp(\theta_{ij}\tilde L^{[ij]})$?

Solution 21.
1. For any $\lambda \neq 0$ the matrix $R = \mathrm{diag}(1, 1, i)$ gives
\[
\tilde L^{[12]} = R^{-1}L^{[12]}R = \lambda\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}
\]
(unchanged, and antihermitian), while
\[
\tilde L^{[13]} = R^{-1}\left(iL^{[13]}\right)R = -\lambda\begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix},\qquad
\tilde L^{[23]} = R^{-1}\left(iL^{[23]}\right)R = -\lambda\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix}
\]
are real and hermitian.
2. The commutation relations are invariant under a similarity transformation,
\[
[\tilde L^{[ij]}, \tilde L^{[k\ell]}]
= [R^{-1}L'^{[ij]}R,\; R^{-1}L'^{[k\ell]}R]
= R^{-1}[L'^{[ij]}, L'^{[k\ell]}]R
= c^{[ij][k\ell]}{}_{[mn]}\,R^{-1}L'^{[mn]}R
= c^{[ij][k\ell]}{}_{[mn]}\,\tilde L^{[mn]},
\]
where $L'^{[ij]}$ denotes the so(2, 1) generators before the similarity transformation.
3. We need to find the metric tensor g such that $\Lambda^{\mu'}{}_{\mu}\Lambda^{\nu'}{}_{\nu}\,g_{\mu'\nu'} = g_{\mu\nu}$ for the so(2, 1)
group transformation $\Lambda = \exp(\theta_{ij}\tilde L^{[ij]})$. It suffices to consider a group element
in a neighbourhood of the identity, whence
\[
\left(\delta + \theta_{ij}\tilde L^{[ij]}\right)^T g\left(\delta + \theta_{k\ell}\tilde L^{[k\ell]}\right) = g
\quad\Longrightarrow\quad
(\tilde L^{[ij]})^T g + g\,\tilde L^{[ij]} = 0
\]
for each generator. Written out explicitly these conditions read $g_1 - g_2 = 0$ (from $\tilde L^{[12]}$),
$g_1 + g_3 = 0$ (from $\tilde L^{[13]}$), and $g_2 + g_3 = 0$ (from $\tilde L^{[23]}$), from which we obtain
$g = \mathrm{diag}(1, 1, -1)$, up to an overall normalization, for any $\lambda \neq 0$.


Universal Covering Group Spin(n)


The Pauli matrices $\sigma_1$ and $\sigma_2$ are generators of a Clifford algebra in two dimensions, and the
corresponding $\gamma_5 = \sigma_1\sigma_2 = i\sigma_3$. We may also use the Pauli matrices $\sigma_1, \sigma_2, \sigma_3$ as generators
of the three-dimensional Clifford algebra; this representation will not be faithful, but that
does not matter for the purpose of constructing the spinor representation of so(3).

The spinor generators of so(3) are then $\Sigma^{[jk]} = \tfrac{\lambda}{4}[\sigma_j, \sigma_k] = \tfrac{i\lambda}{2}\epsilon_{jk\ell}\sigma_\ell$. Moreover, in order for
the fundamental representation of SO(3) to have period $2\pi$ we must have $\lambda = \pm1$ so, taking
$\lambda = 1$, we have $\Sigma^{[12]} = \tfrac{i}{2}\sigma_3$, $\Sigma^{[13]} = -\tfrac{i}{2}\sigma_2$, and $\Sigma^{[23]} = \tfrac{i}{2}\sigma_1$, whence
\[
X = \theta_{ij}\Sigma^{[ij]} = \frac{i}{2}\begin{pmatrix}\theta_{12}&\theta_{23}+i\theta_{13}\\\theta_{23}-i\theta_{13}&-\theta_{12}\end{pmatrix}.
\]
The characteristic polynomial for this matrix is $\mu^2 + \tfrac14|\theta|^2$, so the Cayley–Hamilton theorem
tells us that $X^2 + \tfrac14|\theta|^2 I = 0$, and hence
\[
\exp X = I\cos\tfrac12|\theta| + \frac{X}{\tfrac12|\theta|}\sin\tfrac12|\theta|
= \begin{pmatrix}
\cos\tfrac12|\theta| + i\hat\theta_{12}\sin\tfrac12|\theta| & -(\hat\theta_{13} - i\hat\theta_{23})\sin\tfrac12|\theta|\\
(\hat\theta_{13} + i\hat\theta_{23})\sin\tfrac12|\theta| & \cos\tfrac12|\theta| - i\hat\theta_{12}\sin\tfrac12|\theta|
\end{pmatrix}
\]
with $\hat\theta_{ij} = \theta_{ij}/|\theta|$. It is easy to check that this is a unitary matrix, but we see that for
$|\theta| = 2\pi k$ it is $(-1)^k I$. This means that strictly speaking this is not a representation of the
group SO(3), although it is the exponential of a perfectly good representation of the Lie
algebra so(3): it is, however, a representation of the universal covering group of SO(3), which
is called Spin(3).
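The double-valuedness can be seen in a few lines of code. A sketch of the closed-form exponential above (with $\lambda = 1$ and the sign conventions just stated assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def spin3(theta):
    # exp(theta_ij Sigma^[ij]) for theta = (theta12, theta13, theta23)
    t12, t13, t23 = theta
    X = 0.5j * (t12 * s3 - t13 * s2 + t23 * s1)
    a = np.sqrt(t12**2 + t13**2 + t23**2)
    if a == 0:
        return np.eye(2, dtype=complex)
    return np.cos(a / 2) * np.eye(2) + np.sin(a / 2) * (X / (a / 2))

assert np.allclose(spin3((2 * np.pi, 0, 0)), -np.eye(2))   # a 2*pi rotation is -I in Spin(3)
assert np.allclose(spin3((4 * np.pi, 0, 0)), np.eye(2))    # only 4*pi returns to the identity
S = spin3((1.0, 0.7, -0.3))
assert np.allclose(S.conj().T @ S, np.eye(2))              # unitary, as claimed
```

The first two assertions are precisely the statement that Spin(3) double covers SO(3).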

The situation is the same for larger values of n: for each element of SO(n) there are two
corresponding elements of Spin(n) of opposite sign, so Spin(n) is a double covering of SO(n).
This is related to the fact that the centre of Spin(n), that is the subgroup of elements of
Spin(n) that commute with all other elements, contains the two elements $\pm I$; in other words
the centre contains the finite group $\mathbb{Z}_2$. This is also related to the topology of the group manifold
of SO(n), which has first homotopy group $\pi_1\bigl(\mathrm{SO}(n)\bigr) = \mathbb{Z}_2$. This means that every
closed loop in the group's parameter space can be continuously deformed into either the
trivial loop that always remains at the origin or a loop that passes through the origin and a
point with $|\theta| = \pi$.

The global structure of Spin(3) is the 3-sphere $S^3$, whereas SO(3) is topologically the real
projective space $\mathbb{RP}^3$, which is the surface of a 3-sphere with antipodal points identified.
For reasons we will discuss later Spin(4) is the product of two 3-spheres, $S^3\times S^3$, and SO(4)
is the product $S^3\times\mathrm{SO}(3) = S^3\times\mathbb{RP}^3$.

P, C, and T: Disconnected Components

So far we have been considering the proper Lorentz group SO(3, 1) and its n-dimensional
analogues. If we drop the requirement that the transformations have determinant +1 we
get the group O(3, 1), which has several disconnected components. In fact there are four
disconnected pieces: the component connected to the identity, those transformations corresponding to a spatial reflection P (or a
spatial inversion through the origin, which is a reflection in odd-dimensional spaces), those
corresponding to time reversal T, and those corresponding to both combined, C = PT. Observe that
for this last component the determinant is +1, so these transformations lie in SO(3, 1) despite
not being connected to the identity.

It is important to notice that if a Lagrangian is Lorentz invariant, that is invariant under
SO(3, 1) transformations, it is not necessarily invariant under the improper transformations
P, T, and C. Electromagnetic and strong interactions do have these symmetries, but the
weak interaction breaks P, and the combination CP is also broken even more weakly. On
the other hand, the CPT theorem says that any quantum field theory will be invariant under
the composition of all three operations: it could be that nature does not respect CPT, but
if it does not then it cannot be described by a quantum field theory.
Spinor Representation of P, C, and T

It is perhaps not immediately obvious that the spinor representations should also have representations of P and T. Nevertheless they do, and this extends the group Spin(3, 1) to
Pin(3, 1).1 To find the representation of the discrete transformations corresponding to P
and T we need to find elements of the Clifford algebra P and T that are unitary and satisfy the relations $P^{-1}\gamma^\mu P = R(P)^\mu{}_\nu\gamma^\nu$ and $T^{-1}\gamma^\mu T = R(T)^\mu{}_\nu\gamma^\nu$, where $R(P)$ and $R(T)$ are
the so(n) fundamental representation matrices corresponding to the parity and time-reversal
operations, $R(P) = \mathrm{diag}(1, -1, -1, -1)$ and $R(T) = \mathrm{diag}(-1, 1, 1, 1)$. Such spinor transformations are easily found by inspection, namely $P = \gamma^0$ and $T = i\gamma_5\gamma^0$, in terms of Euclidean
$\gamma$ matrices.
Exercise 22: Verify that P and T satisfy the relations given above, are unitary and hermitian, and therefore are involutions, $P^2 = T^2 = I$.

Solution 22.
1. Hermiticity: $P^\dagger = \gamma^{0\dagger} = \gamma^0 = P$.

2. Unitarity: $P^\dagger P = \gamma^0\gamma^0 = (\gamma^0)^2 = I$.
3. Hermiticity: $T^\dagger = (i\gamma_5\gamma^0)^\dagger = -i\gamma^{0\dagger}\gamma_5^\dagger = -i\gamma^0\gamma_5 = i\gamma_5\gamma^0 = T$.

4. Unitarity: $T^\dagger T = (i\gamma_5\gamma^0)^\dagger\,i\gamma_5\gamma^0 = (-i\gamma^0\gamma_5)(i\gamma_5\gamma^0) = \gamma^0\gamma_5\gamma_5\gamma^0 = (\gamma^0)^2 = I$.
5. $P^{-1}\gamma^\mu P = \gamma^0\gamma^\mu\gamma^0 = R(P)^\mu{}_\nu\gamma^\nu$, since $\gamma^0\gamma^\mu\gamma^0$ equals $+\gamma^0$ for $\mu = 0$ and $-\gamma^i$ for
$\mu = i$, because $\gamma^0$ commutes with itself and anticommutes with $\gamma^i$.
6. $T^{-1}\gamma^\mu T = R(T)^\mu{}_\nu\gamma^\nu$, since T anticommutes with $\gamma^0$ and commutes with $\gamma^i$.
7. Any operator X that is hermitian, $X^\dagger = X$, and unitary, $X^\dagger = X^{-1}$, is an involution,
since $X^2 = X^\dagger X = X^{-1}X = I$.
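In the explicit n = 4 representation these properties can all be verified at once. A sketch:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gam = [np.kron(I2, s1), np.kron(I2, s2), np.kron(s1, s3), np.kron(s2, s3)]

g5 = gam[0] @ gam[1] @ gam[2] @ gam[3]
P = gam[0]
T = 1j * g5 @ gam[0]
RP = np.diag([1.0, -1.0, -1.0, -1.0])
RT = np.diag([-1.0, 1.0, 1.0, 1.0])

assert np.allclose(P @ P, np.eye(4)) and np.allclose(T @ T, np.eye(4))   # involutions
assert np.allclose(T.conj().T, T) and np.allclose(T.conj().T @ T, np.eye(4))
for mu in range(4):
    # since P and T are involutions, each is its own inverse
    assert np.allclose(P @ gam[mu] @ P, RP[mu, mu] * gam[mu])
    assert np.allclose(T @ gam[mu] @ T, RT[mu, mu] * gam[mu])
```

The two loops are exactly the defining relations with the diagonal R(P) and R(T) above.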

I am not responsible for this name.


γ5 and Chiral Symmetry


Perhaps the most remarkable property of spinors is that there is another continuous symmetry lurking within the Clifford algebra. This symmetry is called chiral symmetry and its
generator is $\gamma_5$. It is easy to see that $[\gamma_5, \Sigma^{\mu\nu}] = 0$, so $\gamma_5$ commutes with all SO(n) transformations that are connected to the identity, which means that a chiral transformation of
the form $\exp(i\alpha\gamma_5)$ is compatible with Lorentz invariance. Moreover, for real $\alpha$ such a chiral
transformation is unitary because $i\gamma_5$ is antihermitian (remember we are using Euclidean
$\gamma$ matrices throughout).

Just as for P and T it does not automatically follow that any given Lagrangian is invariant
under chiral transformations. Indeed, while the Dirac kinetic term $i\bar\psi\partial\!\!\!/\psi$ is invariant, the
corresponding mass term $m\bar\psi\psi$ is not.


It is also important to note that chiral symmetry is not part of the Lorentz group, and that
chirality and helicity are therefore quite different things.
Exercise 23: Prove that the Dirac kinetic term is chirally invariant whereas the Dirac mass
term is not (both in Euclidean and Minkowski space).
Hint: Under a small chiral transformation $\psi \mapsto \exp(i\alpha\gamma_5)\psi = \psi + i\alpha\gamma_5\psi + O(\alpha^2)$, so
$\psi^\dagger \mapsto [\exp(i\alpha\gamma_5)\psi]^\dagger = \psi^\dagger\exp(-i\alpha\gamma_5) = \psi^\dagger - i\alpha\psi^\dagger\gamma_5 + O(\alpha^2)$.
Hint: We are free to specify the chiral transformation properties of the spinor $\bar\psi$ independently from those of $\psi$ (don't be misled by the notation, which is intended to tell us that $\bar\psi$
must transform in the same manner as $\psi^\dagger$ under SO(n) transformations). In particular, in
Euclidean space we can choose $\bar\psi$ to transform as $\bar\psi \mapsto \bar\psi\exp(i\alpha\gamma_5)$, while in Minkowski
space we can choose $\bar\psi = \psi^\dagger\bar\gamma^0$ where $\psi^\dagger$ transforms as the hermitian conjugate of $\psi$.
Hint: In Minkowski space the Dirac Lagrangian is written in terms of Minkowski-space
$\bar\gamma$ matrices, in which the spatial $\gamma^j$ are multiplied by a factor of i to give $\bar\gamma^j = i\gamma^j$. This
means that the Minkowski $\bar\gamma$ matrices are not all hermitian: in fact $\bar\gamma^{\mu\dagger} = \pm\bar\gamma^\mu = \bar\gamma^0\bar\gamma^\mu\bar\gamma^0$
with a positive sign for $\mu = 0$ and a negative one for $\mu = 1, 2, 3$. This means that $\psi^\dagger\bar\gamma^\mu\partial_\mu\psi$
is not hermitian in general, and thus it would not lead to a real action; we therefore need
to construct the bilinear $\bar\psi\bar\gamma^\mu\partial_\mu\psi$ with $\bar\psi = \psi^\dagger\bar\gamma^0$, so that
\[
(\bar\psi\bar\gamma^\mu\partial_\mu\psi)^\dagger = (\psi^\dagger\bar\gamma^0\bar\gamma^\mu\partial_\mu\psi)^\dagger
= \partial_\mu\psi^\dagger\,\bar\gamma^{\mu\dagger}\bar\gamma^0\psi
= \partial_\mu\psi^\dagger\,\bar\gamma^0\bar\gamma^\mu\bar\gamma^0\bar\gamma^0\psi
= \partial_\mu\bar\psi\,\bar\gamma^\mu\psi,
\]
which differs from the original bilinear only by a total derivative.

In fact, this is not quite the full story: we should really consider the Dirac kinetic term $i\bar\chi\partial\!\!\!/\psi$
built from two spinors $\chi$ and $\psi$ with arbitrary chiral properties. This is not hermitian, but
if we observe that
\[
i\partial_\mu(\bar\chi\bar\gamma^\mu\psi) = i(\partial_\mu\bar\chi)\bar\gamma^\mu\psi + i\bar\chi\bar\gamma^\mu\partial_\mu\psi
\]
then we have
\[
(i\bar\chi\bar\gamma^\mu\partial_\mu\psi)^\dagger = -i(\partial_\mu\bar\psi)\bar\gamma^\mu\chi
= -i\partial_\mu(\bar\psi\bar\gamma^\mu\chi) + i\bar\psi\bar\gamma^\mu\partial_\mu\chi.
\]
This means that the kinetic term $i\bar\chi\partial\!\!\!/\psi + i\bar\psi\partial\!\!\!/\chi$ is hermitian up to a total derivative which
does not contribute to the action.
Solution 23.
1. In Euclidean space under a small chiral transformation $\psi \mapsto (1 + i\alpha\gamma_5 + \cdots)\psi$ and, as
noted in the hint, $\bar\psi \mapsto \bar\psi(1 + i\alpha\gamma_5 + \cdots)$.
2. In Minkowski space $\psi \mapsto (1 + i\alpha\gamma_5 + \cdots)\psi$ under a small chiral transformation, and
as noted in the hint we can choose $\bar\psi = \psi^\dagger\bar\gamma^0$ with $\psi^\dagger \mapsto [(1 + i\alpha\gamma_5 + \cdots)\psi]^\dagger =
\psi^\dagger(1 - i\alpha\gamma_5 + \cdots)$. Therefore $\bar\psi \mapsto \psi^\dagger(1 - i\alpha\gamma_5 + \cdots)\bar\gamma^0 = \psi^\dagger\bar\gamma^0(1 + i\alpha\gamma_5 + \cdots) =
\bar\psi(1 + i\alpha\gamma_5 + \cdots)$.
3. The kinetic term therefore transforms as
\[
\bar\psi\bar\gamma^\mu\partial_\mu\psi \mapsto \bar\psi(1 + i\alpha\gamma_5 + \cdots)\bar\gamma^\mu(1 + i\alpha\gamma_5 + \cdots)\partial_\mu\psi
= \bar\psi\bar\gamma^\mu\partial_\mu\psi + i\alpha\,\bar\psi\{\bar\gamma^\mu, \gamma_5\}\partial_\mu\psi + \cdots,
\]
so $\delta(\bar\psi\bar\gamma^\mu\partial_\mu\psi) = O(\alpha^2)$ since $\{\bar\gamma^\mu, \gamma_5\} = 0$.
4. The mass term transforms as
\[
\bar\psi\psi \mapsto \bar\psi(1 + i\alpha\gamma_5 + \cdots)(1 + i\alpha\gamma_5 + \cdots)\psi
= \bar\psi\psi + 2i\alpha\,\bar\psi\gamma_5\psi + \cdots,
\]
so $\delta(\bar\psi\psi) = 2i\alpha\,\bar\psi\gamma_5\psi \neq 0$.

We can always decompose a Dirac spinor into eigenstates of γ5 in a Lorentz-invariant way: we just introduce the chiral projectors ½(1 ± γ5). In a chirally invariant theory (e.g., massless free fermion fields) the chiral fields ½(1 ± γ5)ψ decouple, and we can indeed consider theories with only a single such chiral or Weyl fermion. A mass term then appears as a coupling between left- and right-handed chiral fermions.
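The algebra underlying Exercise 23 is easy to check numerically. The following sketch is only an illustrative check, using one common choice of hermitian Euclidean γ-matrices in the Dirac basis (an assumed convention, which need not coincide with the one used earlier in these notes). It verifies that γ5 anticommutes with every γ^μ, that ½(1 ± γ5) are complementary orthogonal projectors, and that each γ^μ maps one chiral subspace onto the other, which is why the kinetic term preserves chirality while the mass term mixes the two chiralities.

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# Hermitian Euclidean gamma-matrices in the Dirac basis (assumed convention)
gamma = [np.block([[0*I2, -1j*s[j]], [1j*s[j], 0*I2]]) for j in range(3)]
gamma.append(np.block([[I2, 0*I2], [0*I2, -I2]]))        # gamma^4

# Clifford algebra: {gamma^mu, gamma^nu} = 2 delta^{mu nu}
for m in range(4):
    for n in range(4):
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, 2*(m == n)*np.eye(4))

gamma5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.allclose(gamma5 @ gamma5, np.eye(4))           # gamma5 squares to one

# gamma5 anticommutes with every gamma^mu
for g in gamma:
    assert np.allclose(gamma5 @ g + g @ gamma5, 0)

# Chiral projectors (1 -+ gamma5)/2 are complementary orthogonal projectors
PL, PR = (np.eye(4) - gamma5)/2, (np.eye(4) + gamma5)/2
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)
assert np.allclose(PL @ PR, 0)

# Each gamma^mu swaps the two chiral subspaces
for g in gamma:
    assert np.allclose(g @ PL, PR @ g)
```

The last identity is the matrix statement that a single γ^μ flips chirality: a bilinear with one γ-matrix (the kinetic term) couples like chiralities, while a bilinear with none (the mass term) couples opposite ones.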
Weyl fermions are one of the real forms of the spinor representation in four dimensions, but they are not the only one. One can also consider eigenstates of the charge conjugation operator C to obtain Majorana fermions. The real forms of the spinor representations are an interesting function of dimension, but we shall not consider this further here.
SO(4) is not Simple
The final topic we will briefly look at is the fact that SO(4) is not a simple group: it factorizes into SO(3) × Spin(3). Another way of expressing this is that the generators of so(4) can be split into two sets which commute with each other. This does not happen for any other dimension. Why is this?
The reason is that for n = 4 we have the invariant four-index Levi-Civita tensor ε_{μνρσ}, which can be used to construct an involutive invariant matrix in the adjoint representation, namely
  M_{[μν],[ρσ]} = ε_{μνρσ}.
Exercise 24:
1. Show that M is involutive, M² = 1.
2. Construct the matrix representation of M in the same basis as was used in Exercise 12 for the adjoint generators of so(4).
3. Verify that this matrix representation of M is involutive and invariant under SO(4).
Solution 24.
1. We may compute M² as follows:
  (M²)_{[μν],[ρσ]} = Σ_{α′<β′} M_{[μν],[α′β′]} M_{[α′β′],[ρσ]} = ½ ε_{μνα′β′} ε_{α′β′ρσ} = δ_{μρ}δ_{νσ} − δ_{μσ}δ_{νρ},
which is the unit matrix on the space of antisymmetric index pairs [μν].
2. In our canonical basis [12], [13], [14], [23], [24], [34] we have
  M =
  [ 0  0  0  0  0  1 ]
  [ 0  0  0  0 -1  0 ]
  [ 0  0  0  1  0  0 ]
  [ 0  0  1  0  0  0 ]
  [ 0 -1  0  0  0  0 ]
  [ 1  0  0  0  0  0 ].
3. M² = 1 is trivial to verify. It is just a matter of tedious algebra to verify that [M, L^{[μν]}] = 0 for all the adjoint generators of so(4).
Since M is invariant under SO(4) and is not a multiple of the unit matrix, the adjoint representation of so(4) must be reducible: this is a consequence of Schur's lemma, but it is really just the observation that the eigenspaces of M split the representation into two commuting parts. Since it is the adjoint representation this means that the Lie algebra (and hence the group) factors into two commuting parts.
Since we have constructed M to be involutive its eigenvalues must be ±1, so we can construct the projection operators P± = ½(1 ± M) that reduce the adjoint representation.
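These constructions are concrete enough to verify mechanically. The sketch below is an illustrative check, assuming the standard real antisymmetric basis L^{[μν]} = E_{μν} − E_{νμ} for so(4) (which may differ by signs or factors from the Exercise 12 convention): it builds M from the Levi-Civita tensor, checks M² = 1, checks that M commutes with every adjoint generator, and checks that P± = ½(1 ± M) are complementary projectors.

```python
import itertools
import numpy as np

def parity(p):
    """Sign of a permutation given as a tuple of indices."""
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

# Levi-Civita tensor in four dimensions
eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps[p] = parity(p)

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # basis [12],...,[34]
M = np.array([[eps[a + b] for b in pairs] for a in pairs])
assert np.allclose(M @ M, np.eye(6))                      # M is involutive

def L(a, b):
    """Vector-representation generator L^[ab] (assumed convention)."""
    m = np.zeros((4, 4)); m[a, b], m[b, a] = 1, -1
    return m

def adjoint(gen):
    """Matrix of [gen, .] on the 6-dimensional space of antisymmetric pairs."""
    A = np.zeros((6, 6))
    for j, q in enumerate(pairs):
        C = gen @ L(*q) - L(*q) @ gen
        for i, (r, s) in enumerate(pairs):
            A[i, j] = C[r, s]
    return A

ads = [adjoint(L(*p)) for p in pairs]
for A in ads:
    assert np.allclose(M @ A, A @ M)                      # M is so(4)-invariant

Pp, Pm = (np.eye(6) + M)/2, (np.eye(6) - M)/2
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, 0)
```

Since M commutes with the whole adjoint representation and is not a multiple of the identity, the two eigenspaces picked out by Pp and Pm are exactly the invariant subspaces discussed above.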
Exercise 25:
1. Write down the matrices P±.
2. Construct the set of generators P± L^{[μν]} for so(4) and verify that they form two mutually commuting sets of three generators each.
3. Verify that P+ L^{[μν]} and P− L^{[μν]} with [μν] = [12], [13], [14] separately satisfy the commutation relations for so(3).
Solution 25.
1. The projectors are P± = ½(1 ± M), that is
  P+ = ½
  [ 1  0  0  0  0  1 ]
  [ 0  1  0  0 -1  0 ]
  [ 0  0  1  1  0  0 ]
  [ 0  0  1  1  0  0 ]
  [ 0 -1  0  0  1  0 ]
  [ 1  0  0  0  0  1 ],
  P− = ½
  [ 1  0  0  0  0 -1 ]
  [ 0  1  0  0  1  0 ]
  [ 0  0  1 -1  0  0 ]
  [ 0  0 -1  1  0  0 ]
  [ 0  1  0  0  1  0 ]
  [-1  0  0  0  0  1 ].
2. There are six generators for so(4), so we need to be a little careful to generate a linearly independent set of generators when we apply the projectors. With our choice of basis the first three generators suffice: since in the adjoint representation M L^{[12]} = L^{[34]}, M L^{[13]} = −L^{[24]}, and M L^{[14]} = L^{[23]}, the projected generators are
  P± L^{[12]} = ½ (L^{[12]} ± L^{[34]}),
  P± L^{[13]} = ½ (L^{[13]} ∓ L^{[24]}),
  P± L^{[14]} = ½ (L^{[14]} ± L^{[23]}),
and these six matrices form a complete set of generators.
3. Let us enumerate the generators in the order L1 = P+L^{[12]}, L2 = P+L^{[13]}, L3 = P+L^{[14]}, L4 = P−L^{[12]}, L5 = P−L^{[13]}, L6 = P−L^{[14]}; then their commutators are [Li, Lj] = ±L_{|Xij|}, where
  X =
  [ 0 -3  2  0  0  0 ]
  [ 3  0 -1  0  0  0 ]
  [-2  1  0  0  0  0 ]
  [ 0  0  0  0  6 -5 ]
  [ 0  0  0 -6  0  4 ]
  [ 0  0  0  5 -4  0 ]
and the sign of Xij indicates the sign in the commutation relations.
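The split into two mutually commuting so(3) algebras can also be checked numerically. The sketch below is again only an illustrative check, assuming the basis L^{[μν]} = E_{μν} − E_{νμ} (so individual signs may differ from the table above): it projects the first three adjoint generators with P± and verifies that the two resulting triplets commute with each other, and that each triplet closes on itself.

```python
import itertools
import numpy as np

def parity(p):
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps[p] = parity(p)

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
M = np.array([[eps[a + b] for b in pairs] for a in pairs])

def L(a, b):
    m = np.zeros((4, 4)); m[a, b], m[b, a] = 1, -1
    return m

def adjoint(gen):
    A = np.zeros((6, 6))
    for j, q in enumerate(pairs):
        C = gen @ L(*q) - L(*q) @ gen
        for i, (r, s) in enumerate(pairs):
            A[i, j] = C[r, s]
    return A

ads = [adjoint(L(*p)) for p in pairs]
Pp, Pm = (np.eye(6) + M)/2, (np.eye(6) - M)/2

plus  = [Pp @ ads[k] for k in range(3)]   # P+ L^[12], P+ L^[13], P+ L^[14]
minus = [Pm @ ads[k] for k in range(3)]   # P- L^[12], P- L^[13], P- L^[14]

comm = lambda A, B: A @ B - B @ A

# The two triplets commute with each other ...
for A in plus:
    for B in minus:
        assert np.allclose(comm(A, B), 0)

# ... and each triplet closes: every commutator is +-1 times another generator
span = lambda C, gens: any(np.allclose(C, s*G) for G in gens for s in (1, -1))
for gens in (plus, minus):
    for A, B in itertools.combinations(gens, 2):
        assert span(comm(A, B), gens)
```

The second loop is exactly the statement that each projected triplet satisfies so(3) commutation relations, up to signs that depend on the basis convention.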
One way of understanding this decomposition is that a rotation in four dimensions can always be written as the product of two rotations in non-intersecting planes. The two so(3) factors correspond to the two rotations being through the same angle and in the same or the opposite sense.
We have shown that the Lie algebra so(4) = so(3) ⊕ so(3); it is not hard to extend this result to show that under exponentiation the Lie group SO(4) = SO(3) × Spin(3) and Spin(4) = Spin(3) × Spin(3), and to extend these to the real forms. There are other accidental degeneracies of low-dimensional Lie algebras, for instance su(2) = so(3) = sp(2), su(4) = so(6), and so(5) = sp(4), and using the first of these we find the decomposition so(3,1) = sl(2,ℂ) which is often found in the literature: SL(n) is the special linear group of special (i.e., unit determinant) n × n matrices, one of whose real forms is the group of special unitary matrices SU(n).
Hand-in Exercises
Exercise 26: In this exercise we shall consider some of the properties of Clifford algebras in n dimensions where n may be odd, so be careful when using γ-matrix identities that they do not implicitly assume that n is even.
1. Show that
  d tr γ^{[μ1}γ^{μ2}⋯γ^{μd]} = 0   if d is even,
  (d − n) tr γ^{[μ1}γ^{μ2}⋯γ^{μd]} = 0   if d is odd,
where as usual the square brackets around the indices indicate antisymmetrization,
  tr γ^{[μ1}γ^{μ2}⋯γ^{μd]} = (1/d!) Σ_{π∈S_d} sgn(π) tr γ^{μ_{π(1)}}γ^{μ_{π(2)}}⋯γ^{μ_{π(d)}},
with sgn(π) = ±1 the parity of the permutation π.
Hint: Consider tr γ_λ γ^{[μ1}γ^{μ2}⋯γ^{μd]} γ^λ (summed over λ), anticommute γ_λ all the way to the right, and then use the cyclic property of the trace.
2. Show that this implies that the trace of an antisymmetric product of γ-matrices vanishes unless either d = 0, or n is odd and d = n.
From here on we shall assume that n is odd.
3. For n = d show that tr γ^{[μ1}γ^{μ2}⋯γ^{μd]} = c ε^{μ1μ2⋯μn} with c = i^{n(n−1)/2} tr Γ, where Γ = (−i)^{n(n−1)/2} γ¹γ²⋯γⁿ (calling this quantity γ5 would be too confusing).
4. Show that Γ commutes with all the γ^μ, and that its square is one. Show that in any irreducible representation of the Clifford algebra Γ = ±1, and thus such irreducible representations are not faithful.
Hint: Reverse the order of the γ-matrices in the definition of Γ.
Hint: Schur's lemma.
5. In the 2^{(n+1)/2} × 2^{(n+1)/2} representation γ₍₂₎ of the γ-matrices in n + 1 dimensions that we constructed before, in which the γ-matrices are hermitian, show that Γ is also hermitian and that it is not a multiple of 1, the unit element of the Clifford algebra. Deduce that the restriction of this irreducible representation of the (n + 1)-dimensional Clifford algebra to the n-dimensional Clifford algebra is reducible. What are the projection operators onto the invariant subspaces that implement this reduction?
6. In any irreducible representation of the n-dimensional Clifford algebra show that
  Γ γ₍ₖ₎^{μ1⋯μk} ∝ ε^{μ1⋯μk μ_{k+1}⋯μn} γ₍ₙ₋ₖ₎^{μ_{k+1}⋯μn}.
Hint: γ₍ₖ₎ was defined in equation (4).
7. Show that there are two inequivalent 2^{(n−1)/2} × 2^{(n−1)/2} irreducible representations of the n-dimensional Clifford algebra. Are they faithful?
Hint: Use the 2^{(n−1)/2} × 2^{(n−1)/2} representation γ₍₂₎ of the (n − 1)-dimensional Clifford algebra, and set γⁿ ∝ γ¹γ²⋯γⁿ⁻¹.
8. Construct the two inequivalent 2 × 2 matrix representations of the 3-dimensional Clifford algebra.
9. Construct the corresponding representations of the so(3) Lie algebra. Are they faithful? Are they equivalent?
Hint: Make sure you do not confuse the representations of the Clifford algebra with those of the related Lie algebra and Lie group.
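Before tackling the algebra for general n, it can be reassuring to check the claimed trace identities numerically in the smallest odd case, n = 3, where the Pauli matrices furnish a 2 × 2 representation of the Euclidean Clifford algebra. The sketch below is only a numerical sanity check, not a substitute for the general proof asked for above; the choice Γ = (−i)^{n(n−1)/2} γ¹γ²γ³ follows the definition in part 3.

```python
import itertools
import math
import numpy as np

# Pauli matrices: a 2x2 representation of the 3-dimensional Clifford algebra
g = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
n = 3

# Clifford relations {gamma^a, gamma^b} = 2 delta^{ab}
for a in range(n):
    for b in range(n):
        anti = g[a] @ g[b] + g[b] @ g[a]
        assert np.allclose(anti, 2*(a == b)*np.eye(2))

def parity(p):
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def antisym_trace(idx):
    """tr of the antisymmetrized product gamma^[mu1 ... mud]."""
    d, total = len(idx), 0
    for p in itertools.permutations(range(d)):
        prod = np.eye(2, dtype=complex)
        for k in p:
            prod = prod @ g[idx[k]]
        total += parity(p) * np.trace(prod)
    return total / math.factorial(d)

# d = 1 (odd, d != n): the trace itself vanishes, so (d - n) tr = 0 holds
for m in range(n):
    assert abs(antisym_trace((m,))) < 1e-12

# d = 2 (even): d tr gamma^[mu nu] = 0, i.e. the antisymmetrized trace vanishes
for a in range(n):
    for b in range(n):
        assert abs(antisym_trace((a, b))) < 1e-12

# d = 3 = n (odd): the trace need not vanish, and is proportional to epsilon
c = antisym_trace((0, 1, 2))        # equals tr(g1 g2 g3) = tr(i * identity)
assert np.allclose(c, 2j)

# Gamma = (-i)^{n(n-1)/2} g1 g2 g3 is a multiple of the identity in this irrep
Gamma = (-1j)**3 * g[0] @ g[1] @ g[2]
assert np.allclose(Gamma, -np.eye(2))
```

In this representation Γ = −1; replacing every σ by −σ gives Γ = +1, which is the numerical shadow of the statement in part 4 that Γ = ±1 in any irreducible representation.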
Exercise 27:
1. Compute the element G = exp X in the spinor representation of the Lie group SO(4), where
  X = Σ_{μ<ν} ω_{μν} σ^{[μν]}
    = (i/2) ×
  [ ω12+ω34              (ω14+ω23)+i(ω13−ω24)   0                      0                    ]
  [ (ω14+ω23)−i(ω13−ω24) −(ω12+ω34)             0                      0                    ]
  [ 0                    0                      ω12−ω34                (ω14−ω23)+i(ω13+ω24) ]
  [ 0                    0                      (ω14−ω23)−i(ω13+ω24)   −(ω12−ω34)           ]
is a general element of the spinor representation of the Lie algebra so(4), where we have made use of the generators computed in Exercise 16.
Hint: Using the adjoint matrix M of Exercise 24 and the projectors P± = ½(1 ± M) of Exercise 25, construct the quantities
  X± = Σ_{μ<ν} Σ_{ρ<σ} (P±)_{[μν],[ρσ]} ω_{μν} σ^{[ρσ]}
and show that X = X+ + X− and [X+, X−] = 0. Construct orthogonal projectors P̃± onto the subspaces such that X± = P̃± X P̃±. Show that the characteristic polynomials of X± are
  χ±(λ) = λ² [λ² + (ω±/2)²],
where
  ω±² = (ω12 ± ω34)² + (ω13 ∓ ω24)² + (ω14 ± ω23)².
Compute G± = exp X± and hence G = G+G− = G−G+.
2. Show that on the two-dimensional invariant subspaces identified by P̃±
  X+ = (i/2)[(ω14 + ω23)σ1 − (ω13 − ω24)σ2 + (ω12 + ω34)σ3],
  X− = (i/2)[(ω14 − ω23)σ1 − (ω13 + ω24)σ2 + (ω12 − ω34)σ3].
This shows that so(4,ℂ) = so(3,ℂ) ⊕ so(3,ℂ) = sl(2,ℂ) ⊕ sl(2,ℂ). The complex Lie algebra su(2,ℂ) is a misnomer, as the matrices in the group SU(2,ℂ) are not unitary, so it is more properly called sl(2,ℂ), the algebra of all 2 × 2 complex traceless matrices.
Hint: Remember that all the group parameters are complex in so(4,ℂ).
3. Construct the invariant spinor matrix
  (2/3) Σ_{μ<ν} Σ_{ρ′<σ′} M_{[μν],[ρ′σ′]} σ^{[μν]} σ^{[ρ′σ′]}
and show that it is just γ5. Since the Levi-Civita tensor appears both in M and in γ5 perhaps this should not be too surprising. How does a chiral transformation act on the subspaces identified by the projectors P̃±?
4. If we restrict all the parameters ω_{μν} to be real, show that we get the compact real form of the Lie algebra: so(4,ℝ) = so(3,ℝ) ⊕ so(3,ℝ) = su(2) ⊕ su(2).
Note: One of the real forms of sl(2,ℂ) is su(2), the space of all antihermitian traceless 2 × 2 matrices, for which the Pauli matrices iσ1, iσ2, iσ3 form a basis. This is called the compact real form because the Lie group obtained by exponentiation is SU(2) × SU(2), where the group manifold (parameter space) of SU(2) is topologically S³ (the set of points in ℝ⁴ of distance one from the origin), which is closed and bounded and hence compact.
5. Show that the operation that takes γ² ↦ −γ² and γ⁴ ↦ −γ⁴ is an involutive automorphism on so(4,ℝ). Use the Weyl unitary trick to construct the spinor generators of the real form so(2,2), and show that the analogues of X± in so(2,2) are
  X′+ = ½[(ω14 + ω23)σ1 + (ω13 + ω24)iσ2 + (ω12 + ω34)σ3],
  X′− = ½[(ω14 − ω23)σ1 + (ω13 − ω24)iσ2 + (ω12 − ω34)σ3].
Note that in the usual representation of the Pauli matrices all three of σ1, iσ2, and σ3 are real. The Lie algebra sl(2,ℝ) of all 2 × 2 real traceless matrices is another real form of sl(2,ℂ). We have thus shown that so(2,2) = sl(2,ℝ) ⊕ sl(2,ℝ).
6. Show that the parity operation that takes γ⁴ ↦ −γ⁴ while leaving the other three γ-matrices unchanged is another involutive automorphism on so(4,ℝ). Use the Weyl unitary trick to construct the spinor generators of so(3,1), and show that the analogues of X± in so(3,1) are
  X″+ = ½[(ω14 − iω23)σ1 + (ω24 + iω13)σ2 + (ω34 − iω12)σ3],
  X″− = ½[(ω14 + iω23)σ1 + (ω24 − iω13)σ2 + (ω34 + iω12)σ3].
This shows that so(3,1) = sl(2,ℂ), as all six real parameters of so(3,1) appear independently in X″+ and in X″−. The real form so(3,1) cannot be split into two commuting subalgebras.
Note: All the representations of SO(3,1) can be built from tensor products of two-component spinors that carry the X″± representations of SL(2,ℂ); it is conventional to put dots on the spinor indices in the X″− subspace to indicate that they transform with the complex conjugate parameters.
7. What are the characteristic polynomials χ″± in this case?
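The 2 × 2 building blocks of Exercise 27 are also easy to sanity-check numerically. The sketch below uses the X± expressions quoted in part 2 with randomly chosen real parameters ω_{μν} (an illustrative check only, not the worked solution): it verifies that each X± is traceless with det X± = (ω±/2)², so its characteristic polynomial on the invariant subspace is λ² + (ω±/2)², and that the closed form G± = cos(ω±/2)·1 + (2/ω±) sin(ω±/2)·X± agrees with the exponential series and is unitary with unit determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
w = {(m, n): rng.normal() for m in range(1, 5) for n in range(m + 1, 5)}

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def X(sign):
    """X+ (sign=+1) or X- (sign=-1) on its invariant subspace, and omega+-."""
    a = w[(1, 4)] + sign * w[(2, 3)]
    b = -(w[(1, 3)] - sign * w[(2, 4)])
    c = w[(1, 2)] + sign * w[(3, 4)]
    return 0.5j * (a*s1 + b*s2 + c*s3), np.sqrt(a*a + b*b + c*c)

def expm(A, terms=60):
    """Exponential of a small matrix by its power series."""
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

for sign in (+1, -1):
    Xs, om = X(sign)
    # traceless with det = (omega/2)^2: characteristic polynomial l^2 + (omega/2)^2
    assert abs(np.trace(Xs)) < 1e-12
    assert np.allclose(np.linalg.det(Xs), (om/2)**2)
    # closed form for the exponential on the invariant subspace
    G = np.cos(om/2)*np.eye(2) + np.sin(om/2)*(2/om)*Xs
    assert np.allclose(G, expm(Xs))
    assert np.allclose(G.conj().T @ G, np.eye(2))          # unitary
    assert np.allclose(np.linalg.det(G), 1)                # unit determinant

# The two blocks act on different subspaces, so they commute: G = G+ G- = G- G+
```

Since X+ and X− live on orthogonal two-dimensional subspaces they commute trivially, which is the numerical counterpart of G = G+G− = G−G+ in part 1.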
A. D. Kennedy
Modified 01:35:01 October 07, 2014
LaTeXed October 7, 2014