Assessment Exercises #1
Instructions
You must hand in your solutions to the two hand-in questions that start on page 37 to the
teaching office by 9 am on Monday 20 October 2014 (week 6).
Some of the questions may look more challenging than they really are, so don't be put off if
you cannot follow all the details of the material preceding the exercises.
The solutions for all the other exercises will be posted on LEARN a week before the hand-in
deadline, so you can check your solutions: if you have any questions or want some more
feedback on your answers we will be happy to provide it, especially at the tutorials.
If you are asked to verify something then you do not need to exhaustively enumerate all cases;
it suffices to check a few typical cases.
It is most important that you demonstrate your understanding, so you must clearly explain
your reasoning and not just state the answer. While you can use textbooks and discuss the
topics in general with others, you must answer the exercises individually.
All finite groups have a faithful regular representation (Cayley's theorem), but this is not
so useful for infinite groups, as the dimension of the linear space upon which the regular
representation acts is equal to the number of group elements.
Exercise 1: S3 is the group of all six permutations of three objects, and the regular representation therefore acts on a six-dimensional vector space with basis vectors e_h corresponding
to each permutation h. The action of the regular representation R_g of permutation g on the
basis vector e_h is defined by R_g e_h = e_{gh}, so the matrix element (R_g)_{gh,h} = 1 and all other
entries in the same column vanish.
If we denote by p the permutation that takes objects 1 -> 3, 2 -> 2, and 3 -> 1, written
p = (1 2 3 -> 3 2 1), then the product pq of p with q = (1 2 3 -> 2 3 1) is the permutation
that first applies q and then applies p, namely 1 -> 2 -> 2, 2 -> 3 -> 1, and 3 -> 1 -> 3,
which means that pq = (1 2 3 -> 2 1 3).
Construct the regular representation for S3 , and verify that it is a representation by constructing the group multiplication table.
Solution 1.
Order the six permutations as
g1 = (1 2 3 -> 1 2 3), g2 = (1 2 3 -> 1 3 2), g3 = (1 2 3 -> 2 1 3),
g4 = (1 2 3 -> 2 3 1), g5 = (1 2 3 -> 3 1 2), g6 = (1 2 3 -> 3 2 1),
and write e1, ..., e6 for the corresponding basis vectors. The six matrices of the regular
representation are then

R_{g1} =
[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1],

R_{g2} =
[0 1 0 0 0 0]
[1 0 0 0 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]
[0 0 1 0 0 0]
[0 0 0 1 0 0],

R_{g3} =
[0 0 1 0 0 0]
[0 0 0 1 0 0]
[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 0 0 0 1]
[0 0 0 0 1 0],

R_{g4} =
[0 0 0 0 1 0]
[0 0 0 0 0 1]
[0 1 0 0 0 0]
[1 0 0 0 0 0]
[0 0 0 1 0 0]
[0 0 1 0 0 0],

R_{g5} =
[0 0 0 1 0 0]
[0 0 1 0 0 0]
[0 0 0 0 0 1]
[0 0 0 0 1 0]
[1 0 0 0 0 0]
[0 1 0 0 0 0],

R_{g6} =
[0 0 0 0 0 1]
[0 0 0 0 1 0]
[0 0 0 1 0 0]
[0 0 1 0 0 0]
[0 1 0 0 0 0]
[1 0 0 0 0 0],
where the ordering of the permutations is the same as that for the rows and columns.
The group multiplication table may be computed by multiplying permutations or by multiplying their representation matrices, and the answer is
1 2 3 4 5 6
2 1 5 6 3 4
3 4 1 2 6 5
4 3 6 5 1 2 .
5 6 2 1 4 3
6 5 4 3 2 1
For example, the permutation p is sixth in our ordering and q is fourth, so pq is the entry
in the sixth row and fourth column of the multiplication table, namely the third permutation
pq = (1 2 3 -> 2 1 3). The corresponding calculation using the regular representation is

R_p R_q =
[0 0 0 0 0 1] [0 0 0 0 1 0]   [0 0 1 0 0 0]
[0 0 0 0 1 0] [0 0 0 0 0 1]   [0 0 0 1 0 0]
[0 0 0 1 0 0] [0 1 0 0 0 0] = [1 0 0 0 0 0]
[0 0 1 0 0 0] [1 0 0 0 0 0]   [0 1 0 0 0 0]
[0 1 0 0 0 0] [0 0 0 1 0 0]   [0 0 0 0 0 1]
[1 0 0 0 0 0] [0 0 1 0 0 0]   [0 0 0 0 1 0]

= R_{pq}.
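The bookkeeping above is easy to check by machine. The following sketch (the helper names `compose` and `regular` are ours) builds the regular representation directly from the definition R_g e_h = e_{gh} and confirms the homomorphism property for all thirty-six pairs of elements:

```python
import itertools
import numpy as np

# The six permutations of {1,2,3}, each stored as a tuple (p(1), p(2), p(3));
# itertools happens to list them in the same order used in Solution 1.
perms = list(itertools.permutations((1, 2, 3)))
index = {p: i for i, p in enumerate(perms)}

def compose(p, q):
    # (pq)(i) = p(q(i)): first apply q, then p, as in the text.
    return tuple(p[q[i] - 1] for i in range(3))

def regular(g):
    # (R_g)_{gh,h} = 1: column h has its single 1 in row gh.
    R = np.zeros((6, 6))
    for h in perms:
        R[index[compose(g, h)], index[h]] = 1
    return R

# Verify the worked example p q = (1 2 3 -> 2 1 3) ...
assert compose((3, 2, 1), (2, 3, 1)) == (2, 1, 3)

# ... and the homomorphism property R_g R_h = R_{gh} for all pairs.
ok = all(
    np.array_equal(regular(g) @ regular(h), regular(compose(g, h)))
    for g in perms for h in perms
)
print(ok)
```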
Representations are useful for many algebraic structures other than groups; for example we
consider representations of Lie and Clifford algebras below. In all cases a representation
is a map rho from the abstract algebra A to the space aut X of non-singular linear operators
(matrices) acting on some vector space X, rho: A -> aut X, which preserves the relevant
algebraic structure. For a group this means that for all a, b in A the representation must
satisfy rho(a) rho(b^-1) = rho(a b^-1), where rho(a) rho(b^-1) denotes multiplication of matrices and a b^-1
denotes multiplication of group elements.
Exercise 2: Show that the identity rho(a) rho(b^-1) = rho(a b^-1) for all a, b in A suffices to guarantee
that
1. rho(u) rho(v) = rho(uv) for all u, v in A;
2. rho(1) = 1, where one 1 is the identity element of the group and the other is the
unit matrix;
3. rho(u^-1) = rho(u)^-1 for all u in A, where one superscript -1 is a group inverse and the other
is a matrix inverse.
Solution 2.
1. The identity holds for all a, b in A, and therefore in particular for a = u and b = v^-1,
hence rho(u) rho(v) = rho(uv) since (v^-1)^-1 = v (by definition (v^-1)^-1 v^-1 = 1, so right
multiplication by v together with the definition v^-1 v = 1 and the associativity of group
multiplication gives v = 1v = ((v^-1)^-1 v^-1) v = (v^-1)^-1 (v^-1 v) = (v^-1)^-1 1 = (v^-1)^-1).
2. Take b = 1, for which rho(a) rho(1) = rho(a1) = rho(a) for all a in A, which gives rho(1) = 1
upon left multiplication by rho(a)^-1.
3. Take b = a, for which rho(a) rho(a^-1) = rho(a a^-1) = rho(1) = 1, and multiply on the left by
rho(a)^-1 and use associativity of matrix multiplication.
If A is a Lie algebra then we require that rho is linear, rho(alpha a + beta b) = alpha rho(a) + beta rho(b) for all a, b in A,
where alpha and beta are arbitrary scalars; and we also require that the abstract Lie product is
mapped to a commutator, rho([a, b]) = [rho(a), rho(b)]. In this last identity [a, b] is the abstract Lie
product and [rho(a), rho(b)] = rho(a) rho(b) - rho(b) rho(a) is the commutator of two matrices defined in
terms of matrix multiplication. In an abstract Lie algebra ab does not mean anything, as no
such abstract multiplication operation is defined in general.
If X' is a linear subspace of X and rho(a)x in X' for all x in X' and a in A (rho(a)x denotes matrix-vector multiplication) then we say that X' is an invariant subspace of X. This means we
can choose a basis for X in which the basis vectors spanning X' are at the top, and in
this basis all the matrices rho(a) are simultaneously of the block form
[rho_11(a) rho_12(a)]
[   0      rho_22(a)],
and the map rho_11: A -> aut X' is itself a representation by matrices in aut X' acting on the invariant
subspace X'. If a representation has a non-trivial invariant subspace it is called reducible.
For finite groups, semi-simple Lie algebras, and some other structures it can be shown that if
a representation is reducible then a basis can be found in which it is completely reducible, that
is there is some definition of an orthogonal invariant subspace X'' such that X = X' (+) X''.
This means that there is a basis in which rho_12(a) = 0 and rho_22: A -> aut X'' is a representation.
In the case of groups this result is known as Maschke's theorem, whereas for Lie algebras it
was first proved by Hermann Weyl.
If X has no invariant subspaces then the representation is called an irreducible representation.
Two representations are called equivalent if they differ only by a change of basis.
Exercise 3: Change the basis in the regular representation of S3 that we constructed in
Exercise 1 to the basis given by the following linear combinations of permutations (writing
e1, ..., e6 for the basis vectors corresponding to the permutations in the ordering of
Solution 1):

v1 = sqrt(1/6) (e1 + e2 + e3 + e4 + e5 + e6),
v2 = sqrt(1/6) (e1 - e2 - e3 + e4 + e5 - e6),
v3 = sqrt(1/3) (e1 + (1/2)e2 - e3 - (1/2)e4 - (1/2)e5 + (1/2)e6),
v4 = (1/2) (e2 + e4 - e5 - e6),
v5 = (1/2) (e2 - e4 + e5 - e6),
v6 = sqrt(1/3) (e1 - (1/2)e2 + e3 - (1/2)e4 - (1/2)e5 - (1/2)e6).
This completely reduces the representation into the sum of two inequivalent one-dimensional
irreducible representations plus two equivalent two-dimensional ones.
Solution 3. The matrix that takes the basis used in the solution given for Exercise 1 to
the new basis has the coefficients of v1, ..., v6 as its columns,

U =
[ sqrt(1/6)  sqrt(1/6)   sqrt(1/3)      0     0    sqrt(1/3)    ]
[ sqrt(1/6) -sqrt(1/6)   1/(2 sqrt 3)  1/2   1/2  -1/(2 sqrt 3) ]
[ sqrt(1/6) -sqrt(1/6)  -sqrt(1/3)      0     0    sqrt(1/3)    ]
[ sqrt(1/6)  sqrt(1/6)  -1/(2 sqrt 3)  1/2  -1/2  -1/(2 sqrt 3) ]
[ sqrt(1/6)  sqrt(1/6)  -1/(2 sqrt 3) -1/2   1/2  -1/(2 sqrt 3) ]
[ sqrt(1/6) -sqrt(1/6)   1/(2 sqrt 3) -1/2  -1/2  -1/(2 sqrt 3) ],

giving for every group element g

U^T R_g U = diag(1, sgn g, D(g), D(g)),

where sgn g = +-1 is the parity of the permutation and the 2x2 blocks D(g) are

D(g1) = [1 0; 0 1],              D(g2) = [ 1/2  sqrt(3)/2;  sqrt(3)/2 -1/2],
D(g3) = [-1 0; 0 1],             D(g4) = [-1/2 -sqrt(3)/2;  sqrt(3)/2 -1/2],
D(g5) = [-1/2  sqrt(3)/2; -sqrt(3)/2 -1/2],
D(g6) = [ 1/2 -sqrt(3)/2; -sqrt(3)/2 -1/2],

with the ordering of the group elements the same as in Solution 1.
The two-dimensional representations are irreducible: their matrices do not all commute and
therefore cannot be simultaneously diagonalised, so there is no one-dimensional invariant
subspace. They are manifestly equivalent, since in this basis the two diagonal blocks are
identical; had we chosen a basis in which the two blocks differed, their equivalence would be
exhibited by an explicit 2x2 change of basis V mapping one set of blocks onto the other.
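The complete reduction can be checked numerically. The sketch below (all helper names are ours) builds U from the matrix elements of a standard two-dimensional irrep of S3, normalised as in the orthogonality theorem, and verifies that conjugation block-diagonalises every matrix of the regular representation:

```python
import itertools
import numpy as np

perms = list(itertools.permutations((1, 2, 3)))
index = {p: i for i, p in enumerate(perms)}

def compose(p, q):
    return tuple(p[q[i] - 1] for i in range(3))

def sign(p):
    # parity of a permutation of three objects
    return 1 if p in [(1, 2, 3), (2, 3, 1), (3, 1, 2)] else -1

def regular(g):
    R = np.zeros((6, 6))
    for h in perms:
        R[index[compose(g, h)], index[h]] = 1
    return R

# Two-dimensional irrep: the 3-cycles map to rotations by +-120 degrees,
# (1 2 3 -> 2 1 3) to a reflection; the rest follow by multiplication.
c, s = -0.5, np.sqrt(3) / 2
D = {(1, 2, 3): np.eye(2),
     (2, 3, 1): np.array([[c, -s], [s, c]]),
     (3, 1, 2): np.array([[c, s], [-s, c]]),
     (2, 1, 3): np.array([[-1.0, 0.0], [0.0, 1.0]])}
D[(1, 3, 2)] = D[(2, 1, 3)] @ D[(2, 3, 1)]
D[(3, 2, 1)] = D[(2, 1, 3)] @ D[(3, 1, 2)]

# Columns of U: trivial, sign, then the matrix elements D(g)_{ij}
# with the orthogonality-theorem normalisation sqrt(dim / |G|).
U = np.zeros((6, 6))
for g in perms:
    U[index[g], 0] = 1 / np.sqrt(6)
    U[index[g], 1] = sign(g) / np.sqrt(6)
    U[index[g], 2] = D[g][0, 0] / np.sqrt(3)
    U[index[g], 3] = D[g][1, 0] / np.sqrt(3)
    U[index[g], 4] = D[g][0, 1] / np.sqrt(3)
    U[index[g], 5] = D[g][1, 1] / np.sqrt(3)

assert np.allclose(U.T @ U, np.eye(6))          # U is orthogonal
for g in perms:
    expected = np.zeros((6, 6))
    expected[0, 0] = 1
    expected[1, 1] = sign(g)
    expected[2:4, 2:4] = D[g]
    expected[4:6, 4:6] = D[g]
    assert np.allclose(U.T @ regular(g) @ U, expected)
print("regular representation completely reduced")
```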
There are powerful techniques to find all the irreducible representations of finite groups using
the theory of characters due to G. Frobenius and I. Schur (the character of a representation
matrix is its trace).
If M is an invariant matrix, that is if it commutes with all the matrices in an irreducible
representation, then Schur's lemma states that it must be a multiple of the unit matrix. If it
were not then it would necessarily have more than one eigenvalue, and the corresponding
eigenspaces would be invariant subspaces.
Finally, we call a representation faithful if it is injective (a one-to-one map). In our symmetric
group example the one-dimensional irreducible representations are manifestly not faithful,
whereas the two-dimensional irreducible representation is faithful.
Lie Groups
In QFT we are especially interested in continuous symmetries, both internal symmetries such
as the group U(1) of phase transformations associated with electric charge conservation and
the SO(3, 1) space-time symmetry of special relativity. These symmetry groups are in fact
Lie groups, for which the group elements are not only continuous but analytic functions of a
set of parameters; this means that although there are an infinite number of group elements
each one may be specified by a finite number of parameters and may be expanded in a
Taylor series in these parameters in some neighbourhood of each group element. Verb. Sap.:
it turns out that all continuous groups are almost Lie groups (the Gleason-Montgomery-Zippin theorem). Of course there are Lie groups with an infinite number of parameters, but
the Lie groups of interest in physics have only a small number of parameters. The number
of parameters is called the dimension of the group; for example, the group U(1) of phase
transformations has one parameter, the group of isospin transformations SU(2) has three
parameters, and the Lorentz group SO(3, 1) has six. In physics we are almost always interested
in groups with real-valued parameters, but it turns out that it is mathematically easier to
first consider the groups with complex parameters and then specialise them to their various
real forms: we will therefore investigate the complex orthogonal groups such as SO(4, C)
and then consider what happens when we restrict them to their real forms such as SO(4, R) or
SO(3, 1, R) = SO(3, 1).
Here we find something surprising: the group of transformations is determined by just the
linear term in this Taylor expansion. If we write G(a/N) = 1 + (a/N) d/dx then

G(a) = lim_{N -> infinity} (1 + (a/N) d/dx)^N
     = lim_{N -> infinity} exp[N ln(1 + (a/N) d/dx)]
     = lim_{N -> infinity} exp[N((a/N) d/dx + O(N^-2))]
     = exp(a d/dx).
We call the linear operator d/dx the generator of the group, although it is not an element
of the group itself: in fact we shall show that it lives in a particular kind of linear algebra
called a Lie algebra.
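The limit above can be made concrete on polynomials, where d/dx is a finite (nilpotent) matrix acting on coefficient vectors. The sketch below (names and the degree cutoff are our choices) checks both the limit definition and the fact that exp(a d/dx) translates the argument:

```python
import numpy as np

# d/dx acting on polynomial coefficients c_k of f(x) = sum_k c_k x^k
# (degree < 6): (Dc)_k = (k+1) c_{k+1}.  D is nilpotent, so the
# exponential series terminates after a few terms.
n = 6
D = np.diag(np.arange(1.0, n), k=1)

a = 0.7
G, term = np.eye(n), np.eye(n)
for j in range(1, n):                    # exp(aD) = sum_j (aD)^j / j!
    term = term @ (a * D) / j
    G = G + term

# the limit definition (1 + aD/N)^N approaches the same operator
N = 100000
GN = np.linalg.matrix_power(np.eye(n) + a * D / N, N)
assert np.allclose(G, GN, atol=1e-2)

# exp(a d/dx) f(x) = f(x + a): check on f(x) = 1 + x + x^2 at x = 2
c = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
shifted = G @ c
assert np.isclose(np.polyval(shifted[::-1], 2.0),
                  np.polyval(c[::-1], 2.0 + a))
print("exp(a d/dx) acts as translation by a")
```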
Direct and Semi-Direct Products
What happens if we consider translations in more than one dimension? For finite translations
in n dimensions there are n independent generators, G(a_i): f(x) -> f(x + a_i), where {a_i} is
a set of linearly independent (not necessarily orthogonal) vectors in an n-dimensional space.
The group of elements G(sum_{i=1}^n k_i a_i) with k_i in R is still an abelian group: in fact it is the
direct product of n one-dimensional translation groups, where we consider an element of the
direct product as a tuple (G(k_1 a_1), ..., G(k_n a_n)) of elements of the one-dimensional groups
together with the multiplication operation (G_1, ..., G_n)(G'_1, ..., G'_n) = (G_1 G'_1, ..., G_n G'_n).
You can easily verify that the direct product group is abelian if and only if all the factor
groups are abelian.
This is the simplest way of combining groups, but it is certainly not the only way. Another
important type of product of groups is the semi-direct product; consider the product of one-dimensional translations with the finite discrete abelian group of rotations R(theta) generated
by the rotation R(pi/n) through angle pi/n in two dimensions.
R(theta) has a natural action on vectors in R^2, R(theta): a -> a'; for example if a = e_x is the unit
vector in the x-direction then a' = cos(theta) e_x + sin(theta) e_y. We define the semi-direct product
of such translations and rotations as the group of pairs (R(theta), G(b)) with the product
(R, G) (R', G') = (R R', G (R G')). To be a little more explicit,
(R(theta_1), G(a_1)) (R(theta_2), G(a_2)) = (R(theta_1) R(theta_2), G(a_1) R(theta_1)G(a_2)) = (R(theta_1 + theta_2), G(a_1 + a'_2)),
where a'_2 = R(theta_1) a_2.
An element of the infinite discrete product group consists of a rotation through some multiple
of pi/n followed by a translation through some integer multiple of a unit vector in some
direction pi k/n with k in Z. If we consider rotations in more than two dimensions then the
choice of angles is very much restricted if we require the rotations to form a finite discrete
group (such groups are closely related to Coxeter groups, which play an important role in
the theory of Lie algebras, but which will not be considered further here), but we can also
consider the semi-direct product of the Lie group SO(n) of continuous rotations in R^n with
one-dimensional continuous translations. In four dimensions the semi-direct product of the
Lorentz group SO(3, 1) with translations is the Poincare group.
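The semi-direct product law is easy to exercise in code. This sketch (function names are ours) represents elements of the two-dimensional Euclidean group as pairs (R, a) with the composition rule above, and checks associativity, inverses, and compatibility with the action on points:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def mul(g1, g2):
    # semi-direct product rule: (R1, a1)(R2, a2) = (R1 R2, a1 + R1 a2)
    R1, a1 = g1
    R2, a2 = g2
    return (R1 @ R2, a1 + R1 @ a2)

def inv(g):
    R, a = g
    return (R.T, -R.T @ a)

rng = np.random.default_rng(0)
g = (rot(0.3), rng.standard_normal(2))
h = (rot(1.1), rng.standard_normal(2))
k = (rot(-0.7), rng.standard_normal(2))

# associativity and inverses
lhs, rhs = mul(mul(g, h), k), mul(g, mul(h, k))
assert np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
e = mul(g, inv(g))
assert np.allclose(e[0], np.eye(2)) and np.allclose(e[1], 0)

# acting on a point x (first rotate, then translate) is compatible
# with the group product
x = np.array([1.0, 0.0])
act = lambda g, x: g[0] @ x + g[1]
assert np.allclose(act(mul(g, h), x), act(g, act(h, x)))
print("E(2) composition law verified")
```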
Exercise 4: Verify the Jacobi identity [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0 for matrix
groups.
Solution 4.
[A, [B, C]] + [B, [C, A]] + [C, [A, B]]
= A(BC - CB) - (BC - CB)A
+ B(CA - AC) - (CA - AC)B
+ C(AB - BA) - (AB - BA)C
= ABC - ACB - BCA + CBA
+ BCA - BAC - CAB + ACB
+ CAB - CBA - ABC + BAC = 0.
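The same cancellation can be confirmed numerically on random matrices (a sketch; the seed and matrix size are arbitrary choices of ours):

```python
import numpy as np

def comm(A, B):
    # matrix commutator [A, B] = AB - BA
    return A @ B - B @ A

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

# the Jacobi identity: the three nested commutators sum to zero
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(J, np.zeros((4, 4)))
print("Jacobi identity holds to machine precision")
```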
Such a transformation does not necessarily conserve volume. Volume is defined by the
totally antisymmetric n-index Levi-Civita tensor eps_{mu1 mu2 ... mun}, the volume of the parallelepiped
defined by the n vectors u1, ..., un being u1^{mu1} u2^{mu2} ... un^{mun} eps_{mu1 mu2 ... mun}. The properties of the Levi-Civita tensor follow from two simple observations: firstly that it has only one independent
component, which we shall take to be eps_{1,2,...,n} = 1, and secondly that

eps_{mu1 ... mun} eps^{nu1 ... nun} = n! delta^{[nu1}_{mu1} ... delta^{nun]}_{mun},
(1)
[Diagram: for n = 3, the contraction of the two Levi-Civita tensors (hatched boxes) equals
3! times the three-index antisymmetrizer (black box).]
where the hatched boxes on the left represent the Levi-Civita tensors and the black box on
the right represents the antisymmetrizer.
The antisymmetrizer delta^{[nu1}_{mu1} ... delta^{nuk]}_{muk} is a very useful operator that projects onto the totally antisymmetric k-index subspace: in other words it acts as the identity when applied to k antisymmetric
indices,

delta^{[nu1}_{mu1} ... delta^{nuk]}_{muk} v_{[nu1 ... nuk]} = v_{[mu1 ... muk]}

where v_{nu1 ... nui ... nuj ... nuk} = -v_{nu1 ... nuj ... nui ... nuk} for any 1 <= i < j <= k, a special case of this being that
it is idempotent (it equals its square),

delta^{[nu1}_{mu1} ... delta^{nuk]}_{muk} delta^{[lambda1}_{nu1} ... delta^{lambdak]}_{nuk} = delta^{[lambda1}_{mu1} ... delta^{lambdak]}_{muk}.
What equation (1) tells us is that if k = n then this antisymmetric projector factors into a
product of two Levi-Civita tensors. The following exercise shows why this is useful.
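Equation (1) can be confirmed by brute force for n = 3; the sketch below (all names are ours) builds the Levi-Civita tensor and compares both sides of the identity, with the right-hand side written as the signed sum over permutations that defines n! times the antisymmetrizer:

```python
import itertools
import numpy as np

n = 3

def levi_civita(n):
    eps = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        # sign from the inversion count of the permutation
        sgn = (-1) ** sum(p[i] > p[j]
                          for i in range(n) for j in range(i + 1, n))
        eps[p] = sgn
    return eps

eps = levi_civita(n)

# lhs: eps_{m1 m2 m3} eps^{n1 n2 n3} as a rank-6 array
lhs = np.einsum('abc,def->abcdef', eps, eps)

# rhs: sum_pi sgn(pi) prod_k delta_{m_k}^{n_{pi(k)}}
rhs = np.zeros_like(lhs)
for pi in itertools.permutations(range(n)):
    sgn = (-1) ** sum(pi[i] > pi[j]
                      for i in range(n) for j in range(i + 1, n))
    for mu in itertools.product(range(n), repeat=n):
        for nu in itertools.product(range(n), repeat=n):
            if all(mu[k] == nu[pi[k]] for k in range(n)):
                rhs[mu + nu] += sgn
assert np.array_equal(lhs, rhs)
print("equation (1) verified for n = 3")
```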
Exercise 5:
1. Write the identity of equation (1) explicitly for n = 3.
2. Show that for n = 3

eps_{ijk} eps^{i j' k'} = delta_j^{j'} delta_k^{k'} - delta_k^{j'} delta_j^{k'}   and   eps_{ijk} eps^{i j k'} = 2 delta_k^{k'}.

3. Prove that
(a) a totally antisymmetric tensor with n indices in an n-dimensional space has only
one independent component;
(b) the determinant may be written as a sum over permutations,
det M = (1/n!) eps^{mu1 ... mun} M^{nu1}_{mu1} ... M^{nun}_{mun} eps_{nu1 ... nun};
(c) for n = 3 the determinant is the trace polynomial
det M = (1/3!) [(tr M)^3 - 3 tr M^2 tr M + 2 tr M^3];
(d) det AB = det A det B;
(e) M Adj M = det M 1, where the adjugate matrix Adj M is defined by
(Adj M)^nu_mu = (1/(n-1)!) eps_{mu mu1 ... mu(n-1)} M^{mu1}_{nu1} ... M^{mu(n-1)}_{nu(n-1)} eps^{nu nu1 ... nu(n-1)}.
[Diagram: the adjugate drawn diagrammatically as (1/2!) times two M boxes attached to a
pair of Levi-Civita tensors.]
Solution 5.
1. eps_{ijk} eps^{i'j'k'} = delta_i^{i'} delta_j^{j'} delta_k^{k'} - delta_i^{i'} delta_k^{j'} delta_j^{k'} + delta_k^{i'} delta_i^{j'} delta_j^{k'} - delta_k^{i'} delta_j^{j'} delta_i^{k'} + delta_j^{i'} delta_k^{j'} delta_i^{k'} - delta_j^{i'} delta_i^{j'} delta_k^{k'}.
2. Contract the first indices on the Levi-Civita tensors in the last expression, that is
multiply it by delta^i_{i'} and sum over i and i', to obtain

eps_{ijk} eps^{i j' k'} = 3 delta_j^{j'} delta_k^{k'} - 3 delta_k^{j'} delta_j^{k'} + delta_k^{j'} delta_j^{k'} - delta_k^{k'} delta_j^{j'} + delta_j^{k'} delta_k^{j'} - delta_j^{j'} delta_k^{k'}
= delta_j^{j'} delta_k^{k'} - delta_k^{j'} delta_j^{k'}.

Further contracting the indices j and j' we obtain eps_{ijk} eps^{i j k'} = 3 delta_k^{k'} - delta_k^{k'} = 2 delta_k^{k'}.
(b) From the definition of the antisymmetrizer we may write the determinant as a
sum over permutations,

det M = (1/n!) eps^{mu1 ... mun} M^{nu1}_{mu1} ... M^{nun}_{mun} eps_{nu1 ... nun} = sum_pi sgn(pi) M^1_{pi(1)} ... M^n_{pi(n)},

where in the last step we sorted the M^{nu}_{mu} (which of course commute) to obtain n!
copies of the sum over a single permutation. For the case where n = 3 this is

det M = delta^{[i'}_i delta^{j'}_j delta^{k']}_k M^i_{i'} M^j_{j'} M^k_{k'} = 3! delta^{[1}_i delta^{2}_j delta^{3]}_k M^i_1 M^j_2 M^k_3
= M^1_1 M^2_2 M^3_3 - M^1_1 M^3_2 M^2_3 + M^3_1 M^1_2 M^2_3 - M^3_1 M^2_2 M^1_3 + M^2_1 M^3_2 M^1_3 - M^2_1 M^1_2 M^3_3
= M^1_1 (M^2_2 M^3_3 - M^3_2 M^2_3) - M^1_2 (M^2_1 M^3_3 - M^3_1 M^2_3) + M^1_3 (M^2_1 M^3_2 - M^2_2 M^3_1).
(c) We can also write the determinant as a polynomial of traces,

det M = delta^{[i'}_i delta^{j'}_j delta^{k']}_k M^i_{i'} M^j_{j'} M^k_{k'}
= (1/6) (delta^{i'}_i delta^{j'}_j delta^{k'}_k - delta^{i'}_i delta^{j'}_k delta^{k'}_j + delta^{i'}_k delta^{j'}_i delta^{k'}_j - delta^{i'}_k delta^{j'}_j delta^{k'}_i + delta^{i'}_j delta^{j'}_k delta^{k'}_i - delta^{i'}_j delta^{j'}_i delta^{k'}_k) M^i_{i'} M^j_{j'} M^k_{k'}
= (1/6) (M^i_i M^j_j M^k_k - M^i_i M^k_j M^j_k + M^k_i M^i_j M^j_k - M^k_i M^j_j M^i_k + M^j_i M^k_j M^i_k - M^j_i M^i_j M^k_k)
= (1/6) [(tr M)^3 - 3 tr M^2 tr M + 2 tr M^3].
(d) Using the representation of the determinant from part (b),

det AB = (1/n!) eps_{mu1 ... mun} (AB)^{mu1}_{nu1} ... (AB)^{mun}_{nun} eps^{nu1 ... nun}
= (1/n!) eps_{mu1 ... mun} A^{mu1}_{lambda1} ... A^{mun}_{lambdan} B^{lambda1}_{nu1} ... B^{lambdan}_{nun} eps^{nu1 ... nun}.

The contraction with the Levi-Civita tensor makes the expression antisymmetric under
interchange of any pair mu_i <-> mu_j, and therefore, since the factors A^{mu}_{lambda} commute, it must
also be antisymmetric under interchange of the corresponding pair lambda_i <-> lambda_j. We can
therefore insert an antisymmetrizer on the lambda indices without changing anything. Inserting
equation (1), the antisymmetrizer factors into a product of two Levi-Civita tensors, one of
which contracts with the A factors and the other with the B factors:

det AB = [(1/n!) eps_{mu1 ... mun} A^{mu1}_{lambda1} ... A^{mun}_{lambdan} eps^{lambda1 ... lambdan}] [(1/n!) eps_{lambda'1 ... lambda'n} B^{lambda'1}_{nu1} ... B^{lambda'n}_{nun} eps^{nu1 ... nun}]
= det A det B.

[Diagram: the same manipulation drawn with A and B boxes separated by a pair of Levi-Civita tensors.]

(e) Contracting the adjugate with one more factor of M gives n factors of M attached to
the two Levi-Civita tensors,

(Adj M)^nu_mu M^mu_lambda = (1/(n-1)!) eps_{mu mu1 ... mu(n-1)} M^{mu1}_{nu1} ... M^{mu(n-1)}_{nu(n-1)} eps^{nu nu1 ... nu(n-1)} M^mu_lambda.

By the usual argument (the factors of M commute and the expression is antisymmetric on
the mu indices, hence also on the nu indices) we may insert an antisymmetrizer on the n
contracted index pairs without changing anything; inserting identity (1) twice then produces
a factor of det M times the contraction of the two remaining Levi-Civita tensors, which is
(n-1)! delta^nu_lambda. Hence M Adj M = Adj M M = det M 1. The intermediate steps use a
recursion relation expressing the k-index antisymmetrizer in terms of the (k-1)-index one,
which in turn can be established by induction.
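The determinant identities of Exercise 5 are easy to spot-check numerically. A sketch (names ours; the adjugate is built directly from the epsilon-tensor definition above):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))

# det M = ((tr M)^3 - 3 tr(M^2) tr M + 2 tr(M^3)) / 6 for a 3x3 matrix
t1, t2, t3 = np.trace(M), np.trace(M @ M), np.trace(M @ M @ M)
det_traces = (t1**3 - 3 * t2 * t1 + 2 * t3) / 6
assert np.isclose(det_traces, np.linalg.det(M))

# Levi-Civita tensor for n = 3
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = (-1) ** sum(p[i] > p[j]
                         for i in range(3) for j in range(i + 1, 3))

# (Adj M)^nu_mu = (1/2!) eps_{mu a b} M^a_c M^b_d eps^{nu c d},
# and M Adj M = det M * identity
adj = np.einsum('mab,ac,bd,ncd->nm', eps, M, M, eps) / 2
assert np.allclose(M @ adj, np.linalg.det(M) * np.eye(3))

# det(AB) = det A det B
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
print("epsilon-tensor determinant identities verified")
```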
The invariance of the Levi-Civita tensor gives the condition det R = 1. Orthogonality alone,
1 = det 1 = det R^T R = det R det R^T = (det R)^2, only tells us that det R = +-1, so the constraint
of being volume preserving or special (the S in SO(n)) does not have any effect on the Lie
algebra so(n), but it tells us that the group O(n) has two disconnected pieces corresponding to
matrices of determinant +-1. Verb. Sap.: this does not mean that SO(n) is necessarily
connected, for example the Lorentz group SO(3, 1) has two disconnected pieces.
Generators for SO(n)
We have shown that the elements of SO(n) are orthogonal matrices with unit determinant,
but what is the nature of the elements of the corresponding Lie algebra so(n)? This is easily
discovered by considering the identity R^T R = 1 for R = 1 + eps L + O(eps^2) close to the identity,

(1 + eps L)(1 + eps L^T) = 1 + eps (L + L^T) + O(eps^2) = 1,

so the generators of the Lie algebra so(n) must be antisymmetric matrices, L_i = -L_i^T. The
identity det R = exp tr ln R = 1 tells us that tr L_i = 0, but this gives nothing new since
antisymmetry forces the diagonal elements of the generators to vanish anyhow. This is
consistent with the observation we made before that volume preservation does not have any
effect on the Lie algebra.
Fundamental Representation of so(n) and SO(n)
The generators may be taken to be the antisymmetric matrices
(L^{[mu nu]})_{rho sigma} = delta_{mu rho} delta_{nu sigma} - delta_{nu rho} delta_{mu sigma}, with 1 <= mu < nu <= n.

For n = 2 we have the (1/2) 2 (2 - 1) = 1 generator

L^{[12]} = [0 1; -1 0].

For n = 3 we have the (1/2) 3 (3 - 1) = 3 generators

L^{[12]} = [0 1 0; -1 0 0; 0 0 0],  L^{[13]} = [0 0 1; 0 0 0; -1 0 0],  L^{[23]} = [0 0 0; 0 0 1; 0 -1 0].

For n = 4 we have the (1/2) 4 (4 - 1) = 6 generators L^{[12]}, L^{[13]}, L^{[14]}, L^{[23]}, L^{[24]}, L^{[34]},
each a 4x4 matrix with +1 in position (mu, nu), -1 in position (nu, mu), and zeros elsewhere.
We thus have constructed a set of n x n matrices that represent the generators of so(n); this
is called the defining or fundamental representation. We can use it to construct a canonical
way of representing every element of the group SO(n), or at least those elements connected
to the identity, by using the exponential map.
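A quick machine check of this construction (helper names ours): generate the fundamental generators for small n, confirm they are antisymmetric, and confirm that their exponentials are special orthogonal matrices.

```python
import itertools
import numpy as np

def so_generators(n):
    # (L^{[mu nu]})_{rho sigma} = delta_{mu rho} delta_{nu sigma}
    #                           - delta_{nu rho} delta_{mu sigma}
    gens = {}
    for mu, nu in itertools.combinations(range(n), 2):
        L = np.zeros((n, n))
        L[mu, nu], L[nu, mu] = 1.0, -1.0
        gens[(mu, nu)] = L
    return gens

def mexp(X, terms=30):
    # truncated exponential series; accurate for the small norms used here
    out, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

for n in (2, 3, 4):
    gens = so_generators(n)
    assert len(gens) == n * (n - 1) // 2        # dimension of so(n)
    for L in gens.values():
        assert np.array_equal(L, -L.T)          # antisymmetric generator
        R = mexp(0.37 * L)
        assert np.allclose(R.T @ R, np.eye(n))  # orthogonal
        assert np.isclose(np.linalg.det(R), 1)  # special
print("so(n) generators exponentiate into SO(n) for n = 2, 3, 4")
```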
Exercise 7: Carry this out explicitly for SO(3).
1. Evaluate the group element G = exp(theta_{ij} L^{[ij]}).
Hint: write

X = theta_{ij} L^{[ij]} = [0 theta_12 theta_13; -theta_12 0 theta_23; -theta_13 -theta_23 0]

and use the Cayley-Hamilton theorem, which states that a matrix satisfies its own
characteristic polynomial, chi(X) = 0. This should give you an identity of the form
(X^2 + c^2 1)X = 0 for some constant c, allowing you to replace X^{2j+1} in the expansion of the
exponential function by (-c^2)^j X, and X^{2j+2} by (-c^2)^j X^2, for all but the leading term.
2. For what value of |theta| does exp X = 1? Show that exp X = 1 if and only if
|theta| = sqrt(theta_12^2 + theta_13^2 + theta_23^2) = 2 pi k with k in Z.
3. Evaluate tr L^{[12]} L^{[12]}.
Solution 7.
1. The Cayley-Hamilton theorem gives X^3 = -|theta|^2 X with |theta|^2 = theta_12^2 + theta_13^2 + theta_23^2,
so X^{2j+1} = (-|theta|^2)^j X and X^{2j+2} = (-|theta|^2)^j X^2. Substituting this into the series
expansion for the exponential we get

exp X = sum_{j=0}^infinity X^j / j!
= 1 + sum_{j=0}^infinity X^{2j+1}/(2j+1)! + sum_{j=0}^infinity X^{2j+2}/(2j+2)!
= 1 + sum_{j=0}^infinity (-|theta|^2)^j/(2j+1)! X + sum_{j=0}^infinity (-|theta|^2)^j/(2j+2)! X^2
= 1 + (sin|theta| / |theta|) X + ((1 - cos|theta|) / |theta|^2) X^2,

or explicitly

exp X = [1 0 0; 0 1 0; 0 0 1]
+ (sin|theta| / |theta|) [0 theta_12 theta_13; -theta_12 0 theta_23; -theta_13 -theta_23 0]
+ ((1 - cos|theta|) / |theta|^2) [-theta_12^2 - theta_13^2, -theta_13 theta_23, theta_12 theta_23;
-theta_13 theta_23, -theta_12^2 - theta_23^2, -theta_12 theta_13;
theta_12 theta_23, -theta_12 theta_13, -theta_13^2 - theta_23^2].

2. exp X = 1 if and only if sin|theta| = 0 and cos|theta| = 1, that is |theta| = 2 pi k with k in Z.
3. tr L^{[12]} L^{[12]} = -2.
Observe that for this special case we have a three-index Levi-Civita tensor, so we can
express an antisymmetric rank-2 tensor in terms of its dual vector, namely theta_{ij} = eps_{ijk} theta_k
and L^{[ij]} = eps_{ijk} L_k, whence an arbitrary element of the Lie algebra so(3) may be written as
theta_{ij} L^{[ij]} = 2 theta_i L_i.
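The closed form from Solution 7 (a Rodrigues-type rotation formula) can be verified directly against the exponential series; a sketch with arbitrary angle values of our choosing:

```python
import numpy as np

def mexp(X, terms=40):
    # truncated exponential series
    out, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

t12, t13, t23 = 0.4, -0.9, 0.2
X = np.array([[0.0, t12, t13],
              [-t12, 0.0, t23],
              [-t13, -t23, 0.0]])
theta = np.sqrt(t12**2 + t13**2 + t23**2)

# Cayley-Hamilton for an antisymmetric 3x3 matrix: X^3 = -|theta|^2 X
assert np.allclose(X @ X @ X, -theta**2 * X)

# closed form from Solution 7
R = np.eye(3) + (np.sin(theta) / theta) * X \
    + ((1 - np.cos(theta)) / theta**2) * (X @ X)
assert np.allclose(R, mexp(X))              # matches the series
assert np.allclose(R.T @ R, np.eye(3))      # and is a rotation
print("closed form for exp(theta_ij L^[ij]) verified")
```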
Structure Constants
Let us now find the structure constants for so(n) by evaluating the commutators of its
generators. The generators are (L^{[mu nu]})_{rho sigma} = 2 delta_{rho [mu} delta_{nu] sigma}, and the commutation relations
work out to

[L^{[mu nu]}, L^{[rho sigma]}] = delta_{nu rho} L^{[mu sigma]} - delta_{mu rho} L^{[nu sigma]} - delta_{nu sigma} L^{[mu rho]} + delta_{mu sigma} L^{[nu rho]}.

The essence of this calculation is manipulating indices and antisymmetrizers. If we draw lines
connecting dummy indices and a black box for each antisymmetrizer we can rewrite it in a
much more compact diagrammatic form. [Diagram omitted.]
Exercise 8: Write the structure constants for so(3) in terms of the dual generators L_k,
where L^{[ij]} = eps_{ijk} L_k.
Hint: there are two ways to do this: you can expand everything out and collect terms, or
you can make explicit use of the symmetry or antisymmetry under exchange of indices. The
latter is much easier once you are used to it.
Solution 8. Invert the implicit definition of L_k by multiplying by eps_{ij l} to obtain
eps_{ij l} L^{[ij]} = eps_{ij l} eps_{ijk} L_k = 2 delta_{lk} L_k = 2 L_l, so L_l = (1/2) eps_{l ij} L^{[ij]}. Using the
commutation relations above,

[L_i, L_j] = (1/4) eps_{i i' i''} eps_{j j' j''} [L^{[i' i'']}, L^{[j' j'']}] = -eps_{ijk} L_k,

so with this sign convention for the L^{[mu nu]} the structure constants are c_{ij}^k = -eps_{ijk}.
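The sign in the result depends on the convention chosen for L^{[mu nu]}; with the explicit matrices of the fundamental representation above it can be checked directly (a sketch, 0-based indices, names ours):

```python
import itertools
import numpy as np

def L(mu, nu, n=3):
    M = np.zeros((n, n))
    M[mu, nu], M[nu, mu] = 1.0, -1.0
    return M

# Dual generators: L^{[ij]} = eps_{ijk} L_k gives (0-based)
# L_1 = L^{[23]}, L_2 = L^{[31]} = -L^{[13]}, L_3 = L^{[12]}.
Ls = [L(1, 2), -L(0, 2), L(0, 1)]

eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = (-1) ** sum(p[i] > p[j]
                         for i in range(3) for j in range(i + 1, 3))

# check [L_i, L_j] = -eps_{ijk} L_k for all index pairs
for i in range(3):
    for j in range(3):
        comm = Ls[i] @ Ls[j] - Ls[j] @ Ls[i]
        expected = -sum(eps[i, j, k] * Ls[k] for k in range(3))
        assert np.allclose(comm, expected)
print("[L_i, L_j] = -eps_{ijk} L_k verified")
```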
Adjoint Representation
Given any Lie algebra we can construct its adjoint representation as a set of linear
transformations acting on a linear space whose basis elements are labelled by the generators
of the Lie algebra. The action of L_i on L_j is defined to be ad(L_i)L_j = [L_i, L_j]. Let's
verify that this actually provides a representation of the Lie algebra, in other words that the
matrices ad(L_i) satisfy the correct commutation relations.
[ad(L_i), ad(L_j)] L_k = (ad(L_i) ad(L_j) - ad(L_j) ad(L_i)) L_k = ad(L_i)[L_j, L_k] - ad(L_j)[L_i, L_k]
= [L_i, [L_j, L_k]] - [L_j, [L_i, L_k]] = [[L_i, L_j], L_k] = ad([L_i, L_j]) L_k,
where in the penultimate step we have made use of the Jacobi identity
[L_i, [L_j, L_k]] + [L_j, [L_k, L_i]] + [L_k, [L_i, L_j]] = 0.
Since the action of the adjoint commutator on L_k holds for all k we must have [ad(L_i), ad(L_j)] =
ad([L_i, L_j]) = ad(c_{ij}^k L_k) = c_{ij}^k ad(L_k) as required.
Another way of looking at this is that the adjoint representation of L_i is the matrix ad(L_i)
where ad(L_i) L_j = [L_i, L_j] = c_{ij}^k L_k, hence the matrix element ad(L_i)^k_j = c_{ij}^k.
The adjoint representation of a Lie group is obtained by exponentiating the adjoint representation of the corresponding Lie algebra.
Exercise 9: Give an example of a physical field that transforms according to the adjoint
representation of the Lorentz group.
Hint: Think of a rank two antisymmetric tensor.
Exercise 10: Verify the commutation relations for the adjoint representation of so(n); in
other words verify that

[ad(L^{[mu nu]}), ad(L^{[rho sigma]})] = ad([L^{[mu nu]}, L^{[rho sigma]}]).
For example, the electromagnetic field strength is such an antisymmetric rank-2 tensor,

F = [0 Ex Ey Ez; -Ex 0 Bz -By; -Ey -Bz 0 Bx; -Ez By -Bx 0]
-> (F01, F02, F03, F12, F13, F23) = (Ex, Ey, Ez, Bz, -By, Bx),

where we have made a trivial modification to allow the fundamental indices to run from 0
to 3 rather than 1 to 4.
Exercise 12:
1. Write down the generators of SO(n) in the adjoint representation for n = 3 and n = 4.
2. Verify that the generators satisfy tr ad(L^{[mu nu]}) ad(L^{[mu nu]}) = -2(n - 2) (no sum).
Solution 12.
1. For n = 3 we have the (1/2) 3 (3 - 1) = 3 generators; in the basis (L^{[12]}, L^{[13]}, L^{[23]})
the matrix elements of ad(L^{[mu nu]}) are the structure constants, and one finds three 3x3
antisymmetric matrices equivalent to those of the fundamental representation (for so(3) the
adjoint and fundamental representations coincide, both being three dimensional).
For n = 4 the adjoint representation acts on the six-dimensional space spanned by
(L^{[12]}, L^{[13]}, L^{[14]}, L^{[23]}, L^{[24]}, L^{[34]}), and the six 6x6 matrices ad(L^{[mu nu]}) are again read
off from the structure constants. For example, in this ordering ad(L^{[23]}) rotates the pairs
(L^{[12]}, L^{[13]}) and (L^{[24]}, L^{[34]}) into each other and annihilates L^{[14]} and L^{[23]}.
Exercise 13: Use these generators to show how the E and B fields transform under a finite
rotation in the y-z plane.
Solution 13. A rotation through an angle theta about the x axis is given by the adjoint group
element R23(theta) = exp(theta ad L^{[23]}), and since (ad L^{[23]})^2 = diag(0, -1, -1, -1, -1, 0) in the
basis (Ex, Ey, Ez, Bz, By, Bx) this is

R23(theta) =
[1      0           0          0          0         0]
[0  cos theta  -sin theta      0          0         0]
[0  sin theta   cos theta      0          0         0]
[0      0           0      cos theta  sin theta     0]
[0      0           0     -sin theta  cos theta     0]
[0      0           0          0          0         1].

The generator was normalized such that R23(2 pi) = 1. We thus find that

R23(theta) (Ex, Ey, Ez, Bz, By, Bx)
= (Ex, Ey cos theta - Ez sin theta, Ey sin theta + Ez cos theta,
Bz cos theta + By sin theta, -Bz sin theta + By cos theta, Bx)
= (E'x, E'y, E'z, B'z, B'y, B'x),

where E' and B' are the 3-vectors E and B rotated through angle theta in the y-z plane.
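The finite rotation of the field components can be checked by exponentiating the 6x6 generator numerically; a sketch (the sign conventions follow the solution above, and all names are ours):

```python
import numpy as np

def mexp(X, terms=40):
    out, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

# ad L^{[23]} in the basis (Ex, Ey, Ez, Bz, By, Bx): it rotates the
# (Ey, Ez) pair one way and the (Bz, By) pair the opposite way, and
# annihilates Ex and Bx.
M = np.zeros((6, 6))
M[1, 2], M[2, 1] = -1.0, 1.0
M[3, 4], M[4, 3] = 1.0, -1.0

theta = 0.8
R = mexp(theta * M)
c, s = np.cos(theta), np.sin(theta)

E = np.array([1.0, 2.0, 3.0])       # (Ex, Ey, Ez)
B = np.array([-1.0, 0.5, 2.0])      # (Bx, By, Bz)
v = np.array([E[0], E[1], E[2], B[2], B[1], B[0]])
w = R @ v
assert np.allclose(w, [E[0],
                       c * E[1] - s * E[2],
                       s * E[1] + c * E[2],
                       c * B[2] + s * B[1],
                       -s * B[2] + c * B[1],
                       B[0]])
print("finite rotation of (E, B) about the x axis verified")
```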
In four dimensions a suitable set of gamma matrices is

gamma_0 = I (x) sigma_1,  gamma_1 = I (x) sigma_2,  gamma_2 = sigma_1 (x) sigma_3,  gamma_3 = sigma_2 (x) sigma_3,

where sigma_3 = -i sigma_1 sigma_2, I is the 2x2 unit matrix, and A (x) B is the tensor product defined by
(A (x) B)_{ii',jj'} = A_{ij} B_{i'j'}. In this case i, i', j, and j' can each take the values 0 or 1, so we
can interpret the index pair ii' as a binary representation of a number between 0 and 3. We
may therefore write these matrices in block form as

gamma_0 = [sigma_1 0; 0 sigma_1],  gamma_1 = [sigma_2 0; 0 sigma_2],
gamma_2 = [0 sigma_3; sigma_3 0],  gamma_3 = [0 -i sigma_3; i sigma_3 0].
Exercise 14: Verify that these satisfy the anticommutation relations {gamma_mu, gamma_nu} = gamma_mu gamma_nu + gamma_nu gamma_mu = 2 delta_{mu nu} 1.
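For readers who want a machine check of Exercise 14, the tensor-product construction above translates directly into `numpy.kron` (the variable names are ours):

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

gammas = [np.kron(I2, s1),   # gamma_0 = I (x) sigma_1
          np.kron(I2, s2),   # gamma_1 = I (x) sigma_2
          np.kron(s1, s3),   # gamma_2 = sigma_1 (x) sigma_3
          np.kron(s2, s3)]   # gamma_3 = sigma_2 (x) sigma_3

# {gamma_mu, gamma_nu} = 2 delta_{mu nu} times the 4x4 unit matrix
for mu, gm in enumerate(gammas):
    for nu, gn in enumerate(gammas):
        anti = gm @ gn + gn @ gm
        assert np.allclose(anti, 2 * (mu == nu) * np.eye(4))
print("Clifford algebra anticommutation relations verified")
```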
In n = 2 nu dimensions we can construct a suitable set of 2^nu x 2^nu matrices by defining

gamma_{2j} = I (x) ... (x) I (x) sigma_1 (x) sigma_3 (x) ... (x) sigma_3,
gamma_{2j+1} = I (x) ... (x) I (x) sigma_2 (x) sigma_3 (x) ... (x) sigma_3,
(2)

where there are j - 1 identity matrices on the left and nu - j factors of sigma_3 on the right.
In order to construct the spinor representation S we require the following identity

S^-1(theta) gamma_mu S(theta) = R(theta)_mu^nu gamma_nu
(3)

to hold for all group parameters theta, where R is the fundamental representation. If this holds
then we have

R(theta'')_mu^nu gamma_nu = R(theta)_mu^rho R(theta')_rho^nu gamma_nu
= R(theta)_mu^rho [S^-1(theta') gamma_rho S(theta')]
= S^-1(theta') [R(theta)_mu^rho gamma_rho] S(theta')
= S^-1(theta') [S^-1(theta) gamma_mu S(theta)] S(theta')
= [S(theta) S(theta')]^-1 gamma_mu [S(theta) S(theta')]
= S(theta'')^-1 gamma_mu S(theta''),

where theta'' are the parameters corresponding to a transformation by theta' followed by a transformation by theta. We thus see that we can define S(theta'') = S(theta) S(theta'), so S is indeed a
representation (up to a sign).
The next question is to find an S that satisfies equation (3). To do this we follow the same
strategy as we did before, namely we consider group elements S(theta) = 1 + theta_{mu nu} Sigma^{[mu nu]} + O(theta^2)
in the neighbourhood of the identity (remember that in so(n) the group parameters are
naturally labelled by a pair of indices 1 <= mu < nu <= n).
Hence, to first order in theta,

S(theta)^-1 gamma_mu S(theta) = (1 - theta_{rho sigma} Sigma^{[rho sigma]} + O(theta^2)) gamma_mu (1 + theta_{rho sigma} Sigma^{[rho sigma]} + O(theta^2))
= gamma_mu + theta_{rho sigma} [gamma_mu, Sigma^{[rho sigma]}] + O(theta^2),

while R(theta)_mu^nu gamma_nu = gamma_mu + theta_{rho sigma} (L^{[rho sigma]})_mu^nu gamma_nu + O(theta^2), so equation (3) requires

[gamma_mu, Sigma^{[rho sigma]}] = (L^{[rho sigma]})_mu^nu gamma_nu = delta_{mu rho} gamma_sigma - delta_{mu sigma} gamma_rho for all mu.

Exercise 16:
1. Verify that Sigma^{[mu nu]} = (1/4) [gamma_mu, gamma_nu] is a solution of this equation.
2. Enumerate the spinor generators for n = 4 using the representation of the gamma matrices
above.
3. Verify the commutation relations for these generators.
Solution 16.
1. This just requires substituting the proposed solution into the equation:

[gamma_mu, [gamma_rho, gamma_sigma]] = gamma_mu gamma_rho gamma_sigma - gamma_mu gamma_sigma gamma_rho - gamma_rho gamma_sigma gamma_mu + gamma_sigma gamma_rho gamma_mu
= {gamma_mu, gamma_rho} gamma_sigma - gamma_rho {gamma_mu, gamma_sigma} - {gamma_mu, gamma_sigma} gamma_rho + gamma_sigma {gamma_mu, gamma_rho}
= 2 delta_{mu rho} gamma_sigma - 2 delta_{mu sigma} gamma_rho - 2 delta_{mu sigma} gamma_rho + 2 delta_{mu rho} gamma_sigma
= 4 (delta_{mu rho} gamma_sigma - delta_{mu sigma} gamma_rho),

so [gamma_mu, Sigma^{[rho sigma]}] = (1/4) [gamma_mu, [gamma_rho, gamma_sigma]] = delta_{mu rho} gamma_sigma - delta_{mu sigma} gamma_rho as required.
2. With the gamma matrices above, Sigma^{[mu nu]} = (1/2) gamma_mu gamma_nu for mu =/= nu, giving the six generators
(labelling the indices 1 to 4)

Sigma^{[12]} = (i/2) I (x) sigma_3 = (i/2) diag(1, -1, 1, -1),
Sigma^{[13]} = -(i/2) sigma_1 (x) sigma_2,
Sigma^{[14]} = -(i/2) sigma_2 (x) sigma_2,
Sigma^{[23]} = (i/2) sigma_1 (x) sigma_1,
Sigma^{[24]} = (i/2) sigma_2 (x) sigma_1,
Sigma^{[34]} = (i/2) sigma_3 (x) I = (i/2) diag(1, 1, -1, -1).

3. Direct multiplication of these matrices shows that the [Sigma^{[mu nu]}, Sigma^{[rho sigma]}] reproduce the
commutation relations of the L^{[mu nu]}.
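Parts 1 and 3 of this exercise are also quick to check numerically; the sketch below (names ours) verifies the linearised condition of equation (3) for every index combination:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
g = [np.kron(I2, s1), np.kron(I2, s2),
     np.kron(s1, s3), np.kron(s2, s3)]

def comm(A, B):
    return A @ B - B @ A

n = 4
Sigma = {(r, s): comm(g[r], g[s]) / 4 for r in range(n) for s in range(n)}

# condition from equation (3) at first order:
# [gamma_mu, Sigma^{rho sigma}] = delta_{mu rho} gamma_sigma
#                               - delta_{mu sigma} gamma_rho
for mu in range(n):
    for r in range(n):
        for s in range(n):
            lhs = comm(g[mu], Sigma[(r, s)])
            rhs = (mu == r) * g[s] - (mu == s) * g[r]
            assert np.allclose(lhs, rhs)
print("Sigma^{mu nu} = [gamma_mu, gamma_nu]/4 rotates the gammas correctly")
```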
We may define gamma_5 = gamma_1 ... gamma_n, which anticommutes with all the other gamma_mu if n is even. This
is easy to see because gamma_mu anticommutes with the n - 1 factors gamma_nu with nu =/= mu in
gamma_5 and commutes with itself.
Anticommuting gamma_{mu1} to the right through the remaining factors and using the cyclic
property of the trace gives the recursion relation

tr gamma_{mu1} gamma_{mu2} ... gamma_{mu 2j} = sum_{l=2}^{2j} (-1)^l delta_{mu1 mul} tr gamma_{mu2} ... ^gamma_{mul} ... gamma_{mu 2j},

where the hat denotes an omitted factor; in particular tr gamma_mu gamma_nu = delta_{mu nu} tr 1.
Exercise 17:
1. Show that tr gamma_kappa gamma_lambda gamma_mu gamma_nu = (delta_{kappa lambda} delta_{mu nu} - delta_{kappa mu} delta_{lambda nu} + delta_{kappa nu} delta_{lambda mu}) tr 1.
2. Show that the trace of six gamma matrices is given by

tr gamma_{mu1} ... gamma_{mu6} = (d12 d34 d56 - d12 d35 d46 + d12 d36 d45 - d13 d24 d56
+ d13 d25 d46 - d13 d26 d45 + d14 d23 d56 - d14 d25 d36
+ d14 d26 d35 - d15 d23 d46 + d15 d24 d36 - d15 d26 d34
+ d16 d23 d45 - d16 d24 d35 + d16 d25 d34) tr 1,

where dab is shorthand for delta_{mu_a mu_b}.
3. Show that in general tr gamma_{mu1} ... gamma_{mu 2j} is a signed sum over all (2j)!/(2^j j!) ways of
pairing the 2j indices, each term a product of j Kronecker deltas times tr 1.
Hint: it may help to use the shorthand dab = delta_{mu_a mu_b}; the recursion relation then becomes

tr gamma_{mu1} gamma_{mu2} ... gamma_{mu 2j} = sum_{l=2}^{2j} (-1)^l d1l tr gamma_{mu2} ... ^gamma_{mul} ... gamma_{mu 2j}.
Solution 17.
1. Using the recursion relation once and then tr gamma_mu gamma_nu = delta_{mu nu} tr 1,

tr gamma_kappa gamma_lambda gamma_mu gamma_nu = delta_{kappa lambda} tr gamma_mu gamma_nu - delta_{kappa mu} tr gamma_lambda gamma_nu + delta_{kappa nu} tr gamma_lambda gamma_mu
= (delta_{kappa lambda} delta_{mu nu} - delta_{kappa mu} delta_{lambda nu} + delta_{kappa nu} delta_{lambda mu}) tr 1.

2. Likewise, applying the recursion relation once,

tr gamma_{mu1} ... gamma_{mu6} = d12 tr gamma_{mu3} gamma_{mu4} gamma_{mu5} gamma_{mu6} - d13 tr gamma_{mu2} gamma_{mu4} gamma_{mu5} gamma_{mu6}
+ d14 tr gamma_{mu2} gamma_{mu3} gamma_{mu5} gamma_{mu6} - d15 tr gamma_{mu2} gamma_{mu3} gamma_{mu4} gamma_{mu6} + d16 tr gamma_{mu2} gamma_{mu3} gamma_{mu4} gamma_{mu5},

and inserting the result of part 1 into each of the five four-gamma traces produces the fifteen
terms

tr gamma_{mu1} ... gamma_{mu6} = (d12 d34 d56 - d12 d35 d46 + d12 d36 d45 - d13 d24 d56
+ d13 d25 d46 - d13 d26 d45 + d14 d23 d56 - d14 d25 d36
+ d14 d26 d35 - d15 d23 d46 + d15 d24 d36 - d15 d26 d34
+ d16 d23 d45 - d16 d24 d35 + d16 d25 d34) tr 1.
3. This formula follows from the recursion relation by making the observation that each term
in the result corresponds to a choice of j pairs of indices from 2j indices, and the number
of ways of choosing such pairs is (2j)!/(2^j j!). The rule for the sign follows by looking
at the preceding examples.
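The four-gamma trace formula can be verified exhaustively with the explicit n = 4 matrices constructed earlier (a sketch; names ours):

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
g = [np.kron(I2, s1), np.kron(I2, s2),
     np.kron(s1, s3), np.kron(s2, s3)]

d = lambda a, b: 1.0 * (a == b)

# tr gamma_m gamma_n gamma_r gamma_s
#   = (d_mn d_rs - d_mr d_ns + d_ms d_nr) tr 1, with tr 1 = 4
for m in range(4):
    for n in range(4):
        for r in range(4):
            for s in range(4):
                lhs = np.trace(g[m] @ g[n] @ g[r] @ g[s])
                rhs = (d(m, n) * d(r, s) - d(m, r) * d(n, s)
                       + d(m, s) * d(n, r)) * 4
                assert np.isclose(lhs, rhs)

# the trace of a single gamma matrix vanishes
assert all(abs(np.trace(g[m])) < 1e-12 for m in range(4))
print("four-gamma trace formula verified")
```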
The fact that the trace of any number of gamma matrices can be expressed as a sum of products of
Kronecker tensors immediately tells us that the trace of an antisymmetrized product of one
or more gamma matrices must vanish (for n even). We therefore define the totally antisymmetric
tensors

gamma^{(j)}_{mu1 ... muj} = gamma_{[mu1} ... gamma_{muj]}  for j = 0, ..., n,
(4)
for which tr gamma^{(j)} = 0 for j > 0. A similar argument shows that

tr gamma^{(j)}_{mu1 ... muj} gamma^{(k) nu1 ... nuk} = k! delta^{[nu1}_{mu1} ... delta^{nuk]}_{muk} tr 1

if j = k and vanishes if j =/= k. A fairly simple formula also exists for the trace of three such
tensors, tr gamma^{(j)} gamma^{(k)} gamma^{(l)}.
We can reduce any element X of the Clifford algebra into a sum of tensors with coefficients
that are (tensor) multiples of the unit spinor matrix using the anticommutation relations,

X = sum_{j=0}^n c^{(j)}_{mu1 ... muj} gamma^{(j) mu1 ... muj},

and the coefficients are easily evaluated as traces:

tr X gamma^{(k)}_{nu1 ... nuk} = sum_{j=0}^n c^{(j)}_{mu1 ... muj} tr gamma^{(j) mu1 ... muj} gamma^{(k)}_{nu1 ... nuk}
= c^{(k)}_{mu1 ... muk} delta^{[mu1}_{nu1} ... delta^{muk]}_{nuk} k! tr 1 = c^{(k)}_{[nu1 ... nuk]} k! tr 1.
In four dimensions the tensors are just the scalar gamma^{(0)} = 1, the vector gamma^{(1)}_mu = gamma_mu, the
tensor gamma^{(2)}_{mu nu} = gamma_{[mu} gamma_{nu]} = (1/2)[gamma_mu, gamma_nu] (the generators of so(4)), the pseudoscalar

gamma^{(4)}_{mu nu rho sigma} = gamma_{[mu} gamma_nu gamma_rho gamma_{sigma]} = eps_{mu nu rho sigma} (1/4!) eps^{mu' nu' rho' sigma'} gamma_{mu'} gamma_{nu'} gamma_{rho'} gamma_{sigma'} = eps_{mu nu rho sigma} gamma_5,

and the rank-three tensor gamma^{(3)}, which is the dual of a vector with respect to the Levi-Civita
tensor: evaluating the relevant traces along the lines above gives

gamma_5 gamma^mu = (1/3!) eps^{mu nu rho sigma} gamma^{(3)}_{nu rho sigma}.
The number of independent components of gamma^{(j)} is the binomial coefficient C(n, j), so the
total number of components is sum_{j=0}^n C(n, j) = 2^n. We therefore see that the representation
matrices have the same number of degrees of freedom if the spinors have 2^{n/2} components,
since a 2^{n/2} x 2^{n/2} matrix has (2^{n/2})^2 = 2^n entries.
Real Forms
So far we have been considering so(n) in Euclidean space, but we are really interested in Minkowski space. Fortunately it is very easy to relate the two. If we start with the algebra so(n, ℂ), that is the Lie algebra with complex coefficients and Euclidean generators, then we can construct its real form so(n, ℝ) by restricting the coefficients to be real numbers. Since the Euclidean generators are antihermitian matrices, all the matrices in so(n, ℝ) are also antihermitian, and therefore their exponentials, the elements of a representation of the group SO(n, ℝ), are unitary: $D^\dagger D = \mathbb{1}$.

What this means is that if we exponentiate an element $A$ of the Lie algebra so(n, ℝ), then $\exp(\lambda A)$ equals the identity not only at $\lambda = 0$ but again at some finite value of $\lambda$, since the eigenvalues of $A$ are imaginary. The group SO(n, ℝ) is thus closed and bounded (compact), and it is called the maximal compact form of the group.
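This periodicity is easy to see in the simplest case: the single antihermitian generator of so(2, ℝ). A small numerical sketch (using a truncated power series for the matrix exponential; not part of the notes):

```python
import numpy as np

# antihermitian generator of so(2, R): rotations in the plane
L = np.array([[0.0, -1.0], [1.0, 0.0]])

def expm(A, terms=60):
    # matrix exponential by its (well-convergent) power series
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

assert np.allclose(expm(0.0 * L), np.eye(2))
# exp(lambda L) returns to the identity at the finite value lambda = 2 pi
assert np.allclose(expm(2 * np.pi * L), np.eye(2), atol=1e-8)
# every group element is orthogonal, so the group is bounded
G = expm(1.0 * L)
assert np.allclose(G @ G.T, np.eye(2))
```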
so(n, ℝ) is not the only real form of so(n, ℂ), however. We can construct other real forms using the Weyl unitary trick, which just replaces some antihermitian generators $L_j$ with the hermitian generators $iL_j$. The idea is that we find some transformation $\star : so(n, \mathbb{C}) \to so(n, \mathbb{C})$ that maps the Lie algebra onto itself with the properties that

1. it is an automorphism of the Lie algebra, that is $(L_i L_j)^\star = L_i^\star L_j^\star$ as well as being linear, $(L_i + L_j)^\star = L_i^\star + L_j^\star$; therefore it leaves the commutation relations unchanged, $[L_i^\star, L_j^\star] = L_i^\star L_j^\star - L_j^\star L_i^\star = [L_i, L_j]^\star = c_{ij}{}^{k} L_k^\star$; and

2. it is involutive: applying it twice does nothing, $(L_i^\star)^\star = L_i$.

Any such involutive automorphism can play a role analogous to complex conjugation, as follows: let $X \in so(n, \mathbb{C})$ be an eigenvector of $\star$, $X^\star = \lambda X$ for some $\lambda$. Since $(X^\star)^\star = (\lambda X)^\star = \lambda^2 X = X$ its eigenvalue must satisfy $\lambda^2 = 1$, so $\lambda = \pm 1$. We can therefore decompose the Lie algebra into two eigenspaces $so(n, \mathbb{C})_\pm$ corresponding to these two eigenvalues.
Exercise 19: Show that reflection in a mirror perpendicular to the [12]-direction (i.e., the 3-axis) in the adjoint representation of so(3, ℝ) is an involutive automorphism, and find its eigenspaces. Note that such a reflection is not an element of SO(3, ℝ).
Solution 19. The reflection is
$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix},$$
and conjugating the generators by it gives
$$L_{[12]} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \mapsto L_{[12]}^\star = R L_{[12]} R = L_{[12]},$$
$$L_{[13]} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \mapsto L_{[13]}^\star = R L_{[13]} R = -L_{[13]},$$
$$L_{[23]} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \mapsto L_{[23]}^\star = R L_{[23]} R = -L_{[23]}.$$
It is obviously an involutive automorphism since
$$(L_{[ij]}^\star)^\star = R(R L_{[ij]} R)R = L_{[ij]}.$$
The subspace corresponding to eigenvalue $\lambda = +1$ consists of all multiples of $L_{[12]}$, and that corresponding to $\lambda = -1$ is spanned by $L_{[13]}$ and $L_{[23]}$.
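The automorphism property and the eigenspaces can be checked numerically. The generator conventions below, $(L_{[ij]})_{kl} = \delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}$, are one standard choice:

```python
import numpy as np

def L(i, j, n=3):
    # (L[ij])_{kl} = delta_{ik} delta_{jl} - delta_{il} delta_{jk}
    m = np.zeros((n, n))
    m[i, j], m[j, i] = 1.0, -1.0
    return m

L12, L13, L23 = L(0, 1), L(0, 2), L(1, 2)
R = np.diag([1.0, 1.0, -1.0])   # mirror perpendicular to the 3-axis

star = lambda A: R @ A @ np.linalg.inv(R)
comm = lambda A, B: A @ B - B @ A

# involutive: applying it twice does nothing
for A in (L12, L13, L23):
    assert np.allclose(star(star(A)), A)

# automorphism: commutators are preserved
for A in (L12, L13, L23):
    for B in (L12, L13, L23):
        assert np.allclose(star(comm(A, B)), comm(star(A), star(B)))

# eigenspaces: L[12] has eigenvalue +1, L[13] and L[23] eigenvalue -1
assert np.allclose(star(L12), L12)
assert np.allclose(star(L13), -L13)
assert np.allclose(star(L23), -L23)
```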
Using the convenient notation $[A, B] \subseteq C$ to mean that for all $a \in A$ and $b \in B$ the commutator $[a, b] = c \in C$, we observe that $[so(n, \mathbb{C})_+, so(n, \mathbb{C})_+] \subseteq so(n, \mathbb{C})_+$, $[so(n, \mathbb{C})_-, so(n, \mathbb{C})_-] \subseteq so(n, \mathbb{C})_+$ and $[so(n, \mathbb{C})_+, so(n, \mathbb{C})_-] \subseteq so(n, \mathbb{C})_-$, since both sides are eigenvectors of $\star$ and must therefore belong to the same eigenvalue. We may now apply the Weyl unitary trick, which is to restrict the coefficients to be real and multiply all the elements of $so(n, \mathbb{C})_-$ by $i$. The commutation relations then become $[so(n)_+, so(n)_+] \subseteq so(n)_+$, $[i\,so(n)_-, i\,so(n)_-] \subseteq so(n)_+$ and $[so(n)_+, i\,so(n)_-] \subseteq i\,so(n)_-$, which are closed even with real coefficients.
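The closure of these relations can be checked in the so(3) example above (the generator conventions are an assumption, as before):

```python
import numpy as np

def L(i, j):
    # antisymmetric so(3) generators, complex entries allowed
    m = np.zeros((3, 3), dtype=complex)
    m[i, j], m[j, i] = 1, -1
    return m

comm = lambda A, B: A @ B - B @ A

# eigenspaces of the reflection automorphism: g_plus = {L[12]},
# g_minus = {L[13], L[23]}; the Weyl trick multiplies g_minus by i
rot = L(0, 1)
boost1, boost2 = 1j * L(0, 2), 1j * L(1, 2)

# [i g_minus, i g_minus] lands back in g_plus (a real multiple of L[12]) ...
C = comm(boost1, boost2)
assert np.allclose(C, C[0, 1] * L(0, 1))
assert np.allclose(C.imag, 0)

# ... and [g_plus, i g_minus] stays in i g_minus (an imaginary multiple)
D = comm(rot, boost1)
assert np.allclose(D, D[1, 2] * L(1, 2))
assert np.allclose(D.real, 0)

# the multiplied generators are now hermitian rather than antihermitian
for H in (boost1, boost2):
    assert np.allclose(H.conj().T, H)
```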
Of course all the generators in so(n, ℝ) are antihermitian whereas $k$ of those in so(k, n−k) become hermitian, so under exponentiation we will obtain the non-compact group SO(k, n−k).
The commutation relations for so(k, n−k) differ from those of so(n, ℝ) by some changes of sign (they were invariant under $\star$, but the transformation from so(n, ℝ) to so(k, n−k) is not $\star$ but something like $\sqrt{\star}$). In particular this means that the adjoint representation will also be slightly different.
Exercise 20:
1. Construct the adjoint representation of the Lie algebra so(2, 1) obtained by multiplying the generators $L_{[13]}$ and $L_{[23]}$ by $i$ in the previous example, where $\star$ was reflection in a mirror perpendicular to the [12]-axis.
Hint: This does not mean that you just multiply some of the adjoint generators of so(3, ℝ) by $i$. Remember the definition of the adjoint representation.
2. Show that the characteristic polynomial of the general element
$$X = \sum \omega_{ij} L_{[ij]} = \begin{pmatrix} 0 & -\omega_{23} & \omega_{13} \\ -\omega_{23} & 0 & \omega_{12} \\ \omega_{13} & -\omega_{12} & 0 \end{pmatrix}$$
is $-\lambda^3 - \lambda(\omega_{12}^2 - \omega_{13}^2 - \omega_{23}^2)$ in this case.
3. Show that $\exp X$ takes the form
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}$$
for $\omega_{12} = \theta$, $\omega_{13} = \omega_{23} = 0$.

4. Show that this takes the manifestly real form
$$\begin{pmatrix} \cosh\phi & -\sinh\phi & 0 \\ -\sinh\phi & \cosh\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
for $\omega_{12} = \omega_{13} = 0$, $\omega_{23} = \phi$.
5. Verify that $\det \exp X = 1$ and $(\exp X)^T g \exp X = g$ where $g = \mathrm{diag}(1, -1, -1)$.
Solution 20.

1. The adjoint representation of so(2, 1), in the basis $(L_{[12]}, iL_{[13]}, iL_{[23]})$ of the algebra itself, is
$$L_{[12]} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \quad
L_{[13]} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad
L_{[23]} = \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix};$$
note that the two boost generators are symmetric rather than antisymmetric.

2. The characteristic polynomial is $-\lambda^3 - \lambda\omega^2$ where we have defined $\omega^2 = \omega_{12}^2 - \omega_{13}^2 - \omega_{23}^2$, so the Cayley–Hamilton theorem tells us that $(X^2 + c^2\mathbb{1})X = 0$ with $c = \omega$. Substituting this into the series expansion for the exponential we get
$$\exp X = \mathbb{1} + \sin c\,\frac{X}{c} + (1 - \cos c)\,\frac{X^2}{c^2}$$
$$= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
+ \frac{\sin\omega}{\omega}\begin{pmatrix} 0 & -\omega_{23} & \omega_{13} \\ -\omega_{23} & 0 & \omega_{12} \\ \omega_{13} & -\omega_{12} & 0 \end{pmatrix}
+ \frac{1 - \cos\omega}{\omega^2}\begin{pmatrix} \omega_{13}^2 + \omega_{23}^2 & -\omega_{12}\omega_{13} & -\omega_{12}\omega_{23} \\ \omega_{12}\omega_{13} & \omega_{23}^2 - \omega_{12}^2 & -\omega_{13}\omega_{23} \\ \omega_{12}\omega_{23} & -\omega_{13}\omega_{23} & \omega_{13}^2 - \omega_{12}^2 \end{pmatrix}.$$

3. For $\omega_{12} = \theta$, $\omega_{13} = \omega_{23} = 0$ we have $\omega = \theta$, so
$$\exp X(\theta, 0, 0) = \mathbb{1} + \frac{\sin\theta}{\theta}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & \theta \\ 0 & -\theta & 0 \end{pmatrix}
+ \frac{1 - \cos\theta}{\theta^2}\begin{pmatrix} 0 & 0 & 0 \\ 0 & -\theta^2 & 0 \\ 0 & 0 & -\theta^2 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}.$$

4. For $\omega_{12} = \omega_{13} = 0$, $\omega_{23} = \phi$ we have $\omega^2 = -\phi^2$, so $\omega = i\phi$, $\sin(i\phi)/(i\phi) = \sinh\phi/\phi$ and $1 - \cos(i\phi) = 1 - \cosh\phi$:
$$\exp X(0, 0, \phi) = \mathbb{1} + \frac{\sinh\phi}{\phi}\begin{pmatrix} 0 & -\phi & 0 \\ -\phi & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
+ \frac{1 - \cosh\phi}{-\phi^2}\begin{pmatrix} \phi^2 & 0 & 0 \\ 0 & \phi^2 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} \cosh\phi & -\sinh\phi & 0 \\ -\sinh\phi & \cosh\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
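The Cayley–Hamilton closed form for $\exp X$ and the metric invariance can be verified numerically for a generic parameter choice. The generator and metric conventions below match the basis used above and are an assumption of this sketch:

```python
import numpy as np
import cmath

# adjoint generators of so(2,1) in the basis (L[12], iL[13], iL[23])
L12 = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
B13 = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
B23 = np.array([[0., -1., 0.], [-1., 0., 0.], [0., 0., 0.]])

w12, w13, w23 = 0.3, 0.4, 0.2
X = w12 * L12 + w13 * B13 + w23 * B23
c2 = w12**2 - w13**2 - w23**2
c = cmath.sqrt(c2)          # omega may be real or imaginary

# Cayley-Hamilton: (X^2 + c^2) X = 0
assert np.allclose((X @ X + c2 * np.eye(3)) @ X, 0)

def expm(A, terms=40):
    # matrix exponential via its power series
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# closed form: sin(c)/c and (1-cos c)/c^2 are real even for imaginary c
closed = (np.eye(3) + (cmath.sin(c) / c).real * X
          + ((1 - cmath.cos(c)) / c**2).real * (X @ X))
assert np.allclose(closed, expm(X))

# the exponential preserves the invariant metric of this basis
g = np.diag([1., -1., -1.])
G = expm(X)
assert np.allclose(G.T @ g @ G, g)
assert np.isclose(np.linalg.det(G), 1.0)
```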
Real Representations
Restriction to a real form of a Lie group does not mean that the representations have to be real. Nevertheless, for the fundamental representation of so(k, n−k) (and all other representations that can be constructed from it by reducing its tensor products, a topic we shall not discuss here) we can choose a basis in which the vectors are real.
Exercise 21:
1. Consider so(2, 1) again, but this time consider its generators $L_{[12]}$, $iL_{[13]}$, $iL_{[23]}$ in the fundamental representation; from Exercise 6 we have
$$L_{[12]} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
iL_{[13]} = i\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad
iL_{[23]} = i\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}.$$
Show that there is a similarity transformation that makes all three generators real.
Solution 21.

1. For any $\lambda \neq 0$ the matrix $R = \mathrm{diag}(1, 1, i\lambda)$ gives
$$L_{[12]} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = R\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}R^{-1} = R\tilde L_{[12]}R^{-1},$$
$$iL_{[13]} = i\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} = R\begin{pmatrix} 0 & 0 & -\lambda \\ 0 & 0 & 0 \\ -1/\lambda & 0 & 0 \end{pmatrix}R^{-1} = R\tilde L_{[13]}R^{-1},$$
$$iL_{[23]} = i\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} = R\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -\lambda \\ 0 & -1/\lambda & 0 \end{pmatrix}R^{-1} = R\tilde L_{[23]}R^{-1},$$
so the generators $\tilde L_{[ij]}$ are all real.

2. Since the $\tilde L_{[ij]}$ are related to the original generators by a similarity transformation they obey the same commutation relations:
$$[\tilde L_{[ij]}, \tilde L_{[k\ell]}] = [R^{-1} L_{[ij]} R,\, R^{-1} L_{[k\ell]} R] = R^{-1}[L_{[ij]}, L_{[k\ell]}]R = c_{[ij],[k\ell]}{}^{[mn]} R^{-1} L_{[mn]} R = c_{[ij],[k\ell]}{}^{[mn]} \tilde L_{[mn]}.$$
3. We need to find the metric tensor $g$ such that $\tilde O^T g \tilde O = g$ for the so(2, 1) group transformation $\tilde O = \exp \omega_{[ij]} \tilde L_{[ij]}$. It suffices to consider an infinitesimal group element, for which the condition may be written as a matrix equation,
$$\tilde L_{[ij]}^T\, g + g\, \tilde L_{[ij]} = 0 \qquad \forall\, [ij].$$
Writing $g = \mathrm{diag}(g_1, g_2, g_3)$ (an off-diagonal ansatz is quickly seen to vanish), the three generators give the conditions
$$g_2 - g_1 = 0, \qquad \lambda^2 g_1 + g_3 = 0, \qquad \lambda^2 g_2 + g_3 = 0,$$
from which we obtain, up to an overall normalization, $g = \mathrm{diag}(1, 1, -\lambda^2)$ for any $\lambda \neq 0$.
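Both the reality of the conjugated generators and the invariant metric can be confirmed numerically (the sign conventions for the fundamental generators are an assumption of this sketch):

```python
import numpy as np

lam = 0.7  # any nonzero value works
R = np.diag([1, 1, 1j * lam])

# fundamental so(3) generators
L12 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=complex)
L13 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
L23 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=complex)

Rinv = np.linalg.inv(R)
# conjugate the so(2,1) generators L[12], iL[13], iL[23]
tilde = [Rinv @ M @ R for M in (L12, 1j * L13, 1j * L23)]

# the conjugated generators are real
for M in tilde:
    assert np.allclose(M.imag, 0)

# they leave the metric diag(1, 1, -lam^2) invariant: M^T g + g M = 0
g = np.diag([1.0, 1.0, -lam**2])
for M in tilde:
    Mr = M.real
    assert np.allclose(Mr.T @ g + g @ Mr, 0)
```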
In the spinor representation of so(3) a general group element is
$$\exp X = \mathbb{1}\cos\tfrac{1}{2}|\omega| + i\,\hat\omega\cdot\vec\sigma\,\sin\tfrac{1}{2}|\omega|,$$
with $\hat\omega_{ij} = \omega_{ij}/|\omega|$. It is easy to check that this is a unitary matrix, but we see that for $|\omega| = 2\pi k$ it is $(-1)^k\mathbb{1}$. This means that, strictly speaking, this is not a representation of the group SO(3), although it is the exponential of a perfectly good representation of the Lie algebra so(3): it is however a representation of the universal covering group of SO(3), which is called Spin(3).
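The sign flip after a $2\pi$ rotation is quick to verify numerically, e.g. for a rotation generated by $\frac{i}{2}\sigma_3$ (the overall sign convention for the generator is an assumption):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)
X = 0.5j * s3  # spinor generator (i/2) sigma_3 of rotations about the 3-axis

def expm(A, terms=40):
    # matrix exponential via its power series
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# a 2*pi rotation gives -1; only a 4*pi rotation returns to the identity
assert np.allclose(expm(2 * np.pi * X), -np.eye(2))
assert np.allclose(expm(4 * np.pi * X), np.eye(2))
```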
The situation is the same for larger values of n: for each element of SO(n) there are two corresponding elements of Spin(n) of opposite sign, so Spin(n) is a double covering of SO(n). This is related to the fact that the centre of Spin(n), that is the subgroup of elements of Spin(n) that commute with all other elements, contains the two elements $\pm\mathbb{1}$; in other words the centre contains the finite group $\mathbb{Z}_2$. This is also related to the topology of the group manifold of SO(n), which has first homotopy group $\pi_1\big(SO(n)\big) = \mathbb{Z}_2$. This means that every closed loop in the group's parameter space can be continuously deformed into either the trivial loop that always remains at the origin or a loop that passes through the origin and a point with $|\omega| = 2\pi$.

The global structure of Spin(3) is the 3-sphere $S^3$, whereas SO(3) is topologically the real projective space $\mathbb{R}P^3$, which is the 3-sphere with antipodal points identified. For reasons we will discuss later Spin(4) is the product of two 3-spheres, $S^3 \times S^3$, and SO(4) is the product $S^3 \times SO(3) = S^3 \times \mathbb{R}P^3$.
For this last component the determinant is +1, so these transformations lie in SO(3, 1) despite not being connected to the identity.
It is important to notice that if a Lagrangian is Lorentz invariant, that is invariant under SO(3, 1) transformations, it is not necessarily invariant under the improper transformations P, T, and C. Electromagnetic and strong interactions do have these symmetries, but the weak interaction breaks P, and the combination CP is also broken, even more weakly. On the other hand, the CPT theorem says that any quantum field theory will be invariant under the composition of all three operations: it could be that nature does not respect CPT, but if it does not then it cannot be described by a quantum field theory.
Spinor Representation of P, C, and T

It is perhaps not immediately obvious that the spinor representations should also have representations of P and T. Nevertheless they do, and this extends the group Spin(3, 1) to Pin(3, 1).¹ To find the representations of the discrete transformations corresponding to P and T we need to find elements of the Clifford algebra $\Gamma_P$ and $\Gamma_T$ that are unitary and satisfy the relations $\Gamma_P^{-1}\gamma_\mu\Gamma_P = R(P)_{\mu\nu}\gamma_\nu$ and $\Gamma_T^{-1}\gamma_\mu\Gamma_T = R(T)_{\mu\nu}\gamma_\nu$, where $R(P)$ and $R(T)$ are the so(n) fundamental representation matrices corresponding to the parity and time-reversal operations, $R(P) = \mathrm{diag}(1, -1, -1, -1)$ and $R(T) = \mathrm{diag}(-1, 1, 1, 1)$. Such spinor transformations are easily found by inspection, namely $\Gamma_P = \gamma_0$ and $\Gamma_T = i\gamma_5\gamma_0$, in terms of Euclidean $\gamma$ matrices.

Exercise 22: Verify that $\Gamma_P$ and $\Gamma_T$ satisfy the relations given above, are unitary and hermitian, and therefore are involutions $\Gamma_P^2 = \Gamma_T^2 = \mathbb{1}$.
Solution 22.

1. Hermiticity: $\Gamma_P^\dagger = \gamma_0^\dagger = \gamma_0 = \Gamma_P$.

2. Unitarity: $\Gamma_P^\dagger\Gamma_P = \gamma_0\gamma_0 = \gamma_0^2 = \mathbb{1}$.

3. Hermiticity: $\Gamma_T^\dagger = (i\gamma_5\gamma_0)^\dagger = -i\gamma_0^\dagger\gamma_5^\dagger = -i\gamma_0\gamma_5 = i\gamma_5\gamma_0 = \Gamma_T$.

4. Unitarity: $\Gamma_T^\dagger\Gamma_T = (i\gamma_5\gamma_0)^\dagger\, i\gamma_5\gamma_0 = -i\gamma_0\gamma_5\, i\gamma_5\gamma_0 = \gamma_0\gamma_5\gamma_5\gamma_0 = \gamma_0^2 = \mathbb{1}$.

5. $\Gamma_P^{-1}\gamma_\mu\Gamma_P = \gamma_0\gamma_\mu\gamma_0 = R(P)_{\mu\nu}\gamma_\nu$ since $\gamma_0$ commutes with itself and anticommutes with the $\gamma_i$.

6. $\Gamma_T^{-1}\gamma_\mu\Gamma_T = R(T)_{\mu\nu}\gamma_\nu$, since $\Gamma_T \propto \gamma_1\gamma_2\gamma_3$ anticommutes with $\gamma_0$ and commutes with the $\gamma_i$.

7. Any operator $X$ that is hermitian, $X^\dagger = X$, and unitary, $X^\dagger = X^{-1}$, is an involution since $X^2 = X^\dagger X = X^{-1}X = \mathbb{1}$.
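All of these properties can be checked at once with explicit matrices. The sketch below uses a conventional chiral choice of Euclidean $\gamma$ matrices with $\gamma_5 = \gamma_1\gamma_2\gamma_3\gamma_0$ (both the representation and the $\gamma_5$ phase convention are assumptions, not fixed by the notes):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

# hermitian Euclidean gamma matrices: gamma_k (k=1,2,3) and gamma_0
gamma = [np.block([[Z2, -1j * sk], [1j * sk, Z2]]) for sk in s]
gamma0 = np.block([[Z2, I2], [I2, Z2]])
gamma5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma0

P = gamma0
T = 1j * gamma5 @ gamma0

for M in (P, T):
    assert np.allclose(M.conj().T, M)              # hermitian
    assert np.allclose(M.conj().T @ M, np.eye(4))  # unitary
    assert np.allclose(M @ M, np.eye(4))           # involution

Pinv, Tinv = np.linalg.inv(P), np.linalg.inv(T)
for k in range(3):
    assert np.allclose(Pinv @ gamma[k] @ P, -gamma[k])  # parity flips gamma_i
    assert np.allclose(Tinv @ gamma[k] @ T, gamma[k])   # T leaves gamma_i alone
assert np.allclose(Pinv @ gamma0 @ P, gamma0)
assert np.allclose(Tinv @ gamma0 @ T, -gamma0)          # T flips gamma_0
```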
Just as for P and T it does not automatically follow that any given Lagrangian is invariant under chiral transformations. Indeed, while the Dirac kinetic term $i\bar\psi\slashed\partial\psi$ is invariant, the mass term $m\bar\psi\psi$ is not.

In fact, this is not quite the full story: we should really consider the Dirac kinetic term $i\bar\chi\slashed\partial\psi$ built from two spinors $\chi$ and $\psi$ with arbitrary chiral properties. This is not hermitian, but if we observe that
$$i\partial_\mu(\bar\psi\gamma_\mu\chi) = i(\partial_\mu\bar\psi)\gamma_\mu\chi + i\bar\psi\gamma_\mu\partial_\mu\chi$$
then we have
$$(i\bar\chi\gamma_\mu\partial_\mu\psi)^\dagger = -i(\partial_\mu\bar\psi)\gamma_\mu\chi = i\bar\psi\gamma_\mu\partial_\mu\chi - i\partial_\mu(\bar\psi\gamma_\mu\chi).$$
This means that the kinetic term $i\bar\chi\slashed\partial\psi + i\bar\psi\slashed\partial\chi$ is hermitian up to a total derivative, which does not contribute to the action.
Solution 23.

1. In Euclidean space under a small chiral transformation $\psi \mapsto (1 + i\epsilon\gamma_5 + \cdots)\psi$ and, as noted in the hint, $\bar\psi \mapsto \bar\psi(1 + i\epsilon\gamma_5 + \cdots)$; thus
$$\bar\psi\slashed\partial\psi \mapsto \bar\psi(1 + i\epsilon\gamma_5)\gamma_\mu(1 + i\epsilon\gamma_5)\partial_\mu\psi = \bar\psi\gamma_\mu(1 - i\epsilon\gamma_5)(1 + i\epsilon\gamma_5)\partial_\mu\psi,$$
so $\delta(\bar\psi\slashed\partial\psi) = O(\epsilon^2)$.

4. The mass term transforms as
$$\bar\psi\psi \mapsto \bar\psi(1 + i\epsilon\gamma_5 + \cdots)(1 + i\epsilon\gamma_5 + \cdots)\psi = \bar\psi(1 + 2i\epsilon\gamma_5 + \cdots)\psi,$$
so $\delta(\bar\psi\psi) = 2i\epsilon\,\bar\psi\gamma_5\psi \neq 0$.
The matrix $M$ acting on the adjoint (antisymmetric tensor) indices of so(4) by $(M\omega)_{\mu\nu} = \frac12\epsilon_{\mu\nu\rho\sigma}\omega_{\rho\sigma}$ is involutive,
$$(M^2\omega)_{\mu\nu} = \tfrac14\,\epsilon_{\mu\nu\alpha\beta}\,\epsilon_{\alpha\beta\rho\sigma}\,\omega_{\rho\sigma} = \tfrac12\left(\delta_{\mu\rho}\delta_{\nu\sigma} - \delta_{\mu\sigma}\delta_{\nu\rho}\right)\omega_{\rho\sigma} = \omega_{\mu\nu},$$
and in the basis $\big([12], [13], [14], [23], [24], [34]\big)$ it is the matrix
$$M = \begin{pmatrix} 0&0&0&0&0&1 \\ 0&0&0&0&-1&0 \\ 0&0&0&1&0&0 \\ 0&0&1&0&0&0 \\ 0&-1&0&0&0&0 \\ 1&0&0&0&0&0 \end{pmatrix}.$$
Since $M$ is invariant under SO(4) and is not a multiple of the unit matrix the adjoint representation of so(4) must be reducible: this is a consequence of Schur's lemma, but it is really just the observation that the eigenspaces of $M$ split the representation into two commuting parts. Since it is the adjoint representation this means that the Lie algebra (and hence the group) factors into two commuting parts.

Since we have constructed $M$ to be involutive its eigenvalues must be $\pm 1$, so we can construct the projection operators $P_\pm = \frac12(\mathbb{1} \pm M)$ that reduce the adjoint representation.
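The involution and projector properties follow directly from the Levi-Civita tensor and are easy to verify (the basis ordering of index pairs is the one used above):

```python
import numpy as np

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # [12],...,[34]

def eps(p):
    # Levi-Civita symbol of a 4-index tuple
    if len(set(p)) < 4:
        return 0
    s, p = 1, list(p)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

# M between antisymmetric basis pairs; on this basis the 1/2 in the tensor
# definition cancels against summing each unordered pair twice
M = np.array([[eps(a + b) for b in pairs] for a in pairs], dtype=float)

assert np.allclose(M @ M, np.eye(6))  # involutive
Pp, Pm = 0.5 * (np.eye(6) + M), 0.5 * (np.eye(6) - M)
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, 0)
# each eigenspace is three-dimensional: 6 = 3 + 3
assert np.isclose(np.trace(Pp), 3) and np.isclose(np.trace(Pm), 3)
```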
Exercise 25:

1. Write down the matrices $P_\pm$.

2. Construct the set of generators $P_\pm L_{[\mu\nu]}$ for so(4) and verify that they form two mutually commuting sets of three generators each.

3. Verify that $P_+L_{[\mu\nu]}$ and $P_-L_{[\mu\nu]}$ with $[\mu\nu] = [12], [13], [14]$ separately satisfy the commutation relations for so(3).
Solution 25.

1. The projectors are
$$P_\pm = \frac12\begin{pmatrix} 1&0&0&0&0&\pm 1 \\ 0&1&0&0&\mp 1&0 \\ 0&0&1&\pm 1&0&0 \\ 0&0&\pm 1&1&0&0 \\ 0&\mp 1&0&0&1&0 \\ \pm 1&0&0&0&0&1 \end{pmatrix}.$$
2. There are six generators for so(4), so we need to be a little careful to generate a linearly independent set of generators when we apply the projectors. With our choice of basis the first three generators suffice; acting with the projectors on them gives the (self-dual and anti-self-dual) combinations
$$P_\pm L_{[12]} = \tfrac12\left(L_{[12]} \pm L_{[34]}\right), \qquad
P_\pm L_{[13]} = \tfrac12\left(L_{[13]} \mp L_{[24]}\right), \qquad
P_\pm L_{[14]} = \tfrac12\left(L_{[14]} \pm L_{[23]}\right),$$
which together form a complete set of generators.
3. Let us enumerate the generators in the order $L_1 = P_+L_{[12]}$, $L_2 = P_+L_{[13]}$, $L_3 = P_+L_{[14]}$, $L_4 = P_-L_{[12]}$, $L_5 = P_-L_{[13]}$, $L_6 = P_-L_{[14]}$; then their commutators are $[L_i, L_j] = \pm L_{|X_{ij}|}$ where
$$X = \begin{pmatrix} 0&-3&2&0&0&0 \\ 3&0&-1&0&0&0 \\ -2&1&0&0&0&0 \\ 0&0&0&0&6&-5 \\ 0&0&0&-6&0&4 \\ 0&0&0&5&-4&0 \end{pmatrix}$$
and the sign of $X_{ij}$ indicates the sign in the commutation relations. The vanishing off-diagonal blocks show that the two sets of three generators commute with each other.
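The block structure is immediate to check numerically with the standard antisymmetric generators (the sign conventions are an assumption of this sketch):

```python
import numpy as np

def L(i, j, n=4):
    # (L[ij])_{kl} = delta_{ik} delta_{jl} - delta_{il} delta_{jk}
    m = np.zeros((n, n))
    m[i, j], m[j, i] = 1.0, -1.0
    return m

# self-dual and anti-self-dual combinations of the so(4) generators
A = [0.5 * (L(0, 1) + L(2, 3)), 0.5 * (L(0, 2) - L(1, 3)), 0.5 * (L(0, 3) + L(1, 2))]
B = [0.5 * (L(0, 1) - L(2, 3)), 0.5 * (L(0, 2) + L(1, 3)), 0.5 * (L(0, 3) - L(1, 2))]

comm = lambda X, Y: X @ Y - Y @ X

# the two sets commute with each other ...
for X in A:
    for Y in B:
        assert np.allclose(comm(X, Y), 0)

# ... and each set closes into an so(3) algebra, [Li, Lj] = +/- Lk
assert np.allclose(comm(A[0], A[1]), -A[2])
assert np.allclose(comm(A[0], A[2]), A[1])
assert np.allclose(comm(B[0], B[1]), B[2])
```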
One way of understanding this decomposition is that a rotation in four dimensions can always be written as the product of two rotations in non-intersecting planes. The two so(3) factors correspond to the two rotations being through the same angle and in the same or the opposite sense.

We have shown that the Lie algebra so(4) = so(3) ⊕ so(3); it is not hard to extend this result to show that under exponentiation the Lie group SO(4) = SO(3) × Spin(3) and Spin(4) = Spin(3) × Spin(3), and to extend these to the real forms. There are other accidental degeneracies of low-dimensional Lie algebras, for instance su(2) = so(3) = sp(2), su(4) = so(6), so(5) = sp(4), and using the first of these we find the decomposition so(3, 1) = sl(2, ℂ) which is often found in the literature: SL(n) is the special linear group of special (i.e., unit determinant) n × n matrices, one of whose real forms is the group of special unitary matrices SU(n).
Hand-in Exercises
Exercise 26: In this exercise we shall consider some of the properties of Clifford algebras in n dimensions where n may be odd, so be careful when using $\gamma$-matrix identities that they do not implicitly assume that n is even.

1. Show that
$$d \operatorname{tr}\gamma_{[\mu_1}\gamma_{\mu_2}\cdots\gamma_{\mu_d]} = 0 \qquad\text{if } d \text{ is even},$$
$$(d - n)\operatorname{tr}\gamma_{[\mu_1}\gamma_{\mu_2}\cdots\gamma_{\mu_d]} = 0 \qquad\text{if } d \text{ is odd},$$
where as usual the square brackets around the indices indicate antisymmetrization,
$$\operatorname{tr}\gamma_{[\mu_1}\gamma_{\mu_2}\cdots\gamma_{\mu_d]} = \frac{1}{d!}\sum_{\sigma\in S_d}\operatorname{sign}(\sigma)\operatorname{tr}\gamma_{\mu_{\sigma(1)}}\gamma_{\mu_{\sigma(2)}}\cdots\gamma_{\mu_{\sigma(d)}}.$$

$$\Gamma^{(k)}_{\mu_1\ldots\mu_k} \propto \epsilon_{\mu_1\ldots\mu_k\mu_{k+1}\ldots\mu_n}\,\Gamma^{(n-k)}_{\mu_{k+1}\ldots\mu_n}.$$
Hint: $\Gamma^{(k)}$ was defined in equation (4).

7. Show that there are two inequivalent $2^{(n-1)/2} \times 2^{(n-1)/2}$ irreducible representations of the n-dimensional Clifford algebra. Are they faithful?
Hint: Use the $2^{(n-1)/2} \times 2^{(n-1)/2}$ representation (2) of the (n − 1)-dimensional Clifford algebra, and set $\gamma_n \propto \Gamma^{(n-1)}_{1,\ldots,n-1}$.

8. Construct the two inequivalent 2 × 2 matrix representations of the 3-dimensional Clifford algebra.
9. Construct the corresponding representations of the so(3) Lie algebra. Are they faithful?
Are they equivalent?
Hint: Make sure you do not confuse the representations of the Clifford algebra with
those of the related Lie algebra and Lie group.
Exercise 27:

1. Compute the element $G = \exp X$ in the spinor representation of the Lie group SO(4), where
$$X = \sum_{\mu<\nu}\omega_{\mu\nu}\Sigma_{\mu\nu} = \frac{i}{2}\begin{pmatrix}
\omega_{12}+\omega_{34} & (\omega_{14}+\omega_{23}) - i(\omega_{13}-\omega_{24}) & 0 & 0 \\
(\omega_{14}+\omega_{23}) + i(\omega_{13}-\omega_{24}) & -(\omega_{12}+\omega_{34}) & 0 & 0 \\
0 & 0 & \omega_{12}-\omega_{34} & -(\omega_{14}-\omega_{23}) + i(\omega_{13}+\omega_{24}) \\
0 & 0 & -(\omega_{14}-\omega_{23}) - i(\omega_{13}+\omega_{24}) & -(\omega_{12}-\omega_{34})
\end{pmatrix}$$
is a general element of the spinor representation of the Lie algebra so(4), where we have made use of the generators computed in Exercise 16. Show that the characteristic polynomial factorizes as
$$P(\lambda) = \left(\lambda^2 + \tfrac14\omega_+^2\right)\left(\lambda^2 + \tfrac14\omega_-^2\right),$$
where
$$\omega_\pm^2 = (\omega_{12}\pm\omega_{34})^2 + (\omega_{13}\mp\omega_{24})^2 + (\omega_{14}\pm\omega_{23})^2.$$
Compute $G_\pm = \exp X_\pm$ and hence $G = G_+G_- = G_-G_+$.
2. Show that on the two-dimensional invariant subspaces identified by $P_\pm$
$$X_+ = \frac{i}{2}\left[(\omega_{14}+\omega_{23})\sigma_1 - (\omega_{13}-\omega_{24})\sigma_2 + (\omega_{12}+\omega_{34})\sigma_3\right],$$
$$X_- = \frac{i}{2}\left[-(\omega_{14}-\omega_{23})\sigma_1 - (\omega_{13}+\omega_{24})\sigma_2 + (\omega_{12}-\omega_{34})\sigma_3\right].$$
This shows that so(4, ℂ) = so(3, ℂ) ⊕ so(3, ℂ) = sl(2, ℂ) ⊕ sl(2, ℂ). The complex Lie algebra "su(2, ℂ)" is a misnomer, as the matrices in the corresponding group are not unitary, so it is more properly called sl(2, ℂ), the space of all 2 × 2 complex traceless matrices.
Hint: Remember that all the group parameters are complex in so(4, ℂ).
3. Compute the spinor analogue of the matrix $M$, namely
$$\frac{2}{3}\sum_{\mu<\nu}\sum_{\mu'<\nu'} M_{[\mu\nu],[\mu'\nu']}\,\Sigma_{\mu\nu}\Sigma_{\mu'\nu'},$$
and show that it is just $\gamma_5$. Since the Levi-Civita tensor appears both in $M$ and $\gamma_5$ perhaps this should not be too surprising. How does a chiral transformation act on the subspaces identified by the projectors $P_\pm$?
4. If we restrict all the parameters $\omega_{\mu\nu}$ to be real, show that we get the real form of the Lie algebra: so(4, ℝ) = so(3, ℝ) ⊕ so(3, ℝ) = su(2) ⊕ su(2).
Note: One of the real forms of sl(2, ℂ) is su(2), the space of all antihermitian traceless 2 × 2 matrices, for which the Pauli matrices $i\sigma_1$, $i\sigma_2$, $i\sigma_3$ form a basis. This is called the compact real form because the Lie group obtained by exponentiation is SU(2) × SU(2), where the group manifold (parameter space) of SU(2) is topologically $S^3$ (the set of points in ℝ⁴ of distance one from the origin), which is closed and bounded and hence compact.
5. Show that the operation that takes $\gamma_2 \mapsto -\gamma_2$ and $\gamma_4 \mapsto -\gamma_4$ is an involutive automorphism on so(4, ℂ). Use the Weyl unitary trick to construct the spinor generators of the real form so(2, 2), and show that the analogues of $X_\pm$ in so(2, 2) are
$$X'_+ = \frac{1}{2}\left[(\omega_{14}+\omega_{23})\sigma_1 + (\omega_{13}+\omega_{24})\,i\sigma_2 + (\omega_{12}+\omega_{34})\sigma_3\right],$$
$$X'_- = \frac{1}{2}\left[(\omega_{14}-\omega_{23})\sigma_1 + (\omega_{13}-\omega_{24})\,i\sigma_2 + (\omega_{12}-\omega_{34})\sigma_3\right].$$
Note that in the usual representation of the Pauli matrices all three of $\sigma_1$, $i\sigma_2$, and $\sigma_3$ are real. The Lie algebra sl(2, ℝ) of all 2 × 2 real traceless matrices is another real form of sl(2, ℂ). We have thus shown that so(2, 2) = sl(2, ℝ) ⊕ sl(2, ℝ).
6. Show that the parity operation that takes $\gamma_4 \mapsto -\gamma_4$ while leaving the other three $\gamma$-matrices unchanged is another involutive automorphism on so(4, ℂ). Use the Weyl unitary trick to construct the spinor generators of so(3, 1), and show that the analogues of $X_\pm$ in so(3, 1) are
$$X''_+ = \frac{1}{2}\left[(\omega_{14} - i\omega_{23})\sigma_1 + (\omega_{24} + i\omega_{13})\sigma_2 + (\omega_{34} - i\omega_{12})\sigma_3\right],$$
$$X''_- = \frac{1}{2}\left[(\omega_{14} + i\omega_{23})\sigma_1 + (\omega_{24} - i\omega_{13})\sigma_2 + (\omega_{34} + i\omega_{12})\sigma_3\right].$$
This shows that so(3, 1) = sl(2, ℂ) regarded as a real Lie algebra, as all six real parameters of so(3, 1) appear independently in $X''_+$ and in $X''_-$. The real form so(3, 1) cannot be split into two commuting subalgebras.
Note: All the representations of SO(3, 1) can be built from tensor products of two-component spinors that carry the $X''_\pm$ representations of SL(2, ℂ); it is conventional to put dots on the spinor indices in the $X''_-$ subspace to indicate that they transform with the complex conjugate parameters.