
229B: Lie Groups and Lie Algebras

Andy Soffer

October 26, 2014



Contents
1 Preliminaries

2 Introduction
  2.1 First definitions and examples
  2.2 Classification of 2-dimensional Lie algebras
  2.3 A few more definitions and examples

3 Nilpotent Lie algebras
  3.1 Definitions and properties
  3.2 Engel’s theorem

4 Solvable Lie algebras
  4.1 Definitions and properties
  4.2 Solvable Lie sub-algebras of gl(V )
  4.3 Radicals

5 Semi-simple Lie algebras
  5.1 Killing form
  5.2 Cartan Criterion
  5.3 Simplicity and semi-simplicity

6 The universal enveloping algebra
  6.1 Universal Enveloping Algebra
  6.2 The Casimir element

7 Representations of Lie algebras
  7.1 Definitions
  7.2 Representations of semi-simple Lie algebras
  7.3 sl2 (C) as a worked example
  7.4 Cartan subalgebras

8 Root systems
  8.1 Introduction
  8.2 Root system bases
  8.3 Coxeter and Dynkin Diagrams
  8.4 Classification

9 Semi-simple Lie algebra construction
  9.1 Cartan sub-algebra construction
  9.2 Construction from Cartan matrix

10 More representation theory
  10.1 Weights
  10.2 Poincaré-Birkhoff-Witt theorem
  10.3 Verma Module

A Adjointness of (U, F )

References

Index


1 Preliminaries
This course was taught at UCLA in the spring quarter of 2012 by Raphael
Rouquier. This is a compilation of my notes from the course. No doubt
they are full of errors and incomplete proofs. Use them at your own risk. If
you do find an error, please email me at asoffer@ucla.edu so I can correct
it.
Unless otherwise specified, all vector spaces and algebras will be over
C. At times, as a reminder, or to specifically point out some property of C
being used, we will explicitly mention the field. Certainly if we work with
any algebra or vector space over a field other than C, it will be explicitly
mentioned. Similarly, one can expect that all Lie algebras and all vector
spaces are finite dimensional. Where we would like to have a vector space
or Lie algebra to possibly be infinite dimensional, we will say so explicitly.
Lastly, the symbol “N” denotes the set of natural numbers: {0, 1, 2, . . . }.
To denote the set {1, 2, . . . }, we use Z+ .


2 Introduction
2.1 First definitions and examples
A Lie algebra is a C-vector space g equipped with a C-bilinear map [−, −] :
g × g → g such that for all a, b, c ∈ g,

• [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0 (Jacobi identity)

• [a, b] + [b, a] = 0 (skew-symmetry)

We will often refer to [−, −] as the bracket, or bracket map.

Note 2.1. In fact, we can define a Lie algebra over any field analogously.
Generally, we require that [x, x] = 0 for each x instead of skew-symmetry.
These conditions are equivalent for any field which has characteristic other
than 2. Unless otherwise specified, all Lie algebras will be over C. We will
make no effort to generalize to other fields, and leave such endeavors as an
exercise for the reader.

Example 2.2. Let A be a finite dimensional C-algebra. Define a Lie algebra


g to be A as a C-vector space endowed with the bilinear map

[a, b] = ab − ba.

To see that [−, −] is bilinear, note that for any a, b, c ∈ g,

[a, b + c] = a(b + c) − (b + c)a = ab + ac − ba − ca = [a, b] + [a, c].

Checking linearity on the other side is just as easy. It is immediate that


[−, −] satisfies the skew-symmetry condition. To see the Jacobi identity
holds, we have

[a, [b, c]] + [b, [c, a]] + [c, [a, b]] = [a, bc − cb] + [b, ca − ac] + [c, ab − ba]
= [a, bc] − [a, cb] + [b, ca] − [b, ac] + [c, ab] − [c, ba]
= a(bc) − (bc)a − a(cb) + (cb)a +
b(ca) − (ca)b − b(ac) + (ac)b +
c(ab) − (ab)c − c(ba) + (ba)c
= 0,

by the associativity of A.
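
The following quick numerical check is not part of the original notes; it verifies, for a few random complex matrices, that the commutator bracket is skew-symmetric and satisfies the Jacobi identity (the computation above, done by machine).

import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    # the commutator bracket [a, b] = ab - ba on matrices
    return a @ b - b @ a

a, b, c = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)) for _ in range(3))

# skew-symmetry and the Jacobi identity
assert np.allclose(bracket(a, b) + bracket(b, a), 0)
assert np.allclose(bracket(a, bracket(b, c))
                   + bracket(b, bracket(c, a))
                   + bracket(c, bracket(a, b)), 0)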


Example 2.3. For a C-vector space V , EndC V is the C-algebra of linear


maps from V to V . We may endow EndC V with a bracket map

[φ, ψ] = φ ◦ ψ − ψ ◦ φ.

The resulting Lie algebra is denoted gl(V ). When V is finite dimensional of
dimension n, we will often write gln (C), and associate the endomorphisms with
n × n matrices.

We say that a ⊆ g is a Lie sub-algebra if a is a subspace of g and the bracket
restricted to a satisfies [a, a] ⊆ a. That is, a is a subspace and a Lie algebra under the
same bracket map. Moreover, if the stronger condition [g, a] ⊆ a holds, then
we say that a is an ideal of g.
There are several points of note here. First, by skew-symmetry, it does
not matter if we require [g, a] ⊆ a or [a, g] ⊆ a. That is, if we were to define
left and right ideals separately, every ideal would be two-sided. Second, it
is immediate that every ideal of g is also a Lie sub-algebra of g.
Lastly, notice that these definitions mirror the definitions of subrings
and ideals in the category of rings (where the bracket takes the place of the
product).

Example 2.4. For a vector space V , let sl(V ) ⊆ gl(V ) be the subspace consisting of all
endomorphisms with trace zero (again, for finite dimensional vector spaces,
we will often write sln (C)). Then sl(V ) is a Lie sub-algebra of gl(V ). We
need only check that [−, −] restricted to sl(V ) has image in sl(V ).
Indeed, for φ, ψ ∈ sl(V ), tr(φ ◦ ψ) = tr(ψ ◦ φ), so

tr[φ, ψ] = tr(φ ◦ ψ) − tr(ψ ◦ φ) = 0.

Note that we do not use the fact that φ and ψ have trace 0. That is,
[gl(V ), gl(V )] ⊆ sl(V ), so in fact sl(V ) is an ideal of gl(V ).

Note 2.5. We use Fraktur letters to distinguish between gl(V ) and GL(V )
(the general linear group). Similarly for sl(V ) and SL(V ) (the special linear
group). The relationship between the two is that the Lie algebra gl(V ) is the
tangent space at the identity to the Lie group GL(V ). Similarly for sl(V )
and SL(V ).

Given two Lie algebras g1 and g2 , we can endow the vector space g1 ⊕ g2
with a Lie algebra structure by

[(x1 , x2 ), (y1 , y2 )] = ([x1 , y1 ], [x2 , y2 ]).


A Lie algebra g is abelian (or commutative) if for every a, b ∈ g, we


have [a, b] = 0.
We could also take the definition to be that for each a, b ∈ g, [a, b] = [b, a],
mirroring the definition for an abelian group. This is equivalent, because
the bracket is skew-symmetric and we are not working in characteristic 2.
Further, in the case where we start with an algebra and give it a Lie
structure, the bracket in some sense measures commutativity. In such a
case, a Lie algebra is abelian when its underlying algebra is commutative.

2.2 Classification of 2-dimensional Lie algebras


Proposition 2.6. Every 2-dimensional Lie algebra is isomorphic to one of:

• C2 with [−, −] = 0

• the set of 2 × 2 matrices

      ( u    v  )
      ( 0   −u ),     u, v ∈ C,

  with [A, B] = AB − BA.

Proof. For z, w ∈ C, let gz,w be the Lie algebra with basis {e1 , e2 }, and
[e1 , e2 ] = ze1 + we2 . We have not shown that every gz,w is a Lie algebra, but
certainly every 2-dimensional Lie algebra is isomorphic to some gz,w . Note
that once we have a basis {e1 , e2 } for a Lie algebra as a vector space, the
bracket [e1 , e2 ] completely determines [−, −]. Indeed, for a, b, c, d ∈ C

[ae1 + be2 , ce1 + de2 ] = [ae1 , ce1 ] + [ae1 , de2 ] + [be2 , ce1 ] + [be2 , de2 ]
= ac[e1 , e1 ] + ad[e1 , e2 ] + bc[e2 , e1 ] + bd[e2 , e2 ]
= (ad − bc)[e1 , e2 ]

Moreover, we can see that the image of [−, −] is at most one-dimensional
(zero-dimensional if and only if [e1 , e2 ] = 0). So let g be a 2-dimensional Lie
algebra. In particular, pick a basis {e1 , e2 } for g, and in that basis, let
g = gr,s . If r = s = 0, then g is abelian.
Otherwise, let x ∈ g, and write x = ae1 + be2 . Then

[x, [e1 , e2 ]] = [ae1 + be2 , re1 + se2 ] = (as − br)[e1 , e2 ],

which is zero if and only if as = br, that is, if and only if x is in the image
of [−, −]. So pick any x not in the image of [−, −], and let d ∈ C be given
such that [x, [e1 , e2 ]] = d[e1 , e2 ]. As d ≠ 0, we may set h1 = d−1 x and h2 = [e1 , e2 ].


Note that h1 is not in the image of [−, −] and h2 is, so they must be linearly
independent and thus form a basis for g. Computing [h1 , h2 ] gives us

[h1 , h2 ] = [d−1 x, [e1 , e2 ]] = d−1 [x, [e1 , e2 ]] = [e1 , e2 ] = h2 .

Thus, representing g in the basis {h1 , h2 } shows us that g = g0,1 . Immediately this tells us that all non-abelian 2-dimensional Lie algebras are
isomorphic. It then suffices to check that the collection of 2 × 2 matrices de-
fined above is indeed a non-abelian Lie algebra. We leave this as an exercise
to the reader.
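
The verification left to the reader can also be done by machine. The sketch below is my own addition (it assumes nothing beyond the matrix form in the proposition): it checks that the bracket of two such matrices is again of the same form, and is nonzero, so the collection is closed under the bracket and non-abelian.

import numpy as np

def m(u, v):
    # the matrix ( u  v ; 0  -u )
    return np.array([[u, v], [0.0, -u]])

def bracket(a, b):
    return a @ b - b @ a

A, B = m(1.0, 2.0), m(3.0, -1.0)
C = bracket(A, B)
# in general [m(u1, v1), m(u2, v2)] = m(0, 2*(u1*v2 - u2*v1)), so the bracket stays in the family
assert np.allclose(C, m(0.0, 2 * (1.0 * (-1.0) - 3.0 * 2.0)))
# and it is nonzero, so the Lie algebra is non-abelian
assert not np.allclose(C, 0)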

2.3 A few more definitions and examples


For a Lie algebra g, define the center of g, denoted Z(g) to be

Z(g) = {x ∈ g | [x, y] = 0, ∀y ∈ g}.

From this definition, it is immediate that Z(g) is an abelian Lie sub-algebra


of g. Moreover, it is easy to see that Z(g) is an ideal of g.
If g is a Lie algebra, and a is an ideal of g, then we can endow the
quotient vector space g/a with a Lie algebra structure, by defining for any
x, y ∈ g,
[x + a, y + a] = [x, y] + a.
It is routine to check that this construction is well-defined.

Example 2.7. Let A be a finite dimensional C-algebra. A derivation on


A is a linear map D : A → A satisfying

D(ab) = D(a) · b + a · D(b).

Let g = Der A, the set of all derivations on A. Then g is a Lie algebra


when endowed with the bracket map

[D1 , D2 ] = D1 ◦ D2 − D2 ◦ D1

Proof. Let V be the underlying vector space of the algebra A. Notice that
Der A ⊆ gl(V ) is a subspace, so to see that it is a Lie sub-algebra it suffices to check
that [Der A, Der A] ⊆ Der A. This approach avoids checking the Jacobi identity and skew-symmetry
directly.


If D1 , D2 ∈ Der A and a, b ∈ A,
[D1 , D2 ](ab) = D1 D2 (ab) − D2 D1 (ab)
= D1 ((D2 a)b + aD2 b) − D2 ((D1 a)b + aD1 b)
= D1 ((D2 a)b) + D1 (aD2 b) − D2 ((D1 a)b) − D2 (aD1 b)
= (D1 D2 a)b + (D2 a)(D1 b) + (D1 a)(D2 b) + a(D1 D2 b)
−(D2 D1 a)b − (D1 a)(D2 b) − (D2 a)(D1 b) − a(D2 D1 b)
= (D1 D2 a)b − (D2 D1 a)b + a(D1 D2 b) − a(D2 D1 b)
= [D1 , D2 ](a)b + a[D1 , D2 ](b)
It is not even required that A be associative. We may instead take A to
be any C-vector space with a multiplication that distributes over addition (a
not-necessarily-associative algebra); the computation above never uses associativity of the product.

Let g and h be Lie algebras. We say that a linear map f : g → h is a Lie


algebra homomorphism if
f ([x, y]) = [f (x), f (y)],
where the bracket on the left is from g, and the bracket on the right is from
h. Moreover, if f : g → gl(V ) for some vector space V , we say that f is a
representation of g on V .
Example 2.8. Let x ∈ g, and define the linear map adx : g → g by
adx : y 7→ [x, y].
Then we see that ad : g → EndC g is a Lie algebra homomorphism where
the bracket on EndC g is given by [f, g] = f ◦ g − g ◦ f (the same construction
as in example 2.2). The map ad is known as the adjoint representation
of g.
Indeed, it is a representation ad : g → gl(g). The linearity of ad comes
from the bilinearity of [−, −] on g. To check the that ad respects the bracket
map, we make use of the Jacobi identity and skew-symmetry:

ad[x,y] : z 7→ [[x, y], z] = [z, [y, x]]


= −[x, [z, y]] − [y, [x, z]]
= [x, [y, z]] − [y, [x, z]]
= adx ◦ ady (z) − ady ◦ adx (z)
= [adx , ady ](z)


We can also see that for each x ∈ g, adx is a derivation, as

adx ([y, z]) = [x, [y, z]]


= −[z, [x, y]] − [y, [z, x]]
= [[x, y], z] + [y, [x, z]]
= [adx (y), z] + [y, adx (z)]
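
As a quick sanity check of example 2.8 (not part of the original notes), the following snippet builds the matrix of adx in the ordered basis {e, f, h} of sl2 (C) (defined in section 7.3) and verifies the homomorphism property ad[x,y] = [adx , ady ] on a sample pair.

import numpy as np

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

def bracket(a, b):
    return a @ b - b @ a

def ad(x):
    # columns are the coordinates of [x, e], [x, f], [x, h] in the basis {e, f, h}
    cols = [[bracket(x, b)[0, 1], bracket(x, b)[1, 0], bracket(x, b)[0, 0]] for b in basis]
    return np.array(cols).T

x, y = e + 2 * h, f - h
assert np.allclose(ad(bracket(x, y)), ad(x) @ ad(y) - ad(y) @ ad(x))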


3 Nilpotent Lie algebras


3.1 Definitions and properties
For a Lie algebra g, let C 0 (g) = g, and for i ≥ 1, define C i (g) = [g, C i−1 (g)].
Clearly each C k (g) is an ideal. Moreover they form a descending chain,
· · · ⊆ C 1 (g) ⊆ C 0 (g) = g.
This series is known as the central series of g. We say that g is nilpo-
tent if for some n > 0, C n (g) = 0.
Lemma 3.1. The following are equivalent:
1. g is nilpotent
2. There exists an m > 0 such that for every x1 , . . . , xm ∈ g,
adx1 ◦ · · · ◦ adxm = 0

3. There exists a chain of ideals 0 = an ⊆ an−1 ⊆ · · · ⊆ a0 = g such that


ai /ai+1 ⊆ Z(g/ai+1 ) for each i = 0, . . . , n − 1
Proof. (1 → 3) Take ai = C i (g). It suffices to check that
C i (g)/C i+1 (g) ⊆ Z(g/C i+1 (g)).
Indeed, let c ∈ C i (g), and g ∈ g. Then in g/C i+1 (g),
[g + C i+1 (g), c + C i+1 (g)] = [g, c] + C i+1 (g).
By definition, [g, c] ∈ C i+1 (g), which is what we needed to show.
(3 → 2) Let 0 = ak ⊆ ak−1 ⊆ · · · ⊆ a0 = g be a chain of ideals such that
ai /ai+1 ⊆ Z(g/ai+1 ). Let x ∈ g, and a ∈ ai . Since
a + ai+1 ∈ ai /ai+1 ⊆ Z(g/ai+1 ),
it must be that [x, a] + ai+1 = [x + ai+1 , a + ai+1 ] = 0 + ai+1 , so [x, a] ∈ ai+1 .
In other words, adx (ai ) ⊆ ai+1 . Taking any sequence of x1 , . . . , xk , we see
that adx1 ◦ · · · ◦ adxk has its image contained in ak = 0.
(2 → 1) Note that C m (g) is spanned by

{adx1 ◦ · · · ◦ adxm (y) | x1 , . . . , xm , y ∈ g}.

If for some m, any such composition is zero, then C m (g) = 0, meaning g
is nilpotent.
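
To make condition (2) of lemma 3.1 concrete, here is a small numerical check (my addition, not from the lectures) using the Lie algebra of strictly upper triangular 3 × 3 matrices: every composition of two brackets vanishes, so this algebra is nilpotent.

import numpy as np

rng = np.random.default_rng(1)

def bracket(a, b):
    return a @ b - b @ a

def strictly_upper():
    # a random strictly upper triangular 3x3 matrix
    return np.triu(rng.standard_normal((3, 3)), k=1)

x, y, z = strictly_upper(), strictly_upper(), strictly_upper()
# [y, z] lies in C*E_13, and bracketing once more with any strictly upper triangular x gives 0
assert np.allclose(bracket(x, bracket(y, z)), 0)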


3.2 Engel’s theorem


We saw from lemma 3.1 that g was nilpotent if and only if for some large
enough m, every sequence of m elements in g had

adx1 ◦ adx2 ◦ · · · ◦ adxm = 0.

Of course, we could take all xi to be equal to get that in a nilpotent Lie
algebra g, (adx )^m = 0 for large enough m. An interesting question is whether
the converse holds. That is, if there exists some m for which (adx )^m = 0 for
all x ∈ g, must it be that g is nilpotent? Applying Engel’s theorem to the
adjoint representation answers this question in the affirmative. This gives
us a simpler condition to check for nilpotency; instead of checking all length
m sequences from g, we need only check that for each x ∈ g, (adx )^m = 0.

Lemma 3.2. Let g be a finite dimensional sub-algebra of gl(V ), where V is


a finite dimensional vector space. If x is nilpotent, so is adx .
Proof. Choose N such that x^N = 0. Recall that the bracket map on gl(V )
is given by [f, g] = f ◦ g − g ◦ f . For any y ∈ gl(V ), we can expand (adx )^{2N} (y) as

(adx )^{2N} (y) = Σ_{d=0}^{2N} cd · x^d ◦ y ◦ x^{2N−d} ,

for some cd ∈ C. In fact, we can compute the coefficients cd (they will


be binomial coefficients, up to sign), but their value is of no import here.
Instead note that for any choice of d, either d or 2N − d is greater than or
equal to N , so the entire sum is zero, as desired.

Note 3.3. Engel’s theorem can be stated about sub-algebras of gl(V ) or


about representations on a finite dimensional vector space V . Since we only
care about properties of the image of the representation in gl(V ), we may
safely use either version. We will use each at times as it suits our needs.
The following theorem is often included as part of the proof of Engel’s
theorem (corollary 3.5). We have separated it into two parts for the sake of
clarity.
Before we begin, we will need one definition specific to this section. For
a representation ρ : g → gl(V ), define

Kg,ρ = ∩_{x∈g} ker ρ(x).


Theorem 3.4. Let V be a finite dimensional vector space, and let g ⊆ gl(V )
be a Lie sub-algebra consisting entirely of nilpotent operators. That is, the
inclusion map ι : g → gl(V ) is a representation of g on V . Then Kg,ι ≠ 0.

Proof. We induct on dim g. When dim g = 0, the result is immediate.


Let h be a maximal proper sub-algebra of g, and let

Ng (h) = {x ∈ g | adx (h) ⊆ h}.

This is called the normalizer of h in g. It is clear that Ng (h) is a sub-algebra
of g which contains h, so we must either have Ng (h) = h or Ng (h) = g. We

of g which contains h, so we must either have Ng (h) = h or Ng (h) = g. We
will in fact show that Ng (h) = g, implying that h is an ideal (not just a
sub-algebra) of g.
Let W denote the vector space g/h, and define a representation ψ : h →
gl(W ) given by
ψ(y) : x + h 7→ [y, x] + h.
We can see that ψ is the adjoint map on a quotient space. From lemma
3.2, we can deduce that for each y ∈ h, ψ(y) is nilpotent. By induction,
0 ≠ Kψ(h),ι = Kh,ψ . That is, there is some nonzero a ∈ W such that ψ(y)(a) = 0
for every y ∈ h. Let ã ∈ g be such that a = ã + h. Then ã is such that for
each y ∈ h, [y, ã] ∈ h. In other words, ã ∈ Ng (h), but ã ∉ h. This tells us
that Ng (h) ⊋ h, so Ng (h) = g and h is an ideal of g.
Recall that for x, y ∈ gl(V ), [x, y] = xy − yx. Since h is an ideal, for any
x ∈ g and y ∈ h, we have [x, y] ∈ h. It follows that if v ∈ Kh,ι and x ∈ g,
then x(v) ∈ Kh,ι , since

y(x(v)) = x(y(v)) − [x, y](v) = x(0) − 0 = 0,

for all y ∈ h.
Thus Kh,ι is stable under the action of g. Since h acts by zero on Kh,ι ,
we have a well-defined quotient representation ρ of g/h on Kh,ι given by

ρ(x + h) : v 7→ x(v).

So long as h ≠ 0, the quotient g/h has smaller dimension than g, so induction
gives us some nonzero v0 ∈ Kh,ι such that x(v0 ) = 0 for every x ∈ g, giving
the desired result.
If, on the other hand, we have h = 0, recognize that for any nonzero x ∈ g,
Cx is a sub-algebra containing h. As h was maximal, g = Cx. Since x is
nilpotent, let n be maximal such that x^n ≠ 0. Then 0 ≠ Im x^n ⊆ ker x, so
Kg,ι = ker x ≠ 0 as desired.


Corollary 3.5 (Engel’s Theorem). Let ρ be a representation of a finite


dimensional Lie algebra g into a finite dimensional vector space V . If ρ(x)
is nilpotent for every x ∈ g, then there is a full flag for V (i.e., a chain

0 = V0 ⊆ · · · ⊆ Vn = V,

where dim Vk = k) such that ρ(g)(Vi ) ⊆ Vi−1 .


Proof. We proceed by induction on dim V . If dim V = 0, the result is
immediate. Otherwise, from theorem 3.4, there exists some nonzero v0 ∈ V for
which ρ(x)(v0 ) = 0 for every x ∈ g. Let W = V /(Cv0 ), and let π : V → W
be the canonical projection π : v 7→ v + Cv0 . We can define a representation
ρ̄ : g → gl(W ) by

ρ̄ : x 7→ (v + Cv0 7→ ρ(x)(v) + Cv0 ).

This map is well-defined because ρ(x)(Cv0 ) = 0 for every x ∈ g. Further-


more, ρ̄(x) is nilpotent for all x because ρ(x) is always nilpotent. By induc-
tion, W has a full flag 0 = W0 ⊆ · · · ⊆ Wn−1 = W such that ρ̄(g)(Wi ) ⊆ Wi−1 .
We can pull this back to V by defining V0 = 0 and, for i ≥ 1, Vi = π −1 (Wi−1 ).
The sequence {Vi } has the desired property.

Corollary 3.6. Let g be a finite dimensional Lie algebra. Then g is nilpotent


if and only if adx is nilpotent for every x ∈ g.
Proof. The forward direction is immediate from lemma 3.1.
In the reverse direction, suppose that adx is nilpotent for every x ∈ g.
Applying Engel’s theorem to the adjoint representation of g gives us a
sequence
0 = g0 ⊆ g1 ⊆ · · · ⊆ gn = g
for which [g, gi ] = adg (gi ) ⊆ gi−1 . This says that each gi is an ideal, and that
gi /gi−1 ⊆ Z(g/gi−1 ), which is equivalent to the nilpotency of g by lemma
3.1.

Corollary 3.7. Let g be a finite dimensional nilpotent Lie algebra. Then


there exists a basis {e1 , . . . , en } for g in which adx is strictly upper-triangular for every x ∈ g.
Proof. Let 0 = g0 ( g1 ( · · · ( gn = g as from Engel’s theorem, and pick
ei ∈ gi \ gi−1 .


4 Solvable Lie algebras


4.1 Definitions and properties
For a Lie algebra g, define D0 (g) = g and, for i ≥ 1, Di (g) = [Di−1 (g), Di−1 (g)]. It is
clear that each Di (g) is a Lie sub-algebra of g. In fact, Di (g) is an ideal of
g.
Proposition 4.1. For each i, Di (g) is an ideal of g.
Proof. Our base case, when i = 0, is trivial. Now suppose that x ∈ g and
y ∈ Di+1 (g). We must show that [x, y] ∈ Di+1 (g). By definition there are
aj , bj ∈ Di (g) for which y = Σ_j [aj , bj ]. From the Jacobi identity, we have

[x, y] = Σ_j [x, [aj , bj ]] = Σ_j (−[aj , [bj , x]] − [bj , [x, aj ]]).

By induction, Di (g) is an ideal of g, so [bj , x] and [x, aj ] are in Di (g); hence each
summand lies in [Di (g), Di (g)] = Di+1 (g), and the result follows.

It is also evident that we have the chain of ideals:

· · · ⊆ D2 (g) ⊆ D1 (g) ⊆ D0 (g) = g.

This series is known as the derived series of g. We say that g is


solvable if for some n, Dn (g) = 0.
Lemma 4.2. A Lie algebra g is solvable if and only if there exists a chain
of ideals
0 = a0 ⊆ · · · ⊆ am = g
such that ai+1 /ai is abelian.
Proof. Left as an exercise to the reader. (Hint: follow the proof of lemma 3.1.)
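
To contrast the derived series with the central series, the following sketch (my own addition) uses the matrix model of the non-abelian 2-dimensional Lie algebra from proposition 2.6, with basis H = m(1/2, 0) and E = m(0, 1) satisfying [H, E] = E. Its derived series stops after one step, so it is solvable, while its central series stabilizes at CE ≠ 0, so it is not nilpotent.

import numpy as np

H = np.array([[0.5, 0.0], [0.0, -0.5]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])

def bracket(a, b):
    return a @ b - b @ a

# D^1(g) = [g, g] is spanned by [H, E] = E, and D^2(g) = [D^1, D^1] = C[E, E] = 0: solvable
assert np.allclose(bracket(H, E), E)
assert np.allclose(bracket(E, E), 0)
# C^1(g) = CE, and C^2(g) = [g, C^1(g)] still contains [H, E] = E != 0: not nilpotent
assert not np.allclose(bracket(H, E), 0)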

4.2 Solvable Lie sub-algebras of gl(V ).


Theorem 4.3 (Lie’s theorem). Let g be a solvable Lie sub-algebra of gl(V ),
where V is a finite dimensional vector space (with dim V > 0). Then there
exists some nonzero v0 ∈ V such that x(v0 ) ∈ Cv0 for each x ∈ g. That is,
v0 is an eigenvector of every x ∈ g.


Proof. We proceed by induction on dim g. When g = 0, the result is imme-


diate. Note that D1 (g) = [g, g] is a proper ideal of g. If it were not proper,
then Di (g) = g for all i, and g would not be solvable.
Let H be a hyperplane in g/[g, g], and let h = {x ∈ g | x+[g, g] ∈ H}. As
h is a sub-algebra of g, it is certainly solvable (as Di (h) ⊆ Di (g)). Moreover,
h is an ideal of g. Let x ∈ g and y ∈ h. Then [x, y] ∈ [g, g] ⊆ h. Lastly, it is
clear from the definition that h has codimension 1 in g.
By induction, there exists some nonzero v0 ∈ V such that y(v0 ) ∈ Cv0
for each y ∈ h. Define χ : h → C by letting χ(y) be the eigenvalue of v0
associated with y. That is, let

y(v0 ) = χ(y)v0 .

We can see that χ is a C-linear map.


It would be nice if Cv0 were stable under the action of g, but this need
not be so. Instead, define

W = {v ∈ V | y(v) = χ(y)v, ∀y ∈ h}.

It is immediate that W is a subspace of V which contains Cv0 . Moreover,


W is stable under the action of g. Indeed, if x ∈ g, v ∈ W , y ∈ h, we have

x(y(v)) − y(x(v)) = [x, y](v) = χ([x, y])v.

We will now show that χ([x, y]) = 0, so that

y(x(v)) = x(y(v)) = χ(y)x(v),

implying that x(v) ∈ W . The following is a proof of this fact.


Let w ∈ W , and let L be the subspace of V generated by w, x(w), x^2 (w), . . . .
Let bi = x^i (w), and let r = dim L, so that {b0 , . . . , br−1 } is a basis for L. We
will show by induction that, written as a matrix in this basis, each y ∈ h is
upper triangular. More specifically, we will show that for each y ∈ h there
exist αi,k ∈ C for which

y(bi ) = χ(y) · bi + Σ_{k<i} αi,k bk .

Our induction will be on the index i of bi . First, b0 = w ∈ W , so


y(b0 ) = χ(y)w. Now, if y(bi ) is as described above, then

y(bi+1 ) = y(x(bi ))
        = x(y(bi )) − [x, y](bi )
        = χ(y)bi+1 + Σ_{k<i+1} αi+1,k bk ,


because [x, y] ∈ h. This completes the induction.


It is now clear that trL (y) = r · χ(y) for every y ∈ h. In particular, if y happens to be the
commutator [x, y′ ] for some y′ ∈ h, then

χ([x, y′ ]) = (1/ dim L) trL (xy′ − y′ x) = 0,

since both x and y′ map L into L. This proves that χ vanishes on such commutators, so W is
stable under the action of g as claimed.
Now pick any x ∉ h. Since W is stable under x and C is algebraically closed, there exists a
nonzero w ∈ W for which x(w) ∈ Cw. Then Cw is stable
under the action of x and of h. Since h has codimension 1 in g, g = Cx ⊕ h, so
Cw is stable under the action of g; that is, w is a common eigenvector for all of g, as desired.

It is worth noting that Lie’s theorem holds in a greater generality. It


holds for any Lie algebra over an algebraically closed field of characteristic
zero. The algebraic closedness of C was used implicitly to obtain an eigenvector.
Corollary 4.4. Let g be a finite dimensional solvable Lie algebra, and let
ρ : g → gl(V ) be a representation of g into a finite dimensional vector space
V . Then there exists a full flag

0 = V0 ( V1 ( · · · ( Vd = V

such that ρ(g)(Vi ) ⊆ Vi for each i.


Proof. We induct on the dimension of V . When dim V = 0, the theorem is
trivial. By theorem 4.3, there exists a vector v0 ∈ V for which ρ(x)(v0 ) ∈ Cv0
for each x ∈ g. Let W = V /Cv0 . Define the representation

ρ̄ : g → gl(W )
ρ̄(x) : v + Cv0 7→ ρ(x)(v) + Cv0 .

By induction, there is a full flag

0 = W0 ( W1 ( · · · ( Wd−1 = W,

such that ρ̄(g)(Wi ) ⊆ Wi . If we let π : v 7→ v + Cv0 , we can define V0 = 0


and for each i > 0, Vi = π −1 (Wi ). Then the full flag

0 = V0 ( V1 ( · · · ( Vd = V

has the desired property.

Corollary 4.5 (Lie-Kolchin triangularization theorem). Let g be a finite
dimensional solvable Lie algebra. Then there exists a basis for g in which adx is upper
triangular for every x ∈ g.


Proof. Apply corollary 4.4 to the adjoint representation to get a chain of


subspaces
0 = g0 ( g1 ( · · · ( gn = g,
such that for each k, dim gk = k and adg (gk ) ⊆ gk . For each 1 ≤ k ≤ n,
pick ek ∈ gk with ek ∉ gk−1 . Then {e1 , . . . , en } is a basis for the vector space
g. Moreover, for any x,

adx (gk ) ⊆ gk = span{e1 , . . . , ek }.

This is precisely the requirement that adx is upper triangular in the ordered
basis {e1 , . . . , en }.

4.3 Radicals
Define the radical of a finite dimensional Lie algebra g to be the largest
solvable ideal of g. We denote the radical by Rad(g).
This definition requires some justification. We must show that there is a
unique largest solvable ideal. It suffices to show that if we have two solvable
ideals a and b of g, then the ideal a + b is also solvable.

Proof. Let a and b be solvable ideals of g. In particular, let

0 = a0 ⊆ a1 ⊆ · · · ⊆ am = a
0 = b0 ⊆ b1 ⊆ · · · ⊆ bn = b,

be chains of ideals of g such that for each i (that makes sense), ai+1 /ai and
bi+1 /bi are abelian.
Then the sequence of ideals

0 = a0 ⊆ · · · ⊆ am = am + b0 ⊆ · · · ⊆ am + bn = a + b

witnesses the solvability of a + b. We must check that c = (a + bi+1 )/(a + bi )


is abelian. Each element of c can be written as x + a + bi where x ∈ bi+1 .
Then for any two elements x + a + bi and y + a + bi of a + bi+1 , we have

[x + a + bi , y + a + bi ] = [x, y] + a + bi .

Since x, y ∈ bi+1 , [x, y] ∈ bi , giving the desired result.


Note 4.6. We are often interested in the case where Rad(g) = 0. This is
often taken as the definition for semi-simplicity. In section 5 we will see
that semi-simplicity has a more natural definition (in terms of direct sums
of “simple” Lie algebras), and that a Lie algebra is semi-simple if and only
if its radical is zero.

For a finite dimensional Lie algebra g, we have the following equivalences:

• Rad(g) = 0

• g has no nonzero solvable ideals

• g has no nonzero abelian ideals

The equivalence of the first two conditions is obvious. As for the second
pair, note that any abelian Lie algebra is solvable. In the other direction, if
there is a nonzero solvable ideal a, then take the largest n for which Dn (a)
is nonzero. This must be an abelian ideal of g.

Proposition 4.7. For any finite dimensional Lie algebra g,

Rad (g/ Rad(g)) = 0.

Proof. Let a = Rad(g/ Rad(g)). There is some n for which Dn (a) = 0.


By the correspondence theorem for Lie algebras, there is an ideal b of g
containing Rad(g) such that Dn (b) ⊆ Rad(g). As there is some m for
which Dm (Rad(g)) = 0, Dm+n (b) = 0 meaning that b is solvable. Thus,
b = Rad(g), so a = 0.


5 Semi-simple Lie algebras


5.1 Killing form
Let g be a Lie algebra, and let ρ : g → gl(V ) be a representation of g
into a finite dimensional vector space V . A bilinear form β on V is said to
be g-invariant (with respect to the representation ρ) if for any x ∈ g and
v, w ∈ V
β(ρ(x)v, w) + β(v, ρ(x)w) = 0.
Example 5.1. Consider the adjoint representation ad : g → gl(g), and
define a bilinear form

β(x, y) = trg (adx ◦ ady ).

Then β is g-invariant (with respect to the adjoint representation). We


can check for any x, y, z ∈ g,

β(adx (y), z) = trg (ad[x,y] ◦ adz )


= trg (adx ◦ ady ◦ adz − ady ◦ adx ◦ adz )
= − trg (ady ◦ adx ◦ adz ) + trg (ady ◦ adz ◦ adx )
= − trg (ady ◦ adx ◦ adz − ady ◦ adz ◦ adx )
= − trg (ady ◦ ad[x,z] )
= −β(y, adx (z)).

Note that in the above computation, we use the fact that ad is a Lie
algebra homomorphism, and the fact that for any two operators A and B,

tr(AB) = tr(BA)

(where the product makes sense). This bilinear form is known as the Killing
form on g. The g-invariance of the Killing form can be expressed in the
following way to highlight its symmetry:

β([x, y], z) = β(x, [y, z])
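
Before moving on, it can be instructive to compute a Killing form explicitly. The snippet below is not part of the original notes; it computes the Gram matrix of β for sl2 (C) in the ordered basis {e, f, h} (the basis used later in section 7.3). The determinant is −128 ≠ 0, so this Killing form is non-degenerate.

import numpy as np

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

def bracket(a, b):
    return a @ b - b @ a

def ad(x):
    # matrix of ad_x in the basis {e, f, h}
    cols = [[bracket(x, b)[0, 1], bracket(x, b)[1, 0], bracket(x, b)[0, 0]] for b in basis]
    return np.array(cols).T

B = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(B)                    # [[0, 4, 0], [4, 0, 0], [0, 0, 8]]
print(np.linalg.det(B))     # -128, nonzero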

Notice that if we restrict the Killing form from g to a sub-algebra h, we


certainly get an h-invariant form, but it need not be the Killing form on h.
However, if h happens to be an ideal, then it is the Killing form for h as the
following theorem says:


Theorem 5.2. Let g be a finite dimensional Lie algebra, and let a be an


ideal. Then β|a is the Killing form on a.

Proof. Take a basis for a and extend it to a basis for g. Expanded as a
matrix in this basis,

        ( ∗   ∗ )
adx  =  ( 0   0 )

for x ∈ a. If x, y ∈ a, then

                 ( (adx ◦ ady )|a   ∗ )
adx ◦ ady  =     (       0          0 ).

Taking the trace shows that the Killing form on a is the restriction of the
Killing form on g.

We say that a g-invariant bilinear form α is degenerate if there is some


nonzero x for which α(x, −) is identically 0. Otherwise we say it is non-
degenerate.
The Killing form is particularly important for classifying semi-simple Lie
algebras, so we consider it to be the “standard” inner product on g. Along
with this notation, we have that for any sub-algebra h of g,

h⊥ = {y ∈ g | β(x, y) = 0, ∀x ∈ h}.

Thus, another way to say that β is non-degenerate is to say that g⊥ = 0.


With this in mind, we have the following lemma:

Lemma 5.3. Let a be an ideal of g. Then a⊥ is also an ideal of g.

Proof. Let x ∈ a⊥ and y ∈ g. Then for any z ∈ a,

β([x, y], z) = β(x, [y, z]) = 0,

so [x, y] ∈ a⊥ .

We may hope that, as in the case of vector spaces, g = a ⊕ a⊥ . Unfortu-


nately this need not be the case. It may be that a ∩ a⊥ ≠ 0. Certainly, if g
is abelian, then the Killing form is zero everywhere, and regardless of what
a is, a⊥ = g.


5.2 Cartan Criterion


We have now developed the necessary tools to discuss the Cartan criterion.
The criterion gives necessary and sufficient conditions for solvability and for
Rad(g) = 0 in terms of the non-degeneracy of a g-invariant bilinear form.
We first need a lemma:

Lemma 5.4. Let V be a finite dimensional vector space, and let

L ⊆ M ⊆ gl(V )

be a chain of subspaces. Define T = {t ∈ gl(V ) | [t, M ] ⊆ L}. If there exists


some z ∈ T such that trV (zt) = 0 for every t ∈ T , then z is nilpotent.

Proof. Let z ∈ T . We can find a basis of V in which z is upper triangular
(say, by putting it in Jordan canonical form). Let

        ( λ1       ∗  )
z   =   (    ⋱        )
        ( 0        λn )

for λi ∈ C. We use the fact from linear algebra that there exists a
polynomial P without constant term such that

           ( λ1       0  )
P (z)  =   (    ⋱        )
           ( 0        λn )

Let F = Σ_i Qλi be the Q-vector space in C spanned by the λi . Let f be an
arbitrary Q-linear functional on F , and let

        ( f (λ1 )          0    )
z′  =   (        ⋱              )
        ( 0            f (λn )  )

We can also construct a polynomial Q via Lagrange interpolation such
that Q(λi − λj ) = f (λi ) − f (λj ) for each i, j. We can explicitly check that
adz′ = Q(adP (z) ). We leave the computation as an exercise for the reader.
Note that P (z) ∈ T , so adP (z) (M ) ⊆ L, implying that adz′ (M ) ⊆ L.
This tells us that z′ ∈ T . Then,

0 = trV (z · z′ ) = Σ_i λi · f (λi ),


so 0 = f (trV (z · z′ )) = Σ_i f (λi )^2 . This tells us that f (λi ) = 0 for each
λi . Since f was an arbitrary Q-linear functional, F must be zero-dimensional. But this only happens if each λi = 0,
whence z is nilpotent.

Theorem 5.5 (Cartan criterion for solvability). Let g ⊆ gl(V ), and define
the g-invariant bilinear form ⟨x, y⟩ = trV (x · y). Then g is solvable if and
only if ⟨x, y⟩ = 0 for every x ∈ g and y ∈ [g, g].

Proof. We first need to check that ⟨·, ·⟩ is g-invariant. Indeed, for x, y, z ∈ g,


we have

trV ([x, y]z) = trV (xyz − yxz) = trV (xyz − xzy) = trV (x[y, z]),

which is similar to the computation we did for the Killing form.


Let g ⊆ gl(V ) be solvable. By corollary 4.4 applied to the inclusion
representation, we may take a basis for V in which every element of g is upper
triangular. Pick x, y ∈ g. Then [x, y] = xy − yx is strictly upper triangular,
and it is clear that for any z ∈ g,

⟨z, [x, y]⟩ = trV (z · [x, y]) = 0.
In the other direction, let T = {t ∈ gl(V ) | ∀x ∈ g, [t, x] ∈ [g, g]}. By
g-invariance and the assumption,

trV (t · [x, y]) = ⟨t, [x, y]⟩ = ⟨[t, x], y⟩ = 0.

By lemma 5.4, [g, g] consists of nilpotent elements, so by Engel’s theorem, [g, g]
is nilpotent. Since g/[g, g] is abelian, this implies that g is solvable.

Recall that the condition Rad(g) = 0 is often given as the definition of


semi-simplicity. We have a different definition of semi-simplicity in mind,
and have not yet shown that the two are equivalent (let alone, even defined
semi-simplicity). For this reason, a more accurate name for the following
theorem would be ”Cartan criterion for Rad(g) = 0.” Regardless, we will
continue with the more common naming convention.

Theorem 5.6 (Cartan criterion for semi-simplicity). Let g be a finite di-


mensional Lie algebra, and let β be the Killing form on g. Then Rad(g) = 0
if and only if β is non-degenerate.


Proof. First suppose that Rad(g) = 0. Let a = g⊥ . Consider the image of


a in gl(g) under the adjoint representation. Let ρ denote the restriction of
ad to a. By definition, β(x, y) = trg (adx ◦ ady ) = 0 for x, y ∈ a, so ρ(a)
is solvable by Cartan’s criterion for solvability. Since ker ρ = a ∩ Z(g) is
solvable, it follows from the correspondence theorem for Lie algebras that a
is solvable, and hence a = 0. Thus, β is non-degenerate.
In the other direction, suppose β is non-degenerate. Let a be an abelian ideal, and let x ∈ a and
y ∈ g. If σ = adx ◦ ady , then σ(g) ⊆ a and σ(a) ⊆ [x, a] ⊆ [a, a] = 0, so σ^2 = 0. As σ is nilpotent,
β(x, y) = trg (σ) = 0. As this is the case for any y ∈ g, the non-degeneracy of β forces x = 0, and so
a = 0. Since there are no nonzero abelian ideals in g, Rad(g) = 0.
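
For contrast, here is a quick check (my addition) that the Killing form of the non-abelian 2-dimensional Lie algebra g0,1 from section 2.2 (basis {h1 , h2 } with [h1 , h2 ] = h2 ) is degenerate, consistent with the criterion: that algebra is solvable, so its radical is all of g.

import numpy as np

def ad(coords):
    # coords = (a, b) represents x = a*h1 + b*h2; then ad_x(h1) = -b*h2 and ad_x(h2) = a*h2
    a, b = coords
    return np.array([[0.0, 0.0], [-b, a]])

basis = [(1.0, 0.0), (0.0, 1.0)]
B = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(B)                                    # [[1, 0], [0, 0]]
assert np.isclose(np.linalg.det(B), 0.0)    # degenerate, matching Rad(g) = g != 0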

Lemma 5.7. Let g be a Lie algebra. Then [g, g] ∩ Rad(g) is a nilpotent


ideal.
In fact, [g, g] ∩ Rad(g) is the largest nilpotent ideal, though we will not
prove this here.

Proof. Let V be a faithful representation of g. There must exist some


nonzero v ∈ V such that Rad(g)v ⊆ Cv by Lie’s theorem. Let λ : Rad(g) →
C be the functional given by
x(v) = λ(x)v
for x ∈ Rad(g).
Define W = {w ∈ V | x(w) = λ(x)w, ∀x ∈ Rad(g)}. It is evident that
W is a subrepresentation of g. Moreover, trW ([y, z]) = 0 for every y, z ∈ g.
If any operator from g acts as multiplication by a constant c on W , then
its trace on W must be c · dim W . Since the trace is zero, it means that
[g, g] ∩ Rad(g) must act by zero on W .
Now let a = {z ∈ g | z(W ) = 0}. It is clear that a is an ideal of g. An
inductive argument on dim g will show that the theorem holds for both a
and g/a, from which we can complete the proof.

Theorem 5.8. For any finite dimensional Lie algebra, Rad(g) = [g, g]⊥ ,
where orthogonality is with respect to the Killing form.
Proof. Notice first that [g, g]⊥ is an ideal of g. For x ∈ [g, g]⊥ and
y ∈ [[g, g]⊥ , [g, g]⊥ ] ⊆ [g, g] we have β(x, y) = 0, and so it follows from the Cartan
criterion for solvability (applied to ad([g, g]⊥ )) that [g, g]⊥ is solvable and hence contained in Rad(g).
On the other hand, Rad(g) ⊆ [g, g]⊥ , again from the Cartan criterion.


Corollary 5.9. Let g be a finite dimensional Lie algebra, and a an ideal of


g. Then Rad(a) is an ideal of g.

Proof.
Rad a = [a, a]⊥ = a ∩ [a, a]⊥ ,
where the first orthogonality is with respect to the Killing form on a, and
the second is with respect to the Killing form on g. Since a is an ideal, these
are the same. Notice that both a and [a, a]⊥ are ideals of g, and so their
intersection, Rad a is an ideal of g.

5.3 Simplicity and semi-simplicity


We say that g is simple if it is not abelian, and its only ideals are 0 and g.
If g is isomorphic to a finite direct sum of simple Lie algebras, we say g is
semi-simple.

Lemma 5.10. If g is semi-simple, then Rad(g) = 0.

Proof. First suppose g is simple. Since Rad(g) is an ideal of g, if Rad(g) ≠ 0,
then Rad(g) = g, making g solvable. Take n maximal for which Dn (g) ≠ 0.
Recall from proposition 4.1 that Dn (g) is an ideal of g, and so Dn (g) = g.
But then 0 = Dn+1 (g) = [g, g], so g is abelian, a contradiction.
Now, recognize that Rad(g ⊕ h) ⊆ Rad(g) ⊕ Rad(h), whence any semi-
simple Lie algebra g has Rad(g) = 0.

The following result is the converse of this lemma. That is, together,
these results say that g is semi-simple if and only if Rad(g) = 0.

Theorem 5.11. Let g be a Lie algebra with Rad(g) = 0. Then

1. g has finitely many nonzero minimal ideals a1 , . . . , an .

2. a1 , . . . , an are simple.

3. g = a1 ⊕ · · · ⊕ an (and thus g is semi-simple.)

Proof. We proceed by induction on the longest chain of proper inclusions of


ideals
0 = a0 ⊆ a1 ⊆ · · · ⊆ an = g.
When n = 1, g is simple, and the theorem is trivial.


Otherwise, let b = a1 . Certainly b⊥ is a subspace of the vector space g,


but it is also an ideal: If x ∈ b⊥ and y ∈ g, for any z ∈ b,

β([x, y], z) = β(x, [y, z]) = 0,

since [y, z] ∈ b and x ∈ b⊥ . By Cartan’s criterion, β is non-degenerate on g. Moreover,


since b is an ideal, β|b is the Killing form on b (theorem 5.2), and since Rad(b) ⊆ Rad(g) = 0
(using corollary 5.9), it is non-degenerate by the Cartan criterion.
Now b ∩ b⊥ is an ideal of g contained in b, so by the minimality of b it is either 0 or b; the
non-degeneracy of β|b rules out b, so b ∩ b⊥ = 0. Counting dimensions (using the non-degeneracy
of β on g), g = b ⊕ b⊥ as vector
spaces. The Lie structure agrees with this direct sum, because if x1 , x2 ∈ b
and y1 , y2 ∈ b⊥ , then

[x1 + y1 , x2 + y2 ] = [x1 , x2 ] + [x1 , y2 ] + [y1 , x2 ] + [y1 , y2 ] = [x1 , x2 ] + [y1 , y2 ],

so g = b ⊕ b⊥ as Lie algebras.
By induction, b⊥ is the direct sum of finitely many nonzero minimal
ideals of b⊥ . In general, an ideal c of b⊥ need not be an ideal of g. However,
since b⊥ is a direct summand of g, c is an ideal of g. Moreover, it is clearly
minimal and simple. If b⊥ = c1 ⊕ · · · ⊕ ck , then

g = b ⊕ c1 ⊕ · · · ⊕ ck .

It now suffices to show that these are the only minimal ideals of g.
Suppose we had a minimal ideal h of g. Then h ∩ b is an ideal, so either
h = b, or h ∩ b = 0. In the former case we are done immediately. In the
latter case, by induction, h must be one of the ci . This completes the proof.


6 The universal enveloping algebra


Much of what we would like to say about representations of Lie algebras
requires the so-called universal enveloping algebra. We construct it here.

6.1 Universal Enveloping Algebra


Let AC denote the category of (associative) C-algebras, and let LC denote
the category of Lie algebras (over C).
We have the functor F : AC → LC where we impose [x, y] = xy − yx as
in example 2.2. We would like a way to somehow reverse this operation. It
is not the case that Lie algebras can always uniquely be endowed with an
algebra structure. Indeed, Lie algebras need not be associative. Thus, we
cannot hope to find an inverse functor to F . However, it so happens that
F has a left adjoint U : LC → AC . For a Lie algebra g, U (g) is called the
universal enveloping algebra of g.
More precisely, for any C-vector space V , let T 0 (V ) = C, and for n > 0,
T n (V ) = V ⊗n = V ⊗ · · · ⊗ V (n times). Define

T (V ) = ⊕_{k=0}^∞ T k (V ).

If V is a vector space, we can make T (V ) into a (graded) algebra by


defining

(w1 ⊗ · · · ⊗ wm ) · (v1 ⊗ · · · ⊗ vn ) = w1 ⊗ · · · ⊗ wm ⊗ v1 ⊗ · · · ⊗ vn ,

and extending via linearity to all of T (V ). We refer to T (V ) as the tensor


algebra of V .
In the case that V is finite dimensional, we can take a basis {e1 , . . . , en }
for V . Then T (V ) ≅ C⟨e1 , . . . , en ⟩, a “polynomial ring” in n non-commuting
indeterminates, in the natural way.
Let I be the ideal of T (g) generated by

{x ⊗ y − y ⊗ x − [x, y] | x, y ∈ g}.

Then we define the universal enveloping algebra of g to be

U (g) = T (g)/I.

The first result we have about U (g) is that U and F are adjoint. While
the result is certainly important, the proof comes down to checking many


unenlightening details. It is included here for completeness in appendix A.


You are highly encouraged to skim the proof, or skip it entirely.
Theorem 6.1. Let F and U be the functors as defined above. Then (U, F )
form an adjoint pair. That is, for any Lie algebra g and C-algebra A,
homAC (U (g), A) ≅ homLC (g, F (A)),
where the isomorphism is natural in both g and A.
Proof. See appendix A.

Proposition 6.2. The following are equivalent for a Lie algebra g:


1. g is abelian
2. U (g) is commutative
If g is finite dimensional, then we also have that,
3. U (g) is isomorphic to a polynomial ring.
Proof. If g is abelian, then x ⊗ y − y ⊗ x ∈ I for every x, y ∈ g, so x ⊗ y
and y ⊗ x represent the same equivalence class in U (g). This property is
easily seen to extend to the entire tensor algebra T (g), so U (g) must be
commutative.
Conversely, if g is not abelian, then there exist x, y ∈ g for which [x, y] ≠ 0,
and a map f : g → F (A) for which f ([x, y]) = z ≠ 0. By the adjointness
property, there is a corresponding Ψ(f ) : U (g) → A. From its construction,
we see that it is defined by

Ψ(f ) : x1 ⊗ · · · ⊗ xn + I 7→ f (x1 ) · . . . · f (xn ).

Thus, Ψ(f )([x, y] + I) = f ([x, y]) = z ≠ 0, so [x, y] ∉ I. It follows that
x ⊗ y − y ⊗ x ∉ I, and so modulo I, x ⊗ y ≠ y ⊗ x. This is precisely what it
means for U (g) to be non-commutative.
The third condition becomes immediately obvious from the explicit iso-
morphism T (g) ≅ C⟨e1 , . . . , en ⟩ described above.

Note 6.3. We will make two notational shortcuts from now on when dis-
cussing U (g). The first is that we will forego writing x ⊗ y and instead write
xy for the product of x and y in U (g) when the context is clear.
Second, g embeds into U (g) by first identifying g with T 1 (g) and then
seeing that the quotient map taking T (g) to U (g) preserves T 1 (g). From
now on, we will make the identification g ⊆ U (g) tacitly.


6.2 The Casimir element


Let g be a finite dimensional Lie algebra, and pick a basis {b1 , . . . , bn } for
the vector space g. For each bi there is a linear functional fi : g → C given
by fi (bj ) = δi,j , where δi,j is the Kronecker-delta function. Moreover, if the
Killing form ⟨−, −⟩ is non-degenerate (for instance if g is semi-simple), each
linear functional can be expressed in the form ⟨x, −⟩ for some x ∈ g. Let
b^i ∈ g be such that ⟨b^i , bj ⟩ = δi,j . Then {b^1 , . . . , b^n } is a basis for g, and we
call it the dual basis with respect to the inner product ⟨−, −⟩.
Take an arbitrary basis {b1 , . . . , bn } of g, and let {b^1 , . . . , b^n } be its dual
basis with respect to the Killing form. Then we can define the Casimir
element of g to be Cg ∈ U (g) given by

Cg = Σ_{i=1}^n bi b^i .

A priori, it is not clear that Cg is well defined. We must check that if


we pick an alternate basis {e1 , . . . , en }, then the Casimir element defined in
this basis is the same. Indeed, if each ei is given by
n
X
ei = ci,j bj ,
j=1

and the corresponding dual basis is {e1 , . . . , en } and


n
X
ei = di,j bj ,
j=1

then we have
n
X X
bi bi = ci,j di,k ej ek .
i=1 i,j,k

If δa,b is the Kronecker-delta function, then the coefficient of ej ek can be

28
Last updated October 26, 2014 6 THE UNIVERSAL ENVELOPING ALGEBRA

given by
n
X n X
X n
ci,j di,k = ca,j db,k δa,b
i=1 a=1 b=1
Xn X n D E
= ca,j db,k ej , ek
a=1 b=1
* n n
+
X X
= ca,j ej , bb,k ek
b=1 a=1
D E
k
= bj , b
= δj,k .
P i P i
This means ei e = bi b , so the Casimir element is well-defined.
Another fact of note is that Cg commutes with every element of the universal
enveloping algebra of g.

Lemma 6.4.
Cg ∈ Z(U (g)).

Proof. The proof is entirely computational and left as an exercise for the
reader.
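
As a concrete illustration (not part of the original notes), the following sketch computes the image of the Casimir element of sl2 (C) in its defining 2-dimensional representation, building the dual basis with respect to the Killing form numerically. The result is the scalar matrix (3/8)·id, which previews lemma 7.1 in section 7.2: the Casimir acts on an irreducible representation by a nonzero scalar.

import numpy as np

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

def bracket(a, b):
    return a @ b - b @ a

def ad(x):
    cols = [[bracket(x, b)[0, 1], bracket(x, b)[1, 0], bracket(x, b)[0, 0]] for b in basis]
    return np.array(cols).T

# Gram matrix of the Killing form, and the dual basis b^1, b^2, b^3
B = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
Binv = np.linalg.inv(B)
dual = [sum(Binv[i, j] * basis[j] for j in range(3)) for i in range(3)]

# image of the Casimir element in the defining representation
C = sum(basis[i] @ dual[i] for i in range(3))
print(C)                                   # 0.375 times the identity
for x in basis:
    assert np.allclose(C @ x - x @ C, 0)   # commutes with the image of sl2(C)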


7 Representations of Lie algebras


7.1 Definitions
As defined in section 2.1, we say that a representation of a Lie algebra
g on a finite dimensional C-vector space V is a Lie algebra homomorphism
ρ : g → gl(V ). One particularly important representation is the adjoint
representation ad : g → gl(g), given by

ad : x 7→ [x, −].

We use the notation adx for ad(x). Just as in the case with representations
of groups, we often identify the vector space V with the action of g on V ,
dropping the map ρ altogether to simplify notation.
Now suppose that V, W are representations of a Lie algebra g. We endow
the vector spaces V ⊕ W , V ∗ , V ⊗ W , and hom(V, W ) with actions of g to
make them representations by:

• For (v, w) ∈ V ⊕ W and x ∈ g, x(v, w) = (xv, xw).

• For φ ∈ V ∗ and x ∈ g, x(φ) : v 7→ −φ(xv).

• For v ⊗ w ∈ V ⊗ W and x ∈ g, x(v ⊗ w) = (xv) ⊗ w + v ⊗ (xw).

• For ψ ∈ hom(V, W ) and x ∈ g, x(ψ) : v 7→ x(ψ(v)) − ψ(xv).

Note that hom(V, W ) ≅ V ∗ ⊗ W as vector spaces, and the represen-
tations defined respect this identification. For φ ∈ V ∗ and w ∈ W , the
corresponding map ψ : V → W is given by

ψ : v 7→ φ(v)w.

Applying the g action to V ∗ ⊗ W gives us

x(φ ⊗ w) = xφ ⊗ w + φ ⊗ xw
= −φ(x·) ⊗ w + φ ⊗ xw

corresponding to the map

v 7→ −φ(xv)w + φ(v)(xw),

which is precisely the same as what would be given by the action of g on


hom(V, W ) if we had computed it that way.


We say that a nonzero representation V of g is irreducible if the only proper
subspace W of V closed under the action of g is 0.
If V is a representation of g, define V g by
V g = {v ∈ V | xv = 0, ∀x ∈ g}.
It should be clear from this definition that if V, W are representations of g,
then
homC (V, W )g = homg (V, W ).

7.2 Representations of semi-simple Lie algebras


In this section, the Lie algebra g will always denote a semi-simple Lie algebra.
Suppose we have a representation ρ : g → gl(V ) for some finite dimen-
sional vector space V . We claim that this induces a representation of U (g)
on V . Indeed, let ρ ∈ homLC (g, gl(V )). In the language of section 6.1,
gl(V ) = F (EndC (V )), and so, since (U, F ) is an adjoint pair, we have a
unique C-algebra homomorphism ρ′ : U (g) → EndC (V ). (In diagram form: ρ : g → gl(V ) = F (EndC (V ))
along the top corresponds, under the adjunction, to ρ′ : U (g) → EndC (V ) along the bottom,
with U and F as the vertical arrows.)
Applying the functor F to ρ′ yields a representation of U (g) (as a Lie algebra)
on V , which restricts to g. This is to say, any representation of g on V factors
through U (g), and does so uniquely.
Lemma 7.1. If V is an irreducible representation of g, then the Casimir
element Cg acts on V by multiplication by a nonzero element of C.
Proof. Extend ρ to ρ : U (g) → gl(V ). Let f be the characteristic polynomial
of ρ(Cg ). Let λ be a root of f , and let W = ker(ρ(Cg )−λ·id). For any x ∈ g,
ρ(x)(W ) ⊆ W because ρ(x) commutes with ρ(Cg ). Thus, W is a nonzero
subrepresentation of V , so W = V by irreducibility. Thus, for each v ∈ V , ρ(Cg )v = λv
as desired.

Lemma 7.2. Let W be a representation ρ : g → gl(W ) of a semi-simple Lie algebra
g, and let V ⊆ W be a subrepresentation such that dim W/V = 1. Then there exists some
subrepresentation L ⊆ W such that W = V ⊕ L.


Proof. Define βW : g × g → C by

(x, y) 7→ trW (ρ(x)ρ(y)).

Choose a basis {e1 , . . . , en } of V , and extend it by en+1 ∈ W \ V to a basis


for W . In such a basis, any x ∈ g is represented by a matrix

        (  ρ(x)|V          ∗    )
        (  0  · · ·  0    α(x)  ),

for some α(x). We can define the map g → gl(W/V ) by

x 7→ α(x).

Since g is semi-simple, g = [g, g], and α is a Lie algebra homomorphism into the
abelian Lie algebra gl(W/V ) ≅ C; hence α must be zero, implying that

βW (x, y) = βV (x, y).

As the action of the Casimir elements on V and W are defined with


respect to the inner products βV and βW , and the actions are multiplication
by some constants, those constants must be equal. This completes the proof.

Theorem 7.3. Every representation of a semi-simple Lie algebra is semi-


simple.

Proof. Let V ⊆ W be representations of a semi-simple Lie algebra g. Let


N = W/V . Then we have the exact sequence of Lie algebra representations:

0→V →W →N →0 (∗)

which we wish to prove is split-exact.


The functor homC (N, −) is an exact functor in the category of C-vector
spaces, and so we have the exact sequence of vector spaces:

0 → homC (N, V ) → homC (N, W ) −π→ homC (N, N ) → 0. (∗∗)
However, it is routine to check that the functor homC (N, −) respects the
g-action on (∗), making (∗∗) an exact sequence of Lie algebra representa-
tions as well.
Consider the commutative diagram:


0 → homC (N, V ) → homC (N, W ) −π→ homC (N, N ) → 0
        ‖                    ∪                  ∪
0 → homC (N, V ) → π −1 (C · idN ) →   C · idN   → 0

By lemma 7.2, the bottom row is exact and will split.

The preceding result should remind us of Maschke’s theorem for repre-


sentations of finite groups.

7.3 sl2 (C) as a worked example


It is routine to check that sl2 (C) is simple. Throughout this section we’ll be
using the basis {e, f, h} for sl2 (C), where

e = ( 0  1 )    f = ( 0  0 )    h = ( 1   0 )
    ( 0  0 )        ( 1  0 )        ( 0  −1 )

We leave the computation of the brackets as an exercise for the reader:

[e, f ] = h [h, e] = 2e [h, f ] = −2f.
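
A quick machine verification of these relations (my addition, using the matrices above):

import numpy as np

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def bracket(a, b):
    return a @ b - b @ a

assert np.allclose(bracket(e, f), h)
assert np.allclose(bracket(h, e), 2 * e)
assert np.allclose(bracket(h, f), -2 * f)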

Let V be a possibly infinite dimensional representation of sl2 (C). Let


Vλ = ker(h − λ · id) be the eigenspace with eigenvalue λ. Clearly, the sum
of these subspaces is direct, so we have

⊕_{λ∈C} Vλ ⊆ V.

Note that there is no need for V to be spanned by these eigenspaces.


If v ∈ Vλ is nonzero and e(v) = 0, then we say that v is a primitive
vector of weight λ. Now let v be primitive of weight λ, and define v0 , v1 , . . .
by

vn = (1/n!) f^n (v).
Then we have the following proposition:

Proposition 7.4. For each n, and vn defined as above,


1. f (vn ) = (n + 1)vn+1

2. h(vn ) = (λ − 2n)vn

3. e(vn ) = (λ − n + 1)vn−1

Proof. The first equation is immediate from the definition.


For the second and third, we induct on n. For each, the case n = 0
is immediate. The inductive step follows from the following computations.
For h, we have,

1
h(vn ) = h(f (vn−1 ))
n
1
= ([h, f ] + f h)vn−1
n
f
= (−2 + f )vn−1
n
f
= (λ − 2(n − 1) − 2)vn−1
n
f
= (λ − 2n)vn−1
n
= (λ − 2n)vn

And for e we have

e(vn ) = (1/n) e(f (vn−1 ))
       = (1/n) ([e, f ] + f e)(vn−1 )
       = (1/n) (λ − 2n + 2)vn−1 + (1/n) f ((λ − n + 2)vn−2 )
       = (1/n) (λ − 2n + 2)vn−1 + ((n − 1)/n) (λ − n + 2)vn−1
       = (λ − n + 1)vn−1
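
From these formulas one can write down a finite dimensional representation explicitly. The sketch below is my own addition: taking λ = m and a basis v0 , . . . , vm , it builds the matrices of e, f , h given by proposition 7.4 and checks that they satisfy the sl2 (C) relations (this is the representation Lm constructed later in this section).

import numpy as np

def rep_matrices(m):
    # matrices of e, f, h on the basis v_0, ..., v_m, using proposition 7.4 with lambda = m
    E = np.zeros((m + 1, m + 1))
    F = np.zeros((m + 1, m + 1))
    H = np.zeros((m + 1, m + 1))
    for n in range(m + 1):
        H[n, n] = m - 2 * n            # h(v_n) = (m - 2n) v_n
        if n < m:
            F[n + 1, n] = n + 1        # f(v_n) = (n + 1) v_{n+1}
        if n > 0:
            E[n - 1, n] = m - n + 1    # e(v_n) = (m - n + 1) v_{n-1}
    return E, F, H

def bracket(a, b):
    return a @ b - b @ a

E, F, H = rep_matrices(4)
assert np.allclose(bracket(E, F), H)
assert np.allclose(bracket(H, E), 2 * E)
assert np.allclose(bracket(H, F), -2 * F)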

Proposition 7.5. Let V be a (possibly infinite dimensional) representation


of sl2 (C), and let v ∈ V be primitive of weight λ. Let v0 , v1 , . . . be defined
as above. Then exactly one of the following is true:
• {v0 , v1 , . . . } is linearly independent


• There is some m ∈ N for which λ = m and for every i,

vi ∈ ⊕_{j=0}^m Cvj .

Proof. By a simple dimension counting argument, at most one condition can


be true. Suppose that the first condition fails. Take m minimal such that
v0 , v1 , . . . , vm , vm+1 is linearly dependent. That is,

vm+1 ∈ ⊕_{j=0}^m Cvj .

From proposition 7.4, vm+1 ∈ Vλ−2m−2 . As the sum

Σ_{i=0}^{m+1} Vλ−2i

is direct, and vm+1 lies both in Vλ−2m−2 and in Σ_{i=0}^{m} Vλ−2i , it must be that vm+1 = 0. Thus
f (vm ) = 0, and vn = 0 for every n > m, clearly implying that each vi is in ⊕_{j=0}^m Cvj .
Now, h(vm ) = [e, f ](vm ) = 0 − f e(vm ), so

h(vm ) = −f e(vm )
(λ − 2m)vm = −(λ − m + 1)mvm
λ − 2m = −λm + m2 − m
λ = m.

Now let L = C2 = Cx ⊕ Cy be a representation of sl2 (C), where x, y are


the first and second coordinate vectors respectively, with the natural action
from sl2 (C). That is, e(x) = 0, f (x) = y, and h(x) = x. Similarly, for y,
e(y) = x, f (y) = 0, and h(y) = −y. Let us extend L to

Lm = ⊕_{r+s=m} C x^r y^s ⊆ C[x, y].

We can view Lm as the homogeneous degree m polynomials, but this doesn’t


tell us how to extend the action of sl2 (C). More formally, we have Lm =
S m (L), the mth graded piece of the symmetric algebra. This may be obtained
as a quotient of T m (L) = L⊗m . We can thus extend the action on L by


the natural tensor representation construction done in section 7.1. This


construction yields
e(x^r y^s ) = s·x^{r+1} y^{s−1}     f (x^r y^s ) = r·x^{r−1} y^{s+1}     h(x^r y^s ) = (r−s) x^r y^s .

Moreover x^m is primitive in Lm of weight m, and vn = (m choose n) x^{m−n} y^n .
Lastly, the previous proposition says that not only as a vector space, but
also as an sl2 (C) representation,
m
M
Lm = Cvi .
i=0

Corollary 7.6. Let V be a (possibly infinite dimensional) representation of
sl2 (C). Let m ∈ N and let v be primitive of weight m. Let W = Σ_{i≥0} Cvi
(with the vi defined as above). Then if dim W < ∞, there is an isomorphism
W ≅ Lm given by

vn 7→ (m choose n) x^{m−n} y^n .
Proof. The proof is immediate from the preceding paragraphs.

Theorem 7.7. Let V be a finite dimensional representation of sl2 (C). Then


there exist m1 , . . . , mr ≥ 0 such that V ≅ Lm1 ⊕ · · · ⊕ Lmr , and each Lm is
simple (of dimension m + 1).
Proof. We already know that sl2 (C) is semi-simple, and so any finite dimen-
sional representation is necessarily a direct sum of simple sl2 (C)-representations.
It therefore suffices to show that each simple representation of sl2 (C) is iso-
morphic to one of the Lm .
Let V be an irreducible representation of sl2 (C). Take λ such that Vλ ≠ 0
and Vλ+2 = 0. Let v ∈ Vλ be primitive. In fact, any nonzero v ∈ Vλ suffices, because by
proposition 7.4, e(v) ∈ Vλ+2 , so e(v) = 0.
Then Σ_{r≥0} C f^r (v) is a non-zero subrepresentation of V and hence equal
to V . Let m be minimal such that f^{m+1} (v) = 0 (which must exist as V is finite
dimensional). This representation is isomorphic to Lm by identifying v with
xm .

It is worth noting that h is diagonalizable on Lm , and hence on any finite


dimensional representation of sl2 (C).
Our strategy for sl2 (C) generalizes nicely to a strategy that works for all
finite dimensional simple Lie algebras. Our approach will be to find a col-
lection of elements e1 , . . . , en , f1 , . . . , fn , and h1 , . . . , hn which generate the


Lie algebra in such a way that [ei , fi ] = hi , [hi , ei ] = 2ei and [hi , fi ] = −2fi ,
along with other properties. The Lie sub-algebra h generated by {h1 , . . . , hn }
should be abelian, and maximally so. The sub-algebra h will be a Cartan
sub-algebra (defined below) and will have nice representation theoretic prop-
erties. We will prove that such a generating set always exists, and that all such
are isomorphic. We can then use the relations between them to build root
systems and finally Dynkin diagrams which we can classify via combinatorial
means, and pull back this classification to the Lie algebras.
Specifically, for sl2 (C), the Cartan sub-algebra was h = Ch, and restrict-
ing the adjoint action to h, we see that sl2 (C) decomposes as

sl2 (C) = h ⊕ Ce ⊕ Cf.

It is worthwhile to, while reading the next section, think back to our work
on sl2 (C).

7.4 Cartan subalgebras


For a Lie algebra g, we say that a Lie sub-algebra h is a Cartan sub-algebra
if h is nilpotent and self-normalizing (that is, Ng (h) = h).

Example 7.8. In the case of sl2 (C) from the previous section, it is clear
that Ch is nilpotent (it is abelian). Moreover, [h, ae + bf + ch] = 2ae − 2bf
which is in Ch if and only if a = b = 0, so Nsl2 (C) (Ch) = Ch. This makes Ch a
Cartan sub-algebra of sl2 (C).

Exercise 7.9. Let h be the Lie sub-algebra of sln (C) consisting of the di-
agonal matrices (with trace zero). Show the following:

• h is a Cartan sub-algebra.

• Given u ∈ SLn (C), uhu−1 is a Cartan sub-algebra.

• Every Cartan sub-algebra of sln (C) is as above.

Now let g be a Lie algebra, and pick some x ∈ g. Define gxλ to be the
generalized λ-eigenspace of adx . We give a few equivalent formulations of
this definition:

gxλ = {y ∈ g | ∃n, (adx − λ)^n (y) = 0} = ∪_{n∈N} ker(adx − λ)^n .

37
Last updated October 26, 2014 7 REPRESENTATIONS OF LIE ALGEBRAS

It is a general fact of linear algebra that for any x,

g = ⊕_{λ∈C} gxλ .

Though we will not continue in this direction, this is the beginnings of the
construction of the Jordan canonical form for the operator adx on the vector
space g.
Let Px ∈ C[t] denote the characteristic polynomial of adx acting on g. If
we write

Px = a0,x + a1,x t + · · · + an−1,x t^{n−1} + t^n ,

note that a0,x = (−1)^n det adx = 0, since adx is not injective (as [x, x] = 0).
In fact, the smallest i for which ai,x is nonzero is simply dim gx0 , the algebraic
multiplicity of 0 as an eigenvalue of adx . More generally, we may also write Px as

Px = Π_{λ∈C} (t − λ)^{dim gxλ} .

Sometimes it will be useful to use the notation r(x) in place of dim gx0 .
That is, we define r(x) to be the smallest i such that ai,x ≠ 0, where the ai,x
are the coefficients of the characteristic polynomial Px .
Now we may define the rank of g as rank g = min{dim gx0 | x ∈ g}.
Clearly rank g is bounded by 1 ≤ rank g ≤ dim g if g ≠ 0. We say that x ∈ g
is regular if r(x) = rank g.
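As a sanity check on these definitions, here is a small numerical sketch (an illustration, not part of the notes' development) that computes r(x) for a few elements of sl2 (C) by building the matrix of adx in the basis {e, h, f } and counting the multiplicity of the eigenvalue 0. It confirms that rank sl2 (C) = 1 and that h is regular while e is not.

```python
import numpy as np

# sl2(C) in the basis (e, h, f); brackets computed from 2x2 matrices.
e = np.array([[0, 1], [0, 0]], dtype=complex)
h = np.array([[1, 0], [0, -1]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
basis = [e, h, f]

def coords(x):
    """Coordinates of a traceless 2x2 matrix in the basis (e, h, f)."""
    return np.array([x[0, 1], x[0, 0], x[1, 0]])

def ad(x):
    """Matrix of ad_x on sl2 in the basis (e, h, f)."""
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

def r(x, tol=1e-9):
    """dim of the generalized 0-eigenspace of ad_x = multiplicity of 0."""
    eigvals = np.linalg.eigvals(ad(x))
    return int(np.sum(np.abs(eigvals) < tol))

print(r(h))          # 1: ad_h has eigenvalues 2, 0, -2, so h is regular
print(r(e))          # 3: ad_e is nilpotent, so e is not regular
print(r(e + f))      # 1: e + f is also regular
```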

Theorem 7.10. If x ∈ g is regular, then gx0 is a Cartan sub-algebra of g.


In particular, Cartan sub-algebras exist.

Proof. Consider the exact sequence of vector spaces

0 → gx0 → g → g/gx0 → 0.

Because gx0 is a subalgebra, all three vector spaces are stable under the adjoint
action of gx0 , making the sequence an exact sequence of gx0 -representations as well.
Let Ω = {y ∈ gx0 | ady is invertible on g/gx0 }. Note that x ∈ Ω, by the
definition of gx0 . Moreover, Ω is open and dense in gx0 . To see this, note
that y ∈ Ω if and only if det ady 6= 0, where ady in this context means the
representation on g/gx0 . Since, in any basis for gx0 , det ady is smooth (actually
polynomial), it follows that the set of points with nonzero determinant is
dense. Since {0} is closed in C, Ω must be an open set.


Now define Ω0 = {y ∈ gx0 | ady is not nilpotent on gx0 }. Similarly if Ω0


is nonempty, it must be open and dense: For any y, we know y ∈ Ω0 if and
only if fy (t) = tn , where fy is the characteristic polynomial of ady on gx0 ,
and fy depends polynomially on y.
If Ω0 is nonempty, then Ω ∩ Ω0 ≠ ∅, since two dense open subsets of gx0 must intersect. Take some
z ∈ Ω ∩ Ω0 . Since adz is invertible on g/gx0 and not nilpotent on gx0 , the generalized
0-eigenspace gz0 is a proper subspace of gx0 , so dim gz0 < dim gx0 = rank g, contradicting
the definition of rank. Thus, Ω0 = ∅, every element of gx0 acts nilpotently on gx0 ,
and Engel’s theorem tells us that gx0 is nilpotent.
We have shown the nilpotency of gx0 . Now we need to show that
Ng (gx0 ) = gx0 . One direction of inclusion is immediate. For the other, sup-
pose z ∈ Ng (gx0 ). Then [x, z] ∈ gx0 . Since adx is nilpotent on
gx0 , there is some n for which adx^n ([x, z]) = 0. But then adx^{n+1} (z) = 0, so z ∈ gx0 .

In light of the topological flavor of the previous proof, we have the
following nice result:

Proposition 7.11. Let g be a Lie algebra, and let greg = {x ∈ g | x is regular}.
Then greg is open and dense.

Proof. Let fx (t) = b0 (x) + b1 (x)t + · · · + bn−1 (x)tn−1 + tn denote the char-
acteristic polynomial of adx acting on g. The coefficients are polynomials
bi (x) depending on x. Clearly, bi = 0 for each i < rank g. The condition
that x ∈ greg is equivalent to brank g (x) 6= 0, proving the result.

Until now we have not drawn the appropriate analogies with group theory. A
Cartan sub-algebra is nilpotent and self-normalizing. We think of Cartan
sub-algebras as an analogue of Sylow subgroups of a given group. While
Sylow subgroups need not be self-normalizing, they are always nilpotent,
and give us information about self-normalizing subgroups. Sylow subgroups
are invaluable in determining simple groups, and so it is not unreasonable to
expect Cartan sub-algebras to be useful for classifying simple Lie algebras.
More specifically, the Sylow theorems tell us that any two Sylow p-
subgroups of a given group are conjugate. One way to say this is that
for any group G with Sylow p-subgroups P and Q, there exists some inner
automorphism θ ∈ G such that θ(P ) = Q. The appropriate analogue for
Cartan sub-algebras is also true:

Theorem 7.12. Let G be the subgroup of Aut(g) generated by all e^{adx} for
x ∈ g. Let h, h0 be Cartan sub-algebras of g. Then there exists an α ∈ G
such that α(h) = h0 .


Note 7.13. It is important to recognize that in general e^{f+g} ≠ e^f ◦ e^g . Equality
does hold if f and g commute, but need not hold otherwise.
By way of analogy with group theory, the precise group-theoretic analogue
of Cartan sub-algebras is the notion of a Carter subgroup. While not every
group contains a Carter subgroup, all Carter subgroups of a given group are conjugate to
each other. We work with Sylow subgroups in place of Carter subgroups because their
existence is guaranteed by the Sylow theorems.
To give a proof, we first need a lemma:
Lemma 7.14. Let h be a Cartan sub-algebra of a Lie algebra g. Then there
exists some x ∈ h such that the action of adx on the vector space g/h is
invertible.
Proof. Since h is nilpotent, and hence solvable, by corollary 4.4 to Lie’s
theorem, there exists a full flag

0 = D0 ⊆ D1 ⊆ · · · ⊆ Dr = g/h

where dim Di = i and h(Di ) ⊆ Di .
This yields a one-dimensional representation of h on Di /Di−1 ≅ C. Such
a representation is a map λi : h → C. We wish to show that no λi is
everywhere zero.
If not, let i be minimal such that λi+1 = 0. Then ker λ1 , . . . , ker λi are
proper subspaces of h, so their union is not all of h. Thus, we may pick x ∈ h such
that λj (x) ≠ 0 for j = 1, . . . , i.
Since λi+1 (x) = 0, x acting on Di+1 /Di has 0 as an eigenvalue with
multiplicity 1. Thus, ker x 6= 0 is a subspace of dimension 1. Since the
action of x is invertible on Di ,

Di+1 = Di ⊕ ker x.

Now take any y ∈ h and v ∈ ker x. The action of y on v is ady (v) ∈ Di+1 .
Since h is nilpotent, there is some n for which adnx (y) = 0, so y(v) ∈ gx0 =
ker x. It follows that h(ker x) ⊆ ker x. This shows that the direct sum
respects the action of the representation.
Take z ∈ g such that z + h is a nonzero element of ker x. Since ker x is a
one-dimensional h-stable subspace of Di+1 on which h acts via λi+1 = 0, for every y ∈ h
we have ady (z + h) = 0 in g/h, i.e. [y, z] ∈ h. Thus z ∈ Ng (h), which is equal to h because
h is Cartan. This contradicts the choice of z, so there is no such minimal i, and each λj
is a nonzero functional. Since h cannot be a finite union of proper subspaces, we can take x ∈ h with
x ∉ ker λj for j = 1, . . . , r. Such an x will have adx invertible on g/h.


Proof of theorem. Let Ω = {x ∈ h | adx is invertible on g/h}, and let Ω0 = GΩ.
For any x ∈ Ω, the multiplicity of 0 as an eigenvalue of adx on g is simply
dim h. By lemma 7.14, Ω is nonempty, and it is open and dense in h (the
condition of being non-invertible is polynomial). Since Ω0 = GΩ is a union
of translates of Ω, Ω0 is also open and dense in g. As we noted previously,
greg is open and dense, so Ω0 ∩ greg is nonempty. Let y ∈ Ω0 ∩ greg . Then
y = α(z) for some α ∈ G and z ∈ Ω. Then z = α−1 (y) is regular since ady
and adz have the same characteristic polynomial. Thus, z ∈ Ω ∩ greg . Note
that gz0 = h, since adz is invertible on g/h and nilpotent on h.
Now we still need to check transitivity of G. Define the equivalence
relation x ∼ y if there exists an α ∈ G such that gx0 = α(gy0 ). Notice that
each equivalence class is open, but that greg is connected. Restricting the
topology to greg , we see that given x, y ∈ greg , gx0 = α(gy0 ) for some α ∈ G.
As all Cartan sub-algebras of g are of the form gx0 for some x ∈ greg , this
proves the theorem.

Corollary 7.15. Every Cartan sub-algebra of g has dimension rank g.

Proof. Left as an exercise for the reader.

For our next result, we need several definitions. First, define the cen-
tralizer of a sub-algebra h of a Lie algebra g by

Cg (h) = {x ∈ g | [x, h] = 0}.

This definition is analogous to the definition of a centralizer in group theory, in
that each is the collection of elements which commute with every element of
h. In fact, for any vector space V , a centralizer in End V denotes the same
object in the group setting and in the Lie algebra setting. This is because
the Lie bracket is given by [x, y] = xy − yx, and [x, y] = 0 if and only if x
and y commute under the group operation of composition.
We define an element x of a Lie algebra g to be semi-simple if adx is
diagonalizable.

Theorem 7.16. Let h be a Cartan sub-algebra of a semi-simple Lie algebra
g. Let β denote the Killing form on g. Then

1. β|h is non-degenerate.

2. h is abelian


3. Cg (h) = h

4. Each element of h is semi-simple.

Proof. Pick x ∈ greg such that h = gx0 . Note that

g = gx0 ⊕ ⊕_{λ∈C\{0}} gxλ ,

because C is algebraically closed, so g decomposes into the generalized eigenspaces
of adx . Furthermore, a simple computation yields [gxλ , gxµ ] ⊆ gxλ+µ .
Now, if y ∈ gxλ , then ady maps gxµ into gxλ+µ for every µ. In particular, if λ ≠ 0, then trg (ady ) =
0 (in a basis subordinate to this decomposition, the matrix of ady has no nonzero entries on
the diagonal).
Thus, gxλ is orthogonal to gxµ whenever λ + µ ≠ 0 (apply the same trace argument to ady ◦ adz ). Let us then write

g = gx0 ⊕ ⊕_{λ} (gxλ ⊕ gx−λ ),

where λ runs over representatives of (C \ {0})/{±1}. This is a decomposition of g into pairwise orthogonal
subspaces, where “orthogonality” is with respect to the Killing form β. Since g is semi-simple,
β is non-degenerate on g, and since h = gx0 is orthogonal to every other summand, β|h must be
non-degenerate. This proves part 1.
Since h is nilpotent, and hence solvable, Cartan’s criterion for solvability
says that h ⊆ [h, h]⊥ (with respect to β). Since β is non-degenerate on h, it
must be that [h, h] = 0, meaning h is abelian.
Since h is abelian, h ⊆ Cg (h) ⊆ Ng (h) = h, where the last equality is
because h is a Cartan sub-algebra.
For any x ∈ h, putting adx in Jordan canonical form, we may express
x = n + s such that n is nilpotent (equivalently, adn is nilpotent) and s is
semi-simple. Pick an arbitrary y ∈ h. Then if [x, y] = 0, we can see that
[n, y] = [s, y] = 0, since y ∈ Cg (h). Thus, n, s ∈ Cg (h).
As ady ◦ adn is a nilpotent endomorphism of g (the two commute and adn is nilpotent),
trg (ady ◦ adn ) = 0 for every y ∈ h. Since n ∈ Cg (h) = h and β|h is non-degenerate, n = 0.
That is, x = s is semi-simple.

Let g be a semi-simple Lie algebra, and let h be a Cartan sub-algebra.
For each α ∈ h∗ , define the subspace

gα = {x ∈ g | [y, x] = α(y) · x, ∀y ∈ h}.

Another way to say this is that gα is the collection of all x for which ady acts
as multiplication by α(y) for every y ∈ h. Let R = {α ∈ h∗ | gα 6= 0, α 6= 0}.


Theorem 7.17. Let g be a semi-simple Lie algebra, and let h be a Cartan
sub-algebra. Then, with gα and R defined as above,

g = h ⊕ ⊕_{α∈R} gα .
Proof. Clearly g = h + Σ_{α∈R} gα , so it suffices to show that this sum is direct.
Note that h = g0 = Cg (h). From theorem 7.16, the sub-algebra h has the
property that each adx is diagonalizable for x in h, and that all such maps
commute. In other words, adh = {adx | x ∈ h} is a family of commuting
operators on the finite dimensional vector space g, and so they are simul-
taneously diagonalizable (this is a standard spectral-type theorem of linear
algebra ). The result follows.

Example 7.18. Let h be the sub-algebra of sln (C) consisting of diagonal
matrices with trace zero. It is easy to check that h is a Cartan sub-algebra
of sln (C). We have the exact sequence of vector spaces
0 → h → Cn → C → 0,
where the map Cn → C is given by taking the sum of the coordinates in a
specified basis. Let us name such a basis {e1 , . . . , en } for Cn . Dualizing
yields
0 → C → Cn → h∗ → 0,
whereby we can write

h∗ = (⊕_{i=1}^{n} Ce∗i ) / C(e∗1 + · · · + e∗n ) = ⊕_{i=1}^{n−1} Cαi,i+1 ,

with αi,i+1 = e∗i − e∗i+1 . More generally, let αi,j = e∗i − e∗j .
We aim to show that R = {αi,j | i, j ∈ [n], i ≠ j} gives the decomposition prescribed above
for sln (C).
Notice that for i ≠ j,

gαi,j = { x ∈ sln (C) | [diag(a1 , . . . , an ), x] = (ai − aj ) · x for every diagonal element of h } = C · Ei,j ≠ 0,

where Ei,j is the elementary matrix with a 1 in position (i, j) and zeroes
everywhere else. Each such space is 1-dimensional, which gives n² − n di-
mensions. Since dim h = n − 1, and dim sln (C) = n² − 1, we already have
enough subspaces. So by the decomposition from theorem 7.17, we have
found all possible functionals in R.
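The following small sketch (an illustration, not from the notes) checks this computation for n = 3: it verifies numerically that [diag(a1 , a2 , a3 ), Ei,j ] = (ai − aj ) Ei,j , which is exactly the statement that Ei,j spans the root space gαi,j .

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# A random traceless diagonal element of the Cartan subalgebra h of sl_n(C).
a = rng.normal(size=n)
a -= a.mean()                      # enforce trace zero
H = np.diag(a)

def E(i, j):
    """Elementary matrix E_{i,j} (0-indexed)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        X = E(i, j)
        lhs = H @ X - X @ H              # [H, E_{i,j}]
        rhs = (a[i] - a[j]) * X          # alpha_{i,j}(H) * E_{i,j}
        assert np.allclose(lhs, rhs)
print("each E_{i,j} is a simultaneous eigenvector for ad of the Cartan subalgebra")
```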


8 Root systems
8.1 Introduction
We now turn away from Lie algebras briefly to discuss a combinatorial object
known as a root system. We will classify all irreducible root systems, and
then use this classification to classify the simple Lie algebras.
Let V be a finite dimensional vector space over R, and let Φ ⊆ V be a
finite subset of V . We say that Φ is a root system of V if
1. span Φ = V

2. For each α ∈ Φ, there exists a reflection sα : V → V such that

• sα (α) = −α
• sα (Φ) = Φ
• sα (β) − β ∈ Zα for all β ∈ Φ.

3. If α ∈ Φ, then 2α 6∈ Φ
It is worth recalling the definition of a reflection. A reflection on V
is an endomorphism T ∈ GL(V ) such that T 2 = id, and ker(T − id) is a
hyperplane (subspace of codimension 1) of V . It should be clear that in an
appropriate basis, T can be represented by the diagonal matrix diag(1, . . . , 1, −1).
What information can we glean about these root systems? In fact, they
have quite rigid structure. First, we notice that for each α ∈ Φ, sα is unique.
If s, s0 are two such reflections, then ss0 (α) = α. Moreover, both s and s0 act as
the identity on V /Rα, so every eigenvalue of ss0 equals 1, and in an appropriate basis
ss0 is upper triangular with 1's on the diagonal.
If any off-diagonal entry is nonzero, then ss0 has infinite order.
However, ss0 permutes the finite set Φ, which spans V , so ss0 has
finite order and hence is the identity. Thus, s = s0 .


Now that we know sα is unique, for each α ∈ Φ there exists a unique
α̌ ∈ V ∗ such that sα (v) = v − α̌(v)α. For such an α̌, ker α̌ = ker(sα − id),
and α̌(α)α = α − sα (α) = 2α, so α̌(α) = 2.
Suppose Φ is a root system, α, β ∈ Φ, and Rα = Rβ. Exchanging α and β and replacing
α by −α if necessary (note −α = sα (α) ∈ Φ), we may write α = cβ with 0 < c ≤ 1;
here c ≠ 0 because 0 ∉ Φ by the third condition. If α ≠ β, then c < 1, and the axiom
sβ (α) − α ∈ Zβ gives sβ (α) − α = −β̌(α)β = −2cβ, so 2c ∈ Z and hence c = 1/2.
But then β = 2α ∈ Φ, contradicting the third condition. Thus the only roots
proportional to α are ±α.

Example 8.1. Figures 1 through 5 are examples of all 1- and 2-dimensional
root systems up to some equivalence we have yet to make precise (essentially
uniqueness up to an orthogonal transformation). We have yet to prove that
these are all such root systems.

Figure 1: Root system A1

Figure 2: Root system A1 × A1

Figure 3: Root system A2


Figure 4: Root system B2

Figure 5: Root system G2

If we have a root system Φ with V = V1 ⊕ V2 and Φ = Φ1 ⊔ Φ2 , where
each Φi is a root system on Vi , we say that Φ is the sum of the
root systems Φ1 and Φ2 . If no such nontrivial decomposition exists (i.e., with
dim Vi > 0 for i = 1, 2), we say that the root system Φ is irreducible. For
example, A1 is irreducible, whereas A1 × A1 is not. All other 2-dimensional
root systems are irreducible.
Fix a root system Φ of V . Let WΦ denote the subgroup of GL(V )
generated by the set of reflections {sα | α ∈ Φ}. WΦ is called the Weyl
group of Φ. When the root system is evident, we often drop the subscript,
and simply write W .

Proposition 8.2. For any root system Φ, WΦ is finite.

Proof. From the definition of root systems, W (Φ) = Φ, so we get the map
ψ : W → SymΦ defined by

ψ(sα ) : β → sα (β),

where SymΦ is the group of symmetries of the set Φ. Because span Φ = V ,
ψ must be injective. Hence, we can bound the size of W by |Φ|!.

Corollary 8.3. There exists a symmetric positive-definite bilinear form
(−, −) on V which is W -invariant, for any Weyl group W .

Proof. The existence of a symmetric positive-definite bilinear form is not
the content of the theorem: the standard Euclidean inner-product ⟨−, −⟩
suffices. Then define the bilinear form (−, −) by

(v1 , v2 ) = (1/|W |) Σ_{w∈W} ⟨wv1 , wv2 ⟩.

Note the importance of W being finite. The finiteness condition allows
us to be sure that the sum is well-defined. This proof should remind us of
Maschke's theorem for representations of finite groups.
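As a concrete illustration of this averaging trick (a sketch, not part of the notes), the following code builds the Weyl group of A2 from its two simple reflections, starts from an arbitrary (non-invariant) positive-definite form, and averages it over the group; the result is checked to be W -invariant. The simple roots (1, −1, 0) and (0, 1, −1) in R³ are one standard realization of A2 and are an assumption of this sketch.

```python
import numpy as np
from itertools import product

def reflection(alpha):
    """Matrix of s_alpha(v) = v - 2 (alpha, v)/(alpha, alpha) * alpha."""
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(len(alpha)) - 2.0 * np.outer(alpha, alpha) / alpha.dot(alpha)

# Simple roots of A2 realized in R^3.
s1, s2 = reflection([1, -1, 0]), reflection([0, 1, -1])

# Generate the (finite) group by closing {s1, s2} under multiplication.
group, frontier = [np.eye(3)], [np.eye(3)]
while frontier:
    new = []
    for w, s in product(frontier, (s1, s2)):
        ws = w @ s
        if not any(np.allclose(ws, g) for g in group):
            group.append(ws)
            new.append(ws)
    frontier = new
print("order of W:", len(group))        # 6 for A2

# Average an arbitrary positive-definite form M over the group.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
M = A @ A.T + 3 * np.eye(3)             # some non-invariant SPD matrix
M_avg = sum(w.T @ M @ w for w in group) / len(group)

# Invariance: v1^T (w^T M_avg w) v2 = v1^T M_avg v2 for every w in W.
for w in group:
    assert np.allclose(w.T @ M_avg @ w, M_avg)
print("averaged form is W-invariant")
```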
With such a bilinear form, we can see that for a root system Φ and
α ∈ Φ with associated reflection sα ,

sα (v) = v − 2 (α, v)/(α, α) · α.
For α, β ∈ Φ, let aαβ = 2 (α, β)/(α, α). It should be clear from the definition of
a root system (and sα ) that aαβ ∈ Z. In particular, aαα = 2.
If we let θ denote the angle between α and β, we have (α, β) = |α| |β| cos θ.
Now we can compute aαβ aβα = 4 cos² θ ∈ Z, which constrains the angles
between vectors in a root system to one of very few choices. If aαβ aβα = 4,
then β = ±α. Otherwise one of aαβ and aβα must be ±1 or 0, and up to
symmetry, the only options are given in the table in figure 6.


aαβ     aβα     θ       relation between lengths
 0       0      π/2     (undetermined)
 1       1      π/3     |α| = |β|
−1      −1      2π/3    |α| = |β|
 1       2      π/4     |α| = √2 |β|
−1      −2      3π/4    |α| = √2 |β|
 1       3      π/6     |α| = √3 |β|
−1      −3      5π/6    |α| = √3 |β|

Figure 6: All possibilities for lengths of and angles between two vectors in
an arbitrary root system

8.2 Root system bases


Let ∆ ⊆ Φ, and let

Φ+ = Φ ∩ Σ_{α∈∆} R≥0 α ,
Φ− = Φ ∩ Σ_{α∈∆} R≤0 α .

That is, Φ+ is the set of those vectors in Φ which can be expressed
as a non-negative linear combination of vectors from ∆, and Φ− is the
collection of vectors in Φ which can be expressed as a non-positive linear
combination of vectors from ∆.
If Φ = Φ+ ⊔ Φ− , we say that ∆ is a basis for the root system. It is
immediate that any basis for a root system spans the entire vector space, as
it spans the root system, which in turn spans the vector space. Moreover, if
there were a linear dependence among ∆, then one could add or subtract
terms from each other to take a vector in Φ+ and express it with the coeffi-
cient of some basis vector in ∆ being negative, contrary to the definition of
Φ+ . Thus, a basis for a root system is a basis for the entire vector space as
well.
As one might hope, every root system has a basis. In fact, we can do
better than this, but to do so, we need some definitions. For a root system
Φ, let t ∈ V ∗ be a linear functional, and let Φ+_t = {α ∈ Φ | t(α) > 0}.
Similarly, define Φ−_t = {α ∈ Φ | t(α) < 0}. We say that a root α ∈ Φ+_t is
decomposable if there exist β, γ ∈ Φ+_t such that α = β + γ. Otherwise,
we say that α is indecomposable. Let

∆t = {α ∈ Φ+_t | α is indecomposable}.


Then we claim that every root system has a basis and that every basis for
a root system is ∆t for some t ∈ V ∗ . To prove this result, we first need a
lemma.

Lemma 8.4. Let α, β ∈ Φ such that Rα 6= Rβ, and assume (α, β) > 0.
Then β − α ∈ Φ.

Proof. Since Rα ≠ Rβ and (α, β) > 0, the product aαβ aβα lies in {1, 2, 3}, so one
of aαβ and aβα equals 1. If aαβ = 1, then sα (β) = β − aαβ α = β − α ∈ Φ. If instead
aβα = 1, then sβ (α) = α − β ∈ Φ, and hence β − α = −(α − β) ∈ Φ as well, since
sγ (γ) = −γ ∈ Φ for any root γ.

Theorem 8.5. Let Φ be a root system over a vector space V . For each
t ∈ V ∗ such that t(α) 6= 0 for all α ∈ Φ, ∆t is a basis. Moreover, every basis
for a root system is of this form.

Proof. It should be immediately obvious that for any t ∈ V ∗ such that
t(α) ≠ 0 for all α ∈ Φ, we have Φ = Φ+_t ⊔ Φ−_t . We simply need to show that
Φ+_t is the subset of Φ consisting of roots which are non-negative integer linear
combinations of roots from ∆t . It is clear that any such combination lying in Φ belongs
to Φ+_t , so it suffices to show the other direction. Let α ∈ Φ+_t , and suppose for the sake
of contradiction that α is not a non-negative integer linear combination of roots from ∆t . We
may pick such an α which minimizes t(α) (possible since Φ is finite). Certainly α ∉ ∆t . Then α is
necessarily decomposable as α = β + γ for β, γ ∈ Φ+_t . But t(β) and t(γ) are
both less than t(α), so β and γ are non-negative integer linear combinations of roots in
∆t , meaning that α is as well. Contradiction.
Now let ∆ be an arbitrary basis for Φ. For each α ∈ ∆, let α∗ ∈ V ∗ denote
the corresponding dual basis vector, so that α∗ (α) = 1 and α∗ (β) = 0 for β ∈ ∆ \ {α}.
Then let t = Σ_{α∈∆} α∗ .
Clearly, for each α ∈ ∆, t(α) > 0, and so Φ+ ⊆ Φ+_t . Similarly, Φ− ⊆ Φ−_t ,
and so equality must hold in each case. It now suffices to show that ∆ = ∆t .
Let α ∈ ∆ and assume for the sake of contradiction that α ∉ ∆t . Then α is
decomposable: there are β, γ ∈ Φ+ with α = β + γ. Write

β = Σ_{αi ∈∆} bi αi ,     γ = Σ_{αi ∈∆} ci αi ,

for bi , ci ∈ R≥0 . Then α = Σ_i (bi + ci )αi , so bi + ci = 0 (and hence bi = ci = 0)
whenever αi ≠ α, while bj + cj = 1 for αj = α. Thus β and γ are non-negative
multiples of α; since the only roots proportional to α are ±α, one of them would have
to equal α and the other 0, which is impossible since 0 ∉ Φ. It must be, then,
that ∆ ⊆ ∆t . Since they are both bases for V , they have the same size, and
are therefore equal.
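The construction of ∆t is completely algorithmic, and the following sketch (an illustration; the realization of B2 by the roots ±e1 , ±e2 , ±e1 ± e2 is an assumption of this example) computes Φ+_t and the indecomposable roots ∆t for a generic functional t.

```python
import numpy as np
from itertools import product

# The root system B2 realized in R^2.
roots = [np.array(v) for v in
         [(1, 0), (-1, 0), (0, 1), (0, -1),
          (1, 1), (1, -1), (-1, 1), (-1, -1)]]

# A functional t in V* that vanishes on no root.
t = np.array([2.0, 1.0])

pos = [a for a in roots if t @ a > 0]

def decomposable(a):
    """Is a = b + c for some b, c in the positive system?"""
    return any(np.array_equal(a, b + c) for b, c in product(pos, pos))

delta_t = [a for a in pos if not decomposable(a)]
print("positive roots:", [tuple(a) for a in pos])
print("basis Delta_t :", [tuple(a) for a in delta_t])
# With t = (2, 1) this prints the simple roots (0, 1) and (1, -1).
```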

Let Φ be a root system, and ∆ a basis. We call the matrix

C = (aαβ )α,β∈∆

the Cartan matrix for Φ. Up to reordering its rows and columns, C is
independent of the basis chosen (this should not be obvious, and we will
prove it later). Here are several examples. The Cartan matrix for the root
system of type A1 is the 1 × 1 matrix with single entry 2. For two-dimensional
root systems:
   
2 0 2 −1
0 2 −1 2

Figure 7: Cartan matrix for Figure 8: Cartan matrix for


root system of type A1 × A1 root system A2

   
2 −1 2 −3
−2 2 −1 2

Figure 9: Cartan matrix for Figure 10: Cartan matrix for


root system of type B2 root system C2

Moreover, a Cartan matrix determines the root system up to isomor-
phism. Given a Cartan matrix, we can construct a semi-simple Lie algebra
(see section 9.2), which will also be uniquely determined by the Cartan
matrix.
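Computing a Cartan matrix from a choice of simple roots is a one-line formula, aαβ = 2(α, β)/(α, α). The sketch below (an illustration; the coordinates used for B2 and G2 are the realizations appearing elsewhere in these notes) reproduces the matrices of figures 9 and 10.

```python
import numpy as np

def cartan_matrix(simple_roots):
    """C[i, j] = 2 (alpha_i, alpha_j) / (alpha_i, alpha_i)."""
    S = np.array(simple_roots, dtype=float)
    G = S @ S.T                                  # Gram matrix of the roots
    return np.array([[2 * G[i, j] / G[i, i] for j in range(len(S))]
                     for i in range(len(S))])

# B2: alpha_1 = e1 - e2 (long), alpha_2 = e2 (short).
print(cartan_matrix([[1, -1], [0, 1]]))
# [[ 2. -1.]
#  [-2.  2.]]

# G2, realized in R^3 with integer coordinates as in the notes.
print(cartan_matrix([[1, -1, 0], [-1, 2, -1]]))
# [[ 2. -3.]
#  [-1.  2.]]
```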
Theorem 8.6. Let ∆ be a basis for a root system Φ in a vector space V .
Let W be the Weyl group for Φ, and let W0 = hsα | α ∈ ∆i ≤ W . Then,

1. For all t ∈ V ∗ , there exists some w ∈ W0 such that w(t)(α) ≥ 0 for
every α ∈ ∆.

2. For each basis ∆0 of Φ, there exists a w ∈ W0 such that ∆0 = w(∆).

3. For each α ∈ Φ, there exists some w ∈ W0 such that w(α) ∈ ∆.


4. W = W0 .

Proof. Let β ∈ ∆. We claim that sβ (Φ+ \ {β}) = Φ+ \ {β}. Indeed, if
γ ∈ Φ+ \ {β}, write
γ = Σ_{α∈∆} mα α,
where each mα ≥ 0. Since γ ≠ ±β, there is some α0 ∈ ∆ \ {β} such that
mα0 ≠ 0. Then

sβ (γ) = γ − 2 (β, γ)/(β, β) · β = Σ_{α∈∆\{β}} mα α + (mβ − 2 (β, γ)/(β, β)) β ∈ Φ.

Since one coefficient is positive (namely mα0 > 0), by the disjointness of Φ+
and Φ− , all coefficients must be non-negative, and hence sβ (γ) ∈ Φ+ .
Moreover, sβ (γ) ≠ β because γ ≠ −β.
Define the vector ρ = (1/2) Σ_{α∈Φ+} α. Then

sβ (ρ) = (1/2) Σ_{α∈Φ+ \{β}} α + (1/2) sβ (β) = ρ − β.

Now for any t ∈ V ∗ , take w ∈ W0 (which is finite) such that w(t)(ρ) is
maximal. Then (sβ w)(t)(ρ) = w(t)(sβ (ρ)) = w(t)(ρ) − w(t)(β), so it must be
that w(t)(β) ≥ 0, lest w(t)(ρ) not be maximal. This proves part 1.
Let t ∈ V ∗ be such that t(γ) ≠ 0 for every γ ∈ Φ, and such that ∆t = ∆0 .
Such a t exists by theorem 8.5. From part 1, there exists some w ∈ W0 such
that w(t)(α) ≥ 0 for all α ∈ ∆, and in fact w(t)(α) > 0 since w(t) vanishes on no root.
If α ∈ Φ+ , then t(w−1 (α)) = w(t)(α) > 0, so w−1 (α) ∈ Φ+_t = Φ0+ . Thus, w−1 (Φ+ ) ⊆ Φ0+ .
Since |Φ+ | = |Φ0+ | (each is half of |Φ|) and w−1 is injective, w−1 (Φ+ ) = Φ0+ , and hence
w−1 (∆) = ∆0 , proving part 2.
Let α ∈ Φ, and let

Ω = V ∗ \ ⋃_{β∈Φ\{±α}} ker evβ ,

where evβ ∈ V ∗∗ is the map evβ : ϕ 7→ ϕ(β), evaluation of the
functional ϕ at β. Pick some t ∈ ker evα with t ∈ Ω (possible since a hyperplane
cannot be covered by finitely many proper subspaces of itself). Then t(α) = 0,
but t(β) ≠ 0 for all β ≠ ±α.
Pick ε > 0 such that ε < |t(β)| for every β ∈ Φ \ {±α}. Let α̌ ∈ V ∗ be
given by α̌ : v 7→ 2 (α, v)/(α, α), and define

t0 = t + (ε/8) α̌.


Then t0 (α) = (ε/8) α̌(α) = ε/4 > 0, and

t0 (β) = t(β) + ε · α̌(β)/8.

Since |α̌(β)| ≤ 3 for β ∈ Φ \ {±α}, we see that t0 (β) ≠ 0, and in fact |t0 (β)| > ε/2.
By part 2, there exists some w ∈ W0 such that w(∆t0 ) = ∆. Now α is indecomposable
with respect to t0 : t0 (α) = ε/4 is smaller than t0 (γ) for every other γ ∈ Φ+_{t0} , so a
decomposition α = β + γ would force t0 (α) > ε. Thus α ∈ ∆t0 , and w(α) ∈ w(∆t0 ) = ∆.
This proves part 3.
0

Lastly, take any β ∈ Φ. By part 3, there exists some w ∈ W0 such that
w(β) = α ∈ ∆. Then sβ = w−1 sα w ∈ W0 . Since a generating set for W is
contained in W0 , it must be that W0 = W .

Proposition 8.7. Let α, β ∈ ∆. Then aαα = 2, and if α ≠ β, then aαβ ∈
{0, −1, −2, −3}.

Proof. We have already seen that aαα = 2. By lemma 8.4, if α, β ∈ Φ are such
that aαβ > 0 and Rα ≠ Rβ, then α − β ∈ Φ. In that case, α and β cannot both
lie in a basis ∆ for Φ, because α − β (whose expression in terms of ∆ would have
both a positive and a negative coefficient) lies in neither Φ+ nor Φ− . Thus, all
off-diagonal entries must be non-positive, and the table of possibilities for aαβ then
forces aαβ ∈ {0, −1, −2, −3}.

Proposition 8.8. Up to reordering rows and columns, the Cartan matrix
C of Φ is independent of the choice of basis ∆.

Proof. Let ∆, ∆0 be two bases for Φ. By theorem 8.6, there exists some
w ∈ W such that w(∆) = ∆0 . Then

awα,wβ = 2 (wα, wβ)/(wα, wα) = 2 (α, β)/(α, α) = aαβ ,

since (−, −) is W -invariant. That is, w induces a reordering of the rows and
columns.

Proposition 8.9. Let Φ ⊆ V be a root system with basis ∆, and let Φ0 ⊆ V 0
be a root system with basis ∆0 . Suppose we have a bijection f : ∆ → ∆0 such
that aαβ = af (α),f (β) for all α, β ∈ ∆. Then Φ ≅ Φ0 .

Proof. Extend f linearly to an isomorphism V → V 0 . We know that Φ = {w(α) | w ∈ W, α ∈ ∆},
so it suffices to check that f ◦ sα = sf (α) ◦ f for α ∈ ∆. Indeed, for β ∈ ∆,

f (sα (β)) = f (β − aαβ α) = f (β) − af (α),f (β) f (α) = sf (α) (f (β)).


8.3 Coxeter and Dynkin Diagrams


Let Φ be a root system with a basis ∆. We define the Coxeter diagram
to be a graph (where some edges are doubled or tripled) whose vertex set
is ∆, and such that an edge occurs between α and β with multiplicity k if
aαβ aβα = k. A priori, we know that aαβ aβα ∈ {0, 1, 2, 3, 4}, but indeed, if
the product is 4, then α = ±β, and hence not both of α and β are in ∆.
Such diagrams lose some of the information about the underlying root
system, so we encode more information in what is known as a Dynkin
diagram. A Dynkin diagram is a Coxeter diagram where the edges are
oriented according to the size of the vectors. We orient an edge from α to
β if |α| > |β|. If |α| = |β|, we give no orientation. Note that the only edges
which are oriented are those which are doubled or tripled. We can see this
by referring to the table in figure 6. Here are the Dynkin diagrams for all
2-dimensional root systems:

Figure 11: Dynkin diagrams for all 2-dimensional root systems: A1 × A1
(two vertices with no edge), A2 (a single edge), B2 = C2 (a doubled,
oriented edge), and G2 (a tripled, oriented edge).

You should notice that A1 × A1 is the only 2-dimensional root system
which is reducible, and that its Dynkin diagram is the only one which is
disconnected. This is not a coincidence. If a Dynkin diagram has more
than one connected component, then each basis vector in one component is
perpendicular to every basis vector in another, and hence the root system is reducible. Sim-
ilarly, if the diagram is connected, we cannot split the basis into components
perpendicular to each other, so the root system must be irreducible.
We now state the classification theorem for root systems via their Dynkin
diagrams.
Theorem 8.10. Every irreducible root system has one of the following
Dynkin diagrams:

An , Bn , Cn , Dn , E6 , E7 , E8 , F4 , G2 .

(An , Dn , E6 , E7 , and E8 involve only single edges; Bn , Cn , and F4 each contain
exactly one doubled edge; G2 consists of a single tripled edge.)

The index n denotes the number of vertices. Notice that for n < 2, Bn
and Cn are not defined. For n < 3, Dn is not defined. Also, B2 is isomorphic
to C2 , so when we listed the 2-dimensional root systems we did not need to
mention C2 .
The diagrams An , Bn , Cn , and Dn are infinite series of diagrams and
correspond to Lie algebras of classical importance. For instance, the Lie
algebra of type An is isomorphic to sln+1 (C). The other Lie algebras can be
found in the table below:
Type    Lie algebra     Dimension
An      sln+1 (C)       n² + 2n
Bn      so2n+1 (C)      2n² + n
Cn      sp2n (C)        2n² + n
Dn      so2n (C)        2n² − n

Figure 12: Table of classical Lie algebras

The diagrams E6 , E7 , E8 , F4 and G2 all correspond to exceptional Lie
algebras whose constructions don't generalize to arbitrarily large dimension.


8.4 Classification
Through a series of lemmas about inadmissible diagrams, we will arrive at
the classification theorem for admissible Dynkin diagrams (and hence root
systems and then eventually simple Lie algebras). We will then explicitly
construct root systems of all connected Dynkin diagrams we have not ruled
out.
Our classification of Dynkin diagrams will actually be a classification of
Coxeter diagrams. As we will see, if one diagram is admissible, then reversing
any collection of oriented edges results in an admissible diagram. This is a somewhat
misleading statement (though accurate), because as we will see, there is
at most only ever one oriented edge in an admissible diagram. Reversing
this edge also results in an admissible diagram.

Lemma 8.11. Let D be a Dynkin diagram, and let D0 denote an induced subdiagram of
D (the full subdiagram on a subset of the vertices). Then D0 is also a Dynkin diagram.

Proof. Let D0 be such a subdiagram of an admissible Dynkin diagram D, cor-
responding to a root system Φ with basis ∆. Taking all roots in Φ
which are linear combinations of the elements of ∆ corresponding to vertices of D0
yields a root system whose Dynkin diagram is D0 .

Lemma 8.12. Let D be a Dynkin diagram on n vertices. There are at most


n − 1 edges (where an edge with multiple lines counts as a single edge).

Proof. Identify each vertex of D with the corresponding simple root, rescaled to unit
length. If u, v ∈ V (D), the vertex set of D, with uv ∈ E(D), the edge set of
D, then 4(u, v)² = auv avu ∈ {1, 2, 3}, so 2(u, v) ≤ −1. Then

0 < ( Σ_{v∈V (D)} v , Σ_{v∈V (D)} v ) = n + 2 Σ_{uv∈E(D)} (u, v) ≤ n − |E(D)|,

implying there are at most n − 1 edges.

Corollary 8.13. Dynkin diagrams are acyclic.

Proof. If there were a cycle, then the induced subdiagram on its vertices would be an
admissible Dynkin diagram with at least as many edges as vertices. Contradiction.

Lemma 8.14. Let D be a Dynkin diagram. No vertex has weighted degree
larger than 3 (where each edge counts with its multiplicity).


Proof. If there were such a vertex v, then the subdiagram induced by v and
its neighbors would yield a problem. Indeed, the neighbors of v are pairwise
non-adjacent in the induced subdiagram, lest there be a cycle. Let f (u) denote
the multiplicity of the edge between u and v, so that (u, v)² = f (u)/4. Then

1 = (v, v) > Σ_{u∈N (v)} (u, v)² = (1/4) Σ_{u∈N (v)} f (u),

since the vectors in N (v) are pairwise perpendicular and v is not in their span. If the weighted
degree of v is 4 or larger, we arrive at a contradiction.

Lemma 8.15. If D is a Dynkin diagram and e a single edge, the edge-contracted
graph D/e is also a Dynkin diagram.

Proof. Let u and v be the basis roots corresponding to the vertices of the
edge e. Let x = u + v, and let D0 = D/e (the new vertex is x). Since e is a single
edge, 2(u, v) = −1, so

(x, x) = (u + v, u + v) = (u, u) + 2(u, v) + (v, v) = 1 − 1 + 1 = 1.

Moreover, if uy is an edge in D, then y is not adjacent to v (there are no cycles),
so (x, y) = (u, y) + (v, y) = (u, y), and the edge between x and y in D0 is of the
same type as it was in D. Similarly for when vy is an edge in D.

Corollary 8.16. G2 is the only Dynkin diagram with a triple edge.

Strictly speaking, we don't even know yet that G2 is admissible (unless
you recall the root system from section 8.1 and see that it has the associated
Dynkin diagram). Below we have a table constructing root systems for all
admissible diagrams, so we can be sure that all such diagrams really do
exist.

Corollary 8.17. No Dynkin diagram has more than one doubled edge.

Lemma 8.18. The diagram

x1 — x2 ⇒ x3 — x4 — x5

(a path on five vertices whose edge between x2 and x3 is doubled) is inadmissible.


Proof. Consider u = x1 + 2x2 and v = 3x3 + 2x4 + x5 . Then kuk² =
1² + 2² − 1 · 2 = 3 and kvk² = 1² + 2² + 3² − 1 · 2 − 2 · 3 = 6. We also have
(u, v) = −2 · 3/√2 = −6/√2, so (u, v)² = 18 = kuk² kvk². But the Cauchy–Schwarz
inequality guarantees that

kuk² kvk² > (u, v)²,

since the vectors are not linearly dependent. This is a contradiction.

Corollary 8.19. The only admissible diagram with a doubled edge not hav-
ing any leaf nodes is F4 .

Corollary 8.20. The only admissible diagrams with a doubled edge are Bn ,
Cn , and F4 .

Proof. We have already limited ourselves to diagrams with exactly one dou-
bled edge. If that doubled edge contains no leaves of the tree, it must be F4 .
Now it suffices to argue that if the doubled edge is a leaf edge, the diagram
must be a path (have no vertices of degree ≥ 3). This is clear, because if
there was such a vertex, let e denote the doubled edge, and let v denote the
closest vertex of degree larger than or equal to 3 (why is this well-defined?).
Then contract the path from e to v to obtain a vertex of weighted degree at
least 4, yielding a contradiction.

We have classified all admissible Dynkin diagrams with a multiple edge.
We now wish to classify the diagrams where all edges are single edges.

Lemma 8.21. There is at most one vertex with degree 3

Proof. If there were two such vertices, find two such vertices u and v whose
distance is minimal. Contract all of the edges between them to yield a vertex
of degree 4. This contradicts our ability to contract single edges and obtain
admissible diagrams.

Corollary 8.22. All admissible diagrams with only single edges are of the
form Tpqr , where Tpqr has a central vertex v and three “legs” of lengths p − 1,
q − 1, and r − 1. That is, Tpqr is given by

(the graph with central vertex v and three attached paths xp−1 — · · · — x1 ,
yq−1 — · · · — y1 , and zr−1 — · · · — z1 , where x1 , y1 , and z1 are the vertices
adjacent to v).

Lemma 8.23. If Tpqr is admissible, then

1/p + 1/q + 1/r > 1.    (∗)
Pp−1 Pq−1
Proof. Weight the vertices of each leg linearly, increasing toward the central vertex:
set X = Σ_{i=1}^{p−1} (p − i) xi (so the vertex x1 adjacent to v has coefficient p − 1), and
define Y and Z analogously from the other two legs; if a leg is empty, the corresponding
vector is taken to be zero. Treating the simple roots as unit vectors with 2(xi , xi+1 ) = −1
along a leg, one computes (X, X) = p(p − 1)/2 and (X, v) = −(p − 1)/2, so that
(X, v)² /(X, X) = (1 − 1/p)/2, and similarly for Y and Z. Moreover X, Y , and Z are
pairwise orthogonal and v is not in their span, so

1 = (v, v) > (X, v)²/(X, X) + (Y, v)²/(Y, Y ) + (Z, v)²/(Z, Z) = (3 − (1/p + 1/q + 1/r))/2,

which rearranges to (∗).

Corollary 8.24. The only admissible diagrams of type Tpqr are An , Dn ,


E6 , E7 , and E8 .
Proof. If Tpqr is admissible, we may assume p ≥ q ≥ r without loss of generality.
If r = 1, then (∗) holds for any p and q, and Tp,q,1 = Ap+q−1 . If r = q = 2, then
(∗) holds for any p ≥ 2, and Tp,2,2 = Dp+2 .
If r ≥ 3, then p, q, r ≥ 3, and (∗) fails to hold. Thus, we may assume r = 2.
We have already seen the case q = 2. If q ≥ 4, then p ≥ q ≥ 4 and again (∗) fails.
Thus, it must be that q = 3. This leaves only the possibilities p = 3, 4, 5,
corresponding to E6 , E7 , and E8 , respectively.
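The inequality (∗) is easy to explore by brute force. The short sketch below (illustrative only) enumerates the triples p ≥ q ≥ r satisfying 1/p + 1/q + 1/r > 1 up to a cutoff, recovering exactly the two infinite families and the three exceptional solutions found above.

```python
from fractions import Fraction

solutions = []
LIMIT = 30                      # cutoff for the unbounded parameter p
for r in range(1, LIMIT + 1):
    for q in range(r, LIMIT + 1):
        for p in range(q, LIMIT + 1):
            if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1:
                solutions.append((p, q, r))

# Triples with r = 1 (type A) or (q, r) = (2, 2) (type D) occur for every p.
exceptional = [(p, q, r) for (p, q, r) in solutions
               if r != 1 and (q, r) != (2, 2)]
print(exceptional)              # [(3, 3, 2), (4, 3, 2), (5, 3, 2)] -> E6, E7, E8
```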

Now it suffices to show that the rest of these diagrams are Dynkin dia-
grams of viable root systems. We provide coordinates for the simple roots
in the table below. You may notice that some matrices are not square. For
instance, for G2 we provide a basis for the root system in 3 dimensions. Of
course, the matrix has rank two. We use three dimensions to achieve nicer
(in this case, integer) coordinates.
  1 −1
1 −1  1 −1 
 1 −1  
.. ..

An Bn
   
 .. ..   . . 
 . .   
 1 −1 
1 −1
1
   
1 −1 1 −1

 1 −1 


 1 −1 

Cn
 .. .. 
Dn
 .. .. 

 . . 


 . . 

 1 −1   1 −1 
2 1 1


F4 : α1 = e1 − e2 , α2 = e2 − e3 , α3 = e3 , α4 = −(e1 + e2 + e3 + e4 )/2.
G2 : α1 = (1, −1, 0), α2 = (−1, 2, −1).
E6 , E7 , E8 : the explicit coordinate matrices are omitted here; as with the classical types,
the simple roots can be taken with entries in {0, ±1, ±1/2} (see [FH] for one standard choice).


9 Semi-simple Lie algebra construction


9.1 Cartan sub-algebra construction
Let h be a Cartan sub-algebra of a semi-simple Lie algebra g, and let α ∈
h∗ = hom(h, C). Define

gα = {x ∈ g | ∀y ∈ h, [y, x] = α(y)x}.

We refer to α as an eigenvalue for obvious reasons, though strictly speaking


it is not an eigenvalue of ady (it’s not a constant). Regardless, the notation
is useful, and yields the correct intuition. Likewise, we say that gα is the
eigenspace associated with α, and it’s members are the eigenvectors with
eigenvalue α. Let Φ = {α ∈ h∗ | α ≠ 0, gα ≠ 0}. It should be clear that,
as a vector space, g = h ⊕ ⊕_{α∈Φ} gα . Moreover, Φ is a root system, and every
root system gives rise in this way to a semi-simple Lie algebra. The number
of direct summands in the semi-simple Lie algebra is precisely equal to the
number of connected components of the associated Dynkin diagram. In
particular, if Φ is an irreducible root system, the Lie algebra it produces is
simple, and every simple Lie algebra comes from an irreducible root system.
Proofs of the statements in the preceding paragraph will come shortly.
We hope to first give an outline of the relationships between Lie algebras,
root systems, and Dynkin diagrams. The idea is to consider the adjoint
representation (restricted to h) on g, and decompose g as

h ⊕ ⊕_{α∈Φ} gα .

That is, we would like to take the vector space decomposition and induce
the adjoint representation on it as a direct sum. We cannot precisely obtain
this result, but we can get “close enough,” as we will see in this section.
Additionally, it is often convenient to think of h = g0 . Strictly speaking
this is an abuse of notation, as 0 6∈ Φ if we want Φ to be a root system, but
it simplifies statements of theorems, so we keep it.

Theorem 9.1. hα = [gα , g−α ] is one-dimensional.

Proof.

Theorem 9.2. There exists a unique Hα ∈ hα such that α(Hα ) = 2.


Proof. It suffices to show that α(hα ) ≠ 0. If so, then we can take any vector
v ∈ hα with α(v) ≠ 0 and define

Hα = (2/α(v)) · v.

Since hα is one-dimensional, we know that if α(hα ) 6= 0, it must in fact be


that α(x) = 0 for x ∈ hα if and only if x = 0. So take v ∈ hα . There exist
v+ ∈ gα and v− ∈ g−α such that v = [v+ , v− ]. Then [v, v+ ] = α(v)v+ and
[v, v− ] = α(v)v− , since v ∈ hα ⊆ h. If α(v) = 0, then

A = Cv ⊕ Cv+ ⊕ Cv−

is a Lie algebra with [A, A] ⊆ Cv, so A is nilpotent. In particular, this
means that adv acts nilpotently in the adjoint representation (in fact in any representation of A).
But since v ∈ h, adv is also diagonalizable, so adv = 0; as g is semi-simple its center is
trivial, so v = 0. Thus, α(v) ≠ 0 for any nonzero v ∈ hα .

Let Xα ∈ gα be nonzero. Then [Xα , g−α ] 6= 0, so it must be 1-dimensional


(as g−α is 1-dimensional). Pick the unique Yα such that [Xα , Yα ] = Hα . We
use the notation slα to denote the copy of sl2 (C) embedded in g by

slα = CXα ⊕ CYα ⊕ CHα ⊆ g.

Theorem 9.3. Let V = Σ_{α∈Φ} Rα ⊆ h∗ . Then dimR V = dimC h.

Theorem 9.4. Let g be a semi-simple Lie algebra with Cartan sub-algebra
h, and let Φ = {α ∈ h∗ | α ≠ 0, gα ≠ 0} as above. Then Φ is a root system.

Proof. For any α, β ∈ Φ, x ∈ gα , y ∈ gβ , and h ∈ h, the invariance of the Killing
form gives ([h, x], y) + (x, [h, y]) = 0, i.e.

(α(h) + β(h))(x, y) = 0.

Thus, if α ≠ −β, then (x, y) = 0.

If ∆ is a basis for Φ as defined above, let n+ = ⊕_{α∈Φ+} gα , and define n−
similarly. Then we have the following result:

Theorem 9.5. 1. n+ , n− are nilpotent in g.

2. b+ = n+ ⊕ h and b− = n− ⊕ h are solvable.


3. g = n+ ⊕ h ⊕ n− .

Example 9.6. For g = sln (C), h is the diagonal n × n matrices of trace


zero, n+ denotes the strictly upper-triangular matrices, and n− denotes the
strictly lower triangular matrices. In this particular case, all of the results of
theorem 9.5 should be immediate.

9.2 Construction from Cartan matrix


Here we give a construction of a Lie algebra from the Cartan matrix asso-
ciated to a root system Φ. If C is the cartan matrix and has entries aij ,
then let g be presented by the generators e1 , . . . en , f1 , . . . , fn , h1 , . . . , hn and
relations:
[ei , fj ] = 0 for i ≠ j,          [ei , fi ] = hi ,
[hi , hj ] = 0 for all i, j,        [hi , ej ] = aij ej ,
[hi , fj ] = −aij fj ,              (ad ei )^{1−aij} (ej ) = 0 for i ≠ j,
                                    (ad fi )^{1−aij} (fj ) = 0 for i ≠ j.
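As a quick sanity check of these relations (a numerical sketch, not part of the construction), one can take the Cartan matrix of type A2 together with the usual Chevalley generators of sl3 (C) and verify the relations, including the last two (Serre) relations, directly.

```python
import numpy as np

def unit(i, j, n=3):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

def br(x, y):
    return x @ y - y @ x

# Chevalley generators of sl3(C) and the Cartan matrix of type A2.
e = [unit(0, 1), unit(1, 2)]
f = [unit(1, 0), unit(2, 1)]
h = [unit(0, 0) - unit(1, 1), unit(1, 1) - unit(2, 2)]
a = np.array([[2, -1], [-1, 2]])

for i in range(2):
    for j in range(2):
        assert np.allclose(br(h[i], h[j]), 0)
        assert np.allclose(br(h[i], e[j]), a[i, j] * e[j])
        assert np.allclose(br(h[i], f[j]), -a[i, j] * f[j])
        if i == j:
            assert np.allclose(br(e[i], f[i]), h[i])
        else:
            assert np.allclose(br(e[i], f[j]), 0)
            # Serre relations: (ad e_i)^{1 - a_ij}(e_j) = 0, and likewise for f.
            x, y = e[j], f[j]
            for _ in range(1 - a[i, j]):
                x, y = br(e[i], x), br(f[i], y)
            assert np.allclose(x, 0) and np.allclose(y, 0)
print("Chevalley-Serre relations hold for sl3(C)")
```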

In particular, when we construct the simple Lie algebras of types A, B,


C, and D, we get:

Type    Description
An      sln+1 (C)
Bn      so2n+1 (C) = {x ∈ gl2n+1 (C) | x^t S = −Sx},  with S = ( 1 0 0 ; 0 0 idn ; 0 idn 0 )
Cn      sp2n (C) = {x ∈ gl2n (C) | x^t S = −Sx},      with S = ( 0 idn ; −idn 0 )
Dn      so2n (C) = {x ∈ gl2n (C) | x^t S = −Sx},      with S = ( 0 idn ; idn 0 )

Figure 13: Types of classical Lie algebras and their realizations.
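The dimensions in figure 12 can be checked directly from these matrix realizations. The sketch below (illustrative) computes dim so5 (C) as the dimension of the solution space of the linear condition x^t S + Sx = 0, recovering 2n² + n = 10 for n = 2.

```python
import numpy as np

n = 2
N = 2 * n + 1                            # so_{2n+1} acts on C^{2n+1}
I = np.eye(n)
S = np.block([[np.ones((1, 1)), np.zeros((1, n)), np.zeros((1, n))],
              [np.zeros((n, 1)), np.zeros((n, n)), I],
              [np.zeros((n, 1)), I, np.zeros((n, n))]])

# The map x -> x^T S + S x is linear on N x N matrices; build its matrix
# in the standard basis and compute the dimension of its kernel.
M = np.zeros((N * N, N * N))
for k in range(N * N):
    x = np.zeros((N, N)); x.flat[k] = 1.0
    M[:, k] = (x.T @ S + S @ x).ravel()

rank = np.linalg.matrix_rank(M)
print("dim so5(C) =", N * N - rank)      # 10 == 2*n**2 + n for n = 2
```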

For the exceptional Lie algebras, we do not get as nice descriptions,
though they can be realized in reasonable ways. For instance, there is a
specific 3-form ω ∈ ∧³ C⁷ for which g2 = {A ∈ gl7 (C) | A · ω = 0} (where g2 is
the simple Lie algebra of type G2 and A acts on ∧³ C⁷ by derivations). Alternatively,
G2 can be given as Der(O), the Lie algebra of derivations on the octonions. For more
information on g2 and the other exceptional Lie algebras, please see [FH].


10 More representation theory


10.1 Weights
Let g be a complex semi-simple Lie algebra, and let h be a Cartan sub-
algebra. Then recall that if Φ is the root system for g,

g = h ⊕ ⊕_{α∈Φ+} (gα ⊕ g−α ),

as a representation of h under the adjoint action. We are going to gen-
eralize this idea to arbitrary representations.
Let V be a (possibly infinite dimensional) representation of g. That is,
we have a map
g → EndC (V ) = gl(V ).
If h is a Cartan sub-algebra, then for each λ ∈ h∗ , define
Vλ = {v ∈ V | xv = λ(x)v, ∀x ∈ h}.
This definition should feel familiar. In the case of the adjoint represen-
tation (restricted to h), Vλ = gλ , the eigenspace corresponding to λ. In
general, the sum of the Vλ is direct, so

⊕_{λ∈h∗} Vλ ⊆ V,

but we need not have equality.


We have Eα , Fα , and Hα in g (the elements Xα , Yα , Hα of section 9.1) for each
α ∈ Φ+ . For any α, the Lie sub-algebra (not ideal) generated by Eα , Fα ,
and Hα is isomorphic to sl2 (C) (exercise). For ease of notation, enumerate
the elements of Φ+ as α1 , . . . , αn , and let Ei = Eαi , and similarly for Fi and Hi .
We say v ∈ Vλ is primitive if Ei (v) = 0 for each i ∈ {1, . . . , n}.
Lemma 10.1. If V is a representation of g, then Fi (Vλ ) ⊆ Vλ−αi and
Ei (Vλ ) ⊆ Vλ+αi .

Proof. Let v ∈ Vλ . Then

Hj Ei v = [Hj , Ei ]v + Ei Hj v = αi (Hj )Ei v + λ(Hj )Ei v = (λ + αi )(Hj )Ei v,

and similarly for Fi .


Theorem 10.2. Let V be an irreducible representation of g, and let v ∈ Vλ
be primitive (with λ ∈ h∗ ). If Vµ ≠ 0, then

λ − µ ∈ Σ_{i=1}^{n} N αi .

Proof. Let V 0 = Σ C · Fi1 · · · Fir Hj1 · · · Hjs (v). That is, V 0 is the subspace
spanned by all possible repeated applications of the Fi and Hj to v. Note
that we need not worry about the order because we can replace Hi Fj with
[Hi , Fj ] + Fj Hi ; the first term is either zero or a multiple of Fj ,
and the second term is in the correct order. This technique is rem-
iniscent of the proof of the Poincaré–Birkhoff–Witt theorem, to be seen in
section 10.2.
Now Hi (v) = λ(Hi )v, so V 0 = Σ C · Fi1 · · · Fir (v). Using the primitivity of v one
checks that V 0 is stable under the Ei as well, so V 0 is a nonzero subrepresentation of V .
Thus, if V is to be irreducible, V = V 0 .
By our lemma, Fi1 · · · Fir (v) ∈ Vλ−αi1 −···−αir , meaning

V ⊆ Σ Vλ−αi1 −···−αir ,

which gives the desired result.

Corollary 10.3. Under the same conditions as the preceding theorem, if V
is finite dimensional, then

V = ⊕_{µ ∈ λ − Σ Nαi} Vµ .

We say that λ is the highest weight. Given any basis ∆ for our root
system Φ, there is a unique highest weight. Indeed, if there were two highest
weights λ1 and λ2 , then

λ1 = λ2 − Σ_k αik     and     λ2 = λ1 − Σ_k αjk ,

where the αik and αjk are positive roots. Substituting one equation into the other shows
that both sums must be empty, so λ1 = λ2 .
Moreover, since our Weyl group acts transitively on bases for Φ, it acts
transitively on highest weights. In fact, if λ is a highest weight for a finite
dimensional representation V , and W the Weyl group associated with Φ,
then the collection of non-trivial weights all lie in the convex hull of the set
W λ = {wλ | w ∈ W }.
Let P = {λ ∈ h∗ | λ(Hi ) ∈ Z ∀i = 1, . . . , n}, and let P + = {λ ∈ h∗ |
λ(Hi ) ∈ N ∀i = 1, . . . , n}. We call P the set of weights, and P + the set of
dominant weights. Clearly P + ⊆ P .


 
Example 10.4. In the case of sl2 (C), h = Ch, where h = diag(1, −1) (as
in section 7.3). Then h∗ = Ch∗ , and α = 2h∗ . We have P = Zh∗ and P + = Nh∗ . Note
that α ∈ P + , but P + is not generated by α.

10.2 Poincaré-Birkhoff-Witt theorem


Theorem 10.5 (Poincaré-Birkhoff-Witt). Let g be a finite dimensional
Lie algebra. There is a canonical C-linear map ϕ : g → U (g) given by
ϕ : x 7→ x+I (from the construction of U (g)). Let β = {b1 , . . . , bn } be a basis
equipped with a total ordering bi < bj whenever i < j. A standard mono-
mial is a finite sequence (bi1 , . . . , bik ) of basis elements in weakly increasing
order. Extend ϕ to standard monomials by ϕ : (bi1 , . . . , bik ) 7→ bi1 · · · bik .
Then ϕ is injective, and its image is a basis for U (g) as a C-vector
space.

Proof. Certainly every element of U (g) is a finite linear combination of
monomials in the bi , but these monomials need not be standard (in weakly increasing
order). If ever we have two basis elements out of order, bi · bj with i > j,
we can replace bi · bj with bj · bi + [bi , bj ]. Then bj · bi is in weakly increas-
ing order, and we expand [bi , bj ] as Σ_{b∈β} c^b_{ij} b for structure constants c^b_{ij} .
If such an out-of-order pair occurs inside a larger monomial, applying such a swap
moves us strictly closer to a linear combination of standard monomials, so the standard
monomials span U (g).

A more careful treatment will show that the order in which we apply
such swaps is irrelevant; in the end, any ordering of the swaps will yield the
same result.
Note that the Poincaré-Birkhoff-Witt theorem made no use of the field.
The theorem is true for Lie algebras over any field, regardless of characteris-
tic. Moreover, we do not even need our Lie algebra to be finite dimensional.
If g is not finite dimensional, take any basis and assign it a well-ordering.
Define standard monomials in the same way. The rest of the proof goes
through with mild changes.
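The straightening procedure in this proof is easy to implement. Below is a small sketch (illustrative, for sl2 (C) only) that rewrites an arbitrary word in the ordered basis e < h < f as a linear combination of standard monomials, using the structure constants [e, f ] = h, [h, e] = 2e, [h, f ] = −2f .

```python
from collections import defaultdict

# Basis of sl2(C), ordered e < h < f, encoded as 0 < 1 < 2.
NAMES = ["e", "h", "f"]

# bracket[(i, j)] = [b_i, b_j] expressed in the basis, as {index: coefficient}.
bracket = {
    (0, 2): {1: 1},  (2, 0): {1: -1},     # [e, f] =  h
    (1, 0): {0: 2},  (0, 1): {0: -2},     # [h, e] =  2e
    (1, 2): {2: -2}, (2, 1): {2: 2},      # [h, f] = -2f
}

def straighten(word):
    """Rewrite a word (tuple of basis indices) as a dict {standard word: coeff}."""
    terms = defaultdict(float)
    agenda = [(tuple(word), 1.0)]
    while agenda:
        w, c = agenda.pop()
        # Find the first adjacent out-of-order pair.
        pos = next((k for k in range(len(w) - 1) if w[k] > w[k + 1]), None)
        if pos is None:
            terms[w] += c                  # already a standard monomial
            continue
        i, j = w[pos], w[pos + 1]
        # x_i x_j = x_j x_i + [x_i, x_j]
        agenda.append((w[:pos] + (j, i) + w[pos + 2:], c))
        for b, coeff in bracket.get((i, j), {}).items():
            agenda.append((w[:pos] + (b,) + w[pos + 2:], c * coeff))
    return dict(terms)

result = straighten((2, 0))                # the word f*e
for w, c in sorted(result.items()):
    print(c, "*", "".join(NAMES[k] for k in w))
# prints:  1.0 * ef   and   -1.0 * h      (i.e. f e = e f - h in U(sl2))
```

Each swap either shortens a word or reduces the number of inversions, so the rewriting terminates, mirroring the argument in the proof.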

10.3 Verma Module


We now wish to construct a universal representation with highest weight
λ ∈ h∗ . This representation is called the Verma module associated to λ.


Remember that we can decompose g (as an h-module under the adjoint


action) as g = n+ ⊕ h ⊕ n− , where h is a Cartan sub-algebra, as in theorem
9.5. Let b+ = n+ ⊕ h. This is a solvable sub-algebra.
Now define Cλ to be C with a b+ -module structure, whereby h acts as
multiplication by λ(h) for each h ∈ h, and n+ acts trivially. Writing b = b+ , note that Cλ is
only a left b-module, not a bimodule. We can pull back the structure to U (b),
making it a left U (b)-module. By the Poincaré-Birkhoff-Witt theorem, there
is a natural right action of U (b) on U (g) (by right multiplication). Since
U (g) is naturally a left g-module, altogether, we have a (g, U (b))-bimodule.
Now define the Verma module Mλ = U (g) ⊗U (b) Cλ .
Consider the vector 1 ⊗ 1 ∈ Mλ . It has weight λ, because for
any h ∈ h,
h(1 ⊗ 1) = 1 ⊗ λ(h) · 1 = λ(h)(1 ⊗ 1).
Moreover, Verma modules are weight modules. That is, they are direct
sums of their weight spaces. Indeed, the weight space Vλ can be found as a
direct summand in Mλ , and one of finite dimension.
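For g = sl2 (C) the Verma module is small enough to write down explicitly: Mλ has basis f^k ⊗ 1 for k ≥ 0, with h acting by λ − 2k. The sketch below is an illustration; the formula e · (f^k ⊗ 1) = k(λ − k + 1) f^{k−1} ⊗ 1 used in it is a standard computation from the sl2 relations rather than something derived in these notes. It truncates to k ≤ N and checks the relations away from the truncation boundary.

```python
import numpy as np

lam, N = 3.5, 12                          # a generic weight and a truncation level

H = np.diag([lam - 2 * k for k in range(N + 1)])
F = np.zeros((N + 1, N + 1))
E = np.zeros((N + 1, N + 1))
for k in range(N):
    F[k + 1, k] = 1.0                     # f . f^k(v) = f^{k+1}(v)
for k in range(1, N + 1):
    E[k - 1, k] = k * (lam - k + 1)       # e . f^k(v) = k(lam - k + 1) f^{k-1}(v)

def br(a, b):
    return a @ b - b @ a

assert np.allclose(br(H, E), 2 * E)
assert np.allclose(br(H, F), -2 * F)
# [E, F] = H holds exactly away from the truncation boundary k = N.
assert np.allclose(br(E, F)[:N, :N], H[:N, :N])
print("weights of M_lambda:", np.diag(H)[:6], "...")
```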


A Adjointness of (U, F )
Theorem A.1. Let F and U be the functors defined in section 6.1. Then
(U, F ) form an adjoint pair. That is, for any Lie algebra g and C-algebra
A,
homAC (U (g), A) ≅ homLC (g, F (A)),
where the isomorphism is natural in both g and A.
Proof. Let us first describe the isomorphism. Let Φ : homAC (U (g), A) →
homLC (g, F (A)) by
Φ(f ) : x 7→ f (x + I).
The map Ψ : homLC (g, F (A)) → homAC (U (g), A) is harder to describe. If
g ∈ homLC (g, F (A)), we can extend g to ḡ : T (g) → A by defining

ḡ(x1 ⊗ · · · ⊗ xn ) = g(x1 ) · . . . · g(xn ).

Note that for any x, y ∈ g,

ḡ([x, y]) = g([x, y]) = g(x)g(y) − g(y)g(x) = ḡ(x ⊗ y − y ⊗ x),

so ḡ is trivial on I, and the map ĝ : U (g) → A given by ĝ : x + I 7→ ḡ(x) is
well-defined. Let Ψ(g) = ĝ.
Now we can see that Ψ ◦ Φ and Φ ◦ Ψ are each the identity on their
respective objects. For f ∈ homAC (U (g), A), we have,

Ψ(Φ(f )) = Ψ(x 7→ f (x + I))


= x1 ⊗ · · · ⊗ xn + I 7→ f (x1 + I) · . . . · f (xn + I)
= f

In the other direction, if g ∈ homLC (g, F (A)), then

Φ(Ψ(g)) = Φ(ĝ)
= x 7→ ĝ(x + I)
= x 7→ ḡ(x)
= g

We need to show that Φ and Ψ are natural in both A and g. In fact,
we only need to show that Φ satisfies the naturality conditions, as Ψ will
inherit the naturality from Φ. We will also write ΦA_g for the map we have
thus far been referring to as Φ; this should make the naturality clearer. Note
that the defining formula for this map is the same for every g and A.


First we will check naturality in g. Let φ : g → h. This induces the map


U (φ) : U (g) → U (h) by

U (φ) : x1 ⊗ · · · ⊗ xn 7→ φ(x1 ) ⊗ · · · ⊗ φ(xn )

In turn this induces hU (φ) : homAC (U (h), A) → homAC (U (g), A) by compos-


ing with U (φ) on the side that makes sense. Similarly, hφ denotes composi-
tion with φ. We must check that

                 ΦA_h
homAC (U (h), A) ----> homLC (h, F (A))
      |                       |
      | hU (φ)                | hφ
      v                       v
homAC (U (g), A) ----> homLC (g, F (A))
                 ΦA_g

commutes. For any f ∈ homAC (U (h), A),

hφ (ΦA_h (f )) = ΦA_h (f ) ◦ φ
            = (x 7→ f (x + I)) ◦ φ
            = y 7→ f (φ(y) + I)
            = y 7→ f (U (φ)(y + I))
            = ΦA_g (f ◦ U (φ))
            = ΦA_g (hU (φ) (f )).

Similarly to above, let hφ denote composition with φ (now on the other
side), and analogously for hF (φ) . To check naturality in A, we must check
that for φ : A → B the square

                 ΦA_g
homAC (U (g), A) ----> homLC (g, F (A))
      |                       |
      | hφ                    | hF (φ)
      v                       v
homAC (U (g), B) ----> homLC (g, F (B))
                 ΦB_g

commutes. For any f ∈ homAC (U (g), A),


hF (φ) (ΦA_g (f )) = F (φ) ◦ (x 7→ f (x + I))
               = x 7→ φ(f (x + I))
               = ΦB_g (φ ◦ f )
               = ΦB_g (hφ (f )).


References
[FH] W. Fulton and J. Harris, Representation Theory

[Sch] O. Schiffmann, Lectures on Hall algebras, arXiv preprint


math/0611617, (2006),

[SJ] J.-P. Serre and G. A. Jones, Complex semisimple Lie algebras

[Var] V. S. Varadarajan, Lie groups, Lie algebras, and their representations


[SS] D. Speyer and B. Sturmfels, Tropical mathematics, arXiv preprint


math/0408099, (2004),

[Mik] G. Mikhalkin, Tropical geometry and its applications, arXiv preprint


math/0601041, (2006),

[SDW] J. Simons and B. De Weger, Theoretical and computational bounds


for m-cycles of the 3n+ 1 problem, Acta Arith, 117 (2005), 51–70.

[Bar] V. Baranovsky, The variety of pairs of commuting nilpotent matrices


is irreducible, Transformation Groups, 6 (2001), 3–8.

[Fei] W. Feit, The Representation Theory of Finite Groups

[Hag] J. Haglund, The q, t-Catalan numbers and the space of diagonal


harmonics, University Lecture Series, 41 (2008),

[CP] G. Cooperman and I. Pak, The product replacement graph on gen-


erating triples of permutations, (2000),

[Eke] J. van Ekeren, The orbit method for nilpotent Lie groups, lecture
notes, http://math. mit. edu,

[Mih] A. Mihailovs, The Orbit Method for Finite Groups of Nilpotency


Class Two of Odd Order, arXiv preprint math/0001092, (2000),

[HP] Z. Halasi and P. P. Pálfy, The number of conjugacy classes in pattern


groups is not a polynomial function, Journal of Group Theory, 14
(2011), 841–854.

[Gal] P. X. Gallagher, The number of conjugacy classes in a finite group,


Mathematische Zeitschrift, 118 (1970), 175–179.


[Pak] I. Pak, The nature of partition bijections I. Involutions, Advances in


Applied Mathematics, 33 (2004), 263–289.
[Hac] P. Hacking, The homology of tropical varieties, Collectanea mathe-
matica, 59 (2008), 263–273.
[DP] T. Dokos and I. Pak, The Expected Shape of Random Doubly
Alternating Baxter Permutations, arXiv preprint arXiv:1401.0770,
(2014),
[Mor] M. Morin, The Chromatic Symmetric Function of Symmetric Cater-
pillars and Near-Symmetric Caterpillars,
[Arn] V. I. Arnol’d, The calculus of snakes and the combinatorics of
Bernoulli, Euler and Springer numbers of Coxeter groups, Russian
Mathematical Surveys, 47 (1992), 1–51.
[Las] B. Lass, The algebra of set functions II: An enumerative analogue of
Hall’s theorem for bipartite graphs, European Journal of Combina-
torics, 33 (2012), 199–214.
[Wei] A. Weir, Sylow p-subgroups of the general linear group over finite
fields of characteristic p, Proceedings of the American Mathematical
Society, 6 (1955), 454–464.
[B+ ] J. L. Brumbaugh and M. Bulkow and P. S. Fleming and L. A. Garcia
and S. R. Garcia and G. Karaali and M. Michal and A. P. Turner,
Supercharacters, exponential sums, and the uncertainty principle,
arXiv preprint arXiv:1208.5271, (2012),
[DI] P. Diaconis and I. Isaacs, Supercharacters and superclasses for alge-
bra groups, Transactions of the American Mathematical Society, 360
(2008), 2359–2392.
[G+ ] F. J. Grunewald and D. Segal and G. C. Smith, Subgroups of fi-
nite index in nilpotent groups, Inventiones mathematicae, 93 (1988),
185–223.
[RR] C. Reid and A. Rosa, Steiner systems S(2, 4, v)-a survey, The Elec-
tronic Journal of Combinatorics, 1000 (2010), DS18–Feb.
[BH] A. E. Brouwer and W. H. Haemers, Spectra of graphs
[Sta] R. P. Stanley, Spanning trees and a conjecture of Kontsevich, Annals
of Combinatorics, 2 (1998), 351–363.


[KG] J. Keilson and H. Gerber, Some results for discrete unimodality,


Journal of the American Statistical Association, 66 (1971), 386–389.

[Ste] J. Stembridge, Some combinatorial aspects of reduced words in finite


Coxeter groups, Transactions of the American Mathematical Soci-
ety, 349 (1997), 1285–1332.

[VA] A. Veralopez and J. Arregi, Some algorithms for the calculation of


conjugacy classes in the Sylow p-subgroups of GL (n, q), Journal of
Algebra, 177 (1995), 899–925.

[Ro] S. P. Radziszowski and others, Small ramsey numbers, Electron. J.


Combin, 1 (1994),

[BL] S. Billey and V. Lakshmibai, Singular loci of Schubert varieties

[Col] F. N. Cole, Simple groups from order 201 to order 500, American
journal of Mathematics, 14 (1892), 378–388.

[Col] F. N. Cole, Simple groups as far as order 660, American Journal of


Mathematics, 15 (1893), 303–315.

[Woo] R. Woodroofe, Shelling the coset poset, Journal of Combinatorial


Theory, Series A, 114 (2007), 733–746.

[Dup] J. L. Dupont, Scissors congruences, group homology and character-


istic classes

[Zo] I. Zakharevich and others, Scissors congruence as K-theory, Homol-


ogy, Homotopy and Applications, 14 (2012), 181–202.

[Hes] L. Hesselholt, Scissor’s congruence groups,

[S+ ] T. S. Sundquist and D. G. Wagner and J. West, A Robinson–


Schensted Algorithm for a Class of Partial Orders, journal of com-
binatorial theory, Series A, 79 (1997), 36–52.

[Lev] L. Levine, Orlik-Solomon Algebras of Hyperplane Arrangements,


(2004),

[Mac] I. G. Macdonald, Symmetric functions and Hall polynomials

[Boy] M. Boyarchenko, Representations of unipotent groups over local


fields and Gutkin’s conjecture, arXiv preprint arXiv:1003.2742,
(2010),


[Ker] A. Kerber, Representations of permutation groups

[Yan] N. Yan, Representations of finite unipotent linear groups by the


method of clusters, arXiv preprint arXiv:1004.2674, (2010),

[KL] D. Kazhdan and G. Lusztig, Representations of Coxeter groups and


Hecke algebras, Inventiones mathematicae, 53 (1979), 165–184.

[FH] W. Fulton and J. Harris, Representation theory: a first course

[DS] L. Devroye and A. Sbihi, Random walks on highly symmetric graphs,


Journal of Theoretical Probability, 3 (1990), 497–514.

[Lov] L. Lovász, Random walks on graphs: A survey, Combinatorics, Paul


erdos is eighty, 2 (1993), 1–46.

[F+ ] C. F. Fowler and S. R. Garcia and G. Karaali, Ramanujan sums as


supercharacters, The Ramanujan Journal, (2012), 1–37.

[Hig] G. Higman, Enumerating p-groups, II: Problems whose solution is


PORC, Proc. of the LMS, 3 (1960), 566–582.

[Sau] M. du Sautoy, Zeta functions and counting finite p-groups, Electronic


Research Announcements of the American Mathematical Society, 5
(1999), 112–122.

[Pak] I. Pak, What do we know about the product replacement algorithm,


Groups and computation, III (Columbus, OH, 1999), 8 (2001), 301–
347.

[GW] C. F. Gauss and G. G. der Wissenschaften, Werke. Bd. 8

[Lau] J. Lauri, Vertex-deleted and edge-deleted subgraphs, A collection


of papers by members of the University of Malta on the occasion of
its quartercentenary celebrations editors: R. Ellul-Micallef and S.
Fiorini), Malta, (1992),

[Kir] A. A. Kirillov, Variations on the triangular theme, Translations of


the American Mathematical Society-Series 2, 169 (1995), 43–74.

[Alp] J. Alperin, Unipotent conjugacy in general linear groups, Communications in Algebra, 34 (2006), 889–891.

[Hag] J. Haglund, q-Rook Polynomials and Matrices over Finite Fields,


Advances in Applied Mathematics, 20 (1998), 450–487.


[LM] G. Ling and G. Miller, Proof that there is no simple group whose
order lies between 1092 and 2001, American Journal of Mathemat-
ics, 22 (1900), 13–26.

[H+ ] P. E. Holmes and S. A. Linton and S. H. Murray, Product replace-


ment in the monster, Experimental mathematics, 12 (2003), 123–126.

[MV] J. Matousek and J. Vondrak, The probabilistic method, Lecture


notes, (2008),

[A+ ] M. Agrawal and N. Kayal and N. Saxena, PRIMES is in P, Annals


of mathematics, (2004), 781–793.

[WY] B. J. Wyser and A. Yong, Polynomials for symmetric orbit closures


in the flag variety, arXiv preprint arXiv:1310.7271, (2013),

[VLA] A. Vera-López and J. Arregi, Polynomial properties in unitriangular


matrices, Journal of Algebra, 244 (2001), 343–351.

[Man] A. Mann, Philip Hall’s ‘rather curious’ formula for abelianp-groups,


Israel Journal of Mathematics, 96 (1996), 445–448.

[Mar] E. Marberg, Combinatorial methods of character enumeration for


the unitriangular group, Journal of Algebra, 345 (2011), 295–323.

[F+ ] W. Feit and N. Fine and others, Pairs of commuting matrices over a
finite field, Duke Mathematical Journal, 27 (1960), 91–94.

[Zie] G. M. Ziegler, Oriented matroids today, World Wide Web


http://www. math. tuberlin. de/˜ ziegler, (1996),

[Ber] C. Berge, On two conjectures to generalize Vizing’s theorem, Le


Matematiche, 45 (1991), 15–24.

[Rot] G.-C. Rota, On the foundations of combinatorial theory I. Theory


of Möbius functions, Probability theory and related fields, 2 (1964),
340–368.

[KS] V. Kaibel and A. Schwartz, On the complexity of polytope isomor-


phism problems, Graphs and Combinatorics, 19 (2003), 215–230.

[Kir] A. A. Kirillov, On the combinatorics of coadjoint orbits, Functional


Analysis and Its Applications, 27 (1993), 62–64.

76
Last updated October 26, 2014 REFERENCES

[Hal] Z. Halasi, On the characters and commutators of finite algebra


groups, Journal of Algebra, 275 (2004), 481–487.

[Raz] A. A. Razborov, On systems of equations in a free group, Mathemat-


ics of the USSR-Izvestiya, 25 (1985), 115.

[Gud] P. Gudivok, On Sylow subgroups of the general linear group over


a complete discrete valuation ring, Ukrainian Mathematical Jour-
nal, 43 (1991), 857–863.

[M+ ] J. L. Martin and M. Morin and J. D. Wagner, On distinguishing trees


by their chromatic symmetric functions, Journal of Combinatorial
Theory, Series A, 115 (2008), 237–253.

[KM] A. A. Kirillov and A. Melnikov, On a remarkable sequence of poly-


nomials, preprint, (1995),

[Kel] R. Kellerhals, Old and new about Hilbert’s third problem, European
women in mathematics (Loccum 1999), (1999), 179–187.

[ME] C. Monico and M. Elia, Note on an additive characterization of


quadratic residues modulo p, Journal of Combinatorics, Informa-
tion & System Sciences, 31 (2006), 209–215.

[Tro] W. T. Trotter, New perspectives on interval orders and inter-


val graphs, London Mathematical Society Lecture Note Series, 241
(1997), 237–286.

[Vak] R. Vakil, Murphy’s law in algebraic geometry: badly-behaved defor-


mation spaces, Inventiones mathematicae, 164 (2006), 569–590.

[Har] K. Hare, More on the total number of prime factors of an odd perfect
number, Mathematics of computation, 74 (2005), 1003–1008.

[Wil] A. Wiles, Modular elliptic curves and Fermat’s last theorem, Annals
of Mathematics, (1995), 443–551.

[B+ ] P. Belkale and P. Brosnan and others, Matroids motives, and a con-
jecture of Kontsevich, Duke Mathematical Journal, 116 (2003), 147–
188.

[OO] J. G. Oxley and J. Oxley, Matroid theory

[Lin] C. E. Linderholm, Mathematics made difficult

77
Last updated October 26, 2014 REFERENCES

[SS] J. P. Serre and L. L. Scott, Linear representations of finite groups

[HH] J. E. Humphreys and J. E. Humphreys, Linear algebraic groups

[Hal] P. R. Halmos, Linear algebra problem book, AMC, 10 (1995), 12.

[Ser] J.-P. Serre, Lie algebras and Lie groups

[Cox] D. Cox, Lectures on toric varieties, CIMPA Lecture Notes, (2005),

[Kir] A. A. Kirillov, Lectures on the orbit method

[Mil] J. S. Milne, Lectures on étale cohomology, Available on-line at


http://www. jmilne. org/math/CourseNotes/LEC. pdf, (1998),

[Pak] I. Pak, Lectures on discrete and polyhedral geometry, Preliminary


version available at author’s web page, (2009),

[Kle] A. Kleshchev, Lectures on Algebraic Groups, Oregon: University of


Oregon, (2008),

[Pak] I. Pak, Partition bijections, a survey, The Ramanujan Journal, 12


(2006), 5–75.

[Sim] C. C. Sims, Enumerating p-groups, Proc. of the LMS, 3 (1965), 151–


166.

[Fom] S. Fomin, Knuth equivalence, jeu de taquin, and the Littlewood-


Richardson rule, (1999),

[LY] L. Li and A. Yong, Kazhdan–Lusztig polynomials and drift configu-


rations, Algebra & Number Theory, 5 (2012), 595–626.

[Tho] J. Thompson, k(Un (Fq )), Preprint, http://www. math. ufl.


edu/fac/thompson. html, (2004),

[MS] D. Maclagan and B. Sturmfels, Introduction to tropical geometry,


Book in preparation, 34 (2009),

[Kir] A. A. Kirillov, An introduction to Lie groups and Lie algebras

[Hum] J. Humphreys, Introduction to Lie algebras and representation the-


ory

[Kay] R. Kaye, Infinite versions of minesweeper are Turing-complete,


Manuscript, August, (2000),

78
Last updated October 26, 2014 REFERENCES

[K+ ] A. G. Kuznetsov and I. Pak and A. E. Postnikov, Increasing trees and


alternating permutations, Russian Mathematical Surveys, 49 (1994),
79–114.

[Gas] V. Gasharov, Incomparability graphs of (3 + 1)-free posets are s-


positive, Discrete Mathematics, 157 (1996), 193–197.

[FS] P. Flajolet and R. Sedgewick, Analytic combinatorics

[Bak] A. Baker, An Introduction to Galois Theory, University of Glasgow,


lecture notes, retrieved from the address http://www. maths. gla. ac.
uk/˜ ajb/dvi-ps/Galois. pdf, (2013),

[Sta] R. P. Stanley, Hyperplane arrangements, interval orders, and trees,


Proceedings of the National Academy of Sciences, 93 (1996), 2620–
2625.

[Mar] E. Marberg, Heisenberg characters, unitriangular groups, and Fi-


bonacci numbers, Journal of Combinatorial Theory, Series A, 119
(2012), 882–903.

[G+ ] R. L. Graham and M. Grötschel and L. Lovász, Handbook of com-


binatorics

[Nyd] V. Nỳdl, Graph reconstruction from subgraphs, Discrete Mathemat-


ics, 235 (2001), 335–341.

[Sta] R. P. Stanley, Graph colorings and related symmetric functions:


ideas and applications A description of results, interesting applica-
tions, & notable open problems, Discrete Mathematics, 193 (1998),
267–286.

[SAM] M. R. Salavatipour and M. Adviser-Molloy, Graph colouring via the


discharging method

[Vau] M. Vaughan-Lee, Graham Higman’s PORC Conjecture, Jahres-


bericht der Deutschen Mathematiker-Vereinigung, 114 (2012), 89–
106.

[Wil] H. S. Wilf, generatingfunctionology, (1994),

[Raz] A. A. Razborov, Flag algebras, The Journal of Symbolic Logic, 72


(2007), 1239–1282.

79
Last updated October 26, 2014 REFERENCES

[GR] R. M. Guralnick and G. R. Robinson, On the commuting probability


in finite groups, Journal of Algebra, 300 (2006), 509–528.

[Yip] M. Yip, q-Rook placements and Jordan forms of upper-triangular


nilpotent matrices, DMTCS Proceedings, (2013), 1017–1028.

[B+ ] E. Breuillard and B. Green and R. Guralnick and T. Tao, Expansion


in finite simple groups of Lie type, arXiv preprint arXiv:1309.1975,
(2013),

[MT] A. Marcus and G. Tardos, Excluded permutation matrices and the


Stanley–Wilf conjecture, Journal of Combinatorial Theory, Series
A, 107 (2004), 153–160.

[A+ ] K. Appel and W. Haken and J. Koch and others, Every planar map
is four colorable. Part II: Reducibility, Illinois Journal of Mathemat-
ics, 21 (1977), 491–567.

[A+ ] K. Appel and W. Haken and others, Every planar map is four
colorable. Part I: Discharging, Illinois Journal of Mathematics, 21
(1977), 429–490.

[Woo] R. Woodroofe, Erdős–Ko–Rado theorems for simplicial complexes,


Journal of Combinatorial Theory, Series A, 118 (2011), 1218–1227.

[Sta] R. P. Stanley, Enumerative combinatorics

[B+ ] S. Blackburn and P. Neumann and G. Venkataraman, Enumeration


of finite groups, AMC, 10 (2007), 12.

[Hig] G. Higman, Enumerating p-groups. I: Inequalities, Proc. of the


LMS, 3 (1960), 24–30.

[Ste] W. Stein, Elementary number theory: primes, congruences, and se-


crets: a computational approach

[GY] I. Goulden and A. Yong, Dyck paths and a bijection for multisets of
hook numbers, Discrete mathematics, 254 (2002), 153–164.

[Aho] A. V. Aho, Compilers: Principles, Techniques, And Tools Author:


Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman, Publisher: Addison
Wesle, (1986),

80
Last updated October 26, 2014 REFERENCES

[AS] M. Aguiar and F. Sottile, Cocommutative Hopf algebras of permu-


tations and trees, Journal of Algebraic Combinatorics, 22 (2005),
451–470.

[Mil] J. W. Milnor, Characteristic classes

[BH] H. Bürgstein and W. H. Hesselink, Algorithmic orbit classification


for some Borel group actions, Compositio Mathematica, 61 (1987),
3–41.

[BD] M. Boyarchenko and V. Drinfeld, A motivated introduction to char-


acter sheaves and the orbit method for unipotent groups in positive
characteristic, arXiv preprint math/0609769, (2006),

[Ass] S. Assaf, A combinatorial realization of Schur-Weyl duality


via crystal graphs and dual equivalence graphs, arXiv preprint
arXiv:0804.1587, (2008),

[Gla] G. Glauberman, A new look at the Feit-Thompson odd order theo-


rem, Mat. Contemp, 16 (1999), 73–92.

[PS] P. Petrullo and D. Senato, An instance of umbral methods in rep-


resentation theory: the parking function module, arXiv preprint
arXiv:0807.4840, (2008),

[Isa] I. M. Isaacs, Characters of groups associated with finite algebras,


Journal of Algebra, 177 (1995), 708–730.

[Leh] G. Lehrer, Discrete series and the unipotent subgroup, Compositio


Mathematica, 28 (1974), 9–19.

[Hli] P. Hlinenỳ, Discharging technique in practice, Lecture text for Spring


School on Combinatorics, (2000),

[Mih] A. Mihailovs, Diagrams of representations, arXiv preprint


math/9803079, (1998),

[Ste] J. R. Stembridge, Counting points on varieties over finite fields re-


lated to a conjecture of Kontsevich, Annals of Combinatorics, 2
(1998), 365–385.

[Du] M. Du Sautoy, Counting p-groups and nilpotent groups, Publications


Mathématiques de l’IHÉS, 92 (2000), 63–112.

81
Last updated October 26, 2014 REFERENCES

[GR] S. Goodwin and G. Roehrle, Counting conjugacy classes in the unipo-


tent radical of parabolic subgroups of GLn (q), Pacific Journal of
Mathematics, 245 (2010), 47–56.

[Goo] S. M. Goodwin, Counting conjugacy classes in Sylow p-subgroups of


Chevalley groups, Journal of Pure and Applied Algebra, 210 (2007),
201–218.

[Isa] I. Isaacs, Counting characters of upper triangular groups, Journal of


Algebra, 315 (2007), 698–719.

[IK] I. Isaacs and D. Karagueuzian, Conjugacy in groups of upper trian-


gular matrices, Journal of Algebra, 202 (1998), 704–711.

[VLA] A. Vera-López and J. M. Arregi, Conjugacy classes in unitriangular


matrices, Linear Algebra Appl., 370 (2003), 85–124.

[BB] A. Bjorner and F. Brenti, Combinatorics of Coxeter groups

[Aig] M. Aigner, Combinatorial theory, Heidelberg, New York, (1979),

[F+ ] P. S. Fleming and S. R. Garcia and G. Karaali, Classical Kloost-


erman sums: representation theory, magic squares, and Ramanujan
multigraphs, Journal of Number Theory, 131 (2011), 661–680.

[G+ ] P. Gudivok and Y. V. Kapitonova and S. Polyak and V. Rud’ko and


A. Tsitkin, Classes of conjugate elements of the unitriangular group,
Cybernetics, 26 (1990), 47–57.

[Mac] S. Mac Lane, Categories for the working mathematician

[G+ ] S. M. Goodwin and P. Mosch and G. Röhrle, Calculating conjugacy


classes in Sylow-subgroups of finite Chevalley groups of rank six and
seven, LMS Journal of Computation and Mathematics, 17 (2014),
109–122.

[Oos] J. van Oosten, Basic category theory

[Kan] W. M. Kantor, Automorphism groups of designs, Mathematische


Zeitschrift, 109 (1969), 246–252.

[JP+ ] H. Jürgen Prömel and A. Steger and A. Taraz, Asymptotic enumer-


ation, global structure, and constrained evolution, Discrete Mathe-
matics, 229 (2001), 213–233.

82
Last updated October 26, 2014 REFERENCES

[Odl] A. M. Odlyzko, Asymptotic enumeration methods, Handbook of com-


binatorics, 2 (1995), 1063–1229.

[Tao] T. Tao, An Epsilon of Room: Real Analysis

[LM] D. Leemans and M. Mixer, Algorithms for classifying regular poly-


topes with a fixed automorphism group, Contributions to Discrete
Mathematics, 7 (2012),

[Hat] A. Hatcher, Algebraic topology, Cambridge UP, Cambridge, 606


(2002),

[Sta] R. P. Stanley, A symmetric function generalization of the chromatic


polynomial of a graph, Advances in Mathematics, 111 (1995), 166–
194.

[Sta] R. P. Stanley, A survey of alternating permutations, Contemp.


Math, 531 (2010), 165–196.

[Sau] M. du Sautoy, A nilpotent group and its elliptic curve: non-


uniformity of local zeta functions of groups, Israel Journal of Math-
ematics, 126 (2001), 269–288.

[R+ ] N. Robertson and D. Sanders and P. Seymour and R. Thomas, A


new proof of the four-colour theorem, Electronic Research Announce-
ments of the American Mathematical Society, 2 (1996), 17–25.

[Bon] J. A. Bondy, A graph reconstructor’s manual, Surveys in combina-


torics, 166 (1991), 221–252.

[Fok] M. M. Fokkinga, A Gentle Introduction to Category Theory-the cal-


culational approach, (1992),

[Ko] P. J. Kelly and others, A congruence theorem for trees, Pacific J.


Math, 7 (1957), 961–968.

[GS] D. D. Gebhard and B. E. Sagan, A chromatic symmetric function


in noncommuting variables, Journal of Algebraic Combinatorics, 13
(2001), 227–255.

83
Index
abelian, 6
adjoint
  functor, 26
  representation, 8, 30
algebra, 4
algebraically closed, 42
Baire category theorem, 39
basis, 48
bilinear form, 19
bracket, 4
Cartan
  criterion, 22
  matrix, 50
  sub-algebra, 37
Carter subgroup, 40
Casimir element, 28
category, 26
center, 7
central series, 10
centralizer, 41
characteristic polynomial, 38
correspondence theorem, 23
Coxeter diagram, 53
degenerate, 20
derivation, 7, 64
derived series, 14
direct sum, 5, 61
dual basis, 28
Dynkin diagram, 37, 53
eigenspace, 33, 61, 65
eigenvalue, 15, 33, 61
eigenvector, 14, 61
elementary matrix, 43
Engel's theorem, 13, 39
evaluation map, 51
exceptional, 54
full flag, 13, 16, 40
functor, 26
general linear group, 5
gl(V), 5
homomorphism, 8, 30
hyperplane, 44
ideal, 5
indecomposable, 48
inner automorphism, 39
inner product, 20
invariant form, 19
irreducible, 31, 46
Jacobi identity, 4
Jordan canonical form, 21, 38, 42
Killing form, 19, 28
Kronecker delta, 28
Lagrange interpolation, 21
Lie algebra, 4
Lie's theorem, 14
Lie-Kolchin triangularization, 16
Maschke's theorem, 33, 47
nilpotent, 10
non-degenerate, see degenerate
normalizer, 12, 37
octonions, 64
Poincaré-Birkhoff-Witt theorem, 66, 68
primitive, 65
primitive vector, 33
quotient, 7
radical, 17
rank, 38
reflection, 44
regular, 38
representation, 8, 11, 30
root system, 37, 44
semi-simple, 18, 24, 41
Serre relations, 65
simple, 24
skew-symmetry, 4
sl(V), 5
solvable, 14
special linear group, 5
spectral theorem, 43
split-exact, 32
Sylow subgroups, 39
Sylow theorems, 40
tensor algebra, 26
trace, 5, 19
universal enveloping algebra, 26
Verma module, 67
weight, 66
  dominant, 66
  highest, 66
  modules, 68
Weyl group, 46
