Andy Soffer
Contents
1 Preliminaries
2 Introduction
  2.1 First definitions and examples
  2.2 Classification of 2-dimensional Lie algebras
  2.3 A few more definitions and examples
8 Root systems
  8.1 Introduction
  8.2 Root system bases
  8.3 Coxeter and Dynkin Diagrams
  8.4 Classification
Last updated October 26, 2014 CONTENTS
A Adjointness of (U, F )
References
Index
1 Preliminaries
This course was taught at UCLA in the spring quarter of 2012 by Raphael
Rouquier. This is a compilation of my notes from the course. No doubt
they are full of errors and incomplete proofs; use them at your own risk. If
you do find an error, please email me at asoffer@ucla.edu so I can correct
it.
Unless otherwise specified, all vector spaces and algebras will be over
C. At times, as a reminder, or to specifically point out some property of C
being used, we will explicitly mention the field. Certainly if we work with
any algebra or vector space over a field other than C, it will be explicitly
mentioned. Similarly, one can expect that all Lie algebras and all vector
spaces are finite dimensional. Where we would like to have a vector space
or Lie algebra to possibly be infinite dimensional, we will say so explicitly.
Lastly, the symbol “N” denotes the set of natural numbers: {0, 1, 2, . . . }.
To denote the set {1, 2, . . . }, we use Z+ .
2 Introduction
2.1 First definitions and examples
A Lie algebra is a C-vector space g equipped with a C-bilinear map [−, −] :
g × g → g such that for all a, b, c ∈ g,
• [a, b] = −[b, a] (skew-symmetry)
• [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0 (Jacobi identity)
Note 2.1. In fact, we can define a Lie algebra over any field analogously.
Generally, we require that [x, x] = 0 for each x instead of skew-symmetry.
These conditions are equivalent for any field which has characteristic other
than 2. Unless otherwise specified, all Lie algebras will be over C. We will
make no effort to generalize to other fields, and leave such endeavors as an
exercise for the reader.
Example 2.2. Any associative algebra A becomes a Lie algebra under the
commutator bracket
[a, b] = ab − ba.
Indeed,
[a, [b, c]] + [b, [c, a]] + [c, [a, b]] = [a, bc − cb] + [b, ca − ac] + [c, ab − ba]
= [a, bc] − [a, cb] + [b, ca] − [b, ac] + [c, ab] − [c, ba]
= a(bc) − (bc)a − a(cb) + (cb)a +
b(ca) − (ca)b − b(ac) + (ac)b +
c(ab) − (ab)c − c(ba) + (ba)c
= 0,
by the associativity of A.
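This calculation can be sanity-checked numerically. The sketch below (assuming numpy is available; it is not part of the original notes) verifies the Jacobi identity and skew-symmetry for the commutator bracket on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    # Commutator bracket on an associative (here: matrix) algebra.
    return a @ b - b @ a

a, b, c = (rng.standard_normal((4, 4)) for _ in range(3))

# Jacobi identity: [a,[b,c]] + [b,[c,a]] + [c,[a,b]] = 0
jacobi = bracket(a, bracket(b, c)) + bracket(b, bracket(c, a)) + bracket(c, bracket(a, b))
assert np.allclose(jacobi, 0)

# Skew-symmetry: [a, b] = -[b, a]
assert np.allclose(bracket(a, b), -bracket(b, a))
```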
Example 2.3. For a vector space V , the endomorphisms of V form a Lie
algebra gl(V ) under the bracket
[φ, ψ] = φ ◦ ψ − ψ ◦ φ.
Example 2.4. For a vector space V , let sl(V ) ⊆ gl(V ) be the subspace
consisting of all endomorphisms with trace zero (again, for finite dimensional
vector spaces, we will often write sln (C)). Then sl(V ) is a Lie sub-algebra of
gln (C). We need only check that [−, −] restricted to sln (C) has image in
sln (C). Indeed, for φ, ψ ∈ sln (C), tr(φ ◦ ψ) = tr(ψ ◦ φ), so
tr([φ, ψ]) = tr(φ ◦ ψ) − tr(ψ ◦ φ) = 0.
Note that we do not use the fact that φ and ψ have trace 0. That is,
[gl(V ), gl(V )] ⊆ sl(V ), so in fact sl(V ) is an ideal of gl(V ).
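The observation that [gl(V ), gl(V )] ⊆ sl(V ) is equally easy to check numerically; in the sketch below (assuming numpy; not part of the original notes), the bracket of two arbitrary endomorphisms is traceless:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, psi = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# tr(phi psi) = tr(psi phi), so the commutator of ANY two endomorphisms
# has trace zero, i.e. lands in sl(V).
comm = phi @ psi - psi @ phi
assert np.isclose(np.trace(comm), 0)
```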
Note 2.5. We use Fraktur letters to distinguish between gl(V ) and GL(V )
(the general linear group). Similarly for sl(V ) and SL(V ) (the special linear
group). The relationship between the two is that the Lie algebra gl(V ) is the
tangent space at the identity to the Lie group GL(V ). Similarly for sl(V )
and SL(V ).
Given two Lie algebras g1 and g2 , we can endow the vector space g1 ⊕ g2
with a Lie algebra structure by
[(x1 , x2 ), (y1 , y2 )] = ([x1 , y1 ], [x2 , y2 ]).
• C2 with [−, −] = 0
• The matrices ( u v ; 0 −u ) for u, v ∈ C, with [A, B] = AB − BA.
Proof. For z, w ∈ C, let gz,w be the Lie algebra with basis {e1 , e2 } and
[e1 , e2 ] = ze1 + we2 . We have not shown that every gz,w is a Lie algebra, but
certainly every 2-dimensional Lie algebra is isomorphic to some gz,w . Note
that once we have a basis {e1 , e2 } for a Lie algebra as a vector space, the
bracket [e1 , e2 ] completely determines [−, −]. Indeed, for a, b, c, d ∈ C,
[ae1 + be2 , ce1 + de2 ] = [ae1 , ce1 ] + [ae1 , de2 ] + [be2 , ce1 ] + [be2 , de2 ]
= ac[e1 , e1 ] + ad[e1 , e2 ] + bc[e2 , e1 ] + bd[e2 , e2 ]
= (ad − bc)[e1 , e2 ],
which is zero if and only if ad = bc, that is, if and only if ae1 + be2 and
ce1 + de2 are linearly dependent. In particular, the image of [−, −] is the
line C[e1 , e2 ]. So pick any x not in the image of [−, −], and let d ∈ C be given
such that [x, [e1 , e2 ]] = d[e1 , e2 ]. As d ≠ 0, we set h1 = d−1 x and h2 = [e1 , e2 ].
Note that h1 is not in the image of [−, −] and h2 is, so they must be linearly
independent and thus form a basis for g. Computing [h1 , h2 ] gives us
For an algebra A, a derivation is a linear map D : A → A satisfying the
Leibniz rule D(ab) = (Da)b + a(Db); write Der A for the set of derivations
of A. Then Der A is a Lie algebra under
[D1 , D2 ] = D1 ◦ D2 − D2 ◦ D1 .
Proof. Let V be the underlying vector space of the algebra A. Notice that
Der A ⊆ gl(V ) is a Lie sub-algebra, so it suffices to check that [Der A, Der A] ⊆
Der A. This approach avoids checking the Jacobi identity and skew-symmetry
directly.
If D1 , D2 ∈ Der A and a, b ∈ A,
[D1 , D2 ](ab) = D1 D2 (ab) − D2 D1 (ab)
= D1 ((D2 a)b + aD2 b) − D2 ((D1 a)b + aD1 b)
= D1 ((D2 a)b) + D1 (aD2 b) − D2 ((D1 a)b) − D2 (aD1 b)
= (D1 D2 a)b + (D2 a)(D1 b) + (D1 a)(D2 b) + a(D1 D2 b)
− (D2 D1 a)b − (D1 a)(D2 b) − (D2 a)(D1 b) − a(D2 D1 b)
= (D1 D2 a)b − (D2 D1 a)b + a(D1 D2 b) − a(D2 D1 b)
= [D1 , D2 ](a)b + a[D1 , D2 ](b).
It is not even required that A be an algebra. We may instead take A to
be any set having associative addition, and a multiplication operation which
distributes over addition.
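A concrete instance of this result: d/dx and x·d/dx are derivations of C[x], so their bracket must again satisfy the Leibniz rule. The sketch below (plain Python with polynomials as coefficient lists; not part of the original notes) checks this, and in fact finds [d/dx, x·d/dx] = d/dx:

```python
# Polynomials in C[x] as coefficient lists: p[k] is the coefficient of x^k.

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0) for k in range(n)]

def psub(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) - (q[k] if k < len(q) else 0) for k in range(n)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def D1(p):  # the derivation d/dx
    return [k * p[k] for k in range(1, len(p))] or [0]

def D2(p):  # the derivation x * d/dx
    return [k * p[k] for k in range(len(p))]

def D(p):   # the bracket [D1, D2]
    return psub(D1(D2(p)), D2(D1(p)))

p, q = [1, 2, 0, 3], [0, 1, 1]

# [D1, D2] satisfies the Leibniz rule, hence is again a derivation:
assert trim(D(pmul(p, q))) == trim(padd(pmul(D(p), q), pmul(p, D(q))))
# In fact [d/dx, x d/dx] = d/dx:
assert trim(D(p)) == trim(D1(p))
```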
3 Nilpotent Lie algebras
ad_x^{2N}(y) = Σ_{d=0}^{2N} c_d · x^d ◦ y ◦ x^{2N−d},
Theorem 3.4. Let V be a finite dimensional vector space, and let g ⊆ gl(V )
be a Lie sub-algebra consisting entirely of nilpotent operators, so that the
inclusion map ι : g → gl(V ) is a representation of g on V . Then K_{g,ι} ≠ 0.
for all y ∈ h.
Thus, in fact g maps into gl(K_{h,ι} ), so we have a well-defined quotient
representation ρ of g/h on K_{h,ι} given by
ρ(x + h) : v ↦ x(v).
0 = V0 ⊆ · · · ⊆ Vn = V,
4 Solvable Lie algebras
By induction, [bj , x] and [x, aj ] are in Di (g), and the result follows.
y(v0 ) = χ(y)v0 .
y(b_{i+1} ) = y(x(b_i ))
= x(y(b_i )) − [x, y](b_i )
= χ(y)b_{i+1} + Σ_{k<i+1} α_{i,k} b_k
0 = V0 ( V1 ( · · · ( Vd = V
ρ̄ : g → gl(W )
ρ̄(x) : v + Cv0 7→ ρ(x)(v) + Cv0 .
0 = W0 ( W1 ( · · · ( Wd−1 = W,
0 = V0 ( V1 ( · · · ( Vd = V
This is precisely the requirement that adx is upper triangular in the ordered
basis {e1 , . . . , ek }.
4.3 Radicals
Define the radical of a finite dimensional Lie algebra g to be the largest
solvable ideal of g. We denote the radical by Rad(g).
This definition requires some justification. We must show that there is a
unique largest solvable ideal. It suffices to show that if we have two solvable
ideals a and b of g, then the ideal a + b is also solvable.
0 = a0 ⊆ a1 ⊆ · · · ⊆ am = a
0 = b0 ⊆ b1 ⊆ · · · ⊆ bn = b,
be chains of ideals of g such that for each i (that makes sense), ai+1 /ai and
bi+1 /bi are abelian.
Then the sequence of ideals
0 = a0 ⊆ · · · ⊆ am = am + b0 ⊆ · · · ⊆ am + bn = a + b
[x + a + bi , y + a + bi ] = [x, y] + a + bi .
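As a concrete example of solvability, consider the Lie algebra b of upper triangular 4 × 4 matrices: each step of the derived series pushes the nonzero entries further above the diagonal, so the series terminates. The sketch below (assuming numpy; not part of the original notes) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def br(a, b):
    return a @ b - b @ a

# Upper triangular, strictly upper, and "doubly strict" upper matrices:
x, y = np.triu(rng.standard_normal((n, n))), np.triu(rng.standard_normal((n, n)))
u, v = np.triu(rng.standard_normal((n, n)), 1), np.triu(rng.standard_normal((n, n)), 1)
s, t = np.triu(rng.standard_normal((n, n)), 2), np.triu(rng.standard_normal((n, n)), 2)

# D^1(b) is strictly upper triangular (zero on and below the diagonal):
assert np.allclose(np.tril(br(x, y)), 0)
# Brackets of strictly uppers vanish one diagonal higher still:
assert np.allclose(np.tril(br(u, v), 1), 0)
# For n = 4 the next step is already zero: the derived series terminates.
assert np.allclose(br(s, t), 0)
```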
Note 4.6. We are often interested in the case where Rad(g) = 0. This is
often taken as the definition for semi-simplicity. In section 5 we will see
that semi-simplicity has a more natural definition (in terms of direct sums
of “simple” Lie algebras), and that a Lie algebra is semi-simple if and only
if its radical is zero.
• g has no nonzero solvable ideals
• Rad(g) = 0
• g has no nonzero abelian ideals
The equivalence of the first two conditions is obvious. As for the second
pair, note that any abelian Lie algebra is solvable. In the other direction, if
there is a nonzero solvable ideal a, then take the largest n for which Dn (a)
is nonzero. This must be an abelian ideal of g.
5 Semi-simple Lie algebras
Note that in the above computation, we use the fact that ad is a Lie
algebra homomorphism, and the fact that for any two operators A and B,
tr(AB) = tr(BA)
(where the product makes sense). This bilinear form is known as the Killing
form on g. The g-invariance of the Killing form can be expressed in the
following way to highlight its symmetry:
κ([x, y], z) = κ(x, [y, z]).
Taking the trace shows that the Killing form on a is the restriction of the
Killing form on g.
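For sl2 (C) with basis (e, h, f ), the Killing form can be computed directly from the ad matrices. The following sketch (assuming numpy; not part of the original notes) recovers the standard values κ(e, f ) = 4 and κ(h, h) = 8, and checks the invariance identity κ([x, y], z) = κ(x, [y, z]):

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]]); f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, h, f]

def bracket(a, b):
    return a @ b - b @ a

def coords(x):
    # Coordinates of a traceless 2x2 matrix in the basis (e, h, f).
    return np.array([x[0, 1], x[0, 0], x[1, 0]])

def ad(x):
    # Matrix of ad_x in the basis (e, h, f).
    return np.column_stack([coords(bracket(x, b)) for b in basis])

def killing(x, y):
    return np.trace(ad(x) @ ad(y))

K = np.array([[killing(a, b) for b in basis] for a in basis])
assert np.allclose(K, [[0, 0, 4], [0, 8, 0], [4, 0, 0]])

# g-invariance: kappa([e, f], h) = kappa(e, [f, h])
assert np.isclose(killing(bracket(e, f), h), killing(e, bracket(f, h)))
```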
h⊥ = {y ∈ g | β(x, y) = 0, ∀x ∈ h}.
so [x, y] ∈ a⊥ .
L ⊆ M ⊆ gl(V )
for λi ∈ C. We use the fact from linear algebra that there exists a
polynomial P without constant term such that
P (z) = diag(λ1 , . . . , λn ).
Let F = Σ Qλi be the Q-vector space in C spanned by the λi . Let f be an
arbitrary Q-linear functional on F , and let
z0 = diag(f (λ1 ), . . . , f (λn )).
Theorem 5.5 (Cartan criterion for solvability). Let g ⊆ gl(V ), and define
the g-invariant bilinear form hx, yi = trV (x · y). Then g is solvable if and
only if hx, yi = 0 for every x ∈ g and y ∈ [g, g].
trV ([x, y]z) = trV (xyz − yxz) = trV (xyz − xzy) = trV (x[y, z]),
Theorem 5.8. For any finite dimensional Lie algebra g, Rad(g) = [g, g]⊥ ,
where orthogonality is with respect to the Killing form.
Proof. Notice first that [g, g]⊥ is an ideal of g, and that ad([g, g]⊥ ) is solvable.
Clearly, the Killing form on [g, g]⊥ is zero, and so it follows from the Cartan
criterion for solvability that [g, g]⊥ is solvable and hence contained in Rad(g).
On the other hand, Rad(g) ⊆ [g, g]⊥ , again from the Cartan criterion.
Proof.
Rad a = [a, a]⊥ = a ∩ [a, a]⊥ ,
where the first orthogonality is with respect to the Killing form on a, and
the second is with respect to the Killing form on g. Since a is an ideal, these
are the same. Notice that both a and [a, a]⊥ are ideals of g, and so their
intersection, Rad a is an ideal of g.
The following result is the converse of this lemma. That is, together,
these results say that g is semi-simple if and only if Rad(g) = 0.
2. a1 , . . . , an are simple.
so g = b ⊕ b⊥ as Lie algebras.
By induction, b⊥ is the direct sum of finitely many nonzero minimal
ideals of b⊥ . In general, an ideal c of b⊥ need not be an ideal of g. However,
since b⊥ is a direct summand of g, c is an ideal of g. Moreover, it is clearly
minimal and simple. If b⊥ = c1 ⊕ · · · ⊕ ck , then
g = b ⊕ c1 ⊕ · · · ⊕ ck .
It now suffices to show that these are the only minimal ideals of g.
Suppose we had a minimal ideal h of g. Then h ∩ b is an ideal, so either
h = b, or h ∩ b = 0. In the former case we are done immediately. In the
latter by induction, h must be one of the ci . This completes the proof.
6 The universal enveloping algebra
(w1 ⊗ · · · ⊗ wm ) · (v1 ⊗ · · · ⊗ vn ) = w1 ⊗ · · · ⊗ wm ⊗ v1 ⊗ · · · ⊗ vn ,
Let I ⊆ T (g) be the two-sided ideal generated by the set
{x ⊗ y − y ⊗ x − [x, y] | x, y ∈ g}.
Then define
U (g) = T (g)/I.
The first result we have about U (g) is that U and F are adjoint. While
the result is certainly important, the proof comes down to checking many
details.
Note 6.3. We will make two notational shortcuts from now on when dis-
cussing U (g). The first is that we will forego writing x ⊗ y and instead write
xy for the product of x and y in U (g) when the context is clear.
Second, g embeds into U (g) by first identifying g with T 1 (g) and then
seeing that the quotient map taking T (g) to U (g) preserves T 1 (g). From
now on, we will make the identification g ⊆ U (g) tacitly.
then we have
Σ_{i=1}^{n} b_i b^i = Σ_{i,j,k} c_{i,j} d_{i,k} e_j e^k.
given by
Σ_{i=1}^{n} c_{i,j} d_{i,k} = Σ_{a=1}^{n} Σ_{b=1}^{n} c_{a,j} d_{b,k} δ_{a,b}
= Σ_{a=1}^{n} Σ_{b=1}^{n} c_{a,j} d_{b,k} ⟨e_a , e^b ⟩
= ⟨ Σ_{a=1}^{n} c_{a,j} e_a , Σ_{b=1}^{n} d_{b,k} e^b ⟩
= ⟨ b_j , b^k ⟩
= δ_{j,k} .
This means Σ_i e_i e^i = Σ_i b_i b^i , so the Casimir element is well-defined.
Another fact of note is that Cg commutes with every element of the universal
enveloping algebra of g.
Lemma 6.4.
Cg ∈ Z(U (g)).
Proof. The proof is entirely computational and left as an exercise for the
reader.
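One can at least verify the lemma in a small example. For sl2 (C), the Killing form gives the dual basis (f /4, h/8, e/4) to (e, h, f ), and the resulting Casimir element acts on the defining representation C² as the scalar 3/8 and commutes with the image of g. A numerical sketch (assuming numpy; not part of the original notes):

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]]); f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

# Casimir built from the Killing-form dual basis:
# kappa(e, f) = 4 and kappa(h, h) = 8, so e* = f/4, f* = e/4, h* = h/8.
C = e @ (f / 4) + h @ (h / 8) + f @ (e / 4)

assert np.allclose(C, (3 / 8) * np.eye(2))   # acts as a nonzero scalar
for x in (e, h, f):
    assert np.allclose(C @ x - x @ C, 0)      # commutes with the image of g
```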
7 Representations of Lie algebras
ad : x ↦ [x, −].
We use the notation adx for ad(x). Just as in the case with representations
of groups, we often identify the vector space V with the action of g on V ,
dropping the map ρ altogether to simplify notation.
Now suppose that V, W are representations of a Lie algebra g. We endow
the vector spaces V ⊕ W , V ∗ , V ⊗ W , and hom(V, W ) with actions of g to
make them representations by:
• x(v, w) = (xv, xw) on V ⊕ W ;
• (xφ)(v) = −φ(xv) on V ∗ ;
• x(v ⊗ w) = xv ⊗ w + v ⊗ xw on V ⊗ W ;
• (xψ)(v) = x(ψ(v)) − ψ(xv) on hom(V, W ).
Identifying V ∗ ⊗ W with hom(V, W ) by sending φ ⊗ w to the map
ψ : v ↦ φ(v)w,
these actions are compatible:
x(φ ⊗ w) = xφ ⊗ w + φ ⊗ xw
= −φ(x·) ⊗ w + φ ⊗ xw,
which corresponds to the endomorphism
v ↦ −φ(xv)w + φ(v)(xw),
(Diagram: the adjunction square for U and F ; the drawing was lost in
extraction. Its content is that a Lie algebra map ρ : g → F (EndC (V )) =
gl(V ) corresponds to a unique algebra map ρ′ : U (g) → EndC (V ).)
Applying the functor F to ρ′ yields a representation of U (g) (as a Lie algebra)
on V , which restricts to g. This is to say, any representation of g on V factors
through U (g), and does so uniquely.
Lemma 7.1. If V is an irreducible representation of g, then the Casimir
element Cg acts on V by multiplication by a nonzero element of C.
Proof. Extend ρ to ρ : U (g) → gl(V ). Let f be the characteristic polynomial
of ρ(Cg ). Let λ be a root of f , and let W = ker(ρ(Cg ) − λ · id). For any x ∈ g,
ρ(x)(W ) ⊆ W because ρ(x) commutes with ρ(Cg ). Thus, W is a nonzero
subrepresentation of V , so W = V . Thus, for each v ∈ V , ρ(Cg )v = λv
as desired.
Proof. Let βW : g × g → C by
x ↦ α(x).
0→V →W →N →0 (∗)
0 → homC (N, V ) → homC (N, W ) →π homC (N, N ) → 0,
which contains the subsequence
0 → homC (N, V ) → π⁻¹(C · idN ) → C · idN → 0.
1. f (v_n ) = (n + 1)v_{n+1}
2. h(v_n ) = (λ − 2n)v_n
3. e(v_n ) = (λ − n + 1)v_{n−1}

h(v_n ) = (1/n) h(f (v_{n−1} ))
= (1/n) ([h, f ] + f h)(v_{n−1} )
= (f /n) (−2 + h)(v_{n−1} )
= (f /n) (λ − 2(n − 1) − 2) v_{n−1}
= (f /n) (λ − 2n) v_{n−1}
= (λ − 2n) v_n

e(v_n ) = (1/n) e(f (v_{n−1} ))
= (1/n) ([e, f ] + f e)(v_{n−1} )
= (1/n) (λ − 2n + 2)v_{n−1} + (1/n) f (λ − n + 2)v_{n−2}
= (1/n) (λ − 2n + 2)v_{n−1} + ((n − 1)/n) (λ − n + 2)v_{n−1}
= (λ − n + 1)v_{n−1}
34
Last updated October 26, 2014 7 REPRESENTATIONS OF LIE ALGEBRAS
h(v_m ) = −f e(v_m )
(λ − 2m)v_m = −(λ − m + 1)m v_m
λ − 2m = −λm + m² − m
λ = m.
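Conversely, for each m ∈ N the formulas (1)–(3) with λ = m assemble into an (m + 1)-dimensional representation. The sketch below (assuming numpy; not part of the original notes) builds the matrices of e, f , h in the basis v0 , . . . , vm and checks the sl2 relations:

```python
import numpy as np

m = 3                      # highest weight; the representation has dim m + 1
d = m + 1
E = np.zeros((d, d)); F = np.zeros((d, d)); H = np.zeros((d, d))
for n in range(d):
    H[n, n] = m - 2 * n                 # h(v_n) = (m - 2n) v_n
    if n + 1 < d:
        F[n + 1, n] = n + 1             # f(v_n) = (n + 1) v_{n+1}
        E[n, n + 1] = m - n             # e(v_{n+1}) = (m - n) v_n

def br(a, b):
    return a @ b - b @ a

assert np.allclose(br(E, F), H)
assert np.allclose(br(H, E), 2 * E)
assert np.allclose(br(H, F), -2 * F)
```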
Lie algebra in such a way that [ei , fi ] = hi , [hi , ei ] = 2ei and [hi , fi ] = −2fi ,
along with other properties. The Lie sub-algebra h generated by {h1 , . . . , hn }
should be abelian, and maximally so. The sub-algebra h will be a Cartan
sub-algebra (defined below) and will have nice representation theoretic properties.
We will prove that such a generating set always exists, and that all such
are isomorphic. We can then use the relations between them to build root
systems and finally Dynkin diagrams which we can classify via combinatorial
means, and pull back this classification to the Lie algebras.
Specifically, for sl2 (C), the Cartan sub-algebra was h = Ch, and restricting
the adjoint action to h, we see that sl2 (C) decomposes as
sl2 (C) = Cf ⊕ Ch ⊕ Ce,
the eigenspaces of ad h with eigenvalues −2, 0, and 2.
While reading the next section, it is worthwhile to think back to our work
on sl2 (C).
Example 7.8. In the case of sl2 (C) from the previous section, it is clear
that Ch is nilpotent (it is abelian). Moreover, [h, ae + bf + ch] = 2ae − 2bf ,
which is in Ch if and only if a = b = 0, so N_{sl2 (C)} (Ch) = Ch, making Ch
a Cartan sub-algebra of sl2 (C).
Exercise 7.9. Let h be the Lie sub-algebra of sln (C) consisting of the di-
agonal matrices (with trace zero).
• h is a Cartan sub-algebra.
Now let g be a Lie algebra, and pick some x ∈ g. Define gxλ to be the
generalized λ-eigenspace of adx . We give a few equivalent formulations of
this definition:
Though we will not continue in this direction, this is the beginnings of the
construction of the Jordan canonical form for the operator adx on the vector
space g.
Let Px ∈ C[t] denote the characteristic polynomial of adx acting on g. We
write
Px = a_{0,x} + a_{1,x} t + · · · + a_{n−1,x} t^{n−1} + t^n .
Note that a0,x = (−1)n det adx = 0, since adx is not injective (as [x, x] = 0).
In fact, we can determine the smallest nonzero coefficient ai,x by determining
the rank of the map adx . In particular, the smallest i for which ai,x is nonzero
is simply dim g^x_0 . More generally, we may also write Px as
Px = Π_{λ∈C} (t − λ)^{dim g^x_λ} .
Sometimes it will be useful to use the notation r(x) in place of dim g^x_0 .
That is, we define r(x) to be the smallest i such that a_{i,x} ≠ 0, where the
a_{i,x} are the coefficients of the characteristic polynomial Px .
Now we may define the rank of g as rank g = min{dim g^x_0 | x ∈ g}.
Clearly rank g is bounded by 1 ≤ rank g ≤ dim g if g ≠ 0. We say that x ∈ g
is regular if r(x) = rank g.
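For sl2 (C) these notions are easy to compute: r(x) is the dimension of the generalized 0-eigenspace of adx , which equals 3 minus the rank of ad_x³. The sketch below (assuming numpy; not part of the original notes) shows that h is regular with r(h) = 1 = rank sl2 (C), while r(e) = 3 since ade is nilpotent:

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]]); f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, h, f]

def coords(x):
    # Coordinates of a traceless 2x2 matrix in the basis (e, h, f).
    return np.array([x[0, 1], x[0, 0], x[1, 0]])

def ad(x):
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

def r(x, dim=3):
    # Dimension of the generalized 0-eigenspace of ad_x.
    return dim - np.linalg.matrix_rank(np.linalg.matrix_power(ad(x), dim))

assert r(h) == 1   # h is regular: rank sl2(C) = 1
assert r(e) == 3   # ad_e is nilpotent, so r(e) = dim sl2(C)
```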
0 → gx0 → g → g/gx0 → 0.
Because g^x_0 is a subalgebra, the vector spaces are stable under the action
of the adjoint representation, making the sequence an exact sequence of
representations as well.
Let Ω = {y ∈ g^x_0 | ady is invertible on g/g^x_0 }. Note that x ∈ Ω, by the
definition of g^x_0 . Moreover, Ω is open and dense in g^x_0 . To see this, note
that y ∈ Ω if and only if det ady ≠ 0, where ady in this context means the
action on g/g^x_0 . Since, in any basis for g^x_0 , det ady is smooth (actually
polynomial), it follows that the set of points with nonzero determinant is
dense. Since {0} is closed in C, Ω must be an open set.
Proof. Let fx (t) = b0 (x) + b1 (x)t + · · · + b_{n−1} (x)t^{n−1} + t^n denote the
characteristic polynomial of adx acting on g. The coefficients are polynomials
bi (x) depending on x. Clearly, bi = 0 for each i < rank g. The condition
that x ∈ greg is equivalent to b_{rank g} (x) ≠ 0, proving the result.
Theorem 7.12. Let G be the subgroup of Aut(g) generated by all e^{adx} for
x ∈ g. Let h, h′ be Cartan sub-algebras of g. Then there exists an α ∈ G
such that α(h) = h′ .
0 = D0 ⊆ D1 ⊆ · · · ⊆ Dr = g/h
Di+1 = Di ⊕ ker x.
Now take any y ∈ h and v ∈ ker x. The action of y on v is ady (v) ∈ Di+1 .
Since h is nilpotent, there is some n for which ad_x^n (y) = 0, so y(v) ∈ g^x_0 =
ker x. It follows that h(ker x) ⊆ ker x. This shows that the direct sum
respects the action of the representation.
Take z ∈ g such that z + h is a nonzero element of ker x, and let y ∈ h.
We know that [y, z] ∈ h, so ady (z + h) = 0 in g/h. But [y, z] = − adz (y),
so z ∈ Ng (h), which is equal to h because h is Cartan. This contradicts our
choice of z, so there must be no such minimal i. This means that each λj
is nonzero. Thus, we can take x ∈ h such that x ∉ ker λj for j = 1, . . . , r.
Such an x will have adx invertible on g/h.
For our next result, we need several definitions. First, define the centralizer
of a sub-algebra h of a Lie algebra g by
Cg (h) = {x ∈ g | [x, y] = 0 for all y ∈ h}.
1. β|h is non-degenerate.
2. h is abelian
3. Cg (h) = h
Another way to say this is that gα is the collection of all x for which ady acts
as multiplication by α(y) for every y ∈ h. Let R = {α ∈ h∗ | gα ≠ 0, α ≠ 0}.
where Ei,j is the elementary matrix with a 1 in position (i, j) and zeroes
everywhere else. Each such space is 1-dimensional, which gives n2 − n di-
mensions. Since dim h = n − 1, and dim sln (C) = n2 − 1, we already have
enough subspaces. So by the decompositions from theorem 7.17, we have
found all possible functionals in R.
8 Root systems
8.1 Introduction
We now turn away from Lie algebras briefly to discuss a combinatorial object
known as a root system. We will classify all irreducible root systems, and
then use this classification to classify the simple Lie algebras.
Let V be a finite dimensional vector space over R, and let Φ ⊆ V be a
finite subset of V . We say that Φ is a root system of V if
1. span Φ = V
2. For each α ∈ Φ, there is a reflection sα ∈ GL(V ) such that
• sα (α) = −α
• sα (Φ) = Φ
• sα (β) − β ∈ Zα for all β ∈ Φ.
3. If α ∈ Φ, then 2α ∉ Φ
It is worth recalling the definition of a reflection. A reflection on V
is an endomorphism T ∈ GL(V ) such that T 2 = id, and ker(T − id) is a
hyperplane (subspace of codimension 1) of V . It should be clear that in an
appropriate basis, T can be represented by the matrix diag(1, . . . , 1, −1).
What information can we glean about these root systems? In fact, they
have quite rigid structure. First, we notice that for each α ∈ Φ, sα is unique.
If s, s′ are two such reflections, then ss′ (α) = α. Thus, its only eigenvalue
is 1, so in an appropriate basis ss′ is upper triangular with every diagonal
entry equal to 1. If any of the entries above the diagonal are nonzero, then
ss′ has infinite order. However, ss′ acts on the finite set Φ, and since Φ
spans V , ss′ must have finite order and hence be the identity. Thus, s = s′ .
(Figures: the 2-dimensional root systems, drawn with roots labeled ±α, ±β,
±(α + β); the drawings were lost in extraction.)
If we have a root system Φ, and V = V1 ⊕ V2 , Φ = Φ1 ⊔ Φ2 , where each Φi
is a root system on Vi , we say that Φ is the sum of the root systems Φ1 and
Φ2 . If no such nontrivial decomposition exists (i.e., one with dim Vi > 0 for
i = 1, 2), we say that the root system Φ is irreducible. For example, A1 is
irreducible, whereas A1 × A1 is not. All other 2-dimensional root systems
are irreducible.
Fix a root system Φ of V . Let WΦ denote the subgroup of GL(V )
generated by the set of reflections {sα | α ∈ Φ}. WΦ is called the Weyl
group of Φ. When the root system is evident, we often drop the subscript
and simply write W .
Proof. From the definition of root systems, W (Φ) = Φ, so we get the map
ψ : W → Sym(Φ) defined by
ψ(sα ) : β ↦ sα (β),
sα (v) = v − 2 (α, v)/(α, α) · α.
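The reflection formula is easy to implement and test. The sketch below (assuming numpy; not part of the original notes) checks that sα negates α, fixes the orthogonal hyperplane, and is an involution:

```python
import numpy as np

def s(alpha, v):
    # Reflection through the hyperplane orthogonal to alpha.
    return v - 2 * np.dot(alpha, v) / np.dot(alpha, alpha) * alpha

alpha = np.array([1., -1., 0.])
v = np.array([2., 5., -1.])
w = np.array([1., 1., 0.])          # orthogonal to alpha

assert np.allclose(s(alpha, alpha), -alpha)   # s_alpha(alpha) = -alpha
assert np.allclose(s(alpha, s(alpha, v)), v)  # s_alpha is an involution
assert np.allclose(s(alpha, w), w)            # the hyperplane is fixed pointwise
```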
For α, β ∈ Φ, let a_{αβ} = 2(α, β)/(α, α). It should be clear from the definition
of a root system (and sα ) that a_{αβ} ∈ Z. In particular, a_{αα} = 2.
If we let θ denote the angle from α to β, we have (α, β) = |α| |β| cos θ.
Now we can compute a_{αβ} a_{βα} = 4 cos² θ ∈ Z, which constrains the angles
between vectors in a root system to be one of very few choices. If a_{αβ} a_{βα} = 4,
then β = ±α. Otherwise one of a_{αβ} and a_{βα} must be ±1 or 0, and up to
symmetry, the only options are given in the table in figure 6.
Figure 6: All possibilities for lengths of and angles between two vectors in
an arbitrary root system
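The allowed angles can be enumerated directly: since aαβ aβα = 4 cos² θ must be an integer, and equals 4 only when β = ±α, the product otherwise lies in {0, 1, 2, 3}. A small sketch (plain Python; not part of the original notes):

```python
import math

# 4 cos^2(theta) in {0, 1, 2, 3} gives cos(theta) = sqrt(p)/2.
# Listing the acute possibilities (obtuse angles are their complements):
acute = sorted(round(math.degrees(math.acos(math.sqrt(p) / 2))) for p in (0, 1, 2, 3))
assert acute == [30, 45, 60, 90]
```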
That is, Φ+ is the set of those vectors in Φ which can be expressed
only as a positive linear combination of vectors from ∆. The set Φ− is the
collection of vectors in Φ which can be expressed only as a negative linear
combination of vectors from ∆.
If Φ = Φ+ ⊔ Φ− , we say that ∆ is a basis for the root system. It is
immediate that any basis for a root system spans the entire vector space, as
it spans the root system, which in turn spans the vector space. Moreover, if
there is a linear dependence among ∆, then one could add or subtract
terms from each other to take a vector in Φ+ and express it with the coefficient
of some basis vector in ∆ being negative, contrary to the definition of
Φ+ . Thus, a basis for a root system is a basis for the entire vector space as
well.
As one might hope, every root system has a basis. In fact, we can do
better than this, but to do so, we need some definitions. For a root system
Φ, let t ∈ V ∗ be a linear functional, and let Φ+_t = {α ∈ Φ | t(α) > 0}.
Similarly, define Φ−_t = {α ∈ Φ | t(α) < 0}. We say that a root α ∈ Φ+_t is
decomposable if there exist β, γ ∈ Φ+_t such that α = β + γ. Otherwise,
we say that α is indecomposable. Let
∆t = {α ∈ Φ+_t | α is indecomposable}.
Then we claim that every root system has a basis and that every basis for
a root system is ∆t for some t ∈ V ∗ . To prove this result, we first need a
lemma.
Lemma 8.4. Let α, β ∈ Φ such that Rα ≠ Rβ, and assume (α, β) > 0.
Then β − α ∈ Φ.
Proof. Since Rα ≠ Rβ, one of a_{αβ} and a_{βα} is ±1. Without loss of generality,
let a_{βα} = ±1. Since a_{βα} = 2(α, β)/(α, α) > 0, it must be that a_{βα} = 1. Then
sα (β) = β − a_{βα} α = β − α ∈ Φ.
Theorem 8.5. Let Φ be a root system over a vector space V . For each
t ∈ V ∗ such that t(α) ≠ 0 for all α ∈ Φ, ∆t is a basis. Moreover, every basis
for a root system is of this form.
for bi , ci ∈ R≥0 . However, then α = Σ_i (bi + ci )αi , meaning that for each i
with αi ≠ α, bi + ci = 0, and hence bi = ci = 0. However, for αi = α,
bi + ci = 1, which is impossible if β, γ are both to be in Φ+ . It must be, then,
that ∆ ⊆ ∆t . Since they are both bases for V , they have the same size, and
are therefore equal.
C = (a_{αβ} )_{α,β∈∆}

( 2 −1 )      ( 2 −3 )
( −2 2 )      ( −1 2 )
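As an illustration, a Cartan matrix can be computed from explicit simple roots. Taking the G2 simple roots (1, −1, 0) and (−1, 2, −1) from the coordinate table in section 8.4, the sketch below (assuming numpy; not part of the original notes) computes aαβ = 2(α, β)/(α, α) pairwise:

```python
import numpy as np

# Simple roots of G2 in R^3 (from the coordinate table in section 8.4)
simple = [np.array([1., -1., 0.]), np.array([-1., 2., -1.])]

def a(alpha, beta):
    return 2 * np.dot(alpha, beta) / np.dot(alpha, alpha)

C = np.array([[a(x, y) for y in simple] for x in simple])
assert np.allclose(C, [[2, -3], [-1, 2]])
```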
4. W = W0 .
Since one coefficient is positive (namely mα0 > 0), by the disjointness of Φ+
and Φ− , it must be that all coefficients are positive, and hence sβ (γ) ∈ Φ+ .
Moreover, sβ (γ) ≠ β because γ ≠ −β.
Define the vector ρ = (1/2) Σ_{α∈Φ+} α. Then
sβ (ρ) = (1/2) Σ_{α∈Φ+ \{β}} α + (1/2) sβ (β) = ρ − β
(Figures: the Coxeter/Dynkin diagrams for each type were drawn here; the
drawings were lost in extraction, and only the labels remain.)
A1 × A1
A2
B2 = C2
G2
An
Bn
Cn
Dn
G2
F4
E6
E7
E8
The index n denotes the number of vertices. Notice that for n < 2, Bn
and Cn are not defined. For n < 3, Dn is not defined. Also, B2 is isomorphic
to C2 , so when we listed the 2-dimensional root systems we did not need to
mention C2 .
The diagrams An , Bn , Cn , and Dn are infinite series of diagrams and
correspond to Lie algebras of classical importance. For instance, the Lie
algebra of type An is isomorphic to sln+1 (C). The other Lie algebras can be
found in the table below:
Type Lie algebra Dimension
An sln+1 (C) n2 + 2n
Bn so2n+1 (C) 2n2 + n
Cn sp2n (C) 2n2 + n
Dn so2n (C) 2n2 − n
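The dimension column can be checked against the standard closed forms dim sln+1 = (n + 1)² − 1, dim so_m = m(m − 1)/2, and dim sp2n = n(2n + 1). A quick arithmetic check (plain Python; not part of the original notes):

```python
# Dimensions of the classical simple Lie algebras, checked against the
# closed forms in the table above for several ranks n.
for n in range(2, 9):
    assert (n + 1) ** 2 - 1 == n ** 2 + 2 * n            # A_n: sl_{n+1}(C)
    assert (2 * n + 1) * (2 * n) // 2 == 2 * n ** 2 + n  # B_n: so_{2n+1}(C)
    assert n * (2 * n + 1) == 2 * n ** 2 + n             # C_n: sp_{2n}(C)
    assert (2 * n) * (2 * n - 1) // 2 == 2 * n ** 2 - n  # D_n: so_{2n}(C)
checked = True
```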
8.4 Classification
Through a series of lemmas about inadmissible diagrams, we will arrive at
the classification theorem for admissible Dynkin diagrams (and hence root
systems and then eventually simple Lie algebras). We will then explicitly
construct root systems of all connected Dynkin diagrams we have not ruled
out.
Our classification of Dynkin diagrams will actually be a classification of
Coxeter diagrams. As we will see, if one diagram is admissible, then reversing
any collection of oriented edges results in an admissible diagram. This is
sort of a misleading sentence (though accurate) because, as we will see, there
is only ever at most one oriented edge in an admissible diagram. Reversing
this edge also results in an admissible diagram.
Proof. If u, v ∈ V (D), the vertex set of D, with uv ∈ E(D), the edge set of
D, then 2(u, v) ≤ −1. Then:
0 < ( Σ_{v∈V (D)} v , Σ_{v∈V (D)} v ) = n + 2 Σ_{{u,v}, u≠v} (u, v),
where the last sum runs over unordered pairs of distinct vertices.
Proof. If there were such a vertex v, then the subdiagram induced by v and
its neighbors would yield a problem. Indeed, in the induced subdiagram the
neighbors of v are adjacent only to v, lest there be a cycle. Let f (u) denote
the multiplicity of the edge between u and v, so that (u, v)² = f (u)/4 for
each u ∈ N (v). Then,
1 = (v, v) > Σ_{u∈N (v)} (u, v)² = (1/4) Σ_{u∈N (v)} f (u),
since the vectors in N (v) are pairwise perpendicular and v is not in their
span. If the weighted degree of v is 4 or larger, we arrive at a contradiction.
Proof. Let u and v be the basis roots corresponding to the vertices on the
edge e. Let x = u + v, and let D′ = D/e (the new vertex is x). Then
(x, x) = (u + v, u + v) = (u, u) + 2(u, v) + (v, v) = 1 − 2 · (1/2) + 1 = 1.
Moreover, it is clear that if uy is an edge in D, then (x, y) = (u, y) + (v, y) =
(u, y), and so in D′ , the edge between x and y is of the same type as it was
in D. Similarly for when vy is an edge in D.
Corollary 8.17. No Dynkin diagram has more than one doubled edge.
The diagram on the vertices x1 , x2 , x3 , x4 , x5 (figure lost in extraction)
is inadmissible.
Corollary 8.19. The only admissible diagram with a doubled edge not hav-
ing any leaf nodes is F4 .
Corollary 8.20. The only admissible diagrams with a doubled edge are Bn ,
Cn , and F4 .
Proof. We have already limited ourselves to diagrams with exactly one dou-
bled edge. If that doubled edge contains no leaves of the tree, it must be F4 .
Now it suffices to argue that if the doubled edge is a leaf edge, the diagram
must be a path (have no vertices of degree ≥ 3). This is clear, because if
there was such a vertex, let e denote the doubled edge, and let v denote the
closest vertex of degree larger than or equal to 3 (why is this well-defined?).
Then contract the path from e to v to obtain a vertex of weighted degree at
least 4, yielding a contradiction.
Proof. If there were two such vertices, choose two of them, u and v, whose
distance is minimal. Contract all of the edges between them to yield a vertex
of degree 4. This contradicts our ability to contract single edges and obtain
admissible diagrams.
Corollary 8.22. All admissible diagrams with only single edges are of the
form Tpqr , where Tpqr has a central vertex v and 3 “legs” of length p − 1,
q − 1, and r − 1. That is, Tpqr is given by
yq−1
yq−2
y2
y1
xp−1 xp−2 x2 x1
v
z1
z2
zr−2
zr−1
Now it suffices to show that the rest of these diagrams are Dynkin dia-
grams of viable root systems. We provide coordinates for the simple roots
in the table below. You may notice that some matrices are not square. For
(Table of simple-root coordinates for E6 , E7 , E8 , F4 , and G2 ; the matrix
columns were garbled in extraction. Still legible: the E-series rows consist
of entries of the shape ei − ei+1 together with a final root with half-integer
coordinates (−1/2, . . . , −1/2), and the G2 row lists the simple roots
(1, −1, 0) and (−1, 2, −1) in R³.)
9 Semi-simple Lie algebra construction
gα = {x ∈ g | ∀y ∈ h, [y, x] = α(y)x}.
That is, we would like to take the vector space decomposition and induce
the adjoint representation on it as a direct sum. We cannot precisely obtain
this result, but we can get “close enough,” as we will see in this section.
Additionally, it is often convenient to write h = g0 . Strictly speaking
this is an abuse of notation, as 0 ∉ Φ if we want Φ to be a root system, but
it simplifies statements of theorems, so we keep it.
Proof.
Proof. It suffices to show that α(hα ) ≠ 0. If so, then we can take any vector
v ∈ hα with α(v) ≠ 0, and define
Hα = (2/α(v)) v.
A = Cv ⊕ Cv+ ⊕ Cv−
If ∆ is a basis for Φ as defined above, let n+ = ⊕_{α∈Φ+} gα , and let n−
be defined similarly. Then we have the following result:
3. g = n+ ⊕ h ⊕ n− .
[hi , fj ] = −aij fj ,    ad_{ei}^{1−aij} (ej ) = 0,    ad_{fi}^{1−aij} (fj ) = 0
Type   Description
An     sl_{n+1}(C)
Bn     so_{2n+1}(C) = {x ∈ gl_{2n+1}(C) | x^t S = −Sx},  S = [[1, 0, 0], [0, 0, id_n], [0, id_n, 0]]
Cn     sp_{2n}(C) = {x ∈ gl_{2n}(C) | x^t S = −Sx},  S = [[0, id_n], [−id_n, 0]]
Dn     so_{2n}(C) = {x ∈ gl_{2n}(C) | x^t S = −Sx},  S = [[0, id_n], [id_n, 0]]
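One can verify numerically that the condition x^t S = −Sx really cuts out a Lie subalgebra, i.e., that it is closed under the bracket. Note that when S is antisymmetric (type C) the product Sx is symmetric, and when S is symmetric (type D) it is antisymmetric, which gives a convenient parametrization x = S^{-1}A (a sketch; names mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_g(x, S):
    """Check the defining condition x^T S = -S x."""
    return np.allclose(x.T @ S, -S @ x)

def random_element(S, sym):
    # If S is antisymmetric, Sx must be symmetric (and vice versa),
    # so x = S^{-1} A with A (anti)symmetric parametrizes the algebra.
    n = S.shape[0]
    m = rng.standard_normal((n, n))
    A = m + m.T if sym else m - m.T
    return np.linalg.solve(S, A)

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
S_C = np.block([[Z, I], [-I, Z]])    # sp_{2n}: S antisymmetric
S_D = np.block([[Z, I], [I, Z]])     # so_{2n}: S symmetric

for S, sym in [(S_C, True), (S_D, False)]:
    x, y = random_element(S, sym), random_element(S, sym)
    assert in_g(x, S) and in_g(y, S)
    assert in_g(x @ y - y @ x, S)    # closed under the bracket
```

The closure computation mirrors the algebraic one: [x, y]^t S = y^t x^t S − x^t y^t S = −S[x, y] whenever x and y satisfy the condition.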
10 More representation theory
Proof. Let V′ = Σ C Fi1 · · · Fir Hj1 · · · Hjs (v). That is, V′ is the subspace
generated by all possible repeated applications of the Fi and Hj to v. Note
that we need not worry about the order, because we can replace Hi Fj with
[Hi, Fj] + Fj Hi. The first term is either zero or some linear combination
of the Fk's, and the second term is in the correct order. This technique is
reminiscent of the proof of the Poincaré-Birkhoff-Witt theorem, to be seen
in section 10.2.
Now Hi(v) = λ(Hi)v, so V′ = Σ C Fi1 · · · Fir (v) is a subrepresentation,
generated by v under n−. Thus, if V is to be irreducible, V = V′.
By our lemma, Fi1 · · · Fir v ∈ V_{λ−αi1−···−αir}, meaning

V ⊆ Σ V_{λ−αi1−···−αir},

which gives the desired result.
We say that λ is the highest weight. Given any basis ∆ for our root
system Φ, there is a unique highest weight. Indeed, if there were two highest
weights λ1 and λ2, then

λ1 = λ2 − Σ αik   and   λ2 = λ1 − Σ αjk,

where the αik and αjk are positive roots. Combining the two equations gives
λ1 = λ1 − Σ αjk − Σ αik, so we must be subtracting nothing, and λ1 = λ2.
Moreover, since our Weyl group acts transitively on bases for Φ, it acts
transitively on highest weights. In fact, if λ is a highest weight for a finite
dimensional representation V, and W the Weyl group associated with Φ,
then the non-trivial weights all lie in the convex hull of the set
Wλ = {wλ | w ∈ W}.
Let P = {λ ∈ h∗ | λ(Hi) ∈ Z ∀i = 1, . . . , n}, and let P+ = {λ ∈ h∗ |
λ(Hi) ∈ N ∀i = 1, . . . , n}. We call P the set of weights, and P+ the set of
dominant weights. Clearly P+ ⊆ P.
Example 10.4. In the case of sl2(C), h = Ch, where h = diag(1, −1) (as
in section 7.3). Then h∗ = Ch∗, where h∗ is dual to h, and α = 2h∗. Then
P = Zh∗ ⊇ Nh∗ = P+. Note that α ∈ P+, but P+ is not generated by α.
A more careful treatment will show that the order in which we apply
such swaps is irrelevant; in the end, any ordering of the swaps will yield the
same result.
Note that the Poincaré-Birkhoff-Witt theorem made no use of the field.
The theorem is true for Lie algebras over any field, regardless of characteris-
tic. Moreover, we do not even need our Lie algebra to be finite dimensional.
If g is not finite dimensional, take any basis and assign it a well-ordering.
Define standard monomials in the same way. The rest of the proof goes
through with mild changes.
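The straightening procedure behind PBW — repeatedly replacing an out-of-order adjacent pair xy by yx + [x, y] — can be made concrete for sl2(C) with the basis order f < h < e. The toy implementation below (mine, not from the notes) also illustrates the claim that the order in which the swaps are applied does not matter:

```python
# PBW straightening in U(sl2), with PBW order f < h < e.
order = {'f': 0, 'h': 1, 'e': 2}
bracket = {                      # [x, y] for x > y, as {word: coefficient}
    ('h', 'f'): {('f',): -2},    # [h, f] = -2f
    ('e', 'f'): {('h',): 1},     # [e, f] = h
    ('e', 'h'): {('e',): -2},    # [e, h] = -2e
}

def straighten(expr, pick_last=False):
    """Rewrite a linear combination {word: coeff} of monomials into standard
    (weakly increasing) monomials via the rule xy -> yx + [x, y]."""
    expr = dict(expr)
    while True:
        bad = [(w, i) for w in expr for i in range(len(w) - 1)
               if order[w[i]] > order[w[i + 1]]]
        if not bad:
            return {w: c for w, c in expr.items() if c}
        w, i = bad[-1] if pick_last else bad[0]   # choose which swap to do
        c = expr.pop(w)
        swapped = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
        expr[swapped] = expr.get(swapped, 0) + c
        for tail, bc in bracket[(w[i], w[i + 1])].items():
            u = w[:i] + tail + w[i + 2:]          # shorter correction term
            expr[u] = expr.get(u, 0) + c * bc

# ef = fe + h in U(sl2)
assert straighten({('e', 'f'): 1}) == {('f', 'e'): 1, ('h',): 1}
# Two different swap orders yield the same standard form
word = {('e', 'h', 'f'): 1}
assert straighten(word) == straighten(word, pick_last=True)
```

Termination follows because each swap either shortens a word or reduces its number of inversions, exactly the measure used in the PBW argument.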
A Adjointness of (U, F )
Theorem A.1. Let F and U be the functors defined in section 6.1. Then
(U, F ) form an adjoint pair. That is, for any Lie algebra g and C-algebra
A,
homAC(U(g), A) ≅ homLC(g, F(A)),
where the isomorphism is natural in both g and A.
Proof. Let us first describe the isomorphism. Define Φ : homAC(U(g), A) →
homLC(g, F(A)) by

Φ(f) : x ↦ f(x + I).

The map Ψ : homLC(g, F(A)) → homAC(U(g), A) is harder to describe. If
g ∈ homLC(g, F(A)), we can extend g to ḡ : T(g) → A by defining ḡ on pure
tensors: ḡ(x1 ⊗ · · · ⊗ xn) = g(x1) · · · g(xn). Since g is a map of Lie algebras,
ḡ vanishes on the ideal I, and so induces a map ĝ : U(g) → A; we set
Ψ(g) = ĝ. Then

Φ(Ψ(g)) = Φ(ĝ) = (x ↦ ĝ(x + I)) = (x ↦ ḡ(x)) = g.
For naturality in g, let φ : g → h be a map of Lie algebras, and write hφ
and hU(φ) for pre-composition with φ and U(φ). We must check that the
square with horizontal maps Φ^A_h : homAC(U(h), A) → homLC(h, F(A))
and Φ^A_g, and vertical maps hU(φ) and hφ, commutes. For
f ∈ homAC(U(h), A),

hφ(Φ^A_h(f)) = Φ^A_h(f) ◦ φ
= (x ↦ f(x + I)) ◦ φ
= (y ↦ f(φ(y) + I))
= (y ↦ f(U(φ)(y + I)))
= Φ^A_g(f ◦ U(φ))
= Φ^A_g(hU(φ)(f)).
For naturality in A, let φ : A → B be a map of C-algebras, and write
hφ : homAC(U(g), A) → homAC(U(g), B) and hF(φ) : homLC(g, F(A)) →
homLC(g, F(B)) for post-composition with φ and F(φ). The corresponding
square with horizontal maps Φ^A_g and Φ^B_g commutes:
hF(φ)(Φ^A_g(f)) = hF(φ)(x ↦ f(x + I))
= (x ↦ F(φ)(f(x + I)))
= (x ↦ φ(f(x + I)))
= Φ^B_g(φ ◦ f)
= Φ^B_g(hφ(f)).
Index
abelian, 6
adjoint functor, 26
adjoint representation, 8, 30
algebra, 4
algebraically closed, 42
Baire category theorem, 39
basis, 48
bilinear form, 19
bracket, 4
Cartan criterion, 22
Cartan matrix, 50
Cartan sub-algebra, 37
Carter subgroup, 40
Casimir element, 28
category, 26
center, 7
central series, 10
centralizer, 41
characteristic polynomial, 38
correspondence theorem, 23
Coxeter diagram, 53
degenerate, 20
derivation, 7, 64
derived series, 14
direct sum, 5, 61
dual basis, 28
Dynkin diagram, 37, 53
eigenspace, 33, 61, 65
eigenvalue, 15, 33, 61
eigenvector, 14, 61
elementary matrix, 43
Engel's theorem, 13, 39
evaluation map, 51
exceptional, 54
full flag, 13, 16, 40
functor, 26
general linear group, 5
gl(V), 5
homomorphism, 8, 30
hyperplane, 44
ideal, 5
indecomposable, 48
inner automorphism, 39
inner product, 20
invariant form, 19
irreducible, 31, 46
Jacobi identity, 4
Jordan canonical form, 21, 38, 42
Killing form, 19, 28
Kronecker delta, 28
Lagrange interpolation, 21
Lie algebra, 4
Lie's theorem, 14
Lie-Kolchin triangularization, 16
Maschke's theorem, 33, 47
nilpotent, 10
non-degenerate, see degenerate
normalizer, 12, 37
octonions, 64
Poincaré-Birkhoff-Witt theorem, 66, 68
primitive, 65
primitive vector, 33
quotient, 7
radical, 17
rank, 38
reflection, 44
regular, 38
representation, 8, 11, 30
root system, 37, 44
tensor algebra, 26
Verma module, 67
weight, 66
weight, dominant, 66
weight, highest, 66
weight modules, 68
Weyl group, 46