Algebras
Karin Erdmann
Mathematical Institute
University of Oxford
October 2007
Introduction
You will probably have studied groups acting on sets, and have seen that group actions
occur in many situations. Very often, the underlying set has the structure of a vector space,
and the group acts by linear transformations. This is very good because in this case one gets
new information, for example by exploiting invariant subspaces, or eigenvalues, or other
properties coming from linear algebra.
The action of a group element on a vector space is always invertible. But in many
applications one needs to deal with linear maps which are not invertible. As an extreme,
take a linear map whose iteration eventually maps everything to zero. For example, taking
derivatives of functions is a linear map, and if a polynomial of degree 2 is differentiated three
times one always gets zero.
Therefore one would also like to study actions by linear transformations which are not
necessarily invertible. One appropriate structure to model such actions is that of an algebra,
more precisely, an associative K-algebra where K is a field. This includes many known
examples, like the polynomials K[X], or square matrices. New examples are group algebras,
which can be thought of as a linearisation of groups; these ensure that group actions
by linear transformations can be viewed from this new perspective.
We will start by introducing these algebras, and give examples, and we will investigate
some general properties.
If an algebra A acts on a vector space V, then V together with this action is said to
be an A-module. [This is analogous to the approach for group actions: "If G acts on a set
Ω, then Ω is a G-set" becomes "If A acts on a vector space V, then V is an A-module".]
The second chapter studies modules and their general properties. Furthermore, we define
actions of algebras on vector spaces, and show that A-modules and actions of A on vector
spaces are the same. For some purposes the language of modules is more convenient, but
sometimes it is more natural to think of actions.
An A-module V is said to be simple (or irreducible) if it is non-zero, and if it does
not have any subspaces which are invariant under the action of A except V and 0. Simple
modules are the building blocks for arbitrary finite-dimensional A-modules. The Jordan-Hölder
Theorem makes this precise, and we will prove this theorem, for modules, in the
third chapter.
The nicest modules are the ones which are direct sums of simple modules; they are called
semisimple. An algebra for which all modules are semisimple is said to be a semisimple
algebra. Semisimple modules and algebras are investigated in Chapter 4. Fortunately, the
structure of semisimple algebras is completely understood: it is described by the Wedderburn
Theorem. This is a very important result on algebras, and it is used in many situations. We
will give a proof when K = C, in Chapter 5.
Maschke's Theorem characterizes precisely when algebras arising from group actions are
semisimple. Namely, this is the case if and only if the characteristic of the field does not
divide the order of the group. This is proved in Chapter 6.
Maschke's Theorem has numerous important applications. Combining it with Wedderburn's
theorem gives a complete description of the irreducible representations of G over C.
This can be taken as the starting point for the study of group characters. Given a
representation ρ : G → GL(n, C), the character associated to this representation is the
function χ : G → C which takes g ∈ G to the trace of ρ(g), that is
χ(g) = ∑_{i=1}^n a_{ii} where ρ(g) = [a_{ij}].
Characters have very remarkable properties. For example, one can detect by
just looking at the characters whether or not two representations are equivalent. Characters
have applications in many other parts of mathematics (or even in other sciences).
The last chapter deals with general properties of characters, and gives some applications.
This chapter is short since the subject is well covered by existing literature. Further material
can be found for example in the books by Ledermann, or by James and Liebeck.
It is expected that you are familiar with first and second year basic linear algebra,
such as elementary properties of vector spaces and linear maps. As well, we expect that
you remember the group axioms. We make some conventions: we only consider rings with
identity, and all vector spaces will be finite-dimensional, except for the polynomial ring
K[X]. We mention some occasions when results also hold without assuming that the vector
spaces are finite-dimensional.
Oxford, 2007 KE
1 Algebras
The main objects we want to study are algebras over some field. Roughly speaking, an algebra
is a ring which is also a vector space, in which scalars commute with everything.
We recall the definition of a ring.
Definition 1.1 (Reminder)
A ring R is an abelian group (R, +) which in addition has another operation, (r, s) ↦ rs :
R × R → R, called multiplication, such that
(i) (Distributivity) r(x + y) = rx + ry and (r + s)x = rx + sx.
(ii) (Associativity) r(st) = (rs)t.
The ring is commutative if rs = sr for all r, s ∈ R. An identity of R is an element 1_R ∈ R
such that 1_R x = x 1_R = x for all x ∈ R.
Convention In this course, all rings are assumed to have an identity. (If no confusion
is likely we write 1 for 1_R.) Rings are usually not commutative.
You have already seen various examples:
(1) The integers, Z. The rational numbers, Q, etc.
(2) The polynomials K[X] in one variable X, with coefficients in K. Similarly, the polynomials K[X, Y] in two commuting variables X and Y, with coefficients in K.
(3) The n × n matrices M_n(K), with entries in a field K, with respect to matrix multiplication and addition. This is not commutative for n ≥ 2.
(4) If R and S are rings, the direct product of R and S is defined as
R × S = {(r, s) : r ∈ R, s ∈ S},
where addition and multiplication are componentwise.
1.1 Algebras
The above examples (2) and (3) are not just rings but also vector spaces. There are many
more rings which are vector spaces, and this has led to the definition of an algebra.
Definition 1.2
An algebra A over a field K (or a K-algebra) is a ring, with multiplication
(a, b) ↦ a.b (a, b ∈ A),
which also is a K-vector space, with scalar multiplication
(λ, a) ↦ λa (λ ∈ K, a ∈ A),
and where the scalar multiplication and the ring multiplication satisfy
λ(a.b) = (λa).b = a.(λb) (λ ∈ K, a, b ∈ A).
The algebra A is finite-dimensional if dim_K(A) < ∞.
The condition relating scalar multiplication and ring multiplication says that scalars
commute with everything. One might want to spell out the various axioms. We have already
listed the ones for a ring. To say that A is a K-vector space means that for all a, b, c ∈ A and
λ, μ ∈ K we have
(i) λ(b + c) = λb + λc;
(ii) (λ + μ)a = λa + μa;
(iii) (λμ)a = λ(μa);
(iv) 1_K a = a.
Properties (i) and (ii) are sometimes summarized by saying that scalar multiplication is
bilinear.
Strictly speaking, we should say that A is an associative algebra; the underlying multipli-
cation in the ring is associative. There are other types of algebras, for example Lie algebras;
but we will only consider associative algebras.
Since A is a vector space and 1_A is a non-zero vector, it follows that the map λ ↦ λ1_A from
K to A is one-to-one. We will therefore view K as a subset of A, using this map as identification. This
also means that it is not really necessary to have different notation for scalar multiplication
and ring multiplication, so we will usually write ab instead of a.b; this should not cause
confusion.
Example 1.3
Let A be the set of upper triangular 2 × 2 matrices over R, that is
A = { (x y; 0 z) : x, y, z ∈ R },
with respect to matrix addition and multiplication. This is clearly a subspace of M_2(R).
Furthermore, it is a subring: since it is a subspace, we know that (A, +) is a subgroup of
M_2(R); furthermore a product of two upper triangular matrices is again upper triangular;
and the identity of M_2(R) lies in A. So A is a ring. Scalar multiples of the identity matrix
commute with all matrices, so the property relating scalar multiplication and ring multiplication
holds.
1.2 The multiplication
Suppose A is an algebra; what does one need to understand the multiplication? Take any
vector space basis of A, say v_1, …, v_n. If we know the products v_i v_j for all i, j then we
know all products. Namely, take two arbitrary elements a, b ∈ A; then a = ∑_i a_i v_i and
b = ∑_j b_j v_j for a_i, b_j ∈ K, and
ab = (∑_i a_i v_i)(∑_j b_j v_j) = ∑_{i,j=1}^n a_i b_j (v_i v_j).
This is very useful; in practice one would aim to use a basis where the products v_i v_j are
easy, for example one may take the identity 1_A as one of the basis elements.
Example 1.4
In Example 1.3, you would probably choose as a basis the matrix units in A, that is
E_11 = (1 0; 0 0), E_12 = (0 1; 0 0), E_22 = (0 0; 0 1).
Then the products are easy to describe: E_ii² = E_ii and E_11 E_12 = E_12 = E_12 E_22, and
E_12 E_11 = 0 = E_22 E_12. One might visualize the multiplication by a diagram in which
E_11 and E_22 are vertices and E_12 is an arrow between them:
E_11 ←― E_12 ― E_22
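As a quick check, here is a small Python sketch (using numpy; an illustration, not part of the original notes) verifying the products of the matrix units above:

    import numpy as np

    # The matrix units of Example 1.4, as numpy arrays.
    E11 = np.array([[1, 0], [0, 0]])
    E12 = np.array([[0, 1], [0, 0]])
    E22 = np.array([[0, 0], [0, 1]])

    # The products claimed above.
    assert (E11 @ E11 == E11).all() and (E22 @ E22 == E22).all()
    assert (E11 @ E12 == E12).all() and (E12 @ E22 == E12).all()
    assert (E12 @ E11 == 0).all() and (E22 @ E12 == 0).all()

Knowing all nine products v_i v_j determines the whole multiplication of A by bilinearity, as explained in 1.2.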
1.3 Constructing algebras
You can also construct algebras using the fact that the multiplication is already determined
by the products of basis elements. You might start with some vector space V, fix a basis, and just
define the products of any two basis elements. However, you need to make sure that the
associative law holds.
Exercise 1.1
Let V have basis v_1, v_2. Which of the following products satisfies the associative law?
If so, does this define an algebra (with identity)? Here c, d ∈ K.
(i) v_1 v_2 = v_2 v_1 = v_2, v_1² = v_1, v_2² = cv_1 + dv_2.
(ii) v_1 v_2 = v_2 v_1 = v_2, v_1² = v_2, v_2² = v_1 + v_2.
Solution 1.5
Consider the definition in (i). The products in which v_1 occurs tell us that v_1 is the identity.
So to check associativity, we only need to compare (v_2 v_2)v_2 and v_2(v_2 v_2), and one checks
that these are equal. So the definition in (i) defines an algebra with identity.
Now consider the definition in (ii): we have (v_1 v_1)v_2 = v_2 v_2 = v_1 + v_2, but v_1(v_1 v_2) =
v_1 v_2 = v_2. These are not equal, and this multiplication does not satisfy the associative law.
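By bilinearity it suffices to check associativity on triples of basis elements, which is easy to mechanize. The following Python sketch (an illustration, not part of the notes; the table encodes definition (i) of Exercise 1.1 with sample values of c and d) does exactly this:

    from itertools import product

    # Encode a 2-dimensional multiplication by a table giving v_i * v_j
    # as a coefficient vector over the basis (v1, v2).
    c, d = 2, 3
    table = {
        (0, 0): (1, 0),   # v1*v1 = v1
        (0, 1): (0, 1),   # v1*v2 = v2
        (1, 0): (0, 1),   # v2*v1 = v2
        (1, 1): (c, d),   # v2*v2 = c*v1 + d*v2
    }

    def mult(a, b):
        """Bilinear extension of the table to coefficient vectors a, b."""
        out = [0, 0]
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    out[k] += a[i] * b[j] * table[(i, j)][k]
        return tuple(out)

    # By bilinearity, checking associativity on basis triples suffices.
    basis = [(1, 0), (0, 1)]
    assert all(mult(mult(u, v), w) == mult(u, mult(v, w))
               for u, v, w in product(basis, repeat=3))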
1.4 Some important examples
(1) The field K is a K-algebra.
(2) Polynomial rings K[X], or K[X, Y], are K-algebras.
(3) The n × n matrices M_n(K), with respect to matrix multiplication and addition.
There are many more algebras consisting of matrices. For example, take the upper triangular matrices
T_n(K) := { [a_ij] ∈ M_n(K) : a_ij = 0 for i > j }.
They also form an algebra, with respect to matrix multiplication and addition.
(4) Let V be a K-vector space, and define
End_K(V) := { α : V → V : α is K-linear },
the K-linear transformations V → V. This is a K-algebra if one takes as multiplication
the composition of maps, and where the addition and scalar multiplication are pointwise,
as usual.
(5) The field C is also an algebra over R, of dimension 2. Similarly, the field Q(√2) is an
algebra over Q, of dimension 2. More generally, if K is a subfield of a larger field L, then
L is an algebra over K.
(6) The algebra H of quaternions is the 4-dimensional algebra over R with basis 1, i, j, k and
where the multiplication is defined by
i² = j² = k² = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.
This algebra is a division ring: the general element of H is of the form u = a + bi + cj + dk
with a, b, c, d ∈ R. Let ū := a − bi − cj − dk; then uū = a² + b² + c² + d², which is ≠ 0 for
u ≠ 0, and one can write down the inverse of a non-zero element u.
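The computation uū = a² + b² + c² + d² is easy to verify by machine. Here is a small Python sketch (an illustration, not part of the notes) of quaternion multiplication and conjugation:

    def qmul(u, v):
        """Multiply quaternions u = (a, b, c, d) meaning a + bi + cj + dk,
        using i^2 = j^2 = k^2 = -1, ij = -ji = k, jk = -kj = i, ki = -ik = j."""
        a1, b1, c1, d1 = u
        a2, b2, c2, d2 = v
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def conj(u):
        a, b, c, d = u
        return (a, -b, -c, -d)

    u = (1.0, 2.0, -1.0, 0.5)
    # u * conj(u) = (a^2 + b^2 + c^2 + d^2, 0, 0, 0), so any u != 0 has
    # inverse conj(u) / (a^2 + b^2 + c^2 + d^2).
    print(qmul(u, conj(u)))   # (6.25, 0.0, 0.0, 0.0)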
(7) Let G be a group and K any field. The group algebra A = KG has underlying vector
space with basis {v_g : g ∈ G}. The multiplication on the basis elements is defined as
v_g v_h := v_{gh},
and it is extended to linear combinations. This defines an associative multiplication:
(v_g v_h)v_x = v_{gh} v_x = v_{(gh)x} = v_{g(hx)} = v_g v_{hx} = v_g (v_h v_x).
The identity of KG is the element v_1, where 1 = 1_G is the identity of G. This algebra has
dimension equal to the order of G.
Some authors write simply g for the vector in KG, instead of v_g.
(8) If A is any K-algebra, then the opposite algebra A^op of A has underlying space A, and
the multiplication ∗ in A^op is defined by
a ∗ b := ba (a, b ∈ A).
It is easy to check that this is again an algebra. Clearly (A^op)^op = A.
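The multiplication v_g v_h := v_{gh}, extended bilinearly, is straightforward to implement. A Python sketch (an illustration, not from the notes), for G cyclic of order 3 written additively as {0, 1, 2}:

    from itertools import product

    # An element of KG is a dict mapping group elements to coefficients.
    n = 3   # the cyclic group of order 3, with group operation (g + h) mod 3

    def ga_mult(x, y):
        """Multiply two group algebra elements, extending v_g v_h = v_{g+h}."""
        out = {g: 0.0 for g in range(n)}
        for g, h in product(range(n), repeat=2):
            out[(g + h) % n] += x.get(g, 0.0) * y.get(h, 0.0)
        return out

    one = {0: 1.0}               # v_1, the identity of KG
    a = {0: 1.0, 1: 2.0}         # v_1 + 2 v_g
    b = {1: 1.0, 2: -1.0}        # v_g - v_{g^2}
    assert ga_mult(one, a) == {0: 1.0, 1: 2.0, 2: 0.0}
    print(ga_mult(a, b))         # {0: -2.0, 1: 1.0, 2: 1.0}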
1.5 Subalgebras and ideals, factor rings
We recall some standard constructions which are completely analogous to those you have
seen for commutative rings.
Example (3) in 1.4 suggests that we should define a subalgebra. Suppose A is a K-algebra;
then a subalgebra B of A is a subset of A which is an algebra with respect to the
operations in A, that is:
Definition 1.6
Let B be a subset of A. Then B is a subalgebra if B is a subspace such that
(i) for all b_1, b_2 ∈ B, the product b_1 b_2 belongs to B; and
(ii) the identity 1_A belongs to B.
1.5.1 Examples
Let A = M_n(K). This has many important subalgebras.
(1) The upper triangular matrices T_n(K) form a subalgebra of A. This is not commutative
for n ≥ 2; for example the matrix units E_11 and E_12 do not commute (see 1.4).
(2) Let α ∈ A, and define A_α to be the span of 1, α, α², …. That is, A_α is the space of all
matrices which are polynomials in α. This is a subalgebra of A, and it is always commutative.
(3) The diagonal matrices D_n(K) form a subalgebra of A, of dimension n.
(4) The three-subspace algebra is the subalgebra of M_4(K) defined by
{ (a_1 b_1 b_2 b_3; 0 a_2 0 0; 0 0 a_3 0; 0 0 0 a_4) : a_i, b_j ∈ K }.
(5) There are also subalgebras such as
{ (a b 0 0; c d 0 0; 0 0 x y; 0 0 z u) : a, b, c, d, x, y, z, u ∈ K } ⊆ M_4(K).
Not every subring of M_n(K) is a subalgebra. For example, M_n(Z) is a subring of M_n(R)
but it is not a subalgebra.
Definition 1.7
If R is a ring (or an algebra) then I is a left ideal of R provided (I, +) is a subgroup of
(R, +) such that rx ∈ I for all x ∈ I and r ∈ R.
Similarly, I is a right ideal of R if (I, +) is a subgroup such that xr ∈ I for all x ∈ I and
r ∈ R. I is an ideal if it is both a left ideal and a right ideal.
For example, if z ∈ R then Rz = {rz : r ∈ R} is a left ideal. For non-commutative rings,
Rz need not be an ideal.
Exercise 1.2
Let R = M_n(K) for n ≥ 1, and let z be the matrix unit z = E_11. Calculate Rz, and
also zR. Are they equal?
1.5.2 Factor rings
If I is an ideal of R, consider cosets r + I for r ∈ R. Recall that the cosets R/I form a ring,
with + and · defined as usual by
(r + I) + (s + I) := (r + s) + I, (r + I)(s + I) := (rs) + I
for r, s ∈ R.
When the ring is a K-algebra then we have some extra structure.
Lemma 1.8
Assume A is an algebra.
(a) Suppose I is a left (or right or 2-sided) ideal of A. Then I is a subspace of A.
(b) If I is an ideal of A then A/I is an algebra.
Proof
(a) By definition, (I, +) is a group. We need to show that if c ∈ K and x ∈ I then cx ∈ I.
But c1_A ∈ A, and
cx = c(1_A x) = (c1_A)x ∈ I.
(b) We know already that the cosets A/I form a ring, and they also form a vector space
(see A1). We only have to check that scalars commute with everything, but this property is
inherited from A. Explicitly, let λ ∈ K and a, b ∈ A; then
(λ1_A + I)[(a + I)(b + I)] = (λ1_A + I)(ab + I) = λ(ab) + I = (λa)b + I = (λa + I)(b + I),
but since λ(ab) = a(λb), it is also equal to (a + I)(λb + I).
Example 1.9
Let A = K[X], the algebra of polynomials, and let I be a non-zero ideal of A; then there
is some non-zero polynomial f(X) such that I = (f(X)), a principal ideal, and A/I =
K[X]/(f(X)). Such a factor algebra is finite-dimensional; its dimension is the degree of f(X).
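Computationally, a coset in K[X]/(f(X)) is represented by its remainder on division by f(X), and multiplication is multiply-then-reduce. A sketch using sympy (the choice of f here is just an example):

    from sympy import symbols, rem, expand

    X = symbols('X')
    f = X**3 - 1                      # dim K[X]/(f) = deg f = 3

    def coset_mult(p, q):
        """Multiply two cosets, represented by remainders mod f."""
        return rem(expand(p * q), f, X)

    a = X                                    # the coset of X
    print(coset_mult(coset_mult(a, a), a))   # 1, since X**3 = 1 mod f
    print(coset_mult(1 + a, 1 + a**2))       # X**2 + X + 2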
1.6 Algebra homomorphisms
Definition 1.10
Let A and B be K-algebras. A map φ : A → B is a K-algebra homomorphism if
(i) φ is K-linear,
(ii) φ(ab) = φ(a)φ(b) for all a, b ∈ A; and
(iii) φ(1_A) = 1_B.
The map φ is a K-algebra isomorphism if it is a K-algebra homomorphism and is in
addition bijective.
Example 1.11
Let A be the algebra of upper triangular 2 × 2 matrices over R (see 1.3), and let B be the
direct product of two copies of R, that is B = R × R. Define φ : A → B by
φ((a b; 0 c)) := (a, c).
Then φ is linear; as a vector space map it is a projection onto some coordinates. You should
check that φ preserves the multiplication and maps the identity of A to the identity of B.
When you write linear transformations of a vector space as matrices with respect to a
fixed basis, you basically prove that the algebra of linear transformations is isomorphic to
the algebra of square matrices. We recall the proof, partly as a reminder, but also since we
will later need a generalization.
Lemma 1.12
Suppose V is an n-dimensional vector space over the field K. Then the algebras End_K(V)
and M_n(K) are isomorphic.
Proof
We fix a K-basis of V. Suppose α is a linear transformation of V; let M(α) be the matrix
of α with respect to the fixed basis. Then define a map
φ : End_K(V) → M_n(K), φ(α) := M(α).
One checks that φ is K-linear. One also checks that it preserves the multiplication, that is
M(α) M(β) = M(α ∘ β). [This is done in first year linear algebra.] The map φ is also
one-to-one: suppose M(α) = 0; then by definition of the matrix, α maps the fixed basis to
zero, but then α = 0. The map φ is surjective, because every n × n matrix defines a linear
transformation of V.
In general, homomorphisms and isomorphisms are very important to compare different
algebras.
Exercise 1.3
Suppose φ : A → B is an isomorphism of K-algebras. Show that:
(i) If a ∈ A then a² = 0 if and only if φ(a)² = 0.
(ii) a ∈ A is a zero divisor if and only if φ(a) is a zero divisor.
(iii) A is a field if and only if B is a field.
1.6.1 Some common algebra homomorphisms
Some algebra homomorphisms occur very often; we will list some. You are encouraged to
check these in detail.
(1) Let I be an ideal of A; then the canonical map π : A → A/I, which is defined as
π(a) := a + I, is an algebra homomorphism.
(2) Substitution is an algebra homomorphism whenever it makes sense. Let B be any K-algebra and b ∈ B. Define φ : K[X] → B by
φ(f) = f(b) (f ∈ K[X]).
(3) Let A = A_1 × A_2, the direct product of two algebras. Then the projection
π_1 : A → A_1, defined by π_1(a_1, a_2) := a_1, is an algebra homomorphism, and similarly the projection
π_2 from A onto A_2 is an algebra homomorphism.
Note however that the inclusion map a_1 ↦ (a_1, 0) is not an algebra homomorphism, as it
does not take the identity of A_1 to the identity of A.
Exactly as for rings we have an Isomorphism Theorem.
Theorem 1.13 (Isomorphism Theorem)
Let A and B be K-algebras, and suppose φ : A → B is a K-algebra homomorphism. Then
ker(φ) is an ideal of A, im(φ) is a subalgebra of B, and
A/ker(φ) ≅ im(φ).
Proof
Almost everything follows from the isomorphism theorem for rings. We only need to check
that im(φ) is actually a subalgebra of B. Since φ is linear, the image im(φ) is a subspace,
and we know it is a subring containing the identity of B, and therefore it is a subalgebra.
Example 1.14
Suppose A = A_1 × A_2, the direct product of two K-algebras. Then, as we have seen, the
projection π_1 : A → A_1 is an algebra homomorphism, and it is onto. By the Isomorphism
Theorem we have A/ker(π_1) ≅ A_1. Furthermore, the definition of π_1 gives
ker(π_1) = {(0, a_2) : a_2 ∈ A_2} = 0 × A_2.
This also shows that 0 × A_2 is an ideal of A.
1.7 Some algebras of small dimensions
One might like to know how many K-algebras there are of a given dimension, up to isomorphism,
and if possible have a complete description. Looking at small dimensions, we observe
that any 1-dimensional K-algebra is isomorphic to K. Namely, it must contain the scalar
multiples of the identity, and this is then the whole algebra, by dimension.
We consider now algebras of dimension 2 over R. The construction in 1.9 produces
many examples. Namely, take any polynomial f(X) ∈ R[X] of degree 2, and take A :=
R[X]/(f(X)). We ask when two such algebras are isomorphic. We would also want to know
whether there are others.
We will now classify 2-dimensional algebras over R, up to isomorphism. Take such an algebra
A. We can choose a basis which contains the identity of A, say {1_A, b}.
Then b² must be a linear combination of 1_A and b, so there are scalars c, d ∈ R such that
b² = c1_A + db. We consider the polynomial X² − dX − c, and we complete squares:
X² − dX − c = (X − d/2)² − (c + (d/2)²).
Let b′ := b − (d/2)1_A; this is an element in the algebra, and we also set r := c + (d/2)²,
which is a scalar. Then we have b′² = r1_A. Then set
β := |r|^(−1/2) b′ if r ≠ 0, and β := b′ if r = 0.
Then the set {1_A, β} also is a basis of A, and we have β² = 0 or β² = ±1_A.
This brings A into only three possible forms. We write A_j for the algebra in which β² =
j1_A, for j = 0, 1, −1. We want to show that no two of these three algebras are isomorphic.
We use Exercise 1.3.
(1) The algebra A_0 has a non-zero element with square zero. By Exercise 1.3, any algebra isomorphic
to A_0 must have such an element.
(2) The algebra A_1 does not have a non-zero element whose square is zero:
Suppose ω² = 0 for ω ∈ A_1. Write ω = x1 + yβ with x, y ∈ R; then
ω² = (x² + y²)1 + 2xyβ = 0,
and it follows that 2xy = 0 and x² + y² = 0, since 1_A and β are linearly independent. Now
x, y ∈ R and we deduce x = y = 0, and therefore ω = 0.
This shows that the algebra A_1 is not isomorphic to A_0.
(3) Consider the algebra A_{−1}. This occurs in nature: C is such an R-algebra, taking
β = i.
In fact we can see directly that A_{−1} is a field, from
(c + dβ)(c − dβ) = (c² + d²)1_A,
and if c + dβ ≠ 0 we can write down its inverse with respect to multiplication.
Clearly A_0 is not a field, and A_1 also is not a field; it has zero divisors: (β − 1)(β + 1) = 0.
So A_{−1} is not isomorphic to A_0 or A_1.
We can list a canonical representative for each of the three algebras. Consider the algebra
R[X]/(X²); this is 2-dimensional and is generated by a non-zero element with square zero,
so it is isomorphic to A_0. Next, consider R[X]/(X² − 1); this has a generator with square equal
to 1, so it is isomorphic to A_1. Similarly R[X]/(X² + 1) is isomorphic to A_{−1}. To summarize,
we have proved:
Lemma 1.15
Up to isomorphism, there are precisely three 2-dimensional algebras over R. Any 2-dimensional
algebra over R is isomorphic to precisely one of
R[X]/(X²), R[X]/(X² − 1), R[X]/(X² + 1).
One might ask what happens for different fields. There are infinitely many non-isomorphic
2-dimensional algebras over Q, and there are only two 2-dimensional algebras over C (see
the exercises).
Definition 1.16
The K-algebra A is generated by a set S = {α_1, …, α_k} if A is the K-span of 1 together
with all monomials α_{i_1} ⋯ α_{i_r} for α_{i_j} ∈ S.
Sometimes it is useful to have a small set of generators, for practical purposes. For
example, the polynomial algebra A = K[X] is generated by X. Or, let A = KG be the
group algebra: if G = ⟨g⟩ is cyclic, then A is generated by v_g.
1.8 Finite-dimensional algebras A which can be generated by one element
Suppose A is a finite-dimensional algebra that is generated by one element α, say. Then
A is spanned by the set {1, α, α², …}. [The algebra A_α in 1.5.1 is an example.] There is a
polynomial of smallest degree, m(X), such that m(α) = 0. This is the same argument as
in Linear Algebra, when we prove that a linear map has a minimal polynomial. Namely,
since A is finite-dimensional, the elements α^j cannot all be linearly independent. Let n be
smallest such that 1, α, …, α^n are linearly dependent. Then write
α^n = ∑_{i=0}^{n−1} c_i α^i
for some c_i ∈ K. Then, as in linear algebra, if m(X) = X^n − ∑_{i=0}^{n−1} c_i X^i then m(X) is the
unique monic polynomial of smallest degree such that m(α) = 0.
Define φ : K[X] → A by substituting α,
φ(f) = f(α).
This is a K-algebra homomorphism (see 1.6.1). It is surjective, by definition of A. The
Isomorphism Theorem shows that
K[X]/ker(φ) ≅ A.
Moreover, ker(φ) = (m(X)); namely, as in linear algebra, we have f(α) = 0 if and only if
m(X) divides f(X). This also shows that the dimension of K[X]/(m(X)) is equal to the
degree of m(X) (which you have probably seen).
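The argument above is effective: one can find n and the coefficients c_i by linear algebra. A numerical Python sketch (the particular matrix α is just an example):

    import numpy as np

    alpha = np.array([[0.0, 1.0],
                      [1.0, 1.0]])

    # Find the smallest n with 1, alpha, ..., alpha^n linearly dependent,
    # by watching the rank of the matrix whose columns are flattened powers.
    powers = [np.eye(2).flatten()]
    while True:
        powers.append(np.linalg.matrix_power(alpha, len(powers)).flatten())
        M = np.column_stack(powers)
        if np.linalg.matrix_rank(M) < M.shape[1]:
            break

    n = len(powers) - 1
    # Coefficients c_i with alpha^n = sum_i c_i alpha^i:
    c, *_ = np.linalg.lstsq(np.column_stack(powers[:-1]), powers[-1], rcond=None)
    print(n, c)   # here n = 2 and c = [1, 1], i.e. m(X) = X^2 - X - 1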
Example 1.17
Let A = KG where G = ⟨g⟩, the cyclic group of order 3. Then A is generated by α = v_g.
The previous discussion shows that A ≅ K[X]/(m(X)), where m(X) is the minimal polynomial of v_g.
We have
v_g³ = v_{g³} = v_1 = 1_A,
and therefore the minimal polynomial of v_g divides X³ − 1. We know that dim A = 3, and
hence m(X) must have degree 3; it follows that m(X) = X³ − 1.
EXERCISES
1.4. Let A = B = C and φ(a) := ā, the map which takes a to its complex conjugate.
Verify that
(i) the map φ is a ring homomorphism;
(ii) A is a 2-dimensional R-algebra, and the map φ is an R-algebra homomorphism;
(iii) We know that A and B are C-algebras. Show that φ is not a C-algebra
homomorphism.
1.5. Let A be the set of matrices
A = { (a b; −b a) : a, b ∈ R }.
Show that A is an R-subalgebra of M_2(R). [Which of the three algebras in Lemma 1.15 is
it?]
1.6. Let K = Z/2Z, the field with two elements. Let A be the set of matrices
A = { (a b; b a+b) : a, b ∈ K }.
Show that A is a subalgebra of M_2(K). [Note that A is generated by (0 1; 1 1).
Find its square.]
1.7. Show that the algebra in 1.5.1(5) is isomorphic to the direct product M_2(K) ×
M_2(K).
1.8. Consider the three-subspace algebra S in 1.5.1(4). Show that there is a surjective
algebra homomorphism from S onto the direct product K × K × K × K.
1.9. Let S be the three-subspace algebra in 1.5.1(4). Find a description of S which is
similar to the algebra in Example 1.4.
1.10. (Continuation) Find the opposite algebra S^op. Is it isomorphic to S?
1.11. Show that there are precisely two 2-dimensional algebras over C, up to isomorphism.
1.12. Consider 2-dimensional algebras over Q. Show that the algebras Q[X]/(X² − p)
and Q[X]/(X² − q) are not isomorphic if p and q are distinct primes.
Solution 1.18
We fix the basis {1_A, β} of A = Q[X]/(X² − p), where 1_A is the coset of 1 and β is
the coset of X; that is, β² = p1_A. Similarly we fix the basis of B = Q[X]/(X² − q)
consisting of 1_B and γ, with γ² = q1_B.
Suppose φ : A → B is an algebra isomorphism; then
φ(1_A) = 1_B, φ(β) = c1_B + dγ,
where c, d ∈ Q and d ≠ 0. We must have
p1_B = φ(p1_A) = φ(β²) = φ(β)² = (c1_B + dγ)² = c²1_B + d²γ² + 2cdγ = (c² + qd²)1_B + 2cdγ.
But 1_B and γ are linearly independent (√q ∉ Q), so 2cd = 0 and p = c² + qd².
Since d ≠ 0 we must have c = 0, and then p = qd². But then p/q = d² would be the
square of a rational number, which is impossible for distinct primes p and q, a
contradiction. This shows that A and B are not isomorphic.
2 Representations, Modules
We want to study actions of groups and algebras on vector spaces. If V is a vector space,
then End_K(V) is the algebra of linear transformations of V, and this contains GL(V), the
group of invertible linear transformations of V. When V is n-dimensional and we use matrices
with respect to some fixed basis, then End_K(V) becomes M_n(K), and GL(V) becomes
GL_n(K).
Convention We consistently write all maps to the left of their arguments, following the practice used in
Linear Algebra. To be consistent, we then also let groups act on the left of vector spaces.
Definition 2.1
Let G be a group, and let V be some vector space. A (linear) representation of G on V is
a group homomorphism
ρ : G → GL(V).
The representation has degree n, where n = dim V.
If we write linear transformations as matrices with respect to a fixed basis, we get the
group homomorphism ρ : G → GL(n, K). This is sometimes called a matrix representation
of G.
Definition 2.2
Let A be a K-algebra and V be a vector space. A representation of A on V is a K-algebra
homomorphism
θ : A → End_K(V).
The representation has degree n, where n = dim V. If we write linear transformations as
matrices with respect to a fixed basis, we get an algebra homomorphism
θ : A → M_n(K).
This is sometimes called a matrix representation of A.
The definitions of a representation as above also make sense when V is not finite-dimensional.
But recall that in these notes, all vector spaces are assumed to be finite-dimensional
(except for K[X]).
2.0.1 Examples
(1) Let A be a subalgebra of End_K(V), where V is a vector space. Then the inclusion map
(a ↦ a) from A to End_K(V) is clearly an algebra homomorphism, hence is a representation.
Similarly, if A is a subalgebra of M_n(K) then the inclusion map is a representation of A.
For example, take A = End_K(V), or M_n(K). Or take A as in Exercise 1.5 and V = R².
(2) Let V = A and define θ : A → End_K(A) by
θ(a) = [x ↦ ax].
Then θ(a) ∈ End_K(A). This is known as the left regular representation. You should
check that θ is an algebra homomorphism.
(3) Let A = K[X], the algebra of polynomials. Take a vector space V and a linear transformation α of V. Define
θ : A → End_K(V), θ(f) := f(α),
that is, substitute α into f. This is a representation of A. [In Chapter 1 we have seen that θ
is an algebra homomorphism.] For each α there is such a representation, and we write
θ = θ_α.
Lemma 2.3
Every representation of the algebra A = K[X] is of the form θ_α for some linear transformation α.
Proof
Suppose θ : A → End_K(V) is a representation. Then set α := θ(X). This is a linear transformation
of V. Furthermore, if f ∈ A, say f = ∑_i a_i X^i, then we have
θ(f) = ∑_i a_i θ(X)^i = ∑_i a_i α^i = f(α),
where the first equality holds since θ is an algebra homomorphism. So θ = θ_α.
2.0.2 Examples
(1) Suppose Ω is a finite G-set. Take a vector space V with a basis labelled by the elements
of Ω, say V = span{b_ω : ω ∈ Ω}. We will call V = KΩ later. Define
ρ : G → GL(V)
as follows. For g ∈ G, we take for ρ(g) the linear map which takes b_ω to b_{g(ω)}. One checks
that ρ is a group homomorphism. This comes from the fact that G acts on Ω. Note that
we write maps to the left.
For example let n = 3 and G = S_3. If we take Ω = {1, 2, 3} and if g is the transposition
permuting 1 and 2, then ρ(g) has matrix
ρ(g) = (0 1 0; 1 0 0; 0 0 1).
(2) As a special case of (1), take Ω = G, where the action is by left multiplication. The
corresponding representation is the left regular representation ρ. That is, for g ∈ G, ρ(g) is
the linear map with
v_x ↦ v_{gx}.
Its degree is the order of G.
(3) Let G be any group and take V = K. Define ρ : G → GL(K) by
ρ(g) = id_K (g ∈ G).
This is a representation of G, called the trivial representation.
(4) Let G be the group of symmetries of a square. As a group, this is isomorphic to the
dihedral group of order 8. Draw the square in the plane such that the centre is in the
origin, and such that the corners are at (±1, ±1). Let V = R². For g ∈ G, let ρ(g) be the
linear map of V which induces the symmetry given by g. We write ρ(g) as a matrix with
respect to the standard basis.
Let r be the rotation by π/2 (anti-clockwise); then
ρ(r) = (0 −1; 1 0).
Let s ∈ G be the reflection taking (1, 1) to (1, −1); then
ρ(s) = (1 0; 0 −1).
The group G can be generated by r and s. Since we want to define a group homomorphism,
these two matrices determine already the action of all elements of G. But we must check
that this really defines a group homomorphism.
The group G has a presentation
G = ⟨ r, s : r⁴ = 1, s² = 1, srs⁻¹ = r⁻¹ ⟩.
All we have to do is to check that the matrices ρ(r) and ρ(s) satisfy the relations defining
G, and this is an easy calculation. Then we have shown that we have a well-defined
representation
ρ : G → GL_2(R)
which takes r, s to the matrices ρ(r) and ρ(s) defined above.
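This calculation can also be checked mechanically; a Python sketch (an illustration, not part of the notes), where r⁻¹ is computed as r³ and s⁻¹ = s:

    import numpy as np

    R = np.array([[0, -1],
                  [1,  0]])   # rho(r), rotation by pi/2
    S = np.array([[1,  0],
                  [0, -1]])   # rho(s), the reflection

    I = np.eye(2, dtype=int)
    assert (np.linalg.matrix_power(R, 4) == I).all()   # r^4 = 1
    assert (S @ S == I).all()                          # s^2 = 1
    Rinv = np.linalg.matrix_power(R, 3)                # r^{-1} = r^3
    assert (S @ R @ S == Rinv).all()                   # s r s^{-1} = r^{-1}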
Exercise 2.1
Show that the trivial representation (3) can be viewed as a special case of (1).
Remark 2.4
(a) Let A = KG, the group algebra of a finite group G. Suppose we have a representation
of A, say θ : A → End_K(V). For g ∈ G, the element v_g ∈ A is invertible, and hence θ(v_g)
also is invertible and therefore lies in GL(V). Moreover, if g, h ∈ G then
θ(v_g v_h) = θ(v_g)θ(v_h), θ(v_1) = Id_V,
since θ preserves the multiplication. So we can define
ρ : G → GL(V), ρ(g) := θ(v_g),
and this is a group homomorphism. This shows that any representation of the group algebra
A = KG automatically gives a group representation of G.
(b) Conversely, suppose V is a vector space over K and ρ : G → GL(V) is a representation
of G. We view G as a basis of the group algebra A = KG, and therefore we get a linear map
θ : A → End_K(V) by setting θ(∑_g a_g v_g) := ∑_g a_g ρ(g). One checks that this is an algebra
homomorphism. This shows that every representation of G also gives a representation of the
group algebra KG.
Definition 2.5
Given two representations θ_1, θ_2 of the algebra A, where θ_1 : A → End_K(V_1) and θ_2 : A →
End_K(V_2). Then θ_1 and θ_2 are said to be equivalent if there is a vector space isomorphism
ψ : V_1 → V_2 such that
ψ⁻¹ θ_2(a) ψ = θ_1(a)
for all a ∈ A.
This means that θ_1(a) and θ_2(a) should be simultaneously similar, for all a ∈ A. In the
special case when A = K[X] we have therefore the following.
Lemma 2.6
Let A = K[X]; then two representations θ_α and θ_β are equivalent if and only if the linear
transformations α and β are similar.
Take any ideal I of the algebra A, and let B = A/I. Since the canonical map A → A/I
is an algebra map, we can view representations of the factor algebra as representations of
the original algebra. More precisely:
Lemma 2.7
Let A be any algebra, and I an ideal of A. Let B := A/I. Then the representations of B are
precisely the representations of A which map I to zero.
Proof
Let θ : A → End_K(V) be a K-algebra homomorphism such that θ(x) = 0 for all x ∈ I.
Then define
θ′ : B → End_K(V)
by θ′(a + I) := θ(a). This is well-defined: if a + I = a′ + I, then a − a′ ∈ I and θ(a −
a′) = 0, and therefore θ(a) = θ(a′). One checks that θ′ is an algebra homomorphism; this is
straightforward.
Conversely, let ψ : B → End_K(V) be a representation of B. Define ψ′ : A → End_K(V) by
ψ′(a) := ψ(a + I).
That is, ψ′ is the composition of ψ and the canonical map π : A → A/I, and therefore ψ′ is
an algebra homomorphism.
Definition 2.8
Given a representation ψ of B, where B = A/I, the corresponding representation ψ′ of
A as in the Lemma is called the inflation of ψ.
2.1 Modules
If we have a representation of the algebra A on the vector space V, then A acts on V,
and V together with this action is said to be an A-module. This is analogous to the case of
groups acting on sets. [Given a group homomorphism from G to Sym(Ω), then Ω together
with this action is a G-set.]
Modules can be defined for arbitrary rings, not just algebras. They are very common
(and important); for example modules over Z occur frequently. The basic concepts are the
same; therefore we give the general definition.
Definition 2.9
Let R be a ring. An R-module is an abelian group (M, +) together with a map
R × M → M, (r, m) ↦ rm (r ∈ R, m ∈ M),
such that for all r, s ∈ R and all m, n ∈ M:
(i) (r + s)m = rm + sm;
(ii) r(m + n) = rm + rn;
(iii) r(sm) = (rs)m;
(iv) 1_R m = m.
One can think of a module as a generalization of a vector space: a vector space is an
abelian group M together with a scalar multiplication, that is a map K × M → M, satisfying
the usual axioms. If one replaces K by a ring R, then one gets an R-module. When R = K,
that is R is a field, then R-modules are therefore exactly the same as K-vector spaces.
The above defines a left R-module, and one defines right R-modules analogously. When R
is not commutative the behaviour of left modules and of right modules can be different; to
go into details is beyond this course (see however an exercise in Chapter 3).
We will consider only left modules, since our rings are K-algebras and scalars are usually
written to the left.
Example 2.10
Take any left ideal I of R; then I is an R-module. First, (I, +) is a group, by definition. The
properties (i) to (iv) hold even for m, n, r, s ∈ R, and then also for m, n ∈ I and r, s ∈ R.
In this course we will focus on the case when the ring is a K-algebra. Some of the general
properties are the same for rings.
Convention We write R and M if we think of an R-module for a general ring, and we
write A and V if we work with an A-module where A is a K-algebra.
Suppose A is a K-algebra. Then A-modules are automatically vector spaces, and this is
very important:
Lemma 2.11
Let A be a K-algebra. Then any A-module is automatically a K-vector space.
Proof
Recall that we view K as a subset of A, so this gives us a map K × V → V, and it satisfies
the vector space axioms; they are then just (i) to (iv) in 2.9.
2.1.1 Relating A-modules and representations of A
The following shows that modules and representations of an algebra are "the same". This
is a formal matter: nothing is done to the modules or representations, and it only describes
two different views of the same object. It is convenient, as it often saves one a lot of checking,
and it gives twice as much information.
Lemma 2.12
Let A be a K-algebra.
(a) Suppose V is an A-module. Then we have a representation of A on V,
θ : A → End_K(V), θ(a) = [v ↦ av] (a ∈ A, v ∈ V).
(b) Suppose θ : A → End(V) is a representation. Then V becomes an A-module by setting
av := θ(a)(v) (a ∈ A, v ∈ V).
Proof
(a) The map θ(a) lies in End_K(V): it is a linear transformation of V,
θ(a)(λ_1 v_1 + λ_2 v_2) = a(λ_1 v_1 + λ_2 v_2) = λ_1(av_1) + λ_2(av_2) = λ_1 θ(a)(v_1) + λ_2 θ(a)(v_2).
To show that it is an algebra homomorphism:
θ(ab)(v) = (ab)v = a(bv) = θ(a)[bv] = θ(a)[θ(b)(v)] = [θ(a) ∘ θ(b)](v),
which holds for all v ∈ V, hence θ(ab) = θ(a)θ(b). Similarly one checks that θ(1_A) = Id_V.
(b) It is straightforward to check the axioms for an A-module. For example,
(a_1 + a_2)v = θ(a_1 + a_2)(v) = [θ(a_1) + θ(a_2)](v) = θ(a_1)(v) + θ(a_2)(v) = a_1 v + a_2 v.
We leave the rest as an exercise.
2.1.2 Examples
(1) When A = K, then A-modules are the same as K-vector spaces.
(2) The natural module. Assume A is a subalgebra of End_K(V). Then V is an A-module,
where the action of A is just applying the linear maps to the vectors. That is,
(a, v) ↦ a(v) (a ∈ A, v ∈ V).
This is the action where the representation is the inclusion map A → End_K(V). So V is
an A-module.
Alternatively, one can check the module axioms: let a, b ∈ A and v, w ∈ V; then
(i) (a + b)(v) = a(v) + b(v), by the definition of the sum of two maps; and
(ii) a(v + w) = a(v) + a(w), since a is linear; and
(iii) (ab)(v) = a(b(v)), since the multiplication in End_K(V) is defined to be composition of maps; and
(iv) 1_A(v) = v.
(3) The natural module also has a matrix version. Let A be a subalgebra of M_n(K), and let
V := (K^n)^t, the space of column vectors. Then V is an A-module if one takes as action
the multiplication of a matrix with a column vector.
(4) Permutation modules. Let A = KG, where G is a finite group. Suppose Ω is any G-set,
and let
V = KΩ = span{b_ω : ω ∈ Ω}
as in 2.0.2(1). Define an action of A on KΩ by setting
v_g b_ω := b_{g(ω)},
and extend to linear combinations in A. This defines an A-module.
To see this, take the group representation in 2.0.2(1) and view it as a representation of
KG (as in 2.4(b)), then use 2.12.
Alternatively, you can check the axioms.
(5) The trivial module. Let A = KG, where G is a finite group. The trivial module has
underlying vector space K, and the action of A is defined by
v_g x = x (x ∈ K, g ∈ G).
Take the trivial representation in 2.0.2(3), view it as a representation of KG, and use
2.12. Or else, check the axioms.
Lemma 2.13
Let A be an algebra and B = A/I, where I is an ideal of A. Then the B-modules are precisely
the A-modules V on which I acts as zero, and the actions are related by
(a + I)v = av (a ∈ A, v ∈ V).
Proof
This is a reformulation of what we called inflation. Apply 2.7 and use 2.12.
2.2 K[X]-modules
Take a vector space V and some linear transformation α : V → V. We have defined the
representation θ_α from A := K[X] to End_K(V) by θ_α(f) = f(α). This gives that V is an
A-module by setting
fv := f(α)(v) (f ∈ A, v ∈ V).
We denote this K[X]-module by V_α. Since every representation of A is of the form θ_α for
some α, every K[X]-module is isomorphic to V_α for some α.
The following relates K[X]-modules with modules for factor algebras K[X]/I. This is
important since for I ≠ 0 these factor algebras are finite-dimensional, and many finite-dimensional
algebras occurring in nature are of this form.
Lemma 2.14
Let A = K[X]/(f), where f is some non-zero polynomial in K[X]. Then the A-modules can
be viewed as the K[X]-modules V_α which satisfy f(α) = 0.
Proof
This is a special case of 2.13 with I = (f). Note that if V is an A-module, it becomes a
K[X]-module with action fv = f(α)v; so I maps V to zero if and only if f(α) = 0.
Note that we only change the point of view, and we don't do anything to the module.
One advantage of modules as compared with representations is that this perspective naturally
leads to new concepts. We introduce some of these now.
2.3 Submodules, factor modules
Definition 2.15
Let R be a ring and M some R-module. A submodule of M is a subgroup (U, +) which is
closed under the action of R, that is, ru ∈ U for all r ∈ R and u ∈ U.
2.3.1 Examples
(1) The left ideals I of R are precisely the submodules of the R-module R.
(2) Suppose M_1 and M_2 are R-modules. Then the direct product
M_1 × M_2 := {(m_1, m_2) : m_i ∈ M_i}
is an R-module if one defines the action of R componentwise, that is,
r(m_1, m_2) := (rm_1, rm_2) (r ∈ R, m_i ∈ M_i).
(3) Consider the 2-dimensional R-algebra A_0 at the end of Chapter 1. The 1-dimensional
subspace spanned by β is a submodule of A_0.
On the other hand, if you look at the algebra A_1 in the same section, then the subspace
spanned by β is not a submodule. (But the space spanned by β + 1_A is a submodule.)
Exercise 2.2
Let A = A_1 × A_2, the product of two K-algebras. Suppose M is some A-module.
Define
M_1 := {(1_{A_1}, 0)m : m ∈ M}, M_2 := {(0, 1_{A_2})m : m ∈ M}.
Show that M_1 and M_2 are submodules of M, and that M = M_1 ⊕ M_2.
You might note that we have seen direct products of modules, and also direct sums. The
products are needed to construct a new module from given ones which are not related. On
the other hand, if we write M = M_1 ⊕ M_2 then we always implicitly mean that M is a given
module and M_1, M_2 are submodules. [Some books distinguish these two constructions by
calling them external and internal direct sums.]
Exercise 2.3
Let A = M_n(K), the algebra of n × n matrices over K, and consider the A-module
A. We define C_i ⊆ A to be the set of matrices which are zero outside the i-th column.
Show that C_i is a submodule of A, and that
A = C_1 ⊕ C_2 ⊕ … ⊕ C_n.
Suppose U is a submodule of an R-module M; you know that the cosets M/U := {m + U :
m ∈ M} form an abelian group. In the case when M is an R-module and U is a submodule,
the set of cosets has the structure of an R-module.
Definition 2.16
Let M be an R-module and U a submodule of M. Then the cosets M/U form an R-module
if one defines
r(m + U) := rm + U (r ∈ R, m ∈ M).
This is called the factor module.
One has to check that the action is well-defined: if m + U = m′ + U then m − m′ ∈ U, and
then r(m − m′) ∈ U as well. But r(m − m′) = rm − rm′, and therefore rm + U = rm′ + U.
The axioms are inherited from M.
Example 2.17
Let M = R as an R-module; then for any d ∈ R, M has a submodule I = Rd, and a factor
module R/Rd. When R = Z, you will have seen these. In general, a module of the form
Rd = {rd : r ∈ R} is said to be a cyclic R-module.
2.4 Module homomorphisms
We have said that a module is a generalization of a vector space where scalars are replaced
by elements in the ring. Accordingly, R-module homomorphisms are the analogue of linear
maps of vector spaces.
Definition 2.18
Suppose R is a ring, and M and N are R-modules. A map φ : M → N is an R-module
homomorphism if for all m, m_1, m_2 ∈ M and r ∈ R we have
(i) φ(m_1 + m_2) = φ(m_1) + φ(m_2); and
(ii) φ(rm) = rφ(m).
An isomorphism of R-modules is an R-module homomorphism which is also bijective.
The set of all R-module homomorphisms from M to N is denoted by
Hom_R(M, N).
An R-module homomorphism from M to M is called an R-endomorphism of M, and the
set of all R-endomorphisms of M is denoted by
End_R(M).
In the case when the ring is a K-algebra A, this definition also says that φ must be
K-linear. Namely, we view K as a subset of A via λ ↦ λ1_A, and then we have, for
λ, μ ∈ K, that
φ(λm_1 + μm_2) = φ((λ1_A)m_1 + (μ1_A)m_2) = (λ1_A)φ(m_1) + (μ1_A)φ(m_2) = λφ(m_1) + μφ(m_2).
Exercise 2.4
Suppose V is an A-module, where A is a K-algebra. The set End_A(V) of all A-module
homomorphisms V → V is, by what we just noted, a subset of End_K(V). Check that
it is actually a subalgebra.
Example 2.19
Consider A = K[X]. The algebra is generated by X as an algebra, so an A-module homomorphism
between A-modules is just a linear map that commutes with the action of X.
We have described the A-modules (see 2.2); let V_α and W_β be A-modules. An A-module
homomorphism is a linear map φ such that φ(Xv) = Xφ(v) (v ∈ V_α). On V_α, the element X
acts by α, and on W_β the action of X is given by β. So this means
φ(α(v)) = β(φ(v)) (v ∈ V_α).
This holds for all v, so we have
φ ∘ α = β ∘ φ.
In particular, V_α ≅ W_β if and only if there is an invertible linear map φ such that
φ α φ⁻¹ = β.
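Numerically, the condition φ ∘ α = β ∘ φ is easy to test. A Python sketch (the matrices are arbitrary examples, not from the notes): here β is constructed as φαφ⁻¹, so φ is by construction an isomorphism V_α ≅ W_β.

    import numpy as np

    alpha = np.array([[1.0, 1.0],
                      [0.0, 2.0]])
    phi = np.array([[1.0, 2.0],
                    [0.0, 1.0]])               # an invertible linear map
    beta = phi @ alpha @ np.linalg.inv(phi)    # similar to alpha

    # phi is a K[X]-module homomorphism V_alpha -> W_beta:
    assert np.allclose(phi @ alpha, beta @ phi)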
Exercise 2.5
Suppose A is a K-algebra, and assume V and W are A-modules. Show that V ≅ W
as A-modules if and only if the corresponding representations are equivalent.
2.4.1 Some common module homomorphisms
(1) Suppose U is a submodule of an R-module M; then the canonical map π : M → M/U,
defined by π(m) = m + U, is an R-module homomorphism.
(2) Suppose M is an R-module, and m ∈ M. Then we always have an R-module homomorphism
φ : R → M, given by
φ(r) := rm (r ∈ R).
This is a very common homomorphism; perhaps we might call it a multiplication homomorphism.
There is a general version of this, which also is very common. Namely, suppose
m_1, m_2, …, m_n are given elements in M. Now take the R-module R^n := R × R × … × R,
and define
φ : R^n → M, φ(r_1, r_2, …, r_n) := r_1 m_1 + r_2 m_2 + … + r_n m_n
(r_1, …, r_n ∈ R). You should check that this is indeed an R-module homomorphism.
(3) Suppose M = M_1 × M_2, the direct product of two R-modules. Then the projection maps
π_i onto the coordinates are R-module homomorphisms. Similarly, the inclusion maps
ι_1 : M_1 → M, ι_1(m_1) := (m_1, 0),
and similarly ι_2, are R-module homomorphisms. You should check this as well.
(4) Similarly, if M = U ⊕ V, the direct sum of two submodules U and V, then the projection
maps and the inclusion maps are R-module homomorphisms.
Theorem 2.20 (Isomorphism theorems)
(a) Suppose φ : M → N is an R-module homomorphism. Then ker(φ) is a submodule of M
and im(φ) is a submodule of N, and
M/ker(φ) ≅ im(φ).
(b) Suppose U, V are submodules of M; then so are U + V and U ∩ V, and
(U + V)/U ≅ V/(U ∩ V).
(c) Suppose U ⊆ V ⊆ M are submodules; then V/U is a submodule of M/U, and
(M/U)/(V/U) ≅ M/V.
Proof
(a) Since φ is in particular a homomorphism of the additive groups, we know that ker(φ) is
a subgroup of M, so we just have to check that it is R-invariant. Let m ∈ ker(φ) and r ∈ R;
then
φ(rm) = rφ(m) = r·0 = 0,
and rm ∈ ker(φ). Similarly one checks that im(φ) is a submodule of N. The isomorphism
theorem for abelian groups shows that the map
φ′ : M/ker(φ) → im(φ), φ′(m + ker(φ)) = φ(m)
is well-defined and is an isomorphism of abelian groups. One now checks that this map is in
fact an R-module homomorphism.
Parts (b) and (c) hold for abelian groups, and one just has to check that the maps used in
that case are also compatible with the action of R. For example, in (b), the general element
of (U + V)/U can be written as v + U for v ∈ V. Then the map is defined as
ψ : (U + V)/U → V/(U ∩ V), ψ(v + U) = v + (U ∩ V).
If r ∈ R then
ψ(r(v + U)) = ψ((rv) + U) = rv + (U ∩ V) = r(v + (U ∩ V)) = rψ(v + U).
2.5 The submodule correspondence
Suppose M is an R-module and U is a submodule. Then there is a 1-1 correspondence,
inclusion preserving, between
(i) the submodules of M/U, and
(ii) the submodules of M that contain U.
Namely, given a submodule X of M/U, define
X~ := {m ∈ M : m + U ∈ X}.
This is a submodule of M and it contains U:
(a) First, X~ is a subgroup of M. It contains 0, and if x_1, x_2 ∈ X~ then
(x_1 − x_2) + U = (x_1 + U) − (x_2 + U),
and this lies in X since X is a subgroup of M/U.
(b) X~ is a submodule: let r ∈ R and x ∈ X~; then
rx + U = r(x + U) ∈ X,
since X is an R-submodule of M/U, and therefore rx ∈ X~.
Conversely, given a submodule V of M such that U ⊆ V, then V/U := {v + U : v ∈ V}
is a submodule of M/U, as we have seen. We leave it as an exercise to show that these
correspondences preserve inclusions.
To get the 1-1 correspondence, we must check that X~/U = X, and that (V/U)~ = V. First,
X~/U = {x + U : x ∈ X~} = {x + U : x + U ∈ X} = X.
Second, we have
(V/U)~ = {x ∈ M : x + U ∈ V/U} = {x ∈ M : x + U = v + U, for some v ∈ V}.
Now x + U = v + U if and only if x − v ∈ U, that is, x − v = u ∈ U. But U ⊆ V, by
assumption, so x = v + u ∈ V. Hence (V/U)~ = V.
2.6 Tensor products
This is not part of the B2 syllabus
Definition 2.21
Suppose V and W are vector spaces over some field K, with bases v_1, …, v_m and w_1, …, w_n
respectively. For each i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, we introduce a symbol v_i ⊗ w_j.
The tensor product space V ⊗ W is defined to be the mn-dimensional vector space over K
with a basis given by
{ v_i ⊗ w_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n }.
Thus V ⊗ W consists of all expressions of the form
∑_{i,j} λ_{i,j} (v_i ⊗ w_j) (λ_{i,j} ∈ K).
For v ∈ V and w ∈ W with v = ∑_{i=1}^m α_i v_i and w = ∑_{j=1}^n β_j w_j (with α_i, β_j ∈ K) we define
v ⊗ w by
v ⊗ w := ∑_{i,j} α_i β_j (v_i ⊗ w_j).
For example,
(2v_1 − v_2) ⊗ (w_1 + w_2) = 2(v_1 ⊗ w_1) − (v_2 ⊗ w_1) + 2(v_1 ⊗ w_2) − (v_2 ⊗ w_2).
Note that not every element of V ⊗ W is of the form v ⊗ w.
Exercise 2.6
Show that v_1 ⊗ w_1 + v_2 ⊗ w_2 cannot be expressed in the form v ⊗ w for v ∈ V and
w ∈ W.
Proposition 2.22
If e_1, …, e_m is any basis of V and f_1, …, f_n is any basis of W, then
{ e_i ⊗ f_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n }
is a basis for V ⊗ W.
Proof
Write v_i = ∑_{k=1}^m c_{ki} e_k and w_j = ∑_{l=1}^n d_{lj} f_l with c_{ki}, d_{lj} ∈ K. Then
v_i ⊗ w_j = ∑_{k,l} c_{ki} d_{lj} (e_k ⊗ f_l).
This shows that the mn elements e_i ⊗ f_j span V ⊗ W. But dim(V ⊗ W) = mn, and therefore
they form a basis.
Now suppose G is a group, and ρ_V : G → GL(V) and ρ_W : G → GL(W) are representations
of G. Then we have a representation of G on V ⊗ W. In the following we use the bases
of V, W and V ⊗ W as above.
Proposition 2.23
Let g ∈ G and define ρ : G → GL(V ⊗ W) by
ρ(g)(v_i ⊗ w_j) := ρ_V(g)(v_i) ⊗ ρ_W(g)(w_j).
Then ρ is a representation of G.
Proof
We must show that ρ(gh) = ρ(g) ∘ ρ(h) for all g, h ∈ G. One way is first to check that for
all v ∈ V and w ∈ W we have
ρ(g)(v ⊗ w) = ρ_V(g)(v) ⊗ ρ_W(g)(w).
When this is done, then we get
ρ(gh)(v_i ⊗ w_j) = ρ_V(gh)(v_i) ⊗ ρ_W(gh)(w_j)
= ρ_V(g)[ρ_V(h)(v_i)] ⊗ ρ_W(g)[ρ_W(h)(w_j)]
= ρ(g)[ρ_V(h)(v_i) ⊗ ρ_W(h)(w_j)]
= ρ(g)[ρ(h)(v_i ⊗ w_j)].
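In matrix terms, ρ(g) on V ⊗ W is the Kronecker product of the matrices of ρ_V(g) and ρ_W(g), and the homomorphism property reduces to the identity (A ⊗ B)(C ⊗ D) = AC ⊗ BD. A quick numerical Python sketch (random matrices standing in for ρ_V(g), ρ_V(h), ρ_W(g), ρ_W(h); an illustration, not from the notes):

    import numpy as np

    A, C = np.random.rand(2, 2), np.random.rand(2, 2)   # "rho_V(g)", "rho_V(h)"
    B, D = np.random.rand(3, 3), np.random.rand(3, 3)   # "rho_W(g)", "rho_W(h)"

    # (A kron B)(C kron D) = (AC) kron (BD)
    assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))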
EXERCISES
2.7. Suppose M is an A-module with submodules U, V and W.
(a) Check that U + V and U ∩ V are submodules of M. Show by means of an
example that it is not in general the case that U ∩ (V + W) = (U ∩ V) + (U ∩ W).
[Try A = R and U, V, W subspaces of R².]
(b) Show that U ∖ V is never a submodule. Show also that U ∪ V is a submodule
if and only if U ⊆ V or V ⊆ U.
2.8. Suppose M = U × V, the direct product of A-modules U and V. Check that
U~ := {(u, 0) : u ∈ U} is a submodule of M, isomorphic to U. Write down a
similar submodule V~ of M isomorphic to V, and show that M = U~ ⊕ V~, the
direct sum of submodules.
2.9. Let A = KG be the group algebra, where G is a finite group. The trivial A-module
is defined to be the 1-dimensional module with underlying space K, with action
v_g x = x (g ∈ G, x ∈ K).
Show that the corresponding representation θ : A → End_K(K) satisfies θ(a) =
λ_a Id_K for all a ∈ A, where λ_a = ∑_g a_g for a = ∑_g a_g v_g. Check that this is indeed a representation.
2.10. Let Ω be a transitive G-set, and let V = KΩ be the corresponding permutation
module. Let w := ∑_{ω∈Ω} b_ω. Show that v_g w = w for all g ∈ G, and deduce that Kw
is a submodule of V and that it is isomorphic to the trivial A-module.
2.11. (Continuation) Show also that Kw is the unique submodule of V which is isomorphic
to the trivial module. Is this still true when Ω is not transitive?
2.12. Let A = K[X]/(X^n), and let V = A, as an A-module. By applying the submodule
correspondence, or otherwise, find all submodules of V. Deduce that if V_1 and V_2
are submodules, then either V_1 ⊆ V_2, or V_2 ⊆ V_1.
2.13. Let A = CG be the group algebra of the dihedral group of order 10,
G = ⟨ r, s : r⁵ = 1, s² = 1, srs⁻¹ = r⁻¹ ⟩.
Suppose ζ is some 5-th root of 1. Show that the matrices
ρ(r) = (ζ 0; 0 ζ⁻¹), ρ(s) = (0 1; 1 0)
satisfy the defining relations for G, hence give rise to a group representation
ρ : G → GL(2, C), and an A-module.
2.14. (Continuation) Let G and ρ be as above, and view ρ : G → GL(V) where V = C².
Consider the tensor product V ⊗ V as a G-module. Does this have a 1-dimensional
submodule? That is, does there exist some non-zero w = ∑ c_{ij} (v_i ⊗ v_j) ∈ V ⊗ V which is
a common eigenvector for all group elements?
3 The Jordan-Hölder Theorem
Let A be a finite-dimensional K-algebra. We have seen that every A-module V is also a
K-vector space. This allows us to apply results from linear algebra to A-modules.
Definition 3.1
Suppose V is an A-module. Then V is simple (or irreducible) if V is non-zero, and if it does
not have any submodules other than 0 and V.
For example, take a module V such that dim_K(V) = 1; then V must be simple. It
does not have any subspace except 0 and V, and therefore it cannot have a submodule except
0 or V. The converse is not true: simple modules can have arbitrarily large dimensions, or
can even be infinite-dimensional (see an exercise).
Example 3.2
Let A = M_n(K), and take V to be the natural module, the space of column vectors V =
(K^n)^t.
We claim that V is simple: we have to show that if U is a non-zero submodule of V then
actually U = V. So take such U, and take a non-zero element u ∈ U, say
u = (x_1, x_2, …, x_n)^t.
The algebra A contains all matrix units E_ij, and one checks that E_ij u has x_j in the i-th
coordinate, and all other coordinates are zero.
Since u ≠ 0, for some j we know that x_j is non-zero. So for this value of j, (x_j)⁻¹ E_ij u is
the standard basis vector e_i of V. But (x_j)⁻¹ E_ij lies in A, and therefore e_i ∈ U. But i is arbitrary, so
U contains a basis for V, and therefore U = V.
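The key step, that E_ij u picks out the coordinate x_j and places it in position i, is easy to see numerically. A Python sketch (with an arbitrary sample vector; indices are 0-based here; an illustration, not from the notes):

    import numpy as np

    n, i, j = 4, 1, 2
    u = np.array([3.0, -1.0, 5.0, 2.0])     # a non-zero column vector, x_j = 5
    E = np.zeros((n, n)); E[i, j] = 1.0     # the matrix unit E_ij

    print(E @ u)                  # x_j sits in coordinate i, zeros elsewhere
    print((1 / u[j]) * (E @ u))   # the standard basis vector e_i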
The method we used to show that V is simple is more general.
If m ∈ V, where V is some module, let Am := {am : a ∈ A}. This is a submodule of V;
you should check this.
Lemma 3.3
Let V be an A-module. Then V is simple if and only if for each 0 ≠ m ∈ V we have Am = V.
Proof
Suppose V is simple, and take 0 ≠ m ∈ V. We know that Am is a submodule, and it
contains m (= 1_A m), so Am is non-zero and therefore Am = V.
Conversely, suppose 0 ≠ U is a submodule of V; then there is some non-zero m ∈ U. Since U is
a submodule, we have Am ⊆ U, but by the hypothesis
V = Am ⊆ U ⊆ V,
and hence U = V.
Example 3.4
Let A = RG, where G is the symmetry group of the square; see Chapter 2. We have seen
there is the representation ρ : G → GL_2(R) such that
ρ(r) = (0 −1; 1 0), ρ(s) = (1 0; 0 −1).
The corresponding A-module is V = R², and for g ∈ G, the basis element v_g acts on V
through
v_g (x_1; x_2) = ρ(g)(x_1; x_2).
We claim that V is simple. Suppose, for a contradiction, that V has a submodule 0 ≠ U ⊆ V
and U ≠ V. Then U is 1-dimensional, say U is spanned by u. But then v_r u = λu for some
λ ∈ R, which means that u ∈ R² is an eigenvector of ρ(r). But the matrix ρ(r) does not
have a real eigenvalue, a contradiction.
We will also need to understand when a factor module is simple. This is done by using
the submodule correspondence.
Lemma 3.5
Suppose V is an A-module and U is a submodule of V. Then the module V/U is simple if
and only if U is a maximal submodule of V. [That is, if U ⊆ W ⊆ V then W = U or W = V.]
Proof
Apply the submodule correspondence.
Definition 3.6
Suppose V is an A-module. A composition series of V is a finite chain of submodules
0 = V_0 ⊂ V_1 ⊂ V_2 ⊂ … ⊂ V_n = V
such that the factor modules V_i/V_{i−1} are simple, for 1 ≤ i ≤ n. The length of the composition
series is n, the number of quotients.
3.1 Examples
(1) If V is simple then 0 = V_0 ⊂ V_1 = V is a composition series.
(2) Given a composition series as in the definition, if V_k is one of the terms, then V_k inherits
the composition series
0 = V_0 ⊂ V_1 ⊂ … ⊂ V_k.
(3) Let K = R and take A to be the 2-dimensional algebra over R, with basis {1_A, β} such
that β² = 0 (see 1.7). Take V := A. Then if V_1 is the space spanned by β, then V_1 is a
submodule. [It is a subspace, and it is invariant under the action of the basis of A.] Since
V_1 and V/V_1 are 1-dimensional, they are simple. Hence V has composition series
0 = V_0 ⊂ V_1 ⊂ V_2 = V.
(4) Let A = M_n(K) and V = A. In Exercise 2.3 we have seen that A = C_1 ⊕ C_2 ⊕ … ⊕ C_n,
a direct sum of simple A-modules. So we have a finite chain of submodules
0 ⊂ C_1 ⊂ C_1 ⊕ C_2 ⊂ … ⊂ C_1 ⊕ … ⊕ C_{n−1} ⊂ A.
Each factor module is simple: by the isomorphism theorem,
(C_1 ⊕ … ⊕ C_k)/(C_1 ⊕ … ⊕ C_{k−1}) ≅ C_k/(C_k ∩ (C_1 ⊕ … ⊕ C_{k−1})) = C_k/0 = C_k.
So this chain is a composition series.
Lemma 3.7
Assume V is a finite-dimensional A-module. Then V has a composition series.
Proof
This is proved by induction on dim V. If dim V = 1 then V is simple, and we are done by
3.1(1).
So assume now that dim V > 1. If V is simple then, by 3.1(1), V has a composition
series. Otherwise, V has proper non-zero submodules. Take a proper submodule U ⊂ V of largest
possible dimension. Then V/U must be simple, by the submodule correspondence. Since
dim U < dim V, we can apply the inductive hypothesis. So U has a composition series, say
0 = U_0 ⊂ U_1 ⊂ U_2 ⊂ … ⊂ U_k = U.
This gives us the composition series of V,
0 = U_0 ⊂ U_1 ⊂ U_2 ⊂ … ⊂ U_k = U ⊂ V.
In general, a module can have many composition series (we will see examples). The
Jordan-Hölder Theorem shows that any two composition series have the same length, and
the same factors up to isomorphism, counted with multiplicities:
Theorem 3.8 (Jordan-Hölder Theorem)
Suppose V has two composition series
(I) 0 ⊂ V_1 ⊂ V_2 ⊂ … ⊂ V_{n−1} ⊂ V_n = V,
(II) 0 ⊂ W_1 ⊂ W_2 ⊂ … ⊂ W_{m−1} ⊂ W_m = V.
Then n = m, and there is a permutation σ of {1, 2, …, n} such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1}
for each i.
The simple factor modules V_i/V_{i−1} are called the composition factors of V. By this
theorem, they only depend on V, and not on the composition series.
Example 3.9
Let A = M_n(K) and V = A. With the notation as in Chapter 2.1, V has submodules

    V_1 := C_1,  V_2 := C_1 ⊕ C_2,  …,  V_i := C_1 ⊕ C_2 ⊕ … ⊕ C_i.

These submodules form a series

    0 = V_0 ⊂ V_1 ⊂ … ⊂ V_{n−1} ⊂ V_n = V.

This is a composition series, as we have seen. The module V also has submodules

    W_1 := C_n,  W_2 := C_n ⊕ C_{n−1},  …,  W_j := C_n ⊕ C_{n−1} ⊕ … ⊕ C_{n−j+1},

and this gives us a series of submodules

    0 = W_0 ⊂ W_1 ⊂ … ⊂ W_{n−1} ⊂ W_n = V.

This also is a composition series, since W_j/W_{j−1} ≅ C_{n−j+1}, which is simple. Both composition series have length n, and if we take the permutation

    σ = (1 n)(2 n−1) …

then W_{σ(i)}/W_{σ(i)−1} ≅ C_i ≅ V_i/V_{i−1}.
For the proof of the Jordan-Hölder Theorem, we need to compare two given composition series. The case when V_{n−1} is different from W_{m−1} makes more work, and we will use the following.
Lemma 3.10
With the notation as in the Jordan-Hölder Theorem, suppose V_{n−1} ≠ W_{m−1}. Let D := V_{n−1} ∩ W_{m−1}. Then
(a) V_{n−1}/D ≅ V/W_{m−1}, and hence it is simple;
(b) W_{m−1}/D ≅ V/V_{n−1}, and hence it is simple.
Proof
We first show that V_{n−1} + W_{m−1} = V.
We have V_{n−1} ⊆ V_{n−1} + W_{m−1} ⊆ V, and since V/V_{n−1} is simple, V_{n−1} is a maximal submodule of V. So either V_{n−1} + W_{m−1} = V, or V_{n−1} + W_{m−1} = V_{n−1}.
Assume (for a contradiction) that V_{n−1} + W_{m−1} = V_{n−1}; then we have

    W_{m−1} ⊆ V_{n−1} + W_{m−1} = V_{n−1} ⊂ V.

But W_{m−1} also is a maximal submodule of V, therefore W_{m−1} = V_{n−1}, a contradiction to the hypothesis. Therefore V_{n−1} + W_{m−1} = V, as stated.
Now we apply an isomorphism theorem, and get

    V/W_{m−1} = (V_{n−1} + W_{m−1})/W_{m−1} ≅ V_{n−1}/(V_{n−1} ∩ W_{m−1}) = V_{n−1}/D.

Similarly one shows that V/V_{n−1} ≅ W_{m−1}/D.
Proof (of the Jordan-Hölder Theorem)
Given two composition series (I) and (II) as above, we say that they are equivalent provided n = m and there is a permutation σ ∈ Sym(n) such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1}. In this proof we will abbreviate 'composition series' by CS.
We use induction on n. Assume first n = 1. Then V is simple, so W_1 = V (since there is no non-zero submodule except V); and m = 1.
Now suppose n > 1. The inductive hypothesis is that the theorem holds for modules which have a composition series of length n − 1.
(a) Assume first that V_{n−1} = W_{m−1} =: U, say. Then the module U inherits a CS of length n − 1 from (I). By the inductive hypothesis, any two composition series of U have length n − 1. So the composition series of U inherited from (II) also has length n − 1, and therefore m − 1 = n − 1 and m = n. Moreover, by the inductive hypothesis, there is a permutation σ of {1, …, n − 1} such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1}. We also have V_n/V_{n−1} = W_n/W_{n−1}. So if we view σ as an element of Sym(n) fixing n, then we have the required permutation.
(b) Now assume V_{n−1} ≠ W_{m−1}; define D := V_{n−1} ∩ W_{m−1}. Take a composition series of D, say

    0 = D_0 ⊂ D_1 ⊂ … ⊂ D_t = D.

Then V has composition series

    (III) 0 = D_0 ⊂ D_1 ⊂ … ⊂ D_t = D ⊂ V_{n−1} ⊂ V
    (IV)  0 = D_0 ⊂ D_1 ⊂ … ⊂ D_t = D ⊂ W_{m−1} ⊂ V

since, by 3.10, the quotients V_{n−1}/D and W_{m−1}/D are simple. Moreover, by Lemma 3.10 we know that (III) and (IV) are equivalent: take the permutation σ = (t+1  t+2).
Next, we claim that m = n. The module V_{n−1} inherits a CS of length n − 1 from (I). So by the inductive hypothesis, all CS of V_{n−1} have length n − 1. But the CS which is inherited from (III) has length t + 1 and hence n − 1 = t + 1. Now, the module W_{m−1} inherits from (IV) a composition series of length t + 1 = n − 1, so by the inductive hypothesis all CS of W_{m−1} have length n − 1. In particular the CS inherited from (II) does, and therefore m − 1 = n − 1 and m = n.
The series (I) and (III) are equivalent: by the inductive hypothesis, there is a permutation of n − 1 letters, σ say, such that

    D_i/D_{i−1} ≅ V_{σ(i)}/V_{σ(i)−1}  (i ≠ n − 1),  and  V_{n−1}/D ≅ V_{σ(n−1)}/V_{σ(n−1)−1}.

We view σ as a permutation of n letters, and then also V/V_{n−1} = V_n/V_{n−1} ≅ V_{σ(n)}/V_{σ(n)−1}, which proves the claim.
Similarly one shows that (II) and (IV) are equivalent. We have already seen that (III) and (IV) are equivalent as well, and it follows that (I) and (II) are equivalent. This completes the proof.
Lemma 3.11
Suppose V is a finite-dimensional A-module, and N is a submodule of V. Then there is a composition series of V in which N is one of the terms.
Proof
Take a composition series of N, say

    0 = N_0 ⊂ N_1 ⊂ … ⊂ N_t = N.

Now take a composition series of V/N. By the Submodule Correspondence, we can write such a composition series as

    0 = U_0/N ⊂ U_1/N ⊂ … ⊂ U_s/N = V/N

since any submodule of V/N is of the form U/N where U is a submodule of V containing N. Moreover, by the submodule correspondence, U_i/N ⊂ U_{i+1}/N if and only if U_i ⊂ U_{i+1}. So we have U_0 = N and U_s = V, and we get a series of submodules of V

    (*) 0 = N_0 ⊂ N_1 ⊂ … ⊂ N_t = N ⊂ U_1 ⊂ U_2 ⊂ … ⊂ U_s = V.

We know that N_i/N_{i−1} is simple. Moreover, by an isomorphism theorem we have

    U_i/U_{i−1} ≅ (U_i/N)/(U_{i−1}/N),

which is simple. This proves that (*) is a composition series of V.
3.2 Examples
(1) Let A = M_n(K) and V = A. We have constructed a composition series in 3.9, and we have seen that any two composition factors of A are isomorphic to the natural module.
(2) This example shows that non-isomorphic composition factors can occur.
Let K = ℝ and A = ℝ × ℂ, the direct product of ℝ-algebras. Let S_1 := {(r, 0) : r ∈ ℝ} ⊆ A; this is a left ideal of A and therefore a submodule. Let also S_2 := {(0, z) : z ∈ ℂ} ⊆ A; this also is a left ideal of A and therefore a submodule.
Consider the series

    (*) 0 ⊂ S_1 ⊂ A.

We claim that A/S_1 ≅ S_2. Define π : A → S_2 to be the projection onto the second coordinate. By ??, this is an A-module homomorphism; it is clearly onto, and it has kernel S_1.
To show that (*) is a composition series, we must verify that S_1 and S_2 are simple. This is clear for S_1 since it is 1-dimensional. To prove it for S_2 we apply 3.3.
Take 0 ≠ (0, w) ∈ S_2; we must show that the submodule A(0, w) generated by (0, w) is equal to S_2.
Since w is a non-zero complex number, (0, w^{-1}) lies in A, and therefore (0, w^{-1})(0, w) = (0, 1) is contained in the submodule generated by (0, w); it follows that this submodule is S_2.
Exercise 3.1
Let A = T_2(K), the algebra of upper triangular 2×2 matrices. Find a composition series of the A-module A. Verify that non-isomorphic composition factors occur.
For a finite-dimensional algebra A, the composition series of V = A are very important because they actually give information on all simple A-modules. We will show this now; it is based on the following:
Lemma 3.12
Suppose S is a simple A-module, so that S = As for some non-zero s ∈ S. Let

    I := Ann(s) = {a ∈ A : as = 0}.

Then I is a left ideal of A, and S ≅ A/I as A-modules.
Proof
Define a map

    ψ : A → S,  ψ(a) := as.

This is an A-module homomorphism, by ??. It is clearly onto, and hence by the Isomorphism Theorem we have

    A/Ker(ψ) ≅ Im(ψ) = S.

By definition, the kernel of ψ is Ann(s). In particular it is a left ideal, by the Isomorphism Theorem.
Corollary 3.13
Let A be a finite-dimensional algebra. Then every simple A-module occurs as a composition factor of A, up to isomorphism. Hence there are only finitely many simple A-modules (up to isomorphism).
Proof
By Lemma 3.12 we know that if S is a simple A-module then S ≅ A/I for some left ideal I. Now, I is then a submodule, so by 3.11 there is some composition series of A in which I is one of the terms. So A/I is a composition factor of A.
Example 3.14
Let A = M_n(ℂ); this has a unique simple module, up to isomorphism, namely the natural module (ℂ^n)^t of column vectors. This follows from 3.1 and 3.13.
3.3 Simple A_1 × A_2-modules
Let A = A_1 × A_2, the direct product of two K-algebras. Recall from 1.14 that A_1 and A_2 are factor algebras of A (taking the projections). Recall that we can inflate modules from factor algebras to the large algebra, see 2.7. So we inflate the simple modules for A_1 and A_2, and we get A-modules. These inflations are still simple as A-modules, roughly speaking since we don't do anything. But you should check this.
Now consider a simple A-module S. We apply Exercise ??; this shows that S = S_1 ⊕ S_2 where S_i is a module for the algebra A_i. But S is simple, so S = S_1 and S_2 = 0, or S = S_2 and S_1 = 0. Say S = S_1; then from the definition of S_1 in Exercise ?? we see that elements of the ideal 0 × A_2 of A annihilate S_1. This shows that S_1 really is the inflation of a module for A_1, and it is still simple as such. Hence every simple A-module can be viewed as a module for A_1 or for A_2 (not both). We have now proved the following.
Lemma 3.15
The simple A-modules are precisely the inflations of the simple A_1-modules, together with the inflations of the simple A_2-modules to A.
Example 3.16
Let A = A_1 × A_2 where A_1 = M_2(K) and A_2 = M_3(K). By 3.14, the natural 2-dimensional module, of column vectors, is the only simple A_1-module, up to isomorphism, and similarly the natural 3-dimensional module is the only simple A_2-module, up to isomorphism. By the lemma, the algebra A has precisely two simple modules, up to isomorphism. The action on the 2-dimensional module is explicitly

    (a_1, a_2)v = a_1 v   (v ∈ (K^2)^t, a_i ∈ A_i).
Remark 3.17
Let R be any ring. Then the definition of 'simple' also makes sense for R-modules; in the definition of simple modules, we could equally well have taken R instead of A. For general rings, the concept of simple modules is far less important, even when the module in question is small, such as generated by one element.
For example, take R = ℤ and M = R. This does not have a simple submodule. Namely, any non-zero submodule U of M is a left ideal, hence is of the form U = ℤa for some 0 ≠ a ∈ ℤ. Then, for example, ℤ(2a) is a proper non-zero submodule of U, so U is not simple.
EXERCISES
3.2. Find a composition series for the 3-subspace algebra.
3.3. This extends Example 3.4. Let A = ℂG where G is the group of symmetries of the square. Let V = ℂ^2; this is an A-module if we take the representation as in Example 3.4 (or Example 3.2). Show that V is simple.
3.4. Suppose V and W are A-modules and φ : V → W is an A-module isomorphism.
(a) Show that V is simple if and only if W is simple.
(b) Suppose 0 = V_0 ⊂ V_1 ⊂ … ⊂ V_n = V is a composition series of V. Show that then

    0 ⊂ φ(V_1) ⊂ … ⊂ φ(V_n) = W

is a composition series of W.
3.5. Suppose M is an A-module and that U and V are maximal submodules of M. Suppose U ≠ V; show that then U + V = M.
3.6. Let A be the (3-dimensional) algebra of all upper triangular 2×2 matrices over a field K. Find a composition series of the A-module A. Show that A has precisely two simple modules, up to isomorphism.
3.7. Let A be the matrix ring

    A = [ ℂ  ℂ ]
        [ 0  ℝ ].

[That is, A is the subring of M_2(ℂ) of upper triangular matrices with 22-entry in ℝ.] Show that A is an ℝ-algebra. What is its dimension (over ℝ)? Let

    I = [ 0  ℂ ]
        [ 0  0 ].

(a) Show that I is a simple left ideal of A. It is also a right ideal. Is I simple as a right ideal?
(b) Show that A/I is isomorphic to ℂ × ℝ, as ℝ-algebras.
[A simple left ideal of A is a left ideal I such that there are no left ideals J of A with 0 ≠ J ⊆ I and J ≠ I.]
3.8. Let V be a 2-dimensional vector space over K, and let A be a subalgebra of End_K(V). Recall that V is then an A-module (by applying linear transformations to vectors). Show that V is not simple as an A-module if and only if there is some 0 ≠ v ∈ V which is an eigenvector for all α ∈ A.
3.9. Let A be the ℝ-algebra in Question 3.7. Find a composition series of the A-module A. Find also all simple A-modules (up to isomorphism).
3.10. Let A = K[X]/I where I = (f(X)).
(a) Let f(X) = X^4 − 1 and K = ℝ. Find all simple A-modules (up to isomorphism).
(b) Let f(X) = X^3 − 2 and K = ℚ. Find all simple A-modules (up to isomorphism).
3.11. Let V be an A-module where A is a finite-dimensional algebra, and let M and N be maximal submodules of V such that M ≠ N. Prove that
(i) M + N = V, and
(ii) V/M ≅ N/(M ∩ N) and V/N ≅ M/(M ∩ N).
Suppose now that M ∩ N = 0. Deduce that then M and N are simple, and hence that V has two composition series

    0 ⊂ M ⊂ V,  and  0 ⊂ N ⊂ V.

Write down the permutation as in the Jordan-Hölder Theorem.
3.12. Let A = K[X]/(X^n) and V = A. Show that V has a unique composition series, which has length n. [You might use 2.12.]
3.13. Find the simple modules for the algebra A = K[X]/(X^2) × K[X]/(X^3).
3.14. Let A = M_2(ℝ), and V = A. The following will show that A has infinitely many different composition series.
(a) Let e ∈ A be a projection, that is e^2 = e. Show that then Ae := {ae : a ∈ A} is a submodule of A. Show that if e ≠ 0, 1 then

    0 ⊂ Ae ⊂ A

is a composition series of A. [You may apply the Jordan-Hölder Theorem.]
(b) For λ ∈ ℝ, check that e_λ is a projection, where

    e_λ = [ 1  λ ]
          [ 0  0 ].

(c) Show that for λ ≠ μ, the modules Ae_λ and Ae_μ are distinct. Hence deduce that V has infinitely many different composition series.
3.15. Suppose A = ℂG where G is the dihedral group of order 10, as in Exercise 2.13. Suppose V is a simple A-module.
(a) Prove that dim V ≤ 2. [Show that if w is an eigenvector of the linear map x ↦ v_σ x, then so is v_τ w, and Span{w, v_τ w} is an A-submodule of V.]
(b) Show that if dim V = 1, then v_σ has eigenvalue 1, and v_τ has eigenvalue ±1 on V. Hence find all 1-dimensional simple A-modules.
3.16. Let V be any vector space over K, and let A = End_K(V). Show that V is a simple A-module. [The interesting case is when V is infinite-dimensional.]
4
Simple and semisimple modules, semisimple algebras
Let A be a finite-dimensional K-algebra. The Jordan-Hölder Theorem shows that simple modules are building blocks for arbitrary finite-dimensional A-modules. So it is important to understand simple modules. The first question one might ask is: given two simple A-modules, how can we find out whether or not they are isomorphic? This is answered by Schur's lemma, which we will now present. In fact Schur's lemma has many applications (and we'll give a few).
Lemma 4.1 (Schur's Lemma)
Suppose S and T are simple A-modules and φ : S → T is an A-module homomorphism. Then either φ = 0, or φ is an isomorphism.
Suppose S = T and K = ℂ. If dim S < ∞ then φ = λ·Id_S for some scalar λ ∈ ℂ.
Proof
Suppose φ is non-zero. The kernel ker(φ) is an A-submodule of S, but S is simple. Since φ ≠ 0, ker(φ) ≠ S. So ker(φ) = 0 and φ is 1-1.
The image im(φ) is a submodule of T, and T is simple. Since φ ≠ 0, we know im(φ) ≠ 0 and therefore im(φ) = T. So φ is onto, and we have proved that φ is an isomorphism.
For the last part, we know that over ℂ, φ has an eigenvalue, λ say. That is, there is some non-zero v ∈ S such that φ(v) = λv. The map λ·Id_S is also an A-module homomorphism; and so is φ − λ·Id_S as well. The kernel of φ − λ·Id_S is a submodule and is non-zero (it contains v). It follows that ker(φ − λ·Id_S) = S, so that we have φ = λ·Id_S.
This is very general; in the first part S and T need not be finite-dimensional. It has many applications. One is that elements in the centre of an algebra act as scalars on simple modules when A is a ℂ-algebra.
The centre of A is defined to be

    Z(A) := {z ∈ A : za = az for all a ∈ A}.

Lemma 4.2
Let A be a ℂ-algebra, and S a simple A-module. If z ∈ Z(A) then there is some λ = λ_z ∈ ℂ such that zx = λ_z x for all x ∈ S.
Proof
The linear map φ : S → S defined by φ(s) = zs is an A-module homomorphism (this is easy to check). But S is simple, and A is an algebra over ℂ. So by Schur's Lemma there is some λ ∈ ℂ such that φ(s) = λs for all s ∈ S, that is, zs = λs.
Corollary 4.3
Assume A is a commutative algebra over ℂ. Then every simple A-module is 1-dimensional.
Proof
Let S be a simple A-module. We have A = Z(A), so by the previous lemma, every a ∈ A acts as scalar multiplication on S. Take 0 ≠ v ∈ S; then for every a ∈ A, av belongs to the span of v. So the span of v is a non-zero submodule, so it must be equal to S.
The assumption that the field is ℂ is important. For example, consider the 2-dimensional algebra A over ℝ with basis 1_A, β, where β^2 = −1_A. Take V = A; this is a simple A-module: suppose, for a contradiction, V has a non-trivial submodule; this must be 1-dimensional, say it is spanned by some 0 ≠ v ∈ A. Then βv is a scalar multiple of v, that is, v is an eigenvector of the map x ↦ βx. But this map does not have a real eigenvector in A. So we have a commutative algebra over ℝ which has a 2-dimensional simple module.
Infinite-dimensional algebras can behave differently.
Example 4.4
The Heisenberg algebra H is the algebra over ℂ generated by two non-commuting elements X and Y, and the only relation which holds in the algebra is that XY − YX = q·1_H, where q is a non-zero element in ℂ. It is not finite-dimensional; for example the monomials 1, X, X^2, … are linearly independent.
The Heisenberg algebra does not have any finite-dimensional simple modules:
Suppose, for a contradiction, S is a finite-dimensional simple H-module. Fix a basis of S, and write multiplication by X, Y as matrices with respect to this basis. Then the matrix of XY − YX is qI_n where I_n is the identity matrix and n = dim S. Take the trace of this matrix,

    0 = Tr(XY − YX) = Tr(qI_n) = qn,

and so dim S = 0 but S ≠ 0, a contradiction.
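The trace obstruction is concrete enough to test numerically. A small Python sketch (with random matrices chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 4, 1.0

# For any n x n matrices X, Y the commutator XY - YX has trace 0,
# while q * I_n has trace q*n != 0; so XY - YX = q * I_n is
# impossible for finite-dimensional matrices.
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
comm = X @ Y - Y @ X
print(np.trace(comm))           # ~0 up to rounding error
print(np.trace(q * np.eye(n)))  # q*n = 4.0
```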
4.1 Some classifications of simple modules
Let A be a finite-dimensional algebra over K. We have seen that all simple A-modules occur as composition factors of the A-module A (see 3.13). In particular this implies that simple modules of a finite-dimensional algebra are always finite-dimensional.
One could ask whether it is possible, given A, to completely describe all simple modules, that is, one simple module of each isomorphism class. In general this is rather hard. But in special cases it is possible.
4.1.1 Simple modules of A = ℂG where G is a cyclic group
Let A = ℂG where G = ⟨g⟩, a cyclic group of order n. Then A is commutative, so by 4.3 every simple A-module is 1-dimensional. So let S = span{x} be a 1-dimensional A-module. Then the structure of S is completely determined by the action of v_g, since v_g generates the algebra A. We must have v_g x = λx for some λ ∈ ℂ. We have then

    x = v_1 x = v_{g^n} x = (v_g)^n x = λ^n x

and λ^n = 1. Hence λ = exp(2πki/n) for some k with 0 ≤ k ≤ n − 1.
This really does define an A-module; to see this it suffices to note that the corresponding map from G to GL(S) is a group homomorphism.
Choose and fix a primitive n-th root of unity, ω say. Then λ = ω^k for some k with 0 ≤ k ≤ n − 1. The representation we want is the map

    ρ : G → GL(1, ℂ),  ρ(g^j) := ω^{jk}  (1 ≤ j ≤ n),

and one checks that this is a group homomorphism.
Note also that for different k we get representations which are not equivalent. In total we have n distinct simple modules.
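These n representations can be written down and checked mechanically; a short Python sketch (the labelling by k follows the text):

```python
import cmath

def cyclic_irreps(n):
    """The n one-dimensional representations of a cyclic group <g> of
    order n: rho_k(g^j) = omega^(jk) for a fixed primitive n-th root
    of unity omega."""
    omega = cmath.exp(2j * cmath.pi / n)
    return [[omega ** (j * k) for j in range(n)] for k in range(n)]

# Check the homomorphism property rho_k(g^i) rho_k(g^j) = rho_k(g^(i+j)).
n = 5
for rho in cyclic_irreps(n):
    for i in range(n):
        for j in range(n):
            assert abs(rho[i] * rho[j] - rho[(i + j) % n]) < 1e-9
print("all", n, "representations are group homomorphisms")
```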
4.1.2 Simple modules for G of order p^r, over F_p
Another type of algebra where we can find all simple modules, up to isomorphism, is the group algebra A = KG where G is some group of order p^r for p a prime, and K = F_p, the field with p elements.
Lemma 4.5
Assume |G| = p^r, p prime, and K = F_p, the finite field with p elements. Then the trivial module is the only simple A-module (up to isomorphism).
Proof
Let V be a simple A-module, and let ρ : G → GL(V) be the corresponding representation. View V as a G-set; then it is a disjoint union of orbits, and each orbit has size dividing |G|, i.e. some power of p.
If dim(V) = n, then the set V has size p^n, which is a power of p. So the number of orbits of size 1 is divisible by p. Now {0} is an orbit of size 1, so the number of orbits of size 1 is non-zero and then must be at least p. So there is some 0 ≠ x ∈ V such that ρ_g x = x for all g ∈ G.
For the module, this means v_g x = x for all g ∈ G. Then Span{x} is a submodule. But V is simple, so V = Span{x}, and this is the trivial module.
Definition 4.6
Let A be a K-algebra, and let V be a non-zero (finite-dimensional) A-module. Then V is semisimple if V has simple submodules S_1, S_2, …, S_k such that

    V = S_1 ⊕ S_2 ⊕ … ⊕ S_k.
4.1.3 Examples
(1) Any simple module is semisimple. (So the name 'semisimple' is reasonable.)
(2) Let A = K. Then A-modules are the same as vector spaces. Given a vector space V, take a basis b_1, …, b_n and set S_i := Span{b_i}. Then S_i is a simple A-submodule of V, and V = S_1 ⊕ … ⊕ S_n. This shows that every A-module is semisimple.
(3) Let A = M_n(K) and V = A. We know from ?? that V = C_1 ⊕ C_2 ⊕ … ⊕ C_n where C_i is the space of matrices which are zero outside the i-th column. We have also seen that each C_i is a simple A-module. So V is semisimple.
(4) Not every module is semisimple. Let A be the 2-dimensional algebra over ℝ with basis 1_A, β such that β^2 = 0. Let V = A; this is not semisimple.
Assume for a contradiction that V is semisimple. In 3.1 we have proved that V has a composition series of length two, with two 1-dimensional composition factors. So V is not simple, and then we can only have V = S_1 ⊕ S_2 where S_1 and S_2 are 1-dimensional submodules of V. Then we have a basis of V consisting of eigenvectors for every element in A, and in particular eigenvectors for x ↦ βx. But this map is not diagonalizable (for example, it has only eigenvalue 0, and if it were diagonalizable then it would follow that β = 0).
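The failure of diagonalizability can be seen directly. A Python sketch (the matrix of x ↦ βx in the basis 1_A, β, as an assumption consistent with the relation β^2 = 0):

```python
import numpy as np

# Multiplication by beta on A (basis 1_A, beta):
# beta * 1_A = beta, beta * beta = 0.
B = np.array([[0.0, 0.0], [1.0, 0.0]])

# Both eigenvalues are 0, but the 0-eigenspace has dimension
# 2 - rank(B) = 1 < 2: there is no basis of eigenvectors,
# so V = A is not semisimple.
print(np.linalg.eigvals(B))            # [0., 0.]
print(2 - np.linalg.matrix_rank(B))    # 1
```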
Given some A-module V, how can we decide whether or not it is semisimple? There are several criteria, and each of them has advantages, depending on the circumstances.
Lemma 4.7
Let V be an A-module; then the following are equivalent.
(1) If U is a submodule of V then there is a submodule C of V such that U ⊕ C = V.
(2) V is a direct sum of simple submodules (that is, V is semisimple).
(3) V is a sum of simple modules.
Proof
(1) ⇒ (2): We may assume that V ≠ 0; then V has at least one simple submodule. There is then a submodule U of V which is a direct sum of simple modules, of largest possible dimension. We must show that U = V. By (1), there is a submodule C such that V = U ⊕ C. Assume (for a contradiction) that U ≠ V; then C is non-zero, and then C must have a simple submodule, S say. Consider now U' := U + S. We have S ∩ U ⊆ C ∩ U = 0, that is, U' = U ⊕ S. Since U is a direct sum of simple submodules, also U' is a direct sum of simple submodules. But U is a proper submodule of U' and dim U < dim U', which contradicts the choice of U. This shows that U = V, that is, V is a direct sum of simple modules.
(2) ⇒ (3): This is clear.
(3) ⇒ (1): Let U be a submodule of V. Consider the set of submodules of V given by

    𝒮 = {W ⊆ V : U ∩ W = 0}.

Then 𝒮 ≠ ∅ (it contains 0). Take C ∈ 𝒮 of largest possible dimension. We claim that then U ⊕ C = V.
By construction U ∩ C = 0. Assume (for a contradiction) that U + C ≠ V. Since V = S_1 + S_2 + … + S_k where the S_j are simple submodules of V, there must be a simple submodule S_i of V with S_i ⊄ U + C, and then S_i ⊄ C. So C ⊂ C + S_i, a proper submodule, and dim C < dim(C + S_i). So we get a contradiction if we show that the module C + S_i belongs to the set 𝒮. So we must show that

    (C + S_i) ∩ U = 0:

Take u = c + x ∈ U with c ∈ C and x ∈ S_i. Then x = u − c ∈ (U + C) ∩ S_i. But (U + C) ∩ S_i is a submodule of S_i and is not equal to S_i (since S_i is not contained in U + C). So (U + C) ∩ S_i = 0. It follows that x = 0 and u = c ∈ U ∩ C = 0.
So we now have a contradiction. This completes the proof that V = U ⊕ C.
Lemma 4.8
(a) Submodules and factor modules of semisimple modules are semisimple.
(b) If V_1 and V_2 are semisimple A-modules then V_1 ⊕ V_2 is a semisimple A-module.
Exercise 4.1
Suppose f : S → X is an A-module homomorphism where S is simple. Show that then f(S) is either simple, or is zero.
Solution: The Isomorphism Theorem gives f(S) ≅ S/ker(f). Since S is simple, we have ker(f) = 0 or ker(f) = S. In the first case, f(S) ≅ S and f(S) is simple; otherwise f(S) = 0.
Proof (of 4.8)
(a) Suppose V is semisimple with factor module V/U. Let π : V → V/U be the canonical map π(v) = v + U; this is an A-module homomorphism. Suppose V = S_1 + S_2 + … + S_k with S_i simple; then π(V) = π(S_1) + π(S_2) + … + π(S_k), and π(S_i) is either zero or simple. So V/U is a sum of simple modules, hence is semisimple [here we use part (3) of 4.7].
If U is a submodule of V, then by part (1) of 4.7 we know V = U ⊕ C, and then U ≅ V/C, so U is semisimple by what we have just proved.
(b) Let V_1 and V_2 be semisimple. By Exercise 2.8 we have V := V_1 ⊕ V_2 = Ṽ_1 ⊕ Ṽ_2 with Ṽ_i ≅ V_i. So Ṽ_i is a direct sum of simple modules, for each i, and hence V is also a direct sum of simple modules.
In Example 4.1.3(2) we have seen that the algebra A = K has the property that every A-module is semisimple. Other algebras have the same property, and this has inspired the following definition.
Definition 4.9
The algebra A is semisimple if every A-module is semisimple.
How can one see whether or not A is semisimple without having to check all modules? This is easy, because of the following.
Lemma 4.10
Let A be a finite-dimensional K-algebra. Then A is semisimple if and only if the A-module A is semisimple.
Proof
If all A-modules are semisimple then in particular A as an A-module is semisimple.
To prove the other implication, suppose A as an A-module is semisimple. Take an arbitrary A-module M. Take a K-basis of M, say m_1, …, m_n. Write A^n = A × A × … × A, the direct product of n copies of A. Define ψ : A^n → M by

    ψ(a_1, …, a_n) = Σ_{i=1}^n a_i m_i.

This is an A-module homomorphism (by 1.6.1) and it is surjective. So the Isomorphism Theorem gives that M ≅ A^n/ker(ψ).
The module A is semisimple, and then so is A^n, by 4.8 part (b) and induction on n. Now 4.8(a) shows that M is semisimple.
4.1.4 Examples
(1) The algebra A = M_n(K) is semisimple. (See 4.1.3.)
(2) Let A be the 2-dimensional algebra over ℝ as in 4.1.3(4). We have found there a module which is not semisimple, and hence A is not semisimple.
The algebra is also isomorphic to the algebra of matrices

    { [ a  b ]            }
    { [ 0  a ] : a, b ∈ ℝ }.

[Namely, the algebra is 2-dimensional and it contains a non-zero element with square zero; see ??.] So this algebra of matrices also is not semisimple. However, it is a subalgebra of M_2(ℝ), which is semisimple!
The last example shows that a subalgebra of a semisimple algebra need not be semisimple. On the other hand, factor algebras of semisimple algebras are always semisimple; we show this now.
Lemma 4.11
Let I be an ideal of A and B = A/I. The following are equivalent:
(i) V is a semisimple B-module.
(ii) V is a semisimple A-module with IV = 0.
Hence if A is semisimple then B is semisimple.
Proof
Recall from 2.7 that B-modules can be viewed as the A-modules V with IV = 0, where the two actions are related by the equation

    ax = (a + I)x  (a ∈ A, x ∈ V).

[We write IV for the span of the set {xv : x ∈ I, v ∈ V}.]
(i) ⇒ (ii): Let V = S_1 ⊕ … ⊕ S_k where the S_j are simple B-submodules of V. Then we view V as an A-module with IV = 0. Then IS_j ⊆ IV = 0, therefore S_j can also be viewed as an A-module. As an A-module it is simple as well: namely, if 0 ≠ x ∈ S_j then Ax = Bx = S_j. So V is also semisimple as an A-module.
(ii) ⇒ (i): Suppose V = S_1 ⊕ … ⊕ S_k with S_i simple A-submodules of V, and IV = 0. Then IS_i ⊆ IV = 0, so S_i can be viewed as a B-module, and it is still simple as a B-module, since for 0 ≠ x ∈ S_i we have Ax = S_i and Ax = Bx.
For the last part, suppose A is semisimple. Take any B-module V; then we can view V as an A-module satisfying IV = 0. But A is semisimple, therefore V is semisimple as an A-module. By the implication (ii) ⇒ (i) it is also semisimple as a B-module. This shows that B is semisimple.
Proposition 4.12
Let A = A_1 × A_2, the direct product of algebras. Then A is semisimple if and only if both A_1 and A_2 are semisimple.
Proof
Suppose A is semisimple. The projection π_1 : A → A_1 onto the first coordinate is an algebra homomorphism and it is surjective. By 4.11, A_1 is semisimple. Similarly, A_2 is semisimple.
Assume A_1 and A_2 are semisimple. Write A_1 = S_1 ⊕ S_2 ⊕ … ⊕ S_k where the S_i are simple A_1-submodules of A_1; similarly A_2 = T_1 ⊕ … ⊕ T_l with T_i simple A_2-submodules of A_2. Then the A-module A_1 × A_2 can be written as the direct sum of all S_i × 0 and 0 × T_j. These are simple A-modules, by 3.15.
EXERCISES
4.2. For each of the following subalgebras A of M_2(K), consider the natural module V = (K^2)^t of column vectors. Show that V is simple, and find End_A(V), that is, the algebra of linear maps φ : V → V which commute with all elements in A. By Schur's Lemma, this algebra is a division ring. Identify it with something known.

    (i)  K = ℝ,     A = { [ a   b ]            }
                        { [ −b  a ] : a, b ∈ ℝ }.

    (ii) K = ℤ/2ℤ,  A = { [ a  b   ]               }
                        { [ b  a+b ] : a, b ∈ ℤ/2ℤ }.

[Note: to see that in each case A really is an algebra, see Exercise 1.5.]
4.3. Suppose A is a finite-dimensional algebra over a finite field F, and S is a (finite-dimensional) simple A-module. Let D := End_A(S). Show that then D must be a field.
4.4. An idempotent of an algebra A is an element e ∈ A such that e^2 = e. Let e_1 and e_2 be idempotents such that 1_A = e_1 + e_2 and e_1 e_2 = 0 = e_2 e_1. Assume also that e_1 and e_2 are central in A, that is, e_i a = a e_i for all a ∈ A.
(a) Suppose V is an A-module; show that then e_1 V and e_2 V are submodules of V and that V = e_1 V ⊕ e_2 V. Moreover, show that e_1 V = {v ∈ V : v = e_1 v}.
Suppose now that S_1 and S_2 are simple A-modules such that S_1 = e_1 S_1 and S_2 = e_2 S_2.
(b) Show that then S_1 is not isomorphic to S_2.
(c) Assume K = ℂ. Let V = S_1 ⊕ S_2; show that End_A(V) is isomorphic to the algebra of diagonal matrices in M_2(ℂ). [Hint: apply Schur's Lemma.]
Show also that if W = S_1 ⊕ S_1 then End_A(W) ≅ M_2(ℂ).
4.5. (Continuation) With the notation of the previous question, what can you say about End_A(N) where N = (S_1 ⊕ S_1) ⊕ (S_2 ⊕ S_2 ⊕ S_2)?
4.6. Suppose A is an algebra and N is some A-module. We define a subquotient of N to be a module Y/X where X, Y are submodules of N such that 0 ⊆ X ⊆ Y ⊆ N. Suppose N has composition length 3, and assume that every subquotient of N which has composition length 2 is semisimple. Show that then N must be semisimple.
[Suggestion: choose a simple submodule X of N and show that there are submodules U_1 ≠ U_2 of N, both containing X, of composition length 2. Then show that U_1 + U_2 is the direct sum of three simple modules.]
4.7. Let G be the symmetric group Sym(n) and Ω the natural G-set, so that the permutation module KΩ has basis b_1, b_2, …, b_n. Let

    W := { Σ a_i b_i : a_i ∈ K, Σ_i a_i = 0 }.

Show that W is a submodule of KΩ. Show also that if K = ℂ then W is simple, and ℂΩ is the direct sum of W with a copy of the trivial module.
5
The Wedderburn Theorem
Given a K-algebra A, when is it semisimple? Wedderburn's theorem answers this, and it gives a complete description of arbitrary semisimple algebras. We will prove the theorem for the case K = ℂ, and we will explain what the answer is for arbitrary fields. For a start, we consider commutative finite-dimensional semisimple algebras over ℂ. The classification of such algebras is attributed to Weierstrass and Dedekind.
Proposition 5.1
Suppose A is a finite-dimensional commutative algebra over ℂ. Then A is semisimple if and only if A is isomorphic to the direct product of copies of ℂ, as an algebra: A ≅ ℂ × ℂ × … × ℂ.
Proof
We know that ℂ as an algebra over ℂ is semisimple, and by Proposition 4.12, so is the direct product of finitely many copies of ℂ.
Conversely, suppose A is the direct sum of simple submodules,

    A = S_1 ⊕ S_2 ⊕ … ⊕ S_k.
By Corollary 4.3, each S_i is 1-dimensional. We write the identity of A as

    1_A = e_1 + e_2 + … + e_k,  e_i ∈ S_i.

(a) We claim that e_i^2 = e_i and e_i e_j = 0 for i ≠ j. Namely, we have

    e_i = e_i 1_A = e_i e_1 + e_i e_2 + … + e_i e_k

and therefore

    e_i − e_i^2 = e_i e_1 + … + e_i e_{i−1} + e_i e_{i+1} + … + e_i e_k.

The left hand side belongs to S_i and the right hand side belongs to ⊕_{j≠i} S_j. But the sum is direct, therefore

    S_i ∩ ⊕_{j≠i} S_j = 0.

So e_i^2 = e_i, and now 0 = e_i e_1 + … + e_i e_{i−1} + e_i e_{i+1} + … + e_i e_k, and since we have a direct sum, each summand must be zero.
We claim that e_i ≠ 0 for all i. Take some non-zero x ∈ S_i; then

    x = x 1_A = x e_1 + … + x e_i + … + x e_k.

Now x − x e_i ∈ S_i ∩ ⊕_{j≠i} S_j = 0, and therefore x = x e_i ≠ 0. It follows that e_i must be non-zero. Since S_i is 1-dimensional, we deduce that S_i = ℂe_i, and therefore for each a ∈ A we have a e_i = a_i e_i with a_i ∈ ℂ.
Now, for every a ∈ A we have

    (*) a = a 1_A = a e_1 + a e_2 + … + a e_k = a_1 e_1 + … + a_k e_k.

Define a map ψ : A → ℂ × … × ℂ by

    ψ(a) := (a_1, a_2, …, a_k)  (a_i as in (*)).

This is clearly linear. It is also onto, as ψ(e_i) = (0, …, 0, 1, 0, …, 0) for each i, by the above. It is 1-1: if all a_i are zero then a = 0, by (*). The map is an algebra homomorphism:

    ψ(a)ψ(b) = (a_1, …, a_k)(b_1, b_2, …, b_k) = (a_1 b_1, a_2 b_2, …, a_k b_k) = ψ(ab)

since

    ab 1_A = a(b_1 e_1 + b_2 e_2 + … + b_k e_k)
           = ab_1 e_1 + ab_2 e_2 + … + ab_k e_k
           = b_1 (a e_1) + b_2 (a e_2) + … + b_k (a e_k)
           = b_1 a_1 e_1 + b_2 a_2 e_2 + … + b_k a_k e_k
           = a_1 b_1 e_1 + a_2 b_2 e_2 + … + a_k b_k e_k.
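For a concrete instance of these orthogonal idempotents, one can take the commutative group algebra ℂC_3 of a cyclic group of order 3. A Python sketch (the formula for e_k is the standard one for cyclic groups, cf. 4.1.1) verifying e_k^2 = e_k, e_k e_l = 0 and Σ e_k = 1_A:

```python
import numpy as np

# Group algebra C[C_3] with basis v_1, v_g, v_{g^2};
# multiplication: v_{g^i} v_{g^j} = v_{g^(i+j mod 3)}.
n = 3
omega = np.exp(2j * np.pi / n)

def mult(a, b):
    c = np.zeros(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

# e_k = (1/3) sum_j omega^(-jk) v_{g^j}
es = [np.array([omega ** (-j * k) / n for j in range(n)]) for k in range(n)]

print(np.round(sum(es), 10))  # equals v_1, i.e. 1_A
for k in range(n):
    assert np.allclose(mult(es[k], es[k]), es[k])    # e_k^2 = e_k
    for l in range(k + 1, n):
        assert np.allclose(mult(es[k], es[l]), 0)    # e_k e_l = 0
print("orthogonal idempotents summing to 1_A")
```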
For arbitrary semisimple algebras, the building blocks are matrix algebras M_n(ℂ), and the structure theorem for semisimple algebras over ℂ is due to Wedderburn (though, according to [1], a more general version was proved by Artin).
Theorem 5.2 (The Wedderburn Theorem for ℂ)
Let A be a finite-dimensional algebra over ℂ. Then A is semisimple if and only if A is isomorphic to a direct product of matrix rings,

    A ≅ M_{n_1}(ℂ) × M_{n_2}(ℂ) × … × M_{n_k}(ℂ).
One direction is already known: we know that the direct product of matrix algebras is always semisimple. Namely, in 4.1.3 we have seen that M_n(K) is semisimple for each n ≥ 1, in fact for any field K. Now, Proposition 4.12 shows that the direct product of matrix algebras also is semisimple.
To prove that any finite-dimensional semisimple algebra over ℂ is isomorphic to a direct product of such matrix rings takes more work.
The first thing we might ask is: where do the matrices come from? We are used to writing linear maps as matrices, and a good guess might be that this should be generalized in some way. If a linear map is written as a matrix, one starts by fixing a basis of the space, and works with coordinates with respect to this basis. For example, if we take a 2-dimensional space then we identify the vector space with column vectors in K^2.
For our generalization we imitate this, and we consider an A-module which is a direct product. So let V = U_1 × U_2 = {(u_1, u_2) : u_i ∈ U_i} where U_1 and U_2 are A-modules. We define a matrix algebra, with underlying space

    Γ = { [γ_ij] : γ_ij ∈ Hom_A(U_j, U_i) }

and with matrix multiplication and matrix addition. One checks that this really is an algebra. Note however that the matrix entries do not commute in general.
Next, we want to relate this algebra to the endomorphism algebra of V. Let π_i : V → U_i be the projection,

    π_i(u_1, u_2) = u_i.

This is an A-module homomorphism, by 1.6.1. Similarly, let ι_1 : U_1 → V be the inclusion map,

    ι_1(u_1) = (u_1, 0),

and similarly define ι_2. These are also A-module homomorphisms, by 1.6.1. These maps have a very important property:
(*) We have ι_1 π_1 + ι_2 π_2 = Id_V:
Namely, if m = (u_1, u_2) then u_i = π_i(m) and

    ι_1 π_1(m) + ι_2 π_2(m) = ι_1(u_1) + ι_2(u_2) = (u_1, 0) + (0, u_2) = (u_1, u_2) = m.
Lemma 5.3
Let V = U_1 × U_2. Then the algebra End_A(V) is isomorphic to Γ.
Remark 5.4
When A = K and the U_i are just 1-dimensional vector spaces, then this is the same as writing linear maps of a 2-dimensional space as matrices. The proof in general is completely analogous.
Proof
Given an A-module homomorphism φ : V → V, let

    φ_ij := π_i φ ι_j : U_j → U_i;

this is an A-module homomorphism. Now define

    Ψ : End_A(V) → Γ,  φ ↦ [φ_ij].

We claim that this map is an algebra isomorphism.
(a) It is linear: we have

    π_i (cφ + ψ) ι_j = c π_i φ ι_j + π_i ψ ι_j

for all i, j, where c is a scalar and φ, ψ are A-module endomorphisms of V; therefore Ψ(cφ + ψ) = cΨ(φ) + Ψ(ψ).
(b) Ψ commutes with taking products: consider Ψ(φ)Ψ(ψ) = [φ_ij][ψ_ij]. This matrix product has ij-entry

    Σ_{t=1}^2 φ_it ψ_tj = Σ_{t=1}^2 π_i φ ι_t π_t ψ ι_j = π_i φ (ι_1 π_1 + ι_2 π_2) ψ ι_j.

By (*) we have ι_1 π_1 + ι_2 π_2 = Id_V, and so the ij-th entry is equal to

    π_i φψ ι_j,

and this is precisely the ij-th entry of Ψ(φψ). This is true for all i, j and therefore Ψ(φ)Ψ(ψ) = Ψ(φψ).
(c) If φ = Id_V then one gets φ_ij = 0 for i ≠ j and φ_ii = Id_{U_i}. This shows that Ψ(Id_V) is the identity matrix in Γ.
(d) The map Ψ is one-to-one: suppose π_i φ ι_j = 0 for all i, j. Then expanding shows that for all m ∈ V we have φ(m) = Σ_{i,j} ι_i π_i φ ι_j π_j(m), and so this is zero. That is, φ = 0.
(e) The map Ψ is also onto: given a matrix [γ_ij] in Γ, define φ : V → V by setting

    φ(u_1, u_2) = [γ_ij] (u_1, u_2)^t.

One checks that Ψ(φ) = [γ_ij].
Example 5.5
Let A be an algebra over ℂ. Let V = S_1 ⊕ S_2 ⊕ S_2 = {(s_1, s_2, s_2') : s_1 ∈ S_1, s_2, s_2' ∈ S_2}, where S_1 and S_2 are simple and we assume S_1 is not isomorphic to S_2.
Let φ : V → V be an A-module homomorphism; then

    φ(s_1, s_2, s_2') = φ(s_1, 0, 0) + φ(0, s_2, 0) + φ(0, 0, s_2').

So we look at each of the three terms. Consider φ(s_1, 0, 0) = (x_1, x_2, x_2') with x_1 ∈ S_1 and the other two components in S_2.
We get a map s_1 ↦ x_1, and by Schur's Lemma this map is a scalar multiple of the identity; in other words there is a scalar λ such that x_1 = λs_1, for any s_1 ∈ S_1. Again by Schur's Lemma, x_2 = 0 and x_2' = 0. We start writing down the matrix for φ, and what we have found tells us that the first column of this matrix is

    ( λ )
    ( 0 )
    ( 0 ).

If we continue in this way, we get a matrix of the form

    ( λ  0 )
    ( 0  A' )   where A' ∈ M_2(ℂ).
Proposition 5.6
Assume V is a semisimple A-module, where A is a ℂ-algebra. Then End_A(V) is isomorphic to a direct product of matrix rings,

    M_{n_1}(ℂ) × … × M_{n_k}(ℂ).
Proof
Let V = S_1 ⊕ S_2 ⊕ … ⊕ S_n where the S_i are simple. By 5.3 and induction, we have End_A(V) ≅ Γ where

    Γ = { [φ_ij] : φ_ij : S_j → S_i an A-module homomorphism }.

Now we apply Schur's Lemma. If S_j ≇ S_i then φ_ij = 0.
Suppose S_j ≅ S_i. Then we identify S_j and S_i, and by Schur's Lemma, φ_ij = λ_ij·Id with λ_ij ∈ ℂ.
We label the simple modules so that isomorphic ones come together: let S_1 ≅ S_2 ≅ … ≅ S_{n_1}, then S_{n_1+1} ≅ S_{n_1+2} ≅ … ≅ S_{n_1+n_2}, where S_{n_1+1} ≇ S_1, and so on.
Then a matrix in Γ has block diagonal shape; off-diagonal blocks are zero, and in the diagonal blocks we have arbitrary matrices:

    ( A_1  0   …  0  )
    ( 0    A_2 …  0  )
    ( …           …  )
    ( 0    …      A_k ).

Multiplying two of these, we get

    ( A_1      )( B_1      )   ( A_1 B_1          )
    (   ⋱      )(   ⋱      ) = (     ⋱            )
    (      A_k )(      B_k )   (          A_k B_k ).

This shows that the multiplication is precisely as in the direct product of matrix rings. This suggests to define

    Θ( diag(A_1, A_2, …, A_k) ) = (A_1, A_2, …, A_k),

which is an element in M_{n_1}(ℂ) × … × M_{n_k}(ℂ). This map is clearly ℂ-linear and bijective. We have just shown that it preserves products; hence it is an isomorphism of algebras.
Recall from 1.4 that the opposite algebra A^op of A is the algebra with underlying vector space A, and with multiplication

    a ∗ b := ba.

Lemma 5.7
Let A = M_n(K); then A^op is isomorphic to A.
Proof
For any n×n matrix X, let ψ(X) := X^t, the transpose of the matrix. This is linear (from basic linear algebra), and it is a vector space isomorphism since ψ^2 = id. Furthermore, as one learns in elementary linear algebra,

    ψ(XY) = (XY)^t = Y^t X^t = ψ(Y)ψ(X) = ψ(X) ∗ ψ(Y).

This shows that ψ is an isomorphism of algebras A → A^op.
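The anti-homomorphism property of the transpose is a one-line numerical check. A Python sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# psi(X) = X^t reverses the order of multiplication:
# psi(XY) = (XY)^t = Y^t X^t = psi(Y) psi(X),
# i.e. psi is an isomorphism M_n(K) -> M_n(K)^op.
assert np.allclose((X @ Y).T, Y.T @ X.T)
print("(XY)^t == Y^t X^t")
```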
Exercise 5.1
Let A = A_1 × A_2, the direct product of algebras. Check that A^op is isomorphic to A_1^op × A_2^op.
Solution 5.8
The underlying vector spaces for A^op and A_1^op × A_2^op are the same. The multiplication in A^op is

    (a_1, a_2) ∗ (b_1, b_2) = (b_1, b_2)(a_1, a_2) = (b_1 a_1, b_2 a_2).

The multiplication in A_1^op × A_2^op is

    (a_1, a_2)(b_1, b_2) = (a_1 ∗ b_1, a_2 ∗ b_2) = (b_1 a_1, b_2 a_2).

Hence the identity map gives us an algebra isomorphism from A^op to A_1^op × A_2^op.
Lemma 5.9
Let A be any K-algebra. Then A is isomorphic to End_A(A)^op.
Proof
(a) Let a ∈ A; define right multiplication r_a : A → A by

    r_a(x) = xa  (x ∈ A).

Then r_a is an A-module homomorphism, by 1.6.1. Furthermore, we see that if a = 1_A then r_a is the identity map of A.
(b) We have End_A(A) = {r_a : a ∈ A}: one inclusion holds by (a), and for the other inclusion, take f : A → A to be an A-module homomorphism. Set a := f(1_A). Then for any x ∈ A,

    f(x) = f(x 1_A) = x f(1_A) = xa = r_a(x).

This is true for all x ∈ A, and hence f = r_a.
(c) Consider the composition. We have

    r_a ∘ r_b(x) = r_a(xb) = (xb)a = x(ba) = r_{ba}(x).

So r_a ∘ r_b = r_{ba}; in other words, in the opposite algebra End_A(A)^op we have r_a ∗ r_b = r_b ∘ r_a = r_{ab}.
Hence we define a map Φ : A → End_A(A)^op by setting

    Φ(a) = r_a.

By (a), it takes the identity to the identity, and by part (c), it preserves products. One checks that Φ is K-linear. Finally, by (b) it is bijective, and we have now proved that Φ is an isomorphism.
5.1 The proof of Wedderburn's theorem
We only have to put the previous results together. The A-module A is semisimple, so by 5.9 and 5.6 we have

    A ≅ (End_A(A))^op ≅ (M_{n_1}(ℂ) × … × M_{n_k}(ℂ))^op,

and by 5.8 and 5.7 this is isomorphic to

    (M_{n_1}(ℂ))^op × … × (M_{n_k}(ℂ))^op ≅ M_{n_1}(ℂ) × … × M_{n_k}(ℂ).

Remark 5.10
We get the result for the commutative case now as a corollary. Namely, for a commutative semisimple algebra, all matrix blocks in the Wedderburn theorem must be commutative, and this is only true if n_i = 1 for all i.
We can now give a complete description of the simple modules of a semisimple algebra over ℂ.
Corollary 5.11
Let A be a finite-dimensional semisimple algebra over ℂ, and suppose A ≅ M_{n_1}(ℂ) × … × M_{n_k}(ℂ) as algebras. Then A has precisely k simple modules (up to isomorphism). They are of the form S_1, S_2, …, S_k, where we can take S_i = (ℂ^{n_i})^t, and such that
(a) the i-th factor of A acts on S_i by matrix multiplication;
(b) the i-th factor of A acts on S_j as zero for j ≠ i.
In particular the dimensions of the simple modules are n_1, n_2, …, n_k.
Proof
In 3.15 we have classified the simple modules of an algebra of the form A_1 × A_2. Namely, these are precisely all modules of the form

    S × 0,  0 × T

where S runs through the simple A_1-modules, and T runs through the simple A_2-modules. We apply this inductively, and see that all but one of the factors of our product of matrix blocks act as zero on a simple A-module.
From 3.14 we know that M_{n_i}(ℂ) has a unique simple module (up to isomorphism), namely the natural module of column vectors (ℂ^{n_i})^t.
Remark 5.12
What can we say about a finite-dimensional semisimple algebra A over an arbitrary field K? The answer is that A is always isomorphic to a product of matrix rings where the matrix blocks are M_{n_i}(D_i) and D_i is some division ring containing K.
One can see this with little trouble if one goes through the proof and checks where we used that the field was ℂ. In fact, this is only used when we apply Schur's Lemma, to say the endomorphism ring of the simple module S_i is isomorphic to ℂ. In general we can only say that the endomorphism ring of S_i is a division ring; this is the general version of Schur's Lemma.
Everything else stays the same, and then the proof gives

    A ≅ M_{n_1}(D_1) × … × M_{n_k}(D_k).
EXERCISES
5.2. Suppose A is a finite-dimensional commutative semisimple algebra over ℂ. Find all ideals of A.
5.3. Show that A = M_n(ℂ) does not have any ideals except 0 and A. Hence find all ideals of a finite-dimensional semisimple algebra over ℂ.
5.4. Suppose A = K_1 × K_2 × K_3, the direct product of three fields. Find all the ideals of A.
5.5. Suppose A = M_{n_1}(ℂ) × … × M_{n_k}(ℂ). Show that the centre of A is commutative and semisimple, and has dimension k.
5.6. Suppose A is a finite-dimensional semisimple algebra over ℂ. Suppose x is an element in the centre Z(A). Show that if x is nilpotent then x = 0.
5.7. Which of the following commutative algebras over ℂ are semisimple? Note that the algebras in (1) have dimension 2, and the others have dimension 4.
(1) ℂ[X]/(X^2 − X), ℂ[X]/(X^2), ℂ[X]/(X^2 − 1).
(2) ℂ[X_1]/(X_1^2 − X_1) × ℂ[X_2]/(X_2^2 − X_2)
(3) ℂ[X_1, X_2]/(X_1^2 − X_1, X_2^2 − X_2)
(4) ℂ[X_1]/(X_1^2) × ℂ[X_2]/(X_2^2)
(5) ℂ[X_1, X_2]/(X_1^2, X_2^2)
6
Maschke's Theorem
In the previous chapter we proved a main structure theorem for semisimple algebras. One would now like to know to which algebras this can be applied. For example, you might ask when a group algebra of a finite group is semisimple. This is answered by Maschke's theorem.
Theorem 6.1 (Maschke's Theorem)
Let G be a finite group and A = KG the group algebra, where K is some field. Then A is semisimple if and only if the characteristic of K does not divide the order of G.
The main idea of the proof which we are going to present is that from any linear map between KG-modules, one can always construct a KG-homomorphism, by averaging over the group.
Lemma 6.2
Suppose M and N are KG-modules, and f : M → N is a K-linear map. Define

    T(f) : M → N,  m ↦ Σ_{g∈G} v_g (f(v_{g^{-1}} m)).

Then T(f) is a KG-homomorphism.
Proof
One checks that T(f) is linear. Alternatively one could argue that multiplication by elements in A is linear, and also f is linear, and T(f) is a linear combination of compositions of linear maps and is therefore linear. To see that it is an A-homomorphism, it suffices to check on the basis of A; so let x ∈ G, then

    T(f)(v_x m) = Σ_g v_g (f(v_{g^{-1}} v_x m)) = Σ_g v_x (v_{x^{-1}} v_g (f(v_{(x^{-1}g)^{-1}} m))) = v_x (T(f)(m)).
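The averaging map T is easy to implement for a small group. A Python sketch for G = Sym(3) acting on its permutation module (permutation matrices as the representation; not part of the text, just an illustration):

```python
import numpy as np
from itertools import permutations

# G = Sym(3) acting on C^3 by permutation matrices.
def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0   # sends basis vector e_i to e_{p(i)}
    return M

G = [perm_matrix(p) for p in permutations(range(3))]

def T(f):
    """Averaging map of Lemma 6.2: T(f) = sum_g g f g^{-1}."""
    return sum(g @ f @ np.linalg.inv(g) for g in G)

rng = np.random.default_rng(2)
f = rng.standard_normal((3, 3))   # an arbitrary linear map
Tf = T(f)

# T(f) commutes with every group element, i.e. it is a KG-homomorphism.
assert all(np.allclose(g @ Tf, Tf @ g) for g in G)
print("T(f) commutes with the G-action")
```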
6.1 Proof of Maschke's Theorem
Assume that the characteristic of K does not divide |G|. Let W be a submodule of A = KG; we must show that there is a submodule C of A such that W ⊕ C = A.
There is a subspace V such that W ⊕ V = A. Let π : A → A be the projection onto W with kernel V; this is just a linear map. Define

    γ := (1/|G|) T(π).

This is a scalar multiple of an A-module homomorphism and hence is itself an A-module homomorphism. So C := Ker(γ) is an A-submodule. We will now show that KG = W ⊕ C as KG-modules.
(a) We claim that the restriction of γ to W is the identity map, and that Im(γ) = W:
If m ∈ W then v_{g^{-1}} m ∈ W and so π(v_{g^{-1}} m) = v_{g^{-1}} m; therefore v_g π(v_{g^{-1}} m) = m and

    γ(m) = (1/|G|) Σ_{g∈G} m = m.

This implies W ⊆ Im(γ). Conversely, let m ∈ A. Since π(v_{g^{-1}} m) ∈ W and W is a submodule, it follows that v_g π(v_{g^{-1}} m) ∈ W, and then γ(m) ∈ W.
(b) We claim that W ∩ C = 0: if m ∈ W ∩ C then γ(m) = m and γ(m) = 0.
(c) W + C = A: we have dim(W + C) = dim(W) + dim(C) (by (b) and linear algebra), which is dim Im(γ) + dim Ker(γ), and which is equal to dim(A) by rank-nullity.
For the converse of Maschke's Theorem, suppose A = KG is semisimple. We claim that char(K) does not divide the order of G:
Let ω := Σ_{g∈G} v_g, which is an element of KG. Check that

    (*) v_x ω = ω  for all x ∈ G.

Therefore, if U is the span of ω, then U is a (1-dimensional) submodule of A. Suppose A is semisimple; then there is a submodule C of A such that U ⊕ C = A. Then one checks that U = Ae where e is an idempotent of A, and so e = cω for some c ∈ K. From e^2 = e ≠ 0 we get, using (*), that

    0 ≠ ω^2 = |G| ω,

and consequently |G| ≠ 0 in K.
6.1.1 Exploiting the map T further
In the first lemma of this chapter we have seen that by averaging over G we can produce KG-module homomorphisms starting with arbitrary linear maps. This is a very powerful tool which is used in other contexts. Recall that the trace tr(φ) of a linear transformation φ is the trace of some matrix representing φ. For a detailed reminder, see the beginning of chapter 7.
Corollary 6.3
Suppose V and W are simple ℂG-modules, and suppose f : V → W is a linear transformation.
(a) Assume V and W are not isomorphic; then T(f) = 0.
(b) Assume V = W; then T(f) = λI where

    λ = (|G|/dim V) tr(f).

Proof
We apply 6.2 and Schur's Lemma; this gives (a), and also that in (b), T(f) is a multiple λI of the identity. We calculate the trace of T(f). We have

    tr(v_g f v_{g^{-1}}) = tr(f)  (g ∈ G),

hence tr T(f) = |G| tr(f). On the other hand, the trace of T(f) is equal to λ dim V. The statement follows.
6.1.2 Permutation modules
Suppose G is a finite group and Ω is a finite left G-set. Recall that the permutation module KΩ is defined to be the vector space over K with basis

    {b_ω : ω ∈ Ω},

where the action is given by

    v_g b_ω := b_{gω}.

Let b := Σ_{ω∈Ω} b_ω. We have seen that v_g b = b for all g ∈ G, and hence ⟨b⟩ is a submodule isomorphic to the trivial module. The following is a Maschke-type argument.
Lemma 6.4
If char(K) does not divide |Ω| then the submodule spanned by b is a direct summand of KΩ.
Proof
Let σ : KΩ → ⟨b⟩ be the linear map with σ(b_ω) = b for each ω ∈ Ω. Check that this is a KG-homomorphism.
Then σ(b) = |Ω| b. So if |Ω| ≠ 0 in K, then the intersection of Ker(σ) with the trivial module ⟨b⟩ is zero, and by dimensions KΩ = ⟨b⟩ ⊕ Ker(σ).
6.2 Some consequences of Maschke's Theorem
Suppose G is a finite group; then by Maschke's Theorem the group algebra ℂG is semisimple. We can therefore apply Wedderburn's theorem and obtain that

    ℂG ≅ M_{n_1}(ℂ) × M_{n_2}(ℂ) × … × M_{n_k}(ℂ).

This contains a lot of information. First, by comparing dimensions, we have

    |G| = Σ_{i=1}^k n_i^2.

Moreover, the sizes of the matrix blocks are the dimensions of the simple modules for ℂG, by 5.11. There is always the trivial representation, which is 1-dimensional. We can take this to correspond to the matrix block M_{n_1}(ℂ), that is, n_1 = 1.
The number k of matrix blocks has an interpretation in terms of the group: it is equal to the number of conjugacy classes (see the exercises below).
Example 6.5
We can sometimes find the dimensions of simple modules from this numerical information alone. Let G be the dihedral group of order 10, as in Exercise 2.13. By Exercise 3.15, the dimension of a simple module is ≤ 2, and there are precisely two 1-dimensional simple modules. We have now

    10 = 1 + 1 + Σ_{i=3}^k n_i^2  with n_i = 2 for i ≥ 3,

and the only possible solution is 10 = 1 + 1 + 4 + 4. So we know that there are two non-isomorphic 2-dimensional simple modules. You might find these explicitly.
Then we know there are four matrix blocks, so there should be four conjugacy classes. Perhaps you know this anyway.
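The arithmetic in this example is a tiny search. A Python sketch encoding the constraints from Exercise 3.15 (every n_i is at most 2, and exactly two n_i equal 1):

```python
# Block sizes n_i for CG, G dihedral of order 10: each n_i <= 2,
# exactly two n_i equal 1, and sum of n_i^2 must be |G| = 10.
solutions = [(1, 1) + (2,) * r for r in range(6) if 2 + 4 * r == 10]
print(solutions)   # [(1, 1, 2, 2)]: two 1-dim and two 2-dim simples
```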
EXERCISES
6.1. Show that the centre of the group algebra KG has a basis consisting of the class sums. The class sum [C] of a conjugacy class C = g^G is defined to be the sum of all elements in C (it has |G|/|C_G(g)| terms).
6.2. Let G be the symmetric group Sym(3), of order 6, and let V = KΩ where Ω is the natural G-set, of size 3.
(a) Suppose K = ℂ; express V as a direct sum of two simple ℂG-modules.
(b) Suppose K has characteristic 3. Show that then V has a composition series of length 3.
6.3. Let G be the dihedral group of order 2n where n is odd. Generalize 2.13 and 3.15. Find the dimensions of all simple ℂG-modules. Now do the same when n is even and n > 2. What happens if n = 2?
6.4. Let A = KG, the group algebra of a finite group. If C is a conjugacy class of G, define the class sum to be

    [C] := Σ_{g∈C} v_g.

Show that [C] belongs to the centre Z(A) of A. Show also that the class sums form a K-basis for Z(A).
6.5. (Continuation) Suppose now that K = ℂ. Deduce from Wedderburn's Theorem that the number of matrix blocks is equal to the number of conjugacy classes of G. What does this tell us if G is abelian?
6.6. Let A = ℂG where G is the symmetric group Sym(3), and take σ = (1 2 3) and τ = (1 2). We want to show directly that A is a direct product of three matrix algebras. [We know from 6.3 that there should be two blocks of size 1 and one block of size 2.]
(a) Let e_± := (1/6)(1 ± v_τ)(1 + v_σ + v_{σ^2}); show that e_± are idempotents in the centre of A, and e_+ e_− = 0.
(b) Let

    f = (1/3)(1 + ω^{-1} v_σ + ω v_{σ^2})

where ω is a primitive 3rd root of 1. Let f_1 := v_τ f v_{τ^{-1}}. Show that f and f_1 are orthogonal idempotents, and that

    f + f_1 = 1_A − e_− − e_+.

(c) Show that Span{f, f v_τ, v_τ f, f_1} is an algebra, isomorphic to M_2(ℂ).
Apply these calculations, and show directly that A is isomorphic to a product of three matrix algebras. [Taking direct sums might be more natural.]
7
Characters
Suppose that ρ : G → GL(n, ℂ) is a representation of the group G. With each n×n matrix ρ(g) we can associate its trace, that is, we add its diagonal entries. We write χ(g) for this trace. The function χ : G → ℂ is defined to be the character associated to the representation ρ.
Characters of representations have many nice properties, and they are a very important tool for calculating with representations of groups.
7.1 Definition, examples, basic properties
Suppose A is some n×n matrix; recall that the trace of A is defined as the sum of the diagonal entries,

    tr(A) := Σ_{i=1}^n a_ii.

Recall also that tr(AB) = tr(BA) if B is some other n×n matrix, and therefore tr(P^{-1}AP) = tr(A) if P is an invertible n×n matrix.
If α : V → V is a linear transformation of a finite-dimensional vector space V, then we write tr(α) for the trace of a matrix of α, with respect to some basis. By the above, this does not depend on the choice of basis.
As well, over ℂ, the trace tr(A) is equal to the sum of the eigenvalues of A. Actually, most of our matrices will satisfy equations of the form A^m = I, all over ℂ, and then A is diagonalizable, by linear algebra.
74 7. Characters
Denition 7.1
Suppose : G GL(n, C) is a representation of G. The associated character

: G C is
dened by

(g) := tr((g)).
Write also
V
if V is the CG-module corresponding to the representation .
Example 7.2
The trivial representation of G is the map : G GL
1
(C) with (g) = 1 for each g G.
The associated character is known as the trivial character. Write
1
for this character, then

1
(g) = 1 for each g G.
Example 7.3
Let Ω be a finite G-set, and V = ℂΩ the permutation module corresponding to Ω. Call its character π_Ω, the permutation character. Then

    π_Ω(g) = |Fix_Ω(g)|

where Fix_Ω(g) = {ω ∈ Ω : g(ω) = ω}.
Example 7.4
Recall that the (left) regular representation has underlying vector space V = ℂG, and the action is by left multiplication. Let χ_reg be its character. Then

    χ_reg(g) = |G| if g = 1_G, and 0 otherwise.

This is also a special case of a permutation character.
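Both characters can be tabulated directly. A Python sketch for G = Sym(3) (fixed-point counts for the permutation character, and the regular character):

```python
from itertools import permutations

# Permutation character of G = Sym(3) on Omega = {0,1,2}:
# pi(g) = number of fixed points of g.
G = list(permutations(range(3)))
pi = [sum(1 for i in range(3) if g[i] == i) for g in G]
print(pi)   # identity -> 3, transpositions -> 1, 3-cycles -> 0

# Regular character: |G| at the identity, 0 elsewhere, since
# left multiplication by g != 1 fixes no basis vector v_h.
chi_reg = [6 if g == (0, 1, 2) else 0 for g in G]
print(chi_reg)
```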
Given a finite group G and a representation ρ : G → GL(V), we want to find the trace of ρ(g) for g ∈ G. Since g has finite order, we know that ρ(g)^m = ρ(g^m) = ρ(1) = I for some m ≥ 1. So the linear transformation ρ(g) satisfies the polynomial equation

    X^m − 1 = 0

and therefore it is diagonalizable, with eigenvalues some m-th roots of 1. This is very good to know in many applications.
Definition 7.5
Let χ be a character of G. Then χ is said to be irreducible if χ = χ_V where V is a simple ℂG-module.
For example, the trivial character is irreducible. More generally, if the corresponding module is 1-dimensional then the associated character is irreducible.
Example 7.6
Let G = S_n, the symmetric group, and let ℂΩ be the natural permutation module, where Ω = {1, 2, …, n}. In chapter 4 we have seen that ℂΩ is the direct sum of a copy of the trivial module and a simple module W of dimension n − 1, as a ℂG-module. So χ_W is an irreducible character. By 7.3, we have

    χ_W(g) = |Fix_Ω(g)| − 1  (g ∈ G).
Lemma 7.7
Suppose ρ_1 and ρ_2 are equivalent representations of G, with associated characters χ_1 and χ_2. Then χ_1 = χ_2.
Proof
By assumption, there is an invertible matrix P such that for all g ∈ G we have

    ρ_1(g) = P ρ_2(g) P^{-1}.

Then

    χ_1(g) = tr(ρ_1(g)) = tr(P ρ_2(g) P^{-1}) = tr(ρ_2(g)) = χ_2(g).
Proposition 7.8
Let ρ : G → GL(n, ℂ) be a representation of the finite group G, and let χ be the associated character. Then
(i) χ(1) = n;
(ii) χ(g^{-1}) = conj(χ(g)) for g ∈ G, where conj denotes complex conjugation;
(iii) χ(ygy^{-1}) = χ(g) for all y, g ∈ G. That is, χ is a class function.
Proof
(i) We have ρ(1) = I_n, the identity n×n matrix. It has trace n.
(ii) Fix some g ∈ G. The matrix ρ(g) is diagonalizable (as we noted before, since g has finite order), so let P be some invertible n×n matrix with Pρ(g)P^{-1} = D, a diagonal matrix, and let λ_1, …, λ_n be its diagonal entries. Then the λ_i are m-th roots of unity where g^m = 1. The inverse of this matrix is diagonal with diagonal entries λ_i^{-1} = conj(λ_i). But the inverse of this matrix is Pρ(g^{-1})P^{-1}, and hence we have

    χ(g^{-1}) = Σ_i λ_i^{-1} = Σ_i conj(λ_i) = conj(χ(g)).

(iii) This is clear, since the trace is invariant under conjugation of matrices.
Lemma 7.9
Suppose W_1, W_2 are ℂG-modules, with corresponding characters χ_1, χ_2. Then the character associated to W_1 ⊕ W_2 is equal to χ_1 + χ_2.
Proof
Write ρ_i for the representation of G on W_i for i = 1, 2, and write ρ for the representation of G on W_1 ⊕ W_2. Take a basis of W_1 and one of W_2; then the union is a basis of W_1 ⊕ W_2. Since W_1 and W_2 are submodules, for each g ∈ G the matrix of ρ(g) is block diagonal, where the diagonal blocks are ρ_1(g) and ρ_2(g). Therefore tr(ρ(g)) = tr(ρ_1(g)) + tr(ρ_2(g)).
Let $S_1, S_2, \ldots, S_k$ be the simple CG-modules and let $\chi_1, \chi_2, \ldots, \chi_k$ be the corresponding characters. That is, they are the irreducible characters of G.
Corollary 7.10
Suppose W is any CG-module; write $W = \bigoplus_i S_i^{a_i}$ as a direct sum of simple CG-modules, where the $a_i \geq 0$. Then the character $\chi_W$ is given by
$$\chi_W = \sum_i a_i \chi_i.$$
Hence, to understand all characters, we need to understand the irreducible characters.
7.2 The orthogonality relations
Characters are functions from G to C. So we set
$$\mathbb{C}^G := \{f : G \to \mathbb{C}\}.$$
This is a vector space over C, with addition and scalar multiplication defined pointwise. It has dimension |G|. We define an inner product on $\mathbb{C}^G$ by setting
$$\langle \varphi, \psi \rangle := \frac{1}{|G|} \sum_{g \in G} \varphi(g) \overline{\psi(g)}.$$
Exercise 7.1
Show that $\langle -, - \rangle$ is an inner product on $\mathbb{C}^G$.
Let Cl(G) be the set of $f \in \mathbb{C}^G$ which are constant on conjugacy classes. This is a subspace of $\mathbb{C}^G$; its dimension is the number of conjugacy classes of G. The characters of G are contained in Cl(G). Moreover, the number of irreducible characters is precisely the dimension of Cl(G).
The motivation for the inner product comes from orthogonality properties of the irreducible characters. The following is sometimes called the first orthogonality relation.
Theorem 7.11
Suppose $\chi$ and $\chi'$ are irreducible characters of G, corresponding to simple modules V and W. Then
$$\langle \chi, \chi' \rangle = \begin{cases} 1 & V \cong W \\ 0 & V \not\cong W. \end{cases}$$
Proof
By 7.7 we can assume that V = W in the case $V \cong W$. Let $\rho_V$ and $\rho_W$ be the corresponding representations. Write $\rho_W(g) = [a_{kl}(g)]$ and $\rho_V(g^{-1}) = [b_{kl}(g^{-1})]$. By 7.8(ii) we have $\chi(g^{-1}) = \overline{\chi(g)}$, so we can reformulate the inner product:
$$(*) \qquad \langle \chi', \chi \rangle = \frac{1}{|G|} \sum_{g \in G} \Big( \sum_{i=1}^{\dim W} a_{ii}(g) \Big) \Big( \sum_{j=1}^{\dim V} b_{jj}(g^{-1}) \Big) = \sum_{i,j} \Big[ \frac{1}{|G|} \sum_{g \in G} a_{ii}(g)\, b_{jj}(g^{-1}) \Big].$$
It suffices to compute this, since the values 0 and 1 we obtain are real and $\langle \chi, \chi' \rangle = \overline{\langle \chi', \chi \rangle}$.
From Chapter 6, for any linear map $h : V \to W$ we have
$$(**) \qquad T(h) = \sum_{g \in G} \rho_W(g)\, h\, \rho_V(g^{-1}) = \begin{cases} \lambda I & V = W \\ 0 & V \not\cong W \end{cases}$$
where $\lambda = \mathrm{tr}(h)\,|G| / \dim V$. Now we fix i, j and take $h := E_{ij}$; this has trace $\delta_{ij}$. Taking matrices,
$$\rho_W(g)\, E_{ij}\, \rho_V(g^{-1}) = [a_{ki}(g)\, b_{jl}(g^{-1})]_{k,l}.$$
Then the $(k, l)$-entry of the matrix $(**)$ is
$$\sum_{g \in G} a_{ki}(g)\, b_{jl}(g^{-1}) = \begin{cases} \delta_{kl}\, \delta_{ij}\, |G| / \dim V & V = W \\ 0 & V \not\cong W. \end{cases}$$
Now take k = i and l = j, and then sum over all i, j. We get that $(*)$ is 0 if $V \not\cong W$, and otherwise $(*)$ is equal to
$$\frac{1}{|G|} \sum_{i,j} \delta_{ij} \frac{|G|}{\dim V} = \frac{1}{|G|} \sum_{i=1}^{\dim V} \frac{|G|}{\dim V} = 1,$$
as stated.
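The theorem is easy to test numerically on a small example. Here is a short sketch (ours, not part of the text) for $G = S_3$, whose irreducible characters are the trivial character, the sign character, and the character $\chi_W$ of Example 7.6 with n = 3.

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))   # the six elements of S_3

def sign(p):
    # parity of the number of inversions
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

def fixed(p):
    return sum(1 for i, x in enumerate(p) if i == x)

chi_triv = lambda g: 1
chi_sign = sign
chi_W    = lambda g: fixed(g) - 1   # Example 7.6 with n = 3

def inner(chi, psi):
    return sum(chi(g) * np.conj(psi(g)) for g in G) / len(G)

chars = (chi_triv, chi_sign, chi_W)
print(np.round([[inner(a, b) for b in chars] for a in chars], 6))
# Prints the 3x3 identity matrix, as Theorem 7.11 predicts.
```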
Theorem 7.12
Suppose V is a finite-dimensional CG-module with character $\chi_V$. Write $V = V_1 \oplus V_2 \oplus \ldots \oplus V_r$, where the $V_i$ are simple. If S is any simple CG-module with character $\chi_S$, then
$$\langle \chi_V, \chi_S \rangle = \#\{i : V_i \cong S\}.$$
Proof
We have $\chi_V = \chi_1 + \chi_2 + \ldots + \chi_r$, where $\chi_i$ is the character of $V_i$. Then
$$\langle \chi_V, \chi_S \rangle = \langle \chi_1, \chi_S \rangle + \ldots + \langle \chi_r, \chi_S \rangle.$$
By 7.11, if $V_i \cong S$ then the i-th summand is 1, and otherwise it is zero.
Theorem 7.13
Suppose V and W are CG-modules, with characters $\chi_V$ and $\chi_W$. Then $V \cong W$ if and only if $\chi_V = \chi_W$.
Proof
If $V \cong W$ then the corresponding representations are equivalent, so $\chi_V = \chi_W$ by 7.7.
Conversely, suppose $\chi_V = \chi_W$. Then $\langle \chi_V, \chi_S \rangle = \langle \chi_W, \chi_S \rangle$ for all simple modules S. By 7.12, each simple module occurs with the same multiplicity in a decomposition of V and of W into simple summands, and it follows that V is isomorphic to W.
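Theorems 7.12 and 7.13 turn the decomposition of a module into arithmetic with inner products. As a small numerical sketch (ours), we compute the multiplicities of the simple modules in the regular module $\mathbb{C}S_3$ from character values on conjugacy classes; the table entries are the standard ones for $S_3$.

```python
import numpy as np

# Irreducible characters of S_3 on the classes (1, transpositions, 3-cycles),
# which have sizes 1, 3 and 2.
sizes = np.array([1, 3, 2])
table = np.array([[1,  1,  1],    # trivial
                  [1, -1,  1],    # sign
                  [2,  0, -1]])   # 2-dimensional simple module
chi_reg = np.array([6, 0, 0])     # regular character (Example 7.4)

# <chi_reg, chi_i>, summing class by class; values are real, so no conjugate.
mult = (table * sizes) @ chi_reg / 6
print(mult)   # [1. 1. 2.]: each simple module occurs (its dimension) times
```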
7.3 The character table
The irreducible characters of a finite group G are the building blocks for all characters of G, and it is convenient to display them in the form of a matrix, which is known as the character table of G.
We have seen that characters are constant on conjugacy classes. We know that the
number of conjugacy classes is equal to the number of matrix blocks in the Wedderburn
decomposition, hence is equal to the number of irreducible characters.
We recall that the irreducible characters are labelled as $\chi_1, \ldots, \chi_k$, and we take $\chi_1$ to be the trivial character.
Let $C_1, C_2, \ldots, C_k$ be the conjugacy classes of G. Pick $g_i \in C_i$. We make the convention that $g_1 = 1$.
Definition 7.14
The character table of G is the $k \times k$ matrix
$$[\chi_i(g_j)]_{i,j}.$$
Example 7.15
Let G be a cyclic group of order n, say $G = \langle g \rangle$. In chapter 4 we have classified the irreducible representations of G over C. Take $\omega$ to be a primitive n-th root of unity; then for each t with $1 \leq t \leq n$ we have the 1-dimensional simple module on which g acts with eigenvalue $\omega^t$, and hence $g^j$ acts with eigenvalue $\omega^{jt}$. The group is abelian, so each $g^j$ is the only element in its conjugacy class.
So for example when n = 3, all classes have size 1, and the character table is
$$\begin{array}{c|ccc}
 & 1 & g & g^2 \\ \hline
\chi_1 & 1 & 1 & 1 \\
\chi_2 & 1 & \omega & \omega^2 \\
\chi_3 & 1 & \omega^2 & \omega
\end{array}$$
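The table for a general n can be generated in a few lines. A quick sketch (ours, assuming NumPy; indices here run from 0 to n - 1 rather than from 1 to n):

```python
import numpy as np

def cyclic_character_table(n):
    """Character table of the cyclic group of order n: entry (t, j) is
    omega**(t*j), where omega = exp(2*pi*i/n)."""
    omega = np.exp(2j * np.pi / n)
    return np.array([[omega ** (t * j) for j in range(n)] for t in range(n)])

T = cyclic_character_table(3)
print(np.round(T, 3))
# Row orthogonality: (1/n) * T @ T.conj().T should be the identity matrix.
print(np.allclose(T @ T.conj().T / 3, np.eye(3)))   # True
```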
Example 7.16
Let G be the Klein 4-group, say $G = \langle x, y \rangle$. This has four 1-dimensional (simple) modules. We describe them by the eigenvalues of x and y on a generator:
$$\begin{array}{c|cc}
 & x & y \\ \hline
S_1 & 1 & 1 \\
S_2 & 1 & -1 \\
S_3 & -1 & 1 \\
S_4 & -1 & -1
\end{array}$$
Let $\chi_1, \chi_2, \chi_3, \chi_4$ be the corresponding irreducible characters. Then the character table is
$$\begin{array}{c|cccc}
 & 1 & x & y & xy \\ \hline
\chi_1 & 1 & 1 & 1 & 1 \\
\chi_2 & 1 & 1 & -1 & -1 \\
\chi_3 & 1 & -1 & 1 & -1 \\
\chi_4 & 1 & -1 & -1 & 1
\end{array}$$
Example 7.17
Let G be the dihedral group of order 8. We take the presentation as in chapter 2:
$$G = \langle \sigma, \tau : \sigma^4 = 1, \ \tau^2 = 1, \ \tau\sigma\tau^{-1} = \sigma^{-1} \rangle.$$
The element $\sigma^2$ commutes with all elements of G, and hence the subgroup $N := \langle \sigma^2 \rangle$ is normal. One checks that G/N is isomorphic to the Klein 4-group. Any representation of G/N can be inflated to a representation of G, and therefore we have four 1-dimensional representations (by the previous example). These are still irreducible viewed as representations of G, and this gives us four 1-dimensional irreducible characters.
In chapter 2 we have constructed a 2-dimensional representation of G. This is checked to be simple (alternatively, check that the inner product of its character with itself is 1). The formula $|G| = \sum_i n_i^2$ shows that we have found all irreducible characters.
Moreover, G has five conjugacy classes. We choose representatives for the classes, as
$$g_1 = 1, \quad g_2 = \sigma^2, \quad g_3 = \sigma, \quad g_4 = \tau, \quad g_5 = \sigma\tau.$$
The element $\sigma^2$ acts trivially on the 1-dimensional modules, and we can write down the character values of the 1-dimensional modules by just copying appropriately the character table of the Klein 4-group. For the 2-dimensional representation we calculate the trace of the representation constructed in chapter 2. We get the character table:
$$\begin{array}{c|ccccc}
 & 1 & \sigma^2 & \sigma & \tau & \sigma\tau \\ \hline
\chi_1 & 1 & 1 & 1 & 1 & 1 \\
\chi_2 & 1 & 1 & 1 & -1 & -1 \\
\chi_3 & 1 & 1 & -1 & 1 & -1 \\
\chi_4 & 1 & 1 & -1 & -1 & 1 \\
\chi_5 & 2 & -2 & 0 & 0 & 0
\end{array}$$
By 7.11 the rows of the character table satisfy an orthogonality relation. We will now
see that this implies orthogonality of the columns of the character table.
Theorem 7.18
Fix $j, \ell$. Then we have
$$\sum_{i=1}^{k} \chi_i(g_j)\, \overline{\chi_i(g_\ell)} = \begin{cases} 0 & j \neq \ell \\ |C_G(g_j)| & j = \ell \end{cases}$$
where $C_G(g_j)$ denotes the centralizer of $g_j$ in G.
Proof
Define a matrix $X = [x_{ij}]$ with
$$x_{ij} := |C_G(g_j)|^{-1/2}\, \chi_i(g_j).$$
Consider the $(i, \ell)$-entry of $X \overline{X}^t$. Recalling that the class of $g_j$ has $|G|/|C_G(g_j)|$ elements, this entry is
$$\sum_{j=1}^{k} \chi_i(g_j)\, \overline{\chi_\ell(g_j)}\, |C_G(g_j)|^{-1} = \frac{1}{|G|} \sum_{g \in G} \chi_i(g)\, \overline{\chi_\ell(g)} = \langle \chi_i, \chi_\ell \rangle = \delta_{i\ell}.$$
This means that $X \overline{X}^t = I$. Since X is a square matrix, its right inverse $\overline{X}^t$ is also a left inverse, so $\overline{X}^t X = I$ as well. We spell this out and get
$$\delta_{j\ell} = \sum_{i=1}^{k} |C_G(g_j)|^{-1/2}\, |C_G(g_\ell)|^{-1/2}\, \overline{\chi_i(g_j)}\, \chi_i(g_\ell),$$
which gives the statement.
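Both orthogonality relations can be checked on the table of Example 7.17. A small verification sketch (ours, assuming NumPy; the classes are ordered as above and have sizes 1, 1, 2, 2, 2):

```python
import numpy as np

# Character table of the dihedral group of order 8 (Example 7.17).
X = np.array([[1,  1,  1,  1,  1],
              [1,  1,  1, -1, -1],
              [1,  1, -1,  1, -1],
              [1,  1, -1, -1,  1],
              [2, -2,  0,  0,  0]])
sizes = np.array([1, 1, 2, 2, 2])
order = sizes.sum()                 # |G| = 8

# Row orthogonality (7.11), weighting each class by its size:
print(np.allclose((X * sizes) @ X.conj().T / order, np.eye(5)))   # True

# Column orthogonality (7.18): the (j, l) sum over rows equals
# |C_G(g_j)| = |G| / (size of the class of g_j) when j = l, else 0.
print(np.allclose(X.conj().T @ X, np.diag(order // sizes)))       # True
```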
Example 7.19
Let $G = A_4$, the alternating group on four letters. Recall that |G| = 12. We want to find the character table.
(1) Let N be the Klein 4-group inside G. Then N is a normal subgroup of G, and G/N is cyclic of order 3. Each simple module for G/N over C can be viewed as a CG-module, by inflation. So we therefore get three simple modules for CG of dimension 1, that is, three linear characters.
(2) The group G has four conjugacy classes (check this!). Representatives are $g_1 = 1$, $g_2 = (12)(34)$, $g_3 = (123)$ and $g_4 = (132)$. The character table is square, so there must be precisely one more irreducible character. The sum-of-squares formula $|G| = \sum_i n_i^2$ shows that it has degree three, since $12 - (1 + 1 + 1) = 9 = 3^2$.
We start constructing the character table. Let $\omega$ be a primitive 3rd root of 1; recall $1 + \omega + \omega^2 = 0$. Then we have (using Example 7.15)
$$\begin{array}{c|cccc}
 & 1 & g_2 & g_3 & g_4 \\ \hline
\chi_1 & 1 & 1 & 1 & 1 \\
\chi_2 & 1 & 1 & \omega & \omega^2 \\
\chi_3 & 1 & 1 & \omega^2 & \omega \\
\chi_4 & 3 & & &
\end{array}$$
We find the last row by using the orthogonality relations. From the orthogonality of the second and first columns we get
$$0 = 1 + 1 + 1 + 3\,\overline{\chi_4(g_2)},$$
and hence $\chi_4(g_2) = -1$.
Next, the orthogonality of the third and first columns shows that
$$0 = 1 + \overline{\omega} + \overline{\omega^2} + 3\,\overline{\chi_4(g_3)} = 1 + \omega^2 + \omega + 3\,\overline{\chi_4(g_3)} = 3\,\overline{\chi_4(g_3)},$$
and therefore $\chi_4(g_3) = 0$. Similarly $\chi_4(g_4) = 0$.
This shows that there is an irreducible representation of $A_4$ of degree 3, and it also says what the character of this representation is. So one would like to actually construct such a representation!
Take the natural permutation module $\mathbb{C}\Omega$ of $S_4$. This is the direct sum of the trivial module and a (simple) module W of dimension 3 (see 7.6 and chapter 4). We view this module W as a module for $A_4$, and we see that the character of the corresponding representation is precisely $\chi_4$.
So we can deduce that W is a simple $\mathbb{C}A_4$-module (and then it must also be simple as a $\mathbb{C}S_4$-module: a proof without calculations!).
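To corroborate the last step numerically, here is a short sketch (ours, assuming NumPy). It evaluates $\chi_4(g) = |\mathrm{Fix}(g)| - 1$ on all of $A_4$ and checks that $\langle \chi_4, \chi_4 \rangle = 1$, which by 7.12 and 7.13 confirms that W is simple.

```python
import numpy as np
from itertools import permutations

def sign(p):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return -1 if inv % 2 else 1

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
assert len(A4) == 12

# Character of the 3-dimensional module W: fixed points minus one.
chi4 = lambda g: sum(1 for i, x in enumerate(g) if i == x) - 1

ip = sum(chi4(g) * np.conj(chi4(g)) for g in A4) / len(A4)
print(ip.real)   # 1.0, so <chi_4, chi_4> = 1 and W is simple
```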
7.4 Products of characters
This part is not in the B2 syllabus.
We have defined tensor products of vector spaces, and tensor products of group representations. The importance of this is that the character of a tensor product is the product of the characters. This gives a very powerful tool for constructing new characters from known ones.
Theorem 7.20
Assume G is finite. Suppose V, W are CG-modules, with characters $\chi_V, \chi_W$ respectively. Then $V \otimes W$ has character $\chi_V \cdot \chi_W$.
Proof
Let $\rho_V, \rho_W$ be the representations corresponding to V, W respectively, and write $\rho$ for the representation corresponding to $V \otimes W$. Let $g \in G$. We can choose a basis $e_i$ of eigenvectors of g on V, with eigenvalues $\lambda_i$, and a basis $f_j$ of eigenvectors of g on W, with eigenvalues $\mu_j$ (say). Then we use the basis $\{e_i \otimes f_j\}$ of $V \otimes W$ to calculate the character of $V \otimes W$. We have
$$\rho(g)(e_i \otimes f_j) = \rho_V(g)(e_i) \otimes \rho_W(g)(f_j) = \lambda_i e_i \otimes \mu_j f_j = \lambda_i \mu_j\, (e_i \otimes f_j).$$
Hence
$$\chi_{V \otimes W}(g) = \sum_{i,j} \lambda_i \mu_j = \Big( \sum_i \lambda_i \Big) \Big( \sum_j \mu_j \Big) = \chi_V(g)\, \chi_W(g).$$
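In matrix terms the proof boils down to the identity $\mathrm{tr}(A \otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B)$ for the Kronecker product, which we can sanity-check numerically (a sketch of ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Trace of a Kronecker product factors as the product of the traces.
print(np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B)))  # True
```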
EXERCISES
7.2. Calculate the character table of the symmetric group $G = S_4$. [You might exploit the factor groups $C_2$ ($\cong G/A_4$) and $S_3$ ($\cong G/V_4$, where $V_4$ is the Klein 4-group). You might also exploit products of characters $\chi \cdot \varepsilon$, where $\varepsilon$ is a linear character.]
7.3. Calculate the character table of the symmetric group $S_5$.
7.4. Find the character table of the quaternion group of order 8. Verify that it is the same as the character table of the dihedral group of order 8.
Bibliography
[1] C.W. Curtis, Pioneers of representation theory, AMS History of Mathematics 15, 1999.
[2] Y. A. Drozd, V. V. Kirichenko, Finite-dimensional Algebras, Springer 1994.
[3] G. James, M. Liebeck, Representations and characters of groups, 2nd edition, CUP 2001.
[4] W. Ledermann, Introduction to group characters, 2nd edition, CUP 1987.