
On common eigenbases of commuting operators

Paolo Glorioso

In this note we try to answer the question: given two commuting Hermitian operators A and B,
is each eigenbasis of A also an eigenbasis of B?
We take this occasion to review the mathematical
results needed to explore the answer to such a question. We assume that the reader is
familiar with the concepts of vector space, vector subspace, linear combination, linear independence,
diagonalization, inner product, and basis. These concepts can be found in Sections 1.1, 1.2 and 1.4
of [1]. A less specific treatment of what follows is given in Section 1.8 therein.
Consider an operator A, acting on vectors belonging to a vector space V. We will make use of the
following definitions:
i) Eigenvalue: A constant λ ∈ ℂ is called an eigenvalue of A if it satisfies the following
equation:

    Av = λv,    (1)

for some nonvanishing vector v ∈ V.
ii) Eigenvector: A nonvanishing vector v ∈ V is an eigenvector of A if it satisfies Equation (1)
for some λ ∈ ℂ. Note that v is also called an eigenstate, or eigenfunction, depending on the
context.
iii) Nondegenerate eigenvalue/eigenvector: An eigenvalue λ of A is called nondegenerate
if Equation (1) is satisfied by only one vector v, up to an overall complex number, i.e. all
the solutions of Equation (1) are of the form αv, where α ∈ ℂ. This is equivalent to saying
that the eigenvalue corresponds to only one eigenstate of A. Similarly, an eigenvector is
called nondegenerate if it is the only vector, up to an overall complex number, that satisfies
Equation (1) for some λ. Degenerate eigenvalues/eigenvectors are those which don't satisfy
this uniqueness property. Note that some references use the expression "degenerate state"
with respect to Equation (1) but referring only to the energy operator, see for example [2];
see below for more details.


iv) Eigenbasis: A set of vectors

    V = {v_i},

such that each v_i is an eigenvector of A and V is a basis of V, is called an eigenbasis of V
with respect to A.
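As a quick numerical illustration of these definitions, here is a minimal Python sketch, assuming numpy is available; the matrix entries are an illustrative choice, not taken from this note:

```python
import numpy as np

# A small Hermitian matrix; the entries are an illustrative choice
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns the eigenvalues and an orthonormal set of eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Check the eigenvalue equation (1): A v = lambda v, for each pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

# Both eigenvalues are nondegenerate here: they are distinct (1 and 3)
print(eigenvalues)  # [1. 3.]
```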

In this note we will refer to Hermitian operators, where A is Hermitian if, for any u, v ∈ V,

    (u, Av) = (Au, v),
and (u, v) is the scalar product in V. There are two reasons why we consider Hermitian operators.
First, because in Quantum Mechanics all observables are postulated to be Hermitian.¹ Second,
because Hermitian operators are diagonalizable, i.e. they admit a basis in which they have a
diagonal form, which is then an eigenbasis. See Theorem 10 in Chapter 1 of [1] for this point.
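The diagonalizability of Hermitian operators is easy to check numerically; a minimal sketch in Python, assuming numpy and an illustrative complex Hermitian matrix:

```python
import numpy as np

# A complex Hermitian matrix (illustrative entries)
A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)  # Hermitian: (u, Av) = (Au, v)

# Columns of U form an orthonormal eigenbasis of A
eigenvalues, U = np.linalg.eigh(A)

# In this eigenbasis A takes diagonal form, with real eigenvalues
D = U.conj().T @ A @ U
assert np.allclose(D, np.diag(eigenvalues))
```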
Proposition 1. Let A be a Hermitian operator with only nondegenerate eigenvalues, and let V = {v_i}
and W = {w_i} be two eigenbases of A. Then V is obtained from W by permutations and multiplications
by complex numbers of the eigenvectors of W, i.e., for each v_i ∈ V, there are w_j ∈ W and α ∈ ℂ
such that

    v_i = α w_j.

In other words, V and W contain the same eigenstates.

Proof. Let λ_i and μ_i be the eigenvalues of v_i and w_i, respectively, i.e.

    A v_i = λ_i v_i,    A w_i = μ_i w_i.    (2)

Since W is a basis, we can write any v_i ∈ V as a linear combination of the w_j's,

    v_i = Σ_j α_j w_j,    (3)

where α_j ∈ ℂ. Then,

    λ_i v_i = A v_i = A Σ_j α_j w_j = Σ_j α_j A w_j = Σ_j α_j μ_j w_j,

where we used the linearity of A. Comparing the first and the last members of the above equation
with Equation (3), we get

    Σ_j λ_i α_j w_j = Σ_j μ_j α_j w_j    ⟹    Σ_j (μ_j − λ_i) α_j w_j = 0,

and using the fact that the w_j's are linearly independent, we obtain

    (μ_j − λ_i) α_j = 0.

This is a set of equations, labelled by j. Each of them has two solutions: either α_j = 0, or μ_j = λ_i.
Since by hypothesis the μ_j's are all different from each other, being nondegenerate eigenvalues,
only one of the above equations can have μ_j = λ_i. Thus all α_j's but one are zero, and we
obtain, from Equation (3),

    v_i = α_j w_j,
¹ Recall that this is implied by the requirement of having real eigenvalues.

for one value of j. This tells us that each vector of V is also contained in W, up to an overall
complex number, which in the above equation is given by α_j. Of course, we can also reverse the
whole reasoning and show that each vector of W is contained in V. Therefore we can say that the
two eigenbases are the same, up to permutations and multiplications of their vectors by complex
numbers, and we are done.
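Proposition 1 can be illustrated numerically. The sketch below (numpy assumed; the matrix, the permutation, and the phases are illustrative choices) builds a second eigenbasis by permuting and rephasing the first, and checks the overlap pattern the Proposition predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian matrix; its eigenvalues are generically nondegenerate
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T

_, V = np.linalg.eigh(A)                      # one eigenbasis (columns)
perm = [2, 0, 3, 1]                           # an arbitrary permutation
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=4))
W = V[:, perm] * phases                       # another eigenbasis of A

# W is still an eigenbasis: each column satisfies the eigenvalue equation
eigenvalues = np.linalg.eigvalsh(A)
for lam, w in zip(eigenvalues[perm], W.T):
    assert np.allclose(A @ w, lam * w)

# Overlap pattern predicted by Proposition 1: |<w_j, v_i>| is a permutation matrix,
# i.e. each v_i coincides with exactly one w_j up to a complex number
overlaps = np.abs(W.conj().T @ V)
assert np.allclose(np.sort(overlaps.ravel()), [0.0] * 12 + [1.0] * 4)
```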
Proposition 2. Suppose that A and B are Hermitian operators with vanishing commutator, i.e.

    [A, B] = 0.

Then A and B share a common eigenbasis.

Proof. Consider an eigenbasis of A, V = {v_i}, with λ_i the eigenvalue associated to v_i. Then, for
any v_i,

    A B v_i = B A v_i = λ_i B v_i,    (4)

i.e., if B v_i ≠ 0, B v_i is an eigenvector of A associated to the same eigenvalue as v_i, namely λ_i.
We have two cases to consider:
• λ_i nondegenerate: B v_i can differ from v_i only by a constant factor,

    B v_i = μ_i v_i,

and thus v_i is also an eigenvector of B, with eigenvalue μ_i.
• λ_i degenerate: in this case there are more vectors associated to λ_i, which we denote by
w_j, j = 1, . . . , N, where N is the degeneracy of λ_i. Since B w_j is still an eigenvector of A,
we can write it as a linear combination of the w_j's. For this reason, the operator B can be
seen as acting internally in the subspace spanned by the w_j's. Since B is Hermitian, it is
Hermitian in particular in this subspace. Indeed, for any u_1, u_2 belonging to such subspace,

    (u_1, B u_2) = (B u_1, u_2),

just because u_1, u_2 belong also to the big vector space V.


Now, since B is Hermitian in this subspace we can diagonalize it, or in other words we can
choose a basis of eigenvectors of B which spans this subspace; call them w′_j. These w′_j
are still eigenvectors of A, and thus the Proposition is proved.

To understand better the last proof, we can view B as an infinite matrix, in the sense explained
by the following picture:

    B = ( (v_1, B v_1)   (v_1, B v_2)   . . . )
        ( (v_2, B v_1)   (v_2, B v_2)   . . . ) ,
        (      ...            ...       . . . )
where the v_i's belong to the basis with respect to which B is represented. If we view B in the
eigenbasis V introduced at the beginning of the proof, we have

    B = ( B_1             )
        (     B_2         )
        (         B_3     ) ,    (5)
        (             ... )
where each block B_i is a submatrix representing the internal action of B on subspaces similar to
the one spanned by the w_j's. Each block refers to an eigenvalue λ_i of A, and if λ_i is nondegenerate
the block will be just a 1 × 1 matrix. If λ_i is degenerate with degeneracy N, then the block will be
N × N. What we did in the degenerate case of the proof was just to show that the corresponding
block B_i is a Hermitian matrix, and thus diagonalizable.
Finally, note that if we know that A and B share a common eigenbasis, then their commutator is
zero. Indeed, sharing a common eigenbasis means that in such a basis they are both represented as
diagonal operators, and thus they commute. This consideration allows us to state a stronger
result than the above Proposition:
Proposition 3. Let A and B be two Hermitian operators. Then the following two statements are
equivalent:

i) A and B possess a common eigenbasis.

ii) A and B commute.
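The direction from i) to ii) is immediate to check numerically; a minimal sketch, assuming numpy, a random orthonormal basis, and illustrative eigenvalues:

```python
import numpy as np

# Two operators that are diagonal in the same orthonormal basis U ...
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # a random orthonormal basis
A = U @ np.diag([1.0, 4.0, 4.0]) @ U.T          # illustrative eigenvalues
B = U @ np.diag([2.0, 3.0, 5.0]) @ U.T

# ... necessarily commute: diagonal matrices commute, and the change of
# basis preserves products
assert np.allclose(A @ B, B @ A)
```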

Armed with the mathematical results we have found, we shall now answer the following question:
Given two commuting Hermitian operators A and B, is each eigenbasis of A also an
eigenbasis of B?
The short answer is: it depends. Consider the case where both A and B have only nondegenerate
eigenvalues. Then, by virtue of Propositions 1 and 2, each eigenbasis of A is also an eigenbasis of
B. Indeed, by Proposition 2 we can consider a common eigenbasis of A and B, which we denote
by V. By Proposition 1 we know that we would exhaust all the eigenbases of A by permuting and
multiplying by complex numbers the vectors of V, and the same for the eigenbases of B. Thus, in
this case, the answer to the above question is YES. In all the other cases, the answer is NO.
Consider e.g. the case where A has some degenerate eigenvalues. Then, in some eigenbasis of A,
B would look like in Equation (5), which is not in diagonal form if some of the blocks B_i are
nondiagonal. For a neat example, we can consider the following matrices:

    A = ( 1          )         B = ( 3           )
        (    2       ) ,           (    1   −2i  ) ,
        (       2    )             (    2i   1   )

note that [A, B] = 0, essentially because the 2 × 2 lower diagonal block of A is a scalar matrix
(i.e. a matrix proportional to the identity), and thus it must commute with the corresponding
lower diagonal block of B. Moreover, B is Hermitian,

and hence diagonalizable. Note that A and B are represented in terms of an eigenbasis of A, and
that 2 is a degenerate eigenvalue of A. We denote this eigenbasis by V = {e1 , e2 , e3 }, where

1 0 0
e1 = 0 , e2 = 1 , e3 = 0 .
0 0 1

Following the proof of Proposition 2, we only need to find two linear combinations of e2 , e3 such
that the lower block of B assumes a diagonal form. Working out the standard diagonalization
procedure, we find the common eigenbasis W = {e1 , v2 , v3 }, where
    v_2 = (e_2 + i e_3)/√2 ,    v_3 = (e_2 − i e_3)/√2 ,
and the above matrices assume the following form:

    A = ( 1          )         B = ( 3           )
        (    2       ) ,           (    3        ) .
        (       2    )             (        −1   )
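The example above can be verified numerically; a short check, assuming numpy:

```python
import numpy as np

# The matrices A and B from the example
A = np.diag([1.0, 2.0, 2.0]).astype(complex)
B = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, -2.0j],
              [0.0, 2.0j, 1.0]])

assert np.allclose(A @ B, B @ A)       # [A, B] = 0
assert np.allclose(B, B.conj().T)      # B is Hermitian

# The common eigenbasis W = {e1, v2, v3}
e1 = np.array([1.0, 0.0, 0.0], dtype=complex)
v2 = np.array([0.0, 1.0, 1.0j]) / np.sqrt(2)
v3 = np.array([0.0, 1.0, -1.0j]) / np.sqrt(2)
W = np.column_stack([e1, v2, v3])

# In this basis both matrices are diagonal
assert np.allclose(W.conj().T @ A @ W, np.diag([1.0, 2.0, 2.0]))
assert np.allclose(W.conj().T @ B @ W, np.diag([3.0, 3.0, -1.0]))
```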

For a more physical example, consider the hydrogen atom. The energy eigenstates are commonly
labelled by
    |n, ℓ, m⟩,    (6)
where n is the energy level, ℓ is the label associated to L², and m is the label associated to L_z
through

    L_z |n, ℓ, m⟩ = ℏm |n, ℓ, m⟩.

In this basis, E, L², and L_z are all diagonal. Using the fact that the states (6) are degenerate, we
want to construct a new basis where L_z is nondiagonal. One example is
    V = {|n, ℓ, s⟩},    where    |n, ℓ, s⟩ = (1/√(ℓ+s+1)) Σ_{m=−ℓ}^{s} |n, ℓ, m⟩,

and where s = −ℓ, . . . , ℓ. Clearly, V is a complete set of eigenstates of E and L², but its elements
are not eigenstates of L_z; indeed
    L_z |n, ℓ, s⟩ = (1/√(ℓ+s+1)) Σ_{m=−ℓ}^{s} L_z |n, ℓ, m⟩ = (ℏ/√(ℓ+s+1)) Σ_{m=−ℓ}^{s} m |n, ℓ, m⟩,

which clearly doesn't correspond to Equation (1). Another simple construction we can make with
the hydrogen atom is the following. Consider, instead of (6), the basis given by the eigenvectors of
E, L², and L_x, instead of L_z, and denote it by

    W = {|n, ℓ, m_x⟩}.    (7)

Thus we have that

    L_x |n, ℓ, m_x⟩ = ℏ m_x |n, ℓ, m_x⟩.

However, L_z will not be diagonal in this basis. Indeed, suppose that, for every m_x, there exists
a constant λ_{m_x} such that

    L_z |n, ℓ, m_x⟩ = λ_{m_x} |n, ℓ, m_x⟩.

Then, on any eigenstate of the form (7),

    iℏ L_y |n, ℓ, m_x⟩ = [L_z, L_x] |n, ℓ, m_x⟩ = (λ_{m_x} ℏ m_x − ℏ m_x λ_{m_x}) |n, ℓ, m_x⟩ = 0,

which we know to be false. Therefore W is not an eigenbasis for L_z, but it remains an eigenbasis
for E and L². In the basis given by the states (6), the operator L_z assumes the following matrix
form for the first 4 eigenstates:

    L_z = ℏ ( 0               )
            (    −1           )
            (        0        ) ,
            (           1     )
            (             ... )

whereas, in the basis (7),


    L_z = (ℏ/√2) ( 0              )
                 (    0  1  0     )
                 (    1  0  1     ) .
                 (    0  1  0     )
                 (             ...)
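The ℓ = 1 block of these matrices can be reproduced numerically. The sketch below (numpy assumed, units where ℏ = 1) builds the standard ℓ = 1 matrices, changes to the L_x eigenbasis, and checks that L_z is no longer diagonal there:

```python
import numpy as np

hbar = 1.0  # units where hbar = 1

# l = 1 angular momentum matrices in the Lz eigenbasis (ordering m = 1, 0, -1)
Lz = hbar * np.diag([1.0, 0.0, -1.0])
Lx = (hbar / np.sqrt(2)) * np.array([[0.0, 1.0, 0.0],
                                     [1.0, 0.0, 1.0],
                                     [0.0, 1.0, 0.0]])

# Change to the eigenbasis of Lx, i.e. the l = 1 part of the basis (7)
_, U = np.linalg.eigh(Lx)
Lz_x = U.T @ Lz @ U

# Lz is no longer diagonal: its diagonal vanishes, and neighboring m_x
# states are coupled with matrix elements of magnitude hbar/sqrt(2)
assert np.allclose(np.diag(Lz_x), 0.0)
assert np.isclose(abs(Lz_x[0, 1]), hbar / np.sqrt(2))
assert np.isclose(abs(Lz_x[0, 2]), 0.0)
```

The magnitudes of the matrix elements are basis-convention independent, since the eigenvectors of L_x are unique up to phase; only the phases of the entries depend on the convention chosen by `eigh`.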

References

[1] R. Shankar, Principles of Quantum Mechanics (Kluwer, 1994).


[2] D.J. Griffiths, Introduction to Quantum Mechanics (Pearson Prentice Hall, 2005).

MIT OpenCourseWare
http://ocw.mit.edu

8.04 Quantum Physics I


Spring

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.