
Tor Kjellsson

Stockholm University

Chapter 3
3.2
a)
Q. For what range of ν is the function f(x) = x^ν in Hilbert space on the interval
(0,1)? Assume that ν is real but not necessarily positive.
Sol:
A function in Hilbert space is always normalizable. This means that:

$$\int f^*(x)f(x)\,dx = A \tag{1}$$

where A is a finite real number. Note also that this number cannot be negative since
f^*(x)f(x) = |f(x)|².
Using this for our function we obtain:

$$\int_0^1 x^{2\nu}\,dx = \left[\frac{x^{2\nu+1}}{2\nu+1}\right]_0^1 = \frac{1}{2\nu+1} \tag{2}$$

under the assumption that ν ≠ −1/2, which is a case that we have to study
separately. Now, the evaluation at the lower limit only vanishes (and the LHS of
eq. (2) only stays finite and non-negative) if 2ν + 1 > 0, so we obtain: ν > −1/2.
But what about ν = −1/2?
In this case we obtain:

$$\int_0^1 \frac{1}{x}\,dx = \Big[\log|x|\Big]_0^1 = 0 + \infty \tag{3}$$

which is not normalizable.

Answer: ν > −1/2
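If you want to verify the convergence condition with a computer, here is a small sympy check (not part of the original solution; sympy is assumed to be available). It evaluates the normalization integral of x^ν on (0,1) for a few sample exponents around the critical value ν = −1/2:

```python
import sympy as sp

x = sp.symbols("x", positive=True)

# Normalization integral of f(x) = x**nu on (0, 1) for sample exponents.
# Exponents above -1/2 give a finite answer; at and below -1/2 it diverges.
for nu in [sp.Rational(1, 2), 0, sp.Rational(-1, 4), sp.Rational(-1, 2), -1]:
    integral = sp.integrate(x**(2 * nu), (x, 0, 1))
    print(nu, integral)
```

The printed values are finite exactly for ν > −1/2, and `oo` otherwise, matching eq. (2) and eq. (3).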

b)
Q. For the specific case ν = 1/2, is f(x) in this Hilbert space? What about
g(x) = x f(x)? How about h(x) = (d/dx) f(x)?
Sol:
f(x) and g(x) are obviously in the Hilbert space, since we proved that x^ν resides
in the space for all ν > −1/2 (here ν = 1/2 and ν = 3/2 respectively). h(x),
however, does not live inside the Hilbert space, because h(x) = (d/dx) x^{1/2} =
(1/2) x^{−1/2}, and we proved earlier that x^{−1/2} is not a normalizable function.

3.3
Q. Show that the following two definitions of a hermitian operator are equivalent:

$$\langle h|\hat Q h\rangle = \langle \hat Q h|h\rangle \tag{4}$$

and

$$\langle f|\hat Q g\rangle = \langle \hat Q f|g\rangle \tag{5}$$

where f, g and h are functions residing in Hilbert space H.

Sol:
That (5) implies (4) is immediate: just set f = g = h. For the converse, start with
the first of the two definitions:

$$\langle h|\hat Q h\rangle = \langle \hat Q h|h\rangle.$$

Now consider two functions f(x) and g(x) that exist in H. Since H is a vector
space, any linear combination of two elements within the space lies in the same
space. Thus the two following linear combinations both reside within the space:

$$h(x) = f(x) + g(x), \qquad \tilde h(x) = f(x) + ig(x). \tag{6}$$

Now we plug the first combination into the definition:

$$\langle f+g|\hat Q(f+g)\rangle = \langle \hat Q(f+g)|f+g\rangle$$

and split the inner products up into smaller parts:

$$\langle f|\hat Qf\rangle + \langle f|\hat Qg\rangle + \langle g|\hat Qf\rangle + \langle g|\hat Qg\rangle
= \langle \hat Qf|f\rangle + \langle \hat Qf|g\rangle + \langle \hat Qg|f\rangle + \langle \hat Qg|g\rangle.$$

The diagonal terms ⟨f|Q̂f⟩ = ⟨Q̂f|f⟩ and ⟨g|Q̂g⟩ = ⟨Q̂g|g⟩ cancel pair-wise because
of the definition of hermitian operator that we are currently using. We thus obtain:

$$\langle f|\hat Qg\rangle + \langle g|\hat Qf\rangle = \langle \hat Qf|g\rangle + \langle \hat Qg|f\rangle. \tag{7}$$

Now we do the same thing for f + ig:

$$\langle f+ig|\hat Q(f+ig)\rangle = \langle \hat Q(f+ig)|f+ig\rangle$$

and after cancelling the diagonal terms as before:

$$\langle f|\hat Q(ig)\rangle + \langle ig|\hat Qf\rangle = \langle \hat Qf|ig\rangle + \langle \hat Q(ig)|f\rangle. \tag{8}$$

Recall the following property of the inner product:

$$\langle f|cg\rangle = \int f^*(x)\,c\,g(x)\,dx = c\int f^*(x)g(x)\,dx = c\,\langle f|g\rangle$$

and

$$\langle cf|g\rangle = \int (cf(x))^*\,g(x)\,dx = \int c^*f^*(x)g(x)\,dx = c^*\langle f|g\rangle$$

so

$$\langle f|cg\rangle = c\,\langle f|g\rangle, \qquad \langle cf|g\rangle = c^*\langle f|g\rangle. \tag{9}$$

Using this on eq. (8):

$$i\langle f|\hat Qg\rangle - i\langle g|\hat Qf\rangle = i\langle \hat Qf|g\rangle - i\langle \hat Qg|f\rangle$$

and dividing by i we obtain:

$$\langle f|\hat Qg\rangle - \langle g|\hat Qf\rangle = \langle \hat Qf|g\rangle - \langle \hat Qg|f\rangle. \tag{10}$$

Recall eq. (7):

$$\langle f|\hat Qg\rangle + \langle g|\hat Qf\rangle = \langle \hat Qf|g\rangle + \langle \hat Qg|f\rangle.$$

If you now add this to eq. (10) you get:

$$2\langle f|\hat Qg\rangle = 2\langle \hat Qf|g\rangle$$

and hence we have arrived at the second definition of a hermitian operator:

$$\langle f|\hat Qg\rangle = \langle \hat Qf|g\rangle. \tag{11}$$
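The same equivalence can be checked numerically in a finite-dimensional analogue (a sketch, not part of the original solution; a hermitian matrix plays the role of Q̂ and the vector inner product plays the role of the integral):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a Hermitian matrix Q and two random complex vectors f, g.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q = A + A.conj().T                 # Hermitian by construction
f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

inner = lambda u, v: np.vdot(u, v)  # <u|v>, conjugating the bra

# Definition (4): <h|Qh> = <Qh|h>, here for h = f
assert np.isclose(inner(f, Q @ f), inner(Q @ f, f))
# Definition (5): <f|Qg> = <Qf|g>, which the derivation shows is equivalent
assert np.isclose(inner(f, Q @ g), inner(Q @ f, g))
print("both definitions agree")
```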

3.4
a)
Q. Show that the sum of two hermitian operators is also hermitian.

Sol:
Let P̂ and Q̂ be two hermitian operators. Then¹:

$$\langle f|\hat Pf\rangle = \langle \hat Pf|f\rangle \quad\text{and}\quad \langle f|\hat Qf\rangle = \langle \hat Qf|f\rangle.$$

Define the sum of the operators as K̂ = P̂ + Q̂ and compute the expectation
value of this operator:

$$\langle f|\hat Kf\rangle = \langle f|(\hat P+\hat Q)f\rangle = \langle f|\hat Pf\rangle + \langle f|\hat Qf\rangle$$

and since our operators P̂ and Q̂ are hermitian we deduce:

$$\langle f|\hat Kf\rangle = \langle \hat Pf|f\rangle + \langle \hat Qf|f\rangle = \langle(\hat P+\hat Q)f|f\rangle = \langle \hat Kf|f\rangle \tag{12}$$

and thus K̂ has been shown to be hermitian.

b)
Q. Suppose Q̂ is hermitian and α is a complex number. Under what condition
on α is the operator P̂ = αQ̂ hermitian?
Sol:
Let f be a function in H. Recall that for a hermitian operator:

$$\langle f|\hat Qf\rangle = \langle \hat Qf|f\rangle.$$

To test whether an operator is hermitian we evaluate the expectation value on
the LHS and on the RHS separately and then compare them. If they agree, the
operator is hermitian.
Starting with the left hand side we obtain:

$$\langle f|\hat Pf\rangle = \alpha\langle f|\hat Qf\rangle = \alpha\langle \hat Qf|f\rangle. \tag{13}$$

Consider now the right hand side:

$$\langle \hat Pf|f\rangle = \langle \alpha\hat Qf|f\rangle = \alpha^*\langle \hat Qf|f\rangle. \tag{14}$$

We see that the two expressions are equal if and only if α = α*. Hence the
condition on α is that it needs to be real.

¹ We could here use either of the two definitions of hermiticity since we showed in the
previous problem that they are equivalent.

c)
Q. When is the product of two hermitian operators hermitian?
Sol:
Let P̂ and Q̂ be two hermitian operators, define their product as K̂ = P̂Q̂ and
let f be a function in H. Now consider the following:

$$\langle f|\hat Kf\rangle = \langle f|\hat P\hat Qf\rangle.$$

I will first manipulate this quickly and then redo the steps in more detail, in case
you do not follow the fast version:

$$\langle f|\hat P\hat Qf\rangle = \langle \hat Pf|\hat Qf\rangle = \langle \hat Q\hat Pf|f\rangle$$

so K̂ = P̂Q̂ is a hermitian operator if and only if:

$$\hat P\hat Q = \hat Q\hat P \quad\Longleftrightarrow\quad \hat P\hat Q - \hat Q\hat P = 0 \tag{15}$$

which is to say that the two operators commute. As a slight warning, though,
I want to highlight what generally happens when you do this:
Note:
The following is the general rule for moving an operator across the vertical line:

$$\langle f|\hat Pg\rangle = \langle \hat P^\dagger f|g\rangle \tag{16}$$

where P̂† is called the hermitian conjugate of P̂. For a hermitian operator we
have P̂† = P̂, so what we have done until now is a special case of this.
Now, if you were satisfied by the solution there is no need to read further. For
those who feel uneasy about moving one operator at a time across the vertical
line, the details of why you can do this are included below.
Detailed explanation of the manipulations:
Since P̂ and Q̂ are hermitian operators, the result of either of them acting on a
function f in H must also reside in H. That is to say:

$$\hat Qf = g \quad\text{and}\quad \hat Pf = h$$

for some functions g and h in H. Therefore:

$$\langle f|\hat P\hat Qf\rangle = \langle f|\hat Pg\rangle = \langle \hat Pf|g\rangle = \langle h|\hat Qf\rangle = \langle \hat Qh|f\rangle = \langle \hat Q\hat Pf|f\rangle.$$
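The "hermitian iff commuting" statement can be illustrated with matrices (a numerical sketch, not part of the original solution): a generic pair of hermitian matrices does not commute and their product is not hermitian, while a matrix that commutes with P (for instance a polynomial in P) gives a hermitian product.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n, rng):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A + A.conj().T

P = random_hermitian(3, rng)
Q = random_hermitian(3, rng)

# Generic hermitian P, Q do not commute, so PQ is not hermitian:
PQ = P @ Q
print(np.allclose(PQ, PQ.conj().T))   # expect False for a generic pair

# A polynomial in P commutes with P, so the product is hermitian:
Q2 = P @ P + 3 * P
PQ2 = P @ Q2
print(np.allclose(P @ Q2, Q2 @ P), np.allclose(PQ2, PQ2.conj().T))
```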

d)
Q. Show that the position operator x̂ and the hamiltonian operator
Ĥ = −(ℏ²/2m) d²/dx² + V(x) are hermitian.

Sol:
First a short remark on the position operator. When this operator acts on
a function f(x) in H it simply gives you back the function multiplied by the
position x:

$$\hat x f(x) = x f(x).$$

Do not confuse the two things: one is an operator that does not do anything
unless it has something to operate on, while the other is a variable, i.e. a
number (we just don't know which number).
Let us now investigate the hermiticity of the two operators.

x̂:

$$\langle f|\hat xg\rangle = \int f^*(x)\,\hat x\,g(x)\,dx = \int f^*(x)\,x\,g(x)\,dx.$$

Now, a position must be real, so x = x*. Also, the values of the functions are
scalars, and scalar multiplication commutes (c₁c₂ = c₂c₁). Thus:

$$\int f^*(x)\,x\,g(x)\,dx = \int f^*(x)\,x^*\,g(x)\,dx = \int x^*f^*(x)\,g(x)\,dx = \int (xf(x))^*\,g(x)\,dx$$

and we deduce that

$$\langle f|\hat xg\rangle = \int f^*(x)\,\hat x\,g(x)\,dx = \int (xf(x))^*\,g(x)\,dx = \langle \hat xf|g\rangle \tag{17}$$

so the position operator is hermitian.

Ĥ:

$$\langle f|\Big(\!-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+V(x)\Big)g\rangle
= -\frac{\hbar^2}{2m}\int f^*(x)\,\frac{d^2g}{dx^2}\,dx + \int f^*(x)V(x)g(x)\,dx$$

and now we study the two integrals separately. The last term is easiest if we
assume the potential V(x) to be real (which generally is the case). Then we
have:

$$\int f^*(x)V(x)g(x)\,dx = \int f^*(x)V^*(x)g(x)\,dx = \int (V(x)f(x))^*\,g(x)\,dx$$

so this part is hermitian. (As an aside: a real number is hermitian, but what
about an imaginary one?) Thus we now only need to check the first integral.

We can drop the real constant factor −ℏ²/2m (why?) to get:

$$\langle f|\frac{d^2}{dx^2}g\rangle = \int_{-\infty}^{\infty} f^*\,\frac{d^2g}{dx^2}\,dx.$$

To compute this we need the limits of the integral, which are ±∞ (since
nothing else is specified). Partial integration (twice) gives:

$$\int_{-\infty}^{\infty} f^*\frac{d^2g}{dx^2}\,dx
= \underbrace{\Big[f^*\frac{dg}{dx}\Big]_{-\infty}^{\infty}}_{0}
- \int_{-\infty}^{\infty}\frac{df^*}{dx}\frac{dg}{dx}\,dx
= -\underbrace{\Big[\frac{df^*}{dx}\,g\Big]_{-\infty}^{\infty}}_{0}
+ \int_{-\infty}^{\infty}\frac{d^2f^*}{dx^2}\,g\,dx$$

where the underbraced terms are 0 because the functions must go to 0 at ±∞
and the derivatives are finite there². Hence we have proven that:

$$\langle f|\frac{d^2}{dx^2}g\rangle = \langle \frac{d^2}{dx^2}f|g\rangle \tag{18}$$

so the hamiltonian operator is hermitian (since it is a sum of two hermitian
operators). Does this come as a surprise³?
3.5
Q. The hermitian conjugate (or adjoint) of an operator Q̂ is the operator Q̂†
such that:

$$\langle f|\hat Qg\rangle = \langle \hat Q^\dagger f|g\rangle \tag{19}$$

for all f and g in H. (For a hermitian operator, Q̂† = Q̂.)


a)
Q. Find the hermitian conjugates of x, i and d/dx.
Sol:
Recall the following definition:

$$\langle f|\hat Qg\rangle = \int f^*(x)\,\hat Q\,g(x)\,dx.$$

For the position operator we have (dropping notation of x-dependence):

$$\langle f|\hat xg\rangle = \int f^*\hat x g\,dx = \int f^* x g\,dx = \int x f^* g\,dx = \int (x^* f)^* g\,dx \tag{20}$$

so for a variable (or constant) that represents a scalar, the hermitian conjugate
is the complex conjugate of that scalar. Because x is a real variable, x = x*,
and we find that x̂† = x̂.
The imaginary number i is a scalar, so from our discussion above we deduce
that i† = −i⁴.
As for the derivative operator d/dx (no need for a hat here either, we know
that it is an operator) we get:

$$\int f^*\frac{dg}{dx}\,dx = \underbrace{\Big[f^*g\Big]_{-\infty}^{\infty}}_{0} - \int \frac{df^*}{dx}\,g\,dx = \int \Big(\!-\frac{df}{dx}\Big)^{\!*} g\,dx \tag{21}$$

and we see that

$$\Big(\frac{d}{dx}\Big)^{\!\dagger} = -\frac{d}{dx}.$$

² This is the case for all physical functions at ±∞. When you have finite limits on the
integral this need not be the case.
³ Think about what the eigenvalues of Ĥ represent.

b)
Q. Construct the hermitian conjugate of the harmonic oscillator raising operator
a₊. Recall the definition of the step operators of the harmonic oscillator:

$$a_+ = \frac{1}{\sqrt{2\hbar m\omega}}\,(-ip + m\omega x) \tag{22}$$

$$a_- = \frac{1}{\sqrt{2\hbar m\omega}}\,(ip + m\omega x) \tag{23}$$

Sol:
The hermitian conjugate of a sum of operators is the sum of the individual
hermitian conjugates⁵. Thus we can study the two terms in a₊ separately, and
we notice that the second term is the position operator (multiplied by real
constants). We know from the previous problem that this is a hermitian operator,
so we now turn to study the first term: −ip.
Since momentum can be represented as p = (ℏ/i) d/dx we see that

$$-ip = -\hbar\frac{d}{dx}.$$

But we saw earlier that (d/dx)† = −d/dx. Hence the hermitian conjugate of
−ip must be +ip, and we obtain:

$$(a_+)^\dagger = \frac{1}{\sqrt{2\hbar m\omega}}\,(ip + m\omega x) = a_- \tag{24}$$

⁴ The hats on operators may be omitted if it is clear from the context that an operator is
meant. In this case writing î would be a waste of ink, since a scalar acting as an operator is just
multiplication by that scalar. Furthermore it might be confused with the notation for basis
vectors in a cartesian coordinate system.
⁵ If this sounds strange you should pause a minute and explicitly write down the integral
to convince yourself.

c)
Q. Show that (Q̂R̂)† = R̂†Q̂†.
Sol:
We move one operator at a time:

$$\langle f|\hat Q\hat Rg\rangle = \langle \hat Q^\dagger f|\hat Rg\rangle \tag{25}$$

and as we do that we exchange the operator for its hermitian conjugate
(recall problem 3.4c where we did this for hermitian operators). Now move the
other operator:

$$\langle \hat Q^\dagger f|\hat Rg\rangle = \langle \hat R^\dagger\hat Q^\dagger f|g\rangle \tag{26}$$

so we have indeed proved that (Q̂R̂)† = R̂†Q̂†.
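In the matrix analogue the identity, including the reversal of the order, is easy to check numerically (a sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(2)
Q = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
R = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

dagger = lambda M: M.conj().T

# (QR)† = R† Q†  -- note the reversed order
assert np.allclose(dagger(Q @ R), dagger(R) @ dagger(Q))
# whereas for generic Q, R the un-reversed product differs:
print(np.allclose(dagger(Q @ R), dagger(Q) @ dagger(R)))  # expect False
```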

3.6
Q. Consider the operator Q̂ = d²/dφ², where φ is the azimuthal angle in polar
coordinates, 0 ≤ φ ≤ 2π. The functions in H are subject to the condition
f(φ) = f(φ + 2π). Is Q̂ hermitian? Find its eigenfunctions and eigenvalues.
What is the spectrum? Is it degenerate?
Sol:
Let f and g be functions in H defined on the interval (0, 2π). Now we check if
the operator is hermitian:

$$\langle f|\frac{d^2}{d\varphi^2}g\rangle
= \int_0^{2\pi} f^*\frac{d^2g}{d\varphi^2}\,d\varphi
= \Big[f^*\frac{dg}{d\varphi}\Big]_0^{2\pi} - \int_0^{2\pi}\frac{df^*}{d\varphi}\frac{dg}{d\varphi}\,d\varphi.$$

The first term is 0 by periodicity, if you assume that the derivatives are
continuous, which amounts to the condition that the functions are twice
differentiable. For all systems we will consider this can be assumed, and we
proceed with evaluating the last integral:

$$-\int_0^{2\pi}\frac{df^*}{d\varphi}\frac{dg}{d\varphi}\,d\varphi
= -\Big[\frac{df^*}{d\varphi}\,g\Big]_0^{2\pi} + \int_0^{2\pi}\frac{d^2f^*}{d\varphi^2}\,g\,d\varphi$$

where the boundary term again vanishes by periodicity. Under the assumption
that we made, we have found that our operator is hermitian. To find its
eigenfunctions and eigenvalues we solve the eigenvalue equation:

$$\hat Qf = qf \quad\Longrightarrow\quad \frac{d^2}{d\varphi^2}f = qf \quad\Longrightarrow\quad f = Ae^{\sqrt q\,\varphi} + Be^{-\sqrt q\,\varphi} \tag{27}$$

The general solution is built up by a linear combination of the eigenfunctions:

$$f_\pm = e^{\pm\sqrt q\,\varphi} \tag{28}$$

and q denotes our eigenvalues. Since any multiple of an eigenfunction is itself
an eigenfunction you could in principle multiply the exponential by a constant.
However, this constant is accounted for when you normalize your linear
combination.
To see what the eigenvalues are we note that the boundary condition implies:

$$e^{\sqrt q\,\varphi} = e^{\sqrt q\,(\varphi+2\pi)} \quad\Longrightarrow\quad e^{2\pi\sqrt q} = 1 = e^{2\pi in}, \quad n\in\mathbb Z. \tag{29}$$

This sets the condition on q:

$$\sqrt q = in,\ n\in\mathbb Z \quad\Longrightarrow\quad q = -n^2, \qquad q = 0, -1, -4, -9, \dots$$

which is called the spectrum of Q̂. For each value of q except 0 there are
two distinct eigenfunctions, f₊ and f₋, so for q ≠ 0 the spectrum is doubly
degenerate. For q = 0 there is only one eigenfunction, so for this eigenvalue it
is not degenerate.
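A quick symbolic check of the eigenvalue relation (not part of the original solution; sympy assumed): the periodic functions e^{inφ} are indeed eigenfunctions of d²/dφ² with eigenvalue −n².

```python
import sympy as sp

phi = sp.symbols("phi", real=True)
n = sp.symbols("n", integer=True)

# The periodic eigenfunctions f_± correspond to positive/negative n.
f = sp.exp(sp.I * n * phi)
q = sp.simplify(sp.diff(f, phi, 2) / f)
print(q)   # -n**2: the spectrum q = 0, -1, -4, -9, ...
```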

3.7
a)
Q. Suppose that f(x) and g(x) are two eigenfunctions of an operator Q̂, with
the same eigenvalue q. Show that any linear combination of f and g is itself an
eigenfunction of Q̂ with the eigenvalue q.
Sol:
This is very straightforward. We want to show that for any complex numbers
c₁, c₂:

$$\hat Q(c_1f + c_2g) = q(c_1f + c_2g)$$

holds if:

$$\hat Qf = qf \quad\text{and}\quad \hat Qg = qg.$$

Since Q̂ is linear and c₁, c₂ are just numbers, Q̂ has no effect on them:

$$\hat Q(c_1f + c_2g) = \hat Q c_1f + \hat Q c_2g = c_1\hat Qf + c_2\hat Qg.$$

Now we use that Q̂f = qf and Q̂g = qg:

$$c_1\hat Qf + c_2\hat Qg = c_1qf + c_2qg = q(c_1f + c_2g)$$

so thus we have shown that:

$$\hat Q(c_1f + c_2g) = q(c_1f + c_2g).$$

b)
Q. Check that f(x) = eˣ and g(x) = e⁻ˣ are eigenfunctions of the operator
d²/dx², with the same eigenvalue. Construct two linear combinations of f and
g that are orthogonal eigenfunctions on the interval (−1,1).
Sol:
Apply the operator to the functions and you get:

$$\frac{d^2}{dx^2}e^{\pm x} = e^{\pm x}$$

so indeed they have the same eigenvalue (q = 1).
Now we want to construct two linear combinations of these that are orthogonal
on the interval (−1,1). If we name them:

$$h_1 = a_1e^x + b_1e^{-x} \quad\text{and}\quad h_2 = a_2e^x + b_2e^{-x} \tag{30}$$

they are orthogonal on the interval (−1,1) if:

$$\int_{-1}^{1} h_1^* h_2\,dx = 0.$$

Taking the coefficients real (so that h₁* = h₁) and writing the integral out:

$$\int_{-1}^{1}\big(a_1a_2e^{2x} + (b_1a_2 + a_1b_2) + b_1b_2e^{-2x}\big)\,dx = 0$$

$$\frac{1}{2}a_1a_2\,(e^2 - e^{-2}) + 2(b_1a_2 + a_1b_2) + \frac{1}{2}b_1b_2\,(e^2 - e^{-2}) = 0. \tag{31}$$

For this equation to hold we need to put constraints on the coefficients a₁, a₂, b₁
and b₂. Since we are only asked to give an example, we can put whatever values
we want on these in order for the equation to hold⁶.
In our example we choose real constants and impose the following conditions:

$$a_1a_2 = -b_1b_2, \qquad b_1a_2 = -a_1b_2$$

where the first condition makes the first and third terms in eq. (31) cancel each
other, while the second condition makes the middle term in eq. (31) zero. (Check
this.)
Our conditions can be solved, for instance, by putting a₁ = a₂ and b₁ = −b₂.
One specific example would then be a₁ = a₂ = b₁ = 1 and b₂ = −1, which gives
us:

$$h_1 = e^x + e^{-x}, \qquad h_2 = e^x - e^{-x} \tag{32}$$

These are multiples of the functions cosh(x) and sinh(x).

⁶ You could in principle try to use the Gram–Schmidt procedure to construct two orthogonal
functions, but don't do it.
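The eigenvalue and orthogonality claims for h₁ and h₂ are easy to confirm symbolically (a sanity check, not part of the original solution; sympy assumed):

```python
import sympy as sp

x = sp.symbols("x", real=True)
h1 = sp.exp(x) + sp.exp(-x)   # a multiple of cosh(x)
h2 = sp.exp(x) - sp.exp(-x)   # a multiple of sinh(x)

# Both are eigenfunctions of d^2/dx^2 with eigenvalue q = 1:
assert sp.simplify(sp.diff(h1, x, 2) - h1) == 0
assert sp.simplify(sp.diff(h2, x, 2) - h2) == 0

# And they are orthogonal on (-1, 1):
print(sp.simplify(sp.integrate(h1 * h2, (x, -1, 1))))   # 0
```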


3.8
a)
Q. Let H denote the Hilbert space of all functions subject to the condition
f(φ) = f(φ + 2π), and let 0 ≤ φ ≤ 2π. Check that the eigenvalues of the
hermitian operator i d/dφ are real. Also, show that the eigenfunctions for distinct
eigenvalues are orthogonal.
Sol:
The eigenvalue equation for this is:

$$i\frac{d}{d\varphi}f = qf \quad\Longrightarrow\quad f = Ae^{-iq\varphi} \tag{33}$$

which, like before, is the general solution of the differential equation. Just like in
the previous problem we get a condition on the eigenvalues from the periodicity
of the functions:

$$e^{-iq\varphi} = e^{-iq(\varphi+2\pi)} \quad\Longrightarrow\quad e^{-2\pi iq} = 1 = e^{2\pi in},\ n\in\mathbb Z \quad\Longrightarrow\quad q = n,\ n\in\mathbb Z$$

so the eigenvalues are indeed real. To check the orthogonality we evaluate the
inner product of two eigenfunctions with the distinct eigenvalues a and b:

$$\langle f_a|f_b\rangle = \int_0^{2\pi} e^{ia\varphi}\,e^{-ib\varphi}\,d\varphi
= \int_0^{2\pi} e^{i(a-b)\varphi}\,d\varphi
= \frac{1}{i(a-b)}\Big(\underbrace{e^{i(a-b)2\pi}}_{1} - 1\Big) = 0$$

(since a − b is a nonzero integer), so we see that the functions with distinct
eigenvalues must be orthogonal.

b)
Q. Now do the same for the operator in Problem 3.6.
Sol:
In problem 3.6 we saw that the eigenfunctions were:

$$f_\pm = e^{\pm\sqrt q\,\varphi} \tag{34}$$

with eigenvalues:

$$q = 0, -1, -4, -9, \dots$$

so we immediately see that the eigenvalues are real. Furthermore, since √q = in
for a non-negative integer n, the eigenfunctions are e^{±inφ}, so eigenfunctions with
distinct eigenvalues are orthogonal. This we know since it is the case we just
solved in the previous problem.

3.10
Q. Is the ground state of the infinite square well an eigenfunction of momentum?
If so, what is its momentum? If not, why not?
Sol:
First we write down the ground state of the infinite square well:

$$\psi_1(x) = \sqrt{\frac{2}{a}}\,\sin\Big(\frac{\pi x}{a}\Big) \tag{35}$$

Then we can explicitly test if this is an eigenfunction: operating on ψ₁ should
give back ψ₁ multiplied by some constant. Now we check this:

$$-i\hbar\frac{d}{dx}\psi_1(x) = -i\hbar\,\sqrt{\frac{2}{a}}\,\frac{\pi}{a}\cos\Big(\frac{\pi x}{a}\Big)$$

and we see that after the operation we do not have something that is
proportional to ψ₁, because cos and sin are linearly independent functions.
So the answer is no.
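You can make the "not proportional" statement concrete with sympy (a sanity check, not part of the original solution): if ψ₁ were a momentum eigenfunction, the ratio (p̂ψ₁)/ψ₁ would be a constant, but it still depends on x.

```python
import sympy as sp

x, a, hbar = sp.symbols("x a hbar", positive=True)
psi1 = sp.sqrt(2 / a) * sp.sin(sp.pi * x / a)

# Apply the momentum operator p = -i hbar d/dx:
p_psi = -sp.I * hbar * sp.diff(psi1, x)

# For an eigenfunction this ratio would be a constant eigenvalue;
# here it still contains x (through cot(pi x / a)):
ratio = sp.simplify(p_psi / psi1)
print(ratio)
```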

3.11
Q. Find the momentum-space wave function, Φ(p, t), for a particle in the ground
state of the harmonic oscillator.
Sol:
Use eq. (3.54) in Griffiths:

$$\Phi(p,t) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} e^{-ipx/\hbar}\,\Psi(x,t)\,dx \tag{36}$$

with the ground-state position wave function of the harmonic oscillator:

$$\Psi_0(x,t) = \psi_0(x)\,T(t) = \Big(\frac{m\omega}{\pi\hbar}\Big)^{1/4} e^{-\frac{m\omega}{2\hbar}x^2}\,e^{-i\frac{E_0t}{\hbar}} \tag{37}$$

where the ground-state spatial wave function of the H.O. has been looked up
(Griffiths eq. (2.59), or Physics Handbook section 6.3). To get the total wave
function we just tack on the exponential with the energy dependence (eq. (2.6)
in Griffiths).
Use this, and E₀ = ℏω/2, to find the momentum-space wave function:

$$\Phi(p,t) = \frac{1}{\sqrt{2\pi\hbar}}\Big(\frac{m\omega}{\pi\hbar}\Big)^{1/4}
e^{-i\frac{\omega t}{2}}\int_{-\infty}^{\infty} e^{-\left(ipx/\hbar + \frac{m\omega}{2\hbar}x^2\right)}\,dx$$

The integral looks tricky but is easy to rewrite. The goal is to complete the
square (in Swedish: kvadratkomplettering) to get a Gaussian:

$$\frac{m\omega}{2\hbar}x^2 + \frac{ipx}{\hbar}
= \Big(\sqrt{\frac{m\omega}{2\hbar}}\,x + \frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}}\,\frac{ip}{\hbar}\Big)^{\!2}
- \Big(\frac{1}{2}\sqrt{\frac{2\hbar}{m\omega}}\,\frac{ip}{\hbar}\Big)^{\!2}
= \Big(\sqrt{\frac{m\omega}{2\hbar}}\,x + \frac{ip}{\sqrt{2m\omega\hbar}}\Big)^{\!2} + \frac{p^2}{2m\omega\hbar} \tag{38}$$

Reinsert this into the integral and move out the parts that do not depend on x:

$$\Phi(p,t) = \frac{1}{\sqrt{2\pi\hbar}}\Big(\frac{m\omega}{\pi\hbar}\Big)^{1/4}
e^{-i\frac{\omega t}{2}}\,e^{-\frac{p^2}{2m\omega\hbar}}
\int_{-\infty}^{\infty} e^{-\left(\sqrt{\frac{m\omega}{2\hbar}}\,x + \frac{ip}{\sqrt{2m\omega\hbar}}\right)^2}\,dx$$

Now we make the substitution:

$$y = \sqrt{\frac{m\omega}{2\hbar}}\,x + \frac{ip}{\sqrt{2m\omega\hbar}}, \qquad
dy = \sqrt{\frac{m\omega}{2\hbar}}\,dx, \qquad -\infty < y < \infty$$

and put the square-root factor that comes from the substitution outside the
integral:

$$\Phi(p,t) = \sqrt{\frac{2\hbar}{m\omega}}\,\frac{1}{\sqrt{2\pi\hbar}}
\Big(\frac{m\omega}{\pi\hbar}\Big)^{1/4} e^{-i\frac{\omega t}{2}}\,e^{-\frac{p^2}{2m\omega\hbar}}
\int_{-\infty}^{\infty} e^{-y^2}\,dy$$

The integral is now a standard Gaussian with the known value √π (look it up if
you need, for instance in PH section M-6, definite integrals⁷). This gives us:

$$\Phi(p,t) = \sqrt{\pi}\,\sqrt{\frac{2\hbar}{m\omega}}\,\frac{1}{\sqrt{2\pi\hbar}}
\Big(\frac{m\omega}{\pi\hbar}\Big)^{1/4} e^{-i\frac{\omega t}{2}}\,e^{-\frac{p^2}{2m\omega\hbar}}$$

$$\Phi(p,t) = \Big(\frac{1}{\pi m\omega\hbar}\Big)^{1/4} e^{-i\frac{\omega t}{2}}\,e^{-\frac{p^2}{2m\omega\hbar}} \tag{39}$$

⁷ Note however that our integral runs from −∞ to ∞ and not 0 as in PH. However, since
the integrand is symmetric about zero it has the same appearance to the left of the y-axis as
to the right. Thus, our value is twice the stated value in PH. Make sure you understand
this argument since it is a common one in physics/mathematics.

Q. What is the probability that a measurement of p on a particle in this state
would yield a value outside the classical range (for the same energy)? State the
answer with two digits of precision. You are allowed to use computer software
(like Mathematica).
Sol:
The relation between energy and momentum is E = p²/2m, giving p = ±√(2Em) =
±√(ℏωm) for E = E₀ = ℏω/2. To get the probability that a measurement yields
a momentum outside this region we calculate the two integrals:

$$P_1 = \int_{-\infty}^{-\sqrt{\hbar\omega m}} |\Phi(p,t)|^2\,dp
= \sqrt{\frac{1}{\pi m\hbar\omega}}\int_{-\infty}^{-\sqrt{\hbar\omega m}} e^{-\frac{p^2}{m\hbar\omega}}\,dp \tag{40}$$

and

$$P_2 = \sqrt{\frac{1}{\pi m\hbar\omega}}\int_{\sqrt{\hbar\omega m}}^{\infty} e^{-\frac{p^2}{m\hbar\omega}}\,dp. \tag{41}$$

Note that the integrand is symmetric about p = 0, so P₁ = P₂. Now, to calculate
this numerically we need to look in tables or use computer software at the end.
In PH we have tabulated values of the normal distribution, so we aim to use
that here (but note, you are allowed to use computer software).
Focusing on P₂ we make a substitution:

$$y = \sqrt{\frac{2}{m\hbar\omega}}\,p, \qquad dy = \sqrt{\frac{2}{m\hbar\omega}}\,dp, \qquad \sqrt2 < y < \infty$$

giving

$$P_2 = \sqrt{\frac{1}{\pi m\hbar\omega}}\,\sqrt{\frac{m\hbar\omega}{2}}
\int_{\sqrt2}^{\infty} e^{-y^2/2}\,dy
= \frac{1}{\sqrt{2\pi}}\int_{\sqrt2}^{\infty} e^{-y^2/2}\,dy$$

To get the value of the integral we turn to section M-16 in PH. Note that the
tabulated function produces the value for −∞ < y < x, while we are interested
in x < y < ∞ (for the given value x = √2). Thus, if the value from PH is
P₂,PH, our answer will be P₂ = 1 − P₂,PH. Reading off the table we obtain:

$$P_2 = 1 - P_{2,PH} = 1 - 0.921 = 0.079 \tag{42}$$

which gives us the total probability:

$$P_{tot} = P_1 + P_2 = 2P_2 = 0.158 \approx 0.16. \tag{43}$$
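Instead of the normal-distribution tables you can evaluate the probability directly (a check, not part of the original solution): in the dimensionless variable u = p/√(mℏω) the density is e^{−u²}/√π, so the probability of |u| > 1 is exactly the complementary error function erfc(1).

```python
import math

# P_tot = probability that |p| > sqrt(m*hbar*omega) for the HO ground state.
# With u = p / sqrt(m*hbar*omega) the density is exp(-u**2)/sqrt(pi),
# so P_tot = (2/sqrt(pi)) * integral_1^inf exp(-u**2) du = erfc(1).
P_tot = math.erfc(1.0)
print(round(P_tot, 3))   # consistent with 2*(1 - 0.921) = 0.158 from the tables
```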

3.12
Q. Show that:

$$\langle x\rangle = \int \Phi^*\Big(i\hbar\frac{\partial}{\partial p}\Big)\Phi\,dp$$

Hint (not in the textbook):

$$\int_{-\infty}^{\infty} e^{ipx}\,dx = 2\pi\,\delta(p)$$

(This is by the way the Fourier representation of the delta function, and you can
work it out in problem 2.26 if you have not done so already.)
Sol:
As a quick remark: whenever the limits on an integral are not stated in the
problem they are ±∞.
We start with the left hand side:

$$\langle x\rangle = \int \Psi^*(x,t)\,\hat x\,\Psi(x,t)\,dx = \int \Psi^*(x,t)\,x\,\Psi(x,t)\,dx \tag{44}$$

Remember, x̂ is the operator that gives the value of x when it operates on the
position-space wave function.
Now we substitute the position-space wave function for the momentum-space
wave function using eq. (3.55) in Griffiths:

$$\langle x\rangle = \int_x \Big[\frac{1}{\sqrt{2\pi\hbar}}\int_{p'} e^{-ip'x/\hbar}\,\Phi^*(p',t)\,dp'\Big]\;
x\;\Big[\frac{1}{\sqrt{2\pi\hbar}}\int_p e^{ipx/\hbar}\,\Phi(p,t)\,dp\Big]\,dx$$

The two momentum integrals carry different dummy variables p and p′ (but they
label the same space!) and are thus written with different letters. This is the
usage of dummy indices.
Since x, p and p′ are different variables, the order of performing the integrals
commutes. We are thus allowed to order the integrands any way we want, as
long as each factor is evaluated together with its respective integral. For reasons
that will soon be obvious we order it in the following way:

$$\langle x\rangle = \frac{1}{2\pi\hbar}\int_{p'} \Phi^*(p',t)\,dp'
\int_p\int_x x\,e^{ix(p-p')/\hbar}\,\Phi(p,t)\,dx\,dp$$

Now we note that ∂/∂p e^{i(p−p′)x/ℏ} = (ix/ℏ) e^{i(p−p′)x/ℏ}, i.e.
x e^{i(p−p′)x/ℏ} = (ℏ/i) ∂/∂p e^{i(p−p′)x/ℏ}. Using this we obtain:

$$\langle x\rangle = \frac{1}{2\pi\hbar}\int_{p'} \Phi^*(p',t)\,dp'
\int_p\int_x \frac{\hbar}{i}\,\frac{\partial}{\partial p}\Big(e^{i(p-p')x/\hbar}\Big)\,\Phi(p,t)\,dx\,dp.$$

Integrating the p-integral by parts gives:

$$\int_p \frac{\partial}{\partial p}\Big(e^{i(p-p')x/\hbar}\Big)\,\Phi(p,t)\,dp
= \Big[e^{i(p-p')x/\hbar}\,\Phi(p,t)\Big]_{-\infty}^{\infty}
- \int_p e^{i(p-p')x/\hbar}\,\frac{\partial\Phi(p,t)}{\partial p}\,dp$$

where the first term is 0 since the wave function must go to zero at infinity⁸.
Inserting this result back into our hideous expression for ⟨x⟩ we get:

$$\langle x\rangle = \frac{1}{2\pi\hbar}\int_{p'} \Phi^*(p',t)\,dp'
\int_p \Big(\!-\frac{\hbar}{i}\Big)\frac{\partial\Phi(p,t)}{\partial p}\,dp
\int_x e^{i(p-p')x/\hbar}\,dx.$$

Once again we focus on the last x-integral. It is now very close to the form of
the hint; a variable substitution will take care of the small difference:

$$z = \frac{x}{\hbar}, \qquad dz = \frac{dx}{\hbar}, \qquad -\infty < z < \infty$$

which gives:

$$\int_x e^{i(p-p')x/\hbar}\,dx = \int e^{i(p-p')z}\,\hbar\,dz = 2\pi\hbar\,\delta(p-p').$$

This gives (rearranging the integrals a bit):

$$\langle x\rangle = \int_p\int_{p'} \Phi^*(p',t)\,\Big(\!-\frac{\hbar}{i}\Big)\frac{\partial\Phi(p,t)}{\partial p}\,\delta(p-p')\,dp'\,dp.$$

Now, the delta function collapses one of the integrals and forces that variable
to be the same as the other. We might as well collapse the p′-integral, and since
−ℏ/i = iℏ, we obtain:

$$\langle x\rangle = \int \Phi^*(p,t)\,\Big(i\hbar\frac{\partial}{\partial p}\Big)\,\Phi(p,t)\,dp \tag{45}$$

⁸ The exponent of the exponential is complex, so the exponential itself stays finite; it is Φ
that provides the decay.

3.13
a)
Q. Prove the following commutator identity:

$$[\hat A\hat B,\hat C] = \hat A[\hat B,\hat C] + [\hat A,\hat C]\hat B$$

Sol:
We start by dropping the hats, but always keep in mind that we are working
with operators. Starting with the left hand side we get:

$$[AB,C] = ABC - CAB = (ABC - ACB) + (ACB - CAB) = A[B,C] + [A,C]B.$$

Just as a general remark: it might not appear that we did that much, and that
is true. A lot of these "show that" problems can be quite easy if you just look
at what you are striving for. On the other hand, the proof might be quite the
workload, like the previous problem.

b)
Q. Show that

$$[x^n,p] = i\hbar n x^{n-1}.$$

Sol:
We use the previous problem to solve this one:

$$[x^n,p] = [x\,x^{n-1},p] = x[x^{n-1},p] + [x,p]x^{n-1} = x[x^{n-1},p] + \underline{i\hbar x^{n-1}}$$

where I underlined a term with a purpose that is soon explained. Now we repeat
the procedure on the first commutator [x^{n−1}, p]:

$$x[x^{n-1},p] = x\big(x[x^{n-2},p] + [x,p]x^{n-2}\big) = x^2[x^{n-2},p] + \underline{i\hbar x^{n-1}}$$

observe that we once again got a term iℏx^{n−1}.
Now focus on the new commutator again. That commutator will obviously
survive the same procedure until it vanishes, which happens when the exponent
of x inside the commutator is 0. This takes n steps from the start, and each step
produces one term iℏx^{n−1}. This then means that:

$$[x^n,p] = x[x^{n-1},p] + i\hbar x^{n-1} = \{\text{recursively, } n \text{ times}\}
= 0 + \underbrace{i\hbar x^{n-1} + \dots + i\hbar x^{n-1}}_{n\ \text{terms}} = i\hbar n x^{n-1}.$$

c)
Q. Show more generally that

$$[f(x),p] = i\hbar\frac{df}{dx}$$

for any function f(x).

Sol:
Here we will need to use the explicit form of the momentum operator. Whenever
you use the explicit form it is best to have a test function to work with, otherwise
you will probably do something crazy. Let g(x) be such a test function.

$$[f(x),p]\,g(x) = f(x)\,\frac{\hbar}{i}\frac{d}{dx}g(x) - \frac{\hbar}{i}\frac{d}{dx}\big(f(x)g(x)\big)
= \frac{\hbar}{i}\Big(f(x)\frac{dg}{dx} - \frac{df}{dx}g(x) - f(x)\frac{dg}{dx}\Big)
= i\hbar\,\frac{df}{dx}\,g(x)$$

Now that the test function has fulfilled its purpose we can throw it away to
conclude:

$$[f(x),p] = i\hbar\frac{df}{dx}. \tag{46}$$
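Both commutator results can be verified symbolically with a test function, exactly as in the derivation above (a check, not part of the original solution; sympy assumed):

```python
import sympy as sp

x, hbar = sp.symbols("x hbar", positive=True)
n = 5                       # any positive integer exponent works here
g = sp.Function("g")(x)     # test function

def p(psi):
    """Momentum operator p = (hbar/i) d/dx acting on a wave function."""
    return (hbar / sp.I) * sp.diff(psi, x)

# [x^n, p] g = x^n p g - p(x^n g), which should equal i hbar n x^(n-1) g:
lhs = sp.expand(x**n * p(g) - p(x**n * g))
print(sp.simplify(lhs - sp.I * hbar * n * x**(n - 1) * g))    # 0

# The general rule [f(x), p] = i hbar f'(x), checked for f = sin(x):
f = sp.sin(x)
lhs2 = sp.expand(f * p(g) - p(f * g))
print(sp.simplify(lhs2 - sp.I * hbar * sp.cos(x) * g))        # 0
```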

3.14
Q. Prove the famous "(your name)" uncertainty principle, relating the
uncertainty in position (A = x) to the uncertainty in energy (B = H = p²/2m + V):

$$\sigma_x\,\sigma_H \ge \frac{\hbar}{2m}\,|\langle p\rangle|$$

For stationary states this doesn't tell you much. Why not?

Sol:
In section 3.5 in the textbook Griffiths derives the generalized uncertainty
principle:

$$\sigma_A^2\,\sigma_B^2 \ge \Big(\frac{1}{2i}\,\langle[\hat A,\hat B]\rangle\Big)^{\!2} \tag{47}$$

We now focus on the commutator above:

$$[\hat x,\hat H] = [\hat x,\,\hat p^2/2m + V(x)] = [\hat x,\hat p^2/2m] + \underbrace{[\hat x,V(x)]}_{0}$$

If you are not convinced that the commutator can be split like that, I encourage
you to do the algebra step by step. Anyhow, the last commutator is 0 because
the x-operator and a function of x (note: not an operator!) must commute.
Note now the following:

$$[\hat A,\hat B] = \hat A\hat B - \hat B\hat A = -(\hat B\hat A - \hat A\hat B) = -[\hat B,\hat A]$$

and

$$[\alpha\hat A,\hat B] = \alpha\hat A\hat B - \hat B\alpha\hat A = \alpha(\hat A\hat B - \hat B\hat A) = \alpha[\hat A,\hat B]$$

this is to say, a scalar commutes with everything and can be moved anywhere
we prefer.
Using the two properties we just showed we obtain:

$$[\hat x,\hat p^2/2m] = -\frac{1}{2m}[\hat p^2,\hat x]
= -\frac{1}{2m}\big(\hat p[\hat p,\hat x] + [\hat p,\hat x]\hat p\big)
= -\frac{1}{2m}\big(2(-i\hbar)\hat p\big) = i\hbar\,\frac{\hat p}{m}.$$

The expectation value of this is:

$$\langle[\hat x,\hat p^2/2m]\rangle = i\hbar\,\frac{\langle\hat p\rangle}{m}.$$

Inserting this into eq. (47) and taking the square root we obtain:

$$\sigma_x\,\sigma_H \ge \Big|\frac{1}{2i}\,i\hbar\,\frac{\langle p\rangle}{m}\Big| = \frac{\hbar}{2m}\,|\langle p\rangle|. \tag{48}$$

Note: we dropped the hat in the last equality because it is understood that p is a
hermitian operator that represents an observable, the momentum. If you like,
this is merely a question of notation.
For stationary states the spread in energy, σ_H, is zero. Also, every expectation
value is constant in time, so ⟨x⟩ is constant. Therefore ⟨p⟩ = m d⟨x⟩/dt = 0,
and eq. (48) then just says 0 ≥ 0.

3.15
Q. Show that two noncommuting operators cannot have a complete set of
common eigenfunctions. Hint: show that if P̂ and Q̂ have a complete set of
common eigenfunctions, then [P̂, Q̂]f = 0 for any function f in Hilbert space.

Sol:
Following the hint we consider two operators, P̂ and Q̂, that have a complete
set of common eigenfunctions. Since the set is complete, any function f(x) in
Hilbert space can be expressed as a linear combination of the set:

$$f(x) = \sum_i c_i e_i \tag{49}$$

where we denote the eigenfunctions by eᵢ. This is the same as expressing any
vector in terms of linear combinations of basis vectors, if the set of basis vectors
is complete.
Now, since {eᵢ} is a set of eigenfunctions for both P̂ and Q̂ we have:

$$\hat Pf = \hat P\sum_i c_ie_i = \sum_i p_ic_ie_i$$

and likewise

$$\hat Qf = \hat Q\sum_i c_ie_i = \sum_i q_ic_ie_i.$$

(Recall that if an operator acts on one of its eigenfunctions it gives back the
eigenfunction itself multiplied by a scalar.)
Now we look at the commutator of P̂ and Q̂:

$$[\hat P,\hat Q]f = \hat P\hat Qf - \hat Q\hat Pf
= \hat P\sum_i q_ic_ie_i - \hat Q\sum_i p_ic_ie_i
= \sum_i p_iq_ic_ie_i - \sum_i q_ip_ic_ie_i = 0 \tag{50}$$

Conclusion: two operators with a complete set of common eigenfunctions will
always commute, since we proved this for any function in Hilbert space. This
means that if two operators do not commute they cannot have a complete
set of common eigenfunctions.
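The contrapositive is easy to see with 2×2 matrices (an illustration, not part of the original solution): the Pauli matrices σ_x and σ_z do not commute, so they cannot share a complete eigenbasis; a matrix that does commute with σ_z (here a polynomial in σ_z) gives a commutator that annihilates every vector.

```python
import numpy as np

# Pauli matrices sigma_x and sigma_z do not commute:
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.allclose(sx @ sz, sz @ sx))     # expect False

# A polynomial in sigma_z commutes with sigma_z and shares its eigenbasis
# {(1,0), (0,1)}; the commutator applied to any vector is zero:
Q = sz @ sz + 2 * sz
v = np.array([0.3 + 1j, -2.0], dtype=complex)
comm = sz @ Q - Q @ sz
print(np.allclose(comm @ v, 0))          # expect True
```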


3.16
Q. Solve the following differential equation:

$$\Big(\frac{\hbar}{i}\frac{d}{dx} - \langle p\rangle\Big)\Psi = ia\,(x - \langle x\rangle)\,\Psi \tag{51}$$

Remember that expectation values are constants!

Sol:
If you have not done so already, read section 3.5.2 carefully; it is just one page,
and it explains why we even consider this differential equation to start with.
Now we reshuffle eq. (51) a bit:

$$\frac{\hbar}{i}\frac{d\Psi}{dx} = \big(\langle p\rangle + iax - ia\langle x\rangle\big)\Psi$$

Note that the equation is separable:

$$\frac{\hbar}{i}\frac{d\Psi}{\Psi} = \big(\langle p\rangle + iax - ia\langle x\rangle\big)\,dx$$

Integrating both sides gives:

$$\frac{\hbar}{i}\int\frac{d\Psi}{\Psi} = \int\big(\langle p\rangle - ia\langle x\rangle + iax\big)\,dx$$

$$\frac{\hbar}{i}\log\Psi = \langle p\rangle x - ia\langle x\rangle x + \frac{iax^2}{2} + C$$

$$\Psi(x) = e^{\frac{i}{\hbar}\left[\langle p\rangle x - ia\langle x\rangle x + \frac{iax^2}{2} + C\right]}$$

Now we focus on the exponent and try to simplify it:

$$\langle p\rangle x - ia\langle x\rangle x + \frac{iax^2}{2} + C
= \langle p\rangle x + C + ia\Big(\frac{x^2}{2} - \langle x\rangle x\Big)
= \langle p\rangle x + C + \frac{ia}{2}\big((x-\langle x\rangle)^2 - \langle x\rangle^2\big)
= \underbrace{C - \frac{ia\langle x\rangle^2}{2}}_{\text{constant}=C_1} + \langle p\rangle x + \frac{ia}{2}(x-\langle x\rangle)^2$$

Reinserting this back into the exponential we get:

$$\Psi(x) = e^{\frac{i}{\hbar}\left[C_1 + \langle p\rangle x + \frac{ia}{2}(x-\langle x\rangle)^2\right]}
= A\,e^{\frac{i\langle p\rangle x}{\hbar}}\,e^{-\frac{a(x-\langle x\rangle)^2}{2\hbar}}$$

This wave function is the minimum-uncertainty wave packet, the wave function
that gives the limit in the generalized uncertainty principle. Note that it is a
gaussian.

3.17
Q. Apply eq. (3.71) (the equation is a measure of how fast a system is changing):

$$\frac{d}{dt}\langle Q\rangle = \frac{i}{\hbar}\,\langle[\hat H,\hat Q]\rangle + \Big\langle\frac{\partial\hat Q}{\partial t}\Big\rangle$$

to the following special cases:
a) Q̂ = 1
b) Q̂ = Ĥ
c) Q̂ = x̂
d) Q̂ = p̂.
In each case, comment on the result, with particular reference to eqs. (1.27),
(1.33), (1.38) and the discussion on the conservation of energy following eq.
(2.39) in the textbook.
Sol:
a)

$$\frac{d}{dt}\langle 1\rangle = \frac{i}{\hbar}\,\underbrace{\langle[\hat H,1]\rangle}_{0} + \underbrace{\Big\langle\frac{\partial 1}{\partial t}\Big\rangle}_{0}$$

(a scalar commutes with everything), so the left hand side must be 0. Recall
that:

$$\langle 1\rangle = \langle\Psi|1|\Psi\rangle = \int|\Psi|^2\,dx \tag{52}$$

so this is exactly the result in eq. (1.27) in the textbook:

$$\frac{d}{dt}\underbrace{\int|\Psi(x,t)|^2\,dx}_{\langle 1\rangle} = 0.$$

b)

$$\frac{d}{dt}\langle H\rangle = \underbrace{\frac{i}{\hbar}\,\langle[\hat H,\hat H]\rangle}_{0} + \Big\langle\frac{\partial\hat H}{\partial t}\Big\rangle.$$

The first term on the right hand side is 0 because every operator commutes with
itself. Assuming that the hamiltonian is independent of time, the last term is
also 0. Thus we read:

$$\frac{d}{dt}\langle H\rangle = 0 \tag{53}$$

which is the statement of conservation of energy (in the textbook this is
presented right after eq. (2.39)).

c)

$$\frac{d}{dt}\langle x\rangle = \frac{i}{\hbar}\,\langle[\hat H,\hat x]\rangle + \underbrace{\Big\langle\frac{\partial\hat x}{\partial t}\Big\rangle}_{0}.$$

Recall that Ĥ = p̂²/2m + V(x), so:

$$[\hat H,\hat x] = \Big[\frac{\hat p^2}{2m} + V(x),\,\hat x\Big]
= \frac{1}{2m}[\hat p^2,\hat x]
= \frac{1}{2m}\big(\hat p[\hat p,\hat x] + [\hat p,\hat x]\hat p\big)
= -\frac{i\hbar\hat p}{m}$$

(see problem 3.13 for explanations of the steps). Inserting this we get:

$$\frac{d}{dt}\langle x\rangle = \frac{i}{\hbar}\Big(\!-\frac{i\hbar\langle\hat p\rangle}{m}\Big) = \frac{\langle\hat p\rangle}{m}$$

which is eq. (1.33) in the textbook.

d)

$$\frac{d}{dt}\langle p\rangle = \frac{i}{\hbar}\,\langle[\hat H,\hat p]\rangle + \underbrace{\Big\langle\frac{\partial\hat p}{\partial t}\Big\rangle}_{0}$$

where the last term is 0 since the momentum operator does not have an explicit
time-dependence. Using Ĥ = p̂²/2m + V(x) and the fact that operators (and
powers of them) commute with themselves, we now obtain:

$$\frac{d}{dt}\langle p\rangle = \frac{i}{\hbar}\,\langle[V(x),\hat p]\rangle.$$

Focusing on the commutator and using the result in problem 3.13 c) we obtain:

$$[V(x),\hat p] = i\hbar\,\frac{dV}{dx}$$

so re-inserting this we get:

$$\frac{d}{dt}\langle p\rangle = -\Big\langle\frac{dV}{dx}\Big\rangle$$

which is eq. (1.38) in the textbook, an instance of Ehrenfest's theorem (and
so is the previous part). This theorem states that expectation values follow
classical laws. Do you recognize the equation above? (Hint: −dV/dx = F.)

3.22
Q. Consider a three-dimensional vector space spanned by an orthonormal basis
|1⟩, |2⟩, |3⟩. The two kets |α⟩, |β⟩ are given by:

$$|\alpha\rangle = i|1\rangle - 2|2\rangle - i|3\rangle \quad\text{and}\quad |\beta\rangle = i|1\rangle + 2|3\rangle.$$

a)
Q. Construct ⟨α| and ⟨β| in terms of the dual basis vectors ⟨1|, ⟨2|, ⟨3|.
Sol:
This is very easy; just remember that the coefficients are now given by the
complex conjugates of the coefficients in front of the kets:

$$\langle\alpha| = -i\langle 1| - 2\langle 2| + i\langle 3| \tag{54}$$

$$\langle\beta| = -i\langle 1| + 2\langle 3| \tag{55}$$
b)
Q. Find ⟨α|β⟩ and ⟨β|α⟩ and confirm that ⟨β|α⟩ = ⟨α|β⟩*.
Sol:

$$\langle\alpha|\beta\rangle = \big(-i\langle 1| - 2\langle 2| + i\langle 3|\big)\big(i|1\rangle + 2|3\rangle\big)
= (-i)(i)\underbrace{\langle 1|1\rangle}_{1} + (-i)(2)\underbrace{\langle 1|3\rangle}_{0}
+ (-2)(i)\underbrace{\langle 2|1\rangle}_{0} + (-2)(2)\underbrace{\langle 2|3\rangle}_{0}
+ (i)(i)\underbrace{\langle 3|1\rangle}_{0} + (i)(2)\underbrace{\langle 3|3\rangle}_{1} = 1 + 2i$$

where the brakets are either 0 or 1 because the basis is orthonormal.
Now we do the same for ⟨β|α⟩:

$$\langle\beta|\alpha\rangle = \big(-i\langle 1| + 2\langle 3|\big)\big(i|1\rangle - 2|2\rangle - i|3\rangle\big)
= (-i)(i)\underbrace{\langle 1|1\rangle}_{1} + (-i)(-2)\underbrace{\langle 1|2\rangle}_{0}
+ (-i)(-i)\underbrace{\langle 1|3\rangle}_{0} + (2)(i)\underbrace{\langle 3|1\rangle}_{0}
+ (2)(-2)\underbrace{\langle 3|2\rangle}_{0} + (2)(-i)\underbrace{\langle 3|3\rangle}_{1} = 1 - 2i$$

and we indeed see that ⟨β|α⟩ = ⟨α|β⟩*, as it should.

c)
Q. Find all nine matrix elements of the operator A = |ih| in this basis and
then write A on matrix form. Is it hermitian?
Sol:
In a given basis we find the matrix elements of an operator by computing the
expectation values:

Aab = ha|A|bi
25

where a, b are labels of the basis vectors9 . Before explicitly showing the steps I
want to say something about the notation:
Aab = ha||ih||bi
will be written as
Aab = ha|ih|bi.

So let us now start computing the matrix elements, the first of which is shown
in detail:

A11 = ⟨1|α⟩⟨β|1⟩ = [⟨1|(i|1⟩ − 2|2⟩ − i|3⟩)][(−i⟨1| + 2⟨3|)|1⟩]

We see that the matrix elements become products of two brakets. Here I put
square brackets to separate the two, just for a clear overview, and now we
compute them one at a time. Remember that the basis is orthonormal; this saves
you some time because you only need to keep the products where the bra ⟨a| and
ket |b⟩ have the same index (that is, a = b). Thus:

A11 = [i⟨1|1⟩][−i⟨1|1⟩] = i · (−i) = 1

(also remember that the basis is normalized, so ⟨a|a⟩ = 1).
Now we compute A12:

A12 = ⟨1|α⟩⟨β|2⟩ = [⟨1|(i|1⟩ − 2|2⟩ − i|3⟩)][(−i⟨1| + 2⟨3|)|2⟩]

The first square bracket here is the same as before, since it is just ⟨1|α⟩,
which we computed above: ⟨1|α⟩ = i. Continuing with the algebra:

A12 = i[(−i⟨1| + 2⟨3|)|2⟩] = i · 0 = 0,

where the second square bracket is 0 because the bras are ⟨1| and ⟨3| but the
ket is |2⟩, so all these products vanish.
Now we do the same for A13:

A13 = ⟨1|α⟩⟨β|3⟩ = [⟨1|(i|1⟩ − 2|2⟩ − i|3⟩)][(−i⟨1| + 2⟨3|)|3⟩]
    = i[(−i⟨1| + 2⟨3|)|3⟩] = i · 2⟨3|3⟩ = 2i.

(It is also common to write the basis kets as |e_a⟩, |e_b⟩; don't be fooled by
the notation.)

Computing the rest is really just the same algebraic steps all over again.
Therefore we only list the results:

A21 = ⟨2|α⟩⟨β|1⟩ = (−2)(−i) = 2i    A22 = ⟨2|α⟩⟨β|2⟩ = (−2) · 0 = 0    A23 = ⟨2|α⟩⟨β|3⟩ = (−2) · 2 = −4
A31 = ⟨3|α⟩⟨β|1⟩ = (−i)(−i) = −1    A32 = ⟨3|α⟩⟨β|2⟩ = (−i) · 0 = 0    A33 = ⟨3|α⟩⟨β|3⟩ = (−i) · 2 = −2i

As you see, even if you have to compute nine matrix elements, a lot of them
have much in common. Now, writing this out in matrix form yields:

        (  1    0    2i )
A =     ( 2i    0   −4  ) .
        ( −1    0   −2i )

To check if this is hermitian we need to see if A = A†. In this case it is not
hard: you could just transpose it and then complex conjugate it to see that the
result is not A. But there is an even faster way: there cannot be any imaginary
numbers on the diagonal! (This is rooted in the definition of hermitian
operators: the diagonal stays the same under transposition, so a diagonal entry
with a nonzero imaginary part can never equal its own complex conjugate.) So if
you had a 100 × 100 matrix instead, you would definitely check the diagonal
first.
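Since Â = |α⟩⟨β| is an outer product, the whole matrix can be double-checked in one line of numpy (a side check, not part of the textbook solution):

```python
import numpy as np

alpha = np.array([1j, -2, -1j])   # components of |alpha>
beta = np.array([1j, 0, 2])       # components of |beta>

# A_ab = <a|alpha><beta|b> = alpha_a * conj(beta_b), i.e. an outer product
A = np.outer(alpha, beta.conj())
print(A)

# Hermitian would mean A equals its own conjugate transpose; here it does not
print(np.allclose(A, A.conj().T))
```

The imaginary entries on the diagonal (2i and −2i) already rule out hermiticity, exactly as argued above.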

3.23
Q. The Hamiltonian for a certain two-level system is:

Ĥ = ε(|1⟩⟨1| − |2⟩⟨2| + |1⟩⟨2| + |2⟩⟨1|)

where |1⟩, |2⟩ form an orthonormal basis and ε is a number with the dimension
of energy. Find its eigenvalues and eigenvectors (as linear combinations of |1⟩
and |2⟩). What is the matrix representation H of Ĥ in this basis?
Sol:
Technically you could answer all questions by first computing the matrix
elements of Ĥ and then solving the matrix eigenvalue equation. I will do this
after I first solve it in another way (the way the formulation of the problem
suggests we do it): using only Dirac notation.
We are asked to give the eigenvectors of the operator

Ĥ = ε(|1⟩⟨1| − |2⟩⟨2| + |1⟩⟨2| + |2⟩⟨1|),

which means that we want to solve the eigenvalue equation

Ĥ|ψ⟩ = λ|ψ⟩

for some scalars λ and non-zero eigenvectors |ψ⟩. To start with we define:

|ψ⟩ = c1|1⟩ + c2|2⟩,

which just means that we express the eigenvectors in the basis given.
Now we plug this into the eigenvalue equation:

Ĥ(c1|1⟩ + c2|2⟩) = λ(c1|1⟩ + c2|2⟩)

and insert the definition of our operator:

ε(|1⟩⟨1| − |2⟩⟨2| + |1⟩⟨2| + |2⟩⟨1|)(c1|1⟩ + c2|2⟩) = λ(c1|1⟩ + c2|2⟩),

where the left hand side can be simplified using ⟨1|1⟩ = ⟨2|2⟩ = 1 and
⟨1|2⟩ = ⟨2|1⟩ = 0:

ε(|1⟩⟨1| − |2⟩⟨2| + |1⟩⟨2| + |2⟩⟨1|)(c1|1⟩ + c2|2⟩)
    = ε(c1|1⟩⟨1|1⟩ − c1|2⟩⟨2|1⟩ + c1|1⟩⟨2|1⟩ + c1|2⟩⟨1|1⟩
       + c2|1⟩⟨1|2⟩ − c2|2⟩⟨2|2⟩ + c2|1⟩⟨2|2⟩ + c2|2⟩⟨1|2⟩),

and we see that our eigenvalue equation has now taken the following form:

ε(c1|1⟩ + c1|2⟩ − c2|2⟩ + c2|1⟩) = λ(c1|1⟩ + c2|2⟩),
which gives the following system of equations:

ε(c1 + c2) = λc1
ε(c1 − c2) = λc2

⟹   c2 = c1(λ/ε − 1)
     c2 = c1(λ/ε + 1)⁻¹

Equating the two rows we can find our eigenvalues:

λ/ε − 1 = (λ/ε + 1)⁻¹
(λ/ε)² − 1 = 1
λ² = 2ε²
λ = ±√2 ε.        (56)

Inserting the eigenvalues into our expression for the coefficients c1, c2 we
obtain (either of the two lines works; here I just picked the top one):

c2 = c1(λ/ε − 1) = c1(±√2 ε/ε − 1) = c1(±√2 − 1),

which gives us two eigenvectors:

|ψ±⟩ = c1[|1⟩ + (±√2 − 1)|2⟩].        (57)

If we also want to normalize the vectors we must choose

c1 = 1/√(1² + (±√2 − 1)²).

(If you don't see why, it is a good exercise to do this explicitly: you find c1
by computing ⟨ψ±|ψ±⟩ and requiring that this should be 1.)

Finally we find the matrix representation of our Hamiltonian:

H11 = ⟨1|Ĥ|1⟩ = ε
H12 = ⟨1|Ĥ|2⟩ = ε
H21 = ⟨2|Ĥ|1⟩ = ε
H22 = ⟨2|Ĥ|2⟩ = −ε

H = ε ( 1    1 )        (58)
      ( 1   −1 )

For details on how we computed these brakets the reader should consult problem
3.22; the steps are completely equivalent.
Now I will show how you could have solved it using the matrix equation itself
(the usual way to solve the problem). You start by finding the matrix
representation of the operator in question:

H = ε ( 1    1 )
      ( 1   −1 )

and then you set up the eigenvalue equation:

H|ψ⟩ = λ|ψ⟩
(H − λI)|ψ⟩ = 0,

where I is the identity matrix. The equation above has non-trivial solutions if
and only if

det(H − λI) = 0
(ε − λ)(−ε − λ) − ε² = 0
λ² = 2ε²
λ = ±√2 ε,

which are the eigenvalues of our operator.
To find the eigenvectors we now put in the eigenvalues one at a time into the
eigenvalue equation. For λ = +√2 ε:

H|ψ⟩ = √2 ε|ψ⟩

Insert the matrix form and represent |ψ⟩ by a general column vector:

ε ( 1    1 ) ( a )  =  √2 ε ( a )
  ( 1   −1 ) ( b )          ( b )

This gives us a system of equations:

a + b = √2 a
a − b = √2 b

⟹   b = a(√2 − 1)
     b = a(√2 + 1)⁻¹

which gives us the eigenvector (the two lines are equivalent):

|ψ+⟩ = a (    1   )        (59)
         ( √2 − 1 )

Now we could repeat the procedure for the negative eigenvalue, but the exact
same steps would repeat. Therefore we just state the result:

|ψ−⟩ = a (     1   )        (60)
         ( −√2 − 1 )

As we see, we got the same results as before. The two ways are actually
equivalent, but I wanted to solve the problem this way too because it is the
usual way to do it. If you find the matrix representation of an operator in a
basis and solve for the eigenvectors, you automatically find them in the same
basis as the matrix is represented in.
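Both routes are easy to cross-check numerically (here with ε set to 1; the eigenvalues simply scale with ε):

```python
import numpy as np

eps = 1.0
H = eps * np.array([[1.0, 1.0],
                    [1.0, -1.0]])

# eigh is the eigensolver for hermitian (here real symmetric) matrices;
# it returns the eigenvalues in ascending order
vals, vecs = np.linalg.eigh(H)
print(vals)   # approximately [-1.414..., 1.414...], i.e. -sqrt(2)*eps, +sqrt(2)*eps

# The eigenvector for +sqrt(2)*eps should be proportional to (1, sqrt(2) - 1);
# the ratio of its components is insensitive to the overall sign eigh picks
v = vecs[:, 1]
print(v[1] / v[0])   # approximately 0.414... = sqrt(2) - 1
```

This confirms both the eigenvalues ±√2 ε of eq. (56) and the eigenvector of eq. (59).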

3.24
Let Q̂ be an operator with a complete set of orthonormal eigenvectors:

Q̂|e_n⟩ = q_n|e_n⟩    (n = 1, 2, 3, ...).

Show that Q̂ can be written in terms of its spectral decomposition:

Q̂ = Σ_n q_n|e_n⟩⟨e_n|.

Hint: An operator is characterized by its action on all possible vectors, so
what you must show is that:

Q̂|ψ⟩ = ( Σ_n q_n|e_n⟩⟨e_n| ) |ψ⟩.

Sol:
The basis is complete, so all elements in the Hilbert space can be expressed as
linear combinations of the {e_n}. A general element |ψ⟩ can thus be written as:

|ψ⟩ = Σ_n c_n|e_n⟩.

What are the {c_n}? These are the coefficients that tell you how much of each
particular basis vector is contained in the total element. Since the basis is
orthonormal you can find the coefficients by considering brakets with the basis
elements:

⟨e_m|ψ⟩ = ⟨e_m| Σ_n c_n|e_n⟩ = Σ_n c_n⟨e_m|e_n⟩ = Σ_n c_n δ_mn = c_m,

where δ_mn is the Kronecker delta: δ_mn is 1 if n = m and 0 otherwise.


We will very soon use this, but first we apply the operator Q̂ to |ψ⟩:

Q̂|ψ⟩ = Q̂ Σ_n c_n|e_n⟩ = Σ_n c_n Q̂|e_n⟩ = Σ_n c_n q_n|e_n⟩.        (61)

Since the c_n are scalars we can put them in front of or behind a basis vector:

Q̂|ψ⟩ = Σ_n q_n c_n|e_n⟩ = Σ_n q_n|e_n⟩ c_n.

Then we substitute c_n = ⟨e_n|ψ⟩ to get:

Q̂|ψ⟩ = Σ_n q_n|e_n⟩⟨e_n|ψ⟩ = ( Σ_n q_n|e_n⟩⟨e_n| ) |ψ⟩.        (62)

Here we have pulled |ψ⟩ out of the sum, a move that you can always make if the
object you remove from or insert into a sum does not depend on the variable of
summation. This applies to |ψ⟩ because:

|ψ⟩ = Σ_n' c_n'|e_n'⟩,

that is, even before the outer summation over n begins, |ψ⟩ has already been
summed over with another summation variable. There is nothing that says that
the two by definition have to be equal! They do, however, run over the same
range of values. Make sure you understand this argument, since it is frequently
used in mathematical physics.
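The spectral decomposition is also easy to verify numerically: diagonalize a hermitian matrix (here randomly generated, standing in for Q̂) and rebuild it from Σ_n q_n|e_n⟩⟨e_n|. This is an illustration, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random hermitian matrix plays the role of Q
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q = (M + M.conj().T) / 2

q, e = np.linalg.eigh(Q)   # q[n]: eigenvalue, e[:, n]: orthonormal eigenvector

# Rebuild Q as sum_n q_n |e_n><e_n| and compare with the original
Q_rebuilt = sum(q[n] * np.outer(e[:, n], e[:, n].conj()) for n in range(4))
assert np.allclose(Q, Q_rebuilt)
```

Each term `np.outer(e[:, n], e[:, n].conj())` is the projector |e_n⟩⟨e_n| onto the n-th eigenvector.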

3.27
Q. Sequential measurements. An operator Â, representing observable A, has
two normalized eigenstates ψ1 and ψ2, with eigenvalues a1 and a2 respectively.
Operator B̂, representing the observable B, has two normalized eigenstates φ1
and φ2, with eigenvalues b1 and b2. The eigenstates are related by:

ψ1 = (3φ1 + 4φ2)/5   and   ψ2 = (4φ1 − 3φ2)/5.        (63)

a)
Q. Observable A is measured and the value a1 is obtained. What is the state of
the system (immediately) after this measurement?
Sol:
Some background information:

- Observables correspond to hermitian operators. The hermitian operators have
  eigenstates {e_i} that span the Hilbert space, so any wave function can be
  expressed as a linear combination of these eigenstates:

  Ψ = Σ_{i=1}^{N} c_i e_i.        (64)

- To each eigenstate e_i there is an eigenvalue q_i. The observed quantity is
  then one of these eigenvalues, and the probability of getting q_i is given by
  |c_i|². If the eigenstates are non-degenerate you then automatically know
  what state the measurement has put the system in.

- In the case of degenerate states there are shared eigenvalues. If you get
  such a value you can only say that the state is some linear combination of
  the degenerate states.

Before the measurement is done there is only a probability that the object will
be found in a certain eigenstate. When you perform the measurement and find it
in a given state (by measuring the related observable) it will stay in that
state until the wave function evolves. Since this evolution is not immediate,
the answer is ψ1.

b)
Q. If B is measured now, what are the possible results and what are their probabilities?
Sol:
The particle is in the state ψ1, which can be expressed as a linear combination
of the eigenvectors of B̂:

ψ1 = (3φ1 + 4φ2)/5        (65)

so the possible eigenstates it can be in after the measurement are φ1 and φ2.
The magnitudes squared of the coefficients in front of these states in the
current state give the probability of finding the particle in the respective
state. So the answer is:

Result   Probability
b1       9/25
b2       16/25

c)
Q. Right after the measurement of B, A is measured again. What is the probability of getting a1 ? (Note that the answer would be quite different if you were
told what the measurement of B in the previous problem gave.)
Sol:
First we must express the eigenstates of observable B in terms of the
eigenstates of observable A. This is done by algebraic manipulation of eq. (63):

φ1 = (3ψ1 + 4ψ2)/5   and   φ2 = (4ψ1 − 3ψ2)/5.        (66)

If we had known what the measurement in the previous problem yielded, that is,
whether we got b1 or b2, we would have known the state of the object. Say that
the previous measurement yielded b1. Then the probability of getting a1 would
have been 9/25. If the measurement instead had given b2, the probability of
getting a1 would have been 16/25 (just square the coefficients in front of ψ1
in the equation above).

Now we don't know what the previous measurement gave, so we must compute the
probability of getting a1 as follows.

With probability 9/25 we previously got b1. If we have this state now, the
probability of getting a1 is also 9/25 (as we see in eq. (66)). So the combined
probability of getting a1 in this way is the product of these two probabilities:

P(b1, a1) = (9/25) · (9/25).

With probability 16/25 we previously got b2. If we have this state now, the
probability of getting a1 is 16/25 (as we see in eq. (66)). So the combined
probability of getting a1 in this way is the product of these two probabilities:

P(b2, a1) = (16/25) · (16/25).

The total probability of getting a1 is then the sum of these probabilities:

P_tot(a1) = P(b1, a1) + P(b2, a1) = 337/625 ≈ 0.539.
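The arithmetic can be checked exactly with Python's fractions module (a side check on the numbers from the solution above):

```python
from fractions import Fraction

p_b1 = Fraction(9, 25)    # probability that the B measurement gave b1
p_b2 = Fraction(16, 25)   # probability that the B measurement gave b2

# P(a1 | b1) = 9/25 and P(a1 | b2) = 16/25, read off from eq. (66)
p_a1 = p_b1 * Fraction(9, 25) + p_b2 * Fraction(16, 25)
print(p_a1, float(p_a1))   # 337/625 0.5392
```

Using exact rationals avoids any rounding in the intermediate products.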

3.31
Virial theorem. Apply:

d⟨Q⟩/dt = (i/ħ)⟨[Ĥ, Q̂]⟩ + ⟨∂Q̂/∂t⟩

to show that:

d⟨xp⟩/dt = 2⟨T⟩ − ⟨x dV/dx⟩.        (67)

Sol:
Let us start by noting that there is no explicit time dependence in the
operator itself: Q̂ = x̂p̂ = x (ħ/i) d/dx, so:

⟨∂(x̂p̂)/∂t⟩ = 0

and thus we now need to consider the following equation:

d⟨xp⟩/dt = (i/ħ)⟨[Ĥ, x̂p̂]⟩.
In problem 3.13 we proved the commutator rule:

[ÂB̂, Ĉ] = Â[B̂, Ĉ] + [Â, Ĉ]B̂

and applying that to the commutator above we get:

[Ĥ, x̂p̂] = −[x̂p̂, Ĥ] = −(x̂[p̂, Ĥ] + [x̂, Ĥ]p̂) = x̂[Ĥ, p̂] + [Ĥ, x̂]p̂.
Both commutators have been worked out in problem 3.17:

[Ĥ, x̂] = −(iħ/m)p̂

[Ĥ, p̂] = [p̂²/2m + V(x), p̂] = [p̂²/2m, p̂] + [V(x), p̂] = 0 + iħ dV/dx = iħ dV/dx

(so the extra steps in the last row are not really necessary, but I thought
they might help in understanding the result).
Inserting this into our relation we get:

d⟨xp⟩/dt = (i/ħ)⟨[Ĥ, x̂p̂]⟩ = (i/ħ)( iħ⟨x dV/dx⟩ − (iħ/m)⟨p²⟩ )
         = ⟨p²⟩/m − ⟨x dV/dx⟩ = 2⟨T⟩ − ⟨x dV/dx⟩,

which is what we wanted to show. Note here that I dropped the hats on the
operators: the hat on x̂ because it acts on a function of x and thus gives back
just the variable x, and the hat on p̂ because one usually writes ⟨Q⟩ rather
than ⟨Q̂⟩, that is, one usually drops the hat inside expectation values.

Q. The relation you just proved is called the virial theorem. Use it to prove
that ⟨T⟩ = ⟨V⟩ for stationary states of the harmonic oscillator, and check that
this is consistent with the results you got in problems 2.11 and 2.12.
Sol:
For the harmonic oscillator the potential is given by:

V(x) = (1/2)mω²x²

so inserting this into eq. (67) gives:

d⟨xp⟩/dt = 2⟨T⟩ − ⟨mω²x²⟩ = 2⟨T⟩ − 2⟨V⟩.        (68)

For any stationary state all expectation values are constant in time, so the
left-hand side above is 0. Thus:

⟨T⟩ = ⟨V⟩        (69)

and we have just shown what we were asked to show. This is consistent with what
was found in problems 2.11 c) and 2.12, where ⟨T⟩ and ⟨V⟩ were explicitly
computed.
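For the ground state this can also be checked numerically. The sketch below works in natural units (m = ω = ħ = 1, an assumption made only for the check), where the exact answer is ⟨T⟩ = ⟨V⟩ = ħω/4 = 1/4:

```python
import numpy as np

# Natural units (m = omega = hbar = 1): the exact answer is <T> = <V> = 1/4
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)   # normalized ground state

V_avg = np.sum(psi**2 * 0.5 * x**2) * dx
dpsi = np.gradient(psi, dx)
T_avg = np.sum(0.5 * dpsi**2) * dx       # <T> = (1/2) * integral of (psi')^2

print(T_avg, V_avg)   # both approximately 0.25
```

Here ⟨T⟩ has been rewritten via integration by parts as (1/2)∫(ψ')² dx, which is convenient on a grid; the small residual difference between the two numbers comes from the finite-difference derivative.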