
Lecture 10: Time-dependent perturbation theory (10/20/2005)

We want to start one of the main overarching themes of this course today:
time-dependence. The detailed question we want to ask is how probable it is
for a system to jump from one energy eigenstate to another energy eigenstate
when a perturbation is added to the Hamiltonian.
Additional reading if you wish: Griffiths ch. 9.1, Feynman vol. 3 ch. 9-11

Time-dependent and time-independent Schrödinger equation: a review
What is the relationship between these two equations? Recall that the general time-dependent equation is
$$H\Psi = i\hbar\,\frac{\partial\Psi}{\partial t}.$$

If the Hamiltonian $H$ is independent of time, one may find particular solutions of this equation by separation of variables, i.e. by assuming $\Psi$ in the form
$$\Psi(\vec r,t) = \psi(\vec r)\,f(t).$$
Substitute it into the equation to get
$$H\psi(\vec r)\,f(t) = \psi(\vec r)\,i\hbar\,\frac{\partial f(t)}{\partial t}
\quad\Longrightarrow\quad
\frac{H\psi(\vec r)}{\psi(\vec r)} = i\hbar\,\frac{1}{f}\,\frac{\partial f}{\partial t}.$$

The left-hand side is only a function of $\vec r$ and the right-hand side only depends on $t$. The only way this can happen is if both sides are equal to a constant $E$, which we call the energy. Therefore we obtain two equations. One of them determines $f$ to be a simple phase,
$$i\hbar\,\frac{1}{f}\,\frac{\partial f}{\partial t} = E
\quad\Longrightarrow\quad
f(t) = C\,\exp(Et/i\hbar),$$

and the other is the time-independent equation

$$H\psi(\vec r) = E\psi(\vec r).$$

The last equation is telling us that $\psi(\vec r)$ is an energy eigenstate, and the reasoning above guarantees that such initial states will not change in time except for the trivial phase $f(t)$ that does not affect any probabilities. The most general state may be expressed as a combination of eigenstates. Because we know their dependence on time and because the Schrödinger equation is linear, we may write down the most general solution of the time-dependent equation as
$$\Psi(\vec r,t) = \sum_n c_n\,\psi_n(\vec r)\,e^{E_n t/i\hbar}.$$

Such a combination depends on time non-trivially. But the probability for the energy to be $E_n$ remains independent of time, namely $|c_n|^2$. The only way a transition between different energy eigenstates may occur is if the Hamiltonian is time-dependent.
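As a quick numerical sanity check of this claim, the sketch below evolves an arbitrary superposition under a time-independent Hamiltonian and verifies that the probability of each energy stays $|c_n|^2$. Working in units where $\hbar = 1$ and using a randomly chosen $3\times 3$ Hermitian matrix are assumptions of the example, not part of the lecture.

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (an assumption of this sketch)

# a randomly chosen 3x3 Hermitian "Hamiltonian" (also an assumption)
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2

E, V = np.linalg.eigh(H)            # columns of V are the eigenstates psi_n
psi0 = np.array([1.0, 1.0j, 0.5])   # an arbitrary initial superposition
psi0 /= np.linalg.norm(psi0)
c = V.conj().T @ psi0               # expansion coefficients c_n at t = 0

for t in (0.0, 1.3, 7.9):
    # psi(t) = sum_n c_n e^{E_n t / i hbar} psi_n
    psi_t = V @ (c * np.exp(E * t / (1j * hbar)))
    probs = np.abs(V.conj().T @ psi_t) ** 2
    assert np.allclose(probs, np.abs(c) ** 2)   # |c_n|^2 never changes
```

Each coefficient only picks up a phase, so its squared modulus is frozen, exactly as the argument above says.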

Two-level systems
One-dimensional Hilbert spaces are too trivial because the probability that the system is in state A remains 100 percent; there is no information in such a Universe. The next simplest models of quantum mechanics have two-dimensional Hilbert spaces, and Richard Feynman was among those who loved them, which is not the only reason why you should look at his lectures on physics.
Imagine that a two-level system has states $|a\rangle$ and $|b\rangle$ whose energies are $E_a$ and $E_b$. Let us also use the notation
$$|a\rangle = |a(0)\rangle, \qquad |b\rangle = |b(0)\rangle$$
indicating that the states are what they should be at t = 0. We have seen
on the previous page how the states evolve with time. Two solutions of the
Schrödinger equation are
$$|a(t)\rangle = |a\rangle\,e^{E_a t/i\hbar}, \qquad |b(t)\rangle = |b\rangle\,e^{E_b t/i\hbar}$$
which means that the most general solution of the same equation is
$$|\psi(t)\rangle = c_a\,|a\rangle\,e^{E_a t/i\hbar} + c_b\,|b\rangle\,e^{E_b t/i\hbar}.$$
The probability for $|\psi\rangle$ to be found in the state $|a\rangle$ is
$$P_a = |\langle a|\psi(t)\rangle|^2 = |c_a|^2\,|e^{E_a t/i\hbar}|^2 = |c_a|^2$$
because the other term in the inner product is proportional to $\langle a|b\rangle = 0$ and drops out, and the absolute value of the phase equals one. Similarly, the probability $P_b = |c_b|^2$, and we physically normalize the total probability to be
$$|c_a|^2 + |c_b|^2 = 1.$$

The probabilities Pa , Pb are constant but we may also ask what is the probability that the system is in a more general state which is a linear combination
of $a, b$. In that case, we find oscillations. To be very specific, define the following combinations:
$$|\psi_+\rangle = \frac{1}{\sqrt 2}\,(|a\rangle + |b\rangle), \qquad
|\psi_-\rangle = \frac{1}{\sqrt 2}\,(|a\rangle - |b\rangle).$$

Again, extend these states at $t=0$ to general solutions of the Schrödinger equation:
$$|\psi_\pm(t)\rangle = \frac{1}{\sqrt 2}\left(e^{E_a t/i\hbar}\,|a\rangle \pm e^{E_b t/i\hbar}\,|b\rangle\right).$$
Let us now substitute the inverse relations between $\psi_+,\psi_-$ and $a,b$, which are trivially
$$|a\rangle = \frac{1}{\sqrt 2}\,(|\psi_+\rangle + |\psi_-\rangle), \qquad
|b\rangle = \frac{1}{\sqrt 2}\,(|\psi_+\rangle - |\psi_-\rangle),$$

into our time-dependent solution to see that

$$|\psi_\pm(t)\rangle = \frac{1}{2}\left[e^{E_a t/i\hbar}\,(|\psi_+\rangle + |\psi_-\rangle) \pm e^{E_b t/i\hbar}\,(|\psi_+\rangle - |\psi_-\rangle)\right].$$

Let us now assume that we observe $|\psi_+\rangle$ at $t = 0$. What is the probability that it will be $|\psi_-\rangle$ at a later time? By realizing that $\langle\psi_-|\psi_+\rangle = 0$, you see from the formula above that the amplitude in front of $|\psi_-\rangle$ is simply
$$\langle\psi_-|\psi_+(t)\rangle = \frac{1}{2}\left(e^{E_a t/i\hbar} - e^{E_b t/i\hbar}\right)
= e^{(E_a+E_b)t/2i\hbar}\;\frac{e^{(E_a-E_b)t/2i\hbar} - e^{-(E_a-E_b)t/2i\hbar}}{2}
= e^{(E_a+E_b)t/2i\hbar}\,(-i)\sin(\Delta E\,t/2\hbar)$$
where we intelligently factorized out the average phase to get the sine and where we used $\Delta E = E_a - E_b$. We are interested in the probability only, so the phases $(-i)$, much like the exponential, are irrelevant, and we see that
$$P(+\to -) = \sin^2(\Delta E\,t/2\hbar).$$
The very same calculation with a relative plus sign gives
$$P(+\to +) = \cos^2(\Delta E\,t/2\hbar)$$
and these two probabilities sum up to one, as required by conservation of probability. At any rate, you see that the probabilities to be in the $(+)$ state or the $(-)$ state oscillate with the angular frequency
$$\omega = \frac{\Delta E}{2\hbar}$$
although the probabilities to be in the states $(a)$ or $(b)$ are constant in time.
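The oscillation formula is easy to verify numerically. A minimal sketch ($\hbar = 1$ and the energies below are arbitrary choices of the example) computes the amplitude $\langle\psi_-|\psi_+(t)\rangle = (e^{E_a t/i\hbar} - e^{E_b t/i\hbar})/2$ directly and compares its squared modulus with $\sin^2(\Delta E\,t/2\hbar)$:

```python
import numpy as np

hbar = 1.0            # units with hbar = 1 (an assumption of the sketch)
Ea, Eb = 1.7, 0.4     # arbitrary energies
dE = Ea - Eb

def amp_minus_from_plus(t):
    # <psi_-|psi_+(t)> = (e^{Ea t/i hbar} - e^{Eb t/i hbar}) / 2
    return (np.exp(Ea * t / (1j * hbar)) - np.exp(Eb * t / (1j * hbar))) / 2

t = np.linspace(0.0, 20.0, 401)
P_minus = np.abs(amp_minus_from_plus(t)) ** 2

# probability oscillates exactly as sin^2(dE t / 2 hbar)
assert np.allclose(P_minus, np.sin(dE * t / (2 * hbar)) ** 2)
```

The check passes for any choice of $E_a, E_b$: only the difference $\Delta E$ enters the probability, while the average energy sits in the overall phase.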

Examples of two-level oscillations

This is a simple enough mathematical system but there are already many
examples of quantum mechanical systems in physics that follow exactly these
rules:
An electron precessing in a magnetic field. Imagine the field $\vec B = B\hat z$ in the $z$-direction. Because $\Delta E = 2\mu B$, the frequency of oscillations of the spin in the $xy$-plane is
$$\omega = \frac{|\Delta E|}{2\hbar} = \frac{\mu B}{\hbar}.$$
In particle physics we have kaons or K mesons that are made of one down-quark and one strange-antiquark ($K^0$) or vice versa ($\bar K^0$). I hope you will forgive me if I interchanged the particle and its antiparticle. At any rate, fast processes in physics usually prepare either a $K^0$ or a $\bar K^0$, but because of the weak interactions, the exact energy eigenstates are $(|K^0\rangle \pm |\bar K^0\rangle)/\sqrt 2$, the long-lived and short-lived K-mesons, respectively. A copy of $K^0$ will oscillate into its antiparticle in roughly 0.6 nanoseconds of proper time.
Another example in particle physics involves neutrinos. The neutrinos are neutral and light partners of the electron, muon, and tau. The latter three charged particles are energy eigenstates but their exact partners are not; the true eigenstates are some linear combinations. This implies that if we prepare one of the $\nu_e, \nu_\mu, \nu_\tau$ neutrinos, there will be nonzero probabilities that they will oscillate into each other. In the case of neutrinos, it is really the energies that influence the oscillation frequencies, not the rest masses themselves. In fact, we measure these oscillations and they only determine the differences $\Delta(m^2)$, not the masses themselves. The solar neutrino anomaly (we observe fewer $\nu_e$ flowing from the Sun than what we would expect according to our standard Solar model) is explained by one kind of oscillations, while the atmospheric neutrino anomalies, involving neutrinos with energies around 1 GeV, are explained by a different oscillating pair.
Feynman also liked the ammonia molecule. The nitrogen in NH3 can classically be either above the plane with the H3 triangle, or below it. Quantum mechanically, there is a certain amplitude that it may tunnel from one to the other, an off-diagonal element of the Hamiltonian. This term implies that the energy eigenstates are actually
$$(|\uparrow\rangle \pm |\downarrow\rangle)/\sqrt 2$$
where the arrow indicates in which direction from the hydrogen plane you find the nitrogen. The up-and-down states oscillate with a frequency around 24 GHz.

Time-dependent Hamiltonians
We want to look at the cases where the Hamiltonian depends on time, but to avoid completely uncharted territories, let us assume that the Hamiltonian is a small deformation of a time-independent Hamiltonian $\hat H_0$:
$$\hat H(t) = \hat H_0 + \hat H'(t).$$
Recall that we have found the general solution of the Schrödinger equation for the Hamiltonian $H_0$:
$$|\psi(t)\rangle = c_a(t)\,|a\rangle\,e^{E_a t/i\hbar} + c_b(t)\,|b\rangle\,e^{E_b t/i\hbar}.$$
The coefficients $c_a$ and $c_b$ used to be time-independent constants, but now we allow them to depend on time (slightly) so that this wavefunction has a chance to solve the time-dependent Schrödinger equation with the time-dependent Hamiltonian:
$$\left(H_0 + H'(t)\right)|\psi(t)\rangle = i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle.$$

What happens if we plug our Ansatz into the equation? The terms involving $H_0$ will cancel against the terms where $c_a(t)$ and $c_b(t)$ enter without their time-derivatives but in which the phase is differentiated, because of the $H_0$ Schrödinger equation, and the remaining new terms give us
$$c_a\,e^{E_a t/i\hbar}\,H'(t)|a\rangle + c_b\,e^{E_b t/i\hbar}\,H'(t)|b\rangle
= i\hbar\,\frac{\partial c_a}{\partial t}\,e^{E_a t/i\hbar}\,|a\rangle
+ i\hbar\,\frac{\partial c_b}{\partial t}\,e^{E_b t/i\hbar}\,|b\rangle.$$
Of course, this equation may be multiplied by $\langle a|$ and $\langle b|$ to get two independent equations. After dividing by the overall phase and identifying $\Delta E = E_a - E_b$ in the remaining phase, the equations read
$$\frac{\partial c_a}{\partial t} = \frac{1}{i\hbar}\left(c_a\,H'_{aa} + c_b\,e^{+i\Delta E\,t/\hbar}\,H'_{ab}\right),$$
$$\frac{\partial c_b}{\partial t} = \frac{1}{i\hbar}\left(c_b\,H'_{bb} + c_a\,e^{-i\Delta E\,t/\hbar}\,H'_{ba}\right),$$
where $H'_{ab} = \langle a|H'|b\rangle$ (and similarly for other combinations of $a,b$) are the matrix elements. It often happens that $H'_{aa} = H'_{bb} = 0$ and therefore the first term on the right-hand side drops out.
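Given an explicit $H'_{ab}(t)$, these two coupled equations can be integrated numerically. The sketch below assumes $\hbar = 1$, $H'_{aa} = H'_{bb} = 0$, a constant coupling $H'_{ab}$, and a hand-rolled fourth-order Runge-Kutta step (all choices of the example), and checks that the exact evolution conserves $|c_a|^2 + |c_b|^2$:

```python
import numpy as np

# dc_a/dt = (1/i hbar) c_b e^{+i dE t/hbar} H'_ab
# dc_b/dt = (1/i hbar) c_a e^{-i dE t/hbar} H'_ba,  H'_ba = conj(H'_ab)
hbar, dE = 1.0, 1.0
Hab = 0.2                     # assumed constant coupling H'_ab

def rhs(t, c):
    ca, cb = c
    dca = cb * np.exp(+1j * dE * t / hbar) * Hab / (1j * hbar)
    dcb = ca * np.exp(-1j * dE * t / hbar) * np.conj(Hab) / (1j * hbar)
    return np.array([dca, dcb])

c = np.array([1.0 + 0j, 0.0 + 0j])   # start in |a>
t, dt = 0.0, 0.001
for _ in range(20000):               # evolve to t = 20 with RK4 steps
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt * k1 / 2)
    k3 = rhs(t + dt / 2, c + dt * k2 / 2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

# the exact evolution is unitary: total probability is conserved,
# even though probability has leaked from |a> to |b>
assert abs(np.abs(c[0]) ** 2 + np.abs(c[1]) ** 2 - 1.0) < 1e-8
assert np.abs(c[1]) > 0.05
```

This exact conservation is worth keeping in mind, because the iterative approximations introduced next do *not* conserve total probability order by order.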

Iterations
Of course, whenever the latest equations may be solved exactly, they also give us an exact solution of the original problem. However, this is often impossible and we need to apply approximation techniques. The simplest method of this kind is the iterative method. We start with $c^{(0)}_{a,b}$ being constants, i.e. solutions of the unperturbed problem, and we insert these $c^{(0)}$ into the right-hand side of the equations above. By doing so, we find the solutions $c^{(1)}_{a,b}(t) \approx c_{a,b}(t)$. These resulting functions may again be used for the right-hand side of the same equations, and the new solutions we obtain are $c^{(2)}_{a,b}(t) \approx c_{a,b}(t)$. In principle, when we repeat this step infinitely many times, the functions $c^{(n)}_{a,b}(t)$ will converge to the exact solutions as $n$ goes to infinity. In practice, we only repeat our step several times to achieve a desired level of accuracy.
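The iteration step can be written down concretely on a time grid: insert $c^{(n)}$ into the right-hand side, integrate, and obtain $c^{(n+1)}$. A sketch for the two-level case with $H'_{aa} = H'_{bb} = 0$ ($\hbar = 1$, a constant coupling, and a trapezoidal integrator are assumptions of the example) shows the successive corrections shrinking:

```python
import numpy as np

hbar, dE, Hab = 1.0, 1.0, 0.1       # assumed units and constant coupling
t = np.linspace(0.0, 5.0, 2001)

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t'), from 0 to each t."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))
    return out

def iterate(ca, cb):
    # one step of the iterative method: c^{(n)} -> c^{(n+1)}
    new_ca = 1.0 + cumtrapz(cb * np.exp(+1j * dE * t / hbar) * Hab, t) / (1j * hbar)
    new_cb = cumtrapz(ca * np.exp(-1j * dE * t / hbar) * np.conj(Hab), t) / (1j * hbar)
    return new_ca, new_cb

# c^{(0)}: the unperturbed (constant) solution, starting in |a>
ca = np.ones_like(t, dtype=complex)
cb = np.zeros_like(t, dtype=complex)

diffs = []
for _ in range(6):
    new_ca, new_cb = iterate(ca, cb)
    diffs.append(max(np.max(np.abs(new_ca - ca)), np.max(np.abs(new_cb - cb))))
    ca, cb = new_ca, new_cb

# corrections shrink as the order grows -> the iteration converges
assert diffs[0] > 0.01 and diffs[-1] < 1e-3
```

For a weak coupling the $n$-th correction is suppressed roughly like $(|H'_{ab}|\,t/\hbar)^n/n!$, which is why a few iterations usually suffice.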

An off-diagonal example
Consider the perturbation with $H'_{aa} = H'_{bb} = 0$ and an arbitrary nonzero complex $H'_{ab}(t) = \overline{H'_{ba}(t)}$. Imagine that you start with the zeroth order approximation of the solution
$$c^{(0)}_a = 1, \qquad c^{(0)}_b = 0.$$
If you substitute it into our equations, you will see that
$$c^{(1)}_a(t) = 1$$
once again, because the right-hand side depended on $c^{(0)}_b$ which equals zero. However, the equation for the evolution of $c_b(t)$ is slightly non-trivial,
$$\frac{\partial c_b}{\partial t} = \frac{1}{i\hbar}\,e^{-i\Delta E\,t/\hbar}\,H'_{ba},$$
but it can be solved by a simple integration:
$$c^{(1)}_b(t) = \frac{1}{i\hbar}\int_0^t dt'\,H'_{ba}(t')\,e^{-i\Delta E\,t'/\hbar}.$$
Note that our results do not satisfy $|c^{(1)}_a|^2 + |c^{(1)}_b|^2 = 1$ because the iterative technique does not guarantee that all good features survive in the intermediate results. Of course, the rule that the total probability equals one will be satisfied by the exact results $\lim_{n\to\infty} c^{(n)}_{a,b}$. We are often satisfied with the first iterative solutions $c^{(1)}_{a,b}$.
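For the special case of a constant coupling $H'_{ba}$ switched on at $t = 0$ (an assumption of this example, with $\hbar = 1$), the first-order integral can be done in closed form, $c^{(1)}_b(t) = -(H'_{ba}/\Delta E)\,(1 - e^{-i\Delta E\,t/\hbar})$, and checked numerically:

```python
import numpy as np

hbar, dE, Hba = 1.0, 1.3, 0.05   # assumed units, splitting, and coupling
t_final = 10.0

# c_b^{(1)}(t) = (1/i hbar) \int_0^t dt' H'_ba e^{-i dE t'/hbar}
tp = np.linspace(0.0, t_final, 200001)
integrand = Hba * np.exp(-1j * dE * tp / hbar)
cb1 = np.sum((integrand[1:] + integrand[:-1]) / 2) * (tp[1] - tp[0]) / (1j * hbar)

# closed form of the same integral
cb1_exact = -(Hba / dE) * (1 - np.exp(-1j * dE * t_final / hbar))
assert abs(cb1 - cb1_exact) < 1e-6
```

Taking the squared modulus of the closed form gives $|c^{(1)}_b(t)|^2 = (4|H'_{ba}|^2/\Delta E^2)\,\sin^2(\Delta E\,t/2\hbar)$: the same two-level oscillation pattern as before, now with an amplitude suppressed by the small coupling.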
Technically, we would prefer to normalize $c^{(1)}_a$ not to be equal to one but rather to $(1 - |c^{(1)}_b|^2)^{1/2}$, and indeed, such an improvement will be generated as an expansion once we calculate the second order iterative solution $c^{(2)}_a$. Note that this new iteration gives us
$$c^{(2)}_a(t) = c^{(1)}_a + \frac{1}{i\hbar}\int_0^t dt'\,c^{(1)}_b(t')\,e^{+i\Delta E\,t'/\hbar}\,H'_{ab}(t'), \qquad
c^{(2)}_b(t) = c^{(1)}_b(t).$$

Just like the first iteration did not improve $c_a(t)$ at all, the second iteration does not improve $c_b(t)$, because the $c^{(1)}_a$ appearing on the right-hand side of the equation for $c^{(2)}_b(t)$ equals one.

Although our formalism directly applied to two-level systems only, it is straightforward to generalize the technique to multilevel systems. Consider a system with $n$ eigenstates $|k\rangle$ of $H_0$ whose energies are $E_k$. The solutions of the unperturbed equations are
$$|\psi(t)\rangle = \sum_k c_k(t)\,e^{E_k t/i\hbar}\,|k\rangle$$

for constant $c_k(t)$, but we allow their time-dependence in order to solve the Schrödinger equation with the full Hamiltonian. By plugging our $|\psi(t)\rangle$ into this equation and taking the inner product with $\langle m|$, we obtain
$$\frac{\partial c_m}{\partial t} = \frac{1}{i\hbar}\sum_k c_k(t)\,e^{+iE_{mk}t/\hbar}\,H'_{mk}(t), \qquad E_{mk} \equiv E_m - E_k.$$
Its first iterative solution is again
$$c^{(1)}_k(t) = c_k(0) + \frac{1}{i\hbar}\sum_n \int_0^t dt'\,c_n(0)\,e^{+iE_{kn}t'/\hbar}\,H'_{kn}(t').$$

Factorized perturbations and pulses

Such a general integral may look obscure. In order to understand the physical
content better, assume that the perturbation may be factorized to
$$\hat H'(\vec r, t) = \hat h'(\vec r)\,f(t).$$
We will use the matrix elements $h'_{kl} = \langle k|h'(\vec r)|l\rangle$, omit the hats, and define $\omega_{kl} \equiv E_{kl}/\hbar$. Let us also shift the time $t$ by an additive constant and assume that we prepared the system in the initial state $|l\rangle$ at $t = -\infty$ instead of $t = 0$. What is then the probability that we obtain a different state $|k\rangle$ at $t = +\infty$?
The probability is given by
$$|c^{(1)}_k(\infty)|^2 = \frac{|h'_{kl}|^2}{\hbar^2}\left|\int_{-\infty}^{+\infty} dt\;e^{i\omega_{kl}t}\,f(t)\right|^2.$$

Well, the probability of the transition is proportional to the squared absolute value of the matrix element of the reduced Hamiltonian $h'(\vec r)$, but there is also some dependence on the time-dependent profile $f(t)$. Note that the integrand includes an oscillating phase, and the contributions of $f(t)$ will largely cancel if the pulse in $f(t)$ is spread over time intervals $\Delta t \gg 1/\omega_{kl}$. On the other hand, if the pulse is concentrated into a very short time interval $\Delta t \ll 1/\omega_{kl}$, the influence of the perturbation will be very efficient because almost no cancellations will occur. To see this quantitatively, consider, for example,
$$f(t) = \begin{cases} 1/T, & 0 \le t \le T \\ 0, & \text{otherwise.} \end{cases}$$

$$\left|\int dt\;e^{i\omega t}\,f(t)\right|^2
= \frac{1}{T^2}\left|\int_0^T e^{i\omega t}\,dt\right|^2
= \frac{\sin^2(\omega T/2)}{\omega^2 T^2/4}.$$

This function of $\omega T$ goes to one for $\omega T \to 0$, as the simple approximation $\sin x \approx x$ will convince you. It then oscillates, with zeroes obtained whenever $\omega T$ is a positive integer multiple of $2\pi$, underneath the decreasing envelope $4/(\omega^2 T^2)$. Draw the graph of the function here!
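The rectangular-pulse factor is easy to verify numerically. The sketch below (the specific values of $\omega$ and $T$ are arbitrary choices of the example; $\hbar$ plays no role here) also checks the two limits just discussed: the factor tends to one for $\omega T \to 0$, and it vanishes whenever $\omega T$ is a multiple of $2\pi$.

```python
import numpy as np

def pulse_factor(w, T, n=200001):
    """|int dt e^{i w t} f(t)|^2 for the pulse f = 1/T on [0, T],
    evaluated by trapezoidal integration on n grid points."""
    t = np.linspace(0.0, T, n)
    y = np.exp(1j * w * t) / T
    integral = np.sum((y[1:] + y[:-1]) / 2) * (t[1] - t[0])
    return np.abs(integral) ** 2

w, T = 2.0, 3.0   # arbitrary test values

# matches sin^2(wT/2) / (w^2 T^2 / 4)
expected = np.sin(w * T / 2) ** 2 / (w ** 2 * T ** 2 / 4)
assert abs(pulse_factor(w, T) - expected) < 1e-8

# a very short pulse (wT -> 0): almost no cancellations, factor -> 1
assert abs(pulse_factor(w, 1e-4) - 1.0) < 1e-6

# zero whenever wT is a positive integer multiple of 2*pi
assert pulse_factor(2 * np.pi / T, T) < 1e-10
```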