http://ocw.mit.edu
Spring 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Andrei Tokmakoff, MIT Department of Chemistry, 2/13/2007
For many time-dependent problems, most notably in spectroscopy, we often can partition the
time-dependent Hamiltonian into a time-independent part that we can describe exactly and a
time-dependent part
H = H_0 + V(t)  (2.1)
Nitzan, Sec. 2.3., offers a nice explanation of the circumstances that allow us to use this
approach. It arises from partitioning the system into internal degrees of freedom in H0 and
external degrees of freedom acting on H0. If you have reason to believe that the external
Hamiltonian can be treated classically, then eq. (2.1) follows in a straightforward manner. Then
there is a straightforward approach to describing the time-evolving wavefunction for the system
in terms of the eigenstates and energy eigenvalues of H_0. We know

H_0 |n⟩ = E_n |n⟩ .  (2.2)

|ψ(t)⟩ = Σ_n c_n(t) |n⟩  (2.3)

The TDSE can be used to find an equation of motion for the expansion coefficients

c_k(t) = ⟨k|ψ(t)⟩  (2.4)
Starting with

∂|ψ⟩/∂t = −(i/ħ) H |ψ⟩  (2.5)

∂c_k(t)/∂t = −(i/ħ) ⟨k| H |ψ(t)⟩  (2.6)

and inserting Σ_n |n⟩⟨n| = 1:

∂c_k(t)/∂t = −(i/ħ) Σ_n ⟨k| H |n⟩ c_n(t)  (2.7)
∂c_k(t)/∂t = −(i/ħ) Σ_n ⟨k| ( H_0 + V(t) ) |n⟩ c_n(t)  (2.8)

          = −(i/ħ) Σ_n [ E_n δ_kn + V_kn(t) ] c_n(t)

or,

∂c_k(t)/∂t + (i/ħ) E_k c_k(t) = −(i/ħ) Σ_n V_kn(t) c_n(t) .  (2.9)
If we make a substitution

c_m(t) = e^{−iE_m t/ħ} b_m(t) ,  (2.10)

which defines a slightly different expansion coefficient, we can simplify considerably. Notice that this substitution pulls out the "trivial" part of the time-evolution, the time-evolving phase factor for state m. The reasons will become clear later when we discuss the interaction picture. It is easy to calculate b_k(t) and then add in the extra oscillatory term at the end. Now eq. (2.9) becomes
e^{−iE_k t/ħ} ∂b_k/∂t = −(i/ħ) Σ_n V_kn(t) e^{−iE_n t/ħ} b_n(t)  (2.11)

or

iħ ∂b_k/∂t = Σ_n V_kn(t) e^{−iω_nk t} b_n(t)  (2.12)

where ω_nk = (E_n − E_k)/ħ.
This set of equations is exact: it is a set of coupled differential equations that describes how probability amplitude moves through eigenstates due to a time-dependent potential. Except in simple cases, these equations can't be solved analytically, but it is often straightforward to integrate them numerically.
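As a minimal sketch of such a numerical integration, the coupled equations (2.12) can be propagated with a fourth-order Runge-Kutta step. The two-level system, the level energies, the coupling strength, and the driving frequency below are all illustrative assumptions (ħ = 1):

```python
import numpy as np

# Sketch: integrate the coupled amplitude equations (2.12),
#   i db_k/dt = sum_n V_kn(t) exp(-i w_nk t) b_n(t)   (hbar = 1),
# for a hypothetical two-level system driven on resonance.
E = np.array([0.0, 1.0])      # eigenvalues of H0 (illustrative values)
V0, w_drive = 0.05, 1.0       # coupling strength and driving frequency

def deriv(t, b):
    V = V0 * np.cos(w_drive * t) * np.array([[0.0, 1.0], [1.0, 0.0]])
    phase = np.exp(-1j * (E[None, :] - E[:, None]) * t)  # exp(-i w_nk t)
    return -1j * (V * phase) @ b

b = np.array([1.0 + 0j, 0.0 + 0j])    # start entirely in the lower state
t, dt = 0.0, 0.01
for _ in range(20000):                # propagate to t = 200 by RK4
    k1 = deriv(t, b)
    k2 = deriv(t + dt / 2, b + dt / 2 * k1)
    k3 = deriv(t + dt / 2, b + dt / 2 * k2)
    k4 = deriv(t + dt, b + dt * k3)
    b = b + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

norm = abs(b[0])**2 + abs(b[1])**2    # should stay 1 (probability conserved)
print(norm, abs(b[1])**2)
```

Conservation of the total probability is a useful sanity check on the integration; the population of the upper state oscillates, as derived for the driven two-level system below.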
As an example of the use of these equations, let’s describe what happens when you drive a two-
level system with an oscillating potential.
V(t) = V cos ωt = V f(t)  (2.13)

Note: This is what you expect for an electromagnetic field interacting with charged particles, i.e. dipole transitions. In a simple sense, the electric field is

E(t) = E_0 cos ωt  (2.14)

and the force on a particle of charge q is

F = q E  (2.15)

F_x = −∂V/∂x = q E_x  ⇒  V = −q E_x x  (2.16)

q x is just the x component of the dipole moment μ. So matrix elements in V look like:

⟨k| V(t) |ℓ⟩ = −q E_x ⟨k| x |ℓ⟩ cos ωt  (2.17)

More generally,

V = −E · μ .  (2.18)

So,

V(t) = V cos ωt = −E_0 · μ cos ωt
V_kℓ(t) = V_kℓ cos ωt = −E_0 · μ_kℓ cos ωt  (2.19)
iħ ∂b_k/∂t = Σ_n b_n(t) V_kn(t) e^{−iω_nk t}
          = Σ_n b_n(t) V_kn e^{−iω_nk t} × ½ ( e^{−iωt} + e^{+iωt} )  (2.20)

Or more explicitly, for the two levels ℓ and k:

iħ ∂b_k/∂t = ½ b_ℓ V_kℓ [ e^{i(ω_kℓ−ω)t} + e^{i(ω_kℓ+ω)t} ] + ½ b_k V_kk [ e^{iωt} + e^{−iωt} ]

iħ ∂b_ℓ/∂t = ½ b_ℓ V_ℓℓ [ e^{iωt} + e^{−iωt} ] + ½ b_k V_ℓk [ e^{−i(ω_kℓ+ω)t} + e^{−i(ω_kℓ−ω)t} ]  (2.21)
Two of these terms can be dropped, since (for our case) the diagonal matrix elements V_ii = 0. We also make the secular approximation (rotating wave approximation), in which the nonresonant terms are dropped. When ω_kℓ ≈ ω, terms like e^{±iωt} or e^{i(ω_kℓ+ω)t} oscillate very rapidly (relative to |V_kℓ|^{−1}) and so don't contribute much to the change of c_n. (Remember that ω_kℓ is positive.) So we have:
∂b_k/∂t = −(i/2ħ) b_ℓ V_kℓ e^{i(ω_kℓ−ω)t}  (2.22)

∂b_ℓ/∂t = −(i/2ħ) b_k V_ℓk e^{−i(ω_kℓ−ω)t}  (2.23)

Differentiating (2.22):

∂²b_k/∂t² = −(i/2ħ) [ (∂b_ℓ/∂t) V_kℓ e^{i(ω_kℓ−ω)t} + i(ω_kℓ−ω) b_ℓ V_kℓ e^{i(ω_kℓ−ω)t} ]  (2.24)
Rewriting (2.22) as

b_ℓ = (2iħ / V_kℓ) (∂b_k/∂t) e^{−i(ω_kℓ−ω)t}  (2.25)

and substituting (2.25) and (2.23) into (2.24), we get a linear second-order equation for b_k:

∂²b_k/∂t² − i(ω_kℓ−ω) ∂b_k/∂t + ( |V_kℓ|² / 4ħ² ) b_k = 0  (2.26)
This is just the second order differential equation for a damped harmonic oscillator:
a ẍ + b ẋ + c x = 0  (2.27)

x(t) = e^{−(b/2a)t} ( A cos μt + B sin μt ) ,  μ = (1/2a) [ 4ac − b² ]^{1/2}  (2.28)
With a little more work, and remembering the initial conditions b_k(0) = 0 and b_ℓ(0) = 1, we find

P_k = |b_k(t)|² = [ |V_kℓ|² / ( |V_kℓ|² + ħ²(ω_kℓ−ω)² ) ] sin² Ω_R t  (2.29)

where the Rabi frequency

Ω_R = (1/2ħ) [ |V_kℓ|² + ħ²(ω_kℓ−ω)² ]^{1/2}  (2.30)

Also,

P_ℓ = 1 − |b_k|²  (2.31)
The amplitude oscillates back and forth between the two states at a frequency dictated by the
coupling between them. [ Note a result we will return to later: Electric fields couple quantum
states, creating coherences! ]
The efficiency of driving amplitude between the ℓ and k states drops off with detuning. [Figure: maximum value of P_k as a function of the driving frequency ω, peaked at ω = ω_kℓ.]
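That detuning dependence follows directly from eq. (2.29): the maximum of P_k over time is |V_kℓ|² / ( |V_kℓ|² + ħ²(ω_kℓ−ω)² ), a Lorentzian in the detuning whose width is set by the coupling. A minimal sketch with illustrative numbers (ħ = 1, |V_kℓ| = 0.1):

```python
import numpy as np

# Maximum of P_k(t) from eq. (2.29) as a function of detuning:
# a Lorentzian whose width is set by the coupling |V_kl|.
hbar, V = 1.0, 0.1                     # illustrative values
detuning = np.linspace(-1.0, 1.0, 201)  # w_kl - w
P_max = V**2 / (V**2 + (hbar * detuning)**2)

print(P_max.max())                      # unity on resonance
i_half = int(np.argmin(np.abs(detuning - V / hbar)))
print(P_max[i_half])                    # half maximum at detuning = V/hbar
```

The half-width at half-maximum of this curve equals |V_kℓ|/ħ, one way of seeing that stronger coupling makes the transfer less sensitive to detuning.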
We will use our intuition here (largely based on correspondence to classical mechanics). We are
seeking an equation of motion for quantum systems that is equivalent to Newton’s (or more
accurately Hamilton’s) equations for classical systems.
lim_{t→t0} ψ(t) = ψ(t0)  (2.33)

|ψ(t)⟩ = U(t, t0) |ψ(t0)⟩  (2.34)

ψ(r) = e^{ik(r − r0)} ψ(r0)  (2.35)
This time-evolution operator must preserve the norm of the wavefunction, which is necessary for conservation of probability, i.e. to retain normalization for the system. If
|ψ(t0)⟩ = a_1 |φ_1(t0)⟩ + a_2 |φ_2(t0)⟩  (2.36)

then

|ψ(t)⟩ = U(t, t0) |ψ(t0)⟩
       = U(t, t0) a_1 |φ_1(t0)⟩ + U(t, t0) a_2 |φ_2(t0)⟩  (2.37)
       = a_1(t) |φ_1⟩ + a_2(t) |φ_2⟩

This is a reflection of the importance of linearity in quantum systems. While a_i(t) is typically not equal to a_i(t0),

Σ_n |a_n(t)|² = Σ_n |a_n(t0)|²  (2.38)
Properties of U(t,t0)
1) Unitary. Note that for eq. (2.38) to hold and for probability density to be conserved, U must
be unitary
P = ⟨ψ(t)|ψ(t)⟩ = ⟨ψ(t0)| U†U |ψ(t0)⟩  (2.39)

which holds only if U† = U^{−1}. In fact, this is the property that equates unitary operators with probability conservation.
2) Time continuity:
U(t, t) = 1 .  (2.40)
3) Composition. Time evolution can be treated as a single step (t0 → t2) or multiple steps (t0 → t1 → t2):

U(t2, t0) = U(t2, t1) U(t1, t0)  (2.41)

|ψ(t2)⟩ = U(t2, t1) U(t1, t0) |ψ(t0)⟩ = U(t2, t1) |ψ(t1)⟩  (2.42)
Equation (2.41) is already very suggestive of an exponential form. Furthermore, since time is continuous and the operator is linear, it also suggests, as we will see, that the time propagator depends only on the time interval:

U(t1, t0) = U(t1 − t0)  (2.43)

and

U(t2 − t0) = U(t2 − t1) U(t1 − t0)
4) Time-reversal. The inverse of the time-propagator is the time-reversal operator. From eq. (2.41):

U(t, t0) U(t0, t) = 1  (2.32)

∴ U^{−1}(t, t0) = U(t0, t) .  (2.33)
Let's find an equation of motion that describes the time-evolution operator using the change of the system for an infinitesimal time-step δt: U(t0 + δt, t0). Since

lim_{δt→0} U(t0 + δt, t0) = 1  (2.34)

we expect that for small δt, the difference between U(t0, t0) and U(t0 + δt, t0) will be linear in δt:

U(t0 + δt, t0) = U(t0, t0) − i Ω̂(t0) δt  (2.35)

We take Ω̂ to be a time-dependent Hermitian operator. We'll see later why the second term must be imaginary. So, now we can write a differential equation for U. We know that
U(t + δt, t0) = U(t + δt, t) U(t, t0) .  (2.36)
Knowing the change of U during the period δt allows us to write a differential equation for the
time-development of U ( t , t0 ) . The equation of motion for U is
∂U(t, t0)/∂t = lim_{δt→0} [ U(t + δt, t0) − U(t, t0) ] / δt
            = lim_{δt→0} [ U(t + δt, t) − 1 ] U(t, t0) / δt  (2.37)

∂U(t, t0)/∂t = −i Ω̂ U(t, t0)  (2.38)
You can now see why the operator needed an imaginary prefactor: otherwise probability density would not be conserved (it would rise or decay). Instead, amplitude oscillates through different states of the system.
We note that Ω̂ has units of frequency. Since (1) quantum mechanics says E = ħω and (2) in classical mechanics the Hamiltonian generates time-evolution, we write

Ω̂ = Ĥ / ħ  (2.39)

iħ ∂U(t, t0)/∂t = Ĥ U(t, t0)  (2.40)

iħ ∂|ψ⟩/∂t = Ĥ |ψ⟩  (2.41)
We are also interested in the equation of motion for U † which describes the time-evolution of the
conjugate wavefunctions. Following the same approach and recognizing that U † ( t , t0 ) acts to
the left:
⟨ψ(t)| = ⟨ψ(t0)| U†(t, t0) ,  (2.42)

we get

−iħ ∂U†(t, t0)/∂t = U†(t, t0) Ĥ  (2.43)
For a time-independent Hamiltonian, the solution is

U(t, t0) = exp[ −(i/ħ) H (t − t0) ]  (2.44)

which we can evaluate through its expansion:

exp[ −(i/ħ) H (t − t0) ] = 1 + (−i/ħ) H (t − t0) + (1/2!) (−i/ħ)² [ H (t − t0) ]² + …  (2.45)

Note that H commutes with itself at all times t here, since it is time-independent. You can confirm that the expansion satisfies the equation of motion for U.
In the basis of eigenstates,

H |n⟩ = E_n |n⟩ ,  Σ_n |n⟩⟨n| = 1  (2.46)

So we have

U(t, t0) = Σ_n exp[ −iH(t − t0)/ħ ] |n⟩⟨n|
         = Σ_n |n⟩ exp[ −iE_n(t − t0)/ħ ] ⟨n|  (2.47)

and

|ψ(t)⟩ = U(t, t0) |ψ(t0)⟩
       = Σ_n |n⟩ ⟨n|ψ(t0)⟩ exp[ −(i/ħ) E_n (t − t0) ]  (2.48)
       = Σ_n |n⟩ c_n(t)

where c_n(t0) = ⟨n|ψ(t0)⟩.
⟨A(t)⟩ = ⟨ψ(t)| A |ψ(t)⟩
       = ⟨ψ(t0)| U†(t, t0) A U(t, t0) |ψ(t0)⟩  (2.49)
       = Σ_{n,m} c_m*(t) c_n(t) A_mn
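These relations are easy to check numerically. The sketch below (an arbitrary 3×3 Hermitian matrix standing in for H, ħ = 1, t0 = 0, and a made-up observable A) builds U(t) from the eigenbasis as in eq. (2.47) and verifies unitarity and the composition property:

```python
import numpy as np

# Build U(t) = sum_n |n> exp(-i E_n t) <n|  (eq. 2.47, hbar = 1, t0 = 0)
# for a random Hermitian H, then check unitarity and composition.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
H = (M + M.T) / 2                      # illustrative Hermitian Hamiltonian
E, S = np.linalg.eigh(H)               # H = S diag(E) S^dagger

def U(t):
    return S @ np.diag(np.exp(-1j * E * t)) @ S.conj().T

t = 2.5
ok_unitary = np.allclose(U(t) @ U(t).conj().T, np.eye(3))
ok_compose = np.allclose(U(t), U(t / 2) @ U(t / 2))

# expectation value, eq. (2.49): <A(t)> = <psi(0)| U^dag A U |psi(0)>
A = np.diag([0.0, 1.0, 2.0])
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
expA = np.real(psi0.conj() @ U(t).conj().T @ A @ U(t) @ psi0)

print(ok_unitary, ok_compose, expA)
```

Since A here is diagonal with eigenvalues between 0 and 2, the expectation value must stay in that range, another quick consistency check.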
At first glance, a time-dependent Hamiltonian may also seem straightforward to deal with. If H is a function of time, then the formal integration of iħ ∂U/∂t = H U gives

U(t, t0) = exp[ −(i/ħ) ∫_{t0}^{t} H(t′) dt′ ]  (2.51)

We would define this exponential as an expansion in a series, and substitute into the equation of motion to confirm it:

U(t, t0) = 1 − (i/ħ) ∫_{t0}^{t} H(t′) dt′ + (1/2!) (−i/ħ)² ∫_{t0}^{t} dt′ ∫_{t0}^{t} dt″ H(t′) H(t″) + …  (2.52)

and, in a basis of eigenstates,

U(t, t0) = Σ_n |n⟩ exp[ −(i/ħ) ∫_{t0}^{t} E_n(t′) dt′ ] ⟨n|  (2.53)
However, this is dangerous; we are not treating H as an operator. We are assuming that the Hamiltonian commutes with itself at different times.
Now, let’s proceed a bit more carefully assuming that the Hamiltonian at different times
does not commute. Integrate
∂U(t, t0)/∂t = −(i/ħ) H(t) U(t, t0)  (2.54)

to give

U(t, t0) = 1 − (i/ħ) ∫_{t0}^{t} dτ H(τ) U(τ, t0)  (2.55)

This is the solution; however, it's not very practical, since U(t, t0) is a function of itself. But we can make an iterative substitution of (2.55) into itself:

U(t, t0) = 1 − (i/ħ) ∫_{t0}^{t} dτ H(τ) [ 1 − (i/ħ) ∫_{t0}^{τ} dτ′ H(τ′) U(τ′, t0) ]
         = 1 + (−i/ħ) ∫_{t0}^{t} dτ H(τ) + (−i/ħ)² ∫_{t0}^{t} dτ ∫_{t0}^{τ} dτ′ H(τ) H(τ′) U(τ′, t0)  (2.56)
Note that in the last term of this equation, the integration variable τ′ precedes τ. Pictorially, the area of integration is the triangular region t0 ≤ τ′ ≤ τ ≤ t.
Iterating again:

U(t, t0) = 1 + (−i/ħ) ∫_{t0}^{t} dτ H(τ)
         + (−i/ħ)² ∫_{t0}^{t} dτ ∫_{t0}^{τ} dτ′ H(τ) H(τ′)  (2.57)
         + (−i/ħ)³ ∫_{t0}^{t} dτ ∫_{t0}^{τ} dτ′ ∫_{t0}^{τ′} dτ″ H(τ) H(τ′) H(τ″) U(τ″, t0)
From this expansion, you should be aware that there is a time-ordering to the interactions. For
the third term, τ ′′ acts before τ ′ , which acts before τ : t0 ≤ τ ′′ ≤ τ ′ ≤ τ ≤ t .
Imagine that you are starting in state |ψ_0⟩ = |ℓ⟩ and you are working toward a target state |k⟩.
The expression for U describes all possible paths between initial and final state. Each of these
paths interfere in ways dictated by the acquired phase of our eigenstates under the time-
dependent Hamiltonian. The solution for U obtained from this iterative substitution is known as
the (positive) time-ordered exponential
U(t, t0) ≡ exp_+ [ −(i/ħ) ∫_{t0}^{t} dτ H(τ) ]
         ≡ T̂ exp[ −(i/ħ) ∫_{t0}^{t} dτ H(τ) ]  (2.58)
         = 1 + Σ_{n=1}^{∞} (−i/ħ)^n ∫_{t0}^{t} dτ_n ∫_{t0}^{τ_n} dτ_{n−1} … ∫_{t0}^{τ_2} dτ_1 H(τ_n) H(τ_{n−1}) … H(τ_1)

(T̂ is known as the Dyson time-ordering operator.) In this expression the time-ordering is:

t0 → τ_1 → τ_2 → … → τ_n → t
t0 → … → τ″ → τ′ → τ  (2.59)
So, this expression tells you about how a quantum system evolves over a given time interval, and
it allows for any possible trajectory from an initial state to a final state through any number of
intermediate states. Each term in the expansion accounts for more possible transitions between
different intermediate quantum states during this trajectory.
Compare this with the unordered expansion of the ordinary exponential:

1 + Σ_{n=1}^{∞} (1/n!) (−i/ħ)^n ∫_{t0}^{t} dτ_n … ∫_{t0}^{t} dτ_1 H(τ_n) H(τ_{n−1}) … H(τ_1)  (2.60)
Here the time-variables assume all values, and therefore all orderings for H (τ i ) are calculated.
The areas are normalized by the n! factor. (There are n! time-orderings of the τ n times.)
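The difference between the time-ordered exponential (2.58) and the ordinary exponential can be seen numerically: approximate U as a product of short-time propagators with later times to the left, and compare with exponentiating the integral of H(t). The 2×2 H(t) below is a made-up example with [H(t), H(t′)] ≠ 0 (ħ = 1):

```python
import numpy as np

# Time-ordered product vs naive exponential for a noncommuting H(t).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return sz + np.sin(t) * sx          # [H(t), H(t')] != 0 in general

def exp_iM(M, dt):
    # exp(-i M dt) for Hermitian M via eigendecomposition
    w, v = np.linalg.eigh(M)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

T, N = 2.0, 4000
dt = T / N
U_ordered = np.eye(2, dtype=complex)
H_integral = np.zeros((2, 2), dtype=complex)
for j in range(N):
    tj = (j + 0.5) * dt
    U_ordered = exp_iM(H(tj), dt) @ U_ordered   # later times act to the left
    H_integral += H(tj) * dt

U_naive = exp_iM(H_integral, 1.0)               # exp(-i * integral H dt)
diff = np.max(np.abs(U_ordered - U_naive))
print(diff)     # clearly nonzero: the ordering matters
```

Both matrices are unitary, but they differ by an amount set by the commutators of H at different times; only the time-ordered product solves the equation of motion.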
We are also interested in the Hermitian conjugate of U(t, t0), which has the equation of motion

∂U†(t, t0)/∂t = +(i/ħ) U†(t, t0) H(t)  (2.61)

If we repeat the iterative method above, remembering that U†(t, t0) acts to the left,

⟨ψ(t)| = ⟨ψ(t0)| U†(t, t0)  (2.62)

then from

U†(t, t0) = U†(t0, t0) + (i/ħ) ∫_{t0}^{t} dτ U†(τ, t0) H(τ)  (2.63)

we obtain the negative time-ordered exponential:

U†(t, t0) = exp_− [ (i/ħ) ∫_{t0}^{t} dτ H(τ) ]  (2.64)
          = 1 + Σ_{n=1}^{∞} (i/ħ)^n ∫_{t0}^{t} dτ_n ∫_{t0}^{τ_n} dτ_{n−1} … ∫_{t0}^{τ_2} dτ_1 H(τ_1) H(τ_2) … H(τ_n)

Here the order of the operators is reversed relative to (2.58).
Readings
1. Merzbacher, E. Quantum Mechanics, 3rd ed. (Wiley, New York, 1998), Ch. 14.
2. Mukamel, S. Principles of Nonlinear Optical Spectroscopy (Oxford University Press, New
York, 1995), Ch. 2.
3. Sakurai, J. J. Modern Quantum Mechanics, Revised Edition (Addison-Wesley, Reading, MA,
1994), Ch. 2.
Andrei Tokmakoff, MIT Department of Chemistry, 2/22/2007
⟨Â(t)⟩ = ⟨ψ(t)| Â |ψ(t)⟩ = ⟨ψ(0)| U† Â U |ψ(0)⟩
       = ⟨ψ(0)| ( U† Â U ) |ψ(0)⟩  (2.65)
       = ( ⟨ψ(0)| U† ) Â ( U |ψ(0)⟩ )
The last two expressions here suggest alternate transformations that can describe the dynamics.
These have different physical interpretations:
(1) Schrödinger Picture: Everything we have done so far. Operators are stationary.
Eigenvectors evolve under U ( t , t0 ) .
(2) Heisenberg Picture: Use unitary property of U to transform operators so they evolve
in time. The wavefunction is stationary. This is a physically appealing picture, because
particles move – there is a time-dependence to position and momentum.
Schrödinger Picture
iħ ∂|ψ⟩/∂t = H |ψ⟩  (2.66)

iħ ∂⟨Â(t)⟩/∂t = iħ [ ⟨∂ψ/∂t| Â |ψ⟩ + ⟨ψ| Â |∂ψ/∂t⟩ + ⟨ψ| (∂Â/∂t) |ψ⟩ ]
             = ⟨ψ| Â H |ψ⟩ − ⟨ψ| H Â |ψ⟩   (the last term vanishes for time-independent Â)
             = ⟨ψ| [Â, H] |ψ⟩  (2.67)
             = ⟨ [Â, H] ⟩
Alternatively, written in terms of the density matrix ρ:

iħ ∂⟨Â⟩/∂t = iħ ∂/∂t Tr( Â ρ )
           = iħ Tr( Â ∂ρ/∂t )  (2.68)
           = Tr( Â [H, ρ] )
           = Tr( [Â, H] ρ )
If  is independent of time (as we expect in the Schrödinger picture) and if it commutes with H ,
it is referred to as a constant of motion.
Heisenberg Picture
From eq. (2.65) we can distinguish the Schrödinger picture from Heisenberg operators:

⟨Â(t)⟩ = ⟨ψ(t)| Â |ψ(t)⟩_S = ⟨ψ(t0)| U† Â U |ψ(t0)⟩ = ⟨ψ| Â(t) |ψ⟩_H  (2.69)

where the Heisenberg operator is

Â_H(t) = U†(t, t0) Â_S U(t, t0)  (2.70)
Â_H(t0) = Â_S
Also, since the wavefunction is stationary in the Heisenberg picture,

|ψ_S(t)⟩ = U(t, t0) |ψ_H⟩  (2.71)

So, |ψ_H⟩ = U†(t, t0) |ψ_S(t)⟩ = |ψ_S(t0)⟩  (2.72)

In terms of eigenstates of Â:

Â |φ_i⟩_S = a_i |φ_i⟩_S
U† Â U U† |φ_i⟩_S = a_i U† |φ_i⟩_S  (2.73)
Â_H |φ_i⟩_H = a_i |φ_i⟩_H

so the eigenvalues are unchanged by the transformation.
∂Â_H/∂t = ∂/∂t ( U† Â_S U ) = (∂U†/∂t) Â_S U + U† Â_S (∂U/∂t) + U† (∂Â_S/∂t) U
        = (i/ħ) U† H Â_S U − (i/ħ) U† Â_S H U + ( ∂Â/∂t )_H
        = (i/ħ) H_H Â_H − (i/ħ) Â_H H_H  (2.74)
        = −(i/ħ) [Â, H]_H

The result:

iħ ∂Â_H/∂t = [Â, H]_H  (2.75)
Particle in a potential
Often we want to describe the equations of motion for particles with an arbitrary potential:
H = p²/2m + V(x)  (2.76)

The Heisenberg equations of motion give

ṗ = −∂V/∂x .  (2.77)

ẋ = p/m  (2.78)
m
These equations indicate that the position and momentum operators follow equations of motion
identical to the classical variables in Hamilton's equations. These do not involve factors of ħ.
Note here that if we integrate eq. (2.78) over a time period t we find:

x(t) = p t / m + x(0)  (2.81)
implying that the expectation value for the position of the particle follows the classical motion.
These equations also hold for the expectation values for the position and momentum operators
(Ehrenfest’s Theorem) and indicate the nature of the classical correspondence. In correspondence
to Newton’s equation, we see
m ∂²⟨x⟩/∂t² = −⟨∇V⟩  (2.82)
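For the harmonic oscillator this correspondence is exact: the expectation value ⟨x(t)⟩ follows the classical trajectory. The sketch below (ħ = m = Ω = 1, a truncated number basis, and a displaced coherent-like state are all illustrative choices) propagates exactly with the eigenbasis propagator and compares with the classical cosine:

```python
import math
import numpy as np

# Harmonic oscillator in a truncated number basis (hbar = m = Omega = 1):
# <x(t)> for a displaced (coherent) state follows sqrt(2)*alpha*cos(t).
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # lowering operator
x = (a + a.T) / np.sqrt(2.0)
H = a.T @ a + 0.5 * np.eye(N)

E, S = np.linalg.eigh(H)
def U(t):
    return S @ np.diag(np.exp(-1j * E * t)) @ S.conj().T

alpha = 1.2                                # real displacement (illustrative)
psi = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in range(N)],
               dtype=complex)
psi /= np.linalg.norm(psi)

ts = np.linspace(0.0, 10.0, 101)
x_qm = np.array([np.real((U(t) @ psi).conj() @ x @ (U(t) @ psi)) for t in ts])
x_cl = np.sqrt(2.0) * alpha * np.cos(ts)   # classical trajectory

err = np.max(np.abs(x_qm - x_cl))
print(err)      # tiny: the quantum mean follows the classical path
```

For an anharmonic potential, ⟨∇V(x)⟩ ≠ ∇V(⟨x⟩) in general, so the agreement would only be approximate.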
The interaction picture is a hybrid representation that is useful in solving problems with time-
dependent Hamiltonians in which we can partition the Hamiltonian as
H(t) = H_0 + V(t)  (2.83)
H 0 is a Hamiltonian for the degrees of freedom we are interested in, which we treat exactly, and
can be (although for us generally won’t be) a function of time. V ( t ) is a time-dependent potential
which can be complicated. In the interaction picture we will treat each part of the Hamiltonian in a different representation. We will use the eigenstates of H_0 as a basis set to describe the dynamics induced by V(t), assuming that V(t) is small enough that the eigenstates of H_0 remain a useful basis to describe H. If H_0 is not a function of time, then there is a simple time-dependence to this part of the Hamiltonian that we may be able to account for easily.
Setting V to zero, we can see that the time evolution of the exact part of the Hamiltonian
H 0 is described by
∂U_0(t, t0)/∂t = −(i/ħ) H_0(t) U_0(t, t0)  (2.84)

where, most generally,

U_0(t, t0) = exp_+ [ −(i/ħ) ∫_{t0}^{t} dτ H_0(τ) ]  (2.85)

but for a time-independent H_0

U_0(t, t0) = e^{−iH_0(t−t0)/ħ}  (2.86)

We define a wavefunction in the interaction picture through:

|ψ_S(t)⟩ ≡ U_0(t, t0) |ψ_I(t)⟩  (2.87)

or |ψ_I⟩ = U_0† |ψ_S⟩  (2.88)
Effectively the interaction representation defines wavefunctions in such a way that the phase accumulated under e^{−iH_0 t/ħ} is removed. For small V, these are typically high-frequency oscillations relative to the slower amplitude changes in coherences induced by V.
We are after an equation of motion that describes the time-evolution of the interaction
picture wave-functions. We begin by substituting eq. (2.87) into the TDSE:
iħ ∂|ψ_S⟩/∂t = H |ψ_S⟩  (2.89)

iħ ∂/∂t [ U_0(t, t0) |ψ_I⟩ ] = H(t) U_0(t, t0) |ψ_I⟩

iħ (∂U_0/∂t) |ψ_I⟩ + iħ U_0 ∂|ψ_I⟩/∂t = ( H_0 + V(t) ) U_0(t, t0) |ψ_I⟩  (2.90)

H_0 U_0 |ψ_I⟩ + iħ U_0 ∂|ψ_I⟩/∂t = ( H_0 + V(t) ) U_0 |ψ_I⟩

∴ iħ ∂|ψ_I⟩/∂t = V_I |ψ_I⟩  (2.91)
|ψ_I⟩ satisfies the Schrödinger equation with a new Hamiltonian: the interaction picture Hamiltonian,

V_I(t) = U_0†(t, t0) V(t) U_0(t, t0) .  (2.92)

We can now define a time-evolution operator in the interaction picture:
|ψ_I(t)⟩ = U_I(t, t0) |ψ_I(t0)⟩  (2.93)

where

U_I(t, t0) = exp_+ [ −(i/ħ) ∫_{t0}^{t} dτ V_I(τ) ] .  (2.94)

Now we see that

|ψ_S(t)⟩ = U_0(t, t0) |ψ_I(t)⟩
         = U_0(t, t0) U_I(t, t0) |ψ_I(t0)⟩  (2.95)
         = U_0(t, t0) U_I(t, t0) |ψ_S(t0)⟩

∴ U(t, t0) = U_0(t, t0) U_I(t, t0)  (2.96)
We can also express the full time-evolution operator as an expansion in powers of V:

U(t, t0) = U_0(t, t0) + Σ_{n=1}^{∞} (−i/ħ)^n ∫_{t0}^{t} dτ_n ∫_{t0}^{τ_n} dτ_{n−1} … ∫_{t0}^{τ_2} dτ_1 U_0(t, τ_n) V(τ_n) U_0(τ_n, τ_{n−1}) … U_0(τ_2, τ_1) V(τ_1) U_0(τ_1, t0)  (2.97)
where we have used the composition property of U ( t , t0 ) . The same positive time-ordering
applies. Note that the interactions V(τi) are not in the interaction representation here. Rather we
used the definition in eq. (2.92) and collected terms.
Similarly, the Hermitian conjugate is

U†(t, t0) = U_I†(t, t0) U_0†(t, t0) = exp_− [ (i/ħ) ∫_{t0}^{t} dτ V_I(τ) ] exp_− [ (i/ħ) ∫_{t0}^{t} dτ H_0(τ) ]  (2.98)

or U_0† = e^{iH_0(t−t0)/ħ} when H_0 is independent of time.
⟨Â(t)⟩ = ⟨ψ(t)| Â |ψ(t)⟩
       = ⟨ψ(t0)| U†(t, t0) Â U(t, t0) |ψ(t0)⟩
       = ⟨ψ(t0)| U_I† U_0† Â U_0 U_I |ψ(t0)⟩  (2.99)
       = ⟨ψ_I(t)| Â_I |ψ_I(t)⟩

where Â_I ≡ U_0† Â_S U_0  (2.100)
Differentiating AI gives:
∂Â_I/∂t = (i/ħ) [ H_0, Â_I ]  (2.101)

also, ∂|ψ_I⟩/∂t = −(i/ħ) V_I(t) |ψ_I⟩  (2.102)

Notice that the interaction representation is a partition between the Schrödinger and Heisenberg representations. Wavefunctions evolve under V_I, while operators evolve under H_0:

For H_0 = 0, V(t) = H:  ∂Â/∂t = 0 ;  ∂|ψ_S⟩/∂t = −(i/ħ) H |ψ_S⟩  →  Schrödinger
For H_0 = H, V(t) = 0:  ∂Â/∂t = (i/ħ) [H, Â] ;  ∂|ψ⟩/∂t = 0  →  Heisenberg  (2.103)
H_0 |n⟩ = E_n |n⟩  (2.105)
and we can describe the state of the system as a superposition of these eigenstates:
|ψ(t)⟩ = Σ_n c_n(t) |n⟩  (2.106)
Alternatively we can express the expansion coefficients in terms of the interaction picture
wavefunctions
b_k(t) = ⟨k|ψ_I(t)⟩ = ⟨k| U_I |ψ(t0)⟩  (2.108)

c_k(t) = ⟨k| U_0 U_I |ψ(t0)⟩
       = e^{−iω_k t} ⟨k| U_I |ψ(t0)⟩  (2.109)
       = e^{−iω_k t} b_k(t)

This is the same identity we used earlier to derive the coupled differential equations that describe the change in the time-evolving amplitude of the eigenstates:

iħ ∂b_k/∂t = Σ_n V_kn(t) e^{−iω_nk t} b_n(t)  (2.110)

where |ψ_I(t)⟩ = Σ_n b_n(t) |n⟩  (2.111)
One route is to solve the coupled differential equations for the amplitudes of |n⟩ directly, but for a complex time-dependence or a system with many states to be considered, solving these equations isn't practical. Alternatively, we can choose to work directly with U_I(t, t0), and calculate b_k(t) as:

b_k = ⟨k| U_I(t, t0) |ψ(t0)⟩  (2.112)

where

U_I(t, t0) = exp_+ [ −(i/ħ) ∫_{t0}^{t} V_I(τ) dτ ]  (2.113)
Now we can truncate the expansion after a few terms. This is perturbation theory, where the dynamics under H_0 are treated exactly, but the influence of V(t) on b_n is truncated. This works well for small changes in amplitude of the quantum states with small coupling matrix elements relative to the energy splittings involved ( b_k(t) ≈ b_k(0) ; |V| << |E_k − E_n| ). As we'll see, the results we obtain from perturbation theory are widely used for spectroscopy, condensed phase dynamics, and relaxation.
Transition Probability

Let's take the specific case where we have a system prepared in |ℓ⟩, and we want to know the probability of observing the system in |k⟩ at time t:

P_k(t) = |b_k(t)|² ,  b_k(t) = ⟨k| U_I(t, t0) |ℓ⟩  (2.114)

b_k(t) = ⟨k| exp_+ [ −(i/ħ) ∫_{t0}^{t} dτ V_I(τ) ] |ℓ⟩  (2.115)
Expanding the exponential:

b_k(t) = ⟨k|ℓ⟩ − (i/ħ) ∫_{t0}^{t} dτ ⟨k| V_I(τ) |ℓ⟩
       + (−i/ħ)² ∫_{t0}^{t} dτ_2 ∫_{t0}^{τ_2} dτ_1 ⟨k| V_I(τ_2) V_I(τ_1) |ℓ⟩ + …  (2.116)

Using the matrix elements of V_I,

⟨k| V_I(τ) |ℓ⟩ = e^{−iω_ℓk τ} V_kℓ(τ)  (2.117)

So,

b_k(t) = δ_kℓ − (i/ħ) ∫_{t0}^{t} dτ_1 e^{−iω_ℓk τ_1} V_kℓ(τ_1)   ("first order")  (2.118)

       + Σ_m (−i/ħ)² ∫_{t0}^{t} dτ_2 ∫_{t0}^{τ_2} dτ_1 e^{−iω_mk τ_2} V_km(τ_2) e^{−iω_ℓm τ_1} V_mℓ(τ_1) + …   ("second order")  (2.119)
The first-order term allows only direct transitions between |ℓ⟩ and |k⟩, as allowed by the matrix element in V, whereas the second-order term accounts for transitions occurring through all possible intermediate states |m⟩. For perturbation theory, the time-ordered integral is truncated at the appropriate order. Including only the first integral is first-order perturbation theory. The order of perturbation theory to which one extends a calculation should be evaluated initially by which allowed pathways between |ℓ⟩ and |k⟩ you need to account for and which ones are excluded by the matrix elements.
For first-order perturbation theory, the expression in eq. (2.118) is the solution to the differential equation that you get for direct coupling between |ℓ⟩ and |k⟩:

∂b_k/∂t = −(i/ħ) e^{−iω_ℓk t} V_kℓ(t) b_ℓ(0)  (2.120)

This indicates that the solution doesn't allow for the feedback between |ℓ⟩ and |k⟩ that accounts for changing populations. This is the reason we say that validity dictates b_k(t) ≈ b_k(0). If the initial state is a superposition,

|ψ_0⟩ = Σ_n b_n(0) |n⟩  and  b_k(t) = Σ_n b_n(0) ⟨k| U_I |n⟩ .  (2.121)
Now there may be interference effects between the pathways initiating from different states:

P_k(t) = |c_k(t)|² = |b_k(t)|² = | Σ_n b_n(0) ⟨k| U_I |n⟩ |²  (2.122)

Also note that if the system is initially prepared in a state |ℓ⟩, and a time-dependent perturbation is turned on and then turned off over the time interval t = −∞ to +∞, then the complex amplitude in the target state |k⟩ is just the Fourier transform of V(t) evaluated at the energy gap ω_kℓ:

b_k = −(i/ħ) ∫_{−∞}^{+∞} dτ e^{−iω_ℓk τ} V_kℓ(τ)  (2.123)

If we define the Fourier transform as

Ṽ(ω) ≡ F[V(t)] = (1/2π) ∫_{−∞}^{+∞} dt V(t) exp(iωt) ,  (2.124)

then P_kℓ = (2π/ħ)² |Ṽ_kℓ(ω_kℓ)|² .  (2.125)
Example: Consider a harmonic oscillator whose force constant is subjected to a Gaussian compression pulse:

H(t) = T + V(t) = p²/2m + ½ k(t) x²  (2.126)

k(t) = k_0 + δk(t) ,  k_0 = mΩ² ,  δk(t) = δk_0 exp[ −(t − t_0)²/2σ² ]  (2.127)

H = p²/2m + ½ k_0 x² + ½ δk_0 x² exp[ −(t − t_0)²/2σ² ]  (2.128)

where the first two terms are H_0 and the last is V(t).

H_0 |n⟩ = E_n |n⟩ ,  H_0 = ħΩ ( a†a + ½ ) ,  E_n = ħΩ ( n + ½ )  (2.129)

For n ≠ 0:

b_n(t) = −(i/ħ) ∫_{t0}^{t} dτ V_n0(τ) e^{iω_n0 τ}  (2.130)

Using ω_n0 = (E_n − E_0)/ħ = nΩ, and taking the pulse to be complete:

b_n = −(i/2ħ) δk_0 ⟨n|x²|0⟩ ∫_{−∞}^{+∞} dτ e^{inΩτ} e^{−τ²/2σ²}  (2.131)

So,

b_n = −(i/2ħ) δk_0 √(2π) σ ⟨n|x²|0⟩ e^{−n²σ²Ω²/2}  (2.132)

Here we used:

x² = (ħ/2mΩ) ( a + a† )² = (ħ/2mΩ) ( aa + a†a + aa† + a†a† )  (2.133)

so that, starting from |0⟩, only ⟨2|x²|0⟩ is nonzero.
Generally this wouldn't be realistic, because you would certainly expect that excitation to v = 1 would dominate over excitation to v = 2. A real system would also be anharmonic, in which case the leading term in the expansion of the potential V(x), which is linear in x, would not vanish as it does for a harmonic oscillator, and this would lead to matrix elements that raise and lower the excitation by one quantum.
⟨2|x²|0⟩ = √2 ħ / 2mΩ  (2.134)

So,

b_2 = −i √π δk_0 σ / (2mΩ) · e^{−2σ²Ω²}  (2.135)

and

P_2 = |b_2|² = π δk_0² σ² / (4m²Ω²) · e^{−4σ²Ω²} = (π/4) (δk_0/k_0)² σ²Ω² e^{−4σ²Ω²}  (2.136)
From the exponential argument, significant transfer of amplitude occurs when the compression
pulse is short compared to the vibrational period.
σ << 1/Ω  (2.137)

Validity: First-order perturbation theory doesn't allow for b_n to change much from its initial value. For P_2 << 1,

(π/4) (δk_0/k_0)² σ²Ω² << 1  (2.138)
Generally, the perturbation δk(t) must be small compared to k0, i.e. H 0 >> V , but it should also
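The Gaussian integral behind eqs. (2.131)-(2.132) is easy to verify numerically; the values of Ω, σ, and n below are illustrative:

```python
import numpy as np

# Check: integral of exp(i n Omega tau - tau^2/(2 sigma^2)) d tau
#        = sqrt(2 pi) * sigma * exp(-(n Omega sigma)^2 / 2)
Omega, sigma, n = 1.0, 0.4, 2
tau = np.linspace(-8 * sigma, 8 * sigma, 200001)
dtau = tau[1] - tau[0]
integrand = np.exp(1j * n * Omega * tau - tau**2 / (2 * sigma**2))
numeric = np.sum(integrand) * dtau     # simple Riemann sum (tails negligible)
analytic = np.sqrt(2 * np.pi) * sigma * np.exp(-(n * Omega * sigma)**2 / 2)

print(abs(numeric - analytic))         # agreement to high precision
```

This is the same Fourier-transform statement as eq. (2.125): the pulse only transfers amplitude efficiently when it contains spectral weight at the transition frequency nΩ.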
A number of important relationships in quantum mechanics that describe rate processes come
from 1st order perturbation theory. For that, there are a couple of model problems that we want
to work through:
Constant Perturbation

V(t) = θ(t − t_0) V = { 0 for t < t_0 ;  V for t ≥ t_0 }

To first order, we have ⟨k| U_0† V U_0 |ℓ⟩ = V_kℓ e^{iω_kℓ(t−t_0)}, so

b_k = δ_kℓ − (i/ħ) ∫_{t0}^{t} dτ e^{iω_kℓ(τ−t_0)} V_kℓ(τ)  (2.139)

Setting t_0 = 0:

b_k = −(i/ħ) V_kℓ ∫_0^t dτ e^{iω_kℓ τ}  (2.140)

    = −[ V_kℓ / (E_k − E_ℓ) ] [ exp(iω_kℓ t) − 1 ]  (2.141)

    = −[ 2i V_kℓ e^{iω_kℓ t/2} / (E_k − E_ℓ) ] sin(ω_kℓ t/2)  (2.142)

P_k = |b_k|² = [ 4 |V_kℓ|² / (E_k − E_ℓ)² ] sin²( ½ ω_kℓ t )  (2.143)
Writing this as

P_kℓ = ( |V|² / Δ² ) sin²( Δt/ħ )  (2.144)

where Δ = (E_k − E_ℓ)/2, compare this with the exact result we have for the two-level problem:

P_kℓ = [ V² / ( V² + Δ² ) ] sin²( √(Δ² + V²) t/ħ )  (2.145)

Clearly the perturbative result holds for V << Δ. We can also write (2.144) as

P_kℓ = ( V² t² / ħ² ) sinc²( Δt/ħ )  (2.146)

where sinc x = (sin x)/x, so that

lim_{Δ→0} P_kℓ = V² t² / ħ²  (2.147)

Since the energy spread of states to which transfer is efficient scales approximately as E_k − E_ℓ < 2πħ/t, this observation is sometimes referred to as an uncertainty relation, with ΔE·Δt ≥ 2πħ. However, remember that this is really just an observation of the principles of Fourier transforms: a frequency can only be determined to within the inverse of the time period over which you observe oscillations. Since time is not an operator, it is not a true uncertainty relation like Δp·Δx ≥ ħ/2.
The quadratic growth of P_k for Δ = 0 is certainly unrealistic (at least for long times), but the expression shouldn't be expected to hold for what is a "strong coupling" case, Δ = 0. However, let's continue looking at this behavior. In the long-time limit, the sinc²(x) function narrows rapidly with time, giving a delta function:

lim_{a→∞} sin²( ax/2 ) / ( a x² ) = (π/2) δ(x)  (2.148)

lim_{t→∞} P_kℓ(t) = ( 2π |V_kℓ|² / ħ ) δ( E_k − E_ℓ ) t  (2.149)
t →∞
The delta function enforces energy conservation, saying that the energies of the initial and target
state must be the same in the long time limit.
What is interesting in eq. (2.149) is that we see a probability growing linearly in time.
This suggests a transfer rate that is independent of time, as expected for simple kinetics:
w_kℓ(t) = ∂P_kℓ(t)/∂t = ( 2π / ħ ) |V_kℓ|² δ( E_k − E_ℓ )  (2.150)
This is one statement of Fermi's Golden Rule (the state-to-state form), which describes relaxation rates from first-order perturbation theory. We will show that this rate properly describes the long-time exponential relaxation that you would expect from the solution to dP/dt = −wP.
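A sketch of how this rate emerges: sum the first-order probability (2.143) over a dense, uniform ladder of final states (ħ = 1; the coupling V and level spacing below are made-up numbers). The total probability grows linearly in t with slope 2π|V|²ρ:

```python
import numpy as np

# Golden-rule behavior: P_total(t) = sum_k 4 V^2 sin^2(E_k t / 2) / E_k^2
# over a dense band of final states E_k around E_l = 0 (hbar = 1).
V = 0.002                                  # illustrative coupling
dE = 0.001                                 # level spacing, rho = 1/dE
Ek = (np.arange(-2000, 2000) + 0.5) * dE   # offset grid avoids E_k = 0
rho = 1.0 / dE

def P_total(t):
    return np.sum(4 * V**2 * np.sin(Ek * t / 2)**2 / Ek**2)

w_expected = 2 * np.pi * V**2 * rho        # Fermi golden rule rate
rates = [P_total(t) / t for t in (10.0, 20.0, 40.0)]
print(rates, w_expected)   # nearly constant rate, close to the golden rule
```

The apparently irreversible, constant rate comes from destructive interference among the many oscillating final-state amplitudes, even though each individual state would show reversible oscillation.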
In the derivation above, the perturbation was switched on suddenly:

V(t) = θ(t − t_0) V(t)

This leads to unphysical consequences: you generally can't turn on a perturbation fast enough for it to appear instantaneous. Since first-order perturbation theory says that the transition amplitude is related to the Fourier transform of the perturbation, the sudden switch introduces additional Fourier components into the spectral dependence of the perturbation, even for a monochromatic perturbation! So, instead, let's apply the perturbation slowly from t = −∞ with a turn-on rate η:

V(t) = V e^{ηt}

b_k = ⟨k| U_I |ℓ⟩ = −(i/ħ) ∫_{−∞}^{t} dτ e^{iω_kℓ τ} V_kℓ e^{ητ}

    = −V_kℓ exp[ ηt + i(E_k − E_ℓ)t/ħ ] / ( E_k − E_ℓ − iħη )

P_kℓ = |b_k|² = |V_kℓ|² exp[2ηt] / [ (E_k − E_ℓ)² + (ħη)² ] = ( |V_kℓ|²/ħ² ) exp[2ηt] / ( ω_kℓ² + η² )

The gradually turned-on perturbation gives a Lorentzian lineshape whose width depends on the turn-on rate and is independent of time. (The amplitude grows exponentially in time.) Notice, there are no nodes in P_kℓ. The rate is

w_kℓ = ∂P_kℓ/∂t = 2η P_kℓ

and using

lim_{η→0} η / ( η² + ω_kℓ² ) = π δ(ω_kℓ)

we recover, in the adiabatic limit,

w_kℓ = ( 2π/ħ² ) |V_kℓ|² δ(ω_kℓ) = ( 2π/ħ ) |V_kℓ|² δ( E_k − E_ℓ )
Harmonic Perturbation

Now consider an oscillating perturbation turned on at t_0:

V_kℓ(t) = V_kℓ cos ωt = ( V_kℓ / 2 ) [ e^{iωt} + e^{−iωt} ]  (2.152)

To first order, we have:

b_k = ⟨k|ψ_I(t)⟩ = −(i/ħ) ∫_{t0}^{t} dτ V_kℓ(τ) e^{iω_kℓ τ}

    = −( i V_kℓ / 2ħ ) ∫_0^t dτ [ e^{i(ω_kℓ+ω)τ} + e^{i(ω_kℓ−ω)τ} ]   (setting t_0 → 0)  (2.153)

    = −( V_kℓ / 2ħ ) [ ( e^{i(ω_kℓ+ω)t} − 1 )/( ω_kℓ+ω ) + ( e^{i(ω_kℓ−ω)t} − 1 )/( ω_kℓ−ω ) ]

b_k = ( −i V_kℓ / ħ ) [ e^{i(ω_kℓ−ω)t/2} sin[ (ω_kℓ−ω)t/2 ]/( ω_kℓ−ω ) + e^{i(ω_kℓ+ω)t/2} sin[ (ω_kℓ+ω)t/2 ]/( ω_kℓ+ω ) ]  (2.154)
Notice that these terms are only significant when ω ≈ ωk . As we learned before, resonance is
required to gain significant transfer of amplitude.
The first term dominates for E_k > E_ℓ (absorption, E_k = E_ℓ + ħω), and the second for E_k < E_ℓ (stimulated emission, E_k = E_ℓ − ħω). Retaining only the resonant (absorption) term,

P_kℓ = |b_k|² = [ |V_kℓ|² / ħ²(ω_kℓ−ω)² ] sin²[ ½ (ω_kℓ−ω) t ]  (2.155)

or, for a dipole transition driven by a field,

P_kℓ = [ |E_0|² |μ_kℓ|² / ħ²(ω_kℓ−ω)² ] sin²[ ½ (ω_kℓ−ω) t ]
which points out that this is valid for couplings |V_kℓ| that are small relative to the detuning.
By expanding sin x = x − x³/3! + …, we see that on resonance, Δω = ω_kℓ − ω → 0,

lim_{Δω→0} P_k(t) = ( |V_kℓ|² / 4ħ² ) t²  (2.157)
This clearly will not describe long-time behavior. This is a result of 1st order perturbation theory
not treating the depletion of |ℓ⟩. However, it will hold for small P_k, so we require

t << 2ħ / |V_kℓ|  (2.158)

At the same time, we can't observe the system on too short a time scale. We need the field to make several oscillations for it to be a harmonic perturbation:

t > 1/ω ≈ 1/ω_kℓ  (2.159)

Together these conditions imply

|V_kℓ| << ħ ω_kℓ  (2.160)
As before, we can also apply the harmonic perturbation adiabatically:

V(t) = V e^{ηt} cos ωt

b_k = −(i/ħ) ∫_{−∞}^{t} dτ V_kℓ e^{iω_kℓ τ + ητ} [ e^{iωτ} + e^{−iωτ} ] / 2

    = ( V_kℓ / 2ħ ) e^{ηt} [ e^{i(ω_kℓ+ω)t} / ( −(ω_kℓ+ω) + iη ) + e^{i(ω_kℓ−ω)t} / ( −(ω_kℓ−ω) + iη ) ]

Again, we have a resonant and an antiresonant term, which are now broadened by η. If we only consider absorption:

P_kℓ = |b_k|² = ( |V_kℓ|² / 4ħ² ) e^{2ηt} / [ (ω_kℓ−ω)² + η² ]

To describe a slow turn-on we calculate the adiabatic limit, setting η → 0. We will calculate the rate of transitions w_kℓ = ∂P_kℓ/∂t. But let's restrict ourselves to long enough times that the harmonic perturbation has cycled a few times (this allows us to neglect cross terms); then the resonances sharpen into delta functions:

w_kℓ = ( π / 2ħ² ) |V_kℓ|² [ δ(ω_kℓ−ω) + δ(ω_kℓ+ω) ]
Andrei Tokmakoff, MIT Department of Chemistry, 3/1/2007
The first-order transition rate and probability of observing the system in a state |k⟩ after applying a constant perturbation to |ℓ⟩ don't allow for feedback between quantum states, so they turn out to be most useful in cases where we are interested just in the rate of leaving a state. This question shows up commonly when we calculate the transition probability not to an individual eigenstate, but to a distribution of eigenstates. Often the set of eigenstates forms a continuum of accepting states, for instance in vibrational relaxation or ionization.
Transfer to a set of continuum (or bath) states forms the basis for describing irreversible
relaxation. You can think of the material Hamiltonian for our problem being partitioned into two
portions, H = H S + H B + VSB ( t ) , where you are interested in the loss of amplitude in the H S
states as it leaks into H B . Qualitatively, you expect deterministic, oscillatory feedback between
discrete quantum states. However, the amplitude of one discrete state coupled to a continuum
will decay due to destructive interferences between the oscillating frequencies for each member
of the continuum.
So, using the same ideas as before, let's calculate the transition probability from |ℓ⟩ to a distribution of final states:

P_kℓ = |b_k|² :  probability of observing amplitude in a discrete eigenstate of H_0

ρ(E_k) :  density of states, with units of 1/E_k; it describes the distribution of final states, all eigenstates of H_0

The total transition probability is

P̄_kℓ = Σ_k P_kℓ .  (2.161)
For a continuous distribution of final states,

P̄_kℓ = ∫ dE_k ρ(E_k) P_kℓ  (2.162)

P̄_kℓ = ∫ dE_k ρ(E_k) · 4 |V_kℓ|² sin²( (E_k − E_ℓ) t / 2ħ ) / ( E_k − E_ℓ )²  (2.163)

Here, we have chosen the limits −∞ → +∞, since ρ(E_k) is broad relative to the sharply peaked sin² factor. Evaluating ρ(E_k) and V_kℓ at E_k = E_ℓ and using the identity

∫_{−∞}^{+∞} dΔ sin²(aΔ)/Δ² = aπ  (2.165)

with a = t/2ħ, we have

P̄_kℓ = ( 2π / ħ ) ρ( E_k = E_ℓ ) |V_kℓ|² t  (2.166)
The total transition probability is linearly proportional to time. For relaxation processes, we will
be concerned with the transition rate, wk :
w_kℓ = ∂P̄_kℓ/∂t  (2.167)

w_kℓ = ( 2π / ħ ) ρ( E_k = E_ℓ ) |V_kℓ|²  (2.168)

or, in the state-to-state form with an integral over the continuum,

w_kℓ = ( 2π / ħ ) |V_kℓ|² δ( E_k − E_ℓ ) ,  w̄_kℓ = ∫ dE_k ρ(E_k) w_kℓ  (2.169)
This expression is known as Fermi’s Golden Rule. Note the rates are independent of time. As
we will see going forward, this first-order perturbation theory expression involving the matrix
element squared and the density of states is very common in the calculation of chemical rate
processes.
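The identity (2.165) that produced the linear-in-time probability can itself be checked numerically; the value a = 3 below is arbitrary:

```python
import numpy as np

# Check the identity (2.165): integral over all Delta of
# sin^2(a*Delta)/Delta^2 equals a*pi.
a = 3.0
d = np.linspace(1e-6, 2000.0, 2_000_001)   # integrand is even in Delta
vals = np.sin(a * d)**2 / d**2
integral = 2.0 * np.sum(vals) * (d[1] - d[0])
print(integral, a * np.pi)                 # close agreement
```

Since a plays the role of t/2ħ, the linear growth of the integral with a is exactly the linear growth of P̄ with time that defines the golden-rule rate.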
Range of validity

For discrete states we saw that the first-order expression held for |V_kℓ| << ħω_kℓ and for sufficiently short times. Here, since the probability grows linearly,

P̄_kℓ = w̄_kℓ ( t − t_0 ) ,  valid for t << 1/w̄_kℓ  (2.170)

We also require that the width of the continuum be large compared to the rate:

ΔE >> ħ w̄_kℓ ,  i.e.  Δω_kℓ >> w̄_kℓ  (2.172)