
Lecture Notes for Statistical Mechanics

Fall 2010
Lecturer: Professor Malvin Ruderman
Transcriber: Alexander Chen
December 15, 2010
Contents
1 Lecture 1
1.1 Basic Information
1.2 Fundamental Assumption of Statistical Mechanics
1.3 Energy and Expectation Values
1.4 Justification of the Assumption that P Only Depends on E
1.5 Deriving the Fundamental Theorem
1.6 An Example
2 Lecture 2
2.1 Some Remarks
2.2 Harmonic Oscillator Example, Continued
2.3 Some Notes about Counting
3 Lecture 3
4 Lecture 4
4.1 A Homework Problem
4.2 About the Definition of Entropy
4.3 About Thermodynamics
5 Lecture 5
5.1 Some Statements on Entropy
5.2 The Chemical Potential
6 Lecture 6
6.1 Review
6.2 Reactions
7 Lecture 7
7.1 The Entropy of Mixing
7.2 The Chemical Potential for Photons
8 Lecture 8
9 Lecture 9
10 Lecture 10
10.1 Digression on Distributions
10.2 Historical Notes
11 Lecture 11
11.1 Classical Systems
12 Lecture 12
12.1 Canonical Ensemble
13 Lecture 13
13.1 Continuing on Canonical Ensembles
13.2 Grand Canonical Ensemble
14 Lecture 14
15 Lecture 15
16 Lecture 16
17 Lecture 17
17.1 Problem of White Dwarf Star
17.2 Heavy Atoms
18 Lecture 18
18.1 Paramagnetism
18.2 Diamagnetism
19 Lecture 19
20 Lecture 20
20.1 Superfluids
21 Lecture 21
22 Lecture 22
Statistical Mechanics Lecture 1
1 Lecture 1
1.1 Basic Information
Course Instructor: Malvin Ruderman
Office Hours: Tue 1:10 - 2:00 pm, Wed 12:00 - 1:30 pm
Textbook: Statistical Mechanics by Pathria
1.2 Fundamental Assumption of Statistical Mechanics
If we take a system of N particles in a container of volume V, then to know the basic properties of the system we would need to construct the general Hamiltonian of the N coordinates and N momenta of these particles and solve for its eigenvalues, which are the energy levels achievable by this system. In general this is a very difficult problem, since the Hamiltonian will usually involve complicated interactions between the particles and it becomes a messy many-body problem.

But we don't do that in statistical mechanics. Instead we assume that the energy eigenvalues are known, and we ask about the behavior of the system at finite temperature. Take a system of N particles in a heat bath of temperature T, wait until equilibrium, then measure the energy of the system. By "wait until equilibrium" we mean that we wait long enough that every state is accessible. The fundamental theorem of statistical mechanics states that the probability of finding the system in a state with energy E_i is:

    P_i = \frac{e^{-E_i/k_B T}}{\sum_{\text{all states}} e^{-E_i/k_B T}}    (1.1)

The quantity in the denominator is called the partition function and is denoted here by Q (rather than the more common Z). This theorem tells us that the probability depends only on the energy of the state, and that the state with the lowest energy, i.e. the ground state, is the most probable state.
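As a quick numerical sketch of equation (1.1) (the three-level spectrum below is invented purely for the illustration), we can tabulate the probabilities and check that they sum to one and that the ground state is the most probable:

```python
import math

def boltzmann_probs(energies, kT):
    """Probabilities P_i = exp(-E_i/kT) / Q for a list of energy levels."""
    weights = [math.exp(-E / kT) for E in energies]
    Q = sum(weights)                      # the partition function
    return [w / Q for w in weights]

# Toy spectrum (arbitrary units): ground state 0, two excited levels.
levels = [0.0, 1.0, 2.0]
probs = boltzmann_probs(levels, kT=1.0)

print(probs)   # sums to 1; decreasing with energy
```

Raising kT makes the three probabilities approach each other, which is the high-temperature limit.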
1.3 Energy and Expectation Values
Now that we know the probabilities of the states, we can calculate the average energy of the system, as well as the expectation values of other observables. The average energy is, by definition, just

    \langle E \rangle = \sum_i P_i E_i    (1.2)

However, the expectation of the square of the energy, \langle E^2 \rangle = \sum_i P_i E_i^2, is not in general equal to the square of the average energy, so there are usually fluctuations:

    \langle (\Delta E)^2 \rangle = \langle E^2 \rangle - \langle E \rangle^2 > 0    (1.3)

In general we can estimate the magnitude of the fluctuation:

    \frac{\langle E^2 \rangle - \langle E \rangle^2}{\langle E \rangle^2} \sim O\!\left(\frac{1}{N}\right)    (1.4)

so the fluctuation is negligible for a large enough system. In such a system we can simply talk about "the energy" of the system, but in smaller systems we need to distinguish between the expectation value of the energy, the median energy, and the most probable energy.
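The 1/N scaling in (1.4) can be seen directly for N independent subsystems, since means and variances both add; a sketch with an invented two-level subsystem:

```python
import math

def level_stats(energies, kT):
    """Mean and variance of the energy of one subsystem in equilibrium."""
    w = [math.exp(-E / kT) for E in energies]
    Q = sum(w)
    p = [x / Q for x in w]
    mean = sum(pi * E for pi, E in zip(p, energies))
    var = sum(pi * E**2 for pi, E in zip(p, energies)) - mean**2
    return mean, var

# One two-level subsystem (arbitrary units); take N independent copies.
mean1, var1 = level_stats([0.0, 1.0], kT=1.0)

def relative_fluctuation_sq(N):
    # <(dE)^2>/<E>^2 = (N var) / (N mean)^2 = (var/mean^2) / N  ~  O(1/N)
    return var1 / (N * mean1**2)

print(relative_fluctuation_sq(100))
print(relative_fluctuation_sq(10000))  # 100 times smaller: scales as 1/N
```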
The expectation value of any observable can be calculated in the same way, i.e.

    \langle A \rangle = \sum_i P_i \langle i|A|i \rangle    (1.5)
1.4 Justification of the Assumption that P Only Depends on E
Let us consider a 2-state problem where states |1⟩ and |2⟩ have the same energy. This can be due to physical separation by a potential barrier. We can form symmetric and antisymmetric linear combinations of these states, which will be the ground state and the excited state. No matter what the initial state is, its time evolution will look like the following:

    |\psi(t)\rangle = \cos(\omega t)\,|1\rangle + \sin(\omega t)\,|2\rangle    (1.6)

The state just oscillates between the two states, so if we take a time average, the probability of finding the particle in either state is 1/2.
We can consider a particle with n states using Fermi's Golden Rule:

    \frac{1}{\tau_{ij}} = \frac{2\pi}{\hbar}\, |V_{ij}|^2\, \frac{dn_i}{dE}    (1.7)

If a particle in state i can decay into some states j, and a particle in a state j can also jump back, then we can calculate the rate of change of the probability of finding the particle in state i:

    \frac{dP_i}{dt} = -\sum_j |\langle j|V|i\rangle|^2 P_i + \sum_j |\langle i|V|j\rangle|^2 P_j = \sum_j |\langle i|V|j\rangle|^2 \,(P_j - P_i)    (1.8)

So if the system is in equilibrium and the probabilities have become stationary, the probability for the particle to be in state i must be the same as in state j.
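Equation (1.8) can be integrated numerically; with symmetric (hermitian) rates the probabilities relax to a common value, as the argument claims. A minimal sketch with made-up uniform rates:

```python
# Euler integration of dP_i/dt = sum_j w_ij (P_j - P_i) with symmetric rates.
# The rates and the initial distribution are invented for the illustration.
n = 4
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = 1.0          # hermitian perturbation: w_ij = w_ji

P = [1.0, 0.0, 0.0, 0.0]                 # start entirely in state 1
dt = 0.01
for _ in range(5000):
    dP = [sum(w[i][j] * (P[j] - P[i]) for j in range(n)) for i in range(n)]
    P = [P[i] + dt * dP[i] for i in range(n)]

print(P)   # all probabilities relax toward 1/n = 0.25
```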
1.5 Deriving the Fundamental Theorem
Our fundamental assumption is that the probabilities depend only on the temperature and on the energy E_i of the state. So we can write the probability of finding the system in state i as a function P_i(T, E_i). Now we want to constrain this function using an invariance. Because the energy scale is arbitrary, we should have the freedom to shift the reference point of energy without modifying the relative probabilities, so we have the following invariance:

    \frac{P_i(T, E_i)}{P_j(T, E_j)} = \frac{P_i(T, E_i + \Delta)}{P_j(T, E_j + \Delta)}    (1.9)

This is a functional equation and the only solution is:

    P_i = \frac{e^{-E_i \beta(T)}}{\sum_i e^{-E_i \beta(T)}}    (1.10)

where \beta(T) is an as-yet unknown function of T.

Now we ask whether \beta(T) depends on the detailed properties of the system we consider. Suppose we have two systems 1 and 2, with functions \beta_1(T) and \beta_2(T) in their probability formulas. Suppose they are each in equilibrium at temperature T and are brought together to form a new system 3, which has \beta(T). The probability for the combined system to be in state ij (system 1 in state i and system 2 in state j) is:

    P_{ij} = \frac{e^{-(E_i + E_j)\,\beta(T)}}{\sum_{ij} e^{-(E_i + E_j)\,\beta(T)}}    (1.11)

But we can also think of system 3 as a composite one, so the same probability can be expressed as the product of the individual probabilities:

    P_{ij} = P_i P_j = \frac{e^{-E_i \beta_1(T)}}{\sum_i e^{-E_i \beta_1(T)}} \cdot \frac{e^{-E_j \beta_2(T)}}{\sum_j e^{-E_j \beta_2(T)}}    (1.12)

The two ways of calculating agree if and only if \beta(T) = \beta_1(T) = \beta_2(T). So we conclude that the function \beta(T) is independent of the details of the system, and the only thing we need to do is determine \beta(T) for one simple system.
Consider one particle in a 1D box (infinite well). We know from elementary quantum mechanics that

    E_n = \frac{\hbar^2}{2m}\left(\frac{\pi n}{l}\right)^2

Plugging this into the above formulas for the probability and the expectation value, we get:

    \langle E \rangle = \frac{1}{2\beta(T)}    (1.13)

Generalizing this calculation to 3D and N particles gives \langle E \rangle = \frac{3N}{2\beta(T)}. But we know from the classical theory of ideal gases that \langle E \rangle = \frac{3}{2} N k_B T. So we know that:

    \beta(T) = \frac{1}{k_B T}    (1.14)
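A numerical check of this determination of β: for one particle with box-like levels E_n ∝ n², summing the Boltzmann average directly in the regime where kT is much larger than the level spacing reproduces ⟨E⟩ = 1/(2β). The units and cutoff below are chosen just for the illustration:

```python
import math

def mean_energy_box(beta, c=1e-6, nmax=200000):
    """<E> for levels E_n = c n^2 (1D box), summed directly."""
    Z = 0.0    # partition function
    EZ = 0.0   # energy-weighted sum
    for n in range(1, nmax):
        w = math.exp(-beta * c * n * n)
        Z += w
        EZ += c * n * n * w
    return EZ / Z

print(mean_energy_box(beta=1.0))   # close to 1/(2 beta) = 0.5
```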
1.6 An Example
Consider the harmonic oscillator potential. We know that the energy levels are proportional to n. Let us omit the zero-point energy of \frac{1}{2}\hbar\omega for now and assume E_n = n\hbar\omega. We can easily calculate the partition function for one particle, since it is just a geometric series:

    Q = \sum_n e^{-n\hbar\omega/k_B T} = \frac{1}{1 - e^{-\hbar\omega/k_B T}}, \qquad \frac{1}{Q} = 1 - e^{-\hbar\omega/k_B T}    (1.15)

So we can see that the probability of finding the particle in state n goes like e^{-n\hbar\omega/k_B T}. Now suppose we have more particles. If there are two particles, the energy becomes E = \hbar\omega(n_1 + n_2) and there is degeneracy: the number of states corresponding to the same E is proportional to E. Generalizing this result, for N particles the number of states with the same energy is proportional to E^{N-1}.
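The E^{N-1} growth of the degeneracy is just the stars-and-bars count of the ways to split n quanta among N oscillators, C(n+N-1, N-1), which behaves like n^{N-1}/(N-1)! for large n. A quick check:

```python
from math import comb, factorial

def degeneracy(n, N):
    """Number of N-tuples of nonnegative integers summing to n
    (stars and bars): C(n + N - 1, N - 1)."""
    return comb(n + N - 1, N - 1)

N = 5
for n in (100, 200):
    g = degeneracy(n, N)
    approx = n ** (N - 1) / factorial(N - 1)   # leading behavior ~ E^(N-1)
    print(n, g, approx, g / approx)            # ratio tends to 1 as n grows
```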
To be continued. . .
2 Lecture 2
2.1 Some Remarks
Recall that last time we argued for the form of the fundamental theorem of statistical mechanics. The argument was based on first principles, except for one assumption which is not first-principle: we assumed that the perturbation of the energy is hermitian, so that

    |\langle i|V|j\rangle|^2 = |\langle j|V|i\rangle|^2    (2.1)

This assumption may not hold if the transitions are not time-reversible.
2.2 Harmonic Oscillator Example, Continued
Consider again the harmonic oscillator example. We want to study how the physics changes from 1 particle to N particles. We already know the form of the probability for the system to take energy level n for 1 and 2 particles, and we can anticipate the result for N particles:

    N = 1:\quad P_n = \frac{e^{-n\hbar\omega/kT}}{Q} = \left(1 - e^{-\hbar\omega/kT}\right) e^{-n\hbar\omega/kT}
    N = 2:\quad P_n = \frac{n\, e^{-n\hbar\omega/kT}}{Q}
    \cdots
    N:\quad\;\; P_n \propto \frac{n^{N-1}\, e^{-n\hbar\omega/kT}}{Q}

The Q is there to ensure proper normalization, so that \sum_n P_n = 1. We will show that the final line is true using some approximations.
Because the energy level of the whole N-particle system is just the sum of the energy levels of the individual particles, we have:

    n = n_1 + n_2 + \cdots + n_N    (2.2)
    E = E_1 + E_2 + \cdots + E_N    (2.3)

In the case where kT is much greater than \hbar\omega, we can think of the energy levels as continuous, because the little steps in energy will not introduce much error in our approximation. Therefore we can replace summations by integrals and the result will look much nicer. In order to calculate the total probability, we basically integrate over all the possible configurations:

    P(n) = \int\!\!\int \cdots \!\int P(n - n_1 - n_2 - \cdots - n_{N-1})\, P_1(n_1)\, P_2(n_2) \cdots\; dn_1\, dn_2 \cdots dn_{N-1}    (2.4)
This integral looks quite difficult, but we can do it in Fourier space, where the convolution becomes a product:

    \int_{-\infty}^{\infty} e^{ikn} P(n)\, dn = \left[\int_{-\infty}^{\infty} e^{ikn_1} P_1(n_1)\, dn_1\right]^N    (2.5)

We can take P_1(n) = 0 for all n < 0 (writing \alpha = \hbar\omega/kT, so that P_1(n) \propto e^{-\alpha n}) and do the integral for P_1:

    \tilde{P}_1(k) = \int_0^{\infty} e^{ikn - \alpha n}\, dn = \frac{1}{\alpha - ik}    (2.6)
And doing the inverse Fourier transform, we can find

    P(n) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-ikn} \left[\tilde{P}_1(k)\right]^N dk    (2.7)

Because \tilde{P}_1(k) \lesssim 1, the N-th power damps everything except near the highest point, which turns into something like a gaussian; therefore P(n) should also resemble a gaussian in the large-N limit. Now let us evaluate the above integral analytically. We substitute z = k + i\alpha and convert the integral to a contour integral:

    P(n) = \frac{e^{-\alpha n}}{2\pi} \oint \frac{dz}{z^N}\, e^{-inz} \quad (\text{up to constant factors})    (2.8)

where we close the contour from below, because the integrand has an N-th order pole in the lower half-plane. In the end we have

    P_N(E) = \frac{e^{-E/kT}\, E^{N-1}}{(N-1)!} \quad (\text{up to normalization})

The most probable energy can be found by solving \frac{\partial}{\partial E} P_N(E) = 0, and the result is:

    E = (N-1)\, kT    (2.9)

We can also find the width of this near-gaussian distribution by looking at the two points where the second derivative changes sign. These can be found by solving \frac{\partial^2}{\partial E^2} P_N(E) = 0, and the answer is:

    \Delta E = 2kT\,(N-1)^{1/2}    (2.10)
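A quick numerical check of (2.9): evaluating P_N(E) ∝ E^{N-1} e^{-E/kT} on a grid and locating its maximum recovers the most probable energy (N-1)kT. The grid parameters below are arbitrary:

```python
import math

def P_N(E, N, kT=1.0):
    """Unnormalized P_N(E) ~ E^(N-1) exp(-E/kT)."""
    return E ** (N - 1) * math.exp(-E / kT)

N = 50
Es = [i * 0.01 for i in range(1, 20000)]        # grid of energies in units of kT
mode = max(Es, key=lambda E: P_N(E, N))

print(mode)   # most probable energy, close to (N-1) kT = 49
```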
2.3 Some Notes about Counting
Consider N particles in a volume V. We have a smaller volume v inside V and we want to ask questions like "what is the probability of finding n particles inside the volume v?".

First let's recall the binomial expansion

    (p+q)^N = \sum_{n=0}^{N} \binom{N}{n} p^n q^{N-n}    (2.11)

where the binomial coefficients are defined to be \binom{N}{n} = \frac{N!}{n!\,(N-n)!}. If we let p + q = 1 then we have

    1 = \sum_{n=0}^{N} \frac{N!}{n!\,(N-n)!}\, p^n (1-p)^{N-n}    (2.12)

for any p < 1. For classical particles, the probability of finding a given particle inside a volume v inside the box V is just v/V, and if we substitute p = v/V into the above equation, then the summand is just the probability of finding exactly n out of the N particles inside volume v at any given time. In quantum mechanics we would need to integrate the wavefunction squared to obtain p, but otherwise nothing changes.
Consider some extremes. For n = N, i.e. all the particles squeezed inside v, the probability is:

    P = \left(\frac{v}{V}\right)^N = e^{N \log\frac{v}{V}}    (2.13)

For n = 0, however, we have:

    P = \left(\frac{V-v}{V}\right)^N = \left(1 - \frac{1}{N}\,\frac{vN}{V}\right)^N \;\xrightarrow{\;N\to\infty\;}\; e^{-N v/V}    (2.14)
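A sketch checking the n = 0 extreme against its large-N limit (the values of N and v/V are made up for the example):

```python
from math import comb, exp

N, v_over_V = 1000, 0.01

def P(n):
    """Binomial probability of finding exactly n of the N particles in v."""
    p = v_over_V
    return comb(N, n) * p ** n * (1 - p) ** (N - n)

# n = 0: compare the exact value with the large-N limit exp(-N v/V).
print(P(0), exp(-N * v_over_V))
```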
3 Lecture 3
What we have discussed is how statistical mechanics might have developed if we had known about quantum levels from the start. We started with the fundamental assumption that all states with the same energy are equally probable in equilibrium. The key is that we have a Hamiltonian with a little perturbation which permits time-reversible transitions.
Recall that last time we did a counting exercise and found that for N particles in volume V, the probability of finding n particles in a smaller volume v is:

    P(n) = \frac{N!}{n!\,(N-n)!} \left(\frac{v}{V}\right)^n \left(\frac{V-v}{V}\right)^{N-n}    (3.1)

We will often use some approximations:

    n! = \int_0^{\infty} e^{-x} x^n\, dx    (3.2)
    \log n! \approx n \log n - n + \frac{1}{2}\log n    (3.3)

where the latter is called the Stirling approximation; it means that n! = n(n-1)(n-2)\cdots \approx n^n e^{-n}. If we take these expressions and use them on the previous expression for P(n) to calculate \log P(n), we can see that the graph looks like a thin gaussian, which (for v = V/2) peaks at around n = N/2. The peak probability is about 1/\sqrt{N}, and the width is about \sqrt{N}.
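A quick comparison of log n! with the notes' form of the Stirling approximation; the absolute error tends to the constant (1/2) log 2π ≈ 0.92, which this form drops, so the relative error is tiny for large n:

```python
from math import lgamma, log

def log_factorial(n):
    return lgamma(n + 1)                    # exact log n!

def stirling(n):
    return n * log(n) - n + 0.5 * log(n)    # the notes' form of Stirling

for n in (10, 100, 1000):
    print(n, log_factorial(n), stirling(n))
```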
Recall that if we want to measure a quantum mechanical variable A, the expectation value we get is

    \langle A \rangle = \sum_i P_i \langle i|A|i\rangle    (3.4)

and usually we have \langle A^2 \rangle \neq \langle A \rangle^2. If we go to the N \to \infty limit, called the thermodynamic limit, the system has a well-defined energy, and some variables are of particular interest. The thermodynamic variables fall into two categories:

Extensive variables: U, C_V, S, N; these increase as the size of the system increases.
Intensive variables: T, P; these do not increase as the size of the system increases.
Now let's turn to the special variable entropy, which is the missing link between thermodynamics and statistical mechanics. We want to define entropy and find a way to calculate it. Consider two adjacent boxes labeled 1 and 2, connected to form system 3. Labeling the states in 1 by i and the states in 2 by j, we have:

    P^{(3)}_{ij} = P^{(1)}_i P^{(2)}_j    (3.5)

To form an extensive quantity, i.e. S^{(3)} = S^{(1)} + S^{(2)}, it is appealing to define S = -\sum_i \log P_i, but because the probability of some high-energy states goes to zero, this definition easily leads to infinities. What about defining S = -k \sum_i P_i \log P_i? We can calculate the entropy of system 3 as:

    S^{(3)} = -k \sum_{ij} (P_i P_j) \log(P_i P_j) = -k \sum_{ij} \left(P_i P_j \log P_i + P_i P_j \log P_j\right) = -k\left(\sum_i P_i \log P_i + \sum_j P_j \log P_j\right)    (3.6)

This quantity is manifestly extensive, and this is what we want. The definition is good even if the system is not in thermal equilibrium: whatever the structure of the P_i, the expression still works.
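The extensivity computed in (3.6) is easy to verify numerically for any two distributions (the ones below are made up):

```python
from math import log

def entropy(p, k=1.0):
    """S = -k sum_i p_i log p_i (terms with p_i = 0 contribute 0)."""
    return -k * sum(pi * log(pi) for pi in p if pi > 0)

p1 = [0.5, 0.3, 0.2]                    # invented distributions for systems 1 and 2
p2 = [0.7, 0.3]
p3 = [a * b for a in p1 for b in p2]    # product distribution P_ij = P_i P_j

print(entropy(p3), entropy(p1) + entropy(p2))   # equal: S is extensive
```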
If we look at a distribution over states with almost exactly the same energy, then P_i = 1/N_s, where N_s is the number of such states. The entropy in this case is:

    S = -k \sum_i P_i \log P_i = k \log N_s    (3.7)

This definition of entropy is such that when you depart from thermodynamic equilibrium, the entropy decreases; in other words, the entropy is maximal at thermodynamic equilibrium. This is how we interpret entropy physically.

Remember that in thermodynamics we defined entropy as a function of P, V and T. When the system changes from A to B,

    \Delta S = \int_A^B \frac{dQ}{T}    (3.8)

the integral is independent of the path and is defined to be the entropy change. The second law of thermodynamics says this state function always exists.
We introduced two potentials in thermodynamics:
1. Helmholtz free energy: F = U - TS
2. Gibbs free energy: G = U - TS + PV
These quantities are useful in special circumstances when we want to isolate some thermodynamic effect. If we study a system at fixed temperature, the system tends to minimize its Helmholtz free energy. If we fix the pressure instead of the volume, then we use the Gibbs free energy, which likewise tends to be minimized.
The name "entropy" comes from the Greek word for transformation; in some sense it measures how much transformation was done on the system. What about the free energy? Let's consider the change of the Helmholtz free energy:

    dF = dU - T\,dS - S\,dT    (3.9)

Because we have
1. \delta Q = dU + P\,dV
2. \delta Q = T\,dS
we can rewrite the change of free energy as dF = -P\,dV - S\,dT. On an isotherm, dF = -P\,dV, which is minus the work done on the outside world if we allow the volume to change. In this sense, it is "free" energy.
Let's look back at the definition of S. We can plug in the canonical probability distribution:

    S = -k \sum_i \frac{e^{-E_i/kT}}{Q} \log\frac{e^{-E_i/kT}}{Q} = k \sum_i P_i \frac{E_i}{kT} + k \log Q \sum_i P_i = \frac{U}{T} + k \log Q    (3.10)

Comparing this expression with the definition of the Helmholtz free energy, we see that:

    F = -kT \log Q, \qquad Q = e^{-F/kT}    (3.11)

which relates the Helmholtz free energy directly to the partition function.
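Equation (3.11) can be checked numerically for an arbitrary spectrum by computing F both as -kT log Q and as U - TS (units with k = 1; the levels below are invented):

```python
from math import exp, log

kT = 2.0
levels = [0.0, 1.0, 3.0, 3.5]            # made-up spectrum, units with k = 1

Q = sum(exp(-E / kT) for E in levels)
P = [exp(-E / kT) / Q for E in levels]
U = sum(p * E for p, E in zip(P, levels))
S = -sum(p * log(p) for p in P)          # entropy in units of k

F_from_Q = -kT * log(Q)
F_from_US = U - kT * S                   # T S, with T = kT since k = 1

print(F_from_Q, F_from_US)               # the two expressions agree
```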
Let's consider an example. Combining the first and second laws of thermodynamics, we have T\,dS = dU + P\,dV, so we know that:

    dS = \frac{dU}{T} + \frac{P}{T}\,dV = \frac{1}{T}\left(\frac{\partial U}{\partial V}\right)_T dV + \frac{1}{T}\left(\frac{\partial U}{\partial T}\right)_V dT + \frac{P}{T}\,dV    (3.12)
       = \left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV    (3.13)

Equating corresponding terms, and using the Maxwell relation \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V, we get

    P = -\left(\frac{\partial U}{\partial V}\right)_T + T\left(\frac{\partial P}{\partial T}\right)_V

From the microscopic point of view, we could instead calculate the pressure from the expectation of the energy change when changing the volume:

    P = -\sum_i \left(\frac{\partial E_i}{\partial V}\right) \frac{e^{-E_i/kT}}{Q}    (3.14)

which is much harder.
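The microscopic formula is indeed harder analytically, but easy numerically. A sketch for a single particle in a 1D box (invented units with ℏ²π²/2m = 1), where the "pressure" is a force and the ideal-gas result in the high-temperature limit is kT/L:

```python
import math

L, kT = 1.0, 1e4          # high temperature: continuum limit applies
beta = 1.0 / kT

def E(n):
    """Box levels E_n = n^2 / L^2 in units with hbar^2 pi^2 / 2m = 1."""
    return n * n / L**2

nmax = 5000
w = [math.exp(-beta * E(n)) for n in range(1, nmax)]
Q = sum(w)
# Microscopic 1D "pressure": P = -sum_n P_n dE_n/dL = sum_n P_n (2 E_n / L)
force = sum((2 * E(n) / L) * wn for n, wn in zip(range(1, nmax), w)) / Q

print(force, kT / L)      # approaches the ideal-gas result kT/L
```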
4 Lecture 4
4.1 A Homework Problem
Remember that in our Hilbert space the basis we use is the energy eigenbasis, H|i\rangle = E_i|i\rangle, so the expectation value of a variable A is just \langle A \rangle = \sum_i P_i \langle i|A|i\rangle. Now what if we change to the eigenbasis of the operator A, i.e. A|j\rangle = A_j|j\rangle: is the following expression (with the appropriate normalization) equivalent to our previous expectation value?

    \sum_j A_j\, e^{-\langle j|H|j\rangle\, \beta(T)} \stackrel{?}{=} \langle A \rangle    (4.1)
4.2 About the Definition of Entropy
There are two points about the entropy we introduced by the formula S = -k \sum_i P_i \log P_i:
It is the same entropy as the one introduced in thermodynamics.
It is still a good definition even when the system is not in equilibrium.
The second point is easier to see: the definition is robust, as it only uses the concept of probability. In order to see the first point, let's calculate the change in entropy:

    \delta S = -k \sum_i \delta(P_i \log P_i) = -k \sum_i \left(\delta P_i \log P_i + \delta P_i\right)    (4.2)

By the fundamental theorem we have \log P_i = -\beta E_i - \log Q, and \sum_i \delta P_i = 0, so:

    \delta S = k\beta \sum_i \delta P_i\, E_i = k\beta\left[\delta\!\left(\sum_i P_i E_i\right) - \sum_i P_i\, \delta E_i\right]    (4.3)

The first term is just k\beta\,\delta U, where U = \langle E \rangle. The second term can be interpreted by noting that \delta E_i = (\partial E_i/\partial V)\, \delta V, so -\sum_i P_i\, \delta E_i is just P\,\delta V. Therefore, using \beta = 1/kT, we have:

    T\,\delta S = \delta U + P\,\delta V    (4.4)

which is just the statistical mechanics version of the second law of thermodynamics.
We saw that in a system with N states of equal energy, the probability of finding the system in any state is the same: P_i = 1/N. This can be accomplished by introducing a small perturbation which allows reversible transitions between the different states. If we adopt the above definition of entropy, we can calculate the entropy of the system:

    S = -k \langle \log P \rangle = -k \sum_i \frac{1}{N} \log\frac{1}{N} = k \log N    (4.5)

The same definition leads to the fact that at zero temperature, where the system sits in its ground state with probability unity, the entropy is zero. This is the third law of thermodynamics.
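A sketch of the third-law statement for a two-level system with splitting ε (invented units): the entropy goes to 0 at low temperature and saturates at k log 2 at high temperature:

```python
from math import exp, log

def entropy_two_level(kT, eps=1.0, k=1.0):
    """S(T) for a two-level system with energies 0 and eps."""
    w = [1.0, exp(-eps / kT)]
    Q = sum(w)
    p = [x / Q for x in w]
    return -k * sum(pi * log(pi) for pi in p)

print(entropy_two_level(0.01))    # ~ 0       (third law)
print(entropy_two_level(100.0))   # ~ log 2   (both states equally likely)
```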
4.3 About Thermodynamics
Remember our introduction of the thermodynamic potentials. In addition to what we have introduced, we want to study situations where the number of particles can vary, as in a chemical reaction. Similar problems include e^- + e^+ \leftrightarrow \gamma + \gamma, where we are interested in the equilibrium temperature and the absolute value of the entropy.

Remember last time we derived the expression for the pressure:

    P = -\left(\frac{\partial U}{\partial V}\right)_T + T\left(\frac{\partial P}{\partial T}\right)_V    (4.6)

This was done using the thermodynamic identities, which was much easier than using first principles from statistical mechanics. When there is a large number of particles, things are often easier with thermodynamics.

Remember we introduced the Helmholtz free energy F = U - TS, which satisfies the elegant relation F = -kT \log Q. The change in free energy is:

    dF = dU - T\,dS - S\,dT = -P\,dV - S\,dT    (4.7)

Reading off the differentials, we see that P = -\left(\frac{\partial F}{\partial V}\right)_T and S = -\left(\frac{\partial F}{\partial T}\right)_V. Now suppose we change the number of particles in the system; we account for this by adding a term to the change in the free energy:

    dF = -P\,dV - S\,dT + \mu\,dN    (4.8)

where N is the number of particles, so \mu = \left(\frac{\partial F}{\partial N}\right)_{V,T}. It is similar to a binding energy, but in a more general form. If we look at the internal energy instead and add the same term to its change, we get another expression, \mu = \left(\frac{\partial U}{\partial N}\right)_{V,S}. Now what if we consider the Gibbs free energy?

    dG = dU - S\,dT - T\,dS + P\,dV + V\,dP + \mu\,dN = -S\,dT + V\,dP + \mu\,dN    (4.9)

Consider fixing the temperature and pressure of the system and removing one particle: the system adjusts its volume automatically to accommodate the change, and the other particles do not feel it. Another way to say this is that both P and T are intensive quantities, so they don't depend on the particle number. Therefore

    \left(\frac{\partial G}{\partial N}\right)_{P,T} = \mu(P,T), \qquad G = \mu(P,T)\,N    (4.10)

Next time we will consider things like vapor pressure and osmotic pressure.
5 Lecture 5
5.1 Some Statements on Entropy
Remember that we defined the entropy in the statistical mechanics sense:

    S = -k \sum_i P_i \log P_i    (5.1)

and we derived properties such as -kT \log Q = U - TS = F. An isolated system tends to maximize its entropy.

If we look at S and treat the system classically, the entropy is always infinite: in the classical situation the distance between states goes to zero, and instead of discrete states we get a continuum of states, so all the entropies diverge. However, knowing the value of S will still be helpful in some situations when we want to ask all kinds of questions about the system.

The second statement about entropy is that as the temperature goes to zero, S goes to 0. This has the implication that all specific heats should approach zero at zero temperature. To see this, suppose that S = \sigma T^{\alpha}, where \alpha is a positive number. We have

    \frac{dS}{dT} = \alpha\sigma T^{\alpha-1} = \frac{1}{T}\frac{\delta Q}{dT} = \frac{C_V}{T} \quad\Longrightarrow\quad C_V = \alpha\sigma T^{\alpha} \propto T^{\alpha}    (5.2)
5.2 The Chemical Potential
Remember that we derived the various expressions for the chemical potential:

    \mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = \left(\frac{\partial G}{\partial N}\right)_{P,T} = \left(\frac{\partial U}{\partial N}\right)_{S,V} = -T\left(\frac{\partial S}{\partial N}\right)_{U,V}    (5.3)

And we also know that G = N\mu(P,T). The change in the chemical potential is

    d\mu = \frac{dG}{N} = -\frac{S}{N}\,dT + \frac{V}{N}\,dP    (5.4)
6 Lecture 6
6.1 Review
Remember that we introduced the fundamental assumption of statistical mechanics and defined the entropy as

    S = -k \langle \log P_i \rangle = -k \sum_i P_i \log P_i    (6.1)

and we have the relation between the Helmholtz free energy and the partition function:

    F = U - TS = -kT \log Q    (6.2)

From the definition of the Helmholtz free energy we have the relations

    S = -\left(\frac{\partial F}{\partial T}\right)_{N,V}    (6.3)
    P = -\left(\frac{\partial F}{\partial V}\right)_{N,T}    (6.4)
    U = \langle E \rangle = -\frac{\partial \log Q}{\partial \beta}    (6.5)

and we have several expressions for the chemical potential:

    \mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = \left(\frac{\partial G}{\partial N}\right)_{P,T} = \left(\frac{\partial U}{\partial N}\right)_{S,V} = -T\left(\frac{\partial S}{\partial N}\right)_{U,V}    (6.6)

In order to find the absolute value of \mu, the easiest way is to fix the temperature and volume, calculate the Helmholtz free energy of the system, and then differentiate it with respect to the particle number N. We are going to do this in a moment.
6.2 Reactions
Consider a chemical (or particle) reaction

    A + B \leftrightarrow C    (6.7)

When equilibrium is achieved the Gibbs free energy is at its minimum, and since the changes in particle numbers are tied together by stoichiometry (dN_A = dN_B = -dN_C), the chemical potentials must satisfy

    \mu_A + \mu_B - \mu_C = 0    (6.8)

But does the method by which we reach equilibrium matter? What if we release a photon during this chemical process? Do we need to calculate the chemical potential for photons? We will address these questions in this lecture. We will consider processes like e^- + e^+ \leftrightarrow \gamma + \gamma and H^+ + e^- \leftrightarrow H, which are elementary processes of the universe.

Consider a 1-dimensional harmonic oscillator. We know the energy levels to be E_n = (n + 1/2)\hbar\omega_0, so we can calculate the partition function:

    Q_1 = \sum_n e^{-(n+\frac{1}{2})\hbar\omega_0 \beta} = \frac{e^{-\frac{1}{2}\beta\hbar\omega_0}}{1 - e^{-\beta\hbar\omega_0}}    (6.9)
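The geometric-series result (6.9) is easy to confirm by brute force, comparing a truncated direct sum with the closed form:

```python
from math import exp

def Q1_sum(beta_hw, nterms=2000):
    """Direct (truncated) sum of exp(-(n + 1/2) * beta * hbar * omega_0)."""
    return sum(exp(-(n + 0.5) * beta_hw) for n in range(nterms))

def Q1_closed(beta_hw):
    """Closed form of the geometric series in (6.9)."""
    return exp(-0.5 * beta_hw) / (1.0 - exp(-beta_hw))

for x in (0.1, 1.0, 5.0):
    print(x, Q1_sum(x), Q1_closed(x))   # the two columns agree
```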
Suppose now we have 3 dimensions and N particles; then the total partition function is

    Q = \left[(Q_1)^3\right]^N    (6.10)

So the Helmholtz free energy can be calculated as

    F = -kT \log (Q_1)^{3N} = -3NkT \log Q_1    (6.11)

Now we need to think a little harder. The way we thought about the system treated the particles as distinguishable. But in reality we have indistinguishable particles, and we need to incorporate that effect. However, if the probability of finding more than one particle in any single excited state is negligible, i.e. the states are dilute enough, then we can just divide our Q by N!, which is the factor of overcounting. In fact, we should have used the factor

    \frac{N!}{n_0!\, n_1! \cdots n_{\infty}!}    (6.12)

where n_i is the number of particles in state i. Our assumption of diluteness is equivalent to taking all the n_i to be either 0 or 1.

So under our assumption of dilute occupation, the free energy for indistinguishable particles becomes (in the limit of large N)

    F_{\text{indistinguishable}} = -kT \log\frac{Q}{N!} = F_{\text{distinguishable}} + kT\left(N \log N - N + \dots\right)    (6.13)

Remember our sum for the 1-dimensional harmonic oscillator. We can use the continuum approximation and treat the sum as an integral:

    \int_0^{\infty} dn\, e^{-\beta\hbar\omega_0 n} = \frac{1}{\beta\hbar\omega_0} = \frac{kT}{\hbar\omega_0}    (6.14)

This is just the first term if we expand the exponential in the exact partition function in equation (6.9). This justifies our use of the continuum limit: when the sum is hard to do, we will try to do the integral instead.
A word on the Gibbs paradox. Consider a chamber partitioned into two equal parts, with some particles in part 1 and some in part 2. We can calculate the entropy of each partition and the results work out. Now we remove the partition and join the two parts. If the particles were distinguishable, we would run into the paradoxical situation known from history. We avoid the paradox because we put in the N! which accounts for overcounting.

We can carry out the above calculations for other potentials as well, say the 1-dimensional box, i.e. the infinite potential well. For the 1-dimensional box the allowed energies are

    E_n = \frac{\pi^2\hbar^2 n^2}{2mL^2}    (6.15)

and the partition function is

    Q_1 = \sum_n e^{-\beta\frac{\pi^2\hbar^2}{2mL^2} n^2} \;\xrightarrow{\text{continuum limit}}\; \int_0^{\infty} dn\, e^{-\beta\frac{\pi^2\hbar^2}{2mL^2} n^2}    (6.16)

The integral is just a Gaussian integral and is easy to evaluate.
The Helmholtz free energy for N identical particles in a box of volume V = L^3 is then

    F = -kT \log\frac{Q_1^{3N}}{N!}    (6.17)

Carrying out the previous integral and plugging into the expression for the free energy, we get

    F = NkT\left[\log\left(\frac{N}{V}\left(\frac{h^2}{2\pi mkT}\right)^{3/2}\right) - 1\right]    (6.18)

and we can calculate the chemical potential by differentiating:

    \mu = kT \log\left[\frac{N}{V}\left(\frac{h^2}{2\pi mkT}\right)^{3/2}\right] = \frac{G}{N}    (6.19)

What is the term in the bracket? We have p = \hbar k = h/\lambda. If we take the characteristic kinetic energy p^2/2m \sim kT, then the factor \left(h^2/2\pi mkT\right)^{1/2} is (apart from a numerical factor) the de Broglie wavelength \lambda of a thermal particle. Our diluteness condition here translates into

    \frac{N}{V}\lambda^3 \ll 1    (6.20)

Questions left to consider: when photons are in the game, the chemical potential of the photons is zero. Why is this true? Another question: when the particles are relativistic, the energy-momentum relation becomes different, and the integral becomes hard to evaluate. What to do then?
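As a sanity check on the diluteness condition (6.20), plugging in rough numbers for nitrogen gas at room temperature and pressure (the constants below are approximate; only the order of magnitude matters) gives Nλ³/V of order 10⁻⁷, deep in the classical regime:

```python
from math import pi, sqrt

# Approximate physical constants and N2 parameters (for illustration only).
h  = 6.626e-34        # Planck constant, J s
kB = 1.381e-23        # Boltzmann constant, J/K
m  = 4.65e-26         # mass of an N2 molecule, kg
T  = 300.0            # room temperature, K
P  = 101325.0         # atmospheric pressure, Pa

n = P / (kB * T)                      # number density N/V from the ideal gas law
lam = h / sqrt(2 * pi * m * kB * T)   # thermal de Broglie wavelength
n_lambda3 = n * lam**3

print(lam, n_lambda3)   # n*lambda^3 is tiny: the dilute approximation holds
```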
7 Lecture 7
Remember that last time we studied the N-particle partition function when the particles are distinguishable:

    Q = \sum e^{-E_i/kT} = (Q_1)^N    (7.1)

And the free energy is

    F \approx -kT \log\left(\frac{V}{\lambda^3}\right)^N    (7.2)

Now suppose the particles are identical, i.e. indistinguishable. If we continue to use the above expression, then we are overcounting the states. Remember the approximation we used last time: the states are sufficiently dilute that the occupation of any state is no more than 1. Under this approximation

    Q = \frac{(Q_1)^N}{N!}    (7.3)

and therefore the Helmholtz free energy becomes

    F \approx -kT \log\left[\left(\frac{V}{\lambda^3}\right)^N \frac{1}{N!}\right] \approx -kTN \log\frac{V}{N\lambda^3}    (7.4)
7.1 The Entropy of Mixing
Consider a box of volume V partitioned into two equal halves, each of volume V/2, with the particles distributed equally between the two partitions. The entropy of the whole system is the sum of the entropies of the two partitions; for distinguishable particles (keeping the N log N piece explicit):

    S = S_1 + S_2 = 2\left[k\frac{N}{2} \log\left(\frac{V}{2(N/2)\lambda^3}\right)\right] + kN \log\frac{N}{2}    (7.5)

Now if we remove the lid in between, the entropy of the whole system becomes

    S = kN \log\left(\frac{V}{N\lambda^3}\right) + kN \log N    (7.6)

and the entropy has increased by \Delta S = kN \log 2. Had we done this process carefully, i.e. reversibly, we could have extracted some work from it, like in a Carnot engine. Indeed, we can do this by imagining that we insert a semi-permeable membrane which blocks only one specific particle from going through. When we move the membrane to one end of the box, the system does work through that particular particle. Imagining a huge number of this kind of membrane, we can extract work from every particle in the box, and this is the work we could have extracted from the system. The energy is gained from the heat absorbed from the reservoir, which is kept at constant T.

Now suppose the particles inside the box are indistinguishable within each half, with the left and right populations distinct. In the case of the partitioned box the total partition function is

    Q_N = \frac{Q_1^N}{(N/2)!\,(N/2)!}    (7.7)

so the initial entropy will be different from what we calculated above, and the final entropy will also be different; but the difference in entropy will be the same as what we had above. This is reasonable, because we have essentially labeled the particles on the left differently from the particles on the right. If we insert two semi-permeable membranes and push them to either end, we can extract the same amount of work as in the case above. This change of entropy is called the entropy of mixing.

Now suppose all the particles in the box are the same. Then it doesn't matter whether there is a partition in the middle, so the entropy of mixing disappears. But the final entropy is not the same as the final entropy above, since the total partition function has changed to

    Q_N = \frac{Q_1^N}{N!}    (7.8)

The above discussion differentiates the three cases of distinguishable/indistinguishable particles and explains how to quantify the entropy change.
7.2 The Chemical Potential for Photons
Remember that last time we found an explicit expression for the chemical potential of an ideal gas:

\mu = kT\log\left[\frac{N}{V}\left(\frac{h^2}{2\pi mkT}\right)^{3/2}\right] = \frac{G}{N} \qquad (7.9)
Now consider a reaction involving a photon, such as e^- + p \rightleftharpoons H + \gamma. We would have trouble if the chemical potential of the photon were not zero, so we are going to show that it is.
Suppose we have a box of photons with volume V at temperature T. We know that we can calculate
the energy using blackbody radiation U = u(T)V . The pressure on the wall is just
P = u(T)/3 (7.10)
Now suppose we take away a photon from the box while fixing the temperature and volume; the change in free energy must be

\left(\frac{\partial F}{\partial N}\right)_{T,V} = 0 \qquad (7.11)

because F is totally determined by T and V.
Let's take another way of looking at this. In 3 dimensions we have P = u/3, where

u(T) = aT^4 \qquad (7.12)

and a is a constant. Let's suppose some heat comes into the system:

\delta Q = dU + P\,dV = Vu'\,dT + u\,dV + \frac{u}{3}\,dV = T\,dS \qquad (7.13)
Therefore we have

dS = \left(\frac{Vu'}{T}\right)dT + \left(\frac{4}{3}\frac{u}{T}\right)dV \qquad (7.14)
19
Statistical Mechanics Lecture 7
Using the rule for mixed partial derivatives we get

\frac{\partial}{\partial V}\left(\frac{Vu'}{T}\right) = \frac{\partial}{\partial T}\left(\frac{4u}{3T}\right) \;\Longrightarrow\; u = aT^4 \qquad (7.15)

Therefore thermodynamics requires that u \propto T^4. Dimensional analysis shows that

u = kT\left(\frac{kT}{\hbar c}\right)^3 k' \qquad (7.16)

where k' is a dimensionless constant that we must introduce.
Now we can calculate the chemical potential:

\mu = \frac{G}{N} = \frac{U + PV - TS}{N} = \frac{uV}{N} + \frac{uV}{3N} - \frac{T}{N}\int_0^T \frac{C_V}{T'}\,dT' \qquad (7.17)

Because C_V = u'V, we can plug in u = aT^4 and see that \mu = 0 for photons.
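The cancellation in (7.17) can be checked numerically. The following Python sketch is not part of the original notes; the radiation constant a and the volume V are arbitrary assumed values, since \mu = 0 holds independently of them.

```python
import numpy as np

a, V = 1.7, 2.3                      # arbitrary radiation constant and volume
T = np.linspace(1e-3, 5.0, 2001)

u = a * T**4                         # energy density, u = a T^4, eq. (7.15)
U = u * V                            # total energy
P = u / 3.0                          # radiation pressure, P = u/3, eq. (7.10)
C_V = 4.0 * a * T**3 * V             # heat capacity, C_V = u'(T) V

# S(T) = integral_0^T C_V(T')/T' dT', evaluated by the trapezoid rule
integrand = C_V / T
S = np.concatenate(([0.0],
    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))))

G = U + P * V - T * S                # Gibbs free energy, G = mu N
print(np.max(np.abs(G)) / np.max(U)) # ~0: mu = 0 for photons
```

The numeric integration reproduces S = (4/3)aT^3V, and the three terms of (7.17) cancel to floating-point accuracy.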
8 Lecture 8
Consider a box of fixed volume V kept at fixed temperature T. The system inside tends to minimize its Helmholtz free energy F. Remember

dF = dU - T\,dS - S\,dT + \sum_i \mu_i\,dN_i = -S\,dT - P\,dV + \sum_i \mu_i\,dN_i \qquad (8.1)
where the sum is over all species of particles inside the box. If we fix T and V, then at equilibrium dF = 0, so we have

\sum_i \mu_i\,dN_i = 0 \quad \text{for all allowed changes } dN_i \qquad (8.2)
Consider a chemical reaction involving particles A_i with stoichiometric coefficients \nu_i; we write the reaction as \sum_i \nu_i A_i = 0. For example, for H_2 + Cl_2 \to 2HCl we have \nu_{H_2} = -1, \nu_{Cl_2} = -1 and \nu_{HCl} = 2. In this scenario we have

\sum_i \nu_i \mu_i = 0 \qquad (8.3)
We now assume that, in the dilute-gas limit, the presence of other kinds of particles does not affect the chemical potential of any one species. This makes sense because the interactions are weak, so the sum of the entropies of two species of particles kept in two separate boxes of volume V is the same as the entropy of the two species sharing the same box: each species is unaware of the presence of the others.
Remember for photons \mu = 0. For non-relativistic particles we have

\mu_i = kT\log\left(\frac{N_i\lambda_i^3}{V}\right) + m_i c^2 \qquad (8.4)
Note that because of different binding energies the masses differ between, e.g., HCl and H + Cl; that is why we include the m_i c^2 term. If we plug the above equation into equation (8.3) then we get

\sum_i \nu_i\left[kT\log\left(n_i\lambda_i^3\right) + m_i c^2\right] = 0 \qquad (8.5)
\frac{[n_1\lambda_1^3][n_2\lambda_2^3]}{[n_3\lambda_3^3]} = e^{-\Delta E/kT} \qquad (8.6)

where n_i = N_i/V is the number density of species i and \Delta E is built from the rest-mass differences. Remember this is non-relativistic.
Consider now the process e^- + e^+ \rightleftharpoons \gamma + \gamma. Because there is a degeneracy associated with spin s, we replace V \to V(2s+1). We write

\mu_{e^-} = kT\log\left(\frac{n_-\lambda_-^3}{2s+1}\right) + m_e c^2 \qquad (8.7)

\mu_{e^+} = kT\log\left(\frac{n_+\lambda_+^3}{2s+1}\right) + m_e c^2 \qquad (8.8)
Therefore we can evaluate

n_+ n_- = \left(\frac{2s+1}{\lambda^3}\right)^2 e^{-2m_e c^2/kT} \qquad (8.9)
But this equation does not determine n_- or n_+ separately. We have to put in some extra condition, such as n_+ = n_-, to get the exact value of either number density.
9 Lecture 9
Up to now we have been considering problems where many particles share a box, but each state holds only zero or one particle. We derived the chemical potential

\mu = kT\log\left(\frac{N}{V}\lambda^3\right) + mc^2 \qquad (9.1)
Here \lambda is roughly the effective wavelength of the particle. The assumption of diluteness is equivalent to

\frac{N}{V}\lambda^3 \ll 1 \qquad (9.2)
Remember also that we have the relation

\mu = \left[\frac{\partial}{\partial N}\left(U - TS\right)\right]_{T,V} \qquad (9.3)

So \mu can also be thought of as something like a binding energy. We can also write
N = \frac{V}{\lambda^3}\,e^{\mu/kT} \qquad (9.4)
Consider the reaction e^- + H^+ \rightleftharpoons H. We will assume the system is neutral, so that the number density of electrons is the same as that of the ions. We had the result
\frac{[\lambda_e^3 n_e][n_{H^+}\lambda_{H^+}^3]}{[\lambda_H^3 n_H]} = e^{-\epsilon_b(H)/kT} \qquad (9.5)
But the number density of e^- is the same as that of H^+, while \lambda_H \approx \lambda_{H^+}, so we have

n_e = \left(\frac{n_H}{\lambda_e^3}\right)^{1/2} e^{-\epsilon_b/2kT} \qquad (9.6)
Because the above process is essentially reversible, the rates in the two directions should be the same:

n_e n_+ |\mathcal{H}|^2 \rho_H = n_\gamma n_H |\mathcal{H}|^2 \rho_{e+} \qquad (9.7)
In principle we can invert this equation to get the same ratio as above. But n_\gamma contains a factor of the speed of light c, while there is no c in the previous equation, so there must be a delicate cancellation among the terms to eliminate it. The product n_e n_{H^+} on the left-hand side is approximately

n_e n_{H^+} \approx n_H\, n_\gamma\, e^{-|\epsilon_b|/kT} \qquad (9.8)
Because \epsilon_b \approx 13.6 eV, the corresponding temperature for equilibrium is of order 10^5 K, since k(10^5\ \text{K}) \approx 10 eV.
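As a rough illustration of (9.6), the following Python sketch evaluates n_e for hydrogen. The neutral-hydrogen density n_H is an assumed illustrative value (not from the notes), and the formula only applies while n_e \ll n_H.

```python
import math

k   = 1.380649e-23        # J/K
h   = 6.62607015e-34      # J s
m_e = 9.1093837e-31       # kg
eV  = 1.602176634e-19     # J
eps_b = 13.6 * eV         # hydrogen binding energy

def n_e(n_H, T):
    # eq. (9.6): n_e = sqrt(n_H / lambda_e^3) * exp(-eps_b / 2kT)
    lam = h / math.sqrt(2 * math.pi * m_e * k * T)   # thermal wavelength
    return math.sqrt(n_H / lam**3) * math.exp(-eps_b / (2 * k * T))

n_H = 1e20                # assumed neutral-hydrogen density, m^-3
for T in (2000, 4000, 8000):
    print(T, n_e(n_H, T) / n_H)   # ionization rises very steeply with T
```

At this assumed density the ionized fraction climbs from utterly negligible near 2000 K to order unity approaching 10^4 K, consistent with the order-of-magnitude estimate above.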
Suppose we have a box with a magnetic field B(x) pointing down, a function of position, and the particles inside have spin 1/2. We want to know the distribution of the particle spins. Let's define

\epsilon_\uparrow = \epsilon_{B=0} - mB, \qquad \epsilon_\downarrow = \epsilon_{B=0} + mB \qquad (9.9)
We know from the fundamental assumption of statistical mechanics that in equilibrium we have

n_\uparrow = n(0)\,e^{+mB/kT}, \qquad n_\downarrow = n(0)\,e^{-mB/kT} \qquad (9.10)
Therefore if we take the sum of these two equations, we have

n_\uparrow + n_\downarrow = n(0)\left(e^{mB/kT} + e^{-mB/kT}\right) = n(0)\left[2 + \left(\frac{mB}{kT}\right)^2 + \ldots\right] \qquad (9.11)

which means that particles accumulate at places with larger B field.
Consider a system with available energy levels \epsilon_r, and denote the number of particles at energy level \epsilon_r by n_r. The partition function, which we have been dealing with, can now be written as a sum over states:

Q = \sum_{\text{all distributions } \{n_r\}} e^{-\sum_r n_r\epsilon_r/kT} \qquad (9.12)
subject to the condition that \sum_r n_r = N. If we ignore the constraint for now, we can write

Q = \prod_r \sum_{n_r} e^{-n_r\epsilon_r/kT} = \prod_r \frac{1}{1 - e^{-\epsilon_r/kT}} \qquad (9.13)
This looks like the Bose-Einstein distribution, so if we calculate the Helmholtz free energy we get

F = -kT\log Q = kT\sum_r \log\left(1 - e^{-\epsilon_r/kT}\right) \qquad (9.14)
Remember this result is correct for photons. But note that F does not depend on N explicitly, so the partial derivative \partial F/\partial N is zero, which means it applies to particles with \mu = 0. So again we reach a situation where the number of particles can vary freely.
Let's think about this problem in another way. From the beginning we have thought of \mu as something resulting from the properties of the system. However, we can instead treat \mu as given a priori and see what we get. Consider a box connected to a reservoir with fixed entropy and an infinite number of particles. The chemical potential of the reservoir is just

\left(\frac{\partial F}{\partial N}\right)_{T,V} = \mu = -|\epsilon_b| \qquad (9.15)

So in equilibrium the \mu of the system is the same as that of the reservoir, and we have a way to control the chemical potential. If we call the energy of the reservoir zero, then the energy of the system is shifted by \mu. We want to calculate the average number of particles in a certain state by summing over the possible state configurations:

\langle n_r\rangle = \frac{\sum_{n_r} n_r\, e^{-\beta(\epsilon_r-\mu)n_r}}{\sum_{n_r} e^{-\beta(\epsilon_r-\mu)n_r}} \qquad (9.16)
Remember the chemical potential of a system is often negative, so the inclusion of \mu effectively raises the energy and suppresses the probability of every state. We can continue playing this game and find an expression for the expectation value:

\langle n_r\rangle = -\frac{1}{\beta}\frac{\partial}{\partial\epsilon_r}\log\sum_{n_r=0}^{\infty} e^{-\beta(\epsilon_r-\mu)n_r} = kT\frac{\partial}{\partial\epsilon_r}\log\left(1 - e^{-(\epsilon_r-\mu)/kT}\right) = \frac{1}{e^{(\epsilon_r-\mu)/kT} - 1} \qquad (9.17)

So we have derived the Bose-Einstein distribution from first principles.
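The geometric-series manipulation behind (9.17) can be checked directly in Python: truncating the sum over n_r at a large cutoff reproduces 1/(e^x - 1), with x = (\epsilon_r - \mu)/kT.

```python
import numpy as np

def n_avg_from_sum(x, n_max=2000):
    # <n> = sum n e^{-x n} / sum e^{-x n}, truncated at n_max
    n = np.arange(n_max)
    w = np.exp(-x * n)                  # Boltzmann weight of occupation n
    return (n * w).sum() / w.sum()

for x in (0.1, 0.5, 2.0):
    exact = 1.0 / (np.exp(x) - 1.0)
    print(x, n_avg_from_sum(x), exact)  # the two columns agree
```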
10 Lecture 10
10.1 Digression on Distributions
Remember that if we know the partition function of a system, we can basically answer any question about the system; however, the partition function may be very hard to calculate. Consider a system with N particles and some energy levels. Last time we employed a trick to calculate the expectation of the occupation number of each state: we shifted the system energy by \mu N by adding a reservoir of fixed chemical potential, so that the number of particles in the system is no longer fixed at N but can vary. Now let's see how we can do this physically without using the trick.

The result we got last time is

\langle n_r\rangle = \frac{\sum_{n_r} n_r\,e^{-(\epsilon_r-\mu)n_r/kT}}{\sum_{n_r} e^{-(\epsilon_r-\mu)n_r/kT}} = \frac{1}{e^{(\epsilon_r-\mu)/kT} - 1} \qquad (10.1)
This result applies to bosons. What if we have fermions? We can treat the sum as fixing r and summing over n_r, but for fermions n_r = 0 or 1, so

\langle n_r\rangle_f = \frac{\sum_{n_r=0,1} n_r\,e^{-(\epsilon_r-\mu)n_r/kT}}{\sum_{n_r=0,1} e^{-(\epsilon_r-\mu)n_r/kT}} = \frac{e^{-(\epsilon_r-\mu)/kT}}{1 + e^{-(\epsilon_r-\mu)/kT}} = \frac{1}{e^{(\epsilon_r-\mu)/kT} + 1} \qquad (10.2)
This is the Fermi-Dirac distribution.
Now we want to hold the number of particles in the box fixed; then we can use the above formula to compute the chemical potential. We can define the probability with some chemical potential

P(\mu) = \frac{e^{-(E_i-\mu N)/kT}}{\sum_{\text{all states}} e^{-(E_i-\mu N)/kT}} \qquad (10.3)
Let's look at an example. Suppose we have an atmosphere of O_2 and some hemoglobin (a protein with one binding site that can bind an oxygen molecule) with binding energy \epsilon. How does the amount of binding depend on the pressure of oxygen? We know the chemical potential of oxygen is

\mu_{O_2} = kT\log\left(n_{O_2}\lambda^3\right) \qquad (10.4)
This formula applies to a perfect dilute gas of indistinguishable particles. So we can solve for the number density of O_2:

n_{O_2} = \frac{1}{\lambda^3}\,e^{\mu/kT} \qquad (10.5)
So the chemical potential tells us how dilute the gas is compared to the wavelength of the gas particles. The partition function of the binding site is simple, as it has only two configurations:

Q = 1 + e^{\mu/kT}\,e^{-\epsilon/kT} \qquad (10.6)
So the average occupation is

\langle n\rangle = \frac{1}{e^{(\epsilon-\mu)/kT} + 1} = \frac{1}{e^{\epsilon/kT}/\left(n_{O_2}\lambda^3\right) + 1} \qquad (10.7)
The physical meaning of this expectation is the average occupation of one hemoglobin site, which lies between 0 and 1. When the temperature goes to 0, the exponential goes to 0 (the bound-state energy \epsilon is negative) and the site is always occupied. When the density of oxygen goes to infinity, 1/n_{O_2} goes to 0 and the occupation again goes to 1, so binding saturates in either limit.
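A quick Python sketch of the binding curve (10.7), in scaled units (energies in kT, densities in units of 1/\lambda^3); the numbers below are illustrative assumptions, not data from the notes.

```python
import math

def occupation(eps_over_kT, n_lambda3):
    # eq. (10.7): <n> = 1 / (e^{eps/kT} / (n lambda^3) + 1)
    return 1.0 / (math.exp(eps_over_kT) / n_lambda3 + 1.0)

eps = -5.0                  # assumed bound-state energy in units of kT (negative: bound)
for n in (1e-5, 1e-3, 1e-1):
    print(n, occupation(eps, n))   # occupation rises monotonically with oxygen density
```

This is the Langmuir-isotherm shape: nearly empty sites at low density, saturation toward 1 at high density.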
10.2 Historical Notes
Suppose we have a Hamiltonian H(q_1,\ldots,q_{3N},p_1,\ldots,p_{3N}), a function of the 6N phase-space variables. A point in phase space specifies the system completely. There is an interesting theorem attributed to Liouville, which says the following. Assume the Hamiltonian is independent of time, and suppose we have a large number of systems moving around in phase space, with \rho(q_1,\ldots,p_{3N}) denoting the number density of systems in phase space. If the total number of systems is conserved, then we have the conservation equation

\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0 = \frac{\partial\rho}{\partial t} + \rho\,\nabla\cdot\mathbf{v} + \mathbf{v}\cdot\nabla\rho \qquad (10.8)
The comoving rate of change of the density of systems, i.e. the total time derivative of \rho, is

\frac{d\rho}{dt} = \frac{\partial\rho}{\partial t} + (\mathbf{v}\cdot\nabla)\rho = -\rho\,\nabla\cdot\mathbf{v} \qquad (10.9)
Remember Hamilton's equations:

\dot q_i = +\frac{\partial H}{\partial p_i}, \qquad \dot p_j = -\frac{\partial H}{\partial q_j} \qquad (10.10)
So the above equation for the derivative of \rho becomes

\frac{d\rho}{dt} = -\rho\sum_i\left(\frac{\partial\dot q_i}{\partial q_i} + \frac{\partial\dot p_i}{\partial p_i}\right) = 0 \qquad (10.11)
So wherever we sit, the density of systems in our neighborhood remains constant, regardless of the details of the dynamics. Hence there is no way to use a static Hamiltonian to focus a density distribution in phase space. If we want to establish a density distribution that does not change with time, i.e. \partial\rho/\partial t = 0, the only solution is d\rho/dt = 0 with \rho a constant.
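Liouville's theorem can be illustrated numerically: a symplectic integrator is an area-preserving map, so the phase-space area of a small blob of initial conditions stays constant even for a nonlinear force. The quartic Hamiltonian below is an assumed toy example, not one from the notes.

```python
import numpy as np

# H = p^2/2 + q^4/4, integrated with symplectic Euler (Jacobian det = 1)
def evolve(q, p, dt=0.01, steps=500):
    for _ in range(steps):
        p = p - dt * q**3      # kick from the force F = -q^3
        q = q + dt * p         # drift
    return q, p

def shoelace(q, p):
    # polygon area of the blob's boundary in the (q, p) plane
    return 0.5 * abs(np.dot(q, np.roll(p, -1)) - np.dot(p, np.roll(q, -1)))

theta = np.linspace(0, 2 * np.pi, 800, endpoint=False)
q0 = 1.0 + 0.01 * np.cos(theta)    # small circle of initial conditions
p0 = 0.0 + 0.01 * np.sin(theta)

q1, p1 = evolve(q0, p0)
print(shoelace(q0, p0), shoelace(q1, p1))   # the two areas agree
```

The blob is sheared and distorted by the nonlinear flow, but its enclosed area is conserved, which is exactly d\rho/dt = 0.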
For a classical system the expectation of a physical quantity is

\langle A\rangle = \frac{\int dq_1\ldots dp_{3N}\;A(q_1,\ldots,p_{3N})\,e^{-H/kT}}{\int dq_1\ldots dp_{3N}\;e^{-H/kT}} \qquad (10.12)
Note that this is purely classical and we don't need to know the density of states. But there is no way to calculate the entropy in this classical way, because we defined it as

S = -k\sum_i P_i\log P_i \qquad (10.13)

which assumes knowledge of the probabilities of the microstates.
11 Lecture 11
Last time we concluded, from what we derived, that there is no way to focus a parallel beam onto a smaller area. Stated that way it was wrong: it is only a diverging incoming beam that cannot be focused by a static device into a concentrated parallel beam.
Another way of interpreting Liouville's theorem is that the comoving density

\rho = \frac{d^2 n}{dp\,dq} \qquad (11.1)
is constant wherever we move in phase space. If the distribution n is constant, this amounts to the constancy of the volume element dp\,dq that we move with. So for an incoming parallel beam the phase-space volume is zero, because there is no p dispersion in the transverse direction, and it is fine to transport it to another zero-volume configuration with smaller q dispersion in that direction. But we cannot do this for a diverging incoming beam, because the phase-space volume must stay constant.

Another way to look at this: suppose we had such a focusing device, with a body on the left radiating at temperature T and a body on the right absorbing that radiation and radiating back with a smaller surface area but the same temperature. Because they have the same radiated power per unit area, the left body would lose net energy and get colder over time, while the right body would gain energy and get hotter, and we did no work! That is forbidden by the second law of thermodynamics.
11.1 Classical Systems
Let's consider the expectation value of a variable in a classical system:

\langle A\rangle = \frac{\int dq_1\ldots dp_{3N}\;e^{-\beta H(p,q)}\,A}{\int dq_1\ldots dp_{3N}\;e^{-\beta H(p,q)}} \qquad (11.2)
Note that there should be factors of (2\pi\hbar)^{3N} and a factorial, but these coefficients cancel between the numerator and the denominator. This expression translates into the analogous quantum expression, where the integral is replaced by a sum over accessible states. The following relation is true regardless of whether we are dealing with classical or quantum physics:

\left\langle\sum_i \dot q_i p_i\right\rangle + \left\langle\sum_i q_i\dot p_i\right\rangle = 0 \qquad (11.3)
To see this, note that \frac{d}{dt}\sum_i p_i q_i = \sum_i(\dot p_i q_i + p_i\dot q_i); in a confined system the time average of a total time derivative is zero, so the sum of the two averages vanishes, whether in classical or quantum systems. Note that in the Hamiltonian formulation, if our Hamiltonian is of the form H = \sum_i \frac{p_i^2}{2m} + V(q_1,\ldots,q_N), then the kinetic energy satisfies

2\langle KE\rangle = \left\langle\sum_i p_i\dot q_i\right\rangle = -\left\langle\sum_i q_i\dot p_i\right\rangle = -\left\langle\sum_i \mathbf{r}_i\cdot\mathbf{F}_i\right\rangle \qquad (11.5)
26
Statistical Mechanics Lecture 11
The force on particle i can be written as \mathbf{F}_i = \sum_{j\neq i}\mathbf{F}_{ij}, where \mathbf{F}_{ij} is the force of the j-th particle on the i-th one. Then we can write

\sum_i \mathbf{r}_i\cdot\mathbf{F}_i = \sum_i \mathbf{r}_i\cdot\left(\sum_{j\neq i}\mathbf{F}_{ij}\right) \qquad (11.6)
If the force between particles is the gradient of a potential U(r) = Ar^n, then \mathbf{F}\cdot\mathbf{r} = -nU(r), and we get the virial theorem:

2\langle KE\rangle = n\langle U\rangle \qquad (11.7)

So if the potential is gravitational or electrostatic, then n = -1 and

E_{\text{total}} = -\langle KE\rangle \qquad (11.8)

So in a dilute system E_{\text{total}} \approx -\frac{3}{2}NkT.
In the extreme relativistic case \langle\sum \mathbf{p}\cdot\mathbf{v}\rangle \approx \langle KE\rangle, which says that the rest mass is negligible compared to the energy; then \langle KE\rangle = -\langle U\rangle, so E_{\text{total}} \approx 0. It is therefore very difficult to form a bound system in this extreme case.
In a classical system we can evaluate the expectation value

\left\langle p_i\dot q_j\right\rangle = \frac{\int dq_1\ldots dp_{3N}\;p_i\dot q_j\,e^{-\beta H(p,q)}}{\int dq_1\ldots dp_{3N}\;e^{-\beta H(p,q)}} \qquad (11.9)
But by Hamilton's equations \dot q_j = \frac{\partial H}{\partial p_j}, so the numerator can be written as

\int p_i\frac{\partial H}{\partial p_j}\,e^{-\beta H}\,dq_1\ldots dp_{3N} = -\frac{1}{\beta}\int\left[\frac{\partial}{\partial p_j}\left(p_i e^{-\beta H}\right) - \frac{\partial p_i}{\partial p_j}\,e^{-\beta H}\right]dq_1\ldots dp_{3N} \qquad (11.10)
The former term is zero because it is a boundary term; therefore

\langle p_i\dot q_j\rangle = \frac{1}{\beta}\,\delta_{ij} \qquad (11.11)

and

\left\langle\sum_i p_i\dot q_i\right\rangle = 3NkT = 2\langle KE\rangle \qquad (11.12)

which is what we have for an ideal gas.
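The result \langle p_i\dot q_j\rangle = kT\,\delta_{ij} can be checked by Monte Carlo for an ideal gas, where \dot q = p/m and the momenta are Gaussian distributed; the mass and temperature below are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
m, kT = 1.3, 0.7                       # arbitrary mass and temperature
# Maxwell distribution: each momentum component ~ N(0, sqrt(m kT))
p = rng.normal(0.0, np.sqrt(m * kT), size=(200000, 3))

M = (p[:, :, None] * p[:, None, :]).mean(axis=0) / m    # matrix <p_i p_j>/m
print(M)          # ~ kT times the identity matrix
print(M.trace())  # ~ 3 kT per particle, i.e. 3NkT = 2<KE>
```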
12 Lecture 12
Remember last time we derived that for a confined system

\left\langle\sum_i \dot q_i p_i\right\rangle + \left\langle\sum_i q_i\dot p_i\right\rangle = 0 \qquad (12.1)
The first quantity is equal to 2\langle KE\rangle in the nonrelativistic case and to \langle KE\rangle in the relativistic case. Remember we have the relation

3NkT = -\left\langle\sum_i \mathbf{q}_i\cdot\mathbf{F}_i\right\rangle \qquad (12.2)
Now take a box of volume V of noninteracting particles. Averaging the force that the box exerts on the particles gives

\oint_{\partial V}\mathbf{r}\cdot P\,d\mathbf{S} = P\int_V \nabla\cdot\mathbf{r}\,dV = 3PV = 3NkT \qquad (12.3)

So we have recovered the ideal gas law.
Suppose the gas particles have finite size and are hard balls that cannot penetrate each other. We can then rederive the Van der Waals form

NkT = P(V - b) \qquad (12.4)

where b is proportional to the volume occupied by the gas particles.
Now suppose the particles have nontrivial interactions. We have already seen this kind of problem, approximating the interaction by a harmonic oscillator potential or a gravitational potential. We write down the result first:

3NkT = 3PV + \frac{N(N-1)}{2}\int\left(r\frac{\partial u}{\partial r}\right)g(r)\,\frac{4\pi r^2}{V}\,dr \qquad (12.5)

where u(r) is the two-particle potential and g(r) is defined as the probability of finding two particles at distance r. If there is no correlation between the positions of the particles then g(r) = 1.
We may approximate g(r) = e^{-u(r)/kT}, which is a sensible approximation. Writing out the integral and integrating by parts gives

\frac{PV}{NkT} = 1 - \frac{2\pi N}{V}\int_0^\infty\left(e^{-u(r)/kT} - 1\right)r^2\,dr \qquad (12.6)
If u(r)/kT \ll 1 for r > 2a, and u(r)/kT = \infty inside r < 2a, which is just a hard-core repulsion of radius a, then

\frac{PV}{NkT} = 1 + \frac{4\pi}{3}(2a)^3\frac{N}{2V} + \frac{N}{2V}\int_{2a}^{\infty}\frac{u(r)\,4\pi r^2}{kT}\,dr + \ldots \qquad (12.7)

We can regroup the terms so that it looks like the Van der Waals equation, NkT = P(1 - a')(V - b), where a' comes from the integral above and b is half the excluded volume.
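The second-virial integral in (12.6) is easy to evaluate numerically. The sketch below uses an assumed hard core plus a weak attractive r^{-6} tail (illustrative parameters, not from the notes); the attraction lowers the correction below the pure hard-core value 2\pi d^3/3.

```python
import numpy as np

kT, d, u0 = 1.0, 1.0, 0.5      # temperature, core diameter, tail depth (assumed)

def u(r):
    # hard core for r < d, attractive tail u = -u0 (d/r)^6 outside
    out = np.full_like(r, np.inf)
    mask = r >= d
    out[mask] = -u0 * (d / r[mask])**6
    return out

r = np.linspace(1e-4, 20.0, 200001)
y = (np.exp(-u(r) / kT) - 1.0) * r**2          # integrand of (12.6)
B2 = -2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))

B2_hard_core = 2.0 * np.pi * d**3 / 3.0        # repulsive part alone
print(B2, B2_hard_core)    # the attraction pulls B2 below the hard-core value
```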
12.1 Canonical Ensemble
Suppose we have n boxes, and the total energy of the whole system is the sum of the energies of the individual boxes. We are dealing with indistinguishable particles, so if we exchange the particles in one box with the particles in another box we cannot tell the difference. Every box has N particles. Let n_r denote the number of boxes in which the system has energy E_r, the energy of the N-particle interacting system. We can write

E_{\text{total}} = \sum_r n_r E_r, \qquad n = \sum_r n_r \qquad (12.8)
These two quantities are conserved throughout the process in question. The collection \{n_r\} = (n_1, n_2, \ldots) specifies a configuration of energy states of all the boxes. The number of ways to find the whole system in a specific configuration is

W(\{n_r\}) = \frac{n!}{n_1!\,n_2!\cdots} \qquad (12.9)
13 Lecture 13
13.1 Continuing with Canonical Ensembles
Remember last time we introduced the canonical ensemble as n boxes of volume V with particle number N. We used n_r to denote the number of boxes in which the system has energy E_r, so the total energy is

E_{\text{total}} = \sum_r n_r E_r \qquad (13.1)
with the constraint that \sum_r n_r = n. A distribution is a set of numbers n_1,\ldots,n_r,\ldots which characterizes the energy state of the whole system. It is derived in the book that

\left\langle\left(\frac{n_r - \langle n_r\rangle}{\langle n_r\rangle}\right)^2\right\rangle = \frac{1}{\langle n_r\rangle}\left[1 - \mathcal{O}\!\left(\frac{1}{n}\right)\right] \qquad (13.2)
This says that for very large n the Gaussian distribution becomes very narrow and well defined. Remember that the number of ways to arrange the boxes among the energy levels is

W(\{n_r\}) = \frac{n!}{n_1!\,n_2!\cdots n_r!\cdots} \qquad (13.3)
The maximum of W corresponds to the most probable state. It is equivalent to finding the maximum of the logarithm:

\log W(\{n_r\}) = n\log n - n - (n_1\log n_1 - n_1) - (n_2\log n_2 - n_2) - \cdots = n\log n - n_1\log n_1 - n_2\log n_2 - \cdots \qquad (13.4)
In order to find the maximum, we vary with respect to the n_r and get

\delta\log W = -\sum_r\left(\log n_r + 1\right)\delta n_r \qquad (13.5)

subject to the constraints \sum_r\delta n_r = 0 and \sum_r E_r\,\delta n_r = 0. So we can add these zeroes, with Lagrange multipliers \alpha and \beta, to the above equation:

\sum_r\left(\log n_r + 1 + \alpha + \beta E_r\right)\delta n_r = 0 \qquad (13.6)
This is satisfied when n_r = n_r^*, the most probable distribution. Since the \delta n_r are arbitrary apart from the constraints, the terms in the bracket must vanish for every r; the parameters \alpha and \beta are introduced to make this possible. So the most probable distribution is

n_r^* = e^{-(1+\alpha) - \beta E_r} \qquad (13.7)
But if we divide by n, the factor e^{-(1+\alpha)} cancels and we get

\frac{n_r^*}{n} = \frac{e^{-\beta E_r}}{\sum_r e^{-\beta E_r}} = P_r \qquad (13.8)
which is the distribution of a canonical ensemble.
As long as the number of particles N in each box is finite, the above probability will fluctuate, as it is essentially an expectation value. Each box has volume V and particle number N, so its energy is

U = \langle E\rangle = \sum_r P_r E_r = \frac{\sum_r E_r e^{-\beta E_r}}{\sum_r e^{-\beta E_r}} \qquad (13.9)
We can differentiate this expression with respect to \beta to get

\frac{\partial}{\partial\beta}\langle E\rangle = -\langle E^2\rangle + \langle E\rangle^2 \qquad (13.10)
So we get the useful expression

\langle(\Delta E)^2\rangle = kT^2\left(\frac{\partial U}{\partial T}\right)_{V,N} = kT^2 C_V \qquad (13.11)
If we further divide by the square of the expectation value, we get

\frac{\langle(\Delta E)^2\rangle}{\langle E\rangle^2} = \frac{kT^2 C_V}{\langle E\rangle^2} \sim \mathcal{O}\!\left(\frac{kT}{\langle E\rangle}\right) \qquad (13.12)
13.2 Grand Canonical Ensemble
Now we want to generalize to the grand canonical ensemble. On top of the canonical ensemble, we allow boxes whose particle numbers differ from N, with the constraint that the total number of particles summed over all boxes is fixed. Following the same procedure as above, we arrive at

\sum_{r,s}\left[\log n_{rs}^* + 1 + \alpha + \beta E_{rs} + \gamma N_s\right]\delta n_{rs} = 0 \qquad (13.13)
where N_s is the number of particles in box s and the energy of such a box is E_r(N_s, V), so the energy carries two subscripts r, s. Turning the crank, the probability for a box to have N particles at energy level r(N) is

P_{N,r(N)} = \frac{e^{-\beta E_r(N,V) - \gamma N}}{\sum e^{-\beta E_r(N,V) - \gamma N}} \qquad (13.14)
This is the distribution for the grand canonical ensemble. Note that when \gamma is a positive number the most probable state is the one with 0 particles in the box.

We want to make connections between the quantities we introduce here and the old familiar thermodynamic quantities. We call the sum in the denominator the grand partition function:

\mathcal{Q} = \sum_{r,N} e^{-\beta E_r - \gamma N} \qquad (13.15)
Remember that in the canonical ensemble we have the relations

Q = \sum_r e^{-\beta E_r}, \qquad \langle E\rangle - TS = -kT\log Q \qquad (13.16)

We can also make the connection for the grand canonical ensemble by defining

q = \log\mathcal{Q}, \qquad kTq = TS - \langle E\rangle + \mu\langle N\rangle \qquad (13.17)
where \gamma = -\beta\mu. But remember that \mu N = G = U - TS + PV. So we get

q = \frac{PV}{kT} \qquad (13.18)
Remember our microscopic definition of entropy:

S = -k\sum_{N,r} P_{N,r}\log P_{N,r} \qquad (13.19)
We also have the expressions for U and P, which look similar:

U = \sum_{N,r} E_{N,r}\,P_{N,r}, \qquad P = \sum_{N,r} P_{N,r}\left(-\frac{\partial E_{N,r}(V)}{\partial V}\right) \qquad (13.20)
14 Lecture 14
Remember last time we introduced the quantity q, the logarithm of the grand partition function \mathcal{Q}:

q = \log\sum_{N,r} e^{-\gamma N}\,e^{-\beta E_r(N,V)} \qquad (14.1)
Its differential is

dq = -\langle N\rangle\,d\gamma - \langle E\rangle\,d\beta + \beta P\,dV \qquad (14.2)

We can do some massaging to get

d\left(q + \gamma\langle N\rangle + \beta\langle E\rangle\right) = \gamma\,d\langle N\rangle + \beta\,d\langle E\rangle + \beta P\,dV \qquad (14.3)
Remember the Gibbs free energy

G = \mu N = E - TS + PV \qquad (14.4)

Therefore we know that

dE = T\,dS - P\,dV + \mu\,dN \qquad (14.5)
We can compare this expression with equation (14.3) and end up with

q = \frac{TS + G - E}{kT} = \frac{PV}{kT}, \qquad \gamma = -\beta\mu \qquad (14.6)
By the same logic we get

\langle N\rangle = -\frac{\partial q}{\partial\gamma} \qquad (14.7)

which follows by simply looking at the definition of q.
We can also work out the fluctuation of the number density of the particles:

\frac{\langle(\Delta n)^2\rangle}{\langle n\rangle^2} = -\frac{kT}{V^2}\left(\frac{\partial V}{\partial P}\right)_T \qquad (14.8)

where -(1/V)(\partial V/\partial P)_T is the compressibility of the system.
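For an ideal gas, PV = NkT turns (14.8) into \langle(\Delta n)^2\rangle/\langle n\rangle^2 = 1/\langle N\rangle, i.e. Poisson counting statistics for the particles in a subvolume. A small sampling sketch (the mean count is an assumed illustrative value):

```python
import numpy as np

rng = np.random.default_rng(2)
mean_N = 100.0
# grand-canonical particle counts in a subvolume of an ideal gas are Poissonian
counts = rng.poisson(mean_N, size=500000)

rel_fluct = counts.var() / counts.mean()**2
print(rel_fluct, 1.0 / mean_N)              # both ~ 0.01
```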
15 Lecture 15
Remember we introduced the grand canonical ensemble last time and defined the grand partition function

\mathcal{Q} = \sum_N Q_N\,e^{-\gamma N} \qquad (15.1)

where Q_N is the partition function of the canonical ensemble with N particles. Remember we have

\log Q_N = -A/kT \qquad (15.2)

where A is the Helmholtz free energy of the canonical ensemble. For the grand canonical ensemble we have

\log\mathcal{Q} = \frac{PV}{kT} \qquad (15.3)

which we also derived last time.
Now let's consider a Bose-Einstein gas of noninteracting, indistinguishable particles. The number of particles in one particular single-particle state s is unlimited. The expectation value of the number of particles in state s is just

\langle n_s\rangle = \frac{\sum_{N_s=0}^\infty N_s\,e^{-\beta N_s(\epsilon_s-\mu)}}{\sum_{N_s=0}^\infty e^{-\beta N_s(\epsilon_s-\mu)}} = \frac{1}{e^{\beta(\epsilon_s-\mu)} - 1} \qquad (15.4)
In the book they take the limit where the particles are far apart and there is little chance of wavefunction overlap; then the above distribution reduces to the so-called Maxwell-Boltzmann distribution. But that is less interesting, as in condensed matter physics there are few scenarios where this assumption really applies.
The probability for a single-particle state s to hold n_s particles is

P(n_s) = \frac{e^{-\beta n_s(\epsilon_s-\mu)}}{\sum_{n_s=0}^\infty e^{-\beta n_s(\epsilon_s-\mu)}} = \left(\frac{\langle n_s\rangle}{1+\langle n_s\rangle}\right)^{n_s}\left(\frac{1}{1+\langle n_s\rangle}\right) \qquad (15.5)
This was accomplished by summing the geometric series. It can be checked that \sum_n P(n) = 1.
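Equation (15.5) says the occupation of one Bose single-particle state is geometrically distributed; the normalization and the mean can be verified directly:

```python
import numpy as np

nbar = 2.7                       # any positive mean occupation (assumed value)
n = np.arange(4000)              # truncation; the tail is negligible here
P = (nbar / (1 + nbar))**n / (1 + nbar)   # eq. (15.5)

print(P.sum())           # ~1: the distribution is normalized
print((n * P).sum())     # ~nbar: the mean occupation is recovered
```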
The book introduces a new variable, the fugacity:

z = e^{\beta\mu} \qquad (15.6)

For a dilute gas \mu is usually a large negative number, so z is very small. Because \mu is always negative for a Bose-Einstein gas, z is always less than 1. We can write the grand partition function as
\mathcal{Q} = \sum_{n_1 n_2\ldots n_s\ldots}\left(ze^{-\beta\epsilon_1}\right)^{n_1}\left(ze^{-\beta\epsilon_2}\right)^{n_2}\cdots = \prod_i\left(\frac{1}{1 - ze^{-\beta\epsilon_i}}\right) \qquad (15.7)
We introduced the logarithm q of the grand partition function; computing it here gives

q = \frac{PV}{kT} = \frac{1}{a}\sum_i\log\left(1 + a\,ze^{-\beta\epsilon_i}\right) \qquad (15.8)

where a = -1 for a Bose-Einstein gas and a = +1 for a Fermi-Dirac gas.
Remember that for indistinguishable particles we need to account for overcounting by dividing the N-particle partition function by N!:

Q_N = \frac{Q_1^N}{N!} \qquad (15.9)

but this is valid only as long as \langle n_s\rangle \ll 1 for all states s. The factor needs to be changed if the gas is not dilute.
Consider a Bose gas in a potential well. The ground state of the gas is the state where all particles are in the single-particle ground state. This statement also applies to distinguishable particles; in fact the Bose-Einstein gas has the same ground state as a gas of distinguishable particles, as long as the Hamiltonian is symmetric among the particles and the ground state is unique, whether or not there are interactions. So it is not the ground state that differentiates a Bose gas from a gas of distinguishable particles. There is, however, a profound difference in the excited states.

For example, take N distinguishable particles. The first excited state of the system has N different possible configurations, corresponding to exciting each of the N individual particles. For the indistinguishable case there is only one first excited state, i.e. a state in which exactly one particle is in its first excited state. This is an enormous suppression of the excited states when the number of particles is large, which is usually the case in condensed matter systems.
Let's consider a quantum mechanical system of 2 particles with Bose statistics. One excited state can be

\psi(x_1,x_2) = \frac{1}{\sqrt{2}}\left(e^{ik_1x_1}e^{ik_2x_2} + e^{ik_1x_2}e^{ik_2x_1}\right) \qquad (15.10)
The probability distribution is

|\psi(x_1,x_2)|^2 = 1 + \cos\left[(k_1-k_2)(x_1-x_2)\right] \qquad (15.11)
So when the particles are on top of one another the probability is actually larger: excited bosons tend to come close to each other. So if we add a repulsive potential to the particle interaction, the excited states acquire extra energy, because the particles tend to bunch together and get repelled. This effectively introduces an energy gap between the ground state and the first excited state.

By the reasoning above, in a Bose gas the particles prefer to stay in the ground state compared with a Maxwell-Boltzmann gas, firstly because there are far fewer excited states in a Bose gas, and secondly because of the energy gap between the ground state and the excited states.
We can evaluate the expectation value of the total number of particles:

\langle N\rangle = \sum_s \frac{1}{z^{-1}e^{\beta\epsilon_s} - 1} \qquad (15.12)
From this expression we already know that z must be less than 1: for the lowest energy level \epsilon = 0, if z were greater than 1 we would have a negative number of particles, which is unphysical. We can replace the above sum by an integral,

\sum_s \longrightarrow \int_0^\infty d\epsilon\,g(\epsilon) \qquad (15.13)
where g(\epsilon) is the density of states (DOS) of the system. For the Bose gas we then have

N = \int_0^\infty \frac{4\pi Vp^2\,dp}{(2\pi\hbar)^3}\left(\frac{1}{z^{-1}e^{\beta\epsilon} - 1}\right) + \frac{z}{1-z} \qquad (15.14)
where the last term is added to account for the ground state with \epsilon = 0; the first term is then explicitly the number of particles in excited states. Looking at the first term alone, we want to make it as large as possible, that is, to make z as large as possible, which is at most 1. As z approaches 1 the term becomes

\frac{4\pi V}{(2\pi\hbar)^3}\int_0^\infty \frac{p^2\,dp}{e^{\beta\epsilon} - 1} \qquad (15.15)

which can be evaluated by a change of variables, assuming \epsilon \propto p^2, and it turns out to be finite. So there is a maximum number of particles in the excited states. If we put more particles in, they have nowhere to go except into the ground state.
What happens if we do the same thing in 1 dimension? The above integral diverges as z \to 1, so there is no obstacle to putting unlimited particles into the excited states. What about 2 dimensions? We need to work harder, and it needs special study. So if we want to talk about Bose-Einstein condensation, we need to specify the space dimension as well as how \epsilon depends on p.
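The dimension dependence can be seen numerically. In 3D the z = 1 integrand is x^{1/2}/(e^x - 1) and the integral is finite (\Gamma(3/2)\zeta(3/2) \approx 2.315), while in 1D the integrand goes like x^{-1/2}/(e^x - 1) \sim x^{-3/2} near 0 and the integral blows up as the lower cutoff shrinks. A crude trapezoid sketch:

```python
import numpy as np

def integral(power, eps, xmax=50.0, npts=2000001):
    # integral of x^power / (e^x - 1) from a lower cutoff eps to xmax
    x = np.linspace(eps, xmax, npts)
    y = x**power / (np.exp(x) - 1.0)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, integral(0.5, eps), integral(-0.5, eps))
# the 3D column settles near 2.315; the 1D column grows without bound
```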
16 Lecture 16
Recall that we have the following results for the Bose-Einstein gas in 3D:

\langle n\rangle = \frac{1}{e^{\beta(\epsilon-\mu)} - 1} \qquad (16.1)

N = \frac{V}{\lambda^3}\,\frac{2}{\sqrt{\pi}}\int_0^\infty \frac{x^{1/2}\,dx}{z^{-1}e^x - 1} \qquad (16.2)

where z = e^{\beta\mu} is the fugacity of the system, which has to be less than 1, otherwise the whole machinery breaks. The wavelength in the second formula is

\lambda = \frac{h}{p} = \frac{h}{\sqrt{2\pi mkT}} \qquad (16.3)
The integral is usually hard to evaluate analytically, and we usually need to look up a table. The z factor effectively tunes the size of the integral. Note that the x \gg 1 part is always unimportant because it is suppressed by e^{-x}. At z = 1 the integral converges, so if we put more particles into the system the extra particles have nowhere to go except the ground state. This is the cause of Bose-Einstein condensation.
We can write out the ground-state contribution explicitly:

N = \frac{V}{\lambda^3}\,\frac{2}{\sqrt{\pi}}\int_0^\infty \frac{x^{1/2}\,dx}{z^{-1}e^x - 1} + \frac{z}{1-z} \qquad (16.4)

Now the integral gives the number of particles in the excited states. If we divide both sides by V, the number density is a fixed number plus the ground-state contribution. The ground-state term is suppressed by 1/V and is relevant only when z is very close to 1.
The temperature dependence of the total number of particles is

N = (\text{const})\,T^{3/2} + N_{GS} \qquad (16.5)

The temperature T_c for Bose-Einstein condensation is the temperature below which the excited states can no longer accommodate all the particles.
Figure 16.1: Illustration of Bose-Einstein condensation (N_E/N versus T/T_c).
Note that when T < T_c we have N_E/N = (T/T_c)^{3/2}. The critical temperature is estimated to be

kT_c = \frac{h^2}{2\pi m}\left(\frac{N}{2.612\,V}\right)^{2/3} \qquad (16.6)
If we compute the pressure of the system, we should get

P = \frac{2U}{3V} \qquad (16.7)

This result can also be obtained without doing the integrals, as long as we have non-relativistic particles, just by arguing about the velocity and momentum transfer in collisions with the wall. For a relativistic particle we should instead have

P = \frac{U}{3V} \qquad (16.8)
The specific heat has a cusp (a discontinuity in slope) at T = T_c.

Figure 16.2: Specific heat C_V/N of a Bose-Einstein gas versus T/T_c.
We can also plot the fugacity of the system against the quantity V/\lambda^3 N.

Figure 16.3: Fugacity z of a Bose-Einstein gas versus V/\lambda^3 N; the condensed regime sets in below V/\lambda^3 N = 1/2.612.
Now we want to study phonons in solids. How much of the result here can be carried over to condensed matter systems? We should ask when we can treat the solid as a continuum. The answer is that when the wavelength of the phonon is larger than the interparticle spacing a, we can treat the background as a continuum. That is,

kT \lesssim \frac{\hbar c_s}{a} \qquad (16.9)

where c_s is the speed of sound and a is the interparticle spacing. In this regime we can copy our results for photons directly over to phonons, except that the velocity is changed to the speed of sound in the solid.

Remember the energy density for blackbody radiation is

\frac{U}{V} \sim kT\left(\frac{kT}{\hbar c}\right)^3 \qquad (16.10)
There are only 2 polarizations for photons, but 3 for phonons. Moreover the energy density carries a factor of 1/c^3, so the energy density of phonons in a solid (with c replaced by c_s \ll c) is much higher than that of the photons, and we usually neglect the effect of blackbody radiation inside the solid.
If we do the calculation more carefully, we get

\frac{U}{V} = \int_0^{k_{max}} \frac{\hbar\omega_k}{e^{\hbar\omega_k/kT} - 1}\,\frac{4\pi k^2\,dk}{(2\pi)^3} \qquad (16.11)

where k_{max} is defined by

\hbar c_s k_{max} = kT_D \qquad (16.12)

and T_D, where our approximation breaks down, is the Debye temperature.
Now we move to Chapter 8 of the book. We already know the occupation number n_s of a single-particle state of energy \epsilon_s for a fermion:

\langle n_s\rangle = \frac{1}{z^{-1}e^{\beta\epsilon_s} + 1} \qquad (16.13)
Now there is no constraint on z: it can be larger than 1 or less than 1. The total number of particles is

N = \sum_s \langle n_s\rangle = \sum_s \frac{1}{z^{-1}e^{\beta\epsilon_s} + 1} \qquad (16.14)
The quantity $q$ can be calculated as
$$\frac{PV}{kT} = \log\mathcal{Q} = \sum_s \log\left(1 + ze^{-\beta\epsilon_s}\right) \tag{16.15}$$
Remember that when we dealt with Bose-Einstein particles, expanding the logarithm gave a geometric series that is easy to express in closed form. But now there are only two terms inside the logarithm, corresponding to occupation number 1 or 0. We want to see what happens at very low temperatures in the case of the Fermi-Dirac gas.
For very low temperatures, the factor $e^{\beta\epsilon}$ becomes very large, so all the occupation numbers vanish unless we also have a very large $z$. Writing $z = e^{\beta\mu}$, the chemical potential at low temperature approaches the Fermi energy $\epsilon_F$, the largest occupied energy in the Fermi system. This makes sense because for $\epsilon$ below the Fermi energy the occupation number goes to 1, whereas for $\epsilon > \epsilon_F$ the occupation number goes to 0 in the low-temperature limit.
Now, because of the Pauli exclusion principle, there are far more particles in the excited states than there would be without Fermi-Dirac statistics. So now $PV/NkT = 1 + \dots$ instead of $1 - \dots$, which is the case for a Bose-Einstein gas.
The average energy per particle is
$$\frac{U}{N} = \frac{3}{5}\epsilon_F\left[1 + \frac{5\pi^2}{12}\left(\frac{kT}{\epsilon_F}\right)^2 + \dots\right] \tag{16.16}$$
When $T = 0$ we recover the usual $3\epsilon_F/5$. The first correction goes as $(kT)^2$ because the energy each thermally excited fermion gains is proportional to $kT$, and the number of fermions that can play the game (those within $kT$ of the Fermi surface) is also proportional to $kT$.
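A quick numerical sketch (not from the lecture) of how small the correction in Eq. (16.16) is for a typical metal; the Fermi energy of 5 eV used below is an assumed representative value.

```python
# Size of the Sommerfeld correction in Eq. (16.16):
# U/N = (3/5) eps_F [1 + (5 pi^2 / 12)(kT/eps_F)^2 + ...]

import math

def u_per_particle(eps_f, kT):
    """Average energy per fermion, keeping only the leading correction."""
    return 0.6 * eps_f * (1.0 + (5.0 * math.pi**2 / 12.0) * (kT / eps_f)**2)

eps_f = 5.0                  # typical metallic Fermi energy in eV (assumed)
kT_room = 0.025              # room temperature in eV
ratio = u_per_particle(eps_f, kT_room) / (0.6 * eps_f) - 1.0
print(round(ratio, 6))       # fractional correction of order 1e-4: very degenerate
```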
17 Lecture 17
We want to study fermions in greater detail. Usually we quantify our energies, or $kT$, in electron volts; 1 eV corresponds to about $10^4$ K. We define the work function $w$ as the energy it takes to remove an electron from the metal and make it free. For typical metals $w$ is a few eV, much larger than room-temperature $kT$, so appreciable thermal emission of electrons only happens when the system is hot enough to be essentially glowing. We can calculate the thermal emission rate
$$R = 2\int_{(2mw)^{1/2}}^{\infty}\frac{dp_z}{2\pi\hbar}\int\frac{dp_x}{2\pi\hbar}\int\frac{dp_y}{2\pi\hbar}\;u_z\,\frac{1}{e^{\beta(\epsilon-\mu)} + 1} = \frac{4\pi m}{(2\pi\hbar)^3}(kT)^2\,e^{-\beta(w-\epsilon_F)} \tag{17.1}$$
where $u_z = p_z/m$ is the velocity normal to the surface.
This result is not trivial, because the number density of electrons disappears in the final expression. We need to think about it more deeply; ref. the book.
17.1 The White Dwarf Star
A white dwarf is a dense star where the electrons are highly degenerate. The internal energy is mainly the kinetic energy of the electrons: the ions have energy around $kT$, but the electrons, because of their degeneracy, have much higher energy, of order $\epsilon_F$. We will try to write down an answer by physical arguments.
The potential of the star is described by Newton's equation,
$$\nabla^2\phi = 4\pi G\rho_{mass} \tag{17.2}$$
The gradient of the pressure satisfies hydrostatic equilibrium,
$$\nabla P = -\rho\nabla\phi \tag{17.3}$$
At $T = 0$ the pressure would be
$$P = \frac{2}{5}n_e\epsilon_F \tag{17.4}$$
The kinetic energy of an electron is $\epsilon_k = p^2/2m_e$ for non-relativistic motion and $\epsilon_k = pc$ for relativistic motion. When the electrons are relativistic, the white dwarf has a maximum mass.
The questions we want to ask are: What is the radius of the star? How does the radius depend on the
mass of the star? What is the maximum mass of the star?
Recall the virial theorem for inverse-square-law forces. We have a relation between the kinetic and potential energies:
$$K = -\tfrac{1}{2}U \ \text{(non-relativistic)}, \qquad K = -U \ \text{(relativistic)} \tag{17.5}$$
The potential energy of the star itself can be written as
$$U = -\frac{GM^2}{R} \tag{17.6}$$
This is sloppy, but we are allowed to make approximations and this is accurate to a factor of order unity.
The kinetic energy is the sum of all the electron energies,
$$K = \frac{3}{5}N_e\epsilon_F, \quad \text{where} \quad \epsilon_F \sim \frac{\hbar^2}{2m_e}n_e^{2/3} \tag{17.7}$$
Combining the expressions for $K$ and $U$ with the virial theorem, we can compute the radius of the white dwarf and get
$$R \sim \frac{\hbar^2}{Gm_e m_{He}^{5/3}M^{1/3}} \sim 10^9\,\text{cm}\left(\frac{M_{sun}}{M}\right)^{1/3} \tag{17.8}$$
For a neutron star the idea is similar, but the particles providing the degeneracy kinetic energy are the neutrons. So we should have
$$\frac{R_{NS}}{R_{WD}} \sim \left(\frac{m_e}{m_n}\right)\left(\frac{M_{WD}}{M_{NS}}\right)^{1/3} \tag{17.9}$$
Note that the Fermi energy of the electrons in the white dwarf goes like
$$\epsilon_F \propto n_e^{2/3} \propto \left(\frac{M}{R^3}\right)^{2/3} \propto M^{4/3} \tag{17.10}$$
So when the mass is large enough, the electrons go relativistic. In the extreme relativistic case the virial balance reads
$$\frac{GM^2}{R} \sim N_e\left(\frac{N_e}{R^3}\right)^{1/3}\hbar c \tag{17.11}$$
Note that for extreme relativistic electrons $K = -U$, so the total energy is zero and the system is only marginally stable. The radius drops out of the above equation, which therefore fixes the number of electrons, and hence the mass:
$$M \sim N_e m_H, \qquad N_e^{2/3} \sim \frac{\hbar c}{Gm_H^2} \implies M \sim \left(\frac{\hbar c}{Gm_H^2}\right)^{3/2}m_H \tag{17.12}$$
We can call the quantity $\alpha_G = Gm_H^2/\hbar c$ the gravitational fine structure constant, because it measures the gravitational attraction between two protons. This is the Chandrasekhar limit for the white dwarf mass. The mass can be smaller, because then the star is in the non-relativistic regime, but it cannot be larger. There is a mass limit for neutron stars too, but it is of a different origin. This mass is also a characteristic mass for astronomical objects which are heavy enough to be stars.
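As an order-of-magnitude sketch (not from the lecture), we can evaluate Eq. (17.12) numerically; the factor-of-unity coefficients dropped in the estimate mean we should only trust the result to within a factor of a few.

```python
# Evaluate M ~ (hbar c / (G m_H^2))^(3/2) m_H in SI units.

hbar = 1.0546e-34      # J s
c = 2.998e8            # m/s
G = 6.674e-11          # m^3 kg^-1 s^-2
m_H = 1.6726e-27       # kg (proton mass)
M_sun = 1.989e30       # kg

alpha_G_inv = hbar * c / (G * m_H**2)   # 1/alpha_G, a huge pure number
M = alpha_G_inv**1.5 * m_H              # the Chandrasekhar mass scale
print(f"1/alpha_G = {alpha_G_inv:.2e}")
print(f"M ~ {M / M_sun:.2f} M_sun")     # order of a solar mass
```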
17.2 Heavy Atoms
Let's also look at heavy atoms. A heavy atom is very similar to a star, except that the positive charges are all lumped into a heavy nucleus. Otherwise it is very similar to what we have done, because the electrons also form a degenerate Fermi sea. This is called a Thomas-Fermi atom.
The potential here is the Coulomb potential, with energy scale
$$\phi \sim \frac{Ze^2}{r} \tag{17.13}$$
We take the radius of the atom to be the average radius of the electron orbits. Then balancing the kinetic energy of the electrons against the potential energy gives
$$N\left(\frac{N}{r^3}\right)^{2/3}\frac{\hbar^2}{2m_e} \sim \frac{(Ze)^2}{r} \tag{17.14}$$
Note we got the kinetic energy from $p^2/2m_e$, where $p$ is about $\hbar$ divided by the spacing available to each electron, i.e. $p \sim \hbar n_e^{1/3}$. From this expression, with $N \approx Z$, we get the so-called Thomas-Fermi radius
$$r_{TF} \sim \frac{\hbar^2}{m_e e^2}\,\frac{1}{Z^{1/3}} \tag{17.15}$$
We can ask what the maximum mass of an atom is. Of course it cannot be too big, because of fission. But we can also play the same game as for the white dwarf. What would that value of $Z$ be? The electrons become extremely relativistic when the Thomas-Fermi radius shrinks to the electron Compton wavelength:
$$\frac{\hbar^2}{m_e e^2}\,\frac{1}{Z^{1/3}} \sim \frac{\hbar}{m_e c} \tag{17.16}$$
This places a limit on $Z$, just like the case of the white dwarf:
$$Z^{1/3} \sim \frac{\hbar c}{e^2} \implies Z \sim 10^6 \tag{17.17}$$
which is ridiculously big. But similar things happen in the Dirac equation, where things scale differently than here: there, when $Z$ is larger than 137, the electrons become relativistic and there is no solution.
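A small numerical sketch (not from the lecture) of the two Thomas-Fermi estimates above; $Z = 92$ below is just an illustrative choice.

```python
# Thomas-Fermi radius r_TF ~ a_0 / Z^(1/3) (Eq. 17.15, with
# a_0 = hbar^2 / (m_e e^2) the Bohr radius), and the maximum-Z estimate
# Z_max ~ (hbar c / e^2)^3 = 137^3 from Eq. (17.17).

a0 = 0.529e-10          # Bohr radius in m
alpha_inv = 137.036     # hbar c / e^2, the inverse fine structure constant

def r_tf(Z):
    """Thomas-Fermi radius estimate for atomic number Z."""
    return a0 / Z**(1.0 / 3.0)

print(f"r_TF(Z=92) ~ {r_tf(92):.2e} m")   # about 1.2e-11 m for uranium
print(f"Z_max ~ {alpha_inv**3:.2e}")      # about 2.6e6, 'ridiculously big'
```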
18 Lecture 18
18.1 Paramagnetism
Consider a particle with magnetic moment $\mu$ in a static magnetic field $B$. The energy is just $-\boldsymbol{\mu}\cdot\mathbf{B}$. The partition function is
$$Q_1 = \sum_{m=-J}^{J}\exp\left(\beta m g\mu_B B\right) \tag{18.1}$$
and for $N$ dilute particles we have
$$Q = \frac{Q_1^N}{N!} \tag{18.2}$$
The classical approximation is to replace the sum with an integral,
$$\sum_{m=-J}^{J} \to \int_{-J}^{J}dm \propto \int_{-1}^{1}d\cos\theta \tag{18.3}$$
Carrying out the integral, we find the magnetization for $N$ particles to be
$$M = N\mu\left[\coth(\beta\mu B) - \frac{1}{\beta\mu B}\right] = N\mu L(x) \tag{18.4}$$
where $L(x)$ is called the Langevin function and $x = \beta\mu B$.
For small $x$ the magnetization is
$$M = \frac{N\mu^2 B}{3kT} \tag{18.5}$$
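A small sketch (not from the lecture) of the Langevin function in Eq. (18.4) and its small-$x$ limit $x/3$, which gives the Curie law (18.5).

```python
# Langevin function L(x) = coth(x) - 1/x and its limits.

import math

def langevin(x):
    """L(x) = coth(x) - 1/x, computed stably for small x."""
    if abs(x) < 1e-4:
        return x / 3.0 - x**3 / 45.0      # Taylor expansion near 0
    return 1.0 / math.tanh(x) - 1.0 / x

print(round(langevin(0.1), 5))    # close to 0.1/3: linear (Curie) regime
print(round(langevin(10.0), 3))   # 0.9: approaching saturation M -> N mu
```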
This is a paramagnetic response. However, we can also have a spontaneous magnetization if the temperature is low enough. In that case $B = \alpha M/V$ is the field generated by the material itself; this is the mean-field approximation, where $B$ is taken as a local average of the field generated by nearby atoms, with $\alpha$ the mean-field coupling constant. This spontaneous magnetization disappears at high enough temperature, when $1 = \alpha N\mu^2/3VkT$. This temperature is called the Curie temperature, and we write
$$kT_c = \frac{N}{V}\frac{\alpha\mu^2}{3} \tag{18.6}$$
For a large solid, if we lower the temperature below the Curie temperature we will see that the material is self-magnetized in small patches, called domains. The magnetizations in different domains arrange themselves so that as little magnetic field as possible sticks out of the material. In general this is very complicated.
Consider a Fermi sea of electrons in a potential well. The energies fill up to $\epsilon_F$: at zero temperature all the states below $\epsilon_F$ are filled while all the states above it are empty. If we put a magnetic field into this potential well, the energy of each electron is raised or lowered depending on its spin orientation (ref. the textbook). We can use the chemical potentials of the two spin populations for this problem and write
$$\mu_+ + \mu_B B = \mu_- - \mu_B B \tag{18.7}$$
in equilibrium. If we find the chemical potential for the Fermi gas, then the induced magnetization divided by $B$, which forms a dimensionless quantity, is
$$\frac{M}{VB} = \frac{N}{V}\frac{\mu_B^2}{k_B T}\left[1 - \frac{N}{V}\frac{\lambda^3}{2^{3/2}} + \dots\right] \tag{18.8}$$
with $\lambda$ the thermal de Broglie wavelength.
This is for $k_B T \gg \epsilon_F$. For the other end of the spectrum, $k_B T \ll \epsilon_F$, we have
$$\frac{M}{VB} = \frac{N}{V}\frac{\mu_B^2}{\epsilon_F}\left[1 - \left(\frac{k_B T}{\epsilon_F}\right)^2\frac{\pi^2}{12} + \dots\right] \tag{18.9}$$
18.2 Diamagnetism
For a closed-shell atom like helium, if we apply a magnetic field there will not be any net paramagnetism. Recall that in elementary quantum mechanics we deal with this problem by replacing
$$\mathbf{p} \to \mathbf{p} - \frac{e}{c}\mathbf{A} \tag{18.10}$$
in the Hamiltonian and solving for the eigenvalues. For a constant homogeneous magnetic field we choose $\mathbf{A}$ to be
$$\mathbf{A} = \frac{1}{2}\mathbf{B}\times\mathbf{r} \tag{18.11}$$
The expectation value of the energy is
$$\langle H\rangle = \left\langle\frac{p^2}{2m}\right\rangle - \frac{e}{2mc}\left\langle\mathbf{p}\cdot\mathbf{A} + \mathbf{A}\cdot\mathbf{p}\right\rangle + \frac{e^2}{8mc^2}\left\langle r_\perp^2\right\rangle B^2 \tag{18.12}$$
The second term is zero for a closed shell, and the last term is the correction. Its sign is positive (the energy rises with $B$), which tells us that this is diamagnetism.
Now let's take the atom and smear it out into a larger volume. If we make it large, the system goes to the limit of a box of free electrons. In this case the above energy correction keeps growing, because $\langle r^2\rangle$ gets larger and larger, and eventually it would exceed the binding energy itself.
(Ref. Pathria Problem 3.43 and page 208.) The Bohr-van Leeuwen theorem says that a magnetic field does not change the velocity distribution of a classical free electron gas, so there is no diamagnetic response for a classical electron gas.
Consider a box of electrons in a magnetic field. Classical electron diamagnetism would come from small current loops that repel the magnetic field. If we add up all the current loops, we get a big current loop around the edge of the box. This seems to be a diamagnetic response, but if we also consider the edge electrons, which only respond with partial current loops, the contributions cancel. So this is another way to see that the classical diamagnetic response is zero.
19 Lecture 19
Given a box of particles, we want to ask what the magnetic susceptibility is. Remember the Hamiltonian is
$$H = \frac{1}{2m}\left(\mathbf{p} - \frac{e}{c}\mathbf{A}\right)^2 \tag{19.1}$$
The quantum diamagnetic contribution to the energy is
$$\langle\Delta H\rangle = \left\langle r^2\right\rangle\frac{e^2}{8mc^2}B^2 \tag{19.2}$$
Now we want to know what $\langle r^2\rangle$ is. We can guess it using the uncertainty principle:
$$r \sim \frac{\hbar}{mv}, \qquad \left\langle r^2\right\rangle \sim \frac{\hbar^2}{m^2v^2} \sim \frac{\hbar^2}{m\epsilon_K} \tag{19.3}$$
where $\epsilon_K$ is the kinetic energy of the electrons. Again we must make a choice for $\epsilon_K$: it could be $kT$ if the electron gas is dilute, or the Fermi energy $\epsilon_F$ if the electron gas is dense. Then the diamagnetic susceptibility is
$$\frac{M}{VB} = -n_e\left(\frac{e\hbar}{mc}\right)^2\frac{1}{kT} \tag{19.4}$$
or the same with $kT$ replaced by $\epsilon_F$ in the degenerate case.
Now let us do this problem properly with quantum mechanics. We need to choose a gauge to write down the vector potential. The naive choice is $\mathbf{A} = \mathbf{B}\times\mathbf{r}/2$, but we choose instead the Landau gauge, where
$$\mathbf{A} = Bx\,\hat{y} \tag{19.5}$$
Then the Hamiltonian is
$$H = \frac{p_x^2}{2m} + \frac{(p_y - eBx/c)^2}{2m} + \frac{p_z^2}{2m} \tag{19.6}$$
We can readily write down the eigenfunctions,
$$\psi = e^{ik_z z}\,e^{ik_y y}\,\phi(x - x_0) \tag{19.7}$$
The motion around $x_0$ looks like a harmonic oscillator, and $x_0$ is about
$$x_0 \sim \frac{c\hbar}{eB}\,k_y \tag{19.8}$$
The final eigenvalues of the energy are
$$\epsilon_j = \hbar\omega\left(j + \frac{1}{2}\right) + \frac{p_z^2}{2m} \tag{19.9}$$
Now that we know the energies of the single-particle states, we want to know how many configurations correspond to the same energy. We can count the in-plane states as
$$dn = \frac{dx\,dy\,dp_x\,dp_y}{(2\pi\hbar)^2} \tag{19.10}$$
(19.10)
45
Statistical Mechanics Lecture 19
Now we can carry out the integral over x and y and combine the dp
x
dp
y
into 2p dp. We can recognize
this later expression is just md, and we can write = , so we have
dn
d
=
L
x
L
y
(2)
2
2m (19.11)
where = eB/mc.
We can also think about the problem in another way. The total flux of the magnetic field through the region is $L_x L_y B$, and we ask how many electron orbits we can put into the region. The flux quantum $\phi_0$ satisfies
$$\frac{e\phi_0}{\hbar c} = 2\pi \tag{19.12}$$
and the number of orbits is $n = BL_xL_y/\phi_0$. We argue that for higher and higher values of $j$, the extra flux introduced by increasing $j$ by 1 is fixed, so we can count the number of states using this unit of flux.
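A quick numerical sketch (not from the lecture) that the two counting methods agree; the 1 mm sample size and 1 T field below are assumed illustrative values, and SI units are used (so $\omega = eB/m$ and $\phi_0 = h/e$).

```python
# Landau-level degeneracy two ways: density of states times hbar*omega,
# versus total flux divided by the flux quantum.

import math

e = 1.602e-19          # C
m = 9.109e-31          # kg (electron)
hbar = 1.0546e-34      # J s
h = 2 * math.pi * hbar

B = 1.0                # tesla
Lx = Ly = 1e-3         # a 1 mm x 1 mm sample (assumed)

# Counting 1: (dn/d eps) * hbar omega, with dn/d eps = Lx Ly 2 pi m / (2 pi hbar)^2
omega = e * B / m
g1 = Lx * Ly * 2 * math.pi * m / (2 * math.pi * hbar)**2 * hbar * omega

# Counting 2: number of flux quanta through the sample
g2 = B * Lx * Ly / (h / e)

print(f"g1 = {g1:.3e}, g2 = {g2:.3e}")   # both about 2.4e8 states per level
```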
With all this knowledge, we want to compute the grand partition function of the system. Recall
$$\log\mathcal{Q} = \sum\log\left(1 + ze^{-\beta\epsilon}\right) \tag{19.13}$$
We can calculate $N$ and $M$ from the grand partition function; for the results, ref. Pathria Chapter 8. The result is not very different from our heuristic estimate above, apart from a constant factor.
20 Lecture 20
20.1 Superfluids
Recall the fluid equation
$$\frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} + \frac{\nabla P}{\rho} - \nu\nabla^2\mathbf{v} = 0 \tag{20.1}$$
For a superfluid $\nu = 0$ and there is no viscosity. Superfluidity is found in He at very low temperature. If the interparticle spacing in the fluid is $a$, then the kinetic energy cost of localizing an atom is
$$E \sim \frac{(\hbar/a)^2}{2m} \tag{20.2}$$
In order to understand superfluidity, we first need to understand the forces between particles in the fluid. When the forces are zero, we have the familiar result from Bose-Einstein statistics,
$$k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{N}{2.612\,V}\right)^{2/3} \approx k_B \times 3.13\ \text{K} \tag{20.3}$$
Note that for He, superfluidity happens for $T < 2.2$ K. So we know superfluidity is related to, but does not come solely from, the Bose condensation effect.
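A numerical sketch (not from the lecture) of Eq. (20.3) for liquid helium; the density of liquid He-4 used below, about 145 kg/m^3, is an assumed input.

```python
# Ideal-gas BEC temperature k_B T_c = (2 pi hbar^2 / m)(n / 2.612)^(2/3)
# evaluated for liquid helium-4.

import math

hbar = 1.0546e-34      # J s
k_B = 1.381e-23        # J/K
m_He = 6.646e-27       # kg, mass of a He-4 atom
rho = 145.0            # kg/m^3, liquid helium density (assumed)

n = rho / m_He         # number density
T_c = (2 * math.pi * hbar**2 / (m_He * k_B)) * (n / 2.612)**(2.0 / 3.0)
print(f"T_c ~ {T_c:.2f} K")   # about 3.1 K, vs the observed lambda point 2.2 K
```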
Recall the Van der Waals interaction between particles. Consider an atom in a uniform electric field $E_0$. In equilibrium the atom is stationary and the electron cloud is polarized: the electrons form a dipole such that the nucleus in the middle experiences no net force. The dipole moment is about
$$ed \sim a^3 E_0 \tag{20.4}$$
where $a$ is the size of the atom. Now suppose we have two atoms, 1 and 2, separated by distance $r$. The Van der Waals potential between them due to polarization is
$$V_{12} = -\frac{e^2 a_1^2 a_2^3}{r^6} \sim -\frac{a^6}{r^6}\,\frac{e^2}{a} \tag{20.5}$$
The first ratio, $(a/r)^6$, is about $10^{-6}$ in order of magnitude, and the second ratio is about $e^2/a \sim 10^2$ eV, so the attraction from the Van der Waals force is about $10^{-4}$ eV, i.e. of order 1 K, in strength. The potential is attractive at larger distances but increases rapidly at smaller distances, and it is hard to see what to neglect in our approximation.
Consider a box of volume $V$ with $N$ bosons inside. Suppose the interaction between particles is a weak attraction of range approximately $a$, with potential energy about $-u_0$. We first put all the particles in the ground state, so the total energy is approximately 0. Now suppose we squeeze every particle into a small box of size $a$. The energy of this configuration is
$$E_N = \frac{\hbar^2}{2ma^2}N - \frac{N(N-1)}{2}u_0 \tag{20.6}$$
This says that if there is no repulsion, the energy is lowered by the particles all coming together, so we need to add a repulsion to prevent the particles from collapsing together. We introduce a pseudopotential to represent this repulsion.
Suppose we add a weak repulsion $u(r) > 0$. The ground state again has zero kinetic energy, but because of the repulsion the energy is actually
$$E_{GS} = 0 + \frac{N(N-1)}{2}\frac{w}{V} \tag{20.7}$$
where $w = \int u(r)\,dV$ is the integrated interaction. Now what about the excited states? Consider a 1-particle excited state with momentum $p$. The energy is now
$$E_{e1} = \frac{p^2}{2m} + N(N-1)\frac{w_k}{V} \tag{20.8}$$
where $w_k = 4\pi\int r^2\,dr\,\cos(kr)\,u(r)$ and $k = p/\hbar$. The effect of the repulsion is almost twice as big as in the ground state, so excited states cost energy.
If there are only two particles we can write out the wave functions explicitly. Assuming periodic boundary conditions, the ground state wave function is
$$\Psi_{GS} = \frac{1}{L^{3/2}}\cdot\frac{1}{L^{3/2}} \tag{20.9}$$
whereas the 1-particle excited states are
$$\Psi_{e1} = \frac{e^{i\mathbf{k}\cdot\mathbf{r}_1}}{L^{3/2}}\cdot\frac{1}{L^{3/2}} \quad\text{or}\quad \frac{1}{L^{3/2}}\cdot\frac{e^{i\mathbf{k}\cdot\mathbf{r}_2}}{L^{3/2}} \tag{20.10}$$
Now, if the particles are bosons then there is only one excited state, the symmetric combination of the above two:
$$\Psi_{BE} = \frac{1}{\sqrt{2}}\left[\frac{e^{i\mathbf{k}\cdot\mathbf{r}_1}}{L^{3/2}}\cdot\frac{1}{L^{3/2}} + \frac{1}{L^{3/2}}\cdot\frac{e^{i\mathbf{k}\cdot\mathbf{r}_2}}{L^{3/2}}\right] \tag{20.11}$$
The probability density for finding the particles at separation $\mathbf{r}_{12}$ is, in the Bose-Einstein case,
$$\left|\Psi_{BE}\right|^2 = \frac{1}{L^6}\left(1 + \cos\mathbf{k}\cdot\mathbf{r}_{12}\right) \tag{20.12}$$
So the particles are up to twice as likely to be close to each other; this is another factor that raises the energy of excited states when there is repulsion. The one-particle excited states thus have the spectrum
$$E = \frac{p^2}{2m} + \Delta \tag{20.13}$$
where $\Delta$ is a kind of gap between the excited states and the ground state. The spectrum has a lowest point at some $p_0$, which shows the tendency toward forming a lattice of that spacing. However, the phonon spectrum is gapless, with a linear dispersion relation
$$E_{phonon} = \hbar c_s k \tag{20.14}$$
Now consider a particle of momentum $P_0$ going through the liquid at velocity $V_0$. Its energy is $E = V_0 P_0/2$, so the energy change when it gives up momentum is $\Delta E = V_0\,\Delta P_0$. If this line (which is just a straight line of slope $V_0$ in the $E$-$p$ plane) lies lower than the lowest excitation states of the liquid, the phonons and the gapped excitations alike, then the particle cannot create any excitation, sees no dissipation, and the liquid behaves as a superfluid.
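A numerical sketch (not from the lecture) of this Landau criterion: the critical velocity is $\min_p E(p)/p$. The roton parameters of He-4 used below, $\Delta/k_B \approx 8.65$ K, $p_0/\hbar \approx 1.92\ \text{\AA}^{-1}$, and effective mass $\approx 0.16\,m_{He}$, are measured values taken as assumed inputs, and the parabolic spectrum around $p_0$ is only a model of the gapped branch.

```python
# Landau critical velocity v_c = min_p E(p)/p for a model roton spectrum
# E(p) = Delta + (p - p0)^2 / (2 mu).

hbar = 1.0546e-34
k_B = 1.381e-23
m_He = 6.646e-27

Delta = 8.65 * k_B        # roton gap (measured value, assumed input)
p0 = 1.92e10 * hbar       # roton momentum (measured value, assumed input)
mu = 0.16 * m_He          # roton effective mass (measured value, assumed input)

def E(p):
    """Model spectrum near the roton minimum."""
    return Delta + (p - p0)**2 / (2 * mu)

# Scan E(p)/p around the roton for the minimum slope from the origin.
ps = [p0 * (0.5 + 0.001 * i) for i in range(1500)]
v_c = min(E(p) / p for p in ps)
print(f"v_c ~ {v_c:.0f} m/s")   # tens of m/s, far below the sound speed ~240 m/s
```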
Suppose we can describe the condensed particles in the fluid by a wave function $\psi_0 = |\psi_0|e^{i\phi}$. The density of particles is just
$$\rho = N|\psi_0|^2 \tag{20.15}$$
We define the velocity from the quantum probability current divided by the density,
$$\mathbf{V} = \frac{\hbar}{2mi}\,\frac{\psi_0^*\nabla\psi_0 - \psi_0\nabla\psi_0^*}{|\psi_0|^2} \tag{20.16}$$
and for $\psi_0 = |\psi_0|e^{i\phi}$ this gives
$$\mathbf{V} = \frac{\hbar}{m}\nabla\phi \tag{20.17}$$
21 Lecture 21
Now consider a many-particle wavefunction of the form
$$\Psi = |\Psi|e^{i\phi} \tag{21.1}$$
We can define the velocity of a particle inside the many-body system as
$$\mathbf{v} = \frac{\hbar}{m}\nabla\phi \tag{21.2}$$
Note that the velocity is a gradient, so its curl vanishes. And because $\phi$ is a phase, single-valuedness of $\Psi$ requires the circulation of $\mathbf{v}$ around any closed loop to be quantized:
$$\oint\mathbf{v}\cdot d\mathbf{s} = \frac{2\pi\hbar}{m}\,\nu, \qquad \nu = 0, \pm 1, \pm 2, \dots \tag{21.3}$$
Now recall the fluid equation
$$\frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} + \frac{\nabla P}{\rho} - \nu\nabla^2\mathbf{v} = 0 \tag{21.4}$$
We define the quantity
$$h = \int\frac{dP}{\rho} \tag{21.5}$$
so that $\nabla h = \nabla P/\rho$.
When there is no dissipation, we have
$$\frac{D\mathbf{v}}{Dt} = \frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\frac{\nabla P}{\rho} \tag{21.6}$$
There are many interesting things to say about superfluids with $\nabla\times\mathbf{v} = 0$.
We want to study vortex lines in superfluids. A vortex line must start and end on the boundary of the fluid, or else close on itself as a loop. First we introduce a theorem (Kelvin's circulation theorem):
$$\frac{D}{Dt}\oint\mathbf{v}\cdot d\mathbf{s} = 0 \tag{21.7}$$
We show this is true by evaluating
$$\frac{D}{Dt}\oint\mathbf{v}\cdot d\mathbf{s} = \oint\left(\frac{D\mathbf{v}}{Dt}\right)\cdot d\mathbf{s} + \oint\mathbf{v}\cdot\frac{D(d\mathbf{s})}{Dt} \tag{21.8}$$
The first term is the closed-loop integral of the gradient $-\nabla h$, so it is zero. For the second term, the material derivative of a line element of the loop is
$$\frac{D(d\mathbf{s})}{Dt} = (d\mathbf{s}\cdot\nabla)\mathbf{v} \tag{21.9}$$
so the second term becomes
$$\oint\mathbf{v}\cdot(d\mathbf{s}\cdot\nabla)\mathbf{v} = \oint\nabla\left(\frac{v^2}{2}\right)\cdot d\mathbf{s} = 0 \tag{21.10}$$
Suppose we have a vortex line along the $z$ direction and consider the motion in the $x$-$y$ plane. If the velocity circulates around the vortex line, then for a singly quantized vortex it must be
$$v = \frac{\hbar}{mr} \tag{21.11}$$
Now if there is a vortex line and a flow going past it at velocity $\mathbf{v}$, we want to know how the vortex line itself moves. We still have the quantization condition (21.3). The force per unit length on the vortex line is
$$\mathbf{F}_{vortex} = 2\pi\frac{\hbar}{m}\,\rho\;\hat{\nu}\times\mathbf{v} \tag{21.12}$$
where $\hat{\nu}$ is the direction of the vortex line. This says that when there is a flow past the vortex line, there is a force perpendicular to both the vortex line and the flow direction. This is called the Magnus force.
If we have a vortex line, there is a tension in it, because we must do work to stretch it longer. The tension is
$$T = \int 2\pi r\,dr\,\frac{\rho v^2}{2} = \frac{2\pi\rho\hbar^2}{2m^2}\int_{r_{min}}^{r_{max}}\frac{r\,dr}{r^2} \sim \frac{\pi\rho\hbar^2}{m^2}\log\left(\frac{r_{max}}{r_{min}}\right) \tag{21.13}$$
For a curved vortex line of radius of curvature $R$, the tension is related to the Magnus force by
$$\frac{2T}{R} = F = 2\pi\frac{\hbar}{m}\,\rho v \tag{21.14}$$
so the velocity of the vortex line is
$$v \sim \frac{\hbar}{mR} \tag{21.15}$$
up to a logarithmic factor of order unity.
Consider a large container with a small hole connecting to the outside. Phonon excitations of long wavelength cannot go through the hole, because they cannot even see it. Everything flowing out of the small hole must be excitationless: it is a zero-temperature fluid with no entropy, while the entropy in the bulk is unchanged as the volume decreases. But there is one kind of excitation that can occur as the fluid comes out, namely vortex excitation. If a vortex loop moves at a velocity comparable to the velocity of the fluid, we can estimate the radius of the vortex loop, knowing
$$E_{loop} \sim R^3\,\frac{\hbar^2\rho}{m^2R^2}, \qquad p \sim R^3\rho\,\frac{\hbar}{mR} \tag{21.16}$$
Now consider a vortex near a solid boundary, with boundary condition $\mathbf{v}\cdot\hat{n} = 0$. We can create an image vortex on the other side of the boundary with the same magnitude and opposite direction, so that the normal component of the velocity at the wall is zero; from this we can calculate the velocity of the fluid near the vortex. We usually consider only vortex lines of minimum circulation, because for a higher winding number, say $\nu = 2$, the kinetic energy is 4 times as large, and it pays energetically for the line to split into two separate vortex lines. For the same reason vortex lines repel each other when they are very close.
Consider a superfluid in a container which we spin at angular velocity $\Omega_0$. If it were a normal fluid, we would have a curved surface in the shape of a parabola, with
$$\nabla\times\mathbf{v} = 2\mathbf{\Omega}_0 \tag{21.17}$$
For classical fluids we have many ways to communicate the motion of the container to the fluid, but that is not the case for superfluids. It is convenient to change into the comoving frame, minimize the energy there, and then transform back to the lab frame. The quantity
$$E - \Omega_0 L = \frac{I\omega^2}{2} - \Omega_0 I\omega \tag{21.18}$$
is what we want to minimize. The minimum is at $\omega = \Omega_0$, with value $-\Omega_0^2 I/2$. For a superfluid, what happens is that the motion of the container creates vortex lines in the fluid, and the number density of the vortex lines is
$$n_V = \frac{\Omega_0 m}{\pi\hbar} \tag{21.19}$$
This mimics classical fluid rotation, with the same parabolic surface, but it does so by creating a large array of vortices which co-moves with the rotation of the bucket. Without a magnifying glass the motion looks no different from classical motion, but microscopically it is quite different.
22 Lecture 22
Last time we did the superfluid in a rotating bucket. The way we think about this formally is to minimize
$$H - \mathbf{\Omega}_0\cdot\mathbf{L} \tag{22.1}$$
which is equivalent to minimizing
$$\frac{I\omega^2}{2} - \Omega_0 I\omega \tag{22.2}$$
The physics happening in the rotating bucket is as follows. When you start to rotate the bucket, the superfluid does not respond until a critical velocity is reached. At that point a vortex line is created somewhere, and in the steady state the vortex line sits in the middle of the fluid. The kinetic energy of the vortex line is
$$K = \int_0^R \frac{1}{2}\rho v^2\,L\,2\pi r\,dr = \frac{2\pi}{2}\rho L\left(\frac{\hbar}{m}\right)^2\int_0^R\left(\frac{1}{r}\right)^2 r\,dr \tag{22.3}$$
where $L$ here is the height of the bucket.
And we know the angular momentum term, since each of the $N$ atoms circulating around the vortex carries angular momentum $\hbar$:
$$\Omega_0 L = \Omega_0(N\hbar) \tag{22.4}$$
When these two quantities are equal we have a critical point and the vortex line is formed. Then after a while we get another vortex line, and then another; with more vortex lines, the behavior of the fluid mimics that of a classical fluid. The critical angular velocity for creating one vortex line is
$$\Omega_c = \frac{\hbar}{m}\frac{1}{R^2}\log\frac{R}{a} \tag{22.5}$$
where a is about the atomic separation.
We want to know the number density of the vortex-line lattice in the bucket when it is spinning at $\Omega_0$. The total circulation is
$$\oint\mathbf{v}\cdot d\mathbf{s} = 2\pi R^2\Omega_0 = N_V\frac{2\pi\hbar}{m} \tag{22.6}$$
So dividing by the total area we get
$$n_V = \frac{N_V}{\pi R^2} = \frac{\Omega_0 m}{\pi\hbar} \tag{22.7}$$
Note that this number density does not depend on the density of the underlying material, only on the fundamental mass of the atoms.
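A quick numerical sketch (not from the lecture) of Eq. (22.7) for He-4; the rotation rate of 1 rad/s is an assumed illustrative value.

```python
# Vortex line density n_V = Omega m / (pi hbar) for rotating superfluid He-4.

import math

hbar = 1.0546e-34      # J s
m_He = 6.646e-27       # kg

Omega = 1.0            # rad/s (assumed)
n_V = Omega * m_He / (math.pi * hbar)
print(f"n_V ~ {n_V:.1e} vortex lines per m^2")   # ~2e7 per m^2, i.e. ~2000 per cm^2
```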
Vortices in the lattice talk to each other even at long distances, because the velocity around a vortex line drops only as $1/r$, which is slow. So the whole lattice co-rotates as a body, though with some subtlety, because there is some variation in the speeds of the vortex lines. Another question to ask is what kind of lattice the vortices form: the answer is a hexagonal lattice. We can then introduce perturbations to the lattice and produce waves, just as we have sound waves in solids. The characteristic velocity of these waves in the lattice is
$$v_T \sim \frac{\hbar}{m\,a_{sep}} \tag{22.8}$$
where $a_{sep}$ is the separation of the vortex lattice. This velocity is extremely small, so all these waves are at very large wavelengths and low frequencies, and they do not contribute much to the heat capacity of the substance.
Now suppose we have an irregular container with superfluid in it. If it is spinning at $\Omega_0$ there will be some vortex lines in the fluid. As we spin down, the vortex lines move out and disappear at the edge, so there is no problem. But if the container is not smooth and there is some place where the separation between the upper and lower walls is small, a vortex line can be pinned there, because it takes energy for it to lengthen and move elsewhere. Remember the tension is
$$T_v = \frac{K}{L} \sim 10^{-9}\ \text{erg/cm} \tag{22.9}$$
Now if $T \sim 1$ K, then the extra length $\Delta s$ the vortex line would need in order to escape the pinning site satisfies
$$\Delta s\,T_v \gg kT \tag{22.10}$$
If this is satisfied, then thermal excitation does not have enough energy to move the vortex line, so the line stays stuck when the container spins down.
Remember that in a material we have the dispersion relation for sound waves,
$$\omega = c_s k \tag{22.11}$$
The energy density of the excitations looks like black-body radiation,
$$\frac{E}{V} \sim kT\left(\frac{kT}{\hbar c_s}\right)^3 \tag{22.12}$$
The number density of phonons goes like
$$n_s \propto \frac{1}{c_s^3} \tag{22.13}$$
Because $c_s \ll c$, the number density of phonons is enormous compared to the number density of photons, so you do not need to worry about black-body radiation contributing to the heat capacity of the solid. Also, the scattering cross section between photons is negligible compared to that between phonons. So the black-body radiation just bounces off the walls without scattering off itself, and its mean free path is the dimension of the box; the mean free path of the phonons is much, much shorter, because of the larger cross section and the much larger density. Now, in a superfluid the only excitations are phonons, and a phonon collides with nothing but other phonons: there is no degree of freedom other than the phonon gas itself. We can think of it as an interacting gas, in which a disturbance travels out at a speed different from $c_s$, which we call the speed of second sound.
The pressure in a phonon gas is related to the energy density by
$$P = \frac{u}{3} \tag{22.14}$$
and the effective mass density carried by the phonons is
$$\rho = \frac{u}{c_s^2} \tag{22.15}$$
Combining these, the speed of second sound $c_2$ satisfies
$$c_2^2 = \frac{dP}{d\rho} = \frac{c_s^2}{3} \implies c_2 = \frac{c_s}{\sqrt{3}} \tag{22.16}$$
and this is the relation between speed of second sound and speed of sound.
Suppose we have a small container inside a big one, with a semipermeable membrane connecting the fluids: the membrane lets superfluid through but not phonons. In equilibrium the temperature $T_1$ in the smaller container is greater than the temperature $T_2$ in the bigger container. This is reminiscent of the osmotic pressure problem we introduced earlier in the course. We have
$$P(T_1) - P(T_2) = \rho_{SF}\,gh \tag{22.17}$$
where $h$ is the height difference of the two fluids. Now if we seal off the membrane, the superfluid can still move from one container to the other along the wall of the smaller container, while a classical liquid cannot. When superfluid is exchanged this way, the total entropy and the number of phonons are unchanged in each container. Now if $T_1 > T_2$, heat can only flow from $T_1$ to $T_2$; but what flows here carries no entropy, so we can have paradox-like scenarios where fluid flows from $T_2$ to $T_1$.
Now what happens if we lift the smaller container out of the large one? The superfluid will still come out, by capillary film flow. This is not because the He atoms are small and permeate the container; rather, the fluid creeps along the wall of the container.
The final scenario is a container with a small pipe at the bottom; phonons cannot escape through the pipe while superfluid can. The temperature of the fluid inside the container rises, because the entropy stays behind while the fluid leaves, and the fluid at the other end of the pipe is lowered in temperature. This leads to the hot side getting hotter and the cold side getting colder.
Now we shift our attention from the neutral Bose gas to a charged fermion gas. We want to study superconductivity, treating the superconductor like a charged superfluid. We know that no magnetic field penetrates a superconductor; we want to study why.
Recall the Maxwell equations. In a perfect conductor the force on a particle is
$$e\mathbf{E} = m\dot{\mathbf{v}} \tag{22.18}$$
and the current inside is just
$$\mathbf{j} = en\mathbf{v} \tag{22.19}$$
In a perfect conductor there is no damping and the current does not dissipate. Combining the above two equations with the Maxwell equations and doing some algebra, we get
$$\nabla\times\left(\frac{mc}{e^2n}\,\dot{\mathbf{j}}\right) = -\dot{\mathbf{B}} \tag{22.20}$$
Discarding the displacement current, we can replace $\dot{\mathbf{j}}$ using the curl of $\dot{\mathbf{B}}$ and get
$$\nabla^2\dot{\mathbf{B}} = \frac{4\pi ne^2}{mc^2}\,\dot{\mathbf{B}} \tag{22.21}$$
The coefficient in front is just the plasma frequency squared, divided by $c^2$. Note that there are dots above $\mathbf{B}$: for a perfect conductor this only constrains the time derivative of the field.
Now suppose we are considering charged bosons condensed into one big ground-state wavefunction. The current has the form
$$\mathbf{j} = en\left\langle\psi_0^*\,\mathbf{v}\,\psi_0\right\rangle = en\left\langle\psi_0^*\left(\frac{\mathbf{p} - e\mathbf{A}/c}{m}\right)\psi_0\right\rangle \tag{22.22}$$
Remember quantum mechanics tells us that the $\langle\mathbf{p}\rangle$ contribution is a gradient of the phase, so its curl is zero; taking the curl of the current, we have
$$\nabla\times\mathbf{j} = -\frac{e^2 n}{mc}\,\mathbf{B} \tag{22.23}$$
This leads to the same equation as above, except without the dots above $\mathbf{B}$: the field itself, not just its time derivative, is expelled. This is the peculiarity of the charged quantum boson system. It is as if we integrated the perfect-conductor equation and selected the one solution without any time-independent extra term.