
Thermal Physics Notes

QiLin Xue

August 2020

1 Energy
• The linear thermal expansion coefficient is given by:

    α ≡ (∆L/L) / ∆T    (1)
• Similarly, the volume thermal expansion coefficient is given by:

    β ≡ (∆V/V) / ∆T    (2)
where β = αx + αy + αz .
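
For a quick numerical illustration (a minimal Python sketch; the material and the numbers are assumed, not from the notes), consider an isotropic rod with α ≈ 1.1×10⁻⁵ K⁻¹:

    # Sketch: linear and volume thermal expansion for an assumed isotropic
    # solid with alpha ~ 1.1e-5 per kelvin (illustrative value for steel).
    alpha = 1.1e-5      # linear expansion coefficient, 1/K
    L = 1.0             # original length, m
    dT = 50.0           # temperature change, K

    dL = alpha * L * dT       # Delta L = alpha * L * Delta T
    beta = 3 * alpha          # isotropic solid: alpha_x + alpha_y + alpha_z = 3 alpha
    dV_over_V = beta * dT     # fractional volume change

    print(f"Delta L = {dL*1e3:.2f} mm, Delta V / V = {dV_over_V:.2e}")
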
• The ideal gas law is
P V = nRT (3)
which is valid for tiny molecules that don’t interact with one another apart from collisions. The Virial Expansion provides
correction terms:

    P V = nRT [1 + A1(T)/(V/n) + A2(T)/(V/n)² + ···]    (4)

where the An(T) are experimentally determined coefficients.
• Another relationship is the van der Waals equation:
    (P + an²/V²)(V − nb) = nRT    (5)

where a and b are experimentally determined values that depend on the specific gas. By applying the binomial expansion,
we can see that the first two coefficients of the virial expansion are:

    A1(T) = b − a/(RT)
    A2(T) = b²
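
These coefficients can be checked symbolically; below is a minimal sketch (assuming sympy is available) that expands the van der Waals equation in powers of n/V:

    # Sketch: recover the virial coefficients A1 = b - a/(RT), A2 = b^2
    # by expanding the van der Waals equation in powers of x = n/V.
    import sympy as sp

    a, b, R, T, v, x = sp.symbols('a b R T v x', positive=True)   # v = V/n
    P = R*T/(v - b) - a/v**2                                      # van der Waals pressure
    expr = (P*v/(R*T)).subs(v, 1/x)                               # P(V/n)/(RT) as a function of x
    expansion = sp.expand(sp.series(expr, x, 0, 3).removeO())
    print(expansion)    # expected: 1 + (b - a/(R*T))*x + b**2*x**2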

• The equipartition of energy gives the total energy of a system as

    E = (f/2) N kT    (6)

where N is the number of molecules and f is the number of degrees of freedom per molecule.
• The first law of thermodynamics is:
∆U = Q + W (7)
where W is the work done on the system such that together, Q + W is the total energy that enters the system.
• For a quasistatic (slow) expansion or compression, the work done is given by:

dW = −P dV (8)

As a result, the area under a P-V curve is the negative of the work done on the gas.

• For an adiabatic compression or expansion, the following is an invariant:

P V γ = constant (9)
where γ = (f + 2)/f is the adiabatic constant.
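
For example (a minimal sketch; the gas and the initial state are assumed), compressing a diatomic gas (f = 5) to half its volume:

    # Sketch: adiabatic compression of a diatomic ideal gas (f = 5) to half
    # its volume; P V^gamma and T V^(gamma - 1) are both invariant.
    f = 5
    gamma = (f + 2) / f               # 7/5 for a diatomic gas
    P1, V1, T1 = 1.0e5, 1.0, 300.0    # assumed initial state (Pa, m^3, K)
    V2 = V1 / 2

    P2 = P1 * (V1 / V2) ** gamma           # from P V^gamma = constant
    T2 = T1 * (V1 / V2) ** (gamma - 1)     # from T V^(gamma - 1) = constant
    print(f"P2 = {P2:.3e} Pa, T2 = {T2:.1f} K")   # about 2.64e5 Pa and 396 K
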
• The dry adiabatic lapse rate is given by:
    dT/dz = −[2/(f + 2)] (M g/R) ≈ −1 °C per 100 m    (10)

where M is the molar mass of the gas.

Derivation: When an ideal gas expands adiabatically, we have:


    dU = −P dV =⇒ (f/2) N k dT = −P dV    (11)
We also know that:
d(P V ) = N k dT =⇒ P dV + V dP = N k dT (12)
Substituting P dV from earlier, we get:
    −(f/2) N k dT + V dP = N k dT    (13)
From the ideal gas law, V = N kT/P, so:

    (N kT/P) dP = N k (f/2 + 1) dT =⇒ dT = [2/(f + 2)] (T/P) dP    (14)

By balancing pressure forces, or using the relationship for the exponential atmosphere, we can write
    dP = −(mg/kT) P dz    (15)

which, once substituted, gives the desired relationship.
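
A quick numerical check of the quoted magnitude (a minimal sketch; the values for dry air are assumed):

    # Sketch: numerical value of the dry adiabatic lapse rate for air.
    f = 5          # degrees of freedom of a diatomic gas
    M = 0.029      # molar mass of dry air, kg/mol (assumed)
    g = 9.8        # gravitational acceleration, m/s^2
    R = 8.314      # gas constant, J/(mol K)

    dTdz = -2 / (f + 2) * M * g / R                   # K per metre
    print(f"dT/dz = {dTdz * 100:.2f} K per 100 m")    # about -0.98 K per 100 m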

• The heat capacity is defined as:


    C ≡ Q/∆T    (16)
and the specific heat capacity is the heat capacity per unit mass. From the first law, we get:
    C = (∆U − W)/∆T    (17)

• At constant volume, no work is done, so we have:

    CV = (∂U/∂T)_V = ∂/∂T [(f/2) N kT] = N f k/2    (18)

For a solid, there are six degrees of freedom per atom, so the molar heat capacity at constant volume is:

c = 3R (19)

known as the rule of Dulong and Petit, though this only holds at high temperatures; the degrees of freedom freeze out as T → 0.

• At constant pressure, we have:

    CP = (∂U/∂T)_P + P (∂V/∂T)_P    (20)

For an ideal gas the second term is equal to N k, so we can relate the two:

    CP = CV + N k    (21)

Idea: Measured heat capacities are typically at constant pressure rather than constant volume. The reason is that
heating an object causes it to expand, and it would take an extraordinary amount of pressure to prevent this expansion.
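
As a concrete check (a minimal sketch, not part of the original notes), the molar heat capacities of monatomic and diatomic ideal gases:

    # Sketch: molar heat capacities from C_V = (f/2) n R and C_P = C_V + n R
    # for one mole of an ideal gas.
    R = 8.314   # J/(mol K)

    for name, f in [("monatomic", 3), ("diatomic", 5)]:
        cv = f / 2 * R
        cp = cv + R
        print(f"{name}: c_V = {cv:.2f}, c_P = {cp:.2f} J/(mol K), gamma = {cp/cv:.3f}")
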

• The latent heat tells us the heat per unit mass to accomplish a phase transformation:
    L ≡ Q/m    (22)
• The enthalpy gives the total energy needed to create a system out of nothing:
H ≡ U + PV (23)
which includes both the internal energy of a system as well as the energy to make room for the system. Notice that this
can be used to rewrite CP as:  
    CP = (∂H/∂T)_P    (24)
• At constant pressure, we have:
∆H = ∆U + P ∆V (25)
• At constant pressure, the first law of thermodynamics can be rewritten as:

    ∆H = Q + Wother    (26)

where Wother is any work done on the system other than compression or expansion work.

• The Fourier heat conduction law relates the heat flow to the temperature gradient:

    Q/∆t = −kt A (dT/dx)    (27)
where kt is the thermal conductivity.
• The heat equation is given by:
    ∂T/∂t = K ∂²T/∂x²    (28)

where K = kt/(cρ).

Derivation: Consider a small segment of length ∆x on a uniform rod. Fourier’s law gives:

    Q/∆t = Q1/∆t + Q2/∆t = kt A [ (dT/dx)|_{∆x/2} − (dT/dx)|_{−∆x/2} ] = kt A ∆x (d²T/dx²)    (29)

We can rewrite Q = mc∆T, and the density of the rod is ρ = m/(A∆x). Making these substitutions yields the heat
equation.
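
The heat equation can also be integrated numerically; below is a minimal explicit finite-difference sketch (the diffusivity, rod length, grid, and initial temperature profile are all assumed for illustration):

    # Sketch: explicit finite-difference integration of dT/dt = K d^2T/dx^2
    # on a rod whose ends are held at T = 0.
    import numpy as np

    K = 1e-4             # assumed thermal diffusivity, m^2/s
    L, N = 1.0, 51       # rod length (m) and number of grid points
    dx = L / (N - 1)
    dt = 0.4 * dx**2 / K          # stable explicit step: dt <= dx^2 / (2K)

    T = np.zeros(N)
    T[N // 2] = 100.0             # assumed initial hot spot in the middle

    for _ in range(2000):
        d2T = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2   # central second derivative
        T[1:-1] += K * dt * d2T                      # ends stay fixed at 0
    print(f"peak temperature after diffusion: {T.max():.2f}")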

• Conduction can also happen via gases, where the thermal conductivity is given by:
    kt = (1/2) (CV/V) ℓ ⟨v⟩ = (f P/4T) ℓ ⟨v⟩    (30)

where ℓ is the mean free path and ⟨v⟩ is the average molecular speed.
This can be shown by considering the energy flow across a given boundary between two identical gases with different
temperatures.
• The force of viscosity is given by
    |Fx| = ηA (dux/dr)    (31)
where η is the viscosity of the fluid and |Fx | gives the force on the plates that surround the fluid.
• The rate of diffusion is given by Fick’s Law:
    Jx = −D (dn/dx)    (32)
where D is called the diffusion coefficient.
• Similar to how the heat equation was derived from Fourier's law, we can derive Fick's second law:

    ∂n/∂t = D ∂²n/∂x²    (33)

Recommended Problems: The following are recommended to supplement this section:
• (IPhO 1996 1c) Finding the heat capacity of a metal given a particular time-dependent temperature T (t).

• (EstFinn 2004 7) Modelling a passive air cooling system.


• (Kalda Thermo 4) Exercise that examines the heating power of a single person.
• (EstFinn 2003 2) Graphing problem regarding a microprocessor with a water-cooling system.

2 Entropy
• The microstate refers to a specific configuration (permutation) while a macrostate describes the overall state (com-
bination). The number of microstates that correspond to a particular macrostate is called the multiplicity of that
macrostate.
• The fundamental assumption of statistical mechanics is that in an isolated system in thermal equilibrium, all accessible
microstates are equally probable.
• The second law of thermodynamics states that the spontaneous flow of energy stops when a system is at, or very near,
its most likely macrostate, the one with the highest multiplicity.

• Stirling's Approximation states that

    N! ≈ N^N e^(−N) √(2πN)    (34)

for N ≫ 1.

Remarks: N! gives the product of the first N positive integers. As a very crude approximation, we can estimate it to be
N^N, but this is an overestimation. It turns out that each factor is on average smaller by a factor of e, creating
the e^(−N) term. The last factor, √(2πN), is a correction that can be ignored for very large values of N. For
example, when we ask for the logarithm of N!, we can omit the last factor to get

    ln N! ≈ N ln N − N    (35)
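
A quick numerical check of the approximation (a minimal sketch using the standard library):

    # Sketch: compare ln(N!) with Stirling's approximation for N = 100.
    import math

    N = 100
    exact = math.lgamma(N + 1)                                  # ln(N!)
    full  = N*math.log(N) - N + 0.5*math.log(2*math.pi*N)       # with sqrt(2 pi N)
    crude = N*math.log(N) - N                                   # without the correction
    print(f"exact = {exact:.3f}, full Stirling = {full:.3f}, crude = {crude:.3f}")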

• The Einstein model treats a solid as a collection of identical quantum harmonic oscillators, where energy is
quantized. The multiplicity of an Einstein solid with N oscillators and q energy units is:

    Ω(N, q) = (q + N − 1 choose q) = (q + N − 1)! / [q! (N − 1)!]    (36)

which can be proved via the stars-and-bars method. In the limit q ≫ N, Stirling's approximation gives

    ln Ω ≈ N ln(q/N) + N + N²/(2q)    (37)

so that, dropping the last term,

    Ω ≈ (eq/N)^N    (38)

In the opposite limit q ≪ N, the same argument gives

    Ω ≈ (eN/q)^q    (39)
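
The high-temperature approximation (38) can be checked numerically against the exact multiplicity (36); a minimal sketch (N and q are assumed values with q ≫ N):

    # Sketch: exact Einstein-solid multiplicity vs. the q >> N approximation
    # Omega ~ (e q / N)^N, compared through logarithms.
    import math

    N, q = 50, 5000
    ln_exact = math.lgamma(q + N) - math.lgamma(q + 1) - math.lgamma(N)   # ln[(q+N-1)!/(q!(N-1)!)]
    ln_approx = N * math.log(math.e * q / N)                              # ln[(e q / N)^N]
    print(f"ln(Omega): exact = {ln_exact:.2f}, approximate = {ln_approx:.2f}")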

• The multiplicity of a monatomic ideal gas is given by:

    Ω = V Vp / h³    (40)

where V is the volume of the space, h is Planck's constant, and Vp is the area of the momentum-space hypersphere.

4
Derivation: It makes sense that the multiplicity should be directly proportional to the volume (more space for a
molecule to be in) and the volume of the momentum space (more momenta for a molecule to have). Since:

    px² + py² + pz² = 2mU    (41)



the momentum-space sphere has a radius of √(2mU), and its surface area is Vp = 4π(2mU) for the one-atom case. This
talk of position and momentum space is exactly analogous to that in quantum mechanics, where the uncertainties
follow the Heisenberg uncertainty principle:

∆x∆px ≈ h (42)

The number of distinct states with a well-defined position and momentum is thus:

    (L Lp / ∆x ∆px)³ = V Vp / h³    (43)

which, although it may be missing a few factors of 2 or π, is very accurate since we only care about the order of
magnitude of such large numbers.

• For a system with N indistinguishable molecules, the same derivation can be applied to show that:

    ΩN = (1/N!) V^N Vp / h^(3N)    (44)

where the 1/N! comes from the fact that we would otherwise be overcounting.
• The area of the momentum-space hypersphere Vp is given by:

    Vp = [2π^(d/2) / (d/2 − 1)!] r^(d−1)    (45)

where d = 3N is the dimension. Putting this all together, we can get a high level look at the multiplicity:

    Ω(U, V, N) = f(N) V^N U^(3N/2)    (46)

• We can define the entropy of a system as:


S ≡ k ln Ω (47)
Entropy is an extensive property: when two systems with multiplicities ΩA and ΩB are put together, the total
multiplicity becomes ΩA ΩB, so the total entropy becomes SA + SB.
• By applying the equation of multiplicity and Stirling’s approximation, we can show that the entropy of an ideal gas is:
    S = N k [ ln( (V/N) (4πmU/(3N h²))^(3/2) ) + 5/2 ]    (48)

known as the Sackur-Tetrode equation.
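
As an illustration (a minimal sketch; the choice of gas and conditions is assumed), the Sackur-Tetrode entropy of one mole of helium at about 300 K and 1 atm:

    # Sketch: Sackur-Tetrode entropy of one mole of helium at ~300 K, 1 atm.
    import math

    k  = 1.381e-23        # Boltzmann constant, J/K
    h  = 6.626e-34        # Planck constant, J s
    NA = 6.022e23         # Avogadro's number
    m  = 4.0 * 1.66e-27   # mass of a helium atom, kg

    N, T = NA, 300.0
    V = N * k * T / 1.013e5       # volume from the ideal gas law at 1 atm
    U = 1.5 * N * k * T           # monatomic gas: U = (3/2) N k T

    S = N * k * (math.log((V / N) * (4*math.pi*m*U / (3*N*h**2))**1.5) + 2.5)
    print(f"S = {S:.0f} J/K per mole")    # roughly 126 J/K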


• During a quasistatic process, the change in entropy is:
    ∆S = Q/T    (49)

Recommended Problems: The following are recommended to supplement this section:


• (placeholder)

3 Interactions and Implications
• Two weakly coupled systems achieve equilibrium when:

    ∂Stotal/∂UA = ∂SA/∂UA + ∂SB/∂UA = 0    (50)

where we have used dUA = −dUB.

[Figure: entropy of a weakly coupled system; Stotal, SA, and SB (in units of k) plotted against qA.]

Alternatively, at equilibrium we can write

    ∂SA/∂UA = ∂SB/∂UB    (51)
• In the S − U plot, the second law tells us that energy will tend to flow into the object with the steeper slope, and thus
we can use this to define temperature as:  
    1/T = (∂S/∂U)_{N,V}    (52)

where the 1/T comes from the fact that the steeper the slope, the lower the temperature.

• We can use this definition of temperature to verify results. For example, an Einstein solid with q ≫ N has an entropy of:

S = N k[ln(q/N ) + 1] = N k ln U − N k ln(N ) + N k (53)

and thus the temperature is:


    1/T = ∂S/∂U = N k/U =⇒ U = N kT    (54)
This also works for the entropy of a monatomic ideal gas where:

    S = N k ln V + N k ln U^(3/2) + f(N)    (55)
The 3/2 factor naturally comes out in front once the partial derivative with respect to U is taken, yielding U = (3/2) N kT.
• We can rearrange the definition of temperature to calculate entropy as
    dS = dU/T = Q/T    (56)

where the system is held at constant volume with no work done. (Later we show that this is not the only case where this is
valid; it holds for all quasistatic processes.) If the temperature is changing, it is convenient to write Q = CV dT to get:

    dS = CV dT/T    (57)

• We can associate an energy cost with a bit of information by looking at what happens when the bit is erased, assuming
the system is connected to a heat bath at temperature T:

E = kT ln(2) (58)

where the ln(2) factor comes from the fact that a bit of information has a multiplicity of Ω = 2.
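
Numerically (a minimal sketch; the bath temperature is assumed to be room temperature), this energy is tiny:

    # Sketch: energy associated with erasing one bit at T = 300 K.
    import math

    k = 1.381e-23     # Boltzmann constant, J/K
    T = 300.0         # assumed heat-bath temperature, K
    E = k * T * math.log(2)
    print(f"E = {E:.2e} J per bit")   # about 2.9e-21 J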
• A two-state paramagnet is a material in which the magnetic dipoles that make it up can only have two possible
orientations, parallel or antiparallel to the externally applied field.

• According to quantum mechanics, a spin-1/2 particle has only two possible dipole orientations, which we denote ↑ and ↓,
with energies −µB and +µB respectively. The total energy of a system of such dipoles is:

U = µB(N↓ − N↑ ) = µB(N − 2N↑ ) (59)

where we have used N = N↑ + N↓ .

• The magnetization is defined as


    M = µ(N↑ − N↓) = −U/B    (60)
• The multiplicity of having N↑ particles in the up state is:
 
    Ω(N↑) = (N choose N↑) = N!/(N↑! N↓!)    (61)

• We can calculate the entropy using Stirling’s approximation to be:

S/k = ln N ! − ln N↑ ! − ln(N − N↑ )! (62)


≈ N ln N − N↑ ln N↑ − (N − N↑ ) ln(N − N↑ ) (63)

where:

    ∂S/∂N↑ = k ln[(N + U/µB)/(N − U/µB)]    (64)

This can be used to calculate the energy of the system. Using ∂N↑/∂U = −1/(2µB), we have:

    1/T = (∂S/∂U)_{N,B} = (∂N↑/∂U)(∂S/∂N↑) = [k/(2µB)] ln[(N − U/µB)/(N + U/µB)]    (65)

which, upon solving for U, yields:

    U = −N µB tanh(µB/kT)    (66)

• The heat capacity of the paramagnet is:

    CB = (∂U/∂T)_{N,B} = N k β² / cosh²(β)    (67)

with β ≡ µB/kT.
• Similarly, the entropy of a two-state paramagnet is

S = N k (ln(2 cosh β) − β tanh β) (68)

• At high temperatures, µB ≪ kT, the magnetization is given by:

    M ≈ N µ²B/(kT)    (69)

a relationship discovered experimentally by Pierre Curie and known as Curie's law.
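
A quick check of the Curie limit against the full tanh expression (a minimal sketch; the values of µB/kT are assumed, and N = µ = 1 for convenience):

    # Sketch: exact paramagnet magnetization M = N mu tanh(mu B / kT)
    # versus Curie's law M ~ N mu^2 B / (kT) at small mu B / kT.
    import math

    N, mu = 1.0, 1.0                 # work in units where N = mu = 1
    for x in [0.01, 0.1, 1.0]:       # x = mu B / (k T)
        M_exact = N * mu * math.tanh(x)    # M = -U/B = N mu tanh(x)
        M_curie = N * mu * x               # high-temperature (Curie) limit
        print(f"x = {x:4}: exact = {M_exact:.4f}, Curie = {M_curie:.4f}")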

• For a system in which the volume can also change to be at equilibrium, the following two conditions must be satisfied:

    ∂Stotal/∂UA = 0,    ∂Stotal/∂VA = 0    (70)

as represented in the figure below:

[Figure: example plot of Stotal against UA and UB.]

The first condition relates to the temperature of the system, that is: the system is in thermal equilibrium. The second
condition relates to the pressure, since we know the pressures also need to be equal:
 
    P = T (∂S/∂V)_{U,N}    (71)

• We can use this definition to derive the ideal gas law. The multiplicity of a monatomic ideal gas is Ω = f(N) V^N U^(3N/2).
We can use this to determine S and take the partial derivative to arrive at:

P V = N kT (72)

• The thermodynamic identity, when the number of particles in a system is fixed, is given by:

dU = T dS − P dV (73)

Proof: Consider a process where we change both the energy ∆U and the volume ∆V by small amounts:

    ∆S = (∆S/∆U) ∆U + (∆S/∆V) ∆V    (74)

Since the two changes happen in separate steps, we can rewrite this as

    dS = (∂S/∂U)_V dU + (∂S/∂V)_U dV    (75)
       = (1/T) dU + (P/T) dV    (76)

Rearranging gives dU = T dS − P dV.

• The thermodynamic identity is similar in form to the first law dU = Q + W, where for a quasistatic process W = −P dV.
Thus, for any quasistatic process (not just at constant volume), we have

    Q = T dS    (77)

Idea: For non-quasistatic volume changes, the very fast compression creates internal disequilibrium, causing W >
−P dV such that the added heat is less than T dS, creating extra entropy. Similarly for free expansion, ∆U = 0,

which means that
T dS = P dV > 0 (78)
If the volume increases, then the entropy must also increase. It is always possible to create more entropy. Only in
an isentropic (adiabatic and quasistatic) process is entropy left unchanged.

• Following the discussion of a system where volume is allowed to change, the natural thing to do is to analyze when the
number of particles can change as well, which leads to the third equilibrium statement:
 
    (∂Stotal/∂NA)_{UA,VA} = 0 =⇒ ∂SA/∂NA = ∂SB/∂NB    (79)

Thus, we can define the chemical potential as


 
    µ ≡ −T (∂S/∂N)_{U,V}    (80)

such that at equilibrium µA = µB . Thus, the generalized thermodynamic identity is given by


    dU = T dS − P dV + Σi µi dNi    (81)

where we have generalized to allow several types of particles.

• We can derive an expression for µ for an ideal gas, starting with the Sackur-Tetrode equation, to give

    µ = −kT ln[ (V/N) (2πmkT/h²)^(3/2) ]    (82)

4 Engines and Refrigerators


• A heat engine is any device that absorbs heat and converts part of that energy into work. It absorbs heat Qh
from the hot reservoir, does work W, and dumps the waste heat Qc into the cold reservoir.

• The efficiency of an engine is defined as

    η ≡ benefit/cost = W/Qh = (Qh − Qc)/Qh = 1 − Qc/Qh    (83)

• To find the maximum possible efficiency, we invoke the second law of thermodynamics: the entropy the engine expels
must be at least as much as the entropy it absorbs, so that ∆S ≥ 0. Thus:

    Qc/Tc ≥ Qh/Th    (84)

and therefore

    η ≤ 1 − Tc/Th    (85)
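
For example (a minimal sketch; the reservoir temperatures and heats are assumed), comparing an engine's actual efficiency with the Carnot bound:

    # Sketch: efficiency of an engine versus the Carnot limit 1 - Tc/Th.
    Th, Tc = 500.0, 300.0       # assumed reservoir temperatures, K
    Qh, Qc = 1000.0, 700.0      # assumed heat absorbed and expelled, J

    W = Qh - Qc                 # work extracted
    eta = W / Qh                # actual efficiency
    eta_max = 1 - Tc / Th       # Carnot bound
    print(f"eta = {eta:.2f}, Carnot limit = {eta_max:.2f}")   # 0.30 vs 0.40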
