
8 lectures
UCL DEPARTMENT OF CHEMISTRY
CHEM2301/2304: Physical chemistry
Module3: Statistical Mechanics
Professor Francesco Luigi Gervasio

Contents
1. Scope of statistical mechanics
2. Review of Classical Thermodynamics
3. Equilibrium
4. The Ensemble Concept (Heuristic)
5. Ensembles: microcanonical, canonical, and grand canonical
6. The Boltzmann Distribution
6.1 Nomenclature: sums and products; Stirling's approximation
6.2 Distributions and arrangements in an ensemble
6.3 The most probable distribution in the canonical ensemble

Recommended Texts:
Atkins & de Paula, 'Atkins' Physical Chemistry', 10th edition, Oxford University Press.
Maczek, 'Statistical Thermodynamics', Oxford Chemistry Primers, Oxford University Press.
Widom, 'Statistical Mechanics: A Concise Introduction for Chemists', Cambridge University Press.

1. Scope of Statistical Mechanics

Statistical mechanics is a complete subject: it spans from classical thermodynamics, through spectroscopy, to quantum mechanics, and it makes it possible to calculate equilibrium constants from spectroscopic data using classical thermodynamics.
The laws of thermodynamics are broad empirical generalizations that allow correlations to be made among the properties of macroscopic systems (10^20 or more atoms) without any reference to the underlying microscopic structure of matter. Statistical mechanics, by contrast, relates the properties of macroscopic systems to their microscopic constitution.
Statistical thermodynamics (a branch of statistical mechanics) is devoted to calculating the thermodynamic functions of a system of given composition when the interactions between the system's components are known.

1. Scope of Statistical Mechanics (2)

[Schematic: for small systems whose initial state is precisely known, classical or quantum mechanics carries the system from initial to final state with precise predictions (exact eigenvalues; expectation values involve probabilities). For large systems, our knowledge of the initial state is incomplete, and statistical mechanics provides reasonable predictions on a statistical basis.]

As N increases we might expect the complexity and intricacy of the properties to increase.
But new regularities appear, because there are many degrees of freedom and new statistical laws apply.
Thermodynamics provides connections between many properties but says nothing about the value of any one of them. It does not require atoms and molecules, and many systems are too complicated to be characterized in microscopic detail.

2. Review of Classical Thermodynamics

The laws of thermodynamics consist of two parts:
(a) the equalities, for example, for changes in internal energy U:

dU = TdS − pdV

This is the fundamental equation. It follows from the first law of thermodynamics,

dU = δq + δw

(the change in internal energy is the heat absorbed by a system plus the work done on the system), combined with

δw_rev = −pdV (expansion work)
δq_rev = TdS (from the definition of entropy)

dU is an EXACT differential (its value is independent of the path).

(b) the inequalities, for example

(∂S^σ/∂t)_{V,U,N} ≥ 0

for the closed system: the entropy S increases over time.

This is the fundamental inequality of the second law of thermodynamics.

1. Time is denoted by t and is treated like any other variable when you differentiate.
2. The superscript σ means the whole system consisting of phases α, β, ..., so σ = α + β + ...
3. N is the total amount irrespective of species:

N = Σ_{α+β+...} (nA + nB + ...)

3. Equilibrium

All the conditions for equilibrium and stability can be derived from the fundamental inequality

(∂S^σ/∂t)_{V,U,N} ≥ 0

S increases over time.

Entropy is a thermodynamic quantity representing the unavailability of a system's thermal energy for conversion into mechanical work. It is often erroneously referred to as the 'state of disorder' of a system. Qualitatively, entropy is a measure of how evenly energy is distributed in a system.
We will soon establish its connection to W, the number of configurations (or microstates) corresponding to the macrostate, through the Boltzmann equation:

S = k ln W

(please note that sometimes Ω is used instead of W)
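As a quick numerical illustration of S = k ln W (a sketch using the classic two-orientation residual-entropy example, which is not in these notes): if each of N molecules in a crystal can sit in one of two orientations of nearly equal energy, then W = 2^N and S = k ln 2^N = Nk ln 2, which per mole is R ln 2.

```python
import math

k = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

# W = 2^N is astronomically large, but ln(2^N) = N ln 2, so
# S = k ln W = N k ln 2 can be evaluated without computing W itself.
S_molar = N_A * k * math.log(2)  # J/(K mol), equals R ln 2

print(f"S = R ln 2 = {S_molar:.2f} J/(K mol)")
```

The same trick, working with ln W rather than W, is exactly what the lecture does later via Stirling's approximation.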

The variables held constant (V, U, N) in the fundamental inequality are experimentally inconvenient. Usually it is possible to keep the temperature, the pressure and/or the volume constant. Thus it is easier to use one of the other thermodynamic inequalities (which can be derived from the fundamental one using the properties of partial derivatives).

Enthalpy: H(S, p) = U + pV

Enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements at constant pressure, because it simplifies the description of energy transfer. At constant pressure, the enthalpy change is equal to the heat flow into the system:

ΔH = heat flow into the system

Using the fundamental inequality and the properties of partial derivatives it can be shown that:

(∂H/∂t)_{S,p,N} ≤ 0

H decreases spontaneously as a function of time: it has a minimum.

Gibbs and Helmholtz free energy

The Gibbs free energy is a thermodynamic potential that measures the maximum (reversible) work that may be performed by a thermodynamic system at constant temperature and pressure (the most common experimental conditions):

G(p, T) = H − TS;   dG = −SdT + Vdp + Σ_B μB dnB

The Helmholtz free energy is a thermodynamic potential that measures the maximum (reversible) work that may be performed by a thermodynamic system at constant temperature and volume:

A(T, V) = U − TS;   dA = −SdT − pdV + Σ_B μB dnB

Using the fundamental inequality, it can be shown that

(∂G/∂t)_{T,p,N} ≤ 0   and   (∂A/∂t)_{T,V,N} ≤ 0

G and A spontaneously decrease as a function of time: they have a minimum.
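At constant T and p the sign of ΔG = ΔH − TΔS decides spontaneity. A minimal sketch (the ΔH and ΔS values below are illustrative numbers of my own, not from the notes):

```python
def gibbs_change(dH, dS, T):
    """Return dG = dH - T*dS (dH in J/mol, dS in J/(K mol), T in K)."""
    return dH - T * dS

# Hypothetical endothermic, entropy-driven process:
dH = 30_000.0  # J/mol
dS = 110.0     # J/(K mol)

for T in (200.0, 400.0):
    dG = gibbs_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:.0f} K: dG = {dG/1000:+.1f} kJ/mol ({verdict})")
```

The crossover at T = ΔH/ΔS ≈ 273 K shows why constant-T inequalities are the experimentally convenient ones.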

4. The Ensemble Concept (Heuristic definition)

For a typical macroscopic system, the total number of particles is N ~ 10^23.
One way to calculate the properties of a system is to follow its evolution over time and then take an average. For example, suppose we measure the pressure of 1 dm^3 of a gas with a manometer and the measurement takes 10 s. From simple kinetic theory, the collision density for gaseous N2 at ambient conditions is about Z11 = 10^35 m^-3 s^-1.
So the pressure measurement averages over

(10^35 m^-3 s^-1)(10^-3 m^3)(10 s) = 10^33 molecular events

How long would it take to calculate the pressure on this basis?
With a 1 GHz computer it would take roughly

10^27 s ≈ 3 × 10^19 years

Compare this with:
the age of the earth: about 4.5 billion years
time since the big bang: (13.75 ± 0.17) billion years

Such an approach is not practical.
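The back-of-envelope arithmetic above can be reproduced in a few lines (a sketch; Z11, the volume, and the 10 s measurement time are taken from the text):

```python
# Order-of-magnitude arithmetic for the pressure-measurement estimate.
Z11 = 1e35              # collision density for N2, m^-3 s^-1
V = 1e-3                # 1 dm^3 in m^3
t = 10.0                # duration of the measurement, s

events = Z11 * V * t    # molecular events averaged over
print(f"molecular events: {events:.0e}")

# The slides quote ~1e27 s of computer time for the calculation; in years:
seconds_per_year = 3.15e7
years = 1e27 / seconds_per_year
print(f"computer time: ~{years:.1e} years")
```

Either way the conclusion is the same: following the microscopic dynamics event by event is hopeless, which motivates the ensemble idea that follows.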

Alternatively, surround the system with N replica systems and impose thermodynamic constraints so that all systems have identical macroscopic properties but are allowed different microscopic properties. We say we have an ensemble of systems.
Average over the systems in the ensemble:

X̄ = (1/N) Σ_{j=1}^{N} X_j

Since, from a macroscopic point of view, precise microscopic details are largely unimportant, we use the ensemble concept to wash out the microscopic details.

Provided N is sufficiently large, X̄ is the experimental value of X for the system.
A large number of observations made on a single system at N arbitrary instants of time have the same statistical properties as observing N arbitrarily chosen systems at the same time from an ensemble of similar systems.

We have replaced a time average (many observations on a single system) by a number average (a single observation of many systems). This approach was developed independently by Gibbs and by Einstein.
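The equivalence of time and ensemble averages can be felt with a toy random process (a sketch of my own, not from the notes; any stationary, memoryless process would do):

```python
import random

random.seed(1)

def sample_state():
    """One microscopic observation of a toy property X (here a die roll)."""
    return random.randint(1, 6)

N = 100_000

# Time average: many successive observations of ONE system.
time_avg = sum(sample_state() for _ in range(N)) / N

# Ensemble average: one observation on each of N replica systems.
# For this memoryless toy process the two constructions have the same
# statistics, so the averages agree (both near the exact mean 3.5).
ensemble_avg = sum(sample_state() for _ in range(N)) / N

print(f"time average     = {time_avg:.3f}")
print(f"ensemble average = {ensemble_avg:.3f}")
```

Real molecular dynamics is correlated in time, so the equivalence there is the deeper (ergodic) statement the slide is gesturing at; this sketch only shows the statistical bookkeeping.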

5.1 Ensembles
There are several different types of ensemble, depending on which thermodynamic variables are common to the systems in the ensemble.

5.1(a) Microcanonical Ensemble (common N, V, U)

Walls: impermeable (no exchange of N), rigid (no exchange of V), adiabatic (no exchange of U).
Each system is isolated.
Common densities: V/N, U/N
Common fields: none
[Diagram: an array of replica systems, each labelled N, V, U.]

The independent variables are N, V, U, so the microcanonical ensemble corresponds to the fundamental surface U(S, V, N).

It is often difficult to work with the microcanonical ensemble because, for example, to find T we must study the variation in S as U changes.

5.1(b) Canonical Ensemble (common N, V, T)

Walls: impermeable (no exchange of N), rigid (no exchange of V), diathermic (exchange of U to achieve uniform T).
Each system is closed.
[Diagram: an array of replica systems, each labelled N, V, T.]

The ensemble as a whole is thermally isolated by an adiabatic wall.
The independent variables are N, V, and T, so the canonical ensemble corresponds to the Helmholtz surface A(T, V, N).
We will soon find its link to the canonical partition function:

A = −kT ln Q

5.1(c) Grand Canonical Ensemble (common μ, V, T)

Walls: permeable (exchange of N to achieve uniform μ), rigid (no exchange of V), diathermic (exchange of E to achieve uniform T).
Each system is open.
[Diagram: an array of replica systems, each labelled μ, V, T.]

The ensemble is surrounded by a wall that is adiabatic and impermeable. Hence the ensemble is thermally isolated and closed with respect to the surroundings.

6. The Boltzmann distribution

The 'Boltzmann' distribution is among the most important equations in chemistry. It is used to predict the population of states in systems at thermal equilibrium and provides insight into the nature of 'temperature'.
We will adopt the principle of "equal a priori probabilities", i.e. the assumption that the system is equally likely to be found in any of its accessible states with a given energy (e.g. all those with the same energy and composition)*.
We note that the Schrödinger equation HΨ = EΨ can be applied both to isolated molecules and to macroscopic systems. In the latter case we refer to the accessible states j of the macroscopic system as complexions of that system.
One very important conclusion is that the overwhelmingly most probable populations of the available states depend on a single parameter, the temperature. As we will see in the following, for the population of quantized energy states it takes the form:

n_i/n_j = e^{−(ε_i − ε_j)/kT}

or, for two macroscopic systems m_i and m_j with energies E_i and E_j:

m_i/m_j = e^{−(E_i − E_j)/kT}

*We have no reason to assume, for a collection of molecules at thermal equilibrium, that a vibrational state of a certain energy is more or less probable than a rotational state of the same energy.
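A quick numerical feel for the population ratio n_i/n_j = exp(−(ε_i − ε_j)/kT) (a sketch; the 1000 cm^-1 energy gap is an illustrative choice of mine, roughly a typical vibrational spacing):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e10   # speed of light in cm/s (converts cm^-1 to J via E = h*c*nu)

def population_ratio(gap_wavenumber, T):
    """n_upper/n_lower for an energy gap in cm^-1 at temperature T in K."""
    delta_E = h * c * gap_wavenumber  # J
    return math.exp(-delta_E / (k * T))

# Illustrative 1000 cm^-1 gap: sparsely populated at room temperature,
# appreciably populated at flame temperatures.
for T in (298.0, 1000.0):
    print(f"T = {T:.0f} K: n_upper/n_lower = {population_ratio(1000.0, T):.4f}")
```

Note how strongly the ratio depends on T, the single parameter the slide emphasises.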

6.1 Nomenclature: sums and products

You are familiar with the notation for summation

Σ_{i=1}^{k} x_i = x_1 + x_2 + ... + x_k

which is simplified to Σ_i x_i or Σx when there is no ambiguity. By analogy, for a product we write

Π_{i=1}^{k} x_i = x_1 x_2 ... x_k

which often can be simplified to Π_i x_i or just Πx.

Note in particular that

ln(Πx) = ln(x_1 x_2 ... x_k) = ln x_1 + ln x_2 + ... + ln x_k = Σ ln x

Stirling's approximation for large factorials

When x is large,

ln x! ≈ x ln x − x

For chemical systems, the number of particles is so large that this approximation is essentially exact.
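How good is Stirling's approximation in practice? A sketch using the exact log-factorial from the standard library (math.lgamma(x + 1) equals ln x! and avoids overflow):

```python
import math

def stirling(x):
    """Stirling's approximation to ln(x!)."""
    return x * math.log(x) - x

for x in (10, 100, 1e4, 1e8):
    exact = math.lgamma(x + 1)            # exact ln(x!)
    approx = stirling(x)
    rel_err = (exact - approx) / exact    # relative error, shrinks with x
    print(f"x = {x:>12g}: relative error = {rel_err:.2e}")
```

The error is already below 1% at x = 100 and utterly negligible by x ~ 10^23, which is why the approximation is treated as exact in statistical thermodynamics.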

6.2 Distributions and Arrangements in an Ensemble

The systems in the ensemble have the same macroscopic properties but the molecular details are different. Suppose the ensemble has M systems in total; of these,
m1 systems have one arrangement of molecular states, with energy E1,
m2 systems have a second arrangement of molecular states, with energy E2, and so on.
We can group the systems by their molecular states to give a distribution of m1, m2, ... systems.

The number of arrangements of the ensemble with this distribution is

W = M! / (m1! m2! ...)   where   M = m1 + m2 + ... = Σ_j m_j

Combinatory rule: with n1 particles in Box 1 and n2 particles in Box 2 (a total of N = n1 + n2 particles), we can arrange these particles in

W2 = N!/(n1! n2!)

non-equivalent arrangements.
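The statistical weight W = M!/(m1! m2! ...) is easy to evaluate for small examples (a sketch; the function name is my own):

```python
import math
from functools import reduce

def weight(*populations):
    """Number of distinct arrangements W = M!/(m1! m2! ...)."""
    M = sum(populations)
    denom = reduce(lambda acc, m: acc * math.factorial(m), populations, 1)
    return math.factorial(M) // denom

print(weight(3, 2))  # the {3,2} configuration of 5 objects -> 10
print(weight(2, 2))  # two heads, two tails among 4 coins -> 6
```

These two values reappear in the worked examples that follow.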

Example: the possible arrangements of a {3,2} configuration.
[Figure from Atkins' Physical Chemistry 10e, Chapter 15.]

Example 2: the possible arrangements of 4 coins

All heads (HHHH): 4!/4! = 1 arrangement
One head, three tails (HTTT, THTT, TTHT, TTTH): 4!/(3!1!) = 4 arrangements
Two heads, two tails (HHTT, TTHH, THTH, HTHT, THHT, HTTH): 4!/(2!2!) = 6 arrangements
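The coin counts can be verified by brute-force enumeration (a sketch):

```python
from itertools import product
from collections import Counter

# Enumerate all 2^4 = 16 four-coin sequences and group them by
# the number of heads they contain.
counts = Counter(seq.count("H") for seq in product("HT", repeat=4))

print(dict(sorted(counts.items())))  # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```

The 1, 4, 6, 4, 1 pattern is just the binomial coefficients 4!/(h!(4−h)!), the two-box case of the combinatory rule above.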

Typically we are working with large numbers (M ≈ 10^23), so it is convenient to work with ln W rather than W:

ln W = ln M! − ln(Π m!) = ln M! − ln(m1! m2! ...) = ln M! − Σ ln m!

Applying Stirling's approximation, ln(n!) ≈ n ln n − n (valid for large n), to each factorial:

ln W ≈ (M ln M − M) − Σ_j (m_j ln m_j − m_j)

Since Σ_j m_j = M, the −M and +Σ_j m_j cancel, leaving

ln W ≈ M ln M − Σ_j m_j ln m_j    (1)
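Equation (1) can be checked against the exact ln W for a modest ensemble (a sketch; math.lgamma gives exact log-factorials without overflow, and the populations are illustrative):

```python
import math

def ln_W_exact(populations):
    """Exact ln W = ln M! - sum_j ln(m_j!)."""
    M = sum(populations)
    return math.lgamma(M + 1) - sum(math.lgamma(m + 1) for m in populations)

def ln_W_stirling(populations):
    """Equation (1): ln W ~ M ln M - sum_j m_j ln m_j."""
    M = sum(populations)
    return M * math.log(M) - sum(m * math.log(m) for m in populations)

pops = [5000, 3000, 2000]  # illustrative distribution of M = 10^4 systems
print(f"exact    : {ln_W_exact(pops):.2f}")
print(f"Stirling : {ln_W_stirling(pops):.2f}")
```

Already at M = 10^4 the two agree to about a tenth of a percent; at M ≈ 10^23 the difference is invisible.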

6.3 The most probable distribution in the canonical ensemble

We want to find the most probable distribution, the one for which W is a maximum, for an ensemble in which m1 systems have energy E1, m2 systems have energy E2, and so on.
But before doing so, we ask: how dominant is the most probable distribution?
We know from experience that in tossing 100 coins successively many times, the 50-50 configuration of heads and tails will stand above the rest. What would happen with a mole of coins? The distribution of configurations peaks so sharply that no other configuration gets so much as a look in!
This is also true for the statistical weight of the most probable distribution, Wmax. If we compare Wmax for a mole of particles to the statistical weight of a distribution that differs by as little as 1 part in 10^10, we find that Wmax is more than 10^434 times more probable!

Nature of the system considered:
1. The thermodynamic system is an assembly of M systems in a state defined by the volume V. The total internal energy of the assembly is Utot.
2. The energies of the individual systems are eigenvalues of the Schrödinger equation.
3. There is no restriction on which energies can be allocated to the individual systems (i.e. any number of systems can have any particular energy level).
4. All complexions associated with a given E and V are equally likely.

Conservation of number and energy: these are rather obvious constraints that we need to place on our system. The appropriate constraints for the canonical ensemble are:

The total number of systems M is constant:

M − Σ_j m_j = 0

The entire ensemble is surrounded by an adiabatic wall, so the total energy is constant:

Utot − Σ_i m_i E_i = 0

(I am using "Utot" instead of Etot for consistency with the previous notation: Utot = Σ_i m_i E_i.)

Lagrange's method of undetermined multipliers allows us to find the distribution that maximises ln W subject to the constraints of constant M and constant Utot.

The method of Lagrange multipliers is used to find the local maxima and minima of a function subject to equality constraints. For instance, suppose we want to maximize f(x, y) subject to g(x, y) = c.
We introduce a new variable λ, called a Lagrange multiplier, and study the Lagrange function (or Lagrangian) defined by:

F(x, y, λ) = f(x, y) + λ(g(x, y) − c)

Since we write each constraint so that it equals zero, F takes the form:

F(x, y, λ) = f(x, y) + λ g(x, y)
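As a concrete illustration (my own example, not from the notes): maximize f(x, y) = xy subject to x + y = 10. The Lagrange conditions ∂F/∂x = y + λ = 0 and ∂F/∂y = x + λ = 0 give x = y, so x = y = 5. A numerical check along the constraint line:

```python
# Maximize f(x, y) = x*y subject to the constraint x + y = 10.
# Lagrange's method predicts the maximum at x = y = 5, f = 25.

def f(x, y):
    return x * y

# Scan points on the constraint line y = 10 - x and pick the best one.
best_x = max((x / 1000 for x in range(0, 10001)),
             key=lambda x: f(x, 10 - x))

print(f"numerical maximum at x = {best_x:.3f}, y = {10 - best_x:.3f}")
print(f"f at maximum = {f(best_x, 10 - best_x):.3f}")
```

The same machinery, with two multipliers (α for the number constraint, β for the energy constraint), is what the derivation below applies to ln W.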

Form a new function F by adding to ln W each constraint, multiplied by a constant:

F = ln W + α(M − Σ_j m_j) + β(Utot − Σ_j m_j E_j)

Now take the derivative with respect to the populations m_j to obtain the conditions that are necessary to make W (or ln W) a maximum:

∂F/∂m_j = 0 = ∂/∂m_j {ln W + α(M − Σ_j m_j) + β(Utot − Σ_j m_j E_j)}

The algebra turns out to be easier than it looks, for three reasons:
(i) Utot and M are constant, so ∂Utot/∂m_j = 0 and ∂M/∂m_j = 0;
(ii) m_j and E_j occur only once in the sums, so ∂(Σ_i m_i)/∂m_j = 1 and ∂(Σ_i m_i E_i)/∂m_j = E_j;
(iii) α and β are constant.

∂F/∂m_j = 0 = ∂lnW/∂m_j + α ∂(M − Σ_j m_j)/∂m_j + β ∂(Utot − Σ_j m_j E_j)/∂m_j
= ∂lnW/∂m_j − α − βE_j

From equation (1), ln W ≈ M ln M − Σ_j m_j ln m_j. Since M is constant (its derivative is 0),

∂/∂m_j (M ln M − Σ_j m_j ln m_j) = −ln m_j − 1

so the maximum condition becomes

−ln m_j − 1 − α − βE_j = 0

Hence ln m_j = −1 − α − βE_j, and

m_j = (e^{−1−α}) e^{−βE_j}

Now sum over all the systems:

Σ_j m_j = (e^{−1−α}) Σ_j e^{−βE_j},   i.e.   M = (e^{−1−α}) Σ_j e^{−βE_j}

Divide one equation by the other (the e^{−1−α} cancels) to obtain

m_j / M = e^{−βE_j} / Σ_j e^{−βE_j}

m_j/M is clearly the probability P_j that the system has energy E_j, and it can be shown (but we won't do it here) that

β = 1/kT

and T is uniform in the canonical ensemble.

Σ_j exp(−βE_j) is the canonical partition function Q and is the key quantity for the canonical ensemble.

With the canonical partition function defined by

Q = Σ_j exp(−E_j/kT)

we can write

P_j = m_j/M = exp(−E_j/kT) / Q

In the next lecture we will connect Q to thermodynamic variables.
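Putting the derivation to work, Q = Σ_j exp(−E_j/kT) and the populations P_j can be evaluated for any set of energy levels (a sketch; the evenly spaced levels below are an illustrative choice of mine):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def canonical_populations(energies, T):
    """Return (Q, [P_j]) for energy levels in J at temperature T in K."""
    boltzmann_factors = [math.exp(-E / (k * T)) for E in energies]
    Q = sum(boltzmann_factors)                 # canonical partition function
    return Q, [b / Q for b in boltzmann_factors]

# Illustrative: four evenly spaced levels with spacing equal to kT at 300 K.
spacing = k * 300.0
energies = [j * spacing for j in range(4)]

Q, P = canonical_populations(energies, 300.0)
print(f"Q = {Q:.3f}")
for j, p in enumerate(P):
    print(f"P_{j} = {p:.3f}")
```

Note that the populations sum to 1 by construction, and the ratio of any two populations reproduces the Boltzmann form P_i/P_j = exp(−(E_i − E_j)/kT) from earlier in the lecture.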
