
Quantum Mechanics 113A Spring 2016

Mini Lecture 2
Discrete and Continuous Probabilities
Before discussing the interpretation of the quantum mechanical wave function, we will
take a brief foray into probability and statistics in the context of a set of measurements
on an ensemble of (classical or quantum mechanical) systems, for example a series of
coin flips or dice rolls.

Activity 3: Basics of Probability

Discrete Probability
Let j label each possible discrete outcome of a measurement performed on a system.
Note that j can be a pure label with no numerical meaning (e.g., red and blue,
or heads and tails), or it could be the numerical value of a measurement (e.g., the
number of dots on the face of a rolled die). If an experiment counts N(j) occurrences
of outcome j, then the probability (distribution) of j is

$$P(j) \equiv \frac{N(j)}{N} \quad\text{where}\quad N \equiv \sum_{\{j\}} N(j)\,. \tag{1}$$

The notation {j} on the summation denotes the set of possible outcomes for an individual measurement. With the definitions in eq. 1, the total probability is normalized:
$$\sum_{\{j\}} P(j) = 1 \tag{2}$$
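To make eqs. 1 and 2 concrete, here is a minimal Python sketch (the counts are invented for illustration) that turns raw occurrence counts from a die-rolling experiment into a normalized probability distribution:

```python
# Hypothetical counts N(j) from 1000 rolls of a slightly unfair die.
counts = {1: 160, 2: 170, 3: 150, 4: 180, 5: 170, 6: 170}

N = sum(counts.values())                     # total number of measurements
P = {j: Nj / N for j, Nj in counts.items()}  # eq. 1: P(j) = N(j)/N

print(sum(P.values()))  # eq. 2: the probabilities sum to 1.0
```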

Let f(j) be any value that is uniquely determined for each possible outcome j. We
define the ensemble average of f as
$$\langle f(j) \rangle \equiv \sum_{\{j\}} f(j)\,P(j) \tag{3}$$

We also refer to this quantity as the mean value or (especially in QM) the expectation
value (although be aware that, in general, this is not the most probable value).
Note that although j may be a pure label with no numerical meaning, the function
f(j) must be numerical and is not necessarily dimensionless (i.e., it may have units
attached).
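As a small illustration of eq. 3 with a non-numerical label j, the following Python sketch (the coin outcomes and payouts are made-up values) computes the ensemble average of a numerical f(j) defined on the outcomes "heads" and "tails":

```python
# j is a pure label; f(j) assigns a hypothetical payout in dollars to each outcome.
P = {"heads": 0.5, "tails": 0.5}   # fair coin
f = {"heads": 1.0, "tails": -1.0}  # winnings per flip, in dollars

# eq. 3: <f(j)> = sum over outcomes of f(j) * P(j)
mean_f = sum(f[j] * P[j] for j in P)
print(mean_f)  # 0.0 dollars for a fair coin
```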
Now consider three special cases of eq. 3, where we now assume that j itself is a


numerical value:

$$f(j) = j^0 = 1: \qquad \langle j^0 \rangle = \sum_{\{j\}} P(j) \qquad \text{0th moment (normalization)} \tag{4}$$
$$f(j) = j: \qquad \langle j \rangle = \sum_{\{j\}} j\,P(j) \qquad \text{1st moment (average)} \tag{5}$$
$$f(j) = j^2: \qquad \langle j^2 \rangle = \sum_{\{j\}} j^2\,P(j) \qquad \text{2nd moment} \tag{6}$$
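The three moments are easy to check numerically. A minimal sketch, assuming a fair six-sided die with P(j) = 1/6:

```python
P = {j: 1 / 6 for j in range(1, 7)}  # fair six-sided die

m0 = sum(P[j] for j in P)          # eq. 4: 0th moment, = 1 (normalization)
m1 = sum(j * P[j] for j in P)      # eq. 5: 1st moment <j>, = 3.5
m2 = sum(j**2 * P[j] for j in P)   # eq. 6: 2nd moment <j^2>, ~15.17

print(m0, m1, m2)
```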

Two probability distributions, $P_1(j)$ and $P_2(j)$, can have the same average $\langle j \rangle$ yet
have very different shapes. We measure the spread of values around the mean using
the variance:
$$\sigma^2 \equiv \langle (j - \langle j \rangle)^2 \rangle\,. \tag{7}$$
We could instead have picked something else to measure this spread, for example
$$\langle\, |j - \langle j \rangle| \,\rangle \tag{8}$$
but the $\sigma^2$ choice is better behaved and has useful properties we will discover later.
A useful trick for calculating the variance is to expand the square in eq. 7, keeping
in mind that $\langle j \rangle$ is just a number (the ensemble average), and that the $\langle \cdots \rangle$ operation
commutes with addition:

$$\sigma^2 = \langle (j - \langle j \rangle)^2 \rangle \tag{9}$$
$$= \langle j^2 - 2j\langle j \rangle + \langle j \rangle^2 \rangle \tag{10}$$
$$= \langle j^2 \rangle - 2\langle j \rangle \langle j \rangle + \langle j \rangle^2 \tag{11}$$
$$= \langle j^2 \rangle - \langle j \rangle^2\,. \tag{12}$$

This conveniently decomposes $\sigma^2$ into the first and second moments of the distribution.
We call $\sigma = \sqrt{\sigma^2}$ the standard deviation. This should not be confused with the square
root of the second moment, called the root-mean-square:
$$\mathrm{RMS} \equiv \sqrt{\langle j^2 \rangle} \tag{13}$$

From eq. 12, we see that the standard deviation and the RMS are numerically equal
only when $\langle j \rangle = 0$.
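A quick numerical check of eqs. 7, 12, and 13, again assuming a fair die: the direct variance and the moment trick agree, and σ equals the RMS only for a distribution centered on zero:

```python
from math import sqrt

P = {j: 1 / 6 for j in range(1, 7)}  # fair die

mean = sum(j * P[j] for j in P)      # <j> = 3.5
mean2 = sum(j**2 * P[j] for j in P)  # <j^2> ~15.17

var_direct = sum((j - mean)**2 * P[j] for j in P)  # eq. 7
var_trick = mean2 - mean**2                        # eq. 12
print(var_direct, var_trick)  # both ~2.917

sigma = sqrt(var_trick)  # standard deviation, ~1.708
rms = sqrt(mean2)        # eq. 13, ~3.894; differs because <j> != 0
print(sigma, rms)

# For a distribution with <j> = 0, sigma and RMS coincide:
P0 = {-1: 0.5, +1: 0.5}
print(sqrt(sum(j**2 * P0[j] for j in P0)))  # sigma = RMS = 1.0
```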
The results above are for f(j) = j, but we can generalize to an arbitrary f(j):

$$\sigma_f^2 = \langle f(j)^2 \rangle - \langle f(j) \rangle^2\,. \tag{14}$$
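Eq. 14 works the same way for any numerical function of the outcome; a short sketch with the illustrative choice f(j) = j² on a fair die:

```python
P = {j: 1 / 6 for j in range(1, 7)}  # fair die
f = lambda j: j**2                   # illustrative choice of f(j)

mean_f = sum(f(j) * P[j] for j in P)
mean_f2 = sum(f(j)**2 * P[j] for j in P)
print(mean_f2 - mean_f**2)  # eq. 14: variance of f(j), ~149.1
```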


Continuous Probability

Activity 4: Continuous Probability Distributions


If the possible outcomes of a measurement are labeled by a continuous variable x
instead of a discrete variable j, then the discrete formulas above can be generalized
by replacing sums with integrals. (Remember that a 1D integral is the limit of an infinite sum, and note that we never actually required that there be a finite number of
discrete outcomes in the previous section.) We start with
$$P(x) \equiv \frac{N(x)}{N} \quad\text{where}\quad N \equiv \int dx\, N(x)\,. \tag{15}$$
Then the normalization condition, eq. 2, becomes
$$\int dx\, P(x) = 1 \tag{16}$$

The limits of integration might be finite or (half) infinite. Note that N(x) is no longer
a simple count of how many times x occurs, since any particular value of x is infinitely
unlikely. Instead, we interpret N(x)dx as the count of how often the outcome of the
measurement lies between x and x + dx, which means that N(x) and P(x) now have
dimensions of 1/[x], where the notation [x] represents the dimensions of x. For this
reason, we call P(x) a probability density (function) or PDF.
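Since P(x)dx is a probability, eq. 16 can be checked numerically by approximating the integral with a Riemann sum. A minimal sketch, assuming a unit-width Gaussian PDF as the example distribution:

```python
from math import exp, pi, sqrt

# Illustrative PDF: a Gaussian centered at 0 with unit width.
def P(x):
    return exp(-x**2 / 2) / sqrt(2 * pi)

# eq. 16 as a Riemann sum: P(x)*dx is the probability of a result in [x, x+dx].
steps = 20_000
dx = 20.0 / steps  # integrate from -10 to +10, far into both tails
total = sum(P(-10 + i * dx) * dx for i in range(steps))
print(total)  # ~1.0
```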
The generalized expectation value, eq. 3, is
$$\langle f(x) \rangle \equiv \int dx\, f(x)\,P(x) \tag{17}$$

and the generalized variance of f(x) for the continuous distribution is

$$\sigma_f^2 = \langle f(x)^2 \rangle - \langle f(x) \rangle^2\,. \tag{18}$$
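The continuous moments follow the same pattern as the discrete ones. A minimal sketch, assuming the illustrative PDF P(x) = 2x on [0, 1] (normalized, since its integral over [0, 1] is 1):

```python
def P(x):
    return 2 * x  # illustrative PDF on [0, 1]

n = 100_000
dx = 1.0 / n
xs = [i * dx for i in range(n)]

mean_x = sum(x * P(x) * dx for x in xs)      # eq. 17 with f(x) = x, -> 2/3
mean_x2 = sum(x**2 * P(x) * dx for x in xs)  # second moment, -> 1/2
print(mean_x2 - mean_x**2)                   # eq. 18, -> 1/18 ~ 0.0556
```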
