Poisson Mixture Models
Brandon Malone
Much of this material is adapted from Bilmes 1998 and Tomasi 2004.
Many of the images were taken from the Internet.
Outline
1 The Poisson Distribution
2 Mixture Models
3 Expectation-Maximization
4 Wrap-up
C1 C2 P(C1 , C2 )
H H θ·θ
H T θ · (1 − θ)
T H (1 − θ) · θ
T T (1 − θ) · (1 − θ)
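The joint table above (two independent flips of a coin with P(H) = θ) can be checked numerically; a minimal sketch, assuming an illustrative bias θ = 0.7:

```python
# Joint distribution of two independent flips of a biased coin.
# theta = 0.7 is an arbitrary illustrative value, not from the slides.
theta = 0.7

joint = {
    (c1, c2): (theta if c1 == "H" else 1 - theta) *
              (theta if c2 == "H" else 1 - theta)
    for c1 in "HT"
    for c2 in "HT"
}

for (c1, c2), p in joint.items():
    print(c1, c2, round(p, 2))

# The four probabilities always sum to 1, whatever theta is.
assert abs(sum(joint.values()) - 1.0) < 1e-12
```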
Mixtures of distributions
Suppose we have K Poisson distributions (components) with
parameters λ1 . . . λK mixed together with proportions p1 . . . pK .
We often write P = {p1 . . . pK } and θ = {λ1 . . . λK , P}.
procedure Sample-Mixture(λ_1 . . . λ_K , p_1 . . . p_K , samples N)
    D ← ∅
    for l = 1 to N do
        z_l ← sample(Mult(p_1 . . . p_K))      ▷ choose a component
        D_l ← sample(Poisson(λ_{z_l}))         ▷ draw an observation
        D ← D ∪ {D_l}
    end for
    return D
end procedure
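The sampling procedure above can be sketched in Python. This is a stdlib-only sketch: the function names are my own, and Knuth's multiplication method stands in for a library Poisson sampler.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's method: count how many uniform draws it takes for
    # their running product to fall below exp(-lambda).
    threshold = math.exp(-lam)
    k, p = 0, rng.random()
    while p > threshold:
        k += 1
        p *= rng.random()
    return k

def sample_mixture(lambdas, proportions, n, seed=0):
    """Draw n observations from a Poisson mixture, following the
    procedure on the slide (a list stands in for the set D)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        # z_l ~ Mult(p_1 ... p_K): pick a component
        z = rng.choices(range(len(proportions)), weights=proportions)[0]
        # D_l ~ Poisson(lambda_{z_l}): draw an observation from it
        data.append(sample_poisson(rng, lambdas[z]))
    return data

D = sample_mixture([2.0, 15.0], [0.3, 0.7], 1000)
print(len(D), sum(D) / len(D))  # sample mean near 0.3*2 + 0.7*15 = 11.1
```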
Brandon Malone Poisson Mixture Models
[Plate diagram: P → z_l → D_l ← λ_k , with λ_k in a plate of size K and (z_l , D_l) in a plate of size M]
Likelihood of data

We can write the (log) probability of any mixture model as follows.

For a single observation D:

    P(D : θ) = Σ_{k=1}^{K} p_k g(D : λ_k)

For the full dataset D = {D_1 . . . D_N}:

    P(D : θ) = Π_{l=1}^{N} Σ_{k=1}^{K} p_k g(D_l : λ_k)

    ℓ(D : θ) = log Π_{l=1}^{N} Σ_{k=1}^{K} p_k g(D_l : λ_k)
             = Σ_{l=1}^{N} log Σ_{k=1}^{K} p_k g(D_l : λ_k)
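The final form of the log-likelihood can be computed directly; a minimal sketch, with g taken to be the Poisson pmf and illustrative parameter values:

```python
import math

def poisson_logpmf(d, lam):
    # log g(d : lambda) = -lambda + d*log(lambda) - log(d!)
    return -lam + d * math.log(lam) - math.lgamma(d + 1)

def log_likelihood(data, lambdas, proportions):
    # l(D : theta) = sum_l log sum_k p_k g(D_l : lambda_k)
    total = 0.0
    for d in data:
        total += math.log(sum(p * math.exp(poisson_logpmf(d, lam))
                              for p, lam in zip(proportions, lambdas)))
    return total

# Illustrative data and parameters (not from the slides).
print(log_likelihood([1, 2, 10, 12], [2.0, 10.0], [0.5, 0.5]))
```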
Membership probabilities

Notation

Define q(k, l) = p_k g(D_l : λ_k).

Jensen's Inequality
Recall the likelihood of the mixture model.
    ℓ(D : θ) = Σ_{l=1}^{N} log Σ_{k=1}^{K} q(k, l)
Expectation-Maximization (EM)
    Q(θ) = Σ_{l=1}^{N} Σ_{k=1}^{K} P(k|l) log q(k, l)
The M-step updates are:

    λ_k = ( Σ_{l=1}^{N} P(k|l) D_l ) / Z(k)

    p_k = Z(k) / N

where Z(k) = Σ_{l=1}^{N} P(k|l).
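One full EM iteration (E-step membership probabilities P(k|l), then the M-step updates above) can be sketched as follows; the data, starting parameters, and function names are illustrative:

```python
import math

def poisson_logpmf(d, lam):
    # log g(d : lambda) for the Poisson distribution
    return -lam + d * math.log(lam) - math.lgamma(d + 1)

def em_step(data, lambdas, proportions):
    K, N = len(lambdas), len(data)
    # E-step: P(k|l) is proportional to p_k g(D_l : lambda_k)
    memberships = []
    for d in data:
        weights = [p * math.exp(poisson_logpmf(d, lam))
                   for p, lam in zip(proportions, lambdas)]
        total = sum(weights)
        memberships.append([w / total for w in weights])
    # M-step: Z(k) = sum_l P(k|l), then the closed-form updates
    Z = [sum(m[k] for m in memberships) for k in range(K)]
    new_lambdas = [sum(m[k] * d for m, d in zip(memberships, data)) / Z[k]
                   for k in range(K)]
    new_proportions = [Z[k] / N for k in range(K)]
    return new_lambdas, new_proportions

# Two well-separated clusters: means 1.0 and 14.5, half the data each.
data = [1, 2, 1, 0, 14, 16, 15, 13]
lambdas, proportions = [1.0, 10.0], [0.5, 0.5]
for _ in range(20):
    lambdas, proportions = em_step(data, lambdas, proportions)
print(lambdas, proportions)
```

With well-separated components like these, the updates converge in a handful of iterations; λ_1 and λ_2 approach the two cluster means and p_1 , p_2 approach 1/2.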
Other questions...
Do we really know how many species there are?
Can we differentiate species with similar abundances?
How do we pick “good” initial parameters?
When have we converged?
More on EM
Recap