Bayesian Statistics without Tears: A Sampling-Resampling Perspective. A. F. M. Smith and A. E. Gelfand. The American Statistician, Vol. 46, No. 2 (May, 1992), pp. 84-88. Published by: American Statistical Association. Stable URL: http://www.jstor.org/stable/2684170
Bayesian Statistics Without Tears: A Sampling-Resampling Perspective
A. F. M. SMITH and A. E. GELFAND*
Even to the initiated, statistical calculations based on Bayes's Theorem can be daunting because of the numerical integrations required in all but the simplest applications. Moreover, from a teaching perspective, introductions to Bayesian statistics, if they are given at all, are circumscribed by these apparent calculational difficulties. Here we offer a straightforward sampling-resampling perspective on Bayesian inference, which has both pedagogic appeal and suggests easily implemented calculation strategies.

KEY WORDS: Bayesian inference; Exploratory data analysis; Graphical methods; Influence; Posterior distribution; Prediction; Prior distribution; Random variate generation; Sampling-resampling techniques; Sensitivity analysis; Weighted bootstrap.
1. INTRODUCTION
Given data x obtained under a parametric model indexed by a finite-dimensional parameter θ, the Bayesian learning process is based on

    p(θ|x) = l(θ; x) p(θ) / ∫ l(θ; x) p(θ) dθ,    (1.1)

the familiar form of Bayes's Theorem, relating the posterior distribution p(θ|x) to the likelihood l(θ; x) and the prior distribution p(θ). If θ = (φ, ψ), with interest centering on φ, the joint posterior distribution is marginalized to give the posterior distribution for φ,

    p(φ|x) = ∫ p(φ, ψ|x) dψ.    (1.2)
If summary inferences in the form of posterior expectations are required (e.g., posterior means and variances), these are based on

    E[m(θ)|x] = ∫ m(θ) p(θ|x) dθ,    (1.3)

for suitable choices of m(θ). Thus, in the continuous case, the integration operation plays a fundamental role in Bayesian statistics, whether it is for calculating the normalizing constant in
*A. F. M. Smith is Professor, Department of Mathematics, Imperial College of Science Technology and Medicine, London SW7 2BZ, England. A. E. Gelfand is Professor, Department of Statistics, University of Connecticut, Storrs, CT 06269. The authors are grateful to David Stephens for assistance with computer experiments. His work and a visit to the United Kingdom by the second author were supported by the U.K. Science and Engineering Council Complex Stochastic Systems Initiative.
(1.1), the marginal distribution in (1.2), or the expectation in (1.3). However, except in simple cases, explicit evaluation of such integrals will rarely be possible, and realistic choices of likelihood and prior will necessitate the use of sophisticated numerical integration or analytic approximation techniques (see, for example, Smith et al. 1985, 1987; Tierney and Kadane 1986). This can pose problems for the applied practitioner seeking routine, easily implemented procedures. For the student, who may already be puzzled and discomforted by the intrusion of too much calculus into what ought surely to be a simple, intuitive, statistical learning process, this can be totally off-putting. In the following sections, we address this problem by taking a new look at Bayes's Theorem from a sampling-resampling perspective. This will open the way to both easily implemented calculations and essentially calculus-free insight into the mechanics and uses of Bayes's Theorem.
2. FROM DENSITIES TO SAMPLES
As a first step, we note the essential duality between a sample and the density (distribution) from which it is generated. Clearly, the density generates the sample; conversely, given a sample we can approximately re-create the density (as a histogram, a kernel density estimate, an empirical cdf, or whatever).
Suppose we now shift the focus in (1.1) from densities to samples. In terms of densities, the inference process is encapsulated in the updating of the prior density p(θ) to the posterior density p(θ|x) through the medium of the likelihood function l(θ; x). Shifting to samples, this corresponds to the updating of a sample from p(θ) to a sample from p(θ|x) through the likelihood function l(θ; x).
In Section 3, we examine two resampling ideas that provide techniques whereby samples from one distribution may be modified to form samples from another distribution. In Section 4, we illustrate how these ideas may be utilized to modify prior samples to posterior samples, as well as to modify posterior samples arising under one model specification to posterior samples arising under another. An illustrative example is provided in Section 5.
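The sample-density duality can be illustrated numerically. The sketch below is our own toy choice (a normal density, not an example from the paper): it draws a sample from a known density and approximately re-creates that density as a normalized histogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# The density generates the sample ...
sample = rng.normal(loc=2.0, scale=1.0, size=10_000)

# ... and the sample approximately re-creates the density, here as a
# normalized histogram (a kernel estimate or empirical cdf would do
# equally well).
counts, edges = np.histogram(sample, bins=50, density=True)
midpoints = 0.5 * (edges[:-1] + edges[1:])

# The histogram height near the mode should be close to the true
# normal density value at its mode, 1/sqrt(2*pi) ~ 0.399.
peak = counts[np.argmin(np.abs(midpoints - 2.0))]
```

A kernel density estimate would smooth the same sample; the histogram is simply the most elementary re-creation.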
3. TWO RESAMPLING METHODS
The American Statistician, May 1992, Vol. 46, No. 2, p. 84. © 1992 American Statistical Association

Suppose that a sample of random variates is easily generated, or has already been generated, from a continuous density g(θ), but that what is really required is a sample from a density h(θ) absolutely continuous with respect to g(θ). Can we somehow utilize the sample from g(θ) to form a sample from h(θ)? Slightly more generally, given a positive function f(θ) which is normalizable to such a density h(θ) = f(θ)/∫f(θ) dθ, can we form a sample from the latter given only a sample from g(θ) and the functional form of f(θ)?
3.1 Random Variates via the Rejection Method
In the case where there exists an identifiable constant M > 0 such that f(θ)/g(θ) ≤ M for all θ, the answer is yes to both questions, and the procedure is as follows:

1. Generate θ from g(θ).
2. Generate u from uniform(0, 1).
3. If u ≤ f(θ)/Mg(θ), accept θ; otherwise, repeat Steps 1-3.

Any accepted θ is then a random variate from h(θ) = f(θ)/∫f(θ) dθ.
The proof (see also Ripley 1986, p. 60) is straightforward. Let

    S_θ0 = {(θ, u): θ ≤ θ0, u ≤ f(θ)/Mg(θ)},

and let

    S = {(θ, u): u ≤ f(θ)/Mg(θ)}.

Then the cdf of accepted θ, according to the preceding procedure, is

    Pr(θ ≤ θ0 | θ accepted) = ∫∫_{S_θ0} g(θ) du dθ / ∫∫_S g(θ) du dθ
                            = ∫_{θ≤θ0} f(θ) dθ / ∫ f(θ) dθ.

It follows that accepted θ have density h(θ) ∝ f(θ).
Hence, for a sample θi, i = 1, ..., n, from g(θ), in resampling to obtain a sample from h(θ) we will tend to retain those θi for which the ratio of f relative to g is large, in agreement with intuition. The resulting sample size is, of course, random, and the probability that an individual item is accepted is given by

    Pr(θ accepted) = ∫∫_S g(θ) du dθ = M⁻¹ ∫ f(x) dx.

The expected sample size for the resampled θi's is therefore M⁻¹ n ∫ f(x) dx.
3.2 Random Variates via a Weighted Bootstrap
In cases where the bound M required in the preceding procedure is not readily available, we may still approximately resample from h(θ) = f(θ)/∫f(θ) dθ as follows. Given θi, i = 1, ..., n, a sample from g, calculate

    ωi = f(θi)/g(θi)

and then

    qi = ωi / Σj ωj.
Draw θ* from the discrete distribution over {θ1, ..., θn} placing mass qi on θi. Then θ* is approximately distributed according to h, with the approximation "improving" as n increases. We provide a justification for this claim in a moment. However, first note that this procedure is a variant of the by now familiar bootstrap resampling procedure (Efron 1982). The usual bootstrap provides equally likely resampling of the θi, while here we have weighted resampling with weights determined by the ratio of f to g, again in agreement with intuition. See also Rubin (1988), who referred to this procedure as SIR (sampling/importance resampling).
Returning to our claim, suppose for convenience that θ is univariate. Under the customary bootstrap, θ* has cdf

    Pr(θ* ≤ a) = (1/n) Σi 1(-∞,a](θi) ≈ Eg 1(-∞,a](θ) = ∫_{-∞}^{a} g(θ) dθ,
so that θ* is approximately distributed as an observation from g(θ). Similarly, under the weighted bootstrap, θ* has cdf

    Pr(θ* ≤ a) = Σi qi 1(-∞,a](θi)
               = [(1/n) Σi (f(θi)/g(θi)) 1(-∞,a](θi)] / [(1/n) Σi f(θi)/g(θi)]
               ≈ ∫_{-∞}^{a} f(θ) dθ / ∫ f(θ) dθ
               = ∫_{-∞}^{a} h(θ) dθ,

so that θ* is approximately distributed as an observation from h. Note that the sample size under such resampling can be as large as desired. We mention one important caveat: the less h resembles g, the larger the sample size n will need to be in order that the distribution of θ* well approximates h.
Finally, the fact that either resampling method allows h to be known only up to a proportionality constant (i.e., only through f) is crucial, since in our Bayesian applications we wish to avoid the integration required to
standardize f. (Although, from the preceding, we can see that

    (1/n) Σi f(θi)/g(θi)

provides a consistent estimator of the normalizing constant ∫f(θ) dθ, if such be required.)
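The weighted bootstrap, together with the consistent normalizing-constant estimate just noted, can be sketched as follows. The particular f and g are again our own toy choices: g uniform on (-5, 5) and f an unnormalized standard normal, whose true normalizing constant is √(2π) ≈ 2.507.

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_bootstrap(theta, f, g_pdf, n_resample):
    """Section 3.2: weights w_i = f(theta_i)/g(theta_i), masses
    q_i = w_i / sum_j w_j, then resample with those masses."""
    w = f(theta) / g_pdf(theta)
    q = w / w.sum()
    idx = rng.choice(len(theta), size=n_resample, p=q)
    # (1/n) sum_i w_i consistently estimates the normalizing
    # constant of f, as noted in the text.
    return theta[idx], w.mean()

theta = rng.uniform(-5, 5, size=50_000)          # sample from g
f = lambda t: np.exp(-0.5 * t * t)               # unnormalized target
g_pdf = lambda t: np.full_like(t, 0.1)           # uniform(-5, 5) density
resampled, norm_const = weighted_bootstrap(theta, f, g_pdf, 5000)
```

Note that no bound M is needed here, and the resample size is chosen freely, as the text observes.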
4. BAYESIAN CALCULATIONS VIA SAMPLING-RESAMPLING
Both methods of the previous section may be used to resample the posterior (h) from the prior (g) and also to resample a second posterior (h) from a first (g). In this section we give details of both applications.
4.1 Prior to Posterior
How does Bayes's Theorem generate a posterior sample from a prior sample? For fixed x, define f(θ) = l(θ; x)p(θ). If θ̂ maximizes l(θ; x), let M = l(θ̂; x). Then with g(θ) = p(θ), we may immediately apply the rejection method of Section 3.1 to obtain samples from the density corresponding to the standardized f, which, from (1.1), is precisely the posterior density p(θ|x). Thus, we see that Bayes's Theorem, as a mechanism for generating a posterior sample from a prior sample, takes the following simple form: for each θ in the prior sample, accept θ into the posterior sample with probability

    f(θ)/Mp(θ) = l(θ; x)/l(θ̂; x);
otherwise reject it. The likelihood therefore acts as a resampling probability; those θ in the prior sample having high likelihood are more likely to be retained in the posterior sample. Of course, since p(θ|x) ∝ l(θ; x)p(θ), we can also straightforwardly resample using the weighted bootstrap with

    qi = l(θi; x) / Σj l(θj; x).
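Both versions of the prior-to-posterior update can be sketched on a hypothetical conjugate example of our own (not from the paper): a uniform prior on a success probability θ and 7 successes in 10 Bernoulli trials, so the exact posterior is Beta(8, 4) and the output can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)

y, n = 7, 10                                     # hypothetical data
prior = rng.uniform(0, 1, size=100_000)          # sample from p(theta)
like = prior**y * (1 - prior)**(n - y)           # l(theta; x)

# Weighted bootstrap: q_i = l(theta_i; x) / sum_j l(theta_j; x).
q = like / like.sum()
posterior = rng.choice(prior, size=10_000, p=q)

# Rejection version: accept theta with probability
# l(theta; x) / l(theta_hat; x), where theta_hat = y/n maximizes l.
M = (y / n)**y * (1 - y / n)**(n - y)
keep = rng.uniform(size=prior.size) <= like / M
posterior_rej = prior[keep]
```

Both samples should concentrate around the exact posterior mean 8/12 ≈ 0.667; the rejection sample has random size, the bootstrap sample any size we choose.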
Several obvious uses of this sampling-resampling perspective are immediate. Using large prior samples and iterating the resampling process for successive individual data elements (for two-dimensional θ, say) provides a simple pedagogic tool for illustrating the sequential Bayesian learning process, as well as the increasing concentration of the posterior as the amount of data increases. In addition, the approach provides natural links with elementary graphical displays (e.g., histograms, stem-and-leaf displays, and boxplots to summarize univariate marginal posterior distributions, scatterplots to summarize bivariate posteriors). In general, the translation from functions to samples provides a wealth of opportunities for creative exploration of Bayesian ideas and calculations in the setting of computer graphical and exploratory data analysis (EDA) tools.
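The sequential use just described might be sketched as follows, with hypothetical Bernoulli data (our own toy choice) processed one element at a time: after each observation the current sample is reweighted by that single observation's likelihood and resampled.

```python
import numpy as np

rng = np.random.default_rng(7)

sample = rng.uniform(0, 1, size=50_000)        # prior sample for theta
for x in [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]:       # data, one element at a time
    like = sample if x == 1 else 1 - sample    # l(theta; x) for one trial
    q = like / like.sum()
    sample = rng.choice(sample, size=sample.size, p=q)
# After 7 successes and 3 failures from a uniform prior, the sample
# should approximate the Beta(8, 4) posterior (mean 2/3, sd ~ 0.13).
```

Repeated resampling accumulates some Monte Carlo noise, which is why a large initial sample is used; the increasing concentration of the sample after each step is exactly the pedagogic point made above.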
4.2 Posterior to Posterior
An important issue in Bayesian inference is the sensitivity of inferences to model specification. In particular, we might ask:

1. How does the posterior change if we change the prior?
2. How does the posterior change if we change the likelihood?
In the density function/numerical integration setting, such sensitivity studies are rather off-putting, in that each change of a functional input typically requires one to carry out new calculations from scratch. This is not the case with the sampling-resampling approach, as we now illustrate in relation to the questions posed above. In comparing two models in relation to the second question, we note that change in likelihood may arise in terms of:

1. change in distributional specification with θ retaining the same interpretation, for example, a location parameter;
2. change in data to a larger data set (prediction), a smaller data set (diagnostics), or a different data set (validation).
To unify notation, we shall in either case denote the two likelihoods by l1(θ) and l2(θ). We denote two different priors to be compared in relation to the first question by p1(θ) and p2(θ). For complete generality, we shall consider changes to both l and p, although in any particular application we would not typically change both. Denoting the corresponding posterior densities by p̄1(θ), p̄2(θ), we easily see that

    p̄2(θ) ∝ [l2(θ)p2(θ) / l1(θ)p1(θ)] p̄1(θ).    (4.1)

Letting v(θ) = l2(θ)p2(θ)/l1(θ)p1(θ), we note that to implement the rejection method for (4.1) requires

    M = sup_θ v(θ).

In many examples this will simplify to an easy calculation. Alternatively, we may directly apply the weighted bootstrap method taking g = p̄1(θ), f = v(θ)p̄1(θ), and ωi = v(θi). Resampled θ* will then be approximately distributed according to f standardized, which is precisely p̄2(θ). Again, different aspects of the sensitivity of the posteriors to changes in inputs are easily studied by graphical examination of the posterior samples.
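The posterior-to-posterior reweighting for question 1 (changed prior, same likelihood) can be sketched on a hypothetical conjugate setup of our own where the second posterior is known exactly: draws from a first posterior Beta(8, 4) (binomial data, 7 successes in 10, uniform prior p1) are reweighted to reflect a new prior p2 = Beta(2, 2), whose exact posterior is Beta(9, 5).

```python
import numpy as np

rng = np.random.default_rng(4)

# Posterior sample under prior p1 = uniform(0, 1): Beta(8, 4) draws.
post1 = rng.beta(8, 4, size=100_000)

# Same likelihood, new prior p2 = Beta(2, 2), so
# v(theta) = p2(theta)/p1(theta) is proportional to theta*(1 - theta).
v = post1 * (1 - post1)
q = v / v.sum()
post2 = rng.choice(post1, size=10_000, p=q)
# post2 approximates the exact second posterior Beta(9, 5), mean 9/14.
```

No new integration was needed: the first posterior sample was simply reweighted, which is the point of the sampling-resampling approach to sensitivity analysis.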
5. AN ILLUSTRATIVE EXAMPLE
To illustrate the passage, via Bayes's Theorem, from a prior sample to a posterior sample, we consider a two-parameter problem first considered by McCullagh and Nelder (1989, sec. 9.3.3).
For i = 1, 2, 3, suppose that Xi1 ~ Binomial(ni1, θ1) and Xi2 ~ Binomial(ni2, θ2), conditionally independent given θ1, θ2, with ni1, ni2 specified. Suppose further that the observed random variables are Yi = Xi1 + Xi2, i = 1, 2, 3, so that the likelihood for θ1, θ2 given Y1 = y1, Y2 = y2, and Y3 = y3 takes the form

    ∏_{i=1}^{3} Σ_{ji} (ni1 choose ji)(ni2 choose yi - ji) θ1^{ji} (1 - θ1)^{ni1 - ji} θ2^{yi - ji} (1 - θ2)^{ni2 - yi + ji},

where max{0, yi - ni2} ≤ ji ≤ min{ni1, yi}.
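This convolution-of-binomials likelihood is easy to evaluate directly. The sketch below codes it and, anticipating the resampling step described shortly, applies the weighted bootstrap to a uniform prior sample over the unit square; the data triples are the (ni1, ni2, yi) values tabulated next in the text.

```python
import numpy as np
from math import comb

# Data from McCullagh and Nelder (1989, sec. 9.3.3): (n_i1, n_i2, y_i).
data = [(5, 5, 7), (6, 4, 5), (4, 6, 6)]

def likelihood(t1, t2):
    """Likelihood for Y_i = X_i1 + X_i2, summing over the unobserved
    split j_i in max{0, y_i - n_i2} <= j_i <= min{n_i1, y_i}."""
    L = np.ones_like(t1)
    for n1, n2, y in data:
        term = np.zeros_like(t1)
        for j in range(max(0, y - n2), min(n1, y) + 1):
            term += (comb(n1, j) * comb(n2, y - j)
                     * t1**j * (1 - t1)**(n1 - j)
                     * t2**(y - j) * (1 - t2)**(n2 - y + j))
        L *= term
    return L

rng = np.random.default_rng(5)
n = 20_000                                    # prior sample size, as in the text
t1 = rng.uniform(size=n)                      # uniform prior over
t2 = rng.uniform(size=n)                      # the unit square
w = likelihood(t1, t2)
idx = rng.choice(n, size=2_000, p=w / w.sum())  # weighted bootstrap
post1, post2 = t1[idx], t2[idx]               # posterior sample for (theta1, theta2)
```

A quick check: at θ1 = θ2 = θ the inner sum collapses, by Vandermonde's identity, to the single binomial probability for yi out of ni1 + ni2 trials. The resampled points exhibit the negative correlation between θ1 and θ2 discussed below.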
The data considered by McCullagh and Nelder are the following:

    i    ni1    ni2    yi
    1     5      5      7
    2     6      4      5
    3     4      6      6

Figure 1. Sample from Prior.

For purposes of illustration, we take the joint prior distribution for θ1, θ2 to be uniform over the unit square. In accordance with the shift to a sampling perspective that constitutes our fundamental message in this article, Figure 1 presents a scatterplot of points uniformly drawn from the unit square, together with summary histograms confirming the uniform "shape" of the prior marginals for θ1 and θ2. We now proceed to generate a posterior sample by resampling from the prior sample. For this illustration, the weighted bootstrap procedure was used and resulted in the posterior sample scatterplot shown in Figure 2, together with summary histograms of the posterior marginals for θ1 and θ2. General features of the posterior are easily identified from this picture, for example,
Figure 2. Sample from Posterior.
the marginal locations and spreads of inferences for the two parameters, together with the negative correlation and slight bimodality, which reflects the ambiguity resulting from observations in the form of sums of binomial outcomes.
Numerical summaries in the form of posterior moments, quantiles, or whatever are trivially obtained, if required, by forming corresponding sample quantities in the obvious way.
A further flexible and straightforwardly implemented feature of the sample-based approach is that posterior inferences can be trivially reexpressed in terms of any reparameterization of interest. For example, the logit (log-odds) reparameterization is often of interest in problems involving binomial data, so that, in the above, it might be of interest to recalculate the joint and marginal posteriors for the parameters log[θi/(1 - θi)], i = 1, 2. The sample for θ1, θ2 translates directly into a sample from the logit-transformed parameters and results in the summary picture given in Figure 3.

Figure 3. Sample from Transformed Posterior (log[θi/(1 - θi)]).

We note that the forms of joint posterior revealed in Figures 2 and 3 are far from "nice" and would require subtle numerical handling by the nonsampling approaches cited in the Introduction.
So far as the choice of sample sizes for initial and resamples is concerned, this will typically be a matter of experimentation with particular applications, having in mind the level of "precision" required from pictorial or numerical summaries. For example, in Figure 1 we display 1,000 sample points as an effective pictorial representation. However, the resampling ratio needs to be at least 1 in 10, with some 2,000 points plotted in Figures 2 and 3 to convey adequately these awkward posterior forms. The actual generated sample from the prior thus needed to be in excess of 20,000 points.
Clearly, there is considerable scope for more refined graphical outputs in terms of joint density contours, kernel density curves, and so on. We encourage readers to be creative in fusing EDA and graphical techniques
with the sample-based approach to formal inference presented here.
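The logit reexpression mentioned above is a draw-by-draw transformation of the posterior sample. A minimal sketch, with Beta(8, 4) draws standing in for a generic posterior sample for a probability parameter (our own stand-in, not the example's actual posterior):

```python
import numpy as np

rng = np.random.default_rng(6)

# A posterior sample for a probability parameter theta ...
theta = rng.beta(8, 4, size=10_000)

# ... translates directly into a sample from the log-odds posterior:
logit = np.log(theta / (1 - theta))
```

No new integration or change-of-variables Jacobian is needed; summaries of the transformed parameter come from the transformed draws in the obvious way.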
[Received May 1990. Revised January 1991.]
REFERENCES

Efron, B. (1982), The Jackknife, the Bootstrap and Other Resampling Plans, Philadelphia: Society for Industrial and Applied Mathematics.
McCullagh, P., and Nelder, J. A. (1989), Generalized Linear Models (2nd ed.), London: Chapman and Hall.
Ripley, B. (1986), Stochastic Simulation, New York: John Wiley.
Rubin, D. B. (1988), "Using the SIR Algorithm to Simulate Posterior Distributions," in Bayesian Statistics 3, eds. J. M. Bernardo, M. H. DeGroot, D. V. Lindley, and A. F. M. Smith, Cambridge, MA: Oxford University Press, pp. 395-402.
Smith, A. F. M., Skene, A. M., Shaw, J. E. H., and Naylor, J. C. (1987), "Progress with Numerical and Graphical Methods for Bayesian Statistics," The Statistician, 36, 75-82.
Smith, A. F. M., Skene, A. M., Shaw, J. E. H., Naylor, J. C., and Dransfield, M. (1985), "The Implementation of the Bayesian Paradigm," Communications in Statistics, Part A - Theory and Methods, 14, 1079-1102.
Tierney, L., and Kadane, J. (1986), "Accurate Approximations for Posterior Moments and Marginal Densities," Journal of the American Statistical Association, 81, 82-86.