
Review

Review by: Peter C. Kiessler
Source: Journal of the American Statistical Association, Vol. 104, No. 488 (December 2009), p. 1714
Published by: Taylor & Francis, Ltd. on behalf of the American Statistical Association
Stable URL: http://www.jstor.org/stable/40592374

Book Reviews

Approximate Dynamic Programming: Solving the Curses of Dimensionality.

Warren B. Powell. Hoboken, NJ: Wiley, 2007. ISBN 978-0-470-17155-4. xvi + 460 pp. $125.00 (H).

Markov decision processes are a class of stochastic optimization problems that are typically analyzed using tools from dynamic programming. Starting with the Bellman equation, an elegant theory exists which not only demonstrates the existence of an optimal solution but also provides an algorithm for obtaining it. An excellent reference to the theory and application of Markov decision processes is Puterman (1994). Due to the "curse of dimensionality," classical algorithms fail to solve large and many realistic problems. Approximate dynamic programming is a recent theory that blends Markov decision processes and stochastic iterative algorithms and has been successful in solving large problems. Approximate Dynamic Programming provides a treatment of this theory, accessible to a wide audience.
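The contrast between the two approaches can be made concrete with a minimal sketch. The five-state problem, rewards, and step-size rule below are invented for illustration and are not taken from Powell's book: classical value iteration sweeps every state using the full probability law, while the ADP-style loop updates a value estimate along a single simulated sample path.

    import random

    # Toy Markov decision process, invented for illustration:
    # five states, two actions, transition law and one-step reward.
    states, actions, gamma = range(5), range(2), 0.9

    def transitions(s, a):
        # (next_state, probability) pairs; probabilities sum to one
        return [((s + a + 1) % 5, 0.6), ((s + 2) % 5, 0.4)]

    def reward(s, a):
        return 1.0 if s == 4 else -0.1 * a

    # Classical value iteration: a full sweep over the state space using
    # the probability law, exactly what the "curse of dimensionality"
    # rules out when the state space is large.
    V = [0.0] * 5
    for _ in range(200):
        V = [max(reward(s, a)
                 + gamma * sum(p * V[t] for t, p in transitions(s, a))
                 for a in actions)
             for s in states]

    # ADP-style alternative: follow one simulated sample path and apply
    # a stochastic-approximation update with a diminishing step size.
    Vhat, s = [0.0] * 5, 0
    for n in range(1, 50001):
        a = max(actions, key=lambda a: reward(s, a)
                + gamma * sum(p * Vhat[t] for t, p in transitions(s, a)))
        t = random.choices([t for t, _ in transitions(s, a)],
                           [p for _, p in transitions(s, a)])[0]
        alpha = n ** -0.6                      # Robbins-Monro step size
        Vhat[s] += alpha * (reward(s, a) + gamma * Vhat[t] - Vhat[s])
        s = t

    print([round(v, 2) for v in V])            # exact values
    print([round(v, 2) for v in Vhat])         # sample-path estimates

The two printed vectors should roughly agree. The point of the contrast is that the value update in the second loop replaces the expectation over all next states with one sampled transition; in a full ADP method the greedy step would be approximated as well, a refinement omitted here.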
Powell's intended audience is undergraduate/master's students in operations research. The prerequisites are probability and some optimization. The book emphasizes modeling and algorithm development. The presentation begins with examples of dynamic programming problems. Using the examples as motivation, classical Markov decision theory is developed. Given the classical development, Powell then gives an outline of approximate dynamic programming, demonstrating how the new theory avoids the "curses of dimensionality." Before discussing the new theory in more depth, the author devotes Chapter 5 to modeling issues. Markov decision processes are notationally cumbersome. Additionally, the classical approach works with the probability law of the process, while the approximate methods work with sample paths. This chapter allows the reader to clarify these ideas. The book then delves into the theory and implementation of approximate dynamic programming. Chapters 8 and 9 are devoted to solving finite and infinite horizon problems. The remainder of the book deals with additional topics which, as the author claims, bring out "the richness of approximate dynamic programming."
The book should make an excellent text for a course in dynamic programming. At the very least, the first nine chapters must be covered. Each chapter concludes with a problem set. The problems include modeling problems, problems dealing with implementation issues, and projects. Most of the projects are found in the later chapters. The author separates mathematical proofs from the main body of the book. These proofs are found near the end of the chapter and are entitled "Why it works."

Peter C. Kiessler
Clemson University

REFERENCE

Puterman, M. L. (1994), Markov Decision Processes, New York: Wiley.

Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions.

A. A. Borovkov and K. A. Borovkov. New York, NY: Cambridge University Press, 2008. ISBN 978-0-521-88117-3. xxix + 625 pp. $171.00 (H).

A random walk is the process of partial sums S_n = Y_1 + ... + Y_n formed from an independent and identically distributed (iid) sequence of random variables Y_1, Y_2, .... A basic distinction is between the light-tailed case and the heavy-tailed case, typified by the tails of the distribution F of the Y_r decaying exponentially fast at infinity, or going to zero at a rate slower than exponential, respectively. This volume is essentially dedicated to large deviation results for the second class, corresponding results for the first class being promised in a second volume.

In contrast to many areas in which large deviations are studied, in this relatively simple setting one expects exact asymptotic estimates for the probabilities of rare events, rather than logarithmic estimates. A classical example is the following result, due to S. V. Nagaev. Suppose that EY_1 = 0 and the tail F̄(x) = P(Y_1 > x) is regularly varying (rv) with index −α, where α > 1. Then for any δ > 0, uniformly for n ≤ δx,

    P(S_n > x) ∼ nF̄(x)  as x → ∞.   (1)
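Estimate (1) is easy to probe numerically. The following minimal Monte Carlo sketch (the Pareto tail and the values of α, n, x, and the replication count are arbitrary choices for illustration, not taken from the book) compares the simulated probability P(S_n > x) with the prediction nF̄(x) for centered Pareto increments; at finite x the two numbers agree only roughly, since convergence in (1) is slow for heavy tails.

    import random

    # Monte Carlo check of P(S_n > x) ~ n * Fbar(x), with Y = Z - EZ and
    # Z Pareto(alpha) on [1, inf), so EY = 0 and Fbar is rv of index -alpha.
    alpha, n, x, reps = 1.5, 20, 200.0, 200000
    EZ = alpha / (alpha - 1)                       # mean of Z, here 3.0

    def increment():
        z = (1.0 - random.random()) ** (-1.0 / alpha)  # inverse CDF: P(Z > z) = z**-alpha
        return z - EZ                                  # centered increment Y

    hits = sum(sum(increment() for _ in range(n)) > x for _ in range(reps))
    predicted = n * (x + EZ) ** -alpha             # n * Fbar(x); Fbar(x) = P(Z > x + EZ)
    print(hits / reps, predicted)                  # should be of comparable size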
What is appealing about this result is that the only assumption about the left-hand tail is the existence of the mean, and the fact that it has a simple probabilistic meaning, since the right-hand side represents the probability of the different ways in which just one "large jump" can occur. This "one large jump" principle lies at the heart of most of the results treated here.
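That probabilistic meaning can be spelled out with the folklore calculation underlying the principle (a heuristic sketch, not a passage from the book): for large x the event {S_n > x} essentially requires one big summand, so

    P(S_n > x) ≈ P(⋃_{r≤n} {Y_r > x}) ≈ Σ_{r≤n} P(Y_r > x) = nF̄(x).

The first approximation holds because, the mean being finite and n ≤ δx, the remaining summands typically contribute only o(x); the second because the chance of two or more jumps above x is of order (nF̄(x))² and hence negligible.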
One obvious way to extend this result is to ask for the same estimate to hold over a larger range of (n, x) values: here the price one pays is that extra assumptions are required on the left-hand tail. For example, if α ∈ (1, 2) and F(−x) ∼ cF̄(x), where c is a finite, positive constant, then the random walk is in the domain of attraction of a stable law; if c_n is the appropriate norming sequence, then (1) holds uniformly as x/c_n → ∞. This provides a nice counterpart to the stable Central Limit Theorem, which gives information when x/c_n is bounded.

When the "one large jump" principle holds, one can expect that other quantities, such as P(max_{r≤n} S_r > x), should have similar asymptotic behavior. Other obvious extensions include the case of rv tails with α < 1, the case of subexponential tails, and the case of semi-exponential tails. This last case, typified by tails of the form exp(−r(t)), where r is rv with index in [0, 1], is on the boundary of the heavy/light-tailed division, and as well as treating this topic in some detail, the authors actually trespass over the boundary by devoting a chapter to distributions satisfying Cramér's condition, whose tails necessarily decay at an exponential rate.

The asymptotic estimates treated in this book are normally proved via a lower and an upper asymptotic bound. The authors choose to give a large number of such bounds, valid in various different circumstances, including cases where the appropriate tails are bounded by functions which are rv or in some other class of functions, rather than being in that class. I am not convinced that the extra material generated is that interesting, and certainly the extra notation that it generates makes this a difficult, as well as a lengthy, read. In addition to the topics mentioned above, there are further chapters on the asymptotics of Σ_{n=1}^∞ a_n P(S_n > x), where a_n > 0, on first passage times, on more general large deviation events, and on random walks in higher dimensions. Further chapters are devoted to the case of nonidentically distributed summands, certain cases of dependent summands, and processes in continuous time, including Lévy processes and generalized renewal processes.

In conclusion: this is a valuable work of reference, containing a large number of results in an important area of probability theory which have not appeared in book form before. Many of these results were published in Russian journals, and so may not have been easily available to Western readers. I think it would perhaps have been a better book if the authors had not tried to cover such a wide area, and I am sure that a more user-friendly notation could have been devised. But despite these quibbles the authors are to be congratulated for this significant addition to the literature on random walks.

Ronald Doney
University of Manchester

Bayesian Biostatistics and Diagnostic Medicine.

Lyle D. Broemeling. Boca Raton, FL: Chapman & Hall/CRC, 2007. ISBN 1-58488-767-2. 198 pp. $83.95 (H).

As a statistics textbook dedicated entirely to statistical evaluation of diagnostic medicine from the Bayesian perspective, this book stands out in the landscape of predominantly frequentist expositions. The two best known books in the field of statistical evaluation of diagnostic tests are perhaps those by Pepe (2004) and Zhou, Obuchowski, and McClish (2002), which give an introduction from an essentially frequentist perspective.
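For readers new to the subject, the Bayesian treatment of a diagnostic test can be previewed in a few lines. The sketch below is generic and uses invented numbers; it is not an example from Broemeling's text. With a Beta prior on a test's sensitivity and binomial data from diseased patients, conjugacy gives the posterior in closed form:

    import random

    # Beta-binomial sketch of Bayesian test evaluation (invented data):
    # prior Beta(1, 1) on sensitivity; 27 true positives in 30 diseased patients.
    a0, b0, k, n = 1.0, 1.0, 27, 30
    a, b = a0 + k, b0 + (n - k)               # conjugate posterior: Beta(a, b)

    post_mean = a / (a + b)                   # posterior mean sensitivity
    draws = sorted(random.betavariate(a, b) for _ in range(100000))
    lo, hi = draws[2500], draws[97500]        # equal-tailed 95% credible interval
    print(f"sensitivity: mean {post_mean:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")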
The format of Bayesian Biostatistics and Diagnostic Medicine is attractive and simple to follow. For example, in the Introduction, Section 1.3 particularly stands out: it is a nice overview of what is to be found in each section of the book and can be quite handy when looking up an approach to a specific problem. Likewise, the chapter on the use of diagnostic imaging in clinical trials is an interesting and timely discussion on the topic.

However, after reading this book, as a medical physicist and an experienced modeler of diagnostic test results, I was left with a lengthy list of limitations and shortcomings of the book, threatening to far overshadow its qualities and usefulness to the medical researchers new to the area of Bayesian biostatistics and diagnostic testing.

First, as a textbook for nonstatisticians, this particular work is somewhat unsatisfactory. I have found the notation confusing and even misleading at times,

