
Financial Risk Analysis of Infrastructure Debt


Financial Risk Analysis of Infrastructure Debt
THE CASE OF WATER AND POWER INVESTMENTS

C. Vaughan Jones
Foreword by David Bendel Hertz

QUORUM BOOKS
Westport, Connecticut London

Library of Congress Cataloging-in-Publication Data

Jones, Clive Vaughan.
Financial risk analysis of infrastructure debt : the case of water
and power investments / C. Vaughan Jones ; foreword by David Bendel
Hertz.
p. cm.
Includes bibliographical references and index.
ISBN 0-89930-488-5 (alk. paper)
1. Public utilities--Finance--Case studies. 2. Risk management--Case
studies. 3. Water resources development--Finance--Case studies.
4. Power resources--Finance--Case studies.
5. Infrastructure (Economics)--Finance--Case studies. I. Title.
HD2763.J59 1991
363.6'068'1--dc20
90-45144

British Library Cataloguing in Publication Data is available.


Copyright © 1991 by C. Vaughan Jones
All rights reserved. No portion of this book may be
reproduced, by any process or technique, without the
express written consent of the publisher.
Library of Congress Catalog Card Number: 90-45144
ISBN: 0-89930-488-5
First published in 1991
Quorum Books, 88 Post Road West, Westport, CT 06881
An imprint of Greenwood Publishing Group, Inc.
Printed in the United States of America

The paper used in this book complies with the Permanent Paper Standard issued by the National Information Standards Organization (Z39.48-1984).
10 9 8 7 6 5 4 3

Contents

Figures and Tables
Foreword by David Bendel Hertz
Preface
1 Introduction
2 Concepts and Procedures
3 Financial Risks in the Construction Period
4 Revenue Risk--Rate and Demand Factors
5 Revenue Risk--The Customer Base
6 Applications
7 Reflections on the Method
Appendix
Bibliography
Index


Figures and Tables

FIGURES
1.1 Risk Profile of Rates of Return
2.1 Steps in Conducting a Risk Simulation
2.2 Triangular Probability Distribution
2.3 Denver, Colorado, GCD
2.4 Cumulative Probability Distribution
2.5 Event Tree for Rate Increase
2.6 Comparison of Risk Profiles
2.7a First Order Stochastic Dominance (relative probability)
2.7b First Order Stochastic Dominance (cumulative probability)
3.1 Overlay of Risk Profiles Developed with Uniform and Triangular Distributions and Table 3.2 Ranges
3.2 Comparison of Cumulative Distributions
4.1 Total Revenue Curve
5.1 Stochastic Models of Population Growth
6.1 Default with and without a Sinking Fund
6.2 Debt Service Schedules
6.3 Cost Overrun Probability
6.4 Default Risk--Level and Tipped Debt Service
6.5 Present Value of Investment Costs


TABLES
3.1 Cost Estimates and Realized Nuclear Power Plant Costs, 1966-1972
3.2 Range Estimates and Expected Costs by Cost Component
4.1 Error in Revenue Forecast from Neglecting Consumer Price Responses
5.1 Typical Presentation of Population Forecasts in High, Medium, and Low Variants
6.1 Cash Flow Model for Wholesale Producer
6.2 Minimum Present Value Capacity Investment Sequences
7.1 Payoff Matrix Illustrating Alternative Choice Criteria under Uncertainty

Foreword

DAVID BENDEL HERTZ

It is my pleasure to write a few words preceding C. Vaughan Jones's Financial Risk Analysis of Infrastructure Debt. When I began working on risk analysis and its simulations in the 1970s (eventually
publishing "Risk Analysis in Capital Investment," Harvard Business Review [September-October 1979]: 169-181, and writing various books on the subject), the procedures of using computers to
simulate investment outcomes were first becoming available. However, early technology was time-consuming and awkward. Today,
rapid simulations can run on microcomputers, making multiple
analyses easy and relatively quick. In this well-written and lucid
book, Jones demonstrates some surprising results such as the fact
that such simulations in the utility industry
readily show that contingency funds should be calculated against total
construction costs rather than against individual cost components at given
confidence levels.... [and] least cost capacity plans developed for average
or expected population growth may be different from those suggested by
a probabilistic or simulation point of view; for example, deterministic
procedures may lead to higher expected discounted costs than plans developed within a framework allowing for variation in population growth
and other key variables (p. 2).

Risk abounds, and in risk analysis we are always considering planning for an uncertain future, even in time intervals as different as a month, a year, or a century. Risk analysis simulations might lead to effective risk "management," that is, reduced numbers and magnitude of undesirable outcomes at reasonable (or even negative) input costs. Reductions in risk usually come at a price, often in deferred profits or dividends. Muddled decision-making approaches make well-informed expectations unlikely. Where the investment is millions of dollars, and the results
arise from projects whose foreseeable "lives" and usefulness are
measured in decades and impact on a large population and its
surrounding environment, decisions should be based on the strongest and most accurate methodologies available. While my original
work was intended to approach the risks of investment in general,
I now realize that specific scenarios such as the water and power
industries require their own analyses for effective understanding
and application. Cases must be evaluated on their own merits and
characteristics; the estimation of probabilities for various events
in these individual cases is critical to effective decision making. As
Jones points out, "Risk simulation . . . makes explicit what is implicit in many qualitative appraisals, attaching numbers that, at a
deep level, guide thought about the prospects of an investment
situation. Inconsistencies in qualitative appraisal can be ferreted
out, and the very process of imputing ranges and probabilities to
variables forces a deeper integration of information and understanding about the process in question" (p. 10).
Jones creates a risk simulation model based on a good analysis
of the complexity of financing utility projects. The discussion is
well developed and follows step-by-step the development of agreements and contracts, the issuance of bonds, and the management
of construction. On this well-researched and thought-out base,
Jones then analyzes the unique characteristics of utilities, such as
excess capacity, growth demands, and extension into other peripheral plants. He does this without becoming obscure or pedantic, and he backs his logic strongly with computer-based simulations
and stochastic modeling of the risks involved in each crucial phase,
as well as an outline of several applications based on these projections.
If life is composed of decisions and uncertainty, these decisions
involve varying degrees of risk. It is not possible to make a decision,
business or otherwise, without incurring a certain amount of risk.
All decisions are risky because a stake is placed on the uncertain future consequences resulting from acting on these decisions. There have been all too few books on the subject of risks involving
large-scale investment in a utility project, such as C. Vaughan
Jones's Financial Risk Analysis of Infrastructure Debt. A modern
treatment such as this one is much needed by the industry, and
Jones's book fills this need admirably.


Preface
The origins of this book date to conversations with A. G. Hart at
Columbia University in the late 1960s and, earlier, to a seminar
in Monte Carlo analysis at the University of Colorado.
Throughout the 1970s and 1980s, I had an opportunity to see
the relevance and to develop my understanding of financial risk
analysis in projects carried out for the Iowa State Commerce Commission, the Colorado Public Utilities Commission, and the Denver Water Board. Since then, I have had access to the considerable
resources in the area of risk analysis of the Environment and Behavior Program at the University of Colorado at Boulder, as well
as the good offices of research librarians at the University of Colorado School of Business.
I also have received invaluable comments, encouragement, and
support from a number of individuals, especially including John
J. Boland, Department of Geography and Environmental Engineering, Johns Hopkins University; David B. Hertz, Department
of Intelligent Computer Systems, University of Miami; James Manire, assistant vice president, Boettcher and Company; Margaret
Ludlum, senior economist, Seattle Water Department; William A.
Steele, financial analyst, Colorado Public Utilities Commission;
Timothy Tatam, vice president, Municipal Finance Department,
Standard & Poor's Corporation; Fred J. Leonard, financial analyst;
and my wife, Carol Eileen Ryan.


Financial Risk Analysis of Infrastructure Debt


1
Introduction
This book is about how to measure financial risk associated with
investments in specific infrastructure: electricity generation;
power transmission; and water storage, treatment, and distribution
systems. Such evaluations are especially accessible with respect to
these types of investments and with respect to infrastructure generally. In addition, power and water enterprises showed signs of
financial strain in the 1980s and confront critical decisions in the
1990s.
As one of a small number of extended discussions of financial
risk analysis, the book should be of interest to students of business,
finance, and economics. Readers can fix in mind basic principles
of quantitative risk analysis or risk simulation and consider the
interplay between context- or industry-specific information and
global approaches to risk evaluation.
This discussion has special and practical relevance for an audience of investment analysts, engineers, utility administrators and
financial personnel, regulatory officials, bond underwriters, and
public interest advocates. These groups need to develop a common
language to address risks associated with bond issuance in a decade
that promises significant departures from past patterns in population and economic growth, the natural environment, and regulatory politics. To this end, this book attempts an integration of
material from business, engineering, economics, demography,
probability theory, computer simulation, and policy studies. The presentation also supports hands-on analysis with spreadsheet and simulation examples.
The materials are organized around risk factors affecting costs
and revenues. Separate chapters present findings relating to the
variability of construction costs, customer demand, and population
growth. Together with qualitative information about risks, these
chapters offer suggestions about quantitative representation of relevant patterns of variability of key risk sources. The techniques
are integrated in simulation models dealing with contract risk, the
evaluation of sinking funds and amortization schedules, and long-run capacity planning. The concluding chapter recapitulates major
findings, considers issues of reliability and validation, and discusses
the manner in which the analysis developed in this book can be
generalized for a variety of infrastructure investments.
Risk simulation, widely available for the first time due to new
microcomputer technology and computer software, can lead to
surprising results. Simulation examples readily show that contingency funds should be calculated against total construction costs
rather than against individual cost components at given confidence
levels. Similarly, it is easy to demonstrate that tipped amortization
can run afoul of inflation, offering little, if any, risk mitigation,
even when substantial chances for construction cost overruns exist.
Most surprising of all, least cost capacity plans developed for average or expected population growth may be different from those
suggested by a probabilistic or simulation point of view; for example, deterministic procedures may lead to higher expected discounted costs than plans developed within a framework allowing
for variation in population growth and other key variables.
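The first of these results is easy to reproduce. The book's own examples are built in Lotus 1-2-3 with the @RISK add-in, introduced later in this chapter; the sketch below restates the experiment in Python with hypothetical, independent cost components:

import random

random.seed(1)
N = 20000
# Three hypothetical cost components, each triangular (low, mode, high),
# in millions of dollars; the components are drawn independently.
components = [(40, 50, 80), (90, 100, 150), (20, 25, 45)]

draws = [[random.triangular(lo, hi, mode) for (lo, mode, hi) in components]
         for _ in range(N)]

def pctile(xs, p):
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

by_component = sum(pctile([d[i] for d in draws], 0.90)
                   for i in range(len(components)))
of_total = pctile([sum(d) for d in draws], 0.90)
print("sum of component-level 90th percentiles: %.1f" % by_component)
print("90th percentile of total cost:           %.1f" % of_total)
# The second figure comes out smaller: independent overruns rarely
# coincide, so a contingency sized against total cost at a given
# confidence level requires less money than component-by-component sizing.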
The point is that a great deal more is possible than running a
few "what ifs" in cash flow models. Consensual estimates of the
ranges of key risk sources can lead to simulations that are more
informative than relying on a "standard case" or a "sensitivity
analysis."
The following discussion suggests a perspective on financial risks
and electric power and water investments, considers the rationale
for financial risk analysis in general and risk simulation in particular, and includes further information about the organization of
the book.


THE PROBLEM
Slower population growth, coupled with erratic and slower economic growth, took a toll on the financial viability of the electric
power industry in the 1970s and 1980s. Capacity expansion typically
involves long lead times. Designs in the pipeline since the 1960s
advanced to construction in the 1970s and 1980s. These projects
often were configured to take advantage of scale economies, and
they embodied, in the case of nuclear facilities, optimism vis-a-vis
new or frontier technology. In a sense, many projects were tailored
to a different economic environment than they met when they
came on line. Pervasive overcapacity was one result and only recently has been overcome in some regions of the nation.1
One upshot was deterioration in electric utility bond ratings.
Dividends declined along with interest coverage ratios. In the mid-1980s, investor-owned public utilities accounted for 30 to 40 percent of the dollar volume of junk bonds. Many were "fallen angels"
or issues whose default probability was considered sufficient by
bond rating organizations to be dropped from the list of investment
grade securities.2
The most spectacular recent example of financial risk in the
power industry was the default of the Washington Public Power
Supply System (WPPSS) on interest payments on $2.25 billion
worth of bonds in January 1984. Commentary on this event, the
largest default on utility bonds since the Great Depression, focuses
on construction delays, poor project planning, and the practice
WPPSS adopted, for a time, of capitalizing several years of interest
in bond issues. Financing costs escalated, and projected electricity
demand did not materialize. Questions arose concerning the legal
status of WPPSS's so-called take-or-pay contracts with buyers.3
Less well-known examples also exist. A Colorado utility serving
some 200,000 customers filed for Chapter 11 protection in 1990,
following years of erratic revenue and questionable investment and
management practices; it was the largest bankruptcy filing by an
electric utility in that year. As noted later in this discussion, many
other examples can be culled from Standard & Poor's CreditWeek
where particularly questionable bond issues are put on "Credit
Watch."

Financial problems in the urban water industry have been less dramatic in recent years, although there are signs of financial strain
as utilities confront new water quality standards and growing scarcity of new supplies. Institutional problems may be of decisive
importance to this industry in the future. In many metropolitan
regions, central water systems service static or declining populations, while a patchwork of districts meet the needs of outlying
areas with more robust growth. Since near and cheaper reservoir
sites and water sources already have been exploited, transbasin
diversions must be anticipated to meet future needs. How will
these massive, long distance projects be financed by consortia
whose members have mixed financial profiles and differing views
about their commonality of interest?
This point generalizes for a variety of infrastructure investments.
With tighter federal budgets, airport, highway, bridge, and port
facilities must rely more on debt issues for initial construction and
rehabilitation funds.
Issues
While there are many sources of financial risk for power and
water utilities, overbuilding plants and equipment relative to population growth is a significant risk, given current demographic realities. The nation's population growth has been slowing for some
time. Almost all projections indicate this will continue, possibly
resulting in negative growth or diminishing population by the second or third decade of the next century.4 This suggests problems
for utility planning. Traditionally, facilities have been capital intensive and have required long gestation periods. In order to bring
a central power station or water facility with attractive economies
of scale5 on line, initial operation at less than full capacity usually
is countenanced. This is a gamble if population in a community or
region cannot be expected to grow at historic rates. The extent of
the gamble is underlined by the literature on the accuracy of forecasting models, which stresses the unreliability of predictions of
turning points.
Just as the perils of overbuilding and relying on large, central
generating or water storage facilities are coming to be perceived
generally, the situation shows signs of shifting. Oil prices may rebound to the 1970s levels in the coming years, rendering recent, more incremental additions to electric capacity expensive indeed.
Regrets may surface about lost opportunities in developing water
infrastructure if global climatic warming occurs as rapidly as some
scientists anticipate. There are indications, also, that reliance on
existing supplies and optimizing existing transmission, construction
of smaller units such as gas turbines, and purchase of cogeneration
power or wells can promulgate "false economies" and be "penny
wise and pound foolish."
There is no general answer to the question, "Should large central
facilities be built, or should incremental additions to capacity be
sought?" Cases must be evaluated individually on their merits. It
is suggested here that investment evaluation should take into account the potential variability of key factors. Rather than relying
upon inflexible forecasts of conditions five, ten, and twenty years
in the future, financial appraisal of infrastructure investments
should consider the economic tradeoffs of projects, recognizing
random influences on parameters such as construction costs, population growth, and the like. Concern with the variability of parameters affecting the success or failure of an investment is vital
to steering a middle course between reckless devotion to the grandiose and overweening caution and piecemeal solutions associated
with a falling standard of living.
In addition, competitive pressures from deregulation in the electric power field may impact the credit strength of municipal and
cooperative systems, and revision of the Clean Air Act promises
to have financial repercussions for many electric power producers.
RISK ANALYSIS AND RISK SIMULATION
Financial risk analysis shares a great deal with risk analysis as
practiced in engineering or natural hazards policy. Expert opinion
can be polled for anticipated ranges for risk factors and the values
most likely to be realized. Using estimates of the range and distribution of a risk factor, financial risk analysis considers the performance of investments and the balance sheet of an enterprise.
Financial risk analysis may contribute to (1) project selection, (2)
the selection of financing options relevant to a given project, and
(3) perspectives concerning company risk, based on an examination of cash flows. One basic motivation is to identify or develop risk management plans for investors or a utility organization to
cope with uncertainty concerning future market conditions.
Two issues arise at the outset of discussion of financial risk
analysis. These relate to: (1) the relation between financial risk
analysis and financial market processes, and (2) the rationale for
quantifying the analysis.
Financial Risk Analysis and Financial
Market Processes
The concept of risk as defined in this book is measured by the
distribution of the returns or revenues from an investment. This
is called a risk profile and is a probability distribution. Thus, Figure 1.1 might portray a risk profile of rates of return
from a particular investment. Probabilities are marked on the vertical axis. The horizontal axis lists rates of return that may be
achieved under the varying conditions encompassed in the analysis.
As is discussed more extensively in Chapter 2, one major consideration is usually the variation or dispersion of this risk profile
around its mean or expected rate of return. Risk averse individuals
prefer tighter distributions around the mean, even, perhaps, at the
cost of a slightly lower expected value. Other decision makers may
favor investments offering more dispersed returns, using the rationale that they might achieve very high rates of return. The idea
is to present information as full and complete as possible on the
distribution of returns from a prospective investment to inform
choices about whether to go ahead with construction, whether to
adopt modifications in project configuration and financing, or the
like.
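As a hypothetical numerical illustration (not from the book), two investments can share an expected return of 8 percent while presenting very different risk profiles:

import random, statistics

random.seed(2)
# Simulated rate-of-return draws (percent) for two hypothetical investments
a = [random.gauss(8.0, 1.0) for _ in range(10000)]   # tight profile
b = [random.gauss(8.0, 4.0) for _ in range(10000)]   # dispersed profile

for name, r in (("A", a), ("B", b)):
    print("%s: mean %.2f, std %.2f, P(return < 0) = %.3f, P(return > 14) = %.3f"
          % (name, statistics.mean(r), statistics.stdev(r),
             sum(x < 0 for x in r) / len(r), sum(x > 14 for x in r) / len(r)))
# A risk-averse decision maker prefers A's tight distribution; a
# return-seeking one may prefer B for its chance of very high returns.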
Modern portfolio analysis contributes an intriguing twist to financial risk analysis. The capital asset pricing theory and similar
approaches focus on the covariation of an investment's returns
with the payback from a portfolio of investments representative
of the market as a whole.6 The interrelation of company risk,
market risk, systematic risk, and unsystematic risk largely displaces
individual investment risk profiles in deciding how to buy stocks.
One implication is that risk management can be achieved through
suitable diversification.


Figure 1.1
Risk Profile of Rates of Return

The problem with diversification at the level of utility projects, however, is the potential size of electric power and water investments. At the level of project or financing selection, there may be no recourse except the comparison of risk profiles. The significant impact, particularly among certain classes of investors, occasioned by the default of WPPSS also illustrates the dangers of complacency concerning single projects or single companies vis-a-vis the national bond market.
Another notion is that, if a project can be financed, its market
valuation summarizes all relevant information about its prospects.
At best, therefore, financial risk analysis "begs the market," and,
at worst, its findings might deter investors who otherwise would
be pleased to place their money down.
The first thing to observe about such a careless interpretation
of "rational expectations" is that financial risk analysis is conducted, in any case, by agencies responsible for credit ratings.
Thus, as an example, Standard & Poor's CreditWeek affirms several
issues on the grounds that "The ratings reflect the sound economic
base of the service area, strong historical and projected financial
performance, and the strong cash position of the electric system.
The St. John's River Power Park project remains on schedule with
commercial operation dates of April 1987 and October 1988 for
units 1 and 2. . . ."7 A less favorable report lowered ratings because,
As of June 30, [1986,] Gulf States had about $3.2 billion of debt and
preferred securities outstanding. Continued absence of rate relief and the
prospect of an additional rate cut in Louisiana due to disallowance of
Southern Co. contract capacity requirements will result in further deterioration in measures of creditor protection. The highly political regulatory
environments in both Louisiana and Texas and a severely depressed service territory substantially heighten bondholder risk.8
These assessments are supported by analysis of cash flow models
and the exploration of what-ifs. Thus, if a utility is interested in a
target bond rating, or if a regulatory body is concerned with the
supply of capital to agencies under its supervision, prefiguring the
implications of various local and regional developments seems advisable.
In other respects, skepticism seems warranted by recent events
in financial markets. Thus, the entrapment of major investment
figures in a web of insider trading and a pattern of collapse and
bailout vis-a-vis a number of lending institutions do not seem to
vindicate the invariable "rationality" of financial markets. Some
means of checking expectations vis-a-vis various investments, especially ex ante their final selection, seem advisable.


The Rationale for Quantifying Risks


This book is primarily concerned with a type of analysis called
risk simulation.9 This is the most complete and comprehensive type
of risk analysis and is the next logical step after identifying major
risk factors and the examination of some what-ifs or a sensitivity
analysis. A risk simulation seeks to weight various configurations
of risk factors by their probabilities. Then, in a sense, all possible
configurations and values for the risk factors are summarized in a
risk profile for the investment under consideration.
Although not a new concept, risk simulation is now more accessible with advances in microcomputer technology and simulation software. In this regard, the examples in this book have been
prepared with Lotus 1-2-3 and an add-in program called @RISK.10
No claim is advanced that this is the best or most up-to-date spreadsheet program. Rather, it is popular and widely used in business,
and the learning curve in the discussions of actual program sequences that follow (called macros) should not be too demanding
for many readers. Hardware requirements for the simulation computer programs are satisfied by an 8088 PC system, although the
added speed of execution with an 80286 or 80386 system is well
worth the upgrade.
The trick in a risk simulation is to estimate the probabilities for
various events. The important thing is that, insofar as people are
willing to make decisions, these probabilities already exist. In other
words, risk simulation operates with subjective probabilities based
on expert opinion supplemented by data about the objective frequencies of events, where available. This approach aims at as
complete a picture as possible in order to make the most informed
decision possible.
Simulation is useful when the relationship between variables is
too complex to derive analytic or "closed form" solutions for the
likelihood of processes and occurrences. Analytic derivations are
possible when there are relatively simple formulas describing the
relation of random variables to a criterion variable. Thus, suppose
revenues (R) and costs (C) are known to be normally distributed
around their expected values E(R) and E(C). This implies that
net income (NI) in the relation,
NI = R - C     (1.1)

also is normally distributed around its expected value E(R) - E(C), owing to certain well-known results of probability theory
(see Appendix). Of course, costs may not be normally distributed; in fact, they may be skewed in one direction or another; for example, their probability distribution may have a long tail rightward from the expected value. Furthermore, only rudimentary information about their probability distribution may be available, perhaps only an idea of absolute upper and lower bounds or the range of costs and revenue. Simulation becomes a tool of mathematical research in such contexts.11
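The contrast can be sketched numerically (all figures hypothetical). With R and C normal and independent, the distribution of NI in equation (1.1) follows in closed form; with a skewed cost distribution, simulation does the work instead:

import math, random, statistics

random.seed(3)
# Closed form: R ~ N(100, 8), C ~ N(80, 6), independent, so NI = R - C
# is normal with mean 100 - 80 = 20 and std sqrt(8**2 + 6**2) = 10.
print("analytic: E(NI) = 20.0, std(NI) = %.1f" % math.hypot(8, 6))

# Skewed costs: C lognormal with mean 80 and a long right tail; there is
# no comparably simple formula, so simulate NI directly.
mu = math.log(80) - 0.5 * 0.3 ** 2          # sets E(C) = 80 for sigma = 0.3
ni = [random.gauss(100, 8) - random.lognormvariate(mu, 0.3)
      for _ in range(20000)]
print("simulated: E(NI) = %.1f, std(NI) = %.1f, P(NI < 0) = %.3f"
      % (statistics.mean(ni), statistics.stdev(ni),
         sum(x < 0 for x in ni) / len(ni)))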
Risk simulation, it might be said, makes explicit what is implicit
in many qualitative appraisals, attaching numbers that, at a deep
level, guide thought about the prospects of an investment situation.
Inconsistencies in qualitative appraisal can be ferreted out, and
the very process of imputing ranges and probabilities to variables
forces a deeper integration of information and understanding about
the process in question.

The Staging of the Analysis


Like other evaluation techniques, financial risk analysis makes
different contributions at the project level at various stages in the
planning and building of a project. Early on, designs and cost
estimates are largely conceptual. General discussion of risk sources
is particularly useful at this point. Simpler analytical methods such
as sensitivity analysis and scenario development can indicate
whether or not financial problems are in any sense likely.
In a sensitivity analysis, each risk factor is allowed to vary over
a range, such as plus or minus 15 percent, indicating, in rough
terms, which variable contributes most importantly to the success
of the project.
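A minimal sketch of such a one-at-a-time exercise (the one-period cash flow model and all numbers are hypothetical):

# Hypothetical one-period model: net income from price, sales, and cost.
BASE = {"price": 2.00, "quantity": 50.0, "op_cost": 60.0}

def net_income(price, quantity, op_cost):
    return price * quantity - op_cost

base_ni = net_income(**BASE)
for factor in BASE:
    for pct in (-0.15, 0.15):
        case = dict(BASE, **{factor: BASE[factor] * (1 + pct)})
        print("%-8s %+3.0f%%: net income %6.1f (base %.1f)"
              % (factor, pct * 100, net_income(**case), base_ni))
# Net income here swings most with price and quantity (each moves it
# by +/-15 against revenue of 100) and least with op_cost (+/-9),
# pointing the risk analysis toward the revenue-side variables.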
Scenario development involves developing a plausible story or
"scripting" the future. In the best situation, reviewing several imaginative scenarios helps produce agreement about what is likely
to happen and how key variables can change.
The major product at this juncture is a list of risk factors and
an indication of what impact they might have, ignoring syncretisms
or joint influences that surpass the sum of individual effects and ignoring the question of what probabilities can be associated with various growth rates and the like.
Later, as project alternatives are narrowed and detail comes into
focus, risk analysis can shift to actual designs and the implementation process. Milestone events with regard to a project can include (1) signature of agreements and contracts committing a
company or public body to this option, (2) issuing bonds to finance
construction, and (3) managing construction of the project. At
each such juncture, consideration of the range of future values for
key parameters is critical. Early acceptance of a project configuration can elicit confidence in its financial feasibility. After preliminary cost estimates for a project are forged, risk simulation is well
suited to inform choices regarding financing options, including the
structure of debt service, the value of bond insurance, bank letters
of credit, and sinking funds. Continuous monitoring of risk factors
can provide a kind of early warning system for isolating cash flow
problems that might impact future bond ratings, the stability of
power or water rates, or other sensitive factors.
There is also what might be termed a "generic" planning problem particularly suited to risk simulation. Larger facilities, which
may exhibit economies of scale, often imply long periods of excess
capacity. If bond interest rates are high or demand growth slows,
financing large central facilities may present problems, despite
their low unit costs when fully utilized. Of course, insufficient
revenue in some future year is not by itself an unmitigated disaster.
To evaluate this problem, one has to consider the responses available to the organization building the facility. Such an analysis can
consider the priority among payments and the feasibility of solving
cash flow problems with higher rates or rescheduling of debt. Thus,
one would want to know whether a sizeable rate increase can cover
the shortfall or whether proposed rates, in fact, would be so large
that both demand and revenues would be constrained.12 Another
option may be to reduce principal payments with new borrowing,
which adds to interest charges but extends the payback period.
Other major risk factors include construction cost overruns and
revenue shortfalls due to financial difficulties of wholesale customers.
This can be broadened into a framework for considering the
general financial risk facing a company or public concern. A power system may have several baseload plants, intermediate and peaking facilities, and may exist within a context of physical reliability and distribution network factors. An urban water system may have complex
linkages between multiple reservoirs, treatment plants, and pumping facilities. These systems also can possess a complex structure
of debt and multiple financing options. Just as many Fortune 500
firms maintain and continuously revise financial risk models pertaining to their general operations, water and power utilities may
find extension of these techniques useful.
Another direction for generalizing these techniques is in capacity
planning. Regulatory requirements and the need to prepare public
opinion and to reserve future options lead many power and water
utilities to engage in very-long-run planning, specifying the trajectory or sequence of capacity expansion over thirty years or
more. Given some thought as to likely reactions to changed circumstances, lag and lead times, and the like, risk simulation can
be a tool to explore the adequacy of these capacity plans.
ORGANIZATION OF THE BOOK
At the simplest level, there are two groupings of financial risk
factors: those affecting costs and those affecting revenues. This
classification influences the structure of this book.
Chapter 2 presents a reconnaissance of basic concepts and procedures. The exposition considers default risk, surveys methods
for developing information about the probabilities of financial and
other events, and considers how decision makers react to and
utilize information from a risk simulation. The discussion at many
points is general and can be applied usefully to the evaluation of
financial risks for a variety of infrastructure and other investments.
Chapter 3 focuses on financial risks during the construction period and their evaluation. The discussion lists generic factors identified by cost engineering as major contributors to cost escalation
in capital construction projects. Range estimation is introduced as
a technique for developing probability imputations and is shown
to apply to cost escalation, scheduling delays, and the timing of
expenditures in the construction period. This chapter also considers the sizing of construction contingency allowances.
Chapter 4 considers customer demand patterns, an important element of revenue risk. The discussion underlines the effect of the price elasticity of demand on revenue projections and suggests
specific tactics for incorporating consumer demand patterns in risk
simulations of cash flows.
Chapter 5 deals with another major element of revenue risk:
forecasting the customer base. The discussion notes the dismal
track record of economic and demographic forecasts and, following
a brief discussion of population forecasting methods, outlines
methods for incorporating uncertainty into forecasts of population
in utility service areas.
Chapter 6 pulls the various strands of stochastic modeling together with several applications. Chapter 6 provides quantitative
analysis of problems related to sinking funds, tipped versus level
amortization schedules, contract risk, capacity planning, and the
interface between financial and benefit-cost analysis.
Chapter 7 recapitulates the findings and reflects, more generally,
on the promise and limitations of the method.
The Appendix discusses rules for finding the probability distributions of sums, products, and other combinations of random variables and lists a number of commonly encountered probability
distribution functions.
SUMMARY AND CONCLUSIONS
The value of financial risk analysis is that it leads decision makers
away from a favored and fixed set of assumptions about an essentially uncertain future. A more comprehensive view helps identify
the desirable size of contingency funds,13 leads to reconfigurations
of a project, or promotes new strategies of meeting an objective
or responding to difficulties.
Coming to grips with risk and uncertainty is a new and challenging aspect of business policy. As noted above, an organization
can juggle investments to minimize the risk from a portfolio. Yet
electric power and water projects can be large enough to present
special situations deserving of specific analysis. Power and water
utilities also characteristically derive the preponderance of their
revenues from investments within a particular geographic area and,
thus, are subject to systematic risk following from regional economic developments.

Wholesale change in investment policies may well be a response to the recognition of financial risks associated with such projects.
Indeed, it can be argued that, for many organizations, new policies
emphasizing smaller incremental approaches to capacity expansion, joint venturing, or reliance on conservation and wholesale
purchases already reflect responses to the new financial climate in
which utilities find themselves.
In addition to helping decision makers explore the performance
of an investment project and their preferences for various types
of returns, risk simulation integrates information that might otherwise remain the special province of certain experts and various
specialists in a project team. Clearly, pulling together information
about the likely range of risk factors is superior to a decision-making process in which prejudices develop about particular numbers: for example, "the plant is going to cost exactly $756 million" or "population and demand will grow at 2.4 percent for
twenty years." Compiling the information to conduct such an analysis is itself valuable, especially since this process can provide focus
and can integrate disparate elements in a project team.
This technique is useful in appraising the significance of contingencies from the standpoint of early decisions about a capital construction project, and it has relevance to project management. In
addition to bearing on the sizing or financing of a specific proposal,
risk simulation is useful in project comparison as well as the appraisal of individual companies and their financial prospects.
NOTES
1. See, for example, the North American Electric Reliability Council,
1987 Reliability Assessment: The Future of Bulk Electric System Reliability
in North America 1987-1996 (Princeton, N.J.: North American Electric
Reliability Council, September 1987).
2. See Edward I. Altmann and Scott A. Nammacher, Investing in
Junk Bonds: Inside the High Yield Debt Market (New York: John Wiley
& Sons, 1987); and Scott Fenn, America's Electric Utilities (New York:
Praeger, 1984).
3. See James Leigland, "WPPSS: Some Basic Lessons for Public Enterprise Managers," California Management Review (Winter 1987): 78-88; Gene Laber and Elisabeth R. Hill, "Market Reaction to Bond Rating
Changes: The Case of WHOOPS Bonds," Mid-Atlantic Journal of Business (Winter 1985/1986): 53-65; Darryl Olsen and Robert J. Defillippi, "The Washington Public Power Supply System--A Question of Managerial Control and Accountability in the Public Sector," Journal of Management Case Studies (Winter 1985): 323-343; and David Myhra, Whoops!/WPPSS: Washington Public Power Supply System (Jefferson, N.C.: McFarland, 1984).
4. U.S. Bureau of the Census, Projections of the Population of the
United States by Age, Sex, and Race: 1983 to 2080, Current Population
Reports, Population Estimates and Projections, Series P-25, No. 952,
U.S. Department of Commerce (Washington, D.C.: Government Printing
Office, 1984).
5. A facility exhibits economies of scale if, for example, it has twice
the service capacity at less than twice the capital and operating costs of
another plant. Scale economies can have a physical basis. Thus, F. M.
Scherer writes,
The output of a processing unit tends within certain physical limits to be roughly
proportional to the volume of the unit, other things being equal, while the amount
of materials and fabrication effort (and hence investment cost) required to construct
the unit is more apt to be proportional to the surface area of the unit's reaction
chambers, storage tanks, connecting pipes, and the like. Since the area of a sphere
or cylinder of constant proportions varies as the two-thirds power of the volume,
the cost of constructing process industry plant units can be expected to rise as the
two-thirds power of their output capacity, at least up to the point where they
become so large that extra structural reinforcement and special fabrication techniques are required.
Industrial Market Structure and Economic Performance, 2d ed. (Boston:
Houghton Mifflin, 1980).
6. John C. Hull notes the difference between the variance of returns
view and portfolio risk in "Risk in Capital Investment Proposals: Three
Viewpoints," Managerial Finance (1986): 12-15.
7. Standard & Poor's CreditWeek, October 27, 1986, p. 9.
8. Ibid., p. 52.
9. See, for example, John C. Hull, The Evaluation of Risk in Business
Investment (London: Pergamon Press, 1980).
10. @RISK is a computer program produced by the Palisade Corporation of Newfield, New York, telephone (607) 564-9993.
11. The literature on analytic probabilistic features of complex financial
computations is developing apace and should not be ignored in this connection. Interesting work has been done on the expected present value
of serially correlated revenue streams--a realistic characterization, if one
considers that good and bad business years tend to occur in runs. Unfortunately, the derivations underlying the analysis of time-interrelated cash flows are presently unable to accommodate fluctuations like the business cycle and other varying influences on revenues over the period of analysis.
See Carmelo Giacotto, "A Simplified Approach to Risk Analysis in Capital Budgeting with Serially Correlated Cash Flows," Engineering Economist (Summer 1984): 273-286; and Chan S. Park, "The Mellin Transform in Probabilistic Cash Flow Modeling," Engineering Economist (Winter 1987): 115-133.
12. This is a situation of an elastic demand response. For moderate
price increases, the econometric evidence regarding, for example, residential water or electricity suggests that the consumer response is price
inelastic. Thus, higher prices cause the consumer to purchase less, as suggested by the law of demand, but the consumer's total expenditure
on this lesser quantity actually increases. Larger price increases, however,
can push demand responses into the price elastic region of the demand
curve. This point is discussed further in Chapter 4.
13. Sang-Hoon Kim and Hussein H. Elsaid, "Safety Margin Allocation
and Risk Assessment Under the NPV Method," Journal of Business Finance and Accounting (Spring 1985): 133-144.

2
Concepts and Procedures
Risk simulation has been widely applied to investment evaluation,
including corporate planning models with Sears, Roebuck and
Company data;1 World Bank loans;2 computer leasing;3 petroleum
investment decisions;4 plant expansion proposals;5 hotel construction;6 and the analysis of insurance companies. In the late 1970s
and early 1980s, the Electric Power Research Institute (EPRI)
sponsored studies to assess demand uncertainty and expansion
plans for electric power systems.7 Risk simulations of Sizewell, a
British pressurized water nuclear reactor, focus on its projected
financial and economic advisability and timing in the capacity plan.8
Parallel techniques can be identified in engineering reliability
studies9 and natural hazard assessment.10 The mathematical basis
of risk simulation dates to World War II and a Manhattan Project
analysis of the diffusion of neutrons in fissionable material, developed by simulation methods and code-named Monte Carlo.11
The method of risk simulation has four basic steps, indicated
schematically in the flowchart of Figure 2.1. These include:
1. the identification of risk factors
2. the appraisal of the likely range and probability distribution of risk
factors
3. the simulation of investment performance with parameters sampled
from the probability distributions developed for the various risk factors

4. the summary of the results of the analysis in a risk profile for the investment performance measure or criterion.

Figure 2.1
Steps in Conducting a Risk Simulation
This process supports the formulation of risk management policies
and tactics. The analysis as a whole is rendered more effective by
attention to the facts and idiosyncrasies of risk communication.
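In outline, the four steps translate directly into a simulation program. The book's examples are built in Lotus 1-2-3 with the @RISK add-in; the Python sketch below restates the same loop with hypothetical risk factors and debt coverage as the criterion:

import random

random.seed(5)
DEBT_SERVICE = 12.0   # hypothetical annual debt service, $ millions

# Steps 1 and 2: risk factors with elicited triangular ranges (low, mode, high)
FACTORS = {"revenue": (55.0, 60.0, 70.0), "om_cost": (38.0, 40.0, 48.0)}

def draw(name):
    lo, mode, hi = FACTORS[name]
    return random.triangular(lo, hi, mode)

# Step 3: simulate performance with parameters sampled from those ranges
coverage = []
for _ in range(10000):
    net_income = draw("revenue") - draw("om_cost")
    coverage.append(net_income / DEBT_SERVICE)

# Step 4: summarize the criterion as a risk profile
coverage.sort()
print("median coverage ratio: %.2f" % coverage[len(coverage) // 2])
print("P(coverage < 1.25):    %.3f"
      % (sum(c < 1.25 for c in coverage) / len(coverage)))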
This chapter discusses this procedure in some detail, defining
and motivating key concepts that are relevant and useful in this
type of analysis. These concepts include default risk; random variable; frequency and subjectivist interpretations of probability; techniques of probability elicitation, including juries of executive
opinion, the Delphi method, and interview techniques supporting
probability encoding; the bootstrap; time series analysis; structural models; simulation sampling strategies; random numbers and pseudo-random numbers; and more about risk profiles and risk
preferences. The purpose of this chapter is to lay a foundation for
subsequent discussion and the series of examples presented in later
chapters. The reader may want to check his or her comprehension,
accordingly, returning to this chapter to pin down a basic point
discovered in a later chapter.


IDENTIFYING RISK FACTORS


Generally, the objective of utility planning is to meet demand
in a defined geographic service area and, possibly, to supply other
utility systems through bulk or wholesale contracts. This often
implies that new capacity is planned for completion at the time
that system demand is projected to exceed available capacity. Utilities rely on capital markets for financing that cannot be met by
internally generated funds. For publicly owned companies, options
include debt, preferred stock, and common stock. New equity is
generally more costly than debt, due to capital gains and dividend
taxation, transactions costs, and tax deductibility of interest on
debt.
One specific reading of what financial risk means for power and
water utilities, therefore, is default risk on debt. Larger, riskier
capital construction projects almost inevitably are financed by issuance of bonds. New debt service increases total obligations and
usually is associated with pledges in the bond ordinance drawn up
prior to the sale.
While it may seem extreme to focus on breaking points, there
are several good reasons, historical and conceptual, for doing so.
It is helpful to recall that widespread misallocation of utility financial resources in the 1930s prompted public regulation in the
first place.12 Thus the current push for deregulation may resurrect
some of the old problems. There was also the pervasive downgrading of investment ratings of investor-owned utility bonds in
the 1980s. Uneven patterns of regional growth resulted in a number
of problem power and water systems, particularly in the oil patch
and in the southwestern part of the nation, and the newfound
freedom to diversify investments led to some of the same excesses
witnessed in the thrift industry. In general, times seem to be growing more complex with uneven impacts of business restructuring
and foreign competition and changing social patterns. The conceptual argument, on the other hand, identifies default risk as the
bottom line against which bond ratings and other assessments of
financial condition occur.13 Movements from slight to slightly more
risk of default on debt obligations can prove significant to a company's access to capital markets and its ability to provide high
quality service. Clearly, the appraisal of default risk should be the first study in financial risk analysis, possibly to be supplemented by research vis-a-vis other measures of financial performance.
Relevant Accounting Categories
Several accounting categories, which are outlined in Text Box
2.1, are relevant to appraising default risk. Thus, net income is
commonly defined as total revenues net of purchases from other
systems, operating and maintenance (O&M), and other expenses.
Typically, a utility pledges to sustain a debt coverage ratio, defined
as net income divided by the debt service. The debt coverage ratio
is commonly specified at about 1.25 or 1.3. When debt coverage
falls below the preassigned figure, the utility is in technical default
and must attempt to increase revenues through rate increases.
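A minimal sketch of the test (figures hypothetical; the 1.25 pledge level follows the text above):

def technical_default(revenues, purchases, om, other, debt_service,
                      pledged_ratio=1.25):
    # Net income as defined above: revenues net of purchased power,
    # O&M, and other expenses.
    net_income = revenues - purchases - om - other
    return net_income / debt_service < pledged_ratio

# Hypothetical year, $ millions: coverage = (60 - 10 - 25 - 5) / 15 = 1.33
print(technical_default(60.0, 10.0, 25.0, 5.0, 15.0))   # False: 1.33 >= 1.25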
The word "attempt" stresses the fact that there can be constraints on a utility's ability to increase revenues through higher
rates. Thus, many power and water producers are subject to regulation regarding their rate of return or earnings, where adverse
developments in rate hearings have earned the appellation "political risk." Rate increases in these proceedings tend to be evaluated against a revenue requirement and a rate of return judged
to be adequate (or "fair") to attract investment capital to the firm.14
In simple terms, the relationship

RR = C + d + T + (V - D)r

where RR = the revenue requirement, C = operating expenses or costs, d = annual depreciation, T = taxes, V = gross valuation of utility property, D = accrued depreciation, and r = rate of return, governs identification of the allowable average price P*,

P* = (C + d + T + (V - D)r)/Q

where Q is the quantity sold in a representative and recent year.
This creates a difference between actual and allowable expense in
the rate base. Hence, the valuation of utility property, depreciation
procedures, and what constitutes a fair rate of return become hotly
contested issues.
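A worked example with hypothetical figures makes the arithmetic concrete:

def allowable_price(C, d, T, V, D, r, Q):
    # Revenue requirement RR = C + d + T + (V - D) * r, spread over
    # the quantity Q sold in a representative test year.
    RR = C + d + T + (V - D) * r
    return RR, RR / Q

# Hypothetical filing: $40M expenses, $8M depreciation, $6M taxes,
# $300M gross valuation, $100M accrued depreciation, 10% allowed
# return, 50M units sold.
RR, P_star = allowable_price(C=40e6, d=8e6, T=6e6, V=300e6, D=100e6,
                             r=0.10, Q=50e6)
print("revenue requirement: $%.0f million" % (RR / 1e6))      # 74
print("allowable average price P*: $%.2f per unit" % P_star)  # 1.48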
It is less often recognized, however, that enhanced revenues may not result from price increases for a product, even if higher rates are allowed by a regulatory body. This is especially likely if large increases are contemplated, since consumer demand responses may be price elastic, as discussed in detail in Chapter 4, "Revenue Risk--Rate and Demand Factors."

Text Box 2.1
Revenues
    Sales
        Retail Customers
        Wholesale Customers
    Invested Funds
Expenses
    Purchases from Other Systems
    Operation & Maintenance (O&M)
    Administration
    Taxes
Debt Service on Bonds
    Amortization Schedule
    Debt Service
        Interest
        Principal
Demand/Supply Balance
    Demand
    Additions to Demand
    Total Capacity
    Excess Demand


Actual default occurs when a utility fails to pay the debt service
in a given period, that is, it fails to set aside funds for payment of
bond interest and principal payments. In many cases, the situation
may be highly dynamic, in the sense that there can be considerable
uncertainty in the short term about whether revenues will be sufficient to meet obligations or whether short-term cash can be raised
to save the day. Given the capital intensity of water and power
systems and the potential for holdings of land as well as plant and
equipment, it may be possible to arrange a sale of assets to generate
short-term cash to meet debt service. Another option is the refinancing and rescheduling of existing debt, although, clearly, caution may be necessary on the part of the utility in broadcasting the
extent of its duress. The argument here always must be one of
temporary exigency and the promise of future growth in revenues.
A deeper level of distress occurs when a utility cannot meet
operating expenses and interest payments. This is generally viewed
as irremediable without recourse to reorganization or liquidation,
but it is interesting that there may be room for maneuver even in
this circumstance. Thus, under normal conditions, interest payments are sometimes capitalized in loans or the issuance of bonds,
that is, there is an initial grace period in which there are no interest
payments. So, again, if the promise of future revenues is extraordinary, interim expediencies may be negotiated.
Financial exigencies are caused by higher costs or lower revenues
than anticipated when debt and other obligations were assumed.
On the cost side, construction cost growth and delay in construction schedules are a major source of financial risk. If interest
is capitalized during the construction period, project delays exact
high penalties. As contingency funds are exhausted, new debt or
other, usually more costly, funds must be obtained, adding to the
level of subsequent debt payments.
Revenue shortfalls due to actions of bulk customers also may
be a consideration. Utilities may be lulled into a false sense of
security by so-called take-or-pay contracts without verifying recognition of such contracts in local jurisdictions. Continuation of
specific bulk contracts may be risky in regions of abundant electric
supply in which transmission is being erected to support more
power pooling.

A pervasive financial risk is related to long periods of excess capacity that can accompany construction of large facilities designed to reap economies of scale. If bond interest rates are high
or if demand growth slows, the financing of these projects may
become difficult, despite prospects of their low unit costs when
fully utilized.
Other Measures of Investment Performance
Default risk is conditional on servicing demand in a defined
geographic service area (since the safest option may be to liquidate
productive assets and buy Treasury bonds), and its appraisal may
be conditioned on other investment performance criteria also.
Achieving a high or target rate of return on capital, for example,
is a standard yardstick of business success.15 In general, the goals
and motives of a large electric power producer or water purveyor
can be complex.
From a planning and regulatory standpoint, emphasis in recent
years has been placed on the present value of capacity investments
projected by a utility. This is a broad performance measure linking
financial risks and categories of benefit cost analysis and economic
optimization in utility systems. With suitable accumulation of reserves, the investment sequence with the minimum present value
of costs often is the expansion path with the minimum financial
risks.16 When utility rates are flexible, the minimum present value
of costs is associated with a capacity expansion trajectory in which
marginal cost pricing is applied and maximum social benefits are
attained, insofar as these may be measured. Considerable literature exists concerning the application of mathematical algorithms
to optimize capacity investment plans, where the attainment of
these optima is gauged by the present value of the costs criterion.
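As a sketch of the criterion (cost streams hypothetical), two expansion sequences can be ranked by the present value of their costs:

def present_value(costs, rate=0.08):
    # Discounted sum of annual costs, year 0 first.
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

# Hypothetical ten-year cost streams, $ millions: one large central
# plant built up front versus two smaller increments five years apart.
large_central = [120] + [4] * 9
incremental = [60] + [5] * 4 + [65] + [8] * 4
for name, stream in (("large central", large_central),
                     ("incremental  ", incremental)):
    print("%s PV of costs = %.1f" % (name, present_value(stream)))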
APPRAISING RANGES AND PROBABILITIES
The problem addressed by risk simulation is simply that it is
desirable to have some way to compare cash flow projections developed upon optimistic or pessimistic assumptions or with various
intermediate values for key risk parameters. This involves associating probabilities with various values of these risk factors.

What are probabilities? One answer is associated with random variables. A random variable is the outcome of a random experiment, that is, a procedure that is repeatable under similar conditions in which the outcome characteristically varies, such as
flipping coins or tossing dice. Thus, a random variable might be
the number coming face up on a toss of a die. Given certain,
intuitive physical conditions, the probability that, say, one toss
shows a six face up is 1/6. Another way of looking at this probability
of one in six is to consider it to be approximately the proportion
of sixes that come face up as a die is tossed a very large number
of times (see Appendix). This reasoning is the basis of the frequency interpretation of probability, upon which, to a large extent,
the mathematics of probability have been developed. To an extent,
historical data support frequency assessments. Thus, if one is interested in the probability of cost overruns on construction projects
of a certain type, it makes sense to consider the past record of
such projects.
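The frequency interpretation is easy to check by brute force (a sketch):

import random

random.seed(9)
for n in (60, 600, 60000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print("n = %6d: share of sixes = %.4f (1/6 = 0.1667)"
          % (n, rolls.count(6) / n))
# As the number of tosses grows, the observed share of sixes settles
# toward the probability 1/6.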
There are several alternative views of the probability concept.
Some argue, for instance, that repeated trials under similar conditions are an almost pointless abstraction in many contexts. Nevertheless, they grant that people persist in making and acting on
assessments like, "Her chances of promotion are about 50:50," or
"I'd give him a 60 percent chance of getting the contract." What
do these statements mean? The subjectivist interpretation suggests
that such attributions of probability signify a degree or intensity
of belief that conditions our responses, if any, to the events in
question. Quite remarkably, subjective probabilities conform to
the same mathematical laws governing probabilities viewed as frequencies. Subjective probabilities, in addition, can be viewed as
being responsive to new, objective information about an event or
process.
These interpretations reinforce each other. Rarely do we have
the opportunity to run controlled experiments in social or financial
situations. Nevertheless, there are several techniques of quantitative analysis, such as statistical regression procedures, that enable
us to consider processes and events under comparable situations.
Thus, we may abstract the effects of estimated total costs from
estimated completion time on construction cost overruns on power
or water projects if we can assemble data on dozens of projects in
which these variables have different relative values (see Chapter
3). We may even have success in linking financial events to probability density functions for failure rates of equipment (exponential
distribution), the likelihood of rare events (Poisson distribution),
the chance that errors of measurement will cumulate into significance (normal distribution), or that binary or binomial processes
or other probability densities will predominate. More generally,
immersion in the facts produces judgments about the relevant
bounds of events and expected values relevant to decision making.
Thus, frequency analysis can contribute to subjective probability
evaluations. The use of objective or historical data also limits the
influence of typical biases in processing risk information, such as
overoptimism, letting recent events dominate responses, and the
like.
As a general rule, objective and subjective probability assessments can be put to best use when a complex event or process is
broken into relatively independent constituent occurrences and
processes. Thus, until the final hour, the "probability of default"
may not be a very available notion, even to direct participants in
the investment process. The probabilities of construction cost overruns, losing an important buyer, and lower population growth,
however, are concepts that are more intuitively accessible. Decomposition of events into simpler occurrences imparts focus to
probability appraisals and elicitation.

Gathering Probability Information


The collection of probability information is a key step in risk
simulation. The availability and effectiveness of methods for eliciting subjective probabilities and analyzing objective data are perhaps less understood than they should be. This section considers
several such methods, their key features, and provides some
sources for further reading.
Subjective Assessment Methods
The general tenor of subjective probability elicitation is conveyed in David Hertz's classic discussion of financial risk in manufacturing, where analysts are advised to
probe and question each of the experts involved to find out, for example,
whether the estimated cost of production can be said to exactly equal a
certain value or whether, as is more likely, it should be estimated to lie
within a certain range of values. . . . The ranges are directly related to the
degree of confidence that the estimator has in the estimate. Thus certain
estimates may be known to be quite accurate. They would be represented
by the probability distributions stating, for instance, that there is only 1
chance in 10 that the actual value will be different from the best estimate
by more than 10%. Others may have as much as 100% ranges above and
below the best estimate. Thus, we treat the factor of selling price for the
finished product by asking executives who are responsible for the original
estimates these questions. . . . Given that $510 is the expected sales price,
what is the probability that the price will exceed $510? . . . Is there any
chance that the price will exceed $650? . . . How likely is it that the price
will drop below $475? Management must ask similar questions for all of
the other factors until they can construct a curve for each.. .. Often information on the degree of variation in factors is easy to obtain. For
instance, historical information on variations in the price of a commodity
is readily available. Similarly, management can estimate the variability of
sales from industry sales records.17
Subjective assessment methods attach quantitative tags to people's
perception of various risks and probabilistic relationships, where
"subjective" need not have the connotation of "capricious" but
can refer to opinions formed against the backdrop of special expertise and experience. Some standard methods distinguished in
the literature include juries of executive opinion, Delphi methods,
and probability encoding techniques based on interviews.
Jury of executive opinion. This simple, widely used method involves group decisions about the best estimate for a risky or uncertain item. It is essentially a committee approach where
participants meet "to resolve this thing once and for all." One
drawback is that face-to-face interaction can result in weights or
probabilities for events that depend on a person's role in an organization rather than, strictly speaking, their knowledge about
the process being considered.
Delphi method. This technique aims to elicit a consensus from
a group of experts about an uncertain event or development while
minimizing undesirable elements of group interaction. The basic
method involves circulating a questionnaire, summarizing expert
evaluations in an anonymous format, and repeating this process.18
Initial responses are summarized without identifying their source
and are circulated in a second round. The same group is asked
whether they are inclined to change their estimate. Estimates are
supposed to converge in a few rounds.
Encoding probabilities. Techniques for eliciting appraisals or
evaluation of the probabilities of events, called encoding probabilities, have been widely applied in risk analysis.19 The interview
is a one-on-one situation between someone with privileged knowledge about an element of risk and an individual skilled in eliciting
information without imparting his or her biases in the process. The
literature contains guidelines for introducing the basic topic and
tactics to avoid or minimize bias. The ultimate objective is to elicit
the subject's assessment of the cumulative probability distribution
of a risk factor.
The direct approach, of course, is to ask the subject to draw the
probability density function or to inquire about the chance that
the variable in question will be greater than or less than various
values. A simple approach, for example, is to focus on three items
of information: the lower bound for the risk factor deemed at all
within the realm of possibility, the upper bound, and the most
likely value. Then the probability distribution associated with this
risk factor can be approximated by a triangular probability density
function, such as that presented in Figure 2.2. Note the distinction
here between "most likely" and "expected." The most likely value
of a random variable is its mode or the most frequently occurring
value, while the mathematical expectation is its mean or expected
value. Only when the probability distribution is symmetric do the
mode and mean or expected value coincide.
Psychological research suggests people can have problems with
direct approaches to probability elicitation.20 Alternative procedures involve questions to which the subject can respond by choosing between simple alternatives or bets. Options can be adjusted
until the subject is indifferent between them. This indifference can
then be translated into a probability or value statement. A probability wheel, for example, facilitates these comparisons. This is a
disk with adjustable sectors of two different colors and a fixed
pointer in the center. When spun, the disk stops with the pointer
in one or the other color sector (usually blue or orange). The
Figure 2.2
Triangular Probability Distribution

relative size of the two sectors can be altered by the interviewer,
changing the probability that the pointer will be in one or the other
sector when the disk stops spinning. The subject is asked which
of two events he or she considers more likelythe event relating
to the uncertain quantity, for example, construction costs will be
50 percent greater than the base case estimate, or the event that
the pointer ends up, for example, in the orange sector. The amount
of orange is then varied until the subject judges the two events
equally likely. The probability of a cost overrun then can be read
off from calibrations on the back of the disk. While seemingly
artificial, this approach may be useful in prompting thought in the
early stages of an elicitation process.
Reference can also be made to fixed probability events such as
drawing a royal flush in poker or tossing ten heads of a coin in a
row.
Verbal encoding is another method of probability elicitation. It
ascribes verbal descriptors such as high, medium, and low to events
but the interpretation of such descriptors has been shown to vary considerably across individuals.
Developing Probability Assessments from
Objective Data
Complementary methods of characterizing the probabilities of
various risks rely on the analysis of data from comparable projects
or situations and historic data on the variation of key factors. These
techniques, which include the bootstrap, time series analysis, and
structural regression analysis, are technically challenging but,
again, are more accessible with new software and new computer
capabilities.
The bootstrap. A relatively new method for developing confidence intervals for estimates of distributions or parameters, the
bootstrap makes few prior assumptions about the probability distribution generating observed data and figures in important recent
applications.21 Bootstrap methods, for example, are used by statistical consultants to the Rogers Commission in analyzing data
from the space shuttle Challenger disaster and the question of
whether a relation between low temperature and O-ring failure
should have been seen as a risk before the launch.22 These methods
also have been applied to the characterization of probability distributions of peak electricity demand (see Chapter 4).
The idea is to sample repeatedly from the existing empirical
distribution of a random variable and to use these synthetic samples
in establishing confidence intervals for the underlying distribution
that generates the complete sample data in the first place. This
approach is particularly useful when an apparent functional relationship between variables exists, but it is not expected that the
variability of the dependent variable around the explanatory variables is normally distributed. An example is discussed in the following chapter vis-a-vis the simulation of total construction costs,
when interdependencies between cost categories or between costs
and time must be acknowledged.
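A minimal sketch of the resampling idea, in Python, may make it concrete. The data here are invented cost overrun percentages, not drawn from any of the studies cited; the point is only the mechanics of the percentile bootstrap.

    import random

    # Invented cost overruns (percent) on eight comparable projects
    data = [12.0, 35.0, 5.0, 60.0, 18.0, 42.0, 8.0, 25.0]

    def bootstrap_mean_ci(sample, reps=5000, alpha=0.05):
        """Percentile-bootstrap confidence interval for the mean:
        resample with replacement, collect the resampled means, and
        read off the alpha/2 and 1 - alpha/2 percentiles."""
        n = len(sample)
        means = []
        for _ in range(reps):
            resample = [random.choice(sample) for _ in range(n)]
            means.append(sum(resample) / n)
        means.sort()
        return (means[int(reps * alpha / 2)],
                means[int(reps * (1 - alpha / 2)) - 1])

    print(bootstrap_mean_ci(data))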
Time series analysis. Early discussions of financial risk simulation
were concerned primarily with variables representing discrete or,
at least, independent events. However, financial risks can be related to ongoing processes, such as population growth and inflation, and the existence of time interdependencies can affect
outcomes, for example, when one is drawing down limited reserves. Thus, instead of simply considering the probability distribution for a variable, it is desirable to be able to characterize the
band of variation of a process over time and time interdependencies
between stochastic components of this process. Time series analysis
provides important techniques for accomplishing this type of characterization.
A time series is simply a set of observations on a variable taken
at various times. Examples include daily closing prices of a common
stock over a period of time, hourly blood pressures of an individual,
or, as illustrated in Figure 2.3, average annual water consumption
in gallons per capita per day (GCD) over seven decades.
Historically, researchers have been intrigued with the possibility
of identifying periodicities in such series. Early efforts applied
methods such as the periodogram or regression fitting of sinusoidal
functions to this type of data and led to development of a branch
of mathematics called spectral analysis. Often, however, the variation at hand was really almost periodic or pseudoperiodic and
was not adequately described as the result of a fixed oscillation
and period with superimposition of random effects.
In 1976, George E. P. Box and G. M. Jenkins wrote a groundbreaking book titled Time Series Analysis: Forecasting and Control.23 This showed how, given certain stability conditions relating
to the mean and variances of the variable being analyzed: (1)
deterministic and stochastic components of a time series could be
identified, and (2) stochastic or random components could be further decomposed into autoregressive and moving average processes, or some combination of these two processes.
An example shows the superiority of this method. Consider, for
instance, the water consumption data in gallons per capita per day
in Figure 2.3, which are derived from a western metropolitan water
system over the period 1918-1989. Note the wavelike pattern and

Figure 2.3
Denver, Colorado, GCD

its irregularity. This wavelike pattern suggests autocorrelation or
serial correlation and can be confirmed by various diagnostic tests.
If GCD is above trend in year j, it is likely that the year j+1 GCD
also is above trend, and vice versa for below-trend observations.
Construction of diagnostic graphs called the autocorrelation and
partial autocorrelation functions suggests a simple model:

    e_t = b1 e_(t-1) + a_t                                    (2.1)

where e_t is the GCD expressed as a deviation from the trend line and a_t is the residual.
This model appears to reduce the detrended series to white noise
residuals, a concept explained below.
Accordingly, there are two components: the GCD trend, and
what standard regression analysis would call the error term or
residuals e_t. These residuals e_t can be analyzed further into a first
order autoregressive process, described by equation 2.1 above,
and white noise or Gaussian residuals.
Note this is a peculiar type of explanation in the sense that it
tells how to reproduce the data series and, by implication, how to
generate representative future terms. With respect to causes, however, one must appeal to technological and attitudinal change leading to more water-using appliances in households, particularly
since World War II, and other systematic factors, where this type
of explanation motivates the general trend in the data.
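Such a fitted model is easy to mimic in code. The following Python sketch generates synthetic future GCD paths as a linear trend plus a first order autoregressive deviation driven by white noise; the trend and coefficient values are invented for illustration, where an application would estimate them from the historical record by the diagnostics just described.

    import random

    # Invented parameter values, standing in for estimated ones
    trend_start, trend_slope = 150.0, 0.8   # GCD trend line
    phi, sigma = 0.6, 8.0                   # AR(1) coefficient, noise s.d.

    def simulate_gcd(years):
        """One synthetic GCD path: linear trend plus a first order
        autoregressive deviation driven by white noise (equation 2.1)."""
        path, e = [], 0.0
        for t in range(years):
            e = phi * e + random.gauss(0.0, sigma)
            path.append(trend_start + trend_slope * t + e)
        return path

    print([round(g, 1) for g in simulate_gcd(10)])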
Many processes lend themselves to this type of analysis. Time
series may be decomposed into deterministic and stochastic components with weighted averages of white noise components and/
or correlations between the current value of the variable and its
previous values in the series (autocorrelation). Diagnostic tests are
available to determine, at least at a gross level of detail, which
characterizations of stochastic structure are most plausible.24 When
combined with a priori information about contributing processes
and factors, time series models can help arrive at the relevant band
and likely pattern of variability in a risk source.25 Standard time
series modeling approaches emphasize stationarity conditions, or
constant mean and variance, which often can be enforced by first
differencing data or through logarithmic transformations of the
time series variable.
White noise refers to a purely random time series exhibiting no
systematic time interdependencies or lag effects. Note that white
noise processes have a special significance in financial studies since
researchers believe this type of randomness characterizes the
movement of stock prices over time.26 A white noise process is a
random variable determined by a normal or Gaussian distribution
having a zero mean value. In time period 1, one value of this
random variable is sampled. In time period 2, another value of
the random variable is sampled on a completely independent basis,
that is, the value of the random variable in period 1 does not affect
the value sampled for period 2. While recent studies reveal some
pattern in the movement of stock prices,27 for anyone but the
largest investors, transaction fees make the slight correlations identified over time difficult to exploit.
Structural models. The deterministic components in time series
may be described by equations relating a dependent variable to
independent or explanatory variables. These are called structural
equations in econometrics, a leading example being consumer demand models. Thus, the amount of a commodity a consumer is
willing to buy at various prices typically is presented as a linear
system in which price (P), income (Y), and other factors (Z) influence the quantity (Q) a consumer would like to purchase:
Q = b0 + b1P + b2Y + b3Z

Here b0, b1, b2, and b3 are coefficients usually estimated by multiple
regression procedures, which assume a particularly simple characterization of the affiliated random component of the observations
on the consumer's purchases. Often such models are developed
on cross-sectional data, where observations on households for the
same time period are available.
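A hedged sketch of how such a structural equation might be estimated follows, using ordinary least squares via numpy on fabricated cross-sectional data; a real application would, of course, use actual household observations.

    import numpy as np

    # Fabricated cross-sectional observations on six households
    P = np.array([1.0, 1.2, 0.9, 1.5, 1.1, 1.3])        # price
    Y = np.array([40.0, 55.0, 38.0, 70.0, 52.0, 61.0])  # income
    Z = np.array([2.0, 3.0, 2.0, 4.0, 3.0, 3.0])        # other factors
    Q = np.array([10.2, 9.1, 11.0, 8.0, 9.5, 8.8])      # quantity bought

    # Design matrix with an intercept column: Q = b0 + b1*P + b2*Y + b3*Z
    X = np.column_stack([np.ones_like(P), P, Y, Z])
    coeffs, _, _, _ = np.linalg.lstsq(X, Q, rcond=None)
    b0, b1, b2, b3 = coeffs
    print(b0, b1, b2, b3)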
CONDUCTING THE SIMULATION
Ultimately, all risk factors are represented, either directly or in
more complex formulation, as random variables characterized by
probability distributions. Given this information, risk simulation
involves sampling the possible values of these variables based on
the representation of their cumulative distributions.
Sampling Strategies

To illustrate the sampling procedure, assume we consult a random number table or rely on a computer program to produce a
random number between 0 and 1 called n*. This random number
n* will guide our selection of the value of some risk factor for one
run of the cash flow model and contribute to development of the
performance index for this investment. Assume that Figure 2.4
represents the cumulative distribution of the risk factor, that is,
the probability that the risk factor is at most the value of the x
axis. Then, we associate a value C* with n* by reading from the
probability axis to the cumulative distribution F(x) and down to
the x axis. Repeating this sampling process with many random
numbers assures that the repetitions of the cash flow model are
computed with a random sample of this risk factor, determined by
this particular cumulative distribution.
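The mechanics of this inverse-transform sampling can be sketched in a few lines of Python. Here the cumulative distribution is represented as a table of (value, cumulative probability) points with linear interpolation between them, a form that accommodates elicited curves as well as fitted ones; the numbers are purely illustrative.

    import random

    # Tabulated cumulative distribution F(x): (value, cumulative prob)
    cdf_points = [(100, 0.0), (110, 0.25), (120, 0.60),
                  (135, 0.90), (150, 1.0)]

    def sample_from_cdf(points):
        """Inverse-transform sampling: draw n* uniformly on (0, 1), then
        read back through F(x) to the x axis, interpolating linearly."""
        n_star = random.random()
        for (x0, p0), (x1, p1) in zip(points, points[1:]):
            if n_star <= p1:
                return x0 + (x1 - x0) * (n_star - p0) / (p1 - p0)
        return points[-1][0]

    draws = [sample_from_cdf(cdf_points) for _ in range(10000)]
    print(round(min(draws), 1), round(max(draws), 1))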
The question of how many samplings ought to be made depends
on the mathematical complexity of what is being sampled, the
sampling procedure, and the penalty function established to assess
the consequences of error in the risk profile. A crude test for the
adequacy of the sample size is to vary the number of samplings by
one and two orders of magnitude to determine whether there are
noticeable differences in the resulting shape of the risk profile.28
The risk profiles in the examples in this book seemed to achieve
stability in the 10,000 iteration range.
Computer programs, it should be noted, produce pseudo-random numbers, usually a repeating series with some relatively long
recurrence number. The Lotus 1-2-3 @RAND function has a recurrence cycle of over 1 million, which makes it a relatively good random number generator for a microcomputer.29 For advanced
applications, short computer programs exist to purge undesirable
features of stock pseudo-random number generators.30
Management Responses
Some of the most difficult questions in simulation involve policy
or management responses over longer time periods. Thus, random
factors leading to actual or prospective deficits in income can trigger a search process on the part of management. The analog of
Figure 2.4
Cumulative Probability Distribution

this in the computer simulation is a goal-seeking routine programmed to go into motion when predefined thresholds are crossed
in a cash flow model. Thus, if the debt coverage ratio falls below
a key level, a rate increase can be triggered whose time lag, extent,
and effect would be determined by assumptions about regulatory
and demand constraints. Often, multiple possibilities can be represented by an event tree,31 such as that presented in Figure 2.5.
Figure 2.5
Event Tree for Rate Increase

Here, possibilities associated with application for a rate increase
by a regulated utility are portrayed and indicate branches of the
event tree leading to or supporting financial problems that require
further action to resolve.
It would be helpful if unambiguous bounds to risk could be
established by specifying response modes in certain ways. Thus,
good management and honest employees are usually assumed in
risk simulation. Matters become more complex, however, when it
becomes necessary to specify how quickly responses will occur.
Usually, there is debate within an organization about whether a
change is transitory or whether it signals the beginning of a new
trend. Another question is whether the basis decision makers use
to establish policies will change in the future. The technical basis
of specifying management responses in the simulation is a series
of if-then or conditional statements. Examples are presented in
Chapter 6.
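As a stylized illustration of such conditional logic, the Python fragment below tests a debt coverage trigger each simulated year and books a rate increase that takes effect after a regulatory lag. All thresholds and parameter values are hypothetical.

    # Stylized management-response logic for one simulated year.
    # All thresholds and parameter values are hypothetical.
    MIN_COVERAGE = 1.25     # debt coverage ratio that triggers action
    RATE_INCREASE = 0.08    # size of the rate increase sought
    REGULATORY_LAG = 2      # years before an approved increase applies

    def step_year(net_income, debt_service, rates, pending):
        """Apply any rate increases that have cleared their lag, then
        test the coverage trigger and, if tripped, file a new request."""
        still_pending = []
        for lag, pct in pending:
            if lag <= 0:
                rates *= 1.0 + pct              # increase takes effect
            else:
                still_pending.append((lag - 1, pct))
        if net_income / debt_service < MIN_COVERAGE:
            still_pending.append((REGULATORY_LAG, RATE_INCREASE))
        return rates, still_pending

    rates, pending = 1.0, []
    rates, pending = step_year(net_income=110.0, debt_service=100.0,
                               rates=rates, pending=pending)
    print(rates, pending)    # coverage of 1.10 trips the trigger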
THE RISK PROFILE
The risk profile measures investment performance by incorporating probabilistic information and appraisals. Indeed, this curve,
which follows from assumptions and imputations in the risk simulation, is a probability distribution. Hence, discrete and continuous versions exist, and it is convenient sometimes to present the
risk profile as a cumulative distribution rather than as a probability
density function.
Figure 1.1 shows a continuous probability density function. The
horizontal axis charts the value of the random variable selected as
the performance measure. For continuous distributions, the height
of the risk profile at a particular rate of return measures the relative
frequency of a rate of return in a small interval around a given
point rather than the absolute numeric probability of the random
variable attaining precisely this particular value. In other words,
the probability distribution for a continuous random variable indicates the relative likelihood of attaining a particular value or
something close to this value. The important thing, however, is
the area under the curve or the probability density. Thus, about
75 percent of the area under the curve occurs to the right of the
10 percent point on the horizontal axis. Based on this relationship,
therefore, decision makers might be advised there is a three in
four chance, or a probability of about 0.75, that the internal rate
of return will exceed 10 percent.
The default risk in any particular year is determined by the chance that some variable, such as the ratio of net income to debt service, falls short of a preassigned number. Note that net income in
any year is the sum of a number of separate components, many
of which can be relatively independent from each other. Accordingly, we might appeal to the Central Limit Theorem of statistics
and approximate the probability distribution of net income by a
normal curve (see Appendix). Debt service, once a bond is issued,
is predetermined or nonstochastic, except when it is tied to a variable interest rate. Accordingly, the probability distribution of the
debt coverage ratio of net income to debt service also usually may
be approximated by a normal distribution. Thus, the probability
of default can be visualized as an area in the lower tail of a bell-shaped distribution, below the minimum debt coverage promised in the bond covenant.
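Under this normal approximation the default probability reduces to a tail area, as the following Python sketch shows. The mean, standard deviation, and covenant minimum are invented for illustration.

    import math

    def normal_cdf(x, mu, sigma):
        """Probability that a normal(mu, sigma) variable falls below x."""
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Hypothetical values: expected coverage ratio 1.6, standard
    # deviation 0.25, covenant minimum coverage 1.25.
    p_default = normal_cdf(1.25, mu=1.6, sigma=0.25)
    print(round(p_default, 3))   # chance the ratio falls below 1.25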
Risk Preferences and Risk Profiles
Risk profiles support project comparisons, subject to the risk
preferences of decision makers. Leaving aside love of gambling
per se (relish in the thrill of staking it all and having one's fate determined by the wheel of chance), we can rank people's willingness to risk losing various amounts in return for the chance to
gain other amounts.
This connotation is illustrated with a simple example. Suppose
a credible party offers you a chance to win $1,000 if a coin, predetermined to be fair, comes up heads on a toss. If the flip comes
up tails, you pay a given amount. One possibility is that you flatly
refuse to participate. Or you might be willing to risk something to
have the chance of winning $1,000. Individuals, accordingly, can
be ranked as risk averse or risk seeking and along a gradient
between these opposites. Similarly, decision makers, responsible
for taking one or another course of action, can be risk averse or
willing to accept higher risk in order to have potential access to
higher earnings.
The classic comparison is with respect to the variances of continuous risk profiles having the same expected values. Suppose
Figure 2.6 shows the probabilities of rates of return on projects A
and B. Both greater gains and losses are possible with project B,
graphed with the crosses in Figure 2.6, than with project A, the
risk profile indicated by the boxes in the figure. Risk-averse individuals would select A. On the other hand, some investors would
be willing to accept a higher probability of low returns for an
approximately equal chance to earn higher returns.
There also are situations in which one risk profile seems superior
more or less independently of risk preferences. Figures 2.7a and
2.7b illustrate a relationship between investment projects known
as first order stochastic dominance with risk profiles and their associated cumulative distributions. If project A is delineated by the
boxes and project B is indicated by the crosses, it is clear that
project A dominates project B in an important sense. This relationship is perhaps more compelling when we examine the cumulative distributions of the rate of return for these two projects,
which, in this case, show the chance of attaining a given rate of
return or less. The cumulative probability distribution associated
with project B, indicated by the crosses, is always above or to the
left of the cumulative distribution of project A for any given rate
of return. Pick any probability on the vertical axis in Figure 2.7b,
say, 0.10. Then, following this probability level over to the two
cumulative distributions, one can see that the chances of attaining
a higher return are always greater with A than with B. Clearly,
project A is superior, under a range of risk preferences. The analogue for default risk distributions, of course, is simply that one
course of action produces a lower default probability than another.
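Testing first order stochastic dominance from simulation output is mechanical, as the following Python sketch suggests: project A dominates project B if A's empirical cumulative distribution never lies above B's. The rate-of-return samples are fabricated.

    def ecdf(sample, x):
        """Empirical cumulative distribution: share of sample at or below x."""
        return sum(1 for v in sample if v <= x) / len(sample)

    def dominates(sample_a, sample_b, grid):
        """First order stochastic dominance of A over B: A's cumulative
        distribution never lies above B's at any grid point."""
        return all(ecdf(sample_a, x) <= ecdf(sample_b, x) for x in grid)

    # Fabricated rate-of-return samples (percent)
    a = [8, 10, 12, 14, 15, 18, 20]
    b = [4, 6, 9, 11, 12, 14, 16]
    print(dominates(a, b, grid=range(0, 25)))   # True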
Figure 2.6
Comparison of Risk Profiles

SUMMARY AND CONCLUSIONS


Risk simulation recognizes that precise forecasts of variables are
impossible but holds that actual values may conform to specifiable
stochastic processes. A utility with cash flow problems may raise
rates and charges, borrow or reschedule debt, declare a moratorium on bond principal repayment, default on interest or premium
Figure 2.7a
First Order Stochastic Dominance (relative probability)

payments, seek the protection of Chapter 11, or liquidate assets.
Evaluation of these options is a matter of informed judgment and
analysis, which can be supported by computer simulations made
more accessible by recent advances in microcomputer technology
and computer software.
Ultimately, all risk factors are represented, either directly or in
more complex formulation, as random variables characterized by
probability distributions. Given this probability information, risk
simulation involves sampling the possible values of random variables based on the representation of their cumulative distributions.
The product of a risk simulation is a picture of the likely variation
of some target variable, termed here the investment performance
measure. Comparison of investment options is most straightforward when one risk profile exhibits stochastic dominance over
another. Otherwise the choice of investments cannot be decoupled
from the risk preferences of decision makers.
In general, there are two sources of information about probable
variation in risk factors: expert opinion or evaluation, and statistical and historical, comparative analysis.

Figure 2.7b
First Order Stochastic Dominance (cumulative probability)

Juries of executive opinion, Delphi methods, and formal probability elicitation techniques
can be helpful in polling expert opinion.
One useful distinction is between essentially "one-shot" variables, like construction costs, and multiple period variables such as
labor costs, interest rates, or population growth. Single period
variables may be characterizable by a simple probability distribution. Frequently, discussions of risk simulation stop with identification of this simple type of stochastic process. More complex,
time-interrelated processes, however, can characterize the time
path of multiple period variables. General features of these stochastic processes can be analyzed by time series methods.
There are styles of thought regarding risk, in addition to risk
preferences, that make risk communication particularly important.
Executives or top administrators may favor decisive, seemingly
deterministic modes of thought. Numerous studies, furthermore,
show the prevalence of bias in reasoning about risky situations,
even with highly trained subjects.32 Simplicity is probably the key
to effective communication. The results of an analysis must be
stated in a sentence or two, something like, "The chance of a 50 percent cost overrun with the nuclear baseload plant is about four times greater than with a conventional coal-burning facility," or, "The rate of return with option A is about twice as likely to be above 15 percent as with option B." Simple graphics also can
be useful. If this information touches a nerve, assumptions and
the structure of the risk analysis model can be discussed.
In many respects, the process is the product. A risk simulation,
in other words, draws together diverse strands of information about
costs, physical contingencies, organizational and financial responses, and socioeconomic forecasts. In doing this, management,
the investment community, and regulators increase their overall
awareness of the context of a project or a utility's financial position.


NOTES

1. Pamela K. Coats and Delton L. Chesser, "Coping with Business
Risk through Probabilistic Financial Statements," Simulation 38 (April
1982): 111-121.
2. L. Y. Pouliquen, Risk Analysis in Project Appraisal (Baltimore:
Johns Hopkins Press for the International Bank for Reconstruction and
Development, 1970).
3. A. M. Economos, "A Financial Simulation for Risk Analysis of a
Proposed Subsidiary," Management Science 15 (1968): 75-82.
4. P. D. Newendorp and P. J. Root, "Risk Analysis in Drilling Investment Decisions," Journal of Petroleum Technology (June 1968): 579-585.
5. L. Kryzanowski, P. Lustig, and B. Schwab, "Monte Carlo Simulation and Capital Expenditure Decisions: A Case Study," Engineering
Economist 18 (1972): 31-47.
6. D. A. Cameron, "Risk Analysis and Investment Appraisal in Marketing," Long Range Planning (December 1972): 43-47.
7. Charles E. Clark, Thomas W. Keelin, and Robert D. Shur, User's
Guide to the Over/Under Capacity Planning Model, EA-1117, Final Report (Palo Alto, Calif.: prepared for the Electric Power Research Institute;
October 1979); Martin L. Baughman and D. P. Kamat, Assessment of the
Effect of Uncertainty on the Adequacy of the Electric Utility Industry's
Expansion Plans, 1983-1990, EA-1446, Interim Report (Palo Alto, Calif.:
prepared for the Electric Power Research Institute, July 1980). See also
Ian S. Jones, "The Application of Risk Analysis to the Appraisal of

Concepts and Procedures

43

Optional Investment in the Electricity Supply Industry," Applied Economics 3 (May 1986): 509-528.
8. Nigel Evans, "The Sizewell Decision: A Sensitivity Analysis," Energy Economics 6 (January 1984): 15-20; and Jones, "The Application
of Risk Analysis to the Appraisal of Optional Investment in the Electricity
Supply Industry."
9. E. J. Henley and H. Kumamoto, Reliability Engineering and Risk
Assessment (Englewood Cliffs, N.J.: Prentice-Hall, 1981), discuss these
earlier engineering applications.
10. For an interesting selection of articles on this topic see Paul R.
Kleindorfer and Howard C. Kunreuther (eds.), Insuring and Managing Hazardous Risks: From Seveso to Bhopal and Beyond (New York: Springer-Verlag, 1987).
11. Hence, the other name for risk simulation: Monte Carlo simulation
or analysis.
12. See Charles F. Phillips, Jr., The Regulation of Public Utilities: Theory and Practice (Arlington, VA: Public Utilities Reports, Inc., 1984).
13. See Standard & Poor's Corporation, Municipal Finance Criteria
(New York: Standard & Poor's, 1989). R. Charles Moyer and Shomir Sil
list factors affecting bond ratings as follows: "The level of long-term debt
relative to the firm's equity . . . the firm's liquidity, including an analysis of accounts receivable, inventory, and short-term liabilities . . . the size and economic significance of the company and the industry in which it operates . . . [and] the priority of the specific debt issue with respect to bankruptcy or liquidation proceedings and the overall protective provisions of the issue." "Is There an Optimal Utility Bond Rating?" Public
Utilities Fortnightly, May 12, 1989, pp. 9-15.
14. Frederic H. Murphy and Allen L. Soyster, Economic Behavior of
Electric Utilities (Englewood Cliffs, N.J.: Prentice-Hall, 1983), provide a
comprehensive survey of public utility commission rate standards as of
the early 1980s in Tables 2 and 3.
15. The internal rate of return r satisfies the equation,

    C = a_1/(1 + r) + a_2/(1 + r)^2 + . . . + a_n/(1 + r)^n

where the a_i, i = 1, . . ., n, are expected annual payback amounts for an
investment C. The rate of return r is equivalent to an interest rate at
which C dollars could be invested to produce a stream of discounted
earnings equivalent to that actually anticipated for the investment in question.
16. Here, as elsewhere in the discussion of this book, the phrase "min-
imum financial risks" will be subject to the condition that supply and
demand are in balance. Thus, obviously, absolutely minimum financial
risks might be attained by liquidating utility investments and buying U.S.
Treasury securities.
17. David B. Hertz, "Risk Analysis in Capital Investment," Harvard
Business Review (September-October 1979): 174-175.
18. See H. A. Linstone and M. Turoff, The Delphi Method: Techniques
and Applications (Reading, MA: Addison-Wesley, 1975).
19. See M. W. Merkhofer, "Quantifying Judgmental Uncertainty:
Methodology, Experiences, and Insights," IEEE (Institute of Electrical
and Electronic Engineers) Transactions on Systems, Man, and Cybernetics
SMC-17, no. 5 (September/October 1987): 741-752.
20. See Detlof von Winterfeldt and Ward Edwards, "Cognitive Illusions," Decision Analysis and Behavioral Research (New York: Cambridge University Press, 1986).
21. An early discussion of the approach is found in B. Efron, "Bootstrap Methods: Another Look at the Jackknife," Annals of Statistics 7 (January 1979): 1-26. See also B. Efron, "Nonparametric Standard
Errors and Confidence Intervals," Canadian Journal of Statistics 9 (1981):
139-172; and B. Efron and G. Gong, "A Leisurely Look at the Bootstrap,
the Jackknife, and Cross-Validation," The American Statistician 37 (February 1983): 36-48.
22. Siddhartha R. Dalal, Edward B. Fowlkes, and Bruce Hoadley, "Risk Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure," Journal of the American Statistical Association 84 (December 1989): 945-957.
23. George E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control (San Francisco: Holden-Day, 1976).
24. John M. Gottman, Time Series Analysis: A Comprehensive Introduction for Social Scientists (New York: Cambridge University Press,
1981), presents a nice discussion of these graphic patterns.
25. See, for example, C.W.J. Granger, Forecasting in Business and
Economics (New York: Academic Press, 1980); Richard McCleary and
Richard A. Hay, Jr., Applied Time Series Analysis for the Social Sciences
(Beverly Hills, CA: Sage Publications, 1980); and William W. S. Wei,
Time Series Analysis: Univariate and Multivariate Methods (Redwood
City, CA: Addison-Wesley, 1990).
26. Burton G. Malkiel, A Random Walk Down Wall Street, 4th ed.
(New York: W. W. Norton, 1985), is still an entertaining and informative
introduction to the subject.
27. See Stephen Taylor, Modeling Financial Time Series (Chichester,
England: John Wiley & Sons, 1986).

Concepts and Procedures

45

28. This test is applied, for example, by Peter Pflaumer, "Confidence
Intervals for Population Projections Based on Monte Carlo Methods,"
International Journal of Forecasting 4 (1988): 135-142.
29. See D.T.R. Modianos, C. Scott, and L. W. Cornwall, "Testing
Intrinsic Random-Number Generators," Byte (January 1987): 175-178.
30. See William H. Press, Brian P. Flannery, Saul A. Teukolsky, and
William T. Vetterling, Numerical Recipes: The Art of Scientific Computing
(New York: Cambridge University Press, 1986), p. 197.
31. Event trees are a standby of risk assessment. See Risk Assessment:
Report of a Royal Society Study Group (London: The Royal Society,
January 1983).
32. See, for example, P. Slovic, "Perception of Risk," Science 236
(1987): 280-285.


Financial Risks in the Construction Period
Financial risks during construction have proved significant for nuclear power plants (see Table 3.1). Cost overruns, scheduling delays, and unanticipated changes in the disbursement of
construction payments have triggered multiple and large rate increases, utility reorganization, and abandonment of nuclear projects in some instances. Cost overruns for central, coal-fired power
stations or large, capital-intensive waterworks seem less of a problem. Their technology is "tried and true," and construction techniques are well understood. Capital costs associated with all such
facilities, however, are large enough to recommend attention to
risk management tactics, such as the sizing of construction contingency allowances.
Simulation methods to assess cost overrun potential are a natural
extension of existing practices in cost estimation and project scheduling. Cost estimates get fine-tuned as design details come into
focus, and the anticipated range or interval of costs tends to narrow
through this process. Such range estimates of construction costs
can be deployed to indicate the risk of cost overruns and to support
the sizing of contingency funds with given confidence levels. The
scheduling of large projects has long relied on tools such as Gantt
diagrams, critical path models (CPMs), and program evaluation
and review technique (PERT) analysis. As a review of project
management computer programs illustrates,1 simulation with these
Table 3.1
Cost Estimates and Realized Nuclear Power Plant Costs, 1966-1972
(nominal dollars per kilowatt of generation capacity)

Year            Average      Average      Average
Construction    Estimated    Final        Percent
Started         Cost         Cost         Overrun

1966               147          299          103
1967               150          352          135
1968               155          722          365
1969               179          890          397
1970               228        1,331          484
1971               258        1,313          409
1972               418        2,258          440
network models is increasingly important in appraising the likely
variability of completion times and expenditure patterns.
This chapter: (1) surveys general and qualitative factors contributing to cost overruns in capital construction projects; (2) considers the literature on cost overruns, specifically with regard to
water and power projects; and (3) discusses risk simulation in
assessing the likelihood of cost overruns, scheduling delays, and
the characterization of the pattern of construction expenditures.
Seven general factors contributing to cost escalation on capital
construction projects are identified in the next section. A review
of construction cost literature pertaining to water and power projects indicates the importance of technological factors, as well as
competent management, in accounting for construction cost performance.
Following that, the discussion of quantitative technique considers risk simulation of construction cost. A simple example is developed to illustrate the tendency of the total cost distribution to
be approximated by a normal probability distribution, even though
component cost distributions are not characterized by normal distributions. This result, which depends in part on the way construction costs are classified, is useful in sizing contingency allowances.
An interesting finding supported by these exercises is that contingency funds designed to cover each construction cost component
at a given confidence level would sum to a total contingency fund
that would be larger than needed to cover total construction costs
at that same confidence level. Additional topics considered in the
discussion of quantitative techniques in this chapter include simulation of the construction schedule and expenditure curves and
the problem of stochastically interdependent costs and construction
schedules.
A primary objective of this chapter is to show that generating
construction cost risk estimates is straightforward, at least as a first
approximation. Methods in this chapter are applied, with other
techniques identified in the following two chapters, to several integrated simulation applications in Chapter 6.
FACTORS ASSOCIATED WITH CONSTRUCTION
COST OVERRUNS
In broad terms, several factors are linked with cost overruns in
capital construction projects.2 These include:

1. the stage of the product cycle at which the cost estimate is developed
2. the type of technology
3. the size and complexity of a project
4. the competence of project management
5. regulatory or political considerations
6. contracting arrangements
7. the volatility of prices and other economic variables

The Stage of the Product Cycle at Which the Cost Estimate Is Developed
There is a classification of construction cost estimates based on
the stage of the product cycle, which suggests an avoidable risk: making irreversible decisions based on initial or early cost estimates. Initially, conceptual design cost estimates can be based on
analogous or comparative information about similar units (e.g.,
power plants, water facilities). Later, preliminary or engineering
cost estimates are prepared as major design decisions are resolved
and preliminary site plans and system flow diagrams become available. This preliminary estimate need not imply a complete scope
of work, although it is often useful to prepare estimates of contingency allowances at this point. Finally, there is the definitive,
baseline, or official cost estimate, generally developed to coincide
with the start of construction activity on site. This official estimate
"becomes the basis for evaluating subsequent cost performance by management and regulatory agencies," even though engineering design work may not be fully complete at that point.3
The Type of Technology
Novel technology may be associated with flagrant overruns in
capital construction projects. Thus, attempts to fast-track new
technologies on a large and previously untested scale have led to
memorable cost escalation in the San Francisco subway system
(BART), the Eurotunnel under the English Channel, the transAlaskan pipeline, and nuclear power plants. By the same token,
routine engineering in fossil fuel power plant construction or in
water facilities may be undertaken with a higher expectation that
completion can be achieved at or under estimated cost and scheduled time.
The Size and Complexity of a Project
Along a somewhat different line of comparison, there is a relationship between cost overruns and the size and complexity of
projects. Thus, a classic parametric analysis developed by James
F. Tucker for 107 civil works projects, 39 water resource projects,
39 highway projects, and 29 building projects states,
R = .0233L - .0092(T - 1940) + .0019C - .0066t
where R = cost growth, L = project length in years, T = calendar
year of estimate, C = estimated project cost, t = fraction of
project length completed at the time of the cost estimate.4 This
indicates that projects with higher initial cost estimates and lengthy
construction periods have more cost growth and that cost estimates
made later in the project cycle are more accurate. Similar results
are produced by other, more recent studies.5
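Read mechanically, the Tucker equation can be evaluated for a hypothetical project to see the direction and rough size of each effect. The following Python sketch uses invented values, and the units of C simply follow the original study as quoted above.

    def tucker_cost_growth(L, T, C, t):
        """Cost growth R from the parametric fit quoted above:
        L = project length (years), T = calendar year of the estimate,
        C = estimated project cost, t = fraction complete at estimate."""
        return 0.0233 * L - 0.0092 * (T - 1940) + 0.0019 * C - 0.0066 * t

    # Hypothetical six-year project costing 80 (million), estimated
    # in 1975 when 10 percent complete.
    print(round(tucker_cost_growth(L=6, T=1975, C=80, t=0.10), 3))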
The Competence of Project Management
Management and organization problems are widely cited contributors to cost overruns. An investigatory report about the trans-Alaska pipeline, for example, a project whose costs escalated from
$900 million to final costs in 1977 of $7.7 billion, states that,
the project was virtually run by committees; it was structured with vertical
and horizontal duplication of supervision and decision making, cumbersome decision chains, unclear lines of authority, and fragmentation of
responsibility. Compounding this were significant communication, coordination, and liaison problems between project groups. The result of this
duplicative management structure was paralysis of the project management decision making process.6
Regulatory or Political Considerations
Regulatory or political considerations became more important with enactment of the National Environmental Policy Act (NEPA) of 1969 and related legislation. Local interests object to
the plume of smoke from power plants or potential radioactivity.
New water storage sites are increasingly scarce as real estate development locks up the countryside. Regulatory conflicts and bad
initial planning often lead to numerous change orders that cause
"ripple effects" throughout the timing and cost of activities in a
power or water project.
Contracting Arrangements
Inefficiencies can be associated with contracting arrangements.
Lowest-bidder rules applying to contracts let by public agencies
have been known to let inexperienced firms in the door, based on
"lowballing" the bid. Such companies later may have to be replaced in costly, time-consuming recontracting.
The Volatility of Prices and Other Economic Variables


Finally, the volatility of economic variables, such as prices and
interest rates, can be a significant factor in large capital construction projects with a construction period of several years.7

THE RECORD OF WATER AND POWER INVESTMENTS
Studies of cost performance on water and power projects generally are consistent with the preceding discussion. Water and
power investments using conventional technology are likely to be
associated with small or negative cost overruns, while fast-tracking
technical innovation in the nuclear field has definite hazards.
Studies of water resource development show improvements in
construction cost control in federal agencies after World War II.
Studies by Edward G. Altourney,8 Maynard M. Hufschmidt and Jacques Gerin,9 and Robert H. Haveman10 indicate progress by
the U.S. Army Corps of Engineers and the U.S. Bureau of Reclamation. The Corps of Engineers' actual to estimated (A/E) cost
ratio improved from 2.24, as of a 1951 study of 182 projects, to
about 1.0 in a 1964 study considering 68 subsequent projects. The
Tennessee Valley Authority (TVA) maintained good cost control
from the beginning. Over the period 1933 to 1967, the TVA A/E
cost ratio was .947 with approximately one-third of its projects
showing overruns. In addition to good management practices, this
is probably attributable to the relatively familiar technology embodied in these projects.
Glenn J. Davidson, Thomas F. Sheehan, and Richard G. Patrick
note that fossil fuel power plants usually finished on time and within
budget over the period from the late 1940s to the early 1960s.
These plants had unit sizes ranging from 75 megawatts (MW) to
400 MW and, generally, were designed by either the owner or an
architect/engineer and were built under contract to the owner. In
most cases, it has been noted, "the design was almost complete before construction started."11
Relying on conventional technology can overcome some adverse
factors, such as lengthening regulatory delay. An EPRI-sponsored
study of construction lead times notes that coal plant construction
has been able to adjust to
a fairly continuous stream of regulations aimed primarily at decreasing
the environmental impact of plant emissions. . . . They lengthen the licensing period significantly by imposing stricter requirements for evaluating
the environmental impacts of the plant before it begins construction.
Second, the environmental regulations have required an increased level
of air pollution control . . . [such as] electrostatic precipitators . . . and flue gas desulfurization devices (scrubbers).12
The importance of technological factors in nuclear power is
underlined by cases in which cost overruns appear to be associated
with (1) larger, more complex plants; (2) new or frontier technology; and (3) an evolving regulatory climate. Table 3.1 summarizes an appalling cost performance early in the period during
which a move to larger plants was made. The average estimated
and final costs in the table are in nominal dollars and so partly
reflect the increasing pace of inflation in the early 1970s. Nevertheless, an important influence is the fact that larger capacity plants
in this period required designs, containment vessels, and foundation systems that challenged the limits of structural engineering
knowledge. In addition, wholly new reactor designs were, in some
cases, fast-tracked (e.g., WPPSS), and regulatory attitudes favored
more vigorous intervention.
In this regard, there has been a study of the pattern of regulatory
delay on construction schedules of fossil fuel and nuclear power
plants. Construction lead time research sponsored by EPRI distinguishes out-of-scope work (involuntary delays caused by the actions of an agency other than the constructing utility) from deliberate delays caused by the voluntary actions of the utility. Based
on a survey of twenty-six nuclear units, out-of-scope delay was
identified in 78 percent of the cases. Redesign and rework problems
(53 percent) dominated in the sample. The situation was reversed
for fossil fuel plants, where deliberate delay caused 68 percent of
the total delays in a sample of twenty-eight units.13
This study reached other relevant conclusions. For example,
A large part of the financial risk to a utility constructing a large generating facility is the direct result of long and uncertain lead times. . . . Two decades
ago, the capital cost of most plants was less than $400 per kilowatt for a
nuclear plant. Even taking inflation into account, the real cost per kilowatt
has tripled. . . . Today, the combination of large capital expenditures over
a long period and high interest rates cause time dependent charges to
make up a substantial portion of the total capital cost of the plant.14
In addition there are investment risks because "the cost of large
power plants, especially nuclear plants, is now so high that they
make up a large percentage of the total assets of many utilities."15
RANGE ESTIMATION AND SIMULATIONS TO
DETERMINE CONTINGENCY FUNDS
How can the likelihood of construction cost overruns be evaluated? Range estimation is a major approach to this question, as
noted in the cost engineering literature.16 This method is supported
by historical cost data or reliance on judgmental factors when a
database of comparable facilities does not exist. Range estimation
operates with cost summaries of major items in a construction
project, estimates of their range of variability, and other information, where available, about the likely distribution of component costs. The term "range estimation" derives from first
approximations that operate with lower and upper bound estimates
of component costs. Additional information usually shrinks the
estimated variance of total costs, reducing estimates of contingency
funds needed to cover changes in the total cost at a given confidence
level or a given percentage of the time. In this sense, range estimation provides a yardstick for measuring the adequacy of contingency funds and the value of information about the variability
of component costs.
Let us illustrate the power and generality of this method. Table
3.2 lists the major construction cost categories for some project. The first column simply names these cost categories generically, as c1, c2, c3, and so on. Columns 2 and 3 tabulate the anticipated
lower and upper bounds for these cost categories, defining a cost
range. These ranges may be absolute, encapsulating all possible
values of the cost components, or can incorporate a given percentage of likely variation of these cost categories, for example,
five and ninety-five percentile costs. (The five and ninety-five percentile points on a probability distribution mark off events likely to happen only one time in twenty. Thus, the chance of a value below the five percentile point or above the ninety-five percentile mark of a probability distribution is one in twenty.) The assumption, of course, is that these range estimates are produced by polling expert opinion and that such expertise is specialized by cost component.
The fourth column lists expected or average costs by component and the expected total cost. These expected costs, mathematically the average costs that would be achieved in multiple realizations of the same situation, are assumed to be the cost estimates developed by engineers for this project.

Table 3.2
Range Estimates and Expected Costs by Cost Component (millions of dollars)

Cost            Lower Bound    Upper Bound    Expected
Component       Costs          Costs          Costs
(1)             (2)            (3)            (4)

c1                  100            140          121.7
c2                   60             90           75
c3                   30             50           41.7
c4                   20             40           28.3
c5                   10             40           25
c6                   10             40           25
c7                   10             40           25
c8                   10             40           25
c9                   10             40           25
c10                   5             30           15

TOTALS              265            550          406.7
Note that there are only a few major costs and more numerous
smaller components in Table 3.2, an illustration of Pareto's Law
of the significant few and insignificant many. For purposes of discussion, the estimates in the table are assumed to be in millions
of dollars.
Stochastically Independent Costs
The standard assumption in this type of analysis is that the costs
are stochastically independent. In other words, there can be no
correlation between the cost overruns experienced or realized by
the various cost components. If, in Table 3.2, c2 is 20 percent higher
than its expected value in column 4, there is no added chance that
c3 or any of the other cost components will come in higher than
expected.
Although this is a strong assumption, cost engineering suggests
that this condition can be approximated by suitable aggregation.17
Thus, substitutability between construction tasks is generally clustered in significant groups. Overall, subcontracting of different
phases of the project to different firms (e.g., site preparation,
foundation work, erection) limits the degree to which cost overruns
or slowdowns in one phase can be made up by economies or speedups in another phase. Even within groupings of tasks, statutes
pertaining to overtime pay, surcharges for immediate delivery of
materials, and the like limit speedup opportunities. Once estimated
costs are exceeded, therefore, it is seldom possible to cheapen
other aspects of a project. For purposes of discussion, then, let us
assume the cost categories of Table 3.2 are grouped so as to be
stochastically independent. Later, we will comment on more sophisticated tactics for dealing with correlated costs.
Risk Simulation
The information in Table 3.2 leads to a risk profile for total costs
when we develop a characterization of the probability distributions
of components' costs. Thus, given that the numbers in columns 2
and 3 represent absolute lower and upper bounds for component
costs, the availability of the expected values in column 4 suggests
the use of the triangular probability distribution. The bounds for
the costs delineate the base of this distribution, and the height or


vertex of the triangle for each component cost distribution can be
obtained by solving for the mode (M) in the formula
Expected value of a triangular distribution = (L + M + U)/3
where L is the lower bound and U is the upper bound.
It is interesting to compare this triangular approximation with
a simpler probability density function assuming that any value of
a component cost between its lower and upper bound is equally
likely. This assumption produces the uniform probability distribution, often deployed as an uninformed prior distribution in decision analysis; that is, in the absence of better information, each contingency might be assumed to be equally probable.
Given either of these constituent probability distributions, simulations lead to a risk profile resembling a bell-shaped curve or
normal distribution. This remarkable result is not accidental but
follows from the Central Limit Theorem of statistics (see Appendix). The basic notion is that the sum of sufficiently numerous
independent random variables has a limiting distribution that is a
normal or bell-shaped curve.
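This procedure is compact enough to sketch directly. The following Python fragment, a minimal illustration rather than anything from the original study, imputes a triangular distribution to each component of Table 3.2 (solving for the mode M from the expected-value formula above) and accumulates total-cost draws, with a parallel run using uniform distributions for comparison.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Lower bound, upper bound, and expected cost for c1..c10 (Table 3.2,
# millions of dollars)
lower = np.array([100, 60, 30, 20, 10, 10, 10, 10, 10, 5], dtype=float)
upper = np.array([140, 90, 50, 40, 40, 40, 40, 40, 40, 30], dtype=float)
expected = np.array([121.7, 75, 41.7, 28.3, 25, 25, 25, 25, 25, 15])

# Mode from E = (L + M + U)/3, so M = 3E - L - U
mode = 3 * expected - lower - upper

N = 10_000  # number of Monte Carlo trials

# Draw each component independently and sum across the ten components
total_tri = sum(rng.triangular(l, m, u, size=N)
                for l, m, u in zip(lower, mode, upper))
total_uni = sum(rng.uniform(l, u, size=N)
                for l, u in zip(lower, upper))

print(f"Triangular: mean {total_tri.mean():.1f}, std {total_tri.std():.1f}")
print(f"Uniform:    mean {total_uni.mean():.1f}, std {total_uni.std():.1f}")
```

A histogram of either array of totals reproduces the approximately bell-shaped risk profiles of Figure 3.1, with the uniform case visibly more dispersed.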
Figure 3.1 shows an overlay of the resulting risk profiles. The
risk profile associated with the imputed triangular probability distributions is graphed in black. The more dispersed risk profile
indicated by the white bars is associated with the imputation of
uniform probability distribution functions to the component costs
c1 through c10.
Contingency Funds
The risk profiles in Figure 3.1 help form perceptions of the size
of construction contingency funds needed for this project. Here,
one must employ a broad concept of contingency allowances,
meaning funds to cover potential deviations of total cost above its
estimated value a given percent of the time. 18
Suppose we intend for the contingency fund to cover cost variation 95 percent of the time and want to include a fund of this
size in the bond issue as an addition to funds determined by the
construction costs estimate. We can compute the required contingency allowance with the cumulative probability distribution of


Figure 3.1
Overlay of Risk Profiles Developed with Uniform and Triangular
Distributions and Table 3.2 Ranges

total costs. Figure 3.2 presents the cumulative distributions associated with the risk profiles of Figure 3.1. Here, the usual presentation is rotated so that the probability of the project totaling a
certain cost or less is shown on the horizontal axis. Following the
line up from the 95 percent probability level on the horizontal axis
to the cumulative distributions produced by triangular or uniform
distributions of component costs, one can estimate a total cost
figure that is exceeded only one time in twenty. If contingency
allowances are desired to cover cost overruns 95 percent of the
time, and the cost estimate is the expected total cost ($406.7 million), the analysis suggests that a contingency fund of about $40
million will cover most exigencies. Here the higher variance of the
risk profile of the simulation performed with component uniform
distributions carries through to a higher estimate of the contingency
fund to cover cost overruns 95 percent of the time.19 Thus, in
determining how much extra debt one wishes to add for contingencies, one can see tangible evidence of the value of precision in probability estimation.

Figure 3.2
Comparison of Cumulative Distributions
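In code, the contingency calculation is a one-line percentile on the simulated totals. The sketch below rebuilds the draws from the earlier fragment and sizes the allowance so that the expected cost plus the contingency covers total costs 95 percent of the time (illustrative only; the inputs are the Table 3.2 figures).

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Rebuild the total-cost draws of the earlier sketch (Table 3.2 inputs)
lower = np.array([100, 60, 30, 20, 10, 10, 10, 10, 10, 5], dtype=float)
upper = np.array([140, 90, 50, 40, 40, 40, 40, 40, 40, 30], dtype=float)
expected = np.array([121.7, 75, 41.7, 28.3, 25, 25, 25, 25, 25, 15])
mode = 3 * expected - lower - upper

N = 10_000
total_tri = sum(rng.triangular(l, m, u, size=N)
                for l, m, u in zip(lower, mode, upper))
total_uni = sum(rng.uniform(l, u, size=N) for l, u in zip(lower, upper))

# Contingency sized so expected cost plus contingency covers total cost
# 95 percent of the time
for label, total in (("triangular", total_tri), ("uniform", total_uni)):
    p95 = np.percentile(total, 95)
    print(f"{label}: 95th percentile {p95:.1f}, "
          f"contingency {p95 - expected.sum():.1f}")
```

With these inputs the triangular case yields a contingency near the $40 million cited above, and the uniform case a somewhat larger figure.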
This simple exercise underlines an important and practical point.
Covering total costs at a given confidence level does not require
coverage of component costs at that same confidence level. Thus, a
95 percent confidence level for a uniform distribution would span
95 percent of the area between the lower and upper bound, beginning at the lower bound. Given that the range of the high
variance risk profile in Figure 3.1 is just short of $200 million
(between about $312.5 million and $500 million), covering the total at the 95 percent level requires, in round numbers, $450 million (the $406.7 million expected cost plus the roughly $40 million contingency), far less than $500 million minus 5 percent of $200 million, or about $490 million, the 95 percent point if total costs were treated as uniform over this range. The same
point also follows by a simple argument if we assume that the


component cost distributions are normal (bell-shaped) as well as
stochastically independent, 20 and this point is true for many characterizations of component costs and their variability. Another way
of understanding this is in terms of risk pooling. The chance that
a series of independent random variables will exhibit large deviations in the same direction from their mean values is, intuitively,
a low probability.
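A stylized numerical check makes the pooling economy concrete. Assuming ten normal, independent component costs (the means are the Table 3.2 expected costs; the standard deviations below are invented solely for illustration; see also note 20), the 95 percent point of the total falls well short of the sum of the components' individual 95 percent points:

```python
import numpy as np

# Ten independent, normally distributed component costs; the means are
# the Table 3.2 expected costs, the standard deviations pure assumptions.
means = np.array([121.7, 75, 41.7, 28.3, 25, 25, 25, 25, 25, 15])
sds = np.array([8, 6, 4, 4, 6, 6, 6, 6, 6, 5], dtype=float)

z95 = 1.645  # one-sided 95 percent point of the standard normal

sum_of_component_p95 = (means + z95 * sds).sum()
p95_of_total = means.sum() + z95 * np.sqrt((sds ** 2).sum())

print(f"sum of component 95% points: {sum_of_component_p95:.1f}")
print(f"95% point of the total:      {p95_of_total:.1f}")  # always smaller
```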

Correlations in Component Cost Distributions


Suppose there are correlations between cost categories that are
not compelling enough to justify aggregation of the categories into
a single classification but are strong enough to have implications
for the resulting risk profile of total costs. In this case, one must
ask how such correlations are established.
If such correlations follow from databases of costs on comparable
projects, an analysis could run along the following lines. Compute
an ordinary least squares (OLS) regression of a cost category on
its related counterpart(s). Then, bootstrap an estimate of the probability distribution of the coefficient of this regression. If two costs
are involved, this procedure runs as follows. Select a series of
random subsamples from the observed residuals of the regression
linking the dependent and explanatory cost categories. Add each resampled set of residuals to the fitted values of the dependent variable to construct a new synthetic sample, and reestimate the regression coefficient linking these costs. Do this repeatedly and the
result is that the various coefficients estimated will trace out a
distribution from which confidence intervals for the coefficient can
be established.21
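A minimal sketch of this residual bootstrap, using hypothetical cost observations and the generic procedure rather than any particular published implementation, might run as follows:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical observations of two related cost categories on
# comparable past projects ($ millions; values are illustrative)
x = np.array([12.0, 18.5, 25.0, 31.0, 40.5, 47.0, 55.5, 63.0])
y = np.array([ 8.1, 11.9, 16.2, 19.8, 26.5, 29.9, 36.0, 40.3])

# Ordinary least squares fit of y on x
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta
residuals = y - fitted

# Residual bootstrap: resample residuals, rebuild the dependent
# variable, and re-estimate the slope
slopes = []
for _ in range(2000):
    e_star = rng.choice(residuals, size=len(residuals), replace=True)
    slopes.append(np.linalg.lstsq(X, fitted + e_star, rcond=None)[0][1])

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"bootstrap 95% interval for the slope: ({lo:.3f}, {hi:.3f})")
```

The percentiles of the resampled slopes then serve as the confidence interval for the coefficient linking the two cost categories.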
Otherwise, one may suspect correlations, perhaps motivated by
some type of engineering or economics argument, but be without
hard data to back up this suspicion. In this case, one tactic is to
look at the sensitivity of the risk profile to such interrelationships.
If the risk profile is highly sensitive to the assumptions made about
the correlation of some set of random variables, one needs to dig
for more information or attempt to set these correlations at some
intermediate value.

Simulation Analysis of the Length of the Construction Period
A similar analysis can be applied to scheduling problems. Construction activities are characterized by precedence relations: some activities must occur before others. Construction tasks necessarily occur in parallel or in sequence. Methods like CPM (critical path method) and PERT (program evaluation and review technique)22 accommodate such precedence requirements in generating estimates of completion times and information about the construction schedule. A key concept is the critical path, which is the longest chain of activities linked to each other by precedence relationships (i.e., task a1 must precede activity b1, b1 must precede c1, and so on). In the PERT system, for example, three time estimates
are obtained for each activity: an optimistic time, a most likely
time, and a pessimistic time. A risk profile for the critical path is
developed with these range estimates and can be linked with the
time path of construction expenditures. 23
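The flavor of such a simulation can be conveyed with a deliberately small, hypothetical network; the activity names, precedence structure, and optimistic/most likely/pessimistic estimates below are all assumptions for illustration. Note how the maximum operator lets the critical path switch from trial to trial.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Hypothetical network (times in months): site preparation precedes both
# foundation work and equipment procurement, which run in parallel;
# erection follows whichever of the two finishes last.
activities = {
    "site_prep":   (3, 4, 7),     # (optimistic, most likely, pessimistic)
    "foundation":  (5, 6, 10),
    "procurement": (4, 8, 14),
    "erection":    (8, 10, 16),
}

N = 10_000
draws = {name: rng.triangular(*est, size=N)
         for name, est in activities.items()}

# Project duration per trial; the critical path can switch between trials
duration = (draws["site_prep"]
            + np.maximum(draws["foundation"], draws["procurement"])
            + draws["erection"])

print(f"mean completion: {duration.mean():.1f} months, "
      f"95th percentile: {np.percentile(duration, 95):.1f} months")
```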
Integrating Time and Costs
The issue of mutually correlated random variables reappears in
connection with the question of interdependencies between cost
overruns and scheduling delays.24 In some instances, stochastic
independence can be preserved; that is, time and cost issues can
be decoupled. Thus, interest during construction is related to the
length of the construction period and can be independent of
whether activities are brought in under or over cost. Nonetheless,
interdependencies can exist. There are correlations between cost
overruns and the construction period, for instance, since the site
must be kept open and administration expenses generally continue.
Delays in task completion also can be associated with overtime
pay.
A mixed strategy usually accommodates the relation between
time and cost. Direct linkages can be established, as between interest costs and the total construction period. Otherwise, correlation between major cost components and delays in the
completion schedule can be explored by regression analysis, if data
from comparable projects exist. Finally, interrelationships not sup-


ported by cost and scheduling data may be dealt with along the
lines of a sensitivity analysis. If such correlations have significant
impacts on the risk profile of total costs or completion time, care
must be exercised in setting their precise value in the final simulation.25
The Disbursement Pattern
Cumulative expenditures over the construction period usually
will be some type of S-shaped or sigmoid curve. Expenditures in
the design period ordinarily will be less than initial startup costs,
and payout will mount as the project goes on, probably peaking
about midway through construction and then trailing off as detail
tasks and finishing become the main preoccupation. Given the lag
between project completion, billing, and payment, expenditure
curves can be developed with the same information used for costs
and completion times.
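As a rough sketch of such a curve, the fragment below uses an assumed logistic form for cumulative disbursements (a generic S-curve stand-in, not the reliability-theory model cited below); the construction period, steepness parameter, and total cost are assumptions, the last taken from Table 3.2.

```python
import numpy as np

T = 36                       # assumed construction period, months
t = np.arange(1, T + 1)
k = 0.3                      # assumed steepness of the S-curve
total_cost = 406.7           # expected total cost from Table 3.2, $ millions

# Logistic cumulative share of total cost, rescaled to run from 0 to 1
share = 1.0 / (1.0 + np.exp(-k * (t - T / 2)))
share = (share - share[0]) / (share[-1] - share[0])

cumulative = total_cost * share
monthly = np.diff(cumulative, prepend=0.0)    # implied monthly draws

print(f"peak monthly disbursement ${monthly.max():.1f} million "
      f"around month {monthly.argmax() + 1}")
```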
Mention also can be made of an explicit stochastic model of the
payment stream, unrelated to range estimation. This derives interesting results from the assumption that the probability of the
completion of any work element in any small interval within the
construction period is a small number, an observation that may
elicit empathy from construction project managers. Based on analogies with engineering reliability theory, payment is represented
as a Poisson process, and the payment completion rate is an exponential function. The resulting probability distribution of payments is a mixture of uniform and Weibull distributions, which
describes a kind of S curve. 26
CONCLUSION
Text Box 3.1 lists several qualitative factors identified in the
engineering and economics literature as being linked with cost
escalation on capital construction projects. Evidence suggests these
same factors are relevant to water and power capital construction
projects. This specialized literature emphasizes, on the one hand,
the dependability of conventional technology and, on the other,
the hazards of fast-tracking technical innovation in the nuclear
field. Accordingly, the factors in Text Box 3.1 might be taken as
the basis for a parametric ranking system that would appraise the relative likelihood of cost overruns on a series of projects.

Text Box 3.1

stage of the product cycle at which the cost estimate is developed
technology
project size
project complexity
competence of project management
regulatory or political considerations
contracting arrangements
volatility of economic variables such as prices and interest rates
Quantitative assessment of the likelihood of cost overruns and
scheduling delay on power and water projects can be carried out
with range estimation. Range estimation involves associating probabilities of excess with various cost or completion time estimates
and identifying the expected or most likely values for component
activities or tasks. With respect to costs, implementing this technique involves the following:
1. appropriate classification of costs (preserving stochastic independence)
2. identifying five and ninety-five percentile costs or absolute lower and
upper cost bounds


3. imputing probability distributions to component costs


4. simulating the risk profile of total costs

Prior to bond issuance, this procedure might be implemented with
conceptual construction cost estimates; it can size contingency allowances; and, as suggested in Chapter 6, it might inform determination of optimal amortization schedule or sinking fund
arrangements. Cost estimation software employing hundreds or
thousands of cost categories exists, of course, and supports refinements of such early estimates of risks from construction cost
overruns.
The simulation example presented in this chapter illustrates several important technical points, including (1) the operation of the
Central Limit Theorem for sums of stochastically independent variables, and (2) economies from risk pooling in setting contingency
allowances against total costs rather than component cost categories.
This simulation example may leave a question in readers' minds
as to how skewed total cost distributions (implied, for example, by Table 3.1) are possible if sums of random variables tend to
add up to normally distributed totals. While a definitive answer
requires detailed examination of cases, a process that might be
called amplification may be at issue. Costs, in other words, can be
amplified through repeated redesign, due to regulatory or sponsor
initiative, after construction begins. This essentially introduces
multiplicative factors to the realization of total costs, insofar as
existing work has to be removed and the same tasks done more
than once. Together with positive correlations in costs that might
accompany this frustrating and inefficient process, simulations
show that cost amplification can produce markedly skewed total
cost distributions.
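A small simulation conveys the mechanism. The fragment below reuses the Table 3.2 triangular draws and then applies a redesign multiplier with some probability; the 20 percent redesign chance and the 1.2 to 2.0 factor are assumptions chosen only to exhibit the skew.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Component cost draws, Table 3.2 triangular parameters as before
lower = np.array([100, 60, 30, 20, 10, 10, 10, 10, 10, 5], dtype=float)
upper = np.array([140, 90, 50, 40, 40, 40, 40, 40, 40, 30], dtype=float)
expected = np.array([121.7, 75, 41.7, 28.3, 25, 25, 25, 25, 25, 15])
mode = 3 * expected - lower - upper

N = 10_000
base_total = sum(rng.triangular(l, m, u, size=N)
                 for l, m, u in zip(lower, mode, upper))

# Amplification: with assumed probability 0.2 the project is redesigned
# and affected work is redone, multiplying realized costs by an assumed
# factor between 1.2 and 2.0
redesign = rng.random(N) < 0.2
factor = np.where(redesign, rng.uniform(1.2, 2.0, size=N), 1.0)
total = base_total * factor

print(f"mean {total.mean():.1f} vs. median {np.median(total):.1f} "
      "(mean above median indicates right skew)")
```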
Given the computer software now available, analysis of the risk
profile of construction costs is increasingly accessible. Preliminary
analysis, working with a few major cost categories, may provide
substantial insight into the potential variability and financial risks
of a project. Cost estimation software suitable for bottom-up estimation of large power and water facilities can expand the number
of cost categories manyfold. The same principles apply, however,
and the added detail may support only incremental refinements to


a carefully conducted analysis with major cost categories and full
design and site information.
NOTES
1. James A. Bent, Project Management for Engineering and Construction (Lilburn, GA: Fairmont Press, distributed by Prentice-Hall, 1989).
2. This expands on John W. Hackney, Control and Management of
Capital Projects (New York: John Wiley & Sons, 1965), p. 17. Hackney
groups the causes of cost growth under four headings pertaining to (1)
changes in project scope or design, (2) problems in management and
organization performance, (3) changes in economic and legal parameters,
and (4) limitations and imperfections in estimating method. He draws on
research and reviews in Edward W. Merrow, Stephen W. Chapel, and
Christopher Worthing, A Review of Cost Estimation in New Technologies:
Implications for Energy Process Plants, prepared for the U.S. Department
of Energy by the Rand Corporation, R-2481-DOE, Washington, D.C.,
July 1979.
3. Earl J. Miller, "Project Information Systems and Controls," in Jack
H. Willenbrock and H. Randolf Thomas (eds.), Planning, Engineering,
and Construction of Electric Power Generation Facilities (New York: John
Wiley & Sons, 1980), p. 308. Miller suggests that in typical electric power
construction the official estimate is based on engineering that is 30 percent
complete and preferably 40 percent to 60 percent complete.
4. James F. Tucker, Cost Estimation in Public Works, MBA thesis,
University of California at Berkeley, September 1970.
5. The Japanese may be setting the standard today in their aggressive
utilization of modern computing capabilities and extensive cost databases,
as the following description in an article by Kurazo Yokoyama from ABI/Inform, a computer database, suggests:
Actual cost data on about 10,000 buildings constructed by 40-50 firms in Japan
over the period 1975-1984 were analyzed. . . . The construction cost of a standard
building design with the usual functions is calculated, which is called the median
cost. Then, special feature factors are selected from the tables of various building
conditions. The proposed method allows an easy calculation of the cost... necessary to simply imagine the building. . . . The workability and accuracy of the
technique have been approved by applying it to over 270 cases in the past three
years.
Annual Transactions of the American Association of Cost Engineers
(AACE) (Morgantown, W.V.: AACE, 1988).
6. Terry F. Lenzer, The Management, Planning, and Construction of


the Trans-Alaska Pipeline System (Anchorage, AK: Alaska Pipeline Commission, August 1, 1977), Chapter II.
7. See Derek T. Beeston, Statistical Methods for Building Price Data
(London: E. & F. N. Spon, 1983).
8. Edward G. Altourney, The Role of Uncertainties in the Economic
Evaluation of Water Resources Projects (Stanford, CA: Institute of Engineering-Economic Systems, Stanford University, 1963).
9. Maynard M. Hufschmidt and Jacques Gerin, "Systematic Errors
in Cost Estimates for Public Investment Projects," in Julius Margolis (ed.),
The Analysis of Public Outputs (New York: Columbia University Press,
1970), pp. 267-315.
10. Robert H. Haveman, The Economic Performance of Public Investments: An Ex Post Evaluation of Water Resources Investments (Baltimore: Johns Hopkins Press, 1972).
11. Glenn J. Davidson, Thomas F. Sheehan, and Richard G. Patrick,
"Construction Phase Responsibilities," in Willenbrock and Thomas
(eds.), Planning, Engineering, and Construction of Electric Power Generation Facilities, p. 160.
12. D. S. Bauman, P. A. Morris, and T. R. Rice, An Analysis of Power
Plant Construction Lead Times, Volume I: Analysis and Results, EA-2880, Final Report (Palo Alto, CA: EPRI, February 1983), p. 2-2.
13. Ibid.
14. Ibid., section 1, p. 4.
15. Ibid.
16. See, for example, Michael W. Curran, "Range Estimating: Reasoning with Risk," Annual Transactions of the AACE (Morgantown,
W.V.: AACE, 1988), n.3.1-n.3.9; R. W. Hayes, J. G. Perry, P. A.
Thompson, and G. Willmer, Risk Management in Engineering Construction (London: Thomas Telford Ltd., 1986); Karlos A. Artto,
"Approaches in Construction Project Cost Risk," Annual Transactions
of the AACE (Morgantown, W.V.: AACE, 1988), B-4, B.5.1-B.5.4; and
Krishan S. Mathur, "Risk Analysis in Capital Cost Estimating," Cost
Engineering 31 (August 1989): 9-16.
17. See Derek Beeston, "Combining Risks in Estimating," Construction Management and Economics 4 (1985): 75-79.
18. James E. Diekmann, Edward E. Sewestern, and Khalid Taher,
Risk Management in Capital Projects, a report to the Construction Industry
Institute (Austin: University of Texas at Austin, October 1988), pp. 63-81. The authors of this book note that some exceptions in the sources of
construction cost variation covered by contingency funds often are allowed
(e.g., out-of-scope variation in costs).
19. Here, the expected value of total costs is similar in both simulations,
the primary difference being the variance of the implied risk profiles.


20. When the component cost distributions are normal or Gaussian as well as stochastically independent,

E(c1 + c2 + . . . + cn) = E(c1) + E(c2) + . . . + E(cn)

and

Var(c1 + c2 + . . . + cn) = Var(c1) + Var(c2) + . . . + Var(cn)

where E(.) (the period indicates the argument of the function E) is the
expectation operator indicating the mean or average of a random variable and Var(.) indicates its variance. In ordinary language, these
equations mean that the expected total cost equals the sum of the expected values of component costs and, similarly, that the expected variance of total costs equals the sum of the variances of the individual
component cost probability distributions. Confidence levels of a normal
distribution, however, are determined by the standard deviation, which
is the square root of the variance. Thus, the standard deviation of total
costs must be less than the sum of the standard deviations of the component cost distributions. Accordingly, a 95 percent confidence interval
for total costs is less than the sum of the 95 percent confidence level
values for component costs.
21. See Michael R. Veall, "Bootstrapping the Probability Distribution
of Peak Electricity Demand," International Economic Review 28 (February 1987): 203-212, and the cited references for a particularly clear
discussion of this method.
22. PERT (program evaluation and review technique) had a number
of precursors. Its development was associated with the development of
the Polaris missile system in 1958 and the earlier Gantt (bar) charts and
milestone reporting systems. See Joseph J. Moder, Cecil R. Phillips, and
Edward W. Davis, Project Management with CPM, PERT, and Precedence
Diagramming, 3rd ed. (New York: Van Nostrand Reinhold Company,
1983).
23. One problem is that the critical path identified by a deterministic
analysis may be supplanted by other, more lengthy chains of activities
when task completion times are considered to be random variables. Until
the advent of modern microcomputers, this was a real barrier because of
the cost of core computer time to generate all possible combinations of
completion times and their resulting critical paths. Currently, simulation
is the favored approach to this problem, and algorithms for the efficient
or near-optimal solution of this problem are available. See Thomas Byers
and Paul Teicholz, "Risk Analysis of Resource Levelled Networks," in


Proceedings of the Conference on Current Practice in Cost Estimating and
Cost Control, sponsored by the Construction Division of the American
Society of Civil Engineers in Cooperation with the University of Texas
at Austin (New York: American Society of Civil Engineers, 1983),
pp. 178-186. Bruce E. Woodsworth notes some problems, in this regard,
with standard PC software, which purports to take account of resource
constraints. "Is Resource-Constrained Project Management Software Reliable?" Cost Engineering 31 (July 1989): 7-11.
24. Chandra S. Murthy notes that cost and schedule functions are rarely
integrated in common construction projects. "Cost and Schedule Integration in Construction," in Proceedings of the Conference on Current
Practice in Cost Estimating and Cost Control, pp. 119-129.
25. Note that the most widely studied joint probability distribution
allowing mutual correlation of variables, in this regard, is the bivariate
normal distribution. With some care, however, uniform and triangular
distributions can be adapted to a two or many variable context in which
there are mutual relationships between the constituent random variables.
Thus, the Morgenstern distribution allows uniform marginal distributions,
as might be implied by range estimates without information on the mode
or central tendency, and correlation coefficients ranging between -1/3 and +1/3. A good reference here is Mark E. Johnson, Multivariate Statistical
Simulation (New York: John Wiley & Sons, 1987).
26. See S. N. Tucker, "Formulating Construction Cash Flow Curves
Using a Reliability Theory Analogy," Construction Management and Economics 4 (1986): 179-188.

4
Revenue Risk: Rate and Demand Factors
Attitudes toward rate responsiveness in the power and water industries have shifted since the mid-1970s. Initially, it was not uncommon to meet utility professionals who doubted whether
consumers paid attention to the price of electricity or water. One
might hear (and there is modest published literature to the effect)
that rate increases initially impact demand but that, after a year
or so, consumers forget about changes in rates and return to their
old patterns of usage. Of course, those advancing this thesis seldom
distinguish between real and nominal rates. They may have observed inflation reducing the real or inflation-adjusted rate after a
time to its pre-rate-change level, or, alternatively, they may have
witnessed differences between short- and long-run price responses.
The statistical evidence for price effects on water or electricity
demand, however, is overwhelming.1 By the late 1970s, there was
greater awareness among utility staff about the implications of
price elasticity on per capita usage levels. The message may not
have reached echelons at which key decisions were made until
later, however. Thus, as noted in Chapter 1, demand projections
and capacity decisions in the electric power industry were out of
synch with the realities of lower usage rates and demand growth
in the face of higher, real rates until recently. Even today, despite
widespread discussion of system optimization, conservation, and
econometric modeling of demand, bond covenants often require
utilities to pledge to increase rates in the event the debt coverage


ratio falls below a certain level. This, of course, assumes higher
rates lead to higher revenues, which, demonstrably, is not always
the case.
Financial risk, therefore, can be associated with rate responses
in a fairly catastrophic way. If the inherent variability of demand
due to weather or other stochastic factors is ignored, or if analysts
fail to capture long-term trends related to income growth or change
in housing types, problems can develop. A real disaster potential,
however, is that adverse cost conditions will push a utility to raise
rates high enough to enter the price elastic region of demand.
Thus, typically, residential demand and system demand for water
and electric power are price inelastic in the year or so after a price
or rate increase (see the following discussion for an explication of
the concept of price elasticity). This means rate increases produce
an increase in expenditures on the part of consumers, although,
as the Law of Demand suggests, the consumption level or total
quantity consumed will decrease somewhat. Generally speaking,
demand becomes price elastic at high enough rate or price levels,
and the opposite effect comes into playprice increases reduce
the consumer's total expenditure as well as the quantity consumed.
Some claim this perverse relationship between rate levels and consumer expenditure, sometimes called rate shock, can be seen at
work in certain power systems attempting to finance nuclear plant
cost overruns.
These issues are explored with simple examples in the following
discussion. Suggestions are offered about how to capture deterministic rate effects, as well as the overall band of variation, characterizing power or water demands of utility customers.
Conditioned on physical facts about households and commercial
establishments, electricity and water demand patterns change relatively slowly, although use levels respond immediately to price
or rate changes. Structural and time series models based on historical data, therefore, provide a guide to prospective patterns of
demand variability for purposes of analyzing revenue risk, if allowance is made for price effects.
Remarks on the general concept of demand, as used in economic
and other discussions, follow. Then, two rules are presented illustrating the price elasticity of demand and its impact on revenues
and the limits of revenue enhancement. After provisos concerning


the propositions, the discussion turns to several ways in which
demand uncertainty can be represented in simulation models.
WHAT IS DEMAND?
Once, in the early days of the television program "Saturday
Night Live," the character Father Guido Sarducci exhorted viewers
to sign up for instruction at the Five Minute University. This required a minimum time commitment. The economics course consisted of two words. Anytime one was asked something about
economics, one would be taught to say "supply and demand."
Of course, "demand" has a range of meanings. Demand may
be presented as an achieved fact, as in the statement, "after crude
oil prices quadrupled in 1974, growth in gasoline demand flattened
out." More abstractly, demand is a relationship describing the
amount people are willing to purchase at various prices, given other
factors like income and tastes. This produces the demand curve,
a downward-sloping function in price-commodity space. Its negative slope illustrates what has been called the Law of Demand.
Apart from exceptional cases, that is, people tend to buy less of
a good when its price increases, other things being equal. 2
Readers accustomed to a probabilistic framework may question
how real these textbook descriptions of consumer responses actually are. The answer is that the operation of the Law of Demand
in electric power and urban water markets has been confirmed in
numerous econometric studies. At the same time, there are conceptual and statistical issues pertaining to measures of consumer
price or rate responses. Thus, strictly speaking, "rates" are not
"prices," and there can be questions about exactly what aspects
of the rate schedule influence consumer responses. In addition,
individual demand patterns are subject to considerable statistical
noise. Many persons are surprised to discover that, generally
speaking, structural demand models or equations including price,
income, family size, and other explanatory variables rarely explain
more than 20 to 30 percent of the variation in individual usage.
Structural variables influencing demand are numerous and differ
for households and businesses. For residential customers, family
size, family income, the number of electrical or water-using appliances, and their efficiency ratings are relevant. For commerce


and industry, market factors and the current state of technology
are significant. Furthermore, conservation plays a growing role in
water and electricity demand studies, as utilities disseminate information about resource-efficient appliances; adopt rate forms,
such as increasing block rate schedules, which encourage care in
usage; and as various ordinances governing plumbing fixtures or
appliance efficiency are implemented. Finally, there is the matter
of the temporal distribution of demand through daily, weekly,
seasonal, and other cycles.
Having said this, it must be acknowledged that consumer responses to power and water rates are important because rate setting
is one of the major ways utilities implement policy objectives, such
as increasing revenues. The following discussion shows, in this
regard, that even small rate effects can imply significant revenue
impacts.
PARABLE #1: REVENUE EFFECTS OF CONSUMER PRICE RESPONSES
Let us propound and illustrate the following rule:
Ignoring the price responsiveness of consumer demand contributes
to error in the revenue forecast, which is at least equal to the percentage change in prices or rates multiplied by the price elasticity.

The price elasticity is defined as the percentage change in quantity demanded divided by the percentage change in price, where
these percentages usually are taken around the average values of
the variables in the sample. 3 Such elasticities are estimated from
empirical data, usually in connection with a statistical demand
analysis. This could involve identifying explanatory variables influencing consumption (Q), collecting cross-sectional data, and
estimating coefficients of a structural model, such as
Q = a + a1P + a2X2 + . . . + anXn    (4.1)

where P is the real or inflation-adjusted price of the good and the Xj are other explanatory variables. The price elasticity of demand in the simple linear model of equation 4.1 is a1(P/Q).
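As a small illustration of how such an elasticity emerges from a fitted model, the sketch below estimates a linear demand equation of the form of equation 4.1 by ordinary least squares and evaluates a1(P/Q) at the sample means; all data are hypothetical.

```python
import numpy as np

# Hypothetical cross-sectional observations: price P, one other
# explanatory variable X (say, an income index), and usage Q.
P = np.array([1.2, 1.5, 1.8, 2.0, 2.4, 2.7, 3.0, 3.3])
X = np.array([41.0, 44.0, 48.0, 47.0, 52.0, 55.0, 58.0, 60.0])
Q = np.array([19.5, 18.9, 18.0, 18.2, 17.1, 16.8, 16.0, 15.7])

# Fit Q = a + a1*P + a2*X by ordinary least squares
A = np.column_stack([np.ones_like(P), P, X])
a, a1, a2 = np.linalg.lstsq(A, Q, rcond=None)[0]

# Price elasticity of the linear model, evaluated at the sample means
elasticity = a1 * P.mean() / Q.mean()
print(f"price coefficient a1 = {a1:.2f}, elasticity = {elasticity:.2f}")
```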

Table 4.1
Error in Revenue Forecast from Neglecting Consumer Price Responses (33 percent price increase)

                                    Unit       Quantity      Revenue
                                    Price      Demanded      Forecast
                                    ($)        (million)     ($ millions)

Current Demand Projection
at Current Price                    1.50       18.5          27.75

Zero Price Elasticity Revenue
Forecast, Higher Price              2.00       18.5          37

Revenue Forecast, Higher Price
(Assuming e = -0.3)                 2.00       16.67         33.3

FORECAST ERROR: 10%

e = price elasticity of demand, or the percentage change in quantity demanded as a result of a given percent change in the price of a good

Short-run residential-commercial price elasticities for electricity
or urban water typically are on the order of .3 to .75 in absolute
magnitude. This is not especially price responsive, but large rate
changes, contemplated because of cost overruns or reductions in
the growth of the service area, can lead to significant revenue
effects. Thus, if a rate increase necessary to balance costs and
revenues is approximately 50 percent, reductions in systemwide
electricity or water demand of 15 percent can result in the short
run. This translates directly to the bottom line in the manner shown
in Table 4.1. Here, the initial price is $1.50 per unit, where we
assume for simplicity that this commodity is sold at a uniform price,


rather than a schedule of rates. At that price, consumption of 18.5
million units produces revenues of $27.75 million. A revenue forecast is prepared for a price increase to $2.00 per unit (a 33 percent increase) on the assumption that demand is unaffected by the price change (zero price elasticity, perfectly inelastic). If the short-run price elasticity is -0.3, consumption will decrease to 16.67 million units after the price change (18.5[1 - (.33)(.3)]). The error in the
revenue forecast, therefore, is $3.66 million or equals the price
elasticity multiplied into the percentage change in price (i.e., 9.9
percent).
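The arithmetic of Table 4.1 compresses into a few lines (values taken from the table itself):

```python
# Numbers from Table 4.1
p0, p1 = 1.50, 2.00      # unit price before and after the increase
q0 = 18.5                # million units demanded at the current price
e = -0.3                 # short-run price elasticity of demand

pct_price = (p1 - p0) / p0                # 33 percent increase
q1 = q0 * (1 + e * pct_price)             # about 16.67 million units

naive_revenue = p1 * q0                   # $37 million, zero elasticity
actual_revenue = p1 * q1                  # about $33.3 million

error = (naive_revenue - actual_revenue) / naive_revenue
print(f"forecast error: {error:.1%}")     # roughly 10 percent
```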
Thus, even with moderate price elasticities and price increases,
ignoring consumer price responses can produce 10 percent errors
in the revenue forecast. These errors cumulate over time and may
increase for another reason. Producers and consumers will have
added incentive to make investments in energy or water efficient
equipment, appliances, or appurtenances. Thus, rate effects estimated with a short-term price elasticity provide a lower bound to
the actual effects on consumption and revenues that are likely to
be encountered. Rate effects, therefore, ought to be taken into
account in financial risk analysis of water and power systems.
PARABLE #2: LIMITS TO REVENUE ENHANCEMENT THROUGH RATE INCREASES
A second rule or proposition can be advanced here also:
There are limits to revenue enhancement through rate increases.

Assertions about price elasticities are similar to statements found
in applied physics or other sciences, where differences exist between general statements and those with working usefulness. Thus,
people often are taught to associate the size of a price elasticity
with whether a good is a luxury or a necessity. Luxuries tend to
be price elastic (e > 1), and, interestingly, expenditures on them
will fall as the price goes up. Necessities, like food, on the other
hand, tend to be price inelastic (e < 1). When their price increases,
the quantity demanded will drop, but expenditures will continue
to increase.4 This seems logical, but there is a catch. Price elastic-


ities vary along the demand curve. 5 At low prices, demand is typically price inelastic, whereas at high prices, it becomes elastic.
This is relevant to real-world planning because a utility often
pledges in a rate covenant to increase rates to cover a shortfall in
revenuesan oversight, in all likelihood, linked with traditionally
low prices for electricity and water. Without information about
price responsiveness, rate analysts tend to look at the ratio of
needed revenues to existing revenues. Thus, if commodity charges
produce revenues of $52,500,000, and there is a revenue shortfall
of $20,000,000, a 40 percent rate increase might be judged adequate to boost revenues back up to the breakeven point.
Suppose, however, the relevant demand relationship is linear in
the price of the commodity in question, according to the formula
Q = 25,000,000 - 2,500,000P    (4.2)

so that at consumption of 17,500 units, the price elasticity is between 0.2 and 0.3. Here, total demand (Q) is measured in 1,000
units, and the initial price is assumed to be $3.00 per 1,000 units.
Other influences are assumed to be constant for the duration of
the analysis and are reflected in the constant term.
The fact is that there is no price increase capable of generating
revenues of $75 million, given these parameters. Multiplying both
sides of equation 4.2 by P, the following results:
PQ = 25,000,000P - 2,500,000P²    (4.3)

or

75,000,000 = 25,000,000P* - 2,500,000P*²    (4.4)

since we are interested in attaining a target revenue of $75 million.


Applying the quadratic formula leads to solutions that include the square root of -1, or imaginary prices, a sign that something is definitely amiss.
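A few lines of arithmetic confirm the point, using the parameters of equation 4.2: the discriminant of the implied quadratic is negative, so no real price reaches the $75 million target, and the revenue-maximizing price caps attainable revenue at $62.5 million.

```python
# Demand curve Q = a - b*P (equation 4.2) and a $75 million target
a, b = 25_000_000, 2_500_000
target = 75_000_000

# Target revenue requires b*P^2 - a*P + target = 0; check the discriminant
disc = a ** 2 - 4 * b * target
print(f"discriminant = {disc:,}")         # negative: no real price works

# Maximum revenue occurs where d(PQ)/dP = a - 2bP = 0
p_star = a / (2 * b)                      # $5.00 per 1,000 units
max_revenue = p_star * (a - b * p_star)   # $62,500,000
print(f"revenue-maximizing price ${p_star:.2f}, "
      f"maximum revenue ${max_revenue:,.0f}")
```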
The basic reason can be seen in the revenue curve implied by
this demand function, presented in Figure 4.1. This is a parabola
with the maximum revenue occurring at a price of $5.00 per unit,
at which rate level revenues total a little more than $60 million.


Figure 4.1
Total Revenue Curve

Normally, a utility operates within the inelastic region of the
electricity or water demand curve in the short run of one to three
years. Price responsiveness is low in percentage terms and consumer expenditures increase with higher rates. Sufficiently large
rate increases, however, can move consumption into the price
elastic region of demand. Then, consumers buy less, and their total
outlay is reduced. At the same time, the size of the rate increase necessary
to bring this situation to pass can cause public resentment.


PROVISOS
These relationships must be qualified with respect to several factors. The following factors bear mention because they have been intensively researched in recent years.
Rate Versus Price
The primary problem of rate versus price is that it is unclear
exactly what people respond to when faced with a schedule of rates
as opposed to a uniform price for a commodity. A typical rate
schedule has a fixed fee or charge for connection to the system
plus rates applying to various quantities of consumption. Electric
power rates also might include a demand charge for consumption
during some defined peak demand period. Water rates usually are
described by a schedule applying to various quantity intervals;
that is, purchases up to and including 10,000 gallons in a billing
period are charged at one rate, while consumption in excess of
10,000 gallons in a billing period is charged at a second rate, and
so on.
What's the price? Economists tend to identify the marginal price
as the critical decision criterionthe commodity charge for the
final unit consumed in a billing period. In applied studies and
general discussion, the average price of water or power is often
cited as important. Yet surveys show only a small percent of water
or power customers know the rate schedule,6 and their imputations
of average price may be wide of the mark, at least as this quantity
is computed from the actual bill. Indeed, the only certain thing is
that consumers, at some point, look at their utility bill. It seems
reasonable, therefore, that they may react when bills rise above a
certain threshold, making discretionary adjustments in usage in
the short run and, if high bills continue, contemplating purchase
of efficient appliances, new landscaping, and so on in the longer
run. This type of behavior may look like a price response, but
actually it does not have to be mediated by knowledge of rates. 7
This issue is not trivial in an era of rate reform. Many power
and water companies are switching from declining rate block schedules to ascending or increasing rate block schedules. This conversion is considered to offer incentives to conserve, since if use

Analysis of Infrastructure Debt

78

increases, the bill goes up more rapidly. Until the problem of


deciding which aspect of the rate schedule elicits action is solved,
it does not seem possible to fine tune predictions of how rate
restructuring will affect usage. Our fallback position in conceptual
discussions is the standard uniform price model; empirical work,
on the other hand, usually employs some type of average price or,
if most household purchases are contained within a broad rate
interval in a season, the marginal rate applying to most households.
To the extent that these categories or price variables move together, as in the usual rate increase that multiplies rates and
charges by the same factor (e.g., a 10 percent rate increase), there
is no problem. As soon as there is some claim to be able to discriminate the effects of rate restructuring, however, these conceptual and statistical matters become controlling.
Statistical Estimation Issues
Along with conceptual problems, there are issues concerning
how regression or curve-fitting can get around the facts that the
applicable rates are determined by the quantity used and that the
quantity used is influenced by the rates. This is called simultaneity
in the technical language of econometrics. Rate effects, of course,
are part of the deterministic portion of time series on electric power
and water purchases over time. To assess them, comparisons between similar customers facing different rate levels must be made,
either across geographically distinct customer service areas or
through some period of time in which rate levels for a particular
service area have varied appreciably. The topic has been extensively treated in the literature, and a consensus appears to be
evolving concerning an approximation technique. 8
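The flavor of such techniques can be conveyed with a stylized two-stage least squares sketch on synthetic data; this generic instrumental-variables construction is meant only to illustrate the idea behind the methods cited in note 8, not to reproduce them.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Quantity depends on price, but the realized average price also moves
# with quantity through the rate schedule (simultaneity). An exogenous
# instrument -- say, the schedule's rate at a fixed reference quantity --
# identifies the price effect. All data here are synthetic.
n = 200
instrument = rng.uniform(1.0, 3.0, size=n)   # exogenous rate level
demand_shock = rng.normal(0.0, 1.0, size=n)
price = instrument + 0.2 * demand_shock      # price partly reflects demand
quantity = 20.0 - 2.0 * price + demand_shock # true price coefficient: -2.0

# Stage 1: project price on the instrument
Z = np.column_stack([np.ones(n), instrument])
price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

# Stage 2: regress quantity on the fitted price
X2 = np.column_stack([np.ones(n), price_hat])
b_2sls = np.linalg.lstsq(X2, quantity, rcond=None)[0]

# Plain OLS for comparison (biased by the simultaneity)
X1 = np.column_stack([np.ones(n), price])
b_ols = np.linalg.lstsq(X1, quantity, rcond=None)[0]

print(f"OLS price coefficient:  {b_ols[1]:.2f}")   # pushed away from -2.0
print(f"2SLS price coefficient: {b_2sls[1]:.2f}")  # close to -2.0
```

The first stage replaces the endogenous price with its projection on the exogenous instrument, purging the simultaneity bias visible in the plain OLS estimate.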
Conservation Effects
Over long periods of time, price or rate increases provide incentives to substitute more resource-efficient equipment for the
current stock. Thus, long-term price elasticities are higher than
short-term price elasticities. Indeed, studies dating from the 1970s
suggest that long-run electricity demand for residential and com-

mercial users can be price elastic, a finding that is controversial in the urban water demand field.9
What does seem clear is that merely having the technical opportunity to conserve is a less potent influence on demand without
price or rate incentives. Often, public discussion of more efficient
air conditioners or washing machines is in terms of end use savings
predicted on the basis of engineering criteria. A household runs
an air conditioner an average number of hours in a summer day.
Hence, the logic goes, the electricity saved should be predictable
from technical characteristics. Econometric studies have detected
a flaw in this argument, however. If the air conditioner is more
energy efficient, the marginal cost of using it during peak demand
periods may be lower. Some people, recognizing this, will increase
its hours of operation. This is sometimes called the rebound effect,
and has been documented also with respect to the installation of
insulation and thermostat settings.10 Thus, to trigger the full savings
potential of resource-efficient appliances, price or rate increases
may be necessary.11

ACCOUNTING FOR UNCERTAINTY


Planning studies often try to cope with uncertainty about future
electric power or water demand by presenting high, medium, and
low forecasts. These ranges might be taken as a starting point for
developing a stochastic model of demand suitable for risk simulation, except that blind application of such ranges ignores responses of consumers to rate effects that may become necessary
to support coverage ratios.
There are basically two approaches to this problem. One approach is to use historic data and develop multivariate time series
models of demand, thereby capturing deterministic components
such as price effects and stochastic elements such as autocorrelation
between usage levels over time. Many electric power utilities and
some water utilities have developed econometric demand models,
often disaggregated by customer class and primarily for forecasting
applications. Some recalibration of these models may be required,
where attention should be given to the "error term," so that the
structure of random components in demand, which define the band
of variation of demand over time, is adequately represented.


The other approach relies more on judgment and, possibly, demand studies produced for comparable communities. A simple,
often defensible form of a synthetic demand relationship is
qt = ERt + pqt-1 + et,    qt = Qt - C(t)    (4.5)

where qt is the deviation in per capita demand (Qt) from some trend C(t) in period t, E is the price coefficient operating against the rate level Rt, E < 0, p is the autocorrelation coefficient against the previous period's per capita deviation in usage from trend qt-1, and et is a white noise process. Then, total utility system demand equals population in year t multiplied by Qt. The price coefficient
E in this equation can be based on empirical studies from the utility
service area or from documentary sources. Statistical research also
should establish a reasonable value for the autocorrelation coefficient (which, conceivably, may be zero) and the variance of the
random error term et (which may be a more complex moving
average). It can be noted, in general, that first order autocorrelation is a component of many behavioral time series, perhaps the
major component. 12 The trend C(t) can incorporate judgmental
factors, such as how conservation is anticipated to affect the trend
in per capita usage. Since community income changes gradually,
its influence on demand also can be summarized in the trending
specification. Weather, on the other hand, exhibits considerable
short- and long-run variability, within the context of general seasonal patterns, contributing to the random shock component et.
The interrelation between the level of per capita demand over
adjacent and nearby periods represented in the autocorrelation
coefficient p can be related to the business or weather cycles.
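A minimal simulation of equation 4.5 shows how these pieces combine in a demand path suitable for risk simulation; every parameter value below is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=9)

T = 20                         # simulation horizon, years
E = -0.8                       # price coefficient (E < 0), assumed
rho = 0.5                      # autocorrelation coefficient p, assumed
sigma = 1.5                    # std. dev. of the white-noise term et, assumed

rates = np.full(T, 3.0)        # rate level Rt
rates[10:] = 4.0               # a rate increase in year 10
trend = 100.0 - 0.3 * np.arange(T)   # assumed trend C(t), e.g., conservation

q = np.zeros(T)                # deviation of per capita use from trend
for t in range(1, T):
    q[t] = E * rates[t] + rho * q[t - 1] + rng.normal(0.0, sigma)

per_capita = trend + q                         # Qt = C(t) + qt
population = 50_000 * 1.02 ** np.arange(T)     # assumed 2 percent growth
system_demand = population * per_capita        # total utility system demand

print(np.round(per_capita, 1))
```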
The simplest approach of all, of course, is to effect a determinate, nonstochastic adjustment in customer usage levels according
to a price elasticity judged to be representative of the service area.
This crude method is perhaps superior to assuming that demand
patterns will be completely inflexible to changes in rate levels.
CONCLUSION
Demand patterns of individual customers contribute to revenue
risks in at least two respects. First, underestimates of the price


responsiveness of demand translate directly into similarly sized
errors in revenues. A point estimate of such errors in the revenue
estimate is produced by the product of the percentage change in
prices or rates and the price elasticity. Second, there are limits to
revenue enhancement through rate or price increases because extreme rate increases may drive demand into the price elastic region.
These processes are subject to provisos relating to how people
actually respond to the schedule of rates typically governing sales
of water and power, how these rate effects may be captured in
statistical terms, and how long-term price impacts can be sorted
out. An analysis is on safest grounds when all the possibly relevant
rate factors (the marginal price, the bill, and the average price)
move in concert. This occurs with rate increases in which all rates
and charges increase proportionally.
Although after any particular price increase, stochastic factors
can wash out the effect of consumer price responses, over the long
run and statistically, price effects have an impact on revenue and
should be included in the analysis of financial risks. Furthermore,
diminishing supplies of energy in the form of coal or natural gas,
ever more scarce land for reservoirs and catchment areas, and new
water requirements may lead to higher relative prices for electricity
and water. Hence, risks of perverse total expenditure or revenue
effects from rate increases are likely to increase over coming decades.
NOTES
1. Earlier statistical studies of water price effects are reviewed in John
J. Boland, Benedykt Dziegielewski, Duane D. Baumann, and Eva M.
Opitz, Influence of Price and Rate Structures on Municipal and Industrial
Water Use, Contract Report 84-C-2, Engineer Institute for Water Resources, U.S. Army Corps of Engineers (Washington, D.C.: U.S. Government Printing Office, June 1984). J. Kindler and C. S. Russell have
useful discussions of industrial and agricultural water demands, as well as
modeling methodology. See their Modeling Water Demands (Orlando,
FL: Academic Press, 1984). An early but still pertinent review in the
electric power demand field is Lester Taylor, "The Demand for Electricity: A Survey," Bell Journal of Economics 6, no. 1 (1975): 74-110. For
a more recent synopsis, see the University of California at Berkeley, Price
Elasticity Variation: An Engineering Economic Approach, EM-5038, Final


Report (Berkeley, CA: Electric Power Research Institute, February
1987).
2. Matters are complicated by the practice of selling water and electric
power according to a schedule of rates, but a similar generalization applies.
See C. Vaughan Jones, "Nonlinear Pricing and the Law of Demand,"
Economics Letters 23 (1987): 125-128.
3. Another definition of the price elasticity of demand uses the differential calculus:

e = (dQ/dP)(P/Q)

This is simply the limit, in the mathematical sense, of the arc elasticity defined in the text as the percentage change in the price becomes smaller and smaller.
4. This follows in a straightforward way from the definition of price elasticity. Thus, if demand over some interval (q1, q2) is price inelastic, this means that

{(q1 - q2)/(q1 + q2)}/{(p1 - p2)/(p1 + p2)} > -1

and

{(q1 - q2)/(q1 + q2)}/{(p1 - p2)/(p1 + p2)} < 0

Multiplying the first expression through by the (negative) price ratio, it readily follows that

(q1 - q2)(p1 + p2) < (p2 - p1)(q1 + q2)

which, after expanding and canceling the cross terms, reduces to

q1p1 < q2p2

where these are the initial and final expenditures, respectively. The second expression guarantees, with p2 > p1, that q2 < q1: the quantity demanded falls while expenditure rises.
5. The exception is a logarithmic demand curve.
6. Thus, despite a recall initiative triggered by rate restructuring in
the 1970s, only 21 percent of several hundred persons responding to a
question about the type of rate schedule applied for water sales in Tucson
knew that this schedule was increasing block rates. See Donald E. Agthe,
R. Bruce Billings, and Judith M. Dworkin, "Effects of Rate Structure
Knowledge on Household Water Use," Water Resources Bulletin 24 (June
1988): 627-630.


7. Along these lines, it is interesting that research seems to indicate
the viability of a hybrid price hypothesis, that is, responses that fall between marginal and average price responses. See, for example, David L.
Chicione and Ganapathi Ramamurthy, "Evidence on the Specification of
Price in the Study of Domestic Water Demand," Land Economics 62
(February 1986): 26-32; and Michael L. Nieswiadomy and David J. Molina, The Perception of Price in Residential Water Demand Models Under
Decreasing and Increasing Block Rates, paper for the 64th Annual Western
Economics Association International Conference, June 21, 1989.
8. This is the technique of instrumental estimation. A good exposition
is presented in Steven E. Henson, "Electricity Demand Estimates Under
Increasing Block Rates," Southern Economic Journal 51 (July 1984): 147-156. See also C. Vaughan Jones and John R. Morris, "Instrumental Price
Estimates and Residential Water Demand," Water Resources Research 20
(February 1984): 197-202.
9. Kent Anderson computes the long-run residential electricity price
elasticity to be -1.2. Residential Demand for Electricity: Econometric
Estimates for California and the United States, R-905 NSF (Santa Monica,
CA: Rand Corporation, January 1972). D. Chapman, T. Tyrell, and T.
Mount report a commercial long-run electricity demand price elasticity of
-1.5. "Electricity Demand Growth and the Energy Crisis," Science 178
(November 1972): 703-708. Philip H. Carver and John J. Boland find
long-run price elasticities to be higher than short-run price elasticities but
still in the inelastic range. "Short and Long-Run Effects of Price on Municipal Water Use," Water Resources Research 16 (August 1980): 609616.
10. See, for example, Jeffrey A. Dubin, Allen K. Miedema, and Ram
V. Chandran, "Price Effects of Energy-Efficient Technologies: A Study
of Residential Demand for Heating and Cooling," Rand Journal of Economics 17 (August 1986): 310-325.
11. Raymond S. Hartman and Michael J. Doane, "The Estimation of
the Effects of Utility-Sponsored Conservation Programmes," Applied
Economics 18 (1986): 1-25; and Raymond S. Hartman, "Self Selection
Bias in the Evaluation of Voluntary Energy Conservation Programmes,"
Review of Economics and Statistics 70 (August 1988): 448-458. These two
articles argue that, in many instances, the effects of nonprice conservation
measures have been overestimated.
12. See John M. Gottman, Time Series Analysis: A Comprehensive
Introduction for Social Scientists (New York: Cambridge University Press,
1981).


5
Revenue Risk: The Customer Base
Financial risks are associated with population in a utility service
area. If population growth is slower than anticipated, the costs of
capacity investments will be paid by fewer customers. This can
require rate adjustments and can be politically volatile. In some
instances, efforts to compensate for revenue shortfalls resulting
from slow growth may push rates into the price elastic range or
trigger customer protest and resistance.
Attention to this type of risk is recommended by changes in
fertility and death rates, family formation, and the age structure
of the U.S. population. After World War II, there was an increase
in family formation, and the U.S. birth rate surged. Five decades
later, the demographic picture looks very different. Many central
locations in U.S. metropolitan areas are losing population. Sudden
reversals have affected whole regions, such as portions of the west
around the Rocky Mountains. At least one instance of an electric
utility being pushed into receivership by loss of energy investment
projects and subsequent depopulation can be cited. Depopulation
also has been discussed as a problem with respect to financing
multibillion-dollar rehabilitation of older utility systems in core
urban areas.
A risk that is possibly more potent, because it is more subtle,
is gradually slowing population growth. Studies of the accuracy or
errors of forecasting models suggest that turning points (the timing of a switch from positive to negative growth of a variable) are the


hardest item to predict. Thus, how slower growth will play itself
out presents a formidable problem, creating substantial risks for
central facilities promising economies of scale at the cost of excess
capacity in the near term. Yet gradually slowing population is the
basic forecast for the foreseeable future and is related to a number
of factors. The current low U.S. birth rate developed in the 1970s.
At the same time, advances in modern medicine and lifestyle
changes have led to longer lives for nonminority populations. One
consequence is that the U.S. population is aging. An older population, in turn, is more likely to be settled, to migrate less. Thus,
not only is population growth for the nation anticipated to slow in
the next century, possibly diminishing in absolute numbers, but
sizeable migration, which has buoyed population growth in many
areas over past decades, cannot be counted on in the 1990s or the
first decade of the twenty-first century.
This chapter confronts these problems with information that may
help in the evaluation of demographic projections and proposes
ways to represent uncertainty and inevitable variability in population time series. The following section considers evidence relating
to the accuracy of economic and employment projections that, by
general consensus, drive population growth in a community or
region. There is agreement that population forecasts are not very
accurate, especially when longer time periods and smaller geographic or population units must be considered. Then, a synopsis
of standard population projection techniques commonly applied
by agencies supplying utilities with forecasts is presented. The
discussion thereafter advances suggestions regarding stochastic
modeling of population change for purposes of risk simulation.
These methods run the gamut from extremely sophisticated multivariate time series models to simple stochastic models that capture subjective estimates of high, medium, and low growth.

THE ACCURACY OF ECONOMIC AND POPULATION FORECASTS
The single most important factor in population growth probably
is the economic situation. Employment opportunities operate as a
magnet for new migration, supporting the rate of family formation
and the natural increase of the population. Lacking jobs, communities and whole regions (such as Wyoming, Idaho, Montana, and
the Dakotas) experience significant depopulation. Unfortunately,
there is no golden road to forecasting the size of the workforce, despite the availability of mathematically complex econometric forecasting models.
Most planners and engineers are aware of a range of private and
public sector forecasting models offering some guide as to what
course the economy, regions of the nation, and various industrial
sectors will take over the short, medium, and long term. Data
Resources, a Cambridge, Massachusetts, firm founded by the late
Otto Eckstein, is preeminent among the private services that typically allow on-line access to forecasts and associated databases on
an annual subscription basis. The Bureau of Economic Analysis
OBERS model is one of the main public long-range economic
forecasting models. It utilizes a "step-down" method in which
national economic growth is forecasted based on manpower projections from the Bureau of Labor Statistics and other data. National projections are then allocated, on some basis or other, to
individual states and to industrial groupings.
Since 1968, records have been kept by the American Statistical
Association and the National Bureau of Economic Research on
the accuracy of about fifty forecasting operations. In reviewing
these, William Ascher notes that macroeconomic models produced
errors "about a quarter of the magnitude of change for nominal
quarterly forecasts, and about an eighth of the change for nominal
annual forecasts," while for real GNP (gross national product)
changes "the annual errors are about a third as great as the annual
changes." 1 Ascher, furthermore, presents evidence that forecasting accuracy has not improved appreciably since development of
the first multiequation econometric models in the 1950s.
Persistent and large errors in economic forecasting models were
recognized during the adjustment to the energy crisis of the early
1970s. One reaction was examination and comparison of alternative forecasting methods. Among the most comprehensive of these
studies was led by Spyros Makridakis, who concludes that no matter what forecasting technique is applied and regardless of its periodization (monthly, quarterly, or annual), forecast errors as
measured by the mean absolute percentage error rise to 20 to 30 percent
after about five forecast periods.2

Since jobs and population are linked, population forecasts evince
similar size errors. At the level of U.S. Census projections for the
national economy, errors in the range of 20 to 30 percent in forecasted to actual growth rates can be documented for periods five
to thirty years after a projection is published.3 Errors increase for
smaller population units and subregions of the nation.4
Population growth is not just a matter of job availability but has
an inner dynamic depending on adjustments in fertility, mortality,
and the age structure. Thus, the expanding economy and high rate
of family formation led to high birth rates and an age cohort, the
"Baby Boom," that swelled college enrollment and real estate
sales in the 1950s, 1960s, and early 1970s. Baby Boomers have
controlled various markets and investments in the economy and
society as they have matured. First, there was a surge in the demand for public schools, then colleges experienced extraordinary
expansion in the 1960s. As the Baby Boomers hit the housing
market in the late 1960s and 1970s, home and land costs escalated.
Curiously, this generation was itself reluctant to raise children. For
one thing, the recent entry of large numbers of women into the
workforce has been associated with postponed childrearing and
smaller families. Lower fertility rates, combined with a bulge in
the middle-age population segment, and extended life expectancy
for segments of the population with access to jobs and medical
care result in a gradual drift upward of the median or average age
of the population. The other major demographic force, migration, also is affected by a generally aging population. Migrants
from one region to another in search of employment typically are
younger, usually in the twenty-four to thirty-five age group, although there is also some movement of the sixty-and-over crowd
to retirement residences, communities, and, eventually, facilities
and places where they may receive full care.
The new population trends add up to a startling fact. In many
areas of the nation and in many communities, the future will not
be like the past. The U.S. Census Bureau, by the mid-1980s,
began to forecast a leveling off of national population in the first
decades of the next century followed, perhaps, by a period in which
the total population will be absolutely reduced. In cities of the
eastern seaboard and midwest, older, core urban areas already are
experiencing population loss.


POPULATION FORECASTS: BASIC METHODS


Electric power and water utilities often rely on external organizations such as cities, counties, regional, and state authorities
for population forecasts. These forecasts usually are based on techniques such as: (1) extrapolation, curve-fitting, and regression-based techniques; (2) ratio-based techniques; (3) land-use techniques; (4) economic-based techniques; and (5) cohort-component
techniques. Each has strengths and weaknesses, discussion of
which is helpful in establishing an idea of the confidence nonexperts
should place in these forecasts.
Extrapolation techniques often are used for the short term and
generally involve projection of total population. A common assumption is that past rates of change can be linearly extrapolated
into the future. Typically, high, medium, and low forecasts are
generated by varying an underlying annual growth rate. In many
communities, however, simple extrapolation of patterns from the
1960s and 1970s will not produce a plausible account of population
change in the 1990s or thereafter.
Ratio-based and land-use-based techniques are widely employed
in projecting populations of component areas within a larger region
for which total population projections have been obtained. Using
alternative ratios, trends in ratios, density limits, and other similar
factors, these methods can be applied in small jurisdictions in cities
or counties. Their accuracy depends, in large part, on the accuracy
of the projections for the larger area.
Economic-based procedures use projections of trends in economic factors, usually employment, to project population change
over the projection period. There is a range of such models from
those using simple population to employment ratios, such as the
U.S. Bureau of Economic Analysis model, to those that balance
employment supply and demand, based on separate economic and
demographic models, and forecast migration. Some account of
future economic conditions seems essential in developing any picture of future population growth, although in the final analysis
these methods are no more accurate than the economic forecasts
on which they are based.
Cohort-component procedures utilize the fact that populations
change as a result of fertility, mortality, and migration. Because

90

Analysis of Infrastructure Debt

rates for these components vary by age and sex, separate assumptions are developed about each age group. The most difficult component to project in subnational projections is migration,5 and most
cohort-component procedures utilize either past migration rates
or absolute numbers of migrants. The advantage of cohort-component procedures is their conceptual completeness relative to the
demographic processes and population structure. Their disadvantages derive from the fact that only population factors are considered in the projection of future events. Most texts on demography
acknowledge that cohort-component or cohort-survival methods
are not more accurate than simpler extrapolations or judgmental
estimates, although they have the potential to provide a better
picture of how trends may play themselves out.

ACCOUNTING FOR UNCERTAINTY


There are several methods to account for uncertainty in population projections. These include use of sensitivity analysis and the
construction of confidence intervals, either with empirical data
derived from errors of past population forecasts or from stochastic
models in which age-specific fertility and mortality are random
variables. 6 The choice of methods depends, in large part, on the
assessment of how stable population change patterns are within a
particular utility service area. Thus, time series models are appropriate when past variations of population parameters are anticipated to be similar to those in the future. When there are
substantial shifts in age structure, fertility, and mortality, stochastic
models incorporating judgmental factors may be the best way to
assess the likely range of variation of future population. Stochastic
models also are recommended for smaller population areas, such
as communities, as opposed to states or the nation as a whole.
Hence, a risk simulation method that accommodates current expectations of future population change without holding the future
to past patterns may be advisable for utility service area population.
When high, medium, and low population forecasts are provided,
a kind of range estimation is available to simulate the possible
future population growth in an area. The range of population
growth rates can be identified, and population trajectories can be
generated depending on a sampling procedure from this established range.

Table 5.1
Typical Presentation of Population Forecasts in High, Medium, and Low Variants

Year          High       Medium          Low
1995       200,000      200,000      200,000
2000       231,855      226,281      210,202
2005       266,184      249,833      220,924
2010       301,163      272,471      232,193

As an illustration, Table 5.1 shows high, medium, and low population projections for a fifteen-year period for a hypothetical community whose size in 1995 is 200,000 persons. These
numbers can be taken to imply different underlying growth rates
in five-year periods. Thus, the increase in population for 1995 to
2000 in the high forecast presented in the first column of Table
5.1 can be produced with an annual growth rate of 3 percent. Let
us assume that the high and low projections, therefore, define the
bounds on expected population growth, much the way previous
range estimates established bounds for other risk factors. Then,
given these bounds, how do we impute a probability distribution
to the implied range of growth rates for purposes of a risk simulation?
Since there seldom is any reason to believe one number rather
than another, when these figures pertain to abstract events (e.g.,
growth) several years hence, the uniform probability distribution
seems a natural choice. As noted in the previous chapter, this is
a probability density function that assumes that all values between
a given lower and upper bound are equally likely to occur.

Now a trick can be employed to create interdependency between
population growth rates in adjacent time periods, as is characteristic of empirical growth rate series. Instead of developing a time
series analysis to determine the coefficient of autocorrelation, say,
this alternative approach starts with preassigned and absolute limits
for the annual growth of population over a period. A second condition requires that population growth in a subsequent period be
sampled from an interval around population growth in the previous
period. Thus, referring to Table 5.1, this would mean that in year
1 of the simulation, a growth rate is randomly sampled from the
bounds 1 percent to 3 percent per year. Depending on what value
is selected for this first growth rate g1, the second growth rate g2
will be sampled from a permissible growth rate range subject to
the condition that |g1 - g2| < k, where, for example, 0 < k < 1
percent. Similarly, each subsequent growth rate will be sampled
according to a uniform distribution over the range indicated by
the population forecasts in the first and third columns of Table
5.1, subject to the condition that the absolute value of its difference
with the preceding growth rate is less than some number k. The
effect of k on the simulation can be seen in Figure 5.1. The "tightly wound" series, indicated by the boxes in the diagram, has a smaller
value of k (0.25 percent) than the more variable series indicated by
cross marks (k = 1 percent). Here, selection of the time interdependency parameter is partly a matter of judgment but can be
informed by reference to past series of growth rates.7

Figure 5.1
Stochastic Models of Population Growth
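A minimal sketch of this sampling scheme, written in Python with the Table 5.1 bounds (1 to 3 percent) and k = 0.25 percent as illustrative inputs, follows; the function names are our own.

```python
import random

def growth_path(n_years, g_low, g_high, k, seed=None):
    """Uniform sampling of annual growth rates within [g_low, g_high],
    with each year's rate held within +/- k of the previous year's rate
    (the time interdependency condition |g1 - g2| < k)."""
    rng = random.Random(seed)
    rates = [rng.uniform(g_low, g_high)]
    for _ in range(n_years - 1):
        lo = max(g_low, rates[-1] - k)
        hi = min(g_high, rates[-1] + k)
        rates.append(rng.uniform(lo, hi))
    return rates

# Bounds of 1 to 3 percent (implied by Table 5.1) with k = 0.25 percent.
population = 200_000
for g in growth_path(15, 0.01, 0.03, 0.0025, seed=1995):
    population *= 1.0 + g
```

Repeating this loop many times, with a different seed each run, generates the family of population trajectories the risk simulation requires.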
Application of this method implicitly assumes there is some homeostatic process coming into play when growth nears its upper or
lower bound. This is reasonable because communities growing very rapidly experience various types of diseconomies that tend to discourage further increases in the growth
rate. On the other hand, citizens usually mobilize to halt dramatic
drops in population growth rate in their area.
Another note may be added with regard to disaggregation of
the population growth model. Disaggregation beyond the level of
total population may be advisable if migration plays a major role
in population change in the utility service area. Population change
in a geographic area is a result of natural increases through births,
deaths, and the balance of immigration and emigration. It is widely
recognized that some of these components tend to be more volatile

than others. Of these components, the death rate appears to be
the most stable, reflecting long-term trends toward increasing life
expectancy due to medical advance, except in certain inner-city
and disadvantaged populations. Fertility rates fluctuate more; the
Baby Boom of the immediate post-World War II years and the
baby bust of the 1970s being notable examples.8 In relative and
sometimes in absolute terms, however, migration rates tend to be
the most volatile element of regional or local population change
and are hardest to predict. Thus, the range of variation of migration
can be large compared to the total population change contributed
by natural increase, the net of births over deaths.9 In this case,
separate stochastic models for natural increase in the population
and migration may be desirable.

CONCLUSION
Water and power utilities face a formidable planning problem,
insofar as they attempt to have new capacity ready in a timely
fashion for population growth in a service area.
Risk simulation has special advantages in this context since, as
noted earlier, the accuracy of population forecasts is low and decreases rapidly with the length of the forecast period. Given future
prospects in many U.S. communities, there is need to consider
financial models in which the customer base may peak and then
shrink, models in which large components of pure uncertainty,
stochastic variation, and jumps or shifts in behavior can be incorporated. Typically, these will allow for random variation and time
interdependency in population growth rates or in both migration
and natural increase of the population.
The modeling choices again revolve around the application of
judgment or formal statistical analysis of historic data. The stochastic model suggested here is a variant of range estimation. Maximum ranges reasonable for population growth in a series of years
are established first. Then, the analyst focuses on the time interdependency with which he or she feels comfortable for these stochastically generated series of population growth rates.
While this method leaves considerable discretion in the model
setup, it is recommended in risk simulation because, as this chapter
has stated, the future is likely to be different from the past when
it comes to American population patterns.
NOTES
1. William Ascher, Forecasting: An Appraisal for Policy-Makers and
Planners (Baltimore: Johns Hopkins University Press, 1978), p. 74.
2. Spyros Makridakis, "The Art and Science of Forecasting: An Assessment and Future Directions," International Journal of Forecasting
2 (1986): 15-39.
3. Michael A. Stoto, "The Accuracy of Population Projections," Journal of the American Statistical Association 78 (March 1983): 13-20.
4. See Donald B. Pittenger, Projecting State and Local Populations
(Cambridge, MA: Ballinger, 1976).
5. Errors in national population forecasts are most closely related to
variability in fertility, in part because of immigration barriers to migration.
At the regional level, however, migration becomes the dominating influence in many cases.
6. Peter Pflaumer, "Confidence Intervals for Population Projections
Based on Monte Carlo Methods," International Journal of Forecasting 4
(1988): 135-142.
7. See Juha M. Alho and Bruce D. Spencer, "Uncertain Population
Forecasting," Journal of the American Statistical Association 80 (June
1985): 306-314.
8. Some researchers have proposed a generational argument based on
the relative standard of living to account for the decline in fertility in
recent decades. Fertility is said to be related to the difference between
currently attainable standards of living and the standard of living of the
family of origin. Given declining real incomes per worker since about
1970, younger families are by this account less prone to raise children.
This explanation has some credibility in light of widely reported difficulties
of young couples, brought up in the suburbs, in buying their first home.
But does the theory explain the surge in births in inner city areas with
ethnic populations?
9. This is particularly true in relatively sparsely populated areas of the
west and the southwest. In Colorado, for example, annual net migration
since 1950 has fluctuated between a loss of about 24,000 persons and a
net gain of 90,000 persons while natural increase has had bounds varying
between 18,000 and 35,000 people. See Colorado Division of Local Government, Colorado Population Growth (Denver: Colorado Division of
Local Government, 1989), Table 1.


6
Applications
This chapter presents five applications of risk simulation to utility
investment evaluation. The topics include
1. contract risk and the potential loss of a bulk customer,
2. the impact of sinking funds on financial risk,
3. risk comparisons of alternative debt service schedules,
4. risk profiles for the present value of capacity expansion plans, and
5. benefit-cost analysis and financial risks.

The discussion focuses on how simulation of multiple risk sources can be orchestrated, integrating material from the preceding chapters. While the examples can be refined with respect to encoding
probabilities or representing components like population growth
or consumer demand, the detail developed illustrates the gains in
insight supported by this type of analysis.
First, we consider a simple binary risk relating to renewal of a
contract for bulk electric power. This allows us to present a cash
flow model and outline steps in developing a relatively simple
probability estimate of financial risk. The second example focuses
on the advantages of sinking funds and requires a more dynamic
or process-oriented analysis of default risk. A third and related
application looks more intensively at level and tipped amortization
arrangements, where construction cost overruns, rate shock, in-

98

Analysis of Infrastructure Debt

flation of O&M costs, and variability in the growth of revenues


are incorporated into the analysis. This is the most ambitious simulation model developed in these pages. Then, we turn to the topic
of risk simulation in capacity planning, where the criterion is the
minimization of the present value of costs. We consider the tradeoff
between lower unit costs from a large facility having economies of
scale and flexibility of response allowed by smaller, high unit cost
capacity investments. Brief mention is made, in this regard, of
stochastic aspects of benefit-cost analysis. The chapter finishes with
a summary of conclusions indicated by these applications.
Several applications incorporate techniques for simulating the
overall variability and trend in the general price level applying to
expenses. These inflation models appeal to time series analysis and
the observation that periods of high inflation or deflation tend to
occur in runs of several years' duration.

POTENTIAL LOSS OF A BULK CUSTOMER


The following example, which is based on data from a recent
bond prospectus, concerns a joint-ventured wholesale power company that supplies its product to several municipalities and another
power company according to two different types of contracts. Currently, the cities who created this power company cannot absorb
the output of its single, relatively new baseload plant. Nearly half
the productive capacity of the wholesale producer is currently sold
to another power company under a contract scheduled to expire
in 1995. Table 6.1 displays the standard case cash flow projection
associated with this situation, allowing for gradual increases in
expenses and revenues. The critical year in which the bulk contract
with the other power company must be extended, renegotiated,
or lost is shaded by a vertical gray band.
If the contract for bulk power is not renewed after 1995, revenues
will fall 45 percent. The rate covenant requires increases in the
rates quoted to the municipalities to maintain the debt service
coverage ratio. Analysis suggests that rates in mills per kilowatt
hour (kWh) for municipal customers would have to increase by at
least 80 percent to make up the shortfall in 1995. The question is
whether this would push consumer responses into the price elastic

demand region and, short of that, whether such a rate increase
would trigger consumer protest.

Table 6.1
Cash Flow Model for Wholesale Producer ($1000)

                                     1990      1991      1992      1993      1994      1995
OPERATING REVENUES
  Primary Service Area (retail)    60,000    61,200    62,424    63,672    64,946    66,245
  Bulk Contract                    45,000    46,800    48,672    50,619    52,644    54,749
OPERATING EXPENSES
  Operation & Maintenance          35,000    37,100    39,328    41,686    44,187    46,838
  Administration                    4,500     4,770     5,056     5,360     5,681     6,022
INCOME
  Net Operating Income             65,500    66,130    66,714    67,246    67,722    68,134
  Other Income (Invested Funds)    12,000    12,000    12,000    12,000    12,000    12,000
DEBT
  Debt Service on Bonds            62,000    62,000    62,000    62,000    62,000    62,000
  Debt Service Coverage Ratio        1.25      1.26      1.27      1.28      1.29      1.29
DEMAND AND RATES
  Annual kWh (million kWh)        1,500.0   1,530.0   1,560.6   1,591.8   1,623.6   1,656.1
  Mills per kWh (primary area)         40        40        40        40        40        40
  Bulk Contract Rate (mills/kWh)   50.000    52.000    54.080    56.243    58.493    60.833

                                     1996      1997      1998      1999      2000
OPERATING REVENUES
  Primary Service Area (retail)    67,570    68,921    70,300    71,706    73,140
  Bulk Contract                    56,939    59,217    61,586    64,049    66,611
OPERATING EXPENSES
  Operation & Maintenance          49,648    52,627    55,785    59,132    62,680
  Administration                    6,383     6,766     7,172     7,603     8,059
INCOME
  Net Operating Income             68,478    68,745    68,928    69,020    69,012
  Other Income (Invested Funds)    12,000    12,000    12,000    12,000    12,000
DEBT
  Debt Service on Bonds            62,000    62,000    62,000    62,000    62,000
  Debt Service Coverage Ratio        1.30      1.30      1.31      1.31      1.31
DEMAND AND RATES
  Annual kWh (million kWh)        1,689.2   1,723.0   1,757.5   1,792.6   1,828.6
  Mills per kWh (primary area)         40        40        40        40        40
  Bulk Contract Rate (mills/kWh)   63.266    65.797    68.428    71.166    74.012
Fortunately, the initial charges to the municipal buyers in mills
per kWh are relatively low in this case. Because of this and the
fact that wholesale power costs constitute only about 20 percent
of total distribution costs to final users in these cities, chances are
good that an 80 percent rate increase to these municipal distributors
can be absorbed without dramatic effects (this estimate ignores,
for the moment, the price elasticity of consumer responses). On
these assumptions, something like a 20 percent rate increase to
final customers could generate the needed revenues. Given current
levels, this does not seem likely to elicit price elastic responses,
and the total effect on per customer usage might be on the order
of a few percent reduction. If rates increase, in short, revenues
are likely to increase also.
Provisos, therefore, might relate to competing power sources
available to the municipalities.1 There is the possibility that with
an 80 percent jump in wholesale costs, some municipalities might
consider alternative wholesale sources despite their historic relationship with this supplying organization. This possibility is compelling in this case, perhaps, because of substantial excess
producing capacity in contiguous electric power systems.
Imputing probabilities to these contingencies is straightforward.
It involves encoding subjective assessments of key players and
observers. The contract renewal occurs in a single time frame. Its
successful negotiation is associated with a simple probability (e.g.,
0.50), which may vary from period to period before the event,
depending on negotiations, power demands, and so on. An event
tree with suitable probabilities along its branches can neatly summarize the situation as viewed from a contemporaneous perspective.
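A sketch of this encoding, using the 1995 figures from Table 6.1 and treating the 80 percent rate increase and a 3 percent usage reduction as illustrative stand-ins for the effects discussed above, might run as follows.

```python
import random

def expected_1995_revenue(p_renew=0.50, n_trials=20_000, seed=2):
    """Binary contract risk: with probability p_renew the bulk contract
    is extended; otherwise municipal rates rise about 80 percent and
    per customer usage is assumed to dip 3 percent (an illustrative
    stand-in for the 'few percent' effect discussed above)."""
    rng = random.Random(seed)
    retail, bulk = 66_245, 54_749        # 1995 figures, $1000 (Table 6.1)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < p_renew:
            total += retail + bulk       # contract renewed
        else:
            total += retail * 1.80 * 0.97  # rate increase offsets lost bulk sales
    return total / n_trials

print(expected_1995_revenue())
```

Varying p_renew across the periods before the event reproduces the event tree branch by branch.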

EVALUATION OF DEBT SCHEDULES


If the preceding analysis pointed to rate shock as a real possibility, creation of some type of reserve, funded, possibly, by immediate rate increases, would be a logical step. Such reserve or
sinking funds are a classic risk management tactic, and their rationale requires adoption of a perspective that acknowledges expectations of variability in revenues. Sinking funds generally
operate by capturing extraordinary but uncertain revenues to make
debt service payments at future dates. Their funding is more flexible than ordinary repayment of debt, although their creation may
be mandatory.
Sinking funds can reduce default risk in various ways, depending on how they are structured. One tactic is to reduce near-term financial risk by lowering year-by-year debt service. This
creates, of course, tradeoffs related to the chance that fund accumulations will be insufficient to meet a compensating end-of-period lump sum payment. Another tactic might be to skim accumulations in high income years and use the sinking fund to
make up shortfalls in low income years. Interest earnings on
sinking fund accumulations can be exploited by tax-free municipal issues in some instances. For these reasons, sinking funds
may enhance the chance that debt can be paid off without default, where the relevant probabilities depend on parameters of
(1) capital construction costs for a project, (2) low and high
forecasts for the growth of revenues and expenses, (3) some
grounds for estimating revenue and inflation variability, and (4)
relevant capital market information. Text Box 6.1 summarizes
key parameters.
Our problem is to conduct a "with and without" analysis of a
sinking fund arrangement, estimating the relative impacts on financial risk associated with repaying a $100 million bond.
Without a sinking fund, suppose the bond is amortized over
fifteen years with a level debt service. Assume that subjective
anticipations are that revenue growth will vary between -0.5 and
+3 percent per year. In addition to these overall bounds for revenue growth, it is assumed (perhaps on the basis of inspection of
past records) that year-to-year variation in the revenue growth
rate does not exceed .0025, or one-quarter of one percent. Given
this interdependency, a uniform probability distribution is allowed
to govern selection of the annual revenue growth rates in the
simulation.
Inflation is characterized by a simple time series model. This
first determines the overall trend or expected average increase in
the level of priceshere assumed to be 5 percent per year. Then,
variation around the trend is introduced with a serially correlated residual. This leads to a rate of inflation falling between about 3 and 8 percent per year, applying to production, distribution, and administration expenses. The autocorrelation coefficient of 0.5 corresponds to the perception that inflation and deflation tend to occur in runs of several years' duration.

Text Box 6.1

Construction Cost Estimate: $100,000,000
Payback Period: 15 years
Interest Rate: 10 percent
Debt Coverage Ratio: 1.3
Initial Revenue: $18,500,000
Revenue Growth: low, -1.0 percent per annum; high, 3.0 percent per annum
Inflation: low, 3.0 percent per annum; high, 8.0 percent per annum
Initial Operating Expense: $1,000,000
Interdependency Factor for Revenues: established by assuming variation is within a 0.25 percent band around the previous year's growth rate
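A sketch of such an inflation generator appears below; the residual standard deviation is an assumed value chosen so realizations stay roughly within the 3 to 8 percent band.

```python
import random

def inflation_path(n_years, trend=0.05, rho=0.5, sigma=0.01, seed=None):
    """Trend inflation plus an AR(1) residual. With rho = 0.5, high or
    low inflation persists in runs of several years; sigma is an
    assumed value chosen so realizations stay roughly between 3 and
    8 percent per year."""
    rng = random.Random(seed)
    resid, rates = 0.0, []
    for _ in range(n_years):
        resid = rho * resid + rng.gauss(0.0, sigma)
        rates.append(trend + resid)
    return rates
```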
Net income is calculated as revenues minus expenses, and the
debt coverage ratio is net income divided by the debt service.
The object is to compare the default risk in the case with no
sinking fund with risks likely to be realized when a sinking fund
is in place. This, of course, requires two simulations. Assume,
then, a second situation in which $90 million of the $100 million
bond debt is amortized according to a level debt service. A sinking
fund is set up to pay off the remaining obligation, which, after 15
years at 10 percent interest, amounts to an additional $41,772,482.
The sinking fund is allowed to skim off up to $2 million a year,
whenever the debt coverage ratio exceeds 1.4.
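A compressed sketch of the two simulations follows. The bond amounts, skim rule, and balloon payment are as described above; the residual standard deviation, the reading of the skim rule (skimming only the excess over 1.4 coverage, up to $2 million), and the assumption that fund accumulations earn the 10 percent bond rate are illustrative choices, not specifications from the text.

```python
import random

def level_payment(principal, rate, years):
    """Level annual debt service amortizing a bond."""
    return principal * rate / (1 - (1 + rate) ** -years)

def payback_trial(with_fund, rng):
    """One 15-year realization; returns 1 if coverage falls below 1.3
    in any year, else 0."""
    revenue, expense = 18_500_000.0, 1_000_000.0
    growth = rng.uniform(-0.005, 0.03)   # bounds as stated in the prose
    resid, fund = 0.0, 0.0
    if with_fund:
        service = level_payment(90_000_000, 0.10, 15)
        balloon = 41_772_482.0           # $10M compounded 15 years at 10%
    else:
        service = level_payment(100_000_000, 0.10, 15)
        balloon = 0.0
    default = 0
    for year in range(1, 16):
        lo = max(-0.005, growth - 0.0025)
        hi = min(0.03, growth + 0.0025)
        growth = rng.uniform(lo, hi)     # interdependent revenue growth
        resid = 0.5 * resid + rng.gauss(0.0, 0.01)
        revenue *= 1.0 + growth
        expense *= 1.0 + 0.05 + resid    # inflation of expenses around a 5% trend
        net = revenue - expense
        fund *= 1.10                     # assumed interest on accumulations
        if with_fund and net / service > 1.4:
            fund += min(2_000_000.0, net - 1.4 * service)  # one reading of the skim rule
        due = service + (max(0.0, balloon - fund) if year == 15 else 0.0)
        if net / due < 1.3:
            default = 1
    return default

rng = random.Random(1991)
for with_fund in (False, True):
    p = sum(payback_trial(with_fund, rng) for _ in range(5_000)) / 5_000
    print("with fund" if with_fund else "no fund", round(p, 3))
```

Tabulating the year of each coverage failure, rather than the single 0/1 flag, yields the year-by-year risk profiles discussed next.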
There is a remarkable difference in the default risk in these two
cases. The results are graphed in Figure 6.1, which displays the
risk profiles, or, more precisely, fifteen risk profiles from the no
sinking fund case (the bars with sloping slashes) and fifteen risk
profiles from the sinking fund case (the solid bars in the diagram).
Each bar in this graph indicates the probability of default in a
particular year.

Figure 6.1
Default with and without a Sinking Fund
Consistently positive inflation of expenses, together with the potential for zero or negative revenue growth, appears to create a risk equilibrium in the no sinking fund case: after the first four or five years of the payback period, default risk is relatively constant.
With a sinking fund in force, the annual debt service is lower, but
there is a kind of "balloon" payment that must be made out of
the sinking fund. Accordingly, the risk profiles for this case indicate
lower default risk in the early years of the payback period and
markedly higher risk of default in the final year of the payback
period.
What is best? Overall, the probability that this project is in
default one or more years in the payback period cannot be read
from the chart because the debt coverage may be inadequate in
multiple years in a realization of the variables in the cash flow
model. A second criterion variable must be defined to answer this
question, a variable that is 0 if the coverage ratio is met every year


of the payback period and 1 otherwise. This variable indicates that
the overall probability of default is higher without the sinking fund,
where the financial risk in both situations is exaggerated for purposes of illustration.
We do not include rate adjustments to improve debt coverage
levels in this example. One way this might be justified is by assuming that the analysis is carried out in terms of maximum revenues. In other words, year-by-year demand levels could be set at
price levels determined to fall at the dividing line between inelastic

and elastic price responses. By definition, then, raising rates would
prove self-defeating, and the simulation would indicate an irreducible default risk of some type. The other option is to introduce
an additional component or module to mimic probable rate responses of the utility to information about narrowing coverage
ratios, the approach adopted next.
In any case, the point is to develop an initial estimate of financial
risk. Such an analysis can be refined in various ways. With the
exception of the representation of the time interdependency of
revenues, many of these refinements add little to the point of this
analysis, which is that sinking funds can reduce default risk. The
time interdependency of revenues, however, is critical because
runs of above or below trend earnings have disproportionate effects
on outcomes because of interest on fund accumulations. Accordingly, further attention to the characterization of time interdependency may be warranted, where, for example, time series models
of historic series on utility revenues may be considered.
Note also that there appears to be room to fine-tune the debt
service level and end-of-period payout, in a sense, to optimize or
minimize overall default risk. Thus, along with refinement of the
risk simulation model, attempts to optimize the sinking fund arrangement become increasingly plausible.
Deferred Versus Level Debt Service
A publication of the First Boston Corporation concerning hydroelectric facilities notes that:
There is a significant degree of flexibility available to a hydroelectric
project developer in designing a financing plan and debt structure to best
fit the economies associated with a given project. It is generally believed
in the power development field that hydroelectric projects have the most
tentative economic feasibility in the first four or five years of operation.
. . . The project may not be deemed economically feasible when perfectly
reasonable and accepted capital financing techniques... are left unexplored. . . . It is possible to produce "accelerated," "level," or "deferred"
debt service payments.2
Let us consider, then, the effect of a deferred schedule of debt
service on project feasibility, where a comparison is drawn with a

conventional level amortization. The debt payment schedules to
be compared are shown in Figure 6.2. Schedule B (indicated by
the boxes in the figure) defers debt payment, beginning at a lower
level than the level payments in schedule A (indicated by tick
marks), but rises above the schedule A level at around the middle
of a typical bonding period of fifteen years. Both schedules amortize a $100 million bond over fifteen years. Text Box 6.2 summarizes the assumed parameters, and the following discussion
develops the details of the simulation.

Figure 6.2
Debt Service Schedules
In addition to revenue and price level uncertainty, the application allows for uncertainty in construction costs and rate adjustments and stochastic variation in population growth.
Variability in the demand and inflation series is introduced through time series models. Time series models determine a fuzzy, rather than absolute, band of variation outside of which it is unlikely a variable will be found in a realization. Variability in population growth is developed with a stochastic model such as that described in the previous chapter. The population growth rate is assumed to occupy a band between 0 and 4 percent per year, where a time interdependency factor k = 1 percent is allowed.

Text Box 6.2

Construction Cost Estimate: $100,000,000
Payback Period: 15 years
Interest Rate: 10 percent
Debt Coverage Ratio: 1.3
Initial Revenue: $36,000,000
Initial Per Capita Usage: 90 units per year
Price Elasticity: approximately -0.1 in a range of $1-$2 price per unit
Initial Population: 200,000
Population Growth Trend: 1.02 per annum
Interdependency in Demand and Inflation: determined by a serially correlated residual
Interdependency in Population: determined by k = 1 percent change per year in a band from 0 to 4 percent

Figure 6.3
Cost Overrun Probability
Construction cost overruns. We assume that prior simulations
provide a risk profile for construction cost growth, as indicated in
Figure 6.3. This is generated by a normal distribution, truncated
at the $50 million level. Note there is a high probability of costs
running above the assumed construction cost estimate of $100 million.
Construction costs in excess of $100 million are presumed to be

funded by additional debt amortized according to a level debt
service for fifteen years. Additional realism could be incorporated
at this point by assuming that some of the cost escalation must be
funded by shorter term debt, possibly at higher interest rates. All
debt in this application is assumed to be issued at 10 percent interest, and the analysis disregards sinking funds or construction
contingency allowances.
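A minimal rejection sampler reproduces this kind of cost distribution; the mean and standard deviation below are illustrative assumptions, since the text fixes only the normal family, the $50 million truncation point, and a substantial probability of exceeding the $100 million estimate.

```python
import random

def construction_cost(rng, mean=110e6, sd=30e6, floor=50e6):
    """Normal draw truncated at a $50 million floor via resampling.
    Mean and standard deviation are illustrative assumptions."""
    while True:
        draw = rng.gauss(mean, sd)
        if draw >= floor:
            return draw

rng = random.Random(6)
p_overrun = sum(construction_cost(rng) > 100e6 for _ in range(20_000)) / 20_000
```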
Revenue risk. Revenues are broken down into population, per
capita demand, and rate level components. Bands of random variation are introduced around anticipated trends in demand and
inflation with simple time series models, while a more judgmentally
developed stochastic model produces the population series. Rates
are determined to cover average costs, where instantaneous adjustment is assumed for year-to-year changes in revenue requirements. Consumer price responses are incorporated into the
demand model, where a threshold below which demand is completely inelastic or totally unresponsive to rate change is acknowledged also.
The baseline per capita demand (PCD) per year is a function which represents a minimum demand of fifty units per
year and embodies a price elasticity of around -0.1 when price
varies between $1.00 and $2.00 per unit. Variability around this
level is introduced by a normally distributed random component
with a zero mean and standard deviation of ten units. The first
order serial correlation coefficient is assumed to be 0.3.
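The exact functional form of this demand schedule is not reproduced above, so the following sketch substitutes a hypothetical floor-plus-constant-elasticity form, calibrated so that usage is about 90 units at $1.00 (the initial per capita usage in Text Box 6.2) and the overall elasticity stays near -0.1 between $1.00 and $2.00; treat it as one form consistent with the stated properties, not the author's equation.

```python
import random

def per_capita_demand(price, prev_resid, rng):
    """Hypothetical baseline: a 50-unit floor plus a constant-elasticity
    term giving roughly 90 units at $1.00 and an overall elasticity
    near -0.1 for prices between $1.00 and $2.00. A serially correlated
    normal residual (rho = 0.3, sd = 10 units) adds the random variation."""
    base = 50.0 + 40.0 * price ** -0.225
    resid = 0.3 * prev_resid + rng.gauss(0.0, 10.0)
    return max(base + resid, 0.0), resid
```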
A stochastic model is used to generate the population series,
where the annual population growth rate is assumed to fall between
0 and 4 percent, and year-to-year changes in growth are limited
to 1 percent; that is, if year t growth is 1.5 percent, year t + 1
growth is constrained to be greater than zero percent and less than
2.5 percent. Sampling from these ranges is determined by uniform
probability distributions, as described in Chapter 5.
Rates that are set on an average cost basis are assumed to adjust
instantaneously to revenue and cost conditions. The rate module

110

Analysis of Infrastructure Debt

totals expenses and debt service from a year, applies a factor of


1.3, and divides by the product of total population and per capita
demand for this same year. The resulting rate is adjusted upwards
by a tolerance factor of 15 percent, and a fixed component of $0.50
per unit is added, which roughly corresponds to the fixed charge
portion of the rate schedule. A ceiling on rates of $5.00 per unit,
adjusted by inflation from the first year in the payback period, also
is assumed. This derivation simulates some features of rate setting,
but in allowing for instant annual adjustments, it represents the
best responses possible, given the demand response patterns in the
customer service area. Accordingly, default risks are lower by this
procedure than they might be with response lags and errors in
forecasting the revenue needs of coming years.
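The rate module described in this paragraph translates almost line for line into code; the sketch below follows that description, with argument names of our own choosing.

```python
def set_rate(expenses, debt_service, population, pcd, inflation_index):
    """Average-cost rate setting as described above: a revenue target of
    1.3 times (expenses + debt service), spread over projected sales,
    marked up by a 15 percent tolerance, plus a $0.50 fixed component,
    capped at $5.00 per unit escalated by cumulative inflation."""
    per_unit = 1.3 * (expenses + debt_service) / (population * pcd)
    rate = per_unit * 1.15 + 0.50
    return min(rate, 5.00 * inflation_index)
```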
Expenses. Initial expenses of $1 million are adjusted each year
by an inflation rate that averages three percent annually. Variation
around this trend in the overall level of prices is determined by a
normal random error term and a first order serial correlation coefficient of 0.5.
Risk Profile. The criterion is whether or not the debt coverage
ratio falls below 1.3 in a year. The results are presented in Figure
6.4, which summarizes fifteen discrete, binary variables representing the risk of technical default in particular years of the payback period. The bars with sloping marks in Figure 6.4 represent
year-by-year default risks associated with the level amortization
schedule. The solid bars indicate risks associated with the tipped
debt service alternative.
The early problems with construction costs, it would seem, are
not decisive in this formulation. This is surprising in view of the
immense variance of construction costs and the allowance for relatively rapid population growth. Again we cannot simply add these
annual default risks together to arrive at a total default risk. Hence,
we must add a binary variable to the simulation (0 if no default in
period, 1 otherwise). If we do this, the tipped debt service schedule
embodies the more significant risks. Thus, the overall risk of default is lower with the level debt service. The growth in revenues
is simply not adequate to cover both inflation and the jump in the
debt schedule after year 7. The deterioration in performance of
the tipped amortization schedule at the end of the payback period
is related to the chance that the end of period growth rates are
consistently below trend, while end-of-period inflation is above
trend.

Figure 6.4
Default Risk: Level and Tipped Debt Service
The assumption in presenting this simulation example is that the
ranges and values for key parameters are those best supported by
judgment and available data. In a real world application, this may
or may not be so. Thus, it is interesting that the conclusions indicated by this analysis are robust to changes in the basic modeling

strategy. Instead of judgmental and absolute ranges for population
growth, population may be represented by a time series model.
Providing that this time series model incorporates a comparable
variability and time interdependence, roughly the same results are
obtained. Alternatively, the time series models in the above application can be replaced by wholly judgmental models imposing
absolute bounds on demand growth and so on without fundamentally changing the conclusion, again provided that similar variability and time interdependence are built in.
Related Applications
A comparable simulation framework can apply to questions concerning tradeoffs between new capacity and power or water purchases from other systems. Problems, for example, have been
noted in the northeast region of the nation, where
Most [electric power] purchase agreements are signed using a least cost
alternative formula that provides only minimal savings over the building
of base load units. However, the life of the contracts is anywhere from
one-half to one-fifth the life of a typical base load unit (such as a coal-fired plant), and even shorter if life-extension programs are taken into
account.3
This issue can be explored by a risk simulation in which future
power demands and the costs of new generating units are allowed
to be partly stochastic, given the exact contractual arrangements
in the purchase agreements. If power purchase contracts allow
considerable flexibility in the amounts taken each year, and substantially uncertain and volatile demand is anticipated, this simulation could confirm what might appear as a suboptimal choice
on the part of the northeastern U.S. electric power utilities.
CAPACITY PLANNING
The preceding analysis is concerned with fairly workaday problems in utility finance and business strategy. Let us now move to
what appears to be a higher level of abstraction. The topic is
capacity planning and the utilization of risk simulation in determining least cost capacity plans, where costs usually are measured
in terms of present values. The present value (PV) of a series of
investments (K1, K2, . . . , Kn), where investment Ki is made in year ti, is defined as

PV = K1/(1+r)^t1 + K2/(1+r)^t2 + . . . + Kn/(1+r)^tn

where r is generally taken to equal current or projected interest
rates on debt issues for the projects under consideration.4
Least-cost planning procedures have received attention in regulatory hearings and other contexts in recent years. Although there
are varying interpretations of what constitutes a least-cost plan,5
the primary emphasis is on the minimum present value of costs
criterion. One of the reasons this criterion has received so much
attention is that it lends itself to mathematical optimization or
programming procedures. Because electricity cannot be stored,
optimization of power systems focuses on the mix of generating
resources to serve peak and off-peak demands and the merit order
of operating units.6 Linear programming7 and Monte Carlo
simulations8 contribute to the identification of optimal power loads
and reliability factors. Water resource planning, on the other hand,
focuses more on the sequence of supply projects, their safe annual
yields, and anticipated annual demand, appealing to dynamic programming or similar formulations.9
Another factor recommending the minimum present value criterion is that this criterion identifies the expansion plan with the
lowest financial risks. There are provisos to this observation, and
at least one electric power planning model includes subroutines to
check the financial feasibility of investment series, given financing
from internally generated funds, debt, preferred stock, common
stock sold under nondilution constraints, and common stock sold
below book value.10 Nevertheless, with ready access to debt and
equity markets, the minimum PV path has the lowest default risk,
providing that special risks of construction cost overruns do not
exist and feasibility constraints, such as debt to equity ratios, can
be met. Thus, the minimum PV expansion path produces the maximum present value of net revenues and should lead to favorable
access to financial markets for capital.
Some simple examples of how capacity investments fit into demand projections are presented in Table 6.2, and the following
discussion makes one or two further analytic points regarding the
minimum present value criterion. Then we turn to applications of
risk simulation in this type of planning exercise and develop an
example showing tradeoffs between economies of scale and flexibility of response.
The Minimum Present Value Criterion and Changes in
Interest Rates and Demand Growth
A salient feature of the minimum present value criterion is that
reversals in project evaluation can accompany changes in the discount rate or changes in the rate of growth of demand. Thus, if
the real significance of long-run planning is to determine which
project to build next, immediate construction of a large facility
exhibiting economies of scale might be favored by one regime of
interest and demand growth rates but be inferior for different
values for these variables.
This point can be made straightforwardly, if somewhat abstractly, with simple algebra. Thus, a discussion in the engineering
literature postulates a linear growth of demand and an infinite planning horizon, avoiding valuation problems associated with excess
capacity at the end of a period of analysis.11 It employs the cost
function K(x) = ax^b, where K is the capital cost plus the present value
of all capacity-related O&M costs of providing capacity x; economies of scale are indicated when the elasticity of cost parameter b
< 1. If we assume operating and maintenance costs are negligible,
it is possible, given the discount or interest rate r, to identify the
optimal size facility and intervals t* at which it should be constructed.
For this design period t*, we have the relation
b = rt*/(e^(rt*) - 1)    (6.1)

where e is the base of the natural logarithm system (2.718. . . ) .


Solving this equation for t* (by numerical approximation) implies
the corresponding optimal size for a facility since, ex hypothesi,
we know the demand growth. Interestingly, implicit differentiation
of equation 6.1 indicates that dt*/dr < 0. In other words, the optimal
construction interval t* decreases as the discount rate increases.
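Equation 6.1 has no closed-form solution for t*, but a short bisection routine recovers it; the sketch below pairs b = 0.7 with r = 10 percent purely for illustration.

```python
import math

def design_interval(b, r, tol=1e-9):
    """Solve b = r*t/(exp(r*t) - 1) for t* by bisection. The left-hand
    side falls from 1 toward 0 as t grows, so for 0 < b < 1 (economies
    of scale) the root is unique."""
    f = lambda t: r * t / (math.exp(r * t) - 1.0)
    lo, hi = 1e-9, 1.0
    while f(hi) > b:          # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative pairing (not a value from the text): b = 0.7 and
# r = 10 percent give a design interval of roughly 6.8 years.
print(design_interval(0.7, 0.10))
```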

Table 6.2
Minimum Present Value Capacity Investment Sequences

                                         Period 1    Period 2    Period 3
                                         (Year 0)    (Year 5)    (Year 10)
CASE 1  Demand                               100         130         150
        Capacity Investment ($million)        58      25, 25
CASE 2  Demand                               100         110         120
        Capacity Investment ($million)        25          25
CASE 3  Demand                               100         110         150
        Capacity Investment ($million)        25          58
CASE 4  Demand                               100         120         135
        Capacity Investment ($million)        58

Present Value (10 percent discount): CASE 1, 89.05; CASE 2, 40.52;
CASE 3, 61.01; CASE 4, 58.00


Hence, smaller projects with higher unit costs produce the least
present value expansion path when the discount rate increases.
We can study how acceleration and deceleration in the growth
of demand affect the optimal investment sequence with Table 6.2.
This table presents the optimal capacity investments for several
scenarios of nonlinear demand growth. Investment sequences are
built with two types of capacity projects: a $58 million design with
a capacity of 30 demand units, and $25 million projects with capacities of 10 units apiece. Table 6.2 pairs demand paths and the
costs of the optimal sequence of capacity investments, listing the
present value of these sequences in the last row, assuming a discount rate of 10 percent. There is an initial demand of 100 capacity
units and a planning period of 10 years. Two point estimates of
projected demand are shown in each case, for the end of a 5-year
period and at the end of the planning period.
If demand growth is rapid, as in case 1 in the table, initial
construction of a large project is economic and has the lowest
present value of costs, $89.05 million.
If demand growth is low, as in case 2 of Table 6.2, staging two
smaller capacity additions has the lowest present value of costs,
even when their unit cost, as measured by their total capital cost
divided by their capacity, is fully one and one half times greater
than the thirty-unit project.
Case 3 illustrates the resorting of investments in an optimal
sequence as the growth of demand changes. There is slow initial
growth in demand, followed by a rapid surge in demand. A smaller,
higher unit cost project is best suited to periods of slow demand
growth, and the larger project with economies of scale is best for
more rapid growth of demand.
Finally, case 4 shows a demand projection that traces a path
precisely halfway between the upper and lower bounds established
by the other cases in Table 6.2. This could be average or expected
demand growth and is conceptually often taken to be the standard
case in planning exercises. Building the large project initially produces the lowest present value of costs for this demand trajectory.
Note this "capacity plan" does not satisfy demand at the end of
the ten-year period. There are five additional demand units that
must be met somehow in the average demand scenario of case 4.

Applications

117

In any case, the initial thirty units of demand are satisfied in the
cheapest way by the $58 million project.
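The present values in the last row of Table 6.2 can be checked directly with a few lines of code:

```python
def present_value(investments, r=0.10):
    """PV of (cost, year) pairs discounted at rate r."""
    return sum(cost / (1.0 + r) ** year for cost, year in investments)

# Reproducing the last row of Table 6.2 ($million):
print(round(present_value([(58, 0), (25, 5), (25, 5)]), 2))  # case 1: 89.05
print(round(present_value([(25, 0), (25, 5)]), 2))           # case 2: 40.52
print(round(present_value([(25, 0), (58, 5)]), 2))           # case 3: 61.01
print(round(present_value([(58, 0)]), 2))                    # case 4: 58.00
```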
Risk Simulations of the Present Value of Least-Cost
Capacity Investments
Risk simulation is relevant to capacity expansion planning because of uncertainties in the growth of demand and other factors.
Analysis of demand uncertainty can be traced to work by H. Baleriaux and E. Jamoulle in the 1960s12 and studies prepared for the
California Energy Resources Conservation and Development
Commission and the U.S. Federal Energy Administration in the
1970s.13 Several citations reference similar methods, including simulations of uncertainty in the availability of hydroelectricity14 and
an ambitious study of long-term capacity needs for the Electric
Power Research Institute.15 In the United Kingdom, a related
approach was developed by D. V. Papaconstantinou.16 Studies associated with public debate over the Sizewell project systematically
apply many of these techniques to the evaluation of the timing and
advisability of construction of Britain's first pressurized water reactor.17
As an illustration of these methods, it is interesting to develop
a risk simulation to consider data, such as in Table 6.2. The results
produce some surprises and show the dangers inherent in not thinking in terms of stochastic processes.
To consider these investment options in a stochastic framework,
suppose the high and low demand projections in cases 1 and 2 of
Table 6.2 establish the lower and upper bounds pertinent to each
of the five-year time periods considered. Thus, we presume that
after the first five years of the planning period, total demand is
between 110 and 130 units. Let us also assume that: (a) chance
variation in demand follows a uniform distribution between the
bounds implied in any five-year period by these projections, and
(b) variation between adjacent five-year periods is limited to 25
percent of the allowable range.
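One way to encode assumptions (a) and (b) is sketched below; reading condition (b) as limiting the shift in demand's relative position within its range to 25 percent is our interpretation, since the text does not spell out the mechanics.

```python
import random

def sample_demand_path(rng):
    """Demands at years 0, 5, and 10. Bounds follow cases 1 and 2 of
    Table 6.2; condition (b) is read as limiting the shift in the
    demand's relative position within its range to 25 percent."""
    d5 = rng.uniform(110.0, 130.0)
    pos = (d5 - 110.0) / 20.0                  # relative position in 110-130
    lo = max(0.0, pos - 0.25)
    hi = min(1.0, pos + 0.25)
    d10 = 120.0 + rng.uniform(lo, hi) * 30.0   # map back into 120-150
    return (100.0, d5, d10)

rng = random.Random(7)
paths = [sample_demand_path(rng) for _ in range(10_000)]
```

Feeding each sampled path through the sequencing rule described below, and discounting with the present_value function shown earlier, yields the risk profiles of Figure 6.5.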
The End of Period Capacity Valuation Problem
To set up this risk simulation, we must make decisions about
the end-of-period valuation of excess capacity. Note that the first
three case examples in Table 6.2 avoid this problem by considering
end-of-period projections that exactly dovetail with capacity additions. We confronted but did not elaborate upon this difficulty
in case 4, which presents the expected or average demand profile
over this ten-year planning period.
Note what would happen in case 4 if we prorated the cost of
five units of needed and unmet capacity at the end of the planning
period. Thus, we would allocate one-sixth of the costs of five units
of capacity from the thirty-unit project and half of the costs of the
ten-unit project to these five demand units. This would indicate
that the cheapest alternative is to build the large project because
its prorated costs would be $9.66 million, as compared with $12.5
million from the smaller project. Therefore, such a prorating of
capacity costs to excess demand in a period means that a large
project generating substantial excess capacity is usually selected
over a smaller, higher unit cost project under this type of end-of-period convention, particularly if there is high excess capacity in
this project. This is unsatisfactory, especially if further study indicates that system demand will peak in the planning period, declining thereafter, or if there is considerable uncertainty as to
whether growth will occur following the period under analysis.
Another option is to simply allocate all investment costs as of
the date of a bond issue to meet excess capacity, no matter how
small this excess capacity may be. Thus, we may include a large
investment that is almost completely unutilized but is in the capacity plan as one alternative way to meet demand until the end
of the planning period. This could make such a plan seem unduly
expensive when compared with another plan with no excess capacity at the end of the planning period. This is unsatisfactory if
we expect the planning period currently being considered to be followed by a period of more rapid growth or if,
somehow, not utilizing this large project in the current period will
result in this investment option being lost altogether.
It is clear that a part of the problem is solved by selection of
the planning period. Thus, a prior analysis should look at alternative plans and the expectation for demand growth over future
periods. The length of an appropriate planning period will be influenced by the typical gestation period for the investments, expectations for growth, and the point at which uncertainty becomes
almost absolute, that is, more than a decade or two hence.
Our approach here is to assume that a prior analysis has indicated
ten years as the appropriate planning period. We, furthermore,
assume that, because of uncertainty or an expectation of zero
growth after this period, we have no grounds to prorate projects.
Thus, to avoid a bias toward the smaller project sequences, we
will simply select the investment sequence that meets demand for the maximum
length of time within the ten-year planning period while leaving no
excess capacity at the end of the planning period. In some runs,
this means the schedule of investments does not satisfy demand
for a full ten years. If there are no limits to the number of small,
$25 million projects that can be built, this presents no problem to
meeting the physical needs over the foreseeable future. Furthermore, the result indicates that the average period of time covered
by investment sequences determined in this fashion, where we
separate those beginning with the large project from those beginning with the small project, is roughly comparable.
Determining the investment series with the lowest present value
is simplified, also, by the fact that the $58 million project is a
multiple of the capacity of the $25 million project. A simple way
to create an optimal sequence, therefore, is to first locate the
smaller, higher unit cost projects into a time sequence that meets
the demand over the planning period. Then, this series of investments can be inspected to determine whether it contains any segments or subsequences that are "tightly packed." For example,
building three $25 million investments at two-year intervals when the
discount rate is 10 percent produces a present value of approximately $62.74 million as of the time of the first of these three
investments. This is, therefore, more costly than satisfying the
same thirty-unit increment in capacity with a single $58 million
project.
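A short computation makes this packing test concrete. The sketch below is illustrative only: it prices three of the $30 million, ten-unit projects placed in consecutive years against one $58 million, thirty-unit project at the 10 percent discount rate; the function name is an assumption, not part of the example above.

```python
def present_value(outlays, rate=0.10):
    """Discount a schedule of (year, cost) investment outlays to year 0."""
    return sum(cost / (1.0 + rate) ** year for year, cost in outlays)

# Three small, ten-unit projects "tightly packed" into consecutive years
# versus a single thirty-unit project built at the outset.
pv_packed = present_value([(0, 30.0), (1, 30.0), (2, 30.0)])  # about 82.1
pv_single = present_value([(0, 58.0)])                        # 58.0
print(pv_packed > pv_single)   # True: the packed subsequence is costlier
```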
A Risk Simulation
The practical problem usually is to identify the next project to
be placed on the construction schedule and brought into operation.
Let us focus, therefore, on comparing capacity plans that result
after an initial selection of either the large or small project is made.18 Then, we obtain the interesting information depicted in Figure 6.5. The two risk profiles there refer to the minimum expected present value of investment costs, given an initial commitment to the thirty-unit or ten-unit projects as the initial project to be built.19 The solid bars are associated with sequences in which the large project is built first, whereas the bars with the slashes mark out the risks of building the small project first.

Figure 6.5
Present Value of Investment Costs
Building the small project first has the lowest expected present
value of costs ($59.17 million versus $61.31 million), even though
the average or midpoint demand path would indicate that the
cheapest tactic would be to build the large project first. At the
same time, this lower expected present value of costs is associated with a somewhat shorter expected period covered by investment sequences beginning with the small project first (8.54 years versus
9.02 years). In fact, the concentration of present value when the
large project is built initially at $58 million is due to the fact that
in most of the simulation runs, the large project completely satisfied
demand during the planning period.
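The flavor of the simulation behind Figure 6.5 can be conveyed in a compressed sketch. Everything below is an assumption for illustration rather than the model used in the text: annual demand growth is drawn uniformly between one and four units, later needs are met greedily with ten-unit projects, and lead times, shortage costs, and the end-of-period convention are ignored.

```python
import random

RATE, YEARS = 0.10, 10
SMALL = (10, 30.0)    # (capacity units, cost in $ millions)
LARGE = (30, 58.0)

def mean_pv(first_project, trials=5000, seed=1):
    """Average present value of investment costs when the first project
    is locked in and subsequent needs are met with small projects."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        demand, capacity, pv = 0.0, 0, 0.0
        for year in range(YEARS):
            demand += rng.uniform(1.0, 4.0)    # assumed growth range
            while capacity < demand:
                units, cost = first_project if capacity == 0 else SMALL
                capacity += units
                pv += cost / (1.0 + RATE) ** year
        total += pv
    return total / trials

print("large project first:", mean_pv(LARGE))
print("small project first:", mean_pv(SMALL))
```

With growth bounds this wide, the runs in which demand stays low reward the small-first strategy, reproducing the qualitative tradeoff described above rather than the book's specific figures.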
This example underlines the potential complexity of decisions
about what project in a chain of projects ought to be next in the
construction schedule. Thus, there are tradeoffs not only in the
cost dimension but also in the time dimension. There are also issues
presumed to be addressed before the analysis can properly take
place at all, such as that of the appropriate planning period.
Further attention might be given to refining the probability distribution imputed to the demand forecasts. For example, is lower
rather than higher demand considered more likely in the forecast
range? If demand growth is negatively skewed, there is an even
greater chance costs can be finessed with a smaller investment. Of
course, this simple analysis abstracts from project lead times, response lags, and the impact of shortages on customers.
BENEFIT COST ANALYSIS AND FINANCIAL RISKS
The challenge in financial risk modeling is to develop appropriate
detail and complexity. In some cases, this requires choosing between refining existing components of a model and adding new
linkages and elements. Thus, in the previous application, in addition to fine tuning probabilities, it is possible to develop more
realistic assumptions concerning management responses and the
valuation of supply options. Among the more important of these
new assumptions are lags in management response to changing
demand and the opportunity cost of electric power or water shortages.
The preceding simulation assumes foresight on the part of utility
managers, which may not exist. Sizeable demand fluctuations may
not elicit immediate management responses. Existing plans may
be adhered to until decision makers are persuaded that recent shifts
in demand growth are more than transitory. Accordingly, two
effects commonly accompany shifts in the growth rate of demand.
When there are sharp upward revisions in growth, shortages may be recurrent throughout the utility system. Long periods of excess capacity, on the other hand, may accompany a downward shift
in demand growth and impose higher rates and charges on a smaller
than anticipated volume of production or fewer customers.
Here, then, there is a problem of what metric to select to express
a diverse set of possibilities. Originally, we took the view that
financial risk is evaluated, subject to the condition that new facilities are brought on line in a timely manner to meet growing demand in a utility system. In a long-term planning context, however,
the situation is complicated by the possibility of response lags or
selection of projects that turn out to be inappropriate to the conditions that develop.
The solution suggested by economics is to rely on a demand-based metric called the consumer willingness to pay or consumer
surplus. In dollar terms, U.S. average outage costs for electricity
have been estimated from $1.00 to $3.80 per kWh in 1978 prices.20
The effects of water shortages have been subject to empirical
study21 and have been variously estimated with econometric models
distinguishing summer or outdoor from winter or indoor water use
of residential users. The imposition represented by excessively high
utility rates that can result when there is excess capacity for prolonged periods also is amenable to this metric.
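To show how such a metric can enter a model, a minimal sketch follows. The $2.00 per kWh outage cost sits inside the range cited above; the carrying cost assigned to idle capacity is purely a placeholder assumption, as are the function and parameter names.

```python
def social_cost(shortage_kwh, idle_kw_years,
                outage_cost_per_kwh=2.00,         # within the cited $1.00-$3.80 range
                carrying_cost_per_kw_year=50.0):  # placeholder assumption
    """Dollar-commensurate sum of shortage costs and the burden of
    prolonged excess capacity."""
    return (shortage_kwh * outage_cost_per_kwh
            + idle_kw_years * carrying_cost_per_kw_year)

# 1.5 million kWh unserved plus 20,000 kW-years of idle capacity:
print(social_cost(1_500_000, 20_000))   # 4,000,000 dollars
```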
Models can be developed, for example, that balance off the
possibility and imputed costs of utility rates that are excessively
high with the chance that noisome shortages will impose disamenities on customers. Initially, what usually must be a search algorithm identifies the investment sequence which is optimal from a
benefit/cost standpoint, given project lead times, likely management response lags, and imputed costs for shortages. This investment sequence represents a minimum feasible present value of
costs, given the range of contingencies that may arise during the
planning period. Then, financial risk analysis can consider the optimal investment path, as costs are disaggregated from the present
value framework into a cash flow framework, to identify the best
way of implementing it.
Integrating financial and benefit/cost analysis is a fertile research
area. Thus, in the minds of many utility managers, there appears
to be a presumption that, over the longer period, per customer
costs are likely to be less for persisting surpluses than for shortages. This could be because projects will be dropped from the construction schedule if high excess capacity persists. On the other hand,
the acceleration of construction plans may favor quick fixes, increasing the costs of meeting demand over the longer period. There
is a sense here of an ensemble of probability-governed relationships, although the question is largely unexplored.22 Of course,
financial debacles surrounding many ill-fated nuclear power installations suggest that a large project bias could contribute to
financial risk.
CONCLUSION
The examples in this chapter underline one point above others
the probabilistic point of view leads to insights into financial and
planning problems, insights that may not be mere restatements of
the results of simpler "what-if" or sensitivity analysis.
The first application illustrates what commonly is regarded as
the subject matter of risk analysisthe appraisal of probabilities
of an event essentially bounded in time.
The other applications exhibit more dynamic and time-dependent meanings of financial risk, summarized by a series of default
probabilities over the payback period or by a distribution of minimum present values.
Interestingly, with regard to sinking funds, there is some presumption that people always think in probabilistic terms. Thus, it
is almost impossible to justify and appraise a sinking fund without
regard to variability factors and the advantage of capturing income
during exceptionally good years. Nevertheless, risk simulation suggests a curious equilibrium of default risk after the first few years
of the payback period in the no sinking fund case, an equilibrium
or stabilization apparently due to an interaction between inflation
and the potential for zero or negative revenue growth. It is difficult
to see how anything other than a stochastic framework could indicate this result, which is of some interest because it is a circumstance likely to be encountered in some utility systems in coming
years.
The simulations concerned with the relative risks of level and
tipped amortization schedules also produce an unconventional result. Despite extraordinary chances for construction cost overruns, the total default risk associated with a tipped amortization schedule exceeds that associated with a level debt service schedule. The
culprit here is inflation or too ambitious a reduction of initial debt
service. The clear challenge is to devise an optimum debt schedule
for the perceived circumstances, and the apparatus indicated in
this chapter can inform this pursuit.
Similarly, the risk simulations of the capacity planning problem,
when the initial capacity investment is "locked in," indicate a result
that could not have been guessed at by a "what-if" analysis. Here,
the average or expected demand profile is best met by building
the larger project first, the project embodying economies of scale.
Nevertheless, the risk simulation indicates that an investment plan
in which the smaller project is selected for initial construction is
superior, namely with respect to the present value of costs. This
approach is usually cheaper and attains a roughly comparable,
although slightly lower, time coverage in the planning period. This
conclusion, it is noted, is more emphatic if the probability distribution for demand in a particular year is positively skewed.
Generous use of the uniform probability distribution and a type
of range estimation pertaining to growth rates is apparent in the
applications discussed in this chapter. This corresponds to what
Bayesian statistical theory calls a "uniform prior" distribution; that
is, in the absence of any other information, assume that all acknowledged options are equally probable. If the true probability
distribution has a distinct and single peak, the uniform distribution
overstates the variance of the process under study. This may provide incentives to further explore the anticipated variation of
growth rates. Alternatively, one may "back in" to the probability
distribution and time interdependence characteristics of growth
rate series that produce an acceptable level of risk. Then, the
question can be whether this distribution is realistic or plausible
or whether, for example, its variance is too small to be believed.
In recent years, utilities have brought a number of newer instruments and tactics into play, which include, as Charles F. Phillips, Jr., notes,
convertible securities, issues with warrants, preference stock, short-term
notes (five years) and intermediate-term bonds (seven to ten years), pollution control facilities financed with industrial development bonds or pollution abatement revenue bonds, and employee stock option and dividend reinvestment plans. Leasing and project financing became viable
options. With high interest rates, refunding and debt-equity swaps were
carried out. A few utilities have offered common stock to their customers
through a monthly installment plan; a few electric utilities have established
an energy trust to finance nuclear power plants or the purchase of nuclear
fuel.23
Restrictions on arbitrage of tax-exempt debt issues in the Tax
Reform Act of 1986 and innovations such as interest rate futures
or swaps add to the complexity of financial analysis for power and
water utilities. In principle, each of these financial instruments,
tactics, or options can be evaluated within risk simulation contexts
such as those developed in this chapter.
The first task of risk simulation models is to get an answer.
Then, refinements and extensions can be explored to determine
their impact on the risk profile.
NOTES
1. About one half of all states protect municipal and cooperative service areas by exclusive franchise rights or state law. Elsewhere, no specific
provisions shield service areas. See Malachy Fallon, "Municipal Electric
Credit Review," Standard & Poor's CreditWeek, June 5, 1989, p. 8.
2. First Boston Corporation, Financing Hydroelectric Facilities (Boston: First Boston Corporation, April 1981), pp. 10-11.
3. Robert Woodard, "Power Shortage Threatens Northeast," Standard & Poor's CreditWeek, June 5, 1989, p. 9.
4. Operating costs here are assumed to be capitalized in the values of K.
5. See Daniel J. Duann, "Alternative Searching and Maximum Benefit
in Electric Least-Cost Planning," Public Utilities Fortnightly, December
21, 1989, pp. 19-22.
6. This means scheduling the lowest cost unit to provide baseload
capacity and intermediate and peaking units to come on line in order of
increasing cost.
7. A pioneering paper applying this method is P. Masse and R. Gibrat,
"Application of Linear Programming to Investments in the Electric Power
Industry," Management Science 3 (January 1957): 149-166.
8. See Ralph Turvey and Dennis Anderson, Electricity Economics:
Essays and Case Studies (Baltimore: Johns Hopkins University Press,
published for the World Bank, 1977), p. 259.
9. See, for example, Yacov Y. Haimes, Hierarchical Analysis of Water
Resources Systems: Modeling and Optimization of Large-Scale Systems
(New York: McGraw-Hill, 1977).
10. Martin L. Baughman, Paul L. Joskow, and Dilip P. Kamat, Electric
Power in the United States: Models and Policy Analysis (Cambridge, MA:
MIT Press, 1979).
11. This exposition relies on the discussion and derivations in Daniel
P. Loucks, Jery R. Stedinger, and Douglas A. Haith, Water Resource
System Planning and Analysis (Englewood Cliffs, N.J.: Prentice-Hall,
1981), pp. 121 passim. See also A. S. Manne, "Capacity Expansion and
Probabilistic Growth," Econometrica 29 (1961): 632-641.
12. H. Baleriaux and E. Jamoulle, "Simulation de l'exploitation d'un parc de machines thermiques de production d'électricité couplé à des stations de pompage," Revue Electricité, édition SRBE, 5 (1967).
13. Stanford Research Institute, Decision Analysis of California Electrical Capacity Expansion, report submitted to California Energy Resources Conservation and Development Commission, Menlo Park,
February 1977; Gordian Associates, Optimal Capacity Planning Under
Uncertainty in Demand, report submitted to the U.S. Federal Energy
Administration (Washington, D.C.: U.S. Government Printing Office,
November 1976).
14. Arun P. Sanghvi and Dilip R. Limaye, "Planning Future Electrical
Generation Capacity," Energy Policy 1 (June 1979): 102-116.
15. Martin L. Baughman and D. P. Kamat, Assessment of the Effect
of Uncertainty on the Adequacy of the Electric Utility Industry's Expansion
Plans, 1983-1990, EA-1446, Interim Report (Palo Alto, Calif.: prepared
for the Electric Power Research Institute, July 1980).
16. D. V. Papaconstantinou, Power System Planning Under Uncertainty Using Probabilistic Simulation, Internal Report EPU/DP/1 (London: Energy Policy Unit, Department of Mechanical Engineering,
Imperial College of Science and Technology, May 1980). See also Nigel
Lucas and Dimitrios Papaconstantinou, "Electricity Planning Under Uncertainty," Energy Policy 10 (June 1982): 143-152.
17. Nigel Evans, "The Sizewell Decision: A Sensitivity Analysis," Energy Economics 6 (January 1984): 15-20; and Ian S. Jones, "The Application of Risk Analysis to the Appraisal of Optional Investment in the
Electricity Supply Industry," Applied Economics 3 (May 1986): 509-528.
18. Note that this will not violate our convention of never selecting a
project that will leave excess capacity at the end of the demand period
since the minimum growth of demand exceeds thirty units over this twenty-year period.
19. Because one project is a multiple of another and the minimum demand increment exceeds the capacity of the largest project, the periods
to which these expected present values refer are identical in expected
value and distribution.
20. A. Kaufmann, Reliability Criteria: A Cost Benefit Analysis, OR
Report 75-79 (Albany: New York State Department of Public Service,
June 1975); "Report on the Reliability Survey of Industrial Plants," IEEE
Transactions on Industry Applications IA-10, no. 2 (March 1974): 231-233. See Roland Andersson and Lewis Taylor, "The Social Cost of Unsupplied Electricity," Energy Economics (July 1986): 139-146.
21. See Mark Hoffman, Robert Glickstein, and Stuart Liroff, "Urban
Drought in the San Francisco Bay Area: A Study of Institutional and
Social Resiliency," in American Water Works Association Resource Management, Water Conservation Strategies (Denver: American Water Works
Association, 1980), pp. 78-85.
22. C. Vaughan Jones, "Analyzing Risk in Capacity Planning from
Varying Population Growth," Proceedings of the American Water Works
Association (Denver: June 22-26, 1986), pp. 1715-1720. Jones explores
the issue in outline.
23. Charles F. Phillips, Jr., The Regulation of Public Utilities: Theory
and Practice (Arlington, VA: Public Utilities Reports, Inc., 1984), p. 221.


7
Reflections on the Method
Monopolistic markets and capital intensity make financial risk analysis of power and water investments more available than, say,
analysis of investment risks for a new assembly line for computer
components or a retail outlet. Debt financing is the favored vehicle
in financing major capacity expansion, and debt service is a major
cost component. Construction cost overruns, therefore, are a primary risk factor on the cost side, along with the potential for
interest rate changes prior to bond issuance. Population growth
and the level of customer demand introduce risks on the revenue
side. Customers are "captive," and their numbers and market
responses can be analyzed separately, although deregulation is
introducing new competitive possibilities.
This discussion demonstrates the potential of this approach for
generating insights into financial risk, defined chiefly as default risk
on debt. The vantages gained go beyond those derived from the
examination of various "what-ifs" or a sensitivity analysis and are, in some instances, almost surprising.
Thus, risk simulation underlines how contingency allowances
ought to be figured against total construction costs rather than, as
sometimes suggested in engineering discussions, as a fixed percentage allowance against each major construction cost category.
Once one conceptualizes the simulations, the logic becomes apparent: it is the logic of risk pooling, when cost overruns in component cost categories are stochastically independent.
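A back-of-the-envelope simulation illustrates the pooling logic. The four cost categories and their uniform plus-or-minus 30 percent overrun ranges below are hypothetical; the point is that the contingency needed to cover the 95th percentile of the total is much smaller than the sum of per-category 95th percentile allowances.

```python
import random

rng = random.Random(7)
estimates = [40.0, 25.0, 20.0, 15.0]    # $ millions, hypothetical categories

totals = sorted(
    sum(e * rng.uniform(0.7, 1.3) for e in estimates)
    for _ in range(20000)
)
pooled_p95 = totals[int(0.95 * len(totals))]
base = sum(estimates)                    # 100.0
# The 95th percentile of a U(0.7, 1.3) overrun factor is 1.27.
category_by_category = sum(e * 1.27 for e in estimates)

print(f"pooled contingency:  {pooled_p95 - base:5.1f}")            # roughly 15
print(f"category allowances: {category_by_category - base:5.1f}")  # 27.0
```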

The discussion of sinking funds in Chapter 6 led to an expected result: that such arrangements can reduce default risk. By the
same token, a relatively ambitious simulation model in the same
chapter indicated the superiority of level debt service over one
tipped debt service alternative, even though initial construction
cost overrun probabilities were large.
In capacity planning, the preceding chapter developed a risk
simulation to consider whether initial construction of a large facility
exhibiting economies of scale or a smaller, high unit cost facility
was superior, given an anticipated range of variation for demand
over ten years. This analysis generated distributions for the present
value of costs, when subsequent projects were optimized to the
random fluctuations of demand that occurred, and coverage time
for the projects analyzed. The findings proved intriguing because
of nonlinearities in the cost equations in this model. The least-cost
sequence fitted to the average or expected demand profile suggested that the large project ought to be the first facility to be
built, when minimizing the present value of costs was the investment performance measure. On the other hand, risk simulation
indicated the superiority of building the small project first because
of the extremely high and nonlinear variation in present values
when demand grew slowly over this planning period.
These are stimulating findings, but questions arise about how
defensible and robust they are. It is time to take stock, therefore,
and reflect on the vantage gained and on the strengths and limitations of the technique.
ADVANTAGES AND DISADVANTAGES
OF RISK SIMULATION
The prominent strength of risk simulation is its integrative function. It forces coordination of diverse information and coerces one
to be comprehensive in accounting for risk factors and the specification of interlinked processes. A second strength is that the
method directly acknowledges risk preferences, providing decision
makers with maximum information about the distribution of likely
gains and losses.
Attention to probability has unanticipated payoffs, and one
might reasonably speak of a stochastic or probabilistic outlook.

On the other hand, the complexity of risk analysis is a limitation.
This complexity, of course, motivates books such as this, which
attempt a review of critical issues and methods. In general, it is
not clear why anyone would expect investing large sums of money
to be easy.
Perhaps what needs to be emphasized is that risk simulation is
a responsible way of proceeding when projects running into the
tens of millions or billions of dollars are contemplated. With the
advent of faster microprocessors and the growing number of computer programs to handle simulation problems, the cost of conducting a risk simulation is dropping. Whereas earlier a full-scale
analysis for a several-hundred-million-dollar project could cost a
few hundred thousand dollars, today a competent analysis can be
carried out for much less. The techniques described in this book
also support simple approaches: back of the envelope simulations,
if you will.
The following discussion amplifies these remarks. It considers
the significance of procedure in validating a risk simulation, the
treatment of pure uncertainty, and issues having to do with the
robustness of estimation. The discussion closes with a recapitulation of basic points and findings and remarks on the generalization
of the methods presented here to other types of infrastructure
investment.
COMPREHENSIVENESS AND THE IMPORTANCE
OF PROCEDURE
Reviewing the steps taken in developing a risk analysis is the
primary means of validation, short of waiting for the attendant
processes to manifest themselves.
This underlines the importance of preliminary analysis of risk
sources. At an initial stage of study, a list of risk factors should
be developed. This should include a sensitivity analysis in the standard sense. Variants of the base case cash flow model should be
explored to determine, broadly speaking, which subset of these
factors exerts the most significant influence on the risk of default
or other appropriate performance measures. Attention should be
focused on accurately characterizing the anticipated range of risk
factors.

An EPRI-sponsored study of electric power capacity needs for
the 1980s illustrates what can happen when elements of this basic
procedure are ignored. The primary problem with this study, which
concluded that the 1980s would be an era of capacity shortages, is that ranges for risk factors were developed from partial evidence.
At the end of the 1970s, this EPRI-sponsored study concluded
that
Based on 1978 reported electricity demand forecasts, 1978 capacity expansion plans, and an assumed economic cost of $1 per kWh for energy
undelivered due to shortages, the results of this study indicated that for
most regions of the country, even with the narrowest range of demand
uncertainty, a more rapid expansion of generating capacity would result
in lower total social costs in 1990.1
This conclusion was obsolete almost the moment it was announced. What was the problem? The error appears to stem from
basing range estimates for the growth of electricity demand on
forecasts supplied by the regional reliability councils and excluding
other information. Regional reliability councils collate member
electric company forecasts. Until recently, many utilities extrapolated patterns from the 1960s and earlier to the future, producing
growth rates that proved far too high in the 1970s and 1980s. Thus,
the raw average growth forecast for 1978-1990 in the EPRI study
from the nine reliability regions was 5.62 percent,2 while the actual
summer peak electricity demand in the North American Electricity
Reliability Council grew an average of 2.2 percent per year from
1981 to 1986 and at a slightly higher rate thereafter.3 Note, however, that alternative growth rate forecasts had been produced for
several years and had received confirmation in growth trends of the
1970s. Authorities note, for example, that since 1974, forecasts of
load growth "ranged from 2 to 7% and have resulted in a continuing controversy between utilities and various intervenors at ratemaking proceedings."4
This example is revealing in another respect. It illustrates that
it is possible for events to chagrin analysis in fairly short order and
raises questions about a position staked out on the verification
issue by John C. Hull:

Even if we know in a certain situation that the output of a risk evaluation
study influenced a manager to take decision A rather than decision B, we
will not know for several years how decision A turns out and even then
we may not be sure about what would have happened if decision B had
been taken. Furthermore, suppose that it is definitely established that
decision A turned out worse than decision B would have done, an advocate
of risk evaluation will, undaunted, argue along the following lines: "It
was a good decisionjust an unlucky outcome. We were in the extreme
left-hand tail of the distribution of NPV's."5
In principle, the point in this quote must be conceded. At the
same time, the focus of the quote is perhaps more philosophical
than warranted by practical situations in which risk simulation is
the technique selected to inform an investment decision. Errors
can reveal themselves quickly, and plausible linkages between errors of omission and commission in procedure and the findings of
a risk analysis can exist. Indeed, if a risk simulation is deemed
necessary in the first place, there may be reason to expect that bad
decisions may soon lead to adverse situations. Cost overruns in
power plant construction and inadequate contingency plans become apparent before a project comes on line. Problems of population growth may be imminent, if they appear salient at all.
One safeguard, therefore, is a thorough review of existing range
estimates of risk factors. Decisions to exclude some lower or upper
bounds should be defended so that outlying estimates are at least
acknowledged. Qualitative analysis provides similar safeguards.
Understanding of market and demographic relationships as well
as familiarity with standard rate making and financial procedures
are important in characterizing responses in a risk simulation.
ROBUSTNESS
Robustness in risk simulation pertains to whether imputations
of probabilities and ranges of risk factors can be approximate and
still indicate fundamentally the same conclusions.
Symmetric, unimodal distributions are the easiest to approximate. Since the mean, mode, and the median of such distributions
are identical, judgmental approaches may be more likely to correctly estimate the central tendency.
Similarly, the number of sufficient statistics required to characterize a probability distribution can be small. If the distribution
is known a priori, based on an analytic argument, it may be possible
to develop relatively precise characterizations with a few items of
information. Thus, if the underlying probabilities are normally
distributed, the risk factor is summarized by stating its mean and
variance.
The principle of compensating errors also imparts a degree of
robustness to risk simulations. Small errors in identifying the central tendency of one variable, for example, can be offset by errors
in the other direction with respect to other, stochastically independent risk factors, such as cost or revenue components.
An important tactic can be mentioned here alsothat of paying
the most attention to the largest risk elements. Thus, the influences
on revenues or costs generally manifest Pareto's Law: the law of
the significant few and insignificant many.
An interesting series of suggestions for imputing probability distributions to risk factors is developed in an Environmental Protection Agency handbook, and bears mentioning in this context:
A uniform distribution would be used to represent a factor when nothing
is known about the factor except its finite range . . .
If the range of the factor and its mode are known, then a triangular
distribution would be used.
If the factor has a finite range of possible values and a smooth probability
function is desired, a Beta distribution (scaled to the desired range) may
be most appropriate. The Beta distribution can be fit from the mode and
the range that defines a middle specific percent (e.g. 95 percent) of the
distribution.
If the factor only assumes positive values, then a Gamma, Lognormal,
or Weibull distribution may be an appropriate choice. The Gamma distribution is probably the most flexible; its probability function can assume
a variety of shapes by varying its parameters and it is mathematically
tractable. These distributions also can be fit from the knowledge of the
mode and the percentiles that capture the middle 95 percent of the distribution.
If the factor assumes positive and negative values and is symmetrically
distributed around its mode, then a normal distribution may be an appropriate distribution.
Unless specific information on the relationships between . . . parameters

is available, assume values for the required input parameters are independent.6
These guidelines aim at robustness in estimation. Thus, the variance of the uniform distribution is larger than that of a broad class of
unimodal distributions. If risk is proxied by the variability of a
variable that is an additive or multiplicative composite of various
component risks, use of such maximum variance distributions to
characterize these component risks leads to something like an upper bound estimate of a risk. Similarly, use of uniform distributions
for population variability exaggerates risks of low and high population growth. If population growth is lower than anticipated,
financial burdens may be placed on customers. If population
growth is higher than anticipated, shortagesbrownouts or restrictions in usemay occur, imposing other types of cost. If these
costs can somehow be made commensurate, as discussed in Chapter 6, then maximum combined costs are associated with maximum
variance distributions for population processes.
The use of the triangular probability distribution was illustrated
in Chapter 3, which showed how estimates of the mode and average
value of a random variable with finite range are linked.
Use of the Beta, Gamma, Lognormal, and Weibull distributions
has not been discussed in this book, although these forms are
mentioned in the Appendix.
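In sampling terms, each knowledge state in the handbook's list maps onto a standard generator. The sketch below uses Python's random module; all ranges, modes, and shape parameters are invented for illustration, and the Beta fit from a mode and a middle percentile range is more involved than the direct parameterization shown.

```python
import random

rng = random.Random(42)

# Only a finite range is known: uniform.
x_uniform = rng.uniform(50.0, 90.0)

# Range and mode are known: triangular.
x_triangular = rng.triangular(50.0, 90.0, 65.0)   # (low, high, mode)

# Finite range, smooth curve desired: a scaled Beta.
x_beta = 50.0 + 40.0 * rng.betavariate(2.0, 3.0)

# Strictly positive factor: Gamma or lognormal (Weibull also available).
x_gamma = rng.gammavariate(4.0, 2.5)
x_lognormal = rng.lognormvariate(0.0, 0.5)

# Positive and negative values, symmetric about the mode: normal.
x_normal = rng.normalvariate(0.0, 1.0)
```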
Of course it should be clear from this discussion that simulations
with substantially skewed risk factors are less reliable. A recent
text on engineering economics considers this under the rubric of
the "problem of outliers," 7 significant in accidents or other low
probability, high damage events. Thus, the distribution of cost
overruns for large nuclear facilities embodying new design features
appears sharply skewed to the right (positively skewed). More
attention must be devoted to plotting points on cumulative or
probability distributions for such risk variables.
The modeling tactic suggested here is to employ simple modeling
representations and to examine the effect of: (a) successive refinement of assumptions, and (b) extensions of the model on the
risk profile. This is a higher order sensitivity analysis that allows
assumptions and the characterization of process to vary in addition
to considering the impact of changes in the magnitude of risk

factors. If the acknowledged range of risk factors is faithfully recorded and realism is sought in the simulations, the risk profile
should represent a best effort at prefiguring the total consequences
of various investment decisions.

PURE UNCERTAINTY
The Chicago economist Frank H. Knight suggested a distinction
between risk and uncertainty. 8 A risk situation, according to
Knight's terminology, is one where probabilities are known. When
we have no knowledge of the range or distribution of a variable,
on the other hand, we are faced with uncertainty. This is the
difference between, say, drawing a ball of a certain color from an
urn when we know beforehand there are ten red balls and ten
black balls and drawing a ball when we lack essential information,
such as the number of balls, their color, and so on. In the first
case, the probability of drawing a red ball (with replacement) is
0.5, while in the other situation we may have no way of asserting
anything about probabilities of drawing, for instance, green balls.
The first case is a risk situation in Knight's terms, while the second
is a situation in which there is uncertainty.
There are various ways uncertainty manifests itself, where pure
uncertainty refers to the fact that not only do we not know probabilities of an event, but we also have no information about the
nature of the event in the first place. Thus, history shows a sustained capacity for producing surprises, that is, occurrences for
which we really have no way of imputing probabilities because we
cannot even conceptualize their existence. These historical innovations can develop on various levels. Most recently, there are the
profound changes in Eastern Europe and the Soviet Union. On
the economic front, few anticipated the regime of high interest
rates initiated by United States Federal Reserve Bank policies in
late 1979. One can also refer back to Pearl Harbor and so on.
In econometric language, the issue of surprises becomes the
"structural shift" problem. Specific tests are recommended to identify time periods in which the coefficients of a regression are distinct.9 Many large-scale econometric models of the U.S. economy
had to be fundamentally revised after 1974, for example, as the

regime of higher energy prices triggered cascading changes in other economic relationships.

Table 7.1
Payoff Matrix Illustrating Alternative Choice Criteria under Uncertainty

                  "state of nature"
             S1      S2      S3      S4      actions
             20      40       5     100        a1
             10      20      30      40        a2
             50      40      75       8        a3
            100      15      40      30        a4
The economist Kenneth Boulding comments that
the general conclusion... is that under uncertainty alternatives have a
high value which involve liquidity, flexibility, capacity for reversal, and
capacity for continual learning from mistakes and revision of images of
the future; under conditions of certainty, we simply select what appears
to be the best alternative and zero in on it. In uncertain situations, this
often involves zeroing in on disaster.10

This conclusion is embodied in decision criteria suggested for


choices under uncertainty, including the maximin rule and the
Hurwicz rule.11 The payoff matrix in Table 7.1 illustrates the general operation of such rules. This lists the payoff for "actions" (ai, where i = 1, . . . , 4), listed along the right side of the table, given the "state of nature" (Sj, where j = 1, . . . , 4), which prevails.
Note probabilities are not attached to the payoffs. Accordingly,
there is Knightian uncertainty about whether one as opposed to
another state of nature will come to pass; although, in this case,
we are fortunate in being able to anticipate the prospective states
of nature.

The maximin criterion is an abbreviation for selection of the
maximum of the minima. It is implemented by identifying the
minimum payoff associated with each action, that is, the minimum
payoff in each row. Then, the action to be taken is determined by
finding the maximum of these minimum payoffs. This is a4 in Table 7.1.
The Hurwicz rule amalgamates this cautious criterion with an
optimistic rulethe maximax criterion which states "take the action that leads to the maximum payoff in the set of maximum
payoffs in each row." The Hurwicz rule involves maximizing a
weighted sum of the payoffs associated with the maximin and maximax rules, where the weight is referred to as the pessimism-optimism index.
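Applied to Table 7.1 as reconstructed above, the criteria reduce to a few lines of code; the 0.4 value chosen for the pessimism-optimism index is an arbitrary illustration.

```python
payoffs = {            # rows of Table 7.1: action -> payoffs across states
    "a1": [20, 40, 5, 100],
    "a2": [10, 20, 30, 40],
    "a3": [50, 40, 75, 8],
    "a4": [100, 15, 40, 30],
}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))   # a4 (minimum of 15)
maximax = max(payoffs, key=lambda a: max(payoffs[a]))   # a1 (ties a4 at 100)

alpha = 0.4            # pessimism-optimism index, assumed
hurwicz = max(payoffs, key=lambda a: alpha * min(payoffs[a])
                                     + (1 - alpha) * max(payoffs[a]))
print(maximin, maximax, hurwicz)   # a4 a1 a4
```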
These rules seem excessively technical and reflect a general lack
of consensus about what the optimal course of action is for choices
under uncertainty.
Perhaps the best reply to various types of uncertainty in financial
choice situations is that we must try our best under the circumstances. Risk simulation, subject, again, to procedural checks, provides decision support at least as good and probably in some
instances superior to other approaches. In other respects, the ability of history to produce genuine innovations means that risk analysis, like other planning and evaluation tools, requires continuous
updating.
CONCLUSION
Shifts in industrial patterns and age structure present challenging
problems for utility planning. Economies of scale of large, central
power and water facilities must be balanced against flexibility of
response. The uncertainty and volatility of business and demand
conditions have led to greater emphasis on optimizing the existing
utility system, conservation, construction of smaller units, purchases, and joint ventures of production facilities. Yet some tactics
can lead to regrets, such as reliance on gas turbines if oil prices
rebound to 1970s levels and foregoing new structural water projects
if forecasts of global warming are borne out.
Risk simulation makes explicit what is implicit in many qualitative appraisals, attaching numbers that, at a deep level, guide

thought about the prospects of an investment situation. The
method promotes consistency and a deeper integration of information and understanding about the processes in question. It helps
identify desirable risk management approaches, such as contingency or sinking funds, bond insurance, bank letters of credit, or,
on the physical side, reconfiguring a project.
At the simplest level, factors contributing to financial risk can
be classified as to whether they affect costs or revenues. On the
cost side, there are construction costs and O&M costs. Revenues,
it is suggested here, can be considered from the standpoint of per
capita usage, where allowance is made for price effects and commonly observed chance variation, and population process. This
abstracts from other segmentations of demand by customer class
but can be a useful simplification for a first-cut analysis. This discussion develops and justifies methods of representing per capita
usage and population growth so as to imprint their stochastic features onto the cash flows. Two questions are especially relevant in
this regard: (1) Will the future be like the past, and (2) if not,
what assumptions seem salient? If strong elements of continuity
appear to exist in the situation, time series or structural models of
risk relationships should be a mainstay of the analysis. Otherwise,
judgmental methods and stochastic modeling become more important. This emphasizes, in particular, how qualitative information and modeling techniques from the behavioral sciences can
contribute to risk simulation. One key to the applicability of this
method is the largely monopolistic aspect of markets for water and
power. Although cogeneration and power pooling introduce a degree of competitiveness into electric power markets, many utility
customer areas remain "captive markets." Urban water systems
appear to have relatively unchallenged customer bases and fixed
supply sources also.
Ultimately, all risk factors are represented, either directly or in
more complex formulation, as random variables characterized by
probability distributions. Given this information, risk simulation
involves sampling the possible values of these variables based on
the representation of their cumulative distributions.
Range estimation is a major method of appraising risk factors.
Its use is suggested in connection with cost overruns and delays in
the construction schedule and in the stochastic modeling of population growth. Range estimation involves identifying lower and
upper bounds associated with risk factors, as well as probabilities
of exceedance of various levels of such risk factors. One major
application is the identification of construction contingency allowances.
In addition to range estimation, time series models can characterize the variation of time-dependent variables. Thus, there are
essentially "one-shot" factors, like construction costs, and multiple
period variables, such as the quantity demanded and population
growth. Single period risk factors may be described by a simple
probability distribution. More complex, time-interrelated processes usually must be projected with time series models. The
simplest of such models employ first order autocorrelation and a
white noise or random shock term.
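Written out, the simplest such scheme is an AR(1) process, g(t) = mu + rho(g(t-1) - mu) + eps(t), with eps(t) a white noise shock. A minimal sketch follows, with invented parameter values and function names.

```python
import random

def ar1_growth_path(years, mu=0.02, rho=0.6, sigma=0.01, seed=3):
    """Sample a growth-rate series g[t] = mu + rho*(g[t-1] - mu) + eps[t],
    where eps[t] is Gaussian white noise."""
    rng = random.Random(seed)
    g, path = mu, []
    for _ in range(years):
        g = mu + rho * (g - mu) + rng.gauss(0.0, sigma)
        path.append(g)
    return path

print(ar1_growth_path(10))   # one sampled ten-year series of growth rates
```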
These methods generalize to a number of contexts. Market share
almost always becomes an issue when we leave the "captive customers" of a power or water company, adding a dimension to the
risk simulation. If a toll road or highway is contemplated, chances
for diversion of traffic along alternative routes must be considered.
The elasticity of this diversion to changes in the toll tends to make
for more elastic price responses. An airport cannot set landing fees
without considering whether major carriers will pull up stakes and
centralize their operations somewhere else. The question of market
share, however, is amenable to analytical approaches as mentioned
here in connection with population growth or demand variation: time series and structural modeling when data permit, and judgmental evaluation. The methods presented here, therefore, are
broadly useful, although, as the exposition underlines, specific industry knowledge is essential for realistic appraisal of financial risk.
NOTES
1. Martin L. Baughman and D. P. Kamat, Assessment of the Effect of Uncertainty on the Adequacy of the Electric Utility Industry's Expansion Plans, 1983-1990, EA-1446, Interim Report (Palo Alto, Calif.: prepared for the Electric Power Research Institute, July 1980), p. vi.
2. Ibid., Table 4-2, p. 4-3.
3. See North American Electric Reliability Council, 1987 Reliability Assessment: The Future of Bulk Electric System Reliability in North America 1987-1996 (Princeton, N.J.: North American Electric Reliability Council, September 1987), p. 8.
4. Frederic H. Murphy and Allen L. Soyster, Economic Behavior of
Electric Utilities (Englewood Cliffs, N.J.: Prentice-Hall, 1983), p. 75.
5. John C. Hull, The Evaluation of Risk in Business Investment (London: Pergamon Press, 1980), p. 135.
6. Exposure Assessment Group, Office of Health and Environmental
Assessment, Exposure Factors Handbook, EPA/600/8-89/043 (Washington, DC: U.S. Environmental Protection Agency, March 1989).
7. John A. White, Marvin H. Agee, and Kenneth E. Case, Principles
of Engineering Economic Analysis, 3rd ed. (New York: John Wiley &
Sons, 1989), p. 392.
8. Frank H. Knight, Risk, Uncertainty, and Profit (New York: Houghton Mifflin, 1921).
9. See A. C. Harvey, The Econometric Analysis of Time Series (Cambridge, MA: MIT Press, 1989), and the discussion on model selection.
The classic test in this regard was developed by Gregory Chow, "Tests
for Equality Between Sets of Coefficients in Two Linear Regressions,"
Econometrica 28 (1960): 591-605.
10. Kenneth Boulding, "Social Risk, Political Uncertainty, and the
Legitimacy of Private Profit," in R. Hayden Howard, Risk and Regulated
Firms (East Lansing: Michigan State University Press, 1973), pp. 82-93.
For an early but well-reasoned discussion, see Albert G. Hart, Anticipations, Uncertainty and Dynamic Planning (Clifton, N.J.: Augustus M.
Kelly, 1940).
11. See G. J. Thuesen and W. J. Fabrycky, Engineering Economy (Englewood Cliffs, N.J.: Prentice-Hall, 1984).


Appendix
This Appendix focuses on a topic closely linked to the evaluation of
financial risk: the logic of probability transformations, or rules for finding
the probability or probability distribution of sums, products, and other
combinations of random variables, each characterized by its own probability or probability distribution. This is the basis of analytic studies of
financial risk and informs risk simulation in a specific sense. This discussion
is undertaken without stopping in every instance to explain terms. Additional explication is presented following the main exposition of points
below, where concepts are defined, including random experiment, probability distribution, random variable, law of large numbers, probability, discrete and continuous distributions, and cumulative distribution. Many expositions deal with these points, such as Norman S. Matloff's Probability
Modeling and Computer Simulation (Boston: PWS-Kent, 1988). There
also are works that treat the foundations of the subject, such as V. Barnett's Comparative Statistical Inference, 2d ed. (New York: John Wiley
& Sons, 1982).
The Appendix concludes by reviewing some main probability distributions,
including the normal, Gamma, Beta, and exponential distributions.
PROBABILITY TRANSFORMATIONS
The basic issue discussed here is how the analytic approach to risk
analysis, mentioned in Chapter 1, works. There is probably nowhere better

to begin than with a discussion of the major accomplishment of the analytic
approach to probability transformation: the Central Limit Theorem. The
Central Limit Theorem can be stated as follows:
THEOREM 1: Suppose x1, x2, x3, . . . are independent, identically distributed random variables with E(x) = m and Var(x) = v². Let

Wn = (Tn − nm)/(v√n)

where Tn = x1 + x2 + . . . + xn. Then, for any real number u,

lim (n → ∞) P(Wn ≤ u) = P(z ≤ u)

where z has a normal distribution with a zero mean and a variance of 1.
ofl.
We have n independent random variables, each characterized by the same
probability distribution. The major finding of the Central Limit Theorem
is that the sum of these n variables can be approximated by a normal
distribution. This approximation becomes more and more accurate as the
number of terms in the sum becomes larger. Thus, the normal distribution
can be said to be a limiting distribution to the probability distribution
applying to such sums of random variables. Extensions and generalizations
of this have been a major preoccupation of mathematical statistics. Thus,
there can be cases in which the random variables to be summed are
characterized by different probability distributions or exhibit cross-correlations; that is, they are not stochastically independent and yet their
sum converges in the limit to a normal distribution. Indeed, exploratory
simulation suggests that it is difficult to produce anything but a bell-shaped
curve when summing almost any random variables, provided sufficiently
numerous terms are summed.
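That exploratory claim is easy to check. The fragment below sums thirty uniform draws per observation (the sample sizes are arbitrary); the resulting sample matches the normal benchmark of about 68.3 percent of observations falling within one standard deviation of the mean.

```python
import random
from statistics import mean, stdev

rng = random.Random(11)
# Each observation is the sum of 30 independent U(0, 1) variables.
sums = [sum(rng.uniform(0.0, 1.0) for _ in range(30)) for _ in range(10000)]

m, s = mean(sums), stdev(sums)           # theory: mean 15, std about 1.58
share = sum(abs(x - m) <= s for x in sums) / len(sums)
print(m, s, share)                       # share should be near 0.683
```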
Part of the reason why the Central Limit Theorem applies quite broadly
can be seen if one understands what is involved in summing random variables and determining the probability distribution of their sum, given their
individual probability distributions. Two important concepts here are a convolution of probability distributions and the moment-generating function.
Suppose we have two stochastically independent random variables x1 and x2. Random variable x1 is characterized by the probability density function f(x1), and g(x2) describes the probability density of x2. Stochastic independence means that f(.) is in no way dependent on the value attained by x2, or vice versa. Given this, what can be said about the probability distribution of X = x1 + x2?
One approach is to look to the cumulative distribution of the sum X, which we will denote by FX(.). Thus, FX(t) = P(X ≤ t) = P(x1 + x2 ≤ t). For continuous f(.) and g(.) and nonnegative random variables, this is

known as the convolution of f(.) and g(.). Note, we use the fact here that
the joint distribution of two independent random variables is the product
of their marginal distributionsa general form of one of the so-called
laws of chance mentioned in the following section. Now, if f(.) and g(.)
are normal distributions, F x (.) will be the cumulative distribution of a
normal distribution having a mean and variance that is the sum of the
means and variances of f(.) and g(.), respectively. This follows from the
additive property of exponents.
Another way of demonstrating this important fact about normal variables summing to normal variables is to consider the moment-generating
function. The moment-generating function is defined as
mx = E [e'x]
for any random variable X. This function has a number of interesting
properties. Its name obtains from the fact that the kth moment of any
random variable X equals the kth derivative of its moment-generating
function, if the moment-generating function exists. Otherwise the most
important fact is that the moment-generating function is unique and completely determines the distribution of a random variable. Thus, if two
random variables have the same moment-generating function, they have
the same probability distribution. Again, application of the moment-generating function to the sum of normally distributed independent random
variables indicates that this sum is itself normally distributed and has a
variance equal to the sum of the component variances and a mean equal
to the sum of the component variable means.
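For the normal case the product rule makes the argument one line long. Writing the component means as m1 and m2 and the variances as v1² and v2² (notation assumed here), the moment-generating function of a normal variable, and of the sum, are:

```latex
m_X(t) = e^{mt + v^2 t^2/2}
\qquad\text{so that}\qquad
m_{x_1 + x_2}(t) = m_{x_1}(t)\, m_{x_2}(t)
  = e^{(m_1 + m_2)\,t + (v_1^2 + v_2^2)\,t^2/2},
```

which is again the moment-generating function of a normal variable, now with mean m1 + m2 and variance v1² + v2².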
This points to extensions of the Central Limit Theorem. Thus, suppose
we have a set of random variables (x,,.. ., xn) characterized either by a
probability distribution f(.) or g(.) and that as n gets larger the number
of random variables characterized by both these distributions increases.
Then, Theorem 1 applies to sums of the xt characterized by one or another
of these distributions. That subset of terms characterized by f(.) will converge to a normal distribution. Its complement among the n terms characterized by g(.) also will converge to a normal distribution. Then, on the
basis of the result obtained from moment-generating functions, that is,
that the sum of two variables characterized by normal distributions is itself
characterized by a normal distribution, the sum of these n variables can
be seen to be approximated by a normal distribution.
This additivity, where random variables characterized by one type of probability distribution add up to a sum also characterized by the same type of probability distribution, is true of some but, it should be stressed,
not all probability distributions. Thus, it is also true of the Poisson distribution, but it is not true of the uniform distribution.
The point of developing this perspective on probability transformations
can be seen in the following quote:
It is well-known that if the future cash flows are normal variates, the NPV [net
present value] distribution would also be normal regardless of whether the returns
are independent or not. Moreover, if the future cash flows can be assumed to be
identically and independently distributed, the NPV distribution would also be approximately normal provided there are a relatively large number of future cash
flows. Also, assuming a large number of cash flows, the NPV distribution converges to normality if the cash flows are dependent stationary Markovian random variables. In general, where inter-period dependency exists, other than the aforementioned normal and stationary Markovian cases, the NPV distribution cannot be specified.1
Then, since the net present value is a sum of the discounted cash flows
in each year of some period of analysis, it is possible to apply variants of
the Central Limit Theorem or the aforementioned logic of probability
transformation to reason from independent, identically distributed cash
flows and normally distributed cash flows to the normality of the net
present value. The authors of the above quote also extend this type of
result to correlated cash flows under certain circumstances. In general
circumstances, however, only simulation techniques are capable of characterizing the NPV probability distribution.
Analytic methods and simulation experience suggest that it is likely that
net income for an enterprise such as a utility will be approximated by a
normal distribution. A large enterprise will have numerous cost and revenue accounts that are not highly interrelated. As noted in Chapter 2,
this may imply that the probability distribution of the debt coverage ratio
(net income divided by debt service) also will be a normal distribution,
provided debt service is fixed or nonstochastic. Thus, it should be possible
to compute the probability of default, given the variances of the components of net income or its overall variance.
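A minimal sketch of that computation, using the standard library's NormalDist and hypothetical figures (net income mean of $14 million with a $3 million standard deviation, and a fixed annual debt service of $10 million):

```python
from statistics import NormalDist

net_income = NormalDist(mu=14.0, sigma=3.0)   # $ millions, hypothetical
debt_service = 10.0                           # fixed, nonstochastic

p_default = net_income.cdf(debt_service)               # P(net income < debt service)
p_thin_coverage = net_income.cdf(1.25 * debt_service)  # P(coverage ratio < 1.25)
print(round(p_default, 3), round(p_thin_coverage, 3))  # 0.091 0.309
```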
A circumstance can be noted when debt service is determined by a
variable interest rate. If, as a consequence of the variable interest rate,
total debt service in a particular year is normally distributed around some
mean debt service, the coverage ratio will be characterized by a Cauchy
probability distribution. This is an analytic result from considering the
ratio of two normally distributed random variables. For the Cauchy distribution first and second moments, the mean, and the variance do not
exist as mathematical objects, although location parameters analogous to
the mean and variance can be developed.

Clearly, this type of thinking, although technically demanding, has
promise. It is interesting that scholars also have considered the mean and
variance of a lump-sum cash flow of uncertain timing,2 which might be
interpreted as the arrival of a subsidy to a project, promised by some
supporting, but not directly responsible, level of government, such as the
federal level. The analytic approach can give us some preview of what to
expect under given circumstances, expectations that may be extended by
the application of simulation to related cases.

BASIC CONCEPTS
Finally, some introduction to basic probability and statistics concepts
seems appropriate. Thus, to give readers a flavor of the foundations of
probability theory, we consider a conceptual framework below that is
suggested by the frequency interpretation of probability. The function of
this framework is to motivate definitions of probability that suggest mathematical interpretation. Then, we briefly review the laws of chance, or
the rules for figuring the probabilities of joint and mutually exclusive
events. In addition, we provide examples of important probability distributions in Text Box A1.

Random Experiment
The first problem is to define random variable. This is usually solved
by appeal to other primitive concepts that, ultimately, must be left largely
undefined. Thus, in essence, a random variable is the outcome of a random
experiment.
Suppose we have one red die and one blue die. If we toss them repeatedly under similar conditions and add the face numbers, we perform
a random experiment that generates information about a random variable
defined as, say, the sum of the two numbers coming face up. These
numbers belong to the set of 36 ordered pairs (1, 1), . . . , (1, 6), (2, 1), . . . , (2, 6), . . . , (6, 6) delineating the sample space of this experiment.
Under the frequency interpretation of probability, we accept a law of large
numbers, which suggests that as the number of tosses of these two dice
increases, the relative frequency of the occurrence of these pairs stabilizes
at or converges to a set of ratios or fractions that are identified as the
probability of occurrence of the respective pairs of outcomes. Thus, under
the frequency interpretation, the probability that the random variable
assumes a value of 12 converges to 1 in 36.
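This convergence can itself be exhibited by simulation (a minimal sketch):

```python
import random

rng = random.Random(5)
tosses = 100_000
twelves = sum(rng.randint(1, 6) + rng.randint(1, 6) == 12
              for _ in range(tosses))
print(twelves / tosses)   # relative frequency, converging toward 1/36, about 0.0278
```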

Discrete and Continuous Random Variables, Probability Distributions, and Cumulative Distributions
There are two major types of probability distribution: discrete and
continuous.
A discrete random variable is associated with a discrete probability
distribution, which canonically lists the relative frequencies of occurrence
of the values assumed by this random variable. In the preceding example,
this is a discrete triangular distribution. Such distributions have two key
properties, the discussion of which is facilitated by symbolism. Let us
designate the random variable by z and agree that P(z = 12) means "the
probability that the random variable z sums to a total of 12," where, when
we are sure which random variable is denoted, we simply write p(12),
which can be read as indicating an event has occurred in which the random
variable assumes the value 12. Then, we have two properties:
P(z = i) ≥ 0, i = 2, 3, ..., 12     (A.1)

P(z = 2) + P(z = 3) + ... + P(z = 12) = 1     (A.2)

Equation A.1 states that probabilities are nonnegative. Equation A.2 asserts that the probability that one or another of the mutually exclusive events in the sample space occurs is 1.
Continuous distributions are a second important type of probability
distribution.3 An example of a continuous probability distribution is produced by the spin of a pointer on a circular disk with numbers along its
perimeter. If we suppose this disk has a radius of 0.5 feet, the number
the pointer indicates can be defined as a random variable. It will range
from 0 at some arbitrary starting point to approximately 3.1416 feet, the circumference of the disk (2π times its radius of 0.5 feet).
Note there exists a curious fact involving continuous probability distributions, namely that P(z = v) = 0, where v is any number between 0
and 3.1416 in the above example. This is because under the frequency
interpretation of probability, the ratio of the number of times that the
pointer stops at exactly v to the total number of spins of the pointer
becomes smaller and smaller, converging to zero, as the spins of the
pointer increase. On the other hand, denoting the probability density of z by p(z), there exists the analogue of equation A.2 for continuous distributions, namely

∫ p(z) dz = 1

where instead of a summation we apply the operation of integration from calculus, the integral being taken over the entire range of the random variable. The relevant question, of course, is the probability that the random variable assumes a value within a finite interval,

P(a ≤ z ≤ b) = ∫ p(z) dz, with the integral running from a to b

For any random variable z, the cumulative distribution function F is defined by

F(t) = P(z ≤ t)     (A.6)

For discrete distributions, this is the sum of the probabilities of all values of the random variable that are less than or equal to t. For a continuous distribution, it is the corresponding integral of the density p(z) up to t.

The Laws of Chance


The rules for combining probabilities depend, to a large extent, on
whether or not events are stochastically independent and whether events
are mutually exclusive or can be realized together. Mutually exclusive
events are like distinct values of a random variable in a random experiment; that is, it is impossible for a coin to come up both heads and tails
on a single toss. Stochastically independent events, on the other hand,
can occur together, but the probability of one event occurring is not related
to whether another event occurs or not. Because they are so fundamental,
these rules can be called the laws of chance.
Perhaps the simplest rule is that if A and B are mutually exclusive
events, P(A and B) = 0, where we read P(x) as "the probability that
event x occurs." Like some of the elemental propositions of Euclidean
geometry, this rule follows from verbal definition and is intuitive.
More interesting is the assertion that if A1, A2, and A3 are mutually exclusive events, then P(A1 or A2 or A3) = P(A1) + P(A2) + P(A3).
Note here that if the Ai are mutually exclusive events that include all
possible outcomes, the sum of the probabilities of each mutually exclusive
event will equal 1. A general form of this rule may be derived from the
proposition that P(A or B occurs) = P(A) + P(B) - P(A and B).
Similarly, we may begin from an obvious rule for the case of stochastic
independence. Thus, if A and B are stochastically independent,
P(A and B) = P(A)P(B). The general form of this rule is also relatively obvious, and states that P(A and B) = P(A|B)P(B), where P(x|y) is read as "the conditional probability that event x occurs, given that event y has occurred."
Further extensions of these rules yield interesting results such as Bayes' Theorem, which determines the probability that one event occurs, given information about the prior occurrence of conditionally related events.
Probability Distributions
Text Box A.1 lists the mathematical form of several probability distribution functions important in financial risk analysis. The following text
discusses each of these functions in turn, noting important features and
potential contexts of application.
Among the discrete distributions, the binomial distribution is basic and
important. The binomial distribution describes a random experiment with
two mutually exclusive outcomes in a series of repetitions or trials. Suppose P(A) = p, so that the probability that event A does not occur,
usually symbolized as P(Ā), is, by definition, equal to 1 - p = q. Thus,
if we are interested in the likelihood of three heads occurring in ten flips
of a coin, the answer is given by the expression:

P(3 heads in 10 flips) = C(10, 3)(.5)^3(.5)^7 = 120/1024 ≈ .117

where C(10, 3) indicates the combination of ten things taken three at a time and is equal to 10!/((10-3)!3!).
An interesting aspect of the binomial distribution is that it converges
to the normal distribution or another discrete distribution called the Poisson, as the number of trials increases without limit. If probabilities p and
q are roughly the same size, the binomial converges to a normal distribution. On the other hand, if the probability of event A occurring is not
near .5 but, rather, much nearer 0 (with the mean np held moderate as the number of trials grows), the binomial converges to the Poisson distribution.
Note that the first two moments, the mean and variance, completely
characterize the normal distribution.4 Thus, a random variable with a normal distribution may be standardized by the transformation z = (x - μ)/σ, where μ is the mean and σ is the standard deviation of the random variable x. A standardized variable such as z obeys the three sigma rule: the probability that the absolute difference between a normally distributed variable and its mean is greater than 3σ is less than .003. Similarly, a deviation of more than one sigma from the mean is to be expected about once
every three trials. Since the normal distribution is characterized by its first two
moments, probability tables are easy to prepare and consult.
Another mainstay of the normal distribution is its role as a sampling
distribution. Suppose, for example, we have a population listing of a
human characteristic such as height or weight.

Text Box A.1

Binomial Distribution
P(x = k) = C(n, k) p^k q^(n-k),  k = 0, 1, ..., n;  q = 1 - p

Normal Distribution
f(x) = (1/(σ√(2π))) exp(-(x - μ)^2 / (2σ^2))

Poisson Distribution
P(x = k) = e^(-λ) λ^k / k!,  k = 0, 1, 2, ...

Uniform Distribution
f(x) = 1/(b - a),  a ≤ x ≤ b;  b = maximum, a = minimum

Triangular Distribution
f(x) = 2(x - a)/((b - a)(c - a)) for a ≤ x ≤ b;  f(x) = 2(c - x)/((c - a)(c - b)) for b < x ≤ c
a = minimum, b = most likely, c = maximum

Gamma Distribution
f(x) = x^(α-1) e^(-x/β) / (Γ(α) β^α),  x ≥ 0

Exponential Distribution
f(x) = λ e^(-λx),  x ≥ 0


We then sample from this population to estimate its mean with the information we obtain from the mean of the sample. Then, in repeated samplings of the same size, the sampling distribution of the sample mean will be approximately normal around the mean of the population, with a variance determined by the sample size and the variance of the population. For these reasons, modern statistics, associated with names such as Karl Pearson and R. A. Fisher, has been erected on the basis of the normal distribution.
The Poisson distribution also has application to real world processes.
It is a discrete probability distribution with the unusual property that
its mean and variance are equal. This distribution is perhaps most important in queuing problems. Given purely random arrival times in a line,
timing of telephone calls, and so on, the number of arrivals in a line or
calls in an interval of time is described by a Poisson distribution.
We have discussed the uniform distribution throughout the text of the
book. It is a continuous distribution possessing finite range and the property that intervals of equal size in its range always have the same probability.
The triangular distribution also is discussed in the text at some length.
The triangular distribution is a rough approximating function for any
unimodal probability distribution.
One way to consider probability distributions is in terms of parametric families. In this light, the binomial, normal, and Poisson distributions are linked by convergence processes. The normal distribution, furthermore, is linked by probability transformation to the Cauchy distribution (as a ratio of two normal variates) and to the Chi-square distribution (as the sum of squared normal variates).
The Gamma distribution, on the other hand, is a form that is entitled to a family of its own due to the ease with which it is transformed algebraically to other related distributions. Here, the symbol Γ in the denominator of the Gamma distribution in Text Box A.1 is the Gamma function, a sort of generalization of the factorial. The Gamma distribution has a nonnegative range and is determined by two parameters (α, β) whose product is the mean of the distribution. Gamma distributions can assume a variety of shapes, depending on the values selected for these parameters. For specific values of these parameters, the Gamma distribution becomes a Chi-square, exponential, Erlang, or Beta distribution.5
Finally, the exponential distribution has an interesting relationship to
the Poisson in waiting time problems. Thus, in a Poisson arrival process,
the distribution of waiting times between successive arrivals is exponential. The exponential distribution often is held to be a good representation of failure processes, such
as the time it takes for a part to wear out.


NOTES
1. L. C. Leung, V. V. Hui, and G. A. Fleischer, "On the Present-Worth Moments of Serially Correlated Cash Flows," Engineering Costs
and Production Economics 16 (1989): 281-289.
2. John H. Estes, "Stochastic Cash Flow Evaluation Under Conditions
of Uncertain Timing," Engineering Costs and Production Economics 18
(1989): 65-70.
3. Mixed cases also can exist.
4. The moments of a distribution provide important summary data
concerning a distribution's central tendency, dispersion, and shape. The
first moment is the mean, average, or expected value, a measure of central location. The second moment is the variance, a measure of the dispersion around the mean. The third and fourth moments are less familiar but indicate broadly whether and how the distribution is nonsymmetric (right or left skewed) and whether it is flat or peaked (kurtosis).
5. See Stephen Kokoska and Christopher Nevison, Statistical Tables
and Formulae (New York: Springer-Verlag, 1989).


Bibliography
Agthe, Donald E., R. Bruce Billings, and Judith M. Dworkin. "Effects
of Rate Structure Knowledge on Household Water Use." Water
Resources Bulletin 24 (June 1988): 627-630.
Alho, Juha M., and Bruce D. Spencer. "Uncertain Population Forecasting." Journal of the American Statistical Association 80 (June 1985):
306-314.
Altmann, Edward I., and Scott A. Nammacher. Investing in Junk Bonds:
Inside the High Yield Debt Market. New York: John Wiley & Sons,
1987.
Altourney, Edward G. The Role of Uncertainties in the Economic Evaluation of Water Resources Projects. Stanford, CA: Institute of Engineering-Economic Systems, Stanford University, 1963.
Anderson, Kent. Residential Demand for Electricity: Econometric Estimates for California and the United States, R-905 NSF. Santa Monica, CA: Rand Corporation, January 1972.
Andersson, Roland, and Lewis Taylor. "The Social Cost of Unsupplied
Electricity." Energy Economics (July 1986): 139-146.
Artto, Karlos A. "Approaches in Construction Project Cost Risk." Annual Transactions of the American Association of Cost Engineers
(AACE). Morgantown, W.V.: AACE, 1988, B-4, B.5.1-B.5.4.
Ascher, William. Forecasting: An Appraisal for Policy-Makers and Planners. Baltimore: Johns Hopkins Press, 1978.
Baleriaux, H., and E. Jamoulle. "Simulation de l'exploitation d'un parc des machines thermiques de production d'électricité couplé à des stations de pompage." Revue Électricité, édition SRBE, 5 (1967).
Baughman, Martin L., Paul L. Joskow, and Dilip P. Kamat. Electric Power in the United States: Models and Policy Analysis. Cambridge, MA: MIT Press, 1979.
Baughman, Martin L., and D. P. Kamat. Assessment of the Effect of
Uncertainty on the Adequacy of the Electric Utility Industry's Expansion Plans, 1983-1990, EA-1446, Interim Report. Palo Alto,
Calif.: prepared for the Electric Power Research Institute, July
1980.
Bauman, D. S., P. A. Morris, and T. R. Rice. An Analysis of Power
Plant Construction Lead Times, Volume 1: Analysis and Results.
EA-2880, Final Report. Palo Alto, Calif.: EPRI, February 1983.
Beeston, Derek T. "Combining Risks in Estimating." Construction Management and Economics 4 (1985): 75-79.
Beeston, Derek T. Statistical Methods for Building Price Data. London: E. & F. N. Spon, 1983.
Bent, James A. Project Management for Engineering and Construction.
Lilburn, GA: Fairmont Press, distributed by Prentice-Hall, 1989.
Boland, John J., Benedykt Dziegielewski, Duane D. Baumann, and Eva
M. Optiz. Influence of Price and Rate Structures on Municipal and
Industrial Water Use, Contract Report 84-C-2, Engineer Institute
for Water Resources, U.S. Army Corps of Engineers. Washington,
D.C.: U.S. Government Printing Office, June 1984.
Boulding, Kenneth. "Social Risk, Political Uncertainty, and the Legitimacy of Private Profit." In R. Hayden Howard, Risk and Regulated
Firms. East Lansing: Michigan State University Press, 1973,
pp. 82-93.
Box, George E. P., and G. M. Jenkins. Time Series Analysis: Forecasting
and Control. San Francisco: Holden-Day, 1976.
Byers, Thomas, and Paul Teicholz. "Risk Analysis of Resource Levelled
Networks." In Proceedings of the Conference on Current Practice
in Cost Estimating and Cost Control, sponsored by the Construction
Division of the American Society of Civil Engineers in Cooperation
with the University of Texas at Austin. New York: American Society of Civil Engineers, 1983, pp. 178-186.
Cameron, D. A. "Risk Analysis and Investment Appraisal in Marketing."
Long Range Planning (December 1972): 43-47.
Carver, Philip H., and John J. Boland. "Short and Long-Run Effects of
Price on Municipal Water Use." Water Resources Research 16 (August 1980): 609-616.
Chapman, D., T. Tyrell, and T. Mount. "Electricity Demand Growth
and the Energy Crisis." Science 178 (November 1972): 703-708.
Chicoine, David L., and Ganapathi Ramamurthy. "Evidence on the Specification of Price in the Study of Domestic Water Demand." Land
Economics 62 (February 1986): 26-32.

Chow, Gregory. "Tests for Equality Between Sets of Coefficients in Two Linear Regressions." Econometrica 28 (1960): 591-605.
Clark, Charles E., Thomas W. Keelin, and Robert D. Shur. User's Guide
to the Over!Under Capacity Planning Model, EA-1117, Final Report. Palo Alto, Calif.: prepared for the Electric Power Research
Institute, October 1979.
Coats, Pamela K., and Delton L. Chesser. "Coping with Business Risk
through Probabilistic Financial Statements." Simulation 38 (April
1982): 111-121.
Colorado Division of Local Government. Colorado Population Growth.
Denver: Colorado Division of Local Government, 1989, Table 1.
Curran, Michael W. "Range Estimating: Reasoning with Risk." Annual
Transactions of the American Association of Cost Engineers
(AACE). Morgantown, W.V.: AACE, n.3.1-n.3.9.
Dalal, Siddhartha, Edward B. Fowlkes, and Bruce Hoadley. "Risk Analysis
of the Space Shuttle: Pre-Challenger Prediction of Failure." Journal of the American Statistical Association 84 (December 1989):
945-957.
Davidson, Glenn J., Thomas F. Sheehan, and Richard G. Patrick. "Construction Phase Responsibilities." In Jack H. Willenbrock and H.
Randolf Thomas (eds.), Planning, Engineering, and Construction
of Electric Power Generation Facilities. New York: John Wiley &
Sons, 1980.
Diekmann, James E., Edward E. Sewestern, and Khalid Taher. Risk
Management in Capital Projects. A report to the Construction Industry Institute. Austin: University of Texas at Austin, October
1988, pp. 63-81.
Duann, Daniel J. "Alternative Searching and Maximum Benefit in Electric Least-Cost Planning." Public Utilities Fortnightly, December
21, 1989, pp. 19-22.
Dubin, Jeffrey A., Allen K. Miedema, and Ram V. Chandran. "Price
Effects of Energy-Efficient Technologies: A Study of Residential
Demand for Heating and Cooling." Rand Journal of Economics
17 (August 1986): 310-325.
Economos, A. M. "A Financial Simulation for Risk Analysis of a Proposed Subsidiary." Management Science 15 (1968): 75-82.
Efron, B. "Bootstrap Methods: Another Look at the Jackknife."
Annals of Statistics 7 (January 1979): 1-26.
. "Nonparametric Standard Errors and Confidence Intervals." Canadian Journal of Statistics 9 (1981): 139-172.
Efron, B., and G. Gong. "A Leisurely Look at the Bootstrap, the Jackknife, and Cross-Validation." The American Statistician 37 (February 1983): 36-48.

Estes, John H. "Stochastic Cash Flow Evaluation Under Conditions of Uncertain Timing." Engineering Costs and Production Economics 18 (1989): 65-70.
Evans, Nigel. "The Sizewell Decision: A Sensitivity Analysis." Energy
Economics 6 (January 1984): 15-20.
Exposure Assessment Group, Office of Health and Environmental Assessment. Exposure Factors Handbook, EPA/600/8-89/043. Washington, DC: U.S. Environmental Protection Agency, March 1989.
Fallon, Malachy. "Municipal Electric Credit Review." Standard & Poor's
CreditWeek, June 5, 1989, p. 8.
Fenn, Scott. America's Electric Utilities. New York: Praeger, 1984.
First Boston Corporation. Financing Hydroelectric Facilities. Boston: First
Boston Corporation, April 1981, pp. 10-11.
Giacotto, Carmelo. "A Simplified Approach to Risk Analysis in Capital
Budgeting with Serially Correlated Cash Flows." Engineering
Economist (Summer 1984): 273-286.
Gordian Associates. Optimal Capacity Planning Under Uncertainty in Demand. Report submitted to the U.S. Federal Energy Administration. Washington, D.C.: U.S. Government Printing Office,
November 1976.
Gottman, John M. Time Series Analysis: A Comprehensive Introduction
for Social Scientists. New York: Cambridge University Press, 1981.
Granger, C.W.J. Forecasting in Business and Economics. New York:
Academic Press, 1980.
Hackney, John W. Control and Management of Capital Projects. New
York: John Wiley & Sons, 1965.
Haimes, Yacov Y. Hierarchical Analyses of Water Resources Systems:
Modeling and Optimization of Large-Scale Systems. New York:
McGraw-Hill, 1977.
Hart, Albert G. Anticipations, Uncertainty and Dynamic Planning. Clifton, N.J.: Augustus M. Kelly, 1940.
Hartman, Raymond S. "Self Selection Bias in the Evaluation of Voluntary
Energy Conservation Programmes." Review of Economics and Statistics 70 (August 1988): 448-458.
Hartman, Raymond S., and Michael J. Doane. "The Estimation of the
Effects of Utility-Sponsored Conservation Programmes." Applied
Economics 18 (1986): 1-25.
Harvey, A. C. The Econometric Analysis of Time Series. Cambridge, MA:
MIT Press, 1989.
Haveman, Robert H. The Economic Performance of Public Investments:
An Ex Post Evaluation of Water Resources Investments. Baltimore:
Johns Hopkins Press, 1972.


Hayes, R. W., J. G. Perry, P. A. Thompson, and G. Willmer. Risk Management in Engineering Construction. Morgantown, W.V.: Thomas
Telford Ltd., 1986.
Henley, E. J., and H. Kumamoto. Reliability Engineering and Risk Assessment. Englewood Cliffs, N.J.: Prentice-Hall, 1981.
Henson, Steven E. "Electricity Demand Estimates Under Increasing
Block Rates." Southern Economic Journal 51 (July 1984): 147156.
Hertz, David B. "Risk Analysis in Capital Investment." Harvard Business
Review (September-October 1979): 169-181.
Hoffman, Mark, Robert Glickstein, and Stuart Liroff. "Urban Drought
in the San Francisco Bay Area: A Study of Institutional and Social
Resiliency." In American Water Works Association Resource
Management, Water Conservation Strategies. Denver: American
Water Works Association, 1980, pp. 78-85.
Hufschmidt, Maynard M., and Jacques Gerin. "Systematic Errors in Cost
Estimates for Public Investment Projects." In Julius Margolis (ed.),
The Analysis of Public Outputs. New York: Columbia University
Press, 1970, pp. 267-315.
Hull, John C. The Evaluation of Risk in Business Investment. London:
Pergamon Press, 1980.
. "Risk in Capital Investment Proposals: Three Viewpoints." Managerial Finance (1986): 12-15.
Iman, R. L., J. M. Davenport, and D. K. Zeigler. "Latin Hypercube
Sampling (A Program User's Guide)," Technical Report SAND79-1473. Albuquerque, NM: Sandia Laboratories, 1980.
Johnson, Mark E. Multivariate Statistical Simulation. New York: John
Wiley & Sons, 1987.
Jones, C. Vaughan. "Analyzing Risk in Capacity Planning from Varying
Population Growth." Proceedings of the American Water Works
Association. Denver: June 22-26, 1986, pp. 1715-1720.
. "Nonlinear Pricing and the Law of Demand." Economics Letters
23 (1987): 125-128.
Jones, C. Vaughan, and John R. Morris. "Instrumental Price Estimates
and Residential Water Demand." Water Resources Research 20
(February 1984): 197-202.
Jones, Ian S. "The Application of Risk Analysis to the Appraisal of
Optional Investment in the Electricity Supply Industry." Applied
Economics 3 (May 1986): 509-528.
Kaufmann, A. Reliability Criteria: A Cost Benefit Analysis. OR Report
75-79. Albany: New York State Department of Public Service,
June 1975.

Kim, Sang-Hoon, and Hussein H. Elsaid. "Safety Margin Allocation and Risk Assessment Under the NPV Method." Journal of Business Finance and Accounting (Spring 1985): 133-144.
Kindler, J., and C. S. Russell. Modeling Water Demands. Orlando, FL:
Academic Press, 1984.
Kleindorfer, Paul R., and Howard C. Kunreuther (eds.). Insuring and
Managing Hazardous Risks: From Seveso to Bhopal and Beyond.
New York: Springer-Verlag, 1987.
Knight, Frank H. Risk, Uncertainty, and Profit. New York: Houghton
Mifflin, 1921.
Kokoska, Stephen, and Christopher Nevison. Statistical Tables and Formulae. New York: Springer-Verlag, 1989.
Kryzanowski, L., P. Lustig, and B. Schwab. "Monte Carlo Simulation
and Capital Expenditure DecisionsA Case Study." Engineering
Economist 18 (1972): 31-47.
Laber, Gene, and Elisabeth R. Hill. "Market Reaction to Bond Rating
Changes: The Case of WHOOPS Bonds." Mid-Atlantic Journal of
Business (Winter 1985/1986): 53-65.
Leigland, James. "WPPSS: Some Basic Lessons for Public Enterprise
Managers." California Management Review (Winter 1987): 78-88.
Lenzer, Terry F. The Management, Planning, and Construction of the
Trans-Alaska Pipeline System. Anchorage, AK: Alaska Pipeline
Commission, August 1, 1977, Chapter II.
Leung, L. C., V. V. Hui, and G. A. Fleischer. "On the Present-Worth Moments of Serially Correlated Cash Flows." Engineering Costs and Production Economics 16 (1989): 281-289.
Linstone, H. A., and M. Turoff. The Delphi Method: Techniques and
Applications. Reading, MA: Addison-Wesley, 1975.
Loucks, Daniel P., Jery R. Stedinger, and Douglas A. Haith. Water Resource System Planning and Analysis. Englewood Cliffs, N.J.: Prentice-Hall, 1981.
Lucas, Nigel, and Dimitrios Papaconstantinou. "Electricity Planning Under Uncertainty." Energy Policy 10 (June 1982): 143-152.
Makridakis, Spyros. "The Art and Science of Forecasting: An Assessment
and Future Directions." International Journal of Forecasting
(1986): 15-39.
Malkiel, Burton G. A Random Walk Down Wall Street, 4th ed. New
York: W. W. Norton, 1985.
Manne, A. S. "Capacity Expansion and Probabilistic Growth." Econometrica 29 (1961): 632-641.
Masse, P., and R. Gibrat. "Application of Linear Programming to Investments in the Electric Power Industry." Management Science 3
(January 1957): 149-166.


Matloff, Norman S. Probability Modeling and Computer Simulation. Boston: PWS-Kent, 1988.
McCleary, Richard, and Richard A. Hay, Jr. Applied Time Series Analysis
for the Social Sciences. Beverly Hills, CA: Sage Publications, 1980.
Merkhofer, M. W. "Quantifying Judgmental Uncertainty: Methodology,
Experiences, and Insights." IEEE (Institute of Electrical and Electronic Engineers) Transactions on Systems, Man, and Cybernetics
17, no. 5 (September/October 1987): 741-752.
Merrow, Edward W., Stephen W. Chapel, and Christopher Worthing. A
Review of Cost Estimation in New Technologies: Implications for
Energy Process Plants. Prepared for the U.S. Department of Energy by the Rand Corporation, R-2481-DOE, Washington, D.C.,
July 1979.
Miller, Earl J. "Project Information Systems and Controls." In Jack H.
Willenbrock and H. Randolf Thomas (eds.), Planning, Engineering, and Construction of Electric Power Generation Facilities. New
York: John Wiley & Sons, 1980.
Moder, Joseph J., Cecil R. Phillips, and Edward W. Davis. Project Management with CPM, PERT, and Precedence Diagramming, 3rd ed.
New York: Van Nostrand Reinhold Company, 1983.
Modianos, D.T.R., C. Scott, and L. W. Cornwall. "Testing Intrinsic Random-Number Generators." Byte (January 1987): 175-178.
Moyer, R. Charles, and Shomir Sil. "Is There an Optimal Utility Bond
Rating?" Public Utilities Fortnightly, May 12, 1989, pp. 9-15.
Murphy, Frederic H., and Allen L. Soyster. Economic Behavior of Electric
Utilities. Englewood Cliffs, N.J.: Prentice-Hall, 1983.
Murthy, Chandra S. "Cost and Schedule Integration in Construction." In
Proceedings of the Conference on Current Practice in Cost Estimating and Cost Control, sponsored by the Construction Division
of the American Society of Civil Engineers in Cooperation with
the University of Texas at Austin. New York: American Society
of Civil Engineers, 1983, pp. 119-129.
Myhra, David. Whoops!/WPPSS: Washington Public Power Supply System. Jefferson, N.C.: McFarland, 1984.
Newendorp, P. D., and P. J. Root. "Risk Analysis in Drilling Investment Decisions." Journal of Petroleum Technology (June 1968): 579-585.
Nieswiadomy, Michael L., and David J. Molina. The Perception of Price
in Residential Water Demand Models Under Decreasing and Increasing Block Rates. Paper for the 64th Annual Western Economics Association International Conference, June 21, 1989.
North American Electric Reliability Council. 1987 Reliability Assessment: The Future of Bulk Electric System Reliability in North America, 1987-1996. Princeton, N.J.: North American Electric Reliability Council, September 1987.
Olsen, Darryl, and Robert J. Defillippi. "The Washington Public Power
Supply System: A Question of Managerial Control and Accountability in the Public Sector." Journal of Management Case Studies
(Winter 1985): 323-343.
Papaconstantinou, D. V. Power System Planning Under Uncertainty Using
Probabilistic Simulation, Internal Report EPU/DP/1. London: Energy Policy Unit, Department of Mechanical Engineering, Imperial
College of Science and Technology, May 1980.
Park, Chan S. "The Mellin Transform in Probabilistic Cash Flow Modeling." Engineering Economist (Winter 1987): 115-133.
Pflaumer, Peter. "Confidence Intervals for Population Projections Based
on Monte Carlo Methods." International Journal of Forecasting 4
(1988): 135-142.
Phillips, Charles F., Jr. The Regulation of Public Utilities: Theory and
Practice. Arlington, VA: Public Utilities Reports, Inc., 1984,
p. 221.
Pittenger, Donald. Projecting State and Local Populations. Cambridge,
MA: Ballinger, 1976.
Pouliquen, L. Y. Risk Analysis in Project Appraisal. Baltimore: Johns
Hopkins Press for the International Bank for Reconstruction and
Development, 1970.
Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William
T. Vetterling. Numerical Recipes: The Art of Scientific Computing.
New York: Cambridge University Press, 1986.
"Report on the Reliability Survey of Industrial Plants." IEEE Transactions on Industry Applications 1A-10, no. 2 (March 1974): 231
233.
Risk Assessment: Report of a Royal Society Study Group. London: The
Royal Society, January 1983.
"Risk in Capital Investment Proposals: Three Viewpoints," Managerial
Finance (1986): 12-15.
Sanghvi, Arun P., and Dilip R. Limaye. "Planning Future Electrical Generation Capacity." Energy Policy 7 (June 1979): 102-116.
Scherer, F. M. Industrial Market Structure and Economic Performance,
2d ed. Boston: Houghton Mifflin, 1980.
Slovic, P. "Perception of Risk." Science 236 (1987): 280-285.
Standard & Poor's Corporation. Municipal Finance Criteria. New York:
Standard & Poor's, 1989.
Stanford Research Institute. Decision Analysis of California Electrical Capacity Expansion. Report submitted to California Energy Resources Conservation and Development Commission, Menlo Park, February 1977.
Stoto, Michael A. "The Accuracy of Population Projections." Journal of
the American Statistical Association 78 (March 1983): 13-20.
Taylor, Lester. "The Demand for Electricity: A Survey." Bell Journal of
Economics 6, no. 1 (1975): 74-110.
Taylor, Stephen. Modeling Financial Time Series. Chichester, England:
John Wiley & Sons, 1986.
Thuesen, G. J., and W. J. Fabrycky. Engineering Economy. Englewood
Cliffs, N.J.: Prentice-Hall, 1984.
Tucker, James F. Cost Estimation in Public Works. MBA thesis, University of California at Berkeley, September 1970.
Tucker, S. N. "Formulating Construction Cash Flow Curves Using a Reliability Theory Analogy." Construction Management and Economics 4 (1986): 179-188.
Turvey, Ralph, and Dennis Anderson. Electricity Economics: Essays and
Case Studies. Baltimore: Johns Hopkins University Press, published for the World Bank, 1977, p. 259.
University of California at Berkeley. Price Elasticity Variation: An Engineering Economic Approach, EM-5038. Final Report. Berkeley,
CA: Electric Power Research Institute, February 1987.
U.S. Bureau of the Census. Projections of the Population of the United
States by Age, Sex, and Race: 1983 to 2080. Current Population
Reports, Population Estimates and Projections, Series P-25, No.
952, U.S. Department of Commerce. Washington, DC: Government Printing Office, 1984.
Veall, Michael R. "Bootstrapping the Probability Distribution of Peak
Electricity Demand." International Economic Review 28 (February
1987): 203-212.
von Winterfeldt, Detlof, and Ward Edwards. Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.
Wei, William W. S. Time Series Analysis: Univariate and Multivariate
Methods. Redwood City, CA: Addison-Wesley, 1990.
White, John A., Marvin H. Agee, and Kenneth E. Case. Principles of
Engineering Economic Analysis, 3rd ed. New York: John Wiley &
Sons, 1989.
Woodard, Robert. "Power Shortage Threatens Northeast." Standard &
Poor's CreditWeek, June 5, 1989, p. 9.
Yokoyama, Kurazo. Annual Transactions of the American Association of
Cost Engineers (AACE). Morgantown, W.V.: AACE, 1988.


Index
Amortization schedules, 105-112
Autocorrelation, 80
Benefit cost analysis, 121-123
Beta distribution. See Probability distribution
Bootstrap methods, 29, 60
Bulk power distributor, 99-100
Bureau of Economic Research OBERS model, 87
Capacity planning, 112-121
Central Limit Theorem, 57
Conservation, 78-79
Consumer willingness to pay, 122
Contingency funds, 57-60
Critical path analysis, 61
Critical path modeling (CPM), 47
Debt schedules, 100-112
Default risk, 20-22
Demand: definition of, 71; inelastic and elastic, 74-77; Law of, 70; price elasticity of, 72
Disbursement pattern for construction expenditures, 62
Economic forecasts, 86-88
Economies of scale: engineering basis for, 15; role in utility planning, 11, 114, 117-121
Electric Power Research Institute (EPRI), 17, 132
Event tree, 35-36
Exponential distribution. See Probability distribution
Financial risks, 47-68, 121-123
Gamma distribution. See Probability distribution
Hurwicz rule, 138
Inelastic and elastic demand. See Demand, inelastic and elastic
Internal rate of return on capital, 43
Junk bonds, 3
Law of demand. See Demand, Law of
Least cost planning procedures, 113, 117-121
Microcomputer simulation programs: @RISK, 9
Minimum present value criterion, 23, 114
Monte Carlo simulation, 17
National Environmental Policy Act, 51
Normal distribution. See Probability distribution
Nuclear power plant delays and cost overruns, 53
Pareto's Law, 56
PERT analysis, 47, 61
Poisson distribution. See Probability distribution
Political risk, 20
Population forecasts: accuracy of, 87; baby boom and, 88; fertility rates and, 89; methods, 89-90; migration and, 90; mortality rates and, 89; time interdependency parameter for, 92
Price elasticity of demand. See Demand, price elasticity of
Probability: frequency interpretation, 24; subjectivist interpretation, 24
Probability distribution: Beta, 134; exponential, 25; Gamma, 134; normal, 25; Poisson, 25; triangular, 56; uniform, 57; Weibull, 134
Probability elicitation: Delphi method, 26-27; jury of executive opinion, 26; probability encoding, 27
Probability transformations, 143-147
Pure uncertainty. See Uncertainty
Random numbers, 34; Lotus 1-2-3 @RAND function, 34
Range estimation, 54; of population growth, 91-92
Rate effects: rate shock, 70; simultaneity in estimates of, 78
Revenue requirement, 20
Risk analysis, 5-8
Risk factors, identification of, 19-23
Risk pooling, 60
Risk preferences, 36-39; first order stochastic dominance, 38-39
Risk profiles, 6-7, 36-39
Risk simulation, 9, 17-18, 56-57, 117-121, 130-131; robustness in, 133-136
Robustness. See Risk simulation, robustness in
Sampling strategies, 34
Scenario development, 10
Sensitivity analysis, 10
Sinking funds, 101-105
Standard & Poor's, 8
Stochastic dominance, 38, 40, 41
Stochastic independence, 56, 60
Structural models, 33
Tennessee Valley Authority, 52
Time series analysis, 30-33, 79, 106-108
Trans-Alaska pipeline cost overruns, 51
Triangular distribution. See Probability distribution
Uncertainty, 90-94; maximin criterion for decisions, 138; pure, 136-138
Uniform probability distribution. See Probability distribution
U.S. Army Corps of Engineers, 52
Utility bond ratings, 3, 43
Verification of risk analysis, 132-133
Washington Public Power Supply System (WPPSS), 3
Weibull distribution. See Probability distribution
White noise residuals, 32-33

About the Author


C. VAUGHAN JONES is a principal in Economic Data Resources of Boulder, Colorado, a consulting firm specializing in public utility and infrastructure issues. His numerous research studies have appeared in the Journal of Economic Issues, Economics Letters, American Economist, Atlantic Economic Journal, and Water Resources Research.
