Infrastructure Debt
THE CASE OF WATER AND POWER INVESTMENTS

C. Vaughan Jones
Foreword by David Bendel Hertz

QUORUM BOOKS
Westport, Connecticut / London

90-45144
Contents

Figures and Tables vii
Foreword, David Bendel Hertz ix
Preface xiii
1 Introduction 1
2 Concepts and Procedures 17
3 Risks in Construction 47
6 Applications 97
Appendix 143
Bibliography 155
Index 165
Foreword
Preface
The origins of this book date to conversations with A. G. Hart at
Columbia University in the late 1960s and, earlier, to a seminar
in Monte Carlo analysis at the University of Colorado.
Throughout the 1970s and 1980s, I had an opportunity to see
the relevance and to develop my understanding of financial risk
analysis in projects carried out for the Iowa State Commerce Commission, the Colorado Public Utilities Commission, and the Denver Water Board. Since then, I have had access to the considerable
resources in the area of risk analysis of the Environment and Behavior Program at the University of Colorado at Boulder, as well
as the good offices of research librarians at the University of Colorado School of Business.
I also have received invaluable comments, encouragement, and
support from a number of individuals, especially including John
J. Boland, Department of Geography and Environmental Engineering, Johns Hopkins University; David B. Hertz, Department
of Intelligent Computer Systems, University of Miami; James Manire, assistant vice president, Boettcher and Company; Margaret
Ludlum, senior economist, Seattle Water Department; William A.
Steele, financial analyst, Colorado Public Utilities Commission;
Timothy Tatam, vice president, Municipal Finance Department,
Standard & Poor's Corporation; Fred J. Leonard, financial analyst;
and my wife, Carol Eileen Ryan.
1
Introduction
This book is about how to measure financial risk associated with
investments in specific infrastructure: electricity generation;
power transmission; and water storage, treatment, and distribution
systems. Such evaluations are especially accessible with respect to
these types of investments and with respect to infrastructure generally. In addition, power and water enterprises showed signs of
financial strain in the 1980s and confront critical decisions in the
1990s.
As one of a small number of extended discussions of financial
risk analysis, the book should be of interest to students of business,
finance, and economics. Readers can fix in mind basic principles
of quantitative risk analysis or risk simulation and consider the
interplay between context or industry specific information and
global approaches to risk evaluation.
This discussion has special and practical relevance for an audience of investment analysts, engineers, utility administrators and
financial personnel, regulatory officials, bond underwriters, and
public interest advocates. These groups need to develop a common
language to address risks associated with bond issuance in a decade
that promises significant departures from past patterns in population and economic growth, the natural environment, and regulatory politics. To this end, this book attempts an integration of
material from business, engineering, economics, demography,
probability theory, computer simulation, and policy studies.
THE PROBLEM
Slower population growth, coupled with erratic and slower economic growth, took a toll on the financial viability of the electric
power industry in the 1970s and 1980s. Capacity expansion typically
involves long lead times. Designs in the pipeline since the 1960s
advanced to construction in the 1970s and 1980s. These projects
often were configured to take advantage of scale economies, and
they embodied, in the case of nuclear facilities, optimism vis-a-vis
new or frontier technology. In a sense, many projects were tailored
to a different economic environment than they met when they
came on line. Pervasive overcapacity was one result and only recently has been overcome in some regions of the nation.1
One upshot was deterioration in electric utility bond ratings.
Dividends declined along with interest coverage ratios. In the mid-1980s, investor-owned public utilities accounted for 30 to 40 percent of the dollar volume of junk bonds. Many were "fallen angels"
or issues whose default probability was considered sufficient by
bond rating organizations to be dropped from the list of investment
grade securities.2
The most spectacular recent example of financial risk in the
power industry was the default of the Washington Public Power
Supply System (WPPSS) on interest payments on $2.25 billion
worth of bonds in January 1984. Commentary on this event, the
largest default on utility bonds since the Great Depression, focuses
on construction delays, poor project planning, and the practice
WPPSS adopted, for a time, of capitalizing several years of interest
in bond issues. Financing costs escalated, and projected electricity
demand did not materialize. Questions arose concerning the legal
status of WPPSS's so-called take-or-pay contracts with buyers.3
Less well-known examples also exist. A Colorado utility serving
some 200,000 customers filed for Chapter 11 protection in 1990,
following years of erratic revenue and questionable investment and
management practices; it was the largest bankruptcy filing by an
electric utility in that year. As noted later in this discussion, many
other examples can be culled from Standard & Poor's CreditWeek
where particularly questionable bond issues are put on "Credit
Watch."
Figure 1.1
Risk Profile of Rates of Return
2
Concepts and Procedures
Risk simulation has been widely applied to investment evaluation,
including corporate planning models with Sears, Roebuck and
Company data;1 World Bank loans;2 computer leasing;3 petroleum
investment decisions;4 plant expansion proposals;5 hotel construction;6 and the analysis of insurance companies. In the late 1970s
and early 1980s, the Electric Power Research Institute (EPRI)
sponsored studies to assess demand uncertainty and expansion
plans for electric power systems.7 Risk simulations of Sizewell, a
British pressurized water nuclear reactor, focus on its projected
financial and economic advisability and timing in the capacity plan.8
Parallel techniques can be identified in engineering reliability
studies9 and natural hazard assessment.10 The mathematical basis
of risk simulation dates to World War II and a Manhattan Project
analysis of the diffusion of neutrons in fissionable material, developed by simulation methods and code-named Monte Carlo.11
The method of risk simulation has four basic steps, indicated
schematically in the flowchart of Figure 2.1. These include:
1. the identification of risk factors
2. the appraisal of the likely range and probability distribution of risk
factors
3. the simulation of investment performance with parameters sampled
from the probability distributions developed for the various risk factors
4. the summary of the results of the analysis in a risk profile for the
investment performance measure or criterion.

Figure 2.1
Steps in Conducting a Risk Simulation
This process supports the formulation of risk management policies
and tactics. The analysis as a whole is rendered more effective by
attention to the facts and idiosyncrasies of risk communication.
This chapter discusses this procedure in some detail, defining
and motivating key concepts that are relevant and useful in this
type of analysis. These concepts include default risk; random variable; frequency and subjectivist interpretations of probability; techniques of probability elicitation, including juries of executive
opinion, the Delphi method, and interview techniques supporting
probability encoding; the bootstrap; time series analysis; structural
models; simulation sampling strategies; random numbers and
pseudo-random numbers; and more about risk profiles and risk
preferences. The purpose of this chapter is to lay a foundation for
subsequent discussion and the series of examples presented in later
chapters. The reader may want to check his or her comprehension,
accordingly, returning to this chapter to pin down a basic point
discovered in a later chapter.
Taxes
Debt Service on Bonds
Amortization Schedule
Debt Service
Interest
Principal
Demand/Supply Balance
Demand
Additions to Demand
Total Capacity
Excess Demand
may not result from price increases for a product, even if higher
rates are allowed by a regulatory body. This is especially likely if
large increases are contemplated, since consumer demand responses may be price elastic, as discussed in detail in Chapter 4.
Figure 2.2
Triangular Probability Distribution

Figure 2.3
Denver, Colorado, GCD
Sampling Strategies
To illustrate the sampling procedure, assume we consult a random number table or rely on a computer program to produce a
random number between 0 and 1 called n*. This random number
n* will guide our selection of the value of some risk factor for one
run of the cash flow model and contribute to development of the
performance index for this investment. Assume that Figure 2.4
represents the cumulative distribution of the risk factor, that is,
the probability that the risk factor is at most the value of the x
axis. Then, we associate a value C* with n* by reading from the
probability axis to the cumulative distribution F(x) and down to
the x axis. Repeating this sampling process with many random
numbers assures that the repetitions of the cash flow model are
computed with a random sample of this risk factor, determined by
this particular cumulative distribution.
The question of how many samplings ought to be made depends
on the mathematical complexity of what is being sampled, the
sampling procedure, and the penalty function established to assess
the consequences of error in the risk profile. A crude test for the
adequacy of the sample size is to vary the number of samplings by
one and two orders of magnitude to determine whether there are
noticeable differences in the resulting shape of the risk profile.28
The risk profiles in the examples in this book seemed to achieve
stability in the 10,000 iteration range.
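The reading-off procedure just described is inverse-transform sampling. A minimal sketch, with a hypothetical piecewise-linear cumulative distribution standing in for Figure 2.4:

```python
import bisect
import random

# Inverse-transform sampling sketch. The grid of risk-factor values and
# cumulative probabilities below is hypothetical, standing in for Figure 2.4.
xs = [100, 110, 120, 130, 140]     # risk-factor values (x axis)
Fs = [0.0, 0.20, 0.55, 0.85, 1.0]  # cumulative probability F(x)

def sample(rng):
    n_star = rng.random()  # random number between 0 and 1
    i = bisect.bisect_left(Fs, n_star)  # from the probability axis to F(x)...
    if i == 0:
        return xs[0]
    f0, f1 = Fs[i - 1], Fs[i]
    x0, x1 = xs[i - 1], xs[i]
    # ...then down to the x axis, interpolating within the segment
    return x0 + (x1 - x0) * (n_star - f0) / (f1 - f0)

rng = random.Random(7)
draws = [sample(rng) for _ in range(10_000)]
print(sum(draws) / len(draws))  # these draws feed the cash flow model
```

Repeating the draw many times, as here, yields a random sample of the risk factor governed by this particular cumulative distribution.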
Computer programs, it should be noted, produce pseudo-random numbers, usually a repeating series with some relatively long
recurrence number. The Lotus 1-2-3 @RAND function has a
recurrence cycle of over 1 million, which makes it a relatively good
random number generator for a microcomputer.29 For advanced
applications, short computer programs exist to purge undesirable
features of stock pseudo-random number generators.30
Management Responses
Some of the most difficult questions in simulation involve policy
or management responses over longer time periods. Thus, random
factors leading to actual or prospective deficits in income can trigger a search process on the part of management. The analog of
Figure 2.4
Cumulative Probability Distribution
this in the computer simulation is a goal-seeking routine programmed to go into motion when predefined thresholds are crossed
in a cash flow model. Thus, if the debt coverage ratio falls below
a key level, a rate increase can be triggered whose time lag, extent,
and effect would be determined by assumptions about regulatory
and demand constraints. Often, multiple possibilities can be represented by an event tree,31 such as that presented in Figure 2.5.
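A goal-seeking routine of this kind can be sketched as follows; the coverage floor, regulatory lag, rate-increase size, and revenue range are all illustrative assumptions rather than values from the text:

```python
import random

# Sketch of a goal-seeking routine triggered by a coverage threshold.
# The floor, lag, increase size, and revenue range are assumptions.
COVERAGE_FLOOR = 1.25  # debt coverage ratio that triggers a rate filing
RATE_INCREASE = 0.10   # size of the increase once it takes effect
LAG_YEARS = 2          # assumed regulatory lag before it takes effect

def simulate_coverage(years=20, seed=3):
    rng = random.Random(seed)
    rate, debt_service, pending = 1.0, 80.0, None
    path = []
    for year in range(years):
        revenue = rate * rng.uniform(90, 130)  # random demand factor
        coverage = revenue / debt_service
        if pending is None and coverage < COVERAGE_FLOOR:
            pending = year + LAG_YEARS         # file for a rate increase
        if pending == year:                    # increase takes effect
            rate *= 1 + RATE_INCREASE
            pending = None
        path.append(coverage)
    return path

path = simulate_coverage()
print(min(path), max(path))
```

Each branch of an event tree such as Figure 2.5 corresponds to a different setting of the lag and increase parameters.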
Figure 2.5
Event Tree for Rate Increase
a credible party offers you a chance to win $1,000 if a coin, predetermined to be fair, comes up heads on a toss. If the flip comes
up tails, you pay a given amount. One possibility is that you flatly
refuse to participate. Or you might be willing to risk something to
have the chance of winning $1,000. Individuals, accordingly, can
be ranked as risk averse or risk seeking and along a gradient
between these opposites. Similarly, decision makers, responsible
for taking one or another course of action, can be risk averse or
willing to accept higher risk in order to have potential access to
higher earnings.
The classic comparison is with respect to the variances of continuous risk profiles having the same expected values. Suppose
Figure 2.6 shows the probabilities of rates of return on projects A
and B. Both greater gains and losses are possible with project B,
graphed with the crosses in Figure 2.6, than with project A, the
risk profile indicated by the boxes in the figure. Risk-averse individuals would select A. On the other hand, some investors would
be willing to accept a higher probability of low returns for an
approximately equal chance to earn higher returns.
There also are situations in which one risk profile seems superior
more or less independently of risk preferences. Figures 2.7a and
2.7b illustrate a relationship between investment projects known
as first order stochastic dominance with risk profiles and their associated cumulative distributions. If project A is delineated by the
boxes and project B is indicated by the crosses, it is clear that
project A dominates project B in an important sense. This relationship is perhaps more compelling when we examine the cumulative distributions of the rate of return for these two projects,
which, in this case, show the chance of attaining a given rate of
return or less. The cumulative probability distribution associated
with project B, indicated by the crosses, is always above or to the
left of the cumulative distribution of project A for any given rate
of return. Pick any probability on the vertical axis in Figure 2.7b,
say, 0.10. Then, following this probability level over to the two
cumulative distributions, one can see that the chances of attaining
a higher return are always greater with A than with B. Clearly,
project A is superior, under a range of risk preferences. The analogue for default risk distributions, of course, is simply that one
course of action produces a lower default probability than another.
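A check for first order stochastic dominance between two simulated risk profiles can be coded directly from this definition; the return samples below are illustrative, not data from the text:

```python
# First order stochastic dominance check on two risk profiles.
# The return samples below are illustrative, not data from the text.

def dominates_fosd(a, b):
    """True if a's cumulative distribution lies at or below b's at
    every return level, i.e., a first-order stochastically dominates b."""
    grid = sorted(set(a) | set(b))

    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)

    return all(cdf(a, x) <= cdf(b, x) for x in grid)

project_b = [0.02, 0.04, 0.05, 0.07, 0.10]  # project B returns
project_a = [r + 0.03 for r in project_b]   # uniformly higher
print(dominates_fosd(project_a, project_b))  # True: A dominates B
print(dominates_fosd(project_b, project_a))  # False
```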
Figure 2.7a
First Order Stochastic Dominance (relative probability)

Figure 2.7b
First Order Stochastic Dominance (cumulative probability)
tical and historical, comparative analysis. Juries of executive opinion, Delphi methods, and formal probability elicitation techniques
can be helpful in polling expert opinion.
One useful distinction is between essentially "one-shot" variables, like construction costs, and multiple-period variables such as
labor costs, interest rates, or population growth. Single-period
variables may be characterizable by a simple probability distribution. Frequently, discussions of risk simulation stop with identification of this simple type of stochastic process. More complex,
time-interrelated processes, however, can characterize the time
path of multiple period variables. General features of these stochastic processes can be analyzed by time series methods.
There are styles of thought regarding risk, in addition to risk
preferences, that make risk communication particularly important.
Executives or top administrators may favor decisive, seemingly
deterministic modes of thought. Numerous studies, furthermore,
show the prevalence of bias in reasoning about risky situations,
even with highly trained subjects.12 Simplicity is probably the key
Optional Investment in the Electricity Supply Industry," Applied Economics 3 (May 1986): 509-528.
8. Nigel Evans, "The Sizewell Decision: A Sensitivity Analysis," Energy Economics 6 (January 1984): 15-20; and Jones, "The Application
of Risk Analysis to the Appraisal of Optional Investment in the Electricity
Supply Industry."
9. E. J. Henley and H. Kumamoto, Reliability Engineering and Risk
Assessment (Englewood Cliffs, N.J.: Prentice-Hall, 1981), discuss these
earlier engineering applications.
10. For an interesting selection of articles on this topic see Paul R.
Kleindorfer and Howard C. Kunreuther (ed.), Insuring and Managing
Hazardous Risks: From Seveso to Bhopal and Beyond (New York: Springer-Verlag, 1987).
11. Hence, the other name for risk simulation: Monte Carlo simulation
or analysis.
12. See Charles F. Phillips, Jr., The Regulation of Public Utilities: Theory and Practice (Arlington, VA: Public Utilities Reports, Inc., 1984).
13. See Standard & Poor's Corporation, Municipal Finance Criteria
(New York: Standard & Poor's, 1989). R. Charles Moyer and Shomir Sil
list factors affecting bond ratings as follows: "The level of long-term debt
relative to the firm's equity . . . the firm's liquidity, including an analysis
of accounts receivable, inventory, and short-term liabilities . . . the size
and economic significance of the company and the industry in which it
operates . . . [and] the priority of the specific debt issue with respect to
bankruptcy or liquidation proceedings and the overall protective provisions of the issue." "Is There an Optimal Utility Bond Rating?" Public
Utilities Fortnightly, May 12, 1989, pp. 9-15.
14. Frederic H. Murphy and Allen L. Soyster, Economic Behavior of
Electric Utilities (Englewood Cliffs, N.J.: Prentice-Hall, 1983), provide a
comprehensive survey of public utility commission rate standards as of
the early 1980s in Tables 2 and 3.
15. The internal rate of return r satisfies the equation Σ_t CF_t/(1 + r)^t = 0, where CF_t denotes the investment's net cash flow in period t.
imum financial risks" will be subject to the condition that supply and
demand are in balance. Thus, obviously, absolutely minimum financial
risks might be attained by liquidating utility investments and buying U.S.
Treasury securities.
17. David B. Hertz, "Risk Analysis in Capital Investment," Harvard
Business Review (September-October 1979): 174-175.
18. See H. A. Linstone and M. Turoff, The Delphi Method: Techniques
and Applications (Reading, MA: Addison-Wesley, 1975).
19. See M. W. Merkhofer, "Quantifying Judgmental Uncertainty:
Methodology, Experiences, and Insights," IEEE (Institute of Electrical
and Electronic Engineers) Transactions on Systems, Man, and Cybernetics
SMC-17, no. 5 (September/October 1987): 741-752.
20. See Detlof von Winterfeldt and Ward Edwards, "Cognitive Illusions," Decision Analysis and Behavioral Research (New York: Cambridge University Press, 1986).
21. An early discussion of the approach is found in B. Efron, "Bootstrap Methods: Another Look at the Jackknife," Annals of Statistics
7 (January 1979): 1-26. See also B. Efron, "Nonparametric Standard
Errors and Confidence Intervals," Canadian Journal of Statistics 9 (1981):
139-172; and B. Efron and G. Gong, "A Leisurely Look at the Bootstrap,
the Jackknife, and Cross-Validation," The American Statistician 37 (February 1983): 36-48.
22. Siddhartha R. Dalal, Edward B. Fowlkes, and Bruce Hoadley, "Risk
Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure,"
Journal of the American Statistical Association 84 (December 1989): 945-957.
23. George E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control (San Francisco: Holden-Day, 1976).
24. John M. Gottman, Time Series Analysis: A Comprehensive Introduction for Social Scientists (New York: Cambridge University Press,
1981), presents a nice discussion of these graphic patterns.
25. See, for example, C.W.J. Granger, Forecasting in Business and
Economics (New York: Academic Press, 1980); Richard McCleary and
Richard A. Hay, Jr., Applied Time Series Analysis for the Social Sciences
(Beverly Hills, CA: Sage Publications, 1980); and William W. S. Wei,
Time Series Analysis: Univariate and Multivariate Methods (Redwood
City, CA: Addison-Wesley, 1990).
26. Burton G. Malkiel, A Random Walk Down Wall Street, 4th ed.
(New York: W. W. Norton, 1985), is still an entertaining and informative
introduction to the subject.
27. See Stephen Taylor, Modeling Financial Time Series (Chichester,
England: John Wiley & Sons, 1986).
3

Risks in Construction
Table 3.1
Cost Estimates and Realized Nuclear Power Plant Costs, 1966-1972
(nominal dollars per kW of generation capacity)

Year          Average    Average  Average
Construction  Estimated  Final    Percent
Started       Cost       Cost     Overrun
1966          147        299      103
1967          150        352      135
1968          155        722      365
1969          179        890      397
1970          228        1,331    484
1971          258        1,313    409
1972          418        2,258    440
tributions. This result, which depends in part on the way construction costs are classified, is useful in sizing contingency allowances.
An interesting finding supported by these exercises is that contingency funds designed to cover each construction cost component
at a given confidence level would sum to a total contingency fund
that would be larger than needed to cover total construction costs
at that same confidence level. Additional topics considered in the
discussion of quantitative techniques in this chapter include simulation of the construction schedule and expenditure curves and
the problem of stochastically interdependent costs and construction
schedules.
A primary objective of this chapter is to show that generating
construction cost risk estimates is straightforward, at least as a first
approximation. Methods in this chapter are applied, with other
techniques identified in the following two chapters, to several integrated simulation applications in Chapter 6.
FACTORS ASSOCIATED WITH CONSTRUCTION
COST OVERRUNS
In broad terms, several factors are linked with cost overruns in
capital construction projects.2 These include:

1. the stage of the product cycle at which the cost estimate is developed
2. the type of technology
3. the size and complexity of a project
4. the competence of project management
5. regulatory or political considerations
6. contracting arrangements
indicates that projects with higher initial cost estimates and lengthy
construction periods have more cost growth and that cost estimates
made later in the project cycle are more accurate. Similar results
are produced by other, more recent studies.3
The Competence of Project Management
Management and organization problems are widely cited contributors to cost overruns. An investigatory report about the trans-Alaska pipeline, for example, a project whose costs escalated from
$900 million to final costs in 1977 of $7.7 billion, states that,
the project was virtually run by committees; it was structured with vertical
and horizontal duplication of supervision and decision making, cumbersome decision chains, unclear lines of authority, and fragmentation of
responsibility. Compounding this were significant communication, coordination, and liaison problems between project groups. The result of this
duplicative management structure was paralysis of the project management decision making process.6
Regulatory or Political Considerations
Regulatory or political considerations became more important
with enactment of the National Environmental Policy Act
(NEPA) of 1969 and related legislation. Local interests object to
the plume of smoke from power plants or potential radioactivity.
New water storage sites are increasingly scarce as real estate development locks up the countryside. Regulatory conflicts and bad
initial planning often lead to numerous change orders that cause
"ripple effects" throughout the timing and cost of activities in a
power or water project.
Contracting Arrangements
Inefficiencies can be associated with contracting arrangements.
Lowest-bidder rules applying to contracts let by public agencies
have been known to let inexperienced firms in the door, based on
"lowballing" the bid. Such companies later may have to be replaced in costly, time-consuming recontracting.
ago, the capital cost of most plants was less than $400 per kilowatt for a
nuclear plant. Even taking inflation into account, the real cost per kilowatt
has tripled. . . . Today, the combination of large capital expenditures over
a long period and high interest rates cause time dependent charges to
make up a substantial portion of the total capital cost of the plant.14
In addition there are investment risks because "the cost of large
power plants, especially nuclear plants, is now so high that they
make up a large percentage of the total assets of many utilities."15
RANGE ESTIMATION AND SIMULATIONS TO
DETERMINE CONTINGENCY FUNDS
How can the likelihood of construction cost overruns be evaluated? Range estimation is a major approach to this question, as
noted in the cost engineering literature.16 This method is supported
by historical cost data or reliance on judgmental factors when a
database of comparable facilities does not exist. Range estimation
operates with cost summaries of major items in a construction
project, estimates of their range of variability, and other information, where available, about the likely distribution of component costs. The term "range estimation" derives from first
approximations that operate with lower and upper bound estimates
of component costs. Additional information usually shrinks the
estimated variance of total costs, reducing estimates of contingency
funds needed to cover changes in the total cost at a given confidence
level or a given percentage of the time. In this sense, range estimation provides a yardstick for measuring the adequacy of contingency funds and the value of information about the variability
of component costs.
Let us illustrate the power and generality of this method. Table
3.2 lists the major construction cost categories on some project.
The first column simply names these cost categories generically,
as c1, c2, c3, and so on. Columns 2 and 3 tabulate the anticipated
lower and upper bounds for these cost categories, defining a cost
range. These ranges may be absolute, encapsulating all possible
values of the cost components, or can incorporate a given percentage of likely variation of these cost categories, for example,
five and ninety-five percentile costs. (The five and ninety-five per-
Table 3.2
Range Estimates and Expected Costs by Cost Component (millions of
dollars)

Cost        Lower   Upper   Expected
Component   Bound   Bound   Costs
            Costs   Costs
(1)         (2)     (3)     (4)
c1          100     140     121.7
c2          60      90      75
c3          30      50      41.7
c4          20      40      28.3
c5          10      40      25
c6          10      40      25
c7          10      40      25
c8          10      40      25
c9          10      40      25
c10         5       30      15
TOTALS      265     550     406.7
of the significant few and insignificant many. For purposes of discussion, the estimates in the tables are assumed to be in millions
of dollars.
Stochastically Independent Costs
The standard assumption in this type of analysis is that the costs
are stochastically independent. In other words, there can be no
correlation between the cost overruns experienced or realized by
the various cost components. If, in Table 3.2, c2 is 20 percent higher
than its expected value in column 4, there is no added chance that
c3 or any of the other cost components will come in higher than
expected.
Although this is a strong assumption, cost engineering suggests
that this condition can be approximated by suitable aggregation.17
Thus, substitutability between construction tasks is generally clustered in significant groups. Overall, subcontracting of different
phases of the project to different firms (e.g., site preparation,
foundation work, erection) limits the degree to which cost overruns
or slowdowns in one phase can be made up by economies or speedups in another phase. Even within groupings of tasks, statutes
pertaining to overtime pay, surcharges for immediate delivery of
materials, and the like limit speedup opportunities. Once estimated
costs are exceeded, therefore, it is seldom possible to cheapen
other aspects of a project. For purposes of discussion, then, let us
assume the cost categories of Table 3.2 are grouped so as to be
stochastically independent. Later, we will comment on more sophisticated tactics for dealing with correlated costs.
Risk Simulation
The information in Table 3.2 leads to a risk profile for total costs
when we develop a characterization of the probability distributions
of components' costs. Thus, given that the numbers in columns 2
and 3 represent absolute lower and upper bounds for component
costs, the availability of the expected values in column 4 suggests
the use of the triangular probability distribution. The bounds for
the costs delineate the base of this distribution, and the height or
Figure 3.1
Overlay of Risk Profiles Developed with Uniform and Triangular
Distributions and Table 3.2 Ranges
total costs. Figure 3.2 presents the cumulative distributions associated with the risk profiles of Figure 3.1. Here, the usual presentation is rotated so that the probability of the project totaling a
certain cost or less is shown on the horizontal axis. Following the
line up from the 95 percent probability level on the horizontal axis
to the cumulative distributions produced by triangular or uniform
distributions of component costs, one can estimate a total cost
figure that is exceeded only one time in twenty. If contingency
allowances are desired to cover cost overruns 95 percent of the
time, and the cost estimate is the expected total cost ($406.7 million), the analysis suggests that a contingency fund of about $40
million will cover most exigencies. Here the higher variance of the
risk profile of the simulation performed with component uniform
distributions carries through to a higher estimate of the contingency
fund to cover cost overruns 95 percent of the time.19 Thus, in
determining how much extra debt one wishes to add for contin-
Figure 3.2
Comparison of Cumulative Distributions
ported by cost and scheduling data may be dealt with along the
lines of a sensitivity analysis. If such correlations have significant
impacts on the risk profile of total costs or completion time, care
must be exercised in setting their precise value in the final simulation.25
The Disbursement Pattern
Cumulative expenditures over the construction period usually
will be some type of S-shaped or sigmoid curve. Expenditures in
the design period ordinarily will be less than initial startup costs,
and payout will mount as the project goes on, probably peaking
about midway through construction and then trailing off as detail
tasks and finishing become the main preoccupation. Given the lag
between project completion, billing, and payment, expenditure
curves can be developed with the same information used for costs
and completion times.
Mention also can be made of an explicit stochastic model of the
payment stream, unrelated to range estimation. This derives interesting results from the assumption that the probability of the
completion of any work element in any small interval within the
construction period is a small number, an observation that may
elicit empathy from construction project managers. Based on analogies with engineering reliability theory, payment is represented
as a Poisson process, and the payment completion rate is an exponential function. The resulting probability distribution of payments is a mixture of uniform and Weibull distributions, which
describes a kind of S curve.26
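The cited model is a Poisson/Weibull construction; the toy simulation below makes a much cruder assumption (each work element is paid at a random start offset plus an exponentially distributed completion time, with illustrative parameters) but reproduces the rough S-shape of cumulative disbursements:

```python
import random

# Toy sketch of a cumulative expenditure curve, NOT the cited
# Poisson/Weibull model: each work element is paid at an assumed random
# start offset plus an exponentially distributed completion time.
def expenditure_curve(n_tasks=2_000, horizon=10.0, seed=11):
    rng = random.Random(seed)
    completion = [min(rng.uniform(0, horizon * 0.7) + rng.expovariate(1.0),
                      horizon)
                  for _ in range(n_tasks)]
    grid = [horizon * k / 20 for k in range(21)]
    # fraction of total payout disbursed by each grid point
    return [sum(t <= g for t in completion) / n_tasks for g in grid]

curve = expenditure_curve()
print(curve[1], curve[10], curve[20])  # low early, higher midway, 1.0 at end
```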
CONCLUSION
Text Box 3.1 lists several qualitative factors identified in the
engineering and economics literature as being linked with cost
escalation on capital construction projects. Evidence suggests these
same factors are relevant to water and power capital construction
projects. This specialized literature emphasizes, on the one hand,
the dependability of conventional technology and, on the other,
the hazards of fast-tracking technical innovation in the nuclear
field. Accordingly, the factors in Text Box 3.1 might be taken as
the basis for a parametric ranking system that would appraise the
relative likelihood of cost overruns on a series of projects.
Quantitative assessment of the likelihood of cost overruns and
scheduling delay on power and water projects can be carried out
with range estimation. Range estimation involves associating probabilities of excess with various cost or completion time estimates
and identifying the expected or most likely values for component
activities or tasks. With respect to costs, implementing this technique involves the following:
1. appropriate classification of costs (preserving stochastic independence)
2. identifying 5th and 95th percentile costs or absolute lower and upper cost bounds
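The steps above, followed by Monte Carlo summation of the component distributions, can be sketched as follows. Component names, dollar figures, and the use of triangular distributions between the stated bounds are all illustrative assumptions:

```python
import random

# Hedged sketch of range estimation: each cost component is sampled
# independently (preserving the stochastic independence noted in
# step 1) from a triangular distribution defined by a low value, a
# most likely value, and a high value.  Totals are then summarized
# by their mean and an upper percentile.
random.seed(7)
COMPONENTS = {                 # (low, most likely, high), $millions
    "site work":  (4.0,  5.0,  8.0),
    "structures": (20.0, 24.0, 35.0),
    "equipment":  (15.0, 17.0, 26.0),
}

def sample_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode)
               for lo, mode, hi in COMPONENTS.values())

totals = sorted(sample_total() for _ in range(20000))
mean = sum(totals) / len(totals)
p95 = totals[int(0.95 * len(totals))]
print(round(mean, 1), round(p95, 1))
```

The spread between the mean and the 95th percentile of the simulated totals is one way to size a contingency allowance.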
NOTES
the Trans-Alaska Pipeline System (Anchorage, AK: Alaska Pipeline Commission, August 1, 1977), Chapter II.
7. See Derek T. Beeston, Statistical Methods for Building Price Data
(London: E. & F. N. Spon, 1983).
8. Edward G. Altourney, The Role of Uncertainties in the Economic
Evaluation of Water Resources Projects (Stanford, CA: Institute of Engineering-Economic Systems, Stanford University, 1963).
9. Maynard M. Hufschmidt and Jacques Gerin, "Systematic Errors
in Cost Estimates for Public Investment Projects," in Julius Margolis (ed.),
The Analysis of Public Outputs (New York: Columbia University Press,
1970), pp. 267-315.
10. Robert H. Haveman, The Economic Performance of Public Investments: An Ex Post Evaluation of Water Resources Investments (Baltimore: Johns Hopkins Press, 1972).
11. Glenn J. Davidson, Thomas F. Sheehan, and Richard G. Patrick,
"Construction Phase Responsibilities," in Willenbrock and Thomas
(eds.), Planning, Engineering, and Construction of Electric Power Generation Facilities, p. 160.
12. D. S. Bauman, P. A. Morris, and T. R. Rice, An Analysis of Power Plant Construction Lead Times, Volume I: Analysis and Results, EA-2880, Final Report (Palo Alto, CA: EPRI, February 1983), p. 2-2.
13. Ibid.
14. Ibid., section 1, p. 4.
15. Ibid.
16. See, for example, Michael W. Curran, "Range Estimating: Reasoning with Risk," Annual Transactions of the AACE (Morgantown,
W.V.: AACE, 1988), n.3.1-n.3.9; R. W. Hayes, J. G. Perry, P. A. Thompson, and G. Willmer, Risk Management in Engineering Construction (Morgantown, W.V.: Thomas Telford Ltd., 1986); Karlos A. Artto,
"Approaches in Construction Project Cost Risk," Annual Transactions
of the AACE (Morgantown, W.V.: AACE, 1988), B-4, B.5.1-B.5.4; and
Krishan S. Mathur, "Risk Analysis in Capital Cost Estimating," Cost
Engineering 31 (August 1989): 9-16.
17. See Derek Beeston, "Combining Risks in Estimating," Construction Management and Economics 4 (1985): 75-79.
18. James E. Diekmann, Edward E. Sewestern, and Khalid Taher, Risk Management in Capital Projects, a report to the Construction Industry Institute (Austin: University of Texas at Austin, October 1988), pp. 63-81. The authors note that exceptions often are allowed in the sources of construction cost variation covered by contingency funds (e.g., out-of-scope variation in costs).
19. Here, the expected value of total costs is similar in both simulations,
the primary difference being the variance of the implied risk profiles.
20. Assuming component costs C1, C2, . . . , Cn are stochastically independent, the total cost C = C1 + C2 + . . . + Cn satisfies

E(C) = E(C1) + E(C2) + . . . + E(Cn)

and

Var(C) = Var(C1) + Var(C2) + . . . + Var(Cn)

where E(.) (the period indicates the argument of the function E) is the expectation operator indicating the mean or average of a random variable and Var(.) indicates its variance. In ordinary language, these equations mean that the expected total cost equals the sum of the expected values of component costs and, similarly, that the variance of total costs equals the sum of the variances of the individual
component cost probability distributions. Confidence levels of a normal
distribution, however, are determined by the standard deviation, which
is the square root of the variance. Thus, the standard deviation of total
costs must be less than the sum of the standard deviations of the component cost distributions. Accordingly, a 95 percent confidence interval
for total costs is less than the sum of the 95 percent confidence level
values for component costs.
21. See Michael R. Veall, "Bootstrapping the Probability Distribution
of Peak Electricity Demand," International Economic Review 28 (February 1987): 203-212, and the cited references for a particularly clear
discussion of this method.
22. PERT (program evaluation and review technique) had a number of precursors. Its development was associated with the Polaris missile system in 1958 and with the earlier Gantt (bar) charts and milestone reporting systems. See Joseph J. Moder, Cecil R. Phillips, and Edward W. Davis, Project Management with CPM, PERT, and Precedence Diagramming, 3rd ed. (New York: Van Nostrand Reinhold Company, 1983).
23. One problem is that the critical path identified by a deterministic
analysis may be supplanted by other, more lengthy chains of activities
when task completion times are considered to be random variables. Until
the advent of modern microcomputers, this was a real barrier because of
the cost of core computer time to generate all possible combinations of
completion times and their resulting critical paths. Currently, simulation
is the favored approach to this problem, and algorithms for the efficient
or near-optimal solution of this problem are available. See Thomas Byers
and Paul Teicholz, "Risk Analysis of Resource Levelled Networks," in
4
Revenue Risk: Rate and Demand Factors
Attitudes toward rate responsiveness in the power and water industries have shifted since the mid-1970s. Initially, it was not uncommon to meet utility professionals who doubted whether
consumers paid attention to the price of electricity or water. One
might hear (and there is a modest published literature to this effect)
that rate increases initially impact demand but that, after a year
or so, consumers forget about changes in rates and return to their
old patterns of usage. Of course, those advancing this thesis seldom
distinguish between real and nominal rates. They may have observed inflation reducing the real or inflation-adjusted rate after a
time to its prerate-change level, or, alternatively, they may have
witnessed differences between short- and long-run price responses.
The statistical evidence for price effects on water or electricity
demand, however, is overwhelming.1 By the late 1970s, there was
greater awareness among utility staff about the implications of
price elasticity on per capita usage levels. The message may not
have reached echelons at which key decisions were made until
later, however. Thus, as noted in Chapter 1, demand projections
and capacity decisions in the electric power industry were out of
sync with the realities of lower usage rates and demand growth in the face of higher real rates until recently. Even today, despite
widespread discussion of system optimization, conservation, and
econometric modeling of demand, bond covenants often require
utilities to pledge to increase rates in the event the debt coverage ratio falls below a stipulated level.
The price elasticity is defined as the percentage change in quantity demanded divided by the percentage change in price, where
these percentages usually are taken around the average values of
the variables in the sample.3 Such elasticities are estimated from
empirical data, usually in connection with a statistical demand
analysis. This could involve identifying explanatory variables influencing consumption (Q), collecting cross-sectional data, and
estimating coefficients of a structural model, such as
Q = a0 + a1P + a2X2 + . . . + anXn
(4.1)
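A sketch of how such an elasticity might be computed from a fitted slope follows; the price and quantity observations are fabricated for illustration, and only a single explanatory variable (price) is used:

```python
# Estimate a price elasticity from a linear demand relation of the
# form in equation 4.1.  The slope a1 comes from ordinary least
# squares, and the elasticity is evaluated at the sample means, as
# the text describes.  All data points are invented.
prices =     [2.0, 2.5, 3.0, 3.5, 4.0]        # $/unit (assumed)
quantities = [21.0, 19.8, 18.9, 17.7, 16.8]   # units/customer (assumed)

n = len(prices)
p_bar = sum(prices) / n
q_bar = sum(quantities) / n
a1 = (sum((p - p_bar) * (q - q_bar) for p, q in zip(prices, quantities))
      / sum((p - p_bar) ** 2 for p in prices))
elasticity = a1 * p_bar / q_bar
print(round(elasticity, 2))   # prints -0.33
```

A fuller study would add the other explanatory variables X2, . . . , Xn of equation 4.1 and use a multiple-regression package, but the elasticity calculation at the means is the same.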
Unit prices, quantities demanded, and revenue forecasts in the example:

                                        Unit      Quantity      Revenue
                                        Price     Demanded      Forecast
                                        ($)       (million)     ($millions)
Current demand projection
  at current price                      1.50      18.5          27.75
Revenue forecast, higher price,
  zero price elasticity                 2.00      18.5          37
Revenue forecast, higher price
  (assuming e = -0.3)                   2.00      16.67         33.3

Forecast error: 10 percent
Elasticities vary along the demand curve.5 At low prices, demand is typically price inelastic, whereas at high prices, it becomes elastic. This is relevant to real-world planning because a utility often pledges in a rate covenant to increase rates to cover a shortfall in revenues, an oversight, in all likelihood, linked with traditionally low prices for electricity and water. Without information about price responsiveness, rate analysts tend to look at the ratio of needed revenues to existing revenues. Thus, if commodity charges produce revenues of $52,500,000, and there is a revenue shortfall of $20,000,000, a 40 percent rate increase might be judged adequate to boost revenues back up to the breakeven point.
Suppose, however, the relevant demand relationship is linear in
the price of the commodity in question, according to the formula
Q = 25,000,000 - 2,500,000 P
(4.2)
so that at consumption of 17,500 units, the price elasticity is between 0.2 and 0.3. Here, total demand (Q) is measured in 1,000
units, and the initial price is assumed to be $3.00 per 1,000 units.
Other influences are assumed to be constant for the duration of
the analysis and are reflected in the constant term.
The fact is that there is no price increase capable of generating revenues of $75 million, given these parameters. Multiplying both sides of equation 4.2 by P, the following results:

PQ = 25,000,000P - 2,500,000P^2    (4.3)

Total revenue reaches its maximum where the derivative of equation 4.3 with respect to price is zero,

25,000,000 - 5,000,000P = 0    (4.4)

or P = $5.00, at which price total revenue is only $62,500,000.

Figure 4.1
Total Revenue Curve
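The revenue ceiling implied by the linear demand curve of equation 4.2 can be checked numerically with a short sketch:

```python
# Numerical check of the linear-demand example: with
# Q = 25,000,000 - 2,500,000 P (equation 4.2), total revenue PQ
# peaks well short of the roughly $72.5 million needed to cover the
# $20 million shortfall on top of existing revenues.
def revenue(p):
    return p * (25_000_000 - 2_500_000 * p)

# scan prices from $0.00 to $10.00 in one-cent steps
best_p = max((p / 100 for p in range(0, 1001)), key=revenue)
print(best_p, revenue(best_p))   # peak at P = 5.0, revenue 62,500,000
```

No price on the scanned grid produces revenues above $62.5 million, confirming that a mechanical 40 percent rate increase cannot close the gap.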
PROVISOS
These relationships must be qualified with respect to several
factors. The following relationships bear mention because they
have been intensively researched in recent years.
Rate Versus Price
The primary problem of rate versus price is that it is unclear
exactly what people respond to when faced with a schedule of rates
as opposed to a uniform price for a commodity. A typical rate
schedule has a fixed fee or charge for connection to the system
plus rates applying to various quantities of consumption. Electric
power rates also might include a demand charge for consumption
during some defined peak demand period. Water rates usually are described by a schedule applying to various quantity intervals; that is, purchases up to and including 10,000 gallons in a billing period are charged at one rate, while consumption in excess of 10,000 gallons in a billing period is charged at a second rate, and so on.
What's the price? Economists tend to identify the marginal price as the critical decision criterion: the commodity charge for the final unit consumed in a billing period. In applied studies and
general discussion, the average price of water or power is often
cited as important. Yet surveys show only a small percent of water
or power customers know the rate schedule,6 and their imputations
of average price may be wide of the mark, at least as this quantity
is computed from the actual bill. Indeed, the only certain thing is
that consumers, at some point, look at their utility bill. It seems
reasonable, therefore, that they may react when bills rise above a
certain threshold, making discretionary adjustments in usage in
the short run and, if high bills continue, contemplating purchase
of efficient appliances, new landscaping, and so on in the longer
run. This type of behavior may look like a price response, but
actually it does not have to be mediated by knowledge of rates.7
This issue is not trivial in an era of rate reform. Many power
and water companies are switching from declining rate block schedules to ascending or increasing rate block schedules. This conversion is considered to offer incentives to conserve, since if use increases, so does the marginal price of the commodity.
The other approach relies more on judgment and, possibly, demand studies produced for comparable communities. A simple,
often defensible form of a synthetic demand relationship is
qt = ERt + ρqt-1 + et,    Qt = qt · C(t)
(4.5)
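Whatever the exact coefficients, a lagged (partial-adjustment) demand relation of this general type is easy to simulate. Everything numeric below, including the rate response, the inertia term, and the rate path, is an illustrative assumption rather than a value from the text:

```python
import random

# Illustrative simulation of a lagged demand relation in the spirit
# of equation 4.5: current per-customer use depends on the rate, on
# last period's use, and on a random disturbance.  E, RHO, BASE, the
# rate path, and the noise scale are all assumed.
random.seed(3)
E, RHO = -2.0, 0.6        # rate response and inertia (assumed)
BASE = 20.0               # intercept (assumed)
q = 30.0                  # initial per-customer use
rates = [3.0] * 5 + [4.0] * 10   # a rate increase in year 5 (assumed)

series = []
for r in rates:
    q = BASE + E * r + RHO * q + random.gauss(0.0, 0.5)
    series.append(q)
print([round(x, 1) for x in series])
```

Usage drifts toward one long-run level under the old rate, then decays toward a lower level after the rate increase, which is the short-run versus long-run response pattern discussed earlier in the chapter.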
5
Revenue Risk: The Customer Base
Financial risks are associated with population in a utility service
area. If population growth is slower than anticipated, the costs of
capacity investments will be paid by fewer customers. This can
require rate adjustments and can be politically volatile. In some instances, efforts to compensate for revenue shortfalls resulting from slow growth may push rates into the price elastic range or trigger customer protest and resistance.
Attention to this type of risk is recommended by changes in
fertility and death rates, family formation, and the age structure
of the U.S. population. After World War II, there was an increase
in family formation, and the U.S. birth rate surged. Five decades
later, the demographic picture looks very different. Many central
locations in U.S. metropolitan areas are losing population. Sudden
reversals have affected whole regions, such as portions of the west
around the Rocky Mountains. At least one instance of an electric
utility being pushed into receivership by loss of energy investment
projects and subsequent depopulation can be cited. Depopulation
also has been discussed as a problem with respect to financing
multibillion-dollar rehabilitation of older utility systems in core
urban areas.
A risk that is possibly more potent, because it is more subtle,
is gradually slowing population growth. Studies of the accuracy or errors of forecasting models suggest that turning points (the timing of a switch from positive to negative growth of a variable) are the hardest items to predict. Thus, how slower growth will play itself
out presents a formidable problem, creating substantial risks for
central facilities promising economies of scale at the cost of excess
capacity in the near term. Yet gradually slowing population is the
basic forecast for the foreseeable future and is related to a number
of factors. The current low U.S. birth rate developed in the 1970s.
At the same time, advances in modern medicine and lifestyle
changes have led to longer lives for nonminority populations. One
consequence is that the U.S. population is aging. An older population, in turn, is more likely to be settled, to migrate less. Thus,
not only is population growth for the nation anticipated to slow in
the next century, possibly diminishing in absolute numbers, but
sizeable migration, which has buoyed population growth in many
areas over past decades, cannot be counted on in the 1990s or the
first decade of the twenty-first century.
This chapter confronts these problems with information that may
help in the evaluation of demographic projections and proposes
ways to represent uncertainty and inevitable variability in population time series. The following section considers evidence relating
to the accuracy of economic and personnel projections that, by
general consensus, drive population growth in a community or
region. There is agreement that population forecasts are not very
accurate, especially when longer time periods and smaller geographic or population units must be considered. Then, a synopsis
of standard population projection techniques commonly applied
by agencies supplying utilities with forecasts is presented. The
discussion thereafter advances suggestions regarding stochastic
modeling of population change for purposes of risk simulation.
These methods run the gamut from extremely sophisticated multivariate time series models to simple stochastic models that capture subjective estimates of high, medium, and low growth.
rates for these components vary by age and sex, separate assumptions are developed about each age group. The most difficult component to project in subnational projections is migration,5 and most
cohort-component procedures utilize either past migration rates
or absolute numbers of migrants. The advantage of cohort-component procedures is their conceptual completeness relative to the
demographic processes and population structure. Their disadvantages derive from the fact that only population factors are considered in the projection of future events. Most texts on demography
acknowledge that cohort-component or cohort-survival methods
are not more accurate than simpler extrapolations or judgmental
estimates, although they have the potential to provide a better
picture of how trends may play themselves out.
Population projections under high, medium, and low growth assumptions:

Year        High        Medium      Low
1995        200,000     200,000     200,000
2000        231,855     226,281     210,202
2005        266,184     249,833     220,924
2010        301,163     272,471     232,193
Figure 5.1
Stochastic Models of Population Growth
CONCLUSION
Water and power utilities face a formidable planning problem,
insofar as they attempt to have new capacity ready in a timely
fashion for population growth in a service area.
Risk simulation has special advantages in this context since, as
noted earlier, the accuracy of population forecasts is low and decreases rapidly with the length of the forecast period. Given future
prospects in many U.S. communities, there is need to consider
financial models in which the customer base may peak and then
shrink, models in which large components of pure uncertainty,
stochastic variation, and jumps or shifts in behavior can be incorporated. Typically, these will allow for random variation and time
interdependency in population growth rates or in both migration
and natural increase of the population.
The modeling choices again revolve around the application of
judgment or formal statistical analysis of historic data. The stochastic model suggested here is a variant of range estimation. Maximum ranges reasonable for population growth in a series of years
are established first. Then, the analyst focuses on the time interdependency with which he or she feels comfortable for these stochastically generated series of population growth rates.
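A sketch of this range-estimation variant follows; the growth-rate bounds, the starting population, and the 25 percent limit on year-to-year movement are illustrative assumptions:

```python
import random

# Sketch of the stochastic population model described above: annual
# growth rates are confined to a subjective band, and each year's
# rate may move only a fraction of that band away from last year's
# rate, imposing time interdependency.  All numbers are assumed.
random.seed(11)
LOW, HIGH = -0.005, 0.030     # assumed bounds on annual growth
STEP = 0.25 * (HIGH - LOW)    # allowed year-to-year movement

pop, rate = 200_000.0, 0.015
path = []
for year in range(1995, 2011):
    lo = max(LOW, rate - STEP)
    hi = min(HIGH, rate + STEP)
    rate = random.uniform(lo, hi)
    pop *= 1.0 + rate
    path.append(round(pop))
print(path[0], path[-1])
```

Repeating the loop many times produces an ensemble of population paths, including paths that peak and decline, for use in a risk simulation of revenues.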
While this method leaves considerable discretion in the model
setup, it is recommended in risk simulation because, as this chapter
has stated, the future is likely to be different from the past when it comes to American population patterns.
NOTES
1. William Ascher, Forecasting: An Appraisal for Policy-Makers and
Planners (Baltimore: Johns Hopkins Press, 1978), p. 74.
2. Spyros Makridakis, "The Art and Science of Forecasting: An Assessment and Future Directions," International Journal of Forecasting
(1986): 15-39.
3. Michael A. Stoto, "The Accuracy of Population Projections," Journal of the American Statistical Association 78 (March 1983): 13-20.
4. See Donald B. Pittenger, Projecting State and Local Populations
(Cambridge, MA: Ballinger, 1976).
5. Errors in national economic forecasts are most closely related to
variability in fertility, in part because of immigration barriers to migration.
At the regional level, however, migration becomes the dominating influence in many cases.
6. Peter Pflaumer, "Confidence Intervals for Population Projections
Based on Monte Carlo Methods," International Journal of Forecasting 4
(1988): 135-142.
7. See Juha M. Alho and Bruce D. Spencer, "Uncertain Population
Forecasting," Journal of the American Statistical Association 80 (June
1985): 306-314.
8. Some researchers have proposed a generational argument based on
the relative standard of living to account for the decline in fertility in
recent decades. Fertility is said to be related to the difference between
currently attainable standards of living and the standard of living of the
family of origin. Given declining real incomes per worker since about
1970, younger families are by this account less prone to raise children.
This explanation has some credibility in light of widely reported difficulties
of young couples, brought up in the suburbs, in buying their first home.
But does the theory explain the surge in births in inner city areas with
ethnic populations?
9. This is particularly true in relatively sparsely populated areas of the
west and the southwest. In Colorado, for example, annual net migration
since 1950 has fluctuated between a loss of about 24,000 persons and a
net gain of 90,000 persons while natural increase has had bounds varying
between 18,000 and 35,000 people. See Colorado Division of Local Government, Colorado Population Growth (Denver: Colorado Division of
Local Government, 1989), Table 1.
6
Applications
This chapter presents five applications of risk simulation to utility
investment evaluation. The topics include
1. contract risk and the potential loss of a bulk customer,
2. the impact of sinking funds on financial risk,
3. risk comparisons of alternative debt service schedules,
4. risk profiles for the present value of capacity expansion plans, and
5. benefit-cost analysis and financial risks.
Table 6.1
Cash Flow Model for Wholesale Producer ($1,000)

                            1990     1991     1992     1993     1994     1995     1996     1997     1998     1999     2000

OPERATING REVENUES
 Primary Service Area
  (retail customers)       60,000   61,200   62,424   63,672   64,946   66,245   67,570   68,921   70,300   71,706   73,140
 Bulk Contract             45,000   46,800   48,672   50,619   52,644   54,749   56,939   59,217   61,586   64,049   66,611

OPERATING EXPENSES
 Operation & Maintenance   35,000   37,100   39,326   41,686   44,187   46,838   49,648   52,627   55,785   59,132   62,680
 Administration             4,500    4,770    5,056    5,360    5,681    6,022    6,383    6,766    7,172    7,603    8,059

INCOME
 Net Operating Income      65,500   66,130   66,714   67,246   67,722   68,134   68,478   68,745   68,928   69,020   69,012
 Other Income
  (Invested Funds)         12,000   12,000   12,000   12,000   12,000   12,000   12,000   12,000   12,000   12,000   12,000

DEBT
 Debt Service on Bonds     62,000   62,000   62,000   62,000   62,000   62,000   62,000   62,000   62,000   62,000   62,000
 Debt Service
  Coverage Ratio             1.25     1.26     1.27     1.28     1.29     1.29     1.30     1.30     1.31     1.31     1.31

SALES VOLUMES
 Retail Sales
  (1,000 units)           1,500.0  1,530.0  1,560.6  1,591.8  1,623.6  1,656.1  1,689.2  1,723.0  1,757.5  1,792.6  1,828.6
 Bulk Sales
  (1,000 units)            50,000   52,000   54,080   56,243   58,493   60,833   63,266   65,797   68,428   71,166   74,012

(Retail revenues equal retail sales times a unit rate of 40; bulk revenues equal bulk sales times 0.90.)
Text Box 6.1
Figure 6.1
Default with and without a Sinking Fund
Figure 6.2
Debt Service Schedules
Figure 6.3
Cost Overrun Probability
Figure 6.4
Default Risk: Level and Tipped Debt Service
Capacity expansion planning typically involves determining least cost capacity plans, where costs usually are measured in terms of present values. The present value (PV) of a series of investments (K1, K2, . . . , Kn) is defined as

PV = K1/(1 + r)^t1 + K2/(1 + r)^t2 + . . . + Kn/(1 + r)^tn    (6.1)

where r is the discount rate and ti is the year in which investment Ki is made.
Table 6.2
Minimum Present Value Capacity Investment Sequences

Period                                 1         2         3
Years                                  0         5        10

CASE 1   Demand                      100       130       150
         Capacity Investment
         ($million)                   58     25, 25        -

CASE 2   Demand                      100       110       120
         Capacity Investment
         ($million)                   25        25         -

CASE 3   Demand                      100       110       150
         Capacity Investment
         ($million)                   25        58         -

CASE 4   Demand                      100       120       135
         Capacity Investment
         ($million)                   58         -         -

Present Value (10 percent discount): Case 1, 89.05; Case 2, 40.52; Case 3, 61.01; Case 4, 58.00
Hence, smaller projects with higher unit costs produce the least
present value expansion path when the discount rate increases.
We can study how acceleration and deceleration in the growth
of demand affect the optimal investment sequence with Table 6.2.
This table presents the optimal capacity investments for several
scenarios of nonlinear demand growth. Investment sequences are
built with two types of capacity projects: a $58 million design with
a capacity of 30 demand units, and $25 million projects with capacities of 10 units apiece. Table 6.2 pairs demand paths and the
costs of the optimal sequence of capacity investments, listing the
present value of these sequences in the last row, assuming a discount rate of 10 percent. There is an initial demand of 100 capacity
units and a planning period of 10 years. Two point estimates of
projected demand are shown in each case, for the end of a 5-year
period and at the end of the planning period.
If demand growth is rapid, as in case 1 in the table, initial
construction of a large project is economic and has the lowest
present value of costs$89.05 million.
If demand growth is low, as in case 2 of Table 6.2, staging two smaller capacity additions has the lowest present value of costs, even though their unit cost, as measured by total capital cost divided by capacity, is substantially greater than that of the thirty-unit project ($2.50 versus $1.93 per unit of capacity).
Case 3 illustrates the resorting of investments in an optimal
sequence as the growth of demand changes. There is slow initial
growth in demand, followed by a rapid surge in demand. A smaller,
higher unit cost project is best suited to periods of slow demand
growth, and the larger project with economies of scale is best for
more rapid growth of demand.
Finally, case 4 shows a demand projection that traces a path
precisely halfway between the upper and lower bounds established
by the other cases in Table 6.2. This could be average or expected
demand growth and is conceptually often taken to be the standard
case in planning exercises. Building the large project initially produces the lowest present value of costs for this demand trajectory.
Note this "capacity plan" does not satisfy demand at the end of
the ten-year period. There are five additional demand units that
must be met somehow in the average demand scenario of case 4.
In any case, the initial thirty units of demand are satisfied in the
cheapest way by the $58 million project.
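The present values reported in Table 6.2 can be reproduced by discounting each investment at 10 percent back from the year it is made:

```python
# Check of the Table 6.2 present values: each investment K made in
# year t contributes K / (1 + r)^t at a 10 percent discount rate.
CASES = {                      # {year: investment, $millions}
    1: {0: 58, 5: 50},         # $58M now, $25M + $25M in year 5
    2: {0: 25, 5: 25},
    3: {0: 25, 5: 58},
    4: {0: 58},
}

def present_value(schedule, r=0.10):
    return sum(k / (1 + r) ** t for t, k in schedule.items())

for case, schedule in CASES.items():
    print(case, round(present_value(schedule), 2))
# prints: 1 89.05 / 2 40.52 / 3 61.01 / 4 58.0
```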
Risk Simulations of the Present Value of Least-Cost
Capacity Investments
Risk simulation is relevant to capacity expansion planning because of uncertainties in the growth of demand and other factors.
Analysis of demand uncertainty can be traced to work by H. Baleriaux and E. Jamoulle in the 1960s12 and studies prepared for the
California Energy Resources Conservation and Development
Commission and the U.S. Federal Energy Administration in the
1970s.13 Several citations reference similar methods, including simulations of uncertainty in the availability of hydroelectricity14 and
an ambitious study of long-term capacity needs for the Electric
Power Research Institute. 15 In the United Kingdom, a related
approach was developed by D. V. Papaconstantinou. 16 Studies associated with public debate over the Sizewell project systematically
apply many of these techniques to the evaluation of the timing and
advisability of construction of Britain's first pressurized water reactor.17
As an illustration of these methods, it is interesting to develop a risk simulation to consider data such as those in Table 6.2. The results produce some surprises and show the dangers inherent in not thinking in terms of stochastic processes.
To consider these investment options in a stochastic framework,
suppose the high and low demand projections in cases 1 and 2 of
Table 6.2 establish the lower and upper bounds pertinent to each
of the five-year time periods considered. Thus, we presume that
after the first five years of the planning period, total demand is
between 110 and 130 units. Let us also assume that: (a) chance
variation in demand follows a uniform distribution between the
bounds implied in any five-year period by these projections, and
(b) variation between adjacent five-year periods is limited to 25
percent of the allowable range.
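One way to operationalize assumptions (a) and (b) in a simulation is sketched below; the 25 percent limit is applied here to the draw's relative position within the band, and the 120-unit threshold is a hypothetical decision rule, not a feature of the text:

```python
import random

# Demand in each five-year period is drawn uniformly within its
# projected band (assumption a); the draw's relative position in the
# band may shift by at most 25 percent between adjacent periods
# (one reading of assumption b).  Bands come from cases 1 and 2.
random.seed(5)
BANDS = {5: (110, 130), 10: (120, 150)}

def sample_path():
    u = random.random()                    # position in the year-5 band
    d5 = BANDS[5][0] + u * (BANDS[5][1] - BANDS[5][0])
    u = min(1.0, max(0.0, u + random.uniform(-0.25, 0.25)))
    d10 = BANDS[10][0] + u * (BANDS[10][1] - BANDS[10][0])
    return d5, d10

paths = [sample_path() for _ in range(10000)]
# hypothetical rule: if year-5 demand exceeds 120 units, two small
# projects (capacity 100 + 10 + 10) no longer suffice
share_large_needed = sum(1 for d5, _ in paths if d5 > 120) / len(paths)
print(round(share_large_needed, 2))
```

Feeding each sampled demand path through the least-cost logic of Table 6.2 yields a distribution of present values rather than a single number, which is the risk profile discussed below.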
The End of Period Capacity Valuation Problem
To set up this risk simulation, we must make decisions about the end-of-period valuation of excess capacity.
Figure 6.5
Present Value of Investment Costs
This could be because projects will be dropped from the construction schedule if high excess capacity persists. On the other hand,
the acceleration of construction plans may favor quick fixes, increasing the costs of meeting demand over the longer period. There
is a sense here of an ensemble of probability-governed relationships, although the question is largely unexplored.22 Of course,
financial debacles surrounding many ill-fated nuclear power installations suggest that a large project bias could contribute to
financial risk.
CONCLUSION
The examples in this chapter underline one point above others: the probabilistic point of view leads to insights into financial and planning problems, insights that may not be mere restatements of the results of simpler "what-if" or sensitivity analysis.
The first application illustrates what commonly is regarded as the subject matter of risk analysis: the appraisal of probabilities of an event essentially bounded in time.
The other applications exhibit more dynamic and time-dependent meanings of financial risk, summarized by a series of default
probabilities over the payback period or by a distribution of minimum present values.
Interestingly, with regard to sinking funds, there is some presumption that people always think in probabilistic terms. Thus, it
is almost impossible to justify and appraise a sinking fund without
regard to variability factors and the advantage of capturing income
during exceptionally good years. Nevertheless, risk simulation suggests a curious equilibrium of default risk after the first few years
of the payback period in the no sinking fund case, an equilibrium
or stabilization apparently due to an interaction between inflation
and the potential for zero or negative revenue growth. It is difficult
to see how anything other than a stochastic framework could indicate this result, which is of some interest because it is a circumstance likely to be encountered in some utility systems in coming
years.
The simulations concerned with the relative risks of level and
tipped amortization schedules also produce an unconventional result. Despite extraordinary chances for construction cost overruns,
pollution abatement revenue bonds, and employee stock option and dividend reinvestment plans. Leasing and project financing became viable
options. With high interest rates, refunding and debt-equity swaps were
carried out. A few utilities have offered common stock to their customers
through a monthly installment plan; a few electric utilities have established
an energy trust to finance nuclear power plants or the purchase of nuclear
fuel.23
Restrictions on arbitrage of tax-exempt debt issues in the Tax
Reform Act of 1986 and innovations such as interest rate futures
or swaps add to the complexity of financial analysis for power and
water utilities. In principle, each of these financial instruments,
tactics, or options can be evaluated within risk simulation contexts
such as those developed in this chapter.
The first task of risk simulation models is to get an answer.
Then, refinements and extensions can be explored to determine
their impact on the risk profile.
NOTES
1. About one half of all states protect municipal and cooperative service areas by exclusive franchise rights or state law. Elsewhere, no specific
provisions shield service areas. See Malachy Fallon, "Municipal Electric
Credit Review," Standard & Poor's CreditWeek, June 5, 1989, p. 8.
2. First Boston Corporation, Financing Hydroelectric Facilities (Boston: First Boston Corporation, April 1981), pp. 10-11.
3. Robert Woodard, "Power Shortage Threatens Northeast," Standard & Poor's CreditWeek, June 5, 1989, p. 9.
4. Operating costs here are assumed to be capitalized in the values of Ki.
5. See Daniel J. Duann, "Alternative Searching and Maximum Benefit
in Electric Least-Cost Planning," Public Utilities Fortnightly, December 21, 1989, pp. 19-22.
6. This means scheduling the lowest cost unit to provide baseload
capacity and intermediate and peaking units to come on line in order of
increasing cost.
7. A pioneering paper applying this method is P. Masse and R. Gibrat,
"Application of Linear Programming to Investments in the Electric Power
Industry," Management Science 3 (January 1957): 149-166.
8. See Ralph Turvey and Dennis Anderson, Electricity Economics
Essays and Case Studies (Baltimore: Johns Hopkins University Press,
published for the World Bank, 1977), p. 259.
126
Applications
127
demand increment exceeds the capacity of the largest project, the periods
to which these expected present values refer are identical in expected
value and distribution.
20. A. Kaufmann, Reliability Criteria: A Cost Benefit Analysis, OR Report 75-79 (Albany: New York State Department of Public Service, June 1975); "Report on the Reliability Survey of Industrial Plants," IEEE Transactions on Industry Applications IA-10, no. 2 (March 1974): 231-233. See Roland Andersson and Lewis Taylor, "The Social Cost of Unsupplied Electricity," Energy Economics (July 1986): 139-146.
21. See Mark Hoffman, Robert Glickstein, and Stuart Liroff, "Urban
Drought in the San Francisco Bay Area: A Study of Institutional and
Social Resiliency," in American Water Works Association Resource Management, Water Conservation Strategies (Denver: American Water Works
Association, 1980), pp. 78-85.
22. C. Vaughan Jones, "Analyzing Risk in Capacity Planning from
Varying Population Growth," Proceedings of the American Water Works
Association (Denver: June 22-26, 1986), pp. 1715-1720. Jones explores
the issue in outline.
23. Charles F. Phillips, Jr., The Regulation of Public Utilities: Theory and Practice (Arlington, VA: Public Utilities Reports, Inc., 1984), p. 221.
7
Reflections on the Method
Monopolistic markets and capital intensity make financial risk analysis of power and water investments more tractable than, say,
analysis of investment risks for a new assembly line for computer
components or a retail outlet. Debt financing is the favored vehicle
in financing major capacity expansion, and debt service is a major
cost component. Construction cost overruns, therefore, are a primary risk factor on the cost side, along with the potential for
interest rate changes prior to bond issuance. Population growth
and the level of customer demand introduce risks on the revenue
side. Customers are "captive," and their numbers and market
responses can be analyzed separately, although deregulation is
introducing new competitive possibilities.
This discussion demonstrates the potential of this approach for
generating insights into financial risk, defined chiefly as default risk
on debt. The vantages gained go beyond those derived from the
examination of various "what-ifs" or a sensitivity analysis in various ways and are, in some instances, almost surprising.
Thus, risk simulation underlines how contingency allowances
ought to be figured against total construction costs rather than, as
sometimes suggested in engineering discussions, as a fixed percentage allowance against each major construction cost category.
Once one conceptualizes the simulations, the logic becomes apparent: it is the logic of risk pooling when cost overruns in component cost categories are stochastically independent.
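The pooling effect described above can be sketched in a few lines of simulation. The base costs and the uniform overrun range below are hypothetical figures, not the book's; the point is only that, with independent overruns, a contingency figured on the total is smaller than the sum of category-by-category contingencies.

```python
import random

random.seed(1)

# Risk-pooling sketch: independent overruns across three hypothetical
# cost categories ($ millions), each scaled by a uniform overrun factor.
base = [40.0, 35.0, 25.0]

n = 20000
draws = [[b * random.uniform(0.9, 1.4) for b in base] for _ in range(n)]

def contingency(xs):
    # Contingency taken as the 90th-percentile cost minus the expected cost.
    xs = sorted(xs)
    return xs[int(0.9 * len(xs))] - sum(xs) / len(xs)

pooled = contingency([sum(d) for d in draws])
by_category = sum(contingency([d[i] for d in draws]) for i in range(len(base)))
print(f"contingency on the total:      {pooled:.1f}")
print(f"summed category contingencies: {by_category:.1f}")
```

Because the categories are independent, extreme overruns rarely coincide, so the pooled figure comes out well below the summed one.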
is available, assume values for the required input parameters are independent.6
These guidelines aim at robustness in estimation. Thus, the variance of the uniform distribution is larger than that of a broad class of unimodal distributions. If risk is proxied by the variability of a
variable that is an additive or multiplicative composite of various
component risks, use of such maximum variance distributions to
characterize these component risks leads to something like an upper bound estimate of a risk. Similarly, use of uniform distributions
for population variability exaggerates risks of low and high population growth. If population growth is lower than anticipated,
financial burdens may be placed on customers. If population
growth is higher than anticipated, shortages (brownouts or restrictions in use) may occur, imposing other types of cost. If these
costs can somehow be made commensurate, as discussed in Chapter 6, then maximum combined costs are associated with maximum
variance distributions for population processes.
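The maximum-variance property behind this guideline can be verified directly. The uniform distribution on [a, b] has variance (b - a)^2/12, which exceeds the variance of a triangular distribution on the same range for any choice of mode; both formulas are standard closed forms.

```python
# Compare the variance of a uniform distribution on [a, b] with that of
# a triangular distribution on the same range, for several modes m.
# The uniform is always larger, which is why uniform inputs yield
# conservative, upper-bound risk estimates.
def var_uniform(a, b):
    return (b - a) ** 2 / 12.0

def var_triangular(a, b, m):
    return (a * a + b * b + m * m - a * b - a * m - b * m) / 18.0

a, b = 0.0, 1.0
vu = var_uniform(a, b)
for m in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert vu > var_triangular(a, b, m)
print(f"uniform variance {vu:.4f} dominates every triangular mode on [0, 1]")
```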
The use of the triangular probability distribution was illustrated
in Chapter 3, which showed how estimates of the mode and average
value of a random variable with finite range are linked.
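The link is simply that the mean of a triangular distribution on [a, b] with mode m is (a + m + b)/3, so an analyst who can state the range and the average can recover the implied mode. The cost figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Mode-mean link for the triangular distribution: mean = (a + mode + b) / 3,
# so mode = 3 * mean - a - b.  Hypothetical cost estimates ($ thousands).
a, b = 100.0, 190.0          # assumed low and high estimates
mean = 140.0                 # assumed expected value
mode = 3.0 * mean - a - b    # implied most-likely value
print(f"implied mode = {mode:.0f}")   # → 130
```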
Use of the Beta, Gamma, Lognormal, and Weibull distributions
has not been discussed in this book, although these forms are
mentioned in the Appendix.
Of course it should be clear from this discussion that simulations
with substantially skewed risk factors are less reliable. A recent
text on engineering economics considers this under the rubric of
the "problem of outliers," 7 significant in accidents or other low
probability, high damage events. Thus, the distribution of cost
overruns for large nuclear facilities embodying new design features
appears sharply skewed to the right (positively skewed). More
attention must be devoted to plotting points on cumulative or
probability distributions for such risk variables.
The modeling tactic suggested here is to employ simple modeling representations and to examine the effect on the risk profile of (a) successive refinements of assumptions and (b) extensions of the model. This is a higher-order sensitivity analysis that allows
assumptions and the characterization of process to vary in addition
to considering the impact of changes in the magnitude of risk
factors. If the acknowledged range of risk factors is faithfully recorded and realism is sought in the simulations, the risk profile
should represent a best effort at prefiguring the total consequences
of various investment decisions.
PURE UNCERTAINTY
The Chicago economist Frank H. Knight suggested a distinction
between risk and uncertainty. 8 A risk situation, according to
Knight's terminology, is one where probabilities are known. When
we have no knowledge of the range or distribution of a variable,
on the other hand, we are faced with uncertainty. This is the
difference between, say, drawing a ball of a certain color from an
urn when we know beforehand there are ten red balls and ten
black balls and drawing a ball when we lack essential information,
such as the number of balls, their color, and so on. In the first
case, the probability of drawing a red ball (with replacement) is
0.5, while in the other situation we may have no way of asserting
anything about probabilities of drawing, for instance, green balls.
The first case is a risk situation in Knight's terms, while the second
is a situation in which there is uncertainty.
There are various ways uncertainty manifests itself, where pure
uncertainty refers to the fact that not only do we not know probabilities of an event, but we also have no information about the
nature of the event in the first place. Thus, history shows a sustained capacity for producing surprises, that is, occurrences for
which we really have no way of imputing probabilities because we
cannot even conceptualize their existence. These historical innovations can develop on various levels. Most recently, there are the
profound changes in Eastern Europe and the Soviet Union. On
the economic front, few anticipated the regime of high interest
rates initiated by United States Federal Reserve Bank policies in
late 1979. One can also refer back to Pearl Harbor and so on.
In econometric language, the issue of surprises becomes the
"structural shift" problem. Specific tests are recommended to identify time periods in which the coefficients of a regression are distinct.9 Many large-scale econometric models of the U.S. economy
had to be fundamentally revised after 1974, for example, as the
Table 7.1
Payoff Matrix Illustrating Alternative Choice Criteria under Uncertainty
                      "state of nature"
                 1        2        3        4
actions   1     20       10       50      100
          2     40       20       40       15
          3      5       30       75       40
          4    100       40        8       30
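The alternative criteria the table illustrates can be computed mechanically. The sketch below reads the payoffs with rows as actions and columns as states of nature (the layout assumed here) and applies three standard criteria for choice under uncertainty: maximin (guard the worst case), maximax (chase the best case), and the Laplace rule (treat all states as equally likely).

```python
# Choice criteria under pure uncertainty, applied to a payoff matrix read
# as rows = actions, columns = states of nature (assumed layout).
payoffs = [
    [20, 10, 50, 100],   # action 1
    [40, 20, 40, 15],    # action 2
    [5, 30, 75, 40],     # action 3
    [100, 40, 8, 30],    # action 4
]

def best(score):
    # Return (1-based action index, score) of the action maximizing `score`.
    scores = [score(row) for row in payoffs]
    top = max(scores)
    return scores.index(top) + 1, top

maximin = best(min)                          # most cautious criterion
maximax = best(max)                          # most optimistic criterion
laplace = best(lambda r: sum(r) / len(r))    # equal state probabilities
print("maximin chooses", maximin)
print("maximax chooses", maximax)
print("Laplace chooses", laplace)
```

Note that the three criteria need not agree: the cautious maximin picks the action whose worst outcome is least bad, while the other two reward high payoffs elsewhere in the row.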
ica 1987-1996 (Princeton, N.J.: North American Electric Reliability Council, September 1987), p. 8.
4. Frederic H. Murphy and Allen L. Soyster, Economic Behavior of
Electric Utilities (Englewood Cliffs, N.J.: Prentice-Hall, 1983), p. 75.
5. John C. Hull, The Evaluation of Risk in Business Investment (London: Pergamon Press, 1980), p. 135.
6. Exposure Assessment Group, Office of Health and Environmental
Assessment, Exposure Factors Handbook, EPA/600/8-89/043 (Washington, DC: U.S. Environmental Protection Agency, March 1989).
7. John A. White, Marvin H. Agee, and Kenneth E. Case, Principles
of Engineering Economic Analysis, 3rd ed. (New York: John Wiley &
Sons, 1989), p. 392.
8. Frank H. Knight, Risk, Uncertainty, and Profit (New York: Houghton Mifflin, 1921).
9. See A. C. Harvey, The Econometric Analysis of Time Series (Cambridge, MA: MIT Press, 1989), and the discussion on model selection.
The classic test in this regard was developed by Gregory Chow, "Tests
for Equality Between Sets of Coefficients in Two Linear Regressions,"
Econometrica 28 (1960): 591-605.
10. Kenneth Boulding, "Social Risk, Political Uncertainty, and the
Legitimacy of Private Profit," in R. Hayden Howard, Risk and Regulated
Firms (East Lansing: Michigan State University Press, 1973), pp. 82-93.
For an early but well-reasoned discussion, see Albert G. Hart, Anticipations, Uncertainty and Dynamic Planning (Clifton, N.J.: Augustus M.
Kelly, 1940).
11. See G. J. Thuesen and W. J. Fabrycky, Engineering Economy (Englewood Cliffs, N.J.: Prentice-Hall, 1984).
Appendix
This Appendix focuses on a topic closely linked to the evaluation of
financial risk: the logic of probability transformations, or rules for finding
the probability or probability distribution of sums, products, and other
combinations of random variables, each characterized by its own probability or probability distribution. This is the basis of analytic studies of
financial risk and informs risk simulation in a specific sense. This discussion is undertaken without stopping in every instance to explain terms. Additional explication is presented following the main exposition of points below, where concepts are defined, including random experiment, probability distribution, random variable, law of large numbers, discrete and continuous distributions, and cumulative distribution. Many expositions deal with these points, such as Norman S. Matloff's Probability Modeling and Computer Simulation (Boston: PWS-Kent, 1988). There also are works that treat the foundations of the subject, such as V. Barnett's Comparative Statistical Inference, 2d ed. (New York: John Wiley & Sons, 1982).
The Appendix concludes by reviewing some main probability distributions,
including the normal, Gamma, Beta, and exponential distributions.
PROBABILITY TRANSFORMATIONS
The basic issue discussed here is how the analytic approach to risk
analysis, mentioned in Chapter 1, works. There is probably nowhere better
We have n independent random variables, each characterized by the same
probability distribution. The major finding of the Central Limit Theorem
is that the sum of these n variables can be approximated by a normal
distribution. This approximation becomes more and more accurate as the
number of terms in the sum becomes larger. Thus, the normal distribution
can be said to be a limiting distribution to the probability distribution
applying to such sums of random variables. Extensions and generalizations
of this have been a major preoccupation of mathematical statistics. Thus,
there can be cases in which the random variables to be summed are
characterized by different probability distributions or exhibit cross-correlations; that is, they are not stochastically independent and yet their
sum converges in the limit to a normal distribution. Indeed, exploratory
simulation suggests that it is difficult to produce anything but a bell-shaped
curve when summing almost any random variables, provided sufficiently
numerous terms are summed.
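The exploratory observation above is easy to reproduce. A minimal sketch: sum n copies of a sharply skewed (exponential) variable and watch the skewness of the sum shrink toward zero, the bell-shaped limit the Central Limit Theorem describes. The sample sizes are illustrative.

```python
import random

random.seed(7)

# Skewness of the sum of n exponential variables: for a Gamma(n) sum the
# theoretical value is 2 / sqrt(n), so it falls toward 0 as n grows.
def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

skews = {}
for terms in (1, 5, 50):
    sums = [sum(random.expovariate(1.0) for _ in range(terms))
            for _ in range(20000)]
    skews[terms] = skewness(sums)
    print(f"{terms:3d} terms: skewness = {skews[terms]:+.2f}")
```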
Part of the reason why the Central Limit Theorem applies quite broadly
can be seen if one understands what is involved in summing random variables and determining the probability distribution of their sum, given their
individual probability distributions. Two important concepts here are a convolution of probability distributions and the moment-generating function.
Suppose we have two stochastically independent random variables x1 and x2. Random variable x1 is characterized by the probability density function f(x1), and g(x2) describes the probability density of x2. Stochastic independence means that f(.) is in no way dependent on the value attained by x2, or vice versa. Given this, what can be said about the probability distribution of X = x1 + x2?
One approach is to look to the cumulative distribution of the sum X, which we will denote by FX(.). Thus, FX(t) = P(X ≤ t) = P(x1 + x2 ≤ t). For continuous f(.) and g(.) and nonnegative random variables, this is
known as the convolution of f(.) and g(.). Note that we use here the fact that the joint distribution of two independent random variables is the product of their marginal distributions, a general form of one of the so-called laws of chance mentioned in the following section. Now, if f(.) and g(.) are normal distributions, FX(.) will be the cumulative distribution of a normal distribution having a mean and variance equal to the sum of the means and variances of f(.) and g(.), respectively. This follows from the
additive property of exponents.
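A discrete analogue makes the convolution concrete. For independent, nonnegative integer-valued variables, the density of the sum is h(t) = Σ_k f(k) g(t − k); the two densities below are hypothetical.

```python
# Discrete convolution: density of X = x1 + x2 for independent variables.
f = {0: 0.5, 1: 0.3, 2: 0.2}   # hypothetical density of x1
g = {0: 0.6, 1: 0.4}           # hypothetical density of x2

h = {}
for k, pf in f.items():
    for j, pg in g.items():
        h[k + j] = h.get(k + j, 0.0) + pf * pg   # accumulate f(k) * g(j)

print({t: round(p, 2) for t, p in sorted(h.items())})
# → {0: 0.3, 1: 0.38, 2: 0.24, 3: 0.08}
```

The convolved probabilities sum to one, and the cumulative distribution FX(t) is obtained by running totals over h.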
Another way of demonstrating this important fact about normal variables summing to normal variables is to consider the moment-generating
function. The moment-generating function is defined as

mX(t) = E[exp(tX)]
for any random variable X. This function has a number of interesting properties. Its name derives from the fact that the kth moment of any random variable X equals the kth derivative of its moment-generating function evaluated at t = 0, if the moment-generating function exists. Beyond this, the most
important fact is that the moment-generating function is unique and completely determines the distribution of a random variable. Thus, if two
random variables have the same moment-generating function, they have
the same probability distribution. Again, application of the moment-generating function to the sum of normally distributed independent random
variables indicates that this sum is itself normally distributed and has a
variance equal to the sum of the component variances and a mean equal
to the sum of the component variable means.
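The additivity result can be checked empirically. A minimal sketch, with the two component distributions chosen arbitrarily: the sum of independent N(2, 3²) and N(5, 4²) variables should behave like N(7, 5²), since means add and variances add (5 = √(9 + 16)).

```python
import random
import statistics

random.seed(3)

# Empirical check of the moment-generating-function result: a sum of two
# independent normals has the summed mean and the summed variance.
xs = [random.gauss(2, 3) + random.gauss(5, 4) for _ in range(50000)]
print(f"sample mean {statistics.mean(xs):.2f} (theory 7.00)")
print(f"sample sd   {statistics.stdev(xs):.2f} (theory 5.00)")
```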
This points to extensions of the Central Limit Theorem. Thus, suppose
we have a set of random variables (x1, ..., xn), each characterized either by a probability distribution f(.) or by g(.), and that as n gets larger the number of random variables characterized by each of these distributions increases. Then, Theorem 1 applies to sums of the xi characterized by one or another
of these distributions. That subset of terms characterized by f(.) will converge to a normal distribution. Its complement among the n terms characterized by g(.) also will converge to a normal distribution. Then, on the
basis of the result obtained from moment-generating functions, that is,
that the sum of two variables characterized by normal distributions is itself
characterized by a normal distribution, the sum of these n variables can
be seen to be approximated by a normal distribution.
This additivity, where random variables characterized by one type of
BASIC CONCEPTS
Finally, some introduction to basic probability and statistics concepts
seems appropriate. Thus, to give readers a flavor of the foundations of
probability theory, we consider a conceptual framework below that is
suggested by the frequency interpretation of probability. The function of
this framework is to motivate definitions of probability that suggest mathematical interpretation. Then, we briefly review the laws of chance, or
the rules for figuring the probabilities of joint and mutually exclusive
events. In addition, we provide examples of important probability distributions in Text Box A.1.
Random Experiment
The first problem is to define random variable. This is usually solved
by appeal to other primitive concepts that, ultimately, must be left largely
undefined. Thus, in essence, a random variable is the outcome of a random
experiment.
Suppose we have one red die and one blue die. If we toss them repeatedly under similar conditions and add the face numbers, we perform
a random experiment that generates information about a random variable
defined as, say, the sum of the two numbers coming face up. These
numbers belong to the set of 36 ordered pairs (1, 1), ..., (1, 6), (2, 1), ..., (2, 6), ..., (6, 6) delineating the sample space of this experiment.
Under the frequency interpretation of probability, we accept a law of large
numbers, which suggests that as the number of tosses of these two dice
increases, the relative frequency of the occurrence of these pairs stabilizes
at or converges to a set of ratios or fractions that are identified as the
probability of occurrence of the respective pairs of outcomes. Thus, under
the frequency interpretation, the probability that the random variable
assumes a value of 12 converges to 1 in 36.
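The convergence of relative frequency to probability is easy to watch in simulation. A minimal sketch of the two-dice experiment just described:

```python
import random

random.seed(11)

# Frequency interpretation: the relative frequency of rolling a sum of 12
# with two dice stabilizes near the probability 1/36 as tosses accumulate.
for n in (100, 10000, 500000):
    hits = sum(1 for _ in range(n)
               if random.randint(1, 6) + random.randint(1, 6) == 12)
    freq = hits / n
    print(f"n = {n:>6}: relative frequency = {freq:.4f}")
print(f"theoretical probability    = {1 / 36:.4f}")
```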
F(t) = P(v ≤ t)   (A.6)

For discrete distributions, this is the sum of the probabilities of all values of the random variable v that are less than or equal to t. For a continuous distribution, this is defined as an integral.
Theorem, which determines the probability that one event occurs, given
information about the prior occurrence of conditionally related events.
Probability Distributions
Text Box A.1 lists the mathematical form of several probability distribution functions important in financial risk analysis. The following text
discusses each of these functions in turn, noting important features and
potential contexts of application.
Among the discrete distributions, the binomial distribution is basic and
important. The binomial distribution describes a random experiment with
two mutually exclusive outcomes in a series of repetitions or trials. Suppose P(A) = p, so that the probability that event A does not occur, usually symbolized as P(Ā), is, by definition, equal to 1 - p = q. Thus, if we are interested in the likelihood of three heads occurring in ten flips of a coin, the answer is given by the expression

C(10, 3) p^3 q^7

where C(10, 3) indicates the combination of ten things taken three at a time and is equal to 10!/((10 - 3)!3!).
An interesting aspect of the binomial distribution is that it converges
to the normal distribution or another discrete distribution called the Poisson, as the number of trials increases without limit. If probabilities p and
q are roughly the same size, the binomial converges to a normal distribution. On the other hand, if the probability of event A occurring is not
near .5, but, rather, much nearer 0, the binomial converges to the Poisson
distribution.
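The coin-flip calculation above works out directly; `math.comb` supplies the combination count.

```python
from math import comb

# Probability of exactly three heads in ten flips of a fair coin:
# C(10, 3) * p^3 * q^7 with p = q = 1/2.
p_heads = comb(10, 3) * 0.5 ** 3 * 0.5 ** 7
print(f"C(10,3) = {comb(10, 3)}")        # → 120
print(f"probability = {p_heads:.4f}")    # → 0.1172
```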
Note that the first two moments, the mean and variance, completely
characterize the normal distribution.4 Thus, a random variable with a normal distribution may be standardized by the transformation z = (x - μ)/σ, where μ is the mean and σ is the standard deviation of the random variable X. A standardized variable such as z obeys the three-sigma rule: the probability that the absolute difference between a normally distributed variable and its mean is greater than 3σ is less than .003. Similarly, a deviation of more than one sigma from the mean is to be expected about once
every three trials. Since the normal distribution is characterized by its first two
moments, probability tables are easy to prepare and consult.
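Both rules quoted above can be checked by standardizing a simulated normal sample; the mean and standard deviation below are arbitrary choices.

```python
import random

random.seed(5)

# Standardize draws from N(50, 10^2) via z = (x - mu) / sigma and check:
# |z| > 3 occurs with probability under .003, and |z| > 1 roughly once
# every three trials.
mu, sigma = 50.0, 10.0
zs = [abs((random.gauss(mu, sigma) - mu) / sigma) for _ in range(100000)]
p3 = sum(z > 3 for z in zs) / len(zs)
p1 = sum(z > 1 for z in zs) / len(zs)
print(f"P(|z| > 3) = {p3:.4f}")
print(f"P(|z| > 1) = {p1:.3f}")
```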
Another mainstay of the normal distribution is its role as a sampling
distribution. Suppose, for example, we have a population listing of a
human characteristic such as height and weight. Then, sample from this
Text Box A.1
Probability Distribution Functions: Binomial, Normal, Poisson, Uniform (b = maximum, a = minimum), Triangular, Gamma, Exponential
population to estimate its mean with the information we obtain from the
mean of the sample. Then, in repeated samplings with the same size lots,
the sampling distribution will be normally distributed around the mean
of the population and will have a variance determined by the sample size
and the variance of the population. For these reasons, modern statistics, associated with names such as Karl Pearson or R. A. Fisher, has been erected on the basis of the normal distribution.
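The sampling-distribution claim is checkable: repeated samples of size n from a population with standard deviation σ give sample means whose own standard deviation is close to σ/√n. The population parameters below are illustrative.

```python
import random
import statistics

random.seed(9)

# Sampling distribution of the mean: sd of sample means ≈ sigma / sqrt(n).
sigma, n = 12.0, 36
means = [statistics.mean(random.gauss(170.0, sigma) for _ in range(n))
         for _ in range(20000)]
observed = statistics.stdev(means)
print(f"observed sd of sample means {observed:.2f}")
print(f"theory sigma / sqrt(n)      {sigma / n ** 0.5:.2f}")
```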
The Poisson distribution also has application to real world processes.
It is a discrete probability density function with the unusual property that
its mean and variance are equal. This distribution is perhaps most important in queuing problems. Given purely random arrival times in a line,
timing of telephone calls, and so on, the number of arrivals in a line or
calls in an interval of time is described by a Poisson distribution.
We have discussed the uniform distribution throughout the text of the
book. It is a continuous distribution possessing finite range and the property that intervals of equal size in its range always have the same probability.
The triangular distribution also is discussed in the text at some length.
The triangular distribution is a rough approximating function for any
unimodal probability distribution.
One way to consider probability distributions is in terms of parametric
families. In this light, the binomial, normal, and Poisson distributions are
linked by convergence processes. The normal distribution, furthermore,
is linked by probability transformation to the Cauchy distribution (as a
ratio of two normal variates) and the Chi-square distribution, as the sum
of squared normal variates.
The Gamma distribution, on the other hand, is a form that is entitled to a family of its own due to the ease with which it is transformed algebraically to other related distributions. Here, the symbol Γ in Text Box A.1 in the denominator of the Gamma distribution is the Gamma function, a sort of generalization of the factorial. The Gamma distribution has a nonnegative range and is determined by two parameters (α, β) whose product is the mean of the distribution. Gamma distributions can assume a variety of shapes, depending on the values selected for these parameters. For specific values of these parameters, the Gamma distribution becomes a Chi-square, exponential, Erlang, or Beta distribution.5
Finally, the exponential distribution has an interesting relationship to
the Poisson in waiting time problems. Thus, in a Poisson arrival process,
the distribution of waiting times is exponential. The exponential distribution often is held to be a good representation of failure processes, such
as the time it takes for a part to wear out.
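The Poisson-exponential link can be demonstrated by generating exponential inter-arrival times and counting arrivals per unit interval: the counts should be Poisson with mean equal to the arrival rate, and (the Poisson's signature property) mean and variance should nearly coincide. The rate and horizon below are arbitrary.

```python
import random

random.seed(13)

# Exponential inter-arrival times at rate lam generate interval counts
# that are Poisson(lam): mean ≈ variance ≈ lam.
lam, horizon = 4.0, 20000
counts, t, arrivals = [], 0.0, 0
interval_end = 1.0
while t < horizon:
    t += random.expovariate(lam)   # next exponential waiting time
    while t > interval_end:        # close out completed unit intervals
        counts.append(arrivals)
        arrivals = 0
        interval_end += 1.0
    arrivals += 1
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean ≈ {mean:.2f}, variance ≈ {var:.2f} (both should be near {lam})")
```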
NOTES
1. L. C. Leung, V. V. Hui, and G. A. Fleischer, "On the Present-Worth Moments of Serially Correlated Cash Flows," Engineering Costs and Production Economics 16 (1989): 281-289.
2. John H. Estes, "Stochastic Cash Flow Evaluation Under Conditions
of Uncertain Timing," Engineering Costs and Production Economics 18
(1989): 65-70.
3. Mixed cases also can exist.
4. The moments of a distribution provide important summary data concerning a distribution's central tendency, dispersion, and shape. The first moment is the mean, average, or expected value: a measure of central location. The second moment is the variance: a measure of the dispersion around the mean. The third and fourth moments are less familiar but indicate broadly whether and how the distribution is nonsymmetric (right or left skewed) and whether it is flat or peaked (kurtosis).
5. See Stephen Kokoska and Christopher Nevison, Statistical Tables
and Formulae (New York: Springer-Verlag, 1989).
Bibliography
Agthe, Donald E., R. Bruce Billings, and Judith M. Dworkin. "Effects
of Rate Structure Knowledge on Household Water Use." Water
Resources Bulletin 24 (June 1988): 627-630.
Alho, Juha M., and Bruce D. Spencer. "Uncertain Population Forecasting." Journal of the American Statistical Association 80 (June 1985):
306-314.
Altman, Edward I., and Scott A. Nammacher. Investing in Junk Bonds: Inside the High Yield Debt Market. New York: John Wiley & Sons, 1987.
Altourney, Edward G. The Role of Uncertainties in the Economic Evaluation of Water Resources Projects. Stanford, CA: Institute of Engineering-Economic Systems, Stanford University, 1963.
Anderson, Kent. Residential Demand for Electricity: Econometric Estimates for California and the United States, R-905 NSF. Santa Monica, CA: Rand Corporation, January 1972.
Andersson, Roland, and Lewis Taylor. "The Social Cost of Unsupplied
Electricity." Energy Economics (July 1986): 139-146.
Artto, Karlos A. "Approaches in Construction Project Cost Risk." Annual Transactions of the American Association of Cost Engineers
(AACE). Morgantown, W.V.: AACE, 1988, B-4, B.5.1-B.5.4.
Ascher, William. Forecasting: An Appraisal for Policy-Makers and Planners. Baltimore: Johns Hopkins Press, 1978.
Baleriaux, H., and E. Jamoulle. "Simulation de l'exploitation d'un parc de machines thermiques de production d'électricité couplé à des stations de pompage." Revue Electricité, édition SRBE, 5 (1967).
Baughman, Martin L., Paul L. Joskow, and Dilip P. Kamat. Electric
Hayes, R. W., J. G. Perry, P. A. Thompson, and G. Willmer. Risk Management in Engineering Construction. Morgantown, W.V.: Thomas
Telford Ltd., 1986.
Henley, E. J., and H. Kumamoto. Reliability Engineering and Risk Assessment. Englewood Cliffs, N.J.: Prentice-Hall, 1981.
Henson, Steven E. "Electricity Demand Estimates Under Increasing Block Rates." Southern Economic Journal 51 (July 1984): 147-156.
Hertz, David B. "Risk Analysis in Capital Investment." Harvard Business
Review (September-October 1979): 169-181.
Hoffman, Mark, Robert Glickstein, and Stuart Liroff. "Urban Drought
in the San Francisco Bay Area: A Study of Institutional and Social
Resiliency." In American Water Works Association Resource
Management, Water Conservation Strategies. Denver: American
Water Works Association, 1980, pp. 78-85.
Hufschmidt, Maynard M., and Jacques Gerin. "Systematic Errors in Cost
Estimates for Public Investment Projects." In Julius Margolis (ed.),
The Analysis of Public Outputs. New York: Columbia University
Press, 1970, pp. 267-315.
Hull, John C. The Evaluation of Risk in Business Investment. London:
Pergamon Press, 1980.
Hull, John C. "Risk in Capital Investment Proposals: Three Viewpoints." Managerial Finance (1986): 12-15.
Iman, R. L., J. M. Davenport, and D. K. Zeigler. "Latin Hypercube Sampling (A Program Users Guide)," Technical Report SAND79-1473. Albuquerque, NM: Sandia Laboratories, 1980.
Johnson, Mark E. Multivariate Statistical Simulation. New York: John
Wiley & Sons, 1987.
Jones, C. Vaughan. "Analyzing Risk in Capacity Planning from Varying
Population Growth." Proceedings of the American Water Works
Association. Denver: June 22-26, 1986, pp. 1715-1720.
Jones, C. Vaughan. "Nonlinear Pricing and the Law of Demand." Economics Letters 23 (1987): 125-128.
Jones, C. Vaughan, and John R. Morris. "Instrumental Price Estimates
and Residential Water Demand." Water Resources Research 20
(February 1984): 197-202.
Jones, Ian S. "The Application of Risk Analysis to the Appraisal of
Optional Investment in the Electricity Supply Industry." Applied
Economics 3 (May 1986): 509-528.
Kaufmann, A. Reliability Criteria: A Cost-Benefit Analysis. OR Report 75-79. Albany: New York State Department of Public Service, June 1975.
Matloff, Norman S. Probability Modeling and Computer Simulation. Boston: PWS-Kent, 1988.
McCleary, Richard, and Richard A. Hay, Jr. Applied Time Series Analysis
for the Social Sciences. Beverly Hills, CA: Sage Publications, 1980.
Merkhofer, M. W. "Quantifying Judgmental Uncertainty: Methodology,
Experiences, and Insights." IEEE (Institute of Electrical and Electronic Engineers) Transactions on Systems, Man, and Cybernetics
17, no. 5 (September/October 1987): 741-752.
Merrow, Edward W., Stephen W. Chapel, and Christopher Worthing. A
Review of Cost Estimation in New Technologies: Implications for
Energy Process Plants. Prepared for the U.S. Department of Energy by the Rand Corporation, R-2481-DOE, Washington, D.C.,
July 1979.
Miller, Earl J. "Project Information Systems and Controls." In Jack H.
Willenbrock and H. Randolf Thomas (eds.), Planning, Engineering, and Construction of Electric Power Generation Facilities. New
York: John Wiley & Sons, 1980.
Moder, Joseph J., Cecil R. Phillips, and Edward W. Davis. Project Management with CPM, PERT, and Precedence Diagramming, 3rd ed.
New York: Van Nostrand Reinhold Company, 1983.
Modianos, D.T.R., C. Scott, and L. W. Cornwall. "Testing Intrinsic Random-Number Generators." Byte (January 1987): 175-178.
Moyer, R. Charles, and Shomir Sil. "Is There an Optimal Utility Bond
Rating?" Public Utilities Fortnightly, May 12, 1989, pp. 9-15.
Murphy, Frederic H., and Allen L. Soyster. Economic Behavior of Electric
Utilities. Englewood Cliffs, N.J.: Prentice-Hall, 1983.
Murthy, Chandra S. "Cost and Schedule Integration in Construction." In
Proceedings of the Conference on Current Practice in Cost Estimating and Cost Control, sponsored by the Construction Division
of the American Society of Civil Engineers in Cooperation with
the University of Texas at Austin. New York: American Society
of Civil Engineers, 1983, pp. 119-129.
Myrha, David. Whoops/WPPSS: Washington Public Power Supply System. Jefferson, N.C.: McFarland, 1984.
Newendorp, P. D., and P. J. Root. "Risk Analysis in Drilling Investment Decisions." Journal of Petroleum Technology (June 1968): 579-585.
Nieswiadomy, Michael L., and David J. Molina. The Perception of Price
in Residential Water Demand Models Under Decreasing and Increasing Block Rates. Paper for the 64th Annual Western Economics Association International Conference, June 21, 1989.
North American Electric Reliability Council. 1987 Reliability Assessment:
Capacity Expansion. Report submitted to California Energy Resources Conservation and Development Commission, Menlo Park,
February 1977.
Stoto, Michael A. "The Accuracy of Population Projections." Journal of
the American Statistical Association 78 (March 1983): 13-20.
Taylor, Lester. "The Demand for Electricity: A Survey." Bell Journal of Economics 6, no. 1 (1975): 74-110.
Taylor, Stephen. Modeling Financial Time Series. Chichester, England:
John Wiley & Sons, 1986.
Thuesen, G. J., and W. J. Fabrycky. Engineering Economy. Englewood
Cliffs, N.J.: Prentice-Hall, 1984.
Tucker, James F. Cost Estimation in Public Works. MBA thesis, University of California at Berkeley, September 1970.
Tucker, S. N. "Formulating Construction Cash Flow Curves Using a Reliability Theory Analogy." Construction Management and Economics 4 (1986): 179-188.
Turvey, Ralph, and Dennis Anderson. Electricity Economics: Essays and
Case Studies. Baltimore: Johns Hopkins University Press, published for the World Bank, 1977, p. 259.
University of California at Berkeley. Price Elasticity Variation: An Engineering Economic Approach, EM-5038. Final Report. Berkeley,
CA: Electric Power Research Institute, February 1987.
U.S. Bureau of the Census. Projections of the Population of the United
States by Age, Sex, and Race: 1983 to 2080. Current Population
Reports, Population Estimates and Projections, Series P-25, No.
952, U.S. Department of Commerce. Washington, DC: Government Printing Office, 1984.
Veall, Michael R. "Bootstrapping the Probability Distribution of Peak
Electricity Demand." International Economic Review 28 (February
1987): 203-212.
von Winterfeldt, Detlof, and Ward Edwards. Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.
Wei, William W. S. Time Series Analysis: Univariate and Multivariate
Methods. Redwood City, CA: Addison-Wesley, 1990.
White, John A., Marvin H. Agee, and Kenneth E. Case. Principles of
Engineering Economic Analysis, 3rd ed. New York: John Wiley &
Sons, 1989.
Woodard, Robert. "Power Shortage Threatens Northeast." Standard &
Poor's CreditWeek, June 5, 1989, p. 9.
Yokoyama, Kurazo. Annual Transactions of the American Association of
Cost Engineers (AACE). Morgantown, W.V.: AACE, 1988.
Index
Amortization schedules, 105-112
Autocorrelation, 80
Benefit cost analysis, 121-123
Beta distribution. See Probability
distribution
Bootstrap methods, 29, 60
Bulk power distributor, 99-100
Bureau of Economic Research
OBERS model, 87
Capacity planning, 112-121
Central Limit Theorem, 57
Conservation, 78-79
Consumer willingness to pay, 122
Contingency funds, 57-60
Critical path analysis, 61
Critical path modeling (CPM), 47
Debt schedules, 100-112
Default risk, 20-22
Demand: definition of, 71; inelastic and elastic, 74-77; Law of,
70; price elasticity of, 72
Disbursement pattern for construction expenditures, 62
Sampling strategies, 34
Scenario development, 10
Sensitivity analysis, 10
Sinking funds, 101-105
Standard & Poor's, 8
Stochastic dominance, 38, 40, 41
Stochastic independence, 56, 60
Structural models, 33
Tennessee Valley Authority, 52
Time series analysis, 30-33, 79,
106-108
Trans-Alaska pipeline cost overruns, 51
Triangular distribution. See Probability distribution
Uncertainty, 90-94; maximin criterion for decisions, 138; pure,
136-138
Uniform probability distribution.
See Probability distribution
U.S. Army Corps of Engineers,
52
Utility bond ratings, 3, 43
Verification of risk analysis, 132-133
Washington Public Power Supply
System (WPPSS), 3
Weibull distribution. See Probability distribution
White noise residuals, 32-33