
RISK ANALYSIS AND THE DECISION-MAKING PROCESS

IN ENGINEERING

MAURICIO SANCHEZ-SILVA

1. INTRODUCTION

The development of any kind of engineering facility requires, at some stage, making
decisions, and a thorough consideration of the context within which these decisions are
made. In engineering, there is always the chance that unintended consequences might
occur. Consequently, there is a permanent search for measures to assess the margin
between the capacity of an engineered facility and the demands upon it. Since neither
the demand nor the capacity can be described accurately, modeling and managing
the uncertainty is paramount. This chapter presents a discussion of the relationship between
risk analysis and decision making. Furthermore, a general framework in which
risk analysis is considered as the main tool for decision making is proposed. Special
emphasis is given to optimization as a fundamental tool for supporting assertive
decisions.
2. THE NEED FOR RISK MANAGEMENT

The reason for the existence of engineering is to provide better and more efficient means
to improve the quality of life. Therefore, for any engineering project to be successful,
it is necessary to estimate all possible future scenarios and make the appropriate
considerations in the planning, design, construction and operation stages. Some scenarios
cannot be foreseen at all, and those that can be predicted might be difficult to model.
If an unexpected event occurs, engineering and society cannot do much; as stated by
Plato: "How can I know what I do not know?" However, regarding the events we can
predict, our responsibility is to find technical requirements that balance safety and cost,
so that they can be included in design and construction specifications. It is in the meaning of
the word balance that the decision-making process and risk analysis become
relevant. Besides, appropriate decisions should also take into account
the context and the preferences of the person or institution that makes the choice.
Within this required balance, safety is related to avoiding losses of any type, e.g.
economic, human, and environmental. Cost refers to the value of such losses
for an individual, an institution or for society as a whole. Value is defined as the
proportion of the satisfaction of needs divided by the use of resources. In other words,
value is proportional to quality divided by cost. Thus, the main quandary is: how
many resources is someone willing to invest in safety, given that resources are limited?
The answer lies in one of the most interesting and difficult fields in engineering, risk
acceptability (section 6). The paramount importance of this concept is that it defines
the way the economy and social welfare evolve. Although it is not evident to many
engineers, behind any regulation, e.g. a code of practice, there is a whole framework
which determines our life.
3. RISK

Haimes (1998) defines risk as a measure of the probability and severity of adverse effects.
Harr (1987) defines it as the lack of ability of the system to accommodate the
demands placed upon it by the sponsor, the user and the environment. More specifically,
Melchers (1999) defines the term risk in two ways: 1) the probability of failure of a
system from all possible causes (violation of limit states); and 2) the magnitude of the
failure condition, usually expressed in financial terms. These definitions of risk are
commonly used in specific engineering problems called "hard systems", which tackle
well-structured systems engineered to achieve a given objective (Checkland, 1981). A
common and widely used definition of risk is in terms of the expected value of a given
level of losses (i.e. cost):
    Risk = E(L) = Σ_{i=1}^{n} P(L_i) L_i        (1)

where L_i is the loss (i.e. a measure of the consequences) of the system for a given failure
scenario i; P(L_i) is the probability of occurrence of such a loss; and n is the total number
of scenarios considered. For instance, the risk, or the expected losses, in a city might be
described as the losses caused by an earthquake multiplied by the probability of having
such losses in case of an earthquake, plus the losses caused by landslides multiplied by
the probability of having such losses in case of landslides, and so forth.
Note that P(L_i) in equation 1 refers not only to the probability of occurrence of
the trigger event (e.g. an earthquake), but to the occurrence of any scenario i. A detailed
risk analysis should consider the immediate and long-term consequences, as well as
the changes in the probability assessment over time (Blockley and Dester, 1999). The
definition of what is "high" or "low" risk depends upon the context and the decision
to be made (section 6).
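Equation 1 can be sketched in a few lines of code. The scenario names and numbers below are illustrative assumptions, not data from this chapter:

```python
# Hedged sketch of equation 1: risk as the expected value of losses,
# Risk = E(L) = sum_i P(L_i) * L_i.  All probabilities and loss values
# are hypothetical.

def expected_loss(scenarios):
    """scenarios: list of (probability, loss) pairs, one per failure scenario."""
    return sum(p * loss for p, loss in scenarios)

# Hypothetical city-level scenarios (annual probability, loss in million US$):
scenarios = [
    (0.01, 500.0),   # earthquake-induced losses
    (0.05, 40.0),    # landslide-induced losses
    (0.10, 5.0),     # flood-induced losses
]

risk = expected_loss(scenarios)   # 0.01*500 + 0.05*40 + 0.10*5 = 7.5
print(risk)
```

The result, 7.5 million US$ per year, is the probability-weighted sum over all scenarios considered.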


The probability of losses in a given scenario (equation 1) (e.g. caused by earthquakes)
can be expressed as (Sanchez-Silva, 2001):

    p(L_i) = Σ_{j=1}^{m} p(L_i | A_j) p(A_j)        (2)

where p(L_i | A_j) is the probability of a loss level L_i given that the event A_j has occurred; and p(A_j) is the probability of A_j. Events A_j are defined by the context of the
problem and can describe, for example, different intensities of the same phenomenon.
Then, following the previous example, the probability of having losses due to earthquakes is the sum of the probability of having a given level of loss due to earthquakes given
that an earthquake of intensity I = 4 has occurred, multiplied by the probability of
having an earthquake of intensity I = 4, plus the probability of having a loss given
that an earthquake of intensity I = 5 has occurred, multiplied by the probability of
having an earthquake of intensity I = 5, and so on.
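The total-probability decomposition of equation 2 can be sketched as follows; the intensity levels and all numbers are assumptions chosen only to illustrate the calculation:

```python
# Sketch of equation 2: the probability of a loss level L_i is obtained by
# total probability over triggering events A_j (e.g. earthquake intensities),
# p(L_i) = sum_j p(L_i | A_j) * p(A_j).  All values are hypothetical.

def loss_probability(p_loss_given_event, p_event):
    """Combine conditional loss probabilities with event probabilities."""
    return sum(pl * pa for pl, pa in zip(p_loss_given_event, p_event))

# Assumed annual probabilities of earthquakes of intensity I = 4, 5, 6:
p_event = [0.020, 0.005, 0.001]
# Assumed probability of reaching the loss level for each intensity:
p_loss_given_event = [0.10, 0.40, 0.90]

p_L = loss_probability(p_loss_given_event, p_event)
# 0.10*0.020 + 0.40*0.005 + 0.90*0.001 = 0.0049
print(p_L)
```

Multiplying this scenario probability by its loss L_i and summing over all scenarios then reproduces equation 3.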
Therefore, in its general form, risk can be described as:

    Risk = E(L) = Σ_{i=1}^{n} [ Σ_{j=1}^{m} p(L_i | A_j) p(A_j) ] L_i        (3)

Probability assessment depends highly on the definition of limit states, which in turn
is highly related to the model of the system. The probability of losses (i.e. the probability of
failure, p_f) depends upon the relationship between the demand (D) and the capacity
(C) of a system, and it can be described in terms of the limit state function, g(D, C),
as:
    p_f = p(g(D, C) ≤ 0)        (4)

Reliability theory states that, in its general form, the failure probability can be computed
as:

    p_f = ∫_D f_X(x) dx        (5)

where f_X(x) is the joint density function of the state variables of the system and D is the
unsafe region. A detailed discussion of the mathematical aspects of equation 5 can be
found in Harr (1987), Kottegoda and Rosso (1997), and Melchers (1999).
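One common way to evaluate the integral in equation 5 is crude Monte Carlo sampling. The sketch below estimates p_f = p(g(D, C) ≤ 0) for an assumed pair of lognormal demand and capacity variables; the distribution parameters are illustrative, not taken from the chapter:

```python
# Monte Carlo sketch of equations 4-5: sample the state variables from their
# (assumed) joint density and count how often the limit state g(D, C) <= 0
# is violated.  The lognormal parameters below are assumptions.
import random

random.seed(1)

def g(d, c):
    return c - d   # limit state function: failure when capacity <= demand

def monte_carlo_pf(n=100_000):
    failures = 0
    for _ in range(n):
        d = random.lognormvariate(0.0, 0.3)   # demand sample
        c = random.lognormvariate(1.0, 0.3)   # capacity sample (independent here)
        if g(d, c) <= 0:
            failures += 1
    return failures / n

print(monte_carlo_pf())   # roughly 0.01 for these assumed parameters
```

For independent lognormal D and C this estimate can be checked analytically, since ln C − ln D is normal; dependent variables would require sampling from the joint density, as equation 5 indicates.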
Blockley (1992) argues that a dependable risk analysis requires the complete identification and assessment of all unwanted possible futures for computing the probability.
Although many of them can be identified, risk assessment can only be carried out over
a small number of future scenarios, those which are technically foreseeable. In other words,
risk analysis may be a limited guarantee of a proper description of possible future scenarios, especially when those possible futures are difficult to predict. Therefore, the
analysis should focus on hazard. However, that is not to say that a risk analysis does not


provide useful information, but that this information is an input into an overall
analysis. Blockley (1992) proposes a definition of risk slightly different from traditional
engineering approaches but more general:
"risk is the combined effect of the chances of occurrence of some failure or disaster and its
consequences in a given context".
This definition considers three important factors: probability, consequences and context, which are the key to a dependable risk analysis and have been widely discussed
(Blockley, 1992; Sanchez-Silva et al., 1996; Elms, in Blockley, 1992). The value of
this definition is that it is robust enough to be used in the analysis of diverse situations
such as natural risks and industrial safety.
Frequently, vulnerability functions are combined with hazard data in order to estimate the probable distribution of losses for all possible events in a given time period,
and thereby determine the risk (Coburn and Spence, 1992). For instance, in earthquake engineering, risk is defined as "the probability that social or economic consequences of a
specific event will equal, or exceed, specified values at a site, at several sites, or in an area,
during a specified exposure time" (EERI, 1984). The way hazard and vulnerability
functions are combined to obtain risk is a matter of continuous debate. An event can be
considered a hazard only if it is a threat to a system, and a system is only vulnerable if it
can be damaged by an event (i.e. a hazard). There is a strong dependence between these
two concepts, and only in a few situations can hazard and vulnerability be assumed to
be independent variables. Therefore, risk cannot be evaluated just by multiplying these
functions, because doing so implies an independent relationship. The term "convolution"
is sometimes used as a way to describe the complex connection between hazard and
vulnerability, but it does not have any meaning in the strict mathematical sense.
On the whole, risk focuses on the identification and quantification of those factors
contributing to a loss of fitness for purpose (i.e. function) of a system.
4. DECISION-MAKING PROCESS

4.1. Basic concepts

A decision is a choice or a judgment that is made about something. When a decision
is required, the person or institution is faced with a set of alternative actions and with
uncertainty about the consequences of all or some of these actions. The problem lies
in deciding what the best possible action is. Usually, the best action is defined in terms
of a rational decision strategy which is based only on the information available.
Any engineering study is directed at providing information for decision making, and
the decision is usually made by optimizing the objective function F(·):
    ∂F(X_1, X_2, ..., X_n) / ∂X_i = 0,    i = 1, 2, ..., n        (6)

F(·) describes the decision criterion (e.g. cost) and X_i the main variables involved in the
decision. In spite of the fact that there is a significant number of numerical methods

Figure 1. General structure of a decision tree. (The tree branches from a decision node over the possible decisions, through probability nodes over the alternative actions, to the consequences.)

for solving equation 6, the applicability and scope of such an approach are limited
by the mathematical requirements and the nature of engineering problems. Making
decisions requires considering different types of information and evidence that cannot
be described within a single mathematical model. Therefore, several methods, such as
event trees (section 4.2), have been developed and are widely used in many different
engineering problems.
4.2. Decision trees

For problems related to engineering decisions, decision trees are a common alternative
and a convenient method to integrate graphic and analytic concepts. In particular,
the analytic component is based on conditional probability and Bayes' rule. A decision tree has a structure similar to that of an event tree. It is drawn by identifying
the various possible decisions. Further subdivisions of every possible decision present
alternative actions. The final structure then provides an overview of all possible actions
and consequences, over which estimated probabilities of occurrence can be computed
(Figure 1).
Making decisions based only on the probabilities at the end of every branch is not appropriate, since the nature of the consequences has to be considered. In other words, it is
not enough to make a decision based on a criterion such as maximum probability of
loss; it is necessary to carefully consider the implications and possible consequences
of such a decision. This leads to defining a criterion for selecting the best alternative among
all those available. A criterion widely used for that purpose is the maximum expected value.
It is based on von Neumann and Morgenstern's (1944) idea that any decision has


to be made based on the expected value; otherwise it will not be appropriate. It is
important to mention that appropriateness does not mean success; it implies that, given
certain conditions, this is the best option.
The expected value analysis is possible only if all interested parties share the same
objectives and use the same attributes for making the decision (e.g. economic value).
Thus, the best decision is:

    D = max_{i=1,...,m} [ Σ_{j=1}^{n} p_{ij} X_{ij} ]        (7)

where m is the number of alternative decisions, n is the number of actions associated with
every alternative, p_{ij} is the probability of action j within alternative i, and X_{ij} is the consequence
of action j associated with alternative i.
Note that the expected value computed in equation 7 follows the same conceptual approach as that used in equation 1 to define risk.
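The maximum expected value rule of equation 7 can be sketched as follows; the probabilities and consequences are hypothetical numbers chosen for illustration:

```python
# Sketch of equation 7 (maximum expected value): choose the alternative whose
# probability-weighted consequences are largest.  All values are assumed.

def best_alternative(p, x):
    """p[i][j]: probability of action j under alternative i;
       x[i][j]: consequence (e.g. benefit) of action j under alternative i.
       Returns the index of the best alternative and all expected values."""
    expected = [sum(pij * xij for pij, xij in zip(pi, xi))
                for pi, xi in zip(p, x)]
    i_best = max(range(len(expected)), key=lambda i: expected[i])
    return i_best, expected

p = [[0.7, 0.3], [0.5, 0.5]]          # two alternatives, two actions each
x = [[100.0, -50.0], [60.0, 20.0]]    # consequences of each action
i_best, expected = best_alternative(p, x)
print(i_best, expected)   # expected values [55.0, 40.0] -> alternative 0
```

Note that each inner sum is exactly the expected-loss computation of equation 1, applied per alternative.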
Many other criteria have been defined for improving the selection of alternatives.
For instance, the minimax strategy focuses on the minimum value of the maximum risk
of every possible decision. Similarly, other approaches such as the maximin, maximax
and the Hurwicz rule are also well-known strategies. The mathematics of this matter has
been widely discussed in the literature. However, what is important is that there has
to be a way to select the best alternative rationally.
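Two of the alternative rules mentioned above can be sketched briefly. The payoff matrix is a hypothetical example, and the weight alpha in the Hurwicz rule is an assumed degree of optimism:

```python
# Sketch of the maximin rule (best worst-case consequence) and the Hurwicz
# rule (a weighted blend of best and worst cases).  Payoffs are hypothetical.

def maximin(x):
    """Return the index of the alternative with the largest minimum payoff."""
    return max(range(len(x)), key=lambda i: min(x[i]))

def hurwicz(x, alpha=0.5):
    """alpha = 1 reproduces pure optimism (maximax), alpha = 0 pure pessimism."""
    return max(range(len(x)),
               key=lambda i: alpha * max(x[i]) + (1 - alpha) * min(x[i]))

x = [[100.0, -50.0], [60.0, 20.0]]   # consequences per alternative/action
print(maximin(x))        # alternative 1: worst case 20 beats worst case -50
print(hurwicz(x, 0.8))   # alternative 0: 0.8*100 + 0.2*(-50) = 70 beats 52
```

Note that the two rules can disagree with each other and with the expected-value criterion, which is why the choice of rule must itself be made rationally for the problem at hand.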
4.3. Defining utility criteria

Quantifying the result of every possible action is only possible if the decision criterion
is "measurable". When the decision has to be made in terms of parameters that are difficult to
measure, e.g. preferences, level of quality, comfort, or danger, it is difficult to calculate
the expected value. An appropriate decision is the one that maximizes the expected
benefits of the outcome, which are not necessarily economic, but might also be social
or personal.
In order to overcome this difficulty, a common approach to obtain quantifiable criteria
for decision making is through utility functions. Utility functions are defined as a measure
of the implications of the decision, for the decision maker, in an overall form. Utility is
defined as the true measure of value for the decision maker. Thus, a utility function
is a fictitious function which describes the relationships between the actual values of a
given decision.
Let us assume, for instance, that someone is facing the decision to take route A or B
to get to their workplace in the morning. Route A is longer but less expensive, and
route B is shorter but very expensive. The decision about which route to take depends
upon the values of the person who makes the decision. If he/she prefers to get to the
office on time, route A is not an option. However, if cost is the main decision factor, the
best option is route A. On the whole, decision making is based on preferences; that is,
on the utility or disutility function (Figure 2).


Figure 2. Typical utility functions. (Utility u(x), scaled between 0.0 for the lowest preference and 1.0 for the highest, plotted against an attribute, e.g. US$; the curves illustrate risk aversion, risk neutrality and risk affinity.)

The choice of a suitable utility or disutility (i.e. loss) function is perhaps the main
problem in the Bayesian approach and in defining the best alternative in a decision.
There are many ways to define such functions; some are based on expert opinions
and others on standard, well-known functions (e.g. exponential: a + b e^{-γx};
logarithmic: a ln(x + β) + b; quadratic: a(x - 0.5αx²) + b). A detailed discussion
of this matter is beyond the scope of this chapter, but it has been treated widely by
many authors.
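A short sketch shows how such a function is used in practice. The exponential form u(x) = 1 − exp(−x/ρ) and the numbers below are assumptions for illustration, not values from the chapter:

```python
# Sketch of a risk-averse exponential utility function and its certainty
# equivalent: the sure amount whose utility equals the expected utility of
# an uncertain prospect.  Parameterization and numbers are assumed.
import math

def u(x, rho=100.0):
    return 1.0 - math.exp(-x / rho)   # concave, hence risk-averse

def certainty_equivalent(lottery, rho=100.0):
    """lottery: list of (probability, outcome) pairs."""
    eu = sum(p * u(x, rho) for p, x in lottery)
    return -rho * math.log(1.0 - eu)  # invert u at the expected utility

# 50/50 lottery between winning 0 and 100:
lottery = [(0.5, 0.0), (0.5, 100.0)]
ce = certainty_equivalent(lottery)
print(round(ce, 1))   # about 38.0: less than the expected value of 50,
                      # the hallmark of risk aversion
```

The gap between the certainty equivalent and the expected value (here roughly 12 units) is the risk premium the decision maker implicitly pays to avoid the uncertainty.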
Currently, utility functions are widely used for modeling decision processes which
move away from purely technical problems. Within the context of applied engineering,
utility concepts will continue developing and, in the future, they will take a more relevant
role in risk analysis and the decision-making process.
The discussion on risk (section 3) and the basics of decision making (section 4) will
now be integrated in the next section.
5. RISK-ANALYSIS BASED DECISION PROCESS

Decision making is based upon a process of collecting evidence and developing models
to combine this evidence. However, one of the main problems is that there is no
widely accepted notion of what a "good" decision is. In risky decisions, the expected
benefits or risks are not assured; on the contrary, they may or may not occur. Therefore,
uncertainty management becomes a key element in the process.
The cyclic process of the interaction of a decision maker with the world has been
described by Blockley (1992) as a Reflective Practice Loop (RPL) (Figure 3). In the
RPL a decision maker perceives the world, reflects on it and takes a decision to act.
According to Blockley and Dester (1999), the decision-making process is driven by


Figure 3. Reflective practice loop, adapted from Blockley and Godfrey (2000).

expectation, which is the result of the perceived past, present and forecast rates of change
for outcomes and consequences. They also suggest that decisions should consider not
only whether there is a benefit, or a risk, but also their respective chances and the
potential for success or failure. Thus, a decision also depends on the opportunity and
the risk, understood as the relative chances of having a benefit or a loss, respectively. In
addition, the potential for success or failure determines the concern of the decision
maker after considering the present situation in the light of the potential consequences
of his/her decision; this is a fundamental element for the final decision.
Risk analysis is a decision tool and, as a consequence, cannot be abstracted from the
decision process. A decision process based on risk analysis usually connotes quantitative
assessments and therefore reliance on probability and statistics. Many authors, e.g. Ang
and Tang (1984), Kottegoda and Rosso (1997) and Haimes (1998), present and discuss the most common quantitative and qualitative strategies for decision making (e.g.
decision trees, decision matrices, the fractile method, etc.). However, risk-based decision-making methodologies do not necessarily require knowledge of probability. Through
risk, the identification of hazards, the definition and description of facilities, and the assessment of the susceptibility to damage and the consequences of failure may be handled
within a single framework, as discussed in section 5.1.
5.1. General framework for integrating risk into the decision-making process

The decision process consists of defining, analyzing, classifying and ranking all possible
scenarios in terms of their likelihood and consequences, and of defining an acceptance
criterion for selecting the best option. A complete strategy for using risk analysis

Figure 4. Risk-based decision process. (Flowchart: identification and definition of the system — function (what it is for), elements and relationships, boundaries/context/environment, systems approach ("hard"/"soft"); nature of the decision problem — objective of the decision, aspects involved, decision characteristics (i.e. limits, context), benefits/consequences of the decision; followed by risk analysis, assessment of risk acceptability, and the risk-analysis-based decision.)

as part of the decision-making process is presented in Figure 4. In this process, the
identification and definition of the system, the nature of the decision problem, a risk
analysis strategy, the assessment of risk acceptability and the selection of an alternative
(i.e. the decision) are considered within a single framework.
The identification and definition of the system focuses on understanding the problem
as precisely as possible. It should be considered from a systemic point of view and
must include the definition of the boundaries, the elements and their relationships.
In addition, other important aspects that are fundamental in the problem definition,
such as transformations, owners, players, clients and resources, should also be considered. An engineering system includes physical as well as organizational components,
which interact tightly to fulfill its function. Thus, the context plays a significant part,
and it has been argued that it is a key element in the definition of risk acceptability.
What drives the decision is the need to move from a current state of affairs to a new
state of affairs. The motive of this process may be the result of the owner's decision or
the result of needs imposed by external conditions. Therefore, a clear understanding
of the relationship between the function of the system and the implications of the
scenarios resulting from the decision is required. The scope and aims of the decision,
as well as the relationships between benefit/consequences, opportunity/risk and preparedness/hazard (Blockley and Dester, 1999), must be carefully listed, studied and
recorded.
The risk analysis process, as described in Figure 5, rearranges and includes the concepts of hazard, vulnerability and risk. System modeling concentrates on the functional


Figure 5. Risk analysis (follows from Figure 4). (Flowchart, from the nature of the decision problem to the assessment of risk acceptability: system modelling — functions/attributes of system components, dependence relationships of components, selection of the decision criteria (strength/form/grounding/adaptability); hazard modelling — external factors (trigger events) and internal factors (preconditions for failure); probabilistic analysis of failure scenarios — identify and define all failure scenarios, look for scenarios related to the decision criteria, classify and rank them, carry out a detailed probabilistic analysis and check the dependability of the probabilistic models; analysis of consequences — evaluation by scenario, considering technical/social/economic aspects, and quantification of the consequences.)

relationships between system components and deals with the dependability of the
cause-effect model. Modeling dependence between parameters and identifying the
parameters' statistical models is paramount. The definition of assessment criteria (e.g. form,
strength, grounding) is decided in terms of the function of the system, and it might
be possible that more than one criterion is needed (Table 1). The identification of risk-generating hazards defines all the different sets of state variables which are preconditions
for failure. They will be used later to describe the scenarios considered in the analysis.
Risk assessment requires the consideration of all possible future scenarios, although
they might not all be easy to predict (Table 1). Not all the future scenarios identified are
technically foreseeable, and some may require the use of alternative assessment procedures.
Uncertainty due to the difficulty of predicting scenarios must be an essential part of
the decision process.
Each scenario requires a probabilistic analysis, not necessarily based on classical probability, and a detailed consideration of the consequences. The probabilistic analysis must
be as complete as possible and has to manage carefully all factors related to uncertainty: incompleteness, fuzziness and randomness. Quantitative parameters should
be estimated by using probabilistic methods or empirical evidence. They should not
be estimated on the basis of extrapolation beyond the limits of empirical data unless


Table 1 Basic concepts included in the risk analysis

Criteria                         Description
Assessment criteria              Strength (i.e. the ability to cope with external demands);
                                 Form (i.e. its shape or pattern);
                                 Grounding (i.e. the basis on which the model is founded);
                                 Adaptability (i.e. change to deal with new situations);
                                 Response capacity (i.e. ability to recover)
Characterisation of losses       Costs of goods and services;
                                 Economic effects (e.g. unemployment, low productivity);
                                 Deaths and injuries (e.g. fatality rate);
                                 Health (e.g. illness, life expectancy);
                                 Psychological factors (e.g. traumas);
                                 Environmental impact;
                                 Quality of life
Definition of failure scenarios  Most likely;
                                 Most frequent;
                                 Maximum losses;
                                 Expected recovery time

there is dependable evidence. Sensitivity analysis might be a very valuable tool for
estimating the effect of imprecision and uncertainties (Stewart and Melchers, 1997).
The consequences must be viewed from a systemic perspective and must be considered
in terms of technical, social and economic aspects (Table 1). Immediate and long-term
consequences must also be considered.
Risk analysis should not search for precise solutions to the problem, but should provide
relevant elements (i.e. evidence) for a decision. Thus, after all possible scenarios have
been considered, enough evidence must have been collected for a decision. The level
of detail used for the risk analysis has to be weighed very carefully against the relevance
of the decision. In fact, the level of detail of the analysis should not go beyond the limit
where relevance and precision become mutually exclusive (Zadeh, 1965).
The final criteria for the decision should consider the problem of the acceptability of
risk (section 6). It should be assessed on the basis of current risk levels (e.g. codes
of practice) and expected changes. The acceptability of risk should be assessed with
regard to an explicit assessment of all relevant quantitative and qualitative characteristics
of the system. It should not be assessed on the basis of single-valued measures of risk.
Alternatives for the decision must be evaluated with reference to the economic, legal
and political context. Finally, a decision is made, and further actions, such as monitoring
and review, have to be designed to ensure dependable long-term solutions.
5.2. Final remarks

To close this section, it is important to stress that a good decision maker is not only a
person who has a good strategy and makes appropriate choices, but also a person with
the ability to see the world clearly as a coherent picture. Blockley and Godfrey (2000)
state that this clarity is simple but not simplistic and depends upon strong underlying


conceptual models. They also stress the concept of "wisdom engineering" and quote
Elms (1989):
"A wise person has to have knowledge, ethicalness and appropriate skills to a high degree.
There also has to be an appropriate attitude, an ability to cut through complexity and to see
the goals and aims, the fundamental essentials in a problem situation, and to have the will and
purpose to keep these clearly in focus. It has to do with finding simplicity in complexity. More
fundamentally it has to do with world views and the way in which the person constructs the
world in which they operate, which is to say, in engineering, that wisdom has to do with having
appropriate conceptual models to fit the situation".
In summary, good decisions are strongly related to wisdom engineering.
6. ACCEPTABILITY OF RISK

Risk-based decisions are not only about the best option in terms of measurable parameters (cost, utility, etc.); they are also related to what a person, or group of people,
is actually willing to accept. Risk acceptance depends on complex value judgments,
which consider both qualitative and quantitative characteristics of risk. Reid (in Blockley,
1992) argues that the acceptability of risk is determined by the need for risk exposure, control of the risk and fairness. For instance, a decision might be optimal, in a
mathematical sense, but the psychological, social or personal conditions may make that
option not viable. For instance, in tossing a coin, the probability of winning or losing
is 50%. If people are asked how much they will bet, most people will bet 1 dollar,
fewer people will bet 50 dollars, only a few 100 dollars, and very few 1000 dollars.
The shape of the function that describes the willingness to bet defines the utility and
the risk-aversion or risk-affinity functions described in section 4.3. It has been observed that,
in general, people are risk averse.
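The coin-toss illustration can be made concrete with a risk-averse utility. The logarithmic utility of wealth and the wealth level below are assumptions chosen only to show the effect:

```python
# Sketch of why willingness to bet falls with the stake: under a concave
# (risk-averse) utility, a fair 50/50 bet always lowers expected utility,
# and the loss grows with the stake.  Utility form and wealth are assumed.
import math

def u(w):
    return math.log(w)          # a common risk-averse utility of wealth

def fair_bet_premium(wealth, stake):
    """Expected-utility loss of accepting a fair 50/50 bet of the given stake."""
    eu_bet = 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake)
    return u(wealth) - eu_bet   # > 0 for any risk-averse utility

for stake in (1, 50, 100, 1000):
    print(stake, fair_bet_premium(2000.0, stake))
# The premium grows rapidly with the stake, which is consistent with most
# people betting 1 dollar, fewer 50, and very few 1000.
```

This is the same mechanism as the certainty-equivalent calculation in section 4.3, seen from the side of the bet rather than the lottery.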
The definition of an acceptable level of risk has always been a key issue in engineering
design. In reliability terms, this is related to the decision about whether the probability
of a limit state violation is acceptable or not. However, the decision about acceptance
should also include an assessment of the consequences of failure and the context
within which an unfavorable event might happen. Among the most common criteria
for making decisions about risk acceptance are: the comparison of the calculated
failure probability with other risks in society, the definition of the ALARP (As Low
As Reasonably Practicable) region, calibration against past and present practice, and
cost-benefit analysis (Sanchez-Silva and Rackwitz, 2003).
In comparing the calculated failure probability with other risks in society, special attention should be given to the differentiation between individual and collective risks
(Figure 6). An individual acts with respect to his/her needs, preferences and lifestyle.
Thus, risk acceptance depends on the degree to which the risk is incurred voluntarily.
Collective (public) risk is the concern of the government, or the operator of a technical facility, who acts on behalf of society as a whole and is not concerned with the
individual's safety.
Table 2 presents the main causes of death in Colombia and the corresponding failure
probability obtained from the National Statistical Office (DANE, 2002). The values

Figure 6. Risk perception for individuals and society (Pate-Cornell, 1994). (Annual probabilities of death are divided into three regions: 'not important' at the low end, 'unacceptable' at the high end, and an intermediate region governed by cost-benefit analysis; the bounds for society are stricter than those for individuals.)


Table 2 Main causes of death in Colombia

Cause of death              Number of fatalities   Annual probability of death
Heart attack                50,504                 1.22 x 10^-3
Cancer                      26,477                 6.37 x 10^-4
Respiratory problems        16,981                 4.09 x 10^-4
Digestive system diseases    9,280                 2.23 x 10^-4
Malnutrition                 8,562                 2.06 x 10^-4
Homicide                    26,163                 6.20 x 10^-4
Traffic accident            10,807                 2.60 x 10^-4
Drowning                     1,152                 2.77 x 10^-5
Fire                           188                 4.53 x 10^-6

presented in Table 2 vary significantly from country to country, in particular between
developed and less developed countries. The effect of these differences is directly related to the
life expectancy of a population, a concept that will be discussed in section 8.2.
What is relevant in terms of acceptability is the relation between the actual annual
probability of death and people's perception of it. A representative statistical study was
carried out in Colombia's capital, Bogota, among 1,100 people with different socio-economic backgrounds, in order to gain a better knowledge of their perception of
the risk of death. The results were compared with the values obtained from the official
statistics. The comparison was made in a normalized space, for which different models
relating perception and actual failure probability values were calibrated. Table 3
shows the results regarding people's attitude to risk. It can be
observed that, for most causes of death considered, risk aversion is common.
In spite of the fact that this approach seems to be a "rational" way ahead, defining
acceptable risk by comparing the death rates of different activities within a society may be
misleading. The acceptability of risks varies with age, gender, socioeconomic conditions,


Table 3 Risk perception for Colombia

Cause of death                    Risk attitude
Traffic accident                  Aversion
Heart attack                      Affinitive
Respiratory diseases              Aversion
Natural disasters                 Aversion
Digestive system diseases         Aversion
Cancer                            Affinitive
Fire                              Aversion
Medical complications             Aversion
Homicide                          Affinitive
Nervous system                    Aversion
Air accident                      Aversion
Infrastructure damage/collapse    Aversion
Drowning                          Aversion
Infection                         Affinitive
Brain damage                      Aversion
Malnutrition                      Affinitive

[Figure: annual frequency of occurrence (10⁻⁷ to 10⁻¹) plotted against the number of fatalities N (1 to 10,000), divided into "Extremely high", "ALARP" and "Not important" regions.]
Figure 7. Definition of the ALARP Region.

level of education, cultural background, available information, media influence, physiological aspects, and so forth.
The ALARP approach defines a region of acceptable values of the probability of failure in a plot of the occurrence probability of adverse events versus their consequences (Figure 7). It has been used widely in industry as part of Health and Safety programs. Although this approach might be appealing, there are significant difficulties in its interpretation, the openness of decision processes, the morality of actions and the comparability between facilities (Melchers, 1999).


Calibration of acceptable levels of risk against past and present practice has also been used for defining target reliabilities. It is tacitly assumed that this practice is optimal, although this is not at all obvious. Rackwitz (2001) argues that, although this acceptability criterion is based on trial and error, it cannot give totally wrong numbers because of its long history. Nevertheless, analysis shows that there is great variation in reliability levels.
Finally, a reliability-oriented cost-benefit analysis considers that technical facilities should be economically optimal. However, the question of the economic value of human life, or better, the question of how to reduce the risk to human life, cannot be avoided. This approach has recently been updated by including the Life Quality Index (LQI) proposed by Nathwani et al. (1997), leading to the conclusion that risk acceptability from the public perspective is essentially a matter of efficient investment in life-saving measures. This approach will be discussed in detail in section 7.2.
Acceptable risks for most engineering artifacts which might cause fatalities, measured in terms of annual probability, have been shown to lie below the level for common chronic disease (10⁻³/yr) but somewhat above the "de minimis" risk threshold, around 10⁻⁷/yr (Pate-Cornell, 1994), where individuals and society are indifferent to the risk (Ellingwood, 2000). Since this range is extremely wide, a strategy for selecting appropriate target reliabilities and risk acceptance criteria becomes a key element for making decisions. The optimization strategy presented in the following sections is suggested as a dependable and effective approach for defining the criteria for optimum seismic structural design.
7. OPTIMIZATION

The overall discussion in the previous sections has led us to the problem of making assertive decisions. In spite of the fact that a decision involves, to a great extent, the context of the problem and a significant degree of subjectivity, "numerical models" are important to provide criteria with a low degree of subjectivity. Therefore, although incomplete, optimization techniques are rational, dependable models which may support coherent decision processes. This section describes some basics of optimization and its applicability to engineering problems.
7.1. Basic optimization concepts

The essence of numerical optimization is described by equation 6, which provides the maximum or minimum value of the objective function F(·). Considering uncertainty, a reliability-based optimization process consists of defining the optimum value of the vector parameter p for which the engineering facility is financially feasible. The vector parameter p stands for any measure capable of controlling the risk of failure. For instance, in the case of a building structure, p could be the dimensions of the structural elements, the reinforcement, the quality assurance program during construction, the maintenance program during service, and so forth.
The general objective function for maximization can be expressed as:

Z(p) = B - C(p) - D(p)                                      (8)


[Figure: cost curves B (benefit), C(p) and D(p) as functions of the parameter p.]
Figure 8. Description of cost functions for the optimization.

where B is the benefit derived from the structure, assumed independent of the vector parameter p; C(p) is the cost of design and construction; and D(p) the expected failure cost (Figure 8).
Since the structure will eventually fail after some time, the optimization has to be performed at the decision point (i.e. t = 0). Therefore, all costs need to be discounted, for example, by using a continuous discounting function such as δ(t) = exp[-γt]. In accordance with economic theory, benefits and expected costs, whatever their type, should be discounted at the same rate; however, different parties may use different rates. While the owner may use interest rates from the financial market, the interest rate for an optimization on behalf of the public is difficult to define. A detailed discussion on the role and importance of the interest and benefit rates can be found in (Rackwitz, 2002).
For typical engineering facilities, Hasofer and Rackwitz (2000) proposed several replacement strategies: (1) the facility is given up after service or failure, or (2) the facility is systematically replaced after failure. Further, it is possible to distinguish between structures that fail upon completion or never, and structures that fail at a random point in time. Assuming a constant benefit rate (i.e. b = βC₀), the objective function for all cases is presented in Table 4, where H is the cost of failure; γ is the annual discount rate corrected for inflation/deflation averaged over sufficiently long periods; and λ(p) is the rate of failure for stationary Poissonian failure processes.
The merit of the objective function for random failures in time with systematic reconstruction (e.g., the seismic design case) is that it does not depend on a specific lifetime of the structure, which is a random variable very difficult to quantify and usually much longer than the values specified by codes of practice. The solution is based on failure intensities and not on time-dependent failure probabilities. It is neither necessary to define arbitrary reference times of intended use nor to compute first-passage time distributions. The same targets, in terms of failure rates, can be set


Table 4 Objective function for various replacement strategies

Replacement strategy                                      Objective function
Failure upon completion due to time-invariant
loads, systematic reconstruction                          Z(p) = b/γ - C(p) - (C(p) + H) Pf(p)/(1 - Pf(p))
Random failure in time, given up after failure            Z(p) = b/(γ + λ(p)) - C(p) - H λ(p)/(γ + λ(p))
Random failure in time due to random
disturbances, systematic reconstruction                   Z(p) = b/γ - C(p) - (C(p) + H) λPf(p)/(γ + λPf(p))
for temporary structures and monumental buildings alike, given the same marginal cost of reliability and the same failure consequences (Rackwitz, 2000). Thus, by using the third equation in Table 4 there is no need to perform the optimization in terms of the expected total cost over a time period (i.e. the design life for a new facility, or the remaining life for a retrofitted facility), usually called the Life-cycle Cost Design criterion (section 8); the analysis can instead refer to yearly rates of occurrence of the event and annual failure probabilities.
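As an illustration of how the third objective function in Table 4 can drive a design decision, the following sketch maximizes Z(p) over a scalar design parameter p. All numerical values and the parametric forms of C(p) and Pf(p) are hypothetical assumptions introduced for the example, not taken from the chapter.

```python
import math

# All numbers below are hypothetical: discount rate, base construction cost,
# failure cost (including the life-saving cost H_F), benefit rate b = beta*C0,
# and the annual rate of the disturbing event.
GAMMA = 0.03   # annual discount rate (gamma)
C0 = 1.0e6     # base construction cost (US$)
H = 3.0e6      # total failure cost (US$)
B = 0.05 * C0  # constant benefit rate b (US$/yr)
NU = 0.1       # annual rate of the hazardous event (e.g. strong earthquakes)

def construction_cost(p):
    # Assumed: cost grows linearly with the scalar design parameter p
    return C0 * (1.0 + 0.2 * p)

def cond_failure_prob(p):
    # Assumed: failure probability given the event decays exponentially in p
    return 0.1 * math.exp(-2.0 * p)

def objective(p):
    # Third row of Table 4: random failures in time due to random
    # disturbances, with systematic reconstruction after each failure.
    lam_pf = NU * cond_failure_prob(p)
    c = construction_cost(p)
    return B / GAMMA - c - (c + H) * lam_pf / (GAMMA + lam_pf)

# Simple grid search for the optimal design parameter; any standard
# nonlinear optimizer would serve equally well.
p_opt = max((0.01 * i for i in range(1, 500)), key=objective)
```

The optimum balances the marginal cost of strengthening against the marginal reduction in the expected discounted failure cost; beyond p_opt, further strengthening costs more than the risk it removes.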
7.2. Cost of saving human lives

It was stated in section 2 that decision making requires a balance between safety and cost, and that safety is related to avoiding losses. Commonly in engineering, safety is related to saving human lives. There has always been a great amount of discussion about the cost of human life but, despite the moral and ethical considerations, economic values are still assigned, mainly by insurance companies. For instance, FEMA (1992) reports that, for the United States, the cost of injury can be taken as US$1,000/person and US$10,000/person for minor and serious injury, respectively. According to the same source, the life saving cost has been assessed at US$1,700,000. In general terms, fatality and injury losses can be evaluated using one of two approaches, namely, the human capital approach and the willingness-to-pay approach (Cho et al., 1999). These are interesting approaches that support a common need in engineering, expressed by Lind (2001) as "a measure of tolerable risk should be based on human values and expressed in human terms".
Many widely used social and economic indicators have been developed by international organizations as an attempt to measure and compare the "quality" of life and development of different societies. Basic social indicators are statistical time series such as life expectancy or Gross Domestic Product (GDP), while compound social indicators are functions of such data for specific purposes. Lind (2001) argues further that any social index that is a differentiable function of life expectancy and GDP per person implies a tolerable and simultaneously affordable risk value. The principle of equal marginal returns defines those risk-cost combinations. Although the management of public risks has several ethical, psychological and political dimensions, the core of the overall management of risk is a problem of allocation of economic resources to serve the public good.


Risk management is the purchase of extra life expectancy. Thus, "the cost of life-saving is not so many dollars; rather the cost of a dollar is so much life" (Lind, 2001). The cost of human life will be taken into account through the Life Quality Index (LQI), which is a compound indicator of the well-being of a society defined as (Nathwani et al., 1997):

L = f(g) h(t)                                      (9)

where the function f(g) represents the intensity, while the factor h(t) represents the duration, of the enjoyment of life. The LQI assumes that f(g) and h(t) are independent functions. The parameter g is the individual mean contribution to the GDP, and t the time for enjoyment of life whose quality is measured by g. Life expectancy, e, is proposed as a measure of safety, and the GDP per person, g, as a surrogate measure of the quality of life. On the whole, the LQI is a cost-efficiency-based criterion expressed in terms of a marginal utility that does not depend on absolute values of life expectancy at birth or the gross national product. It is based on considerations about the potential loss of life and does not deal with risks to any particular group, nor does it deal with risks to any identifiable person.
The LQI is based on the assumption that GDP per capita (i.e. g) is proportional to working time w; thus, if the time spent in economic activities is we (0 < w < 1), then the time for enjoyment of life is t = (1 - w)e. It is therefore reasonable to assume that individuals maximize their income with respect to the time they spend earning it, that is,

dL/dw = 0                                      (10)

After some mathematical manipulation, the LQI can be approximated as (Nathwani et al., 1997; Rackwitz, 2001):

L = g^w e^(1-w)                                      (11)

where w is the fraction of e devoted to economic activities in order to raise g. It has been observed that the value of w varies between 0.10 for developed countries and more than 0.20 for undeveloped countries. An activity, regulation, project or undertaking changing life expectancy and involving cost is reasonable if (Nathwani et al., 1997):

dL/L = w (dg/g) + (1 - w)(de/e) > 0                                      (12)
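Equation 12 reduces to a one-line acceptance test for any proposed safety measure. In the sketch below, the safety measure and the country figures (chosen to loosely resemble the Colombian entry in Table 5) are illustrative assumptions.

```python
def lqi_acceptable(g, e, w, dg, de):
    # Equation 12: dL/L = w*(dg/g) + (1 - w)*(de/e) > 0
    return w * (dg / g) + (1.0 - w) * (de / e) > 0.0

# Hypothetical measure: costs 0.5% of GDP per capita per year and adds
# 0.4 years of life expectancy, with g = 5749, e = 70.9, w = 0.15
accept = lqi_acceptable(5749, 70.9, 0.15, -0.005 * 5749, 0.4)
```

The test makes the trade-off explicit: a measure is worthwhile only while the relative gain in life expectancy, weighted by (1 - w), outweighs the relative loss of income, weighted by w.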

The cost of a life saving operation requires estimating the cost of averting a fatality in terms of the gain in life expectancy Δe. The cost of the safety measure is expressed as a reduction Δg of the GDP. Thus, the Implied Cost of Averting a Fatality (ICAF) can be obtained from the equality in equation 12 after separation and integration from g to g + Δg and from e to e + Δe. Therefore, the cost per year (i.e. ΔC = -Δg) to extend a person's life by Δe is:

ΔC = g(1 - (1 + Δe/e)^(-1/q)),  with q = w/(1 - w)                                      (13)

Because ΔC is a yearly cost and the (undiscounted) ICAF has to be spent for safety-related investments in technical projects at the decision point t = 0, one should multiply by e_r = Δe, and the ICAF becomes:

ICAF(e_r) = g(1 - (1 + e_r/e)^(-1/q)) e_r                                      (14)

The societal equality principle prohibits differentiating with respect to particular ages within a group; therefore (Rackwitz, 2002),

ICAF = ∫₀^∞ ICAF(e - a) h(a) da                                      (15)

where h(a) is the density of the age distribution of the population, which can be obtained from life tables. Recently, Pandey and Nathwani modified the LQI by defining a Societal Life Quality Index (SLQI); a detailed discussion of this can be found in Rackwitz (2003). The ICAF in equation 14 is in fact very close to the human capital, i.e. the lost earnings if a person is killed at mid-life; therefore, it might be used as an orientation for a justifiable compensation by insurance or the social system in a country. The ICAF is derived from changes in mortality caused by changes in safety-related measures implemented in a regulation, code or standard by the public. Values of g, e, w and ICAF for selected countries are shown in Table 5. Therefore, for an exposed group of technical projects with N_F potential fatalities, the "life saving cost" is (Rackwitz, 2002):

H_F = ICAF · k · N_F                                      (16)

where k (0 ≤ k ≤ 1) is a constant that relates changes in mortality to changes in the failure rate and can be interpreted as the probability of actually being killed in case of failure. H_F implies that incremental investments in structural safety should be undertaken as long as one can "buy" additional life years. It is emphasized that H_F is not an indicator of the magnitude of a possible monetary compensation for the fatalities, such as in the case of an earthquake, nor a measure of the value of human life. It is a number which society is willing to pay to save life years, i.e. investments in structural safety via codes of practice or the like. It enters into the optimization as a fictitious number at the decision point (Rackwitz, 2002).
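Equation 14 is straightforward to evaluate. In the sketch below, the exponent uses q = w/(1 - w), and e_r = e/2 is a crude hypothetical stand-in for the age-averaging of equation 15, so only the order of magnitude of the ICAF values in Table 5 should be expected.

```python
def icaf(g, e, w, e_r):
    # Equation 14 with q = w / (1 - w):
    #   ICAF(e_r) = g * (1 - (1 + e_r/e)**(-1/q)) * e_r
    q = w / (1.0 - w)
    return g * (1.0 - (1.0 + e_r / e) ** (-1.0 / q)) * e_r

# e_r = e/2 is a rough stand-in for the age-averaging of equation 15
for country, g, e, w in [("Canada", 26251, 78.7, 0.125),
                         ("Colombia", 5749, 70.9, 0.15)]:
    print(country, round(icaf(g, e, w, e / 2.0)))
```

With these inputs the sketch reproduces ICAF values in the 10⁵-10⁶ US$ range, consistent in order of magnitude with Table 5; the exact tabulated values depend on the full age-averaging and the underlying life tables.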
7.3. Life cycle costing

A particular case of optimization has to do with the life cycle of any engineering artifact. This topic has become increasingly relevant, given its importance for the


Table 5 GDP per capita (all monetary values in PPP US$, 1999, i.e. adjusted for purchasing power parity), life expectancy, w and ICAF for selected countries

Country         g (US$) (GDP/capita)    e (years)    w        ICAF
Canada          26,251                  78.7         0.125    2.1·10⁶
France          24,470                  78.7         0.125    1.9·10⁶
Germany         23,742                  77.6         0.125    2.2·10⁶
USA             31,872                  76.8         0.125    4.7·10⁶
Mexico           8,297                  72.4         0.15     6.2·10⁶
Brazil           7,073                  67.5         0.15     4.8·10⁵
Colombia         5,749                  70.9         0.15     4.0·10⁵
India            2,248                  62.9         0.18     1.5·10⁵
China            3,617                  70.2         0.18     2.6·10⁵
Japan           24,898                  80.8         0.15     2.1·10⁶
Egypt            3,420                  66.9         0.15     2.4·10⁵
South Africa     8,908                  53.9         0.15     5.0·10⁵
Kenya            1,022                  51.3         0.18     5.2·10⁴
Mozambique         861                  39.8         0.18     3.7·10⁴

efficient assignment of resources in the long term and its relationship with the concept
of sustainability. The basics of life cycle costing will be described in this section.
7.3. 1. General aspects

As in industry, engineering products such as infrastructure facilities have to extend cost evaluation from the simple "counting" approach to the life cycle where value is created, and to employ foresight instead of hindsight. Thus, the analysis should look forward in time, beyond the organization's production costs. It should focus on the underlying drivers of business performance, which are essential for managing the statistical nature of costs. Within this context, Life Cycle Cost analysis plays a far greater role than traditionally thought.
Proactive cost management should handle all kinds of risks that can cause losses to the infrastructure. Those risks range from classical engineering risks (failure of the structure) to business risks, which have recently become a new focal point of corporate governance. Within this context, it is evident that cost management should ideally be expanded to risk-based cost management, as well as focus on total cost. Life Cycle Cost analysis should take risk and uncertainty into consideration to be really useful for decision making.
Emblemsvag (2003) argues that the Life Cycle Cost analysis idea requires acting in the following directions:

- From partial focus to holistic thinking
- From structure to process orientation
- From cost allocation to cost tracing
- From deterministic to uncertainty management.


[Figure: life cycle stages (conception, planning, design, construction, operation, replacement/disposal) as seen from the perspectives of the manufacturer, user, planner and owner.]
Figure 9. Life cycle cost from different perspectives.

Moving to holistic thinking involves recognizing the real effect of a project on the well-being of a society. It is, of course, related to quality in a broader sense and consequently to the relevance of the decisions. Considering systems as processes is a paramount concept in a holistic approach, since it recognizes the dynamic nature of decisions. Cost allocation refers to assigning costs using arbitrary allocation bases, whereas tracing relies on cause-and-effect relationships. Finally, uncertainty management is at the heart of any decision whose outcome is not certain.
7.3.2. Basics oflife cycle costing

The life cycle of any artifact depends upon the perspective from which it is looked at; however, it refers to the lapse of time during which "someone" invests resources and expects to obtain some benefit (Figure 9). Therefore, Life Cycle Cost refers to the cost incurred by "someone" during the life cycle of the artifact. Needless to say, costs should ideally be lower than benefits in the long run; otherwise, the product is not worth being built.
Costs and benefits are not easy to quantify, since the social impact of a product plays a significant part in the decision to develop it. It is important to distinguish here between the product life cycle and the market life cycle. The former is related to one or a few items, while the latter is concerned with the business management of a product. In this chapter the discussion will focus on the product life cycle.
The life cycle cost is closely related to design and development because it has been realized that it is better to eliminate costs before they are incurred instead of trying to cut costs after they are incurred. This represents a paradigm shift away from cost cutting toward cost control during design; in other words, the traditional approach of cost cutting is very ineffective. In terms of infrastructure, it has been shown that the investment during the life span is several times higher than the original cost. Nowadays, public institutions and companies cannot afford to segregate cost accounting from design engineering, construction and other core business processes. This puts new challenges on management that traditional cost accounting techniques cannot handle properly.
On the whole, the Life-Cycle Cost can be defined as "the total cost that is incurred, or may be incurred, in all stages of the product life cycle" (Emblemsvag, 2003). The


Life-Cycle Cost is a decision support tool that must fit its purpose; it is not an external financial reporting system.
8. EXAMPLES

In order to illustrate the main concepts described in this chapter, two applied examples
of decision making will be discussed.
8.1. Allocation of resources to transport networks

Allocation of resources for the construction, maintenance and rehabilitation of transport network facilities has become a priority in most countries, in particular in those where most freight is transported by land. This section presents a model for optimizing the allocation of resources based on the operational reliability of transport network systems. The optimum assignment of resources is carried out based on a set of possible actions described in terms of the failure and repair rates of every link. Thus, the model optimizes the assignment of resources so that the accessibility of a centroid or of the total network is maximized (Sanchez-Silva et al., 2004).
8.1.1. Basic considerations

Reliability of transport systems is a complex issue that involves several factors that differ in nature. Transport systems analysis must combine physical and functional considerations, which are not necessarily independent. Physical aspects are related to the impossibility for the user to reach a destination due to damage to the infrastructure (e.g. collapse of a bridge). Functional aspects are concerned with the level of service provided, such as excessive travel times (Sanchez-Silva et al., 2004).
A transport network system can be thought of as a stochastic dynamic system, where the state of links (i.e. failed or not failed) and the users' decisions change permanently. A road network is defined as a system, which can be represented mathematically as a graph G(N, A) made up of a finite set of N nodes and A links. The network includes a set of roads selected on the basis of any technical, functional or administrative criteria. Centers of special interest, such as cities, are designated as centroids and should be clearly defined within the network.
In order to assess the reliability of a network system, it is required to consider the network's variation with time. This can be looked at from two perspectives: (1) the decisions that the user has to make as he/she goes along a route from one node to another; and (2) the average failure and repair rates of a link within a route between two nodes. These two aspects are considered in this example within a single model. For more details of the model refer to Sanchez-Silva et al. (2004).
8.1.2. Decision criteria

Accessibility is selected as the main decision criterion. It is the ability to command the transportation facilities that are necessary to reach desired destinations at suitable times. It is the most important relationship emerging from the interaction between the elements of the network.


8. 1.3. Accessibility

Accessibility is widely used in transport studies in different contexts. Different authors, such as Moseley (1979), Halden (1996), and Geertman and Van Eck (1995), agree that accessibility depends on two factors: (1) an activity or motivation based on the opportunities available in a location; and (2) a resistance factor based on the generalized cost of traveling (e.g. efficiency, low cost). Others (Shen, 1998) have also included concepts such as the demand for the aforementioned opportunities. Accessibility is strongly related to the location and relevance of centroids, the willingness to move, and the opportunities and benefits of moving in accordance with the attributes of the network.
The operating condition of the network is then evaluated through the Accessibility Index A, which is used to describe the efficiency of the network in communicating centroids. Accessibility can be defined in terms of any variable of the network system; however, disutility, which is the cost of the trip as network users perceive it, is a usual parameter. Disutility encompasses all factors that affect the cost for the user and the way he/she integrates them. That includes aspects such as travel time, speed limit, quality of the road, safety, landscape, congestion, and so forth. Although the disutility considers every important factor in the decision making process, travel time and direct cost are usually the most relevant components (Bell and Iida, 1997).
The accessibility to centroid i is defined here by:

A_i = Σ_{j=1, j≠i}^{n} N_{j→i} f(E[C_{j→i}])                                      (17)

where f(E[C_{j→i}]) is a monotonic decreasing function, e.g., f(E[C_{j→i}]) = 1/E[C_{j→i}]; n is the number of centroids in the network; N_{j→i} is the number of vehicles (traffic); and E[C_{j→i}] is the expected cost of traveling from every centroid j to centroid i. This definition implies that as the cost of traveling increases, accessibility decreases (Sanchez-Silva et al., 2004).
However, equation 17 does not provide enough information about the behavior of the whole network. Therefore, the Network Reliability Index (F_N) is proposed to measure the change in the accessibility of the entire network; it is defined as (Sanchez-Silva et al., 2004):

F_N = Σ_{j=1}^{m} w_j A_j                                      (18)

where m is the number of centroids in the network and w_j the weight of centroid j in accordance with its importance for the network (e.g. economic, social). Every centroid has a weight w_j calculated according to the evaluation objectives, such as the amount of freight or passengers generated or attracted. In order for the values of w_j to guarantee completeness, it is necessary that the sum of all w_j be equal to 1 (Lleras and Sanchez-Silva, 2001).


8.1.4. Optimization of resource allocation

The appropriate assignment of resources depends on an effective use of the resources available to induce a change in accessibility as a function of the changes in the failure and repair rates (λ_k and μ_k). If the repair rate (μ_k) is increased, the response capacity after any interruption of link k improves; similarly, a decrease of the failure rate (λ_k) means that prevention measures have been successful. The approach using λ_k and μ_k as the main parameters of the model facilitates the definition of performance indexes and the decision making process.
Any change in these rates has an associated cost. Therefore, every action has to be looked at in terms of the relationship between the cost and the benefit obtained in the operation of the network if such action is taken. The objective of the optimization is to maximize the change in accessibility, which may be looked at from two perspectives: (1) improving the accessibility to a particular centroid; or (2) enhancing the accessibility of the whole network. The first analysis focuses on assessing the change in the accessibility of a given centroid as a function of modifications of the parameters λ and μ. The second alternative considers an increase in the accessibility of the network as a whole.
For the centroid analysis, the optimization can be expressed as (Sanchez-Silva et al., 2004):

Maximize:

A_i(λ_1, λ_2, ..., λ_n, μ_1, μ_2, ..., μ_n) = Σ_{j=1, j≠i}^{n} N_{j→i} f(E[C_{j→i}])                                      (19)

Subject to:

Σ_{i=1}^{n} C_{λi}(λ_i) + Σ_{i=1}^{n} C_{μi}(μ_i) = C_L                                      (20)

λ_i ≥ 0
μ_i ≥ 0

where A_i(λ_1, λ_2, ..., λ_n, μ_1, μ_2, ..., μ_n) is the accessibility as defined in equation 17; C_L is the maximum amount of resources (e.g. US$) available for investing in the road network; and C_{λi}(λ_i) and C_{μi}(μ_i) are the costs of modifying the failure, λ, or repair, μ, rates of link i. The results of the optimization are the changes in the failure and repair rates of every link such that the accessibility to centroid i is maximized.
The optimization of the operation of the complete network can be expressed by simply replacing the objective function (equation 19) with equation 18:

F_N(λ_1, λ_2, ..., λ_n, μ_1, μ_2, ..., μ_n) = Σ_{j=1}^{m} w_j A_j(λ_1, λ_2, ..., λ_n, μ_1, μ_2, ..., μ_n)                                      (21)

The proposed model requires a nonlinear optimization over 2n variables. Note that the constraints λ_i > 0 and μ_i > 0 will always be non-binding (Sanchez-Silva et al., 2004).
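The spirit of equations 19-21 can be illustrated with a very small sketch: allocate a fixed budget between reducing the failure rate of one link and raising the repair rate of another. The chapter does not give a closed-form mapping from (λ_k, μ_k) to expected travel cost, so the steady-state availability μ/(λ + μ) of each link is used here as a hypothetical surrogate; the quartic action-cost form C(x) = k(x - x₀)⁴ is borrowed from the case study in section 8.1.5, the budget is treated as an upper bound rather than an equality, and all numbers are invented.

```python
# Two links in series between two centroids; link availability mu/(lambda+mu)
# is an assumed surrogate for the effect of (lambda, mu) on expected trip cost.
LAMBDA0, MU0 = [0.20, 0.30], [2.0, 1.5]  # current failure and repair rates
K_LAM, K_MU = 400.0, 50.0                # assumed cost coefficients k
BUDGET = 30.0                            # assumed available resources C_L
N_TRIPS, BASE_COST = 100.0, 10.0         # traffic and base disutility

def action_cost(lams, mus):
    # Quartic cost of moving each rate away from its current value
    return (sum(K_LAM * (l0 - l) ** 4 for l, l0 in zip(lams, LAMBDA0)) +
            sum(K_MU * (m - m0) ** 4 for m, m0 in zip(mus, MU0)))

def accessibility(lams, mus):
    # Equation 17 with f(x) = 1/x; expected cost grows as availability drops
    avail = 1.0
    for l, m in zip(lams, mus):
        avail *= m / (l + m)
    return N_TRIPS * avail / BASE_COST

# Brute-force search over reducing lambda of link 1 and raising mu of link 2
best = None
steps = [i / 20.0 for i in range(21)]
for dl in steps:
    for dm in steps:
        lams = [LAMBDA0[0] - 0.15 * dl, LAMBDA0[1]]
        mus = [MU0[0], MU0[1] + 2.0 * dm]
        if action_cost(lams, mus) <= BUDGET:
            cand = (accessibility(lams, mus), lams, mus)
            if best is None or cand[0] > best[0]:
                best = cand
```

Even this toy version shows the key feature of the real model: because the action costs are strongly convex, the optimum spreads the budget across both rates instead of spending it all on one intervention.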

[Figure: network of eight centroids in central Colombia, including Manizales and Cali, together with the origin-destination traffic matrix used in the study.]
Figure 10. Transport network considered for the illustrative example. Adapted from Sanchez-Silva et al. (2003).

Table 6 Accessibility to every centroid

Centroid    Weight    Accessibility
1           0.2775    0.1213
2           0.0648    0.0853
3           0.0056    0.0383
4           0.1388    0.1559
5           0.1850    0.4614
6           0.0925    0.4566
7           0.0046    0.1579
8           0.2313    0.1497

8.1.5. Case study

As an example, a part of the main transport network in central Colombia was considered (Figure 10), where the relevant information on the parameters, in suitable units for the study, is also presented (Sanchez-Silva et al., 2004).
As mentioned before, the decision criterion for allocating resources is the change in accessibility; in particular, investment should lead to an increase in the total accessibility of the network. Therefore, it is first necessary to compute the accessibility to every centroid independently. This is performed by using equation 17, with the results shown in Table 6 (Sanchez-Silva et al., 2004).
By considering the weights also shown in Table 6, which were obtained based on traffic demand and specific socioeconomic characteristics of each centroid, it is possible


[Figure: bar charts of relative investment per link.]
Figure 11. (a) relative investment in the failure rate with respect to the maximum; and (b) relative investment in the repair rate with respect to the maximum.

to compute the Network Reliability Index, F_N, as:

F_N = Σ_{j=1}^{m} w_j A_j = 0.224

This value corresponds to the current accessibility of the network. Thus, the optimization consists in defining the assignment of resources (fixed budget) to modify these parameters so that the Network Reliability Index is maximized. This requires defining a cost function for every possible action, which in this case are the changes of the failure and repair rates of every link. Defining this cost function depends on the context of the problem and has to be carefully structured. For this example, the cost function for every link has the form C(x) = k(x - x₀)⁴, where x represents λ or μ, and x₀ is the corresponding original value. These values were obtained from a regression analysis and reflect the actual socioeconomic conditions of the region (Sanchez-Silva et al., 2004).
The optimization is clearly a nonlinear problem, which can be solved using standard methods such as the projected gradient. The restrictions on the optimization are the amount of resources available and the final values of the parameters λ and μ, which have to be positive. For a limited budget of US$1,867 million, Figure 11 shows the results of the optimization in terms of the relative investment in every link for the failure and repair rates.
It can be observed in Figure 11 that investments in improving the repair rate are higher and more evenly distributed throughout the links than those required for enhancing the failure rates. The investment in the failure rate is highly concentrated in link 9, followed by the investment in links 5 and 6. Actions directed to reduce λ are related to physical interventions such as the construction of retaining wall structures, retrofitting bridges and so forth. In terms of the repair rates, links 3 and 4 are significant for the network since they have the smallest cost of repair and provide the redundancy for the route from centroids 1 to 8. A failure of link 3 causes links 2 and 5 to become extremely critical and any alternate route very expensive. In general, it has been observed that the investment directed to improve repair rates has a lesser impact on the network, since the time a link is out of service is very short.
8.1.6. Summary and final remarks

Allocation of resources for enhancing the reliability of a transport system is a priority and a very controversial issue due to the differences in the criteria used for that purpose.
An approach to computing the reliability of transport network systems from an entirely probabilistic viewpoint, as proposed by Sanchez-Silva et al. (2004), has been presented. It considers the state of the network through the relationship between the failure and repair rates of every link comprising the network. These rates are directly related to social or physical characteristics of the road, such as its condition or the frequency and size of landslides. The example illustrates how optimizing the assignment of resources can be an efficient decision-making strategy to enhance the reliability of any transport network system.
8.2. Design of structural systems

Building design and construction is a fundamental activity for a society. Technical requirements are defined by codes of practice based on well-tested mechanical models. In areas where earthquake activity is important, the uncertainty of the seismic load defines the design requirements. In this particular case, the balance between safety and cost is extremely important and a careful consideration of the cost of saving human lives is required (section 7.2). This example discusses how the decision on the acceptability of the earthquake design criteria is controlled by the criterion of saving life years.
8.2.1. Decision criteria

For this example, the decision has to do with the level of acceleration for which a building structure has to be designed, in accordance with the socioeconomic characteristics of the society where it is going to be built.
8.2.2. Probabilistic model of the ground motion

Earthquake hazard assessment focuses on defining the probability of exceedance of a particular ground motion parameter at a site in a given period of time, T. For convenience, and in agreement with current practice, the peak ground acceleration is taken as the design parameter. Attenuation laws, which relate peak ground acceleration to magnitude and epicentral distance, have the general form:
a = h(m, r) = b1(r)·e^(b2·m)    (22)

where a is the peak ground acceleration at the site of interest, b2 = 0.573, m is the Richter magnitude and b1(r) is a function of distance describing the energy dissipation.

324

Mauricio Sanchez-Silva

For the seismic conditions of California, which are similar to those of the region of interest, i.e. Bogota, b1(r) = 9.81·0.0955·exp(−0.00587r)/√(r² + 7.3²) (Joyner and Boore, 1981). For the attenuation law in equation 22, a coefficient of variation of about 0.6 is reported. The data collected showed that the magnitude can be fitted by an extreme value distribution type III (for maxima). The upper bound is defined by historical data of earthquake events and by the regional geological characteristics. In addition, only events with magnitudes which may cause significant damage are taken into account, i.e. m > M_min = 4.0. Therefore, the conditional probability density function for the magnitude is given by (Sanchez-Silva and Rackwitz, 2004):
(23)
The values of the distribution parameters w and u can be determined from the available data on the seismicity of the area. The denominator of the equation corresponds to 1 − F_M(M_min), which accounts for the probability of having an earthquake with magnitude higher than M_min. The parameters of the distribution were computed by a maximum likelihood method based on data for Bogota. The yearly rate of occurrence of earthquakes has been determined as λ = 2.9. Based on the attenuation law in equation 22 and the conditional density function for the magnitude in equation 23, the derived conditional density function for the acceleration can be calculated as:
(24)
with M_min < h⁻¹(a, r) = (1/b2)·ln(a/b1(r)) < M_u. In order to obtain the unconditional density function for the acceleration, it is necessary to integrate over the area within which the analysis is performed (i.e. a circle around the point of interest, R_max = 200 km). Assuming that events are uniformly distributed over this area, the probability density function of the distance R is f_R(r) = 2r/R²_max. Of course, more elaborate models can be developed if the location and earthquake pattern of the seismic sources are clearly identified. Assuming stochastic independence between magnitude and distance, the density for the acceleration is (Sanchez-Silva and Rackwitz, 2004):

(25)

The mean and the standard deviation of the acceleration, without considering the uncertainty in the attenuation law, are 0.075 m/s² and 0.116 m/s² respectively, implying a coefficient of variation of 155%. The maximum and minimum accelerations expected on site are a_max = 9.44 m/s² (R = 0, M = M_u) and a_min = 0.014 m/s² (R = 200 km, M = 4).
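These extremes can be checked with a small Monte Carlo sketch. The attenuation constants below are the ones quoted above; the magnitude model is a stand-in (a truncated exponential rather than the fitted Type III distribution, whose parameters w and u are not reproduced here, with an assumed decay rate), and the upper bound M_u = 7.5 is itself an assumption that happens to reproduce the reported a_max = 9.44 m/s² and a_min = 0.014 m/s².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Attenuation law of eq. 22 with the Joyner & Boore (1981) distance term
# quoted in the text; b2 = 0.573, R_max = 200 km.
b2, R_max = 0.573, 200.0
def b1(r):
    return 9.81 * 0.0955 * np.exp(-0.00587 * r) / np.sqrt(r**2 + 7.3**2)

# Distance: f_R(r) = 2r / R_max^2  ->  inverse transform r = R_max * sqrt(U)
r = R_max * np.sqrt(rng.uniform(size=n))

# Magnitude: truncated exponential between M_min = 4 and an assumed upper
# bound M_u = 7.5 (decay rate beta assumed), sampled by inverse transform.
M_min, M_u, beta = 4.0, 7.5, 1.8
u_ = rng.uniform(size=n)
m = M_min - np.log(1 - u_ * (1 - np.exp(-beta * (M_u - M_min)))) / beta

a = b1(r) * np.exp(b2 * m)   # peak ground acceleration, m/s^2
```

Evaluating the law at the corner cases gives b1(0)·e^(b2·7.5) ≈ 9.44 m/s² and b1(200)·e^(b2·4) ≈ 0.014 m/s², matching the bounds reported in the text.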

8.2.3. Model of the probability of failure of the structural system

The structural system is modeled as a single-degree-of-freedom system. The demand on the building structure subjected to a ground motion depends upon the probability distribution function of the acceleration and the variation of the acceleration obtained from the response spectrum. Thus, the limit state function can be defined as g(R, S, A, ε) = R − S·A·ε = 0, where R is the resistance, including all structural characteristics such as the distribution of masses and ductility, S accounts for the variability of the response spectral acceleration of the system to the ground motion (μ_S = 1, σ_S = 0.6), A is the peak ground acceleration at the base, and ε accounts for the uncertainty in the attenuation law. If both resistance and demand are modeled as lognormal distributions, the conditional probability of failure of the system can be expressed analytically as (Sanchez-Silva and Rackwitz, 2004):

(26)

The ratio p/(s·a) corresponds to the central safety factor, where p is the mean value of the resistance. The variability of the attenuation law is included within the variability of the demand, V_S, which was increased for the purpose of this study to 0.8. It is pointed out that equation 26 is valid only conditionally on the acceleration, a; therefore, the expectation has to be taken to remove the condition. The unconditional probability of failure can be computed by integrating over the acceleration range. Thus, the unconditional probability depends only on the parameter p and represents the failure probability of the structural system when subjected to an acceleration with probability density as defined in equation 25.
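The unconditioning step can be sketched as follows. The conditional term uses the standard closed form for the probability that a lognormal resistance (mean p, with an assumed V_R = 0.3) falls below a lognormal demand with mean a and V_S = 0.8; since the density of eq. 25 is not reproduced here, the acceleration is drawn from a lognormal stand-in matched to the reported moments (mean 0.075 m/s², standard deviation 0.116 m/s²).

```python
import math
import random

# Conditional failure probability for lognormal resistance and demand
# (standard closed form): P(R < S*A | A = a), with mean resistance p and
# mean demand mu_S * a = a.  V_R is assumed; V_S = 0.8 follows the text.
def pf_given_a(p, a, V_R=0.3, V_S=0.8):
    num = math.log((p / a) * math.sqrt((1 + V_S**2) / (1 + V_R**2)))
    den = math.sqrt(math.log((1 + V_R**2) * (1 + V_S**2)))
    return 0.5 * math.erfc(num / den / math.sqrt(2))  # = Phi(-num/den)

# Stand-in for the acceleration density of eq. 25: a lognormal matched to
# the reported moments (mean 0.075 m/s^2, standard deviation 0.116 m/s^2).
mean_a, std_a = 0.075, 0.116
sigma2 = math.log(1 + (std_a / mean_a) ** 2)
mu_ln = math.log(mean_a) - sigma2 / 2

random.seed(1)
def pf_unconditional(p, n=50_000):
    # Expectation over the acceleration removes the condition on a
    tot = 0.0
    for _ in range(n):
        a = math.exp(random.gauss(mu_ln, math.sqrt(sigma2)))
        tot += pf_given_a(p, a)
    return tot / n
```

Evaluating the function for increasing p shows the expected monotone decrease of the failure probability with the design parameter.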
8.2.4. Estimation of cost

The definition of cost is fundamental for the optimization process. The costs of interest for the optimization process are: (1) cost of construction; (2) cost of retrofitting; (3) cost of expected material losses; and (4) cost of human life losses (Table 7).
Table 7. Cost functions used in the optimization

Type of cost                    Cost function                        Equation No.
Construction                    C(p) = C0 + C1·p^δ                   (27)
Rehabilitation (retrofitting)   CR(a, p) = C0 + C2·p^δ·a^r           (28)
Expected damage                 HM(a) = CM·η^a                       (29)
Saving human lives              HF(a) = (ICAF·k·NF)·η^a              (30)

C0 (C0 = 1 × 10^6): construction cost independent of the structural system. C1: construction cost that depends on the amount of resistance provided to the structure; δ = 1.1. C2 = 8000 (C2/C0 = 0.008): cost of retrofitting the structure; r = 1.5. CM (CM = 1.5 × 10^5): cost of material damage; η = 1.25. a: acceleration in m/s².

The construction cost of the structure consists of two parts (equation 27): one that does not depend on the structural characteristics (non-structural elements), and the

direct cost of the structure itself. It is widely known that the cost of the structure, if designed to withstand earthquakes, accounts for some 20% to 30% of the total cost of the building. The investment in rehabilitation or retrofitting is related mainly to the structural system, although it may imply some repairs to non-structural elements. This cost depends upon both the structural behavior and the actual ground motion. The parameter p defines the requirement, in structural terms, to upgrade a structure to the reliability level of the current code of practice after it has been damaged, and the acceleration expected on site relates it to the expected structural damage (equation 28).
Expected earthquake damage is usually measured in terms of the earthquake intensity (e.g. MMI, MSK) and can be modeled as vulnerability curves, damage matrices, damage intensity curves and so forth. In this chapter, the damage cost is related not to intensity but to peak ground acceleration (equation 29). Empirical relationships and tables were used as references to relate observed damage and acceleration. The damage functions developed do not discriminate between different intensities of damage (e.g. low, moderate, high), but take into account the total value of the damage. In addition to the direct cost of damage, the cost of loss of opportunity, H_O, was included as a constant depending upon the socioeconomic climate, proportional to C0 but independent of acceleration. For comparative purposes, the proportionality factor was selected as 3.615, 1.0 and 0.231 for high, moderate and low socioeconomic climates respectively (Sanchez-Silva and Rackwitz, 2004).
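The cost model of Table 7 can be written out as follows. Several printed constants are partly illegible, so the values below are illustrative placeholders: the value of C1 and the exponential acceleration dependence η^a (with η = 1.25, the table's exponent constant) are assumptions, as are the ICAF, k and N_F values in the life-saving term.

```python
# Cost functions of Table 7 (eqs. 27-30).  C1 and the eta**a form of the
# acceleration dependence are assumptions; other constants follow the table.
C0, C1, C2, CM = 1e6, 1e5, 8e3, 1.5e5
delta, r_exp, eta = 1.1, 1.5, 1.25

def construction_cost(p):
    return C0 + C1 * p**delta             # eq. 27

def retrofit_cost(a, p):
    return C0 + C2 * p**delta * a**r_exp  # eq. 28

def damage_cost(a):
    return CM * eta**a                    # eq. 29 (assumed exponential in a)

def life_saving_cost(a, ICAF=4e5, k=0.01, NF=50):
    return ICAF * k * NF * eta**a         # eq. 30 (assumed values)
```

All four terms grow with the design parameter p or the ground acceleration a, which is what drives the trade-off resolved in the optimization of section 8.2.5.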
8.2.5. Optimization

The model of random failures in time with systematic reconstruction can be applied to earthquakes, in which the seismic events follow a Poisson process with occurrence rate λ and failures occur independently with probability P_f(p) (Hasofer and Rackwitz, 2000) (Table 4).
An area with moderate to high seismic activity was considered as a case study. The ground motion characteristics used in the model correspond to data obtained from the US Geological Service (USGS, 2001). The study focused on the implications of the LQI in the optimization process and the consequences in terms of structural safety for different socioeconomic contexts. Thus, three different socioeconomic conditions were reviewed (Table 8), with an annual benefit of 0.03C0 and a discount rate of 2% per year.
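The renewal objective behind this setup can be sketched as follows, using the systematic-reconstruction form Z(p) = b/γ − C(p) − (C(p) + H)·λ·P_f(p)/γ associated with Hasofer and Rackwitz (2000); the cost and failure-probability models below are toy stand-ins chosen only to exhibit the interior optimum of the trade-off, not the chapter's fitted models.

```python
# Expected-benefit objective for random failures in time with systematic
# reconstruction (Poisson events, rate lam; continuous discounting, rate gamma):
#   Z(p) = b/gamma - C(p) - (C(p) + H) * lam * Pf(p) / gamma
lam, gamma = 2.9, 0.02                   # event rate (text) and 2 %/yr discount
C0, C1, delta, H = 1e6, 1e5, 1.1, 5e6    # construction cost and failure loss (toy)
b = 0.03 * C0                            # annual benefit 0.03*C0 (text)

C = lambda p: C0 + C1 * p**delta         # eq. 27-type construction cost
Pf = lambda p: min(1.0, 0.02 * p**-2.0)  # toy failure probability, decreasing in p

def Z(p):
    return b / gamma - C(p) - (C(p) + H) * lam * Pf(p) / gamma

# Coarse search for the optimum design parameter
grid = [0.5 + 0.05 * i for i in range(200)]
p_opt = max(grid, key=Z)
```

With these toy numbers Z(p) remains negative, i.e. the facility would not be economical; only the location of the interior optimum, which balances growing construction cost against shrinking expected discounted failure cost, is of interest here.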
Table 8. Data for comparing different socioeconomic contexts

Parameter        High (Western Europe,   Moderate (Latin America   Low (Least developed
                 USA, Japan)             & Caribbean)              countries)
g                23,500                  6,500                     1,500
C                20                      28                        40
w                0.125                   0.15                      0.18
ICAF             3 × 10^6                4 × 10^5                  7 × 10^4
F. opportunity   3.615                   1.0                       0.231

Figure 12. (a) Spectral acceleration for k = 0.01 and for different social climates. (b) Spectral acceleration
for high social climate and different values of k.

A reliability analysis using FORM/SORM was performed in order to determine the acceleration associated with every p-optimum obtained in each case considered. This corresponds to the design value of the peak ground acceleration for which the structure has to be designed. Figure 12a presents the spectral design acceleration as a function of the number of fatalities in case of collapse of the structure, for a value of k = 0.01 and different social climates, while in Figure 12b the results are shown for the high social climate depending upon the value of k.
First of all, it can be observed that in all cases, as the number of fatalities per building increases, there is also an increase in the required design acceleration, and it changes depending upon the socioeconomic condition of the population. In all cases it is clear that a highly developed country should spend more in order to save life years than a developing country. Furthermore, the design acceleration levels defined in current codes of practice are, in many cases, sub-optimal: they account neither for the number of fatalities nor for the socioeconomic characteristics of the population. Therefore, the average design acceleration specified in the code of practice leads to overdesign in moderate and low socioeconomic climates and to underdesign of structures in highly developed countries. Results showed that the design criteria cannot be set in terms of the return period alone; they also have to consider construction and reconstruction costs, opportunity losses, and the potential for saving human lives (Sanchez-Silva and Rackwitz, 2004).
8.2.6. Summary and final remarks

Structures should be optimal in terms of the capital invested and the saving of life years. Selecting appropriate target reliabilities and risk acceptance criteria is paramount for making decisions on infrastructure development. The reliability-based optimization process defines the optimum value of the design parameter p for which the building is financially feasible. Results showed that optimum design values are very sensitive to the socioeconomic context and to the number of fatalities, increasingly so as the economic level of the society rises. In addition, it can be inferred that countries with different socioeconomic contexts should invest their resources in structural safety differently. Least developed countries should invest less in structural safety and redirect the resources to other aspects, such as education or health, which may prove to be more important for development (Sanchez-Silva and Rackwitz, 2004).
9. CONCLUSIONS

Risk analysis is an essential tool for making decisions, since it uses evidence effectively to provide information on the potential consequences of a given scenario. The main challenge that engineers face is how to make decisions which balance cost and safety within an uncertain environment. Decisions range from purely technical aspects to planning and conceiving engineering projects. There is no single way to define the correct decision, because it depends not only upon the strategy for comparing alternatives, but also on the point of view of the decision maker and his or her perception of risk. In other words, it depends highly on the acceptability criteria. Any alternative for defining rational acceptance criteria has to be linked to actual facts and must be the result of an optimization process. A great deal of work remains to be done on integrating risk analysis and decision-making strategies; it is definitely an important way ahead for developing better engineering in the service of society.
REFERENCES
Ang A. H-S. and Tang W. H. (1984), Probability Concepts in Engineering Planning and Design. Vol. II: Decision, Risk and Reliability. John Wiley & Sons, New York.
Bell M. G. H. and Iida Y. (1997), Transportation Network Analysis. Wiley, Chichester.
Blockley D. I. (1992), Engineering Safety. McGraw-Hill, London.
Blockley D. I. and Dester W. (1999), "Hazard and energy in risky decisions". J. of Civil Eng. and Env. Syst., Vol. 16, pp. 315-337.
Checkland P. (1981), Systems Thinking, Systems Practice. John Wiley & Sons.
Cho H. N., Ang A. H-S. and Lim J. K. (1999), "Cost effective optimal target reliability for seismic design upgrading of long span PC bridges". Proc. Applications of Statistics and Probability (Ed. R. E. Melchers & M. G. Stewart), Balkema, Rotterdam, pp. 519-526.
Coburn A. and Spence R. (1992), Earthquake Protection. John Wiley & Sons.
DANE (2002). http://www.DANE.gov.co.
EERI Committee on Seismic Risk (1984), "Glossary of terms for probabilistic seismic-risk and hazard analysis". Earthquake Spectra, Vol. 1, No. 1, November, pp. 35-40.
Ellingwood B. (2000), "Probability-based structural design: prospects for acceptable risk bases". Proc. of the International Conference on Applications of Statistics and Probability, pp. 11-18, Balkema, Sydney.
Elms D. G. (1989), "Wisdom engineering: the methodology of versatility". J. of Applied Eng. Education, Vol. 5, No. 6, pp. 711-717.
Emblemsvag (2003), Life Cycle Costing. John Wiley.
Federal Emergency Management Agency (FEMA) (1992), A Benefit-Cost Model for the Seismic Rehabilitation of Buildings. Volumes 1 & 2, FEMA-227 & FEMA-228.
Geerrman S. C. M. and Van Eck J. R. R. (1995), "GIS and models of accessibility potential: an application in planning". International Journal of Geographical Information Systems, Vol. 9, No. 1, pp. 67-80.
Halden D. (1996), "Managing uncertainty in transport policy development". Proc. Inst. Civ. Engrs Transport, 117, Nov., pp. 256-262.
Haimes Y. Y. (1998), Risk Modelling, Assessment and Management. John Wiley & Sons, New York.
Harr M. E. (1996), Reliability-Based Design in Civil Engineering. Dover.
Hasofer A. M. and Rackwitz R. (2000), "Time-dependent models for code optimization". Proceedings ICASP99, Balkema, Rotterdam, Vol. 1, pp. 151-158.
Joyner and Boore (1981), "Peak horizontal acceleration and velocity from strong-motion records including records from the 1979 Imperial Valley, California earthquake". Bull. Seism. Soc. Amer., 71, pp. 2011-2038.
Kottegoda and Rosso (1997), Probability, Statistics and Reliability for Civil and Environmental Engineers. McGraw-Hill.
Lind N. (2001), "Tolerable risk". Proc. of the International Conference Safety, Risk and Reliability: Trends in Engineering, pp. 23-28, Malta.
Lleras G. C. and Sanchez-Silva M. (2001), "Vulnerability of highway networks". Proceedings of the Institution of Civil Engineers (ICE), United Kingdom, November 2001, Issue 4, pp. 223-230.
Melchers R. E. (1999), Structural Reliability Analysis and Prediction. 2nd Edition, John Wiley & Sons, Chichester, UK.
Moseley M. J. (1979), Accessibility: The Rural Challenge. London.
Nathwani J. S., Lind N. C. and Pandey M. D. (1997), Affordable Safety by Choice: The Life Quality Method. Institute for Risk Research, University of Waterloo, Ontario, Canada.
Pate-Cornell E. (1994), "Quantitative safety goals for risk management of industrial facilities". Structural Safety, Vol. 13, No. 3, pp. 145-157.
Rackwitz R. (2000), "Optimization-the basis of code making and reliability verification". Structural Safety, Vol. 22, No. 1, pp. 27-60.
Rackwitz R. (2001), "A new approach for setting target reliabilities". Proc. of the International Conference Safety, Risk and Reliability: Trends in Engineering, pp. 531-536, Malta.
Rackwitz R. (2002), "Structural optimization and the Life Quality Index". Structural Safety, Special Issue, to be published.
Rackwitz R. (2003), "Discounting for optimal and acceptable technical facilities". To be presented at ICASP9, San Francisco.
Sanchez-Silva M. (2001), "Basic concepts in risk analysis and the decision making process". J. of Civil Engineering and Environmental Systems, Vol. 18, No. 4, pp. 255-277.
Sanchez-Silva M., Taylor C. A. and Blockley D. I. (1996), "Uncertainty modeling of earthquake hazards". Microcomputers in Civil Engineering, Vol. 11, pp. 99-114.
Sanchez-Silva M., Daniels M., Lleras G. and Patino D. (2004), "A transport network reliability model for the efficient assignment of resources". J. of Transport Research B, Elsevier. In print.
Sanchez-Silva M. and Rackwitz R. (2004), "Implications of the Life Quality Index in the optimum design of structures to withstand earthquakes". J. of Structures, American Society of Civil Engineers (ASCE). In print.
Shen Q. (1998), "Spatial technologies, accessibility, and the social construction of urban space". Computers, Environment and Urban Systems, Vol. 22, No. 5, pp. 447-464.
Stewart M. and Melchers R. (1997), Probabilistic Risk Assessment of Engineering Systems. Chapman & Hall, London.
U.S. Geological Survey (2001), http://eqint.cr.usg.gov/neic/, National Earthquake Information Center, World Data Center for Seismology, Denver.
United Nations (2001), Human Development Report 2001. http://www.undp.org/hdr2001/completenew.pdf.
Von Neumann J. and Morgenstern O. (1944), Theory of Games and Economic Behaviour. Princeton University Press.
Zadeh L. A. (1965), "Fuzzy sets". J. of Information and Control, Vol. 8, pp. 338-353.
