
Review

A critical review of methods and models for evaluating organizational factors in Human Reliability Analysis

M.A.B. Alvarenga a, P.F. Frutuoso e Melo b,*, R.A. Fonseca a

a National Commission of Nuclear Energy (CNEN), Rua General Severiano, 90, 22290-900 Rio de Janeiro, RJ, Brazil
b COPPE, Graduate Program of Nuclear Engineering, Federal University of Rio de Janeiro, Caixa Postal 68509, 21941-972 Rio de Janeiro, RJ, Brazil
Article info

Article history:
Received 1 October 2013
Received in revised form 7 April 2014
Accepted 8 April 2014

Keywords:
Human Reliability Analysis
Organizational factors
THERP
ATHEANA
FRAM
STAMP
Abstract

This work makes a critical evaluation of the deficiencies concerning human factors and evaluates the potential of quantitative techniques that have been proposed in the last decades, like THERP (Technique for Human Error Rate Prediction), CREAM (Cognitive Reliability and Error Analysis Method), and ATHEANA (A Technique for Human Event Analysis), to model organizational factors, including cognitive processes in humans and interactions among humans and groups. Two important models are discussed in this context: STAMP (Systems-Theoretic Accident Model and Process), based on system theory, and FRAM (Functional Resonance Analysis Method), which aims at modeling the nonlinearities of socio-technical systems. These models, however, are not yet used in risk analysis in the way Probabilistic Safety Analyses are used for the safety assessment of nuclear reactors. However, STAMP has been successfully used for retrospective analysis of events, which would allow an extension of these studies to prospective safety analysis.

© 2014 Elsevier Ltd. All rights reserved.
1. Introduction
Organizational factors are addressed in first generation models of human reliability analysis by means of performance shaping factors such as training, experience, procedures, management, communication and culture. Several individual psychological and physiological stressors for humans are also treated by such factors. Organizations are made up of humans, and these models suffer from a chronic deficiency in modeling the cognitive processes in humans. Human error is treated similarly to a physical component error. These models lack a cognitive architecture of human information processing, with cognitive error mechanisms, Swain (1990), Kantowitz and Fujita (1990), Cacciabue (1992), Fujita (1992), Hollnagel (1998).
Second generation HRA methods have some kind of cognitive architecture or cognitive error mechanisms. Organizational factors are taken into account by performance shaping factors. The evolution here was to establish a mapping between these factors and the error mechanisms influenced or triggered in a given operational context, since not all performance shaping factors influence a specific error mechanism. Thus, one can generate tables of influences between performance shaping factors and error mechanisms, and between these and specific types of human errors associated with a given operational context for each stage of information processing (detection, diagnosis, decision making and action). In fact, ATHEANA contains comprehensive tables with such interrelationships, NRC (2000). CREAM (Hollnagel, 1998) proceeds similarly.
Although the above methods have evolved in matters relating to human cognition, organizational factors do not have a proper model that can highlight the social, political and economic processes that influence such factors, in a way similar to error mechanisms in human cognition. Such processes involve a complexity that first or second generation models cannot handle properly, Qureshi (2008).
Digital technology systems require an analysis that takes into account complexity not found in analog technology. Digital systems may pass through intermediate fault modes before reaching a final failure state that is revealed to human operators in the human-machine interface. These intermediate states are mostly invisible to operators and can move the system to often catastrophic conditions, in which human beings have no awareness of or information on what the system is doing, NRC (2008).
In addition to digital systems, complex systems deal with social,
political and economic levels of individual, group and organization
relationships (Leveson, 2004a; Qureshi, 2008). Traditional models are based on a successive chain of events, each relating to a previous event causation. This is a strictly linear view of the cause-effect relationship. In contrast to sequential models, epidemiological models evolved later; these distinguish latent failures (design, maintenance, management) that can converge to a catastrophic event when a trigger occurs, i.e., by combining operational failures (unsafe acts, active failures) and typical system conditions (operating environment, context), Leveson (2004a), Qureshi (2008), NRC (2008), Dekker et al. (2011).
These two classical approaches work well when applied to components of conventional (non-digital) systems that have well-defined failure modes and exhibit linear relationships between these failure modes and their causes, even when not expected in the design, since these failure modes are quite visible. Nonlinear interactions, on the other hand, are unplanned, unfamiliar, unexpected sequences of failures and, in addition, invisible and incomprehensible, Leveson (2004a), Qureshi (2008), Dekker et al. (2011).
In complex nonlinear interactions, failures do not arise from the relationship (which may not be exhaustive) between component failure modes and their causes, but emerge from the relationships between these components during operational situations. To study these interrelationships, it is necessary to identify the laws that rule them. The only model that can do that is a model based on systems theory, which aims to study the laws that govern any system, be it physical, biological or social, Leveson (2004a), Qureshi (2008), Dekker et al. (2011).
Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occurs through the man-system interface. Here, one evaluates human errors through human reliability techniques of first and second generation, like THERP (Swain and Guttman, 1983), ASEP (Swain, 1987), and HCR (Hannaman et al., 1984) (first generation) and ATHEANA (NRC, 2007) and CREAM (Hollnagel, 1998) (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants. The focus here is on the anthropological aspects that rule the interaction among human beings. At the third level, one is interested in the influence of organizational culture on human beings as well as on the tasks being performed. Here, one adds to the factors of the second level the economic and political aspects that shape the company organizational culture. Nowadays, Human Reliability Analysis (HRA) techniques incorporate organizational factors and organization levels through performance shaping factors.
List of acronyms
ACCI-MAP Accident Map
AHP/DEA Analytical Hierarchical Process/Data Envelopment Analysis
APJ Absolute Probability Judgment
AOE Analysis of Operational Events
ASEP Accident Sequence Evaluation Program
ATHEANA A Technique for Human Event Analysis
BBN Bayesian Belief Network
BHEP Basic Human Error Probability
BN Bayesian Network
BWR Boiling Water Reactor
CAHR Connectionism Assessment of Human Reliability
CESA Commission Errors Search and Assessment
CBDT Cause-Based Decision Tree
CC Common Conditions
CODA Conclusions from occurrences by descriptions of
actions
CPC Common Performance Conditions
CREAM Cognitive Reliability and Error Analysis Method
DT Decision Tree
EPRI-HRA Electric Power Research Institute - Human Reliability Analysis
ESD Event Sequence Diagram
FCM Fuzzy Cognitive Map
FLIM Failure Likelihood Index Methodology
FRAM Functional Resonance Accident Model
FT Fault Tree
GST General Systems Theory
HCR Human Cognitive Reliability
HEART Human Error Assessment and Reduction Technique
HEP Human Error Probability
HFACS Human Factors Analysis and Classication System
HF-PFMEA Human Factors-Process Failure Mode and Effect
Analysis
HFI Human Factors Integration
HOF Human and Organizational Factors
HORAAM Human and Organizational Reliability Analysis in Accident Management
HRA Human Reliability Analysis
HSC High Speed Craft
IDAC Information, Decision, and Action in Crew
IF Inuencing Factor
INTENT Not an acronym
IPSN Institute for Nuclear Safety and Protection
JHEDI Justied Human Error Data Information
K-HPES Korean-Human Performance Enhancement System
LISREL Linear Structure Relation
MERMOS Méthode d'Evaluation de la Réalisation des Missions Opérateur pour la Sûreté
MTS Maritime Transport System
NARA Nuclear Action Reliability Assessment
NPP Nuclear Power Plant
NRC Nuclear Regulatory Commission
ORE Operator Reliability Experiment
PC Paired comparisons
PRA Probabilistic Risk Assessment
PSA Probabilistic Safety Assessment
PSF Performance Shaping Factors
QRA Quantitative Risk Assessment
SADT Structured Analysis and Design Technique
SD System Dynamics
SHARP1 Systematic Human Action Reliability Procedure
(enhanced)
SLIM-MAUD Success Likelihood Index Methodology, Multi-
attribute Utility Decomposition
SMAS Safety Management Assessment System
SPAR-H Standardized Plant Analysis Risk-Human Reliability
Analysis
STAMP Systems-Theoretic Accident Model and Process
STEP Sequential Time Events Plotting
STPA System-Theoretic Process Analysis
THERP Technique for Human Error Rate Prediction
TRC Time Reliability Correlation
UMH University of Maryland Hybrid
Our aim in this paper is to make a critical evaluation of the deficiencies concerning human factors and to evaluate the potential of quantitative techniques that have been proposed in the last decade to model organizational factors, including the interaction among groups.

Three important techniques are discussed in this context: STAMP (Leveson, 2002), based on system theory, ACCI-MAP (Rasmussen and Svedung, 2000), and FRAM (Hollnagel, 2004, 2012), which aims at modeling the nonlinearities of socio-technical systems.
This paper is organized as follows: Section 2 presents a literature review on human reliability analysis and organizational factors. Section 3 addresses the candidate models for treating human reliability analysis and organizational factors in the context of probabilistic safety assessment. Section 4 presents a critique of non-systemic approaches for human reliability analysis in face of the current socio-technical system approach. Section 5 discusses how THERP (Swain and Guttman, 1983) takes organizational factors into account. Section 6 is dedicated to the same subject with respect to SPAR-H (Gertman et al., 2005). This same subject is discussed in Section 7 in the context of ATHEANA (NRC, 2000). FRAM (Hollnagel, 2004, 2012), which models the nonlinearities of socio-technical systems, is discussed in Section 8. STAMP (Leveson, 2002), an approach based on system theory, is the subject of Section 9. Features and pros and cons of the discussed HRA techniques are the subject of Section 10. Conclusions and recommendations are presented in Section 11.
2. Literature review
Papers on the use of HRA models that include organizational factors have been searched for. The period covered spans from 1999 to 2012. The domain of application, where mentioned, has been displayed. Also, qualitative and/or quantitative approaches have been discussed, and potential PRA/PSA applications have also been displayed.

Tables 1-5 display the findings. Table 1 displays the papers found in the period from 1999 to 2006. Table 2 presents those papers that appeared in 2007, while Table 3 is devoted to those papers published in 2008. Papers published in 2009 are the subject of Table 4. Finally, Table 5 is dedicated to the papers published in the 2010-2012 period.
The literature review just presented shows that there are a number of papers dealing with organizational factors, among which some consider quantification approaches in the light of probabilistic safety assessment and quantitative risk analysis (the chemical process industry's traditional name for the former). Papazoglou and Aneziris (1999), Papazoglou et al. (2003), Sträter and Bubb (1999), Øien (2001), Chang and Mosleh (2007), Díaz-Cabrera et al. (2007), Ren et al. (2008), Trucco et al. (2008), Mohaghegh and Mosleh (2009), Mohaghegh et al. (2009), Baumont et al. (2000), and Schönbeck et al. (2010) fall into this category.
Applications may fall into several plant categories [like the chemical process field addressed by Papazoglou and Aneziris (1999) and Papazoglou et al. (2003)], or the nuclear field, like Sträter and Bubb (1999) and their discussion on the lack of human reliability data. It is to be considered that different industry fields have different regulatory requirements, which implies that possible candidate methodologies have to be evaluated as to how they comply with the nuclear regulatory culture, for instance. In this sense, the quantitative approaches highlighted may be of interest because they eventually address probabilistic safety
Table 1
Literature review: 1999-2006.

Hee et al. (1999) - Domain: Offshore. Qualitative approach: SMAS to assess marine systems for human and organizational factors; case study conducted at a marine terminal in California. Quantitative approach: No. PRA/PSA usage: Yes.

Papazoglou and Aneziris (1999) (a) - Domain: Chemical. Qualitative approach: No. Quantitative approach: quantitative effects of organizational and management factors obtained by linking the results of a safety management audit with the basic events of a QRA; case study of an ammonia storage facility. PRA/PSA usage: Yes.

Sträter and Bubb (1999) - Domain: Nuclear (BWR). Qualitative approach: method developed to describe and analyze human interactions observed within events; approach implementation as a database is outlined; method proposed for the analysis of cognitive errors or organizational aspects. Quantitative approach: application to 165 BWR events, estimation of probabilities and their comparison with THERP data. PRA/PSA usage: Yes.

Baumont et al. (2000) - Domain: Nuclear. Qualitative approach: describes the method developed to introduce into PSA the Human and Organizational Reliability Analysis in Accident Management (HORAAM), to consider human and organizational reliability aspects during accident management. Quantitative approach: based on decision trees (DT); observation of crisis center exercises in order to identify the main influence factors (IFs) that affect human and organizational reliability; IFs used as headings in the decision tree method; expert judgment used to verify the IFs, to rank them, and to estimate the value of the aggregated factors to simplify the tree quantification. PRA/PSA usage: Yes.

Øien (2001) - Domain: Offshore. Qualitative approach: qualitative organizational model developed. Quantitative approach: proposal of risk indicators as a complement to QRA-based indicators; quantification methodology for assessing the impact of the organization on risk. PRA/PSA usage: Yes.

Lee et al. (2004) - Domain: Nuclear. Qualitative approach: operational events related to reactor trips in Korean reactors. Quantitative approach: use of conditional core damage frequency to consider risk information in prioritizing organizational factors. PRA/PSA usage: No.

Reiman et al. (2005) - Domain: Nuclear. Qualitative approach: assesses the organizational culture of Forsmark (Sweden) and Olkiluoto (Finland) NPP maintenance units; uses core task questionnaires and semi-structured interviews. Quantitative approach: No. PRA/PSA usage: No.

Carayon (2006) - Domain: Not specific. Qualitative approach: addresses the interaction among people who work across organizational, geographic, cultural, and temporal boundaries. Quantitative approach: No. PRA/PSA usage: No.

(a) Similar results to those found here may be found in Papazoglou et al. (2003).
Table 2
Literature review: 2007.

Bertolini (2007) - Domain: Food processing. Qualitative approach: fuzzy cognitive map (FCM) approach to explore the importance of human factors in plants. Quantitative approach: No. PRA/PSA usage: No.

Chang and Mosleh (2007) - Domain: Nuclear. Qualitative approach: Information, Decision, and Action in Crew Context (IDAC) operator response model; includes cognitive, emotional, and physical activities during the course of an accident. Quantitative approach: probabilistically predicts the response of an NPP control room operating crew in accident conditions; assesses the effects of performance influencing factors. PRA/PSA usage: Yes.

Díaz-Cabrera et al. (2007) - Domain: Miscellaneous. Qualitative approach: evaluation of a safety culture measuring instrument centered on relevant safety-related organizational values and practices; seven dimensions that reflect underlying safety measures are proposed; explores the four cultural orientations in the field of safety arising from the competing values framework: human relation or support, open system or innovation, internal process or rules, and rational goal or goal models. Quantitative approach: No. PRA/PSA usage: No.

Galán et al. (2007) - Domain: Nuclear. Qualitative approach: No. Quantitative approach: the ω-factor approach explicitly incorporates organizational factors into the probabilistic safety assessment of NPPs; Bayesian networks are used for quantification purposes. PRA/PSA usage: No.

Grote (2007) - Domain: Not specific. Qualitative approach: No. Quantitative approach: management of uncertainties as a challenge for organizations; discussion on minimizing uncertainties versus coping with uncertainties. PRA/PSA usage: No.

Reiman and Oedewald (2007) - Domain: Not specific. Qualitative approach: four statements presented and discussed: current models of safety management are largely based on either a rational or a non-contextual image of an organization; complex socio-technical systems are socially constructed and dynamical structures; in order to be able to assess complex socio-technical systems, an understanding of the organizational core task is required; effectiveness and safety depend on the cultural conception of the organizational core task. Quantitative approach: No. PRA/PSA usage: No.
Table 3
Literature review: 2008.

Bellamy et al. (2008) - Domain: Chemical. Qualitative approach: describes preparatory ground work for the development of a practical holistic model to help stakeholders understand how human factors, safety management systems and wider organizational issues fit together. Quantitative approach: No. PRA/PSA usage: No.

Li et al. (2008) - Domain: Aviation. Qualitative approach: analysis of 41 civil aviation accidents occurring to aircraft registered in the Republic of China in the period 1999-2006 using the HFACS framework; the authors claim that their research lent further support to Reason's organizational model of human error, which suggests that active failures are promoted by latent conditions in the organization. Quantitative approach: No. PRA/PSA usage: No.

Ren et al. (2008) - Domain: Offshore. Qualitative approach: proposes a methodology to model causal relationships; Reason's Swiss cheese model is used to form a generic offshore safety assessment framework, and a Bayesian network is tailored to fit into the framework to construct a causal relationship model; uses a five-level structure model to address latent failures. Quantitative approach: No. PRA/PSA usage: No.

Trucco et al. (2008) - Domain: Maritime. Qualitative approach: an approach to integrate human and organizational factors into risk analysis. Quantitative approach: a Bayesian belief network has been developed to model the maritime transport system (MTS) by taking into account its different actors; the BBN model has been used in a case study for the quantification of HOF in the risk analysis carried out at the preliminary design stage of a High Speed Craft (HSC); the study has focused on a collision-in-open-sea hazard, carried out by integrating a fault tree analysis of the technical elements with a BBN model of the influence of organizational functions and regulations.
assessments, like the discussions in Mohaghegh and Mosleh (2009) and Mohaghegh et al. (2009). However, eventual applications to the nuclear industry should be considered in advance due to specific regulatory issues, like eventual validation by regulatory bodies.

The models discussed in the literature presented are, in general, deficient for three reasons: it is assumed that the system can be represented by a linear model and a linear combination of events; they do not simulate the system complexity, i.e., the complex cause-effect way by which the various system parameters and variables relate to each other; and they assume that the system can be decomposed into individual parts that can in turn be analyzed individually and not as a whole. In the next section, we will detail these shortcomings and subsequently evaluate models that try to overcome them. BBN (Bayesian Belief Networks) and Fuzzy Cognitive Maps, which can model social, political and economic systems, can be constructed by building a network of causal relationships between functions or states of objects (nodes) that represent the system. In the first case, the network learns Bayesian probabilities, and in the second case, the coupling strengths of such causal relationships. In either case, these networks can be constructed as a neural network.

Non-linear learning laws can be used to train the neural network, that is, to set the Bayesian probabilities or connection strengths between network nodes. Nonlinear Hebbian learning laws have been developed in the case of Fuzzy Cognitive Maps,
Table 4
Literature review: 2009.

Grabowski et al. (2009) - Domain: Marine transportation. Qualitative approach: focus on the challenges of measuring performance variability in complex systems using the lens of human and organizational error modeling. Quantitative approach: a case study of human and organizational error analysis in a complex, large-scale system, marine transportation, is used to illustrate the impact of the data challenges on the risk assessment process. PRA/PSA usage: No.

Mohaghegh and Mosleh (2009) - Domain: Aviation. Qualitative approach: explores issues regarding measurement techniques in a quantitative safety analysis context. Quantitative approach: a multi-dimensional perspective is offered through combinations of different measurement methods and measurement bases; a Bayesian approach is proposed to operationalize the multi-dimensional measurements; focus on extending PRA modeling frameworks to include the effects of organizational factors as the fundamental causes of accidents and incidents. PRA/PSA usage: Yes.

Mohaghegh et al. (2009) - Domain: Aviation. Qualitative approach: discussion of the results of a research effort whose primary purpose is extending PRA modeling frameworks to include the effects of organizational factors as the deeper, more fundamental causes of accidents and incidents; the focus is on the choice of representational schemes and techniques; a hybrid approach based on System Dynamics (SD), Bayesian Belief Networks (BBN), Event Sequence Diagrams (ESD), and Fault Trees (FT) is proposed to demonstrate the flexibility of a hybrid approach that integrates deterministic and probabilistic modeling perspectives. Quantitative approach: No. PRA/PSA usage: No.

Tseng and Lee (2009) - Domain: Electronics. Qualitative approach: proposes an Analytical Hierarchical Process/Data Envelopment Analysis (AHP/DEA) model that helps in investigating the associated importance of human resource practices and organizational performance variables; the case study contains 5 human resource practice variables and 7 organizational performance variables through Linear Structure Relation (LISREL). Quantitative approach: No. PRA/PSA usage: No.
Table 5
Literature review in the period 2010-2012.

Schönbeck et al. (2010) - Domain: Process plants. Qualitative approach: a benchmarking study for comparing and evaluating HRA methods in assessing operator performance in simulator experiments is performed; approach to address human and organizational factors in the operational phase of safety instrumented systems; it shows which human and organizational factors are most in need of improvement and provides guidance for preventive or corrective action. Quantitative approach: No. PRA/PSA usage: No.

Waterson and Kolose (2010) - Domain: Defense. Qualitative approach: outlines a framework which aims to capture some of the social and organizational aspects of human factors integration (HFI); the framework was partly used to design a set of interview questions that were used in a case study of human factors. Quantitative approach: No. PRA/PSA usage: No.

Peng-cheng et al. (2012) - Qualitative approach: developed a fuzzy Bayesian network (BN) approach to improve the quantification of organizational influences in human reliability analyses; a conceptual causal framework was built to analyze the causal relationships between organizational factors and human reliability or human error. Quantitative approach: the probability inference model for HRA was built by combining the conceptual causal framework with the BN to implement causal and diagnostic inference. PRA/PSA usage: Yes.
Stylios and Groumpos (1999), Papageorgiou et al. (2003), Song et al.
(2011), and Kharratzadeh and Schultz (2013).
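As an illustration of the kind of training just mentioned, the sketch below builds a tiny fuzzy cognitive map and updates its causal weights with an Oja-style nonlinear Hebbian rule. The concepts, weights and learning constants are hypothetical, and the update rule is a simplified stand-in for the published formulations cited above, not a reproduction of them.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(a, w):
    # One inference step: each concept aggregates the weighted activations
    # of the concepts that influence it (w[j, i] = influence of j on i).
    return sigmoid(w.T @ a)

def nhl_update(a, w, eta=0.05, gamma=0.98):
    # Oja-style nonlinear Hebbian update of the existing causal weights:
    # co-activation strengthens a link, the decay term keeps it bounded.
    w_new = w.copy()
    n = len(a)
    for j in range(n):
        for i in range(n):
            if i != j and w[j, i] != 0.0:
                w_new[j, i] = gamma * w[j, i] + eta * a[i] * (a[j] - a[i] * w[j, i])
    return np.clip(w_new, -1.0, 1.0)

# Hypothetical three-concept map:
# training quality -> procedure adherence -> unsafe action (negative link).
w = np.array([[0.0, 0.6,  0.0],
              [0.0, 0.0, -0.7],
              [0.0, 0.0,  0.0]])
a = np.array([0.8, 0.5, 0.5])      # initial concept activations in [0, 1]
for _ in range(20):
    a = fcm_step(a, w)
    w = nhl_update(a, w)
print(a, w, sep="\n")
```

In a Bayesian belief network, the analogous step would be to learn the conditional probability tables from event data instead of the connection strengths.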
3. On HRA models to be discussed
In order to justify the HRA techniques discussed next, a search was made of three critical HRA reviews: the discussion on good practices of NUREG-1842 (NRC, 2006), a review of HRA techniques prepared for the US National Aeronautics and Space Administration (NASA, 2006), and, finally, a review performed by the British Health and Safety Executive (HSE, 2009).

Table 6 displays 24 techniques discussed in the references mentioned in the last paragraph. We have adopted the classification used in HSE (2009). As new tools are emerging based on earlier first generation tools, such as HEART (Williams, 1988), they are being referred to as third generation techniques, as is the case with NARA (Kirwan et al., 2005). Note that some techniques are classified solely on expert opinion grounds and some of them do not fall in any of the categories considered.
The criterion we have followed is basically to consider techniques that have been recommended by the NRC and by at least one of the remaining reviews. This led initially to the consideration of THERP, ASEP, SPAR-H, ATHEANA, SLIM-MAUD, and CBDT. However, we have decided to discard those techniques not classified in one of the generations (thus, SLIM-MAUD and CBDT have been discarded) and also ASEP, because it is an adaptation of THERP. On the other hand, as we are to discuss systemic approaches [at this point, we refer to FRAM, Hollnagel (2004, 2012)] for human reliability, we have decided to consider CREAM as well, although it has not been reviewed by the NRC (2006).
4. A critique of non-systemic models
NRC has published two reports related to good practices in the field of Human Reliability Analysis (HRA). The first of these, NRC (2005), establishes the criteria of good practices and the techniques and methodologies HRA should comply with. The second one, NRC (2006), evaluates the main HRA techniques against the criteria established in NRC (2005).

These reports recognize the fact that Performance Shaping Factors (PSFs) are important and help HRA analysts identify and understand some influences that PSFs exert on the actions of the tasks allocated to human beings. They also recognize the fact that databases on plant operational events are useful to get data on the influences that the operational context (including organizational factors) exerts upon unsafe actions involving human failures, and to establish some quantitative basis to quantify human error probabilities (HEPs) as a function of organizational factors.
However, the above cited NUREGs do not have any criteria or specific guides for the implementation of organizational factors in HRA, leaving this task as a suggestion for future research. In spite of recognizing certain efforts in this sense, as for example Davoudian et al. (1994), NRC also admits that the state of the art in how to identify and understand the important organizational influences, and how to use this information for determining HEPs, is not yet adequate.

The origin of the deficiency previously mentioned lies in the model type that has been adopted so far for Probabilistic Safety Assessments (PSA) and the Analysis of Operational Events (AOE), including several types of incidents and accidents. Let us start then to discuss the main characteristics of these models and how they can be altered to establish a paradigm to treat organizational factors in an appropriate way; before that, however, some comments on system theory are due.
A model based on systems theory studies the mechanisms that regulate the system by feedback control to keep it in a condition of stability. Therefore, in this theory, systems are not designed to be static in time, but to react to themselves and to the conditions of their environment, i.e., they are constantly trying to adapt. Accidents occur, then, as a result of the system's failure to adapt, i.e., a failure of the control systems. The breaking of restrictions imposed by the control system to maintain the stability of the system is revealed as the main cause of failures or accidents. This concept can be
Table 6
Candidate HRA techniques reviewed by NRC, NASA and HSE. For each technique, the entries indicate whether it is reviewed in NUREG-1842 (NRC, 2006), in the NASA review (NASA, 2006), and in the HSE review (HSE, 2009).

1st generation:
THERP - Swain and Guttman (1983): NRC Yes, NASA Yes, HSE Yes.
ASEP - Swain (1987): NRC Yes, NASA Yes, HSE Yes.
HEART - Williams (1988): NRC No, NASA Yes, HSE Yes.
SPAR-H - Gertman et al. (2005): NRC Yes, NASA Yes, HSE Yes.
JHEDI - Kirwan (1990): NRC No, NASA No, HSE Yes.
INTENT - Gertman et al. (1992): NRC No, NASA No, HSE Yes.

2nd generation:
ATHEANA - NRC (2000): NRC Yes, NASA Yes, HSE Yes.
CREAM - Hollnagel (1998): NRC No, NASA Yes, HSE Yes.
CAHR - Sträter (2005): NRC No, NASA Yes, HSE Yes.
CESA - Reer et al. (2004): NRC No, NASA Yes, HSE Yes.
CODA - Reer (1997): NRC No, NASA No, HSE Yes.
MERMOS - Bieder et al. (1998): NRC No, NASA No, HSE Yes.

3rd generation:
NARA - Kirwan et al. (2005): NRC No, NASA Yes, HSE Yes.

Expert opinion:
SLIM-MAUD - NRC (1985): NRC Yes, NASA Yes, HSE Yes.
APJ - Seaver and Stillwell (1983): NRC No, NASA No, HSE Yes.
PC - Seaver and Stillwell (1983): NRC No, NASA No, HSE Yes.

Unclassified:
ORE - Parry et al. (1992): NRC Yes, NASA No, HSE No.
CBDT - Parry et al. (1992): NRC Yes, NASA No, HSE Yes.
FLIM - Chien et al. (1988): NRC Yes, NASA No, HSE No.
SHARP1 - Wakefield et al. (1992): NRC Yes, NASA No, HSE No.
EPRI-HRA - Grobbelaar et al. (2003): NRC Yes, NASA No, HSE No.
UMH - Shen and Mosleh (1996): NRC No, NASA Yes, HSE Yes.
TRC - Dougherty and Fragola (1987): NRC No, NASA Yes, HSE No.
HF-PFMEA - Broughton et al. (1999): NRC No, NASA Yes, HSE No.
extended to social, political and economic systems that also have
their own restrictions or rules of operation, Rasmussen and
Svedung (2000), Leveson (2004a), Qureshi (2008), Dekker et al.
(2011).
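The control-theoretic reading of an accident sketched above can be illustrated with a toy simulation (hypothetical process, numbers and gains; not a model of any real plant): a feedback function damps the natural variability of a process variable, and when that control action erodes, the safety constraint is eventually broken even though no individual component has failed.

```python
import random

SETPOINT, SAFETY_LIMIT = 50.0, 100.0

def simulate(control_gain, steps=500, seed=1):
    # A process variable oscillates around a setpoint under environmental
    # disturbances; a feedback correction pulls it back.  Weak control
    # (small gain) lets the variability grow until the safety constraint
    # imposed on the process is broken.
    random.seed(seed)
    x = SETPOINT
    for t in range(steps):
        disturbance = random.gauss(0.0, 5.0)               # environment variability
        x += disturbance - control_gain * (x - SETPOINT)   # feedback correction
        if x > SAFETY_LIMIT:
            return t        # step at which the constraint was violated
    return None             # constraint held for the whole run

print(simulate(control_gain=0.8))    # adequate control: normally no violation
print(simulate(control_gain=0.02))   # eroded control: violation becomes likely
```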
In the systems theory approach, the system has several states close to its stable operation limit, which are not visible at first, as in the above case of digital systems. In this limit state, systems may become catastrophically unstable, even when small oscillations occur in the parameters that influence the behavior of their physical components or human beings. This state is not captured by the classical approaches, Rasmussen and Svedung (2000), Qureshi (2008).
In the discussion above, we can see that complexity presents itself in four different ways: interactive complexity (arising from the interaction between components), nonlinear complexity (cause and effect not related through component failure modes), dynamic complexity (adaptive system behavior, reaching limit states on the boundary of system stability) and decomposition complexity (the system is decomposed not in its structure, but in the functions it performs, which appear in relationships between components), Leveson (2004a), Dekker et al. (2011).
The natural definition of system also arises from the above discussion: a set of components that interact with each other and have relationships that define the functions of one or more processes occurring within a well-defined boundary, as well as functions of processes occurring in relationships with other systems, Laszlo and Krippner (1998).
The system can then be divisible, structurally speaking, but functionally it is an indivisible entity with emergent properties. To better understand this issue, one can observe biological systems. Life appears in an integrated manner. An organ of the human body will not function if separated from the body, i.e., it loses its emergent properties. For example, a hand separated from the body cannot write, Laszlo and Krippner (1998).
In Reason (1997), Leveson (2002), and Hollnagel (2004) one can find the comparison between the traditional (non-systemic) approach and the systemic approach to risk assessment. Below we describe the main differences between them, as pointed out by those authors. There are three types of accident models and associated risk analyses: sequential accident models, epidemiological accident models and systemic accident models.
The sequential models of accidents are those used in most HRA and PSA techniques. These models are based on the hypothesis that accidents evolve in a pre-defined sequence of events that involve failures of systems and components and human failures. It is part of the initial hypothesis that the global system being modeled can be decomposed into individual parts, in other words, systems, subsystems and components, described in the bottom-up direction (from the lower, physical hierarchical level to the higher, abstract hierarchical level).
Risks and failures are, therefore, analyzed in relation to events and individual components, with their associated probabilities. Human failures are treated just as component failures. The outputs of the global system (the effects in terms of catastrophic failures, e.g., core meltdown) are proportional to the inputs (causes in terms of individual failures), which are predictable for those who know the design of the subsystems and components; therefore, these systems are linear. The risks are represented by a linear combination of failures and malfunctions, as observed, for example, in event trees and fault trees. Therefore, accidents are avoided by the identification and elimination of the possible causes. The safety level can be assured by improving the response of the organization responsible for the plant in reacting to the triggered accidents (robustness feature).
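A minimal sketch of this linear combination (hypothetical basic events, cut sets and probabilities, not taken from any real PSA) shows how, in this view, a human error probability enters the model exactly like a component failure probability:

```python
import math

# Hypothetical basic events; the human error probability is treated
# exactly like a component failure probability.
basic_events = {
    "pump_fails": 1.0e-3,
    "valve_fails": 5.0e-4,
    "operator_omits_step": 3.0e-3,
}

# Hypothetical minimal cut sets of a fault tree for the top event.
minimal_cut_sets = [
    ("pump_fails", "operator_omits_step"),
    ("valve_fails",),
]

def top_event_probability(cut_sets, p):
    # Rare-event approximation: sum, over the cut sets, of the product of
    # their (assumed independent) basic-event probabilities.
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

print(top_event_probability(minimal_cut_sets, basic_events))  # about 5.0e-4
```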
The hypotheses of decomposition, linearity and simple combinations of individual failures work well strictly for technological systems, because system and component designers know how these systems and components can be decomposed into individual parts and how they work. Therefore, they can make reliable inferences on the failure modes of such systems and components.
The epidemiological models of accidents are equally based on the linearity and decomposition hypotheses, but they have more complex characteristics, because they are based on more complex combinations of failures and, mainly, on safety barrier weaknesses. In this case, simple failures of systems and components, or human failures, combined with latent failures (design, maintenance, procedures, management, etc.), contribute to the degradation of safety barriers, thus affecting the defense-in-depth concept. The global system should be modeled from top to bottom, from the hierarchically higher safety objectives and functions down to the lower functional levels. Risks and failures can, therefore, be described in relation to the functional behavior of the whole system. Accidents are avoided by reinforcing safety barriers and thus the defense-in-depth concept. Safety is assured by monitoring these barriers and defenses, through safety action indicators.
On the other hand, socio-technical systems are, by their very nature, complex, non-linear and non-decomposable. These three features appear naturally from a unique outstanding characteristic of these systems: they are emergent. Being emergent means that the complex relationships among inputs (causes) and outputs (effects) make unexpected and disproportional consequences emerge, which leads one to the resonance concept; in other words, certain combinations of the variability of the system's functional actions as a whole can reach the threshold allowed for the variability of a system function. This is the approach of the systemic models of accidents.
The variability of action is an immediate consequence of the nature of socio-technical systems, which survive in social, economic and political environments. Those environments determine the variability index, because they are composed of human beings, with a highly adaptive cognitive and emotional character and with their own cognitive mechanisms, and they are not of a technical nature as found in common systems. Therefore, the approach based on functional variability behaves quite differently from that of systems composed of components only.
In this case, the risk analysis associated with these systems should leave aside the individual failures of systems and components to simulate the system dynamics as a whole, seeking combinations of individual variability of the system functions that can lead to undesirable functional behaviors after the propagation of these combinations through the whole system. This means that an individual part of the system is linked to the system as a whole. However, the risk analysis must pay attention to the dynamics of the whole system and not to the action or behavior of the individual part.
In this approach, accidents result from unexpected combinations (resonances) of the variability of action or behavior. Therefore, accidents can be avoided by monitoring and reducing variability. Safety is assured by the constant ability to anticipate future events.
Before detailing these systemic models of accidents, we will describe how the available first and second generation HRA techniques treat organizational factors.
5. Organizational factors in THERP (Swain and Guttman,
1983)
In the first generation techniques of HRA, like THERP, organizational factors are taken into consideration by means of performance shaping factors (PSFs), which are multiplied by the basic
human error probabilities (BHEPs), increasing or decreasing the
baseline values.
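In schematic form, the multiplicative treatment just described can be written as (a simplified rendering for illustration; THERP itself works through task- and PSF-specific tables rather than a single closed formula):

$$\mathrm{HEP} = \mathrm{BHEP} \times \prod_{k} F_{k},$$

where $F_{k}$ is the multiplier associated with the $k$-th applicable PSF level (e.g., the stress or experience level), with $F_{k} > 1$ degrading and $F_{k} < 1$ improving the nominal value.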
NRC (2005) and NRC (2006) establish fifteen PSFs that are important for HRA. These PSFs are described in Table 7, together with a classification scheme that highlights those of an organizational nature.
The cognitive characteristics can be included in the error mechanisms of second generation HRA techniques. The group factors can be interpreted as human factors of the second level (between human-system interaction at the first level and organizational factors at the third level) and can be included in the organizational factors related to management and supervision activities. Man-system interface and ergonomic design characteristics are plant-specific features, but can be interpreted as the poor result of design processes having deficient organizational factors as root causes. The characteristics of the design process are not modeled in THERP. NRC (2006) contains several critiques of the use of PSFs in THERP. These critiques are reproduced below.
PSFs are listed and discussed in Chapter 3 of THERP, which also presents their relevance; however, there is no detailed orientation on how to quantitatively evaluate each PSF.

THERP provides explicit factors for stress levels and levels of experience only. For the other, qualitatively analyzed PSFs, it does not display explicit factors.
Besides the PSFs with explicit factors (stress and experience), there are three additional groups of PSFs that may modify the nominal value of a HEP. However, that modification occurs in a subjective way: a) PSFs already included in the nominal value of a HEP that are listed in the THERP tables (for example, whether the written procedure to be followed is long or short); b) PSFs specified as rules that modify the nominal value of a HEP within its uncertainty limits (for example, using the upper limit if the operator involved in an action is not well trained); c) PSFs for which there are no specific orientations with relation to the quantitative evaluation (factors).

The quantitative evaluation of the PSFs of the third group depends on the experience and judgment of experts on human factors or of human reliability analysts.
The lack of orientation in the quantitative evaluation of PSFs can become a source of analyst or specialist variability when THERP is used. This feature can distort the results obtained in HRA and consequently in Probabilistic Safety Assessments (PSA).

The quantitative evaluation of some PSFs can induce the analyst simply to assume that those are the most important PSFs to be treated, to the detriment of the remaining ones. Therefore, an inadequate quantification of HEPs can happen, because there can be other important PSFs not considered by the analyst.
In the quantitative evaluation, it is also necessary to point out that THERP, due to its characteristics, does not treat organizational factors, so that possible latent PSFs that can influence HEPs are not treated in the analysis.

Besides the critiques described above, THERP approaches the plant, and consequently the organization in which it is inserted, linearly; therefore, THERP does not consider the plant socio-technical characteristics that should be taken into account in the qualitative and quantitative analysis of PSFs.
6. Organizational factors in SPAR-H (Gertman et al., 2005)
SPAR-H evaluates the following eight PSFs: 1) available time; 2) stress; 3) complexity; 4) experience/training; 5) procedures; 6) ergonomics; 7) fitness for duty; and 8) work process. Among these, only 4, 5, 7, and 8 can be considered organizational factors. The others can be considered cognitive characteristics (1, 2, and 3) or ergonomics factors (6). SPAR-H suggests some metrics to calculate specific PSFs, like complexity, time pressure and available time. There are no specific suggestions, however, for the remaining ones, although research in this area is mentioned through the technical literature. NRC (2005, 2006) point out the following deficiencies of the SPAR-H methodology related to performance shaping factors:
The approaches supplied by SPAR-H on how to evaluate the influences of a specific PSF are generally useful, but they can be insufficient to analyze and understand the conditions of a scenario. Those conditions affect the task of attributing levels (to generate factors) for each PSF, especially if analysts without enough knowledge of HRA and human factors perform the measurement;
The detailed analysis of PSFs presents inadequate solutions, because a nominal level (fixed factor) is almost always
Table 7
Main Performance Shaping Factors according to NRC good practices, NRC (2005, 2006). Each PSF is classified as organizational, cognitive, man-system interface, design ergonomic, or group interaction in nature.

1. Quality of training - Organizational.
2. Quality of procedures and administrative controls - Organizational.
3. Availability and clarity of instrumentation - Man-system interface.
4. Time available and time required to complete the act, including the impact of concurrent activities - Cognitive.
5. Complexity of the required diagnosis and response - Cognitive.
6. Workload, time pressure, and stress - Cognitive.
7. Crew dynamics and characteristics (a) - Group interaction factors.
8. Available staffing and resources - Organizational.
9. Ergonomic quality of the human-system interface - Design ergonomic factors.
10. Environmental factors - Design ergonomic factors.
11. Accessibility and operability of the equipment to be manipulated - Design ergonomic factors.
12. Need for special tools (b) - Design ergonomic factors.
13. Communication (strategy and coordination) and whether one can be easily heard - Design ergonomic factors and group interaction factors.
14. Special fitness needs - Cognitive.
15. Accident sequence diversions/deviations (c) - Special characteristics.

(a) E.g., degree of independence among individuals, operator biases/rules, use of status checks, level of aggressiveness in implementing procedures.
(b) E.g., keys, ladder, hoses, clothing.
(c) E.g., extraneous alarms, outside discussions.
attributed to PSFs, due to the way they are defined, limiting their usefulness for identifying different situations. This approach has as an effect a generalization that assists some of the applications directed to SPAR-H, but it can be insufficient for detailed evaluations of plants or specific scenarios;
In the analysis of complexity, SPAR-H considers a multiplicity of factors (PSFs), directing the analyst to technical literature that makes the evaluation of the factors possible. This approach does not seem appropriate for a simplified and normalized HRA method, because it may go beyond the capacity of an analyst who, for example, does not have enough knowledge of psychology. However, the discussion is healthy and can help the decision-making process in what concerns complexity;
The orientation and the solution to measure complexity are
practical, useful and important for a generic analysis. However,
they can be inadequate for a detailed plant or scenario analysis;
In the SPAR-H analysis, the six months of training or experience represent a parameter that is not applicable in many situations, because the plant operation team may have a member with less than six months of training or experience. The level, frequency and type of training the team of a plant receives about the scenarios and the actions related to them are much more important for the success of the action, and these subjects are not addressed in the PSFs;
The analysis of the generic PSF named fitness for duty does not seem useful, as there is a nominal value for almost all cases in commercial nuclear plants. It is worth pointing out that PSFs are very important in the retrospective analysis of current events.
A good feature of the SPAR-H methodology is the interaction between PSFs, including organizational factors, through the qualitative interaction matrix, which is plant specific. SPAR-H does not suggest a quantification technique, but only a qualitative indication of how PSFs can be realistically and coherently quantified.
The approach of SPAR-H is to use plant and scenario-specific information to evaluate the effects of PSFs. However, in the calculation of HEPs, each PSF is treated independently of the remaining ones; in other words, the analysts make judgments separately on each PSF. Nevertheless, SPAR-H supplies a good discussion of the potential interaction effects among PSFs, due to the specificities of the event and scenario to be analyzed. Besides, it is certainly possible, and in many cases probable, that other factors (for example, team characteristics, procedure strategies) can influence the team action for a given scenario.
So, unless the analyst attempts (independently, without explicit orientation) to take such effects into account (in other words, to consider interactions), it is possible that the results do not reflect important plant and scenario specific characteristics. In other words, if analysts do not try to incorporate the influences of the potential interaction effects among PSFs, and do not include the influences of other important PSFs when necessary, taking into account the accident scenario, there is the possibility that the generic analysis does not supply a realistic evaluation of a specific accident or plant condition. Therefore, this is a limitation of SPAR-H.
If a generic high-level analysis is considered appropriate for a specific application (for example, the analysis of Accident Sequence Precursors) (NRC, 1992; Modarres et al., 1996), or if, after some analyses, the independent evaluation of all PSFs is considered appropriate for the event and for the scenario that will be examined, a simple application of SPAR-H can be appropriate. Otherwise, the results might contain errors and an important potential plant problem (for example, procedure limitations) might not be identified.
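The independent, multiplicative treatment of PSFs discussed in this section can be sketched as follows. The multiplier values below are placeholders rather than the SPAR-H tables; the nominal HEPs and the adjustment applied when several PSFs are negative follow the form given by Gertman et al. (2005). The point of the sketch is that each PSF enters as a separate factor, with no interaction among them.

```python
# Nominal HEPs for the two SPAR-H task types (Gertman et al., 2005).
NOMINAL_HEP = {"diagnosis": 1.0e-2, "action": 1.0e-3}

def spar_h_hep(task_type, psf_multipliers):
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers.values():
        composite *= m                      # each PSF judged independently
    negative = sum(1 for m in psf_multipliers.values() if m > 1.0)
    if negative >= 3:
        # Adjustment that keeps the result below 1.0 when several PSFs
        # degrade performance at the same time.
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# Hypothetical multipliers for four of the eight PSFs (the rest nominal, 1.0).
print(spar_h_hep("action", {"experience": 3.0, "procedures": 5.0,
                            "stress": 2.0, "ergonomics": 1.0}))
```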
7. Organizational factors in ATHEANA (NRC, 2000)
ATHEANA evaluates sixteen PSFs, shown in Table 8. The PSFs treated in ATHEANA should undergo an analysis with the purpose of covering subjects from plant design to plant organization. This classification aims precisely at breaking the linearity of classifying PSFs as mere multiplying factors; this approach needs to be enlarged. Table 8 sets a link between PSFs, which can be considered as effects, and their possible characteristics or generating factors.

It can be observed that the organizational factors described in Table 8 involve critical points like training, procedures, administrative controls and human resources (personnel). Considering training and procedures only, one is already facing a pair of decisive factors in plant operation and safety. In spite of not being
Table 8
Classification of performance shaping factors in ATHEANA (NRC, 2007).

1. Quality of training/experience - Organizational factors: possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
2. Quality of procedures/administrative controls - Organizational factors: possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
3. Availability and clarity of instrumentation - Man-machine interface design features.
4. Time available and time required to complete the act, including the impact of concurrent activities - Cognitive characteristics.
5. Complexity of the required diagnosis and response - Cognitive characteristics.
6. Workload/time pressure/stress - Cognitive characteristics.
7. Crew dynamics and characteristics (e.g., degree of independence among individuals, operator biases/rules) - Group interaction factors.
8. Use of status checks, level of aggressiveness in implementing the procedures - Group interaction factors.
9. Available staffing/resources - Organizational factors: possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
10. Ergonomic quality of the human-system interface - Design ergonomics factors.
11. Environmental factors - Design ergonomics factors.
12. Accessibility and operability of the equipment to be manipulated - Design ergonomics factors.
13. The need for special tools (e.g., keys, ladder, hoses, clothing) - Design ergonomics factors.
14. Communications (strategy and coordination) and whether one can be easily heard - Design ergonomics factors or group interaction factors.
15. Special fitness needs - Cognitive characteristics.
16. Accident sequence diversions/deviations (e.g., extraneous alarms, outside discussions) - Special characteristics.
characterized as organizational factors, the characteristics of the other factors described in Table 8 can be effects whose causes may be due to possible organizational fragilities.

For example, factors related to ergonomic design can be the result of incorrect options taken during the design phase or even of economic issues. Incorrect options and economic aspects are characterized as causes of organizational decisions. These observations have the sole intention of enlarging the understanding of human reliability experts.
The sixteen PSFs described in Table 8 are discussed in ATHEANA. A deeper study of PSFs is found in NRC (2007), which supplies additional information on expert judgment that allows for developing a PSF quantification process. NRC (2000) does not focus on this subject. Also, Appendix B of NRC (2005) presents a discussion of PSFs that is consistent with ATHEANA's.
ATHEANA uses the context to identify PSFs and evaluates the most important plant conditions that influence the human action being analyzed and can trigger PSFs. Therefore, although ATHEANA lists several important PSFs for most HEPs, experts can present other PSFs (positive or negative) that are judged to influence HEPs. Their estimation is performed by taking into account the plant conditions. So, instead of using a group of predetermined PSFs, they are obtained starting from the plant context evolution, which is analyzed by experts.
In their judgment to estimate HEPs, experts use the general information of the context to obtain the best decision for each HEP. Due to the way PSFs are identified and considered in ATHEANA, measuring or evaluating the degree of influence of each PSF should simply be avoided. The reason is that, as previously seen, ATHEANA uses the context evaluation to decide which PSFs are important or triggered by the context (for example, the context is complex because the procedure does not satisfactorily solve a specific situation) and, therefore, there are no pre-established multipliers for the degree of influence of each PSF. As in many other HRA methods, the task of deciding the way PSFs affect the estimate of HEPs remains with the experts.

Due to the importance of an appropriate context study, which includes the important PSFs for the action that will be evaluated, it is important that experts using ATHEANA identify the specific plant information and PSA scenarios to define the context and its influence on the human actions under analysis. Second generation HRA techniques, like NRC (2000) and Hollnagel (1998), use a cognitive model of the human being and the error mechanisms that influence human failures or unsafe actions. In these two approaches, there is the proposal of a prospective analysis that links PSFs with error mechanisms and error types or modes (unsafe actions). This analysis has qualitative features, and there is, in the proposed quantification, the idea of taking into account the conditional probabilities along the chain of PSFs (together with the operational context), error mechanisms, and unsafe actions (error modes or error types), Alvarenga and Fonseca (2009).
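One schematic way to write this linkage (notation introduced here for illustration only; not a formula taken from NRC (2000) or Hollnagel (1998)) is

$$P(\mathrm{UA} \mid C) \approx \sum_{m} P(\mathrm{UA} \mid \mathrm{EM}_{m}, C)\, P(\mathrm{EM}_{m} \mid \mathrm{PSF}(C), C),$$

where $C$ denotes the operational context, $\mathrm{PSF}(C)$ the performance shaping factors triggered by that context, $\mathrm{EM}_{m}$ the cognitive error mechanisms, and $\mathrm{UA}$ the unsafe action (error mode or error type).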
It is important to stress the fact that ATHEANA does not use PSFs as many other HRA techniques do (NRC, 2006), as has been discussed in this section. There is no a priori list of PSFs to be considered by the analyst, with the HEP then adjusted by given multiplying factors. ATHEANA approaches the context developing process to identify which PSFs and plant conditions are relevant to the human action under consideration and thus to identify which PSFs could be triggered.

We must observe that in all first and second generation techniques the use of PSFs to describe factors that influence basic human error probabilities (BHEPs) is accomplished through a linear approach, because BHEPs are to be multiplied by these factors. Any organizational factors quantified in this way cannot take into account the nonlinearities of socio-technical systems.
8. FRAM (Functional resonance accident model) (Hollnagel,
2004, 2012)
In Section 1 we identified the concept of emergence as the main foundation of systemic models of accidents. This concept was introduced by the English philosopher of science Lewes (2005), who proposed a distinction between resultant and emergent phenomena, in which resultant phenomena can be predicted from their constituent parts and emergent phenomena cannot.
According to systemic models, failures emerge from the normal variability of the action of the functions that compose the system. The concept of functional variability leads to the concept of functional resonance, which in turn can be derived from stochastic resonance. The latter appears when random noise is superposed on a weak signal at the output of the system or of one of its component functions. The mixture of the two can reach or surpass the detection threshold of this signal, characterizing a stochastic resonance. Most of the time, in a stable condition, the variation or oscillation of the signal around a reference value remains within a variation range with well-defined boundaries, characterized by limiting values. The variation of each signal depends on the interaction with the other signals that exist in the system. For a specific signal, the other signals constitute the environment responsible for the noise, which represents the variability of this environment. Consequently, functional resonance can be considered as the detectable signal that appears out of the non-deliberate interaction of the weak variability of many signals interacting with each other.
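A small numerical illustration of this threshold picture (hypothetical signal, noise level and threshold): on its own the weak signal never crosses the detection threshold, but superposed on the variability contributed by the rest of the system it occasionally does.

```python
import math
import random

THRESHOLD = 1.0

def weak_signal(t):
    # A weak periodic signal whose amplitude alone stays below the threshold.
    return 0.6 * math.sin(0.1 * t)

random.seed(0)
crossings = sum(
    1 for t in range(1000)
    if weak_signal(t) + random.gauss(0.0, 0.3) > THRESHOLD
)
print(crossings)   # nonzero: the threshold is exceeded only by the combination
```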
Systemic models also have roots in chaos theory (Lorentz, 2003), which describes complex and dynamic systems. This kind of system can present unstable behavior as a function of a temporary, random variation of its parameters, even when the system is ruled by physical laws. The consequence of this instability is that these systems can present a great sensitivity to disturbances (noise) and errors, which can be amplified by the system nonlinearities and the great number of interactions among the system components.
Chaos theory, however, has limited practical value as a model of accidents, according to Hollnagel (2004, 2012), because the equations of the noise processes have to be added to the system equations. These, in turn, are formulated starting from the physical laws describing the system. However, there are no deterministic equations and general physical laws for socio-technical systems, although an effort in this direction may be found in Sterman (2000), following earlier discussions by Bertalanffy (1971) on general system theory and Hetrick (1971), who briefly discusses topics in nonlinear mechanics and their application to nuclear reactor systems from the point of view of system stability. On the other hand, the management concept, a typically organizational factor, represents a control function. Several formulations based on control systems (Rasmussen et al., 1994) have been proposed to model it. Rasmussen (1997) proposed a systemic model of socio-technical systems based on control systems.
Rasmussen et al. (1994) present the basic model of control system theory for socio-technical systems. The model is composed of inputs, outputs, boundary conditions and feedback control. Another representation, the Structured Analysis and Design Technique (SADT), has been used for defining systems, analyzing software requirements and designing systems and software (Pressman, 1992). SADT consists of procedures that allow the analyst to decompose the software (or system) into several functions. An application example for modeling nuclear power station systems has been described by Rasmussen and Petersen (1999). The diagrammatic notation of SADT consists of function blocks, each one with three types of input parameters (inputs, controls and resources) and one type of output parameter (outputs). Hollnagel, in his FRAM model (Hollnagel, 2004, 2012), extended this basic model by adding two more parameters: available time and preconditions.
In FRAM, the Functional Resonance Analysis is performed in four
steps (Hollnagel, 2004):
1. Identify essential system functions through the six basic pa-
rameters mentioned below;
2. Characterize the potential variability (positive or negative) of
each function as a function of the context;
3. Define functional resonance based on possible dependencies or couplings between these functions;
4. Identify barriers or damping factors to absorb the variability and
specify required performance monitoring.
In the first step, the six basic parameters are described as follows (Hollnagel, 2004, 2012); a minimal data-structure sketch is given after the list:
1. Input: what is used or transformed to produce the output;
2. Output: what is produced by a specic function;
3. Control: what supervises or adjusts the function;
4. Resource: what is needed or consumed by the function to pro-
cess the inputs;
5. Precondition: system conditions that must be fulfilled before the function can be carried out;
6. Time available: it can be a constraint and can also be considered a special kind of resource.
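A function characterized by these six aspects can be represented directly as a small data structure. In the sketch below the function names and aspect contents are merely illustrative; it also shows how couplings are recovered by matching the output of one function against the input, precondition, resource, control and time aspects of another, which is the structural information used later in the resonance analysis.

```python
# Minimal representation of FRAM functions and their couplings. Function names and
# aspect contents are illustrative only; they do not come from a real analysis.
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    name: str
    kind: str                      # "M" (human), "T" (technological) or "O" (organizational)
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    controls: list = field(default_factory=list)
    time: list = field(default_factory=list)

monitor = FramFunction("Monitor safety functions", "M",
                       inputs=["plant parameters"],
                       outputs=["deviation report"],
                       controls=["monitoring procedure"])
actuate = FramFunction("Actuate safety functions", "T",
                       inputs=["deviation report"],      # coupled to monitor's output
                       outputs=["safety function state"],
                       preconditions=["system available"])

# A coupling exists wherever one function's output names another function's
# input, precondition, resource, control or time aspect.
def couplings(upstream, downstream):
    aspects = {"input": downstream.inputs, "precondition": downstream.preconditions,
               "resource": downstream.resources, "control": downstream.controls,
               "time": downstream.time}
    return [(out, kind) for out in upstream.outputs
            for kind, items in aspects.items() if out in items]

print(couplings(monitor, actuate))   # [('deviation report', 'input')]
```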
In step 2 above, it becomes necessary to define the context in order to characterize the variability. There are two levels of context. The first order context is defined internally by the complements of the system functions. The second order context is defined by the system environment. For a given function, the variability of the rest of the system defines its environment. The complement of the functions is supplied by the Common Conditions (CCs). Each CC affects one or more types or categories of functions, depending on the nature of the function. There are three categories of functions, according to their nature: human (M), technological (T) and organizational (O).
Originally this role was played by performance shaping factors, but in techniques like ATHEANA (NRC, 2000) it evolved into the concept of error forcing conditions. In CREAM (Hollnagel, 1998), PSFs appear under the name of Common Performance Conditions (CPCs). All the CPCs in CREAM influence all task steps as a whole, differently from THERP, where specific PSFs influence specific task steps.
An extension of the CREAM Common Performance Conditions has been elaborated for FRAM. In this context, they are called Common Conditions (CCs). There are eleven CCs in FRAM (Hollnagel, 2004, 2012) and each one influences certain types of functions (M, O, or T), as displayed in Table 9.
In order to quantify the effect of CCs on function variability, a CC gradation becomes necessary, even though the CCs are qualitative. Hollnagel's proposal for this gradation is the following (Hollnagel, 2004, 2012):
Stable or variable but adequate: the associated performance variability is low;
Stable or variable but inadequate: the associated performance variability is high;
Unpredictable: the associated performance variability is very high.
This proposal allows for quantification provided that we associate values with the CCs. The CCs influence all function parameters. If a parameter has the highest performance variability value, the connections associated with this parameter will fail or be weakened. Consequently, we can see its impact on the whole network of functions by observing whether the output of each function fails.
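One possible way to make this gradation operational, which is not prescribed by FRAM itself, is to map the three levels onto an ordinal scale, score each function through the CCs that influence its type, and flag the couplings fed by highly variable functions. The CC ratings, function names and threshold in the sketch below are hypothetical.

```python
# Illustrative (not prescribed by FRAM) ordinal scoring of Common Conditions and
# propagation of the resulting variability through output-to-input couplings.
VARIABILITY = {"adequate": 1, "inadequate": 2, "unpredictable": 3}

# Hypothetical CC ratings and the function types (M, T, O) each CC influences.
cc_ratings = {
    "availability of resources": ("inadequate", {"M", "T"}),
    "training and experience": ("adequate", {"M"}),
    "organization quality and support": ("unpredictable", {"O"}),
}

def function_variability(kind):
    """Highest variability among the CCs that influence this function type."""
    scores = [VARIABILITY[level] for level, kinds in cc_ratings.values() if kind in kinds]
    return max(scores) if scores else VARIABILITY["adequate"]

# Couplings: (upstream function, its type) -> (downstream function, aspect used).
couplings = [(("Review procedures", "O"), ("Follow procedures", "control")),
             (("Monitor safety functions", "M"), ("Actuate safety functions", "input"))]

LIMIT = 2   # couplings fed by a function scoring above this are considered weakened
for (up_name, up_kind), (down_name, aspect) in couplings:
    score = function_variability(up_kind)
    status = "weakened/failed" if score > LIMIT else "intact"
    print(f"{up_name} -> {down_name} ({aspect}): variability {score}, {status}")
```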
In the third step of the analysis, functional resonance is determined for the couplings between functions, which are established by the coupling among different function parameters. In other words, the output of a function can be connected to different types of inputs of other functions (input, resource, control and precondition). Through these couplings, one can identify how the variability of a given function can affect the action of the other functions. Either degraded function performance at the output or unexpected connections (functional resonance) between the functions can be discovered, thus invalidating (failing) the connection or relaxing the coupling of the interconnected parameters in the connection. This is depicted in Fig. 1.
In the fourth step of the analysis, safety barriers are identified to avoid functional resonance. These barriers can be associated with any of the six function parameters, simply by adding one more connection (an AND logical node) to the pertinent parameter, which characterizes a restriction on the performance of that parameter in the function. There is, however, another approach that uses the same control system paradigm but a different kind of quantification, the STAMP model, described next. The STAMP model uses a mathematical model to describe the system dynamics, differently from FRAM, which uses a qualitative (structural) analysis based on connections and a semi-quantitative treatment of variability.
We briefly review some available FRAM applications. The FRAM website (http://functionalresonance.com/publications.html) may be consulted for further information.
Herrera and Woltjer (2010) compare the use of the Sequential
Time Events Plotting (STEP) (Hendrick and Benner, 1987) and FRAM
in the framework of civil aviation. Their conclusion is that STEP
helps to illustrate what happened, involving which actors at what
time, whereas FRAM illustrates the dynamic interactions within
socio-technical systems and lets the analyst understand the how
and why by describing non-linear dependencies, performance
conditions, variability, and their resonance across functions.
Belmonte et al. (2011) present a FRAM application to railway traffic supervision using modern automatic train supervision systems. According to them, examples taken from railway traffic supervision illustrate the principal advantage of FRAM in comparison with classical safety analysis models, i.e., its ability to take into account technical as well as human and organizational aspects within a single model, thus allowing a true multidisciplinary cooperation between specialists from the different domains involved.
Table 9
Common Conditions (CCs) and the functions they influence (Hollnagel, 2004, 2012).

Common condition (CC)                                       Human (M)  Technological (T)  Organizational (O)
1. Availability of resources                                    X             X
2. Training and experience                                      X
3. Quality of communication                                     X                                 X
4. Human-machine interface (HMI) and operational support                      X
5. Access to procedures and methods                             X
6. Work conditions                                              X             X
7. Number of goals and conflict solving                         X                                 X
8. Available time (time pressure)                               X
9. Circadian rhythm                                             X
10. Crew collaboration quality                                  X
11. Organization quality and support                                                              X
The mid-air collision between flight GLO1907, a Boeing 737-300 commercial aircraft, and flight N600XL, an EMBRAER E-145 executive jet, was analyzed by means of FRAM to investigate key resilience characteristics of the air traffic management system, Carvalho (2011).
Pereira (2013) introduces the use of FRAM for the effectiveness assessment of a specific radiopharmaceutical dispatching process.
9. STAMP (Systems-Theoretic Accident Model and Process)
(Leveson, 2002)
STAMP is based on system process dynamics and not on events
and human actions individually. Rasmussen (1997) proposed a
model for socio-technical systems, where the accident is viewed as
a complex process with several hierarchical control levels, involving legislators, regulators and associations, company policy, plant management, engineering and technical departments and the operating staff [see Fig. 2, Qureshi (2007) and Rasmussen (1997)].
Later, Rasmussen and Svedung (2000) applied this model to risk
management. However, they described the process downstream in
each level through an event chain similar to event trees and fault
trees. On the other hand, a model of socio-technical systems using
concepts of process control systems theory was applied by For-
rester to business dynamics involving economic processes
(Forrester, 1961). STAMP combines the Rasmussen and Svedung structure with Forrester's mathematical model of system dynamics to describe the process occurring at each level.
The systemic model of STAMP leans on 4 basic concepts:
Emergence, Hierarchy, Communication and Control. As discussed
in the last section dedicated to the FRAM methodology, systemic
models need the concept of emergence to explain functional
resonance. Accidents are seen as the result of the unexpected
interaction (resonance) of system functions. Therefore, the com-
ponents of these functions cannot be separately analyzed (indi-
vidual failures) and later combined in a linear way to evaluate
safety. Safety can only be evaluated by considering the relation-
ship of each component with the other plant components, that is,
in the global context. Therefore, the first fundamental concept of STAMP is that of emergent properties, which are associated with the restrictions imposed on the degrees of freedom of the components that compose the functions belonging to a given system hierarchy. The hierarchy of system functions becomes, therefore, the second fundamental concept (Leveson, 2002, 2004a).
The emergent properties are controlled by a set of restrictions that represent control actions on the behavior of the system components. Consequently, higher hierarchical levels impose restrictions or control actions on lower levels. Accidents appear, therefore, as the result of violations of the restrictions on the interactions between components in the global context, or because of the lack of appropriate control laws enforcing those restrictions (Leveson, 2002, 2004a). Because of the several hierarchical levels, the system is composed of several control loops nested through feedback mechanisms, another basic concept that comes from control theory (open systems that receive information from the environment to reach a steady state). These feedbacks keep the system permanently in a steady state, constantly adapting to variations in itself and in the external environment. From this point of view, the accident is seen as the incapacity of the feedback to make the controls enforce the restrictions. When the system components belong to the social and organizational levels, the restrictions take the form of internal policies, regulations, procedures, legislation, certifications, standards, authorizations, labor agreements and other instruments of economic, social, political and administrative control.

Fig. 1. A functional resonance with safety function control and monitoring. (The figure couples FRAM functions such as "Follow reviewed procedures", "Monitor safety functions", "Safety functions actuation" and "Verify the return of plant operation to normal conditions" through their parameters; the resonance stems from control and monitoring actions carried out with procedures that were not reviewed after the occurrence of a deviation with a small alteration in a parameter.)
The other two fundamental concepts of STAMP are the controllers that exert the controls on the restrictions at the lower hierarchy levels and the effective communication channels that transmit the control information and receive the feedback information about the state of the restrictions. Comparing STAMP's control structure with the parameters of FRAM's hexagonal structure, one can identify two types of input parameters in FRAM: Controls and Preconditions (pre-requirements) are restrictions imposed on the behavior of the function to be controlled, while Resources and Available Time are part of the function inputs. The output of the function is the feedback for the controllers at the higher hierarchical level.
Human or automatic controllers should exist at all hierarchy levels. Even in the case of automatic controllers, human beings will still be present in the monitoring of the automatic functions. Both types of controllers need models of the processes they control and of the interfaces with the rest of the system to which they are interlinked. Some inputs to the controller are restrictions coming from higher levels. On the other hand, the controller output supplies the restrictions for the lower levels and the feedback on the state of the restrictions at its hierarchical level. These basic ideas are illustrated and detailed in Leveson (2002, 2004a).
The controllers are not necessarily physical control devices; they can be design principles, such as redundancy, interlocks and fail-safe features, or even processes, such as production and maintenance procedures. It should be observed that human controllers require cognitive modeling (Alvarenga and Fonseca, 2009). Accidents happen when restrictions are not satisfied or when the controls on them are not effective. Therefore, in STAMP, the following accident causes are identified (Leveson, 2002, 2004a):
1. Control actions exerting inadequate enforcement of the restrictions:
a. Unidentified hazards;
b. Inadequate, inefficient or non-existing control actions for the identified hazards:
i. Process control algorithms designed so that they do not enforce the restrictions:
1. Failures in the creation process;
2. Changes of processes without corresponding changes in the control algorithm (asynchronous evolution);
3. Incorrect modifications or adaptations.
ii. Inconsistent, incomplete or incorrect (lack of alignment) process models:
1. Failures in the creation process;
2. Failures in the updating process (asynchronous evolution);
3. Time delays and measurement inaccuracies that are not taken into consideration.
iii. Inadequate coordination between controllers and decision makers (overlapping areas and boundaries).
c. Inadequate execution of control actions:
i. Communication failures;
ii. Inadequate actuator operation;
iii. Time delays.
d. Inadequate or nonexistent feedback:
i. Feedback not provided for in the system design;
ii. Communication failures;
iii. Time delays;
iv. Inadequate operation of sensors (incorrect information or information not supplied).
The models should contain the same information whether the controller is a human being or an automatic system. The fundamental information is: the relationships between system variables, the current state of the system variables, and the available process mechanisms for changing the state of the system variables. The relationships between the system variables are modeled through the technique of system dynamics (Leveson, 2002), which is based on the theory of nonlinear dynamics and feedback control. There are three basic blocks for building system dynamics models. These blocks implement basic feedback loops. The functions of each hierarchical level are composed of the complex coupling of several of these basic loops.
The first basic loop is the Reinforcement Loop, a structure that feeds itself, creating growth or decline, similarly to positive feedback loops in control system theory. An increase in variable 1 implies an increase in variable 2, which causes a further increase in variable 1, and so on, in an exponential way, if there are no external influences. The same reasoning is valid for a negative reinforcement, which generates an exponential decrease for one variable and an exponential increase for the other (reinforcements in opposite directions), because an increase in one variable implies a decrease in the other. The change does not necessarily mean a change in values but in direction. In many instances, a variable interacts with the variation rate of the other variable and not with the variable itself.
The second type of loop is the Balance Loop, in which the current value of a system variable or a reference parameter is modified through some control action. This corresponds to the negative feedback loop of control system theory. In this case, the difference between the variable value and the desired or reference value is the error, and the control action is proportional to the error, so that it brings the variable value to the reference value over time.

Fig. 2. The socio-technical hierarchical system involved in risk management, Rasmussen (1997), Qureshi (2007). (The figure shows the hierarchy Government; Regulators, Associations; Company; Management; Staff; Work/Hazardous Process; the research disciplines associated with each level, from political science, law, economics and sociology down to mechanical, chemical and electrical engineering; and environmental stressors such as a changing political climate and public awareness, changing market conditions and financial pressure, changing competency and levels of education, and the fast pace of technological change.)
The third type is the Delay, which is used to model the time interval that elapses between causes (or actions) and effects. It can be the source of several instabilities in the system, depending on the complex interactions between the system variables along time. Once the whole system is modeled through these basic structures, after the hierarchical functional decomposition and the identification of the variables and parameters that define the functions, it becomes possible to simulate the system dynamic behavior, assuming certain initial values for the system variables. Consequently, the system can be observed along time to check for instabilities as well as abrupt behaviors, including a probable system collapse.
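The three building blocks can be written as simple difference equations: a reinforcing loop grows in proportion to its own value, a balancing loop corrects a fraction of the error with respect to a reference, and a delay postpones the effect of the correction. The sketch below uses arbitrary gains, delays and initial values; it shows how a sufficiently long delay turns a well behaved balancing loop into an oscillating, potentially unstable one.

```python
# Discrete-time sketch of the three system dynamics building blocks used by STAMP
# (reinforcement, balance, delay). Gains, delays and initial values are arbitrary.

def reinforcing(x0=1.0, gain=0.1, steps=5):
    """x grows by a fraction of itself each step (exponential growth)."""
    x = x0
    series = [x]
    for _ in range(steps):
        x += gain * x
        series.append(x)
    return series

def balancing_with_delay(x0=10.0, reference=0.0, gain=0.6, delay=0, steps=30):
    """Correction proportional to the error, applied only after 'delay' steps."""
    x = x0
    history = [x0] * (delay + 1)          # past values, to model the delayed feedback
    series = [x]
    for _ in range(steps):
        delayed_error = history[-(delay + 1)] - reference
        x -= gain * delayed_error
        history.append(x)
        series.append(x)
    return series

print(reinforcing())                        # monotonically growing
print(balancing_with_delay(delay=0)[-1])    # converges close to the reference
print(balancing_with_delay(delay=3)[:8])    # the delayed loop overshoots and oscillates
```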
Marais and Leveson (2006) discuss basic structures, identifiable in the dynamic simulation of systems, that can erode safety or even cause the system to collapse. These structures were named safety archetypes:
1. Stagnant safety practices in the face of technological progress, due to delays in the control loops;
2. Decrease in safety consciousness due to the absence of incidents;
3. Unintended side effects of safety actions;
4. Correction of symptoms instead of root causes;
5. Erosion of safety by:
a. Complacency: low rates of incidents and accidents encourage an anti-regulatory feeling;
b. Postponed safety programs: a production versus safety conflict;
c. Incident and event reports given lower priority: effects of the reward versus punishment strategy.
Fig. 3 displays an example of an archetype (complacency).
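As a rough, invented rendering of how such an archetype can be simulated, the sketch below expresses the complacency loops of Fig. 3 as coupled difference equations: the absence of accidents erodes oversight, eroded oversight lets risk grow, and the growing risk eventually raises the accident rate, which restores the pressure for oversight. All coefficients and initial values are made up; only the direction of each influence follows the archetype.

```python
# Rough, illustrative rendering of the complacency archetype (Fig. 3) as coupled
# difference equations. All coefficients and initial values are invented; only the
# sign of each influence follows the archetype, not any calibrated model.
oversight, risk = 1.0, 0.1
for step in range(0, 201):
    accident_rate = 0.05 * risk                  # accident loop: more risk, more accidents
    # Oversight loop: accidents (and external pressure) raise oversight, their absence erodes it.
    oversight += 0.5 * accident_rate - 0.01 * (1.0 - accident_rate)
    oversight = min(max(oversight, 0.0), 1.0)
    # Risk monitoring loop: training, inspection and monitoring driven by oversight keep risk down.
    risk += 0.02 * (1.0 - oversight) - 0.03 * oversight * risk
    risk = max(risk, 0.0)
    if step % 50 == 0:
        print(step, round(oversight, 2), round(risk, 2))
```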
With the concept of safety archetypes associated with the STAMP methodology, it becomes a complete tool to model, simulate and evaluate the safety of socio-technical systems. STAMP, however, is not yet being used in risk analysis similarly to probabilistic safety analysis for the safety assessment of nuclear power plants. For this purpose, it is necessary that the safety archetypes have some type of associated metric that allows a quantitative assessment and, therefore, the identification of measurable safety criteria in practical terms. Examples of this can be found in the STAMP analysis of the Challenger and Columbia accidents, Leveson (2008).
We now review some STAMP applications. A detailed bibliog-
raphy on this subject may be found at http://sunnyday.mit.edu/
STAMP-publications.html.
Leveson (2004b) describes how systems theory can be used to build up new accident models that better explain system accidents. The loss of a Milstar satellite being launched by a Titan/Centaur launch vehicle is used as an illustration of the approach.
Leveson et al. (2012) demonstrate how to apply STAMP to healthcare systems. Pharmaceutical safety is used as the example in the paper, but the same approach is potentially applicable to other complex healthcare systems. System engineering techniques can be used to re-engineer the system as a whole while, at the same time, encouraging the development of new drugs.
Thomas et al. (2012) discuss a STAMP application to evaluate the
safety of digital instrumentation and control systems in pressurized
water reactors.
Ishimatsu et al. (2014) discuss a new STAMP-based hazard analysis technique, called System-Theoretic Process Analysis (STPA), which is capable of identifying potential hazardous design flaws, including software and system design errors and unsafe interactions among multiple system components, and present an application to the Japanese Aerospace Exploration Agency H-II Transfer Vehicle. A comparison of the results of this new hazard analysis technique with those of the standard fault tree analysis used in the design and certification of the H-II Transfer Vehicle is also presented.
10. A comparison between models discussed
Several models based on systems theory have been proposed. The models presenting a full proposal to incorporate the four types of complexity discussed earlier are STAMP, FRAM and ACCI-MAP, the latter being an implementation of the Rasmussen-Svedung model (Rasmussen and Svedung, 2000).
STAMP implements the Rasmussen-Svedung framework and is quantified by techniques of system dynamics, which were used in the 1960s to analyze economic systems (Forrester, 1961), unlike ACCI-MAP, which is a qualitative model. Moreover, STAMP identifies in the structures of control systems some archetypes (patterns) responsible for the collapse of the system (Marais and Leveson, 2006), causing accidents, as well as criteria for identifying deficiencies in these structures. This provides a preliminary evaluation of the system design, enabling corrections in the design. The techniques of system dynamics also enable the study of unstable oscillations of the system and of the sensitivity to critical parameters and variables, which also provides a prospective analysis, Dulac and Leveson (2004), Leveson (2004a,b).
However, to fully implement the Rasmussen-Svedung framework, STAMP has to use control structures with closed-loop feedback at three cognitive levels (skills, rules and knowledge) and Rasmussen's hierarchical abstraction levels when representing the analysis of field work tasks, Rasmussen and Svedung (2000).
Moreover, FRAM does not identify how to recognize or quantify system resonance modes, nor how systems migrate to limit states of resonance on the system operation boundary. It is possible that the techniques of system dynamics can be used to quantify FRAM, as was done with STAMP, which may be the subject of future investigations, Stringfellow (2010).
In order to compare conventional techniques with techniques based on systems theory, a table has been prepared containing a set of 16 characteristics, some of which were discussed in Section 3. The comparison considers both the non-systemic and the systemic approaches for treating organizational features. Each technique and/or approach has been checked in order to identify which features it actually takes into account. The results are shown in Table 10.
Fig. 3. An example of a safety archetype (complacency), Marais and Leveson (2006). (The figure links External Pressure, Oversight, Training, Inspection and Monitoring, Risk, Accident Probability and Anti-regulatory Behavior through three coupled feedback loops: an oversight loop, a risk monitoring loop and an accident loop.)
It may be inferred from this table that STAMP is the most suit-
able approach to treat organizational features in the context of
human reliability analysis.
11. Conclusions
The techniques of human reliability analysis of rst generation
take organizational factors into consideration through performing
shaping factors (PSFs). These factors are quantied in a subjective
way by experts in the eld of HRA or through databases of specic
operational events for each plant that contains among the root
causes the programmatic (or systematic) causes that characterize
organizational factors. However, the interaction of these factors
with error mechanisms and unsafe actions of human failures, or
between the factors themselves is linear, because, according to the
quantied level of the state of each factor, the probabilities of hu-
man error associated with unsafe actions are multiplied by
adjusting factors that can decrease or increase them. The inter-
connection matrix between factors (Gertman et al., 2005) is also
linear. HRA methodologies of the second generation continue to
use PSFs, although they include in the network of conditional
probabilities the error mechanisms as functions of the PSFs.
This approach has two basic deficiencies. The first is that the number of organizational factors is not enough to model all aspects of this nature, especially the political, economic and normative ones. The second is that the interaction of these factors with each other and with the error mechanisms, unsafe actions, and modes and types of errors observed at the individual and group levels is highly nonlinear.
Two modern approaches are the most promising to overcome these deficiencies, because they are based on nonlinear models: FRAM and STAMP. Both methodologies are based on the General Systems Theory (GST) (Bertalanffy, 1971), which largely uses the concepts of Control Systems Theory to put the basic ideas of GST into practice. FRAM extends the basic control system model of input and output variables or parameters, as well as resources and controls or restrictions (boundary conditions), by adding two more types of variables or parameters: time and preconditions (pre-requirements), which are considered special boundary conditions or special types of resources and restrictions. The socio-technical system is decomposed into functions and each function has a hexagonal structure, as described in this paper. The system has internal feedback, because each hexagonal function is linked with the others through one or more of the six parameters or variables.
The nonlinear nature of FRAM is established by the resonance concept: given a limited variation in one of the parameters of one of the functions, the system can, at each information transmission to the interlinked functions, amplify the effect on other parameters of other functions and on the originating function itself, generating along time a stochastic resonance effect in the parameters which, in certain cycles of information transmission, can surpass their variation thresholds, indicating a rupture in the system safety or stability. These transmission cycles may or may not stabilize, since the cycles may or may not be damped. Apart from the mathematical formulation of stochastic resonance, Hollnagel (2004, 2012) establishes the concept of functional resonance, in which one tries to fail or relax the connections between the functions through the six interconnection parameters or variables. Thus, it is necessary to look for unexpected connections between the functions. The failure or relaxation of a connection is a function of the variability of the parameters or variables in the connection. This variability, in turn, is a function of the variability of the common conditions (CCs) that influence all functions at the same time. Therefore, several alterations in the parameter or variable values can occur at the same time. This analysis is, therefore, of a qualitative or semi-quantitative nature, depending on the external subjective evaluations of the CCs. On the other hand, the concept of stochastic resonance can be worked out through mathematical models, as long as one establishes, for each function to be modeled, a mathematical expression for the dependence of each one of the six variables or parameters on the other five that compose the function. One should bear in mind, however, that each of the six components of the function can have one or more representative variables or parameters, and this feature makes the modeling quite complex.
A proposal for establishing a mathematical model, as requested above, comes from STAMP, which uses system dynamics modeling, originally applied to economic systems. Although this model does not use a hexagonal structure as FRAM does, it likewise establishes a functional decomposition of the organization according to the organizational structure, which is composed of several departments or divisions, each one with its specific function. These include the external organizations, such as the government, that interface with the organization. Each department has parameters and variables (P&Vs) at its input and output that make the interconnection with other departments or divisions. It also possesses P&Vs that represent the controls or restrictions of higher hierarchical levels, as well as P&Vs that represent the resources needed to perform the function, including time and preconditions, as in FRAM. The dynamic simulation of these variables in STAMP is equivalent to the functional resonance in FRAM, with the advantage of
identifying the safety archetypes that are responsible for system erosion and collapse, which serve as safety criteria to evaluate socio-technical systems.

Table 10
Comparison among HRA techniques/approaches on relevant features.

Feature                                                  Non-systemic                    Systemic
                                                         THERP   CREAM   ATHEANA         ACCI-MAP   FRAM   STAMP
Sequential linear model                                  Yes     Yes     Yes             No         No     No
Epidemiological linear model                             No      No      Yes             No         No     No
System-theory-based model                                No      No      No              Yes        Yes    Yes
Decomposition complexity                                 No      No      No              Yes        Yes    Yes
Nonlinear complexity                                     No      No      No              Yes        Yes    Yes
Interactive complexity                                   No      No      No              No (f)     Yes    Yes
Dynamic complexity                                       No      No      No              No (f)     Yes    Yes
Cognitive architecture (a)                               No      Yes     Yes             No (f)     No     No
Cognitive error mechanisms (a)                           No      Yes     Yes             No (f)     No     No
Hierarchical levels of abstraction (b)                   No      No      No              Yes        No     Yes
Levels of decision making (b)                            No      No      No              Yes        No     Yes
Performance Shaping Factors                              Yes     Yes     Yes             No         No     No
Metrics for PSFs (c)                                     No      No      No              NA (d)     NA     NA
Operational context                                      No      Yes     Yes             Yes        Yes    Yes
Control structures in 3 consecutive levels (S-R-K) (e)   No      No      No              No (f)     No     No

(a) NRC (2000).
(b) Rasmussen and Svedung (2000).
(c) Gertman et al. (2005).
(d) NA: not applicable, because it does not make sense to consider PSF metrics for system dynamics models, as the underlying concepts are different.
(e) Skill, Rule, Knowledge.
(f) ACCI-MAPs are general and must be detailed to enable these features.
References
Alvarenga, M.A.B., Fonseca, R.A., 2009. Comparison of THERP quantitative tables
with the human reliability analysis techniques of second generation. In: Pro-
ceedings of the International Nuclear Atlantic Conference, Available in CD-ROM.
Brazilian Association of Nuclear Energy, Rio de Janeiro.
Baumont, G., Menage, F., Schneiter, J.R., Spurgin, A., Vogel, A., 2000. Quantifying
human and organizational factors in accident management using decision
trees: the HORAAM method. Reliab. Eng. Syst. Saf. 70, 113e124.
Bellamy, L.J., Geyer, T.A.W., Wilkinson, J., 2008. Development of a functional model
which integrates human factors, safety management systems and wider
organisational issues. Saf. Sci. 46, 461e492.
Belmonte, F., Schön, W., Heurley, L., Capel, R., 2011. Interdisciplinary safety analysis
of complex socio-technological systems based on the functional resonance
accident model: an application to railway traffic supervision. Reliab. Eng. Syst.
Saf. 96, 237e249.
Bertalanffy, L., v., 1971. General Systems Theory, Foundations, Development, Ap-
plications. Allen Lane, The Penguin Press, London.
Bertolini, M., 2007. Assessment of human reliability factors: a fuzzy cognitive maps
approach. Int. J. Indus. Ergon. 37, 405e413.
Bieder, C., Le-Bot, P., Desmares, E., Bonnet, J.-L., Cara, F., 1998. MERMOS: EDFs New
Advanced HRA Method. In: Mosleh, A., Bari, R.A. (Eds.), Probabilistic Safety
Assessment and Management (PSAM 4). Springer-Verlag, New York.
Broughton, J., Carter, R., Chandler, F., Holcomb, G., Humeniuk, B., Kerios, B., Bruce, P.,
Snyder, P., Strickland, S., Valentino, B., Wallace, L., Wallace, T., Zeiters, D., 1999.
Human Factors Process Failure Modes & Effects Analysis. PGOC91-F050-JMS-
99286. Boeing Company, Seattle.
Cacciabue, P.C., 1992. Cognitive modelling: a fundamental issue for human reli-
ability assessment methodology? Reliab. Eng. Syst. Saf. 38, 91e97.
Carayon, P., 2006. Human factors of complex sociotechnical systems. Appl. Ergon.
37, 525e535.
Carvalho, P.V.R., 2011. The use of Functional Resonance Analysis Method (FRAM) in a
mid-air collision to understand some characteristics of the air traffic manage-
ment system resilience. Reliab. Eng. Syst. Saf. 96, 1482e1498.
Chang, Y.H.J., Mosleh, A., 2007. Cognitive modeling and dynamic probabilistic
simulation of operating crew response to complex system accidents. Part 4:
IDAC causal model of operator problem-solving response. Reliab. Eng. Syst. Saf.
92, 1061e1075.
Chien, S.H., Dykes, A.A., Stetkar, J.W., Bley, D.C., 1988. Quantification of human error
rates using a SLIM-based approach. In: IEEE Fourth Conference on Human
Factors and Power Plants, Monterey, CA.
Davoudian, K., Wu, J.S., Apostolakis, G.E., 1994. Incorporating organizational factors
into risk assessment through the analysis of work processes. Reliab. Eng. Syst.
Saf. 45, 85e105.
Dekker, S., Gilliers, P., Hofmeyr, J.-H., 2011. The complexity of failure: implications of
complexity theory for safety investigation. Saf. Sci. http://dx.doi.org/10.1016/
j.ssci.2011.01.008.
Díaz-Cabrera, D., Hernández-Fernau, E., Isla-Díaz, R., 2007. An evaluation of a new
instrument to measure organizational safety culture values and practices. Accid.
Anal. Prev. 39, 1202e1211.
Dougherty, E., Fragola, J., 1987. Human Reliability Analysis: a Systems Engineering
Approach with Nuclear Power Plant Applications. John Wiley & Sons Ltd,
Canada.
Dulac, N., Leveson, N., 2004. An approach to design for safety in complex systems.
In: Proceedings of the International Conference on System Engineering
(INCOSE04), Toulouse, France, pp. 393e407.
Forrester, J.W., 1961. Industrial Dynamics. MIT Press, Cambridge, MA.
Fujita, Y., 1992. Human reliability analysis: a human point of view. Reliab. Eng. Syst.
Saf. 38, 71e79.
Galán, S.F., Mosleh, A., Izquierdo, J.M., 2007. Incorporating organizational factors
into probabilistic safety assessment of nuclear power plants through canonical
probabilistic models. Reliab. Eng. Syst. Saf. 92, 1131e1138.
Gertman, D.I., Blackmann, H.S., Haney, L.N., Seidler, K.S., Hahn, H.A., 1992. INTENT: a
method for estimating human error probabilities for decision based errors.
Reliab. Eng. Syst. Saf. 35, 127e136.
Gertman, D.I., Blackman, H.S., Byers, J., Haney, L., Smith, C., Marble, J., 2005. The
SPAR-H Method, NUREG/CR-6883. U.S. Nuclear Regulatory Commission,
Washington, DC.
Grabowski, M., You, Z., Zhou, Z., Song, H., Steward, M., Steward, B., 2009. Human and
organizational error data challenges in complex, large-scale systems. Saf. Sci. 47,
1185e1194.
Grobbelaar, J., Julius, J., Lewis, S., Rahn, F., 2003. Guidelines for Performing Human
Reliability Analysis: Using the HRA Calculator Effectively. Draft Report. Electric
Power Research Institute, Monterey, CA.
Grote, G., 2007. Understanding and assessing safety culture through the lens of
organizational management of uncertainty. Saf. Sci. 45, 637e652.
Hannaman, G.W., Spurgin, A.J., Lukic, Y., 1984. Human Cognitive Reliability Model
for PRA Analysis, NUS-4531. Draft EPRI Document. Electric Power Research
Institute, Palo Alto, CA.
Hee, D.D., Pickrell, B.D., Bea, R.G., Roberts, K.H., Williamson, R.B., 1999. Safety
Management Assessment System (SMAS): a process for identifying and evalu-
ating human and organization factors in marine system operations with field
test results. Reliab. Eng. Syst. Saf. 65, 125e140.
Hendrick, K., Benner, L., 1987. Investigating Accidents with STEP. Marcel Dekker Inc.,
New York.
Herrera, I.A., Woltjer, R., 2010. Comparing a multi-linear (STEP) and systemic
(FRAM) method for accident analysis. Reliab. Eng. Syst. Saf. 95, 1269e1275.
Hetrick, D.L., 1971. Dynamics of Nuclear Reactors. The University of Chicago Press,
Chicago.
Hollnagel, E., 1998. Cognitive Reliability and Error Analysis Method (CREAM).
Elsevier Science, New York.
Hollnagel, E., 2004. Barriers and Accident Prevention. Ashgate Publishing Company,
Aldershot.
Hollnagel, E., 2012. FRAM: the Functional Resonance Analysis Method, Modelling
Complex Socio-technical Systems. Ashgate, Aldershot, UK.
HSE, 2009. Review of Human Reliability Assessment Methods. Report RR679. Health
and Safety Executive, Buxton, Derbyshire, UK.
Ishimatsu, T., Leveson, N.G., Thomas, J.P., Fleming, C.H., Katahira, M., Miyamoto, Y.,
Ujiie, R., Nakao, H., Hoshino, N., 2014. Hazard analysis of complex spacecraft
using systems-theoretic process analysis. J. Spacecr. Rock. http://dx.doi.org/
10.2514/1.A32449.
Kantowitz, B.H., Fujita, Y., 1990. Cognitive theory, identifiability and human reli-
ability analysis (HRA). Reliab. Eng. Syst. Saf. 29, 317e328.
Kharratzadeh, M., Schultz, T.R., 2013. Neural-network modelling of Bayesian
learning and inference. In: Proceedings of the 35th Annual Meeting of the
Cognitive Science Society, Austin, TX, ISBN 978-0-9768318-9-1, pp. 2686e
2691.
Kirwan, B., 1990. A resource flexible approach to human reliability assessment for
PRA. In: Safety and Reliability Symposium. Elsevier Applied Sciences, Amster-
dam, pp. 114e135.
Kirwan, B., Gibson, H., Kennedy, R., Edmunds, J., Cooksley, G., Umbers, I., 2005.
Nuclear Action Reliability Assessment (NARA): a data based HRA tool. Saf.
Reliab. 25, 38e45.
Laszlo, A., Krippner, S., 1998. Systems theories: their origins, foundations, and
development. In: Jordan, J.S. (Ed.), Systems Theories and a Priori Aspects of
Perception. Elsevier, Amsterdam, pp. 47e74.
Lee, Y.S., Kim, Y., Kim, S.H., Kim, C., Chung, C.H., Jung, W.D., 2004. Analysis of human
error and organizational deficiency in events considering risk significance. Nucl.
Eng. Des. 230, 61e67.
Leveson, N.G., 2002. System Safety Engineering: Back to the Future. Massachusetts
Institute of Technology, Cambridge. http://sunnyday.mit.edu/book2.pdf.
Leveson, N.G., 2004a. A new accident model for engineering safer systems. Saf. Sci.
42, 237e270.
Leveson, N.G., 2004b. A systems-theoretic approach to safety in software-intensive
systems. IEEE Trans. Depend. Secure Comput. 1, 66e86.
Leveson, N.G., 2008. Technical and managerial factors in the NASA challenger and
Columbia losses: looking forward to the future. In: Kleiman, D.L., Cloud-
Hansem, K.A., Matta, C., Handelsman, J. (Eds.), Controversies in Science and
Technology, vol. 2. Mary Ann Liebert Press, New York.
Leveson, N.G., Couturier, M., Thomas, J., Dierks, M., Wierz, D., Psaty, B.,
Finkelstein, S., 2012. Applying system engineering to pharmaceutical safety.
J. Healthc. Eng. 3, 391e414.
Lewes, G.H., 2005. Problems of Life and Mind. In: First Series: the Foundations of a
Creed, vol. 2. University of Michigan Library Reprinting Series, Ann Arbor.
Li, W.-C., Harris, D., Yu, C.-S., 2008. Routes to failure: analysis of 41 civil aviation
accidents from the Republic of China using the human factors analysis and
classification system. Accid. Anal. Prev. 40, 426e434.
Lorentz, E., 2003. The Essence of Chaos. Rutledge, London.
Marais, K., Leveson, N.G., 2006. Archetypes for organizational safety. Saf. Sci. 44,
565e582.
Modarres, M., Martz, H., Kaminskiy, M., 1996. The accident sequence precursor
analysis. Nucl. Sci. Eng. 123, 238e258.
Mohaghegh, Z., Kazemi, R., Mosleh, A., 2009. Incorporating organizational fac-
tors into probabilistic risk assessment (PRA) of complex socio-technical
systems: a hybrid technique formalization. Reliab. Eng. Syst. Saf. 94,
1000e1018.
Mohaghegh, Z., Mosleh, A., 2009. Measurement techniques for organizational safety
causal models: characterization and suggestions for enhancements. Saf. Sci. 47,
1398e1409.
NASA, 2006. Human Reliability Analysis Methods, Selection Guidance for NASA.
NASA Headquarters Office of Safety and Mission Assurance, Washington,
D.C.
NRC, 1985. Application of SLIM-MAUD: A Test of an Interactive Computer-Based
Method for Organizing Expert Assessment of Human Performance and Reli-
ability. NUREG/CR-4016. U.S. Nuclear Regulatory Commission, Washington,
DC.
NRC, 1992. Precursors to Potential Severe Core Damage Accidents: 1992. A Status
Report, NUREG/CR-4674. Oak Ridge National Laboratory, U. S. Nuclear Regula-
tory Commission, Washington, D.C.
NRC, 2000. Technical Basis and Implementation Guidelines for A Technique for
Human Event Analysis (ATHEANA). NUREG-1624. U.S. Nuclear Regulatory
Commission, Washington, D.C.
NRC, 2005. Good Practices for Implementing Human Reliability Analysis. NUREG-
1792. U. S. Nuclear Regulatory Commission, Washington D.C.
NRC, 2006. Evaluation of Human Reliability Analysis Methods against Good Prac-
tices. NUREG-1842. U. S. Nuclear Regulatory Commission, Washington D.C.
NRC, 2007. ATHEANA's User Guide Final Report. NUREG-1880. US Nuclear Regula-
tory Commission, Washington, D.C.
NRC, 2008. Human Factors Considerations with Respect to Emerging Technology in
Nuclear Power Plants. NUREG/CR-6947 BNL-NUREG-79828. US Nuclear Regu-
latory Commission, Washington, D.C.
Øien, K., 2001. A framework for the establishment of organizational risk indicators.
Reliab. Eng. Syst. Saf. 74, 147e167.
Papageorgiou, E., Stylios, C., Groumpos, P., 2003. Fuzzy cognitive map learning
based on non-linear Hebbian rule. In: Gedeon, T.D., Fung, L.C.C. (Eds.), AI 2003:
Advances in Artificial Intelligence. Springer-Verlag, Berlin Heidelberg.
Papazoglou, I.A., Aneziris, O., 1999. On the quantification of the effects of organi-
zational and management factors in chemical installations. Reliab. Eng. Syst.
Saf. 63, 33e45.
Papazoglou, I.A., Bellamy, L.J., Hale, A.R., Aneziris, O.N., Ale, B.J.M., Post, J.G.,
Oh, J.I.H., 2003. I-RISK: development of an integrated technical and manage-
ment risk methodology for chemical installations. J. Loss Prev. Process Indus.
16, 575e591.
Parry, G., et al., 1992. An Approach to the Analysis of Operator Actions in PRA. EPRI
TR-100259. Electric Power Research Institute, Palo Alto, CA.
Peng-cheng, L., Guo-hua, C., Li-cao, D., Li, Z., 2012. A fuzzy Bayesian network
approach to improve the quantification of organizational influences in HRA
frameworks. Saf. Sci. 50, 1569e1583.
Pereira, A.G.A.A., 2013. Introduction to the Use of FRAM on the effectiveness
assessment of a radiopharmaceutical Dispatches process. In: Proceedings of the
International Nuclear Atlantic Conference, Available in CD-ROM. Brazilian As-
sociation of Nuclear Energy, Rio de Janeiro.
Pressman, R.S., 1992. Software Engineering: A Practitioner's Approach. McGraw-
Hill Book Company, New York.
Qureshi, Z.H., 2007. A review of accident modelling approaches for complex socio-
technical Systems. In: Proceedings of the 12th Australian Workshop on Safety
Related Programmable Systems, vol. 9. Australian Computer Society, Adelaide,
pp. 47e59.
Qureshi, Z., 2008. A Review of Accident Modelling Approaches for Complex
Critical Sociotechnical Systems. Report DSTO-TR-2094. Command, Control,
Communications and Intelligence Division, Defence Science and Technology
Organisation, Edinburgh, South Australia.
Rasmussen, B., Petersen, K.E., 1999. Plant functional modeling as a basis for
assessing the impact of management on plant safety. Reliab. Eng. Syst. Saf. 64,
201e207.
Rasmussen, J., 1997. Risk management in a dynamic society: a modelling problem.
Saf. Sci. 27, 183e213.
Rasmussen, J., Petersen, A.M., Goodstein, L.P., 1994. Cognitive System Engineering.
John Wiley & Sons, New York.
Rasmussen, J., Svedung, I., 2000. Proactive Risk Management in a Dynamic Society.
Swedish Rescue Services Agency, Karlstad.
Reason, J., 1997. Managing the Risks of Organizational Accidents. Ashgate Publishing
Company, Aldershot.
Reiman, T., Oedewald, P., Rollenhagen, C., 2005. Characteristics of organizational
culture at the maintenance units of two Nordic nuclear power plants. Reliab.
Eng. Syst. Saf. 89, 331e345.
Reiman, T., Oedewald, P., 2007. Assessment of complex sociotechnical systems e
theoretical issues concerning the use of organizational culture and organiza-
tional core task concepts. Saf. Sci. 45, 745e768.
Ren, J., Jenkinson, I., Wang, J., Xu, D.L., Yang, J.B., 2008. A methodology to model
causal relationships on offshore safety assessment focusing on human and
organizational factors. J. Saf. Res. 39, 87e100.
Reer, B., 1997. Conclusions from occurrences by descriptions of actions (CODA). In:
Drottz Sjöberg, B.M. (Ed.), New Risk Frontiers, Proceedings of the 1997 Annual
Meeting of the Society for Risk Analysis-Europe.
Reer, B., Dang, V.N., Hirschberg, S., 2004. The CESA method and its application in a
plant-specic pilot study on errors of commission. Reliab. Eng. Syst. Saf. 83,
187e205.
Schönbeck, M., Rausand, M., Rouvroye, J., 2010. Human and organisational factors in
the operational phase of safety instrumented systems: a new approach. Saf. Sci.
48, 310e318.
Seaver, D.A., Stillwell, W.G., 1983. Procedures for using Expert Judgment to Estimate
Human Error Probabilities in Nuclear Power Plant Operations. NUREG/CR-2743.
U.S. Nuclear Regulatory Commission, Washington, D.C.
Shen, S.H., Mosleh, A., 1996. Human Error Probability Methodology Report RAN:
96e002. Calvert Cliffs Nuclear Power Plant, BGE.
Song, L., Yang, L., Huainan, A., Huainan, J.J., Guanghua, J.L., 2011. A Bayesian belief
Net model for evaluating organizational safety risks. J. Comput. 6, 1842e
1846.
Sterman, J., 2000. Business Dynamics, Systems Thinking and Modeling for a Com-
plex World. Irwin McGraw-Hill, Boston.
Stringfellow, M.V., 2010. Accident Analysis and Hazard Analysis for Human and
Organizational Factors. PhD dissertation. Department of Aeronautics and As-
tronautics, Massachusetts Institute of Technology, Cambridge.
Stylios, C.D., Groumpos, P.P., 1999. Mathematical formulation of fuzzy cognitive
maps. In: Proceedings of the 7th Mediterranean Conference on Control and
Automation (MED99), Haifa, Israel, pp. 2251e2261.
Sträter, O., 2005. Cognition and Safety: an Integrated Approach to Systems Design
and Performance Assessment. Ashgate, Aldershot, UK.
Sträter, O., Bubb, H., 1999. Assessment of human reliability based on evaluation of
plant experience: requirements and implementation. Reliab. Eng. Syst. Saf. 63,
199e219.
Swain, A.D., 1987. Accident Sequence Evaluation Program Human Reliability Anal-
ysis Procedure. NUREG/CR-4772/SAND86-1996. Sandia National Laboratories,
U.S. Nuclear Regulatory Commission, Washington, DC.
Swain, A.D., 1990. Human reliability analysis: need, status, trends and limitations.
Reliab. Eng. Syst. Saf. 29, 301e313.
Swain, A.D., Guttman, H.E., 1983. Handbook of Human Reliability Analysis with
Emphasis on Nuclear Power Plant Applications. NUREG/CR-1278. U.S. Nuclear
Regulatory Commission, Washington, D.C.
Thomas, J., Lemos, F.L., Leveson, N.G., 2012. Evaluating the Safety of Digital Instru-
mentation and Control Systems in Nuclear Power Plants. Research Report NRC-
HQ-11-6-04-0060. Massachusetts Institute of Technology, Cambridge.
Trucco, P., Cagnoa, E., Ruggeri, F., Grande, O., 2008. A Bayesian belief network
modelling of organisational factors in risk analysis: a case study in maritime
transportation. Reliab. Eng. Syst. Saf. 93, 823e834.
Tseng, Y.-F., Lee, T.-Z., 2009. Comparing appropriate decision support of human
resource practices on organizational performance with DEA/AHP model. Expert
Syst. Appl. 36, 6548e6558.
Wakefield, D., Parry, G., Hannaman, G., Spurgin, A., 1992. SHARP1: A Revised Sys-
tematic Human Action Reliability Procedure. EPRI TR-10171 1, Tier 2. Electric
Power Research Institute, Palo Alto, CA.
Waterson, P., Kolose, S.L., 2010. Exploring the social and organizational aspects of
human factors integration: a framework and case study. Saf. Sci. 48, 482e
490.
Williams, J., 1988. A data-based method for assessing and reducing human error to
improve operational performance. In: IEEE Conference on Human Factors in
Power Plants. Monterey California.