
RISK

IN PROCESS INDUSTRIES





[Cover diagram - the objective: Sustainable Operational Excellence; the focus areas: Stakeholder, Reputation & Political Risk Management, with pillars for Safety & Health, Environmental Impacts, Financial Performance, Project/Contractor Mgt, Cybersecurity, Social Impacts and Supply Chains; key RM activities: Identify, Assess and Treat Risks, and Monitor and Review Risk Mgt; the foundations: Fundamentals of Risk Mgt, Professional Practice, Humans and Risk]








By Maureen Hassall and Paul Lant

Foundational Theory
Front Matter

















Table of contents

List of figures ........................................................................................................................... ii.
List of tables ........................................................................................................................... iii.
Acknowledgements ................................................................................................................ iv.
SECTION A – INTRODUCTION ........................................................................................... 1.
Chapter 1: Risk in the process industries ................................................................. 3.
1.1 Introduction to risk in the process industries ............................................. 3.
1.2 What is risk? ............................................................................................... 6.
1.3 Why is risk management so important? ..................................................... 7.
1.4 What types of risk should engineers consider? ........................................ 10.
1.5 Case studies of engineering decisions ...................................................... 13.
1.6 Summary .................................................................................................. 14.

SECTION B – THE FOUNDATIONS .................................................................................... 17.
Chapter 2: Fundamentals of risk management ....................................................... 19.
2.1 Introduction to risk fundamentals ............................................................ 19.
2.2 The risk management process .................................................................. 20.
2.3 The risk language ...................................................................................... 23.
2.4 A brief history of risk management .......................................................... 24.
2.5 Two approaches of modern risk management ........................................ 25.
2.6 Case studies illustrating two approaches to risk management ................ 29.
2.7 Who is responsible for risk management ................................................. 31.
2.8 Summary .................................................................................................. 32.
Chapter 3: Professional practice ............................................................................ 33.
3.1 Introduction .............................................................................................. 33.
3.2 What is professional practice? ................................................................. 33.
3.3 What is a professional engineer? ............................................................. 34.
3.3.1 Ethics ........................................................................................................ 35.
3.3.2 Competence ............................................................................................. 36.
3.3.3 Performance ............................................................................................. 38.
3.4 Obligations, accountabilities and responsibilities .................................... 39.
3.4.1 Legal and regulatory obligations .............................................................. 39.
3.4.2 Accountabilities ........................................................................................ 43.
3.4.3 Responsibilities ........................................................................................ 43.
3.5 Summary .................................................................................................. 43.
Chapter 4: Humans and risk ................................................................................... 45.
4.1 Introduction .............................................................................................. 45.
4.2 The role of humans – risk analysers, controllers and perceivers ............. 46.
4.2.1 Risk perceivers ......................................................................................... 47.
4.2.2 Risk analysers ........................................................................................... 48.
4.2.3 Risk controllers ......................................................................................... 50.
4.3 Risk communication ................................................................................. 54.
4.4 The human decision making process ....................................................... 57.
4.4.1 Situation awareness ................................................................................. 59.
4.4.2 Decision making strategies ....................................................................... 60.
4.4.3 Performance ............................................................................................. 63.
4.5 Summary .................................................................................................. 64.



SECTION C – KEY RISK MANAGEMENT ACTIVITIES ........................................................... 66.
Chapter 5: Identify, assess and treat risks .............................................................. 68.
5.1 Introduction .............................................................................................. 68.
5.2 Establish the context ................................................................................ 70.
5.3 Risk assessment ........................................................................................ 72.
5.3.1 Risk identification theory ......................................................................... 72.
5.3.2 Risk analysis theory .................................................................................. 74.
5.3.3 Risk evaluation, tolerable risk and ALARP ................................................ 77.
5.3.4 Risk assessment techniques and tools ..................................................... 79.
5.3.5 The risk register ........................................................................................ 85.
5.4 Risk treatment and management ............................................................. 86.
5.4.1 Overview of risk treatment ....................................................................... 86.
5.4.2 Unwanted event identification ................................................................ 86.
5.4.3 Selection and optimization of risk controls .............................................. 88.
5.4.4 Bowtie analysis ......................................................................................... 90.
5.4.5 Management of controls .......................................................................... 93.
5.5 Summary .................................................................................................. 95.
Chapter 6: Monitor and review risk management .................................................. 96.
6.1 Introduction .............................................................................................. 98.
6.2 Why perform event investigations? ......................................................... 99.
6.3 Purpose and theory behind investigations ............................................. 101.
6.4 Incident investigation techniques and application considerations ........ 103.
6.4.1 Timeline ................................................................................................... 105.
6.4.2 Five whys analysis .................................................................................. 106.
6.4.3 Fishbone ................................................................................................. 107.
6.4.4 HFACs ..................................................................................................... 109.
6.4.5 Bowtie analysis ....................................................................................... 116.
6.4.6 Accimap .................................................................................................. 118.
6.4.7 SAfER ...................................................................................................... 120.
6.5 Integration of learning back into the business ....................................... 124.
6.6 Summary ................................................................................................ 124.

SECTION D – CLOSING ................................................................................................... 126.
Epilogue .............................................................................................................. 128.
References .......................................................................................................... 130.



List of Figures

Figure 1.1: Australia’s global competitiveness rank .............................................................. 3.
Figure 1.2: Australia’s multifactor productivity index ........................................................... 3.
Figure 1.3: Australian worker fatalities in oil, gas, mining and manufacturing ..................... 4.
Figure 1.4: Asset damage in upstream oil and gas ................................................................ 4.
Figure 1.5: Adoption of technology by US households .......................................................... 4.
Figure 1.6: Sampling of recent media coverage relating to high hazard industries .............. 5.
Figure 1.7: Quality of risk management versus companies’ financial performance ............. 6.
Figure 1.8: Three tiered system of knowledge ...................................................................... 8.
Figure 1.9: The influences and impacts on/of industrial risks ............................................... 8.
Figure 1.10: Global risk landscape ....................................................................................... 10.
Figure 1.11: Scope of risk management .............................................................................. 12.

Figure 2.1: Scope of risk and uncertainty management ...................................................... 19.
Figure 2.2: The risk management process (AS/NZS ISO31000:2018) .................................. 21.
Figure 2.3: Extended risk management process for process industries .............................. 21.
Figure 2.4: Example of a risk matrix .................................................................................... 22.
Figure 2.5: History of process safety highlighting different ages of safety ......................... 25.
Figure 2.6: History of industrial performance ..................................................................... 26.
Figure 2.7: Types of error .................................................................................................... 27.
Figure 2.8: Organisational decision making ......................................................................... 31.

Figure 3.1: Three tiered system of knowledge .................................................................... 34.
Figure 3.2: The three dimensions of professional engineering ............................................ 35.
Figure 3.3: Ethical dilemma grid .......................................................................................... 36.
Figure 3.4: Ethical decision grid ........................................................................................... 36.
Figure 3.5: Some environment laws ..................................................................................... 41.
Figure 3.6: A comparison of professional standards and legal requirements ..................... 42.

Figure 4.1: Organisational decision making ......................................................................... 45.
Figure 4.2: The influences and impacts on/of industrial risks .............................................. 46.
Figure 4.3: Risk perception and its components ................................................................. 47.
Figure 4.4: Risk perception inputs and outputs ................................................................... 47.
Figure 4.5: Risk levels and risk preferences ......................................................................... 48.
Figure 4.6: Factors that characterize high and low organisation risk appetites .................. 49.
Figure 4.7: Types of error .................................................................................................... 50.
Figure 4.8: History of industrial work .................................................................................. 51.
Figure 4.9: The risk management process ........................................................................... 54.
Figure 4.10: Risk communication model .............................................................................. 55.
Figure 4.11: The evolution of risk communication ............................................................... 55.
Figure 4.12: Methods of risk communication – results from US Flood research ................. 56.
Figure 4.13: Example of visualizing the interconnectedness of different risks .................... 57.
Figure 4.14: Human contribution to incidents ..................................................................... 58.
Figure 4.15: Designing for humans ....................................................................................... 59.
Figure 4.16: Components of human performance ............................................................... 59.
Figure 4.17: Model of situation awareness .......................................................................... 60.
Figure 4.18: Range of strategies and strategy shaping factors ............................................ 61.



Figure 5.1: The risk management process ........................................................................... 68.
Figure 5.2: Extended version of ISO31000 risk management process ................................ 69.
Figure 5.3: Risk management process .................................................................................. 70.
Figure 5.4: Stakeholder identification diagram ................................................................... 71.
Figure 5.5: Approach to risk identification .......................................................................... 73.
Figure 5.6: Risk matrix 1 ...................................................................................................... 75.
Figure 5.7: Risk matrix 2 ...................................................................................................... 76.
Figure 5.8: Risk matrix 3 ...................................................................................................... 76.
Figure 5.9: Risk tolerability and ALARP ................................................................................ 78.
Figure 5.10: Selection of ALARP option for a water treatment technology ......................... 79.
Figure 5.11: Example of hazard identification process ........................................................ 81.
Figure 5.12: Traditional SWOT analysis ................................................................................ 82.
Figure 5.13: Second example of SWOT analysis ................................................................... 83.
Figure 5.14: Range of people risks ....................................................................................... 84.
Figure 5.15: Human factors risk assessment prompts ......................................................... 84.
Figure 5.16: Example of a risk register ................................................................................. 85.
Figure 5.17: Safe/unsafe operating zone diagram ............................................................... 87.
Figure 5.18: Risk treatment options for addressing unwanted events ................................ 88.
Figure 5.19: Defining a control ............................................................................................. 89.
Figure 5.20: Representation of the work of an organisation as it relates to risk control ..... 89.
Figure 5.21: Example of a risk register with controls listed ................................................. 90.
Figure 5.22: A basic bowtie diagram .................................................................................... 91.
Figure 5.23: Basic bowtie diagram linked to control assurance management systems ....... 91.
Figure 5.24: Advanced bowtie with control erosion factors ................................................ 92.
Figure 5.25: Control monitoring and review activities assigned to organizational levels ...... 94.
Figure 5.26: Example of control specification, monitoring and verification information ..... 94.

Figure 6.1: Extended version of ISO31000 risk management process ................................ 97.
Figure 6.2: Three lines of defence model ............................................................................ 97.
Figure 6.3 Risk management process ................................................................................... 98.
Figure 6.4: Prospective and retrospective risk analysis ...................................................... 99.
Figure 6.5: Range of event outcomes experienced in industry ........................................... 99.
Figure 6.6: Reasons for investigating different types of events ........................................ 100.
Figure 6.7: Illustration of distribution of recurring vs novel events .................................. 100.
Figure 6.8: An example of a timeline ................................................................................. 106.
Figure 6.9: Illustration of 5 whys analysis .......................................................................... 107.
Figure 6.10: Blank fishbone diagram .................................................................................. 108.
Figure 6.11: Example of fishbone diagram ......................................................................... 108.
Figure 6.12: The HFACs framework .................................................................................... 109.
Figure 6.13: Bowtie highlighting presence and effectiveness of risk controls during a fuel
tanker overfilling incident ............................................................................... 117.
Figure 6.14: Generic accimap ............................................................................................. 118.
Figure 6.15 Accimap showing missing control assurance elements for tanker overfilling
incident ........................................................................................................... 119.



List of Tables

Table 1.1 – Top 10 risk concerns for industry ...................................................................... 11.
Table 2.1 – Examples of risk based standards and industry guidelines ................................ 20.
Table 2.2 – Two approaches to modern risk management .................................................. 29.
Table 3.1 – Summary of the Warren Centre PPIR protocol for professional performance .. 38.
Table 3.2 – Meaning of obligation, accountability and responsibility .................................. 39.
Table 4.1 –Estimates of human error as a percent of all failures ......................................... 50.
Table 4.2 – Range of possible decision making strategies ................................................... 62.
Table 5.1 – Scope table populated with examples from ship to shore fuel transfer ........... 71.
Table 5.2 – Two approaches to modern risk management .................................................. 72.
Table 5.3 – Definitions for risk identification ....................................................................... 73.
Table 5.4 – Definitions for risk analysis ................................................................................ 74.
Table 5.5 – Definitions for risk evaluation ............................................................................ 77.
Table 5.6 – List of risk assessment techniques ..................................................................... 80.
Table 5.7 – Examples of hazard identification technique guidelines ................................... 82.
Table 5.8 – Examples of unacceptable risks and unwanted events from industry .............. 86.
Table 6.1 – Requirements for methods for accident investigation and analysis ................ 103.
Table 6.2 – Summary of incident investigation techniques ............................................... 104.
Table 6.3 – Tools and/or processes used by current practitioners in investigations ......... 104.
Table 6.4 – HFACs item examples from the airline, healthcare and defence industries .... 111.
Table 6.5 – Generic strategies ............................................................................................ 121.
Table 6.6 – Example of simplified SAfER analysis for filling a fuel tanker .......................... 122.
Table E1: Criteria for assessing quality of risk management system ................................. 128.



Acknowledgements

This book is a resource that captures the important foundational thinking that is required to
effectively identify and manage risk in the process industries. It has evolved out of the need
to provide engineering students and others with a reference that introduces the key risk
management attributes that need to be understood in order to effectively manage risk in
contemporary industry contexts. The book was originally devised to complement risk and
impact courses taught at The University of Queensland. Its content and structure have been
shaped by those involved in teaching the content, and we especially thank:
• Robert Hannah
• Clive Killick
• Andrew Murphy
• Kelly Smith
• Chris Lilburne
• Jannie Groves
• Mathew Hancock





































SECTION A: INTRODUCTION




[Section opener: pillar diagram showing the scope of risk management (see Figure 1.11)]






















Chapter 1: Risk in the Process Industries

[Chapter opener: pillar diagram showing the scope of risk management (see Figure 1.11)]

“Risk is a situation or event where something of human value (including
humans themselves) is at stake and where the outcome is uncertain”
(Rosa, 1998, 2003 cited in Aven, Renn, & Rosa, 2011, p. 1074)

1.1. Introduction to Risk in the Process Industries


The sustainability of processing industries is being threatened by the inability to address
current threats and future opportunities and uncertainties that can impact operational
competitiveness. Australian industry is experiencing adverse competitiveness, productivity
and safety performance trends, as shown in Figures 1.1 to 1.3. Australia’s global competitiveness
ranking has slipped from 15 to 22 (Figure 1.1). The multifactor
productivity for mining and manufacturing is below the average for the market sectors
(Figure 1.2). Mining productivity is declining while manufacturing productivity seems to
have plateaued. The safety performance of industry seems to have plateaued across a
number of safety measures including worker fatalities (Figure 1.3) and process safety losses
(Figure 1.4). In addition, processing related industries are facing increasing interest and in
some cases opposition from individual members of the public and from groups within
society.










Figure 1.1: Australia’s Global Competitiveness Rank (Source: www.trackingeconomics.com)
Figure 1.2: Australia’s Multifactor Productivity Index (Source: Bell et al., 2014)













Figure 1.3: Australian worker fatalities in oil, gas, mining and manufacturing (source: SafeWork Australia)
Figure 1.4: Asset damage in upstream oil and gas (source: www.march.com.tr)


These trends are occurring in a business context where sustaining leading-edge
performance is becoming more challenging because of increasing risks and uncertainties
which are being driven by the following:
1. Faster rate of change which is making the future more difficult to predict and gives
business less time to respond (Elahi, 2010; Withers, Gupta, Curtis, & Larkins, 2015).
Examples of the increasing rate of change can be found in the adoption rates for new
technology as shown in Figure 1.5.


Figure 1.5: Adoption of technology by US households (created by Nicholas Felton of the New York Times and retrieved from https://hbr.org/2013/11/the-pace-of-technology-adoption-is-speeding-up)


2. Increasing complexity of business processes, technologies and supply chains which
increases the chance of hidden faults and unexpected outcomes (Elahi, 2010; World
Economic Forum, 2015).
3. Globalization which is increasing the interactions and interdependencies between
businesses and means that risks and impacts from one region or business can quickly
spread to others (Elahi, 2010; Withers et al., 2015; World Economic Forum, 2015).
4. Changing regulatory requirements (Allianz, 2015; Aon, 2014).
5. Increasing cost competitiveness of import alternatives and the corresponding
loss of local suppliers and customers (Allianz, 2015; Aon, 2014; Mooney, 2014).


6. Increasing stakeholder expectations (Allianz, 2015; Ernst & Young, 2015) as illustrated
with the sample of media stories shown in Figure 1.6.


Figure 1.6: Sampling of recent media coverage relating to the mining industry



Significant resources have been invested to improve industry’s operating performance.
Recent research findings into industry operational performance include the following:
1. Most risks faced by companies emerge from combinations of, and changes in,
technology, human and environmental factors. So approaches to risk management need
to reflect that systems have become more complex (non-linear) and more integrated.
2. Humans are crucial to effective risk management. Humans can be the risk analyser, risk
controller and/or risk perceiver. Adopting a human-centred approach to risk
management is needed to deliver real improvements in risk identification,
understanding, analysis, control, communication and governance. However, most risk
management approaches used in industry focus on technology, procedural compliance or
financial outcomes. Expanding risk management to include and leverage the human
factor should deliver more effective risk identification, control and oversight.
3. Most industrial incidents and production disruptions are the result of foreseeable, repeat
or reoccurring events. Moreover, investigations of these incidents usually do not find
new causes for the events. So risk management needs to expand to include an increased
focus on the selection, optimisation and management of human and technological risk
controls to prevent adverse outcomes and to ensure beneficial outcomes.
4. To achieve step-change improvements in risk management and to effect sustainable
performance improvements using risk-based approaches, it is necessary to build good risk
management capability.


5. Effective management of industry’s risks and uncertainties drives competitive advantage
as evidenced by Ernst & Young research which found that companies with more mature
risk management practices generated high growth as shown in Figure 1.7 (Ernst & Young,
2013).

















Figure 1.7: Quality of risk management versus companies’
financial performance (Ernst & Young, 2013)

Thus better management of risk is a priority area for industry. Successfully addressing these
risk challenges will require a fundamental understanding of human performance,
operational risks, and enterprise risks and impacts in order to determine how best to
optimise them.
This reference has been written to introduce engineers (and other processing industry
professionals) to the fundamentals of risk management that you need to successfully identify
and address the range of contemporary risks that challenge the sustainable competitiveness
of companies operating within the process industries. In particular, it describes the risks that
matter and how you should manage them.

1.2. What is risk?


You have undoubtedly been exposed to the concept of risk. You are, of course, managing
risk every day as you make your way through life. But what does “risk” really mean?
There are many definitions of risk. The international standard for risk management - ISO
31000:2018 - defines risk as the ‘effect of uncertainty on objectives’. In this guide we use a
more detailed definition of risk which is ‘uncertainty that matters because it can affect the
attainment of objectives’. “Risk is a situation or event where something of human value is at
stake and where the outcome is uncertain” (Rosa, 1998, 2003 cited in Aven et al., 2011, p.
1074). Risk is created by variability, incomplete knowledge as well as known and unknown
threats and opportunities.
Some key risk fundamentals include the following:


1. Risk exists when there is a possibility that unexpected things may happen.
2. Risk is NOT bad. Indeed, it is only by taking, and managing, risk that companies can
provide the products and services that society demands, and provide commercial
returns to their shareholders.
3. Managing risk is really about forming perceptions, making decisions and taking
actions.
The very nature of this definition has probably already got you asking several pertinent
questions about risk. What is uncertainty that matters? What uncertainties? How do I
know if it matters? How do I know if I have done enough? Matters to whom? These are all
good questions. We hope that this reference will help you to answer them.

1.3. Why is risk management so important?


ISO 31000:2009 ‘Risk Management – Principles and Guidelines’ describes some pertinent
reasons why risk management is important. These include:
• Risk management creates and protects value
• Risk management is part of decision-making
• Risk management explicitly addresses uncertainty
• Risk management takes human and cultural factors into account
• Risk management facilitates continual improvement of the organization

Risk management is also a critically important aspect of professional engineering practice.
Engineering is about doing things! It is about understanding the foundational knowledge,
learning how to apply tools and techniques and being able to do so in wise and professional
ways in order to identify and solve real-world challenges. Thus good risk management
requires the development and use of three tiers of knowledge described by Aristotle (the
Greek philosopher) and Peter Oliver and Bill Dennison (Oliver & Dennison, 2013) as shown
in Figure 1.8. This book aims to build readers’ episteme and sophia – their
understanding of knowledge and systems of knowledge usually described in risk
management books. It also focuses on informing readers about risk tools and techniques
that can be used to solve problems. However, ultimately good risk management is about
building phronesis and demonstrating praxis – it is about building practical wisdom and
demonstrating professional practice.


[Figure: three-tiered pyramid - top: Professional practice = Phronesis & Praxis (practical wisdom and thoughtful doing); middle: Engineering tools = Techne (applying knowledge); base: Underpinning knowledge and understanding = Episteme & Sophia (book smarts and systems of knowledge)]

Figure 1.8: Three tiered system of knowledge


Using phronesis and praxis to identify and manage risks is about making decisions with
incomplete knowledge and taking actions to manage uncertainties. It is fundamentally
about you, your attitudes, your abilities and your professionalism. It is about your ability to
form perceptions, make decisions and take actions, as shown in Figure 1.9.

[Figure: concentric rings around ‘Uncertainty that matters!’, flanked by YOUR DECISIONS and YOUR ACTIONS; ring labels include risk detection and assessment, treatment of risks, review changes in risks, human performance, technical performance, financial performance, and legal & regulatory compliance; external influences: community & social impact, human/asset health, safety & security, economic impact, environmental impact]

Figure 1.9: The influences and impacts on/of industrial risks


Figure 1.9 highlights that risk is the uncertainty that matters and it is managed by human
decisions and actions as shown in the center of the diagram and with the inner green ring.


Human decisions and actions impact on and are impacted by the risk management tools and
techniques selected, influences and advice provided by others, and professional ethics and
standards that specify the quality of output produced as shown in the middle blue ring. The
decisions, actions and work produced to manage risks and uncertainty impact and are
impacted by an organisation’s human performance, technical performance, financial
performance and its ability to comply with legal and regulatory requirements, as shown in
the outer red ring. Lastly, the internal workings of an organization, as shown in all three
rings, impact and are impacted by external factors including community and social factors,
human health and safety, asset security, economic conditions, and environmental
conditions.
Risk management is a core competency of all engineers. Engineers are employed to
“facilitate continual improvement of the organization”. To do so, professional engineers are
responsible for the identification, assessment and management of risks associated with all
aspects of a business. This includes the planning and execution of projects, the operation
of processing plants, and the management of other aspects of the business, including its
environment and community interactions, to obtain and sustain operational excellence.


1.4. What types of risk should engineers consider?


Many different types of risks can impact a business. Some of these risks have been
identified and ranked by the World Economic Forum (2016) as shown in Figure 1.10.


Figure 1.10: Global risk landscape (retrieved from
http://www3.weforum.org/docs/GRR/WEF_GRR16.pdf )


Many surveys have been conducted to identify the range of risks that can impact businesses. A
summary of the results from some of the surveys that relate to the processing industries is
shown in Table 1.1.
Table 1.1: Top 10 Risk Concerns for Industry

Specific industry risks:
• Manufacturing 2016 (bdo.com) [1]: 1. Supplier/vendor concerns; 2. Regulations; 3. Labor concerns; 4. Competition and consolidation; 5. Commodity/raw material prices; 6. General economic conditions; 7. Environmental laws and regulations; 8. International operations and sales threats; 9. Breaches of technology; 10. Innovation impacts.
• Mining and metals 2016-2017 (ey.com) [2]: 1. Cash optimisation; 2. Capital access; 3. Productivity; 4. Social license to operate; 5. Transparency; 6. Switch to growth; 7. Access to energy; 8. Joint ventures; 9. Cyber security; 10. Competition.
• Oil and gas risks (energydigital.com) [3]: 1. Volatile prices; 2. Regulatory and legislative change and cost; 3. Inability to expand/find new reserves; 4. Operational hazards; 5. Natural disasters/extreme weather; 6. Inaccurate reserve estimates; 7. Liquidity and access to capital; 8. Environmental restrictions/regulations; 9. Economic concerns; 10. Currency risks.

Overall business risks:
• ey.com [4]: 1. Pricing pressure; 2. Cost cutting and profit pressure; 3. Market risks; 4. Economic risk: weaker/more volatile world growth outlook; 5. Managing talent and skill shortages; 6. Regulation and compliance; 7. Expansion of government’s role; 8. Emerging technologies; 9. Political shocks (war, upheaval); 10. Sovereign debt: fiscal austerity.
• agcs.allianz.com [5]: 1. Business interruption, supply chain risk; 2. Market development (e.g. volatility, competition); 3. Cyber incidents (e.g. crimes, failures); 4. Natural catastrophes; 5. Changes in legislation and regulation; 6. Macro-economic developments; 7. Loss of reputation or brand value; 8. Fire, explosion; 9. Political risks; 10. Theft, fraud, corruption.
• aon.com [6]: 1. Damage to reputation/brand; 2. Economic slowdown/slow recovery; 3. Regulatory/legislative changes; 4. Increasing competition; 5. Failure to attract/retain top talent; 6. Failure to innovate/meet customer needs; 7. Business interruption; 8. Third-party liability; 9. Computer crime; 10. Property damage.


[1] Retrieved from https://www.bdo.com/insights/industries/manufacturing-distribution/2016-bdo-manufacturing-riskfactor-report
[2] Retrieved from http://www.ey.com/Publication/vwLUAssets/EY-business-risks-in-mining-and-metals-2016-2017/%24FILE/EY-business-risks-in-mining-and-metals-2016-2017.pdf
[3] Retrieved from http://www.energydigital.com/utilities/2259/Top-20-Risk-Factors-Facing-the-Oil-Gas-Industry
[4] Retrieved from http://www.ey.com/Publication/vwLUAssets/Business_Pulse_-_top_10_risks_and_opportunities/$FILE/Business%20pulse%202013.pdf
[5] Retrieved from http://www.agcs.allianz.com/assets/PDFs/Reports/AllianzRiskBarometer2016.pdf
[6] Retrieved from http://www.aon.com/2015GlobalRisk/attachments/2015-Global-Risk-Management-Report-230415.pdf


From the WEF survey and other information, a range of modern risks that impact the
process industry can be identified. The scope of these risks is shown pictorially with the
pillar diagram in Figure 1.11 (adapted from M. Hassall, Hannah, & Lant, 2015).

[Pillar diagram - apex (the objective): Sustainable Operational Excellence; upper band (the focus areas): Stakeholder, Reputation & Political Risk Management; pillars: Safety & Health, Environmental Impacts, Financial Performance, Projects/Contractors, Cybersecurity, Social Impacts, Supply Chains; cross-cutting Key RM Activities: Identify, Assess and Treat Risks, and Monitor and Review Risk Mgt; base (the Foundations): Fundamentals of Risk Mgt, Professional Practice, Humans and Risk]

Figure 1.11: Scope of risk management


At the bottom of the diagram are the foundations that underpin modern risk management.
We have broken the foundations into three core components: fundamentals of risk
management, professional practice, and humans and risk. These three topics cover the
foundational information that underpins how we perceive and manage risk. To the left of
the figure are the two key activities of risk management, namely identify, assess
and treat risks, and monitor and review risk management. These two activities are
illustrated to ‘cut across’ the major risk factors that are shown as pillars in the centre of the
figure. The pillars represent the range of operational risks that need to be managed in order
for an organization to achieve its objectives. A risk-based approach is needed to optimise
the trade-offs between these risks in such a way as to reduce the likelihood of negative
outcomes and increase the likelihood of positive outcomes. Sitting on top of the pillars are
the external factors that need to be managed if an organization is to achieve its objectives.
In the diagram these factors have been labeled ‘stakeholder, reputational and political risk
management’. At the top or apex of the diagram is the objective of risk management work
and of organisations themselves, namely to achieve sustainable operational excellence.


1.5. Case studies of engineering decisions
As an engineer, you will be confronted with decision-making on a daily basis that may focus
on one or several different aspects of risk. These decisions will be in the face of uncertainty.
They will also require you to rely on a lot more than just technical experts as highlighted
with the following case studies.

Case Study 1:
You are in a design team which is developing a conceptual design for a proposed succinic
acid plant. As part of your scope of work, you are required to design a hydrochloric acid
storage tank. The HCl is a feed for a fermentation process where the acid is dosed to
control the pH. You have to specify the size of the tank. What factors would you consider in
determining the size of the storage tank?

Case Study 2:
You are part of a design team that is designing a scrubbing system which is part of a high
pressure venting system from a liquid storage tank. The liquid (MIC) is a highly volatile
compound. It is produced onsite as an intermediate, and is subsequently reacted to produce
a pesticide. The scrubbing system is a safety system that is required in the unlikely event
that there will be a release of vapour from the storage tank. The design team has identified
two possible options for the scrubbing system, option A and option B. Option B is
significantly more expensive because it also includes a flare after the scrubber to burn off
any of the gas that might not be removed in the scrubber. The design team has assured you
that, in the unlikely event there will be a release of gas through the vent, the scrubber can
absorb the gas to the specified level. How would you decide which option to select?

Case Study 3:
You are a junior engineer working at a large factory. Your factory has a wastewater stream
which is discharged to the sewer (this is called trade waste). Your company pays a fee to
discharge the trade waste. They paid more than $1million last year in trade waste charges.
The trade waste specifications are regulated by the trade waste agreement that your
factory has with the local council, and the trade waste is monitored by trade waste officers.
It is your job to manage the operations of the wastewater treatment plant. You have
noticed that the trade waste officers collect their samples on regular days each month, and
on those days the effluent is within the specifications, largely because the operators are
modifying the operations on those particular days to improve the water quality (e.g. by
blending with higher quality effluent that has been stored in a spare storage tank). You are
also aware that the factory tends to run batches that result in lower quality effluent on the
days when the trade waste officers are not in attendance. The factory has very few licence
breaches. What would you do?

Case Study 4:
You work for a large oil and gas company. The company is in the early stages of a new CSG
project in western Queensland. You have been working in the project team that has been
evaluating disposal options for the water that is produced from the gas wells. As part of
your role, you have been requested to attend a public meeting in the local community.
Apparently there is some unrest, as the local farmers are concerned about the impact of the
gas processing on their farm, and several have reported gas being emitted from
groundwater on their properties. What would you do?


1.6. Summary
Risk is ‘uncertainty that matters because it can affect the attainment of objectives.’ In
process industries these objectives are often associated with safety, health, project and
contractor management, supply chain, environmental and social impact, political and
financial performance.
Risk is created by variability, incomplete knowledge as well as known and unknown threats
and opportunities. Managing risks in order to deliver optimum performance is what
engineers do! So as an engineer, you will be constantly faced with making decisions in
environments that are ‘under-specified’ – that is, there is no one correct solution to the
problem.
Risk management is the approach that engineers can use to create and protect value,
explicitly address uncertainty, take human and cultural factors into account and to facilitate
continual improvement of the organization. This book has been written to help engineers
understand the scope and approaches for managing contemporary risks in the process
industries. The reference has been set out to follow the structure of the pillar diagram with:
SECTION A: OVERVIEW
Chapter 1: Introduction
SECTION B: THE FOUNDATIONS
Chapter 2: Fundamentals of risk management
Chapter 3: Professional Practice
Chapter 4: Humans and Risk
SECTION C: KEY RISK MANAGEMENT ACTIVITIES
Chapter 5: Identify, assess and treat risks
Chapter 6: Monitor and review risks
SECTION D: EPILOGUE

After completing this reference you should be able to:
• Understand, articulate and effectively apply the principles and
processes of modern risk management
• Apply the principles of selected risk analysis and sustainability techniques in an accurate
and effective manner
• Analyse the performance of future, current and historical projects to identify the OH&S
and environmental risk management issues and articulate their performance in terms of
modern risk analysis, regulatory and sustainability frameworks
• Communicate to both a technically astute and a general audience all technical, human
and environmental issues associated with the risk management of specific projects. This
communication will be both oral and written and will encompass both project analysis
and the synthesis of solutions. This will require you to work individually and as an
effective member of an engineering team.
• Use critical thinking and planning skills to analyse all potential technical and
environmental risks and propose solutions to mitigate or minimise these risks
• Demonstrate a sound understanding of the centrality of ethical and legal considerations
when making technical decisions in all aspects of risk management. Make sound
decisions when applying these principles to case studies and projects.


• Understand and be able to articulate your core accountabilities and obligations
associated with risk management and professional engineering
• Understand how sustainability is presently articulated and applied in the processing
industry and the role that risk management should play in obtaining and maintaining
sustainable operations.
• Be able to use risk management approaches to apply, at an introductory level, these
sustainability principles to actual projects and operations.




















SECTION B: THE FOUNDATIONS




[Section opener: pillar diagram showing the scope of risk management (see Figure 1.11)]


















Chapter 2: Fundamentals of Risk Management

[Chapter opener: pillar diagram showing the scope of risk management (see Figure 1.11)]

“If one’s fate is predetermined, there is no need for anticipating future outcomes
and the term risk makes no sense” (Renn, 1992, p. 56). If the future isn’t predetermined
the term risk denotes the possibility of an unexpected outcome.

2.1. Introduction to risk fundamentals


Risk is uncertainty that matters because it can affect an organisation’s ability to achieve its
objectives. Most organisations have two broad objectives: (i) to preserve their existence by
preventing disasters and adverse events from happening, and (ii) to continue to improve
and to capture opportunities in order to realise their potential. The achievement of these
objectives can be thwarted by known and unknown threats, incomplete knowledge,
variability or change. Achieving and exceeding the objectives can be achieved by adopted
risk and resilience based management approaches as shown in Figure 2.1. This reference
focuses on risk management rather than resilience enhancement approaches.


Figure 2.1: Scope of risk and uncertainty management


Risk management is about identifying, assessing and treating the uncertainties that
matter because they can affect objectives as stated in the standard ISO 31000 (ISO 31000,
2009). Risk management is inherently future focused – it seeks to identify those future
uncertainties that can affect objectives. In this chapter we cover the following fundamentals
of risk management for process industry contexts:
• Understand the risk management process (ISO31000).
• Know the core risk management terms and definitions.
• Learn about the two main risk management approaches used in process industries
• Review some case studies on the application of risk management in the process
industries.

2.2. The Risk Management Process


The international standard ‘AS/NZS ISO31000:2009 Risk Management – Principles and
Guidelines’ describes the risk management framework and key processes, and defines the
key language underpinning risk management (ISO 31000, 2009). There are also many other
standards and industry guidelines that provide guidance on risk management. Examples are
shown in Table 2.1. These standards seek to set out vocabulary and criteria for achieving
consistency and reliability of activities associated with the identification, analysis,
evaluation, treatment and overall management of risk. It is recommended that people read
and reference the standards and industry guidelines relevant to their work. The remainder of
this subsection provides an overview of ISO31000.

Table 2.1: Examples of risk-based standards and industry guidelines

International standards:
• IEC31010 – Risk management – risk assessment techniques
• ISO22000 – Food safety management standard
• ISO27000 series – Security risk management standards
• ISO Guide 73 – Details the vocabulary and definitions for generic risk management terms
• OHSAS 18001 – British standard for occupational health and safety management systems

Industry guidelines:
• COSO Enterprise risk management – integrated framework – The Committee of Sponsoring Organizations of the Treadway Commission (COSO) guide that defines a framework and essential elements for managing enterprise risk
• Risk Management: Leading practice sustainable development program for the mining industry – Risk management handbook for mining published by the Australian Government
• Hazard identification and risk assessment – Guidance note published by NOPSEMA
• MDG1010 Minerals industry safety and health risk management guide – Industry risk management guide published by the NSW Government


The risk management process described in ISO31000:2009 is depicted in Figure 2.2. Figure
2.2 is simple and a useful reference to begin to understand some of the fundamentals of risk.
It highlights that managing risk first requires an assessment of the risks and then a
determination of how to treat them. However, in the process industry we have learned from
incidents that equal emphasis needs to be put into risk assessment activities and risk
treatment activities, as shown in Figure 2.3. As these figures highlight, establishing the scope
of risk management activities is a very important first step. This step involves determining
what will be considered within the scope of the risk management activities and what will be
considered out of scope. Using a framework like PLEAS (P = people, L = locations,
E = equipment and plant, A = activities, S = scenarios, and T = timeframes) can help ensure a
more complete range of factors is considered.



Figure 2.2: The Risk Management Process (AS/NZS ISO31000:2018)



Figure 2.3: Extended risk management process for process industries
(adapted from M. E. Hassall, Joy, Doran, & Punch, 2015)
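As a concrete illustration of the scoping step described above, the short sketch below records in-scope and out-of-scope items against the PLEAS-style categories. It is only an assumed, minimal example: the entries are hypothetical, loosely modelled on the ship-to-shore fuel transfer example used later in this book, and none of the names are prescribed by the standard.

```python
# A minimal, hypothetical sketch of recording scope against PLEAS-style prompts.
SCOPE_CATEGORIES = ["people", "locations", "equipment and plant",
                    "activities", "scenarios", "timeframes"]

# One in-scope and one out-of-scope list per category.
scope = {category: {"in_scope": [], "out_of_scope": []} for category in SCOPE_CATEGORIES}

# Example entries (invented) for a ship-to-shore fuel transfer study.
scope["people"]["in_scope"].append("Transfer operators and contractors")
scope["activities"]["in_scope"].append("Hose connection, fuel transfer and disconnection")
scope["activities"]["out_of_scope"].append("Road tanker loading at the terminal")

# Flag any category that has not been considered at all, so nothing is silently ignored.
unaddressed = [c for c, lists in scope.items()
               if not lists["in_scope"] and not lists["out_of_scope"]]
print("Categories still to be scoped:", unaddressed)
```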


Once the context has been established, the next step in the risk management process is risk
assessment: identifying, analysing and evaluating the risks. The risk identification step is done
to identify the sources of uncertainty that matter. This includes identifying the threats that
could harm an entity’s performance and/or the opportunities which could produce better
than expected performance outcomes. After the risks are identified, the next step is risk
analysis. Risk analysis involves determining the range of consequences or impacts that might
result if the risk materialises and the likelihood that these different consequences will occur.
The outcome of the risk analysis step is a risk rating, derived from multiplying the estimated
likelihood by the consequences. The risk rating number is used to evaluate the risk. Risk
evaluation is a process for determining whether the risk is acceptable or tolerable as is, or
whether some treatment of the risk is needed to make it acceptable or tolerable. Often risk
analysis and evaluation guidance is provided within a risk matrix, as shown in Figure 2.4. More
details on risk assessment processes are described in Chapter 5.

[Figure 2.4 shows an example risk ranking matrix with the following content:]

Likelihood columns:
• A – Rare: not expected to occur (but has occurred in industry)
• B – Unlikely: might occur once in the entity’s life
• C – Moderate: could occur every 5-10 years
• D – Likely: could occur every 1-5 years
• E – Almost Certain: could occur monthly, weekly or daily

Impact rows (OH&S; asset damage; environment; reputation/legal) with risk ratings for likelihoods A to E:
• 5 Catastrophic (fatalities; > $50m; widespread serious long-term effect; serious adverse impact): 15, 15, 20, 25, 50
• 4 Major (permanent, serious disability; $10m - $50m; serious medium- to long-term effect; widespread moderate impact): 5, 10, 10, 20, 25
• 3 Moderate (moderate irreversible impairment; $1m - $10m; moderate short- to medium-term effect; localised (Qld) moderate impact): 3, 5, 10, 10, 20
• 2 Minor (minor recoverable impairment; $100k - $1m; minor short-term effect; localised (Qld) minor impact): 2, 5, 5, 5, 10
• 1 Insignificant (first aid / minor; < $100k; low level / no lasting effect; no impact): 1, 2, 3, 5, 5

Risk rating bands:
• 15-50 High: Unacceptable risk – operations not to continue until risk is reduced.
• 10-14 Significant: Tolerable risk (only if further risk reduction is impracticable or its cost is grossly disproportionate to the improvement gained); ALARP Band 1 – action as a high priority to reduce risk; assign a senior manager responsible to action, monitor and review; if interim measures are installed, monitor closely and continuously.
• 4-9 Medium: ALARP Band 2 – action to reduce risk where possible; assign a manager responsible to continuously monitor and review.
• 1-3 Low: Broadly acceptable – generally acceptable; manage with regular monitoring and review.

Figure 2.4: Example of a risk matrix
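The sketch below is an illustration only (not part of ISO 31000 or this book’s method) of how the analysis and evaluation steps could be expressed in code using the example matrix in Figure 2.4: the rating is looked up from the impact level and likelihood column, and then mapped onto the acceptability bands. The cell values and bands are transcribed from the figure; the function and variable names are hypothetical.

```python
# Risk ratings from the example matrix in Figure 2.4, keyed by (impact level, likelihood column).
RISK_MATRIX = {
    5: {"A": 15, "B": 15, "C": 20, "D": 25, "E": 50},  # Catastrophic
    4: {"A": 5,  "B": 10, "C": 10, "D": 20, "E": 25},  # Major
    3: {"A": 3,  "B": 5,  "C": 10, "D": 10, "E": 20},  # Moderate
    2: {"A": 2,  "B": 5,  "C": 5,  "D": 5,  "E": 10},  # Minor
    1: {"A": 1,  "B": 2,  "C": 3,  "D": 5,  "E": 5},   # Insignificant
}

# Evaluation bands from Figure 2.4: (lower bound, upper bound, label).
BANDS = [
    (1, 3, "Low - broadly acceptable"),
    (4, 9, "Medium - ALARP Band 2"),
    (10, 14, "Significant - ALARP Band 1"),
    (15, 50, "High - unacceptable"),
]

def analyse_risk(impact: int, likelihood: str) -> int:
    """Risk analysis: look up the rating for an impact level (1-5) and likelihood column (A-E)."""
    return RISK_MATRIX[impact][likelihood]

def evaluate_risk(rating: int) -> str:
    """Risk evaluation: map a rating onto the acceptability bands."""
    for low, high, label in BANDS:
        if low <= rating <= high:
            return label
    raise ValueError(f"Rating {rating} is outside the defined bands")

# Example: a Major (4) impact that could occur every 1-5 years (column D).
rating = analyse_risk(4, "D")
print(rating, evaluate_risk(rating))  # 20 High - unacceptable
```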


If it is determined that action is required, then the risk treatment step involves identifying
event scenarios, selecting and optimising risk controls, and determining the control
management plans needed to ensure the risk controls are implemented, monitored and
maintained so the risk is prevented or mitigated to an acceptable level. The event scenarios
are often extracted from the risk assessment phase along with their causes and
consequences. Controls are then selected to address each cause, to prevent or reduce the
likelihood of unwanted events occurring or to enhance or increase the likelihood of wanted
events occurring. Controls are also selected to mitigate the consequences to acceptable
levels should the event scenario occur. The control management plan sub-step involves
evaluating the management systems and activities required to ensure the controls are
implemented, monitored and maintained in a manner that ensures they would work as
required, when required, and are effective at addressing the risk to an acceptable level.
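A minimal sketch of how such an unwanted event, its controls and the associated control management activities might be recorded is shown below. It is an assumed illustration only, not the book’s prescribed data model: the dataclass names are invented, and the tanker overfilling entries are hypothetical examples.

```python
# Hypothetical data structures for recording the outputs of the risk treatment sub-steps.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    description: str
    kind: str  # "preventive" (acts on a cause) or "mitigating" (acts on a consequence)
    management_activities: List[str] = field(default_factory=list)  # implementation, monitoring, maintenance

@dataclass
class UnwantedEvent:
    description: str
    causes: List[str]
    consequences: List[str]
    controls: List[Control] = field(default_factory=list)

# Hypothetical example: overfilling of a fuel storage tank during a transfer.
event = UnwantedEvent(
    description="Overfilling of fuel storage tank",
    causes=["Level gauge failure", "Operator distraction"],
    consequences=["Spill to bund", "Fire / explosion"],
    controls=[
        Control("Independent high-level alarm and trip", "preventive",
                ["Proof-test every 12 months", "Record results in maintenance system"]),
        Control("Bund sized for full tank volume", "mitigating",
                ["Quarterly bund integrity inspection"]),
    ],
)

# Simple completeness check: both preventive and mitigating controls should be present.
preventive = [c for c in event.controls if c.kind == "preventive"]
mitigating = [c for c in event.controls if c.kind == "mitigating"]
print(len(preventive), "preventive and", len(mitigating), "mitigating controls recorded")
```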

The risk management framework (shown in Figures 2.2 and 2.3) also highlights two very
important activities, shown in the darker blue boxes on each side of the diagrams, namely
‘communication and consultation’ and ‘monitoring and review’. Communication and
consultation should take place with all stakeholders to get their input and to help them
understand the risks, the risk management requirements and their accountabilities in

delivering them. The monitoring and review processes focus on ensuring the quality and
effectiveness of risk management activities for current risks and for potential future changes
and challenges. The approaches, tools and techniques to use for each of these steps, and the
importance of these activities, will become more evident as we go through the reference.

There are several key words and phrases used in the ISO31000 process and in the above
Figure. These are some of the key words and phrases that we use in the language of risk.
These are defined in the following section.

2.3. The Risk Language



This section defines some of the key terms that we will be using throughout this book.

Risk is uncertainty that matters because it can positively or negatively affect objectives.

Risk identification = Identifying sources of uncertainty that matter.
• Hazard = A potential source of harm (e.g. electricity, gas at pressure, hot fluids).
• Threat = Something that can release a hazard (e.g. corrosion).
• Opportunity = Something that can lead to the exceeding of objectives or produce
better than expected performance outcomes

Risk analysis = an estimation of the likelihood x consequence of something happening.
• Likelihood = how often something might happen (e.g. 1/d, 1/y).
• Consequence = The outcome or impact of an event (e.g. injury, death, damage/loss
of assets, environmental destruction, social unrest, profit, growth, improved quality
etc).

Risk evaluation = the process of deciding whether a risk is acceptable or tolerable as is or
whether there needs to be some treatment of the risk to make it acceptable or tolerable.

Risk treatment = a process of identifying the controls and control management systems
needed to address the risks by preventing unwanted events or mitigating their
consequences to an acceptable level.

Unwanted event = an unplanned release of a hazard (e.g. loss of containment of a
hazardous material or loss of awareness of the situation).

Control = A device and/or human action that, of itself, will arrest or mitigate an unwanted
event sequence and whose performance is specifiable, measurable and auditable.

Prevention control = An object and/or human action that, of itself, prevents or hinders an
unwanted event.

Mitigating control = An object and/or human action that, of itself, reduces the severity of
the consequences of an unwanted event sequence.

Control management systems = the organizational activities required to ensure that a
control is implemented, maintained and operating effectively as required when required.


2.4. A Brief History of Risk Management

There are numerous ways that the history of risk management can be presented. In this course we look
at a summary of the modern history of risk management as it relates to the process industries. Risk
management techniques first emerged as a means of addressing poor technical performance and poor
safety performance. The changes in the management of technical and safety performance over time are
discussed by Hale and Hovden (1998), Borys et al. (2009), and Hollnagel (2011b). This history has
also been represented in diagrams, as shown in Figure 2.5.

Another perspective on the history of unwanted event management is shown in Figure 2.6. As
Figure 2.6 shows, the understanding of what causes unwanted events in industry has changed,
which has led to the development of different types of risk management techniques. After
the industrial revolution, the unreliable nature of plant and equipment often resulted in
unwanted events such as equipment malfunctions, plant breakdowns, and catastrophic
failures.

During this period risk assessment techniques were developed to help practitioners identify
and address potential technology failures. Some of these techniques included:
• Failure Mode Effects Analysis (FMEA): This analysis involves brainstorming the things
that could fail, following a structured step-by-step process. The first step is to select an
item to be analysed. The next step is to identify potential failure modes, which often
involves identifying the possible things that could cause the failures. The next step is
to determine the effects of the failure. These could be separated into local and system
effects, and might also be separated in terms of equipment, human, environmental and
organizational effects. The next step involves determining whether the risk
associated with the failure is acceptable or needs to be addressed. If it is to be
addressed, the last step is to identify the actions required to address the risk.
• Fault Tree Analysis (FTA): This type of risk assessment involves using deductive
thinking to decompose an unwanted event or undesired system state into its possible
causal sub-events using Boolean logic. FTAs are sometimes referred to as Cause
Diagrams. FTA is often used to quantify the risks associated with a given event or
system state (a minimal numerical example is sketched after this list).
• Event Tree Analysis (ETA): ETA involves developing a tree of the possible outcomes of
an event. It is sometimes referred to as a Consequence Diagram. ETA often involves
quantifying the different outcomes in terms of their probability of occurrence. More
detailed information on ETA is provided in Chapter 5.
• Hazard and Operability studies (HAZOP): This type of analysis involves selecting a
subsection of a process (a node) and systematically identifying and assessing the risks
that deviations in process parameters might present.
• Bowtie Analysis (BTA): This type of risk assessment involves visually representing an
unwanted event, its causes and consequences and the prevention and mitigation
controls that have been put in place to prevent the unwanted event or mitigate the
severity of its possible consequences. The bowtie diagram helps decision makers
determine whether they have adequate controls to address the risk.
Each of these techniques is explained in more detail in Chapter 5.
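To show how the Boolean logic of a fault tree turns into numbers, here is a minimal sketch that combines independent basic events through AND and OR gates to estimate a top-event probability. The event names and probabilities are hypothetical, and the independence assumption is a simplification that real FTA tools relax.

```python
# Minimal fault tree sketch: top-event probability from independent basic events.

def or_gate(*probs: float) -> float:
    """P(at least one input event occurs), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs: float) -> float:
    """P(all input events occur), assuming independence."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Illustrative basic-event probabilities (hypothetical numbers)
p_pump_fails        = 0.10
p_standby_fails     = 0.05
p_level_alarm_fails = 0.02
p_operator_misses   = 0.10

# Top event: uncontrolled high level =
#   (main pump fails AND standby pump fails) AND (alarm fails OR operator misses it)
p_no_outflow   = and_gate(p_pump_fails, p_standby_fails)
p_no_detection = or_gate(p_level_alarm_fails, p_operator_misses)
p_top_event    = and_gate(p_no_outflow, p_no_detection)

print(f"P(top event) = {p_top_event:.4f}")  # approximately 0.0006
```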


[Figure: a timeline from 1800 to the 2000s showing the successive ages of safety (the age of
technology, the age of human factors, the age of safety management, the integrationist age of
safety and the adaptive age of safety) overlaid on the four industrial revolutions (steam and
mechanisation, mass production, computerisation, and cyber-physical systems) and annotated with
20th century industrial disasters, from the Mt Kembla explosion (1902) through Bhopal (1984),
Piper Alpha (1988), Texas City (2005) and Deepwater Horizon (2010) to the Gazipur boiler
explosion (2016).]

Figure 2.5: History of process safety highlighting the different ages of safety


[Figure: the evolution of views on why unwanted events occur, from 1970 to 2017+. The focus
shifts from technology failures (addressed through engineering/hardware improvements, HSE
compliance and quality control), to "human error" (behaviour and management systems), to
management failures (organisational safety and risk culture, risk-based legislation, leadership
and individual accountability), to unwanted events that emerge from combinations of factors
(human-centred risk control). The associated techniques appear in sequence: FMEA (1950), Fault
Trees (1962), Event Trees (~1970), HAZOP (1977), Bowtie (1980), Swiss Cheese, numerous human
error techniques, behaviour based safety initiatives, Accimap (1997), HFACs (2000), STAMP (2003),
FRAM (2004), BLHAZID, SAfER and Critical Risk Control Management (2013).]

Figure 2.6: History of Industrial Performance (adapted from M. E. Hassall, 2014)


Results from these risk assessment techniques, combined with other initiatives, led to
improved technology and standards, which in turn led to a reduction in unwanted events.
However, the focus on improving reliability did not eliminate unwanted events, which
continued to occur at an unacceptably high level.

Analysis of the unwanted events that continued to occur highlighted that humans were
playing an important role in initiating and/or escalating these unwanted events. For
example, the Texas City refinery explosion occurred when operators overfilled a column,
releasing sufficient hydrocarbon to form a vapour cloud that found an ignition source and
caused a violent explosion (U.S. Chemical Safety and Hazard Investigation Board, 2007).
Another example is the Exxon Valdez oil spill, which happened when the third mate sailed the
ship outside the normal shipping lane and collided with Bligh Reef, causing a massive oil spill
in Prince William Sound (National Transport Safety Board, 1990). The attribution of incident
causes to “human error” led to the development of numerous – over fifty – risk assessment
techniques aimed at helping practitioners identify and address potential sources of “human
error” (Stanton et al., 2005). Examples of “human error” risk assessment approaches include
HEART (Williams, 1986), SHERPA (Embrey, 1986), SPEAR (CCPS, 1994), Human Error HAZOP
(Whalley-Lloyd, 1998), CREAM (Hollnagel, 1998), THEA (Pollock et al., 1999), LOPA-HF
(Shappell & Wiegmann, 2000b), TRACer (Shorrock & Kirwan, 2002), and human factors
checklists (Bellamy et al., 2008). Towards the end of this era, behaviour based safety
initiatives also began to emerge. Behaviour based safety programs focus on reinforcing
workers to behave safely and to see safety as their responsibility, not just a management issue
(Tuncel, Lotlikar, Salem, & Daraiseh, 2006).


The focus on human error and behaviour based safety led to the realization that most
human behaviour is shaped by organizational and system factors, as shown in Figure 2.7.
An example of organizational contributions to accidents is highlighted by the Challenger
space shuttle disaster, which occurred because of the decision to launch in unsafe conditions
when the actual temperature was below the safe operating temperature for the o-rings
(Committee on Science and Technology, 1986; Rogers et al., 1986).



Figure 2.7: Types of error
(retrieved from http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf )


This insight led to the development of risk assessment approaches to help practitioners
identify and address organisational factors that could cause unwanted events. Examples of
such approaches include the “Swiss Cheese” model (J. T. Reason, 2008), Accimap (Rasmussen
& Svedung, 2000), and HFACs (Shappell & Wiegmann, 2000a). These organisational factor
analyses identified improvements in safety management systems that might deliver further
reductions in unwanted events. More detailed information on these risk assessment
approaches for organizational management systems is provided in Chapter 5.

The focus on organizational systems and human behaviour led to the insight that
organizational culture – particularly its safety culture – is an important factor that can
contribute to, or avert, the occurrence and severity of unwanted events. Safety culture can
be defined as the shared safety or risk related perceptions, beliefs and behaviour-shaping
norms held by people within the workplace (Casey, Griffin, Flatau Harrison, & Neal, 2017;
Glendon & Stanton, 2000; Groeneweg, Hudson, Vandevis, & Lancioni, 2010). The
importance of culture was highlighted in the Baker Panel review of the BP Texas City
Refinery explosion, which found “BP did not instill a common, unifying process safety culture
. . . The Panel found instances of a lack of operating discipline, toleration of serious
deviations from safe operating practices, and apparent complacency toward serious process
safety risks”.


Approaches developed to improve the effectiveness of risk controls include STAMP based
approaches (Leveson, 2011; Leveson, Daouk, Dulac, & Marais, 2003) and Risk Control
Management approaches (M. E. Hassall & Harris, 2017; M. E. Hassall, J. Joy, et al., 2015;
ICMM, 2015a, 2015b). More details on Risk Control Management approaches are described
in Chapter 5.

All the above mentioned approaches have helped reduce the number of expected or
recurring unwanted events. To reduce unexpected and beyond-design unwanted events,
additional approaches are being developed to enhance organizational resilience. One
technique is the Functional Resonance Analysis Method (FRAM: Hollnagel, 2012), which seeks
to help analysts identify how normal variation within a system can lead to unexpected
outcomes. Another technique is Blended HAZID, which combines function-driven and
component-driven approaches to develop detailed structured representations of failure
causality in process systems (Seligmann, Németh, Hangos, & Cameron, 2012). A third
approach is Strategies Analysis for Enhancing Resilience (SAfER: M. E. Hassall, Sanderson, &
Cameron, 2014), which seeks to help analysts identify ways to improve system designs so
humans can better manage industrial operations across both normal and abnormal
situations. More details on these different approaches are provided in Chapter 5.

The prevention of unwanted events is referred to as the “loss prevention” or “precautionary”
approach to risk management. It is recognized that a sole focus on “loss prevention” can
undermine the competitive advantage of an organization. For example, according to The
Daily Mail, cafes in the UK were refusing to heat up baby food because it might burn a child’s
mouth (Martin, 2012), and according to The Telegraph, qualified electricians are required to
change fuses in kiosks at London’s Victoria Station (Timpson, 2012).

To gain and sustain leading edge operational performance, organisations need to consider
both the loss prevention and the risk optimisation approaches to risk management. These two
approaches are described next.

2.5. Two approaches to modern risk management



There are two dominant approaches to modern risk management, namely the ‘loss
reduction’ mindset and the ‘risk optimisation’ mindset as described in Table 2.2.

The loss reduction approach is the dominant historical view that focuses on the prevention
of negative outcomes, and which views risk as the chance or probability of loss or an
adverse outcome.

In contrast, the risk optimisation approach considers both the upside and downside
associated with uncertainty across a range of key performance areas (e.g. cost, safety,
environment, employee satisfaction, community relations etc). In reference to Table 2.2, a
leading HSE senior engineer from a global oil and gas company recently commented:
“My company would appoint people who think like the left side of the table, but we would
promote the people that think like the right side”


Table 2.2: Two approaches to modern risk management

Loss reduction mindset (Cameron & Raman, 2005) versus risk optimisation mindset (Hillson, 2010):

1. What can go wrong? What hazards and threats exist?
   vs. What are we trying to achieve? What are our key objectives?
2. What are the consequences if things go wrong?
   vs. What is the ‘uncertainty that matters’? Including both downside threats and upside
   opportunities.
3. What is the likelihood that things might go wrong?
   vs. Acknowledge that risk management is affected by perception and “zero risk” is
   unachievable and undesirable, so what is the appropriate level of risk to aim for?
4. Is the risk low enough to be acceptable, or is action required to lower the risk?
   vs. What actions are required to manage risks?
5. Have enough controls been implemented to prevent the unwanted events from occurring,
   or to mitigate the consequences if they do occur?
   vs. How are we going? What has changed? What have we learned?

The key concepts of risk management can be summarised, as Hillson (2010, p. 153) did, into
the following six concepts:

1. Risk is “uncertainty that matters” – but different things matter to different people
to a different extent in different circumstances.
2. Risk includes both downside (threats) and upside (opportunities) – both types of
risk need to be addressed proactively, in order to minimise threats and maximise
opportunities.
3. “Zero risk” is unachievable and undesirable – all aspects of life (including business
and projects) involve risk, so some degree of risk-taking is inevitable, but we should
only take appropriate risks in relation to the level of return we expect or require.
4. Risk has two key dimensions – uncertainty can be expressed as “probability” or
“frequency”, and how much it matters can be called “impact” or “consequence”.
5. Risk management requires an understanding of both dimensions – if the uncertain
event is very unlikely or it would have negligible effect, it requires less attention.
6. Risk management is affected by perception – answers to the questions “How
uncertain is it?” and “How much does it matter?” are subjective.

The basis of these six key concepts is that risk is intricately linked with human decision
making processes, which are discussed in detail in Chapter 4.

2.6. Case studies illustrating two approaches to risk management



Case Study 1: Process plant operation
A processing plant receives raw ore from the mine then crushes it, conveys, washes and
sizes it through vibrating screens before transferring it to product stockpiles. From ore


receival to product stockpile, the process involves 10 different conveyors transporting the ore
over more than a kilometre. Often, due to mine supply issues, there are times when there is no
ore supply for the plant. This leaves operators with the question: should the plant be kept
running, or should it be shut down if there is no ore for a specified period of time?

The answer to such a question could be determined using the loss prevention approach or the
risk optimization approach. If the plant is focused on availability – being able to process ore
as soon as it arrives from the mine – then the loss prevention approach would probably lead
to the decision to keep the plant running so it is more likely to be available when the ore
arrives from the mine. Such an approach would minimize the production losses associated with
situations where operators are unable to restart the plant in time to process ore when
supply resumes.

If a risk optimization approach was employed, what sort of factors should be considered?

Some factors that might be worth considering include the following (a simple way of weighing them is sketched after the list):
Time: Time before ore supply resumes, time it takes to shut down and start up plant.
Cost: Cost of running the plant versus cost of stopping and restarting plant.
Risk of not being available when ore supply resumes: Risk of breaking down without ore
versus risk of not being able to restart in time to process ore when supplies resume.
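One way to weigh these factors under a risk optimisation mindset is a rough expected-cost comparison. The sketch below uses entirely hypothetical numbers (the costs, restart time and delay probability are placeholders, not values from the case study) simply to show the shape of the trade-off.

```python
# Rough expected-cost comparison for the "keep running vs. shut down" decision
# during an ore supply outage. All numbers are hypothetical placeholders.

idle_running_cost_per_h = 2_000.0   # power, wear and labour while running empty ($/h)
shutdown_startup_cost   = 5_000.0   # one-off cost of stopping and restarting ($)
restart_time_h          = 1.5       # time needed to bring the plant back online (h)
lost_production_per_h   = 20_000.0  # margin lost while ore waits unprocessed ($/h)
p_restart_delay         = 0.10      # chance the restart is not ready when ore returns

def cost_keep_running(outage_h: float) -> float:
    """Expected cost of idling the plant for the whole outage."""
    return idle_running_cost_per_h * outage_h

def cost_shut_down() -> float:
    """Expected cost of shutting down, restarting, and the risk of a late restart."""
    expected_delay_loss = p_restart_delay * restart_time_h * lost_production_per_h
    return shutdown_startup_cost + expected_delay_loss

for outage in (2, 6, 12):  # hours without ore supply
    print(outage, round(cost_keep_running(outage)), round(cost_shut_down()))
# With these placeholder numbers, shutting down only pays off for longer outages.
```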

Case Study 2: The availability of safety PPE
At present most commercial aircraft are equipped with life jackets, and the attendants are
required to run through how to use the life jacket as part of the pre-takeoff safety talk
and demonstration. This is an example of the loss prevention risk management approach, as life
jackets on airplanes are intended to save lives if the aircraft ends up in the water. However,
“such a thing has never happened in modern commercial airline flying” (McCartney, 2016).
An article titled “Do planes really need life vests?”, published in the Wall Street Journal in
January 2016 (McCartney, 2016; http://www.wsj.com/articles/do-planes-really-
need-life-vests-1453310773), included the following quotes:
• In most crash landings, safety experts say, it’s more important to get out of the plane as
quickly as possible to avoid being trapped by a fire rather than take precious seconds to
find a life vest and try to put it on.
• Laboratory tests and actual emergencies have both shown that passengers will give up
and flee before actually finding a vest under their seat.
• Life vests are really only useful when there is advance warning of a water landing, when
planes without engine power glide down from high altitude and passengers have time
to find vests under the seat, open pouches and put them on in the cabin before hitting
the water. . . . and are “not intended to represent a forced landing on water. But for at
least several decades, water landings have universally been the sudden type, not the
planned variety.
• There’s also a psychological benefit: Passengers would think it ridiculous to travel over
an ocean without some type of emergency flotation device beyond life rafts.
• Vests weigh a little more than 1 pound each, so a medium-size jet has about 200
pounds of vests onboard. That weight increases fuel burn and emissions. . . . Eliminating
life vests might save more than 1 million gallons a year just at a large airline like
American, United or Delta.



Another cost: . . . . [is that the] vests disappear regularly—passengers steal them as
souvenirs, airlines say.
Question: Assuming the Wall Street Journal is correct, what factors would you consider
when seeking to determine the optimum trade-offs between the different risks?

2.7. Who is responsible for risk management


Risk management involves everyone at all levels of the organization, as shown in Figure 2.8.
Decisions at all levels of an enterprise determine how risks are identified, assessed and
treated. The outcomes of these decisions result in value being eroded, preserved or created.
In addition, people at the board and executive levels of an organisation are responsible for
defining risk, setting the entity’s risk appetite, establishing the risk management
frameworks, and assigning roles and responsibilities to execute the risk management system.
The importance of taking responsibility for managing risks is formally recognized in a number
of laws, regulations and professional competency requirements. The professional
responsibilities for managing risk in the process industries are explored in Chapter 3.


Figure 2.8: Organisational decision making (adapted from Rasmussen, 1997)


2.8. Summary
This chapter introduced readers to the risk management process (ISO31000) and core risk
management terms and definitions. It also provided an overview of the history of risk
management and highlighted that the future of risk management in the process industry
should involve two main risk management approaches – the traditional precautionary
approach and the value-adding risk optimization approach. Some generic and process
industry specific case studies were presented to highlight the different approaches to risk
management. The chapter concluded with a discussion highlighting that everyone in an
organization is responsible for making risk-based decisions, with upper management having
the additional responsibilities of establishing, monitoring and improving an organisation’s
risk management system. The material covered in this chapter comprises the important
foundations for the effective management of risk – having common definitions, a common
and appropriate risk framework or approach, and defined roles and responsibilities (Deloitte,
2009).

Chapter 3: Professional Practice

[Chapter banner: the course framework diagram showing the objectives, focus areas and foundations of risk management, with Professional Practice highlighted as one of the foundations.]

“The aim of a chemical engineer is to be of service to the community and society expects the
highest professional standards. Ethics lies at the heart of our discipline”
Institution of Chemical Engineers

3.1. Introduction
As mentioned in Chapter 1, risk management is a core competency of all engineers.
Engineers are employed to “facilitate continual improvement of the organisation”. To do so,
professional engineers are responsible for the identification, assessment and management
of risks associated with all aspects of a business. This includes the planning and execution
of projects, the operation of processing plants, and the management of other aspects of the
business, including its environment and community interactions, to obtain and sustain
operational excellence. Therefore professional engineering practice comprises one of the
three foundations for effective risk management within process industries.

This chapter examines the concept of process engineering professional practice. Specifically
this section should help you answer the following questions:
• What does it mean to be a professional engineer?
• Why is it important in managing process industry risk?
This chapter aims to help engineers manage their professional, process or project and
organisational related risks, by addressing the following learning objectives:
1. To understand what it means to be a professional engineer.
2. To understand your own professional risk - what are your obligations,
accountabilities and responsibilities?

3.2. What is Professional Practice?


We can use Aristotle’s terms and framework, discussed in Chapter 1, to describe engineering
practice. As shown in Figure 3.1, the three tiers of knowledge comprise:
1. Underpinning knowledge and understanding (i.e. episteme and sophia)
2. Engineering tools (i.e. techne)
3. Professional practice (i.e. phronesis and praxis)
Undergraduate engineering education predominantly focuses on nous, episteme, sophia and
techne. In many ways, this makes sense, as it is critical that engineers do have the


fundamental knowledge and tools to enable them to execute their tasks. However, Figure 3.1
helps to illustrate that being a professional engineer requires much more knowledge, and
relies on praxis and phronesis. That is, professional engineers should be able to demonstrate
prudent understanding of what should be done in a practical situation, and should also
demonstrate practical, thoughtful doing.

• Professional practice (phronesis and praxis): practical wisdom and thoughtful doing
• Engineering tools (techne): applying knowledge
• Underpinning knowledge and understanding (episteme and sophia): ‘book smarts’ and systems of knowledge


Figure 3.1: Three tiered system of knowledge

3.3. What is a professional engineer?


“To legitimately call themselves professionals, those working in ICT must not only operate
within the bounds of the law, but go beyond that to conduct themselves ethically and
responsibly at all times” Brenda Aynsley, President ACS, The Australian, 29 Sept 2015
The Warren Centre Report ‘Professional Performance, Innovation and Risk’ (2009) defined
three dimensions of professionalism, namely ethics, competency and performance (Figure 3.2).



Figure 3.2: The three dimensions of professional engineering
(Adapted from http://ppir.com.au/the-ppir-professional-performance-program/)

3.3.1. Ethics
“A standard dictionary will define the word ‘ethics’ along the lines of being ‘the moral
system of a particular writer or school of thought’, or ‘the rules of conduct recognised in
certain limited departments of human life’ or even ‘the science of human duty in its widest
extent’. In essence, ethics is about moral values – a moral philosophy or set of moral
principles – that express in a formal way what ‘doing the right thing’ means. However,
because ethics is about moral values, it is also inherently subjective: it may be that the
appropriate prevailing moral values in one social or religious context are somewhat
different in another.
Nevertheless, when we describe ethics in a professional context, we usually express these
professional values as a ‘higher duty’ that transcends differences in social or religious
values, yet responds to the generic interests of the community and pays proper respect to
the inherent dignity of the individual. It is usual for a profession to set out its version of
these higher duty professional values in a document such as the EA Code of Ethics” (The
Warren Centre for Advanced Engineering, 2009)
Often risks emerge where there are ethical dilemmas that need resolving. Figure 3.3 and
Figure 3.4 show some illustrations of ethical decision considerations.



Figure 3.3: Ethical dilemma grid
(source: http://blogs.ubc.ca/nparkhaev/2013/03/24/product-marketing-in-hospital/ )


Figure 3.4: Ethical decision grid
(source: http://www-rohan.sdsu.edu/~renglish/370/notes/chapt04/ )


It is worth noting the preamble in the Engineers Australia Code of Ethics (Engineers
Australia, n.d.), which states:
As engineering practitioners, we use our knowledge and skills for the benefit of the
community to create engineering solutions for a sustainable future. In doing so, we
strive to serve the community ahead of other personal or sectional interests.
The IChemE states on their web page about Professional Ethics that:
The aim of a chemical engineer is to be of service to the community and society
expects the highest professional standards. Ethics lies at the heart of our discipline7.

3.3.2. Competence
“Competence is the ability to carry out a task to an effective standard. To achieve
competence requires the right level of knowledge, understanding and skill, and a
professional attitude. Competence is developed by a combination of formal and informal
learning, and training and experience, generally known as initial professional development.


7
http://www.icheme.org/about_us/ethics.aspx


However, these elements are not necessarily separate or sequential and they may not
always be formally structured” (Engineering Council, 2014)
The Warren Report (The Warren Centre for Advanced Engineering, 2009) distinguishes
between two levels of competence:
1. Gateway competence
2. Task-specific competence
Gateway competence refers to acquisition of the specified (accredited) university degree,
with associated workplace practice, which is required in order to become a qualified
engineer. There are also higher levels of gateway competence such as Chartered
Professional Engineer status.
Task-specific competence refers to competence required to perform a particular task. Even
if you are a qualified engineer, you need to continuously assess your level of competence
for particular tasks, and continue to develop your skills and knowledge.
The management of risk is recognised as a core competence for engineers. For example, the
UK Engineering Council (2014) lists the following as examples of risk related competencies
that could be employed by engineers:
- Contribute to the design and development of engineering solutions which could include
the ability to identify operational risks and evaluate possible engineering solutions,
taking account of cost, quality, safety, reliability, appearance, fitness for purpose,
security, intellectual property (IP) constraints and opportunities, and environmental
impact (Engineering Council, 2014, p. 18)
- Prepare, present and agree design recommendations, with appropriate analysis of risk,
and taking account of cost, quality, safety, reliability, appearance, fitness for purpose,
security, intellectual property (IP) constraints and opportunities and environmental
impact (Engineering Council, 2014, p.25)
- Plan for effective project implementation which could include an ability to carry out
holistic and systematic risk identification, assessment and management (Engineering
Council, 2014, p. 18 and 26)
- Undertake engineering work in a way that contributes to sustainable development
which include methodical assessment of risk in specific projects; actions taken to
minimise risk to society or the environment (Engineering Council, 2014, p. 12)
- Manage and apply safe systems of work by developing and implementing appropriate
hazard identification and risk management systems and culture (Engineering Council,
2014, p.12 and p. 20)

Similarly, Engineers Australia recognises risk management as a core competency for
experienced professional engineers in the following ways (quoted from
Engineers Australia, 2012):
- ensuring that costs, risks and limitations are properly understood in the context of the
desirable outcomes
- managing risk as well as sustainability issues
- authorise engineering outputs only on the basis of an informed understanding of the
costs, risks, consequences and limitations
- identify, assess and manage risks [which] means that you develop and operate within a
hazard and risk framework appropriate to engineering activities [which includes]:


o identify, assess and manage product, project, process, environmental or system risks that
could be caused by material, economic, social or environmental factors
o establish and maintain a documented audit trail of technical and operational changes
during system or product development, project implementation or process operations
o follow a systematic documented method and work in consultation with stakeholders and
other informed people to identify unpredictable events (threats, opportunities, and other
sources of uncertainty or missing information) that could influence outcomes
o assess the likelihood of each event, and the consequences, including commercial,
reputation, safety, health, environment, regulatory, legal, governance, and social
consequences
o devise ways to influence the likelihood and consequences to minimise costs and undesirable
consequences, and maximise benefits
o help in negotiating equitable ways to share any costs and benefits between stakeholders
and the community
o manage projects effectively, including scoping, procurement and integration of physical
resources and people; control of cost, quality, safety, environment and risk; and
o monitoring of progress and finalisation of projects

3.3.3. Performance
It is interesting to consider what it means to perform as an engineer. The Warren Centre
report attempts to define performance and actually proposes a protocol for engineering
performance.
‘This protocol documents the essentials of performance for Professional Engineers
acting in a professional capacity.’
The protocol is summarised in Table 3.1.


Table 3.1: Summary of the Warren Centre PPIR Protocol for Professional Performance
(The Warren Centre for Advanced Engineering, 2009)
1. Relevant Parties and Other Stakeholders: The Professional Engineer should develop a clear
understanding of the Relevant Parties to and Other Stakeholders in the Engineering Task and
the relationships between them.
2. The Engineering Task: The Professional Engineer should consult and agree with the Responsible
Person the objectives and extent of the Engineering Task.
3. Competence to Act: The Professional Engineer should assess and apply the competencies and
resources appropriate to the Engineering Task.
4. Statutory Requirements and Public Interest: The Professional Engineer should identify and
respond to relevant statutory requirements and public interest issues.
5. Risk Assessment and Management: The Professional Engineer should develop and operate within a
Hazard and Risk Framework appropriate to the Engineering Task.
6. Engineering Innovation: The Professional Engineer should seek to use engineering innovation to
enhance the outcomes of the Engineering Task.
7. Engineering Task Management: The Professional Engineer should apply appropriate engineering
task management protocols and related standards in carrying out and accomplishing the
Engineering Task.
8. Contractual Framework: The Professional Engineer should ensure that any contract or other such
evidence of agreement governing or relevant to the Engineering Task is consistent with the
provisions of this Protocol.


3.4. Obligations, accountabilities and responsibilities
As a professional engineer, you will have obligations, accountabilities and responsibilities. It
is important to understand the difference, as your required level of conformance is very
different for each (Table 3.2).


Table 3.2: Meaning of obligation, accountability and responsibility8

Obligation: Established within a system of “law” by organisations that have been legally
constituted. Requirements should be specific (spelt out). Level of conformance: you MUST
conform.

Accountability: Established within an authority system. Requirements are set by your team
leader and external bodies, are open to interpretation, and leave room for personal judgment.
Level of conformance: you need to be able to account for your decisions, judgments and
actions; behaviours are driven by professional and social ethics.

Responsibility: Standards and actions that you impose upon yourself. Level of conformance:
responsible to your own personal ethics.

3.4.1. Legal and regulatory obligations


Professional engineers are obligated to comply with the law at all times. This raises two key
questions:
1. What laws are out there that you have to comply with?
2. What are the specific requirements within each law that you have to comply with?
Have you ever thought about these legal obligations? It can be quite confronting to
consider this for the first time. This subsection introduces some concepts to help you
start navigating through this complex topic.
Legal and regulatory regimes can be broadly categorised as compliance-based or risk-based
systems. In compliance-based systems, the governing body (Government and/or Regulator)
specifies mandatory guidelines and standards that a company must abide by in order to operate
legally and to avoid civil and criminal penalties. In risk-based systems the emphasis is on the
operator to demonstrate a duty of care to ensure the health and safety of workers and to
demonstrate that systematic processes are used to identify and manage potential risks to an
acceptable/tolerable level. In the mining industry, different countries have adopted
different approaches. For example, the USA uses a compliance based regulatory system.

8
Developed by Bob Hannah for CHEE4002 in 2015


Australia, on the other hand, officially adopted risk-based regulations in 1999 for
Queensland and in 2002 for NSW (Poplin et al., 2008).
Most jurisdictions have laws relating to the management of risks associated with health and
safety, environment and social impact and executing business activities (e.g. contracting,
conducting business transactions etc). Examples of such legislation are listed below:
- Commonwealth of Australia Work Health and Safety Act 2011.
- NSW Work Health and Safety Act 2011 and Regulations 2011
- QLD Work Health and Safety Regulation 2011
- Victoria Occupational Health and Safety Act 2004 and Regulations 2007; Equipment
(Public Safety) Act 1994 and Regulations 2007; Dangerous Goods Act 1985 and Regulations
(Storage 2012, Transport by Road or Rail 2008, HCDG 2016); Major Hazard Facilities
Regulations 2008; and Magistrates' Court (Occupational Health and Safety) Rules
2005
- WA Work Health Safety (Resources and Major Hazards) Act and Regulation (being
developed by consolidating other acts and regulations)
In addition there can be legislation aimed at a specific industry. Examples of such legislation
targeted at the Australian mining operations include:
- NSW Mine Safety Acts and Regulations which includes:
o Work Health and Safety (Mines) Act 2013 and Regulations 2014
o Explosives Act 2003 and Regulations 2013
- QLD mining legislation which includes:
o Coal Mining Safety and Health Regulation 2001
o Mining and Quarrying Safety and Health Regulation 2001
- WA mining legislation which includes:
o Mines Safety and Inspection Act 1994
o Mines Safety and Inspection Regulations 1995
o Mines Safety and Inspection Levy Regulations 2010
In addition to the legislation listed above there are other legal requirements that a company
and/or individual should abide by. These include environmental authorities/licences/permits,
commitment agreements with stakeholders, and contractual agreements with employees,
vendors, suppliers etc. Environmental laws, including international, Commonwealth and
Queensland state laws, are shown in Figure 3.5.

Therefore there is a myriad of laws that an industry professional needs to be aware of,
especially when you consider the OHS, environmental, financial, contractual and human
resources laws that typically apply to the activities of companies operating high hazard
facilities. Thus all students are encouraged to examine for themselves the legislation and
regulations that govern the activities they are involved with.




Figure 3.5: Some environmental laws
(Source: Murphy, 2015, MINE4200 presentation on Environmental Risk)


As mentioned above, in risk-based legislation and regulation the emphasis is on a person
to demonstrate duty of care. According to the Warren Centre (The Warren Centre for
Advanced Engineering, 2009):
the 'duty of care' is designed to ascertain whether there is a duty to apply a standard of
reasonable care in the particular circumstances; and given that duty, the 'standard of care' is
about the appropriate standard in those circumstances.
The standard of care owed by professionals is determined by what can reasonably be
expected of a person professing the professional skill, taking into account all the relevant
circumstances at the time – that is, the appropriate professional performance in that
particular situation.
Whether the court relies as a matter of course on standard professional practice to define
performance has varied significantly over the past two decades. For several decades prior to
1992, it was not open to a court to find a standard professional practice to be negligent. This
changed in 1992 from which time the standard of care had to be determined by the court.
While such determination was still usually based on standard practice within the relevant
profession, it remained the court's choice as to whether this would be the case.
Following the Commonwealth Government Review of the Law of Negligence in 2002, a
modified version of the pre-1992 rule was introduced in NSW. In essence, the court now
accepts a standard professional practice if it is 'widely accepted in Australia by peer
professional opinion' as being so, although with some qualifications as discussed later in this
report.
Notwithstanding these important changes, if a particular profession does not have a
generally applicable and widely shared view of standard professional practice, the
professional's duty and standard of care is defined by default by the view of performance
formed by the court in retrospect, in the course of each particular litigation proceeding. The
difference between professional practice and legal requirements is shown in Figure 3.6.


Figure 3.6: A comparison of professional standards and legal requirements


3.4.2. Accountabilities
Accountability involves being answerable for given activities, actions, performance and/or
behaviour (derived from the Merriam-Webster.com dictionary). Engineers are often
accountable for their own work performance, professional advice and the outputs they produce.
If they are leaders or project managers, they may also be accountable for the work
performance of people reporting to them or working on their projects.
As mentioned in Table 3.2, accountability is established within an authority system. An
authority system is the formal set of organisational rules that states the power, rights and
limitations of individuals within specific roles to issue orders, make decisions and direct
or control other people or things (derived from the Merriam-Webster.com dictionary).
Authority systems for engineers can be created by their organisations, by their professional
bodies or by regulators. Examples of engineering accountabilities set by organisations might
include performing statutory inspections, managing a project team, redesigning a part of
the plant, or collecting, analysing and reporting production data. Examples of engineering
accountabilities set by external organisations might include:
- Engineers Australia holds engineers accountable for adhering to its code of ethics
(Engineers Australia, n.d.)
- Companies and regulators in most jurisdictions hold engineers accountable for
the health and safety of themselves, their subordinates and others in their
workplace.

3.4.3. Responsibilities
Responsibility has been variously defined in dictionaries, literature and guidelines. For the
purpose of this reference, we refer to responsibilities as the requirements individuals take
on to be an upstanding person, member of the community and professional engineer.
Responsibilities may overlap with obligations and accountabilities, but they can also include
additional requirements you choose to adopt that are not specified by the law, companies or
professional bodies. Examples of such responsibilities might include donating a certain
amount of time and/or money to volunteer and charitable activities, ensuring you are aware
of and understand the causes behind contemporary catastrophic accidents, staying abreast of
the latest global trends that could impact your and your organisation’s objectives, and
publicly advocating for specific causes.

3.5. Summary
The main aim of this chapter was to highlight how critical your own professional practice is
in guiding how you will, and should, make decisions. This is, of course, pivotal in how you
will manage risk.
You must make a lifelong commitment to your own development as a professional, and you
must take responsibility for this. The reality of modern professional life is that you, and only
you, must take control and manage your own professional career.
We have covered a broad range of issues, and merely scratched the surface. You should
now understand what it means to be a professional engineer. Professional practice relies
on praxis and phronesis. That is, professional engineers should be able to demonstrate
prudent understanding of what should be done in a practical situation, and should also
demonstrate practical, thoughtful doing. We presented a simple framework for
professional engineering, based on ethics, competence and performance.


Being a professional engineer means that you have an obligation to comply with the law at
all times. As we have seen, the law can be extremely complex, and there is a lot of
legislation that is relevant to practicing professional engineers. Whilst you do not need to
be a lawyer, you should:
• Understand how The Law works.
• Understand how you will interact with The Law in your professional life.
• Know which laws are most important in your specialist area.
• Have the professional judgement to seek advice from an expert as required – i.e. work
within your competence.
Consider the following key words and phrases. Can you define them and explain their
significance?
Professional practice • Ethics • Competence • Performance • Obligations • Accountability • Responsibility


Chapter 4: Humans and Risk

[Chapter banner: the course framework diagram showing the objectives, focus areas and foundations of risk management, with Humans and Risk highlighted as one of the foundations.]


“There are risks and costs to action. But they are far less than the
long range risks of comfortable inaction” John F. Kennedy

4.1. Introduction
Humans play an important part in the management of risk in industry. In this chapter we
explore in more detail how human decisions and actions at various organisational levels, as
shown in Figure 4.1, directly impact, and are directly impacted by, risk. It is these decisions
and actions that dictate the performance of an organization and how it interacts with the
external world, as shown in Figure 4.2 (see Chapter 1 for a detailed explanation of this figure).

[Figure: a hierarchy from government, regulators and other external associations, through the
company executive and company management, to front-line employees and the tasks they execute.
Human decisions and actions at each level flow down as laws and requirements, company culture,
systems and policy, and plans, resources and task assignments, while audits, reports, operations
reviews, data and observations flow back up. The whole system is subject to external stressors:
competitiveness; changing political, economic and climatic pressures and public awareness;
changing markets, supply chains and stakeholder expectations; changing employee competencies,
expectations and perceptions; and fast-paced technological change.]
Figure 4.1: Organisational decision making (adapted from Rasmussen, 1997)


[Figure: your decisions and your actions sit at the centre of the ‘uncertainty that matters’,
linked in a cycle of risk detection and assessment, treatment of risks, and review of changes in
risks. These decisions and actions influence, and are influenced by, human performance; human
and asset health, safety and security; legal and regulatory compliance; technical risk
performance; financial performance; community and social impact; economic impact; and
environmental impact.]
Figure 4.2. The influences and impacts on/of industrial risks

This chapter will cover the following learning objectives:
1. To appreciate the different roles that humans perform to manage risks in industry.
2. To understand the core components of risk-based decision making and some of the
complexities of human decision making.
3. To be aware of important factors that influence humans’ ability to make
informed and timely decisions and to take correct and effective actions.

4.2. The role of humans – risk perceivers, analysers and controllers


When it comes to risk in industry, humans adopt a number of roles:
• Risk perceiver: Stakeholders who consider and hold a view about a risk.
• Risk analyser: People who identify and assess risks and determine the controls
needed to manage risk.
• Risk controller: People who are exposed to the risk and have to manage it.
• Risk communicator: People who disseminate information about risk and risk
management processes
How people perform each of these roles is shaped by the attributes of the person, the
attributes of the risk and the attributes of the situation. The attributes of the person include
their experience, motivation, preferences, abilities and cognitive biases. These attributes
shape how a person perceives, assesses, controls and communicates risk, and can also lead a
person to be sensitised or desensitised to the risk. The attributes of the risk can also affect
how people perceive, assess, respond to and communicate about risk. According to Slovic and
colleagues (Slovic, 1987; Slovic, Fischhoff, & Lichtenstein, 1982), these attributes include the
observability of the risk, the controllability of the risk, the immediacy, likelihood, severity
and irreversibility of outcomes, and whether the persons affected will be voluntarily or
involuntarily impacted. The attributes of the situation that can affect risk perception, analysis, control


and communication include organisational factors and contextual factors. From an
organisational perspective, it is factors like the culture and risk appetite of the leadership
that shape risk perception, analysis, control and communication. For example, organisations
can be risk averse or risk takers. They can have centralised top-down decision making or
decentralised bottom-up decision making. The contextual factors that can shape risk
perception, analysis, control and communication are quite extensive. They include
attributes such as the social complexity, the technical complexity, the normality/novelty of
the context, and the time pressure of the situation, plus the perceived severity of the
consequences if things do go wrong. Other attributes might include the number and seniority
of people involved in the situation as well as whether the risk is observable or not. In
creating, using and refining risk management systems, it is necessary to understand all these
factors as well as the range of roles that humans will perform to deliver acceptable and
possibly optimal levels of organizational risk. The roles of humans as risk perceivers, risk
analysers, risk controllers and risk communicators are explained in more detail in the
following text.

4.2.1. Risk perceivers


Risk perceivers are stakeholders who consider and hold a view about a risk or a number of
risks. Their point of view is often referred to as their “risk perception”. A person’s
perception of risk can be influenced by a number of factors, such as those shown in Figures
4.3 and 4.4. As Figure 4.3 highlights, risk perception influences risk analysis, which is
discussed next.


Figure 4.3: Risk perception and its components
(ref: http://www.predictivesolutions.com/safetycary/risk-perception-needs-to-be-managed/ )



Figure 4.4: Risk perception inputs and outputs
(adapted from Hillson & Murray-Webster, 2012)


4.2.2. Risk analysers
Risk analysers are the people who apply risk management principles and processes to
identify and determine ways of addressing the uncertainties that matter to a business.
Some organisations have dedicated risk professionals, some leave risk analysis to
subject-matter specialists (e.g. environmental engineers, safety specialists etc), some make
risk analysis a core competency for all key leadership and technical roles, some outsource
the risk analysis, and some use a combination of approaches. Other stakeholders can also be
important risk analysers for a company. Examples include financial analysts, shareholders,
insurance agents, suppliers and customers. Some of the approaches and techniques used for
risk analysis in process industry contexts are described in more detail in Chapters 5 and 6.
The analysis of risk is influenced by an entity’s risk appetite. Risk appetite is a measure of
how much risk an individual or entity is willing to accept. Some of the differences between
high risk appetite entities and low risk appetite entities are illustrated in Figure 4.6. Risk
appetite is the foundation on which risk analysers determine whether a risk is tolerable, too
low or too high, as shown in Figure 4.5. If the risk is too low then an entity might seek more
risk to ensure they are not missing out on the opportunities that come from taking risks. If
the risk is tolerable then the entity should monitor it to see if it remains tolerable or
increases or decreases. If the risk is too high then an entity will seek to transfer it to another
entity, manage or control it, or, if it cannot be controlled, insure against it. In process
industries most risks are inherent to the process operations and therefore need to be
managed or controlled by humans. The role of humans as risk controllers is discussed next.

[Figure 4.5 depicts a vertical scale of risk levels running from extremely low, through low, tolerable and high, to extremely high. Extremely low levels of risk suit the risk averse (others will seek more risk); the middle band marks the tolerable level of risk; extremely high levels suit the risk seekers (others will seek less risk).]
Figure 4.5: Risk levels and risk preferences
(adapted from Hillson & Murray-Webster, 2012)
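To make the tolerability comparison just described concrete, the following minimal Python sketch compares an assessed risk level against an assumed tolerable band and suggests the kind of response outlined above. The numeric scale, band boundaries and wording of the responses are illustrative assumptions, not values from any standard or from the figure.

```python
# Sketch of the risk appetite comparison described above: an assessed risk level
# is compared against an entity's tolerable band to suggest a response.
# The numeric bands and the suggested actions are illustrative assumptions.

def appetite_response(risk_level: float, lower: float, upper: float) -> str:
    """Compare an assessed risk level against a tolerable band [lower, upper]."""
    if risk_level < lower:
        return "too low - consider taking more risk to capture opportunities"
    if risk_level <= upper:
        return "tolerable - monitor to see whether it increases or decreases"
    return "too high - transfer, manage/control, or insure against the risk"

# Example with an assumed tolerable band of 2-6 on an arbitrary 0-10 scale
for level in (1.0, 4.5, 8.0):
    print(level, "->", appetite_response(level, lower=2.0, upper=6.0))
```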

48
Chapter 4


Figure 4.6: Factors that characterise high and low organisational risk appetites
(source: http://riskspotlight.com/risk-infographics)

49
Chapter 4

4.2.3. Risk controllers
Risk controller refers to the role that humans can take in controlling risks in real time.
There are two perspectives of humans and risk. There is the perspective that humans are
the source of risk within industry. This is evident in findings that the majority of incidents
are caused by human error as shown in Table 4.1. These findings have been extended
beyond individual humans to organisational error as shown in Figure 4.1. The key point
from these Figures is that human performance dominates risk in hazardous industries.

Table 4.1: Estimates of human error as a percent of all failures
(reference: James Reason, Human Factors Seminar, Helsinki, 13 February 2006)

Jet transport: 65-85
Air traffic control: 90
Maritime vessels: 80-85
Chemical industry: 80-90
Nuclear power plants (US): 70
Road transportation: 85




Figure 4.7: Types of error (retrieved from
http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf )


The other perspective on humans and risk is that humans are the adaptable resource that
controls risk in real time (Hollnagel, 2014). James Reason (2008) provides some great
studies that illustrate the role humans can play in preventing disasters.
In the process industries, humans have always played an important role in controlling the
operations. Humans are often required to oversee the process and to make the adjustments
necessary to keep the process in control. However, the nature of human control and the
nature of industry has changed over time. This history is represented in the diagram shown in Figure 4.8 and is explained in the following text from M. E. Hassall (2015), which describes these different eras.

50
Chapter 4

[Figure 4.8 charts three eras of industrial work along a timeline from roughly 1950 to 2010:
- Era 1 – Local, physical: electro-mechanical systems; localised direct human control; a high number of highly experienced workers; small, usually locally-run companies; incidents caused by technology failure.
- Era 2 – Regional, centralised: automated and electronic/digital systems; centralised human supervisory control with increasing complexity and spans of control; fewer workers working for larger regional/international companies that share services; incidents caused by human failures and then organisational failures.
- Era 3 – Global, collaborative: cyber-physical and autonomous systems; remote and collaborative control; more mobile workers with low levels of experience seeking adaptable work; highly interconnected global companies; incidents emerge from combinations of factors.
The timeline also marks the emergence of risk techniques: FMEA (1950), fault trees (1962), environmental and social impact assessments (1970s), HAZOP (1977), bowtie (1980), Swiss Cheese, numerous human error techniques, behaviour-based safety initiatives, Accimap (1997), HFACs (2000), STAMP (2003), FRAM (2004), and risk perception and SLTO (~2000s).]
Figure 4.8: History of Industrial Work (M. E. Hassall, 2015)



First era of industrial work – Localised, direct and physical
The first era of industrial work ran from the nineteenth century to after the Second World
War. This era began with the introduction of electricity into industry. The industrial
organisations of this era tended to be local separately run companies that used local
suppliers. The control of the operating processes also tended to be local and physical in
nature. The operators were physically located within the process they managed. They
manually interacted with levers, dials, and other controls to manage the process and their
span of control was often constrained to what they could see and touch. The unreliable
nature of the equipment often meant that operators had to manually intervene on a regular
basis to keep production running. The constant interaction with the process led human
controllers to develop good tactical knowledge and mental models of the fundamental
operation of the process. However, the constant manual intervention and pressure to keep
production running resulted in unacceptable levels of fatigue and injuries. This led to the
development of specialist boards and committees who were charged with improving both
human well-being and productivity at work (Zionchenko & Munipov, 2005).
Scientific studies of safety were introduced and used to address technical causes of
accidents and were aimed at guarding machinery, stopping explosions, and preventing
structures collapsing (Hale & Hovden, 1998). Tools and techniques such as Failure Mode
Effects Analysis (FMEA) and Hazard and Operability Studies (HAZOPs) were developed to
help identify and address technical failures (Hollnagel, 2011b). Scientific Management also
emerged with the use of time and motion studies to determine the one best way that
workers could perform tasks efficiently and with minimal stress (M. E. Hassall, Xiao,
Sanderson, & Neal, 2015).

51
Chapter 4

Toward the end of this era, environmental and social impacts started to be considered. The initial focus was on identifying and reporting the economic,
environmental and social effects a major project might have if implemented.
Second era of industrial work: Regional, centralised and more cognitive
After the war, military technology was transferred back into civilian industry. For example,
during World War II, new technology such as radar and sonar, led to the development of
screen-based monitoring jobs (Chapanis, 1996). After World War II, this technology along
with nuclear weapons technology was converted for civilian use. This led to the
development of large scale, complex work systems with centralised control rooms. Such
industries often were managed by large corporations and they employed experienced
people to run the operations. Examples include nuclear power plants and civil aviation
systems. This transformation meant that human controllers were more isolated from directly interacting with the systems or being able to use their senses to assess and control them. They
were physically isolated from directly observing the system and had to rely more on
instrumentation readings and alarm systems to diagnose and control system states.
This second era of industrial work overlaps the second age of safety, described as the “Age
of Human Factors”, focused on solving safety problems by matching humans better to
technology (Hale & Hovden, 1998). As technology became more complex, accidents increasingly involved experienced people. Herbert Heinrich (1941) found that 88% of industrial accidents resulted from workers' unsafe actions. For example, experienced pilots were
crashing planes. At the Three Mile Island nuclear power plant experienced operators failed
to correctly interpret reactor status from the panel indications, and made control actions
that worsened plant conditions (Booth, 1987). Further research has highlighted the important role organisational human factors play in ensuring or undermining system safety.
For example, the Columbia space shuttle accident was attributed to the interaction between
a combination of technical factors, human factors failures, and organisational culture
(Columbia Accident Investigation Board, 2003).
The scientists and engineers investigating how to prevent such incidents realised that
instead of trying to change people to fit into the system, we should be designing the
technology to fit people better (Shaver, 2009). This research led to the establishment of
the field of human factors and ergonomics (Shaver, 2009). Many human factors and
organisational factors risk assessment techniques were developed during this period.
Examples of human factors techniques include the TRACEr method, Systematic Human Error
Reduction and Prediction Approach (SHERPA), Human Error Template (HET), Task Analysis
For Error Identification (TAFEI), Human Error HAZOP, Technique for Human Error
Assessment (THEA), Human Error Assessment and Reduction Technique (HEART) and
Cognitive Reliability Analysis Method (CREAM) (for more complete information see Stanton
et al., 2005). Examples of organisational factors techniques include the "Swiss Cheese" model (J.
T. Reason, 2008), Accimap (Rasmussen & Svedung, 2000) and HFACs (Shappell & Wiegmann,
2000a).
The Three Mile Island nuclear power plant accident in 1979 was followed by the Chernobyl
nuclear reactor meltdown in 1986. These two events combined with the military’s ongoing
use of nuclear weapons raised the awareness of nuclear risks across the world. When it
comes to nuclear energy, research has shown that technical experts and the public use
distinctly different bases for assessing risk and impact (OECD, 2010). Insights from this and

52
Chapter 4

other research led to the emerging area of risk management that focuses on social/societal
risk and risk perceptions. Risk management techniques that focus on identifying and managing social risk and risk perceptions are still evolving. A list of techniques and methods can be found at http://trasi.foundationcenter.org/ and best practice information can be found at www.iaia.org.
Third era of industrial work: Global, collaborative, cognitive
Modern industry is currently adopting automation and autonomous technology, and is on the verge of adopting cyber-physical systems (e.g. 3-D printing) and nano-technologies for wide-ranging applications. The introduction of these technologies has removed humans
further from the work face. Control rooms for automated and semi-automated technologies
are being located hundreds of kilometres away from where the technology is actually
working. For example, Rio Tinto has a control room in Perth that can control truck and rail
movements in the Pilbara (Sato, 2011). Coal seam gas operators have control rooms in
Brisbane that manage gas fields in Central Queensland. Human control of fully autonomous
technologies such as drones and robots can be the sole responsibility of coders and
maintainers who are often separated in time and distance from the work done by the
autonomous technologies. Human control in the third era of industrial control is shifting
away from a supervisory control model more towards a collaborative or adaptive control
model where humans and computers share and exchange control duties (Parasuraman &
Wickens, 2008; Sheridan, 2002). The new technologies and forms of control have evolved in,
and facilitated the growth of, global companies which are now heavily dependent on other companies, such as internet and satellite providers, to ensure safe operations.
The third era of industrial control also encompasses changes in the attributes of the people
who will be employed by industry. The next generation of workers are unlikely to have
spent their childhood fixing mechanical or electro-mechanical powered machinery (e.g.
radios, music players, motorbikes, or motorcars) as most of today’s technology is disposable
or computerised. For a significant number of the next generation of workers, their
predominant experiences will come from electronic gaming and media. This type of
experience conveys different cause-effect-consequence relationships than previous generations gained from more traditional human-technology and social interactions. This
change in experiential knowledge may impact on risk perceptions as well as the ability of
humans to understand and manage hazardous operations at a fundamental level or to base
decisions on real-life experiences of harmful impacts and consequences.
For new technology, technological, human and organisational factors can interact in novel
ways to produce “new” or emergent system responses that in turn can have significantly
positive or negative impacts on human well-being and overall system performance. In
addition, changes in society and societal perceptions and expectations of hazardous operations will continue to impact organisations and how they operate. These changes associated with
the third era of industrial work are likely to result in the emergence of new system
responses for which there is no precedent. Previously mentioned approaches to safety and
human factors have been based on learning from experience so that interventions can be
identified and adopted that will prevent adverse events happening again in the future
(Woods & Hollnagel, 2006). Such approaches will not necessarily deliver the risk
identification requirements needed to manage emergent risks.

53
Chapter 4

To avoid catastrophic events in this third era of control, it is necessary to have approaches
that can proactively identify ways to design systems that help people successfully cope with
complex, unexpected and even unimagined situations. Such approaches need to emphasise
the importance of learning from and promoting successful performance as well as learning
from and reducing unsuccessful performance (Borys et al., 2009). One area of research that
may deliver benefits in this area is Resilience Engineering. Resilience Engineering aims to
develop tools and techniques that allow an organisation to produce successful outcomes by
anticipating and adapting to disturbances as well as continuing to recognise, learn from and
protect against failures (Woods & Hollnagel, 2006). Two techniques are being developed
using Resilience Engineering principles. The first technique is Functional Resonance Analysis
Method (FRAM: Hollnagel, 2012) which seeks to help analysts identify how normal variation
within a system can lead to unexpected outcomes. The second technique is Strategies
Analysis for Enhancing Resilience (SAfER: M. E. Hassall et al., 2014) which seeks to help
analysts identify ways to improve system designs so humans can better manage industrial
operations across both normal and abnormal situations.

4.3. Risk Communication


Risk communication involves the exchange of information about what risks exist, their
prevalence, causes, consequences and the assessment and treatment of them. It is a
prescribed step in the ISO31000 standard for risk management (as shown in the left box of
Figure 4.9) which states “effective external and internal communication and consultation
should take place to ensure that those accountable for implementing the risk management
process and stakeholders understand the basis on which decisions are made, and the
reasons why particular actions are required" (ISO 31000, 2009, p. 14).



Figure 4.9: The Risk Management Process (AS/NZS ISO31000:2018)

54
Chapter 4

Risk communication can be formal or informal. It is a human-centred process that involves
senders, messages and receivers as shown in Figure 4.10.


Figure 4.10: Risk communication model


There is also growing recognition that risk communication needs to extend beyond the technical calculations of probability × consequence × exposure type information to incorporate the risk perception and risk appetite aspects of risk. It also needs to extend beyond internal company documents to reach external stakeholders. Risk communication has evolved over time. The evolution phases of risk communication identified by Fischhoff (1995) are shown in Figure 4.11.

[Figure 4.11 lists the developmental stages of risk communication identified by Fischhoff:
1. All we have to do is get the numbers right.
2. All we have to do is tell them the numbers.
3. All we have to do is explain what the numbers mean.
4. All we have to do is show them that they've accepted similar risks in the past.
5. All we have to do is show them that it is a good deal for them.
6. All we have to do is treat them nice.
7. All we have to do is make them partners.
8. All we have to do is all of the above.]


Figure 4.11: The evolution of risk communication (Fischhoff 1995)

55
Chapter 4

Risk communication can occur across the many modes of communication including face-to-face discussions, printed material, online materials, and local/social media (as shown in Figure 4.12). In communicating risk, consideration needs to be given to how risk information is presented to the intended audience so that they can comprehend the significance of the risk and its associated uncertainties in a manner that will help them make timely and good quality decisions. Visualisation of risk is becoming a more prevalent way of communicating risk. An example is shown in Figure 4.13.


Figure 4.12: Methods of risk communication – Results from US Flood Research
(http://www.fema.gov/protecting-homes/public-survey-findings-flood-risk )

56
Chapter 4


Figure 4.13: Example of visualizing the interconnectedness of different risks
(retrieved from http://reports.weforum.org/global-risks-2017/shareable-infographics/ )





57
Chapter 4

4.4. The Human Decision-Making Process
Human performance dominates the successful or unsuccessful management of risk in high
hazard industries.
Humans can effectively identify and manage risks, leading to what some refer to as Highly Reliable Organisations (HROs) (K. Weick, Sutcliffe, & Obstfeld, 2008; K. E. Weick & Sutcliffe, 2015). Humans at an individual or organisational level were also seen as the cause of accidents, as shown in Figure 4.14. However, more recent research recognises that the "human error" approach is limited for the following reasons:
- It is only with hindsight that actions are labelled errors (e.g. the same or similar actions might be considered effective or even heroic if they lead to success, or erroneous if they lead to failure).
- Humans rarely aim to fail; they are often trying to achieve successful outcomes. Therefore, if an accident occurs it is often because issues with the human-system or work design have induced the failure.

[Figure 4.14 reproduces the estimates of human error as a per cent of all failures given in Table 4.1.]

Figure 4.14: Human contribution to incidents (source: J. Reason (2006) and
http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf )

Therefore, to effectively identify and manage risk, we need to design systems that allow humans to accurately perceive, assess, control and communicate risks. This involves understanding the two sides of "human" performance, as shown in Figure 4.15. In order to create designs that optimize both human well-being and overall system performance, designers need to balance the defensive and offensive approaches and design:
- Error tolerant systems that prevent and protect against adverse outcomes should
deviations in desired performance occur.
- Adaptive systems that support and promote human performance that leads to
successful detection and management of unexpected situations.

58
Chapter 4


Figure 4.15: Designing for humans


At a fundamental level, human decision making comprises the following steps (as shown in Figure 4.16):
- Awareness of the situation and the need to make a decision
- The strategy used to make the decision
- Execution of the decision

Figure 4.16: Components of human performance


59
Chapter 4

4.4.1. Situational Awareness
Awareness of the situation is also known as situation awareness. According to Endsley and
Jones (2012, pp. 13-14):
Situational Awareness “is being aware of what is happening around you and
understanding what that information means to you now and in the future. . . . The
formal definition of SA is “the perception of the elements in the environment within a
volume of time and space, the comprehension of their meaning, and the projection of
their status in the near future” (Endsley, 1988). . . . The formal definition of SA breaks
down into three separate levels:
• Level 1—perception of the elements in the environment
• Level 2—comprehension of the current situation
• Level 3—projection of future status”.
The role of situation awareness in the decision making process is shown in Figure 4.17.


Figure 4.17: Model of Situation Awareness (Endsley, 1995)


In risk management, accuracy in the higher levels of situation awareness will help produce
better identification and assessments of risk. However, building and maintaining accuracy in
situation awareness can be challenging. Some factors or mechanisms that can confound
accurate situation awareness include attentional tunnelling, memory traps, stress (e.g. due
to workload, anxiety, fatigue and other stressors), data/information overload, misplaced
salience, complexity, erroneous mental models, and being out-of-the-loop (Endsley & Jones,
2012).

4.4.2. Decision Making Strategies


When people make decisions they can use a number of strategies. The most commonly known strategies are the fast, intuitive strategies and the slower, deliberative strategies. However, in the high hazard industries it can be useful to think in

60
Chapter 4

more detail about the categories of strategies used in order to create designs that promote
the response strategies that will lead to success, prevent the response strategies that will
lead to failure and tolerate the use of other strategies so that they will not lead to adverse
outcomes (M. E. Hassall, 2013). Different people (e.g. experts versus novices, thinkers versus doers, conformists versus mavericks) can have different preferences for decision making strategies. Different tasks (e.g. complex versus simple, new/novel versus well-practised and routine) can elicit different decision making strategy preferences. Different contexts (e.g. high risk versus low risk, easy to predict versus unpredictable, high stress versus low stress, significant time pressure versus little to no time pressure) can also lead people to use different decision making strategies. The range of strategies that might be employed in risk management activities, and the factors that shape strategy choice, are shown in Figure 4.18, and these strategies are described in Table 4.2. The choice of which decision strategy to use will
depend on the human making the choice, the circumstances within which the choice is
made and the perceived consequences associated with the choice. Variability in people, circumstances and perceived consequences will exist, so we should expect that a range of decision making strategies will be used, and we need to understand and support this range of strategies so that good decisions will be made. Examples of decisions that were risky and impacted the risk of a system include:
- The decisions made by Captain Sullenberger who landed US Airways Flight 1549 on
the Hudson in 2009 – for more details watch
https://www.youtube.com/watch?v=Zns758otLrY
- The decisions made by NASA to launch Space Shuttle Challenger – for a summary of
the event, watch http://www.history.com/topics/challenger-
disaster/videos/engineering-disasters---
challenger?m=528e394da93ae&s=undefined&f=1&free=false



Figure 4.18: Range of strategies and strategy shaping factors
(M. E. Hassall & Sanderson, 2012)

61
Chapter 4

Table 4.2: Range of possible decision making strategies (Hassall and Sanderson, 2011)

Avoidance strategy: "Avoidance strategies are approaches to a task that include delaying, deferring or not performing the task. Avoidance strategies require no effort or resources but they may expose a work domain to risk if action is required." (p. 236)

Intuitive strategy: "Intuitive strategies are approaches to a task that are executed automatically and include habitual responses. Intuitive strategies do not require explicit, deliberate thought processes. Intuitive strategies therefore can be efficient and effective ways of dealing with familiar situations but they may lead workers to overlook exceptional characteristics of the situation, or deviations in system performance." (p. 236)

Arbitrary-choice strategy: "Arbitrary-choice strategies are approaches to a task that are scrambled, ad hoc or haphazard and that do not include consideration of options or cues. Arbitrary-choice strategies might be the result of a response under stress where there is little time for rational thinking or reflection . . . Even though arbitrary-choice strategies for carrying out tasks impose minimal cognitive load, they are completely reliant on robust designs, or luck, to produce desired rather than undesired outcomes. Arbitrary-choice strategies are more likely to occur when there are strong time pressure and high-risk levels. They are also more likely to occur 'when the task demands are very high, the situation is unfamiliar and is changing in unexpected ways' as Hollnagel (1999, p. 166) describes for his scrambled mode of control." (p. 241)

Imitation strategy: "Imitation strategies are approaches to a task that are adopted or copied, usually from another worker or from an approach that has been successful in a similar situation. For imitation strategies to be successful, the activity must be simple enough for the worker to remember and imitate it. Problems can arise if the behaviour copied has not been correctly reproduced or if it is used in unsuitable situations. Imitation is a critical component of successful on-the-job training programs. . . Imitation strategies might be employed because 'the context is not clearly understood or because time is too constrained' (Hollnagel 1999, p. 166), or they might be employed by novices in risky situations that compel the worker to copy successful actions." (p. 241)

Option-based strategy: "Option-based strategies are approaches to a task where the worker selects from a set of alternatives an action option that meets some minimum requirement. To select the action option, workers use either reasoning and evaluation, or heuristic 'rules-of-thumb' independent of situation cues. The set of alternatives often includes possible actions that are remembered, recognised or suggested by another person or presented by an agent. Utility-based reasoning is also included in this category. . . Option-based strategies differ from intuitive and imitation strategies in that the worker will deliberately choose between several options" (p. 242)

Cue-based strategy: "Cue-based strategies are approaches to a task where the worker takes into account apparently relevant evidence from the environment that lets them include or exclude action possibilities, so guiding their responses. Cue-based strategies might involve using cues to determine the difference between the current and desired situation in order to guide the selection of a suitable strategy (Simon 1990). They might involve matching the cues to a repertoire of patterns relating to system operations that the worker has developed over time, in order to make rapid decisions about the course of action to choose (Klein 2008). When using cue-based strategies, workers may also use heuristics based on situation related information to select a strategy" (p. 242)

Compliance strategy: "Compliance strategies are approaches to a task that conform to rules and procedures. Compliance strategies require workers to make the effort to find, read, understand and execute procedures as they are written. For example, a worker would be using a compliance strategy if they followed a written procedure when starting up a process or piece of equipment. Compliance strategies are likely to be used in familiar situations where stored rules are used to guide the worker through the correct execution of an activity (Rasmussen 1978b). Thus the activity needs to be simple enough for the stored rules to be made available in a useable and understandable format. Compliance strategies are also more likely to be used in risky situations where following the rules is necessary to avert danger" (p. 242)

Analytical reasoning strategy: "Analytical reasoning strategies are approaches to a task where the worker uses reason to carry out the task. Analytical reasoning (1) can be based on the fundamental principles of the work system, (2) may involve mental simulation of the system's possible response to given actions or (3) could use deduction, induction, abduction, means–end analysis, top down or bottom up analysis or another type of reasoning processes. . . Such a strategy is especially likely if there is little or no risk associated with the diagnosis and correction processes. Analytical reasoning strategies often occur when other strategies are unavailable or are deemed inappropriate. 'When decision makers encounter unfamiliar situations for which know-how and control rules are not available, then they must access relevant knowledge they do have (e.g. declarative knowledge) and transform it into procedural knowledge. In other words, they generate a plan or a course of action' (Hutton and Klein 1999, p. 36). Analytical reasoning is also more likely when there is no time or risk-related pressure" (p. 243)

62
Chapter 4

4.4.3. Improving human performance
Sometimes decisions are made and actions are taken that turn out to be the incorrect ones. This can happen when it comes to the identification, assessment, treatment and ongoing management of risks. One reason for incorrect decisions is decision making bias. As humans, the decisions we make are all subject to bias. There are many types of bias that influence our ability to perceive, comprehend and forecast a situation, and there are many types of bias that influence our decision making processes. Wikipedia has a collation of biases9. What biases do you have? Watch this video to see: https://www.youtube.com/watch?v=IGQmdoK_ZfY
Another reason that humans take incorrect actions is slips and lapses: slips are failures of execution, where the action carried out is not the one intended (for example, operating the wrong valve), while lapses are failures of memory, such as omitting a step in a procedure.
Improving human performance requires a human-centred approach to design whereby
industrial systems are designed to support and enhance human performance rather than
designing the system with the expectation that humans can be trained to deal with any
operational complexity thrown at them. Conversely, poor designs do not consider human-
system interactions. They do not provide the right people with the correct and timely
information needed to diagnose the system's operating state. They do not equip the human
and technology system with the capability to successfully control the full range of operating
states.
Good designs use human factors approaches to optimize human-system interactions.
Human factors (also called ergonomics) is "the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance" (www.HFES.org). Human factors work can include:
- Physical ergonomics which focuses on designing jobs, tools, equipment, and
workspaces to fit the physical attributes of workers. Physical ergonomics
professionals draw on human anthropometrical, biomechanical and physiological
information to design physical environments that accommodate and enhance
human sensation and perception, human physical postures, and human
musculoskeletal performance (Kroemer and Grandjean 2009, Marras and Karwowski
2006).
- Cognitive ergonomics which focuses on human mental processing abilities, and
limitations. Following the introduction of computerised technology, work has
become less physically demanding, and more cognitively demanding. Cognitive
ergonomists seek to design tools and systems based on a scientific understanding of
attention, perception, memory, mental models, expertise and mental workload in
order to enhance human situation awareness, problem solving, and decision making
capabilities (Harris 2013).
- Macroergonomics (also known as organisation ergonomics) which focuses on the
optimization of organizational and work systems design to ensure the organisation
as a whole is designed to facilitate safe and effective interactions between
technological subsystems, personnel subsystems and the external environment
(Hendrick, 2005).

9
https://en.wikipedia.org/wiki/List_of_cognitive_biases

63
Chapter 4

To help people better appreciate human factors as a discipline, Russ et al. (2013) outlined some of the facts and fictions of the discipline, which are as follows:
Fact #1: Human factors is about designing systems that are resilient to unanticipated events.
Fiction: Human factors is about eliminating human error.
Fact #2: Human factors addresses problems by modifying the design of the system to better aid
people.
Fiction: Human factors addresses problems by teaching people to modify their behaviour.
Fact #3: Human factors work ranges from the individual to the organisational level.
Fiction: Human factors is focused only on individuals.
Fact #4: Human factors is a scientific discipline that requires years of training; most human factors
professionals hold relevant graduate degrees.
Fiction: Human factors consists of a limited set of principles that can be learnt during brief training.
Fact #5: Human factors professionals are bound together by the common goal of improving design
for human use, but represent different specialty areas and methodological skills sets.
Fiction: Human factors scientists and engineers all have the same expertise.

Therefore, when considering the design of high hazard or inherently risky systems, it is worth referencing human factors information and practitioners in order to develop a human-centred design that helps people more successfully perceive, assess, control, manage and communicate about risk.

4.5. Summary
The aim of this chapter was to highlight the importance of humans in risk perception, risk
assessment, risk appetite, risk control, risk communication, and risk management activities.
Most of the material reviewed has been quite theoretical and conceptual in nature. This is
because our understanding of these areas of risk is still emerging and a lot of research is
occurring and needs to occur to deepen our understanding of these concepts and to
develop useful techniques and tools to help people identify and manage these aspects of
risk in the process industries.
Identifying and managing risk is about being aware of the situation and the risks associated
with it and then making decisions about how to manage those risks. Understanding decision
making and its components, as well as the factors that influence decision makers, such as the choice of decision making strategies, bias and safety culture, is crucial for ensuring that work and work systems are designed so that good risk-based decisions are made in a timely manner.

64
Chapter 4












This page has intentionally been left blank

65
Chapter 5







SECTION C: KEY RISK MANAGEMENT ACTIVITIES




The objectives
SUSTAINABLE OPERATIONAL EXCELLENCE

STAKEHOLDER, REPUTATION & POLITICAL RISK MANAGEMENT

T h e f o c u s a r e a s
REVIEW RISK MGT
MONITOR AND

ENVIRONMENTAL IMPACTS

FINANCIAL PERFORMANCE
PROJECTS/CONTRACTORS
Key RM Activities

SAFETY & HEALTH

SOCIAL IMPACTS

CYBERSECURITY
SUPPLY CHAINS
AND TREAT RISKS
IDENTIFY, ASSESS

T h e F o u n d a t i o n s
PROFESSIONAL PRACTICE HUMANS AND RISK
FUNDAMENTALS OF RISK MGT









66
Chapter 5











This page has intentionally been left blank

67
Chapter 5

The objectives
SUSTAINABLE OPERATIONAL EXCELLENCE

STAKEHOLDER, REPUTATION & POLITICAL RISK MANAGEMENT

Chapter 5 T h e f o c u s a r e a s

REVIEW RISK MGT


MONITOR AND

FINANCIAL PERFORMANCE
ENVIRONMENTAL IMPACTS
PROJECTS/CONTRACTORS
Identify, Assess

Key RM Activities

SAFETY & HEALTH

SOCIAL IMPACTS

CYBERSECURITY
SUPPLY CHAINS
& Treat Risks

AND TREAT RISKS


IDENTIFY, ASSESS


T h e F o u n d a t i o n s
PROFESSIONAL PRACTICE HUMANS AND RISK
FUNDAMENTALS OF RISK MGT


“Risk comes from not knowing what you’re doing” Warren Buffett

5.1. Introduction

This chapter focuses on the establish the context, risk assessment and risk treatment steps of ISO31000, as shown in Figure 5.1. The first part of this chapter focuses on Establishing the
Context. The second part of this chapter focuses on Risk Assessment. This encompasses the
key activities of risk identification, risk analysis and risk evaluation. In the third part of the
chapter, we will look at how we can then treat the risks that we assess as requiring further
action. This encompasses the key activities of scenario or event identification, control
selection and analysis, control management and evaluation as shown in Figure 5.2. Establish
the context, risk assessment and risk treatment activities cut across all process industry
focus areas. In other words, when performing risk assessment, we need to consider OH&S,
process safety, project and contractor management, supply chain, environmental and social
impacts, cyber security and financial performance as shown in the pillar diagram.



Figure 5.1: The Risk Management Process (ISO 31000, 2009)

68
Chapter 5



Figure 5.2: Extended Version of ISO31000 Risk Management Process

Another representation of the risk management process is shown in Figure 5.3. This representation highlights that risk management is a continuous and ongoing process involving identification, assessment, monitoring, control, review, evaluation, and learning and adapting. These are very similar to the categories used in ISO31000. In this chapter we will explore the importance of establishing the context and then the identification, assessment, treatment and control of risks. Monitoring, reviewing and evaluating will be discussed in Chapter 6. Therefore the learning objectives for this chapter are:

• Understand risk identification and analysis theory.
• Understand and apply risk evaluation, using the concepts of tolerable risk and ALARP.
• Understand and apply risk identification and analysis tools and techniques.
• Understand the selection and optimisation of risk controls and critical risk controls.
• Understand the management of controls – monitoring and review/verification activities.
• Understand communication and consultation.
• Learn that professional practice is critical for performing rigorous, robust, practical and useful risk assessment and risk treatment work.

69
Chapter 5

[Figure 5.3 depicts a continuous cycle: before acting, we identify risks; we assess risks consistently and collaboratively; we monitor risk levels against our risk tolerance to determine the controls needed; we control unacceptable risk levels to a level that is tolerable; we continually evaluate our effectiveness and look for changing outcomes; and we learn and adapt to continually improve our approach to managing risk.]
Figure 5.3: Risk Management Process (adapted from Solicitors Regulation Authority, 2014)

5.2. Establish the context


To establish the context for a risk management activity it is important to first determine the
objective of the risk assessment. Different risk management activities might have different
objectives. Examples of some common objectives are as follows:
- To identify and address hazards that could cause adverse incidents
- To determine the deviations and uncertainties that need to be proactively managed
in order to minimise adverse outcomes and maximise beneficial outcomes.
- To identify and address one particular type of risk (e.g. safety and health risks,
project risks, contractor risks supply chain risks, environmental risks, social risks,
cybersecurity risks, and financial or economic risks).
The second step in setting the context is identifying the stakeholders who will be involved and consulted as part of the risk management process. In Australia the need to consult relevant stakeholders is a legislative requirement for certain risk management activities. When identifying stakeholders, some of the people to consider are the front-line people exposed to the risk (e.g. employees, contractors, customers, visitors, members of the public etc), managers responsible for the risk, subject matter experts, and other parties that might have a high interest in and/or high influence on how the risk is managed, as shown in Figure 5.4.

70
Chapter 5

[Figure 5.4 maps stakeholders by their interest/knowledge in how the risk is managed (vertical axis) and their influence/power on how the risk is managed (horizontal axis):
- High interest, high influence: very important stakeholders – consult with extensively.
- High interest, low influence: engaged stakeholders – involve in consultations.
- Low interest, high influence: important stakeholders – communicate with and try to engage in consultations.
- Low interest, low influence: stakeholders with negligible effect – keep them informed and monitor.]

Figure 5.4: Stakeholder identification diagram
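The quadrant logic in Figure 5.4 can be captured in a few lines of code, for example when tagging entries in a stakeholder register. The sketch below is illustrative only; the low/high rating scale, the function name and the example stakeholders are assumptions, not part of any standard method.

```python
# Sketch of the interest/influence mapping in Figure 5.4. The rating scale and
# the example stakeholders are assumptions for illustration.

def stakeholder_quadrant(interest: str, influence: str) -> str:
    """Map low/high interest and influence to an engagement approach."""
    return {
        ("high", "high"): "very important - consult with extensively",
        ("high", "low"):  "engaged - involve in consultations",
        ("low", "high"):  "important - communicate with and try to engage",
        ("low", "low"):   "negligible effect - keep informed and monitor",
    }[(interest.lower(), influence.lower())]

stakeholders = {
    "operators at the tank farm": ("high", "low"),
    "site manager": ("high", "high"),
    "local community group": ("low", "high"),
}
for name, (interest, influence) in stakeholders.items():
    print(f"{name}: {stakeholder_quadrant(interest, influence)}")
```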




The next step in setting the context involves establishing the scope. Establishing the scope
of a risk management activity is done to clarify what is included and excluded from
consideration in the risk assessment and risk treatment processes. Documenting the context
helps ensure those involved in performing, using or reviewing the risk management work
will understand the objective, focus areas, assumptions and limitations or boundaries of the
risk management work. Using a table of categories like the one shown in Table 5.1 can help
ensure completeness of the scope and help communicate the scope.

Table 5.1: Scope table populated with examples from ship-to-shore fuel transfer

Categories | In Scope | Out of Scope
People | Employees, contractors, visitors | Ship crew
Locations | Tank farm site | Wharf, ship and off-site locations
Equipment | Tanks, pipes, pumps, instruments etc. on site | Fuel tankers, ship
Activities | Unloading ships, loading tankers, storing fuel, transferring fuel between tanks, maintenance activities on facilities | Major refurbishments and expansion within tank farm; activities done to the wharf and on ships, including surveying
Scenarios | Loss of containment, loss of control leading to fatalities or reportable environmental incidents | Non-reportable incidents, e.g. slips/trips/falls and other deviations that do not impact safety or environmental performance

71
Chapter 5

The categories used in Table 5.1 are People, Locations, Equipment, Activities, and Scenarios.
These are prompts to help users think of the range of elements that should be included and
excluded from risk assessment considerations. These categories can be tailored to suit the
context relevant for a particular risk assessment. The table is populated with examples of
what might be included and excluded from a risk assessment of transferring petroleum fuel
from a bulk ship tanker to a fuel farm tank.
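A scope definition like Table 5.1 can also be recorded in a structured form so it travels with the risk register. The following Python sketch is one possible representation; the dataclass name, fields and example entries are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of recording a risk assessment scope so it can be attached to
# the risk register. The categories mirror Table 5.1; the dataclass name and
# fields are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class ScopeItem:
    category: str                                   # e.g. People, Locations, Equipment...
    in_scope: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)

scope = [
    ScopeItem("People", ["employees", "contractors", "visitors"], ["ship crew"]),
    ScopeItem("Locations", ["tank farm site"], ["wharf", "ship", "off-site locations"]),
    ScopeItem("Scenarios",
              ["loss of containment", "loss of control leading to fatalities"],
              ["non-reportable incidents such as slips/trips/falls"]),
]

for item in scope:
    print(f"{item.category}: in scope = {item.in_scope}; out of scope = {item.out_of_scope}")
```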

5.3. Risk Assessment


Before performing a risk assessment, it is necessary to determine what approach will be
adopted for the risk assessment. In Chapter 2 two dominant approaches to modern risk
management were introduced, namely the ‘loss reduction’ mindset and the ‘risk
optimisation’ mindset (Table 5.2). The aim of risk assessment is to identify the risks – threats,
opportunities, uncertainties that matter to an entity because they can positively or
negatively impact on the achievement of objectives. This subsection covers risk
identification, risk analysis and risk evaluation theory and risk assessment tools and
techniques.

Table 5.2: Two approaches to modern risk management (Hillson, 2010)

Loss reduction mindset | Risk optimisation mindset
What can go wrong? What hazards and threats exist? | What are we trying to achieve? What are our key objectives?
What are the consequences if things go wrong? | What is the ‘uncertainty that matters’? Including both downside threats and upside opportunities.
What is the likelihood that things might go wrong? | Acknowledge that risk management is affected by perception and “zero risk” is unachievable and undesirable, so what is the appropriate level of risk to aim for?
Is the risk low enough to be acceptable, or is action required to lower the risk? | What actions are required to manage risks?
Have enough controls been implemented to prevent the unwanted events from occurring, or to mitigate the consequences if they do occur? | How are we going? What has changed? What have we learned?

5.3.1. Risk Identification Theory


Risk identification can be defined as shown in Table 5.3. However, in this course we define risk
as the uncertainty that matters because it can have an impact on objectives. Risk
identification is about trying to identify the uncertainties that matter. Uncertainty can be
caused by variability, known and unknown threats and opportunities and incomplete
knowledge. Uncertainty can be associated with the present and the future. Risk
identification should involve looking for uncertainty that derives from current and future
variability, threats and potential threats, opportunities and potential opportunities and
incomplete knowledge. Risk identification activities should also refer to the past to identify

72
Chapter 5

from history the variability, threats, opportunities and knowledge issues that have impacted
the objectives so these risks can be monitored and managed into the future in ways that
prevent reoccurrences of unwanted events.

Table 5.3: Definitions for Risk Identification (quoted from Marling, Horberry, & Harris, 2014)

Definition from ISO31000:2009 | Plain English definition
Process of finding, recognizing and describing risks. | ‘Risk identification’ is the process of identifying the opportunities or hazards (sources of harm) and describing the types of credible risks that could affect your organisation. It involves a thorough examination of your organisation’s activities and the potential events that could occur and those that have occurred in similar circumstances. These events can be planned or unplanned. This results in a comprehensive list of well-defined risks, albeit there may be some uncertainties and ambiguities, unique to your organisation and its operational environment.


Risk identification activities can be formal, informal or a combination of both, as shown in Figure 5.5. The activity of risk identification should be regular and ongoing and it should be
linked to learnings from past events as well as current and anticipated hazards,
opportunities and uncertainties that span the focus areas (e.g. health, safety, projects,
contractors, supply chain, environmental, social, cybersecurity, financial areas as shown in
the pillar diagram). Examples of risk identification techniques include brainstorming,
prompt/cue based techniques like HAZID, HAZOP and SWOT, reviewing historical incidents,
expert elicitation techniques, what-if analysis, futures/foresighting techniques, and
formative human factors techniques. Section 5.3.4 provides more detail on some of these
techniques that are used in process industries. More detailed information on the specific
types of hazards, opportunities and uncertainties associated with the different focus areas (as shown in the pillar diagram) are covered in Section D of this reference.
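As a simple illustration of the prompt/cue-based techniques mentioned above, the sketch below pairs standard HAZOP guidewords with process parameters to generate candidate deviations for a study team to work through. The guidewords and parameters are commonly used prompts, but the pairing loop is a simplification of how a real HAZOP study is facilitated node by node.

```python
# Sketch of a prompt/cue-based identification pass in the HAZOP style: pairing
# guidewords with process parameters to generate candidate deviations for a
# team to assess. The pairing logic is a simplification for illustration.

GUIDEWORDS = ["no", "more", "less", "as well as", "part of", "reverse", "other than"]
PARAMETERS = ["flow", "pressure", "temperature", "level"]

def candidate_deviations(node: str) -> list[str]:
    """Return the raw guideword x parameter deviation prompts for a study node."""
    return [f"{node}: {guideword} {parameter}"
            for parameter in PARAMETERS
            for guideword in GUIDEWORDS]

# Example: deviation prompts to consider for the ship-to-shore fuel transfer line
for prompt in candidate_deviations("fuel transfer line")[:5]:
    print(prompt)
```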



Figure 5.5: Approaches to Risk Identification

73
Chapter 5

5.3.2. Risk Analysis Theory
Risk analysis can be defined as shown in Table 5.4. Risk analysis is often performed using likelihood and consequence. Sometimes it should be based on just an assessment of consequence (e.g. if the consequence is severe enough then the risk should be treated regardless of the probability). It can also involve an assessment of exposure. Likelihood is
the probability that an uncertainty might eventuate in the future. Consequence is the
impact or outcomes that might result should the uncertainty eventuate. Consequences can
be positive, negative or both. Exposure is how many people and/or entities could be
impacted if the risk eventuates.
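A hedged sketch of the likelihood × consequence × exposure idea is shown below. The 1-5 ordinal scales and the multiplicative scoring rule are assumptions for illustration; organisations define their own scales and combination rules.

```python
# Minimal sketch: combining likelihood, consequence and exposure ratings into a
# single ordinal risk score. The 1-5 scales and the multiplicative rule are
# illustrative assumptions, not taken from any particular standard.

def risk_score(likelihood: int, consequence: int, exposure: int = 1) -> int:
    """Return an ordinal risk score from 1-5 ratings, optionally weighted by exposure."""
    for name, value in (("likelihood", likelihood),
                        ("consequence", consequence),
                        ("exposure", exposure)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} rating must be between 1 and 5")
    return likelihood * consequence * exposure

# Example: an unlikely (2) but major (4) loss of containment with several people
# regularly exposed (3) scores higher than a frequent (5) minor (1) event.
print(risk_score(2, 4, 3))   # 24
print(risk_score(5, 1, 1))   # 5
```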

Table 5.4: Definitions for Risk Analysis (quoted from Marling et al., 2014)

Definition from ISO31000:2009 | Plain English definition
Process to comprehend the nature of risk and to determine the level of risk. | ‘Risk analysis’ is the process of determining the relative effect individual risks are likely to exert on your organisation/role. Risks to your organisation are analysed in terms of the likelihood of the event(s) occurring (e.g. ranging from rare to almost certain) and consequence(s) if the event occurs (e.g. ranging from minor to catastrophic). Events can be planned or unplanned. This results in data that can then be used to prioritise risks for management action as part of ‘risk evaluation’.


Risk analysis can, and often does, involve using a risk matrix. A risk matrix is designed to provide a consistent structure/framework that helps people determine and articulate what is tolerable/acceptable or intolerable/unacceptable. The determination can be (and often is) used to direct people to the approaches they need to determine the required risk treatments. So the absolute number isn't the goal; getting the right decision on what sort of risk treatment approach (if any) is to be applied to the risk is what is crucial.
Several matrices are shown in Figures 5.6, 5.7 and 5.8. These matrices are more detailed
than others in that they provide descriptors for the range of consequences – people, assets,
environment, reputation – that relate to organisational objectives as well as descriptors for
the range of likelihoods. They also provide guidance on what is tolerable and intolerable
risk. As such, risk matrices are a means by which an organisation can express its risk appetite. The matrices also provide guidance on actions to take when the risk is assessed as having a certain consequence and likelihood rating.
Figure 5.6 is an example of a matrix that has a range of consequences and provides for assessing the severity of the different consequences, as well as specific guidance on assessing likelihood and the required focus of risk treatments.

74
Chapter 5


Figure 5.6: Risk Matrix 1 (Source: http://www.eimicrosites.org/heartsandminds/ram.php)

However, one of the lessons learned in using these matrices is that high consequence, low likelihood events can often get overlooked. Events such as dust explosions, successful anti-mining campaigns, well blowouts and tailings dam failures might not get the priority or attention they deserve because some matrices rank them as lower risk. Major disasters have taught us not to lead with likelihood for high consequence outcomes, as that often results in them being incorrectly categorised as requiring lower levels of risk treatment. For high hazard industry contexts we should look at consequence first. If the consequence is "material" or intolerable (i.e. a fatality or a multiple fatality) then people should be required to use the most appropriate and rigorous risk treatment approach. If the consequence is below the intolerable level then the risk treatment approach might be less rigorous.
Figure 5.7 is similar to Figure 5.6. However, the output from using a matrix like the one
shown in Figure 5.7 is a number. This number can then be used to rank risks from highest to
lowest thereby allowing people to focus on the highest risks first. These matrices can help
people assess and prioritise risk. The matrix shown in Figure 5.7 also provides more detail
for the consequence and likelihood categories and identifies five different levels of risks
requiring five different levels of risk treatment as shown in the guidance provided below the
matrix. However this matrix also has the issue of downgrading the risk analysis for
significant consequences. For example if a risk is a multiple fatality event but rare or unlikely
(e.g. catastrophic well blowout) it gets the same risk number and therefore the same risk
treatment as a risk with negligible or insignificant consequences that are almost certain to
happen (e.g. minor first aid or medical treatment).
Figure 5.8 provides guidance on consequence and likelihood categories, produces a number to assist with risk ranking, and provides advice on how to treat the four different levels of risk that might result from risk analysis processes. This matrix does, however, provide appropriate risk ratings for the catastrophic events (as shown in red).
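The consequence-first rule discussed above can be built directly into a matrix lookup so that catastrophic scenarios are never relegated to a low-priority band. The sketch below is loosely modelled on the Figure 5.8 style of matrix, but the escalation rule and the band wording are assumptions for illustration rather than a reproduction of any organisation's matrix.

```python
# Sketch of a consequence-first risk matrix lookup. The score table mirrors the
# Figure 5.8 style matrix; the band boundaries and the rule that catastrophic
# scenarios always escalate are assumptions for illustration.

LIKELIHOODS = ["rare", "unlikely", "moderate", "likely", "almost certain"]
CONSEQUENCES = ["insignificant", "minor", "moderate", "major", "catastrophic"]

# rows = consequence (insignificant..catastrophic), columns = likelihood (rare..almost certain)
SCORES = [
    [1, 2, 3, 5, 5],       # insignificant
    [2, 5, 5, 5, 10],      # minor
    [3, 5, 10, 10, 20],    # moderate
    [5, 10, 10, 20, 25],   # major
    [15, 15, 20, 25, 50],  # catastrophic
]

def rate_risk(consequence: str, likelihood: str) -> tuple[int, str]:
    """Return (score, treatment band), screening on consequence first."""
    score = SCORES[CONSEQUENCES.index(consequence)][LIKELIHOODS.index(likelihood)]
    if consequence == "catastrophic":
        return score, "high - apply the most rigorous treatment regardless of likelihood"
    if score >= 15:
        return score, "high - unacceptable, reduce before operating"
    if score >= 10:
        return score, "significant - ALARP band 1"
    if score >= 4:
        return score, "medium - ALARP band 2"
    return score, "low - broadly acceptable, monitor and review"

# A rare but catastrophic well blowout is still escalated:
print(rate_risk("catastrophic", "rare"))
```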

75
Chapter 5


Figure 5.7: Risk Matrix 2
(Source http://www.jakeman.com.au/media/whats-right-with-risk-matrices)



[Figure 5.8 shows a 5 x 5 risk ranking matrix with the following content.

Likelihood categories: A – Rare (not expected to occur, but has occurred in industry); B – Unlikely (might occur once in the entity's life); C – Moderate (could occur every 5-10 years); D – Likely (could occur every 1-5 years); E – Almost Certain (could occur monthly, weekly or daily).

Impact categories (with OH&S, asset damage, environment and reputation/legal descriptors): 5 – Catastrophic (fatalities; > $50m; serious long-term effect; widespread serious adverse impact); 4 – Major (permanent, serious disability; $10m-$50m; serious medium to long-term effect; widespread moderate impact); 3 – Moderate (moderate irreversible impairment; $1m-$10m; moderate short to medium-term effect; localised (Qld) moderate impact); 2 – Minor (objective but recoverable impairment; $100k-$1m; minor short-term effect; localised (Qld) minor impact); 1 – Insignificant (first aid / minor; < $100k; low level / no lasting effect; no impact).

Risk scores by impact row (for likelihoods A to E): Catastrophic 15, 15, 20, 25, 50; Major 5, 10, 10, 20, 25; Moderate 3, 5, 10, 10, 20; Minor 2, 5, 5, 5, 10; Insignificant 1, 2, 3, 5, 5.

Risk rating bands: 15-50 High – unacceptable risk; operations do not continue until risk is reduced. 10-14 Significant – tolerable only if risk reduction is impracticable or its cost is grossly disproportionate to the improvement gained; ALARP Band 1 – action as a high priority to reduce risk, assign a senior manager responsible to action, monitor and review, and if interim measures are installed monitor closely and continuously. 4-9 Medium – ALARP Band 2 – action to reduce risk where possible; assign a manager responsible to continuously monitor and review. 1-3 Low – broadly acceptable; manage with regular monitoring and review.]

Figure 5.8: Risk matrix 3

76
Chapter 5

5.3.3. Risk Evaluation, Tolerable Risk and ALARP
Once a risk has been analysed, an entity (person or organisation) needs to evaluate it. The
formal definitions for Risk Evaluation are shown in Table 5.5. Risk evaluation is conducted to
determine whether that risk will be accepted as is or whether it needs to be proactively
managed so that it becomes acceptable.

Table 5.5: Definitions for Risk Evaluation (quoted from Marling et al., 2014)

Definition from ISO31000:2009 | Plain English definition
Process of comparing the results of risk analysis with risk criteria to determine whether the risk and/or its magnitude is acceptable or tolerable. | ‘Risk evaluation’ is the process of comparing estimated levels of risk against the criteria defined earlier when establishing the context. It then considers the balance between potential benefits and adverse outcomes, to determine if the risk is acceptable or tolerable based on the quality of the controls in place. This results in decisions being made about the current and potential future risk mitigation strategies and their priorities according to ‘as low as reasonably practicable’ principles.

Risk acceptability is often judged using terms like tolerability and ALARP (As Low As
Reasonably Practicable). Understanding the concepts of ALARP and risk tolerability can be
challenging. A good explanation of the concepts is as follows and as shown in Figure 5.9.

ALARP
“refers to reducing risk to a level that is As Low As Reasonably Practicable. In
practice, this means that the operator has to show through reasoned and supported
arguments that there are no other practicable options that could reasonably be
adopted to reduce risks further” (NOPSEMA, 2015)

Reasonably Practicable
“The legal definition on this was set out in England by Lord Justice Asquith in Edwards vs. National Coal Board [1949] who said: ‘Reasonably practicable’ is a narrower term than ‘physically possible’ and seems to me to imply that a computation must be made by the owner, in which the quantum of risk is placed on one scale and the sacrifice involved in the measures necessary for averting the risk (whether in money, time or trouble) is placed in the other; and that if it be shown that there is a gross disproportion between them – the risk being insignificant in relation to the sacrifice – the defendants discharge the onus on them. Moreover, this computation falls to be made by the owner at a point of time anterior to the accident. This English decision has since been confirmed by the Australian High Court” (NOPSEMA, 2015)

Intolerable Risk:
“Clearly, if the risk is in this region then ALARP cannot be demonstrated and action
must be taken to reduce the risk almost irrespective of cost” (Source:
http://www.hse.gov.uk/foi/internalops/hid_circs/permissioning/spc_perm_37/).

77
Chapter 5

“Tolerable if ALARP” Risk
“If the risks fall in this region then a case-specific ALARP demonstration is required. The extent of the demonstration should be proportionate to the level of risk” (Source: http://www.hse.gov.uk/foi/internalops/hid_circs/permissioning/spc_perm_37/).

Broadly Acceptable Risk
“If the risk has been shown to be in this region, then the ALARP demonstration may be based on adherence to codes, standards and established good practice. However, these must be shown to be up-to-date and relevant to the operations in question” (Source: http://www.hse.gov.uk/foi/internalops/hid_circs/permissioning/spc_perm_37/).


Figure 5.9: Risk Tolerability and ALARP


As the definitions and figures imply, the demonstration of ALARP requires owners to show
that the cost of further risk reduction measures becomes unreasonably disproportionate to
the additional reduction in risk that would be achieved, as shown in the last two bars on the
right of Figure 5.10. This may result in an ALARP risk level that sits below the risk tolerability
criteria (also shown in Figure 5.10). The best way to determine ALARP is to measure the cost
and risk reduction impacts delivered by incremental levels of control. To illustrate this point,
Figure 5.10 has been annotated to show selection of an ALARP option for a water treatment
technology to treat a hydrocarbon effluent stream. Options 3-6 can all satisfy the specified
company emission limit of 20 ppm. However, the hydrocyclone (option 5) is the ALARP
option. This example illustrates that you need to go one step beyond the ALARP option in
order to know that you have identified it. In this case, that means it was necessary to design
and cost the centrifuge in order to determine that its significant increase in cost was not
justified by the marginal risk reduction, and thus the hydrocyclone was the ALARP option.
This example also illustrates that company emission targets may be stricter than the
national regulated limits. This is not unusual for large multi-national companies.


What would have happened if the project had been required to adopt World Bank
standards?


Figure 5.10: Selection of ALARP option for a water treatment technology
(Source: Clive Killick, CHEE4002 notes 2014)



As mentioned, ALARP is a legal term. It is most commonly used when assessing safety
and/or fatality risks. However the principles underlying tolerability and ALARP are applicable
to environmental, social, financial and technology risks. They are also linked to risk
perception and risk appetite so they are important concepts to understand.
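A simple way to see the gross disproportion test in practice is to tabulate, for each successive treatment option, the extra cost incurred per unit of additional risk reduction and compare it against a disproportion factor. The sketch below uses entirely hypothetical option names, costs, risk figures and policy values (it is not the data behind Figure 5.10); it merely illustrates the calculation and, under these assumed numbers, reproduces the hydrocyclone conclusion.

# Hypothetical options for treating a hydrocarbon effluent stream, ordered by
# increasing cost; residual_risk is an illustrative annualised risk measure.
options = [
    ("Gravity separator", 0.5e6, 8.0),
    ("Plate separator",   0.8e6, 5.0),
    ("Hydrocyclone",      1.2e6, 3.0),
    ("Centrifuge",        4.0e6, 2.8),
]

GROSS_DISPROPORTION_FACTOR = 10.0  # assumed policy value; real factors are case specific
VALUE_PER_UNIT_RISK = 1.0e6        # assumed value the business attaches to one unit of risk

alarp_option = options[0][0]
for (_, cost_a, risk_a), (name_b, cost_b, risk_b) in zip(options, options[1:]):
    extra_cost = cost_b - cost_a
    risk_reduction = risk_a - risk_b
    benefit = risk_reduction * VALUE_PER_UNIT_RISK
    if risk_reduction > 0 and extra_cost <= GROSS_DISPROPORTION_FACTOR * benefit:
        alarp_option = name_b          # the extra spend is still reasonably practicable
    else:
        break                          # further spend is grossly disproportionate
print("ALARP option:", alarp_option)   # -> Hydrocyclone under these assumed numbers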

5.3.4. Risk Assessment Techniques and Tools


Many risk assessment techniques and tools exist. Some techniques are general – for
example brainstorming, event inventories and loss event data, interviews, self-assessments,
facilitated workshops, risk questionnaires and risk surveys as shown in Table 5.6. Tools are
often developed to provide prompts and guidance to help risk assessors identify, analyse
and evaluate risks within a given context. Some examples of tools used in the high hazard
industry sector are listed below, and an overview of each is provided in the following
subsections:
- Hazard identification
- Opportunities and threat analysis
- Human factors analysis
- Impact assessment
- Sustainability analysis.

In selecting a technique, it is important to understand the objective and scope of the
assessment then select one or more techniques that will help achieve that objective. Good
information and critiques of the different techniques exist (e.g., SA/SNZ HB89, 2013;
Tworek, 2010). It is also important to understand risk assessment pitfalls and how to
overcome them with good process as described by Joy and Griffiths (2007).


Table 5.6: List of Risk Assessment Techniques
(Department of Education Training and Employment, n.d.)

Approach /Sources Description


SWOT analysis (Strengths, Weaknesses, Opportunities, Threats): Commonly used as a
planning tool for analysing a business, its resources and its environment by looking at
internal strengths and weaknesses, and opportunities and threats in the external
environment.
PESTLE analysis (Political, Economic, Sociological, Technological, Legal, Environmental):
Commonly used as a planning tool to identify and categorise threats in the external
environment (political, economic, social, technological, legal, environmental).
Brainstorming Creative technique to gather risks spontaneously by group
members. Group members verbally identify risks in a ‘no wrong
answer’ environment. This technique provides the opportunity for
group members to build on each other’s ideas
Scenario analysis Uses possible (often extreme) future events to anticipate how
threats and opportunities might develop.

Surveys/Questionnaires Gather data on risks. Surveys rely on the questions asked.

One-on-one interviews Discussions with stakeholders to identify/explore risk areas and
detailed or sensitive information about the risk.

Stakeholder analysis Process of identifying individuals or groups who have a vested
interest in the objectives and ascertaining how to engage with them
to better understand the objective and its associated uncertainties.

Working groups Useful to surface detailed information about the risks i.e. source,
causes, consequences, stakeholders impacted, existing controls.

Corporate knowledge History of risks provide insight into future threats or opportunities
through:
• Experiential knowledge – collection of information that a person
has obtained through their experience.
• Documented knowledge – collection of information or data that
has been documented about a particular subject.
• Lessons learned – knowledge that has been organised into
information that may be relevant to the different areas within
the organisation.
Process analysis An approach that helps improve the performance of business
activities by analysing current processes and making decisions on
new improvements.
Other jurisdictions Issues experienced and risks identified by other jurisdictions should
be identified and evaluated. If it can happen to them, it can happen
here.


5.3.4.1 Hazard identification
The aim of hazard identification techniques is to:
1) identify the things that could harm people, the environment, assets and the
economic situation of an organization in ways that would prevent them achieving
their objectives, and
2) assess the level of harm that might result if these hazards are not
eliminated, substituted or controlled.
Figure 5.11 illustrates an example of a hazard identification process. Some hazard
identification techniques require users to state how unacceptable risks will be controlled
(although more rigorous risk treatment approaches should be applied for high consequence
risks).


Figure 5.11: Example of hazard identification process (source: www.ready.gov/risk-assessment)


In the process industries, hazard identification techniques include HAZIDs (HAZard
Identification), HAZOP (HAZard and Operability study), Process or Preliminary Hazard
Analysis (PHA) and Job Hazard Analysis (JHA). These techniques can differ in how thoroughly
and systematically they are applied. It is preferable that hazard identification processes are
performed by a team and appropriately reference historical incident data. Guidelines for
performing these techniques are available from Government and industry sites. Some
examples are provided in Table 5.7.







Table 5.7: Examples of hazard identification technique guidelines
Technique and example references:
HAZID: The Beginner's Guide To Hazard Identification Studies (HAZID), https://www.oilandgasiq.com/integrity-hse-maintenance/white-papers/the-beginner-s-guide-to-hazard-identification-stud
HAZOP: NSW Government Department of Planning HAZOP guidelines, http://www.planning.nsw.gov.au/Policy-and-Legislation/~/media/C9CC2DA7E9B947C78C7C1355AD5B4B15.ashx
PHA: Workplace safety and health guidelines – Process hazard analysis, https://www.wshc.sg/files/wshc/upload/infostop/attachments/2017/IS201704030000000416/Workplace_Safety_Health_Guidelines_Process_Hazard_Analysis.pdf
JHA: Job hazard analysis, https://www.osha.gov/Publications/osha3071.pdf; Job safety analysis, https://www.dnrm.qld.gov.au/__data/assets/pdf_file/0005/240359/qld-guidance-note-17.pdf


5.3.4.2 Opportunity and threat analysis
Opportunity and threat analysis is usually done in conjunction with a strengths and
weaknesses analysis. The method used is typically called a SWOT analysis because it involves
listing the Strengths, Weaknesses, Opportunities and Threats associated with a given
operation. There are two main types of SWOT. The most common type seeks to identify
internal strengths and weaknesses and external opportunities and threats, as shown in Figure
5.12. The second is to identify the strengths and opportunities that should be leveraged and
the weaknesses and threats that need to be addressed because they will challenge the ability
to leverage the strengths and opportunities. This second form of SWOT analysis is shown in
Figure 5.13.

Figure 5.12: Traditional SWOT analysis (source: https://en.wikipedia.org/wiki/SWOT_analysis)

[Figure 5.13 shows a SWOT model (different from the most common SWOT model used) as a 2x2 grid with quadrants for Strengths, Weaknesses, Opportunities and Threats.]

Figure 5.13: Second example of SWOT analysis


5.3.4.3 Human factors analysis
Human factors methods should also be used to assess risk because risk can emerge from
human-system interactions at the front-line, technical specialist, management, and
executive levels of an organization. Risk can be created or mitigated through human decisions
and actions. Therefore performing human factors analysis as a part of risk assessment work
is needed to fully identify, assess and evaluate risk. Human factors approaches in risk
assessment (e.g. Cognitive Work Analysis) are currently performed by human factors
specialists. More recent work has been done to develop more practitioner-based
approaches. Most of these focus on providing prompts to help practitioners think about the
range of human factors risks.
Human factors risk assessments can look at people risks - “the risk of loss due to the
decisions or non-decisions of people inside and outside of the organization” (Blacker &
McConnell, 2015, p. 124) as shown in Figure 5.14 - or they can go broader to include actions
and inactions as well as issues about competence, capabilities, bias, motivations and
organizational influences as shown in Figure 5.15. The elements shown in Figure 5.14 are
described by Blacker and McConnell (2015) in the following way:
• Illegal activity refers to the intentional or unintentional actions that are liable to
criminal sanctions.
• Proscribed conduct comprises illegal actions as well as actions prohibited by company
policies.
• People risks are the risks that can cause humans to make incorrect decisions or non-
decisions, and incorrect actions or non-actions, that can lead to organizational losses,
proscribed conduct or illegal conduct.
• Operational risks are the human decisions, non-decisions, actions and non-actions
that can cause loss of control of operational processes.
• Business risk includes the internal risks (e.g. operational risk, people risk, proscribed
conduct, illegal activity) as well as the external risks from markets, communities and
having a sustainable strategy.
Prompts like those provided in Figures 5.14 and 5.15 can help with risk identification and
consequence analysis so these types of risks can be assessed and evaluated.



Figure 5.14: Range of people risks (Blacker & McConnell, 2015)


Figure 5.15: Human factors risk assessment prompts (Xie & Guo, 2018)


5.3.4.4 Impact assessments and Sustainability analysis
Impact assessments are a specific type of risk assessment that seeks to identify the impact
(both positive and negative) that an entity will have on society and the environment both
during a project and after its closure.
Sustainability analysis incorporates impact assessments but may also consider a broader
range of topics including the risk and consequences of having a non-sustainable operating
strategy in an ever-changing political, commercial, social and technological world. More
details are provided on impact assessments in the Environment and Social risk chapters.
More details on sustainability analysis are provided in the chapter on Sustainable Operating
Excellence.

5.3.5. The Risk Register


The outcome of the risk assessment processes (discussed previously) is a list of risks that are
acceptable and a list of risks that are unacceptable and need to be controlled. It is common
for process companies to record and monitor this list in a Risk Register. Depending on the
nature of the risk assessment activities, these risks can be articulated as risks, threats,
opportunities and/or unwanted event scenarios. In determining the best way to treat a risk,
it is often useful to identify the unwanted events that can lead to the unacceptable risks.
See Figure 5.16 for an example of a risk register.


Figure 5.16: Example of a Risk Register
(Source: http://www.tdm-ltd.com/elimin81/images/new/RiskRegister.jpg )
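As a concrete illustration of the kind of information a risk register holds, the following sketch stores a few entries and ranks them by risk rating. The field names, rating scheme and example risks are chosen for illustration only; real registers vary from company to company and carry far more detail.

from dataclasses import dataclass

# Illustrative risk register entry; real registers typically also record the risk
# owner, review date, control effectiveness, treatment actions, etc.
@dataclass
class RiskEntry:
    risk_id: str
    unwanted_event: str
    consequence: int   # e.g. 1 (minor) to 5 (catastrophic)
    likelihood: int    # e.g. 1 (rare) to 5 (almost certain)
    existing_controls: list[str]

    @property
    def rating(self) -> int:
        return self.consequence * self.likelihood

register = [
    RiskEntry("R-01", "Loss of containment from storage tank", 5, 2, ["Bund", "Level alarms"]),
    RiskEntry("R-02", "Community blockade of access road", 3, 3, ["Stakeholder engagement plan"]),
]

# Rank the register so the highest-rated risks are considered for treatment first
for entry in sorted(register, key=lambda e: e.rating, reverse=True):
    print(entry.risk_id, entry.rating, entry.unwanted_event)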


5.4. Risk Treatment and Management

5.4.1. Overview of risk treatment



Having identified, analysed and evaluated your risks, and compiled the risk assessment into
the risk register, you now need to decide what to do. How will you treat the risks that you
have previously determined need to be proactively managed? This step is called risk
treatment. ISO31000 simply refers to this step as risk treatment, but does not explicitly
break it down into a structured set of tasks. In this course, we extend the framework in
the Standard, and propose that risk treatment comprises unwanted event identification,
control analysis and selection, and control management and evaluation. As you can
see, the selection, optimisation and management of controls is the critical part of risk
treatment. Note that risk management not only covers the management of controls, but
also the monitoring and review, and communication and consultation processes required to
effectively manage the risk.

5.4.2. Unwanted event identification



The identification of unwanted events can come from brainstorming exercises,
benchmarking exercises, reviews of past events and expert forecasting exercises. Some
examples of unacceptable risks and unwanted events from industry are listed in Table 5.8.

Table 5.8: Examples of unacceptable risks and unwanted events from industry
Unacceptable Risk Example of unwanted events
Human fatalities Unsafe operation of a vehicle, loss of containment, loss of
control of strata, ignition of gas, fall from heights, etc.
Permanent and disabling health issues Diesel emissions above specified limits, undetected
depression in employees, etc.
Community protests Loss/suspension of license to operate, failure/delay to get
permits approved, community blockage of access ways etc.
Environment damage Air/noise over license limits, uncontrolled release of
contaminated water, tailings leak etc.
Business disruption Loss of containment, strata failure, loss of supply of key
components, loss of transport routes etc.
Lost opportunities Failure to exploit new markets, failure to meet/exceed local
supplier targets, failure to adopt beneficial technology.

As shown in the previous module, the output of a risk assessment process is a list of the
risks and/or unwanted events, which may be ranked from highest to lowest risk based on an
assessment of the severity of the consequence and sometimes the likelihood of the
outcome. This ranking can help a business to prioritise resources in order to treat the risks
that have the most potential to seriously harm the business, or the most potential to deliver
significant benefit/opportunities to the business.

When describing the unwanted event, the description should ideally capture the point at
which an opportunity is lost or a system has gone from being “in control” to being “out of
control”. In terms of safety this can be expressed as shown in Figure 5.17.


- Unacceptable and unrecoverable accident zone
- Unacceptable but recoverable operating zone (HPI/SPI zone)
- Acceptable operating zone (normal operating zone)

Figure 5.17: Safe/unsafe operating zone diagram (from Hassall et al, 2015)

The same logic can be applied to other types of unwanted events (e.g. environmental,
financial, production losses, community protests etc).

The description should be of the system state and not a description of the reasons why
the system state has gone into the unsafe region. In some cases it is clear what the
description should be. For example, the fuel leak from a bulk fuel storage area (loss of
fuel containment) could become the description of an unwanted event, rather than the
subsequent fire, explosion or pollution which should be considered as a consequence.
However in other cases it may not be clear what the description should be. In these
instances discussion and discretion will be required to determine the most appropriate
description for the unwanted event. It may be helpful to think about describing the
unwanted event as the situation which represents the last opportunity to intervene
and prevent an accident.

The most effective way to manage unwanted events is to eliminate the hazard that
can cause unwanted events. If elimination is not an option then substituting the
hazard with something that has less risk and minimising exposures should be the next
area to focus on to reduce risk levels. If elimination, substitution and reducing
exposure levels do not reduce the risks to a tolerable level then the next option
involves identifying the unwanted events that can emerge from the hazard and
selecting and optimising controls that help ensure effective protection of people,
assets, and the environment. (M. E. Hassall, J. Joy, et al., 2015)

The risk treatment options are summarised in Figure 5.18. The risk treatment options outlined
are consistent with the Hierarchy of Controls.


[Figure 5.18 presents the risk treatment options in order of decreasing effectiveness:
- Eliminate hazard (design it out)
- Substitute hazard (replace with something better)
- Eliminate exposure occurrences (isolate the hazard)
- Eliminate threats that release hazard (design them out)
- Implement controls that reduce the likelihood of occurrence of unwanted events and mitigate the consequences of unwanted events]

Figure 5.18: Risk treatment options for addressing unwanted events (M. E. Hassall, J. Joy, et
al., 2015)

5.4.3. Selection and optimisation of risk controls



Risk controls are the interventions taken to manage the risk to an acceptable level. The ISO
31000 standard defines risk control as a “measure that maintains or modifies risk” (ISO
31000, 2018). This definition is quite abstract and many things could be interpreted as
controls. In research work funded by ACARP, it was found that a more stringent definition of
control could lead to better selection of controls that directly impact risk. The proposed
definition is: “A control is an object and/or human action that of itself will arrest or mitigate
an unwanted event sequence” and whose performance is specifiable, measurable and
auditable (M. E. Hassall, J. Joy, et al., 2015). Arresting controls are used to reduce the
likelihood of unwanted events occurring. Mitigating controls limit the adverse effects of an
unwanted event if it does occur. This definition is shown as a decision tree in Figure 5.19.

The proposed definition of control was derived to address operational risks and therefore
focuses on what control actions and devices are needed by frontline staff and supervisors to
effectively manage risks at the operational interface as shown in Figure 5.20. Other risks,
such as social risks, economic risks, and political risks are typically managed higher up in a
business. The definition of control should still apply to the management of external threats
and opportunities.

The identification of risk controls can be done by brainstorming, focus groups,
benchmarking or getting expert advice. Risk controls can then be documented in the risk
register as shown in Figure 5.21. It can also be done in a more formal way through bowtie
analysis as explained in the next subsection.


[Figure 5.19 presents the definition of a control as a decision tree. In essence: Is the item, of itself, a physical object, technological system and/or human action? Does it, of itself, directly prevent an unwanted event or mitigate an unwanted outcome? Is its required performance specifiable, measurable and auditable? If the answer to all three questions is yes, the item is a CONTROL. If not, it may instead be a support activity (an activity that maximises the health of, or minimises erosion of, a control and that is specifiable, observable and auditable), a verification activity (an in-field check that a control is implemented, an activity that determines whether a control is healthy and functioning as required, or an activity that determines the effectiveness of controls), or otherwise not part of the control management system.]

Figure 5.19: Defining a control (M. E. Hassall & Harris, 2017)

[Figure 5.20 depicts operational hazards and threats at the base of the organisation's work, with control objects and human actions acting at the point where there is effective control or loss of control of operational risk, and operational reviews acting at the point where there is effective or ineffective control/management of external risks.]

Figure 5.20: Representation of the work of an organisation as it relates to risk control
(adapted from Rasmussen & Svedung, 2000)



Figure 5.21: Example of a risk register with controls listed
(Source: http://www.hertsdirect.org/docs/pdf/b/busfailure click on link for full register).

5.4.4. Bowtie analysis



The following information on Bowtie analysis has been taken from the ACARP report on
selecting and optimising risk controls (quoted from M. E. Hassall, J. Joy, et al., 2015).
Bowtie analysis helps people visually represent and assess the controls and the control
assurance management system elements present and/or needed to address both the
threats and consequences associated with a given unwanted event. The output of the
Bowtie method includes the following:
• Description of an unwanted event as well as the threats and consequences
associated with the event.
• Identification of the controls that arrest the unwanted event.
• Identification of the controls that mitigate the consequences of the unwanted
event.
• Identification of the factors that can cause controls to fail or can undermine the
effectiveness of controls.
• Description of the activities, actions, procedures, policies and standards that are
required to monitor, maintain and improve control effectiveness.

Figure 5.22 shows a basic or simplified bowtie. Figure 5.23 illustrates the linking of Bowtie
controls with a control assurance management system table. Figure 5.24 shows an
advanced Bowtie which includes control failure modes and failure prevention factors. In all
these figures, the unwanted event being analysed is shown in the centre of the Bowtie (also
referred to as the “knot”). The threats that could lead to the unwanted event are shown on
the left side of the Bowtie along with the control measures that arrest (prevent or reduce)
the likelihood that the unwanted event occurs. The consequences that might result from
the unwanted event are shown on the right side of the Bowtie along with the control
measures needed to minimise the severity of the consequences.


Figure 5.22: A basic Bowtie diagram

[Figure 5.23 shows the basic bowtie for a hazard linked to a control assurance management system (CAMS) table covering the operations, maintenance, engineering and management activities used for monitoring, maintaining and improving the controls.]

Figure 5.23: Basic bowtie diagram with links to the control assurance management system


[Figure 5.24 extends the bowtie-with-CAMS layout of Figure 5.23 with control failure modes and erosion factors added.]

Figure 5.24: Advanced Bowtie with control erosion factors.

In performing a bowtie analysis it is important to cover the following steps:

a. Describe the unwanted event to be analysed.

b. Determine the scope of analysis. This is a process of defining what is in and out of the
analysis in terms of organisational areas and/or functions, operational processes and/or
functions, spatial area, and time horizons.

c. Identify the range of threats [and opportunities] that can be related to an unwanted
event. These threats [and opportunities] can exist or emerge from
environment/context, equipment and technology, people and organisational interaction
issues and opportunities.

d. Identify possible consequences that might result if the unwanted event occurs. The
consequences are associated with impacts on the objectives of a business (e.g. financial,
safety, health, environment, community relations, productivity etc) and can be positive
or negative.

e. Identify optimum controls to arrest or prevent the unwanted event from occurring and
controls to mitigate consequences of the unwanted event if it does occur. Use the
definition of control as shown in Figure 5.19 to identify controls that address each threat
and mitigate each consequence. In order to determine whether sufficient controls have
been identified, an assessment of individual control effectiveness can be conducted.
Also an assessment of control suite adequacy can be performed by ordering and
grouping the controls in terms of when they act in the event timeline. For more details
on assessing control effectiveness and control adequacy refer to the Hassall et al. (2015)
ACARP report on selecting and optimising risk controls.

f. Identify failure modes for important controls and identify failure prevention elements
needed to ensure controls do not fail or their performance does not erode over time.
Control failure modes are the things that can cause the control to fail or cause the
control performance to erode over time so it fails to work as required when required.
Control failure modes can be represented on the bowtie as shown in Figure 5.24 or in a
separate table.

g. Determine control assurance management system (CAMS) items needed to ensure
controls work as required when required. CAMS items are the monitoring,
maintenance, testing, calibration and other activities that are required to check and test
that the control is maintained in working condition so it will work as required when
required.

Note that bowtie analysis can theoretically be done to incorporate opportunities on the left
and positive as well as negative consequences on the right. However, at present, this is not
common practice. Typically, where risks can be associated with upsides and downsides
tailored analysis methods are used.
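To show how the elements described above fit together, here is a minimal sketch of a bowtie captured as plain data. The unwanted event, threats, consequences and controls are invented for illustration; a real analysis is usually built in dedicated bowtie software and carries much more detail, such as control erosion factors and CAMS links.

# Minimal bowtie data structure: threats with arresting (preventive) controls on the
# left of the knot, consequences with mitigating controls on the right.
bowtie = {
    "unwanted_event": "Loss of fuel containment from bulk storage",
    "threats": {
        "Tank overfill during transfer": ["High-level trip", "Operator supervision of transfer"],
        "Corrosion of tank shell": ["Inspection and thickness testing programme"],
    },
    "consequences": {
        "Pool fire": ["Bund", "Foam deluge system", "Emergency response plan"],
        "Soil and groundwater contamination": ["Bund", "Spill recovery procedure"],
    },
}

# A simple completeness check: every threat and consequence should have at least one control
for side in ("threats", "consequences"):
    for item, controls in bowtie[side].items():
        if not controls:
            print(f"Gap: no controls identified for {side[:-1]} '{item}'")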

5.4.5. Management of Controls



As we have discussed throughout this document, we still experience unwanted events in
industry. The International Council of Mining and Metals (ICMM) looked at the fatality
related events and found:

“The top factors for … incidents are people not properly identifying risks, controls not
being in place, or the controls not being effectively implemented or maintained”.
(ICMM 2013)

This statement highlights that just doing the analysis to determine effective risk treatments
is not sufficient – those treatments need to be effectively implemented and maintained
over time. The ICMM produced guidelines that recommend extra verification of those
controls that are critical for preventing significant unwanted events. Verification activities
are management’s check that the CAMS items are being done in a timely manner and to a
high degree of quality (i.e. they are not being overlooked or rushed in terms of
implementation and maintenance, and the checking process has not become a ‘tick and
flick’ paper exercise). Verification activities should also check the reliability and
effectiveness of controls in order to answer the question: are we doing the best we can do?

Figure 5.25 shows how monitoring and verification activities can be allocated in an
organisation. Figure 5.26 is an example of the assurance and verification activity details for
a specific control.


- Senior leaders: verify effectiveness of controls.
- Managers: verify that monitoring and checks are done in a timely manner and to a high standard.
- Frontline staff: monitor and check that controls are present and working to their required performance standards.


Figure 5.25: Control monitoring and review activities assigned to organisational levels

[Figure 5.26 prompts, for each critical control: what CAMS activities do frontline people need to do to ensure the control works as required when required (by who? on what/where? when/how often? action triggers?), and what verification activities are needed to check that critical controls are effective and that CAMS activities are being done to a high standard (by who? when? how many? how often?).]

Figure 5.26: Example of control specification, monitoring and verification information
(ICMM, 2015b)
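Following the prompts in Figure 5.26, a CAMS activity and its verification check could be recorded along the lines of the sketch below. The control, owners, frequencies and trigger are illustrative only, not a prescribed template.

# Illustrative record linking a critical control to its CAMS activity and verification check
critical_control = {
    "control": "High-level trip on bulk fuel tank",
    "cams_activity": {
        "what": "Function test of the trip and proof of valve closure",
        "by_who": "Frontline operations technician",
        "when": "Monthly",
        "action_trigger": "Any failed test: stop transfers and raise a work order",
    },
    "verification": {
        "what": "Review test records and witness one test for effectiveness",
        "by_who": "Area manager",
        "when": "Quarterly",
    },
}

# A simple check that no required field has been left blank
for section in ("cams_activity", "verification"):
    for field, value in critical_control[section].items():
        assert value, f"{section}.{field} must be specified"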


5.5. Summary

Identifying, evaluating and treating risk is crucial for ensuring an organisation achieves its
objectives. Often it is the overlooked or underestimated risk that causes companies losses
from incidents and/or lost opportunities. This chapter has introduced the theory and
methodology of performing risk assessments and risk treatment activities.

Risk assessment is a structured process with three distinct steps – risk identification, risk
analysis and risk evaluation. The tools are not conceptually complex, and can be quite
straightforward to use. The best way to fully understand the process of risk assessment is
to do it. However, within process industries, it is critical that this process is implemented
rigorously, with suitably experienced and qualified people.

Risk treatment is required when an inherent risk is deemed unacceptable or intolerable.
Risk treatment involves identifying the unwanted event, analysing controls to select the
ones required to prevent and mitigate the unwanted event, then determining the
management activities required to ensure controls are implemented, monitored,
maintained and effective at addressing the unacceptable risk.

Chapter 6

[Chapter opener graphic: the course risk management framework diagram, with the "Monitor and Review Risks" activity highlighted.]


“One of the great mistakes is to judge policies and programs by their intentions
rather than their results”. Milton Friedman

6.1. Introduction
To understand the status of risk management activities it is necessary to monitor and review
performance. Monitoring activities involve the ongoing and continual checks done to
understand status and determine whether changes have occurred, or are required, in
risk levels and risk management activities. Reviews involve the periodic and in-depth
analysis of information to assure that risk management activities are at the required level of
quality and effectiveness.

This chapter focuses on the monitoring and reviewing of risks step in ISO31000 as shown on
the right hand side of Figure 6.1. Monitoring and reviewing risks involves:
- Reviewing and updating the risk assessments, as discussed in Chapter 5 and as
summarised in the three lines of defence model shown in Figure 6.2.
- Monitoring the status, adequacy and effectiveness of controls implemented to treat
risk, also discussed in Chapter 5 and summarised in Figure 6.2.
- Investigating successful and unwanted events to capture learnings in order to
prevent the recurrence of accidents and to promote the recurrence of fortuitous
events.

The three lines of defence model highlights that:
• The first line of defence is operational management taking ownership, responsibility
and accountability for assessing and controlling risks, using the required frameworks
and approaches to deliver the agreed level of risk.
• The second line of defence is senior management providing the required framework
and resources to effectively manage risk, monitoring compliance and providing
feedback on performance.
• The third line of defence comprises an internal audit function that provides an
independent, often in-depth, review of the performance of the risk management
framework and the first two lines of defence.




Figure 6.1: Extended Version of ISO31000 Risk Management Process


Figure 6.2: Three lines of defence model


As mentioned in Chapter 5, another good way to view the risk management process is as the
continuous loop shown in Figure 6.3. One of the key steps in this
continuous loop is the “learn and adapt” step, which entails the activities needed to
continually improve an entity’s risk management approach. Event investigation is done to
learn lessons on how to improve risk management in ways that prevent adverse outcomes
and promote fortuitous outcomes.

[Figure 6.3 shows the risk management process as a continuous loop of statements: before acting, we identify risks; we assess risks consistently and collaboratively; we control unacceptable risk levels to a level that is tolerable; we monitor risk levels against our risk tolerance to determine controls needed; we continually evaluate our effectiveness and look for changing outcomes; and we learn and adapt to continually improve our approach to managing risk.]

Figure 6.3: Risk Management Process
(adapted from http://www.sra.org.uk/sra/strategy.page#skip)


This chapter focuses on event investigations - the third component of monitoring and
reviewing risk in the ISO31000 framework and an important component of the “learn and
adapt” step in the continuous risk management process.

Event investigations are conducted to specifically analyse unsuccessful and successful
events or exposures that have occurred in the past in order to identify learnings on how to
further improve risk management activities for the future as shown in Figure 6.4. Therefore
the learning objectives for this chapter are:
• Review the purpose and theory behind event investigations.
• Introduce incident investigation techniques and application considerations.
• Consider how learnings can be integrated back into the business.
• Understand background knowledge and techniques to enable you to perform an
incident investigation.



Figure 6.4: Prospective and Retrospective Risk Analysis

6.2. Why perform event investigations?



Most industries experience a range of event outcomes as shown in Figure 6.5. The
occurrence of novel events, and the recurrence of unexpected adverse and successful events,
is an indication that there are shortcomings in risk management processes. The reason we
choose to investigate is because we want to learn and improve. Investigating both adverse and
successful events can provide insights that help improve and strengthen risk management
processes. So the events that should be investigated are the ones we can and want to learn
useful lessons from! But sometimes we need to investigate to comply with the law, as shown
in Figure 6.6.


Figure 6.5: Range of event outcomes experienced in industry (Hollnagel, 2011a)



Figure 6.6: Reasons for investigating different types of events


In process industries most events investigated are adverse events or incidents because they
have significant actual or potential impacts on health, safety, environment, and operational
performance. The adverse events in the process industry are predominantly recurring
events as shown in Figure 6.7. Recurring events are those events that are similar to previous
incidents (e.g. Fishwick, 2014; Gill, 2013; Waite, 2013). Research has shown that recurring
adverse events often result from a failure to identify risks properly and a failure to implement
and maintain known controls for known hazards (ICMM, 2013; Noetic Solutions, 2014).
Investigations, if done well, should identify and address shortcomings in risk identification
and control.

[Figure 6.7 contrasts known (recurring) events with novel events and quotes: "Experience of investigating a wide range of incidents internationally strongly suggests that most incidents are associated with a failure to implement what should be well-known controls for well-known risks" (Noetic, 2015).]

Figure 6.7: Illustration of distribution of recurring vs novel events (M. E. Hassall & Dodshon, 2017)


Investigations can be performed for different reasons. They can be performed to:
• Identify the initial loss of control event – the initiating event
• Find causes of the event.
• Identify the actual and potential consequences of the event.


• Collect information on the full consequences of an event.
• Provide insights into the effectiveness of involved controls.
• Collect information to use for legislated reporting requirements.
• Collect information for legal reasons including for use in legal proceedings.
• Collect and disseminate learnings from the event (that may include findings from
some or all of the above points).

6.3. Purpose and theory behind investigations



The purpose of investigating events is to learn about what happened and to gain insights
into how to prevent negative and promote positive outcomes in the future. Specifically
event investigations seek to achieve at least one of the following:
• To determine the circumstances that lead to unwanted outcomes in order to identify
ways to prevent reoccurrences.
• To determine the circumstances that lead to positive outcomes in order to identify
ways to promote reoccurrences.
The importance of investigating both types of incidents is discussed by Hollnagel (2011a).

Before conducting incident investigations it is important to determine:
• Whether to do an investigation?
• Why we are doing the investigation?
• What are we investigating?
• How are we going to go about doing the investigation?

Determining whether to do an investigation is the first decision to be made. According to
van Kampen and Drupsteen (2017), choosing whether to investigate or not should include
considerations of the following:
1. High actual severity of the incident’s consequences such as loss of life, loss of
containment or extensive (property) damage;
2. A legal or procedural requirement to do so;
3. The occurrence of similar incidents earlier in time, within the same organisation or
sector;
4. Incidents with limited actual consequences but with a high potential for serious
consequences;
5. Near misses when an incident is just barely avoided.
(van Kampen & Drupsteen, 2017, p. https://oshwiki.eu/wiki/Accident_investigation_and_analysis)
In addition, van Kampen and Drupsteen (2017) go on to say “the investigation of near misses
and incidents with limited actual consequences allows the company to identify and
control unforeseen hazards or inadequate control measures before they cause a more
serious incident. In this way regular incident investigation can help to improve safety
performance continuously” (source: https://oshwiki.eu/wiki/Accident_investigation_and_analysis ).
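These criteria can be read as a simple screening rule: investigate if any of them applies. The sketch below (flag names invented for illustration; it is not a formal decision tool) encodes that screen.

# Screening sketch based on the van Kampen and Drupsteen (2017) criteria listed above.
def should_investigate(*, high_actual_severity: bool, legally_required: bool,
                       similar_incidents_before: bool, high_potential_severity: bool,
                       near_miss: bool) -> bool:
    """Investigate if any of the five criteria applies."""
    return any([high_actual_severity, legally_required, similar_incidents_before,
                high_potential_severity, near_miss])

# Example: a near miss with high potential consequences should be investigated
print(should_investigate(high_actual_severity=False, legally_required=False,
                         similar_incidents_before=False, high_potential_severity=True,
                         near_miss=True))  # -> True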

Determining why an investigation is being performed is important because it can help guide
decisions associated with the scope and focus of the investigation as well as the tools to be
used.



Determining what to investigate involves making decisions about the scope of the
investigation. Events often emerge from a series of preceding activities and decisions that
can stretch a long way back in time and extend beyond the affected part of the organisation to
other departments and organisational levels. The impact of an event can also stretch
forward in time and in spatial area.

Determining the scope is about setting the time, physical area, organisational and external
limits of the investigation. It is about clearly specifying what is in and what is out of the
investigation. For example, an incident investigation might be limited to the events that
occurred in the week leading up to the accident and exclude activities and decisions that
occurred prior to that week or after the accident. In addition, the incident
investigation might be confined to looking at the actions of those directly involved rather
than all potentially contributing actions, including those of leaders and regulators. Incident
investigations might also be limited to looking at the immediate causes and acute
consequences of an event and not include investigating the latent causes of the event and
the potential chronic consequences that could result from it. Finally, in terms of spatial
area, the investigation might include only things that fall within the area controlled
by the company and exclude things outside that area, such as supply chain issues, effects the
event might have on company reputation, and the cumulative impacts that such events can
have on the environment and/or community health.

Determining how to investigate involves determining what incident investigation process
and tools will be used and what resources will be involved in the investigation. This step
clarifies whether the investigation is going to: 1) follow a formal or informal process; 2)
involve predefined and specified techniques and tools; and 3) be conducted by an individual,
a team or a prescribed set of people. The importance or significance of the event can dictate
how formal and how in-depth the investigation process is going to be. The complexity of the
event to be analysed can also be used to guide decisions about the processes, tools and
resources required.

Steps to consider when conducting an incident investigation are:

STEP 1: Describe the context, which involves articulating:
• The event to be investigated and the outcome of that event
• The purpose of the investigation
• The scope of investigation (what’s in/out)

STEP 2: Investigate the accident to determine what happened
The output of Step 2 is usually a timeline and/or a narrative that outlines what
happened, where it happened, when it happened and who was involved. This step is
often done iteratively with determining the scope of the investigation.

STEP 3: Analyse the accident to determine why it happened
The aim of Step 3 is to identify the factors that initiated, escalated (either positively
or negatively) and/or thwarted the event (i.e., prevented a more positive or negative
outcome). Investigation techniques are used to help investigators identify technical,
human and system contributions to accidents. These are discussed in more detail in
the next section.

STEP 4: Recommend ways to prevent or promote similar unwanted or wanted events.
The aim of Step 4 is to summarise the findings of the incident investigation into
actionable recommendations that are specific, measurable, assignable,
realistic/relevant and time-bounded (SMART).

To ensure quality incident investigation and analysis, Ryan (2015) produced a list of eight
requirements that need to be met. These requirements are listed in Table 6.1.

Table 6.1. Requirements for methods for accident investigation and analysis (Ryan, 2015, p.
827).
An accident analysis method should
1. Have a clear scope for analysis (e.g., whether it should focus at the level of the work and
the technological system, or more broadly at influences from government and regulators)
2. Be influenced by a model or group of models
3. Provide a detailed description of the accident, including a visual representation of the
accident sequence if appropriate
4. Search for and reveal underlying causes
5. Contribute to understanding of prevention (e.g., safety barriers)
6. Help in generating recommendations
7. Give consideration to practical aspects, such as level of education and training that is
needed to use the method
8. Be valid and reliable

6.4. Incident Investigation Techniques and Application Considerations



A summary of incident investigation techniques and what they focus on is shown in Table
6.2. In a survey conducted by Dodshon and Hassall (2017), incident
investigation practitioners often mentioned the techniques shown in Table 6.3. When
selecting incident investigation tools or techniques it is important to ensure the incident
investigation will deliver the desired outcome. Different tools and techniques will
help analysts identify and/or analyse different facets of an event. For example:
- The scope of the investigation and determining what happened can be derived using
timeline analysis. A descriptive narrative can also be used.
- The root or primary cause(s) of the event. An example of a technique that can
facilitate this is 5-whys analysis. Other examples include Root Cause Analysis (RCA),
and Taproot.
- The range of causes or contributing factors (e.g., technical, human, organisational
systems and environmental). An example of a technique that can facilitate this is the
fishbone technique. Other examples include Essential Factors Analysis.
- The human contribution to the event across different organisational levels. An
example of a technique that can facilitate this type of analysis is the HFACs
technique


- The role that organisation systems (e.g. communication, contractor management,
training systems) played in the event. An example of a technique that can facilitate
this type of analysis is the Accimap technique.
- The role that risk controls and risk control management plans played in the event. An
example of a technique that can facilitate this type of analysis is the Bowtie analysis
technique.
- Design interventions that help humans successfully manage similar situations in the
future. An example of a technique that can facilitate this type of analysis is the SAfER
technique.



Table 6.2 Summary of incident investigation techniques from Dodshon and Hassall (2017)




Table 6.3: Tools and/or processes used by current practitioners in investigations (Dodshon &
Hassall, 2017)


In the remainder of this chapter we will describe the more prevalent and newly emerging
techniques: Timeline (sequence of events), 5 whys, root cause analysis, Accimap, HFACs
(human error analysis), fault and event tree analysis, and bowtie analysis. TapRoot and
ICAM are not discussed as these are commercially available software based incident
investigation frameworks that incorporate the philosophies of the other techniques covered
in this chapter. Systems theoretic accident model and processes (STAMP and its STPA
derivation) and the functional resonance analysis method (FRAM) are approaches that
typically require expertise to administer so they are only introduced here.

Each of these techniques will be discussed in detail next. However, when conducting incident
investigations it is important to understand that biases can affect the quality of the analysis
and recommendations. We discussed bias earlier, and many of those biases are applicable in
event investigations. There is also the phenomenon of “what you look for is what you find”,
as highlighted by Lundberg et al. (2009).

6.4.1. Timeline
A timeline – also known as an event timeline – is used in incident investigations to state what
happened and when. It is a statement of facts that may include:
– the initial events that led to the loss of control incident,
– if appropriate, the subsequent events that escalated the loss of control incident to the
actual consequence,
– the events associated with the response to and recovery of the situation.
An event timeline “is the workhorse in an event investigation because it provides a
systematic tool to separate events in time to allow events that may be critical to
determining appropriate causal factors to be seen and acted upon” (U.S. Department of
Energy, 2012, pp. 2-44). Thus timelines are a way of articulating the scope of the
investigation because they highlight the start point, end point and events considered.

Timelines can be represented as a picture, as shown in Figure 6.8, or in tabular or descriptive
form (see http://chernobylgallery.com/chernobyl-disaster/timeline/ for an example of a
descriptive timeline). The degree of detail placed in timelines can vary. The important
features are that the timeline clearly:
- Defines the start point and end point of the investigation
- Describes the key events to help readers understand what happened and when
- Focuses on the facts/events “that matter” because they contributed to the initial
loss of control, the escalation of the event, or the recovery of the situation. To test
whether an event matters, a good question to ask is: if this event had not occurred,
would the accident have been prevented, or would its consequences have been
significantly less?
- Represents the information in a readable and easy-to-understand format.
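A timeline can be captured very simply as a time-ordered list of factual events, each tagged with why it matters to the loss of control, escalation or recovery. The sketch below uses invented events purely for illustration.

from datetime import datetime

# Each entry: (when, what happened, why it matters to loss of control / escalation / recovery)
timeline = [
    (datetime(2023, 5, 1, 9, 10), "Transfer to Tank 3 started", "Start point of the investigation"),
    (datetime(2023, 5, 1, 9, 40), "High-level alarm acknowledged but not acted on", "Contributed to loss of control"),
    (datetime(2023, 5, 1, 9, 55), "Tank 3 overflowed into bund", "The unwanted event"),
    (datetime(2023, 5, 1, 10, 5), "Transfer pump stopped by field operator", "Recovery action; end point"),
]

for when, what, why in sorted(timeline):
    print(when.isoformat(sep=" "), "|", what, "|", why)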




Figure 6.8: An example of a timeline retrieved from
http://www.slideshare.net/duckcy04/deep-water-horizon-accident-investigation-lessons-
learned

6.4.2. 5 Whys analysis


The 5 whys incident investigation technique involves taking the key events from the timeline
and asking why they happened. The aim of the technique is to force people into thinking
about the root cause of an event. The 5 whys technique is illustrated in Figure 6.9. As Figure
6.9 shows, the timeline of events is at the top and the whys are asked for each event to
determine root cause. It is a quick technique that helps people think
more deeply about the causes of an event. However, without a good timeline and without
good guidance the technique tends to produce different results when different investigators
use it. It is one of the techniques where ‘what you look for you find’ (Lundberg et al., 2009).
In addition, questions arise such as “what is a root cause?” and “how far back do you go before
you claim you have the root cause?”. As Figure 6.9 highlights, a root cause may be reached
after any number of whys, and for some events there is potentially more than one root cause.
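The mechanics of the technique are simple enough to capture in a few lines. The sketch below records one chain of whys for a single, hypothetical timeline event and reports the final answer as a candidate root cause; the event and answers are invented for illustration.

# Record a 5-whys chain for one event from the timeline (answers are illustrative only).
def five_whys(event: str, answers: list[str]) -> str:
    print("Event:", event)
    for i, answer in enumerate(answers, start=1):
        print(f"  Why {i}? {answer}")
    return answers[-1]  # treated here as the candidate root cause

root_cause = five_whys(
    "Pump seal failed and released hydrocarbon",
    [
        "The seal ran dry during start-up",
        "The low-flow trip had been bypassed",
        "The bypass was left in place after maintenance",
        "There was no check to confirm trips were reinstated after maintenance",
    ],
)
print("Candidate root cause:", root_cause)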



Figure 6.9 Illustration of 5 whys analysis (source: Slide presented by Clive Killick to
MINE4200 class, July 2014)

6.4.3. Fishbone
The fishbone diagram, also known as the Ishikawa diagram, can be used by incident
investigators to diagrammatically represent the range of causes and contributing factors (or
secondary causes) of an event. Examples of fishbone diagrams are shown in Figure 6.10
and Figure 6.11. The fishbone diagram requires analysts to identify the classes of causes
and factors that need to be considered. As shown in Figures 6.10 and 6.11 these can include
equipment, process, people, material, environment, and management. Other classes of
causes that might be relevant to the process industry are procedures or methods, data,
communications, etc. The fishbone diagram is another technique that is limited to what the
investigators focus on, and therefore different investigators will produce different diagrams
for the same event. Another major limitation of the fishbone technique is that it focuses the
investigator on the factors that caused the incident and on things that could have been done
to prevent it, and excludes anything after the event (e.g. escalation and recovery issues).
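In data terms a fishbone is simply a set of contributing factors grouped under the chosen cause classes. The sketch below (classes and causes invented for illustration) shows the structure and prints it one "bone" at a time.

# Illustrative fishbone: contributing factors grouped by cause class.
fishbone = {
    "Equipment": ["Level transmitter drifted out of calibration"],
    "Process": ["Transfer procedure did not require an independent level check"],
    "People": ["Operator assumed the alarm was spurious"],
    "Management": ["Calibration backlog not tracked"],
}

for cause_class, causes in fishbone.items():
    print(cause_class)
    for cause in causes:
        print("  -", cause)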



Figure 6.10 Blank fishbone diagram (source: Wikipedia.org)



Figure 6.11 Example of fishbone diagram (source http://www.conceptdraw.com/solution-
park/business-fishbone-diagram )


6.4.4. HFACs
Human Factors Analysis and Classification System (HFACs) is a technique designed to help
incident investigators identify the human errors and human contributions, at various
organizational levels, associated with a given accident (Shappell & Wiegmann, 2000c). The
HFACs framework is shown in Figure 6.12.


Figure 6.12 The HFACs framework (source: https://hfacs.com/hfacs-framework.html )


The definitions for each of the items in the framework, as sourced from www.hfacs.com, are
as follows:
Organizational Influences
• Organizational Climate (OC): Prevailing atmosphere/vision within the organization
including such things as policies, command structure, and culture.
• Operational Process (OP): Formal process by which the vision of an organization is
carried out including operations, procedures, and oversight among others.
• Resource Management (RM): This category describes how human, monetary, and
equipment resources necessary to carry out the vision are managed.


Supervisory Factors
• Inadequate Supervision (IS): Oversight and management of personnel and resources
including training, professional guidance, and operational leadership among other
aspects.
• Planned Inappropriate Operations (PIO): Management and assignment of work
including aspects of risk management, crew pairing, operational tempo, etc.
• Failed to Correct Known Problems (FCP): Those instances when deficiencies among
individuals, equipment, training, or other related safety areas are “known” to the
supervisor, yet are allowed to continue uncorrected.
• Supervisory Violations (SV): The willful disregard for existing rules, regulations,
instructions, or standard operating procedures by management during the course of
their duties.
Preconditions for Unsafe Acts
• Environmental Factors
o Technological Environment (TE): This category encompasses a variety of issues
including the design of equipment and controls, display/interface characteristics,
checklist layouts, task factors and automation.
o Physical Environment (PhyE): The category includes both the operational setting
(e.g., weather, altitude, terrain) and the ambient environment, such as heat,
vibration, lighting, toxins, etc.
• Personnel Factors
o Communication, Coordination, & Planning (CC): Includes a variety of
communication, coordination, and teamwork issues that impact performance.
o Fitness for Duty (PR): Off-duty activities required to perform optimally on the job
such as adhering to crew rest requirements, alcohol restrictions, and other off-
duty mandates
• Condition of the Operator
o Adverse Mental States (AMS): Acute psychological and/or mental conditions
that negatively affect performance such as mental fatigue, pernicious attitudes,
and misplaced motivation.
o Adverse Physiological States (APS): Acute medical and/or physiological
conditions that preclude safe operations such as illness, intoxication, and the
myriad of pharmacological and medical abnormalities known to affect
performance.
o Physical/Mental Limitations (PML): Permanent physical/mental disabilities that
may adversely impact performance such as poor vision, lack of physical strength,
mental aptitude, general knowledge, and a variety of other chronic mental
illnesses.
Unsafe Acts
• Errors
o Decision Errors (DE): These “thinking” errors represent conscious, goal-
intended behavior that proceeds as designed, yet the plan proves inadequate
or inappropriate for the situation. These errors typically manifest as poorly
executed procedures, improper choices, or simply the misinterpretation
and/or misuse of relevant information.
o Skill-based Errors (SBE): Highly practiced behavior that occurs with little or no
conscious thought. These “doing” errors frequently appear as breakdowns in
visual scan patterns, inadvertent activation/deactivation of switches,
forgotten intentions, and omitted items in checklists. Even the
manner or technique with which one performs a task is included.
o Perceptual Errors (PE): These errors arise when sensory input is degraded as
is often the case when flying at night, in poor weather, or in otherwise
visually impoverished environments. Faced with acting on imperfect or
incomplete information, aircrew run the risk of misjudging distances,
altitude, and descent rates, as well as responding incorrectly to a variety of
visual/vestibular illusions.
• Violations
o Routine Violations (RV): Often referred to as “bending the rules” this type of
violation tends to be habitual by nature and is often enabled by a system of
supervision and management that tolerates such departures from the rules.
o Exceptional Violations (EV): Isolated departures from authority, neither
typical of the individual nor condoned by management.

Examples for each of these HFACs items, derived from research conducted in the airline,
healthcare and defence industries, are summarised in Table 6.4.


Table 6.4: HFACs item examples from the airline, healthcare and defence industries

HFACs Category Airline example (quoted Healthcare example (quoted from Defence example
from Shappell & Diller et al., 2013) (quoted from Department
Wiegmann, 2000c) of Defense, 2015)
UNSAFE ACTS
OPERATOR
ERRORS
Decision errors Improper procedure - Inadequate risk assessment - Inadequate real-time risk
Misdiagnosed emergency - Critical-thinking failure assessment
Wrong response to - Caution/warning ignored or - Failure to prioritise tasks
emergency misinterpreted adequately
Exceeded ability - Wrong response to urgent/ - Ignored a caution/
Inappropriate emergent situation warning
manoeuvre - Inadequate report provided - Choice of action during an
Poor decision - Misinterpretation of information operation
- Selected incorrect procedure
- Failure to prioritise task
- Inadequate work pre-planning
- Exceeded ability (i.e., competency)
- Improper use of instrumentation,
equipment, PPE, and/or materials
- Use of defective instrument,
equipment, PPE, and/ or materials
Inadequate maintenance of
equipment/supplies
Skill-based errors Breakdown in visual scan - Timing errors (i.e., performed task - Unintended operation
Failed to prioritise at wrong time) - Checklist not followed
attention - Safety checklist error correctly
Inadvertent use of flight - Work or motion at improper speed - Procedure not followed
controls - Lapse of memory/ recall for all or correctly
Omitted step in part of a procedure - Over- or under-control
procedure - Poor technique (e.g., intubation, - Breakdown in visual scan
Poor technique central line insertion) - Rushed or delayed
necessary action


Over-controlled the - Conducted sequence item out of
aircraft order
- Habit transference with new
equipment/ procedure
- Improper lifting/ position for task
Perceptual errors - Misjudged distance/ - Misperceived patient factors (e.g. - Motion illusion –
altitude/ airspeed strength/weight-bearing) kinaesthetic
- Spatial disorientation - Misinterpreted/ misread equipment - Turning/balance illusion –
- Visual illusion vestibular
- Visual illusion
- Misperception of
changing environment
- Misinterpreted/ misread
instrument
- Spatial disorientation
- Temporal/time distortion

VIOLATIONS
Routine - Violation of policy/ procedures/
standard of care
- Failure to assess patient
- Failed to adhere to brief - Failure to monitor patient
- Failed to use radar - Inadequate/ untimely
altimeter documentation/ communication
- Flew an unauthorized - Distracting behaviour
approach - Taking shortcuts (not otherwise
- Violated training rules specified)
- Flew an overaggressive - Failure to follow orders
maneuver - Disabled guards, warning systems, - Performs work-around
- Failed to properly or safety devices violation
prepare for flight - Use of equipment/ instruments/ - Commits widespread/
- Briefed unauthorized PPE/ material improperly routine violation
flight - Delivery of care beyond scope of - Extreme violation – lack
- Not current/ qualified practice of discipline
for the mission - Failed to secure equipment or
- Intentionally exceeded material properly
Exceptional the limits of the aircraft - Violation of policy/ procedures/
- Continued low altitude standard of care
flight in VMC - Disabled guards, warning systems,
- Unauthorised low- or safety devices
altitude canyon running - Inadequate/ untimely
documentation/ communication
- Excessive risk taking
- Failure to assess patient
PRECONDITIONS FOR UNSAFE ACTS
Situational Factors
Physical - Inadequate/ improper design for - Environmental conditions
Environment patient care affecting vision
- Obstructed access/ monitoring/ - Vibration affects vision or
visualisation of patient/ equipment balance
- Heat/cold stress impacts
performance
- External forces or object
impeded an individual’s
movement
- Lights of other vehicle/
vessel/aircraft affected
vision
- Noise interferences
Tools/Technology - Poorly designed or inadequate - Seat and restraint system
equipment/ material/ PPE/ problems
instruments


- Inadequate/ defective warnings/ - Instrumentation and
alarms warning system issues
- Unclear/ out-dated policies/ - Visibility restrictions (not
procedures/ checklists weather related)
- Failure of information technology
(software and hardware issues) inadequate
- Defective equipment/ material/ - Automated system
PPE/ instruments creates unsafe situation
- Workspace incompatible
with operation
- Personal equipment
interference
- Communication
equipment inadequate
Condition of Operators
Mental States - Channelised attention - Task overload - Psychological problem
- Complacency - Perceived haste/ pressure to - Life stresses
- Distraction complete task - Emotional state
- Mental fatigue - Inattention/ distraction - Personality style
- Get-home-itis - Complacency - Overconfidence
- Haste - Stress (job related) - Pressing
- Loss of situational - Overconfidence - Complacency
awareness - Frustration - Motivation
- Misplaced motivation - Task fixation - Mentally exhausted
- Task saturation - Lack of confidence (burnout)
Physiological - Impaired physiological - Task overload - Substance effects
States state - Loss of consciousness
- Medical illness - Physical illness/ injury
- Physiological - Fatigue
incapacitation - Trapped gas disorders
- Physical fatigue - Evolved gas disorders
- Hypoxia/ hyperventilation
- Inadequate adaptation to
darkness
- Dehydration
- Body size/ movement
limitations
- Physical strength and
coordination
- Nutrition/ diet
Physical/Mental - Insufficient reaction - Limited experience/ proficiency - Not paying attention
Limitations time - Lack of technical procedural - Fixation
- Visual limitation knowledge - Task oversaturation/
- Incompatible undersaturation
intelligence/ aptitude - Confusion
- Incompatible physical - Negative habit transfer
capability - Distraction
- Geographically lost
- Interference/ interruption
- Technical or procedural
knowledge not retained
after training
- Inaccurate expectation
Personnel Factors
Communication, Crew Resource - Inadequate communication - Failure of crew/ team
Coordination & Management between providers leadership
Planning - Failed to back-up - Failure to warn/ disclose critical - Inadequate task
- Failed to communicate/ information delegation
coordinate - Inadequate communication during - Rank/ position
- Failed to conduct handoff intimidation
adequate brief - Failed to use all available resources - Lack of assertiveness
- Failed to use all - Inadequate communication: - Critical information not
available resources between workgroups communicated


- Failure of leadership - Lack of teamwork - Standard/ proper
- Misinterpretation of - No or ineffective communication terminology not used
traffic calls methods - Failed to effectively
- Confusing/ conflicting directions communicate
- Inadequate communication: staff - Task/mission planning/
and patient/family briefing inadequate
- Failure in leadership (no one in
charge)
- Inaccurate information provided
- verification techniques not used
- Lack of a plan or care
- inadequate communication:
leadership/provider
- Confusing/ conflicting orders
- Lack of discharge planning
Fitness for Duty Personal Readiness - Inadequate rest/ sleep
- Excessive physical - Lack of physical fitness
training - Inaccurate information provided
- Self-medicating
- Violation of crew rest
requirement
- Violation of bottle-to-
throttle requirement
SUPERVISORY FACTORS
Inadequate - Failed to provide - Inadequate mentoring/ coaching/ - Supervisory/command
supervision guidance instruction oversight
- Failed to provide - Inadequate oversight - Improper role modelling
operational doctrine - Inadequate training - Failed to provide proper
- Failed to provide - Failed to communicate policies/ training
oversight procedures - Failed to provide
- Failed to provide appropriate policy.
training Guidance
- Failed to track - Personality conflict with
qualifications supervisor
- Failed to track - Lack of supervisory
performance responses to critical
information
- Failed to identify/ correct
risky or unsafe practices
- Selected individual with
lack of proficiency
Planned - Failed to provide - Failure to match staff competency - Directed task beyond
Inappropriate correct data with the task personnel capabilities
Operations - Failed to provide - Inappropriate team
adequate brief time composition
- Improper manning - Selected individual with
- Mission not in lack of current or limited
accordance with rules/ experience
regulations - Performed inadequate
- Provided inadequate risk assessment – formal
opportunity for crew rest - Authorised unnecessary
hazard
Failure to Correct - Failed to correct - Failed to initiate corrective action
Known Problem document in error - Failed to ensure problem was
- Failed to identify an at- corrected
risk aviator - Failed to review and revise a policy/
- Failed to initiate procedure
corrective action
- Failed to report unsafe
tendencies


Supervisory - Authorised unnecessary - Failed to enforce policies/ - Failure to enforce existing
Violation hazard procedures rules (supervisory act of
- Failed to enforce rules - Authorised hazardous operations omission)
and regulations - Allowing unwritten
- Authorised unqualified policies to become
crew for flight standard
- Directed individual to
violate existing regulations
- Authorised unqualified
individuals for task
ORGANISATIONAL INFLUENCES
Organisational - Structure: Chain-of- - Inadequate policies - Organisational culture
Culture command, delegation of - Chain of command (attitude/ actions) allows
authority, - Organisational culture/values for unsafe task/mission
communication, formal - Organisational
accountability for actions overconfidence or
- Policies: Hiring and underconfidence in
firing, promotion, drugs equipment
and alcohol - Unit mission/aircraft/
- Culture: norms and vehicle/equipment change
rules, values and beliefs, or unit deactivation
organisational justice - Organisational structure is
unclear or inadequate
Operational - Operations: operational - Strategic risk assessment - Pace of ops-
Process tempo, time pressure, - Corporate procedures tempo/workload
production quotas, - Organisational program/
incentives, policy risks not adequately
measurement/ appraisal, assessed
schedules, deficient - Provide inadequate
planning procedural guidance or
- Procedures: standards, publications
clearly defined - Organisation (formal)
objectives, training is inadequate or
documentation, unavailable
instructions - Flawed doctrine/
- Oversight: risk philosophy
management, safety - Inadequate program
programs management
- Purchasing or providing
poorly designed or
unsuitable equipment
Resource - Human resources: - Inadequate staffing - Personnel recruiting and
Management selection, staffing/ - Budgetary constraints selection policies are
manning, training - Human resources practices inadequate
- Monetary/budget - Failure to provide
resources: Excessive cost adequate manning/staffing
cutting, lack of funding resources
- Equipment/facility -Command and control
resources: Poor design, resources are deficient
purchasing of unsuitable - Inadequate infrastructure
equipment - Failure to remove
inadequate/ worn-out
equipment in a timely
manner
- Failure to provide
adequate operational
information resources
- Failure to provide
adequate funding


As these examples show, HFACS focuses on intentional and unintentional unsafe acts, with some
exploration of a select range of underlying factors that may have influenced people in a way
that contributed to an incident.
However, HFACS does not help analysts determine:
- the significance of the identified issues as contributors to the event being analysed
(i.e. whether they were significant or insignificant contributors), or
- whether the issues identified always lead to incidents or whether they can also
contribute to successful outcomes.
Therefore HFACS by itself does not help analysts identify or prioritise recommendations for
preventing the recurrence of incidents.

6.4.5. Bowtie analysis



As mentioned in Chapter 5, bowtie analysis is a visual representation of the controls and
control assurance management system elements needed to prevent and mitigate an
unwanted incident. Bowties can be used in incident investigations to highlight the controls
that were absent, present but ineffective, or present and effective, as shown in Figure 6.13.
Using bowtie analysis in incident investigations guides analysts to identify threats or
causes, effective and ineffective controls, and actual and potential consequences. Once
the absent and ineffective controls have been identified, the analyst can then explore the
control assurance management system information to determine why each control was
absent or ineffective.

The process for using bowtie analysis in incident investigation is as follows:
1. Obtain or construct a bowtie for the unwanted event (see Chapter 5 for details).
2. Develop control assurance management system elements for each control on the
bowtie.
3. Use information from the incident to determine which controls on the bowtie were
missing, which were present but ineffective or failed, which were present and effective,
and which had an unknown status. This information can be shown visually on the bowtie
as illustrated in Figure 6.13.
4. For the missing, ineffective and failed controls, use control assurance management
system information to investigate which control support activities failed to ensure the
control was implemented, monitored and maintained to the required standard.
5. From the insights gained on the effectiveness of the controls and control assurance
management system elements, make recommendations to improve the effectiveness of
prevention and mitigation controls.

Crucial to performing an accurate and informative incident investigation with bowtie analysis is
having quality bowtie and control assurance management system information to reference.
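Steps 3 and 4 above are essentially bookkeeping over the controls on the bowtie. A minimal sketch of how that bookkeeping might be recorded is shown below; it is illustrative only, the control descriptions are paraphrased loosely from Figure 6.13, and the assurance note is hypothetical rather than taken from any particular investigation tool.

```python
# Minimal sketch of recording control status during a bowtie-based investigation.
from dataclasses import dataclass
from enum import Enum

class ControlStatus(Enum):
    MISSING = "missing"
    INEFFECTIVE = "present but ineffective/failed"
    EFFECTIVE = "present and effective"
    UNKNOWN = "status unknown"

@dataclass
class Control:
    number: int             # control number as labelled on the bowtie
    description: str
    status: ControlStatus
    assurance_notes: list   # why implementation/monitoring/maintenance failed (step 4)

def controls_needing_followup(controls):
    """Return the controls whose assurance elements should be investigated (steps 3 and 4)."""
    return [c for c in controls
            if c.status in (ControlStatus.MISSING, ControlStatus.INEFFECTIVE, ControlStatus.UNKNOWN)]

# Illustrative entries loosely based on the fuel tanker overfill bowtie in Figure 6.13
controls = [
    Control(2, "Automated regulation and shutoff based on independent measures of volume pumped",
            ControlStatus.INEFFECTIVE, ["level instrument out of calibration"]),
    Control(8, "Vapour detection alarm prompting manual shutoff", ControlStatus.MISSING, []),
    Control(10, "Automated fire fighting response", ControlStatus.EFFECTIVE, []),
]

for c in controls_needing_followup(controls):
    print(f"Control {c.number}: {c.status.value} -> review control assurance elements")
```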


[Figure 6.13 content: bowtie diagram for a fuel tanker overfilling event involving volatile petrol. Threats include an incorrect volume being specified/pumped, fuel being pumped into the wrong (full or smaller) compartment, and the pumping system failing to slow or stop as it should. Numbered prevention and mitigation controls (1-15) include automated regulation and shutoff of flow based on independent volume and level measures, alarms prompting manual emergency shutdown, lockout/tagging of fill points, operator monitoring for spills, vapour detection alarms, access/security barriers, overflow bunds and drainage, emergency shutdown valves, automated fire fighting, and emergency medical and clean-up response. Consequences include overflowing fuel catching fire/exploding, operator injury, and environmental harm. Controls are shaded to show whether they were missing, present but ineffective/failed, present and effective, or of unknown status.]

Figure 6.13: Bowtie highlighting presence and effectiveness of risk controls during a fuel tanker overfilling incident (M. E. Hassall, 2017)


6.4.6. Accimap

Accimap analysis of incidents involves mapping the critical event and its contributing factors
at various organisational levels, as shown in Figure 6.14. Accimaps can be used to highlight
issues with decisions, communications and information flows between different
organisational levels and between different people within an organisational level.
Therefore Accimaps can be used to extend the analysis beyond the direct event chain to uncover
useful insights and recommendations regarding the organisational or latent factors (preconditions)
that permitted the initiation or escalation of the event, or the failure to detect and address it.

The Accimap process can also be used in conjunction with bowtie analysis to map
failures in control assurance management systems. An Accimap can be developed that
shows the control assurance management system elements required to implement, monitor
and maintain the controls (as shown on the bowtie). This Accimap can then be referenced in
an event investigation to highlight which elements were missing, which were present but
ineffective, and which were present and effective, as shown in Figure 6.15.
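As a rough illustration of the mapping involved, the sketch below records contributing factors against the organisational levels of a generic Accimap and the downstream factors they contribute to. The levels approximately follow the Rasmussen and Svedung hierarchy in Figure 6.14; the factor labels themselves are hypothetical, not drawn from a real investigation.

```python
# Minimal sketch of grouping Accimap contributing factors by organisational level.
from dataclasses import dataclass, field

# Organisational levels of a generic Accimap (approximately per Rasmussen & Svedung, 2000)
LEVELS = ["Government", "Regulators/Associations", "Company", "Management", "Staff", "Work"]

@dataclass
class Factor:
    label: str
    level: str                                            # one of LEVELS
    contributes_to: list = field(default_factory=list)    # labels of downstream factors/events

# Hypothetical fragment for a fuel tanker overfill event
factors = [
    Factor("No audit of overfill protection", "Management", ["Level shutoff not maintained"]),
    Factor("Level shutoff not maintained", "Staff", ["Tanker overfilled"]),
    Factor("Tanker overfilled", "Work", []),
]

# Group factors by level, mirroring the rows of the Accimap diagram
by_level = {lvl: [f.label for f in factors if f.level == lvl] for lvl in LEVELS}
for lvl in LEVELS:
    if by_level[lvl]:
        print(f"{lvl}: {', '.join(by_level[lvl])}")
```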


Figure 6.14. Generic Accimap (Rasmussen & Svedung, 2000)



Figure 6.15 Accimap showing missing control assurance elements (in gray) for fuel tanker overfilling incident (M. E. Hassall, 2017)


6.4.7. SAfER

In dynamic work situations, people often have a range of options for the strategies
(decisions and actions) they can take. In such situations, a strategy that is preferred or
successful in one context will often be unsuccessful or undesirable in another context.
For example, an expert using an intuitive approach might be acceptable in normal
operating conditions, but not when a novice is doing the work or when the situation is
novel and abnormal. Similarly, the use of an avoidance strategy (not doing or deferring the
task) might be appropriate if the situation is unsafe to proceed, but inappropriate if a critical
alarm is activated and needs attention to avoid a catastrophe. If the critical event being
analysed is associated with a dynamic work situation, then the Strategies Analysis for
Enhancing Resilience (SAfER) approach can be used to gain insights that lead to
meaningful recommendations about how to prevent future reoccurrences of an adverse
outcome or promote future reoccurrences of a fortuitous outcome.

The SAfER approach has been designed to help investigators identify the important cues and
the range of strategies that might be used to control operations, and the circumstances in which
these strategies will lead to successful and unsuccessful outcomes (M. E. Hassall et al., 2014;
M. E. Hassall, Sanderson, & Cameron, 2016). To help identify the strategies that might be
used, SAfER provides a set of generic strategies, shown in Table 6.5, that investigators can
use as prompts to determine which strategies might be adopted in the situation they are
investigating and the system attributes (or performance shaping factors) that could lead a
person to select or not select a given strategy.

To perform a simplified SAfER analysis, an investigator should:
1. Identify the important cues or critical situation assessment factors that need to be
monitored to correctly diagnose the state of the operations – whether operations
are normal/safe or abnormal/unsafe.
2. Identify the strategies that operators might adopt in normal/safe or abnormal/unsafe
situations.
3. Determine whether the design should promote the strategy because it will produce a
successful outcome, prevent the strategy because it will produce an unsuccessful
outcome, or tolerate the strategy because it cannot be prevented and therefore needs
to be accommodated in a manner that will not lead to an adverse outcome.
4. Identify recommendations that will:
a. make the important cues that indicate safe versus unsafe operations more salient, and
b. allow the design to promote, prevent or tolerate each strategy in both
normal/safe and abnormal/unsafe situations.

An example of a simplified SAfER analysis for filling a fuel tanker is shown in Table 6.6. For more
information on performing a detailed SAfER analysis, refer to the Hassall et al. articles
(M. E. Hassall et al., 2014, 2016).
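To give a sense of how the outputs of steps 2 and 3 might be recorded, the sketch below captures, for each generic strategy and operating context, the likely behaviour, the promote/prevent/tolerate decision and the associated design recommendation. It is illustrative only; the record structure is an assumption rather than part of SAfER, and the example rows loosely follow Table 6.6.

```python
# Minimal sketch of recording SAfER strategy assessments (steps 2 and 3).
from dataclasses import dataclass
from enum import Enum

class DesignResponse(Enum):
    PROMOTE = "promote"    # strategy leads to successful outcomes
    PREVENT = "prevent"    # strategy leads to unsuccessful outcomes
    TOLERATE = "tolerate"  # strategy cannot be prevented, so design must make it safe

@dataclass
class StrategyAssessment:
    generic_strategy: str       # e.g. "Avoidance", "Intuitive", "Compliance"
    context: str                # "normal/safe" or "abnormal/unsafe"
    likely_behaviour: str
    response: DesignResponse
    recommendation: str

# Hypothetical rows loosely following the fuel tanker example in Table 6.6
assessments = [
    StrategyAssessment("Avoidance", "normal/safe",
                       "Operator stops loading because plant or people are not assessed as ready",
                       DesignResponse.PROMOTE, "Provide an E-stop operable from the control room"),
    StrategyAssessment("Arbitrary choice", "normal/safe",
                       "Operator guesses which piping and tank to use",
                       DesignResponse.PREVENT, "Colour-code tanks and pipes by fuel type"),
]

for a in assessments:
    print(f"{a.generic_strategy} ({a.context}): {a.response.value} -> {a.recommendation}")
```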




Table 6.5: Generic strategies (quoted from M. E. Hassall & Sanderson, 2012; M. E. Hassall et
al., 2016)
Strategy Categories | Description | Examples from a librarian working in a public library of domain factors that could prompt strategy selection
Avoidance Strategies that involve - No consequences associated with using avoidance (Low risk).
not doing the task, - Chaotic situation (High difficulty).
deferring the task, or - Librarian thinks customer would be better dealt with by another staff
forgetting to do it. member (High risk, High difficulty).
- Librarian thinks task would be better dealt with by another staff
member (High difficulty).
- Librarian has other higher priority work (High time pressure).
- Librarian has low skill/motivation to serve customer (High difficulty).
Intuitive Strategies that include - Normal/familiar situation (Low risk).
automatic responses - Need to get a quick result (High time pressure).
or doing a task - Simple, specific, achievable request from customer (Low difficulty).
without explicitly or - Easy-going customer (Low risk).
deliberately using - Experienced librarian familiar with customer (Low difficulty).
thought processes.
Arbitrary Strategies that include - Chaotic situation (High difficulty).
choice guessed, scrambled, - Unclear customer request (High difficulty).
haphazard or - Many or no books found suitable (High difficulty).
panicked responses. - Easy-going customer (Low risk).
- Inexperienced librarian (High difficulty).
- Librarian needs to make choice quickly (High time pressure).
Imitation Strategies that include - Situation, task and customer are familiar (Low risk).
copying how others - Task is simple (Low difficulty).
do the task or copying - Librarian has only been shown how to do it by copying on-the-job
what has worked in mentor (Low risk).
the past. - Librarian has no time or motivation to try other strategies (Low time,
low difficulty).
Option- Strategies that involve
- Situation in state of flux or chaotic so cues are not available/cannot be
based selecting a possible relied upon (High difficulty).
action option without - Number of customers low (Low time pressure).
considering system or - Customer request novel/vague (High difficulty).
environmental - Customer easy-going (Low risk).
information. - Inexperienced librarian (High difficulty).
- Librarian has time to search bibliographically and/or by browsing.
Cue-based Strategies that - Cues in the form of borrowing trends by customer interests readily
consider apparently- available (Low difficulty).
relevant information - Customer happy to help with search and provide information (Low
from the risk, Low time pressure).
environment. - Customer request matches library books (Low difficulty).
- Librarian experienced (Low difficulty).
- Librarian does not have time to use full analytical approach to
questioning (High time).
Compliance Strategies that include - Procedure easy to access and understand (Low difficulty).
following authorized - Following procedures produces good choices for books (Low risk).
procedures (written - Typical and simple customer request (Low difficulty, Low risk).
or practiced). - Customer not in hurry (Low time pressure).
- Librarian has sufficient time to find and follow procedures (Low time).
Analytical Strategies include - New customer (High difficulty).
using analytical - Complex/novel customer request (High difficulty).
thinking to reason out - Experienced operator (Low risk).
the best way to - Sufficient time to use analytical/trial and error approach (Low
perform a task. pressure).


Table 6.6: Example simplified SAfER analysis for filling a fuel tanker
Critical Situation Assessment Factors | List the factors that need to be monitored to ensure safe operation | Recommendations for design interventions to make the critical factors more salient
- Status of pipes, connections, valves, pumps etc involved
in fuel transfer - Real-time mass balance
- Status of instruments required to monitor fuel transfer - Instrument functional/fault
Plant/process - Contents and flowrate within pipes alarms
factors - Contents and level (absolute and rate of change) in - Colour coding for different
storage and tanker (e.g. reconciled mass balance for fuel types of fuel stored and
flowing out of storage into tanker) flowing
- Current vs projected fill time
- Real-time location and status of field operator - - Camera monitoring of ship
- Real-time location and status of others in vicinity and key parts of terminal
People factors
- Presence and vigilance of controller/operator during - Radio comms with field
transfer operator
- Current and forecast weather - Lightning warning system
Context factors - Presence of other fuel types and tankers - Localised work tracking
- Other [unusual] work/activity in vicinity system

Should design
promote, Recommendations for design
What decision/actions might be
Generic Strategy prevent or improvements for both safe and
associated with this generic strategy
tolerate unsafe operations
strategy?
Avoidance = Not For normal operations:
done, defer, or - Operator doesn’t start loading or stops 1. Ensure operator has E-stop
forget to do loading because plant or people not Promote that can be manually activated
assessed as safe and/or ready from control room

For abnormal operations:


2. Vapour/liquid monitoring
- Operator does not address loss of
systems to detect leaks
containment/control because it isn't Tolerate
Mass balance showing expected
detected by instruments
vs actual flows to highlight losses
Intuitive = For normal operations:
3. Automating system checks
automatic - Operator starts loading assuming
interlocked with unloading
response, done everything is ok (e.g. plant and
Tolerate pumps so system can’t start until
without explicitly instrumentation is functional, connections
safety critical equipment
or deliberately from ship to tank are correctly made,
confirmed to be functional
using thought fuel type and quality are to spec etc)
processes For abnormal operations:
- Operator assumes only one loss of
control/containment event is occurring Tolerate As per 2. above

Arbitrary-choice = For normal operations: 4. Colour coding of tanks and


guessed, - Operator guesses which piping and pipes to indicate type of fuel
scrambled tank to use Prevent
(ULP, diesel etc) and heights
haphazard or shown on display
panicked response
For abnormal operations:
5. System has ability to do real
- Operator guesses size of spill and how
time reconciliations and display
best to respond Prevent
loss of control/containment
volumes


Imitation For normal operations: 6. Further work required to
strategies = copy - Operator copies how previously determine best intervention(s) to
Tolerate
how others do it unloaded ULP (but could be different allow strategy to be used without
or copy what has tank(s), different volume, etc) causing adverse outcome
worked in the past For abnormal operations: 7. Further work required to
- Operator copies previously used determine best way to prevent
Prevent
response (but event could involve this strategy from being used in
different fuel, locations, people etc) abnormal situations
Cue-based For normal operations: 8. Employ forcing function
strategies = select - Operator closely monitors unloading technology to ensure unloading
Chosen Option process on screens Promote only progresses when operator is
using the monitoring (e.g. eye tracking,
Observed acknowledge buttons etc)
Info/Cues and For abnormal operations:
Predict - Operator looking for and acts on 'weak 9. Camera and interface systems
Consequences signals' of Promote allow operator to do "deep dive"
results abnormal operations (chronic unease) interrogations

Compliance-based For normal operations: 10. Embed SOP within CRO


strategies = - Operator follows SOP monitoring system as a checklist
following Promote process so detailed procedural
procedures as reading not required (integrate
they are with 8.)
written/practiced For abnormal operations:
11. Create ERP checklist (similar to
- Operator follows Emergency response
aviation) to help operator
plan Promote
expediently activate and monitor
emergency response
Analytical For normal operations:
12. Give operator camera line-of-
Reasoning - Operator goes back to first principles
sight, a smart control system and
strategies = using and checks and double checks
Tolerate a field operator to expedite
analytical thinking everything before starting the unload
checks without undermining the
to reason out the process (could significantly delay
quality of them
best way to unloading)
perform task For abnormal operations:
13. Conduct regular emergency
- Operator thinks about and develops
response drills so reaction to LOC
own emergency Prevent
events becomes a well practiced
response
response.


6.5. Integration of learning back into the business

Once an investigation has been conducted and recommendations made, these
recommendations need to be communicated back to the business and then actioned. The
integration of learnings back into the business is an important way to prevent reoccurring
unwanted events and to promote the reoccurrence of wanted events. Some of the factors
that can impede the integration of learning back into the business include the following:
- No identification of actions or sense of urgency to implement actions due to the
belief that the event was unique.
- Implemented actions are not effective because they are poorly identified, no one
was given accountability for implementing them, and/or there was insufficient time to
implement them properly (Drupsteen & Hasle, 2014).
Other factors that impede learning from incidents, along with good critiques of organisational
learning, are provided by Lundberg et al. (2009) and Kletz (2009).

6.6. Summary

Investigating both adverse and successful events can provide insights that help improve and
strengthen risk management processes.

Investigations can be performed for different reasons. They can be performed to:
• Find causes of the event.
• Collect information on the full consequences of an event.
• Provide insights into the effectiveness of involved controls.
• Collect and disseminate learnings from the event.
• Collect information to use for legislated reporting requirements or for use in legal
proceedings.
Before performing an investigation, it is important to determine why, what and how.

We have introduced a structured methodology for performing an investigation, and we
have introduced several techniques that can be used as part of the methodology.

Event investigations help practitioners identify lessons to be learned, but to prevent
reoccurrence of adverse events and to promote reoccurrence of fortuitous events, the
‘lessons learned’ also need to be embedded back into the organisation.














This page has intentionally been left blank








SECTION D: CLOSE OUT















126
SECTION D











This page has intentionally been left blank

Epilogue


“You have to work hard to get your thinking clean to make it simple. But it's
worth it in the end because once you get there, you can move mountains”
(Steve Jobs)


The effective management of risk in the process industries requires an understanding of the
foundational theory that has been introduced in this book. It also requires the appropriate
selection and application of risk identification, treatment, monitoring and communication
approaches. The aim should be the delivery of a transparent understanding of the current
inherent risks and the status of the risk treatments being applied to manage those deemed
unacceptable. A good set of criteria for prospectively assessing the efficacy of the risk
management system used in a process industry context is shown in Table E1. Another way to
assess the efficacy of a risk management system is to perform benchmarking exercises with
industry leaders and to reconcile learnings from internal and external incident and audit
findings with current risk management activities at all organisational levels.

Table E1: Criteria for assessing quality of risk management system (adapted from Friberg,
Prodel, & Koch, 2011; Paul, Niewoehner, & Elder, 2007)
Measure | Questions to ask
Significance | Does the information focus on what is most relevant?
Accuracy | Is the information valid - does it correctly represent reality and is it logical?
Clarity | Is the information concise, easy to interpret and understandable?
Completeness | Does the information cover the breadth of risks and all aspects of the risks to an appropriate level of detail?












This page has intentionally been left blank


REFERENCES

Allianz. (2015). Allianz risk barometer: Top business risks 2015. Retrieved from Munich, Germany: ey.com
Aon. (2014). Aon's 2014 Australasian Risk Survey. Retrieved from Australia:
http://www.aon.com.au/australia/thought-leadership/risk-survey.jsp
Aven, T., Renn, O., & Rosa, E. A. (2011). On the ontological status of the concept of risk. Safety Science, 49(8–
9), 1074-1079. doi:http://dx.doi.org/10.1016/j.ssci.2011.04.015
Bell, J., Frater, B., Butterfield, L., Cunningham, S., Dodgson, M., Fox, K., . . . Webster, E. (2014). The role of
science, research and technology in lifting Australian productivity. Retrieved from Australian Council
of Learned Academies:
Bellamy, L. J., Geyer, T. A., & Wilkinson, J. (2008). Development of a functional model which integrates human
factors, safety management systems and wider organisational issues. Safety Science, 46, 461-492.
Blacker, K., & McConnell, P. (2015). People risk management: A practical approach to managing the human
factors that could harm your business (1st ed.). London, United Kingdom; Philadelphia, PA: Kogan Page.
Booth, W. (1987). Postmortem on Three Mile Island. Science, 238(4832), 1342-1345.
Borys, D., Else, D., & Leggett, S. (2009). The fifth age of safety: the adaptive age? Journal of Health & Safety
Research & Practice, 1(1), 19-27.
Cameron, I. T., & Raman, R. (2005). Process Systems Risk Management. San Diego, CA: Elsevier.
Casey, T., Griffin, M. A., Flatau Harrison, H., & Neal, A. (2017). Safety Climate and Culture: Integrating
Psychological and Systems Perspectives. Journal of Occupational Health Psychology, No Pagination
Specified. doi:10.1037/ocp0000072
CCPS. (1994). Guidelines for preventing human error in process safety. New York: Center for Chemical Process
Safety of the American Institute of Chemical Engineers.
Chapanis, A. (1996). Human factors in systems engineering. New York: Wiley.
Committee on Science and Technology. (1986). Investigation of the Challenger accident: Report of the
Committee on Science and Technology House of Representatives - Ninety-ninth congress - Second
session. Retrieved from Washington, DC:
Deloitte. (2009). Take the right steps: 9 principles for building the risk intelligent enterprise™. Retrieved from
www2.deloitte.com: https://www2.deloitte.com/au/en/pages/risk/articles/enterprise-risk-
management.html
Department of Defense. (2015). Human factors analysis and classification system (DOD HFACS) version 7.0.
Retrieved from https://www.uscg.mil/hr/cg113/docs/pdf/DoD_HFACS7.0.pdf and from
http://www.public.navy.mil/NAVSAFECEN/Documents/5102/DOD_HFACS_v7.0_Guide.pdf
Department of Education Training and Employment. (n.d.). Risk management fact sheet 1: risk identification
techniques/sources. In Department of Education Training and Employment (Ed.),
https://web.actuaries.ie/sites/default/files/erm-resources/fact_sheet_1_-_risk_identification.pdf.
Diller, T., Helmrich, G., Dunning, S., Cox, S., Buchanan, A., & Shappell, S. (2013). The Human Factors Analysis
Classification System (HFACS) Applied to Health Care. American Journal of Medical Quality, 29(3), 181-
190. doi:10.1177/1062860613491623
Dodshon, P., & Hassall, M. E. (2017). Practitioners’ perspectives on incident investigations. Safety Science, 93,
187-198. doi:http://dx.doi.org/10.1016/j.ssci.2016.12.005
Drupsteen, L., & Hasle, P. (2014). Why do organizations not learn from incidents? Bottlenecks, causes and
conditions for a failure to effectively learn. Accident Analysis & Prevention, 72, 351-358.
doi:http://dx.doi.org/10.1016/j.aap.2014.07.027
Elahi, E. (2010). How Risk Management Can Turn into Competitive Advantage. College of Management
Working Papers and Reports, Paper 6.
Embrey, D. (1986). SHERPA: A systematic human error reduction and prediction approach. Paper presented at
the International Meeting on Advances in Nuclear Power Systems, Knoxville, TN.
Endsley, M. R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The
Journal of the Human Factors and Ergonomics Society, 37(1), 32-64.
doi:10.1518/001872095779049543
Endsley, M. R., & Jones, D. G. (2012). Designing for situation awareness: An approach to user-centered design.
Boca Raton, FL: CRC Press.


Engineering Council. (2014). UK-SPEC: UK standard for professional engineering competence - Engineering
technician, incorporated engineer and chartered engineer standard (Third edition). In. UK:
Engineering Council.
Engineers Australia. (2012). Australian engineering competency standards stage 2 - Experienced professional
engineer. In E. Australia (Ed.).
Engineers Australia. (n.d.). Code of ethics. In E. Australia (Ed.).
Ernst & Young. (2013). Turning risk into results: How leading companies use risk management to fuel better
performance. Retrieved from UK:
http://www.ey.com/Publication/vwLUAssets/Turning_risk_into_results/$FILE/Turning%20risk%20int
o%20results_AU1082_1%20Feb%202012.pdf
Ernst & Young. (2015). Business risks facing mining and metals 2015-2016: Moving from the back seat to the
driver's seat. Retrieved from UK:
Fischhoff, B. (1995). Risk Perception and Communication Unplugged: Twenty Years of Process1. Risk Analysis,
15(2), 137-145. doi:10.1111/j.1539-6924.1995.tb00308.x
Fishwick, T. (2014). Recurring accidents: Slips, trips and falls. The Chemical Engineer, 28-34.
Friberg, T., Prodel, S., & Koch, R. (2011). Information quality criteria and their importance for experts in crisis
situations. Paper presented at the 8th International ISCRAM Conference, Lisbon, Portugal.
Gill, G. (2013). Recurring accidents: Inadequate isolations. The Chemical Engineer, 52-56.
Glendon, A. I., & Stanton, N. A. (2000). Perspectives on safety culture. Safety Science, 34(1–3), 193-214.
doi:http://dx.doi.org/10.1016/S0925-7535(00)00013-8
Groeneweg, J., Hudson, P. T., Vandevis, T., & Lancioni, G. E. (2010). Why Improving the Safety Culture Doesn't
Always Improve the Safety Performance.
Hale, A. R., & Hovden, J. (1998). Management and culture: the third age of safety. A review of approaches to
organizational aspects of safety, health and environment. In A. M. Feyer & A. Williamson (Eds.),
Occupational Injury. Risk Prevention and Intervention. London: Taylor & Francis.
Hassall, M., Hannah, R., & Lant, P. (2015). Lessons learned from teaching about risks and impacts in industry.
Paper presented at the Hazards Australasia, Brisbane.
Hassall, M. E. (2013). Methods and tools to help industry personnel identify and manage hazardous situations.
(Doctor of Philosophy), The University of Queensland, Queensland, Australia.
Hassall, M. E. (2014). MINE4200 Humans and Risk - Cognitive Human Factors Lecture Slides March 31, 2014.
Brisbane, Australia: The University of Queensland.
Hassall, M. E. (2015). Improving human control of hazards in industry. Paper presented at the 19th Triennial
Congress of the IEA 9-14 August 2015, Melbourne, Australia.
Hassall, M. E. (2017). Incident investigation masterclass: Introduction of proposed solution. Paper presented at
the UQ R!SK Incident Incident Investigation Masterclass, Brisbane, QLD.
Hassall, M. E., & Dodshon, P. (2017). Incident investigation masterclass: Research findings. Paper presented at
the UQ R!SK Incident Investigation Masterclass, Brisbane, QLD.
Hassall, M. E., & Harris, J. (2017). Risk controls knowledge: determining leading practice from case study
analysis (ACARP report C25036). Retrieved from Australia:
Hassall, M. E., Joy, J., Doran, C., & Punch, M. (2015). Selection and optimisation of risk controls (ACARP report
C23007). Retrieved from Australia: http://www.acarp.com.au/abstracts.aspx?repId=C23007
Hassall, M. E., & Sanderson, P. M. (2012). A formative approach to the strategies analysis phase of cognitive
work analysis. Theoretical Issues in Ergonomics Science, 1-47. doi:10.1080/1463922X.2012.725781
Hassall, M. E., Sanderson, P. M., & Cameron, I. T. (2014). The Development and Testing of SAfER: A Resilience-
Based Human Factors Method. Journal of Cognitive Engineering and Decision Making, 8(2), 162-186.
Hassall, M. E., Sanderson, P. M., & Cameron, I. T. (2016). Incident Analysis: A Case Study Comparison of
Traditional and SAfER Methods. Journal of Cognitive Engineering and Decision Making, 10(2), 197-
221. doi:10.1177/1555343416652749
Hassall, M. E., Xiao, T., Sanderson, P. M., & Neal, A. (2015). Human Factors and Ergonomics. In J. D. Wright
(Ed.), International Encyclopedia of the Social & Behavioral Sciences (Second Edition) (Vol. 11, pp. 297-
305). Oxford: Elsevier.
Heinrich, H. W. (1941). Industrial accident prevention: a scientific approach. New York ; London: McGraw-Hill
book company, inc.
Hillson, D. (2010). Exploiting future uncertainty: Creating value from risk. Farnham, Surrey, England; Burlington,
VT: Gower.
Hillson, D., & Murray-Webster, R. (2012). A short guide to risk appetite. Farnham, Surrey; Burlington, VT: Gower.


Hollnagel, E. (1998). Cognitive reliability and error analysis method. Oxford: Elsevier.
Hollnagel, E. (2011a). Prologue: The scope of resilience engineering. In E. Hollnagel, J. Pariès, D. D. Woods, & J.
Wreathall (Eds.), Resilience engineering in practice: A Guidebook. Surrey UK: Ashgate.
Hollnagel, E. (2011b). When things go wrong: Failures as the flip side of successes. . In D. A. Hofmann & M.
Frese (Eds.), Errors in organizations. New York, NY.: Routledge.
Hollnagel, E. (2012). FRAM: the functional resonance analysis method - modelling complex socio-technical
systems. Surrey, England: Ashgate.
Hollnagel, E. (2014). Safety-I and safety-II: the past and future of safety management. Farnham, Surrey:
Ashgate Publishing Limited.
ICMM. (2013). Requests for proposals – Health and safety risk managing in the mining and metals sector. In
ICMM (Ed.). London.
ICMM. (2015a). Critical Control Management Implementation Guide. Retrieved from London, UK:
http://www.icmm.com/document/9722
ICMM. (2015b). Health and safety critical control management good practice guide. Retrieved from London,
UK: http://www.icmm.com/document/8570
ISO 31000. (2009). Risk management - Principles and guidelines. In. Geneva: International Organization for
Standardization.
ISO 31000. (2018). Risk management - Principles and guidelines. In. Geneva: International Organization for
Standardization.
Joy, J., & Griffiths, D. (2007). National Minerals Industry Safety and Health Risk Assessment Guidelines.
Retrieved from https://cmlr.uq.edu.au/filething/get/7825/NMISHRAG_v6.pdf
Kletz, T. A. (2009). What went wrong? Case histories of process plant disasters and how they could have been
avoided. Burlington, MA: Gulf Professional Pub.
Leveson, N. G. (2011). Engineering a safer world: Systems thinking applied to safety. Cambridge, Mass: The MIT
Press.
Leveson, N. G., Daouk, M., Dulac, N., & Marais, K. (2003). Applying STAMP in Accident Analysis. Workshop on
the Investigation and Reporting of Accidents, Sept.
Lundberg, J., Rollenhagen, C., & Hollnagel, E. (2009). What-You-Look-For-Is-What-You-Find - The consequences
of underlying accident models in eight accident investigation manuals. Safety Science, 47(10), 1297-
1311. doi:DOI: 10.1016/j.ssci.2009.01.004
Marling, G., Horberry, T., & Harris, J. (2014). Words have meaning . . . . . especially in risk management. Paper
presented at the Risk 2014 conference, Brisbane.
Martin, D. (2012). Exposed, the excuses of elf 'n' safety jobsworths: Including cafe that refused to heat up baby
food 'in case it burnt child's mouth'. Daily Mail. Retrieved from
http://www.dailymail.co.uk/news/article-2192815/Exposed-excuses-elf-n-safety-
jobsworths.html#ixzz4t5aBktPn
McCartney, S. (2016). Do planes really need life vests? The Wall Street Journal. Retrieved from
http://www.wsj.com/articles/do-planes-really-need-life-vests-1453310773
Mooney, S. (Ed.) (2014). Asia risk report: The top concerns for Asian risk managers - 2015 edition, Australia.
Sydney, Australia: Newsquest Specialist Media Ltd.
National Transport Safety Board. (1990). Grounding of the U.S. tankship Exxon Valdez on Bligh Reef, Prince
William Sound near Valdez, Alaska March 24 1989. Retrieved from Washington DC:
Noetic Solutions. (2014). MSAC fatality review 2013-14: Report for NSW safety advisory council.
NOPSEMA. (2015). Guidance note - ALARP (N-04300-GN0166 Revision 6). In N. O. P. S. a. E. M. A. (NOPSEMA)
(Ed.), https://www.nopsema.gov.au/assets/Guidance-notes/A138249.pdf. Australia.
OECD. (2010). Radioactive waste in perspective. Vienna: OECD Publishing.
Oliver, P., & Dennison, W. (2013). Dancing with dugongs: Having fun and developing a practical philosophy for
environmental teaching and research. University of Maryland US: IAN Press.
Parasuraman, R., & Wickens, C. D. (2008). Humans: Still Vital After All These Years of Automation. Human
Factors: The Journal of the Human Factors and Ergonomics Society, 50(3), 511-520.
doi:10.1518/001872008x312198
Paul, R., Niewoehner, R., & Elder, L. (2007). The thinker's guide to engineering reasoning.
www.criticalthinking.org: Foundation for Critical Thinking.


Pocock, S., Wright, P., & Harrison, M. (1999). THEA – A technique for human error assessment early in design.
Paper presented at the RTO HFM Workshop on the Human Factor in System Reliability – Is Human
Performance Predictable?, Siena, Italy.
Poplin, G. S., Miller, H. B., Ranger-Moore, J., Bofinger, C. M., Kurzius-Spencer, M., Harris, R. B., & Burgess, J. L.
(2008). International evaluation of injury rates in coal mining: A comparison of risk and compliance-
based regulatory approaches. Safety Science, 46(8), 1196-1204. doi:10.1016/j.ssci.2007.06.025
Rasmussen, J. (1997). Risk management in a dynamic society: a modelling problem. Safety Science, 27(2-3),
183.
Rasmussen, J., & Svedung, I. (2000). Proactive Risk Management in a Dynamic Society. Karlstad, Sweden: Risk
& Environmental Department, Swedish Rescue Services Agency.
Reason, J. (2006). Human factors seminar, Helsinki, February 13, 2006. Retrieved from
http://www.docstoc.com/docs/document-preview.aspx?doc_id=82204509
Reason, J. T. (2008). The human contribution: Unsafe acts, accidents and heroic recoveries. Farnham, England:
Ashgate.
Renn, O. (1992). Concepts of risk: A classification. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp.
53-79). London: Praeger.
Rogers, W. P., Armstrong, N. A., Acheson, D. C., Covert, E. E., Feynman, R. P., Hotz, R. B., . . . Yeager, C. E.
(1986). Report of the Presidential Commission on the Space Shuttle Challenger Accident. Retrieved
from Washington DC: http://history.nasa.gov/rogersrep/51lcover.htm
Russ, A. L., Fairbanks, R. J., Karsh, B.-T., Militello, L. G., Saleem, J. J., & Wears, R. L. (2013). The science of
human factors: separating fact from fiction. BMJ Quality & Safety, 22(10), 802-808.
doi:10.1136/bmjqs-2012-001450
Ryan, B. (2015). Incident reporting and analysis. In S. Sharples & J. Wilson (Eds.), Evaluation of Human Work,
fourth ed. . Boca Raton.: CRC Press.
SA/SNZ HB89. (2013). Risk management - Guidelines on risk assessment techniques. In. Sydney, NSW:
Standards Australia.
Sato, J. (2011). Unparalleled sector leadership - Operations centre [Press release]. Retrieved from
http://www.riotinto.com/documents/Media-
Speeches/110610_Financial_community_site_visit_Operations_Centre_slides.pdf
Seligmann, B. J., Németh, E., Hangos, K. M., & Cameron, I. T. (2012). A blended hazard identification
methodology to support process diagnosis. Journal of Loss Prevention in the Process Industries, 25(4),
746-759. doi:https://doi.org/10.1016/j.jlp.2012.04.012
Shappell, S. A., & Wiegmann, D. A. (2000a). (Report Number DOT/FAA/AM-00/7). Washington, DC.
Shappell, S. A., & Wiegmann, D. A. (2000b). The human factors analysis and classification system - HFACS.
(DOT/FAA/AM-00/7).
Shappell, S. A., & Wiegmann, D. A. (2000c). The human factors analysis and classification system - HFACS
(DOT/FAA/AM-00/7). Retrieved from
Shaver, E. (2009). A short history of human factors and ergonomics. The Human Factor Advocate(January).
Sheridan, T. B. (2002). Some musings on four ways humans couple: implications for systems design. Systems,
Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 32(1), 5-10.
doi:10.1109/3468.995525
Shorrock, S. T., & Kirwan, B. (2002). Development and application of a human error identification tool for air
traffic control. Applied Ergonomics, 33(4), 319-336.
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280-285.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why Study Risk Perception? Risk Analysis, 2(2), 83-93.
doi:10.1111/j.1539-6924.1982.tb01369.x
Solicitors Regulation Authority. (2014). SRA regulatory risk framework. In Solicitors Regulation Authority (Ed.),
http://www.sra.org.uk/risk/risk-framework.page.
Stanton, N. A., Salmon, P. M., Walker, G. H., Baber, C., & Jenkins, D. P. (2005). Human factors methods: A
practical guide for engineering and design. Aldershot, UK: Ashgate.
The Warren Centre for Advanced Engineering. (2009). Professional Performance, Innovation and Risk In
Australian Engineering Practice. Retrieved from http://thewarrencentre.org.au/wp-
content/uploads/2012/02/PPIR_full_report.pdf


Timpson, J. (2012). Has the 'elf and safety culture gone too far? The Telegraph. Retrieved from
http://www.telegraph.co.uk/finance/businessclub/management-advice/9561085/John-Timpson-has-
the-elf-and-safety-culture-gone-too-far.html
Tuncel, S., Lotlikar, H., Salem, S., & Daraiseh, N. (2006). Effectiveness of behaviour based safety interventions
to reduce accidents and injuries in workplaces: critical appraisal and meta-analysis. Theoretical Issues
in Ergonomics Science, 7(3), 191-209. doi:10.1080/14639220500090273
Tworek, P. (Ed.) (2010). Methods of risk identification in companies' investment projects. Ostrave: SSB:
Technicka Univerzita Ostrava.
U.S. Chemical Safety and Hazard Investigation Board. (2007). Investigation report - Refinery explosion and fire -
BP, Texas City, March 23, 2005. In. Retrieved from http://www.csb.gov/investigations/completed-
investigations/
U.S. Department of Energy. (2012). DOE Handbook: Accident and operational safety analysis - Volume I:
Accident analysis techniques (DOE-HDBK-1208-2012). Retrieved from Washington D.C.:
https://energy.gov/sites/prod/files/2013/09/f2/DOE-HDBK-1208-2012_VOL1_update_1.pdf
van Kampen, J., & Drupsteen, L. (2017, February 21, 2017). Accident investigation and analysis.
Waite, P. (2013). Recurring accidents: Overfilling vessels. The Chemical Engineer, 40-44.
Weick, K., Sutcliffe, K., & Obstfeld, D. (2008). Organizing for High Reliability: Processes of Collective
Mindfulness. In A. Boin (Ed.), Crisis Management (Vol. 3, pp. 105-125). London: SAGE Publications Ltd.
Weick, K. E., & Sutcliffe, K. M. (2015). Managing the unexpected : sustained performance in a complex world /
Karl E. Weick, Kathleen M. Sutcliffe (3rd ed.. ed.). Hoboken, New Jersey: Hoboken, New Jersey : Wiley.
Whalley-Lloyd, S. (1998). Reducing the impact of human error. The Safety & Health Practitioner, 16(5), 20.
Williams, J. (1986). HEART - A proposed method for assessing and reducing human error. Paper presented at
the 9th Advances in Reliability Technology Symposium, University of Bradford.
Withers, G., Gupta, N., Curtis, L., & Larkins, N. (2015). Securing Australia's future: Australia's comparative
advantage. Retrieved from Melbourne, Australia:
Woods, D. D., & Hollnagel, E. (2006). Prologue: Resilience engineering concepts. In E. Hollnagel, D. Woods, &
N. Leveson (Eds.), Resilience engineering concepts and precepts (pp. 1-6). Aldershot, UK: Ashgate.
World Economic Forum. (2015). Global risks 2015. Retrieved from Geneva, Switzerland:
World Economic Forum. (2016). Global risk report 2016 - 11th Edition. Retrieved from Geneva, Switzerland:
http://www3.weforum.org/docs/GRR/WEF_GRR16.pdf
Xie, X., & Guo, D. (2018). Human factors risk assessment and management: Process safety in engineering.
Process Safety and Environmental Protection, 113, 467-482.
doi:https://doi.org/10.1016/j.psep.2017.11.018
Zionchenko, V., & Munipov, V. (2005). Fundamentals of ergonomics. In N. Moray (Ed.), Ergonomics: Major
writings (Vol. 1 - The history and scope of human factors, pp. 17-37). New York, NY: Taylor and
Francis.

