
Risk Model Abstracts Reviewed for September 2015 Workshop

Each entry below gives the document number, title, and organization, followed by the text of the abstract (an overview, not the exact/complete abstract).


1. Using a Standardized Tool for Risk-Informed Inspection of Nuclear Power Plants - Brookhaven National Laboratory (BNL)

Nuclear power plants (NPPs) are routinely inspected by the United States Nuclear Regulatory Commission (NRC) to assure continued safe operation and to
detect any performance degradation that needs attention. Over the last two decades, NRC has used a variety of risk/probabilistic risk assessment (PRA) tools to
help inspection personnel focus their efforts on the most important aspects of NPP design and operation. Most recently, the NRC’s Reactor Oversight Process
(ROP) established a risk-informed inspection process to inspect, measure, and assess the safety and security performance of commercial NPPs and to respond to
any decline in performance. The ROP focuses inspections on areas of greatest risks, increases regulatory attention to nuclear power plants as performance
declines, uses objective measurements of performance, gives the public timely and understandable assessments of plant performance, and provides responses
to violations in a predictable and consistent manner that corresponds to the safety significance of the problem.
To achieve the objectives of the ROP, Brookhaven National Laboratory (BNL) developed risk-informed inspection notebooks for all NPPs operating in the United
States. The notebooks are simplified risk assessment tools, based on the PRA models of the NPPs. The notebooks and the associated risk assessment tools are
useful for a number of reasons. They bring together risk information in a systematic manner to inspectors making them easier to understand and use; they
provide consistency in assessment and responses to violations based on safety significance of the issue; and they can provide a relatively quick assessment of the
significance of the degradation identified.
The simplified risk assessment methodology of the risk-informed notebooks for NPPs is based on probabilistic risk assessment methods using fault and event trees. Simplification is achieved by conducting an order-of-magnitude risk assessment, which is adequate in the inspection arena. Event trees are developed in a consistent manner using consistent assumptions, and fault trees are simplified using dominant contributors based on mitigation system designs. In this approach, data and modeling needs are reduced, while adequate credits are provided for unique risk reduction features of a particular facility based on event tree
modeling. Human error and common cause modeling are applied consistently using a defined set of criteria. Conservative assumptions are made to avoid
underestimation of risk impact of any degradation.
The approach developed for the commercial NPPs can be effective for the pipeline risk assessment as needed to define inspection approaches. Such an
approach will help inspectors in understanding the risk assessment for their use; provide consistency in modeling assumptions and use of data; and make
available a risk tool for communication and training of stakeholders.
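As a rough illustration of the order-of-magnitude event-tree arithmetic this abstract describes, the following minimal sketch multiplies an initiating-event frequency through simplified fault-tree branch probabilities. All system names, frequencies, and probabilities are invented placeholders, not values from the BNL notebooks.

```python
# Minimal sketch (not the BNL notebook code): quantifying a two-branch event
# tree with order-of-magnitude inputs. All numbers are illustrative only.

INITIATOR_FREQ = 1e-2  # initiating events per year (hypothetical)

# Simplified "fault tree" results: failure probability of each mitigating
# system, reduced to its dominant contributors.
P_FAIL = {"system_A": 1e-2, "system_B": 1e-1}

def sequence_frequency(initiator_freq, branch_outcomes):
    """Multiply the initiator frequency by each branch probability.

    branch_outcomes maps system name -> True if the system fails in this
    sequence (probability P_FAIL[s]) or False if it succeeds (1 - P_FAIL[s]).
    """
    freq = initiator_freq
    for system, fails in branch_outcomes.items():
        p = P_FAIL[system]
        freq *= p if fails else (1.0 - p)
    return freq

# Worst sequence: the initiator occurs and both mitigating systems fail.
worst = sequence_frequency(INITIATOR_FREQ, {"system_A": True, "system_B": True})
print(f"worst-case sequence frequency: {worst:.1e} per year")  # ~1e-5/yr
```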

2. A Framework for Characterizing Absolute Risk in Gas Pipelines - JANA Corporation

Whereas Relative Risk is a quantitative or qualitative assessment that attempts to measure the comparative risk within a group of assets (typically assessed using a scoring or index system), Absolute Risk is the quantification of risk or expected loss on a specific basis, typically dollars, that is intended to represent a direct and true measure of the risk present. While Relative Risk provides the ability to prioritize addressing risk in the pipeline system, it does not assess whether the overall risk to the pipeline is acceptable in absolute terms. An Absolute Risk approach provides for prioritizing risk mitigation activities in real terms, as well as an assessment of the actual risk to the pipeline. In this way, the true level of risk in the pipeline can be understood, and the level of mitigation required to bring overall pipeline risk to acceptable levels can be determined. A framework for assessing Absolute Risk for gas pipelines is presented along with strategies for implementation.
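To make the relative/absolute distinction concrete, here is a minimal sketch of an absolute-risk calculation as expected annual loss in dollars (probability of failure times consequence). The segment data and dollar figures are invented for illustration and are not from the JANA framework.

```python
# Illustrative only: Absolute Risk as expected annual loss in dollars,
# i.e., probability of failure (per mile-year) x consequence (dollars).
# Figures are invented for the example, not industry data.

segments = [
    # (segment id, PoF per mile-year, length in miles, consequence in $)
    ("A", 2e-4, 10.0, 5_000_000),
    ("B", 5e-5, 40.0, 20_000_000),
]

for seg_id, pof_per_mile_yr, miles, cost in segments:
    expected_loss = pof_per_mile_yr * miles * cost  # $/year
    print(f"segment {seg_id}: expected loss ${expected_loss:,.0f}/yr")

# Unlike a relative index score, these values can be compared directly
# against a corporate risk-acceptance threshold in $/year.
```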

3. A Critical Review of Pipeline Risk Modeling Approaches - JANA Corporation

The proper use of risk models in pipeline safety and integrity management is critical to being able to continuously improve the safety performance of pipelines. The key to properly using risk models lies in understanding the capabilities and limitations of the risk models being employed and ensuring that the best risk model is employed for the safety and integrity management objectives being pursued. Currently, there are many different risk modeling approaches employed in pipeline risk management, including index models, empirical/probabilistic models, mechanistic models, and mechanistic-probabilistic models. A critical review is provided of the primary risk modeling approaches and their advantages and disadvantages. The key capabilities/limitations of each approach are assessed in terms of data requirements, model complexity, predictive capabilities, accuracy, and model granularity. A framework is presented for assessing risk modeling approaches.

4. Modeling the Consequences of Pipeline Risk - JANA Corporation

Understanding the potential consequences of pipeline incidents is a critical component of pipeline risk management. By their very nature, these consequences are probabilistic: for any given potential incident, there is a range of potential severity of the consequences that can arise. In this paper, the form of the distribution of consequences arising from pipeline incidents is examined, and it is seen, in a variety of industries (gas distribution pipelines, gas transmission pipelines, hazardous liquid pipelines, and gas gathering pipelines), to follow a power law or Pareto-type distribution. This behavior has specific implications for both modeling and managing pipeline risk, particularly for the assessment and management of low-probability, high-consequence events. The link between pipeline leaks and incidents and the resulting power law distributions of incident consequences is examined for the primary gas transmission pipeline threat categories. The application of the approach to assessing absolute risk on a global basis is explored.
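The following sketch shows why a Pareto-type tail matters for low-probability, high-consequence events: a small fraction of simulated incidents carries a disproportionate share of the total consequence. The data are synthetic and the tail index is an assumed value, not a fitted pipeline statistic.

```python
# Sketch: sampling a Pareto (power-law) consequence distribution by inverse
# CDF and measuring how much of the total loss sits in the extreme tail.
import random

random.seed(1)
ALPHA = 1.5   # assumed Pareto shape parameter; heavier tail as alpha -> 1
X_MIN = 1.0   # minimum consequence (arbitrary units)

# Inverse-CDF sampling of consequence severities for simulated incidents.
losses = [X_MIN * (1.0 - random.random()) ** (-1.0 / ALPHA) for _ in range(100_000)]
losses.sort(reverse=True)

total = sum(losses)
top_1pct = sum(losses[: len(losses) // 100])
print(f"share of total consequence from the worst 1% of incidents: {top_1pct / total:.0%}")
```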

5. Integrating Knowledge in Pipeline Risk Assessment – A Bayesian Network Approach - DNV GL

Risk assessment of a complex system, such as a pipeline, requires integration of diverse sources of knowledge within and outside the pipeline operator. The challenge of knowledge integration becomes tougher as the work force ages and experience is lost. Furthermore, as sensors and monitoring systems become ubiquitous, any risk assessment methodology must be able to readily integrate sensor output to produce a real-time assessment of risk on a location-by-location basis. A Bayesian network offers a quantitative framework for linking diverse sources of knowledge – expert judgments, physics-based models, and sensor readings. More importantly, any risk assessment system must be properly validated; i.e., the question that must be answered is, “could this risk assessment have predicted past failures if provided with the data available then?” Bayesian network models readily provide the means to answer such a question, since Bayes’ theorem updates prior probabilities with observations in a mathematically elegant manner. The benefit of a Bayesian network ultimately lies in its ability to bring the knowledge of all stakeholders into a quantitative, testable framework. This talk will briefly describe the Bayesian network approach; show how we have used it for risk assessment for a variety of threats (third party damage, internal & external corrosion, SCC, and theft) through case studies; and show how we have validated these predictions based on past occurrences. (NOTE: REFERENCES ARE OMITTED)
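The core operation the abstract refers to is a Bayes'-theorem update at a network node. Here is a minimal single-node sketch; the prior, alarm rates, and the sensor scenario are invented, not DNV GL model values.

```python
# Minimal Bayes-theorem update of the kind a Bayesian network performs at
# each node: revise a prior failure probability given an observation.
# All numbers below are illustrative placeholders.

p_fail = 0.01           # prior probability the segment is in a degraded state
p_alarm_if_fail = 0.90  # sensor alarm rate given a real degraded state
p_alarm_if_ok = 0.05    # false-alarm rate

# Observation: the sensor alarms. Posterior via Bayes' theorem.
p_alarm = p_alarm_if_fail * p_fail + p_alarm_if_ok * (1 - p_fail)
posterior = p_alarm_if_fail * p_fail / p_alarm
print(f"P(fail | alarm) = {posterior:.3f}")  # ~0.154, up from the 0.010 prior
```

Repeating this update against the data that was available before a past failure is exactly the kind of retrodictive validation test the abstract proposes.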

6. The Role of Risk Principles in Corporate Management Systems - DNV GL

This presentation will cover how to integrate process risk management principles within a corporate management system for effective, efficient, and highly reliable operation. Every pipeline operator has a management system, which normally includes elements of policy, strategy, objectives, enablers/controls/measures, plans, and procedures/practices. In addition, most management systems formally include enterprise risk principles to guide decision-making having high business impact. However, few operators formalize process safety risk (e.g., unlike occupational safety risk). Process risk is normally managed within the lower elements of a management system (e.g., integrity management plans), or it is managed in isolation (e.g., a separate API 1173-compliant system), resulting in high implementation costs and/or a missed opportunity to maximize business impact.

7. Ensuring Risk Model Usability - DNV GL

Equally important to choosing the correct pipeline risk modeling approach is ensuring the usability of the developed risk management program. Implementing the most sophisticated risk modeling algorithm on the market will not help a company manage its risks if the model and associated program are not usable. A successful risk management program sets the groundwork for a usable risk model by ensuring the model is integrated into the company in a way in which all relevant stakeholders perceive the benefit of its use. This involves a number of aspects, including ensuring the quality and completeness of data, establishing work flows to integrate the risk model into the pipeline integrity program, and providing accessible outputs which readily support decision making.
The envisioned presentation will outline a number of key components of a successful risk management program which, when paired with a technically sound risk model, will help companies manage the risk of their pipeline systems.

8. Use of Bayesian Network in Integrated Risk Model Approach - GTI

Pipeline operators use risk models as tools for determining the vulnerability of their system to various threats. Most of these risk management tools are based on historical leak and corrosion data and input from subject matter experts (SMEs) to establish relative indices and rankings of the risk levels. This ranking system to quantify risk is a common low-cost approach which is easy to set up and use, and it simplifies the maintenance priorities of large systems. However, these index-type models rely mainly on SME judgment, which may reflect subjective opinions. Also, this approach typically does not apply sound statistical techniques, which makes it difficult to integrate multiple threats in quantifying risk and to perform scenario analysis.
The proposed presentation provides a summary of a Bayesian network approach to establishing an integrated risk model. The Bayesian approach is an adaptive and flexible system which allows for revising and updating the initial predictions in light of new information. In real-world events, where many unknown events are related, all the uncertain variables can be graphically represented as nodes in a Bayesian network. An example of such a representation will be presented. The example presents a Bayesian network for the probabilities of damage to cast iron gas mains due to the natural forces induced by hurricanes (e.g., structure and soil movement, and flood). The example incorporates an analysis of vertical and horizontal ground displacements to establish the limiting permissible pipe deformation and pullout. The advantage of this approach is that it allows running various scenarios of the probabilities due to changes in the input data of the primary events. It also enables reverse calculation of the probability of the causes/threats based on observed occurrences of the end events.
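The "reverse calculation" the abstract describes is posterior inference over causes given an observed end event. A minimal sketch follows; the priors and likelihoods are invented placeholders (and the causes are treated as mutually exclusive for simplicity), not values from the GTI network.

```python
# Sketch: given an observed end event (damaged cast iron main during a
# hurricane), infer which cause was most likely. Numbers are illustrative.

priors = {"ground_movement": 0.02, "flood": 0.05}          # P(cause) during storm
p_damage_given = {"ground_movement": 0.30, "flood": 0.10}  # P(damage | cause)
p_damage_baseline = 0.001                                  # P(damage | neither)

# Causes are treated as mutually exclusive here, purely to keep the sketch short.
p_no_cause = 1.0 - sum(priors.values())
p_damage = (sum(priors[c] * p_damage_given[c] for c in priors)
            + p_no_cause * p_damage_baseline)

for cause in priors:
    posterior = priors[cause] * p_damage_given[cause] / p_damage
    print(f"P({cause} | damage observed) = {posterior:.2f}")
```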

9. Enbridge Pipelines Inc. - Enbridge Pipelines Inc.

Enbridge decision tools for mainline pipelines have historically been based upon Fitness for Service (FFS) techniques and code/regulatory directives. Learnings from pipeline failures have highlighted the need for more comprehensive approaches capable of achieving expected levels of safety and that can demonstrate the performance of integrity processes. In order to enhance the capability to demonstrate performance, the use of engineering reliability science has been added to the Enbridge decision process. That change, alongside the coordination of all decision tools into a collated safety case report for each pipe segment, enables advanced demonstration of safety for pipelines. The safety case evaluates information related to the operating environment, uncertainties associated with pipe properties and pressure, fitness-for-purpose analysis, mitigation strategies, likelihood of failures against target reliabilities, and consequence data.
Although reliability remains our focus, Enbridge has been working to develop in-house risk analysis tools to quantify the risk of our pipelines and related facilities. By understanding the risk within the system, Enbridge is able to inform integrity decisions and guide mitigation programs in leak detection, valve placement, and emergency response to most effectively reduce risk to the public and environment. Enbridge will share some operational examples of how both reliability and risk are being used to manage the integrity of our system.

10. W. Kent Muhlbauer - Comment - G2 Partners

Formal risk management has become an essential part of pipelining. As an engineered structure placed in a constantly changing natural environment, a pipeline can be a complex thing. Good risk assessment is an investigation into that complexity, providing an approachable, understandable, manageable incorporation of the physical processes potentially acting on a pipeline: external forces, corrosion, cracking, human errors, material changes, etc. Recent work in the field of pipeline risk assessment has resulted in the development of methodologies that overcome limitations of the previous techniques while also reducing the cost of the analyses. All serious practitioners of pipeline risk assessment should be aware of and utilize these techniques in order to optimize their efforts. Alternative approaches simply no longer compete. Pipeline risk managers can now better understand and, therefore, manage risks associated with any type of pipeline operating in any environment. The purpose of this presentation is to outline recent advancements in the development and application of risk models applicable to pipelines and related facilities.

11. W.R. Byrd - Comment - RCP Inc.

This presentation will explain how statistically based Monte Carlo analysis methods can be used to quantitatively evaluate and compare risk mitigation options for individual sites containing, or adjacent to, gas transmission pipelines. The model can incorporate a wide variety of population densities in both time and space, and quantify the potential benefits of specific site adjustments in reducing cumulative or maximum risk.
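A hedged sketch of the kind of Monte Carlo comparison described: simulate yearly outcomes at a site under two occupancy assumptions and compare expected casualties. The rupture probability, occupancy fractions, and population are invented, and the model is far simpler than a real site study.

```python
# Toy Monte Carlo comparison of two mitigation options at one site.
# All rates, populations, and the model structure are illustrative only.
import random

random.seed(42)
TRIALS = 100_000
RUPTURE_P = 1e-4  # per-year rupture probability at the site (hypothetical)

def simulate(occupied_fraction, people_when_occupied):
    """One simulated year: does a rupture occur while the site is occupied?"""
    if random.random() < RUPTURE_P and random.random() < occupied_fraction:
        return people_when_occupied
    return 0

def expected_casualties(occupied_fraction, people):
    return sum(simulate(occupied_fraction, people) for _ in range(TRIALS)) / TRIALS

baseline = expected_casualties(occupied_fraction=0.40, people=50)
mitigated = expected_casualties(occupied_fraction=0.10, people=50)  # e.g., rescheduled site use
print(f"baseline:  {baseline:.2e} expected casualties/yr")
print(f"mitigated: {mitigated:.2e} expected casualties/yr")
```
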
12. Straight from the Source; Adaptation of Composite Risk Modeling in Oil & Gas for Transmission and Distribution Pipeline Operators - Rolta Americas

In the Oil and Gas industry, work never ends. The same applies to the ongoing compliance work associated with threats and risks for pipeline companies, which must concurrently manage data and all aspects of risk associated with federal compliance for operators' state and federal written IM programs. The sub-second output of data also makes it inherently complex to effectively monitor and maintain control of operations and critical assets. By leveraging data modeling (PODS, SCADA, SQL, and legacy systems) as well as business intelligence (BI) technology, we will walk through how Rolta has impacted key performance metrics through innovative risk mitigation models structured in a real-world environment and displayed in dashboards that are pervasive across all levels of the company. This results in removing 'gut-feel' scenarios, providing better visibility into the controls that can be leveraged in order to reduce risk levels, and providing stability to an otherwise hazardous industry. Less paper, more real-time visibility.
Rolta OneView™ takes a composite risk management approach to ensure technical and operational integrity within the Oil and Gas industry. Each of the safety-critical elements is assessed, and a series of boundaries is built into the process where hazards or problems can occur. Composite risk is essentially the escalating nature of the consequences of multiple successive risks. As each successive risk barrier is breached, the consequences of specific and interacting threats become greater, leading to potentially catastrophic failures and consequences.
Rolta supports integrity management requirements through analytics to characterize all pipeline integrity threats and consequences concurrently, as well as the impact that quantified risk evaluation can provide in support of not only PIM but also operational excellence. Rolta OneView™ uses advanced risk modeling approaches drawn from our experience with Upstream, Transmission, and Downstream pipeline and non-pipeline facility systems, focusing on the key elements of risk mitigation, while also providing and encouraging the utilization of these advanced technologies for safety and operational excellence for both internal and external clients.
Our approach to evaluating and creating risk models is based not only on our experience in energy Oil and Gas but also on our ability to apply data appropriately for preventive and mitigative measures. Additionally, this information is used to support and identify risk based on an operator's written Integrity Management Program and the design and implementation of risk-mitigative measures and approaches.

13. Standardized Pipeline Observation Tool (SPOT) and Monte Carlo Incident Projection - Bureau Veritas North America, Inc.

FERC-mandated pipeline integrity management systems are typically customized to meet the needs, concerns, and tools of each specific operator, allowing the operators to uniquely specialize their Integrity Management Program (IMP); however, this approach also makes comparison of like pipeline systems more challenging because of the disparate nature of the calculation and reporting.
All pipeline systems share common fundamental design properties which can be collected in the form of supported shape files and projected across the Earth's surface. A simple and dynamic forum is available to illustrate and allow interactive comparison of features. This novel platform also provides the ability to illustrate the information throughout the available history of geospatial data.
Furthermore, it is then conceivable to use available data, history, correlations, and estimates to run multiple projections on pipeline sets to anticipate pipeline degradations and their interactions with one another, as well as their potential impacts on the surface of the planet. Hundreds of potential future scenarios for each pipeline, with each set of potential results, could be grouped and compared to one another on an absolute scale.
The Standardized Pipeline Observation Tool (SPOT) provides visually detailed risk comparison across the spectrum of existing and anticipated pipeline integrity threats.
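To illustrate what "hundreds of projections compared on an absolute scale" could look like, here is a minimal sketch that runs many forward simulations of corrosion growth and summarizes them as percentiles. The linear growth model, its parameter distribution, and the wall thickness are stand-ins, not the SPOT methodology.

```python
# Sketch: ensemble projection of degradation, summarized on one scale.
# Growth model and all parameters are assumed placeholders.
import random

random.seed(7)
WALL = 0.322  # nominal wall thickness, inches (illustrative)

def project_depth(years, scenarios=500):
    """Simulate corrosion depth after `years` under uncertain growth rates."""
    depths = []
    for _ in range(scenarios):
        rate = random.lognormvariate(-5.0, 0.5)  # in/yr, assumed distribution
        depths.append(min(rate * years, WALL))
    return depths

depths = sorted(project_depth(years=20))
p50 = depths[len(depths) // 2]
p95 = depths[int(len(depths) * 0.95)]
print(f"20-yr depth: median {p50:.3f} in, 95th percentile {p95:.3f} in "
      f"({p95 / WALL:.0%} of wall)")
```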

14. Matching Risk Analysis Methods and Tools to the Decision Support Needs and the Technology - Idaho National Laboratory

Risk modeling has been going on for decades. At a high level, all risk modeling proceeds from decision analysis, and risk modeling needs to retain its fundamental focus on decision-making. But at a slightly more detailed level, it is found that different technologies seem to benefit most from technology-specific approaches, especially when different figures of merit are involved. For example, severe accidents at currently operating commercial nuclear plants have been analyzed within the classical Reactor Safety Study paradigm. Such “severe” accidents always involve failure of key safety functions aimed at ensuring core integrity, and for some accident types, especially those involving failures of active components, this job is done well by classical fault-tree / event-tree modeling. At some process plants, the kinds of accidents possible are, in some ways, more various, and much effort is focused on identifying hazards (e.g., using HAZOP) and formulating controls to address them. Submarine integrity is yet another example; part of the SUBSAFE program is aimed at assuring the quality of passive components whose rupture would create significant problems for the crew.
All three of the above examples have something in common technically. For example, failure of passive elements, including piping, is part of what they must deal with. But the modes of inquiry that are practical and reasonably optimal for each are different. Risk modeling involves characterization of scenarios, frequencies, consequences, and associated uncertainties; but how one goes about analyzing these things in light of the decision support needs is technology-specific and application-specific.
This talk will:
• survey the range of modeling techniques applied in venues such as the Department of Energy, NASA, process plants, and NRC, relating the techniques to the problem attributes;
• address the data needs; and
• address issues of interpretation of risk analysis results, and the application of risk analysis in the formulation of safety cases, with a view to development of a practical approach to the problem of pipeline integrity.

15. Learning from Operating Experience - Idaho National Laboratory

No serious risk analysis effort can proceed without trying to learn as much as possible from operating experience. Most quantitative risk models rely heavily on operating experience to quantify their basic event probabilities. But more importantly, it is found that synthetic risk models developed for novel technologies significantly understate the risks of new systems. This is not to say that effort to identify accident potential a priori, and eliminate it, is wasted; but rather that so far, this has not been achieved, despite significant efforts to do so.
Many major accidents have had precursors: events whose careful analysis might have pointed to the potential for the major accidents that subsequently occurred. Three Mile Island, Challenger, and Columbia are all in this category. “Precursors” frequently signal the presence of failure mechanisms that are qualitatively not recognized. Unfortunately, in many technologies, anomalies occur in operation at a sufficient rate that it is impractical to devote a major analysis effort to analyzing all of them. We need tools for spotting the ones that need to be analyzed.
Another facet of operating experience is analysis of the rates at which certain kinds of failure events occur. When these events were foreseen qualitatively but are seen to occur at much higher rates than had been anticipated, they are in some sense precursors, but the management response is different.
The USNRC has had a program of precursor analysis for decades. Recently, a more qualitative approach to precursor analysis has been formulated and applied for NASA, closely based on process industry hazard analysis techniques. The thought process in the NASA approach is geared both to spotting events that have something qualitative to teach us, and to events that warrant trending, because the combination of their apparent rates of occurrence and their accident potential may not be adequately reflected in the conceptual model that is being used for decision-making.
The talk will survey the general process of collecting and analyzing experience data for NRC, and the particular NASA precursor analysis process. For pipelines, the heuristics will no doubt be different, but consideration of certain process aspects will be worthwhile.

16. NYSEARCH/Kiefner Modeling of Interacting Threats for Natural Gas Transmission Pipelines - NYSEARCH/Northeast Gas Association

DOT regulations and ASME B31.8-S require that pipeline operators consider all threats to pipeline integrity. By definition, this includes interacting threats. As noted in recommendations P-15-10, P-15-16, and P-15-17 to PHMSA by the NTSB, the collection and evaluation of previously identified multiple-threat interaction data are important considerations in natural gas transmission pipeline risk assessment programs.
NYSEARCH/Northeast Gas Association completed a comprehensive, systematic database study and developed a methodology that delineates and quantifies the potential interacting threats and the increased likelihood or risk impact attributable to interacting threats for in-service natural gas transmission pipeline segments. The threat interactions and their developed algorithms are based on Subject Matter Expert (SME) analysis of actual pipeline incident data in the United States. The approach was developed in accordance with the regulatory requirement to consider “all threats to pipeline integrity,” including the interactive nature of threats.
An interactive threats database was assembled by merging SME-analyzed real pipeline incident data collected from the DOT's database and Kiefner's in-house failure database, compiled over the course of more than two decades. While many significant threat interactions envisioned by Subject Matter Experts were supported in the incident analyses, other significant threat interactions were identified and quantified that were not originally envisioned by Subject Matter Experts. The result is a spreadsheet-based pipeline segment interactive threat assessment tool for users of relative risk models, in either 9-threat or expanded-threat versions (ASME B31.8-S), which is relatively simple to understand and implement.
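The NYSEARCH/Kiefner algorithms themselves are derived from SME and incident-data analysis and are not reproduced here; the following sketch only shows the general shape of an interaction adjustment, where a co-located threat pair raises likelihood beyond the independent combination. The threat names, base values, additive form, and multiplier are all invented for illustration.

```python
# Illustrative shape of an interacting-threat adjustment (not the
# NYSEARCH/Kiefner algorithm). All values are invented placeholders.

base_pof = {"external_corrosion": 1e-4, "mechanical_damage": 2e-4}  # per mile-yr

# Hypothetical interaction multiplier for this threat pair when both are
# present on the same segment (e.g., a dent coincident with coating damage).
INTERACTION_FACTOR = 3.0

independent = sum(base_pof.values())
interacting = independent + INTERACTION_FACTOR * min(base_pof.values())
print(f"independent-threat PoF: {independent:.1e} per mile-yr")
print(f"with interaction term:  {interacting:.1e} per mile-yr")
```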

17. The Role of Big Data and Predictive Analytics in Enhancing Risk Modeling - San Diego Supercomputer Center, University of California at San Diego

Risk management is an integral part of day-to-day business activities in the energy industry. Oil and gas companies face continual risks ranging from volatile commodity prices to increased public and personnel safety and environmental pressures. Risks related to asset damage, business interruption, pollution, personnel injuries, and property damage are inherent in everyday oil and gas business operation. However, data-driven technology offers new approaches to improved risk assessment, prediction, and mitigation.
The energy pipeline industry depends on evaluating risks and then taking action to manage the integrity of the pipeline systems. Intuitively, more information should yield better risk assessments, which is why big data, along with its associated tools and techniques, has received such fierce attention. The ability to harness larger and more diverse data sets in support of business decision-makers holds the promise of reducing failures by better understanding and managing risks. This has been applied in other industries where the cost of failure is unacceptable from our public stakeholders' vantage point.
There has been a growing interest in enhancing the pipeline risk assessment tools for managing pipeline safety and integrity, given that risk assessment is an integral component of pipeline integrity management. As complexity has grown across every dimension of the business, so have the challenges of producing insightful and accurate risk models.
This abstract explores the application of Big Data technologies to operational risks faced by the pipeline industry in today's business and regulatory environment. These data-driven approaches, coupled with the appropriate underlying technology, can offer enormous benefits to risk assessment and failure prediction, guiding decision-making in the mitigation of risk.

18. PROP Systems - Comment - PROP Systems

METHODOLOGY
PROP (Predicting Ruptures on Operating Pipeline) Systems collects large quantities of information (weather, engineering, and scientific data) for data mining, using proprietary algorithms to identify potential pipeline ruptures on at-risk gas piping systems in advance by identifying:
* Patterns which are predictable and repeatable
* Outcomes BEFORE incidents occur (early warning system)
* Trends where something is happening (movement to or from)
These tools minimize gas leaks and reduce explosions.
PURPOSE
PROP Systems methodologies:
* Develop and refine response methods for the right solution, pipe material, and timeframe, and solve the right problem
* Direct emergency gas leak survey and construction response to affected areas to minimize risk to people, pipes, and property
Predictive analytics is COMPSTAT for gas piping systems.

19. Ryan Lindblom - Comment - Pacific Gas & Electric

Problem: 49 CFR 192.911 states the requirement for operators to perform a risk assessment of transmission pipelines for the prioritization of baseline assessment in the TIMP framework and to evaluate the merits of additional preventive and mitigative measures; however, it offers little guidance for the evaluation of preventive and mitigative measures or on how results of risk assessment should be used subsequent to the completion of baseline assessment.
Abstract: This presentation focuses on how the results of risk assessment are used outside of the TIMP framework for decision making and assignment of resources at PG&E. It will focus on the current actions that PG&E takes using TIMP risk assessment elements, such as our recently completed risk-informed rate case testimony and prioritization of work within existing programs, as well as initiatives where PG&E has plans to use the TIMP risk assessment results in the future, such as to meet safety review requirements for 192.555 (uprates) and 192.609 (class location increases). TIMP risk assessment elements are also planned to be used to support Asset Management and maintaining PG&E's certification in PAS 55 and ISO 55001.

20. GE Oil & Gas (ABSTRACT FROM 12-PAGE WHITE PAPER)

The attached white paper outlines our efforts in the advancement of pipeline safety through the use of data analytics to expand current pipeline risk models by incorporating “Active Risk.”
Traditionally, the risk score of any given pipeline segment is a function of the probability and consequence of failure (PoF and CoF), determined by the influence of identified factors that carry a predetermined weight and a positive or negative impact on the score. The attributes of those factors are measured independently and included in the risk assessment as a static input to a threat model or algorithm.
Leveraging advanced data aggregation and analytics, we are able to expand an existing risk model to include dynamic, or near real-time, data inputs from any outside data source that can impact the pipeline asset. This allows the operator to have risk assessment methodologies that can be adjusted as conditions change.
Using real-time data sources in place of user-updated attributes for key threat factors, we are able to monitor vast amounts of data and recalculate the risk scores (with the corresponding PoF and CoF) on a daily basis. For example, pressure, flow, and temperature attributes are linked to direct readings from systems such as a SCADA historian, pulling real-time information on the actual conditions for every segment of the pipeline. The dynamic nature of the data sources enables not only an accurate assessment of condition or risk at any moment in time, but also predictive capabilities, as historical correlations can be made for specific attribute values. This allows integrity teams to anticipate preventive or corrective actions and to improve their risk management assessment over time with every validated observation, ultimately allowing for better prioritization of mitigation actions.
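As a minimal sketch of the recalculation idea (not GE's model), the fragment below recomputes a segment score when a live attribute changes. The attribute names, weights, and weighted-index form are assumptions introduced for illustration.

```python
# Hedged sketch: recomputing a segment risk score when a "live" attribute
# changes, e.g., a pressure ratio read from a SCADA historian.
# Weights, attribute names, and the scoring form are assumed placeholders.

WEIGHTS = {"pressure_ratio": 0.5, "wall_loss": 0.3, "class_location": 0.2}

def risk_score(attrs):
    """Weighted index score in [0, 1]; higher means more risk."""
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

segment = {"pressure_ratio": 0.72, "wall_loss": 0.20, "class_location": 0.50}
print(f"score with yesterday's static input: {risk_score(segment):.3f}")

# A new SCADA reading arrives: operating pressure rises toward MAOP.
segment["pressure_ratio"] = 0.95
print(f"score after live update:            {risk_score(segment):.3f}")
```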

21. NEW RISK BASED INTEGRITY MANAGEMENT FRAMEWORK FOR PIPELINE ASSETS - Dafea Ltd, QGC, DNV GL

The Risk Based Integrity Management (RBIM) framework is an innovative yet practical risk modelling methodology for integrity management recently developed and implemented on a transmission pipeline network in Australia. The presentation will explain how the framework helps to assess the performance of an integrity management (IM) plan and how it helps to actively manage the risk of failures and non-compliance.
An RBIM baseline assessment establishes a link between each threat and sub-threat and the IM activities providing mitigation. Existing risk assessments can be utilized to support this. One outcome of the assessment is a segmented and quantified likelihood of failure for each threat for each pipeline. This baseline is updated annually.
A crucial aspect of RBIM is a set of criteria for evaluating every IM activity in terms of its frequency of execution and expected outcomes. These criteria are multi-leveled, defining tolerances of acceptability, and multiple criteria can be applied to each activity; these can vary across pipelines or within sections. The criteria are effectively the performance measures against which the execution of the IM plan will be assessed. Other elements of the baseline consider pre-requisites for regulatory compliance and identification of external factors that will immediately elevate risk.
The RBIM framework has been utilized for management of in-service gas transmission pipeline assets operated by QGC in Queensland. It is supported by a web application and a performance dashboard. The system continuously monitors and updates risk profiles across the pipeline network based on the execution of planned activities and any new activities which may be initiated to mitigate elevated risk levels.
QGC integrity engineers use the driving factors identified by the system to investigate elevated risk levels, typically caused by severely delayed activities, unacceptable results, or notification of works in close proximity to the pipeline, either individually or in combination. Additional mitigation activities to reduce risk may then be entered into the system, with pre-defined criteria enabling the system to monitor and measure the activity outcomes. Risk suppression factors permit SME engineering judgment to be applied, supported by an approval matrix defining expiry periods and the levels of authority required depending on the level of suppression involved.
In addition to identifying segments of elevated risk, the dashboard is continually updated to present metrics on the overall performance of the IM plan across the network. The metrics are designed as leading indicators which measure the level of threat exposure and schedule attainment for each credible threat. Other metrics focus on groups of activities which are trending to non-compliance, the level of contribution, and the contributing cause across all threats or a specific threat.
The authors believe the RBIM risk management framework is a significant advancement for pipeline integrity management, offering benefits which include:
• Provides a holistic view of the IM program with early warning indicators;
• Applies pre-defined business criteria automatically in a transparent and consistent way;
• Identifies risks of interacting threats;
• Supports investigative analysis (by allowing scenarios to be evaluated in terms of their impact on risk profiles);
• Facilitates audits.
The concept is applicable to any type of asset, including pipeline facilities.

22. A Geomorphic Approach to Pipeline Integrity Assessments (confidential client) - Bay West LLC

Pipeline transmission systems traverse varied geomorphic landscapes, including streams, wetlands, slopes, karst, and sand dunes. Geomorphology is the scientific discipline that studies landform evolution and function, and is composed of multiple sub-disciplines that correspond to different landscapes (e.g., rivers, wind-formed landscapes, soils, slopes, and coastal systems). Geomorphic methods provide a scientific basis for assessing environmental risks to pipeline integrity, as well as support an understanding of why certain landscape processes are occurring in a given location.
Recently, a geomorphic assessment of pipeline integrity was performed at over 1,000 water and slope crossings in the Midwest. The project consisted of two phases, an initial desktop phase and a field assessment phase. The desktop assessment collected and analyzed data from public agencies (e.g., topographic maps, aerial photos, digital elevation maps) to assign preliminary risk categories to each crossing.
Field assessments were also conducted at each crossing. Data such as depth of cover, anthropogenic features, and corridor width/condition were documented for each crossing. Additional data at stream crossings included floodplain, channel bed and bank, and river/stream characteristics. Slope crossing data included slope angle, degree/type of vegetation, soil type, water sources, and visible slope failure indicators (e.g., tension cracks, slumping, slope creep).
Field data and background data were analyzed, and a final erosion/mass-failure risk category was assigned to each crossing. Additionally, depth-of-cover measurements were combined with the risk category to derive pipeline exposure categories. These categories (erosion/mass-failure and exposure categories) were used to develop a standard monitoring schedule for each crossing, as well as an emergency schedule triggered in response to certain environmental events (e.g., flooding, heavy precipitation events, seismic events).
A geomorphic approach to environmental risk management can be combined with traditional pipeline inspection methods (e.g., in-line inspections, integrity digs) to develop a comprehensive pipeline integrity management program.

23. Integration of Pipeline Risk Analysis with Advanced Geospatial Technologies - Boardwalk Pipeline Partners, LP

Pipeline Integrity Management (IM), including risk analysis, is inherently geospatially oriented and data intensive. Substantial information – most with a spatial component – is required to describe the pipeline facility, its construction, its operation, and the environment through which it is routed. Numerous options exist for collection, storage, management, maintenance, and display/analysis of this data. GIS is an excellent choice for the job because it is designed to store, manage, and analyze large volumes of spatial information such as are associated with pipeline systems. Many Risk/Integrity Management solutions utilize proprietary, non-GIS-oriented data management approaches with options for importing data from commonly available GIS systems. This data import process can be time consuming and prone to errors, requiring more time and effort for accurate data translation, with less remaining for risk analysis. In addition, reasonable capabilities for providing risk-related information to non-technical personnel in both the office and the field have been limited. Boardwalk Pipeline Partners has decided on an approach that fully integrates the risk assessment process with their Cloud-based Geographic Information System (GIS).
Boardwalk's Integrity Management solution has been implemented within the framework of the ESRI ArcGIS Pipeline Open Data Standard (PODS) Spatial Geodatabase. It includes Class Location and HCA determination, Risk analysis, MAOP calculations, and Baseline Assessment Plan (BAP) management. These applications are fully integrated within the Cloud-based GIS environment. Results of HCA and Risk analysis feed the BAP application, which manages the current and historical HCA segments, assigns relevant threats to the HCAs (based on risk results), tracks management of change, and manages required assessments and Preventive and Mitigative Measures (P&MM). Two cloud-based solutions distribute resulting information to users: Willbros' Integra Link and Geonamic Systems' Mitigation Manager. The GIS, IM, and dissemination capabilities have been deployed within a virtual private cloud, architected by Willbros and operated by Boardwalk on top of Amazon Web Services (AWS), to provide an increased information security posture, elasticity, and availability. The solution is composed of the following integrated components:
1. ArcGIS PODS Spatial Geodatabase. ESRI's spatial technology is widely employed in many different solution areas worldwide, including transmission pipeline data management. Within the ESRI ArcGIS environment, Boardwalk has elected to use the PODS Spatial Pipeline data model to store and manage the data. The PODS model is extensively used within the transmission pipeline industry in the United States.
2. The DRIP risk model. Boardwalk utilizes the “Drivers, Resistors, Indicators, and Preventers” (DRIP) risk model (a hedged sketch of this style of scoring appears after this list). “Drivers” are data elements that provide direct causal information on specific failure or consequence components. “Resistors” are data items that indicate a potential resistance to a failure or consequence component. “Indicators” are data that provide an indication that a particular failure or consequence component may or may not exist. “Preventers” indicate actions taken to prevent the occurrence of a failure or consequence component. The risk model is fully implemented within Boardwalk's GIS system utilizing the Python programming language (the standard scripting/programming language provided by ESRI for use within their GIS solution, ArcGIS). Python programming and support has been provided by Willbros.
3. A full suite of GIS-based IM applications. The Cloud-based GIS serves as the platform for Class and HCA determination, MAOP calculations, and management of Boardwalk's Baseline Assessment Plan (BAP) schedule. These solutions are provided by Geonamic Systems.
4. Integra Link. Developed by Willbros, Integra Link is a Cloud-based spatial data delivery system that allows Boardwalk personnel access to highly secure pipeline data and high-resolution imagery, enabling efficient and reliable field data collection that helps to manage risk, lower operating costs, and improve pipeline asset performance.
5. Mitigation Manager. Also a product of Geonamic Systems, Mitigation Manager allows relevant Boardwalk personnel, both in the field and in the office, to explore/analyze company-wide risk results. Users can filter the results, drill down to individual risk causal factors, generate reports, and execute detailed “What-If” scenarios against the data in order to determine mitigation actions that will provide the highest risk reduction for the lowest cost.
6. Pipeline GIS Maintenance applications. Eagle Information Mapping's PDMT and TurboRoute tools are utilized for base pipeline data management and
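The abstract states only that the DRIP model is implemented in Python within ArcGIS; Boardwalk's actual implementation is not public. The following standalone sketch shows one plausible shape of drivers/resistors/indicators/preventers arithmetic, with invented attribute names and weights.

```python
# Hedged sketch of a DRIP-style score (not Boardwalk's implementation):
# Drivers and Indicators push the score up; Resistors and Preventers pull
# it down. All attributes and weights are invented for illustration.

def drip_score(drivers, resistors, indicators, preventers):
    """Toy relative score: positive contributions minus negative ones."""
    up = sum(drivers.values()) + 0.5 * sum(indicators.values())
    down = sum(resistors.values()) + sum(preventers.values())
    return max(up - down, 0.0)

score = drip_score(
    drivers={"soil_corrosivity": 0.6, "pipe_age": 0.4},
    resistors={"coating_quality": 0.3},
    indicators={"cp_reading_low": 0.5},
    preventers={"recent_ili_run": 0.4},
)
print(f"relative DRIP-style score: {score:.2f}")
```
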
24. Barrier Based Approaches to Risk Modeling for Pipeline Safety: Making Regulations, Standards and Practices More Effective - Global Business Management Consultants, LLC

In order to advance risk modeling methodologies for gas transmission and hazardous liquid pipelines and non-pipeline systems, new approaches to risk modelling should be considered by operators and regulators in the US pipeline industry. Barrier-based approaches in combination with traditional semi-quantitative tools are one such consideration. In contrast to the traditional fate-and-transport studies relying on quantitative dispersion modeling, the barrier-based approaches of Bow-Ties and Tripod Beta spend most of the modeling effort on identifying, assessing, and maintaining preventative and mitigative controls (barriers) to major accident events.
Bow-Tie diagrams and Tripod Beta trees are graphical methods for modeling the cause-and-effect relationships around major accident events such as loss of containment and collisions. For many years, the Bow-Tie and Tripod Beta methods have been used in Europe and Australia as cost-effective approaches to look at the integrity of barriers in the petroleum, chemicals, and aviation industries. Furthermore, in the wake of the Macondo offshore well disaster of 2010, these approaches are being adopted for US offshore oil and gas exploration and production facilities.
This paper will specifically show how Bow-Tie diagrams could be applied to pipeline safety. It will illustrate the method through a step-by-step overview accompanied by representative examples of its application to pipeline safety management. It will also touch on how Tripod Beta trees emphasize not only the how, but the why, behind barrier failures of specific accidents for use as lessons learned.
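A bow-tie diagram is graphical, but the quantitative idea behind it is compact: the top event occurs only if every preventative barrier on a threat line fails, and consequences escalate only if the mitigative barriers also fail. The sketch below illustrates that arithmetic with invented frequencies and barrier failure probabilities.

```python
# Sketch of the arithmetic behind a bow-tie (illustrative numbers only):
# threat -> preventative barriers -> top event -> mitigative barriers.
import math

threat_freq = 0.1  # threat challenges per year (hypothetical)
prevention_barriers = [0.1, 0.2, 0.05]  # P(barrier fails on demand)
mitigation_barriers = [0.3, 0.1]        # e.g., isolation, emergency response

top_event_freq = threat_freq * math.prod(prevention_barriers)
major_outcome_freq = top_event_freq * math.prod(mitigation_barriers)

print(f"loss-of-containment frequency:   {top_event_freq:.2e} /yr")
print(f"escalated-consequence frequency: {major_outcome_freq:.2e} /yr")
```

This framing makes clear why barrier-based approaches concentrate effort on maintaining the barriers: each barrier's on-demand failure probability multiplies directly into the outcome frequency.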

25. Evaluating preventive mitigative measures process driven by risk analysis - New Century Software

In addition to the risk analysis process codified in the 49 CFR 192 Subpart O and 49 CFR 195.452 regulations, pipeline operators are required to evaluate preventive and mitigative measures for pipeline segments in high consequence areas. Conventionally, pipeline operators primarily focus on satisfying the regulatory requirements for integrity management program risk analysis and inadequately support risk reduction measures within the PMM process. This paper provides insight into a proper PMM process and how risk estimates, based on qualitative or quantitative risk algorithms, can be effectively utilized to evaluate and prioritize preventive and mitigative measures.

26. unknown - New Century Software

Integrity management regulations codified in 49 CFR 192 Subpart O and 49 CFR 195.452 provide for a risk-based means of managing natural gas and hazardous liquids pipeline integrity. Pipeline operators employ numerous methods of risk analysis, such as matrix risk screening, indexing algorithms, and probabilistic modeling. While operators often invest significant monies and resources in satisfying the regulatory requirement for integrity management program risk analysis, industry risk programs often insufficiently support risk reductions and other integrity management program elements. In developing and maintaining a robust integrity management program, operators should first evaluate each approach to risk analysis (not software platforms) and initiate the development of systematic process language prior to the implementation of such programs. The following paper provides a summary of current risk analysis methods, with associated advantages and disadvantages, and a discussion of process development that facilitates the procedure-driven application of risk analysis results in other integrity management program elements.

27. Consequence modeling for pipeline risk assessments - New Century Software

When performing pipeline risk assessments, the time and precision spent on the consequence analyses is often significantly less than that spent on estimating the probability or likelihood of failure. In addition, the consequence models are typically not incorporated into the preventive and mitigative decision process. With incident costs sometimes reaching billions of dollars or resulting in multiple injuries and fatalities, it is important that the pipeline industry leverage more advanced models and spatial data for estimating the consequences of pipeline failure. The use of improved release and hazard models to enhance pipeline risk assessment can facilitate more detailed consideration of human, environmental, and economic consequences in modern pipeline risk assessments.

28. Effective Risk Modeling – Essential Characteristics and Opportunities for Pipeline Operations - TransCanada

Many industries perform risk modeling, where models range from extremely detailed, data-driven, probabilistic quantitative models to qualitative indexing models. The different models serve different purposes and require varied types, quantities, and qualities of inputs. The appropriate model should be chosen considering the event being modeled and prevented, the purpose or decision-making process it will be used for, the granularity of the decisions at hand, the input data available, the characteristics of the threats, the types of relevant threats and consequences that need to be represented, and the types of risks that are being modeled. There are other useful considerations, such as the ability to validate and verify results, scalability, transparency, and the consequent enabling of effective continuous improvement.
The purpose of risk modeling in the pipeline industry is to reduce risk to public safety, the environment, and property by identifying the pipeline segments that pose higher risk to risk receptors and mitigating them in a timely and effective manner. To mitigate the risk, one must clearly identify the threats and consequences that contribute to the risk. The many different types of threats and their interaction types should be identified. In the pipeline industry, the evidence of threats can come from historical incidents (including near-misses, construction issues, etc.), integrity assessments and mitigations (hydrostatic tests, in-line inspections, indirect surveys, excavations, recoats, replacements, etc.), or pipeline and right-of-way characteristics that can be causal factors or resistance factors. This evidence comes in disparate and varied forms. To bring all available evidence together, these disparate sets need to be used to calculate one or several probabilities of failure of the pipe segment. This requires models that can compare between threats and between segments. The consequences that one is trying to protect against have to be identified at the appropriate granularity to enable accounting for relevant receptors and also mitigating at the appropriate level. A framework for system-wide risk modeling has been developed at TransCanada using quantitative risk modeling methods so that between-threat comparison and accumulation can be performed and validated against historical performance data. The framework allows a consistent approach across threats and consequences but also enables unique, system-specific trends to be incorporated. This transparent approach enables understanding of risk modeling details, scalability, updating algorithms with new data, learnings, and feedback, and continuous improvement. Effective improvement has been further enabled by developing diagnostic tools to examine the risk models, such as sensitivity analysis. Many types of graphical representations of results, in terms of maximums and averages of risk, probability of failure, and consequence aspects, provide many perspectives on risk for varying decision support related to pipeline integrity maintenance. Further details of the merits of this modeling can be found in publications IPC 2014-33474, IPC 2014-33477, and IPC 2014-33639.
TransCanada has tested and used many of these models in the past, and has learned from those experiences which models have been successful and which have not. Based on this experience, TransCanada is proposing to discuss at the Workshop which quantitative risk approaches have been successful and why, how interactive integrity threats are being addressed and what limitations exist regarding interactive threat modeling, which risk models have been proven to properly estimate risk and which have not, and the importance of pipeline design and operating data to support the respective models and decision-making. This presentation will be from an operator's perspective and will display the struggles and successes a diligent, quantitative approach can achieve.
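A minimal sketch of the between-threat accumulation step the abstract describes: once each threat's probability of failure is expressed in the same probabilistic currency, segment-level PoF can be combined and compared. The per-threat values below are invented, and independence between threats is an assumed simplification, not the TransCanada framework.

```python
# Sketch: accumulating per-threat PoF into a segment PoF, assuming (as a
# simplification) independent threats. Values are illustrative only.

pof_by_threat = {
    "external_corrosion": 2e-4,
    "cracking": 5e-5,
    "third_party_damage": 1e-4,
}  # per km-year, hypothetical

survival = 1.0
for pof in pof_by_threat.values():
    survival *= (1.0 - pof)  # probability of surviving this threat
segment_pof = 1.0 - survival
print(f"combined segment PoF: {segment_pof:.2e} per km-yr")

# Keeping every threat in one consistent probabilistic currency is what
# makes this accumulation (and validation against incident history) possible.
```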

29. Comprehensive Integration of Modeling Tools and Programs as an Aid in Assessing Spill Risk in Pipeline Operations - MH Consulting, Inc.

This paper provides a brief description of a conceptual pipeline integrity management approach that utilizes an integrated set of probabilistic predictive models and maintenance/improvement programs designed to work together to minimize integrity-related risk. The Assess, Inspect, Maintain, Mitigate, Improve and Repeat (AIMMIR) program is a proposed quantitative approach that has the goal of minimizing the total expected cost of a pipeline's long-term integrity management and maintenance programs when combined with the future costs of leaks, spills, and ruptures. The program requires a statistical events/impacts database, a comprehensive set of probabilistic computer modeling and simulation tools, a supporting financial framework and the software to assess the results, and an integrated maintenance and improvement program designed to minimize the long-term operational hazards and costs based on the results of the financial risk assessment. This paper provides a description of the program coupled with an assessment of the current availability of the software tool sets needed to implement the AIMMIR framework.
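A toy version of the AIMMIR-style objective, minimizing program cost plus expected failure cost, is sketched below. The exponential risk-reduction curve, the base failure probability, and all dollar figures are invented assumptions, not values from the paper.

```python
# Toy objective: choose the maintenance budget minimizing
# total expected cost = program cost + expected failure cost.
# The risk-reduction curve and all figures are invented placeholders.
import math

FAILURE_COST = 50_000_000  # expected cost of a major spill ($, hypothetical)
BASE_POF = 0.05            # annual system failure probability with no program

def total_expected_cost(budget):
    # Assume each increment of program spend cuts PoF by a constant factor.
    pof = BASE_POF * math.exp(-budget / 2_000_000)
    return budget + pof * FAILURE_COST

budgets = range(0, 10_000_001, 500_000)
best = min(budgets, key=total_expected_cost)
print(f"cost-minimizing annual budget: ${best:,} "
      f"(total expected cost ${total_expected_cost(best):,.0f}/yr)")
```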

#30 | Gas Asset Health and Criticality Model | Jacobs Consultancy
Critical asset management is one of the fundamental disciplines required to sustain long-term profitability in any operating company. Many industries have
developed specific methodologies to improve the sustainability of operations by focusing on the long-term reliability of these critical assets.
Our envisioned presentation will explore a data-analytic approach Jacobs has used in the development of an asset health and criticality model for a gas
transmission and distribution company. This approach allows an objective and fair indication of the health status of individual assets; assesses the remaining
useful life of the assets; and provides a mixed-methods analysis approach, combining qualitative and quantitative data, for intervention and preventative or
mitigation actions.
The gas asset health and criticality model is an overarching interaction between the likelihood of a failure or fault at the system, asset, or asset-component level
and the potential consequence of that asset's failure or fault on the mission of the company.
The asset health model involves a criticality analysis directed at the organizational and functional system level as well as at specific items within a larger system.
The criticality analysis evaluates the importance of systems and assets to the objectives, mission, and values of the organization and considers the
consequences of failures/faults to mission objectives. We examine the functional systems, assets, failure types, and scenario focus for the analysis.
The presentation will examine strengths, weaknesses, and lessons learned in the development of the model, such as:
• The use of failure/fault data to develop expected failure rates, such as the mean-time-between-failures (MTBF) parameter
• System and asset functional analysis
• Analytic methods such as the homogeneous Poisson process, a widely used technique when the time intervals between failures follow an exponential
distribution (illustrated in the sketch following this list)
• Establishing asset health based on expected time until fault
• Insight to estimate serial failures and changes in the rate of degradation/deterioration
Applications
• Mixed-methods approach
• Intervention, prevention, and mitigation strategies
• Adaptable approach to pipeline and non-pipeline systems
• Impact of deferring investment
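To illustrate the homogeneous Poisson process item above: under that model, inter-failure times are exponentially distributed, the maximum-likelihood MTBF estimate is simply the sample mean, and an asset health indicator such as the probability of surviving a planning horizon follows directly. The sketch below is a generic illustration with hypothetical data, not the Jacobs model itself.

import math

# Hypothetical inter-failure times (years) observed for one asset class.
intervals = [1.8, 2.4, 0.9, 3.1, 2.2, 1.5]

# Under a homogeneous Poisson process, inter-failure times are exponential
# and the maximum-likelihood MTBF is the sample mean.
mtbf = sum(intervals) / len(intervals)
rate = 1.0 / mtbf  # constant failure/fault rate (per year)

horizon = 2.0  # planning horizon in years
p_no_fault = math.exp(-rate * horizon)   # P(no fault within the horizon)
expected_faults = rate * horizon         # expected number of faults in the horizon

print(f"MTBF estimate:            {mtbf:.2f} years")
print(f"P(no fault in {horizon:.0f} years):   {p_no_fault:.2%}")
print(f"Expected faults in span:  {expected_faults:.2f}")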

#31 | Application of QRA in Modeling Risk Event Response | Jacobs Consultancy
Our presentation explores a programmatic work-process approach to the development and application of Quantitative Risk Analysis (QRA) applicable to pipeline,
rail, barge, and ocean vessel modes of transportation, using a case study for refinery crude oil and other feedstocks. The overall work process begins with a root
cause analysis, followed by event consequence analysis, and ends with risk identification and mitigation. The model examines how geologic, weather, and
event-response risks are incorporated into the analysis. Dispersion analysis data and models (GNOME, PHAST) feed the QRA. Applications of the model include
the development of risk event (weather event) response loss and cost value distributions and mitigation strategies.
Applications
• Quantitative Risk Analysis approach
• Prevention and mitigation strategies
• Adaptable approach to pipeline and non-pipeline systems
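One way to picture the loss and cost value distributions such a QRA produces is a simple Monte Carlo that combines an event-frequency model with a consequence-severity distribution. The sketch below is a generic illustration only; the event rate and lognormal loss parameters are hypothetical stand-ins for inputs that dispersion models such as GNOME or PHAST would inform.

import random

random.seed(42)

# Hypothetical annual frequency of weather-driven risk events, and a
# lognormal consequence (response loss plus cost, $) per event.
event_rate = 0.3               # events per year
log_mu, log_sigma = 14.0, 1.2  # lognormal parameters for loss per event

def simulate_annual_loss():
    """One Monte Carlo year: Poisson event count times sampled losses."""
    # Sample the Poisson count via exponential inter-arrival gaps.
    n, t = 0, random.expovariate(event_rate)
    while t < 1.0:
        n += 1
        t += random.expovariate(event_rate)
    return sum(random.lognormvariate(log_mu, log_sigma) for _ in range(n))

losses = sorted(simulate_annual_loss() for _ in range(10_000))
mean_loss = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]
print(f"mean annual loss: ${mean_loss:,.0f}")
print(f"95th percentile:  ${p95:,.0f}")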

#32 | Quantitative probability estimation methods for time-dependent and time-independent pipeline failure mechanisms | C-FER Technologies
A comprehensive assessment of pipeline operating risk requires the consideration of all significant threats to integrity. Generally, pipeline threats can be
characterized as either time-dependent or time-independent. Time-dependent threats encompass damage mechanisms that involve existing damage features
with the potential to grow over time (e.g., corrosion and cracking). Management of this type of damage involves feature detection through inspection,
fitness-for-service assessment, and selective remediation. The uncertainties inherent in feature detection, sizing, growth projection, and fitness assessment all
contribute to the future likelihood of line failure, and these uncertainties should be reflected in the modeling process used to assess the risk posed by this type of
integrity threat. Time-independent threats encompass damage mechanisms that can occur randomly over time (e.g., equipment impact or sudden ground
movement). The management of this type of damage involves estimating the likelihood of a damage event, assessing the likelihood of line failure given an
event, and implementing selective damage prevention measures where appropriate. The likelihood of damage occurrence, as influenced by damage
management practices, and the uncertainties inherent in assessing the likelihood of pipeline failure given a damage event should be reflected in the modeling
process used to assess the operating risk posed by this type of integrity threat. This presentation will outline a framework developed for estimating the
probability of pipeline failure due to these two distinctly different types of damage mechanisms, one that explicitly addresses the sources of uncertainty noted
above. This probability analysis framework has been successfully employed for quantitative risk and reliability assessment of many existing and proposed
pipeline systems, and it is central to the Reliability-based Design and Assessment (RBDA) process now embodied in Annex O of the Canadian pipeline design
code (CSA Z662).
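The two estimation routes the abstract distinguishes can be sketched numerically: a time-independent threat reduces to a damage event rate multiplied by a conditional failure probability, while a time-dependent threat requires propagating sizing and growth uncertainties, for example by Monte Carlo. The following is a minimal sketch under assumed distributions and a simplified depth-based failure criterion, not the C-FER/RBDA framework itself.

import random

random.seed(1)

# --- Time-independent threat (e.g., equipment impact) ---
hit_rate = 5e-3         # damage events per km-year (hypothetical)
p_fail_given_hit = 0.2  # from assumed wall thickness / load distributions
p_annual_ti = hit_rate * p_fail_given_hit
print(f"time-independent failure rate: {p_annual_ti:.1e} per km-year")

# --- Time-dependent threat (e.g., corrosion growth) ---
# Propagate sizing and growth-rate uncertainty for one reported feature.
wall = 10.0   # nominal wall thickness, mm
years = 10
trials = 100_000
fails = 0
for _ in range(trials):
    depth = random.gauss(4.0, 0.8)     # measured depth with tool sizing error, mm
    growth = random.gauss(0.25, 0.10)  # corrosion growth rate, mm/yr
    future_depth = depth + max(growth, 0.0) * years
    if future_depth > 0.8 * wall:      # simple 80%-of-wall failure criterion
        fails += 1
print(f"P(feature exceeds 80% wall in {years} yr): {fails / trials:.3f}")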

#33 | Critical Pipeline Failures and Rare Events Data | Kendrick Consulting LLC
This paper presents the severe and oftentimes game-changing limitations associated with our observational learning mode and the ineffective knowledge base
that results from it. More specifically, this paper stands our current approach to pipeline risk assessment on its head, describing how "rare events" data, as they
relate to significant pipeline failures, are simply not being addressed by our probabilistic risk analyses and modern statistical approaches. Further, risk blindness
has been shown to consistently minimize the perception of high-consequence risk in the corporate world. Touching on themes from Nassim Taleb's book, The
Black Swan [1], this paper is about those failure events that lie outside the realm of regular possibility. Such events would never have been convincingly
predicted prior to their occurrence, and yet they carry an extreme impact. Note that just because a failure was catastrophic does not necessarily mean it was a
Black Swan. For example, the financial meltdown of 1982 was a catastrophe, but it was not a Black Swan. There have been numerous pipeline failures over the
years that were quite catastrophic but that were not Black Swans as referenced in this paper. They were neglect. It is important not to confuse the two.
Introduction
The approach recommended in this paper emphasizes the importance of effectively utilizing subject matter experts (SMEs) to provide input during pipeline
and/or facility risk assessments, an approach referred to in this paper as "risk profiling." Because of the broad scope of the risk management field, the objective
of this paper is to initiate industry dialogue and challenge the existing framework of the probabilistic approaches commonly seen in the pipeline industry.

#34 | Risk Model Validation through Data Mining | Structural Integrity Associates, Inc.
In the late 1990s, as part of its Congressional mandate to conduct a Risk Management Demonstration Program, the Office of Pipeline Safety (OPS) began
authorizing pipeline operators to conduct demonstration projects to determine how risk management might be used to complement and improve the existing
Federal pipeline safety regulatory process. These early risk models initially focused on corrosion and third-party damage likelihood concerns, together with
consequence concerns that considered business, environmental, and safety impacts. Based on these early lessons, combined with the need to act following
several serious pipeline incidents, the Pipeline Safety Improvement Act of 2002 mandated the use of risk assessment to prioritize the baseline inspections of
our nation's transmission infrastructure. The predominant risk model algorithms are based on relative risk indices that leverage subject matter expertise, while
a few models have tried to elevate that analysis into a probabilistic regime.
The industry is now about 15 years into the gathering and analysis of pipeline construction, operation, inspection, integrity assessment, and corrosion control
information. As part of an effective integrity management program, many operators have used the lessons learned to try to improve their relative risk models
through the alteration of risk weighting scores or through targeted conditional queries that identify sets of conditions at increased risk based on their experience.
This presentation will share an alternate approach to validating and advancing pipeline risk model algorithms by examining the risk tactics currently used by
another industry with significant amounts of critical buried infrastructure. That industry has begun embracing data mining techniques to cull through years of
inspection results and corrosion control data (both with and without degradation found). By aggregating this industry data, it has been able to identify the
probability of interacting effects for both the occurrence and absence of threats such as external and internal corrosion. This presentation will discuss the
approach used to aggregate and interrogate pipeline information for the purpose of gaining knowledge from the data using freely available data mining software
tools. If the interstate transmission pipeline industry were to participate in a similar common inspection results database, the data mining results could yield
improved pipeline safety, environmental protection, increased efficiency and service reliability, as well as improved communication and dialogue among all
stakeholders.
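As a flavor of this kind of interrogation, the sketch below pools inspection records and computes the lift in the probability of finding internal corrosion given that external corrosion was found on the same segment, a simple signal of interacting threats. The data and field names are illustrative only, not the referenced industry's actual database or tooling.

# Hypothetical pooled inspection records: one row per inspected segment,
# with flags for whether external/internal corrosion degradation was found.
records = [
    {"segment": "A-01", "external": True,  "internal": True},
    {"segment": "A-02", "external": True,  "internal": False},
    {"segment": "B-01", "external": False, "internal": False},
    {"segment": "B-02", "external": True,  "internal": True},
    {"segment": "C-01", "external": False, "internal": True},
    {"segment": "C-02", "external": False, "internal": False},
]

n = len(records)
ext = sum(r["external"] for r in records)
both = sum(r["external"] and r["internal"] for r in records)
internal = sum(r["internal"] for r in records)

p_int = internal / n           # baseline P(internal corrosion found)
p_int_given_ext = both / ext   # P(internal found | external found)
lift = p_int_given_ext / p_int # lift > 1 suggests interacting threats

print(f"P(internal):               {p_int:.2f}")
print(f"P(internal | external):    {p_int_given_ext:.2f}")
print(f"lift (interaction signal): {lift:.2f}")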
