
Norwegian University of Science and Technology

Faculty of Social Sciences and Technology Management


Department of Industrial Economics and Technology Management

Master Thesis

PARTIAL AND IMPERFECT TESTING OF


SAFETY INSTRUMENTED FUNCTIONS
Hanne Rolén
June 2007

MASTER'S THESIS

Spring semester 2007

Student: Hanne Rolén


Department of Industrial Economics and Technology Management

DECLARATION
I hereby declare on my honour and conscience that I have carried out the above thesis myself and without any unlawful assistance.

Lysaker, 8 June 2007

____________________________________________________________
Signature

In accordance with § 20 of the Regulations for studies at NTNU, the thesis with drawings etc. becomes NTNU's property. The work, or results from it, may therefore not be used for other purposes without agreement with the interested parties.

PREFACE
This master thesis was written during 20 weeks in the spring of 2007 as the final work performed by Hanne Rolén at the Norwegian University of Science and Technology (NTNU). The thesis is written within the Department of Industrial Economics and Technology Management, study field Health, Environment and Safety. It is also closely related to the Department of Industrial Production and Quality, as the thesis is within the field of technical safety. The thesis is written in close cooperation with Aker Kværner Subsea.
The intended audience is readers with knowledge of reliability theory, and it is recommended that the reader is familiar with the concepts described in the book System Reliability Theory by Rausand and Høyland (second edition, 2004).
I would like to thank my colleagues at Aker Kværner Subsea for the support throughout the semester, and especially my supervisor Thor Ketil Hallan. Further, I would like to thank Ring-O, Lars Bak (Lilleaker Consulting), and Luciano Sanguineti and Enrico Sanguineti at ATV for giving me the necessary practical understanding of valves. And finally, thanks to Mary Ann Lundteigen (NTNU) for good discussions and Marvin Rausand (supervisor at NTNU) for important feedback and input throughout the thesis.

Lysaker, 8th of June 2007

Hanne Rolén

Partial and imperfect testing of SIF

Introduction

SUMMARY
In order to avoid substantial hardware costs of building platforms, moving petroleum
production facilities subsea is becoming a popular solution. Fields can be remotely operated
and stand-alone fields that would not be profitable to develop separately can now be tied
together to one pipeline/riser and hence save expenses. Safety instrumented systems are
implemented to reduce or eliminate unacceptable risk associated with such production, and
the safety integrity level is a common requirement describing the safety availability of the
equipment. When performing analysis of the consistency of the safety functions to perform
when needed, it is important to evaluate the assumptions that form the basis for the
calculations. The author has in particular assessed the assumption that a component is as
good as new after each proof test, meaning that the unavailability is reduced to zero.
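Under the as-good-as-new assumption, the average PFD of a single component over a proof-test interval τ is commonly approximated by PFD_avg ≈ λ_DU · τ / 2. A minimal sketch of this standard approximation (the numerical values are illustrative, not taken from the thesis):

```python
def pfd_avg(lambda_du: float, tau_hours: float) -> float:
    """Average PFD of a single component over one proof-test interval,
    assuming it is as good as new after each test (unavailability
    reset to zero at every test)."""
    return lambda_du * tau_hours / 2.0

# Illustrative example: lambda_DU = 1.0e-6 per hour, annual proof test
print(pfd_avg(1.0e-6, 8760))  # about 4.4e-3
```

The assumption assessed in the thesis is exactly the reset to zero encoded in this formula; if the reset is imperfect, the true unavailability is higher than this approximation suggests.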
The reasons for imperfect tests may be related to the five M-factors; method, machine,
milieu, man-power and material. Through different case studies the potential effects of
imperfect tests have been analyzed. SINTEF has proposed a method for including the
systematic failures in the calculations of the probability of failure on demand (PFD) by
adding a constant value called PTIF (test independent failures) in the PDS method. A method
for quantifying the PFD impact of an imperfect test due to non-testable random hardware
failures has been proposed by the author. Case results indicate that the PFD impact of
imperfect testing of hardware failures is far more significant than the PDS addition for
systematic failures.
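The two ways of accounting for imperfect tests can be contrasted in a small sketch. The PDS approach adds a constant PTIF to the PFD. For non-testable hardware failures, one common modelling choice (used here as an illustrative assumption, not as a reproduction of the author's exact method) is to let a fraction 1 − c of λ_DU escape every proof test and accumulate over the service life T:

```python
def pfd_with_ptif(lambda_du, tau, p_tif):
    """PDS-style sketch: constant test-independent addition to the PFD."""
    return lambda_du * tau / 2.0 + p_tif

def pfd_with_coverage(lambda_du, tau, coverage, lifetime):
    """Imperfect proof test sketch: a fraction `coverage` of the DU
    failure rate is reset at each test; the rest accumulates over
    the whole service life `lifetime` (hours)."""
    tested = coverage * lambda_du * tau / 2.0
    untested = (1.0 - coverage) * lambda_du * lifetime / 2.0
    return tested + untested

tau = 8760            # annual proof test
life = 20 * 8760      # 20-year service life
lam = 1.0e-6          # illustrative DU failure rate per hour
print(pfd_with_ptif(lam, tau, 1.0e-4))          # about 4.5e-3
print(pfd_with_coverage(lam, tau, 0.95, life))  # about 8.5e-3
```

With these illustrative numbers the non-testable hardware part dominates the constant PTIF addition, in line with the case results summarized above.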
Implementing partial stroke testing makes it possible to reveal failure modes that could
previously only be revealed through tests requiring a process shutdown. A successful
implementation may improve the safety integrity level rating of the system. The use of partial
stroke testing in subsea petroleum production has so far not been common, and several of the
arguments for and against implementing it are assessed.
It has been argued that partial stroke testing leads to an increase in the spurious trip rate, as
a valve that starts to move may continue to the closed position. The likely causes of such an
event were placed in a Bayesian belief network, which demonstrated the need for
implementing the right equipment. New devices such as smart positioners and digital valve
controllers have been introduced for the purpose of partial stroke testing, reducing the
human interference in partial stroke testing and thus reducing the causes of spurious trips.
Partial stroke testing may be implemented in order to justify extended proof test intervals. As
common cause failures are those failures that occur within the same proof test interval, an
extension of the interval could imply that more failures are classified as common cause
failures (Rausand, 2007). In such situations, it should be discussed whether the β-factor
should be increased to reflect the PFD impact this may have.
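The effect of increasing the β-factor can be illustrated with the common beta-factor approximation for a 1oo2 voted pair (as used in IEC 61508-6 and Rausand & Høyland); the rates and intervals below are illustrative:

```python
def pfd_1oo2(lambda_du, tau, beta):
    """Approximate average PFD of a 1oo2 structure with the beta-factor
    model: an independent term plus a common cause term that grows
    linearly with beta and with the proof-test interval tau."""
    independent = ((1.0 - beta) * lambda_du * tau) ** 2 / 3.0
    ccf = beta * lambda_du * tau / 2.0
    return independent + ccf

lam, tau = 1.0e-6, 8760  # illustrative values
for beta in (0.02, 0.05, 0.10):
    print(beta, pfd_1oo2(lam, tau, beta))
```

Since the common cause term is linear in both β and τ, extending the proof-test interval while also increasing β compounds the PFD impact.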
Another argument for implementing partial stroke testing has been the opportunity to reduce
the hardware fault tolerance (which implies cost savings), since the safe failure fraction is
increased by detecting more dangerous undetected failures and converting them to
dangerous detected failures. As partial stroke testing does not fulfil the criteria for a
diagnostic test, it is argued that partial stroke testing should not be used to affect the safe
failure fraction, and hence cannot be an argument for a reduction in the hardware fault
tolerance (McCrea-Steele, 2006).
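The safe failure fraction is the share of the total failure rate made up of safe and dangerous detected failures. The sketch below shows why crediting a test with detecting part of the DU failures mechanically raises the SFF; all failure-rate values are illustrative:

```python
def sff(lambda_s, lambda_dd, lambda_du):
    """Safe failure fraction = (safe + dangerous detected) / total."""
    total = lambda_s + lambda_dd + lambda_du
    return (lambda_s + lambda_dd) / total

# Illustrative rates (per hour)
lam_s, lam_dd, lam_du = 2.0e-6, 1.0e-6, 2.0e-6
print(sff(lam_s, lam_dd, lam_du))  # about 0.60

# If PST were credited with detecting 60% of the DU failures:
detected = 0.6 * lam_du
print(sff(lam_s, lam_dd + detected, lam_du - detected))  # about 0.84
```

This reclassification is exactly what the argument above rejects when the PST does not qualify as a diagnostic test.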
Based on a failure mode assessment of gate valves and OREDA data, the author has proposed
a tentative partial stroke testing coverage factor of 62%. The result is in accordance with
former research. The partial stroke testing coverage for the dangerous failure modes fail to
close, leakage in closed position, delayed operation and external leakage in closed position
could not be justified quantitatively, as the production companies do not provide such detailed
information. The coverage may differ depending on the valve type, design and production
environment.
In particular for components with higher failure rates, from λ_DU = 1.0·10⁻⁶ per hour and
above, investing in partial stroke testing can be recommended. Achieving the exact partial
stroke testing coverage is less important than the test frequency: the positive PFD impact of
carrying out the tests often is greater than that of improving the coverage by an additional 10%.
On the other hand, a reduction of the non-testable part by 10% yields a greater
improvement of the PFD than obtaining both a higher partial stroke testing coverage and
shorter test intervals. Hence the focus should be upon diminishing the reasons why a test
may be unsuccessful. The author performed a case study of the Morvin HIPPS (high
integrity pressure protection system) that confirms this outcome. Ignoring the estimation of
non-testable failures yields inaccurate PFD results and an inaccurate rating of the safety
integrity level. Considering that the use of safety instrumented systems is becoming the common
approach for reducing risk in the petroleum production industry, it is important to improve
the quality of these calculations.
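The trade-off between PST coverage and PST frequency can be sketched with a simple approximation in which failures covered by the PST are restored at the PST interval and the remainder only at the full proof test. The model and all numbers are illustrative assumptions, not the thesis' case data:

```python
def pfd_with_pst(lambda_du, pst_cov, tau_pst, tau_proof):
    """Failures covered by PST are restored at the (short) PST interval,
    the remainder only at the full proof-test interval."""
    covered = pst_cov * lambda_du * tau_pst / 2.0
    rest = (1.0 - pst_cov) * lambda_du * tau_proof / 2.0
    return covered + rest

lam, tau_proof = 1.0e-6, 8760  # illustrative rate, annual proof test
base = pfd_with_pst(lam, 0.62, 2190, tau_proof)      # quarterly PST, 62% coverage
more_cov = pfd_with_pst(lam, 0.72, 2190, tau_proof)  # +10 points of coverage
more_freq = pfd_with_pst(lam, 0.62, 730, tau_proof)  # monthly PST instead
print(base, more_cov, more_freq)
```

With these illustrative numbers, moving from quarterly to monthly PST improves the PFD more than adding 10 percentage points of coverage, in line with the frequency argument above.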


INDEX
PREFACE
SUMMARY
INDEX
LIST OF TABLES
LIST OF FIGURES
TERMS AND ABBREVIATIONS
1 INTRODUCTION
1.1 BACKGROUND
1.2 OBJECTIVES
1.3 DELIMITATIONS
1.4 SCIENTIFIC APPROACH
1.5 STRUCTURE OF THE REPORT
2 THEORETICAL FRAMEWORK
2.1 STANDARDS AND GUIDELINES
2.1.1 IEC 61508 & IEC 61511
2.1.2 OLF 070 GUIDELINE
2.2 RELIABILITY DATA SOURCES
2.2.1 OREDA
2.2.2 PDS
2.2.3 EXIDA
2.3 SAFETY INSTRUMENTED FUNCTIONS
2.3.1 MAIN PRINCIPLES
2.3.2 AVAILABILITY OF SIF
2.3.3 SIS REQUIREMENTS
2.4 SIS APPLIED IN SUBSEA XMT
2.5 TESTING OF THE SIS ABILITY TO PERFORM THE SIF
3 IMPERFECT TESTING
3.1 CAUSES FOR AN IMPERFECT TEST
3.2 EFFECTS OF AN IMPERFECT TEST
3.2.1 CASE A, CONSTANT PFD ADDITION
3.2.2 CASE B, INCREASING PFD ADDITION
3.2.3 CASE C, DECREASING PFD
3.2.4 COMMENTS TO THE IMPERFECT TEST CASES
4 PARTIAL STROKE TESTING
4.1 MAIN PRINCIPLES AND CONCEPTS
4.2 ADVANTAGES AND DISADVANTAGES
4.3 PST COVERAGE FACTOR
4.4 CORRELATION PST AND SPURIOUS TRIPS
4.5 INFLUENCING FACTORS FOR PST CONTRIBUTION FOR A SIS
4.6 PST IMPACT ONTO THE SIL
5 DISCUSSION
5.1 QUALITY OF THE RELIABILITY ASSESSMENT
5.2 UNCERTAINTY REGARDING THE RESULTS
5.3 RECOMMENDATIONS FOR FURTHER WORK
6 CASE STUDY
6.1 INTRODUCTION CASE STUDY; MORVIN
6.2 REQUIREMENTS FROM CUSTOMER
6.3 HIPPS
6.3.1 HIPPS TESTING
6.4 SIL RATING
7 CONCLUDING REMARKS
REFERENCES
ANNEX A, XMT


LIST OF TABLES
Table 1, SIL for low and high demand mode of operation (IEC 61508-1, 2002)
Table 2, SIL for type A subsystem (IEC 61508-2)
Table 3, SIL for type B subsystem (IEC 61508-2)
Table 4, Data for the system test example
Table 5, Unavailability at time t of a single component under imperfect test conditions
Table 6, PFD average differences between perfect and imperfect tests
Table 7, Matrix for SIL rating sensitivity due to imperfect testing
Table 8, Dangerous failure modes and test strategy for a safety gate valve (adapted from Summers & Zachary 2000a, McCrea-Steele 2006, KOP 2002, Bak 2007 and ATV 2007)
Table 9, Reliability data as basis for PST coverage estimation (adapted from Lundteigen & Rausand, 2007)
Table 10, PFD related to diverse PST coverages, test intervals and (im)perfect testing
Table 11, Morvin HIPPS requirements (Statoil, 2007a)
Table 12, Morvin HIPPS case data

LIST OF FIGURES
Figure 1, IEC 61508 safety lifecycle (IEC 61508)
Figure 2, Sketch of a simple SIS (Rausand & Høyland, 2004)
Figure 3, Allocation of SIL (IEC 61508-1)
Figure 4, Risk reduction (IEC 61508)
Figure 5, Failure mode classification (IEC 61508)
Figure 6, Fractions of different types of failures for a system with two components
Figure 7, SIS design requirements
Figure 8, Wellhead and XMT (OREDA, 2002)
Figure 9, Horizontal XMT (AKS, 2007)
Figure 10, Gate valve with actuator (Ring-O, 2007)
Figure 11, Up and down time related to tests
Figure 12, Causes for imperfect testing of subsea safety valves
Figure 13, Reliability block diagram of a simple SIS
Figure 14, Contribution to unavailability (PDS Method, 2006)
Figure 15, Sketch of the PFD impact with PTIF addition (case A)
Figure 16, Unavailability under imperfect test condition, case A
Figure 17, Sketch of the PFD impact with imperfect test addition (case B)
Figure 18, Series structure when imperfect test of a component
Figure 19, Unavailability under imperfect test condition, case B
Figure 20, Unavailability for different failure rates under imperfect testing
Figure 21, The M-factors' contribution to the imperfect test addition
Figure 22, Unavailability with decreasing PTIF addition (case C1)
Figure 23, Unavailability with decreasing imperfect test addition (case C2)
Figure 24, PFD results from case studies on imperfect testing
Figure 25, PST impact on the PFD (Lundteigen & Rausand, 2007)
Figure 26, Simple SIS with PST implementation (adapted from McCrea-Steele, 2006)
Figure 27, Overview of relevant failure rates (Lundteigen & Rausand, 2007)
Figure 28, Bayesian belief network for ST during PST
Figure 29, Unavailability with PST
Figure 30, Unavailability with PST and PTIF addition
Figure 31, Unavailability with PST and imperfect testing
Figure 32, HIPPS schematic (KOP, 2004)
Figure 33, HIPPS reliability block diagram for Morvin field development
Figure 34, PFD results for the different calculation approaches


TERMS AND ABBREVIATIONS

ALARP      As Low As is Reasonably Practicable
CCF        Common Cause Failures
CSU        Critical Safety Unavailability
DC         Diagnostic Coverage
DD         Dangerous Detected
DU         Dangerous Undetected
DOP        Delayed Operation
E/E/PE(S)  Electrical/Electronic/Programmable Electronic (System)
ELP        External Leakage in closed Position
ESD        Emergency Shut Down
FT         Function Test
FTC        Fail To Close
FME(C)A    Failure Mode Effect and (Criticality) Analysis
HAZOP      Hazard and Operability Analysis
HFT        Hardware Fault Tolerance
HIPPS      High Integrity Pressure Protection System
HP/HT      High Pressure / High Temperature
HSE        Health, Safety and Environment
IEC        International Electrotechnical Commission
ISO        International Organization for Standardization
KOP        Kværner Oilfield Products
LCP        Leakage in Closed Position
OLF        The Norwegian Oil Industry Association
OREDA      Offshore Reliability Data
PDS        Reliability of computer-based safety systems (Norwegian abbreviation)
PFD        Probability of Failure on Demand
PMV        Production Master Valve
PWV        Production Wing Valve
PST        Partial Stroke Testing
PT         Proof Test
RBD        Reliability Block Diagram
ROV        Remotely Operated Vehicle
SCSSV      Surface Controlled Subsurface Safety Valve
SD         Safe Detected
SFF        Safe Failure Fraction
SIF        Safety Instrumented Function
SIL        Safety Integrity Level
SIS        Safety Instrumented System
SRS        Safety Requirement Specification
ST         Spurious Trip
SU         Safe Undetected
XMT        X-mas Tree


1 Introduction

1.1 Background
As safety instrumented systems are becoming increasingly important within petroleum
production, there is a need for a good understanding of the assumptions and simplifications
that form the basis for the assessment. Reliability calculations of subsea production systems
are usually based on the IEC 61508 (2002) approach, utilizing OREDA (2002) data. A basic
assumption is that after test and repair the system is as good as new, meaning that the system
unavailability is reduced to zero. As the demand for continuous production extends the test
intervals further every time, it is increasingly important to study these aspects more
profoundly. Are the safety levels as good as claimed? Is it required to change the calculation
method in order to reflect reality?
The topic of this thesis has been developed in cooperation with Marvin Rausand at NTNU
and Thor Ketil Hallan at Aker Kværner Subsea, as it is of interest to both parties.
The intended audience is readers with knowledge of reliability theory, and it is recommended
that the reader is familiar with the concepts described in the book System Reliability Theory
by Rausand and Høyland (second edition, 2004). Basic knowledge of subsea petroleum
production is also an advantage.

1.2 Objectives
The main objectives have been to:
- Study the IEC 61508 (2002) and IEC 61511 (2004) standards and the OLF 070 guideline (2004)
- Describe the causes of imperfect tests
- Estimate the impact of imperfect tests on the probability of failure on demand
- Describe partial stroke testing (PST)
- Estimate the impact of PST on the probability of failure on demand
- Perform a case study from one of Aker Kværner Subsea's actual projects

1.3 Delimitations
The time scope for carrying out the master thesis is set to 20 weeks; hence there has been a
need to choose more specific topics to assess. Since the author has been present at Aker
Kværner Subsea during the thesis period, the focus is on reliability in subsea petroleum
production. Because of the author's special interest in safety topics, the reliability assessment
is limited to the safety reliability (availability), excluding the production reliability.
The purpose of the case study is to relate the results to an actual field. The weight of the
thesis lies in the assessment of PST and imperfect testing.

1.4 Scientific approach

The information gathering process has basically been carried out through literature research
and feedback from experts. In order to learn about subsea systems in general, and Aker
Kværner Subsea products in particular, the author has been present at the Aker Kværner
Subsea facilities during the whole semester, getting input from the engineers day by day. This
has been significant for the progress of the thesis. The author's knowledge of subsea systems
was limited prior to the thesis start-up, and as a consequence a substantial effort was put into
getting familiarized with the topic. The meetings with valve suppliers and external
professionals with experience from the field gave valuable insight into subsea petroleum
production.
As the topic of the thesis is closely connected to the OLF 070 guideline (2004) and the IEC
61508 (2002) and IEC 61511 (2004) standards, it was natural to study these. Further, an
extensive search for literature has been performed, as the concept of partial stroke testing is
fairly new and imperfect testing is hardly discussed in the literature. In doing so, Engineering
Village and ScienceDirect were of great importance, in addition to the recommendations from
my supervisor. The search engine Google has also been utilized to find additional information.
The IEC 61508/61511 standards hardly mention imperfect testing of SIS, while the PDS
method (2006) has a different approach than the standards, making it interesting to develop
new concepts and approaches to testing. The progress and quality of the work have been
assured through discussions and feedback from my supervisor and other key personnel at the
university, as well as colleagues at Aker Kværner Subsea.


1.5 Structure of the report

The master thesis is structured as described below:

Chapter 1, Introduction: Presentation of the background of the master thesis topic, its objectives, delimitations and scientific approach.

Chapter 2, Theoretical framework: Introduction to IEC 61508/61511, the OLF 070 guideline (2004), the PDS method (2006), OREDA (2002) and exida (2003). Description of basic concepts: SIF, SIS, safety unavailability contributors, failure classification, failure modes and testing. Briefly about subsea X-mas trees and safety valves.

Chapter 3, Imperfect testing: Imperfect testing is defined, and its possible causes are described. Three cases are developed to illustrate possible effects of imperfect testing. A new method for quantifying imperfect tests is proposed.

Chapter 4, Partial stroke testing: Description of partial stroke testing and its advantages and disadvantages. Assessment of the rationale behind the partial stroke testing coverage factor. The correlation between partial stroke testing and spurious trips, and other factors influencing the partial stroke testing contribution, are described. Partial stroke testing is applied to the case results from chapter 3.

Chapter 5, Discussion: Discussion of the results of imperfect and partial stroke testing, the credibility of the data utilized and the need for further work.

Chapter 6, Case: The theories from the former chapters are applied to a real-life system: the Morvin field development.

Chapter 7, Concluding remarks: Concluding remarks regarding imperfect testing and partial stroke testing.


2 Theoretical framework
Subsea equipment is becoming the typical solution for the offshore petroleum industry as
production is moved to deeper and more demanding areas. A substantial increase in subsea
oil and gas production is expected in the years to come (Subseazone, 2007). At the same time,
there has been increased concern regarding health, safety and environment (HSE) among the
general public and governments in recent years, which has led to stricter legislation within
the field. Together with the high costs of subsea intervention, this gives the oil companies
incentives to achieve a high level of safety and reliability in their systems.
Safety instrumented systems (SIS) have become ever more common as a measure for
reducing risk. A SIS is designed to prevent, or mitigate, hazardous events that could harm
the system it is implemented to protect. Examples of hazards related to subsea production
are topside blowouts (possible personnel fatalities and material damage) and leakage to
water (environmental danger). One SIS can perform one or several safety instrumented
functions (SIF).
With this increased dependency on SIS to mitigate risk, it is crucial to be aware of the
assumptions and simplifications that form the basis for the reliability calculations. In the
following, short introductions to the important standards and guidelines within the field are
given, as well as a more thorough description of SIF.


2.1 Standards and guidelines

The IEC 61508 (2002) standard and the more process-specific IEC 61511 (2004) standard are
safety standards that state requirements for the use of SIF. OLF has developed a guideline
(2004) for the implementation of the two standards.

2.1.1 IEC 61508 & IEC 61511

The IEC 61508 shall be applied whenever there is a possibility that E/E/PE technologies
might be used, (…) so that the functional safety requirements for any E/E/PE safety-related
systems are determined in a methodical, risk-based manner (IEC, 2002). If the hardware
device has already been proven in use, IEC 61511 can be followed, as this standard focuses on
the integration of such components. Note that E/E/PE (Electrical/Electronic/
Programmable Electronic) safety-related systems are referred to as SIS (safety instrumented
systems) throughout this thesis.
IEC 61508-6 states that the overall goal is to ensure that plant and equipment can be safely
automated. A key objective of this standard is to prevent failures of control systems triggering
other events, which in turn could lead to danger, and to prevent undetected failures in
protection systems, making the systems unavailable when needed for a safety action.
The IEC 61508 standard is divided into the following seven parts:

Part 1, General requirements: Specifies the requirements that are applicable to all parts. Introduces the safety life cycle perspective as the technical framework for the standard.

Part 2, Requirements for electrical/electronic/programmable electronic safety-related systems: Provides additional and more specific requirements for the hardware than the first part. Specifies the requirements for activities in the design and manufacturing phase.

Part 3, Software requirements: Provides additional and more specific requirements for the design and development of the software than the first part.

Part 4, Definitions and abbreviations: Lists all the definitions and abbreviations used throughout the standard.

Part 5, Examples of methods for the determination of safety integrity levels: Describes the underlying concepts of risk and gives methods for determining safety integrity levels.

Part 6, Guidelines on the application of IEC 61508-2 and IEC 61508-3: Gives a guideline for the application of part 2 and part 3 with examples and methods.

Part 7, Overview of techniques and measures: Gives an overview of techniques and measures for control of hardware failures and avoidance and control of systematic failures.


The IEC 61508 has a lifecycle approach for the SIS, as presented in Figure 1. Following the
steps in the lifecycle ensures that the SIF is achieved through a systematic approach to all
the necessary activities.

Figure 1, IEC 61508 Safety lifecycle (IEC 61508)

The system design complies with the IEC 61508 and IEC 61511 standards when the company
fulfils the requirements related to:

- Management of functional safety
- Safety lifecycle requirements
- Verification
- Process hazard and risk analysis
- Allocation of safety functions to protection layers
- SIS safety requirements specification
- SIS design and engineering
- Requirements for application software, including selection criteria for utility software
- Factory acceptance testing (FAT)
- SIS installation and commissioning
- SIS safety validation
- SIS operation and maintenance
- SIS modification
- SIS decommissioning
- Information and documentation requirements

The calculation of the reliability of SIF is only a small part of IEC 61508 compliance. Some of
the assumptions and simplifications made in the IEC standard related to reliability are
assessed in this thesis. Lundteigen & Rausand (2006) stated: The standards are not
prescriptive, which gives room for different interpretations, and hence opens up for new
methods, approaches and technology.
The implications the IEC 61508 standard has for SIS are described in more detail throughout
chapter 2.

2.1.2 OLF 070 guideline

The purpose of the OLF 070 guideline (2004) is to provide a simplified guideline for the
application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry. Note that
while the standards are risk-based, meaning that the users have to determine the risks
related to the system and on this basis state the required SIL of the SIF, the OLF 070
guideline provides minimum SIL requirements for the most common SIFs, giving the
guideline a different approach to assessing the SIS than intended in the IEC 61508
standard.

2.2 Reliability data sources

2.2.1 OREDA
OREDA (Offshore REliability DAta) (2002) collects and exchanges reliability data among the
participating companies and acts as a forum for co-ordination and management of
reliability data collection within the oil and gas industry (OREDA, 2007). It was initiated in
1981 and has issued four public editions of a Reliability Data Handbook (1984, -92, -97, -02).
In OREDA a failure is classified as either critical, degraded or incipient:
- Critical failure: a failure that causes immediate and complete loss of a system's
capability of providing its outputs.
- Degraded failure: a failure which is not critical, but which prevents the system from
providing its outputs within specifications. Such a failure would usually, but not
necessarily, be gradual or partial, and may develop into a critical failure in time.
- Incipient failure: a failure which does not immediately cause loss of a system's
capability of providing its output, but which, if not attended to, could result in a
critical or degraded failure in the near future.
A failure is assigned to one of these severity classes independent of the failure mode and
failure cause, meaning that a "leakage in closed position" failure mode can be found listed
both as a critical and as a degraded failure.

2.2.2 PDS
PDS is the Norwegian abbreviation for "reliability of computer-based safety systems".
SINTEF is the author of both a PDS Method Handbook and a PDS Data Handbook. The PDS
approach is described in the former, while the latter contains data dossiers for different
components. These are based on OREDA, but the project group has made some adjustments
and expert judgements to the figures. The PDS method is in line with the main principles
in the IEC 61508 standard, except for a somewhat different approach regarding failure
classification, modelling of common cause failures and the treatment of systematic failures
(PDS Method, 2006). Of special relevance for this thesis are the quantification of systematic
failures called PTIF (test independent failures, TIF), and the concept of Critical Safety
Unavailability (CSU), which in addition to the IEC 61508 approach to PFD calculation also
includes downtime due to test and repair.
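The CSU concept can be sketched as the IEC-style PFD term plus the PTIF addition plus the known downtime fraction due to test and repair. This is a simplified illustration; the exact PDS formulas differ in detail, and the values are illustrative:

```python
def csu(lambda_du, tau, p_tif, downtime_per_interval):
    """Critical Safety Unavailability sketch: PFD term + test
    independent failure probability + planned downtime fraction."""
    pfd = lambda_du * tau / 2.0
    downtime = downtime_per_interval / tau
    return pfd + p_tif + downtime

# Illustrative: annual proof test, 8 hours of test/repair downtime per interval
print(csu(1.0e-6, 8760, 1.0e-4, 8.0))
```

The sketch makes the difference from plain PFD visible: the PTIF and downtime terms are not reduced by testing more often.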


2.2.3 Exida
Exida (excellence in dependable automation) provides reliability data for use in the process
and machinery industries. The numbers are based on FMEDA data or exida's comprehensive
analysis with data from OREDA, PDS etc. The main reliability concepts and failure
classifications correspond to a great extent to those described in IEC 61508.


2.3 Safety Instrumented Functions


Within every industry there is a need to apply hazard identification methods such as HAZID, HAZOP and FMEA studies in order to decide the need for extra safety measures. Different measures may be implemented, among them the introduction of SIFs. This concept is thoroughly described in the following.

2.3.1 Main principles


SIFs are functions whose purpose is to achieve or maintain a safe state for the system. The Norwegian Facilities regulations § 7 state that a SIF has several purposes: it should detect abnormal circumstances, it should prevent such abnormal situations from escalating into a dangerous state, and it should mitigate damages in case of accidents. Lundteigen & Rausand (2006) describe a SIF even simpler: detect, decide, act. There are two concepts regarding SIFs: safety function requirements and safety integrity requirements. While the first states which SIFs the SIS shall perform, the latter concerns the likelihood that the SIS is able to perform the specific SIF satisfactorily within a stated period of time (IEC, 2002).
Any system, implemented in any technology, which carries out a SIF, is a SIS (IEC, 2002). A SIS covers all parts of the system that are required to carry out a SIF, and may consist of, for example, the subsystems sensors, control logic and communication systems, final actuator, and the critical actions of a human operator (IEC, 2002). In this thesis the actuating items are analogous with safety valves. Figure 2 is an illustration of a simple SIS.

Figure 2, Sketch of a simple SIS (Rausand & Høyland, 2004)

Examples of SISs are, among others, the emergency shut-down system in a hazardous chemical process plant; automobile indicator lights, anti-lock braking and engine-management systems; and remote monitoring, operation or programming of a network-enabled process plant (adapted from Rausand & Høyland, 2004).
The safety integrity level (SIL) specifies the safety integrity requirements of the SIF to be allocated to the SIS. It states the probability that the SIS will fail to perform the requested SIF upon demand, often referred to as the PFD (probability of failure on demand). The PFD may be interpreted in two ways: as the probability that the system will be in a dangerous failure mode upon demand, or as the fraction of time the system will be in a dangerous failure mode and not work as a SIF. In order to attain the requested SIL it is also required to avoid and
control systematic failures and to select the hardware configuration within the architectural
constraints. These requirements are further described in section 2.3.3.
The IEC 61508 standard divides the SIL into four levels, where the highest SIL rating states the lowest probability that the SIS will fail to perform the required SIF. Depending on whether it is a low or high/continuous demand mode of operation, the ranges of the levels differ as shown in Table 1. Low demand mode embraces systems where the frequency of demands for operation made on a safety-related system is no greater than one per year and no greater than twice the proof-test frequency (IEC 61508, 2002); otherwise it is classified as a high demand system. An example of a low demand application in subsea production is the safety valve, where the valve remains static until a demand occurs. An application in high demand mode can for example be the brake system in a car.
Table 1, SIL for low and high demand mode of operation (IEC 61508-1, 2002)

Safety integrity level | Low demand mode of operation (average probability of failure to perform its design function on demand) | High demand or continuous mode of operation (probability of a dangerous failure per hour)
4                      | 10^-5 to < 10^-4                                                                                       | 10^-9 to < 10^-8
3                      | 10^-4 to < 10^-3                                                                                       | 10^-8 to < 10^-7
2                      | 10^-3 to < 10^-2                                                                                       | 10^-7 to < 10^-6
1                      | 10^-2 to < 10^-1                                                                                       | 10^-6 to < 10^-5

Note that one SIS may perform several SIFs, and that the reliability assessments are done for each SIF and not for the SIS. The SIL rating is often required to be within the midpoint of each level to be considered good enough, meaning that the PFD has to be less than 5.0 · 10^-4 to meet the SIL 3 rating.
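As a small sketch, the low demand bands in Table 1 can be expressed as a lookup. The function name and the handling of very small PFDs are our own choices, not part of the standard:

```python
# Map an average PFD (low demand mode) to the corresponding SIL band of Table 1.
# PFDs below 1e-5 are reported as SIL 4 here, since no higher level exists.

def sil_from_pfd(pfd):
    if pfd < 1e-4:
        return 4
    if pfd < 1e-3:
        return 3
    if pfd < 1e-2:
        return 2
    if pfd < 1e-1:
        return 1
    return 0  # too unreliable for any SIL band

print(sil_from_pfd(5.0e-4))  # 3
print(sil_from_pfd(2.0e-1))  # 0
```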


Two qualitative methods to determine the required SIL are presented in the IEC 61508 standard: the risk graph and the hazardous event severity matrix. This is basically done by assessing the probabilities, frequencies and consequences of certain events, whereupon the need for risk reduction by a SIF is decided.
In order to realize the SIL target it is necessary to allocate the safety integrity requirements to each SIF, and subsequently obtain the design requirements for the SIS, as shown in Figure 3. Note that each of the subelements sensor, logic and final elements has to achieve the required SIL rating in order to achieve the overall SIL requirement. The oil companies often require a certain SIL rating of the equipment they purchase, making it the manufacturer's responsibility, or challenge, to design the system to meet the requirements.


Figure 3, Allocation of SIL (IEC 61508-1)

The IEC 61508 standard, the Norwegian Activities regulations §§ 1-2 and the Haddon energy model (1973) all have the same philosophy regarding barriers. In order to reduce risk, the priority should be to apply measures in the design, as this is the best way to eliminate the hazards. If this does not reduce the risk to the tolerable region (cf. the ALARP principle as described in IEC 61508-5 and the Norwegian Framework regulations § 9), barriers are introduced in order to prevent or mitigate impact on people, the environment and/or material assets. A SIS can be such a barrier, as shown in Figure 4. Further reduction of risk than strictly necessary in order to come within the acceptable area should be done as long as this is economically reasonable.

Figure 4, Risk reduction (IEC 61508)

Note that introducing a SIS is only one measure among others. It is equally important to
introduce other barriers.


2.3.2 Availability of SIF


Dangerous and safe failures
If the SIS fails to perform the intended SIF the system is brought to a fault state. The failure that causes the fault can be either dangerous or safe. A dangerous failure is defined as a failure which has the potential to put the safety-related system in a hazardous or fail-to-function state. Some of these failures can be revealed at an early stage through testing or by coincidence by personnel, while others remain undetected. It is differentiated between dangerous detected and dangerous undetected failures, and safe detected and safe undetected failures, as illustrated in Figure 5.

Figure 5, Failure mode classification (IEC 61508)

Examples of dangerous detected failures are those revealed by diagnostic testing. On the other hand, a dangerous undetected failure is a failure not revealed before a proof test or by a demand. These failures are important to discover as soon as possible. Representative for a safe failure is a spurious trip, which is for example that the safety valve closes without a real demand. Note that classifying a failure according to these classes is not always straightforward, and the classes can easily be interpreted differently among users.
Random hardware failures and systematic failures
Another way of classifying the failures is to differentiate between random hardware
(physical) failures due to aging and stress, and systematic (non-physical) failures due to
design and interaction (adapted from IEC 61508);

Random hardware failures: failures occurring at random times, resulting from a variety of degradation mechanisms in the hardware. Usually, only degradation mechanisms arising from conditions within the design envelope (natural conditions) are considered as random hardware failures. System failure due to such failures can be quantified with reasonable accuracy.

Systematic failures: failures that are related in a deterministic way to a certain cause, which can only be eliminated by modification of the design, manufacturing process, operational procedures, documentation or other relevant factors. Design faults and maintenance procedure deficiencies are examples of causes that may lead to systematic failures. System failure due to systematic failures cannot be easily predicted.


It is of great interest to assess the unavailability of the safety system. It is only the dangerous undetected failures that form the basis for the PFD calculation. With τ as the proof test interval and F(t) the distribution function of the time T to a DU failure, the safety unavailability is expressed by the PFD as given below:

q(t) = Pr(a DU failure has occurred at, or before, time t) = Pr(T ≤ t) = F(t)

PFD = (1/τ) ∫[0,τ] q(t) dt = (1/τ) ∫[0,τ] F(t) dt
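Under the common assumption of exponentially distributed time to failure, the integral above has a closed form, and for small λτ it is close to the familiar approximation λτ/2. A minimal numerical sketch (the rate and interval values are illustrative only):

```python
from math import exp

# PFD = (1/tau) * integral over [0, tau] of F(t) dt, with F(t) = 1 - exp(-lam*t).
# The closed-form result is 1 - (1 - exp(-lam*tau)) / (lam*tau).

def pfd_exact(lam, tau):
    return 1.0 - (1.0 - exp(-lam * tau)) / (lam * tau)

lam = 1.0e-7   # dangerous undetected failure rate per hour (example value)
tau = 8760.0   # proof test interval, one year in hours

print(pfd_exact(lam, tau))   # close to lam*tau/2
print(lam * tau / 2.0)       # the common approximation
```

The exact value is slightly below λτ/2, which is why the approximation is conservative for a single component.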

Calculation of safety unavailability is further explained in section 2.3.3, while detecting failures through testing is treated in section 2.5 and further discussed in chapters 3 and 4.
Failure modes
The failure mode describes the various abnormal states of an equipment unit, and the
possible transition from correct to incorrect state (OREDA, 2002). The main failure modes of
the safety valve are (adapted from Lundteigen & Rausand 2007 and Rausand & Hyland,
2004):
FTC = Fail to close on demand
LCP = Leakage (through the valve) in closed position
ELP = External leakage in closed position
DOP = Delayed operation
ST = Spurious trip
FTO = Fail to open on command
When valves are designed to stop the flow (as emergency shutdown valves), the FTC and LCP failure modes can be classified as dangerous failures, since the purpose of the valves is not fulfilled (Rausand & Høyland, 2004). The valves are designed to close within a specific amount of time, and if this demand is not fulfilled it is classified as a DOP failure mode, which is a dangerous failure. It is considered to be an ELP failure when there is a leakage to the exterior while the valve is in closed position, and this is also classified as a dangerous failure.
As already mentioned, these failures can be classified both as critical and as degraded failures in OREDA, leaving it up to the user how to interpret the data. In a safety perspective, the ST and FTO failure modes do not imply danger, since for safety valves these failure modes correspond to their safe position (closed).
Common cause failures and spurious trips
In order to make a system more failure resistant, a common approach is to introduce redundancy. There are two aspects that may reduce this benefit: spurious trips and common cause failures. Redundancy for safety valves is often obtained by placing two valves in series, meaning that if one valve fails to close, the other can shut down the system instead. As it takes only one valve to shut down the system, redundancy could lead to a higher spurious trip rate. For sensors this can be solved, or at least minimized, by introducing voting. Voting is when, for example, 2 out of 3 sensors are required to give the order to close the valves, thus removing the possibility that one single sensor can command the valves to close. A similar solution is not possible for valves.


Common cause failures (CCF) may bring down both safety valves at the same time, reducing the potential positive effect of introducing redundancy to the system. By the IEC 61508 definition a common cause failure is a failure, which is the result of one or more events, causing coincident failures of two or more separate channels in a multiple channel system, leading to system failure. The β-factor model is one way of describing common cause failures quantitatively, where the β-factor gives the fraction of common cause failures among all failures of a component:

Pr(Common cause failure | Failure) = β

This is illustrated in Figure 6. Rausand & Høyland (2004) give more details about the β-factor model and other alternative models.
Figure 6, Fractions of different types of failures for a system with two components

Goble (2003) states that three principles should be followed in order to avoid CCF:
1. Reduce the chance of a common stress: physical separation and electrical separation of redundant units.
2. Respond differently to a common stress: redundant units should use diverse technology/mechanisms.
3. Increased strength against all failures.
A method for estimating the common cause β-factor is provided in IEC 61508-6, or the maximum values given in Table D.4 in IEC 61508-6 can be used directly.
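Following the β-factor model and Figure 6, a component's dangerous failure rate can be split into an independent part, (1−β)λ, and a common cause part, βλ. A sketch with illustrative values:

```python
# beta-factor model: split a component's dangerous failure rate into an
# independent contribution, (1 - beta)*lam, and a common cause contribution,
# beta*lam, shared between the redundant channels.

def split_rate(lam, beta):
    independent = (1.0 - beta) * lam
    common = beta * lam
    return independent, common

lam = 1.0e-7    # dangerous undetected failure rate per hour (example value)
beta = 0.02     # assumed fraction of common cause failures

ind, ccf = split_rate(lam, beta)
print(ind, ccf)
```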


2.3.3 SIS requirements


There are several requirements in IEC 61508 related to the design of a SIS. As illustrated in
Figure 7 these requirements can be classified related to hardware safety integrity, the system
behaviour on detection of fault and systematic safety integrity. Safety integrity is the
probability that the SIS performs the required SIF under all the stated conditions within a
stated period of time (IEC 61508).

Figure 7, SIS design requirements

Hardware safety integrity


In order to avoid the effects of random hardware failures there are certain architectural constraints that limit the designer's freedom on how the hardware may be configured (Lundteigen and Rausand, 2006). Two concepts are related to the architectural constraints: safe failure fraction (SFF) and hardware fault tolerance (HFT). The SFF can be interpreted in two ways: one is as the fraction of failures considered as safe versus the total failure rate; the other is as the fraction of failures not leading to a dangerous failure of the SIF (op. cit.).

SFF = (λS + λDD) / λTOT = (λS + λDD) / (λS + λDD + λDU)

λS = rate of safe failures
λDD = rate of dangerous detected failures
λDU = rate of dangerous undetected failures
λTOT = total rate of dangerous and safe failures

Subsequently, the SFF can be increased by detecting more of the dangerous undetected failures and classifying them as dangerous detected.
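The SFF, and the related diagnostic coverage (DC) treated in section 2.5, follow directly from the failure rates. A sketch with illustrative rates (not field data):

```python
# Safe failure fraction (SFF) and diagnostic coverage (DC) from failure rates.

def sff(lam_s, lam_dd, lam_du):
    # SFF = (lam_S + lam_DD) / (lam_S + lam_DD + lam_DU)
    return (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)

def dc(lam_dd, lam_du):
    # fraction of dangerous failures discovered by diagnostic testing
    return lam_dd / (lam_dd + lam_du)

lam_s, lam_dd, lam_du = 2.0e-7, 1.0e-7, 1.0e-7   # example rates per hour

print(sff(lam_s, lam_dd, lam_du))   # 0.75
print(dc(lam_dd, lam_du))           # 0.5
```

Moving rate from λDU to λDD (better diagnostics) raises both quantities, which is the mechanism described in the sentence above.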
The HFT is a measure of how many of the components can fail without the subsystem losing the ability to perform the safety function. A 1oo2 and a 2oo3 architecture have an HFT of 1, while a 1oo3 system has an HFT of 2.
The highest SIL that can be claimed for a safety function is limited by the HFT and the SFF of the subsystems that carry out the safety function (IEC 61508-2). It is differentiated between type A and type B subsystems. Simplified, a subsystem is regarded to be of type B when it consists of one or more components that have uncertainty regarding failure data/modes or uncertain behaviour in a fault mode; otherwise the subsystem is of type A. A safety valve is normally defined as a type A subsystem (Lundteigen and Rausand, 2007). Table 2 gives the attainable SIL rating under these constraints for type A subsystems.
Table 2, SIL for type A subsystem (IEC 61508-2)

Safe failure fraction | HFT 0 | HFT 1 | HFT 2
< 60%                 | SIL 1 | SIL 2 | SIL 3
60% - < 90%           | SIL 2 | SIL 3 | SIL 4
90% - < 99%           | SIL 3 | SIL 4 | SIL 4
≥ 99%                 | SIL 3 | SIL 4 | SIL 4
The attainable SIL rating for type B subsystems is given in Table 3.


Table 3, SIL for type B subsystem (IEC 61508-2)

Safe failure fraction | HFT 0       | HFT 1 | HFT 2
< 60%                 | Not allowed | SIL 1 | SIL 2
60% - < 90%           | SIL 1       | SIL 2 | SIL 3
90% - < 99%           | SIL 2       | SIL 3 | SIL 4
≥ 99%                 | SIL 3       | SIL 4 | SIL 4

Because of the difficulties of achieving and maintaining a SIL 4 throughout the safety
lifecycle, applications which require the use of a single SIF with SIL 4 should be avoided
where reasonably practicable (IEC 61511-1).
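Tables 2 and 3 amount to a simple lookup from SFF band and HFT to the maximum claimable SIL. A sketch (function and table names are ours; 0 encodes "not allowed"):

```python
# Maximum attainable SIL per the architectural constraints of IEC 61508-2.
# Rows: SFF bands (<60%, 60-<90%, 90-<99%, >=99%); columns: HFT 0, 1, 2.

TYPE_A = [[1, 2, 3],
          [2, 3, 4],
          [3, 4, 4],
          [3, 4, 4]]

TYPE_B = [[0, 1, 2],   # 0 = not allowed
          [1, 2, 3],
          [2, 3, 4],
          [3, 4, 4]]

def sff_band(sff):
    if sff < 0.60:
        return 0
    if sff < 0.90:
        return 1
    if sff < 0.99:
        return 2
    return 3

def max_sil(sff, hft, subsystem_type="A"):
    table = TYPE_A if subsystem_type == "A" else TYPE_B
    return table[sff_band(sff)][min(hft, 2)]

# A 1oo2 safety valve subsystem (type A, HFT = 1) with an assumed SFF of 75 %:
print(max_sil(0.75, 1, "A"))   # 3
```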
Related to hardware safety integrity there are also requirements for the PFD. The calculations should take into account the system architecture, dangerous failures undetected/detected by diagnostic tests, susceptibility to common cause failures, diagnostic coverage, test intervals, repair times etc. The calculations should be done for each subelement, which gives the following formula for the SIS in Figure 2:

PFD_SYS = PFD_D + PFD_L + PFD_AI

where the PFD of each subelement is given by PFD = (1/τ) ∫[0,τ] q(t) dt = (1/τ) ∫[0,τ] F(t) dt, as presented in the last section.

System behaviour on detection of fault


The requirements to the system behaviour on detection of fault are to specify an action to
achieve or maintain a safe state, or to assure a safe operation while repairs are carried out.


Systematic safety integrity


The systematic safety integrity is related to evidence proven in use, avoidance of failures and
control of such failures. Evidence of proven in use is basically adequate documentation that
the likelihood of any failure of the subsystem in the SIS is low enough in order to achieve the
required SIL for the SIF.
The requirements for avoidance of failures embrace the measures for preventing the introduction of faults during design and development of the SIS hardware. These requirements only have to be applied to those systems which have not yet been proven in use.
The requirements for the control of systematic failures emphasize that the design process
shall make the SIS tolerant against residual design faults in the hardware and software,
environmental stresses and mistakes made by the operator of the equipment under control.
The maintainability and testability shall be considered already in the design and development phase. Annexes A and B of IEC 61508-2 give techniques for how to avoid and control systematic failures.


2.4 SIS applied in subsea XMT


An example of a SIS in subsea installations is the safety valves in the X-mas Tree (XMT). The XMT itself is regarded as a secondary well barrier, while the surface controlled subsurface safety valve (SCSSV) is regarded as one of the primary well barriers.
The XMT is placed onto a wellhead on the seabed. Basically the XMT consists of a range of valves and measurement instruments. Its functions are to be a connection point between the well and the flowlines, to give the opportunity to shut down the well in case of emergency, to guide the flow, and to provide facilities to control the well. The XMT can lead the flow directly or indirectly through a manifold onshore/topside. Figure 8 shows a schematic of an XMT and wellhead. Note the location of the Production Master Valve (PMV) and the Production Wing Valve (PWV), as these are considered to be the most important valves to close in an emergency situation. They are typically safety gate valves. Testing of these valves is discussed in chapters 3 and 4.

Figure 8, Wellhead and XMT (OREDA, 2002)


Two common types of XMT are the conventional (dual bore) and the horizontal. One of the main advantages of choosing a conventional XMT is that the tree can be retrieved without removing the tubing hanger and the tubing (Sangesland, 2007). In a horizontal XMT the diameter of the tubing can be larger, and the tubing can be replaced without retrieving the tree. The stack-up height of the XMT including the BOP may otherwise be difficult to handle on a conventional drilling vessel. On the other hand, retrieval of a horizontal tree implies retrieval of the tubing. Illustrations of both the horizontal and the conventional tree are given in Annex A. Figure 9 shows an example of a horizontal XMT.

Figure 9, Horizontal XMT (AKS, 2007)

The gate valve is normally preferred as a safety valve over a ball valve. The ball valve requires a rotational force in addition to the vertical movement, while gate valves normally have a lower internal leakage and a shorter stem travel. Closing is assisted by pressure in the bore cavity pushing the stem out of the valve cavity. The actuator spring is also designed to close the valve without pressure in the system. In Figure 10 a gate valve is shown in closed (left) and open (right) position. Note that this is only one of many solutions.

Figure 10, Gate valve with actuator (Ring-O, 2007)

Note that for fail safe gate valves the hole in the gate is on the upper part, thus when the valve
is de-energized it will shift upwards and close. The valve will close whenever loss of electric
and/or hydraulic power is detected.


2.5 Testing of the SIS ability to perform the SIF


IEC 61508 stresses the importance of considering the testability already in the design phase
of a system. In order to maintain the required SIL it is necessary to perform tests to assure
that the equipment is working as desired. If safety system devices are not tested the
dangerous failures reveal themselves when a process demand occurs, often resulting in the
unsafe event that the safety system was designed to prevent (Summers, 2000b). Such tests
are performed both before and during installation, but the testing during the production life
is of great importance. Several methods and philosophies exist on this matter today.
A proof test is a periodic test performed to detect failures in a safety-related system so that, if necessary, the system can be restored to an "as new" condition or as close as practical to this condition (IEC 61508, 2002). As a proof test requires production shut-down, other measures have been introduced that offer online testing: diagnostic tests and partial stroke tests. The logic solver in the SIS is often programmable, and may carry out diagnostic self-testing during operation. This is done by the logic solver sending frequent signals to the detectors and to the actuating items, and comparing the responses with predefined values (Rausand & Høyland, 2004). Since there is no explicit definition of diagnostic testing in IEC 61508, the interpretation of Velten-Philipp and Houtermans (2004) is used: a test is a diagnostic test if it fulfils the following three criteria;
1. It is carried out automatically and frequently.
2. The test is used to find failures that can prevent the safety function from being
available.
3. The system automatically acts upon the results of the test.
Partial stroke testing (PST) is, when implemented, normally applied to valves. The test is conducted by simply stroking the valve partially to check that it is not stuck, thus revealing hidden dangerous failures. It is not done as frequently as diagnostic testing, and depending on the chosen system it is not necessarily performed automatically. Hence PST does not fulfil the criteria for being classified as a diagnostic test.
A function test is not defined by the IEC 61508/61511 standards, but for a valve it often implies a full stroke test. This can be interpreted as the function test simply confirming that the valve can close, not whether it seals. Since the standard defines proof tests as a measure for restoring the system to an "as new" condition, reducing the unavailability to zero, it must be assumed that such tests embrace a wider range of test methods in order to be capable of discovering all the failure modes. It is noticed that the literature seldom distinguishes clearly between proof testing and function testing.
The proof testing is only done at certain intervals since it demands full shutdown of the production. Yet, this test is important because it reveals some failure modes that cannot be detected through the diagnostic self-testing or PST. The diagnostic coverage (DC), which is the fraction of dangerous failures that are discovered by diagnostic testing relative to the total number of dangerous failures (adapted from IEC 61508), may differ depending on the system


in question and the chosen test approach. The proof test itself may, however, also be incomplete and may be considered as an imperfect test. This is further elaborated in the next chapter. Figure 11 shows the relationship between the time concepts and the tests. Rausand & Høyland (2004) give a thorough description of these concepts.

A - The system is taken down in order to perform a full proof test.
B - The system is already down when the full proof test is performed and reveals the dangerous undetected failure.
C - Diagnostic or partial testing reveals the failure before the scheduled proof test, thus reducing the time of the undetected dangerous failure.
Figure 11, Up and down time related to tests

A high level of diagnostic coverage has been developed for the sensors and logic, and together with redundancy this has reduced their contribution to the PFD (Metso Automation, 2002), leaving the actuating items/final elements as the greatest contributor.
Because of the disturbances testing imposes on the production, the risks associated with the testing itself, and the restart after the test is finalized, it is preferred to test as seldom as possible. Hence there is a need to optimize the test intervals to maintain both the safety and the production interests; that is, to assure as high safety availability as possible without introducing additional production downtime. The causes and consequences of imperfect testing, and partial stroke testing, are further discussed in chapters 3 and 4.

3 Imperfect testing

In the IEC 61508 standard it is assumed that after a proof test the component is as good as new. For the proof test to be fully effective this means that it is necessary to detect 100 % of all dangerous failures, reducing the unavailability to zero. This may not be feasible. An imperfect test situation may be defined as a situation where the test does not discover all dangerous failures, and subsequently some component unavailability remains. It can be claimed that there are two possible classifications of an imperfect test situation:
1. The test does not cover all possible failures: inadequate test method.
2. The test does not detect all the failures: unsuccessful test.
Hence the function test, the PST and the diagnostic test can all be classified as imperfect tests since they do not cover all failure modes, while a proof test may be imperfect due to unsuccessful testing.
Since the focus in this thesis is upon testing, it is assumed that as long as all failures are discovered they can be repaired to an "as good as new" condition. Analogous to the definition of imperfect testing in this thesis, imperfect repair can be defined as the situation where the fault is not repaired perfectly or where it is chosen not to repair the failure, as well as the lack of an adequate method for repairing the component. This can be the case when, for example, a leakage is considered minimal, or the repair of a somewhat delayed operation (DOP) is postponed until it is more significant. Rausand & Høyland (2004) give an introduction to imperfect repair processes.
This uncertainty related to the test quality is not included in the reliability calculations, and is neither discussed much in the IEC 61508/61511 standards nor in the literature in general. IEC 61508-6 briefly mentions the effects of a non-perfect proof test in Annex A (informative only). This topic is elaborated in the following, covering both the possible causes of imperfect testing and its impact on the PFD.

3.1 Causes for an imperfect test


The PDS method (2006) claims that there are three main contributors to loss of safety: unavailability due to dangerous undetected failures (consisting of random hardware failures), unavailability due to systematic failures, and unavailability due to known or planned downtime.
Planned downtime is of no significance in this context, but it is of great interest to assess the reasons why random hardware failures and systematic failures are not discovered through testing. One reason why failures are not discovered could be that the instruments needed to confirm the test do not exist. Another reason could be that the company wants to avoid putting stress onto the system; thus, instead of slamming the SCSSV shut as in a real demand situation, the test could be modelled as a controlled closing by closing the PWV in order to


create a static well. The fishbone diagram in Figure 12 gives additional causes for imperfect
testing of safety valves subsea.

Figure 12, Causes for imperfect testing of subsea safety valves

As described in Figure 12, the reasons for imperfect testing can be related to the attributes methods, materials, machines, milieu and manpower. The attribute "materials" covers the test equipment, "methods" the procedure and formalities around the testing, "machines" the subsea system itself, "milieu" the context of the system, and "manpower" the managers and workers conducting the tests. As illustrated, it is obvious that human interference is an important reason for imperfect testing.
No data have been collected for the proportion of tests that can be claimed to be imperfect. A possible method for estimating the contribution of each of the M-factors described above is proposed in section 3.2.2.


3.2 Effects of an imperfect test


An imperfect test influences the estimation of the PFD since the overall unavailability of the system will be higher than when perfect testing is assumed. Three different interpretations of this situation are described in the following. Depending on the assumptions taken, the PFD addition can be expected to be constant over the interval, to increase continually, or to decrease.
Case A: a constant PFD addition (due to systematic failures not revealed)
Case B: an increasing PFD addition (due to random hardware failures not revealed)
Case C: a decreasing PFD addition
Case C depends on which approach is chosen, case A or case B, and is split into C1 and C2.
Using the simple SIS shown in Figure 2, with failure rates from the OLF 070 guideline (2004) as stated in Table 4, the impact of the imperfect tests on the PFD is estimated for the three approaches. In order to do this, a reliability block diagram is a good tool for describing the system. Note that the logic has a 1oo2 configuration, the sensors have 2oo3 redundancy, and the final elements have 1oo2 redundancy.

Figure 13, Reliability Block Diagram of a simple SIS


Table 4, Data for the system test example

Component      | Failure rate λDU (hours^-1) (OLF 070) | Common cause failure β-factor (PDS Data, 2006) | Test interval τ (hours) | PFD under perfect testing assumptions
Sensors (2oo3) | 3.0 · 10^-7                           | 3 %                                            | 8760                    | 3.94 · 10^-5
Logic (1oo2)   | 1.0 · 10^-7                           | 2 %                                            | 8760                    | 9.01 · 10^-6
Valve (1oo2)   | 1.0 · 10^-7                           | 2 %                                            | 8760                    | 9.01 · 10^-6
System         |                                       |                                                |                         | 5.74 · 10^-5

This result corresponds to a SIL 4 classification for the system. No diagnostic testing is assumed for the sensors or logic in this example, and the time required to test and repair the items is considered negligible. This may make the calculated result more optimistic than genuine field results.


Note that the HFT and SFF requirements are not considered in the case examples; only the PFD requirement for the SIL rating is assessed.
The PFD impact of an imperfect test depends on the type of failure that remains undetected. It is only the dangerous undetected failures that should be included in the calculations. Using the simplified equations for the PFDs, the PFD for the system is:

PFD_SYS = PFD_D + PFD_L + PFD_AI = PFD_2oo3 + PFD_1oo2 + PFD_1oo2

PFD_SYS ≈ [(1-β)·λDU·τ]^2 + β·λDU·τ/2        (sensors, 2oo3)
        + [(1-β)·λDU·τ]^2/3 + β·λDU·τ/2      (logic, 1oo2)
        + [(1-β)·λDU·τ]^2/3 + β·λDU·τ/2      (valves, 1oo2)

β = common cause failure factor
τ = test interval
λDU = dangerous undetected failure rate (each subsystem has its own λDU and β as given in Table 4)

These simplified equations can be used when λDU·τ is small (< 0.1), and are often used in practical calculations. The approximation is conservative, which means that the approximated value is always greater than the correct value (Rausand & Høyland, 2004).
In the calculations of the system test example in the next sections, the PFD is chosen to be
calculated for 20 years because the XMTs are often required to have a life span of at least 20
years. It is assumed that the XMT is not retrieved and overhauled during this period. In order
to estimate the impact of PFD imperfect testing of the valves to the test-system, it is
necessary to assess the valves PFD first and then use the result in the system test example at
the end of each section.
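The base-case calculation can be sketched as follows, using the simplified formulas above and the data in Table 4. Function and variable names are illustrative, not from the thesis. Note that the full 2oo3 expression carries the independent-failure term [(1 − β)·λDU·τ]² in addition to the common cause term, so the total lands slightly above the 5.74·10⁻⁵ quoted in Table 4, which appears to retain only the common cause term for the sensor group.

```python
# Sketch of the base-case PFD calculation for the example SIS (Figure 13),
# with the simplified formulas from Rausand & Hoyland (2004) and Table 4 data.

def pfd_2oo3(lam_du, beta, tau):
    # Independent-failure term plus common cause failure (CCF) term.
    return ((1 - beta) * lam_du * tau) ** 2 + beta * lam_du * tau / 2

def pfd_1oo2(lam_du, beta, tau):
    return ((1 - beta) * lam_du * tau) ** 2 / 3 + beta * lam_du * tau / 2

TAU = 8760  # one-year proof test interval (hours)

pfd_sensors = pfd_2oo3(3.0e-7, 0.03, TAU)
pfd_logic = pfd_1oo2(1.0e-7, 0.02, TAU)
pfd_valves = pfd_1oo2(1.0e-7, 0.02, TAU)
pfd_sys = pfd_sensors + pfd_logic + pfd_valves
```

The logic and valve contributions reproduce the 9.01·10⁻⁶ of Table 4.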


3.2.1 Case A, constant PFD addition


The PDS interpretation of imperfect testing is covered through the concept of Critical Safety Unavailability (CSU). CSU consists of both the PFD and the contribution from (systematic) Test Independent Failures (TIF). The PDS method is in line with IEC 61508, but differs regarding quantification of systematic failures. This contribution is not quantified in the IEC 61508 standard, as it may vary from application to application, but it is argued in the PDS Handbook (2006) that there are several reasons why it should be included.
One reason is to reflect the actual risk related to the operation; another is that failure rates are based on historic data, thus the rates already include the systematic failures to a certain extent. A third argument is that the systematic failures may be the dominant contributor and should not be excluded, and finally that the measures against systematic failures should be reflected in the quantitative failure rate estimate. This approach is supported by the OLF 070 guideline (2004). As illustrated in Figure 14, downtime due to testing and repair (DTUT and DTUR) also contributes to the safety unavailability, but these do not contribute to the CSU.

Figure 14, Contribution to unavailability (PDS method, 2006)

The definition of PTIF is "the probability that the module/system will fail to carry out its intended function due to a (latent) systematic failure not detectable by functional testing" (therefore the name test independent failure). The PTIF is assumed to be constant throughout the lifetime, and for extended testing (proof testing) of the valves, a value of PTIF = 1.0·10⁻⁵ is suggested. PDS does not give any details for this choice of value. The difficulty with detecting systematic failures is an example of an imperfect test due to inadequate test methods. This gives the following equation:

CSU = PFD + PTIF


This situation compared with the IEC 61508 approach is illustrated in Figure 15.

43

Partial and imperfect testing of SIF

Imperfect testing

Figure 15, Sketch of the PFD impact with PTIF addition (case A)

With failure rate DU = 1.0 10 7 hours 1 and PTIF= 1.0 10 5 , the unavailability for one
component for a lifecycle of 20 years is illustrated in Figure 16.

Figure 16, Unavailability under imperfect test condition case A

For this exact example the PFD for the imperfect test situation is PFD = 4.48·10⁻⁴, while a perfect test yields PFD = 4.38·10⁻⁴, a difference of 1.0·10⁻⁵, which is the PTIF addition. This addition does not lead to a change of the SIL rating for the component, as the PTIF is small.


Result system test example


With the data given in Table 4, the PFD of the test system is described below. Note that with a 1oo2 configuration for the safety valves, the TIF contribution becomes C1oo2·β·PTIF, where C1oo2 is the PDS configuration factor for the valve configuration.

PFD_SYS_PDS = CSU = PFD_SYS + C1oo2·β·PTIF

PFD_SYS_PDS = [(1 − β)·λDU,S·τ]² + (β·λDU,S·τ)/2 + [(1 − β)·λDU,L·τ]²/3 + (β·λDU,L·τ)/2 + [(1 − β)·λDU,V·τ]²/3 + (β·λDU,V·τ)/2 + C1oo2·β·PTIF

PFD_SYS_PDS = 5.74·10⁻⁵ + 1.0·0.02·1.0·10⁻⁴ = 5.94·10⁻⁵


This result differs marginally from the perfect testing situation, since the PTIF addition is relatively small. It does not lead to a change in the SIL rating.
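The case A addition can be sketched as a one-line calculation; the values are those used in the thesis system example, and the variable names are illustrative.

```python
# Sketch of the case A (PDS) calculation: the base-case system PFD from
# Table 4 plus a constant test independent failure (TIF) contribution for
# the 1oo2 valve pair.

PFD_SYS = 5.74e-5   # base-case system PFD (Table 4)
C_1OO2 = 1.0        # PDS configuration factor for the 1oo2 valves
BETA = 0.02         # common cause failure factor
P_TIF = 1.0e-4      # TIF probability used in the system example

csu = PFD_SYS + C_1OO2 * BETA * P_TIF  # -> 5.94e-5
```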


3.2.2 Case B, Increasing PFD addition


The second interpretation of imperfect testing is that there is an increasing PFD addition over the life span of the component due to unsuccessful proof testing, meaning that the unavailability is not reduced to zero after the test is conducted. This shifts the PFD upwards, since the overall unavailability of the system will be higher than when perfect testing is assumed, as illustrated in Figure 17. Note that the unavailability starts at zero, as the initial condition of the component is assumed to be perfect.

Figure 17, Sketch of the PFD impact with imperfect test addition (case B)

As a basis for calculating the PFD impact of imperfect testing, a component is divided into a series structure where one part is non-testable and the remaining part is testable. In order to function, both parts of the component have to function.

Figure 18, Series structure of a component under imperfect testing

The dangerous undetected failure rate is split into two parts depending on the imperfect test fraction, here named ξ:

λDU = λDU,NT + λDU,T
λDU,NT = ξ·λDU
λDU,T = (1 − ξ)·λDU

When the test interval for the testable part is τ = 8760 hours, and τNT = 175200 hours corresponds to the component life span of 20 years, the PFD for the component is described by:

PFD = (1 − ξ)·λDU·τ/2 + ξ·λDU·τNT/2

As the relation between the PFD and the test interval is a linear function, a shorter test interval reasonably leads to a smaller PFD. Because of the implications of proof testing, shortening the test intervals is not a desired solution for achieving the required SIL. For this reason the test interval is held fixed throughout the calculations.
In Figure 19 a sketch is drawn for the failure rate λDU = 1.0·10⁻⁷ hours⁻¹ over 20 years. It is assumed that the non-testable part is ξ = 20 %. This means that 80 % of the failure rate is set to zero after every proof test, while the remaining 20 % accumulates over the whole life span. As shown in Figure 19, the non-testable part increasingly overrides the testable part of the system.


Figure 19, Unavailability under imperfect test condition case B

The equation for this situation is as follows:

PFD ≈ ξ·λDU·τNT/2 + (1 − ξ)·λDU·τPT/2

PFD ≈ 0.2·(1.0·10⁻⁷·20·8760)/2 + (1 − 0.2)·(1.0·10⁻⁷·8760)/2 = 2.10·10⁻³

Note that the PFD average is actually an average of averages in order to illustrate the possible change in the SIL rating. For this exact example the PFD for the imperfect test situation is PFD = 2.10·10⁻³, while a perfect test yields PFD = 4.38·10⁻⁴, which gives a difference of 1.66·10⁻³.
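The case B split can be sketched as follows, using the values from the example above; the function name is illustrative.

```python
# Sketch of the case B calculation for a single valve: the dangerous
# undetected failure rate is split into a testable part (reset at every proof
# test) and a non-testable part that accumulates over the whole life span.

def pfd_case_b(lam_du, xi, tau, tau_nt):
    # xi: non-testable fraction; tau: proof test interval (hours);
    # tau_nt: interval of the non-testable part (here the 20-year life span).
    return xi * lam_du * tau_nt / 2 + (1 - xi) * lam_du * tau / 2

lam_du = 1.0e-7
tau = 8760           # 1 year
tau_nt = 20 * 8760   # 20-year life span

pfd_imperfect = pfd_case_b(lam_du, 0.2, tau, tau_nt)   # ~2.10e-3
pfd_perfect = lam_du * tau / 2                         # ~4.38e-4
diff = pfd_imperfect - pfd_perfect                     # ~1.66e-3
```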
The PFD impact of different combinations of the failure rate and percentage non-testable is given in Table 5. The unavailability is calculated for different failure rates, and the range from λDU = 1.0·10⁻⁸ hours⁻¹ to λDU = 1.0·10⁻⁶ hours⁻¹ is chosen, as this interval reflects the failure rate of a PMV in a subsea XMT. The non-testable part ranges from 10 % to 90 % in the calculations. For convenience it is assumed that the same failures remain undetected during the whole lifetime. To illustrate the PFD development, the accumulated PFD is shown for years 1, 10 and 20.
Table 5, Unavailability at time t of a single component under imperfect test conditions

Unavailability at time t with imperfect testing, for different component failure rates (hours⁻¹):

| % imperfect test | Year | λDU = 1.0·10⁻⁸ | λDU = 1.0·10⁻⁷ | λDU = 1.0·10⁻⁶ |
|---|---|---|---|---|
| 0 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 8.76·10⁻³ |
| | 10 | 8.70·10⁻⁵ | 8.70·10⁻⁴ | 8.66·10⁻³ |
| | 20 | 8.70·10⁻⁵ | 8.70·10⁻⁴ | 8.66·10⁻³ |
| 10 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 9.70·10⁻⁴ |
| | 10 | 1.66·10⁻⁴ | 1.66·10⁻³ | 1.66·10⁻² |
| | 20 | 2.54·10⁻⁴ | 2.54·10⁻³ | 2.53·10⁻² |
| 20 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 1.84·10⁻³ |
| | 10 | 2.46·10⁻⁴ | 2.46·10⁻³ | 2.44·10⁻² |
| | 20 | 4.22·10⁻⁴ | 4.21·10⁻³ | 4.15·10⁻² |
| 30 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 2.70·10⁻³ |
| | 10 | 3.25·10⁻⁴ | 3.25·10⁻³ | 3.21·10⁻² |
| | 20 | 5.89·10⁻⁴ | 5.88·10⁻³ | 5.75·10⁻² |
| 40 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 3.57·10⁻³ |
| | 10 | 4.05·10⁻⁴ | 4.04·10⁻³ | 3.98·10⁻² |
| | 20 | 7.56·10⁻⁴ | 7.54·10⁻³ | 7.32·10⁻² |
| 50 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 4.44·10⁻³ |
| | 10 | 4.84·10⁻⁴ | 4.83·10⁻³ | 4.74·10⁻² |
| | 20 | 9.24·10⁻⁴ | 9.20·10⁻³ | 8.86·10⁻² |
| 60 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 8.78·10⁻³ |
| | 10 | 5.63·10⁻⁴ | 5.62·10⁻³ | 5.50·10⁻² |
| | 20 | 1.09·10⁻³ | 1.09·10⁻² | 1.04·10⁻¹ |
| 70 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 8.78·10⁻³ |
| | 10 | 6.43·10⁻⁴ | 6.41·10⁻³ | 6.24·10⁻² |
| | 20 | 1.26·10⁻³ | 1.25·10⁻² | 1.19·10⁻¹ |
| 80 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 8.77·10⁻³ |
| | 10 | 7.22·10⁻⁴ | 7.20·10⁻³ | 6.98·10⁻² |
| | 20 | 1.43·10⁻³ | 1.42·10⁻² | 1.33·10⁻¹ |
| 90 % | 1 | 8.80·10⁻⁵ | 8.80·10⁻⁴ | 8.77·10⁻³ |
| | 10 | 8.01·10⁻⁴ | 7.99·10⁻³ | 7.71·10⁻² |
| | 20 | 1.59·10⁻³ | 1.58·10⁻² | 1.47·10⁻¹ |

In Table 6 the average differences between the perfect and imperfect test situations are given. The imperfect test situation yields higher average PFDs than perfect testing.


Table 6, PFD average differences between perfect and imperfect tests

| % Non-testable | λDU = 1.0·10⁻⁸ hours⁻¹ | λDU = 1.0·10⁻⁷ hours⁻¹ | λDU = 1.0·10⁻⁶ hours⁻¹ |
|---|---|---|---|
| 10 | 8.32·10⁻⁵ | 8.32·10⁻⁴ | 8.32·10⁻³ |
| 20 | 1.66·10⁻⁴ | 1.66·10⁻³ | 1.66·10⁻² |
| 30 | 2.50·10⁻⁴ | 2.50·10⁻³ | 2.50·10⁻² |
| 40 | 3.33·10⁻⁴ | 3.33·10⁻³ | 3.33·10⁻² |
| 50 | 4.16·10⁻⁴ | 4.16·10⁻³ | 4.16·10⁻² |
| 60 | 4.99·10⁻⁴ | 4.99·10⁻³ | 4.99·10⁻² |
| 70 | 5.83·10⁻⁴ | 5.83·10⁻³ | 5.83·10⁻² |
| 80 | 6.66·10⁻⁴ | 6.66·10⁻³ | 6.66·10⁻² |
| 90 | 7.49·10⁻⁴ | 7.49·10⁻³ | 7.49·10⁻² |
| PFD average differences (PFDdiff) | 4.16·10⁻⁴ | 4.16·10⁻³ | 4.16·10⁻² |
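The entries in Table 6 follow directly from the case B equation: subtracting the perfect-test average λDU·τ/2 from the imperfect-test average leaves ξ·λDU·(τNT − τ)/2. A minimal sketch, assuming τNT = 20·8760 hours as above (names illustrative):

```python
# Sketch reproducing the PFD average differences of Table 6: the difference
# between the imperfect and perfect test averages reduces to
# xi * lam_du * (tau_nt - tau) / 2.

def pfd_diff(lam_du, xi, tau=8760, tau_nt=20 * 8760):
    return xi * lam_du * (tau_nt - tau) / 2

# 10 % non-testable at lam_du = 1.0e-8 gives 8.32e-5, as in Table 6.
d_10 = pfd_diff(1.0e-8, 0.1)
# 50 % non-testable at lam_du = 1.0e-7 gives the 4.16e-3 used in Table 7.
d_50 = pfd_diff(1.0e-7, 0.5)
```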

As illustrated in Figure 20, the impact is greater when the failure rate is higher. For a component with a failure rate of λDU = 1.0·10⁻⁶ hours⁻¹, a high percentage of non-testable failures could potentially lead to a change of the SIL rating, as the result would tend towards the outer limit of the classification. Often the client requires the SIL to be in the midpoint of the range.


Figure 20, Unavailability for different failure rates under imperfect testing

Based on the PFD average differences given in Table 6, special care should be taken in cases with high failure rates for the valves, while for the lower failure rates the impact is not considered critical if the SIL requirement is low. If imperfect tests are to be included in the calculations, Table 7 indicates when this topic should be given attention.


Table 7, Matrix for SIL rating sensitivity due to imperfect testing

| SIL | PFDdiff = 4.16·10⁻⁴ (λDU = 1.0·10⁻⁸ hours⁻¹) | PFDdiff = 4.16·10⁻³ (λDU = 1.0·10⁻⁷ hours⁻¹) | PFDdiff = 4.16·10⁻² (λDU = 1.0·10⁻⁶ hours⁻¹) |
|---|---|---|---|
| 4 | … | … | … |
| 3 | … | … | … |
| 2 | … | … | … |
| 1 | … | … | … |

Red: the inclusion of the imperfect test addition leads to a change of SIL rating.
Yellow: dependent on the exact PFD calculation, the inclusion of the imperfect test addition could lead to a change of SIL rating.
Green: the inclusion of the imperfect test addition will not have an impact on the SIL rating.

As discussed at the beginning of this chapter, it may be hard to assess the exact percentage that remains untested after a proof test; hence using the imperfect test additions as proposed in Table 7 ensures that conservative estimates are made.
Note that these imperfect test additions apply to one component only, and for different architectures they need to be modified. For the system in question, where the valves have a 1oo2 configuration, the PFDdiff should be modified analogously to the PDS method (2006):

PFDdiff,1oo2 = β·PFDdiff

However, it has been argued (Bak, 2007) that constant factors as such are not motivating from a safety designer's point of view, since improvements in the design will never lead to a quantitative change. A simple solution is to perform a weighting of each of the M-factors described in section 3.1 and then adjust the imperfect test additions in Table 7 for each specific case.

Figure 21, The M-factors contribution to the imperfect test addition


The contribution of each of the M-factors can be estimated by:

0 = 0 % non-testable
0.5 = 25 % non-testable
1.0 = 50 % non-testable
1.5 = 75 % non-testable
2.0 = 100 % non-testable

Pimp, the probability of imperfect tests, is then calculated from the average of the contributors:

Pimp = [(M1 + M2 + M3 + M4 + M5)/5]·PFDdiff

Determining the contributing factors should be done by a multidisciplinary team, analogous to other risk analyses. To facilitate the estimation, more detailed questions should be elaborated to reflect the possible impact of each of the M-factors.
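The proposed weighting can be sketched in a few lines; the function name is illustrative, and the example values are those of the system test example (all factors 1.0 except the method factor at 0.5, with PFDdiff = 4.16·10⁻³).

```python
# Sketch of the M-factor weighting for the imperfect test probability:
# each of the five M-factors is scored on the 0-2 scale above, and the
# average of the scores scales the PFDdiff taken from Table 7.

def p_imp(m_factors, pfd_diff):
    return sum(m_factors) / len(m_factors) * pfd_diff

p = p_imp([1.0, 0.5, 1.0, 1.0, 1.0], 4.16e-3)  # -> 3.744e-3
```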

Result system test example


All the M-factors are set to 1.0 in this example except the factor "method". It is assumed that the valves are tested by a function test, meaning that there is no testing for internal and external leakages. The fraction of the LCP and ELP failure modes of the total dangerous failure modes gives a non-testable part of 26 % (based on OREDA data as shown in Table 9). This corresponds to 0.5 for the M-factor.

Pimp = [(1 + 0.5 + 1 + 1 + 1)/5]·4.16·10⁻³ = 3.74·10⁻³

Using the simplified equations for calculating the PFD as described in the introduction of this chapter, and adding the imperfect test contribution as calculated, the PFD for the system is:

PFD_SYS,imp = PFD_SYS + β·Pimp

PFD_SYS,imp = [(1 − β)·λDU,S·τ]² + (β·λDU,S·τ)/2 + [(1 − β)·λDU,L·τ]²/3 + (β·λDU,L·τ)/2 + [(1 − β)·λDU,V·τ]²/3 + (β·λDU,V·τ)/2 + β·Pimp

PFD_SYS,imp = 5.74·10⁻⁵ + 0.02·3.74·10⁻³ = 1.32·10⁻⁴

This result is higher than the original PFD, and the SIL rating, when only the PFD is considered, is changed to SIL 3, one level lower.


3.2.3 Case C, Decreasing PFD


Another approach to the assessment of imperfect tests is to let the initial PFD decrease over time. The argument is that failures that originally were impossible to detect become so obvious over time that they are discovered, or that they are discovered incidentally by the operators. Depending on whether case A or case B is used as a basis, this situation has two interpretations, named C1 and C2 respectively.

Case C1, decreasing PTIF addition

In the PDS method, a TIF addition is assumed already from the start of the component lifetime. But over time it can be presumed that more and more of the systematic failures will be revealed, leading to a decreasing TIF addition and PFDavg. This situation is sketched in Figure 22 for a lifetime of 20 years, where the failure rate is λDU = 1.0·10⁻⁷ hours⁻¹ and PTIF = 1.0·10⁻⁵. In the example the PTIF is assumed to decrease by 1.0·10⁻⁶ each year, a rate chosen only for the benefit of the example.


Figure 22, Unavailability with decreasing PTIF addition (case C1)

The impact on the PFD for a single component is hardly traceable. The PFD for the imperfect test situation is PFD = 4.41·10⁻⁴, while a perfect test yields PFD = 4.38·10⁻⁴, a difference of 3.0·10⁻⁶.
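The decreasing TIF addition can be sketched as follows, assuming the addition is floored at zero once it has decreased away; averaging it over the 20-year life span gives a value close to the roughly 3.0·10⁻⁶ difference quoted above. Names and the flooring assumption are illustrative.

```python
# Sketch of the case C1 assumption: the TIF addition starts at P_TIF = 1.0e-5
# and decreases by 1.0e-6 per year (floored at zero). The lifetime average of
# this addition approximates the extra PFD seen for the single component.

P_TIF0 = 1.0e-5
DECREASE_PER_YEAR = 1.0e-6
YEARS = 20

additions = [max(0.0, P_TIF0 - DECREASE_PER_YEAR * y) for y in range(YEARS)]
avg_tif_addition = sum(additions) / YEARS
```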
Result system test example

Using the same approach as in case A, but with the decreased PTIF value, gives the following result for the test system example:

PFD_SYS = [(1 − β)·λDU,S·τ]² + (β·λDU,S·τ)/2 + [(1 − β)·λDU,L·τ]²/3 + (β·λDU,L·τ)/2 + [(1 − β)·λDU,V·τ]²/3 + (β·λDU,V·τ)/2 + β·PTIF

PFD_SYS = 5.74·10⁻⁵ + 0.02·3.0·10⁻⁶ = 5.75·10⁻⁵


This result is even lower than case A, and does not lead to a change of the SIL rating.

Case C2, decreasing imperfect test addition


The non-testable part (as described in case B) of the random hardware failures is not discovered by tests, but a realistic assumption may be that over time the failures become so obvious that they are discovered. In Figure 23 a sketch is made for a component with failure rate λDU = 1.0·10⁻⁷ hours⁻¹, where it is assumed that the non-testable part is ξ = 20 % for a lifetime of 20 years. For the convenience of the example, the non-testable part is assumed to diminish and become testable at a rate of 1 % per year.


Figure 23, Unavailability with decreasing imperfect test addition (case C2)

The equation for one component for this situation is given below:

PFD ≈ ξ·λDU·τNT/2 + (1 − ξ)·λDU·τPT/2

PFD ≈ 2·0.2·(1.0·10⁻⁷·20·8760)/2 + (1 − 0.2)·(1.0·10⁻⁷·8760)/2 = 3.85·10⁻³

For this exact example the PFD for the imperfect test situation is PFD = 3.85·10⁻³, while a perfect test yields PFD = 4.38·10⁻⁴, which gives a difference of 3.41·10⁻³ for the component.

Result system test example

Using the same approach as in case B, but with Pimp = 0.9·3.41·10⁻³ since the failure distribution is different, gives the following result for the system test example:

PFD_SYS = [(1 − β)·λDU,S·τ]² + (β·λDU,S·τ)/2 + [(1 − β)·λDU,L·τ]²/3 + (β·λDU,L·τ)/2 + [(1 − β)·λDU,V·τ]²/3 + (β·λDU,V·τ)/2 + β·Pimp

PFD_SYS = 5.74·10⁻⁵ + 0.02·0.9·3.41·10⁻³ = 1.19·10⁻⁴


As in case B, this leads to a change of the SIL rating to SIL 3 when considering the PFD requirement only.

3.2.4 Comments to the imperfect test cases

In this chapter several interpretations of imperfect testing have been proposed. The PDS (2006) perspective on systematic failures was described in case A, while the influence of undetectable random hardware failures was assessed in case B. Case C gave an alternative approach to the prior ones, where the additions were assumed to diminish over time. The PFD results from the cases are gathered in Figure 24.

Figure 24, PFD results from case studies on imperfect testing

The case B and case C2 PFD values are considerably higher than the base case, and they are also the only ones that lead to a shift of the SIL rating. The PDS approach to systematic failures in case A and case C1 hardly has an impact on the PFD. The results illustrate the importance of making a testable system and the impact this may have if it is not taken into consideration.
There may be several difficulties in applying the methods described in this chapter. One topic that has been debated is the challenge of quantifying the PTIF value correctly to reflect the hidden systematic failures. Further on, a detailed approach should be developed for deciding the contribution of each of the M-factors proposed in case B.
Another aspect that should be taken into consideration is that the PFD is normally based on average values. For increasing unavailability, as shown in case B, there is a substantial difference between the PFD for the first and the last proof test interval, as illustrated by the unavailability sketches. This makes it interesting to assess the PFD max values in addition to the traditional PFD average. For high failure rates the imperfect test addition could potentially lead to another decrease of the SIL rating for the system at the end of its life span.


4 Partial stroke testing


Partial stroke testing (PST) has been introduced in order to reveal failure modes that could previously only be revealed through tests requiring process shutdown (Lundteigen and Rausand, 2007). As indicated by its name, PST involves moving the valve only partially in order to confirm that the valve is not stuck. It is assumed that if the valve is able to move, it will most likely continue to the fully closed position in a real demand situation. PST can be categorized as an imperfect test as described in chapter 3. Even so, PST is implemented in some industries because of its positive contribution to revealing some of the failure modes earlier than the scheduled proof tests, or as a measure for extending the proof test interval. PST is not yet a common approach in subsea petroleum production; one reason might be the difficulty of detecting that the valve actually moves. New technology such as smart positioners is now being introduced to the market, giving incentives to assess the advantages and disadvantages for subsea equipment more profoundly.

4.1 Main principles and concepts


A successful implementation of PST into a SIS has several advantages. It may improve safety, as failures are detected at an earlier stage than when only proof testing is conducted. It is possible to maintain the same SIL rating even with longer proof test intervals, which is of special importance when there is high risk related to the testing. It is even claimed that it is possible to take a component out of the architecture and still maintain the requested SIL. The PFD impact of PST is illustrated in Figure 25.

Figure 25, PST impact on the PFD (Lundteigen & Rausand, 2007)

There are three basic types of partial stroke test equipment: mechanical limiting, position control and solenoids (Summers & Zachary, 2000a):

Mechanical limiting: Requires manual interaction and visual inspection of valve movement, which is obviously not practical to incorporate in subsea systems.

Position control: Enables detection of how far the valve has moved. This requires additional hardware to be installed, and a system for collecting the test information, making the cost a major drawback.

Solenoids: The test is conducted by pulsing the solenoid, and the preset valve travel is confirmed by a limit switch or position transmitter, allowing for automatic documentation of test status.

The solenoid may either be integrated with the SIS, or it may be a separate PST package (Lundteigen and Rausand, 2007). The SIS sketch in Figure 26 is an illustration of a solution with both position control and solenoid (adapted from Beurden & Amkreutz, 2001).

Figure 26, Simple SIS with PST implementation (adapted from McCrea-Steele, 2006)

In subsea petroleum production PST has been implemented in the Kristin field for testing the
High Integrity Pressure Protection System (HIPPS) (Lundteigen and Rausand, 2007).

56

Partial and imperfect testing of SIF

Partial stroke testing

4.2 Advantages and disadvantages


Some of the advantages and disadvantages of PST are described below (based on Lundteigen & Rausand (2007) and McCrea-Steele (2005, 2006)):

Advantages
- Reduced wear of the valve seat area, since the valve is less frequently brought to a closed position
- Reduced probability of sticking seals, due to more frequent operation of the valve
- Valve is available during the test period, if properly designed
- Reduced operational disturbances due to testing, if properly designed

Disadvantages
- Tests only a portion of the DU failures
- Increased wear, due to more frequent operation
- More complex system, due to added software and hardware
- Potentially increased spurious trip rate, since the valve may continue to the fail safe position instead of returning to the initial position
- Potentially converts the valve to a type B complex subcomponent, due to the extra components installed

In addition there are some problems related to estimating the coverage when using PST. The measuring devices used to confirm that the test was successful may introduce failures themselves. Besides, there are few methods that can confirm with certainty that the PST actually moved the valve; very often this is only assumed, for example on the basis that the hydraulics were bled off. These topics are discussed throughout the next chapters.


4.3 PST coverage factor


The dangerous failure rate can be split up as shown in Figure 27, into parts revealed by diagnostic testing, partial stroke testing and proof testing. This gives the PFDavg equation:

PFDavg = PFDavg,PT + PFDavg,PST + PFDavg,diagnostic

The diagnostic contribution is very small and hardly contributes to the PFDavg; it is therefore not included in the calculations.

Figure 27, Overview of relevant failure rates (Lundteigen & Rausand, 2007)

It is necessary to estimate the coverage of PST in order to optimize the proof test intervals or to determine if a higher SIL rating can be obtained. The PST coverage is defined by Lundteigen & Rausand (2007) as the fraction of dangerous undetected failures detected by PST relative to the total number of dangerous undetected failures:

θPST = λDU,PST / λDU

The PST coverage can be given two interpretations:

1. The mean fraction of dangerous undetected failures that are detected by PST among all dangerous undetected failures.
2. The probability that a dangerous undetected failure is detected by the PST once a dangerous undetected failure is present.

The failure rates expressed in terms of λD are (op.cit.):

λDD = DC·λD
λDU,PST = (1 − DC)·θPST·λD
λDU,PT = (1 − DC)·(1 − θPST)·λD

where DC is the diagnostic coverage (as explained in section 2.5):

DC = λDD / λD

Summers (1998) puts emphasis on the importance of being plant specific when the PST coverage is assessed, as the valve and exposure environment may differ greatly from case to case. She also claims that credit for partial stroking in the quantitative verification of a SIL should be considered only when the process service is clean and tight shutoff is not required.
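The failure-rate split above, and the resulting PFDavg when PST runs between proof tests, can be sketched as follows. The test intervals in the example are illustrative assumptions, not values from the thesis.

```python
# Sketch of the failure-rate split in section 4.3 and the resulting PFDavg
# when PST is performed between proof tests (diagnostic term neglected,
# as in the text).

def split_rates(lam_d, dc, theta_pst):
    lam_dd = dc * lam_d                             # detected by diagnostics
    lam_du_pst = (1 - dc) * theta_pst * lam_d       # detected by PST
    lam_du_pt = (1 - dc) * (1 - theta_pst) * lam_d  # detected only by proof test
    return lam_dd, lam_du_pst, lam_du_pt

def pfd_avg(lam_d, dc, theta_pst, tau_pst, tau_pt):
    _, lam_du_pst, lam_du_pt = split_rates(lam_d, dc, theta_pst)
    return lam_du_pst * tau_pst / 2 + lam_du_pt * tau_pt / 2

# Example: no diagnostics, 62 % PST coverage, monthly PST, yearly proof test.
pfd = pfd_avg(1.0e-7, 0.0, 0.62, 730, 8760)
```

The three partial rates always sum back to λD, which is a useful sanity check on the split.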


In Table 8 the causes of the different failure modes are described, and each is connected to the test strategy by which it is assumed to be revealed.
Table 8, Dangerous failure modes and test strategy for a safety gate valve (adapted from Summers & Zachary 2000a, McCrea-Steele 2006, KOP 2002, Bak 2007 and ATV 2007)

| Failure Descriptor | Failure Mode | Test Strategy |
|---|---|---|
| Spring cap plate is jammed | Fail To Close (FTC) | FT/PT |
| Too high test pressure that leads to deformation | Fail To Close (FTC) | PST/FT/PT |
| Failure in closed compensation system | Fail To Close (FTC) | FT/PT |
| Hydraulic line is blocked | Fail To Close (FTC) | PST/FT/PT |
| Formation of hydrates in bonnet cavity during shutdown | Fail To Close (FTC) | PST/FT/PT |
| Foreign object/debris in cavity | Fail To Close (FTC) | FT/PT |
| Valve seal is seized due to temperature changes | Fail To Close (FTC) | PST/FT/PT |
| One of the two springs breaks and jams | Fail To Close (FTC) | FT/PT |
| Valve stem sticks | Fail To Close (FTC) | PST/FT/PT |
| Valve is not fully set back/fully disconnected after ROV intervention | Fail To Close (FTC) | FT/PT |
| Valve seat plugged due to deposition or polymerization | Leakage in closed position (LCP) | PT |
| Valve seat is scarred | Leakage in closed position (LCP) | PT |
| Valve seat contains debris | Leakage in closed position (LCP) | PT |
| Foreign object/debris in cavity | Delayed operation (DOP) | FT or PST with speed of travel test |
| Hydraulic line to actuator choked | Delayed operation (DOP) | FT or PST with speed of travel test |
| Due to high friction the valve may delay/be prevented from fully closing | Delayed operation (DOP) | PT (PST in low pressure fields) |
| Valve seal is damaged | External leakage in closed position (ELP) | PT |

PST: Partial Stroke Test. PT: Proof Test. FT: Function test (full stroke, no leakage test).

It is the safety gate valve that forms the basis for the assessment of the failure modes in Table 8. The table does not cover all possible failure descriptors for all different valve designs, but reflects the design of the valve producer ATV to a high degree (ATV, 2007). Note that there is no reason why moving the valve 20 % should lead to a build-up of corrosion/scaling that eventually would lead to a fail to close failure (Bak, 2007); hence it is not included as a failure descriptor.
Note that as a basis for the PST coverage estimation, several assumptions have been made:
- Both the critical and degraded failure rates from OREDA are included in the calculations, as the practical distinction between the two may be vague (adapted from Lundteigen & Rausand, 2007).
- Whether it is FTC or FTO that is safety critical for the system is not possible to read from the OREDA data, but to avoid counting the contribution twice, the FTC rate was used throughout the calculations.
- Only the failure modes DOP, ELP, FTC and LCP are considered dangerous and relevant for testing of a safety system; hence only these failure modes are included in the calculations.
- Since there is not much reliability data for subsea systems in particular, the available data for topside and subsea are merged. This may not be accurate, as there are different requirements for subsea equipment than for topside. Even so, the inner environment is considered equal for both subsea and topside valves, and the mix may thus be justified.
- Only the latest OREDA edition (2002) is utilized. It may be discussed whether the earlier OREDA editions should be used or not, since the design improves continuously as more operational experience is attained; hence the old versions may not reflect the failure rates of new equipment realistically.
Leakage in closed position cannot be detected by PST, as the valve needs to be fully closed. Neither is external leakage in closed position assumed to be discovered by PST, as the pressure difference over the valve may be minimal for a high pressure field. The failure mode itself is very unlikely to occur for valves with backseat, as the leakage is only possible during transition. Consequently both of these coverages are set to 0 %. It is likely that DOP can be discovered by PST if a speed of travel detector is installed. For the failure descriptor "due to high friction the valve may delay/be prevented from fully closing", the PST will probably not detect problems with closing the very last part. Regarding FTC, it may be discussed whether the assumption that the valve will continue to a closed position if it can start to move is realistic.
The coverage is estimated on the failure descriptors in Table 8 only, and on the basis of the assumptions described above a tentative estimate of the PST coverage for each failure mode may be:
- FTC: 80 %
- LCP: 0 %
- DOP: 90 %
- ELP: 0 %
The PST coverage factor is estimated by collecting the relevant failure modes from OREDA (2002) as shown in Table 9. Only the valves used for ESD, control and safety purposes are chosen. The fraction of failures that can be discovered by PST is then calculated as shown below:

θPST = (PSTFTC_coverage·N_FTC_failures + PSTDOP_coverage·N_DOP_failures) / Total_critical_failures

Table 9, Reliability data as basis for PST coverage estimation (adapted from Lundteigen & Rausand, 2007)

| Failure data (OREDA 2002) | FTC | LCP | DOP | ELP | Total |
|---|---|---|---|---|---|
| Subsea - Common component process isolation valves (p. 806) | 1 | 2 | 0 | 1 | 4 |
| Subsea - Manifold (p. 818) | 134 | 40 | 30 | 44 | 248 |
| Topside - Control & safety equipment (p. 568) | … | … | … | … | … |
| Topside - Control & safety equipment (p. 575) | … | … | … | … | … |
| Topside - Control & safety equipment (p. 607) | … | … | … | … | … |
| Topside - Control & safety equipment (p. 581) | … | … | … | … | … |
| Topside - Control & safety equipment (p. 689) | … | … | … | … | … |
| Topside - Control & safety equipment (p. 695) | … | … | … | … | … |
| Topside - ESD/PSD Ball valves (p. 706) | … | … | … | … | … |
| Topside - ESD/PSD Gate valves (p. 717) | … | … | … | … | … |
| Total | 317 | 67 | 71 | 71 | 526 |

When only the data from the latest OREDA edition (2002) are used, with the PST coverage
for the different failure modes as estimated above, the PST coverage becomes 62 %. With more
optimistic PST coverages of the dangerous failures, e.g. both FTC and DOP set to 95 %, the PST
coverage is estimated to be 72 %. With a more pessimistic approach, with both FTC and DOP set
to 50 %, the PST coverage is estimated to be 38 %.
Summers & Zachary (2000a) proposed a PST coverage of 70 % and Lundteigen & Rausand
(2007) estimated a PST coverage of 62 % for the Kristin HIPPS valves, hence the result is in
about the same range as former research.
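The calculation above can be reproduced with a short script. It uses the Table 9 totals as
denominator, so the results come out one or two percentage points below the quoted
62 %/72 %/38 %, which indicates that a slightly different selection of OREDA rows was used in
the thesis; the script is a sketch of the approach, not a reproduction of the exact row choice.

```python
# PST coverage as the fraction of critical failures detectable by PST,
# using the per-failure-mode coverages and the failure counts of Table 9.
failures = {"FTC": 317, "LCP": 67, "DOP": 71, "ELP": 71}
total_critical = sum(failures.values())  # 526

def pst_coverage(mode_coverage):
    """mode_coverage maps failure mode -> PST coverage (0..1) for that mode.
    Modes not listed (here LCP and ELP) get coverage 0."""
    detected = sum(mode_coverage.get(mode, 0.0) * n for mode, n in failures.items())
    return detected / total_critical

base = pst_coverage({"FTC": 0.80, "DOP": 0.90})         # tentative estimate
optimistic = pst_coverage({"FTC": 0.95, "DOP": 0.95})   # both modes at 95 %
pessimistic = pst_coverage({"FTC": 0.50, "DOP": 0.50})  # both modes at 50 %
print(f"base {base:.0%}, optimistic {optimistic:.0%}, pessimistic {pessimistic:.0%}")
```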


4.4 Correlation between PST and spurious trips


It has been claimed that PST leads to a higher degree of spurious trips (ST) (Willem-Jan, N.
& Rens, W. 2005; McCrea-Steele 2006). A recurring argument is that if the valve starts to
move, it will easily continue to close instead of returning to its open position. In Figure 28 a
Bayesian belief network illustrates the possible causes of ST during PST. Bayesian belief
networks can be used instead of fault trees and cause-and-effect diagrams to illustrate the
relationship between a system failure and its contributing factors (adapted from Rausand &
Høyland, 2004).
The result depends highly on the chosen system, whether it is automatic, semi-automatic or
manual. For the automatic system, the organisational and human contributors are designed
away. In the figure the contributors to an ST for the manual solution are shown with solid
lines, while the contributors relevant also for the automatic system are dashed.

Figure 28, Bayesian belief network for ST during PST

From the figure it can be concluded that by implementing an automatic system for
conducting the PST, many of the potential reasons for ST can be designed away.
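The conclusion can be made concrete with a toy calculation. The cause list and all
probabilities below are assumed purely for illustration; they are not the nodes or numbers of
Figure 28. Human and organisational causes are only active for the manual solution:

```python
# Toy spurious-trip (ST) model during PST: an ST occurs if any independent
# root cause fires. All per-test probabilities are assumed, for illustration.
causes = {
    "valve travels past the PST limit": 1e-4,  # relevant for both systems
    "signal or hardware fault":         5e-5,  # relevant for both systems
    "operator error":                   2e-3,  # manual solution only
    "inadequate procedure":             1e-3,  # manual solution only
}
HUMAN = {"operator error", "inadequate procedure"}

def p_spurious_trip(manual: bool) -> float:
    """Probability that at least one ST cause fires during one PST."""
    p_no_trip = 1.0
    for cause, p in causes.items():
        if manual or cause not in HUMAN:
            p_no_trip *= 1.0 - p
    return 1.0 - p_no_trip

print(f"manual: {p_spurious_trip(True):.1e}, automatic: {p_spurious_trip(False):.1e}")
```

In this sketch the automatic system removes the dominating human causes and the ST
probability drops substantially, mirroring the conclusion drawn from the network.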


4.5 Influencing factors for the PST contribution to a SIS


Based on the formulas described in section 4.3, the PFDavg of a system with PST implemented
can be given by;

PFD ≈ PFD_PT + PFD_PST + PFD_DT

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2 + λ_DD · τ_DT / 2

The proportion of failures that is not detected through PST is left to be discovered through
proof testing, which is assumed to be perfect. This may not be a realistic assumption, as
discussed in chapter 3, and it is further assessed in section 4.6. A system with PST
implemented may be influenced by more factors than those normally included in PFD
calculations. Some of the simplifications and assumptions are briefly discussed in the
following.
Increase of β-factor
When one of two valves in the same line fails due to corrosion, there is a high probability
that, given the same body materials and process conditions, the other valve will fail as well
(Metso, 2002).
To be considered a CCF, the failures have to occur within a short time interval; to be
classified as such, the failures have to occur within the same proof test interval. Hence an
extension of the test interval might imply that the β-factor should increase over time
(Rausand, 2007). Reasons for this are;
1. There are longer time intervals within which several components can fail.
2. Preventive maintenance initiated because of findings on one component will not be
performed as often as with shorter proof test intervals.
If PST is used to extend the proof test intervals, this should lead to an assessment of the
β-factor to reflect the PFD impact this may have.
SFF calculation
An argument for implementing PST is the potential improvement of the SFF and
consequently the possibility to reduce the hardware fault tolerance requirement. This would
be obtained by the PST converting part of the dangerous undetected failure rate, which
appears only in the denominator, to dangerous detected failures, which appear in the
numerator as well;

SFF = (λ_SD + λ_SU + λ_DD) / (λ_SD + λ_SU + λ_DD + λ_DU)

As shown in Table 2, an improvement from SFF < 60 % to SFF = 60–90 % enables a
reduction of the hardware fault tolerance by 1 while still maintaining the SIL 2 rating. λ_DD
refers to the failures discovered by the automatic diagnostic tests. As PST does not fulfil the
criteria for functioning as a diagnostic test, the PST should not be used to affect the SFF, and
hence cannot be an argument for a reduction in the HFT (McCrea-Steele, 2006).
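The SFF shift that such a reclassification would produce can be illustrated numerically. The
failure rates below are assumed for illustration only; the point is why the HFT argument is
tempting, even though the text above argues that PST should not be counted this way:

```python
def sff(lam_sd, lam_su, lam_dd, lam_du):
    """Safe failure fraction: share of failures that are safe or dangerous detected."""
    return (lam_sd + lam_su + lam_dd) / (lam_sd + lam_su + lam_dd + lam_du)

# Assumed failure rates (per hour), for illustration only.
lam_sd, lam_su, lam_dd, lam_du = 2.0e-7, 1.0e-7, 0.0, 4.0e-7

before = sff(lam_sd, lam_su, lam_dd, lam_du)
# Hypothetically reclassifying 60 % of lam_DU as detected (a 60 % PST coverage):
after = sff(lam_sd, lam_su, lam_dd + 0.6 * lam_du, (1 - 0.6) * lam_du)
print(f"SFF before: {before:.0%}, after: {after:.0%}")  # crosses the 60 % boundary
```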


Choice of position indicator


Different types of position indicators may influence the reliability and efficiency impact of
the PST differently. Mechanical position indicators have been utilized topside and are
considered rather unreliable, as they can easily be moved from their actual position, e.g. by
someone stepping onto them. Erroneous indication of position will not prevent the valve from
closing, but it is critical for safety when the indicator shows closed while the valve is actually
open, and when the correct sequence of valve closure is crucial. Without a correct indication
of the position, the testing will be of no value, as the actual position is not confirmed.
Mechanical position indicators subsea can be read by ROVs only, hence there is a need for
other instrumentation. Today smart microprocessor-based devices have been developed by a
series of vendors (McCrea-Steele, 2006). Implementation of such components is normally not
a standard delivery (ATV, 2007) but can easily be included in the design.
Together with the mechanical indicator, the electronic indicator offers redundancy for
reading off the position open or closed. Some indicators can measure the exact degree
closed, which is of high relevance for PST. McCrea-Steele (2006) claims that
supplementary smart programmable equipment may introduce additional undetected
failures. On the other hand, the implementation of smart positioners can design away the
need for human interference when conducting a PST, reducing the number of reasons for ST
substantially, as shown in section 4.4. Furthermore, the electronic indicators can be
continuously monitored and the operability readily observed (Summers, 2000b). Digital
valve controllers monitor performance trends, enabling the detection of failures long before
they prevent the system from functioning (Ali, 2004). This is done by comparing the current
test with those performed in the past, also referred to as a signature test.
Solenoid configuration
Summers & Zachary (2002) recommend installing redundant solenoids to prevent ST. They
claim that this yields substantial savings through the avoidance of lost production. For subsea
equipment there are several other aspects to take into consideration if the solenoids are to be
made redundant. The solenoids are placed in the control pod on the side of the XMT,
enabling retrieval of the whole module if necessary. For installation purposes it is
important that the equipment is balanced, so if the weight of the control pod is increased, this
demands a balancing weight on the opposite side of the XMT. This impacts both the cost and
the handling of the equipment. A special vessel capable of heavier lifts might be required,
and the installation becomes more weather-dependent. For the control pod installed on the
manifold this is less important, as the weight already requires a special lifting vessel.
Redundancy for the solenoids should be considered as a possible solution for the manifold.


4.6 PST impact on the SIL


The PFD impact on a single component of implementing PST is illustrated in Figure 29.
To better illustrate the potential effect of PST, a failure rate of
λ_DU = 3.0·10⁻⁷ hours⁻¹ is chosen for the component. The PST coverage is set to
θ_PST = 60 % and the test interval to τ_PST = 1 week.

Figure 29, Unavailability with PST
(the unavailability A(t) plotted against time in hours, together with the average PFD for
proof testing only, the average PFD with PST, and the SIL 2 limit)

The equation for this situation is as follows;

PFD ≈ PFD_PT + PFD_PST

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2

PFD ≈ (1 − 0.6) · (3.0·10⁻⁷ · 8760) / 2 + 0.6 · (3.0·10⁻⁷ · 182.5) / 2 = 5.42·10⁻⁴

The implementation of PST would in this case lead to a change of SIL rating from SIL 2 to
SIL 3 for the single component, since the PFD is reduced from the previous PFD = 1.31·10⁻³
to PFD = 5.4·10⁻⁴.
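The SIL change can be checked with a small helper. The PFD band limits used are the
standard demand-mode ranges of IEC 61508; the numbers are those of the example above:

```python
def sil_band(pfd):
    """Return the SIL band for a demand-mode PFD (IEC 61508 ranges)."""
    for sil, lower in ((4, 1e-5), (3, 1e-4), (2, 1e-3), (1, 1e-2)):
        if lower <= pfd < lower * 10:
            return sil
    raise ValueError("PFD outside the SIL 1-4 ranges")

lam_du, theta, tau_pt, tau_pst = 3.0e-7, 0.6, 8760, 182.5
pfd_pt_only = lam_du * tau_pt / 2                                   # 1.31e-3
pfd_with_pst = (1 - theta) * pfd_pt_only + theta * lam_du * tau_pst / 2
print(sil_band(pfd_pt_only), "->", sil_band(pfd_with_pst))          # 2 -> 3
```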
In the following, PST is applied to case A and case B as described in chapter 3.


Case A, systematic failures

The PDS method proposes a P_TIF addition of 1.0·10⁻³ when PST is implemented. In Figure 30
the P_TIF additions for both PT and PST are included. The failure rate for the component
is λ_DU = 1.0·10⁻⁷ hours⁻¹, the PST coverage is set to θ_PST = 60 % and the test interval is
τ_PST = 1 week.

Figure 30, Unavailability with PST and PTIF addition
(the unavailability A(t) plotted against time in hours, together with the average PFD for
proof testing only, the average PFD with PST, and the SIL 2 limit)

The equation for this situation is as follows;

PFD ≈ PFD_PT + PFD_PST + P_TIF

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2 + P_TIF,PT + P_TIF,PST

PFD ≈ (1 − 0.6) · (1.0·10⁻⁷ · 8760) / 2 + 0.6 · (1.0·10⁻⁷ · 182.5) / 2 + 1.0·10⁻⁵ + 1.0·10⁻³ = 1.19·10⁻³

As the PFD when only proof testing the component is PFD = 4.38·10⁻⁴, implementing PST leads
to a worse PFD result when the P_TIF additions proposed by PDS are used. This is the case when
the P_TIF contribution is significantly greater than the contribution from the failure rate. Hence,
for a safety valve with adequately low failure rate, the PDS method does not support the
implementation of PST. But with two valves in series, the PST contribution will be positive,
since the addition equals β · P_TIF, which also implies that for small β-values the P_TIF
addition becomes very small.
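Case A can be reproduced in a few lines; all values follow the text above, including the weekly
PST interval of 182.5 hours used in the thesis:

```python
# Case A: PST combined with the PDS P_TIF additions (values as in the text).
lam_du, theta, tau_pt, tau_pst = 1.0e-7, 0.6, 8760, 182.5
p_tif_pt, p_tif_pst = 1.0e-5, 1.0e-3   # test-independent failure probabilities

pfd_pt_only = lam_du * tau_pt / 2      # proof testing only: 4.38e-4
pfd_with_pst = ((1 - theta) * lam_du * tau_pt / 2
                + theta * lam_du * tau_pst / 2
                + p_tif_pt + p_tif_pst)
print(f"PT only: {pfd_pt_only:.2e}, with PST and P_TIF: {pfd_with_pst:.2e}")
```

The dominating term is the PST P_TIF addition of 1.0·10⁻³, which is why the result gets worse
despite the shorter test interval.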


Case B, random hardware failures

Figure 31 illustrates the unavailability of a component with failure rate
λ_DU = 1.0·10⁻⁷ hours⁻¹. The PST coverage is set to θ_PST = 60 % and the test interval τ_PST = 1
week. The non-testable part is assumed to be ξ = 20 %, and it is assumed that the non-testable
part is equal for both the PT and PST.

Figure 31, Unavailability with PST and imperfect testing
(the unavailability A(t) plotted against time in hours, together with the average PFD for
proof testing only, the average PFD with PST, and the SIL 2 limit)

The equation for this situation is as follows;

PFD ≈ PFD_PT + PFD_PST

PFD ≈ ξ · λ_DU · τ_NT / 2 + (1 − ξ) · [ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2 ]

where τ_NT = 20 · τ_PT is the period during which the non-testable part remains latent.
Inserting the numbers:

PFD ≈ 0.2 · (1.0·10⁻⁷ · 20 · 8760) / 2 + (1 − 0.2) · [ (1 − 0.6) · (1.0·10⁻⁷ · 8760) / 2 + 0.6 · (1.0·10⁻⁷ · 182.5) / 2 ] = 1.90·10⁻³
The insertion of the non-testable part and PST leads to a change of SIL rating from SIL 3 to
SIL 2, since the PFD increases from PFD = 4.38·10⁻⁴ (only proof testing conducted)
to PFD = 1.90·10⁻³.
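Case B follows the same pattern; the non-testable fraction accumulates over 20 proof test
intervals, as assumed in the text:

```python
# Case B: a non-testable fraction xi persists over tau_nt = 20 proof test
# intervals; values as in the text above.
lam_du, theta, xi = 1.0e-7, 0.6, 0.2
tau_pt, tau_pst, tau_nt = 8760, 182.5, 20 * 8760

pfd = (xi * lam_du * tau_nt / 2
       + (1 - xi) * ((1 - theta) * lam_du * tau_pt / 2
                     + theta * lam_du * tau_pst / 2))
print(f"{pfd:.2e}")  # about 1.90e-3, i.e. back in the SIL 2 band
```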
The impact on the PFD of a component with failure rate λ_DU = 1.0·10⁻⁷ hours⁻¹, where
20 % is assumed non-testable, is assessed in Table 10. The table shows the results for both the
imperfect and the perfect test situation for diverse PST coverages and intervals.


Table 10, PFD related to diverse PST coverages, test intervals and (im)perfect testing

Without PST: PFD = 4.36·10⁻⁴

With PST:              PST coverage 50 %         PST coverage 60 %
                       Imperfect    Perfect      Imperfect    Perfect
τ_PST = 1 week         1.94·10⁻³    2.23·10⁻⁴    1.91·10⁻³    1.80·10⁻⁴
τ_PST = 1 month        1.95·10⁻³    2.38·10⁻⁴    1.92·10⁻³    1.98·10⁻⁴
τ_PST = 3 months       1.98·10⁻³    2.71·10⁻⁴    1.95·10⁻³    2.38·10⁻⁴

                       PST coverage 70 %         PST coverage 80 %
                       Imperfect    Perfect      Imperfect    Perfect
τ_PST = 1 week         1.87·10⁻³    1.37·10⁻⁴    1.84·10⁻³    9.50·10⁻⁵
τ_PST = 1 month        1.88·10⁻³    1.58·10⁻⁴    1.86·10⁻³    1.19·10⁻⁴
τ_PST = 3 months       1.93·10⁻³    2.05·10⁻⁴    1.90·10⁻³    1.72·10⁻⁴

Considering only the perfect test situation, there are relatively small differences between
the diverse PST coverages and between the different test intervals. A change of the failure
rate to λ_DU = 1.0·10⁻⁶ hours⁻¹ showed that it is more important to conduct the PST often
than to assess the exact PST coverage. A shorter test interval could potentially lead to a
change of the SIL rating of the component, while a higher PST coverage hardly has an impact.
Depending on the situation, an improved PFD by PST implementation can be obtained by
either improving the PST coverage or shortening the PST test interval (or both).
It is clear that an imperfect test has a greater negative impact on the PFD than the positive
PST contribution; hence the priority should be on reducing the non-testable part. A
reduction of the non-testable part by 10 percentage points gives a greater improvement of the
PFD than a component with 80 % PST coverage and a test interval of 1 week achieves.
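The last comparison can be made concrete with the same model as in Table 10; the weekly
interval is here taken as 168 hours, and the non-testable part is assumed latent over a 20-interval
life, as above:

```python
# Sensitivity of the PFD to the non-testable fraction xi versus PST coverage
# and interval (lam_DU = 1.0e-7 per hour, annual proof test).
def pfd_imperfect(lam_du, xi, theta, tau_pt, tau_pst, n_intervals=20):
    """Average PFD with a non-testable fraction xi and PST coverage theta."""
    return (xi * lam_du * n_intervals * tau_pt / 2
            + (1 - xi) * ((1 - theta) * lam_du * tau_pt / 2
                          + theta * lam_du * tau_pst / 2))

lam, tau_pt = 1.0e-7, 8760
base       = pfd_imperfect(lam, 0.20, 0.60, tau_pt, 730)  # 60 % coverage, monthly PST
better_pst = pfd_imperfect(lam, 0.20, 0.80, tau_pt, 168)  # 80 % coverage, weekly PST
smaller_xi = pfd_imperfect(lam, 0.10, 0.60, tau_pt, 730)  # xi cut from 20 % to 10 %
print(f"base {base:.2e}, better PST {better_pst:.2e}, smaller xi {smaller_xi:.2e}")
```

Halving the non-testable part improves the PFD far more than the best PST coverage and
interval combination does, which is the point made above.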


5 Discussion

5.1 Quality of the reliability assessment


As discussed in chapter 3 there are many reasons for imperfect testing. In this thesis they
have been classified according to the five M-factors; method, machine, milieu, man-power
and material. From the results of the case studies conducted on the topic, it is obvious that it
should be of great interest to assess the proportion of non-testable failures. Even a relatively
low proportion of non-testable failures has a significant impact on the PFD and hence on the
SIL rating of the system.
The assessment of imperfect testing is of special importance for valves with higher failure
rates, meaning from λ_DU = 1.0·10⁻⁶ hours⁻¹ and upwards. If the reliability of safety
valves continues to improve so that adequately low failure rates are obtained, the imperfect
test assessment may become obsolete, as the PFD impact would be insignificant. This should
however not be used as an excuse for not assessing the matter now.
Since it can be difficult to determine the exact proportion of non-testable failures for the
system, a method facilitating the estimation has been proposed. It can be claimed that
introducing such an imperfect test addition adds further uncertainty to the results. But not
assessing it would be a step backwards from the customarily conservative estimation
approach, as the possible impact of imperfect tests would be ignored.
The assessment of the PST implementation shows that for higher failure rates, from
λ_DU = 1.0·10⁻⁶ hours⁻¹ and above, it is more important to conduct the PST often than to
achieve a higher PST coverage. The reason is that the positive PFD impact is greater if the
tests are carried out more often than if the coverage is improved by an additional 10 %. This
makes the difficulty of deciding the exact PST coverage somewhat less important, as the
assumptions taken do not have a great impact.
On the contrary, a reduction of the non-testable part by 10 percentage points gives a greater
improvement of the PFD than obtaining both a higher PST coverage and shorter test
intervals. Hence the focus should be upon diminishing the factors that lead to unsuccessful
tests.
If PST is implemented in order to extend the proof test interval, it might be necessary to
change the β-factor (Rausand, 2007). As CCF are those failures that happen within the same
proof test interval, an extension of the interval could lead to more failures being classified as
CCF. Analogously, in the imperfect test situation the period for the non-testable part is longer
and it is likely that several components would fail within the same period. Hence it should be
discussed whether the β-factor should be incremented to realistically reflect the PFD impact.


The SIL rating is not a static measure. The PFD is greatly influenced by the operator
companies during the production lifetime. The equipment can be delivered in excellent
condition and with the opportunity to check and validate the system, but the operational
philosophy is significant. As mentioned, the operational philosophy may be to minimize the
stress on the system by not conducting the tests in a realistic manner, e.g. conducting
function tests instead of proof tests (which imply a leakage test). A leakage test may lead to
degradation of the components, which results in higher failure rates and consequently a
higher PFD. This makes it interesting to optimize the function test, PST and proof test
intervals against the possible degradation of the system if proof tests were conducted. In
some situations it may be worth including the imperfect test addition instead of degrading
the components by proof testing.


5.2 Uncertainty regarding the results


The author had very limited statistical material, as the OREDA handbook and the PDS data
handbook were the only reliability data sources. Because of the lack of experience data for
subsea equipment, both topside and subsea data had to be used to get enough material. This
may introduce inaccuracy, as there are different requirements related to subsea equipment
than to topside equipment. The inner environment is considered equal for both subsea and
topside valves, so it can be reasoned that the numbers may be used for illustrative purposes.
The OREDA handbooks are, moreover, not detailed enough regarding the different failure
descriptors. Attempts to get more detailed data from the operator company did not succeed,
and thus the author was limited to qualitative adjustments of the PST coverage of the
dangerous failure modes.
The models developed are fairly simple. More advanced approaches may give results with
higher accuracy than obtained in this thesis. The approximation formulas have been utilized
in the calculation of the PFD; more accurate results may be achieved by using the proper
equations.
In order to evaluate the assumption of perfect testing, additional assumptions were needed to
simplify the problem. One example is the assumption that the same failures remain
non-testable throughout the life span of the component. Furthermore, the theories have been
applied to the valves only; different results could have been achieved by including the logic
and sensors as well.

5.3 Recommendations for further work


Throughout the work with the thesis, several topics were discovered that would be
interesting to assess more profoundly;
- Assess more perspectives/interpretations of imperfect testing. Only a few alternatives
were included in this thesis in order to illustrate the impact on the PFD. Improved
models may improve the accuracy of the results.
- Calculations for imperfect tests have been conducted on one single component only;
other architectures such as 1oo2, 2oo3 etc. should be analyzed more profoundly.
- The PST impact on the CCF can be analyzed further. A possible increment of the β-value
with extended proof test intervals should be investigated.
- The method for estimating the contributions from the M-factors should be developed
further. A detailed questionnaire would enable an easy approach for estimating a
conservative and realistic imperfect test addition.
- The PFD impact of imperfect testing should be assessed also for the logic and sensors.
For illustrative reasons the focus in this thesis has been only upon the safety valves.


6 Case study
The topics discussed throughout the thesis are in this section applied to a genuine
field development. The Morvin field has been chosen for this purpose. Special attention has
been given to the HIPPS. Up to this date the contract for Morvin has not yet been
awarded, hence this case study is based on the initial concept studies done by AKS and
the first drafts done by Statoil.

6.1 Introduction case study; Morvin


The Morvin field is part of Halten West Unit, PL134B, and is situated in block 6506/11, north
of Kristin and east of Smørbukk. The reservoir pressure is 818 bar and the temperature
162 °C, and the field will thus be developed as a subsea HP/HT (High Pressure High
Temperature) field.
The ownership interests per February 2007;
Statoil ASA: 50 %
Norsk Hydro ASA: 14 %
ENI AS: 30 %
Total: 6 %
Statoil is the responsible company for the development phase.

6.2 Requirements from customer


The main structures are 2 templates, 2 manifolds and 4 X-mas trees. The field will be
produced from two 4-slot templates with two wells on each, and tied back to Åsgard B
through a 10.5 inch inner diameter pipeline (Statoil, 2007c). Since it is a HP/HT field, HIPPS
(high integrity pressure protection system) will be installed on the manifold, enabling the
pipeline and riser to be designed for 390 bar while the shut-in pressure is 715 bar.
Statoil defines safety requirements for the emergency shutdown, the process shutdown and
for the HIPPS specifically (Statoil, 2007a). HIPPS is a kind of SIS and must comply with the
IEC 61508 standard. The safety requirements given by Statoil for the HIPPS are quoted in
Table 11.
Note that the closing time for the valves should be calculated for each specific field, as the
required time to close may vary (Patni & Davalath, 2005).


Table 11, Morvin HIPPS requirements (Statoil, 2007a)

Topic                           | Requirement or description
Definition of safety function   | Closing valves upon high pressure in production header
Definition of functional limits | The function includes pressure transmitter subsea, logic, and valves.
Equipment under control (EUC)   | The EUC is defined as the flowline and riser.
Safe state of the function      | Safe state of the function is when one of the valves is closed.
SIL requirement                 | SIL 3. PFD shall be less than 5.0·10⁻⁴. PFD allocation: Initiator < 35 %, Logic < 15 %, Final element < 50 %
Max allowed response time       | Closing time shall be less than 13 seconds, including signal polling time.
Other performance measures      | Internal leak shall be 0 kg/s at FAT.
Operational requirements        | The status/position of all safety critical components in the function shall be available at any time.

Along with the requirements directed to the physical structures, there are several other
requirements related to activities, testing and documentation. Regarding safety and
reliability analyses, the following is required as a minimum:
- HAZIDs; including hazard register, according to ISO 17776
- HAZOPs; Hazard and Operability analyses
- SAZOPs; Safety and Operability analyses
- FMECAs; Failure Mode Effect and Criticality Analyses
- RAMs; Reliability, Availability and Maintainability analyses
- Uncertainty and risk analysis, including Quantitative Risk Analyses
- Documentation and analyses of SIS as required by IEC 61508 or IEC 61511 as
interpreted in the OLF 070 guideline.

The documentation requirements for compliance with the IEC 61508 standard are specified
as follows (Statoil, 2007b);
- Safety Requirements Specification (SRS)
- Safety Analysis Report (SAR)
- Documentation of SIL
- Proven in use
The documentation requirements show the emphasis put upon a systematic approach to
assess the risks associated with the system. Note that the requirements quoted here are only
those directly related to safety and reliability.


6.3 HIPPS
The HIPPS may be installed topside or subsea on an X-mas tree, manifold or pipeline end
terminal. HIPPS provides a pressure break between the subsea systems rated to full shut-in
pressure and the flowline and riser rated to a lower pressure (Patni & Davalath, 2005). An
example of a HIPPS schematic is given in Figure 32 (KOP, 2004); as shown, the HIPPS
basically consists of two safety valves in series and redundant pressure transmitters. One
HIPPS configuration is normally placed on each header on the manifold, securing redundancy
for the function.

Figure 32, HIPPS schematic (KOP, 2004)

There are several advantages of implementing HIPPS subsea, among others (adapted from
Patni & Davalath, 2005);
- Lower installation cost of flowlines and risers due to lighter components
- Reduced cost of HP/HT risers
- The thicker flowlines facilitate early payback of the installation cost due to a higher flow
of oil and gas
- The thicker flowlines facilitate higher flow and thus higher temperature, which is
positive for the cool-down time and the danger of creating hydrates
- The larger flow area allows the field to be abandoned at a lower reservoir pressure


6.3.1 HIPPS testing


The testing methods described below are adapted from the methods for the HIPPS valves on
the Kristin field (2003).
Proposed method for FT (no leakage test) of HIPPS valves
Removing power from one of the pressure sensors gives a high pressure reading and
contributes to the 2oo4 voting. Removing the power from an additional pressure sensor
causes the valves to close. For testing the sensor function, fluid can be injected in order to
reach the trip point. If the sensors vote high pressure, the function is confirmed. In this test
all the components are tested; sensors, logic and final element.
Proposed method for leakage test of HIPPS valves
All wells have to be closed in order to perform this test. The test is conducted by injecting
fixed volumes of methanol to a preset pressure, and then the decay in pressure is monitored
to determine the leakage rate for methanol. This leakage can be converted to the equivalent
gas/petroleum leakage.
Proposed method for PST of HIPPS valves
The test is initiated by the safety and automation system, which provides a PST command to
the selected HIPPS valve. The solenoid is de-energized, releasing the hydraulic fluid and
causing the HIPPS valve to move towards the closed position. After a pre-set time the
solenoid is re-energized, hydraulic pressure is restored, and the valve returns to the open
position. In this test only the final element is tested; solenoid, control valve, actuator, valve
and position indicator.


6.4 SIL rating


The reliability block diagram for the HIPPS is illustrated in Figure 33.

Figure 33, HIPPS reliability block diagram for Morvin field development
(the four pressure sensors voted 2oo4 are drawn as six parallel sensor pairs, 1-2, 1-3, 1-4,
2-3, 2-4 and 3-4, in series with the two parallel logic units and the two parallel valves)

When assuming the same results of the FMEA analysis performed for the Kristin project
(AKS, 2002), the data for the HIPPS valves are as described in Table 12. The failure rate
includes all the final elements, such as actuator and solenoid. The sensor and logic failure
data are taken from a topside HIPPS example in the PDS data handbook (2006).
Table 12, Morvin HIPPS Case data

Component       | Failure rate λ_DU (hours⁻¹) | β-factor (PDS Data, 2006) | SFF    | Test interval (hours)
Sensors (2oo4)  | 3.0·10⁻⁷ *                  | 3 %                       | < 60 % | 8760
Logic (1oo2)    | 1.0·10⁻⁷ *                  | 2 %                       | 99 %   | 8760
Valve (1oo2)    | 1.01·10⁻⁶                   | 2 %                       | 60 %   | 8760

* PDS data, 2006
It is assumed that the equipment complies with the SFF requirement given by Statoil.
Considering the HFT and SFF requirements in Table 2, the 2oo4 voting on the sensors
enables a SIL 3 rating, the logic enables a SIL 4 rating and the final elements allow a SIL 3
rating. Hence the system complies with the SIL 3 requirement.


1. Proof testing of the system

Using the simplified equations as before, the following PFD is obtained when only proof
testing is conducted on the system:

PFD_SYS = PFD_2oo4,sensors + PFD_1oo2,logic + PFD_1oo2,valves

PFD_SYS ≈ { [(1 − β) · λ_DU · τ]³ + β · λ_DU · τ / 2 }
  + { [(1 − β) · λ_DU · τ]² / 3 + β · λ_DU · τ / 2 }
  + { [(1 − β) · λ_DU · τ]² / 3 + β · λ_DU · τ / 2 }

PFD_SYS ≈ 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ = 1.61·10⁻⁴

The PFD corresponds to a SIL 3 rating.
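The system calculation can be reproduced as a sketch with the common simplified formulas
(Rausand & Høyland, 2004); the β (CCF) terms dominate all three contributions, so the exact
form of the small independent-failure terms matters little here:

```python
# HIPPS PFD with proof testing only, using simplified formulas with a
# beta-factor CCF term (data from Table 12, tau = 8760 hours).
def pfd_1oo2(lam, beta, tau):
    # Independent double-failure term plus common cause term.
    return ((1 - beta) * lam * tau) ** 2 / 3 + beta * lam * tau / 2

def pfd_2oo4(lam, beta, tau):
    # The independent term is of third order and negligible here;
    # the beta (CCF) term dominates.
    return ((1 - beta) * lam * tau) ** 3 + beta * lam * tau / 2

tau = 8760
sensors = pfd_2oo4(3.0e-7, 0.03, tau)
logic = pfd_1oo2(1.0e-7, 0.02, tau)
valves = pfd_1oo2(1.01e-6, 0.02, tau)
total = sensors + logic + valves
print(f"sensors {sensors:.2e}, logic {logic:.2e}, valves {valves:.2e}, "
      f"total {total:.2e}")
```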


2. Case A, P_TIF addition

Corresponding to the PDS method as explained in section 3.2.1:

PFD_SYS,PDS = CSU = PFD_SYS + β · P_TIF

PFD_SYS,PDS ≈ 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ + 0.02 · 1.0·10⁻⁵ = 1.62·10⁻⁴


3. Case B, imperfect test addition P_imp

The PFD_diff is added correspondingly to case B in section 3.2.2, with the same P_imp value:

PFD_SYS' = PFD_SYS + PFD_diff = PFD_SYS + β · P_imp

PFD_SYS' ≈ 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ + 0.02 · 3.74·10⁻² = 9.09·10⁻⁴


4. Implementation of PST

Assuming a PST interval of τ_PST = 730 hours (every month) and a PST coverage of
θ_PST = 60 %:

PFD ≈ PFD_PT + PFD_PST

PFD_SYS ≈ { [(1 − β) · λ_DU · τ_PT]³ + β · λ_DU · τ_PT / 2 }
  + { [(1 − β) · λ_DU · τ_PT]² / 3 + β · λ_DU · τ_PT / 2 }
  + (1 − θ_PST) · { [(1 − β) · λ_DU · τ_PT]² / 3 + β · λ_DU · τ_PT / 2 }
  + θ_PST · { [(1 − β) · λ_DU · τ_PST]² / 3 + β · λ_DU · τ_PST / 2 }

PFD_SYS ≈ 3.94·10⁻⁵ + 9.01·10⁻⁶ + 4.56·10⁻⁵ + 4.53·10⁻⁶ = 9.85·10⁻⁵

Partial and imperfect testing of SIF

Case study

5. PFD for HIPPS including PST, P_TIF and P_imp

Including all the cases described above, with P_TIF = P_TIF,PT + P_TIF,PST = 1.01·10⁻³,
yields the result:

PFD ≈ PFD_PT + PFD_PST + β · (P_TIF + P_imp)

PFD_SYS ≈ 3.94·10⁻⁵ + 9.01·10⁻⁶ + 4.56·10⁻⁵ + 4.53·10⁻⁶ + 0.02 · 1.01·10⁻³ + 0.02 · 3.74·10⁻² = 8.67·10⁻⁴

Comments to the results

The results are gathered in Figure 34, and all the approaches yield a SIL 3 rating. The PST
could potentially increase the rating to SIL 4, but then the other SIL requirements, SFF and
HFT, would have to be improved to a SIL 4 level as well.

Figure 34, PFD results for the different calculation approaches
(bar chart of PFD_SYS for the approaches 1. PT, 2. P_TIF, 3. P_imp, 4. PST and 5. All)

It is evident that special attention should be paid to discovering the non-testable part of the
system, as this has a great impact on the PFD. Introducing PST as a means to decrease the
PFD has practically no impact when P_TIF and P_imp are included in the calculations.
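The five approaches can be collected in one script; the component contributions and
addition factors are taken directly from the calculations above:

```python
# PFD for the Morvin HIPPS case under the five calculation approaches.
beta = 0.02                       # beta-factor for the 1oo2 valves
base = 3.94e-5 + 9.01e-6          # sensor + logic contributions (unchanged)
valves_pt = 1.13e-4               # 1oo2 valves, proof testing only
valves_pst = 4.56e-5 + 4.53e-6    # 1oo2 valves with monthly PST, 60 % coverage
p_tif = 1.0e-5 + 1.0e-3           # P_TIF additions for PT and PST
p_imp = 3.74e-2                   # imperfect test addition

results = {
    "1. PT":    base + valves_pt,
    "2. P_TIF": base + valves_pt + beta * 1.0e-5,
    "3. P_imp": base + valves_pt + beta * p_imp,
    "4. PST":   base + valves_pst,
    "5. All":   base + valves_pst + beta * (p_tif + p_imp),
}
for name, pfd in results.items():
    print(f"{name}: {pfd:.2e}")
```

The printed values reproduce the five results above; only approach 4, PST without the
additions, falls below the 1.0·10⁻⁴ boundary.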


7 Concluding remarks
SIL rating is a common requirement for subsea petroleum systems, making it interesting to
evaluate the assumptions that form the basis for the calculations. The assumption that a
component is as good as new after each proof test, meaning that the unavailability of the
component is reduced to zero, has been the subject of assessment. The effect of a test being
imperfect, meaning that the unavailability is not reduced to zero, has not been discussed to a
great extent in the literature. Hence the author has aimed to define and analyze the effect of
imperfect testing.
An imperfect test was classified according to two dimensions;
1. The test does not cover all possible failures: inadequate test method.
2. The test does not detect all the failures: unsuccessful test.
The reasons for imperfect tests were related to the five M-factors; method, machine, milieu,
man-power and material. It has been shown that the PFD impact of imperfect tests can be
significant. While the PDS-proposed P_TIF value hardly makes any impact, an imperfect test
with a high proportion of non-testable failures showed the potential to change the SIL
rating of the system. As it may be difficult to determine the exact percentage that is non-testable
for a system, a method based on the M-factors facilitating such estimation was proposed.
PST has been introduced in order to reveal failure modes whose detection was previously only
feasible through tests that require process shutdown. A successful implementation may improve
the SIL rating of the system. The use of PST in subsea petroleum production has so far not been
common. Several of the arguments for and against implementing PST in subsea equipment
have been assessed.
A tentative PST coverage factor was set to 62 %, based on a failure mode assessment of gate
valves and OREDA data. The result is in accordance with former research. The PST coverage
for the dangerous failure modes FTC, LCP, DOP and ELP could not be justified quantitatively,
as the production companies do not give such detailed information. The coverage may differ
depending on the valve type in question, its design and the production environment.
It has been argued that PST leads to an increase of the ST rate, assuming that if the valve
starts to move it will continue to the closed position. The likely reasons for such an event were
assessed in a Bayesian belief network, which demonstrated the need for the right equipment.
New devices such as smart positioners and digital valve controllers have been introduced for
the purpose of PST, reducing the human interference in PST and thus the reasons for ST.
PST is by some implemented in order to justify extended proof test intervals. As CCFs are failures that occur within the same proof test interval, an extension of the interval could lead to more failures being classified as CCF. In such situations, it should be discussed whether the β-factor should be increased to realistically reflect the PFD impact this may have.
Another argument for implementing PST has been the opportunity to reduce the HFT, since the SFF is increased by detecting more dangerous undetected failures and converting them to dangerous detected failures. As PST does not fulfil the criteria for being a diagnostic test, it is argued that PST should not be used to affect the SFF, and hence cannot be an argument for a reduction in the HFT (McCrea-Steele, 2006).
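The SFF inflation that this argument warns against is easy to reproduce. Using the IEC 61508 definition SFF = (λS + λDD) / (λS + λDD + λDU), reclassifying the PST-detectable share of λDU as "detected" raises the SFF even though PST is periodic rather than a continuous diagnostic. The failure rates and the 60% coverage below are hypothetical.

```python
def sff(lambda_s, lambda_dd, lambda_du):
    """Safe failure fraction per IEC 61508: the share of all failures
    that are either safe or dangerous-but-detected by diagnostics."""
    total = lambda_s + lambda_dd + lambda_du
    return (lambda_s + lambda_dd) / total

lam_s, lam_dd, lam_du = 2.0e-6, 1.0e-6, 2.0e-6   # per hour, hypothetical
theta_pst = 0.6                                   # assumed PST coverage of lambda_DU

without_credit = sff(lam_s, lam_dd, lam_du)
# Incorrectly crediting PST-detectable failures as dangerous detected:
with_credit = sff(lam_s, lam_dd + theta_pst * lam_du, (1 - theta_pst) * lam_du)
print(f"SFF without PST credit: {without_credit:.0%}")  # 60%
print(f"SFF with PST credit   : {with_credit:.0%}")     # 84%
```

A jump of this kind is what could appear to justify a lower HFT in the IEC 61508 architectural constraint tables, which is precisely the credit argued against above.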
Investing in PST can be recommended especially for components with higher failure rates, from λDU = 1.0 · 10⁻⁶ hours⁻¹ and above. The case studies showed that achieving the exact PST coverage was less important than the test frequency: the positive PFD impact was greater when the tests were carried out more often than when the coverage was improved by an additional 10%.
Conversely, reducing the non-testable part by 10% gave a greater improvement of the PFD than obtaining both higher PST coverage and shorter test intervals. Hence the focus should be on diminishing the reasons why a test may be unsuccessful.
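The trade-off between PST coverage and PST frequency can be illustrated with a simple model in the spirit of Lundteigen and Rausand (2007): the PST-covered fraction θ of λDU is renewed at every partial stroke, the remainder only at the full proof test. All parameter values below are hypothetical.

```python
def pfd_with_pst(lam_du, theta, tau_pst, tau_proof):
    """Approximate average PFD when a fraction theta of lambda_DU is
    revealed by PST (renewed every tau_pst) and the rest only by the
    full proof test (renewed every tau_proof). Illustrative model."""
    return (theta * lam_du * tau_pst / 2.0
            + (1.0 - theta) * lam_du * tau_proof / 2.0)

lam = 1.0e-6                # dangerous undetected failure rate [per hour]
tau_proof = 8760.0          # annual full proof test [hours]

base      = pfd_with_pst(lam, 0.60, 2190.0, tau_proof)  # quarterly PST
more_cov  = pfd_with_pst(lam, 0.70, 2190.0, tau_proof)  # +10% coverage
more_freq = pfd_with_pst(lam, 0.60, 730.0, tau_proof)   # monthly PST instead
print(f"baseline (60%, quarterly): {base:.2e}")       # 2.41e-03
print(f"70% coverage, quarterly  : {more_cov:.2e}")   # 2.08e-03
print(f"60% coverage, monthly    : {more_freq:.2e}")  # 1.97e-03
```

With these numbers, stroking the valve monthly improves the PFD more than adding 10% coverage at the same frequency, matching the case-study conclusion.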
Throughout the thesis it has become evident that the assumptions underlying reliability calculations need to be assessed more closely. The imperfect test case showed that ignoring the estimation of non-testable failures can lead to an inaccurate PFD result. As the use of SIS develops into one of the standard methods for reducing risk in petroleum production, improving the quality of these calculations is highly relevant.

REFERENCES
Articles and textbooks
Ali, R. 2004. Problems, concerns and possible solutions for testing (and diagnostic
coverage) of final control element of SIF loops. FIELDVUE Business, USA.
Bak, L. 2007. Personal communication 2nd of May 2007. Sandvika, Norway.
Beurden, I. & Amkreutz, R. 2003. The effects of partial valve stroke testing on SIL level. exida.com.
Goble, W.M. 2003. Estimating the Common Cause Beta Factor. Exida.com, USA.
Haddon, W. Jr. 1973. Energy damage and the ten countermeasure strategies. Human
Factors, 15(4):355-66.
Hovden, J., 2003. Theory formations about the Risk Society. NoFS XV, Karlstad, Sweden.
Lundteigen, M. and Rausand, M., 2006. Assessment of Hardware Safety Integrity Requirements. Proceedings of the 30th ESReDA Seminar. NTNU, Trondheim, Norway.
Lundteigen, M. and Rausand, M. 2007. The effect of partial stroke testing on the reliability of safety valves. NTNU, Trondheim, Norway.
McCrea-Steele, R. 2005. Partial Stroke Testing: Implementing for the Right Reasons. Paper at ISA EXPO 2005, Chicago.
McCrea-Steele, R. 2006. Partial Stroke Testing: The Good, the Bad and the Ugly. Premier Consulting Services, USA.
Metso automation, 2002. Comparison between testing methodologies to achieve the
required SIL level. Application report 2726/01/02.
Rausand, M. 2007. Personal communication 10th of April 2007. Trondheim, Norway.
Rausand, M. and Høyland, A., 2004. System Reliability Theory. Second edition. John Wiley & Sons, Inc., Hoboken, New Jersey.
Sangesland, S. 2007. Drilling and completion of subsea wells. Course compendium, NTNU, Trondheim.
Sanguineti, L. & Sanguineti, E. 2007. Personal communication 4th of May 2007. ATV,
Colico, Italy.

Subseazone, 2007. World Subsea Production Capex. Internet: http://www.subseazone.com/zones/subsea_home.aspx
Summers, A. 1998. Valve safety concerns. Letters, InTech May 1998. Internet:
http://findarticles.com/p/articles/mi_qa3739/is_199805/ai_n8800478
Summers, A. & Zachary, B. 2000a. Partial-stroke testing of safety block valves. SIS-TECH
Solutions, Published in Control Engineering November 1, 2000. Internet:
http://www.controleng.com/article/CA190350.html
Summers, A. 2000b. High Integrity Pressure Protection Systems (HIPPS). SIS-TECH
Solutions, Published in Chemical Engineering Progress November, 2000.
Summers, A. & Zachary, B. 2002. Improve Facility SIS Performance and Reliability. SIS-TECH Solutions, published in Hydrocarbon Processing, vol. 81, number 10, pp. 71-74, October 2002.
Velten-Philipp, W. and Houtermans, M. 2004. The effect of diagnostic and periodic testing on the reliability of safety systems. Software and Information Technology (ASI), Cologne, Germany.
Willem-Jan, N. & Rens, W. 2005. Partial Stroking on fast acting applications. Mokveld
Valves, Gouda, The Netherlands.

Standards & Guidelines


Activities regulations, 2002. The Petroleum Safety Authority Norway (PSA).
exida, 2003. Safety equipment reliability handbook. exida.com. L.L.C. PA, USA.
Facilities regulations, 2001. The Petroleum Safety Authority Norway (PSA).
Framework regulations, 2001. The Petroleum Safety Authority Norway (PSA).
IEC, 2002. Functional safety and IEC 61508. A basic guide.
IEC 61508 Standard, 2002. Functional safety of electrical/electronic/programmable
electronic safety-related-systems. Part 1-7.
IEC 61511 Standard, 2004. Functional safety. Safety instrumented systems for the process
industry sector. Part 1-3.

OLF 070, 2004. Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum
Industry. OLF, Rev. 02, 10.29.2004.
OREDA, 2002. Offshore Reliability Data. 4th Edition. SINTEF, Trondheim-Norway.
OREDA, 2007. OREDA homepage. Internet: http://www.sintef.no/static/tl/projects/oreda/
PDS Data Handbook, 2006. Reliability Data for Safety Instrumented Systems. 2006
Edition. SINTEF, Trondheim-Norway.
PDS Method Handbook, 2006. Reliability Prediction Method for Safety Instrumented
Systems. 2006 Edition. SINTEF, Trondheim-Norway.

Other documents
AKS, 2007. Dalia X-mas Tree. Internal document Aker Kværner Subsea.
KOP, 2005. IEC 61508 / IEC 61511 - SIL course, module 1. Presentation for training purposes.
KOP, 2004. There is something about Kristin. Presentation at Society for Underwater
Technology.
KOP, 2003. Safety requirement specification, system 18, HIPPS. Doc. Number 22-KC000502
KOP, 2002. FMEA report HIPPS valve and actuator. Doc. Number C074-KOP-S-RA-0002
Ring-O, 2007. Single acting FSC actuator for 5 1/8" gate valve. Internal document Ring-O Valves, Colico, Italy.
Statoil, 2007a. Morvin Safety Requirement Specification. Doc. number TR2250.
Statoil, 2007b. IEC 61508/61511 Compliance. Doc. number TR2249.
Statoil, 2007c. Delivery of subsea production system. Scope of work. Frame agreement
no.4600004645.

ANNEX A, XMT
[Figure: Horizontal XMT (Sangesland, 2007)]
[Figure: Conventional XMT (Sangesland, 2007)]