ENIQ
European Network for Inspection and Qualification
DG JRC
Institute for Energy
2007
EUR 22228 EN
Mission of the Institute for Energy
The Institute for Energy provides scientific and technical support for the conception, development,
implementation and monitoring of Community policies related to energy. Special emphasis is given to
the security of energy supply and to sustainable and safe energy production.
European Commission
Directorate-General Joint Research Centre (DG JRC)
http://www.jrc.ec.europa.eu/
Contact details:
Arne Eriksson
Tel: +31 (0) 224 56 5383
E-mail: arne.eriksson@jrc.nl
Legal Notice
Neither the European Commission nor any person acting on behalf of the Commission is responsible
for the use which might be made of this publication.
The use of trademarks in this publication does not constitute an endorsement by the European
Commission.
The views expressed in this publication are the sole responsibility of the author(s) and do not
necessarily reflect the views of the European Commission.
A great deal of additional information on the European Union is available on the Internet.
It can be accessed through the Europa server http://europa.eu/
EUR 22228 EN
ISSN 1018-5593
Luxembourg: Office for Official Publications of the European Communities
FOREWORD
This report is the outcome of activities undertaken by the ENIQ Task Group Risk (TGR) on
Risk Informed In-service Inspection (RI-ISI).
ENIQ, the European Network for Inspection and Qualification, was set up in 1992 in
recognition of the importance of the issue of qualification of NDE inspection procedures used
in in-service inspection programmes for nuclear power plants. Driven by European nuclear
utilities and managed by the European Commission Joint Research Centre (JRC) in Petten,
the Netherlands, ENIQ was intended to be a network in which available resources and
expertise could be managed at European level. It was also recognised that harmonisation in
the field of codes and standards for inspection qualification would be a major advantage for
all parties involved, and would ultimately increase the safety of European nuclear power
plants. More information on the ENIQ network and its activities can be found at
http://safelife.jrc.nl/eniq/.
ENIQ work is carried out by two sub-groups: the Task Group on Qualification (TGQ) focuses
on the qualification of in-service inspection (ISI) systems, while the Task Group on Risk
(TGR) focuses on risk-informed in-service inspection (RI-ISI) issues. The TGR has published
the European Framework Document for Risk-informed In-service Inspection, and is
producing more detailed recommended practices and discussion documents on several RI-
ISI-specific issues.
Structural Reliability Models (SRMs) are commonly used to evaluate failure probabilities in
the development of Risk-Informed In-service Inspection (RI-ISI) programmes. This report
summarises the Verification and Validation (V&V) requirements that a Structural Reliability
Model (SRM) and associated software should satisfy in order to be suitable for such
purposes.
The members of the ENIQ Task Group on Risk are: V Chapman (OJV Consultancy Ltd, UK),
C Cueto-Felgueroso (Tecnatom, Spain), A Eriksson (JRC, European Commission, the
Netherlands), C Faidy (EDF, France), R Fuchs (Leibstadt NPP, Switzerland), L Gandossi
(JRC, European Commission, the Netherlands), L Horacek (NRI, Czech Republic),
G Hultqvist (Forsmark Kraftgrupp AB, Sweden), W Kohlpaintner (E.ON Kernkraft, Germany),
P Lacaille (Areva, France), A Leijon (Ringhals AB, Sweden), A Toft (Serco Assurance,
United Kingdom), J Lotman (Forsmark Kraftgrupp AB, Sweden), K Nilsson (Ringhals AB,
Sweden), P O'Regan (EPRI, United States), T Schimpfke (GRS, Germany), B Shepherd
(Mitsui Babcock, United Kingdom), K Simola (VTT, Finland), J Slechten (Tractebel, Belgium),
A Walker (Rolls-Royce, United Kingdom), A Weyn (AIB-Vinçotte International, Belgium).
This ENIQ type 1 document was approved for publication by the ENIQ Steering Committee.
The authors of this report are Carlos Cueto-Felgueroso of Tecnatom, Kaisa Simola of VTT
and Luca Gandossi of DG JRC. Professional proofreading was carried out with the
assistance of DGT's Editing Unit.
The Steering Committee of ENIQ formally approved this Recommended Practice for
publication as an ENIQ report at the 32nd SC meeting held in Madrid (Spain) on 14 June
2007. The voting members of the Steering Committee of ENIQ are, in alphabetical order:
R Chapman (British Energy, United Kingdom), P Dombret (Tractebel, Belgium), K Hukkanen
(Teollisuuden Voima OY, Finland), P Krebs (Engineer Consulting, Switzerland), B Neundorf
(Vattenfall Europe Nuclear Energy, Germany; ENIQ chairman), J Neupauer (Slovenské
Elektrárne, Slovakia; ENIQ vice-chairman), S Pérez (Iberdrola, Spain), U Sandberg
(Forsmark NPP, Sweden), P Kopcil (Dukovany NPP, Czech Republic), D Szabó (Paks NPP,
Hungary). The European Commission representatives in the Steering Committee are
A Eriksson (JRC, European Commission; ENIQ Network Manager) and T Seldis (JRC,
European Commission; Scientific Secretary to ENIQ).
TABLE OF CONTENTS
1 INTRODUCTION
6 EXPERIMENTAL VALIDATION
11 REFERENCES
ACRONYMS
1 INTRODUCTION
Structural Reliability Models (SRMs) are commonly used to evaluate failure
probabilities in the development of Risk-Informed In-service Inspection (RI-ISI)
programmes. This report summarises the Verification and Validation (V&V)
requirements that a Structural Reliability Model (SRM) and associated software
should satisfy in order to be suitable for such purposes.
These requirements are based on work performed previously in this area, mainly
within the NURBIM project, in particular NURBIM report D2 Definition of a set of
criteria that should be met by a suitable structural reliability model [1], and NURBIM
report D4 WP-4, Review and benchmarking of SRMs and associated software [2].
2 BASIC DEFINITIONS
In the context of RI-ISI, a Structural Reliability Model (SRM) can be defined as an
engineering tool based on Probabilistic Fracture Mechanics (PFM) or other structural
reliability methods used to calculate component and piping failure probabilities.
Generally, PFM and structural reliability analyses involve deterministic analysis
procedures with random input variables. These analyses require numerical techniques
as implemented in computer programs.
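As a purely hypothetical illustration of this principle (not drawn from any actual SRM), the sketch below wraps a deterministic linear-elastic fracture mechanics check in a Monte Carlo loop. Every distribution and parameter value is an assumption chosen only for the example.

```python
import math
import random

def stress_intensity(sigma, a, y=1.12):
    """Deterministic procedure: K_I = Y * sigma * sqrt(pi * a)."""
    return y * sigma * math.sqrt(math.pi * a)

def failure_probability(n_samples=200_000, seed=1):
    """Monte Carlo estimate of P(K_I > K_Ic) with random inputs.

    The distributions below are illustrative assumptions, not data
    for any real component.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        sigma = rng.lognormvariate(math.log(150.0), 0.15)  # stress, MPa
        a = rng.lognormvariate(math.log(0.01), 1.0)        # crack depth, m
        k_ic = rng.normalvariate(120.0, 12.0)              # toughness, MPa*sqrt(m)
        if stress_intensity(sigma, a) > k_ic:
            failures += 1
    return failures / n_samples

print(f"Estimated failure probability: {failure_probability():.1e}")
```

The deterministic routine is unchanged inside the loop; only the inputs are sampled, which is exactly the structure described above.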
Verification is the process of demonstrating that the SRM software correctly
implements the intended model, i.e. that the equations are being solved right.
Validation is the process of demonstrating that the SRM software output faithfully
represents the real-world behaviour the program is intended to predict, i.e. that the
right equations are being solved.
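A typical verification exercise compares the code's output against a case with a known analytical solution. The sketch below (illustrative only, not taken from any qualified SRM package) checks a Monte Carlo routine against the closed-form failure probability for independent normal load and resistance:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def analytic_failure_prob(mu_r, sd_r, mu_s, sd_s):
    """Closed-form P(S > R) for independent normal load S and resistance R."""
    beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
    return phi(-beta)

def mc_failure_prob(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=7):
    """The same probability estimated by simple Monte Carlo sampling."""
    rng = random.Random(seed)
    fails = sum(
        rng.normalvariate(mu_s, sd_s) > rng.normalvariate(mu_r, sd_r)
        for _ in range(n)
    )
    return fails / n

# Verification check: the sampled estimate should agree with the exact value.
exact = analytic_failure_prob(300.0, 30.0, 200.0, 25.0)
mc = mc_failure_prob(300.0, 30.0, 200.0, 25.0)
```

Agreement here verifies the sampling machinery; it says nothing yet about whether the normal load/resistance model is valid for a real component.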
4 SCOPE AND BASIC MODELLING PRINCIPLES AND ASSUMPTIONS
A second fundamental requirement is a clear statement of the scope covered by the
SRM/software and the basic principles and assumptions that are specifically or
inherently included in it. This requirement has the following goals:
Many of the basic principles used in structural reliability modelling originate from
codified deterministic analysis. Thus, most of the numerical procedures are likely to
originate from the historical acceptance of these methods within that deterministic
environment. It is important, therefore, to ensure that any inherent pessimism, such as
hidden safety margins in the deterministic analytical procedures, is clearly identified.
On the other hand, one of the most significant factors is likely to be how the variables
within the model have been introduced. The choice of distributions to represent data
can make significant differences to a given estimate, especially if that estimate is
strongly dependent on the tails of the distribution. The reason behind a given choice
of distribution should be made clear and wherever possible mechanistic reasoning for
that choice should be given.
If no mechanistic reasoning can be given for an assumption and the choice is simply
based on a best-fit evaluation of the data, then the comparison with other distributions
should be investigated. Sensitivity analyses should always be carried out to assess
the influence of statistical distributions and choice of different parameters. The results
of such analyses should accompany the SRM software documentation.
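The sensitivity of a tail-driven estimate to the distribution choice is easy to demonstrate. The sketch below, with arbitrary illustrative numbers, compares a normal and a lognormal distribution matched to the same mean and standard deviation; far in the tail their exceedance probabilities differ by orders of magnitude:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tail_normal(mean, sd, x):
    """P(X > x) for a normal distribution."""
    return 1.0 - phi((x - mean) / sd)

def tail_lognormal(mean, sd, x):
    """P(X > x) for the lognormal with the same mean and standard deviation."""
    s2 = math.log(1.0 + (sd / mean) ** 2)   # moment matching
    mu = math.log(mean) - 0.5 * s2
    return 1.0 - phi((math.log(x) - mu) / math.sqrt(s2))

# Same first two moments, very different tails (values purely illustrative):
mean, sd, threshold = 1.0, 0.3, 2.5
p_norm = tail_normal(mean, sd, threshold)
p_logn = tail_lognormal(mean, sd, threshold)
```

Both distributions would fit the bulk of a data set about equally well, yet the lognormal exceedance probability at this threshold is roughly three orders of magnitude larger, which is why the choice must be justified and tested by sensitivity analysis.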
6 EXPERIMENTAL VALIDATION
As stated in [1], there is an inherent problem in trying to prove that a probability
prediction is true because such a probability is not a property of the component or
structure itself. However, it may be possible to demonstrate the validity of some of the
constituent parts that make up the model.
This form of validation is closely related to the previous section in that it is primarily
looking to further demonstrate that the principles and assumptions used in the model
are well founded. This can be achieved by running the model, probably in an adapted
or sub-element form, in order to reproduce experimental results that form the bases of
the mechanistic assumptions on which the model is built. The available experimental
data should be used to test as many different aspects of the proposed model as
possible.
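As a schematic example of validating one sub-element of a model, the snippet below fits the constants of a Paris-type crack growth law, da/dN = C (ΔK)^m, to growth-rate data by linear regression in log-log space. The data points here are synthetic stand-ins; a real validation would use actual test results.

```python
import math

# Synthetic crack-growth-rate data (delta_K in MPa*sqrt(m), da/dN in m/cycle),
# standing in for real experimental results:
data = [(10, 1.1e-8), (15, 3.2e-8), (20, 8.5e-8), (30, 2.6e-7), (40, 6.7e-7)]

def fit_paris(points):
    """Least-squares fit of log10(da/dN) = log10(C) + m * log10(delta_K)."""
    xs = [math.log10(dk) for dk, _ in points]
    ys = [math.log10(rate) for _, rate in points]
    n = len(points)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    c = 10 ** (ybar - m * xbar)
    return c, m

c, m = fit_paris(data)
```

Recovering the growth-law constants from the underlying test data, and checking the quality of the fit, is exactly the kind of sub-element check described above.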
When comparing failure information from historical databases, several aspects and
potential difficulties must be borne in mind. A broad discussion is given in [3].
Generally, the historical failure data provides a point estimate determined by simply
adding all the known passive component failures together and dividing by the total
pipe population data, expressed for instance in weld-years. However, this data is
derived from a wide variety of conditions, environments and loads, among other
factors that influence failure probability. If this data is to be used to validate SRM
software predictions in some way, then the SRM software must be run so as to
represent the world data against which it is to be compared. This type of comparison
cannot be completed unless the necessary data is available, which is not normally the
case. On the other hand, qualitative trends between historical failure data and SRM
software predictions can be more readily compared.
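The point estimate described above, together with a common approximate upper bound for the zero-failure case, can be sketched as follows (illustrative helper functions and numbers, not taken from any cited database):

```python
import math

def failure_rate(failures, weld_years):
    """Point estimate: observed failures divided by exposure in weld-years."""
    return failures / weld_years

def zero_failure_upper_bound(weld_years, conf=0.95):
    """Approximate upper confidence bound on the rate when no failures have
    been observed (the 'rule of three' generalised): -ln(1 - conf) / T."""
    return -math.log(1.0 - conf) / weld_years

# Illustrative numbers only:
rate = failure_rate(4, 200_000)            # 2e-5 per weld-year
upper = zero_failure_upper_bound(100_000)  # ~3e-5 per weld-year at 95%
```

The simplicity of these estimators underlines the point made above: the resulting number averages over very different conditions, environments and loads, and is only comparable to an SRM prediction run against a matching population.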
In addition, large uncertainties inevitably exist with respect to rare events such as
gross structural failures and failures of large pipes. More data is available on identified
cracks and small leakages, which could be used for validation of the SRM software
with the limitations stated above. Furthermore, the use of expert elicitation could be
considered to check whether the SRM predictions are credible.
implicit assumption about the correctness of one of the two models, i.e. the one that is
being used to benchmark the other. There is also the question of whether any two
SRM software packages should provide the same answer. If the models
use different assumptions about the failure criteria or some other modelling
assumption, then they will probably give different answers to a given problem. While
recognising that there are limitations with this type of approach to V&V of any given
SRM software, its outcome should still show consistency of the results obtained with
the compared SRM software or it should provide a clear understanding of where any
differences originate.
This approach was undertaken within the NURBIM project for fatigue and stress
corrosion cracking [2]. The results showed good consistency and the differences were
consistent with the assumptions and approximations made in the analyses. However,
not all features of SRM software relevant to RI-ISI programmes (e.g. leakage
evaluation and detection) could be compared.
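A minimal way to summarise such a benchmark (a hypothetical helper with illustrative numbers, not the NURBIM procedure itself) is to express the case-by-case differences between two codes in decades:

```python
import math

def compare_predictions(probs_a, probs_b):
    """Case-by-case log10 ratio of two codes' failure-probability predictions:
    0 means identical, 1 means a full decade apart."""
    return [math.log10(a / b) for a, b in zip(probs_a, probs_b)]

# Hypothetical benchmark results from two SRM codes on the same two cases:
diffs = compare_predictions([1e-6, 2e-5], [2e-6, 1e-5])
```

A tabulation of this kind makes it easy to see whether the codes are consistent and, where they are not, to trace each discrepancy back to a specific modelling assumption.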
Expert judgment can be the result of informal or formal processes, the former being
the way expert judgment has traditionally been used, through the expert's implicit and
undocumented reasoning, inferences and scientific knowledge. In contrast, more
recent formal uses of expert judgment exist that are explicit, structured and well
documented. They attempt to reveal assumptions and reasoning that are at the basis
of a judgment and to quantify and document them so that they can be appraised by
others [4].
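Formal elicitation protocols typically combine individual estimates through an explicit, documented pooling rule. One common choice for rare-event probabilities is a weighted geometric mean, sketched below with hypothetical numbers:

```python
import math

def geometric_pool(estimates, weights=None):
    """Weighted geometric-mean pooling of expert probability estimates,
    a common rule for combining rare-event probabilities."""
    if weights is None:
        weights = [1.0 / len(estimates)] * len(estimates)
    return math.exp(sum(w * math.log(p) for w, p in zip(weights, estimates)))

# Three hypothetical expert estimates of the same failure probability:
pooled = geometric_pool([1e-6, 1e-5, 1e-4])
```

Making the weights and the pooling rule explicit is precisely what distinguishes a formal process from the informal, undocumented use of expert judgment.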
1) The basic programming can be shown to have suitable quality assurance
documentation.
2) The scope, analytical assumptions and limitations of the modelling
capability are well defined.
3) The analytical assumptions in 2) are well grounded and based on theory
that is accepted as representative of the situations considered by the
given SRM.
4) The model is capable of reproducing the data on which its analytical
assumptions are based and examples are provided that demonstrate its
general agreement with available experimental data.
5) Attempts have been made to show how the model compares with the
world or field data, accepting the inherent limitations of this data.
6) The model has been benchmarked against other SRM models within the
same field or scope and any differences are adequately explained.
Further key elements within an SRM software package are recognised as being:
It is also recognised that a continued effort to update and validate the SRM software is
necessary. The performance monitoring and feedback process included in Risk-
Informed approaches should be used to this end.
11 REFERENCES
[1] O. J. V. Chapman, Definition of a set of criteria that should be met by a suitable
structural reliability model, NURBIM report D2, May 2004.
[2] Review and benchmarking of SRMs and associated software, NURBIM report D4, WP-4.
ACRONYMS
ENIQ: European Network for Inspection and Qualification
RI-ISI: Risk-Informed In-service Inspection
PFM: Probabilistic Fracture Mechanics
SRM: Structural Reliability Model
V&V: Verification and Validation
European Commission
Authors
Carlos Cueto-Felgueroso Tecnatom
Kaisa Simola VTT Technical Research Centre of Finland
Luca Gandossi DG-JRC-IE
Abstract
Structural Reliability Models (SRMs) are commonly used to evaluate failure probabilities in
the development of Risk-Informed In-service Inspection (RI-ISI) programmes. This report
summarises the Verification and Validation (V&V) requirements that a Structural Reliability
Model (SRM) and associated software should satisfy in order to be suitable for such purposes.
These requirements are mainly based on the work performed within the NURBIM project.
The mission of the Joint Research Centre is to provide customer-driven scientific and technical
support for the conception, development, implementation and monitoring of EU policies. As a service
of the European Commission, the JRC functions as a reference centre of science and technology for
the Union. Close to the policy-making process, it serves the common interest of the Member States,
while being independent of special interests, whether private or national.