
Guidelines for Performing Probabilistic Analyses of Boiler Pressure Parts

Effective December 6, 2006, this report has been made publicly available in accordance with Section 734.3(b)(3) and published in accordance with Section 734.7 of the U.S. Export Administration Regulations. As a result of this publication, this report is subject to only copyright protection and does not require any license agreement from EPRI. This notice supersedes the export control restrictions and any proprietary licensed material notices embedded in the document prior to publication.


WARNING: Please read the License Agreement on the back cover before removing the Wrapping Material.

Technical Report

Guidelines for Performing Probabilistic Analyses of Boiler Pressure Parts


1000311

Final Report, December 2000

EPRI Project Manager R. Tilley

EPRI 3412 Hillview Avenue, Palo Alto, California 94304 PO Box 10412, Palo Alto, California 94303 USA 800.313.3774 650.855.2121 askepri@epri.com www.epri.com

DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES


THIS DOCUMENT WAS PREPARED BY THE ORGANIZATION(S) NAMED BELOW AS AN ACCOUNT OF WORK SPONSORED OR COSPONSORED BY THE ELECTRIC POWER RESEARCH INSTITUTE, INC. (EPRI). NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, THE ORGANIZATION(S) BELOW, NOR ANY PERSON ACTING ON BEHALF OF ANY OF THEM: (A) MAKES ANY WARRANTY OR REPRESENTATION WHATSOEVER, EXPRESS OR IMPLIED, (I) WITH RESPECT TO THE USE OF ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT, INCLUDING MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, OR (II) THAT SUCH USE DOES NOT INFRINGE ON OR INTERFERE WITH PRIVATELY OWNED RIGHTS, INCLUDING ANY PARTY'S INTELLECTUAL PROPERTY, OR (III) THAT THIS DOCUMENT IS SUITABLE TO ANY PARTICULAR USER'S CIRCUMSTANCE; OR (B) ASSUMES RESPONSIBILITY FOR ANY DAMAGES OR OTHER LIABILITY WHATSOEVER (INCLUDING ANY CONSEQUENTIAL DAMAGES, EVEN IF EPRI OR ANY EPRI REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES) RESULTING FROM YOUR SELECTION OR USE OF THIS DOCUMENT OR ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT.

ORGANIZATION(S) THAT PREPARED THIS DOCUMENT

Engineering Mechanics Technology

ORDERING INFORMATION
Requests for copies of this report should be directed to the EPRI Distribution Center, 207 Coggins Drive, P.O. Box 23205, Pleasant Hill, CA 94523, (800) 313-3774. Electric Power Research Institute and EPRI are registered service marks of the Electric Power Research Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power Research Institute, Inc. Copyright 2000 Electric Power Research Institute, Inc. All rights reserved.

CITATIONS
This report was prepared by

Engineering Mechanics Technology
4340 Stevens Creek Blvd., Suite 166
San Jose, CA 95129

Principal Investigator
D. Harris

This report describes research sponsored by EPRI. The report is a corporate document that should be cited in the literature in the following manner:

Guidelines for Performing Probabilistic Analyses of Boiler Pressure Parts, EPRI, Palo Alto, CA: 2000. 1000311.


REPORT SUMMARY

Using probabilistic methodologies for life assessment of boiler components provides a more realistic basis for managing the inspection, maintenance, repair, and replacement actions for those components. Such a realistic basis also couples with economic parameters to allow utilities to make better overall decisions in their efforts to reduce operating and maintenance costs. In the competitive environment for power generation, more accurate assessments of risks and benefits must be incorporated into utility decision making. Probabilistic techniques are a proven way to refine the basis for such decisions. This document reviews some life prediction methodologies and discusses relevant statistical principles. It provides guidelines on the generation and use of such results in maintenance and inspection planning.

Background
In the past, maintenance decisions and corresponding expenditures could be based on engineering analyses using approximate models for component damage mechanisms and adding conservative safety factors to account for both model and data inaccuracies. However, because of the emerging competitive environment and more financially oriented management, utilities are finding that such conservative approaches are non-optimum in balancing costs and benefits. Decisions need to be justified from an economic point of view that better incorporates risks of equipment failure. Probabilistic techniques have been used in other industries to provide such a risk-based bridge to economic decision making. Recent EPRI analytical models such as the Boiler Life Evaluation and Simulation System (BLESS) incorporate options to allow probabilistic analyses. Utilities can now use these options to improve decisions on boiler component inspections, maintenance, repair, and replacement.

Objectives
- To review probabilistic methodologies for use in boiler component life management
- To provide guidelines on the generation and use of such results in maintenance and inspection planning

Approach
Using a probabilistic approach, the probability of failure within a certain time range can be estimated, rather than providing a deterministic failure time. The deterministic result uses a single set of input variables and is used with a safety factor to account for inaccuracies in the model and its input. Probabilistic results are generated by running a series of analyses in which key input parameters are varied to reflect the actual variation occurring in the population of similar components. These results are generally expressed as failure probabilities (or failure rates) versus time. The time for remedial action can be keyed to the time at which the failure rate becomes excessive. This document concentrates on boiler pressure parts, but much of the discussion is readily applicable to other components that degrade due to material aging. The probabilistic approach reviewed in this document is based on an underlying mechanistic model of lifetime. Representative lifetime models are reviewed with a focus on boiler pressure parts. Special attention is paid to probabilities associated with crack initiation and growth, which are leading causes of material degradation and component failure. Statistical background information is provided in the area of probabilistic structural analysis. The development of probabilistic models of component lifetimes is also discussed.

Results
When using probabilistic methodologies for component life management, the following points need to be kept in mind:
- Lifetime models are available
- Scatter and uncertainties in inputs to the models usually preclude accurate deterministic results
- Probabilistic lifetime models can be obtained by quantifying the scatter and uncertainty and incorporating them into the underlying deterministic lifetime model
- Numerical procedures for generation of failure probabilities are available, and numerical results can usually be obtained using a personal computer
- Probabilistic results can be used in analyses of expected future operating costs, which are of great use in component life management
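The sampling approach described above can be illustrated with a minimal Monte Carlo sketch. This is not from the report; the lifetime model (crack growing from an initial to a critical depth at a constant rate) and all distribution parameters below are hypothetical placeholders chosen only to show the mechanics: sample the scattered inputs, compute a deterministic failure time for each sample, and report the fraction of samples that fail before a given time.

```python
import math
import random

def sample_failure_time(rng):
    # Hypothetical deterministic lifetime model: time for a crack to grow
    # from initial depth a0 to critical depth ac at a constant rate.
    # All distribution parameters are illustrative placeholders.
    a0 = rng.lognormvariate(math.log(0.1), 0.3)     # initial crack depth, inches
    ac = rng.lognormvariate(math.log(1.0), 0.2)     # critical crack depth, inches
    rate = rng.lognormvariate(math.log(1e-4), 0.5)  # growth rate, inches/hour
    return (ac - a0) / rate                         # hours to failure

def failure_probability(t_hours, trials=10000, seed=1):
    """Fraction of sampled components that fail by t_hours."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials) if sample_failure_time(rng) <= t_hours)
    return failures / trials

# The cumulative failure probability rises with time, as in the
# failure-probability-versus-time curves discussed in the report.
p1 = failure_probability(2000.0)
p2 = failure_probability(10000.0)
assert 0.0 <= p1 <= p2 <= 1.0
```

With a fixed seed the same samples are reused at both times, so the estimated cumulative probability is monotone in time by construction.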

EPRI Perspective
As utilities come under increased pressure to reduce costs and extend the lifetime of plant components, interest has increased in procedures for rationally planning inspection and maintenance. EPRI and other organizations have facilitated the use of risk principles to prioritize maintenance actions. Building on software tools such as the BLESS code and processes such as those developed for extending intervals between turbine maintenance outages (TURBO-X), this report provides guidance for the use of probabilistic approaches in managing boiler component life. This information can then be incorporated into component cost-benefit models to optimize overall costs.

Keywords
Probabilistic analysis
Boiler components
Life assessment


EPRI Licensed Material

CONTENTS

1 INTRODUCTION ... 1-1

2 REVIEW OF DETERMINISTIC LIFE PREDICTION PROCEDURES FOR BOILER PRESSURE PARTS ... 2-1
  2.1 Crack Initiation ... 2-1
    2.1.1 Fatigue Crack Initiation ... 2-1
    2.1.2 Creep Crack Initiation ... 2-3
    2.1.3 Creep/Fatigue Crack Initiation ... 2-6
    2.1.4 Oxide Notching ... 2-7
  2.2 Crack Growth ... 2-8
    2.2.1 Crack Tip Stress Fields ... 2-8
    2.2.2 Crack Driving Force Solutions ... 2-10
    2.2.3 Calculation of Critical Crack Sizes ... 2-13
    2.2.4 Fatigue Crack Growth ... 2-13
    2.2.5 Creep Crack Growth ... 2-14
    2.2.6 Creep/Fatigue Crack Growth ... 2-15
  2.3 Simple Example Problems ... 2-18
    2.3.1 Fatigue of a Crack in a Large Plate ... 2-18
    2.3.2 Creep Damage in a Thinning Tube ... 2-19

3 SOME STATISTICAL BACKGROUND INFORMATION ... 3-1
  3.1 Probability Density Functions ... 3-1
  3.2 Fitting Distributions ... 3-4
  3.3 Combinations of Random Variables ... 3-11
    3.3.1 Analytical Methods ... 3-11
    3.3.2 Monte Carlo Simulation - Principles ... 3-12
    3.3.3 Monte Carlo - Confidence Intervals ... 3-16
    3.3.4 Monte Carlo Simulation - Importance Sampling ... 3-19
    3.3.5 First Order Reliability Methods - Basics ... 3-21
    3.3.6 First Order Reliability Methods - General ... 3-27

4 DEVELOPMENT OF PROBABILISTIC MODELS FROM DETERMINISTIC BASICS ... 4-1
  4.1 Discussion ... 4-1
  4.2 Simple Example Problems ... 4-4
    4.2.1 Fatigue Crack Growth in a Large Plate ... 4-4
    4.2.2 Creep Damage in a Thinning Tube ... 4-11
    4.2.3 Hazard Rates ... 4-15
  4.3 Inspection Detection Probabilities ... 4-18

5 EXAMPLE OF A PROBABILISTIC ANALYSIS ... 5-1
  5.1 Gathering the Necessary Information ... 5-1
    5.1.1 Component Geometry and Material ... 5-1
    5.1.2 Operating Conditions ... 5-3
  5.2 Performing the Analysis ... 5-6
  5.3 Combining Data ... 5-12

6 USE OF PROBABILISTIC RESULTS ... 6-1
  6.1 Target Hazard Rates ... 6-1
  6.2 Economic Models ... 6-5

7 CONCLUDING REMARKS ... 7-1

A DETAILS OF BLESS EXAMPLE ... A-1

B REFERENCES ... B-1



LIST OF FIGURES
Figure 2-1 Strain Life Data for A106B Carbon Steel in Air at 550F (290C) With Median Curve Fit [From Keisler 95] ... 2-2
Figure 2-2 Creep Rupture Data for 1 Cr Mo With Curve Fit [From Grunloh 92] (1 ksi = 6.895 MPa) ... 2-4
Figure 2-3 Creep/Fatigue Damage Plane Showing Combinations Corresponding to Crack Initiation ... 2-6
Figure 2-4 Crack-Like Defect Initiated by Oxide Notching ... 2-7
Figure 2-5 Depiction of Procedure for Determination of Oxide Thickness for a Time at Tlo Followed by a Time at Thi ... 2-8
Figure 2-6 Coordinate System Near a Crack Tip ... 2-9
Figure 2-7 Through-Crack of Length 2a in a Large Plate Subject to Stress σ ... 2-11
Figure 2-8 Single Edge-Cracked Strip in Tension With J-Solution ... 2-12
Figure 2-9 Fatigue Crack Growth as a Function of the Cyclic Stress Intensity Factor for 2 Cr 1 Mo at Various Temperatures [Drawn From Viswanathan 89] ... 2-14
Figure 2-10 Creep and Creep/Fatigue Crack Growth Data and Fits. Left Figure Is for Constant Load and Right Figure Is for Cyclic Load With Various Hold Times [From Grunloh 92] ... 2-17
Figure 2-11 Half-Crack Length as a Function of the Number of Cycles to 20 ksi (137.9 MPa) for Example Fatigue Problem ... 2-19
Figure 2-12 Time to Failure as a Function of the Wall-Thinning Rate for the Creep Rupture Example Problem (1 Mil/Year = 25.4 µm/yr) ... 2-21
Figure 3-1 Plot of Data of Table 3-2 on Log-Linear Scales ... 3-8
Figure 3-2 Lognormal Probability Plot of Data of Table 3-2 ... 3-8
Figure 3-3 Normal Probability Plot of Data of Table 3-2 ... 3-9
Figure 3-4 Cumulative Probability of the Sum of Two Lognormals as Computed by Numerical Integration and Monte Carlo With 20 and 500 Trials ... 3-15
Figure 3-5 Values of Factors in Table 3-5 Divided by the Number of Failures ... 3-19
Figure 3-6 Cumulative Probability of the Sum of Two Lognormals as Computed by Numerical Integration and Monte Carlo Simulation With 20 Trials, With and Without Importance Sampling ... 3-21
Figure 3-7 Pictorial Representation of Joint Density Function in Unit Variate Space Showing Failure Curve and Most Probable Failure Point (MPFP) ... 3-22
Figure 3-8 Plot of the Performance Function in Reduced Variate Space for the Example Problem of Two Lognormals for z=2. The Origin Is at the Upper Right Corner, and the Most Probable Failure Point Is Indicated ... 3-24
Figure 3-9 Cumulative Probability of the Sum of Two Lognormals as Computed by the First Order Reliability Method and Numerical Integration ... 3-25
Figure 3-10 Example of a Performance Function With the Vector Normal to the Axis of One of the Variables Showing the Insensitivity of β to That Variable ... 3-26
Figure 3-11 The Direction Cosines of x and y for the Example Problem of the Sum of Two Lognormals ... 3-27
Figure 3-12 Diagrammatic Representation of a Procedure for Finding the Most Probable Failure Point ... 3-31
Figure 4-1 Probabilistic Treatment of Strain-Life Data for A106B Carbon Steel in Air at 550F (288C) Showing Various Quantiles of the Keisler Curve Fit [From Keisler 95] ... 4-3
Figure 4-2 Cumulative Distribution of Critical Crack Size for the Fatigue Crack Growth Example Problem (10^4 Trials) ... 4-6
Figure 4-3 Lognormal Probability Plot of the Failure Probability as a Function of the Number of Cycles for the Fatigue Example Problem (10^4 Trials) ... 4-7
Figure 4-4 Plot on Lognormal Scales of the Distribution of Cycles to Failure for the Example Fatigue Crack Growth Problem and the Same Problem With the Mean and Standard Deviation Divided by Two (10^6 Trials) ... 4-7
Figure 4-5 Cumulative Failure Probabilities in the Lower Probability Region of the Fatigue Crack Growth Example Problem With Two Distributions of the Fracture Toughness (10^6 Trials) ... 4-8
Figure 4-6 Cumulative Failure Probabilities in the Lower Probability Region for the Fatigue Crack Growth Example Problem. Solid Line Is Monte Carlo With 10^6 Trials, Points Are Results From Rackwitz-Fiessler ... 4-9
Figure 4-7 Cumulative Failure Probabilities in the Lower Probability Region for the Fatigue Crack Growth Problem as Obtained Using Importance Sampling With 1000 Trials With Different Shifts in Parameters of Input Random Variables. Points Are From Rackwitz-Fiessler ... 4-10
Figure 4-8 Direction Cosines for the Fatigue Crack Growth Example Problem as a Function of the Number of Cycles to Failure ... 4-11
Figure 4-9 Creep Rupture Data for 1 Cr 1/2 Mo as Obtained From Van Echo 66 With Least Squares Linear Fit (Stress in ksi, TA in Degrees Rankine, tR in Hours) ... 4-13
Figure 4-10 Cumulative Distribution of the Random Variable A Used to Describe the Scatter in the Larson-Miller Data for 1 Cr 1/2 Mo Steel ... 4-13
Figure 4-11 Results of Example Problem of Creep Damage in a Thinning Tube as Obtained by Monte Carlo Simulation With 10^4 Trials ... 4-15
Figure 4-12 Hazard Function vs. Cycles for the Fatigue Crack Growth Example Problem ... 4-17
Figure 4-13 Failure Rate as a Function of Time for the Thinning Problem Only. Histogram Results Are for the Smallest 2,000 Failure Times in 10^6 Trials, the Line Is for the Closed Form Result ... 4-18
Figure 4-14 Nondetection Probability as a Function of Crack Depth Divided by Plate Thickness for Fatigue Cracks in Ferritic Steel for Three Qualities of Ultrasonic Inspection ... 4-21
Figure 4-15 Failure Probability as a Function of Time for the Inspection Example Problem ... 4-22
Figure 4-16 Comparison of Fatigue Example Problem of Infinite Plate With Finite Thickness Results With No Inspection Using PRAISE Code ... 4-23
Figure 5-1 Schematic Representation of Typical Header With Illustration of Segment Considered in Analysis ... 5-2
Figure 5-2 Summary of Header Geometry Analyzed (Length Dimensions in Inches) ... 5-3
Figure 5-3 Log-Linear Plot of Probability of Leak as a Function of Time for Oxide Notching Initiation and Creep Fatigue Crack Growth in Header Example Problem (2,000 Trials) ... 5-7
Figure 5-4 Lognormal Probability Plot of BLESS Results for the Header Example Problem (Probability in Percent) ... 5-7
Figure 5-5 Lognormal Hazard Function for the Header Example Problem ... 5-9
Figure 5-6 Comparison of Hazard as a Function of Time as Obtained From the BLESS Output Data and the Fitted Lognormal Relation ... 5-10
Figure 5-7 Results of Header Example Problem With Varying Shifts and Number of Trials. The Solid Line Is the Curve Fit Based on the Estimated Lognormal Distribution With No Shifts (the Line of Figure 5-6) ... 5-12
Figure 6-1 Log-Log Plot of Frequency Severity Data for Boiler Components From Table 6-2 ... 6-4



LIST OF TABLES
Table 2-1 Table of the Function h1(a/b, n) in the J-Solution for an Edge Cracked Plate in Tension for Plane Strain [From Shih 84] ... 2-12
Table 3-1 Characteristics of Some Commonly Encountered Probability Functions Used to Describe Scatter and Uncertainty ... 3-3
Table 3-2 Values of Cf for Fatigue Crack Growth Data ... 3-7
Table 3-3 Information on Parameters of Distribution of Cf Data of Table 3-2 ... 3-10
Table 3-4 Results of Monte Carlo Example With 20 Trials ... 3-14
Table 3-5 Summary of Some of the Statistics From the Monte Carlo Trials ... 3-15
Table 3-6 Confidence Upper Bounds on Ntrpf for Various Numbers of Failures ... 3-18
Table 3-7 Coordinates and Direction Cosines of the MPFP for Example Problem With z = 2 ... 3-25
Table 3-8 Intermediate Steps in Iterative Procedure for Finding the MPFP for the Example Problem With z=3 ... 3-32
Table 3-9 Intermediate Steps in Iterative Procedure for Finding the MPFP for the Example Problem With z=2 ... 3-33
Table 4-1 Random Variables for the Fatigue Crack Growth Example Problem ... 4-5
Table 4-2 Steps in Estimating the Hazard Function for the Fatigue Crack Growth Example Problem ... 4-16
Table 4-3 Parameters of the Equation Describing the Non-Detection Probability ... 4-20
Table 5-1 Summary of Time-Variation of Operating Conditions ... 5-5
Table 5-2 Summary of Initiation Time for Various Operating Scenarios ... 5-8
Table 6-1 Examples of Hazards of Common Activities as Measured by Fatality Rate ... 6-2
Table 6-2 Partial List of Boiler Component Failure Rate and Consequences ... 6-3
Table 6-3 Calculation of Expected Cost of Continuing Operation of Example Header for Another 20 Years ... 6-7
Table 6-4 Calculation of Expected Cost of Replacing Header Now and Then Operating for Another 20 Years ... 6-8



1
INTRODUCTION
Deregulation of the electric utility industry has led to increased competition in the generation of electrical power, which in turn has led to an increasing need for systematic means of inspection and maintenance planning. Power plant components are subject to aging due to a variety of mechanisms and must occasionally be replaced or repaired. It is not economical to replace them earlier than necessary, nor is it economical to run them until they cause an unscheduled outage or safety problem. Hence, it is desirable to define an optimum time range for remedial action. The purpose of this document is to review probabilistic methodologies for use in component life management and to provide guidelines on the generation and use of such results in maintenance and inspection planning. The points to be made are that:

- Lifetime models are available
- There are scatter and uncertainties in inputs to the models that usually preclude accurate deterministic results
- Probabilistic lifetime models can be obtained by quantifying the scatter and uncertainty and incorporating them into the underlying deterministic lifetime model
- Numerical procedures for generation of failure probabilities are available, and personal computers are becoming so fast and cheap that generation of numerical results is usually not a problem
- The probabilistic results generated (failure probability as a function of time) are of direct use in analyses of expected future operating costs, which are of great use in component life management

This document reviews some life prediction methodologies, followed by a discussion of relevant statistical principles. Examples of probabilistic analyses are provided, including the analysis of a header using the EPRI BLESS software. The use of the failure probability results in a run/retirement decision is demonstrated. All of this information is available elsewhere, but not in a single convenient document. Guidance on exercising the resulting probabilistic models is given, and interpretation of the results is discussed. Due to uncertainties and inherent randomness in parameters that determine component life, the precise time of failure can rarely be accurately defined. To account for inherent inaccuracies, conservative safety factors are often applied to deterministic results. Use of probabilistic approaches provides a more useful way of accounting for analysis uncertainties. Using a probabilistic approach, the probability of failure within a certain time range can be estimated, rather than providing a deterministic failure time. The availability of probabilistic information can be viewed as a plus, because this can lead directly to application of risk-based concepts in


run/replace/retire decisions. Risk is conventionally defined as the product of the probability of failure and the consequences of failure. Hence, one component of risk (the failure probability) is a direct outcome of probabilistic analyses. The time for remedial action can be keyed to the time at which the failure rate becomes excessive, with "excessive" being based on the level of risk. Including consequences in the process allows attention to be focused on the items of importance. Items with a high failure rate but low consequences do not require the level of attention that would be warranted if only the failure rate were used in the decision process. Another advantage of using failure probability is that it can be used to estimate the future expected cost of failure, which can be an important component of expected future operating cost. These expected costs can be incorporated into financial models to optimize equipment life management over an entire group of plants. These costs are expressed in terms that are readily communicated to utility management. This document concentrates on boiler pressure parts, but much of the discussion is readily applicable to other components that degrade due to material aging.
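The risk definition and expected-cost idea above can be sketched numerically. The failure probabilities, consequence costs, and discount rate below are hypothetical illustrations, not values from the report, and the constant annual failure probability is a simplification (real hazard rates vary with time).

```python
def expected_failure_cost(annual_failure_prob, failure_cost, years, discount_rate=0.08):
    """Present value of expected failure costs over a planning horizon,
    assuming a constant annual failure probability (a simplification)."""
    total = 0.0
    for year in range(1, years + 1):
        total += annual_failure_prob * failure_cost / (1.0 + discount_rate) ** year
    return total

# Risk = failure probability x consequence. A rare but very costly failure
# can pose more risk than a frequent but cheap one (numbers hypothetical).
risk_header = 1e-4 * 5_000_000   # low rate, costly forced outage
risk_tube = 1e-2 * 10_000        # higher rate, inexpensive tube repair
assert risk_header > risk_tube

# Expected cost of failure over 20 years, discounted to present value.
cost = expected_failure_cost(1e-3, 5_000_000, years=20)
assert 0 < cost < 1e-3 * 5_000_000 * 20   # discounting keeps it below the raw sum
```

This is the quantity that, as the text notes, can be fed into financial models alongside replacement and inspection costs to support run/replace/retire decisions.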



2
REVIEW OF DETERMINISTIC LIFE PREDICTION PROCEDURES FOR BOILER PRESSURE PARTS
The probabilistic approach reviewed in this document is based on an underlying mechanistic model of lifetime. Hence, such models are fundamental to the approach, and this section will review some representative lifetime models. Boiler pressure parts are subject to material degradation by a wide variety of mechanisms. Boiler pressure parts are considered to be headers (superheater, reheater, and economizer), tubing (superheater and reheater), and pipes. Viswanathan 89 provides a comprehensive summary of degradation mechanisms and life prediction approaches, with his Chapter 5 being devoted to boiler components. Material degradation in boiler pressure parts is mostly due to creep and/or fatigue. This can involve crack initiation and/or crack growth. Corrosion, pitting, oxidation, and erosion can also be problems. Oxidation and fire-side corrosion can be troublesome in tubing, but will not be covered here. Viswanathan 89 and 92 provide information on these topics. Crack initiation in headers by oxide notching is an important degradation mechanism, and the model used in BLESS [Grunloh 92] is discussed and considered in an example problem. The life prediction methodologies for creep and fatigue are quite different. Also, the procedures to be employed for crack initiation are quite different from those for crack growth. The crack growth methodologies discussed here are based on fracture mechanics and are applicable when the lifetime is controlled by the behavior of a single (or a few) dominant cracks.

2.1

Crack Initiation

Crack initiation can occur due to fatigue, stress corrosion cracking, creep, oxide notching, or a combination of these. Stress corrosion will not be discussed here. Fatigue crack initiation occurs due to cyclic loading and may occur in the absence of time-dependent material response. In contrast, creep crack initiation occurs due to time spent in the stress and temperature range where time-dependent material response (creep) is important. Creep generally is not a problem in metals at temperatures less than about 1/3 to 1/2 of the absolute melting temperature. In the steels used in electrical power generating plants, this is about 800F (427C). Oxide notching is a problem that is aggravated by load cycling.

2.1.1 Fatigue Crack Initiation

The initiation of fatigue cracks is due to cyclic loading and can occur at temperatures well below the range where creep is important. This is the degradation mechanism most familiar to

EPRI Licensed Material Review of Deterministic Life Prediction Procedures for Boiler Pressure Parts

engineers. The cyclic lifetime is measured in the laboratory as a function of the cyclic stress (or strain) level, and expressed as an S-N curve. The data is most often in terms of the cycles to failure, rather than cycles to initiation. Figure 2-1 provides an example, which is from Keisler 95 and is for A106 carbon steel in air at 550F (290C). The amount of scatter in the data is usually large, and the desirability of a probabilistic approach is immediately apparent. The data in Figure 2-1 are actually the cycles for a 25% load drop in the test, which corresponds approximately to a 3 mm deep crack. The data is in terms of the strain amplitude (one-half the peak-to-peak value). The following functional form is often used to represent fatigue data:

\varepsilon_a = B\,N^{-b} + A

Eq. 2-1

The line in Figure 2-1 is a plot of the best fit obtained for this data by Keisler and Chopra [Keisler 95], which corresponds to A=0.11, B=27.47, and b=0.534.

Figure 2-1 Strain Life Data for A106B Carbon Steel in Air at 550F (290C) With Median Curve Fit [From Keisler 95]
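As a quick numeric illustration of Equation 2-1, the fit can be evaluated and inverted in a few lines. This is a sketch only: the strain amplitude is assumed here to be in percent, consistent with a fatigue limit of A = 0.11.

```python
import math

# Strain-life fit of Eq. 2-1 with the Keisler 95 constants quoted in the
# text (A = 0.11, B = 27.47, b = 0.534; strain amplitude assumed in percent).
A, B, b = 0.11, 27.47, 0.534

def strain_amplitude(N):
    """Strain amplitude (Eq. 2-1) for a given number of cycles N."""
    return B * N**(-b) + A

def cycles_to_failure(eps_a):
    """Invert Eq. 2-1 for N; valid only for eps_a above the fatigue limit A."""
    if eps_a <= A:
        return math.inf  # below the fatigue limit of the fit
    return ((eps_a - A) / B)**(-1.0 / b)

N = cycles_to_failure(0.3)        # cycles at 0.3% strain amplitude
eps_back = strain_amplitude(N)    # round trip back to 0.3
```

The inversion is exact for this functional form, so the round trip recovers the input amplitude.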


In cases where the cyclic stress level varies during the lifetime, as is usually the case due to different loads, the cycles to failure is generally computed by use of Miner's rule. The damage per cycle of strain amplitude ε is taken to be equal to 1/N(ε), so the damage for n cycles of amplitude ε is n/N(ε). The total damage is then the sum for each of the contributors, and failure is taken to occur when the damage totals to unity. For instance, if a given time period consists of n_i cycles of strain amplitude ε_i, and there are L different amplitudes of cycling, then the fatigue damage for this time period is

D_f = \sum_{i=1}^{L} \frac{n_i}{N(\varepsilon_i)}

Eq. 2-2
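Miner's rule (Equation 2-2) can be sketched directly; the duty-cycle numbers below are hypothetical, and N(ε) is obtained by inverting the Equation 2-1 fit.

```python
# Miner's-rule damage sum of Eq. 2-2 for one hypothetical operating period,
# with N(eps) from the Eq. 2-1 fit (Keisler 95 constants; strain in percent).
A, B, b = 0.11, 27.47, 0.534

def N_of_eps(eps_a):
    """Cycles to failure at strain amplitude eps_a, from inverting Eq. 2-1."""
    return ((eps_a - A) / B)**(-1.0 / b)

# (n_i cycles, strain amplitude eps_i) pairs -- illustrative values only
duty_cycle = [(200, 0.40), (1000, 0.25), (5000, 0.15)]

D_f = sum(n_i / N_of_eps(eps_i) for n_i, eps_i in duty_cycle)

# number of such periods until the damage sum reaches unity
periods_to_failure = 1.0 / D_f
```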

The value of N(ε_i) is obtained from Equation 2-1. The number of time periods to failure is the number of time periods to reach D_f = 1. Failure can be final failure of a specimen, the presence of a crack, or, in the particular case of the Keisler data, a crack of about 3 mm in size. Fracture mechanics principles can then be used to grow the crack once it initiates. The accumulated fatigue damage does not change the stresses in a complex body, except for the presence of the crack.

2.1.2 Creep Crack Initiation

Creep cracks can initiate after a period of time under steady loading. The lifetimes of uniaxial tensile specimens are typically measured for a range of temperatures as a function of the applied stress. The rupture lifetimes are often then plotted as a function of a so-called Larson-Miller parameter, which is defined as

\mathrm{LMP} = T_A\,[C + \log(t_R)]

Eq. 2-3

T_A is the absolute temperature. C is evaluated as part of the curve-fitting procedure. Figure 2-2 is a typical plot of creep rupture data for 1Cr-1/2Mo steel. A plot of the best fit is also shown, but a considerable amount of scatter is again observed, and the usefulness of a probabilistic approach is apparent.


Figure 2-2 Creep Rupture Data for 1Cr-1/2Mo With Curve Fit [From Grunloh 92] (1 ksi = 6.895 MPa)

The following expression is the curve fit shown in Figure 2-2, with C in Equation 2-3 equal to 20:

\mathrm{LMP} = 42869 - 5146\,[\log\sigma] - 956\,[\log\sigma]^2

Eq. 2-4

The log is to the base 10, the temperature is in degrees Rankine (1R = 5/9K), the stress is in thousands of pounds per square inch (ksi), and the rupture time is in hours. Equations 2-3 and 2-4 can be considered to provide a function that gives the rupture time as a function of stress and temperature, t_R(σ, T). The time to rupture for varying stress conditions is often evaluated by the creep counterpart of Miner's rule, which is called Robinson's rule. For a set of times t_i spent at a stress σ_i and temperature T_i, the creep damage is evaluated by use of the expression

D_c = \sum_{i=1}^{L} \frac{t_i}{t_R(\sigma_i, T_i)}

Eq. 2-5
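Equations 2-3 through 2-5 combine into a small rupture-time and damage calculation; the operating blocks below are hypothetical values used only for illustration.

```python
import math

# Rupture time from the Larson-Miller fit (Eqs. 2-3 and 2-4, C = 20, stress
# in ksi, temperature in Rankine), plus a Robinson's-rule sum (Eq. 2-5).
C = 20.0

def lmp(sigma_ksi):
    """Larson-Miller parameter from the Eq. 2-4 curve fit."""
    x = math.log10(sigma_ksi)
    return 42869.0 - 5146.0 * x - 956.0 * x**2

def t_rupture(sigma_ksi, T_rankine):
    """Rupture time in hours: Eq. 2-3 rearranged as log10(tR) = LMP/T - C."""
    return 10.0**(lmp(sigma_ksi) / T_rankine - C)

# hypothetical operating blocks: (hours, stress in ksi, temperature in R)
blocks = [(50000.0, 6.0, 1460.0),
          (20000.0, 8.0, 1480.0)]
D_c = sum(t / t_rupture(s, T) for t, s, T in blocks)
```

Higher stress or higher temperature shortens the rupture time, and the damage sum accumulates accordingly.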


Failure is considered to occur when the creep damage reaches unity. Failure is the rupture of a laboratory specimen, or, in a larger component, can be considered to be the initiation of a creep crack.

An alternative procedure for considering creep damage has been suggested that is usually associated with the names of Kachanov and Rabotnov. Skrzypek and Hetnarski [Skrzypek 93] provide a recent summary. This is the continuum creep damage mechanics methodology, which has the advantage of being easily expanded to complex geometries and stress/temperature histories. Finite element procedures can easily be implemented. Unlike fatigue damage, creep straining and associated damage can result in appreciable redistribution of stresses relative to initial elastic conditions. Such factors are readily treated by continuum creep damage mechanics. The discussion here is limited to simple stress conditions. The creep damage enters into the relation between the creep strain rate and the stress and temperature. In the simplest case of uniaxial tension, the creep rate is often expressed by a so-called Norton relation. The following form contains a term to account for temperature variations and describes the minimum strain rate (which is also known as the steady-state or secondary creep rate):

\dot{\varepsilon} = A\,e^{-Q/T}\,\sigma^n

Eq. 2-6

The parameter Q is the activation energy for creep divided by the gas constant. T is the absolute temperature. Creep damage can be included in the stress-strain rate relation in the following way:

\dot{\varepsilon} = A\,e^{-Q/T}\left[\frac{\sigma}{1-\omega}\right]^n

Eq. 2-7

The term ω is the creep damage, which accumulates according to the relation

\frac{d\omega}{dt} = \frac{1}{(1+\phi)\,t_R(\sigma, T)\,(1-\omega)^{\phi}}

Eq. 2-8

The only additional material constant is φ, which can be evaluated from data on the tertiary creep characteristics of the material (the increase in strain rate that occurs as failure is approached). Failure occurs when ω = 1. For constant stress and temperature conditions this corresponds to failure at t_R for that stress and temperature. For stress and temperature that vary in a known fashion, Equation 2-8 can be used to evaluate the time to failure by separating variables and integrating. The initial damage is ω = 0 and failure occurs when ω = 1. This leads directly to the following relation

\int_0^{t_R} \frac{dt}{t_R[\sigma(t), T]} = 1

Eq. 2-9

This is the counterpart of Robinson's rule (Equation 2-5) expressed as an integral rather than a sum. An example of the application of continuum creep damage mechanics to a simple problem is provided in Section 2.3.2.
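The damage kinetics of Equation 2-8 can be checked numerically: at constant stress and temperature, ω should reach unity at exactly t_R. A minimal sketch with illustrative values of φ and t_R:

```python
# Forward-Euler integration of the damage-rate equation (Eq. 2-8) at constant
# stress and temperature.  With tR fixed, failure (omega -> 1) occurs at
# t = tR analytically; the numerical result should land very close to that.
# phi and t_R are illustrative values.
phi = 3.0
t_R = 1.0e5            # hours, rupture time at this stress/temperature
dt = t_R / 200000.0    # time step

omega, t = 0.0, 0.0
while omega < 1.0 and t < 2.0 * t_R:
    # damage increment per step; the rate blows up as omega approaches 1
    omega += dt / ((1.0 + phi) * t_R * max(1.0 - omega, 1e-12)**phi)
    t += dt

t_fail = t   # numerical failure time, to be compared with t_R
```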


In some instances, primary creep can be important, and is often included as another term in the creep rate-stress relation. Typically, the following relation is employed:

\dot{\varepsilon} = \left[B\,(1+p)\,e^{-Q/T}\,\sigma^m\right]^{1/(1+p)} t^{-p/(1+p)} + A\,e^{-Q/T}\,\sigma^n

Eq. 2-10

No creep damage is included in this expression. The first term is the primary creep strain rate, and the second term is the secondary creep strain rate. B, m, p, Q, A, and n are material constants that are obtained from curve fits to creep strain-time test data. They are considered to be independent of temperature, at least over a limited range of temperature. There are ways to include both of these terms in one expression [Stouffer 96] and include creep damage at the same time. Such representations are much more convenient to include in finite element computations for life prediction of complex geometries, but they are beyond the scope of these guidelines.

2.1.3 Creep/Fatigue Crack Initiation

Creep/fatigue crack initiation is based on the fatigue and creep damage expressed by Equations 2-2 and 2-5, respectively. It is tempting to assume that failure (crack initiation) will occur when the sum of the creep and fatigue damage is equal to unity. However, it has been experimentally observed that there is some interaction between the damage mechanisms, and failure is considered to occur when the combined creep and fatigue damage falls outside a line on a creep-fatigue damage plot. Figure 2-3 schematically shows the usual treatment.

Figure 2-3 Creep/Fatigue Damage Plane Showing Combinations Corresponding to Crack Initiation


2.1.4 Oxide Notching

Crack-like defects can be initiated by the growth and cracking of oxide layers. Figure 2-4 is a photomicrograph of a crack-like defect that has initiated due to repeated cracking of the oxide scale during start-stop cycles.

Figure 2-4 Crack-Like Defect Initiated by Oxide Notching

This initiation mechanism is considered in a deterministic fashion in the BLESS software [Grunloh 92, Harris 93]. A corresponding probabilistic treatment is not available. Section 4.2.1 of Grunloh 92 provides the details of the oxide notching model in BLESS. In this case, the growth of the steam-side oxide under constant temperature conditions is expressed as

h_{ox} = C_1\,e^{-C_2/T}\,t^{C_3}

Eq. 2-11

The values of C_1 through C_3 are taken to be deterministically defined. The temperature is in absolute degrees. Simple procedures for evaluation of the oxide thickness when temperature varies are given by Grunloh 92 and are depicted in Figure 2-5 for a time t_1 at T_1 and t_2 at T_2. The oxide thickness is taken to continuously increase as long as the adjacent metal temperature is greater than T_lo-ox. If the metal temperature decreases below T_lo-ox, the oxide is assumed to crack, and the crack depth is incremented by an amount equal to the increment in the oxide thickness since the last time it cracked. The oxide thickness is then rezeroed and grown during subsequent times above T_lo-ox. Once the oxide notch crack depth reaches a specified depth, it is considered to be an initiated crack that then grows by fracture mechanics principles, as discussed in Section 2.2. The value of T_lo-ox is 700F (371C) in BLESS and the depth of notching at which fracture mechanics principles take over is 0.030 inches (0.76 mm).
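The oxide-notching bookkeeping described above can be sketched as follows. The constants C1-C3, the operating temperature, and the cycle pattern are hypothetical illustrative values, not the BLESS constants; only the 0.030-inch hand-off depth comes from the text.

```python
import math

# Sketch of the BLESS-style oxide-notching bookkeeping: the steam-side oxide
# grows per Eq. 2-11 while the metal is hot, cracks at each shutdown below
# Tlo-ox, and the notch deepens by the oxide grown since the last crack.
C1, C2, C3 = 0.01, 8000.0, 0.5   # hypothetical constants; thickness in inches
T_hot = 1460.0                    # hot operating metal temperature, Rankine

def oxide_thickness(t_hours, T):
    """Oxide thickness (Eq. 2-11) after t_hours at absolute temperature T."""
    return C1 * math.exp(-C2 / T) * t_hours**C3

notch_depth = 0.0
n_cycles, hours_per_run = 25, 2000.0
for _ in range(n_cycles):
    # oxide grown during one hot period (rezeroed at the previous shutdown)
    h_ox = oxide_thickness(hours_per_run, T_hot)
    # shutdown below Tlo-ox: the oxide cracks and the notch advances
    notch_depth += h_ox

initiation_depth = 0.030   # inches; BLESS hands off to fracture mechanics here
initiated = notch_depth >= initiation_depth
```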


Figure 2-5 Depiction of Procedure for Determination of Oxide Thickness for a Time at Tlo Followed by a Time at Thi

2.2 Crack Growth

The initiation of a crack most often does not mean that the component has reached the end of its useful life. Fracture mechanics procedures can be used to estimate the remaining time to grow the crack to the point where it will pose a significant risk to continued operation. Similarly, cracks may initially be present in a component, and fracture mechanics is again called for.

2.2.1 Crack Tip Stress Fields

Fracture mechanics principles are most often based on the analysis of the stresses near a crack tip. The stresses depend on the stress-strain relation of the material, which, for uniaxial tension, can often be expressed as

\varepsilon = \left(\frac{\sigma}{D}\right)^n

Eq. 2-12

When n = 1 and D = E, this is the familiar Hooke's law of linear elasticity, and ε is the elastic strain. When n ≠ 1, this can represent nonlinear elastic behavior, which is the same as plasticity as long as no unloading occurs; the strain is then the plastic strain. This is the Ramberg-Osgood representation of plasticity. If the strain is a rate, rather than a strain directly, then this can represent the secondary creep relation of Equation 2-6, and it is readily applied to primary creep also. Hence, Equation 2-12 can be used to describe a variety of material responses.


Figure 2-6 shows the coordinate system near a crack tip.

Figure 2-6 Coordinate System Near a Crack Tip

The deformation field near a crack tip in a homogeneous isotropic body whose stress-strain relation is given by Equation 2-12 is characterized by the so-called Hutchinson-Rice-Rosengren singularity and is given as [Kanninen 85, Kumar 81, Anderson 95]

\sigma_{ij} = \left[\frac{J\,D^n}{I_n\,r}\right]^{1/(n+1)} \tilde{\sigma}_{ij}(\theta, n)

\varepsilon_{ij} = \left[\frac{J}{D\,I_n\,r}\right]^{n/(n+1)} \tilde{\varepsilon}_{ij}(\theta, n)

u_i = \left[\frac{J}{D\,I_n}\right]^{n/(n+1)} r^{1/(n+1)}\,\tilde{u}_i(\theta, n)

Eq. 2-13

where \tilde{\sigma}_{ij}, \tilde{\varepsilon}_{ij} and \tilde{u}_i are dimensionless tabulated functions [Shih 83] and I_n is a dimensionless constant [Anderson 95, Kanninen 87] that depends on n and whether the conditions are plane stress or plane strain. These equations show that i) the stresses and strains are large as r approaches zero, ii) the deformation field (for a given n) always has the same spatial variation, and iii) the magnitude of the field (for a given D and n) is controlled by the single parameter J. Dimensional considerations require that J have the units of Dr, which is (stress)x(length) or (F/L). J is a measure of the crack driving force. The parameter J is Rice's J-integral, which is the value of the strain energy release rate with respect to crack area (joules/m2, in-lb/in2, etc.). Specific examples of J solutions are given in Section 2.2.2. The case of linear elasticity is when n in Equation 2-12 is equal to 1. This case is of particular interest, and Equations 2-13 can be written explicitly as follows:

\sigma_x = \frac{K}{\sqrt{2\pi r}}\,\cos\frac{\theta}{2}\left(1 - \sin\frac{\theta}{2}\,\sin\frac{3\theta}{2}\right)

\sigma_y = \frac{K}{\sqrt{2\pi r}}\,\cos\frac{\theta}{2}\left(1 + \sin\frac{\theta}{2}\,\sin\frac{3\theta}{2}\right)

\tau_{xy} = \frac{K}{\sqrt{2\pi r}}\,\sin\frac{\theta}{2}\,\cos\frac{\theta}{2}\,\cos\frac{3\theta}{2}

Eq. 2-14


As expected, the stresses are controlled by a single parameter, which is denoted as K and is called the stress intensity factor. K and J are related to one another by the expression

J = \begin{cases} \dfrac{K^2}{E} & \text{plane stress} \\[4pt] \dfrac{K^2\,(1-\nu^2)}{E} & \text{plane strain} \end{cases}

Eq. 2-15
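Equations 2-14 and 2-15 lend themselves to a short numerical check; the values of K, E and ν below are illustrative, and the stress is evaluated directly ahead of the tip (θ = 0).

```python
import math

# Near-tip stress from Eq. 2-14 and the J-K conversion of Eq. 2-15.
# K in ksi-in^0.5, r in inches, E in ksi -- illustrative values.
def sigma_y(K, r, theta):
    """sigma_y of Eq. 2-14."""
    return (K / math.sqrt(2.0 * math.pi * r)) * math.cos(theta / 2.0) * (
        1.0 + math.sin(theta / 2.0) * math.sin(3.0 * theta / 2.0))

def J_from_K(K, E, nu=0.3, plane_strain=True):
    """Eq. 2-15: J in units of (stress)x(length), e.g. in-kips/in^2."""
    return K**2 * (1.0 - nu**2) / E if plane_strain else K**2 / E

K = 50.0                              # ksi-in^0.5
s = sigma_y(K, r=0.01, theta=0.0)     # stress 0.01 in ahead of the tip
J = J_from_K(K, E=30000.0)            # plane strain
```

Note the 1/sqrt(r) amplification: even at r = 0.01 in, the local stress greatly exceeds any remote applied stress of comparable magnitude to K.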

Equation 2-14 shows that K has the units of (stress)x(length)^1/2. (E is Young's modulus and ν is Poisson's ratio.) Specific examples of K solutions are discussed in Section 2.2.2. If the strain in Equation 2-12 is replaced by a strain rate, the stress-strain rate relation is as given in Equation 2-6. Equations 2-13 still describe the stress and strain rate field near the crack tip, and the field is controlled by a single parameter which is the rate analog of the J-integral, denoted as C*. If primary creep is also included, as in Equation 2-10, then C* is applicable to the secondary creep and there is another parameter to account for the primary creep. This parameter is referred to as C_h^*, and is the parameter controlling the crack tip stress field when primary creep is dominant.

2.2.2 Crack Driving Force Solutions

Equation 2-13 shows that the stress-strain-displacement field near a crack tip is controlled by a single parameter J. As Equation 2-14 shows, if n = 1, then the parameter K controls the field, but K is related to J by Equation 2-15. The magnitude and type of loading, as well as the geometry of the cracked body, have an influence on the crack tip fields, and this influence enters into the expression for J or K, which are referred to here as the crack driving forces. From dimensional considerations, the stress intensity factor K has dimensions of (stress) x (square root of length). For the linear problems to which K is applicable, K must vary linearly with the applied loads. For a through-crack in a large plate, such as is shown in Figure 2-7, K must be proportional to σ√a, because a is the only length available.


Figure 2-7 Through-Crack of Length 2a in a Large Plate Subject to Stress σ

The proportionality constant turns out to be π^1/2, as obtained from the limiting case of the elasticity solution for stresses near the tip of an elliptical hole in a plate. In general, stress intensity factor solutions can be written as

K = \sigma\sqrt{\pi a}\;F(\text{geometry})

Eq. 2-16

There are numerous such K-solutions available for a wide range of crack configurations and loadings. Tada, Paris and Irwin [Tada 00] is an example of such a compendium. If the crack driving force is expressed in terms of J, which has units of in-lb/in2 or joules/m2, the crack driving force can be written as

J = \frac{\sigma^{n+1}}{D^n}\,a\;G(\text{geometry}, n)

Eq. 2-17

For the simple problem of Figure 2-7, with n = 1 (so that D = E), G = π. If the material is creeping, then J is replaced by C* and the strain ε is replaced by the strain rate \dot{\varepsilon}. J-solutions are tabulated in Kanninen 87, Anderson 95 and Kumar 81. As an example of a J-solution, consider the edge-cracked strip in tension shown in Figure 2-8. The expression for J is given in the figure. The function h_1(α, n) has been determined by finite element computations and tabulated [Shih 84]. Table 2-1 is the table for plane strain.


J = \frac{h_1(\alpha, n)}{D^n}\,\frac{\sigma^{n+1}\,a}{(1-\alpha)^n\,(\eta\beta)^{n+1}}

\alpha = a/h

\beta = \left[1 + \left(\frac{\alpha}{1-\alpha}\right)^2\right]^{1/2} - \frac{\alpha}{1-\alpha}

\eta = \begin{cases} 1.455 & \text{plane strain} \\ 1.072 & \text{plane stress} \end{cases}

Figure 2-8 Single Edge-Cracked Strip in Tension With J-Solution

Table 2-1 Table of the Function h1(α, n) in the J-Solution for an Edge-Cracked Plate in Tension for Plane Strain [From Shih 84]

a/h     n=1    2      3      5      7      10     13     16     20
0.125   5.01   7.17   9.09   12.7   16.3   21.7   27.3   34.1   45.2
0.250   4.42   5.20   5.16   4.54   3.87   3.02   2.38   1.90   1.48
0.375   3.97   3.48   2.88   1.92   1.28   0.704  0.396  0.225  0.111
0.500   3.45   2.62   2.02   1.22   0.754  0.373  0.188  0.0952 0.0391
0.625   2.89   2.16   1.70   1.11   0.744  0.420  0.243  0.142  0.0710
0.750   2.38   1.86   1.55   1.13   0.858  0.585  0.409  0.290  0.186
0.875   1.93   1.62   1.43   1.18   1.00   0.812  0.672  0.563  0.452
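When a/h falls between the tabulated points, h1 must be interpolated. A minimal sketch, using a few rows of Table 2-1 and simple linear interpolation along a/h:

```python
# Linear interpolation of the plane-strain h1(alpha, n) values of Table 2-1
# along a/h for a tabulated hardening exponent n, as needed when evaluating
# the J-solution of Figure 2-8 at intermediate crack depths.
ALPHAS = [0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875]
H1 = {                        # selected rows of Table 2-1, keyed by n
    1:  [5.01, 4.42, 3.97, 3.45, 2.89, 2.38, 1.93],
    5:  [12.7, 4.54, 1.92, 1.22, 1.11, 1.13, 1.18],
    10: [21.7, 3.02, 0.704, 0.373, 0.420, 0.585, 0.812],
}

def h1(alpha, n):
    """h1 at a/h = alpha for a tabulated hardening exponent n."""
    xs, ys = ALPHAS, H1[n]
    if not xs[0] <= alpha <= xs[-1]:
        raise ValueError("a/h outside tabulated range")
    for i in range(len(xs) - 1):
        if alpha <= xs[i + 1]:
            f = (alpha - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + f * (ys[i + 1] - ys[i])

h_mid = h1(0.3125, 5)   # midway between the 0.250 and 0.375 entries
```

Interpolation in n (between tabulated hardening exponents) would be handled analogously if needed.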


A variety of other geometries have been analyzed with solutions analogous to the one shown in Figure 2-8; see, for instance, Kanninen 87, Kumar 81, and Anderson 95. Many more crack cases have been analyzed for the linear problem, because the procedures involved (usually finite elements) are more straightforward and linear superposition is applicable. Tada 00 is an example of a compendium of stress intensity solutions.

2.2.3 Calculation of Critical Crack Sizes

Since the stresses and strains surrounding a crack tip are controlled by the value of the J-integral, a reasonable failure criterion is that failure occurs when the applied value of J equals some critical value, J_Ic. The value of J_Ic is obtained in a test. This criterion is often valid and has been widely used. There are many complications, however, including the influence of non-singular terms on the stresses and strains, as well as increasing resistance of the material to crack growth as the crack extends. These complications will not be considered here; Anderson 95 provides details. In the case of conditions where the body remains substantially elastic, the failure criterion can be expressed in terms of the stress intensity factor, with a critical value being denoted as K_Ic. The critical value of J or K is usually called the fracture toughness.

2.2.4 Fatigue Crack Growth

Since the stress-strain field near the tip of a crack is controlled by a single parameter, it is reasonable to presume that the rate of growth of a crack in a body subject to cyclic loading (da/dN) is controlled by the cyclic value of the crack tip stress parameter. For linearly elastic bodies, the cyclic parameter is ΔK, which is equal to K_max − K_min. This has been observed to be the case for a wide variety of metals and conditions, and the following relation is often found to provide a good fit to data:

\frac{da}{dN} = C_f\,\Delta K^m

Eq. 2-18

This is the so-called Paris relation and is a suitable representation under a wide variety of conditions. At extremes of crack growth rates, such as very slow (< ~10^-7 in/cycle) or quite rapid (> ~10^-3 in/cycle), the crack growth behavior can deviate from this relation, and more complex representations are appropriate. The Forman relation is widely used in such instances; see, for instance, Henkener 93. Figure 2-9 is an example of fatigue crack growth data. The material is 2-1/4Cr-1Mo steel at various temperatures. The figure is drawn from Viswanathan 89. The room temperature fit is also shown, and the dashed line is a least squares curve fit to the 1100F (593C) data. Both of the lines in Figure 2-9 are of the form of Equation 2-18, that is, the Paris relation. This figure shows that the fatigue crack growth rate is not a strong function of temperature, with data for 700F (371C) being comparable to the room temperature line. The 1100F (593C) data is considerably above the data for the lower temperatures, however. The amount of scatter in the data is seen to be quite large, especially for the 1100F (593C) data. This suggests the use of a probabilistic treatment. The values of Cf and m for the lines in Figure 2-9 are as follows:


                     Cf           m
Room temperature     1.41x10^-11  3.85
1100F (593C)         8.07x10^-10  2.95
The values of Cf are applicable when K is in ksi-in1/2 (1 ksi-in1/2=1.099 MPa-m1/2) and da/dN is in inches per cycle.

Figure 2-9 Fatigue Crack Growth as a Function of the Cyclic Stress Intensity Factor for 2-1/4Cr-1Mo at Various Temperatures [Drawn From Viswanathan 89]

2.2.5 Creep Crack Growth

The stresses near a crack tip in a body that is undergoing secondary creep according to Equation 2-6 are controlled by the parameter C*, which is the time analog of the J-integral. Hence, it would be reasonable for the creep crack growth rate (da/dt) to be controlled by the value of C*. This has been observed to be the case, but many complicating factors arise. The primary restriction is that the body must be fully in the steady-state creep range; elastic and primary creep strain rates must be negligible. Even if the material exhibits no primary creep, there is still an elastic response that must be considered. Under the case of secondary creep dominating, Equation 2-19 has been found to provide a good representation of data:

\frac{da}{dt} = C_c\,(C^*)^q

Eq. 2-19


When the strain rates consist of elastic, primary and secondary creep rates, the situation becomes more complex. A variety of procedures have been suggested, such as described in Riedel 87, Saxena 98, and Bloom and Malito [Bloom 92]. The approach of Bloom is to consider a time-dependent crack driving force that has terms corresponding to elastic, primary creep, and secondary creep. The crack driving force is referred to as C_t(t) and is given as

C_t(t) = \frac{(1-\nu^2)\,K^2}{E\,(n+1)\,t} + C_h^*\left[\frac{n+p+1}{(n+1)(p+1)}\,\frac{1}{t}\right]^{p/(1+p)} + C^*

Eq. 2-20

The first line is the elastic transient that occurs on initial loading, the second line is the primary creep, and the third line is the steady-state (secondary) creep. As time becomes very large, the third line dominates.

The parameter C_h^* is the primary creep analog of the steady-state parameter C*. It is obtainable from the J-solution by replacing (1/D^n) with

\left[B\,(1+p)\,e^{-Q/T}\right]^{1/(1+p)}

The creep crack growth rate is then considered to be related to C_t(t) by use of Equation 2-19 with C* replaced by C_t(t). This provides the relation

\frac{da}{dt} = C_c\,[C_t(t)]^q

Eq. 2-21

2.2.6 Creep/Fatigue Crack Growth

When cyclic loading occurs at temperatures and cycling rates where creep is important, the increment of crack growth per cycle has been found to be related to the average value of C_t(t) during the cycle plus a fatigue contribution. The growth per cycle is given by the expression

\Delta a\big|_{\text{cycle}} = C_f\,\Delta K^{m_f} + C_c\,(C_{t,\text{ave}})^q\,t_h

Eq. 2-22

The average value of Ct is given by


C_{t,\text{ave}} = \frac{1}{t_h}\int_{t_o}^{t_h} C_t(t)\,dt

Eq. 2-23

In this expression, t_h is the duration of the time at load and t_o is a small time, such as the rise time of the loading. Figure 2-10 provides an example of creep crack growth and creep/fatigue crack growth data. The data is for 1Cr-1/2Mo steel at 1000F (538C). The left-hand part of the figure is for creep crack growth (i.e., no load cycling), and the right-hand part of the figure is for cyclic loading with various hold times. The line in the figure is the best fit to the data and corresponds to C_c = 0.0246 and q = 0.825 in Equation 2-21. (The value of C_c is for C_t,ave in kips/inch-hour and crack growth rates in inches per hour; 1 kip/inch-hour = 1.75x10^5 J/m^2-hr.) There is a considerable amount of scatter observed in Figure 2-10, which suggests the usefulness of a probabilistic approach.
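Equation 2-22 can be evaluated directly. The constants Cc and q below are the Figure 2-10 fit values quoted above; the fatigue constants, ΔK, Ct,ave and the hold times are assumed illustrative numbers, not data from the text.

```python
# Per-cycle growth of Eq. 2-22: a fatigue term plus a hold-time creep term.
# Cc and q are the Figure 2-10 fit values (Ct,ave in kips/inch-hour, creep
# growth in inches per hour); the rest are illustrative assumptions.
Cc, q = 0.0246, 0.825
Cf, mf = 1.41e-11, 3.85      # room-temperature Paris constants of Figure 2-9

def growth_per_cycle(dK, Ct_ave, t_hold):
    """Eq. 2-22: crack advance per cycle, inches."""
    return Cf * dK**mf + Cc * Ct_ave**q * t_hold

da_short = growth_per_cycle(dK=30.0, Ct_ave=1.0e-4, t_hold=1.0)     # 1-hr hold
da_long = growth_per_cycle(dK=30.0, Ct_ave=1.0e-4, t_hold=100.0)    # 100-hr hold
```

For long hold times the creep term dominates the per-cycle growth, which is the trend seen in the right-hand part of Figure 2-10.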


Figure 2-10 Creep and Creep/Fatigue Crack Growth Data and Fits. Left Figure Is for Constant Load and Right Figure Is for Cyclic Load With Various Hold Times [From Grunloh 92].


2.3 Simple Example Problems

Two simple problems are presented to serve as demonstrations of the procedures involved in life prediction. The problems in this section are deterministic. Their probabilistic counterparts are included in Section 4.2.

2.3.1 Fatigue of a Crack in a Large Plate

Consider a through-crack in a large plate, such as shown in Figure 2-7. The initial half-crack length is a_o, and the plate is subject to a stress that cycles between 0 and Δσ. Hence the cyclic stress intensity factor is given by the expression

\Delta K = \Delta\sigma\sqrt{\pi a}

Eq. 2-24

The fatigue crack growth relation is the Paris relation of Equation 2-18. Failure occurs when the maximum applied stress intensity factor is equal to the critical value, K_Ic. The critical crack size, a_c, is

a_c = \frac{1}{\pi}\left[\frac{K_{Ic}}{\Delta\sigma}\right]^2

Eq. 2-25

A differential equation for the crack length as a function of the number of fatigue cycles is obtained by inserting the relation for ΔK into the Paris relation for the crack growth rate:

\frac{da}{dN} = C_f\,\Delta K^m = C_f\left(\Delta\sigma\sqrt{\pi a}\right)^m = C_f\,\Delta\sigma^m\,\pi^{m/2}\,a^{m/2}

This equation can be solved by separating variables and integrating, thereby providing the following end result for the cycles to failure, N_f, for a given initial crack size a_o:

N_f = \frac{2}{(m-2)\,C_f\,\Delta\sigma^m\,\pi^{m/2}}\left[\frac{1}{a_o^{(m-2)/2}} - \frac{1}{a_c^{(m-2)/2}}\right]

Eq. 2-26

As an example of the above relations, Figure 2-11 is a plot of a vs N for a Δσ of 20 ksi (137.9 MPa), an initial crack half-length of 0.050 inches (1.27 mm), and C_f and m for the room temperature line in Figure 2-9. The results of Figure 2-11 are fairly typical of fatigue crack growth problems with initial cracks that are quite small; not much happens for a long period, but once the crack starts to grow appreciably, it quickly becomes long. Also, the cycles to failure are not strongly influenced by the critical crack size if the initial size is small. If the critical crack size is larger than about 4 inches (100 mm), then the cycles to failure is nearly 1,300,000, independent of the critical crack size. This is a consequence of the nearly vertical slope of the line in Figure 2-11 as a exceeds about 2 inches (50 mm).
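Equation 2-26 can be checked against the numbers quoted in this example:

```python
import math

# Closed-form cycles-to-failure of Eq. 2-26 with the values used in the text:
# delta-sigma = 20 ksi, ao = 0.050 in, and the room-temperature Paris
# constants of Figure 2-9 (Cf = 1.41e-11, m = 3.85, K in ksi-in^0.5).
Cf, m = 1.41e-11, 3.85
d_sigma = 20.0
a_o = 0.050

def cycles_to_failure(a_c):
    """Eq. 2-26 for growth from a_o to the critical half-length a_c (inches)."""
    pre = 2.0 / ((m - 2.0) * Cf * d_sigma**m * math.pi**(m / 2.0))
    return pre * (a_o**(-(m - 2.0) / 2.0) - a_c**(-(m - 2.0) / 2.0))

N4 = cycles_to_failure(4.0)   # ~1.3 million cycles, as stated in the text
N8 = cycles_to_failure(8.0)   # doubling a_c adds very little life
```

The weak sensitivity to a_c confirms the observation above: almost all of the lifetime is spent while the crack is still small.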


Figure 2-11 Half-Crack Length as a Function of the Number of Cycles for Δσ = 20 ksi (137.9 MPa) for the Example Fatigue Problem

In most practical situations, a closed-form expression cannot be obtained for the crack size as a function of the number of cycles. Stress intensity factor solutions for realistic problems are more complex than in this example, so the integration cannot be performed. Another complicating factor is that the fatigue crack growth relation is usually more complex than the Paris relation. The above example demonstrates the principles involved, but realistic problems usually require numerical procedures for computation of crack sizes and lifetimes.

2.3.2 Creep Damage in a Thinning Tube

An example of creep rupture with wall thinning is presented as an example of creep life prediction. This example problem could be representative of a superheater/reheater tube, in which case the stresses are dominated by pressure and easily estimated. Consider a tube with constant internal pressure p, mean radius R, and a thickness that decreases with time (at a thinning rate λ) according to

h(t) = h_o - \lambda t

Eq. 2-27

The stress-strain rate relation is given by Equation 2-7 and the damage kinetics by Equation 2-8. Although the stress that controls creep rupture can be a combination of the principal stress, the equivalent stress, and the hydrostatic stress, for this example consider the maximum principal stress to be governing. This is the hoop stress due to the pressure, which is given by

\sigma(t) = \frac{pR}{h(t)} = \frac{pR}{h_o - \lambda t}

Eq. 2-28

Consider temperature to be fixed. The above equation can be used along with Equation 2-9 to obtain the time to failure. In general, numerical integration is necessary. As a simple example, if the curve fit in Figure 2-2 is taken to be a straight line, rather than the second order relation of


Equation 2-4, then a closed-form relation for the rupture time can be obtained. The following is a good linear representation of the data of Figure 2-2 for stresses less than 20 ksi (137.9 MPa):

\log\sigma = A - B\,(\mathrm{LMP}) = 6.615 - 1.538{\times}10^{-4}\,\mathrm{LMP}

Eq. 2-29

Using the definition of LMP from Equation 2-3, this can be rearranged to give the following expression for the rupture time:

t_R(\sigma, T) = 10^{-C + A/(BT)}\,\sigma^{-1/(BT)}

Eq. 2-30

Use the following definitions:

= 1 / BT
C R = 10 C + A / BT t T = time for wall to thin to zero = h / pR o = hoop stress at initial thickness = ho t C = time for creep rupture at intial stress = 10 C + A / BT C R = / BT 1 o o

The time to rupture with wall thinning and creep damage is then obtained by using Equation 2-30 for the rupture time and Equation 2-28 for the stress in conjunction with Equation 2-9. Using the above definitions and grinding through the algebra leads to the following relation for the rupture time t_R:

t_R = t_T\left\{1 - \left[1 + (\nu - 1)\,\frac{t_C}{t_T}\right]^{1/(1-\nu)}\right\}

Eq. 2-31

As an example, consider a 2.125 inch (54 mm) diameter 1Cr-1/2Mo tube with a wall thickness of 0.4 inches (10.2 mm) operating at 1,000F (1,460R = 811.1K) and 2,400 psi (16.55 MPa) pressure. Using the above definitions, the following values are obtained:

\nu = 1/(BT) = 1/(1460 \times 1.538{\times}10^{-4}) = 4.453

C_R = 10^{-C + A/(BT)} = 10^{-20 + 6.615/(1460 \times 1.538{\times}10^{-4})} = 2.88{\times}10^9

\sigma_o = 2.4 \times 1.0625/0.4 = 6.38\ \text{ksi}\ (44.0\ \text{MPa})

t_C = C_R\,\sigma_o^{-\nu} = 7.51{\times}10^5\ \text{hours} = 85.7\ \text{years}
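Equation 2-31 and the damage integral of Equation 2-9 can be cross-checked numerically using the values just computed; the thinning rate (entering through t_T) is an assumed illustrative value.

```python
# Cross-check of the closed-form rupture time (Eq. 2-31) against a direct
# numerical integration of the damage integral (Eq. 2-9).  nu and t_C are
# the worked values above; t_T is an assumed illustrative thinning time.
nu = 4.453
t_C = 7.51e5                # creep rupture time at the initial stress, hours
t_T = 4.0e5                 # assumed time for the wall to thin to zero, hours

t_R = t_T * (1.0 - (1.0 + (nu - 1.0) * t_C / t_T)**(1.0 / (1.0 - nu)))

# damage integral of Eq. 2-9 with t_R(sigma(t)) = t_C * (1 - t/t_T)**nu,
# evaluated by the midpoint rule; it should come out very close to 1
steps = 200000
dt = t_R / steps
D = sum(dt / (t_C * (1.0 - (i + 0.5) * dt / t_T)**nu) for i in range(steps))

# strong creep/thinning interaction: failure well before either mechanism alone
interaction = t_R < min(t_C, t_T)
```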


Figure 2-12 provides a plot of the failure time for various wall thinning rates λ. It is seen that there is a strong interaction between the creep and thinning degradation mechanisms, because the failure time is much smaller than if only one mechanism were acting.

Figure 2-12 Time to Failure as a Function of the Wall-Thinning Rate for the Creep Rupture Example Problem (1 mil/year = 25.4 μm/yr)

In most practical situations, a closed-form expression for the creep lifetime cannot be obtained, because of more complex stress and temperature histories and more complex geometries. The stresses in the above example are statically determinate, so the stress analysis is simple. In fact, creep strain and damage can result in large changes in stress in complex bodies, and detailed lifetime evaluations often require finite element computations. This simple example problem serves to show the principles involved.

The above discussion provides examples of deterministic lifetime models. Although such models are available, results obtained in their application to real plant components are subject to many sources of uncertainty, including scatter in material properties, uncertainty in service conditions (pressure, temperature, etc.), and derivation of model constants from test data. Probabilistic models account for these uncertainties and scatter, and are discussed in Section 4, but first some statistical background information is provided in Section 3.



3
SOME STATISTICAL BACKGROUND INFORMATION
Some background information on statistics is provided in areas of particular interest in probabilistic structural analysis. No attempt is made to be comprehensive, and those familiar with statistics can proceed directly to Section 4. The following basic references are suggested for additional information [Ang 75, Ang 84, Ayyub 97, Hahn 67, Wolstenholme 99]

3.1

Probability Density Functions

For a continuous random variable, x, the probability of x falling within a range of values is described by the probability density function, p(x). Mathematically, this is expressed as

probability that x falls between x and x + dx = p(x)\,dx
Eq. 3-1

Various characteristics of random variables are of use, the most common ones being the mean (or average) and the standard deviation. If a set of x data is available, the mean is given by the expression

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i

Eq. 3-2

and the standard deviation, commonly denoted as \sigma, is

\sigma = \left[\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2\right]^{1/2} = \left[\frac{1}{N-1}\left(\sum_{i=1}^{N}x_i^2 - N\bar{x}^2\right)\right]^{1/2}

Eq. 3-3

The square of the standard deviation is called the variance. Sometimes the N−1 in the denominator of Equation 3-3 is replaced by N. If N−1 is used, then the value of σ is called unbiased; if N is used, then σ is said to be biased. If N is large, it doesn't make much difference. The values of the mean and variance can be obtained from the probability density function (pdf) by use of the following:

\bar{x} = \int x\,p(x)\,dx

Eq. 3-4

\sigma^2 = \int (x - \bar{x})^2\,p(x)\,dx
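Equations 3-2 and 3-3 are a one-liner in most environments; a minimal sketch showing both the unbiased (N−1) and biased (N) forms, with arbitrary illustrative data:

```python
import math

def mean_std(data, unbiased=True):
    """Sample mean (Eq. 3-2) and standard deviation (Eq. 3-3)."""
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)
    return xbar, math.sqrt(ss / ((n - 1) if unbiased else n))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
xbar, s_unbiased = mean_std(data)              # N-1 in the denominator
_, s_biased = mean_std(data, unbiased=False)   # N in the denominator
print(xbar, s_unbiased, s_biased)  # 5.0, ~2.138, 2.0
```

As the text notes, for large N the two forms differ negligibly; here, with only eight points, the difference is about 7%.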



The coefficient of variation, cov, is equal to the standard deviation divided by the mean, and is often referred to. The probability that x is less than some value is often also of interest. This is referred to as the cumulative distribution function and is denoted as P(x). The cumulative distribution function is obtained from the probability density function by the equation

P(x) = \int_{-\infty}^{x} p(y)\,dy

Eq. 3-5

Since probabilities are always between 0 and 1, the maximum value of P(x) is 1 and Equation 3-5 implies that

\int_{-\infty}^{\infty} p(x)\,dx = 1

The probability that x is greater than some value is also of interest. This is known by various names, usually the complementary cumulative distribution, and sometimes the reliability or survivor function. The complementary cumulative distribution is equal to one minus the cumulative distribution. The median of a random variable is also often of interest. This is the value that is exceeded with a 50-50 chance. Mathematically, this is expressed as

\frac{1}{2} = \int_{-\infty}^{x_{50}} p(x)\,dx \quad\text{or}\quad P(x_{50}) = \frac{1}{2}

Eq. 3-6

Another item of interest is the failure rate. This is the probability of failure between x and x + dx given that failure has not already occurred. This is also called the hazard function, h(x), and is related to the pdf and the cumulative distribution by the expression

h(x) = \frac{p(x)}{1 - P(x)}
Eq. 3-7

There are many different probability density functions. Just about any non-negative function that integrates to unity can be used as a pdf. The type of distribution to be used in a given situation can be selected based on fits to data, theoretical considerations, convenience, or personal taste. The most compelling reason is fits to data, if sufficient data is available. The most commonly used distribution is the normal, or Gaussian, distribution. Any random variable that is the sum of many other random variables is approximately normally distributed, regardless of the distributions of the individual variables. This is a consequence of the central limit theorem. Table 3-1 summarizes several of the distributions most often encountered in probabilistic lifetime analysis. Their usefulness here is in describing the scatter or uncertainty in the variables entering into the lifetime model.


Table 3-1
Characteristics of Some Commonly Encountered Probability Functions Used to Describe Scatter and Uncertainty

uniform: pdf \frac{1}{b}; cumulative \frac{x-a}{b}; range (a, a+b); mean a + \frac{b}{2}; standard deviation \frac{b}{\sqrt{12}}. Comments: simplest.

exponential: pdf \frac{1}{\theta}e^{-x/\theta}; cumulative 1 - e^{-x/\theta}; range (0, \infty); mean \theta; standard deviation \theta. Comments: simple, has constant hazard function, Weibull with c = 1.

normal or Gaussian: pdf \frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\bar{x})^2/2\sigma^2}; cumulative \frac{1}{2}\left[1+\mathrm{erf}\left(\frac{x-\bar{x}}{\sigma\sqrt{2}}\right)\right]; range (-\infty, \infty); mean \bar{x}; standard deviation \sigma. Comments: most commonly encountered, erf(x) commonly tabulated.

lognormal: pdf \frac{1}{x\sigma\sqrt{2\pi}}e^{-[\ln(x/x_{50})]^2/2\sigma^2}; cumulative \frac{1}{2}\left[1+\mathrm{erf}\left(\frac{\ln(x/x_{50})}{\sigma\sqrt{2}}\right)\right]; range (0, \infty); mean x_{50}e^{\sigma^2/2}; standard deviation x_{50}e^{\sigma^2/2}\sqrt{e^{\sigma^2}-1}. Comments: log(x) is normally distributed, x50 is the median (see note 1).

Weibull: pdf \frac{c}{b}\left(\frac{x}{b}\right)^{c-1}e^{-(x/b)^c}; cumulative 1-e^{-(x/b)^c}; range (0, \infty); mean b\,\Gamma\!\left(1+\frac{1}{c}\right); standard deviation: see note 2. Comments: a favorite of many, simple cumulative distribution function, c = 1 is the exponential distribution, c = 2 is Rayleigh.

Notes:
1. x50 is equal to exp[mean of ln(x)]; σ is the standard deviation of ln(x).
2. The standard deviation of a Weibull variate is b\left\{\Gamma\!\left(1+\frac{2}{c}\right)-\left[\Gamma\!\left(1+\frac{1}{c}\right)\right]^2\right\}^{1/2}, where Γ is the gamma function, which is widely tabulated (see, for instance, Abramowitz 64).
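The Weibull mean and standard deviation formulas in Table 3-1 can be checked numerically with the widely tabulated gamma function; a small sketch (the parameter values b = 2, c = 2, i.e., a Rayleigh distribution, are arbitrary illustrations):

```python
import math
import random

def weibull_mean_std(b, c):
    # Mean b*Gamma(1+1/c) and standard deviation from Table 3-1, note 2
    g1 = math.gamma(1.0 + 1.0 / c)
    g2 = math.gamma(1.0 + 2.0 / c)
    return b * g1, b * math.sqrt(g2 - g1 ** 2)

b, c = 2.0, 2.0                     # c = 2 is the Rayleigh distribution
mean, std = weibull_mean_std(b, c)

random.seed(1)
samples = [random.weibullvariate(b, c) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
print(mean, std, sample_mean)       # mean = b*sqrt(pi)/2, ~1.772
```

The simulated mean agrees with the closed form to within sampling error, which is the kind of cross-check that is useful before these distributions are embedded in a lifetime model.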



A function that is often encountered is the so-called error function, erf(x). It is useful because it conveniently expresses the cumulative normal distribution, as shown in Table 3-1. The error function is defined as

\mathrm{erf}(x) \equiv \frac{2}{\sqrt{\pi}}\int_0^x e^{-y^2}\,dy

Eq. 3-8

The complementary error function is often also of use. It is one minus the error function, and is denoted as erfc(x). That is

\mathrm{erfc}(x) = 1 - \mathrm{erf}(x)

Eq. 3-9

The error function is related to the cumulative unit normal variate, which is the cumulative distribution for a normally distributed variate with zero mean and a standard deviation of unity. The standard normal variate is often tabulated in statistics texts, and is denoted as P(x) or \Phi(x). Using the \Phi(x) notation, the relationship to the error function is

\Phi(x) = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]

Eq. 3-10

The error function is widely tabulated, either as erf(x) or \Phi(x). Abramowitz and Stegun [Abramowitz 64] provide convenient tables and curve fits. They also provide curve fits for inverse error functions.
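Where tables were once needed, standard libraries now provide erf directly, and Equation 3-10 can be verified against a library implementation of the cumulative unit normal; a minimal sketch:

```python
import math
from statistics import NormalDist

# Eq. 3-10: the cumulative unit normal in terms of the error function
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (-1.0, 0.0, 1.645, 3.0):
    # agrees with the library's cumulative unit normal
    assert abs(phi(x) - NormalDist().cdf(x)) < 1e-12
print(phi(1.645))  # ~0.95, the familiar 95% point of the unit normal
```

`math.erfc` and `NormalDist.inv_cdf` similarly stand in for the tabulated complementary and inverse functions mentioned above.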

3.2

Fitting Distributions

A key part of developing probabilistic lifetime models is the definition of the distributions of the random input variables. This is most often done by use of data. Equations 3-2 and 3-3 can be used to compute the mean and standard deviation from the data, and Table 3-1 then used to get the constants in the distribution functions. The constants, such as b and c for the Weibull distribution, are often referred to as the parameters of the distribution. Another way to evaluate the parameters of the distribution that entails the distribution type is the method of maximum likelihood. This method has some theoretical advantages but is often more difficult to implement. The parameters are evaluated by finding their values that will maximize the likelihood, which is expressed as

L(\text{parameters}) = \prod_{i=1}^{N} p(x_i, \text{parameters})

Eq. 3-11

For instance, if a Weibull distribution is assumed, the expression for the likelihood would be

L(b, c) = \prod_{i=1}^{N} \frac{c}{b}\left(\frac{x_i}{b}\right)^{c-1} e^{-(x_i/b)^c}

Eq. 3-12
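Maximizing Equation 3-12 must generally be done numerically. One common reduction (an assumption here, not taken from the report) is that for a fixed shape c the maximizing scale is b = (Σxᵢᶜ/N)^(1/c), which leaves only a one-dimensional search over c; a hedged sketch:

```python
import math
import random

def weibull_mle(data, c_lo=0.5, c_hi=5.0, steps=500):
    """Maximize the log of Eq. 3-12 by a 1-D grid search over the shape c.

    For each trial c, the scale maximizing L is b = (sum(x**c)/N)**(1/c)
    (the profile-likelihood reduction), so only c needs to be searched.
    """
    n = len(data)
    sum_log = sum(math.log(x) for x in data)

    def profile_loglike(c):
        b = (sum(x ** c for x in data) / n) ** (1.0 / c)
        return (n * math.log(c / b)
                + (c - 1.0) * (sum_log - n * math.log(b))
                - sum((x / b) ** c for x in data))

    cs = [c_lo + i * (c_hi - c_lo) / steps for i in range(steps + 1)]
    c_hat = max(cs, key=profile_loglike)
    b_hat = (sum(x ** c_hat for x in data) / n) ** (1.0 / c_hat)
    return b_hat, c_hat

# check on synthetic data drawn from a known Weibull (b = 5, c = 1.8)
random.seed(4)
sample = [random.weibullvariate(5.0, 1.8) for _ in range(1000)]
b_hat, c_hat = weibull_mle(sample)
print(b_hat, c_hat)  # should land near 5.0 and 1.8
```

The grid search is crude but robust; a production implementation would refine the root of the likelihood equations with Newton iteration.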



The values of b and c that maximize L would be selected. It is most often not easy to solve Equation 3-12, and numerical methods must be employed. Just knowing the parameters is not sufficient to define the distribution; the distribution type must also be known, such as lognormal or Weibull. If maximum likelihood is used, then the distribution type has already been assumed, but it is still not known how well the data has been represented. A good way to assess the fit to the distribution type is to plot the distribution on a standard paper, such as normal probability paper. Such paper is available for selected distribution types, including normal, lognormal, and Weibull, but this paper can be constructed by appropriate transformations. The data, say N values of x, can be used to obtain the cumulative distribution, P(x), by first sorting the data in ascending order. Alternatively, a histogram of the data can be constructed. Denote this sorted list as N values of s. The values of P are then estimated by a relation such as

P_i = \frac{i}{N}

Eq. 3-13

Many relations similar to Equation 3-13 have been suggested as better approximations for P. One widely used relation is

P_i = \frac{i - \frac{1}{2}}{N}

Eq. 3-14

and this will be used herein. Once the values of sᵢ (sorted xᵢ) are known, a plot of Pᵢ versus sᵢ is made on paper whose scales are transformed in such a manner that the cumulative distribution will plot as a straight line if the data is of the distribution type that the paper was constructed for. The simplest example is for an exponential distribution. In this case, P(x) = 1 − e^{−x/θ}. Rearranging and taking logarithms, this can be written as

x = \theta\,\ln\!\left(\frac{1}{1-P}\right)

so that if ln[1/(1−P)] is plotted versus x, and x is exponentially distributed, then the data will plot as a straight line with slope related to the parameter θ. Similar transformations can be made for other distribution types. For the lognormal distribution, the cumulative distribution can be written as

\ln(x) = \ln(x_{50}) - \sigma\sqrt{2}\,\mathrm{erfc}^{-1}(2P)

Eq. 3-15



In this equation, erfc⁻¹(x) is the inverse complementary error function; that is, if erfc(u) = v, then erfc⁻¹(v) = u. Abramowitz and Stegun contain convenient curve fits to the inverse complementary unit normal variate (see page 933 of Abramowitz 64), which can be used to define a curve fit to the inverse complementary error function. Hence, if erfc⁻¹(2P) is plotted versus x on a log scale, the data will plot as a straight line if it is lognormally distributed. The counterpart of Equation 3-15 for a normal variate is

x = \bar{x} - \sigma\sqrt{2}\,\mathrm{erfc}^{-1}(2P)

Eq. 3-16

As an example, consider the 1100°F (593°C) fatigue crack growth data in Figure 2-9. There are 25 data points. The linear least squares curve fit to the data gave a Cf and m (in Equation 2-18) of 8.07×10⁻¹⁰ and 2.95, respectively. In order to characterize the scatter in the 1100°F (593°C) data, fix m at 2.95 and evaluate Cf for each of the 25 data points in Figure 2-9. This provides the data in Table 3-2. In this table, P is evaluated by use of Equation 3-14 and Y = erfc⁻¹(2P).


Table 3-2
Values of Cf for Fatigue Crack Growth Data (values of Cf are for crack growth in inches per cycle and ΔK in ksi-in^1/2)

 i   10^10 Cf   Sorted 10^10 Cf     P     X = ln(Cf), sorted      Y
 1    7.0350        6.2580       .0200       -21.1900          1.4540
 2    7.7870        6.2920       .0600       -21.1900          1.1000
 3    6.2580        6.5250       .1000       -21.1500           .9066
 4    6.2920        7.0350       .1400       -21.0700           .7640
 5    6.5250        7.1660       .1800       -21.0600           .6472
 6    8.1490        7.2100       .2200       -21.0500           .5458
 7    7.5300        7.2870       .2600       -21.0400           .4547
 8    7.4740        7.4740       .3000       -21.0100           .3707
 9    9.3170        7.5300       .3400       -21.0100           .2917
10    9.3730        7.5690       .3800       -21.0000           .2163
11   10.6000        7.6880       .4200       -20.9900           .1435
12    9.5870        7.7140       .4600       -20.9800           .0724
13    8.0210        7.7840       .5000       -20.9700          -.0022
14    7.1660        7.7870       .5400       -20.9700          -.0724
15    7.6880        8.0210       .5800       -20.9400          -.1435
16    7.2100        8.1490       .6200       -20.9300          -.2163
17    7.7140        8.1970       .6600       -20.9200          -.2917
18    9.9450        8.5400       .7000       -20.8800          -.3707
19   10.3200        9.3170       .7400       -20.7900          -.4547
20   10.7000        9.3730       .7800       -20.7900          -.5458
21    7.7840        9.5870       .8200       -20.7700          -.6472
22    7.2870        9.9450       .8600       -20.7300          -.7640
23    7.5690       10.3200       .9000       -20.6900          -.9066
24    8.5400       10.6000       .9400       -20.6600         -1.1000
25    8.1970       10.7000       .9800       -20.6600         -1.4540

The data in Table 3-2 is plotted on log-linear scales in Figure 3-1. Figure 3-2 is the same plot on a lognormal probability scale, and Figure 3-3 is the corresponding normal probability scale plot. The solid line in these figures is the linear least squares fit to the corresponding X-Y data. Both Figures 3-2 and 3-3 provide a good fit to the data. The straight lines shown in these figures are the result of linear least squares curve fits on the transformed scales of the figures. The lognormal fit (Figure 3-2) appears to be somewhat better than the normal fit (Figure 3-3). Which of these is better can be judged by the goodness of fit, which is an entire topic in itself; see, for instance, D'Agostino 86. Such goodness-of-fit tests are restricted to the range of data, and the interest in probabilistic structural models is often in extrapolations beyond the data.
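The probability-plot fit of Figure 3-2 can be reproduced directly from the Table 3-2 values; a sketch that uses the inverse unit normal in place of tabulated inverse error function fits (the identity erfc⁻¹(2P) = −Φ⁻¹(P)/√2 follows from Eq. 3-10):

```python
import math
from statistics import NormalDist

# Sorted values of 1e10 * Cf from Table 3-2
cf_sorted = [6.258, 6.292, 6.525, 7.035, 7.166, 7.210, 7.287, 7.474, 7.530,
             7.569, 7.688, 7.714, 7.784, 7.787, 8.021, 8.149, 8.197, 8.540,
             9.317, 9.373, 9.587, 9.945, 10.320, 10.600, 10.700]
n = len(cf_sorted)
X = [math.log(c * 1e-10) for c in cf_sorted]      # X = ln(Cf)
P = [(i + 0.5) / n for i in range(n)]             # plotting positions, Eq. 3-14
# Y = erfc^-1(2P), written via the inverse unit normal: -Phi^-1(P)/sqrt(2)
Y = [-NormalDist().inv_cdf(p) / math.sqrt(2.0) for p in P]

# least squares fit X = a + m*Y; Eq. 3-15 gives a = ln(x50), m = -sigma*sqrt(2)
xbar, ybar = sum(X) / n, sum(Y) / n
m = (sum((y - ybar) * (x - xbar) for x, y in zip(X, Y))
     / sum((y - ybar) ** 2 for y in Y))
x50 = math.exp(xbar - m * ybar)
sigma = -m / math.sqrt(2.0)
print(x50, sigma)  # close to Table 3-3: 8.07e-10 and roughly 0.15 to 0.16
```

Small differences from the report's quoted slope arise from rounding of the tabulated values and from the curve-fit approximation used for Y in the original.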

Figure 3-1 Plot of Data of Table 3-2 on Log-Linear Scales

Figure 3-2 Lognormal Probability Plot of Data of Table 3-2



Figure 3-3 Normal Probability Plot of Data of Table 3-2

Another way to see how well the data is being fitted within the range is the coefficient of linear correlation, ρ, which can be evaluated from the data by the expression

\rho = \frac{\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})}{\left[\sum_{i=1}^{N}(x_i-\bar{x})^2\,\sum_{i=1}^{N}(y_i-\bar{y})^2\right]^{1/2}}

Eq. 3-17

The value of ρ quantifies how well the x-y data is represented by a linear relation. A value of +1 or −1 means that the data is fitted perfectly, and a value of 0 means that there is no linear relation between them (which is not to say that there may not be some nonlinear relation between them). In selecting between the normal and lognormal fits of Figures 3-2 and 3-3, the corresponding coefficient of linear correlation is of interest. The values from the data are:

−0.977 for the lognormal fit
−0.966 for the normal fit

Hence, the fit to the lognormal is somewhat better, and the lognormal distribution would be selected for further use. This is preferred, because if the data on a log da/dN versus log ΔK plot are evenly distributed above and below the best fit line, as observed in Figure 2-9, then Cf would be expected to be lognormally distributed rather than normal. In fact, unless the normal fit was substantially better than the lognormal, this would be the basis for selection of the lognormal.
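Equation 3-17 is straightforward to evaluate; a minimal sketch with two exact cases (the data values are arbitrary illustrations):

```python
import math

def lincorr(xs, ys):
    """Coefficient of linear correlation, Eq. 3-17."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - xbar) ** 2 for x in xs)
                    * sum((y - ybar) ** 2 for y in ys))
    return num / den

xs = [1.0, 2.0, 3.0, 4.0]
print(lincorr(xs, [2.0, 4.0, 6.0, 8.0]))      # 1.0: perfect linear relation
print(lincorr(xs, [-1.0, -2.0, -3.0, -4.0]))  # -1.0: perfect inverse relation
```

Applied to the transformed (Y, X) columns of Table 3-2, this function yields the −0.977 and −0.966 values quoted above.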


Another reason for selecting the lognormal is that if Cf is normally distributed, there is a finite probability that Cf will be negative, which is physically unrealistic (fatigue cracks do not usually get shorter). For the mean and standard deviation based on the data and included in Table 3-3, the probability of having a negative Cf is about 10⁻¹⁰, so this is not really a serious objection. Based on the data, and the related curve fits, the information in Table 3-3 on the parameters of the distribution of Cf is obtained.

Table 3-3
Information on Parameters of Distribution of Cf Data of Table 3-2 (values of Cf are for crack growth in inches per cycle and ΔK in ksi-in^1/2)

least squares fit on log-log da/dN-ΔK             8.07×10⁻¹⁰
median of data                                    7.78×10⁻¹⁰
mean of Cf                                        8.16×10⁻¹⁰
exp[mean(ln Cf)]                                  8.07×10⁻¹⁰
median from Fig 3-2 fit (Y=0)                     8.07×10⁻¹⁰
standard deviation of Cf (from data)              1.28×10⁻¹⁰
standard deviation of ln(Cf) (from data)          0.153
stdev[ln(Cf)] from slope of Fig 3-2 fit           0.160

The parameters for this particular set of data, as estimated by the various means summarized in Table 3-3, agree well with one another. A σ of 0.160 and a median of 8.07×10⁻¹⁰ would be selected. The parameters as evaluated by various procedures do not always agree as well as in the above example. Quite often, the mean and standard deviation evaluated from the data by Equations 3-2 and 3-3 do not agree well with a fit on probability paper. This is because the mean and standard deviation evaluated by Equations 3-2 and 3-3 provide good representations near the mean and are not affected by a few data points well away from the mean (out in the tails of the distribution). However, the least squares line on probability paper is highly influenced by a few data points out in the tail. Since the goal of probabilistic analyses is usually to define the reliability out where the failure probability is small, it is important to have a good representation of the random variables out in the appropriate tails. The fit on probability paper generally provides a better representation in the tails, and parameters based on such a fit are preferred to a good fit in the central region of the distribution. We would much rather have a good representation in the tails with poor agreement with the mean of the data than the other way around.



3.3

Combinations of Random Variables

In probabilistic lifetime models, there is often an underlying deterministic model with some of the inputs being random variables. For instance, there may be an equation for the lifetime that contains several inputs. Equation 2-26 is an example, with the initial crack size, critical crack size, fatigue crack growth coefficient, and cyclic stress as inputs. In more realistic problems, there is still a way to compute the lifetime for a set of inputs, but numerical procedures may be required. Using Equation 2-26 as an example, the reliability as a function of the number of load cycles is known if the distribution of Nf is known; the probability of failure prior to a given number of cycles is just the cumulative distribution of Nf. If some of the input variables are random, then Nf is a random variable. Hence, it is desired to obtain the distribution of Nf for a given set of random input variables. Closed form expressions can be obtained only in exceptional circumstances, and numerical procedures are most often required. In fact, for the simple example of Equation 2-26, a closed form expression for the distribution of Nf is not obtainable (as far as known to the author) regardless of the distribution types of the random variables, unless most of them are deterministic.

3.3.1 Analytical Methods

As an example of analytical methods, let x and y be random variables with known probability density functions f(x) and g(y). The corresponding cumulative distributions of x and y are also known, and are denoted as F(x) and G(y). Then find the distribution of z = x + y. This is a simple problem. The cumulative distribution of z is obtained from the equation

P(z < u) = probability that z is less than u
= \int (\text{probability that } x \text{ is between } x \text{ and } x+dx)(\text{probability that } y < u - x)
= \int_x f(x)\,G(u-x)\,dx

If x and y are normally distributed, then it turns out that z is also normally distributed, and the parameters describing the distribution of z are readily obtained from the parameters of x and y. About the simplest case would be for x and y to be exponentially distributed with parameters θ and λ, respectively. The probability density function of z is given by the following expression, but z is no longer exponentially distributed:

p(z) = \frac{1}{\theta - \lambda}\left(e^{-z/\theta} - e^{-z/\lambda}\right)

(Great care must be used in the limits of integration.) If x and y are lognormally distributed with parameters x50, σx and y50, σy (respectively), then the cumulative distribution of z is given by the expression



P(z) = \frac{1}{\sigma_x\sqrt{2\pi}}\int_0^z \frac{1}{x}\,\exp\!\left[-\frac{1}{2\sigma_x^2}\ln^2\!\left(\frac{x}{x_{50}}\right)\right]\,\frac{1}{2}\,\mathrm{erfc}\!\left[-\frac{1}{\sigma_y\sqrt{2}}\ln\!\left(\frac{z-x}{y_{50}}\right)\right]dx

Eq. 3-18

The integral in this equation can not be evaluated in closed form, to the author's knowledge. Hence, the distribution of the sum of two lognormals can not be expressed in closed form. The integral can be easily evaluated by numerical integration, but if numerical techniques are called for, then there are other more straightforward procedures, such as Monte Carlo simulation. The point is that analytical methods are of limited use. There are a few cases where analytical methods are useful. The following provides a brief summary of such cases. This list is probably not complete.

The sum of normals is normal: if z = \sum_{i=1}^{K} x_i and the x_i have means \bar{x}_i and variances \sigma_i^2, then z is normally distributed with mean \sum_{i=1}^{K}\bar{x}_i and variance \sigma^2 = \sum_{i=1}^{K}\sigma_i^2.

The product of lognormals is lognormal: if z = \prod_{i=1}^{K} x_i and the x_i have medians x_{50,i} and second parameters \sigma_i, then z is lognormally distributed with median \prod_{i=1}^{K} x_{50,i} and second parameter \sigma^2 = \sum_{i=1}^{K}\sigma_i^2. This is the counterpart of the above relation for normals.

A lognormal to some power is lognormal: if z = x^p and x has median x_{50} and second parameter \sigma, then z is lognormally distributed with median x_{50}^p and second parameter p\sigma. This, along with the immediately above case, means that the product of lognormals to powers is itself lognormally distributed with easily evaluated parameters.

A Weibull to some power is a Weibull: if z = x^p and x is a Weibull with parameters b and c, then z is a Weibull with parameters b^p and c/p.
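The product-of-lognormals rule can be checked by simulation; the medians and σ values below are arbitrary illustrations:

```python
import math
import random

# Rule check: the product of independent lognormals should be lognormal with
# median = product of the medians and sigma = sqrt(sum of sigma_i**2).
medians = [3.0, 4.0, 2.0]   # product of medians = 24
sigmas = [0.8, 0.5, 0.3]    # sqrt(0.64 + 0.25 + 0.09) = 0.98995

random.seed(7)
n = 100_000
logs = []
for _ in range(n):
    z = math.prod(m * math.exp(random.gauss(0.0, s))
                  for m, s in zip(medians, sigmas))
    logs.append(math.log(z))

mu = sum(logs) / n
med_z = math.exp(mu)    # exp[mean of ln z] estimates the median
sig_z = math.sqrt(sum((l - mu) ** 2 for l in logs) / (n - 1))
print(med_z, sig_z)     # theory: 24.0 and 0.98995
```

Both estimates land on the closed-form values to within sampling error, which is exactly the kind of check these analytical cases are useful for.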

Analytical methods are of limited use in all but the simplest of component reliability models. They can provide closed form representations that are of use in some instances and that provide good checks on numerical results.

3.3.2 Monte Carlo Simulation Principles

This is a commonly used method of obtaining results from a probabilistic model. It is simple to understand and implement and can be made as accurate as desired by taking sufficient samples. However, it can be burdensome in computer time, which is not nearly the problem that it was formerly, when computer time was much more expensive. In these days of fast, inexpensive PCs, Monte Carlo simulation is a useful tool. However, much like finite elements, you can always make a problem bigger, and computer time can become of concern. Once a deterministic model of lifetime is available, it provides the relation between the input variables and the failure time. Equation 2-26 is a simple example for fatigue crack growth. The probabilistic model then involves characterizing the distributions of the random input variables and evaluating the failure probability. As discussed immediately above, analytical tools are seldom suitable. Monte Carlo is one alternative. Monte Carlo simulation involves drawing a sample from the distribution of each of the random input variables. The sampling of an input variable is analogous to rolling a die that is shaped like the distribution of the random variable. Once a sample of each of the random input variables is drawn, a value of the lifetime is calculated using the deterministic lifetime model. Each of these lifetime calculations is called a trial. Each trial provides a value of the lifetime. A series of trials is then sorted (like the Cf data of Table 3-2) and the distribution of the lifetime is constructed. The cumulative distribution of the lifetime is the failure probability as a function of time. Another way to look at it is that the lifetime data from the trials can be sorted into a histogram and the resulting histogram normalized to define the probability density function of the lifetime. It is immediately obvious that the better you want to define the failure probability, and the smaller the failure probability that you want to consider, the more trials you need. A frequently asked question is "how do I know how many trials to take?" This depends on the answer, so you do not know in advance. Before further discussion of the number of trials to take and the resulting confidence in the results, an example of Monte Carlo simulation will be provided. First you need to shape the die that you are rolling.
Nearly all computers have random number generators built into them (of varying quality). Typically the random number is for a uniform distribution of a random variable that is between 0 and 1. Call the sampled random number u. This is like a cumulative probability, and you want to get the value of the random variable that corresponds to this cumulative. For a Weibull variate, the sampled value is

x = b\left[\ln\!\left(\frac{1}{1-u}\right)\right]^{1/c} \qquad \text{(Weibull)}

For normal and lognormal variates, there are faster ways than using inverse error functions. Abramowitz 64 provides the trick that if u₁ and u₂ are two samples drawn from unit uniform random numbers, then \cos(2\pi u_1)\sqrt{-2\ln(u_2)} appears to be a unit normal. The following then appear to be normal and lognormal variates:

x = \bar{x} + \sigma\cos(2\pi u_1)\sqrt{-2\ln(u_2)} \qquad \text{(normal)}

Eq. 3-19

x = x_{50}\exp\!\left[\sigma\cos(2\pi u_1)\sqrt{-2\ln(u_2)}\right] \qquad \text{(lognormal)}

Eq. 3-20



Thus, the dice are shaped. This is easy to program in FORTRAN, MATHCAD [MATHCAD 91], or on a spreadsheet, once the underlying deterministic model is available. As an example, apply Monte Carlo simulation to compute the distribution of the sum of two lognormals (which is the problem that provided the complicated integral in Equation 3-18). Let x have a median of 3 and a σ of 0.8, and y have a median of 4 and a σ of 3. Let z = x + y. Table 3-4 provides some intermediate results for the case of 20 trials. Figure 3-4 provides a plot on lognormal scales of the cumulative distribution of z as obtained by 20 and 500 trials. Figure 3-4 also shows the result of numerical integration of Equation 3-18 obtained by use of the numerical integrator in MATHCAD [MATHCAD 91]. Since the results in Figure 3-4 do not fall on a straight line, they are not lognormally distributed. Table 3-5 summarizes some of the statistics from the Monte Carlo trials.
Table 3-4
Results of Monte Carlo Example With 20 Trials

 i      x        y        z     |  Sorted z     P
 1    1.978    9.161   11.140  |    1.708    .025
 2    7.517     .131    7.649  |    1.982    .075
 3    9.513   15.000   24.510  |    3.212    .125
 4    6.059    6.526   12.580  |    6.010    .175
 5    4.071    2.064    6.135  |    6.135    .225
 6    5.282   29.100   34.390  |    7.649    .275
 7    5.057    9.097   14.150  |    8.267    .325
 8    1.509     .473    1.982  |    9.042    .375
 9    4.830    3.437    8.267  |   11.140    .425
10    5.685     .326    6.010  |   12.580    .475
11    2.235    6.806    9.042  |   14.150    .525
12    1.699  240.900  242.600  |   19.840    .575
13    1.353     .355    1.708  |   24.510    .625
14    2.511     .701    3.212  |   27.900    .675
15    4.333   48.110   52.440  |   32.350    .725
16    1.876   32.780   34.660  |   34.390    .775
17   26.320    1.579   27.900  |   34.660    .825
18    2.666   17.180   19.840  |   52.440    .875
19    4.380   93.400   97.780  |   97.780    .925
20   19.090   13.270   32.350  |  242.600    .975
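The example is easy to re-run with far more trials than the 20 of Table 3-4; a sketch (the seed and trial count are arbitrary choices, and the sampling uses the library's unit normal rather than Eqs. 3-19 and 3-20):

```python
import math
import random

# Sum of two lognormals: x has median 3, sigma 0.8; y has median 4, sigma 3.
random.seed(11)
n = 20_000
z = sorted(3.0 * math.exp(0.8 * random.gauss(0.0, 1.0)) +
           4.0 * math.exp(3.0 * random.gauss(0.0, 1.0)) for _ in range(n))

median_z = z[n // 2]
# probability of z at or below 1.708, the smallest of the 20-trial values
p_tail = sum(v <= 1.708 for v in z) / n
print(median_z, p_tail)
```

With 20,000 trials the lower-tail probability estimate settles near the roughly 0.05 value that the text quotes for the 500-trial run and the numerical integration, illustrating how the scatter of the 20-trial point estimates (0.025 for z = 1.708) shrinks as trials are added.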


Table 3-5
Summary of Some of the Statistics from the Monte Carlo Trials

                   Mean      Median     σ
Inputs       x     4.131     3          0.8
             y     360.1     4          3
20 Trials    x     5.898     4.357      0.782
             y     26.52     7.951      1.989
             z     32.42     13.36      1.215
500 Trials   x     4.129     2.914      0.844
             y     331.06    3.254      3.178
             z     335.2     9.177      1.929

Figure 3-4 Cumulative Probability of the Sum of Two Lognormals as Computed by Numerical Integration and Monte Carlo With 20 and 500 Trials

Table 3-4 provides each of the 20 randomly sampled values of x and y along with the resulting value of z. To the right of the vertical line in Table 3-4 are the sorted values of z and the value of the probability (obtained by use of Equation 3-14). These last two columns are plotted as crosses in Figure 3-4. Figure 3-4 also shows the results for 500 trials, which are smoother than the results for 20 trials. Table 3-5 provides some of the statistics from the trials. Note that with 20 trials, the median and σ of x and y do not agree well with the inputs. The situation improves with 500 trials. Figure 3-4 also shows the results of numerical integration. The agreement between the 500 Monte Carlo trials and the numerical integration is good, but the numerical integration is shown only up to a probability of about 0.95. This is because the integration got progressively less accurate as the probability increased beyond this point, and the results are not plotted. If greater accuracy was desired, it would be burdensome to improve the numerical integration, but it would be easy for Monte Carlo simulation: just perform more trials.

3.3.3 Monte Carlo Confidence Intervals

The above example shows how the results change as more trials are taken, which is relevant to the question of how many samples to take. Probabilistic analyses are usually concerned with low failure probabilities, so we need to define the lower tail of the distribution. The scale in Figure 3-4 only goes down to 10⁻⁴, which is a big number for many applications. To define the lower tail better, just make more trials. Making more trials can lead to excessive computer time. There are tricks that can be employed, such as importance sampling, which is discussed in Section 3.3.4. Latin hypercube and stratified sampling are also often useful and are discussed in Mahadevan 97a and Ayyub 95. In addressing the number of trials to be performed and what is adequate, there are ways to estimate the confidence intervals of the results. For example, if there is one failure in 10,000 trials, a point estimate of the failure probability would be 10⁻⁴. It is then of interest to address the confidence interval on this estimate. For instance, it would be useful to say that "I am 90% confident that the true value lies below some number that I can evaluate." This is equivalent to saying that if I perform the Monte Carlo simulation very many times, the estimated failure probability will be below this number 90% of the time.
(The outcome of the Monte Carlo simulation is itself a random number, and will not always give the same answer each time you do the problem. Some random number generators have a seed that allows you to control where they start, and they will then give you the same sequence of random numbers, so that you will always get the same answer for the same problem. This is helpful in checking, but is an artifact of the use of a pseudo-random number generator.) Wolstenholme 99 provides a good discussion on confidence intervals on the results of Monte Carlo simulations, and her treatment will be followed here. We will concentrate on the case where the failure probability (proportion of failures in the trials) is small, and the number of trials is large. (Large means, say, 100.) Let p be the probability of failure (which is what we want). The estimate of p will be f/n (or (f − 1/2)/n), where f is the number of failures and n is the number of trials. If p is the probability of a failure in one trial, then the probability of observing f failures in n trials is binomially distributed with a probability density given by the expression

p_f(f) = \frac{n!}{(n-f)!\,f!}\,p^f(1-p)^{n-f}

(p_f is not really a probability density function, because it is discrete (that is, f is an integer).) This gives the probability of seeing exactly f failures in n trials if p is the failure probability in one trial. The probability of seeing f or fewer failures is the cumulative binomial distribution and is equal to



P_f(f) = \sum_{k=0}^{f} \frac{n!}{(n-k)!\,k!}\,p^k(1-p)^{n-k}

An upper limit on p (with a given confidence) can be obtained by finding the value of p that is the solution to the equation

\sum_{k=0}^{f} \frac{n!}{(n-k)!\,k!}\,p^k(1-p)^{n-k} = \alpha

Eq. 3-21

where 100(1−α)% is the desired confidence level. If p were the true value, f or fewer failures would be observed in n trials only 100α% of the time. (For instance, α = 0.1 for a 90% confidence.) When n is large, Equation 3-21 is difficult to employ, because of numerical overflow problems with n!. A convenient approximation to the binomial density function when p is small and n is large is provided by the Poisson distribution

p_f(f) = e^{-pn}\,\frac{(pn)^f}{f!}

The corresponding cumulative distribution is

P_f(f) = e^{-pn}\sum_{k=0}^{f} \frac{(pn)^k}{k!}

Then the 100(1−α)% upper confidence limit on p is the value of p that will give f or fewer failures in n trials. This is given by the root of the expression

e^{-pn}\sum_{k=0}^{f} \frac{(pn)^k}{k!} = \alpha

Eq. 3-22

Results can be obtained by use of this equation because the values of f of interest are not too big. The root of the equation can be found by MATHCAD [MATHCAD 91], with Table 3-6 summarizing the results for 90 and 95% confidence levels.
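Equation 3-22 is also easy to solve with a simple root finder; a bisection sketch that reproduces several Table 3-6 entries (MATHCAD is not required):

```python
import math

def upper_bound_np(f, alpha):
    """Solve Eq. 3-22 for n*p: exp(-n*p) * sum_{k=0}^{f} (n*p)**k / k! = alpha.

    Returns the 100*(1-alpha)% upper confidence bound on n*p given
    f observed failures in n Monte Carlo trials (Poisson approximation).
    """
    def poisson_cdf(m):
        # P(f or fewer failures) for n*p = m; decreasing in m
        return math.exp(-m) * sum(m ** k / math.factorial(k)
                                  for k in range(f + 1))

    lo, hi = 0.0, 10.0 * (f + 1) + 10.0
    for _ in range(200):                    # bisection on the decreasing cdf
        mid = 0.5 * (lo + hi)
        if poisson_cdf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# reproduce a few entries of Table 3-6
print(round(upper_bound_np(0, 0.05), 3))  # 2.996
print(round(upper_bound_np(1, 0.05), 3))  # 4.744
print(round(upper_bound_np(1, 0.10), 3))  # 3.890
print(upper_bound_np(10, 0.10))           # ~15.4
```

Dividing the returned bound by the number of trials n gives the upper confidence limit on the failure probability itself, as in the 4.744/20 example discussed below Table 3-6.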


Table 3-6
Confidence Upper Bounds on Ntr·pf for Various Numbers of Failures

Number of    Point       Confidence Value
Failures     Estimate    90%        95%
0            -           2.303      2.996
1            0.5         3.890      4.744
2            1.5         5.322      6.296
3            2.5         6.681      7.754
4            3.5         7.994      9.154
6            5.5         10.532     11.842
8            7.5         12.995     14.435
10           9.5         15.403     16.962
15           14.5        21.292     23.97
20           19.5        27.045     29.062
50           49.5        60.339     63.287
100          99.5        114.075    118.079

Figure 3-5 provides a plot of the results in Table 3-6, normalized with respect to the number of failures, which shows how the upper confidence limits approach the point estimate as the number of failures increases. There must be about six failures before the 95% confidence limit is within a factor of two of the point estimate. Even with 100 failures, the upper confidence limit is about 15% above the point estimate. The confidence level (90 vs 95%) does not have a large influence.

As an example of the application of these results, consider the first value of z in the Monte Carlo example with 20 trials in Table 3-4. (This is not a large number of trials, but the above tabulation should be adequate for one failure.) From Table 3-4, this value is 1.708 and the point estimate of the probability is 0.025. The 95% upper confidence limit on the probability of seeing the value 1.708 is 4.744/20 = 0.237. Figure 3-4 shows that the more accurate results for 500 trials and the numerical integration both gave a probability of about 0.05 (5%) for z = 1.708, which is well below the 95% confidence limit. This also shows the big difference between the point estimate and the 95% confidence limit when there is only one failure; Table 3-6 shows an order of magnitude difference (0.5 vs 4.744).


Figure 3-5 Values of Factors in Table 3-6 Divided by the Number of Failures

3.3.4 Monte Carlo Simulation - Importance Sampling

Section 3.3.2 discusses Monte Carlo procedures that involve direct sampling from the distributions of the random variables and calculation of the value of the outcome, such as the lifetime. Such procedures are particularly straightforward, but are not efficient for evaluation of low probabilities. Section 3.3.3 discusses the number of trials that must be performed in order to confidently evaluate low probabilities, and the need for large numbers of trials. In some instances, the large number of trials required can lead to excessive computer time, and more rapid procedures are desirable. Section 3.3.5 discusses an alternative procedure that can lead to substantial economies, but there are also ways to employ Monte Carlo techniques to provide much quicker results than those obtained from standard Monte Carlo simulation. Such techniques include stratified sampling, importance sampling, and Latin hypercube sampling. Importance sampling is discussed in this section. Mahadevan 97a and Ayyub 95 provide discussions on procedures that can be used to speed up Monte Carlo simulation.

Importance sampling involves sampling from the portions of the distributions of the input random variables that are more likely to lead to failure, and then compensating the results to account for not having sampled from the appropriate original distributions. Following Mahadevan 97a, this is best described by first looking at an alternate means of combining random variables. The failure probability (at a given time) can be viewed as the integral of the joint probability density function of the random variables over the volume in which the combination of random variables could lead to failure (within that time). Let g(xi) be the function that describes failure, with g(xi)<0 meaning failure occurs. g(xi) is called the performance function and is a function of the n random variables xi. The failure probability can be written as


Pf = ∫ ··· ∫ f(xi) dxi    (integration over the region where g(xi) < 0)    Eq. 3-23

Defining the following indicator function, Ig(xi), allows Equation 3-23 to be rewritten as in Equation 3-25:

Ig(xi) = 0  if g(xi) > 0
Ig(xi) = 1  if g(xi) < 0    Eq. 3-24

Pf = ∫ ··· ∫ Ig(xi) f(xi) dxi    Eq. 3-25

where the integration now extends over the entire space of the xi.

In the case of simple Monte Carlo, the failure probability is estimated as the number of failures divided by the number of trials, which can be written as

Pf ≈ (1/N) Σ_{i=1}^{N} Ig(xi)    Eq. 3-26

where N is the number of trials. Equation 3-26 shows that the failure probability is simply the mean value of the indicator function (see Equation 3-24). For the purposes of importance sampling, rewrite Equation 3-25 as

Pf = ∫ ··· ∫ Ig(xi) [f(xi)/h(xi)] h(xi) dxi    Eq. 3-27

Consider h(xi) as a new set of density functions. The failure probability is then equal to the mean value of Ig(xi) f(xi)/h(xi) with respect to h, which can be estimated by Monte Carlo simulation as

Pf ≈ (1/N) Σ_{i=1}^{N} Ig(xi) f(xi)/h(xi)    Eq. 3-28

The density function h(xi) is chosen so as to lead to more failures during the N Monte Carlo trials, but each failure counts as f(xi)/h(xi) rather than one. Importance sampling can lead to significant reduction in computer time when computing small probabilities. The altered probability density functions, h(xi), must be selected to provide more failures, such as by increasing the mean initial crack size, reducing the mean fracture toughness, etc. One way to do this is to use the same type of density function but increase (or decrease) the mean. The amount of the increase or decrease influences the number of failures that are obtained for a given shift, but too large a shift can introduce inaccuracies in the results. A suggested procedure is to


increase (or decrease) the mean by an amount scaled by the standard deviation. Such shifts can be made to all of the random variables or only to the dominant ones.

As an example, consider the problem of the sum of two lognormal variates, as discussed in Section 3.3.2 with the previous results summarized in Figure 3-4. Figure 3-6 shows corresponding results as obtained with 20 trials and importance sampling. The medians of x and y were both shifted downward by a factor of 2 in order to obtain more small values of z in the 20 trials. The previous results with 20 trials and no importance sampling are also shown, along with the values obtained by use of numerical integration. The agreement between the three sets of results in Figure 3-6 would be improved by the use of more trials; the 20 used in this example is a small number. Figure 3-6 shows results down to a probability of 0.005 and a z of 0.401. This is in contrast to a probability of 0.025 and a z of 1.708 (see Table 3-4) obtained by conventional Monte Carlo simulation with the same number of trials. This demonstrates the lower probabilities that are obtainable with importance sampling. In order to get to the same probabilities using conventional Monte Carlo, some 200 trials would be required, which is an order of magnitude more than used here. Even greater benefits can be obtained in many situations.
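The weighting scheme of Equation 3-28 can be sketched concretely for the sum-of-two-lognormals example. The medians (3.0, 4.0) and log-standard deviations (0.8, 3.0) are those of Table 3-7, and the factor-of-2 shift of the sampling medians follows the text; the function names and trial counts are illustrative.

```python
import math
import random

def lognorm_pdf(x, median, sigma):
    """Lognormal density; sigma is the standard deviation of ln(x)."""
    return math.exp(-0.5 * (math.log(x / median) / sigma) ** 2) / (x * sigma * math.sqrt(2.0 * math.pi))

def estimate_p(z_limit, n_trials, shift=2.0, seed=1):
    """Importance-sampling estimate of P(x + y < z_limit) per Eq. 3-28.

    Samples are drawn from altered densities h (medians divided by `shift`);
    each failure is counted with weight f/h instead of one.  shift=1.0
    recovers conventional Monte Carlo (all weights are unity)."""
    rng = random.Random(seed)
    med_x, sig_x = 3.0, 0.8          # Table 3-7 values
    med_y, sig_y = 4.0, 3.0
    total = 0.0
    for _ in range(n_trials):
        x = (med_x / shift) * math.exp(sig_x * rng.gauss(0.0, 1.0))
        y = (med_y / shift) * math.exp(sig_y * rng.gauss(0.0, 1.0))
        if x + y < z_limit:          # indicator function of Eq. 3-24
            total += (lognorm_pdf(x, med_x, sig_x) / lognorm_pdf(x, med_x / shift, sig_x)) * \
                     (lognorm_pdf(y, med_y, sig_y) / lognorm_pdf(y, med_y / shift, sig_y))
    return total / n_trials

p_is = estimate_p(2.0, 20000)             # importance sampling
p_mc = estimate_p(2.0, 20000, shift=1.0)  # conventional Monte Carlo, for comparison
```

With only a handful of trials (as in Figure 3-6), the importance-sampled estimate reaches much lower probabilities than conventional sampling; with many trials the two estimates converge to the same value.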

Figure 3-6 Cumulative Probability of the Sum of Two Lognormals as Computed by Numerical Integration and Monte Carlo Simulation With 20 Trials, With and Without Importance Sampling

3.3.5 First Order Reliability Methods - Basics

Monte Carlo simulation techniques can sometimes involve excessive computer time, and faster methods can become of interest. Specialized sampling techniques can be helpful [Mahadevan 97a, Ayyub 95], as mentioned above. There are alternative procedures that have seen widespread use [Ang 84, Haldar 95, Mahadevan 97b]. They are known by a variety of names and involve various wrinkles, but the method discussed here is one of the simplest and is often referred to


as the first order reliability method (FORM). This method relies on a couple of observations, which are most easily visualized for the case of two random variables. Let x1 and x2 be two normally distributed random variables. Then define the transformed random variables, u1 and u2:

ui = (xi − x̄i) / σi    Eq. 3-29

These are referred to as unit normal variates; they have a zero mean and a standard deviation of 1. The joint probability density function of u1 and u2 is an axisymmetric unit normal variate. Denote the condition that is to be evaluated as g(ui)=0. For instance, in the example problem of the sum of two variables that is discussed in Section 3.3.1, if we are interested in the probability that z<4, then the function g is written as

g(xi) = x1 + x2 − 4 = 0

The function g(xi) is called the performance function. If the xi are normally distributed, then the probability that z<4 is the volume under the joint probability density function of the unit normals outside the line in u1-u2 space that is a plot of the performance function. This is shown pictorially in Figure 3-7.

Figure 3-7 Pictorial Representation of Joint Density Function in Unit Variate Space Showing Failure Curve and Most Probable Failure Point (MPFP)

The above performance function is a straight line in xi space. If the xi are normally distributed, then the performance function is a straight line in the reduced variate space, ui. The volume under the joint density function outside a line in reduced variate space turns out to be related to the closest distance (in reduced variate space) from the origin to the line. This distance is called β. The volume is equal to the probability of interest. The relation is

P = volume = Φ(−β) = (1/2) erfc(β/√2)    Eq. 3-30

Hence, in order to obtain the probability, all that is required is the distance from the origin in reduced variate space to the line corresponding to the performance function. As shown in Figure 3-7, if the performance function is not a straight line in the reduced variate space, then the procedure involves locating the point of closest approach of the line to the origin, and the performance function is then approximated as a straight line passing through this point. The above relation still then provides an approximation to the failure probability. This is also shown in Figure 3-7. The point of closest approach, which is a distance β away from the origin, is where the joint density function above the performance function is a maximum and is called the most probable failure point (MPFP). It is also sometimes called the design point.

If the random variables are not normal, then they can be converted to an equivalent normal. This conversion depends on the location of the point where the conversion is being made and uses an equivalent normal, which is a normal with the mean and standard deviation set to give the same value of the probability density function and cumulative at that point as the function being converted. For lognormal variates, the log of the variate is normally distributed, so the performance function is written in terms of the logarithms of the variables. Using Equation 3-29, the reduced normal variates are

ui = [ln(xi) − mean(ln(xi))] / stddev(ln(xi)) = [ln(xi) − ln(x50)] / σi = (1/σi) ln(xi/x50)    Eq. 3-31

As an example, consider the sum of two lognormals discussed above. The performance function in xi-space is (take x1 as x and x2 as y)

x1 + x2 − z = 0

The reduced variates are given in Equation 3-31, and the equation for the performance function in xi-space leads to the following equation in ui-space:

x50 e^(σx·u1) + y50 e^(σy·u2) − z = 0

The example problem has a linear performance function in xi-space, but a nonlinear one in ui-space. Consider the case of z=2; Figure 3-8 is a plot of the performance function in ui-space. The location of the MPFP is shown. The distance from a point on the line to the origin in reduced variate space is denoted as S and is given by the expression



S² = u1² + u2²    Eq. 3-32

The MPFP is the point of closest approach of the performance function to the origin. It is relatively straightforward to find the MPFP when there are only two random variables. In this case, the location of the MPFP was found by making up a table of xi and ui, calculating the value of S for each set of xi and ui, and finding the smallest value of S. This defines the location of the MPFP, and the smallest value of S is β. The value of β in Figure 3-8 is 1.098. Using Equation 3-30, this corresponds to a probability of 0.136. This is easy to do for two dimensions (two random variables), but not a good way to do it for many random variables. Similar calculations were carried out for a range of z; Figure 3-9 shows the results on lognormal probability scales along with the results of the numerical integration included in Figure 3-4. Good agreement is observed.
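The look-up-table search just described is easily mechanized. The Python sketch below (medians and σ values from Table 3-7; the names are illustrative) sweeps along the performance function x + y = z for z=2, evaluates S from Equations 3-31 and 3-32 at each point, and keeps the smallest value as β, which Equation 3-30 then converts to a probability.

```python
import math

X50, SX = 3.0, 0.8   # median and log-standard deviation of x (Table 3-7)
Y50, SY = 4.0, 3.0   # median and log-standard deviation of y

def mpfp_by_table(z, steps=20000):
    """Brute-force MPFP search on the curve x + y = z (two variables only)."""
    best = None
    for i in range(1, steps):
        x = z * i / steps            # walk along the performance function
        y = z - x
        u1 = math.log(x / X50) / SX  # reduced variates, Eq. 3-31
        u2 = math.log(y / Y50) / SY
        s = math.hypot(u1, u2)       # distance to origin, Eq. 3-32
        if best is None or s < best[0]:
            best = (s, u1, u2)
    return best

beta, u1, u2 = mpfp_by_table(2.0)
prob = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Eq. 3-30
alpha1, alpha2 = u1 / beta, u2 / beta          # direction cosines, Eq. 3-33
```

This recovers the β of 1.098 and the MPFP coordinates of Table 3-7, but, as noted, an exhaustive sweep does not scale beyond a couple of random variables.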

Figure 3-8 Plot of the Performance Function in Reduced Variate Space for the Example Problem of Two Lognormals for z=2. The Origin Is at the Upper Right Corner, and the Most Probable Failure Point Is Indicated.


Figure 3-9 Cumulative Probability of the Sum of Two Lognormals as Computed by the First Order Reliability Method and Numerical Integration

The direction cosines of the angles between the vector from the origin to the MPFP and the coordinate axes are of interest in studying the influence of the random variables on the probabilities. They are given by the expression

αi = ui / β    Eq. 3-33

Table 3-7 summarizes the information on the MPFP for the example problem with z=2.
Table 3-7
Coordinates and Direction Cosines of the MPFP for Example Problem With z = 2

Identification               1          2
Name                         x          y
median                       3.0        4.0
σ                            0.8        3.0
coordinate in x-y space      1.59       0.41
coordinate in u1-u2 space    -0.7936    -0.7593
direction cosine             -0.7225    -0.6913

3-25

EPRI Licensed Material Some Statistical Background Information

The negative values of ui in Table 3-7 mean that the values of the random variables at the MPFP are less than the median. Hence, they are at the low end of their distributions. If ui is negative, then the corresponding direction cosine is also negative. If the absolute value of the direction cosine is relatively large, then the probability is relatively sensitive to the value of that random variable. In Figure 3-8 (and Table 3-7) the direction cosines are nearly the same, so each of the random variables has about the same influence. If the direction cosine αk is zero, then the vector is normal to the uk coordinate. This means that the value of the random variable xk has little or no influence on the probability. The direction cosines serve as measures of the sensitivity of the probability to the random variables. Figure 3-10 shows a case with a zero α2, and the insensitivity of the location of the MPFP to the value of u2 is apparent.

Figure 3-11 presents the direction cosines as functions of z for the example problem of the sum of two lognormals. Figure 3-11 shows that the value of x (x1) is more influential on the probability of z when z is small. This switches at about z=2, and then x2 (y) becomes more influential as z becomes large. This is because as z becomes large, it will do so on the basis of the random variable that has a large median and standard deviation (σ); y has the larger values of both these parameters. When z is large, its value will be only slightly influenced by adding on the value of x, which has a small median and σ. This is an example of the use of the direction cosines to assess the relative influence of the random variables.

Figure 3-10 Example of a Performance Function With the Vector Normal to the Axis of One of the Variables Showing the Insensitivity of β to That Variable


Figure 3-11 The Direction Cosines of x and y for the Example Problem of the Sum of Two Lognormals

3.3.6 First Order Reliability Methods - General

The previous section gave a basic discussion of the first order reliability methods. It considered two random variables that were independent and of a normal type. It served to show the concepts involved, and restricting the discussion to two dimensions allowed a plot to be drawn of the performance function and the location of the MPFP. In real problems, things are not this simple. First, there can be many random variables. The performance function can not then be drawn as a line on a sheet of paper. The performance function becomes a hypersurface in hyperspace, the dimension of the hyperspace being the number of random variables. The process still involves finding the MPFP. This is the point of closest approach of the performance function to the origin of the reduced variate space. It is inefficient to just make up a table of the distance to points on the performance function for all combinations of the random variables and then look for the minimum, as was done in the above example problem. Instead, more efficient methods have been devised. The desired end result is to minimize the distance to the origin (in reduced variate space) subject to the condition of being on the performance function. Equation 3-29 is still used to define the reduced variates, if they are normal. The following equation, which is the multidimensional analog of Equation 3-32, is used to define the distance to the origin

β² = Σ_{i=1}^{N} ui²    Eq. 3-34

where N is the number of random variables. Mathematically, the problem is to minimize β subject to the constraint that the point is on the performance function, i.e., g(ui)=0. Numerical procedures are available for accomplishing this,


see, for instance, Ang 84 and Mahadevan 97b. The procedure is often called the Rackwitz-Fiessler algorithm, after Rackwitz 78, and uses a Newton-type recursive formula to search for the MPFP. This involves guessing the MPFP, and then updating the guess. Once the guess no longer changes, an approximation to the location of the MPFP has been found. Figure 3-12 depicts one way of searching for the MPFP. The procedure shown in this figure is based on linearizing the performance function in the reduced variate space (ui) at the guessed MPFP (this is why the derivatives are needed). This defines a hyperplane in hyperspace. Denote the values at the guessed location of the MPFP with a * superscript. The linearized performance function is obtained by expanding the performance function in a Taylor series about the guessed point and retaining only the linear terms. This results in the following expression

g(ui) ≈ g(ui*) + Σ_{i=1}^{N} (∂g/∂ui)* (ui − ui*) = A0 + Σ_{i=1}^{N} Ai ui    Eq. 3-35

where the coefficients A are given by

Ai = (∂g/∂ui)*        A0 = g(ui*) − Σ_{i=1}^{N} Ai ui*    Eq. 3-36

Now, to update our guess, find the point (values of ui) on this hyperplane that is closest to the origin and at which g=0. Mathematically, these are the values of ui on the plane defined by Equation 3-35 at which g=0 and for which

β = [Σ_{i=1}^{N} ui²]^(1/2)

is minimized. This can be accomplished by the use of a Lagrange multiplier, λ, and defining

L = β² + λg = Σ_{i=1}^{N} ui² + λ(A0 + Σ_{i=1}^{N} Ai ui)    Eq. 3-37

The above can be solved by taking the derivative of L with respect to λ and each ui and setting each of them to zero. Absorbing a factor of 2 into λ, this results in the set of N+1 equations for the N+1 unknowns λ and ui:

ui + λAi = 0    (i = 1, ..., N)

A0 + Σ_{i=1}^{N} Ai ui = 0


From the first of these equations, ui = −λAi. Inserting this into the second equation,

A0 + Σ_{i=1}^{N} Ai ui = A0 + Σ_{i=1}^{N} Ai(−λAi) = A0 − λ Σ_{i=1}^{N} Ai² = 0

Hence, λ = A0 / Σ_{i=1}^{N} Ai², which leads directly to the following expression for the updated ui

ui = −λAi = −A0 Ai / Σ_{i=1}^{N} Ai²    Eq. 3-38

Equation 3-38 provides the updated values of the coordinates of the guessed MPFP. The process in Figure 3-12 is continued until the value of β changes by only a small amount (ε) between steps, and the value of g at the candidate MPFP is close to zero. This process can be tricky, and the performance function had better be smooth, so that the derivatives can be accurately and economically evaluated. It is said that the procedure usually converges rapidly, if it converges at all. Sometimes it bounces between two points, and sometimes it diverges away from the solution. Ang 84, Ayyub 97, and Mahadevan 97b describe the process and give several examples. Harris 95a and 97a provide examples of the use of these procedures with fracture mechanics problems. Once the location of the MPFP is available, the values of ui are known. The value of β is then the value in Equation 3-34 obtained by using the MPFP ui. Equation 3-33 still provides the direction cosines, and Equation 3-30 still gives the probability.

A second complication involves random variables that are not normal (or lognormal). The equation P=Φ(−β) is limited to normal distributions. To circumvent this problem, equivalent normals are used. This is alluded to in Figure 3-12. For a particular guess of the location of the MPFP, the actual non-normal variate is replaced by a normal with a mean and standard deviation that are adjusted to have the same value of the probability density function and cumulative distribution function as the non-normal variate at that same point. Then P=Φ(−β). A third complication occurs if the random variables are not independent. The equation P=Φ(−β) is for independent normal variates. Problems with lack of independence can be treated by a rotation of coordinates using a correlation matrix or by use of a so-called Rosenblatt transformation [Ang 84, Mahadevan 97b].

It must be kept in mind that the equation P=Φ(−β) is exact only for independent normal variates with a linear performance function in reduced variate space. The use of this relation for nonlinear performance functions involves replacing the performance function with a hyperplane that is


tangent to the performance function at the MPFP. For non-normal variates, additional approximations are involved in the use of equivalent normal distributions.

As an example of the use of FORM and the Rackwitz-Fiessler algorithm, apply the above procedure to the example problem of the sum of two lognormal variates. A variation of FORM was used in the above discussion, such as in obtaining Figure 3-8, but this involved finding the coordinates of the MPFP by making up a big table of β values for points on g=0. This is not efficient for problems with more random variables. Applying the procedure of Figure 3-12 provides the results shown in Table 3-8, which summarizes the results of the intermediate steps for z=3 with an initial guess of x and y at their median values (ui=0). The table shows how the procedure converged in seven steps. The change in β and the value of the performance function at the guessed MPFP (g0) both got progressively smaller.


Figure 3-12 Diagrammatic Representation of a Procedure for Finding Most Probable Failure Point


Table 3-8
Intermediate Steps in Iterative Procedure for Finding the MPFP for the Example Problem With z=3

i    β (old)    β (new)    u1       u2       α1       α2       g0
1    .0000      .3292      -.0649   -.3228   -.1972   -.9804   4.0000E+00
2    .3292      .5876      -.2640   -.5250   -.4492   -.8934   1.3670E+00
3    .5876      .6578      -.4066   -.5170   -.6182   -.7860   2.5701E-01
4    .6578      .6612      -.3737   -.5455   -.5652   -.8250   1.5062E-02
5    .6612      .6615      -.4023   -.5251   -.6082   -.7938   3.3417E-03
6    .6615      .6617      -.3810   -.5410   -.5758   -.8176   2.3235E-03
7    .6617      .6617      -.3975   -.5290   -.6007   -.7995   1.0234E-03
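The iteration behind Table 3-8 is only a few lines of code. The Python sketch below applies Equations 3-36 and 3-38 to the two-lognormal example for z=3, using the Table 3-7 medians and σ values with analytic derivatives (the intermediate steps therefore differ slightly from Table 3-8, which presumably used a numerical scheme), and it settles to essentially the same β. As the text cautions, such convergence is not guaranteed; the z=2 case of Table 3-9 is a counterexample.

```python
import math

X50, SX = 3.0, 0.8   # Table 3-7 medians and log-standard deviations
Y50, SY = 4.0, 3.0

def g(u1, u2, z):
    """Performance function in reduced variate space."""
    return X50 * math.exp(SX * u1) + Y50 * math.exp(SY * u2) - z

def rackwitz_fiessler(z, n_steps=50):
    """Search for the MPFP by repeated application of Eq. 3-38."""
    u1 = u2 = 0.0                            # initial guess: both variables at their medians
    for _ in range(n_steps):
        a1 = X50 * SX * math.exp(SX * u1)    # A_i = dg/du_i at the guess, Eq. 3-36 (analytic)
        a2 = Y50 * SY * math.exp(SY * u2)
        a0 = g(u1, u2, z) - (a1 * u1 + a2 * u2)
        denom = a1 * a1 + a2 * a2
        u1, u2 = -a0 * a1 / denom, -a0 * a2 / denom   # updated guess, Eq. 3-38
    return math.sqrt(u1 * u1 + u2 * u2)

beta = rackwitz_fiessler(3.0)   # settles to about 0.662, matching Table 3-8
```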

The validity of the final results in Table 3-8 can be verified by comparing them with the results obtained by the look-up table approach discussed in Section 3.3.5. The agreement of the direction cosines in Table 3-8 with the results in Figure 3-11 for z=3 is good. As mentioned above, the iterative procedure often converges quickly, but can sometimes jump between two points, neither of which is correct, or can diverge away from the solution. As an example of bouncing between two points, Table 3-9 provides intermediate steps for the case of z=2. The process has not converged after 20 steps, and the value of g0 has not approached zero. After about 13 steps, the result is bouncing between two points, neither of which is correct. Line 5 is fairly close to the answer, but further steps wander away from the known answer from Table 3-7. The known answer is β=1.098, which corresponds to a probability of 0.136, so the conditions are not at low probabilities and out in the tails of the distributions, which is where convergence problems would be more likely. There are other variations of the theme. For instance, Ang 84 suggests a procedure that involves solving a nonlinear equation for β that involves roots of the nonlinear performance function. This would probably work for the example problem with z=2, but would involve numerous calls to the performance function, which largely defeats the purpose of using the Rackwitz-Fiessler approach.


Table 3-9
Intermediate Steps in Iterative Procedure for Finding the MPFP for the Example Problem With z=2

i    β (old)    β (new)    u1       u2       α1       α2       g0
1    .0000      .4116      -.0811   -.4035   -.1972   -.9804   5.0000E+00
2    .4116      .8614      -.4604   -.7281   -.5344   -.8452   2.0037E+00
3    .8614      1.0626     -.8261   -.6683   -.7775   -.6289   5.2594E-01
4    1.0626     1.0771     -.6576   -.8530   -.6106   -.7919   8.7899E-02
5    1.0771     1.0653     -.8927   -.5813   -.8380   -.5457   8.2260E-02
6    1.0653     1.0149     -.4981   -.8843   -.4908   -.8713   1.6813E-01
7    1.0149     1.0138     -.8988   -.4689   -.8866   -.4626   2.9581E-01
8    1.0138     .9097      -.3379   -.8446   -.3714   -.9285   4.4128E-01
9    .9097      .9830      -.8732   -.4515   -.8883   -.4593   6.0682E-01
10   .9830      .8955      -.3236   -.8350   -.3613   -.9324   5.2436E-01
11   .8955      .9827      -.8697   -.4575   -.8850   -.4656   6.4253E-01
12   .9827      .9025      -.3321   -.8392   -.3680   -.9306   5.1232E-01
13   .9025      .9837      -.8716   -.4560   -.8861   -.4635   6.2270E-01
14   .9837      .9005      -.3296   -.8381   -.3661   -.9306   5.1232E-01
15   .9005      .9833      -.8711   -.4563   -.8858   -.4640   6.2847E-01
16   .9833      .9010      -.3301   -.8383   -.3664   -.9305   5.1203E-01
17   .9010      .9834      -.8712   -.4562   -.8859   -.4639   6.2719E-01
18   .9834      .9009      -.3300   -.8383   -.3663   -.9305   5.1205E-01
19   .9009      .9834      -.8712   -.4562   -.8859   -.4639   6.2743E-01
20   .9834      .9009      -.3300   -.8383   -.3663   -.9305   5.1205E-01

The first order reliability method is finding wide use and provides great economies in computer time. Sometimes a single call to the performance function, which is analogous to one Monte Carlo trial, can take an appreciable amount of computer time, such as in problems involving nonlinear finite element analysis. FORM (and its related procedures) is ideally suited to such problems, and Monte Carlo simulation can be impractical in such situations. There are variations on the theme, such as second order reliability methods and using Monte Carlo simulation to sample about the MPFP in order to improve the accuracy. As computer time has become cheaper and computers faster, Monte Carlo simulation is returning from disfavor. One problem with FORM is that it cannot account for actions taken during the lifetime, such as inspections and repairs. This can be treated as a natural part of Monte Carlo simulation.


4
DEVELOPMENT OF PROBABILISTIC MODELS FROM DETERMINISTIC BASICS
Having discussed underlying deterministic lifetime models and statistical background, we are now ready to discuss the development of probabilistic models of component lifetime. The underlying deterministic bases were discussed in Section 2, and statistical background was provided in Section 3. It is important to remember that, although there are advantages to this approach, there are also pitfalls, as discussed in the Introduction and the Summary sections. The simple deterministic example problems of Section 2.3 are analyzed probabilistically, in order to provide simple results, and then a more realistic example problem is provided in Section 5.

4.1 Discussion

Once a deterministic lifetime model is available, it can be cast into a probabilistic model by simply:

1. Defining which of the inputs are deterministic and which are probabilistic.

2. Defining the values of the deterministic inputs.

3. Characterizing the statistical distributions of the input random variables (defining the distribution type and the parameters of the distributions). This can often be done based on data from the literature, or a testing program can be performed.

4. Using the deterministic lifetime model in conjunction with Monte Carlo simulation to generate a set of failure times (alternatively, using the lifetime model as the performance function in FORM to obtain the failure probability as a function of time).

5. Analyzing the set of failure times to obtain the failure probability as a function of time.

The deterministic model provides a relation such as

lifetime = function (input variables)
Eq. 4-1

Equation 2-26 is an example for fatigue crack growth lifetime. This equation is repeated here for convenience.

Nf = [1 / (((m−2)/2) Cf (Δσ)^m π^(m/2))] × [1/ao^((m−2)/2) − 1/ac^((m−2)/2)]    Eq. 2-26


This equation is particularly simple. Usually, the lifetime model cannot be written explicitly and will involve numerical calculations for its evaluation. If all of the inputs (Cf, Δσ, ao, ac, and m) are precisely known (and the underlying model is correct), then this equation can be used to compute a precise number of cycles to failure. However, the inputs are seldom known precisely, and some of them may be subject to inherent randomness, such as scatter in fatigue crack growth data. To transform the deterministic model to a probabilistic one, it must be decided which of the inputs are to be considered as random, and the statistical distributions of these random inputs must then be characterized. The decision on what to treat as random depends on the inputs that are subject to the greatest scatter and/or uncertainty. If an input is well known or if its value does not have a strong influence on the lifetime, then take it to be deterministic.

There are various types and sources of uncertainty. The following quote is from Ang 84 (page 384): "Uncertainties in engineering may be associated with physical phenomena that are inherently random or with predictions ... performed under conditions of incomplete or inadequate information." (italics added)

Some inputs are inherently random. Fatigue data is an example. Once sufficient data has been obtained, going out and getting more data will not alter the distribution of the crack growth rate. (Let's not get into the discussion of what is sufficient data.) Alternatively, there may be uncertainty in the value of an input due to lack of data. The lack of confidence can be expressed in terms like "I am 90% certain that the cyclic stress is less than 30 ksi, and an average (best guess) would be about 23 ksi (158.6 MPa)." This could be used to define the parameters of the distribution of the cyclic stress, once a distribution type was assumed. In this discussion, these sources of scatter/uncertainty will be treated the same.
The statistical distributions of the input random variables are preferably determined by the use of data, hopefully enough data to also provide guidance on the distribution type. Such data can be generated by a test program or, perhaps, obtained from the literature. An example was provided in Section 3.2 of fitting the distribution of the fatigue crack growth rate for a given ΔK. In that case, Cf was found to be well represented by a lognormal distribution with a median of 8.07x10-10 (crack growth rate in inches per cycle and cyclic stress intensity factor in ksi-in1/2) and a σ of 0.16. Figure 4-1 provides another example of the characterization of the randomness of a material input variable. This figure is based on fatigue data that was characterized by Keisler and Chopra [Keisler 95]. Figure 2-1 provided the data and the best fit to it. Now the curve fits to the various quantiles fitted by Keisler and Chopra [Keisler 96] are plotted. This characterizes the scatter in the fatigue data and facilitates a probabilistic analysis of fatigue crack initiation.
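Step 3 of the recipe above often reduces to fitting a distribution type and its parameters to replicate data. Because the log of a lognormal variate is normal, fitting a lognormal of the kind used for Cf takes only a few lines; the data values below are made up purely for illustration, not actual crack growth measurements.

```python
import math

def fit_lognormal(samples):
    """Median and log-standard deviation of a lognormal fit.

    The log of a lognormal variate is normally distributed, so the
    median is exp(mean of ln(samples)) and sigma is the sample
    standard deviation of ln(samples)."""
    logs = [math.log(s) for s in samples]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((v - mean) ** 2 for v in logs) / (n - 1)
    return math.exp(mean), math.sqrt(var)

# Hypothetical replicate crack growth coefficients (illustrative only):
data = [7.2e-10, 8.5e-10, 9.1e-10, 7.8e-10, 8.0e-10]
median, sigma = fit_lognormal(data)
```

With enough replicates, a probability plot of ln(samples) against normal quantiles also provides a check on the lognormal distribution type itself.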


EPRI Licensed Material
Development of Probabilistic Models From Deterministic Basics

Figure 4-1 Probabilistic Treatment of Strain-Life Data for A106B Carbon Steel in Air at 550F (288C) Showing Various Quantiles of the Keisler Curve Fit [From Keisler 95]

Returning to the example of Equation 2-26, in addition to the fatigue crack growth coefficient, the initial crack size, critical crack size, and cyclic stress could also be taken as random. The distribution of the cyclic stress, Δσ, could be obtained from strain gage data, and the amplitude could be random. The critical crack size is a function of the fracture toughness and the peak stresses (see Equation 2-25). The distribution of the fracture toughness could be characterized by a set of replicate tests to provide a set of values that could be analyzed to define the distribution type and the corresponding parameters. The initial crack size, which is an important input, can be based on estimates of what could escape detection, or on a set of observations, or on simulation of the fabrication process. The work of Chapman [Chapman 93] is an example of this latter process, with Khaleel 99 providing curve fits of parameters of crack depth distributions obtained from simulations of the welding process in carbon and austenitic steels. The fatigue crack growth example is discussed further in Section 4.2.1.

The remaining step is to get results (failure probability as a function of time). This can be accomplished by a variety of means. Most often, Monte Carlo simulation is the most straightforward and suitable. Then it is only necessary to program a Monte Carlo simulator. As discussed in Section 3.3.2, this involves sampling each of the random variables on each trial and calculating a value of the failure time. Alternatively, FORM methods can be used, with the deterministic lifetime model serving as the performance function. Examples are provided in the following.


4.2 Simple Example Problems

Two simple example probabilistic problems will be presented to demonstrate the procedures involved. More complex problems only involve more complex lifetime models.

4.2.1 Fatigue Crack Growth in a Large Plate

Once again, the example of fatigue of a large plate will be considered. In this case consider a weld in the plate. Equation 2-26 (which was repeated above) provides the deterministic lifetime model. Consider the material to be 2-1/4 Cr 1 Mo steel at 1,100F (593C), which is cycled at a high enough rate that creep is not a contributor to crack growth. The distribution of the fatigue crack growth coefficient, Cf, is plotted in Figure 3-1. It is lognormal with a median of 8.07x10^-10 and a σ of 0.160. (Fatigue crack growth properties are not strongly influenced by whether the crack is in a weld, heat-affected zone, or base metal, so assume these parameters to be applicable.)

The distribution of initial crack length is an important input to the problem. This distribution could be estimated by sectioning of welds and gathering statistics on the observed crack sizes. This is a direct, but tedious and expensive, way to estimate the distribution. Khaleel, et al., [Khaleel 99] provide results that can be used for estimation of the crack size distribution. Using their results for a 1 inch (25.4 mm) thick weld in a ferritic steel, the initial crack size, ao, is taken to be lognormal with median 0.0519 inches (1.32 mm) and second parameter, σ, of 0.5665. The cyclic stress is assumed to be normally distributed with a mean of 20 ksi (137.9 MPa) and a standard deviation of 4 ksi (27.6 MPa). The stress is considered to cycle between zero and a maximum value equal to the cyclic stress. The value of the fracture toughness will not have a large influence on the results, and at 1,100F (593C) the material will be very tough (high fracture toughness).
It would be difficult to obtain data on the toughness of 2-1/4 Cr 1 Mo at 1,100F (593C) from the literature, and a series of replicate tests to generate data would be expensive and of dubious value. As an example, take the toughness to be Weibull distributed with a mean of 120 ksi-in^1/2 (131.9 MPa-m^1/2) and a standard deviation of 20 ksi-in^1/2 (22.0 MPa-m^1/2). The importance of the fracture toughness on the failure probability can be ascertained by sensitivity studies or by direction cosines if FORM is used. In a realistic situation, the toughness distribution could be assumed (as here), and if this turned out to have an important influence, then more resources could be spent to better define the distribution. The values of the parameters of the Weibull distribution that correspond to the assumed mean and standard deviation of the fracture toughness can be found by trial and error. MATHCAD is useful in this regard. The above assumption gives a ratio of the standard deviation to the mean of 1/6. Table 3-1 shows that for a Weibull variate, this ratio (which is called the coefficient of variation, cov) is given by the expression


cov = {Γ(1 + 2/c) - [Γ(1 + 1/c)]²}^(1/2) / Γ(1 + 1/c)
MATHCAD has a built-in gamma function and a root-finding capability that allows the value of c to be obtained. The resulting value is c=7.036. Once c is known, b is easily found to be 128.25 ksi-in^1/2 from the expression mean = bΓ(1 + 1/c). Table 4-1 summarizes the input random variables for the example problem.
Table 4-1
Random Variables for the Fatigue Crack Growth Example Problem

Name | Symbol | Distribution Type | Parameters
initial crack size | ao | lognormal | median=0.0519 inches, σ=0.5665
fatigue crack coefficient | Cf | lognormal | median=8.07x10^-10, σ=0.160
cyclic stress | Δσ | normal | mean=20 ksi, std dev=4 ksi
fracture toughness | KIc | Weibull | b=128.25 ksi-in^1/2, c=7.036
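The trial-and-error determination of the Weibull parameters described above can be scripted. The sketch below (plain Python standing in for the MATHCAD root-find; the bisection bracket is an assumption of this sketch) solves the cov relation for c and then recovers b from mean = bΓ(1 + 1/c):

```python
import math

# Solve for the Weibull shape c whose coefficient of variation matches the
# target cov = std/mean = 20/120 = 1/6, then get the scale b from
# mean = b*Gamma(1 + 1/c).  Mirrors the MATHCAD root-find described in the text.
def weibull_cov(c):
    g1 = math.gamma(1.0 + 1.0 / c)
    g2 = math.gamma(1.0 + 2.0 / c)
    return math.sqrt(g2 - g1 * g1) / g1

target = 20.0 / 120.0
lo, hi = 1.0, 50.0          # assumed bracket; cov decreases as c increases here
for _ in range(100):        # plain bisection, no external libraries needed
    mid = 0.5 * (lo + hi)
    if weibull_cov(mid) > target:
        lo = mid            # cov still too large: need a larger c
    else:
        hi = mid
c = 0.5 * (lo + hi)
b = 120.0 / math.gamma(1.0 + 1.0 / c)
print(f"c = {c:.3f}, b = {b:.2f} ksi-in^1/2")   # text reports c=7.036, b=128.25
```

The same two lines of algebra recover the Table 4-1 entries to within rounding.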

The distribution of the critical crack size is of interest in its own right. It can be obtained by use of Equation 2-24. Using the mean values of the fracture toughness and cyclic stress, the corresponding value of the critical crack size is

ac = (1/π)(120/20)² = 11.46 inches (0.29 m)

Since this is so much larger than the median crack depth (0.0519 inches = 1.32 mm), it is not expected to be an important factor. The distribution of the critical crack size, as obtained by Monte Carlo simulation with 10^4 trials using MATHCAD, is plotted in Figure 4-2.


Figure 4-2 Cumulative Distribution of Critical Crack Size for the Fatigue Crack Growth Example Problem (10^4 Trials)

This figure shows that the critical crack size is nearly lognormally distributed, which is a consequence of the central limit theorem [Ang 75, Hahn 67, Ayyub 97], which states that the sum of random variables will tend to be normal, regardless of the type of distribution of the random variables. A corollary to this is that the product of random variables raised to powers will tend to be lognormal, regardless of the distribution types of the random variables. The probability that the critical crack size is less than 1 inch (25.4 mm) is 10^-4, and the median initial crack size is about 0.050 inches (1.27 mm).

Figure 4-3 is a plot of the cumulative failure probability as a function of the number of fatigue cycles as obtained with Monte Carlo simulation with 10^4 trials. Figure 4-4 is a plot on lognormal scales of the cumulative failure probability for the base case of the Weibull parameter b=128.25 ksi-in^1/2 (140.95 MPa-m^1/2) along with the case of b=128.25/2 ksi-in^1/2 (140.95/2 MPa-m^1/2). The results in this figure were obtained with 10^6 trials. This figure shows that the failure probability is not strongly influenced by the value of b once the number of cycles exceeds about 5x10^4, which corresponds to a failure probability of about 10^-3. So at cumulative failure probabilities exceeding 10^-3 the toughness has a negligible influence. However, for component reliabilities, the interest is usually in lower failure probabilities, in which case the distribution of the fracture toughness can be important.
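The Monte Carlo procedure behind results of this kind can be sketched as follows. The Paris exponent m is not given in this excerpt, so m = 3 is an assumed placeholder, and Python stands in for the MATHCAD program; the absolute lifetimes are therefore illustrative only, but the sampling logic follows the Table 4-1 inputs:

```python
import math
import random

# Sketch of a Monte Carlo simulator for the fatigue-crack-growth lifetime of
# Section 4.2.1.  The Paris exponent m is NOT given in this excerpt; m = 3 is
# an assumed placeholder, so the absolute lifetimes are illustrative only.
M = 3.0
N_TRIALS = 10_000
random.seed(1)

def sample_lifetime():
    # Input random variables from Table 4-1
    a0 = random.lognormvariate(math.log(0.0519), 0.5665)   # initial crack size, in
    cf = random.lognormvariate(math.log(8.07e-10), 0.160)  # Paris coefficient
    # Cyclic stress is tensile; a vanishingly unlikely non-positive draw is clipped
    ds = max(random.normalvariate(20.0, 4.0), 1e-3)        # cyclic stress, ksi
    kic = random.weibullvariate(128.25, 7.036)             # toughness, ksi-in^1/2
    # Critical size for a through crack in a large plate: a_c = (K/s)^2 / pi
    ac = (kic / ds) ** 2 / math.pi
    if ac <= a0:
        return 0.0  # fails on the first load application
    # Closed-form integration of da/dN = cf*(ds*sqrt(pi*a))^m for m > 2
    p = 1.0 - M / 2.0
    return (ac ** p - a0 ** p) / (cf * p * (ds * math.sqrt(math.pi)) ** M)

lives = sorted(sample_lifetime() for _ in range(N_TRIALS))
# Empirical cumulative failure probability at selected cycle counts
for n in (1e4, 1e5, 1e6):
    pf = sum(1 for t in lives if t <= n) / N_TRIALS
    print(f"P(failure by {n:.0e} cycles) = {pf:.4f}")
```

Sorting the sampled lifetimes gives the empirical cumulative distribution directly, which is all that a plot such as Figure 4-3 requires.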


Figure 4-3 Lognormal Probability Plot of the Failure Probability as a Function of the Number of Cycles for the Fatigue Example Problem (10^4 Trials)

Figure 4-4 Plot on Lognormal Scales of the Distribution of Cycles to Failure for the Example Fatigue Crack Growth Problem and the Same Problem With the Mean and Standard Deviation Divided by Two (10^6 Trials)


Figure 4-5 presents the results with 10^6 trials at lower failure probabilities. The distribution of the fracture toughness is seen to have a large effect at small numbers of cycles. At 10,000 cycles there is over two orders of magnitude difference in the failure probabilities. At very few cycles, the lower toughness result is a failure probability of about 8x10^-6, whereas the higher toughness result appears to be far below 10^-7. If the interest is in lifetimes of tens of thousands of cycles, then additional information on the statistics of the fracture toughness would be worthwhile to gather, and greater attention should be paid to the appropriate failure criterion.

Figure 4-5 Cumulative Failure Probabilities in the Lower Probability Region of the Fatigue Crack Growth Example Problem With Two Distributions of the Fracture Toughness (10^6 Trials)

Figure 4-6 provides results for the problem with b=128.25 ksi-in^1/2 (140.95 MPa-m^1/2) as obtained by the Rackwitz-Fiessler procedure depicted in Figure 3-12. The problem ran quickly and gave results to lower failure probabilities than shown in Figure 4-5. Figure 4-6 provides results to probabilities that are an order of magnitude lower than the Monte Carlo. Hence, 10^7 trials would have been necessary to obtain similar results with Monte Carlo. The data points are seen to flatten out below 10^4 cycles, which is probably similar to the flattening seen in Figure 4-5 for the results with b=128.25/2. The failure probability at N=0 can be evaluated by Monte Carlo simulation, which involves only calculating K for the initial crack size and comparing it to KIc. Rackwitz-Fiessler and numerical integration were attempted with no success. They could probably be made to apply to this problem, but Monte Carlo is easy to perform. Monte Carlo simulation with 10^9 trials took about 100 minutes on a 400 MHz Pentium II PC and gave 137 failures. This gives a point estimate of the failure probability of 1.37x10^-7. Hence, the flattening out of the results at low cycles in Figure 4-6 is due to the influence of the finite critical crack size. The implications of this on the hazard function are discussed in Section 4.2.3.


Figure 4-6 Cumulative Failure Probabilities in the Lower Probability Region for the Fatigue Crack Growth Example Problem. Solid Line is Monte Carlo With 10^6 Trials, Points are Results From Rackwitz-Fiessler

As an example of the use of importance sampling, the fatigue example problem was run with 1,000 trials and various shifts in the distributions of the random input variables. It was hoped that this would serve to better identify the failure probability for small numbers of cycles. Figure 4-7 summarizes the results obtained with 1,000 trials while employing the following shifts:

initial crack size and fatigue crack growth coefficient: median shifted up 1 or 2 standard deviations, σ held constant
cyclic stress: mean shifted up 1 or 2 standard deviations, standard deviation held constant
fracture toughness: b shifted down 1 or 2 standard deviations, c held constant

Figure 4-7 also presents the Rackwitz-Fiessler results from Figure 4-6. Figure 4-7 shows good agreement between the importance sampling results and the Rackwitz-Fiessler result, except at small numbers of cycles. The Rackwitz-Fiessler results are seen in Figure 4-6 to agree well with the results of Monte Carlo simulation with 10^6 trials and no importance sampling. Hence, the Monte Carlo simulations with and without importance sampling agree well with one another. Comparable results were obtained with importance sampling with only one-thousandth as many trials, which demonstrates the substantial savings that can be made with importance sampling. The results with a shift of two standard deviations provide results to very low failure probabilities, but the flattening of the curve at small numbers of cycles is not captured. It is known that this flattening is real, because the Monte Carlo simulation without importance sampling gave 137 failures in 10^9 trials. Thus, it is apparent that importance sampling can provide substantial economies, but may fail to capture some important features. In this particular instance, this shortcoming is probably due to not really having many failures in the simulation that were influenced by being initially close to failure, a feature that was not captured by the shifts in the importance sampling.

Figure 4-7 Cumulative Failure Probabilities in the Lower Probability Region for the Fatigue Crack Growth Problem as Obtained Using Importance Sampling With 1000 Trials With Different Shifts in Parameters of Input Random Variables. Points are From Rackwitz-Fiessler.
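The mechanics of mean-shift importance sampling, sampling from shifted distributions and reweighting each trial by the ratio of the true density to the shifted density, can be illustrated on a toy problem with a known answer. This is a generic sketch, not the actual shifted fatigue simulation:

```python
import math
import random

# Mean-shift importance sampling demonstrated on a toy problem with a known
# answer: P(Z > 4) for standard normal Z (exact value about 3.17e-5).
# Sampling from N(shift, 1) and reweighting each trial by the density ratio
#   w(z) = phi(z) / phi(z - shift) = exp(shift^2/2 - shift*z)
# concentrates the trials in the failure region, as in the shifted
# distributions used for the fatigue example.
random.seed(2)
SHIFT = 4.0
N = 10_000

total = 0.0
for _ in range(N):
    z = random.normalvariate(SHIFT, 1.0)       # sample from the shifted density
    if z > 4.0:                                # "failure" indicator
        total += math.exp(SHIFT ** 2 / 2 - SHIFT * z)  # likelihood-ratio weight
estimate = total / N

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(Z > 4) via erfc
print(f"IS estimate = {estimate:.3e}, exact = {exact:.3e}")
```

Crude Monte Carlo would need on the order of 10^7 trials to see this event at all; the shifted sampler resolves it with 10^4, which is the economy described in the text.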

Returning to the FORM results, this method also gives the direction cosines, which serve as measures of the sensitivity of the failure probability to the values of the random variables. Figure 4-8 provides a plot of the direction cosines for the fatigue crack growth example problem. This figure shows many interesting features. For small numbers of cycles to failure, the problem is dominated by the fracture toughness, and the fatigue crack growth coefficient has no influence (its direction cosine is close to zero). The only negative direction cosine is for the fracture toughness. This is because in order to get failure in relatively few cycles, the toughness must be low (below the mean, so the direction cosine is negative), and the cyclic stress, crack size, and fatigue crack growth coefficient must be large (above the mean, so the direction cosines are positive). At larger numbers of cycles, the direction cosine for the toughness becomes close to zero, which is a consequence of the insensitivity of the lifetime to the critical crack size, as discussed in Section 2.3.1 in conjunction with Figure 2-11. The initial crack size and cyclic stress are always important.


The example of fatigue of a crack in a very large plate demonstrates the principles involved in a probabilistic analysis of lifetime, even though it is a particularly simple fracture mechanics problem. In more complex problems, the calculation of the critical crack size is not so easy, and the lifetime relation (the counterpart of Equation 2-26) is not as simple. Most often, the lifetime evaluation itself involves numerical calculations, but the principles are no more complicated. It's all in the performance function.

Figure 4-8 Direction Cosines for the Fatigue Crack Growth Example Problem as a Function of the Number of Cycles to Failure

4.2.2 Creep Damage in a Thinning Tube

The deterministic basis of this superheater/reheater tube problem was covered in Section 2.3.2, where a closed form expression for the lifetime, tR, was derived. This relation was

tR = tT { 1 - [1 + (λ - 1)(tC/tT)]^(-1/(λ-1)) }    Eq. 2-31

In this relation, tC is the time for creep rupture if no thinning occurred, and tT is the time for the wall thickness to go to zero if thinning is the only degradation mechanism. For a wall thinning rate of r and initial thickness ho, tT is ho/r. For a given stress and temperature, and for the linear LMP relation of Equation 2-29, the expression for tC is

tC(σ, T) = 10^(-C + A/(BT)) σ^(-1/(BT))    Eq. 2-30


In this expression, C comes from the definition of the Larson-Miller parameter in Equation 2-3, and is equal to 20. A and B come from the linear fit to creep rupture data, Equation 2-29. The value of λ in Equation 2-31 is 1/(BT).

Since there is considerable scatter in creep rupture data, as shown in Figure 2-2, and there would be expected to be great uncertainty and scatter in wall thinning rates, a probabilistic analysis is appropriate. The random variables to include in the analysis are the thinning rate, r, and some constant associated with the creep rupture. Take C=20, because this is the value used in the original analysis employing the Larson-Miller parameter, and then take the slope of the fit to the data to be constant. This fixes B. Then consider A to be a random variable with distribution type and parameters obtained from data. It would be possible to use the characterization of the scatter employed in Grunloh, et al., [Grunloh 92], but this example will proceed independently of this and use data available from the literature. This serves as an example of developing statistical distributions from information available in the literature, and will lead to a result somewhat different than obtained by Grunloh, et al.

Van Echo 66 contains a tabulation of creep rupture data for chromium-molybdenum steels, and data for wrought 1 Cr 1/2 Mo steel with stresses less than or equal to 30 ksi was obtained from this reference. This provides 207 data points, which are temperature, stress and time to rupture. This data is plotted in Figure 4-9 in terms of log10(σ) (σ in ksi) and LMP = Ta[20 + log10(tR)]/1000 (Ta in degrees Rankine, stress in ksi, and tR the rupture time in hours). Also shown in Figure 4-9 is the least squares linear fit to the data. This fit is log(σ) = A - B(LMP), with A=5.645 and B=0.1264. The scatter in Figure 4-9 can be characterized by fixing B and calculating a value of A for each data point.
This will provide 207 values for evaluation of the distribution of A. Figure 4-10 presents a normal probability plot of the cumulative distribution of A. The straight line corresponds to the normal distribution with the mean and standard deviation as obtained from the data, and a good fit to the data is observed. Hence, A is well represented by a normal distribution with mean and standard deviation obtained directly from the data. These values are 5.645 and 0.0579, respectively. Note that this is a tight distribution; the cov is 0.01. Since the random variable A appears as an exponent in the expression for tC, as seen in Equation 2-30, and A is normally distributed, tC is lognormally distributed.
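The procedure of fixing B and backing out one value of A per data point can be sketched as follows. Synthetic data generated around the reported fit is an assumed stand-in for the Van Echo tabulation, and the LMP range used is illustrative:

```python
import math
import random

# Sketch of the fitting procedure: fix the slope B of log(sigma) = A - B*LMP,
# back out one value of A per data point, and summarize the scatter in A.
# Synthetic data generated around the reported fit (A=5.645, B=0.1264,
# std of A about 0.0579) stands in for the 207-point Van Echo tabulation.
random.seed(3)
A_TRUE, B_FIX, SD = 5.645, 0.1264, 0.0579

data = []
for _ in range(207):
    lmp = random.uniform(32.0, 40.0)                        # assumed LMP range
    log_sigma = A_TRUE - B_FIX * lmp + random.gauss(0.0, SD)
    data.append((lmp, log_sigma))

# One A per point with B fixed: A_i = log(sigma_i) + B*LMP_i
a_vals = [ls + B_FIX * lmp for lmp, ls in data]
mean_a = sum(a_vals) / len(a_vals)
var_a = sum((a - mean_a) ** 2 for a in a_vals) / (len(a_vals) - 1)
print(f"mean A = {mean_a:.3f}, std A = {math.sqrt(var_a):.4f}")
```

With real data the resulting A values would then be placed on a normal probability plot, as in Figure 4-10, to check the distribution type.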


Figure 4-9 Creep Rupture Data for 1 Cr 1/2 Mo as Obtained From Van Echo 66 With Least Squares Linear Fit (Stress in ksi, Ta in Degrees Rankine, tR in Hours)

Figure 4-10 Cumulative Distribution of the Random Variable A Used to Describe the Scatter in the Larson-Miller Data for 1 Cr 1/2 Mo Steel


The other random variable is the wall thinning rate, r. This variable will be dependent on the problem at hand, and can be subject to wide variation depending on, among other things, the thinning mechanism. Wall thinning can occur due to ID oxidation, OD oxidation and OD erosion. Figure 13 of Viswanathan, et al. [Viswanathan 92] shows the wall loss to be a multiple of the steamside oxide thickness, with large amounts of scatter. Figure 4 of this same reference shows considerable scatter in the steamside oxidation rate, and a thinning rate that is not constant in time. Hence, a very large variance could be expected in wall thinning rates. Some estimate of a mean rate can be found from Viswanathan 92, although the information is for 2-1/4 Cr 1 Mo. From Figure 13 of Viswanathan 92 a mean wall loss of 5 times the steamside oxide thickness appears reasonable. Figure 4 of Viswanathan 92 provides a mean line that gives a thickness of about 5 mils (0.005 inches = 0.13 mm) for 3x10^4 hours at 1,000F (537.8C). This corresponds to a thinning rate of 1.7x10^-7 inches per hour, or 1.5 mils/year (38 μm/yr). To include fireside effects, multiply this by 5, which gives 7.5 mils per year (0.19 mm/yr). Assume the thinning rate to be lognormally distributed, with this as a median value. The scatter in the thinning rate is large, so assume that the coefficient of variation (cov) is large, say 2. This means that the standard deviation of the thinning rate is 2 times the mean. From Table 3-1, it is seen that the cov for a lognormal variate depends on σ, but not on the median. The σ in terms of the cov for a lognormal distribution is given by the expression

σ = [ln(1 + cov²)]^(1/2)

This results in a value of σ of (ln 5)^(1/2), which equals 1.27.
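The conversion from an assumed median and cov to lognormal parameters can be checked numerically; a minimal sketch, using the stated median thinning rate and cov:

```python
import math
import random

# Convert (median, cov) into lognormal parameters for the thinning rate and
# confirm the cov empirically.  sigma = sqrt(ln(1 + cov^2)); cov = 2 gives
# sigma = sqrt(ln 5) = 1.27, as in the text.
median_rate = 0.0075   # in/yr, the fireside-inclusive median from the text
cov = 2.0
sigma = math.sqrt(math.log(1.0 + cov * cov))
print(f"sigma = {sigma:.3f}")

random.seed(4)
rates = [random.lognormvariate(math.log(median_rate), sigma)
         for _ in range(100_000)]
m = sum(rates) / len(rates)
s = math.sqrt(sum((x - m) ** 2 for x in rates) / (len(rates) - 1))
print(f"empirical cov = {s / m:.2f}")   # should be near 2
```

The heavy right tail of this distribution is what drives the short-time failure probabilities in the results that follow.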

All of the inputs are now available to address the problem analyzed deterministically in Section 2.3.2. The results obtained here will differ from those in Section 2.3.2, because different constants in the Larson-Miller curve fit are used. Consider again a 2.125 inch (54 mm) diameter 1 Cr 1/2 Mo tube with a wall thickness of 0.40 inches (10.2 mm) operating at 1,000F (537.8C) and 2,400 psi (16.5 MPa). The following provides some intermediate results:

σo = 6.38 ksi (44 MPa) (same as in Section 2.3.2)
λ = 1/(BT) = 1/(1460x0.1264x10^-3) = 5.42
median CR = 10^(-C + A50/(BT)) = 3.88x10^10
tC (median properties) = 3.88x10^10/6.38^5.42 = 1.68x10^6 hours = 192 years
tT (median properties) = ho/r50 = 0.40/0.0075 = 53.3 years

Since the thinning failure time (tT) is so much smaller than the creep rupture time (tC), the problem will be dominated by the distribution of the thinning rate, especially at small times. Monte Carlo simulation can easily be used to evaluate the failure probability of the tube as a function of time. Alternatively, FORM procedures could be used; such procedures are especially easy to apply to this problem because analytical derivatives of the performance function (Equation 2-31) are available and the random variables are of the normal type (normal and lognormal). Figure 4-11 provides the results of a Monte Carlo simulation with 10^4 trials plotted on lognormal probability scales. This figure shows the expected behavior of the failure probability at short times (and low probabilities) in that it is dominated by the thinning rate, because the two lines in the figure are close to one another. This suggests that in this example problem, more care should be taken in the definition of the distribution of the thinning rate than in the distribution of the creep rupture properties. The deviation from linearity for the thinning-only result is due to the finite number of samples taken, because tT is lognormally distributed. This is because the thinning rate (r) is lognormally distributed and tT = ho/r, so tT is also lognormally distributed (recall from Section 3.3.1 that a lognormal variate raised to a power is also lognormally distributed).

Figure 4-11 Results of Example Problem of Creep Damage in a Thinning Tube as Obtained by Monte Carlo Simulation With 10^4 Trials
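A Monte Carlo simulation of this kind can be sketched as follows, using the constants of the worked example; the hours-per-year conversion and the symbol r for the thinning rate are assumptions of this sketch:

```python
import math
import random

# Monte Carlo for the thinning-tube example: Eq. 2-31 with a lognormal
# thinning rate r and a normal Larson-Miller intercept A.  Constants follow
# the worked example (C=20, B=0.1264e-3 per deg R, T=1460 R, sigma0=6.38 ksi,
# h0=0.40 in); lifetimes are computed in hours and reported in years.
random.seed(5)
C, B, T = 20.0, 0.1264e-3, 1460.0
SIGMA0, H0 = 6.38, 0.40
LAM = 1.0 / (B * T)                  # = 5.42, the lambda of Eq. 2-31
HRS_PER_YR = 8766.0                  # assumed average-year conversion

def sample_life_years():
    a = random.normalvariate(5.645, 0.0579)             # LMP intercept A
    r = random.lognormvariate(math.log(0.0075), 1.27)   # thinning rate, in/yr
    tc = 10.0 ** (-C + a / (B * T)) * SIGMA0 ** (-LAM)  # creep-only life, hr
    tt = H0 / r * HRS_PER_YR                            # thinning-only life, hr
    # Eq. 2-31: combined life with the stress rising as the wall thins
    tr = tt * (1.0 - (1.0 + (LAM - 1.0) * tc / tt) ** (-1.0 / (LAM - 1.0)))
    return tr / HRS_PER_YR

lives = sorted(sample_life_years() for _ in range(10_000))
print(f"median life = {lives[5000]:.1f} years")
print(f"P(failure within 10 years) = {sum(1 for t in lives if t < 10.0) / 1e4:.4f}")
```

Note that tr is always less than tt, and approaches tt when creep is slow (tc large), which is the limiting behavior the deterministic discussion relies on.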

4.2.3 Hazard Rates

As discussed in Section 3.1, the probabilistic information often of most use is the hazard or failure rate, which is the probability of failure within the next time increment, given that failure has not already occurred. The hazard rate as obtained by Monte Carlo simulation is usually much less continuous than the cumulative probability results that have been presented in the above examples, because numerical differentiation or binning into histograms is involved. This is especially true at the low failure rates generally of interest. The hazard functions for the example problems are developed here. Starting with the fatigue crack growth problem, Figure 4-6 provides the cumulative failure probability as a function of the number of fatigue cycles and shows that the Rackwitz-Fiessler data points provide a good representation for cycles greater than about 10,000. For fewer cycles, the failure probability is constant at about the value obtained from the Monte Carlo simulation, 10^-7. Hence, the hazard function is 10^-7 per cycle on the first cycle, and is then essentially zero until about 10^4 cycles.


Figure 4-6 shows that the cumulative failure probability is a smooth curve on the log scale employed, and a good estimate of the slope of this curve is just a straight line between the data points. This slope is most suitable at the mid-point between the points. The hazard function can then be estimated by the approach outlined in Table 4-2. This procedure appears suitable to this problem, and its applicability to other problems should be assessed before it is used elsewhere.
Table 4-2
Steps in Estimating the Hazard Function for the Fatigue Crack Growth Example Problem

N, kc | Pf | L(1) | N(2) | ΔL | P(3) | h(4), cyc-1
10 | 3.06x10^-8 | 0.485 | | | |
 | | | 12.5 | 1.42 | 1.57x10^-7 | 4.46x10^-11
15 | 8.04x10^-7 | 1.905 | | | |
 | | | 17.5 | 1.00 | 2.55x10^-6 | 5.11x10^-10
20 | 8.08x10^-6 | 2.907 | | | |
 | | | 22.5 | 0.731 | 1.87x10^-5 | 2.73x10^-9
25 | 4.35x10^-5 | 3.638 | | | |
 | | | 27.5 | 0.559 | 8.27x10^-5 | 9.25x10^-9
30 | 1.57x10^-4 | 4.197 | | | |
 | | | 32.5 | 0.440 | 2.61x10^-4 | 2.30x10^-8
35 | 4.33x10^-4 | 4.637 | | | |
 | | | 37.5 | 0.356 | 6.53x10^-4 | 4.66x10^-8
40 | 9.83x10^-4 | 4.993 | | | |
 | | | 42.5 | 0.295 | 1.38x10^-3 | 8.14x10^-8
45 | 1.94x10^-3 | 5.288 | | | |
 | | | 47.5 | 0.249 | 2.59x10^-3 | 1.29x10^-7
50 | 3.44x10^-3 | 5.537 | | | |

NOTES
(1) L = log10(10^8 Pf)
(2) Midway point between the line above and the line below
(3) Linear interpolation on a log scale [10^-8 x 10^((Labove + Lbelow)/2)]
(4) See text


The hazard function, h, is given in Equation 3-7 as

h(N) = p(N)/[1 - P(N)] = (dP/dN)/[1 - P(N)]    Eq. 4-2

The slope of the cumulative probability on a log scale is approximately ΔL/ΔN, where ΔL is given in the above table and ΔN is 5,000 cycles. Hence, d(log P)/dN ~ ΔL/ΔN, and dP/dN can be estimated from the following expression:

dP/dN = [dP/d(log P)][d(log P)/dN] = P d(log P)/dN ~ P(ΔL/ΔN)    Eq. 4-3

Equation 4-2 in conjunction with Equation 4-3 and the values in Table 4-2 provide the values of the hazard function in the right-hand column of the table. These results are plotted in Figure 4-12, which includes the value of the hazard function on the first cycle.
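The Table 4-2 procedure can be expressed compactly. The sketch below reproduces the tabulated hazard values from the cumulative probabilities (Pf values per the table, with the last two entries read as 1.94x10^-3 and 3.44x10^-3):

```python
import math

# The Table 4-2 procedure: from cumulative failure probabilities Pf(N) on a
# log scale, estimate the hazard at the midpoints as h = P*(dL/dN)/(1 - P),
# with dP/dN approximated by P*(delta L)/(delta N) as in Eq. 4-3.
N_KC = [10, 15, 20, 25, 30, 35, 40, 45, 50]                  # kilocycles
PF = [3.06e-8, 8.04e-7, 8.08e-6, 4.35e-5, 1.57e-4,
      4.33e-4, 9.83e-4, 1.94e-3, 3.44e-3]

def hazard_midpoints(n_kc, pf, dn_cycles=5000.0):
    logp = [math.log10(p) for p in pf]
    out = []
    for i in range(len(pf) - 1):
        dl = logp[i + 1] - logp[i]                       # delta L between rows
        p_mid = 10.0 ** (0.5 * (logp[i] + logp[i + 1]))  # log-interpolated P
        h = p_mid * dl / dn_cycles / (1.0 - p_mid)       # Eq. 4-2 with Eq. 4-3
        out.append((0.5 * (n_kc[i] + n_kc[i + 1]), h))
    return out

for n_mid, h in hazard_midpoints(N_KC, PF):
    print(f"N = {n_mid:.1f} kc: h = {h:.2e} per cycle")
```

The first and last midpoint values agree with the 4.46x10^-11 and 1.29x10^-7 per cycle entries in the right-hand column of Table 4-2.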

Figure 4-12 Hazard Function vs. Cycles for the Fatigue Crack Growth Example Problem

This figure shows the interesting feature of a high hazard on the first cycle, with a much lower hazard in succeeding cycles. It takes 50,000 cycles before the hazard again reaches the value it had on the first cycle. (This is why we perform leak and proof tests on new equipment under well-controlled and careful conditions.)

The above wall thinning example is also useful in this discussion of hazard rates. At low failure probabilities, the problem is dominated by the thinning, and the failure time is therefore lognormally distributed with known parameters (i.e., a median of 53.3 years and a σ of 1.27). Hence, the hazard function is known in closed form at low failure probabilities. Figure 4-13 presents the closed form failure rate along with the results for the smallest 2,000 failure times from 10^6 trials. Very good agreement is observed.

Figure 4-13 Failure Rate as a Function of Time for the Thinning Problem Only. Histogram Results Are for the Smallest 2,000 Failure Times in 10^6 Trials, the Line Is for the Closed Form Result.

4.3 Inspection Detection Probabilities

Once a deterministic model has been cast into a probabilistic form, it is possible to address issues that cannot be handled in a deterministic context, such as the influence of inspection and proof testing on component reliability. The influence of inspection enters through the probability of detecting a defect as a function of its size. This discussion concentrates on inspections for crack-like defects, and the probability of not detecting a crack as a function of its size is considered. This is denoted as PND(a). [a is the crack dimension that most strongly influences the value of the stress intensity factor. For surface cracks, this is the crack depth (not the surface length).] The nondetection probability is a very strong function of the inspection technology employed, the material, and the crack morphology (tightness, roughness, etc.). All of these things remaining constant, there will still be wide variations between individual inspectors. Hence, the nondetection probability to be used in a given application must be carefully selected. The best way is to have a database for your inspection procedure and material. Such databases are not numerous. Example nondetection probabilities for dye penetrant, ultrasonic, eddy current, and radiography are presented by Rummel, et al. [Rummel 89], but the general applicability of their results must be kept in mind. The Pacific Northwest National Laboratory (PNNL) has investigated nondetection probabilities for ultrasonic inspections of austenitic and ferritic steels commonly used in power plant piping. See, for instance, Khaleel 94, 95. To represent their results, they use the following functional form


PND(a) = ε + (1/2)(1 - ε) erfc[ν ln(a/a*)]    Eq. 4-4

In this relation, ε is the probability of missing a very large crack (which will not be zero), a* is the crack depth that has about a 50% chance of being found, and ν controls the slope of the PND(a) curve. Results are suggested by Khaleel et al. for various qualities of inspection: outstanding, good, and marginal. According to Khaleel 95, PNNL experts use a four-tier model to judge inspection qualities. The PNNL model allows for performance levels possible with future technology and procedures. The four tiers are as follows:

Marginal: a detection performance described by this curve would represent a team using given equipment and procedures that would have only a small chance of passing an Appendix VIII (of the ASME Boiler and Pressure Vessel Code) performance demonstration.

Good: a detection performance described by this curve corresponds to the better performance levels in the PNNL round robin.

Very good: this curve corresponds to a team with given equipment and procedures that significantly exceed the minimum level of performance needed to pass an Appendix VIII performance test.

Advanced: this curve describes a level of performance significantly better than expected from present-day teams, equipment, and procedures that have passed an Appendix VIII-type of performance demonstration. (This performance level implies advanced technologies and/or improved procedures that could be developed in the future.)

Using the three-tier model of Khaleel et al., the constants [Khaleel 94, 95] in Equation 4-4 for the nondetection probabilities for the various qualities, materials and types of cracks are summarized in Table 4-3. The value of a* scales with the thickness, h.


Table 4-3
Parameters of the Equation Describing the Non-Detection Probability

Inspection Quality | Parameter | Ferritic Fatigue Cracks | Austenitic Fatigue Cracks | SCC Cracks
Outstanding | a* | 0.05h | 0.05h | 0.05h
Outstanding | ν | 1.6 | 1.6 | 1.6
Outstanding | ε | 0.005 | 0.005 | 0.005
Good | a* | 0.15h | 0.15h | 0.4h
Good | ν | 1.6 | 1.6 | 1.6
Good | ε | 0.02 | 0.02 | 0.10
Marginal | a* | 0.4h | 0.4h | 0.65h
Marginal | ν | 1.6 | 1.6 | 1.6
Marginal | ε | 0.10 | 0.10 | 0.25
Figure 4-14 provides a plot of the nondetection probability for fatigue cracks in a ferritic plate as defined by the constants in Table 4-3. The plot is on lognormal probability scales, which makes the results a straight line when PND >> ε. The parameter ν controls the slope of the nondetection curve when PND >> ε. Since ν is the same for all qualities, the lines are parallel except where the value of ε starts to have an influence.


Figure 4-14 Nondetection Probability as a Function of Crack Depth Divided by Plate Thickness for Fatigue Cracks in Ferritic Steel for Three Qualities of Ultrasonic Inspection

The nondetection probabilities are used in a probabilistic fracture mechanics analysis to remove cracks that are detected. For instance, if ppre(a) is the probability density function of cracks before an inspection with nondetection probability PND(a), and it is assumed that all detected cracks are removed, then the probability density function of cracks after the inspection is given by the expression

    ppost(a) = ppre(a) PND(a) / ∫ ppre(a) PND(a) da        Eq. 4-5

(The term in the denominator is merely a normalizing constant so that the probability density function for post-inspection crack depths integrates to unity.) The inspection has a beneficial effect, because it will tend to remove the larger cracks. The benefit of the inspection depends on the nondetection probability and when the inspections are performed.

As an example of the influence of inspection on calculated failure probability, consider the fatigue crack growth problem of Section 4.2.1. This problem is simple enough that the results presented there were generated by Monte Carlo simulation using a program written in MATHCAD. When inspections are performed, the logic becomes more complex, so the use of MATHCAD and starting from scratch would not be effective. In order to demonstrate the type of results obtainable when inspections are employed, use will be made of existing software, the PRAISE code. PRAISE was originally developed for nuclear reactor piping [Harris 81, 92a], but it is applicable to fatigue crack growth of semi-elliptical surface cracks in materials that follow the Paris-type relation. It is therefore suitable for an example that is very similar to the fatigue example in Section 4.2.1. The only difference is that now a very long surface crack in a plate of finite thickness is considered (PRAISE does not consider the simple problem of a through crack in a very large plate; that K-solution is not built into PRAISE). Also, the fracture toughness will be taken to be deterministic with a value of 120 ksi-in^1/2 (131.9 MPa-m^1/2). Calculations were performed for a one-inch-thick plate with the same distributions of initial crack size (a), fatigue crack coefficient and cyclic stress as in the fatigue example problem. Failure is the growth of a crack to become through-thickness or exceedance of the fracture toughness. Results with no inspection, pre-service inspection and in-service inspections at 25 and 50 thousand cycles were obtained. The nondetection probability parameters for a good inspection were employed. Figure 4-15 presents the results.
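Equation 4-5 can be evaluated numerically for any assumed pre-inspection crack-depth density. The sketch below uses hypothetical numbers throughout (an assumed exponential pre-inspection density and the erfc detection model discussed earlier); it shows the mechanics of the update, not values from the example problem:

```python
import math

def pnd(a, a_star=0.15, nu=1.6, eps=0.02):
    # "Good" inspection of a 1-inch plate (Table 4-3); erfc form assumed
    return eps + 0.5 * (1.0 - eps) * math.erfc(nu * math.log(a / a_star))

da = 0.001
depths = [da * (i + 0.5) for i in range(1000)]            # crack depths, 0 to 1 inch
p_pre = [math.exp(-a / 0.05) / 0.05 for a in depths]      # assumed exponential density

weighted = [p * pnd(a) for p, a in zip(p_pre, depths)]    # numerator of Eq. 4-5
norm = sum(weighted) * da                                 # denominator (normalizer)
p_post = [w / norm for w in weighted]

mean_pre = sum(a * p for a, p in zip(depths, p_pre)) * da
mean_post = sum(a * p for a, p in zip(depths, p_post)) * da
print(mean_post < mean_pre)   # True: inspection preferentially removes the deeper cracks
```

The post-inspection density integrates to unity by construction, and its mean crack depth is smaller than before the inspection, which is the beneficial effect described in the text.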

Figure 4-15 Failure Probability as a Function of Time for the Inspection Example Problem

A comparison of the results with no inspection in Figure 4-15 with the results for an infinite plate in Figure 4-3 is included in Figure 4-16. This figure shows remarkably similar results when the number of cycles exceeds about 50,000. This is because the finite thickness is large relative to the small size of cracks that require so many cycles to grow to failure. For small numbers of cycles, the finite thickness results of Figure 4-15 are higher than the infinite plate results, which is expected.


Figure 4-16 Comparison of Fatigue Example Problem of Infinite Plate With Finite Thickness Results With No Inspection Using PRAISE Code

The results of Figure 4-15 are typical of those obtained for various inspection schedules. The effects of an inspection are most pronounced immediately following the inspection, and diminish with increasing cycles (time). The pre-service inspection has a pronounced effect below about 50,000 cycles, but the pre-service and no inspection results are very similar beyond that point. The effects of the in-service inspections at 25 and 50 thousand cycles are noticeable in that the cumulative failure probability remains flat for a period following the inspection. This means that the hazard rate is small following the inspections. For large numbers of cycles (time) beyond the inspection, the inspection has only a small influence.

Results such as included in Figure 4-15 are useful in inspection planning. Studies on the influence of the quality and inspection frequency on the failure probability can be performed and inspection strategies optimized. This is discussed in Harris 92b, where a decision tree is applied to estimate the cost of candidate inspection schedules. The economic models discussed in Section 6.2 are also useful in this regard. Results such as in Figure 4-15 account for actions taken as a result of observations made during a Monte Carlo trial (that is, a crack is removed if it is detected). Such results cannot be obtained by the use of FORM, because the performance function cannot be written to include in-service inspection. One advantage of Monte Carlo is that it can consider the effects of remedial actions taken during the lifetime.


5
EXAMPLE OF A PROBABILISTIC ANALYSIS
We are now ready to provide a realistic example of a probabilistic analysis of a boiler pressure component. Previous sections discussed background on how a deterministic lifetime model can be constructed and then placed into a probabilistic format for evaluation of component reliability. Statistical background was provided. A header is selected for the example, and the BLESS software [Grunloh 92, Harris 93] is used for the evaluation of the reliability. Although specialized software is used in this example, it is important to remember that the technical basis is really no more complex than the simple example problems already discussed. The lifetime analysis itself now requires numerical calculations, because the stresses are more complex, and the stress intensity and C* solutions for cracks in headers are not so simple, but basically the same principles are involved. Of course, the volume of the Monte Carlo simulations requires a computer, but, once again, the principles are the same simple ones discussed in the previous examples.

5.1

Gathering the Necessary Information

It is necessary to perform the following steps when performing a probabilistic analysis of a given component:
- Identify the material
- Identify the degradation mechanism
- Define the component geometry
- Define the operating history

Once the above steps have been completed, it is necessary to construct a deterministic lifetime model applicable to the degradation mechanism, and then cast this model into a probabilistic form by the five steps outlined at the beginning of Section 4. In the case of a high-temperature header, the models for prediction of reliability are already available in the BLESS software [Grunloh 92, Harris 93], and it is only necessary to identify the material and geometry along with the operating history.

5.1.1 Component Geometry and Material

A header analysis by use of the BLESS code will be used as an example of a probabilistic analysis of a boiler component. A typical geometry of a header is depicted in Figure 5-1, which shows that the header is idealized as a repeating pattern of tube penetrations. The superheater outlet header considered by Harris, et al., [Harris 98] will be used for this example.

EPRI Licensed Material Example of a Probabilistic Analysis

The component material and the required geometric information can be obtained from a drawing of the header. The available information often appears to be scanty, but the necessary information can usually be obtained from an old drawing. The Appendix provides portions of the drawing of the example header, which shows the tube size, spacing, and geometry of connections to the header. The material is identified in the drawing as SA336, which corresponds to 1 Cr 1/2 Mo steel. Based on the information in the drawing in the Appendix, the header is idealized as 47 slices, each having the configuration shown in Figure 5-2. This picture is generated as a standard feature in BLESS.

Figure 5-1 Schematic Representation of Typical Header With Illustration of Segment Considered in Analysis


Figure 5-2 Summary of Header Geometry Analyzed (Length Dimensions in Inches)

5.1.2 Operating Conditions

The history information that is required is the number of cold starts in the plant history and/or projected into the future. Also required is the number of other operating excursions (such as load swings) and the corresponding conditions. The nominal operating conditions for the header are usually readily available. For the example, these are 1,500 psi (10.3 MPa) operating pressure, 1,000F (538C) operating temperature, and 600,000 lbs/hour (272,000 kg/hr) of steam. The 600,000 lbs/hour of steam translates to 600,000/98 = 6,122 lbs/hour (2,783 kg/hr) for each tube. (The flow rate in the tube is an input to BLESS, because it influences the temperature in the header when steam temperature transients occur. In turn, this influences the stress and creep rate.)

In addition to the nominal operating conditions, information on the operating transients that occur is required. Ideally, this consists of typical temperature, pressure and flow rate conditions during typical transients, along with the number of occurrences and sequencing of each of the transient types. In the case of pipes, the details of the temperature history are not important, because pipes are thin enough that radial temperature gradients do not lead to significant thermal stresses. The case of headers is different, because the temperature gradients within the thick header wall can lead to appreciable thermal stresses.


It is impossible to reconstruct the operating history in great detail, and variations in operation undoubtedly occur from year to year and even from day to day. All that can be hoped for is to construct some representative repeating pattern. Studies have shown that fairly large variations in operating conditions do not necessarily have an appreciable influence on the predicted reliability [Harris 97b]. The number of plant cold starts is usually readily available. Table A-1 in the Appendix summarizes information for the example plant. The information in that table leads to an average of 43 cold starts per year with an average of 6,618 yearly hours of operation. The example header was observed to be extensively cracked during an inspection at year 34, but was continued in operation for another 4 years. The cracks were observed to grow during this period, but no leaks were reported. Data on the typical conditions during a heat-up and cool-down should be gathered, as well as information on the conditions during a typical day of operation. Such information is often in the form of charts, examples of which are shown in Figure A-3 in the Appendix. These charts show a typical day of main steam pressure and temperature history. Based on the data in the charts and similar data for a heat-up and cool-down, the data of Table 5-1 is compiled, and will be used to construct the operating history in the BLESS example.


Table 5-1
Summary of Time-Variation of Operating Conditions

Time, Hours         Pressure, psi    Header Flow Rate, klb/hr    Steam Temp., F

Heat-up
0.0                 100              96                          200
0.5                 300              196                         350
1.0                 1000             294                         500
1.5                 1200             390                         550
2.0                 1400             588                         850
2.5                 1500             600                         920
3.0                 1500             600                         970
3.5                 1500             600                         1000

Cool-Down
0.0                 1500             600                         1000
1.0                 1500             600                         960
4.0                 1300             400                         680
11.0                1000             200                         520
13.0                100              96                          200

Daily Cycle
0.0                 1500             600                         1000
8.5                 1500             600                         1000
9.5                 950              500                         850
11.0                300              400                         670
21.7                300              400                         670
22.5                900              600                         820
24.0                1500             600                         1000

Steady Operation    1500             600                         1000

It has generally been found that details of the operating history, such as minor short duration temperature excursions, do not have a noticeable influence on the predicted lifetime or reliability. Similarly, variations in the flow rate have been found to not be influential. Hence, extreme accuracy in these inputs is not necessary. For a particular application, the user should perform some sensitivity studies to see if these observations are applicable to his case. There is no need to precisely define operating histories if wide variations do not influence the results.


The temperature of the steam in the tubes is often different than the temperature of the steam in the header, and the temperature often varies from tube to tube. Temperature is a very important input, so it is therefore advisable to make some tube temperature measurements, such as with thermocouples. In the example problem, measurements showed tube temperatures during steady operation that varied from 982 to 1,031F (528 to 555C). A typical tube temperature can be selected, say 1,020F (549C), and a sensitivity study performed to see if this has an appreciable effect.

5.2

Performing the Analysis

The information from Section 5.1 and the Appendix is input to the BLESS software through the Windows interface. The data of Table 5-1 is input as operating procedures, and these procedures are combined to define operating histories. Based on the averages, an average year consists of 6,618 hours (276 days) of operation with 43 cold starts. This is one cold start every 276/43 = 6.42 days. Therefore, consider the history to be a heat-up, 6 days of daily cycle, and a cool-down. This will be repeated over the time span of interest, with the current time being 244,855 hours (Table A-1).

The BLESS software was used to predict the header failure times. The predicted median time for initiation by creep/fatigue was 4.25x10^6 hours, which is nearly 20 times the length of time that the header has been in service. (As mentioned above, extensive cracking was observed in the header at the 34th year, which corresponds to 236,000 hours of service.) BLESS considers oxide notching as an alternative crack initiation mechanism, and an analysis for this mechanism was also performed. As discussed in Section 2.1.4, the BLESS oxide notching model is deterministic. This model predicted an initiation time of 5,290 hours. Hence oxide notching is predicted to result in crack initiation in a very short time.

A BLESS analysis of lifetime with oxide notching for initiation and creep/fatigue crack growth from ID to OD predicted a median time to leak of 499,800 hours, which is just over twice the reported hours of service to date. (The corresponding time for bore-hole to bore-hole growth was 500,433 hours, so the remaining discussion will consider ID to OD growth.) The oxide notching initiation and creep/fatigue crack growth analysis provided results more in line with observations of header performance than corresponding evaluations considering initiation by creep/fatigue. Therefore, a probabilistic lifetime analysis for oxide notching initiation and creep/fatigue crack growth is considered.
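The repeating operating pattern is simple arithmetic on the plant averages:

```python
# Deriving the repeating operating pattern from the plant averages (Table A-1):
hours_per_year = 6_618
cold_starts_per_year = 43

operating_days = round(hours_per_year / 24)                           # 276 days of operation
days_per_cold_start = round(operating_days / cold_starts_per_year, 2)
print(operating_days, days_per_cold_start)   # 276 6.42
# -> model each cycle as a heat-up, 6 days of daily cycle, and a cool-down
```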
Figure 5-3 provides a log plot of the cumulative leak probability as a function of time for the example problem. This plot is as directly generated by BLESS. A BLESS run was also made for 20,000 trials and a maximum evaluation time of 100,000 hours. Figure 5-4 presents the results for this run on lognormal probability scales, along with the results included in Figure 5-3.


Figure 5-3 Log-Linear Plot of Probability of Leak as a Function of Time for Oxide Notching Initiation and Creep Fatigue Crack Growth in Header Example Problem (2,000 Trials)

Figure 5-4 Lognormal Probability Plot of BLESS Results for the Header Example Problem (Probability in Percent)


The dominance of oxide notching and the short time for oxide notching initiation suggest that little can be done to increase the reliability of the header except for aspects that reduce the oxide initiation time or reduce the creep/fatigue crack growth. As discussed in Section 2.1.4, the oxide layer is considered to crack whenever the temperature falls below Tlo-ox of 700F (371C). About the only thing that could be done to increase the oxide notching initiation time is to decrease the number of times the temperature falls below this value. Table 5-1 shows that this occurs during the daily cycle. The predicted time for oxide notching should increase if the minimum temperature during the daily cycle was increased above 700F (371C). Table 5-2 summarizes the oxide initiation time under various operating scenarios and shows that increasing the minimum temperature during the daily cycle has a large influence, but that the initiation time is short for all cases considered.
Table 5-2
Summary of Initiation Time for Various Operating Scenarios

Cold Starts per Year    Cycles Between Cold Starts                            Notching Initiation Time, Hours
41                      daily, Tmin=670F (354C)                               5290
41                      daily, Tmin=720F (382C)                               13575
41                      daily, Tmin=720F (382C), tube temp=1010F (543C)       14240
41                      none                                                  8631
20                      none                                                  11609

Hence, in order to increase the predicted component reliability, it would be necessary to extend the growth portion of the lifetime. This could be accomplished by decreasing the operating pressure and/or temperature, but, of course, this results in reduced plant output with a corresponding economic penalty.

The linearity of the results of Figure 5-4 suggests that the lifetime is lognormally distributed. A linear least-squares fit that included the BLESS results generated with 2,000 trials for times beyond 100,000 hours and the results with 20,000 trials for times beyond 40,000 hours was performed. This combining of the data uses each set for the times at which it is most accurate (most failures). This resulted in a fit with a median failure time of 6.17x10^5 hours and a σ of 0.810. Figure 5-5 provides plots of the corresponding hazard function. The upper part of the figure shows results out to 10^6 hours, and the lower part shows the short time results out to 50,000 hours.


Figure 5-5 Lognormal Hazard Function for the Header Example Problem

Figure 5-6 presents a plot of the fitted hazard function out to 100,000 hours along with a histogram corresponding to the BLESS results of Figure 5-4 with 20,000 trials. The histogram values are calculated from the relation

    h(t) ~ [P(t + dt) - P(t)] / (dt [1 - P(t)])        Eq. 5-1

where dt is the time between the tabulated data points in the BLESS output, which is 2,000 hours. Good agreement between the BLESS histogram and the fitted lognormal is seen in Figure 5-6.
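Equation 5-1 and the fitted lognormal hazard can be checked against each other numerically. The sketch below uses the fit parameters quoted in the text (t50 = 6.17x10^5 hours, σ = 0.810) and a standard erfc-based normal CDF:

```python
import math

T50, SIGMA = 6.17e5, 0.810        # lognormal fit parameters from the text

def cdf(t):                        # cumulative failure probability P(t)
    return 0.5 * math.erfc(-math.log(t / T50) / (SIGMA * math.sqrt(2.0)))

def hazard_fit(t):                 # h(t) = f(t) / (1 - F(t)) for the fitted lognormal
    z = math.log(t / T50) / SIGMA
    f = math.exp(-0.5 * z * z) / (t * SIGMA * math.sqrt(2.0 * math.pi))
    return f / (1.0 - cdf(t))

def hazard_eq51(P_t, P_t_dt, dt=2000.0):   # Eq. 5-1 applied to tabulated output
    return (P_t_dt - P_t) / (dt * (1.0 - P_t))

t = 1.0e5
approx = hazard_eq51(cdf(t), cdf(t + 2000.0))
print(abs(approx - hazard_fit(t)) / hazard_fit(t) < 0.05)   # True: finite difference agrees
```

This mirrors the comparison in Figure 5-6: the histogram built from discrete P(t) values tracks the smooth hazard of the fitted distribution.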


Figure 5-6 Comparison of Hazard as a Function of Time as Obtained From the BLESS Output Data and the Fitted Lognormal Relation

Section 6 discusses the application of these results. The times considered in the example are from the current time (244.9 thousand hours) to 20 years into the future (end time of 377.2 thousand hours). Figures 5-4 and 5-6 show that the BLESS run provides accurate results within this time frame. The failure rate within the time frame of interest was generated by BLESS using standard Monte Carlo without excessive computer run times.

The smallest cumulative failure probability in Figure 5-4 is 10^-4. In many instances, lower failure probabilities than this are of concern, and it is of interest to generate results to lower probabilities. As discussed in Section 2, tools are available to conveniently accomplish this, such as first order reliability methods and importance sampling. Importance sampling has recently been incorporated into BLESS (Version 4.2), and an example of its use in the sample header problem is provided.

The importance sampling is defined by altering the distributions of selected random variables. The dominant random variables in headers have been found to be the creep rate coefficient (A1000, which is Ae^-Q/T in Equation 2-6, with T=1,000F=538C) and the coefficient in the creep crack growth relation (Cc in Equation 2-21). Both of these are lognormal. The altered distribution is defined by increasing the median of the distribution by adding on a multiple of the standard deviation, but leaving the second parameter alone (σ of the lognormal density function, see Table 3-1). A shift of one increases the median used in the Monte Carlo sampling to one standard deviation above its nominal value. A shift of zero corresponds to no shift, and a positive shift increases the median. The shift selected should be in the direction that increases the number of failures (that is, towards the most probable failure point). In the current case, the shifts should be greater than zero (increases in A1000 and Cc result in more trials that produce failures).


The selected shifts should be held to a minimum and the number of trials maximized, while still maintaining the computer time within acceptable limits. As always, trade-offs are involved. The shifts in each of the variables do not have to be the same, and should be larger for the more influential variables. The shifts should be gradually increased in successive runs to see the trends. Large shifts are to be avoided, unless they are gradually approached.

For example, Figure 5-7 shows the results of eight BLESS runs with varying numbers of trials and multipliers, including no shifts. The solid line is the line defined by the median and σ evaluated for the unshifted results in Figure 5-4 (t50=6.17x10^5, σ=0.810), and shows that this line agrees quite well with most of the results generated by the importance sampling. The table with Figure 5-7 identifies the shifts employed, the number of trials taken, the computer execution time in hours and the minimum failure probability obtained in the run. If no importance sampling were employed, it would take some 1,400 years of computer time to obtain results down to 10^-14, whereas employing importance sampling with shifts of 1 and 3 with 20,000 trials provided results down to nominally this value with only 27 hours of computer time. However, care must be taken in defining the shifts; the triangle and diamond data do not agree well, and differ by about two orders of magnitude at the lower failure probabilities. The diamonds are believed to be more accurate, because a smaller shift and larger number of trials were used. Also, the diamonds agree better with the solid line. The diamond data took nearly five times as long to generate, however.


Figure 5-7 Results of Header Example Problem With Varying Shifts and Number of Trials. The Solid Line is the Curve Fit Based on the Estimated Lognormal Distribution With No Shifts (the Line of Figure 5-6).

These results demonstrate the usefulness of importance sampling in generating results at lower failure probabilities, but also that some care must be exercised in using this feature. In practical problems that require estimates involving very low failure probabilities, importance sampling could be used to good advantage while maintaining reasonable computer execution times.
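The median-shift idea can be illustrated on a toy problem. This is a sketch of the principle only, not the BLESS implementation, and all numbers are hypothetical: a lognormal variable is sampled from a distribution whose median is shifted upward, and each "failure" is reweighted by the likelihood ratio of the nominal to the shifted density so that the estimate remains unbiased:

```python
import math, random

random.seed(1)
MU, SIGMA = 0.0, 0.5                   # nominal ln-median and sigma (assumed)
SHIFT = 2.0 * SIGMA                    # shift the sampling median upward
X_CRIT = math.exp(MU + 4.0 * SIGMA)    # rare threshold: true exceedance ~3.2e-5

def estimate(n):
    total = 0.0
    for _ in range(n):
        z = random.gauss(MU + SHIFT, SIGMA)          # sample from the shifted density
        if math.exp(z) > X_CRIT:                     # a "failure" trial
            # likelihood ratio = nominal density / shifted density
            total += math.exp((-(z - MU) ** 2 + (z - MU - SHIFT) ** 2)
                              / (2.0 * SIGMA ** 2))
    return total / n

p_hat = estimate(200_000)
print(p_hat)   # close to the exact tail probability 1 - Phi(4)
```

With the shift, a few percent of the trials exceed the threshold instead of a few per hundred thousand, so far fewer trials are needed for a stable estimate of the small probability.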

5.3

Combining Data

The above discussion covers the estimation of component reliability by use of deterministically based models. This can provide the failure probability as a function of time. Sometimes there are data or reliability estimates from other sources, and it is desired to somehow factor this other information into the estimates from the component reliability model. Potential other sources of


information are generic failure rate data (such as the NERC-GADS database discussed in Section 6.2), specific failures or inspection results, expert opinion and the results of interviews with plant personnel. Procedures for combining these types of information are discussed in ASME 00 and EPRI 98.

If more than one source of failure probability information is available, it is important to combine them in some fashion. This will presumably lead to a better estimate than the use of one source by itself. One means of combining information is to construct a model similar to those discussed above, and then to use Bayesian procedures for updating some inputs to the model based on some additional information [Ang 75, Ayyub 97, Wostenholm 99]. Bayesian procedures conventionally involve considering some of the parameters (mean and standard deviation) of the random input variables to themselves be statistically distributed, with the spread in the distribution representing uncertainty in the value of the parameters. The values of the parameters are then updated using other information, such as a set of observations.

Using the header of Section 5.2 as an example, the mean value of the parameter describing the secondary creep rate (A of Equation 2-7 or 2-10) could itself be considered to be described by a statistical distribution with a mean equal to the mean currently in use and a standard deviation selected to describe the uncertainty in this value. The selected standard deviation is usually subjective; that is, it is set by the user based on judgment, rather than any set of data. The distribution type is usually selected for mathematical convenience, such as the use of so-called conjugate functions that simplify the integrations involved. Considering the median value of A as being the parameter that is to be updated, denote the distribution as pA50(A50). This is the prior distribution.

The distribution of A50 is then updated (to give the posterior) based on some observations, such as a set of n observed failure times, tf,i, i=1,n. The engineering model of component reliability is then used to evaluate the probability density function of failure times given a value of A50. This is denoted as pf(tf|A50). The likelihood of observing the set of failure times is

    L(A50) = ∏ pf(tf,i|A50),  with the product taken over i = 1 to n

The posterior distribution of A50 is then given by the following expression, which is a statement of Bayes' theorem

    pA50^posterior(A50) = pA50^prior(A50) L(A50) / ∫ pA50^prior(A50) L(A50) dA50        Eq. 5-2

The term in the denominator is merely a normalizing constant that assures that the posterior distribution integrates to unity, as a probability density function must. The use of conjugate functions simplifies the evaluation of the integral involved in the normalization. Equation 5-2 can then be simply stated as

    posterior distribution = (prior distribution)(likelihood)(normalizing constant)

An updated prediction of the reliability can then be made by using the median from the posterior distribution. Alternatively, the distribution of the parameter can be considered in the analysis (for instance, if Monte Carlo simulation is employed, the value of the parameter can be sampled from its posterior distribution for use in each trial).

The use of Bayesian updating requires a set of observations, such as a set of failure times in the above example. Another example would be a set of crack sizes found in an inspection and using this to update the parameters of the initial crack size distribution in a probabilistic fracture mechanics model. Harris 97a provides such an example. A set of observations for use in the updating procedure is usually difficult to obtain in practical cases of interest here. To overcome this difficulty and to still be able to combine sets of data, ASME 00 and EPRI 98 suggest a procedure that they refer to as Bayesian-like updating. The procedure is appealingly simple, but is not based on any known (to the author) mathematical principle. Using the example of two sets of failure probability predictions, one from a reliability model (such as concentrated upon in this document) and a set of estimates based on interviews, the two sets are combined by multiplying their probability density functions together and normalizing so that the product integrates to unity. This provides the combined probability density function of lifetime that is presumably a better estimate than either set of information by itself.
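A minimal numerical sketch of Eq. 5-2 is shown below. All numbers are hypothetical: a flat prior on a grid of candidate median lifetimes, a lognormal failure-time model with an assumed σ, and three invented failure-time observations:

```python
import math

SIGMA = 0.5                                 # assumed known sigma of the failure-time lognormal

def pdf_failure(t, t50):                    # p_f(t | t50), lognormal density
    z = math.log(t / t50) / SIGMA
    return math.exp(-0.5 * z * z) / (t * SIGMA * math.sqrt(2.0 * math.pi))

t50_grid = [100.0 + 5.0 * i for i in range(200)]    # candidate median lifetimes (assumed grid)
prior = [1.0 / len(t50_grid)] * len(t50_grid)       # flat prior (assumed)
observed = [350.0, 420.0, 610.0]                    # hypothetical observed failure times

# Eq. 5-2: posterior is proportional to prior x likelihood, then normalized to unity
post = [p * math.prod(pdf_failure(t, t50) for t in observed)
        for p, t50 in zip(prior, t50_grid)]
norm = sum(post)
post = [p / norm for p in post]

mode = t50_grid[max(range(len(post)), key=post.__getitem__)]
print(300.0 < mode < 600.0)   # True: the posterior mode moves toward the observed times
```

The grid evaluation plays the role of the normalizing integral; with conjugate functions the same result is obtained in closed form.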


6
USE OF PROBABILISTIC RESULTS
The emphasis in this guideline is on performing probabilistic analyses based on underlying deterministic lifetime models to obtain estimates of component reliability as a function of time. The application of these results to decisions of run/replace/retire is certainly of interest, but is a secondary goal of this document. Some discussion and examples in these areas are included in this section.

As utilities come under increasing pressure to reduce costs and extend the lifetime of plant components, interest has increased in procedures for rationally planning inspection and maintenance. One such procedure is risk-based (or risk-informed) inspection, which can use risk principles to prioritize inspection and maintenance. Several volumes on the topic have been published by the ASME, and are in References ASME 91, 92, 94, 98. ASME 94 is specific to fossil-fired power plants and contains an example of a multi-component economic optimization. ASME 00 is another volume of the series that is nearing completion and provides a handbook approach to risk-based methods for equipment life management.

EPRI's Turbo-X process [EPRI 98] is another example of an economic approach that uses probabilistic information as an input. As stated in EPRI 98, "The Turbo-X program utilizes component history data from NERC-GADS (North American Electric Reliability Council), maintenance personnel interviews, and state-of-the-art computer model predictions of remaining component life as inputs to a decision analysis process, which combines probabilistic engineering and economic risk assessment for optimal outage planning." The emphasis herein is on the estimation of component failure probability by state-of-the-art computer models, which is a subset of the efforts in Turbo-X. The use of failure probability in economic models is demonstrated in the simple examples of Section 6.2.

6.1

Target Hazard Rates

One way of using the results of probabilistic lifetime analyses is simply to replace components that are operating with a hazard rate that is above a selected target value. Alternatively, an inspection could be performed and the initial conditions for the probabilistic evaluation reset, such as (if no cracks are found during the inspection) resetting the crack size to that size which would be readily detected by the inspection. The higher the consequences of a failure, the lower the target hazard rate is set. Since one measure of risk is the product of the failure rate times the consequences, this can result in a somewhat constant risk of operation of components. The question of acceptable risk is a complex one, and will not be dwelled upon here. An estimate of an acceptable failure rate (or hazard rate) can be obtained by looking at past failures of similar components or by following the discussion in Harris 95b. Following first the approach in Harris 95b, a target level of the hazard rate, h(t), can be set relative to the frequency of events that occur

EPRI Licensed Material Use of Probabilistic Results

and are evidently acceptable to society. Such events could include fatalities in commercial air travel or automobile travel, or fatalities in the work place. The frequency is in terms of fatalities per hour of exposure. As an example of a high fatality rate, consider life in general. Considering a life span of 80 years, the average frequency can be thought of in simple terms as 1/80 years, which translates to 1.43x10-6 per hour. Other examples are summarized in Table 6-1, which suggests that a hazard rate of 2x10-8 per hour (~2x10-4 per year) is accepted by society. This could be used as the target level for purposes of discussion for events that are likely to lead to a fatality. The numbers in Table 6-1 are largely based on values in Accident Facts [National Safety Council 92] for the year 1990.
Table 6-1
Examples of Hazards of Common Activities as Measured by Fatality Rate

Category                 Annual       Persons        Annual Hours of       Fatality Rate,
                         Fatalities   Exposed        Exposure per Person   per Hour
work (all industries)    10,500       117,400,000    2,000                 4.5x10^-8
work (trans & util)      1,700        --             2,000                 1.0x10^-7*
motor vehicles           46,300       250,000,000    ~400                  ~5x10^-7

*Accident Facts fatality rate of 2.2x10^-4 per year, with 2,000 hours of exposure per year.
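The last column of Table 6-1 follows directly from the other columns; a minimal sketch of the arithmetic (the helper function name is ours, not from the report):

```python
# Fatality rate per hour of exposure = annual fatalities divided by
# (persons exposed x annual hours of exposure per person).
def fatality_rate_per_hour(annual_fatalities, persons_exposed, hours_per_person):
    return annual_fatalities / (persons_exposed * hours_per_person)

# Two rows of Table 6-1:
work_all = fatality_rate_per_hour(10_500, 117_400_000, 2_000)   # ~4.5e-8 per hour
motor_veh = fatality_rate_per_hour(46_300, 250_000_000, 400)    # ~5e-7 per hour
print(f"{work_all:.2e}  {motor_veh:.2e}")
```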

The fatality rate in Table 6-1 is simply the fatalities per year divided by the product of the number of persons exposed times the annual exposure per person.

The other approach to setting a target hazard rate is to look at past experience with related components. The North American Electric Reliability Council (NERC) of Princeton, New Jersey, maintains a database on failures of components in electric power stations. Information is provided on a wide variety of components, including the frequency and severity of failures. The severity is in lost megawatt hours. For instance, Table 6-2 (which is NERC data reported in ASME 94) summarizes the failure rate and consequences from generic NERC data for boiler components.


Table 6-2
Partial List of Boiler Component Failure Rate and Consequences

Component Code and Name                Failure Rate,   Average MWh Lost
                                       per Hour        per Forced Outage
101 waterwalls                         1.59x10^-4      21951
102 generating tubes                   1.37x10^-5      21991
103 superheater                        1.07x10^-4      15765
104 reheater-first                     5.00x10^-5      21866
105 reheater-second                    5.65x10^-6      30351
106 economizer                         5.00x10^-5      16137
107 air preheater-tubular              1.60x10^-6      35107
108 air reheater-regenerative          5.66x10^-6      24722
109 induced draft fans                 1.38x10^-5      11331
110 forced draft fans                  8.90x10^-6      8865
111 recirculating fans                 8.07x10^-7      26412
112 desuperheaters and attemperators   2.42x10^-6      11064
Figure 6-1 provides a log-log plot of the frequency-severity data in Table 6-2. If risk is defined as the product of the frequency times the severity, then lines of constant risk are straight lines on these scales, and such lines are shown. The figure shows that, based on generic data, Component 101 (waterwalls) has the highest risk, followed closely by 103 (superheater) and 104 (reheater-first). Hence, these would be the boiler components to concentrate on.


Figure 6-1 Log-Log Plot of Frequency-Severity Data for Boiler Components From Table 6-2
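The ranking read off Figure 6-1 can be reproduced by forming the frequency-severity product directly from the Table 6-2 data (a sketch using a few of the rows; the dictionary layout is ours):

```python
# Risk = failure frequency (per hour) x average MWh lost per forced outage,
# using generic NERC values from Table 6-2 for a few components.
nerc_data = {
    "101 waterwalls":     (1.59e-4, 21951),
    "103 superheater":    (1.07e-4, 15765),
    "104 reheater-first": (5.00e-5, 21866),
    "106 economizer":     (5.00e-5, 16137),
}
risk = {name: rate * mwh for name, (rate, mwh) in nerc_data.items()}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked)  # waterwalls first, then superheater, then reheater-first
```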

Table 6-2 suggests typical failure rates that are much higher than the 2x10^-8 per hour based on the rates of Table 6-1. For instance, the failure rate for superheaters given in Table 6-2 is about 10^-4 per hour, which suggests a superheater failure about once a year. This seems high, but the data of Table 6-2 do suggest that typical component failure rates are well above the rates of Table 6-1. Evidently, superheater failures seldom cause fatalities; such failures are most often leaks that are not life threatening. Hence, the use of 2x10^-8 per hour as a target hazard is conservative, and the use of 10^-4 per hour would be more representative of past experience with headers.
Using the target hazard rate of 2x10^-8 per hour based on Table 6-1, Figure 5-8 shows that the example header can be operated at a hazard below this level only for the first 30,000 hours (3.4 years) of operation; at times beyond this, the predicted hazard rate increases rapidly. This is not a long time. The header has in fact been operated for nearly 245,000 hours (Table 5-1), and Figure 5-7 shows that the cumulative leak probability at this time is about 0.10. This is consistent with the observations of extended cracking but no leak at this time. Figure 5-8 shows that the predicted hazard rate for this header is below 1.5x10^-6 per hour for times below 10^6 hours (114 years). This is well above the fatality rates in Table 6-1, but still below the failure rates based on component failures given in Table 6-2. Since a leaking header will most likely not result in a fatality, and the predicted failure rate is within the range of experience for boiler parts, the risk of future operation of the example header may not be excessive. Economic considerations of the cost of header failure may be the driving factor.


6.2 Economic Models

Another way to address the header example problem is to consider the expected costs of various scenarios. This involves the use of economic models that incorporate the failure probability as an input to the cost estimates. Such models have been discussed in the current context in Mauney 93, ASME 94, ASME 00, and EPRI 98. Such models can involve many components distributed over several plants, and can constrain the analysis so that failure rates will not exceed a defined value regardless of the cost. This allows safety-related constraints to be imposed that will over-ride economic considerations. This discussion concentrates on the simpler problem of estimating the expected cost of candidate scenarios for a single component without constraints on failure rate.

The expected cost of operating for a year a component that has a failure probability Pf in that year (given that it did not fail before that year) and a cost of failure Cf is PfCf. As projections are made over time, the effects of inflation, taxes, and the net present value of money must be accounted for. In this simple discussion, inflation will be assumed to be zero and the effects of taxes will be ignored. If the cost of a failure at the present time is Cf, then a failure in the future is lower in terms of net present value. This is because $1 now is worth $1.06 one year from now if the interest rate is 6%, so a failure that occurs one year from now and costs $1 at that time will cost only $0.943 in current dollars. If r is the interest rate and n is the number of years into the future, the net present cost of a failure that costs Cfo at that time is given in Equation 6-1:

Cf = Cfo (1 + r)^-n

Eq. 6-1
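Equation 6-1 is the standard present-value discount; for example, at r = 6% the $1 failure one year out costs $0.943 today (a sketch; the function name is ours):

```python
# Eq. 6-1: net present cost of a failure costing `future_cost` n years out,
# at interest rate r (zero inflation and no taxes, as assumed in the text).
def present_cost(future_cost, r, n):
    return future_cost * (1.0 + r) ** (-n)

one_year = present_cost(1.0, 0.06, 1)          # -> ~0.943, as in the text
ten_years = present_cost(10_000.0, 0.06, 10)   # k$; ~5584, as in Table 6-3
print(round(one_year, 3), round(ten_years))
```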

Thus, the cost of failures in the future can be cast in terms of current dollars, and the probability of failure in the future can be obtained from the reliability model. The failure probability within one year, given that the component has not failed prior to that year, is the appropriate probability to use. This is related to the hazard function, but is most conveniently expressed in terms of the cumulative failure probability, Pf(t), and its density, pf(t), by use of the following relation:

ΔPf(year n) = [1 / (1 - Pf(year n))] ∫[year n, year (n+1)] pf(t) dt
            = {Pf[year (n+1)] - Pf[year n]} / {1 - Pf[year n]}

Eq. 6-2

Returning to the superheater header example, what would be the expected costs over the next 20 years if the header were continued in operation, versus replacing it now and perhaps replacing it again 10 years from now? This can be addressed using the above relations and the failure probability of the header as evaluated in Section 5.2. The header failure lifetime will be taken to be lognormally distributed with a median of 6.17x10^5 hours and a standard deviation of the natural log of lifetime, σ, of 0.810. This allows the failure probability within each year to be easily computed. These values of t50 and σ correspond to the solid line in Figure 5-7, which provides a good representation of the BLESS results in the time frame of interest (the next 20 years).
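Under the stated lognormal fit (median 6.17x10^5 hours, log standard deviation 0.810), the annual failure probability of Eq. 6-2 can be sketched as follows; the computed value is close to the first incremental hazard of Table 6-3. (Function names are ours; this is not the BLESS code.)

```python
from math import erf, log, sqrt

T50 = 6.17e5     # median life in hours, from the Section 5.2 fit
SIGMA = 0.810    # standard deviation of the natural log of life

def cum_fail_prob(t_hours):
    """Lognormal cumulative failure probability Pf(t)."""
    if t_hours <= 0.0:
        return 0.0
    z = log(t_hours / T50) / SIGMA
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def annual_fail_prob(t_start, t_end):
    """Eq. 6-2: probability of failure in (t_start, t_end], given survival to t_start."""
    p0, p1 = cum_fail_prob(t_start), cum_fail_prob(t_end)
    return (p1 - p0) / (1.0 - p0)

# First future year of the example header: 244,885 hours run, 6,618 hours/year.
h1 = annual_fail_prob(244_885.0, 244_885.0 + 6_618.0)
print(f"{h1:.6f}")  # ~0.008, matching the first incremental hazard of Table 6-3
```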


The cost of a header failure is an input to the analysis. Table 6-2 suggests that this would be the cost of the header (and installation) plus the cost of the lost generating capacity of 15,765 MWh. The cost of lost capacity varies greatly, but should be within an order of magnitude of 1 cent per kilowatt hour. This results in an estimated lost-capacity cost of about $150,000 per superheater failure. This seems low, and is probably applicable to failures that are readily fixed rather than those requiring replacement of the header; this is consistent with the high failure frequency for superheaters in Table 6-2. For discussion purposes, consider the cost of a new header, including installation, to be $1 million, and the cost of a catastrophic header failure to be $10 million. The actual values are not important to the discussion, since the desire is to demonstrate the principles involved.

First consider simply continuing operation for the next 20 calendar years in the same mode as in the past. From Table 5-1, the current time is 244,885 hours, and one calendar year of operation typically consists of 6,618 hours. The failure probability in each of the future 20 years and the net present cost of failure are obtainable from the results discussed immediately above, so the results shown in Table 6-3 can be obtained.


Table 6-3
Calculation of Expected Cost of Continuing Operation of Example Header for Another 20 Years

Time Into       Cumulative   Cost of Failure,   Incremental        Incremental Cost
Future, Years   Time, khrs   Eq. 6-1, k$        Hazard, Eq. 6-2    of Failure, k$
0               244.9        10000              0.007995           79.95
1               251.5        9434               0.008138           76.78
2               258.1        8900               0.008274           73.64
3               264.7        8396               0.008404           70.56
4               271.4        7921               0.008528           67.55
5               278.0        7473               0.008645           64.60
6               284.6        7050               0.008757           61.73
7               291.2        6651               0.008864           58.95
8               297.8        6274               0.008965           56.25
9               304.4        5919               0.009061           53.63
10              311.1        5584               0.009152           51.11
11              317.7        5268               0.009239           48.67
12              324.3        4970               0.009322           46.33
13              330.9        4688               0.009400           44.07
14              337.5        4423               0.009474           41.90
15              344.2        4173               0.009545           39.83
16              350.8        3936               0.009611           37.83
17              357.4        3714               0.009674           35.93
18              364.0        3503               0.009734           34.10
19              370.6        3305               0.009791           32.36
20              377.2        3118               0.009844           --
The expected cost of operating the header for the next 20 years is the sum of the right-hand column in Table 6-3, which is $1,075,000. The results in Table 6-3 were obtained by use of MATHCAD, but could easily be obtained by the use of a spreadsheet. The handbook approach of ASME 00 utilizes spreadsheets for the type of calculation included in Table 6-3. Similar results for the cost of replacing the header and operating for the next 20 years are obtainable, but now the failure rate for a new header is used. The new header is assumed to be the same as the old header, but its failure probability will be different, because it has not been


operated in the past. The failure probability is evaluated using the same lognormal parameters as used above, which may somewhat overestimate the probabilities at the shorter times now of interest (see Figure 5-7). The use of the diamond results in Figure 5-7 would reduce the incremental hazard below the values used here, but only for about the first 10,000 hours. This provides the results included in Table 6-4.
Table 6-4
Calculation of Expected Cost of Replacing Header Now and Then Operating for Another 20 Years

Time Into       Cumulative   Cost of Failure,   Incremental        Incremental Cost
Future, Years   Time, khrs   Eq. 6-1, k$        Hazard, Eq. 6-2    of Failure, k$
0               0.000        10000              1.08x10^-8         0.0001082
1               6.618        9434               1.04x10^-6         0.009837
2               13.24        8900               1.00x10^-5         0.08903
3               19.85        8396               3.96x10^-5         0.3327
4               26.47        7921               0.0001013          0.8027
5               33.09        7473               0.0002016          1.507
6               39.71        7050               0.0003421          2.412
7               46.33        6651               0.0005210          3.465
8               52.94        6274               0.0007342          4.606
9               59.56        5919               0.0009767          5.781
10              66.18        5584               0.001243           6.942
11              72.80        5268               0.001528           8.051
12              79.42        4970               0.001827           9.081
13              86.03        4688               0.002136           10.01
14              92.65        4423               0.002450           10.83
15              99.27        4173               0.002766           11.54
16              105.9        3936               0.003082           12.13
17              112.5        3714               0.003396           12.61
18              119.1        3503               0.003705           12.98
19              125.7        3305               0.004008           13.25
20              132.4        3118               0.004305           --

Reducing the incremental hazard during the first 10,000 hours would have a minimal influence on these results, because the incremental costs associated with these short times are low. The cost of replacing the header and operating for the next 20 years is the sum of the right-hand column plus the cost of the new header, which is $1,000,000 + $126,400 = $1,126,400. If the header is replaced again in 10 years, the cost for the next 20 years is the cost of two headers plus the sum of the right-hand column of Table 6-4 for the first 10 years. This results in an expected cost of 2($1,000,000) + $19,000 = $2,019,000. The estimated expected costs for the three scenarios considered are summarized below:
Scenario                               Expected Cost   Max Annual Pf
Continue operation                     $1,075,000      0.010
Replace header now                     $1,126,400      0.004
Replace header now and at 10 years     $2,019,000      0.001

Thus, the least expensive scenario is to continue operating the existing header, but this scenario also has the highest maximum annual failure probability. If a constraint that the annual failure probability not exceed 0.005 were imposed, the replace-header-now scenario would be selected, and the cost would not increase by much. The relative costs of the scenarios would change with the relative costs of header failure versus header replacement. Inspection schedules and types of inspection could be optimized by the use of results such as those included in Figure 4-15 when the failure probabilities are combined with the costs of inspections and failure. The failure probability results play the same role as they did in the above header example for run/retire decisions, and the cost benefit of inspection would result from the reduced hazard rate following an inspection.
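The three scenario costs can be reproduced, to within rounding, by combining Eqs. 6-1 and 6-2 with the lognormal fit. The following is a sketch (function and variable names are ours), mirroring the text's simplification for the replace-twice case, in which only the first 10 years of failure cost are carried:

```python
from math import erf, log, sqrt

T50, SIGMA = 6.17e5, 0.810   # lognormal life fit from Section 5.2
HRS_PER_YR = 6_618.0         # typical operating hours per calendar year
R = 0.06                     # interest rate
C_FAIL = 10_000.0            # k$, catastrophic header failure
C_HEADER = 1_000.0           # k$, new header including installation

def cum_fail_prob(t):
    """Lognormal cumulative failure probability Pf(t)."""
    return 0.0 if t <= 0.0 else 0.5 * (1.0 + erf(log(t / T50) / SIGMA / sqrt(2.0)))

def expected_failure_cost(start_hrs, years):
    """Sum over future years of (annual failure probability, Eq. 6-2) x
    (present-value failure cost, Eq. 6-1)."""
    total = 0.0
    for n in range(years):
        p0 = cum_fail_prob(start_hrs + n * HRS_PER_YR)
        p1 = cum_fail_prob(start_hrs + (n + 1) * HRS_PER_YR)
        total += (p1 - p0) / (1.0 - p0) * C_FAIL / (1.0 + R) ** n
    return total

cont = expected_failure_cost(244_885.0, 20)                     # ~1,075 k$
replace_now = C_HEADER + expected_failure_cost(0.0, 20)         # ~1,126 k$
replace_twice = 2 * C_HEADER + expected_failure_cost(0.0, 10)   # ~2,019 k$
print(round(cont), round(replace_now), round(replace_twice))
```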


7
CONCLUDING REMARKS
Damage models are available for many degradation mechanisms operating in boiler pressure parts, and these can be cast into a probabilistic format by considering some of the inputs to the model to be random variables that describe the scatter and/or uncertainty in these inputs. Probabilistic results generated by such models can be used as inputs to economic models to optimize the operation, replacement, and inspection of components. These optimizations can be done on a plant-wide basis or even on a utility-wide basis. Sections 5 and 6 of this document illustrate the application and potential use of these procedures.

Some cautions are required in model selection and execution. A good model can be misused, such as by using too few Monte Carlo trials or excessive shifts in importance sampling; cautions in this regard are provided. There are other areas of caution: a good lifetime model can still give inappropriate results if the distributions of the inputs are not correct. Perhaps of even more concern is the use of an inappropriate lifetime model. The analyst must be sure that the damage mechanism of importance has been modeled, and the best that can be done with the methodologies concentrated upon here is to obtain the failure probability given the dominance of the damage mechanism modeled. The probabilistic approach described here should play an ever-increasing role in component life management.
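As a concrete illustration of the "too few Monte Carlo trials" caution: the one-sigma relative error of a direct Monte Carlo estimate of a failure probability p from N trials is sqrt((1-p)/(p*N)), so rare failures demand very large N. (A back-of-envelope sketch, not part of any cited code.)

```python
from math import sqrt

def mc_relative_error(p, n_trials):
    """One-sigma relative error of a direct Monte Carlo estimate of probability p."""
    return sqrt((1.0 - p) / (p * n_trials))

# With 2,000 trials (as in the BLESS example of Appendix A), a failure
# probability of 0.083 is resolved to roughly 7% relative error...
err_2000 = mc_relative_error(0.083, 2_000)
# ...but estimating p = 1e-4 to 10% would need about a million trials.
n_needed = (1.0 - 1e-4) / (1e-4 * 0.10 ** 2)
print(f"{err_2000:.3f}  {n_needed:.0f}")
```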


A
DETAILS OF BLESS EXAMPLE
Figures A-1 and A-2 are examples of the type of information required for definition of the header geometry. Figure A-1 provides the overall definition of the tubes connected to the header, and Figure A-2 provides details of the tube penetrations. These are provided as examples of the type often available.

Figure A-1 Portion of Header Drawing Showing Configuration of Tube Penetration


Figure A-2 Portion of Header Drawing Showing Details of Tube Penetration

From the drawing in Figure A-1, it can be verified that the tubes connected to the header are 2-inch OD, and that 94 of the 98 tubes of this size are in rows A and F. The axial distance between tubes (axial pitch) is 6 inches, and there are 24 degrees between these tube rows (consistent with the reference angles in the BLESS input of Table A-2). The tubes penetrate into the header along a radial line (i.e., there is no offset). The analysis will concentrate on the tubes in Rows A and F.


Table A-1 summarizes the information on the number of past cold-starts of the unit.
Table A-1
Summary of Cold Starts and Operating Hours for Example Header

Year   Starts   Cumulative Starts   Operating Hours   Cumulative Hours
1-13   60       60                  8485              106728
14     4        64                  8739              115467
15     0        64                  8639              124106
16     12       76                  7936              132069
17     5        81                  8439              140508
18     2        83                  8735              149244
19     5        88                  7739              156983
20     7        95                  8100              165083
21     8        103                 8228              173311
22     18       121                 7418              180729
23     31       152                 7395              188125
24     27       179                 6345              194470
25     34       213                 6794              200964
26     78       291                 5638              206602
27     138      429                 4178              210780
28     149      578                 3251              214031
29     46       624                 2451              216482
30     187      811                 4977              221460
31     180      991                 2211              223671
32     123      1114                5898              229569
33     109      1223                2487              232056
34     72       1295                4091              236147
35     118      1413                2638              238785
36     113      1526                ~2500             ~241285
37     78       1604                ~3600             ~244885


Table A-2 provides the BLESS output for 2,000 trials and a maximum evaluation time of 200,000 hours.
Table A-2 BLESS Output File for Example Header Problem2,000 Trials With Maximum Time of 200,000 Hours
** Inputs were read from file: C:\BLESS41\BMARK.INB BLESS-Headers 4.1 Analysis Title: guidelines example Units Selected: US Customary Crack and Header Dimensions in inches Pressure in psi Stress in ksi Temperature in degrees F Crack growth rates in in/hour, in/cycle Flow rate in klbs/hr Time in hours Ligaments/Pipe = L Outlet Header Header OD Header ID Axial Pitch Tube OD Tube ID Bore Hole 15.250 10.500 6.000 2.000 1.344 1.096 2 Ref Angle .000000 24.0000 1500.00 1000.00 1000.00 600.00 6.12 2 4 # of Times 8 Steam Temperature Header Tube 200.000 200.000 350.000 350.000 500.000 500.000 550.000 550.000 920.000 920.000 970.000 970.000 1000.00 1020.00

Number of Tubes = Tube Row a b Steam Pressure Steam Temperature Tube Temperature Header Flow Rate Tube Flow Rate Material Type Code

Offset .000000 .000000

Penetration .800000 .800000

Number of Operating Procedures Number 1 Time .000000 .500000 1.50000 2.00000 2.50000 3.00000 3.50000 Lig Heat PI 100.000 300.000 1000.00 1400.00 1500.00 1500.00 1500.00

Flow Rate Header Tube 96.0000 1.00000 196.000 2.00000 294.000 3.00000 588.000 6.00000 600.000 6.12000 600.000 6.12000 600.000 6.12000

Number 2 Time .000000 100.000

Steady PI 1500.00 1500.00 Flow Rate Header Tube 600.000 6.12000 600.000 6.12000

# of Times 2 Steam Temperature Header Tube 1000.00 1020.00 1000.00 1020.00


Number 3 Time .000000 1.00000 4.00000 11.0000 13.0000

Lig Cool PI 1500.00 1500.00 1300.00 1000.00 100.000 Flow Rate Header Tube 600.000 6.12000 600.000 6.12000 400.000 4.10000 200.000 2.00000 96.0000 1.00000

# of Times 5 Steam Temperature Header Tube 1000.00 1020.00 960.000 960.000 680.000 650.000 520.000 520.000 200.000 200.000

Number 4 Time .000000 8.50000 9.50000 11.0000 21.7000 22.5000 24.0000

Daily Cycle (8-24-89) Flow Rate PI Header Tube 1500.00 600.000 6.11000 1500.00 600.000 6.11000 950.000 500.000 5.00000 300.000 400.000 4.00000 300.000 400.000 4.00000 900.000 600.000 6.00000 1500.00 600.000 6.11000

# of Times 7 Steam Temperature Header Tube 1000.00 1020.00 1000.00 1020.00 850.000 870.000 670.000 870.000 670.000 690.000 820.000 840.000 1000.00 1020.00

Number of Operating Histories 100 per year Op# 1 2 3 daily cycle Op# 4 steady Op# 2 Op Description steady Op Description daily cycle (8-24-89) Op Description lig heat steady lig cool down

4 # of Operating Procedures Repeats 1 1 1 # of Operating Procedures Repeats 1 # of Operating Procedures Repeats 1 # of Operating Procedures Repeats 1 6 1 3 1 1 3

daily cycle with cold start every 6 days Op# 1 4 3 Op Description lig heat daily cycle (8-24-89) lig cool down

Material ID = 2 (1.25Cr-0.5Mo Base & Weld ) A1000 (median) B1000 (median) n (creep) m p Q-prime (Secondary Creep) A1000 (second parameter) B1000 (second parameter) A1000-B1000 corr. coeff. UTS-virgin n (stress-strain curve) D Jic Flow stress Tearing Modulus (Tmat) = = = = = = = = = = = = = = = 1.120000E-13 1.000000E-17 8.00000 5.66000 .820000 82432.0 .594000 .923000 -.946000 80.0000 5.00000 69.0000 5.00000 26.0000 1000.00



fatigue coeff. C (median) creep coeff. C (median) oxide coeff. C (median) Q (oxide fatigue coeff. C (second par.) oxide coeff. C (second par.) fatigue exp., n oxide exp., q creep exp., q creep coeff. C (second par.) UTS transition - Larson Miller C-LMP (UTS > UTStr) median C-LMP (UTS < UTStr) median C-UTS C1sigma C2sigma C-LMP (UTS > UTStr) second par. C-LMP (UTS < UTStr) second par. strain-life curve - second par. transition C1 (>tr) C2 (>tr) C3 (>tr) C1 (<tr) C2 (<tr) C3 (<tr) Dc - Damage diagram mid-point Df - Damage diagram mid-point TplQ-Prime (Primary Creep) = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = 5.343000E-07 2.460000E-02 1.47720 -8496.50 .108400 .000000 1.08300 .500000 .825000 .724000 100.000 42951.0 42951.0 .000000 -5146.00 -956.000 511.000 511.000 .761500 1.07260 3.14200 -1.82200 .272600 3.14700 -2.11800 4.60100 .100000 .100000 3.980000E-02 82432.0 Number of Simulations = 2000 Temp. Multiplier = 1.00000 Std. Dev. (temp.) = .00000 Random Number Seed = 9275

Max. Evaluation Time (hrs)= 200000.00 Stress Multiplier = 1.00000 Std. Dev.(stress) = .00000

COMPONENT DATA: LIGAMENT SELECTION: b /b MATERIAL: 2 (1.25Cr-0.5Mo Base & Weld) SOLUTION DESCRIPTION: Creep-fatigue crack initiation & growth CRACK GROWTH DIRECTION: ID to OD Crack Depth = .000000(Mean) a/w for terminating calculations = .80000 Initiation Model: Fast Initiation calculation mode selected LMP-age = .000000 Current oxide crk depth = .00000(Mean) OPERATING HISTORY: daily cycle with cold start every 6 days

Std. Dev. = .000000 Oxide cracking

Std. Dev = .00000

**** Deterministic results using median properties **** Estimated time to crack initiation (hours) Crack initiation controlled by oxide cracking At initiation: oxide crack depth (inches) .300000E-01 Time (hrs) 5289.91 10537.5 15835.9 21185.1 26584.9 32035.4 37536.4 Crack Depth .030000 .031500 .033075 .034729 .036465 .038288 .040203 5289.9141



43087.9 48689.6 54341.3 60042.7 65793.6 71593.4 77441.8 83338.3 89282.5 95273.8 101312. 107395. 113524. 119697. 125913. 132172. 138472. 144812. 151191. 157608. 164060. 170548. 177068. 183620. 190200. 196808. 203441. 203441. .042213 .044324 .046540 .048867 .051310 .053876 .056569 .059398 .062368 .065486 .068761 .072199 .075809 .079599 .083579 .087758 .092146 .096753 .101591 .106670 .112004 .117604 .123484 .129658 .136141 .142948 .150096 .150096

** Time exceeds user-specified maximum for analysis ** BLESS-Head Probabilistic Results ** first 10 simulations ** Simulation Time in Hours to init. to fail 1 5289.9 > max 2 5289.9 > max 3 5289.9 > max 4 5289.9 > max 5 5289.9 > max 6 5289.9 > max 7 5289.9 > max 8 5289.9 > max 9 5289.9 > max 10 5289.9 > max a (in) .046540 .123484 .051310 .191564 .157600 .588394 .182442 .092146 .560375 .173754 Ox 1 1 1 1 1 1 1 1 1 1 Initiation C/F 0 0 0 0 0 0 0 0 0 0 Ai 0 0 0 0 0 0 0 0 0 0 Damage Creep Fatigue .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000 .00000



BLESS-Head Probabilistic Results
Cumulative Probabilities of time to initiation & failure
Total number of simulations = 2000

  Time          Cum. Probability of
 (hours)     Initiation      Failure
 5000.00       .0000          .0000
 10000.0       1.000          .0000
 15000.0       1.000          .0000
 20000.0       1.000          .0000
 25000.0       1.000          .0000
 30000.0       1.000          .0000
 35000.0       1.000          .0000
 40000.0       1.000          .0000
 45000.0       1.000          .0000
 50000.0       1.000          .0000
 55000.0       1.000          2.0000E-03
 60000.0       1.000          2.5000E-03
 65000.0       1.000          3.5000E-03
 70000.0       1.000          5.0000E-03
 75000.0       1.000          6.0000E-03
 80000.0       1.000          7.5000E-03
 85000.0       1.000          8.0000E-03
 90000.0       1.000          9.5000E-03
 95000.0       1.000          1.0500E-02
 100000.       1.000          1.2000E-02
 105000.       1.000          1.5000E-02
 110000.       1.000          1.7500E-02
 115000.       1.000          1.9500E-02
 120000.       1.000          2.3500E-02
 125000.       1.000          2.7000E-02
 130000.       1.000          2.9000E-02
 135000.       1.000          3.0500E-02
 140000.       1.000          3.3000E-02
 145000.       1.000          3.5000E-02
 150000.       1.000          3.9000E-02
 155000.       1.000          4.0500E-02
 160000.       1.000          4.5500E-02
 165000.       1.000          5.2000E-02
 170000.       1.000          5.4500E-02
 175000.       1.000          5.7500E-02
 180000.       1.000          6.1000E-02
 185000.       1.000          6.8500E-02
 190000.       1.000          7.3000E-02
 195000.       1.000          7.6500E-02
 200000.       1.000          8.3000E-02
 205000.       1.000          8.3000E-02

Begin Analysis      : Time - 10:20:11a   Date - 01/03/2000
Begin Probabilistic : Time - 10:20:17a   Date - 01/03/2000
End Analysis        : Time -  2:43:06p   Date - 01/03/2000

Elapsed Time : (Begin Probabilistic) - (Begin Analysis )      =     6.0 sec
Elapsed Time : (End Analysis )       - (Begin Probabilistic)  = 15769.0 sec
Elapsed Time : (End Analysis )       - (Begin Analysis )      = 15775.0 sec


Figure A-3 Recordings of Temperature and Pressure for a Typical Day


B
REFERENCES
M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards Applied Mathematics Series 55, Washington, D.C., 1964. [Abramowitz 64]

T.L. Anderson, Fracture Mechanics: Fundamentals and Applications, CRC Press, Boca Raton, Florida, 1995. [Anderson 95]

A. H-S. Ang and W.H. Tang, Probability Concepts in Engineering Planning and Design, Vol. I: Basic Principles, John Wiley & Sons, New York, 1975. [Ang 75]

A. H-S. Ang and W.H. Tang, Probability Concepts in Engineering Planning and Design, Vol. II: Decision, Risk and Reliability, John Wiley & Sons, New York, 1984. [Ang 84]

Risk-Based Inspection - Development of Guidelines: Volume 1 - General Document, ASME CRTD-Vol. 20-1, 1991. [ASME 91]

Risk-Based Inspection - Development of Guidelines: Volume 2, Part 1 - Light Water Reactor (LWR) Nuclear Power Plant Components, ASME CRTD-Vol. 20-2, 1992. [ASME 92]

Risk-Based Inspection - Development of Guidelines: Volume 3 - Fossil Fuel-Fired Electric Power Generating Station Applications, ASME CRTD-Vol. 20-3, 1994. [ASME 94]

Risk-Based Inspection - Development of Guidelines: Volume 2, Part 2 - Light Water Reactor (LWR) Nuclear Power Plant Components, ASME CRTD-Vol. 20-4, 1998. [ASME 98]

Risk-Based Methods for Equipment Life Management: An Application Handbook, under preparation by the ASME Center for Technology Development (CRTD), Washington, D.C., 2000. [ASME 00]

B.M. Ayyub and R.H. McCuen, Simulation-Based Reliability Methods. Ch. 4 of Probabilistic Structural Mechanics Handbook, ed. C. Sundarajan, Chapman & Hall, New York, 1995, pp. 53-69. [Ayyub 95]

B.M. Ayyub and R.H. McCuen, Probability, Statistics, & Reliability for Engineers, CRC Press, Boca Raton, Florida, 1997. [Ayyub 97]

J.M. Bloom and M.L. Malito, Using Ct to Predict Component Life. Fracture Mechanics: Twenty-Second Symposium (Volume 1), ASTM STP 1131, Philadelphia, Pennsylvania, 1992, pp. 393-411. [Bloom 92]


O.J.V. Chapman, Simulation of Defects in Weld Construction. Reliability and Risk in Pressure Vessels and Piping, ASME PVP-Vol. 251, 1993, pp. 81-89. [Chapman 93]

R.B. D'Agostino and M.A. Stephens, Goodness-of-Fit Techniques, Marcel Dekker, New York, 1986. [D'Agostino 86]

Turbine-Generator Maintenance Outage Interval Extension: Turbo-X Version 1.0a User's Manual, EPRI, Palo Alto, CA: October 1998. CM-110998. [EPRI 98]

H.J. Grunloh, et al., An Integrated Approach to Life Assessment of Boiler Pressure Parts, Volume 4: BLESS Code User's Manual and Life Assessment Guidelines, report prepared for EPRI RP 3352-10, project manager R. Viswanathan, 1992. [Grunloh 92]

G.J. Hahn and S.S. Shapiro, Statistical Models in Engineering, John Wiley & Sons, New York, 1967. [Hahn 67]

A. Haldar and S. Mahadevan, First-Order and Second-Order Reliability Methods. Ch. 3 of Probabilistic Structural Mechanics Handbook, ed. C. Sundarajan, Chapman & Hall, New York, 1995, pp. 27-52. [Haldar 95]

D.O. Harris, E.Y. Lim, and D.D. Dedhia, Probability of Pipe Fracture in the Primary Coolant Loop of a PWR Plant, Vol. 5: Probabilistic Fracture Mechanics Analysis. NUREG/CR-2189, Vol. 5, U.S. Nuclear Regulatory Commission, Washington, DC, 1981. [Harris 81]

D.O. Harris, D. Dedhia and S.C. Lu, Theoretical and User's Manual for pc-PRAISE. U.S. Nuclear Regulatory Commission Report NUREG/CR-5864, Washington, D.C., 1992. [Harris 92a]

D.O. Harris, Probabilistic Fracture Mechanics with Applications to Inspection Planning and Design. Reliability Technology 1992, ASME AD-Vol. 28, 1992, pp. 57-76. [Harris 92b]

D.O. Harris, C.H. Wells, H.J. Grunloh, R.H. Ryder, J.M. Bloom, C.C. Schultz and R. Viswanathan, BLESS: Boiler Life Evaluation and Simulation System - A Computer Code for Reliability Analysis of Headers and Piping. Reliability and Risk in Pressure Vessels and Piping, ASME PVP-Vol. 251, 1993, pp. 17-26. [Harris 93]

D.O. Harris, Probabilistic Fracture Mechanics. Probabilistic Structural Mechanics Handbook, Ch. 6, edited by C. Sundarajan, Chapman & Hall, New York, 1995, pp. 106-145. [Harris 95a]

D.O. Harris and D. Dedhia, Setting Reinspection Intervals for Seam Welded Piping by Use of Fracture Mechanics and Target Reliability Values. Fatigue and Fracture Mechanics in Pressure Vessels and Piping, ASME PVP-Vol. 304, 1995, pp. 443-451. [Harris 95b]

D.O. Harris, Probabilistic Crack Growth and Modeling. Ch. 8 of Reliability-Based Mechanical Design, edited by T. Cruse, Marcel Dekker, Inc., New York, 1997, pp. 265-331. [Harris 97a]


D.O. Harris, R. Viswanathan, and D. Dedhia, The Effect of Operating Variables, Material and Geometric Variables on the Creep/Fatigue Life of Headers in Fossil Power Service. Fitness for Adverse Environments in Petroleum and Power Equipment, ASME PVP-Vol. 359, 1997, pp. 193-204. [Harris 97b]

D.O. Harris, R. Tilley, and D. Dedhia, Comparisons of Predictions and Observations of Crack Growth in an Operating Header. Fitness-for-Service Evaluations in Petroleum and Fossil Power Plants, ASME PVP-Vol. 380, 1998, pp. 129-133. [Harris 98]

J.A. Henkener, V.B. Lawrence, and R.G. Forman, An Evaluation of Fracture Mechanics Properties of Various Aerospace Materials. Fracture Mechanics: Twenty-Third Symposium, ASTM STP 1189, Philadelphia, Pennsylvania, 1993, pp. 474-497. [Henkener 93]

M.F. Kanninen and C.H. Popelar, Advanced Fracture Mechanics, Oxford University Press, New York, 1985. [Kanninen 85]

J. Keisler and O.M. Chopra, Statistical Analysis of Fatigue Strain-Life Data for Carbon and Low-Alloy Steels. Risk and Safety Assessment: Where is the Balance?, ASME PVP-Vol. 296/SERA-Vol. 3, 1995, pp. 355-366. [Keisler 95]

M.A. Khaleel and F.A. Simonen, The Effects of Initial Flaw Size and Inservice Inspection on Piping Reliability. Service Experience and Reliability Improvement: Nuclear, Fossil, and Petrochemical Plants, ASME PVP-Vol. 288, 1994, pp. 95-107. [Khaleel 94]

M.A. Khaleel, F.A. Simonen, D.O. Harris, and D. Dedhia, The Impact of Inspection on Intergranular Stress Corrosion Cracking for Stainless Steel Piping. Risk and Safety Assessment: Where is the Balance?, ASME PVP-Vol. 296/SERA-Vol. 3, 1995, pp. 411-422. [Khaleel 95]

M.A. Khaleel, O.J.V. Chapman, D.O. Harris, and F.A. Simonen, Flaw Size Distribution and Flaw Existence Frequencies in Nuclear Piping. Probabilistic and Environmental Aspects of Fracture and Fatigue, ASME PVP-Vol. 386, 1999, pp. 127-144. [Khaleel 99]

V. Kumar, M.D. German, and C.F. Shih, An Engineering Approach for Elastic-Plastic Fracture Mechanics. EPRI Report NP-1931, Palo Alto, CA, 1981. [Kumar 81]

S. Mahadevan, Monte Carlo Simulation. Ch. 4 of Reliability-Based Mechanical Design, ed. T.A. Cruse, Marcel Dekker, Inc., New York, 1997, pp. 123-146. [Mahadevan 97a]

S. Mahadevan, Physics-Based Reliability Models. Ch. 6 of Reliability-Based Mechanical Design, ed. T.A. Cruse, Marcel Dekker, Inc., New York, 1997, pp. 197-232. [Mahadevan 97b]

MATHCAD User's Guide, MathSoft, Inc., Cambridge, MA, 1991. [MATHCAD 91]

D.A. Mauney, Economic Optimization of Multiple Component Replacement/Inspection in the Power System Environment. Reliability and Risk in Pressure Vessels and Piping, ASME PVP-Vol. 251, 1993, pp. 1-16. [Mauney 93]



National Safety Council, Accident Facts, Chicago, Illinois, 1992. [NSC 92]

R. Rackwitz and B. Fiessler, Structural Reliability Under Combined Random Load Sequences. Comput. Struct., Vol. 9, 1978, pp. 484-494. [Rackwitz 78]

H. Riedel, Fracture at High Temperatures, Springer-Verlag, New York, 1987. [Riedel 87]

W.D. Rummel, G.L. Hardy, and T.D. Cooper, Applications of NDE Reliability to Systems. Metals Handbook, Vol. 17, Nondestructive Evaluation and Quality Control, Ninth Edition, ASM International, Metals Park, Ohio, 1989, pp. 674-688. [Rummel 89]

A. Saxena, Nonlinear Fracture Mechanics for Engineers, CRC Press, Boca Raton, Florida, 1998. [Saxena 98]

C.F. Shih, Tables of Hutchinson-Rice-Rosengren Singular Field Quantities. Materials Research Laboratory Report MRL E-147, Brown University, Providence, Rhode Island, 1983. [Shih 83]

C.F. Shih and A. Needleman, Fully Plastic Crack Problems, Part 1: Solution by a Penalty Method. Journal of Applied Mechanics, Vol. 51, 1984, pp. 48-56. [Shih 84]

D.C. Stouffer and L.T. Dame, Inelastic Deformation of Metals: Models, Mechanical Properties and Metallurgy, John Wiley & Sons, New York, 1996. [Stouffer 96]

H. Tada, P.C. Paris, and G.R. Irwin, Stress Analysis of Cracks Handbook, Third Edition, ASME Press, New York, 2000. [Tada 00]

J.A. Van Echo and W.F. Simmons, Supplemental Report on the Elevated-Temperature Properties of Chromium-Molybdenum Steels, ASTM Data Series Publication No. DS6S1, American Society for Testing and Materials, Philadelphia, Pennsylvania, 1966. [Van Echo 66]

R. Viswanathan, Damage Mechanisms and Life Assessment of High-Temperature Components, ASM International, Metals Park, Ohio, 1989. [Viswanathan 89]

R. Viswanathan, S.R. Paterson, H. Grunloh, and S. Gehl, Life Assessment of Superheater/Reheater Tubes. ASME Piping and Pressure Vessel Conference, New Orleans, Louisiana, 1992. [Viswanathan 92]

L.C. Wolstenholme, Reliability Modelling: A Statistical Approach, Chapman & Hall, New York, 1999. [Wolstenholme 99]


Target: Boiler Life and Availability Improvement

SINGLE USER LICENSE AGREEMENT THIS IS A LEGALLY BINDING AGREEMENT BETWEEN YOU AND THE ELECTRIC POWER RESEARCH INSTITUTE, INC. (EPRI). PLEASE READ IT CAREFULLY BEFORE REMOVING THE WRAPPING MATERIAL. BY OPENING THIS SEALED PACKAGE YOU ARE AGREEING TO THE TERMS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS OF THIS AGREEMENT, PROMPTLY RETURN THE UNOPENED PACKAGE TO EPRI AND THE PURCHASE PRICE WILL BE REFUNDED. 1. GRANT OF LICENSE EPRI grants you the nonexclusive and nontransferable right during the term of this agreement to use this package only for your own benefit and the benefit of your organization. This means that the following may use this package: (I) your company (at any site owned or operated by your company); (II) its subsidiaries or other related entities; and (III) a consultant to your company or related entities, if the consultant has entered into a contract agreeing not to disclose the package outside of its organization or to use the package for its own benefit or the benefit of any party other than your company. This shrink-wrap license agreement is subordinate to the terms of the Master Utility License Agreement between most U.S. EPRI member utilities and EPRI. Any EPRI member utility that does not have a Master Utility License Agreement may get one on request.

About EPRI EPRI creates science and technology solutions for the global energy and energy services industry. U.S. electric utilities established the Electric Power Research Institute in 1973 as a nonprofit research consortium for the benefit of utility members, their customers, and society. Now known simply as EPRI, the company provides a wide range of innovative products and services to more than 1000 energy-related organizations in 40 countries. EPRI's multidisciplinary team of scientists and engineers draws on a worldwide network of technical and business expertise to help solve today's toughest energy and environmental problems. EPRI. Electrify the World

2. COPYRIGHT This package, including the information contained in it, is either licensed to EPRI or owned by EPRI and is protected by United States and international copyright laws. You may not, without the prior written permission of EPRI, reproduce, translate or modify this package, in any form, in whole or in part, or prepare any derivative work based on this package. 3. RESTRICTIONS You may not rent, lease, license, disclose or give this package to any person or organization, or use the information contained in this package, for the benefit of any third party or for any purpose other than as specified above unless such use is with the prior written permission of EPRI. You agree to take all reasonable steps to prevent unauthorized disclosure or use of this package. Except as specified above, this agreement does not grant you any right to patents, copyrights, trade secrets, trade names, trademarks or any other intellectual property, rights or licenses in respect of this package. 4. TERM AND TERMINATION This license and this agreement are effective until terminated. You may terminate them at any time by destroying this package. EPRI has the right to terminate the license and this agreement immediately if you fail to comply with any term or condition of this agreement. Upon any termination you may destroy this package, but all obligations of nondisclosure will remain in effect.

5. DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, NOR ANY PERSON OR ORGANIZATION ACTING ON BEHALF OF ANY OF THEM: (A) MAKES ANY WARRANTY OR REPRESENTATION WHATSOEVER, EXPRESS OR IMPLIED, (I) WITH RESPECT TO THE USE OF ANY INFORMATION, APPARATUS, METHOD, PROCESS OR SIMILAR ITEM DISCLOSED IN THIS PACKAGE, INCLUDING MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, OR (II) THAT SUCH USE DOES NOT INFRINGE ON OR INTERFERE WITH PRIVATELY OWNED RIGHTS, INCLUDING ANY PARTY'S INTELLECTUAL PROPERTY, OR (III) THAT THIS PACKAGE IS SUITABLE TO ANY PARTICULAR USER'S CIRCUMSTANCE; OR (B) ASSUMES RESPONSIBILITY FOR ANY DAMAGES OR OTHER LIABILITY WHATSOEVER (INCLUDING ANY CONSEQUENTIAL DAMAGES, EVEN IF EPRI OR ANY EPRI REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES) RESULTING FROM YOUR SELECTION OR USE OF THIS PACKAGE OR ANY INFORMATION, APPARATUS, METHOD, PROCESS OR SIMILAR ITEM DISCLOSED IN THIS PACKAGE. 6. EXPORT The laws and regulations of the United States restrict the export and re-export of any portion of this package, and you agree not to export or re-export this package or any related technical data in any form without the appropriate United States and foreign government approvals. 7. CHOICE OF LAW This agreement will be governed by the laws of the State of California as applied to transactions taking place entirely in California between California residents. 8. INTEGRATION You have read and understand this agreement, and acknowledge that it is the final, complete and exclusive agreement between you and EPRI concerning its subject matter, superseding any prior related understanding or agreement. No waiver, variation or different terms of this agreement will be enforceable against EPRI unless EPRI gives its prior written consent, signed by an officer of EPRI.

© 2000 Electric Power Research Institute (EPRI), Inc. All rights reserved. Electric Power Research Institute and EPRI are registered service marks of the Electric Power Research Institute, Inc. EPRI. ELECTRIFY THE WORLD is a service mark of the Electric Power Research Institute, Inc. Printed on recycled paper in the United States of America 1000311

EPRI 3412 Hillview Avenue, Palo Alto, California 94304 PO Box 10412, Palo Alto, California 94303 USA 800.313.3774 650.855.2121 askepri@epri.com www.epri.com
