
SMiRT 13 Post Conference Seminar No. 13

Co-organizers:
N. F. de Almeida Neto, ELETROPAULO, São Paulo, Brazil
J. M. A. Anson, ELETROPAULO, São Paulo, Brazil
P. Bonissone, GE, Schenectady, USA
S. Fukuda, Tokyo Metropolitan Inst. of Technology, Japan
S. Gehl, EPRI Palo Alto, USA
A. S. Jovanovic, MPA Stuttgart, Germany
A. C. Lucia, CEC JRC Ispra, Italy
S. Yoshimura, RACE, Univ. of Tokyo, Japan

APPLICATIONS OF INTELLIGENT SOFTWARE SYSTEMS
IN POWER PLANT, PROCESS PLANT
AND STRUCTURAL ENGINEERING

PROCEEDINGS
Editors: A. S. Jovanovic, A. C. Lucia

São Paulo, Brazil, August 21-23, 1995

MPA
STUTTGART
ISPRA

1997 EUR 17669 EN


APPLICATIONS OF INTELLIGENT SOFTWARE SYSTEMS
IN POWER PLANT, PROCESS PLANT
AND STRUCTURAL ENGINEERING

13th International Conference SMiRT


Post Conference Seminar No. 13
São Paulo, Brazil, August 21-23, 1995

Proceedings
LEGAL NOTICE

Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of the following information.

EUR 17669 EN
ECSC-EC-EAEC Brussels Luxembourg, 1997
Printed in Italy
Co-Organizers of the Seminar:
Europe
Dr. A. S. Jovanovic (Chairman)
MPA Stuttgart, University of Stuttgart
Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: +49-711-685 3007
Fax: +49-711-685 3053
e-mail: jovanovic@mpa.uni-stuttgart.de

Dr. A. C. Lucia
JRC - Joint Research Centre Ispra
21020 Ispra (Va), Italy
Tel.: +39-332-789 155
Fax: +39-332-789 156
e-mail: alfredo.lucia@cen.jrc.it

America
Dr. S. Gehl
EPRI - Electric Power Research Inst.
3412 Hillview Ave., P.O. Box 10412
Palo Alto, CA 94303, U.S.A.
Tel.: +1-415-855 2770
Fax: +1-415-855 8759
e-mail:

Dr. P. Bonissone
General Electric Company
K1-SC32A, P.O. Box 8, Schenectady
NY 12301, U.S.A.
Tel.: +1-518-387 5155
Fax: +1-518-387 6845
e-mail: bonissone@crd.ge.com

Asia
Prof. S. Fukuda
Tokyo Metropolitan Inst. of Technology
6-6 Asahigaoka, Hino,
Tokyo 191, Japan
Tel.: +81-425-83 5111 ext. 3605
Fax: +81-425-83 5119
e-mail: fukuda@mgbfu.tmit.ac.jp

Prof. S. Yoshimura
RACE, University of Tokyo
7-3-1 Hongo, Bunkyo
Tokyo 113, Japan
Tel.: +81-3-3812 2111 ext. 6960
Fax +81-3-5800 6876
e-mail: yoshi@nucl.gen.u-tokyo.ac.jp

Local organization:
Mr. Nicolau F. de Almeida Neto
Mr. José M. A. Anson
ELETROPAULO - Eletricidade de São Paulo S.A.
Dep. Usina Termoelétrica Piratininga
Av. Nossa Senhora do Sabará 5312
04447-011 São Paulo, SP, Brazil
Tel.: +55-11-563 0682
Fax: +55-11-563 0052
e-mail:

Management Office:
Mr. Karl Lieven
MIT GmbH, Promenade 9
52076 Aachen, Germany
Tel.: +49-2408-945812
Fax +49-2408-94582
Commerzbank AG, Aachen,
Postfach 270, Theaterstr. 21-23,
52062 Aachen, Germany
Konto Nr. 12 12 588 01 - Kennung SMIRT, BLZ 390 400 13

Supporting organizations:
CEC - JRC, Ispra, Italy
ELETROPAULO, Brazil
EPRI, Palo Alto, USA
MPA, Stuttgart, Germany
Preface:

In 1990 the organizers of SMiRT 11 in Tokyo (SMiRT: Structural Mechanics in Reactor
Technology) proposed to organize a new post conference seminar on EXPERT SYSTEMS AND AI
APPLICATIONS IN THE POWER GENERATION INDUSTRY in Hakone. In doing so, they
reacted to the obviously growing interest in the application of all kinds of "knowledge-based" software
tools in the areas relevant for SMiRT and for power plant and structural engineering in general.
Building on the positive experience of this seminar, the following one was organized in the
framework of the SMiRT 12 Conference in Constance, Germany, in August 1993. The proceedings
presented here belong to the third seminar of the kind, organized in 1995 in São Paulo, Brazil,
within SMiRT 13.
When compared to the first seminar, the number of papers and participants has significantly increased,
as has the general interest in the overall issue. The level (the number and quality of papers)
achieved at the second seminar (in Constance) has been approximately maintained. The trend
obvious in Constance is present also in São Paulo: most of the papers are nowadays linked to
practical problems, not just "general" or "in principle" solutions. On the other hand, there is an
obvious trend to encompass areas outside the main domains of SMiRT, i.e. a trend to tackle not
only the problems relevant solely to nuclear power plants, but also those from e.g. fossil-fired power
plants and/or process plants. This has been reflected also in the title of the seminar, which has been
slightly changed for the third seminar, being now APPLICATIONS OF INTELLIGENT
SOFTWARE SYSTEMS IN POWER PLANT, PROCESS PLANT AND STRUCTURAL
ENGINEERING.
The change of title mentioned above also reflects the shift from "conventional" (rule-based)
expert systems and/or knowledge-based systems (KBSs) to the systems developed nowadays, which
all tend to be more or less integrated with other tools and are, therefore, probably better described by the
term Intelligent Software Systems.
The seminar and the proceedings have been structured in a sort of "progressive" way: both start
with tutorial-like lectures giving an introduction and describing the state-of-the-art in
two important emerging enabling technologies: industrial-scale fuzzy systems and data mining
("extracting knowledge from data"). They continue by presenting contributions giving an
idea and/or illustration of "what is going on" in the area of intelligent software systems, covering
Western and Eastern Europe, South America, the USA and Japan. This review is followed by a series of
papers presenting single systems and/or projects and their results. In the final part of the seminar and
of the proceedings, the end-users have been asked to express their opinion on the usability and
usefulness of KBSs, as well as on the problems they have been facing.
This 1995 Post Conference SMiRT Seminar and its co-organizers have received support and help
from various institutions, companies and persons. On behalf of all the co-organizers the editors want
to gratefully acknowledge this support here, especially the help provided by the host of this seminar,
the electric utility company ELETROPAULO from São Paulo, Brazil. Our special thanks go also to
all the contributing authors: without their research, and their willingness to prepare and present these
papers, the seminar would never have taken place. The same thanks go to the members of the end-users'
panel: it is their precious advice and opinion that must guide the work of researchers in the area
of intelligent software systems. Special thanks go to Dr. Poloni, for his precious help in the preparation
of the overall seminar, and to Mr. José Anson for his marvelous mastering of the local organization
of the seminar.

The editors
Stuttgart, Ispra, May 1996
Table of Contents

Chapter 1: Enabling and emerging technologies:


knowledge-based systems, neural networks,
object-oriented programming, object-oriented databases .. 1
M. Poloni
Extraction of knowledge from data: practical aspects
in mechanical engineering 3

Chapter 2: Review of some major SMiRT-


and KBS-related research programs 29
S. Yoshimura, J. S. Lee, G. Yagawa
Intelligent approaches for automated design of practical structures 31
S. Fukuda
Intelligent NDI data base for pressure vessels 45
R. D. Townsend
Advances in Damage Assessment and Life Management
of Elevated Temperature Plant - an ERA Perspective
(extended abstract) 63
B. R. Upadhyaya, M. Behravesh, Wu Yan, G. Henry
An Automated Diagnostic Expert System for Eddy Current Analysis
Using Applied Artificial Intelligence Techniques 67
G. M. Ribeiro, G. Lambert-Torres, A. P. Alves da Silva
Applications of intelligent systems to substation protection and control 81
H. H. Over, A. S. Jovanovic, M. Iraci
The critical role of materials data storage and evaluation systems
within intelligent monitoring and diagnostic of power plants 93
M. Gruden
Technology awareness dissemination in Eastern Europe
with intelligent computer systems for remaining power plant
life assessment EU project TINCA 109

Chapter 3: Special course on Intelligent Software Systems


and Remaining Life Management 117
J. M. Brear, R. D. Townsend
Modern remanent life assessment methods:
degradation, damage, crack growth 119
A. S. Jovanovic, M. Friemann
Intelligent software systems for remaining life assessment
- The SP249 project 155
P. Auerkari
Theoretical and practical basis of advanced inspection planning involving
both engineering and non-engineering factors 173
A. S. Jovanovic, P. Auerkari, H. R. Kautz, H. P. Ellingsen, S. Psomas
Intelligent software systems for inspection planning
- The BE5935 project 183
B. J. Cane, G. T. Jones, J. D. Sanders, R. D. Townsend
The PLUS system for optimized O&M of power and process plant 203

Chapter 4: Specific applications 227


D. C. R. Poli, A. Zimek, J. M. Vieira, V. Rivelli
Technical and economical feasibility study of the electron beam process
for SO2 and NOx removal from combustion flue gases in Brazil 229
M. C. Klinguelfus, M. Stemmer, D. Pagano
Implementation of an integrated environment for adaptive neural control 245
M. Poloni, R. Weber
Advanced analysis of material properties using DataEngine 259
J. A. B. Montevechi, P. E. Miyagi
Fuzzy logic - an application for group technology 271

Chapter 5: End-Users' Acceptance of Intelligent Software Systems. 291


Denis V. Coury, David C. Jorge
Artificial Neural Networks Applied to Protection of Power Plant 293
H. R. Kautz
Consequences of current failures for quality assurance 301
L. A. D. Correa, M. P. Ramos, L. M. Silveira, F. H. Budel, J. C. Nievola
Improvement on welding technology by an expert system application 309
T. Sato, H. Futami, T. Hamano, N. Narikawa
Integrated and Intelligent CAE Systems for Nuclear Power Plant 319
H. R. Kautz
SP249 End-Users' response/acceptance of KBSs:
What is required, what is available, what has to be done 339

Alphabetical list of authors 355


(Only One Author per paper for contacting purposes)
CHAPTER 1

ENABLING AND EMERGING TECHNOLOGIES:


KNOWLEDGE-BASED SYSTEMS,
NEURAL NETWORKS,
OBJECT-ORIENTED PROGRAMMING,
OBJECT-ORIENTED DATABASES
Extraction of Knowledge from Data:
Practical Aspects in Mechanical Engineering

M. Poloni
MPA Stuttgart
Pfaffenwaldring 32, 70569 Stuttgart - Germany
Fax: +49 711 685 3053
e-mail: poloni@mpa.uni-stuttgart.de

Abstract
A good engineering approach should be capable of making use of all the available information
effectively. For many practical problems, an important portion of information comes from
human experts. Usually, the expert information is not precise and is represented by fuzzy
terms.
In addition to the expert information, another important portion of information is numerical
information, which is collected from various sensors, instruments or obtained according to
physical models.
In power and process plants, up to now, mainly case studies relative to particular situations
(failures, anomalies) were available, usually in paper form.
Increasingly, these industries are automating the activity of data logging, creating huge
databases of plant operation data. In case of failures, detailed case studies can be created,
documenting the state of the plant before and after the situation of interest.
The need for Data Mining techniques that allow knowledge extraction from these huge
collections of data is currently a very hot topic for the power and process industries.
Several methods are available, each with its advantages and drawbacks. The idea illustrated in
this tutorial is an integrated approach, a sort of "Computer Assisted Data Mining", that makes
use of methods typical both of "classic" KBSs and of numerical processing to maximise
system performance.
The use of pattern recognition techniques (clustering / classification) allows an "Intelligent
extraction" of knowledge from the database. An "Advisor", a KBS-based module, can, at this
point, guide the user towards the use of the most suitable learning / reasoning method
(Machine Learning, Neural Networks, Case Based Reasoning) to reach the Diagnosis /
Prediction desired.
Currently, such an architecture is under development at MPA Stuttgart. Different projects are
contributing to build up the general system.
1. Introduction
In recent years, expert systems technology has been extensively developed [1, 2, 3] to be
applicable to a number of support systems addressing tasks such as integrity analysis and
residual life assessment of critical components. These experiences have demonstrated that the
development of an expert system can become rather complex especially when the exploited
knowledge consists mainly of a large collection of cases, where each relates to a specific
problem and its own identified solution.
In similar situations, several applications recently delivered in different domains (diagnosis,
engineering design, risk analysis, manufacturing quality control), have been based on an
alternative approach to expert systems, namely case-based reasoning (CBR) [6]. The main
feature of this approach is the capability of solving problems through a direct comparison with
available cases, which formally represent similar problems connected with their known
solutions. Advantages with respect to a conventional expert system are: the knowledge
acquisition process is greatly simplified (the knowledge-base is structured as an encoded set of
already solved problems); the system is more robust because it succeeds in adapting an
available case to propose a solution for a given problem; in addition, the explanation
supporting the proposed solution is quite expressive as it contains references to analogous
cases.
At the same time, some approaches coming from new fields like fuzzy logic (FL) [21], neural
networks (NN) [7] and machine learning (ML) [8] have shown their effectiveness in the
management of uncertainty and the possibility of interpolation of system behaviour from
sample data for complex problems. This has been demonstrated by a number of applications
both at the academic and the industrial level (for more details see e.g. [5, 9, 10]).
A good engineering approach should be capable of making use of all the available information
effectively. For the cited problems, a portion of information comes from human experts.
Usually, the expert information is not precise and is represented by fuzzy terms.
In addition to the expert information, another important portion of information is numerical
information, which is collected from various sensors, instruments or obtained according to
physical models. In power and process plants up to now mainly case studies relative to
particular situations (e.g. failures) were available and usually in paper form, e.g. reports.
Increasingly these industries are automating this activity of data logging, creating huge
databases of plant operation data. In case of failures detailed case studies can be created,
documenting the state of the plant before and after the situation of interest.
As a consequence of the availability of large databases, the extraction or learning of "models" or
"relations" from archive data is a very important topic in engineering. Knowledge extraction
from databases is performed by means of so-called "Data Mining" [13]. This name
characterises the use of machine learning techniques when the environment is described
through a database.
For these purposes a number of different techniques are used, among them: neural networks
(NN), fuzzy and statistical data analysis (DA), rule-induction systems like ID3, AQ15 or CN2,
and case-based reasoning (CBR). Each of these techniques has its own benefits and drawbacks.
The choice of the right technique depends on a number of factors, namely the type of analysis
to be performed, the type of data available, and the intended use of the results. Normally
an expert in the field of data mining can suggest an effective solution only together with a
domain expert. Moreover, one of the main obstacles in applying data mining to databases is
the size of the database. In fact, the size of the database has consequences for the cost of
validating the induced models and for the size of the search space. With the growing
dimensions of current databases (orders of magnitude of several megabytes are normal
practice), serious problems can be encountered.
A solution for the first of these problems is the application of database optimisation
techniques. Instead of using the entire database, only a subset will be used for the initial search
phase. During the search process this set will be incrementally extended with data from the
database, using (e.g.) incremental browsing optimisation techniques.
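As a rough sketch of this idea (the function names, the batching scheme and the convergence test below are hypothetical illustrations, not part of the system described in this paper), the working subset can be grown batch by batch until refitting on more data no longer changes the induced model:

    import numpy as np

    def mine_incrementally(records, fit_model, models_close, batch=500, seed=0):
        """Fit a model on a growing random sample of the database; stop growing
        when refitting on more data no longer changes the model appreciably."""
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(records))          # random access order over the database
        size = batch
        model = fit_model([records[i] for i in order[:size]])
        while size < len(records):
            size = min(size + batch, len(records))     # incrementally extend the subset
            refined = fit_model([records[i] for i in order[:size]])
            if models_close(model, refined):           # user-supplied convergence test
                return refined
            model = refined
        return model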
Solutions for the second of the above cited problems, that is, the size of the search space,
should include the use of effective search strategies and heuristics. A very valuable source of
heuristic information is the end-user, normally an expert in the application domain. An
important part of the work has then to be focused on designing user interaction during the
search process, and understandable representations for the knowledge. Furthermore, the use
and integration of domain knowledge, provided by the user, may allow the discovery of
relationships that would remain otherwise hidden.
These drawbacks create the need for an intelligent approach. Such an approach should not only
provide a way to compare and validate the results coming from the available techniques, but
also give advice on how to use these techniques appropriately. At the same time, it should help
the field expert to exploit his/her domain knowledge. In this way the analysis can profit from
all the advantages of using a data mining system without being affected by the cited
drawbacks.
The idea illustrated in this paper and outlined in Figure 1 is a sort of "Computer Assisted Data
Mining" that makes use of methods typical both of "classic" KBSs and of numerical
processing to maximise the performance of the data mining system.
Figure 1: Integrated system structure


The use of pattern recognition techniques (clustering / classification) allows an "intelligent
extraction" of knowledge from the database. An "Advisor", a KBS-based module, can, at this
point, guide the user towards the most suitable learning / reasoning method
(Machine Learning, Neural Networks, Case-Based Reasoning) to reach the desired diagnosis /
prediction. Such a system, when operational, is also likely to train current experts and
engineering personnel to identify and manage failure cases well beyond their usual range of
experience and/or training.
Currently at MPA Stuttgart such an architecture is under development. Different projects are
contributing to build up the structure shown in Figure 1 [18, 5, 4].
The target field is mainly operation support in power and process plants. Such support is
in terms of (e.g.) metallic materials properties characterisation, inspection scheduling, and damage
assessment.
In the following a general introduction to some basic concepts of Data Mining is given
together with a number of practical examples.
The current state of development of different systems at MPA Stuttgart is then summarised.

2. Pattern recognition in data mining


Pattern recognition can be described in general as a subdiscipline of data analysis. Some
definitions from the literature are: "a field concerned with machine recognition of meaningful
regularities in noisy or complex environments" or "the search for structure in data".
The fundamental steps to develop a pattern recognition system can be described as:
1. Humans nominate either features or pairwise relationships that hopefully capture basic
relationships between the apparently important variables of a process. Data are collected
from humans and sensors.
2. A search for underlying structure in the data is conducted. Its outcome may provide a
basis for hypothesizing relationships between variables governing and governed by the
process.
3. Hypotheses are formalised by characterising the process with equations, or rules, or
perhaps algorithms - in short, a model of the system is proposed. Theoretical aspects of
the model, e.g., linearity, continuity and stability, are established, providing clues and
insights into the model and the process it represents.
4. The model is "trained" with labelled training data (the model is parametrised by
providing it with examples of correct instances). The model is tested and compared with
other models of the same process for things such as relative sensitivity to perturbations
of its inputs and parameters, error rate performances, storage and speed.
5. A system that implements the model is built, tested and placed in service. The system
classifies, predicts, estimates and/or controls the process and its subprocesses.
This type of data analysis is particularly indicated when unknown relations are present in the
data sets and there is a need to reveal the possible structures behind the data. The
possibility of using the obtained model to characterise new data allows its use as an
assessment tool even in the presence of uncertainties.
The purpose of cluster analysis is to place objects into groups or clusters suggested by the
data, not defined a priori, such that objects in a given cluster tend to be similar to each other in
some sense, and objects in different clusters tend to be dissimilar. Cluster analysis can also be
used for summarising data rather than for finding "natural" or "real" clusters; this use of
clustering is sometimes called dissection.
Any generalisation about cluster analysis must be vague, because a vast number of clustering
methods have been developed in several different fields, with different definitions of clusters
and of similarity among objects.
It is possible to summarise the concepts that will be exposed in the following way:
Process description: a (in our case numerical) description of the system or system behaviour in terms of data vectors, relational data, object data
Feature analysis: the search for structure in data items or observations x_k ∈ X
Cluster analysis: the search for structure in data sets X ⊂ S (data space)
Classification: the search for structure in data spaces or populations S.
A graphic representation of the different steps and their interconnections is reported in Figure 2.

Figure 2: General structure of a pattern recognition system (adapted from [21])

2.1 Process description


The first choice faced by the Pattern Recognition System (PRS) designer concerns the way the
process of interest will be represented for study. In pattern recognition, the usual situation is
that the process is governed by, governs, or both, individual objects and their relationships
with each other. The most familiar choice is the representation of objects within the process by
a set of numerical data [21].
Generally speaking, two data structures are used in numerical PRSs: object data vectors
(features, pattern vectors) and pairwise relational data (similarities, proximities). In the first
case, each object observed in the process (some kind of physical entity) has a vector of numerical
features as its representation. Object data for pattern recognition can be labeled, in which case the
identity of each vector as belonging to one of several classes is known, or they can be unlabeled,
so that we have only the vectors themselves and, perhaps, some idea about the classes of objects
they represent.
It may happen that, instead of an object data set X as described above, we have access to a set
of numerical relationships between pairs of objects. Relational data are to be found in many
applications and systems, perhaps hiding in different semantic guises.
2.2 Feature analysis
Feature analysis refers to a collection of methods that are used to explore and improve raw
data, that is, the data that are nominated and collected during process description. With but
few exceptions this problem area assumes the data to be object data.
Pre-processing includes operations such as scaling, normalisation, smoothing and various
other "clean-up" techniques. The utility of the data for more complex downstream processing tasks
such as clustering and classifier design is clearly affected by the preprocessing operations, so this
step in the design of a PRS is always important and should be given careful attention.
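A minimal sketch of two such clean-up operations (the z-score choice and the window size below are illustrative assumptions, not prescriptions from the text):

    import numpy as np

    def zscore(X):
        """Scale each feature (column) to zero mean and unit variance."""
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        return (X - mu) / np.where(sigma == 0.0, 1.0, sigma)

    def smooth(x, window=5):
        """Moving-average smoothing of a 1-D signal."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="same")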
2-D display and extraction techniques for object data can be cast in a single framework: any
function f_E : R^p -> R^q with q < p is a feature extractor when applied to the set X ⊂ R^p of
feature vectors. The new features are the image of X under f_E, say Y = f_E[X]. Feature
selection, i.e. choosing a subset of the original measured features, is done by taking f_E to be a
projection onto some coordinate subspace of R^p.
The basic idea is that the feature space can be compressed by eliminating, via selection or
transformation, redundant (dependent) and unimportant (for the problem at hand) features. Since
q < p, the time and space complexity of algorithms that use the transformed data are obviously
reduced in the process. Extraction techniques can be divided into analytic (closed form for f_E)
versus algorithmic, and linear versus non-linear.
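To make the mapping Y = f_E[X] concrete, the following sketch realises a linear feature extractor as a projection onto the first q principal components; PCA is only one possible choice of f_E, used here for illustration and not mandated by the text:

    import numpy as np

    def pca_extract(X, q=2):
        """Linear feature extraction: project centred object data onto the
        first q principal directions, giving Y = f_E[X] with q < p."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:q].T          # an n x q array of extracted features

With q = 2 the result can be plotted directly as the scatter diagram discussed next.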
2-D display, the visual representation of p-dimensional data in a viewing plane, is a way to
explore structures in, and get ideas about, the measured data. Methods for 2-D display fall into
two general categories: scatterplots and pictorial displays. When q = 2, the transformed data
set Y can be displayed as a scatter diagram for visual inspection; all of the extraction methods
mentioned above can be used to produce 2-D scatterplots by taking q = 2. The other class of display
techniques uses analytic or algorithmic transformations of the data that result in Y being some
sort of pictorial representation of X. Included in this category are, for example, Kohonen's
Feature Maps [21].

2.3 Clustering
The clustering problem for the data set X is the identification of an "optimal" partition of X;
that is, one that groups together objects or object data vectors which share some well defined
(mathematical) similarity. It is hoped and implicitly believed, of course, that an optimal
mathematical grouping is in some sense an accurate portrayal of natural groupings in the
physical process from whence the object data are derived. The number of clusters c is assumed
to be known; otherwise, its value becomes part of the clustering problem.
Experimental data carry information either about the process generating them or the
phenomena they represent. We search for a manner (structure) in which this information can
be organised so that relationships between the variables in the process can be identified.
Clustering methods are used in pattern recognition to determine structures in data sets. The
data space is divided into subspaces, to which the different subsets of the data set are more likely to
belong.
Objective function based clustering methods are a particular class of clustering methods in
which a criterion function is iteratively minimised until a global or local minimum is reached.
Usually, to apply the clustering algorithm, the data items are represented not by their entire
set of characteristics (variables), but only by a number of them (forming the feature space).
Each object has an associated feature vector (the vector of its co-ordinates in the feature space).
Such methods can be either hard (crisp) or fuzzy (based on fuzzy set theory),
depending on whether each feature vector characterising an object belongs exclusively to one
cluster or to all clusters to different degrees. In other words, classical (crisp) clustering
algorithms generate partitions such that each object is assigned to exactly one cluster. Often,
however, objects cannot adequately be assigned to strictly one cluster (because they are
located between clusters). In these cases fuzzy clustering methods provide a much more
adequate tool for representing real-data structures, where non-stochastic uncertainties of
different type can be present.
A crucial point for a successful analysis is the selection of the right set of features (variables).
This choice should be representative of the physical process that generated the data, to enable
us to construct realistic clusters.
Let us assume that the important problem of feature extraction has been solved. Our task is
then to divide the n objects x ∈ X, characterised by indicators (variables of different types), into
c, 2 ≤ c < n, categorically homogeneous subsets called clusters. The objects belonging to any
one of the clusters should be similar, and the objects of different clusters as dissimilar as
possible. The number of clusters, c, is normally not known in advance. Before applying any
clustering procedure it is very important to select the mathematical properties of the data set (for
example distance, connectivity, intensity) and the way in which they should be used in order to
identify clusters. Unfortunately these questions have to be answered for each different data set,
since there is no universally optimal cluster criterion. The cluster criterion adopted can heavily
influence the results, leading to wrong interpretations when care is not taken in its choice.

2.4 Classification
A classifier is a device, means, or algorithm by which the data space is partitioned into c
decision regions. Classification attempts to discover associations between subclasses of a
population. In many cases the activity of classification is complementary to that of clustering.
Once the clustering of the data set has been performed and the clusters detected, it is
possible to use these clusters to classify new incoming data. On the basis of the criterion
used to cluster the data, the similarity of the new item is evaluated, providing an indication of
the region (cluster) of system behaviour to which it belongs.
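A minimal sketch of this complementarity (assuming point prototypes, i.e. cluster centres, and Euclidean distance as the similarity criterion; neither is the only possible choice):

    import numpy as np

    def classify(x_new, centres):
        """Assign an incoming item to the region of the closest cluster centre."""
        d = np.linalg.norm(centres - x_new, axis=1)    # distance to each prototype
        return int(np.argmin(d))                       # index of the most similar cluster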

2.5 Comparison with conventional analysis


To build a bridge between the usual data analysis techniques used in material properties
characterisation (namely regression analysis) and the pattern recognition ones, a simplified
comparison is given in Figure 3.
When mixed data are available it is possible to fit switching regression models: simultaneous
estimates for the parameters of c regression models, together with a fuzzy c-partitioning of the
data, are feasible.
Using regression analysis, generally a global model over the interval of definition of the data set
is obtained, with its confidence intervals (e.g. 95%).
Using pattern recognition techniques (e.g. cluster analysis), local models can be reached. These
models may possibly better approximate the material behaviour. Moreover, the obtained
models can be used to automatically "classify" new data items. While a regression model can
be evaluated at the new "point", the classification reports the typicality of the new item with
respect to the previously built data-based model.
Fuzzy clusters can also give rise to "local" regression models; this is in fact the essence of the
idea introduced originally in [11, 12]. The overall model is then structured into a series of "if-
then" statements. The conditional part of each statement includes linguistic labels of the input
variables, while the action part contains a linear (or more generally non-linear) numerical
relationship between input and output variables; the clustering method applies to the formation
of the conditional part.

Figure 3: Comparison between the use of pattern recognition (classification, local models) and regression analysis (global model) for assessment

3. Pattern recognition techniques: theory and practice


In the following, a number of different techniques are presented that enable the realisation of
pattern recognition systems. The focus will be on fuzzy and neural techniques, which have
demonstrated, in some cases, advantages over the conventional ones.
Short introductions on what a fuzzy set or a neural network is will be given. For exhaustive
expositions the reader should refer to the cited literature.

3.1 Fuzzy clustering (objective function-based)


Fuzzy sets were introduced by Zadeh in 1965 to represent and manipulate data and
information that possess non-statistical uncertainty.
J. Bezdek in his famous book [16] says "Since their inception, fuzzy sets have advanced in a
wide variety of disciplines: control theory, topology, linguistics, .... Nowhere, however, has
their impact been more profound or natural as in pattern recognition, which in turn, is often a
basis for decision theory."
What is a fuzzy set?
People new to the field often wonder what the "set" is physically. In conventional set theory,
any set of actual, real objects is completely equivalent to, and isomorphically described by, a
crisp membership function m_F. A common way to describe it is through a characteristic
function, assuming the value 1 if the object belongs to the set and 0 otherwise. A fuzzy
set can be described as an extension of the classical one: its characteristic (now called
membership) function m_F can assume values not only in the set {0, 1} but anywhere in the interval
[0, 1]. There is, however, no set-theoretic equivalent of "real objects" corresponding to m_F, the
function-theoretic representation of F. That is, fuzzy sets are always functions, from some
universe of objects, say X, onto [0, 1], the range of m_F.
So, the membership function is the basic idea in fuzzy set theory; its values measure the degrees to
which objects satisfy imprecisely defined properties. That is, this feature allows for the
description of uncertain properties of an object, simply by giving a membership value less than 1.
Figure 4: Fuzzy sets: example (membership of the variable distance (km) in the sets MEDIUM and BIG)

An example is reported in Figure 4, where two fuzzy sets characterising the variable distance
are shown. While it is clear that a distance bigger than 60 kilometers is definitively BIG (at
least in the sense of the definition of distance in this particular case), and a distance of 25 km
is definitively MEDIUM, what about something in between?
A gradual membership in the two sets is given. For example, a 50 km distance will have a 0.8
membership in the BIG set and 0.2 in the MEDIUM set.
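The following sketch reproduces the behaviour described for Figure 4 with piecewise-linear (trapezoidal) membership functions; the breakpoints are assumptions chosen only so that 50 km obtains the memberships 0.8 (BIG) and 0.2 (MEDIUM) quoted above:

    import numpy as np

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    medium = lambda km: trapezoid(km, 5.0, 15.0, 30.0, 55.0)
    big = lambda km: trapezoid(km, 30.0, 55.0, 1e9, 2e9)   # open-ended to the right

    print(medium(25.0), big(25.0))    # 1.0 0.0 : definitively MEDIUM
    print(medium(50.0), big(50.0))    # 0.2 0.8 : gradual membership in both sets
    print(big(60.0))                  # 1.0     : definitively BIG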
All of set theory has been "enlarged" to accommodate this extended definition, that is, set
operations like conjunction, disjunction and so on. For a detailed exposition of fuzzy set
theory see [25].
The question could now be: where to use fuzzy pattern recognition? The main lines can be
summarised as follows:
=> insufficient information for implementing classical methods
=> the expert's uncertainty about the exact membership of the object
=> inherent characteristics of objects which can be conveniently represented in
terms of fuzzy sets
=> the opportunity to include and process expert opinion in decision making and to
handle partial inconsistency
The replacement of statistical entities with fuzzy ones should be done very carefully, bearing in
mind the incompleteness of the analogy between them. Moreover, fuzzy theory is supposed to
be a useful mathematical description of non-statistical uncertainty. Therefore, it seems more
reasonable to invest efforts in a new statement of the problem which could handle fuzzy
information rather than to constrict the fuzzy problem into probabilistic frameworks.
Clustering methods can be either hard (crisp) or fuzzy, depending on whether each feature
vector (the vector of co-ordinates in the data space) characterising an object belongs
exclusively to one cluster or to all clusters to different degrees. The best-known algorithm, from
which many variants derive, is the fuzzy c-means algorithm (FCM).
Table 1 summarises the different steps performed in a typical clustering session. The number
of clusters C is derived from the domain knowledge on the data or is a test value that can be
changed on the basis of the results obtained, m is an exponent that will have value 1 if we want
to perform a crisp clustering, and a growing value as we want a more fuzzy characterisation of
the clusters. The C-partition is a matrix where the membership values of all data items in each
cluster are stored. The initialisation of this matrix can be done randomly or using, if
available, a priori information.
Table 1: Objective function-based clustering procedure

Fix the number of clusters C; fix m, m ∈ [1, ∞)
Initialise the fuzzy C-partition U
REPEAT
    Update the parameters of each cluster prototype
    Update the partition matrix U
UNTIL ||ΔU|| < ε

A matrix norm to evaluate the distance of each data item from the clusters has to be fixed.
This choice gives rise to different characterisations of the clustering algorithm. If use is made
of a Euclidean distance (as in the FCM algorithm), the cluster prototypes will be points
(also known as cluster centres) and, as a consequence, the algorithm will search for spherical
clusters. Different kinds of prototypes and distances yield different shapes for the searched
clusters: hyperellipsoidal, lines, planes, hyperspherical shells. The iterative process proceeds
until the fuzzy partition no longer changes significantly (the norm of the difference between the
last two iterations has a value lower than a pre-determined threshold).
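A compact sketch of the FCM loop of Table 1, with point prototypes, squared Euclidean distance and the probabilistic constraint (the columns of U sum to one); the parameter defaults are illustrative assumptions:

    import numpy as np

    def fcm(X, c, m=2.0, eps=1e-5, max_iter=200, seed=0):
        """Fuzzy c-means: alternate updates of the centres V and the partition U."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, len(X)))
        U /= U.sum(axis=0)                                  # initial fuzzy C-partition
        for _ in range(max_iter):
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # update cluster centres
            d2 = ((X[None] - V[:, None]) ** 2).sum(-1)      # squared distances, shape (c, n)
            d2 = np.fmax(d2, 1e-12)                         # guard against zero distance
            inv = d2 ** (-1.0 / (m - 1.0))
            U_new = inv / inv.sum(axis=0)                   # membership update
            if np.abs(U_new - U).max() < eps:               # ||delta U|| below threshold
                return U_new, V
            U = U_new
        return U, V

On the butterfly data set discussed next, fcm with c = 2 gives the middle point a membership of about 0.5 in each cluster.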
A typical example where the application of a fuzzy clustering algorithm is of use is that of the
"butterfly" data set, shown in Figure 5. The two triangular regions have a
common point in x8.

X3
* Xu
X6 * X12

Xs
X2 X5 X- X9 X11 Xl4

X4 X10

Xi Xl3

Figure 5: the Butterfly data set


Clustering these points by a crisp objective-function algorithm might yield the picture shown
in Figure 6, in which "1" indicates membership in the left-hand cluster and "0" membership in
the right-hand cluster. It is easy to observe that, even though the butterfly is symmetric, the
clusters in Figure 6 are not, because the point x8, the point "between" the clusters, has to be
(fully) assigned to either cluster 1 or cluster 2. Applying a fuzzy clustering algorithm, a
membership of 0.5 in both clusters results for this point, which seems more appropriate.

3.1.1 Possibilistic clustering of hardness measurement data


Fuzzy clustering algorithms do not always estimate the parameters of the prototypes
accurately. The main source of this problem is the probabilistic constraint used in fuzzy
clustering, which states that the memberships of a data point across all clusters must sum to
one. Within the framework of possibility theory the cited constraint can be relaxed [23].
This approach permits a noise point, far from all the prototypes of the clusters found, to be
given a low membership in every cluster, as well as characterising in a better way points
belonging, by their characteristics, to more than one cluster. In other words, in the FCM
algorithm the membership of a point in a class is a relative number, depending on the
membership of the point in all other classes and thus indirectly on the total number of classes
itself.
Figures 6 and 7: crisp and fuzzy memberships for the butterfly data set (centres of clusters 1 and 2 marked)

Due to this fact, the membership values cannot distinguish between a moderately atypical
member and an extremely atypical member. Therefore noise points, which are often quite
distant from the primary clusters, can drastically influence the estimate of the class prototypes,
and hence the final partition.
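In the possibilistic scheme the membership update depends only on the distance to the cluster itself and on a per-cluster scale eta; the one-line sketch below is in the spirit of [23], with the scales eta assumed to be estimated elsewhere (often from a preliminary FCM run):

    import numpy as np

    def possibilistic_U(d2, eta, m=2.0):
        """d2: (c, n) squared distances to the prototypes; eta: (c,) cluster scales.
        Memberships no longer sum to one: a noise point far from all prototypes
        receives a low membership in every cluster."""
        return 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))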
Hardness-based temperature estimation
A set of experimental data has been extracted from [24] regarding the determination of
hardness properties for two different ferritic steels, namely 2¼Cr1Mo and 1Cr½Mo.
In the following, hardness will be indicated with H, while the Sherby-Dorn parameter will be
indicated with P. The expression of the Sherby-Dorn parameter is

    P = log t - C/T

where t is the time in hours, T is the temperature in Kelvin and C is a material constant. The
two derived expressions will be used in the paper:

    T = C / (log t - P)        (1)

    t = 10^(P + C/T)           (2)

Hardness measures are used to estimate the temperature, and by means of the temperature the
remaining lifetime.
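A direct transcription of the Sherby-Dorn relations above (t in hours, T in Kelvin; C is the material constant, for which the text below reports the value 10270 for the steel considered):

    import math

    def sherby_dorn_P(t, T, C):
        """P = log t - C/T."""
        return math.log10(t) - C / T

    def temperature_from_P(t, P, C):
        """Equation (1): T = C / (log t - P)."""
        return C / (math.log10(t) - P)

    def time_from_P(T, P, C):
        """Equation (2): t = 10**(P + C/T)."""
        return 10.0 ** (P + C / T)

    # e.g. temperature_from_P(t=1e4, P=-8.5, C=10270) ~ 822 K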
The material under consideration is a 1Cr½Mo steel with the following composition (wt.%):

    C      Si     P      S      Mn     Ni     Cr     Mo     V      W
    0.069  0.27   0.01   0.014  0.58   0.05   0.75   0.45   0.05   < 0.05
As reported in [24], this material does not exhibit a standard behaviour. Problems have been
encountered in determining the material constant C, ultimately assumed to have a value of
10270 in the normalised condition. The initial hardness is equal to 115, although it is clear
from Figure 8 that there is a significant number of measurements above this
threshold. This kind of behaviour is detected for temperatures above 625 °C and for short
exposure times. The remaining data points show a progressive softening of the material with
time, although there exists a region where the values maintain a stable level (up to P values of
about -8.5) and then start to decrease relatively rapidly. Such a behaviour requires a different
characterisation of the hardness-dependent variable.

The application of a clustering method provides a way to find the possible structure of
these regions from the experimental data. In this case the possibilistic clustering approach has
been used. Two clusters have been assumed and the algorithm suggested in [22] employed. The
initialisation of the procedure has been performed using the FCM algorithm. The result is
shown in Figure 9.

Figure 8: Gaussian fit for the steel data. Figure 9: Steel regression models (clusters 1 and 2 with local fits; the initial hardness level is marked)

The clustering method can effectively detect two regions and a clear threshold between them.
An example of approximation is given using two second-order regression models. As a test
comparison, in Figure 8 a Gaussian fit of the data is reported together with the 95% confidence
region of the approximation. It is easy to see that the Gaussian fit can hardly deal with the hardness
values above the initial one.

3.2 Fuzzy approximation of inspection intervals


In this example the relationship between damage class (derived from metallographic replicas)
and expired life as inputs, and remaining life as output, is approximated by means of a rule-based
system. The rules are automatically built from experimental data.

3.2.1 Adaptive fuzzy systems to build a rule-based classifier


In many data processing problems (control, signal processing, data analysis) the information
concerning design, evaluation, realisation, etc., can be classified into two main types:
numerical information (e.g. sensor measurements) and linguistic information (experts'
opinion). If both kinds of resources are to be used, there is a need to integrate them in a
common framework, providing a way of evaluating the performance of the obtained
system. In [26] a general approach to solve this problem is proposed. The solution is the
realisation of a fuzzy rule base where both types of information are integrated.
The procedure consists of a five-step algorithm and is exploited by realising the actions
described in TABLE 2. It is possible to prove that the generated fuzzy system is a universal
approximator from a compact set Ω ⊂ R^n to R, i.e., it can approximate any real continuous
function defined on Ω to any accuracy. Thus, the adaptive fuzzy system estimates fuzzy rules
from sample data; this reduces to patch or cluster estimation in the data space.
In the case of relationships difficult to model due to the lack of analytical insight into the
mechanisms of system behaviour, or due to their high non-linearity, such an approximation can
be of great advantage. It realises a graph cover with local averaging. The "fuzziness" or
multivalence of sets comes into play when output sets overlap. A fuzzy system is unique in
that it can tie vague definitions to the mathematics of curves. In this way it ties natural
language and expert rules to state-space geometry.

TABLE 2 - Rule generation algorithm

Step 1: Division of input and output spaces into fuzzy regions:
Each variable is assigned a domain interval where its values are expected
to lie. Each domain interval is then divided into regions, each characterised by a linguistic
label and a membership function (fuzzy set).

Step 2: Generation of fuzzy rules from given data pairs:
For each variable (input/output) value, the region (linguistic label) with maximum
membership value is derived and a new rule of the following type is created (we assume
a two-input, one-output system):

IF x1 is BIG and x2 is SMALL THEN y is MEDIUM

where BIG, SMALL and MEDIUM are defined as described in Step 1.

Step 3: Assign a degree to each rule:
The membership value of each variable, plus any a priori information about the
data pairs, contributes to assigning a degree to each rule. This is done to resolve conflicting
rules, i.e. rules with the same antecedent but different conclusions: in this case only the rule
with the highest degree is kept. Such a procedure enables collecting only the most significant
part of the data. A kind of filtering action is performed, retaining only the possible "mean
behaviour" of the system.

Step 4: Creation of a combined rule base:
All the rules coming from data pairs and from linguistic descriptions ("expert rules") are
integrated in the final rule base.

Step 5: Determination of a mapping based on the combined rule base:
A suitable defuzzification strategy allows the obtained rule base to be used to infer
conclusions from new input data coming from the system.
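A minimal sketch of steps 1-3 and 5 of this algorithm for a two-input, one-output system; the triangular, evenly spaced fuzzy regions and the region counts are assumptions made here for illustration, and the expert rules of step 4 are omitted:

    import numpy as np

    def make_regions(lo, hi, n):
        """Step 1: n triangular fuzzy regions with evenly spaced centres on [lo, hi]."""
        return np.linspace(lo, hi, n)

    def memberships(v, centres):
        """Membership of value v in each triangular region (width = centre spacing)."""
        w = centres[1] - centres[0]
        return np.clip(1.0 - np.abs(v - centres) / w, 0.0, 1.0)

    def learn_rules(X, y, in_centres, out_centres):
        """Steps 2-3: one rule per data pair; for conflicting rules (same antecedent,
        different conclusions) only the rule with the highest degree is kept."""
        rules = {}
        for xi, yi in zip(X, y):
            antecedent, degree = [], 1.0
            for v, centres in zip(xi, in_centres):
                mu = memberships(v, centres)
                antecedent.append(int(mu.argmax()))
                degree *= mu.max()
            mu_out = memberships(yi, out_centres)
            conclusion = int(mu_out.argmax())
            degree *= mu_out.max()
            key = tuple(antecedent)
            if key not in rules or degree > rules[key][1]:
                rules[key] = (conclusion, degree)
        return rules

    def infer(x, rules, in_centres, out_centres):
        """Step 5: weighted-average defuzzification over the stored rule base."""
        num = den = 0.0
        for antecedent, (conclusion, _) in rules.items():
            w = 1.0
            for v, centres, label in zip(x, in_centres, antecedent):
                w *= memberships(v, centres)[label]
            num += w * out_centres[conclusion]
            den += w
        return num / den if den > 0.0 else float("nan")

For the inspection-interval problem below, x would collect a numerical encoding of the damage class and the life expended, and y the remaining life; the encodings themselves are not specified in the text.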

3.2.2 Problem statement and data available


In [27] a set of tests was performed on two casts of 1Cr½Mo steel and one cast of
2¼Cr1Mo, simulating the coarse-grained HAZ condition at temperatures in the range 535-635
°C, using both uniaxial and biaxial (torsion) stress states. All three casts were heat treated in
order to obtain coarse-grained bainitic microstructures representative of the coarse HAZ of
header welds. Surface replicas were made in order to apply the damage classification rule.
TABLE 3 reports a description of the Neubauer classes of damage [28]. Damage
classification at different life fractions has been estimated, in order to have a test-bed for prediction
methods, namely the A-parameter method [27]. This set of experimental data has been used
to check the performance of the new approach.
A test of the results has been performed against those reported in the cited work. The
comparison is particularly meaningful because the cited studies were the basis of the approach
of the Californian Electric Power Research Institute (EPRI) for inspection scheduling.
To schedule reinspection intervals, EPRI in [29] takes into consideration not only the damage
class, as in the Neubauer approach, but also the service life. As a consequence, longer inter-
inspection intervals are obtained (see Figure 10). In this way a functional relation is established
between the next inspection time, on one side, and life expended and damage class, on the other.
In the illustrated case the relation is linear and is based on approximations
considering a worst-case approach in the elaboration of the experimental data, plus some
safety factors.

The work illustrated in this section tries to formulate a generic relation approximated by means
of a fuzzy system.
Analysis method. To find a relation between life expended, damage rating and remaining life,
an adaptive fuzzy system has been used as previously described.
The rule-based system has been designed to accommodate a qualitative input in terms of
damage class and a crisp input as life expended; the output consists of a prediction of the
expired life fraction of the component.
The membership functions have been manually tuned on the basis of the indications coming from
the experimental data and the material behaviour.

3.2.3 Results
A software module has been programmed to implement and test the described analysis
method. This module, after the necessary tuning and validation, will become one of the tools
belonging to a knowledge-based system on material properties under development at MPA
Stuttgart.
TABLE 3 - Description of damage classes (from [27]); the actions take no consideration of service life

Undamaged: No creep damage detected.

Isolated (class A): Isolated cavities are observed. It is not possible to deduce the direction of maximum principal stress from the damage seen. Action: no action.

Oriented: Cavities are observed, often with multiple cavities on the same boundary. A clear alignment of damaged boundaries can be seen, indicating the axis of maximum principal stress. Action: re-inspection in 1½-3 years.

Microcracked (class C): Cavities are observed on boundaries normal to the maximum principal stress. Some boundaries have separated due to the interlinkage of cavities on them to form microcracks. Action: repair or replacement within 6 months.

Macrocracked (class D): In addition to cavities and microcracks being observed, microcracks have joined together and widened to form macrocracks many grain boundaries long. Action: immediate repair.

TABLE 4 reports the input data and the results for a subset of cases, together with the
different estimate proposed in the original report [27]. The results are limited to the
1Cr½Mo steel, because for the other material too few experimental data were reported. It can be
seen that the proposed approach gives good predictions and, moreover, is always
conservative.
Figure 11 reports the fuzzy sets describing the expired life and the damage class.
While the reliability of such an approximation clearly depends on the quality of the input data,
as for every assessment method based on experimental results, the fuzzy representation of the
variables provides a tolerance against the possible uncertainties, e.g. in the damage class
determination.

3.2.4 Remarks
The preliminary results obtained by evaluating the performance of the elaborated fuzzy models
are encouraging; but in order to bring them to really support industrial applications, further
investigations are needed to assess the conservativeness and the generality of the procedures,
as well as to evaluate different approaches based on the same theoretical background
(e.g. classification algorithms).
TABLE 4 - Data set and report prediction

Damage   Time expired   Time remaining   Best estimate       Rule-based
class    (hours)        (hours), real    (see [27])          approach
A        2008           6621             5588                5980
         4017           4612             4720                4454
C        6195           2434             3755                2349
D        7712            918             2377                  56
D        8629              0              308                   0

On the other hand, the most relevant practical limitation of the results presented regards the
data. Data necessary for the type of analysis presented were available mainly for 1Cr½Mo
steel, and the authors are not aware of similar sets of data available for other materials (e.g.
12%Cr steel). The behaviour of these materials is different and, hence, the heuristic values
extracted would be different. Further research in this direction is thus necessary.

Figure 10: EPRI reinspection scheme (adapted from [29]): reinspection interval (years) versus service life expended (years), comparing the EPRI and Neubauer recommendations for the undamaged class

3.3 Neural networks in data mining


Artificial neural networks are massively parallel interconnections of simple neurons that
function as a collective system. It has been observed that many problems in pattern recognition
are solved more easily by humans than by computers, perhaps because of the basic architecture
and functioning mechanism of the human brain. Neural nets are designed in an attempt to mimic the
human brain, in order to emulate human performance and thereby function intelligently.
These networks may be broadly categorised into two types:
- those that learn adaptively, updating their connection weights during training;
- those whose parameters are time-invariant, i.e., whose weights are fixed initially and no
further updating occurs.

Figure 11: Fuzzy sets: expired life (above), damage classes (below)

For the purposes of this paper, a network of the first kind will be considered. These
networks can be trained by examples (as is often required in real life) and sometimes generalise
well for unknown test cases. The worthiness of a network lies in its inferencing or
generalisation capabilities over such test sets:
"Connectionist learning procedures are suitable in domains with several graded features
that collectively contribute to the solution of a problem. In the process of learning, a network
may discover important underlying regularities in the task domain" [15].
The multilayer perceptron (MLP) consists of multiple layers of simple two-state, sigmoid
processing elements (nodes) or neurons that interact using weighted connections (see Figure
12). After a lowermost input layer there are usually any number of intermediate, or hidden,
layers, followed by an output layer at the top. There exist no interconnections within a layer,
while all neurons in a layer are fully connected to the neurons in adjacent layers. Weights measure
the degree of correlation between the activity levels of the neurons that they connect.
An external input vector is supplied to the network by clamping it at the nodes in the input
layer. For conventional classification problems, during training, the appropriate output node is
clamped to state 1 while the others are clamped to state 0. This is the desired output supplied
by the teacher.
The training procedure has to determine the internal parameters of the hidden units based on
its knowledge of the inputs and desired outputs. Hence training consists of searching a very
large parameter space and therefore is usually rather slow.
Multilayer perceptron using backpropagation of error
During training, each pattern of the training set is used in succession to clamp the input and
output layers of the network. A sequence of forward and backward passes constitutes a cycle
and such a cycle through the entire training set is termed a sweep. After a number of sweeps
through the training data, the error may be minimised. At this stage the network is supposed to
have discovered (learned) the relationship between the input and output vectors in the training
samples.
In the testing phase the neural net is expected to be able to utilise the information encoded in
its connection weights to assign the correct output labels for the test vectors, which are now
clamped only at the input layer. It should be noted that the optimal number of hidden layers and
the number of units in each of such layers are mostly empirical in nature. The number of units
in layer H corresponds to the number of output classes.
MLP models using backpropagation have been applied to the exclusive-OR problem and to
recognising familiar shapes in novel positions, discovering semantic features, recognising
written text, recognising speech and identifying sonar targets. A more detailed
description with some application examples can be found in [15].
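A minimal one-hidden-layer MLP trained by backpropagation of error on the exclusive-OR problem mentioned above; the layer sizes, learning rate and number of sweeps are illustrative assumptions, not parameters from this paper:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)      # desired outputs ("teacher")

    W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # input -> hidden weights
    W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for sweep in range(10000):                           # one sweep = whole training set
        h = sigmoid(X @ W1 + b1)                         # forward pass
        yhat = sigmoid(h @ W2 + b2)
        err = yhat - t                                   # output error
        g2 = err * yhat * (1.0 - yhat)                   # backward pass (sigmoid derivative)
        g1 = (g2 @ W2.T) * h * (1.0 - h)
        W2 -= 0.5 * h.T @ g2;  b2 -= 0.5 * g2.sum(0)     # gradient-descent weight updates
        W1 -= 0.5 * X.T @ g1;  b1 -= 0.5 * g1.sum(0)

    print(np.round(yhat.ravel(), 2))                     # ~ [0, 1, 1, 0] after successful training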
Among the variety of neural network architectures, the hierarchical (multilayer) networks with
a supervised learning algorithm have been applied to various engineering problems [19, 20].
Attractive features of these networks in industrial applications can be summarised as follows:
a) The automatic construction of non-linear mapping functions from multiple input data to
multiple output data is possible.
b) The trained network attains a capability of "generalisation", i.e. a kind of interpolation, such
that a properly trained network estimates appropriate output data even for input data sets
not belonging to the training patterns.
c) The trained network operates quickly in an application process.

Figure 12: Multilayer perceptron structure (adapted from [15]): input layer (layer 0), hidden layers h, h+1, ..., output layer H

Disadvantages
There are some disadvantages in using neural networks (NN) for data mining. Learning
processes in NN are very slow compared (e.g.) to symbolic learning systems. In benchmarks
available in the literature regarding the ID3 learning algorithm [13], NN are outperformed by a
factor of 500 to 10,000.
Knowledge generated by NN is not explicitly represented in the form of rules or conceptual
patterns, but implicitly in the network itself, as a vast number of weights. One of the objectives
of data mining is to generate knowledge in a form suitable for verification or interpretation by
humans.

There has been some research on transforming this knowledge into a format better suited for
human reading, but this mainly concerns single-layer networks, which model simple, linear
functions.
Moreover, it is difficult to incorporate any domain knowledge or user interaction in the
learning process. Hence NN perform best in areas where no additional information is available,
which is generally not the case in data mining.

3.3.1 An Example: Prediction of creep-induced failure


In this section, analyses of case studies on structural component failures in power plants using
hierarchical (multilayer) neural networks are described. Using selected test data about case
studies stored in the structural failure database of a knowledge-based system, the network is
trained to predict possible failure mechanisms such as creep-, overheating (OH)- or overstressing
(OS)-induced failure. It should be noted here that, because of the shortage of available case
studies, an appropriate selection of case studies and input parameters to be used for network
training was required to attain high accuracy. Collecting more case studies will, however,
resolve such problems and improve the accuracy of the analyses. An analysis module for case
studies using the neural network has also been developed, and successfully implemented in a
knowledge-based system.
A three layer neural network employing the back propagation algorithm with the momentum
method is trained in such a way that it can predict possible failure mechanisms, inferring
operating conditions, component dimensions, material properties and others.
The attention is focused on the prediction of either creep-, overheating (OH)-, or overstressing
(OS)-induced failure. However, the analysis method presented here can also be applied to the
prediction of erosion-, corrosion- and fatigue-induced failures if case studies and related
information are available. Overheating-induced failure is defined as "failure caused by a
temperature higher than the calculated operating temperature due to different reasons", while
overstressing-induced failure is defined as "failure caused by a stress higher than the
calculated stress due to different reasons". The failure causes were identified through careful
observation of the change of material micro-structure in each case study.

3.3.1.1 Selection process of 36 case studies


The network is trained to predict the possible occurrence of creep-induced failure, inferring
operating conditions and some other information. At first, 41 case studies, which contain
complete information except for the number of start-up/shutdown cycles, are selected out of 72
case studies. Then 36 case studies are selected and utilised to train the network.

3.3.1.2 Network architecture and input / output data


An ordinary three-layer network is employed. Operating conditions and other parameters are
given to the input units of the network, while the Yes/No value regarding the occurrence of
creep-induced failure, i.e. 1 or 0, is shown to the network as a teacher signal. Through some
preliminary tests, the network parameters are determined as follows: the learning rate = 0.1,
the momentum factor m = 0.9, the constant of the sigmoid function U0 = 1, the range of initial
weights = -1 to 1, the number of hidden units = 10.
The network training is stopped when the estimation capability for both training and test
patterns reaches an almost steady state. That is, the total number of training iterations roughly
ranges from 5,000 to 10,000. Several combinations of input parameters are examined. Three
typical combinations are shown below:
Combination 1:
a) Operating temperature divided by creep temperature after operating hours (T/Tc),
b) Stress factor divided by creep rupture stress at operating temperature after operating
hours (σ/σc), and
c) Logarithmic operating hours (Log10 H),
where the stress factor is simply defined as σ = 0.5 P (d - t) / t, which is a nominal hoop
stress, using operating pressure P (MPa), pipe diameter d (mm) and pipe thickness t (mm).
Combination 2:
T/Tc, σ/σc and Log10 H, which are the same as in Combination 1, plus T, P, d, t and
Material Classes, i.e. carbon steel (CA), low-alloy steel (LA), high-alloy steel (HA)
or austenitic steel (AU).
Combination 3:
Log10 H, T, P, d, t and Material Classes.
Among those parameters, H (hours), T, P, d, t and a material name (or a material class) are
included in the original case format [17], while the material properties are taken from a
material database of the knowledge-based system. All input parameters are normalised into a
unit range of (0, 1), parameter by parameter, before being given to the network.

Table 5: Results of prediction of creep-induced failure using the neural network

Input parameters                   Output            Nr. of      Learning patterns 1)        Test patterns 1) 2)
                                                     iterations  all cases    creep cases    all cases    creep cases
T/Tc, σ/σc, Log10 H                Creep/Not creep   10,000      89% (32/36)  100% (19/19)   78% (28/36)  84% (16/19)
T/Tc, σ/σc, Log10 H, T, P, d, t,   Creep/Not creep    5,000      97% (35/36)  100% (19/19)   86% (31/36)  95% (18/19)
  Material Classes
Log10 H, T, P, d, t,               Creep/Not creep    5,000      94% (34/36)  95% (18/19)    67% (24/36)  79% (15/19)
  Material Classes

1) The first number in brackets denotes the number of successful predictions, while the second denotes
the total number of cases.
2) The capability of generalisation, i.e. the capability of estimation for test patterns, is carefully
examined for all 36 cases, repeatedly taking one case as the test pattern.
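
The unit-range normalisation mentioned above is a per-column min-max scaling; a minimal
sketch in Python (the numerical values below are invented for illustration only):

    import numpy as np

    def to_unit_range(X):
        # scale each input parameter (column) into (0, 1) independently,
        # as done before presenting the patterns to the network
        lo, hi = X.min(axis=0), X.max(axis=0)
        return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

    # hypothetical rows with columns Log10 H, T, P, d, t
    X = np.array([[4.5, 540.0, 18.0, 356.0, 36.0],
                  [5.1, 565.0, 25.0, 219.0, 25.0]])
    print(to_unit_range(X))
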

3.3.2 Results
The results obtained using the three combinations of input parameters are shown in Table 5. It
can be clearly seen from the table that the training patterns of case studies are easily learned by
the network. The network also has to attain a capability of "generalisation". However, this last
feature is not easy to achieve in the present problem because of the small number of
available case studies.
The following procedure is utilised to carefully examine the capability of generalisation. Given
the 36 cases, 35 cases are given to the network as training patterns. After the training
process, it is tested whether the network predicts or classifies the one withheld test pattern
correctly (a sketch of this loop follows below). All the 36 cases are taken as a test pattern in
turn. In the table, the score for success is counted when the output is greater than 0.6 for the
correct answer of 1, or when the output is smaller than 0.4 for the correct answer of 0.
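
This leave-one-out loop can be sketched as follows. The helpers train_network and predict are
hypothetical stand-ins for the backpropagation training (with the parameter values quoted in
section 3.3.1.2) and for the forward pass of the trained network; the 0.6/0.4 thresholds follow
the text.

    def leave_one_out(patterns, targets):
        # train_network and predict are hypothetical stand-ins, not part
        # of the system described in this paper
        hits = 0
        for k in range(len(patterns)):
            train_x = patterns[:k] + patterns[k + 1:]   # the other 35 cases
            train_t = targets[:k] + targets[k + 1:]
            net = train_network(train_x, train_t, learning_rate=0.1,
                                momentum=0.9, hidden_units=10, iterations=5000)
            y = predict(net, patterns[k])               # the withheld case
            # success: output > 0.6 for a creep case (1), < 0.4 for a 0
            if (targets[k] == 1 and y > 0.6) or (targets[k] == 0 and y < 0.4):
                hits += 1
        return hits / len(patterns)                     # e.g. 31/36 = 86%
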
It is clear from the table that the neural network successfully predicts the occurrence of creep-
induced failure for all the combinations of input parameters. Among them, Combination 2
gives the best result, i.e. fewer iterations and higher accuracy. It is also clearly seen that
Combination 3, which excludes T/Tc and σ/σc, results in less accurate prediction. These two
parameters seem to play an important role in this prediction.

4. Development of an intelligent data mining system


In the framework of two BRITE projects (BE 5936 ORACLE and BE 5245 C-FAT), parts of
the general structure are under development. In the following sections an overview of the two
systems is given.

4.1 FRACTAL
Automatic Data Assessment (ADA) is a tool developed for supporting the analysis of present
cases in a case studies database by means of neural networks. The network is
automatically configured and trained. The matrices of inputs and outputs consist of a selected
number of influence parameters and objectives, previously defined in collaboration with
corrosion experts for several fields.
For the FRACTAL system (BRITE project BE 5936 ORACLE), the problem of stress corrosion
cracking of turbine rotors and disks was selected for the qualification of automatic data
assessment [3].
A typical ADA session can be summarised in the following points (a sketch of the eligibility
check is given after this list):
- The end-user fills in the parameters of his present case in the database, using the available
  forms. The definition of mandatory fields guarantees a minimum of data.
- By clicking on the menu item "Query relevant cases", a default query definition is provided
  by the system, which can be changed by the end-user. This leads to a subset of relevant
  cases relating to the present case.
- By clicking on the menu item "Automatic Assessment", a check for possible neural network
  analysis begins:
  - The starting point is a pre-defined master matrix of input and output values for a
    specific corrosion problem (e.g. SCC at turbine parts), to ensure a problem analysis
    structure that is correct from the point of view of corrosion engineering.
  - This matrix is then compared with the existing data in the subset. If possible, the
    system proposes suitable matrices for the neural network analysis, assigned to ranks,
    which allow the highest possible number of data records in the queried subset of
    data to be used. The user selects a matrix by rank and may remove some of the
    input and output parameters. Criteria:
    - at least 1 output parameter is available,
    - more than 50% of the queried relevant data sets are used for the neural
      network analysis,
    - at least 3 times more data sets than input parameters are available.
  - The system checks whether the parameter combination of the present case allows an
    interpolation by the use of neural networks; otherwise the neural network analysis
    procedure is cancelled.
  - The system creates a suitable network architecture (criteria: accuracy of
    prediction in a combined training and test prediction phase, learning rate, ...).
- The neural network prediction for the present case is finally made and displayed together with
  the number of case studies, the used input parameters and the list of all possible relevant
  influence parameters from the master matrix, serving as a basis for the further detailed analysis.
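
The three matrix-selection criteria listed above translate directly into a small eligibility
check; the following Python function is an illustrative sketch, not the actual FRACTAL code:

    def matrix_is_eligible(n_usable_records, n_queried_records,
                           n_inputs, n_outputs):
        # eligibility of a candidate input/output matrix for neural network
        # analysis: at least one output parameter, more than half of the
        # queried relevant data sets usable, and at least 3 times more
        # data sets than input parameters
        return (n_outputs >= 1
                and n_usable_records > 0.5 * n_queried_records
                and n_usable_records >= 3 * n_inputs)
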
The neural networks used in FRACTAL are automatically created according
- to the specific problem to be analysed ("Master Matrix"),
- to the available data in the present case to be assessed,
- to the available data in the relevant case studies in the database,
- to the user's selection, based on his expertise about the problem.
Nevertheless, there is a number of neural network parameters to be set up in order to obtain
reliable results from the neural networks. Because the end-user cannot be expected to have
experience with these settings, an automatic evaluation and optimisation of these parameters is
performed by the system.
Figure 13: Two screenshots of the FRACTAL wizard (automatic assessment dialog with case
parameters such as stress intensity, strength level, operating temperature and crack growth rate)

4.2 Expert Miner


Database mining is one of the fastest growing technologies of recent years, and probably of the
next years, in the field of data processing.
As the gigantic flow of literature on data mining shows, the non-expert has to deal with a
number of methods and techniques, which are all already available as software packages (see
e.g. [14]) but not equally effective in different cases.
The need to provide everyone with the capacity to explore his own data sets without being a
professional analyst and, on the other side, to support the professional analyst in his everyday
job, leads to a solution based on intelligent software that advises on the choice of the right
mining method on the basis of the data available and the required results.
This module, which is under development in the BRITE project BE 5245 C-FAT [4], will enable
the user, a trained technician in his/her own field (in the case of C-FAT a metallurgist), to
perform data mining tasks using advanced techniques. The system will support the user
through a KBS that analyses the user's requests in terms of input-output data, resulting model
requirements and data available, returning advice on which method or technique is more
suitable to solve the current problem. As shown in the previous sections, the application of
different techniques can lead to the formulation of useful engineering models. The analysis
illustrated in each of the three examples was performed by an expert in the particular technique.
No one can assure that a different technique could not bring a better result. Moreover, the
domain expert, e.g. the plant engineer, had no possibility to perform the analysis personally,
but was only indirectly involved, furnishing some bits of domain knowledge.
To bring these new methods into industrial practice, a not-so-steep learning curve should be
offered directly to the plant engineer. This can be achieved by realising intelligent systems that
can support an end-user in managing these powerful but not always easy-to-use tools. A
first step in the direction of such a new advanced system is illustrated in Figure 14.
The end-user will formulate the objectives of his analysis, in terms of (e.g.) relationships to be
searched. The Advisor, acting as an intelligent interface between the user and the set of
methods/databases, will check which data are available. On the basis of the data and of the type
of task to be accomplished, the applicable methods will be chosen and their effectiveness in the
particular case evaluated.
Figure 14: The Advisor (checks the possibility of application and the effectiveness of use of
methods such as ID3/CN2, CBR and NN, issues intelligent queries and analyses the data
available in the database, leading to the extracted model)


The Advisor KBS will include the rules that are usually applied in choosing a search method:
the methods' applicability in terms of their minimal requirements to operate, in terms of known
drawbacks with particular data sets, or in terms of suitability for particular tasks. For example,
if assessed results are not available, it is not possible to apply supervised learning, and
unsupervised learning techniques (e.g. clustering) should be used instead; a toy version of such
a rule is sketched below.
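
The following Python sketch illustrates such an Advisor rule; the method names are purely
illustrative and do not reproduce the actual rule base:

    def advise(assessed_results_available):
        # supervised methods need assessed (labelled) results; without
        # them only unsupervised techniques such as clustering apply
        if assessed_results_available:
            return ["decision-tree induction (ID3/CN2)", "neural network",
                    "case-based reasoning"]
        return ["clustering (e.g. fuzzy c-means)"]
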
Currently a research effort is starting to provide some performance indexes to assess which
approach should be used in the analysis of the data available. Tests on small data samples
could give a first advice in this respect; intelligent querying and incremental browsing
optimisation techniques are going to be integrated in this part of the module.
Interaction with the user is also important in the Advisor behaviour. If, e.g., a clustering
method has to be applied, the feature selection activity is a very critical one. The system, on
the basis of a first search in the database, will propose a set of features. The user can, at this
point, select some of them using his/her domain knowledge.
The use of appropriate tools will enhance the user interface, making it more intuitive and
effective. An example is the presence of intelligent flowcharts, i.e. flowcharts which can
interact with the user, asking for input and proposing different (pre-programmed) action paths
(see Figure 15).
The authors consider the friendliness of the user interface an important issue for the system to
be accepted in industrial practice. It is a matter of fact that such an innovative system, if
not supported by ease of use, will not even be considered in a normally conservative
environment like that of the power and process industry. Nonetheless, there is growing industrial
interest in such systems, because the possible economic advantages can be quantified at several
millions of dollars per year in some cases.
4.3 Technical and economical evaluation of the integrated system


Recurring failures, i.e. repeated failures with similar root causes and damage/fracture
mechanisms, represent more than 90% of all fatigue and creep related failures in power and
process plants.
The traditional means of ensuring plant safety and availability by efficient diagnosis is based on
learning from prior experience with operational problems, including service failures. However,
as power and process plants are becoming increasingly automated, they are operated with
smaller core personnel (still about 4 times larger in Europe than in the USA, on average, for
a given plant capacity). This trend threatens to seriously reduce the available resources for
trouble shooting and failure prevention.
Figure 15: Expert Miner knowledge-based flowchart screenshot (extraction of relevant data
sets; formulation of the initial feature set; gathering/generating of labeled and unlabeled data;
feature selection; cluster analysis or classifier design; evaluation of classification accuracy)


Consequently, the end-users will need additional fast and cost-effective support tools to
improve the effectiveness of failure prevention and diagnosis, and to allow the storage,
utilisation and effective management of company-specific in-house experience.
This knowledge is now usually stored as large collections of case histories in paper form, and
mainly used for archiving purposes. The particular case histories targeted for the present
project proposal concern those related to components exposed to fatigue and/or creep loading
conditions. For the target components a failure can represent huge direct and/or indirect
losses. For example, a recent steam pipe failure in a German power plant involved over 5
MECU in replacement costs, over 3 MECU in costs for investigation and analysis, and over 10
MECU indirect costs due to loss of production.
Due to the lack of suitable means of managing large quantities of information, this knowledge
has mainly been transferred through failure analysis experts (as personnel training or as item-
specific reports), technical articles on the subject in relevant engineering journals, or
publications on failure cases and analysis. These vehicles of transfer are limited in scope and
content, and are also inefficient for solving immediate specific problems. For example, the
most extensive books on industrial failure analysis, such as the Metals Handbook (volume on
Failure Analysis and Prevention), only contain references to a few hundred failure cases, with
only sketchy background information. In the absence of appropriate means to manage the
bulk of failure cases, it is today not possible to utilise failure cases efficiently, even in-house,
for solving present failure analysis problems.
Through the application of a suitable data mining system, the following economic benefits
can be foreseen (the evaluations are for a medium-size European utility with 3,000 to 4,000
MWe of installed thermal capacity):
- improved failure prevention and life management as well as reduced loss of production in
  European power and process plants, saving 1% of the related cost, or about 5 MECU per year;
- a reduction of maintenance costs in extensively automated plants with reduced O&M
  personnel, saving 1 to 4% of the related cost, or 1 to 4 MECU per year.
The benefit to the environment is anticipated through a reduction of the emissions from sub-
optimally operating plants, unexpected operational deviations and the consequent loss in
efficiency, which increase emissions per MWh.
For example, a typical fossil power plant produces every year, for each MWe, some 50 tons of
ash and slag, 75 tons of SOx (or desulphurisation by-products), 10 tons of NOx, up to 30,000 GJ
of waste heat, and about 1,000 tons of CO2.
Assuming that 1% of these emissions is avoided by more optimal operation and maintenance,
this amounts, for 3,000 MWe of installed capacity, to 1,500 tons of ash and slag, 2,250 tons of
SOx equivalent, 300 tons of NOx, 900,000 GJ of waste heat, and 30,000 tons of CO2 avoided
per year.
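
The arithmetic behind these figures is a direct scaling of the per-MWe emission rates by
3,000 MWe of installed capacity and the assumed 1% reduction; a minimal Python check:

    installed_mwe = 3000      # lower bound of the utility size considered above
    avoided = 0.01            # 1% of emissions avoided
    per_mwe_per_year = {"ash and slag (t)": 50, "SOx equivalent (t)": 75,
                        "NOx (t)": 10, "waste heat (GJ)": 30000, "CO2 (t)": 1000}
    for what, rate in per_mwe_per_year.items():
        print(what, rate * installed_mwe * avoided)
    # 1,500 t ash and slag, 2,250 t SOx, 300 t NOx,
    # 900,000 GJ waste heat and 30,000 t CO2 avoided per year
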

5. Conclusions
The most important result of the research described in the paper is probably the proof that
the applied advanced data mining methods (despite being based on pure numerics) are
capable of "discovering" and describing (analytically) the complex qualitative interrelationships
among the material parameters relevant for the life assessment of high-temperature power
plant components.
The architecture of a new integrated system for data mining in databases has been outlined. Its
objectives are not general, but mainly related to application in power and process plants for
problems of creep, fatigue and corrosion of metallic materials.
Some parts of the proposed architecture have already been realised, or are in various phases of
design/implementation. The increasing availability of large databases of material properties and
of failure case histories, together with the effectiveness of the preliminary results obtained,
makes the described system very attractive not only as an applied research tool but also as a
significant engineering tool in the industrial environment.

6. References
[1] A. Jovanovic, The SP249 Project and the SP249 Knowledge-Based System as Steps Towards the de
facto Standardisation of Power Plant Component Life Assessment Practice in Europe, 20th MPA-Seminar
October 6-7, 1994, Stuttgart, Germany
[2] A. Jovanovic, Multi-Utility Projects ESR-VGB and ESR-International: Integrated Knowledge-Based
Systems for the Remaining Life and Damage Assessment, Proc. SMiRT Post Conference Seminar Nr. 13,
Knowledge-based (Expert) System Applications in Power Plant and Structural Engineering EUR 15408
EN, JRC, pp. 459-464
[3] P. M. Schfer, A. Jovanovic, W. Bogaerts, M. Vancoille, FRACTAL - An Intelligent Software System
for Failure Analysis of Metallic Components Susceptible to Corrosion Related Cracking 20th MPA-
Seminar October 6-7, 1994, Stuttgart, Germany
[4] Holdsworth S.R. (1994) BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of Crack
Initiation and Early Crack Growth Behaviour Under Complex Creep-Fatigue Loading Conditions, In
Knowledge-Based (Expert) System Applications in Power Plant and Structural Engineering Jovanovic,
Lucia, Fukuda (Eds.), Joint Research Centre of European Commission, EUR 15408 EN, pp. 235-243
[5] S. M. Psomas , A. Jovanovic, H.P. Ellingsen, V. Moustakis, G. Stavrakakis, J. Brear Application of
machine learning methodologies for extraction of expert knowledge out of the structural failure database,
20. MPA-Seminar - SPRINT/KBS Dissemination Workshops, 6. und 7. October 1994, Stuttgart, Germany
[6] Aamodt A., Plaza E., "Case-Based Reasoning: Fundamental Issues, Methodological Variations, and
System Approaches", AICOM vol.7 Nr. 1, March 1994
[7] Wasserman P.D. (1989) Neural Computing - Theory and Practice, Van Nostrand Reinhold, New York
[8] Quinlan J.R. (1988) Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo,
California
[9] S. Yoshimura, S. Psomas, K. Maile, A. Jovanovic, H.P. Ellingsen, Prediction of Possible Failure
Mechanism in Power Plant Components using Neural Networks and Structural Failure Database, 20th
MPA-Seminar October 6-7, 1994, Stuttgart, Germany
[10] Poloni M., Jovanovic A., Maile K., Holdsworth S., Brear J. (1994) Fuzzy analysis of material
properties data: Application to high temperature components in power plants, 20th MPA-Seminar -
SPRINT/KBS Dissemination Workshops, October 6 and 7, Stuttgart, Germany, pp. 4.2.1-4.4.19
[11] Takagi T., Sugeno M. (1985) Fuzzy Identification of Systems and Its Applications to Modeling and
Control, Trans, on Syst. Man Cybern., Vol. SMC-15, No. 1, pp. 116-132
[12] Sugeno M., Tanaka K. (1991) Successive identification of a fuzzy model and its application to
prediction of a complex system, Int. Journal of Fuzzy Sets and Systems, No. 42, pp. 315-334
[13] M. Holsheimer, A.P.J.M. Siebes (1994) Data Mining: the search for knowledge in databases, Centrum
voor Wiskunde en Informatica, Amsterdam, The Netherlands, Report CS-R9406
[14] MIT GmbH (1995) DataEngine User Manual, Third edition, Aachen, Germany
[15] Pal S.K., Mitra S. (1992) Multilayer perceptron, fuzzy sets, and classification, IEEE Transactions
on Neural Networks, Vol. 3, pp. 683-697
[16] Bezdek J.C. (1987) Pattern recognition with fuzzy objective function algorithms, Plenum Press, New
York
[17] S. Psomas, G. Stavrakakis, V. Moustakis and A. S. Jovanovic, An expert system for avoiding repeated
structural failures in power plants, Proceedings of 1994 European Simulation Multiconferences, Barcelona,
Spain, June 1994, pp.480-485.
[18] A. S. Jovanovic, KBS-related research programs and software systems developed at MPA Stuttgart,
Germany, Proceedings of SMiRT Post Conference Seminar Nr. 13, Knowledge-based (Expert) System
Applications in Power Plant and Structural Engineering, Constance, Germany, Aug. 1993, EUR 15408
EN, JRC, pp. 175-187.
[19] G. Yagawa, S. Yoshimura, Y. Mochizuki and T. Oishi, Identification of crack shape hidden in solid
by means of neural network and computational mechanics, Proceedings of IUTAM Symposium on Inverse
Problems in Engineering Mechanics, Tokyo, Japan, May 1992, pp. 213-222, Springer-Verlag.
[20] M. J. S. Vancoille, H. M. G. Smets and W. F. L. Bogaerts, Intelligent corrosion management systems,
Proceedings of SMiRT Post Conference Seminar Nr. 13, Knowledge-based (Expert) System Applications in
Power Plant and Structural Engineering, Constance, Germany, Aug. 1993, EUR 15408 EN, JRC, pp.93-
112.
[21] Bezdek J.C., Pal S.K. (Eds.) (1992) Fuzzy Models for Pattern Recognition, IEEE Press
[22] Krishnapuram R and Keller J. M. (1994) Fuzzy and Possibilistic Clustering Methods for Computer
Vision, in Neural and Fuzzy Systems, S. Mitra, M. Gupta, and W. Kraske (Eds.), SPIE Institute Series, Vol.
IS 12, pp 133-159
[23] Dubois D., Prade H. (1988) Possibility Theory: An Approach to Computerized Processing of
Uncertainty, New York: Plenum Press
[24] Carruthers R.B., Day R.V. (1968) The Spheroidisation of some Ferritic Superheater Steels, Central
Electricity Generating Board, North Eastern Region, Scientific Services Department, Report
SSD/NE/R138.
[25] Zimmermann H.-J. (1991) Fuzzy Set Theory and Its Applications (2nd Edition), Kluwer Academic
Publishers, Boston, Dordrecht
[26] Wang Li-Xin (1994) Adaptive fuzzy systems and control: design and stability analysis, Prentice-Hall,
Englewood Cliffs, New Jersey
[27] Shammas M.S. (1987) Predicting the remanent life of 1Cr½Mo coarse-grained heat affected zone
material by quantitative cavitation measurements, Central Electricity Generating Board, Report
TPRD/L/3199/R87
[28] Neubauer B., Wedel U. (1983) Restlife Estimation of Creeping Components by Means of Replicas,
ASME International Conference on Advances in Life Prediction Methods, Albany, NY
[29] EPRI (1990) Field Metallography Research Leads to Improved Re-Examination Interval For Creep
Damaged Steampipes, EPRI First Use Report 197
CHAPTER 2

REVIEW OF SOME MAJOR SMIRT- AND KBS-RELATED RESEARCH PROGRAMS

INTELLIGENT APPROACHES FOR AUTOMATED DESIGN OF PRACTICAL STRUCTURES

S. YOSHIMURA, J. S. LEE and G. YAGAWA

Department of Quantum Engineering and Systems Science,


The University of Tokyo,
7-3-1 Hongo, Bunkyo, Tokyo 113, Japan
e-mail : yoshi@nucl.gen.u-tokyo.ac.jp

Abstract
This paper describes an automated computer-aided engineering (CAE) system for practical structures related to various
coupled phenomena. An automatic finite element mesh generation technique, which is based on fuzzy knowledge
processing and computational geometry techniques, is incorporated into the system, together with one of the commercial
finite element (FE) analysis codes, MARC, and one of the commercial solid modelers, DESIGNBASE. The system allows a
geometry model of concern to be automatically converted to different FE models, depending on the physical phenomena to be
analyzed, i.e. electrostatic analysis, stress analysis, modal analysis and so on. The FE models are then automatically
analyzed. Within the whole process of each analysis, the definition of a geometry model, the designation of local node
patterns and the assignment of material properties and boundary conditions onto parts of the geometry model are the only
interactive processes to be done by a user. The interactive operations take only a few minutes. The other processes, which
are time consuming and labour-intensive in conventional CAE systems, are fully automatically performed in a popular
engineering workstation environment. With the aid of multilayer neural networks, the present CAE system also allows us
to effectively obtain a multi-dimensional design window (DW) in which a number of satisfactory design solutions exist,
considering various coupled phenomena. The developed system is successfully applied to evaluate performances of an
electrostatic micro wobble actuator as an example. The quantitative conditions for operating the actuator are identified as
one of the typical CAE evaluations.

Keywords : Computer Aided Engineering, Micromachine, Fuzzy Knowledge Processing, Computational Geometry,
Finite Element Analysis, Neural Networks, Design Window, Micro Wobble Actuator

1. Introduction

Nuclear structural components such as pressure vessels and piping are typical examples of huge-scale
artifacts, while micromachines, whose size ranges from 10⁻⁶ to 10⁻³ m, are typical examples of tiny-scale
artifacts. They have their own missions, and are designed by different engineers in different engineering
fields. However, there are some common features in their design processes. These practical structures are
in general related to various coupled physical phenomena. They are required to be evaluated and designed
considering the coupled phenomena. A lot of trial and error evaluations are indispensable. Such situations
make it very difficult to find a satisfactory or optimized solution of practical structures, although
numerous optimization algorithms have been studied.
In accordance with the dramatic progress of computer technology, numerical simulation methods such as
the finite element method (FEM) are recognized to be key tools in practical designs and analyses. Computer
simulations allow for the testing of new designs and for the iterative optimization of existing designs
without time-consuming and considerable experimental efforts. However, conventional computational
analyses of practical structures are still labour-intensive and are not easy for ordinary designers and
engineers to perform. It is difficult for them to find a satisfactory or optimized solution of practical
structures utilizing such conventional computer simulation tools.
The present authors have been interested in automating analysis and design processes of practical

structures, and have developed several techniques and systems for structural design automation [1-5].
This paper describes a part of our latest research activities in this field, i.e. a novel CAE system to
effectively support the realization of three-dimensional practical structures consisting of free-form
surfaces. The system consists of two main portions. The one is an automated FE analysis system, while the
other is a design window (DW) search system using the multilayer neural network [6]. Here the DW means an
area of satisfactory solutions in a permissible design parameter space. In practical situations, a DW
concept seems more useful than one optimized solution obtained under some restricted conditions.
The present authors have proposed a novel automatic FE mesh generation method for three-dimensional
complex geometry [7, 8]. To efficiently support design processes of practical structures, this mesh
generator is integrated with one of the commercial FE analysis codes, MARC [9], and one of the commercial
solid modelers, DESIGNBASE [10]. This integrated system includes the following functions: (a) definition
of a geometry model, i.e. solid modeling including boolean operations such as union and intersection,
(b) attachment of boundary conditions and material properties directly to parts of the geometry model,
(c) fully automated mesh generation, (d) various FE analyses such as electrostatic, stress and eigenvalue
analyses, and (e) visualization of analysis models and results. With the aid of multilayer neural
networks, the system also allows us to automatically obtain a multi-dimensional DW in which a number of
satisfactory design solutions exist [4].
The developed system is applied to evaluate one of electrostatic micro wobble actuators [11]. Through the
analyses, fundamental performances of the system are discussed.

2. Outline of the System

The present CAE system consists of two main portions. The one is an automated FE analysis system, while
the other is a DW search system supported by the multilayer neural network. The details of these systems
will be described in this chapter. The object-oriented technique to consistently manage the number of
elemental processes appearing in a CAE evaluation of practical structures can be found in refs. [1-3].
The IF-THEN type empirical rules to find one of the satisfactory solutions can also be found in refs. [1-3].

2.1 Automated FE Analyses
The developed CAE system allows designers to evaluate detailed physical behaviors of structures through
some simple interactive operations on their geometry models. In other words, designers do not have to
deal with mesh data when they operate the system. A flow of analyses using the system is shown in Fig. 1.
Each subprocess will be described below. The details of the mesh generation part can be found elsewhere
[7, 8].

Fig. 1 Flow of automated FE analyses: definition of geometry model using DESIGNBASE; attachment of
material properties and boundary conditions & node density distributions to geometry model (both
interactive); calculation of global node density distribution; node generation based on bucketing method;
element generation based on Delaunay triangulation; attachment of material properties and boundary
conditions to mesh; finite element analysis (deformation, electrostatic, eigenvalue analyses, ...) using
MARC (all fully automatic)
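
Read as pseudocode, the flow of Fig. 1 amounts to a short pipeline. The Python sketch below is
purely illustrative: every function name is hypothetical and stands for one box of the figure,
and nothing here is taken from the actual system.

    def automated_fe_analysis(analysis_type):
        # hypothetical stand-ins for the boxes of Fig. 1; the first two
        # steps are interactive, the remaining ones fully automatic
        model = define_geometry_model()                 # DESIGNBASE
        attach_properties_and_node_patterns(model)      # interactive
        density = compute_global_node_density(model)    # fuzzy superposition
        nodes = generate_nodes(model, density)          # bucketing method
        elements = triangulate(nodes)                   # Delaunay triangulation
        mesh = attach_properties_to_mesh(model, elements)
        return run_fe_analysis(mesh, analysis_type)     # MARC
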

2.1.1 Definition of geometry model
A whole analysis domain is defined using one of the commercial geometry modelers, DESIGNBASE [10], which
has abundant libraries enabling us to easily operate, modify and refer to a geometry model. Any
information related to a geometry model can be easily retrieved using those libraries. It should be noted
here that different geometry models are constructed depending on the physical behaviors to be analyzed.

2.1.2 Attachment of material properties and boundary conditions to geometry model
Material properties and boundary conditions are directly attached onto the geometry model by clicking the
loops or edges that are parts of the geometry model using a mouse, and then by inputting actual values.
The present system accepts both Dirichlet- and Neumann-type boundary conditions.

2.1.3 Designation of node density distributions
In the present system, nodes are first generated, and then a FE mesh is built. In general, it is
difficult to control element size well for a complex geometry. A node density distribution over a whole
geometry model is constructed as follows.
The system stores several local nodal patterns, such as the pattern suitable to capture stress
concentration well, the pattern to subdivide a finite domain uniformly, and the pattern to subdivide a
whole domain uniformly. A user selects some of those local nodal patterns, depending on the analysis
purposes, and specifies their relative importance and where to locate them. The process is illustrated in
Fig. 2. For example, when either a crack or a hole exists solely in an infinite domain, the local nodal
patterns may be regarded as locally optimum around the crack tip or the hole. When these stress
concentration sources exist close to each other in the analysis domain, extra nodes have to be removed
from the superposed region of both patterns. In the present method, a global distribution of node density
over the whole analysis domain is then automatically calculated through their superposition using fuzzy
knowledge processing [12, 13]. When designers do not want any special meshing, they can adopt a uniformly
subdivided mesh. It is also possible to combine the present techniques with an adaptive meshing technique
[14].

Fig. 2 Superposition of nodal patterns based on fuzzy theory (membership functions for nodal patterns I
and II and their dominant areas along the location axis)

Fig. 3 Node generation based on bucketing method: (a) example of nodal density distribution, (b) example
of bucket decomposition, (c) node generation in one of the buckets (candidate and employed nodes)

2.1.4 Node and element generation
Node generation is one of the time consuming processes in automatic mesh generation. Here, the

bucketing method [15] is adopted to generate nodes which satisfy the distribution of node density over a
whole analysis domain. Fig. 3 shows its fundamental principle, taking the previous two-dimensional mesh
generation as an example without any loss of generality. Let us assume that the distribution of node
density over a whole analysis domain is already given as shown in Fig. 3(a). At first, a super-rectangle
enveloping the analysis domain is defined as shown in Fig. 3(b). In the three-dimensional solid case, a
super-hexahedron is utilized to envelop an analysis domain. Next, the super-rectangle is divided into a
number of small sub-rectangles, each of which is named a "bucket". Nodes are generated bucket by bucket.
At first, a number of candidate nodes with uniform spacing are prepared in one of the buckets as shown in
Fig. 3(c). The distance of two neighboring candidate nodes is set to be smaller than the minimum distance
of nodes to be generated in the relevant bucket. Next, candidate nodes are picked up one by one, starting
from the left-bottom corner of the bucket, and are put into the bucket. A candidate node is adopted as one
of the final nodes when it satisfies the following two criteria:
(a) The candidate node is inside the analysis domain (in/out check).
(b) The distance between the candidate node and the nearest node already generated in the bucket
satisfies the node density at the point to some extent.
Practically, the criterion (a) is first examined bucket by bucket. As for buckets lying across the domain
boundary, the criterion (a) is examined node by node. It should be noted here that the nodes already
generated in the neighboring buckets have to be examined for the criterion (b) as well, when a candidate
node is possibly generated near the border of the relevant bucket. Thanks to the bucketing method, the
number of examinations of the criterion (b) can be reduced significantly, and the node generation speed
remains proportional to the total number of nodes.
The Delaunay triangulation method [16, 17] is used to generate tetrahedral elements from the numerous
nodes produced within a geometry.

2.1.5 Attachment of material properties and boundary conditions to FE mesh
Through the interactive operations mentioned in section 2.1.2, a user designates material properties and
boundary conditions onto parts of the geometry model. Then these are automatically attached onto
appropriate nodes, edges, faces and volumes of elements. Such automatic conversion can be performed owing
to the special data structure of finite elements, such that each part of an element knows which geometry
part it belongs to. Finally, a complete finite element model consisting of mesh, material properties and
boundary conditions is created.

2.1.6 FE analyses
The present system automatically converts geometry models of concern to various FE models, depending on
the physical phenomena to be analyzed, i.e. stress analysis, eigenvalue analysis, thermal conduction
analysis, electrostatic analysis, and so on. The current version of the system produces FE models of
quadratic tetrahedral elements, which are compatible with one of the commercial FE codes, MARC [9]. FE
analyses are automatically performed. FE models and analysis results are visualized using a pre/post
processor of MARC, MENTAT [9].
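
A two-dimensional sketch of the bucket-based node generation of section 2.1.4 can be written in a
few lines of Python. It is illustrative only: for brevity the distance test scans all accepted
nodes, whereas the actual method restricts it to the relevant and neighbouring buckets, which is
what keeps the generation time proportional to the number of nodes.

    import numpy as np

    def generate_nodes(inside, spacing, candidate_step, bounds, n_buckets):
        # inside(p) is the in/out check (criterion a); spacing(p) gives
        # the required local node distance from the density distribution
        # (criterion b); both are simplified stand-ins here
        (x0, y0), (x1, y1) = bounds
        bx, by = (x1 - x0) / n_buckets, (y1 - y0) / n_buckets
        accepted = []
        for i in range(n_buckets):                  # nodes are generated
            for j in range(n_buckets):              # bucket by bucket
                xs = np.arange(x0 + i * bx, x0 + (i + 1) * bx, candidate_step)
                ys = np.arange(y0 + j * by, y0 + (j + 1) * by, candidate_step)
                for x in xs:                        # uniformly spaced candidates,
                    for y in ys:                    # starting from a bucket corner
                        p = np.array([x, y])
                        if not inside(p):           # criterion (a)
                            continue
                        d = spacing(p)
                        if all(np.hypot(*(p - q)) >= d for q in accepted):
                            accepted.append(p)      # criterion (b) satisfied
        return np.array(accepted)

    # toy usage: unit disk with node spacing growing away from the centre
    nodes = generate_nodes(inside=lambda p: p @ p <= 1.0,
                           spacing=lambda p: 0.08 + 0.15 * np.hypot(*p),
                           candidate_step=0.04,
                           bounds=((-1.0, -1.0), (1.0, 1.0)), n_buckets=8)
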
2.2 Design-Window Analysis Using Multilayer Neural Network
The design window (DW) is a schematic drawing of an area of satisfactory solutions in a permissible
multi-dimensional design parameter space. The DW seems more useful in practical situations than one
optimum solution determined under limited consideration. Among several algorithms, the Whole-area Search
Method (WSM) is employed here. As shown in Fig. 4, a lattice is first generated in the design parameter
space that is empirically determined by a user. All the lattice points are then examined one by one as to
whether they satisfy the design criteria or not. The WSM is the most flexible and robust, but the number
of lattice points to be examined tends to be extremely huge. Therefore, some of the present authors
proposed a novel method to efficiently search the DW using the multilayer neural network [4].

Fig. 4 Illustration of whole-area search method for design window (2D design window with satisfactory and
unsatisfactory solutions marked on the lattice)

This method consists of three subprocesses as shown in Fig. 5. At first, using the automated FE system
described in section 2.1, numerous FE analyses are performed to prepare training data sets and test data
sets for the neural network, each of which is a coupled data set of assumed design parameters vs.
calculated physical values. The neural network is then trained using the training data sets. Here the
design parameters assumed are given to the input units of the network, while the physical values
calculated are shown to the output units as teacher signals. The training algorithm employed here is
backpropagation [6]. After a sufficient number of training iterations, the neural network can imitate the
response of the FE system. That means the well-trained network provides appropriate physical values even
for unknown values of design parameters. Finally, a multi-dimensional DW is immediately searched using the
well-trained network together with the WSM.

Fig. 5 Schematic view of procedure of design window search using neural network (preparation of learning
data by FEM analysis; network training; utilization of the trained network as evaluation tool, searching
the design window by using the network as FEM analyzer)

3. Electrostatic Micro Wobble Actuator

The present CAE system is applied to one of electrostatic micro wobble actuators [11]. Despite a large
number of macro-electrostatic motor designs [18], few large-scale electrostatic actuators are in use
because of their insufficient electrostatic power. However, in a microscopic domain, an electrostatic
mechanism appears to be more advantageous to use [19]. A number of efforts have been made so far to build
electrostatic micro actuators [18-23]. However, most of the micro actuators have failed to generate enough
force for practical applications. A new concept of micro actuator is now demanded. The micro actuator
considered in the present study is designed as a part of a highly accurate positioning device [11]. This
actuator uses an electrostatic force as other micromotors do, and its fabrication process is almost the
same as those in ref. [24]. Compared with similar devices, the micro wobble actuator has several
advantages such as high performance, high reliability and high productivity. Materials employed here are
silicon and silicon compounds, which are well known as materials for semiconductor devices.

Fig. 6 Basic structure of micro wobble actuator: (a) top view (movable ring (rotor), spiral beams, anchor
to substrate, electrodes, insulation); (b) cross-section view

The basic structure of the present actuator is illustrated in Fig. 6. Fig. 6(a) is its schematic plane
view, and Fig. 6(b) its cross-section view. The micro actuator comprises a movable platform, i.e. rotor,
three spiral beams, and a plurality of electrodes, i.e. stators. Dimensions of its reference design are as
follows. The platform is a ring-like plate of approximately 200 μm in outer diameter and 150 μm in inner
diameter. The three

beams are disposed at the inner space of the ring, and connect the ring with the substrate. The electrodes
are placed in the circumferential direction around the platform. The inner diameter of the set of
electrodes is by 3 μm larger than the outer diameter of the ring. As each electrode is excited
sequentially, the driving force produced as an electrostatic attraction force is generated between the
ring and one of the electrodes, and the ring rolls along the inside surface of the electrodes with a
little distortion of the three spiral beams. When the ring rolls one cycle without slipping, it has
traversed a distance of the circumference of the inner surface of the electrodes subtracted by its own
circumference. Although the rotation of the ring is limited due to the spiral beams, the present actuator
has several advantages including high torque and low friction. This electrostatic micro wobble actuator is
to be used as a micro-positioner. Reference dimensions of the actuator are shown in Table 1. Material
properties are summarized in Table 2.

Table 1 Reference dimensions
Diameter of Plane Ring: 200 μm
Thickness of Plane Ring: 2.5 μm
Inner Diameter of Electrodes: 206 μm
Thickness of Spiral Beams: 2.5 μm
Width of Spiral Beams: 5.0 μm
Angle of Spiral Beams: 360 deg.
Thickness of Insulator: 1.0 μm

Table 2 Material properties
Material: Si
Young's modulus: 190 GPa
Poisson's ratio: 0.3
Yield stress: 7 GPa
Mass density: 2300 kg/m3
Permittivity of Insulator: 4.0

4. Results and Discussions

4.1 Automated FE Analyses
To examine fundamental performances of the present micro wobble actuator, the following behaviors have to
be analyzed:
(1) In-plane deformation of the ring with the three spiral beams caused by an electrostatic force.
(2) Out-of-plane deformation of the ring with the three spiral beams caused by its weight.
(3) Modal analysis of the ring with the three spiral beams.
(4) Electrostatic analysis of the air gap between the ring and one of the electrodes.
(5) Recovery process of the deformed ring with the three spiral beams.
Assuming the reference configuration and dimensions, the above phenomena are analyzed. The results are
described below.

4.1.1 In-plane deformation of ring with beams
Assuming the reference dimensions of the rotor listed in Table 1, its in-plane deformation is analyzed to
evaluate the quantitative relationship between a rotation angle and the torque necessary to rotate the
rotor within the elastic limit of the beams. Fig. 7 illustrates the boundary conditions of the present
analysis. In reality, the rotor is attracted to contact with one of the electrodes through an
electrostatic force. In this analysis, a displacement-controlled force with the magnitude of the distance
between the rotor and one of the electrodes is applied to the rotor, considering its rotation along the
inner surface of the electrodes.

Fig. 7 Boundary conditions for in-plane deformation analysis of rotor (forced displacement at the rim,
fixed displacement at the anchors, AB = CD)

Fig. 8 shows a geometry model of the rotor, while Fig. 9 shows a typical FE mesh, which consists of 24,655
tetrahedral quadratic elements and 50,583 nodes. Fig. 10 shows a calculated distribution of equivalent
stress at a rotation angle of 62 degrees, at which the deformation reaches the elastic limit of silicon,
i.e. σy = 7 GPa. The maximum stress occurs at the middle of some spiral beams. The stress at one junction
of the beam and the ring also reaches the same maximum value. Fig. 11 shows the relationship between the
calculated torque Ts and the rotation angle. It can be seen from the figure that the rotation angle of the
rotor is limited at about 62

degrees because of the elastic limit. Fig. 11 also tells us that the starting torque required is
0.42 × 10⁻⁹ Nm. This value will be referred to in the section on electrostatic analyses.

Fig. 8 Geometry model of rotor
Fig. 9 FE mesh of rotor
Fig. 10 Calculated distribution of equivalent stress
Fig. 11 Calculated torque vs. rotation angle (starting torque: 0.42 × 10⁻⁹ Nm)

4.1.2 Out-of-plane deformation of rotor due to its weight
Since the rotor is very thin compared with its diameter, i.e. 2.5 μm vs. 200 μm, the out-of-plane
deformation of the rotor caused by its weight is analyzed using the same FE mesh shown in Fig. 9. The
calculated maximum deflection is 2.049 × 10⁻⁵ μm. This value is apparently negligible compared with the
in-plane deformation of the rotor. This is one of the typical scaling effects of micro structures.

4.1.3 Modal analysis of rotor
Using the same mesh of Fig. 9, modal analyses of the rotor are performed. Fig. 12 shows the calculated
first and second eigen modes and eigen frequencies. It is found that the first eigen frequency of 46 kHz
is far beyond the minimum requirement of 10 kHz.

4.1.4 Electrostatic analysis of air gap between rotor and stators
To estimate electrostatic performances of micro actuators, in-plane two-dimensional FE analyses are often
performed because of the complexity of actual micro actuator geometry. Here, we consider the actual
three-dimensional geometry of the micro wobble actuator. Fig. 13 shows a geometry model and boundary
conditions of part of the air gap between the rotor and the stators. Here a sufficiently large area of the
air is modeled in order to approximately take into account infinite boundary conditions. As material
properties, the permittivities of the air and the insulator of SiO2

that acts as an elastic constraint on the value that may be assigned to a variable [57]), to similarity
value (the complement of the distance among possible worlds) [40, 41], to desirability or preference
values (the partial order induced by the membership function on the universe of discourse).

1.1.3 Probabilistic Reasoning Systems
Some of the earliest techniques found among the approaches derived from probability are based on
single-valued representations. These techniques started from approximate methods, such as the modified
Bayesian rule [26] and confirmation theory [47], and evolved into formal methods for propagating
probability values over Bayesian Belief Networks [38, 39]. Another trend among the probabilistic
approaches is represented by interval-valued representations such as Dempster-Shafer theory
[24, 45, 34, 49]. In all these approaches, the basic inferential mechanism is the conditioning operation.

1.1.4 Fuzzy Logic Based Reasoning Systems
Among the fuzzy logic based approaches, the most notable ones are based on a fuzzy-valued representation
of uncertainty. These include the Possibility Theory and Linguistic Variable Approach [57, 54], and the
Triangular-norm based approach [10, 9, 20].
The basic inferential mechanism used in possibilistic reasoning is the generalized modus-ponens [54],
which makes use of inferential chains (syllogisms).

1.2 Complementarity
The distinction between probability and fuzziness has been presented and analyzed in many different
publications, such as [4, 25, 32] to mention a few. Most researchers in probabilistic reasoning and fuzzy
logic have reached the same conclusion about the complementarity of the two theories [18]. This
complementarity was first noted by Zadeh [55], who in 1968 introduced the concept of the probability
measure of a fuzzy event, and by Smets, who extended belief functions to fuzzy sets [48].
Given the duality of purpose and characteristics between probabilistic and possibilistic methods, we
conclude that these technologies ought to be regarded as being complementary rather than competitive.
Because of time and space limitations, we will limit the scope of our discussion to cover the most notable
trends and efforts in fuzzy logic based reasoning systems.

1.3 Structure of the Discussion
In the next section we will briefly describe the development process common to both fuzzy-logic rule based
systems and fuzzy controllers. In sections three and four we will describe the technology development for
a fuzzy expert system and a fuzzy controller, respectively. In section five we will discuss a few
applications of fuzzy controllers. Finally, in section six we will discuss some future trends of this
promising technology.

2 KNOWLEDGE BASED SYSTEMS DEVELOPMENT
Fuzzy Expert Systems (FES) and Fuzzy Logic Controllers (FLC) are examples of Knowledge Based Systems
(KBS). As such they share many common tasks at the reasoning and at the application levels. We will
briefly describe these tasks and then we will proceed to illustrate the development of KBS and FLC.

2.1 The Reasoning Tasks
Three main reasoning tasks are common to KBS and FLC: the knowledge representation, the inference
mechanism applicable to the chosen representation and the control of the inference. In the next two
sections we will briefly cover these tasks for KBS and FLC respectively. For a more detailed description
of KBS reasoning tasks the reader is referred to reference [14], while the FLC reasoning tasks are
extensively covered in reference [19].

2.2 The Application Tasks
Following the rapid prototyping paradigm, the author has identified five application tasks for a KBS (see
reference [11]): (1) requirements and specifications: the knowledge acquisition stage; (2) design choices:
the KB development stage; (3) testing and modification: the KB functional validation stage; (4) optimizing
storage and response time requirements: the KB compilation; (5) running the application: the deployment
stage.
The same task decomposition applies to the development of a FLC application. The first three stages
correspond to the development of the FLC application (performance characteristics, order estimation, state
variable identification, rule base generation, validation and robustness analysis); the fourth stage
corresponds to the transition from development to deployment (fuzzy rule set compilation); the fifth stage
corresponds to the application deployment (porting and embedding the FLC on the host computer).

3 FUZZY EXPERT SYSTEM (FES)
For clarity's purpose, our discussion on FES will be anchored on RUM/PRIMO, a Fuzzy Expert System which
was developed by the author in 1987 [20] and further refined in the early nineties [6].

3.1 FES Reasoning tasks
As mentioned in the previous section, the reasoning tasks required by FES can be divided into three
layers: the knowledge representation, to determine issues such as the appropriate data structure for the
uncertainty information and meta-information, the input and termset granularity selection; the inference
mechanism, to determine the uncertainty calculi to perform the intersection, detachment, union, and
pooling of the information; and the control of the inference, to determine the calculi selection, the
conflict measurement and resolution, the ignorance measurement, and the resource allocation.

are assumed to be 8.854 × 10⁻¹² C/Vm and 3.542 × 10⁻¹¹ C/Vm, respectively.

Fig. 12 Calculated first and second eigen modes of rotor (Mode I: 46.2 kHz; Mode II: 101 kHz)
Fig. 13 Geometry model and boundary conditions for air gap between rotor and one of the stators
Fig. 14 FE mesh of air gap
Fig. 15 Calculated electric potential distribution
Fig. 16 Calculated starting torque vs. driving voltage (2D analytical vs. 3D FEM)

Fig. 14 shows the FE mesh used. The mesh consists of 8,301 tetrahedral quadratic elements and 14,964
nodes. To build this mesh, the following three nodal patterns are utilized: (a) the base nodal pattern, in
which nodes are generated with uniform spacing over the whole analysis domain, (b) a local-optimum nodal
pattern for the insulator, and (c) a special nodal pattern in which the density of nodes gets coarser
departing from the bottom face.
When one of the electrodes is excited, the rotor is electrostatically attracted, and comes into contact
with the insulator on the inner surface of the electrode. When the next electrode is excited, the rotor
revolves without slipping. Fig. 15 shows a calculated distribution of electric potential. Fig. 16 shows
the calculated starting torque vs. driving voltage curves. The solid curve denotes the three-dimensional
FE solution, while the broken curve

It can be seen from this figure that the starting torque is proportional to the square of the driving voltage, and that the 2D analytical solution is four to five times larger than the 3D FE one. Such a significant difference may be caused by the omission of electrical leakage in the 2D analytical solution. Considering that a torque of 0.42 × 10⁻⁹ Nm is necessary to start rotating the rotor, as given in section 4.1.1, it is obvious from Fig. 16 that a driving voltage exceeding 170 V is indispensable.

4.1.5 Recovery process of rotated rotor
When the rotor has been rotated to a certain extent and the voltage is disconnected, the deformed rotor recovers dynamically. During this recovery process the rotor should not touch any of the electrodes around it. To ensure this, the dynamic response of the deformed rotor is analyzed. It is found from the analysis that the rotor does not touch any of the electrodes.

4.1.6 Processing speed
Fig. 17 shows the measured processing times of all the FE analyses described in sections 4.1.1 - 4.1.5. These were measured on a popular engineering workstation, a SUN SPARCstation 10 (1 CPU, 50 MHz, 128 MB memory). In the electrostatic analysis it takes about 15 minutes to perform all interactive operations, i.e. the definition of the geometry model, the designation of local nodal patterns, and the assignment of material properties and boundary conditions. Node and element generation and an FE analysis are performed fully automatically in about 35 minutes.
In the analysis of the in-plane deformation of the rotor, it takes about 35 minutes to perform the interactive operations, including solid modeling, while about 70 minutes is required to perform node and element generation and an FE analysis fully automatically. For all the analyses, the designation of local nodal patterns and the assignment of material properties and boundary conditions take only about 20 seconds. That is, the developed CAE system allows designers to evaluate the detailed physical behavior of practical structures such as micromachines through a few simple interactive operations on geometry models.

Fig. 17 Processing times of various FE analyses for (a) the electrostatic analysis between the rotor and stator, (b) the in-plane deformation of the rotor, (c) the out-of-plane deformation of the rotor, (d) the modal analysis of the rotor and (e) the recovery process of the deformed rotor (interactive operations: definition of geometry model, designation of node density distributions, attachment of material properties and boundary conditions; automatic operations: generation of nodes and elements, attachment of material properties and boundary conditions to the mesh, finite element analysis; number of nodes: 14,964 for (a), 50,583 for (b) - (e))

4.2 Design Window Evaluation
Here we demonstrate how the DW search method is utilized. As an example, the ranges of dimensions within which the actuator is operable are schematically drawn as a DW.

4.2.1 Design parameters and geometrical constraints
The design parameters and geometrical constraints of the electrostatic micro wobble actuator considered here are as follows:
Width of the ring (Wr) : 20 - 30 μm
Thickness of the rotor (Tr) : 2.0 - 2.5 μm
Gap width between the rotor and stators (G) : 2.0 - 5.0 μm
Thickness of the insulator (Ti) : 0.2 - 1.8 μm
The design criteria employed are as follows:
(1) The wobble actuator can rotate within the limit of elasticity, i.e. the maximum equivalent stress σmax is less than the yield stress σy.
(2) In order to rotate the rotor, the starting torque calculated from the electrostatic analysis, τe, is larger than that calculated from the in-plane deformation analysis of the rotor, τs.

4.2.2 Network topology and training conditions
The multilayer neural network employed is of a three-layered type, as shown in Fig. 18. The network has four units in the input layer, ten units in the hidden layer, and
two units in the output layer. Through iterative training, i.e. the backpropagation learning algorithm [6], the network gradually tends to produce the appropriate output data, which are similar to the teaching ones. The two units in the output layer output the two kinds of starting torques, i.e. τe and τs. The four design parameters, Wr, Tr, Ti and G, are the input data for the network.
In the present example, 81 training patterns are prepared, i.e. all the combinations of (Wr = 20, 25, 30), (Tr = 2, 2.25, 2.5), (Ti = 0.2, 1, 1.8) and (G = 2, 3.5, 5). In addition, 10 test patterns are prepared to check the generalization capability; they are randomly selected within the possible range of each design parameter. All the input and output data are normalized to the range from 0.05 to 0.95.
Fig. 19 shows the history of the learning process in the case that the constant of the sigmoid function U0 is taken to be 0.6. Here the following mean error of estimation is employed for both training and test patterns:

Mean_Error = (1 / (M Nt)) Σ(p=1..Nt) Σ(k=1..M) | T_pk - O_pk |        (1)

where
M : number of output units
Nt : number of training or test data sets
T_pk : teacher signal to the k-th unit in the output layer for the p-th training or test pattern
O_pk : output signal from the k-th unit in the output layer for the p-th training or test pattern

The well-trained network is obtained at 200,000 learning iterations, when the mean error of estimation for the test patterns reaches its minimum value of 0.005. With this criterion, the estimation accuracy of the starting torque is confirmed to be within 0.5%.

Fig. 18 Network topology and its input/output data (input layer: 4 units for the design parameters; hidden layer: 10 units; output layer: 2 units for the two starting torques; teaching data: the torques from the FE solutions)
Fig. 19 Convergence of the training process (mean error of estimation for the training and test data vs. iteration number of learning, 0 - 200,000)

4.2.3 DW search
DWs are searched using the trained neural network. Fig. 20 shows the DW in the Ti, G and Wr space whose solutions satisfy that the starting torque τe is larger than 0.32 × 10⁻⁹ Nm. It can be seen in the figure that satisfactory solutions are found when Ti is small, G is small, and Wr is large.

Fig. 20 Design window when τe > 0.32 × 10⁻⁹ Nm (thickness of rotor: 2.0, const.; corner points of the parameter space: (20.0, 2.0, 0.2) and (30.0, 5.0, 1.8); number of searching points in the design window = 65,536; number of searched points in the design window = 376)
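As a concrete illustration of sections 4.2.2 and 4.2.3, the following NumPy sketch (our reconstruction, not the authors' code) builds the 4-10-2 network, forms the 81 training patterns from the tabulated parameter levels, scales all data to the 0.05 - 0.95 range, trains by backpropagation with the sigmoid constant U0 = 0.6, evaluates the mean error of Eq. (1), and then sweeps a 16 × 16 × 16 × 16 grid (65,536 points) with the trained surrogate in place of FE analyses. The teacher torques, learning rate, weight initialization and the 16-level grid are all our assumptions.

import numpy as np

rng = np.random.default_rng(0)
U0 = 0.6
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x / U0))

def scale(v, lo, hi):
    # Normalization to the 0.05 - 0.95 range used in the paper.
    return 0.05 + 0.90 * (np.asarray(v, dtype=float) - lo) / (hi - lo)

bounds = [(20.0, 30.0), (2.0, 2.5), (0.2, 1.8), (2.0, 5.0)]          # Wr, Tr, Ti, G
levels = [(20.0, 25.0, 30.0), (2.0, 2.25, 2.5), (0.2, 1.0, 1.8), (2.0, 3.5, 5.0)]

# 81 training inputs: all combinations of the tabulated parameter levels.
grid = np.array(np.meshgrid(*levels)).reshape(4, -1).T
X = np.column_stack([scale(grid[:, j], *bounds[j]) for j in range(4)])

# Teacher torques (tau_e, tau_s) would come from the 81 FE solutions;
# this stand-in trend is invented purely so the sketch runs end to end.
T = np.column_stack([scale(grid[:, 0] * grid[:, 3], 40.0, 150.0),
                     scale(grid[:, 1] * grid[:, 2], 0.4, 4.5)])

W1 = rng.normal(0.0, 0.5, (4, 10))       # input  -> hidden (10 units)
W2 = rng.normal(0.0, 0.5, (10, 2))       # hidden -> output (2 units)
eta = 0.5                                 # learning rate (assumed)

for _ in range(20_000):                   # the paper trains for 200,000 iterations
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    dO = (O - T) * O * (1.0 - O) / U0     # d sigmoid / dx = y (1 - y) / U0
    dH = (dO @ W2.T) * H * (1.0 - H) / U0
    W2 -= eta * H.T @ dO / len(X)
    W1 -= eta * X.T @ dH / len(X)

mean_error = np.abs(T - O).sum() / T.size              # Eq. (1)
print(f"Mean_Error on the training patterns: {mean_error:.4f}")

# DW search: exhaustively evaluate the trained surrogate instead of running
# 2 x 65,536 FE analyses. 16 levels per parameter gives 16**4 = 65,536 points,
# matching the count in the text (the level count itself is our assumption).
axes = [np.linspace(lo, hi, 16) for lo, hi in bounds]
pts = np.array(np.meshgrid(*axes)).reshape(4, -1).T    # (65536, 4)
Xs = np.column_stack([scale(pts[:, j], *bounds[j]) for j in range(4)])
tau = sigmoid(sigmoid(Xs @ W1) @ W2)     # scaled (tau_e, tau_s) estimates
# Criterion (2), tau_e > tau_s, applied here in scaled space for brevity;
# in practice the estimates are rescaled back to Nm before the comparison.
inside = tau[:, 0] > tau[:, 1]
print(f"{inside.sum()} of {len(pts)} grid points lie in this design window")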
Next, the sizes with which the micro wobble actuator is operable are searched, considering both the in-plane deformation of the rotor and the electrostatic phenomena. To rotate the rotor, τe has to be larger than τs. Both torques for different design parameters can be promptly evaluated using the trained neural network. Fig. 21 shows the DW in the Wr, G and Ti space when a voltage of 120 V is applied and Tr ranges from 2 to 2.5 μm; the number of searched points in this DW is 85. On the other hand, no satisfactory solutions are found when 100 V is applied; that is, the DW is null. Fig. 22 shows the DW in the G, Tr and Ti space when the driving voltage is 150 V and Wr ranges from 20 to 30 μm. This DW is much larger than that for 120 V.
When obtaining the above DWs with the WSM, we searched 65,536 points. This means that 131,072 FE analyses, i.e. stress and electrostatic analyses, would be needed if we did not use the neural network approach. In reality, we employed only 182 FE solutions (= (81 + 10) × 2) for the training and test patterns. Although the training process of the neural network takes some amount of time, the present DW search can be performed in an extremely short processing time.

Fig. 21 Design windows for 100 V and 120 V (criterion: starting torque from the electrostatic analysis > starting torque from the stress analysis; thickness of rotor: 2.0 - 2.5 μm; number of searching points = 65,536; number of searched points in the design window = 0 for 100 V and 85 for 120 V)
Fig. 22 Design window for 150 V (width of rotor: 20 - 30 μm; number of searched points in the design window = 18,420)

5. Conclusions
A novel CAE system for practical structures has been described in the present paper. The interactive operations to be done by a user are performed in a reasonably short time, even when solving complicated problems such as micro actuators. The other processes, which are time-consuming and labour-intensive in conventional systems, are performed fully automatically in a popular engineering workstation environment. A DW search approach supported by the multilayer neural network has also been described. This CAE system has been successfully applied to the evaluation of the performance of an electrostatic micro wobble actuator.

Acknowledgements
A part of this work was financially supported by a Grant-in-Aid of the Ministry of Education, Japan. The authors wish to thank Prof. Kiriyama at RACE, the University of Tokyo, and Mr. Shibaike at Matsushita Electric Industrial Co., Ltd. for their valuable discussions. They also thank Nippon MARC Corp. and RICOH Company Ltd. for their help.
References

[1] Yoshimura, S., Yagawa, G. and Mochizuki, Y., "Automation of Thermal and Structural Design Using AI Techniques", Engineering Analysis with Boundary Elements, 7 (1990) 73-77.
[2] Yoshimura, S., Yagawa, G. and Mochizuki, Y., "An Artificial Intelligence Approach to Efficient Fusion First Wall Design", Lecture Notes in Computer Science (Computer-Aided Cooperative Product Development), Springer-Verlag, pp. 502-521 (1990).
[3] Yoshimura, S., Yagawa, G. and Mochizuki, Y., "Design Automation Based on Knowledge Engineering and Fuzzy Control", International Journal of Computer-Aided Engineering and Software, in print.
[4] Mochizuki, Y., Yoshimura, S. and Yagawa, G., "Automated System for Structural Design Using Design Window Search Approach: Its Application to Fusion First Wall Design", Integrated Computer-Aided Engineering, in print.
[5] Ueda, H., Uno, M., Ogawa, H., Shimakawa, T., Yoshimura, S. and Yagawa, G., "Development of Expert System for Structural Design of FBR Components", Journal of the Atomic Energy Society of Japan, 37 (1995) (in Japanese).
[6] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning Representations by Back-propagating Errors", Nature, 323 (1986) 533-536.
[7] Yagawa, G., Yoshimura, S., Soneda, N. and Nakao, K., "Automatic 2- and 3-D Mesh Generation Based on Fuzzy Knowledge Processing", Computational Mechanics, 9 (1992) 333-346.
[8] Yagawa, G., Yoshimura, S. and Nakao, K., "Automatic Mesh Generation of Complex Geometries Based on Fuzzy Knowledge Processing and Computational Geometry", Integrated Computer-Aided Engineering, in print.
[9] MARC Analysis Research Corporation, MARC Manual K5.2 (1994).
[10] Chiyokura, H., Solid Modeling with DESIGNBASE: Theory and Implementation, Addison-Wesley (1988).
[11] Shibaike, N., "Design of Micro-mechanisms Focusing on Configuration, Materials and Processes", International Journal of Materials & Design, in print.
[12] Zadeh, L. A., "Fuzzy Algorithms", Information and Control, 12 (1968) 94-102.
[13] Zadeh, L. A., "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes", IEEE Transactions on Systems, Man and Cybernetics, SMC-3 (1973) 28-44.
[14] Yagawa, G., Yoshimura, S. and Kawai, H., "Automatic Large-Scale Mesh Generation Based on Fuzzy Knowledge Processing and Computational Geometry: With a New Function for Three-Dimensional Adaptive Remeshing", Transactions of the Japan Society of Mechanical Engineers, Ser. A, 61 (1995) 652-659 (in Japanese).
[15] Asano, T., "Practical Use of Bucketing Techniques in Computational Geometry", Computational Geometry, North-Holland, pp. 153-195 (1985).
[16] Watson, D. F., "Computing the n-Dimensional Delaunay Tessellation with Application to Voronoi Polytopes", The Computer Journal, 24 (1981) 162-172.
[17] Sloan, S. W., "A Fast Algorithm for Constructing Delaunay Triangulations in the Plane", Advances in Engineering Software, 9 (1987) 34-55.
[18] Omar, M. P. and Mullen, R. L., "Electric and Fluid Analysis of Side-Drive Micromotors", Journal of Microelectromechanical Systems, 1 (1992) 130-140.
[19] Tai, Y. C. and Muller, R. S., "IC-processed Electrostatic Synchronous Micromotors", Sensors and Actuators, A20 (1989) 49-55.
[20] Fujita, H. and Omodaka, A., "The Principle of an Electrostatic Linear Actuator Manufactured by Silicon Micromachining", IEEE Solid-State Sensors and Actuators (Transducers '87), Tokyo, Japan, pp. 861-864 (1987).
[21] Tai, Y. C., Fan, L. S. and Muller, R. S., "IC-Processed Micro-Motors: Design, Technology, and Testing", Proceedings of the 2nd IEEE Workshop on Micro Electro Mechanical Systems (MEMS), Salt Lake City, UT, USA, pp. 1-6 (1989).
[22] Hirano, T., Furuhata, T., Gabriel, K. J. and Fujita, H., "Design, Fabrication, and Operation of Submicron Gap Comb-Drive Microactuators", Journal of Microelectromechanical Systems, 1 (1992) 52-59.
[23] Fujita, H. and Omodaka, A., "The Fabrication of an Electrostatic Linear Actuator by Silicon Micromachining", IEEE Transactions on Electron Devices, 35 (1988) 731-734.
[24] Petersen, K. E., "Silicon as a Mechanical Material", Proceedings of the IEEE, 70, pp. 420-457 (1982).
INTELLIGENT NDI DATA BASE FOR A PRESSURE VESSEL

Shuichi Fukuda
Tokyo Metropolitan Institute of Technology
6-6, Asahigaoka, Hino, Tokyo, 191, JAPAN
Tel: +81-425-83-5111 ext. 3605
Fax: +81-425-83-5119
e-mail: fukuda@mgbfu.tmit.ac.jp

1. Introduction

This paper describes the activities of the committee on the Nondestructive Evaluation Data Base of the Japan Society of Nondestructive Inspection. This committee was set up, together with several other committees of the Japan Society of Nondestructive Inspection, in order to promote standardization in NDI technologies, with financial support from the Ministry of International Trade and Industry.
The committee developed several prototype systems based upon a survey. This paper describes the outline of the survey and the systems developed based upon it.

2. Survey

To clarify what should be done by this committee, we carried out a survey; we received 62 answers out of 167 questionnaires.

[1] Is an NDI data base necessary?
yes, very much (44%), yes (52%), not so much (5%), absolutely not (0%)
[2] Has an NDI data base been developed, or is one to be developed?
yes (34%), no (56%), no answer (10%)
[3] If yes in [2], what kind of data base?
image processing data base for inspection (3), flaw evaluation (3), welding defects (3), text data base (3), maintenance inspection for a cubic storage tank (1), online welding defect evaluation (1), visual inspection (1), remaining life evaluation (1), corrosion of pipes (1), knowledge base for inspection procedure determination (1)
[4] What do you think is important for developing an NDI data base?
evaluation techniques for UT flaw size, 3D image processing of flaws, image processing techniques, expert system techniques, keyword selection; the objective and user image of the data base should be made clear
[5] Would you prefer some organization to develop an NDI data base for you?
yes (6)
[6] Why do you not develop an NDI data base within your organization?
lack of manpower (15), cost performance problem (9), data are too diversified and very difficult to standardize (8), incorporated into a larger data base so there is no need to treat NDI data separately (7)
[7] What will be the problem if an NDI data base is developed by some official organization?
know-how will spill out (16), standardization of data formats (9), establishment of common needs (4), standardization on the part of computers (4)
[8] If a data base were developed by someone else, would you use it? (question to those who answered there is no need for an NDI data base)
yes, we will (2), no, we will not (1)
[9] What kind of data would you store in your data base?
facts (53%), papers (58%), knowledge (56%), others (15%)
The most common answer was to store facts + papers + knowledge.
[10] Where is your data base used?
lab (52%), factory (58%), field (56%), others (10%)
[11] At what stage do you expect to utilize the data base?
raw material (56%), intermediate products (44%), final products (68%), others (11%)
[12] What kind of data format is necessary?
sentences (58%), figures (74%), images (60%), tables (73%), others (2%)
[13] How would you use the data base?
as an encyclopedia (40%), as an information source on methods (55%), for standard data (81%), others (15%)
[14] Who will be the data base users?
NDI engineers (81%), NDI R&D engineers (64%), material R&D engineers (26%), production engineers (39%), design engineers (15%), others (10%)
[15] What do you think of when you hear the word "evaluation"?
inspection accuracy (34%), inspection reliability (48%), strength of material evaluation (39%), structural strength evaluation (42%), others (21%)
[16] What NDI techniques do you think are important for an NDI data base?
UT (41), RT (24), ET (21), MT (20), PT (15), SM (7), AE (6), others
[17] What conditions do you think are important for an NDI data base?
field (18), factory (8), lab (2)
[18] What material?
metal and non-metal (13), steel (18), metal (3), new material (4)
[19] What structure?
welded structures (11), vessels (8), piping (5), heat exchangers (3), steel raw material (6)
[20] What failures?
cracks (15), corrosion (14), weld defects (5), embrittlement (4)

3. Intelligent NDI Data Processing: Conceptual Design

This system is still at the stage of conceptual design. It aims at sampling data in real time and storing them in the data base. There is no fundamental difference in real-time processing between NDI data and keyboard inputs: both can be processed as discontinuous, variable-length data of several MB/sec; the only difference is the input speed. However, discontinuous high-speed analog inputs are very difficult to process on an ordinary computer.
Therefore, we made a conceptual design for such a system using a computer that possesses a real-time OS and an A/D conversion function. For the OS, we are considering real-time UNIX with a response time of 10 microseconds to 1 millisecond. At the same time, we adopt multiple CPUs and distribute the tasks symmetrically. Further, data are transferred without CPU intervention to increase the processing speed and to reduce the CPU load. Regrettably, we cannot yet find appropriate means to retrieve these data in an intelligent manner; at present we are processing them simply as data files with time stamp tags. Fig. 1 shows the conceptual design of this system.

4. System for Intelligent Text Processing

When we make decisions by referring, for example, to codes and manuals as to how we should carry out NDI, or when we produce NDI procedure specifications, or when we make up a report on NDI, the data we use are texts, figures, tables and/or photos. These data constitute pieces of knowledge and can be roughly divided into texts and figures (images). But figures, tables and photos are, without exception, referred to in the text. Thus, we can utilize the text for retrieving these pieces of information.
It is well known that the literature in a certain field, or a report made up by a certain person, always has a certain style and uses a certain vocabulary. Thus, if we examine the sentences for their concordance, we will be able to extract certain characteristics or pieces of knowledge. These vocabulary items are always linked to other specific words or sentences, and we can utilize this for classification. The report will turn into an object-oriented data base without any trouble once the task of classification is completed, if we let a certain word be an object, the classification be class and category, and the links be inheritance and message passing.
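As a minimal sketch of the concordance step (Micro OCP itself is a dedicated tool; this only illustrates the idea), the following Python fragment collects the words that co-occur with each word of a report, so that recurring word-to-word links can seed the classes and inheritance links of the object-oriented data base. The sample sentences are invented.

from collections import Counter, defaultdict
import re

def concordance(text, window=4):
    """Map each word to the words appearing within `window` positions of it."""
    words = re.findall(r"[a-z]+", text.lower())
    links = defaultdict(Counter)
    for i, w in enumerate(words):
        for n in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
            if n != w:
                links[w][n] += 1
    return links

report = ("Ultrasonic inspection of the welded joint detected a crack. "
          "The crack in the welded joint shall be evaluated per the code.")
links = concordance(report)
# Words persistently linked to "crack" suggest its class and associations.
print(links["crack"].most_common(3))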
We developed a prototype based on this idea, using as a sample MITI code number 501, "Technical Standard for Structure, etc. of Power Generating Nuclear Reactor Facilities."
The computers we used are a SUN SPARCstation 2 GX Plus, a Macintosh IIci and a NeXT Cube. The Japanese sentences are input using a scanner and the OCR software MacReader Japan. The sentences are analyzed using Micro OCP (Oxford Concordance Program). For object programming we used Smalltalk-80 and ObjectWorks for Smalltalk; HyperCard and the Expanded Tool Kit are also used.
As Smalltalk does not discriminate among data structures, we can easily take in image data. Fig. 2 and Fig. 3 show samples of the screen images.

5. NDI Data Base Using HyperCard

An NDI data base prototype was developed using HyperCard on a Macintosh. It is known that hypertext provides a very useful tool for developing a data base. The purpose of this prototype development was therefore to demonstrate how useful hypertext is for constructing a data base, rather than to store a large amount of data. Thus, the amount of data is limited to the extent needed to demonstrate the capabilities of the hypertext approach.
Fig. 4 shows the initial screen image, where a user can choose, using the buttons at right, the type of structure to which an NDI technique is applied. In the following example, a pressure vessel is chosen. An image of a pressure vessel then appears, as shown in Fig. 5, and we indicate with an arrow the location we wish to inspect. When the location is specified, the type of joint there appears, and the button for material also appears at right. Thus, even if a user does not know the name of such a type of welded joint, he or she easily finds it out by indicating the location with an arrow. This improves the user interface a great deal if the user is a non-expert, because he or she knows where to inspect but does not know the expert's name for that joint. If the user designates the materials used there, the computer prompts him or her to input the thickness data. When all the necessary information is given, the final screen image appears, as shown in Fig. 7. The top item shows the input type of structure, in this case a pressure vessel; the second shows the input material, low alloy steel; and the third, the thickness. The 4th item, the type of joint, is filled in automatically when the user indicates the joint he or she wishes to inspect, as already described. The 5th item, the working stress, is also filled in automatically in a similar manner if the load conditions are normal. The location specification by the arrow also automatically fills in the 6th item, where the applicable codes and standards are shown, and the 7th item, where the applicable inspection methods are shown. The 6th and 7th items correspond: the inspection methods specified by the code shown in the 6th item are shown in the 7th. In this example, the 6th item shows the Fire Prevention Law and the 7th item shows ultrasonic inspection, which is one of the inspection techniques this law specifies as applicable to the welded joint under these conditions. In the large box under these 7 items, the NDI procedure specification is shown.
When we push the 7th button, that of inspection techniques, another technique applicable under the Fire Prevention Law is shown; we can refer to all the applicable techniques by continuing to click the 7th button. And if we click the 6th button, we see what other kinds of codes and standards are applicable to the welded joint under these conditions.
This system can also be used for design support, because if we click the buttons for material (2nd), thickness (3rd), etc., we see what kinds of codes and standards should be referred to.

6. PC-based Network System

There were many requests saying that, as PCs are more popular than workstations among NDI engineers in industrial sectors, it would be more appealing to demonstrate what PCs can offer them in terms of an NDI data base and, further, as we are moving into a networked society, what PCs can do for them over a network.
To answer these demands, we have developed a networked data base on PCs which can be utilized over commercial telephone lines. This system was developed using BigModel, a very popular communication software package among PC network users. We stored text-based data such as reports, papers and some associated figures, as well as codes and standards. As the figures and texts are stored in different file formats, they cannot be referred to at the same time in this system; however, information retrieval based on string pattern matching compensates for this disadvantage, and we can easily retrieve relevant information from papers and reports, and easily refer to the relevant codes and standards, without knowledge of a word thesaurus.
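The retrieval step itself can be sketched in a few lines of Python (the record texts below are invented; the real system stores reports, papers, codes and standards):

records = [
    ("NDI00001", "ISO TC135 Nondestructive testing - penetrant methods"),
    ("NDI00002", "JIS ultrasonic testing of steel welds"),
    ("NDI00003", "NDIS paper on radiographic inspection of piping"),
]

def retrieve(query, records):
    """Return every record whose text contains the query string."""
    q = query.lower()
    return [(rid, text) for rid, text in records if q in text.lower()]

for rid, text in retrieve("ultrasonic", records):
    print(rid, text)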
Fig. 8 shows the schematic of the PC-networked NDI data base system, which was developed using the BigModel communication software. Fig. 9 shows the initial screen image, where all the committee member and system developer names are shown in Japanese. After inputting the password, the screen image of Fig. 10 appears as the main menu. A user then goes on to a submenu, shown in Fig. 11, where items [1] - [6] correspond to codes and standards, papers, research materials, research news and lectures. If we choose codes and standards, we can refer to ISO, JIS and NDIS, and we can look for the details of ISO, for example, by getting into another submenu. Fig. 13 shows a sample of information retrieval based on string pattern matching. Fig. 14 shows a sample retrieved table of papers with their titles, authors, volumes, numbers and pages.
Although this is a very simple system, it is very useful for NDI engineers in Japan, since most of them have access only to PCs, and workstations are still very limited among them.
But as PCs are approaching workstations more and more these days and the infrastructure for the internet is quickly improving, we are now developing, as our next step in this series, an NDI data base on the internet, which will permit the processing of images and sounds as well as texts.

Fig. 1 Conceptual Design for Ultrasonic Testing Equipment Using Real Time Unix
Fig. 2 Hierarchical Data Structure of the Nondestructive Evaluation Data Base (texts, figures and images have the same file structure)
Fig. 3 Sample of Information Retrieval from the Nondestructive Evaluation Data Base
Fig. 4 Initial Screen Image (choose the kind of structure)
Fig. 5 Screen Image (1)
Fig. 6 Screen Image (2)
Fig. 7 Final Screen Image
Fig. 8 Network System Using BigModel (PC98 host and PC98, DOS/V, Apple Macintosh and other PCs connected via RS-232C over commercial telephone lines)
Fig. 9 Initial Screen Image (committee member and system developer list)
Fig. 10 Main Menu
Fig. 11 Submenu (1)
Fig. 12 Submenu (2) (list of ISO TC entries, e.g. ISO TC135 Non-destructive testing, TC44 Welding and allied processes, TC42 Photography, TC107, TC112 Vacuum technology)
Fig. 13 Information Retrieval
Fig. 14 Sample of Information Retrieval (table of papers with titles, authors, volumes, numbers and pages)

Advances in Damage Assessment and Life Management of Elevated Temperature Plant - An ERA Perspective

R D Townsend

Abstract

In the power, petroleum and process industries the current highly competitive business climate has created a demand for methodologies and techniques which can assist plant operators in reducing operation and maintenance (O&M) costs and deferring capital expenditure while maintaining safety requirements. The primary requirement concerns high temperature components, which operate under the most hostile and frequently variable conditions of temperature and stress. For these components there is a need to improve the accuracy of remaining life predictions as a means of providing the basis for planned optimum replacement schedules and, where possible, to allow extension of operating lives beyond the original design life.

Previous work at ERA has established many of the techniques, methodologies and non-destructive methods for post-service condition assessment of pressure parts, weldments and turbine rotors which in service have experienced degradation due to creep, creep-fatigue, fatigue and embrittlement. These provide a sound basis for a deterministic approach to assessing the remaining life of a component.

A difficulty with these methods, however, arises from the uncertainties in the input data required for the remaining life calculations. The uncertainties arise through variations in the operational stresses and temperatures, changes and variations in materials properties, and the actual equipment condition. There is therefore a need to develop a risk-based approach to plant life management by coupling the predictive life models with uncertainty distributions for the input parameters, and thus to derive component failure probabilities as a function of future operation.

In this presentation a general description of the risk-based approach to life assessment is given. Examples of the methodology are provided through case studies on:

Fired Heaters
Reactors
Pipework

ERA TECHNOLOGY

Failure Probability Determination

GENERIC PROBABILISTIC APPROACH (Damage Modelling)

REMAINING LIFE = F (OPERATING CONDITIONS, MATERIAL DEGRADATION PROPERTY DATA BASE, CURRENT DAMAGE STATUS)

[Slide: schematic of damage accumulating over time, driven by stress, temperature and environmental / mechanical defects]
Failure Probability Prediction from Damage Analysis / Modelling

ERA's Life Prediction Methodologies / Software:
- fired heaters (Heater PLUS)
- reactors (Reactor PLUS)
- steam/hydrogen reformers (REFORM)
- process pipework
- steam raising plant
- rotating equipment

Damage Processes feeding the probabilistic damage/cracking assessment routes:
Creep, Corrosion, Carburization, Hydrogen Attack, Fatigue, Erosion, Temper Embrittlement
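As an illustration of how such a probabilistic route can couple a life model with uncertainty distributions, the following Python sketch (ours, not ERA's Heater PLUS / Reactor PLUS / REFORM software) propagates assumed scatter in stress, temperature and creep strength through a Larson-Miller rupture model by Monte Carlo sampling, and reads off the failure probability as a function of future operating time. Every number in it is an assumption chosen only to make the sketch run.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000

stress = rng.normal(60.0, 5.0, N)        # MPa, operating uncertainty
temp_K = rng.normal(823.0, 8.0, N)       # K (about 550 C), operating uncertainty
C = 20.0                                 # Larson-Miller constant (assumed)

# Material scatter enters through the Larson-Miller parameter at the
# reference stress, with an assumed stress sensitivity.
P_LM = rng.normal(20_600.0, 300.0, N) - 2_000.0 * np.log10(stress / 60.0)

# LM relation P = T (C + log10 t_r)  =>  rupture life in hours:
t_rupture = 10.0 ** (P_LM / temp_K - C)

for t_op in (50e3, 100e3, 150e3, 200e3): # hours of future operation
    p_fail = np.mean(t_rupture <= t_op)
    print(f"P(failure) after {t_op/1e3:.0f} kh: {p_fail:.3f}")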

An Automated Diagnostic Expert System for Eddy Current Analysis Using Applied Artificial Intelligence Techniques

Belle R. Upadhyaya
The University of Tennessee, Knoxville, TN, USA

Mohamad Behravesh
Electric Power Research Institute, Palo Alto, CA, USA

Wu Yan
The University of Tennessee, Knoxville, TN, USA

Gary Henry
EPRI NDE Center, Charlotte, NC, USA

ABSTRACT

A diagnostic expert system that integrates database management methods, artificial neural
networks, and decision making using fuzzy logic has been developed for the automation of steam
generator eddy current test (ECT) data analysis. The following key issues are considered: (1)
digital eddy current test data calibration, compression, and representation, (2) development of
robust neural networks with low probability of misclassification for flaw depth estimation, (3)
flaw detection using fuzzy logic, (4) development of an expert system for database management,
compilation of a trained neural network library, and a decision module, (5) evaluation of the
integrated approach using eddy current data, and (6) development of guidelines for the on-line
implementation of this technology. The application to field test data includes the selection
of proper feature vectors for ECT data analysis, development of a methodology for large eddy
current database management, artificial neural networks for flaw depth estimation, and a fuzzy
logic decision algorithm for flaw detection. A large eddy current inspection database from the
EPRI NDE Center is being utilized in this research towards the development of an expert system
for steam generator tube diagnosis.

The integration of ECT data pre-processing as part of the data management, the fuzzy logic flaw detection technique, and tube defect parameter estimation using computational neural networks constitutes the fundamental contribution of this research.

INTRODUCTION

Steam generators and heat exchangers are important components that affect the thermal
performance of power plants and chemical industry processes. There are thousands of tubes
inside a steam generator. For example, the B&W once-through nuclear steam generator has
about 15,000 tubes [1]. Tube degradation could occur due to thermal and mechanical stresses, fatigue and creep, wear and fretting, and corrosion. Depending on plant operating conditions, one or more of the above causes can damage the tubes [2]. Tube degradation accounts for most steam generator failures. Therefore, the inspection of steam generators is critical to the safe and economical operation of nuclear power plants.

In the past, eddy current inspection has proven to be fast and effective in detecting and sizing most of the degradation mechanisms that occur in steam generators, and it has therefore been used as the standard technique for steam generator tubing inspection. However, the eddy current phenomenon is described by three-dimensional, nonlinear, partial differential equations with very complicated boundary conditions, and modeling analysis methods are very difficult to apply to test data analysis. Only the visual observation technique is currently used for eddy current test data analysis. This technique requires highly trained personnel and is labor intensive, and human error in performing the analysis of test data is the main drawback to its successful application. Some other disadvantages of the eddy current inspection method include:
1. Eddy currents are affected by minor variations in the permeability of the test object.
2. Eddy currents are affected by the orientation of a flaw.
3. Sensitivity is much greater at the test surface closest to the test coil.
4. Multi-frequency tests produce large and complex databases.
The current research and development of eddy current inspection is directed in part towards more quantitative test results and conclusions, and towards reducing human interaction with the testing process [3].

The research undertaken here focuses on the problem of automating steam generator eddy
current data analysis using an integration of expert system, database management, artificial
neural networks, digital signal processing techniques, and decision making using fuzzy logic. In
recent years research in neural networks has been advanced to the point where several real-world
applications have been successfully demonstrated [4]. These include automated pattern
classification, signal validation, nuclear plant monitoring, plant state identification during
transients, estimation of performance related parameters, underwater acoustic signature
classification and text recognition. Fuzzy logic and expert systems have been shown to be highly
successful, reliable, and superior in performance to conventional systems [5]. Utilizing a fuzzy
logic representation offers the advantages of describing the state of the system in a condensed
form, developed through linguistic description and is convenient for applications in monitoring,
diagnostics, and control algorithms [6]. The integration of database management, neural
networks, fuzzy logic, expert systems, and digital signal processing techniques for the
automation of NDT signature analysis is a unique feature of this research. This research will also
provide a technology base for the safety assessment of system and subsystem technologies used
in nuclear power applications of artificial intelligence techniques.

THE EDDYAI EXPERT SYSTEM


An expert system called "EDDYAI" has been developed. This system integrates ECT database management, a flaw detection system using fuzzy logic, and defect parameter estimation using artificial neural networks. EDDYAI is developed on the PC Windows platform. It consists of a user interface, a rule base, a knowledge base and supporting modules. It combines all the analyses into a user-friendly diagnostic system. The overall system can perform the analysis of multi-frequency ECT data automatically. Figure 1 shows the major function blocks of the EDDYAI expert system.

Figure 1 The structure of the EDDYAI expert system (DRES-format ECT data, split into measurement data, general information and calibration data, feeds the peak detector, data representation, flaw detector and defect sizing modules; calibration comprises null point definition, magnitude calibration and phase calibration; a user interface and a knowledge/rule base support all modules)

ECT data file
This system uses multi-frequency, DRES-format ECT tube inspection data files. The DRES-format data file has three parts: index, calibration data, and measurement data.

User interface
The user interface provides selection of the eddy current inspection data file, display of related information and graphics, execution of flaw detection and defect sizing, and presentation of data analysis results.

Knowledge base
The knowledge base consists of fuzzy membership functions and the trained neural
networks for defect parameter estimation.
Rule base
The rule base consists of logical steps for data analysis and rules for decision making.

Data calibration
Data calibration can perform the null point determination, phase angle calibration and
magnitude calibration.

Peak detection
Peak detection scans the ECT measurement raw data and finds all the peaks.

Data representation
Data representation reorganizes the ECT measurement raw data using different data
representation algorithms.

Fuzzy flaw detection


The fuzzy flaw detection system prepares the fuzzy system input and fuzzy membership
functions, executes the fuzzy inference engine program, and finds the flaws in the ECT data.

Defect sizing
Defect sizing function block consists of trained neural networks for defect sizing.

ECT DATA MANAGEMENT AND CALIBRATION

Fifty-seven sets of multi-frequency ECT data were obtained from the EPRI NDE Center. These data files contain pitting, ODSCC, and field eddy current test data, and are stored in the DRES format. The DRES data have both calibration data and actual data. Figure 2 shows a typical impedance plane (resistance versus inductive reactance) trajectory of data from a differential eddy current probe transducer.

Figure 2 A typical impedance plane trajectory of data from an eddy current transducer

ECT Data Management
The size of each EPRI ECT data file is about 35 kbits. Each data file contains sixteen types of signals coming from eight measurement frequency channels. For such large, multi-frequency data files, it is necessary to develop a procedure to manage and compress the data. The data management system of EDDYAI has the following main components.
Calibration data or actual data: The user can perform the data calibration procedure by selecting the calibration data, or start the data analysis procedure by choosing actual data from the DRES-format data file.

Selection by measurement frequency: The user can select the ECT signal by measurement
frequency. Therefore, we deal with only 1/8 of the ECT measurement data for each data analysis
cycle.

Peak detector: The ECT data can be classified as normal data or unusual data. The ECT signal for a good tube with the same structure appears as a straight line; this kind of data is defined as normal data. An ECT signal that appears with peaks, curves, or big jumps indicates changes in tube structure (such as a tube support or tube end) or tube damage (such as pitting, thinning, etc.); this kind of data is referred to as unusual data.

Figure 3 Inputs and outputs of a peak detector (input sequence R; parameters Threshold and Width; outputs Count and Indices)

A peak detection technique is used to find unusual data parts. Figure 3 shows the inputs
and outputs of a peak detector. We analyze the input sequence R for valid peaks and keep a
Count of the number of peaks encountered and a record of Indices which locate the points at
which the Threshold is exceeded in a valid peak. A peak is valid when the number of
consecutive elements of R that exceed the Threshold is at least equal to the Width. We use
radii as the input sequence R. The radius is defined as the distance from the current position to
the null point.
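A direct transcription of this rule into Python might look as follows (the radii sequence is invented; Threshold and Width take the values 0.5 volt and 3 used later in the evaluation):

def detect_peaks(radii, threshold=0.5, width=3):
    """Count valid peaks and record the indices where the threshold is exceeded."""
    count, indices = 0, []
    run = []                        # indices of the current above-threshold run
    for i, r in enumerate(list(radii) + [0.0]):   # sentinel closes a final run
        if r > threshold:
            run.append(i)
        else:
            if len(run) >= width:   # run long enough -> one valid peak
                count += 1
                indices.extend(run)
            run = []
    return count, indices

radii = [0.1, 0.2, 0.9, 1.3, 1.1, 0.2, 0.7, 0.1, 0.6, 0.8, 0.9, 0.2]
count, idx = detect_peaks(radii)
print(count, idx)   # 2 valid peaks; the lone 0.7 excursion is rejected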

ECT Data Calibration


Eddy current inspection requires standard calibration specimens (tubes) with artificial
defects for initial instrumentation set-up and subsequent signal analysis and interpretation. These
tubes should be identical in material and size to tubes to be tested. Minimum calibration
requirements include inner diameter (ID), outer diameter (OD) and through-wall defects [7]. For
eddy current data analysis, there are three steps in data calibration: null point determination, phase
angle calibration and magnitude calibration.

Null Point Determination: The null point is the ECT signal in the flaw-free region of the calibration specimen. It is the origin for the phase angle and magnitude calculations. An accurate null point makes decisions based on the phase angle and magnitude information of the ECT signal more reliable. Two procedures to find the null point in the ECT calibration data have been developed in this project.

The first procedure for null point determination uses the mean value of the flaw free
region data as the null point. The drawback of this procedure is that the lift-off effect can change
the true null point and the equipment noise will also reduce the accuracy of the null point
estimation. A second null point determination procedure has been developed using the
intersection of phase angle slopes of outside diameter (OD) standard defects. In the ECT
calibration data set, there are five OD defects varying from 100 percent through wall depth to 20
percent through wall depth. These OD signals should share the same null point. Any two phase
angle slopes of these OD signals can determine the null point by finding the intersection of the
two slopes.

Phase Angle Calibration: In ECT signal analysis, phase angle information is used to find the
flaw locations and the flaw depth. In order to remove the effect of lift-off, the phase angles of
100% through wall OD signals for different frequencies should be placed at an angle of 140
degrees. If the phase angle value of a 100% through wall OD signal is not around 140 degrees, it
should be rotated to 140 degrees. This is the phase angle calibration procedure.

Magnitude Calibration: In ECT signal analysis, the magnitudes of the 20% through wall OD
signals are usually set to 4 volts. The magnitudes of all other signals are converted to voltage
scale by comparison with the 20% OD signal. This is the magnitude calibration procedure.
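Both calibration steps amount to one complex rotation and one scaling of the signal. A minimal sketch, with invented sample values:

import numpy as np

def calibrate(signal, od100_angle_deg, od20_peak_magnitude):
    # Phase calibration: rotate everything by the angle that brings the
    # measured 100% through-wall OD phase onto 140 degrees.
    rot = np.exp(1j * np.deg2rad(140.0 - od100_angle_deg))
    sig = signal * rot
    # Magnitude calibration: scale so the 20% OD signal spans 4 volts.
    return sig * (4.0 / od20_peak_magnitude)

raw = np.array([0.2 + 0.1j, 1.5 + 2.0j, -0.3 + 0.8j])   # invented samples
calibrated = calibrate(raw, od100_angle_deg=128.0, od20_peak_magnitude=2.5)
print(np.round(calibrated, 3))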

FUZZY LOGIC DECISION MAKING FOR FLAW DETECTION


In ECT data analysis, the decision for flaw identification and estimation may carry high uncertainty because of the large number of defects with overlapping patterns and because of the information from multi-frequency tests. Fuzzy logic may be used for decision making in this situation, which is characterized by uncertain and/or non-crisp information. Fuzzy logic is the logic of fuzzy (approximate) measurements and is believed to be similar to the human decision-making process. The beginning of fuzzy logic is most widely associated with Lotfi Zadeh, who in 1965 wrote the original paper formally defining fuzzy set theory, from which fuzzy logic emerged [8]. The important difference between the fuzzy logic approach and the traditional approach is that the former uses qualitative information, whereas the latter requires rigid mathematical relationships describing the process.

Fuzzy logic is characterized by a linguistic variable whose values are words or sentences in a synthetic language. For example, we can define temperature as a fuzzy variable taking the linguistic values low, medium and high; these values are called "fuzzy values." The definition of "low" or any other term depends on the user's judgment. In fuzzy logic, such a judgment is formulated by a possibility distribution function (taking values between 0 and 1) referred to as a "membership function." The key issues in fuzzy logic applications are:
(a) Information representation,
(b) Building fuzzy membership functions, and
(c) Developing the compositional rule of inference.

Flaw Detection System Using Fuzzy Logic


Fuzzy logic is used for flaw detection in steam generator tubing. The fuzzy logic flaw detection system comprises the system input, system output, fuzzification, defuzzification, membership functions, a rule base and rule evaluation. Figure 4 shows the organization of this system.

Figure 4 Structure of a flaw detection system using fuzzy logic (system input, fuzzification, rule evaluation, defuzzification, system output, supported by the membership functions)

System Input and Output: The system input is the status information (such as the phase angle values of the eddy current signals) about the external system (such as the test specimen). In the fuzzy flaw detection system there are four system inputs: the phase angle values of the four absolute signals for the different frequencies. The system output is the decision we want from the fuzzy system; in the fuzzy flaw detection system it should be one of the decisions "ID flaw", "OD flaw", or "Not a Flaw."

Membership Functions: Each system input is associated with a fuzzy set which contains the appropriate membership functions. Figure 5 shows typical membership functions of a flaw detection system input. Each system input has three fuzzy sets: ID flaw (ID), OD flaw (OD), and NOT a flaw (OT). Every fuzzy set has its own membership function, whose values lie between 0 and 1. The phase angle range is from 180 degrees to -180 degrees. The phase angle values at points A, B, C, D, E and F are determined by the phase angles of the absolute signals in the calibration data set. The system output has seven fuzzy sets: Negative Unknown (NN), ID flaw (ID), Possible ID flaw (PI), OD flaw (OD), Possible OD flaw (PO), Not a flaw (OT), and Positive Unknown (PN). The universe of the fuzzy sets varies from 0 to 360.

Fuzzification: Fuzzification is the process of estimating the degree of membership of an input. Once a system input is fuzzified, it becomes a fuzzy input which can be used for fuzzy inference.

Figure 5 Fuzzy sets and membership functions of a system input

Rules and Rule Evaluation: The rules inside a fuzzy inference system represent the relationships between system inputs and system outputs. The flaw detection decision can be made based on the phase angle information in the system input. In the fuzzy flaw detection system there are four system input signals: the high frequency signal, the primary frequency signal, the medium frequency signal, and the quarter frequency signal. In the development of the rules, the primary and medium frequency signals were used as the main inference signals, and the high frequency and quarter frequency signals were used as additional information for flaw detection. A total of 25 rules were developed for flaw detection decision making; they cover all possible input combinations.

Each rule has the form of an if/then statement. The 'if' side of the rule has one or more conditions; the 'then' side has one or more decisions. The conditions of the rules correspond directly to the degrees of membership (fuzzy inputs) calculated during the fuzzification process. Once the system inputs are fuzzified, the strength of each rule is computed using the minimum operation. The output of each rule is then multiplied by the rule's strength value, and the final result is obtained by performing a maximum operation among the outputs of all rules. This method is called max-product composition.

Defuzzification: After rule evaluation, more than one rule may be fired, so there may be more than one decision, each with a different strength. These decisions are linguistic variables, and they may conflict with each other. The defuzzification process is used to resolve these conflicts and to convert the linguistic variables to a crisp value based on the membership functions of the system outputs. The defuzzification is performed by the center of gravity method: the idea is to find the center of gravity of the shaded area that represents the output. The projection of the center of gravity onto the x-axis is the result of the inference. This result is then used in generating the decision for flaw detection.
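The whole chain of fuzzification, max-product composition and center-of-gravity defuzzification can be sketched compactly (the membership breakpoints and the two rules below are invented; the real system uses 25 rules over four frequency channels and the seven output sets listed earlier):

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

universe = np.linspace(0.0, 360.0, 721)
out_sets = {"ID": tri(universe, 0, 60, 120),
            "OD": tri(universe, 100, 180, 260),
            "OT": tri(universe, 240, 300, 360)}

def infer(primary_deg, medium_deg):
    # Fuzzification of the two main inference signals (invented breakpoints),
    # with the rule strength taken as the minimum over the premises.
    fuzzy_in = {"ID": min(tri(primary_deg, 100, 140, 180), tri(medium_deg, 100, 140, 180)),
                "OD": min(tri(primary_deg, 20, 70, 120),  tri(medium_deg, 20, 70, 120)),
                "OT": min(tri(primary_deg, 150, 200, 300), tri(medium_deg, 150, 200, 300))}
    # Max-product composition: scale each output set by its rule strength,
    # then take the pointwise maximum over all fired rules.
    agg = np.max([fuzzy_in[k] * out_sets[k] for k in out_sets], axis=0)
    # Centroid defuzzification: project the center of gravity onto the x-axis.
    return (universe * agg).sum() / agg.sum()

print(f"crisp decision value: {infer(65.0, 60.0):.1f}")   # lands near the OD peak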

Evaluation of the Flaw Detection System Using Fuzzy Logic


The fuzzy flaw detection system was tested using one set of calibration absolute data from the EPRI NDE Center. The calibration data set was first pre-processed using the peak detection procedure. In the data scan procedure, the Width was set to 3 and the Threshold was set to 0.5 volt; twenty peaks were found after the data scan.

In the fuzzy flaw detection system, the membership functions were established by using
the calibrated OD defect phase angles. As described in Figure 5, once the values of points A, B,
C, D, E and F are determined, the membership functions can be obtained. The phase angle
values are converted to the conventional form which uses the 180 degree axis as the 0 degree
axis. The twenty peaks were tested using the fuzzy flaw detection program. Table 1 shows the
results of the test. From the results, it can be concluded that the fuzzy system can detect flaws
with a high degree of success.

Test#  Data Location  Phase Angle Values            Desired Decision  Fuzzy Logic Decision
 1     125            118.4  195.7  273.3    9.8    Unknown           PN
 2     130            118.4  195.7  273.3    9.8    Unknown           PN
 3     207            177.5  108.5  154.2  161.2    OT                OT
 4     214            177.3  162.3  155.0  159.1    OT                OT
 5     293             39.6   40.0   40.0   40.0    OD                OD
 6     318            170.1  166.8  195.1  225.7    OT                OT
 7     322            167.2  166.9  197.0  226.1    OT                OT
 8     324            165.7  167.0  197.5  223.8    OT                OT
 9     351             86.9   68.9   56.3   47.8    OD                OD
10     406            102.3   81.0   63.3   51.8    OD                OD
11     459            121.5   98.4   76.9   60.8    OD                OD
12     511            154.5  122.9   95.2   71.8    OD                OD
13     530            138.6  162.1  201.9  233.5    OT                OT
14     532            139.1  159.1  199.4  233.5    OT                OT
15     545             61.3  154.2  205.4  240.4    OT                OT
16     563            351.2  348.3  205.3   19.2    Unknown           PN
17     596             45.4  349.8    2.8  276.1    Unknown           PN
18     598             51.3  355.0  298.9  276.3    Unknown           OT
19     614            168.5  158.4  167.8  187.7    Unknown           OT
20     651            170.1  164.3  175.9  195.4    Unknown           OT

Table 1 Results of the Preliminary Study of the Flaw Detection System


NEURAL NETWORKS FOR FLAW DEPTH ESTIMATION

Artificial neural networks (ANNs) provide general mapping between two sets of
information. This nonlinear mapping from data to data is very useful in associating information
pairs where a clear mathematical relationship is not available. Artificial neural networks have
been applied to the problems of pattern classification, signal validation, plant monitoring,
transient state identification in power plants, underwater acoustic signature recognition, and
many others [9]. A general architecture of a multi-layer neural network is shown in Figure 6. The
input layer requires a signature vector from measured data . The network output may be in the
form of a signature vector or a pattern classification index. The solid nodes in the figure are
processing elements in a neural network. The number of processing elements in a middle layer is
often determined experimentally.

[Figure 6: Architecture of a multi-layer neural network - an input layer, two hidden layers (Hidden Layer 1 and Hidden Layer 2) and an output layer delivering the pattern type or pattern parameters.]

The backpropagation network (BPN) algorithm is used in this research project. This
algorithm is the most widely used systematic method for supervised learning in multiple (three or
more) layer artificial neural networks. The mathematical basis for the backpropagation training of
ANNs is straightforward but involves several steps; it is very well documented in Ref. [10].


ECT Data Representation


For the artificial neural network approach to be effective in defect type identification and
defect parameter estimation, the information input to the system must have certain features. These
are (a) size of the data vector, (b) invariance to data scaling, (c) invariance to data orientation, and
(d) sensitivity of defect type and defect parameters to input signatures.

The current neural network software has the following requirements.


(a) All data files must have the same number of elements in the input vector.
(b) The total number of elements in the input layer, hidden layer and output layer must not
exceed 350.
(c) All training and testing data files must be normalized.
In order to meet the above requirements, raw eddy current test data must be properly
represented. Data representation methods involve reorganizing the raw measurement data using
(1) selected raw data, (2) magnitude and phase of raw data, (3) linear integral value of raw data,
(4) sequence of radii from the center of gravity to the closed contour of the shape, and (5)
segmentation of raw data.

Selected Raw Data Representation: The eddy current test data file contains thousands of data
points. Only the data close to a defect are useful for analysis. Once a defect location is
determined, 50 data points around this location are selected for neural network training.

Magnitude and Phase Representation: Since the magnitude and phase information is very
important for flaw detection and sizing, the raw data is converted to magnitudes and phases for
neural network training. One advantage of this representation is that the phase and magnitude can
be normalized separately.
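A minimal sketch of this conversion, assuming the raw data are stored as horizontal and vertical impedance-plane components (the function and variable names are ours):

    import numpy as np

    def magnitude_phase_features(horizontal, vertical):
        # Treat the two channels as a complex signal, then normalize the
        # magnitude and phase channels separately, as noted above.
        z = np.asarray(horizontal) + 1j * np.asarray(vertical)
        mag, phase = np.abs(z), np.angle(z)
        mag = mag / mag.max()                   # magnitudes scaled to [0, 1]
        phase = (phase + np.pi) / (2 * np.pi)   # phases scaled to [0, 1]
        return np.concatenate([mag, phase])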

Linear Integral Signal Representation: The linearly integrated raw data have been found to be
useful for defect type identification [11]. They will be used in this project for tube support
detection and flaw type identification.

Radii from the Center of Gravity: Since the defect parameters will influence the null point of the
eddy current signal, a sequence of radii from the center of gravity to the contour of the shape has
been used for neural network training. This method has been shown to be very effective in defect
parameter estimation [11].

Data Segmentation Representation: The operation of this method is to convert the data points
from Cartesian coordinates into a polar coordinate system. The null point is set as the origin of the
polar coordinate system. The polar coordinate system is then angularly divided into N segments,
each of which covers 360/N degrees. For each segment, three types of information are obtained and
used: (1) the value of the maximum distance from the origin, (2) the number of data points in this
region (in percent), and (3) the average value of the data. This technique maps the structural shape
information of an object into a fixed feature vector of real numbers. It is robust to object variations
in position, orientation and size.
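The following Python sketch extracts the three per-segment quantities described above; here the "average value" is taken as the mean radius, which is an assumption, and the null point and segment count are free parameters:

    import numpy as np

    def segment_features(x, y, null_point=(0.0, 0.0), n_segments=12):
        # Shift to the null point and convert to polar coordinates.
        dx = np.asarray(x) - null_point[0]
        dy = np.asarray(y) - null_point[1]
        r = np.hypot(dx, dy)
        theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
        seg = (theta // (2 * np.pi / n_segments)).astype(int)
        features = []
        for k in range(n_segments):          # each segment spans 360/N degrees
            rk = r[seg == k]
            if rk.size == 0:
                features += [0.0, 0.0, 0.0]
            else:
                features += [rk.max(),                  # max distance from origin
                             100.0 * rk.size / r.size,  # share of points (%)
                             rk.mean()]                 # average (assumed: mean radius)
        return np.array(features)   # fixed-length, position-invariant vector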

Development of Computational Neural Networks for Depth Estimation of Pitting Defects


A three-layer backpropagation neural network was trained for depth estimation using the
preprocessed magnitude and phase of the eddy current pitting data. From the pitting data set, 29
samples were chosen for training and 6 samples for testing. The neural network has 100 input
elements, 50 hidden elements, and 1 output element. The learning coefficient was set to 0.2 for the
first 5000 iterations, and 0.15 for the succeeding iterations. The momentum term was set to 0.4.
The normalized cumulative hyperbolic tangent transfer function was used as the nonlinear transfer
function. After 28,000 iterations, the normalized root-mean-square (RMS) error decreased to
0.00001. Figure 7 shows the recall results using the six recall data points. The RMS error for the
recall data was 0.03.

[Figure 7: Recall results of pitting defect depth estimation - network output versus desired output over the recall patterns.]
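A compact sketch of such a training loop is given below. The dimensions (100-50-1), the learning coefficient schedule (0.2, then 0.15 after 5000 iterations) and the momentum term (0.4) follow the description above; the plain tanh transfer, the weight initialisation and the synthetic stand-in data are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 100, 50, 1
    X = rng.uniform(0, 1, (29, n_in))          # stand-in for the 29 samples
    t = rng.uniform(0.2, 0.65, (29, n_out))    # stand-in normalized pit depths

    W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    momentum = 0.4

    for it in range(28000):
        lr = 0.2 if it < 5000 else 0.15        # learning coefficient schedule
        h = np.tanh(X @ W1 + b1)               # hidden layer, tanh transfer
        y = np.tanh(h @ W2 + b2)               # output layer
        err = y - t
        delta2 = err * (1 - y ** 2)            # backprop through tanh: f' = 1 - f^2
        delta1 = (delta2 @ W2.T) * (1 - h ** 2)
        dW2 = momentum * dW2 - lr * (h.T @ delta2) / len(X); W2 += dW2
        db2 = momentum * db2 - lr * delta2.mean(0);          b2 += db2
        dW1 = momentum * dW1 - lr * (X.T @ delta1) / len(X); W1 += dW1
        db1 = momentum * db1 - lr * delta1.mean(0);          b1 += db1

    print(f"RMS error after training: {np.sqrt(np.mean(err ** 2)):.5f}")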

CONCLUSIONS AND FUTURE WORK

The purpose of this research project is to develop a robust methodology for eddy current
data analysis and automation of steam generator tube diagnostics. The fuzzy logic flaw detection
module developed in this context is a unique and important contribution of this research. For the
applied artificial intelligence methods to be successful, it is important to preprocess the ECT
data and perform proper calibration to avoid the effects of scaling and null-point shifts.

The estimation of tube defect parameters, such as defect size and flaw depth, can be
performed with high accuracy using computational neural networks. Thus, the integration of
ECT data preprocessing, fuzzy logic decision making, and parameter estimation using neural
networks constitutes the fundamental contribution of this research.

The following work will be performed in the future:
- Integration of the modules of the diagnostic expert system with an appropriate user interface.
- Testing and improvement of the fuzzy logic-based flaw detection system using an extensive ECT database.
- Analysis of the effect of noise in ECT data and its compensation.

ACKNOWLEDGMENTS

The research reported here has been sponsored by a grant from the Electric Power
Research Institute, Steam Generator NDE Program.

REFERENCES

1 "Steam/Its Generation and Use," Babcock& Wilcox, 23-11, 1978.

2 W. E. Deeds and C. V. Dodd, "Eddy Current Inspection of Steam Generator Tubing,"


Electromagnetic Methods of Nondestructive Testing Monographs and Tracts Vol. 3, W.
Lord, Ed., Gordon and Breach, New York, 1985.

3 R. D. Shaffer, "Eddy Current Testing: Today and Tomorrow," Materials Evaluation, pp


28-32, January 1994.

4 Y. Liu, B. R. Upadhyaya and M. Naghedolfeizi, "Chemometric Data Analysis Using


Artificial Neural Networks," Applied Spectroscopy, Vol. 47, No. 1, pp 12-23, 1993.

5 L. A. Zadeh, "Fuzzy Logic," IEEE Computer, April 1988.

6 L. A. Zadeh, "Outline of a New Approach to the Analysis of Complex Systems and


Decision Processes," IEEE Trans. Systems Man and Cybernetics, SMC-1, 1973.

7 V. S. Cecco et al, "Eddy Current Testing," Vol. 1, pp. 138, GP Publishing, Inc., 1987.

8 G. Viot, "Fuzzy Logic: Concepts to Constructs," AI Expert, pp 26-33, November, 1993.

9 R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP


Magazine, Vol. 4, No. 2, pp 4-22, April 1987.

10 D. Rumelhart and J. McClelland, "Parallel Distributed Processing," Vol. 2, Bradford


Books/MIT Press, Cambridge, MA, 1986.

11 B. R. Upadhyaya and W. Yan, "Hybrid Digital Signal Processing and Neural Networks
for Automated Diagnostics Using NDE Methods," NRC Final Report, NUREG/GR-0010,
October 1993.


APPLICATIONS OF INTELLIGENT SYSTEMS TO SUBSTATION PROTECTION CONTROL
Guilherme Moutinho Ribeiro1 Germano Lambert-Torres2 Alexandre P. Alves da Silva2

1. Companhia Energética de Minas Gerais - CEMIG
OT/SE4 - AP/2 - Av. Barbacena 1200 - Belo Horizonte - 30161-970 - MG - Brazil
Phone: +55-31-3494161 - Fax: +55-31-3492691
2. Escola Federal de Engenharia de Itajubá - EFEI
Av. BPS 1303 - Itajubá - 37500-000 - MG - Brazil
Phone: +55-35-6291240 - Fax: +55-35-6291187 - e-mail: germano@efei.dcc.ufing.br

Summary: Every switching operation within a power substation is based on pre-established
conditions which stem from engineering studies considering both equipment and component
limitations and constraints inherent to the switching itself in relation to the system situation.
Despite all their evolution, the analog solutions used in the blocking, control and monitoring
functions of switching actions do not eliminate the possibility of erroneous interpretation in the
recognition of pattern conditions. With the ongoing digitalization of power substations, the
introduction of computer programs which perform these functions more safely, over a wide
range of distinct and adaptive configurations and conditions, becomes both opportune and
advisable.

This intelligent automatic system for switching operation is built with the support of
Expert System (ES) techniques, in such a way that the switching plan constituted by and through
the traditional logic is enriched and enhanced with the experience and heuristic knowledge of the
operators and dispatchers. Case studies which permit an inference analysis of the problem
solution, with an evaluation of the results obtained, are presented. The process for the inclusion
and validation of new knowledge or components in the substation system is also discussed.
Finally, a comparison of the proposed solution with the conventional system and procedures is
carried out, pointing out the advantages and limitations of this expert system application.

1. INTRODUCTION

The introduction of digital technology and the development of Expert System (ES)
techniques have made the intelligent automation of switchings in substations (SS) possible,
consolidating the whole operating technology besides adding advances to the supervision and
control operating functions.

Any operating intervention entails the elaboration of a Switching Plan, in which the
actions and commands are sequentially linked. In the generation of the Switching Plan, the
equipment operating limitations, the operating constraints inherent to each command, the
operating criteria and the company switching philosophy are considered.

With the availability of real-time sampling of the digitalized state data (switching
equipment and protections) and of the analog quantities, it is possible to treat the data
(consistency checking and validation) and to structure a set of information with a degree of
trustworthiness and suitability able to support the intelligent automation of the switching
operating functions.

The use of the ES technique makes it possible to store the whole switching technology,
adding all the background and heuristic knowledge to the formalized one. Consequently, the
operating processes are optimized from the functional and economical standpoints, besides
making the automation of power substation operation possible by substituting the human
decision/action with an artificial action of the same efficiency level.

So, with the new technologies and the availability of data and information, it is possible
to define and establish a new operating paradigm. The reunification of actions and commands in
coherent processes of high aggregated value allows reliability, quickness and efficiency to be
combined in the switching operating functions.

2. POWER SUBSTATION OPERATION

The operation of a Power Substation (SE), due to the high degree of uncertainty and
the large number of variables involved, is intrinsically complex. The various supervision and
control actions require the presence of an operator, who must be capable of efficiently
responding to the most diverse requests, by handling various types of data and information.
Upon the introduction of digital technology into the SE's and the advent of the practical
application of Artificial Intelligence - Expert Systems techniques, a quality leap in the SE
operation mode was made possible. Every application, formerly based on the analog
technique, must be reconsidered in terms of its basic concepts, so that an intelligent move
may be achieved, with effective gains, taking advantage of the whole potential of the new
technologies. Simply transferring or adapting current procedures does not allow the
flourishing and incorporation of advancements into the operative procedures involving
supervision and control [1].

The utilization of ES in SE operation and control aims at the evolution of the local
supervision and control systems, incorporating practical and heuristic knowledge, optimizing
the operative processes from the functional and economic points of view, in addition to
allowing the automation of the operation, replacing the human decision/action with an
artificial intelligence, of equal effectiveness level.

In view of the operational complexity of a SE, which simultaneously puts together


various concepts and domains, it is necessary to structure the whole assembly so that further
results may be achieved by the implementation of each fraction. Thus, there is a need for
validation of the measurements (input data), for diagnosis (identification of the need for
action), and for determination of the strategies of operative action [2].

Figure 1 shows the operative architecture of a typical SE, with the various types of
operative functions, while Figure 2 shows the functional states of a SE, both listing the
various operative actions.

[Figure 1: Operative Architecture and Functions of a Substation. (a) Input data feed the management of measurements, the estimation of the state and the handling and validation of the signals; these support the diagnosis, which in turn drives the supervision, control and protection functions. (b) Preventive actions: equipment supervision, topology management, operative state surveillance, loading surveillance, load dispatch, availability, operative limits, service life. Control actions: voltage control (tap control, stability control and prevention, safety and integrity function), reactive control (capacitor bank switching, control of reactive power), switching plan (sequential switching, transfers, isolation, interlocking). Corrective actions: alarm handling, oscillography (recording), load shedding and restoration schemes, analysis of occurrences, corrective strategies for emergency states, restoration.]

[Figure 2: Functional States of a Substation - NORMAL, ABNORMAL, DISTURBANCE and SAFE states, linked by control actions, preventive actions, corrective actions, elimination of the fault, and changes of state in view of the occurrence.]

3. FUNCTIONAL REQUIREMENTS AND ORGANIZATIONAL ASPECTS

Any ES application must be conceived in a modular form, at a "typical span" level,
with an open and flexible architecture, allowing and facilitating future expansions capable
of harmonically absorbing the evolution of the SE topology and assuring portability. Such
future expansion possibilities must be taken into account not only in the formulation of the
Data Base, but also in the construction of the rules (Knowledge Base). Therefore, the rules
must be organized as modules, in order to prevent an eventual need of revising a line of
reasoning and, consequently, the considerable number of rules attached to it, in the event of
the adaptation of the SE to an expansion or modification of its topology.

In addition, modularity allows expansion of the system itself, by the incorporation of
new or complementary functions or upon the application of new functions that may
eventually be conceived and made feasible. The conception within a functional hierarchical
structure, designed so as to meet various requirements such as reliability, flexibility, speed
and safety with the utilization of ES, enables the implementation of non-conventional control
and supervision functions. The simplification of the control system and the increase of the
value added to the supervision functions, in addition to improving the operational efficiency
of a SE, reduce its design, implementation and maintenance costs [3].

The formulation of the solution of the operative problems of a SE must take into
account the flow of information (dynamic data) and of control (actions) for its hierarchical
definition. The functions must be performed at a level as near as possible to the process,
with only functional subordination. Every established hierarchical level must contain only
elements which contribute to the execution of the assignments associated with it, thus assuring
functional cohesion. Logical cohesion, where the activities to be performed are defined
outside the level, and coincidental cohesion, where the activities of the level do not have
any significant relationship among them, must be avoided; in both instances the activities
of the level are not logically related by data flows or control flow. The optimum tendency is
the decentralization of the supervisory control: thinking in overall terms but acting locally.

The knowledge of the causal, temporal and functional relationships among the
evidences, the hypotheses or the parameters of the models which may be used in the solution
of an operational problem comprises the formal part of the Knowledge Base. However, the
majority of the details of the specialization tend to be grouped and assembled in heuristic
rules, generally developed in the mind of the experts through extensive observation of
typical results. Such rules may be combined and compared among themselves, in order to
reach a logically consistent, but inexact, solution (Fuzzy Logic).

The heuristic rules attempt to replace the need to memorize details or particulars
of the SE or the Electrical Power System (SEP), as well as the need to experience the whole
range of operative conditions, the possible contingencies and their restoration sequences.
Decision-making can therefore be based on a wide body of knowledge, whether formal,
existential, factual or empirical, properly stored in the form of "facts and rules" in the
Knowledge Base.

Thus, the Knowledge Base is structured on the basis of Facts and Rules which must
portray all the knowledge available about the SE, its components and equipment. It must
contain a detailed description of the SE and its main operational characteristics, such as
topology, static and dynamic attributes, switching schemes, restoration guidelines and
philosophies.

The Inference Engine enables the ES to infer knowledge, by using the information
stored in its Knowledge Base, in order to obtain results which did not exist "a priori", and
so define the solution of a problem or subproblem. In the inference process, the
information derived is not completely new; it actually results from the interrelationships of
previously stored information.

4. SWITCHINGS IN SUBSTATIONS

All the switchings in SS follow operating criteria pre-established by engineering


studies, which consider both the switching equipment operating limitations and the operating
constraints inherent to the switching itself.

The main switchings in a SS are:
- blocking/unblocking of automatisms (e.g. automatic reclosing);
- circuit-breaker bypass;
- circuit-breaker transfer;
- isolation of equipment (bus, transformer or line);
- energization/de-energization of equipment.

The supervision and control functions are conceived to monitor the switchings,
safeguarding the integrity of equipment and people and ensuring correct conformity with
the operating technology [4].

Thus, in view of the operating flexibility allowed by each topology, and considering
the context of the Power Electric System (PES) to which it belongs, a SS may have an ES
automating its switchings, whether they are individual or part of a sequential switching
plan.

A switching can be broken down into a set of actions and commands of the kind:
- action (open/close - block/unblock);
- command (verify open/verify closed).

Considering as an example the data of bay 1K, the output of a Transmission Line
(TL), as presented in Figure 3, the following signal set is obtained:
V(K)      voltage measured in bus K
V(1K)     voltage measured at the output of bay 1K
I(1K)     current at the output of bay 1K
1K3       disconnecting switch
1K5       disconnecting switch
1K5T      grounding disconnecting switch
1K6       bypass disconnecting switch
1K4       circuit breaker
1K50/51   line overcurrent protection
1K50/51N  neutral overcurrent protection
1K50/51N neutral overcurrent protection

where, for example, 1K3 means:


1 - circuit number 1
K - voltage level 138 kV
3 - disconnecting switch near the bus

[Figure 3: Typical TL bay 1K (line output to SS2).]

The switching plan to bypass the circuit-breaker of bay 1K would be made up of:

SEQUENCE   SWITCHING
1          check the voltage in bus BK and in circuit 1K
2          check closed switches 1K3 and 1K5
3          check closed breaker 1K4
4          block the automatic reclosing of breaker 1K4
5          close the bypass switch 1K6
6          open the breaker 1K4
7          open the switches 1K3 and 1K5

In the inference process for the actions and commands which will make up a switching
plan, the following steps are found:
a) to receive the switching request;
b) to identify the circuits involved;
c) to check the state of the equipment (energized/de-energized);
d) to check the selector switches settings;
e) to check the switching equipment settings (opened/closed);
f) to refer to the rules concerning the problem (interlocking, orientation and operating
constraints);
g) to lay out the switching plan, generating the necessary and sufficient actions and commands
in sequential order;
h) to validate the switching plan (simulation).

The conception and development of an automatic system able to switch a SS
necessarily involve the analysis and evaluation of the previously established paradigms and
the definition of the problem-solving strategy. The characteristics inherent to the problem
recommend the use of Expert Systems as the most suitable tool for its formulation and solution
[5].

5. EXPERT SYSTEM FOR POWER SUBSTATION RESTORATION - ESRASE

An Expert System (ES) is a program able to treat a certain problem, within a specific
domain, imitating the behavior of a human specialist in the field. An ES is based upon
knowledge (and not on data), and solves complex problems through the use of inference and
knowledge methods, which are structured with a high degree of flexibility given the associative
access to the rules [6].

An ES uses knowledge expressed by symbolic representation and applies heuristic rules
through deductive processes, which create inference paths for the problem solution. Its
application is justified in cases where no established theory is available, where data and
information are uncertain, and in troubleshooting problems. Since control is separated from
knowledge, knowledge can be withdrawn, included, renewed, updated or modified without
causing any change in the program's operating structure.

By considering the circuit-breaker represented in Figure 3, with its respective
disconnecting switches, its topological and operating characteristics are stored as FACTS in
structures of the kind:

connection("1","K","B","K")
switching_switch("1","K","4","on")
selector_switch("1","K","4-43R","on")
protection("1","K","67","off")
measure("1","K",138,250,60,0.9)
limit_measure("1","K",128,144,-1,500,59,61,-0.8,0.8)
equipment("1","K",line_source("SE1","SE2","initiator"),"on")

where:
1 = number of the circuit
K = voltage level 138 kV
B = bus
4 = circuit-breaker
4-43R = reclosing selector switch
67 = overcurrent protection
138 = voltage measured
250 = current measured
60 = frequency measured
0.9 = power factor measured
128,144 = voltage limits
-1,500 = current limits
59,61 = frequency limits
-0.8,0.8 = power factor limits
line_source = equipment kind
SE1,SE2 = TL terminals
initiator = terminal energization characteristic

In turn, the RULES describing the functional and operating characteristics of a
switching, in which the operating philosophy and the heuristic knowledge are stored,
forming the Knowledge Base, have been structured through Production Rules standardized
in structures of the kind:

interlocking_switching_switch(CIRCUIT,SWITCHING,SWITCHES_OPENED,SWITCHES_CLOSED)
interlocking_measure(CIRCUIT,VOLTAGE,CURRENT)
philosophy_switching(CIRCUIT,CLASS,SWITCH,ACTION,CRITERIA)
criteria_switching(SWITCH,ACTION,ORIENTATION,CONSTRAINT)

From the FACTS and by using the Knowledge Base, the actions and the commands
which will make up the switching plan are inferred, in the following standard way:

switching_plan(SEQUENCE,CIRCUIT,VOLTAGE,SWITCH,SWITCHING)

where SWITCHING can be either an ACTION (open/close) or a COMMAND (check the
voltage presence).

Figure 4 presents the partial listing of the predicates of the rules which make up the
Knowledge Base.

command_switching_device(N,CLASS,DEVICE,STATUS) :-
    check_interlocking(N,CLASS,DEVICE,STATUS),
    check_interlocking_measure(N,CLASS,DEVICE).

check_interlocking(N,CLASS,DEVICE,STATE) :-
    interlocking_switching_device(DEVICE,STATE,DEVICE_ON,DEVICE_OFF),
    check_on(N,CLASS,DEVICE_ON),
    check_off(N,CLASS,DEVICE_OFF).

check_interlocking_measure(N,CLASS,DEVICE) :-
    interlocking_measure(DEVICE,V,A),
    check_voltage(N,CLASS,V),
    check_current(N,CLASS,A).

Figure 4: Partial listing of the Knowledge Base.

6. CASE STUDY

Among the most important actions in a switching plan are the switching of
disconnecting switches and of circuit-breakers. While performing a switching, one
requirement of paramount importance is the interlocking, which consists in restricting the
switching freedom of a given piece of equipment with respect to the states of the other
switching equipment existing in the circuit and of the control devices associated with them,
or with respect to the constraints or operating orientations present.

Considering the circuit-breaker represented in Figure 3, with its respective
disconnecting switches, the following statements hold as far as the interlocking of the
switch opening/closing commands is concerned:
- switches 1K3 and 1K5 can be switched only if circuit-breaker 1K4 is open;
- switch 1K5T can be switched only if switches 1K5 and 1K6 are open;
- switch 1K6 can be switched only if circuit-breaker 1K4 and switches 1K3 and 1K5 are
closed.

These rules hold for all circuit-breakers with the same topological characteristics as
the present ones, independent of the voltage level or the circuit. It is then possible to
implement the interlocking for this kind of bay by using a set of Production Rules where the
general structure of the predicate is:

interlocking_switch_switching(CIRCUIT,SWITCHING,SWITCHES_OPEN,SWITCHES_CLOSED)

Figure 5 presents a partial list of these rules written in Prolog.

interlocking_switching_device("3","off",["4","5T"],["3"]).
interlocking_switching_device("4","off",[],["4"]).
interlocking_switching_device("5","off",["4","5T"],["5"]).
interlocking_switching_device("5T","off",["5","6"],["5T"]).
interlocking_switching_device("6","off",["5T"],["3","4","5","6"]).

interlocking_switching_device("3","on",["3","4","5T"],[]).

interlocking_measure("3"," ","A").
interlocking_measure("4"," "," ").
interlocking_measure("5T","V","A").

Figure 5: Partial list of the interlocking module rules.

To send a command signal to a switching device, the ES developed checks the
signal's permissibility against the interlocking rule set, to validate it or block it.

In case of validation, messages are issued as presented in Figure 6.

command_switching_device("1","K","3","open")

"INTERLOCKING OK"

"Open command 1K3 sent"

Figure 6: Example of the program screen display.

In case the program identifies an interlocking constraint, the set of orientation
messages presented in Figure 7 will be issued.

command_switching_device("1","K","5T","close")

"INTERLOCKING FAIL - Reason:
- 1K3 closed
- 1K4 closed
- 1K5 closed
- VOLTAGE (1K) = 138 kV
- CURRENT (1K) = 50 A"

"Close command 1K5T locked"

Figure 7: Example of the program screen display.
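The ESRASE rules themselves are written in Prolog, as in Figure 5. Purely for illustration, the same interlocking check and message generation can be re-expressed in Python as a minimal sketch following the interlocking statements above; the voltage/current measure check is omitted and the data layout is ours:

    # device/action -> (devices that must be open, devices that must be closed)
    INTERLOCKS = {
        ("5T", "close"): (["5", "6"], []),    # 1K5T: switches 1K5 and 1K6 open
        ("3", "off"):    (["4", "5T"], ["3"]),
    }

    def command_switching_device(state, device, action):
        # state maps device name -> "open"/"closed"
        must_open, must_closed = INTERLOCKS[(device, action)]
        reasons = [f"1K{d} closed" for d in must_open if state.get(d) != "open"]
        reasons += [f"1K{d} open" for d in must_closed if state.get(d) != "closed"]
        if reasons:
            return "INTERLOCKING FAIL - Reason: " + "; ".join(reasons)
        return f"INTERLOCKING OK - {action} command 1K{device} sent"

    state = {"3": "closed", "4": "closed", "5": "closed", "6": "open"}
    print(command_switching_device(state, "5T", "close"))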

More elaborate switchings, such as isolating a bus or energizing a transformer, combine
all the operating procedures and command actions necessary for their performance, including
the verification of the selector switch settings, interlockings, signalling and checking of the
operating conditions (requirements and constraints).

7. CONCLUSIONS

The application of ES in the operation of SS makes it possible to add considerable
functional capacity to the operating system. With the definition of a new set of operating
techniques, taking advantage of all the existing background knowledge, it is possible to
achieve SS operating automation.

The concepts of Information Technology and Process Re-Engineering should be
applied to the products and services. To establish a new operating paradigm it is
necessary to re-evaluate the purpose, orienting towards the processes and not the tasks. To
achieve the proposed goals, it is necessary to identify accurately the service to be
rendered and to elaborate the architectures necessary for an Information System able to
support it.

Only in this way will it be possible to reach goals such as sustained evolution,
greater swiftness and efficiency, the establishment of relational standards, staff qualification,
the constant pursuit of total quality and the reduction of operating costs.

8. REFERENCES

[1] G. Lambert-Torres et al., "Computer Program Package for Power System Protection
and Control," 34th CIGRE Biennial Session, Paper 39-304, Paris, France, 1992.

[2] G.M. Ribeiro and G. Lambert-Torres, "ESRASE - Expert System for Automatic
Restoration of Substations," SEPOPE, Belo Horizonte, Brazil, 1992.

[3] G. Lambert-Torres et al., "A Fuzzy Knowledge-Based System for Bus Load
Forecasting," in Fuzzy Logic Technology and Applications, IEEE Press, 1994.

[4] C.C. Liu and CIGRE TF38-06-03, "Practical Use of Expert Systems in Planning and
Operation of Power Systems," Electra, 1993.

[5] Valiquette, G. Lambert-Torres, and D. Mukhedkar, "A Tool for Teaching Power
System Operation Emergency Control Strategies," IEEE Trans. on Power Systems,
1991.

[6] G.M. Ribeiro, C.I.A. Costa, G. Lambert-Torres, and X.D. Do, "PROCONTROL - A
Hybrid Expert System for Power System Protection and Control," IV International
Symposium on Expert System Application to Power Systems (ESAP), Melbourne,
Australia, 1993.


The critical role of materials data storage and evaluation systems
within intelligent monitoring and diagnostics of power plants

H. H. Over, A. S. Jovanovic, H. Kröckel

Institute of Advanced Materials, JRC Petten of the European Commission, The Netherlands
MPA Stuttgart, Germany
(See the Glossary for this and all following acronyms.)


1. Introduction
Advanced life management of power plants operating high temperature pressurised systems
and components is based on an interactive strategy of:
a) design and re-design complying with the formal requirements of regulatory codes and
incorporating component life assessment (CLA) by analysis of the component life
exhausted by creep and fatigue;
b) non-destructive examination of systems, i.e. reliability and/or cost centred
maintenance of systems, components and locations, in particular the determination
of optimal inspection intervals;
c) multi-criteria decision making based both on the regulatory guidelines and on
experience, i.e. heuristic knowledge, to manage the decision to replace, repair,
operate, reduce load or re-inspect the component(s) concerned.
The solution of this engineering problem requires the performance of information processes
of different natures, ranging from exact algorithmic calculations and data processing to
less formalized heuristic knowledge, fuzzy logic and engineering decision processes. Each
of these processes can nowadays be represented or supported by modern information
technology (IT) tools. The development of an IT architecture for plant life management in
which these different tools are combined, i.e. interfaced to functionally interact, is a
tremendous challenge.
In the background of the whole plant life management process and the corresponding IT tool
are the data sources (i.e. primarily those databases regarding material data, inspection data
and component/system data).
This paper tackles the issue of material databases, taking the High Temperature Materials
Databank (HTM-DB) as an example of "European databases" developed at JRC Petten and
widely used in several large European projects. One of the important problems to solve in
this architecture is the interfacing of the different materials information sources which are
needed in the form of databases and associated algorithm libraries.

2. Intelligent computer systems in the area of power plant maintenance


and diagnostics
Four major groups of intelligent computer systems in the area of power plant maintenance
and diagnostics can be identified.
In the first group are the tools that can be broadly classified as knowledge-based (expert)
systems (KBS) (see Figure 1), and can be used (usually by an expert) to set a realistic
inspection or maintenance interval, e.g. the Boiler Maintenance Workstation of EPRI (USA) for
certain boiler components, or SOAP (State-of-the-Art Power Plant System) - see Dooley and
co-workers (1992) - or the system developed in the European SP249 project (Jovanovic,
Friemann, 1994). These tools define the inspection intervals implicitly, i.e. they can usually
provide an engineering assessment of the corresponding remaining life. The user is then
supposed, in each particular case (i.e. usually per component and/or location), to decide
when the right time to re-inspect is.

Such KBS systems are currently being developed at MPA Stuttgart under the sponsorship of
the Association of German Electric Utilities (VGB), for instance the ESR System (Jovanovic,
Maile, 1990). Much of the recent research effort in Europe, the USA and Japan has been devoted
to the development of such KBS's applied in the fields of power plant and structural engineering;
some of these systems are described in (Jovanovic, Gehl, 1991). In general, systems in this group
give only an implicit recommendation (based exclusively on engineering factors) on when to
inspect a single component (KBS's for single problems).

[Figure 1: Some KBS's used for single problems (the "first group" of systems): boiler maintenance (e.g. EPRI-BMW), piping monitoring systems (e.g. LMS-BE3080), piping analysis (e.g. MPA-ESR), heat rate analysis (e.g. EPRI-HEATEXP), generator monitoring (e.g. EPRI-GEMS), coal quality impact (e.g. EPRI-CQIM), vibration advisor (e.g. IVO, EPRI), "whole plant" systems (e.g. EPRI-SOAP), corrosion (KUL), material selection (KUL, MPA, ...), and other systems.]
The second and the third group are databases and database-like systems, developed
especially for material data, non-destructive testing (NDT) results and plant component/system
data. The confidential nature of NDT results has led to many of the NDT result databases
being developed by the utilities themselves. The databases for component/system data are
usually developed and delivered by component/system manufacturers. In general, these systems
only offer the possibility to store data from previous inspections and/or component/system data.

The fourth group are systems for component/system state monitoring. These systems are
developed both by manufacturers and by utilities. In general, these systems only offer the
possibility to monitor several components, while the recommendation on when and how to inspect
is "mechanistic" and based on extremely simplified assumptions.
The desiderata for the further development of these systems towards the 'ideal system' (see
Figure 2) refer to the following objectives:
a) refinement and additional features:
- provide guidance on e.g. timing of outages, optimal intervals between outages, range,
scope and methods of required inspections,
- consider variable operational conditions,
- include heuristic knowledge,
- include non-engineering factors like costs, environmental impact and/or safety
implications,
- implement the new (state-of-the-art) methodology for multi-criteria decision making,
- provide links to database and monitoring systems existing in power plants,
- integrate the most advanced engineering methods for damage evolution
assessment/prediction with state-of-the-art intelligent software technologies such as
hypermedia, neural networks and intelligent flowcharting;
b) integration of material information:
- definition of material databases as internal systems,
- interfacing of external material databases,
- a co-ordinated approach to internal and external materials data analysis.


[Figure 2: The ideal system (KBS) linked with other databases and other KBS's: monitoring systems, KBS's for single problems, inspection results databases, the internal material database (SP 249) and the external material database (HTM-DB), all serving the plant engineer/analyst.]

3. CLA and the need for material data

3.1 CLA technologies


Component life assessment (CLA), historically a part of the design process, now
accompanies the management of the component integrity throughout its life, through
successively determined design and redesign (inverse design) steps following inspections.
CLA is the aspect of the overall architecture with the main need for materials
information. Exploitation of KBS's within the CLA technology has been successfully
demonstrated in programs such as those presented at the ACT Conference (1992) and in the
European SP249 and ESR projects.
In the SP249 project the KBS technology appears at two levels:
- as a part of the modern CLA technology,
- as the principal means or "vehicle" of the CLA technology transfer.
The SP249 CLA Generic Guidelines are implemented in a knowledge-based system (KBS),
which serves as the main tool for transfer and application of the target CLA technology. The
system appears as a conglomerate of single software modules controlled by an overview
module. The whole system is designed as an engineering "tool box", built on top of
commercially available software (Jovanovic, Friemann, 1994).
Object oriented programming (OOP) appears both at the level of the overall SP249 KBS
architecture (each part of the system is an object exchanging messages with the other ones) and
at the level of its single parts. The architecture allows new modules to be introduced, or the
existing ones reorganised, at any time. The hypermedia-based parts/modules "cover" the
background information built into the system:
- the CLA guidelines,
- frequently used codes,
- standards and other documents,
- case studies.
The system covers:
- decision making according to the SP249 CLA guidelines, i.e. a decision aid for
making the "3R decision": replace, repair, run
(this decision is based partly on the regulatory guidelines, partly on the
experience and heuristic knowledge incorporated into the CLA guidelines),
- recommendations regarding the annual inspection (revision),
- damage analysis.
Using the system, the user is supported by an "intelligent environment", helping him to:
1) retrieve data (about material, component, etc.),
2) evaluate/calculate data,
3) retrieve necessary standards,
4) obtain advice,
5) find an optimised solution for his problem (see Figure 3).
Materials data retrieval (1) and evaluation (2) can be done within the sophisticated
procedures of the HTM-DB as an external KBS functionality and/or on the KBS side within
its internal functionality.
[Figure 3: "Intelligent" environment for the CLA analysis in SP249 - the HTM-DB acts as the external materials database dynamically linked to the KBS (e.g. for Larson-Miller evaluations).]

3.2 Material data


Practical integration of material databases with KBS's is nowadays realistic
and fully feasible. The basic issues to be solved are:
a) response to intelligent dynamic queries created either by the KBS or by the user,
b) creation of user-defined database reports which can be included directly both in
KBS inputs and in electronic document based reports.
Most of these are currently being tackled by JRC Petten and MPA Stuttgart in their efforts to
integrate HTM-DB data into projects like C-FAT, SP 249 and others.
Material data for plant life management (see Figure 5) can be categorized into the following
material pools:
Pool 1: Materials data from national or international standards. These time-dependent
materials data are offered by the HTM-DB as Excel charts with hypertext
information, in addition to the test data of pool 2. They are used in elastic analysis in the
design phase of the power plant.
These data are used for component design of power plants, which is correlated to
existing codes (e.g. ASME, KTA, TRD). Most of these are strictly based on elastic
rules which guarantee that allowed strain limits are not exceeded within the operation
time of a component (Nickel, Schubert, Penkalla, Over, 1983). The codes use high
safety factors to cope with experimental uncertainties. They rely on materials data
referring to the respective national (e.g. DIN, BS, AFNOR, ASME) or international
(e.g. EN, ISO) standards. According to ASME CODE N47, for instance, the time-dependent
stress intensity limit, St, is derived from the minimum of:
(a) the stress corresponding to the 1% strain limit,
(b) 1/1.5 of the stress to rupture,
(c) 1/1.25 of the stress for onset of tertiary creep.
As recommended in ASME CODE N47 and DIN 50117, the minimum data are
determined from the confidence interval by subtracting 1.65 times the standard
deviation from the mean value. This procedure guarantees a probability of 95% that
the result of a single experiment is better than the minimum data (HTR Regelwerk,
1984).
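As a small worked sketch of this rule (with invented test values), the minimum curve follows directly from the mean and the standard deviation of the measured property:

    import numpy as np

    # Minimum data = mean - 1.65 * standard deviation, giving a ~95%
    # one-sided probability that a single experiment exceeds the minimum.
    log_tr = np.log10([8.2e3, 1.1e4, 9.5e3, 1.3e4])   # invented rupture times, h
    mean, std = log_tr.mean(), log_tr.std(ddof=1)
    minimum = mean - 1.65 * std
    print(f"minimum rupture time: {10 ** minimum:.0f} h "
          f"(mean {10 ** mean:.0f} h)")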
[Figure 4: Example of a stress intensity limit determination - Larson-Miller based 550°C isothermal curve for a ferritic material, stress (MPa, log scale) versus rupture time (h, 1E+1 to 1E+6), with mean, minimum (dashed) and St (bold) curves.]

Figure 4 shows how conservative the elastic design procedure against creep
deformation based on the St value is. The 550°C stress to rupture isothermal of a
ferritic material is calculated from a Larson-Miller plot by using an HTM-DB data
set with rupture times between 450°C and 650°C. In this example, an operation
time of 100,000 hrs for steam pipes stressed at about 80 MPa based on mean values
would be reduced to about 80,000 hrs by using minimum values (dashed line) and to
about 2,000 hrs by using St values (bold line) calculated from the stress to rupture
criterion (b).
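A hedged sketch of such a Larson-Miller extrapolation is shown below: the parameter P = T(C + log10 tr) is computed for rupture data at several temperatures, log stress is fitted as a linear function of P, and the fit is inverted to estimate the rupture time at a service temperature and stress. The constant C = 20 is the customary choice and all data points are invented:

    import numpy as np

    C = 20.0
    T = np.array([723.0, 823.0, 923.0])        # test temperatures, K (450-650 C)
    tr = np.array([2.0e5, 1.2e4, 6.0e2])       # invented rupture times, h
    stress = np.array([160.0, 120.0, 80.0])    # invented stresses, MPa

    P = T * (C + np.log10(tr))                 # Larson-Miller parameter
    slope, intercept = np.polyfit(P, np.log10(stress), 1)

    def rupture_time(T_service, stress_mpa):
        # Invert the fit: P at this stress, then tr from P = T*(C + log10 tr).
        P_needed = (np.log10(stress_mpa) - intercept) / slope
        return 10 ** (P_needed / T_service - C)

    print(f"estimated rupture time at 550 C / 80 MPa: "
          f"{rupture_time(823.0, 80.0):.0f} h")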

Pool 2: Materials data from tests, usually for materials and/or temperature ranges which
are not covered by the codes. Such experimental test results on new and service-exposed
materials, mostly coming from European projects, are offered by the
HTM-DB as released data. They can be used together with utility test results for
inelastic analysis in design & redesign.
For cost and thermal efficiency reasons, both the operation time and temperature
should be as high as possible. But at higher temperature the material is
exhausted sooner, and one must find a compromise between lifetime and
temperature. The operation time is often restricted by the formal requirements of
elastic design analysis (design by rule) of the existing standards and code cases.
These requirements are necessarily conservative with respect to the material potential
available, in particular if one allows for the rapid progress in materials fabrication
and production technology, better heat treatments and better quality management,
which has the effect that improved materials are on the market before the code limits
can be adapted. Alternative code options which allow design by analysis, moreover,
require inelastic data and relationships which must be provided by the power plant
supplier himself and/or obtained from other data sources where this information is
available in reliable and validated quality.

Pool 3: Materials data from surveillance, in-pile & out-of-pile and in-plant tests (primarily
in nuclear power plants). Such experimental test results can be administered
within the HTM-DB and used for damage assessment with reference to the information
coming from quality control tests or even other sources.
For safety reasons the authority can demand additional in-plant or in-pile tests and
surveillance tests for critical components, to assess the damage and the irradiation
embrittlement or to guarantee the component integrity after emergency conditions.
Such requirements can arise in the following cases:
I. surveillance tests within nuclear power plants to guarantee the integrity of
components
- the catastrophic failure of which could endanger the population in the
surrounding areas,
- which are earthquake exposed,
- which must be secured against extraordinary emergency conditions;
II. in-pile & out-of-pile tests within nuclear power plants to secure against
embrittlement of components which are irradiated;
III. in-plant tests within conventional power plants to secure against damage of
components with complicated weldments in high temperature and/or stress exposed areas.

Pool 4: Materials data from quality control tests (primarily in nuclear power plants). Such
test results can be administered within the HTM-DB. During the lifetime of a power
plant they can be used as reference data for damage assessment of components.
The code cases demand quality control materials tests, such as tensile and Charpy-V
impact tests at different positions of the components, to guarantee that the component
conforms to specification. Normally the measured test results are entered in the
component certification forms, which are stored in thick files. The power plant
suppliers can administer these materials data of all their plants within the HTM-DB.
In doing so, a company has fast access to the data and can easily use them, together
with material information coming from other sources and/or their evaluated
parameters, for lifetime analysis.

Figure 5 shows the correlation of the four material pools with KBS based life assessment and
management procedures. The materials data examined in quality control tests (pool 4) must
be compared with those coming from special in-plant tests, in-pile & out-of-pile tests and
surveillance tests (pool 3) and with Non-Destructive Testing (NDT) results from plant
inspections, to assess the material damage, the irradiation embrittlement or the component
integrity after in-service inspections or emergency conditions.

[Figure 5: Possible data pools of the HTM-DB for the Plant Life Management. The diagram links plant life management to component life assessment (design & redesign, with elastic and inelastic analysis fed by Pool 1, materials data from national and international standards, and Pool 2, materials data from testing not covered in standards) and to component integrity assessment (inspections/NDT results, emergency conditions, damage & failure data, and reference data, fed by Pool 3, materials data from in-plant tests, in- & out-of-pile tests and surveillance tests).]

4. The High Temperature Materials Databank (HTM-DB)

The HTM-DB is a computer-based system for the storage and evaluation of mechanical and physical
properties of engineering alloys, such as tensile, creep, fatigue and fracture mechanics properties,
Young's modulus and thermal expansion, that are mostly used in high temperature technology.
Although this is the main scope, the HTM-DB is not limited to high temperature materials
applications (Over, Kröckel, Guttmann, Fattori, 1993).

The HTM-DB computerizes the scientific process of engineering data generation from
material testing through the functions of data organization, data validation, quality control,
model-based and statistical data evaluation, to the presentation of material parameters which
find use in engineering algorithms.
The database structure covers all engineering alloys and their testing at any temperature for
time independent and time dependent materials behaviour. Its emphasis is on data from
standardised tests and on evaluation methods which are well established and widely accepted.
The database and the evaluation programs are oriented to international material standards and
recommendations.

Besides the experimental materials data, the HTM-DB offers materials data from standards as
additional numerical and graphical information (see Figure 8: Data catalogue). Table 1
shows the HTM-DB materials data content. There is a big difference between the
experimental materials data and the data from standards. The records of the experimental
materials data are measured data and contain, as a minimum, all mandatory information on
data source, specimen, material and test control. In most cases much more information, such
as grain size & hardness, is provided. The materials data from standards, however, are
average data and contain the mandatory materials information only.
Table 1: HTM-DB materials data content

                              Number of records    Data coming from
Standard materials            approx. 2000         DIN, ISO, BS, etc.
Experimental materials data   approx. 6000         COST, BRITE, etc.

The data management and evaluation functions can be applied to mechanical and physical
property test results reported by test laboratories in defined format and quality. Such test
results can be entered and stored in the "databank" component of the system where they can
be accessed and handled with typical databank routines and from where they can also be
taken to data evaluation by the other component, the "evaluation program library".

The HTM-DB evaluation program library is linked with the User-Interface. It contains
specific evaluation programs for data on the mechanical properties stored by the system.
Most of the specific property evaluation programs allow fitting of mathematical models,
constitutive equations, parametric expressions and regression functions to test result data. In
general the results are best fit parameters and statistical information about the data, such as
correlation coefficients and standard deviations. The evaluation programs are programmed in
VBA and C and implemented under Microsoft Windows on the PC-side with access to
Microsoft Excel for Windows. They can be selected from windows in the User-Interface.
The Norton Creep Law is shown as an example of such an HTM-DB evaluation program. It
is valid for ductile material behaviour and describes the relationship which exists between
characteristic creep rates or rupture time and the applied stress. The program has several
analysis options:
- minimum creep rate - applied stress dependence
(it can be calculated from the creep curve data supplied by the laboratories using the
'seven point fitting method' as defined in ASTM E 647 or, alternatively, from the
delivered minimum creep rates);
- steady state creep rate - applied stress dependence
(it can only be calculated from delivered values);
- average creep rate
(it is the rupture strain, er, divided by the rupture time, tr, and is higher than either the
minimum or the steady state creep rate);
- creep rupture time - stress dependence
(the program automatically sorts data according to the isothermal criterion; each
regression line ends at the minimum and maximum stress levels encountered at any
particular temperature).
The user also has the option to remove minimum creep rate points which are not consistent
with the general trend, whereupon the creep law will automatically be re-evaluated. By
selecting a single point from the chart, the creep rate data can be compared with those of
neighbouring points in the data set. Through comparison of adjacent curves the user is able to
establish the reliability of each individual calculated creep rate value. An example of
calculated minimum creep rate data is given in Figure 6.
Besides the analysis options, the user has access to all intrinsic Excel functions for data
storage and data processing.
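Fitting the Norton creep law (eps_dot = A * sigma^n) reduces to a linear regression in log-log space, as the short sketch below illustrates with invented values; the outlier-removal option mentioned above then simply re-runs the regression on the remaining points:

    import numpy as np

    stress = np.array([100.0, 150.0, 200.0, 300.0])        # MPa (invented)
    min_rate = np.array([1.2e-7, 8.0e-7, 1.4e-5, 9.0e-4])  # 1/h (invented)

    # eps_dot = A * sigma^n  ->  log10(eps_dot) = n*log10(sigma) + log10(A)
    n, logA = np.polyfit(np.log10(stress), np.log10(min_rate), 1)
    print(f"Norton exponent n = {n:.2f}, A = {10 ** logA:.3e}")

    # Remove a point inconsistent with the trend and re-evaluate the law.
    keep = np.arange(len(stress)) != 2
    n2, _ = np.polyfit(np.log10(stress[keep]), np.log10(min_rate[keep]), 1)
    print(f"re-evaluated exponent n = {n2:.2f}")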

[Figure 6: NORTON CREEP LAW analysis - calculated minimum creep rate data (approx. 1E-07 to 1E-03 1/h) versus creep stress (70-500 MPa), with isothermal fits between 450°C and 625°C and the corresponding Norton exponents n.]

The HTM-DB is available in PC or client/server versions, with or without connection to the
server computer at Petten. The Petten server stores all data entered by the different customers
and supports on-line data transfer to the customers (see Figure 7).

[Figure 7: Data flow within the Replication Client/Server application - data are exchanged between the Petten Server and the clients, with validation by the data supplier and quality control by JRC Petten.]

The User-Interface and the evaluation programs are installed on the PC of the client. Output
options are available as "reports" (tabular presentations) and "table & charts" using
spreadsheet options. The data entry function and data transfer options to and from the Petten
Server are also available as parts of the User-Interface (see Figure 8).

Figure 8: HTM-DB User-Interface main window


The PC-based User-Interface runs under Microsoft Windows and may therefore interact
with other Windows software such as spreadsheets or the system help application. To
guarantee data confidentiality, the access rights of the user to his local database and to the
database of the server are controlled by his passwords and user identifications.

The User-Interface requires minimal user training. It uses advanced windowing techniques
to assist the user in formulating his queries, so that typing mistakes and non-relevant queries are
avoided. It furthermore eliminates syntax errors by making the syntax fully transparent to the
user: the complicated SQL string, with all the links between the HTM-DB entities, is
gradually and automatically built up. Active windows are shown in the foreground with a
blue frame, whereas inactive ones are shown in the background with dark-gray frames,
depending on the Microsoft Windows setup. The buttons which are used conform to
Microsoft Windows standards (Over, De Luca, 1993).
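The entity and column names of the HTM-DB are not reproduced here, so the following Python sketch only illustrates the idea of an automatically assembled, syntactically valid SQL string over a hypothetical schema of material, specimen and test-result tables:

    def build_query(material, test_type, t_min, t_max):
        # Hypothetical schema; table and column names are invented.
        return (
            "SELECT s.specimen_id, r.stress, r.rupture_time, r.temperature "
            "FROM test_result r "
            "JOIN specimen s ON s.specimen_id = r.specimen_id "
            "JOIN material m ON m.material_id = s.material_id "
            f"WHERE m.designation = '{material}' "
            f"AND r.test_type = '{test_type}' "
            f"AND r.temperature BETWEEN {t_min} AND {t_max}"
        )

    print(build_query("X20CrMoV12-1", "creep", 450, 650))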

5. HTM-DB functionalities matching the requirements of the KBS


There are several concepts for the use of a materials databank for computer-aided in-service
component life assessment. The HTM-DB, for instance, has participated in the BRITE 1209
project to predict and extrapolate the component service behaviour under stress at high
temperature. The proposed use of the databank within such a materials information system
(Kröckel, Westbrook, 1987) is shown in Figure 9. The databank and its evaluation programs
& models interact in a dynamic way with the FEM processor to deliver the data and the
evaluated constitutive parameters for the stress-strain-life analysis. In a similar way the
HTM-DB will operate as a dynamic, "external" databank within the KBS system.

The HTM-DB represents many years of experience and expert knowledge in database
management, programming, material science and software & hardware applications. A
standardised database structure, a user-interface which offers intelligent user guidance and an
extended evaluation program library are incorporated in the HTM-DB. They enable the user
to easily retrieve material data coming from different pools and to evaluate the data for their
relevance and quality before transferring them into the fixed data lists and calculations of the
KBS, as shown in Figure 3 for the SP 249 programme. Within a Replication client/server
application (see Figure 7) the customer also has access to all released data, which are
regularly updated.

4 A database management system requires a query language to enable users to access data. Structured Query
Language (SQL, pronounced 'sequel') is the language used by most relational database systems. The SQL
language was developed in the prototype relational database management system System R by IBM in the
mid-1970's. Information Technology - Database Languages - SQL, ISO/IEC 9075:1992 (E).


[Figure 9: Conceptual scheme for linking of a material properties databank with stress-strain-life analysis by finite-element computation. The DBMS side (data file, evaluation programs & models) answers validated queries from the user and delivers constitutive equations and parameters to the FEM processor, which combines geometry, design loads and the finite element model (FEM) into structural design, stress-strain-life and damage projections.]
An expert using a KBS needs fast solutions for his problems, especially if damage or failure is
recognised on critical components. Then an inelastic analysis for the re-design of the
component and for the definition of new inspection intervals is often necessary to continue the
operation. If, for instance, in nuclear power plants, crack propagation is detected at a
weldment of a critical component exposed to high stress and temperature, a fast analysis is
required to decide whether the weldment can be repaired or not. In conventional power plants,
the inelastic analysis can improve, e.g., the assessment of creep-crack growth and/or relaxation
effects. Therefore a databank which is linked to a KBS needs to contain the corresponding
data and to allow data access at the same speed as the system itself.
Two years ago this high-speed response could not yet be provided by the HTM-DB, neither
from a PC/workstation client/server system nor from a standalone PC as used under on-site
plant conditions. Due to the hardware and software conditions, data access was too slow. In
the meantime these conditions have changed: instead of about 4 minutes (2 years ago), a
Pentium PC nowadays needs 10 seconds to retrieve the same HTM-DB data content from
the local PC database (DBMS: SQL*Base). A similar access speed is achieved for the
evaluation of the data. A Larson-Miller extrapolation within the HTM-DB (Over, Kröckel,
Guttmann, Fattori, 1993), for which the data must be transferred from the data retrieval part
of the User-Interface to Microsoft Excel, is nowadays a task achievable in the order of
seconds.
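As a hedged illustration of the kind of evaluation involved, the following Python sketch applies the standard Larson-Miller relation P = T(C + log10 t_r); the constant C = 20 and the data points are illustrative assumptions, not HTM-DB values:

    import math

    C = 20.0   # commonly assumed Larson-Miller constant

    def lmp(T_kelvin, t_rupture_h):
        """Larson-Miller parameter P = T * (C + log10(t_r))."""
        return T_kelvin * (C + math.log10(t_rupture_h))

    def rupture_time(T_kelvin, P):
        """Invert P = T*(C + log10 t) to extrapolate rupture time at T."""
        return 10.0 ** (P / T_kelvin - C)

    # Fit point from a short-term test, extrapolated to a lower temperature:
    P = lmp(873.15, 3_000.0)           # 600 degC, 3 000 h test
    print(rupture_time(823.15, P))     # predicted life at 550 degC, same stress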
The dynamic link between the KBS and the HTM-DB has already been established, as shown
in Figure 3. Data which are retrieved and evaluated in the HTM-DB are transferred to the
KBS-"internal" SP 249 databank and entered into the pre-arranged data sheets.

6. Conclusion
When the HTM-DB is used in combination with KBS-based plant life management in the
power plant industry, from the planning phase up to operation, it can be fed with data from
acceptance tests and additional mechanical testing of the respective power plant components,
from similar components of other power plants and from safety experiments. It will therefore
provide the main material data input to the KBS. Recent hardware technology allows it to
match the response speed of the KBS.
In a situation where the speed of the decision process is vital, non-computerised methods are
becoming inadequate, both for component life assessment and for material data retrieval and
evaluation. Any hardware and software costs are by far outweighed by the savings in
operation time and plant availability.
The development of an architecture for this integrated system is well advanced, and the
functional demonstration is the next goal.

7. Literature
ACT (1992). Advanced Computer Technology Conference 1992, held in Phoenix, Arizona,
December 9-11, 1992, Proceedings, Vol. 1 and 2, published by EPRI, Palo Alto, USA,
December 1992
Dooley, R. B., McNaughton, W. P., Viswanathan, R. (1992). Life extension and component
condition assessment in the United States, Proceedings of the VGB Conference on
Assessment of Residual Service Life, July 6-7, 1992, Mannheim, Germany
HTR Regelwerk (1984). "Erarbeitung von Grundlagen zu einem Regelwerk über die
Auslegung von HTR-Komponenten für Anwendungstemperaturen oberhalb 800°C",
Kernforschungsanlage Jülich, Jül-Spez 248, ISSN 0343-7639, März 1984
Jovanovic, A., Friemann, M. (1994). Overall structure and use of the SP249 knowledge-based
system, Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Jovanovic, A., Friemann, M., Kautz, H. R. (1992). Practical realisation of intelligent inter-
process communication in integrated expert systems in materials and structural
engineering, Proc. of the Avignon '92 Conference on Expert Systems and their
Applications (Vol. 2 - Specialised Conferences), Avignon, pp. 707-718
Jovanovic, A., Kussmaul, K. F., Lucia, A. C., Bonissone, P., Eds. (1989). Expert systems in
structural safety assessment, Lecture Notes in Engineering, vol. 53, Springer-Verlag
Kröckel, H., Westbrook, J. H. (1987). "Computerised materials information systems", Phil.
Trans. R. Soc. London A 322, 373-391
Nickel, H., Schubert, F., Penkalla, H. J., Over, H. H. (1983). "Mechanical Design Methods for
High Temperature Reactor Components", Nuclear Engineering and Design 76 (1983),
197-206
Over, H. H., De Luca, D. (1993). "Intelligent User Guidance for the HTM-DB", 12th
International Conference on Structural Mechanics in Reactor Technology, Post-
Conference Seminar No. 13, "Knowledge-based (Expert) System Applications in
Power Plant, Process Plant and Structural Engineering", Konstanz, Germany, August
23-25, 1993
Over, H. H., Kröckel, H., Guttmann, V., Fattori, H. (1993). "Data Management with the High
Temperature Materials Databank", 12th International Conference on Structural
Mechanics in Reactor Technology, Stuttgart, Germany, August 23-25, 1993

8. Glossary

AFNOR: Association Française de Normalisation

ASME CODE CASE N47-15: "Class 1 Components in Elevated Temperature Service,
Division 1", The American Society of Mechanical Engineers, New York, 1979
ASME: American Society of Mechanical Engineers
BRITE 3070: "LMS - Development of an advanced life time monitoring system for
components of piping systems in the creep range"
BRITE 5245: "Optimization of methodologies to predict crack initiation and early growth in
components under complex creep-fatigue loading (CFAT)"
BRITE 5935: "Decision making for requalification of structures"
BRITE 5936: "Reliability support system for metallic components susceptible to corrosion
related cracking"
BRITE 1209: "Lifetime Prediction and Extrapolation Methodologies for Computer-Aided
Assessment of Component Service Behaviour under Stress at High Temperature", Final
Report, February 1991
BRITE: Basic Research in Industrial Technologies for Europe
BS: British Standard
CLA: Component Life Assessment
COST: Cooperation in the field of Scientific and Technological Research
DBMS: Database Management System
DIN: Deutsche Industrienorm (German industrial standard)
EN: Euronorm (European standard)
ESR-International: "Expertensystem für Schädigungsanalyse und Restlebensdauerermittlung"
(expert system for damage analysis and remaining life assessment), MPA Stuttgart,
international version
ESR-VGB: "Expertensystem für Schädigungsanalyse und Restlebensdauerermittlung",
MPA Stuttgart, VGB version
HTM-DB: High Temperature Materials Databank
ISO: International Organization for Standardization
IT: Information Technology
KBS: Knowledge-based System
KTA: Regeln für Kerntechnische Anlagen (German rules for nuclear installations)
NDT: Non-Destructive Testing
OOP: Object-Oriented Programming
PC: Personal Computer
SPRINT RA230: "Methodology for development of knowledge-based systems"
SPRINT SP249: "Implementation of power plant component life assessment methodology
using a knowledge-based system"
SQL: Structured Query Language
TRD: Technische Regeln für Dampfkessel (German technical rules for steam boilers)


MIHAEL GRUDEN

TECHNOLOGY AWARENESS DISSEMINATION IN EASTERN EUROPE WITH
INTELLIGENT COMPUTER SYSTEMS FOR REMAINING POWER PLANT LIFE
ASSESSMENT - EUROPEAN UNION PROJECT TINCA

Abstract
Partners from Russia, Hungary and Slovenia are preparing an advanced intelligent computer
system together with MPA Stuttgart. The scope of the European Union project is to
disseminate advanced technologies for the remaining life assessment of power plant
components to Eastern Europe. The power plants in Eastern Europe are in critical need of
such assessment, since decisions must be made on prolonging the operating life of the plants
and on proposing environmental solutions for the plants where applicable. A knowledge and
experience exchange will provide a case and data base for materials and practices used in
Eastern Europe, to the future benefit of the Eastern and Western European participants.

1. Introduction
The changes in the Eastern European countries have a long-term orientation towards a market
economy. The previously strong central planning of the utilities has affected the state of
component life assessment: the utilities now run on low-budget programmes in which neither
life assessment nor any serious maintenance is performed. In the best cases, the power plants
received planned material, replacement parts and workforce to carry out component repairs,
needed or not.
All power plant engineers are now facing decisions on how to keep their plants in operation.
The growing weight of environmental requirements adds to this dramatic situation. Many
older plants in Eastern Europe have insufficient equipment to operate within the prescribed
local pollution limits. Power plants are forced to burn inadequate fuel, violating the pollution
standards and the operating procedures prescribed by the equipment manufacturers, while the
minimal safety requirements towards operating staff and local residents are neglected.
The failure of high temperature pressurized components is a critical issue. The component
lifetime assessment (CLA) of power plant systems is a vital activity for the engineers,
maintenance and operating staff. The plant engineers are confronted with the responsibility of
deciding what to do with a high temperature pressurized component and/or the plant itself
(for example, stop and re-inspect, reduce load, replace, etc.). Improvement of the methods
and procedures used for the assessment and management of remaining life, reliability and safety
of high temperature pressurized components is therefore extremely important. The level of
practice varies from country to country; where power plants were built by western companies
under licence or in cooperation, the level of CLA reflects the normal level of the country of
origin at the time of commissioning.
At the power plant level of the participating utilities, two benefits may be expected:

- extension of plant service life and
- reduction of maintenance cost.

These two goals can be met with a high-cost/low-risk approach, as opposed to the high-risk/
low-cost approach possible with the limited funds in the new economies of the East European
utilities. Common-sense compromise solutions can be founded on modern CLA and life
expectancy estimation methods. Detailed benefit analyses identify additional gains:

- reduction of energy production cost,
- improved power plant safety,
- reduced environmental damage,
- standardised plant component life assessment practice.
All these activities in the Western expert community are today more and more often supported
by complex expert or knowledge-based systems.
Much of the recent research effort in Europe, the USA and Japan has been devoted to the
development of such systems.
Examples of important current developments at the (west) European level are ESR
International (Expert System for Remaining life assessment), ESR VGB (developed for the
members of VGB, the German Technical Association of Large Power Plant Operators) and
the SP-249 system.
In Eastern Europe these modern tools are virtually unknown and hardly used to an appropriate
extent. The available systems are therefore not used in Eastern Europe in spite of huge
needs caused by outdated technologies and maintenance concepts.
MPA offered to coordinate and guide partners from several East European countries to
participate and form a new East Europe oriented software expert system with the short
name TINCA, derived from its long official name: Enhancing Technological awareness and
technology transfer in the area of advanced INtelligent Computer systems for the Assessment
of the remaining life, reliability and safety of power plant components.

2. Objectives and Tasks of TINCA


The main objective of the TINCA project is the dissemination of western European methods
and procedures to the East European countries. TINCA also intends to promote the exchange
of East European practices to the benefit of all participants.
The European Union project TINCA has three main objectives:
1. preparation and dissemination of information,
2. adaptation of software modules to the special needs in Eastern Europe,
3. promoting technology transfer to Eastern Europe.
2.1 Preparation and dissemination of information
2.1.1 Leaflets
In the first step it will be necessary to collect information for plant engineers in Eastern Europe
about advanced technologies for life prediction of high temperature pressurized components
and about the available possibilities of support by knowledge-based software systems and their
features.
Information leaflets are designed and distributed by each of the East European project partners.
Besides a general description of the existing systems, the basic elements of the methods for
prediction of remaining lifetime used in the knowledge-based systems are to be highlighted.
The leaflets are to be sent as first information to parties interested in the seminars described
later on. The research results obtained during the development of the leaflets will be used to
highlight essential points for the future user-oriented application of knowledge-based systems
in power plant practice in Eastern Europe.
2.1.2 Technical notes and special reports
As detailed information for interested parties is collected, formatted and assessed, special
technical notes about the existing knowledge-based systems will be prepared. These technical
notes will contain detailed information about the functionality of the software and about the
data stored in the databases and hypermedia parts.

These technical notes establish the expertise foundation for interested parties to assess the
possibilities and benefits of using the existing knowledge-based systems for a particular
application.
2.1.3 Establishment of information booths
MPA and the partners in Eastern Europe provide Eastern European users with access to
the program systems. In order to achieve this goal the partners organize or establish contact
booths in Eastern Europe. Every project partner from Eastern Europe will take care of this
task in his area and will become a technology dissemination centre for spreading the
advanced software systems in Eastern Europe. The project partners from Eastern Europe are
trained on the knowledge-based systems by MPA Stuttgart in installation, operation and
application, maintenance and update procedures. After that, the partners can perform the
training for other interested parties in their country, area or region.
2.2 Adaptation of software modules to the special needs in Eastern Europe
An essential emphasis is the adaptation of the existing software packages to the particular
conditions in Eastern Europe. The concept is to develop several additive modules which can
be linked into the existing knowledge-based systems. For example, if an interested party buys
the ESR International system, it can add these modules to enable a direct comparison of
western and eastern standards, guidelines, data and methods.
The basic organization of the expert software will be adapted, and the use of commonly
available software will enable widespread use by partners without special hardware.
2.2.1 Database modules
The different materials used in power plants in Eastern Europe require the development of new
database modules containing the relevant data of these materials. The database of materials
from Eastern Europe will allow the comparison of eastern and western materials and their
properties.
2.2.2 Hypermedia modules
The system should allow the comparison of Eastern European and advanced (West European)
codes, standards, methods and procedures, which should be integrated in the system. Referring
to the existing guidelines and standards in Eastern Europe, additional software modules in
hypermedia format have to be developed, qualified and integrated with the existing
software. Furthermore it is necessary to integrate typical case studies from power plants in
Eastern Europe into the software system.
2.2.3 Pilot System for Eastern Europe
A pilot expert system for Eastern Europe will be created, including additional modules with
special features for explaining and demonstrating the expert system to interested parties in
Eastern Europe.
The demonstrator tutor is also designed to handle the languages of the partners in Eastern
Europe (multiple-language add-on module) so that the language gap to the plant engineers in
practice is largely bridged.
2.3 Technology transfer to Eastern Europe
The methods of technology transfer can be adapted to the existing level of knowledge of each
partner. The level of translation into the native language necessary for the instructors to reach
a proper level of understanding will be managed with modern programming techniques.
2.3.1 Preparation of seminar program
The partners from Eastern Europe will take the role of multipliers for their countries to
disseminate the information and to advise interested parties. Therefore MPA Stuttgart will
prepare a seminar program together with the partners from Eastern Europe to inform plant
engineers about the usability of knowledge-based systems for questions of maintenance.
2.3.2 Organization of seminars
Every project partner from Eastern Europe will collect the addresses of the power plants of his
country and organize and carry out at least one seminar together with the interested plants.

Each East European partner (KORONA, MISKOLC, ERKAR, LENERGOREMONT) has
sent an engineer, responsible for the preparation of local seminars and other technology
transfer measures, to the project coordinator (MPA) for at least 8 to 12 months. These steps
take place between month 3 and month 18 of the project.

3. Activity plan
Task                                Description

1st year: Preparation and dissemination of information
  1.1 Leaflet                       Select and prepare information; set up leaflet
  1.2 Technical notes and           Select and prepare information; produce the
      special reports               notes and reports
  1.3 Establish contact booths      Questions of software licences, software
                                    maintenance, updates; consulting of
                                    interested parties

2nd year: Adaptation of software modules to the special needs in Eastern Europe
  2.1 Database modules              Materials database
  2.2 Hypermedia modules            Standards, guidelines, methods; examples
                                    (case studies)
  2.3 Pilot system for              Explain methods; explain software
      Eastern Europe

3rd year: Technology transfer to Eastern Europe
  3.1 Preparation of seminar        Contents; media
      program
  3.2 Organization of seminars      Invitations; carry out the seminars

4. Roles and tasks of Partners


The workload within the development of the TINCA package will be distributed among the
parties MPA, ISQ, ISPRA, KORONA, ERKAR, Univ. MISKOLC and
LENERGOREMONT:

MPA:
1. Coordinates the project.
2. Coordinates the software development.
3. Designs and supplies the additional software modules based on the developed systems.
4. Adapts the systems to the end-users' needs and requirements.
5. Builds up the pilot system.
6. Trains the partners from Eastern Europe.
7. Examines the end-users' needs and requirements in terms of specific materials and
components, based on former experience and cooperation with the Russian partners.

ISQ:
1. Develops parts of the hypermedia and database modules.

ISPRA:
1. Prepares and organizes the seminars in collaboration with the partners from Eastern Europe.

KORONA:
1. Is the East European partner coordinating the contacts with the partners of Eastern Europe
and
2. arranges the elaboration of the technical part of the proposal.

ERKAR, Univ. MISKOLC and LENERGOREMONT:
1. Complete the additional modules to be developed; translation of the system language into
the users' languages is the responsibility of each partner for his language.
2. Each partner is responsible for the creation of leaflets for local users and for the special
technical notes and reports needed to bring the system to the end users in the partner's country.
3. Define the specific requirements to the system in cooperation with MPA and implement
the additionally developed software.
4. Establish contacts to the interested parties in Eastern Europe.
5. Organize the seminars in collaboration with ISPRA.
6. Organize the technology transfer, elaboration and dissemination of the knowledge
necessary for the local use and the local and remote maintenance of the software.

The structure presented here is a starting point for the partners to engage with the task. As
the activities progress, the detailed schedule will be adjusted to incorporate all recent changes
and developments.

5. First results
The first efforts were made in elaborating the complete task and schedule for the first and
second year activities. In the field of software development, communication, database
exchange and the analysis of case studies have shown more problems than expected. The basic
approach to the opening structure is shown in Figure 1.
Reflecting these needs, work started on compiling the databases:

1. material databases: chemical composition, physical properties, temperature test
data, structural data, etc.,
2. standard materials from domestic steel producers, including cross-references to
similar western materials,
3. design: standards, procedures, codes, guidelines used in the partner's area,
4. RLA methods: standards, procedures, codes, guidelines used in the partner's area,
5. inspection planning methods and guidelines.

[Figure 1 residue: the database window offers a top-bottom drill-down through All Countries,
Country, Utilities, Plants and Blocks, with a 'Back to Main' control.]

Figure 1: Structure for the database access


The material databases' window, including access to chemical composition, physical properties,
temperature test data, structural data, etc., is presented in Figure 2. Standard materials from
domestic steel producers, including cross-references to similar western materials, and the
design standards, procedures, codes and guidelines used in the partner's area can be addressed
from the basic window sheet. RLA methods (standards, procedures, codes and guidelines used
in the partner's area) can be addressed from the basic window sheet similarly to the material
database. Inspection planning methods and guidelines have been addressed by generating a list
of all potential users in the partners' areas (Figure 3).
Reflecting the level of equipment available for the computerized expert system to be developed,
a widely used base software was introduced to achieve transparency at an early stage of the
work. For this purpose the package Microsoft Office was chosen. Combined with Visual
Basic to establish the cross-reference modules, it makes it possible to start testing and
verifying the collected data.
More sophisticated hypermedia software will eventually be introduced later during the
elaboration of the system.
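As an illustration of the cross-reference idea (the project itself uses Microsoft Office with Visual Basic; Python is used here only as a sketch, and the listed designations and equivalences are hypothetical examples, not project data):

    # Minimal sketch of an East-West material cross-reference lookup.
    CROSS_REFERENCE = {
        # (country code, local designation) -> nearest western equivalents
        ("SI", "13CrMo44"): ["DIN 1.7335", "ASTM A182 F11"],
        ("RU", "12Ch1MF"):  ["DIN 14MoV6-3"],
    }

    def equivalents(country_code, designation):
        """Return the list of comparable western materials, or an empty list."""
        return CROSS_REFERENCE.get((country_code, designation), [])

    print(equivalents("SI", "13CrMo44"))   # ['DIN 1.7335', 'ASTM A182 F11']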

[Figure 2 residue: the TINCA Material Info Database entrance sheet carries fields for Material
Name (e.g. 13CrMo44), Material Family (Ferritic Steels), Material Subfamily (Low Alloy
Steels), Material Group (1-1.25% Chromium Steels), Country (Slovenia, code SI), Standard
Number (1.7335) and a Note/Scope field, plus buttons for Material Data, Chemical
Composition and the Cross Reference List.]

Figure 2: Sample entrance sheet to the TINCA material database

[Figure 3 residue: a utility database record for the Magyar-EK Energia Kozpont / Hungary-EC
Energy Centre, Budapest (country code HUN), with location, contact, phone/fax and
classification fields. The note explains that the Centre was jointly established by the CEC and
the Government of Hungary as a foundation to strengthen cooperation within Hungary and
between Hungary and the EC in the field of energy management, in particular energy
conservation and efficiency, covering training, education, information, technology transfer and
exchanges of experience in energy planning and forecasting; since November 1992 the Centre
has also been responsible for the THERMIE technology dissemination programme in Hungary,
and the record recommends approaching it to establish a link with Hungarian power plant
engineering programmes.]

Figure 3: Example of a database file window for a Hungarian utility centre

6. Conclusions
The partners of the TINCA project started their work together in March 1995. The activities
are progressing at pace and the participants are working hard to fulfil all current obligations.
Gradually the actual state of the art in the field is emerging. The participants will have the
material for the opening presentation of the expert system under development prepared in
time. More details will be presented at the next meeting and could serve as a basis for future
expert system development in other countries.

7. Literature
1. Work documents of the TINCA project, 1994/1995
2. Mihael Gruden, Urban Jan: Experience and improvement of power plant operation due to
continuous monitoring of boiler drum life, International conference Life Management of
Power Plants, December 12-14, 1994, Proceedings, Edinburgh, UK, 1994
3. Mihael Gruden, Angelo Brčič: The low capital engagement approach to the pollution
control of Termoelektrarna Toplarna Ljubljana, Energy and Environment, Opatija,
Croatia, 1994

CHAPTER 3

SPECIAL COURSE ON INTELLIGENT SOFTWARE SYSTEMS AND
REMAINING LIFE MANAGEMENT

MODERN APPROACHES TO COMPONENT LIFE ASSESSMENT
- DAMAGE, DEGRADATION, DEFECTS

J M Brear, R D Townsend
ERA Technology, UK

1 INTRODUCTION
The first phase of component assessment generally comprises a code-based calculation of life
consumption and a review of operating and maintenance history. Should the component prove
unacceptable on any of the indicated criteria, then the assessor should move to a hands-on
inspection, involving conventional NDE methods and various metallographic techniques. This paper
outlines approaches available to assess the levels of damage and degradation present and to
determine the remaining life of the component. The particular methods required to predict the
behaviour of any defects found are also described.

The overall philosophy is to identify the physical metallurgical processes that are either directly life
limiting, eg creep cavitation, or are correlated with life consumption, eg thermal softening. For each
process, directly observable indicators are selected that can be used qualitatively or quantitatively in
life prediction. Interactions between processes are also considered.

In the presence of a defect, this philosophy is extended to include the effects of different loading
regimes on initiation, growth and fracture processes.

2 DAMAGE AND DEGRADATION

Components exposed to high temperature conditions, under which creep and other time
dependent processes occur, will suffer degradation of their properties over periods of extended
service. In low alloy steels creep damage leading to failure results from:

i) microstructural degradation and continuous reduction in creep strength during service
exposure

ii) intergranular creep cavitation

Both processes normally occur simultaneously; the prevalence of either is determined by the
initial structural state and purity and by the conditions of stress and temperature.
Metallographic methods of component life assessment are designed to generate information on
these processes.

Microstructural degradation and the corresponding thermal softening of the base metal can
result in a variety of microstructural effects in the steel, such as changes in the composition,
structure, size and spacing of carbides; in ferrite composition; in solid-solution strength; and in
lattice parameter.

Carbide characteristics, measured by direct observation or indirectly by hardness measurement,
have proved the most sensitive indicators of thermal degradation. Creep damage (cavitation)
can be measured directly.

Changes in grain orientation also occur with strain during elevated temperature service and these
can be monitored by direct observation. However, this technique has not yet reached the stage
where it is suitable for routine component assessment.

Implementation of the metallographic methods can be by removal of samples or non-
destructively 'in-situ' by replication. Although samples can be removed from most components,
there are situations in which 'in-situ' replication may provide the only possible approach to
microstructural evaluation, eg when the removal of a sample is geometrically difficult or is liable
to affect the integrity of the component, or when repeated observations are required. The two
major applications of replication techniques are (1) the study of microstructure (creep
cavitation, grain size, etc) using surface replication and optical microscopy, and (2) the
examination and identification of small second-phase particles by extraction replica techniques,
as, for example, for the purpose of interparticle spacing determination.

3 METALLOGRAPHIC TECHNIQUES

3.1 Surface Replication

Replicas can provide information on the condition of the material from which a component is made.
They are non-destructive and can be taken from any accessible point. They do, however, only
provide data relevant to the surface of the component. Samples extracted for metallographic
examination provide similar information, through the wall thickness, but at a limited number of
positions only. The methods of interpretation described here apply to both replicas and
metallographic samples. The information obtainable includes:

* State of Degradation
  Precipitate growth and spheroidization

* State of Damage
  Extent of creep cavitation and cracking.

Qualitative and quantitative methods of assessment are available and provide information that can be
used directly in life prediction.

3.2 Hardness Measurement

Hardness measurement can provide information on the state of degradation of ferritic steel
components. It is a non-destructive technique that can provide data on any accessible point on the
component surface. Similar data, through the wall thickness, can be obtained from extracted
samples. The information obtainable includes:

* State of Degradation
  Indirect measure of overall precipitate size and spacing

* Cross-weld Hardness Differential
  Indirect measure of creep strength differences

The measurements can be used for:

* Temperature estimation
* Qualitative life prediction
* Quantitative life prediction
* Weld failure location prediction

The same standard of surface preparation is required for hardness measurement as for replication.

3.3 Carbide Extraction Replicas


Carbide extraction replicas can provide information on carbide precipitate particle characteristics,
specifically:

* Carbide spacing
* Carbide size and morphology
* Carbide composition

The information obtained can be used for:

* Temperature estimation
* Qualitative life prediction
* Quantitative life prediction

4 INTERPRETATION OF SURFACE REPLICAS

4.1 Qualitative Techniques


State of degradation

The microstructure of low alloy, creep resistant steels evolves with time at service temperatures,
the most obvious visual change being the coarsening and spheroidization of the carbide
precipitates. This is shown schematically in Fig.1. The precise evolution is dependent upon the
initial, as-fabricated, state. More detailed schemes, taking into account both grain boundary and
grain interior precipitates, have been developed for base material and for heat affected zones
(Ref.10).

Damage location

Creep damage - cavitation or cracking - must be assessed correctly both for fitness for service
evaluation and for life prediction. In the case of weldments, it is an important part of damage
assessment to determine the microstructural region in which damage occurs. If several regions are
damaged, they should be assessed separately.

The structures occurring in a low-alloy steel weldment are determined by the temperature profile and
can be related to the iron-carbon phase diagram. If the weldment is subsequently renormalised, then
uniform fine-grained structures are produced throughout, with traces of the weld-beads visible on
etching as a consequence of slight differences in chemical composition.

On examination of a replica, the microstructural regions in which damage occurs should be
noted and the orientation, with respect to the weld, and the general distribution of the damage
recorded, prior to formal quantification. The distance between the damage and the fusion
boundary, or a similar unambiguous feature, should be given.

On examination of a sample, the same information should be recorded, together with the
position/variation of damage through thickness or association with the cusp region in the weld.

The location of the damage determines the quantification route to be used.

Damage classification

Various schemes of qualitative damage classification have evolved from the original proposals of
Neubauer (Ref.4). These have attempted to improve precision, increase the applicability to a
range of steels and microstructures and incorporate the effects of other forms of degradation
(Ref.5,6). These may be harmonised as shown in Fig.2 (Ref.11). This is intended to allow direct
comparison between the different schemes and to enable historical data, recorded according to
the simpler methods, to be re-interpreted in line with the newer.

4.2 Quantitative Techniques

Two methods of quantitative cavitation assessment have been validated, appropriate to different
regions of the weld microstructure:

* A parameter:
  - weld metal
  - coarse grained HAZ

* Cavity density:
  - parent material
  - fine grained HAZ
  - intercritical (Type IV) region
  - cusp region in a double V weld

Procedures for evaluating these follow.

The 'A' parameter

Originally developed for use on coarse-grained HAZ material, this method has since been
extended to parent material and weld metal. The 'A' parameter is defined as the number fraction
of cavitating grain boundaries encountered on a line parallel to the direction of maximum
principal stress. The measurements are made with an optical microscope, preferably using green
monochromatic light and a 40X objective and 10X or 12.5X eye-pieces fitted with a cross-hair
graticule, giving a magnification of 400X to 500X. Using a micrometer stage, the replica to be
measured is traversed along the direction of the maximum principal stress. As each grain
boundary is intersected by the cross-hair point it is classified as either damaged or undamaged
using the following set of rules (Ref.2):

Rule 1 : An intersected grain boundary is only observed between the first triple point on either side of
the intersection. If the boundary extends beyond the field of view then the point at which it
leaves is treated as the triple point.

Rule 2: A grain boundary is classified as DAMAGED if it contains one or more cavities (or
microcracking) along its observable length including cavities centred on the triple point
itself, otherwise the boundary is UNDAMAGED. If in doubt as to whether a feature is a
cavity or not it is disregarded.

Rule 3: Multiple intersections with the same boundary are each counted and are classified with the
damage state of the whole boundary.

Rule 4: Intersections with triple points count as one boundary intersection. The classification of
DAMAGED or UNDAMAGED is determined by a 'majority vote' of damage states of the
three joined boundaries.

With reference to Fig.3, boundaries A, B and C are DAMAGED according to Rule 2. Similarly,
boundaries D, G and J are UNDAMAGED using the same rule. Boundary J also illustrates the
definition of a boundary in Rule 1 in that it extends only between the first two triple points.

Boundary intersections H and I are both counted, and must have the same damage state (in this
case UNDAMAGED) since they are on the same boundary (Rule 3).

Intersections E and F are examples of triple point intersections classified according to the
'majority vote' of Rule 4; that is, E is damaged and F is not.

If the number of damaged boundary intersections is N_D and of undamaged ones N_U, then the
number fraction of cavitating boundaries, A, is simply defined as:

    A = N_D/(N_U + N_D)

The length of the traverse (L) should be recorded and the grain size, defined by the mean linear
intercept, calculated:

    l = L/(N_U + N_D)

In order to achieve the necessary precision in A parameter value, it is usually necessary to count a
minimum of 400 grain boundaries, achieving this by a series of parallel traverses, separated by two
fields of view.
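The book-keeping implied by these rules is straightforward; the following Python sketch assumes each boundary intersection along the traverses has already been classified 'D' (damaged) or 'U' (undamaged) according to Rules 1-4, and the traverse length and example counts are illustrative:

    def a_parameter(intersections, traverse_length_um):
        """Return (A, mean linear intercept grain size) for a set of traverses."""
        n_d = sum(1 for x in intersections if x == "D")
        n_u = sum(1 for x in intersections if x == "U")
        total = n_d + n_u
        if total < 400:
            print(f"warning: only {total} boundaries counted, 400 recommended")
        A = n_d / total                            # A = N_D/(N_U + N_D)
        grain_size = traverse_length_um / total    # l = L/(N_U + N_D)
        return A, grain_size

    # 500 classified intersections over a 12.5 mm total traverse:
    data = ["D"] * 85 + ["U"] * 415
    print(a_parameter(data, 12_500.0))   # -> (0.17, 25.0 micrometres)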

Cavity density

The cavity density is most simply defined as the number of cavities per unit area.

Measurement may be by direct observation or through photographs. Microscope requirements
are as for A parameter determination, with the addition of a camera if photographs are to be
taken and a rectangular grid to allow precise definition of the area observed.

The replica is traversed, in the direction of the maximum principal stress, ensuring that there is no
overlap between successive fields of view. (Small gaps between fields are acceptable.) The total
length of the traverse, or the sum of the lengths of the fields of view, is recorded. For each field of
view, the total number of cavities observed within the field is noted. If there is any doubt as to the
identification of a feature, it is to be ignored. In cases of cavity linkage, clearly identifiable linked
cavities should be counted individually and the fact that linkage has occurred should be noted.

Counting may be by direct observation through the microscope. Alternatively, a photograph of each
field of view may be taken and the cavities counted on an enlarged print. This approach is often
more accurate at high cavity densities. To ensure that every cavity is counted once only, their images
on the photograph should be pricked through.

As with the A parameter, determination of the cavity density for each traverse separately serves as a
check on material and damage homogeneity.
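A minimal sketch of this evaluation, with an assumed (hypothetical) field-of-view size:

    FIELD_AREA_MM2 = 0.25 * 0.20   # assumed field of view, mm x mm

    def cavity_density(counts_per_field):
        """Cavities per unit area for one traverse (number per mm^2)."""
        return sum(counts_per_field) / (len(counts_per_field) * FIELD_AREA_MM2)

    # counts per field of view, one list per traverse:
    traverses = [[3, 5, 4, 6], [4, 4, 5, 5], [2, 6, 3, 7]]
    densities = [cavity_density(t) for t in traverses]
    print(densities)   # similar values per traverse suggest homogeneous damage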

4.3 Life Prediction

Continuum damage mechanics

Accurate life prediction requires a model that is mechanistically realistic, capable of predicting the
evolution of damage, degradation and deformation up to the end of life and able to use quantitative
measurements actually obtained from plant.

The continuum damage mechanics approach meets these requirements. It comprises coupled
equations for deformation and damage rates, of the form:

    dε/dt = f(T)·σ^n·ε^(-ν)·(1-ω)^(-n)
    dω/dt = g(T)·σ^χ·(1-ω)^(-φ)

with

    T   temperature
    σ   stress
    ε   strain
    ω   damage

Solution of this pair of equations yields a relationship between life fraction, strain fraction and
damage:

    (1 - t/t_r)^(1/λ) = 1 - (ε/ε_r)^(ν+1) = (1 - ω)^((n+1)/λ)

where

    n   creep stress exponent
    ν   primary hardening exponent
    λ   tertiary ductility ratio

This equation forms the basis for predicting creep life and time to crack initiation. It is also fully
compatible with creep fracture mechanics (C*-type approaches) and can be adapted to include cyclic
creep and creep-fatigue effects. It is necessary to relate the physical measures of creep damage - the A
parameter and the cavity density - to the state variable, ω, or the strain fraction, ε/ε_r.

Theoretical studies yield the following relationships, which have been confirmed experimentally:

A parameter:

    ω = δ·A

Cavity density:

    ε/ε_r = N_A/N_F

where δ is a constant, ε_r is the rupture ductility and N_F the cavity density at failure.

Calculations based on the A parameter

In the absence of a crack, the following relationships may be used:

    LF = 1 - (1 - δ·A)^(n+1)

    t_remaining = t_service·(1 - LF)/LF

where

    LF        = life fraction consumed = t_service/t_r
    A         = number fraction of cavitated grain boundaries, measured on a line
                parallel to the principal stress axis
    n         = stress exponent for creep
    ν         = primary hardening exponent
    λ         = ε_r/ε_s
    ε_r       = creep rupture strain
    ε_s       = Monkman-Grant parameter = ε̇_m·t_r
    ε̇_m       = minimum creep rate
    t_r       = creep rupture life
    t_service = service life to date

If the damage is uniform through the section, then this time is the time to failure; if the damage is
localised, this time is the time to crack initiation.

In the presence of a crack, the relationship

    1 - (ε/ε_r)^(ν+1) = (1 - δ·A)^((n+1)/λ)

defines the residual ductility fraction used in the crack growth rate equation.

Values of the parameters n, ν, λ and δ are dependent on material, stress and temperature.
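A hedged numerical sketch of this calculation, using the relationships in the reconstructed form given above; the values of δ, n and the inputs are illustrative assumptions only:

    def remaining_life(A, t_service_h, n=6.0, delta=1.0):
        """Remaining life (hours) from the A parameter, per the relations above."""
        LF = 1.0 - (1.0 - delta * A) ** (n + 1.0)   # life fraction consumed
        return t_service_h * (1.0 - LF) / LF        # t_service*(1-LF)/LF

    print(round(remaining_life(A=0.17, t_service_h=100_000.0)))  # illustrative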

Calculations based on cavity-density

In the absence of a crack, times to failure (for through-section damage) or crack initiation (for local
damage) are given by

    LF = 1 - [1 - (N_A/N_F)^(ν+1)]^λ

    t_remaining = t_service·(1 - LF)/LF

and all constants and variables are defined as before.

In the presence of a crack, the remaining ductility fraction is calculated directly from:

    1 - ε/ε_r = 1 - N_A/N_F

As for the A parameter, lower bound, realistic or probabilistic calculations may be performed.

Calculations based on cavity classification method

An approximate calculation may be made by estimating the A parameter value from the qualitative
cavity classification.

A qualitative assessment of the damage level may be compared with the observed relationship
between 'A' and Neubauer's classification, and an upper bound 'A' value can be selected. The
maximum 'A' value can then be used in any of the 'A'-life fraction equations to yield a suitable life
estimate (Ref.2).

Alternatively, the damage classification can be related to life fraction directly. Figure 4 gives a plot
of damage classification vs life fraction, whence minimum and maximum remanent life fractions (and
hence lives) can be obtained. Ranges for three material states (all 1Cr½Mo steel) are given. These
include a ductile parent material, a coarse grained HAZ material of intermediate ductility (both
Ref.10) and a brittle (high impurity content) coarse grained HAZ (Ref.2).

5 INTERPRETATION OF HARDNESS DATA

5.1 Hardness and creep strength


The creep strength and hardness of ferritic steels are essentially controlled by the same
microstructural processes. The materials deform plastically at ambient and elevated temperatures by
the movement of dislocations through the ferrite crystal matrix. Hardness and creep strength are both
a measure of the resistance to this movement offered by the matrix dispersion of alloy carbides
(typically vanadium, chromium and molybdenum carbides). In principle, therefore, it should be possible
to estimate creep strength, and hence expired and remaining lives, from a measure of surface
hardness. In practice several approaches have been developed.

The hardness values measured can be used in a variety of ways:

* as a means of identifying critical component regions where hardness is markedly different from
that which should be expected for a satisfactory material, eg overheated regions, improperly heat
treated components

* in combination with calculational assessment of remaining life and creep damage quantification,
allowing improved predictive accuracy and wider coverage of the component

* as a quantitative measure of microstructural degradation for input to base material and weldment
creep models.

5.2 Temperature estimation

The strength of low-alloy steel changes with service exposure in a time and temperature dependent
manner. Thus, any measure of the change in strength during service (eg change in hardness) may be
used to estimate a "mean" operating temperature for the component. This approach is particularly
suitable when strength changes in service occur primarily as a result of carbide precipitation and
growth (microstructural coarsening). Strain-induced softening can often be neglected for the low
strains involved in plant.

The tempering responses of steels at typical service temperatures, as evidenced by hardness
changes influenced by time (t) and temperature (T) of exposure, can be described by the Sherby-
Dorn parameter, P = log(t) - (q/T), where T is in K. A correlation between hardness and the Sherby-
Dorn parameter can be obtained by ageing a given material, with initial hardness H_0 (at t = 0), at
temperature T, and measuring the change in hardness as a function of time t. The resulting
relationship is H = f(P). The curve, however, is unique to the starting material condition represented
by the initial hardness H_0. Figure 5 is a schematic illustration showing a typical experimentally
derived H = f(P) correlation obtained on 2¼Cr1Mo material having an initial hardness of H_0 = 190
(Ref.7).

Assuming that hardness is inversely related to interparticle spacing, a formal description of these
ageing curves can be defined, by analogy with Lifshitz-Slyozov-Wagner-Greenwood coarsening
kinetics:

    (H_t - H_SS)^(-1) = (H_0 - H_SS)^(-1) + C_0·exp(-Q/RT)·t

where H_SS is the saturation (solid solution) hardness level. The temperature dependence of the
Sherby-Dorn parameter is thus

    q = Q/(R·ln 10)

where R is the gas constant and Q is related to the self-diffusion activation energy. These
relationships may be used to predict future softening trends or to determine a mean temperature if two
successive hardness measurements are available. (In some cases, the hardness difference between
'hot' and 'cold' regions of a component may be used.)
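A hedged Python sketch of the two-measurement temperature estimate; all constants (H_SS, C_0, Q) are illustrative assumptions for a low-alloy steel, not fitted values:

    import math

    R = 8.314          # J/(mol K)
    Q = 250_000.0      # J/mol, of the order of a self-diffusion activation energy
    H_SS = 120.0       # assumed saturation (solid-solution) hardness, HV
    C0 = 5.0e8         # assumed pre-exponential constant, 1/(HV h)

    def mean_temperature(h1, t1, h2, t2):
        """Solve the coarsening law above for T, given hardness at two times."""
        slope = ((h2 - H_SS) ** -1 - (h1 - H_SS) ** -1) / (t2 - t1)
        return -Q / (R * math.log(slope / C0))   # mean temperature in kelvin

    print(mean_temperature(185.0, 50_000.0, 178.0, 100_000.0))  # ~810 K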

5.3 Life prediction

As a result of the extensive post-exposure stress rupture programmes that have been carried out over
recent years, databases relating material hardness empirically to rupture life are becoming available.
However, within these databases no compensation has currently been made for possible variations in
heat treatment or other process variables, and therefore a wide scatterband in predicted rupture life
capability exists. A lower bound fit to these data is therefore generally adopted. Nevertheless, despite
the limitations, the data already constitute a useful indicator of minimum remaining life capability
based on hardness measurement.

In terms of application, if measured component parent material hardness values indicate a minimum
remaining life in excess of the target life, then no further refinement is required at this stage; if,
however, the hardness values suggest the converse, then refinement of the analysis using quantitative
methods or accelerated post-exposure testing should be considered.

Figure 6 gives the data currently available for 2¼Cr1Mo steels. The rupture life axis is temperature
compensated, the hardness axis is stress compensated. For known operating conditions - stress and
temperature - the measured hardness of the material can be used to generate a range of predicted
life. Typically the scatter is a factor of 3 smaller than that obtained using standard materials data
only.

Life can also be estimated from the qualitative degradation class. Figure 7 shows the relationship
between degree of spheroidisation and life fraction for the same three materials as were included in
Fig.4. It is immediately apparent that for the most ductile material, degradation class is the more
sensitive indicator of life consumption, whilst for the most brittle material, damage gives the better
prediction. Most importantly, it is clear that for intermediate ductility materials, both factors need to
be taken into account.

5.4 Weld failure location prediction

Using the data of Fig.6, it is possible to construct a weld predictor diagram (Fig.8). This shows weld
metal hardness against parent metal hardness and two lines corresponding to equal rupture strengths,
for sub-critically stress relieved welds and for fully renormalised welds (Ref.8).

Plotting a point to show the current hardnesses of a weldment allows a prediction of failure location to
be made. Above the relevant line, parent material failure is expected; below it, weld metal failure is
expected.

It is possible for the hardnesses of weld and parent material to reduce at different rates with service
exposure, causing the plotted point for the weld to cross the line, giving a transition in failure location
with service life. This approach is currently being extended to include Type IV failures.
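Programmatically, reading the predictor diagram reduces to a comparison against the relevant equal-strength line; the line coefficients below are hypothetical placeholders, not the published lines of Fig.8:

    def failure_location(h_weld, h_parent, intercept=20.0, slope=0.9):
        """Classify expected failure location against an equal-strength line."""
        line = intercept + slope * h_parent       # equal rupture strength line
        return "parent material" if h_weld > line else "weld metal"

    print(failure_location(h_weld=170.0, h_parent=160.0))   # parent material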

6 INTERPRETATION OF CARBIDE EXTRACTION REPLICAS

6.1 Temperature estimation

Methods of temperature estimation analogous to those used for hardness measurements have been
proposed, and have met with some success. Methods of time-temperature estimation based on
carbide composition and morphology (Ref.9) are also available.

6.2 Life prediction

A mechanistic model based method of quantitative life prediction has been established on the
following principles.

The presence of carbide precipitates was postulated to result in a 'threshold' stress which must be
exceeded to allow dislocations to climb over the particles, so that

    σ_0 = α'·G·b/λ

where α' is a constant, G is the shear modulus, b is the Burgers vector, and λ is the mean
interparticle spacing. The creep-rate equation under the effective stress can be written as

    dε/dt = B·(σ - σ_0)^n

where σ is the applied stress and B is the constant containing the temperature dependence, defined
as

    B = B_0·exp(k_A·T)

The kinetics of carbide coarsening can be described as

    λ_t = λ_0 + C_0·exp(β·T)·t

where λ_t is the instantaneous interparticle spacing at time t, λ_0 is the spacing at t = 0, T is the
temperature in K, and C_0 and β are constants. Thus:

    dε/dt = B_0·exp(k_A·T)·[σ - α'·G·b/(λ_0 + C_0·exp(β·T)·t)]^n

By substituting values of B_0, k_A, n, α', λ_0, C_0 and β, this equation can be integrated between the
limits t = 0 and t = t, and the strain accumulated up to that time can be determined. Because the creep
rate is known, the failure time t_r at any arbitrarily selected value of failure strain can be calculated.
Using the above model, reasonable agreement has been demonstrated between rupture life
predictions from precipitate size and actual rupture lives determined by experiment.
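The integration can be sketched numerically as follows; every constant here is an illustrative assumption, and the coarsening law is used in the form reconstructed above:

    import math

    def rupture_time(stress_mpa, T, eps_fail=0.05, B0=1.0e-16, kA=0.01, n=4.0,
                     alpha_Gb=2.0e4, lam0=100.0, C0=1.0e-5, beta=0.01, dt=10.0):
        """Integrate the creep rate until eps_fail is reached (spacing in nm,
        time in hours); returns the predicted rupture time in hours."""
        eps, t = 0.0, 0.0
        while eps < eps_fail:
            lam = lam0 + C0 * math.exp(beta * T) * t       # coarsening law
            sigma_eff = max(stress_mpa - alpha_Gb / lam, 0.0)  # sigma - threshold
            eps += B0 * math.exp(kA * T) * sigma_eff ** n * dt
            t += dt
            if t > 1.0e7:          # no failure within ~10^7 h
                return math.inf
        return t

    print(rupture_time(300.0, 823.0))   # illustrative only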

The model is based on the premise that once the kinetics of carbide growth are known, the creep rate
and hence the rupture life can be calculated. The initial carbide spacing λ_0 is usually unknown.
Therefore, monitoring of the carbide spacing λ_t as a function of time, or at different locations of
known temperature, is necessary in order to determine the carbide-growth kinetics.

For application of the model to a field component, ideally samples or replicas from three different
temperatures should be removed and the carbide spacing λ_t measured. From these values the
constants λ_0, C_0 and β in the carbide coarsening kinetics equation can be determined. The service
applied stress and the local temperature where remaining life estimates are to be made should be
known. Values for B_0, k_A, n and α' have to be assumed. All these values are substituted into the
above equation to compute a creep curve for the material. From the creep curve, the time to reach a
given critical strain or the time to rupture can be estimated.

The application of the carbide-coarsening model at present has numerous limitations. Carbide
distributions in steels are non-homogeneous and the starting microstructures for different components
are never the same. Therefore it is inevitable that the carbide coarsening kinetics specific to the
component must be determined by taking samples or replicas from locations of known temperature.
This is difficult to achieve in practice, since local temperature measurements in components are
rarely made. Further, if the temperatures and stresses are known, the expended life fraction can be
calculated directly instead of using the carbide-coarsening model, and the answers are expected to be
at least as accurate, if not more so.

Even after the carbide coarsening kinetics for the particular cast of the component under examination
have been determined, the other constants needed, such as B_0, k_A, n and α', still have to be
assumed using bounding values of data obtained on other heats. Further, the failure criterion
assumed in terms of a critical strain is arbitrary. The carbide coarsening model thus contains many
constants which are difficult to obtain and evaluate.

Further, the practical application of the technique is difficult. For example, where samples cannot be
removed from the component, in-situ carbide extraction techniques have to be employed, which are
difficult in plant situations. Additionally, measurement of carbide spacing from extraction replicas is
extremely subjective, requires a significant time commitment to achieve a representative measurement
and generally gives limited reproducibility.

7 HARMONISATION OF RESULTS

In assessing the state of a component, emphasis is placed on using several complementary
techniques rather than relying on single methods of evaluation and assessment. This is particularly
true where the effects of a number of interacting metallurgical processes have to be correctly
interpreted.

In Europe, there has been much emphasis on cavitation based assessment, and only recently have
the effects of other degradation processes been incorporated into the interpretative schemes
(Ref.5,6). However, at present these modifications seem very limited in their applicability beyond the
particular classes of material and component design on which they were derived. More extensive
schemes (Ref.10) have been developed elsewhere, but these are subject to the same limitations.

It is considered, therefore, that a realistic future expectation is the development of an integrated
metallographic approach which correctly balances the influences of time, temperature and stress on
softening and on strain and damage accumulation.

As a preliminary move towards such an integration, the data of Figs.4 and 7 have been combined in
Fig.9 to generate a damage-degradation map. This shows the evolution of the three materials
considered in terms of damage and degradation class and contrasts damage-only, degradation-only
and mixed behaviour.

On this map, life fraction contours - interpolated from the source data in Refs.2, 10 - have been
superimposed. These show, for typical service conditions, how damage and degradation processes
interact to control creep life.

8 DEFECTS

No material or structure is free from defects, nor immune to their formation. Ongoing improvements
in non-destructive examination techniques have provided the means to locate, characterize, size and
monitor defects such that it is now realistic to formulate rigorous procedures for their assessment.
Such procedures give a firm basis for run, repair, replace decisions and for defining inspection scope,
frequency and precision. They reflect current standards (Refs.12-16) and ongoing research worldwide
(Refs.17-19).

The procedure described here addresses the assessment of defects - either actual or postulated - in
components operating at elevated temperatures. It includes treatment of crack initiation and growth
under creep, fatigue and creep-fatigue.

The principles of each stage of the assessment process are outlined and detailed calculation
procedures are given. Throughout, the emphasis is on achieving an efficient compromise between
accuracy and simplicity.

The procedure covers the following aspects of defect analysis.

Failure Process

* Global deterioration
  - embrittlement
  - ageing
  - creep damage

* Crack initiation
  - by creep, fatigue and creep-fatigue
  - from manufacturing/fabrication defects
  - from accumulated damage

* Crack growth
  - by creep, fatigue and creep-fatigue
  - interaction with ligament damage

* Failure criteria
  - global creep failure
  - critical crack size (brittle fracture, plastic collapse)
  - leak-before-break

Materials

The procedure is applicable to ferritic and austenitic steels for which long term creep rupture and
ductility data are available, together with some fatigue data.

Components

The procedure covers components subject to steady mechanical and cyclic thermal or mechanical
loading, at elevated temperatures in or below the creep range.
At present it is restricted to components subject to 'global shakedown', that is, regions experiencing
cyclic plasticity are sufficiently small that the overall instantaneous load-deformation behaviour of the
structure is linear.

9 SERVICE PARAMETERS RELEVANT TO DEFECT ASSESSMENT

9.1 Cause of Cracking

Prior to performing the calculational defect assessment, the most likely cause of cracking should be
identified. This will be based upon the findings of conventional non-destructive examination (NDE),
which should indicate the size, form and location of the defect(s); local metallographic examination
(especially surface replication and hardness measurement), to characterize the general material
condition and any damage local to the cracking; and visual inspection - including dimensional checks
- to define the general component condition.

Particular situations that may be discovered include:

* Evidence of stress corrosion or environmentally assisted cracking. In this case further advice
should be sought before proceeding.

* Evidence of overheating, e.g. distortion plus excessive material degradation. If this is local, then
a repair may be the most cost effective solution. In any case the cause should be rectified.

* Evidence of overstressing, e.g. distortion sometimes accompanied by creep damage. This
should be considered in the same way as overheating.

* Evidence of a general end-of-life situation, e.g. general degradation and/or damage in the
component, sometimes with excessive deformation. Care should be taken to use appropriate
materials data if proceeding with an assessment in such cases. Such components should only be
kept in service with cracks for a short time, until repair or replacement can be effected. This
procedure may be used to underwrite such operation.

* Evidence of a local end-of-life situation, e.g. degradation and/or damage or fatigue cracking local
to a stress raising feature.

* Evidence of crack initiation and/or growth from a pre-existing defect.

9.2 Operating Conditions


Loading and temperature histories are required for the total assessment period, past and future.
Sensible assumptions regarding future operation should be made.

Normal temperature variation during operation can be accommodated by calculating an effective
temperature for the life limiting process. Cyclic operation and start-up transients are included in the
fatigue analysis. Major changes - of long term duration - in operating temperature can be dealt with
by noting that a general time-temperature equivalence can be established for creep dominated
processes.

All applied stresses should be categorized as either primary (in equilibrium with external loads - e.g.,
mechanical) or secondary (in internal equilibrium - e.g. thermal and residual).
Account should be taken of the results of previous code-based calculations which should generate
estimates of steady state stresses, transient stresses and life fractions consumed to date by creep
and fatigue.

Previous code-based calculations will have divided the service history into periods of steady-state
operation, each characterized by a stress and temperature, and identified distinct categories of
service cycle, each characterized by heating/cooling rates and pressure and thermal stress ranges.
This information can be used directly in the defect assessment.

9.3 Crack Parameters

Defects are classified as:

known (or postulated) to be present at start of service

known to have formed during service

discovered during service

and, based on the NDE data, as:

volumetric
planar

point

In general, defects found during service are conservatively assumed to have existed from the start of
service at the same size as when discovered.

An accurate measure of crack size - in terms of length and through thickness depth - is required
together with as much information on the position and geometry of the defect as is available.

The generally irregular shape of a defect is idealised to an ellipse of axes 2a, 2c, based upon the
information available from the NDE data. If the defect is not aligned with a plane of principal stress,
then it should be projected onto the three principal planes and the stress intensity factors and
reference stress calculated for each plane. The assessment should be based upon the projection
onto the plane giving the highest values for these parameters. Further advice should be sought if:

the defect is at an angle of >20° to this plane

there is less than 20% difference in either of these parameters between two planes

the highest stress intensity and the highest reference stress lie on different planes

one of the principal stresses is significantly compressive (i.e. the second in magnitude)

Interactions between defects should be accounted for. In general, the effective dimensions after
interaction are those of the overall containment rectangle.

If there are multiple defects, interactions may need to be considered iteratively.
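
Where interaction checks are automated, the containment-rectangle rule above reduces to simple envelope arithmetic. The following is a minimal sketch assuming two coplanar surface defects already idealised as rectangles; the class, function names and the decision that the defects interact are ours, not part of the procedure.

```python
from dataclasses import dataclass

@dataclass
class SurfaceDefect:
    depth: float    # a: through-thickness depth (mm)
    length: float   # 2c: surface length (mm)
    centre: float   # position of the defect centre along the surface (mm)

def combined_defect(d1: SurfaceDefect, d2: SurfaceDefect) -> SurfaceDefect:
    """Effective dimensions after interaction: the overall containment rectangle."""
    left = min(d1.centre - d1.length / 2, d2.centre - d2.length / 2)
    right = max(d1.centre + d1.length / 2, d2.centre + d2.length / 2)
    return SurfaceDefect(depth=max(d1.depth, d2.depth),
                         length=right - left,
                         centre=(left + right) / 2)

# Two neighbouring defects judged to interact are replaced by their envelope;
# with several defects the combination may need to be repeated iteratively.
print(combined_defect(SurfaceDefect(3.0, 10.0, 0.0),
                      SurfaceDefect(5.0, 6.0, 9.0)))
```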

9.4 Stress Analysis

The relevant stresses are those which would exist in the neighbourhood of the defect if the
component were uncracked. Stress intensification factors are calculated within the procedure itself.

Stresses should be classified as:

Primary - due to loads which contribute to plastic collapse

in equilibrium with external forces

e.g. mechanical loads, internal pressure, long range thermal and residual
stresses

Secondary - due to forces which do not contribute to plastic collapse

in internal equilibrium

e.g. short range thermal and residual stresses

Peak - due to local stress raising features

Initially, code-derived stresses are used. When greater accuracy is needed, simplified inelastic
methods are used where possible, shakedown analysis being preferred.
Alternatively, elastic analysis may be performed, with the results corrected for plasticity. Neuber's
method is commonly applied.

Initial elastic and creep redistributed stresses are required - at the critical point(s) for initiation and
through the structure for crack growth. The timescale for redistribution should also be determined,
from creep/relaxation data.

For fatigue and creep-fatigue assessment, typical operational cycles should be analysed and - using
creep/relaxation and (cyclic) stress strain data - hysteresis loops derived. From these the stress and
strain ranges may be obtained.

10 MATERIALS DATA REQUIREMENTS FOR DEFECT ASSESSMENT

As understanding of materials behaviour improves, unified models of creep and plasticity are being
derived. This procedure is formulated to use these approaches where possible, thus allowing
consistent description of flow and creep strengths, rupture lives and ductilities, damage and
hardening. Potentially, complete integration of these with the fatigue models is possible. Such
approaches have great value where raw data are in short supply. In the absence of data appropriate
to such models, most of the information required for defect assessment can be obtained or estimated
from standard data tables. In many cases, simple approximations are also available (Ref.13). These
can be used if no better information is currently available. They are also suitable for a preliminary,
simplified defect assessment.

10.1 Creep Rate, Ductility and Life

Where possible, detailed creep data appropriate to continuum damage type models should be used.
Simple power law expressions can be used as a first approximation. Ductility data are required for
crack growth assessment.

Unified model

This is based on continuum damage mechanics and describes the accumulation of strain (\epsilon) and
damage (\omega) at a given stress (\sigma) and temperature (T):

\dot{\epsilon} = \dot{\epsilon}_0 \, (\sigma/\sigma_0)^n \, (1-\omega)^{-n} \, \exp(-Q_C/RT)

\dot{\omega} = \dot{\omega}_0 \, (\sigma/\sigma_0)^\nu \, (1-\omega)^{-\phi} \, \exp(-Q_D/RT)

with the following materials properties:

\dot{\epsilon}_0, \sigma_0 - fundamental flow rate and strength

Q_C, Q_D - activation energies

n, \nu, \phi - exponents

and R - the Gas Constant = 8.3143 J mol^-1 K^-1

Expressions for ductility and rupture life are also available.
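
For orientation, the coupled equations can be integrated numerically to produce a strain-time curve and a rupture life (taken here as \omega approaching 1). The sketch below is illustrative only: the explicit Euler scheme and every parameter value are assumptions standing in for calibrated materials data, and the equation form is the reconstruction given above.

```python
import math

R = 8.3143  # gas constant, J/(mol K)

def integrate_creep(sigma, T, p, dt=1.0, t_max=2.0e5):
    """Euler integration of the reconstructed strain/damage equations.
    Returns (time to omega ~ 1 in hours, accumulated strain)."""
    eps, omega, t = 0.0, 0.0, 0.0
    s = sigma / p["sigma0"]
    arr_c = math.exp(-p["Qc"] / (R * T))   # Arrhenius factors
    arr_d = math.exp(-p["Qd"] / (R * T))
    while omega < 0.99 and t < t_max:
        eps_rate = p["eps0"] * s ** p["n"] * (1 - omega) ** (-p["n"]) * arr_c
        dmg_rate = p["omg0"] * s ** p["nu"] * (1 - omega) ** (-p["phi"]) * arr_d
        step = min(dt, 0.005 / dmg_rate)   # keep damage increments small
        eps += eps_rate * step
        omega += dmg_rate * step
        t += step
    return t, eps

# Purely illustrative constants -- calibrate against real rupture data.
props = dict(eps0=1.0e15, omg0=1.0e15, sigma0=100.0,
             n=5.0, nu=4.0, phi=4.0, Qc=300e3, Qd=300e3)
print(integrate_creep(sigma=60.0, T=823.0, p=props))
```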

Standard data

Rupture lives may be obtained directly from standard data, by interpolation or use of parametric
formulae.

Creep curves (strain-time and thence minimum creep rate) may be estimated by plotting the iso-strain
data and then interpolating at the required stress across the series of curves.

Rupture ductility (\epsilon_r) may be obtained from the minimum creep rate (\dot{\epsilon}_{min}), derived as above, the
rupture life (t_r) and the ductility parameter \lambda:

\epsilon_r = \lambda \, \dot{\epsilon}_{min} \, t_r

10.2 Stress Relaxation

Stress relaxation by creep can occur during cyclic operation. It is therefore useful to have actual
relaxation data, rather than rely on forward creep data, in the stress analysis.

Unified model

No simple stress relaxation expression is available, but the form of equation due to Feltham is
broadly consistent with the creep model:

\Delta\sigma = \sigma_0 \, B \, \ln(3600\,\Delta t + 1)

where \Delta\sigma is the stress decrease due to relaxation

\sigma_0 is the initial stress

\Delta t is the relaxation period (in hours)

B is a material constant

Standard data
If the minimum creep rate \dot{\epsilon}_{min} is derived, as above, for the initial stress \sigma_0, then

\Delta\sigma = E \, \dot{\epsilon}_{min} \, \Delta t

where E is the tensile modulus
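
As a worked comparison, both estimates can be evaluated for the same dwell. The sketch below assumes the Feltham form as reconstructed above, together with illustrative values for B, E and the minimum creep rate.

```python
import math

def feltham_drop(sigma0, B, dt_hours):
    """Stress decrease: delta_sigma = sigma0 * B * ln(3600*dt + 1)."""
    return sigma0 * B * math.log(3600.0 * dt_hours + 1.0)

def forward_creep_drop(E, eps_rate_min, dt_hours):
    """Standard-data estimate: delta_sigma = E * eps_rate_min * dt."""
    return E * eps_rate_min * dt_hours

sigma0 = 120.0  # MPa, illustrative initial stress
print(feltham_drop(sigma0, B=0.02, dt_hours=1000.0))   # ~36 MPa
print(forward_creep_drop(E=1.6e5, eps_rate_min=2.0e-7,
                         dt_hours=1000.0))             # 32 MPa
```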



10.3 Plasticity Data


For fatigue assessment, cyclic plasticity data are preferred to simple tensile data. Ordinarily a
Ramberg-Osgood relation is assumed - this is compatible with the creep formulation.

Unified model

This is a simple inversion of the creep equation:

For load control:

\dot{\epsilon} = \dot{\sigma}/E^* + \dot{\epsilon}_0 \, (\sigma/\sigma_0)^n \, (1-\omega)^{-n} \, \exp(-Q_C/RT)

For strain control:

\sigma = \sigma_0 \, (1-\omega) \, (\dot{\epsilon}/\dot{\epsilon}_0)^{1/n} \, \exp(Q_C/nRT)

where E^* is the effective modulus

and \dot{\epsilon} is the applied strain rate

or \dot{\sigma} is the applied loading rate

Standard data

Tensile data are directly obtainable from standard data which provide stress-strain data including
yield (or 0.2% proof stress) and ultimate tensile strength. These tables give minimum values. A
realistic estimate of the ultimate tensile strength may be obtained from the hardness of the material.

10.4 Fatigue Data

Endurance data are required for the initiation assessment; parametric relations between cycles to
failure and strain range are available for several materials. For creep-fatigue assessment, data from
tests with an appropriate dwell period are preferred.

Unified model

This is still at the development stage.

Standard data

Information available in national codes is presently used.



10.5 Crack Growth

Creep crack growth data, in terms of the parameter C*, are required.

Fatigue crack growth data, in terms of the parameter \Delta K, are required.


The standard expression for creep crack growth rate (\dot{a}) is directly related to the continuum damage
model for creep:

\dot{a} = A \, (C^*)^q

where A is a function of creep ductility

and q is related to creep stress sensitivity

If the creep ductility (\epsilon_r) at the appropriate conditions is known, then

A = 0.003 / \epsilon_r

q = n/(n+1)

where n is the stress dependency factor for creep.

The standard expression for fatigue crack growth (da/dN) has not yet been formally linked to the
unified model:

da/dN = C \, (\Delta K)^m

where da/dN is the crack growth per cycle

\Delta K is the range of stress intensity factor over the cycle

C, m are material constants

At present the conservative values

C = 8 \times 10^{-11} (consistent units)

m = 3

may be used for ferritic and austenitic steels.
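
The two growth laws are combined per service cycle, as described in section 11.3 below. The following sketch integrates them over an assumed dwell per cycle; the C* and \Delta K dependencies on crack size are supplied as caller-provided functions, since realistic values require reference stress and stress intensity solutions for the actual geometry. All numerical inputs are assumptions.

```python
def grow_crack(a0, cycles, dwell_h, c_star, delta_k,
               eps_r=0.1, n=5.0, C=8.0e-11, m=3.0, a_crit=20.0):
    """Crack depth a (mm), marched cycle by cycle.
    c_star(a): creep parameter C* during the dwell;
    delta_k(a): stress intensity factor range for the cycle."""
    A = 0.003 / eps_r        # creep coefficient from creep ductility
    q = n / (n + 1.0)        # creep stress-sensitivity exponent
    a = a0
    for cycle in range(1, cycles + 1):
        da = A * c_star(a) ** q * dwell_h   # creep growth over the dwell
        da += C * delta_k(a) ** m           # fatigue growth over the cycle
        a += da
        if a >= a_crit:
            return cycle, a                 # critical crack size reached
    return None, a

# Crude illustrative crack-size dependencies for C* and delta K.
print(grow_crack(a0=3.0, cycles=5000, dwell_h=100.0,
                 c_star=lambda a: 1.0e-4 * a,
                 delta_k=lambda a: 8.0 * a ** 0.5))
```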



10.6 Fracture Toughness


Values of fracture toughness are presently being collated. At present the following may be used
(from Ref.13).

Actual Material          Temperature (°C)   Toughness: Mean   Lower Bound   Material Range
Si killed CMn steel      300-380            164               99            Si killed C, CMn steels
Al killed CMn steel      300-380            196               146           Al killed C, CMn steels
2¼CrMo steel             100-500            150               100           Low alloy steels
Wrought AISI Type 316    300-600            140               105           300 series austenitic steels
steel

Toughness values quoted are K_1c, based on J_1c at 0.2 mm crack extension.

11 THE PROCESSES CONSIDERED IN DEFECT ASSESSMENT

11.1 Global Deterioration

Crack assessment must include allowance for global deterioration. Thermal ageing and creep
cavitation are the most important. Temper embrittlement is sometimes significant.

The effect of these processes on yield strength and toughness must be determined, as these
influence initiation, growth and final fracture.

11.2 Crack Initiation

Cracks may initiate from creep damage accumulation. Estimates of creep life from standard data or
measured damage levels may be used to assess the timescale. The results of the stress analysis
determine whether the initiation time corresponds to failure or whether a safe crack growth period is
possible.

Initiation of creep cracks from pre-existing defects is assessed on the basis of critical crack tip
opening displacement and local strain accumulation.

Fatigue crack initiation is based on endurance data for the appropriate operational cycle.

Creep-fatigue initiation is based on a linear summation of creep strain fraction and fatigue cycle
fraction. (Creep life fraction is a poor alternative to strain fraction).
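
The linear summation can be written as D = \Sigma(\Delta\epsilon_c/\epsilon_r) + \Sigma(n/N_f), with initiation predicted when D reaches unity. A minimal sketch, with all input values assumed purely for illustration:

```python
def initiation_usage(creep_strain, rupture_ductility, cycles, endurance):
    """Total usage D = sum(eps_c/eps_r) + sum(n/Nf)."""
    creep_fraction = sum(e / er for e, er in zip(creep_strain, rupture_ductility))
    fatigue_fraction = sum(n / nf for n, nf in zip(cycles, endurance))
    return creep_fraction + fatigue_fraction

D = initiation_usage(creep_strain=[0.004, 0.002],       # accumulated creep strain
                     rupture_ductility=[0.010, 0.008],  # at each condition
                     cycles=[1200, 300],                # cycles of each type
                     endurance=[8000, 2500])            # cycles to initiation
print(f"usage D = {D:.2f} -> {'initiated' if D >= 1 else 'not initiated'}")
```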

11.3 Crack Growth


Standard solutions for reference stress and stress intensity are used.

Creep crack growth is predicted using C* correlations. If transient behaviour is expected, then C(t)
calculations are used. Safety factors are applied to crack growth in the period before full stress
redistribution has occurred. For pure creep, crack growth is calculated on a time base. For creep-
fatigue, the creep element is determined per cycle.

Fatigue crack growth is predicted using \Delta K correlations, taking due account of crack closure.

For creep-fatigue, the total crack growth per cycle is obtained as the linear sum of the creep and
fatigue components.

In all cases, account must be taken of any deterioration in materials properties ahead of the crack.
The stress and temperature gradients ahead of the crack should also be considered and crack arrest
calculations performed where appropriate.

It is usually wise, in a full assessment, to calculate growth in both the through-thickness direction
(crack size a) and that perpendicular to it (crack size c), since these may differ. For a simple,
preliminary assessment, calculation of through-thickness behaviour is usually sufficient.

11.4 Failure Criteria

Failure by global creep processes should always be considered.

Critical crack sizes are determined by reference to a two-parameter failure assessment diagram. For
a given structure, load and material, regions may be plotted on this representing stable defects,
initiating defects and crack growth regimes (Fig.10).
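
Checking an assessment point against the diagram is a small calculation once K_r (crack driving force over toughness) and S_r (reference stress over collapse strength) are known. The sketch below uses the strip-yield assessment line familiar from PD 6493 (Ref.12) as an example curve; the exact line and cut-offs are taken from whichever procedure governs the assessment.

```python
import math

def assessment_line(Sr):
    """Kr limit for a given Sr, strip-yield form:
    Kr = Sr * [ (8/pi^2) * ln sec(pi*Sr/2) ]^(-1/2)."""
    if Sr <= 0.0 or Sr >= 1.0:
        return 0.0
    return Sr / math.sqrt(8.0 / math.pi ** 2
                          * math.log(1.0 / math.cos(math.pi * Sr / 2.0)))

def acceptable(Kr, Sr):
    """Point inside the assessment line -> defect assessed as stable."""
    return Kr < assessment_line(Sr)

print(round(assessment_line(0.5), 2))   # ~0.94
print(acceptable(Kr=0.6, Sr=0.5))       # True  (stable)
print(acceptable(Kr=0.8, Sr=0.9))       # False (outside the line)
```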

The changes in toughness and collapse load with global deterioration should be included.

A leak before break analysis should be performed.

12 LIFE PREDICTION

The time to failure by global deterioration is first calculated, as this may be life limiting. The total life
due to crack initiation and growth, to the fast fracture limit, is then determined. Comparison of these
timescales gives the overall life of the structure (Fig.11).

Consideration of the sensitivity of the defects to overloads is required, as this may impose the
effective limit to operation.

13 ACKNOWLEDGMENTS

This paper is published with the permission of ERA Technology Ltd. It represents not merely the work
and experience of the authors but also the efforts of many direct collaborators and other workers in
the field - as the references show. Particularly, much of the recent consolidation of the approaches
discussed has taken place within the EU sponsored SPRINT Project SP249. The authors give their
special thanks to all the partners in that project.

14 REFERENCES

1 Cane B.J. & Townsend R.D.


Prediction of remaining life in low alloy steels
Proc Seminar 'Flow and Fracture at Elevated Temperatures'
ASM Philadelphia 1983 pp279-316

2 Shammas M.
Metallographic methods for predicting the remanent life of ferritic coarse-grained weld heat
affected zones subject to creep cavitation
Proc Int Conf 'Life Assessment and Life Extension', VGB-EPRI-KEMA-CRIEPI
The Hague 1988

3 Poloni M., Jovanovic A., Maile K., Holdsworth S., Brear J.M.
Fuzzy Analysis of Material Properties Data
20th MPA Seminar, Stuttgart, October 1994

4 Neubauer B.
Bewertung der Restlebensdauer zeitstandbeanspruchter Bauteile durch zerstörungsfreie
Gefügeuntersuchungen
3R International 19, 1980, H11, pp628-33

5 VGB Technical Report VGB-TW-507 Guideline for the Assessment of Microstructure and
Damage Development of Creep Exposed Materials for Pipes and Boiler Components
VGB Essen 1992

6 Auerkari P., Borggreen K., Salonen J.
Reference Micrographs for Evaluation of Creep Damage in Replica Inspections
Nordtest Report 170, 1992

7 Carruthers R.B. & Day R.V.
The Spheroidisation of some Ferritic Superheater Steels
CEGB Report SSD/NE/R138, 1986

8 Brear J.M., D'Angelo D., Seco F.J. & Tack A.J.
Mechanistic creep models for 2¼CrMo welds and parent metal
ERA European Conf 'Life Assessment of Industrial Components and Structures'
Cambridge, October 1993, Paper 4.3

9 Benvenuti A., Ricci N. & Fedeli G.
Evaluation of microstructural parameters for the characterisation of 2¼CrMo steels operating at
elevated temperatures
Proc 4th Int Conf 'Creep and Fracture of Engineering Materials and Structures'
Swansea, April 1990

10 Masuyama F., Nishimura N., Haneda H.
Metallurgical life assessment for 2¼CrMo hot reheat pipe welds
Mitsubishi Heavy Industries Ltd, Nagasaki 1987

11 Brear J.M., Auerkari P., de Araújo C.
Metallographic techniques for condition assessment and life prediction in SP249 Guidelines
20th MPA Seminar, Stuttgart, October 1994

12 British Standard Published Document


PD 6493 : 1991 Guidance on methods for assessment of the acceptability of flaws in fusion
welded structures
BSI London 1991

13 British Standard Published Document


PD 6539 : 1994 Methods for the assessment of the influence of crack growth on the
significance of defects in components operating at high temperature
BSI London 1994

14 Drubay B., Moulin D., Faidy C., Poette C., Bhandari S.
Defect assessment procedure - a French approach for fast breeder reactors
SMiRT-12, MPA Stuttgart, August 1993
Paper FG15/1, Vol 12, pp139-144, pub Elsevier Science BV

15 Nuclear Electric Assessment Procedure R6


Assessment of the integrity of structures containing defects (Rev 3)
Nuclear Electric, Berkeley UK 1986

16 Nuclear Electric Assessment Procedure R5


An assessment procedure for the high temperature response of structures (Issue 2)
Nuclear Electric, Berkeley UK 1994

17 EPRI Project RP-2253-7


Remanent life of boiler pressure parts - crack growth
EPRI, Palo Alto USA, 1988

18 Riedel, H
Fracture at High Temperatures
Springer-Verlag, Berlin 1987

19 Piques R., Molinie E., Pineau A.
Comparison between two assessment methods for defects in the creep range
Fatigue Fract. Eng. Mat. Struct., Vol 14, No 9, pp871-885, 1991

[Fig.1 shows schematic micrographs of the following degradation stages:
- ferrite and lamellar pearlite
- spheroidisation has begun, carbides precipitating on grain boundaries
- intermediate stage of spheroidisation, pearlite has started to spheroidise but lamellae are still evident
- spheroidisation complete, but carbides still grouped in their original pearlitic grains
- evenly dispersed carbides (no trace of prior ferritic/pearlitic structures)
- evenly dispersed carbides, but some carbides have grown through coalescence with others]

Fig.1: Microstructural degradation - schematic


Comparison of Damage Classes

Damage scale   no damage   isolated cavitation   orientated cavitation   microcracks     macrocracks
Neubauer       -           A                     B                       C               D
NT TR 170      0, 1        2.1, 2.2, 2.3         3.1, 3.2, 3.3           4.1, 4.2, 4.3   5
VGB TW 507     0, 1        2a, 2b                3a, 3b                  4               5
ISQ            0, 0/1      1, 1/2                2, 2/3                  3, 3/4          4

Fig.2: Harmonised cavitation assessment schemes


[Fig.3 shows example grain boundary configurations used in counting: damaged boundaries A, B, C, E; undamaged boundaries D, F, G, H, I, J]

Fig.3: Rules for A-parameter determination


[Plot: damage class versus life fraction (%) for 1-1.25%CrMo steels, with separate trends for ductile, intermediate and brittle material]

Fig.4: Relationship between damage and life fraction

[Plot: hardness HV20 versus a log-time temperature-compensated parameter for normalised and tempered 2¼CrMo steel, test temperatures 550-750 °C, showing softening from the initial hardness]

Fig.5: Experimentally derived softening curve for 2¼CrMo steel

[Plot: f(T,t) = log(t_r) - 23030/T(K) versus stress/hardness, with separate trend lines for parent metal and weld metal]

Fig.6: Normalised stress-rupture plot for 2¼CrMo parent and weld material, compensated for
hardness
[Plot: degradation class versus life fraction (%) for 1-1.25%CrMo steels, with separate trends for ductile, intermediate and brittle material]

Fig.7: Relationship between degradation and life fraction

[Plot: weld or CGHAZ hardness (HV) versus parent material hardness (HV), divided into a region of parent material failures and a region of weld/HAZ failures]

Fig.8: Weld failure location predictor diagram for 2¼CrMo steels


[Map: damage class versus degradation class for 1-1.25%CrMo steels, on which the life fraction contours discussed in the text are superimposed]

Fig.9: Damage and degradation interactions

[Diagram: K_r (or \sqrt{\delta_r}) versus S_r, with the brittle fracture limit, the plastic collapse limit, the assessment line and the structurally disallowed region outside it. Two-parameter failure assessment diagram showing regimes for (s) stable defects, (i) initiating defects and (g) growing defects]

Fig.10: Failure assessment diagram

[Schematic: crack development with time - at t = 0 an initial sharp crack; for t < t_i, crack blunting; at t = t_i, formation of a short crack when the crack opening reaches a critical value; for t > t_i, creep crack growth. Relative timescales are indicated for: (i) crack initiation; (g) crack growth; (c) reduction in critical crack size with global deterioration; (l) remaining life of the ligament ahead of the crack, as a function of crack size and material degradation; (t_f) time to fast fracture; (t_cd) time to failure by continuum damage]

Fig.11: Life limiting processes associated with defects

INTELLIGENT SOFTWARE SYSTEMS FOR REMAINING LIFE ASSESSMENT - The SP249 project

A. Jovanovic, M. Friemann
MPA Stuttgart, FR Germany

1. Introduction
Following a proposal of 13 European partners, namely: AZT (Allianz), Ismaning, FR Germany;
EdF, Paris, France; EDP, Lisbon, Portugal; Endesa, Ponferrada, Spain; ERA Technology,
Leatherhead, UK; ESB, Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal;
IVO, Vantaa, Finland; Laborelec, Linkebeek, Belgium; MPA Stuttgart, FR Germany; Tecnatom,
Madrid, Spain; and VTT, Espoo, Finland, under the coordination of MPA Stuttgart, a SPRINT
Specific Project (designated SP249) has been approved and is currently running (1993-95).
The overall main goal of SP249 has been to enhance the transfer of component life assessment
(CLA) technology for high-temperature components of fossil fuel power plants, assuring
diffusion of modern state-of-the-art plant CLA technology among power plant utilities and
research organizations in Europe. The project addresses pressure parts operating at elevated
temperature (in the creep and creep-fatigue regime) in fossil power plants (Brear, Jovanovic,
1992).

2. Base line of the project


In order to achieve its main goal, the SP249 project foresees (Figure 1) the development and use
of two main elements, namely:
a) a set of SP249 CLA Generic Guidelines (a "paper summary" of the technology to be
transferred), and
b) a knowledge-based system (SP249 KBS) for enhancing the transfer of the CLA
technology.

[Diagram: the SP249 CLA Generic Guidelines feeding the SP249 Knowledge Based System (KBS)]

Figure 1: Two basic elements of SP249

The basic idea of the project organization (Figure 1) is that the knowledge coming from the
power plant should first be summarized in the form of guidelines (paper) and then transferred
into the KBS. The CLA technology coming from different sources will thus be "packed" into a
framework similar to the one used in the MPA ESR system (Jovanovic, Maile, 1992).
The main recipients (users) of the SP249 guidelines and KBS will be utilities in Belgium, France,
Finland, Germany, Ireland, Portugal and Spain. The "KBS-supported" use of the guidelines
and the corresponding training of end-users' personnel are major issues in the project.

3. SP249 CLA Generic Guidelines


The bulk of the CLA technology to be incorporated into the SP249 KBS and to be transferred in
the framework of SP249 is summarized in the SP249 CLA Generic Guidelines. Their overall
content was identified during the definition phase of the project (1991-92) by means of an
inquiry performed among the partners. A series of more than 40 issues was examined and
ranked according to the recipients' priorities. These issues, together with some others (e.g. weld
repair guidance, off-line crack sizing, the advanced assessment route), have been selected for the
contents of the CLA/KBS technology package of SP249. More details about the SP249 CLA
Generic Guidelines, their organization and contents are given in the paper of Brear and Jones
(1994).

4. SP249 Knowledge-based System (KBS)


Exploitation of KBSs within CLA technology has been successfully demonstrated in
programs such as those presented at the ACT Conference (1992) and, in the particular case of the
SP249 project, by the ESR project of MPA Stuttgart (Jovanovic, Maile, 1992). A more
detailed review of "CLA-related" knowledge-based (expert) systems is presented in the work of
Jovanovic and Gehl (1991).
The KBS technology appears in the SP249 project at two levels:
- as a part of the modern CLA technology
- as the principal means or "vehicle" of the CLA technology transfer.
The SP249 CLA Generic Guidelines are being implemented in a knowledge-based system (KBS),
which serves as the main tool for transfer and application of the target CLA technology. The
system appears as a conglomerate of single software modules controlled by an overview
module. The whole system is designed as an engineering "tool box", built on top of
commercially available software (a more detailed description is given in the work of Jovanovic
and Friemann, 1994).
Object-oriented programming (OOP) appears both at the level of the overall SP249 KBS
architecture (each part of the system is an object exchanging messages with the others) and at
the level of its single parts. The architecture allows new modules to be introduced, or existing
ones reorganized, at any time. The hypermedia-based parts/modules "cover" the background
information built into the system: the CLA guidelines, frequently used codes, standards and
other documents, and case studies. All hypermedia-based modules and all other modules in the SP249
KBS are controlled by the expert system shell, based on the novel approach of mirroring the
contents of the hypermedia documentation bases in the expert system shell (Jovanovic,
Friemann, Kautz, 1992; Jovanovic, Friemann, 1994). The shell is thus "aware" of the contents
of the documentation bases as well as of the relations among the single documents (and even
parts of single documents).
Like the SP249 CLA Generic Guidelines, the SP249 KBS will cover:
a) decision making according to the SP249 CLA guidelines (decision aid for making the
"3R decision": replace, repair, run); this decision is based partly on the regulatory
guidelines and partly on the experience and heuristic knowledge incorporated into the
CLA guidelines;
b) recommendations regarding the annual inspection (revision);
c) damage analysis.
Using the system, the user is expected to be supported by an "intelligent environment", helping
him to calculate, retrieve data (about material, component, etc.), retrieve necessary standards,
obtain advice and, finally, find an optimized solution for his problem (Figure 2).

5. Strategic goals of SP249 - A European de facto standard


The project has defined the principal levels of the CLA-related problem tree and, in it, as main
causes, the "uneven distribution of CLA technology" and the "uneven distribution of
experts/resources" (in Europe, for SP249; elsewhere probably, too). This means that there is a
lack of use of advanced (existing!) CLA technology, and it is therefore clear that the project
must address the issue of how the technology can be brought into use at the recipients of the
technology, or, in other words, the issue of modern and successful inter-European technology
transfer.
The KBS technology has been identified as a modern and appropriate one (Brear, Jovanovic,
1992) and, in order to allow the incorporation of the CLA technology into the knowledge-
based system, a need to consolidate and adapt this technology was identified. In other words, it has
been necessary to bring the technology into "computer digestible" form. These tasks are
recognized to be considerable exercises, needing frequent review to ensure success, but it is
also realized that there will be a number of spin-off benefits, particularly in the way of
guidelines and procedural documents that will pave the way towards the de facto European
standard desired for plant life management (in terms of CLA), leading to:
- optimized plant component life assessment practice
- improved plant safety
- reduced environmental damage
- increased economy of plant operation/maintenance

6. Expected benefits
SP249 will facilitate wider exploitation of CLA technology in the Union, leading to
environmental and economic benefits. These include the following:
- The technology facilitates life extension of ageing plant. There is an estimated 4
billion ECU investment in boiler plant in Europe. Taking the significance of critical
high temperature components in retirement decisions, and assuming that 20% of
plant may have its life extended by 10 years, a financial benefit to European
industry of 200 million ECU per year is estimated. Further financial benefits
accrue due to optimized replacement and refurbishment planning, thereby
maximizing the potential of capital investment, and to reductions of forced outages,
increasing plant efficiency and reliability.
- Life management plays an important role in predictive maintenance strategies.
Experience in the USA has shown that predictive maintenance can save costs by
reducing unscheduled down time and the associated lost production/alternative
generation costs, reducing downtime and inspection resources, optimizing
refurbishment schedules, and giving the ability to make run/retire decisions without
employing specialists. Drawing parallels in Europe, savings of 5.5 billion ECU per
year are estimated.

- Life management strategies have environmental benefits arising from obtaining
maximum availability of high efficiency plant, and from optimizing repowering
strategies. (Repowering normally involves the integration of combustion turbines
in existing steam cycles, giving major efficiency advantages.) Life predictions allow
the utility to assess where repowering will be most cost effective based on the
condition of the overall plant, and to plan the refurbishment to optimize plant
utilization and capital investment. This has significant European impact: with an
estimated 70% of fossil-fired units in excess of 20 years old, repowering options
offer a means of meeting European power demands into the next century, while
making protection of the environment a high priority.

[Screenshot: the SP 249 Advanced Assessment Route at the centre, activating related documents and exchanging data with the SP 249 Generic Guidelines and case studies ("look up"), the material data tables ("look up" and "calculate") and the SP249 KBS calculation modules]

Figure 2: "Intelligent" environment for the CLA analysis in SP249
The SP249 project exploits and disseminates technology developed under CEC
research initiatives such as BRITE-EURAM and ESPRIT, under those in the USA and
Japan (EPRI, CRIEPI), and under other industrially funded research projects. It also
serves as a vehicle for the practical application and demonstration of
knowledge-based system technology for life assessment. It is a forerunner for
the exploitation of the technology for other components, in other utilities and
other industrial sectors. It will also consolidate an advanced level of European
expertise in the field, to compete effectively with Japanese and American initiatives.
For a single utility company, the expected long-term benefits can be seen on an example
(here the Compostilla plant of the Endesa utility company in Spain):
In the long term, if it is possible to extend the operating life of the Compostilla fossil power plant
by 5 years, and SP249 provides help in achieving this goal, the extra sales would be
1312·10³ kW × 6000 hours/year × 1.5 PTA/kWh × 5 years = 59,000·10⁶ PTA ≈
400·10⁶ ECU. Endesa has 4 fossil power plants in addition to the Compostilla power
plant.
In the short term, the expected benefit would be a decrease in maintenance costs. The total annual
maintenance cost for the Compostilla power plant would be (considering a
maintenance cost of 0.30 PTA/kWh) 0.30 PTA/kWh × 1312·10³ kW × 6000 h =
2,360·10⁶ PTA ≈ 18·10⁶ ECU. If it is possible to decrease this cost by 2% using
SP249, the annual benefit would be 0.02 × 18·10⁶ ECU = 360,000 ECU, each
year in the Compostilla power plant. Again, Endesa has 4 fossil power plants in
addition to the Compostilla power plant.
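
The arithmetic behind these figures can be checked directly (PTA amounts only; the ECU equivalents quoted above depend on the exchange rate assumed at the time):

```python
capacity_kw = 1312e3     # plant capacity
hours_per_year = 6000    # operating hours per year

# Long term: five extra years of sales at 1.5 PTA/kWh
extra_sales = capacity_kw * hours_per_year * 1.5 * 5
print(f"extra sales: {extra_sales/1e6:,.0f} x 10^6 PTA")       # 59,040 ~ 59,000

# Short term: annual maintenance at 0.30 PTA/kWh, 2 % saving
maintenance = capacity_kw * hours_per_year * 0.30
print(f"maintenance: {maintenance/1e6:,.0f} x 10^6 PTA")       # 2,362 ~ 2,360
print(f"2% saving:   {0.02*maintenance/1e6:,.1f} x 10^6 PTA")  # ~47
```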
Besides, individual utilities expect to benefit from simplified maintenance/inspection planning
resulting from higher precision component life predictions and from an ability to deploy
precious human resources more effectively. Both are highlighted in utility questionnaire
responses (Brear, Jovanovic, 1992). Optimized component life assessment leads to reduced
risks - of both large scale catastrophic failures and of small scale but extended duration
environmental degradation (use of new sites for new plant, higher emissions etc.). Such
factors, though of great importance, are not easy to quantify.

7. Current status of the SP249 KBS


The status of the project can be measured according to the achievements on the major deliverables
of the SP249 project. These are:
- Generic Guidelines for CLA produced
The bulk of the guidelines was produced in Nov. 1993, but some of the most
important guidelines are yet to be produced (e.g. the crack assessment one).
- SP249 KBS
The first version of the system was produced and distributed to partners in
May 1994. Partners' comments and wishes are currently being implemented, and
bugs eliminated.
By the midterm of the project, the following modules had been programmed and
implemented into the SP249 KBS system:
- (Overall structure of the system)
- Object management modules
- Advanced Assessment Route
- Case history management (with about 100 case histories)
- Documentation management (with all CLA Generic Guidelines and with
relevant DIN, TRD, ASME, VGB and NT standards)
- Material database (with relevant ISO, DIN, BSI, ASTM and other
materials)
- A-Parameter Calculation
- Hardness Calculation
- TULIP (Tube Life Prediction)
- Case History Selection and Management
- Crack Dating
- SP249 Remanent Life Calculation
- Inclusion of oxidation effects
- SP249 Material Database
- Inverse Stress Calculation TRD (new version being programmed)
- Creep and Fatigue Usage Calculation TRD (new version being
programmed)
- Cavity Density
- Linear extrapolation according to Generic Guidelines 003 and 004
- Chemical Composition influence on remaining life
The module yet to be developed (being programmed, with the corresponding
final guideline yet to be produced) is:
- Defect Assessment
- Training materials for CLA and for the SP249 KBS
The first (generic) training was performed in June 1994. The main reason for the
early start-up of the training activity is the need for direct interaction between the
developers of the guidelines and the system, on the one side, and the end-users, on
the other. Further training has been planned as per-partner, on-site training (8
one-week sessions in 1995, at seven different power plants).
- SP249 KBS implemented at all participant utilities
The first version has been installed for testing purposes. Both installation and corroboration
will follow the pattern initially foreseen for the Carregado power plant
("worked examples").
Furthermore, an 'Observer group' of over 25 European and world experts has been established
in order to ensure widespread dissemination of the experience gained within SP249.

8. Structure of the system

8.1 The SP 249 software baseline

The system is built on the basis of five development tools¹ in the MS Windows² 3.1x
environment. The integration of these tools is realised via the Windows system facilities DDE,
OLE, DLL³ and launching with command strings. Based on the experience of end-users, the
appearance and usage of the software are similar to standard MS Windows applications.
In contrast to more conventional systems, the user interface is not presented by a single
application. All the development tools were used to create single modules and are therefore
part of the user interface.
The SP 249 KBS is modular object-oriented software. Single software modules of the KBS
are objects, and data, information and knowledge in the system knowledge base are likewise
handled and stored as objects.

¹ KAPPA-PC, Guide, MS C++, MS Visual Basic, MS Access. See [3] for details.
² MS, Microsoft and Windows are trademarks of Microsoft Corporation.
³ DDE = dynamic data exchange, OLE = object linking and embedding, DLL = dynamic link library.

8.2 Software structure

The SP 249 software system is organised as a "conglomerate" of software modules (performing
specific tasks) linked to the kernel of the system, represented by the SP 249 Workbench. This
structure is shown in Figure 3, while the main tasks of each module are given in Table 1.
Table 1: Single modules and their tasks in SP 249 KBS

Module                      Task
Workbench                   overview/control of the modules, logging of the session
Advanced Assessment Route   advice for the next action to the user
Documentation Base          background information and on-line documentation
Case Studies                background information for support in decision making
Single Calculations         to calculate single results (as input for AAR)
The modules communicate with the kernel module, called the Workbench, mainly via DDE. This
communication contains the main results and other status information.
Data are stored as objects in the SP 249 knowledge base in a hierarchical structure. The
hierarchy of objects containing data relevant for the SP 249 analysis (the "plant objects") is
stored as a sub-structure, having Europe as its root. Further levels in the hierarchy are "country"
(e.g. Germany, Spain, etc.), "plant" (e.g. Carregado, GKM, etc.), "block" (e.g. Block No. 1,
Gruppo No. 1, etc.), "system" (e.g. main steam pipe, superheater header, etc.), "component"
(e.g. elbow, T-piece, etc.) and "location" (e.g. location n, weld upper side, etc.). The
hierarchy is schematically shown in Figure 4.
Inputs and outputs of the calculations/analyses performed are also handled as objects. These
"calculation/analysis objects" are then attached to the various "plant objects": e.g. the remaining
life calculation based on hardness measurements can be performed on a location, while the TRD
calculation can be performed on a component. A list of available analyses/calculations is given
in section 10.2.
[Diagram: the single calculation modules, the Advanced Assessment Route ("intelligent flowchart"), the hypertext environment (documentation base with single documents) and the case study selection (single cases) all linked to the system kernel, the Workbench]

Figure 3: Overall Structure of the SP 249 KBS



8.3 Data structure

The hierarchical structure of the "plant objects" simplifies their retrieval. Furthermore, it
allows the principle of "inheritance" to be applied: e.g. if one defines a tubing to be built of material
13 CrMo 4 4, all parts of this tubing will have this material property as a default.
Applying inheritance helps to avoid unnecessary user input and facilitates the storing of data.
Furthermore, the software system provides the means to store all relevant data. The work done
during a session is logged in a transcript, which can be stored and printed out.
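
A minimal sketch of this default-inheritance behaviour follows; it is an illustration of the described principle, not the actual KAPPA-PC object model, and all names are ours.

```python
class PlantObject:
    """Node in a plant hierarchy (Europe > country > plant > ... > location)."""

    def __init__(self, name, parent=None, **attrs):
        self.name, self.parent, self.attrs = name, parent, attrs
        self.children = []
        if parent:
            parent.children.append(self)

    def get(self, key):
        """Look up an attribute, inheriting from ancestors if unset here."""
        node = self
        while node is not None:
            if key in node.attrs:
                return node.attrs[key]
            node = node.parent
        return None

europe = PlantObject("Europe")
germany = PlantObject("Germany", europe)
gkm = PlantObject("GKM", germany)
pipe = PlantObject("Main steam pipe", gkm, material="13 CrMo 4 4")
weld = PlantObject("Weld 01", pipe)
print(weld.get("material"))   # inherited default: 13 CrMo 4 4
```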
[Example tree: Europe > Germany > GKM > Block 01 / Block 10 / Block 15 > Main steam pipe > T-piece / Y-piece > Weld 01; other branches shown: England (ERA), Spain (Endesa)]

Figure 4: Example of the object structure

9. General use of the system


Use of the system is schematically shown in Figure 2. An SP 249 user has to solve a
problem related to a high temperature power plant component. In order to solve it, the SP 249
project offers him two tools: the SP 249 Generic Guidelines for component life
assessment (CLA) in "paper form", and the computer software SP 249 KBS. The structure
and use of the Guidelines "on paper" are described in the work of Brear (1994).
The software part offers essentially three types of support:
a) "Look up" (and store) type of support:
The user can look up (and store) information in the case study collection, in the
Advanced Assessment Route (AAR) and in the calculation part of the system. He
can also look up the guidelines and other documents like standards, norms, etc., as
well as the material data.
b) "Calculate" type of support:
Based on what the user has found in the background information modules, he can
then proceed with analyses or calculations.
c) "Get advice" type of support:
In addition to a) and b), the user can get advice (or a "second opinion") from the
AAR. Besides having a generic flowchart, the user can also have his own "specific
route", selected according to his case and data. He can also store this specific
route and use it later for further analysis. Single steps in this analysis are linked with
all other modules.

10. Calculation and Analysis Modules

10.1 Use of calculations and analyses


Several calculations can be used within the system. The calculations are developed as separate
modules which can be used in three ways:
1. Started from the Windows Program Manager or the SP 249 System Workbench (see
Figure 10) as an "unbound" (separate) application:
- the user has to input all necessary data,
- the user has to take care of saving input and results in a file;
2. started from the SP 249 System as a so-called "bound" calculation (see Figure 12):
- the necessary basic data are passed to the calculation from the system kernel,
- the user does not need to care about data storing,
- the result is returned to the kernel, which will use it for further examination;
3. started from the AAR in the ExpertChart application as a "bound" calculation (Figure 2):
- as in the second way,
- in addition, the result is also returned to the AAR module, which will use it for
further examination and advice.
The modular construction was based on the end-users' request that the system reflect the way they work:
a) On the one hand, they have to deliver single calculation results; for this they use the
modules like pocket calculators.
b) On the other hand, when the system is used as described in the introduction, the calculation
modules serve the higher goal of deciding upon "run/repair/replace"; therefore the
coupling of all single modules has been automated.

10.2 Single calculation/analysis modules


Based on the idea of the engineering "tool box", the object-oriented architecture allows single
modules to be introduced. These modules are designed for integrated use, with full
integration of data and program start-up. The calculation/analysis modules can also be used as
single applications in the MS Windows environment. The modules developed and integrated
into the system are listed in chapter 7. Here, the TRD inverse design calculation and the 'A'-
parameter module are described as representative examples. Moreover, the SP 249 material
database, as the supporting module for the calculations, is described.

10.2.1 TRD inverse design calculation

The TRD calculation module calculates the life fraction based on inverse design following the
German TRD design code. Creep and fatigue are taken into account. The stress calculation is
possible for straight tube, elbow, T-piece, Y-piece and header geometry. The module uses the
SP 249 material database (described in section 10.2.3) as an underlying module. p-T tables
can be imported from an on-line data retrieval system. The calculation of the usage due to
cyclic loading is possible in three different ways, as described in the TRD code.
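
As an illustration of the inverse-design idea, the sketch below recovers a tube's operating stress from pressure and geometry with a simplified mean-diameter formula (an assumption; the TRD code applies its own design formulae and safety factors) and accumulates the creep life fraction over p-T service periods by the time-fraction rule. The rupture-life function is a toy stand-in for the standard data tables.

```python
def hoop_stress(p_mpa, outer_d_mm, wall_mm):
    """Simplified mean-diameter hoop stress for a straight tube."""
    return p_mpa * (outer_d_mm - wall_mm) / (2.0 * wall_mm)

def creep_life_fraction(periods, rupture_life, outer_d_mm, wall_mm):
    """Time-fraction rule: sum of t_i / t_r(stress_i, T_i).
    periods: iterable of (hours, pressure_MPa, temperature_K);
    rupture_life(stress, T): rupture time in hours from materials data."""
    return sum(h / rupture_life(hoop_stress(p, outer_d_mm, wall_mm), T)
               for h, p, T in periods)

# Toy rupture-life surface standing in for the standard data tables.
tr = lambda stress, T: 1.0e5 * (50.0 / stress) ** 4 * (823.0 / T) ** 30
lf = creep_life_fraction([(60e3, 14.0, 813.0), (40e3, 14.5, 823.0)],
                         tr, outer_d_mm=219.1, wall_mm=28.0)
print(f"creep life fraction = {lf:.2f}")   # ~0.73
```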

10.2.2 'A'-Parameter Calculation

The 'A'-parameter is defined as the fraction of cavitating grain boundaries
encountered on a line parallel to the direction of maximum principal stress. After performing
the measurements with an optical microscope, the values have to be typed into the software.
The software then calculates the remaining life and the necessary statistical data, which show the
accuracy of the measurements. In the absence of a crack the life fraction LF is calculated; in
the presence of a crack the residual ductility fraction, used in the crack growth rate equation,
is calculated. The parameters on which the lifetime calculation is based are included as
standard values in the program. Figure 6 shows the appearance of the module on a large
screen.
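
The statistical bookkeeping the module performs can be sketched as follows. The life-fraction correlation itself relies on built-in, material-specific constants, so the calibration used in the last line below is purely hypothetical.

```python
import math

def a_parameter(damaged, total):
    """Mean A and its standard error from per-line boundary counts."""
    fractions = [d / t for d, t in zip(damaged, total)]
    n = len(fractions)
    mean = sum(fractions) / n
    sd = math.sqrt(sum((f - mean) ** 2 for f in fractions) / (n - 1))
    return mean, sd / math.sqrt(n)

damaged = [4, 6, 3, 5, 7]      # cavitating boundaries per measurement line
total = [40, 42, 39, 41, 44]   # boundaries intersected per line
A, se = a_parameter(damaged, total)
print(f"A = {A:.3f} +/- {se:.3f}")
# Hypothetical calibration for illustration only: life fraction ~ A / 0.4
print(f"life fraction ~ {A / 0.4:.2f}")
```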

[Screenshot: the TRD inverse design calculation module, showing the results window with the tube geometry fields (inside diameter, wall thickness, out-of-roundness, design wall thickness)]

Figure 5: The TRD inverse design calculation, Results window


[Screenshot: the 'A'-parameter calculation module, showing the per-line measurement table, the statistics (mean value of A, observed and pooled standard deviations, confidence bands), the lifetime calculation with the constants on which it is based, and the damaged/undamaged boundary counting rules]

Figure 6: The 'A'-parameter calculation window

10.2.3 Supporting module - the material database

The SP249 material database comprises data for most of today's materials used for high
temperature components in power plants, as given by standards like ISO, DIN, AFNOR, ASTM,
BS and others, as well as some data from other sources.
Given that the users of the SP249 material database are subject to different statutory
requirements, the following approach has been adopted. The information from each relevant
summary document has been included in a common format, with blank fields where data are
not provided/available. Since the ISO data provide the closest existing approach to a
consolidated data set, blank fields in the ISO data sheets are filled, where possible, with the
best available data from elsewhere. For convenience, and to help comparison, the materials are
grouped into families, classes and subclasses forming a hierarchical structure. All data are
stored in twelve data tables, whose contents are: title and description of the material, source
specification, range for which the data are expected to apply, tensile data, rupture parameters,
stress dependency of rupture life (parameters), stress dependency of rupture life (explicit),
rupture strength - creep strength relationship, average rupture strengths, allowable rupture
strengths, creep strengths, and physical property data.
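
The consolidation rule described above (common format, ISO sheet first, blanks filled from the best alternative source) can be sketched as a simple merge; the field names and values below are illustrative only, not the database's actual schema.

```python
def consolidate(iso_record, other_records):
    """Fill blank (None) ISO fields from other standards' records,
    tried in order of preference; ISO values are kept where present."""
    merged = dict(iso_record)
    for record in other_records:
        for field, value in record.items():
            if merged.get(field) is None and value is not None:
                merged[field] = value
    return merged

iso = {"family": "low alloy steels", "grade": "2.25Cr-1Mo",
       "tensile_min_MPa": 480, "rupture_100kh_550C_MPa": None}
din = {"rupture_100kh_550C_MPa": 78}
astm = {"rupture_100kh_550C_MPa": 80, "tensile_min_MPa": 485}
print(consolidate(iso, [din, astm]))   # DIN fills the blank rupture field
```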

11. Hypertext modules


The hypertext modules display documents from different sources for different goals:
- frequently used codes and standards allow explanation of actions performed in the
SP 249 system
- CLA guidelines for advanced assessment, produced in this project, allow explanation
of actions performed in the AAR
- case studies - worked examples and failure cases - support the user in decision
making, where the system cannot decide on the basis of strict rules.

11.1 Documentation Base

The hypertext-based environment allows the user to view documents on-screen, easily
navigating through the document's structure, expanding and collapsing documents to display
different levels of detail. Figure 7 shows the main page with the hyperdocuments stored in the
system, while Figure 8 shows one single document.
The hypertext documents are linked where appropriate. The user simply needs to click on the
text where another document is referred to. The system will then display the referenced
document and scroll to the appropriate chapter, formula or table.

11.2 Case studies

The SP249 KBS contains a series (currently 102) of case studies (histories) describing
interesting cases of high temperature component damage and/or life analysis. These case
histories are stored in a format agreed by the project partners. They are managed by the
corresponding case study selection module. The matrix contains typical combinations of
component types and materials. The search for case studies is carried out within the dialog box
shown in Figure 9. A second way of searching is selection by keywords.
The two "dimensions" (materials/components) are hierarchically structured. A search is started
by selecting an item in the list of materials and another in the list of components. All cases
within the selected classes and their substructures will be found. The number of entries found is
shown in the upper right corner, with the case names listed below it. The user selects a
single case out of the range of listed cases, which is then displayed in the hypertext environment.


[Screenshot: the documentation base main page, listing the hyperdocuments stored in the system: TRD 300, 301 and 508; DIN 17155, 17175 and 54150; VGB guidelines (R 509 L Recurring Examinations of Pipe Line Systems, Examination of Surface Microstructure, and the Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials (draft)); Nordtest NT 170 reference micrographs; ASME B31.1-1989 Appendix D; the SP 249 specific guidelines CLA GG 000 to CLA GG 009 with Annex, the CLA Advanced Assessment Route, and the VTT inspection criteria for hot pipework]

Figure 7: SP 249 Documentation base


The usefulness of the system was further enhanced by linking the case studies with the AAR. This
is realised by means of keywords attached to single steps of the AAR and keywords attached to
the case studies. The case studies are in this way used as an additional explanation of how single
steps in the AAR have to be performed.
[Screenshot: a single hypertext document - Generic Guideline for Component Life Assessment GG001, 'Metallographic Methods: Surface Replication, Hardness Measurement, Carbide Extraction Replicas' (SPRINT Project SP249, prepared by J M Brear), with its summary and chapter structure]

Figure 8: Single document, displayed in the hypertext module

[Screenshot: the case study matrix dialog, with the hierarchical materials tree (ferritic steels: C/CMn steels, low alloy steels with 0.3%, 0.5%, 1¼CrMoV, 1-1¼Cr and 1CrMo subclasses, intermediate alloy steels, high alloy steels; austenitic steels; other materials) on one side and the components tree (yield range; creep range outside heating: superheater and reheater tubing, other components; creep range inside heating: superheater inlet/outlet headers, reheater outlet header, main steam piping, bends, T/Y-pieces, valves/coolers, reheat steam piping, other components) on the other]

Figure 9: Dialog box of Case study matrix

12. The SP 249 Workbench

12.1 General
After starting the SP 249 system from the Program Manager in Windows, the 'Workbench' window
(Figure 10) appears. The user can open an existing file or create a new one. This file contains
the user-defined structure of objects and their data. He would then typically
edit the object structure, to add data or add new objects.

[Screenshot: the SP 249 Workbench main window, with the menus File, Object, View, Execute, Options, Knowledge, Archive and Help, and buttons for the Documentation Base, Case Studies, Material Database, object editing/selection, Calculations and Help]

Figure 10: The startup and main window


The next step would be to select the object to analyse. For the overall analysis, the user would then start
the Advanced Assessment Route. In parallel to these steps the following parts of the system
are accessible:
- the hypertext module, to review documents and case studies
- auxiliary tools (e.g. report generator, transcript).
Online help using the Windows help facility is included in the system. Each of the modules
also has its own help file, which explains how to use the module (explanations for the
assessment side are handled in the CLA guidelines). The kernel help file explains how to use
the overall system.

12.2 Working with SP 249 objects ("plant objects")

As described previously, the system works on the basis of an object structure. Figure 11
shows the dialog for editing the object tree. The user can add, rename, delete and edit objects.
If the user wants to analyse an object in the object base, he needs to select it first. He would
therefore use the 'Object Selection' dialog.

12.3 Use of single calculations from the SP 249 Workbench

As described previously, the execution of a single calculation in the SP 249 KBS is also possible.
The 'Execute' menu, as shown in Figure 12, lists the available calculations. If an object is
selected (displayed in the title bar of the Workbench window, here 'Header Body'), the user
will be asked whether he wants to start a calculation for the selected object ("bound") or an
independent calculation. If no object is selected, the user can only start an "unbound"
calculation. After confirming the "unbound" mode start-up, the system will launch the
corresponding module.

12.4 Module description

The Advanced Assessment Route (AAR) module is the key module in the SP 249 System. It
combines all the calculation modules into an overall advanced assessment route. The AAR itself
supports the user in deciding upon the basic goals of the SP 249 System application
(run/repair/replace).
[Screenshot: the 'Edit Object Tree' dialog, showing the object base - Europe, with the countries Germany, Belgium, Spain, Portugal (ISQ, EdP, Carregado Units 5 to 7 with the Main Steam Pipe), Great Britain, Ireland, France and Finland]

Figure 11: Editing the 'object' base


[Screenshot: the 'Execute' menu of the Workbench (file CARREGAD.WIS, object 'Header Body' selected), listing: TRD Calculation, SP249 CLA calculation (+ CIF), A-Parameter, Cavity Density, Hardness, Temperature Life Oxidation, Crack Dating, Crack Assessment, Extrapolation, Tube Life Prediction, Advanced Assessment Route]

Figure 12: Selecting a calculation to perform on the selected 'object'

13. The SP 249 Advanced Assessment Route


The AAR is implemented as an active flowchart. Boxes are activities, which can have sub-activities.
Facts coming as input during the session are combined with the rules stored
within the AAR, and a recommendation is produced for each activity/sub-activity.
The advantages of such a presentation are:
- Complex activities are divided into hierarchically ordered, small entities of
information, so one can limit the level of detail one wants to look at.
- The AAR automatically lays out activities and the connections between them.
- The AAR stores every user input and uses it in reasoning about the possible next
steps in the execution of an activity.
- Compared to rule-based systems, the process is presented to the end-user in a very
clear way.
Activities can be "started", "informed" and "ended". Informing means the entry of a value
that describes the result of an activity. The "information" can also be provided by a
calculation module, executed through the start of the activity.
Each of the boxes can be connected to a node in a hypertext document. Depending on his
task, the user can request appropriate pieces of hypertext by clicking on the description
area of an activity. Via corresponding keywords, a list of supporting case studies (see section
11.2) can be reviewed.
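
The activity life cycle can be sketched as a small state machine. This is an illustration of the behaviour described above, not the actual ExpertChart implementation; the precondition mechanism and all names are assumptions.

```python
class Activity:
    """An active-flowchart activity: started, informed, ended."""

    def __init__(self, name, precondition=None):
        self.name = name
        self.precondition = precondition   # callable: results -> bool
        self.state, self.result = "pending", None

    def can_start(self, results):
        return self.precondition is None or self.precondition(results)

    def start(self):
        self.state = "started"

    def inform(self, value):
        # The value may also come from a bound calculation module.
        self.result = value
        self.state = "informed"

    def end(self):
        self.state = "ended"

results = {}
calc = Activity("Phase 1: general calculation")
inspect = Activity("Phase 2: inspection-based assessment",
                   precondition=lambda r: r.get("phase1") == "critical")
calc.start(); calc.inform("critical"); calc.end()
results["phase1"] = calc.result
print(inspect.can_start(results))   # True -> Phase 2 is recommended next
```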

13.1 Integrated use of AAR


Figure 13 shows the main level of the "Advanced Assessment Route" in SP 249. The main level
(activity) consists of six sub-activities. These activities represent Phases (or 'main' activities)
1 to 5 of the AAR.
The description of an activity includes an activity number and the CLA guideline attached to
that activity. The user has to click on the green triangle to start the corresponding activity.
Activities that are finished can be ended by a click on the green triangle in the lower left.
Depending on their preconditions, other activities can then be started. After having started an
activity with sub-activities, the first sub-activity can be started.
Figure 14 shows an example of the AAR, in this particular case the interpretation of surface
replicas. This is covered by sub-activity 1.1.4 of Phase 2. The marked activity shows the
coupling of the AAR and the calculation modules. When "qualitative damage assessment is not
possible or inspection interval not adequate" (activity 2-1.1.4.5), the next step will be
performing the A-parameter method or the cavity density method. Depending on the user's
decision, one of the methods will be performed. If A-parameter is chosen, the user would then
do the A-parameter measurements (activity 2-1.1.4.6) with the help of the generic guideline on
CLA (GG1, chapter 3.2.1).
The following step (activity 2-1.1.4.7) would then be performed with the help of the SP 249
KBS calculation modules: the calculation of remaining life based on the A-parameter. This
calculation module is started by the system when the user starts the corresponding activity.
When the life fraction based on the A-parameter measurement has been calculated, the result is
sent to the SP 249 system kernel. The kernel then informs the AAR about the result and a
"history" event is produced. The AAR then includes the life fraction based on the A-parameter
into its interpretation process.
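The hand-off described above, from calculation module through the kernel to the AAR, can be
sketched as follows (a simplified illustration; class and attribute names are invented, not those
of SP 249):

# Sketch of the result hand-off: a calculation module reports a life
# fraction to the system kernel; the kernel records a "history" event
# and informs the AAR, which takes the value into its interpretation.
class AAR:
    def __init__(self):
        self.facts = {}

    def inform(self, activity_id, name, value):
        self.facts[(activity_id, name)] = value   # enters interpretation

class Kernel:
    def __init__(self, aar):
        self.aar = aar
        self.history = []

    def receive_result(self, activity_id, name, value):
        self.history.append((activity_id, name, value))   # "history" event
        self.aar.inform(activity_id, name, value)

kernel = Kernel(AAR())
kernel.receive_result("2-1.1.4.7", "life_fraction_A_parameter", 0.62)
print(kernel.history)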


[Figure: top-level AAR flowchart. Phase 1: general calculation / review of operation. If the
results of the general calculation are OK: continue monitoring (Phases 1-6). If critical:
Phase 2, inspection-based assessment (access for NDE and metallography). If the inspection
results are critical: Phase 3, repair / replace / refine assessment. If replacement is necessary:
Phase 4, replacement / retirement strategy; otherwise Phase 5, implement monitoring and
inspection strategy.]

Figure 13: AAR: highest level of the flowchart

14. Conclusions
The SP249 Project and the development of the SP249 knowledge-based system and its future
deployment in power plants should help in achieving a series of economic and technical
benefits: e.g. improved availability of systems and plants, shorter and better utilized
maintenance periods, reduced costs of scheduled inspections due to the optimized inspection
strategy, reduced costs of daily operation (specialists called only when necessary), reduced
unplanned costs, and improved possibilities for the life extension of the plants.
The joint effort of the CEC and European industries (utility companies and consulting and
research organizations), based on the large scale European application of KBS technology (the
total value of the SP249 project is about 2.5 MECU), marks a milestone for KBS technology
applications in the area of power plant operation and management in Europe. It opens the way
for further applications in the area and establishes the KBS technology both as a part of the
modern CLA technology and as a powerful vehicle for technology transfer.
From the point of view of the applied KBS solutions, the SP249 system is a modern,
integrated, object oriented system. As indicated by Jovanovic and Bogaerts (1991),
conventional (production rule based) expert systems alone can deal successfully only with a
very limited range of practical problems in the domain of CLA technology. They often tend to
"block" the dialog at the moment when the user is not sure what to answer the system, either
because he needs an explanation of the question or because he is asked to provide (to the
system) some additional information which is currently not known/available. A possible answer
to this "blocking" of the dialog and similar related problems characterizing conventional expert
systems is to integrate tightly all additional system modules (e.g. numerics including finite
elements, databases, etc.), so that the user is hardly aware that he/she is dealing with different
kinds of software. This idea is the base line of MPA's approach called "KISS" -
Knowledge-based Integrated Software Systems, or Knowledge-based Intelligent Integrated
Software Systems - which has been applied in SP249, and it follows the same line of thinking
which led to (e.g.) Intelligent Databases, Intelligent Hypermedia Systems, KBSs with
Hypermedia Support, etc. (Parsaye et al. 1989).
By combining the efforts of the CEC and European industries (utility companies and consulting
and research organizations), SP249 will become a milestone for large scale applications of
KBS technology, both as a part of the modern CLA technology and as a powerful vehicle for
technology transfer.
[Screenshot: ExpertChart view of Phase 2-1.1.4 (GG1.3), interpretation of surface replicas.
Sub-activities: identify the microstructural zones covered by the replica; indicate
microstructural zones containing creep damage; determine the damage class for the zone;
determine the inspection interval. If damage assessment is not possible or the inspection
interval is not adequate, continue with quantitative damage assessment (A-parameter or cavity
density method), calculate remaining life, and continue with Phase 2-1.1.]

Figure 14: Detail view of the AAR (A-parameter calculation)

15. Acknowledgments
The author gratefully acknowledges the precious help and collaboration of the partners in
SP249 (in alphabetical order): ATT (Allianz), Ismaning, FR Germany; EdF, Paris, France;
EDP, Lisbon, Portugal; Endesa, Ponferrada, Spain; ERA Technology, Leatherhead, UK; ESB,
Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal; IVO, Vantaa,
Finland; Laborelec, Linkebeek, Belgium; Tecnatom, Madrid, Spain; and VTT, Espoo, Finland.
The support of the Commission of the European Communities and of the staff of SPRINT-TAU,
CEC, Luxembourg is highly appreciated. Special thanks go to the persons involved in the
project on behalf of their companies, in particular to Dr. L. Hagn, Allianz; to Messrs. G.
Thoraval and P. Rivron, EDF; to Mr. A. Batista, EdP; to Mr. E. Santos, Endesa; to Messrs. J.
M. Brear, J. Jones and A. Bissell, ERA; to Prof. H. R. Kautz, GKM; to Dr. C. de Araújo, ISQ;
to Mrs. U. McNiven and Mr. J. Rantala, IVO; to Mrs. Verelst, Laborelec; to Mrs. M. Aguado,
Tecnatom; to Mr. P. Auerkari, VTT; to Mr. P. Löwe, SPRINT-TAU; to Messrs. Friemann and
Kluttig of MPA Stuttgart; and to all others who have in one form or another contributed to the
realization of this large European project.

16. References
ACT (1992). Advanced Computer Technology Conference 1992, held in Phoenix, Arizona, December
9-11, 1992, Proceedings, Vols 1 and 2, published by EPRI Palo Alto, US, December 1992
Brear, J. M., Jones, G. (1994). A consolidated approach to component life assessment in SP249,
Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Brear, J. M., Jovanovic, A. (1992). SPRINT Specific Project SP249 "Implementation of Power Plant
Component Life Assessment Technology using a Knowledge-Based System", Phase I
Definition, Final report, May 1992, ERA Technology, Leatherhead, UK, and MPA Stuttgart,
FR Germany
Jovanovic, A., Bogaerts, W. (1991). Hybrid knowledge-based and hypermedia systems for engineering
applications, Avignon '91 Conference Expert Systems and their Applications (vol. Tutorial Nr.
13), Avignon, May 27-31, 1991
Jovanovic, A., Friemann, M. (1994). Overall structure and use of SP249 knowledge based system,
Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Jovanovic, A., Friemann, M., Kautz, H. R. (1992). Practical realization of intelligent interprocess
communication in integrated expert systems in materials and structural engineering. Proc. of
the Avignon '92 Conference Expert Systems and their Applications (Vol. 2, Specialized
Conferences), Avignon, pp. 707-718
Jovanovic, A., Gehl, S. (1991). Some expert systems for power plant components in Europe and USA.
Proc. of the SMiRT 11 Post Conference Seminar Nr. 13 "Expert Systems and AI Applications
in the Power Generation Industry", Hakone (Japan), Aug. 26-28, 1991
Jovanovic, A., Maile, K. (1992). ESRA - Large Knowledge Based System Project of European Power
Generation Industry. Expert Systems With Applications, Vol. 5: 465-477
Parsaye, K., Chignell, M., Khoshafian, S. and Wong, H. (1989). Intelligent databases: Object-oriented,
deductive hypermedia technologies. John Wiley & Sons Inc., New York, Chichester, Brisbane,
Toronto, Singapore, 479 pp.
TRD - Technical Rules for Steam Boilers; Deutscher Dampfkessel-Ausschuß (DDA), Vereinigung der
Technischen Überwachungs-Vereine e.V. (VdTÜV), Essen


THEORETICAL AND PRACTICAL BASIS OF


ADVANCED INSPECTION PLANNING

Pertti Auerkari

VTT

Espoo, Finland

ABSTRACT

Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The
non-mandatory approach is becoming overwhelmingly dominant and provides routes for
improved overall economy in the inspection policies. However, the trend cannot override
fundamental component life and safety related requirements. This creates both a need and an
opportunity for systematic methodologies to manage the process of inspection planning. For
certain aspects of the process such tools already exist and are widely used, because they have
been available and useful even for the mandatory inspections. These tools include eg project
type planning and execution timing for the actual off-line work, as well as data management
and mapping of the inspection results. Until recently, however, many of the decisions related
to the actual content and timing of non-mandatory inspections were not subject to such
systematic tools or methodologies. This is about to change with the increasing integration of
inspection data management, inspection planning tools, and decision making methodologies.

1. INTRODUCTION
Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The rules
typically specify or suggest some aspects of

selection of the targets and methods as well as timing of inspections;


extent of inspections and management of inspection results; and
approach towards inspection results in terms of consequences.

The relatively stiff mandatory rules have generally best served their purpose in cases where
multiple failure mechanisms and relatively fast damage accumulation are not unreasonable
(eg for boilers) or where specific additional safety concerns apply (eg for pressure vessels of
nuclear plants). However, although the mandatory rules often reflect some industry
experience, they tend to be the same for all possible cases and therefore do not generally
provide optimal inspection policies, which can be expected to depend very much on the
particular cases and plants.
Instead, the non-mandatory approach of condition based maintenance is becoming the
overwhelmingly dominant route towards improved overall economy of inspections and life
management. This implies that within certain limits, only plant and case specific data are
used to define the inspection strategies for the specified components. Since this cannot
override fundamental component life and safety related requirements, the background data
should exist in a form that can be used for such decision making, and the decision making
process should extend beyond the simple ways of the mandatory rules. At the same time, ever
increasing amounts of on-line and off-line measurement data are available throughout the
service life of a power plant. This creates both a need and an opportunity to extend the use of
systematic methodologies to manage the process of inspection planning (Jovanovic et al,
1992).
For certain aspects of the process such tools already exist and are widely used, because they
have been available and useful even for the mandatory inspections. These tools include eg
project type planning and execution timing for the actual off-line work, as well as data
management and mapping of the inspection results. Until recently, however, these items have
not been combined together. More importantly, the decisions related to the actual content and
timing of non-mandatory inspections have not been subject to such systematic tools or
methodologies. This is about to change with the increasing integration of inspection data
management, inspection planning tools, and decision making methodologies. Advanced
inspection planning makes full use of such tools and appears to carry considerable promise
for avoiding unnecessary outages, inspections and repairs, and for focusing the inspections
towards controlled life management.
For the present purpose, such tools must make use of

the rules to decide and quantify the order of merit between plants, if the analysis is
extended to account for inspections involving several plants;
the engineering factors that define the present and foreseeable future condition of
components;
the non-engineering factors that affect the final decision-making on inspection planning;
and
the decision-making methodologies that create the logical flow of inspection planning
using the above rules and factors as framework.

Below, the examples are mainly confined to the hot pipework of fossil fired power plants,
looking at the creep dominated regime of operating conditions. Also, the main domain of
consideration is limited to cases where predictive rather than corrective maintenance is likely.

2. THEORETICAL BASIS FOR INSPECTION PLANNING

2.1 ENGINEERING FACTORS


The inspection planning process may initially involve decisions on timing between several
plants according to the plant characteristics and availability needs. However, here the view is
basically limited to the narrower task of planning and timing of inspections for one plant.
Of the engineering factors to be considered, some are related to service loading, ie

stresses, temperatures and time in service, and their distribution


number and character of startup cycles and other major thermomechanical cycles; and
environmental effects on components (oxidation, corrosion etc).

The environmental factors are generally not very significant for the pipework, except for
indirect use in oxide thickness based temperature/time assessment and oxide dating of cracks.
The other service loading factors can be initially tackled by using stress analysis (to indicate
locations of interest if nothing else) and life consumption assessment methods analogous to
TRD 508, ASME CC N-47 or equivalent approaches (a simple life fraction rule of this kind is
sketched after the following list). Even such a nominal type of assessment is not possible
without knowledge of another major group of significant engineering factors, related to the
material and component response to the service loading. In its elementary form the required
information includes the nominal materials data for the given material type, the geometry of
the piping and the boundary conditions for the support system. For an actual life assessment
type of evaluation, much more information is needed, such as

material, component and location characteristics in detail; and


existing service-induced, manufacturing and assembly-related damage, ie results from
recent and earlier inspections as well as details on how these were carried out; such
measurements of damage indications can also include displacements, strains, hardness
values etc.
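To make the nominal life consumption assessment mentioned above concrete, a linear
creep-fatigue life fraction in the spirit of TRD 508 can be sketched as follows (a simplified
illustration only; the rupture time table, service history and allowed cycle count are invented):

# Sketch of a linear life fraction rule in the spirit of TRD 508:
# exhaustion e = sum(dt / t_rupture(stress, T)) + sum(n / N_allowed),
# i.e. a creep fraction plus a fatigue (start-up cycle) fraction.
def creep_fraction(history, t_rupture):
    # history: list of (hours, stress_MPa, temp_C) operating blocks
    return sum(dt / t_rupture(s, T) for dt, s, T in history)

def rupture_time(stress, temp):
    # placeholder lookup; in practice from material standard curves
    table = {(50.0, 540.0): 300_000.0, (60.0, 545.0): 180_000.0}
    return table[(stress, temp)]

history = [(80_000.0, 50.0, 540.0), (30_000.0, 60.0, 545.0)]
e_creep = creep_fraction(history, rupture_time)
e_fatigue = 400 / 2000          # cold starts used / allowed
print(f"total exhaustion e = {e_creep + e_fatigue:.2f}")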

These essential items are mostly not available without (possibly repeated) measurements or
inspections in plant, for which guidelines exist (eg VGB R509L, 1984; VGB TW 507, 1992;
Auerkari et al, 1992; Auerkari, 1995). A further set of important engineering factors is related
to the inspections or measurements themselves. Such factors involve

component and location-specific accessibility for inspections;


component and location-specific sensitivity and resolution of measurement; and
quality, coverage and representativeness provided by the techniques that are used.

It is also important to realise that much of the available information on the engineering
factors and the state of the structures is patchy at best, and almost never as complete as eg the
life assessment theories would ideally require. However, there are often ways to overcome
such difficulties because

not all factors are equal in value for actual inspection planning; for example, usually the
latest measurements provide more important information on the component condition than
earlier measurements or nominal (design) data;
missing data can be often replaced by parallel information or from other experience; and
inspection strategies can be designed to improve thin and patchy databases with minimum
effort in additional inspections.

Classical examples of rules on inspection timing can be seen in the applications of replica
inspections. In this case the typical extracted rules for planning of the next inspections are
based both on the latest measurements in the inspections and on more general experience
(Table 1; a direct translation of these rules into code is sketched after the table).

Table 1. Example rules for timing of the next inspections, based on the most recently observed
class of creep damage; t = time in service. The numbers in parentheses for the
Neubauer/Nordtest case refer to recommendations after the service time exceeds 100 000 h.

                           Recommended maximum service time to next inspection
Damage class               Neubauer/             Linear fraction     Linear fraction
                           Nordtest 010          (Shammas direct)    (evened lower bound)
1 (no cavitation)          no specified limits   7.33 t              4 t
2 (isolated cavities)      20 000 h (40 000 h)   1.17 t              1.5 t
3 (orientated cavitation)  15 000 h (30 000 h)                       2t/3
4 (microcracks)            10 000 h (20 000 h)   0.19 t              0.25 t
5 (macroscopic cracks)     0
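The rule sets of Table 1 translate directly into small lookups, for instance (values taken from
the table; the function names are illustrative):

# Timing rules from Table 1: recommended maximum service time to the
# next inspection, given the latest observed creep damage class.
# t = time in service so far (hours).
def next_inspection_nordtest(damage_class, t):
    # Neubauer/Nordtest fixed intervals; the parenthesised values of
    # Table 1 (double the base) apply once service time exceeds 100 000 h
    base = {2: 20_000, 3: 15_000, 4: 10_000, 5: 0}
    if damage_class == 1:
        return None                       # no specified limit
    interval = base[damage_class]
    return interval * 2 if t > 100_000 else interval

def next_inspection_linear_lower_bound(damage_class, t):
    # "evened lower bound" linear fraction rules of Table 1
    factor = {1: 4.0, 2: 1.5, 3: 2.0 / 3.0, 4: 0.25, 5: 0.0}
    return factor[damage_class] * t

print(next_inspection_nordtest(3, 120_000))            # 30 000 h
print(next_inspection_linear_lower_bound(2, 110_000))  # 165 000 h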

As is seen from Table 1, one may need to select between alternative rules. This is not merely
a task to appease personal preferences but should reflect other available information or any
hints from the service or maintenance experience that could weigh in favour of a certain
approach. For example, if it is known that the location of current interest has not experienced
any significant additional loading excursions during its service time, it may be appropriate to
use the life fraction rules of damage (linear fractions in Table 1). These are supported by
limited experimental evidence for a low-alloy steel (Shammas 1988; Tolksdorf & Kautz
1994). However, if we know that significant additional loading has occurred, eg because
supports of the pipework have not functioned, then it is likely that the damage process has
accelerated towards the end of the expired service time. In such cases it may be safer to use
the Neubauer/Nordtest type of fixed time rules (Neubauer & Wedel, 1984; Nordtest NT NDT
010, 1991), because these are based on results from plant inspections, including a
considerable number of cases with non-functioning pipework supports.

In the future, the new 9 to 12 % Cr steels and other newer steels will be used in an increasing
proportion, and for many of the materials the experience-based evaluation rules are yet to be
created. This means also that caution is needed in utilising the present rules for new situations.
It is seen that the number of potentially influential engineering factors can be quite large, and
the available information variable in character. To assess all the necessary engineering
factors separately, each at a time, would impose a serious burden on any person trying to
deduce optimised inspection plans, and hence there is a fairly obvious opportunity for
computerised decision-making tools to help in creating such plans (Jovanovic et al, 1992).
Naturally, no such tool is any better than the rules on which it is working. In addition, it is
necessary to consider non-engineering factors that are essential for proper inspection
planning.

2.2 NON-ENGINEERING FACTORS


The underlying criteria of optimisation of inspections include the overall economy in plant
operations, including the value of availability, economy of inspections, economy of the
analysis itself, as well as safety and environmental requirements. Since these are not all
hard-core engineering factors, the engineering factors do not determine optimal inspection
plans alone. The non-engineering factors may be required as boundary conditions to the
inspections, or enter directly as optimising variables, of which money consumption must be a
major one.

Consequently, some of the most important non-engineering factors include eg the price of
replacement power, availability requirements and the local price of any action needed, such
as inspections, repairs or replacements. The background economic factors, such as the cost of
potential consequences to be avoided, are at least partly measurable as insurance premiums,
but there are local variations. Many of the variations can be seen in local, national or regional
mandatory rules and traditions, at least in their extreme forms. Furthermore, in spite of their
engineering background, there are borderline and non-engineering features in the differences
between the local, company-related, national or regional traditions in design, inspections and
life management. For example, design rules based on ASME / BS or similar codes, and TRD
and equivalent codes, produce somewhat different results because of some tradition-based
compromises that attempt to balance between engineering simplicity and rigorous analysis.
Some of the differences in tradition can be seen from Table 2.

Some additional inherent differences are revealed by looking at the failure statistics of these
regions. In the case of the ASME/BS tradition, the literature citing failure cases refers fairly
often to the problem of ligament cracking of headers. Such cracking, which is rare in the TRD
regions, appears to be caused by the thermomechanical cycles of plant operation, combined
with the relatively thick (compared also to the ligament width) header material in the
anglosaxon design. This is exacerbated by using low alloy materials such as 2.25Cr1Mo
steel, which requires much thicker walls than higher alloyed steels like X20CrMoV12-1
with the same steam values.
The TRD type of tradition appears to have its specific Achilles' heel also. In the literature of
the past 20 years or so, it is nearly exclusively from the Germanic design origin that problems
with steam line bends are reported. This frequency is by no means high: it is at least two orders
of magnitude less than for creep damage observed from major circumferential welds. However,
since the creep damage in bends is more severe from the safety point of view, i.e. unlike
damage in welds it can lead to catastrophic failures in the base metal, it has occasionally
received much attention in inspection programmes.

Comparable traditions can be seen in local, national or regional mandatory, eg safety-related,
rules.
Table 2. Typical regional features (until about 1990) in large coal-fired power plants.

Region          Max. superheat  Top ferritic   Max unit  LP rotor        Specific pressure
                deg C           material       size MWe  type            vessel authority
ASME/BS etc.    565             2.25Cr1Mo or   1000      discs on shaft  none
                                1/2CrMoV
TRD & equival.  545             12 Cr          600       monobloc/       exists
                                                         welded

Also, the non-engineering factors include the local experience and possible training needs of
the employees or inspectors involved. For example, when very experienced operations and
maintenance personnel retire or move to another company, it may even become optimal to
extend some inspections or other measurements a little, to provide the new personnel the
additional feel of the plant condition that was perhaps lost with the experience.

Of the regimes where some mandatory boundary conditions will remain in the future, safety
and environmental issues are probably the most important. In the recent past much of the safety
issue has been tackled, so that more or less standardised engineering, organisational and
regulatory solutions exist everywhere. Meanwhile the environmental issue has gained more
and more weight, becoming a very significant cost issue. Therefore, any component
dysfunction that deteriorates the plant performance in these terms also becomes a cost item
and must be included in inspection planning somehow. However, for our example case of hot
pipework this is hardly an issue, whereas the safety aspects are.

3. PRACTICAL ASPECTS OF INSPECTION PLANNING

3.1 ECONOMY AND CONVENIENCE IN OPTIMISATION


The optimisation process for inspection planning in practice translates into balancing the
necessary information for such planning with the economy of obtaining it. The economical
aspects include the economy of inspections, analysis, and consequences of not meeting the
desired condition for the specified time. Such consequences are measured in cost of
replacement power, required repairs, insurance etc, but also in more fuzzy terms such as
possible impact to environment, company image in public relations or towards regulatory
bodies.

A significant though apparently hidden cost factor within inspection planning analysis can be
the inconvenience of obtaining the required information. If the system that is used for such
analysis is internally very "stiff", ie accepts only complete sets of extensive data on each
location of interest, it is eventually likely to fall into disuse, because at least initially any data
on the components are necessarily scarce. To minimise such problems, a good inspection
planning optimiser would accept patchy initial data and cover the missing pieces with default
values from nominal data or parallel experience. This also makes the process much faster for
the user, and provides easy paths to "what if" analyses. These in turn can pinpoint the most
valuable additional data that could be obtained in the next inspections.

One challenge in the optimisation process involves comparisons and weighing of data of
incompatible types. The quantities to be measured and compared may not be easily
measurable in the same terms or units. For example, the user may have obtained a service time
of 187 300 hours and a repair cost of 12 400 USD for a specific component, for which the
service temperatures, pressures and material strength can be given as probability
distributions, but the secondary axial stresses can only be classified as "higher than in other
comparable components". This requires combination of different types of information,
whether in the form of crisp numeric results from measurements, distributions of probabilities,
or more fuzzy expert opinion.
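One possible way of making such mixed inputs comparable is to map each of them onto a
common dimensionless scale, for example (a sketch only; the scaling choices and linguistic
labels below are invented, not taken from any particular methodology):

# Sketch: putting crisp, probabilistic and fuzzy inputs on a common
# [0, 1] scale so they can be compared and weighted together.
import statistics

def score_crisp(value, worst, best):
    # linear rescaling of a crisp number, e.g. service hours
    x = (value - worst) / (best - worst)
    return min(max(x, 0.0), 1.0)

def score_stochastic(samples, worst, best):
    # reduce a probability distribution (here: samples) to a mean score
    return score_crisp(statistics.mean(samples), worst, best)

FUZZY_LABELS = {"lower": 0.25, "comparable": 0.5,
                "higher than in other comparable components": 0.75}

service_score = score_crisp(187_300, worst=250_000, best=0)
temp_score = score_stochastic([543, 545, 547], worst=560, best=520)
stress_score = FUZZY_LABELS["higher than in other comparable components"]
print(service_score, temp_score, stress_score)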

As stated above, optimisation for inspection planning means balancing the cost of obtaining
the required information with the cost of missing it. Because of this balance, it is generally
not optimal to extend the quest for information and analysis beyond a case-by-case dependent
point, from where onwards the value of additional information no more pays off. For the very
same reason, this point is generally not well known, and it does not pay off to find out its
exact position. Consequently, much of the optimisation deals with a balancing process using
information and rules that are "good enough" rather than "optimal" themselves, and the
optimisation process is not like in classical minmax mathematics. However, by using a
number of different quantities for indicating the condition and cost items of the components
of interest, much of the uncertainty is reduced to an acceptable level, in spite of possibly
considerable uncertainties in single quantities.


3.2 LOCAL AND REGIONAL FEATURES: CUSTOMISATION


It is seen that there can be plenty of local aspects in the task of optimisation for inspection
planning. In some of these factors, regional or even wider global unification takes place
through international standardisation; this is likely to apply to at least some traditions and
mandatory rules, though very gradually. Meanwhile, it is often necessary to give some
reasonable weighting to the important factors affecting the inspection plans. Fortunately,
most of these factors remain fairly constant within one region, and hence mostly within a
single company or plant, and adjustment is necessary only when systems are transferred.

These local factors typically require that any automatic tool for creating inspection plans
must be locally customised. It is clear from the above that this is not a simple task of translation
to the local language. Even in the cases when the local mandatory (regulatory) issues do not
prevent using generalised procedures, there are national and other local views of
acceptability. Such acceptability does not need to refer to any engineering absolutes to be
significant, even in the engineering sense. For example, there are two main regional traditions
in Europe and elsewhere to make plastic replicas from hot pipework during off-line inspections.
The "anglosaxon" preference typically uses mechanical polishing in all cases and relatively
large (eg 20-50 mm) sheets for covering a weld. The "continental" tradition prefers smaller
10 mm dia spots, electrolytical polishing (except for cracks), and multiple spots for covering
a weld. Whatever advantages or disadvantages each technique may have, there is an
important issue of repeatability and comparability over long periods of time, ie component
life. The ability to compare successive inspection results may be regarded as more important
than clear differences in resolution or inspection cost. Clearly, such views will delay very
much any unification in the procedures of replication.

However, the heart of the optimisation process will remain basically untouched by such
modifications for local use. As in the case of boundary conditions from mandatory rules, or of
future development of rules that would be acceptable for new materials, generally only
parametric changes take place in technical customisation. The non-technical parts would
include the obvious aspects of language and training, as for any comparable system.

Furthermore, no customisation removes the ultimate need for evaluating the observed
damage on technical terms, which provides an essence to the important decision on the
recommended time to reinspection or repairs. This evaluation is the most controversial part of the
process, because skills in optimal maintenance strongly affect the competitive edge of the plant. The
maintenance costs amount to a relatively high proportion, up to about 10 % of the total operational
cost of a power plant (EBSOM 1993), and the maintenance cost can be easier to adjust than other
major cost items such as the cost of fuel or capital.

In the future the boundary conditions are likely to change somewhat. A new share between
materials and processes for power production will slowly emerge. For most countries, this will
mean a wider spread of high temperature materials than at present, and for many of the materials
the experience-based and even partially accepted creep damage classification and evaluation rules
are yet to be created.

4. SUMMARY
Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. Until
recently, much of the decisions related to actual content and timing of non-mandatory
inspections were not subject to systematic tools or methodologies. This is about to change
with the increasing integration of inspection data management, inspection planning tools, and
decision making.

The optimisation process for inspection planning in practice translates into balancing the
necessary information for such planning with the economy of obtaining it. The economical
aspects include the economy of inspections, analysis, and consequences of not meeting the
desired condition for the specified time. Such consequences are measured in cost of
replacement power, required repairs, insurance etc, but also in more fuzzy terms such as
possible impact to environment, company image in public relations or towards regulatory
bodies. It is generally not optimal to extend the quest for information and analysis beyond a
point where the additional information no more pays off. This leaves a factor of uncertainty,
which however is reduced by using a number of different quantities for indicating the
condition and cost items. Such quantities can be made comparable even when they are
initially of totally different type, such as crisp numbers, probability distributions, and fuzzy
expert opinions.

A significant though apparently hidden cost factor within the inspection planning analysis
can be the inconvenience of obtaining the required information. If the system that is used for
such analysis is internally very "stiff", ie accepts only complete sets of extensive data on each
location of interest, it is eventually likely to fall into disuse, because at least initially any data
on the components are necessarily scarce. To minimise such problems, a good inspection
planning optimiser would accept patchy initial data and cover the missing pieces with default
values from nominal data or parallel experience.

Because of the time-dependent changes in plant, optimisation for inspection planning is necessarily
a dynamic process. However, the future boundary conditions for inspection planning are also likely
to change. For example, a new share in terms of materials and processes for power production will
slowly emerge. For many of the newer materials the experience-based evaluation rules are vague or
non-existent, and in this sense the optimisation is also internally a moving target.

5. REFERENCES
Auerkari, P., 1995. NDT for high temperature installations - a review. IIW Commission IX
WG Creep, VTT Report VALB96, Espoo. 22 p.

Auerkari, P., Borggreen, K. & Salonen, J., 1992. Reference micrographs for evaluation of creep
damage in replica inspections. NORDTEST NT Technical Report 170. 41 p.

EBSOM - European Benchmark Study on Maintenance. EUREKA Project EU.724. MAINE /
EBSOM, Kunnossapitoyhdistys (FI), Föreningen Underhållsteknik (S), Norsk Forening for
Vedlikehold & Den Danske Vedligeholdsforening (DK), December 1993.

Jovanovic, A., Maile, K., Friemann, M., Auerkari, P., Vrhovac, M., Rantala, J., Gehl, S. &
Viswanathan, R., 1992. Knowledge-based system aided evaluation of replica results in terms of
remaining life assessment of power plant components. Paper 48, 18th MPA Seminar, Stuttgart.
24 p.

Neubauer, B. & Wedel, U., 1984. NDT: Replication avoids unnecessary replacement of power
plant components. Power Engineering, May, p. 44.

NORDTEST NT NDT 010, 1991. Remanent lifetime assessment of high temperature components
in power plants by means of replica inspection. 6 p. + app.

Shammas, M.S., 1988. Metallographic methods for predicting the remanent life of ferritic coarse
grained weld heat affected zones subjected to creep cavitation. Int. Conf. on Life Assessment and
Extension, Den Haag. Vol. III, p. 238-244.

Tolksdorf, E. & Hald, J., 1994. Experimental methods for determination of the creep and fatigue
damage conditions of power plant components. Int. VGB Conf. on Measures for Assessment and
Extension of the Residual Lifetime of Fossil Fired Power Plants, Moscow, May 16-21. 11 p.

Tolksdorf, E. & Kautz, H.R., 1994. Assessment of theoretical models for determination of
remaining life. Int. VGB Conf. on Measures for Assessment and Extension of the Residual
Lifetime of Fossil Fired Power Plants, Moscow, May 16-21. 17 p.

VGB-TW 507, 1992. Guideline for the Assessment of Microstructure and Damage Development
of Creep Exposed Materials for Pipes and Boiler Components. VGB, Essen. 83 p.

VGB-Richtlinie R509L, 1984. Wiederkehrende Prüfung an Rohrleitungsanlagen in
fossilbefeuerten Wärmekraftwerken. VGB, Essen. 28 p.


INTELLIGENT SOFTWARE SYSTEMS FOR INSPECTION


PLANNING - The BE5935 project
A. Jovanovic 1, P. Auerkari 2, H. R. Kautz 3, H. P. Ellingsen 1, S. Psomas 1
1 - MPA Stuttgart, FR Germany
2 - VTT Espoo, Finland
3 - GKM Mannheim, FR Germany

1. Introduction
The power plant components operating at high temperatures are important targets in the
in-service inspections and measurements. Apart from being large and expensive and subjected to
complex mechanical and thermal (creep-fatigue) loading in service, these components can limit
the availability of the whole plant. Due to ageing, these components need additional
monitoring, repairs and replacements.
Such components typically include
boiler tubing, superheaters and reheaters;
headers, valves, T- and Y-pieces and the rest of the hot pipelines;
hot parts of the steam and gas turbines.
The safety aspects of design impose that the nominal (design) life, e.g. 200,000 service hours
and 1000 cold starts, is considerably shorter than the true average life of these components at
the nominal (design) service loading level. Reasons for this include e.g. using lower bound
values for material strength in design and upper bound dimensions in manufacturing. The
extent (or occurrence) of excess life potential is not certain. Furthermore, overloading,
overheating or other disturbances not accounted for in design can, on the other hand,
considerably shorten component life.
Whenever feasible, extension of life or of inspection periods is to be recommended, not only
because of the direct cost impact but also because any unnecessary maintenance adds a
significant additional risk of damage and failures. For example, excessive residual stresses,
embrittlement or cracking after unnecessary repair welding and local heat treatments will
occur at a non-zero (and, in the case of susceptible materials in stiff structures, high) probability.
Nevertheless, timing of maintenance is always an optimisation problem, since too lax
maintenance or too long maintenance periods will also lead to costly unexpected shutdowns.
The amount of relevant background information, the extent of data on the service, inspection
and maintenance history, the number of locations of potential interest in a large system, as well
as the needs for relatively long term systematics and expert experience, all support the view that
much of the work would be ideally handled by an application-oriented decision support
system. Following the initial concept [7], such a system for computer-aided planning of
forthcoming inspections of high temperature piping in fossil-fired power plants has been
developed in the European Union research project BE5935 [6]. The initial concept for the part
regarding the inspection results interpretation has been given by Auerkari [1], in connection
with the recent guidelines of Nordtest [2] and VGB [11].

Inputs to the system are the results of previous inspections (if available), data about the piping
component, and strategic constraints resulting from the importance of the component, the
desired level of confidence, etc. Based on the integration of several elements, the system
produces a final output in the form of a "component vs. year" matrix showing:
what inspection technique (replica, ultrasonic, etc.) and
to what extent (e.g. what percentage of a welded joint is examined)
should be applied at a given location/component during the next inspection (overhaul).

2. Basic inspection principles


The most important factors affecting the conclusions made from the inspection results of the
hot pipework are
service history and its deviations from the expected (design) range;
inspection history, i.e. the results from earlier measurements;
materials and manufacturing / repairs / modifications; and
expected consequences of failure (cost and safety aspects).
Very often the conclusion resulting from the inspections or other measurements is a
recommended time period to the next inspection. In principle, the length of such a period is
limited by
the extent and quality of the available information on details e.g. in the service history
and future service, as well as in the maintenance history;
the optimum failure risk level for the plant and the component;
inherent inaccuracies in the evaluation methods for limiting failure; and
limited systematics (holistics) in producing the final conclusions.
Many of these inadequacies are partly addressed by using an appropriate decision support
system, which hence should be useful in minimising the economical impact of too frequent or
extensive inspections. However, in the beginning it is not possible in general to optimise
exactly in this sense, because only the accumulating inspection results provide means for
improving the accuracy of optimisation. Therefore, sets of experience-based recommendations
(e.g. [10]) have been devised for initial selection of the methods and locations of first
inspections.
Experience suggests that in the straight pipes and most areas (possibly excluding some bends
and T-piece bodies) of hot steam piping the nominal design life is relatively easily exceeded,
on average perhaps by a factor of 3 to 10. In the welds that are likely to fail first, this safety
factor on life is probably of the order of 1.5 to 3 on average, when the steam temperature
exceeds a value of about 500°C. In the welded joints the variation of life is also large, and
hence while it is important to include the critical welds in the inspection programs, it is equally
relevant to use these programs for finding those welds that determine overall life and possible
corrective actions.
In an ageing plant some weld damage and failures are very likely and can be economically
important events. Normal consequences of (circumferential) weld failures are not catastrophic
and do not require consideration of personnel safety. Failures of bends can be sudden and
catastrophic, but are in general rare, and mainly limited to susceptible materials such as
14 MoV 6 3 (0.5Cr-0.5Mo-0.25V) after less than successful manufacturing. As a
consequence, the inspection programs tend to concentrate on welds and treat bends (and some
T-piece bodies) case by case. Straight pipes are usually not included in the programs, and are
of little interest before attaining a very long service life (> 250,000 h), if even then.

In addition to experience-based general rules or earlier inspections, naturally any indication of
overloading or overheating, as well as manufacturing or material defects, is useful in
determining the locations of interest. Local decrease and variation in life time is most common
in welds, nozzle joints and perhaps some bends. Particularly loading and thermal transients
tend to concentrate relative lifetime accumulation in thick-wall components such as the main
valves of the boiler and turbine, critical headers and turbine rotors. These components
typically also determine the manufacturers' recommended maximum rates of changing
temperature and pressure during startups and shutdowns.

3. Locations and inspection criteria

The locations of interest are usually selected from components and areas with
earlier indications of defects or other deviations;
substandard or clearly less than ideal design (e.g. seam welded pipes);
significant overloading (e.g. improper hanger supports) or overheating;
suspected material/manufacturing defects; and
higher than average damage rates from general experience.
For quality purposes, first inspections are recommended to be taken and documented at the
time of taking the plant / components into service. Apart from the more frequent (max. period
about 4 years) inspections for certain components such as boiler drums, safety valves and
other components included in the usual periodical pressure vessel inspections, it is
recommended that the extended inspections targeting the life assessment of the hot pipework
are started by the time when 80 % of the nominal design life has been consumed (max.
100,000 h), if there are no specific reasons to deviate from this rule (such as earlier inspection
results or damage / failures).
Normally the extent of such a first inspection can be as shown in Table 1.
The timing and selection of locations after a given inspection are defined according to the
results. If no deviations or defects are found, a new evaluation is recommended not later
than after an additional 80 % of the nominal life (max. 100,000 h); this applies only to
the obvious locations of interest according to Table 1. The extent of the first inspection of this
kind can often be reduced when the steam temperature does not exceed 480°C (e.g. recovery
boilers), and replaced completely by ordinary periodical inspections when the steam
temperature does not exceed 400°C.
Damage could also be induced before attaining 80 % of the nominal life, due to improper
hanger supports or other inadequacies related to design, operation or maintenance. Indications
of possible early damage or service incidents of significance can be checked by regular hot
and cold walk-downs, including noting the general condition of hangers and supports of the
pipework. A sketch of the resulting timing rule is given below.
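The timing rules stated above lend themselves to a simple due-time check (a sketch of the
80 % / 100,000 h rule as given in the text; the function name and return convention are
illustrative):

# Sketch of the first-inspection timing rule stated above: extended
# life-assessment inspections start at 80 % of nominal design life,
# but not later than 100,000 h, with relaxations for low steam
# temperatures.
def first_extended_inspection_due(design_life_h, steam_temp_c):
    if steam_temp_c <= 400:
        return None          # ordinary periodical inspections suffice
    due = min(0.8 * design_life_h, 100_000)
    reduced_extent = steam_temp_c <= 480   # e.g. recovery boilers
    return due, reduced_extent

print(first_extended_inspection_due(200_000, 545))  # (100000, False)
print(first_extended_inspection_due(200_000, 470))  # (100000, True)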

4. Decision problem formulation and modeling

4.1 Generic
Fig. 1 gives an overview of the overall decision problem formulation and modelling of the
inspection planning of power plant components. As may be seen, the process of inspection
planning is divided into two levels:
1. "All component level": Selection and prioritisation of the components/locations
for the actual inspection purposes.
2. "One component level": Inspection and determination of the next inspection
time for each selected component/location.

Table 1: Recommended locations, extent and methods of first inspections for pipework operated
for 80 % of the nominal design life (max. 100,000 h)

Locations                          Methods                         Extent
Header nozzles and welds           Endoscopy, MT/PT + RT, UT       End caps, welds in nozzles (20 %)
Safety valves, steam coolers       Check of operation (UT of       Component internals
                                   int. surfaces or endoscopy)
Main steam valves, other heavy     Check of operation,             MT/PT ~100 mm wide of welds,
valves (at least one)              MT/PT+RT (welds), check of      RT according to findings
                                   inside wear/cracks
Bends near fixed points, bends     Bend MT/PT+RT, minimum          MT/PT from bend 2), RT at least
of lengthy pipes curving up/down   wall (UT) + ovality 1)          from extrados + acc. to
(min. one per line)                                                MT/PT results
T- and Y-pieces near fixed points  MT/PT+RT, UT at welds, wall     MT/PT of welds 100 mm wide,
(at least 2 + main branches)       thickness UT and strain 4)      RT acc. to indications, UT for body
Deaeration/dewatering nozzles      MT/PT+UT where water may        Inside of the joint
(spot test or by experience)       be trapped 3)
Flange joints near fixed points    MT/PT+RT, welds                 MT/PT externally

UT = ultrasonic testing; MT/PT = surface inspection; RT = replica testing. Visual
inspection in all cases; according to crack indications additional UT. Surface quality
requirements as in VGB R509L, except for RT as in SFS 3280.

1) + recalculation of stresses
2) d < 300: MT/PT whole bend; d > 300: MT/PT of four zones ~ 200 mm wide
3) especially near the boiler and when the nozzle dia ratio > 0.7
4) hoop measurement of the body and the nozzles; mark the measurement points

Note: consider internals where high fatigue life consumption is expected.



On each of these two levels the critical decision node is placed where a multi criteria decision
making (ranking) problem with crisp, fuzzy and/or random inputs has to be solved. Crisp
inputs are e.g. certain crisp numbers (e.g. the number of operating hours). Fuzzy inputs are e.g.
those involving linguistic variables (e.g. "high risk"), represented in terms of membership
functions. Random inputs are e.g. stochastic input variables (e.g. temperature), represented
in terms of probability distributions.
The goal of the complete system represented in Figure 1 is to provide a new inspection plan
for all selected components/locations.

[Figure: generic flowchart. All component level: user's preliminary selection of inspection
items (systems, components, locations); ranking criteria (importance, previous results, cost,
safety, environment, past history); decision node: component/location ranking (COLOR)
produces a ranked list of inspection items. One component level: selection of an inspection
item according to its position in the ranking list; criteria (inspection cost, difficulties, risk,
priority from COLOR); decision node: inspection strategy advisor (ISTRA) determines the
inspection strategy for the item; performing of inspections; determination of next inspection
time and scope. Result: new inspection plan as a "component/location vs. year" matrix.]

Figure 1: Generic flowchart of decision problem formulation


There are different problems that have to be handled when coordinating complex actions like
those in the decision making process for inspection planning of power plant components. In
order to cope with all of them, the developed decision support system consists of the
following elements:
1. a flowcharting part enabling the inspection/evaluation procedure to be modelled
graphically;
2. a knowledge-based system part controlling the user's movement through the
procedure;
3. multi criteria decision analysis modules (COLOR, ISTRA) optimising the
selection of possible alternatives in each decision node;
4. a hypermedia part providing the explanation facility;
5. a numerical calculations part providing additional input (e.g. calculation of
consumed life according to standards, etc.).

4.2 Intelligent flowcharting module

The modelling of the problem domain is done with an "intelligent" flowcharting program. The
"intelligence" of the program refers to its interaction with a knowledge-based system
controlling all movements in the flowchart. In that way, the resulting integrated module acts as
a user-advisor, assisting the user in facing the problem in a recommended way and allowing
him not only to obtain information and recommended actions from the other modules (MCDA,
Hypermedia) but also to input his personal thinking and/or experience where uncertainty
exists. Finally, with the use of the system the user avoids possibly overlooking significant
aspects of the procedure.
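The way the knowledge-based part controls movement through the flowchart can be sketched
as precondition-gated boxes (illustrative only; the box names and conditions are invented):

# Sketch of precondition-controlled flowchart movement: the KBS part
# offers only those boxes (activities) whose preconditions hold on the
# current facts; a post-condition fact is set when a box completes.
boxes = {
    "review_previous_results": {
        "pre": lambda f: not f.get("first_inspection", True),
        "post": "results_reviewed",
    },
    "run_COLOR_ranking": {
        "pre": lambda f: f.get("results_reviewed") or f.get("first_inspection"),
        "post": "ranking_available",
    },
}

def enabled(facts):
    return [name for name, box in boxes.items() if box["pre"](facts)]

facts = {"first_inspection": False}
print(enabled(facts))   # ['review_previous_results']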

4.3 Multi criteria decision analysis (MCDA) modules

The two modules, namely COLOR (COmponent/LOcation Ranking) and ISTRA (Inspection
STRategy Advisor), developed for the analysis of the two decision nodes shown in Fig. 1, are
both application-oriented and will be described later on in detail. However, the underlying
methodology is very general and applicable also in other fields where modelling of
uncertainties is mainly based on experience.
The applied methodology [8] is an extension of Saaty's approach [9], as amended by Buckley
[3, 4] in order to incorporate fuzzy comparison ratios. In such a way, it is much easier to model
uncertainties regarding comparisons of the criteria on which the decision has to be based, as
well as the ranking of alternatives with respect to each criterion.
Both modules can also handle crisp and stochastic inputs. In this way, they also model
situations where no uncertainty, or stochastic uncertainty, exists.
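As an illustration of fuzzy comparison ratios in this spirit, the sketch below derives fuzzy
criterion weights from a matrix of triangular fuzzy numbers using row geometric means, one
common reading of Buckley's extension (a simplified sketch; the comparison values are
invented):

# Fuzzy pairwise comparison weights: triangular fuzzy numbers (l, m, u),
# row geometric means, then normalisation by the fuzzy total.
import math

def geo_mean(row):
    n = len(row)
    return tuple(math.prod(x[k] for x in row) ** (1.0 / n) for k in range(3))

def fuzzy_weights(matrix):
    r = [geo_mean(row) for row in matrix]
    total = tuple(sum(ri[k] for ri in r) for k in range(3))
    # division of triangular numbers: (l, m, u) / (L, M, U) ~ (l/U, m/M, u/L)
    return [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]

# two criteria; "criterion 1 is roughly twice as important as criterion 2"
one = (1.0, 1.0, 1.0)
about_2 = (1.0, 2.0, 3.0)
inv_2 = (1.0 / 3.0, 1.0 / 2.0, 1.0)
matrix = [[one, about_2],
          [inv_2, one]]
for w in fuzzy_weights(matrix):
    print(tuple(round(x, 2) for x in w))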

4.4 Hypermedia and numerical calculations modules


Both modules are integrated in the decision support system in order to provide related
background information in each step of the overall process. The related information, consisting
mainly of experience-based recommendations and/or guidelines, may be retrieved
automatically. Furthermore, numerical calculations based on them can provide additional
input.
189

5. COLOR MODULE
5.1 Alternatives
The alternatives for the ranking procedure are the different components of a power plant. For
each type of component there might be different locations. In general the list of alternatives
may be like the following one:
boiler tubing, location 1,
boiler tubing, location 2,
superheater, location 3,
economiser, location 4, etc.

5.2 Criteria
To model the multi criteria decision of component/location ranking, the following criteria were
defined:
1. Fundamental importance of the component for the present plant (seriousness of
failure/downtime)
2. Results of previous inspections
3. Cost of replacement of the component
4. Safety aspects (including regulatory safety aspects)
5. Environmental aspects (including regulatory environmental aspects)
6. Qualitative past service history
7. Quantified past service history
8. Expected change in the operating conditions used for the analysis so far
9. Alternative supply patterns (i.e. relative importance of the component in
comparison with existing alternatives).
Table 2 gives the types of input values and an example of the relative weights of the different
criteria, calculated by pairwise comparison; a simplified use of these weights is sketched after
the table.
Table 2: Types of input and weights for criteria of component/location ranking

Name of criteria                         Input type      Relative weight
Fundamental imp. of component            Fuzzy           0.122 (70)
Results of previous inspections          Crisp           0.175 (100)
Cost of replacement                      Crisp           0.122 (70)
Safety aspects                           Fuzzy           0.070 (40)
Environmental aspects                    Fuzzy           0.070 (40)
Qualitative past service history         Fuzzy           0.105 (60)
Quantified past service history          Crisp, Stoch.   0.053 (30)
Expected change in the op. conditions    Fuzzy           0.140 (80)
Alternative supply patterns              Fuzzy           0.140 (80)
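With the weights of Table 2, the ranking step can be caricatured as a weighted aggregation of
per-criterion scores that have already been reduced to crisp values. This is a strong
simplification of the actual fuzzy procedure, and the components and scores below are
invented:

# Simplified COLOR-style ranking: weighted sum of per-criterion scores
# in [0, 1]. Weights from Table 2; components and scores are invented.
weights = {
    "importance": 0.122, "previous inspections": 0.175,
    "replacement cost": 0.122, "safety": 0.070, "environment": 0.070,
    "qualitative history": 0.105, "quantified history": 0.053,
    "operating change": 0.140, "supply patterns": 0.140,
}
components = {
    "#2873 header":  {"previous inspections": 0.9, "importance": 0.8},
    "#3987 T-piece": {"previous inspections": 0.4, "importance": 0.6},
}

def color_score(scores):
    # criteria without data fall back to a neutral 0.5 (patchy-data friendly)
    return sum(w * scores.get(name, 0.5) for name, w in weights.items())

for comp in sorted(components, key=lambda c: color_score(components[c]),
                   reverse=True):
    print(comp, round(color_score(components[comp]), 3))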

6. ISTRA MODULE
6.1 Alternatives
The selection of the inspection strategy is done after the selection of the inspection locations.
For the selected locations on the different components, five types of inspection strategies or
patterns are possible:
"zero-attention" program
"low-profile" program
"standard" program
"extended" program
"extensive" program
In addition, the detailed description of methods and extents of inspection is given in Table 3;
the cost-reliability trade-off it encodes is sketched in code after the table.

Table 3: Overview of the extent, costs and reliability of the alternatives

Name of inspection strategy   Inspection time, max.   Relative inspection cost   Reliability of inspection
"zero-attention" program      0 days                  0                          0
"low-profile" program         3 days                  1                          0.15
"standard" program            5 days                  2                          0.30
"extended" program            7 days                  3                          0.45
"extensive" program           2 weeks                 5                          0.8

6.2 Criteria
To model the multi criteria decision on inspection strategies, the following criteria were
defined:
1. Inspection and other directly related maintenance cost (e.g. preparation cost)
2. Additional difficulties due to access
3. Implicit risk due to safety aspects
4. Component priority (result from COLOR)
Table 4 gives the types of input values. The relative weights of the different criteria should be
calculated by an expert.

Table 4: Types of input for criteria of inspection strategy selection

Name of criteria               Type of input values   Optimisation goal
Inspection and other related   Fuzzy                  Minimise (for higher level inspection
maintenance cost                                      patterns costs are increasing)
Additional difficulties        Fuzzy                  Minimise (more difficult access to the
due to access                                         inspection region forces a lower level
                                                      inspection pattern)
Implicit risk due to safety    Fuzzy                  Minimise (higher safety needs force
aspects                                               higher level inspection patterns)
Component priority             Crisp                  Maximise (higher component priority
(result from COLOR)                                   forces higher level inspection patterns)

7. PRACTICAL APPLICATIONS
7.1 General
So far, within the BE5935 project, the methodology for inspection planning has been deployed
in two power plants, namely in GKM, Germany, and in IVO, Finland (Table 5).

Table 5: Overview of preliminary and detailed industrial problems considered in Task 513
(Decision Making for Inspection Planning)

Partner        Type of component         Preliminary analysis   Detailed analysis
MPA/IVO/VTT    sample steam line         Yes                    partly
MPA/GKM        full piping ("Kessel 14") Yes                    No
MPA/GKM        piping ("Kessel 15")      Yes                    Yes

They illustrate how the methodology developed in the previous tasks of BE 5935 ("The BE
5935 FBMCDM Methodology") can be practically applied on an industrial level. In both
applications it was necessary to provide a practical and usable engineering answer to the
following main questions:
a) WHAT (i.e. which components/locations and with which priorities) to inspect
b) HOW (i.e. using which inspection methods and in which scope) to inspect, and
c) WHEN (i.e. after how many operating hours) to inspect

The methodology developed in BE 5935 makes it possible to substantially improve current
engineering practice in answering each of these three questions.

7.2 IVO Example

[Figure: sketch of the IVO sample steam line with component IDs, e.g. headers 2201-2205,
pipe sections Z801-Z815, and a T-piece at Z301.]

Figure 2: Sketch of IVO sample steam line (including component IDs)
The following (slightly modified) example works through a steam line. A sketch of this piping,
including the component IDs mentioned in the input and output tables, is given in Figure 2.

The material is 13CrMo44 with a nominal temperature of 545°C, and the situation is given in
1993, after 110,000 service hours.
To support the complex decision making process, a modelling tool, namely ExpertChart, was
developed and integrated into the decision support system. ExpertChart is used to:
1. Model the problem domain
2. Lead the user through the problem
3. Provide background information
4. Perform analysis and calculations
The problem domain is modelled through flowcharts. Activities are represented by boxes and
their interconnections by lines and arrows. Each activity may be detailed on a sublevel, which
can be a complete chart of its own (activated by the small checked rectangle in the upper left
corner of the box). Traditional if-boxes are translated into pre- and post-conditions. These
conditions are also needed for leading the user through the flowchart. The flowchart modelling
the inspection planning for the IVO steam line is shown in the following figure:

Figure 3: Inspection Scheduling "All Component Level" (flowchart): (1) First inspection? (2) Review results from previous inspections; (3) Selection and prioritisation of components and locations; (4) User's decision to take one component for determination of the next inspection time; (5) "One component level": perform detailed analysis for one selected component; (6) Other components in the COLOR ranking list available? (7) End analysis.
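The following minimal Python sketch illustrates the idea of flowchart activities with preconditions and sublevels; the class names and traversal logic are our own assumptions for illustration, not the actual ExpertChart internals.

```python
# Illustrative sketch of flowchart activities with pre-conditions and
# sublevels, in the spirit of ExpertChart (structure assumed, not MPA's code).

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Activity:
    name: str
    action: Callable[[dict], None]                    # performs the step
    precondition: Callable[[dict], bool] = lambda s: True
    next_activity: Optional["Activity"] = None
    sublevel: Optional["Activity"] = None             # a complete sub-chart

def run(activity: Optional[Activity], state: dict) -> None:
    """Walk the chart, executing each activity whose precondition holds."""
    while activity is not None:
        if activity.precondition(state):
            if activity.sublevel is not None:
                run(activity.sublevel, state)         # descend into sub-chart
            activity.action(state)
        activity = activity.next_activity

# Example: box (2) runs only if previous inspection results exist.
review = Activity(
    name="Review previous results",
    action=lambda s: print("reviewing", s["results"]),
    precondition=lambda s: not s["first_inspection"],
)
run(review, {"first_inspection": False, "results": ["1986-87 data"]})
```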


Figure 4: Selection and prioritisation of components and locations (flowchart, sublevel of box (3)): (3.1) Preliminary user selection of the components to be considered; (3.2) Run COLOR in order to obtain a ranked list of components; decision node "A".

(5) "One component level"


J_
(5.1) Change m service rendition?
('Significant' observed or | Sigrnffcanl c o n d i t i o n change I
everted changes, funy^ (52) Recalculation n e c e s s e i y ?
(53) R e c t i culote remtuning X

I
kTe for COLOR i n p u t (S.4) R e r u n COLOR n order
Quantitive p a s t history* t o o b t a i n n e w value for
C o m p o n e n t priority* for I S T R A

(5.5) R u n I S T R A m order t o
c h o o s e i n s p e c t i o n strategy
Decision
e gin t e s t i n g c ondition
node"B"
(5.6) Ferform t e s t i n g ecc. to -L.
Significant indications yes
table " M e t h o d s and extents ,..'
Significant indications found? (52) M aero cracks,
micro crack. cavities.
End damage condition
B e g i n delete c o n d i t i o n

(5 _?) P manent strain. Micro


structure, h e r d n cc, oxidation
(5T) Delete this l o c a t i o n from inspection
p r o g r a m s b u t r e c o n s i d e r after 10OOOO h End strain condition
if t h e r e are damage i n d i c a t i o n s nearby.

Showtime condition

(5.10) S h o w r c m m a n d e d tim
for next inspection. A d d results
t o inspection p l a n (matrix)


Figure 5:One component level, perform detailed analysis

Figure 6: Macrocracks, microcracks, cavities (flowchart, box (5.8)): (5.8.1) Maximum damage type? Depending on the finding (macrocracks with or without cavities, microcracks with or without cavities, cavity classes 2.1 to 3.3 acc. to Nordtest), the chart recommends repair/removal and/or reinspection within 10 kh to 30 kh (with shorter intervals for bends, and replacement to be considered particularly for bends), small amounts of isolated cavities being acceptable for up to 30 kh. If the service time exceeds 100 kh and the material is 13CrMo44 or 10CrMo910, the reinspection periods can be doubled.

Figure 7: Permanent strain, microstructure, hardness, oxidation (flowchart, box (5.9)): (5.9.1) Permanent strain e > 0.1% observed? If e > 0.1%: (5.9.3) find the cause, consider repair/replacement, or set retest schedules acc. to the observed strain level. If e < 0.1% or unknown: (5.9.2) unexpected microstructure, hardness or oxidation found? If so: (5.9.4) temperature more than 10 °C above the expected average service temperature, or doubts about toughness? (5.9.5) Correct, restore or retest acc. to the calculated time, as in the case of observed strain; (5.9.6) reformulate the recommendation (next inspection time) obtained from step (5.8) "Macrocracks, microcracks, cavities".

The inspection planning procedure for this example follows the steps below:

Step 1 [Box (1)]: Is this the first inspection for this system? (Yes/No). User selects "No" since there are previous results from 1986-87.

Step 2 [Box (2)]: Access to the database of previous inspections. User selects the appropriate inspection data and reviews the inspection results. According to the "Results of previous inspections" criterion, the components with ID Nos. #813, #815 and #507 are in a critical phase.

Step 3 [Box (3.1)]: Initialisation of COLOR. User selects the above-mentioned components, as well as five others, for further analysis and fills all related data into Table 8.

Step 4 [Box (3.2)]: Running of COLOR. With the data from Table 8 and the relative weights of each criterion, the system gives a priority to each selected component. The ranked list in descending priority order is the following: (6, 8, 1, 7, 2, 4, 3, 5).

Step 5 [Box (4)]: Which component should be analysed? (List of components). According to his experience, user selects component No. 8 to determine the next inspection time.

Step 6 [Box (5.1)]: Change in service conditions? (Yes/No). New temperature monitoring results show a +5 °C change in the average temperature for the service conditions of component No. 8. Therefore, user selects "Yes".

Step 7 [Box (5.2)]: Is a recalculation of consumed/residual life necessary? (Yes/No). User selects "Yes".

Step 8 [Box (5.3)]: Using the calculation module and based on the new data, the system provides a new value for the criterion "Quantified past service history" for component No. 8. Table 8 is modified accordingly.

Step 9 [Box (5.4)]: Running of COLOR. With the new data of Table 8, COLOR produces a new output. The ranked list is now the following: (8, 6, 1, 7, 2, 4, 3, 5). Since the new priority of component No. 8 is even greater, user proceeds with the same component in order to find the best inspection strategy.

Step 10 [Box (5.5)]: Initialisation of ISTRA. For component No. 8 and for all possible strategies, user fills all related data into a table (Table 6).

Step 11 [Box (5.5)]: Running of ISTRA. With the data from Table 6 and the relative weights for each criterion obtained from an on-line pairwise comparison, the system gives a priority value to each inspection strategy/program. According to these values, the standard inspection program is suggested for component No. 8.

Step 12 [Box (5.6)]: The system retrieves the recommended actions related to the standard program (tests and their extent). User performs the recommended tests.

Step 13 [Box (5.6)]: Significant indications of damage found? (Yes/No). User selects "Yes" since there are some indications.

Step 14 [Box (5.8.1)]: Maximum damage type found? (Several options). User selects the appropriate option (microcracks with cavities).

Step 15 [Box (5.8.5)]: According to the recommendations, the failures are repaired and reinspection is scheduled within 10,000 hours.

Step 16 [Box (5.9.1)]: Permanent strain (e) observed? (e > 0.1%, e < 0.1%, unknown). The observed permanent strain is less than 0.1% for this component.

Step 17 [Box (5.9.2)]: Unexpected microstructure, hardness or oxidation? (Yes/No). User selects "No".

Step 18 [Box (5.9.6)]: Since no other problems were found, the next inspection time remains as it was.

Step 19 [Box (5.10)]: The recommended time for the next inspection is added to the inspection plan.

Step 20 [Box (6)]: Since there are other components available in the COLOR list, i.e. 6, 1, 7, 2, 4, 3, 5, the system returns to Step 5. The same procedure is then followed for components No. 6, 1 and 7. The analysis is then stopped after user's decision.

Step 21 [Box (7)]: With the end of the analysis, the final output of the system is given in the form of Table 7.

Table 6: Input values for ISTRA (IVO example)

Comp. No. 8                Cost        Additional difficulties due to accessibility   Implicit risk due to safety aspects   Component priority (importance)
"zero-attention" program   none        easy                                            high                                  0.1
"low-profile" program      low         standard                                        medium                                0.2
"standard" program         medium      difficult                                       medium                                0.3
"extended" program         high        difficult                                       low                                   0.4
"extensive" program        very high   difficult                                       low                                   0.5

Table 7: Final output of the system (recommendation 1993 for the example case)

Comp. No.   Comp. ID   Component type                          Next inspection                          Method            Extent
8           #507       Steam mixer                             Monitoring + int. inspection next year   MT/PT, RT, UT     MT/PT for welds, 100 mm wide; RT acc. to indications; UT for body
6           #813       Butt weld                               within next 15,000 h                     MT/PT, RT         MT/PT 100 mm wide; RT acc. to indications
1           #803       T-piece weld                            within next 20,000 h                     MT/PT, RT         MT/PT 100 mm wide; RT acc. to indications
7           #815       Terminal weld to reduction valve body   within next 20,000 h                     MT/PT, RT         MT/PT 100 mm wide; RT acc. to indications
Table 8: Input values for COLOR (IVO example)

Columns: Comp. No. | Comp. ID | Type of component | Typical downtime cost | Situation acc. to previous inspections | Cost of replacement [ECU] | Safety priority | Environmental priority | Qualitative past service history | Quantified past service history [equivalent hours] | Future service conditions | Alternative supply availability

1  | #802 | T-piece weld              | Medium | 1 | 25k | Medium | Low | Mild    | 110,000 | No changes | Average
2  | #803 | T-piece weld              | Medium | 3 | 25k | Medium | Low | Average | 110,000 | No changes | Average
3  | #202 | Pipe bend down            | High   | 1 | 10k | High   | Low | Mild    | 110,000 | No changes | Average
4  | #204 | Pipe bend horizontal      | High   | 2 | 10k | High   | Low | Mild    | 110,000 | No changes | Average
5  | #205 | Pipe bend down            | High   | 2 | 10k | High   | Low | Severe  | 110,000 | No changes | Average
6  | #813 | Straight pipe / bend weld | Low    | 4 | 8k  | Low    | Low | Severe  | 110,000 | No changes | Average
7  | #815 | Reduction valve + welds   | Medium | 3 | 45k | Medium | Low | Mild    | 110,000 | No changes | Relatively low
8  | #507 | Mixer                     | Medium | 5 | 30k | Medium | Low | Average | 110,000 | No changes | Relatively low
9  | #801 | Straight pipe weld        | Low    | 2 | 8k  | Medium | Low | Average | 110,000 | No changes | Average
10 | #301 | T-piece nozzle + weld     | Medium | 2 | 25k | Medium | Low | Average | 110,000 | No changes | Average
11 | #201 | Horizontal bend           | High   | 2 | 10k | High   | Low | Average | 110,000 | No changes | Average
12 | #804 | Straight pipe / bend weld | Low    | 2 | 8k  | Medium | Low | Average | 110,000 | No changes | Average
13 | #805 | Straight pipe weld        | Low    | 2 | 8k  | Medium | Low | Average | 110,000 | No changes | Average
14 | #806 | Straight pipe / bend weld | Low    | 2 | 8k  | Medium | Low | Average | 110,000 | No changes | Average

7.3 GKM-Example
In this application the whole piping from the boiler to the turbine inlet is considered. The material is 10 CrMo 9 10 with a nominal temperature of 530 °C and a nominal pressure of 250 bar, and the situation is that after 200,000 service hours.

Figure 8: GKM Piping

The piping consists of 67 components/locations of potential interest (T- and Y-pieces, bends, valves, corresponding welds, hangers, etc.). Some of the more critical ones are shown in Figure 8. Table 9 gives the input values needed for the COLOR calculation for the first 12 of the preselected components. The input values for each component are collected from the available power plant data. For components where the criteria values were not known, default values were used [7]. All collected information was then saved in a database (see Figure 9). Apart from the inspection planning input data, the database contains component/location specifications, including a detailed picture of the respective location in the piping.
The input values are either crisp or linguistic. For the COLOR analysis, the linguistic statements must be transformed into fuzzy numbers using appropriate membership functions (a minimal sketch of such a transformation is given after Table 9). After the analysis, the output is a list of components ranked by their priority for inspection. A graphic illustration of the COLOR results for the GKM piping is shown in Figure 10. The result value shown in this figure corresponds to the priority of each component, since it integrates all criteria inputs. In this way, the most critical component achieves the highest result value. In the GKM example, this component was the valve DH 14-1a.
Table 9: Input values for COLOR (GKM example)

Criteria (kind of input data; weight; optimisation goal "max" for all): Typical downtime cost (fuzzy; 70), Results of previous inspections (crisp; 100), Cost of replacement [1000 ECU] (crisp; 70), Safety priority (fuzzy; 40), Environmental priority (fuzzy; 40), Qualitative past service history (fuzzy; 60), Quantified past service history [equivalent hours] (crisp; 30), Future service conditions (fuzzy; 80), Alternative supply pattern (fuzzy; 80).

No. | Component No. - Name                | Downtime | Prev. insp. | Repl. cost | Safety | Environ. | Qual. history | Quant. history | Future conditions | Alt. supply
1   | 200 - Montagenaht am Kesselaustritt | medium   | 6 | 15 | high   | high | average | 111,000 | no changes | relatively low
2   | 101 - Montagenaht                   | low      | 4 | 15 | medium | high | average | 200,000 | no changes | relatively low
3   | B1 - Bogen                          | medium   | 4 | 75 | high   | high | average | 200,000 | no changes | relatively low
4   | 4 - Werkstattnaht                   | low      | 2 | 15 | medium | high | average | 200,000 | no changes | relatively low
5   | 201 - Montagenaht                   | low      | 4 | 15 | medium | high | average | 200,000 | no changes | relatively low
6   | B2 - Bogen                          | medium   | 6 | 75 | high   | high | average | 170,000 | no changes | relatively low
7   | 202 - Montagenaht                   | low      | 4 | 15 | medium | high | average | 200,000 | no changes | relatively low
8   | B3 - Bogen                          | medium   | 4 | 75 | high   | high | average | 200,000 | no changes | relatively low
9   | 109 - Werkstattnaht                 | low      | 2 | 15 | medium | high | average | 200,000 | no changes | relatively low
10  | 54 - Bogen                          | medium   | 4 | 75 | high   | high | average | 200,000 | no changes | relatively low
11  | 114 - Montagenaht                   | low      | 4 | 15 | medium | high | average | 200,000 | no changes | relatively low
12  | 203 - Montagenaht am Kesselaustritt | medium   | 5 | 15 | medium | high | average | 0       | no changes | relatively low
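The sketch below shows one conventional way such a transformation can be done, mapping linguistic terms to triangular membership functions; the terms, breakpoints and centroid defuzzification are illustrative assumptions of ours, not the membership functions actually used in the GKM analysis.

```python
# Illustrative mapping of linguistic criterion values to triangular fuzzy
# numbers (a, b, c) on a normalised [0, 1] scale (assumed breakpoints).

LINGUISTIC_TERMS = {
    "low":    (0.0, 0.15, 0.4),
    "medium": (0.3, 0.50, 0.7),
    "high":   (0.6, 0.85, 1.0),
}

def membership(term: str, x: float) -> float:
    """Degree to which the value x belongs to the fuzzy set of the term."""
    a, b, c = LINGUISTIC_TERMS[term]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(term: str) -> float:
    """Defuzzify a term to a single representative value (triangle centroid)."""
    a, b, c = LINGUISTIC_TERMS[term]
    return (a + b + c) / 3.0

print(membership("medium", 0.6))   # 0.5
print(round(centroid("high"), 2))  # 0.82
```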

Figure 9: Database interface for inspection input data (Microsoft Access form showing the inspection planning record for location 200, Montagenaht am Kesselaustritt)

Figure 10: Graphic illustration of the result of the ranking tool COLOR
As already shown in the IVO example, the next step when using the decision support system is to establish the appropriate inspection strategy using the ISTRA module and to perform the recommended tests. The developed database interacts with the whole decision support system, enabling feedback of all the information gathered with the various techniques applied.

8. Conclusions
The applications of the decision support system in the IVO and GKM power plants confirmed the capability of the system to efficiently use the experience of local domain experts and the service history to quickly produce a first draft of the inspection plan. The overall system represents a helpful tool for the maintenance of power plant structures.
In the future, the system will be coupled with an NDT database and used primarily for preliminary screening and "drafting" of the annual inspection plans. Experts' revision of these drafts will remain a mandatory part of the overall procedure.

9. Acknowledgements
Some of the work presented in this paper has been accomplished within the European Union research projects SPRINT SP249 and BRITE-EURAM BE 5935. In addition, some of the results have been achieved under the Brite-Euram Fellowship Contract No. BRECT933039 (fellowship for the stay and research of Mr. S. Psomas at MPA Stuttgart). This support is gratefully acknowledged here.

10. References
1. Auerkari, P., 1993, "Guidelines for Inspection Criteria of Hot Pipework", SPRINT SP249 Technical Report, VTT Metals Laboratory, Espoo
2. Auerkari, P., Borggreen, K., Salonen, J., 1992, "Reference Micrographs for Evaluation of Creep Damage in Replica Inspections", NT Technical Report 170, Nordtest, Espoo
3. Buckley, J. J., 1985a, "Ranking Alternatives Using Fuzzy Numbers", Fuzzy Sets and Systems 15, North-Holland, pp. 21-31
4. Buckley, J. J., 1985b, "Fuzzy Hierarchical Analysis", Fuzzy Sets and Systems 17, North-Holland, pp. 233-247
5. Jovanovic, A., Psomas, S., Ellingsen, H. P., Kautz, H. R., McNiven, U., Rönnberg, J., Auerkari, P., 1995, "Decision Support System for Planning of Inspections in Power Plants. Part II: Application in GKM and IVO Power Plants", to be presented at the Baltica Conference, June 8, 1995
6. Jovanovic, A., Psomas, S., Schwarzkopf, Ch., Auerkari, P., Bath, U., Weber, R., Kautz, H. R., Verelst, L., 1994, "Decision Making for Power Plant Component Inspection Scheduling", Report on Task 4.2/4.3 of the BE 5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T401, MPA Stuttgart
7. Jovanovic, A., Zimmermann, H.-J., 1990, "Decision Making and Uncertainty in Life Assessment and Management of Power Plant Components", Document BE3088/89, MPA Stuttgart
8. Lieven, K., Weber, R., Bath, U., Jovanovic, A., Psomas, S., De Witte, M., Verelst, L., 1993, "Multi-Criteria Decision Making Modelling Technology", Report on Task 3.1 of the BE 5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T3101, MPA Stuttgart
9. Saaty, R. W., 1987, "The Analytic Hierarchy Process: What It Is and How It Is Used", Math. Modelling, Vol. 9, No. 3-5, pp. 161-176
10. VGB-R 509 L, 1984, "Wiederkehrende Prüfungen an Rohrleitungsanlagen in fossilbefeuerten Wärmekraftwerken", VGB, Essen
11. VGB-TW 507e, 1992, "Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components", VGB, Essen


THE 'PLUS' SYSTEM FOR OPTIMISED O&M


OF POWER AND PROCESS PLANT

J Cane
G Jones
J D Sanders
R D Townsend

ERA Technology Ltd


Cleeve Road, Leatherhead,
Surrey KT22 7SA
United Kingdom

ABSTRACT

The increasing importance of flexible operating capabilities for modern power and process plant means that thick-section components such as steam headers or reactor vessels operating at high temperature in the creep regime now experience more temperature transients, which introduce thermo-mechanical fatigue as an additional damage mode interacting with creep to limit the life of the components.

ERA has therefore developed a system known as the Plant Life Usage Surveillance system (PLUS) to assess the creep-fatigue life utilisation of thick-section components. PLUS is a unique system in which on-line monitoring of operating parameters is integrated with off-line condition inspection data to provide accurate real-time optimisation of component life usage.

This paper provides a description of a PLUS system, with reference to a case study application. The particular concern in this case is ligament cracking in steam headers arising from increased cyclic operation. Key issues regarding susceptibility to cracking and meaningful life monitoring are discussed to demonstrate the benefits of on-line life surveillance.

1 Introduction

There is a growing awareness among power plant operators of the benefits to be gained by applying on-line plant condition monitoring techniques. Market forces now demand that plant originally designed for base-load operation operate more flexibly. Experience has shown that even plant designed for cyclic operation can fail by creep-fatigue mechanisms induced by operational transients not allowed for in the design. Competition is also forcing operators to reduce costs by demanding increased run times between outages and reduced maintenance schedules. All these factors make one-off investment in all forms of condition monitoring increasingly attractive.

This paper describes a system designed to monitor thick-section, high-temperature components on-line for creep-fatigue degradation. ERA's Plant Life Usage Surveillance (PLUS) System was originally designed to address the problem of ligament cracking of steam headers prevalent in Europe and the US. It is, however, equally applicable to other power plant components such as main steam pipework, chests and casings, and, for process plant, to thick-section reactor vessels operating in the creep range. There is also considerable interest in applying it to Heat Recovery Steam Generators (HRSGs), which are notoriously susceptible to creep-fatigue failures.

The basic function of PLUS is to convert signals obtained from sensors (usually just thermocouples and pressure transducers) strategically connected to critical locations on the plant into local stress and strain values in real time.

Using built-in advanced algorithms these values are converted and summed to give a realistic measure of damage accumulation in real time, or at convenient periodic intervals. PLUS therefore serves as both a life usage monitor and an operations adviser (or alarm system), and may thereby be utilised as a damage controller and a maintenance planning tool. It may also be used as a simulator to assess the likely effects of changes in operating mode.

2 Scope of the PLUS System

PLUS is a fully integrated on-line system, with real-time data monitoring and processing, providing periodic on-line analysis with facilities for the integration of off-line inspection/interrogation data. It is fully customised for each application and enables plant-specific geometry, design and history to be accommodated together with the specific local operational behaviour. This is achieved by the incorporation of component-specific temperature/pressure-to-stress calibration based on off-line FE analysis using operator-specific on-line data. Life prediction algorithms are implemented according to the components and degradation processes being monitored.

Depending on the requirements of the application, future PLUS systems will be made to monitor and predict crack propagation using temperature surveillance and calculational methods, or to monitor cracked components using sensors to signal local failures.

The scope of the PLUS System, developed on a UNIX workstation, is shown in Fig. 1. This figure highlights the key components which facilitate various features, the more important ones being:

1 The data capture module connects the PLUS system to the site sensor data collection system. Its precise structure is therefore dependent upon the nature of the existing facilities.

2 The data validation module interrogates the sensor signals, applies a number of consistency checks and marks data deemed to be faulty. This module is also responsible for filing the data in time order, such that any file can be retrieved by means of a time and date identity.

3 The database module holds relevant aspects of the component geometry, the component-specific stress functions and selected inspection data pertinent to the assessment. Using template objects it allows customers to process additional monitoring points on components of similar geometry. It also holds the damage history for all monitored components.

4 The display module allows the operator to select a location on a component and to display the stress at that location in real time.

5 The life analysis modules in PLUS are determined by the components and degradation mechanisms being monitored. The analysis modules are periodically activated by the operator to assess the consumed life based on newly available on-line plant data and the last calculated life usage for each component.
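As an illustration of the data capture and validation modules (items 1 and 2 above), the sketch below applies simple range and rate-of-change consistency checks and files samples in time order for retrieval by timestamp; the specific checks and class names are our own assumptions, not ERA's implementation.

```python
# Illustrative sketch of PLUS-style data validation and time-ordered filing
# (checks and structure assumed; not ERA's implementation).

from dataclasses import dataclass
import bisect

@dataclass
class Sample:
    timestamp: float      # seconds since some epoch
    sensor_id: str
    value: float          # e.g. temperature in degrees C
    faulty: bool = False  # set by the consistency checks below

class ValidatingStore:
    def __init__(self, lo: float, hi: float, max_rate: float):
        self.lo, self.hi, self.max_rate = lo, hi, max_rate
        self.samples: list[Sample] = []   # kept sorted by timestamp
        self.last: dict[str, Sample] = {}

    def add(self, s: Sample) -> None:
        """Range check plus rate-of-change check against the previous sample."""
        prev = self.last.get(s.sensor_id)
        out_of_range = not (self.lo <= s.value <= self.hi)
        too_fast = (prev is not None and s.timestamp > prev.timestamp and
                    abs(s.value - prev.value) /
                    (s.timestamp - prev.timestamp) > self.max_rate)
        s.faulty = out_of_range or too_fast
        bisect.insort(self.samples, s, key=lambda x: x.timestamp)
        self.last[s.sensor_id] = s

    def retrieve(self, t0: float, t1: float) -> list[Sample]:
        """Return all samples whose timestamps fall within [t0, t1]."""
        return [s for s in self.samples if t0 <= s.timestamp <= t1]

store = ValidatingStore(lo=0.0, hi=650.0, max_rate=5.0)   # degC, degC/s
store.add(Sample(0.0, "TC-7", 540.0))
store.add(Sample(1.0, "TC-7", 541.2))
print([s.faulty for s in store.retrieve(0.0, 10.0)])      # [False, False]
```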

3 Nature and Incidence of Ligament Cracking

Ligament cracking develops from the inside crotch corner of the header and tube intersection and propagates along both the header and the tube internal walls. The classical form of ligament cracking occurs where adjacent tube penetrations are closely spaced, such that crotch corner cracks propagate across the ligament, as shown in Fig. 2. This classic form is particularly severe since it can result in catastrophic failure. Ligament cracking along the circumferential direction may result in fast fracture with the header breaking in two. Where adjacent element rows are closely aligned along the length of the header, ligament cracks may also develop in the axial direction, where the crack surfaces are normal to the dominant hoop pressure stress. This again raises the possibility of catastrophic failure. Localised cracking with a starburst distribution, illustrated in Fig. 3, where the cracks radiate out from the perimeter of the tube hole, is associated with isolated or more widely spaced penetrations. Starburst cracking is unlikely to cause catastrophic failure but may cause steam leaks.

Ligament cracks in headers removed from service (both in the US and Europe) have been found, without exception, to be consistent with high strain thermal fatigue generated by severe thermal transients. That is, the cracks are straight, transgranular, gaping and oxide-filled, with no associated creep damage.

All studies support a pattern of development in which multiple cracks first initiate from the inside corners of the tube penetrations, but growth is dominated by the primary cracks which propagate across the ligament and towards the outer wall. European experience indicates that crack initiation occurs relatively early (10,000 to 20,000 hours of operation), with a relatively long propagation period generally exceeding 50,000 hours. This is contrary to the US experience, where crack initiation occurs much later, i.e. over 100,000 hours, and crack propagation is generally much more rapid. The difference in behaviour may be attributed to differing operating practices. Oxide notching has been proposed as a crack initiation mechanism; however, this is not supported by European investigations.

Although no creep damage is observed for headers operating in the creep regime, creep relaxation of high transient stresses contributes to the crack initiation and propagation mechanisms. Assessment methods are therefore based on creep-fatigue analysis.

3.1 Factors Affecting Susceptibility to Ligament Cracking

Analysis of the available data on the incidence of ligament cracking in Europe (Table 1) and the US (Table 2) reveals that the European and US experience exhibits common factors, as summarised below:

Header Type: Superheater outlet headers were found to be susceptible to ligament cracking. The US and European experience indicates that secondary and final superheater outlet headers were the most susceptible. There is also European experience, but little US experience, of cracking in primary and interstage superheaters.

Header Geometry: The susceptibility to cracking increases with decreasing ligament width and with increasing wall thickness, with all headers exceeding a certain thickness exhibiting cracking.

Boiler Maker and Unit Size: The European experience indicates that all makes and sizes of unit are susceptible to ligament cracking, with some manufacturers' components being inexplicably more vulnerable. Larger units also show increased incidences of cracking, a finding supported by US experience.

Operating Hours and Starts: Figures 4 and 5 show graphs comparing observed cracking data in Europe with plant operating hours and number of starts respectively. No correlation is evident between incidences of cracking and operating hours or number of operating cycles. Cracking has been observed after comparatively few cycles, less than 500, contradicting the view that ligament cracking is a two-shifting problem.

3.2 Predictive Assessment and the Requirement for Plant Monitoring

Concern about the incidence of ligament cracking led to quantitative assessments being carried out on ex-service headers in Europe.

Finite element analyses were carried out for typical service start-up and shut-down cycles, and crack initiation times and propagation rates were determined using high strain fatigue cyclic endurance and creep ductility exhaustion models.

The analyses predicted considerably longer crack initiation endurances and much slower crack growth rates than those determined by oxide dating techniques applied to the removed samples. The conclusion was that additional cycles were present that were not considered in the analysis.

Temperature monitoring to investigate the cause of the thermal cycles responsible for ligament cracking has confirmed that the temperature ramp rates associated with normal two-shifting operating cycles are insufficient to generate the plastic strain ranges required to account for the observed cracking. However, much more severe local transients were identified under certain operating conditions. Analysis of the monitored data indicated that the major contributors to ligament cracking are:

- Emergency shut-downs following tube leaks
- Spraying operations during cold start-ups
- Temperature cycles during hot starts (for example, problems with coal flow and mills)

The fact that thermocoupling and continuous monitoring of vulnerable headers confirmed the occurrence of previously undetected transients demonstrates the importance of on-line monitoring for accurate life prediction for headers.

4 Case Study

The PLUS System addressed here was commissioned to monitor the creep-fatigue damage accumulation and predict crack initiation in the platen, final and reheater outlet stub headers and manifolds, as well as to monitor creep-corrosion damage in the associated boiler tubing, in 8 boiler units of 350 and 650 MW. The significance of ligament cracking for plant integrity and the benefits of life monitoring are demonstrated by consideration of the background to the problem.

4.1 Plant Monitoring Requirements

Inlet steam temperatures are normally obtained from thermocouples on inlet tubes. The accuracy of the estimate and the definition of critical locations improve as the number of tubes being monitored increases. Additional thermocouples are also required to provide back-up in the event of thermocouple failure.

Thermocouples were installed on the two lead units of the case study plant. The selection of the thermocouple locations was based on the identification of critical components using previous analyses and taking existing thermocouple locations into account.

The thermocouples installed provided preliminary surveillance data for the finite element analyses performed during PLUS customisation. A selection of these thermocouples was then used for the ongoing monitoring by PLUS, ensuring that sufficient redundancy is provided.

4.2 Stress Calculation

For the purposes of PLUS it is assumed that the ligament stresses can be uniquely determined from a number of temperature differences and rates of temperature change within the header. The process undertaken to generate the stress functions used by PLUS is illustrated in Fig. 6.

1 Surveillance data from the thermocouples and pressure transducers are processed and analysed to identify the thermal boundary conditions and the typical and exceptional transients experienced by the components. Some of the thermocouple surveillance data are also used to validate the output of the FE heat transfer analyses.

2 A wide range of thermal transients, using realistic heat transfer coefficients, steam ramp rates and temperature changes, is analysed for each geometry. The outputs of the thermal analyses are compared with plant data. Thermal boundary and heat transfer conditions are refined until an optimum correlation is achieved. Figure 7 provides an example of the comparison of the FE thermal transient results and measured temperatures.

3 Each geometry under consideration is modelled using 3-D finite element analysis techniques. For the case study, analysis of 32 geometries was required.

4 Finite element stress analysis is then performed for the thermal transients. The outputs of these analyses provide inputs to, and validation of, the stress functions generated in step 5. The hoop stress at a crotch corner location under a hot start condition is shown in Fig. 8.

5 Multiple parameter linear regression analysis is then carried out to relate the ligament stresses from step 4 to the temperature differences and rates of temperature change. This analysis produces relationships (stress functions) of the type

    σ = F(ΔT_i, Ṫ_i)

where ΔT_i is a temperature difference and Ṫ_i a rate of temperature change. An example of the stress functions so developed is given in Fig. 9. These functions enable the direct calculation of stress, and therefore strain, from measured temperature values obtained during PLUS operation, providing input to the creep-fatigue calculation.

6 The stress functions are implemented in the PLUS system to provide real-time variations of the ligament stresses, as shown in Fig. 10. These real-time displays provide the operator with an instantaneous output of the ligament stress, which can be compared with a stress limit set to prevent crack initiation in a specified number of starts. Stress limits can be periodically updated by PLUS using the latest calculations of life usage.
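As a concrete illustration of step 5, the sketch below fits a linear stress function of the form σ = C0 + C1·ΔT + C2·(dT/dt) to synthetic FE results by least squares; the data and coefficients are hypothetical, since the real stress functions are derived from plant-specific FE analyses.

```python
# Minimal sketch of fitting a stress function sigma = C0 + C1*dT + C2*dTdt to
# FE results (synthetic data here; real coefficients are plant-specific).

import numpy as np

# Hypothetical FE outputs: temperature difference [K], ramp rate [K/s], stress [MPa].
dT    = np.array([ 5.0, 10.0, 20.0, 35.0, 50.0])
dTdt  = np.array([0.01, 0.05, 0.10, 0.20, 0.30])
sigma = np.array([12.0, 26.0, 50.0, 88.0, 125.0])

# Design matrix with an intercept column; solve least squares for (C0, C1, C2).
X = np.column_stack([np.ones_like(dT), dT, dTdt])
coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
C0, C1, C2 = coef

def stress(dT_meas: float, dTdt_meas: float) -> float:
    """Estimate ligament stress [MPa] from measured temperatures (real-time use)."""
    return C0 + C1 * dT_meas + C2 * dTdt_meas

print(stress(25.0, 0.12))   # stress estimate for an intermediate transient
```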

4.3 Creep-fatigue Damage Calculation

The methodology adopted for PLUS assumes that any arbitrary cycle can be separated into an elastic-plastic cyclic component and a stress relaxation dwell component. The elastic-plastic cycling causes low cycle fatigue damage, and the stress relaxation causes creep damage by a process of ductility exhaustion. Both of these components contribute to the overall damage, which is calculated using the linear damage summation rule.

In carrying out the analysis, each transient is resolved into discrete cycles. The hysteresis loop for each cycle is constructed from the strain-time data generated by the stress functions by means of the offset-zero form of the Ramberg-Osgood equation

    Δε = Δσ/E + (Δσ/A)^(1/β)

where A and β are temperature and strain rate dependent material parameters.

The fatigue damage component, D_f, is obtained from the relationship

    D_f = Σ 1/N_f(Δε_t)

where Δε_t is the total strain range of the cycle and N_f is the fatigue endurance as a function of the total strain range of the cycle, described by means of a suitable parametric equation.

The creep damage component, D_c, is calculated by means of the ductility exhaustion approach using

    D_c = ∫₀^(t_d) (ε̇(t)/ε_f) dt

where t_d is the dwell time, ε_f is the creep ductility, and ε̇(t) is the instantaneous strain rate obtained from the stress relaxation relationship

    σ(t) = σ₀ [1 + (n − 1) E B σ₀^(n−1) t]^(−1/(n−1)),   ε̇(t) = −(1/E) dσ/dt = B σ(t)^n

with σ₀ the peak stress at the start of the dwell, t the time, and n, B temperature dependent material parameters.

The total creep-fatigue damage for each cycle, D_t, is calculated using the linear damage rule

    D_t = D_f + D_c
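The following sketch illustrates the linear damage summation for one cycle plus dwell; the Coffin-Manson-type endurance law, the Norton relaxation model and all material constants are illustrative assumptions of ours, not the parametric equations actually used in PLUS.

```python
# Illustrative creep-fatigue damage summation for one cycle plus dwell
# (assumed material constants and endurance law, not PLUS's own equations).

E = 180e3          # Young's modulus [MPa] (assumed)
EPS_F = 0.10       # creep ductility [-] (assumed)

def fatigue_endurance(strain_range: float) -> float:
    """Hypothetical Coffin-Manson-type endurance: N_f = C * (strain range)^(-k)."""
    C, k = 0.5, 2.0
    return C * strain_range ** (-k)

def fatigue_damage(strain_range: float) -> float:
    return 1.0 / fatigue_endurance(strain_range)

def creep_damage(sigma0: float, dwell_s: float, B: float, n: float,
                 steps: int = 1000) -> float:
    """Integrate strain-rate/ductility over the dwell while the stress relaxes
    at constant total strain (Norton creep law assumed)."""
    dt = dwell_s / steps
    sigma, damage = sigma0, 0.0
    for _ in range(steps):
        eps_dot = B * sigma ** n        # creep strain rate [1/s]
        damage += eps_dot / EPS_F * dt  # ductility exhaustion increment
        sigma -= E * eps_dot * dt       # stress relaxation at constant strain
    return damage

# One cycle: 0.3% strain range plus a 2 h dwell from a 120 MPa peak stress.
Dt = fatigue_damage(0.003) + creep_damage(120.0, 2 * 3600, B=1e-20, n=6.0)
print(f"damage per cycle: {Dt:.2e}")
```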

4.4 The Creep-fatigue Crack Initiation Program

The creep-fatigue crack initiation program built into the PLUS system consists of algorithms evaluating the above two types of damage mechanism, as shown in Fig. 11. The modules read in the time and temperature data collected by the monitoring system and calculate the associated stress and strain using the stress functions. After resolving the data into cycles and dwells, the two algorithms establish the LCF damage and creep damage components for each identified cycle and dwell period respectively.

The total damage for each location is established and stored in the PLUS database. Cyclic life usage calculated from monitored data is shown in Fig. 12.

The creep-fatigue life analyses are performed periodically by PLUS using past life estimates and newly available plant data. PLUS is set up to automatically update the life estimate on a monthly basis. The life analysis may also be performed at any time upon user instruction. The operator may initiate the life analysis in two ways. A commit may be initiated where new off-line data are supplied; the life estimates are then updated based on these and the latest available on-line data, and the results are stored in the database. Alternatively, life calculations may be performed at any time using the latest available data; the results are displayed to the operator but not stored.

4.5 Steady State Creep Damage Calculation

For periods of steady operation, the accumulation of steady state creep damage is determined in PLUS using

    D_c = Σ (t/t_r) + D_c,init

where t_r is the allowable rupture time at the current operating temperature and reference stress σ_ref, t is the time for which the operating temperature and stress remain constant, D_c,init is defined by a clock-setting exercise using inspection and condition assessments, and σ_ref is the reference stress calculated for each critical location on the monitored components using inverse design procedures.

4.6 Integration of Inspection Data in PLUS

The relationship between the calculational assessment route resident in the monitoring system and the off-line inspection results should be reciprocal. Besides the use of PLUS to give guidance on inspection locations and times, it is possible to use inspection data to refine the system analyses.

The case study PLUS system is set up to enable quantitative microstructural damage assessments made during an inspection to refine the creep-fatigue damage assessment by PLUS. In setting up the monitoring system, assumptions were made regarding the position of a material in its property scatter band and the evaluation of the reference stress, i.e. system loads acting to increase or decrease the stress.

Since the creep life prediction algorithm can predict damage or strain evolution as well as final failure, the predictions can be compared with observed damage or strain measurements.

Any difference between the life fraction consumed as determined by the reference stress technique used by PLUS and that determined by off-line quantitative damage assessment will be due to materials properties and/or system stress uncertainty. Since rupture life is governed by the stress/strength ratio, it is not necessary to know whether this difference is due to stress or materials effects. A simple stress correction factor calculated from the observed differences can be applied to future PLUS calculations as a modification to the stress/strength ratio, thus scaling calculated lives to the observed damage accumulation rate.
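A minimal sketch of such a correction, under the simplifying assumption (ours, not ERA's published formulation) that the calculated life scales with the ratio of predicted to observed consumed-life fraction:

```python
# Illustrative life-fraction correction from off-line inspection results
# (our own simplified formulation, not the PLUS algorithm).

def corrected_rupture_time(t_r_predicted: float,
                           predicted_fraction: float,
                           observed_fraction: float) -> float:
    """Scale the predicted rupture time so that the calculated consumed-life
    fraction matches the fraction inferred from quantitative damage assessment."""
    scale = predicted_fraction / observed_fraction
    return t_r_predicted * scale   # shorter if inspection shows more damage

# Example: PLUS predicts 0.20 of life consumed; replica assessment indicates 0.25.
print(corrected_rupture_time(200_000.0, 0.20, 0.25))   # 160000.0 h
```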

4.7 Monitoring Crack Propagation

Where cracks are detected in a component, or are predicted by PLUS, crack propagation may be monitored by PLUS.

Crack monitoring utilises creep-fatigue crack propagation algorithms based on the linear summation of cyclic fatigue and creep damage. The cyclic component is obtained from fatigue crack growth laws utilising stress intensity factor (K) solutions for the defect geometry:

    (da/dN)_f = A (ΔK)^m

where A and m are materials properties, and the creep component is obtained from creep crack growth laws utilising C*:

    (da/dN)_c = ∫ (da/dt)_c dt,   with (da/dt)_c = k (C*)^φ

where k and φ are materials properties. The total crack growth per cycle is obtained from

    da/dN = (da/dN)_f + (da/dN)_c

In cases where a leak-before-break situation is predicted, monitoring for steam leaks using acoustic emission (AE) provides a practical, safe alternative to the above algorithm-based approach. In this case the acoustic sensors are interfaced with PLUS, allowing the system to raise alarms in the event of a leak. Leaks may be detected by AE significantly sooner than the effects can be observed by normal plant operating systems.
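The sketch below integrates the two growth laws cycle by cycle under assumed constants and a hypothetical ΔK dependence on crack depth; real K and C* solutions are geometry-specific, so this is an illustration of the summation only.

```python
# Illustrative linear summation of fatigue and creep crack growth per cycle
# (assumed constants; K and C* solutions are geometry-specific in practice).

def fatigue_growth(delta_K: float, A: float = 1e-11, m: float = 3.0) -> float:
    """Paris-type growth per cycle: da/dN = A * (delta_K)^m  [m/cycle]."""
    return A * delta_K ** m

def creep_growth(c_star: float, dwell_s: float, k: float = 5e-6,
                 phi: float = 0.85) -> float:
    """Creep crack extension over a dwell: integral of da/dt = k * (C*)^phi,
    with C* assumed constant over the dwell."""
    return k * c_star ** phi * dwell_s

a = 2.0e-3                                  # initial crack depth [m]
for _ in range(1000):                       # e.g. 1000 start/stop cycles
    delta_K = 15.0 * (a / 2.0e-3) ** 0.5    # hypothetical dK scaling with depth
    a += fatigue_growth(delta_K) + creep_growth(c_star=1e-6, dwell_s=8 * 3600)
print(f"crack depth after 1000 cycles: {a * 1e3:.2f} mm")
```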

5 Conclusion

Temperature surveillance and life monitors have been demonstrated to be a very effective means of
monitoring high temperature components subject to thermal cycling. Wide experience of potentially
catastrophic ligament cracking in headers has shown the damage to be attributable to normally
undetected thermal transients.

ERA has developed a PLUS system which provides real-time temperature monitoring and processing,
and periodic damage and life assessment. The system also enables off-line inspection results to be
used to refine the analysis.

The customisation process and life monitoring functions of PLUS have been illustrated by a case study
application. The PLUS system addressed in this paper is currently being delivered to the clients.

6 Acknowledgements

This paper is published with the permission of ERA Technology Ltd.

7 References

V A Annis, C M Jeffery and J M Brear


"On-line Creep-Fatigue Monitoring of Steam Headers."
IMechE Seminar 'Load Cycling, Plant Transients and Off-design Operation.' London, April 1991

C M Jeffery and G Jones,


"Software Requirements for On-Line Condition Assessment in Power Stations"
Proc. 11th Conf. 'Electrical Power Stations', Liege, Sept. 1993


R Viswanathan
"Life Assessment of High Temperature Components - Current concerns and research in the US"
ERA Report 93-0690, Conf. Proc 'Life Assessment of Industrial Components and Structures',
Cambridge, Sept 1993

EPRI Report
'An Integrated Approach to Life Assessment of Boiler Pressure Parts', EPRI Project RP 2253-10


Table 1
Summary of Inspections for Ligament Cracking in a European Utility

Header Type               Number Inspected   Number Cracked   % Cracked   Maximum Depth, % wall thickness
All Superheater Outlet    80                 33               41          100
  Primary                 9                  3                33          100
  Interstage              28                 8                29          100
  Secondary and Final     43                 22               51          69
All Superheater Inlet     32                 2                6           20
All Reheater              18                 0                0           N/A
All Headers               130                35               27          100

Table 2
Summary of Ligament Cracking Experience in the US

Header Type                    Number Inspected   Number Cracked   % Cracked
Secondary Superheater Outlet   157                44               28
  1 1/4 Cr                     73                 26               36
  2 1/4 Cr                     76                 17               22
  Op. T >= 1050 °F             14                 6                43
Reheater Outlet                118                2                2
All Others                     101                4                4
Fig. 1: Scope of PLUS (PLUS system shell structure). The UNIX outer shell links the display, the off-line information and the shared-memory logger/server with the application modules: BDE (boiler database editor), ODM (on-line data manager), RLA (remanent life assessment), TLA (tube life assessment), CDU (creep damage update), TSU (tube statistics update), PDU (periodic damage update), DDD (dump damage data) and SSC (steady state creep), together with the displays GDD (graphic damage display), HSD (historic stress display), TLD (tube life display), OSD (on-line surveillance display) and SFW (surveillance file write, simulator), for all monitored locations.

Fig. 2: 'Classic' ligament cracking in a boiler header at an advanced stage of development

Fig. 3: 'Starburst' ligament cracking at an isolated tube stub penetration inside a boiler header

Fig. 4: Susceptibility to ligament cracking as a function of the number of operating hours (all superheater outlet headers; secondary and final vs. primary and interstage)

Fig. 5: Susceptibility to ligament cracking as a function of the number of starts (all superheater outlet headers; secondary and final vs. primary and interstage)

Fig. 6: Finite element analysis route for the generation of stress functions during customisation of PLUS. Plant surveillance data are filtered and plotted to yield thermal boundary conditions and data for FE comparison, while FE models are generated and thermal transient analyses performed; plant and FE temperatures are then compared and the boundary conditions refined until an optimum correlation is achieved, after which stress analysis of the thermal transients allows the generation and verification of the stress functions.
Fig. 7: Comparison of FE thermal analysis results and thermocouple data, reheater outlet manifold body (temperature vs. time for the FE manifold, the manifold thermocouple and the estimated steam temperature)
Fig. 8: Example of FE thermal stress contours at a header crotch corner (ANSYS equivalent-stress plot of the reheater outlet manifold inlet stub model)

Fig. 9: Example stress functions generated for a critical header ligament (the numbers have been changed for confidentiality purposes). For the platen superheater outlet header (crotch corner position), the steam temperature is estimated as T_STEAM = (T3 + T4 + T66 + T67)/4, and the ligament stress at the tube and outlet stub intersections is expressed as a linear combination of an individual thermocouple reading (T8, T9 or T10) and T_STEAM with constants C1 to C4.

Fig. 10: PLUS System real-time display of monitored temperatures and ligament stresses derived from stress functions

Fig. 11: Creep-fatigue damage assessment route. Temperature, time and elastic stresses are read in and resolved into a sequence of tensile/compressive peaks and dwells. Fatigue branch: pair compressive and tensile peaks in descending order of size, determine elastic/plastic cyclic loops using the Neuber method, calculate the elastic/plastic strain range, and calculate the fatigue damage for each set of peaks. Creep branch: determine the relaxed stress during the peaks taking dwells into account, and calculate the creep damage using a ductility exhaustion method. Finally, the damage is summed.

CHAPTER 4

SPECIFIC APPLICATIONS

TECHNICAL AND ECONOMICAL FEASIBILITY STUDY OF THE ELECTRON BEAM PROCESS FOR SO2 AND NOx REMOVAL FROM COMBUSTION FLUE GASES IN BRAZIL

POLI, D. C. R.*; ZIMEK, Z. A.**; VIEIRA, J. M.*; RIVELLI, V.***

*INSTITUTO DE PESQUISAS ENERGÉTICAS E NUCLEARES - IPEN-CNEN/SP
Travessa "R", 400 - Cidade Universitária, 05508-900 - SP - Brasil
**INSTITUTE OF NUCLEAR CHEMISTRY AND TECHNOLOGY - POLAND
***COMPANHIA DE TECNOLOGIA DE SANEAMENTO AMBIENTAL - CETESB/SP

ABSTRACT

The release of toxic gases into the atmosphere, mainly because of acid rain, has been the object of much discussion all over the world, resulting in international research programs for the development of efficient flue gas cleaning techniques, mainly for SO2 and NOx, and in the setting of ever stricter emission limits. Among the flue gas treatment methods, the process of electron beam irradiation has proved promising. Under irradiation, these gases are simultaneously removed from the combustion gases. In the presence of ammonia, the byproducts of the process are ammonium sulfate and ammonium nitrate, which, after filtration, can be used as fertilizer. The process has been investigated in Japan, Germany, the USA and Poland. Data concerning the present state of the process are presented, along with the design and implementation of a laboratory pilot plant for the electron beam flue gas treatment process located at IPEN-CNEN/SP.

1. INTRODUCTION

Sulfur oxides are created and exhausted into the air when fossil fuels that contain sulfur (coal, oil and natural gas) are burned. Nitrogen oxides are formed when nitrogen and oxygen are burned with fossil fuels at high temperature. These oxides later form acids in the atmosphere and fall to earth as acid rain or snow. As a result, lakes and forests are being damaged in certain parts of Central Europe, China, the Northeastern United States and Eastern Canada. Some of the acid can be transported far away from industrialized zones and cross international borders to damage the environment in non-urban areas. Trees, crops and plants may be hurt. The acid rain also affects buildings and monuments, as can be seen in many European cities. These are the reasons why stricter control of SO2 and NOx emissions has become internationally recognized as a global problem and many countries have set limits for the discharge of pollutants, SO2 and NOx being listed among them (5).

In the past years, the use of fossil fuels with a high sulfur content in Brazilian industrial installations has grown, and estimates indicate that this growth will continue. Due to the environmental regulations enacted, the development of a technique able to remove these toxic gases has become essential.

Air pollution in Europe is particularly severe, and there is consequently a strong need for air pollution control technology to improve the situation. Poland, which produces energy mainly from pit and brown coal, is a big producer of these pollutants. Numbers regarding the NOx emission should be multiplied by a factor of 2.9, since nitrogen dioxide forms a much stronger acid and thus becomes more harmful to the environment (2).

1.1. CONVENTIONAL METHODS FOR SO2 AND NOx REMOVAL

Several FGD (Flue Gas Desulphurization) methods have been developed up to now. The methods can be divided into several categories: dry, wet and with sulfur recovery system.

Dry Scrubbers
LSD - Lime Spray Dryer
CFB - Lurgi Circulating Fluid Bed
FSI - Furnace Sorbent Injection
EI - Economizer Injection
DSI - Duct Sorbent Injection
DSD - Duct Spray Drying
ADV - Moist Dust Injection
LSFO - Limestone with Forced Oxidation

Wet Scrubbers
LSFO - Limestone with Forced Oxidation
LSWB - Limestone with Wallboard Gypsum
LSINH - Limestone with Inhibited Oxidation
LSDBA - Limestone with Dibasic Acids
PURE - Pure Air/Mitsubishi
MGL - Magnesium Enhanced Lime
LDA - Lime Dual Alkali
LSDA - Limestone Dual Alkali

Sulfur Recovery Systems
WLWN - Wellman-Lord
ISPRA - ISPRA Bromine
MgOx - Magnesium Oxide
LSFO - Limestone with Forced Oxidation

Dry and wet methods can also be applied for the reduction of NOx pollutants. SCR (selective catalytic reduction), precipitation on solids, catalytic decomposition on a solid electrolyte and reduction to N2 by NH3 are examples of dry scrubbing; absorption in liquid with reduction to NH4+ and absorption in liquid with oxidation to NO2-/NO3- are used in the wet methods.

The stricter control of NOx and SO2 pollutants, which is being enforced in many countries, has prompted the development of low-cost NOx/SOx control technologies as alternatives to the existing ones: SCR (Selective Catalytic Reduction) for NOx and FGD (Flue Gas Desulphurization) for SO2 control. The evaluation of nearly 70 processes has been carried out under an EPRI project to select the most promising technologies (9).

The recommended methods were selected by screening the technologies against the following criteria:
1. Development status (empirical experience, on-going development, commercial use);
2. Technical feasibility (probability that a commercially viable process can be developed);
3. Retrofitability (land requirements for process and waste disposal, use of existing equipment and required point of access to the flue gas stream);
4. Environmental risk (high volume waste, low volume waste, secondary gaseous emissions, potential risk due to process upset);
5. Process reliability (chemical and mechanical complexity, sensitivity to process conditions, corrosive environment);
6. Energy and resource requirements (quantity, reagent consumption rate, catalyst/sorbent consumption).
In addition to the EB technology, three other processes were selected:
- NOxSO (solid phase adsorbent with fluidized bed reactor)
- SNRB (SOx-NOx-Rox-Box)
- WSA-SNOx (wet scrubbing iron-chelate process)

a) THE NOxSO PROCESS

The NOxSO process is based on the use of solid phase adsorbents to remove SOx and NOx in a fluidized bed reactor. The adsorbent is removed from the reactor and regenerated in several processing steps. In the first stage, NOx is removed under controlled temperature treatment. The concentrated NOx stream from this stage is directed to the boiler, inhibiting the formation of additional NOx under thermodynamic equilibrium. In a second stage, a reducing gas (methane, carbon monoxide) is applied to produce a gas consisting of SOx, H2S and elemental sulphur, which can later be processed into a marketable sulphur byproduct. The adsorbent is returned to the reactor after a steam-treatment and cooling operation.

The advantage of the NOxSO method is the low process temperature (120 °C), which corresponds to the ESP outlet; the location downstream of the ESP also favours retrofit applications. On the other hand, the fluidized-bed reactor used to improve cost efficiency causes a high pressure drop of the flue gas. The demonstration program includes the NOxSO Corporation, the DOE, the Ohio Edison Company, EPRI and other organizations.

b) THE SNRB PROCESS

In this process a lime or sodium reagent is injected into the flue gas duct, wherein ammonia is also injected. The alkaline reagent reacts with SOx in the duct and on hot filter bags. An SCR catalyst is located on or within the bags to reduce the NOx in the presence of the ammonia and form elemental nitrogen. The development of filter bags with an SCR catalyst suitable for high temperature operation will demonstrate the capability of this heat recovery process. Babcock & Wilcox, the DOE, EPRI and others are engaged in this dry injection technique development program.

c) THE WSA-SNOx PROCESS

The wet scrubbing iron-chelate process can easily be adopted to retrofit plants where an FGD system has already been installed. The iron-chelate additives react with NOx in the wet scrubbing process to form compounds that include sulphur-nitrogen species.

In comparison with the FGD process, a longer gas/liquid contact time or a higher flue gas pressure drop may be required for appropriate NOx removal. The iron-chelate oxidation and the stream of waste, which has to be additionally treated before disposal, may create technical problems and significantly increase the cost of the process.

2. PRINCIPLE OF THE EB PROCESS

The research on flue gas treatment by radiation was initiated by the Ebara Corp. in 1970. Fundamental work and pilot scale experiments have since been performed in Japan, the USA, Germany, Poland and other countries. It was found from basic and experimental work that EB technology for flue gas treatment has the following advantages (5):

- Simultaneous removal of SO2 and NOx
- Dry process without wastewater
- Byproduct can be used as fertilizer
- No need for a catalyst
- Low capital and operating costs compared with conventional methods.

The process is based on three stages. In the first, the flue gas is irradiated, leading to the formation of radicals such as OH, O and HO2. In the second stage, SO2 and NOx are oxidized to H2SO4 and HNO3 in the presence of water through a number of concurrent chemical reactions. In the third stage, the intermediate products react with ammonia to form ammonium sulfate and ammonium nitrate. Ammonia in a near-stoichiometric quantity is injected prior to the entrance of the flue gas into the process vessel. The dry, powdery ammonium salts are collected by the filtering units (ESP or bag filters) and can be used as agricultural fertilizers (4).

The process can be used for the treatment of gases from coal- and oil-fired power stations, industrial boilers, furnaces and municipal solid waste incinerators.

Retrofitting of existing facilities to reduce SO2 and NOx concentrations is also possible, owing to the low space requirement and the location between the ESP and the stack, where more space is available.

In practical installations, 95% SO2 and 85% NOx removal efficiency can be obtained. The main components of the facility are the spray cooler, the process vessel, the accelerator and the byproduct collector, which can be fully automated, making the process easy to operate.

2.1 - PROCESS MECHANISM

When high energy electrons are applied for flue gas irradiation, radicals and free atoms
are generated. The interaction of these electrons and flue gas molecules results in ionizing and
dissociation. The fraction of energy absorbed by each gas component is proportional to its
partial pressure. Principal reactions in primary processes can be schematicaly represented by
(4):

N2 --> N2+ (2.27); e- (2.68); N+ (0.69); N (3.05); N2* (0.29)

O2 --> O2+ (2.07); e- (3.30); O+ (1.23); O (1.41); O2* (1.90)

H2O --> H2O+ (2.56); e- (3.23); H (4.07); OH (4.17); O (0.45)

CO2 --> CO2+ (2.24); e- (2.96); CO+ (0.51); O+ (0.21); O (0.38)

where the numbers in parentheses represent the G values of the species; G is the
number of molecules produced per 100 eV of energy absorbed in the system. This is the first
stage of the process.

During the second stage, radicals and atoms containing oxygen react with SO2 and
NOx to form, in the presence of water, sulphuric and nitric acids. There is also the ion-
molecule reaction mechanism for the decay of the primary species. Low concentration
components have to compete with the primary radical decay processes. More than 760 reactions
were listed in the AGATE code to describe the processes involved. Some reactions from the
secondary stage, in which SO2 and NOx are involved, are listed below (4):

SO2 + HO2 --> SO3 + OH

SO2 + OH --> HSO3

SO2 + O --> SO3

SO3 + H2O --> H2SO4

NO + OH --> HNO2

NO2 + O3 --> NO3 + O2

NO2 + HO2 --> HNO2 + O2

NO2 + OH --> HNO3

According to JAERI and KFK tests, more than 20% of the NO is converted into free N2
released in the EB process in the presence of ammonia. The last stage is the product
formation. Finally, the gas conversion process is completed by the reaction of the sulphuric and
nitric acids, in the presence of water, with a stoichiometric amount of ammonia. These acids are
converted into ammonium sulphate and ammonium nitrate and are collected by a filtering
system (4).

The efficiency of the EB process was determined in many experimental facilities in order
to optimize the process conditions. The latest data show that 95% SO2 removal efficiency can be
obtained at a 5 kGy dose, provided the water content and the thermal reaction conditions are
properly optimized. Multistage irradiation can significantly improve the NOx removal: a 7 kGy
dose with two-stage irradiation, or a 6 kGy dose with three-stage irradiation, is required for 80%
removal efficiency (5).

2.2. ELECTRON BEAM FACILITY FOR FLUE GAS TREATMENT

The first experimental facility for the EB process applied to flue gas treatment was built by
the Ebara Corp. in Japan. Batch tests were carried out in the 1970-71 period. The
experiments proved that SO2 and NOx can be removed from irradiated flue gas as a result of
radiation-induced chemical reactions. Subsequent development of the process has been continued
by Ebara, JAERI, the University of Tokyo and NKK in Japan; Ebara, Research-Cottrell, the
Department of Energy and the Electric Power Research Institute in the USA; the University of
Karlsruhe, KFK and Badenwerk in Germany; and the Institute of Nuclear Chemistry and
Technology and the Warsaw Power Station in Poland (5).

The EB process is now being applied to the removal of other kinds of gas pollutants. The
results obtained from experimental work already performed proved the capability of the process
for municipal waste incinerator gas, traffic tunnel ventilation gas and various VOC pollutants in
the gas phase (3,7). In order to demonstrate the capability of the EB process, four pilot plant
demonstration facilities are now in use in Poland and Japan. They are based on the
Ebara process, in which ammonia is injected before the process vessel where the flue gas is
irradiated (5).

Table 1 shows the parameters of the pilot plants for flue gas treatment which
have been installed since 1991 and are now being used to demonstrate the capability of the EB
technology for commercial use (5).

In 1991, a 3-year, 14.3 million USD project was initiated in Japan by the Ebara Corp.
together with the Japan Atomic Energy Research Institute (JAERI, Takasaki) and the Chubu
Electric Power Company (Nagoya). The main objectives of the research carried out at this
pilot plant are as follows:

- To recognize the quantitative characteristics of the process;
- To test multistage irradiation;
- To optimize the collecting (ESP/bag house) and byproduct handling systems;
- To study and evaluate the commercial characteristics of the process;
- To evaluate the reliability of the process during long period operation;
- To improve necessary areas of the facility.
TABLE 1. The major parameters of the pilot/demonstration plants for
the flue gas treatment which have been installed since 1991.

INSTITUTION               | YEAR OF | VOLUME FLOW  | SO2/NOx (ppm)      | TEMP (C) | ACCELERATOR
                          | INSTAL. | RATE (Nm3/h) |                    |          |
INCT/KAWENCZYN, POLAND    | 1991    | 20.000       | 200-600/250        | 60-80    | 500-700 keV, 2 x 50 kW
EBARA/JAERI, JAPAN        | 1992    | 12.000       | 800-1000/150-300   | 65       | 800 keV, 3 x 36 kW
EBARA/TOKYO, JAPAN        | 1992    | 50.000       | - /0-5             | 20       | 500 keV, 2 x 12.5 kW
NKK/JAERI, MATSUDO, JAPAN | 1992    | 1000         | 100/150, HCl=1000  | 350-400  | 100 keV, 15 kW
1
To confirm the capability of the EB method for gas with low NOx content, a Tokyo plant was
built by the Ebara Corp. and the Tokyo Metropolitan Government to treat ventilation
exhaust gases from a highway at the Tokyo Bay Tunnel. The facility was finished in June
1992. The main parameters of the pilot plant are shown in Table 1. 50.000 Nm3/h of gas from
the ventilation exhauster is introduced into the irradiation vessel for EB treatment in the
presence of ammonia. As a result, NOx is converted into powdery ammonium nitrate products.
Activated carbon is used to remove the ozone formed by the irradiation. An 80% target removal
efficiency is being obtained at a 3 ppm NOx level at the inlet.

To evaluate the EB process applied to the flue gas from municipal waste
incinerators, a pilot plant was built by NKK, JAERI and the Matsudo City Government Clean
Center. The plant was completed in June 1992. The main parameters of the plant are shown in
Table 1. The targets of the removal efficiencies are as follows:

- NOx: 100 ppm --> < 50 ppm
- SO2: 100 ppm --> < 10 ppm
- HCl: 1000 ppm --> < 10 ppm


The irradiation is carried out while a slurry of calcium hydroxide is sprayed at a
temperature higher than 150°C. A bag filter is used to collect the powdery products (a mixture of
calcium nitrate, sulfate and chloride) formed by the irradiation. During the process, HCl and
SO2 are removed by the sprayed Ca(OH)2 slurry, while NOx is effectively removed by the EB
irradiation (6).

The Polish pilot plant, with a 20.000 Nm3/h capacity, has been built at EPS
Kawenczyn in Warsaw. The installation was constructed on a bypass of the main flue gas
stream, with a total flow of 260.000 Nm3/h, from the WP-120 boiler (nominal heat output
120 Gcal/h, efficiency 84%, coal consumption 26-32 t/h). The black coal used contains 1.2%
sulphur and 18% ash and has a calorific value of 4700 kcal/kg.

The Polish pilot plant is the first installation in which two stage irradiation by electron
beam was applied, resulting in a significant decrease of the energy consumption. The other
novelties of this construction are connected with the process vessel, in which the irradiation zones
are located along the flue gas flow, and with a double window construction with
perpendicular streams of air for cooling the output windows of the accelerators and the inlet
windows of the process vessel.

The main objectives of the research carried out at the pilot plant are (2):
- Testing of all parts of the installation under industrial conditions;
- Optimization of the process parameters, leading to a reduction of the energy
consumption with high SO2 and NOx removal efficiency;
- Selection and testing of filter devices and the filtration process;
- Development of the monitoring and control systems for an industrial flue
gas cleaning plant;
- Preparation of the design for an industrial scale facility.

2.3. PRESENT STATUS OF ELECTRON BEAM PROCESS

The EB process applied to flue gas treatment is suitable for full scale commercial
application, as determined by basic experiments and the operation of pilot plant facilities. It
is a dry process with a usable byproduct which can offset the operating and investment costs.
The EB technology was recognized as flexible and adaptable, with excellent turndown ratios.
The process can be easily controlled for different removal efficiencies and adjusted for the
utilization of different fuels. The major conclusions regarding the EB process for flue gas
treatment are as follows:

- More than 95% of the SO2 and 85% of the NOx can be simultaneously removed
from the flue gas under optimal operating conditions;
- Ammonia should be injected into the process in a near stoichiometric amount;
upstream injection was found to be more efficient;
- SO2 removal efficiency depends on the injection temperature, the filter
condition and the EB dose;
- The quantity of SO2 removed by EB is relatively independent of the inlet
SO2 concentration;
- NOx removal occurs almost entirely under EB application and depends
strongly on the dose; gas temperature and ammonia stoichiometry are second order
effects;
- NOx removal efficiency increases as the inlet SO2 concentration increases,
as a result of the formation of nitro-sulphuric compounds;
- 5 kGy is required for 95% SO2 removal efficiency, and 7 kGy is required for
80% NOx removal efficiency in a two stage irradiation facility under optimal conditions;
- Good reliability in long time operation was demonstrated in pilot plant
facilities;
- The byproduct collected during the process consists of ammonium sulfate and
ammonium nitrate, which can be effectively used as a fertilizer; the small amount of
contaminants does not affect the quality of the product;
- No waste water is produced in the process;
- The relatively low capital investment and operating cost of an EB process facility
rate this method as equivalent or preferable in comparison with FGD/SCR;
- The low space requirements are a significant advantage in retrofit
installations.
To complete the present data on the EB process, intensive experiments are being carried out in
Japan, Poland and Germany. The most interesting subjects are listed below:

- Experimental study of the quantitative characteristics of the process at the pilot
plant level;
- Design study and evaluation of the commercial characteristics of the process;
- Experimental study to apply this method to other kinds of gases treated by
radiation;
- Experimental study of wet and dry ESPs, baghouses and gravel bed filters to optimize
the byproduct collecting system;
- Optimization of the spray cooler construction to obtain a dry bottom and a
reduction of the power consumption;
- Optimization of the systems preventing or removing byproduct duct clogging;
- Investigation of the duct configuration (rectangular, cylindrical) and of the gas
velocity in the duct and process vessel;
- Multistage irradiation (two and three zones);
- Ammonia slip and ammonia injection (location, quantity);
- Byproduct handling studies (granulation, liquid, storage, fertilizer tests).

The Electron Beam process for flue gas treatment could be used beneficially in the
future. The experimental studies described above improve the technology and promote it for future
applications (2).

3. EQUIPMENT SPECIFICATION

BOILER - Oil or coal fired, producing thermal or electrical energy.
ESP - Electrostatic precipitator reducing the fly ash content downstream of the boiler.
HEAT EXCHANGER - To reduce the inlet or increase the outlet gas temperature by an
additional stream of air or water.
SPRAY COOLER - Vertically installed downstream of the boiler and ESP; used to increase
the water content of the flue gas and decrease its temperature by complete evaporation of the
injected water.
AMMONIA INJECTION - To keep a stoichiometric quantity of NH3 in the flue gas stream.
PROCESS VESSEL - Horizontally mounted, with multistage irradiation capability.
ACCELERATOR - To initiate the radiation chemical process of the flue gas treatment.
ANALYTICAL AND CONTROL SYSTEM - To keep automatic control over the process.
COLLECTOR - Baghouse / ESP / gravel bed filter to collect the byproduct.
BYPRODUCT HANDLING SYSTEM - To prepare powder, granulate or wet forms of the
byproduct.
INDUCED DRAFT FAN - To overcome the pressure drop in the ducts and in the byproduct
collector.

3.1. GENERAL ARRANGEMENT OF THE TECHNOLOGICAL PROCESS

Flue gas generated by coal fired boilers enters the EB process after the ESP, where
the ash content is reduced in order to improve the quality of the fertilizer byproduct. No such
filter is foreseen after an oil-fired boiler. The initial concentration of SO2 depends on the
sulphur content of the applied fuel. The NOx concentration depends on the combustion process
temperature and is different for different burners and boiler constructions.

A heat exchanger is usually used to reduce the gas temperature to the 150-250°C level
in the initial cooling stage. The flue gas then enters the spray cooler, where the temperature is
reduced to 65-80°C by atomized water injection. A dry bottom principle is usually applied to
operate the spray cooler facility, eliminating a residual wastewater stream. The water is totally
evaporated by heat exchange with the hot flue gas, since the dew point of the gas is
approximately 50°C. The water content of the flue gas should be increased to 8-12% in this
stage.

Ammonia in stoichiometric quantity is injected before the flue gas enters the process
vessel, where it is irradiated by the electron beam to promote the reaction of the ammonia with
the flue gas. The beam interacts with nitrogen, oxygen, water and other substances in the flue gas
to produce active free radicals such as OH, O and HO2. As a result, SO2 and NOx are converted
to sulphuric and nitric acids, finally forming a byproduct consisting of ammonium sulfate and
ammonium nitrate (6).

The ammonium sulfate and the ammonium nitrate are collected by electrostatic
precipitator or bag filters and the cleaned flue gas is released through the fan into the stack.

3.2. MAJOR EQUIPMENT

3.2.1. ACCELERATORS

The present estimate of the required dose level for an efficient NOx removal (80%)
shows that the radiation dose should be in the range of 10 kGy for low sulphur content coals.
Multistage irradiation can reduce this figure to 7 kGy. It should be remembered that 95%
SO2 removal can be obtained with a 5 kGy dose. A significant improvement in NOx removal
can be achieved when high sulphur coal is applied. If it is assumed that the gas absorbs 85% of the
total beam energy, then a 1 MW accelerator facility will be sufficient for a 100 MW generator
with the dose range described above.

The required beam power level is significantly higher than that of the accelerators utilized
for industrial beam processing, but there are technical prospects of building accelerators with a
200-500 kW unit power, which would sharply reduce the number of accelerators in industrial
facilities and their cost.

According to accelerator producers, the cost of high power 800 keV machines is in the
range of 5 US$/W at present. New developments in progress in the USA
(induction linac) give some prospect of reducing the cost level by a factor of 2.

Many factors should be considered when specifying the location of the
accelerator/scanner relative to the process vessel. The most important are dose uniformity,
cost and easy access for maintenance. The best position of the scanner was found to be at the
top of the process vessel, with the irradiation zones along the gas stream flow. Multistage
irradiation is recommended to increase the process efficiency (10).

Locating the process vessel horizontally at underground level can reduce the shielding
costs and allows easy access to, and exchange of, certain components of the scanner/process
vessel systems.

Table 2 shows the basic electron beam parameters which have been applied in
laboratory and pilot plant facilities for flue gas treatment.

Table 3 shows the producers and accelerators which are suitable for flue gas treatment
at capacities of 10.000 - 20.000 Nm3/h (10).

3.2.2. FILTERS, BYPRODUCT HANDLING

The process of particle formation and filtration has been intensively investigated
in recent years. The mass median aerodynamic diameter of the product aerosol is around
1 µm, depending on the dose and the flue gas parameters (4).

A baghouse was initially selected as the byproduct collector. A precoating
system was used to protect the bag surface from direct contact with the hygroscopic byproduct.
To avoid degrading the byproduct properties, a neutral precoating material such as diatomaceous
earth can be used.

TABLE 2. The basic parameters of the electron accelerators applied in
facilities for flue gas treatment.

TYPE OF FACILITY        | ENERGY (MeV) | BEAM POWER (kW) | TYPE OF ACCELERATOR | REMARKS
LABORATORY FACILITY     | 12           | 1.2             | Linear              | Ebara, Japan
(< 1000 Nm3/h)          | 3            | 15              | Cockcroft-Walton    | JAERI, Japan
                        | 1.2          | 1.2             | Dynamitron          | Tokyo Univ.
                        | 1.5          | 30              |                     | JAERI, Japan
                        | 0.22         | 22              | Transformer         | Karlsruhe, Germany
                        | 0.3          | 3.6             | Transformer         | KFK, Germany
                        | 0.7          | 5               | Resonance           | INCT, Poland
PILOT/DEMONSTRATION     | 0.75         | 30              |                     | Ebara, Japan
FACILITY                | 0.75         | 2 x 45          |                     | Ebara, Japan
(1000 - 20.000 Nm3/h)   | 0.8          | 2 x 40          |                     | Research-Cottrell, USA
                        | 0.8          | 2 x 80          |                     | Ebara, USA
                        | 0.3          | 2 x 90          | Electrocurtain      | Badenwerk, Germany
                        | 0.5          | 15              | Cockcroft-Walton    | KFK, Germany
                        | 0.5          | 15              | Cockcroft-Walton    | Ebara, Japan
                        | 0.7          | 2 x 50          | Transformer         | INCT, Poland
                        | 0.8          | 3 x 36          | Cockcroft-Walton    | Ebara, Japan
                        | 0.5          | 2 x 12.5        |                     | Ebara, Japan
INDUSTRIAL PLANT        | 0.8          | 8 x 50          | Transformer         |
(300.000 Nm3/h)         | 1.0          | 4 x 400         | Induction linear    |

To remove the byproduct deposits from the bag filter and reduce the baghouse pressure
drop, several methods can be applied:
- Pulse jet cleaning;
- Reverse flow cleaning;
- Mechanical shaking.

Acrylic and Teflon coated bags are the best in this application. It was found that other
methods can also be effectively used in the collection process: wet and dry ESPs and gravel bed
filters are being used to optimize the byproduct collecting system.

ESP and baghouse can be installed in series to increase the efficiency of the byproduct
collection, but at a significantly higher cost of installation.

The usable byproduct is one of the major features of the EB process for flue gas treatment.
The concentration of ammonium sulfate and ammonium nitrate depends on the fuel
composition, but its quality was estimated at 75% of that of the regular product. The sale of this
byproduct can be used to offset the cost of the ammonia applied in the process. Such
sales can significantly decrease the operating costs.
TABLE 3. The basic parameters of the electron accelerators offered by the
different producers for flue gas treatment in the capacity 10.000 - 20.000 Nm3/h.

TYPE OF ACCELERATOR       | PRODUCER                           | ELECTRON ENERGY (keV) | BEAM CURRENT (mA) | OUTPUT WINDOW (mm)
Dynamitron                | Radiation Dynamics, USA/Japan      | 500/600               | 200               | 1830
ESI 0.3/90 Electrocurtain | Energy Sciences Corp., USA/Japan   | 300                   | 300               | 1400
ELW-3A Transformer        | Inst. of Nucl. Phys., Russia/Japan | 500/700               | 100               | 1500
UW-075-2-2-W Transformer  | NIIEFA, Russia                     | 750                   | 2 x 60            | 2000
EPS-500 Cascade           | Nissin High Volt., Japan           | 500                   | 80                | 1600
ESH Transformer           | Polymer Physics, Germany           | 280                   | 220               | 700

Ammonium nitrate is the basic fertilizer for many plants. Ammonium sulfate is
applied directly to certain sulphur-depleting agricultural crops like corn and cotton. The
combination of these two compounds provides a suitable quality material for direct
application.

Ammonium sulfate is required by sulphur deficient lands, generally located in the more
arid regions of the world. Existing ammonium sulfate sources do not meet market needs. This
shortage translates into an excellent opportunity to sell the EB process byproduct at an attractive
price. Usually ammonium sulfate is a component of the final commercial NPK fertilizer
product.

An alternative application of the EB process byproduct is under consideration.
Enriching various organic materials like sludge or municipal waste compost with a
byproduct addition may improve the nitrogen content, adjust the properties of the
mixture and effectively and economically replace the chemical fertilizer.

Depending on the coal sulphur content and the level of nitrogen oxides in the flue gas,
the nitrogen content of the byproduct mixture will be between 20-30%. For facilities
using 2.5% sulphur coal, the byproduct production can be estimated at 800 kg/day/MWe,
with a nitrogen content of approximately 25%. The flyash is one of the significant components
of the byproduct. Usually it is efficiently removed by the ESP located before the process
vessel. Presently, the flyash is not recognized as a hazardous waste material, but a high
flyash content in the byproduct decreases the nitrogen content and increases the distribution
and application costs per nitrogen unit (4).

Some traces of heavy metals are present in the flyash. Table 4 shows a record for two
different byproduct samples collected at the installation operated by
Badenwerk, Karlsruhe, Germany. Product A was a byproduct mixed with flyash, while
product B is a pure EB process byproduct having the characteristics of a nitrogenous fertilizer,
with properties and fertilizing utility similar to those of ammonium sulfate. Usually the
amounts of trace metals in the byproduct can be controlled at levels equal to or lower than those
found in commercial fertilizers. Typically no more than 10%, by weight, of flyash in the
byproduct is accepted. This level of pre-removal can be easily obtained by the use of relatively
low efficiency collectors.

TABLE 4. Composition and chemical properties of tested products.

                          | "A" Product | "B" Product
N total (%)               | 4.45        | 19.50
N-NH4 (%)                 | 4.16        | 19.40
N-NO3 (%)                 | 0.75        | 0.74
P2O5 (%)                  | 1.12        | 0.21
K2O (%)                   | 1.21        | 0.07
CaO (%)                   | 3.95        | 0.50
MgO (%)                   | 2.74        | 0.46
Na2O (%)                  | 0.57        | 0.04
Cl (%)                    | 2.90        | 1.40
S total (%)               | 3.95        | 25.50
S-SO4 (%)                 | 3.34        | 24.50
pH                        | 7.35        | 4.50
dry mass (%), including:  | 98.20       | 99.50
R2O3 (%)                  | 16.10       | 0.53
Fe2O3 (%)                 | 1.27        | 0.11
SiO2 (%)                  | 43.31       | 0.89
ash (%)                   | 77.89       | 2.51
Content of heavy metals (ppm):
Mn                        | 160         | 60.0
Zn                        | 60          | 254.0
Cu                        | 38          | 3.0
Pb                        | 26          | 26.0
Cd                        | 4           | 3.0
Cr                        | 10          | 3.6

3.3. COST ESTIMATE

The costs, including the capital investment cost, the operating and maintenance cost and
the byproduct credit, should be taken into account to evaluate the EB process from the economic
point of view. For a 100 MWe power plant, 1000 kW of electron beam power should be applied to
achieve 90% SO2 removal efficiency and 80% NOx removal efficiency at a 7 kGy dose.
The present status of accelerator development allows 500 kW units to be built at a cost rate of
2-5 US$/W of beam power, depending on the accelerator construction and its producer. Table
5 shows the capital cost estimate as a function of the accelerator cost. Up to 25% of
the capital cost is spent on the accelerators, which is slightly less than the typical cost of the
construction work (buildings, ducts) (4).

According to an Ebara estimate for a 100 MW plant burning 2% sulfur coal, with a
92% SO2 removal rate and a 60% NOx removal rate, the performance and economic
parameters listed below can be achieved:

- Power consumption: 2.6 MW
- Ammonia requirements: 1500 kg/h
- Inert earth: 100 kg/h
- Fertilizer byproduct: 600 kg/h
- SO2 reduction: 1400 --> 112 ppm
- NOx reduction: 400 --> 160 ppm
- Flue gas flow rate: 300.000 Nm3/h
- Total capital cost: 19.300.000 US$
- Process cost: 193 US$/kW
- Operating personnel: 3 per 24 h
- Annual maintenance cost: 200.000 US$
- Annual operating cost: 580.000 US$

It was recognized that the byproduct has 75% of the value of a commercial fertilizer,
which meant 51 US$/t in 1990.

TABLE 5. Estimate of the capital cost of an EB facility for flue gas treatment
depending on the cost of the accelerator.

Accelerator cost     | Investment cost | Multistage irradiation
(USD/W beam power)   | (USD/kWe)       | investment cost (USD/kWe)
0,75                 |                 |
2                    | 225             | 169
5                    | 350             | 262

3.4. LABORATORY INSTALLATION

Batch type laboratory units and flow systems have been built in Japan, Germany,
Poland and some other countries to investigate the experimental characteristics of the EB flue
gas treatment process (2).

BATCH TYPE facilities can be easily adapted to local experimental conditions. This
type of laboratory unit was used by Ebara during the first tests performed in 1970-71 to
establish the radiation-induced chemical reactions responsible for SO2 and NOx removal from
the flue gas.

FLOW SYSTEM incorporates a flue gas stream rate lower than 1000 Nm3/h,
generated by oil or city gas burners. The gas flow can also be arranged by the use of pressure
tanks containing NO, SO2, O2 and N2 at a moderate flow rate. An additional amount of water
should be incorporated to keep an adequate water content. Flue gas generated by both oil and
gas burners needs the additional injection of SO2 and NOx to meet appropriate experimental
conditions. The choice between an OIL BURNER, a CITY GAS BURNER or a GAS MIXING
DEVICE depends mainly on financial conditions or on the possibility of adapting existing
facilities. The highest flow rate can be obtained in a system equipped with a boiler.

ANALYTICAL EQUIPMENT should allow the measurement of a number of process
parameters:
- Inlet and outlet SO2, NOx, O3, H2O and NH3 concentrations;
- Dose rate;
- NH3, SO2 and NOx injection flow rates;
- Flue gas flow rate;
- Temperature at determined points of the facility;
- Aerosol parameters.

ACCELERATOR is used to provide the stream of electrons applied in the
process. The electron beam parameters are not critical in laboratory installations. In the
laboratory installations which have been used to investigate the EB process, the electron
energy ranges from 0.22 to 12 MeV and the beam power from 1.2 to 30 kW.

PROCESS VESSEL should withstand long time irradiation at an appropriate temperature
according to the nature of the experimental conditions. Stainless steel and other corrosion
resistant materials are preferable. Thermal insulation and an additional heating system can be
used to stabilize the experimental conditions.

HEATING EQUIPMENT is required to provide proper temperature conditions for the
process vessel and the analysis gas paths. Process vessel temperatures of 60-100°C are used for
various experiments. The temperature of the gas paths is recommended by the analytical
instrument producers and is usually in the 150°C range.

RETENTION CHAMBER located downstream of the process vessel is sometimes
used to stimulate the product formation.

PREFILTER is sometimes used after the burner to stop particles coming from the
combustion process.

HEAT EXCHANGER is sometimes used before the process vessel to control the
temperature of the EB process.

SPRAY COOLER is used for the water injection from an air-assisted manifold of
spray nozzles. The quantity of water injected is controlled so as to lower the temperature of the
flue gas by the evaporation process and increase its relative humidity.

AMMONIA INJECTION is supplied from the pressure tank after conversion from
liquid to gas phase. The amount of injected ammonia should be carefully controlled according
to experimental requirements. The injection point is usually located before the process vessel.

COLLECTOR is used to collect the final product. Bag filters
and/or an ESP may be applied.

FAN located before the stack is necessary to keep a proper flow rate of the gas through
the process vessel and the collector of the product.

STACK and duct line are used to extract the flue gas out of the building. Corrosion
effects and deposition of the byproduct may occur when filter collector units are not
applied.

A laboratory pilot plant has been built at IPEN-CNEN/SP, using an electron beam
accelerator from Radiation Dynamics Inc. with the following parameters (8):

- Electron energy: 0.5 - 1.5 MeV
- Beam current: up to 25 mA
- Scan length: 0.6 - 1.2 m
- Scan frequency: 100 Hz

The irradiation device allows a four-turn irradiation and has already been used for
dosimetric studies (1). The gas flow rate will be 25 l/min and a synthetic mixture of SO2 and
NOx will be used in the preliminary studies. The carrier gas will be normal cooking gas, burned
in a proper burner. NH3 will also be injected and the fertilizer will be collected in a bag
filter. Several points will allow the measurement and control of the gas rate, temperature and
humidity, and also the analysis of the gases to calculate the efficiency of their removal.

4. REFERENCES

(1) CAMPOS, C.A.; PEREZ, H.E.B.; VIEIRA, J.M.; POLI, D.C.R.; SOMESSARI, S.L.;
ALBANO, G.D.C. Desenvolvimento de um sistema calorimétrico para dosimetria de
gases, em fluxo contínuo, irradiados com feixe de elétrons. Anais do V Congresso Geral de
Energia Nuclear, 2, 659-661, Rio de Janeiro-BR, 1994.

(2) CHMIELEWSKI, A.G.; ILLER, E.; ZIMEK, Z.A.; LICKI, J. Laboratory and
industrial research installations for electron beam flue gas treatment. Proceedings of an
International Symposium on Isotopes and Radiation in Conservation of the Environment,
Karlsruhe, 9-13 March 1992, IAEA-SM-325/124, p. 81, 1992.

(3) DOI, T.; OSADA, Y.; MORISHIGE, A.; TOKUNAGA, O.; MIYATA, T.; HIROTA, K.;
NAKAMA, M.; MIYAJIMA, K.; BABA, S. Pilot plant for NOx, SO2 and HCl removal
from flue gas of municipal waste incinerator by electron beam irradiation. Radiat. Phys.
Chem., 42, 679-682, 1993.

(4) EBARA ENVIRONMENTAL CORPORATION. Final report for testing conducted on
the Ebara flue gas treatment system process demonstration unit at Indianapolis, Indiana.
Greensburg, PA, 1988.

(5) INTERNATIONAL ATOMIC ENERGY AGENCY. Electron beam processing of
combustion flue gases. Vienna, IAEA-TECDOC-428, 1987.

(6) NAMBA, H.; TOKUNAGA, O.; SATO, S.; KATO, Y.; TANAKA, T.; OGURA, Y.;
AOKI, S.; SUZUKI, R. Electron beam treatment of coal-fired flue gas. Proceedings of the
Third International Symposium on Advanced Nuclear Energy Research, JAERI, Tokyo,
Japan, 118-122, INIS-JP-005, 1991.

(7) PAUR, H.R.; MAETZING, H. Electron beam induced purification of dilute off gases
from industrial processes and automobile tunnels. Radiat. Phys. Chem., 42, 719-722, 1993.

(8) POLI, D.C.R.; VIEIRA, J.M.; RIVELLI, V.; LAROCA, M.A.M. Estudo sobre o
tratamento de gases tóxicos SO2 e NOx provenientes de combustão de óleo ou carvão
por aceleradores de elétrons. Anais do VI Congresso Brasileiro de Energia, 3, 965-970, Rio
de Janeiro-BR, 1993.

(9) PRIEST, W.; JARVIS, J.B.; CICHANOWICZ, J.E.; DENE, C.E. Engineering
evaluation of combined NOx/SOx removal processes: second interim report. Control
Symposium, May 8-11, 1990, New Orleans, Louisiana, USA.

(10) ZIMEK, Z.A.; SALIMOV, R.A. Windowless output for high power - low energy
electron accelerators. Radiat. Phys. Chem., 40, 317-320, 1992.

IMPLEMENTATION OF AN INTEGRATED ENVIRONMENT FOR ADAPTIVE NEURAL CONTROL

Mauro Cezar Klinguelfus


LAC - COPEL/UFPR
mauro@lac.copel.br

Marcelo Stemmer
UFSC

Daniel Pagano
UFSC

ABSTRACT

This paper describes a neural net based environment designed to perform process control,
which is adequate for any process complexity. The environment includes all the necessary
resources for neural net training and real time control of the plant with different sampling rates.
The environment also supports the on-line training of a second neural controller in parallel to the
control task itself, enabling adaptive behavior.

KEY WORDS

Neural Nets - Adaptive Control - Voltage Regulators - Speed Control.

1 - INTRODUCTION

The usage of neural nets in control systems appears as a very interesting alternative, mainly
when the plant is difficult to model or has parameters that change with time. Under these
conditions, the main difficulty in the design of classical PID-based controllers is the
mathematical modeling of the problem, which is not always trivial.

Among the aspects that motivate the use of neural nets in control, we can mention the
following ones:

Non algorithmic methodology: it is a quite intuitive method, where the pragmatic
knowledge of an expert about the problem is more important than a previous knowledge of
mathematical models;

Ability of learning: a neural net learns through examples and is able to represent a certain
behavior. In these examples, the behavior that should be learned is presented to the net in the
form of input / output relations;

Simplicity: the structure of a neural net is relatively simple to understand from the user's
standpoint;

Generalization: a neural net is able to respond to patterns on which it has not been
trained. In order to achieve this, an adequate choice of training examples that cover the
range of interest is required;

Fast response: once trained, the response of an ANN is very fast and can be used even for
real time applications;

Universal approximator: with an ANN it is possible to represent any mathematical
function;

Natural noise elimination: this capability is due to the constructive characteristics of the net
itself;

Minimal knowledge of the process: neural controllers require a minimal knowledge about
the mathematical model of the process. Intrinsic characteristics of the process are
automatically considered;

Seen under these aspects, neural nets facilitate the controller design task, since it is not
based upon the classical ways to develop controllers but, instead, on a training process,
which is started before the realization of the controller itself. For this purpose we only need to
know the input / output relations of the process to be controlled; that means, we need previous
knowledge of some values that define a set of inputs and their respective outputs (this is necessary
for the so called supervised learning). Each pair of inputs / outputs corresponds to a training pair
or test vector, and the set of test vectors corresponds to a test pattern. The amount of test vectors
needed in each pattern is proportional to the complexity of the process to be controlled. Practical
experience has shown that this amount should be around 20 to 60 vectors.

The inspiration for the creation of the so called Artificial Neural Networks (or ANNs)
comes from biological models and goes back to the 1940's. In spite of that, only in the last
decades has the interest in these connectionist models grown on a solid base, due to a relatively
better understanding of real neural systems and improvements in computer technology.
The development of new neuron models and new training algorithms, besides the availability of
faster processors, contributed to popularizing the use of ANNs in the most diverse applications,
including control, signal processing, pattern recognition (voice, image and text
recognition), event prediction, fault detection and diagnosis, among many others.

ANNs can be seen as estimators without a model, because they are universal
approximators of general functions, which permit the mapping of input vectors in output vectors
without the need of a mathematical model.

2 - FUNDAMENTAL CONCEPTS OF NEURAL NETS

The ANNs are composed of elements that perform some of the elementary functions of the
biological neuron. Beside their superficial similarity to the brain's structure, these nets exhibit some
characteristics of the human brain, like, for instance, the capability of learning by experience.

ANNs can modify their behavior in response to their environment. This fact, more than any
other, is responsible for the interest they are receiving. It is said then that a neural net can learn,
requiring for that a variety of training algorithms, each one with its strengths and weaknesses. For
each particular application, an appropriate model and learning algorithm should be chosen.

The ability of learning by training brings to the net a certain degree of unpredictability; that
is, the result will be as close to the expected one as the training process was good. This depends
fundamentally on a good choice of test patterns, which represent the problem to be solved in a
satisfactory way, and on the choice of an efficient learning algorithm. As a consequence, a
certain error and a certain probability of correctness are always associated with a net output.

A net is trained so that the application of a set of inputs produces a desirable (or at least
consistent) set of outputs. The training is done through the sequential application of input vectors
while the weights of the net are adjusted according to a pre-defined procedure. The training can be
supervised or unsupervised. Supervised training requires an input vector associated with a
desired output vector (the training pair). In unsupervised learning, the training set consists
only of input vectors. In this case, the net weights are adjusted in order to produce a consistent
output vector; that is, the application of any input vector sufficiently similar to one of the training
vectors will produce the same output.

In the applications related to control, the multilayered neural networks are awakening the
greatest interest among researchers.

In these nets, the neurons are fully interconnected with the neurons of the adjacent layers. In
this kind of topology, the nets are composed of an input layer, one or more hidden layers and one
output layer. The neurons of the hidden layers perform the modeling of nonlinear functions and
also serve as noise and drift suppressors. The training of the net consists of adjusting the weights
of the various layers.

Neural nets can be used to control complex and nonlinear systems. They have high noise
immunity and can be used to implement adaptive controllers.

Adaptive control techniques have been developed basically for processes that work under
unexpected or hardly predictable conditions, which are difficult to include in the models.

The classical adaptive control techniques fail whenever we don't have complete
knowledge of the mathematical model of the process or when we don't take into consideration
some uncertainties or complexities of the system (which is the case in most practical applications).
The use of neural controllers is interesting precisely in these cases.

2.1 - THE NEURAL MODEL

McCulloch and Pitts [4] proposed a simplified model for the biological neuron. Their model
is based on the fact that, at a certain moment of time, the neuron is either firing or inactive, which
gives it a discrete and binary behavior.

There are excitatory and inhibitory connections in these neurons, represented through a
signed weight, which reinforces or hampers the generation of an output impulse. A
neuron i produces an impulse, that is, an output o_i = 1, if and only if the weighted sum of its
inputs is greater than or equal to a certain threshold. Equation 1 defines the output function of the
McCulloch & Pitts neuron:

    o_i = 1 if sum_j (w_ij * i_j) >= theta_i; o_i = 0 otherwise    (1)

where w_ij is the weight of the connection associated with the input i_j, and theta_i is the
activation threshold of the neuron i.

Starting from the model proposed by McCulloch & Pitts, many other models that permit
the production of any output, not necessarily 0 or 1, have been derived. Many different
definitions of the activation function have also appeared. Figure 1 shows four of these activation
functions, namely: the linear function, the ramp function, the step function and the sigmoidal
function.

Figure (1): (a) linear, (b) ramp, (c) step and (d) sigmoidal activation functions.

The sigmoidal function, also known as the S-shape function, illustrated in figure 1.d, is a
semilinear, bounded and monotonic function. It is possible to define many sigmoidal functions. One
of the most important sigmoidal functions is the logistic function, defined in equation 2:

    g(x) = 1 / (1 + e^(-x/T))    (2)

where the parameter T defines the form of the curve.

An elementary representation of the McCulloch & Pitts neuron is shown in figure 2.


Figure (2): elementary representation of the McCulloch & Pitts neuron, with inputs x1, x2 and x3.

Basically, the neuron corresponds to a weighted sum of the inputs, over which the
activation function is applied. In this work, the sigmoidal activation function presented in equation
2 has been used.
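A minimal sketch of such a neuron in Python (the function and variable names are ours, not from the paper):

    import math

    def logistic(x, t=1.0):
        # Sigmoidal activation of equation 2; T defines the form of the curve
        return 1.0 / (1.0 + math.exp(-x / t))

    def neuron(inputs, weights, t=1.0):
        # Weighted sum of the inputs followed by the sigmoidal activation
        s = sum(w * x for w, x in zip(weights, inputs))
        return logistic(s, t)

    # Example with three inputs x1, x2, x3, as in figure 2
    print(neuron([0.5, 0.2, 0.9], [0.4, -0.3, 0.8]))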

3 - IMPLEMENTATION OF THE NEURAL CONTROL ENVIRONMENT

The environment we describe here works with multilayered neural nets to perform adaptive
control of complex processes, which can contain nonlinearities.

3.1 - TRAINING ALGORITHMS

In this work, two distinct training algorithms have been used: genetic algorithms and back-
propagation.

3.1.1 - Genetic Algorithms

Genetic algorithms can be seen as generic algorithms for optimal general purpose solution
search. Their working mechanisms are similar to the mechanisms that rule the evolution of
populations of living beings.

In this algorithm we initially generate "n" sets of weights, each set being called a
chromosome. The set of test vectors used for training is then applied to the net using each
chromosome. For each chromosome, the resulting average quadratic error is stored. The first
action over the chromosomes is the so called elitization, in which the 25% worst chromosomes
are eliminated and the 25% best chromosomes are duplicated. The total number of chromosomes
stays the same, because the remaining 50% are kept unchanged.

In this work, the phase called crossover has been eliminated, for it does not significantly
enlarge the search space and has a high associated computational cost. This phase would
correspond to the exchange of weight values between positions in the same chromosome. The
position choice is made in a random way; there is no rule to define the number of exchanges to be
executed.

The next phase is called mutation and consists in substituting, also in a random form, some
of the values inside the matrix composed of all chromosomes. The mutation rate is an arbitrary
parameter. This process inserts new information into the population, which is desirable, because
there is no guarantee that the solution is inside the universe of weights being considered. Because
it is random, the mutation can also destroy a good chromosome before it can be duplicated. The
practical work has shown that a too high mutation rate causes oscillations in the error values.

Once completed the mutation phase, the process of elitization / mutation is started over
again and again, until the expected error value or the specified number of iterations is reached.

In the environment we describe here, the alteration of the weights can be done in a
"controlled" or in an "elitized" way. This permits us to by-pass in an effective way the biggest
difficulty associated with this algorithm, which is the divergence of the error when it reaches
relatively small values, mainly when the mutation rate is high. In the "controlled" form, the
alteration of weights is only executed if the new value presents a smaller average quadratic error
than the former value. In the "elitized" form, the alteration of weights happens only over the "bad"
chromosomes. In both situations the work of the user is facilitated, since a way to control the non
convergence problem is given. The main advantage of using the convergence control process is the
automatic search for a solution without the need of continuous supervision from the user.
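A compact sketch of this training scheme (elitization plus mutation, without crossover); the population size, mutation rate and the toy error function are our own assumptions:

    import random

    def ga_train(error_fn, n_weights, pop=20, mut_rate=0.05, iters=200, target=1e-3):
        # Elitization/mutation GA sketch: drop the 25% worst chromosomes,
        # duplicate the 25% best, then mutate while sparing the best ("elitized").
        chromos = [[random.uniform(-1, 1) for _ in range(n_weights)]
                   for _ in range(pop)]
        for _ in range(iters):
            chromos.sort(key=error_fn)        # rank by average quadratic error
            if error_fn(chromos[0]) < target:
                break
            q = pop // 4
            chromos = chromos[:pop - q] + [c[:] for c in chromos[:q]]  # elitization
            for c in chromos[q:]:             # mutation spares the q best
                for j in range(n_weights):
                    if random.random() < mut_rate:
                        c[j] = random.uniform(-1, 1)  # inject new information
        return min(chromos, key=error_fn)

    # Toy error function standing in for the net's error over the test vectors
    print(ga_train(lambda w: sum(x * x for x in w), n_weights=6))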

3.1.2 - Backpropagation Algorithm

This method follows an iterative model whose goal is the reduction of the average quadratic
error between the desired and obtained output values for each training pair (supervised learning).
The error found is then back-propagated from the output to the input and the weights of each
network layer are readjusted according to a well defined rule.

During the training process, a factor called learning rate is adjusted, which determines the
speed and stability of the convergence. The environment described here executes this adjustment
in automatic form.
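The paper does not detail the adjustment rule it uses; a common heuristic, shown here purely as an assumed illustration, grows the learning rate while the error keeps falling and shrinks it when the error rises:

    def adapt_learning_rate(lr, prev_error, error, grow=1.05, shrink=0.5):
        # Assumed heuristic: speed up while converging, back off on divergence
        return lr * grow if error < prev_error else lr * shrink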

3.1.3 - Coexistence of the algorithms

The backpropagation training algorithm, although largely used, presents certain
drawbacks, for the solution may not converge to the desired minimum if the solution space is too
convoluted. This comes from the fact that this algorithm is highly affected by local minima, which
can delay or even stop the process of reaching a globally optimal solution. This problem can be by-
passed by using the genetic algorithm, especially in the initial phase of the training. The genetic
algorithm is not affected by local minima because it does not include gradient minimization.
However, when the error becomes relatively small, the convergence of the genetic algorithm
becomes critical and slow. There is no error evolution granularity defined in the genetic algorithm.

What is meant is that the search for an optimal solution in this algorithm is done in a non
continuous form, that is, there can be a period in which the error value practically doesn't improve
and then, in the next iteration, the error can "sink" abruptly to the desired value, ending the
training phase. For all those reasons, it is interesting to make a composition of algorithms, where
the genetic algorithm is used at the beginning in order to initialize the weights of the net and then
the backpropagation is used to get the desired, more generic, solution.

One way to partially by-pass the problem of local minima in the backpropagation algorithm
is to induce a controlled randomization of the weight values every time they meet a local minimum
and the convergence stops. This solution is, in most cases, enough to solve non complex
problems and has the advantage of being simpler to implement than genetic algorithms. However,
this solution offers no guarantee that we are not going to fall into another local minimum, in which
case we need to start the whole randomization process again. In the environment we describe
here, a maximal number of automatic randomizations has been defined, and the training process can
be interrupted after some steps if an optimal solution has not been reached.

The number of iterations needed for each algorithm to reach its best solution is noticeably
smaller for the genetic algorithm. However, the computational effort required by this algorithm is
larger. Thus, a compromise between the two algorithms must be established. The
environment permits the user to define the number of iterations after which a migration from one
algorithm to the other will occur.
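A sketch of this migration scheme; ga_step and bp_step are hypothetical single-iteration routines standing in for the two trainers described above:

    def hybrid_train(weights, ga_step, bp_step, migrate_after=50, iters=500,
                     target=1e-3):
        # Genetic algorithm first, to seed the weights away from local minima,
        # then backpropagation for the fine, more generic solution.
        for i in range(iters):
            step = ga_step if i < migrate_after else bp_step
            weights, error = step(weights)
            if error < target:
                break
        return weights

    # Toy single-iteration trainers (stand-ins): each returns (weights, error)
    shrink = lambda w: ([x * 0.9 for x in w], sum(x * x for x in w))
    print(hybrid_train([1.0, -2.0], shrink, shrink))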

3.2 - PRE-TRAINING

A practical problem one meets in the implementation of a neural controller that is going
to control a real process is how to initialize it before connecting it to the process. In order to avoid
unexpected behavior of the system (controller + process), the neural controller must first learn the
real dynamics of the process (observe that the net starts with randomized weights, where the input
/ output relation is unknown and random).

The solution we adopted was to initially generate a set of test vectors obtained
through an assay of the process in open loop, considering that the controller is in a rest
condition or steady-state. Knowing the process's input values and their corresponding relation with
the output values, we can create a set of vectors, which we call "true-vectors", and use them in a
"pre-training" phase of the neural controller. Through these vectors we can automatically make
inferences about the system's response time to an external disturbance, obtaining in this way a
parameter that is similar to the behavior of the derivative term in a classical PID controller.

The "true-vectors" are also used in the adaptive training phase, working as a kind of
"anchor" that hinders undesirable behavior caused by the introduction of bad test vectors. This can
occur due to an inadequate choice of the new test vectors dynamic acquisition method.

Due to the form in which the "true-vectors" are acquired, all the imperfections of the
process are automatically considered, including, for instance, transducer inaccuracies,
nonlinearities in the actuators or in the process itself.

If it is not possible to execute the assay in open loop, the true-vectors will need to be edited
directly by an expert. Once the preliminary controller is implemented, its performance can be
improved subsequently through an on-line acquisition of new test vectors.

3.3 - OFF-LINE TRAINING AND ON-LINE ACQUISITION

The integrated neural control environment has all the necessary tools for the automatic on-
line test vector acquisition. This enables the automatic consideration of all the dynamic
characteristics of the process and the implementation of adaptive controllers.

A quite significant point in this environment is the possibility of performing an off-line
training and an on-line test vector acquisition. This ability of the environment makes it possible to
give the controller an adaptive capability. For instance, if a transducer coupled to the system starts
to present an error in its output after a certain time of good operation and after the implementation
of the neural controller, it would be desirable that the controller recognizes this change and adjusts
itself in order to compensate the error. In order to accomplish this, the environment should read
new vectors at regular time intervals and use them to re-train the net. However, we cannot
increment the number of test vectors indiscriminately. One of the reasons is the limitation of the
computer's memory; another is the consequent increase of the training time. Neither can we
detect which vector should be altered, for it belonged to a former set of vectors, defined before the
transducer changed its behavior. The solution of changing all the vectors should also not be used
indiscriminately, for it would be as if the net had lost its memory.

The environment offers a set of options that, when correctly combined, make it possible to
by-pass all these problems.

In the environment, it is possible to set and vary the immediate reference value in order to
facilitate the on-line vector acquisition. The reference value of the neural controller can be varied
manually, with pre- and post-triggering, or automatically, with pre-defined boundaries and a
controlled number of repetitions. The reference can also be programmed to follow a set of pre-
defined values from a vector with a controlled number of repetitions. These repetitions make it
possible to establish a flat stretch of values, which is very significant due to the inherent delays
associated with the process dynamics.

The alteration of the test vectors can be executed in blocks or totally, making it possible to
establish a smoothness criterion of adaptation.

It's not interesting to have equal or very similar test vectors. During the process of
acquisition of new test vectors, the environment can be programmed to filter out test vectors that
are similar to the ones we already have. The similarity criterion can be adjusted by the user.
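A sketch of such a filter; the Euclidean distance criterion is our assumption, since the paper does not specify the similarity measure:

    def filter_new_vectors(existing, candidates, min_dist=0.1):
        # Keep only candidates sufficiently far (Euclidean distance) from all
        # stored vectors; min_dist plays the role of the user-adjusted criterion.
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        kept = []
        for c in candidates:
            if all(dist2(c, e) > min_dist ** 2 for e in existing + kept):
                kept.append(c)
        return kept

    # The first candidate is too close to a stored vector and is filtered out
    print(filter_new_vectors([[0.0, 1.0]], [[0.05, 1.0], [0.5, 0.2]]))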

If the controller stays a long time at the same operation point or region, it could happen
that it "overlearns" this point or region and "forgets" the expected behavior in the others, due to
the continuous acquisition of new test vectors in this place. In order to solve this problem, an
option has been implemented in the environment that only enables the substitution of a test vector
if the present reference value is in the neighborhood of the reference value of that test vector.

Another option implemented in the environment is concerned with the condition for starting
a new training cycle based on the newly acquired test vectors. This process can be activated by the
user or through an external trigger. The external trigger is concerned with a determined variation of
a previously defined input. If the change in the value of this input exceeds a certain
percentage, the acquisition of new test vectors and the corresponding training process are
automatically started.

When the process under study is very complex or presents many nonlinearities, the neural
net must be proportionally more complex in order to execute the control in an effective way. In the
integrated neural control environment, it is possible to decompose the main problem into many sub-
problems, with a different neural net responsible for each one. This means that we can define a
different neural controller dedicated to each operation region of the process. The advantage of this
strategy is that the dedicated nets are significantly smaller and simpler (and consequently faster)
than the net we would need for the whole range of operation of the process. The switching time
between nets is very short in comparison with the response time of the net itself and is completely
transparent to the rest of the system. In order to accomplish a smooth transition at the switching,
it is possible to execute a superposition of the nets.
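A minimal sketch of such a per-region dispatch (the region boundaries and the net stand-ins are hypothetical):

    def select_controller(reference, controllers):
        # Pick the dedicated net whose operation region contains the reference.
        # controllers: list of ((low, high), net_fn) pairs covering the range.
        for (low, high), net_fn in controllers:
            if low <= reference < high:
                return net_fn
        raise ValueError("reference outside all operation regions")

    # Hypothetical regions, each served by a smaller, faster dedicated net
    controllers = [((0.0, 0.5), lambda u: 0.8 * u),
                   ((0.5, 1.0), lambda u: 1.2 * u)]
    print(select_controller(0.7, controllers)(0.7))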

The environment also enables the on-screen visualization of all the acquired data, in
order to supervise the training and control tasks.

3.4 - ADOPTED NEURAL CONTROL MODELS

In the specialized literature we can find alternative forms of implementing neural
controllers. Some of these make use of the mathematical model of the process, the neural nets
being used only for their adaptive capabilities.

In the present work we adopted the strategy of implementing a pure neural controller,
where a previous knowledge of the mathematical model of the plant is not required. We assumed
it is enough to have an approximate idea of the order of the involved mathematical model, which
is also not imperative. The knowledge of the model's order helps only to reduce the training time
until a satisfactory solution is found.

Two neural control models have been adopted: the so called direct control and the indirect
control, described hereinafter.

3.4.1 - Direct Control

In this model we have solely the neural controller controlling the process directly.
The main difficulty in the implementation of this method appears in the neural controller
training phase. The problem is how to know what output value the controller should have for each
variation of its input. The error in the process output, relative to the reference value, should be
compensated by the neural controller in a similar way as if we were using a classical PID
controller. The system's inner dynamics (delays) should be respected.

The desired controller output value for a given input depends not only on the value of the
input itself but also on the preceding state of the system. This means that the input / output
mapping is not a trivial task.

In order to avoid having to know beforehand the mathematical model of the plant
and still be able to extrapolate the controller's instant output value for a given input condition, we
opted for using a factor that changes the present controller output value in the direction to which
the system's error (between reference and process output) points. This factor, which we called
gain, can be linear or exponential and is basically added to the present output value of the net,
obtaining in this way the goal value for the training.

A linear gain basically adds to the present controller's output value a factor given by
the output value itself multiplied by the system's instant error and by an adjustment speed factor.

target = actual_output + actual_output * instant_error * linear_gain_factor (3)

An exponential gain multiplies or divides the present controller output value by a factor
given by a pre-defined number raised to the system's instant error.
target = actual_output * exponential_gain_factor ^ instant_error    (4)

The exponential gain has the characteristic of speeding up the adjustment when the
system's error is large.
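Both target rules, equations 3 and 4, in a short sketch (the numeric values are hypothetical):

    def linear_target(actual_output, instant_error, linear_gain_factor):
        # Equation 3: nudge the output in the direction the error points to
        return actual_output + actual_output * instant_error * linear_gain_factor

    def exponential_target(actual_output, instant_error, exponential_gain_factor):
        # Equation 4: multiplicative update, faster when the error is large
        return actual_output * exponential_gain_factor ** instant_error

    print(linear_target(0.5, 0.2, 0.8))        # small corrective step
    print(exponential_target(0.5, 0.2, 2.0))   # grows quickly with the error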

3.4.2 - Indirect Control

Another model supported by the environment is the so called indirect control. In this
model, two neural nets are used: one is the identifier net, whose task is to represent the behavior
of the system and which is used in the training of the second net; the other is the controller net,
which plays the role of the process controller itself. The environment provides all the necessary
conditions to implement both nets.

Initially the identifier net is trained, using the environment tools, in order to obtain the
training vectors needed to map the input / output behavior of the process under study. After the
training is finished, the behavior of the identifier net can be compared in real time to the behavior
of the process, checking in this way the results of the training.

The next step is the training of the controller net.

At this point we can train the controller net off-line, that is, without a connection to the
process, or we can accomplish an on-line training, in connection with the process. In the second
case, we have to start the training using the already mentioned "true-vectors".

The first option (off-line) enables a preliminary training of the controller net without any
influence over the process itself and is recommendable when we are not sure about the resulting
behavior of the system in closed loop. In this option, a reduced amount of true-vectors is needed
in order to complete the matrix of values used during the training phase. The other vectors are
obtained from these true-vectors through a list of reference values. This can be seen as an off-line
test vector acquisition.

In the majority of cases, we can directly start using the second option (on-line), due
mainly to the safety given by the true-vectors. In this case, it is suggested to use an amount of
true-vectors sufficient to cover the dynamic reference band. Practical experience has shown
that, in most cases, 20 true-vectors are enough. In this way it is assured that, at the
beginning of the dynamic vector acquisition process, the controller will respond to the commands
in a way that cannot damage the process under control, while variations in the
reference are still accepted by the controller. The idea is to start the training using only the true-
vectors and afterwards improve it by the dynamic acquisition of new vectors.

One important point here concerns the adaptive behavior. During normal
operation the weight values of the identifier net are not changed; only the weights of the
controller net are. It is usually not necessary to have two adaptive structures in series. The reason for
the existence of the identifier net is to back-propagate the system's global error (reference -
process output) in order to keep it available at the output of the controller net during the training.
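A minimal sketch of one such training step, under strong simplifying assumptions (single-layer tanh "nets", scalar signals; all names and values are illustrative, not the authors' implementation):

```python
import numpy as np

# One indirect-control training step, heavily simplified: both "nets" are
# single-layer tanh approximators.  The identifier weights stay frozen;
# only the controller is adapted, as described above.

def forward(w, x):
    return np.tanh(w @ x)

rng = np.random.default_rng(0)
w_ctrl = rng.normal(scale=0.1, size=(1, 2))    # controller net (trained)
w_ident = rng.normal(scale=0.1, size=(1, 2))   # identifier net (frozen)

reference, state, lr = 0.8, 0.2, 0.05
for _ in range(200):
    x_c = np.array([reference, state])
    u = forward(w_ctrl, x_c)                               # controller output
    y_hat = forward(w_ident, np.array([u[0], state]))      # predicted process output
    err = reference - y_hat[0]                             # global error
    # Chain rule: propagate err back through the frozen identifier to get
    # the error at the controller output, then update the controller weights.
    d_u = err * (1.0 - y_hat[0] ** 2) * w_ident[0, 0]
    w_ctrl += lr * d_u * (1.0 - u[0] ** 2) * x_c
# err shrinks only insofar as the identifier reflects the real process.
```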

4 - TEST OF THE NEURAL CONTROL ENVIRONMENT

In order to test the functionality of the integrated neural control environment, it has been
used to control a pilot plant described hereinafter.

The tests were done using a 127 V AC, 1800 VA generator driven by a single-phase
motor.

The generator field was controlled through a hexaphase bridge made of SCRs. It is
important to point out that, besides the nonlinear behavior of the process itself, the hexaphase
bridge also has a nonlinear behavior, for it responds to the sine of the applied signal.

The tests were conducted by comparing the performance of the neural controller to a
classical PID controller previously dimensioned for this process (generator + motor).

It has been observed that, after an initial training time, it is possible to achieve a more
continuous control at the extremities of the controlled band with the neural controller. This is due
to the strong nonlinearity of this process, mainly for low voltage values.

Another observation is that, during the training phase, the indirect model converges more
promptly than the direct control model. On the other hand, the direct control model requires a
smaller computational effort.

The success or failure in the practical implementation of a neural controller is intimately
connected to the quality of the test vectors used in the net training. In the preliminary training we
got a satisfactory result after 20,000 iterations using only the backpropagation algorithm, and
this number was strongly reduced when genetic algorithms were combined with it.


The environment was implemented on an IBM-PC 386DX with a 40 MHz clock frequency.
For a neural controller with 5 neurons in the hidden layer, a training time of less than
10 minutes was required. In subsequent trainings, since the net was already pre-trained, this time was
reduced to a few seconds, depending on the number of vectors used.

At any moment we have one neural net controlling the process in real time and
another being trained in parallel. Once the training of the parallel net is finished (and assuming it was
successful), the weights of the controller net are quickly updated, without any interference in the
process under control. After the updating of the weights, the training process can be finished or
another training cycle can be started automatically.

5 - CONCLUSION

Surely the use of neural controllers is not restricted to generator voltage and speed
control. It is important to point out that for each application there will be a more adequate
structure, which will be identified through testing. We suggest starting with a very
simple net, with a hidden layer of at least 3 neurons. Starting from the results obtained
with it, the complexity of the net's inner structure can be increased. It is also advisable to use
different nets specialized for each operating point or region of the process, in order to work only
with small nets and thus assure fast training and adequate real-time behavior.

One of the main motivations for the use of neural control resides in the fact that very
complex controllers can be implemented without deep knowledge of specific control techniques.

Above all, neural nets are universal approximators. We have to consider
that, when a classical controller is implemented in the field, it usually operates in an environment
where the information received (from transducers) already presents a certain amount of embedded
error. In the case of neural controllers, as the information used to "design" them is obtained
directly from the process, all errors are automatically taken into account, including those implicit in the
process itself.

We can assert, in view of what has been said so far, that neural control represents a
good option for most control problems. In each situation it is important to analyze the
convenience of using this technique or not. The main difficulty usually resides in the fact that,
during the training phase, the user needs an adequate methodology to obtain the test
vectors. It is precisely on this point that the environment described here can reduce the user's effort,
providing integrated support for data acquisition and subsequent training.


Advanced Analysis of Material Properties using DataEngine

M. Poloni - MPA Stuttgart
Pfaffenwaldring 32, 70569 Stuttgart - Germany
Fax: +49 711 685 3053, e-mail: poloni@mpa.uni-stuttgart.de

R. Weber - MIT GmbH
Promenade 9, 52076 Aachen - Germany
Fax: +49 2408 94582, e-mail: rw@mitgmbh.de

Abstract: Many industrial problems require adequate interpretation of the data available in
the respective situations. Process monitoring, diagnosis, quality control, and the
determination of material properties for use in life and/or damage prediction are some of these
tasks. All the related problems have in common that a large amount of data describing the
respective area exists, but in most cases the information contained in the data is not used
sufficiently. Since the problems described above have different characteristics, different
methods are needed to analyse the existing data. In this paper an overview of advanced
methods for data analysis is given. In addition, a software tool which supports the application
of these methods is presented, together with some applications, to emphasize the benefits of
advanced data analysis.
Keywords: cluster analysis; pattern recognition; data analysis; neural networks; material
properties; hardness; low cycle fatigue.

1. Introduction
This paper presents approaches to data analysis with intelligent technologies such as
fuzzy technology and neural networks. After the wave of successful industrial
applications of fuzzy control, data analysis has become a very fast growing and important
area where fuzzy and neural methods are applied. In particular, their combination offers high
potential for future use.
In section 2 a brief introduction to data analysis and the related terminology is given; section
3 proposes possibilities to support a potential user with methods and tools for data analysis.
While methods used in fuzzy control are based primarily on the formulation of fuzzy If-Then
rules, data analysis requires several different methods, as shown briefly in section 3.1.
In section 3.2 a software tool which contains the respective approaches is presented. Section
4 describes some examples in which methods for data analysis are used, and further possible
applications are pointed out. The conclusions show some directions for future developments
of data analysis.

2. Basics of Data Analysis


In general, data analysis can be considered as a process in which, starting from some given
data sets, information about the respective application is generated. In this sense data analysis
can be defined as the search for structure in data [4]. In order to clarify the terminology about
data analysis used throughout this paper, a brief description of its general process is given
below.

In data analysis objects are considered which are described by some attributes. Objects can be
for example persons, things (machines, products, ...), time series, sensor signals, process
states, and so on. The specific values of the attributes are the data to be analysed. The overall
goal is to find structure (information) about these data. This can be achieved by classifying
the huge amount of data into relatively few classes of similar objects. This leads to a
complexity reduction in the considered application which allows for improved decisions
based on the gained information. Figure 1 shows the process of data analysis described so far
which can be separated into feature analysis, classifier design, and classification.
[Figure 1: Contents of Data Analysis — process description and feature determination (numerical object data from sensors, pair-relation data from humans) feed feature analysis (pre-processing, extraction, 2-D display), classifier design, and classification (identification, estimation, prediction, assessment, control).]


Here three steps of complexity reduction can be found:
- An object is characterized in the first step by all its attributes.
- From these attributes the ones which are most relevant for the specific data analysis task are extracted and called features (feature extraction).
- According to these features the given objects are assigned to classes (classifier design).
Information is gained from the data in the sense that relationships between objects are
detected by assigning objects to classes. Based on the derived insights, improved decisions
can be made. Here one could think of decision support for diagnosis problems (medical or
technical), evaluation tasks (e.g. creditworthiness [15]), forecasting (sales, stock prices), and
quality control as well as direct process optimization (alarm management, maintenance
management, connection to process control systems, and development of improved sensor
systems). Of course, this list of applications is by no means exhaustive; for more approaches
see e.g. [4].

The process of data analysis described so far is not necessarily connected with fuzzy
concepts. If, however, either features or classes are fuzzy, the use of fuzzy approaches is
desirable. In figure 1, for example, objects, features, and classes are considered. Both
features and classes can be represented in crisp or fuzzy terms. An object is said to be fuzzy
if at least one of its features is fuzzy. This leads to the following four cases [13]:
- crisp objects and crisp classes
- crisp objects and fuzzy classes
- fuzzy objects and crisp classes
- fuzzy objects and fuzzy classes
In chapter 3 methods and a tool are described which can be used to solve data analysis
problems falling into the latter three cases. Chapter 4 contains two industrial applications
where crisp objects and fuzzy classes are considered.

3. Support for Advanced Data Analysis


As stated in section 2, the applications of data analysis have a wide range and occur in diverse
areas where different problem formulations exist. Therefore knowledge-based fuzzy methods
alone, as used in most fuzzy controllers, are no longer sufficient to solve the complex tasks of
data analysis. Subsequently an overview of some of the methods to solve the related
problems is given; for more details see e.g. [1], [4].

3.1 Methods
Two groups of methods for data analysis can be distinguished:
- methods for data pre-processing
- methods for classifier design and classification

3.1.1 Data Pre-processing


If, for example, in quality control some acoustic signals have to be investigated, it becomes
necessary to filter these data in order to overcome the problems of noisy input. In addition to
these filter methods, some transformations of the measured data, such as the Fast Fourier
Transform (FFT), can improve the respective results. Both filter methods and the FFT
belong to the class of signal processing techniques. Data pre-processing includes signal
processing and also conventional statistical methods.
Statistical approaches can be used to detect relationships within a data set describing a
specific kind of application. Here correlation analysis, regression analysis, and discriminant
analysis can be applied adequately. These methods can be used, for example, to facilitate the
process of feature extraction (see section 2). If, for example, two features from the set of
available features are highly correlated, it may be sufficient for a classification to consider
just one of them.

3.1.2 Classifier design and classification


In order to find classes in some data sets, methods for classifier design and classification can
be used. Based on the specific data analysis formulation these tasks can be performed with
algorithmic techniques as for example clustering methods, knowledge-based systems, and
neural networks. Which of these methods is most appropriate depends on the specific
problem structure, see also [9], [12].

In the literature, many different algorithmic methods for data analysis have been suggested
[5], [10]. One of the most frequently used cluster algorithms, which has been applied very
extensively so far, is the Fuzzy c-means (FCM) [2]. This algorithm assigns objects, which are
described by several features, to fuzzy classes. Objects belong to these classes with different
degrees of membership. Here no explicitly formulated expert knowledge is required for the
task of data analysis.
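As an illustration, a minimal sketch of the standard FCM update equations [2] might look as follows; this is not the DataEngine implementation, and all names and data are ours:

```python
import numpy as np

# Minimal fuzzy c-means sketch following the standard update equations [2].

def fcm(X, c, m=2.0, n_iter=100, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # cluster prototypes
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
        if np.abs(U_new - U).max() < eps:
            return U_new, V
        U = U_new
    return U, V

# Two obvious groups of 2-D objects; memberships are degrees in [0, 1].
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
U, V = fcm(X, c=2)
print(np.round(U, 2))   # each object belongs to both classes to some degree
```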
If an expert has some knowledge about the analysis of the data (as, for example, in the area of
diagnosis), this knowledge should be used for the evaluation. Then knowledge-based
methods for fuzzy data analysis are suitable [14]. This class is similar to the approach taken
in fuzzy control systems, where fuzzy If-Then rules are formulated and a process of
fuzzification, inference, and defuzzification leads to the final decision [15]. The automatic
construction of such systems can be supported by fuzzy techniques from the area of machine
learning; see e.g. [11].
If an expert cannot describe his knowledge explicitly but is able to deliver some examples
of "correct decisions" which contain the expert knowledge implicitly, a neural network can
be trained with these examples; see e.g. [8].

3.1.3 New developments of methods for data analysis


Recently, many research efforts have been directed towards the combination of different intelligent
techniques. Here the elaboration of neuro-fuzzy systems is one cornerstone for the future
development of intelligent machines; see e.g. [6]. One of these methods is a fuzzy version of
Kohonen's network [3].
It is expected that in the near future the areas of fuzzy technology, neural networks, and
genetic algorithms will be combined to a higher degree. Especially for data analysis, the
combination of these methods could give promising results.

3.2 DataEngine - A Software-Tool for Data Analysis


DataEngine is a software tool that contains the methods for data analysis described
above. Especially the combination of signal processing, statistical analysis, and intelligent
systems for classifier design and classification leads to a powerful software tool which can be
used in a very broad range of applications.

[Figure 2: Structure of DataEngine — input (file, serial port, data acquisition boards, data editor) feeds a basic module with statistical methods, a signal processing module, and modules with fuzzy clustering methods, fuzzy rule based methods, neural nets and neuro-fuzzy methods; output goes to file, serial port, printer and 2-D graphics.]


DataEngine is built using object-oriented techniques in C++ and runs on all usual hardware
platforms. Interactive and automatic operation, supported by an efficient and comfortable
graphical user interface, facilitates the application of data analysis methods. In general,
applications of that kind are performed in the following three steps:

3.2.1 Modelling of a specific application with DataEngine


Each sub-task in an overall data analysis application is represented by a so-called function
block in DataEngine (see Figure 3). Such function blocks represent software modules which
are specified by their input interfaces, output interfaces, and their function. Examples are a
certain filter method or a specific cluster algorithm. Function blocks can also be hardware
modules like neural network accelerator boards. This leads to very high performance in
time-critical applications.

3.2.2 Classifier design (off-line data analysis)


After having modeled the application in DataEngine, off-line analysis has to be performed
with given data sets to design the classifier. This task is done without process integration.

3.2.3 Classification

Once the classifier design is finished, the classification of new objects can be executed.
Depending on the specific requirements, this step can be performed in an on-line or off-line
mode. If data analysis is used for decision support (e.g. in diagnosis or evaluation tasks),
objects are classified off-line. Data analysis can also be applied to process monitoring and
other problems where on-line classification is crucial. In such cases, direct process
integration is possible by configuring function blocks for hardware interfaces.

Figure 3: Screenshot of DataEngine

4. Application to material properties determination


In the following, two applications of fuzzy clustering to the modelling of steel behaviour at
high temperature are reported: the low cycle fatigue behaviour of a 1CrMoV rotor steel and
hardness-based temperature determination for lifetime prediction. It should be noted that the
models reported were calculated with a purely data-based procedure, exploiting the
possibilities of the advanced clustering methods available in DataEngine.

4.1 Material Low Cycle Fatigue behaviour modelling


Data about a 1CrMoV rotor steel have been extracted from a material properties database for
the analysis [16]. The first step performed was to see if it was possible to reconstruct the
usual LCF curves using only numerical methods. These curves are characterised by two
different regions: the first with higher values of strain range, the second with lower values.
Variations of the strain range in the first region are less likely to affect the number of
life cycles in a strong way; this effect is more evident in the second one.
A possible approach is to adopt a clustering method to find the regions and to detect
possible spurious measurements that do not clearly belong to any of the clusters. This type of
evaluation can indicate the presence of noisy points and whether their number and
characteristics justify an additional investigation.
[Figure 4: Data set — strain range (%) versus endurance (cycles) for the 1CrMoV rotor steel.]


The data in Figure 4 are for the same alloy and temperature but came from different sources.
The best result has been obtained under the assumption of the presence of two clusters. The
results are reported in Figure 6, where all the points are plotted without considering their
membership values.
The example above shows that, in this case, the local regression models based on the fuzzy
clusters do not lead to a result much more comprehensive than the one which can be
obtained by conventional regression analysis.
265

Figure 5: Fuzzy clustering environment in DataEngine

[Figure 6: Resulting clusters — the two fuzzy clusters (Cluster 1 and Cluster 2) in the strain range (%) versus endurance (cycles) plot.]


It should be taken into account that the regression based on the fuzzy model is only a
"by-product" of the analysis. For classical methods, on the contrary, the regression is the main
(and almost only!) result. In other words, the model derived by means of fuzzy cluster
analysis offers much more than just a regression model. This is an example of a
"no-knowledge" approach: a model to classify LCF data has been obtained automatically from
the data set. For instance, an alpha-cut over the membership values of the data points in the
clusters can be made, which means considering only the data points belonging to the cluster
prototypes with a membership higher than a fixed threshold (alpha value). In this way the
local regression analysis is performed only over the data points with a high similarity to the
model.
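A minimal sketch of this alpha-cut selection — the data points and the two cluster prototypes below are invented for illustration only:

```python
import numpy as np

# Sketch of the alpha-cut selection described above; values are invented.

log_N  = np.log10([200.0, 400.0, 3000.0, 20000.0, 50000.0])
strain = np.array([2.5, 2.1, 1.0, 0.6, 0.45])      # strain range (%)
X = np.column_stack([log_N, strain])

V = np.array([[2.4, 2.3], [4.4, 0.6]])             # assumed cluster prototypes
d = np.linalg.norm(X[None] - V[:, None], axis=2) + 1e-12
U = (1.0 / d**2) / (1.0 / d**2).sum(axis=0)        # FCM-style memberships (m = 2)

alpha = 0.8
for i in range(2):
    core = U[i] > alpha                            # alpha-cut: core points only
    a, b = np.polyfit(log_N[core], strain[core], 1)
    print(f"cluster {i}: strain ~ {a:.2f} * log N + {b:.2f}")
```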

[Figure 7: Lorenzian fit of data — strain range (%) versus endurance (cycles), showing the first and second clusters with their Lorenzian fits and 95% confidence bands.]


Another possibility would be to use the model for the classification of new data, providing
the membership values of new data pairs to the clusters, that is, determining the
similarity of a new point to the general behaviour of the system as described by the model.

4.1.1 Integration of the results in a KBS


An intelligent software module, Expert Miner, implementing advanced methods using the
DataEngine ADL (Application Development Library) and containing classifiers like the one
described, is currently under development at MPA Stuttgart [17]. It will be incorporated
in a KBS about metallic material properties, in the framework of the European BRITE
project BE5245 C-FAT [18].
Expert Miner will enable the user, a trained technician in his/her own field (in the case of
C-FAT a metallurgist), to perform data mining tasks using advanced techniques. The system
will support the user through a KBS that analyses the user requests in terms of input-output
data, resulting model requirements and data available, returning advice on which method
or technique is more suitable to solve the current problem. Moreover, it will give the KBS
user the possibility to apply the different kinds of advanced analyses in parallel with the
classical methods, realising an important methodology transfer in the field of applied material
science.

4.2 Hardness-based temperature estimation


A set of experimental data has been extracted from [19], regarding the determination of
hardness properties for two different steels, namely 2¼Cr1Mo and 1Cr½Mo.
In the following, hardness will be indicated with H, while the Sherby-Dorn parameter will be
indicated with P. The expression of the Sherby-Dorn parameter is P = log t - C/T, where t is the
time in hours and T is the temperature in Kelvin. The two derived expressions will be used in the
paper:

T = C / (log t - P)    (1)

t = 10^(P + C/T)    (2)
Hardness measures are used to estimate the temperature, and from temperature the remaining
lifetime.
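A short numeric illustration of the reconstructed expressions (1) and (2); the constant C below is an assumed value, chosen only to give plausible orders of magnitude, not taken from the paper:

```python
import math

# Numeric sketch of the relations above; C is an assumed illustrative value.

C = 18000.0                                    # assumed, not from the paper

def P_sherby_dorn(t_hours, T_kelvin):
    return math.log10(t_hours) - C / T_kelvin  # definition of P

def temperature(t_hours, P):
    return C / (math.log10(t_hours) - P)       # expression (1)

def lifetime(P, T_kelvin):
    return 10.0 ** (P + C / T_kelvin)          # expression (2)

P = P_sherby_dorn(10_000, 873)                 # ~ -16.6 for 10,000 h at 600 C
print(round(P, 2), round(temperature(10_000, P)), round(lifetime(P, 873)))
```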

4.2.1 2¼Cr1Mo steel analysis


The material under consideration had the following composition (wt. %):

C 0.14 | Si 0.14 | S 0.011 | P 0.012 | Mn 0.5 | Ni Trace | Cr 2.56 | Mo 1.04 | V 0.04 | W < 0.05

The data, coming from hardness measurements after different time slots at fixed temperatures
(varying from 550 to 750 degrees Celsius), are reported in Figure 8. For this 2¼Cr1Mo steel,
comparisons are made with an approximation proposed in the European SPRINT 249 project
guideline [20]. The set of data has been processed using a fuzzy c-means algorithm. Two
different tests have been performed: one assuming the presence of three clusters and the
second assuming the presence of four clusters. The number of clusters is assumed taking into
consideration the possible material behaviour and through an evaluation of the results
obtained from the numerical procedure. The detected regions have been approximated using
local regression models. These models were then fused together to reach a unified model for
comparison purposes. The best prediction results have been obtained using the four-cluster
subdivision. The results related to four clusters are shown in Figure 9, where the obtained
global model is reported together with the plot of the equation suggested by SP249, an
exponential function built up from mechanistic assumptions and constraints (like the
two asymptotes for H values of 180 and 115).

[Figure 8: Set of data — hardness versus Sherby-Dorn parameter P (from -17.5 to -13.5).]


268

In this case the material behaviour is more regular and this is reflected in the success of
different methods of approximation. Nonetheless the question of the underestimation of
temperature starting from hardness values remains open: the methods illustrated for this kind
of steel (including the SP249 guideline) do not have a coherent conservative response; they can
lead to more or less conservative temperature estimations.
An underestimation can be responsible for dangerous non-conservative evaluations of the
remaining lifetime (equation 2).
This problem comes from the best-fit approach used. Different approximations of the
material behaviour should be adopted to take into account the use that will be made of the
models. Lower-bound approximations or interval-analysis-based regression models have to be
adopted in this case.

[Figure 9: Global function approximation — hardness H (120-180) versus P (-17 to -13) for Steel E (2¼Cr1Mo), showing the four clusters, the fused regression model and the guideline curve.]

4.2.2 Remarks
- The lack of more data does not permit a check of the effectiveness of the curves obtained.
- A point to stress is that an approach based only on a best fit has the disadvantage of not always being conservative in its result. This can affect remaining life assessment methods based on the estimated temperature values. Different approximations of the material behaviour should be adopted to take into account the use that will be made of these models. Interval-analysis-based regression models could play a role in this respect.
- The approximations obtained by regression analysis are clearly reliable over the interval in which data are available. An extrapolation of the behaviour outside these regions could lead to inconsistent results. This is obviously due to the fact that the equations are not a mechanistic description of the physical process.
- The parameter C is a source of uncertainty, because it is derived through a regression analysis, but not always in a reliable way. A suitable representation could be a fuzzy number.

5. Conclusions
Data analysis has large potential for industrial applications. It can lead to the automation of
tasks which are too complex or too ill-defined to be solved satisfactorily with conventional
techniques. This can result in a reduction of cost, time, and energy, which also improves
environmental criteria.
In contrast to fuzzy controllers, where the behaviour of the controlled system can be observed
and therefore the performance of the controller can be stated immediately, many applications
of methods for data analysis have in common that it will take some time to quantify their
influence exactly.
The applications reported show how the cited methods can be successfully introduced in the
field of material properties analysis. At MPA Stuttgart a research effort is currently under
way to exploit the possibilities of advanced data analysis.
The authors believe that the link of the software package with the available material
databases can bring new insight into many difficult material analysis problems.

6. References
[1] H. Bandemer, W. Näther, Fuzzy Data Analysis (Kluwer, Dordrecht, 1992).
[2] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms (Plenum
Press, New York, 1981).
[3] J.C. Bezdek, E. C.-K. Tsao, N.R. Pal, Fuzzy Kohonen Clustering Networks, in: IEEE
International Conference on Fuzzy Systems (San Diego, 1992) 1035-1043.
[4] J.C. Bezdek, S.K. Pal, Eds., Fuzzy Models for Pattern Recognition (IEEE Press, New York, 1992).
[5] A. Kandel, Fuzzy Techniques in Pattern Recognition. (John Wiley & Sons, New
York, 1982).
[6] B. Kosko, Neural Networks and Fuzzy Systems. (Prentice-Hall, Englewood Cliffs,
1992).
[7] R. Krishnapuram, J. Lee, Fuzzy-Set-Based Hierarchical Networks for Information
Fusion in Computer Vision. Neural Networks 5 (1992) 335-350.
[8] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks. (Addison-Wesley,
Reading, Mass., 1989).
[9] R. Schalkoff, Pattern Recognition: Statistical, Structural and Neural Approaches
(John Wiley & Sons, New York, 1992).
[10] J. Watada, Methods for Fuzzy Classification. Japanese Journal of Fuzzy Theory and
Systems 4 (1992) 149-163.
[11] R. Weber, Fuzzy-ID3: A Class of Methods for Automatic Knowledge Acquisition.
Proceedings of the 2nd International Conference on Fuzzy Logic & Neural Networks
(Iizuka, Japan, July 1992) 265-268.
[12] S.M. Weiss, CA. Kulikowski, Computer Systems that learn. (Morgan Kaufmann, San
Mateo, 1991).
[13] H.-J. Zimmermann, Fuzzy Sets in Pattern Recognition, in: P.A. Devijver, J. Kittler,
Eds., Pattern Recognition Theory and Applications (Springer-Verlag, Berlin, 1987) 383-
391.
[14] H.-J. Zimmermann, Fuzzy Sets, Decision Making, and Expert Systems. (Kluwer,
Boston, 1987).
[15] H.-J. Zimmermann, Fuzzy Set Theory and Its Applications, 2nd Edition (Kluwer
Academic Publishers, Boston, Dordrecht, 1991).
[16] S.R. Holdsworth, BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of
Crack Initiation and Early Crack Growth Behaviour Under Complex Creep-Fatigue Loading
Conditions, in: Knowledge-Based (Expert) System Applications in Power Plant and
Structural Engineering, Jovanovic, Lucia, Fukuda, Eds. (Joint Research Centre of the
European Commission, EUR 15408 EN, 1994) 235-243.
[17] M. Poloni, A. Jovanovic, H.P. Ellingsen, P.M. Schäfer, Extraction of knowledge from
data: application in power plants. Third European Congress on Intelligent Techniques and
Soft Computing, EUFIT '95 (Aachen, Germany, 1995).
[18] M. Poloni, Data mining and dynamic worked examples in the C-FAT KBS. C-FAT
report CFAT/T6/MPA/220a, 1995.
[19] R.B. Carruthers, R.V. Day, The Spheroidisation of some Ferritic Superheater Steels.
Central Electricity Generating Board, North Eastern Region, Scientific Services
Department, Report SSD/NE/R.138, 1968.
[20] ERA Technology, SPRINT 249 Guideline GG2, 1994.


FUZZY LOGIC - AN APPLICATION FOR GROUP TECHNOLOGY

José Arnaldo Barra Montevechi


Escola Federal de Engenharia de Itajubá
CP 50 - Itajubá - MG - 37500-000, Brazil, tel.: +55 35 629 1212,
fax: +55 35 629 1148, e-mail: arnaldo%efei.uucp@dcc.ufmg.br

Paulo Eigi Miyagi


Universidade de São Paulo - Escola Politécnica
CP 61548 - São Paulo - SP - 05508-900, Brazil

ABSTRACT

In this article a procedure for obtaining part families using fuzzy logic is described. It permits
taking into account uncertainties and ambiguities usually present in manufacturing. Aspects of
the database, which can aggregate design and manufacturing information, are presented, as
well as how to assign memberships to the features to be analysed, the similarity analysis for
the resemblance relation, and part processing information. This procedure makes possible the
development of software that will be an interesting tool for the manufacture of small lots; it
integrates design and manufacturing information and makes possible a rationalization of
resources. Finally the article describes the use of fuzzy backward reasoning for the
classification of new parts into established families. This approach is an interesting
application of group technology.

KEYWORDS

Group Technology, part family, fuzzy logic, membership attribution, fuzzy backward
reasoning

1 - INTRODUCTION

Group Technology (GT) is a philosophy which tries to analyze and arrange parts
and manufacturing processes according to design and manufacturing similarities
[2] [5] [6]. Families are then established to make it possible to rationalize the manufacturing
processes, or to reduce the number of drawings in the design department.
Most papers about part family formation assume that information about cost,
processing time, part demand, etc., is accurate. It is usually supposed that a part belongs
to only one family. Nevertheless, in many cases this does not occur. Grouping analysis
making use of fuzzy logic can provide a solution to this problem. However, few articles have
been published dealing with the problem of uncertainty, the formation of manufacturing cells
and the part families. Likewise, those articles consider such questions in isolation, neglecting the
development of methods to be shared by all company users [7].


Part similarity, which is the basic aspect of family formation, consists of a close
classification in geometry, function, material and/or process. It may not be sufficient to
describe part features using yes or no labels when accurate classification is required [7]. In
order to obtain an efficient and flexible classification which considers uncertainties, thus
eliminating the shortcomings of the currently employed methods, this article describes a
procedure that makes use of fuzzy logic for part family formation. The fuzzy membership
function permits taking into account the uncertainties inherent in part feature description,
thus producing more realistic results. The use of this technique will make the part family
formation more sensitive. The membership value, which lies between 0 and 1, expresses to
what extent the part has the feature: the closer the value is to 1, the more of the feature the
part has.
First, details of the database are described. The grouping principle employed is also
described, which consists of choosing a threshold value for the similarity. Once this threshold
value is chosen, two elements will be in the same group if, for example, in the case of a similarity
function, the similarity between them is greater than the threshold value. Since the similarity
relationship is not necessarily transitive, it is necessary to employ fuzzy matrix theory to
form the closure structure which permits separating the data into exclusive, disjoint groups
which are, in essence, equivalence classes over a certain threshold value.
For process similarity, a procedure is shown to search for the similarity information that
should guide the formation of manufacturing cells.
Another important aspect described is the possibility of using qualitative data, such as
complex, easy, hard, high surface roughness, etc. How to translate this information into
numerical values, which is essential for the similarity analysis, is also shown.
Finally, the authors believe that the use of backward reasoning makes it possible to
classify a new part into an established family in a faster and easier way, because a smaller
number of questions has to be answered, without the rigidity that is frequent in the
common methods.
The objective of this methodology is to develop an alternative to traditional
procedures for obtaining similarity. To this purpose, it is necessary to integrate appropriate
approaches that can incorporate uncertainty and that nowadays serve isolated aspects of
similarity analysis.

2 - AN OBJECT-ORIENTED DATA BASE


An extremely important component of the proposed methodology for obtaining the part
families is the database providing the features that will be classified according to their
similarities. In the database, all important information about the several component features of the
company has to be available, including design and manufacturing features. However, it
should be borne in mind that no coding procedure is proposed, as is common in
Classification and Codification Systems (CCS). The part information is input according to
the description of its features. This fact is very important because it permits data handling by all
company users, which does not happen with most CCS. Features that may be in the database
are shown in Figure 1.
Developing an object-oriented database permits adding more data and features. This is
not possible in CCS, because after the coding is obtained the insertion of new features becomes
difficult. The main purpose of the database is to provide a single reference for the whole company,
as Figure 2 suggests.


FEATURES            DETAILS
Holes               max. diameter; min. diameter; number of holes
Pockets             max. length; min. length; max. width; min. width; max. depth; min. depth; number of pockets
Slots               max. length; min. length; number of slots
Threaded holes      max. diameter; min. diameter; max. pitch; min. pitch; number of holes
Basic shape         (A) total length; (B) max. width; (C) max. depth
Technology          min. tolerance; complexity
Material            strength; hardness; optimum cutting speed
Shape complexity    number of different planes; number of rotational elements; number of gears
Production          max. number for production; min. number for production
Annual production   max. number; min. number
Process             number of lathes used; number of drilling machines used; number of milling machines used

(The attributes are quantitative and qualitative, i.e. numbers or subjective values of the parts.)

Figure 1 - Some prismatic part features for the data base

3 - MEMBERSHIP ATTRIBUTION

The parts to be classified by their similarities are represented in an n x m matrix
(part x feature) form. This matrix is shown in Figure 3 and will be formed with the
database features that are important for the grouping analysis.
Basically, two types of features are possible in the database, namely quantitative
and qualitative. More precisely, the features are:

- Quantitative features: representative of part properties that can be expressed by numbers, such as length, diameter, etc.
- Qualitative features (uncertain/fuzzy): these features describe attributes in uncertain terms, like "wide", "medium", "small". An example of such a feature is the shape complexity of a part, which an analyst may define as very complex, complex or not complex.

[Figure 2: A single database — production planning, design, manufacture, metrology, process planning and management all access one single database.]

X = [x_11 ... x_1m; ... ; x_n1 ... x_nm]

Figure 3 - Matrix of parts x features

Since the features can be quantitative or qualitative, it is important to develop a
grouping procedure which can deal with these two kinds of features in a unified way. It
is necessary to bring the data of these features to the same unit; otherwise there will be a
scale problem. In this methodology the data of different features are expressed by
memberships: each part feature is given a membership between 0 and 1,
which puts an end to the scale problem.
Membership values of quantitative features can be expressed directly as a function of
the values obtained from the database. For example, for a length feature whose values for 7 parts
are given by the vector [10.00; 8.5; 5.5; 3.75; 6.25; 8.00; 7.50], the membership values
can be calculated by dividing each length value by the largest value of the vector
(i.e. 10.00). This procedure results in another vector, the membership vector, given by
[1.00; 0.850; 0.55; 0.375; 0.625; 0.800; 0.750]. These values can also be given by any of the
expressions suggested by [9], such as those in Figure 4.
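In code, this normalization is a one-liner; the second variant shown is the min-max expression of Figure 4:

```python
# The normalization from the worked example above, in one line.

lengths = [10.00, 8.5, 5.5, 3.75, 6.25, 8.00, 7.50]
print([x / max(lengths) for x in lengths])
# -> [1.0, 0.85, 0.55, 0.375, 0.625, 0.8, 0.75]

# The min-max variant of Figure 4 maps the smallest value to 0 instead:
lo, hi = min(lengths), max(lengths)
print([(x - lo) / (hi - lo) for x in lengths])
```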
Likewise it is possible to use graphs with a more suitable membership function, which
automatically give the membership values of the selected features for the matrix of Figure 3.
As an example, if the analysis of a family of lengthy shafts is desired and the part in question
has a length of 100 mm, and the membership function is given by Figure 5, the membership
of the part for this geometric feature will be 0.8. The other parts belonging to
the sample will have their membership values calculated in the same way.

[Figure 4: Expressions for membership attribution — normalization formulas such as u_ij = x_ij / x_jMAX and u_ij = (x_ij - x_jMIN) / (x_jMAX - x_jMIN).]

[Figure 5: Membership by graphic — membership function over part length (40-200 mm).]

With qualitative features, the attribution of membership is not so easy. How can
qualitative information, such as a part being complex or having high roughness, be
transformed into a number?
To solve this problem, as proposed by [1], it is possible to utilize the AHP
(Analytic Hierarchy Process) method [8], which by means of a comparative analysis permits
the calculation of memberships for qualitative attributes. These comparative analyses are made
in pairs between the attributes of the feature in question. It is necessary to make a matrix of
comparison of the attributes for each feature. Features are, for example, roughness, shape
complexity, length, etc., whereas the attributes are the feature designations such as small, large,
complex, little complex, high roughness, etc. Saaty proposed that attribute
comparisons should use values from the finite set {1/9, 1/8, ..., 1, 2, ..., 8, 9}. These matrices
are filled by evaluating the importance of one attribute over the other, through
the following scale:

1. If A and B are equal in importance, value 1 is attributed;
2. If A is a little more important than B, value 3 is attributed;
3. If A is a little less important than B, value 1/3 is attributed;
4. If A is much more important than B, value 5 is attributed;
5. If A is much less important than B, value 1/5 is attributed;
6. If A is obviously or very strongly more important than B, value 7 is attributed;
7. If A is obviously or very strongly less important than B, value 1/7 is attributed;
8. If A is absolutely more important than B, value 9 is attributed;
9. If A is absolutely less important than B, value 1/9 is attributed.

Each entry of the matrix is a pairwise judgment. After the matrix of comparison is
defined, the maximum eigenvalue (λmax) and its respective eigenvector are calculated. The
eigenvector represents the memberships that can be used for the attributes in question, and
the eigenvalue is the measure or rate of consistency of the result.
To illustrate this method, one of the features which may be important for obtaining
similarity is the complexity of shape evaluated by an analyst. The attributes of this feature
originating from the database range from very complex, through complex, mean complexity
and low complexity, to very low complexity. All the parts of the database have one of these
qualitative values for their complexity feature. By means of the priority scale shown, a
specialist provides the matrix A of Figure 6.

Figure 6 - Matrix A of comparison by pairs for the feature shape complexity

In the comparison matrix A, entry a_ij indicates the number that estimates the relative
membership of attribute A_i when it is compared with attribute A_j. Obviously, a_ij = 1/a_ji. With
the eigenvector, the memberships that can be used for the similarity classification are available,
obviously after testing the consistency of the result to conclude that the answer is good [8]. In
this case the memberships (one of the eigenvectors, normalized so that the greatest weight equals
1) are given, after the calculation, by the first normalized eigenvector in Figure 7.

Shape complexity:
  Eigenvector: [3.936, 2.036, 1, 0.431, 0.254]
  First normalized eigenvector: [1, 0.517, 0.254, 0.125, 0.065]
  Second normalized eigenvector: [0.51, 0.264, 0.13, 0.064, 0.033]
  Max. eigenvalue = 5.243; CI = 0.061; CR = 0.054
  Membership attribution: very complex = 1; complex = 0.52; mean complexity = 0.25;
  low complexity = 0.12; very low complexity = 0.06

Figure 7 - Eigenvector for membership attribution
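The computation behind Figure 7 can be sketched with power iteration. The pairwise matrix below is an assumed 1-3-5-7-9 reciprocal pattern: it reproduces the reported λmax, CI and CR values, but is not guaranteed to be the authors' exact matrix:

```python
import numpy as np

# Power-iteration sketch of the AHP computation; A is an assumed example.

A = np.array([[1.0, 3, 5, 7, 9],
              [1/3, 1, 3, 5, 7],
              [1/5, 1/3, 1, 3, 5],
              [1/7, 1/5, 1/3, 1, 3],
              [1/9, 1/7, 1/5, 1/3, 1]])

w = np.ones(5)
for _ in range(100):                   # power iteration -> principal eigenvector
    w = A @ w
    w /= np.linalg.norm(w)
lam_max = w @ A @ w / (w @ w)          # Rayleigh quotient at convergence
CI = (lam_max - 5) / (5 - 1)           # consistency index
CR = CI / 1.12                         # Saaty's random index for n = 5
print(np.round(w / w.max(), 3))        # memberships, largest weight = 1
print(round(lam_max, 3), round(CI, 3), round(CR, 3))
```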

The membership values now give a weight to each of the parts from the database;
these memberships represent the importance of shape complexity for each part.
After the memberships for the several features (qualitative and quantitative) are
calculated, a procedure is needed to obtain the clusters of similar parts.

4 - ANALYSIS OF SIMILARITY FOR THE FEATURES

A principle that can be used for the grouping is to choose a threshold value for the
similarity. Once this threshold value is chosen, two elements will be in the same grouping if the
similarity between them is larger than the comparison value.
To estimate the resemblance between pairs of data, it is possible to use the convention
of arranging the data in matrix form. Each entry in this matrix represents the proximity
between two parts. This relationship, represented by a matrix S, expresses the similarity
between different parts. To obtain this matrix it is possible to utilize several formulas for the
calculation of similarity, as exemplified by expressions (1), (2), (3), (4) and (5).

S(x_i, x_j) = ( Σ_k min(μ_k(x_ik), μ_k(x_jk)) ) / p    (1)

S(x_i, x_j) = Σ_k min(μ_k(x_ik), μ_k(x_jk)) / Σ_k max(μ_k(x_ik), μ_k(x_jk))    (2)

S(x_i, x_j) = Σ_k μ_k(x_ik)·μ_k(x_jk) / sqrt( Σ_k μ_k(x_ik)² · Σ_k μ_k(x_jk)² )    (3)

S(x_i, x_j) = Σ_k μ_k(x_ik)·μ_k(x_jk) / max( Σ_k μ_k(x_ik)², Σ_k μ_k(x_jk)² )    (4)

S(x_i, x_j) = Σ_k μ_k(x_ik)·μ_k(x_jk) / Σ_k max(μ_k(x_ik), μ_k(x_jk))²    (5)

(all sums taken over the features k = 1, ..., p)
The symmetric matrix S can be used directly in the analysis of fuzzy grouping. The
similarity of parts consists of a very close classification in geometry, function, material and/or
process.
The measures of similarity usually have minimum variance, and they usually give
the same results if the groupings are compact and well separated. Nevertheless, if the groupings
are near one another, really different results can be obtained [7]. With the similarities it is
possible to obtain the matrix of Figure 8 from the matrix of Figure 3.
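As a sketch, the similarity matrix can be built from the membership matrix with a min/max measure of the family exemplified above; the membership values below are invented:

```python
import numpy as np

# Building the symmetric similarity matrix S from the membership matrix X
# (parts x features) with a min/max ratio; memberships are invented.

X = np.array([[1.00, 0.52, 0.80],
              [0.85, 0.52, 0.75],
              [0.55, 0.06, 0.30]])
n = len(X)
S = np.array([[np.minimum(X[i], X[j]).sum() / np.maximum(X[i], X[j]).sum()
               for j in range(n)] for i in range(n)])
print(np.round(S, 2))    # symmetric, with ones on the diagonal
```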

S = [s_11 ... s_1n; ... ; s_n1 ... s_nn]

Figure 8 - Matrix of similarities
The matrix of Figure 8 is symmetric and, as seen, the similarities are obtained through
manipulation of the memberships.
However, the matrix does not have the property of being transitive: what we have here is a
resemblance relation. Hence the conclusion that if A is similar to B, and B is similar to C,
then A is similar to C, is not possible. To deal with this problem, it
is necessary to transform the similarity matrix into a transitive one. Fuzzy theory can be used
for this transformation [7]; the transitive matrix is the fuzzy equivalent matrix. The
composition of fuzzy matrices is defined by (6) and (7).

R' = R ∘ R    (6)

r'_ik = max_j ( r_ij ∧ r_jk )    (7)

The transitive closure of the matrix of Figure 8 is the fuzzy equivalent matrix, which can simply be calculated by (8) [3]:

R^ = R ∪ R² ∪ ... ∪ Rⁿ    (8)

Finally, given an α level, the groupings of similar parts are obtained for the level
chosen. With different α values, different classifications will appear: the greater the α
value, the fewer parts are classified in each family, and thus more families are formed.
An example of decomposition for obtaining the families can be better understood
through Figures 9 and 10. Figure 9 shows the attribution of some α values and Figure 10
shows, for each of the α levels, the groupings formed. For example, for α = 0.9 there
are three groupings, the first one consisting of parts A, D and E, the second one of part B and
the third one of part C.
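A compact sketch of expressions (6)-(8) followed by an α-cut decomposition; the similarity matrix below is invented:

```python
import numpy as np

# Max-min composition is iterated until the relation is transitive, then an
# alpha-cut yields the part families.  S is a small invented matrix.

def max_min(A, B):
    # (A o B)_ik = max_j min(a_ij, b_jk), as in expression (7)
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def fuzzy_equivalent(S):
    R = S.copy()
    while True:
        R2 = np.maximum(R, max_min(R, R))    # R u (R o R), building (8)
        if np.array_equal(R2, R):
            return R
        R = R2

S = np.array([[1.0, 0.4, 0.8],
              [0.4, 1.0, 0.5],
              [0.8, 0.5, 1.0]])
R = fuzzy_equivalent(S)
print((R >= 0.5).astype(int))   # alpha = 0.5: blocks of 1s are the families
```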

[Figure 9: Decomposition of a similarity relationship — binary matrices for the 5 parts at alpha-cut levels 0.6, 0.7, 0.8, 0.9 and 1.0.]

[Figure 10: Tree shaped decomposition — at α-cut = 0.9 parts 1, 4 and 5 form one group, with parts 2 and 3 separate; at α-cut = 0.8 part 2 joins the first group.]


5 - ANALYSIS OF PROCESS SIMILARITY

A problem that may occur with the formulation for obtaining similarity seen in the
last section is that the information about cell formation may not be sufficient. To prevent this from
happening, it is possible to build the membership matrix between parts and machines in such
a way that vagueness is also considered. An algorithm for checking the grouping of
machines may then be used, which is not evident in the previous formulation,
where the information was about the part families.
If n parts and m machines are being considered to obtain manufacturing cells, usually
the representation of the machines that process each part is given by a part x machine
matrix, as seen in (9).

             X_1   X_2   X_3  ...  X_n
      Y_1  [ u_11  u_12  u_13 ...  u_1n ]
      Y_2  [ u_21  u_22  u_23 ...  u_2n ]    (9)  (binary matrix)
      Y_3  [ u_31  u_32  u_33 ...  u_3n ]
      ...
      Y_m  [ u_m1  u_m2  u_m3 ...  u_mn ]

in (9):
X_j is a part and j = 1, 2, 3, ..., n;
Y_i is a machine and i = 1, 2, 3, ..., m;
u_ij represents the relationship between part j and machine i (u_ij = 0 or 1).

For example, u_12 = 1 shows that part 2 visits machine 1. Due to the inflexibility of
this matrix, which does not show the possibility of another machine also processing part 2,
another matrix should be developed, called the nonbinary matrix, represented by (10).

             X_1   X_2   X_3  ...  X_n
      Y_1  [ u_11  u_12  u_13 ...  u_1n ]
      Y_2  [ u_21  u_22  u_23 ...  u_2n ]    (10)  (nonbinary matrix)
      Y_3  [ u_31  u_32  u_33 ...  u_3n ]
      ...
      Y_m  [ u_m1  u_m2  u_m3 ...  u_mn ]

in (10):
X_j is a part and j = 1, 2, 3, ..., n;
Y_i is a machine and i = 1, 2, 3, ..., m;
u_ij represents the relationship between part j and machine i.

In (10) it is possible to observe the following properties:

0 ≤ u_ij ≤ 1, for i = 1, 2, 3, ..., m; j = 1, 2, 3, ..., n    (11)

Σ_j u_ij > 0, for i = 1, 2, 3, ..., m    (12)


The property defined by (11) indicates the intensity with which a machine is designated
to process a given part: a number near 1 means a great potentiality to process the
part, while with a number near 0 the machine would definitely not be appropriate.
The elements of matrix (10) are calculated from mixed functions between machines and
components, which is an interesting proposition from [12]. The following steps are necessary
to obtain these values:

1. Define the membership functions for each feature-machine pair;
2. Compute the degree of membership for each feature-machine pair;
3. Compute the combined index for each machine-part pair, because usually a part has more than one feature processed by the same machine. This index, called combined, goes into the nonbinary matrix.

To illustrate a membership function for one feature-machine pair, it is possible to
think about the tolerances a certain machine can achieve. This function may be represented by
Figure 11.

[Figure 11: Membership function of tolerances for a given machine — trapezoidal function over tolerance with breakpoints t1, t2 and t3.]

The membership values from Figure 11 lie in the ranges designated by (13):

μ(x) = 0,                      x < t1
       1,                      t1 ≤ x ≤ t2    (13)
       (t3 - x) / (t3 - t2),   t2 < x ≤ t3
       0,                      x > t3

As can be understood, to set up the nonbinary matrix it is necessary to obtain the
membership functions of the main part features, as a function of the capacity of a given machine
to process them. The main criterion for selecting these features is to choose those that can
contribute to differentiating the parts at the time of grouping. Of course, it will be
necessary to obtain, for each machine, the membership functions for the feature being studied.
After obtaining all the membership functions associating machine and feature, it is
possible to begin the next step, which is the calculation of the combined index for each
machine-part pair.
If:
If:
x = n
X
j - | x j i > x j 2 : ' jp| J l ' 2> *> " > features fuzzy set of component j ;
M = (Y,,Y 2 , ,Ym} set of available machines;

11
282

(^ ) membership for the machine i related to feature k of component j ;

where: n = number of parts ; = number of features and m = number of machines.

In agreement with Zadeh's theory [10], the membership between machine i and component j may be given by (14):

μ_Yi(X_j) = min_{1≤k≤p} μ_Yi(x_jk)    (14)

With this procedure it is easy to establish the memberships for all the machine-part
pairs, and to construct the nonbinary matrix.
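A small sketch showing that expression (14), applied to the chart of Figure 13, reproduces vector (15):

```python
import numpy as np

# Expression (14): the machine-part entry of the nonbinary matrix is the
# minimum of the feature memberships; data taken from Figure 13.

feature_memberships = np.array([   # rows: parts 1..7; cols: tolerance, capacity
    [1.0, 1.0],
    [1.0, 0.0],
    [1.0, 1.0],
    [0.8, 1.0],
    [1.0, 0.0],
    [0.1, 1.0],
    [0.9, 1.0]])
machine_1 = feature_memberships.min(axis=1)
print(machine_1)   # [1.  0.  1.  0.8 0.  0.1 0.9]  -> vector (15)
```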
It is important to observe that the binary matrix used in most methods has a different
interpretation from nonbinary. In the former, the entries represent the relationship of incidence
between a part and a machine. The relationship of correspondence should remain in the
resultant matrix. For example, it should be assured that all machines should be in one group,
where the parts have entry 1 with those machines. On the other hand, any entry will be an
exceptional element. In the nonbinary matrix, the entries represent the degree with which a
component can be processed in a machine. It is not necessary to assure that all the nonzero
parts are in groups, as long as alternative machines are available. If the necessary machines are
grouped in the cell for some components, then the outside elements for these parts become
exceptional elements.
To illustrate this procedure with a small example: if the membership functions are those
of Figure 12, for a hypothetical case of machine 1 out of the m available, with the value (from a
database) of the feature in question for each one of the 7 parts that are to be grouped, it is
possible to build the chart of Figure 13.

[Figure 12 - Example of membership functions for the features that should be analyzed relative to machine 1; horizontal axes: finishing tolerance (μm) and machine capacity (mm)]

Parts    finishing tolerance    machine capacity
1        1                      1
2        1                      0
3        1                      1
4        0.8                    1
5        1                      0
6        0.1                    1
7        0.9                    1

Figure 13 - Memberships given to each part-feature pair for machine 1


In the same way that the values of Figure 13 were given, the process should be repeated
for all the machines to be analyzed; thus m matrices analogous to this one are obtained.
Applying formula (14) to the matrix of Figure 13, the vector (15) is calculated, which
expresses the membership of the 7 parts for machine 1.

machine 1: [1  0  1  0.8  0  0.1  0.9]    (15)

If the process is repeated for the m machines, m vectors such as (15) will be calculated,
which constitute the nonbinary matrix to be studied for obtaining the process similarities.
If for the 7-part example a universe of 7 machines is available, after the execution of the
procedure a nonbinary matrix such as (16) may be obtained. Matrix (16) is the one that
should be analyzed to obtain a solution of process similarity.

      P1    P2    P3    P4    P5    P6    P7
M1  [ 1     0     1     0.8   0     0.1   0.9 ]
M2  [ 0     1     0.3   0     0.7   0.8   0   ]
M3  [ 0.3   0.7   0     0     0.8   0.8   0   ]
M4  [ 0     0.7   0     0.3   0.7   0     0.3 ]    (16)
M5  [ 0.6   0.1   0.5   0     0     0     0.5 ]
M6  [ 0.7   0.2   0.8   0.8   0.3   0     0.7 ]
M7  [ 0     1     0.5   0     0.8   0.9   0   ]

Once the nonbinary matrix is obtained, it is necessary to run a proper grouping algorithm
to get the possible manufacturing cells. [12] shows the use of Rank Order Clustering
(ROC) [4] to analyze matrix (16). The result for this matrix is:

family 1 (parts P3, P1, P7, P4):

          P3    P1    P7    P4
M1  <  [ 1     1     0.9   0.8 ]
M2     [ 0.3   0     0     0   ]
M3     [ 0     0.3   0     0   ]
M4     [ 0     0     0.3   0.3 ]
M5  <  [ 0.5   0.6   0.5   0   ]
M6  <  [ 0.8   0.7   0.7   0.8 ]
M7     [ 0.5   0     0     0   ]

family 2 (parts P2, P5, P6):

          P2    P5    P6
M1     [ 0     0     0.1 ]
M2  <  [ 1     0.7   0.8 ]
M3  <  [ 0.7   0.8   0.8 ]
M4  <  [ 0.7   0.7   0   ]
M5     [ 0.1   0     0   ]
M6     [ 0.2   0.3   0   ]
M7  <  [ 1     0.8   0.9 ]

(machines marked "<" belong to the respective cell)

Cell 1 is composed of machines M1, M6 and M5, and family 1 of parts P3, P1, P7 and P4;
Cell 2 is composed of machines M7, M2, M3 and M4, and family 2 of parts P2, P5 and P6.

The importance of utilizing the nonbinary matrix is that, after the grouping of machines
is obtained, there is the possibility of analyzing which machines are more appropriate to
process the part families. Furthermore, it makes sense to eliminate machines that perform
similar operations. This analysis is not possible if the binary matrix is used.
Other grouping algorithms can also be adapted so that the nonbinary matrix is
used for the cell formation.
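
[12] applies ROC to this matrix but does not give the algorithm itself; purely as an illustration, one common formulation of ROC [4] can be adapted to the nonbinary case by thresholding the entries before computing the binary row and column weights. The threshold value (0.5) and all names in this Python sketch are assumptions, not taken from [12]:

    def roc_order(matrix, threshold=0.5):
        # Rank Order Clustering (King [4]): rows and columns are repeatedly
        # reordered by the binary weight of their 0/1 patterns; nonbinary
        # entries are first thresholded to 0/1.
        m, n = len(matrix), len(matrix[0])
        b = [[1 if v >= threshold else 0 for v in row] for row in matrix]
        rows, cols = list(range(m)), list(range(n))
        for _ in range(20):            # ROC normally stabilizes quickly
            new_rows = sorted(rows, reverse=True,
                              key=lambda i: sum(b[i][j] << (n - 1 - k)
                                                for k, j in enumerate(cols)))
            new_cols = sorted(cols, reverse=True,
                              key=lambda j: sum(b[i][j] << (m - 1 - k)
                                                for k, i in enumerate(new_rows)))
            if new_rows == rows and new_cols == cols:
                break
            rows, cols = new_rows, new_cols
        return rows, cols              # machine order, part order

For matrix (16) with this particular threshold, the returned orderings are M1, M6, M5, M7, M2, M3, M4 and P3, P1, P7, P4, P2, P5, P6, which matches the grouping shown above.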


6 - FUZZY BACKWARD REASONING FOR THE CLASSIFICATION OF NEW PARTS IN ESTABLISHED FAMILIES

After the families are obtained, new parts can be introduced. Classifying these new parts
is always a problem, and fuzzy backward reasoning is a new way to solve it. Let the system
be described by:

1. F = {Fl, F2,..., Fn} - the space of parts;

2. C = {Cl, C2, ...,Cm} - important technological features of parts;

It is possible to map a fuzzy relation matrix R through the application of F in C, as
shown in (17).

R: F → C    (17)

In (17):

R = [rij], i = 1, ..., n and j = 1, ..., m - the fuzzy relation matrix between families and
technological features.

It is possible to infer, for the new parts, making use of the fuzzy weights of the features,
which family (or families) is (are) best suited for the new part. The expression for this
inference is (18).

C = R ∘ F    (18)

Two illustrative examples will be shown. Consider the following features:

C1 = rotational;
C2 = prismatic;
C3 = through hole;
C4 = blind hole;
C5 = slot;
C6 = cone;
C7 = thread;

The existing parts available were divided and classified into families as shown in Figure 14.
With the families and features it is possible to obtain the fuzzy relation matrix R, shown in
Figure 15.


[Figure 14 - Families established: F1 = rotational, through hole; F2 = rotational, through hole and thread; F3 = rotational, cone and blind hole; F4 = prismatic with slot; F5 = prismatic with through hole]

      C1    C2    C3    C4    C5    C6    C7
F1    0.9   0     0.7   0     0     0     0
F2    0.7   0     0.6   0     0     0.2   0.9
F3    0     0     0.2   0.7   0     0.9   0
F4    0     0.9   0     0     0.9   0     0.1
F5    0     0.9   0.8   0.6   0     0     0

Figure 15 - Fuzzy relation matrix


6.1 - CLASSIFICATION OF A ROTATIONAL PART

For the new rotational part in Figure 16 to be classified into the families of Figure 14,
the following membership set over the features Ci is given:

x1 = 0.9/C1 + 0/C2 + 0.7/C3 + 0.7/C4 + 0/C5 + 0.7/C6 + 0/C7

[Figure 16 - Rotational part to classify: rotational, blind hole]

The objective is to evaluate the ai terms in the expression below, which express the
membership between the new part and the families Fi.

y1 = a1/F1 + a2/F2 + a3/F3 + a4/F4 + a5/F5

y1 = x1 ∘ R, which results in the following relation:

[0.9]   [0.9  0.7  0    0    0  ]   [a1]
[0  ]   [0    0    0    0.9  0.9]   [a2]
[0.7]   [0.7  0.6  0.2  0    0.8]   [a3]
[0.7] = [0    0    0.7  0    0.6] ∘ [a4]
[0  ]   [0    0    0    0.9  0  ]   [a5]
[0.7]   [0    0.2  0.9  0    0  ]
[0  ]   [0    0.9  0    0.1  0  ]

Using MAX-MIN composition [11], it is possible to write the following equations:

0.9 = (0.9 ∧ a1) ∨ (0.7 ∧ a2) ∨ (0 ∧ a3) ∨ (0 ∧ a4) ∨ (0 ∧ a5)      (19)
0   = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0 ∧ a3) ∨ (0.9 ∧ a4) ∨ (0.9 ∧ a5)      (20)
0.7 = (0.7 ∧ a1) ∨ (0.6 ∧ a2) ∨ (0.2 ∧ a3) ∨ (0 ∧ a4) ∨ (0.8 ∧ a5)  (21)
0.7 = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0.7 ∧ a3) ∨ (0 ∧ a4) ∨ (0.6 ∧ a5)      (22)
0   = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0 ∧ a3) ∨ (0.9 ∧ a4) ∨ (0 ∧ a5)        (23)


0.7 = (0 ∧ a1) ∨ (0.2 ∧ a2) ∨ (0.9 ∧ a3) ∨ (0 ∧ a4) ∨ (0 ∧ a5)      (24)
0   = (0 ∧ a1) ∨ (0.9 ∧ a2) ∨ (0 ∧ a3) ∨ (0.1 ∧ a4) ∨ (0 ∧ a5)      (25)

Now from the equations we can obtain the following:

From equation (19):
(0.9 ∧ a1) ≤ 0.9 ⇒ a1 ≥ 0.9
(0.7 ∧ a2) ≤ 0.9  ∀a2
(0 ∧ a3) ≤ 0.9  ∀a3
(0 ∧ a4) ≤ 0.9  ∀a4
(0 ∧ a5) ≤ 0.9  ∀a5

From equation (20):
(0 ∧ a1) ≤ 0  ∀a1
(0 ∧ a2) ≤ 0  ∀a2
(0 ∧ a3) ≤ 0  ∀a3
(0.9 ∧ a4) ≤ 0 ⇒ a4 = 0
(0.9 ∧ a5) ≤ 0 ⇒ a5 = 0

From equation (21):
(0.7 ∧ a1) ≤ 0.7 ⇒ a1 ≥ 0.7
(0.6 ∧ a2) ≤ 0.7  ∀a2
(0.2 ∧ a3) ≤ 0.7  ∀a3
(0 ∧ a4) ≤ 0.7  ∀a4
(0.8 ∧ a5) ≤ 0.7 ⇒ a5 ≤ 0.7

From equation (22):
(0 ∧ a1) ≤ 0.7  ∀a1
(0 ∧ a2) ≤ 0.7  ∀a2
(0.7 ∧ a3) ≤ 0.7 ⇒ a3 ≥ 0.7
(0 ∧ a4) ≤ 0.7  ∀a4
(0.6 ∧ a5) ≤ 0.7  ∀a5

From equation (23):
(0 ∧ a1) ≤ 0  ∀a1
(0 ∧ a2) ≤ 0  ∀a2
(0 ∧ a3) ≤ 0  ∀a3
(0.9 ∧ a4) ≤ 0 ⇒ a4 = 0
(0 ∧ a5) ≤ 0  ∀a5

From equation (24):
(0 ∧ a1) ≤ 0.7  ∀a1
(0.2 ∧ a2) ≤ 0.7  ∀a2
(0.9 ∧ a3) ≤ 0.7 ⇒ a3 ≤ 0.7, which together with (22) gives a3 = 0.7
(0 ∧ a4) ≤ 0.7  ∀a4
(0 ∧ a5) ≤ 0.7  ∀a5

From equation (25):
(0 ∧ a1) ≤ 0  ∀a1
(0.9 ∧ a2) ≤ 0 ⇒ a2 = 0
(0 ∧ a3) ≤ 0  ∀a3
(0.1 ∧ a4) ≤ 0 ⇒ a4 = 0
(0 ∧ a5) ≤ 0  ∀a5

These constraints form a seven-equation problem, whose solution is:

a1 ≥ 0.9; a2 = 0; a3 = 0.7; a4 = 0; a5 = 0

The conclusion is that the new part belongs to family 1.
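
The hand derivation above can also be checked mechanically. The paper does not give an algorithm, but for max-min relational equations of the form x = a ∘ R the fuzzy relational equation literature (the greatest-solution result due to Sanchez) gives âi = min over j of α(rij, xj), with α(r, x) = 1 if r ≤ x and α(r, x) = x otherwise. A Python sketch of this check, with all names illustrative:

    # Fuzzy relation matrix R of Figure 15 (families F1..F5 x features C1..C7)
    R = [
        [0.9, 0,   0.7, 0,   0,   0,   0  ],
        [0.7, 0,   0.6, 0,   0,   0.2, 0.9],
        [0,   0,   0.2, 0.7, 0,   0.9, 0  ],
        [0,   0.9, 0,   0,   0.9, 0,   0.1],
        [0,   0.9, 0.8, 0.6, 0,   0,   0  ],
    ]

    def greatest_solution(R, x):
        # Greatest a solving the max-min equation x = a o R:
        # a_i = min_j alpha(r_ij, x_j), alpha(r, x) = 1 if r <= x else x.
        return [min(1.0 if r <= xj else xj for r, xj in zip(row, x)) for row in R]

    def maxmin_compose(a, R):
        # Forward composition: x_j = max_i min(a_i, r_ij).
        return [max(min(ai, row[j]) for ai, row in zip(a, R)) for j in range(len(R[0]))]

    x1 = [0.9, 0, 0.7, 0.7, 0, 0.7, 0]      # features of the new rotational part
    a = greatest_solution(R, x1)
    print(a)                                 # [1.0, 0, 0.7, 0, 0]
    print(maxmin_compose(a, R) == x1)        # True, so a solution exists

The greatest solution sets a1 = 1; any a1 ≥ 0.9 also satisfies (19), in agreement with the derivation above. The same check applied to x2 of Section 6.2 gives [0, 0, 0, 0, 1.0], in agreement with a5 ≥ 0.9.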

6.2 - CLASSIFICATION OF A PRISMATIC PART

For the new prismatic part in Figure 17 to be classified into the families of Figure 14,
the following membership set over the features Ci is given:

x2 = 0/C1 + 0.9/C2 + 0.8/C3 + 0.6/C4 + 0/C5 + 0/C6 + 0/C7


We want to obtain:

y2 = a1/F1 + a2/F2 + a3/F3 + a4/F4 + a5/F5

y2 = x2 ∘ R, which results in the following relation:

[0  ]   [0.9  0.7  0    0    0  ]   [a1]
[0.9]   [0    0    0    0.9  0.9]   [a2]
[0.8]   [0.7  0.6  0.2  0    0.8]   [a3]
[0.6] = [0    0    0.7  0    0.6] ∘ [a4]
[0  ]   [0    0    0    0.9  0  ]   [a5]
[0  ]   [0    0.2  0.9  0    0  ]
[0  ]   [0    0.9  0    0.1  0  ]

Using MAX-MIN composition, we have:

0   = (0.9 ∧ a1) ∨ (0.7 ∧ a2) ∨ (0 ∧ a3) ∨ (0 ∧ a4) ∨ (0 ∧ a5)      (26)
0.9 = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0 ∧ a3) ∨ (0.9 ∧ a4) ∨ (0.9 ∧ a5)      (27)
0.8 = (0.7 ∧ a1) ∨ (0.6 ∧ a2) ∨ (0.2 ∧ a3) ∨ (0 ∧ a4) ∨ (0.8 ∧ a5)  (28)
0   = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0 ∧ a3) ∨ (0.9 ∧ a4) ∨ (0 ∧ a5)        (29)
0.6 = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0.7 ∧ a3) ∨ (0 ∧ a4) ∨ (0.6 ∧ a5)      (30)
0   = (0 ∧ a1) ∨ (0.2 ∧ a2) ∨ (0.9 ∧ a3) ∨ (0 ∧ a4) ∨ (0 ∧ a5)      (31)
0   = (0 ∧ a1) ∨ (0.9 ∧ a2) ∨ (0 ∧ a3) ∨ (0.1 ∧ a4) ∨ (0 ∧ a5)      (32)

[Figure 17 - Prismatic part to classify: prismatic, blind and through hole]


In the same way as for the rotational part, we may conclude that:

a1 = 0; a2 = 0; a3 = 0; a4 = 0; a5 ≥ 0.9

The conclusion is that the new prismatic part belongs to family 5.

7 - CONCLUSIONS

This paper is a synthesis of what can be done with fuzzy logic in order to deal
with the problem of obtaining similarities, an important aspect of part family formation,
considering the uncertainty present in the manufacturing environment. The procedure groups
techniques that can cope with the problem of similarities both in isolation and, more
comprehensively, making use of the same database, something that is usually necessary but
not possible with most current tools.
With the procedure described here it is possible to provide a new contribution to
Group Technology. The development of this model can also supply a solution to the problem
of setup time reduction, since it is possible to retrieve information on process and geometry
similarities together. In this way, it is possible to profit from these similarities so that the
preparation time of machines will be shorter. It can be observed that with the
rationalization made possible by the identification of similarities, work is being done to
save the company's resources.
The authors believe that the use of backward reasoning makes it possible to classify a
new part into an established family in a faster and easier way. This is possible because fewer
questions have to be answered, without the rigidity that is frequent in Classification and
Coding Systems (CCS), a common method used to obtain similarities. The paper shows the
treatment of simple cases for the classification of rotational and prismatic parts into
established families. For the classification, backward reasoning is used, thus simplifying the
interaction between the modeling and classification procedures. From the examples, it is
possible to conclude that the use of fuzzy backward reasoning makes it possible to classify
a new part into an established family in a faster and easier way. However, the solution for
the best family is not always simple; in some cases it is not possible to obtain a solution.
For this reason more research is necessary in this field.
Finally, this methodology may be turned into software, which will be an interesting tool
for the manufacture of small lots. This constitutes a different proposition, as an alternative
to the current methods available.

8 - REFERENCES

[1] Arieh, D.B.; Triantaphyllou, E. Quantifying data for group technology with weighted
fuzzy features. Int. J. Prod. Res., 30, 1285-1299, 1992.

[2] Hyer, N.; Wemmerlov, U. Group Technology and Productivity. Capabilities of Group
Technology. Michigan, The Computer and Automated Systems Association of SME, 3-12, 1987.

[3] Kaufmann, A. Introduction to the Theory of Fuzzy Subsets. Volume 1, Academic Press,
Inc., Orlando, USA, 1975.

[4] King, J.R. Machine-component grouping in production flow analysis: an approach
using a rank order clustering algorithm. Int. J. Prod. Res., 18, 213-232, 1980.

[5] Min, H.; Shin, D. Simultaneous Formation of Machine and Human Cells in Group
Technology: a Multiple Objective Approach. International Journal of Production
Research, 31, no. 10, 2307-2318, 1993.

[6] Montevechi, J.A.B. Tecnologia de Grupo Aplicada ao Projeto de Células de
Fabricação. Florianópolis, UFSC, MSc dissertation, 1989.

[7] Montevechi, J.A.B. Formação de famílias de peças prismáticas utilizando lógica
Fuzzy. São Paulo, EP-USP, Doctorate Qualifying Examination, 1994.

[8] Saaty, T.L. A Scaling Method for Priorities in Hierarchical Structures. Journal of
Mathematical Psychology, 15, 234-281, 1977.

[9] Xu, H.; Wang, H.P. Part family formation for GT applications based on fuzzy
mathematics. Int. J. Prod. Res., 27, 1637-1651, 1989.

[10] Zadeh, L.A. Fuzzy Sets. Information and Control, 8, 338-353, 1965.

[11] Zadeh, L.A. The Concept of a Linguistic Variable and its Application to
Approximate Reasoning - III. Information Sciences, 9, 43-80, 1975.

[12] Zhang, C.; Wang, H.P. Concurrent Formation of Part Families and Machine Cells
Based on the Fuzzy Set Theory. Journal of Manufacturing Systems, 11, 61-67, 1992.


CHAPTER 5

END-USERS OF INTELLIGENT SOFTWARE SYSTEMS

Artificial Neural Networks Applied to Protection of Power Plant


Denis V. Coury*, B.Sc., M.Sc., Ph.D., MIEEE    David C. Jorge*, B.Sc., MIEEE

*Departamento de Engenharia Elétrica
Escola de Engenharia de São Carlos
Universidade de São Paulo
São Carlos - SP - Brazil

Abstract

The design of power plant protection is nowadays limited to expected situations. To
work with unforeseen or unknown data is a challenging task. The implementation of a pattern
recognizer for power plant protection diagnosis may provide great advances in the protection
field. This paper presents the use of Artificial Neural Networks as a pattern classifier for a
distance relay operation. The scheme utilizes the digitized form of three-phase voltage and current
inputs. An increase in performance of the distance relays is expected.

1-Introduction

Distance relaying techniques have attracted considerable attention for the protection of
transmission lines. This principle measures the impedance at a fundamental frequency between a
relay location and the fault point and thus determines if a fault is internal or external to a
protection zone. Voltage and current data are used for these purposes and they generally contain
the fundamental frequency component with superimposed harmonics and a DC component (noise).
With digital technology being ever increasingly adopted in power substations, more
particularly in protection, distance relays have found some improvements mainly related to
efficient filtering methods (such as Fourier, Kalman, etc.) and as a consequence shorter decision
time has been achieved. The trip/no trip decision has been improved, compared to
electromechanical/solid state relays. However, if unforeseen or incomplete data input occurs, the
protection system may not act properly [1].
This paper presents the theory of Artificial Neural Networks (ANN) as being an
alternative computational concept to the conventional approach based on a programmed
instruction sequence. The ANN can provide solutions to problems with unknown determining
factors. Its potential has brought power system researchers to look at it as a possibility to solve
problems related to different subjects such as load forecasting, fault detection and location,
economic dispatch, etc. [2],[3],[4].
This work shows the application of ANN as a pattern classifier for distance relay operation
in transmission lines. The scheme can work with unexpected or incomplete data, improving the
performance of ordinary relays using the digital principle. The degree of accuracy for locating the
faults in the different zones is also improved.

2-The Artificial Neural Network

The Artificial Neural Network (ANN) is inspired by biological nervous systems and it was
first introduced as early as 1960. Nowadays the studies of ANN are growing rapidly, for many
reasons[5]:

- ANN works with pattern recognition at large.
- ANN is prepared to work with incomplete and unforeseen input data.
- ANN has a high degree of robustness and ability to learn.

The neuron is the nervous cell and is represented in the ANN universe as a perceptron.
The interconnection of perceptrons can form a network, composed of a single layer or
several layers, as seen in Figure 1(b).


Figure 1 - ANN diagrams


(A) - Perceptron representation
(B) - ANN multi-layer scheme

Figure 1(a) shows a simple model of a neuron characterized by a number of inputs
P1, P2, ..., Pn, the weights W1, W2, ..., Wn, the bias adjust b and an output a.
The neuron uses the input, together with information on its current activation state, to
determine the output a, given as in equation (1).

a = Σ(k=1..n) Wk Pk + b    (1)

The ANN models may be "trained" to work properly. The desired response is a special
input signal used to train the neuron. A special algorithm adjusts weights so that the output
response to the input patterns will be as close as possible to their respective desired responses. In
other words, the ANN must have a mechanism for learning. Learning alters the weights associated
with the various interconnections and thus leads to a modification in the strength of the
interconnections.

In order to use the ANN properly, it is necessary to know that empirical methods are the
only way to find satisfactory results. The network scheme has a direct influence on the ANN
performance. Problems may also arise from the ANN training: depending on some factors, the ANN
may not converge, and it could be necessary to change the training parameters. The sequence of
the input training data, the initial weights used and the number of cases in the training data
may affect the results.
The use of ANN in distance relays may result in a considerable advance for the correct
diagnosis of operation. The ANN may solve the overreach and underreach problems which are
very common in power plant protection design. The ANN can be trained with data provided by
a simulation of a faulted transmission line and "learn" the aspects related to that situation. The use
of ANN makes it possible to protect over 80% of the extension of the power system line. ANN can
deal with unforeseen situations related to faults in the power plant.

3-Backpropagation Method

The Backpropagation algorithm is central to much current work on learning in neural


networks. It was invented independently several times, by Bryson and Ho (1969), Werbos (1974),
Parker (1985) and Rumelhart, Hinton, and Williams (1986). A closely related approach was
proposed by Le Cun (1985). The Backpropagation method works very well adjusting the
weights W connecting successive layers of multilayer perceptrons. The algorithm
gives a prescription for changing the weights in any feedforward network to learn a training set
of input-output pairs {P, a} [6]. The use of the bias adjust in the ANN is optional, but the results
may be enhanced by it. Trained backpropagation networks tend to give reasonable answers when
presented with inputs that they have never seen. An elementary backpropagation neuron with R
inputs is shown below in Figure 2.

[Figure 2 - Neuron with log-sigmoid characteristic: R input elements P[1]..P[R] with weights W, bias adjust b, summation output n and transfer function F giving the output a]

where:
n = summation output; R = number of inputs; W = weights; b = bias adjust;
F = transfer function; a = F(W·P + b).

a = logsig(n) = 1 / (1 + e^(-n))    (2)

Backpropagation networks often use the logistic sigmoid as the activation transfer function. The
logistic sigmoid transfer function maps the neuron input from the interval (-∞, +∞) into the
interval (0, +1). The logistic sigmoid equation (2) is applied to each element of the proposed ANN
[7].
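
As a minimal sketch of equations (1) and (2) for a single neuron (Python, with hypothetical input values; the paper's actual implementation used the Matlab Neural Network Toolbox [7]):

    import math

    def neuron(P, W, b):
        # Equation (1): weighted sum of the inputs plus the bias adjust.
        n = sum(w * p for w, p in zip(W, P)) + b
        # Equation (2): logistic sigmoid maps n from (-inf, +inf) into (0, 1).
        return 1.0 / (1.0 + math.exp(-n))

    # Hypothetical three-input example
    print(neuron(P=[0.5, -1.2, 0.3], W=[0.8, 0.1, -0.4], b=0.2))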

4-Application of the Backpropagation Method for the Fault Location Problem

It is common, among the algorithms for digital distance protection, to use voltage and
current waveforms taken from a busbar in order to solve the fault location problem in a power
plant.
Figure 3 shows the ANN diagram chosen to solve the fault location problem using the
backpropagation method. This scheme also uses the three phase values of current and voltage
data. The Discrete Fourier Transform was used to filter this input data and extract the
fundamental components. The transfer function used for the perceptrons was the logistic sigmoid
described in the earlier section.

[Figure 3 - ANN configuration used as a distance relay; output: trip/no trip]

5-The Power Plant Diagram Used

In order to test the applicability of the scheme proposed earlier, a simulation of the
transmission line in a faulted condition is needed. This paper makes use of a digital simulation of
faulted EHV transmission lines developed by Johns and Aggarwal [8]. The 100 km transmission
line used to train and test the proposed ANN is shown in Figure 4. The digitized outputs of voltage
and current at the three phases are then used in real time to feed the ANN algorithm. Only
information from one side (busbar A) was used in the referred method.
The primary situation used for training considered the fault resistance as constant (as well
as the other parameters, such as source capacity, etc.) and phase A to ground faults only. The
training values used in the ANN scheme considered the change of the fault location along the
transmission line as the main variation of the input data. However, flexibility for untrained or
unforeseen data is expected from this kind of scheme.
Figure 5 shows the schematic diagram of the hardware needed in an ANN
implementation, including the microprocessor-based neural relay. The converged set of weights,
which is computed in an off-line mode, is then stored in the microprocessor for on-line
application. The scheme works at a sampling frequency of 4 kHz.
[Figure 4 - Transmission line used for the ANN studies: 100 km line between a 20 GVA source (busbar A) and a 5 GVA source (busbar B), both at V = 1.0 pu; fault resistance Rf = 10 Ω; fault inception angle 90°]

[Figure 5 - Block diagram of the distance relay: analog voltage and current input signals from the line pass through a surge filter, sample-and-hold and A/D converter, a digital Fourier-transform filter and the logsig transfer function, producing the 0/1 output; the ANN training routine runs off-line and its converged weights feed the on-line hardware]

6-The Training Procedure and Test Results of the Proposed ANN

The "Neural Network Toolbox" from the software 'Matlab" [7] was used to create the
ANN diagram, train it and obtain the weights as output. The initial weights as well as the initial
bias used random values between 0-1.
59 different faulted cases were used in different locations of the transmission line in order
to train and test the proposed ANN. In the referred scheme, a protection of 80% of the line was
298

chosen as the extension of the first zone of the relay. Points next to the region where trip/no trip
condition exchanges (80Km for the line used) had special treatment. In this case, less degree of
scarcity was taken between locations used for training.
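
The denser sampling near the boundary can be pictured as below (a Python sketch; the spacings and band width are illustrative assumptions, since the actual 59 training locations are not listed in the paper):

    def training_locations(line_km=100.0, zone_km=80.0,
                           coarse=5.0, fine=1.0, band=5.0):
        # Coarse spacing along the whole line, fine spacing within
        # +/- band km of the trip/no-trip boundary (80 km here).
        locs = set()
        d = coarse
        while d < line_km:
            locs.add(round(d, 1))
            d += coarse
        d = zone_km - band
        while d <= zone_km + band:
            locs.add(round(d, 1))
            d += fine
        return sorted(locs)

    # Label each location: 1 = trip (first zone, <= 80 km), 0 = no trip
    data = [(d, 1 if d <= 80.0 else 0) for d in training_locations()]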
Table 1 shows the results of the ANN model used as a distance relay. The ANN answer is
shown, compared to the expected one, for faults along the transmission line. It should be
mentioned that the cases used for the tests are different from the ones used for the training.
The results presented in Table 1 show the efficiency of the proposed scheme. For all the
cases, the ANN scheme correctly classifies the fault as being internal or external to the first zone of
the relay.

Distance of the fault    ANN answer    Correct    Distance of the fault    ANN answer    Correct
from point A (km)                      answer     from point A (km)                      answer
 2.0                                   1          56.0                                   1
 4.0                                   1          60.0                                   1
 6.0                                   1          61.0                                   1
 8.0                                   1          64.0                                   1
10.0                                   1          65.0                                   1
11.0                                   1          68.0                                   1
13.0                                   1          71.0                                   1
16.0                                   1          73.0                                   1
20.0                                   1          74.0                                   1
22.0                                   1          75.0                                   1
26.0                                   1          76.0                                   1
29.0                                   1          77.0         0.9998                    1
30.0                                   1          78.0         0.9941                    1
33.0                                   1          82.0         2.6845e-4                 0
35.0                                   1          83.0         1.0104e-5                 0
37.0                                   1          84.0         4.2734e-7                 0
40.0                                   1          86.0         1.1656e-9                 0
42.0                                   1          87.0         6.6594e-11                0
45.0                                   1          88.0         3.9977e-12                0
46.0                                   1          89.0         2.7110e-13                0
48.0                                   1          92.0         2.0983e-16                0
52.0                                   1          94.0         2.8801e-18                0
54.0                                   1          96.0         5.3289e-20                0
55.0                                   1          98.0         1.5577e-21                0

Table 1 - Results for the ANN scheme.

7-Tests of the ANN for unforeseen data

In order to test the performance of the ANN scheme subjected to unknown data inputs,
some changes were made to the power system parameters. As said before, Table 1 presents the
relay results for the configuration presented in Figure 4. In order to test the ANN flexibility to
unforeseen inputs, the fault resistance, power generation capability and fault inception angle
were given small variations. Table 2 shows the results of the ANN scheme subjected to such
variations. It can be noted that for most cases the ANN scheme still gives correct results,
confirming its capability as a pattern classifier. The wrong diagnosis appeared in the cases of
small fault resistance, where, as a consequence, the current of the faulted phase increased. The
wrong diagnosis was given because these cases are similar to a fault occurring in the first
zone of the relay, on which the network was trained earlier. However, it should be mentioned that
such cases could be included in the training set in order to avoid this problem.

Change of trained parameters                                        ANN Output    Correct Output
Fault inception angle set to 88°, fault distance = 75 km from A.    1             1
Fault inception angle set to 88°, fault distance = 85 km from A.    1.5024e-?     0
Fault inception angle set to 92°, fault distance = 70 km from A.    1             1
Fault inception angle set to 92°, fault distance = 85 km from A.    3.3442e-?     0
Fault resistance set to 0 Ω, fault distance = 70 km from A.         1             1
Fault resistance set to 0 Ω, fault distance = 90 km from A.         1 (wrong)     0
Fault resistance set to 5 Ω, fault distance = 85 km from A.         1 (wrong)     0
Fault resistance set to 5 Ω, fault distance = 70 km from A.         1             1
Fault resistance set to 8 Ω, fault distance = 70 km from A.         1             1
Fault resistance set to 8 Ω, fault distance = 95 km from A.         2.0047e-15    0
Fault resistance set to 12 Ω, fault distance = 70 km from A.        1             1
Fault resistance set to 12 Ω, fault distance = 90 km from A.        5.8528e-1?    0
Fault resistance set to 15 Ω, fault distance = 70 km from A.        0.9192        1
Fault resistance set to 15 Ω, fault distance = 90 km from A.        1.1459e-21    0
Source at B set to 4.5 GVA, fault distance = 75 km from A.          1             1
Source at B set to 4.5 GVA, fault distance = 90 km from A.          2.3569e-13    0
Source at B set to 4 GVA, fault distance = 70 km from A.            1             1
Source at B set to 4 GVA, fault distance = 90 km from A.            3.5749e-12    0
Source at A set to 18 GVA, fault distance = 75 km from A.           1             1
Source at A set to 18 GVA, fault distance = 90 km from A.           7.1024e-15    0
Table 2 - Results of the ANN for unforeseen data.

8-Conclusion

In this paper the use of ANN as a pattern classifier working as a distance relay was
investigated. The results obtained with this scheme are very encouraging. The ANN scheme can
operate correctly in locating the fault point. The scheme can be extended to include more
variations of parameters in the training set, in order to avoid misoperation as seen in the
paper for the case of low fault resistance. It is also necessary to point out some problems related
to the ANN application. The initial network configuration is totally empirical and may not result
in the best performance of the scheme. The choice of training points can also be a significant
problem. These are some of the points that can influence the speed of convergence of the weights
and consequently the performance of the scheme.
However, this tool opens a new dimension in relay philosophy which should be widely
investigated in order to solve some of the various problems related to the distance protection of
transmission lines.

References

[1] S.A. Khaparde, P.B. Kale and S.H. Agarwal, "Application of Artificial Neural Network in
Protective Relaying of Transmission Lines", IEEE, 1991.

[2] H. Kanon, M. Kaneta and K. Kanemaru, "Fault location for transmission lines using inference
model Neural Network", Electrical Engineering in Japan, Vol. 111, No. 7, 1991.

[3] K.S. Swarup and H.S. Chandrasekharaiah, "Fault Detection and Diagnosis of Power Systems
Using Artificial Neural Networks", 1991.

[4] M.A. El-Sharkawi, R.J. Marks and S. Weerasooriya, "Neural Networks and Their
Application to Power Engineering", Control and Dynamics Systems, Vol. 41, pp. 359-451, 1991.

[5] R. Aggarwal, Artificial Neural Networks for Power Systems, short-course notes.

[6] J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation,
Addison-Wesley Publishing Co., 1991.

[7] H. Demuth and M. Beale, "Neural Network Toolbox - For Use with Matlab", 1992.

[8] A.T. Johns, R. Aggarwal, "Digital Simulation of Faulted EHV Transmission Lines with
Particular Reference to Very High Speed Protection", IEE Proceedings, Vol. 123, pp. 353-359,
April 1976.

CONSEQUENCES OF CURRENT FAILURES FOR QUALITY ASSURANCE

Hans R.Kautz
Grosskraftwerk Mannheim AG
Germany
ABSTRACT
In August 1994, in the afternoon, in the course of an operational inspection tour, the staff
noticed a temperature rise in the condensing turbine 2 of the N230 power plant and moisture in the
soundproof hood. Approximately half an hour later an accurate check by the plant management
detected an increasing flow noise due to escaping steam. The immediate decision was to
unload the turbine and to shut it down. During shutdown a steam explosion occurred. The
cause was the rupture of a hot reheat line directly upstream of the right turbine side.

Inspection and scope


Safety requirements and the necessity of a high availability, especially in case of industrial
plants, may require an extended scope of inspection.
Within the inspections an extraordinary amount of creep damage was detected. As a
consequence, the scope of inspection was expanded with the objective of a thorough assessment of
the plant condition. The knowledge base was substantially improved.

Damage Cause
There are indications on the failed tube that an inadequate heat treatment of the tube
contributed to the failure.

Recommendations
The systematic detailed analysis of the hot reheat line allows the following recommendations
for pipe line systems operated for long periods of time (> 150,000 hours):
- Only component metallography may give early, reliable indications as to creep damage
initiation.
- If a certain degree of damage is detected, the following measures are recommended,
considering the intended future mode of operation:
  - monitoring of continued operation and replacement of the component during the next
overhaul of the plant, with prolonged operating time;
  - load reduction by decreasing the temperature and/or pressure, and repeated inspection or
replacement during the next overhaul.
- When replacing individual components, care shall be taken that even straight pipes may be
highly loaded and therefore damaged.

DESCRIPTION OF EVENTS
In August 1994, in the afternoon, in the course of an operational inspection tour the staff of
the Bayer-Uerdingen power plant noticed a temperature rise in the condensing turbine #2 of
the N230 power plant and moisture in the soundproof hood. Approximately half an hour later
an accurate control by the plant management detected an increasing flow noise due to escaping
steam. The immediate decision was to unload the turbine and to shut it down. During shut-
down a steam explosion occurred. A sudden massive steam release into the turbine hall of the
Uerdingen plant occurred. The cause was the rupture of a hot reheat line (Figure 1) directly
upstream of the right turbine side. The failed pipe section was cut out and submitted for further
examinations. The examination procedure was established by a Working Group three days
later.
In order to clarify the failure cause, extensive surface microstructure examinations were
conducted and non-destructive testing of the failed component was performed, including other
relevant pipe system components.
All preliminary examination results so far indicate that the failed pipe is a unique event with
respect to damage in all of the pipe system.

Failure Appearance

The pipe ruptured at the 6 o'clock position (towards the plant control room). The crack
orientation was in the pipe axis and the pipe body unfolded. In the lower section of the girth
weld the crack branched. Beyond the branching the crack ran on above and below the girth
weld. The macroscopic appearance of the rupture surfaces and the crack mouths of the two
cracks are an indication of the ductile nature of the rupture in this area.
The macroscopic result of the upper girth weld examination confirms the diagnosis of ductile
crack rupture. As opposed to the lower girth weld, this crack did not branch, but ended short
of the weld.
The inner pipe surface has a dark-grey oxide film (Figure 2). In the crack area this film is
absent. By appearance it is a magnetite layer which spalled in the section of maximum
deformation during unfolding of the pipe.
A part of the outer surface of the failed component displays grooves oriented in the direction
of the component circumference. It is obvious that the component was ground in the girth
welds area in the course of non-destructive examinations.

Failed Component Data


The failed pipe line section consists of a vertical pipe between the girth welds 51 (connection
upright pipe bend/failed component) and 52 (connection failed component/transition cone to
trip valve casing) of the right reheat line. The two girth welds 51 and 52 were rehabilitated in
1990.

Dimensions
- length of failed pipe section: 835 mm
- inner diameter: 150 mm
- minimum wall thickness: 13 mm

Material
14 MoV 6 3 (a molybdenum-vanadium steel)

Operating conditions
- pipe line medium: steam
- operating temperature: approx. 525 °C
- operating pressure: approx. 104 bar
- operating hours: approx. 217,000

EXAMINATION PROGRAM
The examination program established by the Working Group for the first phase of the failure
examination included an as-is condition report and a non-destructive examination of the failed
component plus other relevant components of the pipe system.

Non-Destructive Examinations
Failed component - measurement of circumference, UT volumetric examination, radiographic
examination, material determination, surface microstructure examination, UT wall thickness
measurement.
Cone between failed component and trip valve casing - material determination, surface
microstructure examination, UT wall thickness measurement.
Reheat line left side, straight pipe upstream of trip valve - same examinations as for the
failed component.
Reheat line right side, bends #1 and #2 upstream of failed component - material
determination, surface microstructure examination, UT wall thickness measurement,
measurement of circumference/ovality.

PERFORMANCE OF EXAMINATIONS

Material Determination
With random checks the metal alloy was determined by means of X-ray fluorescence
analysis.

Measurement of Circumference
The circumference was measured with a flexible metal gage.

Surface Microstructural Examination

Description of Damage Classes
Practical experience of field metallography with replicas has indicated that the damage classes
as published by VdTÜV and VGB are not appropriate. In particular, damage classes 2 and
3 do not allow the required differentiation. In accordance with long practical experience,
the damage classes were newly defined for the present guideline, as below:

assessment class    structural and damage conditions
0     as received, without thermal service load
1     creep exposed, without cavities
2a    advanced creep exposure, isolated cavities
2b    more advanced creep exposure, numerous cavities without preferred orientation
3a    creep damage, numerous oriented cavities
3b    advanced creep damage, chains of cavities and/or grain boundary separations
4     advanced creep damage, microcracks
5     large creep damage, macrocracks

The surface microstructure was examined by way of replica of ground and polished areas.

Polishing technique: electrolytic
Etching agent: 3 % alcoholic HNO3
Replica: transcopy, gold-doped
Evaluation: light microscope

UT Wall Thickness Measurement

The wall thickness was measured with an analog UT probe.

UT Volumetric Examination

The volumetric examination was performed with an analog UT probe.

Radiographic Examination

Radiography was performed by way of an X-ray tube inside the pipe.

Ovality Measurement

The ovality was measured with a reference caliper over two pipe cross axes staggered by 90°.

Hardness Measurement

The hardness was measured by an Equotip hardness meter.

RESULTS
Examinations of Failed Component
Measurement of Circumference

The results of the circumference measurements - starting at the lower edge of the cut pipe
section with a spacing of 50 mm up to the component upper edge - are as follows:
The rupture is macroscopically undeformed; the pipe is not expanded.
By all appearances the pipe was chamfered; the pipe circumference increases from the
lower to the upper edge of the component.

Material Determination
The random check of the material confirmed the steel to be 14 MoV 6 3 (molybdenum-
vanadium). A wet chemical analysis of the material was performed within the destructive
examinations of the failed component.

Surface Microstructural Examination


A replica of the failed component surface was made. There were 28 replica locations within a
given grid: 7 levels in the direction of the component axis, including the girth welds, and
positions staggered by 90° over the pipe cross section across the upper and lower girth welds.

The replicas were evaluated according to VGB data sheet TW 507/TW507e, 1992 issue.

In conclusion it can be stated that:



- in all cases the weld metal of the two girth welds displays a microstructure of
assessment class 1;
- the areas between 6 o'clock and 12 o'clock of the pipe circumference have a microstructure
of assessment class 3a;
- the microstructure in the areas between 3 o'clock and 9 o'clock of the pipe circumference
is of assessment class 2b;
- the strongest microstructural damage was found in the 6 o'clock position of the pipe
circumference, 220 mm away from the lower girth weld - assessment class 3b. This is also
the area with the largest crack opening.

US Wall Thickness Measurement


The wall thickness was measured at the replica locations. The following observations are worth
noting:
The wall thickness is reduced within the crack area at 6 o'clock along all of the pipe axis;
the maximum thickness is found in the 3 o'clock and 9 o'clock positions.
The wall thickness ranges between 11.3 mm and 14.6 mm. The cause of these wall thickness
variations may be the facing of an oval pipe. There may be a relation between the degree of
microstructural damage and the wall thickness variation.

UT Volumetric and Radiographic Examinations

The UT volumetric and the radiographic examinations did not detect any indications in the pipe
wall. Discrete indications on the inner pipe surface are caused by the oxide film.

After removal of the internal and external deposits within the destructive examination of the
failed component a surface crack test was conducted.

Hardness Test

The hardness test was performed at the locations of replica. The following observations are of
interest:
In the crack area in the 6 o'clock position the material hardness was reduced;
the hardness values range between 133 and 153 HB, with a minimum at 110 HB. The
total hardness level is at the lower limit of the hardness range to be anticipated.

The hardness was measured on metallographic specimens within the destructive examination of
the failed component.

FAILURE CAUSES
Prior to and during the war, 'economy steels' were applied in German power plant
construction. At the beginning of the fifties, when it became known that in Great Britain an
economy steel alloyed with only vanadium and molybdenum was successfully applied, extensive
creep strength examinations were performed. The material revealed excellent high-temperature
strength despite the low alloy content as compared to 11 and 22. However, operational experience
with this material was lacking. When around 1960 the first contracts for pipe lines of this material
were awarded, experiments as to the best heat treatment for pipes and forgings were started, as
well as the development of welding fillers. However, the difficulties in heat treating the pipes
became so enormous that a (public) meeting of the VGB Materials Committee was called in
1963. Despite the knowledge and findings gathered at this meeting, it took almost another ten
years until unanimity was reached on the adequate heat treatment of the tubes. Heat treatment
data for forgings differed, but pipe manufacturers also established differing parameters over the
years. It took a long time until it was clear how sensitive the material was with respect to cold
forming or manipulations such as heat treatment. The greatest difficulty was to harmonize the
theoretical knowledge and the operational limitations. It certainly happened in those days that
only stress relief annealing was performed instead of a heat treatment.
There are strong indications on the failed tube that an inadequate heat treatment of the tube
contributed to the failure.

INSPECTION AND SCOPE


Safety requirements and the necessity of a high availability, especially in case of industrial
plants, may require an extended scope of inspection.
Within the inspections an extraordinary amount of creep damage was detected. As a
consequence, the scope of inspection was expanded with the objective of a thorough
assessment of the plant condition. The knowledge base was substantially improved.

RECOMMENDATIONS
The systematic detailed analysis of the hot reheat line allows the following recommendations
for pipe line systems operated for long periods of time (> 150,000 hours):
Only component metallography may give early, reliable indications as to creep damage
initiation.
If a certain degree of damage is detected, the following measures are recommended
considering the intended future mode of operation:
Monitoring of continued operation and replacement of component during next overhaul
of the plant with prolonged operating time.
Load reduction by decreasing the temperature and/or pressure and repeated inspection or
replacement during next overhaul.
When replacing individual components care shall be taken that even straight pipes may be
highly loaded and therefore damaged.

Figure 1: Ruptured Hot Reheat Pipe

Figure 2: Internal Oxide Film of Ruptured Reheat Pipe



IMPROVEMENTS ON WELDING TECHNOLOGY BY


AN EXPERT SYSTEM APPLICATION (*).

Luiz Augusto D. Correa, Aludo Iwasse

PETROBRAS/REPAR
PO Box 009
83700-970 Araucária, Paraná, Brazil
(tecpar@lepus.celepar.br)

Milton P. Ramos, Lucio M. Silveira, Fabiano H. Budel

Instituto de Tecnologia do Paraná - TECPAR
PO Box 357
81310-020 Curitiba, Paraná, Brazil
(tecpar@lepus.celepar.br)

Julio Cezar Nievola

Centro Federal de Educação Tecnológica do Paraná - CEFET-PR
(nievola@dainf.cefetpr.br)

ABSTRACT

The Welding Expert System SES was developed with the main purpose of helping people involved
in welding procedure qualification and in the selection of qualified procedures. The generation or
selection of procedures through the SES is made in accordance with the ASME code, Section IX,
and several project codes applied to process plant. Few data entries are required to generate a new
welding procedure or to select a qualified procedure from its database; these data entries are
usually the base metal specification and thickness.

Besides the essential standard variables, the SES prepares the procedure considering the
environmental constraints imposed on the welding joint in service. Hydrogen damage, stress
corrosion cracking (SCC) and weld decay are some process plant environmental constraints
analyzed by the system.

The development of this Welding Expert System is justified by the requests of welding experts and
the need to meet quality and productivity improvement goals, required by the industrial sector in
Brazil.

Key words: Artificial Intelligence, Expert System, Welding Technology, Qualification.

(*) Project supported by CNPq/RHAE no. 610094/93-9 and FINEP no. 56.94.0274.00.



INTRODUCTION

Technological developments require great efforts and financial resources. However, these
resources are scarce, mainly in developing countries. Large amounts of these resources are spent
in training and in the development of expertise. The Expert Systems Technology (ES) offers an
important alternative to disseminate this expertise (1).

This alternative also supports the major emphasis Brazil has placed on increasing quality,
especially in industrial production. The use of an ES tends to reduce costs, to standardize
procedures and to facilitate information storage and retrieval, factors that appear in any
quality management system.

Welding technology comprises several areas of knowledge, and mastering it requires a great
number of experts in these areas. Thus, the development of an ES applied to the welding
technology becomes an important tool in the dissemination of knowledge when human resources
are scarce.

In this context, and supported by the strategic importance of ES development, the Welding
Expert System (Sistema Especialista em Soldagem - SES) was structured. It is capable of
generating Welding Procedure Specifications (WPS) and managing a qualified procedure database.

The initial goal of the SES is to deal with welding procedures for carbon steel, alloy steel and
stainless steel base metals. Shielded metal arc welding (SMAW) and gas tungsten arc
welding (GTAW) are the welding processes covered at this stage. This arrangement fulfills
almost all the welding procedure demand for the assembly and maintenance of process plant.

The system generates WPS in accordance with the ASME code, Section IX, and the PETROBRAS N-133
standard (2,3). Requirements for welded joints from project codes like ASME VIII, ASME I,
ANSI B31.1 and ANSI B31.3 (4,5,6,7) are also met. Filler metal specifications in procedures generated by
SES are in accordance with AWS/ASME Section II (8). This structure is certainly one that better
represents the knowledge concerning welding technology, since the most important welding
parameters are included.

THE KNOWLEDGE DOMAIN

The generation or selection of a welding procedure requires the knowledge of several areas of
expertise. Welding process, metallurgical properties and features of both filler and base metals and
the welding metallurgy are some of these areas. Furthermore, the procedures should be qualified
according to the appropriate standards. These standards and codes are concerned chiefly with the
mechanical performance of the welded joint. However, considering the aggressive environment
to which the welded joint will be exposed, the metallurgical performance is important
information for preparing procedures fit for usage. Certainly the welded joint is the most critical
region in equipment working in an aggressive environment (9). Besides the qualification codes, the
properties and features of the base and filler metals, and the welding metallurgy knowledge, there
is also information that makes SES capable of adjusting the welding procedure parameters
according to the environment and its corrosion process. This knowledge includes corrosion
processes like hydrogen damage and stress corrosion cracking (SCC) in carbon steel welding, weld
decay in austenitic and ferritic stainless steel, knife-line corrosion in stabilized austenitic stainless
steel, and the usual corrosion processes for other standard base metals (10,11).

Although it is possible to obtain a procedure from the SES with few data entries (usually the base
metal specification and its thickness), the user may, at his convenience, modify the parameters of
the procedure being prepared by the system. Therefore, the procedure may be totally
developed by SES or developed interactively with the user. This configuration makes the system
more dynamic, since each procedure parameter can be evaluated and changed by the user at
his convenience. However, this requires a flexible inference engine and an enlarged knowledge
base capable of evaluating any parameter change. SES warns the user about the feasibility of
changes and, if necessary, points out further parameter changes required to reach the desired
welding properties and qualification. For example, if the user changes a basic filler metal selected
by the system to weld a carbon steel subject to hydrogen cracking by the welding process, the
system will display a cold crack risk warning. In this case, if the user keeps his new option, the
system will change parameters like the preheating and interpass temperatures to avoid cold cracks.
This arrangement and knowledge base also turn the SES into a tutor capable of transferring its
expert knowledge to the user, beyond just preparing a welding procedure.

Knowledge is completed with quality control techniques at the execution level. Filler metal storage
and handling, joint preparation, welding and heat treatment techniques are supplied by the system
when the welding procedure specification (WPS) is issued.

SYSTEM DESCRIPTION

Artificial intelligence and expert systems technologies have been used in a large number
of applications related to industrial problems (12,13). Both try to simulate or emulate intelligent
human behavior in terms of computational processes. Specifically, knowledge-based systems and
expert systems try to reproduce the performance of a highly skilled professional in a specific
problem-solving task (14). The main advantage of these technologies is to preserve and distribute
the human expert knowledge.

The nature of welding procedure qualification does not present a predetermined solution
method; therefore it fits perfectly as an expert system application. This occurs because the solution
of this task needs not only the information contained in codes and standards, but also the
experience of a welding expert for the correct manipulation of this information and the search for
a better solution.

The SES structure is shown in Figure 1. It is composed of the following modules:



Database: the manager module of the WPS/PQR base. It allows the user to view and print
these documents, and to delete them if the user has this permission. As the stored documents
belong to the company that generated them, they can be eliminated from the WPS/PQR
database only by using passwords.

Knowledge base: The AI method used to represent and organize the domain knowledge was a
"production system" with a forward-chaining method of inference, where the knowledge base
consists of rules, called production rules, e.g.,

IF   standard is ASME IX AND
     standard is PETROBRAS N-133 AND
     project standard is ASME VIII AND
     P-number is 1 AND
     thickness > 20 mm AND
     thickness < 30 mm AND
     carbon equivalent > 0.45 AND
     carbon equivalent < 0.47

THEN pre-heating temperature = 100 °C AND
     interpass temperature = 100 °C (minimum).

This choice was made because production systems offer good features in terms of modularity and
uniformity. Forward chaining is applied because the WPS generation starts with little available
information and tries to draw a conclusion appropriate to the new WPS.
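
The SES itself was written in PROLOG (see below); purely to illustrate the forward-chaining idea on a rule of the kind shown above, a minimal Python sketch follows (the fact and rule structures are hypothetical, not the SES code):

    def forward_chain(facts, rules):
        # Naive forward chaining: fire every rule whose condition holds on
        # the current facts, add its conclusions, repeat until nothing changes.
        changed = True
        while changed:
            changed = False
            for condition, conclusion in rules:
                if condition(facts) and not conclusion.items() <= facts.items():
                    facts.update(conclusion)
                    changed = True
        return facts

    rules = [(
        lambda f: (f.get("standard") == "ASME IX" and f.get("P_number") == 1
                   and 20 < f.get("thickness_mm", 0) < 30
                   and 0.45 < f.get("carbon_equivalent", 0) < 0.47),
        {"preheat_C": 100, "interpass_min_C": 100},
    )]

    facts = {"standard": "ASME IX", "P_number": 1,
             "thickness_mm": 25, "carbon_equivalent": 0.46}
    print(forward_chain(facts, rules))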

The knowledge module of SES gives an overview of the knowledge base content, comprised in
various files. There it is possible to see the production rules and the facts about filler metals and
base metals (extracted from welding codes) stored in the system. The main objective of this module
is to disseminate the knowledge acquired during the system's development, and the information
contained in standards related to filler metals and base metals.

Qualification: This module contains the inference engine, which tries to produce new WPS/PQR
documents. The rules and facts of the knowledge base are used with this objective; they are
also used to do an intelligent search in the WPS/PQR database and to manage the explanation
facilities. The intelligent search seeks a qualified WPS stored in the database that satisfies the
conditions of a new welding process requested by the user. If such a qualified WPS does not exist,
the system generates a new WPS to be qualified in the laboratory, which contains all information
concerning how to prepare the test coupon. The explanation facility explains how a given
solution was obtained during WPS elaboration, to assure the reliability of the conclusions shown by
the system. See "Procedure Selection and Generation" in this paper.

User interface: This module provides a friendly interface for the user, where it is possible to
consult the WPS/PQR database, information about base and filler metals, and to generate a new
WPS/PQR.

Expert interface: A password-protected interface where the expert can change parameters used
by the rules in order to adapt the knowledge stored in the system, providing data and tools for the
knowledge base evolution.

[Figure 1 - SES structure: the user and the welding expert interact through the user interface and the expert interface respectively; both are connected to the qualification module (inference engine), which works on the database and the knowledge base]

The SES was developed in 2 years by a team of three knowledge engineers and two welding
experts. The tool used in the development was a PROLOG interpreter/compiler. It runs on
IBM-PC compatible machines under the MS-Windows environment.

PROCEDURE SELECTION AND GENERATION

A qualified Welding Procedure Specification (WPS) is intended to guarantee that the required
mechanical properties will be reached when welding a joint. In the same way, the qualification of
welders and operators is intended to assure that personnel are able to weld a specific joint
properly.

Project, construction and assembly standards and codes present mandatory requirements for
welding qualification and welder performance qualification. However, procedure qualification and
workmanship skill certification are basic for a Quality Assurance System in any production
activity.

Welding procedures are qualified according to these standards when a test specimen (welded
according to the parameters pre-established in the procedure) provides the required properties
for its intended application. These qualified procedure parameters determine the welding
procedure application fields through the essential variable ranges established in the codes. Thus, a
procedure qualified for a carbon steel base metal whose P-Number is 1 may always be used to
weld any P-Number 1 base metal. All other essential variables, like F-Number, A-Number,
thickness, and pre-heating and treatment temperatures (2), have to be considered.

This systematic approach, established by the welding qualification codes, results in a large
application field for a PQR. Even if only a few PQRs are required to weld a variety of joints, a
careful and arduous selection from a procedure database is required. Considering these aspects of
the codes, the SES was designed to minimize the routine work of procedure development by
making an intelligent WPS selection from its qualified procedure database. The WPS database
stores qualified WPS and PQR, where the search and selection are made according to the PQR
variables.
The WPS qualification and search modules are detailed in Figure 2. According to this flow chart,
to retrieve a WPS the base metal specification, qualification standards and welding
process are required. These data entries are presented by the system as options. More data entries
may be requested by the system, such as the environmental conditions to which the welded joint
will be subject and other physical constraints. After this search, all available qualified WPS are
pointed out for user acceptance, or the Qualification Module becomes active when a new WPS is
required.

Generation of a new WPS starts with the filler metal selection. The system points out the most
suitable filler metal and others with which welding is possible. Another filler metal may be chosen
by the user from the tabulated AWS filler metals in the system database. In any case, the system
points out the special conditions for applying a filler metal other than the one indicated by the
system, and the metallurgical and mechanical constraints are given through warnings.

When a WPS is generated, all procedure variables may be modified by the user in the Updating
Variables Module. For each variable updated, the system will provide warnings when the change
could result in a poor WPS. At last, all documents required to qualify the welding are issued:
WPS, PQR, Test Specimen Preparation and Weld Instructions.

The tests and results of analyses are input into the system through the Results Data Entry Module.
Quantitative results are analyzed and approved by the system, while qualitative results must be
approved by the welding inspector. As may be inferred from the flow chart, the SES structure was
developed to be able to qualify or to make an intelligent WPS selection with few data entries;
base metal specification and thickness are usually enough. Thus, if it is necessary to modify any
variable of a WPS established by the system to adapt it to a specific usage, each parameter is
reviewed by the system.

[Figure 2 - SES Flow Chart: from BEGIN, the qualification process consults the WPS/PQR database and the knowledge base (base metal rules, metallurgical analysis rules and other rules) through to END]



CONCLUSION

Assembly and maintenance welding play an important role in the reliability and safety of process
plants. SES was developed with the main purposes of improving the work of welding experts and
of the personnel in charge of equipment integrity, and of disseminating welding knowledge and
technology. Reaching these aims, SES is surely contributing to improve that reliability and safety.

The SES knowledge base, the inference engine and the database design allow the system to be
improved beyond its initial scope, as well as the use of different base metals, dissimilar welding
and process combinations, making SES a modular system capable of being enlarged to meet
specific user necessities.

ACKNOWLEDGMENTS

Several people and institutions gave important contributions to the system development. We want
to express our gratitude to TECPAR and PETROBRÁS for the support.

REFERENCES

1- Barborak, D.M.; Dickinson, D.W.; and Madigan, R.B. 1991. PC-Based Expert
Systems and their applications to welding. Welding Journal 70 (1): 29-s to 38-s.

2- ASME Boiler and Pressure Vessel Code, Section IX, 1992 Edition. Qualification
Standard for Welding and Brazing Procedures, Welders, Brazers, and Welding
and Brazing Operators. American Society of Mechanical Engineers, New York, N.Y.

3- PETROBRÁS N-133, January 1995. Soldagem. Petróleo Brasileiro S.A., Rio de Janeiro,
Brazil.

4- ASME Boiler and Pressure Vessel Code, Section VIII, Divisions 1 and 2, 1992 Edition.
Pressure Vessels. American Society of Mechanical Engineers, New York, N.Y.

5- ASME Boiler and Pressure Vessel Code, Section I, 1992 Edition. Power Boilers. American
Society of Mechanical Engineers, New York, N.Y.

6- ASME Code for Pressure Piping B31.3, 1993 Edition. Chemical Plant and Petroleum
Refinery Piping. American Society of Mechanical Engineers, New York, N.Y.

7- ASME Code for Pressure Piping B31.1, 1993 Edition. Power Piping. American Society of
Mechanical Engineers, New York, N.Y.

8- ASME Boiler and Pressure Vessel Code, Section II, 1992 Edition. Materials
Specifications for Welding Rods, Electrodes and Filler Metals. American Society of
Mechanical Engineers, New York, N.Y.

9- Metals Handbook, 9th edition. Volume 13 - Corrosion. 1987. American Society for
Metals.

10- Metals Handbook, 9th edition. Volume 6 - Welding, Brazing and Soldering. 1983.
American Society for Metals.

11- Kou, S. 1987. Welding Metallurgy: 411. New York: Wiley.

12- Feigenbaum, E. A.; Friedland, P. E.; Johnson, B. B.; Nii, H. P.; Schorr, H.;
Shrobe, H.; Engelmore, R. S. 1994. Knowledge-based systems research and
applications in Japan, 1992. AI Magazine, vol. 15, no. 2, p. 29.

13- Crowe, E. R.; Vassiliadis, C. A. 1995. Artificial Intelligence: Starting to realize its
practical promise. Chemical Engineering Progress, January.

14- Schalkoff, R. J. 1990. Artificial Intelligence: An Engineering Approach: 646. Singapore:
McGraw-Hill.

Integrated and Intelligent CAE Systems for Nuclear Power Plants

T. Sato
Nuclear Plant Design and Engineering Dept.
H. Futami and T. Hamano
Nuclear Engineering Information System Dept.
N. Narikawa
Research and Development Center

TOSHIBA CORPORATION
Yokohama, Japan

1.ABSTRACT

This paper presents an overview of integrated and intelligent CAE systems for nuclear
power plants. We have integrated two-dimensional CAD systems, three-dimensional CAD
systems and a nuclear power plant database system. We have also developed an
automated routing system and a design check system. The design and engineering of a
nuclear power plant covers various technical fields, and the information which is created in
many fields is exchanged and utilized in parallel. As Computer Aided Engineering (CAE)
is applied for several plants, huge and diverse information has been accumulated on the
Data Base Management System (DBMS). TOSHIBA has integrated its CAE systems to utilize
this reliable information and to make decisions efficiently. We have integrated existing
two-dimensional (2D) CAD systems, a three-dimensional (3D) CAD system and a
relational database system which stores engineering information such as design conditions,
maintenance histories and inherent properties. As a design automation system, we have
developed an automated design check system. These systems are the main parts of the
plant engineering framework and are utilized in practical design. This paper
describes the efforts TOSHIBA has been making to improve the user interface
in the integrated environment and to enrich intelligent applications specialized for nuclear
engineering.

2.INTRODUCTION

A nuclear power plant is composed of a large number of pieces of equipment, pipings and so
on. It takes 8 or 9 years to complete the design and engineering from the beginning to
commercial operation. During this period, careful engineering is required to conform to strict

design standards regarding safety, reliability, reduction of radiation exposure and so on.
On the other hand, reduction of the engineering period is indispensable to cut down plant costs
and the construction period. Furthermore, operating plants have to be maintained and
improved for more than 30 years. TOSHIBA has constructed and applied distributed
processing systems based on engineering workstations and networks around the integrated
DBMS for large scale plant engineering, and has promoted the improvement of the
reliability and efficiency of common information usage in many fields in parallel.

3.CONCEPTS OF SYSTEM DEVELOPMENT

TOSHIBA has been developing CAE systems according to the following three concepts,
on the basis of its rich experience in plant engineering, construction and maintenance and
its know-how in computer applications.

(1) System Integration
(2) Engineering Visualization
(3) Information Processing Automation

That is, various local systems are integrated to improve efficient information usage.
A comfortable user environment is supplied by a visualized user interface, building on
recent progress in computer graphics technologies. Furthermore, information processing
is automated to improve the reliability and efficiency of engineering. Based on these
concepts, TOSHIBA is promoting the improvement of design quality and engineering
efficiency.

3.1 Plant Engineering


3.1.1 Overview

Figure 1 illustrates a typical engineering process in a nuclear power plant. Plant
engineering of this type consists of various processes: for example, project management,
design, manufacturing, construction, plant operation, maintenance and so on.
The main features of nuclear power plant engineering are as follows.

(1) A plant project spans more than eight years from the initial stage to full commercial operation,
and a nuclear power plant operates for more than 30 years. It is very important to manage
not only the design data but also historical data such as maintenance and replacement
records.

(2) There are many kinds of components, including pipings, mechanical equipment,
valves and so on. Furthermore, there are strict design standards and constraints as
regards plant safety, reliability, minimization of radiation exposure, etc. Thus, design
verification plays a very important role.

(3) A number of companies and departments within companies share the plant engineering.
Thus it is essential to set up an information infrastructure and measures to provide data
security.

Plant design can be characterized by certain special features as itemized below.

(4) Before the introduction of 3D CAD systems, scale plastic models were used for the
design and review process. Layout designers also used 2D CAD systems for drafting,
but it is only recently that 3D CAD systems have come into gradual use.

(5) Plant design is closer to VLSI design than to mechanism design; system design takes
place before layout design is considered. In some cases, the layout design process
requires that alterations be made to the system design.

It is desirable to set up an engineering framework which takes account of these features
and requirements.

3.1.2 Design Process

The design process is divided into a number of design phases, as shown in Figure 1.
Of particular importance are the system design and layout design processes. The following
is an overview of both these design stages.

(1) System Design

It is very common in VLSI design to carry out the system design process before
beginning layout design. In the case of VLSI, system design includes functional design,
logic design and circuit design. In the case of mechanical design, however, after the
specifications have been given, the designers begin by drawing 2D views or creating 3D
models using their experience and knowledge. There are few function description
languages or symbolic representation methods for this process. In the design of a nuclear
power plant, several different types of system have to be considered at this stage.
For example, the LPCS, the Low-Pressure Core Spray system, is one of the critical
cooling systems. At the system design stage, a Piping and Instrumentation Diagram (P&ID),
which is similar to a circuit diagram, is created. System designers determine the specifications of
equipment, pipings, valves, etc. at this stage. After carrying out certain simulations,
the design data are sent to the layout designers. In many cases, data from earlier plants
are reused after modification by the designers. For this reason, proper data management
is very important.

(2 ) Layout Design

Using the system design data supplied to them, layout designers determine the positions
of equipment and then of piping segments, elbows, valves, etc. Piping layouts are
created essentially by designers, whereas in VLSI design some automated CAD systems are used.
Wiring patterns are relatively simple in comparison, so several layout algorithms
have been developed for them. In piping design, however, the difficulty is how to lay out all the
piping according to very strict design conditions. There are interactions with the system
design process, just as in VLSI design. At the system design stage, only schematics are
developed; that is, the physical dimensions are not taken into account. For example,
when it is necessary to add a new pipe or to change the piping order, a layout designer
has to return the alteration to the system designers.

3.2 System Integration

The design and engineering of a nuclear power plant covers various fields which are closely
connected with each other. The data which are created in each system are used as input data for
other engineering fields. Reliability and consistency of the data exchanged in parallel
and sequentially among many fields are strongly required for the engineering of a nuclear power
plant. TOSHIBA developed the Nuclear Power Plant Database Management
System (PDBMS) as the core for unified information management and information exchange
(see Figure 2). PDBMS has the following specific features.

(1) Open DBMS which is composed of multiple files and managed by the Master Parts
List
(2) File units which conform to the actual engineering routine and the configuration of local
systems
(3) Strong administration of consistent information by history management and
discrepancy detection

Figure 3 shows the relation between the PDBMS and various data sheets for equipment,
valves and so on. Differently from stand-alone P&ID CAD systems, information
other than the attributes of the P&ID is referred to from external DBMSs at cooperating
companies. For example, in the case of a valve list, attributes of the P&ID are sent as
design conditions to the cooperating companies. After completion of the valve design, the detailed
valve information is sent back for approval through the network to the PDBMS at
TOSHIBA. Discrepancy detection is performed automatically at every data
exchange.
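The automatic discrepancy detection can be illustrated by the following sketch; it is a simplification under assumed record structures, not the TOSHIBA implementation.

# Illustrative sketch of discrepancy detection at a data exchange: attributes
# returned by a cooperating company are compared with the design conditions
# kept in the master database for the same valve tag (assumed record layout).
def detect_discrepancies(master_record, returned_record, keys):
    """Report every attribute whose returned value differs from the master value."""
    return {k: (master_record.get(k), returned_record.get(k))
            for k in keys
            if master_record.get(k) != returned_record.get(k)}

master = {"tag": "MOV-101", "rating": "150#", "size_mm": 100}
vendor = {"tag": "MOV-101", "rating": "300#", "size_mm": 100}
print(detect_discrepancies(master, vendor, ["rating", "size_mm"]))
# -> {'rating': ('150#', '300#')}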

4.INTEGRATED CAE SYSTEM

4.1 3D Arrangement Adjusting System

In a nuclear power plant, lots of equipment, pipings, ventilating ducts and cable
trays are arranged in a complex manner. Arrangement adjusting is performed in compliance with
strict design criteria regarding system function, mechanical separation for safety,
shielding, operability, maintainability and so on. The 3D Arrangement Adjusting
System consists of the following subsystems. Using common geometric data, the subsystems
aid arrangement design through their respective functions. Figure 4 shows the
configuration of the system.

4.1.1 Modeling System

Models of equipment, pipings, ducts, cable trays and so on are created and
modified with this system using various parts libraries conforming to regulations and
standards. Generally, initial data input is performed by the batch input system. On the
other hand, addition and modification of data are performed interactively while engineers
examine the model from various points of view. In this system, regional schemes can be defined
freely for every building, floor, room and so on. The attributes for arrangement design
criteria can be defined in each regional scheme for use by various applications.
The geometric model can be modified region by region, so multiple engineers can modify the
same geometric model simultaneously in different regions.

4.1.2 Interference Detection System

Interference detection for pipings, ducts, trays and so on is one of the basic functions
of arrangement adjustment. Generally, for a large scale project such as a nuclear power
plant, geometric models are created simultaneously by different companies which are
responsible for detail design and manufacturing. For this reason, interference detection
is indispensable in arrangement adjusting. Interference can be detected not only as physical
interference but also as imaginary interference involving spaces for access, maintenance,
insulation and installation error. Figure 5 shows an example of imaginary
interference detection regarding jet blow-down from ruptured piping inside the dry well. With
this function, target objects of the jet blow-down are extracted automatically without fail.
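A minimal sketch of such a check is given below, assuming objects approximated by axis-aligned bounding boxes; the clearance parameter stands for the imaginary space (access, maintenance, insulation, installation error) around an object.

# Sketch of physical/imaginary interference detection between two objects
# modelled as axis-aligned boxes; clearance > 0 models imaginary space.
def boxes_interfere(a, b, clearance=0.0):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)); clearance in same units."""
    (ax0, ay0, az0), (ax1, ay1, az1) = a
    (bx0, by0, bz0), (bx1, by1, bz1) = b
    return (ax0 - clearance <= bx1 and bx0 <= ax1 + clearance and
            ay0 - clearance <= by1 and by0 <= ay1 + clearance and
            az0 - clearance <= bz1 and bz0 <= az1 + clearance)

pipe = ((0, 0, 0), (5, 1, 1))
duct = ((5.3, 0, 0), (8, 1, 1))
print(boxes_interfere(pipe, duct))                 # False: no physical contact
print(boxes_interfere(pipe, duct, clearance=0.5))  # True: imaginary clearance violated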

4.1.3 Arrangement Review System

At arrangement review it is very important that engineers in various fields can easily grasp
whether the arrangement conforms to their requirements or not. From this point of view,
this system supports arrangement review with state-of-the-art computer graphics
technologies. With this system, an overall review can be done using graphic
representations such as shading, wire frame, transparency and so on. Comfortable,
real-time arrangement review can be performed for a practical amount of geometric models
by the optimized graphic system. Figure 6 shows a scene of a walk-through simulation in an
ECCS pump room using a large projector. This function is one of the specific features
of this arrangement review system, enhancing the sense of presence at arrangement review.

4.1.4 Simulation System

Computer graphics has been applied as an effective method for motion simulation
and scientific visualization. Computer simulation covers everything from basic animation of motion
trajectories to scientific visualization of analysis results. For instance, in the
arrangement design of a nuclear power plant, the planning of carrying equipment in and out during construction and
the maintainability of equipment in periodic inspections should be considered at the arrangement
adjusting stage. This system simulates these construction works visually for easy
confirmation by the designer, construction engineer and user. At the same time, interference
along the motion trajectory is checked. Figure 7 shows an example of
construction procedure simulation by the 3D simulation system linked with the
construction schedule management system. With these systems, the construction procedure is
visualized automatically.

4.1.5 Plot Plan Adjusting System

This system is used to locate a plant site and to arrange buildings in the early design stage.
It is able to calculate land cut and fill volume, excavation volume under buildings and
the area of readjustment. It is also used to confirm the procedures for access path arrangement,
construction planning of temporary structures, excavation and refilling and so on during the
construction period. Figure 8 shows the carry-in simulation of a large scale module of the
Reinforced Concrete Containment Vessel (RCCV) liner.

4.1.6 Artistic Design System

Keeping pace with the stream of environmental consciousness, photorealistic computer
graphics technologies are applied to the interior design of rooms and the exterior design of
buildings. Recently, our clients have become very interested in improving the environment of plant
operators and workers. Especially in the main control room, where operators are always
working under mental tension, not only the arrangement of operation boards on the basis
of human engineering but also the lighting effect and color coordination of the room have become
the center of clients' interest. In this case, computer graphics technologies such as ray
tracing, texture mapping and so on are applied to represent the inside of the room
realistically. Clients can confirm their requirements in advance. The exterior of a building
being in harmony with its surroundings is also a matter of concern to our clients. Figure 9
shows an example of artistic interior design for the main control room of an Advanced Boiling
Water Reactor (ABWR).

4.2 Visualized User Environment for the Integrated DBMS

As the introduction of CAE progresses, huge and diverse information has been
accumulated on the PDBMS. Up to this time, convenient usages of this common
information have been sought. Using computer graphics technologies, an improved
user environment for the PDBMS has just been accomplished as one of the effective solutions.
TOSHIBA concluded that it is very convenient for engineers to use the model data in the
two-dimensional (2D) CAD system and the three-dimensional (3D) CAD system as an index
to a large scale DBMS. Complicated procedures for data reference are avoided by the
visualized user environment. It is much easier for engineers to use a Piping and
Instrumentation Diagram (P&ID), which is familiar from their routine, as an index for searching
data from the PDBMS than to retrieve data only by the name of a data field. The same holds
for the 3D geometric model, which every engineer can recognize as actual
objects. Especially in the case of the management of an operating plant, data handling by
geometric models is very efficient for maintenance history management, which is
performed part by part. Figure 1 shows the system configuration of the integrated CAE
system.

4.2.1 Enhanced Information Reference for the PDBMS

In this system, diverse information in the PDBMS can be referred to using the P&ID and the 3D
geometric model as indices. The user can select information from the PDBMS, set display
formats and register them freely on the screen. Figure 10 shows an example in which
information is referred from multiple files on the PDBMS.

4.2.2 Maintenance History Management of Valve

Over ten thousand valves are installed in a nuclear power plant. A number of them
are very important for plant safety. This system manages diverse information such as
arrangement adjusting status for maintenance, detailed design information for purchasing,
parts for exchange and inspection history. The system is linked to an optical disk system for
referring to drawings and diagrams. Figure 11 shows a situation in which the design information
and maintenance history of a motor operated valve are picked up.

4.2.3 In Service Inspection History Management

In-Service Inspection (ISI) of important equipment and piping is required by
regulation. This system assists in managing ISI engineering, such as inspection planning and
history management of weld lines.

4.2.4 Countermeasure for Erosion Corrosion Failure

Detection of wall thickness reduction in piping systems is required from the point of view of
preventive maintenance. This system extracts piping that has some possibility of trouble
due to erosion/corrosion failure, according to the internal logic of the system and piping
attributes such as design information and operating conditions. The corresponding piping is
displayed on the P&ID system. All kinds of information regarding maintenance history and
inspection results can be referred to part by part. Figure 12 shows the situation in which
information such as operating conditions and inspection history regarding erosion/corrosion
is displayed.
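The extraction logic can be pictured as a simple rule-based screen, as sketched below; the attribute names and thresholds are assumptions for illustration, not the rules actually coded in the system.

# Illustrative screening rule in the spirit of the internal logic described
# above; thresholds and attribute names are assumed, not the real rules.
def erosion_corrosion_candidates(pipes):
    flagged = []
    for p in pipes:
        wet_steam = p["medium"] == "wet steam"
        fast_flow = p["velocity_m_s"] > 3.0
        susceptible = p["material"] == "carbon steel"
        if susceptible and (wet_steam or fast_flow):
            flagged.append(p["line_id"])
    return flagged

lines = [
    {"line_id": "FW-12", "medium": "water", "velocity_m_s": 5.2, "material": "carbon steel"},
    {"line_id": "MS-03", "medium": "dry steam", "velocity_m_s": 2.0, "material": "alloy steel"},
]
print(erosion_corrosion_candidates(lines))   # -> ['FW-12']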

5.INTELLIGENT CAE SYSTEM

Two intelligent CAE systems are used for design automation. Piping design of a
nuclear power plant comprises two phases: the model generation phase, followed
by the constraints checking phase.

5.1 Automated Routing System

As part of our effort to develop a design automation system for plant piping, we
developed a prototype of an automated routing system.

5.1.1 Overview

The piping layout problem is similar to that of VLSI layout. One way to solve the
VLSI layout problem is by mathematical algorithms, but this approach proved impossible
to apply to the piping layout problem because of the strict design constraints.
Therefore, a more flexible and intelligent system was required. We developed a
knowledge-based system using a Lisp-based expert shell; we call this the production
system.

5.1.2 Basic Method

The following is an outline of this new system. Unlike the traditional approach, in
which piping must avoid previously laid out piping, the new approach first searches for the optimal
route without considering the other piping. Then adjustments are made to all the
piping in the object area. In this way, by dividing the design process into a routing step and
an adjustment step, it is possible to complete the layout design without depending on
the routing order.
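The routing step can be pictured with the simplified sketch below; a plain breadth-first search on a coarse grid stands in for the knowledge-based route generation of the production system, and, as described above, only fixed obstacles are avoided in this first step, not the other piping.

# Minimal sketch of the route-first step: shortest rectilinear path on a grid
# that ignores other piping and avoids only fixed obstacles (walls, equipment).
from collections import deque

def route(grid_size, start, goal, fixed_obstacles):
    nx, ny = grid_size
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < nx and 0 <= nxt[1] < ny
                    and nxt not in fixed_obstacles and nxt not in prev):
                prev[nxt] = cell
                frontier.append(nxt)
    return None                                # no feasible route

print(route((6, 6), (0, 0), (5, 5), fixed_obstacles={(2, 2), (3, 2)}))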

5.1.3 Result

The layout results obtained when this system was applied to a simplified nuclear plant
model agreed well with the practical layout design. Though the propriety of this layout
strategy was verified, it was not in fact used as a practical design tool. It did, however,
enable us to acquire methods for developing a knowledge-based system.

5.2 Automated Design Check System

Since developing the automated routing system, we have been working on an
automated design checking system. This is based on the object-oriented
programming (OOP) method and is implemented using the Common Lisp Object System.

5.2.1 Object-Oriented Programming

We treat OOP as one knowledge representation method. The hierarchical Class
structure is designed by analyzing not only the physical structure of the plant but also
virtual objects such as graphical user interface objects, documentation objects and check
item objects. A Class object is of the abstract type, whereas an Instance object
represents the concrete model. Design knowledge such as design standards, constraints
and expertise accompanies the specified Class object and is implemented using Methods.
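The paper's check system is implemented with the Common Lisp Object System; the following Python sketch only illustrates the Class/Instance/Method idea described above, with a hypothetical rule body (compare the Horiz-Check method of the Check-Valve class in Figure 14).

# Class/Instance/Method sketch: design knowledge lives in methods on the
# class, each concrete plant object is an instance (illustrative only).
class Valve:                                   # abstract Class object
    def __init__(self, tag, geometry, orientation):
        self.tag = tag                         # from the model description file
        self.geometry = geometry               # from the model geometry file
        self.orientation = orientation         # from the property file

class CheckValve(Valve):                       # specialized Class
    def horiz_check(self):
        """Illustrative design rule: this valve type must lie horizontally."""
        return self.orientation == "horizontal"

v = CheckValve("V-205", geometry=(12.0, 3.5, 8.1), orientation="vertical")
print(v.tag, "passes Horiz-Check:", v.horiz_check())   # -> False: flag for review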

5.2.2 System Configuration

Figure 13 illustrates the configuration of the automated design check system. Design is
carried out separately for each system with a particular function in the plant. The check
system makes use of three kinds of data file for every system: the model description file,
the model geometry file and the property file. These data are translated into the Lisp
format. All data from the three files are stored in Instance objects. Figure 14 illustrates
the Class structure of a valve.

5.2.3 Problem Solving

Design check items comprise the design standards and the constraints. The former are
usually represented quantitatively, whereas the latter are qualitative. The important
problem is whether it is possible to translate the qualitative constraints into numerical
representations or not.
Another problem is that the design data stored in the PDBMS and the CAD database systems
are not complete. The reason for this is very simple: creating models in a computer takes
a great deal of effort. Designers input only the necessary and sufficient data in a practical
system; anything that can be gleaned from other sources or that is common sense as regards
plant design is not represented as a CAD model.
It is impossible to fully solve these problems. However, we did ask designers to
explain the design constraints and gathered concrete piping layout patterns which satisfy them.
We referred to the approach used in Case-Based Reasoning (Kolodner,
1985). Several design constraints have been translated into search problems.

5.2.4 Examples

Designers have long performed design checks using scale plastic models and/or CAD
models. The following are examples of the items checked in this case.

(1) Data Discrepancy
Layout design is carried out using P&ID data, which are the result of system design.
Piping layout design is performed manually by designers, whereas VLSI design is done
automatically. For this reason, there is a possibility of discrepancies occurring in the
order of objects between the layout design result and the P&ID. Such discrepancies
sometimes occur after a design modification. If modifications to the layout design are not
reflected in the system design, this will lead to a problem. This check is very important,
since it affects plant safety and reliability.

(2) Design Constraints
Drain/vent pipes are critical components. The function of drain and vent pipes is to
remove water and air, respectively. There are conditions specifying how drain/vent
pipes should be laid out. If a vent pipe is not at a global or local high point in the piping,
air will not be removed and the equipment will be affected. What is necessary is to check
whether it is actually possible to remove the water and air. Sometimes a vent pipe is set on
another pipe from which the object piping segment branches; therefore, a tail-recursive
search over the piping is necessary (see the sketch after this list). Figure 15 gives an example of a vent pipe check result.

(3) Numerical Analysis
It is also very important to carry out numerical analyses, such as eigenvalue analysis,
earthquake response analysis and thermal stress analysis. It is easy to obtain the
coordinates of a piping segment and the position of an object on a pipe, but it is awkward
to automatically create analytical models of specific objects such as valves, supports and
nozzles. We have developed a library which stores default parameter values. All
elements and nodes are represented as Instance objects. Figure 16 shows an example of
an eigenvalue analysis model and its result.
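The vent pipe check of example (2) can be sketched as follows, under an assumed piping-graph data model that is not the actual system's: starting from the vent connection, the piping is searched recursively, and the vent is accepted only if no reachable upstream point lies above it.

# Recursive high-point check for a vent (illustrative data model: an
# adjacency dict mapping each node to (neighbour, neighbour_elevation) pairs).
def vent_at_high_point(pipes, node, vent_elev, visited=None):
    visited = visited or set()
    visited.add(node)
    for neighbour, elev in pipes.get(node, []):
        if elev > vent_elev:
            return False                 # air pocket would be trapped above the vent
        if neighbour not in visited and not vent_at_high_point(
                pipes, neighbour, vent_elev, visited):
            return False
    return True

piping = {"vent": [("a", 4.0)], "a": [("vent", 4.0), ("b", 3.0)], "b": [("a", 3.0)]}
print(vent_at_high_point(piping, "vent", vent_elev=4.5))   # -> True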

6.SUMMARY AND CONCLUSIONS

In this paper, we have outlined an integrated and intelligent database system which
forms a plant engineering framework. By integrating existing CAD systems and database
systems, we have made it possible to develop an efficient engineering environment. As
for design knowledge, we adopted object-oriented programming as the knowledge
representation method. We analyzed the hierarchical structure of the plant and the
knowledge related to each object, and then represented these, respectively, using a
Class/Instance structure and Methods. We developed an automated design checking system
as one application of this technique. These integrated and newly developed systems are
used in practical design. We have begun to develop a mechanical/electronic design
framework based on this approach.


REFERENCES

(1) Kolodner, J. L., Simpson, R. L., and Sycara, K., 1985, "A Process Model
of Case-Based Reasoning in Problem Solving", Proceedings of IJCAI-85, pp. 284-290.

(2) Sakamoto, et al., 1989, "PLEXSYS: An Expert System Development Tool for the
Electric Power Industry - Application and Evaluation", Proceedings of the EPRI Conference
on Expert Systems Applications for the Electric Power Industry.

(3) Machiba, and Sasaki, 1990, "Toshiba CAE System for Nuclear Power Plant",
Proceedings of SNA '90, pp. 425-430.

(4) Kim, W., Banerjee, J., Chou, H., and Garza, J. F., 1990, "Object-Oriented
database support for CAD", Computer Aided Design, Vol. 22, No. 8, pp. 469-479.

(5) Narikawa, Sasaki, et al., 1991, "An Automated Layout Design System
for Industrial Plant Piping", Proceedings of the ASME Computers in Engineering Conference,
Vol. 1, pp. 1-6.

(6) Hardwick, M., and Downie, R., 1991, "On Object-Oriented Data Bases,
Materialized Views, and Concurrent Engineering", Proceedings of the ASME Computers in
Engineering Conference, Engineering Database: An Enterprise Resource, pp. 93-97.

(7) Kannapan, S. M., and Marshek, K. M., 1991, "Engineering Design
Methodologies: A New Perspective", Intelligent Design and Manufacturing, edited by
Andrew Kusiak, Wiley Interscience.

(8) Sheth, S., 1991, "Product Data Management and Supporting Infrastructure for an
Enterprise", Proceedings of the ASME Computers in Engineering Conference, Engineering
Database: An Enterprise Resource, pp. 65-69.

(9) Narikawa, et al., 1992, "A Computer Aided Layout Design System for Plant
Piping", The 1st JSME Conference on Design Engineering in System.

(10) Sakamoto, et al., 1992, "A Knowledge Based System for Nuclear Plant
Design Support", Proceedings of the ICHMT 2nd International Forum on Expert Systems and
Computer Simulation in Energy Engineering.

(11) Abe, Sasaki, et al., 1992, "Toshiba Integrated Information System
for Design of Nuclear Power Plants", Proceedings of the 2nd ASME/JSME International
Conference on Nuclear Engineering, pp. 711-715.

(12) Machiba, Sasaki, et al., 1992, "Practical Application of Computer
Graphics in Nuclear Power Plant Engineering", Toshiba Review, Vol. 47, No. 12, pp.
914-917.

(13) Hakim, M., and Garrett, J. H., Jr., 1992, "Object-Oriented Techniques for
Representing Engineering Knowledge and Data: Pros and Cons", Applications of Artificial
Intelligence in Engineering VII, Computational Mechanics Publications (CMP), pp. 21-34.

(14) Narikawa, Sasaki, et al., 1993, "Application of Object-Oriented
Programming for Design Problems", Proceedings of the 11th JSME Design Symposium,
p. 96.

(15) Narikawa, Sato, et al., 1993, "Development of an Engineering
Database System for Plant Piping", Proceedings of the 71st JSME General Assembly,
pp. 169-171.

(16) Sasaki, N., 1993, "Integrated CAE Systems for Nuclear Power Plant",
Proceedings of the Committee on Data for Science and Technology.

(17) Narikawa, Sato, et al., 1994, "Integrated and Intelligent Database
Systems for Plant Engineering Framework", Proceedings of the 1994 ASME Database
Symposium.

[Figure: plant engineering process with feedback between phases.]

Fig. 1 Plant Engineering Process

[Figure: plant engineering framework - PDBMS with an optical disk archive and a DB manager providing information reference, specification confirmation, attribute modification, comparison and design check.]

Fig. 2 Configuration of Plant Engineering Framework

[Figure: PDBMS linked with local DBMSs via the Master Parts List (MPL).]

Fig. 3 Relationship between PDBMS and Local DBMSs

[Figure: subsystems of the 3D Arrangement Adjusting System - modeling (piping, duct, equipment, tray, support, concrete), interference detection (physical and imaginary), high speed review (patrol, operability, maintainability, assembly and disassembly, carry in and out), and interior design, exterior design, yard arrangement and construction planning.]

Fig. 4 3D Arrangement Adjusting System

Fig. 5 Result of Jet Blow Down

Fig. 6 Walk Through Simulation System



Fig. 7 Example of Construction Procedure Simulation


Fig. 8 Example of Carry In Simulation


Fig. 9 Example of Interior Design of Main Control Room

Fig. 10 Visualized User Interface for the PDBMS



Fig. 11 Visualized Maintenance Management for Valve

Fig. 12 Visualized Inspection History for Erosion Corrosion Failure



[Figure: the automated design check system reads three files per plant system - a property file, a CAD data file and a geometry file; Classes (System, Pipe, Valve) carry check Methods (P&ID-Check, Slope-Check, Stem-Check); the system is built on the Common Lisp Object System over Lucid Common Lisp and the X Window System.]

Fig. 13 Configuration of an Automated Design Check System

[Figure: the Valve class defines slots (Operation, Geometry, DependPipe) and methods (SetDir-Check, Get-Geometry, Get-Prop); the subclasses Check-Valve (own method Horiz-Check) and Safe-Valve (own method Stem-Check) inherit the slots and methods of Valve.]

Fig. 14 Valve Class Structure

Fig. 15 Example of Vent Piping Check Result

Fig. 16 Example of Numerical Check Result

SP249 - END-USERS' RESPONSE/ACCEPTANCE
What is required? What is available? What has yet to be done?
by Hans R. Kautz, Grosskraftwerk Mannheim AG
Mannheim, Germany
INTRODUCTION
Following a proposal of 13 European partners under the coordination of MPA Stuttgart, a
Sprint Specific Project (designated SP 249) has been approved and is running. The main goal
of SP 249 was to enhance the transfer of the component life assessment (CLA) technology for
high-temperature components of fossil-fired power plants assuring diffusion of modern state-
of-the-art plant CLA technology among power plant utilities and research organizations in
Europe. The project addresses pressure parts operating at elevated temperature (operated in
the creep and creep-fatigue range) in fossil-fired power plants.
Figure 1 shows some essential definitions.
Many years ago, in connection with the authority requirement of retrofitting older power plants
with flue gas cleanup systems, the question of remaining life and the possibility of extending the
service life of welded components operated in the creep range arose. What was the approach
of Grosskraftwerk Mannheim (GKM)? Initially, attempts were made to achieve high
availability and safety by extensive examinations and later, by way of condition-oriented
maintenance, to extend component life.

WHAT IS REQUIRED?
An attempt will be made to demonstrate, by considerations laid down 10 years ago, which
possibilities the SPRINT 249 project system provides to solve the problem of plant life
extension. In those days, the question arose which reserves were still available in the plant
systems and how they could be activated. As hard as it is to state what portion of the service life
of a component or system has been used (exhaustion), it is just as difficult to determine the
remaining life. GKM and other European energy suppliers tried in many different ways to solve
this problem.
Even EPRI (Electric Power Research Institute, Palo Alto), with the assistance of the C.E.G.B.
(Central Electricity Generating Board), tried as early as around 1980 to develop a strategy for the
assessment of remaining life.

APPROACHES TO SERVICE LIFE ASSESSMENT


Consideration of External Loads
In former years, calculations of pipe systems were incomplete. Calculations of "as-is" pipe wall
thickness and weights were started only a few years ago. The consequences, such as mal-
designed hangers, pipe line displacement, and failures as a result of overstressed pipe lines and
hangers, are all too well known.
Here is a brief review of plant component design for the creep range: in the boiler equation
the ultimate tensile strength after 100,000 hours at an adequate temperature, divided by 1.5, was
used; since 1968/69 the design has been based on the lifetime, e.g. 200,000 or 250,000 hours,
depending on the duration of the creep tests.
Pipe line components under long-term loads were designed against creep failure at predomi-
nantly steady-state operation and only recently also against failure under cyclic loads at tran-
sient stresses. Pressure surges were ignored. Such an approach - creep tests as calculation basis
(static loads, limited test time) - is inadequate for inhomogeneously stressed components, as

premature failures confirmed. The stipulation to consider external loads as well when designing
high-temperature components under internal pressure was hoped to be the remedy. In those
days engineers believed that only in this way could the life of creep-stressed components be
extended, of course not beyond the design life. The intention was to fully use the service life
determined by calculation based on temperature, pressure and strength parameters for 200,000 op-
erating hours when running the plant at standard operating conditions (with components con-
forming to the drawings). Local stress peaks (exceeding the specified stresses) would reduce
the remaining life.
There is no need to emphasize that for the assessment of power plant components, i.e. their
availability and safety, but also for extending the service life, all operational loads should be
known and available for a lifetime analysis. This is almost impossible to achieve. For many
older plants hardly any documentation from the time of erection is available.
Life Time Monitoring Systems
In those days, more and more computer software was used for determining by calculation
the degree of exhaustion of components under internal pressure or operated in the creep range.
The software was intended to calculate (retrospectively) the used service life and the exhaustion
of different components, e.g. pipe line components. In 1978, for the first time, a report was
published for the Moorburg/Hamburg power plant on long-term monitoring of boiler compo-
nents by way of a process computer. At that time only the material exhaustion by creep was deter-
mined, but not the stresses due to load cycling. Later, an agreement was concluded among the
technical associations in Germany on recording measured data and on a code for determining
the degree of exhaustion of pressurized components by calculation pursuant to the German
Boiler Code TRD 508. The assumption was that the component life decreases with the number
of tolerated load cycles (linear failure accumulation according to Palmgren-Miner and linear
life fraction according to Robinson). The damage process was assumed to start immediately after
the first load cycle, and not - as in reality - after a critical number of load cycles, and to
progress until rupture. Another assumption is that the damage due to creep and fatigue is linear
with the number of load cycles and their duration and that both damage types may be added to
a total degree of damage. The most difficult task, however, is to determine the really "heavily
loaded" system points.
Damage Rules
Damage rules for multiaxial stress states, variable loads, and temperature history have their
origin with Robinson in 1937. They are expressed as summations of time ratios, strain ratios,
or combinations of time and strain ratios. Additionally, Kachanov and Rabotnov dealt with
damage functions which vary in a continuous fashion from 0 to 1 between test initiation and
failure. Coffin and Goldhoff, Abo El Ata and Finnie have provided summaries of some of the
more useful damage rules. Figure 2 is a survey of these damage rules, which again gives rise to
doubts as to which is the optimal rule.
It is interesting to note that the Main-Wiesbaden AG power plant decided only in the past
years to participate in the development of a lifetime monitoring system within the European
project BRITE in order to fully exploit - according to the state of science and technology - the
possibilities of non-destructive material examination and monitoring within preventive
maintenance.
Nonetheless, a number of problems due to physical and metallurgical conditions occur when
recording the necessary data.

STATISTICS
The standard calculation methods for the assessment of the service life of components in the
superheated steam region, such as design and service life calculations and calculation of the
degree of exhaustion, were considered inadequate by the German authorities. Therefore, at-
tempts were made to implement findings from statistics in the calculation for determining
the service life, considering the scatter of creep and wall thickness values (of the individual
components), of temperature, etc. The application of statistical methods was intended to allow
statements as to what failure rates or possible damage had to be anticipated at a specific time.
Engineers believed in the possibility of making the complex conditions somewhat more com-
prehensible and of initiating preventive measures systematically.
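The statistical idea can be illustrated with a small Monte Carlo sketch; the scatter model and all parameter values below are assumptions chosen purely for illustration.

# Monte Carlo sketch: sample scattered wall thickness and creep strength and
# estimate the fraction of components falling short of a target life.
import random

def fraction_failing(target_life_h, n=10000):
    failures = 0
    for _ in range(n):
        thickness_factor = random.gauss(1.0, 0.05)   # wall thickness scatter
        strength_factor = random.gauss(1.0, 0.10)    # creep strength scatter
        life = 200000.0 * thickness_factor * strength_factor
        if life < target_life_h:
            failures += 1
    return failures / n

print(fraction_failing(180000.0))   # estimated failure fraction before 180,000 h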

PROBLEMS OF ASSESSING THE DEGREE OF EXHAUSTION OF POWER PLANT
COMPONENTS
The safety and profitability of a power plant are defined to a great extent by the possible failure of
individual components. The determination of the degree of exhaustion, and thus of the service life
consumed so far, of high-temperature pressurized components such as pipe lines and headers is
based principally on the following approach:
- Calculation of the spent and the remaining life under creep and fatigue conditions,
e.g. pursuant to the German Pressure Vessel Code TRD 508;
- destructive and non-destructive material examinations and strain measurements,
e.g. based on VGB Guideline R 509 L;
- transfer and incorporation of engineering expertise and field experience when
analyzing the component condition.
The importance of developing advanced methods and procedures for service life calculation
or estimation was also stressed in the course of various conferences and in publications. How-
ever, it must be remembered that novel and more detailed methods for life assessment are
difficult to introduce in the field. In most cases this is due to the fact that the required data
and/or the personnel necessary for performing the sophisticated analyses are unavailable.
Field experience and case studies demonstrated that the factors listed below resulted in huge
errors in the prediction of the remaining life, so that the whole procedure becomes more or less
useless. The factors are:
- Scatter of material data, local material inhomogeneities or differing material
properties (e.g. due to heat treatment or welding),
- unknown additional forces or moments,
- differences between design and as-is geometries,
- uncertainties in the determination of temporally and locally variable operating
parameters such as pressure and/or temperature,
- conservative strength and damage hypotheses,
- design errors predominantly due to wrong assumptions or - in other words -
due to a "lack of engineering expertise and intuition" or "deviation from ap-
proved engineering practice".
Table 1 shows the effect of these influencing factors on the plant and component life.

Many years ago a paper was presented with the statement: "There is presently no accurate
procedure of determining the 'remaining life', even though 'methods' are available. The re-
maining life of a system operated in the creep range cannot be extended beyond the design life.
A component may be used within the specified life only if the actual operational stress does
not exceed the design stress."

MICROSTRUCTURE
One method of assessing the component condition has been very successful during the past years.
Over the years the microstructural changes occurring in the course of operation gained more
and more interest. In 1969, the first in-situ microstructure assessment with an attachable
microscope was performed in the Hamm-Uentrop power plant (prior to commissioning). The
adoption of replica taking represented considerable progress. In the past years this exa-
mination method was accepted more and more, and the Guidelines for the Assessment of the
Microstructure and Damage Evolution of Creep-exposed Materials for Pipes and Boiler
Components were refined. Table 2 shows the assessment classes and their definitions.

GOALS OF SP 249
However, many past conferences and publications revealed how difficult it is to assess the
materials condition, and thus the degree of exhaustion of a component, if engineers fail to
establish a system to organize the huge quantity of acquired data, to compare the meaning-
fulness of examination techniques and results, to harmonize the differing levels of understanding
in power plant engineering (uneven distribution of experts/resources in Europe and all over the
world) and thus to find a common language. It is this difficulty that drives the development
and implementation of knowledge-based systems for assessing the remaining life.

THE TURNING POINT

When GKM was asked some years ago by the Stuttgart Materials Testing Institute (MPA) to
participate in the development of a computer-aided management system, e.g. a knowledge-
based system for assessing the remaining life of power plant components, the consent was given
without really knowing whether this effort would meet with success, but with the hope of finding
an improved solution for the problem.
In such cases expert or knowledge-based systems or similar software may provide effective
support for engineers concerned with these problems. This is the basis for developing, for example, the
Expert System for damage analysis and determination of Remaining life (ESR) at the Stuttgart
Materials Testing Institute.

USER INTEREST IN KNOWLEDGE-BASED SYSTEMS

In a power plant like GKM, the interest in technical development and the efforts to continuously
improve the safety and quality of components and materials are as important as the availability
of the plant. Human expertise of different specialists is essential for the assessment, but is often
unavailable in the plants at the very moment it is needed. Thus the plant engineers often face
the question of what to do with the component and/or the plant (e.g. shut down and reinspect;
reduce load, pressure, temperature; etc.). In the field of power plant instrumentation and
control, GKM makes every effort to hold a leading position in modern techniques. The inclusion
of an expert system which could support the departments of construction, calculation,
maintenance, and others was therefore adopted as one of the goals. GKM decided to sponsor the
project financially and to provide expert knowledge so that the system would be capable of
satisfying the following requirements and needs of the company:

- Improve safety and availability of piping systems first, and of other components in
the future;
- support and help personnel in performing usual maintenance tasks;
- support and help personnel when dealing with specific subproblems;
- facilitate search and use of the necessary documentation;
- save and reuse the knowledge of human experts (personnel);
- preserve experience and knowledge gained during the manufacturing/construction phases;
- perform strategic analyses and analyses of case studies;
- better evaluate single aspects of new cases and compare them with stored ones;
- use both "plant specific" and "plant neutral" data.

WHAT IS AVAILABLE?
Knowledge Based System (KBS)
The current status of the SP249 system includes the following methods, procedures, and
information:
- Generic guidelines for CLA
- Overall structure of the system
- Object management modules
- Advanced Assessment Route
- Case history management (with about 100 case histories)
- Documentation management (with all CLA generic guidelines and associated standards
and codes like DIN, TRD, ASME, VGB and NT standards)
- Material database (with the relevant standards ISO, DIN, BSI, ASTM and other
materials)
- Parameter calculation
- Hardness calculation
- TULIP (Tube Life Prediction)
- Case history selection and management
- Crack dating
- SP249 remanent life calculation
- Inclusion of oxidation effects
- SP249 material database
- Inverse stress calculation as per German Boiler Code TRD
- Creep and fatigue usage calculation as per TRD
The modules yet to be developed are:
- Cavity density measurement
- Linear extrapolation
- Influence of chemical composition on the remaining life
- Failure assessment
- Training materials for CLA and for the SP249 KBS
- SP249 KBS implemented at all participating utilities.
Although not all of the methods and procedures listed here will find the approval of the
utilities, they were included as state-of-the-art methods.
As an example: some years ago the assessment models for the determination of the remaining
component life were discussed. Theoretical models of remaining life assessment are based
on a combination of a mathematical creep curve description and metallurgical variables such as a
quantifiable degree of failure and the distance between particles. The practical application so
far is limited to some experimental results; systematic use is still to be expected. As a result
of uncertainties and necessary simplifications of the model bases, and of the scatter of the
measured results, it is unclear how to assess the life span realistically. The following models are
candidates: the Needham model, the -parameter model pursuant to Shammas, the p-
parameter model according to Riedel, the A*-parameter model pursuant to Eggeler, and the
particle spacing/EPRI model. We found that only the particle spacing model might be
suitable for practical application.
In any case, the advanced assessment route - on which other authors have already reported and
which will be briefly discussed later on - is the link between the theoretical knowledge base and
the case studies.

BENEFITS OF SP 249
Maintenance, Remaining Life, Costs
Statistics show that presently 50 to 70 % of the German power plants exceed their design life
of approx. 20 years. Toward the end of the century, 15 to 25 % of the power plants will have
reached the 40-year limit and will have to be repaired according to their condition, to mitigate or
eliminate old design-induced or operation-induced mistakes and to continue operation, unless new
plants are constructed. This also means that the service life of a number of power plants must
be extended again in order to maintain the energy supply. The expenditure for maintenance,
which increases in the course of life extension over the entire service life, must be assessed, however,
against investment costs of 2000 to 3500 DM per installed kilowatt for a new
plant. Life extension spread out over 10 to 15 years amounts to approx. 50 DM per kilowatt;
according to U.S. data, on the contrary, up to 500 $ per kilowatt. Life extending activities in a
power plant require a period of between two and five years. The dominating aspect is,
however, the problem of profitability and service life analysis. In order to fully solve this
problem, a decision must be taken in which the service life calculation, based on condition ana-
lysis of the total plant and component load analysis, provides an overview of the expected
service life of all components. The longer the general service life, the higher the rehabilitation
costs for the system will be. Besides the objective of obtaining a possibly identical service life
for all components, the extension of the operating time should be paralleled by retrofitting and
upgrading, including improved performance, efficiency, and availability.
A major reason for participation in the SP249 project is to diminish the costs of maintenance.
A great advantage is that the knowledge-based system will contain plant-neutral and plant-
specific information. Thus there is another possibility to improve the knowledge concerning the
plant condition. The actual 'as-is' state does not have to be known. The engineer will receive a
prognosis enabling him to choose a specific investigation method for the components. This
prognosis depends on the number and quality of the incorporated case studies. Such recommen-
dations allow the amount of costs to be minimized. Recommendations may include, e.g.,
- furnishing of a scaffold,
- removal of insulation,
- selection of specific non-destructive or destructive tests,
- attachment of insulation,
- prediction of the plant outage time.
In the USA, a survey by Stone & Webster Corporation has shown that incorporation of the
reliability-centered maintenance (RCM) functional analysis in the design phase can save a
major portion of the up-front implementation cost. It can also solve most of the
translation problems by doing the translation as part of the design process, and it can avoid
costly design errors and misapplications.

Still some years ago, in the conflict between prevention at all costs and operation until break-
down, attempts were frequently made to avoid failures by early detection and elimination of incipient
damage. As a result of extensive expertise and of the possibilities provided by electronic data
processing, we are now able to perform maintenance work in a more differentiated and
efficient way. There are three stages (Table 3):
* Preventive and periodical maintenance,
* condition-oriented maintenance,
* failure-dependent maintenance.
Condition-oriented maintenance includes all system components affecting plant availability
and/or safety whose failure involves the risk of subsequent damage. Maintenance is performed
according to the findings of the condition check, the prerequisites being one, or better, several
diagnostic methods (examination procedures) for such an approach and a component utili-
zation of 80 - 95 %.
Condition-oriented maintenance is based on reliable early detection by way of process moni-
toring and recurrent inspection.
Systematic plant monitoring is the prerequisite for condition-oriented maintenance. By increas-
ing the application of monitoring systems, i.e. in-process monitoring, maintenance can be opti-
mized and plant availability increased. The adoption of maintenance models based on electronic
data processing will finally result in an integrated process control system.
Failure-dependent maintenance means waiting until failure occurs in components which are
not relevant for plant safety and availability or which are redundant.
Advanced Assessment Route (Table 4)
In order to overcome this dilemma - too many data, a huge quantity of urgently wanted in-
formation - a so-called 'road map' was developed within the SP249 system, with the intention of
allowing the user of the knowledge-based system not only to enter his specific problem, but also to be
given - by the system - a reasonable choice between various remedial measures. The road map
- or advanced assessment route - allows the engineer to fall back on a collection of case
studies, look into component histories, recall pertinent standards and codes, and will finally
recommend a solution, e.g. run, repair or shut down, which the engineer then considers.
This route is therefore the link between the database of the system, which includes all pertinent
information in the form of engineering rules, models, codes and standards, and the compilation
of events in the form of case studies and component histories.
WHAT HAS YET TO BE DONE?
The system must be kept up to date by continually integrating new case studies and linking
them with the advanced assessment route to help in making decisions.
Implementation of the power plant component life assessment methodology using a knowledge-
based system:
There is no doubt that the implementation of the knowledge-based system of the SP249 pro-
ject in the power plant will create problems, because there is hardly any acceptance among
those that have to use it, because the software is not yet available in the national languages, and
because the required application knowledge is lacking. A basic knowledge of electronic data
processing and computer hardware should be available.
The SP 249 KBS minimum user's profile is given in Table 5. This is an item which is frequently overlooked
when implementing a knowledge-based system in a power plant.

CONCLUSION
Without its many negative, and occasionally positive, experiences GKM would not have spon-
sored the development of a knowledge-based system and even actively participated in it. It is abso-
lutely mandatory that the innumerable individual events - failures, accidents, upset conditions
etc. - which can no longer be handled in a conventional way be compiled in a way that allows
easy access and useful combination with accepted engineering rules, official standards and
codes.
The advanced assessment route provides a systematic approach to component life assessment.
In its present form it is applicable to high-temperature boiler components and pipe lines. All
stages of life assessment are covered, from the initial plant prioritization through conventional
and advanced inspection techniques, including defect assessment methods, to the 'run/repair/shut
down' decision. The route takes the form of a series of logically connected flow charts identical
to those displayed on the screen by the knowledge-based system. The linkage with the generic
guidelines for life assessment is made whenever appropriate.

EXHAUSTION
This is the consumed deformation capacity characteristic of a material. The degree of
exhaustion upon rupture is always 100 %. There is a non-linear relation between strain
and time. An almost linear relation is likely to be observed until the end of the
secondary creep range (= constant creep range).

CALCULATION OF EXHAUSTION
The calculation of the degree of exhaustion is based on the operating parameters and on the German Pressure Vessel Code TRD 508, in which the calculation procedure is described by examples. The major calculation variable for the determination of power plant component creep exhaustion is the creep strength of creep-resistant steels pursuant to DIN Standard 17175.
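
The bookkeeping behind such a calculation can be illustrated by a minimal sketch: the degree of exhaustion is accumulated as the sum of life fractions over blocks of operation. The rupture-time function below is a fictitious placeholder; TRD 508 instead takes the rupture times from the DIN 17175 creep strength values for the actual steel, stress and temperature.

```python
# Sketch of creep exhaustion as accumulated life fractions.
# t_rupture() is a placeholder assumption; TRD 508 derives rupture times
# from the DIN 17175 creep strength of the actual steel.

def exhaustion(operating_blocks, t_rupture):
    """operating_blocks: list of (hours, temperature_C, stress_MPa).
    t_rupture: function (temperature_C, stress_MPa) -> rupture time in h."""
    return sum(h / t_rupture(T, s) for h, T, s in operating_blocks)

# Example with a fictitious rupture-time function:
blocks = [(50_000, 530, 60.0), (30_000, 540, 58.0)]
demo_t_r = lambda T, s: 2.0e5 * (550 / T) ** 8 * (65 / s) ** 4
print(f"degree of exhaustion: {exhaustion(blocks, demo_t_r):.2f}")
```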

CREEP
Creep is the time-dependent, progressive, ductile deformation of a material under constant (static) load. In metals this phenomenon is caused by the mobility of the atoms, which increases with temperature, and by the behavior of the lattice defects. The thermally activated, diffusion-controlled change of atomic positions is of critical importance for these processes.

CREEP DAMAGE
Creep damage is an irreversible degradation of the microstructure occurring under the simultaneous impact of temperature and stress.

CREEP STRENGTH
Creep strength is that static stress which results in specimen failure at the end of a
specified load period. The high-temperature strength of metals is characterized
predominantly by the creep characteristics.

Figure 1

Robinson Life Fraction Rule

$$\sum_i \frac{t_i}{t_{ri}} = 1 \qquad (1)$$

Lieberman Strain Fraction Rule

$$\sum_i \frac{\varepsilon_i}{\varepsilon_{ri}} = 1 \qquad (2)$$

Freeman and Voorhees Mixed Rule

$$\sum_i \left( \frac{t_i}{t_{ri}} \cdot \frac{\varepsilon_i}{\varepsilon_{ri}} \right)^{1/2} = 1 \qquad (3)$$

Oding and Burdusky Mixed Rule

$$\sum_i \left( \frac{t_i}{t_{ri}} \cdot \frac{\varepsilon_i}{\varepsilon_{ri}} \right)^{m} = 1 \qquad (4)$$

Abo El Ata and Finnie Mixed Rule

$$K \sum_i \frac{t_i}{t_{ri}} + (1 - K) \sum_i \frac{\varepsilon_i}{\varepsilon_{ri}} = 1 \qquad (5)$$

Figure 2
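
Here $t_i$ and $\varepsilon_i$ denote the time spent and the strain accumulated under the i-th load condition, and $t_{ri}$ and $\varepsilon_{ri}$ the corresponding rupture time and rupture strain; failure is predicted when the accumulated sum reaches 1. A compact sketch of these rules follows (the history format and function names are illustrative; the form of rule (4) is a reconstruction, with m treated as a material constant):

```python
# Life/strain fraction rules of Figure 2 (sketch).
# Each history entry: (t, tr, e, er) = time spent, rupture time,
# strain accumulated, rupture strain for one load condition.

def robinson(hist):                 # eq. (1)
    return sum(t / tr for t, tr, e, er in hist)

def lieberman(hist):                # eq. (2)
    return sum(e / er for t, tr, e, er in hist)

def freeman_voorhees(hist):         # eq. (3)
    return sum(((t / tr) * (e / er)) ** 0.5 for t, tr, e, er in hist)

def oding_burdusky(hist, m):        # eq. (4), reconstructed form
    return sum(((t / tr) * (e / er)) ** m for t, tr, e, er in hist)

def abo_el_ata_finnie(hist, K):     # eq. (5), 0 <= K <= 1
    return K * robinson(hist) + (1 - K) * lieberman(hist)

hist = [(40_000, 180_000, 0.004, 0.015), (25_000, 90_000, 0.003, 0.012)]
print(robinson(hist), freeman_voorhees(hist), abo_el_ata_finnie(hist, 0.5))
```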
Effect of Influencing Factors on the Plant/Component Life

influencing variable: design tolerance in % -> effect on calculated life in %; field tolerance in % -> effect on calculated life in %; required tolerance in % (20 % confidence)

- diameter: +1/-1 -> -3/+4; +1/-1 -> -3/+4; required: 1
- wall thickness: +1/-1 -> +4/-3; +25 -> +180; required: 1
- internal pressure: +1/-1 -> -3/+4; +10 -> +50; required: 1
- additional stresses: +5 -> -16; +5 -> -16; required: 5
- wall temperature: +0.5/-0.5 -> -18/+20; +3 -> +220; required: 0.5
- material parameter: +40 -> +500; +40 -> +500; required: 5
- thermal stress factor: required: 5
- temperature amplitude: +4/-4 -> -20/+30; required: 4

Table 1
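
Read as first-order sensitivities, such table entries can be propagated into a rough overall scatter of the calculated life. The sketch below combines three of the field-tolerance effects from Table 1 by root-sum-square; this combination rule is a simplifying assumption for illustration, not the procedure used in the paper.

```python
# Rough propagation of field tolerances into calculated-life scatter.
# Effects (in % of life) are taken from Table 1; combining them by
# root-sum-square is a simplifying assumption.
import math

field_effects = {
    "wall thickness":    180,  # +25 % tolerance
    "internal pressure":  50,  # +10 % tolerance
    "wall temperature":  220,  # +3 % tolerance
}

life_scatter = math.sqrt(sum(e ** 2 for e in field_effects.values()))
print(f"combined life uncertainty ~ {life_scatter:.0f} %")
```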

Assessment of Structural and Damage Conditions

class   structural and damage conditions
0       as received, without thermal service load
1       creep exposed, without cavities
2a      advanced creep exposure, isolated cavities
2b      more advanced creep exposure, numerous cavities without preferred orientation
3a      creep damage, numerous oriented cavities
3b      advanced creep damage, chains of cavities and/or grain boundary separations
4       advanced creep damage, microcracks
5       large creep damage, macrocracks

Table 2
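
In a knowledge-based system such a classification is naturally stored as a lookup table. A minimal sketch follows, with the descriptions taken from Table 2; any mapping from assessment class to inspection action would be plant-specific and is deliberately left out.

```python
# Assessment classes of Table 2 as a lookup table (sketch).
ASSESSMENT_CLASSES = {
    "0":  "as received, without thermal service load",
    "1":  "creep exposed, without cavities",
    "2a": "advanced creep exposure, isolated cavities",
    "2b": "more advanced creep exposure, numerous cavities "
          "without preferred orientation",
    "3a": "creep damage, numerous oriented cavities",
    "3b": "advanced creep damage, chains of cavities and/or "
          "grain boundary separations",
    "4":  "advanced creep damage, microcracks",
    "5":  "large creep damage, macrocracks",
}
print(ASSESSMENT_CLASSES["2b"])
```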
Maintenance Strategy

preventive, at intervals:
- repair at scheduled intervals
- component utilization 60-80 %
- set lifetime necessary
- important system components
- impact on availability
- risk of consecutive damage
- condition check uneconomic
- determination of intervals

condition-oriented:
- repair depending on condition check
- component utilization 80-95 %
- diagnosis procedure necessary
- important system components
- impact on availability
- risk of consecutive damage
- early failure detection by operation monitoring and recurrent examinations

failure-expecting, age-induced:
- continue operation until failure occurs
- component utilization 100 %
- redundancy necessary
- lower-level system components
- no impact on availability
- no risk of consecutive damage
- replacement of parts

Table 3
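
The selection criteria of Table 3 can be condensed into a simple decision function. The sketch below paraphrases the table; the names are illustrative, and a real selection would of course also weigh economics, safety and regulatory constraints.

```python
# Sketch of the maintenance strategy choice implied by Table 3.
# Criteria are paraphrased from the table; names are illustrative.

def maintenance_strategy(important: bool, redundant: bool,
                         condition_check_economic: bool) -> str:
    if not important and redundant:
        return "failure-expecting: run to failure, replace parts"
    if condition_check_economic:
        return "condition-oriented: monitor, repair on condition"
    return "preventive: repair at scheduled intervals"

print(maintenance_strategy(important=True, redundant=False,
                           condition_check_economic=True))
```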

Phase 1: General Calculation / Review of Operation

Phase (1-1): General calculation assessment
Phase (1-2): Measure / calculate strain
    strain criterion relevant -> continue with Phase (1-3)
    strain criterion NOT relevant -> continue with Phase (1-4)
Phase (1-3): Assess strain level
    critical strain level exceeded -> continue with Phase 2
    critical strain level NOT exceeded -> continue with Phase (1-4)
Phase (1-4): Assess creep / fatigue life fraction
    critical creep level exceeded -> continue with Phase 2
    critical creep level NOT exceeded -> continue with Phase (1-5)
Phase (1-5): Check operational factors
    operational factors critical -> continue with Phase 2
    operational factors OK -> continue with Phase (1-6)
Table 4: Sample Display of Advanced Assessment Route
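
Rendered as code, the Phase 1 logic of Table 4 reduces to a chain of checks. In the sketch below the boolean inputs stand in for the KBS's calculation modules; the function and variable names are illustrative.

```python
# Phase 1 of the advanced assessment route (Table 4) as straight-line code.
# Boolean inputs stand in for the KBS's calculation modules.

def phase_1(strain_relevant: bool, strain_critical: bool,
            creep_critical: bool, operation_critical: bool) -> str:
    if strain_relevant and strain_critical:   # Phase (1-3)
        return "continue with Phase 2"
    if creep_critical:                        # Phase (1-4)
        return "continue with Phase 2"
    if operation_critical:                    # Phase (1-5)
        return "continue with Phase 2"
    return "continue with Phase (1-6)"        # all checks uncritical

print(phase_1(True, False, False, False))  # -> continue with Phase (1-6)
```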



SP 249 KBS Required Minimum User's Profile

Use of the main system:
- Normal mode: hypertext in "read only" use; report in "read only"; perform calculations within the main system ...
- Advanced mode: as in "Normal mode", plus: edit the report; perform independent calculations ...
- Authoring mode: possibly as in "Advanced mode" (not necessarily), plus: input of new case studies; add images in the system
- Installation: install Guide; install Kappa; install the SP 249 system

Hardware-related knowledge:
- Normal mode: how to turn the computer and monitor on; how to operate the mouse
- Advanced mode: requirements for "Normal mode", plus: scanner operation knowledge recommended
- Authoring mode: requirements for "Advanced mode"
- Installation: hardware configuration must be known; knowledge of screen resolution and drivers recommended

Software-related knowledge:
- Normal mode: basic DOS knowledge (meaning of directories and files, how to find them, etc.); using the mouse, the menus and the options of the "File" menu ("Open...", "Save", etc.) under Windows
- Advanced mode: requirements for "Normal mode", plus: creating directories (File Manager under Windows); creating and editing files in the report editor
- Authoring mode: requirements for "Advanced mode", plus: creating and editing Guide files; creating and working with bitmaps (Paintbrush) and drawings
- Installation: basic knowledge of the operating system (DOS/Windows)

CLA-related knowledge:
- Normal mode: power plant construction basics (plant, block, systems, components); basic knowledge of power plant materials, standards and similar (e.g. which standards are applicable for given plants, etc.)
- Advanced mode: requirements for "Normal mode", plus: the ability to understand and interpret calculation results
- Authoring mode: none, if the case study is well prepared on paper; otherwise: requirements for "Advanced mode", plus deep knowledge of CLA technology (experienced expert)
- Installation: none

Table 5

Authors' index

P. Auerkari
VTT Manufacturing Technology
P.O. Box 1704 (Kemistintie 3, Espoo)
SF-02044 VTT, Finland
Tel.: 00 358 40 - 5 01 51 83
Fax: 00 358 - 0 - 456 - 7002

M. Behravesh
Electric Power Research Institute
3412 Hillview Avenue
Palo Alto, CA 94303, U.S.A.
Tel.: 001 (415) 855-2388
Fax: 001 (415) 855-2774

J. M. Brear
ERA Technology Ltd.
Cleeve Road
Leatherhead, Surrey KT22 7SA, United Kingdom
Tel.: 00-441-372-374-151
Fax: 00-441-372-374496

B. Cane
ERA Technology Ltd.
Cleeve Road
Leatherhead, Surrey KT22 7SA, England
Tel.: 00 441 (0) 1372 367000
Fax: 00 441 (0) 1372 367099

L. A. D. Correa
Petróleo Brasileiro S/A Petrobras/Repar
Rod. do Xisto KM 16
Araucária, Paraná, Brazil
Tel.: 0055 41 8412541
Fax: 0055 41 8431244

D. V. Coury
Escola de Engenharia de São Carlos
Universidade de São Paulo
Av. Dr. Carlos Botelho, 1465 - Cp 359
13560-970 São Carlos, Brazil
Tel.: 0055 (162) 72 6222
Fax: 0055 (162) 74 9235

H. P. Ellingsen
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: 0049-711-2579361

S. Fukuda
Tokyo Metropolitan Institute of Technology
6-6 Asahigaoka, Hino
191 Tokyo, Japan
Tel.: 00-81-4-2583-5111, ext. 266
Fax: 0081425835119

M. Gruden
Grupa R&D Consultancy Ltd
Vodnikova 8
SI 61000 Ljubljana, Slovenia
Tel.: +386 61 55 33 70
Fax: +386 61 55 17 74

A. S. Jovanovic
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: 00-49-711-685-3007
Fax: 685-2635

H. R. Kautz
Großkraftwerk Mannheim AG
Postfach 24 02 64
68172 Mannheim, Germany
Tel.: 00-49-621-868-3702 or 3703
Fax: 00-49-621-868-3710

M. C. Klinguelfus
COPEL - Companhia Paranaense de Energia (LAC/CNAT)
Rua Coronel Dulcídio 800, Curitiba, Paraná, Brazil
Tel.: 0055 (41) 366-2020
Fax: 55 (41) 266-3582

J. A. B. Montevechi
Escola Federal de Engenharia de Itajubá
CX. Postal 50
CEP 37500-000 Itajubá (MG), Brazil
Tel.: 0055 (35) 6291212
Fax: 0055 (35) 6291148

H. H. Over
Institute of Advanced Materials, JRC Petten of the European Commission
Netherlands
Tel.: 00-31-2246-5256
Fax: 003122463424

Dora de Castro Rubio Poli
Instituto de Pesquisas Energéticas e Nucleares (IPEN - CNEN/SP)
Travessa R 400 - Cidade Universitária
CX. Postal 11049 - 5508-900 São Paulo, Brazil
Tel.: 0055 (11) 8169281
Fax: 0055 (11) 8169186

M. Poloni
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: 0049 711 685 2040

S. Psomas
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: 632994

G. M. Ribeiro
Companhia Energética de Minas Gerais - CEMIG
Av. Barbacena 1200 - Belo Horizonte
30161-970 - MG, Brazil

T. Sato
Nuclear Plant Design and Engineering Dept., Toshiba Corporation
Yokohama, Japan
Tel.: 0055 (247) 621 135

R. D. Townsend
ERA Technology Ltd.
Cleeve Road
Leatherhead, Surrey KT22 7SA, United Kingdom

B. R. Upadhyaya
The University of Tennessee, Knoxville, TN, USA
Tel.: 00-1-615-974-5048

R. Weber
MIT GmbH
Aachen, Germany
Tel.: 02203/300573

S. Yoshimura
Department of Quantum Engineering and Systems Science, The University of Tokyo
7-3-1 Hongo, Bunkyo, Tokyo 113, Japan
Tel.: 00-81-474-74-1070
Fax: 00-81-474-74-1070