
Application of Data Reconciliation to Process Monitoring

Georges Heyen, Université de Liège (Belgium)


G.Heyen@ULG.AC.BE
Summary: Plant data reconciliation has been under development for a long time and has been successfully transposed from academic to industrial applications. It has proven to be a valuable tool for processing raw measurements collected in operating plants, and is now being linked to real-time data logging systems.
Industrial applications of data reconciliation are discussed. Some make it possible to check sensor degradation and the need for recalibration; others help improve acceptance test runs.
The application of sensitivity analysis to the design of cost-efficient measurement systems is mentioned. Finally, fault detection through the application of data reconciliation in parallel with principal component analysis is also discussed.

Introduction
Are plant measurements fully exploited?
Plant operators recognize that plant measurements and lab analyses are never error free. Using these measurements without consistency evaluation and correction usually yields inconsistent plant balances. Even careful installation and maintenance of the hardware cannot completely eliminate this problem.
The situation has changed with recent progress in automatic data collection and archiving: operators are now faced with a flood of data, but they have few means to extract and fully exploit the information it contains.
Until recently, no obvious solution to this problem was available. But the steady decrease in computer cost, together with new developments in databases and in the networking of measurement devices, now makes possible the daily industrial application of model-based techniques that in the past were applied only by experts.
Generic commercial programs are now available that apply model-based statistical methods to analyze and validate plant measurements. This is called data reconciliation.
What is data reconciliation?
Data reconciliation is based on measurement redundancy. This concept is not limited to the case where the same variable is measured simultaneously by several sensors. It is generalized by the concept of spatial redundancy, where a single variable can be estimated in several independent ways, from separate sets of measurements. For instance, the outlet flow of a mixer can be measured directly, or estimated by summing the measurements of all inlet flow rates. More generally, the plant structure is additional information that is exploited to correct measurements.
Mathematical foundation
Variables describing the state of a process are related by constraints: the basic laws of nature must be verified, such as mass balances, energy balances and some equilibrium constraints.
Data reconciliation is a method that uses this information redundancy and the conservation laws to correct measurements and convert them into accurate and reliable knowledge.
Each measurement is corrected as slightly as possible, in such a way that the corrected measurements match all the constraints (or balances) of the process. The known accuracy of all sensors is exploited, so that data obtained from the most reliable sensors are normally corrected the least.
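In its simplest form (a generic statement of the method, in our own notation rather than that of any particular package), the reconciliation problem reads

$$\min_{x} \; \sum_i \left( \frac{x_i - y_i}{\sigma_i} \right)^2 \quad \text{subject to} \quad g(x) = 0,$$

where $y_i$ are the raw measurements, $\sigma_i$ the sensor standard deviations, $x_i$ the reconciled values, and $g(x) = 0$ the set of balance constraints. For linear constraints $A x = 0$, the solution is available in closed form as $\hat{x} = y - V A^T (A V A^T)^{-1} A y$, with $V = \mathrm{diag}(\sigma_i^2)$.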
Unmeasured process variables are calculated, and the precision of the reconciled values is quantified. In that way data reconciliation acts as a virtual instrument, giving access to important but not directly measurable variables, such as the conversion in a reactor. Moreover, sensitivity analysis tools evaluate the interdependence between all the data.
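The following minimal sketch illustrates these ideas on a hypothetical mixer (the flow values and sensor accuracies are invented for the example): the balance is closed, the least reliable sensor receives the largest correction, and the a posteriori standard deviations quantify the precision gain.

```python
import numpy as np

# Hypothetical mixer: two inlet flows and the outlet flow are all measured;
# mass conservation imposes the single linear constraint f1 + f2 - f3 = 0.
A = np.array([[1.0, 1.0, -1.0]])       # constraint matrix (A @ x = 0)
y = np.array([10.2, 5.1, 14.7])        # raw measurements, kg/s
sigma = np.array([0.1, 0.1, 0.5])      # sensor standard deviations
V = np.diag(sigma ** 2)                # measurement covariance matrix

# Closed-form solution of  min sum(((x - y)/sigma)**2)  s.t.  A x = 0.
K = V @ A.T @ np.linalg.inv(A @ V @ A.T)
S = np.eye(len(y)) - K @ A
x_hat = S @ y

# A posteriori covariance of the reconciled values: S V S'.
sigma_hat = np.sqrt(np.diag(S @ V @ S.T))

print("reconciled flows  :", x_hat)      # [10.178  5.078 15.256]
print("balance residual  :", A @ x_hat)  # ~0: the balance now closes
print("a posteriori sigma:", sigma_hat)  # [0.098 0.098 0.136], all reduced
```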
Although data reconciliation relies on a complex mathematical and statistical basis, its software implementation brings its power within the reach of engineers in charge of plant operation as well as those in charge of plant design.
Simulation and data reconciliation are complementary
Process simulation and data reconciliation are similar in several respects. They are nevertheless quite different as well, the two main differences being the set of equations that are used and the way plant data are handled. A data reconciliation model focuses on balance equations, which are always valid. The model is used to check and correct the data, giving better information on the actual performance of the plant. In a simulation model, rate equations and empirical correlations are used for performance prediction. To solve a simulation model, the user has to provide the right number of specifications: redundancy is to be avoided, while in data reconciliation a higher degree of redundancy is beneficial, since it leads to a higher accuracy of the reconciled measurements.
Data reconciliation is a preliminary step before process simulation. It converts plant data into coherent information which can be used to fine-tune a simulation and optimization model.

Application in a process optimization framework
Tuning model parameters


Mathematical models are used to optimize the operation and control of many plants. However, these models usually contain "gray box" elements that require the tuning of empirical parameters. Data reconciliation is a prerequisite for tuning these model parameters. By identifying unobserved state variables, it allows the parameter identification problem to be decoupled and carried out unit by unit. Thus a sequence of smaller optimization problems replaces the simultaneous identification of all model parameters.
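As an illustration of such a unit-by-unit identification, the sketch below fits the heat-transfer coefficient UA of a hypothetical counter-current water/water exchanger from reconciled temperatures and flow rates (all numbers are invented; since the reconciled data already satisfy the energy balance, both sides give the same duty and the fit is unambiguous).

```python
import numpy as np

# Reconciled data for a hypothetical counter-current heat exchanger.
m_hot, m_cold = 2.0, 3.0             # mass flow rates, kg/s
cp = 4180.0                          # heat capacity of water, J/(kg K)
Th_in, Th_out = 363.0, 333.0         # hot-side temperatures, K
Tc_in, Tc_out = 293.0, 313.0         # cold-side temperatures, K

# Duty from the hot side; the cold side gives the same 250.8 kW because
# the reconciled values already close the energy balance.
Q = m_hot * cp * (Th_in - Th_out)

# Log-mean temperature difference, then the empirical parameter UA = Q / LMTD.
dT1, dT2 = Th_in - Tc_out, Th_out - Tc_in
lmtd = (dT1 - dT2) / np.log(dT1 / dT2)
UA = Q / lmtd

print(f"Q = {Q / 1e3:.1f} kW, UA = {UA / 1e3:.2f} kW/K")   # about 5.60 kW/K
```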
Obtaining a reliable balance
Besides parameter identification, closing the mass and energy balances obtained from plant data can be difficult. Measurement errors cannot be compensated unambiguously using ad hoc techniques.
A typical experience reported by Henkel relates to their gas bill. Thanks to the BELSIM-VALI software, they were able to detect a measurement error leading to an overestimation of their gas import by a few percent. A few percent may seem of little importance, but in this case this single error corresponded to an extra cost of 0.5 million DM per year.
Another typical use of data reconciliation in power plants relates to acceptance test runs. After any upgrade of a plant, tests are usually performed to prove that the expected benefits are achieved. The expensive measurement devices that are normally used to perform those tests can be avoided thanks to data reconciliation. For example KKL, the nuclear power station of Leibstadt in Switzerland, used this technique after the replacement of their LP turbines and identified, at virtually no cost and weeks before the acceptance tests, that the turbine upgrade had increased their production by 46 MW instead of the predicted 23 MW.

Application in measurement system assessment
Sensor follow-up
Data reconciliation is used in a growing number of power plants, including nuclear power plants.
In a nuclear power plant, an accurate measurement of the feed water flow rate is of crucial importance, as it is used to calculate the nuclear reactor power, which must never exceed the licensed power. However, due to orifice degradation, one usually observes a slow decay of the actual feed water rate for a constant feed rate measurement. This only becomes obvious at the yearly orifice re-calibration, where an expensive tracer technology is used. This means that between two re-calibrations the actual reactor power is over-estimated, and thus that the plant does not deliver its full load of electricity.
The nuclear power station of Leibstadt in Switzerland has suppressed this problem by replacing the tracer re-calibration with data reconciliation, bringing a double benefit. First, they no longer need the expensive yearly tracer re-calibration. Secondly, they recalibrate their orifices more frequently on the basis of the reconciled values, which allows them to work permanently at full throughput.
Sensitivity analysis
Knowing the variance of the validated variables makes it possible to assess the respective importance of all measurements in the state identification problem. In particular, some measurements might appear to have little effect on the result, and might thus be discarded from the analysis. Other measurements may appear to have a very high impact on the validated variables and on their variance: these measurements should be carried out with special caution, and it may prove wise to duplicate the sensors (Heyen 1996).
Let us examine an example based on the BELSIM-VALI software. The sensitivity analysis module generates two types of reports. The first one contains, for each measurement, the measured value and the reconciled value, the assumed accuracy (standard deviation) of the measurement and the a posteriori accuracy of the reconciled data. All state variables estimated from a given measurement are also listed, with a factor indicating the contribution of the measurement variance to the variance of the reconciled value. The second type of report contains the same information, but sorted by state variable: for each variable, one obtains the list of the most important measurements used to estimate its value.
The following example (Heyen 1996) illustrates the content of the first report type. It relates to an ammonia synthesis loop, whose model involves 28 measurements, 33 unmeasured state variables, 50 constraint equations, and thus 17 redundancies. The information listed relates to the measurement of the ammonia mole fraction in the liquid outlet of the ammonia condenser, identified by tag name FL1_MFNH3.
Measurement: MFNH3 M FL    Tag Name: FL1_MFNH3

                 Value       Sigma
  Reconciled     98.005 %    .12151
  Measured       97.800 %    1.0000

  Variable       Tag Name     Contribution
  MFNH3 M FL     FL1_MFNH3    1.48%
  MFH2  M FL     FL1_MFH2     1.45%

This value has been corrected from 97.80 mol% to 98.005 mol%. The standard deviation of the validated value is 0.121 mol%, significantly lower than the measurement standard deviation of 1 mol%. This measurement makes some (almost negligible) contribution to the estimation of the validated values of other measured state variables. Surprisingly, the variance of the ammonia molar fraction in the liquid stream FL is almost independent of the variance of the corresponding measurement (contribution 1.48%). The "Contribution" column in the table contains the contribution of the variance of measurement k to the variance of reconciled state variable i.
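For the linear case sketched earlier, these contribution factors follow directly from the reconciliation gain matrix; the snippet below shows the principle on the mixer example (a generic construction, not BELSIM-VALI's internal algorithm).

```python
import numpy as np

# Same linear mixer example as before: x_hat = S @ y.
A = np.array([[1.0, 1.0, -1.0]])
sigma = np.array([0.1, 0.1, 0.5])
V = np.diag(sigma ** 2)
S = np.eye(3) - V @ A.T @ np.linalg.inv(A @ V @ A.T) @ A

# With independent sensors, Var(x_hat[i]) = sum_k S[i,k]**2 * sigma[k]**2,
# so each term of the sum is the contribution of measurement k to variable i.
var_hat = (S ** 2) @ sigma ** 2
contribution = (S ** 2) * sigma ** 2 / var_hat[:, None]

np.set_printoptions(precision=3, suppress=True)
print(contribution)   # each row sums to 1: the variance shares for variable i
```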
Further inspection of the report shows that no key variable is significantly influenced by the measurements of the condensate composition, since the contribution of these measurements to the variance of the reconciled values is less than 10%. One may conclude that those composition measurements do not carry much information. It would be wise to balance the cost of the measurement against the (small) additional cross-check it allows through data reconciliation. To verify this, one may refer to the second type of sensitivity report, which shows how each state variable has been reconciled. The report contains a separate entry for each state variable, as shown below for the molar fraction of nitrogen in the same stream FL.
The first line of the table identifies the variable (N2 molar fraction in mixture FL), the tag name of the corresponding measurement, the reconciled value and its standard deviation, and the physical units of the variable.
Variable: MFN2 M FL    Tag Name: FL1_MFN2

                 Value       Sigma
  Reconciled     .52113 %    .03216
  Measured       .50000 %    .10000

  Measurement    Tag Name     Contribution
  MFH2  M FL     FL1_MFH2     53.34%
  MFN2  M FL     FL1_MFN2     10.35%
  T  S  RECYC    RECYC_T       6.69%
  T  S  PURGE    PURGE_T       6.69%
  MFN2  M FV     FV1_MFN2      5.09%
  MFNH3 M FV     FV1_MFNH3     4.19%

The measured value and its standard deviation are recalled on the next line for comparison: the uncertainty has been decreased by a factor of 3. The following lines show the most significant measurements used to estimate the selected variable.
After deleting the 5 composition measurements for stream FL, the redundancy of the complete synthesis loop is reduced to 12. Running the validation program again demonstrates, however, that the validated variables are not significantly affected. This can also be illustrated in another way: when validation is performed after adding some noise to the measurements of the FL composition, the state of the process is not much affected and the perturbed variables are correctly brought back to normal by reconciliation. This example allows one to conclude that sampling and analysis costs can be decreased by suppressing all "inefficient" measurements, while focusing on the improvement of other measurements.
Unmeasured variables are also analyzed in the sensitivity report, as shown below for the reaction extent.
Variable: EXT1 U REAC    Tag Name: Computed    Value: .0989 kmol/s    Sigma: .00414

  Measurement      Tag Name       Contribution
  MASSF M FL       FL1_MASSF      35.87%
  T  S  RCTIN      RCTIN_T        14.91%
  T  S  RCTOUT     RCTOUT_T       14.84%
  MFNH3 M RCTIN    RCTIN_MFNH3     7.86%
  MASSF M RCTIN    Average         6.78%
                     F06_MASSF      50.00%
                     RCTIN_MASSF    50.00%
  T  S  F07        F07_T           5.24%

The first line of the table identifies the variable (the EXTENT parameter in unit REAC). Since this variable is not directly measured, it has no tag name. It appears that the most important measurement is the condensate flow rate (which indeed is closely related to the ammonia production in the reactor). The inlet and outlet temperatures, combined with the mass flow rate in the reactor, also contribute to the estimation (they are linked to the conversion by the energy balance). The other variables play a less obvious role. One notices that the measurement of the reactor inlet mass flow rate is in fact a weighted average of two separate values linked to streams F06 and RCTIN.
As a conclusion, one can infer rules to identify good measurements: they should not be corrected too much by data reconciliation (little measurement error), but their associated standard deviation should be decreased by the validation process (existence of redundancy affecting the measurement).
Knowing the a posteriori accuracy of all process variables makes it possible to calculate confidence bounds or safety margins on derived results (e.g. the approach to an explosivity limit, or to a compressor surge curve). A properly selected set of measurements, supplemented by the data reconciliation procedure, allows the safety margin with respect to critical operating conditions to be decreased, since the data reconciliation tool reduces the uncertainty on the actual process conditions.
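The margin calculation itself is elementary. With an invented reconciled concentration that must stay below an explosivity limit, a roughly 95% upper bound shrinks with the a posteriori standard deviation:

```python
z = 1.96                            # ~95% two-sided Gaussian factor

x_hat = 4.2                         # reconciled concentration, vol% (invented)
sigma_raw, sigma_rec = 0.40, 0.15   # accuracy before / after reconciliation

# Upper confidence bounds to compare against the explosivity limit:
print("raw data bound  :", x_hat + z * sigma_raw)   # 4.98 vol%
print("reconciled bound:", x_hat + z * sigma_rec)   # 4.49 vol%: operation can
                                                    # move closer to the limit
```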
Reduction in analysis cost
The reconciliation package BELSIM-VALI has been used by Wacker Chemie since 1992. Figure 2 shows the evolution of the total sum of squared errors in one of their plants. The drastic reduction of this sum is due to several factors:
- detection of erroneous measurements
- data reconciliation model improvements: unaccounted by-passes, etc.
- process knowledge improvements: unexpected reactions have been identified, better estimates obtained for some unmeasured stream compositions, etc.

[Figure 2. Reduction in measurement errors. The plot tracks the weighted squared sum of measurement errors, $\sum_i \left( (Y_{\mathrm{recon},i} - Y_{\mathrm{meas},i}) / \sigma_i \right)^2$, in one Wacker Chemie plant from 1992 to 1994; it drops from about 30 000 towards zero, and the switch to daily automatic online validation is marked on the plot. Source: Wacker Chemie GmbH]


Nowadays the total sum of squared errors is in the range of 1000, and it is the primary indicator tracked by the operators. Any significant increase in the sum indicates either a process upset, a sensor drift, or a sensor breakdown.
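Tracking this sum is the classical global test of data reconciliation: under normal operation, the weighted sum of squared corrections approximately follows a chi-square distribution with as many degrees of freedom as there are redundancies, which yields a natural alarm threshold. A generic sketch follows (not Wacker's actual implementation; the numbers reuse the invented mixer example, which has one redundancy):

```python
import numpy as np
from scipy.stats import chi2

def global_test(y_meas, y_recon, sigma, redundancy, alpha=0.05):
    """Flag a probable process upset or sensor fault when the weighted
    squared sum of corrections exceeds its chi-square threshold."""
    sse = float(np.sum(((y_recon - y_meas) / sigma) ** 2))
    threshold = chi2.ppf(1.0 - alpha, df=redundancy)
    return sse, threshold, sse > threshold

# Invented readings, their reconciled values, and the sensor accuracies.
y_meas = np.array([10.2, 5.1, 14.7])
y_recon = np.array([10.178, 5.078, 15.256])
sigma = np.array([0.1, 0.1, 0.5])
print(global_test(y_meas, y_recon, sigma, redundancy=1))
```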
The main economic benefit of this application has been a 50% reduction in the number of routine analyses needed to operate the plant.

Application in fault detection


Measurements are needed to monitor process efficiency and equipment condition, but also to make sure that process parameters remain within an acceptable range, in order to ensure good product quality and avoid equipment failure and hazardous operation.
Model-based statistical methods, such as data reconciliation, are available to analyze and validate plant measurements. The objective of these algorithms is to remove errors from the available measurements, and to yield complete estimates of all the process state variables as well as of the unmeasured process parameters.
Besides that, algorithms are needed to detect faults, i.e. any unwanted, and possibly unexpected, mode of behavior of a process component (Cameron 1999). In practice, the following faults need to be detected and diagnosed in the process industries:
- equipment failures and degradations
- product quality problems
- internal deviations from safe or desired operation (hot spots, pressure rises).
Fault detection and diagnosis is generally accepted to occur in three stages:
1. Detection - has a fault occurred?
2. Identification - where is the fault?
3. Diagnosis - why has the fault occurred?


Data reconciliation and principal component analysis are two recognized statistical methods used for plant monitoring and fault detection. They can be combined for increased efficiency (Amand 2000).
Plant measurements always exhibit some variability. However, the process variables do not fluctuate in a completely random way: they are linked by a set of constraints (mass and energy balances, operating policies) that can be captured in a process model. Even if a rigorous mathematical model is not available, statistical analysis of the process measurement time series can reveal the underlying correlation between the measured variables (Kresta et al., 1991).
If measurements related to abnormal conditions are removed from the analyzed set, the principal components of the covariance matrix (i.e. the eigenvectors associated with the largest eigenvalues) correspond to the major trends of normal and accepted variations. Most of the variability in the process variables can usually be represented by the first few principal components, which span a subspace of lower dimension corresponding to the normal process states (Snedecor, 1956). The projection of any new measurement point onto this subspace is expected to follow a Gaussian distribution, and can be checked for deviations from its mean value using the usual statistical tests.
With this approach, one can verify whether a new measurement belongs to the same distribution as the previous sets that were recognized as normal. If not, a fault is likely to be the cause: either normal operation in conditions that were not covered in the original set used to determine the principal component basis, or an excursion out of the normal range, or an equipment failure breaking the normal correlation between the process variables.
[Figure 3: Explained process variability vs. number of principal components (1 to 186), comparing reconciled data with raw data.]
However, the number of principal components needed to represent a significant part of the process variability can be reduced dramatically if reconciled data are used to determine the covariance matrix and its eigenvectors. Figure 3 illustrates this for an example (an ammonia synthesis loop, with 186 state variables). The curves show the fraction of the process variance explained by the projections on the principal components, both when the covariance matrix is calculated from raw data (perturbed with random noise) and when it is calculated from reconciled data.
Data reconciliation is recommended as a preliminary filtering step before the determination of the PCA projection matrix and the correlation matrix between the principal components and the original variables. These matrices are determined only once, on the basis of the reference data set. If raw data were used to determine these matrices, a larger number of components would be needed to represent the data variability at the same confidence level, and the least significant components would be much affected by noise.
On the other hand, raw data should be used when applying the method to detect faults. Data reconciliation filters out some deviations, such as faults of the measuring devices, and is thus likely to mask them. One should keep in mind that data reconciliation, by correcting faulty measurements, can delay the detection of a fault; the quality of the model used is therefore very important.
Overall, the fault detection method based on principal component analysis proved effective in the several cases studied by Amand et al. (2000). The sensitivity of the method can be adjusted through the confidence regions for each component. The method is a priori relevant to any kind of process.
The precise identification of the fault location is not always possible. Taking into account the process dynamics and the dead times in fault propagation might improve the capabilities of the method, provided the measurements are sensitive and fast.
We propose to use reconciled data in the first step of the principal component analysis technique, namely the determination of the projection matrix (eigenvectors). Principal component analysis can then be applied directly to the raw process data for monitoring purposes. The pairing of the two techniques aims at better efficiency in fault detection; it relies mainly on monitoring a lower number of aggregate variables. A sketch of this pairing is given below.
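A minimal sketch of the pairing, under stated assumptions: synthetic data stand in for the reconciled reference set, and a simple chi-square approximation is used for the Hotelling T2 limit rather than the exact F-based one.

```python
import numpy as np
from scipy.stats import chi2

# Step 1: build the PCA basis from reconciled reference data.  Synthetic
# stand-in: two underlying operating "modes" plus small residual noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
W = rng.normal(size=(2, 6))
X_ref = latent @ W + rng.normal(scale=0.05, size=(500, 6))

mean = X_ref.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X_ref, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1
P, lam = eigvec[:, :k], eigval[:k]        # components explaining 95% variance

# Step 2: monitor raw measurements.  T2 watches excursions within the
# normal subspace; the residual SPE reacts to broken correlations.
def monitor(x):
    d = x - mean
    t = d @ P                              # scores on retained components
    return float(t @ (t / lam)), float(d @ d - t @ t)

t2_limit = chi2.ppf(0.99, df=k)            # chi-square approximation
x_normal = X_ref[10]                       # a point from normal operation
x_faulty = X_ref[10].copy()
x_faulty[0] += 3.0                         # a drifting sensor breaks the
                                           # learned correlation structure
print("normal T2, SPE:", monitor(x_normal), "T2 limit:", round(t2_limit, 2))
print("faulty T2, SPE:", monitor(x_faulty))
```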

Conclusions
Benefits
The benefits of data reconciliation are numerous and include:
- improvement of the measurement lay-out
- a decrease in the number of routine analyses
- a reduced frequency of sensor calibration: only faulty sensors need to be recalibrated
- removal of systematic measurement errors
- systematic improvement of process data
- a clear picture of the plant operating condition and reduced measurement noise in the trends of key variables
- early detection of sensor deviations and of equipment performance degradation
- actual plant balances for accounting and performance follow-up
- safe operation closer to the limits
- quality management at the process level
- on-line model-based optimization tools working with more accurate information.

One of the goals of validation is to improve the knowledge of the system state variables. Providing values is certainly a great help, but assessing their reliability is also important; estimates of the standard deviation of the validated variables and of the unmeasured variables have therefore been developed.
Three types of questions can be analyzed using sensitivity analysis.
The first is to check how the accuracy of a given state variable is influenced by the set of measurements: which measurements contribute significantly to the variance of the validated results for a set of state variables?
The second is to detect the state variables whose accuracy is mostly influenced by a given measurement: which state variables have a variance that is significantly influenced by the accuracy of a given measurement?
The third is to study how the value of a state variable is influenced by the values and the standard deviations of all measurements.
Based on this information, decisions can be taken either when analyzing measurements from an existing plant or when designing a measurement system. Unnecessary analyses may be deleted, or requested less often, just to allow cross-checks, and this can result in significant savings in operating costs. One can identify key measurements for which any enhancement of accuracy would result in a significant improvement in the quality of the process monitoring. One can also select the best locations for sensors, resulting in a good estimation of all key process variables at the lowest investment cost.

Data reconciliation allows the use of model-based methods for plant performance follow-up, as well as for the preparation of scale-up and new process development. The concept is not purely theoretical; it works in practice with success, for instance at Wacker-Chemie. The use of generic software packages gives non-experts access to powerful tools, while making sure that their company keeps its know-how under control.
Data reconciliation has proved its efficiency in:
- total quality management
- safety audits
- ecological audits
- continuous plant improvement.

References

Amand T., Heyen G., Kalitventzeff B., "Plant Monitoring and Fault Detection: Synergy between Data Reconciliation and Principal Component Analysis", submitted to ESCAPE 10, Florence, Italy (2000)

BELSIM, VALI III Users Guide, BELSIM s.a., Allée des Noisetiers 1, 4031 Angleur, Belgium (1999)

Cameron D., "Fault Detection and Diagnosis", in Model Based Manufacturing - Consolidated Review of Research and Applications, document available on the CAPE.NET web site (http://capenet.chemeng.ucl.ac.uk/) (1999)

Heyen G., Maréchal E., Kalitventzeff B., "Sensitivity Calculations and Variance Analysis in Plant Measurement Reconciliation", Computers and Chemical Engineering (1996)

Joris P., Kalitventzeff B., "Process Measurements Analysis and Validation", Proc. CEF'87: Use of Computers in Chemical Engineering, Italy (1987)

Kresta J.V., MacGregor J.F., Marlin T.E., "Multivariate Statistical Monitoring of Process Performance", The Canadian Journal of Chemical Engineering, 69, 35-47 (1991)

Madron F., Process Plant Performance: Measurement and Data Processing for Optimisation and Retrofits, Ellis Horwood, Chichester, England (1992)

Romagnoli J.A., Sanchez M.C., Data Processing and Reconciliation for Chemical Process Operations, Academic Press (1999)

Snedecor G., Statistical Methods (fifth edition), The Iowa State College Press (1956)