
Simulation Software and Engineering Expertise: A Marriage of Necessity

By:
John E. Coon [Speaker]
Melinda G. Kusch
Michael C. Rowland
John R. Cunningham

Presented at the AIChE Spring Meeting


March 10, 1998
New Orleans, LA

Paper Description

This paper describes the co-dependent relationship between engineering expertise and powerful
simulation tools in the achievement of high quality process engineering.
Abstract

Re-engineering has eroded the process experience base within the CPI. Today’s novice engineers
often lack experienced mentors to help direct them in producing and validating engineering calculations.
Employers, by default, often hope that modern engineering tools (such as process simulators) will help
fill this experience void.

Today’s process simulators certainly give this impression. Their comprehensive utilization of
Microsoft Windows gives them a standard look and feel which groups them with popular word
processing and spreadsheet applications in the minds of many users. Powerful graphical features, made
easy-to-use, allow even first-time users to quickly build a coherent process model. Enthusiastic,
inexperienced users sometimes accept a coherent model as sufficient, rather than merely necessary.
More experienced users have usually learned the importance of avoiding this potentially crucial mistake.

Modern process simulators, when used by experienced engineers, can nearly always supply a
valid answer to a process question. Today’s simulators offer a broader and deeper range of features and
data than ever before. Technology advances in thermodynamics, equipment modeling techniques, and
numerical solution algorithms allow experienced engineers to leverage their expertise more completely.
The combination of experienced engineer and powerful simulation tool is, more than ever, a true "expert
system".

The prudent manager recognizes the need to keep the "expert" in the "expert system".
Simulators, without competent users, can yield improbable (and even impossible) results in the real
world. It is the responsibility of the engineer to validate the quality of process design, troubleshooting,
and optimization results from simulation tools.

This paper presents prerequisites for good process design, and shows, by example, how
simulation choices may significantly affect the validity of the ultimate results. Novice simulation users
can take important steps to increase the quality of their simulation work. These steps include taking full
advantage of primary and secondary support services offered by simulation vendors, on-line help
systems, product documentation, and user training courses. These services allow simulation vendors to
offer their clients a true "engineering solution", rather than just a piece of software.
Introduction

The question posed by this session’s title (Engineering Experience Versus Milliseconds of
Computing Time?) is an interesting one to those of us in the simulation business. At the simplest
level, it is analogous to asking the question, “A good carpenter or a good electric drill?”. Few people
would argue that it is the skill of the carpenter, and not the quality of the drill, that is important in the
field of carpentry. We in the simulation field consider it a tribute to the impact that process simulation
and other CAPE applications have had on chemical engineering that this session’s question is even
asked. That the value of a tool can be weighed against the value of its user is a tremendous
acknowledgement of these software tools.

Perhaps a more appropriate analogy might be “The best race car driver or the best race car?”.
Having either of these with a poor representative of the other will not lead to victory. Likewise, it is
possible to have a winning combination of these even if neither one is the best on its own. Both the tool
and the user are critical to success. This is the situation with chemical engineers and process simulation
software.

In their ongoing efforts to remain globally competitive, companies are continually trying to get
more productivity from fewer staff. Very often, the staff that remains to do process simulation has
little experience in process design. Adding further to the problem, more and
more company experts in various chemical engineering specialties are reaching retirement age. In many
cases, the expertise that these people had is disappearing with them so that the engineers doing design do
not have anyone to turn to for expert advice on their design efforts. We believe that this is a serious
situation but that it can be dealt with if some basic rules are followed.

Purposes of Simulation

Process simulation software allows an engineer to perform many tasks in an extremely efficient
and productive manner. Some of the common uses for simulation include, but are certainly not limited
to:

1. Process and product feasibility studies
2. Generation and analysis of process alternatives
3. Design of new processes
4. Sensitivity studies on the viability of new process designs relative to the assumptions made
5. Optimization of new process designs
6. Simulation and optimization of existing plants
7. Troubleshooting of existing plant problems
8. Debottlenecking of existing plants

In the past, many of these tasks were practically infeasible because of the amount of effort that
went into development of process heat and material balances. Only in the past decade has the cost of
computing power come down enough to allow these activities to become an integral part of the daily
work of most process and plant engineers. Adding to the acceptance of process simulation as a routine
activity for these engineers has been the substantial gains that have been made in making the simulation
tools easy to use and intuitive. These two trends have combined to extend the usage of simulation far
beyond its historical domain: the central engineering groups of large companies, where it was used by
a limited number of simulation experts.

There is no doubt that this proliferation of simulation tools to non-traditional users has been one
of the key factors in the incredible productivity improvements that have been made by the CPI in the last
decade. Unfortunately, there is a price for these gains. The very same ease-of-use that has allowed more
engineers to utilize simulation in their work has also tended to mask the importance of following the
correct procedures when doing simulation. The ease-of-use enhancements have only simplified the
mechanics of producing a good process simulation; they have not simplified the engineering that needs
to occur to produce good simulations. Some of the basics of good process simulation are:

1. Development of a sound physical properties package for the simulation.
2. Development of a flowsheet that embodies the important aspects of the actual process
without being complicated by unimportant details.
3. Development of the control strategy to be used in solving the flowsheet. This refers to the
simulation control mechanisms, not the process control of the actual process, which can only
be studied using dynamic simulations of the process.
4. Understanding of the assumptions implicit in the simulation flowsheet and their impact on
the results.
5. Understanding of the economics involved and their impact on optimization efforts.

If the above aspects of simulation are understood and followed, a meaningful simulation of a
process can usually be developed and used to improve the process. Some of the common errors that can
defeat the usefulness of a simulation are discussed in the next section.

Common Errors

There are as many possible errors that can be made in putting together a process simulation as
there are processes to simulate, but the following are ones that we encounter on a regular basis.

Inadequate Thermodynamics Package

Many papers have been written discussing the importance of thermophysical property
predictions in process simulation. Two papers that we consider to be good introductions to this area are
the 1994 paper by Mathias and Klotz (1) and the 1996 paper by Carlson (2). Mathias and Klotz discuss
aspects of various models that make them more, or less, appropriate for physical property modeling.
This aspect of thermodynamic package selection is beyond the scope of this paper and we refer the
reader to the previously mentioned paper. Carlson discusses five important aspects of this issue. This
paper discusses the first two of these, which are 1) selection of the thermodynamics methods to be used
in a simulation and 2) validation of the adequacy of the chosen method. These are the two areas that
most commonly lead to poor simulation results. The other three issues discussed by Carlson consider
what to do if your validation indicates that your thermodynamics package is not adequate. For the
purposes of this paper, determining whether your thermodynamics package is valid is the key point,
because it is the lack of awareness of this danger that is our chief concern.
There are many ways that the thermodynamics package can cause problems, ranging from gross
misuse of methods to some issues that are rather subtle. Some of these issues are:

1. Completely wrong choice of the method.

This is a common mistake of novice engineers. The most likely candidate in this case is the
choice of a traditional thermodynamic method (like SRK or NRTL) to simulate a system that
is undergoing reaction. Examples of this include using such methods to
simulate sour water systems, amine treating systems, and electrolyte systems.

2. Use of the appropriate method without consideration of the availability of adjustable
interaction parameters.

Very often, users select the proper method for their system but do not then take the essential
next step, which is to determine whether or not the simulator has the necessary binary
parameters for the method. This can lead to a false sense of security because the correct
method has been chosen when, in reality, without the proper interaction parameters, the
method may return results that are disastrous. It is important to point out at this juncture that
the bad result is not a failure of the simulation tool but of the user. No simulation tool can
have the necessary parameters to correctly model the almost infinite number of systems that
can be investigated. In fact, it very often turns out that there are no parameters in the
simulator for a given binary because no data on that pair exists in the literature. It is the
responsibility of the user to determine if this is the case.

3. Use of the appropriate method but beyond its intended range of applicability.

An example of this type of problem was presented by Persichetti et al. (3). It involves the
design of wastewater strippers using the method (Henry’s Law) recommended by the EPA
for these environmental clean-up columns. The EPA provides a database of Henry’s
constants for many components of environmental interest. At first glance, it would appear
that the necessary data to use the correct method are available, but that is not universally true.
The problem here is that the Henry’s constants in the database are values at 25 deg. C with no
temperature dependence. If you are designing air strippers, which generally
operate at ambient pressure and temperature, this is fine and the results are quite acceptable.
If, on the other hand, you are designing steam strippers, which will operate at 100 deg. C, the
Henry’s constants in the EPA databank are inadequate. They will, in some cases, lead to the
mistaken result that components can be steam stripped out of water that, in reality, cannot be.
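The temperature problem above can be made concrete with a short sketch. The van’t Hoff relation is one common way to extrapolate a Henry’s constant away from 25 deg. C; the component, its 25 deg. C Henry’s constant, and its enthalpy of dissolution below are all hypothetical values chosen only for illustration.

```python
import math

def henry_vant_hoff(h_298, dh_sol, T):
    """Extrapolate a 25 deg C Henry's constant to temperature T (K) via the
    van't Hoff relation: ln(H(T)/H(298)) = (dh_sol/R) * (1/T - 1/298.15),
    where dh_sol is the enthalpy of dissolution (negative for most gases)."""
    R = 8.314  # J/(mol K)
    return h_298 * math.exp((dh_sol / R) * (1.0 / T - 1.0 / 298.15))

# Hypothetical volatile organic: H = 500 atm (mole-fraction basis) at 25 deg C,
# dh_sol = -30 kJ/mol. A fixed 25 deg C databank value would be used at both
# temperatures regardless.
h_25 = henry_vant_hoff(500.0, -30e3, 298.15)   # air stripper conditions
h_100 = henry_vant_hoff(500.0, -30e3, 373.15)  # steam stripper conditions
print(h_25, h_100)
```

Depending on the sign of the dissolution enthalpy, the fixed 25 deg. C value may overstate or understate the volatility at 100 deg. C; in this hypothetical case the constant changes by roughly a factor of ten, which is exactly the kind of error that makes a steam stripper design unreliable.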

4. Use of available interaction parameters without validation of their appropriateness for the
complete range of conditions being encountered.

This is a slightly subtler version of #2 and #3. Here, the user applies parameters fitted under
one set of conditions to predict results for the same binary under different conditions. Due to the
limited experimental data that exists, this exercise is done all of the time, usually without ill
effects. Sometimes, this leads to serious problems. As an example, let’s say that the binary
interaction parameters for a given binary were determined from experimental data on the
infinite dilution activity coefficients and these parameters predict an azeotrope at 98.1 mole
% of component A, versus an experimental azeotrope value of 97.9 mole % A. If this
mixture is distilled to a product purity of 95 mole % A, the 0.2 % error in the azeotrope
prediction will probably lead to an insignificant error in the prediction of the column needed
to perform the separation. If the product specification were 97.5 mole % A, this same error
could lead to an undersized column with too few trays, making it difficult to make “on-spec.”
product. If the product specification is 98 mole % A, this same error will lead to a design
where it is physically impossible to achieve the desired separation.

An example of this type was presented by Coon et al. (4) at an AIChE meeting. The system
involved was ethanol-benzene-water, and the paper showed that, even though the binary
parameters for NRTL used to model this system gave excellent results for each of the three
binaries, they were not accurate enough for design of the ternary system. They overestimated
the size of the two-liquid-phase region and also predicted liquid-liquid tie-lines that were
steeply slanted even though the experimental data show them to be almost flat. These errors
led to a design that contains a much larger recycle stream than is really needed.
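The three purity cases above reduce to a simple feasibility check: for a minimum-boiling azeotrope, no number of trays can push the distillate past the azeotropic composition. The toy comparison below, using the azeotrope values from the example (predicted 98.1 versus experimental 97.9 mole %), shows where the 0.2 mole % error becomes fatal:

```python
def feasible_distillate(spec, x_az):
    """For a minimum-boiling azeotrope, a distillate purity spec is reachable
    by ordinary distillation only below the azeotropic composition of the
    light component."""
    return spec < x_az

x_az_model, x_az_data = 0.981, 0.979  # predicted vs. experimental azeotrope

for spec in (0.95, 0.975, 0.98):
    model_ok = feasible_distillate(spec, x_az_model)
    real_ok = feasible_distillate(spec, x_az_data)
    print(f"spec {spec:.3f}: model says {model_ok}, data say {real_ok}")
```

At 95 mole % both agree comfortably; at 97.5 mole % both technically agree but the design margin has nearly vanished; at 98 mole % the model approves a separation that the data show to be physically impossible.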

5. Inappropriate use of estimated pure component properties and binary interaction parameters.

Since there is not nearly enough experimental physical property data in the world to support
the needs of simulation, much of the data that gets used is estimated. While this is often
acceptable, it can also cause problems. One common example is when a reaction produces a
mixture of isomers, which has to be separated. Since the commonly used pure component
property prediction methods do not differentiate between isomers, they predict the same
properties for all of the isomers. This makes it impossible to determine the best way to
separate the mixture, since all separation technologies exploit some difference in the
component physical properties.

6. Inadequate discretization of continuous systems like distillation curves and polymer
molecular weight distributions.

This is a common problem that is encountered mostly in the oil and polymer industries,
where simulations are done using a discrete number of components to model a real system
that has a huge number of actual components, which are usually represented by a continuous
curve. Fortunately, this problem is easier to detect and correct than most of the others.
Simply test your results by increasing the number of cuts used and keep doing so until the
results are no longer dependent on the number. In spite of the ease with which this can be
eliminated, we often encounter simulations that do not have enough cuts to yield good
results.
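The doubling test described above is easy to automate. The sketch below averages a hypothetical continuous property curve (a made-up TBP-style curve, deg. C versus cumulative fraction) with an increasing number of equal-width cuts until the answer stops moving:

```python
def mean_property(n_cuts, prop=lambda x: 200.0 + 300.0 * x ** 1.5):
    """Approximate the average of a continuous property curve using n_cuts
    equal-width pseudo-components evaluated at the midpoint of each cut."""
    width = 1.0 / n_cuts
    return sum(prop((i + 0.5) * width) * width for i in range(n_cuts))

n = 2
prev = mean_property(n)
while True:
    n *= 2
    cur = mean_property(n)
    if abs(cur - prev) < 0.1:  # stop once doubling no longer moves the answer
        break
    prev = cur
print(n, cur)  # converges toward the exact average of 320 deg C
```

The same loop applies unchanged to a real simulation: double the number of cuts, re-run, and stop only when the results of interest no longer depend on the count.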

7. Inappropriate use of “KDATA”, which are user over-rides to the thermodynamic method
predictions.
In spite of the plethora of thermodynamic methods that are available in modern commercial
simulators, there will always be systems needing to be simulated that cannot be accurately
modeled, regardless of the care taken with the thermodynamic package. To overcome this,
most commercial simulators offer a way for the user to directly override the K-value
predictions of the thermodynamics package and adjust these values until the plant data are
accurately reproduced. While this is a powerful tool in the hands of an expert, it is extremely
dangerous if used improperly. The trouble usually occurs when a simulation developed by an
expert that includes KDATA gets inherited by a novice user. The novice, not understanding
the significance of the KDATA, uses the simulation as a starting point for developing a new
simulation of the same system but with substantially different conditions. Since the KDATA
are not predicted in a rigorous manner, they do not change in a realistic manner when the
simulation conditions change. This can mean that the new simulation yields poor, even
impossible results even if the first simulation was an excellent representation of the process.
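A minimal sketch of this inheritance problem, using an ideal Raoult’s-law K-value with made-up Antoine coefficients (all numbers hypothetical): the frozen “KDATA” value stays put when the inherited simulation is run at new conditions, while a rigorous prediction responds.

```python
def k_raoult(T_c, P_mmHg, A, B, C):
    """Ideal Raoult's-law K-value, K = Psat/P, from an Antoine vapor
    pressure correlation (T in deg C, pressures in mmHg)."""
    p_sat = 10.0 ** (A - B / (T_c + C))
    return p_sat / P_mmHg

# Made-up Antoine coefficients for a hypothetical light component.
A_, B_, C_ = 6.85, 1100.0, 230.0

# The expert tuned a K-value override at the original conditions (80 deg C):
k_override = k_raoult(80.0, 760.0, A_, B_, C_)

# The inherited simulation is re-run at 120 deg C. The override is frozen;
# a rigorous thermodynamic prediction moves substantially.
k_rigorous_120 = k_raoult(120.0, 760.0, A_, B_, C_)
print(k_override, k_rigorous_120)
```

In this hypothetical case the rigorous K-value more than doubles between the two conditions, while the frozen override silently carries the 80 deg C behavior into the 120 deg C simulation.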

Reactors

After the thermodynamics, the area of process flowsheeting that causes the most problems is
reactor modeling. There are many reasons for this, including the inherent complexity of modeling
reactors, but here we want to focus on some frequently encountered problems that can lead to
unjustified results. Most of these problems arise from the fundamental assumptions that are
made when the user selects the type of reactor that will be used for modeling his particular system. Most
simulators have a number of different types of reactors that can be used. Examples are:

1. Conversion Reactors

This type of reactor is the most commonly used because it requires minimal data. The user
only has to know the stoichiometry of the reaction. The user then specifies a conversion of
reactants into products, sometimes as a function of temperature. Trouble occurs when this
type of reactor is used in optimization of a flowsheet because, no matter how well the base
case was modeled, optimization will very likely change the conditions in the reactor. Even if
the conversion has been made a function of temperature, changing feed rates, etc. can
invalidate the basic assumptions that went into the conversion model. There is also the
possibility of converting the reactants into products beyond the equilibrium point.
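The last danger noted above, converting past the equilibrium point, can be checked with a few lines. For an ideal isomerization A &lt;-&gt; B, the equilibrium conversion follows directly from the equilibrium constant; all numbers below are hypothetical:

```python
def conversion_reactor(feed_a, conversion):
    """Fixed-conversion model of A -> B: converts the specified fraction
    of the feed regardless of the reactor conditions."""
    reacted = feed_a * conversion
    return feed_a - reacted, reacted  # (A out, B out)

def equilibrium_conversion(k_eq):
    """Equilibrium conversion for an ideal isomerization A <-> B,
    where K = x_B / x_A at equilibrium."""
    return k_eq / (1.0 + k_eq)

k_eq = 3.0             # hypothetical equilibrium constant at reactor T
spec_conversion = 0.90
a_out, b_out = conversion_reactor(100.0, spec_conversion)

# The specified 90 % conversion exceeds the 75 % equilibrium limit, so the
# conversion reactor reports a thermodynamically impossible outlet.
print(spec_conversion, equilibrium_conversion(k_eq))
```

A one-line comparison of the specified conversion against the equilibrium limit at the actual reactor conditions is a cheap sanity check before trusting any optimized flowsheet built on a conversion reactor.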

2. Equilibrium Reactors and Gibbs Reactors

These types of reactors assume that each reaction proceeds until it reaches its equilibrium
conversion. Trouble begins when changing conditions make that a bad assumption. This can
occur when an optimization increases the feed rate through a reactor to the point where the
reactor is no longer large enough for the reactions to reach equilibrium.
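The residence-time effect that an equilibrium reactor ignores is easy to illustrate with first-order kinetics in an ideal CSTR; the rate constant and reactor volume below are hypothetical:

```python
def cstr_conversion(k_rate, tau):
    """Steady-state conversion of a first-order A -> B reaction in an
    ideal CSTR: x = k*tau / (1 + k*tau), with residence time tau = V/q."""
    return k_rate * tau / (1.0 + k_rate * tau)

k_rate = 0.5   # 1/min, hypothetical first-order rate constant
volume = 10.0  # m3, hypothetical reactor volume

# An "optimization" that pushes more feed through the same vessel cuts the
# residence time, and conversion falls progressively further short of any
# assumed equilibrium value.
for q in (1.0, 2.0, 5.0):  # m3/min volumetric feed rate
    tau = volume / q
    print(q, round(cstr_conversion(k_rate, tau), 3))
```

Quintupling the feed rate in this sketch drops the conversion from about 83 % to 50 %, while an equilibrium or Gibbs reactor would report the same outlet composition at every throughput.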

3. Rigorous Kinetic Reactors [usually CSTRs (Continuous Stirred Tank Reactors) or PFRs
(Plug Flow Reactors)]

These are the most rigorous reactors and, since they require the most user input data, the least
commonly used. They are the best choices in that they account for the effect of changing
residence time in a reactor, etc. on the conversion. They tend to cause trouble in the
convergence area rather than by giving wrong answers. They can still give bad answers if, for
example, the reverse reactions are not included and conditions are set such that they go
beyond equilibrium conversions. Another concern comes if the conditions change so much
that the basic assumptions (complete mixing in the CSTR and turbulent velocity profile in the
PFR) fail.

Non-thermodynamic Unit Operations

Another area that gets novice engineers into trouble is the use of non-thermodynamic unit
operations. These units, available in most commercial simulators, allow the user to define the
products from a unit operation without any rigorous calculation based on first principles.
They are very useful tools for simulation experts, allowing them to force
their results to match plant data even if the underlying thermodynamics are incapable of modeling the
system. This also allows them to model unit operations in their process that are not available in the
simulator and for which they do not have an in-house model that they could link into the software.

The problem these unit operations represent is the same one that is encountered when using
KDATA. As long as the user knows the correct answer for the system, these units can be used and can
be a great aid to productivity, as they allow the user to ignore details of the flowsheet that are not
important to them. Problems often arise, however, when simulations containing these kinds of constructs
are turned over to engineers other than the original creator.

Recycle Convergence and Mass Balance Issues

Another area where novices get into trouble is in the area of flowsheet convergence. There is an
excellent 1994 article on this topic by Schad (5). It deals in considerable detail with some of the
problems that can be encountered when solving flowsheets and gives advice on how to deal with those
problems. Some highlights that are discussed are:

1. Mass balance problems associated with large recycle streams and nested recycle streams.

Simulators solve recycle problems by breaking the recycles (called tear streams) and iterating on
them until they match within a certain tolerance. If a component has a large quantity in a recycle but
is only present in small amounts in the feeds and products, the solution of the tear stream can be
within tolerance but still cause the overall mass balance for the component to be off. The simulator
believes it is solved but it is not in mass balance.
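This failure mode can be reproduced in a few lines with successive substitution on a single tear stream. The flowsheet below is just a mixer plus a splitter: a trace component enters at 0.01 kmol/h, 99.99 % of the mixed stream is recycled, and the tear is declared converged at a typical 1e-4 relative tolerance (all numbers hypothetical):

```python
feed = 0.01   # kmol/h of a trace component entering in the fresh feed
split = 1e-4  # fraction of the mixed stream leaving as product
tol = 1e-4    # relative tear-stream tolerance (a typical default)

recycle, iterations = 0.0, 0
while True:
    new_recycle = (1.0 - split) * (feed + recycle)  # one pass around the loop
    iterations += 1
    converged = abs(new_recycle - recycle) <= tol * max(new_recycle, 1e-30)
    recycle = new_recycle
    if converged:
        break

product = split * (feed + recycle)
true_recycle = feed * (1.0 - split) / split  # exact steady state: 99.99 kmol/h

# The tear passed its tolerance, yet the component balance is badly violated:
print(iterations, round(recycle, 2), round(product / feed, 3))
```

At the “converged” point the recycle holds only about half of its true steady-state value, so only about half of the fed component leaves in the product: the simulator reports success while the overall balance for this component is off by roughly a factor of two.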

2. Failure to solve due to poor specifications on units, etc.

Specifications which reset the product streams of units in a recycle loop can result in a convergence
where the loop is actually out of balance. These types of specifications should not be used.

3. Wasted effort spent solving flowsheets due to poor understanding of recycle issues.
Good estimates of the compositions and flowrates of the tear streams greatly increase the probability of a
correct solution to a recycle problem. Zero-rate streams often cause failures of units within a loop in
a manner where the eventual recovery is in doubt.

Engineering Experience

So how does engineering experience alleviate the problems that we have been describing? The
short answer is that experienced simulation engineers have encountered these problems in the past
and, once burned, will not make the same mistakes a second time. In addition,
these people usually have a much stronger grounding in the “common sense” part of engineering that
they have accumulated over the years. This experience allows them to spot anomalous results that might
slip past a less experienced engineer. In their particular area of specialization, they will have very good
intuition about the reliability of the results obtained from a simulator and will be able to suggest further
analysis for results that don’t appear to follow expected trends etc.

None of these things is a “magic bullet” that will prevent misuse of a simulator but, in the large
majority of cases, problems will be discovered and corrected before they lead to serious financial losses
or unsafe situations.

How to Manage This Environment

So what does all this mean to a manager who is expected to produce more work with less staff
and less experienced staff? The first thing it means is that staff reductions that reduce the overall level of
experience in these technology areas should be avoided if possible. If it is not possible to keep sufficient
expertise in-house to direct and train young engineers engaged in this type of work, there are still a
number of things you can do to try to manage this problem. Some of the obvious things are:

1. Insist on training for your engineers in the proper use of a simulation tool. If possible, make this
training a part of the contract so that it cannot be budgeted out of existence. Most of the simulation
companies have a large number of training courses that teach things from the most basic use of the
tool to some very advanced concepts.

2. Most simulation companies also offer a certain amount of technical support (possibly unlimited) as
part of their contract with you. Insist that your engineers take full advantage of the support that is
available. This support will usually deal with concrete aspects of the use of the tool, rather than the
issues we are discussing here. By getting past some of the more mechanical problems in setting up
and running a simulation your engineers will be able to focus more of their time on analysis of their
results and their methodology.

3. Insist that your engineers be accountable, perhaps by writing a report about their simulation efforts
that includes discussion of how they dealt with the various issues presented in this paper. They should be
able to explain to you how they developed their thermodynamics package, why they chose the
reactor they chose, what aspects of the structure of the flowsheet could cause convergence and
mass balance problems, what considerations went into determining what variables they were
optimizing, etc. By writing these things down on paper, your young engineers will have to do some
analysis and will hopefully begin to see how their choices ultimately determined the results they
obtained.

4. Lastly, make sure that your engineers realize and accept that the quality of their engineering results
is their responsibility and that this responsibility cannot be abdicated to the simulation tool or the
simulation company.

Summary

Modern simulation tools are extremely powerful and, when used properly by an expert, they can
yield truly amazing productivity enhancements over those not using simulators. Like most powerful
tools, however, they are extremely dangerous when used improperly. Let us reiterate that these “wrong”
answers are not the fault of the simulation tool, they are the result of getting the “correct” answer to a
poorly chosen question. It is critically important to the users of process simulation software that they
understand what the simulators are doing so that they can set up their problems correctly.

Simulation experts are the best way to ensure that the necessary checks are made. If it is not
possible for all of your users to obtain the training and experience necessary to be experts in this area, it
is up to management to make sure that the novices doing simulation work have access to this level of
expertise.

Attempts to re-engineer away this important function may, in the short term, appear to be
successful in reducing the cost of engineering work in this area but, in the long term, the cost of losing
this expertise will be much greater than the short term savings.

References

1. Mathias, P. M. and Klotz, H. C., Chemical Engineering Progress, June 1994, pp. 67-75.
2. Carlson, E. C., Chemical Engineering Progress, October 1996, pp. 35-46.
3. Persichetti, J. M., Coon, J. E., and Twu, C. H., presented at the AIChE Summer Meeting, August
19-22, 1990.
4. Coon, J. E., Twu, C. H., Bluck, D., Cunningham, J. R., and Bondy, R., presented at the AIChE
National Meeting, San Francisco, November 1991.
5. Schad, R. C., Chemical Engineering Progress, December 1994, pp. 68-76.
